Get Started with the Amazon Bedrock Knowledge Base Integration
Note
Atlas Vector Search is currently available as a knowledge base only in AWS regions located in the United States.
You can use Atlas Vector Search as a knowledge base for Amazon Bedrock to build generative AI applications and implement retrieval-augmented generation (RAG). This tutorial demonstrates how to start using Atlas Vector Search with Amazon Bedrock. Specifically, you perform the following actions:
Load custom data into an Amazon S3 bucket.
Optionally, configure an endpoint service using AWS PrivateLink.
Create an Atlas Vector Search index on your data.
Create a knowledge base to store data on Atlas.
Create an agent that uses Atlas Vector Search to implement RAG.
Background
Amazon Bedrock is a fully managed service for building generative AI applications. It allows you to leverage foundation models (FMs) from various AI companies through a single API.
You can use Atlas Vector Search as a knowledge base for Amazon Bedrock to store custom data in Atlas and create an agent to implement RAG and answer questions on your data. To learn more about RAG, see Retrieval-Augmented Generation (RAG) with Atlas Vector Search.
Prerequisites
To complete this tutorial, you must have the following:
An Atlas M10+ cluster running MongoDB version 6.0.11, 7.0.2, or later.
An AWS account with a secret that contains credentials to your Atlas cluster (see the sketch after this list for one scripted way to create it).
Access to the following foundation models used in this tutorial:
Titan Embeddings G1 - Text
Anthropic Claude V2.1
The AWS CLI and npm installed if you plan to configure an AWS PrivateLink endpoint service.
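If you haven't already stored your Atlas cluster credentials in AWS Secrets Manager, the following sketch shows one way to create the secret with boto3. The secret name and credential values are placeholders; the secret should contain your Atlas database username and password.

import json
import boto3

# A minimal sketch: store Atlas database credentials in AWS Secrets Manager.
# The secret name and credential values below are placeholders.
secrets = boto3.client("secretsmanager")
secrets.create_secret(
    Name="atlas-cluster-credentials",
    SecretString=json.dumps({
        "username": "<atlasDatabaseUser>",
        "password": "<atlasDatabasePassword>",
    }),
)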
Load Custom Data
If you don't already have an Amazon S3 bucket that contains text data, create a new bucket and load the following publicly accessible PDF about MongoDB best practices:
Download the PDF.
Navigate to the Best Practices Guide for MongoDB.
Click either Read Whitepaper or Email me the PDF to access the PDF.
Download and save the PDF locally.
Upload the PDF to an Amazon S3 bucket.
Follow the steps to create an S3 Bucket. Ensure that you use a descriptive Bucket Name.
Follow the steps to upload a file to your Bucket. Select the file that contains the PDF that you just downloaded.
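If you prefer to script this step, the following sketch creates a bucket and uploads the PDF with boto3. The bucket name and local file path are placeholders.

import boto3

# A minimal sketch: create a bucket and upload the downloaded PDF.
# Bucket names must be globally unique; this one is a placeholder.
s3 = boto3.client("s3")
s3.create_bucket(Bucket="my-bedrock-kb-data")  # outside us-east-1, also pass CreateBucketConfiguration
s3.upload_file(
    "mongodb-best-practices.pdf",   # local path to the PDF you downloaded
    "my-bedrock-kb-data",           # your bucket name
    "mongodb-best-practices.pdf",   # object key in the bucket
)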
Configure an Endpoint Service
By default, Amazon Bedrock connects to your knowledge base over the public internet. To further secure your connection, Atlas Vector Search supports connecting to your knowledge base over a virtual network through an AWS PrivateLink endpoint service.
Optionally, complete the following steps to enable an endpoint service that connects to an AWS PrivateLink private endpoint for your Atlas cluster:
Set up a private endpoint in Atlas.
Follow the steps to set up an AWS PrivateLink private endpoint for your Atlas cluster. Ensure that you use a descriptive VPC ID to identify your private endpoint.
For more information, see Learn About Private Endpoints in Atlas.
Configure the endpoint service.
MongoDB and partners provide a Cloud Development Kit (CDK) that you can use to configure an endpoint service backed by a network load balancer that forwards traffic to your private endpoint.
Follow the steps specified in the CDK GitHub Repository to prepare and run the CDK script.
Create the Atlas Vector Search Index
In this section, you set up Atlas as a vector database, also called a vector store, by creating an Atlas Vector Search index on your collection.
Required Access
To create an Atlas Vector Search index, you must have Project Data Access Admin or higher access to the Atlas project.
Procedure
In Atlas, go to the Clusters page for your project.
If it's not already displayed, select the organization that contains your desired project from the Organizations menu in the navigation bar.
If it's not already displayed, select your desired project from the Projects menu in the navigation bar.
If the Clusters page is not already displayed, click Database in the sidebar.
The Clusters page displays.
Go to the Atlas Search page for your cluster.
You can go to the Atlas Search page from the sidebar, the Data Explorer, or your cluster details page.
From the sidebar:
In the sidebar, click Atlas Search under the Services heading.
From the Select data source dropdown, select your cluster and click Go to Atlas Search.
The Atlas Search page displays.
From the Data Explorer:
Click the Browse Collections button for your cluster.
Expand the database and select the collection.
Click the Search Indexes tab for the collection.
The Atlas Search page displays.
From your cluster details page:
Click the cluster's name.
Click the Atlas Search tab.
The Atlas Search page displays.
Define the Atlas Vector Search index.
Click the Create Search Index button.
Under Atlas Vector Search, select JSON Editor and then click Next.
In the Database and Collection section, find the bedrock_db database and select the test collection.
In the Index Name field, enter vector_index.
Replace the default definition with the following sample index definition and then click Next.
This index definition specifies indexing the following fields in an index of the vectorSearch type:
The embedding field as the vector type. The embedding field contains the vector embeddings created using the embedding model that you specify when you configure the knowledge base. The index definition specifies 1536 vector dimensions and measures similarity using cosine.
The metadata and text_chunk fields as filter types for pre-filtering your data. You specify these fields when you configure the knowledge base.
{
  "fields": [
    {
      "numDimensions": 1536,
      "path": "embedding",
      "similarity": "cosine",
      "type": "vector"
    },
    {
      "path": "metadata",
      "type": "filter"
    },
    {
      "path": "text_chunk",
      "type": "filter"
    }
  ]
}
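If you prefer to define the index programmatically, the following sketch uses PyMongo's create_search_index helper (the type parameter requires a recent PyMongo version). The connection string is a placeholder.

from pymongo import MongoClient
from pymongo.operations import SearchIndexModel

# A minimal sketch: create the vector_index defined above on bedrock_db.test.
# Replace <connectionString> with your Atlas connection string.
client = MongoClient("<connectionString>")
collection = client["bedrock_db"]["test"]

index_model = SearchIndexModel(
    definition={
        "fields": [
            {"type": "vector", "path": "embedding", "numDimensions": 1536, "similarity": "cosine"},
            {"type": "filter", "path": "metadata"},
            {"type": "filter", "path": "text_chunk"},
        ]
    },
    name="vector_index",
    type="vectorSearch",
)
collection.create_search_index(model=index_model)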
Create a Knowledge Base
In this section, you create a knowledge base to load custom data into your vector store.
Navigate to the Amazon Bedrock management console.
Log in to the AWS Console.
In the upper-left corner, click the Services dropdown menu.
Click Machine Learning, and then select Amazon Bedrock.
On the Amazon Bedrock landing page, click Get started.
Manage model access.
Amazon Bedrock doesn't grant access to FMs automatically. If you haven't already, follow the steps to add model access for the Titan Embeddings G1 - Text and Anthropic Claude V2.1 models.
Add a data source.
Specify a name for the data source used by the knowledge base.
Enter the URI for the S3 bucket that contains your data source. Or, click Browse S3 and find the S3 bucket that contains your data source from the list.
Click Next.
Amazon Bedrock displays available embeddings models that you can use to convert your data source's text data into vector embeddings.
Select Titan Embeddings G1 - Text.
Connect Atlas to the Knowledge Base.
In the Vector database section, select Choose a vector store you have created.
Select MongoDB Atlas and configure the following options:
For the Hostname, enter the URL for your Atlas cluster located in its connection string. The hostname uses the following format:
<clusterName>.mongodb.net
For the Database name, enter bedrock_db.
For the Collection name, enter test.
For the Credentials secret ARN, enter the ARN for the secret that contains your Atlas cluster credentials. To learn more, see AWS Secrets Manager concepts.
In the Metadata field mapping section, configure the following options to determine the search index and field names that Atlas uses to embed and store your data source:
For the Vector search index name, enter vector_index.
For the Vector embedding field path, enter embedding.
For the Text field path, enter text_chunk.
For the Metadata field path, enter metadata.
If you configured an endpoint service, enter your PrivateLink Service Name.
Click Next.
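If you prefer to script this configuration, the same settings map to the Amazon Bedrock CreateKnowledgeBase API. The following is a rough sketch with boto3; the role ARN and secret ARN are placeholders for resources in your account.

import boto3

# A rough sketch: create the knowledge base with MongoDB Atlas as the vector store.
# The role ARN and secret ARN are placeholders.
bedrock_agent = boto3.client("bedrock-agent")
bedrock_agent.create_knowledge_base(
    name="mongodb-atlas-knowledge-base",
    roleArn="<knowledgeBaseRoleArn>",
    knowledgeBaseConfiguration={
        "type": "VECTOR",
        "vectorKnowledgeBaseConfiguration": {
            # Titan Embeddings G1 - Text
            "embeddingModelArn": "arn:aws:bedrock:us-east-1::foundation-model/amazon.titan-embed-text-v1",
        },
    },
    storageConfiguration={
        "type": "MONGO_DB_ATLAS",
        "mongoDbAtlasConfiguration": {
            "endpoint": "<clusterName>.mongodb.net",
            "databaseName": "bedrock_db",
            "collectionName": "test",
            "vectorIndexName": "vector_index",
            "credentialsSecretArn": "<credentialsSecretArn>",
            "fieldMapping": {
                "vectorField": "embedding",
                "textField": "text_chunk",
                "metadataField": "metadata",
            },
        },
    },
)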
Sync the data source.
After Amazon Bedrock creates the knowledge base, it prompts you to sync your data. In the Data source section, select your data source and click Sync to sync the data from the S3 bucket and load it into Atlas.
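You can also trigger a sync programmatically with the StartIngestionJob API. In this sketch, both IDs are placeholders that you can copy from the console.

import boto3

# A minimal sketch: start an ingestion job to sync the S3 data source.
# Both IDs are placeholders; find them in the Amazon Bedrock console.
bedrock_agent = boto3.client("bedrock-agent")
bedrock_agent.start_ingestion_job(
    knowledgeBaseId="<knowledgeBaseId>",
    dataSourceId="<dataSourceId>",
)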
When the sync completes, you can view your vector embeddings in the Atlas UI by navigating to the bedrock_db.test collection in your cluster.
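Alternatively, you can verify the embeddings from a script. This sketch assumes the field mappings that you configured earlier and uses a placeholder connection string.

from pymongo import MongoClient

# A minimal sketch: inspect one synced document in bedrock_db.test.
client = MongoClient("<connectionString>")
doc = client["bedrock_db"]["test"].find_one()
print(doc["text_chunk"])       # a chunk of the source PDF text
print(len(doc["embedding"]))   # 1536-dimensional vector embedding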
Create an Agent
In this section, you create an agent that uses Atlas Vector Search to implement RAG and answer questions on your data. When you prompt this agent, it does the following:
Connects to your knowledge base to access the custom data stored in Atlas.
Uses Atlas Vector Search to retrieve relevant documents from your vector store based on the prompt.
Leverages an AI chat model to generate a context-aware response based on these documents.
Complete the following steps to create and test the RAG agent:
Select a model and provide a prompt.
By default, Amazon Bedrock creates a new IAM role to access the agent. In the Agent details section, specify the following:
From the dropdown menus, select Anthropic and Claude V2.1 as the provider and AI model used to answer questions on your data.
Note
Amazon Bedrock doesn't grant access to FMs automatically. If you haven't already, follow the steps to add model access for the Anthropic Claude V2.1 model.
Provide instructions for the agent so that it knows how to complete the task.
For example, if you're using the sample data, paste the following instructions:
You are a friendly AI chatbot that answers questions about working with MongoDB.
Click Save.
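These console steps correspond to the CreateAgent API. The following rough sketch uses boto3 with a placeholder agent name and service role ARN.

import boto3

# A rough sketch: create the agent programmatically.
# The agent name and role ARN are placeholders.
bedrock_agent = boto3.client("bedrock-agent")
bedrock_agent.create_agent(
    agentName="mongodb-rag-agent",
    agentResourceRoleArn="<agentRoleArn>",
    foundationModel="anthropic.claude-v2:1",  # Anthropic Claude V2.1
    instruction="You are a friendly AI chatbot that answers questions about working with MongoDB.",
)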
Add the knowledge base.
To connect the agent to the knowledge base that you created:
In the Knowledge Bases section, click Add.
Select mongodb-atlas-knowledge-base from the dropdown.
Describe the knowledge base to determine how the agent should interact with the data source.
If you're using the sample data, paste the following instructions:
This knowledge base describes best practices when working with MongoDB.
Click Add, and then click Save.
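Equivalently, you can attach the knowledge base with the AssociateAgentKnowledgeBase API, as in this sketch with placeholder IDs.

import boto3

# A minimal sketch: attach the knowledge base to the agent's working draft.
bedrock_agent = boto3.client("bedrock-agent")
bedrock_agent.associate_agent_knowledge_base(
    agentId="<agentId>",
    agentVersion="DRAFT",  # the working draft of the agent
    knowledgeBaseId="<knowledgeBaseId>",
    description="This knowledge base describes best practices when working with MongoDB.",
)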
Test the agent.
Click the Prepare button.
Click Test. Amazon Bedrock displays a testing window to the right of your agent details if it's not already displayed.
In the testing window, enter a prompt. The agent prompts the model, uses Atlas Vector Search to retrieve relevant documents, and then generates a response based on the documents.
If you used the sample data, enter the following prompt. The generated response might vary.
What's the best practice to reduce network utilization with MongoDB?
The best practice to reduce network utilization with MongoDB is to issue updates only on fields that have changed rather than retrieving the entire documents in your application, updating fields, and then saving the document back to the database. [1]
Tip
Click the annotation in the agent's response to view the text chunk that Atlas Vector Search retrieved.
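After you prepare the agent and create an alias for it, you can also invoke it programmatically. The following sketch uses the InvokeAgent API with placeholder agent and alias IDs.

import uuid
import boto3

# A minimal sketch: prompt the agent and stream back its response.
# The agent ID and alias ID are placeholders from your Bedrock console.
runtime = boto3.client("bedrock-agent-runtime")
response = runtime.invoke_agent(
    agentId="<agentId>",
    agentAliasId="<agentAliasId>",
    sessionId=str(uuid.uuid4()),
    inputText="What's the best practice to reduce network utilization with MongoDB?",
)

completion = ""
for event in response["completion"]:   # the response is an event stream
    chunk = event.get("chunk")
    if chunk:
        completion += chunk["bytes"].decode("utf-8")
print(completion)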
Next Steps
MongoDB and partners also provide the following developer resources: