
MongoDB Atlas Vector Search and AWS Bedrock modules RAG tutorial

Pavel Duchovny10 min read • Published Jun 24, 2024 • Updated Jun 24, 2024
Learn how to launch a fully managed, end-to-end RAG workflow with MongoDB Atlas and Amazon Bedrock.
Welcome to our in-depth tutorial on MongoDB Atlas Vector Search and AWS Bedrock modules, tailored for creating a versatile database assistant for product catalogs. This tutorial will guide you through building an application that simplifies product searches using diverse inputs such as individual products, lists, images, and even recipes. Imagine finding all the necessary ingredients for a recipe with just a simple search. Whether you're a developer or a product manager, this guide will equip you with the skills to create a powerful tool for navigating complex product databases. Some examples of what this application can do:

Single product search:

Search query: "Organic Almonds"
Result: displays the top-rated or most popular organic almond product in the catalog

List-based search:

Search query: ["Rice", "Black Beans", "Avocado"]
Result: shows a list of products including rice, black beans, and avocados, along with their different brands and prices

Image-based search:

Search query: [image of a whole wheat bread loaf]
Result: identifies and shows the top-picked brand of whole wheat bread available in the catalog

Recipe-based search:

Search query: "Chocolate Chip Cookie Recipe"
Result: lists all ingredients needed for the recipe, like flour, chocolate chips, sugar, butter, etc., and suggests relevant products
Demo Application Search Functionality
Let’s start!

High-level architecture

1. Frontend: a Vue.js application implementing a chat interface
2. Trigger:
  • A trigger watches for inserted “product” documents and runs a function that sets vector embeddings on the product “title,” “img,” or both.
3. App Services backend hosting the HTTPS endpoints that interact with the database and AI models:
  • getSearch — the main search endpoint; receives a search string or base64 image and returns a summarized document
  • getChats — an endpoint to retrieve a user’s chats array
  • saveChats — an endpoint to save the chats array
4. MongoDB Atlas database with a vector search index to retrieve relevant documents for RAG
Main Vector Query Processing Illustration

Deploy a free cluster

Before moving forward, ensure the following prerequisites are met:
  • Database cluster setup on MongoDB Atlas
  • Obtained the URI to your cluster
For assistance with database cluster setup and obtaining the URI, refer to our guide for setting up a MongoDB cluster, and our guide to get your connection string.
Preferably, the database should be located in the same AWS region as the Bedrock-enabled models.
MongoDB Atlas has a rich set of application services that allow a developer to host an entire application's logic (authentication, permissions, functions, triggers, etc.) with a generous free tier. We will leverage this ability to streamline development and data integration in minutes.

Setup app services

1. Start by navigating to the App Services tab.
App services tab
2. You’ll be prompted to select a starter template. Let’s go with the Build your own App option that’s already selected. Click the Next button.
3. Next, you need to configure your application.
  • Data Source: Since we have created a single cluster, Atlas already linked it to our application.
  • (Optional) Application Name: Let’s give our application a meaningful name, such as bedrockDemo. (This option might be chosen for you automatically as "Application-0" for the first application.)
  • (Optional) App Deployment Model: Change the deployment to Single Region and select the region closest to your physical location.
4. Click the Create App Service button to create your first App Services application!
5. Once the application is created, we need to verify data sources are linked to our cluster. Visit the Linked Data Sources tab: Our Atlas cluster with a linked name of mongodb-atlas
Linked DS

Setup secrets and trigger

We will use App Services to create a Value and a Secret holding the AWS access key and secret key used to reach the Bedrock models.
Navigate to the Values tab and click Create New Value, filling in the Value Type, Name, and Value for each entry:
By the end of this process you should have:
Values and Secrets
Once done, press Review Draft & Deploy and then Deploy.

Add the AWS SDK dependency

The AWS SDK Bedrock client is the easiest and most convenient way to interact with AWS Bedrock models.
1. In your app services application, navigate to the Functions tab and click the Dependencies tab.
2. Click Add Dependency and add the following dependency:
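The dependency name is not reproduced here; it is most likely the AWS SDK v3 Bedrock runtime client (package name is an assumption based on AWS SDK naming conventions):

```
@aws-sdk/client-bedrock-runtime
```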
3. Click Add and wait for it to be successfully added.
4. Once done, press Review Draft & Deploy and then Deploy.

Create a trigger

Navigate to Triggers tab and create a new trigger:
Set Embedding Trigger
Trigger Code
Choose Function type and in the dropdown, click New Function. Add a name like setEmbeddings under Function Name.
Copy and paste the following code.
Click Save and Review Draft & Deploy.
Now, we need to set the setEmbeddings function as a SYSTEM function. Click the Functions tab, click the setEmbeddings function, and open its Settings tab. Change Authentication to System and click Save.
System setting on a function
A successful trigger run will produce a collection in our Atlas cluster. Navigate to Data Services > Database and click the Browse Collections button in the cluster view. The database name is bedrock and the collection is products.
Please note that the trigger only runs when we insert data into the bedrock.products collection, and the first run might take a while, so you can watch the Logs section on the App Services side.

Create an Atlas Vector Search index

Let’s move back to the Data Services and Database tabs.
Atlas Search index creation
  1. First, navigate to your cluster’s "Atlas Search" section and press the Create Index button. Atlas Index Creation
  2. Click Create Search Index.
  3. Choose the Atlas Vector Search index and click Next.
  4. Select the "bedrock" database and "products" collection.
  5. Paste the following index definition:
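The index definition is not reproduced here; a plausible one for this setup indexes the embedding field written by the trigger. The 1024 dimensions match the Titan multimodal embedding model's default output size (an assumption; match it to your embedding model's configuration):

```json
{
  "fields": [
    {
      "type": "vector",
      "path": "embedding",
      "numDimensions": 1024,
      "similarity": "cosine"
    }
  ]
}
```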
  6. Click Create and wait for the index to be created.
  7. The index will go through a build phase and eventually appear as "Active." Now, you are ready to write $vectorSearch aggregations against it.
The getSearch HTTPS endpoint, implemented in the next section, already includes a search query.
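The query is not reproduced here; a sketch of what the aggregation likely looks like, with the index name "vector_index" as an assumption:

```javascript
// Build the vector search pipeline used inside getSearch (sketch).
function buildVectorSearchPipeline(queryVector) {
  return [
    {
      $vectorSearch: {
        index: "vector_index", // assumed index name
        path: "embedding",     // field covered by the vector index
        queryVector,           // embedding produced from the user input
        numCandidates: 15,     // candidates considered by the ANN search
        limit: 1,              // keep only the best match
      },
    },
    { $project: { embedding: 0 } }, // drop the bulky vector from the result
  ];
}
```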
With this code, we perform a vector search using the embedding stored in the "doc.embedding" variable against the indexed "embedding" field, and we limit the result set to the single best-matching document.

Set up the backend logic

Our main functionality will rely on a user HTTP endpoint which will orchestrate the logic of the catalog search. The input from the user will be turned into a multimodal embedding via AWS Titan and will be passed to Atlas Vector Search to find the relevant document. The document will be returned to the user along with a prompt that will engineer a response from a Cohere LLM.
Cohere LLM cohere.command-light-text-v14 is part of the AWS Bedrock base model suite.

Create the application search HTTPS endpoint

  1. On the App Services application, navigate to the HTTPS Endpoints section.
  2. Create a new POST endpoint by clicking Add An Endpoint with a path of /getSearch.
  3. Important! Toggle the Response With Result to On.
  4. The logic of this endpoint will get a "term" from the query string and search for that term. If no term is provided, it will return the first 15 results.
getSearch endpoint
  5. Under Function, select New Function (name: getProducts) and add the following function logic:
Click Save Draft and follow the Review Draft & Deploy process. Make sure to keep the HTTPS callback URL, as we will use it in our final chapter when consuming the data from the frontend application.
The URL will usually look something like: <APP-ID>/endpoint/getSearch
Make sure that the function created (e.g., getProducts) is on "SYSTEM" privilege for this demo.
This page can be accessed by going to the Functions tab and looking at the Settings tab of the relevant function.

Import data into Atlas

Now, we will import the data into Atlas from our GitHub repo.
  1. On the Data Services main tab, click your cluster name. Click the Collections tab.
  2. We will start by going into the "bedrock" database and importing the "products" collection.
  3. Click Insert Document or Add My Own Data (if present) and switch to the document view. Paste the content of the "products.json" file from the "data" folder in the repository.
  4. Click Insert and wait for the data to be imported.
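For reference, a product document in this collection might look like the following (field names are assumptions consistent with the trigger, which embeds "title" and "img"):

```json
{
  "title": "Organic Almonds",
  "img": "<base64-encoded product image>",
  "price": 7.99,
  "category": "Nuts"
}
```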

Create endpoints to save and retrieve chats

1. /getChats — will retrieve the chats array from the database
  • Name: getChats
  • Path: /getChats
  • Method: GET
  • Response with Result: Yes
2. /saveChats — will save a chat to the database
  • Name: saveChats
  • Path: /saveChats
  • Method: POST
  • Response with Result: Yes
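The function bodies are not reproduced here. Minimal sketches for the two endpoint functions, assuming a "bedrock.chats" collection and a "username" field (both hypothetical; adapt to your schema):

```javascript
// Build the filter used to look up a user's chat document (pure helper).
function buildChatFilter(username) {
  return { username };
}

// getChats: paste as the GET endpoint's function body.
async function getChatsHandler({ query }) {
  const chats = context.services
    .get("mongodb-atlas")
    .db("bedrock")
    .collection("chats");
  return await chats.findOne(buildChatFilter(query.username));
}

// saveChats: paste as the POST endpoint's function body.
async function saveChatsHandler({ body }) {
  const doc = JSON.parse(body.text());
  const chats = context.services
    .get("mongodb-atlas")
    .db("bedrock")
    .collection("chats");
  return await chats.updateOne(
    buildChatFilter(doc.username),
    { $set: { messages: doc.messages } },
    { upsert: true }
  );
}
```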
Make sure that all the functions created (e.g., getChats, saveChats) are on "SYSTEM" privilege for this demo.
System setting on a function
This page can be accessed by going to the Functions tab and looking at the Settings tab of the relevant function. Finally, click Save Draft and follow the Review Draft & Deploy process.

GitHub Codespaces frontend setup

It’s time to test our back end and data services. We will use the created search HTTPS endpoint to show a simple search page on our data.
You will need to get the HTTPS Endpoint URL we created as part of the App Services setup.

Play with the front end

We will use the GitHub repo to launch Codespaces:
  1. Open the repo in GitHub.
  2. Click the green Code button.
  3. Click the Codespaces tab and + to create a new codespace.

Configure the front end

  1. Create a file called .env in the root of the project.
  2. Add the following to the file:
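The exact contents are not reproduced here. A plausible .env, pointing at the App Services endpoint base URL saved earlier (the variable name is an assumption; check the repo for the name it actually reads):

```
# Base URL of your App Services HTTPS endpoints (variable name is an assumption)
VUE_APP_BASE_APP_SERVICE_URL=<APP-ID>/endpoint
```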

Install the front end

Install serve.
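The commands are not reproduced here; assuming a standard npm project, the install step is likely:

```shell
# install project dependencies
npm install
# install the static file server used to preview the build
npm install -g serve
```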

Build the front end​
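Assuming standard Vue/npm tooling, the build step would be:

```shell
# produce a production build of the frontend
npm run build
```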

Run the front end​
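To serve the production build with serve (the dist output directory is an assumption; use the directory your build produces):

```shell
# serve the built static files
serve dist
```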

Test the front end​

Open the browser to the URL provided by serve (in Codespaces, it appears in a popup).


In summary, this tutorial has equipped you with the technical know-how to leverage MongoDB Atlas Vector Search and AWS Bedrock for building a cutting-edge database assistant for product catalogs. We've delved deep into creating a robust application capable of handling a variety of search inputs, from simple text queries to more complex image and recipe-based searches. As developers and product managers, the skills and techniques explored here are crucial for innovating and improving database search functionalities.
The combination of MongoDB Atlas and AWS Bedrock offers a powerful toolkit for efficiently navigating and managing complex product data. By integrating these technologies into your projects, you’re set to significantly enhance the user experience and streamline the data retrieval process, making every search query more intelligent and results more relevant. Embrace this technology fusion to push the boundaries of what’s possible in database search and management.
If you want to explore more about MongoDB and AI, please refer to our main landing page.
Additionally, if you wish to connect with our community, please visit our community forums.