AI Shop: The Power of LangChain, OpenAI, and MongoDB Atlas Working Together

Pavel Duchovny • 7 min read • Published Nov 29, 2023 • Updated Sep 18, 2024
Node.js • AI • Vector Search • JavaScript • Atlas
Building AI applications over the last few months has taken my mind to many different places, inspired by new ideas and new ways of interacting with sources of information. After eight years at MongoDB, I can clearly see MongoDB's potential to power AI applications. Surprisingly, it's the same fundamental reason users have chosen MongoDB and MongoDB Atlas since before the generative AI era: the flexibility of the document model.
Working with unstructured data is not always easy. The data produced by GenAI models is highly unstructured: it can arrive in varied textual formats, as well as sound, images, and even video. Applications are efficient and correct when they can govern and safely predict their data structures and inputs. Therefore, to build successful AI applications, we need a way to turn unstructured data into what we call semi-structured, or flexible, documents.
Once we can fit our data stream into a flexible pattern, we can efficiently utilize this data and provide great features for our users.

RAG as a fundamental approach to building AI applications

In light of this, retrieval-augmented generation (RAG) emerges as a pivotal methodology in the realm of AI development. This approach synergizes the retrieval of information and generative processes to refine the quality and relevance of AI outputs. By leveraging the document model flexibility inherent to MongoDB and MongoDB Atlas, RAG can dynamically incorporate a vast array of unstructured data, transforming it into a more manageable semi-structured format. This is particularly advantageous when dealing with the varied and often unpredictable data produced by AI models, such as textual outputs, auditory clips, visual content, and video sequences.
MongoDB's prowess lies in its ability to act as a robust backbone for RAG processes, ensuring that AI applications can not only accommodate but also thrive on the diversity of generative AI data streams. The integration of MongoDB Atlas with features like vector search and the linguistic capabilities of LangChain, detailed in RAG with Atlas Vector Search, LangChain, and OpenAI, exemplifies the cutting-edge potential of MongoDB in harnessing the full spectrum of AI-generated content. This seamless alignment between data structuring and AI innovation positions MongoDB as an indispensable asset in the GenAI era, unlocking new horizons for developers and users alike.
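The retrieve-augment-generate loop described above can be sketched in a few lines of plain JavaScript. The `retrieve`, `buildPrompt`, and `generate` functions below are hypothetical stand-ins, not the article's actual LangChain/OpenAI calls; in a real application, retrieval would be an Atlas Vector Search query and generation an LLM call.

```javascript
// Minimal RAG loop sketch with stubbed retrieval and generation.
// All names and data here are illustrative.
const knowledgeBase = [
  { text: "MongoDB Atlas supports vector search via $vectorSearch." },
  { text: "LangChain provides output parsers for structured LLM responses." },
];

// Retrieval step: in a real app, an Atlas Vector Search query.
function retrieve(query) {
  return knowledgeBase.filter((doc) =>
    doc.text.toLowerCase().includes(query.toLowerCase())
  );
}

// Augmentation step: fold the retrieved context into the prompt.
function buildPrompt(query, docs) {
  const context = docs.map((d) => d.text).join("\n");
  return `Context:\n${context}\n\nQuestion: ${query}`;
}

// Generation step: a real app would call an LLM here.
function generate(prompt) {
  return `Answer based on: ${prompt}`;
}

const query = "vector search";
const answer = generate(buildPrompt(query, retrieve(query)));
```

The point of the sketch is the shape of the flow, not the stubs: retrieval narrows the context, the prompt carries that context, and generation never sees data the retrieval step didn't select.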

Instruct to struct unstructured AI structures

To demonstrate the abilities of GenAI models like OpenAI's chat and image generation, I decided to build a small grocery store app that provides a catalog of products to the user. Online grocery shopping now makes up a major portion of worldwide shopping habits, and I bet almost all readers have used such a store.
However, I wanted to take the user experience to another level by providing a chatbot that anticipates users' grocery requirements, whether from predefined lists, casual text exchanges, or specific recipe inquiries like "I need to cook a lasagne, what should I buy?"
AI Shop UI
The stack I decided to use is:
  • A MongoDB Atlas cluster to store products, categories, and orders.
  • Atlas search indexes to power vector search (semantic search based on meaning).
  • Express + LangChain to orchestrate my AI tasks.
  • The OpenAI platform API (GPT-4 and GPT-3.5) as my AI engine.
RAG-AI-Shop
I quickly realized that in any AI application I build, I want to control how inputs are passed to the model and how its outputs are produced, at least at the level of their template structure.
So in the store query, I want the user to provide a request and the AI to produce a list of potential groceries.
Since I don't know how many ingredients there are, or what their categories and types are, the template needs to be flexible enough to describe the list in a way my application can safely traverse further down the search pipeline.
The structure I decided to use is:
```javascript
const schema = z.object({
  "shopping_list": z.array(z.object({
    "product": z.string().describe("The name of the product"),
    "quantity": z.number().describe("The quantity of the product"),
    "unit": z.string().optional(),
    "category": z.string().optional(),
  })),
}).deepPartial();
```
I used the zod package, which LangChain recommends for describing the expected schema. Since shopping_list is an array of objects, it can host N entries filled in by the AI; however, their structure is strictly predictable.
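To make the guarantee concrete, here is a hand-rolled check of the same shape the zod schema enforces. This is illustrative only — the article relies on zod's parser, and because the real schema is wrapped in `deepPartial()`, every field is optional there; this sketch checks the fully populated shape.

```javascript
// Illustrative validator mirroring the zod schema's shape
// (the real code uses zod's StructuredOutputParser instead).
function isValidShoppingList(doc) {
  if (!doc || !Array.isArray(doc.shopping_list)) return false;
  return doc.shopping_list.every(
    (item) =>
      typeof item.product === "string" &&
      typeof item.quantity === "number" &&
      (item.unit === undefined || typeof item.unit === "string") &&
      (item.category === undefined || typeof item.category === "string")
  );
}

// Sample payloads (made up for illustration).
const good = { shopping_list: [{ product: "parmesan", quantity: 1, unit: "pack" }] };
const bad = { shopping_list: [{ product: "parmesan", quantity: "one" }] };
```

Because each entry is predictable, the rest of the pipeline can index into `product` and `quantity` without defensive parsing.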
Additionally, I don't want the AI engine to suggest ingredients or products far from the categories my shop sells. For example, if a user requests a bicycle from a grocery store, the model needs enough context to know there is nothing reasonable to offer. Therefore, the relevant categories stored in the database have to be provided as context to the model.
```javascript
// Initialize OpenAI instance
const llm = new OpenAI({
  openAIApiKey: process.env.OPEN_AI_KEY,
  modelName: "gpt-4",
  temperature: 0
});

// Create a structured output parser using the Zod schema
const outputParser = StructuredOutputParser.fromZodSchema(schema);
const formatInstructions = outputParser.getFormatInstructions();

// Create a prompt template
const prompt = new PromptTemplate({
  template: "Build a user grocery list in English as best as possible, if all the products does not fit the categories output empty list, however if some does add only those. \n{format_instructions}\n possible category {categories}\n{query}. Don't output the schema just the json of the list",
  inputVariables: ["query", "categories"],
  partialVariables: { format_instructions: formatInstructions },
});
```
We take advantage of the LangChain library to turn the schema into a set of instructions and to produce an engineered prompt consisting of the category documents fetched from our database and the extraction instructions.
The user's query can be free-form, yet the response is forced into a schema our application understands. The rest of the code only needs to validate and access the well-formatted list of products provided by the LLM.
```javascript
// Fetch all categories from the database
const categories = await db.collection('categories').find({}, { "_id": 0 }).toArray();
const docs = categories.map((category) => category.categoryName);

// Format the input prompt
const input = await prompt.format({
  query: query,
  categories: docs
});

// Call the OpenAI model
const response = await llm.call(input);
const responseDoc = await outputParser.parse(response);

let shoppingList = responseDoc.shopping_list;
// Embed the shopping list
shoppingList = await placeEmbeddings(shoppingList);
```
Here is an example of how this list might look: a document with embeddings.
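Since the screenshot isn't reproduced here, one entry of the embedded list looks roughly like the object below. All field values are made up for illustration, and the `embeddings` array is truncated — a real OpenAI text-embedding-ada-002 vector has 1,536 dimensions.

```javascript
// Illustrative shape of one shopping-list entry after embedding.
// Values are invented; the embeddings array is truncated for display.
const entry = {
  product: "parmesan",
  quantity: 1,
  unit: "pack",
  category: "Dairy",
  embeddings: [0.0123, -0.0456, 0.0789], // really 1,536 floats
};
```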

LLM to embeddings

A structured, flexible list like this allows me to create embeddings for each of the terms the LLM found relevant to the user input and to my shop's categories.
For simplicity, I am only going to embed the product name.
```javascript
const placeEmbeddings = async (documents) => {
  const embeddedDocuments = documents.map(async (document) => {
    const embeddedDocument = await embeddings.embedQuery(document.product);
    document.embeddings = embeddedDocument;
    return document;
  });
  return Promise.all(embeddedDocuments);
};
```
In real-life applications, attributes like quantity or unit could additionally drive inventory search filtering.
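For example, a `$vectorSearch` stage can pre-filter candidates on such attributes before scoring by similarity. The index name, the `inStock` field, and the query vector below are assumptions for illustration; filtered fields must be declared as filter types in the Atlas Vector Search index definition.

```javascript
// Sketch of a $vectorSearch stage with a pre-filter on a stock field.
// "vector_index", "inStock", and the vector values are illustrative.
const filteredVectorSearch = {
  $vectorSearch: {
    index: "vector_index",
    path: "embeddings",
    queryVector: [0.01, 0.02, 0.03], // a real query embedding goes here
    numCandidates: 20,
    limit: 3,
    filter: { inStock: { $eq: true } },
  },
};
```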
From this point, coding an aggregation that fetches three candidates for each product is straightforward: a vector search for each item, connected via a union with the next item until the end of the list.

Embeddings to aggregation

```javascript
[ { $vectorSearch: /* product 1 (vector, 3 alternatives) */ },
  { $unionWith: { $search: /* product 2 ... */ } },
  { $unionWith: { $search: /* product 3 ... */ } } ]
```
Finally, I will reshape the data so each term has an array of its three candidates, to make the frontend coding simpler.
```javascript
[ { searchTerm: "parmesan",
    products: [ /* parmesan 1 */ /* parmesan 2 */ /* Mascarpone */ ] },
  ...
]
```
Here's my Node.js server-side code for building the vector search:
```javascript
const aggregationQuery = [
  { "$vectorSearch": {
      "index": "default",
      "queryVector": shoppingList[0].embeddings,
      "path": "embeddings",
      "numCandidates": 20,
      "limit": 3
    }
  },
  { $addFields: { "searchTerm": shoppingList[0].product } },
  ...shoppingList.slice(1).map((item) => ({
    $unionWith: {
      coll: "products",
      pipeline: [
        {
          "$search": {
            "index": "default",
            "knnBeta": {
              "vector": item.embeddings,
              "path": "embeddings",
              "k": 20
            }
          }
        },
        { $limit: 3 },
        { $addFields: { "searchTerm": item.product } }
      ]
    }
  })),
  { $group: { _id: "$searchTerm", products: { $push: "$$ROOT" } } },
  { $project: { "_id": 0, "category": "$_id", "products.title": 1, "products.description": 1, "products.emoji": 1, "products.imageUrl": 1, "products.price": 1 } }
];
```
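To see what the final `$group` stage does to the union's flat result stream, the same reshaping can be sketched in plain JavaScript. The sample documents are invented for illustration; the server, of course, lets MongoDB do this work inside the pipeline.

```javascript
// Plain-JS equivalent of the pipeline's $group stage: bucket the
// flat result stream by searchTerm (sample docs are illustrative).
function groupBySearchTerm(results) {
  const buckets = {};
  for (const doc of results) {
    const { searchTerm, ...product } = doc;
    if (!buckets[searchTerm]) buckets[searchTerm] = [];
    buckets[searchTerm].push(product);
  }
  return Object.entries(buckets).map(([category, products]) => ({
    category,
    products,
  }));
}

const flat = [
  { searchTerm: "parmesan", title: "Parmesan Wedge" },
  { searchTerm: "parmesan", title: "Grana Padano" },
  { searchTerm: "basil", title: "Fresh Basil" },
];
const grouped = groupBySearchTerm(flat);
```

Each search term ends up with its own bucket of candidate products, which is exactly the shape the frontend consumes.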

The process

The process we presented here can be applied to a massive number of use cases. Let's reiterate it according to the chart below (RAG AI diagram). In this context, we have enriched our product catalog with embeddings on the title/description of the products. We've also provided the categories and structuring instructions as context to engineer our prompt. Finally, we piped the prompt through the LLM, which creates a manageable list that can be transformed into answers and follow-up questions.
Embedding LLM results can create a chain of semantic searches whose results can be piped back to LLMs or manipulated smartly by the robust aggregation framework.
Eventually, data becomes clay we can shape and morph with powerful LLMs, combined with aggregation pipelines, to add relevance and compute power to our applications.
For the full example and step-by-step tutorial to set up the demo grocery store, use the GitHub project.

Summary

In conclusion, the journey of integrating AI with MongoDB showcases the transformative impact of combining generative AI capabilities with MongoDB's dynamic data model. The flexibility of MongoDB's document model has proven to be the cornerstone for managing the unpredictable nature of AI-generated data, paving the way for innovative applications that were previously inconceivable. Through the use of structured schemas, vector searches, and the powerful aggregation framework, developers can now craft AI-powered applications that not only understand and predict user intent but also offer unprecedented levels of personalization and efficiency.
The case study of the grocery store app exemplifies the practical application of these concepts, illustrating how a well-structured data approach can lead to more intelligent and responsive AI interactions. MongoDB stands out as an ideal partner for AI application development, enabling developers to structure, enrich, and leverage unstructured data in ways that unlock new possibilities.
As we continue to explore the synergy between MongoDB and AI, it is evident that the future of application development lies in our ability to evolve data management techniques that can keep pace with the rapid advancements in AI technology. MongoDB's role in this evolution is indispensable, as it provides the agility and power needed to turn the challenges of unstructured data into opportunities for innovation and growth in the GenAI era.
Want to continue the conversation? Meet us over in the MongoDB Developer Community.
