Build a Local RAG Implementation with Atlas Vector Search
This tutorial demonstrates how to implement retrieval-augmented generation (RAG) locally, without the need for API keys or credits. To learn more about RAG, see Retrieval-Augmented Generation (RAG) with Atlas Vector Search.
Specifically, you perform the following actions:
Create a local Atlas deployment.
Set up the environment.
Use a local embedding model to generate vector embeddings.
Create an Atlas Vector Search index on your data.
Deploy a local LLM to answer questions on your data.
Background
To complete this tutorial, you can either create a local Atlas deployment by using the Atlas CLI or deploy a cluster on the cloud. The Atlas CLI is the command-line interface for MongoDB Atlas, and you can use the Atlas CLI to interact with Atlas from the terminal for various tasks, including creating local Atlas deployments. To learn more, see Manage Local and Cloud Deployments from the Atlas CLI.
Note
Local Atlas deployments are intended for testing only. For production environments, deploy a cluster.
You also use the following open-source models in this tutorial:
mxbai-embed-large-v1 embedding model (if you're using JavaScript) or bge-large-en-v1.5 embedding model (if you're using Python)
Mistral 7B generative model
There are several ways to download and deploy LLMs locally. In this tutorial, you download the Mistral 7B model by using GPT4All, an open-source ecosystem for local LLM development.
This tutorial also uses LangChain, a popular open-source LLM framework, to connect to these models and integrate them with Atlas Vector Search. If you prefer different models or a different framework, you can adapt this tutorial by replacing the model names and LangChain-specific components with their equivalents for your preferred setup.
To learn more about how to leverage LangChain in your RAG applications, see Get Started with the LangChain Integration. To learn more about other frameworks you can use with Atlas Vector Search, see Integrate Vector Search with AI Technologies.
Prerequisites
To complete this tutorial, you must have the following:
The Atlas CLI installed and running v1.14.3 or later.
If you're using JavaScript, you also need the following:
A Hugging Face Access Token with read access.
Git Large File Storage installed.
A terminal and code editor to run your Node.js project.
npm and Node.js installed.
If you're using Python, you also need the following:
An interactive Python notebook that you can run locally. You can run interactive Python notebooks in VS Code. Ensure that your environment runs Python v3.10 or later.
Note
If you use a hosted service such as Colab, ensure that you have enough RAM to run this tutorial. Otherwise, you might experience performance issues.
Create a Local Atlas Deployment
In this section, you create a local Atlas deployment to use as a vector database. If you have an Atlas cluster running MongoDB version 6.0.11, 7.0.2, or later with the sample data loaded, you can skip this step.
To create the local deployment:
Connect from the Atlas CLI.
In your terminal, run atlas auth login to authenticate with your Atlas login credentials. To learn more, see Connect from the Atlas CLI.
Note
If you don't have an existing Atlas account, run atlas setup or create a new account.
Create a local deployment by using the Atlas CLI.
Run atlas deployments setup and follow the prompts to create a local deployment.
For detailed instructions, see Create a Local Atlas Deployment.
Load the sample data into your deployment.
Run the following command in your terminal to download the sample data:

curl https://atlas-education.s3.amazonaws.com/sampledata.archive -o sampledata.archive

Run the following command to load the data into your deployment, replacing <port-number> with the port where you're hosting the deployment:

mongorestore --archive=sampledata.archive --port=<port-number>
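Optionally, you can confirm that the restore completed by connecting to the deployment with mongosh and counting the documents in the collection this tutorial uses. This verification step isn't part of the tutorial's required steps; replace <port-number> with your deployment's port:

mongosh --port <port-number>

use sample_airbnb
db.listingsAndReviews.countDocuments()

If the count is greater than zero, the sample data loaded successfully.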
Set Up the Environment
In this section, you set up the environment for this tutorial. Create a project, install the required packages, and define a connection string:
Create a .env file.
In your project, create a .env file to store your connection string.

ATLAS_CONNECTION_STRING = "<connection-string>"

Replace the <connection-string> placeholder value with your Atlas connection string.
If you're using a local Atlas deployment, your connection string follows this format, replacing <port-number> with the port for your local deployment:

ATLAS_CONNECTION_STRING = "mongodb://localhost:<port-number>/?directConnection=true"

If you're using an Atlas cluster, your connection string follows this format, replacing <connection-string> with your Atlas cluster's SRV connection string:

ATLAS_CONNECTION_STRING = "<connection-string>"
Note
Your connection string should use the following format:
mongodb+srv://<db_username>:<db_password>@<clusterName>.<hostname>.mongodb.net
Note
Minimum Node.js Version Requirements
Node.js v20.x introduced the --env-file option. If you are using an older version of Node.js, add the dotenv package to your project, or use a different method to manage your environment variables.
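For example, if you choose the dotenv approach, a minimal sketch (assuming you install the package with npm install dotenv) is to load the variables at the top of each script that reads process.env:

// Load variables from .env into process.env before any other code runs
import 'dotenv/config';

You can then run the script without the --env-file flag, for example node generate-embeddings.js.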
In this section, you set up the environment for this tutorial. Create an interactive Python notebook by saving a file with the .ipynb extension, and then run the following code snippets in the notebook.
Define your Atlas connection string.
If you're using a local Atlas deployment, run the following code in your notebook, replacing <port-number> with the port for your local deployment:

ATLAS_CONNECTION_STRING = ("mongodb://localhost:<port-number>/?directConnection=true")

If you're using an Atlas cluster, run the following code in your notebook, replacing <connection-string> with your Atlas cluster's SRV connection string:

ATLAS_CONNECTION_STRING = ("<connection-string>")
Note
Your connection string should use the following format:
mongodb+srv://<db_username>:<db_password>@<clusterName>.<hostname>.mongodb.net
Generate Embeddings with a Local Model
In this section, you load an embedding model locally and create vector embeddings using data from the sample_airbnb database, which contains a single collection called listingsAndReviews.
Download the local embedding model.
This example uses the mixedbread-ai/mxbai-embed-large-v1 model from the Hugging Face model hub. The simplest method to download the model files is to clone the repository using Git with Git Large File Storage. Hugging Face requires a user access token or Git over SSH to authenticate your request to clone the repository.
Clone with your Hugging Face username and user access token:

git clone https://<your-hugging-face-username>:<your-hugging-face-user-access-token>@huggingface.co/mixedbread-ai/mxbai-embed-large-v1

Alternatively, clone with Git over SSH:

git clone git@hf.co:mixedbread-ai/mxbai-embed-large-v1
Tip
Git Large File Storage
The Hugging Face model files are large and require Git Large File Storage (git-lfs) to clone the repositories. If you see errors related to large file storage, ensure that you have installed git-lfs.
Get the local path to the model files.
Get the path to the local model files on your machine. This is the parent directory that contains the git repository you just cloned. If you cloned the model repository inside the project directory you created for this tutorial, the parent directory path should resemble:
/Users/<username>/local-rag-mongodb
Check the model directory and make sure it contains an onnx directory that has a model_quantized.onnx file:

cd mxbai-embed-large-v1/onnx
ls

model.onnx model_fp16.onnx model_quantized.onnx
Generate embeddings.
Navigate back to the local-rag-mongodb parent directory.
Create a file called get-embeddings.js, and paste the following code into it:

import { env, pipeline } from '@xenova/transformers';

// Function to generate embeddings for given data
export async function getEmbeddings(data) {
    // Replace this path with the parent directory that contains the model files
    env.localModelPath = '/Users/<username>/local-rag-mongodb/';
    env.allowRemoteModels = false;

    const task = 'feature-extraction';
    const model = 'mxbai-embed-large-v1';

    const embedder = await pipeline(
        task, model);
    const results = await embedder(data, { pooling: 'mean', normalize: true });
    return Array.from(results.data);
}
Replace '/Users/<username>/local-rag-mongodb/' with the local path from the prior step.
Create another file called generate-embeddings.js and paste the following code into it:

1  import { MongoClient } from 'mongodb';
2  import { getEmbeddings } from './get-embeddings.js';
3
4  async function run() {
5      const client = new MongoClient(process.env.ATLAS_CONNECTION_STRING);
6
7      try {
8          // Connect to your local MongoDB deployment
9          await client.connect();
10         const db = client.db("sample_airbnb");
11         const collection = db.collection("listingsAndReviews");
12
13         const filter = { '$and': [
14             { 'summary': { '$exists': true, '$ne': null } },
15             { 'embeddings': { '$exists': false } }
16         ]};
17
18         // This is a long-running operation for all docs in the collection,
19         // so we limit the docs for this example
20         const cursor = collection.find(filter).limit(50);
21
22         // To verify that you have the local embedding model configured properly,
23         // try generating an embedding for one document
24         const firstDoc = await cursor.next();
25         if (!firstDoc) {
26             console.log('No document found.');
27             return;
28         }
29
30         const firstDocEmbeddings = await getEmbeddings(firstDoc.summary);
31         console.log(firstDocEmbeddings);
32
33         // After confirming you are successfully generating embeddings,
34         // uncomment the following code to generate embeddings for all docs.
35         /* cursor.rewind(); // Reset the cursor to process documents again
36          * console.log("Generating embeddings for documents. Standby.");
37          * let updatedDocCount = 0;
38          *
39          * for await (const doc of cursor) {
40          *     const text = doc.summary;
41          *     const embeddings = await getEmbeddings(text);
42          *     await collection.updateOne({ "_id": doc._id },
43          *         {
44          *             "$set": {
45          *                 "embeddings": embeddings
46          *             }
47          *         }
48          *     );
49          *     updatedDocCount += 1;
50          * }
51          * console.log("Count of documents updated: " + updatedDocCount);
52          */
53     } catch (err) {
54         console.log(err.stack);
55     }
56     finally {
57         await client.close();
58     }
59 }
60 run().catch(console.dir);

This code includes a few lines to test that you have correctly downloaded the model and are using the correct path. Run the following command to execute the code:
node --env-file=.env generate-embeddings.js

Tensor {
  dims: [ 1, 1024 ],
  type: 'float32',
  data: Float32Array(1024) [
    -0.01897735893726349, -0.001120976754464209, -0.021224822849035263,
    -0.023649735376238823, -0.03350808471441269, -0.0014186901971697807,
    -0.009617107920348644, 0.03344292938709259, 0.05424851179122925,
    -0.025904450565576553, 0.029770011082291603, -0.0006215018220245838,
    ...
    ... 924 more items
  ],
  size: 1024
}

After you have confirmed you are successfully generating embeddings with the local model, uncomment the code in lines 35-52 to generate embeddings for all the documents in the collection. Save the file.
Then, run the command to execute the code:
node --env-file=.env generate-embeddings.js
The following code snippet performs the following actions:
Establishes a connection to your local Atlas deployment or your Atlas cluster and the sample_airbnb.listingsAndReviews collection.
Loads the bge-large-en-v1.5 embedding model from LangChain's HuggingFaceEmbeddings library.
Creates a filter to include only documents that have a summary field and don't have an embeddings field.
For each document in the collection that satisfies the filter:
Generates an embedding from the document's summary field by using the bge-large-en-v1.5 embedding model.
Updates the document by creating a new field called embeddings that contains the embedding.
Run the following code in your notebook:
from langchain_huggingface import HuggingFaceEmbeddings
from pymongo import MongoClient

# Connect to your local Atlas deployment or Atlas Cluster
client = MongoClient(ATLAS_CONNECTION_STRING)

# Select the sample_airbnb.listingsAndReviews collection
collection = client["sample_airbnb"]["listingsAndReviews"]

# Specify the local embedding model (https://huggingface.co/BAAI/bge-large-en-v1.5)
model = HuggingFaceEmbeddings(model_name="BAAI/bge-large-en-v1.5")

# Filter for only documents with a summary field and without an embeddings field
filter = { '$and': [ { 'summary': { '$exists': True, '$ne': None } }, { 'embeddings': { '$exists': False } } ] }

count = 0
totalCount = collection.count_documents(filter)

# Create embeddings and update the documents in the collection
for document in collection.find(filter):
    text = document['summary']
    embedding = model.embed_query(text)
    collection.update_one({ '_id': document['_id'] }, { "$set": { 'embeddings': embedding } }, upsert = True)
    count += 1
    print("Documents updated: {}/{}".format(count, totalCount))
This code takes some time to run. After it's finished, you can connect to your local deployment from mongosh or your application by using your deployment's connection string. Then, to view your vector embeddings, run read operations on the sample_airbnb.listingsAndReviews collection.
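For example, a quick spot check from mongosh might look like the following sketch, which finds one document that has the new field and uses a $slice projection so only the first few array values print (the database, collection, and field names come from this tutorial; adjust the port for your deployment):

mongosh --port <port-number>

use sample_airbnb
db.listingsAndReviews.findOne(
  { embeddings: { $exists: true } },
  { summary: 1, embeddings: { $slice: 5 } }
)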
This code takes some time to run. After it's finished, you can view your vector embeddings in the Atlas UI by navigating to the sample_airbnb.listingsAndReviews collection in your cluster and expanding the fields in a document.
Note
Convert the embeddings in the sample data to BSON vectors for efficient storage and ingestion of vectors in Atlas. To learn more, see how to convert native embeddings to BSON vectors.
Create the Atlas Vector Search Index
To enable vector search on the sample_airbnb.listingsAndReviews collection, create an Atlas Vector Search index.
If you're using a local Atlas deployment, complete the following steps:
Define the Atlas Vector Search index.
Create a file named vector-index.json and paste the following index definition in the file.
This index definition specifies indexing the embeddings field in an index of the vectorSearch type for the sample_airbnb.listingsAndReviews collection. This field contains the embeddings created using the embedding model. The index definition specifies 1024 vector dimensions and measures similarity using cosine.

{
  "database": "sample_airbnb",
  "collectionName": "listingsAndReviews",
  "type": "vectorSearch",
  "name": "vector_index",
  "fields": [
    {
      "type": "vector",
      "path": "embeddings",
      "numDimensions": 1024,
      "similarity": "cosine"
    }
  ]
}
Note
To create an Atlas Vector Search index, you must have Project Data Access Admin or higher access to the Atlas project.
If you're using an Atlas cluster, complete the following steps:
Define the Atlas Vector Search index.
Create a file named vector-index.js and paste the following code in the file:

import { MongoClient } from 'mongodb';

// Connect to your Atlas deployment
const client = new MongoClient(process.env.ATLAS_CONNECTION_STRING);

async function run() {
    try {
        const database = client.db("sample_airbnb");
        const collection = database.collection("listingsAndReviews");

        // Define your Atlas Vector Search index
        const index = {
            name: "vector_index",
            type: "vectorSearch",
            definition: {
                "fields": [
                    {
                        "type": "vector",
                        "numDimensions": 1024,
                        "path": "embeddings",
                        "similarity": "cosine"
                    }
                ]
            }
        }

        // Call the method to create the index
        const result = await collection.createSearchIndex(index);
        console.log(result);
    } finally {
        await client.close();
    }
}

run().catch(console.dir);
This index definition specifies indexing the embeddings field in an index of the vectorSearch type for the sample_airbnb.listingsAndReviews collection. This field contains the embeddings created using the embedding model. The index definition specifies 1024 vector dimensions and measures similarity using cosine.
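To create the index, run the file with the same pattern as the earlier scripts, for example node --env-file=.env vector-index.js. A newly created index can take a short time to become queryable. If you want to wait for it programmatically, the following sketch polls the index status; it assumes the Node.js driver's collection.listSearchIndexes() helper and a queryable field on the returned index documents:

import { MongoClient } from 'mongodb';

// Poll until the "vector_index" search index reports that it is queryable.
async function waitForIndex() {
    const client = new MongoClient(process.env.ATLAS_CONNECTION_STRING);
    try {
        const collection = client.db("sample_airbnb").collection("listingsAndReviews");
        let queryable = false;
        while (!queryable) {
            // listSearchIndexes(name) returns a cursor of matching index documents;
            // the queryable field is an assumption to verify against your driver version
            const indexes = await collection.listSearchIndexes("vector_index").toArray();
            queryable = indexes.length > 0 && indexes[0].queryable;
            if (!queryable) {
                console.log("Waiting for vector_index to become queryable...");
                await new Promise(resolve => setTimeout(resolve, 5000));
            }
        }
        console.log("vector_index is ready to query.");
    } finally {
        await client.close();
    }
}

waitForIndex().catch(console.dir);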
You can create the index directly from your application with the PyMongo driver. Paste and run the following code in your notebook:
from pymongo.operations import SearchIndexModel

# Create your index model, then create the search index
search_index_model = SearchIndexModel(
    definition = {
        "fields": [
            {
                "type": "vector",
                "numDimensions": 1024,
                "path": "embeddings",
                "similarity": "cosine"
            }
        ]
    },
    name = "vector_index",
    type = "vectorSearch"
)
collection.create_search_index(model=search_index_model)
This index definition specifies indexing the embeddings field in an index of the vectorSearch type for the sample_airbnb.listingsAndReviews collection. This field contains the embeddings created using the embedding model. The index definition specifies 1024 vector dimensions and measures similarity using cosine.
Answer Questions with a Local LLM
This section demonstrates a sample RAG implementation that you can run locally by using Atlas Vector Search and GPT4All.
Query the database for relevant documents.
Create a file called retrieve-documents.js and paste the following code into it:
import { MongoClient } from 'mongodb';
import { getEmbeddings } from './get-embeddings.js';

// Function to get the results of a vector query
export async function getQueryResults(query) {
    // Connect to your Atlas cluster
    const client = new MongoClient(process.env.ATLAS_CONNECTION_STRING);

    try {
        // Get embeddings for a query
        const queryEmbeddings = await getEmbeddings(query);
        await client.connect();

        const db = client.db("sample_airbnb");
        const collection = db.collection("listingsAndReviews");

        const pipeline = [
            {
                $vectorSearch: {
                    index: "vector_index",
                    queryVector: queryEmbeddings,
                    path: "embeddings",
                    exact: true,
                    limit: 5
                }
            },
            {
                $project: {
                    _id: 0,
                    summary: 1,
                    listing_url: 1,
                    score: {
                        $meta: "vectorSearchScore"
                    }
                }
            }
        ];

        // Retrieve documents from Atlas using this Vector Search query
        const result = collection.aggregate(pipeline);

        const arrayOfQueryDocs = [];
        for await (const doc of result) {
            arrayOfQueryDocs.push(doc);
        }
        return arrayOfQueryDocs;
    } catch (err) {
        console.log(err.stack);
    } finally {
        await client.close();
    }
}
This code performs a vector query on your local Atlas deployment or your Atlas cluster.
Run a test query to confirm you're getting the expected results. Create a new file called test-query.js, and paste the following code into it:
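A minimal version, assuming you only want to call getQueryResults with a sample query and print each returned document, might look like this sketch:

import { getQueryResults } from './retrieve-documents.js';

async function run() {
    try {
        // Query for documents related to the sample search term
        const documents = await getQueryResults("beach house");
        documents.forEach(doc => console.log(doc));
    } catch (err) {
        console.log(err.stack);
    }
}

run().catch(console.dir);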
Run the following code to execute the query:
node --env-file=.env test-query.js
{
  listing_url: 'https://www.airbnb.com/rooms/10317142',
  summary: 'Ocean Living! Secluded Secret Beach! Less than 20 steps to the Ocean! This spacious 4 Bedroom and 4 Bath house has all you need for your family or group. Perfect for Family Vacations and executive retreats. We are in a gated beachfront estate, with lots of space for your activities.',
  score: 0.8703486323356628
}
{
  listing_url: 'https://www.airbnb.com/rooms/10488837',
  summary: 'There are 2 bedrooms and a living room in the house. 1 Bathroom. 1 Kitchen. Friendly neighbourhood. Close to sea side and Historical places.',
  score: 0.861828088760376
}
{
  listing_url: 'https://www.airbnb.com/rooms/11719579',
  summary: 'This is a gorgeous home just off the main rd, with lots of sun and new amenities. room has own entrance with small deck, close proximity to the beach , bus to the junction , around the corner form all the cafes, bars and restaurants (2 mins).',
  score: 0.8616757392883301
}
{
  listing_url: 'https://www.airbnb.com/rooms/12657285',
  summary: 'This favourite home offers a huge balcony, lots of space, easy life, all the comfort you need and a fantastic location! The beach is only 3 minutes away. Metro is 2 blocks away (starting august 2016).',
  score: 0.8583258986473083
}
{
  listing_url: 'https://www.airbnb.com/rooms/10985735',
  summary: '5 minutes to seaside where you can swim, and 5 minutes to the woods, this two floors single house contains a cultivated garden with fruit trees, two large bedrooms and a big living room with a large sea view.',
  score: 0.8573609590530396
}
Download the local LLM and model information mapping.
Download the Mistral 7B model from GPT4All. To explore other models, refer to the GPT4All website.
Move this model into your local-rag-mongodb project directory.
In your project directory, download the file that contains the model information:
curl -L https://gpt4all.io/models/models3.json -o ./models3.json
Answer questions on your data.
Create a file called local-llm.js and paste the following code:
import { loadModel, createCompletionStream } from "gpt4all";
import { getQueryResults } from './retrieve-documents.js';

async function run() {
    try {
        const query = "beach house";
        const documents = await getQueryResults(query);

        let textDocuments = "";
        documents.forEach(doc => {
            const summary = doc.summary;
            const link = doc.listing_url;
            const string = `Summary: ${summary} Link: ${link}. \n`
            textDocuments += string;
        });

        const model = await loadModel(
            "mistral-7b-openorca.gguf2.Q4_0.gguf", {
                verbose: true,
                allowDownload: false,
                modelConfigFile: "./models3.json"
            }
        );

        const question = "Can you recommend me a few AirBnBs that are beach houses? Include a link to the listings.";
        const prompt = `Use the following pieces of context to answer the question at the end.
            {${textDocuments}}
            Question: {${question}}`;

        process.stdout.write("Output: ");
        const stream = createCompletionStream(model, prompt);
        stream.tokens.on("data", (data) => {
            process.stdout.write(data);
        });
        // Wait until the stream finishes.
        await stream.result;
        process.stdout.write("\n");
        model.dispose();

        console.log("\n Source documents: \n");
        console.log(textDocuments);
    } catch (err) {
        console.log(err.stack);
    }
}

run().catch(console.dir);
This code does the following:
Creates an embedding for your query string.
Queries for relevant documents.
Prompts the LLM and returns the response. The generated response might vary.
Run the following code to complete your RAG implementation:
node --env-file=.env local-llm.js
Found mistral-7b-openorca.gguf2.Q4_0.gguf at /Users/dachary.carey/.cache/gpt4all/mistral-7b-openorca.gguf2.Q4_0.gguf
Creating LLModel: {
  llmOptions: {
    model_name: 'mistral-7b-openorca.gguf2.Q4_0.gguf',
    model_path: '/Users/dachary.carey/.cache/gpt4all',
    library_path: '/Users/dachary.carey/temp/local-rag-mongodb/node_modules/gpt4all/runtimes/darwin/native;/Users/dachary.carey/temp/local-rag-mongodb',
    device: 'cpu',
    nCtx: 2048,
    ngl: 100
  },
  modelConfig: {
    systemPrompt: '<|im_start|>system\n' +
      'You are MistralOrca, a large language model trained by Alignment Lab AI.\n' +
      '<|im_end|>',
    promptTemplate: '<|im_start|>user\n%1<|im_end|>\n<|im_start|>assistant\n%2<|im_end|>\n',
    order: 'e',
    md5sum: 'f692417a22405d80573ac10cb0cd6c6a',
    name: 'Mistral OpenOrca',
    filename: 'mistral-7b-openorca.gguf2.Q4_0.gguf',
    filesize: '4108928128',
    requires: '2.7.1',
    ramrequired: '8',
    parameters: '7 billion',
    quant: 'q4_0',
    type: 'Mistral',
    description: '<strong>Strong overall fast chat model</strong><br><ul><li>Fast responses</li><li>Chat based model</li><li>Trained by Mistral AI<li>Finetuned on OpenOrca dataset curated via <a href="https://atlas.nomic.ai/">Nomic Atlas</a><li>Licensed for commercial use</ul>',
    url: 'https://gpt4all.io/models/gguf/mistral-7b-openorca.gguf2.Q4_0.gguf',
    path: '/Users/dachary.carey/.cache/gpt4all/mistral-7b-openorca.gguf2.Q4_0.gguf'
  }
}
Output: Yes, here are a few AirBnB beach houses with links to the listings:

1. Ocean Living! Secluded Secret Beach! Less than 20 steps to the Ocean! - https://www.airbnb.com/rooms/10317142
2. 2 Bedrooms and a living room in the house. 1 Bathroom. 1 Kitchen. Friendly neighbourhood. Close to sea side and Historical places - https://www.airbnb.com/rooms/10488837
3. Gorgeous home just off the main rd, with lots of sun and new amenities. Room has own entrance with small deck, close proximity to the beach - https://www.airbnb.com/rooms/11719579
4. This favourite home offers a huge balcony, lots of space, easy life, all the comfort you need and a fantastic location! The beach is only 3 minutes away. Metro is 2 blocks away (starting august 2016) - https://www.airbnb.com/rooms/12657285
5. 5 minutes to seaside where you can swim, and 5 minutes to the woods, this two floors single house contains a cultivated garden with fruit trees, two large bedrooms and a big living room with a large sea view - https://www.airbnb.com/rooms/10985735

Source documents:

Summary: Ocean Living! Secluded Secret Beach! Less than 20 steps to the Ocean! This spacious 4 Bedroom and 4 Bath house has all you need for your family or group. Perfect for Family Vacations and executive retreats. We are in a gated beachfront estate, with lots of space for your activities. Link: https://www.airbnb.com/rooms/10317142.
Summary: There are 2 bedrooms and a living room in the house. 1 Bathroom. 1 Kitchen. Friendly neighbourhood. Close to sea side and Historical places. Link: https://www.airbnb.com/rooms/10488837.
Summary: This is a gorgeous home just off the main rd, with lots of sun and new amenities. room has own entrance with small deck, close proximity to the beach , bus to the junction , around the corner form all the cafes, bars and restaurants (2 mins). Link: https://www.airbnb.com/rooms/11719579.
Summary: This favourite home offers a huge balcony, lots of space, easy life, all the comfort you need and a fantastic location! The beach is only 3 minutes away. Metro is 2 blocks away (starting august 2016). Link: https://www.airbnb.com/rooms/12657285.
Summary: 5 minutes to seaside where you can swim, and 5 minutes to the woods, this two floors single house contains a cultivated garden with fruit trees, two large bedrooms and a big living room with a large sea view. Link: https://www.airbnb.com/rooms/10985735.
This section demonstrates a sample RAG implementation that you can run locally by using Atlas Vector Search, LangChain, and GPT4All.
In your interactive Python notebook, run the following code snippets:
Instantiate Atlas as a vector database.
The following code uses the LangChain integration for Atlas Vector Search to instantiate your local Atlas deployment or your Atlas cluster as a vector database, also called a vector store.
from langchain_mongodb import MongoDBAtlasVectorSearch

# Instantiate the vector store
vector_store = MongoDBAtlasVectorSearch(
    collection=collection,
    embedding=model,
    index_name="vector_index",
    embedding_key="embeddings",
    text_key="summary"
)
You can also run the following code to execute a sample semantic search query:
import pprint

query = "beach house"
results = vector_store.similarity_search(query)

pprint.pprint(results)
[Document(page_content='Beach house with contemporary interior', metadata={'_id': '22123688', 'listing_url': 'https://www.airbnb.com/rooms/22123688', 'name': 'Bungan Beach House', ... }),
 Document(page_content="Well done !!! you won't find a better location in Manly. The “Beach House” Apartments overlook Cabbage Tree Bay Aquatic Reserve between Manly and Shelly Beach, in one of Manly's premier locations Swim, dive, snorkel, surf, paddle board, coastal walkways, ocean pool, restaurants, all literally at your doorstep, or simply chill and unwind. Manly is amazing, I look forward to welcoming you", metadata={'_id': '18917022', 'listing_url': 'https://www.airbnb.com/rooms/18917022', 'name': 'Beach House Manly Apartment 4', ... }),
 Document(page_content='Beautiful spacious two story beach house that has an amazing private gated grass area fronting Makaha beach. Perfect for family BBQ,s while watching the sun set into the ocean. Room for 10 people. Four night minimum stay required', metadata={'_id': '7099038', 'listing_url': 'https://www.airbnb.com/rooms/7099038', 'name': 'Ocean front Beach House in Makaha', ... }),
 Document(page_content='Beautifully finished, newly renovated house with pool. The ultimate in indoor/outdoor living. Excellent finishes and a short stroll to the beach.', metadata={'_id': '19768051', 'listing_url': 'https://www.airbnb.com/rooms/19768051', 'name': 'Ultra Modern Pool House Maroubra', ... })]
Download and configure the local LLM.
Download the Mistral 7B model from GPT4All. To explore other models, refer to the GPT4All website.
Paste the following code in your notebook to configure the LLM. Before running, replace <path-to-model> with the path where you saved the LLM locally.

from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain_community.llms import GPT4All

# Configure the LLM
local_path = "<path-to-model>"

# Callbacks support token-wise streaming
callbacks = [StreamingStdOutCallbackHandler()]

# Verbose is required to pass to the callback manager
llm = GPT4All(model=local_path, callbacks=callbacks, verbose=True)
Answer questions on your data.
Run the following code to complete your RAG implementation. This code does the following:
Instantiates Atlas Vector Search as a retriever to query for similar documents.
Creates the following LangChain-specific components:
A prompt template to instruct the LLM to use the retrieved documents as context for your query. LangChain passes these documents to the {context} input variable and your query to the {question} variable.
A chain that specifies Atlas Vector Search as the retriever, the prompt template that you wrote, and the local LLM that you configured to generate a context-aware response.
Prompts the LLM with a sample query and returns the response. The generated response might vary.
from langchain_core.prompts import PromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough

# Instantiate Atlas Vector Search as a retriever
retriever = vector_store.as_retriever()

# Define the prompt template
template = """
Use the following pieces of context to answer the question at the end.
{context}
Question: {question}
"""
custom_rag_prompt = PromptTemplate.from_template(template)

def format_docs(docs):
    return "\n\n".join(doc.page_content for doc in docs)

# Create the chain
rag_chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | custom_rag_prompt
    | llm
    | StrOutputParser()
)

# Prompt the chain
question = "Can you recommend me a few AirBnBs that are beach houses? Include a link to the listings."
answer = rag_chain.invoke(question)

# Return source documents
documents = retriever.invoke(question)
print("\nSource documents:")
pprint.pprint(documents)
Answer: Yes, I can recommend a few AirBnBs that are beach houses. Here are some links to their respective listings:

1. Oceanfront home on private, gated property - https://www.airbnb.com/rooms/15073739
2. Ground Floor, Oceanfront condo with complete remodeling - https://www.airbnb.com/rooms/14687148
3. 4 bedroom house in a great location with ocean views and free salt water pool - https://www.airbnb.ca/s?host_id=740644

Source documents:
[Document(page_content='Please look for Airbnb numb (Phone number hidden by Airbnb) to book with us. We do not take bookings through this one. It is live for others to read reviews. Oceanfront home on private, gated property. This unique property offers year-round swimming, private beach access and astounding ocean and mountain views. Traveling with a large group? Another 3 bedroom home is available for rent on this spacious property. Visit https://www.airbnb.com/rooms/15073739 or contact us for more information.', metadata={'_id': '14827972', 'listing_url': 'https://www.airbnb.com/rooms/14827972', 'name': 'Oceanfront Beach House Makai', ... }),
 Document(page_content='This GROUND FLOOR, OCEANFRONT condo is just feet from ocean access. Completely remodeled kitchen, bathroom and living room, with queen size bed in the bedroom, and queen size convertible sofa bed in the living room. Relax with the 55" SMART blue ray DVD, cable, and free WiFi. With ceiling fans in each room and trade winds, this condo rarely need the air conditioning unit in the living room. Airbnb charges a reservation fee to all guests at the time of booking. Please see "Other Things to Note"', metadata={'_id': '18173803', 'listing_url': 'https://www.airbnb.com/rooms/18173803', 'name': 'Papakea A108', ... }),
 Document(page_content='2 minutes to bus stop, beach - Cafes, Sun, Surf & Sand. 4 Secure rooms in older style, 4 bedroom house. Can squeeze in up to 15 guests (9 beds, 2 sofa beds in lounge & a single sofa mattress) BUT is best suited to 10-12 people Wireless Internet, under cover parking, unlimited street parking.', metadata={'_id': '2161945', 'listing_url': 'https://www.airbnb.com/rooms/2161945', 'name': 'Sand Sun Surf w Parking. City 9km', ... }),
 Document(page_content='High Quality for a third of the price! Great Location & Ocean Views! FREE Salt Water Roof-Deck Pool, Activities & Rental Car Desk! Hassle-Free Self Check-In via Lockbox. Located In Famous Waikiki: Easily walk to Beaches, Shops/all Restaurants! Hawaiian Convention Center is only 2 Blocks Away! On-Site Garage $. See my similar listings if your dates are not available. https://www.airbnb.ca/s?host_id=740644', metadata={'_id': '13146333', 'listing_url': 'https://www.airbnb.com/rooms/13146333', 'name': '~TROPICAL DREAM VACATION~ Ocean View', ... })]