
Build a Local RAG Implementation with Atlas Vector Search

On this page

  • Background
  • Prerequisites
  • Create a Local Deployment or Atlas Cluster
  • Set Up the Environment
  • Generate Embeddings with a Local Model
  • Create the Atlas Vector Search Index
  • Answer Questions with the Local LLM

This tutorial demonstrates how to implement retrieval-augmented generation (RAG) locally, without the need for API keys or credits. To learn more about RAG, see Retrieval-Augmented Generation (RAG) with Atlas Vector Search.
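Before starting, it helps to see the idea that makes retrieval work: vector search ranks documents by how close their embeddings are to the query embedding. The following toy sketch, which is not part of the tutorial code, illustrates cosine similarity, one common closeness measure; the vectors are made up:

```python
import math

# Illustration only: vector search ranks documents by how close their
# embedding vectors are to the query's embedding. Cosine similarity is
# one common closeness measure. The vectors below are made up.
def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

query = [1.0, 0.0]
doc_close = [0.9, 0.1]   # points in nearly the same direction as the query
doc_far = [0.0, 1.0]     # orthogonal to the query

print(cosine_similarity(query, doc_close) > cosine_similarity(query, doc_far))
# True
```

In a real deployment, Atlas Vector Search performs this ranking server-side against the index you create later in the tutorial.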

Specifically, you perform the following actions:

  1. Create a local Atlas deployment or deploy a cluster on the cloud.

  2. Set up the environment.

  3. Use a local embedding model to generate vector embeddings.

  4. Create an Atlas Vector Search index on your data.

  5. Use a local LLM to answer questions on your data.
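The steps above compose into a single pipeline. The following sketch shows its shape; every function here is a placeholder for a later step in the tutorial, not a real API:

```python
# High-level shape of the pipeline built in this tutorial. Every function
# is a placeholder standing in for a later step, not a real API.
def embed(text):
    # Step 3: a local embedding model turns text into a vector (stubbed).
    return [float(len(text))]

def vector_search(query_vector, documents):
    # Steps 4-5: Atlas Vector Search would return the closest documents;
    # this stub just returns them all.
    return documents

def answer(question, context):
    # Step 5: a local LLM would generate an answer grounded in the context.
    return f"{question} -> based on {len(context)} documents"

docs = ["Cozy loft near the beach", "Quiet room downtown"]
result = answer("Recommend a listing", vector_search(embed("beach"), docs))
print(result)
# Recommend a listing -> based on 2 documents
```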




To complete this tutorial, you can either create a local Atlas deployment by using the Atlas CLI or deploy a cluster on the cloud. The Atlas CLI is the command-line interface for MongoDB Atlas, and you can use the Atlas CLI to interact with Atlas from the terminal for various tasks, including creating local Atlas deployments. To learn more, see Manage Local and Cloud Deployments from the Atlas CLI.

Note

Local Atlas deployments are intended for testing only. For production environments, deploy a cluster.

You also use the following open-source models in this tutorial:

There are several ways to download and deploy LLMs locally. In this tutorial, you download Ollama and pull the open-source models listed above to perform RAG tasks.

This tutorial also uses the Microsoft.Extensions.AI.Ollama package to connect to these models and integrate them with Atlas Vector Search. If you prefer different models or a different framework, you can adapt this tutorial by replacing the Ollama model names with their equivalents for your preferred setup.

This tutorial also uses the Go language port of LangChain, a popular open-source LLM framework, to connect to these models and integrate them with Atlas Vector Search. If you prefer different models or a different framework, you can adapt this tutorial by replacing the Ollama model names or LangChain library components with their equivalents for your preferred setup.

There are several ways to download and deploy LLMs locally. In this tutorial, you download Ollama and pull the following open-source models to perform RAG tasks:

This tutorial also uses LangChain4j, a popular open-source LLM framework for Java, to connect to these models and integrate them with Atlas Vector Search. If you prefer different models or a different framework, you can adapt this tutorial by replacing the Ollama model names or LangChain4j library components with their equivalents for your preferred setup.

You also use the following open-source models in this tutorial:

There are several ways to download and deploy LLMs locally. In this tutorial, you download the Mistral 7B model by using GPT4All, an open-source ecosystem for local LLM development.

When working through this tutorial, you use an interactive Python notebook. This environment allows you to create and execute individual code blocks without running the entire file each time.

To complete this tutorial, you must have the following:

  • Java Development Kit (JDK) version 8 or later.

  • An environment to set up and run a Java application. We recommend that you use an integrated development environment (IDE) such as IntelliJ IDEA or Eclipse IDE to configure Maven or Gradle to build and run your project.

To complete this tutorial, you must have the following:

  • The Atlas CLI installed and running v1.14.3 or later.

  • MongoDB Command Line Database Tools installed.

  • An interactive Python notebook that you can run locally. You can run interactive Python notebooks in VS Code. Ensure that your environment runs Python v3.10 or later.

Note

If you use a hosted service such as Colab, ensure that you have enough RAM to run this tutorial. Otherwise, you might experience performance issues.

This tutorial requires a local or cloud Atlas deployment loaded with the sample AirBnB listings dataset to use as a vector database.

If you have an existing Atlas cluster running MongoDB version 6.0.11, 7.0.2, or later with the sample_airbnb.listingsAndReviews sample data loaded, you can skip this step.

You can create a local Atlas deployment using the Atlas CLI or deploy a cluster on the cloud.

You can create a local deployment using the Atlas CLI.

1

In your terminal, run atlas auth login to authenticate with your Atlas login credentials. To learn more, see Connect from the Atlas CLI.

Note

If you don't have an existing Atlas account, run atlas setup or create a new account.

2

Run atlas deployments setup and follow the prompts to create a local deployment.

For detailed instructions, see Create a Local Atlas Deployment.

3
  1. Run the following command in your terminal to download the sample data:

    curl https://atlas-education.s3.amazonaws.com/sampledata.archive -o sampledata.archive
  2. Run the following command to load the data into your deployment, replacing <port-number> with the port where you're hosting the deployment:

    mongorestore --archive=sampledata.archive --port=<port-number>

    Note

    You must install MongoDB Command Line Database Tools to access the mongorestore command.

You can create and deploy a new cluster using the Atlas CLI or Atlas UI. Ensure that you preload the new cluster with the sample data.

To learn how to load the sample data provided by Atlas into your cluster, see Load Sample Data.

For detailed instructions, see Create a Cluster.

In this section, you set up the environment for this tutorial. Create a project, install the required packages, and define a connection string:

1

Run the following commands in your terminal to create a new directory named MyCompany.RAG.Local and initialize your project:

dotnet new console -o MyCompany.RAG.Local
cd MyCompany.RAG.Local
2

Run the following commands:

dotnet add package MongoDB.Driver --version 3.1.0
dotnet add package Microsoft.Extensions.AI.Ollama --prerelease

Run the following commands:

dotnet add package MongoDB.Driver --version 3.1.0
dotnet add package Microsoft.Extensions.Configuration
dotnet add package Microsoft.Extensions.Configuration.Json
dotnet add package Microsoft.Extensions.AI --prerelease
dotnet add package Microsoft.Extensions.AI.Ollama --prerelease
3

Export your connection string, set it in PowerShell, or use your IDE's environment variable manager to make the connection string available to your project.

export ATLAS_CONNECTION_STRING="<connection-string>"

Replace the <connection-string> placeholder value with your Atlas connection string.

If you're using a local Atlas deployment, your connection string follows this format, replacing <port-number> with the port for your local deployment:

export ATLAS_CONNECTION_STRING="mongodb://localhost:<port-number>/?directConnection=true"

If you're using an Atlas cluster, your connection string follows this format, replacing <connection-string> with your Atlas cluster's SRV connection string:

export ATLAS_CONNECTION_STRING="<connection-string>"

Note

Your connection string should use the following format:

mongodb+srv://<db_username>:<db_password>@<clusterName>.<hostname>.mongodb.net

In this section, you set up the environment for this tutorial. Create a project, install the required packages, and define a connection string:

1

Run the following commands in your terminal to create a new directory named local-rag-mongodb and initialize your project:

mkdir local-rag-mongodb
cd local-rag-mongodb
go mod init local-rag-mongodb
2

Run the following commands:

go get github.com/joho/godotenv
go get go.mongodb.org/mongo-driver/mongo
go get github.com/tmc/langchaingo/llms
go get github.com/tmc/langchaingo/llms/ollama
go get github.com/tmc/langchaingo/prompts
3

In your project, create a .env file to store your connection string.

.env
ATLAS_CONNECTION_STRING = "<connection-string>"

Replace the <connection-string> placeholder value with your Atlas connection string.

If you're using a local Atlas deployment, your connection string follows this format, replacing <port-number> with the port for your local deployment:

ATLAS_CONNECTION_STRING = "mongodb://localhost:<port-number>/?directConnection=true"

If you're using an Atlas cluster, your connection string follows this format, replacing <connection-string> with your Atlas cluster's SRV connection string:

ATLAS_CONNECTION_STRING = "<connection-string>"

Note

Your connection string should use the following format:

mongodb+srv://<db_username>:<db_password>@<clusterName>.<hostname>.mongodb.net

In this section, you set up the environment for this tutorial. Create a project, install the required packages, and define a connection string:

1
  1. From your IDE, create a Java project named local-rag-mongodb using Maven or Gradle.

  2. Add the following dependencies, depending on your package manager:

    If you are using Maven, add the following dependencies to the dependencies array in your project's pom.xml file:

    pom.xml
    <dependencies>
        <!-- MongoDB Java Sync Driver v5.2.0 or later -->
        <dependency>
            <groupId>org.mongodb</groupId>
            <artifactId>mongodb-driver-sync</artifactId>
            <version>[5.2.0,)</version>
        </dependency>
        <!-- Java library for working with Ollama -->
        <dependency>
            <groupId>dev.langchain4j</groupId>
            <artifactId>langchain4j-ollama</artifactId>
            <version>0.35.0</version>
        </dependency>
    </dependencies>

    If you are using Gradle, add the following to the dependencies array in your project's build.gradle file:

    build.gradle
    dependencies {
        // MongoDB Java Sync Driver v5.2.0 or later
        implementation 'org.mongodb:mongodb-driver-sync:[5.2.0,)'
        // Java library for working with Ollama
        implementation 'dev.langchain4j:langchain4j-ollama:0.35.0'
    }
  3. Run your package manager to install the dependencies to your project.

2

Note

This example sets the variable in the IDE. Production applications might manage environment variables through a deployment configuration, CI/CD pipeline, or secrets manager, but you can adapt the provided code to fit your use case.

In your IDE, create a new configuration template and add the following variables to your project:

  • If you are using IntelliJ IDEA, create a new Application run configuration template, then add your variables as semicolon-separated values in the Environment variables field (for example, FOO=123;BAR=456). Apply the changes and click OK.

    To learn more, see the Create a run/debug configuration from a template section of the IntelliJ IDEA documentation.

  • If you are using Eclipse, create a new Java Application launch configuration, then add each variable as a new key-value pair in the Environment tab. Apply the changes and click OK.

    To learn more, see the Creating a Java application launch configuration section of the Eclipse IDE documentation.

Replace <port-number> with the port for your local deployment.

Your connection string should use the following format:

ATLAS_CONNECTION_STRING = "mongodb://localhost:<port-number>/?directConnection=true"

Replace the <connection-string> placeholder value with the SRV connection string for your Atlas cluster.

Your connection string should use the following format:

mongodb+srv://<db_username>:<db_password>@<clusterName>.<hostname>.mongodb.net

In this section, you set up the environment for this tutorial. Create a project, install the required packages, and define a connection string:

1

Run the following commands in your terminal to create a new directory named local-rag-mongodb and initialize your project:

mkdir local-rag-mongodb
cd local-rag-mongodb
npm init -y
2

Run the following command:

npm install mongodb @xenova/transformers node-gyp gpt4all
3

In your project's package.json file, specify the type field as shown in the following example, and then save the file.

package.json
{
  "name": "local-rag-mongodb",
  "type": "module",
  ...
}
4

In your project, create a .env file to store your connection string.

.env
ATLAS_CONNECTION_STRING = "<connection-string>"

Replace the <connection-string> placeholder value with your Atlas connection string.

If you're using a local Atlas deployment, your connection string follows this format, replacing <port-number> with the port for your local deployment:

ATLAS_CONNECTION_STRING = "mongodb://localhost:<port-number>/?directConnection=true"

If you're using an Atlas cluster, your connection string follows this format, replacing <connection-string> with your Atlas cluster's SRV connection string:

ATLAS_CONNECTION_STRING = "<connection-string>"

Note

Your connection string should use the following format:

mongodb+srv://<db_username>:<db_password>@<clusterName>.<hostname>.mongodb.net

Note

Minimum Node.js Version Requirements

Node.js v20.x introduced the --env-file option. If you are using an older version of Node.js, add the dotenv package to your project, or use a different method to manage your environment variables.

In this section, you set up the environment for this tutorial.

1

Run the following commands in your terminal to create a new directory named local-rag-mongodb.

mkdir local-rag-mongodb
cd local-rag-mongodb
2

In the local-rag-mongodb directory, save a file with the .ipynb extension. You will run the remaining code snippets for this tutorial in your notebook. You must create a new code block for each snippet.

3

Run the following command in your notebook:

pip install --quiet pymongo gpt4all sentence_transformers
4

If you're using a local Atlas deployment, run the following code in your notebook, replacing <port-number> with the port for your local deployment:

ATLAS_CONNECTION_STRING = ("mongodb://localhost:<port-number>/?directConnection=true")

If you're using an Atlas cluster, run the following code in your notebook, replacing <connection-string> with your Atlas cluster's SRV connection string:

ATLAS_CONNECTION_STRING = ("<connection-string>")

Note

Your connection string should use the following format:

mongodb+srv://<db_username>:<db_password>@<clusterName>.<hostname>.mongodb.net
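The two formats above differ only in how the host is specified. As a quick sanity check, you can wrap the choice in a small helper; the function name and the port 27017 below are illustrative, not part of the tutorial:

```python
# Hypothetical helper (not part of the tutorial) that assembles the
# connection string for either deployment type. Port 27017 is an example.
def build_connection_string(local_port=None, srv_uri=None):
    if local_port is not None:
        # Local Atlas deployments require a direct connection.
        return f"mongodb://localhost:{local_port}/?directConnection=true"
    if srv_uri is not None:
        # Atlas clusters use the SRV connection string as-is.
        return srv_uri
    raise ValueError("provide either local_port or srv_uri")

ATLAS_CONNECTION_STRING = build_connection_string(local_port=27017)
print(ATLAS_CONNECTION_STRING)
# mongodb://localhost:27017/?directConnection=true
```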

In this section, you load an embedding model locally and generate vector embeddings by using data from the sample_airbnb database, which contains a single collection called listingsAndReviews.
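Whichever language you follow, this section selects the same subset of documents: those with a non-empty summary field and no embeddings field yet. As a language-neutral sketch, here is the equivalent query filter as a Python dictionary (the drivers below build the same filter with their own builders):

```python
# The selection criteria shared by the language examples in this section,
# expressed as a MongoDB query filter document: documents that have a
# non-empty "summary" field and no "embeddings" field yet.
embedding_filter = {
    "$and": [
        {"summary": {"$exists": True, "$ne": ""}},
        {"embeddings": {"$exists": False}},
    ]
}

# With a PyMongo collection handle you might apply it as:
#   documents = collection.find(embedding_filter).limit(250)
print(len(embedding_filter["$and"]))
# 2
```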

1

This example uses the nomic-embed-text model from Ollama.

Run the following command to pull the embedding model:

ollama pull nomic-embed-text
2

To encapsulate the logic for each piece of the implementation, create a few classes to coordinate and manage the services.

  1. Create a file called OllamaAIService.cs, and paste the following code into it:

    OllamaAIService.cs
    namespace MyCompany.RAG.Local;
    using Microsoft.Extensions.AI;

    public class OllamaAIService
    {
        private static readonly Uri OllamaUri = new("http://localhost:11434/");
        private static readonly string EmbeddingModelName = "nomic-embed-text";
        private static readonly OllamaEmbeddingGenerator EmbeddingGenerator = new OllamaEmbeddingGenerator(OllamaUri, EmbeddingModelName);

        public async Task<float[]> GetEmbedding(string text)
        {
            var embedding = await EmbeddingGenerator.GenerateEmbeddingVectorAsync(text);
            return embedding.ToArray();
        }
    }
  2. Create another file called MongoDBDataService.cs and paste the following code into it:

    MongoDBDataService.cs
    namespace MyCompany.RAG.Local;

    using MongoDB.Driver;
    using MongoDB.Bson;

    public class MongoDBDataService
    {
        private static readonly string? ConnectionString = Environment.GetEnvironmentVariable("ATLAS_CONNECTION_STRING");
        private static readonly MongoClient Client = new MongoClient(ConnectionString);
        private static readonly IMongoDatabase Database = Client.GetDatabase("sample_airbnb");
        private static readonly IMongoCollection<BsonDocument> Collection = Database.GetCollection<BsonDocument>("listingsAndReviews");

        public List<BsonDocument>? GetDocuments()
        {
            var filter = Builders<BsonDocument>.Filter.And(
                Builders<BsonDocument>.Filter.And(
                    Builders<BsonDocument>.Filter.Exists("summary", true),
                    Builders<BsonDocument>.Filter.Ne("summary", "")
                ),
                Builders<BsonDocument>.Filter.Exists("embeddings", false)
            );
            return Collection.Find(filter).Limit(250).ToList();
        }

        public async Task<string> UpdateDocuments(Dictionary<string, float[]> embeddings)
        {
            var listWrites = new List<WriteModel<BsonDocument>>();
            foreach (var kvp in embeddings)
            {
                var filterForUpdate = Builders<BsonDocument>.Filter.Eq("_id", kvp.Key);
                var updateDefinition = Builders<BsonDocument>.Update.Set("embeddings", kvp.Value);
                listWrites.Add(new UpdateOneModel<BsonDocument>(filterForUpdate, updateDefinition));
            }

            try
            {
                var result = await Collection.BulkWriteAsync(listWrites);
                listWrites.Clear();
                return $"{result.ModifiedCount} documents updated successfully.";
            }
            catch (Exception e)
            {
                return $"Exception: {e.Message}";
            }
        }
    }

    Generating embeddings takes time and computational resources. In this example, you generate embeddings for only 250 documents from the collection, which should take only a few minutes. If you want to change the number of documents you're generating embeddings for:

    • Change the number of documents: Adjust the number passed to .Limit(250) in the Find() call in GetDocuments().

    • Generate embeddings for all documents: Omit the .Limit(250) call entirely.

  3. Create another file called EmbeddingGenerator.cs and paste the following code into it:

    EmbeddingGenerator.cs
    namespace MyCompany.RAG.Local;

    public class EmbeddingGenerator
    {
        private readonly MongoDBDataService _dataService = new();
        private readonly OllamaAIService _ollamaAiService = new();

        public async Task<string> GenerateEmbeddings()
        {
            // Retrieve documents from MongoDB
            var documents = _dataService.GetDocuments();
            if (documents != null)
            {
                Console.WriteLine("Generating embeddings.");
                Dictionary<string, float[]> embeddings = new Dictionary<string, float[]>();
                foreach (var document in documents)
                {
                    try
                    {
                        var id = document.GetValue("_id").ToString();
                        var summary = document.GetValue("summary").ToString();
                        if (id != null && summary != null)
                        {
                            // Use Ollama to generate vector embeddings for each
                            // document's "summary" field
                            var embedding = await _ollamaAiService.GetEmbedding(summary);
                            embeddings.Add(id, embedding);
                        }
                    }
                    catch (Exception e)
                    {
                        return $"Error creating embeddings for summaries: {e.Message}";
                    }
                }
                // Add a new field to the MongoDB documents with the vector embedding
                var result = await _dataService.UpdateDocuments(embeddings);
                return result;
            }
            else
            {
                return "No documents found";
            }
        }
    }

    This code contains the logic to:

    • Get documents from the database.

    • Use the embedding model to generate vector embeddings for the summary field of each document.

    • Update the documents with the new embeddings.

  4. Paste the following code into Program.cs:

    Program.cs
    using MyCompany.RAG.Local;

    var embeddingGenerator = new EmbeddingGenerator();
    var result = await embeddingGenerator.GenerateEmbeddings();
    Console.WriteLine(result);
  5. Compile and run your project to generate embeddings:

    dotnet run MyCompany.RAG.Local.csproj
    Generating embeddings.
    250 documents updated successfully.
1

This example uses the nomic-embed-text model from Ollama.

Run the following command to pull the embedding model:

ollama pull nomic-embed-text
2
  1. Create a common directory to store code that you'll reuse in multiple steps.

    mkdir common && cd common
  2. Create a file called get-embeddings.go, and paste the following code into it:

    get-embeddings.go
    package common

    import (
        "context"
        "log"

        "github.com/tmc/langchaingo/llms/ollama"
    )

    func GetEmbeddings(documents []string) [][]float32 {
        llm, err := ollama.New(ollama.WithModel("nomic-embed-text"))
        if err != nil {
            log.Fatalf("failed to connect to ollama: %v", err)
        }
        ctx := context.Background()
        embs, err := llm.CreateEmbedding(ctx, documents)
        if err != nil {
            log.Fatalf("failed to create ollama embedding: %v", err)
        }
        return embs
    }
  3. To simplify marshalling and unmarshalling documents in this collection to and from BSON, create a file called models.go and paste the following code into it:

    models.go
    package common

    import (
        "time"

        "go.mongodb.org/mongo-driver/bson/primitive"
    )

    type Image struct {
        ThumbnailURL string `bson:"thumbnail_url"`
        MediumURL string `bson:"medium_url"`
        PictureURL string `bson:"picture_url"`
        XLPictureURL string `bson:"xl_picture_url"`
    }

    type Host struct {
        ID string `bson:"host_id"`
        URL string `bson:"host_url"`
        Name string `bson:"host_name"`
        Location string `bson:"host_location"`
        About string `bson:"host_about"`
        ThumbnailURL string `bson:"host_thumbnail_url"`
        PictureURL string `bson:"host_picture_url"`
        Neighborhood string `bson:"host_neighborhood"`
        IsSuperhost bool `bson:"host_is_superhost"`
        HasProfilePic bool `bson:"host_has_profile_pic"`
        IdentityVerified bool `bson:"host_identity_verified"`
        ListingsCount int32 `bson:"host_listings_count"`
        TotalListingsCount int32 `bson:"host_total_listings_count"`
        Verifications []string `bson:"host_verifications"`
    }

    type Location struct {
        Type string `bson:"type"`
        Coordinates []float64 `bson:"coordinates"`
        IsLocationExact bool `bson:"is_location_exact"`
    }

    type Address struct {
        Street string `bson:"street"`
        Suburb string `bson:"suburb"`
        GovernmentArea string `bson:"government_area"`
        Market string `bson:"market"`
        Country string `bson:"Country"`
        CountryCode string `bson:"country_code"`
        Location Location `bson:"location"`
    }

    type Availability struct {
        Thirty int32 `bson:"availability_30"`
        Sixty int32 `bson:"availability_60"`
        Ninety int32 `bson:"availability_90"`
        ThreeSixtyFive int32 `bson:"availability_365"`
    }

    type ReviewScores struct {
        Accuracy int32 `bson:"review_scores_accuracy"`
        Cleanliness int32 `bson:"review_scores_cleanliness"`
        CheckIn int32 `bson:"review_scores_checkin"`
        Communication int32 `bson:"review_scores_communication"`
        Location int32 `bson:"review_scores_location"`
        Value int32 `bson:"review_scores_value"`
        Rating int32 `bson:"review_scores_rating"`
    }

    type Review struct {
        ID string `bson:"_id"`
        Date time.Time `bson:"date,omitempty"`
        ListingId string `bson:"listing_id"`
        ReviewerId string `bson:"reviewer_id"`
        ReviewerName string `bson:"reviewer_name"`
        Comments string `bson:"comments"`
    }

    type Listing struct {
        ID string `bson:"_id"`
        ListingURL string `bson:"listing_url"`
        Name string `bson:"name"`
        Summary string `bson:"summary"`
        Space string `bson:"space"`
        Description string `bson:"description"`
        NeighborhoodOverview string `bson:"neighborhood_overview"`
        Notes string `bson:"notes"`
        Transit string `bson:"transit"`
        Access string `bson:"access"`
        Interaction string `bson:"interaction"`
        HouseRules string `bson:"house_rules"`
        PropertyType string `bson:"property_type"`
        RoomType string `bson:"room_type"`
        BedType string `bson:"bed_type"`
        MinimumNights string `bson:"minimum_nights"`
        MaximumNights string `bson:"maximum_nights"`
        CancellationPolicy string `bson:"cancellation_policy"`
        LastScraped time.Time `bson:"last_scraped,omitempty"`
        CalendarLastScraped time.Time `bson:"calendar_last_scraped,omitempty"`
        FirstReview time.Time `bson:"first_review,omitempty"`
        LastReview time.Time `bson:"last_review,omitempty"`
        Accommodates int32 `bson:"accommodates"`
        Bedrooms int32 `bson:"bedrooms"`
        Beds int32 `bson:"beds"`
        NumberOfReviews int32 `bson:"number_of_reviews"`
        Bathrooms primitive.Decimal128 `bson:"bathrooms"`
        Amenities []string `bson:"amenities"`
        Price primitive.Decimal128 `bson:"price"`
        WeeklyPrice primitive.Decimal128 `bson:"weekly_price"`
        MonthlyPrice primitive.Decimal128 `bson:"monthly_price"`
        CleaningFee primitive.Decimal128 `bson:"cleaning_fee"`
        ExtraPeople primitive.Decimal128 `bson:"extra_people"`
        GuestsIncluded primitive.Decimal128 `bson:"guests_included"`
        Image Image `bson:"images"`
        Host Host `bson:"host"`
        Address Address `bson:"address"`
        Availability Availability `bson:"availability"`
        ReviewScores ReviewScores `bson:"review_scores"`
        Reviews []Review `bson:"reviews"`
        Embeddings []float32 `bson:"embeddings,omitempty"`
    }
  4. Return to the root directory.

    cd ../
  5. Create another file called generate-embeddings.go and paste the following code into it:

    generate-embeddings.go
    package main

    import (
        "context"
        "local-rag-mongodb/common" // Module that contains the models and GetEmbeddings function
        "log"
        "os"

        "github.com/joho/godotenv"
        "go.mongodb.org/mongo-driver/bson"
        "go.mongodb.org/mongo-driver/mongo"
        "go.mongodb.org/mongo-driver/mongo/options"
    )

    func main() {
        ctx := context.Background()

        if err := godotenv.Load(); err != nil {
            log.Println("no .env file found")
        }

        // Connect to your Atlas cluster
        uri := os.Getenv("ATLAS_CONNECTION_STRING")
        if uri == "" {
            log.Fatal("set your 'ATLAS_CONNECTION_STRING' environment variable.")
        }
        clientOptions := options.Client().ApplyURI(uri)
        client, err := mongo.Connect(ctx, clientOptions)
        if err != nil {
            log.Fatalf("failed to connect to the server: %v", err)
        }
        defer func() { _ = client.Disconnect(ctx) }()

        // Set the namespace
        coll := client.Database("sample_airbnb").Collection("listingsAndReviews")

        filter := bson.D{
            {"$and",
                bson.A{
                    bson.D{
                        {"$and",
                            bson.A{
                                bson.D{{"summary", bson.D{{"$exists", true}}}},
                                bson.D{{"summary", bson.D{{"$ne", ""}}}},
                            },
                        }},
                    bson.D{{"embeddings", bson.D{{"$exists", false}}}},
                }},
        }

        findOptions := options.Find().SetLimit(250)

        cursor, err := coll.Find(ctx, filter, findOptions)
        if err != nil {
            log.Fatalf("failed to retrieve data from the server: %v", err)
        }

        var listings []common.Listing
        if err = cursor.All(ctx, &listings); err != nil {
            log.Fatalf("failed to unmarshal retrieved docs to model objects: %v", err)
        }

        var summaries []string
        for _, listing := range listings {
            summaries = append(summaries, listing.Summary)
        }

        log.Println("Generating embeddings.")
        embeddings := common.GetEmbeddings(summaries)

        updateDocuments := make([]mongo.WriteModel, len(listings))
        for i := range updateDocuments {
            updateDocuments[i] = mongo.NewUpdateOneModel().
                SetFilter(bson.D{{"_id", listings[i].ID}}).
                SetUpdate(bson.D{{"$set", bson.D{{"embeddings", embeddings[i]}}}})
        }

        bulkWriteOptions := options.BulkWrite().SetOrdered(false)

        result, err := coll.BulkWrite(ctx, updateDocuments, bulkWriteOptions)
        if err != nil {
            log.Fatalf("failed to update documents: %v", err)
        }

        log.Printf("%d documents updated successfully.", result.MatchedCount)
    }

    In this example, we set a limit of 250 documents when generating embeddings. The process to generate embeddings for the more than 5000 documents in the collection is slow. If you want to change the number of documents you're generating embeddings for:

    • Change the number of documents: Adjust the number passed to SetLimit(250) in the Find() options.

    • Generate embeddings for all documents: Omit the findOptions argument in the Find() call.

  6. Run the following command to execute the code:

    go run generate-embeddings.go
    2024/10/10 15:49:23 Generating embeddings.
    2024/10/10 15:49:28 250 documents updated successfully.
1

Run the following command to pull the nomic-embed-text model from Ollama:

ollama pull nomic-embed-text
2

Create a file called OllamaModels.java and paste the following code.

This code defines the local Ollama embedding and chat models that you'll use in your project. We'll work with the chat model in a later step. You can adapt or create additional models as needed for your preferred setup.

This code also defines two methods to generate embeddings for a given input using the embedding model that you downloaded previously:

  • Multiple Inputs: The getEmbeddings method accepts an array of text inputs (List<String>), allowing you to create multiple embeddings in a single API call. The method converts the API-provided arrays of floats to BSON arrays of doubles for storing in your Atlas cluster.

  • Single Input: The getEmbedding method accepts a single String, which represents a query you want to make against your vector data. The method converts the API-provided array of floats to a BSON array of doubles to use when querying your collection.

OllamaModels.java
import dev.langchain4j.data.embedding.Embedding;
import dev.langchain4j.data.segment.TextSegment;
import dev.langchain4j.model.ollama.OllamaChatModel;
import dev.langchain4j.model.ollama.OllamaEmbeddingModel;
import dev.langchain4j.model.output.Response;
import org.bson.BsonArray;
import org.bson.BsonDouble;

import java.util.List;

import static java.time.Duration.ofSeconds;

public class OllamaModels {
    private static final String host = "http://localhost:11434";
    private static OllamaEmbeddingModel embeddingModel;
    private static OllamaChatModel chatModel;

    /**
     * Returns the Ollama embedding model used by the getEmbeddings() and getEmbedding() methods
     * to generate vector embeddings.
     */
    public static OllamaEmbeddingModel getEmbeddingModel() {
        if (embeddingModel == null) {
            embeddingModel = OllamaEmbeddingModel.builder()
                    .timeout(ofSeconds(10))
                    .modelName("nomic-embed-text")
                    .baseUrl(host)
                    .build();
        }
        return embeddingModel;
    }

    /**
     * Returns the Ollama chat model interface used by the createPrompt() method
     * to process queries and generate responses.
     */
    public static OllamaChatModel getChatModel() {
        if (chatModel == null) {
            chatModel = OllamaChatModel.builder()
                    .timeout(ofSeconds(25))
                    .modelName("mistral")
                    .baseUrl(host)
                    .build();
        }
        return chatModel;
    }

    /**
     * Takes an array of strings and returns a collection of BSON array embeddings
     * to store in the database.
     */
    public static List<BsonArray> getEmbeddings(List<String> texts) {
        List<TextSegment> textSegments = texts.stream()
                .map(TextSegment::from)
                .toList();
        Response<List<Embedding>> response = getEmbeddingModel().embedAll(textSegments);
        return response.content().stream()
                .map(e -> new BsonArray(
                        e.vectorAsList().stream()
                                .map(BsonDouble::new)
                                .toList()))
                .toList();
    }

    /**
     * Takes a single string and returns a BSON array embedding to
     * use in a vector query.
     */
    public static BsonArray getEmbedding(String text) {
        Response<Embedding> response = getEmbeddingModel().embed(text);
        return new BsonArray(
                response.content().vectorAsList().stream()
                        .map(BsonDouble::new)
                        .toList());
    }
}
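The two-method pattern above, a batch method for ingesting documents and a single-input method for queries, is independent of Java or Ollama. Here is a rough sketch of the same shape in Python, using a stand-in `fake_embed` function (hypothetical, for illustration only) in place of a real embedding model:

```python
from typing import Callable, List

def make_embedding_helpers(embed_batch: Callable[[List[str]], List[List[float]]]):
    """Build batch and single-input helpers around any batch embedder."""
    def get_embeddings(texts: List[str]) -> List[List[float]]:
        # Multiple inputs: one call produces one vector per input text,
        # with each component widened to a double-precision float.
        return [[float(x) for x in vec] for vec in embed_batch(texts)]

    def get_embedding(text: str) -> List[float]:
        # Single input: reuse the batch path and unwrap the only result.
        return get_embeddings([text])[0]

    return get_embeddings, get_embedding

# Stand-in embedder for demonstration only: maps each text to a tiny vector.
def fake_embed(texts):
    return [[len(t), t.count(" ")] for t in texts]

get_embeddings, get_embedding = make_embedding_helpers(fake_embed)
```

Batching the ingestion path matters in practice because it amortizes per-call overhead across many documents, while the query path only ever needs one vector at a time.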
3

Create a file named EmbeddingGenerator.java and paste the following code.

This code uses the getEmbeddings method and the MongoDB Java Sync Driver to do the following:

  1. Connect to your local Atlas deployment or Atlas cluster.

  2. Get a subset of documents from the sample_airbnb.listingsAndReviews collection that have a non-empty summary field.

    Note

    For demonstration purposes, we set a limit of 250 documents to reduce the processing time. You can adjust or remove this limit as needed to better suit your use case.

  3. Generate an embedding from each document's summary field using the getEmbeddings method that you defined previously.

  4. Update each document with a new embedding field that contains the corresponding embedding value.

EmbeddingGenerator.java
import com.mongodb.MongoException;
import com.mongodb.bulk.BulkWriteResult;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.MongoCursor;
import com.mongodb.client.MongoDatabase;
import com.mongodb.client.model.BulkWriteOptions;
import com.mongodb.client.model.Filters;
import com.mongodb.client.model.Projections;
import com.mongodb.client.model.UpdateOneModel;
import com.mongodb.client.model.Updates;
import com.mongodb.client.model.WriteModel;
import org.bson.BsonArray;
import org.bson.Document;
import org.bson.conversions.Bson;

import java.util.ArrayList;
import java.util.List;

public class EmbeddingGenerator {
    public static void main(String[] args) {
        String uri = System.getenv("ATLAS_CONNECTION_STRING");
        if (uri == null || uri.isEmpty()) {
            throw new RuntimeException("ATLAS_CONNECTION_STRING env variable is not set or is empty.");
        }

        // establish connection and set namespace
        try (MongoClient mongoClient = MongoClients.create(uri)) {
            MongoDatabase database = mongoClient.getDatabase("sample_airbnb");
            MongoCollection<Document> collection = database.getCollection("listingsAndReviews");

            // define parameters for the find() operation
            // NOTE: this example uses a limit to reduce processing time
            Bson projectionFields = Projections.fields(
                    Projections.include("_id", "summary"));
            Bson filterSummary = Filters.ne("summary", "");
            int limit = 250;

            try (MongoCursor<Document> cursor = collection
                    .find(filterSummary)
                    .projection(projectionFields)
                    .limit(limit)
                    .iterator()) {
                List<String> summaries = new ArrayList<>();
                List<String> documentIds = new ArrayList<>();
                while (cursor.hasNext()) {
                    Document document = cursor.next();
                    String summary = document.getString("summary");
                    String id = document.get("_id").toString();
                    summaries.add(summary);
                    documentIds.add(id);
                }

                // generate an embedding for the summary in each document
                // and add it to the document's 'embeddings' field
                System.out.println("Generating embeddings for " + summaries.size() + " documents.");
                System.out.println("This operation may take up to several minutes.");
                List<BsonArray> embeddings = OllamaModels.getEmbeddings(summaries);
                List<WriteModel<Document>> updateDocuments = new ArrayList<>();
                for (int j = 0; j < summaries.size(); j++) {
                    UpdateOneModel<Document> updateDoc = new UpdateOneModel<>(
                            Filters.eq("_id", documentIds.get(j)),
                            Updates.set("embeddings", embeddings.get(j)));
                    updateDocuments.add(updateDoc);
                }

                // bulk write the updated documents to the 'listingsAndReviews' collection
                int result = performBulkWrite(updateDocuments, collection);
                System.out.println("Added embeddings successfully to " + result + " documents.");
            }
        } catch (MongoException me) {
            throw new RuntimeException("Failed to connect to MongoDB", me);
        } catch (Exception e) {
            throw new RuntimeException("Operation failed: ", e);
        }
    }

    /**
     * Performs a bulk write operation on the specified collection.
     */
    private static int performBulkWrite(List<WriteModel<Document>> updateDocuments, MongoCollection<Document> collection) {
        if (updateDocuments.isEmpty()) {
            return 0;
        }

        BulkWriteResult result;
        try {
            BulkWriteOptions options = new BulkWriteOptions().ordered(false);
            result = collection.bulkWrite(updateDocuments, options);
            return result.getModifiedCount();
        } catch (MongoException me) {
            throw new RuntimeException("Failed to insert documents", me);
        }
    }
}
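The core bookkeeping in this step — pairing each document `_id` with its embedding and turning every pair into one update operation for a single bulk write — can be sketched independently of the driver. The `build_update_specs` helper below is illustrative, not part of the tutorial code:

```python
def build_update_specs(document_ids, embeddings):
    """Pair each document id with its embedding and build one
    update-one specification per document, mirroring the list of
    UpdateOneModel objects assembled in EmbeddingGenerator.java."""
    if len(document_ids) != len(embeddings):
        raise ValueError("each document needs exactly one embedding")
    return [
        {"filter": {"_id": doc_id}, "update": {"$set": {"embeddings": emb}}}
        for doc_id, emb in zip(document_ids, embeddings)
    ]

specs = build_update_specs(["a1", "b2"], [[0.1, 0.2], [0.3, 0.4]])
```

Submitting all of these specifications in one unordered bulk write, as the Java code does, lets the server apply the updates without a network round trip per document.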
4

Save and run the file. The output resembles:

Generating embeddings for 250 documents.
This operation may take up to several minutes.
Added embeddings successfully to 250 documents.
1

This example uses the mixedbread-ai/mxbai-embed-large-v1 model from the Hugging Face model hub. The simplest method to download the model files is to clone the repository using Git with Git Large File Storage. Hugging Face requires a user access token or Git over SSH to authenticate your request to clone the repository.

# Clone over HTTPS with your Hugging Face username and user access token:
git clone https://<your-hugging-face-username>:<your-hugging-face-user-access-token>@huggingface.co/mixedbread-ai/mxbai-embed-large-v1

# Or clone over SSH:
git clone git@hf.co:mixedbread-ai/mxbai-embed-large-v1

Tip

Git Large File Storage

The Hugging Face model files are large and require Git Large File Storage (git-lfs) to clone. If you see errors related to large file storage, ensure that you have installed git-lfs.

2

Get the path to the local model files on your machine. This is the parent directory that contains the git repository you just cloned. If you cloned the model repository inside the project directory you created for this tutorial, the parent directory path should resemble:

/Users/<username>/local-rag-mongodb

Check the model directory and make sure it contains an onnx directory that has a model_quantized.onnx file:

cd mxbai-embed-large-v1/onnx
ls
model.onnx model_fp16.onnx model_quantized.onnx
3
  1. Navigate back to the local-rag-mongodb parent directory.

  2. Create a file called get-embeddings.js, and paste the following code into it:

    get-embeddings.js
    import { env, pipeline } from '@xenova/transformers';

    // Function to generate embeddings for given data
    export async function getEmbeddings(data) {
        // Replace this path with the parent directory that contains the model files
        env.localModelPath = '/Users/<username>/local-rag-mongodb/';
        env.allowRemoteModels = false;

        const task = 'feature-extraction';
        const model = 'mxbai-embed-large-v1';
        const embedder = await pipeline(task, model);

        const results = await embedder(data, { pooling: 'mean', normalize: true });
        return Array.from(results.data);
    }

    Replace '/Users/<username>/local-rag-mongodb/' with the parent directory path from the prior step.

  3. Create another file called generate-embeddings.js and paste the following code into it:

    generate-embeddings.js
     1  import { MongoClient } from 'mongodb';
     2  import { getEmbeddings } from './get-embeddings.js';
     3
     4  async function run() {
     5      const client = new MongoClient(process.env.ATLAS_CONNECTION_STRING);
     6
     7      try {
     8          // Connect to your local MongoDB deployment
     9          await client.connect();
    10          const db = client.db("sample_airbnb");
    11          const collection = db.collection("listingsAndReviews");
    12
    13          const filter = { '$and': [
    14              { 'summary': { '$exists': true, '$ne': null } },
    15              { 'embeddings': { '$exists': false } }
    16          ]};
    17
    18          // This is a long-running operation for all docs in the collection,
    19          // so we limit the docs for this example
    20          const cursor = collection.find(filter).limit(50);
    21
    22          // To verify that you have the local embedding model configured properly,
    23          // try generating an embedding for one document
    24          const firstDoc = await cursor.next();
    25          if (!firstDoc) {
    26              console.log('No document found.');
    27              return;
    28          }
    29
    30          const firstDocEmbeddings = await getEmbeddings(firstDoc.summary);
    31          console.log(firstDocEmbeddings);
    32
    33          // After confirming you are successfully generating embeddings,
    34          // uncomment the following code to generate embeddings for all docs.
    35          /* cursor.rewind(); // Reset the cursor to process documents again
    36           * console.log("Generating embeddings for documents. Standby.");
    37           * let updatedDocCount = 0;
    38           *
    39           * for await (const doc of cursor) {
    40           *     const text = doc.summary;
    41           *     const embeddings = await getEmbeddings(text);
    42           *     await collection.updateOne({ "_id": doc._id },
    43           *         {
    44           *             "$set": {
    45           *                 "embeddings": embeddings
    46           *             }
    47           *         }
    48           *     );
    49           *     updatedDocCount += 1;
    50           * }
    51           * console.log("Count of documents updated: " + updatedDocCount);
    52           */
    53      } catch (err) {
    54          console.log(err.stack);
    55      }
    56      finally {
    57          await client.close();
    58      }
    59  }
    60  run().catch(console.dir);

    This code includes a few lines to test that you have correctly downloaded the model and are using the correct path. Run the following command to execute the code:

    node --env-file=.env generate-embeddings.js

    The output of the test embedding resembles:
    Tensor {
    dims: [ 1, 1024 ],
    type: 'float32',
    data: Float32Array(1024) [
    -0.01897735893726349, -0.001120976754464209, -0.021224822849035263,
    -0.023649735376238823, -0.03350808471441269, -0.0014186901971697807,
    -0.009617107920348644, 0.03344292938709259, 0.05424851179122925,
    -0.025904450565576553, 0.029770011082291603, -0.0006215018220245838,
    0.011056603863835335, -0.018984895199537277, 0.03985185548663139,
    -0.015273082070052624, -0.03193040192127228, 0.018376577645540237,
    -0.02236943319439888, 0.01433168537914753, 0.02085157483816147,
    -0.005689046811312437, -0.05541415512561798, -0.055907104164361954,
    -0.019112611189484596, 0.02196515165269375, 0.027313007041811943,
    -0.008618313819169998, 0.045496534556150436, 0.06271681934595108,
    -0.0028660669922828674, -0.02433634363114834, 0.02016191929578781,
    -0.013882477767765522, -0.025465600192546844, 0.0000950733374338597,
    0.018200192600488663, -0.010413561016321182, -0.002004098379984498,
    -0.058351870626211166, 0.01749623566865921, -0.013926318846642971,
    -0.00278360559605062, -0.010333008132874966, 0.004406726453453302,
    0.04118744656443596, 0.02210155501961708, -0.016340743750333786,
    0.004163357429206371, -0.018561601638793945, 0.0021984230261296034,
    -0.012378614395856857, 0.026662321761250496, -0.006476820446550846,
    0.001278138137422502, -0.010084952227771282, -0.055993322283029556,
    -0.015850437805056572, 0.015145729295909405, 0.07512971013784409,
    -0.004111358895897865, -0.028162647038698196, 0.023396577686071396,
    -0.01159974467009306, 0.021751703694462776, 0.006198467221111059,
    0.014084039255976677, -0.0003913900291081518, 0.006310020107775927,
    -0.04500332102179527, 0.017774192616343498, -0.018170733004808426,
    0.026185045018792152, -0.04488714039325714, -0.048510149121284485,
    0.015152698382735252, 0.012136898003518581, 0.0405895821750164,
    -0.024783289059996605, -0.05514788627624512, 0.03484730422496796,
    -0.013530988246202469, 0.0319477915763855, 0.04537525027990341,
    -0.04497901350259781, 0.009621822275221348, -0.013845544308423996,
    0.0046155862510204315, 0.03047163411974907, 0.0058857654221355915,
    0.005858785007148981, 0.01180865429341793, 0.02734190598130226,
    0.012322399765253067, 0.03992653638124466, 0.015777742490172386,
    0.017797520384192467, 0.02265017107129097, -0.018233606591820717,
    0.02064627595245838,
    ... 924 more items
    ],
    size: 1024
    }
  4. Optionally, after you have confirmed you are successfully generating embeddings with the local model, you can uncomment the code in lines 35-52 to generate embeddings for all the documents in the collection. Save the file.

    Then, run the command to execute the code:

    node --env-file=.env generate-embeddings.js

    The output resembles:
    [
    Tensor {
    dims: [ 1024 ],
    type: 'float32',
    data: Float32Array(1024) [
    -0.043243519961833954, 0.01316747535020113, -0.011639945209026337,
    -0.025046885013580322, 0.005129443947225809, -0.02003324404358864,
    0.005245734006166458, 0.10105721652507782, 0.05425914749503136,
    -0.010824322700500488, 0.021903572604060173, 0.048009492456912994,
    0.01291663944721222, -0.015903260558843613, -0.008034848608076572,
    -0.003592714900150895, -0.029337648302316666, 0.02282896265387535,
    -0.029112281277775764, 0.011099508963525295, -0.012238143011927605,
    -0.008351574651896954, -0.048714976757764816, 0.001015961286611855,
    0.02252192236483097, 0.04426417499780655, 0.03514830768108368,
    -0.02088250033557415, 0.06391220539808273, 0.06896235048770905,
    -0.015386332757771015, -0.019206153228878975, 0.015263230539858341,
    -0.00019019744649995118, -0.032121095806360245, 0.015855342149734497,
    0.05055809020996094, 0.004083932377398014, 0.026945054531097412,
    -0.0505746565759182, -0.009507855400443077, -0.012497996911406517,
    0.06249537691473961, -0.04026378318667412, 0.010749109089374542,
    0.016748877242207527, -0.0235306303948164, -0.03941794112324715,
    0.027474915608763695, -0.02181144617497921, 0.0026422827504575253,
    0.005104491952806711, 0.027314607053995132, 0.019283341243863106,
    0.005245842970907688, -0.018712762743234634, -0.08618085831403732,
    0.003314188914373517, 0.008071620017290115, 0.05356570705771446,
    -0.008000597357749939, 0.006983411032706499, -0.0070550404489040375,
    -0.043323490768671036, 0.03490140289068222, 0.03810165822505951,
    0.0406375490128994, -0.0032191979698836803, 0.01489361934363842,
    -0.01609957590699196, -0.006372962612658739, 0.03360277786850929,
    -0.014810526743531227, -0.00925799086689949, -0.01885424554347992,
    0.0182492695748806, 0.009002899751067162, -0.004713123198598623,
    -0.00846288911998272, -0.012471121735870838, -0.0080558517947793,
    0.0135461101308465, 0.03335557505488396, -0.0027410900220274925,
    -0.02145615592598915, 0.01378028653562069, 0.03708091005682945,
    0.03519297018647194, 0.014239554293453693, 0.02219904027879238,
    0.0015641176141798496, 0.02624501660466194, 0.022713981568813324,
    -0.004414170514792204, 0.026919621974229813, -0.002607459668070078,
    -0.04017219692468643, -0.003570320550352335, -0.022905709221959114,
    0.030657364055514336,
    ... 924 more items
    ],
    size: 1024
    }
    ]
    Generating embeddings for documents. Standby.
    Count of documents updated: 50
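The `pooling: 'mean', normalize: true` options passed to the embedder in get-embeddings.js condense the model's per-token vectors into a single unit-length sentence vector. As a minimal sketch of that post-processing, assuming the model returns one vector per token:

```python
import math

def mean_pool_and_normalize(token_vectors):
    """Average token-level vectors into a single vector, then scale it
    to unit length (L2 normalization)."""
    count = len(token_vectors)
    dims = len(token_vectors[0])
    pooled = [sum(vec[d] for vec in token_vectors) / count for d in range(dims)]
    norm = math.sqrt(sum(x * x for x in pooled))
    return [x / norm for x in pooled]

vector = mean_pool_and_normalize([[3.0, 0.0], [1.0, 0.0]])
```

Normalizing the stored vectors is a common convention for cosine-similarity search, since the similarity then depends only on the direction of the vectors.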
1

This code performs the following actions:

  • Connects to your local Atlas deployment or Atlas cluster and selects the sample_airbnb.listingsAndReviews collection.

  • Loads the mixedbread-ai/mxbai-embed-large-v1 model from the Hugging Face model hub and saves it locally. To learn more, see Downloading models.

  • Defines a function that uses the model to generate vector embeddings.

  • For a subset of documents in the collection:

    • Generates an embedding from the document's summary field.

    • Updates the document by creating a new field called embeddings that contains the embedding.

    from pymongo import MongoClient
    from sentence_transformers import SentenceTransformer

    # Connect to your local Atlas deployment or Atlas cluster
    client = MongoClient(ATLAS_CONNECTION_STRING)

    # Select the sample_airbnb.listingsAndReviews collection
    collection = client["sample_airbnb"]["listingsAndReviews"]

    # Load the embedding model (https://huggingface.co/mixedbread-ai/mxbai-embed-large-v1)
    model_path = "<model-path>"
    model = SentenceTransformer('mixedbread-ai/mxbai-embed-large-v1')
    model.save(model_path)
    model = SentenceTransformer(model_path)

    # Define a function to generate embeddings
    def get_embedding(text):
        return model.encode(text).tolist()

    # Filter for documents that have a summary field and no embeddings field
    filter = { '$and': [ { 'summary': { '$exists': True, '$ne': None } }, { 'embeddings': { '$exists': False } } ] }

    # Create embeddings for a subset of the collection
    updated_doc_count = 0
    for document in collection.find(filter).limit(50):
        text = document['summary']
        embedding = get_embedding(text)
        collection.update_one({ '_id': document['_id'] }, { "$set": { 'embeddings': embedding } }, upsert=True)
        updated_doc_count += 1
    print("Documents updated: {}".format(updated_doc_count))

    Documents updated: 50
2

Replace the <model-path> placeholder in the code with a local path where you want to save the model files. This path should resemble: /Users/<username>/local-rag-mongodb

3

This code might take several minutes to run. After it's finished, you can view your vector embeddings by connecting to your local deployment from mongosh or your application using your deployment's connection string. Then you can run read operations on the sample_airbnb.listingsAndReviews collection.

You can view your vector embeddings in the Atlas UI by navigating to the sample_airbnb.listingsAndReviews collection in your cluster and expanding the fields in a document.

Tip

You can convert the embeddings in the sample data to BSON vectors for efficient storage and ingestion of vectors in Atlas. To learn more, see how to convert native embeddings to BSON vectors.

To enable vector search on the sample_airbnb.listingsAndReviews collection, create an Atlas Vector Search index.

This tutorial walks you through how to create an Atlas Vector Search index programmatically with a supported MongoDB Driver or using the Atlas CLI. For information on other ways to create an Atlas Vector Search index, see How to Index Fields for Vector Search.

Note

To create an Atlas Vector Search index, you must have Project Data Access Admin or higher access to the Atlas project.

To create an Atlas Vector Search index for a collection using the MongoDB C# driver v3.1.0 or later, perform the following steps:

1

Add a new CreateVectorIndex() method in the file named MongoDBDataService.cs to define the search index:

MongoDBDataService.cs
namespace MyCompany.RAG.Local;

using MongoDB.Driver;
using MongoDB.Bson;

public class MongoDBDataService
{
    private static readonly string? ConnectionString = Environment.GetEnvironmentVariable("ATLAS_CONNECTION_STRING");
    private static readonly MongoClient Client = new MongoClient(ConnectionString);
    private static readonly IMongoDatabase Database = Client.GetDatabase("sample_airbnb");
    private static readonly IMongoCollection<BsonDocument> Collection = Database.GetCollection<BsonDocument>("listingsAndReviews");

    public List<BsonDocument>? GetDocuments()
    {
        // Method details...
    }

    public async Task<string> UpdateDocuments(Dictionary<string, float[]> embeddings)
    {
        // Method details...
    }

    public string CreateVectorIndex()
    {
        try
        {
            var searchIndexView = Collection.SearchIndexes;
            var name = "vector_index2";
            var type = SearchIndexType.VectorSearch;
            var definition = new BsonDocument
            {
                { "fields", new BsonArray
                    {
                        new BsonDocument
                        {
                            { "type", "vector" },
                            { "path", "embeddings" },
                            { "numDimensions", 768 },
                            { "similarity", "cosine" }
                        }
                    }
                }
            };
            var model = new CreateSearchIndexModel(name, type, definition);
            searchIndexView.CreateOne(model);
            Console.WriteLine($"New search index named {name} is building.");

            // Polling for index status
            Console.WriteLine("Polling to check if the index is ready. This may take up to a minute.");
            bool queryable = false;
            while (!queryable)
            {
                var indexes = searchIndexView.List();
                foreach (var index in indexes.ToEnumerable())
                {
                    if (index["name"] == name)
                    {
                        queryable = index["queryable"].AsBoolean;
                    }
                }
                if (!queryable)
                {
                    Thread.Sleep(5000);
                }
            }
            return $"{name} is ready for querying.";
        }
        catch (Exception e)
        {
            return $"Exception: {e.Message}";
        }
    }
}

This index definition specifies indexing the embeddings field in an index of the vectorSearch type for the sample_airbnb.listingsAndReviews collection. This field contains the embeddings created using the embedding model. The index definition specifies 768 vector dimensions and measures similarity using cosine.
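The `cosine` similarity named in this definition compares the angle between two vectors, ignoring their magnitudes. A minimal sketch of the measure:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 for identical
    directions, 0.0 for orthogonal vectors, -1.0 for opposite ones."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

same = cosine_similarity([1.0, 2.0], [2.0, 4.0])       # parallel vectors
orthogonal = cosine_similarity([1.0, 0.0], [0.0, 1.0])  # unrelated directions
```

Because only direction matters, cosine similarity pairs naturally with embedding models whose outputs are (or can be) normalized to unit length.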

2

Replace the code in your Program.cs with the following code to initialize the DataService and call the index creation method:

using MyCompany.RAG.Local;
var dataService = new MongoDBDataService();
var result = dataService.CreateVectorIndex();
Console.WriteLine(result);
3

Save the file, and then compile and run your project to create the index:

dotnet run MyCompany.RAG.Local.csproj

To create an Atlas Vector Search index for a collection using the MongoDB Go driver v1.16.0 or later, perform the following steps:

1

Create a file named vector-index.go and paste the following code in the file:

vector-index.go
package main
import (
"context"
"fmt"
"log"
"os"
"time"
"github.com/joho/godotenv"
"go.mongodb.org/mongo-driver/bson"
"go.mongodb.org/mongo-driver/mongo"
"go.mongodb.org/mongo-driver/mongo/options"
)
func main() {
ctx := context.Background()
if err := godotenv.Load(); err != nil {
log.Println("no .env file found")
}
// Connect to your Atlas cluster
uri := os.Getenv("ATLAS_CONNECTION_STRING")
if uri == "" {
log.Fatal("set your 'ATLAS_CONNECTION_STRING' environment variable.")
}
clientOptions := options.Client().ApplyURI(uri)
client, err := mongo.Connect(ctx, clientOptions)
if err != nil {
log.Fatalf("failed to connect to the server: %v", err)
}
defer func() { _ = client.Disconnect(ctx) }()
// Set the namespace
coll := client.Database("sample_airbnb").Collection("listingsAndReviews")
indexName := "vector_index"
opts := options.SearchIndexes().SetName(indexName).SetType("vectorSearch")
type vectorDefinitionField struct {
Type string `bson:"type"`
Path string `bson:"path"`
NumDimensions int `bson:"numDimensions"`
Similarity string `bson:"similarity"`
}
type vectorDefinition struct {
Fields []vectorDefinitionField `bson:"fields"`
}
indexModel := mongo.SearchIndexModel{
Definition: vectorDefinition{
Fields: []vectorDefinitionField{{
Type: "vector",
Path: "embeddings",
NumDimensions: 768,
Similarity: "cosine"}},
},
Options: opts,
}
log.Println("Creating the index.")
searchIndexName, err := coll.SearchIndexes().CreateOne(ctx, indexModel)
if err != nil {
log.Fatalf("failed to create the search index: %v", err)
}
// Await the creation of the index.
log.Println("Polling to confirm successful index creation.")
log.Println("NOTE: This may take up to a minute.")
searchIndexes := coll.SearchIndexes()
var doc bson.Raw
for doc == nil {
cursor, err := searchIndexes.List(ctx, options.SearchIndexes().SetName(searchIndexName))
if err != nil {
fmt.Errorf("failed to list search indexes: %w", err)
}
if !cursor.Next(ctx) {
break
}
name := cursor.Current.Lookup("name").StringValue()
queryable := cursor.Current.Lookup("queryable").Boolean()
if name == searchIndexName && queryable {
doc = cursor.Current
} else {
time.Sleep(5 * time.Second)
}
}
log.Println("Name of Index Created: " + searchIndexName)
}

This index definition specifies indexing the embeddings field in an index of the vectorSearch type for the sample_airbnb.listingsAndReviews collection. This field contains the embeddings created using the embedding model. The index definition specifies 768 vector dimensions and measures similarity using cosine.

2

Save the file, and then run the following command in your terminal to execute the code:

go run vector-index.go

To create an Atlas Vector Search index for a collection using the MongoDB Java driver v5.2.0 or later, perform the following steps:

1

Create a file named VectorIndex.java and paste the following code.

This code calls the createSearchIndexes() method on your MongoCollection object, which creates an Atlas Vector Search index on your collection using the following index definition:

  • Index the embeddings field in a vectorSearch index type for the sample_airbnb.listingsAndReviews collection. This field contains the embeddings created using the embedding model.

  • Enforce 768 vector dimensions and measure similarity between vectors using cosine.

VectorIndex.java
import com.mongodb.MongoException;
import com.mongodb.client.ListSearchIndexesIterable;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.MongoCursor;
import com.mongodb.client.MongoDatabase;
import com.mongodb.client.model.SearchIndexModel;
import com.mongodb.client.model.SearchIndexType;
import org.bson.Document;
import org.bson.conversions.Bson;

import java.util.Collections;
import java.util.List;

public class VectorIndex {
    public static void main(String[] args) {
        String uri = System.getenv("ATLAS_CONNECTION_STRING");
        if (uri == null || uri.isEmpty()) {
            throw new IllegalStateException("ATLAS_CONNECTION_STRING env variable is not set or is empty.");
        }

        // establish connection and set namespace
        try (MongoClient mongoClient = MongoClients.create(uri)) {
            MongoDatabase database = mongoClient.getDatabase("sample_airbnb");
            MongoCollection<Document> collection = database.getCollection("listingsAndReviews");

            // define the index details for the index model
            String indexName = "vector_index";
            Bson definition = new Document(
                    "fields",
                    Collections.singletonList(
                            new Document("type", "vector")
                                    .append("path", "embeddings")
                                    .append("numDimensions", 768)
                                    .append("similarity", "cosine")));
            SearchIndexModel indexModel = new SearchIndexModel(
                    indexName,
                    definition,
                    SearchIndexType.vectorSearch());

            // create the index using the defined model
            try {
                List<String> result = collection.createSearchIndexes(Collections.singletonList(indexModel));
                System.out.println("Successfully created a vector index named: " + result);
            } catch (Exception e) {
                throw new RuntimeException(e);
            }

            // wait for Atlas to build the index and make it queryable
            System.out.println("Polling to confirm the index has completed building.");
            System.out.println("It may take up to a minute for the index to build before you can query using it.");
            waitForIndexReady(collection, indexName);
        } catch (MongoException me) {
            throw new RuntimeException("Failed to connect to MongoDB ", me);
        } catch (Exception e) {
            throw new RuntimeException("Operation failed: ", e);
        }
    }

    /**
     * Polls the collection to check whether the specified index is ready to query.
     */
    public static void waitForIndexReady(MongoCollection<Document> collection, String indexName) throws InterruptedException {
        ListSearchIndexesIterable<Document> searchIndexes = collection.listSearchIndexes();
        while (true) {
            try (MongoCursor<Document> cursor = searchIndexes.iterator()) {
                if (!cursor.hasNext()) {
                    break;
                }
                Document current = cursor.next();
                String name = current.getString("name");
                boolean queryable = current.getBoolean("queryable");
                if (name.equals(indexName) && queryable) {
                    System.out.println(indexName + " index is ready to query");
                    return;
                } else {
                    Thread.sleep(500);
                }
            }
        }
    }
}
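The polling pattern in waitForIndexReady() — list the search indexes, return once the named index reports queryable, otherwise sleep and retry — is the same across every driver in this tutorial. It can be sketched generically in Python, where the `list_indexes` callable is a stand-in for the driver's listSearchIndexes() call:

```python
import time

def wait_for_index_ready(list_indexes, index_name, poll_interval=0.0):
    """Poll a callable that returns index-status documents until the
    named index reports queryable=True."""
    while True:
        for index in list_indexes():
            if index.get("name") == index_name and index.get("queryable"):
                return index
        time.sleep(poll_interval)

# Stand-in for the driver call: not queryable on the first poll, then ready.
statuses = iter([
    [{"name": "vector_index", "queryable": False}],
    [{"name": "vector_index", "queryable": True}],
])
ready = wait_for_index_ready(lambda: next(statuses), "vector_index")
```

Polling is needed because creating the index returns immediately while Atlas builds it in the background; queries against the index only succeed once it reports queryable.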
2

Save and run the file. The output resembles:

Successfully created a vector index named: [vector_index]
Polling to confirm the index has completed building.
It may take up to a minute for the index to build before you can query using it.
vector_index index is ready to query

To create an Atlas Vector Search index for a collection using the MongoDB Node.js driver v6.6.0 or later, perform the following steps:

1

Create a file named vector-index.js and paste the following code in the file:

vector-index.js
import { MongoClient } from 'mongodb';

// Connect to your Atlas deployment
const client = new MongoClient(process.env.ATLAS_CONNECTION_STRING);

async function run() {
    try {
        const database = client.db("sample_airbnb");
        const collection = database.collection("listingsAndReviews");

        // Define your Atlas Vector Search index
        const index = {
            name: "vector_index",
            type: "vectorSearch",
            definition: {
                "fields": [
                    {
                        "type": "vector",
                        "numDimensions": 1024,
                        "path": "embeddings",
                        "similarity": "cosine"
                    }
                ]
            }
        }

        // Call the method to create the index
        const result = await collection.createSearchIndex(index);
        console.log(result);
    } finally {
        await client.close();
    }
}
run().catch(console.dir);

This index definition specifies indexing the embeddings field in an index of the vectorSearch type for the sample_airbnb.listingsAndReviews collection. This field contains the embeddings created using the embedding model. The index definition specifies 1024 vector dimensions and measures similarity using cosine.

2

Save the file, and then run the following command in your terminal to execute the code:

node --env-file=.env vector-index.js

To create an Atlas Vector Search index for a collection using the PyMongo driver v4.7 or later, paste and run the following code in your notebook:

from pymongo.operations import SearchIndexModel

# Create your index model, then create the search index
search_index_model = SearchIndexModel(
    definition = {
        "fields": [
            {
                "type": "vector",
                "numDimensions": 1024,
                "path": "embeddings",
                "similarity": "cosine"
            }
        ]
    },
    name = "vector_index",
    type = "vectorSearch"
)
collection.create_search_index(model=search_index_model)

This index definition specifies indexing the embeddings field in an index of the vectorSearch type for the sample_airbnb.listingsAndReviews collection. This field contains the embeddings created using the embedding model. The index definition specifies 1024 vector dimensions and measures similarity using cosine.
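The numDimensions value must match the length of the vectors you stored: mxbai-embed-large-v1 produces 1024-dimensional embeddings, while nomic-embed-text (used in the Ollama-based sections) produces 768-dimensional ones. A small guard like the following illustrative helper (not part of the tutorial code) can catch a mismatch before you create the index:

```python
def check_dimensions(embedding, num_dimensions):
    """Confirm a stored embedding matches the index's numDimensions."""
    if len(embedding) != num_dimensions:
        raise ValueError(
            f"embedding has {len(embedding)} dimensions, "
            f"but the index expects {num_dimensions}"
        )
    return True

# A 1024-dimensional vector matches the index definition above.
check_dimensions([0.0] * 1024, 1024)
```

A mismatched dimension count is one of the most common reasons a vector query returns no results, so it is worth checking before indexing.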

To create an Atlas Vector Search index using the Atlas CLI, perform the following steps:

1

Create a file named vector-index.json and paste the following index definition in the file:

vector-index.json
{
    "database": "sample_airbnb",
    "collectionName": "listingsAndReviews",
    "type": "vectorSearch",
    "name": "vector_index",
    "fields": [
        {
            "type": "vector",
            "path": "embeddings",
            "numDimensions": 768,
            "similarity": "cosine"
        }
    ]
}

This index definition specifies the following:

  • Index the embeddings field in a vectorSearch index type for the sample_airbnb.listingsAndReviews collection. This field contains the embeddings created using the embedding model.

  • Enforce 768 vector dimensions and measure similarity between vectors using cosine.

2

Save the file in your project directory, and then run the following command in your terminal, replacing <path-to-file> with the path to the vector-index.json file that you created.

atlas deployments search indexes create --file <path-to-file>

For example, your path might resemble: /Users/<username>/local-rag-mongodb/vector-index.json.

1

Create a file named vector-index.json and paste the following index definition in the file.

This index definition indexes the embeddings field, which contains the embeddings created with the embedding model, in an index of type vectorSearch for the sample_airbnb.listingsAndReviews collection. The definition enforces 1024 vector dimensions and measures similarity between vectors using cosine.

{
  "database": "sample_airbnb",
  "collectionName": "listingsAndReviews",
  "type": "vectorSearch",
  "name": "vector_index",
  "fields": [
    {
      "type": "vector",
      "path": "embeddings",
      "numDimensions": 1024,
      "similarity": "cosine"
    }
  ]
}
2

Save the file in your project directory, and then run the following command in your terminal, replacing <path-to-file> with the path to the vector-index.json file that you created.

atlas deployments search indexes create --file <path-to-file>

This path should resemble: /Users/<username>/local-rag-mongodb/vector-index.json.
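Because the index enforces 1024 dimensions, documents whose embeddings field has a different length won't match queries against this index. A small guard like the following can catch mismatches before you write documents (a sketch in plain Go; the validateEmbedding helper is hypothetical, not part of the tutorial code):

```go
package main

import "fmt"

// numDimensions must match the length of every embedding you store;
// the value here mirrors the index definition above.
const numDimensions = 1024

// validateEmbedding reports an error when an embedding's length does
// not match the dimensions declared in the index definition.
func validateEmbedding(embedding []float32) error {
	if len(embedding) != numDimensions {
		return fmt.Errorf("embedding has %d dimensions, index expects %d",
			len(embedding), numDimensions)
	}
	return nil
}

func main() {
	good := make([]float32, 1024)
	bad := make([]float32, 768)
	fmt.Println(validateEmbedding(good)) // <nil>
	fmt.Println(validateEmbedding(bad))
}
```

A check like this is most useful when switching embedding models, since each model has a fixed output size and the index dimensions must change with it.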


This section demonstrates a sample RAG implementation that you can run locally using Atlas Vector Search and Ollama.

1
  1. Add a new PerformVectorQuery() method in the file named MongoDBDataService.cs:

    MongoDBDataService.cs
    namespace MyCompany.RAG.Local;

    using MongoDB.Driver;
    using MongoDB.Bson;

    public class MongoDBDataService
    {
        private static readonly string? ConnectionString = Environment.GetEnvironmentVariable("ATLAS_CONNECTION_STRING");
        private static readonly MongoClient Client = new MongoClient(ConnectionString);
        private static readonly IMongoDatabase Database = Client.GetDatabase("sample_airbnb");
        private static readonly IMongoCollection<BsonDocument> Collection = Database.GetCollection<BsonDocument>("listingsAndReviews");

        public List<BsonDocument>? GetDocuments()
        {
            // Method details...
        }

        public async Task<string> UpdateDocuments(Dictionary<string, float[]> embeddings)
        {
            // Method details...
        }

        public string CreateVectorIndex()
        {
            // Method details...
        }

        public List<BsonDocument>? PerformVectorQuery(float[] vector)
        {
            var vectorSearchStage = new BsonDocument
            {
                {
                    "$vectorSearch",
                    new BsonDocument
                    {
                        { "index", "vector_index" },
                        { "path", "embeddings" },
                        { "queryVector", new BsonArray(vector) },
                        { "exact", true },
                        { "limit", 5 }
                    }
                }
            };
            var projectStage = new BsonDocument
            {
                {
                    "$project",
                    new BsonDocument
                    {
                        { "_id", 0 },
                        { "summary", 1 },
                        { "listing_url", 1 },
                        { "score",
                            new BsonDocument
                            {
                                { "$meta", "vectorSearchScore" }
                            }
                        }
                    }
                }
            };
            var pipeline = new[] { vectorSearchStage, projectStage };
            return Collection.Aggregate<BsonDocument>(pipeline).ToList();
        }
    }

    This code performs a vector query on your local Atlas deployment or your Atlas cluster.

  2. Create another file called PerformTestQuery.cs and paste the following code into it:

    PerformTestQuery.cs
    namespace MyCompany.RAG.Local;

    public class PerformTestQuery
    {
        private readonly MongoDBDataService _dataService = new();
        private readonly OllamaAIService _ollamaAiService = new();

        public async Task<string> GetQueryResults(string question)
        {
            // Get the vector embedding for the query
            var query = question;
            var queryEmbedding = await _ollamaAiService.GetEmbedding(query);

            // Query the vector database for applicable query results
            var matchingDocuments = _dataService.PerformVectorQuery(queryEmbedding);

            // Construct a string from the query results for performing QA with the LLM
            var sb = new System.Text.StringBuilder();
            if (matchingDocuments != null)
            {
                foreach (var doc in matchingDocuments)
                {
                    sb.AppendLine($"Summary: {doc.GetValue("summary").ToString()}");
                    sb.AppendLine($"Listing URL: {doc.GetValue("listing_url").ToString()}");
                }
            }
            else
            {
                return "No matching documents found.";
            }
            return sb.ToString();
        }
    }

    This code contains the logic to:

    • Define an embedding for the query.

    • Retrieve matching documents from the MongoDBDataService.

    • Construct a string containing the "Summary" and "Listing URL" from each document to pass on to the LLM for summarizing.

  3. Run a test query to confirm you're getting the expected results.

    Replace the code in Program.cs with the following code:

    Program.cs
    using MyCompany.RAG.Local;

    var query = "beach house";
    var queryCoordinator = new PerformTestQuery();
    var result = await queryCoordinator.GetQueryResults(query);
    Console.WriteLine(result);
  4. Save the file, and then compile and run your project to test that you get the expected query results:

    dotnet run MyCompany.RAG.Local.csproj
    Summary: "Lani Beach House" Aloha - Please do not reserve until reading about the State Tax in "Other Things to Note" section. Please do not reserve unless you agree to pay taxes to Hawaii Beach Homes directly. If you have questions, please inquire before booking. The home has been completely redecorated in a luxurious island style: vaulted ceilings, skylights, granite counter tops, stainless steel appliances and a gourmet kitchen are just some of the the features. All bedrooms have ocean views
    Listing URL: https://www.airbnb.com/rooms/11553333
    Summary: This peaceful house in North Bondi is 300m to the beach and a minute's walk to cafes and bars. With 3 bedrooms, (can sleep up to 8) it is perfect for families, friends and pets. The kitchen was recently renovated and a new lounge and chairs installed. The house has a peaceful, airy, laidback vibe - a perfect beach retreat. Longer-term bookings encouraged. Parking for one car. A parking permit for a second car can also be obtained on request.
    Listing URL: https://www.airbnb.com/rooms/10423504
    Summary: There are 2 bedrooms and a living room in the house. 1 Bathroom. 1 Kitchen. Friendly neighbourhood. Close to sea side and Historical places.
    Listing URL: https://www.airbnb.com/rooms/10488837
    Summary: 4 Bedroom Country Beach House w/ option to add a separate studio unit- total of 5 bedrooms/2.5 baths at an additional cost. 27 girl steps to white sand beach & infamous Alligator Pond. Private road, NO highway to cross! Safe beach for children & seniors. Convenient! For pricing to add on additional Studio unit, click on our profile pic and input your dates for quote and details!
    Listing URL: https://www.airbnb.com/rooms/12906000
    Summary: Ocean Living! Secluded Secret Beach! Less than 20 steps to the Ocean! This spacious 4 Bedroom and 4 Bath house has all you need for your family or group. Perfect for Family Vacations and executive retreats. We are in a gated beachfront estate, with lots of space for your activities.
    Listing URL: https://www.airbnb.com/rooms/10317142
2

Run the following command to pull the generative model:

ollama pull mistral
3
  1. Add some new static members to your OllamaAIService class in OllamaAIService.cs, for use in a new SummarizeAnswer async method:

    OllamaAIService.cs
    namespace MyCompany.RAG.Local;

    using Microsoft.Extensions.AI;

    public class OllamaAIService
    {
        private static readonly Uri OllamaUri = new("http://localhost:11434/");
        private static readonly string EmbeddingModelName = "nomic-embed-text";
        private static readonly OllamaEmbeddingGenerator EmbeddingGenerator = new OllamaEmbeddingGenerator(OllamaUri, EmbeddingModelName);
        private static readonly string ChatModelName = "mistral";
        private static readonly OllamaChatClient ChatClient = new OllamaChatClient(OllamaUri, ChatModelName);

        public async Task<float[]> GetEmbedding(string text)
        {
            // Method details...
        }

        public async Task<string> SummarizeAnswer(string context)
        {
            string question = "Can you recommend me a few AirBnBs that are beach houses? Include a link to the listings.";
            string prompt = $"""
                Use the following pieces of context to answer the question at the end.
                Context: {context}
                Question: {question}
                """;
            ChatCompletion response = await ChatClient.CompleteAsync(prompt, new ChatOptions { MaxOutputTokens = 400 });
            return response.ToString();
        }
    }

    This prompts the LLM and returns the response. The generated response might vary.

  2. Define a new PerformQuestionAnswer class to:

    • Define an embedding for the query.

    • Retrieve matching documents from the MongoDBDataService.

    • Use the LLM to summarize the response.

    PerformQuestionAnswer.cs
    namespace MyCompany.RAG.Local;

    public class PerformQuestionAnswer
    {
        private readonly MongoDBDataService _dataService = new();
        private readonly OllamaAIService _ollamaAiService = new();

        public async Task<string> SummarizeResults(string question)
        {
            // Get the vector embedding for the query
            var query = question;
            var queryEmbedding = await _ollamaAiService.GetEmbedding(query);

            // Query the vector database for applicable query results
            var matchingDocuments = _dataService.PerformVectorQuery(queryEmbedding);

            // Construct a string from the query results for performing QA with the LLM
            var sb = new System.Text.StringBuilder();
            if (matchingDocuments != null)
            {
                foreach (var doc in matchingDocuments)
                {
                    sb.AppendLine($"Summary: {doc.GetValue("summary").ToString()}");
                    sb.AppendLine($"Listing URL: {doc.GetValue("listing_url").ToString()}");
                }
            }
            else
            {
                return "No matching documents found.";
            }
            return await _ollamaAiService.SummarizeAnswer(sb.ToString());
        }
    }
  3. Replace the contents of Program.cs with a new block to perform the task:

    Program.cs
    using MyCompany.RAG.Local;

    var qaTaskCoordinator = new PerformQuestionAnswer();
    const string query = "beach house";
    var results = await qaTaskCoordinator.SummarizeResults(query);
    Console.WriteLine(results);
  4. Save the file, and then compile and run your project to complete your RAG implementation:

    dotnet run MyCompany.RAG.Local.csproj
    Based on the context provided, here are some Airbnb listings for beach houses that you might find interesting:
    1. Lani Beach House (Hawaii) - [Link](https://www.airbnb.com/rooms/11553333)
    2. Peaceful North Bondi House (Australia) - [Link](https://www.airbnb.com/rooms/10423504)
    3. Ocean Living! Secluded Secret Beach! (Florida, USA) - [Link](https://www.airbnb.com/rooms/10317142)
    4. Gorgeous Home just off the main road (California, USA) - [Link](https://www.airbnb.com/rooms/11719579)
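The SummarizeAnswer method above builds its prompt by interpolating the retrieved context ahead of the question, so the LLM answers from your documents rather than from its training data alone. Stripped of the LLM call, the template reduces to plain string assembly, sketched here in Go with a hypothetical buildPrompt helper:

```go
package main

import "fmt"

// buildPrompt mirrors the prompt template used in SummarizeAnswer:
// retrieved context first, then the user's question.
func buildPrompt(context, question string) string {
	return fmt.Sprintf(
		"Use the following pieces of context to answer the question at the end.\nContext: %s\nQuestion: %s",
		context, question)
}

func main() {
	// Hypothetical retrieved context and question.
	context := "Summary: Beach house with ocean views.\nListing URL: https://www.airbnb.com/rooms/0000000"
	question := "Can you recommend a beach house?"
	fmt.Println(buildPrompt(context, question))
}
```

Keeping the template separate from retrieval makes it easy to experiment with prompt wording without touching the vector query code.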

This section demonstrates a sample RAG implementation that you can run locally using Atlas Vector Search and Ollama.

1
  1. Navigate to the common directory.

    cd common
  2. Create a file called retrieve-documents.go and paste the following code into it:

    retrieve-documents.go
    package common

    import (
        "context"
        "log"
        "os"

        "github.com/joho/godotenv"
        "go.mongodb.org/mongo-driver/bson"
        "go.mongodb.org/mongo-driver/mongo"
        "go.mongodb.org/mongo-driver/mongo/options"
    )

    type Document struct {
        Summary    string  `bson:"summary"`
        ListingURL string  `bson:"listing_url"`
        Score      float64 `bson:"score"`
    }

    func RetrieveDocuments(query string) []Document {
        ctx := context.Background()

        if err := godotenv.Load(); err != nil {
            log.Println("no .env file found")
        }

        // Connect to your Atlas cluster
        uri := os.Getenv("ATLAS_CONNECTION_STRING")
        if uri == "" {
            log.Fatal("set your 'ATLAS_CONNECTION_STRING' environment variable.")
        }
        clientOptions := options.Client().ApplyURI(uri)
        client, err := mongo.Connect(ctx, clientOptions)
        if err != nil {
            log.Fatalf("failed to connect to the server: %v", err)
        }
        defer func() { _ = client.Disconnect(ctx) }()

        // Set the namespace
        coll := client.Database("sample_airbnb").Collection("listingsAndReviews")

        var array []string
        array = append(array, query)
        queryEmbedding := GetEmbeddings(array)

        vectorSearchStage := bson.D{
            {"$vectorSearch", bson.D{
                {"index", "vector_index"},
                {"path", "embeddings"},
                {"queryVector", queryEmbedding[0]},
                {"exact", true},
                {"limit", 5},
            }},
        }
        projectStage := bson.D{
            {"$project", bson.D{
                {"_id", 0},
                {"summary", 1},
                {"listing_url", 1},
                {"score", bson.D{{"$meta", "vectorSearchScore"}}},
            }},
        }

        cursor, err := coll.Aggregate(ctx, mongo.Pipeline{vectorSearchStage, projectStage})
        if err != nil {
            log.Fatalf("failed to retrieve data from the server: %v", err)
        }

        var results []Document
        if err = cursor.All(ctx, &results); err != nil {
            log.Fatalf("failed to unmarshal retrieved docs to model objects: %v", err)
        }

        return results
    }

    This code performs a vector query on your local Atlas deployment or your Atlas cluster.
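The `"exact": true` option in this pipeline requests an exact nearest-neighbor (ENN) search: the query embedding is scored against every indexed vector and only the top `limit` results are returned. A minimal in-memory sketch of that semantics follows (plain Go, hypothetical data; vectors are assumed normalized, so cosine similarity reduces to a dot product):

```go
package main

import (
	"fmt"
	"sort"
)

type scoredDoc struct {
	Summary string
	Score   float64
}

// exactSearch mimics "exact": true with "limit": k: score every
// candidate against the query, sort by score, and keep the k best.
func exactSearch(query []float64, docs map[string][]float64, k int) []scoredDoc {
	results := make([]scoredDoc, 0, len(docs))
	for summary, vec := range docs {
		var dot float64
		for i := range query {
			dot += query[i] * vec[i]
		}
		results = append(results, scoredDoc{summary, dot})
	}
	sort.Slice(results, func(i, j int) bool { return results[i].Score > results[j].Score })
	if len(results) > k {
		results = results[:k]
	}
	return results
}

func main() {
	// Hypothetical normalized embeddings.
	docs := map[string][]float64{
		"beach house": {1, 0},
		"city loft":   {0, 1},
		"ocean villa": {0.8, 0.6},
	}
	for _, d := range exactSearch([]float64{1, 0}, docs, 2) {
		fmt.Printf("%s: %.2f\n", d.Summary, d.Score)
	}
}
```

ENN is accurate but scans every vector, which is fine for a small collection like this sample; approximate (ANN) search trades a little recall for speed on larger collections.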

  3. Run a test query to confirm you're getting the expected results. Move back to the project root directory.

    cd ../
  4. Create a new file called test-query.go, and paste the following code into it:

    test-query.go
    package main

    import (
        "fmt"
        "local-rag-mongodb/common" // Module that contains the RetrieveDocuments function
        "log"
        "strings"
    )

    func main() {
        query := "beach house"
        matchingDocuments := common.RetrieveDocuments(query)
        if matchingDocuments == nil {
            log.Fatal("No documents matched the query.\n")
        }

        var textDocuments strings.Builder
        for _, doc := range matchingDocuments {
            // Print the contents of the matching documents for verification
            fmt.Printf("Summary: %v\n", doc.Summary)
            fmt.Printf("Listing URL: %v\n", doc.ListingURL)
            fmt.Printf("Score: %v\n", doc.Score)

            // Build a single text string to use as the context for the QA
            textDocuments.WriteString("Summary: ")
            textDocuments.WriteString(doc.Summary)
            textDocuments.WriteString("\n")
            textDocuments.WriteString("Listing URL: ")
            textDocuments.WriteString(doc.ListingURL)
            textDocuments.WriteString("\n")
        }

        fmt.Printf("\nThe constructed context for the QA follows:\n\n")
        fmt.Print(textDocuments.String())
    }
  5. Run the following code to execute the query:

    go run test-query.go
    Summary: "Lani Beach House" Aloha - Please do not reserve until reading about the State Tax in "Other Things to Note" section. Please do not reserve unless you agree to pay taxes to Hawaii Beach Homes directly. If you have questions, please inquire before booking. The home has been completely redecorated in a luxurious island style: vaulted ceilings, skylights, granite counter tops, stainless steel appliances and a gourmet kitchen are just some of the the features. All bedrooms have ocean views