Atlas Vector Search for Mistral 7B

Hi there!

I’ve been trying to implement Atlas Vector Search to optimize my Mistral LLM. I’ve been following the MongoDB tutorial "RAG with Atlas Vector Search, LangChain, and OpenAI". Instead of using the OpenAI embeddings I used HuggingFace embeddings, which I think should not be a problem. Moreover, instead of the GPT-3.5 OpenAI model I used a local Mistral 7B model.

It all seems to work fine, but the LLM is not receiving any context, so I’m getting wrong outputs for my inputs. I’ve checked in MongoDB Compass that I created the embeddings collection correctly.

When I run the code with as_output = docs[0].page_content, I receive an error caused by not finding any similarity results (I’m using the same *.txt files from the tutorial as loaders).

To sum up, I believe my main problem is that my vectorStore is not working properly and I don’t understand why. I attach a small piece of code in case someone notices a mistake:

def query_data(query):

    # Retrieve the most similar documents from the vector store
    docs = vectorStore.similarity_search(query, k=4)
    as_output = docs[0].page_content

    # Load the local Mistral 7B model through CTransformers
    llm = CTransformers(model="./mistral-7b-instruct-v0.1.Q4_0.gguf",
                        model_type="llama",
                        config={'max_new_tokens': 100, 'temperature': 0.01})

    retriever = vectorStore.as_retriever()

    QA_CHAIN_PROMPT = PromptTemplate.from_template(template)

    qa = RetrievalQA.from_chain_type(llm, chain_type="stuff", retriever=retriever,
                                     chain_type_kwargs={'prompt': QA_CHAIN_PROMPT})

    # Run the RAG chain on the query
    retriever_output = qa.run(query)

    return as_output, retriever_output

I have also tried with a Llama-2 model and I get the same error. If anyone can help I would be very thankful. Thank you, guys!

Hi Victor,

Thanks for posting your issue on the community forums.

Could you confirm that the Atlas Vector Search index JSON definition was updated after you changed the embedding model from OpenAI to Hugging Face?

More specifically, check the JSON definition of the Atlas Vector Search index and ensure that the numDimensions field matches the embedding dimension of the Hugging Face embedding model.
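As a quick sanity check, here is a minimal sketch of the comparison (check_dimensions is a hypothetical helper, not part of any library):

```python
def check_dimensions(index_definition, sample_embedding):
    """Return True if numDimensions in the vector field matches the embedding length."""
    vector_field = next(f for f in index_definition["fields"]
                        if f.get("type") == "vector")
    return vector_field["numDimensions"] == len(sample_embedding)

# A 384-dimensional index definition checked against a 384-element vector
index_def = {"fields": [{"numDimensions": 384, "path": "embedding",
                         "similarity": "cosine", "type": "vector"}]}
print(check_dimensions(index_def, [0.0] * 384))  # True
```

In practice the sample embedding would come from something like embeddings.embed_query("test"), so the check uses the exact model you configured.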

Hi Richmond,

Thanks for the fast answer, I appreciate it!

Yes, it is set to 384. I have checked it and it agrees with the embedding array field that the HuggingFace model created in my MongoDB collection.

  {
    "fields": [
      {
        "numDimensions": 384,
        "path": "embedding",
        "similarity": "cosine",
        "type": "vector"
      }
    ]
  }


Hi Victor,

Thanks for confirming.

I would also like to confirm that the index_name has been specified when initialising the vector store.

It should look something similar to the line below.

vectorStore = MongoDBAtlasVectorSearch.from_documents(
    data, embeddings, collection=collection, index_name="vector_index"
)
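If you want to double-check the name programmatically, here is a minimal sketch (index_name_exists is a hypothetical helper; the existing names can be read from the Atlas UI, or with collection.list_search_indexes() in recent PyMongo versions):

```python
def index_name_exists(index_names, configured_name):
    """Check that the index_name passed to the vector store matches an existing search index."""
    return configured_name in set(index_names)

# e.g. names = [ix["name"] for ix in collection.list_search_indexes()]
existing = ["vector_index"]
print(index_name_exists(existing, "vector_index"))  # True
print(index_name_exists(existing, "default"))       # False
```

A mismatch here fails silently: the similarity search simply returns no results, which matches the symptom you described.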

Hi Richmond,

Sorry for the late answer, I’ve been busy lately.

I finally found the mistake: there was an issue with the index_name when initialising the VectorStore. After fixing it, the example worked. Thank you so much, Richmond!

Continuing with this topic, I am working on a project that uses a MongoDB database as a loader instead of *.txt files. My question is: should I apply the same process as when working with *.txt files? I tried using the MongodbLoader from LangChain with the standard sample restaurants database, but something is not working as it should, because it only recognizes 2/3 of the restaurants from the database.

Here I attach my changes to the original code:

loader = MongodbLoader(
    db_name="sample_restaurants",
    filter_criteria={"borough": "Bronx", "cuisine": "Bakery"},
)

doc = loader.load()

splitter = RecursiveCharacterTextSplitter(
    chunk_size=250,
    chunk_overlap=50,
)
data = splitter.split_documents(doc)

embeddings = HuggingFaceBgeEmbeddings(
    model_name="sentence-transformers/paraphrase-MiniLM-L6-v2",
)

vectorStore = MongoDBAtlasVectorSearch.from_documents(data, embeddings, collection=collection, index_name="Modelito")
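For context, by default the loader dumps the raw document dict as page_content, so I am also considering flattening the fields I care about into a readable sentence before splitting. A small sketch of what I mean (restaurant_to_text is a hypothetical helper of mine, not a library function):

```python
def restaurant_to_text(doc):
    """Flatten selected fields of a restaurant document into one sentence for embedding."""
    return (f"{doc.get('name', '')} is a {doc.get('cuisine', '')} "
            f"restaurant in the borough of {doc.get('borough', '')}.")

sample = {"name": "Morris Park Bake Shop", "cuisine": "Bakery", "borough": "Bronx"}
print(restaurant_to_text(sample))
# Morris Park Bake Shop is a Bakery restaurant in the borough of Bronx.
```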

Thank you so much for your help, Richmond.

This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.