
MongoDB Vector Search Quick Start

In this quick start, you created a MongoDB Vector Search index on sample data that you loaded into your cluster, and you ran a sample vector search query on your indexed embeddings.

This quick start focused on retrieving documents from your cluster that contain text that is semantically related to a provided query. However, you can create a vector search index on embeddings that represent any type of data that you might write to your cluster, such as images or videos.

The vector embeddings in the sample_mflix.embedded_movies collection and in the example query were created with the Voyage AI voyage-3-large embedding model. Your choice of embedding model determines the vector dimensions and the vector similarity function you use in your vector search index. You can use any embedding model you like, and it is worth experimenting with several, because accuracy varies by model and by use case.
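For example, an index over voyage-3-large embeddings must declare a matching number of dimensions (1024 by that model's default) and a compatible similarity function. The following is a minimal sketch using PyMongo 4.7+; the field name plot_embedding, the index name vector_index, and the dotProduct similarity are illustrative assumptions, not values prescribed by this quick start.

```python
from pymongo import MongoClient
from pymongo.operations import SearchIndexModel

client = MongoClient("<connection-string>")  # placeholder connection string
collection = client["sample_mflix"]["embedded_movies"]

# Sketch of a vector search index definition. The field path and the
# similarity function are assumptions; match them to your own documents
# and to the embedding model you chose.
index_model = SearchIndexModel(
    definition={
        "fields": [
            {
                "type": "vector",
                "path": "plot_embedding",
                "numDimensions": 1024,  # voyage-3-large's default output size
                "similarity": "dotProduct",
            }
        ]
    },
    name="vector_index",
    type="vectorSearch",
)
collection.create_search_index(model=index_model)
```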

To learn how to create vector embeddings of your own data, see How to Create Vector Embeddings.
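As a rough sketch of that workflow, the snippet below generates a query embedding with the Voyage AI Python client. It assumes the VOYAGE_API_KEY environment variable is set and uses the client's input_type parameter to distinguish query embeddings from document embeddings.

```python
import voyageai

# The client reads the VOYAGE_API_KEY environment variable by default.
vo = voyageai.Client()

# Embed the search text with the same model used for the stored documents,
# so that query and document vectors live in the same vector space.
result = vo.embed(
    ["a sci-fi adventure about exploring distant planets"],
    model="voyage-3-large",
    input_type="query",  # use "document" when embedding stored text
)
query_vector = result.embeddings[0]  # list of floats, 1024 by default
```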

The query you ran in this quick start is an aggregation pipeline, in which the $vectorSearch stage performs an Approximate Nearest Neighbor (ANN) search and a following $project stage refines the results. To see all the options for a vector search query, including how to run an Exact Nearest Neighbor (ENN) search or narrow the scope of your vector search with the filter option, see Run Vector Search Queries.
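The sketch below shows the shape of such a pipeline with PyMongo, reusing the query_vector and collection from the earlier sketches. The index name, field path, and numCandidates value are placeholders; the commented-out filter and exact fields correspond to the options described in Run Vector Search Queries.

```python
pipeline = [
    {
        "$vectorSearch": {
            "index": "vector_index",        # name of your vector search index
            "path": "plot_embedding",       # field that holds the embeddings
            "queryVector": query_vector,    # embedding of the query text
            "numCandidates": 150,           # ANN candidates to consider
            "limit": 10,                    # number of results to return
            # "filter": {"year": {"$gt": 2000}},  # optional pre-filter on fields indexed for filtering
            # "exact": True,                      # ENN search instead of ANN (omit numCandidates)
        }
    },
    {
        "$project": {
            "_id": 0,
            "title": 1,
            "plot": 1,
            "score": {"$meta": "vectorSearchScore"},
        }
    },
]

for doc in collection.aggregate(pipeline):
    print(doc)
```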
