EASY: Build Generative AI Applications
00:00:00 Introduction to the Fourth Industrial Revolution
- Discusses the progression of industrial revolutions, culminating in the current era of artificial intelligence.
00:00:26 The Rise of Generative AI and ChatGPT
- Talks about the significance of generative AI and the explosive growth of ChatGPT.
00:01:37 Leveraging LLMs with MongoDB
- Introduces the concept of leveraging LLMs on private data using MongoDB and Atlas Vector Search.
00:02:02 Building a Semantic Search Application
- Describes the process of building an application for semantic movie searches using natural language.
00:03:06 Understanding Vector Embeddings
- Explains what vector embeddings are and how they capture semantic information.
00:04:16 Vector Search Explained
- Defines vector search and its role in finding semantically similar objects.
00:06:38 Atlas Vector Search Capabilities
- Highlights the features of Atlas Vector Search and its integration with MongoDB.
00:08:11 Tutorial: Using a Movie Dataset
- Begins the tutorial using a movie dataset and discusses the necessary tools and accounts.
00:10:53 Creating and Storing Vector Embeddings
- Demonstrates how to create vector embeddings and store them in the MongoDB database.
00:12:56 Querying with Vector Search Index
- Shows how to create a vector search index in MongoDB Atlas and query data using it.
00:15:08 Conclusion and Next Steps
- Summarizes the tutorial, encourages viewers to like and subscribe, and provides next steps for MongoDB content.

The primary focus of the video is on demonstrating how to use MongoDB and Atlas Vector Search to build transformative AI-powered applications that can perform semantic searches using natural language queries.
🔑 Key Points
- The video discusses the Fourth Industrial Revolution and the rise of AI, particularly generative AI.
- It highlights the rapid growth of ChatGPT and the underlying technology of large language models (LLMs).
- The tutorial shows how to use MongoDB and Atlas Vector Search to build AI-powered applications.
- It provides a step-by-step guide on creating vector embeddings and setting up semantic search.
- The video emphasizes the ease of integrating machine learning models with MongoDB's data platform.
Full Video Transcript
Have you heard of the Fourth Industrial Revolution? The first was mechanical, the second was mass production, the third was automation, and the fourth? We're in the middle of it: artificial intelligence. There is a fundamental change happening in the way we live and the way we work, and it's happening right now. While AI and its applications across businesses are not new, generative AI has recently become a hot topic worldwide with the incredible success of ChatGPT, the popular chatbot from OpenAI. It reached 100 million monthly active users in just two months, becoming the fastest-growing consumer application. The closest to this was TikTok, which took nine months, and that's still really fast. So what powers ChatGPT? Large language models (LLMs).

In this video, we're going to talk about how you can leverage the power of LLMs on your private data to build transformative AI-powered applications using MongoDB and Atlas Vector Search. We're also going to walk through an example of building an application that uses semantic search, machine learning models, and Atlas Vector Search to find movies using natural language queries. For instance, finding "funny movies with lead characters that are not human" would involve performing a semantic search that understands the meaning and intent behind the query to retrieve relevant movie recommendations, not just matches on the keywords present in the dataset. Using vector embeddings, you can leverage the power of LLMs for your use case, like semantic search, a recommendation system, anomaly detection, or a customer support chatbot, all based on your own data.

To do that, we first have to understand what vector embeddings are. A vector is a list of floating-point numbers representing a point in an n-dimensional embedding space, and it captures semantic information about the text it represents. For instance, an embedding for the string "MongoDB is awesome", generated with an open-source model called all-MiniLM-L6-v2, would consist of 384 floating-point numbers and would look something like this. Later on in this tutorial, we'll cover the steps to obtain vector embeddings just like this.

So now we know what a vector is and what vector embeddings are, but what is vector search? Vector search is a capability that allows you to find related objects that have a semantic similarity: searching for data based on meaning rather than on the keywords present in the dataset. Vector search uses machine learning models to transform unstructured data (text, audio, images, and other types of data) into numeric representations called vector embeddings, which capture the intent and meaning of the data. It then finds related content by comparing the distances between these vector embeddings. The most commonly used method for finding the distance between vectors involves calculating the cosine similarity between two vectors. That sounds way too complicated for my brain to comprehend; I just want something that can do this for me so I don't have to think about it, and this is where Atlas Vector Search comes in.

Atlas Vector Search is a fully managed service that simplifies the process of effectively indexing high-dimensional vector data within MongoDB and performing fast vector similarity searches. With Atlas Vector Search, you can use MongoDB as a standalone vector database for a new project or augment your existing MongoDB collections with vector search functionality. Having a single solution that can take care of your operational application data as well as your vector data eliminates the complexities of running a standalone system just for vector search, such as data transfer and infrastructure management overhead. With Atlas Vector Search, you can use the powerful capabilities of vector search in any major public cloud (AWS, Azure, GCP) and achieve massive scalability and data security out of the box. So let's move on to the tutorial.
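The distance comparison mentioned above can be sketched in a few lines of JavaScript. This is a toy illustration of cosine similarity, not code from the video; in practice, Atlas Vector Search performs this comparison for you:

```javascript
// Toy illustration of cosine similarity between two embedding vectors.
// Scores closer to 1 mean the vectors (and the texts they represent)
// are more semantically similar.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Vectors pointing the same direction score 1; orthogonal vectors score 0.
console.log(cosineSimilarity([1, 0], [2, 0])); // 1
console.log(cosineSimilarity([1, 0], [0, 1])); // 0
```

Real embeddings are much longer (384 dimensions for all-MiniLM-L6-v2), but the arithmetic is the same.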
Now, we'll be using a movie dataset containing over 23,000 documents in MongoDB, and we'll use the all-MiniLM-L6-v2 model from Hugging Face to generate the vector embeddings at index time as well as query time, but you can apply the same concepts using a dataset and model of your own choice. You will need a MongoDB Atlas account and a Hugging Face account for a hands-on experience. When we're looking at a movie database, it may contain various types of content, such as the movie description, plot, genre, actors, user comments, movie posters, etc., and these can all easily be converted into vector embeddings. In a similar manner, the user's query can also be converted into a vector embedding, and then the vector search can find the most relevant results by finding the nearest neighbors in the embedding space.

For our first step, we'll need to set up and connect to our MongoDB instance. If you don't already have a MongoDB Atlas account, it's completely free; there's a link in the video description below, and if you need help getting your first cluster set up and running, there's a great video that can walk you through it. For this tutorial, we'll be using one of the sample datasets: the sample_mflix database, which contains a movies collection where each document has fields like title, plot, genre, cast, directors, etc. We're going to use Node.js, but if you'd rather see examples in Python, there's a written version of this tutorial linked in the video description that uses Python.

All right, I have an empty directory here, and we're going to initialize a new project: npm init -y to accept all the defaults. Now we have our package.json. Let's also npm install mongodb. I've created a main.js file, where we're going to require mongodb. We're also going to use dotenv for environment variables; I know .env support is built into Node.js now, but if you're using an older version of Node.js, you will still need the dotenv package.
In fact, I forgot to install it, so let's run npm install dotenv. Then we're going to get our URI, which is our MongoDB connection string. To get your connection string in Atlas, go to your database, click Connect, and then, under Drivers, choose Node.js. This would be my connection string; you'll need to copy yours, because it's unique to you. Back in VS Code, let's create a .env file: I named the variable MONGODB_CONNECTION_STRING and pasted in my connection string. I'll also need to enter my password here, so I'll do that and save it. Next, we create our client by constructing a new MongoClient with our URI, and our main function is just going to test that the connection works. It's an async function: we await client.connect(), then await the admin command ping: 1, and if that succeeds, we log that we have successfully connected to MongoDB. After that, we close our connection, then call main with a catch that passes any error to console.error. Let's save, go to the terminal, and run node main. And there we go: we've successfully connected to MongoDB.

For step two, we need to set up the embedding creation function. There are many options for creating embeddings, like calling a managed API, hosting your own model, or running the model locally. In this example, we're going to use the Hugging Face inference API. Hugging Face is an open-source platform that provides tools for building, training, and deploying machine learning models; we're using it because it makes machine learning models easy to consume via APIs and SDKs. If you don't already have an account, go to huggingface.co and create one, then retrieve your access token: click your avatar in the top right, go to Settings, then Access Tokens, create a new token, and give it a name.
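Putting the connection steps from step one together, main.js at this point might look something like the following sketch. The environment variable name MONGODB_CONNECTION_STRING matches the .env entry created above; adjust it if yours differs:

```javascript
// main.js -- connect to Atlas and verify the connection with a ping.
const { MongoClient } = require("mongodb");
require("dotenv").config();

const uri = process.env.MONGODB_CONNECTION_STRING;
const client = new MongoClient(uri);

async function main() {
  try {
    await client.connect();
    // The ping admin command is a cheap way to confirm the connection works.
    await client.db("admin").command({ ping: 1 });
    console.log("Successfully connected to MongoDB!");
  } finally {
    await client.close();
  }
}

main().catch(console.error);
```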
Make sure it has the read role. Be sure you never share this token with anyone, and don't commit it to GitHub; that's why we're using environment variables. Copy it, and let's add it to our .env file: I'm adding a new entry called HF_TOKEN, set to the access token you just copied. I'll paste mine in, save, and go back to our main.js file. We're going to build on this file, and we're going to use axios, so let's npm install axios and add our axios require. We get our Hugging Face token from the .env file and then create our embedding URL; for this, we reference the all-MiniLM-L6-v2 model from Hugging Face. And here is our generateEmbedding function: it's an asynchronous function that accepts some text. We create our response by awaiting axios.post, passing the embedding URL; the input is the text passed into the function, and in the authorization headers we pass our Hugging Face token. If the response status is not 200, we throw an error; otherwise, for now, we just console.log the response.data, and of course there's a try/catch to handle any other errors. Then, to run it, we call generateEmbedding with the text "MongoDB is awesome". One last thing: at the very bottom, let's comment out the main function call; we're just running generateEmbedding right now to make sure it works. Save, go back to the terminal, and run node main again. And there we have it: our vector embedding for "MongoDB is awesome". Now, if you're not familiar with Hugging Face, their inference API is free to begin with and is meant for quick prototyping, but it has strict rate limits, so if you're dealing with a lot of data, you might want to consider setting up a paid Hugging Face inference endpoint.
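The generateEmbedding function described above might look something like this sketch. The exact inference endpoint URL is an assumption based on Hugging Face's hosted feature-extraction API; check the model page for the current URL:

```javascript
// Generate a vector embedding for the given text via the Hugging Face
// inference API, using the all-MiniLM-L6-v2 sentence-transformers model.
const axios = require("axios");
require("dotenv").config();

const hf_token = process.env.HF_TOKEN;
const embedding_url =
  "https://api-inference.huggingface.co/pipeline/feature-extraction/sentence-transformers/all-MiniLM-L6-v2";

async function generateEmbedding(text) {
  try {
    const response = await axios.post(
      embedding_url,
      { inputs: text },
      { headers: { Authorization: `Bearer ${hf_token}` } }
    );
    if (response.status !== 200) {
      throw new Error(`Request failed with status code ${response.status}`);
    }
    // For the quick test, log the 384-dimensional embedding;
    // in step three this becomes `return response.data;`.
    console.log(response.data);
  } catch (error) {
    console.error(error);
  }
}

generateEmbedding("MongoDB is awesome");
```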
That will create a private deployment of the model for you. For step three, we're going to create and store embeddings. Let's put all of this together and execute an operation that creates a vector embedding for the data in the plot field of the documents in our movies collection, and stores those embeddings in our database. As we discussed before, creating vector embeddings with a machine learning model is necessary for performing a similarity search based on intent. So let's build out this function. Back in our main.js file, let's continue to iterate. In our generateEmbedding function, instead of console.logging the response data, we need to actually return it, so let's make that change, and we're no longer going to call the function here, so let's delete that. Then, in our main function, instead of pinging the database to check the connection, let's make some alterations: we select our database, the sample_mflix database, and our collection, the movies collection. Let's get some documents: docs will await collection.find for any document where the plot field exists, and we'll limit that to the first 50.
After that, let's loop over our documents. We create a plot_embedding_hf field on each document, set to the generated embedding: we await the generateEmbedding function we defined earlier and pass it the movie's plot, which returns the embedding. Then we await collection.replaceOne to update that document in the database with the new information, and we console.log that the document has been updated, just so we know what's going on. Finally, we close our connection. Let's uncomment the main call, open the terminal, and run node main. Nice, and we can see that 50 updates have been made. If we go over to Atlas, to the sample_mflix database's movies collection, we can verify this worked by looking at one of the documents: we'll see a plot_embedding_hf field that's an array, and if we expand it, we'll see the vector embedding that was created. In this case, we're storing the vector embeddings in the original collection alongside the application data; alternatively, you could store them in a separate collection. It all depends on your use case and data access patterns. Once this step completes, you can verify in your database that a new field, plot_embedding_hf, has been created for some of the documents. We're restricting this to just 50 documents to avoid running into rate limits on the Hugging Face inference API; embedding the entire dataset of 23,000 documents in the sample_mflix database will take a while, and you may need to create a paid inference endpoint.

Step four is to create a vector search index, so let's head over to Atlas and create a search index. Back in Atlas, our data source is going to be Cluster0 in this instance.
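The embed-and-store loop described above can be sketched as follows. It assumes the client and generateEmbedding definitions from the earlier steps (with generateEmbedding changed to return the response data):

```javascript
// Step 3 (sketch): embed the plot of the first 50 movies and save the
// result back to each document in a new plot_embedding_hf field.
// Assumes `client` (MongoClient) and `generateEmbedding` are defined above.
async function saveEmbeddings() {
  try {
    await client.connect();
    const collection = client.db("sample_mflix").collection("movies");

    // Only embed documents that actually have a plot, and stay under the
    // free inference API's rate limits by stopping at 50 documents.
    const docs = await collection
      .find({ plot: { $exists: true } })
      .limit(50)
      .toArray();

    for (const doc of docs) {
      doc.plot_embedding_hf = await generateEmbedding(doc.plot);
      await collection.replaceOne({ _id: doc._id }, doc);
      console.log(`Updated the document with id: ${doc._id}`);
    }
  } finally {
    await client.close();
  }
}

saveEmbeddings().catch(console.error);
```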
We're going to go to Atlas Search and create a search index. We'll use the JSON editor and click Next. For the index name, I'm using PlotSemanticSearch. I'll go to my sample_mflix database and select the movies collection; that's where I want the index to be created. Then, for the JSON configuration, I'll paste this in. This is where we configure how our vector search will work. We need to make sure the field matches the field we created in our collection, which in this instance is plot_embedding_hf. We need to define the dimensions: the model we're using produces 384 dimensions, so that's what I set. For similarity, we choose dot product, and the type is knnVector. For a description of these fields and other configuration options, check out the Vector Search documentation linked in the video description below. We'll hit Next and then Create Search Index.

For step five, we're going to query our data. Once the index is created, you can query it using the $vectorSearch aggregation pipeline stage. Let's rename our main function to saveEmbeddings, and create a new function where we query our embeddings. We'll comment out the saveEmbeddings call because we don't want to run it this time. Our queryEmbeddings function accepts a query; in this instance, this is text we accept from the user. It's an asynchronous function: again, we connect to our database, the sample_mflix database and movies collection, and this is where we use the $vectorSearch aggregation pipeline stage. For this, we make sure to specify the index we just created, PlotSemanticSearch. For queryVector, we await generateEmbedding and pass it our query.
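Based on the narration above (field plot_embedding_hf, 384 dimensions, dot product similarity, knnVector type), the JSON index definition pasted into the Atlas Search JSON editor would look something like this:

```json
{
  "mappings": {
    "dynamic": true,
    "fields": {
      "plot_embedding_hf": {
        "dimensions": 384,
        "similarity": "dotProduct",
        "type": "knnVector"
      }
    }
  }
}
```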
Using that same generateEmbedding function, we generate an embedding of the query the user is searching for. We tell it which path, that is, which field, to look at in the document; we named that field plot_embedding_hf. The number of candidates is set to 100 and the limit to 4, so we return just the top four results. Again, for more information on the specifics of these parameters, check out the Vector Search documentation linked in the video description. Lastly, I have one more aggregation stage that projects just the title and plot fields, to make the output more readable in the console, and then we console.log those results. At the bottom, our query is going to be "imaginary characters from outer space at war"; we call the queryEmbeddings function, pass it our query, and add a catch at the end in case there are any errors. Let's save, open up the console, and run node main. You should get something like this: four movies returned, with plot and title. As you can see, the results are not very accurate, because we only embedded 50 of the movie documents; if the entire movie dataset of 23,000+ documents were embedded, the query "imaginary characters from outer space at war" would return these results instead.

In this tutorial, we demonstrated how to use the Hugging Face inference APIs, how to generate embeddings, and how to use Atlas Vector Search. We also learned how to build a semantic search application that finds movies whose plots most closely match the intent behind a natural language query, rather than just matching existing keywords in the dataset, and we saw how easy it is to bring the power of machine learning models to your data with the Atlas developer data platform. If this video was helpful, please give it a like and subscribe for more MongoDB content.
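The query step described above can be sketched as follows. The index name PlotSemanticSearch and the field names follow the earlier steps; adjust them to your own setup. The buildVectorSearchPipeline helper is a name introduced here for illustration, and the usage function assumes the client and generateEmbedding definitions from before:

```javascript
// Step 5 (sketch): build the $vectorSearch aggregation pipeline for a
// semantic query and run it against the movies collection.
function buildVectorSearchPipeline(queryEmbedding) {
  return [
    {
      $vectorSearch: {
        index: "PlotSemanticSearch",     // the search index created in step 4
        queryVector: queryEmbedding,     // embedding of the user's query text
        path: "plot_embedding_hf",       // field holding the stored embeddings
        numCandidates: 100,              // candidates considered per query
        limit: 4,                        // return only the top four results
      },
    },
    // Keep only the fields we want to display in the console.
    { $project: { _id: 0, title: 1, plot: 1 } },
  ];
}

// Usage (assumes `client` and `generateEmbedding` from the earlier steps):
async function queryEmbeddings(query) {
  try {
    await client.connect();
    const collection = client.db("sample_mflix").collection("movies");
    const embedding = await generateEmbedding(query);
    const results = await collection
      .aggregate(buildVectorSearchPipeline(embedding))
      .toArray();
    console.log(results);
  } finally {
    await client.close();
  }
}

// queryEmbeddings("imaginary characters from outer space at war").catch(console.error);
```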