Mat Keep


Building AI with MongoDB: Retrieval-Augmented Generation (RAG) Puts Power in Developers’ Hands

As recently as 12 months ago, any mention of retrieval-augmented generation (RAG) would have left most of us confused. However, with the explosion of generative AI, the RAG architectural pattern has now firmly established itself in the enterprise landscape. RAG presents developers with a potent combination: they can take the reasoning capabilities of pre-trained, general-purpose LLMs and feed them with real-time, company-specific data. As a result, developers can build AI-powered apps that generate outputs grounded in enterprise data and knowledge that is accurate, up-to-date, and relevant. They can do this without having to turn to specialized data science teams to either retrain or fine-tune models, a complex, time-consuming, and expensive process.

Over this series of Building AI with MongoDB blog posts, we’ve featured developers using tools like MongoDB Atlas Vector Search for RAG in a whole range of applications. Take a look at our AI case studies page and you’ll find examples spanning conversational AI with chatbots and voice bots, co-pilots, threat intelligence and cybersecurity, contract management, question-answering, healthcare compliance and treatment assistants, content discovery and monetization, and more. Further reflecting its growing adoption, Retool’s State of AI survey from a couple of weeks ago shows Atlas Vector Search earning the highest net promoter score (NPS) among developers. Check out our AI resource page to learn more about building AI-powered apps with MongoDB.

In this blog post, I’ll highlight three more interesting and novel use cases:

- Unlocking geological data for better decision-making and accelerating the path to net zero at Eni
- Video and audio personalization at Potion
- Unlocking insights from enterprise knowledge bases at Kovai

Eni makes terabytes of subsurface unstructured data actionable with MongoDB Atlas

Based in Italy, Eni is a leading integrated energy company with more than 30,000 employees across 69 countries.
In 2020, the company launched a strategy to reach net zero emissions by 2050 and develop more environmentally and financially sustainable products. Sabato Severino, Senior AI Solution Architect for Geoscience at Eni, explains the role of his team: “We’re responsible for finding the best solutions in the market for our cloud infrastructure and adapting them to meet specific business needs.” Projects include using AI for drilling and exploration, leveraging cloud APIs to accelerate innovation, and building a smart platform to promote knowledge sharing across the company.

Eni’s document management platform for geosciences offers an ecosystem of services and applications for creating and sharing content. It leverages embedded AI models to extract information from documents and stores unstructured data in MongoDB. The challenges for Severino’s team were to maintain the platform as it ingested a growing volume of data — hundreds of thousands of documents and terabytes of data — and to enable different user groups to extract relevant insights from comprehensive records quickly and easily.

With MongoDB Atlas, Eni users can quickly find data spanning multiple years and geographies to identify trends and analyze models that support decision-making within their fields. The platform uses MongoDB Atlas Search to filter out irrelevant documents while also integrating AI and machine learning models, such as vector search, to make it even easier to identify patterns.

“The generative AI we’ve introduced currently creates vector embeddings from documents, so when a user asks a question, it retrieves the most relevant document and uses LLMs to build the answer,” explains Severino. “We’re looking at migrating vector embeddings into MongoDB Atlas to create a fully integrated, functional system.
We’ll then be able to use Atlas Vector Search to build AI-powered experiences without leaving the Atlas platform — a much better experience for developers.”

Read the full case study to learn more about Eni and how it is making unstructured data actionable.

Video personalization at scale with Potion and MongoDB

Potion enables salespeople to personalize prospecting videos at scale. Already over 7,500 sales professionals at companies including SAP, AppsFlyer, CaptivateIQ, and Opensense are using SendPotion to increase response rates, book more meetings, and build customer trust. All a sales representative needs to do is record a video template, select which words need to be personalized, and let Potion’s audio and vision AI models do the rest.

Kanad Bahalkar, co-founder and CEO at Potion, explains: “The sales rep tells us what elements need to be personalized in the video — that is typically provided as a list of contacts with their name, company, desired call-to-action, and so on. Our vision and audio models then inspect each frame and reanimate the video and audio with personalized messages lip-synced into the stream. Reanimation is done in bulk in minutes. For example, one video template can be transformed into over 1,000 unique video messages, personalized to each contact.”

Potion’s custom generative AI models are built with PyTorch and TensorFlow, and run on Amazon SageMaker. Describing the models, Kanad says, “Our vision model is trained on thousands of different faces, so we can synthesize the video without individualized AI training. The audio models are tuned on-demand for each voice.”

And where does the data for the AI lifecycle live? “This is where we use MongoDB Atlas,” says Kanad. “We use the MongoDB database to store metadata for all the videos, including the source content for personalization, such as the contact list and calls to action.
For every new contact entry created in MongoDB, a video is generated for it using our AI models, and a link to that video is stored back in the database. MongoDB also powers all of our application analytics and intelligence. With the insights we generate from MongoDB, we can see how users interact with the service, capturing feedback loops, response rates, video watch times, and more. This data is used to continuously train and tune our models in SageMaker."

On selecting MongoDB, Kanad says, “I had prior experience of MongoDB and knew how easy and fast it was to get started for both modeling and querying the data. Atlas provides the best-managed database experience out there, meaning we can safely offload running the database to MongoDB. This ease of use, speed, and efficiency are all critical as we build and scale the business."

To further enrich the SendPotion service, Kanad is planning to use more of the developer features within MongoDB Atlas. This includes Atlas Vector Search to power AI-driven semantic search and RAG for users who are exploring recommendations across video libraries. The engineering team is also planning on using Atlas Triggers to enable event-driven processing of new video content.

Potion is a member of the MongoDB AI Innovators program. Asked about the value of the program, Kanad responds, “Access to free credits helped support rapid build and experimentation on top of MongoDB, coupled with access to technical guidance and support."

Bringing the power of Vector Search to enterprise knowledge bases

Founded in 2011, Kovai is an enterprise software company that offers multiple products in both the enterprise and B2B SaaS arena. Since its founding, the company has grown to nearly 300 employees serving over 2,500 customers. One of Kovai’s key products is Document360, a knowledge base platform for SaaS companies looking for a self-service software documentation solution.
Seeing the rise of GenAI, Kovai began developing its AI assistant, “Eddy.” The assistant answers customers’ questions using LLMs augmented with information retrieved from a Document360 knowledge base. During the development phase, Kovai’s engineering and data science teams explored multiple vector databases to power the RAG portion of the application. They found that the need to sync data between its system-of-record MongoDB database and a separate vector database introduced inaccuracies in answers from the assistant. The release of MongoDB Atlas Vector Search provided a solution with three key advantages for the engineers:

- Architectural simplicity: Atlas Vector Search’s architectural simplicity helps Kovai optimize the technical architecture needed to implement Eddy.
- Operational efficiency: Atlas Vector Search allows Kovai to store both knowledge base articles and their embeddings together in MongoDB collections, eliminating the “data syncing” issues that come with other vendors.
- Performance: Kovai gets faster query responses from Atlas Vector Search at scale, ensuring a positive user experience.

“Atlas Vector Search is robust, cost-effective, and blazingly fast!” said Saravana Kumar, CEO of Kovai, speaking about his team's experience. Specifically, the team has seen the average time taken to return three, five, and 10 chunks range between two and four milliseconds, and if the question is a closed loop, the average time drops to less than two milliseconds.

You can learn more about Kovai’s journey into the world of RAG in the full case study.

Getting started

As the case studies in our Building AI with MongoDB series demonstrate, retrieval-augmented generation is a key design pattern developers can use as they build AI-powered applications for the business. Take a look at our Embedding Generative AI whitepaper to explore RAG in more detail.
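At its core, the RAG pattern these teams describe is retrieve-then-generate: embed the user’s question, find the most similar documents, and pass them to the LLM as grounding context. The sketch below illustrates the idea in miniature; the hand-made three-dimensional vectors and tiny in-memory document list are stand-ins for a real embedding model and a vector database such as Atlas Vector Search.

```python
from math import sqrt

# Toy knowledge base: in production these embeddings would come from an
# embedding model and be stored and indexed in a vector database.
DOCS = [
    {"text": "Refunds are processed within 5 business days.", "embedding": [0.9, 0.1, 0.0]},
    {"text": "Support is available 24/7 via chat.", "embedding": [0.1, 0.9, 0.1]},
]

def cosine(a, b):
    # Cosine similarity between two vectors of equal length.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b)))

def retrieve(query_embedding, k=1):
    # Rank documents by similarity to the query embedding, keep the top k.
    ranked = sorted(DOCS, key=lambda d: cosine(query_embedding, d["embedding"]), reverse=True)
    return ranked[:k]

def build_prompt(question, query_embedding):
    # Ground the LLM by prepending the retrieved context to the question.
    context = "\n".join(d["text"] for d in retrieve(query_embedding))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

# A refund question: its (toy) embedding is closest to the first document.
prompt = build_prompt("How long do refunds take?", [0.8, 0.2, 0.0])
print(prompt)
```

In a real application, the prompt would then be sent to the chosen LLM; the retrieval step is what keeps the answer grounded in enterprise data.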

November 28, 2023

Building AI with MongoDB: Improving Productivity with WINN.AI’s Virtual Sales Assistant

Better serving customers is a primary driver for the huge wave of AI innovations we see across enterprises. WINN.AI is a great example. Founded in November 2021 by sales tech entrepreneur Eldad Postan Koren and cybersecurity expert Bar Haleva, their innovations are enabling sales teams to improve productivity by increasing the time they focus on customers. WINN.AI orchestrates a multimodal suite of state-of-the-art models for speech recognition, entity extraction, and meeting summarization, relying on MongoDB Atlas as the underlying data layer. I had the opportunity to sit down with Orr Mendelson, Ph.D., Head of R&D at WINN.AI, to learn more.

Check out our AI resource page to learn more about building AI-powered apps with MongoDB.

Tell us a little bit about what WINN.AI is working to accomplish

Today’s salespeople spend over 25% of their time on administrative busywork, costing organizations time, money, and opportunity. We are working to change that so that sales teams can spend more time solving their customers’ problems and less on administrative tasks.

At the heart of WINN.AI is an AI-powered real-time sales assistant that joins your virtual meetings. It detects and interprets customer questions and immediately surfaces relevant information for the salesperson. Think about retrieving relevant customer references or competitive information. It can provide prompts from a sales playbook, and also make sure meetings stay on track and on time. After the meeting concludes, WINN.AI extracts relevant information from it and updates the CRM system. WINN.AI integrates with the leading tools used by sales teams, including Zoom, HubSpot, Salesforce, and more.

Can you describe what role AI plays in your application?

Our technology allows the system to understand not only what people are saying on a sales call, but also to specifically comprehend the context of a sales conversation, optimizing meeting summaries and follow-on actions.
This includes identifying the most important talking points discussed in the meeting, knowing how to break down the captured data into different sales methodology fields (MEDDICC, BANT, etc.), and automatically pushing updates to the CRM.

What specific AI/ML techniques, algorithms, or models are utilized in the application?

We started out building and training our own custom natural language processing (NLP) algorithms, and later switched to GPT-3.5 and GPT-4 for entity extraction and summarization. Our selection of models is based on the specific requirements of each application feature, balancing things like latency with context length and data modality. We orchestrate all of the models with massive automation, reporting, and monitoring mechanisms. This is developed by our engineering teams and assures high-quality AI products across our services and users. We have a dedicated team of AI engineers and prompt engineers who develop and monitor each prompt and response, so we are continuously tuning and optimizing app capabilities.

How do you use MongoDB in your application stack?

MongoDB stores everything in the WINN.AI platform: organizations and users, sessions, their history, and more. The primary driver for selecting MongoDB was its flexibility in being able to store, index, and query data of any shape or structure. The database fluidly adapts to our application schema, which gives us a more agile approach than traditional relational databases. My developers love the ecosystem that has built up around MongoDB. MongoDB Atlas provides the managed services we need to run, scale, secure, and back up our data.

How do you see the broader benefits of MongoDB in your business?

In the ever-changing AI tech market, MongoDB is our stable anchor. MongoDB provides the freedom to work with structured and unstructured data while using any of our preferred tools, and we leave database management to the Atlas service.
This means my developers are free to create with AI while being able to sleep at night! MongoDB is familiar to our developers, so we don’t need any DBA or external experts to maintain and run it safely. We can invest those savings back into building great AI-powered products.

What are your future plans for new applications, and how does MongoDB fit into them?

We’re always looking for opportunities to offer new functionality to our users. Capabilities like Atlas Search for faceted full-text navigation over data, coupled with MongoDB’s application-driven intelligence for more real-time analytics and insights, are all incredibly valuable.

Streaming is one area that I’m really excited about. Our application is composed of multiple microservices that are soon to be connected with Kafka for an event-driven architecture. Building on Kafka-based messaging, Atlas Stream Processing is another direction we will explore. It will give our services a way of continuously querying, analyzing, and reacting to streaming data without having to first land it in the database. This will give our customers even lower-latency AI outputs. Everybody WINNs!

Wrapping up

Orr, thank you for sharing WINN.AI’s story with the community! WINN.AI is part of the MongoDB AI Innovators program, benefiting from access to free Atlas credits and technical expertise. If you are getting started with AI, sign up for the program and build with MongoDB.

November 20, 2023

Atlas Vector Search Commands Highest Developer NPS in Retool State of AI 2023 Survey

Retool has just published its first-ever State of AI report, and it's well worth a read. Modeled on its massively popular State of Internal Tools report, the State of AI survey took the pulse of over 1,500 tech folks spanning software engineering, leadership, product management, design, and more, drawn from a variety of industries. The survey’s purpose is to understand how these tech folks use and build with artificial intelligence (AI).

As part of the survey, Retool dug into which tools were popular, including the vector databases used most frequently with AI. The survey found MongoDB Atlas Vector Search commanded the highest Net Promoter Score (NPS) and was the second most widely used vector database, within just five months of its release. This places it ahead of competing solutions that have been around for years. In this blog post, we’ll examine the phenomenal rise of vector databases and how developers are using solutions like Atlas Vector Search to build AI-powered applications. We’ll also cover other key highlights from the Retool report.

Check out our AI resource page to learn more about building AI-powered apps with MongoDB.

Vector database adoption: Off the charts (well, almost...)

From mathematical curiosity to the superpower behind generative AI and LLMs, vector embeddings and the databases that manage them have come a long way in a very short time. Check out DB-Engines trends in database models over the past 12 months and you'll see that vector databases are head and shoulders above all others in popularity change. Just look at the pink line’s "up and to the right" trajectory in the chart below.

Screenshot courtesy of DB-Engines, November 8, 2023

But why have vector databases become so popular?
They are a key component in a new architectural pattern called retrieval-augmented generation, otherwise known as RAG: a potent mix that combines the reasoning capabilities of pre-trained, general-purpose LLMs with real-time, company-specific data. The results are AI-powered apps that uniquely serve the business, whether that’s creating new products, reimagining customer experiences, or driving internal productivity and efficiency to unprecedented heights.

Vector embeddings are one of the fundamental components required to unlock the power of RAG. Vector embedding models encode enterprise data, no matter whether it is text, code, video, images, audio streams, or tables, as vectors. Those vectors are then stored, indexed, and queried in a vector database or vector search engine, providing the relevant input data as context to the chosen LLM. The result is AI apps grounded in enterprise data and knowledge that is relevant to the business, accurate, trustworthy, and up-to-date.

As the Retool survey shows, the vector database landscape is still largely greenfield. Fewer than 20% of respondents are using vector databases today, but with the growing trend towards customizing models and AI infrastructure, adoption is poised to grow.

Why are developers adopting Atlas Vector Search?

Retool's State of AI survey features some great vector databases that have blazed a trail over the past couple of years, especially in applications requiring context-aware semantic search. Think product catalogs or content discovery. However, the challenge developers face in using those vector databases is that they have to integrate them alongside other databases in their application’s tech stack. Every additional database layer in the application tech stack adds yet another source of complexity, latency, and operational overhead.
This means they have another database to procure, learn, integrate (for development, testing, and production), secure and certify, scale, monitor, and back up, all while keeping data in sync across these multiple systems.

MongoDB takes a different approach that avoids these challenges entirely:

- Developers store and search native vector embeddings in the same system they use as their operational database.
- Using MongoDB’s distributed architecture, they can isolate these different workloads while keeping the data fully synchronized. Search Nodes provide dedicated compute and workload isolation that is vital for memory-intensive vector search workloads, enabling improved performance and higher availability.
- With MongoDB’s flexible and dynamic document schema, developers can model and evolve relationships between vectors, metadata, and application data in ways other databases cannot.
- They can process and filter vector and operational data in any way the application needs, with an expressive query API and drivers that support all of the most popular programming languages.
- Using the fully managed MongoDB Atlas developer data platform empowers developers to achieve the scale, security, and performance that their application users expect.

What does this unified approach mean for developers? Faster development cycles and higher-performing apps providing lower latency with fresher data, coupled with lower operational overhead and cost. These outcomes are reflected in MongoDB’s best-in-class NPS score.

“Atlas Vector Search is robust, cost-effective, and blazingly fast!”
Saravana Kumar, CEO of Kovai, discussing the development of his company’s AI assistant

Check out our Building AI with MongoDB blog series (head to the Getting Started section to see the back issues).
Here you'll see Atlas Vector Search used for GenAI-powered applications spanning conversational AI with chatbots and voice bots, co-pilots, threat intelligence and cybersecurity, contract management, question-answering, healthcare compliance and treatment assistants, content discovery and monetization, and more.

“MongoDB was already storing metadata about artifacts in our system. With the introduction of Atlas Vector Search, we now have a comprehensive vector-metadata database that’s been battle-tested over a decade and that solves our dense retrieval needs. No need to deploy a new database we'd have to manage and learn. Our vectors and artifact metadata can be stored right next to each other.”
Pierce Lamb, Senior Software Engineer on the Data and Machine Learning team at VISO TRUST

What can you learn about the state of AI from the Retool report?

Beyond uncovering the most popular vector databases, the survey covers AI from a range of perspectives. It starts by exploring respondents' perceptions of AI. (Unsurprisingly, the C-suite is more bullish than individual contributors.) It then explores investment priorities, AI’s impact on future job prospects, and how it will likely affect developers and the skills they need in the future.

The survey then explores the level of AI adoption and maturity. Over 75% of survey respondents say their companies are making efforts to get started with AI, with around half saying these were still early projects, mainly geared towards internal applications. The survey goes on to examine what those applications are, and how useful the respondents think they are to the business. It finds that almost everyone’s using AI at work, whether they are allowed to or not, and then identifies the top pain points. It's no surprise that model accuracy, security, and hallucinations top that list. The survey concludes by exploring the top models in use.
Again, no surprise that OpenAI’s offerings are leading the way, but the survey also indicates growing intent to use open source models, along with AI infrastructure and tools for customization, in the future. You can dig into all of the survey details by reading the report.

Getting started with Atlas Vector Search

Eager to take a look at our Vector Search offering? Head over to our Atlas Vector Search product page. There you will find links to tutorials, documentation, and key AI ecosystem integrations so you can dive straight into building your own genAI-powered apps. If you want to learn more about the high-level possibilities of Vector Search, then download our Embedding Generative AI whitepaper.
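The unified approach described above means a vector query is just another stage in a MongoDB aggregation pipeline, so vector retrieval and operational filtering live in one request. Here is a minimal sketch of such a pipeline; the index name, field paths, and filter values are illustrative assumptions, not a real deployment.

```python
# Sketch of an Atlas Vector Search aggregation pipeline that combines a
# vector query with a metadata pre-filter. Index name, field paths, and
# filter values below are assumptions for illustration.
query_vector = [0.02, -0.41, 0.13]  # would come from an embedding model

pipeline = [
    {
        "$vectorSearch": {
            "index": "vector_index",      # name of the Atlas Vector Search index (assumed)
            "path": "embedding",          # document field holding the vector (assumed)
            "queryVector": query_vector,
            "numCandidates": 100,         # candidates considered before final ranking
            "limit": 5,                   # results returned to the next stage
            "filter": {"category": "contracts"},  # operational metadata pre-filter
        }
    },
    # Keep only the fields the application needs, plus the similarity score.
    {"$project": {"text": 1, "score": {"$meta": "vectorSearchScore"}}},
]

# With a driver such as PyMongo, this would run as:
#   results = collection.aggregate(pipeline)
print(pipeline[0]["$vectorSearch"]["limit"])
```

Because the filter and the vector query execute in the same stage against the same collection, there is no second system to keep in sync, which is the point the survey respondents kept making.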

November 13, 2023

Building AI with MongoDB: Giving Your Apps a Voice

In previous posts in this series, we covered how generative AI and MongoDB are being used to unlock value from data of any modality and in supercharging communications. Put those topics together, and we can start to harness the most powerful communications medium (arguably!) of them all: voice. Voice brings context, depth, and emotion in ways that text, images, and video alone simply cannot. Or as the ancient Chinese proverb tells us, “The tongue can paint what the eyes can’t see.”

The rise of voice technology has been a transformative journey that spans over a century, from the earliest days of radio and telephone communication to the cutting-edge realm of generative AI. It began with the invention of the telephone in the late 19th century, enabling voice conversations across distances. The evolution continued with the advent of radio broadcasting, allowing mass communication through spoken word and music. As technology advanced, mobile communications emerged, making voice calls accessible anytime, anywhere. Today, generative AI, powered by sophisticated machine learning (ML) models, has taken voice technology to unprecedented levels. The generation of human-like voices and text-to-speech capabilities are one example. Another is the ability to detect sentiment and create summaries from voice communications. These advances are revolutionizing how we interact with technology and information in the age of intelligent software.

In this post, we feature three companies that are harnessing the power of voice with generative AI to build completely new classes of user experiences:

- XOLTAR uses voice along with vision to improve engagement and outcomes for patients through clinical treatment and recovery.
- Cognigy puts voice at the heart of its conversational AI platform, integrating with back-office CRM, ERP, and ticketing systems for some of the world’s largest manufacturing, travel, utility, and ecommerce companies.
Artificial Nerds enables any company to enrich its customer service with voice bots and autonomous agents.

Let's learn more about the role voice plays in each of these very different applications. Check out our AI resource page to learn more about building AI-powered apps with MongoDB.

GenAI companion for patient engagement and better clinical outcomes

XOLTAR is the first conversational AI platform designed for long-lasting patient engagement. XOLTAR’s hyper-personalized digital therapeutic app is led by Heather, XOLTAR’s live AI agent. Heather is able to conduct omni-channel interactions, including live video chats. The platform uses its multimodal architecture to better understand patients, gather more data, increase engagement, create long-lasting relationships, and ultimately achieve real behavioral changes.

Figure 1: About 50% of patients fail to stick to prescribed treatments.

Through its app and platform, XOLTAR is working to change this, improving outcomes for both patients and practitioners. It provides physical and emotional well-being support through a course of treatment, adherence to medication regimes, monitoring of post-treatment recovery, and collection of patient data from wearables for remote analysis and timely interventions.

Powering XOLTAR is a sophisticated array of state-of-the-art machine learning models working across multiple modalities: voice and text, as well as vision for visual perception of micro-expressions and non-verbal communication. Fine-tuned LLMs coupled with custom multilingual models for real-time automatic speech recognition and various transformers are trained and deployed to create a truthful, grounded, and aligned free-guided conversation. XOLTAR’s models personalize each patient’s experience by retrieving data stored in MongoDB Atlas.
Taking advantage of the flexible document model, XOLTAR developers store structured data, such as patient details and sensor measurements from wearables, alongside unstructured data, such as video transcripts. This data provides both long-term memory for each patient and input for ongoing model training and tuning. MongoDB also powers XOLTAR’s event-driven data pipelines. Follow-on actions generated from patient interactions are persisted in MongoDB, with Atlas Triggers notifying downstream consuming applications so they can react in real time to new treatment recommendations and regimes.

Through its participation in the MongoDB AI Innovators program, XOLTAR’s development team receives access to free Atlas credits and expert technical support, helping it de-risk new feature development.

How Cognigy built a leading conversational AI solution

Cognigy delivers AI solutions that empower businesses to provide exceptional customer service that is instant, personalized, in any language, and on any channel. Its main product, Cognigy.AI, allows companies to create AI agents, improving experiences through smart automation and natural language processing. This powerful solution is at the core of Cognigy's offerings, making it easy for businesses to develop and deploy intelligent voice bots and chatbots.

Developing a conversational AI system poses challenges for any company. These solutions must effectively interact with diverse systems like CRMs, ERPs, and ticketing systems. This is where Cognigy introduces the concept of a centralized platform, which allows you to construct and deploy agents through an intuitive low-code user interface. Cognigy took a deliberate approach when constructing the platform, employing a composable architecture model, as depicted in Figure 2 below. To achieve this, it designed over 30 specialized microservices, adeptly orchestrated through Kubernetes.
These microservices were strategically fortified with MongoDB's replica sets, spanning three availability zones. In addition, sophisticated indexing and caching strategies were integrated to enhance query performance and expedite response times.

Figure 2: Cognigy's composable architecture model platform

MongoDB has been a driving force behind Cognigy's unprecedented flexibility and scalability and has been instrumental in bringing groundbreaking products like Cognigy.AI to life. Check out the Cognigy case study to learn more about their architecture and how they use MongoDB.

The power of custom voice bots without the complexity of fine-tuning

Founded in 2017, Artificial Nerds assembled a group of creative, passionate, and "nerdy" technologists focused on unlocking the benefits of AI for all businesses. Its aim was to liberate teams from repetitive work, freeing them up to spend more time building closer relationships with their clients. The result is a suite of AI-powered products that improve customer sales and service. These include multimodal bots for conversational AI via voice and chat, along with intelligent hand-offs to human operators for live chat. These are all backed by no-code functions to integrate customer service actions with backend business processes and campaigns.

Originally, the company’s ML engineers fine-tuned GPT and BERT language models to customize its products for each one of its clients. This was a time-consuming and complex process. The maturation of vector search and tooling to enable retrieval-augmented generation (RAG) has radically simplified the workflow, allowing Artificial Nerds to grow its business faster.

Artificial Nerds started using MongoDB in 2019, taking advantage of its flexible schema to provide long-term memory and storage for richly structured conversation history, messages, and user data. When dealing with customers, it was important for users to be able to quickly browse and search this history.
Adopting Atlas Search helped the company meet this need. With Atlas Search, developers were able to spin up a powerful full-text index right on top of their database collections to provide relevance-based search across their entire corpus of data. The integrated approach offered by MongoDB Atlas avoided the overhead of bolting on a separate search engine and creating an ETL mechanism to sync with the database. This eliminated the cognitive overhead of developing against, and operating, separate systems.

The release of Atlas Vector Search unlocks those same benefits for vector embeddings. The company has replaced its previously separate standalone vector database with the integrated MongoDB Atlas solution. Not only has this improved the productivity of its developers, but it has also improved the customer experience by reducing latency 4x.

Artificial Nerds is growing fast, with revenues expanding 8% every month. The company continues to push the boundaries of customer service by experimenting with new models, including the Llama 2 LLM and multilingual sentence transformers hosted on Hugging Face. Being part of the MongoDB AI Innovators program helps Artificial Nerds stay abreast of all of the latest MongoDB product enhancements and provides the company with free Atlas credits to build new features.

Getting started

Check out our MongoDB for AI page to get access to all of the latest resources to help you build. We see developers increasingly adopting state-of-the-art multimodal models and MongoDB Atlas Vector Search to work with data formats that have previously been accessible only to those organizations with the very deepest data science resources.
Check out some examples from our previous Building AI with MongoDB blog post series here: Building AI with MongoDB: first qualifiers includes AI at the network edge for computer vision and augmented reality, risk modeling for public safety, and predictive maintenance paired with Question-Answering generation for maritime operators. Building AI with MongoDB: compliance to copilots features AI in healthcare along with intelligent assistants that help product managers specify better products and sales teams compose emails that convert 2x higher. Building AI with MongoDB: unlocking value from multimodal data showcases open source libraries that transform unstructured data into a usable JSON format, entity extraction for contracts management, and making sense of “dark data” to build customer service apps. Building AI with MongoDB: Cultivating Trust with Data covers three key customer use cases of improving model explainability, securing generative AI outputs, and transforming cyber intelligence with the power of MongoDB. Building AI with MongoDB: Supercharging Three Communication Paradigms features developer tools that bring AI to existing enterprise data, conversational AI, and monetization of video streams and the metaverse. There is no better time to release your own inner voice and get building!

November 8, 2023

Announcing LangChain Templates for MongoDB Atlas

Since announcing the public preview of MongoDB Atlas Vector Search back in June, we’ve seen tremendous adoption by developers working to build AI-powered applications. The ability to store, index, and query vector embeddings right alongside their operational data in a single, unified platform dramatically boosts engineering velocity while keeping their technology footprint streamlined and efficient. Atlas Vector Search is used by developers as a key part of the Retrieval-Augmented Generation (RAG) pattern. RAG is used to feed LLMs with the additional data they need to ground their responses, providing outputs that are reliable, relevant, and accurate for the business. One of the key enabling technologies being used to bring external data into LLMs is LangChain. Just one example is healthcare innovator Inovaare, which is building AI with MongoDB and LangChain for document classification, information extraction and enrichment, and chatbots over medical data. Now, making it even easier for developers to build AI-powered apps, we are excited to announce our partnership with LangChain in the launch of LangChain Templates! We have worked with LangChain to create a RAG template using MongoDB Atlas Vector Search and OpenAI. This easy-to-use template can help developers build and deploy a chatbot application over their own proprietary data. LangChain Templates offer a reference architecture that’s easily deployable as a REST API using LangServe. We have also been working with LangChain to release the latest features of Atlas Vector Search, like the recently announced dedicated vector search aggregation stage $vectorSearch, to both the MongoDB LangChain Python integration as well as the MongoDB LangChain JavaScript integration. Similarly, we will continue working with LangChain to create more templates that will allow developers to bring their ideas to production faster. If you’re building AI-powered apps on MongoDB, we’d love to hear from you. 
Sign up to our AI Innovators program where successful applicants receive no-cost MongoDB Atlas credits to develop apps, access to technical resources, and the opportunity to showcase your work to the broader AI community.
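For readers who haven't seen it yet, the $vectorSearch stage mentioned above takes a query embedding and returns the nearest stored vectors. Here is a minimal sketch; the index name, field name, and numeric parameters are illustrative assumptions, while the stage's keys themselves are the documented ones.

```python
# Sketch of a $vectorSearch aggregation pipeline. The index name,
# field name, and numeric parameters are illustrative assumptions.
def build_vector_search_pipeline(query_vector: list, k: int = 5) -> list:
    return [
        {
            "$vectorSearch": {
                "index": "vector_index",      # Atlas Vector Search index
                "path": "embedding",          # field storing the vectors
                "queryVector": query_vector,  # embedding of the question
                "numCandidates": 20 * k,      # breadth of the ANN search
                "limit": k,                   # top-k documents returned
            }
        },
        # Return the source text with its similarity score, ready to be
        # injected as grounding context into an LLM prompt (the RAG step).
        {"$project": {"text": 1, "score": {"$meta": "vectorSearchScore"}}},
    ]

pipeline = build_vector_search_pipeline([0.12, -0.03, 0.91], k=3)
```

In a RAG template, the projected `text` fields of the top-k documents would be concatenated into the prompt sent to the LLM.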

November 2, 2023

Building AI with MongoDB: Supercharging Three Communication Paradigms

Communication mediums are core to who we are as humans, from understanding each other to creating bonds and a shared purpose. The methods of communication have evolved over thousands of years, from cave drawings and scriptures to now being able to connect with anyone at any time via internet-enabled devices. The latest paradigm shift to supercharge communication is through the use and application of natural language processing and artificial intelligence. In our latest roundup of AI innovators building with MongoDB, we’re going to focus on three companies building the future across three mediums of communication: data, language, and video. Our blog begins by featuring SuperDuperDB . The company provides tools for developers to apply AI and machine learning on top of their existing data stores for generative AI applications such as chatbots, Question-Answering (Q-A), and summarization. We then cover Algomo , who uses generative AI to help companies offer their best and most personalized service to customers and employees across more than 100 languages. Finally, Source Digital is a monetization platform delivering a new era of customer engagement through video and the metaverse. Let’s dive in to learn more about each company and use case. Check out our AI resource page to learn more about building AI-powered apps with MongoDB. Bringing AI to your database SuperDuperDB is an open-source Python package providing tools for developers to apply AI and machine learning on top of their existing data stores. Developers and data scientists continue to use their preferred tools, avoiding both data migration and duplication to specialized data stores. They also have the freedom to run SuperDuperDB anywhere, avoiding lock-in to any one AI ecosystem. With SuperDuperDB developers can: Deploy their chosen AI models to automatically compute outputs (inference) in their database in a single environment with simple Python commands. 
Train models on their data simply by querying, without additional ingestion and pre-processing. Integrate AI APIs (such as OpenAI) to work together with other models on their data effortlessly. Search data with vector search, including model management and serving. Today SuperDuperDB supports MongoDB alongside select relational databases, cloud data warehouses, data lake houses, and object stores. SuperDuperDB provides an array of sample use cases and notebooks that developers can use to get started, including vector search with MongoDB, multimodal search, retrieval-augmented generation (RAG), transfer learning, and many more. The team has also built an AI chatbot app that allows users to ask questions about technical documentation. The app is built on top of MongoDB and OpenAI with FastAPI and React (FARM stack) + SuperDuperDB. It showcases how easily developers can build next-generation AI applications on top of their existing data stores with SuperDuperDB. You can try the app and read more about how it is built at SuperDuperDB's documentation. “We integrate MongoDB as one of the key backend databases for our platform, the PyMongo driver for app connectivity, and Atlas Vector Search for storing and querying vector embeddings,” said Duncan Blythe, co-founder of SuperDuperDB. “It therefore made sense for us to partner more closely with the company through MongoDB Ventures. We get direct access to the MongoDB engineering team to help optimize our product, along with visibility within MongoDB’s vast ecosystem of developers.” Here are some useful links to learn more: SuperDuperDB GitHub SuperDuperDB Docs Intro SuperDuperDB Use Cases Page SuperDuperDB Blog Conversational support, powered by generative AI Algomo uses generative AI to help companies offer their best service to both their customers and employees across more than 100 languages. The company’s name is a portmanteau of the words Algorithm (originating from Arabic) and Homo (human in Latin). 
It reflects the two core design principles underlying Algomo’s products: Human-centered AI that amplifies and augments rather than displaces human abilities. Inclusive AI that is accessible to all, and that is non-discriminatory and unbiased in its outputs. With Algomo, customers can get a ChatGPT-powered bot up on their site in less than 3 minutes. More than just a bot, Algomo also provides a complete conversational platform. This includes Question-Answering text generators and autonomous agents that triage and orchestrate support processes, escalating to human support staff for live chat as needed. It works across any communication channel from web and Google Chat to Intercom, Slack, WhatsApp, and more. Customers can instantly turn their support articles, past conversations, slack channels, Notion pages, Google Docs, and content on their public website into personalized answers. Algomo vectorizes customer content, using that alongside OpenAI’s ChatGPT. The company uses RAG (Retrieval Augmented Generation) prompting to inject relevant context to LLM prompts and Chain-Of-Thought prompting to increase answer accuracy. A fine-tuned implementation of BERT is also used to classify user intent and retrieve custom FAQs. Taking advantage of its flexible document data model, Algomo uses MongoDB Atlas to store customer data alongside conversation history and messages, providing long-term memory for context and continuity in support interactions. As a fully managed cloud service, Algomo’s team can leave all of the operational heavy lifting to MongoDB, freeing its team up to focus on building great conversational experiences. The team considers using MongoDB as a “no-brainer,” allowing them to iterate quickly while removing the support burden via the simplicity and reliability of the Atlas platform. The company’s engineers are now evaluating Atlas Vector Search as a replacement for its current standalone vector database, further reducing costs and simplifying their codebase. 
Being able to store source data, chunks, and metadata alongside vector embeddings eliminates the overhead and duplication of synchronizing data across two separate systems. The team is also looking forward to using Atlas Vector Search for their upcoming Agent Assist feature that will provide suggested answers, alongside relevant documentation snippets, to customer service agents who are responding to live customer queries. Being part of the AI Innovators program provides Algomo with direct access to MongoDB technical expertise and best practices to accelerate its evaluation of Atlas Vector Search. Free Atlas credits, in addition to those provided by the AWS and Azure start-up programs, help Algomo reduce its development costs. Creating a new media currency with video detection and monetization Source Digital, Inc. is a monetization platform that delivers a new era of customer engagement through video and the metaverse. The company provides tools for content creators and advertisers to display real-time advertisements and content recommendations directly to users on websites or in video streams hosted on platforms like Netflix, YouTube, Meta, and Vimeo. Source Digital's engineers built their own in-house machine learning and vector embedding models using Google Vision AI and TensorFlow. These models provide computer vision across video streams, detecting elements that automatically trigger the display of relevant ads and recommendations. An SDK is also provided to customers so that they can integrate the video detection models onto their own websites. The company started out using PostgreSQL to store video metadata and model features, alongside the pgvector extension for video vector embeddings. This initial setup worked well at a small scale, but as Source Digital grew, PostgreSQL began to creak, with costs rapidly escalating. 
PostgreSQL can only be scaled vertically, and so the company encountered step changes in costs as they moved to progressively larger cloud instance sizes. Scaling limitations were compounded by the need for queries to execute resource-intensive JOIN operations. These were needed to bring together data in all of the different database tables hosting video metadata, model features, and vector embeddings. With prior MongoDB experience from an earlier audio streaming project, the company’s engineers were confident they could tame their cost challenges. Horizontal scale-out allows MongoDB to grow at much more granular levels, aligning costs with application usage. Expensive JOIN operations are eliminated because of the flexibility of MongoDB’s document data model. Now developers store the metadata, model features, and vector embeddings together in a single record. The company estimates that the migration from PostgreSQL to MongoDB Atlas and Vector Search will reduce monthly costs by 7x . These are savings that can be reinvested into accelerating delivery against the feature backlog. Being part of the MongoDB AI Innovators Program provides Source Digital with access to expert technical advice on scaling its platform, along with co-marketing opportunities to further fuel its growth. What's next? If you are getting started with building AI-enabled apps on MongoDB, sign up for our AI Innovators Program . Successful applicants get access to expert technical advice, free MongoDB Atlas credits, co-marketing opportunities, and – for eligible startups, introductions to potential venture investors. 
We’ve seen a whole host of interesting use cases and different companies building the future with AI, so you can refer back to some of our earlier blog posts below: Building AI with MongoDB: first qualifiers include AI at the network edge for computer vision and augmented reality; risk modeling for public safety; and predictive maintenance paired with Question-Answering generation for maritime operators. Building AI with MongoDB: compliance to copilots features AI in healthcare along with intelligent assistants that help product managers specify better products and help sales teams compose emails that convert 2x higher. Building AI with MongoDB: unlocking value from multimodal data showcases open source libraries that transform unstructured data into a usable JSON format; entity extraction for contracts management; and making sense of “dark data” to build customer service apps. Building AI with MongoDB: Cultivating Trust with Data covers three key customer use cases improving model explainability, securing generative AI outputs, and transforming cyber intelligence with the power of MongoDB. And please take a look at the MongoDB for Artificial Intelligence resources page for the latest best practices that get you started in turning your idea into an AI-driven reality. Consider joining our AI Innovators Program to build the next big thing in AI with us!

October 16, 2023

Building AI with MongoDB: Cultivating Trust with Data

“Trust is like the air we breathe – when it’s present, nobody really notices; when it’s absent, everybody notices.” - Warren Buffett The issue of trust is one that dominates discussions around the safe and responsible adoption of AI across business and society. It was another Warren - this time Warren Bennis, a pioneer in modern leadership principles – who was attributed as saying "Trust is the lubrication that makes it possible for organizations to work." Particularly relevant when we think about how organizations are starting to embed AI into the very fabric of their businesses. On one hand, we have governments around the world that are at varying stages of regulating their way to trustworthy AI. However, this will not be a quick process, and enterprises can’t afford to wait. Businesses need to make progress now if they are going to unlock the opportunities presented by AI. In our latest roundup of AI innovators building with MongoDB, we’re going to focus on three companies tackling trust from different angles. We feature Nomic who are working to make AI more explainable. Robust Intelligence is focused on securing AI models against prompt injections, data poisoning, bias, PII leakage, and more. Finally, VISO TRUST comes at this issue from a totally different perspective. They use AI to help their customers reduce cybersecurity risks and improve trust across the supply chain. Let's dig in. Check out our AI resource page to learn more about building AI-powered apps with MongoDB. Making AI explainable and accessible Despite the huge advances in AI and its use in almost every industry, very little is known about how the most popular models actually work. What data are they trained on? What are they learning? How can we compare accuracy between different models? These are the questions Nomic AI is seeking to help us answer through its Atlas and GPT4All products. 
Nomic Atlas is a data engine that allows users to explore, label, search, share, and build on massive datasets using their web browser. With Atlas, users can begin to understand what data their chosen AI models are learning from and the associations they are making during the training phase. Atlas can be used for exploratory data analysis, data labeling and cleansing, and visualizations of vector embeddings. To see Nomic Atlas in action, take a look at the recent blog post with Hugging Face announcing IDEFICS , an open-access reproduction of the visual language model based on Flamingo. The model takes image and text inputs and produces text outputs from them. For example, it can answer questions about images, describe visual content, and create stories grounded in multiple images. Nomic allows users to visually explore the content of the training data, as illustrated in the image below. Atlas can be used to curate high-quality training and instruction-tuned datasets for the GPT4All models. Nomic GPT4All is an ecosystem for training and deploying powerful and customized large language models that run locally on consumer-grade CPUs in Windows, Mac, and Ubuntu Linux clients. With GPT4All, users have access to a free-to-use, locally running, privacy-aware chatbot that doesn’t require expensive and scarce GPUs to train and infer on, or an internet connection. It can power question-answering systems, personal writing assistants, document summarization, and code generation. Demand for GPT4All has been explosive, accruing more than 20,000 GitHub stars within its first week of launch. “Every month MongoDB is adding hundreds of organizations and thousands of developers who are building AI-enabled apps on its multi-cloud developer data platform ,” said Brandon Duderstadt, CEO of Nomic. “It makes sense for us to partner with MongoDB Ventures . 
They are helping us accelerate our vision of making AI explainable and accessible to everyone.” Securing generative AI, supercharged by your data Robust Intelligence delivers end-to-end AI risk management to protect organizations from security, ethical, and operational risks. The company’s platform automates testing and compliance across the AI lifecycle through continuous validation and protects models in real-time with AI Firewall. This combined approach enables Robust Intelligence to proactively manage risk for any model type, including generative AI and gives organizations the confidence to unleash the true potential of AI. Robust Intelligence is trusted by leading companies including ADP, JPMorgan Chase, Expedia, Deloitte, PwC, and the U.S. Department of Defense. Recent advancements in generative AI have motivated companies to experiment with potential applications, but a lack of security controls has exposed companies to unmanaged risks. This challenge is exacerbated when sensitive company information is used to enrich pre-trained models, such as connecting vector databases, in order to increase the relevance to the end user. Robust Intelligence’s AI Firewall protects large language models (LLMs) in production by validating inputs and outputs in real-time. It assesses and mitigates operational risks such as hallucinations; ethical risks, including model bias and toxic outputs; and security risks such as prompt injections and PII extraction. AI Firewall stops bad or malicious inputs from reaching AI models and prevents undesired AI-generated results from reaching the application. Customers can confidently connect MongoDB Atlas Vector Search to any commercial or open-source LLM for secure retrieval-augmented generation with the AI Firewall integration. Atlas Vector Search serves as the memory and fact database for AI Firewall, ensuring the AI model provides enriched responses without hallucinating. 
Additionally, it serves as the memory and database to store historical data points. This is important in the context of identifying more advanced security attacks, such as data poisoning and model extraction, which often manifest across a cluster of data points as opposed to a single data point. Yaron Singer, CEO and co-founder at Robust Intelligence commented “By incorporating MongoDB’s Atlas Vector Search into the AI validation process, customers can confidently use their databases to enhance LLM responses knowing that sensitive information will remain secure. The integration provides seamless protection against a comprehensive set of security, ethical, and operational risks.” Being part of the MongoDB Partner Program provides Robust Intelligence with access to specialist technical support to optimize product integrations and provides visibility to the MongoDB customer base. Transforming cyber risk intelligence VISO TRUST is an AI-powered third-party cyber risk and trust platform that enables any company to access actionable vendor security information in minutes. VISO TRUST delivers fast and accurate intelligence needed to make informed cybersecurity risk decisions at scale. Today VISO TRUST has many great enterprise customers like InstaCart, Gusto, and Upwork and they all say the same thing: 90% less work, 80% reduction in time to assess risk, and near 100% vendor adoption. How does VISO TRUST achieve these results? Pierce Lamb, Senior Software Engineer on the Data and Machine Learning team at VISO TRUST provides more detail: “VISO TRUST Platform easily engages third parties, saving everyone time and resources. In a 5-minute web-based session, third parties are prompted to upload relevant artifacts of the security program that already exists, and our supervised AI – which we call Artifact Intelligence – does the rest. First, VISO TRUST deploys discriminator models that produce high-confidence predictions about features of the artifact. 
Secondly, artifacts have text content parsed out of them, which we embed and store in MongoDB Atlas to become part of our dense retrieval system. This dense retrieval system performs Retrieval-Augmented Generation (RAG) using MongoDB features like Atlas Vector Search to provide ranked context to large language model (LLM) prompts. Thirdly, we use RAG results to seed LLM prompts and chain together their outputs to produce extremely accurate factual information about the artifact in the pipeline. This information is able to provide instant intelligence to customers that previously took weeks to produce.” VISO TRUST is the only SaaS third-party cyber risk management platform that delivers the rapid security intelligence needed for modern companies to make critical risk decisions early in the procurement process. VISO TRUST uses state-of-the-art models from OpenAI, Hugging Face, Anthropic, Google, and AWS, augmented by vector search and retrieval from MongoDB Atlas. Read our interview blog post with VISO TRUST to learn more. What's next? If you are getting started with building AI-enabled apps on MongoDB, sign up for our AI Innovators Program. Successful applicants get access to expert technical advice, free MongoDB Atlas credits, co-marketing opportunities, and – for eligible startups – introductions to potential venture investors. In the spirit of "Trust, but verify" (Ronald Reagan), if you’re not sure how the program – or indeed, MongoDB – could deliver value to you, take a look at earlier blog posts in this series: Building AI with MongoDB: first qualifiers include AI at the network edge for computer vision and augmented reality; risk modeling for public safety; and predictive maintenance paired with question-answer generation for maritime operators. Building AI with MongoDB: compliance to copilots features AI in healthcare along with intelligent assistants that help product managers specify better products and help sales teams compose emails that convert 2x higher. 
Building AI with MongoDB: unlocking value from multimodal data showcases open source libraries that transform unstructured data into a usable JSON format; entity extraction for contracts management; and making sense of “dark data” to build customer service apps. You should look at the MongoDB for Artificial Intelligence resources page for the latest best practices that get you started in turning your idea into an AI-driven reality.

October 3, 2023

Performance Best Practices: Indexing

Welcome to the third in our series of blog posts covering performance best practices for MongoDB. In this series, we are covering key considerations for achieving performance at scale across a number of important dimensions, including: Data modeling and memory sizing (the working set) Query patterns and profiling Indexing, which we cover today Sharding Transactions and read/write concerns Hardware and OS configuration Benchmarking Having both worked for a few different database vendors over the past 15 years, we can safely say that failing to define the appropriate indexes is the number one performance issue technical support teams have to address with users. So we need to get this right… here are the best practices to help you. Indexes in MongoDB In any database, indexes support the efficient execution of queries. Without them, the database must scan every document in a collection or table to select those that match the query statement. If an appropriate index exists for a query, the database can use the index to limit the number of documents it must inspect. MongoDB offers a broad range of index types and features with language-specific sort orders to support complex access patterns to your data. MongoDB indexes can be created and dropped on demand to accommodate evolving application requirements and query patterns, and they can be declared on any field within your documents, including fields nested within arrays. So let's cover how you make the best use of indexes in MongoDB. Use compound indexes Compound indexes are indexes composed of several different fields. 
For example, instead of having one index on "Last name" and another on "First name", it is typically most efficient to create an index that includes both "Last name" and "First name" if you query against both names. Our compound index can still be used to filter queries that specify the last name only. Follow the ESR rule For compound indexes, this rule of thumb is helpful when deciding the order of fields in the index: First, add the fields against which equality queries are run. The next fields to be indexed should reflect the sort order of the query. The last fields represent the range of data to be accessed. Use covered queries when possible Covered queries return results directly from an index, without having to access the source documents, and are therefore very efficient. For a query to be covered, all of the fields needed to filter, sort, and/or return to the client must be present in an index. To determine whether a query is covered, use the explain() method. If the explain() output shows totalDocsExamined as 0, this confirms the query is covered by an index. Read more in the documentation on explain results. A common gotcha when trying to achieve covered queries is that the _id field is always returned by default. You need to explicitly exclude it from the query results, or add it to the index. In sharded clusters, MongoDB internally needs to access the fields of the shard key. This means covered queries are only possible when the shard key is part of the index. It is usually a good idea to do this anyway. Be careful when considering indexes on low-cardinality fields Queries on fields with a small number of unique values (low cardinality) can return large result sets. Compound indexes may include fields with low cardinality, but the value of the combined fields should exhibit high cardinality. 
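As a concrete illustration of the ESR rule and a covered query, consider a query that filters on last name (equality), sorts on first name, and ranges over a signup date. This sketch uses hypothetical field and collection names:

```python
# Illustrative sketch of the ESR rule. Suppose the query is:
#   filter: last_name == "Silva"         (Equality)
#   sort:   first_name ascending         (Sort)
#   filter: signup_date >= some date     (Range)
# The compound index key order follows E, then S, then R (1 = ascending).
esr_index_keys = [
    ("last_name", 1),    # E: equality predicate first
    ("first_name", 1),   # S: the query's sort order next
    ("signup_date", 1),  # R: range predicate last
]

# A projection that can make the query covered: every field filtered on,
# sorted on, or returned is in the index, and _id is explicitly excluded.
covered_projection = {"last_name": 1, "first_name": 1, "_id": 0}

# With PyMongo the index would be created as:
#   db.customers.create_index(esr_index_keys)
# and explain() on the query should then show totalDocsExamined: 0.
```

Note that the same index also serves queries on last name alone, since it is the leading key.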
Eliminate unnecessary indexes Indexes are resource-intensive: even with compression in the MongoDB WiredTiger storage engine, they consume RAM and disk. As fields are updated, the associated indexes must be maintained, incurring additional CPU and disk I/O overhead. MongoDB provides tools to help you understand index usage, which we will cover later in this post. Wildcard indexes are not a replacement for workload-based index planning For workloads with many ad-hoc query patterns or that handle highly polymorphic document structures, wildcard indexes give you a lot of extra flexibility. You can define a filter that automatically indexes all matching fields, subdocuments, and arrays in a collection. As with any index, they also need to be stored and maintained, so they will add overhead to the database. If your application's query patterns are known in advance, you should use more selective indexes on the specific fields accessed by the queries. Use text search to match words inside a field Regular indexes are useful for matching the entire value of a field. If you only want to match a specific word in a field with a lot of text, use a text index. If you are running MongoDB in the Atlas service, consider using Atlas Full Text Search, which provides a fully managed Lucene index integrated with the MongoDB database. FTS provides higher performance and greater flexibility to filter, rank, and sort through your database to quickly surface the most relevant results to your users. Use partial indexes Reduce the size and performance overhead of indexes by only including documents that will be accessed through the index. 
For example, create a partial index on the orderID field that only includes order documents with an orderStatus of "In progress", or only index the emailAddress field for documents where it exists. Take advantage of multi-key indexes for querying arrays If your query patterns require accessing individual array elements, use a multi-key index. MongoDB creates an index key for each element in the array, and the index can be constructed over arrays holding both scalar values and nested documents. Avoid regular expressions that are not anchored or rooted Indexes are ordered by value. Leading wildcards are inefficient and may result in full index scans. Trailing wildcards can be efficient if there are sufficient case-sensitive leading characters in the expression. Avoid case-insensitive regular expressions If the only reason for using a regex is case insensitivity, use a case-insensitive index instead, as they are faster. Use index optimizations available in the WiredTiger storage engine If you are self-managing MongoDB, you can optionally place indexes on their own separate volume, allowing for faster disk paging and lower contention. See the WiredTiger options for more information. Use the explain plan We covered the use of MongoDB's explain plan in the previous post on query patterns and profiling, and this is the best tool for checking index coverage for individual queries. Working from the explain plan, MongoDB provides visualization tools to help further improve your understanding of your indexes, and provides intelligent and automatic recommendations on which indexes to add. 
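The partial index and regex guidance above can be sketched as follows. The orderID and orderStatus fields mirror the example in the text; the regex patterns and everything else are illustrative assumptions.

```python
import re

# Partial index sketch: only "In progress" orders are indexed, keeping
# the index small (field names mirror the example in the text).
partial_index_keys = [("orderID", 1)]
partial_index_options = {
    "partialFilterExpression": {"orderStatus": "In progress"}
}
# With PyMongo:
#   db.orders.create_index(partial_index_keys, **partial_index_options)

# Anchored vs. unanchored regex: because index entries are ordered by
# value, a leading ^ followed by literal characters lets the database
# walk a bounded index range, while a leading wildcard forces a scan
# of the whole index.
anchored = re.compile(r"^ACME")            # efficient: bounded index range
leading_wildcard = re.compile(r".*Corp$")  # inefficient: full index scan
```

In a MongoDB query these would appear as `{"supplier": {"$regex": "^ACME"}}` and so on; the efficiency difference is in how much of the index must be examined.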
Visualize a cobertura do índice com MongoDB Compass e Atlas Data Explorer Como a GUI gratuita do MongoDB Compass oferece muitos recursos para ajudá-lo a otimizar o desempenho da consulta, incluindo a exploração do seu esquema e a visualização dos planos de explicação da consulta – duas áreas abordadas anteriormente nesta série. A guia de índices do Compass adiciona outra ferramenta ao seu arsenal. Ele lista os índices existentes para uma collection, informando o nome e as chaves do índice, juntamente com seu tipo, tamanho e quaisquer propriedades especiais. Através da guia de índice você também pode adicionar e eliminar índices conforme necessário. Um recurso realmente útil é o uso do índice, que mostra com que frequência um índice foi usado. Ter muitos índices pode ser quase tão prejudicial ao seu desempenho quanto ter poucos, tornando esse recurso especialmente valioso para ajudá-lo a identificar e remover índices que não estão sendo usados. Isso ajuda a liberar espaço no conjunto de trabalho e elimina a sobrecarga do banco de dados resultante da manutenção do índice. Se você estiver executando o MongoDB em nosso serviço Atlas totalmentemanaged , a visualização dos índices no Data Explorer lhe dará a mesma funcionalidade do Compass, sem que você precise se conectar ao seu banco de dados com uma ferramenta separada. Você também pode recuperar estatísticas de índice usando o estágio aggregation pipeline $indexStats . Recomendações de índice automatizado Mesmo com toda a telemetria fornecida pelas ferramentas do MongoDB, você ainda é responsável por extrair e analisar os dados necessários para tomar decisões sobre quais índices adicionar. O limite para consultas lentas varia com base no tempo médio de operações no seu cluster para fornecer recomendações pertinentes à sua carga de trabalho. 
Os índices recomendados são acompanhados por consultas de amostra, agrupadas por formato de consulta (ou seja, consultas com estrutura de predicado, classificação e projeção semelhantes), que foram executadas em uma collection que se beneficiaria com a adição de um índice sugerido. O Performance Advisor não afeta negativamente o desempenho do seu Atlas cluster. Se você estiver satisfeito com a recomendação, poderá implementar os novos índices automaticamente, sem incorrer em tempo de inatividade do aplicativo. Qual é o próximo Isso encerra esta última edição da série de práticas recomendadas de desempenho. A MongoDB University oferece um curso de treinamento gratuito baseado na Web sobre o desempenho do MongoDB . Esta é uma ótima maneira de aprender mais sobre o poder da indexação.

October 2, 2023

Performance Best Practices: Indexing

Welcome to the third in our series of blog posts covering performance best practices for MongoDB. In this series, we cover key considerations for achieving performance at scale across a number of important dimensions, including: Data modeling and memory sizing (the working set) Query patterns and profiling Indexing, which we'll cover today Sharding Transactions and read/write concerns Hardware and OS configuration Benchmarking Having both worked for a couple of different database vendors over the past 15 years, we can safely say that failing to define the appropriate indexes is the number one performance issue technical support teams have to address with users. So we need to get it right... here are the best practices to help you. Indexes in MongoDB In any database, indexes support the efficient execution of queries. Without them, the database must scan every document in a collection or table to select those that match the query statement. If an appropriate index exists for a query, the database can use the index to limit the number of documents it must inspect. MongoDB offers a wide range of index types and features with language-specific collations to support complex access patterns to your data. MongoDB indexes can be created and dropped on demand to accommodate evolving application requirements and query patterns, and they can be declared on any field within your documents, including fields nested within arrays. So let's look at how to get the most out of indexes in MongoDB. Use compound indexes Compound indexes are indexes composed of several different fields.
For example, instead of having one index on "Last name" and another on "First name," it is typically most efficient to create an index that includes both "Last name" and "First name" if you query against both names. Our compound index can still be used to filter queries that specify the last name only. Follow the ESR rule For compound indexes, this rule of thumb is helpful when deciding the order of fields in the index: First, add the fields against which equality queries are run. The next fields to index should reflect the sort order of the query. The last fields represent the range of data to be accessed. Use covered queries when possible Covered queries return results from an index directly, without having to access the source documents, and are therefore very efficient. For a query to be covered, all of the fields needed to filter, sort, and/or return to the client must be present in an index. To determine whether a query is covered, use the explain() method. If the explain() output shows totalDocsExamined as 0, the query is covered by an index. Read more in the documentation on explain results. A common gotcha when trying to achieve covered queries is that the _id field is always returned by default. You need to explicitly exclude it from the query results, or add it to the index. In sharded clusters, MongoDB internally needs to access the fields of the shard key. This means covered queries are only possible when the shard key is part of the index. It is usually a good idea to do this anyway. Be cautious when considering indexes on low-cardinality fields Queries on fields with a small number of unique values (low cardinality) can return large result sets.
Compound indexes can include fields with low cardinality, but the value of the combined fields should exhibit high cardinality. Eliminate unnecessary indexes Indexes are resource-intensive: even with compression in the MongoDB WiredTiger storage engine, they consume RAM and disk. As fields are updated, the associated indexes must be maintained, incurring additional CPU and disk I/O overhead. MongoDB provides tools to help you understand index usage, which we will cover later in this post. Wildcard indexes are not a replacement for workload-based index planning For workloads with many ad hoc query patterns, or that handle highly polymorphic document structures, wildcard indexes give you a lot of extra flexibility. You can define a filter that automatically indexes all matching fields, subdocuments, and arrays in a collection. As with any index, wildcard indexes also need to be stored and maintained, so they add overhead to the database. If your application's query patterns are known in advance, you should use more selective indexes on the specific fields accessed by the queries. Use text search to match words inside a field Regular indexes are useful for matching the entire value of a field. If you only want to match a specific word in a field with a lot of text, use a text index. If you are running MongoDB in the Atlas service, consider using Atlas Full Text Search, which provides a fully managed Lucene index integrated with the MongoDB database. FTS provides higher performance and greater flexibility to filter, rank, and sort through your database to quickly surface the most relevant results to your users.
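To make the ESR rule described above concrete, here is a small illustrative sketch (the collection and field names are made up for the example) that maps a query's equality, sort, and range predicates to a compound-index field order:

```python
# ESR: Equality fields first, then Sort fields, then Range fields.
def esr_index_spec(equality, sort, range_fields):
    """Build a compound-index key list of (field, direction) pairs following ESR."""
    spec = [(field, 1) for field in equality]        # E: equality predicates
    spec += list(sort)                               # S: sort keys keep their direction
    spec += [(field, 1) for field in range_fields]   # R: range predicates go last
    return spec

# Hypothetical query: find orders for one user in a date range, sorted by total.
index_spec = esr_index_spec(
    equality=["userId"],
    sort=[("total", -1)],
    range_fields=["orderDate"],
)
print(index_spec)  # [('userId', 1), ('total', -1), ('orderDate', 1)]
```

With PyMongo, a list like this can be passed directly to `collection.create_index(index_spec)`; the equivalent in mongosh would be `db.orders.createIndex({userId: 1, total: -1, orderDate: 1})`.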
Use partial indexes Reduce the size and performance overhead of indexes by only including documents that will be accessed through the index. For example, create a partial index on the orderID field that only includes order documents with an orderStatus of "In progress," or only index the emailAddress field for documents where it exists. Take advantage of multikey indexes for querying arrays If your query patterns require accessing individual array elements, use a multikey index. MongoDB creates an index key for each element in the array, and multikey indexes can be built over arrays that hold both scalar values and nested documents. Avoid regular expressions that are not anchored or rooted Indexes are ordered by value. Leading wildcards are inefficient and may result in full index scans. Trailing wildcards can be efficient if there are sufficient case-sensitive leading characters in the expression. Avoid case-insensitive regular expressions If the only reason for using a regex is case insensitivity, use a case-insensitive index instead, as they are faster. Use the index optimizations available in the WiredTiger storage engine If you are self-managing MongoDB, you can optionally place indexes on their own separate volume, allowing for faster disk paging and lower contention. See the WiredTiger options for more information. Use the explain plan We covered the use of MongoDB's explain plan in the previous post on query patterns and profiling, and it is the best tool for checking index coverage for individual queries.
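As an illustration of how much a partial index can shrink, this sketch simulates the effect of the partial-index filter described above on a handful of made-up order documents (in MongoDB the filter is declared once, at index-creation time, not evaluated in application code):

```python
# Hypothetical order documents.
orders = [
    {"orderID": "A1", "orderStatus": "In progress"},
    {"orderID": "B2", "orderStatus": "Shipped"},
    {"orderID": "C3", "orderStatus": "In progress"},
    {"orderID": "D4", "orderStatus": "Cancelled"},
]

def partial_index_entries(docs, key_field, predicate):
    """Keep index entries only for documents matching the partial filter."""
    return sorted(doc[key_field] for doc in docs if predicate(doc))

# Equivalent filter to: partialFilterExpression: {orderStatus: "In progress"}
entries = partial_index_entries(orders, "orderID",
                                lambda d: d["orderStatus"] == "In progress")
print(entries)  # ['A1', 'C3']: half the entries of a full index on orderID
```

The real index would be declared with, for example, `db.orders.createIndex({orderID: 1}, {partialFilterExpression: {orderStatus: "In progress"}})`, and queries must include the filtered condition for the planner to select the partial index.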
Working from the explain plan, MongoDB provides visualization tools that help further improve your understanding of your indexes, along with intelligent, automatic recommendations on which indexes to add. Visualize index coverage with MongoDB Compass and Atlas Data Explorer As the free GUI for MongoDB, Compass provides many features to help you optimize query performance, including exploring your schema and visualizing query explain plans, two areas covered earlier in this series. The indexes tab in Compass adds another tool to your arsenal. It lists the existing indexes for a collection, reporting the name and keys of each index, along with its type, size, and any special properties. Through the indexes tab you can also add and drop indexes as needed. A really useful feature is index usage, which shows you how often an index has been used. Having too many indexes can be almost as damaging to your performance as having too few, making this feature especially valuable in helping you identify and remove indexes that are not being used. This helps you free up working set space and eliminates the database overhead incurred in maintaining the index. If you are running MongoDB in our fully managed Atlas service, the indexes view in the Data Explorer gives you the same functionality as Compass, without you having to connect to your database with a separate tool. You can also retrieve index statistics using the $indexStats aggregation pipeline stage. Automated index recommendations Even with all the telemetry provided by MongoDB's tools, you are still responsible for pulling and analyzing the data needed to make decisions on which indexes to add.
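One way to act on the $indexStats output mentioned above is to flag indexes whose access counter is zero. A minimal sketch, assuming documents shaped like those the stage returns (a name plus an accesses.ops counter); note that these counters reset on server restart, so confirm a zero count over a full workload cycle before dropping anything:

```python
# Results in the shape returned by: db.orders.aggregate([{"$indexStats": {}}])
index_stats = [
    {"name": "_id_", "accesses": {"ops": 52310}},
    {"name": "userId_1_total_-1", "accesses": {"ops": 8841}},
    {"name": "legacyField_1", "accesses": {"ops": 0}},
]

def unused_indexes(stats):
    """Names of indexes never used since stats were last reset (skip _id_)."""
    return [s["name"] for s in stats
            if s["name"] != "_id_" and s["accesses"]["ops"] == 0]

print(unused_indexes(index_stats))  # ['legacyField_1']
```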
To help, the Atlas Performance Advisor monitors slow queries and recommends new indexes to improve performance, with the slow query threshold varying based on the average time of operations on your cluster so that recommendations stay pertinent to your workload. Recommended indexes are accompanied by sample queries, grouped by query shape (i.e., queries with similar predicate structures, sorts, and projections), that were run against a collection that would benefit from the addition of a suggested index. The Performance Advisor does not negatively affect the performance of your Atlas clusters. If you are happy with a recommendation, you can roll out the new indexes automatically, without incurring any application downtime. What's next That wraps up this latest installment of the performance best practices series. MongoDB University offers a free, web-based training course on MongoDB performance. This is a great way to learn more about the power of indexing.

October 2, 2023


Building AI with MongoDB: Unlocking Value from Multimodal Data

One of the most powerful capabilities of AI is its ability to learn, interpret, and create from input data of any shape and modality. This could be anything from structured records stored in a database to unstructured text, computer code, video, images, and audio streams. Vector embeddings are one of the key AI enablers in this space. Encoding our data as vector embeddings dramatically expands the ability to work with this multimodal data. We’ve gone from depending on data scientists training highly specialized models just a few years ago to developers today building general-purpose apps incorporating NLP and computer vision. The beauty of vector embeddings is that data that is unstructured and therefore completely opaque to a computer can now have its meaning and structure inferred and represented via these embeddings. Using a vector store such as Atlas Vector Search means we can search and compute over unstructured and multimodal data in the same way we’ve always been able to with structured business data. Now we can search for it using natural language, rather than specialized query languages. Considering that 80%+ of the data that enterprises create every day is unstructured, we start to see how vector search combined with LLMs and generative AI opens up new use cases and revenue streams. In this latest round-up of companies building AI with MongoDB, we feature three examples that are doing just that. Check out our AI resource page to learn more about building AI-powered apps with MongoDB. The future of business data: Unlocking the hidden potential of unstructured data In today's data-driven world, businesses are always searching for ways to extract meaningful insights from the vast amounts of information at their disposal. From improving customer experiences to enhancing employee productivity, the ability to leverage data enables companies to make more informed and strategic decisions.
However, most of this valuable data is trapped in complex formats, making it difficult to access and analyze. That's where Unstructured comes in. Imagine an innovative tool that can take all of your unstructured data – be it a PDF report, a colorful presentation, or even an image – and transform it into an easily accessible format. This is exactly what Unstructured does. They delve deep, pulling out crucial data, and present it in a simple, universally understood JSON format. This makes your data ready to be transformed, stored, and searched in powerful databases like MongoDB Atlas Vector Search. What does this mean for your business? It's simple. By automating the data extraction process, you can quickly derive actionable insights, offering enhanced value to your customers and improving operational efficiencies. Unstructured also offers an upcoming image-to-text model. This provides even more flexibility for users to ingest and process nearly any file containing natural language data. And, keep an eye out for notable upgrades in table extraction – yet another step in ensuring you get the most from your data. Unstructured isn't just a tool for tech experts. It's for any business aiming to understand their customers better, seeking to innovate, and looking to stay ahead in a competitive landscape. Unstructured’s widespread usage is a testament to its value – with over 1.5 million downloads and adoption by thousands of enterprises and government organizations. Brian Raymond, the founder and CEO of Unstructured, perfectly captures this synergy, saying, “As the world’s most widely used natural language ingestion and preprocessing platform, partnering with MongoDB was a natural choice for us. This collaboration allows for even faster development of intelligent applications. Together, we're paving the way for how businesses harness their data.” MongoDB and Unstructured are bridging the gap between data and insights, ensuring businesses are well-equipped to navigate the challenges of the digital age.
Whether you’re a seasoned entrepreneur or just starting out, it's time to harness the untapped potential of your unstructured data. Visit Unstructured to get started with any of their open-source libraries. Or join Unstructured’s community Slack and explore how to seamlessly use your data in conjunction with large language models. Making sense of complex contracts with entity extraction and analysis Catylex is a revolutionary contract analytics solution for any business that needs to extract and optimize contract data. The company’s best-in-class contract AI automatically recognizes thousands of legal and business concepts out-of-the-box, making it easy to get started and quickly generate value. Catylex’s AI models transform wordy, opaque documents into detailed insights revealing the rights, obligations, risks, and commitments associated with the business, its suppliers, and customers. The insights generated can be used to accelerate contract review and to feed operational and risk data into core business systems (CLMs, ERPs, etc.) and teams. Documents are processed using Catylex’s proprietary extraction pipeline, which uses a combination of machine learning/NLP techniques (custom Named Entity Recognition, text classification) and domain expert augmentation to parse documents into an easy-to-query ontology. This eliminates the need for end users to annotate data or train any custom models. The application is very intuitive and provides easy-to-use controls to quality-check the system-extracted data, search and query using a combination of text and concepts, and generate visualizations across portfolios. You can try all of this for free by signing up for the “Essentials” version of Catylex. Catylex leverages a suite of applications and features from the MongoDB Atlas developer data platform.
It uses the MongoDB Atlas database to store documents and extracted metadata due to its flexible data model and easy-to-scale options, and it uses Atlas Search to provide end users with easy-to-use and efficient text search capabilities. Features like highlighting within Atlas Search add a lot of value and enhance the user experience. Atlas Triggers are used to handle change streams and efficiently relay information to various parts within the Catylex application to make it event-driven and scalable. Catylex is actively evaluating Atlas Vector Search. Bringing together vector search alongside keyword search and the database in a single, fully synchronized, and flexible storage layer, accessed by a single API, will simplify development and eliminate technology sprawl. Being part of the MongoDB AI Innovators Program gives Catylex’s engineers direct access to the product management team at MongoDB, helping to share feedback and receive the latest product updates and best practices. The provision of Atlas credits reduces the costs of experimenting with new features. Co-marketing initiatives help build visibility and awareness of the company’s offerings. Harness Generative AI with observed and dark data for customer 360 Dataworkz enables enterprises to harness the power of LLMs with their own proprietary data for customer applications. The company’s products empower businesses to effortlessly develop and implement Retrieval-Augmented Generation (RAG) applications using proprietary data, utilizing either public LLM APIs or privately hosted open-source foundation models. The emergence of hallucinations presents a notable obstacle in the widespread adoption of Gen AI within enterprises. Dataworkz streamlines the implementation of RAG applications, enabling Gen AI to reference its origins, consequently enhancing traceability.
As a result, users can easily use conversational natural language to produce high-quality, LLM-ready customer 360 views powering chatbots, question-answering systems, and summarization services. Dataworkz provides connectors for a vast array of customer data sources, including back-office SaaS applications such as CRM, marketing automation, and finance systems, as well as leading relational and NoSQL databases, cloud object stores, data warehouses, and data lakehouses. Dataflows – composable, AI-enabled workflows – are sets of steps that users combine and arrange to perform any sort of data transformation, from creating vector embeddings to complex JSON transformations. Users can describe data wrangling tasks in natural language, have LLMs orchestrate the processing of data in any modality, and merge it into a “golden” 360-degree customer view.

MongoDB Atlas is used to store the source document chunks for this customer 360 view, and Atlas Vector Search is used to index and query the associated vector embeddings. The outputs produced by the customer’s chosen LLM are augmented with similarity search and retrieval powered by Atlas. Public LLMs such as OpenAI and Cohere, or privately hosted LLMs such as Databricks Dolly, are also available. The integrated experience of the MongoDB Atlas database and Atlas Vector Search simplifies developer workflows, and multi-cloud support gives Dataworkz the freedom and flexibility to meet its customers wherever they run their business. For Dataworkz, access to Atlas credits and the MongoDB partner ecosystem were key drivers for joining the AI Innovators Program.

What's next?

If you are building AI-enabled apps on MongoDB, sign up for our AI Innovators Program. We’ve had applicants from all industries building for a huge diversity of new use cases.
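The retrieval step of a RAG pipeline like the one described above can be sketched as a `$vectorSearch` aggregation over the stored chunk embeddings. The index name (`vector_index`), field names (`embedding`, `text`), and dimension count are illustrative assumptions, not Dataworkz’s actual configuration.

```python
# Sketch: querying Atlas Vector Search for the document chunks most
# similar to a question's embedding. Index/field names and the 1536
# dimension count are illustrative assumptions.

def build_vector_search_pipeline(query_embedding: list, k: int = 5) -> list:
    """Build an aggregation pipeline with a $vectorSearch stage."""
    return [
        {
            "$vectorSearch": {
                "index": "vector_index",
                "path": "embedding",
                "queryVector": query_embedding,
                # numCandidates trades recall for speed; a common rule of
                # thumb is 10-20x the number of results requested.
                "numCandidates": k * 20,
                "limit": k,
            }
        },
        {
            "$project": {
                "text": 1,
                "score": {"$meta": "vectorSearchScore"},
            }
        },
    ]

# The query vector must come from the same embedding model used to embed
# the stored chunks (1536 dimensions here, as with some OpenAI models).
pipeline = build_vector_search_pipeline([0.1] * 1536, k=5)
```

The resulting documents, ranked by `vectorSearchScore`, are what get fed to the LLM as grounding context.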
To get a flavor, take a look at earlier blog posts in this series:

Building AI with MongoDB: First Qualifiers includes AI at the network edge for computer vision and augmented reality; risk modeling for public safety; and predictive maintenance paired with question-answering systems for maritime operators.

Building AI with MongoDB: Compliance to Copilots features AI in healthcare along with intelligent assistants that help product managers specify better products and help sales teams compose emails that convert 2x higher.

Finally, check out our MongoDB for Artificial Intelligence resources page for the latest best practices that get you started in turning your idea into AI-driven reality.

September 11, 2023

Building AI with MongoDB: How VISO TRUST is Transforming Cyber Risk Intelligence

Since announcing MongoDB Atlas Vector Search preview availability back in June, we’ve seen rapid adoption from developers building a wide range of AI-enabled apps. Today we're going to talk to one of these customers. VISO TRUST puts reliable, comprehensive, actionable vendor security information directly in the hands of decision-makers who need to make informed risk assessments. The company uses a combination of state-of-the-art models from OpenAI, Hugging Face, Anthropic, Google, and AWS, augmented by vector search and retrieval from MongoDB Atlas. We sat down with Pierce Lamb, Senior Software Engineer on the Data and Machine Learning team at VISO TRUST, to learn more. Check out our AI resource page to learn more about building AI-powered apps with MongoDB.

Tell us a little bit about your company. What are you trying to accomplish, and how does that benefit your customers or society more broadly?

VISO TRUST is an AI-powered third-party cyber risk and trust platform that enables any company to access actionable vendor security information in minutes. VISO TRUST delivers the fast and accurate intelligence needed to make informed cybersecurity risk decisions at scale for companies at any maturity level. Our commitment to innovation means that we are constantly looking for ways to optimize business value for our customers. VISO TRUST ensures that complex business-to-business (B2B) transactions adequately protect the confidentiality, integrity, and availability of trusted information. VISO TRUST’s mission is to become the largest global provider of cyber risk intelligence and the intermediary for business transactions. Through the use of VISO TRUST, customers reduce their threat surface in B2B transactions with vendors, and thereby reduce their overall risk posture and potential security incidents like breaches, malicious injections, and more.
Today VISO TRUST has many great enterprise customers like Instacart, Gusto, and Upwork, and they all say the same thing: 90% less work, 80% reduction in time to assess risk, and near 100% vendor adoption. Because it’s the only approach that can deliver accurate results at scale, customers are able, for the first time, to gain complete visibility into their entire third-party populations and take control of their third-party risk.

Describe what your application does and what role AI plays in it.

The VISO TRUST Platform uses patented, proprietary machine learning and a team of highly qualified third-party risk professionals to automate this process at scale. Simply put, VISO TRUST automates vendor due diligence and reduces third-party risk at scale, so security teams can stop chasing vendors, reading documents, or analyzing spreadsheets.

Figure 1: VISO TRUST is the only SaaS third-party cyber risk management platform that delivers the rapid security intelligence needed for modern companies to make critical risk decisions early in the procurement process.

The VISO TRUST Platform easily engages third parties, saving everyone time and resources. In a five-minute web-based session, third parties are prompted to upload relevant artifacts of the security program that already exists, and our supervised AI – which we call Artifact Intelligence – does the rest. Security artifacts that enter VISO’s Artifact Intelligence pipeline interact with AI/ML in three primary ways. First, VISO deploys discriminator models that produce high-confidence predictions about features of the artifact. For example, one model performs artifact classification, another detects organizations inside the artifact, another predicts which pages are likely to contain security controls, and more. Our modules reference a comprehensive set of over 25 security frameworks and use document heuristics and natural language processing to analyze any written material and extract all relevant control information.
Secondly, artifacts have text content parsed out of them in the form of sentences, paragraphs, headers, table rows, and more; these text blobs are embedded and stored in MongoDB Atlas to become part of our dense retrieval system. This dense retrieval system performs retrieval-augmented generation (RAG) using MongoDB features like Atlas Vector Search to provide ranked context to large language model (LLM) prompts. Thirdly, we use RAG results to seed LLM prompts and chain together their outputs to produce extremely accurate factual information about the artifact in the pipeline. This information provides instant intelligence to customers that previously took weeks to produce. VISO TRUST’s risk model analyzes your risk and delivers a complete assessment that provides everything you need to know to make qualified risk decisions about the relationship. In addition, the platform continuously monitors and reassesses third-party vendors to ensure compliance.

What specific AI/ML techniques, algorithms, or models are utilized in your application?

For our discriminator models, we research state-of-the-art pre-trained models (typically narrowed to those available in Hugging Face’s transformers package) and fine-tune them on our dataset. For our dense retrieval system, we use MongoDB Atlas Vector Search, which internally uses the Hierarchical Navigable Small World (HNSW) algorithm to retrieve embeddings similar to the embedded text content. We also have plans to perform re-ranking of these results. For our LLM system, we have experimented with GPT-3.5 Turbo, GPT-4, Claude 1 & 2, Bard, Vertex AI, and Bedrock. We blend a variety of these based on our customers' accuracy, latency, and security needs.

Can you describe other AI technologies used in your application stack?

Some of the other frameworks we use are Hugging Face’s transformers, evaluate, accelerate, and datasets packages, PyTorch, WandB, and Amazon SageMaker.
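The third step above, seeding LLM prompts with ranked RAG results, can be sketched as follows. The chunk schema and prompt template are illustrative assumptions, not VISO TRUST’s actual implementation.

```python
# Sketch: assembling an LLM prompt from retrieved context, highest-scoring
# chunk first. Chunk schema and prompt wording are illustrative assumptions.

def build_rag_prompt(question: str, ranked_chunks: list) -> str:
    """Assemble an LLM prompt from retrieved context, best match first."""
    ordered = sorted(ranked_chunks, key=lambda c: c["score"], reverse=True)
    context = "\n\n".join(
        f"[{i + 1}] {chunk['text']}" for i, chunk in enumerate(ordered)
    )
    return (
        "Answer using only the numbered excerpts below.\n\n"
        f"{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

# Chunks as they might come back from a $vectorSearch stage.
chunks = [
    {"text": "Access is reviewed quarterly.", "score": 0.82},
    {"text": "MFA is required for all admin accounts.", "score": 0.91},
]
prompt = build_rag_prompt("Does the vendor enforce MFA?", chunks)
```

Numbering the excerpts is one way to let the model cite which retrieved passage supports each claim, which is what makes the output traceable back to the source artifact.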
We have a custom-built library for ML experiments (fine-tuning), a custom-built library for workflow orchestration, and all of our prompt engineering is custom-built.

Why did you choose MongoDB as part of your application stack? Which MongoDB features are you using, and where are you running MongoDB?

The VISO TRUST Platform relies on effective tools, and MongoDB’s distinctive attributes fulfill specific objectives. MongoDB supports our platform's mechanism to engage third parties efficiently, employing both AI and human oversight to automate the assessment of security artifacts at scale. The fundamental value proposition of MongoDB – a robust document database – is why we originally chose it. It was originally deployed as a storage/retrieval mechanism for all the factual information our Artifact Intelligence pipeline produces about artifacts. While it still performs this function today, it has now become our “vector/metadata database.” MongoDB executes fast ranking of large quantities of embedded text blobs for us, while Atlas provides all the ease of use of a cloud-ready database. We use both the Atlas Search index visualization and the query profiler visualization daily. Even the basic display of a few documents in collections often saves time. Finally, when we recently backfilled embeddings across one of our MongoDB deployments, Atlas automatically provisioned more disk space for large indexes without us needing to be around, which was incredibly helpful.

What are the benefits you've achieved by using MongoDB?

I would say there are two primary benefits with respect to MongoDB and Atlas. First, MongoDB was already where we stored metadata about artifacts in our system; with the introduction of Atlas Vector Search, we now have a comprehensive vector/metadata database – one that’s been battle-tested over a decade – that solves our dense retrieval needs.
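For context, a vector search index over a collection like the one described, with embedded text blobs stored alongside artifact metadata, might be defined as below. The field names and dimension count are assumptions for illustration; the dimensions must match whatever embedding model is in use.

```python
# Sketch: an Atlas Vector Search index definition for a collection of
# embedded text blobs with artifact metadata. Field names and the 1536
# dimension count are illustrative assumptions.

vector_index_definition = {
    "fields": [
        {
            "type": "vector",
            "path": "embedding",      # field holding the embedding array
            "numDimensions": 1536,
            "similarity": "cosine",   # or "euclidean" / "dotProduct"
        },
        {
            # Metadata fields indexed as filters can pre-filter
            # $vectorSearch results, e.g. restricting to one artifact type.
            "type": "filter",
            "path": "artifact_type",
        },
    ]
}
```

Keeping filterable metadata in the same index as the vectors is what lets a single query both narrow by artifact attributes and rank by embedding similarity, which is the "vectors next to metadata" benefit described above.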
There’s no need to deploy a new database that we have to manage and learn, and our vectors and artifact metadata can be stored right next to each other. Second, Atlas has made all the painful parts of database management easy. Creating indexes, provisioning capacity, alerting on slow queries, visualizing data, and much more have saved us time and allowed us to focus on more important things.

What are your future plans for new applications, and how does MongoDB fit into them?

Retrieval-augmented generation is going to continue to be a first-class feature of our application. In this regard, the evolution of Atlas Vector Search and its ecosystem in MongoDB will be highly relevant to us. MongoDB has become the database our ML team uses, so as our ML footprint expands, our use of MongoDB will expand.

Getting started

Thanks so much to Pierce for sharing details on VISO TRUST’s AI-powered applications and experiences with MongoDB. The best way to get started with Atlas Vector Search is to head over to the product page. There you will find tutorials, documentation, and whitepapers, along with the ability to sign up for MongoDB Atlas. You’ll be just a few clicks away from spinning up your own vector search engine, where you can experiment with the power of vector embeddings and RAG. We’d love to see what you build, and we’re eager for any feedback that will make the product even better in the future!

September 5, 2023