Artificial Intelligence

Building AI-powered Apps with MongoDB

Building Gen AI with MongoDB & AI Partners | September 2024

Last week I was in London for MongoDB.local London—the 19th stop of the 2024 MongoDB.local tour—where MongoDB, our customers, and our AI partners came together to share solutions we’ve been building that enable companies to accelerate their AI journey. I love attending these events because they offer an opportunity to celebrate our collective achievements, and because it’s great to meet so many (mainly Zoom) friends in person!

One of the highlights of MongoDB.local London 2024 was the release of our reference architecture with our MAAP partners AWS and Anthropic, which supports memory-enhanced AI agents. This architecture is already helping businesses streamline complex processes and develop smarter, more responsive applications.

We also announced a robust set of vector quantization capabilities in MongoDB Atlas Vector Search that will help developers build powerful semantic search and generative AI applications with more scale—and at a lower cost. Now, with support for the ingestion of scalar quantized vectors, you can import and work with quantized vectors from your embedding model providers of choice, including MAAP partners Cohere, Nomic, and others.

A big thank you to all of MongoDB’s AI partners, who continually amaze me with their innovation. MongoDB.local London was another great reminder of the power of collaboration, and I’m excited for what lies ahead as we continue to shape the future of AI together. As the Brits say: Cheers!

Welcoming new AI and tech partners

In September we also welcomed seven new AI and tech partners that offer product integrations with MongoDB. Read on to learn more about each great new partner!

Arize

Arize AI is a platform that helps organizations visualize and debug the flow of data through AI applications by quickly identifying bottlenecks in LLM calls and understanding agentic paths.

“At Arize AI, we are committed to helping AI teams build, evaluate, and troubleshoot cutting-edge agentic systems. Partnering with MongoDB allows us to provide a comprehensive solution for managing the memory and retrieval that these systems rely on,” said Jason Lopatecki, co-founder and CEO of Arize AI. “With MongoDB’s robust vector search and flexible document storage, combined with Arize’s advanced observability and evaluation tools, we’re empowering developers to confidently build and deploy AI applications.”

Baseten

Baseten provides the applied AI research and infrastructure needed to serve custom and open-source machine learning models performantly, scalably, and cost-efficiently.

“We’re excited to partner with MongoDB to combine their scalable vector database with Baseten’s high-performance inference infrastructure and models. Together, we’re enabling companies to build and deploy generative AI applications, such as RAG apps, that not only scale infinitely but also deliver optimal performance per dollar,” said Tuhin Srivastava, CEO of Baseten. “This partnership empowers developers to bring mission-critical AI solutions to market faster, while maintaining cost-effectiveness at every stage of growth.”

Doppler

Doppler is a cloud-based platform that helps teams manage, organize, and secure secrets across environments and applications throughout the entire development lifecycle.

“Doppler rigorously focuses on making the easy path the most secure path for developers. This is only possible with deep product partnerships with all the tooling developers have come to love.
We are excited to join forces with MongoDB to make zero-downtime secrets rotation for non-relational databases effortlessly simple to set up and maintenance-free,” said Brian Vallelunga, founder and CEO of Doppler. “This will immediately bolster the security posture of a company’s most sensitive data without any additional overhead or distractions.”

Haize Labs

Haize Labs automates language model stress testing at massive scale to discover and eliminate failure modes. This, alongside its inference-time mitigations and observability tools, enables the risk-free adoption of AI.

“We’re thrilled to partner with MongoDB in empowering companies to build RAG applications that are powerful yet also secure, safe, and reliable,” said Leonard Tang, co-founder and CEO of Haize Labs. “MongoDB Atlas has streamlined the process of developing production-ready GenAI systems, and we’re excited to work together to accelerate customers’ journey to trust and confidence in their GenAI initiatives.”

Modal

Modal is a serverless platform for data and AI/ML engineers to run and deploy code in the cloud without having to think about infrastructure: run generative AI models, large-scale batch jobs, job queues, and more, all faster than ever before.

“The coming wave of intelligent applications will be built on the potent combination of foundation models, large-scale data, and fast search,” explained Charles Frye, AI Engineer at Modal. “MongoDB Atlas provides an excellent platform for storing, querying, and searching data, from hot new techniques like vector indices to old standbys like lexical search. It’s the perfect counterpart to Modal’s flexible compute, like serverless GPUs. Together, MongoDB and Modal make it easy to get started with this new paradigm, and then they make it easy to scale it out to millions of users querying billions of records and maxing out thousands of GPUs.”

Portkey AI

Portkey AI is an AI gateway and observability suite that helps companies develop, deploy, and manage LLM-based applications.

“Our partnership with MongoDB is a game-changer for organizations looking to operationalize AI at scale. By combining Portkey’s LLMOps expertise with MongoDB’s comprehensive data solution, we’re enabling businesses to deploy, manage, and scale AI applications with unprecedented efficiency and control,” said Ayush Garg, Chief Technology Officer of Portkey AI. “Together, we’re not just streamlining the path from POC to production; we’re setting a new standard for how businesses can leverage AI to drive innovation and deliver tangible value.”

Reka

Reka offers fully multimodal models, spanning images, videos with audio, text, and documents, to power AI agents that can see, hear, and speak.

“At Reka, we know how challenging it can be to retrieve information buried in unstructured multimodal data. We are excited to join forces with MongoDB to help companies test and optimize multimodal RAG features for faster production deployment,” said Dani Yogatama, CEO of Reka. “Our models understand and reason over multimodal data, including text, tables, and images in PDF documents or conversations in videos. Our joint solution streamlines the whole RAG development lifecycle, speeding up time to market and helping companies deliver real value to their customers faster.”

But wait, there's more!

To learn more about building AI-powered apps with MongoDB, check out our AI Resources Hub, and stop by our Partner Ecosystem Catalog to read about our integrations with MongoDB’s ever-evolving AI partner ecosystem.

October 9, 2024
Artificial Intelligence

Introducing Two MongoDB Generative AI Learning Badges

Want to boost your resume quickly? MongoDB is introducing two new Learning Badges: Building Gen AI Apps and Deploying and Evaluating Gen AI Apps. Unlike high-stakes certifications, which cover a large breadth and depth of subjects, these digital credentials are focused on specific topics, making them easier and quicker to earn. Best of all, they’re free!

The Building Gen AI Applications with MongoDB Learning Badge validates users’ knowledge of developing gen AI applications using MongoDB Atlas Vector Search. It recognizes your understanding of semantic search and how to build chatbots with retrieval-augmented generation (RAG), MongoDB, and LangChain.

The Deploying and Evaluating Gen AI Applications with MongoDB Learning Badge validates users’ knowledge of optimizing the performance and evaluating the results of gen AI applications. It recognizes your understanding of chunking strategies, performance evaluation techniques, and deployment options within MongoDB for both prototyping and production stages.

Learn, prepare, and earn

To earn your badge, simply complete the Learning Badge Path and take a short assessment at the end. Once you pass the assessment, you'll receive an email with your official Credly badge and digital certificate, which you can share on social media, in email signatures, or on digital resumes. Additionally, you'll be included in the Credly Talent Directory, where you will be visible to recruiters from top employers, opening up new career opportunities.

Learning paths are curated roadmaps that guide you through the essential concepts and skills needed for the assessment. Each badge has its own learning path:

- Building Gen AI Apps Learning Badge Path: This learning path guides you through the foundations of building a gen AI application with MongoDB Atlas Vector Search. You'll learn what semantic search is and how you can leverage it across a variety of use cases. Then you'll learn how to build your own chatbot by creating a RAG application with MongoDB and LangChain.

- Deploying and Evaluating Gen AI Apps Learning Badge Path: This learning path helps you take a gen AI application from creation to full deployment, with a focus on optimizing performance and evaluating results. You'll explore chunking strategies, performance evaluation techniques, and deployment options in MongoDB for both prototyping and production stages. We recommend completing the Building Gen AI Apps Learning Badge Path before beginning this path.

Badge up with MongoDB

MongoDB Learning Badges offer a valuable opportunity to showcase your commitment to continuous learning and expertise in specific topics. These digital credentials not only recognize your educational achievements but also serve as a testament to your knowledge and skills. Whether you're a seasoned developer, an aspiring data scientist, or an enthusiastic student, earning a MongoDB badge can enhance your profile and open up new opportunities in your field.

Start earning your badges today—it’s quick, effective, and free! Visit MongoDB Learning Badges to begin your journey toward becoming a gen AI application expert and boosting your career prospects.

October 8, 2024
Artificial Intelligence

Vector Quantization: Scale Search & Generative AI Applications

We are excited to announce a robust set of vector quantization capabilities in MongoDB Atlas Vector Search. These capabilities will reduce vector sizes while preserving performance, enabling developers to build powerful semantic search and generative AI applications with more scale—and at a lower cost. In addition, unlike relational or niche vector databases, MongoDB’s flexible document model—coupled with quantized vectors—allows for greater agility in testing and deploying different embedding models quickly and easily. Support for scalar quantized vector ingestion is now generally available, and will be followed by several new releases in the coming weeks. Read on to learn how vector quantization works, and visit our documentation to get started!

The challenges of large-scale vector applications

While the use of vectors has opened up a range of new possibilities, such as content summarization and sentiment analysis, natural language chatbots, and image generation, unlocking insights within unstructured data can require storing and searching through billions of vectors—which can quickly become infeasible. Vectors are effectively arrays of floating-point numbers representing unstructured information in a way that computers can understand, and an application may need to store anywhere from a few hundred to billions of them. As the number of vectors increases, so does the index size required to search over them. As a result, large-scale vector-based applications using full-fidelity vectors often have high processing costs and slow query times, hindering their scalability and performance.

Vector quantization for cost-effectiveness, scalability, and performance

Vector quantization, a technique that compresses vectors while preserving their semantic similarity, offers a solution to this challenge. Imagine converting a full-color image into grayscale to reduce storage space on a computer. This involves simplifying each pixel’s color information by grouping similar colors into primary color channels or “quantization bins,” and then representing each pixel with a single value from its bin. The binned values are then used to create a new grayscale image that is smaller in size but retains most of the original detail, as shown in Figure 1.

Figure 1: Illustration of quantizing an RGB image into grayscale

Vector quantization works similarly: it shrinks full-fidelity vectors into fewer bits to significantly reduce memory and storage costs without compromising the important details. Maintaining this balance is critical, as search and AI applications need to deliver relevant insights to be useful. Two effective quantization methods are scalar quantization (converting each floating-point value into an integer) and binary quantization (converting each floating-point value into a single bit, 0 or 1).
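To make the two methods concrete, here is a minimal sketch of the arithmetic behind scalar and binary quantization, using made-up embedding values. It illustrates the general idea only, not Atlas Vector Search’s internal implementation, which handles quantization for you.

```python
import numpy as np

# A toy full-fidelity embedding (float32), e.g. from an embedding model.
embedding = np.array([0.12, -0.83, 0.45, 0.07, -0.31, 0.98], dtype=np.float32)

# Scalar quantization: map each float into an int8 bucket using the observed
# min/max range, cutting storage from 4 bytes to 1 byte per dimension.
lo, hi = embedding.min(), embedding.max()
scale = 255 / (hi - lo)
scalar_quantized = np.round((embedding - lo) * scale - 128).astype(np.int8)

# Binary quantization: keep only the sign of each dimension, then pack the
# bits, cutting storage from 32 bits to 1 bit per dimension.
bits = (embedding > 0).astype(np.uint8)
binary_quantized = np.packbits(bits)

print(scalar_quantized)  # six int8 values in [-128, 127]
print(binary_quantized)  # the six sign bits packed into a single byte
```

Binary quantization discards the most information, which is why the rescoring step described below re-ranks the top binary matches against their full-fidelity counterparts to recover recall.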
Current and upcoming quantization capabilities will empower developers to maximize the potential of Atlas Vector Search. The most impactful benefit of vector quantization is increased scalability and cost savings through reduced computing resources and efficient processing of vectors. And when combined with Search Nodes—MongoDB’s dedicated infrastructure for independent scalability through workload isolation and memory-optimized infrastructure for semantic search and generative AI workloads—vector quantization can further reduce costs and improve performance, even at the highest volume and scale, unlocking more use cases.

“Cohere is excited to be one of the first partners to support quantized vector ingestion in MongoDB Atlas,” said Nils Reimers, VP of AI Search at Cohere. “Embedding models, such as Cohere Embed v3, help enterprises see more accurate search results based on their own data sources. We’re looking forward to providing our joint customers with accurate, cost-effective applications for their needs.”

In our tests, compared to full-fidelity vectors, BSON-type vectors—MongoDB’s JSON-like binary serialization format for efficient document storage—reduced storage size by 66% (from 41 GB to 14 GB). And as shown in Figures 2 and 3, the tests illustrate significant memory reduction (73% to 96% less) and latency improvements using quantized vectors. Scalar quantization preserves recall performance, while binary quantization’s recall performance is maintained with rescoring—a process of evaluating a small subset of the quantized outputs against full-fidelity vectors to improve the accuracy of the search results.

Figure 2: Significant storage reduction + good recall and latency performance with quantization on different embedding models

Figure 3: Remarkable improvement in recall performance for binary quantization when combined with rescoring

In addition, thanks to the reduced cost advantage, vector quantization facilitates more advanced, multiple-vector use cases that would have been too computationally taxing or cost-prohibitive to implement. For example, vector quantization can help users:

- Easily A/B test different embedding models using multiple vectors produced from the same source field during prototyping. MongoDB’s document model—coupled with quantized vectors—allows for greater agility at lower costs. The flexible document schema lets developers quickly deploy and compare embedding models’ results without the need to rebuild the index or provision an entirely new data model or set of infrastructure.

- Further improve the relevance of search results or context for large language models (LLMs) by incorporating vectors from multiple sources of relevance, such as different source fields (product descriptions, product images, etc.) embedded within the same or different models.

How to get started, and what’s next

Now, with support for the ingestion of scalar quantized vectors, developers can import and work with quantized vectors from their embedding model providers of choice (such as Cohere, Nomic, Jina, Mixedbread, and others)—directly in Atlas Vector Search. Read the documentation and tutorial to get started. And in the coming weeks, additional vector quantization features will equip developers with a comprehensive toolset for building and optimizing applications with quantized vectors:

- Support for ingestion of binary quantized vectors will enable further reduction of storage space, allowing for greater cost savings and giving developers the flexibility to choose the type of quantized vectors that best fits their requirements.

- Automatic quantization and rescoring will provide native capabilities for scalar quantization, as well as binary quantization with rescoring, in Atlas Vector Search, making it easier for developers to take full advantage of vector quantization within the platform.

With support for quantized vectors in MongoDB Atlas Vector Search, you can build scalable and high-performing semantic search and generative AI applications with flexibility and cost-effectiveness. Check out the documentation and tutorial to get started, or head over to our quick-start guide to begin using Atlas Vector Search today.

October 7, 2024
Artificial Intelligence

Bringing Gen AI Into the Real World with Ramblr and MongoDB

How do you bring the benefits of gen AI, a technology typically experienced via a keyboard and screen, into the physical world? That's the problem the team at Ramblr.ai, a San Francisco-based startup, is solving with its powerful and versatile 3D annotation and recognition capabilities. “With Ramblr you can record continuously what you are doing, and then ask the computer, in natural language, ‘Where did I go wrong?’ or ‘What should I do next?’” said Frank Angermann, Lead Pipeline & Infrastructure Engineer at Ramblr.ai.

Gen AI for the real world

One of the best examples of Ramblr’s technology, and its potential, is its work with the international chemical giant BASF. In a video demonstration on Ramblr’s website, a BASF engineer can be seen tightening bolts on a connector (or ‘flange’) joining two parts of a pipeline. Every move the engineer makes is recorded via a helmet-mounted camera. Once the worker is finished for the day, this footage, along with the footage of every other person working on the pipeline, is uploaded to a database. Using Ramblr’s technology, quality assurance engineers from BASF then query the collected footage from every worker, asking the software to ‘Please assess footage from today’s pipeline connection work and see if any of the bolts were not tightened enough.’ Having processed the footage, Ramblr assesses whether those flanges have been assembled correctly and identifies any that require further inspection or correction.

The method behind the magic

“We started Ramblr.ai as an annotation platform, a place where customers could easily label images from a video and have machine learning models then identify that annotation throughout the video automatically,” said Frank. “In the past this work would be carried out manually by thousands of low-paid workers tagging videos by hand. We thought we could do better by automating that process,” he added. The software allows customers to easily customize and add annotations to footage for their particular use case, and with its gen AI-powered active learning approach, Ramblr then ‘fills in’ the rest of the video based on those annotations.

Why MongoDB?

MongoDB has been part of the Ramblr technology stack since the beginning. “We use MongoDB Atlas for half of our storage processes. Metadata, annotation data, etc., can all be stored in the same database. This means we don’t have to rely on separate databases to store different types of data,” said Frank. Flexibility of data storage was also a key consideration when choosing a database. “With MongoDB Atlas, we could store information the way we wanted to,” he added. The built-in vector database capabilities of Atlas were also appealing to the Ramblr team: “The ability to store vector embeddings without having to do any more work (for instance, not having to move a 3 MB array of data somewhere else to process it) was a big bonus for us.”
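As a rough illustration of the pattern Frank describes, with metadata, annotations, and vector embeddings living side by side in a single document, consider the minimal sketch below. The database, collection, field names, and values are all hypothetical, not Ramblr’s actual schema.

```python
from pymongo import MongoClient

client = MongoClient("mongodb+srv://<your-atlas-uri>")
frames = client["ramblr_demo"]["video_frames"]  # hypothetical database and collection

# A single document can hold a frame's metadata, its annotations, and the
# vector embedding used by Atlas Vector Search, so nothing has to be moved
# to a second system for processing.
frames.insert_one({
    "video_id": "helmet-cam-0042",        # hypothetical identifiers and values
    "frame_ts": 137.5,                    # seconds into the recording
    "annotations": [
        {"label": "flange_bolt", "state": "under-torqued", "bbox": [412, 220, 96, 88]},
    ],
    "embedding": [0.021, -0.117, 0.348],  # truncated; real embeddings have hundreds of dimensions
})
```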
The future

Aside from infrastructure and construction Q&A, robotics is another area in which the Ramblr team is eager to deploy its technology. “Smaller robotics companies don’t typically have the data to train the models that inform their products. There are quite a few use cases where we could support these companies and provide a more efficient and cost-effective way to teach the robots. We are extremely efficient in providing information for object detectors,” said Frank. But while there are plenty of commercial uses for Ramblr’s technology, the growth of spatial computing in the consumer sector, especially following the release of Apple’s Vision Pro and Meta Quest headsets, opens up a whole new category of use cases. “Spatial computing will be a big part of the world. Being able to understand the particular processes, taxonomy, and what the person is actually seeing in front of them will be a vital part of the next wave of innovation in user interfaces and the evolution of gen AI,” Frank added.

Are you building AI apps? Join the MongoDB AI Innovators Program today! Successful participants gain access to free Atlas credits, technical enablement, and invaluable connections within the broader AI ecosystem. If your company is interested in being featured, we’d love to hear from you. Connect with us at ai_adopters@mongodb.com.

Head over to our quick-start guide to get started with Atlas Vector Search today.

September 30, 2024
Artificial Intelligence

AI-Driven Noise Analysis for Automotive Diagnostics

Aftersales service is a crucial revenue stream for the automotive industry, with leading manufacturers executing repairs through their dealer networks. Traditional diagnostic methods can be time-consuming, expensive, and imprecise, especially for complex engine issues. One global automotive giant therefore embarked on an ambitious project to revolutionize its diagnostic process: to increase efficiency, customer satisfaction, and revenue throughput, MongoDB’s client in automotive manufacturing envisioned an AI-powered solution that could quickly analyze engine sounds and compare them to a database of known problems, significantly reducing diagnostic times.

Initial setbacks, then a fresh perspective

Despite the client team's best efforts, the project faced significant challenges and setbacks during the nine-month prototype phase. Though the team struggled to produce reliable results, they were determined to make the project a success. At this point, MongoDB introduced its client to Pureinsights, a specialized gen AI implementation and MongoDB AI Application Program partner, to rethink the solution and salvage the project. As new members of the project team, and as Pureinsights’ CTO and Lead Architect, respectively, we brought a fresh perspective to the challenge.

Figure 1: Before and after the AI-powered noise diagnostic solution

A pragmatic approach: Text before sound

Upon review, we discovered that the project had initially started with a text-based approach before being persuaded to switch to sound analysis. The Pureinsights team recommended reverting to text analysis as a foundational step before tackling the more complex audio problem. This strategy involved:

- Collecting text descriptions of car problems from technicians and customers.

- Comparing these descriptions against a vast database of known issues already stored in MongoDB.

- Utilizing advanced natural language processing, semantic/vector search, and retrieval-augmented generation techniques to identify similar cases and potential solutions.

Our team tested six different models for cross-lingual semantic similarity, ultimately settling on Google's Gecko model for its superior performance across 11 languages.

Pushing boundaries: Integrating audio analysis

With the text-based foundation in place, we turned to audio analysis. Pureinsights developed an innovative approach by combining our AI expertise with insights from advanced sound analysis research. We drew inspiration from groundbreaking models that had gained renown for their ability to identify cities solely from background noise in audio files. This blend of AI knowledge and specialized audio analysis techniques resulted in a robust, scalable system capable of isolating and analyzing engine sounds from various recordings. We adapted these sophisticated audio analysis models, originally designed for urban sound identification, to the specific challenges of automotive diagnostics. These learnings and adaptations are also applicable to future use cases for AI-driven audio analysis across various industries. This expertise was crucial in developing a sophisticated audio analysis model capable of:

- Isolating engine and car noises from customer or technician recordings.

- Converting these isolated sounds into vectors.
- Using these vectors to search the manufacturer's existing database of known car problem sounds.

At the heart of this solution is MongoDB’s powerful database technology. The system leverages MongoDB’s vector and document stores to manage over 200,000 case files. Each "document" is more akin to a folder or case file containing:

- Structured data about the vehicle and reported issue

- Sound samples of the problem

- Unstructured text describing the symptoms and context

This unified approach allows for seamless comparison of text and audio descriptions of customer engine problems using MongoDB's native vector search technology.

Encouraging progress and phased implementation

The solution's text component has already been rolled out to several dealers, and the audio similarity feature will be integrated in late 2024. This phased approach allows for real-world testing and refinement before a full-scale deployment across the entire repair network. The client is taking a pragmatic, step-by-step approach to implementation: if the initial partial rollout with audio diagnostics proves successful, the plan is to expand the solution more broadly across the dealer network. This cautious (yet forward-thinking) strategy aligns with the automotive industry's move towards more data-driven maintenance practices.

As the solution continues to evolve, the team remains focused on enhancing its core capabilities in text and audio analysis for current diagnostic needs. The manufacturer is committed to evaluating the real-world impact of these innovations before considering potential future enhancements. This measured approach ensures that each phase of the rollout delivers tangible benefits in efficiency, accuracy, and customer satisfaction. By prioritizing current diagnostic capabilities and adopting a phased implementation strategy, the automotive giant is paving the way for a new era of efficiency and customer service in its aftersales operations. The success of this initial rollout will inform future directions and potential expansions of the AI-powered diagnostic system.

A new era in automotive diagnostics

The automotive giant brought industry expertise and a clear vision for improving its aftersales service. MongoDB provided the robust, flexible data platform essential for managing and analyzing diverse, multi-modal data types at scale. We at Pureinsights served as the AI application specialist partner, contributing critical AI and machine learning expertise and bringing fresh perspectives and innovative approaches. We believe our role was pivotal in rethinking the solution and salvaging the project at a crucial juncture. This synergy of strengths allowed the entire project team to overcome initial setbacks and develop a groundbreaking solution that combines cutting-edge AI technologies with MongoDB's powerful data management capabilities. The result is a diagnostic tool leveraging text and audio analysis to significantly reduce diagnostic times, increase customer satisfaction, and boost revenue through the dealer network.
The project's success underscores several key lessons:

- The value of persistence and flexibility in tackling complex challenges

- The importance of choosing the right technology partners

- The power of combining domain expertise with technological innovation

- The benefits of a phased, iterative approach to implementation

As industries continue to evolve in the age of AI and big data, this collaborative model—bringing together industry leaders, technology providers, and specialized AI partners—sets a new standard for innovation. It demonstrates how companies can leverage partnerships to turn ambitious visions into reality, creating solutions that drive business value while enhancing customer experiences. The future of automotive diagnostics—and of AI-driven solutions across industries—looks brighter thanks to the combined efforts of forward-thinking enterprises, cutting-edge database technologies like MongoDB, and specialized AI partners like Pureinsights. As this solution evolves and is deployed across the global dealer network, it paves the way for a new era of efficiency, accuracy, and customer satisfaction in the automotive industry. It also has the potential to set a new standard for AI-driven solutions in other industries, demonstrating the power of collaboration and innovation.

To deliver more solutions like this—and to accelerate gen AI application development for organizations at every stage of their AI journey—Pureinsights has joined the MongoDB AI Application Program (MAAP). Check out the MAAP page to learn more about the program and how MAAP ecosystem members like Pureinsights can help your organization accelerate time-to-market, minimize risks, and maximize the value of your AI investments.

September 27, 2024
Artificial Intelligence

Revolutionizing Sales with AI: Glyphic AI’s Journey with MongoDB

When connecting with customers, sales teams often struggle to understand and address the unique needs and preferences of each prospect, leading to ineffective pitches. Additionally, time-consuming admin tasks like data entry, sales tool updates, follow-up management, and maintaining personalized interactions across numerous leads can overwhelm teams, leaving less time for impactful selling. Glyphic AI, a pioneering AI-powered sales copilot, addresses these challenges. By analyzing sales processes and calls, Glyphic AI helps teams streamline workflows and focus on building stronger customer relationships.

Founded by former engineers from Google DeepMind and Apple, Glyphic AI leverages expertise in large language models (LLMs) to work with private and dynamic data. “As LLM researchers, we discovered that the true potential of these models lies in the sales domain, which generates vast numbers of calls rich with untapped insights. Traditionally, these valuable insights were lost in digital archives, as extracting them required manually reviewing calls and making notes,” says Devang Agrawal, co-founder and Chief Technology Officer of Glyphic AI. “Our aim became to enhance customer centricity by harnessing AI to capture and utilize conversational and historical data, transforming it into actionable intelligence for ongoing and future deals.”

Built on MongoDB, AWS, and Anthropic, Glyphic AI automatically breaks down sales calls using established methodologies like MEDDIC. It leverages ingested sales playbooks to provide tailored strategies for different customer personas and company types. By using data sources such as Crunchbase, LinkedIn, and internal CRM information, the tool proactively surfaces relevant insights before sales teams engage with customers.

Glyphic AI employs LLMs to offer complete visibility into sales deals by understanding the full context and intent of real-time conversations. The system captures information at various points, primarily focusing on sales calls and recordings. This data is analyzed by LLMs tailored for sales tasks, which summarize content based on sales frameworks and extract specific information requested by teams. MongoDB serves as the main database for customer records, sales call data, and related metadata, while large video files are stored in AWS S3. MongoDB Atlas Search and Atlas Vector Search are integrated, providing the ability to index and query high-dimensional vectors efficiently.

Glyphic AI’s Global Search feature uses Atlas Vector Search to allow users to ask strategic questions and retrieve data from numerous sales calls. It matches queries with vector embeddings in MongoDB, utilizing metadata, account details, and external sources like LinkedIn and Crunchbase to identify relevant content. This content is then processed by the LLM for detailed conversational responses. Additionally, Atlas Vector Search continuously updates records, building a dynamic knowledge base that provides quick insights and proactively generates summaries enriched with data from various sources, assisting with sales calls and customer analysis.

Figure 1: How Glyphic AI transforms sales call analysis
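To illustrate the kind of query a feature like Global Search might run under the hood, here is a minimal sketch of an Atlas Vector Search aggregation that matches a question’s embedding against stored call embeddings, pre-filtered on account metadata. The collection, index, and field names are hypothetical (not Glyphic AI’s actual schema), and embed() stands in for whatever embedding model you use.

```python
from pymongo import MongoClient

client = MongoClient("mongodb+srv://<your-atlas-uri>")
calls = client["sales"]["call_chunks"]  # hypothetical collection of embedded call excerpts

query_embedding = embed("Which deals stalled over pricing last quarter?")  # hypothetical helper

results = calls.aggregate([
    {
        "$vectorSearch": {
            "index": "call_vector_index",          # hypothetical Atlas Vector Search index
            "path": "embedding",
            "queryVector": query_embedding,
            "numCandidates": 200,
            "limit": 10,
            "filter": {"account_region": "EMEA"},  # metadata pre-filter on an indexed field
        }
    },
    {"$project": {"transcript": 1, "account": 1, "score": {"$meta": "vectorSearchScore"}}},
])
# The matched excerpts would then be passed to an LLM to compose the conversational answer.
```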
Why Glyphic AI relies on advanced cloud solutions for efficient data management and innovation

“I used MongoDB in the first app I ever built, and ever since it has consistently met our needs, no matter the project,” says Agrawal. For Glyphic AI, MongoDB has seamlessly integrated into the company’s existing workflows, and MongoDB Atlas has greatly simplified database management and analytics. The team initially implemented vector search from scratch; when MongoDB introduced Atlas Vector Search, Glyphic AI transitioned to this more streamlined and integrated solution. “If MongoDB's Atlas Vector Search had been available back then, we would have adopted it immediately for its ease of testing and deployment,” Agrawal reflects. While Agrawal appreciates the benefits of building from scratch, he acknowledges that maintaining complex systems, like databases, or developing LLM models becomes increasingly challenging over time. The AI feature enabling natural language queries in MongoDB Compass has been particularly beneficial for Glyphic AI, especially when extracting insights not yet available in dashboards or analyzing specific database elements.

In the fast-paced AI industry, time to market is critical. MongoDB Atlas, as a cloud solution, offers Glyphic AI the flexibility and scalability needed to quickly test, deploy, and refine its applications. The integration of features like Atlas Vector Search has enabled the team to focus on innovation without being bogged down by infrastructure complexities, speeding up the development of AI-powered features.

As a small, agile team, Glyphic AI leverages MongoDB's document model, which aligns well with object-oriented programming principles. This allows for rapid development and iteration of product features, enabling the company to stay competitive in the evolving generative AI market. By simplifying data management and reducing friction, MongoDB’s document model helps Glyphic AI maintain agility and focus on delivering impactful solutions.

With vector search embedded in MongoDB, the team found relief in using a unified language and system. Keeping all data—including production records and vectors—in one place has greatly simplified operations. Before adopting MongoDB, the team struggled with synchronizing data across multiple systems and managing deletions to avoid inconsistencies. MongoDB’s ACID compliance has made this process far more straightforward, ensuring reliable transactions and maintaining data integrity. By consolidating production records and vectors into MongoDB, the team achieved the simplicity they needed, eliminating the complexities of managing disparate systems.

Glyphic AI's next step: Refining LLMs for enhanced sales insights and strategic decision-making

“Over the next year, our goal is to refine our LLMs specifically for the sales context to deliver more strategic insights. We've built a strong conversational intelligence product that enhances efficiency for frontline sales reps and managers. Now, we're focused on aggregating conversation data to provide strategy teams and CROs with valuable insights into their teams' performance,” says Agrawal.

As sales analysis evolves to become more strategic, significant technical challenges will arise, especially when scaling from summarizing a handful of calls to analyzing thousands in search of complex patterns. Current LLMs are often limited in their ability to process large amounts of sales call data, which means ongoing adjustments and improvements will be necessary to keep up with new developments. Additionally, curating effective datasets, including synthetic and openly available sales data, will be a key hurdle in training these models to deliver meaningful insights.
By using MongoDB, Glyphic AI will be able to accelerate innovation thanks to the reduced need for time-consuming maintenance and management of complex systems. This will allow the team to focus on essential tasks like hiring skilled talent, driving innovation, and improving the end-user experience. As a result, Glyphic AI will be able to prioritize core objectives and continue to develop and refine its products effectively.

As Glyphic AI fine-tunes its LLMs for the sales context, the team will embrace retrieval-augmented generation (RAG) to push the boundaries of AI-driven insights. Leveraging Atlas Vector Search will enable Glyphic AI to handle large datasets more efficiently, transforming raw data into actionable sales strategies. This will enhance its AI’s ability to understand and predict sales trends with greater precision, setting the stage for a new level of sales intelligence and positioning Glyphic AI at the forefront of AI-driven sales solutions.

As part of the MongoDB AI Innovators Program, Glyphic AI’s engineers gain direct access to MongoDB’s product management team, facilitating feedback exchange and receiving the latest updates and best practices. This collaboration allows them to concentrate on developing their LLM models and accelerating application development. Additionally, the provision of MongoDB Atlas credits helps reduce the costs associated with experimenting with new features.

Get started with your AI-powered apps by registering for MongoDB Atlas and exploring the tutorials in our AI resources center. If you're ready to dive into Atlas Vector Search, head over to the quick-start guide to kick off your journey. Additionally, if your company is interested in being featured in a story like this, we'd love to hear from you. Reach out to us at ai_adopters@mongodb.com.

September 24, 2024
Artificial Intelligence

Ahamove Rides Vietnam’s E-commerce Boom with AI on MongoDB

The energy in Vietnam’s cities is frenetic as millions of people navigate the busy streets with determination and purpose. Much of this traffic is driven by e-commerce, with food and parcel deliveries perched on the back of the country’s countless motorcycles or loaded into cars and trucks. In the first quarter of 2024, online spending in Vietnam grew a staggering 79% over the previous year. Explosive growth like this is expected to continue, raising the industry’s value to $32 billion by 2025, with 70% of the country’s 100 million population making e-commerce transactions. With massive numbers like this, efficiency is king in logistics. High customer expectations for rapid deliveries drive companies like Ahamove to innovate their way to seamless operations with cloud technology.

Ahamove is Vietnam’s largest on-demand delivery company, handling more than 200,000 e-commerce, food, and warehouse deliveries daily, with 100,000 drivers and riders plying the streets nationwide. The logistics leader serves a network of more than 300,000 merchants, including regional e-commerce giants like Lazada and Shopee, as well as nationwide supermarket chains and small restaurants. The stakes are high for all involved, so maximizing efficiency is of utmost importance.

Innovating to make scale count

Online shoppers’ behavior is rarely predictable, and to cope with sudden spikes in daily delivery demand, Ahamove needed to efficiently scale up its operations to enhance customer and end-user satisfaction. Moving to MongoDB Atlas on Amazon Web Services (AWS) in 2019, Ahamove fundamentally changed its ability to meet the rising demand for deliveries and for new services that please e-commerce providers, online shoppers, and diners. The scalability of MongoDB is crucial for Ahamove, especially during peak times like Christmas or Lunar New Year, when the volume of orders surges to more than 200,000 a day. “MongoDB's ability to scale ensures that the database can handle increased loads, including data requests, without compromising performance, leading to quicker order processing and an improved user experience,” said Tien Ta, Strategic Planning Manager at Ahamove.

One powerful capability that improves e-commerce across Vietnam is geospatial querying with MongoDB. Using geospatial data associated with specific locations on Earth's surface, Ahamove can easily locate drivers, match drivers to restaurants to accelerate deliveries, and track orders, all without relying on slower third-party services.
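As a minimal sketch of what such a geospatial lookup can look like, MongoDB can index GeoJSON locations with a 2dsphere index and find the nearest available drivers directly in the database. The collection, field names, and values here are hypothetical, not Ahamove’s actual schema.

```python
from pymongo import MongoClient

client = MongoClient("mongodb+srv://<your-atlas-uri>")
drivers = client["delivery"]["drivers"]  # hypothetical collection

# A 2dsphere index lets MongoDB answer "who is nearest?" queries natively.
drivers.create_index([("location", "2dsphere")])

restaurant = {"type": "Point", "coordinates": [106.7009, 10.7769]}  # lon, lat (Ho Chi Minh City)

# Find the five closest available drivers within 3 km of the restaurant.
nearest = drivers.find({
    "status": "available",
    "location": {
        "$near": {
            "$geometry": restaurant,
            "$maxDistance": 3000,  # meters
        }
    },
}).limit(5)
```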
Meanwhile, the versatility of MongoDB’s developer data platform empowers Ahamove to store its operational data, metadata, and vector embeddings on MongoDB Atlas, and to seamlessly use Atlas Vector Search to index and retrieve that data and build performant generative AI applications.

AI evolution

Powered by MongoDB Atlas, Ahamove is transforming Vietnam’s e-commerce industry with innovations like instant order matching, real-time GPS vehicle tracking, generative AI chatbots, and services like driver ratings and variable delivery times, all available 24 hours a day, seven days a week.

In addition to its traffic, Vietnam is famous for its excellent street food. Recognizing the importance of the country’s rapidly growing food and beverage (F&B) industry, which is projected to be worth more than US$27.3 billion in 2024, Ahamove decided to help Vietnam’s small food vendors benefit from the e-commerce boom gripping the country. Using the latest models, including GPT-4o mini and Llama 3.1, Ahamove’s fully automated generative AI chatbot on MongoDB integrates with restaurants’ Facebook pages. This makes it easier for hungry consumers to handle the entire order process with the restaurant in natural language, from seeking recommendations to placing orders, making payments, and tracking deliveries to their doorsteps.

Figure: How the AhaFood AI chatbot automates the food order journey

“Vietnam’s e-commerce industry is growing rapidly as more people turn to their mobile devices to purchase goods and services,” added Ta. “With MongoDB, we meet this customer need for new purchase experiences with innovative services like generative AI chatbots and faster delivery times.”

AhaFood.AI, Ahamove’s latest initiative, is anticipated to reach 10% of food deliveries in the Da Nang market, with a nationwide rollout planned for the first half of 2025. It also provides personalized dish recommendations based on consumer demographics, budgets, and historical preferences, helping people find and order their favorite food faster. Moreover, merchants receive timely notifications of incoming orders via the AhaMerchant web portal, allowing them to start preparing dishes earlier. AhaFood.AI also collects and securely stores users’ delivery addresses and phone numbers, ensuring better driver assignment and the fulfillment of food orders in less than 15 minutes.

“Adopting MongoDB Atlas was one of the best decisions we’ve ever made for Ahamove, allowing us to build an effective infrastructure that can scale with growing demand and deliver a better experience for our drivers and customers,” said Ngon Pham, CEO of Ahamove. “Generative AI will significantly disrupt the e-commerce and food industry, and with MongoDB Vector Search we can rapidly build new solutions using the latest database and AI technology.”

The vibrant atmosphere of Vietnam's bustling cities is part of the country's charm. Rather than seeking to calm this energy, Vietnam thrives on it. Focusing on improving efficiency and supporting street food vendors in lively urban areas with cloud technology will benefit all.

Learn how to build AI applications with MongoDB Atlas. Head over to our quick-start guide to get started with Atlas Vector Search today.

September 19, 2024
Artificial Intelligence

MongoDB Enables AI-Powered Legal Searches with Qura

The launch of ChatGPT in November 2022 caught the world by surprise. But while the rest of us marveled at the novelty of its human-like responses, the founders of Qura immediately saw another, more focused use case. “Legal data is a mess,” said Kevin Kastberg, CTO of Qura. “The average lawyer spends tens of hours each month on manual research. We thought to ourselves, ‘What impact would this new LLM technology have on the way lawyers search for information?’” And with that, Qura was born.

Gaining trust

From its base in Stockholm, Sweden, Qura set about building an AI-powered legal search engine. The team trained custom models and did continual pre-training on millions of pages of publicly available legal texts, looking to bring the comprehensive power of LLMs to the complex and intricate language of the law. “Legal searches have typically been done via keyword search,” said Kastberg. “We wanted to bring the power of LLMs to this field. ChatGPT created hype around the ability of LLMs to write. Qura is one of the first startups to showcase their far more impressive ability to read. LLMs can read and analyze, on a logical and semantic level, millions of pages of textual data in seconds. This is a game changer for legal search.”

Unlike other AI-powered applications, Qura is not interested in generating summaries or “answers” to the questions posed by lawyers or researchers. Instead, Qura aims to provide customers with the best sources and information. “We deliberately wanted to stay away from generative AI. Our customers can be sure that with Qura there is no risk of hallucinations or bad interpretation. Put another way, we will not put an answer in your mouth; rather, we give you the best possible information to create that answer yourselves,” said Kastberg. “Our users are looking for hard-to-find sources, not a gen AI summary of the basic sources,” he added. With this approach, the company claims to have reduced research times by 78% while surfacing double the number of relevant sources compared to similar legal search products.

MongoDB in the mix

Qura has worked with MongoDB since the beginning. “We needed a document database for flexibility. MongoDB was really convenient as we had a lot of unstructured data with many different characteristics.” In addition to the flexibility to adapt to different data types, MongoDB also offered the Qura team lightning-fast search capabilities. “MongoDB Atlas Search is a crucial tool for our search algorithm agents to navigate our huge datasets. This is especially true of the speed at which we can do efficient text searches on huge corpuses of text, an important part of navigating documents,” said Kastberg. And when it came to AI, a vector database to store and retrieve embeddings was also a real benefit: “Having vector search built into Atlas was convenient and offered an efficient way to work with embeddings and vectorized data.”

What's next?

Qura's larger goal is to bring about the next generation of intelligent search. The legal space is only the start, and the company has ambitions to expand beyond Sweden and into other industries too. “We are live with Qura in the legal space in Sweden and will be onboarding EU customers in the coming month. What we are building towards is a new way of navigating huge text databases, and that could be applied to any type of text data, in any industry,” said Kastberg.

Are you building AI apps? Join the MongoDB AI Innovators Program today!
Successful participants gain access to free Atlas credits, technical enablement, and invaluable connections within the broader AI ecosystem. If your company is interested in being featured, we’d love to hear from you. Connect with us at ai_adopters@mongodb.com. Head over to our quick-start guide to get started with Atlas Vector Search today.

September 18, 2024
Artificial Intelligence

AI Agents, Hybrid Search, and Indexing with LangChain and MongoDB

Since we announced our integration with LangChain last year, MongoDB has been building out tooling to help developers create advanced AI applications with LangChain. With recent releases, MongoDB has made it easier to develop agentic AI applications (with a LangGraph integration), perform hybrid search by combining Atlas Search and Atlas Vector Search, and ingest large-scale documents more effectively. For more on each development—plus new support for the LangChain Indexing API—please read on!

The rise of AI agents

Agentic applications have emerged as a compelling next step in the development of AI. Imagine an application able to act on its own, working towards complicated goals and drawing on context to create a strategy. These applications leverage large language models (LLMs) to dynamically determine their execution path, breaking free from the constraints of traditional, deterministic logic. Consider an application tasked with answering a question like "In our most profitable market, what is the current weather?" While a traditional retrieval-augmented generation (RAG) app may falter, unable to obtain information about "current weather," an agentic application shines. It can intelligently deduce the need for an external API call to obtain current weather information, seamlessly integrating this with data retrieved from a vector search to identify the most profitable market. Such systems take action and gather additional information with limited human intervention, supplementing what they already know. Building a system like this is easier than ever thanks to MongoDB’s continued work with LangGraph.

Unleashing the power of AI agents with LangGraph and MongoDB

Because it now offers LangGraph—a framework for performing multi-agent orchestration—LangChain is more effective than ever at simplifying the creation of applications using LLMs, including AI agents. These agents require memory to maintain context across multiple interactions, allowing users to engage with them repeatedly while the agent retains information from previous exchanges. While basic agentic applications can rely on in-memory structures, these are not sufficient for more complicated use cases. MongoDB allows developers to build stateful, multi-actor applications with LLMs, storing and retrieving the “checkpoints” needed by LangGraph.js. The new MongoDBSaver class makes integration simpler than ever, as LangGraph.js is able to utilize historical user interactions to enhance agentic AI. By segmenting this history into checkpoints, the library allows for persistent session memory, easier error recovery, and even the ability to “time travel”—allowing users to jump back in the graph to a previous state to explore alternative execution. The MongoDBSaver class implements all of this functionality right into LangGraph.js, with sensible defaults and MongoDB-specific optimization. To learn more, please visit the source code, the documentation, and our new tutorial (which includes both a written and a video version).

Improve retrieval accuracy with hybrid search retrievers

Hybrid search is particularly well suited to queries that have both semantic and keyword-based components. Let's look at an example: a query such as "find recent scientific papers about climate change impacts on coral reefs that specifically mention ocean acidification." This query would use a hybrid search approach, combining semantic search to identify papers discussing climate change effects on coral ecosystems, keyword matching to ensure "ocean acidification" is mentioned, and potentially date-based filtering or boosting to prioritize recent publications. This combination allows for more comprehensive and relevant results than either semantic or keyword search alone could provide. With our recent release of retrievers in langchain-mongodb, building such advanced retrieval patterns is more accessible than ever.

Retrievers are how LangChain integrates external data sources into LLM applications. MongoDB has added two new custom, purpose-built retrievers to the langchain-mongodb Python package, giving developers a unified way to perform hybrid search and full-text search with sensible defaults and extensive code annotation. These new classes make it easier than ever to use the full capabilities of MongoDB Vector Search with LangChain. The new MongoDBAtlasFullTextSearchRetriever class performs full-text searches using the Best Match 25 (BM25) analyzer. The MongoDBAtlasHybridSearchRetriever class builds on this work, combining the full-text implementation with vector search and fusing the results with the Reciprocal Rank Fusion (RRF) algorithm. The combination of these two techniques is a potent tool for improving the retrieval step of a RAG application, enhancing the quality of the results. To find out more, please dive into the MongoDBAtlasHybridSearchRetriever and MongoDBAtlasFullTextSearchRetriever classes.
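To make the fusion step concrete, here is a minimal sketch of the pattern the hybrid retriever implements: a vector search and a BM25 full-text search run as separate Atlas aggregations, with the two rankings combined via RRF. The collection, index, and field names are hypothetical, and embed() stands in for your embedding model; in practice, the MongoDBAtlasHybridSearchRetriever class wraps this logic for you with sensible defaults.

```python
from pymongo import MongoClient

client = MongoClient("mongodb+srv://<your-atlas-uri>")
papers = client["research"]["papers"]  # hypothetical collection

query_text = "climate change impacts on coral reefs ocean acidification"
query_vector = embed(query_text)  # hypothetical helper wrapping your embedding model

# Semantic leg: approximate nearest-neighbor search over embeddings.
vector_hits = list(papers.aggregate([
    {"$vectorSearch": {"index": "vector_index", "path": "embedding",
                       "queryVector": query_vector, "numCandidates": 200, "limit": 20}}
]))

# Keyword leg: BM25-scored full-text search via Atlas Search.
text_hits = list(papers.aggregate([
    {"$search": {"index": "default", "text": {"query": query_text, "path": "abstract"}}},
    {"$limit": 20},
]))

# Reciprocal Rank Fusion: score(doc) = sum over result lists of 1 / (k + rank).
def rrf(result_lists, k=60):
    scores = {}
    for results in result_lists:
        for rank, doc in enumerate(results, start=1):
            scores[doc["_id"]] = scores.get(doc["_id"], 0) + 1.0 / (k + rank)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

fused = rrf([vector_hits, text_hits])  # best documents first
```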
Seamless synchronization using the LangChain Indexing API

In addition to these releases, we're also excited to announce that MongoDB now supports the LangChain Indexing API, allowing for seamless loading and synchronization of documents from any source into MongoDB, leveraging LangChain's intelligent indexing features. This new support will help users avoid duplicate content, minimize unnecessary rewrites, and optimize embedding computations. The LangChain Indexing API's record management system ensures efficient tracking of document writes, computing hashes for each document and storing essential information like write time and source ID. This feature is particularly valuable for large-scale document processing and retrieval applications, offering flexible cleanup modes to manage documents effectively in MongoDB vector search. To read more about how to use the Indexing API, please visit the LangChain Indexing API documentation.

We're excited about these LangChain integrations, and we hope you are too. Here are some resources to further your learning:

- Check out our written and video tutorial to walk through building your own JavaScript AI agent with LangGraph.js and MongoDB.

- Experiment with the hybrid search retrievers to see the power of hybrid search for yourself.

- Read our previous announcement with LangChain about semantic caching.

September 12, 2024
Artificial Intelligence

Building Gen AI with MongoDB & AI Partners | August 2024

As the AI landscape continues to evolve, companies, industries, and developers seek tailored solutions to their unique challenges. Gone are the days when general-purpose AI models could be applied universally. Now, organizations are looking for industry-specific applications, verticalized AI solutions, and specialized tools to gain a competitive edge and best serve their customers. And as gen AI use cases have diversified—from healthcare diagnostics and autonomous driving to personalized recommendations and creative content generation—so has the technology stack supporting them. The complexity of building and deploying AI models has led to the rise of specialized AI frameworks and platforms that streamline workflows and optimize performance for specific use cases.

In this context, having the right AI stack is essential for driving innovation. AI development is no longer just about choosing the best model; it is also about selecting the right tools, libraries, and infrastructure to support that model across the board. All of which makes partnerships (and combining technical strengths) increasingly important to innovating with AI.

Take, for example, our most recent integration with LangChain: the MongoDB-LangChain partnership exemplifies how having the right components in an AI stack allows teams to focus on innovating instead of managing infrastructure bottlenecks. By combining LangGraph with MongoDB’s vector search capabilities, developers can create more sophisticated, high-performing AI applications. This integration allows for the seamless development of agentic AI systems capable of generating actionable insights and delivering on complex tasks. To learn more about building powerful AI agents with LangGraph.js and MongoDB, plus our recent work making vector search even more versatile with custom LangChain retrievers, check out our tutorial.

Welcoming new AI partners

MongoDB’s partnership with LangChain highlights the importance of building adaptable solutions that can grow and change as the needs of developers and customers do, which is why MongoDB is always on the lookout for innovative partners and solutions. In August we welcomed five new AI partners that offer product integrations with MongoDB. Read on to learn more about each great new partner!

BuildShip

BuildShip is a low-code visual backend and workflow builder for instantly creating APIs, scheduled tasks, backend cloud jobs, and automations, powered by AI.

“We at BuildShip are thrilled to partner with MongoDB to introduce an innovative low-code approach for rapidly building AI workflows and backend tasks in a visual and scalable manner,” said Harini Janakiraman, CEO of BuildShip.com. “MongoDB offers a comprehensive data stack for AI developers and organizations, enabling them to efficiently build scalable databases and access vector or hybrid search options for their products. Our collaboration provides customizable low-code templates that allow for easy integration of MongoDB databases with a variety of AI models and tools. This enables teams and companies to quickly build powerful APIs, automations, vector search, and scheduled tasks, unlocking organizational efficiency and driving product innovation.”

Inductor

Inductor is a platform to prototype, evaluate, improve, and observe LLM apps and features, helping developers ship high-quality LLM-powered functionality rapidly and systematically.
“We’re excited to partner with MongoDB to enable companies to rapidly create production-grade LLM applications by combining MongoDB’s powerful vector search with Inductor’s developer platform, which enables streamlined, systematic workflows for developing RAG-based applications,” said Ariel Kleiner, CEO of Inductor. “While many LLM-powered demos have been created, few have successfully evolved into production-grade applications that deliver business wins. Together, Inductor and MongoDB enable enterprises to build impactful, needle-moving LLM applications, accelerating time to market and delivering real value to customers.”

Metabase

Metabase is the easy-to-use, open-source Business Intelligence tool that lets everyone work with data, with or without SQL, for internal and customer-facing embedded analytics.

“This partnership is an important step forward for NoSQL database analytics. By integrating Metabase with MongoDB, two popular open-source tools, we are making it easier for users to quickly get valuable insights from their MongoDB data,” explained Luiz Arakaki, Product Manager at Metabase. “Our goal is to create a better integration between the tools to offer more advanced features and stability, simplifying the use of NoSQL databases for advanced analytics.”

Shakudo

Shakudo is a comprehensive development platform that lets data professionals develop, run, and deploy data pipelines and applications in an all-in-one integrated environment.

“Shakudo is thrilled to be partnering with MongoDB to streamline the entire retrieval-augmented generation (RAG) development lifecycle. Together we help companies test and optimize their RAG features for faster PoC and production deployment,” noted Yevgeniy Vahlis, CEO of Shakudo. “MongoDB has made it dead simple to launch a scalable vector database with operational data, and Shakudo brings industry-leading AI tooling to that data. Our collaboration speeds up time to market and helps companies get real value to customers faster.”

VLM Run

VLM Run is a versatile API that enables accurate JSON extraction from any visual content, such as images, videos, and documents, helping users integrate visual AI into their applications.

“VLM Run is excited to partner with MongoDB to help enterprises accurately extract structured insights from visual content such as images, videos, and visual documents,” said Sudeep Pillai, Co-Founder and CEO of VLM Run. “Our combined solution will enable enterprises to turn their often-untapped unstructured visual content into actionable, queryable business intelligence.”

But wait, there's more!

To learn more about building AI-powered apps with MongoDB, check out our AI Resources Hub, and stop by our Partner Ecosystem Catalog to read about our integrations with MongoDB’s ever-evolving AI partner ecosystem. Head over to our quick-start guide to get started with Atlas Vector Search today.

September 11, 2024
Artificial Intelligence

Boosting Customer Lifetime Value with Agmeta and MongoDB

Nobody likes calling customer service. The phone trees, the wait times, the janky music, and how often your issue just isn’t resolved can make the whole process one most people would rather avoid. For business owners, the customer contact center can also be a source of frustration, simultaneously creating customer churn and unhappiness while acting as a black hole of information as to why that churn occurred.

It doesn’t have to be this way. What if, instead, customer service centers offered valuable ways to increase the Customer Lifetime Value (CLTV) of customers, pipelines of upsell opportunities, and valuable sources of information? That’s the goal of Agmeta.AI, a startup dedicated to giving businesses actionable insights to fight churn, identify key customers primed for upsell, and improve customer service overall.

Lost in translation

“We started with a very simple thesis: people call into contact centers because they have a problem. That is a real make-or-break moment. The opportunity for churn is very high… or that customer can be a great target for upselling,” said Samir Agarwal, CEO and co-founder of Agmeta. “All of this data sits in a contact center, and businesses don't ever get to see it,” he added.

According to Samir, even the businesses that think they are collecting useful information on customer service interactions are instead collecting incorrect or incomplete information. Or worse, they’re analyzing the information they do record incorrectly. Every business today talks about the importance of customer experience (CX), but the challenge businesses face is how to quantify that CX. Many contact centers substitute call sentiment for CX, or use keywords to trigger canned responses.

For example, imagine a customer calls into a service center and has what appears to be a positive conversation with an agent. They use words and phrases like “thank you” and “yes, I understand,” and reply “no, I do not have anything else to ask” at the end of a call in which their complaint is not resolved. After putting the phone down, the customer goes on to cancel the service or, worse, initiate a chargeback request with their credit card provider. In some businesses the customer service agent may manually mark such a call as ‘positive.’ The agent, after all, ‘answered all the customers’ concerns.’ As this example illustrates, the sentiment of a call should not be confused with a measure of customer experience.

Another common way businesses try to gather feedback is by sending a post-call survey. However, industry response rates for such surveys are close to 3%, meaning decisions get made on that small sample and may not take into account the other 97% of customers who didn’t respond. Survey results are also frequently skewed, as those most likely to respond are also the ones who were most unhappy with the contact center interaction and want their voices heard.

The MongoDB advantage

Using machine learning and generative AI, backed by MongoDB Atlas, Agmeta’s software understands not only the content of the call but also its context. Returning to our example above, Agmeta’s software would detect that the customer is unhappy, despite their polite and ‘positive’-sounding conversation with the agent, and flag the customer as a potential churn or chargeback candidate in need of immediate attention.
“We will give you a CSAT (customer satisfaction) score and a reason for that CSAT score within seconds of the call ending, for 100% of the interactions,” said Samir.

For Agmeta to work, Samir and his team needed a database ready to accept all kinds of data, including voice recordings, unstructured text, and a constantly evolving schema. “We didn’t have a fixed schema; we needed a database that was as flexible as Agmeta needed to be. I’ve known of MongoDB forever, so when I started to look at databases it seemed an obvious choice to me,” he said.

The ability to quickly and easily work with vectorized data for gen AI was also crucial. “MongoDB provides vector search capabilities in an operational database. Rather than having to bolt on a vector database and figure out the ETL, MongoDB solved this issue for me in a single product. The way I look at it, if you do a good job on vector search, then my life as an entrepreneur and software builder becomes much easier,” Samir said.

After assessing database options and multiple LLMs, Samir and his team chose to pair MongoDB Atlas with Google Cloud, taking advantage of Gemini on Google’s generative AI platform. “With Atlas on Google Cloud, there are zero worries about database administration, maintenance, and availability. This frees us up to focus on creating business value,” Samir said. “Another benefit of using MongoDB is the flexibility to use the customer’s own MongoDB setup, which gives the customer peace of mind about the security and privacy of their data.”

Customer service first

With the power of generative AI and MongoDB, Agmeta can deliver a CSAT score that measures the customer’s true takeaway from the call. The CSAT score is multi-dimensional, taking into account areas including resolution (as the customer sees it), politeness, the onus placed on the customer, and many other attributes. In the short term, the primary use for this technology is to detect and flag customers at risk of churning or filing a charge dispute with their card provider, as well as those who are candidates for upselling, giving businesses an opportunity to “see” what they could never find out before.

“When we talk to customers, the number one thing they are concerned about is customer churn. Right now they operate completely blind, with no idea why people are leaving them,” said Samir. “One large telecoms customer Agmeta is in talks with had no idea where their churn was happening. But when we described being able to assign every customer a CSAT score, they were very excited,” he added.

And it’s not just about preventing churn. Businesses can identify happy customers too, targeting them for upsell opportunities. “One of the things we do is spot patterns of unanswered questions from product support interactions,” Samir added. “When we see ‘Oh look, suddenly there are a lot more calls because of a release,’ then we can flag this to product teams as a must-fix issue.”

The future of customer service

Agmeta aims to combine customer information with current and past experiences to give businesses a more holistic—and nuanced—picture of their customers, along with more precise next steps they can take. “What we want to do is look back in time and see what else happened with this customer,” Samir said. “The goal is to provide businesses with targeted directives to minimize churn and grow customer lifetime value.” Retrieval-augmented generation plays a key role in Agmeta’s vision.
This also means an expanded role both for MongoDB’s vector database, as the source of information against which semantic searches can be run, and for Gemini, for the analysis and presentation of directives to the business (a generic sketch of this retrieval-and-analysis pattern appears at the end of this story).

You can learn more about how innovators across the world are using MongoDB by reviewing our Building AI case studies. If your team is building AI apps, sign up for the AI Innovators Program. Successful companies get access to free Atlas credits and technical enablement, as well as connections into the broader AI ecosystem. Additionally, if your company is interested in being featured in a story like this, we'd love to hear from you! Reach out to us at ai_adopters@mongodb.com. Head over to our quick-start guide to get started with Atlas Vector Search today.
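For readers curious about the general shape of such a RAG flow, here is a minimal, hypothetical sketch of the pattern described above: semantic retrieval over call records with Atlas Vector Search, followed by analysis with Gemini. This is not Agmeta’s actual implementation; the database, collection, field, and index names are placeholders, and it assumes the pymongo and google-generativeai packages with an Atlas Vector Search index defined on an "embedding" field.

```python
from pymongo import MongoClient
import google.generativeai as genai

genai.configure(api_key="<GOOGLE_API_KEY>")
client = MongoClient("<your-atlas-connection-string>")
collection = client["support"]["call_summaries"]  # hypothetical collection

question = "Why are customers in the Northeast churning after support calls?"

# 1. Embed the question with one of Gemini's embedding models
query_vector = genai.embed_content(
    model="models/text-embedding-004", content=question
)["embedding"]

# 2. Retrieve semantically similar call records via Atlas Vector Search
docs = list(collection.aggregate([
    {"$vectorSearch": {
        "index": "vector_index",      # Atlas Vector Search index on "embedding"
        "path": "embedding",
        "queryVector": query_vector,
        "numCandidates": 100,
        "limit": 5,
    }},
    {"$project": {"_id": 0, "summary": 1}},
]))

# 3. Ask Gemini to analyze the retrieved context and suggest directives
context = "\n".join(d["summary"] for d in docs)
model = genai.GenerativeModel("gemini-1.5-pro")
answer = model.generate_content(
    f"Based on these call summaries:\n{context}\n\nAnswer: {question}"
)
print(answer.text)
```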

September 10, 2024
Artificial Intelligence

Elevate Your Java Applications with MongoDB and Spring AI

MongoDB is excited to announce an integration with Spring AI, enhancing MongoDB Atlas Vector Search for Java developers. This collaboration brings vector search to Java applications, making it easier to build intelligent, high-performance AI applications.

Why Spring AI?

Spring AI is an AI library designed specifically for Java, applying the familiar principles of the Spring ecosystem to AI development. It enables developers to build, train, and deploy AI models efficiently within their Java applications. Spring AI addresses the gap left by other AI frameworks and integrations that focus on other programming languages, such as Python, providing a streamlined solution for Java developers.

Spring has been a cornerstone for Java developers for decades, offering a consistent and reliable framework for building robust applications. The introduction of Spring AI continues this legacy, providing a straightforward path for Java developers to incorporate AI into their projects. With the MongoDB-Spring AI integration, developers can leverage their existing Spring knowledge to build next-generation AI applications without the friction of learning a new framework.

Key features of Spring AI include:

Familiarity: Leverage the design principles of the Spring ecosystem. Spring AI allows Java developers to use the same familiar tools and patterns they already know from other Spring projects, reducing the learning curve and allowing them to focus on building innovative AI applications. This means you can integrate AI capabilities—including Atlas Vector Search—without having to learn a new language or framework, making the transition smoother and more intuitive.

Portability: Applications built with Spring AI can run anywhere the Spring framework runs. This ensures that AI applications are highly portable and can be deployed across various environments without modification, guaranteeing flexibility and consistency in deployment strategies.

Modular design: Use Plain Old Java Objects (POJOs) as building blocks. Spring AI’s modular design promotes clean code architecture and maintainability. By using POJOs, developers can create modular, reusable components that simplify the development and maintenance of AI applications. This modularity also facilitates easier testing and debugging, leading to more robust applications that integrate efficiently with Atlas Vector Search.

Efficiency: Streamline development with tools and features designed for AI applications in Java. Spring AI provides a range of tools that enhance development efficiency, including pre-built templates, configuration management, and integrated testing tools. These features reduce the time and effort required to develop AI applications, allowing developers to bring their ideas to market faster.

Together, these features streamline AI development by enhancing the integration and performance of Atlas Vector Search within Java applications, making it easier to build and scale AI-driven features.

Enhancing AI development with Spring AI and Atlas Vector Search

MongoDB Atlas Vector Search enhances AI application development by providing advanced search capabilities. The new Spring AI integration lets developers manage and search vector data within their applications, powering features like recommendation systems, natural language processing, and predictive analytics. Atlas Vector Search allows you to store, index, and search high-dimensional vectors, which are crucial for AI and machine learning models.
This capability supports a range of AI features:

Recommendation systems: Provide personalized recommendations based on user behavior and preferences.

Natural language processing: Enhance text analysis and understanding for chatbots, sentiment analysis, and more.

Predictive analytics: Improve forecasting and decision-making with advanced data models.

What the integration means for Java developers

Prior to the MongoDB-Spring AI integration, Java developers lacked an easy way to use MongoDB Atlas Vector Search from their Spring applications, which led to longer development times and suboptimal application performance. With this integration, developers can build and deploy AI applications with far greater efficiency. The integration simplifies the entire process, enabling developers to concentrate on creating innovative solutions rather than dealing with integration hurdles. This approach not only reduces development time but also accelerates time-to-market.

Additionally, MongoDB offers robust support through comprehensive tutorials and a wealth of community-driven content. Whether you’re just beginning or looking to optimize existing applications, you’ll find the resources and guidance you need at every stage of your development journey.

Get started!

The MongoDB and Spring AI integration is designed to simplify the development of intelligent Java applications. By combining MongoDB's robust data platform with Spring AI's capabilities, you can create high-performance applications more efficiently. To start using MongoDB with Spring AI, explore our documentation and tutorial, and check out our GitHub repository to build the next generation of AI-driven applications today. Head over to our quick-start guide to get started with Atlas Vector Search today.
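To give a feel for the developer experience, here is a minimal, illustrative sketch of a Spring service that writes documents to Atlas and runs a semantic search over them. It assumes Spring AI’s MongoDB Atlas vector store starter is on the classpath and an embedding model is configured, so that a VectorStore bean is auto-configured; class and method names reflect Spring AI’s 2024 milestone releases (the SearchRequest API in particular has changed across versions), so treat this as a sketch rather than copy-paste code.

```java
import java.util.List;

import org.springframework.ai.document.Document;
import org.springframework.ai.vectorstore.SearchRequest;
import org.springframework.ai.vectorstore.VectorStore;
import org.springframework.stereotype.Service;

@Service
public class PaperSearchService {

    // With the MongoDB Atlas vector store starter on the classpath and an
    // embedding model configured, Spring Boot auto-configures this bean.
    private final VectorStore vectorStore;

    public PaperSearchService(VectorStore vectorStore) {
        this.vectorStore = vectorStore;
    }

    public void ingest() {
        // Each document is embedded by the configured EmbeddingModel and
        // written to the Atlas collection backing the vector store.
        vectorStore.add(List.of(
                new Document("MongoDB Atlas Vector Search indexes high-dimensional vectors."),
                new Document("Spring AI brings portable AI abstractions to the Spring ecosystem.")));
    }

    public List<Document> search(String query) {
        // Semantic (vector) search over the stored embeddings.
        return vectorStore.similaritySearch(
                SearchRequest.query(query).withTopK(5));
    }
}
```

Because the service depends only on the VectorStore abstraction, the same code could run against any Spring AI vector store; Atlas is selected purely by configuration, which is exactly the portability described above.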

August 26, 2024
Artificial Intelligence
