MongoDB Blog
Announcements, updates, news, and more
Top 4 Reasons to Use MongoDB 8.0
October 2, 2024
Updates
THL Simplifies Architecture with MongoDB Atlas Search
Tourism Holdings Limited (THL) became a MongoDB customer in 2019, using MongoDB Atlas to help manage a wide variety of telematics data. I was very excited to welcome Charbel Abdo, Solutions Architect for THL, to MongoDB .local Sydney in July 2024 to hear more about how the company has significantly expanded its use of MongoDB.

The largest RV rental company in the world, THL is headquartered in New Zealand and has branches in Australia, the US, Canada, the UK, and Europe. Specializing in building, renting, and selling camper vans, THL has a number of well-known brands under its umbrella. In recent years, THL has made significant digital transformation and technology stack optimization efforts, moving from a "bolt-on" approach that required a separate distributed search and analytics engine to an integrated search solution with MongoDB Atlas.

THL operates a complex ecosystem managed by its in-house platform, Motek, which handles booking, pricing, fleet management, and more, with MongoDB Atlas as the central database. Its more than 7,000 RVs are fitted with telematics devices that send information to vehicles' onboard computers, such as location, high-speed events, engine problems, and geofences or restricted areas (for example, during the Australian bushfires of 2020).

THL initially used a bolt-on approach for complex search functionality, extending its deployment footprint to include a stand-alone instance of Elasticsearch. This setup, while functional, introduced significant data synchronization and performance issues, as well as increased maintenance overhead. Elasticsearch struggled under heavy loads, which led to critical failures and system instability, and THL experienced frequent outages and data inconsistencies. After two years of coping with these challenges, THL resolved to migrate away from Elasticsearch.
After doing due diligence, they identified the MongoDB developer data platform's integrated Search capabilities as the optimum solution. "A couple of months later, we had migrated everything," said Abdo. "Kudos to the MongoDB account team. They were exceptional."

The migration process turned out to be relatively straightforward. By iteratively replacing Elasticsearch with MongoDB Atlas Search, THL was able to simplify its architecture, reduce costs, and eliminate the synchronization issues that had plagued the system. The simplification also led to significant performance and reliability improvements. Because it no longer needed the dedicated sync resources processing millions upon millions of records per day, THL was able to turn off its Elasticsearch cluster and consolidate its resources. "All data sync related issues were gone, eliminated. But also we got our Friday afternoons back, which is always a good thing!" added Abdo. Abdo's team can now also use existing monitoring tools rather than having to set up something completely separate for a standalone search engine.

"Sometimes, changes are easier than you think," said Abdo. "We spent two-and-a-half years with our faulty solutions just looking for ways to patch up all the problems that we were having. We tried everything except actually looking into how much it would actually take to migrate. We wasted so much time, so much effort, so much money. While if we had thought about this a couple of years ago, it would have been a breeze." "Over-engineering is bad, simple is better," he noted.

To learn more about how MongoDB Atlas Search can help you build or deepen your search capabilities, visit our MongoDB Atlas Search page.
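To make the integrated approach concrete, here is a minimal sketch of the kind of Atlas Search aggregation stage that replaces an external Elasticsearch query once search runs alongside the operational data. The collection ("vehicles"), field names, and fuzzy options are hypothetical, not THL's actual schema:

```python
# Illustrative sketch only: "vehicles", "model", and "branch" are
# placeholder names, and "default" is Atlas's name for an unnamed
# search index.

def build_vehicle_search(term: str, limit: int = 10) -> list:
    """Build an aggregation pipeline around an Atlas Search $search stage."""
    return [
        {
            "$search": {
                "index": "default",
                "text": {
                    "query": term,
                    "path": ["model", "branch"],  # fields covered by the index
                    "fuzzy": {"maxEdits": 1},     # tolerate a one-character typo
                },
            }
        },
        {"$limit": limit},
        {"$project": {"model": 1, "branch": 1,
                      "score": {"$meta": "searchScore"}}},
    ]

# With PyMongo this would run as:
#   db.vehicles.aggregate(build_vehicle_search("camper"))
```

Because the `$search` stage is just another step in an aggregation pipeline, there is no separate cluster to synchronize or monitor, which is the simplification THL describes above.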
Vector Quantization: Scale Search & Generative AI Applications
We are excited to announce a robust set of vector quantization capabilities in MongoDB Atlas Vector Search. These capabilities reduce vector sizes while preserving performance, enabling developers to build powerful semantic search and generative AI applications with more scale, and at lower cost. In addition, unlike relational or niche vector databases, MongoDB's flexible document model, coupled with quantized vectors, allows for greater agility in testing and deploying different embedding models quickly and easily. Support for scalar quantized vector ingestion is now generally available and will be followed by several new releases in the coming weeks. Read on to learn how vector quantization works, and visit our documentation to get started!

The challenges of large-scale vector applications

While the use of vectors has opened up a range of new possibilities, such as content summarization and sentiment analysis, natural language chatbots, and image generation, unlocking insights within unstructured data can require storing and searching through billions of vectors, which can quickly become infeasible. Vectors are effectively arrays of floating-point numbers representing unstructured information in a way that computers can understand, and applications may need to store anywhere from a few hundred to billions of them. As the number of vectors increases, so does the size of the index required to search over them. As a result, large-scale vector-based applications using full-fidelity vectors often have high processing costs and slow query times, hindering their scalability and performance.

Vector quantization for cost-effectiveness, scalability, and performance

Vector quantization, a technique that compresses vectors while preserving their semantic similarity, offers a solution to this challenge. Imagine converting a full-color image into grayscale to reduce storage space on a computer.
This involves simplifying each pixel's color information by grouping similar colors into primary color channels, or "quantization bins," and then representing each pixel with a single value from its bin. The binned values are then used to create a new grayscale image that is smaller in size but retains most of the original detail, as shown in Figure 1.

Figure 1: Illustration of quantizing an RGB image into grayscale

Vector quantization works similarly: it shrinks full-fidelity vectors into fewer bits to significantly reduce memory and storage costs without compromising the important details. Maintaining this balance is critical, as search and AI applications need to deliver relevant insights to be useful. Two effective quantization methods are scalar quantization (converting each floating-point value into an integer) and binary quantization (converting each floating-point value into a single bit, 0 or 1). Current and upcoming quantization capabilities will empower developers to maximize the potential of Atlas Vector Search.

The most impactful benefit of vector quantization is increased scalability and cost savings through reduced computing resources and efficient processing of vectors. And when combined with Search Nodes, MongoDB's dedicated infrastructure for independent scalability through workload isolation and memory-optimized infrastructure for semantic search and generative AI workloads, vector quantization can further reduce costs and improve performance, even at the highest volume and scale, unlocking more use cases.

"Cohere is excited to be one of the first partners to support quantized vector ingestion in MongoDB Atlas," said Nils Reimers, VP of AI Search at Cohere. "Embedding models, such as Cohere Embed v3, help enterprises see more accurate search results based on their own data sources.
We're looking forward to providing our joint customers with accurate, cost-effective applications for their needs."

In our tests, compared to full-fidelity vectors, BSON-type vectors (MongoDB's JSON-like binary serialization format for efficient document storage) reduced storage size by 66% (from 41 GB to 14 GB). And as shown in Figures 2 and 3, the tests show significant memory reduction (73% to 96% less) and latency improvements using quantized vectors. Scalar quantization preserves recall performance, and binary quantization's recall performance is maintained with rescoring, a process of evaluating a small subset of the quantized outputs against full-fidelity vectors to improve the accuracy of the search results.

Figure 2: Significant storage reduction with good recall and latency performance with quantization on different embedding models

Figure 3: Remarkable improvement in recall performance for binary quantization when combined with rescoring

In addition, thanks to the reduced cost advantage, vector quantization facilitates more advanced, multi-vector use cases that would have been too computationally taxing or cost-prohibitive to implement. For example, vector quantization can help users:

Easily A/B test different embedding models using multiple vectors produced from the same source field during prototyping. MongoDB's document model, coupled with quantized vectors, allows for greater agility at lower costs. The flexible document schema lets developers quickly deploy and compare embedding models' results without the need to rebuild the index or provision an entirely new data model or set of infrastructure.

Further improve the relevance of search results or context for large language models (LLMs) by incorporating vectors from multiple sources of relevance, such as different source fields (product descriptions, product images, etc.) embedded within the same or different models.
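The two quantization methods described above can be illustrated in a few lines of pure Python. This is a toy sketch, not the database's or any embedding provider's implementation; the scaling range and example numbers are made up for demonstration:

```python
# Toy illustration of scalar and binary quantization of an embedding.
# Real implementations live in the database and embedding libraries.

def scalar_quantize(vector, lo=-1.0, hi=1.0):
    """Map each float in [lo, hi] to an 8-bit integer in [0, 255].

    An int8/uint8 value is 4x smaller than a float32 component.
    """
    scale = 255.0 / (hi - lo)
    return [min(255, max(0, round((x - lo) * scale))) for x in vector]

def binary_quantize(vector):
    """Map each float to a single bit by its sign: 32x smaller than float32.

    Magnitude information is lost, which is why rescoring a small subset
    of candidates against the full-fidelity vectors restores recall.
    """
    return [1 if x > 0 else 0 for x in vector]

embedding = [0.12, -0.48, 0.91, -0.07]
print(scalar_quantize(embedding))  # [143, 66, 244, 119]
print(binary_quantize(embedding))  # [1, 0, 1, 0]
```

The compression ratios explain the storage and memory numbers quoted above: storing one byte (or one bit) per dimension instead of four bytes is what drives reductions in the 73% to 96% range.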
How to get started, and what's next

Now, with support for the ingestion of scalar quantized vectors, developers can import and work with quantized vectors from their embedding model providers of choice (such as Cohere, Nomic, Jina, Mixedbread, and others) directly in Atlas Vector Search. Read the documentation and tutorial to get started.

And in the coming weeks, additional vector quantization features will equip developers with a comprehensive toolset for building and optimizing applications with quantized vectors:

Support for ingestion of binary quantized vectors will enable further reduction of storage space, allowing for greater cost savings and giving developers the flexibility to choose the type of quantized vectors that best fits their requirements.

Automatic quantization and rescoring will provide native capabilities for scalar quantization as well as binary quantization with rescoring in Atlas Vector Search, making it easier for developers to take full advantage of vector quantization within the platform.

With support for quantized vectors in MongoDB Atlas Vector Search, you can build scalable, high-performing semantic search and generative AI applications with flexibility and cost-effectiveness. Check out the documentation and tutorial to get started, or head over to our quick-start guide to begin with Atlas Vector Search today.
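For orientation, ingested vectors are searched through a vector index whose definition looks roughly like the sketch below. The field name "embedding" and the dimension count are placeholders (they must match your documents and your embedding model; check the documentation for the exact options your cluster version supports):

```python
# Sketch of an Atlas Vector Search index definition, expressed as the
# Python dict you would supply when creating a "vectorSearch" type index.
# Names and numbers are placeholders, not a prescribed configuration.

index_definition = {
    "fields": [
        {
            "type": "vector",
            "path": "embedding",        # document field storing the vector
            "numDimensions": 1024,      # must match the embedding model output
            "similarity": "dotProduct"  # or "cosine" / "euclidean"
        }
    ]
}
```

With a driver such as PyMongo, this definition is passed when creating the search index on the collection; the quantized vectors themselves are stored in the `embedding` field of each document.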
MongoDB.local London 2024: Better Applications, Faster
Since we kicked off MongoDB's series of 2024 events in April, we've connected with thousands of customers, partners, and community members in cities around the world, from Mexico City to Mumbai. Yesterday marked the nineteenth stop of the 2024 MongoDB.local tour, and we had a blast welcoming folks across industries to MongoDB.local London, where we discussed the latest technology trends, celebrated customer innovations, and unveiled product updates that make it easier than ever for developers to build next-gen applications.

Over the past year, MongoDB's more than 50,000 customers have been telling us that their needs are changing. They're increasingly focused on three areas:

Helping developers build faster and more efficiently

Empowering teams to create AI-powered applications

Moving from legacy systems to modern platforms

Across these areas, there's a common need for a solid foundation: each requires a resilient, scalable, secure, and highly performant database. The updates we shared at MongoDB.local London reflect these priorities. MongoDB is committed to ensuring that our products are built to exceed our customers' most stringent requirements, and that they provide the strongest possible foundation for building a wide range of applications, now and in the future. Indeed, during yesterday's event, Sahir Azam, MongoDB's Chief Product Officer, discussed the foundational role data plays in his keynote address. He also shared the latest advancement from our partner ecosystem: an AI solution powered by MongoDB, Amazon Web Services, and Anthropic that makes it easier for customers to deploy gen AI customer care applications.

MongoDB 8.0: The best version of MongoDB ever

The biggest news at .local London was the general availability of MongoDB 8.0, which provides significant performance improvements and reduced scaling costs, and adds additional scalability, resilience, and data security capabilities to the world's most popular document database.
Architectural optimizations in MongoDB 8.0 have significantly reduced memory usage and query times, and MongoDB 8.0 has more efficient batch processing capabilities than previous versions. Specifically, MongoDB 8.0 features 36% better read throughput, 56% faster bulk writes, and 20% faster concurrent writes during data replication. In addition, MongoDB 8.0 can handle higher volumes of time series data and can perform complex aggregations more than 200% faster, with lower resource usage and costs. Last (but hardly least!), Queryable Encryption now supports range queries, ensuring data security while enabling powerful analytics.

For more on MongoDB.local London's product announcements, which are designed to accelerate application development, simplify AI innovation, and speed developer upskilling, please read on!

Accelerating application development

Improved scaling and elasticity in MongoDB Atlas

New enhancements to MongoDB Atlas's control plane allow customers to scale clusters faster, respond to resource demands in real time, and optimize performance, all while reducing operational costs. First, our new granular resource provisioning and scaling features, including independent shard scaling and extended storage and IOPS on Azure, allow customers to optimize resources precisely where needed. Second, Atlas customers will experience up to 50% faster cluster scaling, achieved by scaling clusters in parallel by node type. Finally, MongoDB Atlas users will enjoy more responsive auto-scaling, with a 5x improvement in responsiveness thanks to enhancements in our scaling algorithms and infrastructure. These enhancements are being rolled out to all Atlas customers, who should start seeing benefits immediately.

IntelliJ plugin for MongoDB

Announced in private preview, the MongoDB for IntelliJ Plugin is designed to enhance the way developers work with MongoDB in IntelliJ IDEA, one of the most popular IDEs among Java developers.
The plugin allows enterprise Java developers to write and test Java queries faster, receive proactive performance insights, and reduce runtime errors right in their IDE. By enhancing the database-to-IDE integration, JetBrains and MongoDB have partnered to deliver a seamless experience for their shared user base and unlock their potential to build modern applications faster. Sign up for the private preview here.

MongoDB Copilot Participant for VS Code (Public Preview)

Now in public preview, the new MongoDB Participant for GitHub Copilot integrates domain-specific AI capabilities directly with a chat-like experience in the MongoDB Extension for VS Code.
MongoDB 8.0: Raising the Bar
I recently received an automated reminder that I was approaching a work anniversary, which took me somewhat by surprise. It's hard to believe that it's already been a year (to the day) since I joined MongoDB! So I thought I'd take a moment to reflect on my MongoDB journey so far, share some exciting product updates, and signal where we're headed next.

Our customers

I joined MongoDB because it built a product developers love. The innovation of MongoDB's document model empowered developers to simply build. No longer encumbered by having to formalize and normalize their data schema before their application was even designed, developers could interact with data in an intuitive JSON format, and it was easy to evolve data structures as their application evolved.

One of my first steps upon joining the company was to learn more about our customers. I was excited to learn that in addition to delighting developers, MongoDB had launched capabilities that enabled it to win mission-critical workloads from enterprise-class customers, including 70% of the Fortune 100 as well as highly regulated global financial institutions, health care providers, and government agencies. I found it remarkable that customers could replicate data across AWS, Google Cloud, and Microsoft Azure in MongoDB Atlas (our fully managed cloud database service) with just a few mouse clicks, and that some customers replicate data between the cloud and on premises using MongoDB Enterprise Advanced. This optionality struck me as powerful in the era of rapid advancements in AI, as it enables customers to easily bring their data to the best cloud provider for AI.

Soon after I joined MongoDB, the team was firming up the development roadmap for the next version of MongoDB, and they asked for my input on the plan. The team was debating whether to focus on features developers would love or on governance capabilities required by large enterprises.
I knew that ideally we would please all of our customers, so we had to try to make this an "and" and not an "or." While I was new to MongoDB, from my 17+ years at AWS I learned that all customers demand security, durability, availability, and performance (in that order) from any modern technology offering. If a product or service doesn't have those four elements, customers won't buy whatever you're selling. So as a team, we agreed that our next release, MongoDB 8.0, had to raise the bar for all of our customers, delivering great security, durability, availability, and performance.

The plan

We had less than a year before our target launch, so we knew we had to get moving, fast. My team and I brought MongoDB's product and engineering organizations together to align on the plan for our next release. We set goals around delivering significant improvements in security, durability, and availability. And we set a line in the sand: we weren't going to release MongoDB 8.0 unless it was the best-performing version of MongoDB yet.

Measuring the performance of a feature-rich database like MongoDB can be tricky, as customers run a wide range of workloads. So we decided to run a suite of benchmarks to simulate customer workloads. We also developed Andon cord-inspired automation that would automatically roll back any code contributions that regressed our performance metrics. Finally, a set of senior engineering leaders met regularly to review our progress and immediately escalated any blockers that could jeopardize our launch, so that we could quickly fix things. From my experience, I knew that great teams really respond when they're given clear goals and empowered to innovate, so I was excited to see what they would come up with. I'm proud to say that our product and engineering teams rose to the challenge.
Announcing MongoDB 8.0

Today, I'm thrilled to announce the general availability of MongoDB 8.0, the most secure, durable, available, and performant version of MongoDB yet! The team came up with architectural optimizations in MongoDB 8.0 that have significantly reduced memory usage and query times, and have made batch processing more efficient than in previous versions. Specifically, MongoDB 8.0 features:

36% better read throughput

56% faster bulk writes

20% faster concurrent writes during data replication

200% faster complex aggregations of time series data

With these improvements, we're seeing benchmarks for typical web applications perform 32% better overall. Here's a breakdown of how MongoDB 8.0 performs against some of our benchmarks. Improved performance benefits all users of applications built atop MongoDB, and for MongoDB customers, it can mean reduced costs (due to an improved price/performance ratio).

In addition to significant performance gains, MongoDB 8.0 delivers a wide range of improvements, including (but not limited to):

Improving availability by delivering sharding enhancements that distribute data across shards up to 50 times faster and at up to 50% lower starting cost, with less need for additional configuration or setup.

Improving support for a wide range of search and AI applications at higher scale and lower cost, via quantized vectors (compressed representations of full-fidelity vectors) that require up to 96% less memory and are faster to retrieve while preserving accuracy.

Enabling customers to encrypt data at rest, in transit, and in use by expanding MongoDB's Queryable Encryption to also support range queries.
Queryable Encryption is a groundbreaking, industry-first innovation developed by the MongoDB Cryptography Research Group that allows customers to encrypt sensitive application data, store it securely as fully randomized encrypted data in the MongoDB database, and run expressive queries on the encrypted data, with no cryptography expertise required.

You might wonder why we're so confident that customers are going to love MongoDB 8.0. Well, we've been acting as our own customer and have moved our own applications over to 8.0. This approach is generally called "dogfooding," but we think that "eating our own pizza" sounds more appetizing. Our internal build system, which our software developers use daily, is built atop MongoDB, and when we upgraded to MongoDB 8.0 we saw query latencies drop by approximately 75%! This was a double win: it improved the performance of our own tooling, and it set our performance chat room abuzz with excitement in anticipation of delighting external customers. While results may vary based on your particular workload, the point is that we just couldn't wait to share MongoDB 8.0's performance gains with customers.

Indeed, customers are already seeing great results on MongoDB 8.0. For example, Felix Horvat, Chief Technology Officer at OCELL, a climate technology company in Germany, said: "With MongoDB 8.0, we have seen an incredible boost in performance, with some of our queries running twice as fast as before. This improvement not only enhances our data processing capabilities but also aligns perfectly with our commitment to resource efficiency. By optimizing our backend operations, we can be more effective in our climate initiatives while conserving resources—a true reflection of our dedication to sustainable solutions."

I encourage you to check out MongoDB 8.0 yourself.
It's available today via MongoDB Atlas, as part of MongoDB Enterprise Advanced for on-premises and hybrid deployments, and as a free download from mongodb.com/try with MongoDB Community Edition. In addition, customers upgrading from previous versions of MongoDB to 8.0 can find helpful upgrade guides on mongodb.com.

What's next?

We're excited for you to try MongoDB 8.0 and to share your feedback, as customer feedback helps guide our roadmap for future releases. Going forward, please watch this space: over the next few weeks, we'll be publishing a series of engineering blog posts that dig into MongoDB's investments in the technology behind MongoDB 8.0. We're also planning posts about horizontal scaling in MongoDB 8.0 and a closer look at Queryable Encryption, but let me know what you'd like to hear more about.

It's been an exciting year at MongoDB. I can't wait to see what the next one has in store!

–Jim
Bringing Gen AI Into The Real World with Ramblr and MongoDB
How do you bring the benefits of gen AI, a technology typically experienced via keyboard and screen, into the physical world? That's the problem the team at Ramblr.ai, a San Francisco-based startup, is solving with its powerful and versatile 3D annotation and recognition capabilities. "With Ramblr you can record continuously what you are doing, and then ask the computer, in natural language, 'Where did I go wrong?' or 'What should I do next?'" said Frank Angermann, Lead Pipeline & Infrastructure Engineer at Ramblr.ai.

Gen AI for the real world

One of the best examples of Ramblr's technology, and its potential, is its work with the international chemical giant BASF. In a video demonstration on Ramblr's website, a BASF engineer can be seen tightening bolts on a connector (or "flange") joining two parts of a pipeline. Every move the engineer makes is recorded via a helmet-mounted camera. Once the worker is finished for the day, this footage, along with the footage of every other person working on the pipeline, is uploaded to a database. Using Ramblr's technology, quality assurance engineers from BASF then query the collected footage from every worker, asking the software to, for example, "assess footage from today's pipeline connection work and see if any of the bolts were not tightened enough." Having processed the footage, Ramblr assesses whether those flanges were assembled correctly and identifies any that require further inspection or correction.

The method behind the magic

"We started Ramblr.ai as an annotation platform, a place where customers could easily label images from a video and have machine learning models then identify that annotation throughout the video automatically," said Frank. "In the past this work would be carried out manually by thousands of low-paid workers tagging videos by hand. We thought we could be better by automating that process," he added.
The software allows customers to easily customize and add annotations to footage for their particular use case, and with its gen-AI-powered active learning approach, Ramblr then "fills in" the rest of the video based on those annotations.

Why MongoDB?

MongoDB has been part of the Ramblr technology stack since the beginning. "We use MongoDB Atlas for half of our storage processes. Metadata, annotation data, etc., can all be stored in the same database. This means we don't have to rely on separate databases to store different types of data," said Frank. Flexibility of data storage was also a key consideration when choosing a database. "With MongoDB Atlas, we could store information the way we wanted to," he added. The built-in vector database capabilities of Atlas were also appealing to the Ramblr team: "The ability to store vector embeddings without having to do any more work, for instance not having to move a 3 MB array of data somewhere else to process it, was a big bonus for us."

The future

Aside from infrastructure and construction Q&A, robotics is another area in which the Ramblr team is eager to deploy its technology. "Smaller robotics companies don't typically have the data to train the models that inform their products. There are quite a few use cases where we could support these companies and provide a more efficient and cost-effective way to teach the robots. We are extremely efficient in providing information for object detectors," said Frank. But while there are plenty of commercial uses for Ramblr's technology, the growth of spatial computing in the consumer sector, especially following the release of Apple's Vision Pro and Meta Quest headsets, opens up a whole new category of use cases. "Spatial computing will be a big part of the world.
Being able to understand the particular processes, taxonomy, and what the person is actually seeing in front of them will be a vital part of the next wave of innovation in user interfaces and the evolution of gen AI," Frank added.

Are you building AI apps? Join the MongoDB AI Innovators Program today! Successful participants gain access to free Atlas credits, technical enablement, and invaluable connections within the broader AI ecosystem. If your company is interested in being featured, we'd love to hear from you. Connect with us at ai_adopters@mongodb.com.
AI-Driven Noise Analysis for Automotive Diagnostics
Aftersales service is a crucial revenue stream for the automotive industry, with leading manufacturers executing repairs through their dealer networks. Traditional diagnostic methods can be time-consuming, expensive, and imprecise, especially for complex engine issues. One global automotive giant recently embarked on an ambitious project to revolutionize its diagnostic process. The project, which aimed to increase efficiency, customer satisfaction, and revenue throughput, involved the development of an AI-powered solution that could quickly analyze engine sounds and compare them to a database of known problems, significantly reducing diagnostic times for complex engine issues.

Initial setbacks, then a fresh perspective

Despite the client team's best efforts, the project faced significant challenges and setbacks during the nine-month prototype phase. Though the team struggled to produce reliable results, they were determined to make the project a success. At this point, MongoDB introduced its client to Pureinsights, a specialized gen AI implementation and MongoDB AI Application Program partner, to rethink the solution and salvage the project. As new members of the project team, and as Pureinsights's CTO and Lead Architect, respectively, we brought a fresh perspective to the challenge.

Figure 1: Before and after the AI-powered noise diagnostic solution

A pragmatic approach: Text before sound

Upon review, we discovered that the project had initially started with a text-based approach before being persuaded to switch to sound analysis. The Pureinsights team recommended reverting to text analysis as a foundational step before tackling the more complex audio problem.
This strategy involved:

Collecting text descriptions of car problems from technicians and customers.

Comparing these descriptions against a vast database of known issues already stored in MongoDB.

Utilizing advanced natural language processing, semantic/vector search, and retrieval-augmented generation techniques to identify similar cases and potential solutions.

Our team tested six different models for cross-lingual semantic similarity, ultimately settling on Google's Gecko model for its superior performance across 11 languages.

Pushing boundaries: Integrating audio analysis

With the text-based foundation in place, we turned to audio analysis. Pureinsights developed an innovative approach by combining our AI expertise with insights from advanced sound analysis research. We drew inspiration from groundbreaking models that had gained renown for their ability to identify cities solely from background noise in audio files. This blend of AI knowledge and specialized audio analysis techniques resulted in a robust, scalable system capable of isolating and analyzing engine sounds from various recordings. We adapted these sophisticated audio analysis models, originally designed for urban sound identification, to the specific challenges of automotive diagnostics. These learnings and adaptations are also applicable to future AI-driven audio analysis use cases across various industries.

This expertise was crucial in developing a sophisticated audio analysis model capable of:

Isolating engine and car noises from customer or technician recordings.

Converting these isolated sounds into vectors.

Using these vectors to search the manufacturer's existing database of known car problem sounds.

At the heart of this solution is MongoDB's powerful database technology. The system leverages MongoDB's vector and document stores to manage over 200,000 case files.
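A lookup over such a store can be expressed with an Atlas Vector Search aggregation stage. The sketch below is illustrative only: the index, collection, and field names are hypothetical, not the client's actual schema:

```python
# Illustrative sketch: search known-problem case files by similarity to
# a query embedding (e.g. a vectorized engine recording). "audio_index",
# "audio_embedding", "vehicle", and "symptoms" are placeholder names.

def build_case_lookup(query_vector: list, k: int = 5) -> list:
    """Build a $vectorSearch pipeline returning the k nearest case files."""
    return [
        {
            "$vectorSearch": {
                "index": "audio_index",
                "path": "audio_embedding",   # field holding the sound vector
                "queryVector": query_vector,
                "numCandidates": 20 * k,     # candidates considered before ranking
                "limit": k,
            }
        },
        {"$project": {"vehicle": 1, "symptoms": 1,
                      "score": {"$meta": "vectorSearchScore"}}},
    ]
```

Because the embeddings live alongside the structured and text data in the same documents, one pipeline can retrieve similar cases and their full context in a single query.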
Each "document" is more akin to a folder or case file containing:

- Structured data about the vehicle and reported issue
- Sound samples of the problem
- Unstructured text describing the symptoms and context

This unified approach allows for seamless comparison of text and audio descriptions of customer engine problems using MongoDB's native vector search technology.

Encouraging progress and phased implementation

The solution's text component has already been rolled out to several dealers, and the audio similarity feature will be integrated in late 2024. This phased approach allows for real-world testing and refinement before a full-scale deployment across the entire repair network. The client is taking a pragmatic, step-by-step approach to implementation. If the initial partial rollout with audio diagnostics proves successful, the plan is to expand the solution more broadly across the dealer network. This cautious (yet forward-thinking) strategy aligns with the automotive industry's move towards more data-driven maintenance practices. As the solution continues to evolve, the team remains focused on enhancing its core capabilities in text and audio analysis for current diagnostic needs. The manufacturer is committed to evaluating the real-world impact of these innovations before considering potential future enhancements. This measured approach ensures that each phase of the rollout delivers tangible benefits in efficiency, accuracy, and customer satisfaction. By prioritizing current diagnostic capabilities and adopting a phased implementation strategy, the automotive giant is paving the way for a new era of efficiency and customer service in their aftersales operations. The success of this initial rollout will inform future directions and potential expansions of the AI-powered diagnostic system.

A new era in automotive diagnostics

The automotive giant brought industry expertise and a clear vision for improving their aftersales service.
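To make the case-file idea concrete, here is a sketch of what such a document and a vector query against it might look like. The field, index, and collection names are illustrative assumptions, not details from the actual system; the `$vectorSearch` stage itself is Atlas's vector search aggregation stage.

```python
# Hypothetical shape of one case file (truncated embedding vectors for brevity).
case_file = {
    "vehicle": {"make": "ExampleMotors", "model": "GT", "year": 2021},
    "symptom_text": "Knocking noise under hard acceleration",
    "text_embedding": [0.12, -0.07, 0.44],   # embedding of the symptom text
    "sound_embedding": [0.31, 0.02, -0.18],  # embedding of the engine recording
    "sound_sample_uri": "s3://example-bucket/case-0001.wav",
}

def vector_search_pipeline(index, path, query_vector, k=5):
    """Build an Atlas $vectorSearch aggregation pipeline returning top-k matches."""
    return [
        {
            "$vectorSearch": {
                "index": index,
                "path": path,
                "queryVector": query_vector,
                "numCandidates": k * 20,  # oversample candidates for better recall
                "limit": k,
            }
        },
        {"$project": {"symptom_text": 1, "vehicle": 1,
                      "score": {"$meta": "vectorSearchScore"}}},
    ]

# The same collection can be searched by text embedding or by sound embedding:
pipeline = vector_search_pipeline("sound_index", "sound_embedding", [0.3, 0.0, -0.2])
```

With pymongo, this would run as `db.cases.aggregate(pipeline)` against a collection that has a matching Atlas Vector Search index defined.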
MongoDB provided the robust, flexible data platform essential for managing and analyzing diverse, multi-modal data types at scale. We, at Pureinsights, served as the AI application specialist partner, contributing critical AI and machine learning expertise, and bringing fresh perspectives and innovative approaches. We believe our role was pivotal in rethinking the solution and salvaging the project at a crucial juncture. This synergy of strengths allowed the entire project team to overcome initial setbacks and develop a groundbreaking solution that combines cutting-edge AI technologies with MongoDB's powerful data management capabilities. The result is a diagnostic tool leveraging text and audio analysis to significantly reduce diagnostic times, increase customer satisfaction, and boost revenue through the dealer network. The project's success underscores several key lessons:

- The value of persistence and flexibility in tackling complex challenges
- The importance of choosing the right technology partners
- The power of combining domain expertise with technological innovation
- The benefits of a phased, iterative approach to implementation

As industries continue to evolve in the age of AI and big data, this collaborative model—bringing together industry leaders, technology providers, and specialized AI partners—sets a new standard for innovation. It demonstrates how companies can leverage partnerships to turn ambitious visions into reality, creating solutions that drive business value while enhancing customer experiences. The future of automotive diagnostics—and AI-driven solutions across industries—looks brighter thanks to the combined efforts of forward-thinking enterprises, cutting-edge database technologies like MongoDB, and specialized AI partners like Pureinsights. As this solution continues to evolve and deploy across the global dealer network, it paves the way for a new era of efficiency, accuracy, and customer satisfaction in the automotive industry.
This solution has the potential to not only revolutionize automotive diagnostics but also set a new standard for AI-driven solutions in other industries, demonstrating the power of collaboration and innovation. To deliver more solutions like this—and to accelerate gen AI application development for organizations at every stage of their AI journey—Pureinsights has joined the MongoDB AI Application Program (MAAP). Check out the MAAP page to learn more about the program and how MAAP ecosystem members like Pureinsights can help your organization accelerate time-to-market, minimize risks, and maximize the value of your AI investments.
Away From the Keyboard: Apoorva Joshi, MongoDB Senior AI Developer Advocate
Welcome to our article series focused on developers and what they do when they’re not building incredible things with code and data. “Away From the Keyboard” features interviews with developers at MongoDB, discussing what they do, how they establish a healthy work-life balance, and their advice for others looking to create a more holistic approach to coding. In this article, Apoorva Joshi shares her day-to-day responsibilities as a Senior AI Developer Advocate at MongoDB; what a flexible approach to her job and life looks like; and how her work calendar helps prioritize overall balance.

Q: What do you do at MongoDB?

Apoorva: My job is to help developers successfully build AI applications using MongoDB. I do this through written technical content, hands-on workshops, and design whiteboarding sessions.

Q: What does work-life balance look like for you?

Apoorva: I love remote work. It allows me to have a flexible approach towards work and life where I can accommodate life things, like dental appointments, walks, or lunches in the park during my work day—as long as work gets done.

Q: Was that balance always a priority for you or did you develop it later in your career?

Apoorva: Making work-life balance a priority has been a fairly recent development. During my first few years on the job, I would work long hours, partly because I felt like I needed to prove myself and also because I hadn’t prioritized finding activities I enjoyed outside of school or work up until then. The first lockdown during the pandemic put a lot of things into perspective. With work and life happening in the same place, I felt the need for boundaries. Having nowhere to go encouraged me to try out new hobbies, such as solving jigsaw puzzles, as well as reconnecting with old favorites, like reading and painting.

Q: What benefits has this balance given you?

Apoorva: Doing activities away from the keyboard makes me more productive at work.
A flexible working schedule also creates a stress-free environment and allows me to bring my 100% to work. This balance helps me make time for family and friends, exercise, chores, and hobbies. Overall, having a healthy work-life balance helps me lead a fulfilling life that I am proud of.

Q: What advice would you give to a developer seeking to find a better balance?

Apoorva: The first step to finding a balance between work and life is to recognize that boundaries are healthy. I have found that putting everyday things, such as lunch breaks and walks, on my work calendar is a good way to remind myself to take that break or close my laptop, while also communicating those boundaries with my colleagues. If you are having trouble doing this on your own, ask a family member, partner, or friend to remind you!

Thank you to Apoorva Joshi for sharing her insights! And thanks to all of you for reading. Look for more in our new series. Interested in learning more about or connecting more with MongoDB? Join our MongoDB Community to meet other community members, hear about inspiring topics, and receive the latest MongoDB news and events. And let us know if you have any questions for our future guests when it comes to building a better work-life balance as developers. Tag us on social media: @/mongodb
Pathfinder Labs Tames Data Chaos and Unleashes AI with MongoDB
Pathfinder Labs develops software that specializes in empowering law enforcement agencies and investigators to apprehend criminals and rescue victims of child abuse. The New Zealand-headquartered company is staffed by professionals with diverse backgrounds and expertise, including counter-terrorism, online child abuse investigations, industrial espionage, digital forensics, and more, spanning both the government and private sectors. Last July, I was thrilled to welcome Pathfinder Labs’ CEO, Bree Atkinson, and co-founder and DevOps Architect, Peter Pilley, to MongoDB .local Sydney, where they shared more about the company’s innovative solutions powered by MongoDB. Those solutions are deployed and utilized by prestigious organizations on a global scale, including Interpol. Pathfinder Labs’ main product, Paradigm, has been built on MongoDB Atlas and runs on AWS. The tool—which relies on MongoDB’s developer data platform and document database model to sift through complex and continually growing numbers of data sets—helps collect, gather, and convert data into actionable decisions for law enforcement professionals. Pilley explained that Paradigm was “made by investigators, for investigators.” Paradigm is designed to present the information it helps gather in a way that will support a successful prosecution and outcome at trial. MongoDB Atlas enables Pathfinder Labs to tame the chaos arising from the data sets created and gathered throughout an investigation. MongoDB’s scalability and automation capabilities are particularly helpful in this regard. Powered by MongoDB Atlas, Paradigm can also easily identify similarities between cases, and uncover unique insights by bringing together information from disparate data sources. This could, for example, mean bringing together geolocation data and metadata from an image, or identifying similar case patterns from law enforcement agencies operating in different states or countries.
Ultimately, Paradigm simplifies evidence gathering and analysis, integrates external data sources and vendors, future-proofs investigation methods, and helps minimize overall costs. Its capabilities are unlocking a whole new generation of data-driven investigative capabilities. During the presentation, Pilley used the example of a missing-person case in the United States: it took a team of three investigators 12 months to solve the case. Using Paradigm, Pathfinder Labs was able to solve that same case in less than an hour. “With Paradigm, we were able to feed some extra information and solve the case in 40 minutes. MongoDB Atlas allowed us to make quick decisions and present information to investigators in the most efficient way.” Pathfinder Labs also incorporates AI capabilities, including MongoDB Vector Search, which help identify which information is particularly relevant, select specific data points that can be used at a strategic point in time, connect data from one case to another, and identify what information might be missing. MongoDB Atlas Vector Search helps Pathfinder match images and details in images (e.g., people, objects), classify documents and text, and build better search experiences for users via semantic search. “I was super excited when [Atlas Vector Search] came out. The fact that I can now have it as part of my standard workflow without having to deploy other kits all the time to support our vector searches has been an absolute game changer,” added Pilley. Finally, the team has seen great value in MongoDB’s Performance Advisor and schema anti-pattern detection features: “The Performance Advisor alone has solved many problems,” concluded Pilley. To learn more and get started with MongoDB Vector Search, visit our Vector Search Quick Start page.
Revolutionizing Sales with AI: Glyphic AI’s Journey with MongoDB
When connecting with customers, sales teams often struggle to understand and address the unique needs and preferences of each prospect, leading to ineffective pitches. Additionally, time-consuming admin tasks like data entry, sales tool updates, follow-up management, and maintaining personalized interactions across numerous leads can overwhelm teams, leaving less time for impactful selling. Glyphic AI, a pioneering AI-powered sales co-pilot, addresses these challenges. By analyzing sales processes and calls, Glyphic AI helps teams streamline workflows and focus on building stronger customer relationships. Founded by former engineers from Google DeepMind and Apple, Glyphic AI leverages expertise in large language models (LLMs) to work with private and dynamic data. "As LLM researchers, we discovered the true potential of these models lies in the sales domain, generating vast numbers of calls rich with untapped insights. Traditionally, these valuable insights were lost in digital archives, as extracting them required manually reviewing calls and making notes," says Devang Agrawal, co-founder and Chief Technology Officer of Glyphic AI. “Our aim became to enhance customer centricity by harnessing AI to capture and utilize conversational and historical data, transforming it into actionable intelligence for ongoing and future deals.” Built on MongoDB, AWS, and Anthropic, Glyphic AI automatically breaks down sales calls using established methodologies like MEDDIC. It leverages ingested sales playbooks to provide tailored strategies for different customer personas and company types. By using data sources such as Crunchbase, LinkedIn, and internal CRM information, the tool proactively surfaces relevant insights before sales teams engage with customers. Glyphic AI employs LLMs to offer complete visibility into sales deals by understanding the full context and intent of real-time conversations.
The system captures information at various points, primarily focusing on sales calls and recordings. These data are analyzed by LLMs tailored for sales tasks, summarizing content based on sales frameworks and extracting specific information requested by teams. MongoDB serves as the main database for customer records, sales call data, and related metadata, while large video files are stored in AWS S3. MongoDB Atlas Search and Vector Search features are integrated, providing the ability to index and query high-dimensional vectors efficiently. Glyphic AI’s Global Search feature uses Atlas Vector Search to allow users to ask strategic questions and retrieve data from numerous sales calls. It matches queries with vector embeddings in MongoDB, utilizing metadata, account details, and external sources like LinkedIn and Crunchbase to identify relevant content. This content is processed by the LLM for detailed conversational responses. Additionally, MongoDB's Atlas Vector Search continuously updates records, building a dynamic knowledge base that provides quick insights and proactively generates summaries enriched with data from various sources, assisting with sales calls and customer analysis.

Figure 1: How Glyphic AI transforms sales call analysis

Why Glyphic AI relies on advanced cloud solutions for efficient data management and innovation

"I used MongoDB in the first app I ever built, and ever since it has consistently met our needs, no matter the project," says Agrawal. For Glyphic AI, MongoDB has seamlessly integrated into the company’s existing workflows. MongoDB Atlas has greatly simplified database management and analytics. The team initially implemented vector search from scratch; when MongoDB introduced Atlas Vector Search, Glyphic AI transitioned to this more streamlined and integrated solution.
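A Global Search query of the kind described above can be sketched as a `$vectorSearch` stage that pre-filters call embeddings by account metadata before ranking by vector similarity. The index and field names here are illustrative assumptions about Glyphic AI's schema, not confirmed details; the `filter` option is Atlas Vector Search's documented pre-filtering mechanism.

```python
def global_search_stage(query_vector, account_id=None, k=10):
    """Sketch of a $vectorSearch stage over sales-call transcript embeddings,
    optionally restricted to one account's calls via a metadata pre-filter."""
    stage = {
        "$vectorSearch": {
            "index": "call_embeddings",       # assumed index name
            "path": "transcript_embedding",   # assumed field name
            "queryVector": query_vector,
            "numCandidates": k * 15,          # oversample for recall
            "limit": k,
        }
    }
    if account_id is not None:
        # Pre-filter: only calls belonging to this account are considered.
        stage["$vectorSearch"]["filter"] = {"account_id": account_id}
    return stage

stage = global_search_stage([0.1, 0.4, -0.2], account_id="acme-corp")
```

The matched transcripts would then be passed to the LLM to compose a conversational answer, as the article describes.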
“If MongoDB's Atlas Vector Search had been available back then, we would have adopted it immediately for its ease of testing and deployment,” Agrawal reflects. While Agrawal appreciates the benefits of building from scratch, he acknowledges that maintaining complex systems, like databases or developing LLM models, becomes increasingly challenging over time. The AI feature enabling natural language queries in MongoDB Compass has been particularly beneficial for Glyphic AI, especially when extracting insights not yet available in dashboards or analyzing specific database elements. In the fast-paced AI industry, time to market is critical. MongoDB Atlas, as a cloud solution, offers Glyphic AI the flexibility and scalability needed to quickly test, deploy, and refine its applications. The integration of MongoDB Atlas with features like Atlas Vector Search has enabled the team to focus on innovation without being bogged down by infrastructure complexities, speeding up the development of AI-powered features. As a small, agile team, Glyphic AI leverages MongoDB's document model, which aligns well with object-oriented programming principles. This allows for rapid development and iteration of product features, enabling the company to stay competitive in the evolving generative AI market. By simplifying data management and reducing friction, MongoDB’s document model helps Glyphic AI maintain agility and focus on delivering impactful solutions. With vector search embedded in MongoDB, the team found relief in using a unified language and system. Keeping all data—including production records and vectors—in one place has greatly simplified operations. Before adopting MongoDB, the team struggled with synchronizing data across multiple systems and managing deletions to avoid inconsistencies. MongoDB’s ACID compliance has made this process far more straightforward, ensuring reliable transactions and maintaining data integrity. 
By consolidating production records and vectors into MongoDB, the team achieved the simplicity they needed, eliminating the complexities of managing disparate systems.

Glyphic AI's next step: Refining LLMs for enhanced sales insights and strategic decision-making

“Over the next year, our goal is to refine our LLMs specifically for the sales context to deliver more strategic insights. We've built a strong conversational intelligence product that enhances efficiency for frontline sales reps and managers. Now, we're focused on aggregating conversation data to provide strategy teams and CROs with valuable insights into their teams' performance,” says Agrawal. As sales analysis evolves to become more strategic, significant technical challenges will arise, especially when scaling from summarizing a handful of calls to analyzing thousands in search of complex patterns. Current LLMs are often limited in their ability to process large amounts of sales call data, which means ongoing adjustments and improvements will be necessary to keep up with new developments. Additionally, curating effective datasets, including synthetic and openly available sales data, will be a key hurdle in training these models to deliver meaningful insights. By using MongoDB, Glyphic AI will be able to accelerate innovation due to the reduced need for time-consuming maintenance and management of complex systems. This will allow the team to focus on essential tasks like hiring skilled talent, driving innovation, and improving the end-user experience. As a result, Glyphic AI will be able to prioritize core objectives and continue to develop and refine their products effectively. As Glyphic AI fine-tunes its LLMs for the sales context, the team will embrace retrieval-augmented generation (RAG) to push the boundaries of AI-driven insights. Leveraging Atlas Vector Search will enable Glyphic AI to handle large datasets more efficiently, transforming raw data into actionable sales strategies.
This will enhance its AI’s ability to understand and predict sales trends with greater precision, setting the stage for a new level of sales intelligence and positioning Glyphic AI at the forefront of AI-driven sales solutions. As part of the MongoDB AI Innovators Program, Glyphic AI’s engineers gain direct access to MongoDB’s product management team, facilitating feedback exchange and receiving the latest updates and best practices. This collaboration allows them to concentrate on developing their LLM models and accelerating application development. Additionally, the provision of MongoDB Atlas credits helps reduce costs associated with experimenting with new features. Get started with your AI-powered apps by registering for MongoDB Atlas and exploring the tutorials in our AI resources center. If you're ready to dive into Atlas Vector Search, head over to the quick-start guide to kick off your journey. Additionally, if your company is interested in being featured in a story like this, we'd love to hear from you. Reach out to us at ai_adopters@mongodb.com.
Introducing the New MongoDB Application Delivery Certification
Since we launched our System Integrators Certification Program in 2022, we have certified over 18,000 associates and architects across MongoDB’s various system integrator, advisory, and consulting services partners. This program gives system integrators a solid foundation in MongoDB and the capabilities that enable them to architect modernization projects and modern, AI-enriched applications. Our customers continue to tell us that they are looking to innovate quicker and take advantage of new technologies, and we want to support them in these goals. They want to work with partners who have in-depth knowledge of the problems they are trying to solve and hands-on experience working with the technology they are implementing. To meet this customer need and continue to evolve our partnership with our system integrators, we have launched the MongoDB Application Delivery Certification. This is a natural evolution of our certification program that provides comprehensive training and equips developers and application delivery leads with the knowledge and skills needed to design, develop, and deploy modern solutions at scale.

Driving innovation alongside our partners

The MongoDB Application Delivery Certification includes exclusive, partner-only, online learning and hands-on labs covering application delivery fundamentals and implementation best practices, as well as a proctored certification exam. Partners can expect carefully curated content on everything from optimizing storage, queries, and aggregation to retrieval-augmented generation (RAG), and how to architect and deliver with Atlas Vector Search. We piloted this new program with our partners at Accenture and Capgemini to ensure it would drive value for all participants. Twenty developers were invited from each company to participate in an initial version of the curriculum and were able to provide their input on its content.
Based on their feedback, we created a program that’s completely self-service and flexible, so learners can fit the coursework into their schedules, at their own pace. "With the growth of AI and data-powered applications, Capgemini are investing in our staff to ensure they have the skills required for this transformation,” said Steve Jones, Executive Vice President, Data Driven Business & Collaborative Data Ecosystems at Capgemini. “The MongoDB Application Delivery Certification helps ensure our people have the right skills to help MongoDB and Capgemini collaborate with our clients on delivering the maximum business value possible in the data-powered future." "Accenture, a strategic partner and part of MongoDB’s AI Application Program, leverages MongoDB’s certification program to ensure the highest quality of delivery capability as our clients race to modernize legacy systems to MongoDB,” said Ram Ramalingam, Senior Managing Director and Global Lead, Platform Engineering and Intelligent Edge at Accenture. We understand that for many businesses, speed is a necessity, and keeping pace with the technological innovation in the current market is essential. Now, customers looking to implement MongoDB solutions will be able to do so quickly and easily by working with partners who have achieved the new MongoDB Application Delivery Certification. They can have the peace of mind knowing that these validated partners are extensively equipped to create and deploy robust MongoDB solutions at scale. What’s more, this new certification will provide partners with other opportunities. Partners who have demonstrated deep technical expertise by successfully completing the MongoDB Application Delivery Certification Program may be considered for the MongoDB AI Applications Program (MAAP). This will give them access to a greater network of customers that need help building and deploying modern applications enriched with AI technology. 
To learn more about MongoDB’s partners helping boost developer productivity with a range of proven technology integrations, visit the MongoDB Partner Ecosystem. Current SI partners can register for the MongoDB Certification Program and MongoDB Application Delivery Certification Program.
Ahamove Rides Vietnam’s E-commerce Boom with AI on MongoDB
The energy in Vietnam’s cities is frenetic as millions of people navigate the busy streets with determination and purpose. Much of this traffic is driven by e-commerce, with food and parcel deliveries perched on the back of the country’s countless motorcycles or in cars and trucks. In the first quarter of 2024, online spending in Vietnam grew a staggering 79% over the previous year. Explosive growth like this is expected to continue, raising the industry’s value to $32 billion by 2025, with 70% of the country’s 100 million population making e-commerce transactions. With numbers this large, efficiency is king in logistics. High customer expectations for rapid deliveries drive companies like Ahamove to innovate their way to seamless operations with cloud technology. Ahamove is Vietnam’s largest on-demand delivery company, handling more than 200,000 e-commerce, food, and warehouse deliveries daily, with 100,000 drivers and riders plying the streets nationwide. The logistics leader serves a network of more than 300,000 merchants, including regional e-commerce giants like Lazada and Shopee, as well as nationwide supermarket chains and small restaurants. The stakes are high for all involved, so maximizing efficiency is of utmost importance.

Innovating to make scale count

Online shoppers’ behavior is rarely predictable, and to cope with sudden spikes in daily delivery demand, Ahamove needed to efficiently scale up its operations to enhance customer and end-user satisfaction. Moving to MongoDB Atlas on Amazon Web Services (AWS) in 2019, Ahamove fundamentally changed its ability to meet the rising demand for deliveries and new services that please e-commerce providers, online shoppers, and diners. The scalability of MongoDB is crucial for Ahamove, especially during peak times, like Christmas or Lunar New Year, when the volume of orders surges to more than 200,000 a day.
“MongoDB's ability to scale ensures that the database can handle increased loads, including data requests, without compromising performance, leading to quicker order processing and improved user experience,” said Tien Ta, Strategic Planning Manager at Ahamove. One powerful capability that improves Ahamove's e-commerce operations across Vietnam is MongoDB's support for geospatial queries. Using location data tied to specific points on Earth's surface, Ahamove can easily locate drivers, match drivers to restaurants to accelerate deliveries, and track orders without relying on third-party services, which would slow deliveries. Meanwhile, the versatility of MongoDB’s developer data platform empowers Ahamove to store its operational data, metadata, and vector embeddings on MongoDB Atlas and seamlessly use Atlas Vector Search to index, retrieve, and build performant generative artificial intelligence (AI) applications.

AI evolution

Powered by MongoDB Atlas, Ahamove is transforming Vietnam’s e-commerce industry with innovations like instant order matching, real-time GPS vehicle tracking, generative AI chatbots, and services like driver rating and variable delivery times, all available 24 hours a day, seven days a week. In addition to traffic, Vietnam is also famous for its excellent street food. Recognizing the importance of the country’s rapidly growing food and beverage (F&B) industry, which is projected to be worth more than US$27.3 billion in 2024, Ahamove decided to help Vietnam’s small food vendors benefit from the e-commerce boom gripping the country. Using the latest models, including GPT-4o mini and Llama 3.1, Ahamove’s fully automated generative AI chatbot on MongoDB integrates with restaurants’ Facebook pages. This makes it easier for hungry consumers to handle the entire order process with the restaurant in natural language, from seeking recommendations to placing orders, making payments, and tracking deliveries to their doorsteps.
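The driver-location lookups described above map naturally onto MongoDB's geospatial operators. As a minimal sketch (collection and field names are assumptions, not Ahamove's actual schema), a query for available drivers near a pickup point might look like this, assuming driver documents store a GeoJSON `location` field covered by a `2dsphere` index:

```python
def nearby_drivers_query(lng, lat, max_meters=3000):
    """Build a find() filter for available drivers within max_meters of a point.
    GeoJSON coordinates are ordered [longitude, latitude]."""
    return {
        "status": "available",
        "location": {
            "$near": {
                "$geometry": {"type": "Point", "coordinates": [lng, lat]},
                "$maxDistance": max_meters,
            }
        },
    }

# A pickup point in central Ho Chi Minh City (illustrative coordinates):
query = nearby_drivers_query(106.7, 10.77)
```

With pymongo this would run as `db.drivers.find(query)`; `$near` returns matches sorted from nearest to farthest, which suits driver assignment.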
How the AhaFood.AI chatbot automates the food order journey

“Vietnam’s e-commerce industry is growing rapidly as more people turn to their mobile devices to purchase goods and services,” added Ta. “With MongoDB, we meet this customer need for new purchase experiences with innovative services like generative AI chatbots and faster delivery times.” AhaFood.AI, Ahamove’s latest initiative, is anticipated to reach 10% of food deliveries in the Da Nang market and roll out nationwide in the first half of 2025. It also provides personalized dish recommendations based on consumer demographics, budgets, or historical preferences, helping people find and order their favorite food faster. Moreover, merchants receive timely notifications of incoming orders via the AhaMerchant web portal, allowing them to start preparing dishes earlier. AhaFood.AI also collects and securely stores users’ delivery addresses and phone numbers, ensuring better driver assignment and fulfilling food orders in less than 15 minutes. “Adopting MongoDB Atlas was one of the best decisions we’ve ever made for Ahamove, allowing us to build an effective infrastructure that can scale with growing demand and deliver a better experience for our drivers and customers,” said Ngon Pham, CEO, Ahamove. “Generative AI will significantly disrupt the e-commerce and food industry, and with MongoDB Vector Search we can rapidly build new solutions using the latest database and AI technology.” The vibrant atmosphere of Vietnam's bustling cities is part of the country's charm. Rather than seeking to bring calm to this energy, Vietnam thrives on it. Focusing on improving efficiency and supporting street food vendors in lively urban areas with cloud technology will benefit all. Learn how to build AI applications with MongoDB Atlas. Head over to our quick-start guide to get started with Atlas Vector Search today.
MongoDB Enables AI-Powered Legal Searches with Qura
The launch of ChatGPT in November 2022 caught the world by surprise. But while the rest of us marveled at the novelty of its human-like responses, the founders of Qura immediately saw another, more focused use case. “Legal data is a mess,” said Kevin Kastberg, CTO for Qura. “The average lawyer spends tens of hours each month on manual research. We thought to ourselves, ‘what impact would this new LLM technology have on the way lawyers search for information?’” And with that, Qura was born.

Gaining trust

From its base in Stockholm, Sweden, Qura set about building an AI-powered legal search engine. The team trained custom models and did continual pre-training on millions of pages of publicly available legal texts, looking to bring the comprehensive power of LLMs to the complex and intricate language of the law. “Legal searches have typically been done via keyword search,” said Kastberg. “We wanted to bring the power of LLMs to this field. ChatGPT created hype around the ability of LLMs to write. Qura is one of the first startups to showcase their far more impressive ability to read. LLMs can read and analyze, on a logical and semantic level, millions of pages of textual data in seconds. This is a game changer for legal search.” Unlike other AI-powered applications, Qura is not interested in generating summaries or “answers” to the questions posed by lawyers or researchers. Instead, Qura aims to provide customers with the best sources and information. “We deliberately wanted to stay away from generative AI. Our customers can be sure that with Qura there is no risk of hallucinations or bad interpretation. Put another way, we will not put an answer in your mouth; rather, we give you the best possible information to create that answer yourselves,” said Kastberg. “Our users are looking for hard-to-find sources, not a gen AI-summary of the basic sources,” he added.
With this mantra, the company claims to have reduced research times by 78% while surfacing double the number of relevant sources when compared to similar legal search products.

MongoDB in the mix

Qura has worked with MongoDB since the beginning. “We needed a document database for flexibility. MongoDB was really convenient as we had a lot of unstructured data with many different characteristics.” In addition to the flexibility to adapt to different data types, MongoDB also offered the Qura team lightning-fast search capabilities. “MongoDB Atlas Search is a crucial tool for our search algorithm agents to navigate our huge datasets. This is especially true of the speed at which we can do efficient text searches on huge corpuses of text, an important part for navigating documents,” said Kastberg. And when it came to AI, a vector database to store and retrieve embeddings was also a real benefit. “Having vector search built into Atlas was convenient and offered an efficient way to work with embeddings and vectorized data.”

What's next?

Qura's larger goal is to bring about the next generation of intelligent search. The legal space is only the start, and the company has larger ambitions to expand beyond Sweden and into other industries too. “We are live with Qura in the legal space in Sweden and currently onboarding EU customers in the coming month. What we are building towards is a new way of navigating huge text databases, and that could be applied to any type of text data, in any industry,” said Kastberg. Are you building AI apps? Join the MongoDB AI Innovators Program today! Successful participants gain access to free Atlas credits, technical enablement, and invaluable connections within the broader AI ecosystem. If your company is interested in being featured, we’d love to hear from you. Connect with us at ai_adopters@mongodb.com. Head over to our quick-start guide to get started with Atlas Vector Search today.
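The full-text searches Kastberg describes use Atlas Search's `$search` aggregation stage. As a sketch (the index name `legal_corpus` and the `title`/`body` fields are assumptions for illustration, not Qura's actual schema):

```python
def text_search_pipeline(terms, index="legal_corpus", path="body", limit=10):
    """Build an Atlas Search $search pipeline for full-text queries over a corpus."""
    return [
        {
            "$search": {
                "index": index,
                # The "text" operator matches analyzed terms in the given field(s).
                "text": {"query": terms, "path": path},
            }
        },
        {"$limit": limit},
        # Surface the relevance score alongside each hit.
        {"$project": {"title": 1, "score": {"$meta": "searchScore"}}},
    ]

pipeline = text_search_pipeline("force majeure precedent")
```

With pymongo this would run as `db.documents.aggregate(pipeline)` against a collection with a matching Atlas Search index; `$search` must be the first stage of the pipeline.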