The Learning Experience: Celebrating a Year of MongoDB Developer Days

One year ago today MongoDB held our first regional Developer Day event, a full-day experience designed to teach the fundamentals and advanced capabilities of MongoDB. Developer Day events have been held in person in over 35 cities, across 16 countries, and in seven languages. Initially created by a core team with a passion for enabling developers, a group of talented MongoDB Developer Advocates and Solution Architects have since scaled the program to directly engage and teach thousands of developers worldwide. As developer relations programs go, engaging builders through hands-on workshops is hardly a new approach. But the difference at MongoDB is how closely we collaborate cross-functionally with other teams, and how focused we are on providing authentically hands-on, engaging experiences for developers. From the start, the MongoDB Developer Day program was a collaborative effort to take a platform that is easy to get started with and to introduce more advanced capabilities in a compelling way. Based on the helpful and positive feedback we continue to get from participants, we know Developer Days help developers gain skills and a sense of accomplishment as they alternate between instruction and applying that learning through hands-on exercises. To share more about what has made Developer Days so successful, I asked members of that core team—Lead Developer Advocate Joel Lord , and Sr Developer Advocates Mira Vlaeva and Diego Freniche Brito —to reflect on key areas that went into creating this enduring experience. Our Developer Day class at MongoDB.local NYC in May, 2024 A sense of accomplishment “We all agreed that a Developer Day should focus on hands-on learning, where developers can experiment with MongoDB, potentially make mistakes, and take pride in building something on their own,” Mira said. “The goal was to build a fun learning experience rather than just sitting and listening to lectures all day.” The curriculum was designed to encourage developers to work together at certain points as they advance from data modeling and schema design concepts to implementing powerful Atlas Search and Vector Search capabilities. “One of my favorite moments of the day is when people start working together,” Joel said. “At the beginning of the day, our attendees can be a little hesitant, but they quickly begin to collaborate with each other, and it's wonderful to witness that happen.” Building great Developer Days together While our MongoDB Developer Relations team designed the agenda and course material, it was our partnerships with other teams and stakeholders that helped Developer Days take flight. There were many key stakeholders that shared a vision for enabling developers to realize more value with our platform. As Joel remembers, “We had to work and collaborate with a number of other teams, which was, at the time, new to us.” Among the key teams involved, Diego added, “Working with Field and Strategic Marketing teams has been a great experience. They help us so much with all the really important tasks… there's so much they've done that Developer Days wouldn't be a reality without them.” The program has expanded our collaboration with several other teams, including marketing, product, and sales, to ensure our courses remain up-to-date and we make the most of our time in each city by welcoming developers from key accounts. Continued success and improvements To ensure Developer Days were as impactful as possible, we initially ran the program as a pilot in seven cities. 
In addition to noting live observations and interactions, we used surveys to collect feedback and report on an NPS (net promoter score) to assess whether the event exceeded participant expectations. These initial events were spaced out enough that the team could implement improvements and try new approaches at subsequent Developer Days. “We had the opportunity to run the same labs multiple times, make small changes each time, and observe how people react to the different configurations,” said Mira, who continues to contribute improvements such as new interactive elements. As we continue to bring the Developer Day experience to new cities, we’re also taking Developer Days online. “There are so many reasons why people may not be able to attend our in-person events,” said Lauren Schaefer , who recently rejoined MongoDB Developer Relations to lead the program forward. “I look forward to working with my team to tackle the challenges of bringing our curriculum successfully online.” So a year later, I want to say thank you to everyone who has made Developer Days a success—from seven staff members who supported our first event in Chicago, to the roughly 100 talented people across MongoDB who are now part of the program. Even more importantly, I’m thankful for all of the participants (across 35 great cities and 16 countries!) that joined us for this full-day experience. As I love to say at the beginning of every Developer Day, we’re here to learn from each other. I hope you’ve learned as much from us as we have from you! To learn more about MongoDB’s global community of millions of developers—and to check out upcoming events like Developer Days—please visit our events page .

August 29, 2024

The Dual Journey: Healthcare Interoperability and Modernization

Interoperability in healthcare isn’t just a buzzword; it’s a fundamental necessity. It refers to the ability of IT systems to enable the timely and secure access, integration, and use of electronic health data. However, integrating data across different applications is a pressing challenge, with 48% of US hospitals reporting a one-sided sharing relationship in which they share patient data with other providers who do not, in turn, share patient data with the hospital. The ability to share electronic health data seamlessly across various healthcare systems can revolutionize patient care, enhance operational efficiency, and drive innovation. In this post, we’ll explore the challenges of healthcare data sharing, the role of interoperability, and how MongoDB can be a game-changer in this landscape. The challenge of data sharing in healthcare Today's consumers have high expectations for accessing information, and many now anticipate quick and continuous access to their health and care records. One of the biggest IT challenges faced by healthcare organizations is sharing data effectively and creating seamless data integrations to build patient-centric healthcare solutions. Healthcare data has to be shared in multiple ways: between internal applications, to ensure seamless data flow across various internal systems; between primary and secondary care, to coordinate care across healthcare providers; to patient portals and telemedicine, to enhance patient engagement and remote care; to payers, institutions, and patients themselves, to streamline interactions with insurance companies and regulatory bodies; and to R&D units, to accelerate medical research and pharmaceutical development. The complexity of healthcare data is staggering, and hospitals regularly need to integrate dozens of different applications—all of which means that there are significant barriers to healthcare data sharing and integration. A vision for patient-centric healthcare Imagine a world where patient data is shared in real time with all relevant parties—doctors, hospitals, labs, pharmacies, and insurance companies. This level of interoperability would streamline the flow of information, reduce errors, and improve patient outcomes. Achieving this, however, is no easy feat, as healthcare data is immensely complex, involving various types of data such as unstructured clinical notes, lab tests, medical images, medical devices, and even genomic data. Furthermore, the same types of data can mean different things depending on when and where they were collected. Achieving seamless data sharing also involves overcoming barriers between different healthcare providers and systems, all while adapting to evolving regulations and standards. Watch the "MongoDB and FHIR: Navigating Healthcare Data" session from MongoDB.local NYC on YouTube. The intersection of modernization and interoperability Modernization of healthcare IT systems and achieving interoperability are two sides of the same coin. Both require significant investment and a focus on transitioning from application-driven to data-driven architecture. By focusing first on data and then connecting applications with a developer data platform like MongoDB Atlas, healthcare organizations can avoid data silos and achieve vendor-neutral data ownership. 
As healthcare interoperability standards define a common language, organizations might question whether, instead of reinventing the wheel with their own data domains, they can use the interoperability journey (and its high investments) to modernize their applications. MongoDB’s document data model supports the JSON format, just like FHIR (Fast Healthcare Interoperability Resources) and other interoperability standards, making it a more efficient and flexible data platform for developing healthcare applications beyond the limitations of external APIs. FHIR for storing healthcare data? The most implemented standard worldwide, HL7 FHIR , treats each piece of data as a self-contained resource with external links, similar to web pages. HL7 adopted a pragmatic approach: there was no need to define a complete set of resources for all the clinical data, but they wanted to get the 80% that most electronic health records (EHR) share. For the 20% of non-standardized data, they created FHIR Extensions to extend every resource to specific needs. However, FHIR is not yet fully developed, with only 15 of the 158 resources it defines having reached the highest level of maturity. The constant changes can be as simple as a name change or can be so complex that data has to be rearranged. FHIR is designed for the exchange of data but can also be used for persistence. Figure 1: Using FHIR for persistence depending on the complexity of the use case For specific applications with no complex data, such as integrating data from wearables, you can leverage FHIR. However, building a primary operational repository for broader applications like a patient summary or even a healthcare information system presents a significant challenge. This is because the data model required goes beyond the capabilities of FHIR, and solving that through FHIR extensions is an inefficient approach. OpenEHR as an alternative approach In Catalonia, Spain—for about 8 million people and roughly 60 public hospitals—there are 29 different hospital EHR systems. Each hospital maintains a team of developers exclusively focused on building interoperability interfaces. Due to an increased demand for data sharing, the cost of maintaining this data will only grow. Rather than implementing interoperability interfaces, why not create new applications that are implicitly interoperable? This is what openEHR proposes, defining the clinical information model from the maximal perspective and developing applications that consume a subset of the clinical information system using an open architecture. However, while FHIR is very versatile—offering data models for administrative and operational data efficiently—openEHR focuses exclusively on clinical data. So while a combination of FHIR and openEHR can solve part of the problem, future healthcare applications need to integrate a wide variety of data, including medical images, genomics, proteomics, and data from complex medical devices—which could be complicated by the lack of a single standard. Overcoming this challenge with the document model and MongoDB Now, let’s discover the power of the document model to advance interoperability while modernizing systems. MongoDB Atlas features a flexible document data model, which provides a flexible way of storing and organizing healthcare data using JSON-like documents. 
With a flexible data schema, healthcare organizations can accommodate any data structure, format, or source into one platform, providing the seamless third-party integration capabilities necessary for interoperability. While different use cases will have different solutions, the flexibility of the document model means MongoDB is able to adapt to changes. Figure 2 below shows a database modeled in MongoDB, where each collection stores a FHIR resource type (e.g., patients, encounters, conditions). These documents mirror the FHIR objects, preserving their complex hierarchy. Let's imagine our application requires specific fields not supported by FHIR, and there is no need for a FHIR extension because they won’t be shared externally. We can add a metadata field containing all this information, as flexible as needed. It can be used to track the standard evolution of the resource, the version of the document itself, the tenant ID for multi-tenant applications, and more. Figure 2: Data modeled in MongoDB Another possibility is to add the searchable fields of the resource as key-value pairs so that you can retrieve data with a single index. We can maintain the indexes by automating this with the FHIR search parameters. In a single repository, we can combine FHIR data with custom application data. Additionally, data in other protocols can be integrated, providing unparalleled flexibility in accessing your data. This setup permits access through different endpoints within one repository. Using MQL (the MongoDB Query Language), you can build FHIR APIs, or use the MongoDB SQL interface to provide SQL querying capabilities that connect your preferred business intelligence tools. Figure 3: Unified and flexible data store and flexible data retrieval MongoDB’s developer data platform At the center of MongoDB’s developer data platform is MongoDB Atlas, the most advanced cloud database service on the market. It provides integrated full-text search capabilities, allowing applications to run complex queries without the need to maintain a separate search system. With generative AI, multidimensional vectors that represent data are becoming a necessity. MongoDB Atlas stores vectors alongside the operational data and provides vector search, which enables fast data retrieval. You can also store metadata with your vector embeddings, as shown in Figure 4. Figure 4: Vector embeddings These capabilities transform a single database into a unique, powerful, and easy-to-use interface capable of handling diverse use cases without the need for any single-purpose databases. Solving the dual challenge with MongoDB Achieving interoperability and modernization in healthcare IT is challenging but essential. MongoDB provides a powerful platform that meets organizations’ modern data management needs. By embracing MongoDB, healthcare organizations can unlock the full potential of their data, leading to improved patient outcomes and operational efficiency. Figure 5: Closing the gap between interoperability and modernization journey with MongoDB Refactoring applications to incorporate interoperability resources as part of documents—and extending them with all the requirements for your modern needs—will ensure organizations’ data layers remain robust and adaptable. By doing so, organizations can create a flexible architecture that can seamlessly integrate diverse data types and accommodate future advancements. 
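To make the layout described above (Figure 2) concrete, here is a minimal sketch in Python with PyMongo. The connection string, collection name, metadata fields, and "searchable" key-value array are illustrative assumptions, not part of any particular FHIR server implementation: a patients collection stores the FHIR Patient resource as-is, an application-specific metadata sub-document is added alongside it, and the FHIR search parameters are copied into a key-value array covered by a single compound index (the attribute pattern).

```python
from pymongo import MongoClient, ASCENDING

# Assumed connection string and namespace; adjust for your deployment.
client = MongoClient("mongodb+srv://<user>:<password>@cluster0.example.mongodb.net")
patients = client["healthcare"]["patients"]

# A FHIR Patient resource stored as-is, plus application-specific fields that do
# not need a FHIR extension because they are never shared externally.
patient = {
    # --- FHIR resource, unchanged ---
    "resourceType": "Patient",
    "id": "example-patient-001",
    "name": [{"family": "Garcia", "given": ["Ana"]}],
    "birthDate": "1984-07-12",
    "address": [{"city": "Barcelona", "country": "ES"}],
    # --- application metadata (not part of the FHIR specification) ---
    "metadata": {"fhirVersion": "R4", "docVersion": 3, "tenantId": "hospital-42"},
    # --- FHIR search parameters copied into key-value pairs ---
    "searchable": [
        {"k": "family", "v": "garcia"},
        {"k": "birthdate", "v": "1984-07-12"},
        {"k": "address-city", "v": "barcelona"},
    ],
}
patients.insert_one(patient)

# One compound index on the key-value array covers every search parameter.
patients.create_index([("searchable.k", ASCENDING), ("searchable.v", ASCENDING)])

# Example lookup backing a FHIR-style search such as GET /Patient?family=garcia
result = patients.find_one({"searchable": {"$elemMatch": {"k": "family", "v": "garcia"}}})
print(result["name"])
```

Queries like the last one are what an MQL-based FHIR search endpoint would translate to, while the same documents stay available to custom application queries and to the SQL interface for BI tools.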
This approach not only enhances data accessibility and simplifies data management but also supports compliance with evolving standards and regulations. Furthermore, it enables real-time data analytics and insights, fostering innovation and driving better decision-making. Ultimately, this strategy positions healthcare organizations to effectively manage and leverage their data, leading to improved patient outcomes and operational efficiencies. For more detailed information and resources on how MongoDB can transform organizations’ healthcare IT systems, we encourage you to apply for an exclusive innovation workshop with MongoDB's industry experts to explore bespoke modern app development and tailored solutions for your organization. Additionally, check out these resources: MongoDB and FHIR: Navigating Healthcare Data with MongoDB How Leading Industries are Transforming with AI and MongoDB Atlas The MongoDB Solutions Library is curated with tailored solutions to help developers kick-start their projects For developers: From FHIR Synthea data to MongoDB

August 28, 2024

CTF Life Leverages MongoDB Atlas to Deliver Customer-Centric Service

Hong Kong-based Chow Tai Fook Life Insurance Company Limited (CTF Life) is proud of its rich, nearly 40-year history of providing a wide range of insurance and financial planning services. The company provides life, health, accident, savings, and investment insurance to its customers, helping them and their loved ones navigate life’s journey with personalized planning solutions, lifelong protection, and diverse lifestyle experiences. A wholly-owned subsidiary of NWS Holdings Limited and a member of Chow Tai Fook Group, CTF Life consistently strengthens its collaboration with the diverse conglomerate of the Cheng family (Chow Tai Fook Group) and draws on the Group’s robust financial strength, strategic investments across the globe, and advanced customer-focused digital technology, with the aspiration of becoming a leading insurance company in the Greater Bay Area. To achieve this goal, CTF Life modernized its on-premises infrastructure to provide the speed and flexibility required to offer customers personalized experiences. To turn their vision into reality, CTF Life decided to adopt MongoDB Atlas. By modernizing their systems and processes with the world’s most versatile developer data platform, CTF Life knew they’d be able to meet customer expectations, offering improved customer service, faster response times, and more convenient access to their products and services. Data-driven customer service The insurance industry is undergoing a significant shift, from traditional data management to near-real-time, data-driven insights, driven by strong consumer demand and the urgent need for companies to process large amounts of data efficiently. As insurance companies strive to provide personalized and real-time products, the move towards sophisticated, real-time, data-driven customer service is inevitable. CTF Life is on a digital transformation journey to modernize its relational database management system (RDBMS) infrastructure and empower its agents, known as Life Planners, to provide enhanced customer experiences. The company faced obstacles in the form of legacy systems and siloed data. Life Planners were spending a lot of time looking up customer information from various systems and organizing it into useful customer insights. Not having a holistic view of customer data also made it challenging to recommend personalized products and services within CTF Life, the Group, and beyond. Reliance on legacy RDBMS systems presented a major challenge in CTF Life’s pursuit of leveraging real-time customer information to enhance customer experiences and operational efficiency. For their modernization efforts, CTF Life was looking for the following capabilities: a modernized application with agile development; no downtime for schema changes, new modules, or feature updates; a single way of centralizing and organizing data from a number of sources (backend, CRM, etc.) into a standardized format ready for a front-end mobile application; and a future-proof data platform with extensible analytics capabilities across CTF Life, their diverse conglomerate collaboration, and their strategic partners to support the company’s digital solutions. Embracing the operational data layer for enhanced experiences CTF Life knew they had to build a solution for the Life Planners to harness the wealth of useful information available to them, making it easier to engage and connect with customers. 
The first project identified was their clienteling system, which is designed to establish long-term relationships with customers based on data about their preferences, behaviors, and needs. To overcome their legacy systems and siloed data, CTF Life built their clienteling system on MongoDB Atlas . Atlas serves as the digital data store for Life Planners, creating a single view of the customer (SVOC) with a flexible document model that enables CTF Life to handle large volumes of customer data in real-time efficiently. By integrating their operational data into one platform with MongoDB Atlas on Microsoft Azure, CTF Life’s revamped clienteling system provides their Life Planners with a comprehensive view of customer profiles, which allows them to share targeted content with customers. Additionally, CTF Life is using Atlas Search to build relevance-based search capabilities directly into the application, making it faster and easier to search for customer data across the company’s system landscape. These benefits helped improve customer service with faster access to data with an SVOC so Life Planners can provide more accurate and timely information to their customers. Atlas Search is now the foundation of the clienteling system, which powers data analytics and machine learning capabilities to support various use cases. For example, the clienteling app's smart reminder feature recognizes key moments in a customer's life, like the impending arrival of a newborn child. Based on these types of insights, the app can help Life Planners make personalized recommendations to the customer about relevant services and products that may be of interest to them as new parents. Because of its work with MongoDB, CTF Life can now analyze customer profiles and use smart reminders to engage customers at the right time in the right context. This has made following up with customers and leads faster and easier. And, contacting prospects, scheduling appointments, setting reminders, sharing relevant content, running campaigns and promotions, recommending products and services, and tracking lead progress can all be performed in one system. Moreover, access to real-time data enables Life Planners to streamline their work and reduce manual processes. And data-driven insights empower Life Planners to make informed decisions quickly. They can analyze customer information, identify trends, and tailor their recommendations to meet individual needs more effectively. With MongoDB Atlas Search, Life Planners can use advanced search capabilities to identify opportunities to serve customers better. Continuing to create value beyond insurance CTF Life strives to provide its customers with value beyond insurance. Through a range of collaborations with Chow Tai Fook Group, and strategic partnerships with technology partners like MongoDB, CTF Life has created a customer-centric approach and continues to advance its digital transformation strategy to enhance a well-rounded experience for customers that goes beyond insurance with a sincere and deep understanding of their diverse needs in every chapter of their life journey. In the future, CTF Life will continue to build upon its strategic partnership with MongoDB and expand the use of its digital data store on MongoDB Atlas by creating new client servicing modules on the mobile app their Life Planners use. 
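As an illustration of the relevance-based lookup described above, the sketch below runs an Atlas Search query over a hypothetical single-view-of-the-customer collection. It assumes an Atlas Search index named "customer_search" has already been defined on the collection; the connection string, database, collection, and field names are ours, not CTF Life's.

```python
from pymongo import MongoClient

# Assumed deployment and namespace for a single-view-of-the-customer store.
client = MongoClient("mongodb+srv://<user>:<password>@cluster0.example.mongodb.net")
customers = client["clienteling"]["customers"]

def search_customers(term: str, limit: int = 10):
    """Relevance-ranked customer lookup using an assumed Atlas Search index."""
    pipeline = [
        {
            "$search": {
                "index": "customer_search",
                "text": {
                    "query": term,
                    "path": ["name", "email", "policies.productName"],
                    "fuzzy": {"maxEdits": 1},  # tolerate small typos in the search term
                },
            }
        },
        {"$limit": limit},
        {"$project": {"name": 1, "email": 1, "policies": 1,
                      "score": {"$meta": "searchScore"}}},
    ]
    return list(customers.aggregate(pipeline))

for doc in search_customers("ana garcia"):
    print(doc["name"], doc["score"])
```

A query like this is what lets a Life Planner-style app surface the most relevant customer records across fields in a single call, rather than stitching together exact-match lookups against several systems.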
CTF Life will also be expanding its search capabilities with Atlas Vector Search to accelerate their journey to building advanced search and generative AI applications for more automated servicing. Partnering with MongoDB helped us prioritize technology that accelerates our digital transformation. The integration between generative AI and MongoDB as a medium for information search can be leveraged to further support front-line Life Planners as well as mid/back-office operations. Derek Ip, Chief Digital and Technology Officer of CTF Life Learn how to tap into real-time data with MongoDB Atlas .

August 28, 2024

Revolutionizing Retail with RFID and MongoDB Atlas

In today's fast-paced retail environment, keeping pace with both customer expectations and competition, maintaining accurate, real-time inventory is more critical than ever. Radio Frequency Identification (RFID) technology has emerged as a transformative solution , enabling retailers to track inventory with unprecedented precision and efficiency. However, the potential of RFID can only be fully realized when paired with a robust, scalable data platform capable of handling large volumes of data and providing actionable insights. MongoDB Atlas, with its Device Sync capabilities, is an ideal solution for leveraging RFID data in retail. The role of RFID technology in retail RFID technology uses electromagnetic fields to automatically identify and track tags attached to objects. This technology offers significant advantages over traditional barcode systems, including the ability to read multiple tags simultaneously and from a distance, and provide real-time updates on inventory levels. For retailers, RFID technology translates into enhanced accuracy in stock management, reduced labor costs, and improved customer satisfaction through better product availability. However, the implementation of RFID technology generates vast amounts of data that need to be efficiently captured, processed, and analyzed. This is where MongoDB Atlas , a fully managed cloud database, and its Device Sync feature, come into play. Figure 1: E2E Supply Chain RFID Tracking Architecture Managing RFID data with MongoDB Atlas and Device Sync Retail enterprises face significant challenges when integrating RFID data into their existing systems to provide real-time visibility and actionable insights. While RFID technology offers substantial benefits, the sheer volume of data generated by RFID tags and readers can be overwhelming without an efficient, scalable database solution. Below are key challenges retailers encounter and how MongoDB Atlas, with Device Sync , can address these issues: Real-time data synchronization: Ensuring that RFID data from multiple locations is synchronized in real-time is critical for maintaining accurate inventory levels and providing timely insights to store associates and management. Data integration and flexibility: Retailers often have legacy systems that need to integrate seamlessly with new RFID data streams. A flexible database schema is required to accommodate different data types and structures. Data volume and velocity: RFID systems can generate millions of data points daily. Retailers need a database solution that can handle this high volume and velocity of data without compromising performance. Data security and compliance: Protecting sensitive inventory and customer data is paramount. Retailers must ensure that their database solution complies with industry standards and regulations. Research-driven solutions with MongoDB Atlas Recent studies have shown that retailers who integrate RFID technology with a robust database platform experience significant operational improvements. According to a report by GS1 US , retailers implementing RFID saw a 25% improvement in inventory accuracy and a 30% reduction in out-of-stock incidents. Additionally, a study by Auburn University RFID Lab found that RFID technology can increase inventory accuracy from an industry average of 65% to more than 95%. MongoDB Atlas enhances these benefits by offering a fully managed, cloud-based database solution that simplifies the process of capturing and analyzing RFID data. 
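To give a sense of the data-volume and velocity point above, here is a minimal PyMongo sketch that batches raw RFID tag reads into a collection. The reader IDs, field names, and collection layout are illustrative assumptions rather than a reference design.

```python
from datetime import datetime, timezone
from pymongo import MongoClient, InsertOne

# Assumed connection and namespace for raw tag-read events.
client = MongoClient("mongodb+srv://<user>:<password>@cluster0.example.mongodb.net")
tag_reads = client["rfid"]["tag_reads"]

def ingest_reads(reads):
    """Bulk-insert a batch of RFID tag reads.

    `reads` is an iterable of dicts such as:
    {"epc": "urn:epc:id:sgtin:0614141.107346.2018",
     "readerId": "store-042-dock-1", "rssi": -54}
    """
    ops = [
        InsertOne({
            "epc": r["epc"],            # the tag's Electronic Product Code
            "readerId": r["readerId"],  # which reader saw it (dock door, shelf, POS...)
            "rssi": r.get("rssi"),      # signal strength, useful for de-duplication
            "observedAt": datetime.now(timezone.utc),
        })
        for r in reads
    ]
    if ops:
        tag_reads.bulk_write(ops, ordered=False)  # unordered writes maximize throughput

ingest_reads([
    {"epc": "urn:epc:id:sgtin:0614141.107346.2018", "readerId": "store-042-dock-1", "rssi": -54},
    {"epc": "urn:epc:id:sgtin:0614141.107346.2019", "readerId": "store-042-dock-1", "rssi": -61},
])
```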
Key features of MongoDB Atlas that support RFID integration include: Scalability: MongoDB Atlas can handle the vast amounts of data generated by RFID systems, ensuring that retailers can scale their operations without worrying about database performance. Real-time data processing: With Device Sync, data from RFID readers can be synchronized in real-time, providing instant visibility into inventory levels across all locations. Flexibility: The flexible schema of MongoDB allows retailers to store various types of data, including complex inventory information and transactional data. Security: MongoDB Atlas offers robust security features, including end-to-end encryption, to protect sensitive inventory data. Use cases for MongoDB Atlas and RFID in retail The integration of MongoDB Atlas and RFID technology can revolutionize various aspects of retail operations. Here are some key use cases: Effective inventory management involves several key strategies to ensure seamless operations and customer satisfaction. Real-time inventory tracking plays a crucial role by automatically updating stock levels whenever products are moved, sold, or restocked, providing up-to-date and accurate information. This allows businesses to maintain an accurate view of their inventory at all times. Additionally, automated stock replenishment systems predict stock shortages and trigger reorder requests based on current inventory levels, helping to avoid stockouts and overstock situations. Proper stock management is another essential component. Implementing loss prevention measures helps to identify discrepancies between recorded and actual inventory levels, enabling the detection of theft or loss. Detailed stock insights , such as analyzing inventory turnover rates and product performance, further enhance inventory management by optimizing stock levels and guiding strategic decisions on product placement. In-store associates also benefit from real-time inventory data , which empowers them to provide better customer service. For example, associates can quickly check stock availability and assist customers in locating items. Moreover, efficient stocking is made possible as associates receive guidance on which products need replenishment, ensuring shelves are always stocked with high-demand items. For businesses engaged in omnichannel retailing, integrating online and in-store operations is critical. A seamless connection between these channels enables the efficient fulfillment of Buy Online, Pickup In-Store (BOPIS) orders by leveraging accurate inventory data across both platforms. Additionally, returns management is streamlined by updating inventory levels in real-time as returned items are processed and restocked, ensuring that stock information remains consistent and up-to-date across all channels. Below, you can see a simple representation of how to connect your hardware devices—in this case, a Zebra RFD8500—to interact with your MongoDB Atlas clusters, effectively acting as the perfect data capturing/retrieval tool to elevate the previously commented use cases. Figure 2: RFID Product Tracking Architecture Embracing MongoDB Atlas for RFID solutions in retail For IT decision-makers, adopting MongoDB Atlas and Device Sync can unlock the full potential of RFID technology in retail. By providing a scalable, flexible, and secure platform, MongoDB Atlas ensures that retailers can capture and analyze RFID data in real-time, driving operational efficiencies and enhancing customer satisfaction. 
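Building on the raw reads above, this sketch shows two of the use cases called out in this section (real-time inventory tracking and automated stock replenishment), again with hypothetical collection and field names and an assumed default reorder point.

```python
from pymongo import MongoClient

client = MongoClient("mongodb+srv://<user>:<password>@cluster0.example.mongodb.net")
inventory = client["rfid"]["inventory"]

def record_movement(sku: str, store_id: str, delta: int):
    """Adjust on-hand stock when tags for a SKU are read entering (+) or leaving (-) a zone."""
    inventory.update_one(
        {"sku": sku, "storeId": store_id},
        {
            "$inc": {"onHand": delta},
            "$setOnInsert": {"reorderPoint": 10},  # assumed default for first-seen SKUs
        },
        upsert=True,  # create the inventory record on first sighting
    )

def items_to_replenish(store_id: str):
    """Automated replenishment: find SKUs that have fallen below their reorder point."""
    return list(inventory.find(
        {"storeId": store_id, "$expr": {"$lt": ["$onHand", "$reorderPoint"]}},
        {"sku": 1, "onHand": 1, "reorderPoint": 1, "_id": 0},
    ))

record_movement("0614141.107346", "store-042", +24)  # a carton scanned at the dock door
record_movement("0614141.107346", "store-042", -1)   # one unit sold at the point of sale
print(items_to_replenish("store-042"))
```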
What’s more, by integrating MongoDB Atlas with RFID technology, retailers can achieve unprecedented levels of inventory accuracy, streamline their operations, and provide a seamless shopping experience for their customers. It's time for Retail IT to leverage the power of MongoDB Atlas to transform their retail operations and stay ahead in the competitive market. Adopting MongoDB Atlas and Device Sync can revolutionize your retail operations by harnessing the power of RFID technology. As a decision-maker, investing in this key platform will provide enterprises with the tools needed to enhance inventory management, optimize stock levels, and deliver exceptional customer service. Visit our solutions page to learn more about MongoDB for retail innovation. Find out more about connecting a Zebra Technologies 123RFID app with MongoDB Atlas by checking our dedicated GitHub repository .

August 27, 2024

Elevate Your Java Applications with MongoDB and Spring AI

MongoDB is excited to announce an integration with Spring AI, enhancing MongoDB Atlas Vector Search for Java developers. This collaboration brings Vector Search to Java applications, making it easier to build intelligent, high-performance AI applications. Why Spring AI? Spring AI is an AI library designed specifically for Java, applying the familiar principles of the Spring ecosystem to AI development. It enables developers to build, train, and deploy AI models efficiently within their Java applications. Spring AI addresses the gap left by other AI frameworks and integrations that focus on other programming languages, such as Python, providing a streamlined solution for Java developers. Spring has been a cornerstone for Java developers for decades, offering a consistent and reliable framework for building robust applications. The introduction of Spring AI continues this legacy, providing a straightforward path for Java developers to incorporate AI into their projects. With the MongoDB-Spring integration, developers can leverage their existing Spring knowledge to build next-generation AI applications without the friction associated with learning a new framework. Key features of Spring AI include: Familiarity: Leverage the design principles of the Spring ecosystem. Spring AI allows Java developers to use the same familiar tools and patterns they already know from other Spring projects, reducing the learning curve and allowing them to focus on building innovative AI applications. This means you can integrate AI capabilities—including Atlas Vector Search—without having to learn a new language or framework, making the transition smoother and more intuitive. Portability: Applications built with Spring AI can run anywhere the Spring framework runs. This ensures that AI applications are highly portable and can be deployed across various environments without modification, guaranteeing flexibility and consistency in deployment strategies. Modular design: Use Plain Old Java Objects (POJOs) as building blocks. Spring AI’s modular design promotes clean code architecture and maintainability. By using POJOs, developers can create modular, reusable components that simplify the development and maintenance of AI applications. This modularity also facilitates easier testing and debugging, leading to more robust applications that efficiently integrate with Atlas Vector Search. Efficiency: Streamline development with tools and features designed for AI applications in Java. Spring AI provides a range of tools that enhance development efficiency, including pre-built templates, configuration management, and integrated testing tools. These features reduce the time and effort required to develop AI applications, allowing developers to bring their ideas to market faster. These features streamline AI development by enhancing the integration and performance of Atlas Vector Search within Java applications, making it easier to build and scale AI-driven features. Enhancing AI development with Spring AI and Atlas Vector Search MongoDB Atlas Vector Search enhances AI application development by providing advanced search capabilities. The new Spring AI integration enables developers to manage and search vector data within AI models, enabling features like recommendation systems, natural language processing, and predictive analytics. Atlas Vector Search allows you to store, index, and search high-dimensional vectors, which are crucial for AI and machine learning models. 
This capability supports a range of AI features: Recommendation systems: Provide personalized recommendations based on user behavior and preferences. Natural language processing: Enhance text analysis and understanding for chatbots, sentiment analysis, and more. Predictive analytics: Improve forecasting and decision-making with advanced data models. What the integration means for Java developers Prior to MongoDB-Spring integration, Java developers did not have an easy way to integrate Spring into their AI applications using MongoDB Atlas Vector Search, which led to longer development times and suboptimal application performance. With this integration, the Java development landscape is transformed, allowing developers to build and deploy AI applications with greater efficiency. The integration simplifies the entire process, enabling developers to concentrate on creating innovative solutions rather than dealing with integration hurdles. This approach not only reduces development time but also accelerates time-to-market. Additionally, MongoDB offers robust support through comprehensive tutorials and a wealth of community-driven content. Whether you’re just beginning or looking to optimize existing applications, you’ll find the resources and guidance you need at every stage of your development journey. Get started! The MongoDB and Spring AI integration is designed to simplify the development of intelligent Java applications. By combining MongoDB's robust data platform with Spring AI's capabilities, you can create high-performance applications more efficiently. To start using MongoDB with Spring AI, explore our documentation , tutorial , and check out our GitHub repository to build the next generation of AI-driven applications today.
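Whatever the driver or framework, the feature the Spring AI integration exposes boils down to a vector index plus an Atlas Vector Search query. The sketch below shows that underlying query shape using the $vectorSearch aggregation stage (written with PyMongo purely for brevity; Spring AI issues the equivalent call for Java applications). The index name, embedding field, collection, and dimensionality are assumptions for illustration.

```python
from pymongo import MongoClient

client = MongoClient("mongodb+srv://<user>:<password>@cluster0.example.mongodb.net")
products = client["catalog"]["products"]

# Assumes a vector index named "vector_index" exists on the "embedding" field
# (e.g., 1,536 dimensions with cosine similarity), created via the Atlas UI or API.
def recommend(query_embedding: list, limit: int = 5):
    """Nearest-neighbour lookup behind features like recommendations or semantic search."""
    pipeline = [
        {
            "$vectorSearch": {
                "index": "vector_index",
                "path": "embedding",
                "queryVector": query_embedding,
                "numCandidates": 100,  # candidates considered before the final top-k
                "limit": limit,
            }
        },
        {"$project": {"name": 1, "score": {"$meta": "vectorSearchScore"}, "_id": 0}},
    ]
    return list(products.aggregate(pipeline))

# `query_embedding` would come from the same embedding model used to index the documents.
print(recommend([0.01] * 1536))
```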

August 26, 2024

Better Business Loans with MongoDB and Generative AI

Business loans are a cornerstone of banking operations, providing significant benefits to both financial institutions and broader economies. For example, in 2023 the value of commercial and industrial loans in the United States reached nearly $2.8 trillion . However, these loans can present unique challenges and risks that banks must navigate. Besides credit risk, where the borrower may default, banks also face business risk, in which economic downturns or sector-specific declines can impact borrowers' ability to repay loans. In this post, we dive into the potential of generative AI to generate detailed risk assessments for business loans, and how MongoDB’s multimodal features can be leveraged for comprehensive and multidimensional risk analyses. The critical business plan A business plan is essential for a business loan as it serves as a comprehensive roadmap detailing the borrower's plans, strategies, and financial projections. It helps lenders understand the business's goals, viability, and profitability, demonstrating how the loan will be used for growth and repayment. A detailed business plan includes market analysis, competitive positioning, operational plans, and financial forecasts which build a compelling case for the lender's investment and the business’s ability to manage risks effectively, increasing the likelihood of securing the loan. Reading through borrower credit information and detailed business plans (roughly 15-20 pages long ) poses significant challenges for loan officers due to time constraints, the material’s complexity, and the difficulty of extracting key metrics from detailed financial projections, market analyses, and risk factors. Navigating technical details and industry-specific jargon can also be challenging and require specialized knowledge. Identifying critical risk factors and mitigation strategies only adds further complexity along with ensuring accuracy and consistency among loan officers and approval committees. To overcome these challenges, gen AI can assist loan officers by efficiently analyzing business plans, extracting essential information, identifying key risks, and providing consistent interpretations, thereby facilitating informed decision-making. Assessing loans with gen AI Interactive risk analysis with gen AI-powered chatbots Gen AI can help analyze business plans when built on a flexible developer data platform like MongoDB Atlas . One approach is implementing a gen AI-powered chatbot that allows loan officers to "discuss" the business plan. The chatbot can analyze the input and provide insights on the various risks associated with lending to the borrower for the proposed business. MongoDB sits at the heart of many customer support applications due to its flexible data model that makes it easy to build a single, 360-degree view of data from a myriad of siloed backend source systems. Figure 1 below shows an example of how ChatGPT-4o responds when asked to assess the risk of a business loan. Although the input of the loan purpose and business description is simplistic, gen AI can offer a detailed analysis. Figure 1: Example of how ChatGPT-4o could respond when asked to assess the risk of a business loan Hallucinations or ignorance? By applying gen AI to risk assessments, lenders can explore additional risk factors that gen AI can evaluate. One factor could be the risk of natural disasters or broader climate risks. 
In Figure 2 below, we added flood risk specifically as a factor to the previous question to see what ChatGPT-4o comes back with. Figure 2: Example of how ChatGPT-4o responded to flood risk as a factor Based on the above, the model indicates a low risk of flooding. To validate this, we asked ChatGPT-4o the question differently, focusing on its knowledge of flood data. It suggested reviewing FEMA flood maps and local flood history, indicating it might not have the latest information. Figure 3: Asking location-specific flood questions In the query shown in Figure 3 above, ChatGPT gave the opposite answer and indicated there is “significant flooding,” providing references to flood evidence after performing an internet search across four sites, a search it did not perform previously. From this example, we can see that when ChatGPT does not have the relevant data, it starts to make false claims, which can be considered hallucinations. Initially, it indicated a low flood risk due to a lack of information. However, when specifically asked about flood risk in the second query, it suggested reviewing external sources like FEMA flood maps, recognizing its limitations and the need for external validation. Gen AI-powered chatbots can recognize and intelligently seek additional data sources to fill their knowledge gaps. However, a casual web search won’t provide the level of detail required. Retrieval-augmented generation-assisted risk analysis The promising example above demonstrates how gen AI can help loan officers analyze business loans. However, interacting with a gen AI chatbot relies on loan officers repeatedly prompting and augmenting the context with relevant information. This can be time-consuming and impractical, whether due to a lack of prompt engineering skills or a lack of the data needed. Below is a simplified solution showing how gen AI can be used to augment the risk analysis process and fill the knowledge gap of the LLM. This demo uses MongoDB as an operational data store, leveraging geospatial queries to find the floods within 5 km of the proposed business location. The prompting for this risk analysis highlights the flood risk assessment rather than the financial projections. A similar test was performed on Llama 3, hosted by our MAAP partner Fireworks.AI. It tested the model’s knowledge of flood data, showing a knowledge gap similar to ChatGPT-4o’s. Interestingly, rather than providing misleading answers, Llama 3 provided a “hallucinated list of flood data,” but highlighted that “this data is fictional and for demonstration purposes only. In reality, you would need to access reliable sources such as FEMA's flood data or other government agencies' reports to obtain accurate information.” Figure 4: LLM’s response with fictional flood locations This consistent demonstration of the LLMs’ knowledge gap in specialized areas reinforces the need to explore how RAG (retrieval-augmented generation) with a multimodal data platform can help. In this simplified demo, you select a business location, a business purpose, and a description of a business plan. To make inputs easier, an “Example” button has been added that leverages gen AI to generate a sample brief business description, avoiding the need to key in the description template from scratch. 
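A hedged sketch of the kind of baseline (no-RAG) check described above: it points the openai Python client at Fireworks AI's OpenAI-compatible endpoint and asks Llama 3 about local flood history. The endpoint URL and model identifier below are written from memory and should be verified against Fireworks AI's current documentation before use.

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.fireworks.ai/inference/v1",  # assumed OpenAI-compatible endpoint
    api_key="FIREWORKS_API_KEY",
)

response = client.chat.completions.create(
    model="accounts/fireworks/models/llama-v3-70b-instruct",  # assumed model identifier
    messages=[
        {"role": "system", "content": "You are a loan risk analyst."},
        {"role": "user", "content": (
            "List recent flood events within 5 km of latitude 29.76, longitude -95.37, "
            "and state where your data comes from."
        )},
    ],
    temperature=0.2,
)
print(response.choices[0].message.content)
```

Without retrieval, the answer depends entirely on what the model happened to see in training, which is exactly the knowledge gap the RAG approach below addresses.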
Figure 5: Choosing a location on the map and writing a brief plan description Upon submission, it will provide an analysis using RAG with the appropriate prompt engineering to provide a simplified analysis of the business with the consideration of the location and also the flood risk earlier downloaded from external flood data sources. Figure 6: Loan risk response using RAG In the Flood Risk Assessment section, gen AI-powered geospatial analytics enable loan officers to quickly understand historical flood occurrences and identify the data sources. You can also reveal all the sample flood locations within the vicinity of the business location selected by clicking on the “Pin” icon. The geolocation pins include the flood location and the blue circle indicates the 5km radius in which flood data is queried, using a simple geospatial command $geoNear . Figure 7: Flood locations displayed with pins The following diagram provides a logical architecture overview of the RAG data process implemented in this solution highlighting the different technologies used including MongoDB, Meta Llama 3, and Fireworks.AI. Figure 8: RAG data flow architecture diagram With MongoDB's multimodal capabilities, developers can enhance the RAG process by utilizing features such as network graphs, time series, and vector search. This enriches the context for the gen AI agent, enabling it to provide more comprehensive and multidimensional risk analysis through multimodal analytics. Building risk assessments with MongoDB When combined with RAG and a multimodal developer data platform like MongoDB Atlas , gen AI applications can provide more accurate and context-aware insights to reduce hallucination and offer profound insights to augment a complex business loan risk assessment process. Due to the iterative nature of the RAG process, the gen AI model will continually learn and improve from new data and feedback, leading to increasingly accurate risk assessments and minimizing hallucinations. A multimodal data platform would allow you to fully maximize the capabilities of the multimodal AI models. If you would like to discover how MongoDB can help you on this multimodal gen AI application journey, we encourage you to apply for an exclusive innovation workshop with MongoDB's industry experts to explore bespoke modern app development and tailored solutions to your organization. Additionally, you can enjoy these resources: Solution GitHub: Loan Risk Assessor How Leading Industries are Transforming with AI and MongoDB Atlas Accelerate Your AI Journey with MongoDB’s AI Applications Program The MongoDB Solutions Library is curated with tailored solutions to help developers kick-start their projects
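The retrieval half of the demo can be sketched as follows: a $geoNear stage pulls flood records within 5 km of the chosen business location, and the results are folded into the prompt as grounding context. The collection names, field names, and prompt wording are illustrative assumptions; the solution's actual code is in the linked GitHub repository.

```python
from pymongo import MongoClient

client = MongoClient("mongodb+srv://<user>:<password>@cluster0.example.mongodb.net")
floods = client["risk"]["flood_events"]  # assumes a 2dsphere index on "location"

def nearby_floods(lng: float, lat: float, radius_m: int = 5000):
    """Return historical flood records within `radius_m` of the business location."""
    pipeline = [
        {
            "$geoNear": {
                "near": {"type": "Point", "coordinates": [lng, lat]},
                "distanceField": "distanceMeters",
                "maxDistance": radius_m,
                "spherical": True,
            }
        },
        {"$project": {"_id": 0, "eventDate": 1, "severity": 1,
                      "source": 1, "distanceMeters": 1}},
    ]
    return list(floods.aggregate(pipeline))

def build_prompt(business_plan: str, lng: float, lat: float) -> str:
    """Augment the risk-assessment prompt with retrieved flood context (the RAG step)."""
    flood_context = "\n".join(
        f"- {f['eventDate']}: severity {f['severity']}, "
        f"{f['distanceMeters']:.0f} m away ({f['source']})"
        for f in nearby_floods(lng, lat)
    ) or "- No recorded flood events within 5 km."
    return (
        "Assess the risk of this business loan, paying particular attention to flood risk.\n"
        f"Historical flood events near the proposed location:\n{flood_context}\n\n"
        f"Business plan:\n{business_plan}"
    )

print(build_prompt("A riverside cafe serving breakfast and lunch.", -95.37, 29.76))
```

The assembled prompt is then passed to the LLM in place of the bare question, so the flood-risk portion of the answer is grounded in the data retrieved from the operational store rather than in the model's memory.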

August 22, 2024

MongoDB Atlas for Government Supports GCP Assured Workloads

We’re excited to announce that MongoDB Atlas for Government now supports the US regions of Google Cloud Assured Workloads, alongside existing support for AWS GovCloud and AWS US regions. This expansion offers greater flexibility and expanded support for public sector organizations and the independent software vendors (ISVs) that serve them as they modernize applications and migrate workloads to the cloud. Furthermore, MongoDB Atlas for Government is now available for purchase through the Google Cloud Marketplace . MongoDB Atlas for Government: Driving digital transformation in the public sector MongoDB Atlas for Government is an independent, dedicated version of MongoDB Atlas, designed specifically to meet the unique needs of the U.S. public sector and ISVs developing public sector solutions. This developer data platform provides the versatility and scalability required to modernize legacy applications and migrate workloads to the cloud, all within a secure, fully-managed, FedRAMP authorized environment. Refer to the FedRAMP Marketplace listing for additional information about Atlas for Government. By leveraging the full functionality of MongoDB's document database and application services, Atlas for Government supports a wide range of use cases within a unified developer data platform, including Internet of Things, AI/ML, analytics, mobile development, single view, transactional workloads, and more. Ensuring robust resilience and comprehensive disaster recovery, Atlas for Government maintains business continuity and minimizes downtime. With a ~99.995% uptime SLA , auto-scaling to handle data consumption fluctuations, and automated backup and recovery, organizations can have peace of mind that their data is always protected. Getting started with MongoDB Atlas for Government MongoDB Atlas for Government can be used to create database clusters deployed to a single region or spanning multiple US regions. Google Cloud Assured Workloads US regions are now supported in Atlas for Government projects tagged as “Gov regions only,” allowing for the use of both traditional Google Cloud regions as well as Assured Workloads US regions. To get started, create a project in Atlas for Government and make sure to select 'Designate as a Gov Cloud regions-only project' during the project creation process. After creating the project, you can set up a MongoDB cluster in the GCP regions. To do this, start the cluster creation process and select GCP as the Cloud Provider, as shown in the figure below. You'll then be prompted to choose one or more GCP regions for your cluster. You can find more details on supported cloud providers and regions in the Atlas for Government documentation . Creating multi-cloud clusters The introduction of support for Google Cloud Assured Workloads (US regions) makes MongoDB Atlas for Government the first fully managed multi-cloud data platform authorized at FedRAMP Moderate. This means that public sector organizations and ISVs can now deploy clusters across Google Cloud Assured Workloads US regions and AWS GovCloud regions, in addition to deploying database clusters across multiple US regions. Whether prioritizing performance, cost, or specific feature sets, Atlas for Government empowers teams to deploy application architectures that simultaneously take advantage of the best-of-class services from multiple cloud providers while meeting FedRAMP requirements. 
Multi-cloud support also provides additional resiliency and enhanced disaster recovery, safeguarding data and applications against potential service outages and failures with automatic failover. Ensuring robust data protection and seamless continuity MongoDB Atlas for Government now supports Google Cloud Assured Workloads US regions, expanding its multi-cloud capabilities alongside existing support for AWS GovCloud and AWS US regions. This enhancement provides public sector organizations and ISVs with the flexibility to modernize applications and migrate workloads in a secure, FedRAMP authorized environment. With robust resilience, comprehensive disaster recovery, and a ~99.995% uptime SLA, Atlas for Government ensures data protection and business continuity. By offering a unified developer data platform for a wide range of use cases, Atlas for Government empowers teams to leverage best-in-class cloud services while meeting stringent compliance requirements. How do I get started? Visit our product page to learn more about MongoDB Atlas for Government. Or, read the Atlas for Government documentation to learn how to get started today.

August 20, 2024

Find Hidden Insights in Vector Databases: Semantic Clustering

Vector databases, a powerful class of databases designed to optimize the storage, processing, and retrieval of large volumes of multi-dimensional data, have become increasingly instrumental to generative AI (gen AI) applications, with Forrester predicting a 200% increase in the adoption of vector databases in 2024. But their power extends far beyond these applications. Semantic vector clustering, a technique within vector databases, can unlock hidden knowledge within your organization’s data, democratizing insights across teams. Mining diverse data for hidden knowledge Imagine your organization’s data as a library of diverse knowledge—a treasure trove of information waiting to be unearthed. Traditionally, uncovering valuable insights from data often relied on asking the right questions, which can be a challenge for developers, data scientists, and business leaders alike. They might spend vast amounts of time sifting through limited, siloed datasets, potentially missing hidden gems buried within the organization's vast data troves. Simply put, without knowing the right questions to ask, these valuable insights often remain undiscovered, leading to missed opportunities or losses. Enter vector databases and semantic vector clustering. A vector database is designed to store and manage unstructured data efficiently. Within a vector database, semantic vector clustering is a technique for organizing information by grouping vectors with similar meaning together. Text analysis, sentiment analysis, knowledge classification, and uncovering semantic connections between data sets—these are just a few examples of how semantic vector clustering empowers organizations to vastly improve data mining. Semantic vector clustering offers a multifaceted approach to organizational improvement. By analyzing text data, it can illuminate customer and employee sentiments, behaviors, and preferences, informing strategic decisions, enhancing customer service, and optimizing employee satisfaction. Furthermore, it revolutionizes knowledge management by categorizing information into easily accessible clusters, thereby boosting collaboration and efficiency. Finally, by bridging data silos and uncovering hidden relationships, semantic vector clustering facilitates informed decision-making and breaks down organizational barriers. For example, a business can gain significant insights from its customer interaction data, which is routinely kept, classified, or summarized. Those data points (texts, numbers, images, videos, etc.) can be vectorized and semantic vector clustering applied to identify the most prominent customer patterns (the densest vector clusters) from those interactions, classifications, or summaries. From the identified patterns, the business can take further actions or make more informed decisions than it otherwise could. The power of semantic vector clustering So, how does semantic vector clustering achieve all this? Discover semantic structures: Clustering groups similar LLM-embedded vector sets together, allowing for fast retrieval of themes. Beyond clustering regular vectors (individual data points or concepts), clustering RAG vectors (summarizations of themes and concepts) can provide superior LLM contexts compared to basic semantic search. Reduce data complexity via clustering: Data points are grouped based on overall similarity, effectively reducing the complexity of the data. This reveals patterns and summarizes key features, making it easier to grasp the bigger picture. 
Imagine organizing the library by theme or genre, making it easier to navigate vast amounts of information. Semantic auto-aggregation: Here is the coolest part. We can classify groups of vectors into hierarchies by effectively semantically "auto-aggregating" them. This means that the data itself "figures out" these groups and "self-organizes." Think of it as automatically organizing sections of the library based on thematic connections, without a set of pre-built questions: a library with an efficient automated catalog system, allowing researchers to find what they need quickly and easily. This allows you to identify patterns within the vast, semantically diverse data inside your organization. Unlock hidden insights in your vector database The semantic clustering of vector embeddings is a powerful tool for going beyond the surface of your data and identifying meaning that would otherwise go undiscovered. By unlocking hidden relationships and patterns, you can extract valuable insights that drive better decision-making, enhance customer experiences, and improve overall business efficiency—all enabled through MongoDB’s secure, unified, and fully managed vector database capabilities. Head over to our quick-start guide to get started with Atlas Vector Search today. Add vector search to your arsenal for more accurate and cost-efficient RAG applications by enrolling in the MongoDB and DeepLearning.AI course "Prompt Compression and Query Optimization" for free today.
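As a concrete sketch of the clustering idea (not a MongoDB feature per se, but something you can run over embeddings already stored in Atlas), the code below pulls embedding vectors out of a collection, groups them with k-means, and reports the largest clusters, i.e. the "most prominent patterns" mentioned above. The collection layout, field names, and the choice of scikit-learn with k-means are assumptions for illustration.

```python
import numpy as np
from pymongo import MongoClient
from sklearn.cluster import KMeans

client = MongoClient("mongodb+srv://<user>:<password>@cluster0.example.mongodb.net")
interactions = client["support"]["customer_interactions"]

# Assumes each document already carries an LLM embedding of its text,
# e.g. {"summary": "...", "embedding": [0.12, -0.03, ...]}.
docs = list(interactions.find({}, {"summary": 1, "embedding": 1}).limit(10_000))
X = np.array([d["embedding"] for d in docs])

# Group semantically similar interactions; k is a tunable assumption.
k = 12
labels = KMeans(n_clusters=k, n_init=10, random_state=42).fit_predict(X)

# Rank clusters by size (a proxy for how prominent the pattern is) and show a few
# representative summaries from the three largest ones.
for cluster_id in np.argsort(np.bincount(labels))[::-1][:3]:
    members = [d["summary"] for d, label in zip(docs, labels) if label == cluster_id]
    print(f"Cluster {cluster_id}: {len(members)} interactions")
    for summary in members[:3]:
        print("  -", summary)
```

The representative summaries from each large cluster could then be fed back to an LLM to name the theme, which is the "auto-aggregation" step described above.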

August 19, 2024

Built With MongoDB: Atlas Helps Team-GPT Launch in Two Weeks

Team-GPT enables teams large and small to collaborate on AI projects. When OpenAI released GPT-4, it turned out to be a game-changer for the startup. Founded in 2023, the company has been helping people train machine learning (ML) models, in particular natural language processing (NLP) models. But when OpenAI launched GPT-4 in March 2023, the team was blown away by how much progress had been made on large language models (LLMs). So Team-GPT dropped everything they were doing and started experimenting with it. Many of those early ideas are still memorialized on a whiteboard in one of the office's meeting rooms: The birth of an idea. Like many startups, Team-GPT began with a brainstorm on a whiteboard. Evolving the application Of all the ideas they batted around, there was one issue in particular the team wanted to solve—the need for a shared workspace where they could experiment with LLMs together. What they found was that having to work with LLMs in the terminal was a major point of friction. Plus, there weren't any sharing abilities. So they set out to create a UI consisting of chat sharing, in-chat team collaboration, folders and subfolders, and a prompt library. The whole thing came together in an incredibly short period of time. This was due, in large part, to their initial choice of MongoDB Atlas, which allowed them to build with speed and scalability. "MongoDB made it possible for us to launch in just two weeks," said Team-GPT Founder and CTO, Ilko Kacharov. "With the MongoDB Atlas cloud platform, we were able to move rapidly, focusing our efforts on developing innovative product features rather than dealing with the complexities of infrastructure management." Before long, the team realized there was a lot more that could be built around LLMs than simply chat, and set out to add more advanced capabilities. Today, users can integrate any LLM of their choice and add custom instructions. The platform also supports multimodality like ChatGPT Vision and DALL-E. Users use any GPT model to turn chat responses into a standalone document that can then be edited. All these improvements are meant to unify teams' AI workflows in a single, AI-powered tool. A platform built for developers Diving deeper into more technical aspects of the solution, Team-GPT CEO Iliya Valchanov acknowledges the virtues of the document data model, which underpins the Atlas developer data platform. "We wanted the ability to quickly update and create new collections, add more data, and expand the existing database setup without major hurdles or time consumption," he said. "That's something that relational databases often struggle with." A developer data platform consists of integrated data infrastructure components and services for quick deployment. With transactional, analytical, search, and stream processing capabilities, it supports various use cases, reduces complexity, and accelerates development. Valchanov's team leverages a few key elements of the platform to address a range of application needs. "We benefited from Atlas Triggers , which allow automatic execution of specified database operations," he said. "This greatly simplified many of our routine tasks." It's not easy to build truly differentiated applications without a friction-free developer experience. Valchanov cites Atlas' user-friendly UI as a key advantage for a startup where time is of the essence. And he said that Atlas Charts has been instrumental for the team, who use it every day, even their less technical people. 
Of course, one of the biggest reasons why developers and tech leaders choose MongoDB, and why so many are moving away from relational databases, is its ability to scale, which Valchanov said is one of the most critical requirements for supporting the company's growth. "With MongoDB handling the scaling aspect, we were able to focus our attention entirely on building the best possible features for our customers."

Team-GPT deployment options

Accelerating AI transformation

Team-GPT is a collaborative platform that allows teams of up to 20,000 people to use AI in their work. It's designed to help teams learn, collaborate, and master AI in a shared workspace. The platform is used by over 2,000 high-performing businesses worldwide, including EY, Charles Schwab, Johns Hopkins University, Yale University, and Columbia University, all of which are also MongoDB customers. The company's goal is to empower every person who works on a computer to use AI in a productive and safe manner.

Valchanov fully appreciates the rapid change that accompanies a product's explosive growth. "We never imagined that we would eventually grow to provide our service to over 40,000 users," he said. "As a startup, our primary focus when selecting a data platform was flexibility and the speed of iteration. As we transitioned from a small-scale tool to a product used by tens of thousands, MongoDB's attributes like flexibility, agility, and scalability became necessary for us."

Another key enabler of Team-GPT's explosive growth has been the MongoDB for Startups program, which offers valuable resources such as free Atlas credits, technical guidance, co-marketing opportunities, and access to a network of partners. Valchanov makes no secret of how instrumental the program has been for his company's success. "The startup program made it free! It offered us enough credits to build out the MVP and cater to all our needs," he said. "Beyond financial aid, the program opened doors for us to learn and network. For instance, my co-founder, Yavor Belakov, and I participated in a MongoDB hackathon in MongoDB's office in San Francisco."

Team-GPT co-founders Yavor Belakov (l) and Iliya Valchanov (r) participated in a MongoDB hackathon at the San Francisco office.

Professional services engagements are an essential part of the program, especially for early-stage startups. "The program offered technical sessions and consultations with MongoDB staff, which enriched our knowledge and understanding, especially for Atlas Vector Search, aiding our growth as a startup," said Valchanov.

The roadmap ahead includes the release of Team-GPT 2.0, which will introduce a brand-new user interface and new, robust functionality. The company encourages anyone looking to learn more, or to join its efforts to ease the adoption of AI innovations, to reach out on LinkedIn.

Are you part of a startup and interested in joining the MongoDB for Startups program? Apply to the program now. For more startup content, check out our Built With MongoDB blog collection.

August 15, 2024

Atlas Search Nodes: Now with Multi-Region Availability

At MongoDB, we are continually refining our products to create the simplest and most seamless developer experience possible. This mantra also applies to how we think about search, from the beginning with Atlas Text Search to the announcement of the next paradigm with Atlas Vector Search. We have continued to expand this vision with the introduction of Search Nodes, initially launching on AWS and then expanding to both Google Cloud and Microsoft Azure. Today we're excited to take the next step in that journey with the announcement of multi-region availability on all three major cloud providers.

Search Nodes: Isolation and scale

As a quick refresher, Search Nodes provide dedicated infrastructure for Atlas Search and Vector Search workloads, giving you even greater control over them. They allow you to isolate and optimize compute resources so that search and database needs scale independently, delivering better performance at scale and higher availability. Since those announcements, we've been thrilled with the excitement around Search Nodes and the demand for better control, flexibility, and availability when scaling both Atlas Search and Vector Search workloads. Incorporating Search Nodes into your deployment delivers workload isolation and the ability to optimize resource usage. The evolution from the previous coupled architecture to dedicated nodes is shown below.

Figure 1: Improved workload sizing alignment and enhanced scalability with Search Nodes

Introducing Global Availability

Another tenet of our builder's journey is making sure the flexibility, scalability, and performance of Search Nodes are available to everyone, regardless of which cloud or region you're using. Today, we're excited to officially announce multi-region availability for Search Nodes, allowing anyone to better optimize resource usage regardless of location. With multi-region availability, you can take full advantage of global scalability because you're no longer limited to one geographic area. You also gain the peace of mind that comes with the redundancy needed to protect against unforeseen outages, whether they stem from technical issues or from natural disasters that cause data center downtime.

Figure 2: Multi-region availability on all three major cloud providers

Here is a quick video tutorial on how to enable Search Nodes and take advantage of multi-region availability.

Brief tutorial on how to enable multi-region Search Nodes

With today's announcement, we're excited to bring the power and control of dedicated Search Nodes to every cloud and region across the globe, and we look forward to the continued adoption and improved results that come from being able to run your search implementations wherever you need them. As always, reach out to us with any feedback; we'd love to hear what you think!
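One detail worth noting is that Search Nodes are transparent to application code: the same Atlas Search query runs unchanged whether or not dedicated nodes are provisioned, because Atlas routes search traffic to them automatically. As a minimal sketch (the database, collection, field, and index names below are illustrative assumptions, not part of this announcement), an aggregation using the $search stage in PyMongo looks like this:

    from pymongo import MongoClient

    client = MongoClient("mongodb+srv://<user>:<password>@cluster0.example.mongodb.net")
    collection = client["demo"]["movies"]   # hypothetical database and collection

    # A standard Atlas Search query using the $search aggregation stage.
    # If the cluster has dedicated Search Nodes, Atlas serves this stage
    # from those nodes; the query itself does not change.
    pipeline = [
        {
            "$search": {
                "index": "default",   # name of the Atlas Search index
                "text": {"query": "space adventure", "path": "plot"},
            }
        },
        {"$limit": 5},
        {"$project": {"title": 1, "plot": 1, "_id": 0}},
    ]

    for doc in collection.aggregate(pipeline):
        print(doc)

Provisioning, resizing, or adding multi-region Search Nodes happens at the deployment level (in the Atlas UI or its management tooling), so scaling search stays independent of the queries your application sends.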

August 14, 2024

MongoDB AI Course in Partnership with Andrew Ng and DeepLearning.AI

MongoDB is committed to empowering developers and meeting them where they are. With a thriving community of 7 million developers across 117 regions, MongoDB has become a cornerstone in the world of database technology. Building on this foundation, we're excited to announce our collaboration with AI pioneer Andrew Ng and DeepLearning.AI, a leading educational technology company specializing in AI and machine learning. Together, we've created a course that bridges the gap between database technology and modern AI applications, furthering our mission to support developers on their journey to build innovative solutions.

Introducing "Prompt Compression and Query Optimization"

MongoDB's latest course on DeepLearning.AI, Prompt Compression and Query Optimization, covers the most prominent form factor of modern AI applications today: retrieval-augmented generation (RAG). The course shows how MongoDB Atlas Vector Search enables developers to build sophisticated AI applications, using MongoDB as both an operational and a vector database. And it goes beyond an introduction to vector search: learners also see how to reduce the operational cost of running AI applications in production through a technique known as prompt compression.

"RAG, or retrieval augmented generation, has moved from being an interesting new idea a few months ago to becoming a mainstream large-scale application." — Andrew Ng, DeepLearning.AI

Key course highlights

RAG Applications: Learn to build and optimize the most prominent form of AI application using MongoDB Atlas and the MongoDB Query Language (MQL).

MongoDB Atlas Vector Search: Leverage the power of vector search for efficient information retrieval.

MongoDB Document Model: Explore MongoDB's flexible, JSON-like document model, which represents complex data structures and is ideal for storing and querying diverse AI-related data.

Prompt Compression: Apply techniques to reduce the operational costs of AI applications in production environments.

In this course, you'll learn techniques to improve your RAG applications' efficiency, search relevance, and cost-effectiveness. As AI applications become more sophisticated, efficient data retrieval and processing become crucial. The course bridges the gap between traditional database operations and modern vector search capabilities, enabling you to confidently build robust, scalable AI applications that can handle real-world challenges.

MongoDB's document model: The perfect fit for AI

A key aspect of this course is that it introduces learners to MongoDB's document model and its many benefits for AI applications:

Python-Compatible Structure: MongoDB's BSON format aligns seamlessly with Python dictionaries, enabling effortless data representation and manipulation.

Schema Flexibility: Adapt to varied data structures without predefined schemas, matching the dynamic nature of AI applications.

Nested Data Structures: Easily represent complex, hierarchical data often found in AI models and datasets.

Efficient Data Ingestion: Directly ingest data without complex transformations, speeding up the data preparation process.

Leveraging the combined insights of MongoDB and DeepLearning.AI, the course offers a blend of practical database knowledge and advanced AI concepts.
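To make the document model and Vector Search points above concrete, here is a minimal sketch in PyMongo. The collection name, index name, field names, and toy embedding vectors are illustrative assumptions rather than material from the course: a Python dictionary with nested fields is stored as a document alongside its embedding, then retrieved with the $vectorSearch aggregation stage.

    from pymongo import MongoClient

    client = MongoClient("mongodb+srv://<user>:<password>@cluster0.example.mongodb.net")
    docs = client["rag_demo"]["chunks"]   # hypothetical database and collection

    # A Python dict with nested structure maps directly to a BSON document;
    # the embedding is just another field (a short toy vector here).
    docs.insert_one({
        "text": "MongoDB stores data as flexible, JSON-like documents.",
        "metadata": {"source": "docs", "section": "intro"},
        "embedding": [0.12, -0.03, 0.44, 0.07],
    })

    # Retrieval for RAG: an approximate nearest-neighbor query against a
    # pre-created Atlas Vector Search index (named "vector_index" here).
    query_vector = [0.10, -0.01, 0.40, 0.05]   # would come from an embedding model
    pipeline = [
        {
            "$vectorSearch": {
                "index": "vector_index",
                "path": "embedding",
                "queryVector": query_vector,
                "numCandidates": 100,
                "limit": 3,
            }
        },
        {"$project": {"text": 1, "score": {"$meta": "vectorSearchScore"}, "_id": 0}},
    ]

    for hit in docs.aggregate(pipeline):
        print(hit)

The retrieved text is what gets assembled into the LLM prompt, which is where the course's prompt compression techniques come in to keep token costs down.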
Who should enroll?

This course is ideal for developers who:

Are familiar with vector search concepts

Are building RAG applications and agentic systems

Have a basic understanding of Python and MongoDB and are curious about AI

Want to optimize their RAG applications for better performance and cost-efficiency

This course offers an opportunity to master practical techniques in AI application development. You'll gain the skills to build more efficient, powerful, and cost-effective RAG applications, from advanced query optimization to prompt compression. With hands-on code, detailed walkthroughs, and real-world applications, you'll be equipped to tackle complex AI challenges using MongoDB's robust features.

Take advantage of this chance to stay ahead in the rapidly evolving field of AI. Whether you're a seasoned developer or just starting your AI journey, this course will provide invaluable insights and practical skills to enhance your capabilities. Improve your AI application development skills, learn to build efficient RAG applications using vector search and prompt compression, and enroll now to enhance your developer toolkit.

August 8, 2024

Building Gen AI with MongoDB & AI Partners | July 2024

My colleague Richmond Alake recently published an article about the evolution of the AI stack that breaks down the "comprehensive collection of integrated tools, solutions, and components designed to streamline the development and management of AI applications." It's a good read, and Richmond, an AI/ML expert and developer advocate, explains clearly how the modern AI stack evolved from a set of disparate tools to the (beautifully) interdependent ecosystem on which AI development relies today.

"The modern AI stack represents an evolution from the fragmented tooling landscape of traditional machine learning to a more cohesive and specialized ecosystem optimized for the era of LLMs and gen AI," Richmond writes. In other words, this cohesive ecosystem is aimed at ensuring end-to-end interoperability and seamless developer experiences, both of which are of utmost importance for AI innovation (and software innovation overall).

Empowering developer innovation is exactly what MongoDB is all about, from streamlining how developers build modern applications, to the blog post you're reading now, to the news that the MongoDB AI Applications Program (MAAP) is now generally available. The MAAP ecosystem in particular brings together leaders from every part of the AI stack who will provide customers with service and support, and who will work with them to ensure smooth integrations, all with the ultimate aim of helping them build gen AI applications with confidence. As the saying goes, it takes a village.

Welcoming new AI partners

Because the AI ecosystem is constantly evolving, we're always working to ensure that customers can seamlessly integrate with the latest cohort of industry-leading companies. In July we welcomed nine new AI partners that offer product integrations with MongoDB. Read on to learn more about each new partner!

Enkrypt AI

Enkrypt AI secures enterprises against generative AI risks with a comprehensive security platform that detects threats, removes vulnerabilities, and monitors performance for continuous insights. The solution enables organizations to accelerate AI adoption while managing risk and minimizing brand damage.

Sahil Agarwal, CEO of Enkrypt AI, said, "We are thrilled to announce our strategic partnership with MongoDB, to help companies secure their RAG workflows for faster production deployment. Together, Enkrypt AI and MongoDB are dedicated to delivering unparalleled safety and performance, ensuring that companies can leverage AI technologies with confidence and improved trust."

FriendliAI

FriendliAI's mission is to empower organizations to harness the full potential of their generative AI models with ease and cost efficiency. By eliminating the complexities of generative AI serving, FriendliAI aims to help more companies achieve innovation with generative AI.

"We're excited to partner with MongoDB to empower companies in testing and optimizing their RAG features for faster production deployment," said Byung-Gon Chun, CEO and co-founder of FriendliAI. "MongoDB simplifies the launch of a scalable vector database with operational data. Our collaboration streamlines the entire RAG development lifecycle, accelerating time to market and enabling companies to deliver real value to their customers more swiftly."

HoneyHive

HoneyHive helps organizations continuously debug, evaluate, and monitor AI applications, and ship new AI features faster and with confidence.
"We’re thrilled to announce our partnership with MongoDB, which addresses a critical challenge in GenAI deployment—the gap between prototyping and production-ready RAG systems,” said Mohak Sharma, CEO of HoneyHive. “By integrating HoneyHive's evaluation and monitoring capabilities with MongoDB's robust vector database, we're enabling developers to build, test, and deploy RAG applications with greater confidence. This collaboration provides the necessary tools for continuous quality assurance, from development through to production. For companies aiming to leverage gen AI responsibly and at scale, our combined solution offers a pragmatic path to faster, more reliable deployment." Iguazio The Iguazio AI platform operationalizes and de-risks ML & gen AI applications at scale so organizations can implement AI effectively and responsibly in live business environments. “We're delighted to expand our partnership with MongoDB into the gen AI domain, jointly helping enterprises build, deploy and manage gen AI applications in live business environments with our gen AI Factory,” said Asaf Somekh, co-founder and CEO of Iguazio (acquired by McKinsey). “Together, we mitigate the challenges of scaling gen AI and minimizing risk with built-in guardrails. Our seamlessly integrated technologies enable enterprises to realize the potential of gen AI and turn their AI strategy into real business impact." Netlify Netlify is the essential platform for the delivery of exceptional and dynamic web experiences, without limitations. The Netlify Composable Web Platform simplifies content orchestration, streamlines and unifies developer workflow, and enables website speed and agility for enterprise teams. "Netlify is excited to join forces with MongoDB to help companies test and optimize their RAG features for faster production deployment,” said Dana Lawson, Chief Technical Officer at Netlify. “MongoDB has made it easy to launch a scalable vector database with operational data, while Netlify enhances the deployment process and speed to production. Our collaboration streamlines the development lifecycle of RAG applications, decreasing time to market and helping companies deliver real value to customers faster." Render Render helps software teams ship products fast and at any scale. The company hosts applications for customers that range from solopreneurs, small agencies, and early stage startups, to mature, scaling businesses with services deployed around the world, all with a relentless commitment to reliability and uptime. Jess Lin, Developer Advocate at Render, said, “We’re thrilled to join forces with MongoDB to help companies effortlessly deploy and scale their applications—from their first user to their billionth. Render and MongoDB Atlas both empower engineers to focus on developing their products, not their infrastructure. Together, we're streamlining how engineers build full-stack apps, which notably include new AI applications that use RAG.” Superlinked Superlinked is a compute framework that helps MongoDB Atlas Vector Search work at the level of documents, rather than individual properties, enabling MongoDB customers to build high-quality RAG, Search, and Recommender systems with ease. “We're thrilled to join forces with MongoDB to help companies build vector search solutions for complex datasets,” said Daniel Svonava, CEO of Superlinked. “MongoDB makes it simple to manage operational data and a scalable vector index in one place. 
Our collaboration brings the operational data into the vector embeddings themselves, making the joint system able to answer multi-faceted queries like “largest clients with exposure to manufacturing risk” and operate the full vector search development cycle, speeding up time to market and helping companies get real value to customers faster." Twelve Labs Twelve Labs builds AI that perceives the world the way humans do. The company models the world by shipping next-generation multimodal foundation models that push the boundaries in video understanding. "We are excited to partner with MongoDB to enable developers and enterprises to build advanced multimodal video understanding applications,” said Jae Lee, CEO of Twelve Labs. “Developers can store Twelve Labs' state-of-the-art video embeddings in MongoDB Atlas Vector Search for efficient semantic video retrieval—which enables video recommendations, data curation, RAG workflows, and more. Our collaboration supports native video processing and ensures high-performance & low latency for large-scale video datasets." Upstage Upstage specializes in delivering above-human-grade performance AI solutions for enterprises, focusing on superior usability, customizability, and data privacy. “We are thrilled to partner with MongoDB to provide our enterprise customers with a powerful full-stack LLM solution featuring RAG capabilities,” said Sung Kim, CEO and co-founder of Upstage. “By combining Upstage AI's Document AI, Solar LLM, and embedding models with the robust vector database MongoDB Atlas, developers can create a powerful end-to-end RAG application that's grounded with the enterprise's unstructured data. This application achieves a fast time to value with productivity gains while minimizing the risk of hallucination.” But wait, there's more! To learn more about building AI-powered apps with MongoDB, check out our AI Resources Hub , and stop by our Partner Ecosystem Catalog to read about our integrations with MongoDB’s ever-evolving AI partner ecosystem.

August 7, 2024