

Welcome to the (Tech) Olympics!

Welcome to the Tech Olympics, where code meets competition! With the 2024 Summer Olympics starting today, we thought it'd be fun to imagine developers as athletes, showcasing their skills in a series of thrilling events. From relay races to coding challenges, the Tech Olympics would bring together the brightest minds in tech for a competition like no other. Whether you're a coding wizard, a bug-squashing maestro, or an AI aficionado, there would be something to test your limits and celebrate your talents.

Opening ceremony

The opening ceremony is one of the most iconic aspects of the Olympics. From the lighting of the torch to performances by local artists, the opening ceremony encapsulates the spectacle of the games, and it's a necessity for the Tech Olympics. The Tech Olympics opening ceremony would kick off with a grand procession of the teams involved, adorned in attire representing their areas of expertise. Next, there'd be a performance by artists and developers using augmented and virtual reality to blend art with cutting-edge technology. Finally, there would be the lighting of the torch, but instead of the flame being run across the country, an application would be written and passed between developers around the world that, when run, would light the torch and start the games.

Now that we've kicked off the Tech Olympics, let's consider what its events might look like.

Code sprint relay

The "code sprint relay" would be a collaborative coding event where teams of developers tackle a series of coding challenges in relay format. The twist would be that each member could only code for a set period (say, 5-10 minutes) before handing the code off to the next person. This setup requires clear communication and strategic planning, as each coder must quickly understand and build upon their predecessor's work. Code sprint relay challenges would range from algorithm problems to debugging tasks, demanding varied skills and swift adaptability.
This event would be fast-paced and dynamic, with a lively atmosphere filled with the buzz of coding and quick exchanges of ideas. Success would be measured not only by the completion of challenges but also by the efficiency and quality of the code, making this event a test of teamwork and technical skill under pressure.

Security capture the flag

Capture the flag might seem more like a kids' game than an Olympic event, but trust us, there'd be nothing childish about this event. The "security capture the flag" event would be an exciting cybersecurity competition in which participants would need to solve security-related challenges to capture hidden "flags." These challenges would range from web application exploits and reverse engineering to cryptographic puzzles and network forensics. Working in teams, participants would race against the clock to uncover vulnerabilities, exploit them, and find the embedded flags within a controlled, simulated environment. At the end, a debriefing session would highlight the most innovative solutions and techniques used. Success would be measured by the number of flags captured and the ingenuity of the approaches, showcasing participants' technical skills and strategic thinking under pressure.

Bug hunt

Have you ever built out your code and then, upon running it, realized that you made a mistake? If you have, you'll understand just how intense this next event could be! The "bug hunt challenge" would be a fast-paced competition in which participants are tasked with finding and fixing bugs within a complex codebase. Each individual would be given the same software project with numerous hidden bugs, ranging from simple syntax errors to intricate logical flaws. Participants must use their debugging skills and tools to identify and resolve as many issues as possible within a set time limit. The event would be marked by intense focus and strategic problem-solving as competitors meticulously comb through the code.
An automated system would verify the fixes instantly, ensuring accuracy and efficiency. Success would be measured by the quantity and severity of bugs resolved, along with the quality of the fixes, making this event a test of attention to detail and technical proficiency.

AI arena

We'd be remiss not to include an AI event! The "AI arena" event would be a competitive showcase where participants create machine learning models using a provided dataset to solve a specified problem. Teams would have several hours to analyze the data, create features, and train their models. The objective would be to develop the model with the highest accuracy and performance, balancing technical innovation with practical application. In the end, teams would present their models to judges, explaining their methodologies and the challenges they faced. Judging criteria would include model accuracy, creativity, and clarity of the presentation, making this event a comprehensive test of technical and communication skills.

Location

Finally, you can't have an Olympics without a city to host it. There are plenty of tech hubs to choose from—San Francisco, London, Beijing—but we thought it'd be more fun to pick a growing tech hub like Ha Noi, Vietnam, as our location. Vietnam had the highest digital economy growth in Southeast Asia in 2022, putting it on the path to be named alongside other "tech giant" cities. Also, Vietnamese food is excellent! During the games, local startups and tech companies would showcase their work on the world stage, and visiting developers would see the innovations that Vietnamese companies are working on. Sadly, there won't be an actual Tech Olympics this year, but maybe in the future there will be: an event that brings the world's best developers together to showcase their skills, foster friendly competition, and show the world just how amazing developers are.
If you have ideas for other events you'd want to see at a Tech Olympics, connect with us on X (Twitter) and let us know. Interested in learning or connecting more with MongoDB? Join our MongoDB Community to meet other community members, hear about inspiring topics, and receive the latest MongoDB news and events.

July 26, 2024

Building Gen AI Applications Using Iguazio and MongoDB

AI can lead to major enterprise advancements and productivity gains. By offering new capabilities, it opens up opportunities for enhancing customer engagement, content creation, process automation, and more. According to McKinsey & Company, generative AI has the potential to deliver an additional $200-340B in value for the banking industry. One popular use case is customer service, where gen AI chatbots have quickly transformed the way customers interact with organizations. They handle customer inquiries and provide personalized recommendations while empathizing with customers and offering nuanced support tailored to individual needs. Another, less obvious, use case is fraud detection and prevention. AI offers a transformative approach by interpreting regulations, supporting data cleansing, and enhancing the efficacy of surveillance systems. These systems can analyze transactions in real time and flag suspicious activities more accurately, which helps institutions prevent monetary losses. In this post, we introduce the joint MongoDB and Iguazio gen AI solution, which allows for the development and deployment of resilient and scalable gen AI applications. Before diving into how it works and its value for you, let's first discuss the challenges enterprises face when operationalizing gen AI applications.

Challenges to operationalizing gen AI

Building an AI application starts with a proof of concept. However, enterprises need to successfully operationalize and deploy models in production to derive business value and ensure the solution is resilient. Doing so comes with its own set of challenges, such as:

Engineering challenges - Deploying gen AI applications requires substantial engineering effort from enterprises. They need to maintain technological consistency throughout the operational pipeline, set up sufficient infrastructure resources, and ensure the availability of a team equipped with a comprehensive ML and data skillset.
Currently, AI development and deployment processes are slow, time-consuming, and fraught with friction.

LLM risks - When deploying LLMs, enterprises need to reduce privacy risks and comply with ethical AI standards. This includes preventing hallucinations, ensuring unbiased outputs, filtering out offensive content, protecting intellectual property, and aligning with regulatory standards.

Glue logic and standalone solutions - The AI landscape is vibrant, and new solutions are frequently being developed. Integrating these solutions on your own can create overhead for ops and data professionals, resulting in duplicated effort, brittle architectures, time-consuming processes, and a lack of consistency.

Iguazio and MongoDB together: High-performing and simplified gen AI operationalization

The joint Iguazio and MongoDB solution leverages the innovation of these two leading platforms. The integrated solution allows customers to streamline data processing and storage, ensuring gen AI apps reach production while eliminating risks, improving performance, and enhancing governance.

MongoDB for end-to-end AI data management

MongoDB Atlas, an integrated suite of data services centered around a multi-cloud NoSQL database, enables developers to unify operational (structured and unstructured data), analytical, and AI data services into a single platform to streamline building AI-enriched applications. MongoDB's flexible data model enables easy integration with different AI/ML platforms, allowing organizations to adapt to changes in the AI landscape without extensive infrastructure modifications. MongoDB meets the requirements of a modern AI and vector data store:

Operational and unified: MongoDB's ability to serve as the operational data store (ODS) enables financial institutions to efficiently handle large volumes of real-time operational data and unifies AI/vector data, ensuring AI/ML models use the most accurate information.
It also enables organizations to meet compliance and regulatory requirements (e.g., 3DS2, ISO 20022, TCFD) through the timely processing of large data volumes.

Multi-model: Alongside structured data, there's a growing need for semi-structured and unstructured data in gen AI applications. MongoDB's JSON-based multimodal document model allows you to handle and process diverse data types, including documents, network/knowledge graphs, geospatial data, and time series data. Atlas Vector Search lets you search unstructured data. You can create vector embeddings with ML models and store and index them in Atlas for retrieval-augmented generation (RAG), semantic search, recommendation engines, dynamic personalization, and other use cases.

Flexible: MongoDB's flexible schema design enables development teams to make application adjustments to meet changing data requirements and redeploy application changes in an agile manner.

Vector store: Alongside the operational data store, MongoDB serves as a vector store with vector indexing and search capabilities for performing semantic analysis. A RAG architecture, used together with the multimodal operational data typically required by AI applications, helps improve gen AI experiences with greater accuracy and mitigates hallucination risks.

Deployment flexibility: MongoDB can be deployed self-managed on-premises, in the cloud, or in a SaaS environment, or across a hybrid cloud environment for institutions not ready to be entirely on the public cloud.

Iguazio's AI platform

Iguazio (acquired by McKinsey) is an AI platform designed to streamline the development of ML and gen AI applications in production at scale. Iguazio's gen AI-ready architecture includes capabilities for data management, model development, application deployment, and LiveOps.
The platform—now part of QuantumBlack Horizon, McKinsey's suite of AI development tools—addresses enterprises' two biggest challenges when advancing from gen AI proofs of concept to live implementations within business environments:

Scalability: Ensures uninterrupted service regardless of workload demands, scaling gen AI applications when required.

Governance: Gen AI guardrails mitigate risk by directing essential monitoring, data privacy, and compliance activities.

By automating and orchestrating AI, Iguazio accelerates time-to-market, lowers operating costs, enables enterprise-grade governance, and enhances business profitability. Iguazio's platform includes LLM customization capabilities, GPU provisioning to improve utilization and reduce cost, and hybrid deployment options (including multi-cloud or on-premises). This positions Iguazio to uniquely answer enterprise needs, even in highly regulated environments, in either a self-serve or managed-services model (through QuantumBlack, McKinsey's AI arm).

Iguazio's AI platform provides:

- Structured and unstructured data pipelines for processing, versioning, and loading documents.
- Automated flows for data prep, tuning, validating, and optimizing LLMs on specific data efficiently, using elastic resources (CPUs, GPUs, etc.).
- Rapid deployment of scalable real-time serving and application pipelines that use LLMs (locally hosted or external), along with the required data integration and business logic.
- Built-in monitoring for LLM data, training, models, and resources, with automated model re-tuning and RLHF.
- Ready-made gen AI application recipes and components.
- An open solution with support for various frameworks and LLMs, and flexible deployment options (any cloud, on-prem).
- Built-in guardrails to eliminate risks and improve accuracy and control.

Examples: Building with Iguazio and MongoDB

#1 Building a smart customer care agent

The joint solution can be used to create smart customer care agents.
The diagram below illustrates a production-ready gen AI agent application with its four main elements:

- A data pipeline for processing the raw data (eliminating risks, improving quality, encoding, etc.).
- Application pipelines for processing incoming requests (enriched with data from MongoDB's multi-model store), running the agent logic, and applying various guardrails and monitoring tasks.
- Development and CI/CD pipelines for fine-tuning and validating models, testing the application to detect accuracy risks, and automatically deploying the application.
- A monitoring system that collects application and data telemetry to identify resource usage, application performance, risks, etc. The monitoring data can be used to further improve application performance through an RLHF (reinforcement learning from human feedback) integration.

#2 Building a hyper-personalized banking agent

In this example, accompanied by a demo video, we show a banking agent based on a modular RAG architecture that helps customers choose the right credit card. The agent has access to a MongoDB Atlas data platform with a list of credit cards and a large array of customer details. When a customer chats with the agent, it chooses the best credit card for them, based on the data and additional personal customer information, and can converse with them in an appropriate tone. The bank can further hyper-personalize the chat to make it more appealing to the client and improve the odds of conversion, or add guardrails to minimize AI hallucinations and improve interaction accuracy.

Example customer #1: Olivia

Olivia is a young client requesting a credit card. The agent looks at her credit card history and annual income and recommends a card with low fees. The tone of the conversation is casual. When Olivia asks for more information, the agent accesses the card data while retaining the same youthful and fun tone.
Example customer #2: Miss Jessope

The second example involves an older woman whom the agent calls "Ms. Jessope." When she asks for a new card, the agent accesses her credit card history to choose the best card based on that history. The conversation takes place in a respectful tone. When she requests more information, the response is more informative and detailed, and the language remains respectful.

How does this work under the hood?

As you can see from the figure below, the tool has access to customer profile data in the MongoDB Atlas collection bfsi.user_data and is able to hyper-personalize its responses and recommendations based on various aspects of the customer profile. A RAG process is implemented using the Iguazio AI Platform with the MongoDB Atlas data platform. Atlas Vector Search capabilities are used to find the relevant operational data stored in MongoDB (card name, annual fees, client occupation, interest rates, and more) to augment the contextual data during the interaction itself and personalize it. The virtual agent is also able to talk to another agent tool that has a view of the credit card data in bfsi.card_info (such as card name, annual and joining fees, and card perks such as cashback) to pick a credit card that would best suit the customer's needs. To ensure the client gets the best choice of card, a guardrail is added as a built-in component of the agent tool that filters the candidate cards according to the data gathered by the agent. In addition, another set of guardrails validates that the card offered suits the customer by comparing it with the optimal cards recommended for the customer's age range. This whole process is straightforward to set up and configure using the Iguazio AI Platform, with seamless integration to MongoDB. The user only needs to create the agent workflow and connect it to MongoDB Atlas, and everything works out of the box.
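To make the retrieval-plus-guardrail flow concrete, here is a minimal Python sketch. It is illustrative only: the Atlas Search index name, the embedding field, the card names, and the age-band table are all invented for the example (not the real bfsi schema), and the aggregation pipeline would run against a live Atlas cluster rather than locally.

```python
# Sketch: retrieve candidate cards with an Atlas $vectorSearch stage, then
# apply the age-range guardrail as a post-filter. Names are hypothetical.

def card_search_pipeline(query_vector, limit=5):
    """Aggregation pipeline retrieving candidate cards via Atlas Vector Search."""
    return [
        {
            "$vectorSearch": {
                "index": "card_vector_index",  # assumed Atlas Search index name
                "path": "embedding",           # assumed field holding vectors
                "queryVector": query_vector,
                "numCandidates": 20 * limit,   # oversample for better recall
                "limit": limit,
            }
        },
        {
            "$project": {
                "_id": 0,
                "card_name": 1,
                "score": {"$meta": "vectorSearchScore"},
            }
        },
    ]

# Hypothetical "optimal cards per age range" table used by the guardrail.
AGE_BAND_CARDS = {
    (18, 30): {"starter-cashback", "student-lite"},
    (31, 55): {"travel-plus", "premium-rewards"},
    (56, 120): {"low-fee-classic", "premium-rewards"},
}

def age_guardrail(candidate_cards, customer_age):
    """Keep only candidate cards suited to the customer's age band."""
    for (low, high), allowed in AGE_BAND_CARDS.items():
        if low <= customer_age <= high:
            return [card for card in candidate_cards if card in allowed]
    return []

# Against a live cluster, candidates would come from something like:
#   db.card_info.aggregate(card_search_pipeline(embedding))
```

Here the guardrail is a plain post-filter on the retrieved candidates; in the platform described above it would instead be configured as a built-in component of the agent workflow.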
Lastly, as you can see from the demo above, the agent was able to leverage the vector search capabilities of MongoDB Atlas to retrieve, summarize, and personalize the messaging on the card information and benefits in the same tone as the user's. For more detailed information and resources on how MongoDB and Iguazio can transform your gen AI applications, we encourage you to apply for an exclusive innovation workshop with MongoDB's industry experts to explore bespoke modern app development and tailored solutions for your organization. Additionally, you can enjoy these resources:

- Start implementing gen AI applications in your enterprise today
- How Leading Industries are Transforming with AI and MongoDB Atlas
- The MongoDB Solutions Library is curated with tailored solutions to help developers kick-start their projects

July 24, 2024

The MongoDB AI Applications Program (MAAP) is Now Available

At MongoDB, everything starts with helping our customers solve their application and data challenges (regardless of use case). We talk to customers every day, and they're excited about gen AI. But they're also unsure how to move from concept to production, and they need to control costs. So finding the right way to adopt AI is critical. We're therefore thrilled to announce the general availability of the MongoDB AI Applications Program (MAAP)! A first-of-its-kind program, MAAP will help organizations take advantage of rapidly advancing AI technologies. It offers customers a wealth of resources to put AI applications into production: reference architectures and an end-to-end technology stack that includes integrations with leading technology providers, professional services, and a unified support system to help customers quickly build and deploy AI applications. Indeed, some early AI adopters found that legacy technologies can't manage the multi-modal data structures required to power AI applications. This was compounded by a lack of in-house skills and the perceived risk of integrating disparate components without support. As a result, businesses couldn't take advantage of AI advances quickly enough. That's why we're excited that MAAP is now available: the MAAP program and its ecosystem of companies address these challenges comprehensively. MAAP offers customers the right expertise and solutions for their use cases, and removes integration risk. Meanwhile, the MAAP ecosystem seamlessly integrates many of the world's leading AI and tech organizations—a real value-add for customers. While the MAAP ecosystem is just getting started, it already includes tech leaders like Accenture, AWS, Google Cloud, and Microsoft Azure, as well as gen AI innovators Anthropic, Cohere, and LangChain. The result is a group of organizations that will enable customers to build differentiated, production-ready AI applications, while aiming to deliver substantial return on investment.
Unlocking the power of data…

It's an understatement to say that the AI landscape is ever-changing. To keep pace with the latest developments and customer expectations, access to trusted collaborators and a robust support system is critical for organizations that want to innovate with AI. What's more, innovating with AI can mean tackling data silos and overcoming limited in-house technical expertise—which MAAP solves for with a central architecture for gen AI applications, pre-configured integrations, and professional services to ensure organizations' requirements are met. This framework provides flexibility for technical and non-technical teams alike, empowering them to leverage AI and company data for tasks specific to their department, no matter their preferred cloud or LLM. The MAAP ecosystem—representing industry leaders from every part of the AI stack—includes Accenture, Anthropic, Anyscale, Arcee AI, AWS, Cohere, Credal, Fireworks AI, Google Cloud, gravity9, LangChain, LlamaIndex, Microsoft Azure, Nomic, PeerIslands, Pureinsights, and Together AI. MongoDB is uniquely qualified to bring together the solutions MAAP offers: MongoDB customers can use any LLM provider, we can run anywhere (on all major cloud providers, on premises, and at the edge), and MongoDB offers seamless integrations with a variety of frameworks and systems. Perhaps most importantly, thousands of customers already rely on MongoDB to power their mission-critical apps, and we have years of experience helping customers unlock the power of data. The ultimate aim of MAAP is to enable customers to get the most out of their data, and to ensure that they can confidently innovate with AI. A recent success is Anywhere Real Estate (NASDAQ: HOUS), the parent company of well-known brands like Century 21, Coldwell Banker, and Sotheby's International Realty.
Anywhere partnered with MongoDB to drive their digital transformation, and is now delving into the potential of MAAP to fast-track their AI adoption. By harnessing MongoDB's expertise, Anywhere is set to future-proof its tech stack and to excel in an increasingly AI-driven landscape. "Generative AI is a game-changer for Anywhere, and we're integrating it into our products with enthusiasm," said Damian Ng, Senior Vice President of Technology at Anywhere. "MongoDB has been an invaluable partner, helping us rapidly explore and develop new approaches and opportunities. The journey ahead is exciting!"

…and clearing the way for AI innovation

MAAP offers customers a clear path to developing and deploying AI-enriched applications. The cornerstone of MAAP is MongoDB: applications are underpinned by MongoDB, which securely unifies real-time, operational, unstructured, and AI-related data without the need for bolt-on solutions. MongoDB's open and integrated architecture provides easy access to the MAAP partner network and enables the extension and customization of applications. With MAAP, customers can:

Accelerate their gen AI development with expert, hands-on support and services. MAAP expert services, combining the strengths of MongoDB Professional Services and industry-leading gen AI consultancies, will enable customers to rapidly innovate with AI. MAAP offers strategic guidance on roadmaps and skillsets, assists with data integration into advanced AI technologies, and can even develop production-ready applications. MAAP goes beyond development, empowering teams with best practices for securely integrating data into scalable gen AI solutions, ensuring businesses are equipped to tackle future AI initiatives.

Build high-performing gen AI applications that tackle industry-specific needs. Pre-designed architectures give customers repeatable, accelerated frameworks for building AI applications.
Architectures are fully customizable and extendable to accommodate ever-evolving generative AI use cases, like retrieval-augmented generation (RAG), or advanced AI capabilities, like agentic AI and advanced RAG techniques. With MongoDB's open and integrated platform at its core, innovation with MAAP's composable architectures is unlimited, making it easy for customers to bring the power of leading AI platforms directly to their applications.

Upskill teams to quickly—and repeatedly—build modern AI applications. MAAP customers have access to a variety of learning materials, including a dedicated MAAP GitHub library featuring integration code, demos, and a gen AI application prototype. These comprehensive resources will enable developers to build intelligent, personalized applications faster, while giving organizations the tools to expand their in-house AI expertise. With MAAP, customers have access to integration and development best practices that they can use for future gen AI projects.

It's early days, but there are wide-ranging indications that AI will impact everything from developer productivity to economic output. We've already seen customers use gen AI to speed modernization efforts, boost worker productivity with agents, unlock sales productivity, and power identity governance with natural language. In other words, AI is here to stay, and now is the time to take advantage of it. MAAP is designed to set customers up for AI success today and tomorrow: the program will be continuously enhanced with the latest technological advancements and industry best practices, to ensure that customers stay ahead of this rapidly evolving space. So please visit the MAAP page to learn more or to connect with the team! Our MAAP experts are happy to guide you on your AI journey and to show how the MongoDB AI Applications Program can help your organization.

July 23, 2024

magicpin Builds India's Largest Hyperlocal Retail Platform on MongoDB

Despite its trillion-dollar economy, 90% of retail consumption in India still takes place offline. While online retail in India has grown in recent years, much of it consists of dark stores (retail outlets or distribution centers that exist exclusively for online shopping) and warehouses; the majority of retail establishments—fashion, food, dining, nightlife, and groceries—still thrive as physical stores. What's more, businesses looking to transition to online models are hindered by major platforms that focus primarily on clicks rather than encouraging transactions. This opportunity was the inspiration for the founders of magicpin, India's largest hyperlocal retail platform. magicpin has replaced the conventional pay-per-click model, where businesses bid on keywords or phrases related to their products or services and then pay a fee each time someone clicks on an ad, with a new pay-per-conversion strategy. In a pay-per-conversion model, businesses only pay when they make an actual sale. magicpin does not rely on dark stores, warehouses, or deep discounting; instead, it collaborates with local retailers, augmenting foot traffic and preserving the essence of local economies. This unique model ensures that consumers not only enjoy existing in-store benefits, but also receive additional perks when opting to transact through magicpin. "We enable the discovery of those merchants," says Kunal Gupta, senior vice president at magicpin. "Which merchants in your local neighborhood are selling interesting stuff? What's their inventory? What savings can we offer to buyers? We have data for everything." Effectively three SaaS platforms in one, magicpin is a seller app, a buyer app, and a developing logistics app on the Open Network for Digital Commerce (ONDC), which is backed by the Indian government.
With over 10 million users on its platform (covering the majority of Indian cities and over 100 localities), magicpin has established itself as a leading offline retail discovery and savings app. magicpin currently has 250,000 merchants in categories ranging from food to fashion to pharmacy. The power behind magicpin has always been MongoDB's flexibility and scalability, and from the company's start in 2015, it became clear that magicpin was on to something special. "In the first week of March 2023, when we onboarded ONDC, we hit almost 10,000 transactions a day. In October last year, we peaked at 50,000 orders in a single day, which is a huge milestone," says Kunal. "When an ONDC order is placed, it flows through us. We manage the entire process—from sending the order to the merchant, assigning logistics personnel for pickup and delivery, to handling any customer support tickets that may arise. It's the seamless integration of these elements that defines our contribution to the intricate framework of ONDC." Having launched using the community version of MongoDB, Kunal realized that magicpin needed to make better use of its relatively lean tech team and allow them to focus more on building the business. He also saw that a managed service would be a more effective way of handling maintenance and related tasks. "We realized there had to be a better solution. We can't afford to have all the database expertise tied up with a team that's focusing on creating businesses and building applications," said Kunal. "That's when we started to use MongoDB Atlas." magicpin uses a multitude of technologies to store over 600 million SKUs and handle its SaaS platform, session cache, card, and order management, and MongoDB Atlas sits at the heart of the business. "For our operational and scaling needs, it's seamless," Kunal concludes. "Availability is high, and monitoring and routing are super-good.
Our lives have become much easier.” Watch the full presentation on YouTube to learn more.

July 23, 2024

Technical Services Tools: Embracing Modern Frameworks and Influencing Efficiency

The Technical Services Tools team at MongoDB plays a pivotal role in ensuring efficiency throughout the customer lifecycle. Led by experienced engineers like Jarrod Hinson, the team leverages advanced technologies such as generative AI and modern frameworks like MERN and MEAN to develop tools that enhance security, streamline support processes, and integrate seamlessly into MongoDB's ecosystem. By prioritizing collaboration and innovation, the team not only supports internal stakeholders, but also delivers significant value to MongoDB's customers, setting new standards in technical service excellence. Read on to learn more about working on the team from Jarrod's perspective.

My journey as a Technical Services Tools Engineer at MongoDB

Hey there, I'm Jarrod, a Staff Software Engineer on MongoDB's Technical Services Tools team. Before joining, I worked as a Salesforce engineer, tinkering with custom full stack technologies like the MERN and MEAN stacks. My background includes stints in healthcare software engineering and service in the US Army. I initially joined MongoDB as a Salesforce engineer in 2017, transitioning to the Technical Services Tools team in 2022 to delve deeper into full stack development and engineer applications using gen AI.

Influencing tools and security practices

In my role as a Staff Software Engineer at MongoDB, I influence internal tools and security best practices across the entire customer lifecycle. Central to my role is ensuring the secure storage of sensitive data and strictly controlling access to resolve customer issues. I craft robust authentication, authorization, and logging systems that track interactions with customer data from both ends, upholding rigorous standards of security and accountability.

Embracing modern frameworks and gen AI

My work involves extensive use of modern frameworks and gen AI. We deploy the MERN stack with Next.js and MongoDB's LeafyGreen UI framework, complemented by Salesforce architecture for our data needs.
Leveraging cutting-edge technology with gen AI, we employ MongoDB for databases and vector stores. Customer and internal support is enhanced through our work with assisted gen AI, which refines Technical Services case work to boost efficiency and satisfaction. For example, our public Support Portal lets customers chat about any issues they’re experiencing, with responses generated by a retrieval-augmented generation (RAG) system that pulls results from the MongoDB Technical Services Tools database. We utilize RAG for documentation, knowledge base articles, cached entries, and corrective guidance from our custom-built LLM feedback responses. Cross-collaboration opportunities Collaboration thrives at MongoDB. Within our network of customer-facing teams, we work together to define new features and capabilities. For example, we incorporate diagnostic tools into our frameworks through integration efforts with the broader Technical Services team, while partnerships with Cloud and CRM Tech teams manage user permissions across sites. We’ve worked closely with Professional Services leadership to identify existing use cases and seamlessly implement the tools their teams built into our infrastructure. Plus, we’ve helped them identify overlapping customer engagements, quickly understand projects, and enhance the customer consultation experience through custom AI summarization. This collaborative approach ensures our tools are robust, secure, and aligned with stakeholder needs, ultimately providing our customers with the best possible experience. Addressing internal stakeholders Engaging with stakeholders across MongoDB presents unique opportunities and challenges. Our projects focus on optimizing processes and enhancing internal capabilities to deliver value to the company and our customers. This role fosters deep cross-departmental relationships, allowing us to solve complex problems with custom tools that drive efficiency across the organization.
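As a rough illustration of the retrieval step in a RAG flow like the Support Portal chat described above, here is a minimal sketch in plain Python. The documents, embedding vectors, and helper names are invented for illustration; a production system would use a real embedding model and a vector index rather than this brute-force scan:

```python
from math import sqrt

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def retrieve_context(query_embedding, documents, k=2):
    """Return the k documents whose embeddings are most similar to the query."""
    ranked = sorted(
        documents,
        key=lambda d: cosine_similarity(query_embedding, d["embedding"]),
        reverse=True,
    )
    return ranked[:k]

def build_prompt(question, context_docs):
    """Assemble a grounded prompt from the retrieved snippets."""
    context = "\n".join(f"- {d['text']}" for d in context_docs)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

# Toy knowledge base with precomputed (fake) embeddings.
docs = [
    {"text": "How to rotate Atlas API keys", "embedding": [0.9, 0.1, 0.0]},
    {"text": "Troubleshooting replica set elections", "embedding": [0.1, 0.9, 0.2]},
    {"text": "Billing FAQ", "embedding": [0.0, 0.2, 0.9]},
]

top = retrieve_context([0.8, 0.2, 0.1], docs, k=1)
print(top[0]["text"])  # the API-key article ranks highest
```

The prompt returned by `build_prompt` would then be sent to the LLM, which answers using only the retrieved snippets rather than its general training data.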
We provide improved AI triage assistance, case summaries, and next steps to help engineers quickly gain context and engage with customers. We've generated over 100,000 case summaries and have offered triage assistance on over 40,000 cases. Team dynamics Our team is uniquely skilled and operates under a product-oriented delivery (POD) engineering framework broken into five technical areas: Front End, Back End, AI, Salesforce, and Infrastructure. This framework promotes collaboration and efficiency in an effort to drive successful projects. Led by Team Leads and Senior Engineers and guided by our Engineering Manager Lila Brooks, our structure supports continuous learning and engineering best practices. Paul Rooney, our Senior Director, leads by example and takes a hands-on approach with the team's most challenging engineering goals. In my role as a Staff Engineer, I drive multiple projects and aid PODs in achieving their objectives. This structure makes delegation efficient and empowers POD members to make decisions as needed. Why join our team? Our global, primarily remote team embraces flexible work. Yearly in-person offsites encourage team bonding, while the rest of the time, we communicate asynchronously to accommodate global time zones, with essential meetings aligned for overlap. Our leaders prioritize work-life balance and support individual needs to ensure flexibility for all team members. I’ve also found support through MongoDB Veterans, an employee resource group that guides veteran applicants to a role within the company aligned to their military experience and brings together a community of supportive co-workers. In terms of day-to-day work on the Technical Services Tools team, what’s most exciting is our specialization in cutting-edge AI applications involving MongoDB processes and data. With rapid expansion, we explore areas like QA, security, AI, Full Stack, and Salesforce, fostering diverse skill sets and knowledge sharing.
Projects span intriguing domains from AI applications to custom MERN stack development and Salesforce integration. By joining the team, you’ll have the opportunity to grow your skills, dive into modern frameworks, and join a community of inspiring and talented people! Find out more about Life at MongoDB in technical services by joining our talent community .

July 22, 2024

The Converged AI and Application Datastore for Insurance

In the inherently information-driven insurance industry, companies ingest, analyze, and process massive amounts of data, requiring extensive decision-making. To manage this, they rely on a myriad of technologies and IT support staff to keep operations running smoothly, but these systems are often outdated and therefore ineffective. Artificial intelligence (AI) holds great promise for insurers by streamlining processes, enhancing decision-making, and improving customer experiences with significantly less time, fewer resources, and smaller staffs than traditional IT systems require. The convergence of AI and innovative application datastores is transforming how insurers work with data. In this post, we’ll look at how these elements are reshaping the insurance industry and offering greater potential for AI-powered applications, with MongoDB at the heart of the converged AI and application datastore. Scenario planning and flexible data layers One of the primary concerns for IT leaders and decision-makers in the insurance industry is making smart technology investments. The goal is to consolidate existing technology portfolios, which often include a variety of systems like SQL Server, Oracle, and IBM IMS. Consolidation helps reduce inventory and prepare for the future. But what does future-proofing really look like? Scenario planning is an effective strategy for future-proofing. This involves imagining different plausible futures and investing in the common elements that remain beneficial across all scenarios. For insurance companies, a crucial common thread is the data layer. By making data easier to work with, companies can ensure that their technology investments remain valuable regardless of how future scenarios unfold. MongoDB’s flexible developer data platform offers a distinct architectural advantage by making data easier to work with, regardless of the cloud vendor or AI application in use.
This flexibility is vital for preparing for disruptive future scenarios, whether they involve regulatory changes, market shifts, or technological advancements. Watch now: The Converged AI and Application Datastore: How API's, AI & Data are Reshaping Insurance The role of AI and data in insurance Generative AI is revolutionizing the insurance sector, offering new ways to manage and utilize data. According to Celent's 2023 Technology Insight and Strategy Survey, 33% of companies across different industries have AI projects in planning, 29% in development, and 19% in production (shown in Figure 1 below). This indicates a significant shift towards AI-driven solutions by insurers actively experimenting with gen AI. Figure 1: Celent Technology Insight and Strategy Survey 2023 However, there's tension between maintaining existing enterprise systems and innovating with AI. Insurance companies must balance keeping the lights on with investing in AI to meet the expectations of boards and stakeholders. The solution lies in integrating AI in a way that enhances operational efficiency without overwhelming existing systems. However, data challenges need to be addressed to achieve this, specifically around access to data. According to a Workday Global Survey , only 4% of respondents said their data is fully accessible, and 59% say their enterprise data is somewhat or completely siloed. Without a solid data foundation, insurers will struggle to achieve the benefits they are looking for from AI. Data architectures and unstructured data When adopting advanced technologies like AI and ML, which require data as the foundation, organizations often grapple with the challenge of integrating these innovations into legacy systems due to their inflexibility and resistance to modification. A robust data architecture is essential for future-proofing and consolidating technology investments. 
Insurance companies often deal with a vast amount of unstructured data, such as claim images and videos, which can be challenging to manage. By leveraging AI, specifically through vector search and large language models, companies can efficiently process and analyze this data. MongoDB is ideal for managing unstructured data due to its flexible, JSON-like document model, which accommodates a wide variety of data types and structures without requiring a predefined schema. Additionally, MongoDB’s flexibility enables insurers to integrate seamlessly with various technologies, making it a versatile and powerful solution for unstructured data management. For example, consider an insurance adjuster assessing damage from claim photos. Traditionally, this would require manually reviewing each image. With AI, the photos can be converted into vector embeddings and matched against a database of similar claims, drastically speeding up the process. This not only improves efficiency but also enhances the accuracy of assessments. The converged AI and application datastore with MongoDB Building a single view of data across various systems is a game-changer for the insurance industry. Data warehouses and data lakes have long provided single views of customer and claim data, but they often rely on historical data, which may be outdated. The next step is integrating real-time data with these views to make them more dynamic and actionable. A versatile database platform plays a crucial role in this integration. By consolidating data into a single, easily accessible view, insurance companies can ensure that various personas, from underwriters to data scientists, can interact with the data effectively. This integration allows for more responsive and informed decision-making, which is crucial for staying competitive in a rapidly evolving market. This can be achieved with a converged AI and application datastore, as shown in Figure 2 below. 
This is where operational data, analytics insights, and unstructured data become operationally ready for the applications that leverage AI. Figure 2: Converged AI and application datastore reference architecture The convergence of AI, data, and application datastores is reshaping the insurance industry. By making smart technology investments, leveraging AI to manage unstructured data, and building robust data architectures, insurance companies can future-proof their operations and embrace innovation. A versatile and flexible data platform provides the foundation for these advancements, enabling companies to make their data more accessible, actionable, and valuable. The MongoDB Atlas developer data platform puts powerful AI and analytics capabilities directly in the hands of developers and offers the capabilities to enrich applications by consolidating, ingesting, and acting on any data type instantly. Because MongoDB serves as the operational data store (ODS)—with its flexible document model—insurers can efficiently handle large volumes of data in real-time. By integrating MongoDB with AI/ML platforms, insurers can develop models trained on the most accurate and up-to-date data, thereby addressing the critical need for adaptability and agility in the face of evolving technologies. With built-in security controls across all data, whether managed in a customer environment or through MongoDB Atlas, a fully managed cloud service, MongoDB ensures robust security with features such as authentication (single sign-on and multi-factor authentication), role-based access controls, and comprehensive data encryption. These security measures act as a safeguard for sensitive data, mitigating the risk of unauthorized access from external parties and providing organizations with the confidence to embrace AI and ML technologies. 
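To make the claim-photo similarity lookup described earlier concrete, the query can be expressed as an Atlas Vector Search aggregation stage. The index name, field names, and embedding values below are illustrative assumptions, not a real deployment; the sketch only builds the pipeline, which would then be passed to `aggregate()` on a live cluster:

```python
# Sketch of a $vectorSearch aggregation pipeline for finding past claims
# whose photo embeddings resemble a new claim photo. The index name
# ("claims_vector_index") and field names are placeholders.

def similar_claims_pipeline(query_vector, limit=5):
    """Build (but do not run) a $vectorSearch pipeline for similar claims."""
    return [
        {
            "$vectorSearch": {
                "index": "claims_vector_index",  # assumed index name
                "path": "photo_embedding",       # field holding the vectors
                "queryVector": query_vector,
                "numCandidates": limit * 20,     # oversample for better recall
                "limit": limit,
            }
        },
        {
            "$project": {
                "claim_id": 1,
                "assessment": 1,
                "score": {"$meta": "vectorSearchScore"},
            }
        },
    ]

pipeline = similar_claims_pipeline([0.12, -0.08, 0.33], limit=3)
# Against a live Atlas cluster, this would be executed as:
#   db.claims.aggregate(pipeline)
print(pipeline[0]["$vectorSearch"]["limit"])  # 3
```

The adjuster's application would embed the incoming photo, run this pipeline, and surface the returned assessments of the most similar historical claims.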
If you would like to learn more about the convergence of AI and application datastores, visit the following resources: Video: The Converged AI and Application Datastore: How API's, AI & Data are Reshaping Insurance Paper: Innovation in Insurance with Artificial Intelligence The MongoDB Solutions Library is curated with tailored solutions to help developers kick-start their projects

July 18, 2024

Anti-Money Laundering and Fraud Prevention With MongoDB Vector Search and OpenAI

Fraud and anti-money laundering (AML) are major concerns for both businesses and consumers, affecting sectors like financial services and e-commerce. Traditional methods of tackling these issues, including static, rule-based systems and predictive artificial intelligence (AI) methods, work but have limitations, such as a lack of context and the feature engineering overhead required to keep models relevant, which can be time-consuming and costly. Vector search can significantly improve fraud detection and AML efforts by addressing these limitations, representing the next step in the evolution of machine learning for combating fraud. Any organization that is already benefiting from real-time analytics will find that this breakthrough in anomaly detection takes fraud and AML detection accuracy to the next level. In this post, we examine how real-time analytics powered by Atlas Vector Search enables organizations to uncover deeply hidden insights before fraud occurs. The evolution of fraud and risk technology Over the past few decades, fraud and risk technology have evolved in stages, with each stage building upon the strengths of previous approaches while also addressing their weaknesses: Risk 1.0: In the early stages (the late 1990s to 2010), risk management relied heavily on manual processes and human judgment, with decision-making based on intuition, past experiences, and limited data analysis. Rule-based systems emerged during this time, using predefined rules to flag suspicious activities. These rules were often static and lacked adaptability to changing fraud patterns. Risk 2.0: With the evolution of machine learning and advanced analytics (from 2010 onwards), risk management entered a new era with 2.0. Predictive modeling techniques were employed to forecast future risks and detect fraudulent behavior. Systems were trained on historical data and became more integrated, allowing for real-time data processing and the automation of decision-making processes.
However, these systems face limitations of their own. Feature engineering overhead: Risk 2.0 systems often require manual feature engineering. Lack of context: Risk 1.0 and Risk 2.0 may not incorporate a wide range of variables and contextual information. Risk 2.0 solutions are often used in combination with rule-based approaches because rules cannot be avoided. Companies have their business- and domain-specific heuristics and other rules that must be applied. Here is an example fraud detection solution based on Risk 1.0 and Risk 2.0 with a rules-based and traditional AI/ML approach. Risk 3.0: The latest stage (2023 and beyond) in fraud and risk technology evolution is driven by vector search. This advancement leverages real-time data feeds and continuous monitoring to detect emerging threats and adapt to changing risk landscapes, addressing the limitations of data imbalance, manual feature engineering, and the need for extensive human oversight while incorporating a wider range of variables and contextual information. Depending on the particular use case, organizations can combine or use these solutions to effectively manage and mitigate risks associated with fraud and AML. Now, let us look into how MongoDB Atlas Vector Search (Risk 3.0) can help enhance existing fraud detection methods. How Atlas Vector Search can help A vector database is an organized collection of information that makes it easier to find similarities and relationships between different pieces of data. This makes MongoDB particularly effective here, without the need for a standalone or bolt-on vector database. The versatility of MongoDB’s developer data platform empowers users to store their operational data, metadata, and vector embeddings on MongoDB Atlas and seamlessly use Atlas Vector Search to index, retrieve, and build performant gen AI applications. Watch how you can revolutionize fraud detection with MongoDB Atlas Vector Search.
The combination of real-time analytics and vector search offers a powerful synergy that enables organizations to discover insights that are otherwise elusive with traditional methods. MongoDB facilitates this through Atlas Vector Search integrated with OpenAI embedding, as illustrated in Figure 1 below. Figure 1: Atlas Vector Search in action for fraud detection and AML Business perspective: Fraud detection vs. AML Understanding the distinct business objectives and operational processes driving fraud detection and AML is crucial before diving into the use of vector embeddings. Fraud Detection is centered on identifying unauthorized activities aimed at immediate financial gain through deceptive practices. The detection models, therefore, look for specific patterns in transactional data that indicate such activities. For instance, they might focus on high-frequency, low-value transactions, which are common indicators of fraudulent behavior. AML, on the other hand, targets the complex process of disguising the origins of illicitly gained funds. The models here analyze broader and more intricate transaction networks and behaviors to identify potential laundering activities. For instance, AML could look at the relationships between transactions and entities over a longer period. Creation of Vector Embeddings for Fraud and AML Fraud and AML models require different approaches because they target distinct types of criminal activities. To accurately identify these activities, machine learning models use vector embeddings tailored to the features of each type of detection. In this solution highlighted in Figure 1, vector embeddings for fraud detection are created using a combination of text, transaction, and counterparty data. Conversely, the embeddings for AML are generated from data on transactions, relationships between counterparties, and their risk profiles.
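Once embeddings exist, classification can amount to comparing a new transaction's embedding against flagged historical ones. Here is a toy sketch in plain Python, with invented two-dimensional embeddings and an arbitrary threshold; in the actual solution the embeddings would come from OpenAI and the nearest-neighbor search from Atlas Vector Search:

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def flag_transaction(embedding, history, k=3, threshold=0.5):
    """Flag if more than `threshold` of the k most similar past
    transactions were themselves marked suspicious."""
    neighbors = sorted(
        history, key=lambda h: cosine(embedding, h["vec"]), reverse=True
    )[:k]
    flagged_share = sum(h["suspicious"] for h in neighbors) / k
    return flagged_share > threshold

# Toy history of past transaction embeddings and their labels.
history = [
    {"vec": [0.9, 0.1], "suspicious": True},
    {"vec": [0.8, 0.2], "suspicious": True},
    {"vec": [0.1, 0.9], "suspicious": False},
    {"vec": [0.2, 0.8], "suspicious": False},
]

print(flag_transaction([0.85, 0.15], history))  # True: neighbors were suspicious
print(flag_transaction([0.15, 0.85], history))  # False
```

The same shape of logic applies to AML, just with embeddings built from transaction networks and counterparty risk profiles rather than individual transaction features.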
The selection of data sources, including the use of unstructured data and the creation of one or more vector embeddings, can be customized to meet specific needs. This particular solution utilizes OpenAI for generating vector embeddings, though other software options can also be employed. Historical vector embeddings are representations of past transaction data and customer profiles encoded into a vector format. The demo database is prepopulated with synthetically generated test data for both fraud and AML embeddings. In real-world scenarios, you can create embeddings by encoding historical transaction data and customer profiles as vectors. Regarding the fraud and AML detection workflow, as shown in Figure 1, incoming transaction fraud and AML aggregated text are used to generate embeddings using OpenAI. These embeddings are then analyzed using Atlas Vector Search based on the percentage of previous transactions with similar characteristics that were flagged for suspicious activity. In Figure 1, the term "Classified Transaction" indicates a transaction that has been processed and categorized by the detection system. This classification helps determine whether the transaction is considered normal, potentially fraudulent, or indicative of money laundering, thus guiding further actions. If flagged for fraud: The transaction request is declined. If not flagged: The transaction is completed successfully, and a confirmation message is shown. For rejected transactions, users can contact case management services with the transaction reference number for details. No action is needed for successful transactions. Using Atlas Vector Search for fraud detection With Atlas Vector Search and OpenAI embeddings, organizations can: Eliminate the need for batch and manual feature engineering required by predictive (Risk 2.0) methods. Dynamically incorporate new data sources to perform more accurate semantic searches, addressing emerging fraud trends.
Adopt this method for mobile solutions, as traditional methods are often costly and performance-intensive. Why MongoDB for AML and fraud prevention Fraud and AML detection require a holistic platform approach as they involve diverse data sets that are constantly evolving. Customers choose MongoDB because it is a unified data platform (as shown in Figure 2 below) that eliminates the need for niche technologies, such as a dedicated vector database. What’s more, MongoDB’s document data model incorporates any kind of data—any structure (structured, semi-structured, and unstructured), any format, any source—no matter how often it changes, allowing you to create a holistic picture of customers to better predict transaction anomalies in real time. By incorporating Atlas Vector Search, institutions can: Build intelligent applications powered by semantic search and generative AI over any type of data. Store vector embeddings right next to your source data and metadata. Vectors inserted or updated in the database are automatically synchronized to the vector index. Optimize resource consumption, improve performance, and enhance availability with Search Nodes. Remove operational heavy lifting with the battle-tested, fully managed MongoDB Atlas developer data platform. Figure 2: Unified risk management and fraud detection data platform Given the broad and evolving nature of fraud detection and AML, these areas typically require multiple methods and a multimodal approach. Therefore, a unified risk data platform offers several advantages for organizations that are aiming to build effective solutions. Using MongoDB, you can develop solutions for Risk 1.0, Risk 2.0, and Risk 3.0, either separately or in combination, tailored to meet your specific business needs. The concepts are demonstrated with two examples: a card fraud solution accelerator for Risk 1.0 and Risk 2.0 and a new Vector Search solution for Risk 3.0, as discussed in this blog.
It's important to note that the vector search-based Risk 3.0 solution can be implemented on top of Risk 1.0 and Risk 2.0 to enhance detection accuracy and reduce false positives. If you would like to discover more about how MongoDB can help you supercharge your fraud detection systems, take a look at the following resources: Revolutionizing Fraud Detection with Atlas Vector Search Card Fraud solution accelerator (Risk 1.0 and Risk 2.0) Risk 3.0 fraud detection solution GitHub repository

July 17, 2024

Meet the 2024 MongoDB Community Champions!

MongoDB is excited to announce our new cohort of Community Champions! MongoDB Community Champions comprise an inspirational global group of passionate, dedicated MongoDB advocates—including customers, partners, and inspiring community leaders. They demonstrate exceptional leadership in advancing the growth and knowledge of MongoDB’s brand and technology. The eighteen Community Champions this year represent a range of expertise and serve in a variety of professional and community roles. For example, Zhiyang Su is a senior applied scientist specializing in search ranking. With extensive experience in natural language processing (NLP), deep learning, and high-performance systems, he excels in dialog system design and optimization. Passionate about knowledge sharing, he regularly writes technical blog posts about MongoDB, NLP, and product design. Community Champions serve as the connective tissue between MongoDB and our community, keeping them informed about MongoDB’s latest developments and offerings. Community Champions also share their knowledge and experiences with others through a variety of media channels and event engagements. “With my contributions, I’m helping developers to get the right thing done faster by boosting their productivity,” said Mark Paluch, Spring Data Engineer and 2024 Community Champion. “Close collaboration in the form of learning, discussing, and giving feedback is key to get there.” As members of this program, Champions gain a variety of experiences—including exclusive access to executives, product roadmaps, preview programs, an annual Champions Summit with product leaders—and relationships that grow their professional stature as MongoDB practitioners and help them be seen as leaders in the technology community. “Building on our global Champions program, this impressive group allows us to highlight a new level of outstanding members,” said Chuck Freedman, Director of Advocacy and Enablement, Developer Relations at MongoDB.
“Our team led a cross-company nomination, interview, and review process to welcome a range of qualified and inspiring individuals representing our customers, partners, and global community.” Reflecting on this year’s selection process, Abirami Sukumaran, Developer Advocate and 2024 Community Champion, said: “I was impressed by the comprehensive nature of the interview. It wasn't just about checking boxes; it felt like a 360-degree assessment of my knowledge and enthusiasm for MongoDB Atlas, which made the entire process very positive. I am really thrilled to share my experience on this database program with enthralled developers around the globe.” We are also currently accepting applications for the Community Creator program. The Creator program consists of community members who create and share content to help others learn and uplevel their MongoDB knowledge. Creators are given exclusive access to product sessions, priority access to content features, and swag. To learn more, please visit the MongoDB Community Creators page. And now, without further ado, let’s meet the 2024 cohort of Community Champions! For more, visit our MongoDB Community Champions page.

July 16, 2024

Teach & Learn with MongoDB: Professor Abdussalam Alawini, University of Illinois at Urbana-Champaign

In this series of interviews, we talk to students and educators around the world who are using MongoDB to make their classes more engaging and relevant. By exploring their stories, we uncover how MongoDB’s innovative platform and resources are transforming educational landscapes and empowering the next generation of tech-savvy professionals. From creative teaching approaches to advanced classroom solutions, the MongoDB for Educators program can help you transform your classroom with cutting-edge technology and free resources. It can help you provide students with an interactive and dynamic learning environment that bridges the gap between theoretical knowledge and practical application. The program includes a variety of free resources for educators crafted by MongoDB experts to prepare learners with in-demand database skills and knowledge. Program participants have access to MongoDB Atlas credits, curriculum materials, certifications, and membership in a global community of educators from over 700 universities. From theory to practice: Hands-on MongoDB Teaching Professor Abdussalam Alawini is known for his creative use of MongoDB in his courses. He heavily uses MongoDB's free cluster to demonstrate MongoDB concepts during classes, and his students also use the free cluster for their projects, giving them hands-on experience with real-world applications. Currently a Teaching Associate Professor at the University of Illinois Urbana-Champaign, Professor Alawini has research interests spanning databases, applied machine learning, and education. He is particularly focused on applying machine learning methods to enhance classroom experiences and education. His work also includes developing next-generation data management systems, such as data provenance, citation, and scientific management systems. He recently received the U of I’s 2024 Campus Excellence in Undergraduate Education award, which highlights his commitment to teaching and the impact he’s had on his students.
Professor Alawini is currently collaborating with colleagues on research to map how databases, data systems, data management, and related courses are taught in introductory computer science undergraduate courses worldwide. Professor Alawini’s story offers valuable insights for educators eager to enhance their teaching and prepare students for a tech-driven future. Check out how MongoDB Atlas has revolutionized his teaching by simplifying database deployment, management, and scaling, allowing students to focus more on learning MongoDB concepts. Tell us about your educational journey and what sparked your interest in databases. My educational journey began with a bachelor's degree in Computer Science from the University of Tripoli in 2002. I then spent over six years in the industry as a database administrator, lead software developer, and IT Manager. In 2011, I returned to academia and earned two master's degrees in Computer Science and Engineering and Technology Management from Portland State University, followed by a Ph.D. in Computer Science in 2016. Subsequently, I joined the University of Pennsylvania for a two-year postdoctoral training. My interest in databases was sparked during my time as a database administrator at PepsiCo, where I enjoyed maintaining the company's databases and building specialized reports to improve business operations. I was particularly fascinated by database systems’ ability to optimize queries and handle millions of concurrent user requests seamlessly. This experience led me to focus my doctoral studies on building data management systems for scientific applications. What courses are you currently teaching at the University of Illinois Urbana-Champaign? Currently, I teach Database Systems and Data Management in the Cloud courses at the University of Illinois Urbana-Champaign. In addition, I also teach a course to University High School students to introduce them to data management and database basics. 
My intention with teaching databases to high schoolers is to use data management as a gateway to lower entry barriers into computing fields for non-computer science students and to recruit underrepresented minorities to computing. What inspired you to start teaching MongoDB? I was inspired to start teaching MongoDB after seeing several surveys indicating that it is the most used database in web development and one of the leading document-oriented databases. MongoDB offers several unique features that set it apart from other databases, including the aggregation pipeline, which simplifies data processing and transformation. Additionally, MongoDB's flexible schema design allows for easier handling of unstructured data, and its horizontal scalability ensures robust performance as data volumes grow. These features make MongoDB an essential tool for modern web development, and I wanted to equip my students with the skills to leverage this powerful technology. How do you design your course content to effectively integrate MongoDB and engage students in practical learning? In all my data management courses, I focus on teaching students the concept of data models, including relational, document, key-value, and graph. In my Database Systems course, I teach MongoDB alongside SQL and Neo4J to highlight the unique features and capabilities of each data model. This comparative approach helps students appreciate the importance and applications of different databases, ultimately making them better data engineers. In my Data Management in the Cloud course, I emphasize the system's side of MongoDB, particularly its scalability. Understanding how MongoDB is built to handle large volumes of data efficiently provides students with practical insights into managing data in a cloud environment. To effectively integrate MongoDB and engage students in practical learning, I use a hybrid flipped-classroom approach. 
Students watch recorded lectures before class, allowing us to dedicate class time to working through examples together. Additionally, students form teams to work on various data management scenarios using a collaborative online assessment tool called PrairieLearn. This model fosters peer learning and collaboration, enhancing the overall educational experience. How has MongoDB supported you in enhancing your teaching methods and upskilling your students? I would like to sincerely thank MongoDB for Academia for the amazing support and material they provided to enhance my course design. The free courses offered at MongoDB University have significantly improved my course delivery, allowing me to provide more in-depth and practical knowledge to my students. I heavily use MongoDB's free cluster to demonstrate MongoDB concepts during classes, and my students also use the free cluster for their projects, which gives them hands-on experience with real-world applications. MongoDB Atlas has been a game-changer in my teaching methods. As a fully managed cloud database, it simplifies the process of deploying, managing, and scaling databases, allowing students to focus on learning and applying MongoDB concepts without getting bogged down by administrative tasks. The flexibility and reliability of MongoDB Atlas make it an invaluable tool for both educators and students in the field of data management. Could you elaborate on the key findings from your ITiCSE paper on students' experiences with MongoDB and how these insights can help other educators? In my ITiCSE paper, we conducted an in-depth analysis of students' submissions to MongoDB homework assignments to understand their learning experiences and challenges. The study revealed that as students use more advanced MongoDB operators, they tend to make more reference errors, indicating a need for a better conceptual understanding of these operators. 
Additionally, when students encounter new functionalities, such as the $group operator, they initially struggle but generally do not repeat the same mistakes in subsequent problems. These insights suggest that educators should allocate more time and effort to teaching advanced MongoDB concepts and provide additional support during the initial learning phases. By understanding these common difficulties, instructors can better tailor their teaching strategies to improve student outcomes and enhance their learning experience.

What advice would you give to fellow educators who are considering implementing MongoDB in their own courses to ensure a successful and impactful experience for their students?

Implementing MongoDB in your courses can be highly rewarding. Here’s some advice to ensure success:

- Foundation in Data Models: Teach MongoDB alongside other database types to highlight unique features and applications, making students better data engineers.
- Utilize MongoDB Resources: Leverage support from MongoDB for Academia, free courses from MongoDB University, and free clusters for hands-on projects.
- Practical Learning: Use MongoDB Atlas to simplify database management and focus on practical applications.
- Focus on Challenges: Allocate more time for advanced MongoDB concepts. Address common errors and use tools like PrairieLearn that capture students' interactions and learning progress to identify learning patterns and adjust instruction.
- Encourage Real-World Projects: Incorporate practical projects to enhance skills and relevance.
- Continuous Improvement: Gather feedback to iteratively improve course content and share successful strategies with peers. MongoDB is always evolving, so make sure to keep up with its updates and new features.

These steps will help create an engaging learning environment, preparing students for real-world data management.
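Since the $group operator comes up above as a common stumbling block, a small illustration may help. This sketch is not from the course materials; the collection and field names are invented. It shows a simple aggregation pipeline as you would pass it to PyMongo's aggregate(), alongside a plain-Python re-implementation that makes the $match/$group/$avg semantics explicit.

```python
from collections import defaultdict

# Hypothetical pipeline: average score per course (all names invented for illustration).
# With a live cluster you would run: db.enrollments.aggregate(pipeline)
pipeline = [
    {"$match": {"year": 2024}},
    {"$group": {"_id": "$course", "avgScore": {"$avg": "$score"}}},
]

# Sample documents standing in for a collection:
docs = [
    {"course": "CS101", "score": 90, "year": 2024},
    {"course": "CS101", "score": 80, "year": 2024},
    {"course": "DB200", "score": 70, "year": 2024},
    {"course": "DB200", "score": 60, "year": 2023},  # excluded by $match
]

# The same computation in plain Python, to show what each stage does:
matched = [d for d in docs if d["year"] == 2024]  # $match keeps qualifying docs
groups = defaultdict(list)
for d in matched:                                 # $group accumulates per _id value
    groups[d["course"]].append(d["score"])
result = [{"_id": c, "avgScore": sum(s) / len(s)} for c, s in groups.items()]
# result: [{'_id': 'CS101', 'avgScore': 85.0}, {'_id': 'DB200', 'avgScore': 70.0}]
```

The key conceptual point for students is that "$course" in the _id slot is a reference to a field's value, not a literal string, which is exactly the kind of reference error the study observed.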
Apply to the MongoDB for Educators program and explore free resources for educators crafted by MongoDB experts to prepare learners with in-demand database skills and knowledge.

July 10, 2024

Building Gen AI with MongoDB & AI Partners | June 2024

Even for those of us who work in AI, keeping up with the latest news in the AI space can be head-spinning. In just the last few weeks, OpenAI introduced their newest model (GPT-4o), Anthropic continued to develop Claude with the launch of Claude 3.5 Sonnet, and Mistral launched Mixtral 8x22B, their most efficient open model to date. And those are only a handful of recent releases! In such an ever-changing space, partnerships are critical to combining the strengths of organizations to create solutions that would be challenging to develop independently. Also, it can be overwhelming for any one business to keep track of so much change. So there’s a lot of value in partnering with industry leaders and new players alike to bring the latest innovations to customers. I’ve been at MongoDB for less than a year, but in that time our team has already built dozens of strategic partnerships that are helping companies and developers build AI applications faster and more safely. I love to see these collaborations take off! A compelling example is MongoDB’s recent work with Vercel. Our team developed an exciting sample application that lets users deploy a retrieval-augmented generation (RAG) application on Vercel in just a few minutes: with only a MongoDB URI and an OpenAI key, users can deploy it in one click. Another recent collaboration was with Netlify. Our team developed a starter template that implements a RAG chatbot on top of their platform using LangChain and MongoDB Atlas Vector Search capabilities for storing and searching the knowledge base that powers the chatbot's responses. These examples demonstrate the power of combining MongoDB's robust database capabilities with other deployment platforms. They also show how quickly and efficiently users can set up fully functional RAG applications, and highlight the significant advantages that partnerships bring to the AI ecosystem. And the best part? We’re just getting started!
Stay tuned for more information about the MongoDB AI Applications Program later this month.

Welcoming new AI partners

Speaking of partnerships, in June we welcomed seven AI partners that offer product integrations with MongoDB. Read on to learn more about each great new partner.

AppMap is an open source personal observability platform that helps developers keep their software secure, clear, and aligned. Elizabeth Lawler, CEO of AppMap, commented on our joint value for developers: “AppMap is thrilled to join forces with MongoDB to help developers improve and optimize their code. MongoDB is the go-to data store for web and mobile applications, and AppMap makes it easier than ever for developers to migrate their code from other data stores to MongoDB and to keep their code optimized as their applications grow and evolve.” Read more about our partnership and how to use AppMap to improve the quality of code running with MongoDB.

Mendable is a platform that automates customer service, providing quick and accurate answers to questions without human intervention. Eric Ciarla, co-founder of Mendable, highlighted the importance of our partnership. “Our partnership with MongoDB is unlocking massive potential in AI applications, from go-to-market copilots to countless other innovative use cases,” he said. “We're excited to see teams at MongoDB and beyond harnessing our combined technologies to create transformative AI solutions across all kinds of industries and functions.” Learn how Mendable and MongoDB Atlas Vector Search power customer service applications.

OneAI is an API-first platform built for developers to create and manage trusted GPT chatbots. Amit Ben, CEO of OneAI, shared his excitement about the partnership. “We're thrilled to partner with MongoDB to help customers bring trusted GenAI to production.
OneAI's platform, with RAG pipelines, LLM-based chatbots, goal-based AI, anti-hallucination guardrails, and language analytics, empowers customers to leverage their language data and engage users even more effectively on top of MongoDB Atlas.” Check out some of OneAI’s GPT agents and advanced RAG pipelines built on MongoDB.

Prequel allows companies to sync data to and from their customers' data warehouses, databases, or object storage so they get better data access with less engineering effort. “Sharing MongoDB data just got easier with our partnership,” celebrated Charles Chretien, co-founder of Prequel. “Software companies running on MongoDB can use Prequel to instantly share billions of records with customers on every major data warehouse, database, and object storage service.” Learn how you can share MongoDB data using Prequel.

Qarbine complements summary data visualization tools, allowing for better-informed decision-making across teams. Bill Reynolds, CTO of Qarbine, described the impact of our integration on distilling better insights from data: “We’re excited to extend the many MongoDB Atlas benefits upward in the modern application stack to deliver actionable insights from publication-quality drill-down analysis. The native integrations enhance in-app real-time decisions, business productivity, and operational data ROI, fueling modern application innovation.” Want to power up your insights with MongoDB Atlas and Qarbine? Read more.

Temporal is a durable execution platform for building and scaling invincible applications faster. “Organizations of all sizes have built AI applications that are ‘durable by design’ using MongoDB and Temporal. The burden of managing data and agent task orchestration is effortlessly abstracted away by Temporal's development primitives and MongoDB's Atlas Developer Data Platform,” says Jay Sivachelvan, VP of Partnerships at Temporal. He also highlighted the benefits of this partnership.
“These two solutions, together, provide compounding benefits by increasing product velocity while also seamlessly automating the complexities of scalability and enterprise-grade resilience.” Learn how to build microservices in a more efficient way with MongoDB and Temporal.

Unstructured is a platform that connects any type of enterprise data for use with vector databases and any LLM framework. Read more about enhancing your gen AI application accuracy using MongoDB and Unstructured.

But wait, there's more! To learn more about building AI-powered apps with MongoDB, check out our AI Resources Hub, and stop by our Partner Ecosystem Catalog to read about our integrations with MongoDB’s ever-evolving AI partner ecosystem.

July 9, 2024

Elevate Your Python AI Projects with MongoDB and Haystack

MongoDB is excited to announce an integration with Haystack, enhancing MongoDB Atlas Vector Search for Python developers. This integration amplifies our commitment to providing developers with cutting-edge tools for building AI applications centered around semantic search and Large Language Models (LLMs).

“We’re excited to partner with MongoDB to help developers build top-tier LLM applications. The new Haystack and MongoDB Atlas integration lets developers seamlessly use MongoDB data in Haystack, a reliable framework for creating quality LLM pipelines for use cases like RAG, QA, and agentic pipelines. Whether you're an experienced developer or just starting, your gen AI projects can quickly progress from prototype to adoption, accelerating value for your business and end-users.”

— Malte Pietsch, co-founder and CTO, deepset

Simplifying AI app development with Haystack

Haystack is an open-source Python framework that simplifies AI application development. It enables developers to start their projects quickly, experiment with different AI models, and efficiently scale their applications. Haystack is particularly effective for building applications requiring semantic understanding and natural language processing (NLP), such as chatbots and question-answering systems. Haystack’s core features include:

- Components: Haystack breaks down complex NLP tasks into manageable components, such as document retrieval or text summarization. With the new MongoDB-Haystack integration, MongoDB becomes the place where all your data lives, ready for Haystack to use.
- Pipelines: Haystack lets you link components together into pipelines for more complex tasks. With this integration, your MongoDB data flows through these pipelines.
- Agents: Haystack Agents use LLMs to resolve complex queries. They can decide which tools (or components) to use for a given question, leveraging MongoDB data to deliver smarter answers.
Atlas Vector Search: Enhance AI development with Haystack

At the heart of the new integration is MongoDB Atlas Vector Search, transforming how applications search and retrieve data. By leveraging vector embeddings, Atlas Vector Search goes beyond mere keyword matching: it interprets the intent behind queries, enabling applications to provide highly relevant, context-aware responses. This is a breakthrough for Python developers who aim to build applications that think and understand like humans.

Building on this foundation, the Atlas Vector Search and Haystack integration gives Python developers a powerful toolkit for navigating the complexities of AI application development. MongoDB becomes a dynamic document store within Haystack's framework, optimizing data storage, processing, and retrieval. Additionally, the integration eases the use of advanced AI models from leading providers such as OpenAI and Cohere in your applications. Developers can thus create applications that do more than just answer queries—they grasp and act on the underlying intent, ensuring responses are both accurate and contextually relevant.

What this means for Python developers

For Python developers, this integration means:

- Faster development: Developers can focus on building and innovating rather than spending time configuring and managing infrastructure. MongoDB's integration with Haystack means you can get up and running quickly, leveraging the best of both technologies to accelerate your development cycles.
- Smarter applications: By utilizing Haystack's powerful natural language processing tooling in combination with MongoDB Atlas Vector Search’s efficient data handling, developers can create applications that understand and process natural language more effectively. This results in applications that can provide more accurate and contextually relevant responses that resonate with user intent.
- Access to pre-trained AI models: With seamless integration of leading generative AI models from providers like OpenAI, Anthropic, Cohere, Hugging Face, and AWS Bedrock, Python developers can easily incorporate advanced AI functionalities into their projects. This means developers can quickly adopt state-of-the-art models without the need for extensive training or fine-tuning, saving time and resources.
- Flexible and scalable pipelines: Haystack's modular approach to building AI applications, through its use of components and pipelines, allows developers to create flexible and scalable solutions. With MongoDB data seamlessly flowing through these pipelines, you can easily adapt and expand your applications to meet growing demands and new challenges.
- Robust search capabilities: Atlas Vector Search transforms the way applications retrieve and interpret data, going beyond simple keyword searches. It enables applications to perform high-precision searches that return more relevant and semantically rich results. This advanced search capability is crucial for developing applications that require high levels of semantic understanding and accuracy.

By integrating MongoDB with Haystack, Python developers are equipped with a powerful toolkit that not only simplifies the AI development process but also significantly enhances the intelligence and functionality of their applications. Whether you are building chatbots, search engines, or other AI-driven applications, this integration provides the tools you need to create innovative and impactful solutions.

Get started now

Start leveraging the MongoDB and Haystack integration for your AI development. Explore our tutorial, documentation, or GitHub repository to begin building smarter, more intuitive Python projects today!
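The "beyond simple keyword searches" point above can be made concrete with a small sketch. This is illustrative only, not from the article: the index name, field path, and toy vectors are invented. Vector search ranks documents by the similarity of their embeddings to the query's embedding, commonly measured with cosine similarity, and in Atlas that ranking is expressed as a $vectorSearch aggregation stage.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy 3-dimensional "embeddings" (real embeddings have hundreds of dimensions).
query = [0.9, 0.1, 0.0]
doc_close = [0.8, 0.2, 0.1]  # semantically close to the query
doc_far = [0.0, 0.1, 0.9]    # semantically distant

# The semantically closer document scores higher, regardless of shared keywords:
assert cosine_similarity(query, doc_close) > cosine_similarity(query, doc_far)

# Shape of the corresponding Atlas Vector Search stage (index and path names invented):
vector_search_stage = {
    "$vectorSearch": {
        "index": "vector_index",
        "path": "embedding",
        "queryVector": query,
        "numCandidates": 100,
        "limit": 5,
    }
}
# With a live cluster you would run: collection.aggregate([vector_search_stage])
```

In the Haystack integration this stage is constructed for you by the document store's retrieval components; the sketch just shows what the ranking is conceptually doing underneath.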

July 8, 2024

Nokia Corteca Scales Wi-Fi Connectivity to Millions of Devices With MongoDB Atlas

Nokia’s home Wi-Fi connectivity cloud platform was launched in 2019 as the Nokia WiFi Cloud Controller (NWCC). In 2023, it was renamed and relaunched as the Corteca Home Controller, becoming part of the Corteca software suite that delivers smarter broadband for a better experience. The Corteca Home Controller can be hosted on Amazon Web Services, Google Cloud, or Microsoft Azure, and is the industry’s first platform to support three management services—device management, Wi-Fi management, and application management. Supporting TR-369 (a standardized remote device management protocol) also allows the Home Controller to work in a multi-vendor environment, managing both Nokia broadband devices and third-party broadband devices. By solving connectivity issues before the end-user detects them, and by automatically optimizing Wi-Fi performance, the Home Controller helps deliver excellent customer experiences to millions of users, 24/7. During the five years that Nokia Corteca has been a MongoDB Atlas customer, the Home Controller has successfully scaled from 500,000 devices to over 4.5 million. There are now 75 telecommunications customers of Home Controller spread across all regions of the globe.

Having the stability, efficiency, and performance to scale

Nokia Corteca's solution is end-to-end, from applications embedded in the device, through the home, and into the cloud. Algorithms assess data extracted from home networks, and performance parameters automatically adjust as needed—changing Wi-Fi channels to avoid network interference, for example—thereby ensuring zero downtime. The Home Controller processes real-time data sent from millions of devices, generating massive volumes of data. With a cloud optimization team tasked with deploying the solution across the globe to ever more customers, the Home Controller needed to store and manage its vast dataset and to onboard new telecommunications organizations more easily without incurring any downtime.
Prior to Nokia Corteca moving to MongoDB Atlas, its legacy relational database lacked stability and required both admin and application teams to manage operations.

A flexible model with time series capabilities

That's where MongoDB Atlas came in. Nokia was familiar with the MongoDB Atlas database platform, having already worked with it as part of a previous company acquisition and solution integration. As Nokia's development team had direct experience with the scalability, manageability, and ease of use offered by MongoDB Atlas, they knew it had the potential to address the Home Controller’s technical and business requirements. There was another key element: Nokia wanted to store time series data—a sequence of data points in which insights are gained by analyzing changes over time. MongoDB Atlas has the unique ability to store operational and time series data in parallel and provides robust querying capabilities on that data. Other advantages include MongoDB's flexible schema, which helps developers store data to match the application's needs and adapt as data changes over time. MongoDB Atlas also provides features such as Performance Advisor, which monitors the performance of the database and makes intelligent recommendations to optimize and improve performance and resource consumption.

Fast real-time data browsing and scalability made easy

Previously, scaling the database had been time-consuming and manual. With MongoDB Atlas, the team can easily scale up as demand increases with very little effort and no downtime. This also means it is much more straightforward to add new clients, such as large telecommunications companies. Having started with 100GB of data, the team now has more than 1.3 terabytes, and can increase the disk space in a fraction of a second, positioning the team to be able to scale with the business. As the Home Controller grows and onboards more telcos, the team anticipates a strengthening relationship with MongoDB.
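As a generic illustration of what a time series workload looks like in code (this is not Nokia's actual schema; the collection, field, and device names are all invented), the sketch below shows the options typically passed when creating a MongoDB time series collection with PyMongo, plus the shape of a single measurement document.

```python
from datetime import datetime, timezone

# Options for a MongoDB time series collection (names invented for illustration).
# With a live deployment you would run:
#   db.create_collection("wifi_metrics", timeseries=timeseries_options)
timeseries_options = {
    "timeField": "ts",         # required: the timestamp of each measurement
    "metaField": "device",     # per-device metadata, used to group measurements
    "granularity": "minutes",  # hint for how closely spaced measurements are
}

# A measurement document as it might be inserted into such a collection:
measurement = {
    "ts": datetime(2024, 7, 1, 12, 0, tzinfo=timezone.utc),
    "device": {"serial": "AB-1234", "model": "beacon"},
    "channel": 36,
    "signal_dbm": -42,
}
```

Keeping device metadata under the metaField lets the server bucket measurements from the same device together, which is what makes range queries over one device's history efficient.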
“We have a very good relationship with the MongoDB team,” said Jaisankar Gunasekaran, Head of Cloud Hosting and Operations at Nokia. “One of the main advantages is their local presence—they’re accessible, they’re friendly, and they’re experts. It makes our lives easier and lets us concentrate on our products and solutions.” To learn more about how MongoDB can help drive innovation and capture customer imaginations, check out our MongoDB for Telecommunications page.

July 2, 2024