Vanda Ackermann


Supercharge AI Data Management With Knowledge Graphs

WhyHow.AI has built and open-sourced a platform using MongoDB, enhancing how organizations leverage knowledge graphs for data management and insights. Integrated with MongoDB, this solution offers a scalable foundation with features like vector search and aggregation to support organizations in their AI journey.

Knowledge graphs address the limitations of traditional retrieval-augmented generation (RAG) systems, which can struggle to capture intricate relationships and contextual nuances in enterprise data. By embedding rules and relationships into a graph structure, knowledge graphs enable accurate and deterministic retrieval processes. This functionality extends beyond information retrieval: knowledge graphs also serve as foundational elements for enterprise memory, helping organizations maintain structured datasets that support future model training and insights.

WhyHow.AI enhances this process by offering tools designed to combine large language model (LLM) workflows with Python- and JSON-native graph management. Built on MongoDB’s robust capabilities, these tools unite structured and unstructured data with search, enabling efficient querying and insights across diverse datasets. MongoDB’s modular architecture seamlessly integrates vector retrieval, full-text search, and graph structures, making it an ideal platform for RAG and for unlocking the full potential of contextual data.

Check out our AI Learning Hub to learn more about building AI-powered apps with MongoDB.

Creating and storing knowledge graphs with WhyHow.AI and MongoDB

Creating effective knowledge graphs for RAG requires a structured approach that combines workflows from LLMs, developers, and nontechnical domain experts. Simply capturing all entities and relationships from text and relying on an LLM to organize the data can lead to a messy retrieval process that lacks utility.
Instead, WhyHow.AI advocates for a schema-constrained graph creation method, emphasizing the importance of developing a context-specific schema tailored to the user’s use case. This approach ensures that the knowledge graphs focus on the specific relationships that matter most to the user’s workflow.

Once the knowledge graphs are created, the flexibility of MongoDB’s schema design ensures that users are not confined to rigid structures. This adaptability enables seamless expansion and evolution of knowledge graphs as data and use cases develop. Organizations can rapidly iterate during early application development without being restricted by predefined schemas. In instances where additional structure is required, MongoDB supports schema enforcement, offering a balance between flexibility and data integrity.

Consider healthcare, for instance: aligning external research with patient records is crucial to delivering personalized care. Knowledge graphs bridge the gap between clinical trials, best practices, and individual patient histories. New clinical guidelines can be integrated with patient records to identify which patients would benefit most from updated treatments, ensuring that the latest practices are applied to individual care plans.

Optimizing knowledge graph storage and retrieval with MongoDB

Harnessing the full potential of knowledge graphs requires both effective creation tools and robust systems for storage and retrieval. Here’s how WhyHow.AI and MongoDB work together to optimize the management of knowledge graphs.

Storing data in MongoDB

WhyHow.AI relies on MongoDB’s document-oriented structure to organize knowledge graph data into modular, purpose-specific collections, enabling efficient and flexible queries. This approach is crucial for managing complex entity relationships and ensuring accurate provenance tracking.
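As a concrete illustration of this document-oriented layout, a subject-predicate-object triple could be stored as a single document that carries its own provenance links. This is a minimal sketch; all field names here are illustrative assumptions, not WhyHow.AI’s actual schema:

```python
# Hypothetical sketch of storing one triple as a MongoDB document, with
# provenance links back to the text chunks that support it.
# All field names are illustrative, not WhyHow.AI's actual schema.

def make_triple_doc(head, relation, tail, chunk_ids, embedding):
    """Shape a subject-predicate-object triple for insertion, e.g. with
    db.triples.insert_one(make_triple_doc(...)) via PyMongo."""
    return {
        "head": head,            # subject entity, e.g. "Aspirin"
        "relation": relation,    # predicate, e.g. "treats"
        "tail": tail,            # object entity, e.g. "Headache"
        "chunk_ids": chunk_ids,  # provenance: ids of the source text chunks
        "embedding": embedding,  # vector used for similarity search over triples
    }
```

Because the triple, its provenance pointers, and its embedding live in one document, a single read retrieves a fact together with its evidence trail.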
To support this functionality, the WhyHow.AI Knowledge Graph Studio comprises several key components:

- Workspaces separate documents, schemas, graphs, and associated data by project or domain, maintaining clarity and focus.
- Chunks are raw text segments with embeddings for similarity searches, linked to triples and documents to provide evidence and provenance.
- The graph collection stores the knowledge graph along with metadata and schema associations, all organized by workspace for centralized data management.
- Schemas define the entities, relationships, and patterns within graphs, adapting dynamically to reflect new data and keep the graph relevant.
- Nodes represent entities like people, locations, or concepts, each with unique identifiers and properties, forming the graph’s foundation.
- Triples define subject-predicate-object relationships and store embedded vectors for similarity searches, enabling reliable retrieval of relevant facts.
- Queries log user queries, including triple results and metadata, providing an immutable history for analysis and optimization.

Figure 1. WhyHow.AI platform and knowledge graph illustration.

To enhance data interoperability, MongoDB’s aggregation framework enables efficient linking across collections. For instance, retrieving the chunks associated with a specific triple can be seamlessly achieved through an aggregation pipeline, connecting workspaces, graphs, chunks, and document collections into a cohesive data flow.

Querying knowledge graphs

With this representation established, users can perform both structured and unstructured queries with the WhyHow.AI querying system. Structured queries enable the selection of specific entity types and relationships, while unstructured queries let natural language questions return related nodes, triples, and linked vector chunks. WhyHow.AI’s query engine embeds triples to enhance retrieval accuracy, bypassing traditional Text2Cypher methods.
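That triple-to-chunk lookup can be sketched as an aggregation pipeline built with MongoDB’s `$lookup` stage. Collection and field names (`triples`, `chunks`, `chunk_ids`) are assumptions for illustration, not WhyHow.AI’s actual data model:

```python
# Hypothetical sketch of the cross-collection lookup described above: fetch a
# triple, then join the chunks that serve as its evidence.
# Collection and field names are assumptions, not WhyHow.AI's actual schema.

def chunks_for_triple(triple_id):
    """Build an aggregation pipeline to run with db.triples.aggregate(...)."""
    return [
        {"$match": {"_id": triple_id}},          # the triple of interest
        {"$lookup": {
            "from": "chunks",                    # chunks collection to join
            "localField": "chunk_ids",           # provenance ids on the triple
            "foreignField": "_id",
            "as": "evidence_chunks",             # joined chunks land here
        }},
        {"$project": {
            "head": 1, "relation": 1, "tail": 1, # the fact itself
            "evidence_chunks": 1,                # plus its supporting text
        }},
    ]
```

One pipeline thus returns a fact together with the raw text that justifies it, which is the provenance guarantee the collections are designed around.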
Through a retrieval engine that embeds triples and lets users retrieve those embedded triples along with the chunks tied to them, WhyHow.AI combines the best of structured and unstructured data models and retrieval patterns. And with MongoDB’s built-in vector search, users can store and query vectorized text chunks alongside their graph and application data in a single, unified location.

Enabling scalability, portability, and aggregations

MongoDB’s horizontal scalability ensures that knowledge graphs can grow effortlessly alongside expanding datasets. Users can also easily use WhyHow.AI’s platform to create modular multiagent and multigraph workflows. They can deploy MongoDB Atlas on their preferred cloud provider or maintain control by running it in their own environments, gaining flexibility and reliability. As graph complexity increases, MongoDB’s aggregation framework facilitates diverse queries, extracting meaningful insights from multiple datasets with ease.

Providing familiarity and ease of use

MongoDB’s familiarity enables developers to apply their existing expertise without the need to learn new technologies or workflows. With WhyHow.AI and MongoDB, developers can build graphs with JSON data and Python-native APIs, which are well suited to LLM-driven workflows. The same database trusted for years in application development can now manage knowledge graphs, streamlining onboarding and accelerating development timelines.

Taking the next steps

WhyHow.AI’s knowledge graphs overcome the limitations of traditional RAG systems by structuring data into meaningful entities, relationships, and contexts. This enhances retrieval accuracy and decision-making in complex fields. Integrated with MongoDB, these capabilities are amplified through a flexible, scalable foundation featuring modular architecture, vector search, and powerful aggregation.
Together, WhyHow.AI and MongoDB help organizations unlock their data’s potential, driving insights and enabling innovative knowledge management solutions. No matter where you are in your AI journey, MongoDB can help! You can get started with your AI-powered apps by registering for MongoDB Atlas and exploring the tutorials available in our AI Learning Hub. Otherwise, head over to our quick-start guide to get started with MongoDB Atlas Vector Search today.

Want to learn more about why MongoDB is the best choice for supporting modern AI applications? Check out our on-demand webinar, “Comparing PostgreSQL vs. MongoDB: Which is Better for AI Workloads?” presented by MongoDB Field CTO Rick Houlihan.

If your company is interested in being featured in a story like this, we’d love to hear from you. Reach out to us at ai_adopters@mongodb.com.

February 13, 2025

Revolutionizing Sales with AI: Glyphic AI’s Journey with MongoDB

When connecting with customers, sales teams often struggle to understand and address the unique needs and preferences of each prospect, leading to ineffective pitches. Additionally, time-consuming admin tasks like data entry, sales tool updates, follow-up management, and maintaining personalized interactions across numerous leads can overwhelm teams, leaving less time for impactful selling.

Glyphic AI, a pioneering AI-powered sales co-pilot, addresses these challenges. By analyzing sales processes and calls, Glyphic AI helps teams streamline workflows and focus on building stronger customer relationships. Founded by former engineers from Google DeepMind and Apple, Glyphic AI leverages expertise in large language models (LLMs) to work with private and dynamic data.

“As LLM researchers, we discovered the true potential of these models lies in the sales domain, generating vast numbers of calls rich with untapped insights. Traditionally, these valuable insights were lost in digital archives, as extracting them required manually reviewing calls and making notes,” says Devang Agrawal, co-founder and chief technology officer of Glyphic AI. “Our aim became to enhance customer centricity by harnessing AI to capture and utilize conversational and historical data, transforming it into actionable intelligence for ongoing and future deals.”

Check out our AI Learning Hub to learn more about building AI-powered apps with MongoDB.

Built on MongoDB, AWS, and Anthropic, Glyphic AI automatically breaks down sales calls using established methodologies like MEDDIC. It leverages ingested sales playbooks to provide tailored strategies for different customer personas and company types. By using data sources such as Crunchbase, LinkedIn, and internal CRM information, the tool proactively surfaces relevant insights before sales teams engage with customers.
Glyphic AI employs LLMs to offer complete visibility into sales deals by understanding the full context and intent of real-time conversations. The system captures information at various points, primarily focusing on sales calls and recordings. This data is analyzed by LLMs tailored for sales tasks, which summarize content based on sales frameworks and extract specific information requested by teams. MongoDB serves as the main database for customer records, sales call data, and related metadata, while large video files are stored in AWS S3. MongoDB Atlas Search and Vector Search are integrated, providing the ability to index and query high-dimensional vectors efficiently.

Glyphic AI’s Global Search feature uses Atlas Vector Search to let users ask strategic questions and retrieve data from numerous sales calls. It matches queries with vector embeddings in MongoDB, utilizing metadata, account details, and external sources like LinkedIn and Crunchbase to identify relevant content. This content is then processed by the LLM for detailed conversational responses. Additionally, Atlas Vector Search continuously updates records, building a dynamic knowledge base that provides quick insights and proactively generates summaries enriched with data from various sources, assisting with sales calls and customer analysis.

Figure 1: How Glyphic AI transforms sales call analysis

Why Glyphic AI relies on advanced cloud solutions for efficient data management and innovation

“I used MongoDB in the first app I ever built, and ever since it has consistently met our needs, no matter the project,” says Agrawal. For Glyphic AI, MongoDB has seamlessly integrated into the company’s existing workflows. MongoDB Atlas has greatly simplified database management and analytics. The team initially implemented vector search from scratch; when MongoDB introduced Atlas Vector Search, Glyphic AI transitioned to this more streamlined and integrated solution.
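A query of the kind a Global Search-style feature issues can be sketched as an Atlas Vector Search aggregation: a semantic match over embedded call text, pre-filtered by account metadata. The index, collection, and field names below are illustrative assumptions, not Glyphic AI’s actual implementation:

```python
# Hypothetical sketch of a Global Search-style query: Atlas $vectorSearch over
# embedded sales-call chunks, pre-filtered by account metadata.
# Index, collection, and field names are illustrative assumptions.

def global_search_pipeline(query_vector, account_id, k=5):
    """Build an aggregation pipeline to run with db.call_chunks.aggregate(...)."""
    return [
        {"$vectorSearch": {
            "index": "call_chunk_embeddings",      # Atlas Vector Search index name
            "path": "embedding",                   # field holding the stored vectors
            "queryVector": query_vector,           # embedding of the user's question
            "numCandidates": 20 * k,               # candidates scanned before top-k
            "limit": k,                            # number of results returned
            "filter": {"account_id": account_id},  # metadata pre-filter
        }},
        {"$project": {
            "text": 1,                             # matched call excerpt
            "call_id": 1,                          # which call it came from
            "score": {"$meta": "vectorSearchScore"},
        }},
    ]
```

The matched excerpts would then be handed to the LLM as context for a conversational answer, which is the retrieval-then-generate pattern the article describes.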
“If MongoDB's Atlas Vector Search had been available back then, we would have adopted it immediately for its ease of testing and deployment,” Agrawal reflects. While Agrawal appreciates the benefits of building from scratch, he acknowledges that maintaining complex systems such as databases, or developing LLM models, becomes increasingly challenging over time. The AI feature enabling natural language queries in MongoDB Compass has been particularly beneficial for Glyphic AI, especially when extracting insights not yet available in dashboards or analyzing specific database elements.

In the fast-paced AI industry, time to market is critical. MongoDB Atlas, as a cloud solution, offers Glyphic AI the flexibility and scalability needed to quickly test, deploy, and refine its applications. The integration of features like Atlas Vector Search has enabled the team to focus on innovation without being bogged down by infrastructure complexities, speeding up the development of AI-powered features.

As a small, agile team, Glyphic AI leverages MongoDB’s document model, which aligns well with object-oriented programming principles. This allows for rapid development and iteration of product features, enabling the company to stay competitive in the evolving generative AI market. By simplifying data management and reducing friction, MongoDB’s document model helps Glyphic AI maintain agility and focus on delivering impactful solutions.

With vector search embedded in MongoDB, the team welcomed a unified language and system. Keeping all data, including production records and vectors, in one place has greatly simplified operations. Before adopting MongoDB, the team struggled with synchronizing data across multiple systems and managing deletions to avoid inconsistencies. MongoDB’s ACID compliance has made this process far more straightforward, ensuring reliable transactions and maintaining data integrity.
By consolidating production records and vectors into MongoDB, the team achieved the simplicity it needed, eliminating the complexities of managing disparate systems.

Glyphic AI’s next step: Refining LLMs for enhanced sales insights and strategic decision-making

“Over the next year, our goal is to refine our LLMs specifically for the sales context to deliver more strategic insights. We've built a strong conversational intelligence product that enhances efficiency for frontline sales reps and managers. Now, we're focused on aggregating conversation data to provide strategy teams and CROs with valuable insights into their teams' performance,” says Agrawal.

As sales analysis evolves to become more strategic, significant technical challenges will arise, especially when scaling from summarizing a handful of calls to analyzing thousands in search of complex patterns. Current LLMs are often limited in their ability to process large amounts of sales call data, which means ongoing adjustments and improvements will be necessary to keep up with new developments. Additionally, curating effective datasets, including synthetic and openly available sales data, will be a key hurdle in training these models to deliver meaningful insights.

By using MongoDB, Glyphic AI will be able to accelerate innovation thanks to the reduced need for time-consuming maintenance and management of complex systems. This will allow the team to focus on essential tasks like hiring skilled talent, driving innovation, and improving the end-user experience. As a result, Glyphic AI will be able to prioritize core objectives and continue to develop and refine its products effectively.

As Glyphic AI fine-tunes its LLMs for the sales context, the team will embrace retrieval-augmented generation (RAG) to push the boundaries of AI-driven insights. Leveraging Atlas Vector Search will enable Glyphic AI to handle large datasets more efficiently, transforming raw data into actionable sales strategies.
This will enhance its AI’s ability to understand and predict sales trends with greater precision, setting the stage for a new level of sales intelligence and positioning Glyphic AI at the forefront of AI-driven sales solutions.

As part of the MongoDB AI Innovators Program, Glyphic AI’s engineers gain direct access to MongoDB’s product management team, facilitating feedback exchange and receiving the latest updates and best practices. This collaboration allows them to concentrate on developing their LLM models and accelerating application development. Additionally, the provision of MongoDB Atlas credits helps reduce costs associated with experimenting with new features.

Get started with your AI-powered apps by registering for MongoDB Atlas and exploring the tutorials in our AI resources center. If you're ready to dive into Atlas Vector Search, head over to the quick-start guide to kick off your journey. Additionally, if your company is interested in being featured in a story like this, we'd love to hear from you. Reach out to us at ai_adopters@mongodb.com.

September 24, 2024