Using MongoDB Skill Scanner to Build Better Training Programs
Technology leaders know that transformation is about more than just adopting modern technologies like MongoDB. The entire organization has to rally behind change, which is no easy task. The skills that modern development teams need are evolving faster than ever, and hiring to fill skills gaps is too slow and expensive for many organizations. So it's imperative to plan how to bring people along on the modernization journey and proactively upskill them on the technologies the business is betting on. Because what happens if you choose MongoDB, but your developers don't know how to use it?

CIOs know that training programs are easier said than done. EY reported that 30% of CIOs acknowledge that their training programs are ineffective, and that they're struggling to retain talent because of it. These leaders come to us for help building and executing their MongoDB training programs, and seek advice on two extremely common yet critical challenges:

How do we get away from the less effective one-size-fits-all approach?

How do we measure the ROI of our training program and connect it to business impact?

How we use MongoDB Skill Scanner to overcome training challenges

Our Professional Services team uses a tool called MongoDB Skill Scanner to address both of these challenges. The tool helps us deliver three benefits to customers building a training program:

Improved MongoDB proficiency: Teams can use Skill Scanner to quickly and easily assess the MongoDB skill gaps of their team members and gain a comprehensive understanding of their team's MongoDB skills baseline.

Increased productivity and accuracy: When team members have a comprehensive understanding of MongoDB, they are able to work more quickly and accurately on projects, leading to increased productivity and a higher quality of work.

Time and money saved through targeted training: Using Skill Scanner, customers can avoid wasting time and money on trial-and-error learning. Instead, they can focus on improving their skills in a targeted, efficient way with right-sized training plans.

By leveraging this data, our customers' engineers can engage in the right training at the right time, targeted for their job role and specific skill shortages. When a training program is built this way, engineers maximize their knowledge retention and minimize time away from their projects.

Skill Scanner includes three role-based assessments, one each for developers, database administrators, and DevOps engineers. Through a series of multiple-choice questions, Skill Scanner gives customers a clear understanding of their level of expertise across a set of technical skills that are critical for success in their role. After submitting the assessment, engineers get results in each skill area indicating whether they are beginner, intermediate, or advanced.

Why data-driven training programs matter

We've learned that it's not enough to tell teams to go watch training videos or webinars on their own, or to place everyone in the same one-size-fits-all program. Skills gaps vary from team to team and from individual to individual. A one-size-fits-all program may not address individual learners' needs, wasting time and making it harder for them to acquire new skills.

By using Skill Scanner, we're able to interpret this data to help determine which training courses your team should take. But we don't only capture this data before training; we run Skill Scanner again after training programs are completed to see where immediate improvements have been made. This helps technology leaders prove the impact and ROI of their training, and gives them confidence that their teams are ready to be successful with MongoDB.
Developing a Precision Learning Program

To go even further, our team can work with you to build a Precision Learning Program (PLP), where we use Skill Scanner data to build learning schedules that are unique to each individual. These schedules include a variety of short, blended learning events such as classes, technical workshops, self-paced exercises, and project coaching. We've seen the PLP lead to higher knowledge retention and, of course, measurable project results. A customer who recently concluded their PLP saw a 43% increase in knowledge retention.

Getting started building a personalized training program

Skill gaps aren't a novel problem for IT leaders. But with new digital courses, training, and technologies, the resources to close these gaps are at your fingertips. Skill Scanner and the Precision Learning Program have been specifically designed to empower teams by offering targeted training that enhances their understanding of MongoDB. These short training events are carefully crafted to close skill gaps without compromising developer productivity. We've seen a variety of customers use these tools to address their teams' individual needs, from upskilling new hires, to starting projects with new MongoDB products, to migrating to MongoDB Atlas. They also save your business the hours developers would otherwise waste searching for answers (and developers don't want to spend their time that way, either).

"We need help getting from point A to point B and feel MongoDB is uniquely positioned to help" — CTO at large insurance firm

If you're interested in trying out MongoDB Skill Scanner or want to explore the MongoDB Precision Learning Program further, you can reach out to your account representative or contact us directly.
Dissecting Open Banking with MongoDB: Technical Challenges and Solutions
Thank you to Ainhoa Múgica for her contributions to this post.

Unleashing a disruptive wave in the banking industry, open banking (or open finance), as the term indicates, has compelled financial institutions (banks, insurers, fintechs, corporates, and even government bodies) to embrace a new era of transparency, collaboration, and innovation. This paradigm shift requires banks to openly share customer data with third-party providers (TPPs), driving enhanced customer experiences and fostering the development of innovative fintech solutions by combining 'best-of-breed' products and services. As of 2020, 24.7 million individuals worldwide used open banking services, a number forecast to reach 132.2 million by 2024. This rising trend fuels competition, spurs innovation, and fosters partnerships between traditional banks and agile fintech companies.

In this transformative landscape, MongoDB, a leading developer data platform, plays a vital role in supporting open banking by providing a secure, scalable, and flexible infrastructure for managing and protecting shared customer data. By harnessing MongoDB's technology, financial institutions can lower costs, improve customer experiences, and mitigate the potential risks associated with widespread sharing of customer data through strict regulatory compliance.

Figure 1: An Example Open Banking Architecture

The essence of open banking/finance is leveraging common data exchange protocols to share financial data and services with third parties. In this blog, we will dive into the technical challenges and solutions of open banking from a data and data services perspective, and explore how MongoDB empowers financial institutions to overcome these obstacles and unlock the full potential of this open ecosystem.

Dynamic environments and standards

As open banking standards continue to evolve, financial institutions must remain adaptable to meet changing regulations and industry demands.
Traditional relational databases often struggle to keep pace with the dynamic requirements of open banking due to rigid schemas that are difficult to change and manage over time. In countries without standardized open banking frameworks, banks and third-party providers face the challenge of developing multiple versions of APIs to integrate with different institutions, creating complexity and hindering interoperability. Fortunately, open banking standards and guidelines (e.g., in Europe, Singapore, Indonesia, Hong Kong, and Australia) have generally required or recommended that open APIs be RESTful and support the JSON data format, which creates a basis for common data exchange.

MongoDB addresses these challenges by offering a flexible developer data platform that natively supports the JSON data format, simplifies data modeling, and enables flexible schema changes for developers. With features like the MongoDB Data API and GraphQL API, developers can reduce development and maintenance effort by easily exposing data in a low-code manner. The Stable API feature ensures compatibility during database upgrades, preventing code breaks and providing a seamless transition. Additionally, MongoDB provides productivity-boosting features like full-text search, data visualization, data federation, mobile database synchronization, and other app services, enabling developers to accelerate time-to-market. With these capabilities, financial institutions and third-party providers can navigate the changing open banking landscape more effectively, foster collaboration, and deliver innovative solutions to customers.

One client that leverages MongoDB's native JSON data management and flexibility is NatWest, a major retail and commercial bank in the United Kingdom based in London, England. The bank has grown from zero to 900 million API calls per month within a few years as open banking uptake grows, and volume is expected to grow tenfold in the coming years.
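As a minimal sketch of the schema flexibility described above (the collection layout, field names, and the tiny matcher are illustrative assumptions, not a real open banking API schema), two differently shaped account documents can coexist in one collection and be served by a single filter:

```python
# Two "account information" documents with different shapes can live in
# the same MongoDB collection: no ALTER TABLE, no NULL-padded columns.
current_account = {
    "accountId": "acc-1001",
    "type": "current",
    "currency": "GBP",
    "balance": {"amount": 2500.75, "creditDebitIndicator": "Credit"},
}

credit_card_account = {
    "accountId": "acc-2002",
    "type": "creditCard",
    "currency": "GBP",
    "balance": {"amount": -430.10, "creditDebitIndicator": "Debit"},
    # Card-only fields are simply embedded where they belong.
    "card": {"maskedPan": "************1234", "expiry": "2026-09"},
}

# A PyMongo-style filter works across both shapes; documents lacking the
# optional "card" sub-document are naturally excluded.
card_accounts_filter = {"type": "creditCard", "card": {"$exists": True}}

def matches(doc, flt):
    """Tiny local stand-in for server-side matching, for illustration only."""
    for key, cond in flt.items():
        if isinstance(cond, dict) and "$exists" in cond:
            if (key in doc) != cond["$exists"]:
                return False
        elif doc.get(key) != cond:
            return False
    return True
```

On a real deployment the same filter document would be passed to `collection.find(card_accounts_filter)`; new account types simply add fields without schema migrations.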
At a MongoDB event on 15 Nov 2022, Jonathan Haggarty, NatWest's Head of "Bank of APIs" Technology (an API ecosystem that brings the retail bank's services to partners), shared in his presentation, Driving Customer Value using API Data, that NatWest's growing API ecosystem lets it "push a bunch of JSON data into MongoDB," which makes it "easy to go from simple to quite complex information" and also makes it easier to obfuscate user details through data masking for customer privacy. NatWest can surface customer data insights for partners via its API ecosystem, for example "where customers are on the e-commerce spectrum," the "best time [for retailers] to push discounts," as well as insights on "most valuable customers," with the data used for problem-solving, analytics and insight, and reporting.

Performance

In the dynamic landscape of open banking, meeting unpredictable demands for performance, scalability, and availability is crucial. The efficiency of applications and the overall customer experience heavily rely on the responsiveness of APIs. However, building an open banking platform becomes intricate when accommodating third-party providers with undisclosed business and technical requirements. Without careful management, this can lead to unforeseen performance issues and increased costs.

Open banking demands high API performance under all kinds of workload volumes. OBIE recommends an average TTLB (time to last byte) of 750 ms per endpoint response for all payment initiations (except file payments) and account information APIs. Compliance with regulatory service level agreements (SLAs) in certain jurisdictions adds further complexity. Legacy architectures and databases often struggle to meet these demanding criteria, necessitating extensive changes to ensure scalability and optimal performance. That's where MongoDB comes into play.
MongoDB is purpose-built to deliver exceptional performance with its WiredTiger storage engine and compression capabilities. MongoDB Atlas improves performance further with intelligent index and schema suggestions, automatic data tiering, and workload isolation for analytics. One prime illustration of these capabilities comes from Temenos, a renowned financial services application provider, which achieved remarkable transaction-processing performance and efficiency by leveraging MongoDB Atlas. In a recent benchmark with MongoDB Atlas and Microsoft Azure, Temenos successfully processed 200 million embedded finance loans and 100 million retail accounts at a record-breaking 150,000 transactions per second. This showcases the power and scalability of MongoDB, empowering financial institutions to tackle the challenges posed by open banking while meeting the industry's ever-evolving demands for performance, scalability, and availability.

Scalability

Building a platform to serve TPPs, who may not disclose their business usage or technical and performance requirements, can introduce unpredictable performance and cost issues if not managed carefully. For instance, a bank in Singapore found that its open APIs experienced peak loads and crashes every Wednesday. After investigation, the bank discovered that one of the TPPs ran a promotional campaign every Wednesday, resulting in a surge of API calls that overwhelmed the bank's infrastructure. Beyond meeting the performance requirements of a known volume of transactions, a solution that can scale under unpredictable workloads is critical. MongoDB's flexible architecture and scalability features address these concerns effectively. With its distributed, document-based data model, MongoDB allows for seamless scaling both vertically and horizontally.
By leveraging sharding, data can be distributed across multiple nodes, ensuring efficient resource utilization and enabling the system to handle high transaction volumes without compromising performance. MongoDB's auto-sharding capability enables dynamic scaling as the workload grows, giving financial institutions the flexibility to adapt to changing demands and ensuring a smooth and scalable open banking infrastructure.

Availability

In the realm of open banking, availability becomes a critical challenge. With increased reliance on banking services by third-party providers, ensuring consistent availability becomes more complex. Previously, banks could bring down certain services during off-peak hours for maintenance. With TPPs offering 24x7 experiences, however, any downtime is unacceptable. This places greater pressure on banks to maintain constant availability for open API services, even during planned maintenance windows or unforeseen events.

MongoDB Atlas, the fully managed global cloud database service, addresses these availability challenges effectively. With its multi-node cluster and multi-cloud DBaaS capabilities, MongoDB Atlas ensures high availability and fault tolerance. It offers the flexibility to run on multiple leading cloud providers, allowing banks to minimize concentration risk and achieve higher availability through a cluster distributed across different cloud platforms. The robust replication and failover mechanisms provided by MongoDB Atlas guarantee uninterrupted service and enable financial institutions to provide reliable, always-available open banking APIs to their customers and TPPs.

Security and privacy

Data security and consent management are paramount concerns for banks participating in open banking. The exposure of authentication and authorization mechanisms to third-party providers raises security concerns and introduces technical complexities regarding data protection.
Banks require fine-grained access control and encryption mechanisms to safeguard shared data, including managing data-sharing consent at a granular level. Furthermore, banks must navigate data privacy laws like the General Data Protection Regulation (GDPR), which impose strict requirements distinct from traditional banking regulations.

MongoDB offers a range of solutions to address these security and privacy challenges. MongoDB's comprehensive encryption features cover data-at-rest and data-in-transit, protecting data throughout its lifecycle. Its flexible schema allows financial institutions to capture diverse requirements for managing data-sharing consent and to unify user consent from different countries into a single data store, simplifying compliance with complex data privacy laws. Additionally, MongoDB's geo-sharding capabilities help satisfy data residency laws by keeping relevant data and consent information in the closest cloud data center, while providing optimal response times for data access.

To enhance data privacy further, MongoDB offers field-level encryption, enabling symmetric encryption at the field level to protect sensitive data (e.g., personally identifiable information) even when shared with TPPs, with randomized encryption of fields adding an additional layer of security. MongoDB's Queryable Encryption goes further still: it keeps sensitive fields encrypted while still supporting query operations on the encrypted data, and it is designed to defend against cryptanalysis, ensuring that customer data remains protected and confidential within the open banking ecosystem.
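The shape of a Queryable Encryption configuration, declaring which shared fields stay encrypted and which remain queryable, can be sketched roughly as below. The namespace and field paths are hypothetical, and exact option names can vary across driver versions, so treat this as an illustration rather than a drop-in config:

```python
# Sketch of an encrypted-fields map for Queryable Encryption.
# Namespace ("<database>.<collection>") and paths are illustrative.
encrypted_fields_map = {
    "openbanking.consents": {
        "fields": [
            {
                "path": "customer.nationalId",   # PII shared with TPPs
                "bsonType": "string",
                "keyId": None,  # None: let the driver provision a data key
                # Declaring a query type keeps equality queries possible
                # on the ciphertext.
                "queries": {"queryType": "equality"},
            },
            {
                "path": "customer.dateOfBirth",
                "bsonType": "date",
                "keyId": None,
                # No "queries" entry: encrypted, but not queryable.
            },
        ]
    }
}

# Which paths remain queryable under this (hypothetical) configuration?
queryable_paths = [
    f["path"]
    for f in encrypted_fields_map["openbanking.consents"]["fields"]
    if "queries" in f
]
```

In a real deployment a map like this would be handed to the driver's automatic encryption options so that reads and writes on those paths are encrypted and decrypted transparently.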
Activity monitoring

With the numerous APIs offered by banks in the open banking ecosystem, activity monitoring and troubleshooting become critical to maintaining a robust and secure infrastructure. MongoDB simplifies activity monitoring through its monitoring tools and auditing capabilities. Administrators and users can track system activity at a granular level, monitoring database system and application events.

MongoDB Atlas also provides Administration APIs, which can be used to programmatically manage the Atlas service. For example, one can use the Atlas Administration API to create database deployments, add users to those deployments, monitor those deployments, and more. These APIs help automate CI/CD pipelines as well as monitoring of the data platform, freeing developers and administrators from this mundane effort to focus on generating more business value. Performance monitoring tools, including the performance advisor, help gauge and optimize system performance, ensuring that APIs deliver exceptional user experiences.

Figure 2: Activity Monitoring on MongoDB Atlas

MongoDB Atlas Charts, an integrated feature of MongoDB Atlas, offers analytics and visualization capabilities. Financial institutions can create business intelligence dashboards using Atlas Charts, eliminating the need for expensive licensing associated with traditional business intelligence tools and keeping costs down as more TPPs utilize the APIs. With Atlas Charts, financial institutions can offer comprehensive business telemetry data to TPPs, such as the number of insurance quotations, policy transactions, API call volumes, and performance metrics. These insights empower financial institutions to make data-driven decisions, improve operational efficiency, and optimize the customer experience in the open banking ecosystem.
Figure 3: Atlas Charts Sample Dashboard

Real-timeliness

Open banking introduces new challenges for financial institutions as they strive to serve and scale amidst unpredictable workloads from TPPs. While static content poses fewer difficulties, APIs requiring real-time updates or continuous streaming, such as dynamic account balances or ESG-adjusted credit scores, demand near-real-time data delivery. To let applications react immediately to changes as they occur, organizations can leverage MongoDB Change Streams, which are built on the aggregation framework and can react to data changes in a single collection, a database, or even an entire deployment. This further enhances MongoDB's real-time data and event processing and analytics capabilities. MongoDB also offers multiple mechanisms to support data streaming, including a Kafka connector for event-driven architectures and a Spark connector for streaming with Spark. These solutions empower financial institutions to meet the real-time data needs of their open banking partners, enabling seamless integration and real-time data delivery for enhanced customer experiences.

Conclusion

MongoDB's technical capabilities position it as a key enabler for financial institutions embarking on their open banking journey. From managing dynamic environments and accommodating unpredictable workloads to ensuring scalability, availability, security, and privacy, MongoDB provides a comprehensive set of tools and features to address the challenges of open banking effectively. With MongoDB as the underlying infrastructure, financial institutions can navigate the ever-evolving open banking landscape with confidence, delivering innovative solutions and driving the future of banking. Embracing MongoDB empowers financial institutions to unlock the full potential of open banking and provide exceptional customer experiences in this era of collaboration and digital transformation.
If you would like to learn more about how you can leverage MongoDB for your open banking infrastructure, take a look at the resources below:

Open banking panel discussion: future-proof your bank in a world of changing data and API standards with MongoDB, Celent, Icon Solutions, and AWS

How a data mesh facilitates open banking

Financial services hub
The Guide to openEHR Schema Modeling with MongoDB
Disclosure: This article features valuable insights from MongoDB's experts, customers, or partners to further an understanding of MongoDB in specific industry use cases. Please note that while MongoDB does not validate the accuracy of the text and statements, this resource aims to provide practical knowledge for your reference.

The openEHR specification is a widely used standard for storing and managing electronic health records (EHRs). It offers a structured way of organizing clinical data that makes it easy to query and analyze. However, implementing the schema modeling and querying of openEHR data can present unique challenges. In this blog post, we will explore the intricacies of openEHR schema modeling, some of its challenges, and potential solutions that we can implement using MongoDB.

Understanding openEHR specification and interoperability standards

Before diving into the complexities of openEHR schema modeling and querying, let's first understand the openEHR specification and how it relates to other interoperability standards such as HL7 FHIR. These standards work together to enable seamless healthcare information exchange across various systems and applications. A clear understanding of these standards will provide a solid foundation for addressing the challenges we'll encounter.

openEHR (open Electronic Health Record) is an open-source standard for the representation and management of electronic health records (EHRs). It is designed to provide a flexible and interoperable framework for the collection, storage, retrieval, and exchange of health data, regardless of the system or application used to generate or consume it. Key building blocks of openEHR include:

Archetypes: Archetypes are structured, reusable models that define the content and structure of clinical information. An example is a Vital Signs Archetype, which defines the structure and constraints for capturing vital signs measurements.
Properties may include elements such as temperature, heart rate, blood pressure, and oxygen saturation.

Templates: Templates are derived from archetypes and define specific subsets of clinical information. An example is an Adult Vital Signs Template, derived from the Vital Signs Archetype and customized for adult patients. It includes a subset of vital sign elements specifically relevant to adult patients, such as temperature, heart rate, and blood pressure.

Compositions: A composition is an instance of a template that contains actual clinical data. An example is a Patient Encounter Composition, which represents a patient encounter containing various clinical measurements and data. Properties include sections for patient demographics, symptoms, diagnoses, procedures, medications, and vital signs. Within the composition, the vital signs section would follow the structure defined by the Adult Vital Signs Template, containing actual vital signs measurements for a specific encounter.

Figure 1: openEHR - Archetype & template

openEHR & HL7 FHIR

While openEHR defines an "information model" for modeling and persisting data in EMR systems, FHIR is an open-source standard for healthcare information exchange across EMRs and other systems. The diagram below can help you visualize how these standards and technologies work together in healthcare systems.

Figure 2: openEHR, FHIR, & HL7

openEHR schema modeling challenges

openEHR schema modeling poses several challenges due to its complex hierarchical structure and the need to handle diverse data types. Querying this data can also be difficult due to the complex relationships between data elements. Many end-user queries operate at the composition level, and each composition is made up of hundreds of fields. This is typically worked around by creating multiple sets of indexes, often resulting in performance bottlenecks.
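To make the nesting concrete, here is a much-simplified, hypothetical fragment of a composition document; the field names are illustrative and do not follow the openEHR reference model exactly:

```python
# A trimmed, illustrative composition document: one blood-pressure
# observation inside one patient encounter.
composition = {
    "compositionId": "enc-2023-0001",
    "patientId": "pat-42",
    "content": [
        {
            "archetype": "openEHR-EHR-OBSERVATION.blood_pressure.v2",
            "data": {
                "events": [
                    {
                        "time": "2023-05-01T09:30:00Z",
                        "items": [
                            {"name": "systolic",
                             "value": {"magnitude": 152, "unit": "mm[Hg]"}},
                            {"name": "diastolic",
                             "value": {"magnitude": 95, "unit": "mm[Hg]"}},
                        ],
                    }
                ]
            },
        }
    ],
}

# Even this trimmed example buries a single reading several levels deep:
systolic = composition["content"][0]["data"]["events"][0]["items"][0]
depth_path = "content.0.data.events.0.items.0.value.magnitude"
```

Real compositions carry hundreds of such fields across variable nesting depths, which is exactly why indexing and querying them directly becomes painful.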
Let's examine some typical query patterns and how they pose challenges for schema modeling and querying:

Complex Hierarchical Structure: The openEHR specification is built on a complex hierarchical structure that represents various healthcare concepts and their relationships. Translating this structure into a database schema can be challenging, as traditional relational databases may struggle to handle the dynamic and nested nature of openEHR data. Finding an efficient and flexible way to model this complex structure is essential for ensuring accurate representation and easy retrieval of data.

Data Versioning and Evolution: openEHR supports versioning and evolution, allowing changes and updates to healthcare records over time. Modeling and querying evolving data can be complex, as it requires maintaining the history of changes and accommodating different versions of the schema. Ensuring data consistency, efficient versioning, and the ability to query historical data are critical considerations in openEHR schema modeling.

Performance and Scalability: Healthcare systems generate a vast amount of data, and efficient querying of openEHR records is crucial for timely analysis and decision-making. Designing a schema that allows for fast and scalable querying is a challenge, particularly with large datasets and complex query patterns. Optimizing query performance, indexing strategies, and data partitioning techniques is essential for a responsive and scalable system.

Query Patterns: openEHR data is queried based on various patterns, such as retrieving patient records, searching for specific diagnoses, or aggregating data for statistical analysis. Each query pattern may have different performance requirements and may involve traversing complex relationships within the hierarchical structure.
Designing an efficient schema that can handle these query patterns and provide fast and accurate results is a significant challenge in openEHR schema modeling.

Learn more about how MongoDB works with any healthcare data standard in our whitepaper, What is Radical Interoperability.

Typical openEHR schema model

As the hierarchical model below shows, the complexity of the openEHR specification can make it challenging to model and query clinical data. In the subsequent sections, we will explore potential solutions for openEHR schema modeling, including the Attribute Pattern and the Flat Hierarchy Pattern.

Figure 3: Part of Archetype data captured in JSON format

Archetype query language

Archetype Query Language (AQL) is a query language specifically designed for querying clinical data stored in openEHR-based electronic health record systems. It provides a standardized and powerful way to retrieve specific clinical information from structured data using archetypes and templates. AQL enables clinicians, researchers, and developers to express complex queries, filter data based on clinical criteria, and retrieve meaningful information for analysis and decision support. For example, to "get the latest 5 abnormal blood pressure values that were recorded in a health encounter for a specific patient," you would write an AQL query that selects blood pressure observations for that patient, filters on abnormal values, orders by time, and limits the result to five entries.

This highly nested and hierarchical schema model, while a flexible and extensible approach to representing clinical data, presents unique challenges in storage, retrieval, and performance. Even a small volume of 250k documents in this format requires a significant amount of storage space to handle. In addition, given the variable nesting hierarchy, typical indexing patterns are highly inefficient. We will explore possible solution options using MongoDB schema modeling patterns.
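As a rough PyMongo-style illustration of that query intent run against the nested model (the field paths and the "abnormal" threshold of 140 mm[Hg] are assumptions, not part of the openEHR spec), the filter alone already needs $elemMatch across several levels, and each dotted path would need its own index support to perform well:

```python
# Hypothetical filter against the deeply nested composition model for:
# "latest 5 abnormal blood pressure values for a specific patient".
nested_filter = {
    "patientId": "pat-42",
    "content": {
        "$elemMatch": {
            "archetype": "openEHR-EHR-OBSERVATION.blood_pressure.v2",
            "data.events.items": {
                "$elemMatch": {
                    "name": "systolic",
                    # Assumed "abnormal" threshold for illustration.
                    "value.magnitude": {"$gte": 140},
                }
            },
        }
    },
}
sort_spec = [("content.data.events.time", -1)]  # newest first
limit = 5
# e.g. collection.find(nested_filter).sort(sort_spec).limit(limit)
```

Every dotted path here varies with the archetype in question, so a production system ends up multiplying indexes per archetype, which is the bottleneck the following patterns address.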
Addressing openEHR schema modeling challenges with MongoDB

MongoDB provides flexible and powerful features that can help address the challenges of openEHR schema modeling and querying.

Attribute pattern with standard index

One possible solution is the attribute pattern modeling style, which simplifies the schema model and makes it more predictable. This approach allows for efficient indexing, enabling fast retrieval of data. Our example query, "get the latest 5 abnormal blood pressure values that were recorded in a health encounter for a specific patient," reduces to a simple indexed find on the pivoted attribute fields.

Evaluated against a collection of 5 million documents, this query is highly efficient. Storage is also significantly improved, taking 833 MB of compressed storage space for the 5 million documents, with efficient index-based retrieval. However, the index size for this pattern is significantly large, about 3.5 GB for the 5 million documents, which may be a blocker for larger data sets. Can we improve on this? What are our options?

Flat hierarchy model with wildcard index

An alternative approach is to leverage a flattened document model with a wildcard index, representing the same openEHR schema with each measurement promoted to its own field. By using a flattened document data model, the complex hierarchical structure of openEHR schemas can be simplified for storage and retrieval efficiency. While this simplifies the schema model, there are a few considerations: the dimensions need to move to the application or configuration layer, for example the fact that body temperature is measured in Celsius. The query on this flattened document model is correspondingly simpler, and we make it efficient by creating a wildcard index over the measurement fields.
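The two patterns can be sketched as follows. The field names, the pivoted attrs layout, and the index definitions are illustrative assumptions rather than the exact schemas used in the benchmark:

```python
# 1) Attribute pattern: measurements pivoted into a uniform key/value
#    array, so one compound index on "attrs.k"/"attrs.v" covers them all.
attribute_doc = {
    "patientId": "pat-42",
    "recordedAt": "2023-05-01T09:30:00Z",
    "attrs": [
        {"k": "systolic", "v": 152, "u": "mm[Hg]"},
        {"k": "diastolic", "v": 95, "u": "mm[Hg]"},
    ],
}
attribute_query = {
    "patientId": "pat-42",
    "attrs": {"$elemMatch": {"k": "systolic", "v": {"$gte": 140}}},
}
attribute_index = [("patientId", 1), ("attrs.k", 1), ("attrs.v", 1)]

# 2) Flat hierarchy: one top-level field per measurement; a wildcard
#    index under "obs.$**" indexes whatever measurement keys appear.
flat_doc = {
    "patientId": "pat-42",
    "recordedAt": "2023-05-01T09:30:00Z",
    "obs": {"systolic": 152, "diastolic": 95},  # units live in app config
}
flat_query = {"patientId": "pat-42", "obs.systolic": {"$gte": 140}}
wildcard_index = {"obs.$**": 1}
# e.g. collection.create_index(wildcard_index)
#      collection.find(flat_query).sort("recordedAt", -1).limit(5)
```

Note how both queries are one level deep regardless of how many measurement types exist, which is what makes the indexing tractable compared with the nested model.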
With such an index in place, the query is significantly more efficient, as the explain plan confirms. Comparing storage size with the openEHR standard spec, the flat hierarchy model takes only 839 MB of compressed storage for 5 million documents. The wildcard index is also far lighter than the attribute pattern's index: about 705 MB for the same volume of documents. Can we improve upon this further?

Flat hierarchy model with Atlas Search

MongoDB Atlas Search brings the power of Apache Lucene to MQL, significantly simplifying indexing and search. You start by creating an Atlas Search index on the flattened collection; Atlas takes care of the various steps leading up to making a Lucene search index available. Once the index is in place, you can query using the $search operator, expressing the same query we constructed in the earlier solution options. The data storage is identical to the flat hierarchy schema, with only the index built using Atlas Search. The search index size is similar to the wildcard index, but the search index provides significantly greater functionality and capabilities. Keep in mind that this capability is only available in the cloud with MongoDB Atlas. We have seen multiple solution options as detailed above; you can choose the one that suits your application and infrastructure requirements.

Design validation

The typical process of evaluating possible schema models and their efficiency requires multiple iterations, including NFR validation at production scale of data volume. This helps us to:

Better understand the fit of MongoDB to the particular use case

Understand MongoDB sizing expectations

Demonstrate MongoDB performance

Understand the performance of specific queries

Fine-tune the MongoDB schema for our needs

Current tools only replicate a sample of data or generate random data, and are not suitable for these purposes, as the indexes get skewed and query performance is not close to real-world behavior.
Completely random data also does not provide an accurate view of the MongoDB sizing that will be required. Test data generator The test data generator and performance testing tools are solution accelerators from PeerIslands that help customers generate large volumes of customizable, close-to-real-world test data for specific customer schemas. We have used the test data generator to generate a 5-million-document data set for both the attribute and flat hierarchy design patterns. The test data generator takes a configuration file and quickly generates large volumes of data, as shown for the flat hierarchy schema model below. Generating test data for the original openEHR spec requires a more complex configuration, such as the one below. Results and comparison We generated 5 million data items for each type of design pattern (attribute and flat hierarchy). For the base openEHR spec, we generated 250k items, given the size and time requirements. The figure below provides a comparison between the data and index sizes for each design approach. In addition, the data and index sizes for Atlas Search are shown below for a 214k data set. The attribute pattern performs significantly better than the base openEHR spec, but it is limited by its index size. The flat hierarchy data model with a wildcard index performs the best overall. If you can use the cloud, you also have the option of Atlas Search. While the overall index size is no better than the wildcard index on the flat hierarchy collection, the Lucene search index provides a significantly expanded feature set for query and retrieval. Final thoughts The MongoDB document data model provides a powerful and intuitive approach to structuring and interacting with healthcare data (such as openEHR), which is often complex and variable. It closely aligns with how you think and code, allowing you to store and retrieve data of any shape and form.
The powerful query engine and indexing capabilities further enhance its versatility, enabling you to develop complex query patterns and optimize performance for your specific application requirements. Choosing the appropriate modeling approach depends on the specific requirements of the application, query patterns, and performance considerations. Both the attribute pattern and the flat hierarchy models offer viable solutions for openEHR data storage in MongoDB, providing flexibility and performance optimizations tailored to different use cases. Additionally, MongoDB Atlas Search introduces powerful search capabilities for enhanced query and retrieval functionality. While we looked into schema modeling and querying solutions for openEHR data in MongoDB, there are other topics of interest when developing production-scale applications and environments where you would like both engineers and operations teams to be more productive. We will be exploring the following topics in a future blog:
- Simplifying openEHR queries: a DSL-based approach to convert existing AQL to MQL, and using generative AI to build MQL from natural language prompts
- Strategies for organizing openEHR data, such as multi-tenancy by clinic and horizontal scaling using sharding
- Generating production-scale data volumes persisted in a sharded MongoDB cluster
- Running NFR validation on dedicated production-grade infrastructure and comparing the performance of the various approaches discussed
We hope this comprehensive guide has provided valuable insights into openEHR schema modeling and querying challenges, as well as potential solutions using MongoDB. Learn more about how MongoDB works with any healthcare data standard in our whitepaper, What is Radical Interoperability.
References:
- Introduction to openEHR
- HL7 FHIR & openEHR: Choosing the standard that is right for you
- MongoDB Wildcard Index
Sales Advice for a Winning Year
Sam Fiorenzo and Lorena Cortes are two exceptional salespeople at MongoDB who have made a significant impact on their careers and the company. As they geared up to join our Excellence Club, they offered to share their insights and advice on what it takes to have a successful year in sales. Whether you're just starting out or looking to take your career to the next level, their tips and strategies are sure to inspire and guide you on your journey. Find your next sales role at MongoDB Advice from Lorena Cortes Account Manager, Customer Success I joined MongoDB in 2020 and have since been promoted twice. In each role I’ve held, I’ve learned something new that’s helped me achieve success as a salesperson. Developing myself from an Account Development Representative to Account Executive, and most recently to an Account Manager within our Customer Success team, has taught me to have a customer-centric mindset and reiterated the importance of building and maintaining long-term relationships with clients. This is why success requires more than just hitting your quotas. To develop yourself and your career, here are my three pieces of advice: Maintain a positive mindset and trust the process This is easier said than done, but it’s essential to success. One of the keys to maintaining a positive attitude is following a proven sales methodology. At MongoDB, for example, we follow a process that involves understanding the customer's current state, what they're trying to achieve, and what is required to get there. It's important not to skip any steps and to stay patient and persistent throughout the process. Deal cycles can be long, and some deals may fall through, but it's crucial to stay focused on your long-term vision and persevere through difficulties. Preparation will be key here. As for experiencing setbacks, such as losing a deal, it's important to embrace these opportunities to reflect, review, learn, and grow.
Losing deals can be tough, but it's also an opportunity to refine your approach and improve it for the next time. Trusting in the process means staying committed to your weekly M4S (Metrics for Success) and tracking your activity. Even if you don't see immediate results, trust that the actions you're taking will eventually yield outcomes. Work hard and stay consistent Working hard and staying consistent is another key to success in sales. Cultivating self-discipline and holding yourself accountable to your yearly goals is essential. Establishing a positive routine will help you avoid distractions and stay on track. Allocating time for additional enablement around the technology you’re selling and working with internal stakeholders can help establish credibility with your customers. This is a reflection of how well you listen to your customer and take your partnership with their company seriously. Achieve work-life balance Achieving work-life balance is crucial to your physical and mental well-being, and ultimately, your success in sales. This was one of my biggest lessons this past quarter. It's easy to get caught up in work and forget to take care of yourself outside of the office, but this can actually hinder your productivity and success. Making time for yourself outside of work by engaging in activities that bring you joy, such as exercise or hobbies, will enhance your overall happiness and ability to come back to work recharged. Advice from Sam Fiorenzo Strategic Account Executive I joined MongoDB as a Sales Development Representative and have spent almost six years growing my career here. I’ve learned a lot about what it takes to be a successful salesperson from different peers or leaders and through my own experiences. While success isn’t promised, there are some things that have been core to getting there. Company matters Being selective about the product or service you want to represent makes a big difference. 
In my mind, a strong company will have a clearly defined addressable market, a product that is mission critical or tied to revenue-generating functions (it will solve pain for its customers), and is backed by a leadership team that you trust to drive the company forward. If you’re missing these factors, it probably means you’re selling a commodity or you’re losing to competitors for various reasons. Selling “nice-to-haves” makes it difficult to differentiate based on real value and near impossible to forecast. The company you work for matters. Having an employer who invests in understanding the market and your customers' pain points matters. Working for an organization like MongoDB that is continuously iterating to improve products or services massively simplifies my role. I get to focus on finding and understanding the real problems some of the largest organizations in the world are facing, and then help them fix it. Growth mindset Sales can be a tough job. You hear "no" far more often than "yes," and it's easy to become discouraged when it seems like all of your hard work isn't paying off. I’ve had to learn to be diligent in reframing challenges or setbacks. I’ve messed up a sales cycle more times than I can count, missed an important qualification detail or deadline, and have definitely lost deals. The key for me is that I don’t look at any of those as my end result. When I’m met with a challenge in order to progress, it just means I need to pivot. These moments act as change agents that drive me forward. It would be untrue to state that all problems you face are good, however, if I had never learned to persevere through setbacks, my sales career would’ve ended years ago. Some of the most successful sales professionals I’ve come across refer to this as having a growth mindset. They’re the people who could get a door slammed in their face and respond with “not that door? 
– I’ll try this window.” Great discovery The foundation of any good partnership or deal is deeply understanding your customer’s current state and their biggest problems. We refer to this as discovery. Early in my tech sales career, I was given advice to mute the phone when I wasn’t speaking. This additional second it took for me to unmute before driving the conversation forward taught me how to listen to understand instead of immediately responding. Those brief moments became filled with elaboration or detail of what was most important instead of closed questions filling the silence. Another senior leader used to tell us to “hold the point” during great discovery. He told a story about his duck-hunting dogs. It was the dogs’ job to seek out the ducks' nests and point to their location without waking them or scaring them away. He (the hunter) would then signal when he was ready to shoot. It was then that the dogs would flush or scare the ducks out of the nest. A "Hold The Point" sticker from the MongoDB archives One of his dogs was the best at finding the ducks but would get too excited and jump to flush out the ducks before he was ready. The hunts became wildly unsuccessful. He compared this excited hunting dog to an eager sales rep hearing the first pain that could be solved and jumping to share our product. When we talk to customers, it’s easy to become eager or excited when hearing about one problem your solution can fix. The analogy highlights the importance of patience in understanding their entire situation and “holding your point” before jumping to prescribe a solution. These are small examples that resonated with me. I use them as reminders to stay laser-focused on listening to the customer and understanding as much as I can about their needs before getting too excited about a qualified deal. Last, but far from least, it's important to remember that you can't do it alone It truly takes an army of talented individuals to find a lot of success in sales. 
For me, this includes people in roles focused on consulting services, customer success, solutions architecting, product engineering, support, and even partnerships. Whether you’re one week or ten years into selling it’s important to stay humble and acknowledge that everyone around you has unique strengths or skills that can drive your success forward. In my experience, working together and leveraging others’ expertise is crucial to overachieving goals and ultimately making a meaningful impact in your organization. Learn about MongoDB’s employee resource groups that build community and foster inclusion for women in tech, including Sell Like a Girl, an initiative devoted to making MongoDB the best place to work for women in sales.
How Edenlab Built a High-Load, Low-Code FHIR Server to Deliver Healthcare for 40 Million Plus Patients
The Kodjin FHIR server has speed and scale in its DNA. Edenlab, the Ukrainian company behind Kodjin, built our original FHIR solution to digitize and service the entire Ukrainian national health system. The learnings and technologies from that project informed our development of the Kodjin FHIR server. At Edenlab, we have always been driven by our passion for building solutions that excel in speed and scale. With Kodjin, we have embraced a modern tech stack to deliver unparalleled performance that can handle the demands of large-scale healthcare systems, providing efficient data management and seamless interoperability. Eugene Yesakov, Solution Architect, Author of Kodjin Built for speed and scale While most healthcare projects involve handling large volumes of data, including patient records, medical images, and sensor data, the Kodjin FHIR server is based on a system developed to handle tens of millions of patient records and thousands of requests per second, ensuring timely access and efficient decision-making for a population of over 40 million people. And all of this information had to be processed and exchanged in real time or near real time, without delays or bottlenecks. This article will explore some of the architectural decisions the Edenlab team made when building Kodjin, specifically the role MongoDB played in enhancing performance and ensuring scalability. We will examine the benefits of leveraging MongoDB's scalability, flexibility, and robust querying capabilities, as well as its ability to handle the increasing velocity and volume of healthcare data without compromising performance. About the Kodjin FHIR server Kodjin is an ONC-certified and HIPAA-compliant FHIR server that offers hassle-free healthcare data management. It has been designed to meet the growing demands of healthcare projects, allowing for the efficient handling of increasing data volumes and concurrent requests.
Its architecture, built on a horizontally scalable microservices approach, utilizes cutting-edge technologies such as the Rust programming language, MongoDB, ElasticSearch, Kafka, and Kubernetes. These technologies enable Kodjin to provide users with a low-code approach while harnessing the full potential of the FHIR specification. A deeper dive into the architecture approach - the role of MongoDB in Kodjin When deciding on the technology stack for the Kodjin FHIR Server, the Edenlab team knew that a document database would be required to serve as a transactional data store. In an FHIR Server, a transactional data store ensures that data operations occur in an atomic and consistent manner, allowing for the integrity and reliability of the data. Document databases are well-suited for this purpose as they provide a flexible schema and allow for storing complex data structures, such as those found in FHIR data. FHIR resources are represented in a hierarchical structure and can be quite intricate, with nested elements and relationships. Document databases, like MongoDB, excel at handling such complex and hierarchical data structures, making them an ideal choice for storing FHIR data. In addition to supporting document storage, the Edenlab team needed the chosen database to provide transactional capabilities for FHIR data operations. FHIR transactions, which encompass a set of related data operations that should either succeed or fail as a whole, are essential for maintaining data consistency and integrity. They can also be used to roll back changes if any part of the transaction fails. MongoDB provides support for multi-document transactions , enabling atomic operations across multiple documents within a single transaction. This aligns well with the transactional requirements of FHIR data and ensures data consistency in Kodjin. 
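As a concrete sketch of how an FHIR transaction bundle maps onto a MongoDB multi-document transaction (resource shapes and collection names here are illustrative assumptions, not Kodjin's internal design; the driver calls are shown in comments):

```javascript
// An FHIR transaction Bundle: all entries must succeed or fail together.
const bundle = {
  resourceType: "Bundle",
  type: "transaction",
  entry: [
    { request: { method: "POST", url: "Patient" },
      resource: { resourceType: "Patient", id: "p1" } },
    { request: { method: "POST", url: "Observation" },
      resource: { resourceType: "Observation",
                  subject: { reference: "Patient/p1" } } }
  ]
};

// Map each bundle entry to the collection it targets.
function toOperations(bundle) {
  return bundle.entry.map(e => ({
    collection: e.request.url,   // e.g. "Patient"
    document: e.resource
  }));
}

// With the Node.js driver, the mapped writes run atomically inside a session,
// so a failure in any entry rolls back the whole bundle:
//   await session.withTransaction(async () => {
//     for (const op of toOperations(bundle)) {
//       await db.collection(op.collection).insertOne(op.document, { session });
//     }
//   });
const ops = toOperations(bundle);
```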
Implementation of GridFS as storage for terminologies in the Terminology service The Terminology service plays a vital role in FHIR projects, requiring a reliable and efficient storage solution for the terminologies used. Kodjin employs GridFS, a file system within MongoDB designed for storing large files, which makes it ideal for handling terminologies. GridFS offers a convenient way to store and manage terminology files, ensuring easy accessibility and seamless integration within the FHIR ecosystem. By utilizing MongoDB's GridFS, Kodjin ensures efficient storage and retrieval of terminologies, enhancing the overall functionality of the Terminology service. Kodjin FHIR server performance To evaluate the efficiency and responsiveness of the Kodjin FHIR server in various scenarios, we conducted multiple performance tests using Locust, an open-source load testing tool. One of the performance metrics measured was the retrieval of resources by their unique IDs using the GET by ID operation. Kodjin with MongoDB achieved a performance of 1721.8 requests per second (RPS) for this operation, indicating that the server can efficiently retrieve specific resources and enable quick access to desired data. The search operation, which involves querying ElasticSearch to obtain the IDs of the searched resources and then retrieving them from MongoDB, exhibited a performance of 1896.4 RPS. This highlights the effectiveness of polyglot persistence in Kodjin, leveraging ElasticSearch for fast and efficient search queries and MongoDB for resource retrieval. The system demonstrated its ability to process search queries and retrieve relevant results promptly. In terms of resource creation, Kodjin with MongoDB showed a performance of 1405.6 RPS for POST resource operations, signifying that the system can effectively handle numerous resource-creation requests. The efficient processing and insertion of new resources into the MongoDB database ensure seamless data persistence and scalability.
Overall, the performance tests confirm that Kodjin with MongoDB delivers efficient and responsive performance across various FHIR operations. The high RPS values obtained demonstrate the system's capability to handle significant workloads and provide timely access to resources through GET by ID, search, and POST operations. Conclusion Kodjin leverages a modern tech stack, including Rust, Kafka, and Kubernetes, to deliver the highest levels of performance. At the heart of Kodjin is MongoDB, which serves as a transactional data store. MongoDB's capabilities, such as multi-document transactions and a flexible schema, ensure the integrity and consistency of FHIR data operations. The utilization of GridFS within MongoDB ensures efficient storage and retrieval of terminologies, optimizing the functionality of the Terminology service. To experience the power and potential of the Kodjin FHIR server firsthand, we invite you to contact the Edenlab team for a demo. For more information on MongoDB's work in healthcare, and to understand why the world's largest healthcare companies trust MongoDB, read our whitepaper on radical interoperability.
Accelerating to T+1 - Have You Got the Speed and Agility Required to Meet the Deadline?
Thank you to Ainhoa Múgica and Karolina Ruiz Rogelj for their contributions to this post. On May 28, 2024, the Securities and Exchange Commission (SEC) will implement a move to T+1 settlement for standard securities trades, shortening the settlement period from two business days after the trade date to one business day. The change aims to address market volatility and reduce credit and settlement risk. The shortened T+1 settlement cycle can potentially decrease market risks, but most firms' current back-office operations cannot handle this change. This is due to several challenges with existing systems, including: Manual processes will be under pressure due to the shortened settlement cycle Batch data processing will not be feasible To prepare for T+1, firms should take urgent action to address these challenges: Automate manual processes to streamline them and improve operational efficiency Replace batch processing with event-based, real-time processing for faster settlement In this blog, we will explore how MongoDB can be leveraged to accelerate manual process automation and replace batch processes to enable faster settlement. What are T+1 and T+2 settlement? T+1 settlement refers to the practice of settling, on the following trading day, transactions executed before 4:30 pm. For example, if a transaction is executed on Monday before 4:30 pm, the settlement will occur on Tuesday. This settlement process involves the transfer of securities and/or funds from the seller's account to the buyer's account. This contrasts with T+2 settlement, where trades are settled two trading days after the trade date. According to SEC Chair Gary Gensler, “T+1 is designed to benefit investors and reduce the credit, market, and liquidity risks in securities transactions faced by market participants.” Overcoming T+1 transition challenges with MongoDB: Two unique solutions 1.
The multi-cloud developer data platform accelerates manual process automation Legacy settlement systems may involve manual intervention for various tasks, including manual matching of trades, manual input of settlement instructions, allocation emails to brokers, reconciliation of trade and settlement details, and manual processing of paper-based documents. These manual processes can be time-consuming and prone to errors. MongoDB (Figure 1 below) can help accelerate developer productivity in several ways: Easy to use: MongoDB is designed to be easy to use, which can reduce the learning curve for developers who are new to the database. Flexible data model: Allows developers to store data in a way that makes sense for their application. This can help accelerate development by reducing the need for complex data transformations or ORM mapping. Scalability: MongoDB is highly scalable, which means it can handle large volumes of trade data and support high levels of concurrency. Rich query language: Allows developers to perform complex queries without writing much code. MongoDB's Apache Lucene-based search can also help screen large volumes of data against sanctions and watch lists in real time. Figure 1: MongoDB's developer data platform Discover the developer productivity calculator. Developers spend 42% of their work week on maintenance and technical debt. How much does this cost your organization? Calculate how much you can save by working with MongoDB. 2. An operational trade store to replace slow batch processing Back-office technology teams face numerous challenges when consolidating transaction data due to the complexity of legacy batch ETL and integration jobs. Legacy databases have long been the industry standard but are not optimal for post-trade management due to limitations such as rigid schemas, difficulty in horizontal scaling, and slow performance.
For T+1 settlement, it is crucial to have real-time availability of consolidated positions across assets, geographies, and business lines; waiting for the end of a batch cycle will not meet this requirement. As a solution, MongoDB customers use an operational trade data store (ODS) to overcome these challenges and enable real-time data sharing. By using an ODS, financial firms can improve their operational efficiency by consolidating transaction data in real time. This allows them to streamline their back-office operations, reduce the complexity of ETL and integration processes, and avoid the limitations of relational databases. As a result, firms can make faster, more informed decisions and gain a competitive edge in the market. Using MongoDB (Figure 2 below), trade desk data is copied into an ODS in real time through change data capture (CDC), creating a centralized trade store that acts as a live source for downstream trade settlement and compliance systems. This enables faster settlement times, improves data quality and accuracy, and supports full transactionality. As the ODS evolves, it becomes a "system of record" or "golden source" for many back-office and middle-office applications, and powers AI/ML-based real-time fraud prevention applications and settlement failure risk systems. Figure 2: Centralized trade data store (ODS) Managing trade settlement failure risk is critical to driving efficiency across the entire securities market ecosystem. MongoDB's integration capabilities (Figure 3 below) with modern AI and ML platforms enable banks to develop AI/ML models that make managing potential trade settlement fails much more efficient from a cost, time, and quality perspective. Additionally, predictive analytics allows firms to project availability and demand and optimize inventories for lending and borrowing.
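A minimal sketch of the CDC hand-off into the ODS using MongoDB change streams follows. Collection and field names are illustrative assumptions; the `watch()` wiring is shown in comments:

```javascript
// Normalize a change-stream event into the shape an ODS consumer might ingest.
function toOdsEvent(change) {
  return {
    op: change.operationType,             // "insert", "update", "replace", ...
    tradeId: change.documentKey._id,
    trade: change.fullDocument || null,   // populated with fullDocument: "updateLookup"
    observedAt: new Date()
  };
}

// With the Node.js driver, the trade-desk collection feeds the ODS in real time:
//   const stream = db.collection("trades")
//     .watch([], { fullDocument: "updateLookup" });
//   for await (const change of stream) {
//     await odsCollection.insertOne(toOdsEvent(change));  // hypothetical sink
//   }

// A sample event, shaped like a change-stream insert notification:
const sample = {
  operationType: "insert",
  documentKey: { _id: "T-42" },
  fullDocument: { _id: "T-42", symbol: "XYZ", qty: 100, status: "NEW" }
};
const evt = toOdsEvent(sample);
```

Because change streams are resumable and ordered, downstream settlement and compliance systems consume a continuous event feed instead of waiting for a nightly batch window.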
Figure 3: Event-driven application for real-time monitoring Summary Financial institutions face significant challenges in reducing settlement duration from two business days (T+2) to one (T+1), particularly when it comes to addressing existing back-office issues. However, it's crucial for them to achieve this goal within a year, as required by the SEC. This blog highlights how MongoDB's developer data platform can help financial institutions automate manual processes and adopt a best-practice approach to replace batch processes with a real-time data store repository (ODS). With the help of MongoDB's developer data platform and best practices, financial institutions can achieve operational excellence and meet the SEC's T+1 settlement deadline of May 28, 2024. If T+0 settlement cycles become a reality, institutions with the most flexible data platform will be best equipped to adjust. Top banks in the industry are already adopting MongoDB's developer data platform to modernize their infrastructure, leading to reduced time-to-market, lower total cost of ownership, and improved developer productivity. Looking to learn more about how you can modernize, or what MongoDB can do for you?
- Zero-downtime migrations using MongoDB’s flexible schema
- Accelerate your digital transformation with these 5 Phases of Banking Modernization
- Reduce time-to-market for your customer lifecycle management applications
- MongoDB’s financial services hub
Introducing the Certified MongoDB Atlas Connector for Power BI
This is a collaborative post from MongoDB and Microsoft. We thank Alexi Antonino, Natacha Bagnard, and Jad Jarouche from MongoDB, and Bob Zhang, Mahesh Prakriya, and Rajeev Jain from Microsoft for their contributions. Introducing the MongoDB Atlas Connector for Power BI, the certified solution that facilitates real-time insights on your Atlas data directly in the Power BI interfaces that analysts know and love! Supporting Microsoft's Intelligent Data Platform, this integration bridges the gap between development and analytics teams, allowing analysts who rely on Power BI for insights to natively transform, analyze, and share dashboards that incorporate live MongoDB Atlas data. Available in June, the Atlas Power BI Connector empowers companies to harness the full power of their data like never before. Let's take a deeper look into how the Atlas Power BI Connector can unlock comprehensive, real-time insights on live application data that will help take your business to the next level. Effortlessly model document data with Power Query The Atlas Power BI Connector makes it easy to model document data with native Power BI features and data modeling capabilities. With its SQL-92-compatible dialect, mongosql, you can tailor your data to fit any requirements, transforming heavily nested document data to meet your exact needs, all from your Power Query dashboard. Gain real-time insights on live application data By using the Power BI Connector to connect directly to MongoDB Atlas, you can build up-to-date dashboards in Power BI Desktop and scale insights to your organization through Power BI Service with ease. With no delays caused by data duplication, you can stay ahead of the curve by unlocking real-time insights on Atlas data that are relevant to your business. Empower cross-source data analysis The Power BI Connector's integration with MongoDB Atlas enables you to seamlessly model, analyze, and share insightful dashboards built from multiple data sources.
By combining Atlas's powerful Data Federation capabilities with Power BI's advanced analytics and visualization tools, you can easily create comprehensive dashboards that offer valuable insights into your data, regardless of where it is stored. See it in action Log in and activate the Atlas SQL Interface to try out the Atlas Power BI Connector! If you are new to Atlas or Power BI, get started for free today on Azure Marketplace or Power BI Desktop.
The MongoDB for VS Code Extension Is Now Generally Available
4 Ways MongoDB Solves Healthcare's Interoperability Puzzle
Picture this: You're on a road trip, driving across the country, taking in the beautiful scenery, and enjoying the freedom of the open road. But suddenly, the journey comes to a screeching halt as you fall seriously ill and need emergency surgery. The local hospital rushes you into the operating room, but how will they know what medications you're allergic to, or what conditions you've been treated for in the past? Figure 1: Before and after interoperability In a perfect world, the hospital staff would have access to all of your medical records, seamlessly integrated into one interoperable electronic health record (EHR) system. This would enable them to quickly and accurately treat you, as seen in Figure 1. Unfortunately, the reality is that data is often siloed, fragmented, and difficult to access, making it nearly impossible for healthcare providers to get a complete picture of their patients' health. That's where interoperability comes in: by enabling seamless integration of data from different sources and formats, it gives healthcare providers easy access to the information they need, even across different health providers. And at the heart of solving the interoperability challenge is MongoDB, the ideal solution for building a truly interoperable data repository. In this blog post, we'll explore four ways MongoDB stands out in the interoperability software space and show how its unique capabilities make it the fundamental missing piece in the interoperability puzzle for healthcare. Let's get started! 1. Document flexibility MongoDB's document data model is perfect for managing healthcare data. It allows you to work with data in JSON format, eliminating the need to flatten it or transform it into a string. This simplifies the implementation of common interoperability standards for clinical and terminology data, such as HL7 FHIR and openEHR, as well as SNOMED and LOINC, because all of these standards also support JSON.
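To make the document-flexibility point concrete, here is a small sketch. The resource shapes follow FHIR loosely and are illustrative, not a certified profile: two differently shaped clinical records can sit side by side in one collection with no schema migration.

```javascript
// A blood pressure observation (LOINC 85354-9 is the blood pressure panel):
const bloodPressure = {
  resourceType: "Observation",
  code: { coding: [{ system: "http://loinc.org", code: "85354-9" }] },
  component: [
    { code: { text: "systolic" },  valueQuantity: { value: 120, unit: "mmHg" } },
    { code: { text: "diastolic" }, valueQuantity: { value: 80,  unit: "mmHg" } }
  ]
};

// An allergy record with entirely different fields:
const allergy = {
  resourceType: "AllergyIntolerance",
  code: { text: "Penicillin" },
  criticality: "high"
};

// Both documents can be stored as-is in the same collection, e.g. in mongosh:
//   db.clinical_data.insertMany([bloodPressure, allergy])
const clinicalData = [bloodPressure, allergy];
```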
The document model also supports nested and hierarchical data structures, making it easier to represent complex clinical data with varying levels of detail and granularity. MongoDB's document model also provides flexibility in managing healthcare data, allowing for dynamic and self-describing schemas. With no need to pre-define the schema, fields can vary from document to document and can be modified at any time without requiring disruptive schema migrations. This makes it easy for healthcare providers to add or update information in clinical documents, such as when new interoperability standards are released, ensuring that healthcare data is kept accurate and up to date without requiring database reconfiguration or downtime. 2. Scalability Dealing with large healthcare datasets can be challenging for traditional relational database systems, but MongoDB's horizontal scaling offers a solution. With horizontal scaling, healthcare providers can easily distribute their data across multiple servers and cloud providers (AWS, GCP, and Azure), resulting in increased processing power and faster query times. It also results in more cost-efficient storage, as growing vertically is more expensive than growing horizontally. This allows healthcare providers to scale their systems seamlessly as their data volumes grow while maintaining performance and reliability. MongoDB's reliability is ensured through its replication architecture: each replica set consists of three nodes that provide fault tolerance and automatic failover in the event of node failure. Horizontal scaling also improves reliability by adding more servers or nodes to the system, reducing the risk of a single point of failure. 3. Performance When it comes to healthcare data, query performance can make all the difference in delivering timely and accurate care. And that's another aspect where MongoDB shines.
MongoDB holds data in a format that is optimized for storage and retrieval, allowing it to read and write data quickly and efficiently. Its advanced querying capabilities, backed by compound and wildcard indexes, make it a standout solution for healthcare applications. MongoDB Atlas Search, which uses Apache Lucene indexing, also enables efficient querying across vast data sets, handling complex queries that span multiple fields. This is especially useful for clinical data repositories (CDRs), which permit almost unlimited querying flexibility. Atlas Search indexing also enables advanced search features, so medical professionals can quickly and accurately access the information they need from any device.

4. Security

Figure 2: Fine-grained access control

The security of sensitive clinical data is paramount in the healthcare industry. That's why MongoDB provides an array of robust security features, including fine-grained access control and auditing, as seen in Figure 2. With Client-Side Field Level Encryption (CSFLE) and Queryable Encryption, MongoDB is the only data platform that allows the processing of randomly encrypted patient data, providing the highest level of data security with minimal impact on performance. Additionally, MongoDB Atlas supports VPC peering and private endpoints that permit secure connections to healthcare applications, wherever they are hosted. By implementing strong security measures from the start, organizations can ensure privacy by design.

Partner ecosystem

MongoDB is the only non-relational database and modern data platform that collaborates directly with clinical data repository (CDR) vendors like Smile, Exafluence, Better, Firely, and others. While some vendors offer MongoDB as an alternative to a relational database, others have built their solutions exclusively on MongoDB; one example is the Kodjin FHIR server.
MongoDB has extended its capabilities to integrate with FHIR Works on AWS, enabling healthcare providers and payers to deploy a FHIR server with MongoDB Atlas through the AWS Marketplace. With MongoDB's unique approach to data storage and retrieval and its ability to work with CDR vendors, millions of patients worldwide are already benefiting from its use.

Beyond interoperability with MongoDB

Access to complete medical records is often limited by data silos and fragmentation, leaving healthcare providers with an incomplete picture of their patients' health. That's where MongoDB's interoperability solution comes in as the missing puzzle piece the healthcare industry needs. With MongoDB's document flexibility, scalability, performance, and security features, healthcare providers can access accurate, up-to-date patient information in real time.

But MongoDB's solution goes beyond that. Radical interoperability with MongoDB means that healthcare providers own the data layer: they can put the stored data to any use and connect it to any existing applications or APIs. They're free to work with any healthcare data standard, including custom schemas, and to leverage the data for use cases beyond storage and interoperability.

The future of healthcare is here, and with MongoDB leading the way, we can expect to see more innovative solutions that put patients first. If you're interested in learning more about radical interoperability with MongoDB, check out our brochure.
Aerofiler Brings Breakthrough Automation to the Legal Profession
Don Nguyen is the perfect person to solve a technology problem in the legal space. Don spent several years in software engineering before becoming a lawyer, where he discovered just how much manual, administrative work legal professionals have to do. The company he co-founded, Aerofiler, digitises the different parts of the contract lifecycle to eliminate manual work, allowing lawyers to focus on things that require their expertise.

Don says the legal profession has always been behind industries like accounting, marketing, and finance when it comes to leveraging technology to increase productivity. Both Don and his co-founder, Stuart Loh, thought they could automate many manual tasks for legal professionals through an AI-powered contract lifecycle management solution.

Turning mountains into automation

Law firms generate mountains of paperwork that must be digitised and filed. Searching contracts post-execution can be an arduous task using the legacy systems most firms run on today. Initially, Don, Stuart, and Jarrod Mirabito (co-founder and CTO) set out to make searching contracts and tracking obligations easier. As the service became more popular, customers started asking for more capabilities, like digitising and automating the approval process. Aerofiler's solution now manages the entire contract lifecycle, from drafting and negotiations to approvals, signing, and filing.

Don says the difficulty with using AI to extract data is that you usually can't see where the data is coming from, and you can't train the models to extract a concept that might be specific to your industry. Aerofiler supports custom extraction, so firms can crawl for and find exactly the results they're looking for, and it highlights exactly where in the contract the data was found.

Aerofiler is unique as a modern, cloud-based contract lifecycle management solution that streamlines contract management processes and enhances workflow efficiency.
It features AI-powered analytics, smart templates, and real-time collaboration tools, and is highly configurable to fit the unique needs of different companies. Aerofiler's user interface is also highly intuitive and user-friendly, leading to greater user adoption and overall efficiency.

The startup stack

Don has over 10 years of experience working with MongoDB and describes it as very robust. When it was time to choose a database for their startup, MongoDB Atlas was an easy choice. One of the big reasons Don chose Atlas is that they don't have to manage their own infrastructure. Atlas provides the functionality for text search, storage, and metadata retrieval, making it easy to hit the ground running. On top of MongoDB, the system runs Express.js, Vue.js, and Node.js, also known as the MEVN stack.

In choosing a database, Don points out that every assumption you make will have exceptions, and no matter what your requirements are now, they will inevitably change. So one of the key factors in making a decision is how the database will handle those changes when they come. In his experience, NoSQL databases like MongoDB are easy to deploy and maintain. And with MongoDB offering ACID transactions, they get much of the functionality they would otherwise look for in a relational database stack.

How startups grow up

Aerofiler is part of the MongoDB for Startups program, which helps early-stage, high-growth startups build faster and scale further. MongoDB for Startups offers access to a wide range of resources, including free credits for our best-in-class developer data platform, MongoDB Atlas, personalized technical advice, co-marketing opportunities, and access to our robust developer community. Don says the free credits helped the startup at a time when resources were tight. The key to their success, Don says, is in solving the problems their customers have.
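The ACID transactions Don mentions give a contract workflow all-or-nothing semantics: either every related write lands, or none do. The toy sketch below is not MongoDB's implementation (with a driver you would open a session and call its transaction API against a replica set); it is a minimal in-memory model of that atomicity guarantee, with invented data:

```python
import copy

# Toy all-or-nothing update: stage changes on a copy, commit only if every
# step succeeds. Real code would use a MongoDB session/transaction instead.
db = {"contracts": {"c1": {"status": "draft"}},
      "approvals": []}

def approve_contract(store, contract_id, approver):
    staged = copy.deepcopy(store)          # stage changes off to the side
    try:
        contract = staged["contracts"][contract_id]  # KeyError aborts all
        contract["status"] = "approved"
        staged["approvals"].append({"contract": contract_id, "by": approver})
        store.clear()
        store.update(staged)               # "commit": swap in staged state
        return True
    except KeyError:
        return False                       # "abort": store is untouched

approve_contract(db, "c1", "stuart")       # both writes applied together
approve_contract(db, "missing", "don")     # neither write applied
print(db["contracts"]["c1"]["status"], len(db["approvals"]))  # approved 1
```

Without this guarantee, a failure between the status update and the approval record would leave the contract data inconsistent, which is exactly what a relational stack is usually chosen to prevent.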
In terms of the road ahead, Don is excited about ChatGPT and says there are some very interesting applications for generative AI in the legal space. If anyone would like to talk about what generative AI is and how it could work in the legal space, he's happy to take those calls and emails.

Are you part of a startup and interested in joining the MongoDB for Startups program? Apply now.
MongoDB Goes (Leafy) Green: Our Net Zero Commitment
At MongoDB, we have a deep commitment to sustainability and to taking ownership of our environmental impact. In 2021, we internally announced our pledge to reach net zero emissions (CO2e) by 2030, and we have since benchmarked our emissions and worked on developing a strategy to achieve this goal. Through this process, we discovered that over our last fiscal year we produced the same amount of carbon as driving a gas-powered vehicle around the globe more than 6,000 times.

This amount may seem high because we calculated not only our Scope 1 (direct; e.g., offices) and Scope 2 (indirect; e.g., purchased electricity) emissions, as is standard under current reporting requirements, but also our Scope 3 emissions (indirect; e.g., supply/value chain). These Scope 3 emissions account for ~97.5% of our total footprint. We have chosen to disclose this full amount because we are committed to reducing our entire carbon footprint as much as possible and want to be transparent on this journey.

While 2030 may seem far away, we are committed to reducing emissions and have already taken immediate action. Last year, we hired a Sustainability Manager to help us work out how to achieve our goals and to engage teams across the company in developing carbon reduction strategies. This included adding an interim target to be 100% powered by renewables by 2026.

We can't do this alone. A large part of our indirect emissions comes from our cloud partners, so we have partnered with them to see what we can achieve for our customers together. In addition to reducing our direct footprint, we are also focused on cleaner energy sources. By 2025, our major partners, AWS, Google Cloud (GCP), and Microsoft Azure, will be 100% powered by renewables. Following their example, we have entered into our first virtual power purchase agreement to support the construction of a new 10MW solar plant in Texas and add renewable energy to the grid.
This is unique for a company our size and is evidence of our commitment to sustainability. By reducing our emissions, we can help our customers reduce their own carbon impact through MongoDB and the cloud.

Enabling our customers to make greener choices

We have heard from our customers that sustainability is important to them and influences their purchasing decisions. We looked at our own technology and have re-engineered MongoDB Atlas to reduce power consumption by ~30%. While moving to the cloud can have a positive impact on reducing carbon emissions, the actual amount depends on various factors, including the cloud provider, the type and amount of workloads, and the location of data centers. That's why we've introduced a new level of transparency to help our customers make more sustainable choices, including our Green Leaf icon in MongoDB Atlas, which highlights low-carbon AWS and GCP cloud regions and encourages customers to consider the carbon impact of their choices.

MongoDB Atlas Serverless takes this one step further. Serverless infrastructure can help customers reduce their carbon footprint by cutting infrastructure overhead and using computing resources only when needed, rather than running constantly. This means less energy is consumed overall and less waste is created. Additionally, with MongoDB Atlas Serverless, customers can quickly scale up or down in response to changing demand, ensuring they're not wasting resources on idle infrastructure.

To drive awareness and use of our products' sustainability features, we have released a quick reference blog and a detailed white paper on sustainable architecture. Finally, to help our customers understand how these changes affect their footprint, MongoDB will now include a note on attributable carbon emissions on our customers' invoices.

Changes in our offices, operations, and beyond

We are making actionable changes in our offices to gain momentum towards our larger net zero goal.
We are switching over 100% of the lights in our offices to LEDs, have eliminated many single-use items, and are reducing power consumption by regulating usage outside of peak hours; this can be as simple as putting Zoom screens on timers. Additionally, we have introduced a Supplier Code of Conduct to ensure ESG compliance throughout our value chain.

Finally, we have partnered with our Green Team ERG to enable employee engagement in our sustainability goals. Our Green Team fosters employee engagement by organizing community-building and educational events centered around environmentalism, and it acts as a voice for our employees' drive for corporate sustainability. As of this year, employees can use points earned in our employee recognition tool to donate towards a reforestation project selected by our Green Team leads.

We are just getting started, and our commitment to you is to be as transparent as possible throughout this process. We encourage you to follow our progress on our Sustainability webpage and check out our latest CSR report, which dives deeper into our emissions benchmark. At MongoDB, we believe that sustainability is everyone's responsibility, and we are committed to doing our part to create a more sustainable future.