Karolina Ruiz Rogelj


Real-Time Inventory Tracking with Computer Vision & MongoDB Atlas

In today’s rapidly evolving manufacturing landscape, digital twins of factory processes have emerged as a game-changing technology. But why are they so important? Digital twins serve as virtual replicas of physical manufacturing processes, allowing organizations to simulate and analyze their operations in a virtual environment. By incorporating artificial intelligence and machine learning, organizations can interpret and classify objects, leading to cost reductions, faster throughput, and improved quality. Real-time data, especially inventory information, plays a crucial role in these virtual factories, providing up-to-the-minute insights for accurate simulations and dynamic adjustments. In the first blog, we covered a five-step, high-level plan for creating a virtual factory. In this blog, we delve into the technical aspects of implementing a real-time computer vision inventory inference solution, as seen in Figure 1 below. Our focus is on connecting a physical factory with its digital twin using MongoDB Atlas, which facilitates real-time interaction between the physical and digital realms. Let's get started!

Figure 1: High Level Overview

Part 1: The physical factory sends data to MongoDB Atlas

Let’s start with the first task: transmitting data from the physical factory to MongoDB Atlas. Here, we focus on sending captured images of raw material inventory from the factory to MongoDB for storage and further processing, as seen in Figure 2. Using the MQTT protocol, we send images as base64-encoded strings. AWS IoT Core serves as our MQTT broker, ensuring secure and reliable image transfer from the factory to MongoDB Atlas.

Figure 2: Sending images to MongoDB Atlas via AWS IoT Core

For simplicity, in this demo we store the base64-encoded image strings directly in MongoDB documents, because each image received from the physical factory is small enough to fit into a single document. However, this is not the only way to work with images (or large files in general) in MongoDB. Within our developer data platform, we have various storage methods, including GridFS for larger files or binary data for smaller ones (less than 16MB). Moreover, object storage services like AWS S3 or Google Cloud Storage, coupled with MongoDB data federation, are commonly used in production scenarios. In such real-world scenarios, integrating object storage services with MongoDB provides a scalable and cost-efficient architecture: MongoDB is excellent for fast and scalable reads and writes of operational data, and when retrieving images with very low latency is not a priority, storing these large files in ‘buckets’ helps reduce costs while keeping all the benefits of working with MongoDB Atlas. Robert Bosch GmbH, for instance, uses this architecture for Bosch's IoT Data Storage, which helps it service millions of devices worldwide efficiently.

Coming back to our use case: to facilitate communication between AWS IoT Core and MongoDB, we employ Rules defined in AWS IoT Core, which let us send data to an HTTPS endpoint. This endpoint is configured directly in MongoDB Atlas and allows us to receive and process incoming data. If you want to learn more about MongoDB Data APIs, check out this blog from our Developer Center colleagues.

Part 2: MongoDB Atlas to AWS SageMaker for CV prediction

Now it’s time for the inference part! We’ve trained a built-in multi-label classification model provided by SageMaker, using images like the one in Figure 3. The images were annotated using an .lst file in which each line flags the presence or absence of each piece class. So for an image where the red and white pieces are present but no blue piece is in the warehouse, we would have an annotation such as:
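As a hedged reconstruction of that schema: SageMaker's built-in image classification algorithm in multi-label mode reads .lst files as tab-separated lines carrying an image index, one binary flag per class, and the image path. Assuming the class order blue, red, white (the order the predictions use later), a plausible annotation line would look like the following; the index and file name here are invented for illustration:

```
# index   blue   red   white   path
12        0      1     1       inventory_cam_0012.jpg
```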
Figure 3: Sample image used for the Computer Vision model

The model was built using 24 training images and 8 validation images, a decision made for simplicity to demonstrate the capabilities of the implementation rather than to build a powerful model. Despite the extremely small training/validation sample, we achieved a validation accuracy of 0.97. If you want to learn more about how the model was built, check out the GitHub repo.

With a model trained and ready to predict, we created a model endpoint in SageMaker to which we send new images through a POST request, and it answers back with the predicted values. We use an Atlas Function to drive this functionality. Every minute, it grabs the latest image stored in MongoDB, sends it to the SageMaker endpoint, and waits for the response. When the response is received, we get an array of three decimal values between 0 and 1 representing the likelihood of each piece (blue, red, white) being in stock. We interpret the numeric values with a simple rule: if the value is above 0.85, we consider the piece to be in stock.
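The Atlas Function itself is written in JavaScript; as an illustration of the same flow, here is a minimal Python sketch using boto3, with a hypothetical endpoint name:

```python
import base64
import json

import boto3

runtime = boto3.client("sagemaker-runtime")

def classify(image_b64: str, threshold: float = 0.85) -> dict:
    """Send one image to the SageMaker endpoint and map the three
    class probabilities (blue, red, white) to in-stock booleans."""
    response = runtime.invoke_endpoint(
        EndpointName="inventory-classifier",  # hypothetical endpoint name
        ContentType="application/x-image",
        Body=base64.b64decode(image_b64),     # images arrive base64-encoded
    )
    probs = json.loads(response["Body"].read())  # e.g., [0.03, 0.97, 0.91]
    return {piece: p > threshold for piece, p in zip(["blue", "red", "white"], probs)}
```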
Finally, the same Atlas Function writes the results to a collection (Figure 4) that keeps the current state of the physical factory’s inventory. More details about the function are available here.

Figure 4: Collection storing real time stock status of the factory

The beauty comes when we have MongoDB Realm incorporated into the virtual factory, as seen in Figure 5. It’s automatically and seamlessly synced with MongoDB Atlas through Device Sync. The moment we update the collection with the inventory status of the physical factory in MongoDB Atlas, the virtual factory, with Realm, is automatically updated. The advantage here, besides not needing any additional lines of code for the data transfer, is that conflict resolution is handled out of the box, and when the connection is lost, data won’t be lost but rather synced as soon as the connection is re-established. This enables a real-time synchronized digital twin without the hassle of managing data pipelines, handling edge cases in your code, and losing time on non-competitive work.

Figure 5: Connecting Atlas and Realm via Device Sync

As an example of how companies are implementing Realm and Device Sync for mission-critical applications: the airline Cathay Pacific revolutionized how pilots logged critical flight data such as wind speed, elevation, and oil pressure. Historically, this was done manually with pen and paper, until they switched to a fully digital, tablet-based app built with MongoDB, Realm, and Device Sync. With this, they eliminated all paper from flights and flew one of the first zero-paper flights in the world in 2019. Check out the full article here. As you can see, the combination of these technologies is what enables the development of truly connected, highly performant digital twins within just one platform.

Part 3: CV results are sent to Digital Twin via Device Sync

Sending data to the digital twin through Device Sync is a straightforward procedure. First, navigate to Atlas and access the Realm SDK section. There, you can choose your preferred programming language, and the data models are automatically pre-built based on the schemas defined in the MongoDB collections. MongoDB Atlas simplifies this task by offering copy-paste functionality, as seen in Figure 6, eliminating the need to construct data models from scratch. For this specific project, the C# SDK was used. However, developers have the flexibility to select from various SDK options, including Kotlin, C++, Flutter, and more, depending on their preferences and project requirements. Once the data models are in place, simply activating Device Sync completes the setup. This enables seamless bidirectional communication, and developers can now send data to their digital twin effortlessly.

Figure 6: Realm C# SDK Object Model example

One of the key advantages of using Device Sync is its built-in conflict resolution capability. Whether facing offline interruptions or conflicting changes, MongoDB Atlas manages conflict resolution automatically. This "always on" behavior is particularly crucial for digital twins, ensuring constant synchronization between the device and MongoDB Atlas. It saves developers significant time that would otherwise be spent on building custom conflict resolution mechanisms, error-handling functions, and connection-handling methods. With Device Sync handling conflict resolution out of the box, developers can focus on building and improving their applications, confident in the seamless synchronization of data between the digital twin and MongoDB Atlas.

Part 4: Virtual factory sends inventory status to the user

For this demonstration, we built the digital twin of our physical factory using Unity so that it can be controlled interactively through a VR headset. With this, the user can order a piece in the physical world by interacting with the virtual twin, even from thousands of miles away from the real factory. To control the physical factory through the headset, it’s crucial that the app informs the user whether or not a piece is present in stock, and this is where Realm and Device Sync come into play.

Figure 7: User is informed of which pieces are not in stock in real time.

In Figure 7, the user intended to order a blue piece on the digital twin, and the app is informing them that the piece is not in stock, therefore not activating the order on either the physical factory or its digital twin. Behind the scenes, the app reads the Realm object that stores the stock status of the physical factory and decides whether the piece is orderable. Remember that this Realm object is in real-time sync with MongoDB Atlas, which in turn is constantly updating the stock status in the collection from Figure 4 based on SageMaker inferences.

Conclusion

In this blog, we presented a four-part process demonstrating the integration of a virtual factory and computer vision with MongoDB Atlas. This solution enables transformative real-time inventory management for manufacturing companies. If you're interested in learning more and getting hands-on experience, feel free to explore our accompanying GitHub repository for further details and practical implementation.

August 1, 2023

Real-Time Energy Monitoring for Smart Buildings with MongoDB and HiveMQ

The Internet of Things (IoT) has ushered in a new era of energy efficiency, enabling the deployment of energy-efficient sensors for energy conservation and resource utilization. With over 1.5 billion connected IoT devices already installed in commercial smart buildings in 2022 and a projected surge to 3.25 billion devices by 2028, the volume of data generated is staggering. To put it in perspective, an average home in 2020 would generate approximately 4.7 terabytes of data annually. However, managing and harnessing this immense amount of real-time streaming data poses a significant challenge for developers. In smart buildings, where a multitude of IoT sensors continuously gather event streaming data, developers often grapple with integrating disparate technologies and investing significant time in data streaming integration.

In this blog, we present a simple yet powerful solution to this challenge. We will demonstrate how you can effortlessly move IoT data using standard protocols such as MQTT to MongoDB Atlas using the HiveMQ Enterprise Extension for MongoDB. By optimizing smart buildings with real-time energy monitoring through the seamless integration of MongoDB and HiveMQ, we unlock the potential for efficient energy management and a sustainable future. Let’s get started!

Dream team: HiveMQ + MongoDB

In a world where energy conservation and efficient resource utilization are essential, let’s walk through Figure 1 and see how simple it is to use MongoDB, HiveMQ’s MQTT broker, and the Enterprise Extension for MongoDB to enable real-time energy monitoring for smart buildings.

Figure 1: Combining HiveMQ and MongoDB to process data in real time

Step 1: Data transmission

Using MQTT-based IoT devices deployed throughout the building, electricity consumption, temperature, and occupancy data is collected and sent to the HiveMQ MQTT broker. The broker acts as a central hub, efficiently and securely handling the communication between devices and backend systems. It ensures reliable message delivery and provides MQTT-specific features like quality of service, session management, and topic-based message routing.

Step 2: Data ingestion

The HiveMQ MongoDB extension seamlessly integrates with MongoDB, allowing for persistent storage of the MQTT data in a highly scalable and flexible manner. The fully customizable templating system allows MQTT data to be stored according to the building’s specific operational requirements. MongoDB's document-based model accommodates the varying data formats and structures generated by different IoT devices.

Step 3: Data visualization and analytics

Once the MQTT data is securely stored in MongoDB, building managers can use its powerful in-app analytics to gain deep insights into energy consumption patterns, identify anomalies, and optimize energy usage. By leveraging MongoDB's rich query support and aggregation framework, building managers can make data-driven decisions promptly, reducing costs and enhancing sustainability. In cases where data needs to be exported to an ML/AI engine, the MongoDB Spark and Kafka connectors can be used. Users of MongoDB Atlas can leverage Atlas Device Sync and Realm to send real-time alerts and messages to mobile devices. Data can be visualized using MongoDB Atlas Charts or through a third-party business intelligence (BI) tool connected via the MongoDB BI Connector or the Atlas SQL interface.
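To make Step 1 concrete, here is a hedged sketch of a sensor publishing a reading with the paho-mqtt client; the broker host, topic, and payload fields are invented for illustration:

```python
import json
import time

import paho.mqtt.client as mqtt

# Hypothetical broker host and topic; in this architecture they would point
# at the HiveMQ broker, whose MongoDB extension persists messages to Atlas.
client = mqtt.Client(client_id="meter-42")
client.connect("broker.example.com", 1883)

reading = {
    "deviceId": "meter-42",
    "ts": int(time.time()),
    "kwh": 1.27,        # electricity consumed since the last reading
    "tempC": 21.5,
    "occupied": True,
}
client.publish("building/floor3/energy", json.dumps(reading), qos=1)
client.disconnect()
```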
Conclusion

By seamlessly integrating HiveMQ's MQTT broker with MongoDB, developers can efficiently handle data transmission, ingestion, and storage. This integration enables building managers to gain valuable insights into energy consumption patterns, make data-driven decisions, and optimize energy usage. To learn more about MongoDB’s role in IoT, please visit our IoT webpage. You can also try the HiveMQ platform with the Enterprise Extension for MongoDB for free. Thank you to Ainhoa Múgica for her contributions to this blog.

July 11, 2023

Dissecting Open Banking with MongoDB: Technical Challenges and Solutions

Thank you to Ainhoa Múgica for her contributions to this post.

Unleashing a disruptive wave in the banking industry, open banking (or open finance), as the term indicates, has compelled financial institutions (banks, insurers, fintechs, corporates, and even government bodies) to embrace a new era of transparency, collaboration, and innovation. This paradigm shift requires banks to openly share customer data with third-party providers (TPPs), driving enhanced customer experiences and fostering the development of innovative fintech solutions by combining ‘best-of-breed’ products and services. As of 2020, 24.7 million individuals worldwide used open banking services, a number that is forecast to reach 132.2 million by 2024. This rising trend fuels competition, spurs innovation, and fosters partnerships between traditional banks and agile fintech companies. In this transformative landscape, MongoDB, a leading developer data platform, plays a vital role in supporting open banking by providing a secure, scalable, and flexible infrastructure for managing and protecting shared customer data. By harnessing the power of MongoDB's technology, financial institutions can lower costs, improve customer experiences, and mitigate the potential risks associated with the widespread sharing of customer data through strict regulatory compliance.

Figure 1: An Example Open Banking Architecture

The essence of open banking/finance is leveraging common data exchange protocols to share financial data and services with third parties. In this blog, we will dive into the technical challenges and solutions of open banking from a data and data services perspective, and explore how MongoDB empowers financial institutions to overcome these obstacles and unlock the full potential of this open ecosystem.

Dynamic environments and standards

As open banking standards continue to evolve, financial institutions must remain adaptable to meet changing regulations and industry demands. Traditional relational databases often struggle to keep pace with the dynamic requirements of open banking due to their rigid schemas, which are difficult to change and manage over time. In countries without standardized open banking frameworks, banks and third-party providers face the challenge of developing multiple versions of APIs to integrate with different institutions, creating complexity and hindering interoperability. Fortunately, open banking standards and guidelines (e.g., Europe, Singapore, Indonesia, Hong Kong, Australia) have generally required or recommended that open APIs be RESTful and support the JSON data format, which creates a basis for common data exchange. MongoDB addresses these challenges by offering a flexible developer data platform that natively supports the JSON data format, simplifies data modeling, and enables flexible schema changes for developers. With features like the MongoDB Data API and GraphQL API, developers can reduce development and maintenance efforts by easily exposing data in a low-code manner. The Stable API feature ensures compatibility during database upgrades, preventing code breaks and providing a seamless transition. Additionally, MongoDB provides productivity-boosting features like full-text search, data visualization, data federation, mobile database synchronization, and other app services, enabling developers to accelerate time-to-market.
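As a small, hedged illustration of why JSON-native storage fits these APIs, here is a sketch of persisting an open-banking-style account record with pymongo; the database, collection, and field names are invented, loosely modeled on typical account-information payloads:

```python
from pymongo import MongoClient

client = MongoClient("mongodb+srv://...")  # Atlas connection string elided
accounts = client["openbanking"]["accounts"]

# The nested, schema-flexible document can gain new fields later
# (e.g., another consent entry) without a disruptive migration.
accounts.insert_one({
    "accountId": "acc-1001",
    "currency": "EUR",
    "balance": {"amount": 2450.75, "type": "InterimAvailable"},
    "consents": [
        {"tppId": "tpp-77", "scope": ["ReadBalances"], "expires": "2023-12-31"},
    ],
})
```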
With MongoDB's capabilities, financial institutions and third-party providers can navigate the changing open banking landscape more effectively, foster collaboration, and deliver innovative solutions to customers. One client that leverages MongoDB’s native JSON data management and flexibility is NatWest, a major retail and commercial bank in the United Kingdom based in London, England. The bank has moved from zero to 900 million API calls per month within years as open banking uptake grows, and volume is expected to grow ten times in the coming years. At a MongoDB event on 15 Nov 2022, Jonathan Haggarty, NatWest’s Head of “Bank of APIs” Technology – an API ecosystem that brings the retail bank’s services to partners – shared in his presentation, Driving Customer Value using API Data, that NatWest’s growing API ecosystem lets it “push a bunch of JSON data into MongoDB,” which makes it “easy to go from simple to quite complex information” and also makes it easier to obfuscate user details through data masking for customer privacy. NatWest can surface customer data insights for partners via its API ecosystem – for example, “where customers are on the e-commerce spectrum” and the “best time [for retailers] to push discounts,” as well as insights on the “most valuable customers” – with data being used for problem-solving, analytics and insight, and reporting.

Performance

In the dynamic landscape of open banking, meeting unpredictable demands for performance, scalability, and availability is crucial. The efficiency of applications and the overall customer experience heavily rely on the responsiveness of APIs. However, building an open banking platform becomes intricate when accommodating third-party providers with undisclosed business and technical requirements. Without careful management, this can lead to unforeseen performance issues and increased costs. Open banking demands high API performance under all kinds of workload volumes. The OBIE recommends an average TTLB (time to last byte) of 750 ms per endpoint response for all payment initiations (except file payments) and account information APIs. Compliance with regulatory service level agreements (SLAs) in certain jurisdictions further adds to the complexity. Legacy architectures and databases often struggle to meet these demanding criteria, necessitating extensive changes to ensure scalability and optimal performance. That's where MongoDB comes into play. MongoDB is purpose-built to deliver exceptional performance with its WiredTiger storage engine and its compression capabilities. Additionally, MongoDB Atlas improves performance further through intelligent index and schema suggestions, automatic data tiering, and workload isolation for analytics. One prime illustration of these capabilities comes from Temenos, a renowned financial services application provider, which achieved remarkable transaction processing performance and efficiency by leveraging MongoDB Atlas. Temenos recently ran a benchmark with MongoDB Atlas and Microsoft Azure and successfully processed an astounding 200 million embedded finance loans and 100 million retail accounts at a record-breaking 150,000 transactions per second. This showcases the power and scalability of MongoDB, with the outstanding performance, scalability, and availability financial institutions need to tackle the challenges posed by open banking and the ever-evolving demands of the industry.
Scalability

Building a platform to serve TPPs, who may not disclose their business usage or technical and performance requirements, can introduce unpredictable performance and cost issues if not managed carefully. For instance, a bank in Singapore faced an issue where its open APIs experienced peak loads and crashes every Wednesday. After investigation, the bank discovered that one of the TPPs ran a promotional campaign every Wednesday, resulting in a surge of API calls that overwhelmed the bank's infrastructure. A scalable solution that can perform under unpredictable workloads is critical, beyond meeting the performance requirements of a known volume of transactions. MongoDB's flexible architecture and scalability features address these concerns effectively. With its distributed, document-based data model, MongoDB allows for seamless scaling both vertically and horizontally. By leveraging sharding, data can be distributed across multiple nodes, ensuring efficient resource utilization and enabling the system to handle high transaction volumes without compromising performance. MongoDB's auto-sharding capability enables dynamic scaling as the workload grows, providing financial institutions with the flexibility to adapt to changing demands and ensuring a smooth and scalable open banking infrastructure.
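As a hedged sketch of how that horizontal distribution is switched on (the database, collection, and shard-key names are invented for illustration):

```python
from pymongo import MongoClient

client = MongoClient("mongodb+srv://...")  # assumes a sharded cluster

# Distribute transaction documents across shards by hashed customer id,
# so spiky TPP traffic spreads evenly instead of hitting one node.
client.admin.command("enableSharding", "bank")
client.admin.command(
    "shardCollection",
    "bank.transactions",
    key={"customerId": "hashed"},
)
```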
Availability

In the realm of open banking, availability becomes a critical challenge. With increased reliance on banking services by third-party providers (TPPs), ensuring consistent availability becomes more complex. Previously, banks could bring down certain services during off-peak hours for maintenance. However, with TPPs offering 24x7 experiences, any downtime is unacceptable. This places greater pressure on banks to maintain constant availability for open API services, even during planned maintenance windows or unforeseen events. MongoDB Atlas, the fully managed global cloud database service, addresses these availability challenges effectively. With its multi-node cluster and multi-cloud DBaaS capabilities, MongoDB Atlas ensures high availability and fault tolerance. It offers the flexibility to run on multiple leading cloud providers, allowing banks to minimize concentration risk and achieve higher availability through a cluster distributed across different cloud platforms. The robust replication and failover mechanisms provided by MongoDB Atlas guarantee uninterrupted service and enable financial institutions to provide reliable, always-available open banking APIs to their customers and TPPs.

Security and privacy

Data security and consent management are paramount concerns for banks participating in open banking. The exposure of authentication and authorization mechanisms to third-party providers raises security concerns and introduces technical complexities around data protection. Banks require fine-grained access control and encryption mechanisms to safeguard shared data, including managing data-sharing consent at a granular level. Furthermore, banks must navigate the landscape of data privacy laws like the General Data Protection Regulation (GDPR), which impose strict requirements distinct from traditional banking regulations. MongoDB offers a range of solutions to address these security and privacy challenges effectively. Queryable Encryption provides a mechanism for managing encrypted data within MongoDB, ensuring sensitive information remains secure even when shared with third-party providers. MongoDB's comprehensive encryption features cover data at rest and data in transit, protecting data throughout its lifecycle. MongoDB's flexible schema allows financial institutions to capture diverse data requirements for managing data-sharing consent and to unify user consent from different countries into a single data store, simplifying compliance with complex data privacy laws. Additionally, MongoDB's geo-sharding capabilities enable compliance with data residency laws by keeping relevant data and consent information in the closest cloud data center while providing optimal response times for data access. To enhance data privacy further, MongoDB offers field-level encryption techniques, enabling symmetric encryption at the field level to protect sensitive data (e.g., personally identifiable information) even when shared with TPPs. The random encryption of fields adds an additional layer of security while still enabling query operations on the encrypted data, and MongoDB's Queryable Encryption technique further strengthens security and defends against cryptanalysis, ensuring that customer data remains protected and confidential within the open banking ecosystem.

Activity monitoring

With the numerous APIs offered by banks in the open banking ecosystem, activity monitoring and troubleshooting become critical aspects of maintaining a robust and secure infrastructure. MongoDB simplifies activity monitoring through its monitoring tools and auditing capabilities. Administrators and users can track system activity at a granular level, monitoring database system and application events. MongoDB Atlas also has Administration APIs, which can be used to manage the Atlas service programmatically. For example, one can use the Atlas Administration API to create database deployments, add users to those deployments, monitor those deployments, and more. These APIs help automate CI/CD pipelines as well as the monitoring of activities on the data platform, freeing developers and administrators from this mundane effort to focus on generating more business value. Performance monitoring tools, including the Performance Advisor, help gauge and optimize system performance, ensuring that APIs deliver exceptional user experiences.

Figure 2: Activity Monitoring on MongoDB Atlas

MongoDB Atlas Charts, an integrated feature of MongoDB Atlas, offers analytics and visualization capabilities. Financial institutions can create business intelligence dashboards using MongoDB Atlas Charts, eliminating the need for the expensive licensing associated with traditional business intelligence tools and keeping costs down as more TPPs utilize the APIs. With MongoDB Atlas Charts, financial institutions can offer comprehensive business telemetry data to TPPs, such as the number of insurance quotations, policy transactions, API call volumes, and performance metrics. These insights empower financial institutions to make data-driven decisions, improve operational efficiency, and optimize the customer experience in the open banking ecosystem.

Figure 3: Atlas Charts Sample Dashboard

Real-timeliness

Open banking introduces new challenges for financial institutions as they strive to serve and scale amidst unpredictable workloads from TPPs. While static content poses fewer difficulties, APIs requiring real-time updates or continuous streaming, such as dynamic account balances or ESG-adjusted credit scores, demand capabilities for near-real-time data delivery. To enable applications to react immediately to changes as they occur, organizations can leverage MongoDB Change Streams, which are built on the aggregation framework and react to data changes in a single collection, a database, or even an entire deployment. This capability further enhances MongoDB’s real-time data and event processing and analytics capabilities.
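A hedged sketch of a change stream consumer that pushes balance updates out to subscribed TPPs as they commit (the collection name and delivery callback are hypothetical):

```python
from pymongo import MongoClient

client = MongoClient("mongodb+srv://...")
balances = client["openbanking"]["balances"]

def notify_tpps(document):
    """Hypothetical delivery hook: forward the fresh balance to TPP webhooks."""
    print("would notify TPPs of:", document)

# Watch committed writes and fetch the full post-update document.
pipeline = [{"$match": {"operationType": {"$in": ["insert", "update", "replace"]}}}]
with balances.watch(pipeline, full_document="updateLookup") as stream:
    for change in stream:
        notify_tpps(change["fullDocument"])
```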
MongoDB also offers multiple mechanisms to support data streaming, including a Kafka connector for event-driven architectures and a Spark connector for streaming with Spark. These solutions empower financial institutions to meet the real-time data needs of their open banking partners effectively, enabling seamless integration and real-time data delivery for enhanced customer experiences.

Conclusion

MongoDB's technical capabilities position it as a key enabler for financial institutions embarking on their open banking journey. From managing dynamic environments and accommodating unpredictable workloads to ensuring scalability, availability, security, and privacy, MongoDB provides a comprehensive set of tools and features to address the challenges of open banking effectively. With MongoDB as the underlying infrastructure, financial institutions can navigate the ever-evolving open banking landscape with confidence, delivering innovative solutions and driving the future of banking. Embracing MongoDB empowers financial institutions to unlock the full potential of open banking and provide exceptional customer experiences in this era of collaboration and digital transformation. If you would like to learn more about how you can leverage MongoDB for your open banking infrastructure, take a look at these resources:

Open banking panel discussion: future-proof your bank in a world of changing data and API standards with MongoDB, Celent, Icon Solutions, and AWS
How a data mesh facilitates open banking
Financial services hub

June 6, 2023

4 Ways MongoDB Solves Healthcare's Interoperability Puzzle

Picture this: You're on a road trip, driving across the country, taking in the beautiful scenery, and enjoying the freedom of the open road. But suddenly, the journey comes to a screeching halt as you fall seriously ill and need emergency surgery. The local hospital rushes you into the operating room, but how will they know what medications you're allergic to, or what conditions you've been treated for in the past?

Figure 1: Before and after interoperability

In a perfect world, the hospital staff would have access to all of your medical records, seamlessly integrated into one interoperable electronic health record (EHR) system. This would enable them to treat you quickly and accurately, as seen in Figure 1. Unfortunately, the reality is that data is often siloed, fragmented, and difficult to access, making it nearly impossible for healthcare providers to get a complete picture of their patients' health. That’s where interoperability comes in, enabling seamless integration of data from different sources and formats and providing healthcare providers with easy access to the information they need, even across different health providers. And at the heart of solving the interoperability challenge is MongoDB, an ideal foundation for building a truly interoperable data repository. In this blog post, we'll explore four ways MongoDB stands out in the interoperability software space and show how its unique capabilities make it the fundamental missing piece in the interoperability puzzle for healthcare. Let’s get started!

1. Document flexibility

MongoDB's document data model is well suited to managing healthcare data. It allows you to work with data in JSON format, eliminating the need to flatten or transform it into a string. This simplifies the implementation of common interoperability standards for clinical and terminology data, such as HL7 FHIR and openEHR, as well as SNOMED and LOINC, because all of these standards also support JSON. The document model also supports nested and hierarchical data structures, making it easier to represent complex clinical data with varying levels of detail and granularity. In addition, the document model provides flexibility in managing healthcare data, allowing for dynamic and self-describing schemas. With no need to pre-define the schema, fields can vary from document to document and can be modified at any time without requiring disruptive schema migrations. This makes it easy for healthcare providers to add or update information in clinical documents, such as when new interoperability standards are released, ensuring that healthcare data stays accurate and up to date without requiring database reconfiguration or downtime.
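To make this concrete, here is a hedged sketch of storing a simplified, FHIR-inspired patient document with pymongo; the structure and values are invented for illustration and are far leaner than a real FHIR resource:

```python
from pymongo import MongoClient

client = MongoClient("mongodb+srv://...")
patients = client["cdr"]["patients"]

# FHIR resources are JSON, so they map directly onto documents --
# nested name and allergy structures are stored as-is, no flattening.
patients.insert_one({
    "resourceType": "Patient",
    "id": "pat-123",
    "name": [{"family": "Doe", "given": ["Jane"]}],
    "allergies": [
        {"code": {"text": "penicillin"}, "criticality": "high"},
    ],
})
```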
2. Scalability

Dealing with large healthcare datasets can be challenging for traditional relational database systems, but MongoDB's horizontal scaling offers a solution. With horizontal scaling, healthcare providers can easily distribute their data across multiple servers and cloud providers (AWS, GCP, and Azure), resulting in increased processing power and faster query times. It also makes storage more cost-efficient, as growing vertically is more expensive than growing horizontally. This allows healthcare providers to scale their systems seamlessly as their data volumes grow, while maintaining performance and reliability. MongoDB's reliability is ensured through its replication architecture: each replica set consists of three nodes that provide fault tolerance and automatic failover in the event of node failure. Horizontal scaling further improves reliability by adding more servers or nodes to the system, reducing the risk of a single point of failure.

3. Performance

When it comes to healthcare data, query performance can make all the difference in delivering timely and accurate care, and this is another area where MongoDB shines. MongoDB holds data in a format that is optimized for storage and retrieval, allowing it to read and write data quickly and efficiently. Its advanced querying capabilities, backed by compound and wildcard indexes, make it a standout solution for healthcare applications. MongoDB Atlas Search, built on Apache Lucene indexing, also enables efficient querying across vast data sets, handling complex queries with multiple fields. This is especially useful for clinical data repositories (CDRs), which permit almost unlimited querying flexibility. Atlas Search indexing also enables advanced search features that let medical professionals quickly and accurately access the information they need from any device.
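As a hedged illustration of such a query (the index, collection, and field names are invented), an Atlas Search lookup across clinical notes might be sketched like this:

```python
from pymongo import MongoClient

client = MongoClient("mongodb+srv://...")
notes = client["cdr"]["clinicalNotes"]

# $search runs against the Lucene-backed Atlas Search index ("default");
# a wildcard path lets one query span many differently named text fields.
results = notes.aggregate([
    {"$search": {
        "index": "default",
        "text": {"query": "penicillin allergy", "path": {"wildcard": "*"}},
    }},
    {"$limit": 10},
    {"$project": {"patientId": 1, "summary": 1, "_id": 0}},
])
for doc in results:
    print(doc)
```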
4. Security

The security of sensitive clinical data is paramount in the healthcare industry. That’s why MongoDB provides an array of robust security features, including fine-grained access control and auditing, as seen in Figure 2.

Figure 2: Fine-grained access control

With Client-Side Field Level Encryption (CSFLE) and Queryable Encryption, MongoDB is the only data platform that allows the processing of randomly encrypted patient data, providing the highest level of data security with minimal impact on performance. Additionally, MongoDB Atlas supports VPC peering and private links that permit secure connections to healthcare applications, wherever they are hosted. By implementing strong security measures from the start, organizations can ensure privacy by design.

Partner ecosystem

MongoDB is the only non-relational database and modern data platform that directly collaborates with clinical data repository (CDR) vendors like Smile, Exafluence, Better, Firely, and others. While some vendors offer MongoDB as an alternative to a relational database, others have built their solutions exclusively on MongoDB; one example is the Kodjin FHIR server. MongoDB has also extended its capabilities to integrate with AWS FHIR Works, enabling healthcare providers and payers to deploy a FHIR server with MongoDB Atlas through the AWS Marketplace. With MongoDB's unique approach to data storage and retrieval and its ability to work with CDR vendors, millions of patients worldwide are already benefiting from its use.

Beyond interoperability with MongoDB

Access to complete medical records is often limited by data silos and fragmentation, leaving healthcare providers with an incomplete picture of their patients' health. That's where MongoDB's interoperability solution comes in as the missing puzzle piece the healthcare industry needs. With MongoDB's document flexibility, scalability, performance, and security features, healthcare providers can access accurate and up-to-date patient information in real time. But MongoDB's solution goes beyond that. Radical interoperability with MongoDB means that healthcare providers own the data layer and are thus able to leverage the stored data for any purpose, connecting to any existing applications or APIs. They're free to work with any healthcare data standard, including custom schemas, and to leverage the data for use cases beyond storage and interoperability. The future of healthcare is here, and with MongoDB leading the way, we can expect to see more innovative solutions that put patients first. If you're interested in learning more about radical interoperability with MongoDB, check out our brochure.

May 18, 2023

Three Major IoT Data-Related Challenges and How to Address Them

IoT has become a crucial component of future-oriented solutions and holds massive potential economic value. McKinsey & Company estimates that by 2030, IoT (Internet of Things) will enable $5.5 trillion to $12.6 trillion in value worldwide, including the value captured by consumers and customers. For proof of its growing popularity and consumers’ dependency on it, you likely don't need to look any further than your own wrist. From fitness bands to connected vehicles, smart homes, and fleet-management solutions in manufacturing and retail, IoT already connects billions of devices worldwide, with many more to come. As more IoT-enabled devices come online, with increasingly sophisticated sensors, choosing the right underlying technology to make IoT solutions easier to implement and help companies seize new innovative opportunities is essential. In this blog, we will discuss how MongoDB has successfully addressed three major IoT data-related challenges across various industries, including manufacturing, retail, telecommunications, and healthcare. The challenges are the following:

Data management
Real-time analytics
Supply chain optimization

Figure 1: MongoDB Atlas for IoT

Let's dive right in!

Data management

Storing, transmitting, and processing the large amount of data that IoT devices produce is a significant challenge. Additionally, the data produced by IoT devices often comes in variable structures. This data must be carefully timestamped, indexed, and correlated with other data sources to provide the context required for effective decision-making. This combination of data volume and complexity makes it difficult to process data from IoT devices effectively and efficiently.

Bosch

Consider Bosch IoT Suite, a family of products and services in IoT device management, IoT data management, and IoT edge by Bosch Digital. These products and services support over 250 international IoT projects and over 10 million connected devices. Bosch implemented MongoDB to store, manage, and analyze data in real time. MongoDB’s ability to handle structured, semi-structured, and unstructured data, along with efficient data modeling in JSON, makes it easy to map the information model of each device to its associated document in the database. In addition, dynamic schemas support agile development methodologies and make it simple to develop apps and software. Adding new devices, sensors, and assets is easy, which means the team can focus on creating better software.

ThingSpace

Another example is ThingSpace, Verizon’s market-leading IoT connectivity management platform, which provides the network access required to deliver various IoT products and services. Verizon works with companies that purchase network access from it to connect their devices, bundled together with their own solutions, which they sell to end users. ThingSpace’s customers each sell an IoT product that needs reliable connectivity to ensure the devices always work, which WiFi cannot offer. Verizon’s monolithic RDBMS-based system would not be able to scale to handle both transactional and time series workloads, so Verizon decided it needed a distributed database architecture going forward. MongoDB proved to be the only solution that scaled to meet Verizon’s requirements across different use cases and combinations of workload types. The immense processing needs resulting from the high number of devices and high velocity of incoming messages were only addressed by MongoDB’s highly available, scalable architecture. Native MongoDB time series collections allow for further performance gains through optimized storage with clustered indexes and optimized time series query operators. MongoDB's advanced capabilities, such as flexible data modeling, powerful indexing, and time series collections, provide an effective solution for managing the complex and diverse data generated by IoT devices.
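A hedged sketch of creating such a time series collection for device telemetry (the database, collection, and field names are invented):

```python
from datetime import datetime, timezone

from pymongo import MongoClient

client = MongoClient("mongodb+srv://...")
db = client["iot"]

# Time series collections cluster measurements by time and metadata,
# which is what drives the storage and query optimizations.
db.create_collection(
    "deviceReadings",
    timeseries={
        "timeField": "ts",       # when the measurement happened
        "metaField": "device",   # which sensor/asset produced it
        "granularity": "seconds",
    },
)
db["deviceReadings"].insert_one({
    "ts": datetime.now(timezone.utc),
    "device": {"id": "sensor-1", "site": "plant-a"},
    "temp": 20.9,
})
```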
Real-time analytics

Real-time data analytics, one of the most crucial parts of big data analytics today, helps businesses make more data-driven decisions in real time. Despite its importance, however, few organizations can respond to changes in data minute by minute or second by second, and many challenges arise when implementing real-time analytics at enterprise scale: storing such a huge volume of data is one thing, but analyzing it in real time is an entirely different story.

Thermo Fisher Cloud

Consider the Thermo Fisher Cloud, one of the largest cloud platforms for the scientific community on AWS. MS Instrument Connect allows Thermo Fisher customers to see live experiment results from any mobile device or browser. Each experiment produced millions of "rows" of data, which led to suboptimal performance with existing databases. Internal developers needed a database that could easily handle a wide variety of fast-changing data. MongoDB's expressive query language and rich secondary indexes provided the flexibility to support both the ad-hoc and predefined queries customers needed for their scientific experiments.

"Anytime I can use a service like MongoDB Atlas, I’m going to take that so that we at Thermo Fisher can focus on what we’re good at, which is being the leader in serving science." — Joseph Fluckiger, Sr. Software Architect @Thermo Fisher

MongoDB Atlas scales seamlessly and is capable of ingesting enormous amounts of sensor and event data to support real-time analysis for catching critical events or changes as they happen. That gives organizations new capabilities, including:

Capturing streaming or batch data of all types without excessive data mapping
Analyzing data easily and intuitively with a built-in aggregation framework
Delivering data insights rapidly and at scale with ease

With MongoDB, organizations can optimize queries to quickly deliver results that improve operations and drive business growth.
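As a hedged sketch of the kind of continuously refreshed rollup the aggregation framework enables (collection and field names carried over from the illustrative time series sketch above):

```python
from datetime import datetime, timedelta, timezone

from pymongo import MongoClient

client = MongoClient("mongodb+srv://...")
readings = client["iot"]["deviceReadings"]

# Average the last five minutes of telemetry per device -- the sort of
# rolling view that powers live dashboards and alerting.
window_start = datetime.now(timezone.utc) - timedelta(minutes=5)
pipeline = [
    {"$match": {"ts": {"$gte": window_start}}},
    {"$group": {
        "_id": "$device.id",
        "avgTemp": {"$avg": "$temp"},
        "samples": {"$sum": 1},
    }},
    {"$sort": {"avgTemp": -1}},
]
for row in readings.aggregate(pipeline):
    print(row)
```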
Supply chain optimization

Items move through different locations in the supply chain, making it hard to maintain end-to-end visibility throughout their journey. A lack of control at any stage can dramatically harm the efficiency of planning, slow down the entire supply chain, and ultimately result in a lower return on investment. From optimizing warehouse space by sourcing raw materials as needed to delivering real-time supply chain insights, IoT-enabled supply chains can significantly optimize these processes by eliminating blind spots and inefficiencies.

Longbow Advantage

Longbow Advantage delivers substantial business results by enabling clients to optimize their supply chains. Millions of shipments move through multiple warehouses every day, generating massive quantities of data that must be analyzed for real-time visibility and reporting. Its flagship warehouse visibility platform, Rebus, combines real-time performance reporting with end-to-end warehouse visibility and intelligent labor management. Longbow needed a database solution that could process quantities of that scale and deliver the real-time warehouse visibility and reporting at the heart of Rebus, and it knew it could not rely on monolithic, time-consuming spreadsheets to do so. It became clear that MongoDB’s document database model was a good match and would allow Rebus to gather, store, and build visibility into disparate data in near real time.

Another key component of smart supply chain solutions is IoT-enabled mobile apps that provide real-time visibility and facilitate on-the-spot, data-driven decisions. In such situations, an offline-first paradigm becomes crucial, since staff need access to data in areas where connectivity is poor or nonexistent. Realm by MongoDB is a lightweight, object-oriented embedded database technology for resource-constrained environments, making it an ideal solution for storing data on mobile devices. By utilizing MongoDB’s Realm SDKs, which wrap the Realm database, and Atlas Device Sync, which enables seamless data synchronization between MongoDB and Realm on a mobile phone with minimal developer effort, businesses can rapidly develop mobile applications and drive innovation. MongoDB provides a powerful solution for IoT-enabled supply chains that can optimize processes and eliminate inefficiencies, enabling organizations to make data-driven decisions and improve supply chain efficiency.

Conclusion

The IoT industry is rapidly evolving, and as the number of connected devices grows, so do the challenges faced by businesses leveraging these solutions. Through a range of real-world use cases, we have seen how MongoDB has helped businesses manage IoT data, perform real-time analytics, and optimize their supply chains, driving innovation across a variety of industries. With its unique features and capabilities, designed to manage the heavy lifting for you, MongoDB is well positioned to continue playing a crucial role in the ongoing digital transformation of the IoT landscape. Want to learn more or get started with MongoDB? Check out our IoT resources:

MongoDB IoT Reference Architecture
Migrate existing applications with Relational Migrator
MongoDB & IIoT ebook
IoT webpage

April 24, 2023

Build a ML-Powered Underwriting Engine in 20 Minutes with MongoDB and Databricks

The insurance industry is undergoing a significant shift from traditional to near-real-time, data-driven models, driven by both strong consumer demand and the urgent need for companies to process large amounts of data efficiently. Data from sources such as connected vehicles and wearables is used to calculate precise and personalized premium prices, while also creating new opportunities for innovative products and services. As insurance companies strive to provide personalized, real-time products, the move toward sophisticated, real-time data-driven underwriting models is inevitable. To process all of this information efficiently, software delivery teams will need to become experts at building and maintaining data processing pipelines. This blog will focus on how you can revolutionize the underwriting process within your organization by demonstrating how easy it is to create a usage-based insurance model using MongoDB and Databricks. This blog is a companion to the solution demo in our GitHub repository, where you will find detailed step-by-step instructions on how to build the data upload and transformation pipeline leveraging MongoDB Atlas platform features, as well as how to generate, send, and process events to and from Databricks. Let’s get started.

Part 1: The use case data model
Part 2: The data pipeline
Part 3: Automated decisions with Databricks

Part 1: The use case data model

Figure 1: Entity relationship diagram - Usage-based insurance example

Imagine being able to offer your customers personalized usage-based premiums that take into account their driving habits and behavior. To do this, you'll need to gather data from connected vehicles, send it to a machine learning platform for analysis, and then use the results to create a personalized premium for your customers. You’ll also want to visualize the data to identify trends and gain insights. This unique, tailored approach gives your customers greater control over their insurance costs while helping you provide more accurate and fair pricing. A basic example data model to support this use case includes customers, the trips they take, the policies they purchase, and the vehicles insured by those policies. This example builds out three MongoDB collections, as well as two materialized views. The full Hackloade data model, which defines all the MongoDB objects within this example, can be found here.

Part 2: The data pipeline

Figure 2: The data pipeline - Usage-based insurance

The data processing pipeline component of this example consists of sample data, a daily materialized view, and a monthly materialized view. A sample dataset of IoT vehicle telemetry data represents the motor vehicle trips taken by customers. It’s loaded into the collection named ‘customerTripRaw’ (1). The dataset can be found here and can be loaded via mongoimport or other methods. To create a materialized view, a scheduled trigger executes a function that runs an aggregation pipeline. This generates a daily summary of the raw IoT data and lands it in a materialized view collection named ‘customerTripDaily’ (2). Similarly, for the monthly materialized view, a scheduled trigger executes a function that runs an aggregation pipeline that, on a monthly basis, summarizes the information in the ‘customerTripDaily’ collection and lands it in a materialized view collection named ‘customerTripMonthly’ (3).
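A hedged sketch of what the daily materialized-view pipeline might look like (the trip field names are assumed for illustration; the real pipeline lives in the GitHub repo):

```python
from pymongo import MongoClient

client = MongoClient("mongodb+srv://...")
db = client["insurance"]

# Summarize raw trip telemetry into one document per customer per day,
# then upsert the results into the materialized view with $merge.
db["customerTripRaw"].aggregate([
    {"$group": {
        "_id": {
            "customerId": "$customerId",
            "day": {"$dateTrunc": {"date": "$ts", "unit": "day"}},
        },
        "milesDriven": {"$sum": "$miles"},
        "tripCount": {"$sum": 1},
    }},
    {"$merge": {"into": "customerTripDaily", "whenMatched": "replace"}},
])
```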
For more info on these and other MongoDB platform features, see:

MongoDB Materialized Views
Building Materialized Views on TimeSeries Data
MongoDB Scheduled Triggers
Cron Expressions

Part 3: Automated decisions with Databricks

Figure 3: The data pipeline with Databricks - Usage-based insurance

The decision-processing component of this example consists of a scheduled trigger and an Atlas Chart. The scheduled trigger collects the necessary data and posts the payload to a Databricks MLflow API endpoint (the model was previously trained using the MongoDB Spark Connector on Databricks). It then waits for the model to respond with a premium calculated from the miles driven by a given customer in a month. The scheduled trigger then updates the ‘customerPolicy’ collection, appending the new monthly premium calculation as a new subdocument within the ‘monthlyPremium’ array. You can then visualize your newly calculated usage-based premiums with an Atlas Chart! In addition to the MongoDB platform features listed above, this section utilizes the following:

MongoDB Atlas App Services
MongoDB Functions
MongoDB Charts

Go hands on

Automated digital underwriting is the future of insurance. In this blog, we introduced how you can build a sample usage-based insurance data model with MongoDB and Databricks. If you want to see how quickly you can build a usage-based insurance model, check out our GitHub repository and dive right in! Learn more about MongoDB and Insurance.

March 6, 2023

Modernizing Core Banking: A Shift Toward Composable Systems

Modernizing core banking systems with MongoDB can bring many benefits, such as faster innovation, flexible deployment, and instant scalability. According to McKinsey & Company, it is critical for banks to modernize their core banking platforms with a “flexible back end” in order to stay competitive and adapt to new business models. With the emergence of better data infrastructure based on JSON and the ongoing evolution of software design, the next generation of composable core banking processes can be built on MongoDB's developer data platform, offering greater flexibility and adaptability than traditional systems.

The current market: Potential core banking solutions

Financial disruptors such as fintechs and challenger banks are growing their businesses and attracting customers by building on process-centric core banking systems, while traditional banks struggle with inflexible legacy systems. As seen in Figure 1 below, two potential solutions are the core banking “platform” and “suite.” The platform solution involves using a single vendor and several closely integrated modules, with a single, large database and a single roadmap. The suite solution, on the other hand, refers to using multiple vendors, multiple loosely integrated modules, and multiple databases and roadmaps. However, both of these approaches are inflexible and result in vendor lock-in, preventing the adoption of best-of-breed functionality from other vendors.

Figure 1: Core banking solutions: platform, suite and composable ecosystem.

A new approach, known as a composable ecosystem, as seen on the far right of Figure 1, is being adopted by some financial institutions. This approach consists of distinct, independent services and functions, with the ability to incorporate "best of breed" functionality without major integration challenges, multiple loosely coupled roadmaps, and individual component deployment without vendor lock-in. This allows for specialization and the development of advanced individual components that can be combined to deliver the best products and services, and it is better at adopting new technologies and approaches.

Composable ecosystems with MongoDB's developer data platform

MongoDB’s developer data platform is the best choice for financial institutions building a composable core banking ecosystem. Such an ecosystem is made up of four key building blocks, as seen below in Figure 2: JSON, BIAN, MACH, and data domains. JSON is a widely used data format in the financial industry, and MongoDB's BSON extension allows for the storage of additional data types. BIAN is a standard that defines a component business blueprint for banking, and MongoDB's technology supports BIAN and embodies MACH principles. MACH is a set of design principles for component-based architectures, and data domains enable the mapping of business capabilities to applications and data. By using MongoDB's developer data platform, financial institutions can implement flexible and scalable core banking systems that can adapt to ever-changing market demands.

Figure 2: MongoDB, the developer data platform for your core banking system.

MongoDB in action: Core banking use cases

Companies such as Temenos and Current have utilized MongoDB's capabilities to deliver innovative services and improve performance. As Tony Coleman, CTO of Temenos, said, "Implementing a good data model is a great start. Implementing a great database technology that uses the data model correctly, is vital. MongoDB is a really great fit for banking."
MongoDB and Temenos have worked on a number of new, component-based services to enhance the Temenos product family. Financial institutions can embed Temenos components to deliver new functionality in their existing on-premises environments, or through a full banking-as-a-service experience with Temenos T365, powered by MongoDB, on various cloud platforms. Temenos has a cloud-first, microservices-based infrastructure built with MongoDB, which gives customers flexibility while improving performance. Current is a digital bank that was founded with the aim of providing its customers with a modern, convenient, and user-friendly banking experience. To achieve this, the company needed to build a robust, scalable, and flexible technology platform, and it decided to build its core technology ecosystem in-house, using MongoDB as the underlying database technology. "MongoDB gave us the flexibility to be agile with our data design and iterate quickly," said Trevor Marshall, CTO of Current. In addition, MongoDB's strong security features make it a secure choice for handling sensitive financial data. Overall, MongoDB's capabilities make it a powerful choice for driving innovation and simplifying landscapes in the financial sector.

Conclusion

The financial industry needs to modernize its core banking systems to stay competitive in the face of rising disruptors and new business models. A composable ecosystem, built on a developer data platform like MongoDB, offers greater flexibility and adaptability than traditional legacy systems. If you’d like to learn more about how MongoDB can optimize your core banking functionality, take a look at our white paper: Componentized Core Banking: The next generation of composable banking processes built upon MongoDB.

January 26, 2023