
How MongoDB Helps Reduce Data Fragmentation for More Connected Healthcare Data

Many differences exist across healthcare systems around the globe, but there is one unfortunate similarity: fragmentation. Fragmentation is a consequence of the inability of various healthcare organizations (both public and private) to communicate with each other, or to do so in a timely or consistent manner, and it can have a dramatic impact on patient and population well-being.

Interoperability and communication

A patient can visit a specialist for a specific condition and the family doctor for regular checkups, perhaps even on the same day. But how can both doctors make appropriate decisions if patient data is not shared between them? Fragmented healthcare delivery, as described in this scenario, also leads to data fragmentation. Such data fragmentation can cause misdiagnosis and duplication of services. It can also lead to billing issues, fraud, and more, causing preventable harm and representing a massive economic burden for healthcare systems worldwide. To address healthcare fragmentation, we need truly interoperable health data.

The longitudinal patient record

A longitudinal patient record (LPR) is a full, lifelong view of a patient’s healthcare history and the care they’ve received. It’s an electronic snapshot of every interaction patients have, regardless of provider and service. Ideally, this record can be shared across any or all entities within a country’s healthcare system. The LPR represents a step beyond the electronic health record, extending past a specific healthcare network to a regional or national level. It’s critical that LPRs use the same data format and structure to guarantee that healthcare providers can interact with them easily and quickly. Data standards for LPRs are key to interoperability and can help address healthcare fragmentation, which, in turn, can help save lives by improving care.
FHIR

Fast Healthcare Interoperability Resources (FHIR) is a commonly used schema that comprises a set of API and data standards for exchanging healthcare data. FHIR enables semantic interoperability to allow effective communication between independent healthcare institutions and essentially defines “how healthcare information can be exchanged between different computer systems regardless of how it is stored in those systems” ( ONC Fact Sheet, “What is FHIR?” ). FHIR aims to solve the fragmentation problem of the healthcare system by directly attacking the root of the problem: miscommunication. As is the case for many other modern communication standards (for example, ISO 20022 for finance ), FHIR builds its REST API from a JSON schema. This foundation is convenient, considering most modern applications are built with object-oriented programming languages that use JSON as the standard file and data interchange format. This approach also makes it easier for developers to build applications, which is perhaps the most important point: The future of healthcare delivery may increasingly depend on the creation of applications that will transform how patients and providers interact with healthcare systems for the better.

MongoDB: FHIR and healthcare app-ification

MongoDB is a document database and is therefore a natural fit for building FHIR applications. With JSON as the foundation of the MongoDB document model, developers can easily store and retrieve data from their FHIR APIs to and from the database, with no translation or change of format needed. In fact, organizations can adopt FHIR resources as the basis of a new, canonical data model that existing internal systems can begin to shift and conform to. One example is the Exafluence FHIR API , which is built on top of MongoDB. Exafluence's API allows for real-time data interchange by leveraging Apache Kafka and Spark, in either an on-premises or multi-cloud deployment.
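To make the “no translation needed” point concrete, the sketch below round-trips a minimal FHIR Patient resource through JSON, which is exactly the shape a document database stores natively. The field values are illustrative, and the commented pymongo call is an assumption about how such a resource would typically be persisted.

```python
import json

# A minimal FHIR-style Patient resource (all values are illustrative).
# Because FHIR resources are JSON, the same structure could be inserted
# into MongoDB unchanged, e.g. db.patients.insert_one(patient) with pymongo.
patient = {
    "resourceType": "Patient",
    "id": "example-001",
    "name": [{"family": "Doe", "given": ["Jane"]}],
    "birthDate": "1980-04-12",
}

# What arrives on a FHIR REST API is what gets stored: the JSON round
# trip is lossless, so no ETL or schema-translation layer is required.
wire_format = json.dumps(patient)
stored = json.loads(wire_format)

assert stored == patient
print(stored["name"][0]["family"])
```

The same document can later be queried by any of its nested fields, which is what makes FHIR resources usable as a canonical data model without reshaping them first.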
Software teams leveraging the Exafluence solution have seen the velocity of their FHIR interoperability projects increase by 40% to 60% . MongoDB's tool set can be used to develop value-add business solutions on the FHIR-native dataset, without ETL.

Beyond FHIR , the trend toward healthcare app-ification (i.e., the increasing use of applications in healthcare) clashes with pervasive legacy architectures, which typically are not optimized for the developer experience. Because of this reliance on legacy architectures, modernization or transformation initiatives often fail to take hold or are postponed, as companies perceive the risks to be too high and the return on investment to be unclear. It doesn’t have to be this way, however. MongoDB’s industry-proven iterative approach to modernization reduces the risk of application and infrastructure migration and unlocks developer productivity and innovation. Interoperable, modern healthcare applications can now be built in a developer-friendly environment, with all the benefits expected from traditional databases (i.e., ACID transactions, an expressive query language, and enterprise-grade security). MongoDB provides the freedom for solutions to be deployed anywhere (e.g., on-premises, multi-cloud), a major advantage for healthcare organizations, which typically have multi-environment deployments.

Healthcare and the cloud

Digital healthcare will accelerate the adoption of cloud technologies within the industry, enabling innovation at scale and unlocking billions of dollars in value. Healthcare organizations, however, have so far been reluctant to move workloads to the cloud, mostly because of data privacy and security concerns. To support such cloud adoption initiatives, MongoDB Atlas offers a unique multi-cloud data platform , integrating MongoDB in a fully managed environment with enterprise-grade security measures and data encryption capabilities. MongoDB Atlas is HIPAA-ready and a key facilitator for GDPR compliance.
A holistic view of patient care

Interoperable healthcare records and communication standards will make longitudinal patient records possible by providing a much-sought-after holistic view of the patient, helping to fix healthcare fragmentation. Many challenges still exist, including transforming legacy infrastructures into modern, flexible data platforms that can adapt to the exponential changes happening in the healthcare industry. MongoDB provides a developer data platform designed to unlock developer productivity, ultimately giving healthcare organizations the power to focus on what matters most: the patient. Learn more about how MongoDB supports healthcare organizations .

May 26, 2022

From Tamagotchi Pets to IoT Factories: Digging In at MongoDB World’s Builder’s Fest

Everyone loves to build — whether it’s a child playing with LEGO Bricks or a startup founder building an app from scratch. At MongoDB World 2022 , attendees will have the chance to build something truly unique. Builder’s Fest, which takes place on June 9, 2022, at MongoDB World in New York, gives attendees the opportunity to get involved in hands-on workshops and coding competitions. The event will help developers learn how to master features of MongoDB — and also have a lot of fun.

“Builder’s Fest is a place where builders get together and feed off each other’s vibes and collaborate,” says Karen Huaulme, a principal developer advocate at MongoDB. “People are coming in from all different levels; we have something for everyone.”

A session at Builder’s Fest at MongoDB World 2019.

“At Builder’s Fest, we get our hands dirty and see MongoDB in action,” says MongoDB principal consulting engineer Dawid Esterhuizen. MongoDB experts will lead workshops and coding competitions to showcase their work, show off their skill sets, and reveal their secret passions. After more than two years of remote conferences, these engineers are excited to build something in person, together with colleagues and peers.

After delivering his keynote earlier in the week, MongoDB CTO Mark Porter will host four sessions during Builder’s Fest, including two “Chat With the CTO” blocks that will offer an open forum for attendees to ask him questions about MongoDB’s engineering, products, or direction — or anything else. Porter’s other sessions are: MongoDB’s Architectural Advantages and Engineering Culture at MongoDB.

Builder’s Pods

Attendees at Builder's Fest 2019 gather around to share different ideas with each other.

Much of the action at Builder’s Fest takes place in the Builder’s Pods, which will be spread throughout MongoDB World’s Partner Promenade. The Pods are set up for hands-on learning and tutorials, and they will host the mini workshops led by MongoDB experts and others.
David Bradford is an engineer at MongoDB who is hosting a session in the Builder’s Pod. “For Builder’s Fest, I’m really excited to see the breadth of ways that MongoDB can be used to build unique and novel tooling and solutions,” he says. Bradford’s session will detail how to use MongoDB to export Git history. “I’m looking forward to showing off some exploration I have been doing around leveraging MongoDB to explore trends and patterns hidden in Git repositories,” Bradford says. “I’m excited to be able to show how features like the aggregation framework and MongoDB Atlas Charts can be used to quickly build powerful analysis tools.”

During 2019’s Builder’s Fest, some of the Builder’s Pod topics included using MongoDB Atlas and Stitch, getting a Raspberry Pi to send IoT data to MongoDB, and creating visualizations using MongoDB Charts. New sessions this year will begin every half hour at each pod around the space. Workshops will cover a wide spectrum, from building custom Tamagotchi hardware to tinkering with our model-size IIoT smart factory.

John Page, a distinguished engineer at MongoDB, is hosting the session on building custom Tamagotchi hardware. “I'm excited to show off my passion projects and to introduce people to the joy of coding for tiny computers,” Page says. “If you’ve never programmed hardware directly before, you get a chance to try it.”

Says Dawid Esterhuizen, a principal consulting engineer at MongoDB: “I plan to visit the IoT and Tamagotchi pods as my first stop. I like getting involved in the technical bits," he says, "and at Builder’s Fest we get our hands dirty and see MongoDB in action.” The security topics are always of interest, Esterhuizen says, as well as the various coding challenges. At Builder’s Fest, there really is something for everyone.

On two large stages at either end of the Partner Promenade, MongoDB World 2022 participants can take part in coding challenges and gaming competitions.
Attendees can go head-to-head and show off their skills. Coding experts and gamers will not want to miss out on this electric activation on Day 3 of MongoDB World. Builder’s Fest will run from 11 a.m. to 3 p.m. inside the Partner Promenade. Be sure to check out the full Builder’s Fest agenda within the MongoDB World app for iOS or Android to find workshops that are right for you. Register today for MongoDB World, and use code MDBW22BLOG to save 25% off your tickets. We hope to see you in NYC from June 7 to June 9!

May 26, 2022

How Two MongoDB Employees Are Reflecting on Asian American and Pacific Islander Heritage Month

Asian American and Pacific Islander Heritage Month is a time to reflect on and celebrate the many communities and cultures that make up this group of individuals. Each community has its own history, struggles, and achievements, and it’s important to recognize that the experiences of individuals who belong to them may differ greatly. This year, two MongoDB employees share their personal stories and how they’re reflecting on Asian American and Pacific Islander Heritage Month.

Zaira Pirzada , senior strategy manager for governance, risk, and compliance

I live in the hyphen. Indian-Pakistani-American. Pakistani-Indian-American. Oh, I’m also Muslim. There are layers to unpack when I think about my identity and the places that define me. So I live somewhere in the hyphen, trying to figure out what it is to love the India, Pakistan, and the United States that I know. This is reflected in everything I am and I do.

My Indian mother and Pakistani father came to this country when they were 5 and 6 years old, so the motherland is a faint memory for them. My mother grew up loving hip-hop, rock, and R&B. She bought me my first cassette tapes. My father introduced me to classic rock. I thank him for showing me the brilliance of Queen and Ozzy.

My mother's Bismillah ceremony (she's the one covered in flowers)

Today in our family, we hold on to our culture in the ways we can. Our culture sits on our tongue when we speak Urdu mixed with English. I hope we never lose our language. That culture lives in our stomachs in what we crave when we cook and when we eat (even on Thanksgiving). I hope we never lose the taste for our spices. That culture is music to our ears and color to our eyes when we watch South Asian movies (with subtitles). My mother and grandmother could never have been in a position of leadership in a corporate environment. But here I am, exploring a world that is entirely different from the world I came from and that my family has ever known.
My parents provided me the opportunity to explore whatever I wanted to be and however I wanted to be it. I do it boldly, and I do it with privilege that I recognize.

Me and some of my cousins (all offspring of the people in the previous picture) in American clothing. Me and the same cousins, but in traditional clothing

Asian American and Pacific Islander Heritage Month means a lot to many people. To me, it’s a time to reflect on the journey of being American while still being Indian and Pakistani. Especially as the previous generation ages, we must reflect on who we are now while also remembering our roots. We are all living another chapter in the book of humanity’s becoming. This is a part of my chapter.

Kailie Yuan , education engineer

As I scroll through the web reading stories for Asian American and Pacific Islander Heritage Month, I can’t help but wonder: Why does it seem like everyone had such a fun, supporting, and loving childhood? As sad as it sounds, I never felt that I was enough. In most Chinese families, you are expected to be exceptional and flawless. My family was no different and always tried to make me go the extra mile with all that I do. But how many kids spend Saturdays in school trying to perform better in class, get their free time taken away and replaced with reading and studying, and are constantly told they need to put more effort into school when they don’t know what more they can do?

I grew up with the expectation that to be successful, I needed to become a doctor or a lawyer. I was disappointed in myself when I couldn’t deliver on the high standards that my parents held. Because of that, I despised this Chinese stereotype of perfectionism — and still do. I didn’t want to be judged or feel like I wasn’t enough anymore, and this caused me to distance myself from Chinese culture and people outside of my family. That changed when the COVID-19 pandemic happened.
Although I don’t support most traditional Chinese views, I realized that I still care tremendously about Chinese people. Several times, I cried out of anger after hearing reports of Asian people being targeted and blamed for the pandemic. I wondered what would happen if those experiencing discrimination were my parents or grandparents? For me, Asian American and Pacific Islander Heritage Month now means shining light on what it’s really like for many Asian Americans and the tragedy that has been happening since the COVID-19 pandemic started. So many people don’t acknowledge the horrors some Asians have been dealing with or the fear they have when they leave their homes. By sharing my story, I hope it helps others realize that things have been tough for us. Interested in joining MongoDB? We have hundreds of open roles on our teams across the globe and would love to help you transform your career.

May 25, 2022

What Does the Executive Order on Supply Chain Security Mean for Your Business? Security Experts Weigh In on SBOMs

In the wake of high-profile software supply chain attacks, the White House issued an executive order requiring more transparency in the software supply chain. The Executive Order (14028) on Improving the Nation’s Cybersecurity requires software vendors to provide a software bill of materials (SBOM). An SBOM is a list of ingredients used by software — that is, the collection of libraries and components that make up an application, whether they are third-party, commercial off-the-shelf, or open source software. By providing visibility into all the individual components and dependencies, SBOMs are seen as a critical tool for improving software supply chain security. The new executive order affects every organization that does or seeks to do business with the federal government.

To learn more about the requirements and implementation, MongoDB invited a few supply chain security experts for a panel discussion. In our conversation, Lena Smart, MongoDB’s Chief Information Security Officer, was joined by three expert panelists: Dr. Allan Friedman, senior advisor and strategist, CISA; Clinton Herget, principal solutions engineer, Snyk; and Patrick Dwyer, CycloneDX SBOM project co-lead, Open Web Application Security Project.

Background

In early 2020, hackers broke into Texas-based SolarWinds' systems and added malicious code to the company's Orion software system, which is used by more than 33,000 companies and government entities to manage IT resources. The code created a backdoor into affected systems, which hackers then used to conduct spying operations.

In December 2021, a vulnerability in the open source Log4j logging library was discovered. The vulnerability enables attackers to execute code remotely on any targeted computer. The vulnerability resulted in massive reconnaissance activity, according to security researchers, and it leaves many large corporations that use the Log4j library exposed to malicious actors.
Also in late 2021, the Russian ransomware gang REvil exploited flaws in software from Kaseya, an IT management application popular with MSPs. The attacks multiplied before warnings could be issued, resulting in malicious encryption of data and ransom demands as high as $5 million.

In our panel discussion, Dr. Friedman kicked off the conversation by drawing on the “list of ingredients” analogy, noting that knowing what’s in the package at the grocery store won’t by itself keep your diet or protect you from allergens — but good luck doing either without it. What you do with that information matters. So the data layer is where we will start to see security practitioners implement new intelligence and risk-awareness approaches, Friedman says.

SBOM Use Cases

The question of what to do with SBOM data was top of mind for all of the experts in the panel discussion. Friedman says that when the idea of SBOMs was first introduced, it was in the context of on-premises systems and network or firewall security. Now, the discussion is centered on SaaS products. What should customers expect from an SBOM for a SaaS product? As senior advisor and strategist at the Cybersecurity and Infrastructure Security Agency (CISA), Friedman says this is where the focus will be over the next few months as they engage in public discussions with the software community to define those use cases.

A few of the use cases panelists cited included pre-purchase due diligence, forensic and security analysis, and risk assessment. “At the end of the day, we're doing this hopefully to make the world of software more secure,” Smart says. No one wants to see another Log4j, the panelists agreed, but chances are we'll see something similar. A tool such as an SBOM could help determine exposure to such risks or prevent them from happening in the first place. Dwyer waded into the discussion by emphasizing the need for SBOM production and consumption to fit into existing processes.
“Now that we're automating our entire software production pipeline, that needs to happen with SBOMs as well,” Dwyer says. Herget agreed on the need to understand the use cases and edge cases, and to integrate them. “If we're just generating SBOMs to store them in a JSON file on our desktop, we’ve missed the point,” he says. “It's one thing to say that Maven can generate an SBOM for all Java dependencies in a given project, which is amazing until you get to integrating non-Java technologies into that application.”

Herget says that in the era of microservices, you could be working with an application that has 14 different top-level languages involved, with all of their corresponding sets of open source dependencies handled by an orchestrated, cloud-based continuous integration pipeline. “We need a lot more tooling to be able to do interesting things with SBOMs,” Herget continued. “Wouldn't it be great to have search-based tooling to be able to look at dependency tree relationships across the entire footprint?” For Herget, future use cases for SBOM data will depend on a central question: What do we have that is a scalable, orchestrated way to consume SBOM data that we can then throw all kinds of tooling against to determine interesting facts about our software footprint that we wouldn't necessarily have visibility into otherwise?

SBOMs and FedRAMP

In the past few years, Smart has been heavily involved in FedRAMP (Federal Risk and Authorization Management Program), which provides a standardized approach to government security authorizations for cloud service offerings. She asked the panelists whether SBOMs should be part of the FedRAMP SSP (System Security Plan). Friedman observed that FedRAMP is a “passed once, run anywhere” model, which means that once a cloud service is approved by one agency, any other government agency can also use it.
“The model of scalable data attestations that are machine-readable I think does lend itself as a good addition to FedRAMP,” Friedman says. Herget says that vendors will follow if the government chooses to lead on implementing SBOMs. “If we can work toward a state where we're not talking about SBOMs as a distinct thing or even an asset that we're working toward but something that’s a property of software, that's the world we want to get to.”

The Role of Developers in Supply Chain Security

As always, the role of the developer is one of the most critical factors in improving supply chain security, as Herget points out. “The complexity level of software has exceeded the capacity for any individual developer, even a single organization, to understand where all these components are coming from,” Herget says. “All it takes is one developer to assign their GitHub merge rights to someone else who's not a trusted party and now that application and all the applications that depend on it are subject to potential supply chain attack.”

Without supply chain transparency or visibility, Herget explains, there’s no way to tell how many assets are implicated in the event of an exploit. And putting that responsibility on developers isn’t fair because there are no tools or standardized data models that explain where all the interdependencies in an application ultimately lead. Ingredient lists are important, Herget says, but what’s more important are the relationships between components: which ones are included in a piece of software and why, who added them and when, all captured in a machine-readable and manipulable way.
“It's one thing to say, we have the ingredients,” Herget says, “But then what do you do with that, what kind of analysis can you then provide, and how do you get actionable information in front of the developer so they can make better decisions about what goes into their applications?” SBOM Minimum Requirements The executive order lays out the minimum requirements of an SBOM, but our panelists expect that list of requirements to expand. For now, there are three general buckets of requirements: Each component in an SBOM requires a minimum amount of data, including the supplier of the component, the version number, and any other identifiers of the component. SBOMs must exist in a widely used, machine-readable format, which today is either CycloneDX or SPDX . Policies and practices around how deep the SBOM tree should go in terms of dependencies. Moving forward, the panelists expect the list of minimum requirements to expand to include additional identifiers, such as a hash or digital fingerprint of a component, and a requirement to update an SBOM anytime you update software. They also expect additional requirements for the dependency tree, like a more complete tree or at least the ability to generate the complete tree. “Log4j taught people a lot about the value of having as complete a dependency tree as possible,” Friedman said, “because it was not showing up in the top level of anyone's dependency graph.” SBOMs for Legacy Systems One of the hurdles with implementing SBOMs universally is what to do with legacy systems, according to Smart. Johannes Ullrich, Dean of Research for SANS Technology Institute, has said that it may be unrealistic to expect 10- or 20-year-old code to ever have a reasonable SBOM. Friedman pointed to the use of binary analysis tools to assess software code and spot vulnerabilities, noting that an SBOM taken from the build process is far different from one built using a binary analysis tool. 
While the one taken from the build process represents the gold standard, Friedman says, there could also be incredible power in the binary analysis model, but there needs to be a good way to compare the two to ensure an apples-to-apples approach. “We need to challenge ourselves to make sure we have an approach that works for software that is in use today, even if it's not necessarily software that is being built today,” Herget says. As principal solutions engineer at Snyk, Herget says these are precisely the conversations they’re having about the right amount of support for 30-year-old applications that are still used in production but were built before the modern concept of package management became integrated into the day-to-day workflows of developers. “I think these are the 20% of edge cases that SBOMs do need to solve for,” Herget says. “Because if it’s something that only works for modern applications, it's never going to get the support it needs on both the government and the industry side.”

Smart closed the topic by saying, “One of the questions that we've gotten in the Q&A is, ‘What do you call a legacy system?’ The things that keep me awake at night, that's what I call legacy systems.”

Perfect Ending

Finally, the talk turned to perfection: how you define it, and whether it’s worth striving for before launching something new in the SBOM space. Herget, half-joking, said that perfection would be never having these talks again. “Think about how we looked at DevOps five or 10 years ago — it was this separate thing we were working to integrate within our build process,” he says. “You don’t see many panel talks on how we will get to DevOps today because it's already part of the water we’re all swimming in.” Dwyer added that perfection to him is when SBOMs are just naturally embedded in the modern software development lifecycle — all the tooling, the package ecosystems.
“Perfection is when it's just a given that when you purchase software, you get an SBOM, and whenever it's updated, you get an SBOM, but you actually don't care because it's all automated,” Dwyer says. “That’s where we need to be.”

According to Friedman, one of the things that SBOMs have started to do is to expose some of the broader challenges that exist in the software ecosystem. One example is software naming and software identity. Friedman says that in many industries, we don't actually have universal ways of naming things. “And it’s not that we don't have any standards, it’s that we have too many standards,” he explains. “So, for me, perfection is saying SBOMs are now driving further work in these other areas of security where we know we've accumulated some debt but there hasn't been a forcing function to improve it until now.”
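To tie the panel’s “minimum elements” discussion back to something tangible, here is a small sketch of a CycloneDX-style SBOM assembled by hand in Python. The component, supplier, and purl values are illustrative, and in practice an SBOM is emitted by build tooling rather than written manually.

```python
import json

# A minimal CycloneDX-style SBOM covering the three buckets discussed
# above: per-component data (supplier, version, identifiers), a
# machine-readable format (JSON), and dependency relationships.
# All component details here are illustrative.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.4",
    "version": 1,
    "components": [
        {
            "type": "library",
            "supplier": {"name": "Apache Software Foundation"},
            "name": "log4j-core",
            "version": "2.17.1",
            "purl": "pkg:maven/org.apache.logging.log4j/log4j-core@2.17.1",
        }
    ],
    "dependencies": [
        {
            "ref": "pkg:maven/org.apache.logging.log4j/log4j-core@2.17.1",
            "dependsOn": [],
        }
    ],
}

print(json.dumps(sbom, indent=2))
```

A consumer can then scan the `components` array against a vulnerability feed, which is exactly the lookup that was so painful for Log4j in the absence of SBOMs.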

May 23, 2022

MongoDB & IIoT: A 4-Step Data Integration

The Industrial Internet of Things (IIoT) is driving a new era of manufacturing, unlocking powerful new use cases to forge new revenue streams, create holistic business insights, and provide agility based on global and consumer demands. In our previous article, “ Manufacturing at Scale: MongoDB & IIoT ,” we gave an overview of the adoption and implementation of IIoT in manufacturing processes, testing various use cases with a model-size smart factory (Figure 1). In this post, we’ll look at how MongoDB’s flexible, highly available, and scalable data platform allows for end-to-end data integration using a four-step framework.

Figure 1: Architecture diagram of MongoDB's application data platform with MQTT-enabled devices.

4-step framework for end-to-end data integration

The four stages of this framework (Figure 2) are:

Connect: Establish an interface to “listen” and “talk” to the device(s).

Collect: Gather and store data from devices in an efficient and reliable manner.

Compute: Process and analyze data generated by IoT devices.

Create: Create unique solutions (or applications) through access to transformational data.

Figure 2: The four-step framework for shop floor data integration

During the course of this series, we will explore each of the four steps in detail, covering the tools and methodology and providing a walkthrough of our implementation process, using the Fischertechnik model as a basis for testing and development. All of the steps, however, are applicable to any environment that uses a Message Queuing Telemetry Transport (MQTT) API. The first step of the process is Connect.

The first step: Connect

The model factory contains a variety of sensors that are generating data on everything from the camera angle to the air quality and temperature — all in real time. The factory uses the MQTT protocol to send and receive input, output, and status messages related to the different factory components.
You may wonder why we don’t immediately jump to the data collection stage. The reason is simple: We must first be able to “see” all of the data coming from the factory, which will allow us to select the metrics we are interested in capturing and configure our database appropriately. As a quick refresher on the architecture diagram of the factory, we see in Figure 3 that any messages transmitted in or out of the factory are routed through the remote MQTT broker. The challenge is to successfully read messages from, and write messages to, the factory.

Figure 3: Architecture diagram of the model smart factory

It is important to remember that the method of making this connection between the devices and MongoDB depends on the communication protocols the device is equipped with. On the shop floor, multiple protocols are used for device communication, such as MQTT and OPC UA, which may require different connector technologies, such as Kafka, among other off-the-shelf IoT connectors. In most scenarios, MongoDB can be integrated easily, regardless of the communication protocol, by adding the appropriate connector configuration. (We will discuss more about that implementation in our next blog post.) For this specific scenario, we will focus on MQTT. Figure 4 shows a simplified version of our connection diagram.

Figure 4: Connecting the factory's data to MongoDB Atlas and Realm

Because the available communication protocol for the factory is MQTT, we will do the following:

1. Set up a remote MQTT broker and test its connectivity.

2. Create an MQTT bridge.

3. Send MQTT messages to the device(s).

Note that these steps can be applied to any devices, machinery, or environment that come equipped with MQTT, so you can adapt this methodology to your specific project. Let’s get started.

1. Set up a remote MQTT broker

To focus on the connection of the brokers, we used a managed service from HiveMQ to create a broker and the necessary hosting environment.
However, this setup would work just as well with any self-managed MQTT broker. HiveMQ Cloud has a free tier, which is a great option for practice and for testing the desired configuration. You can create an account to set up a free cluster and add users to it. These users will function as clients of the remote broker. We recommend using different users for different purposes. Test the remote broker connectivity We used the Mosquitto CLI client to directly access the broker(s) from the command line. Then, we connected to the same network used by the factory, opened a terminal window, and started a listener on the local TXT broker using this command: mosquitto_sub -h <local-txt-broker-address> -p 1883 -u txt -P xtx -t f/o/# Next, in a new terminal window, we published a message to the remote broker on the same topic as the listener. A complete list of all topics configured on the factory can be found in the Fischertechnik documentation . You can fill in the command below with the information of your remote broker. mosquitto_pub -h <hivemq-cloud-host-address> -p 8883 -u <hivemq-client-username> -P <hivemq-client-password> -t f/o/# -m "Hello" If the bridge has been configured correctly, you will see the message “Hello” displayed in the first terminal window, which contains your local broker listener. Now we get to the good part. We want to see all the messages that the factory is generating on all of the topics. Because we are a bit more familiar with the Mosquitto CLI, we started a listener on the local TXT broker using this command: mosquitto_sub -h <local-txt-broker-address> -p 1883 -u txt -P xtx -t # where the topic “#” essentially means “everything.” And just like that, we can get a sense of which parameters we can hope to extract from the factory into our database. As an added bonus, the data is already in JSON. This will simplify the process of streaming the data into MongoDB Atlas once we reach the data collection stage, because MongoDB runs on the document model , which is also JSON-based.
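As a sketch of how little translation is needed, the snippet below turns a raw MQTT JSON payload into a document ready for insertion with a driver such as pymongo. The topic name and sensor field names here are hypothetical placeholders, not the factory's actual schema, and the broker and database wiring (e.g. via paho-mqtt and pymongo) is left as a comment:

```python
import json
from datetime import datetime, timezone

def payload_to_document(topic: str, payload: bytes) -> dict:
    """Parse an MQTT JSON payload into a MongoDB-ready document."""
    doc = json.loads(payload)
    doc["topic"] = topic  # keep the source topic so documents can be filtered later
    doc["receivedAt"] = datetime.now(timezone.utc)  # arrival time on our side
    return doc

# Hypothetical temperature reading, shaped like the factory's JSON messages.
sample = b'{"ts": "2022-03-23T13:54:02.085Z", "t": 23.4, "h": 41.2}'
doc = payload_to_document("i/sensor", sample)
print(doc["topic"], doc["t"])

# Inside an MQTT client's on_message callback, this document could be passed
# straight to a pymongo collection, e.g. collection.insert_one(doc),
# with no reshaping of the JSON needed.
```

Because the payload is already JSON, the only additions are metadata fields; the document lands in Atlas in the same shape the device published it.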
The following screen recording shows the data stream that results from starting a listener on all topics to which the devices publish while running. You will notice giant blocks of data, which are the encoding of the factory camera images taken every second, as well as other metrics, such as stock item positions in the warehouse and temperature sensor data, all of which is sent at regular time intervals. This is a prime example of time series data, which we will describe how to store and process in a future article. Video: Results of viewing all device messages on all topics 2. Create an MQTT bridge An MQTT bridge (Figure 5) is a uni- or bi-directional binding of topics between two MQTT brokers, such that messages published to one broker are relayed seamlessly to clients subscribed to that same topic on the other broker. Figure 5: Message relays between MQTT brokers In our case, the MQTT broker on the main controller is configured to forward/receive messages to/from the remote MQTT broker via the following MQTT bridge configuration:

connection remote-broker
address <YOUR REMOTE MQTT BROKER IP ADDRESS:PORT>
bridge_capath /etc/ssl/certs
notifications false
cleansession true
remote_username <HIVEMQ CLIENT USERNAME>
remote_password <HIVEMQ CLIENT PASSWORD>
local_username txt
local_password xtx
topic i/# out 1 "" ""
topic o/# in 1 "" ""
topic c/# out 1 "" ""
topic f/i/# out 1 "" ""
topic f/o/# in 1 "" ""
try_private false
bridge_attempt_unsubscribe false

This configuration file is created and loaded directly into the factory broker via SSH. 3. Send MQTT messages to the device(s) We can test our bridge configuration by sending a meaningful MQTT message to the factory through the HiveMQ websocket client (Figure 6). We signed into the console with one of the users (clients) previously created and sent an order message to the “f/o/order” topic used in the previous step.
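The order payload can also be built programmatically rather than typed into the websocket client. Below is a minimal Python sketch; the timestamp format and the set of valid colors mirror the sample order message in this post, while the actual publish call (e.g. via an MQTT client library such as paho-mqtt) is left as a comment because it depends on your broker credentials:

```python
import json
from datetime import datetime, timezone

# Valid workpiece colors for the factory's order message.
VALID_COLORS = {"RED", "WHITE", "BLUE"}

def make_order_message(color: str) -> str:
    """Build the JSON payload expected on the f/o/order topic."""
    if color not in VALID_COLORS:
        raise ValueError(f"unknown workpiece color: {color}")
    # ISO 8601 with millisecond precision and a trailing 'Z' (UTC),
    # matching the sample payload shown in this post.
    ts = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.%f")[:-3] + "Z"
    return json.dumps({"type": color, "ts": ts})

msg = make_order_message("WHITE")
print(msg)

# To actually place the order, publish `msg` to the "f/o/order" topic
# with your MQTT client of choice, using the bridged broker's credentials.
```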
Figure 6: Sending a test message using the bridged broker The format for the order message is: {"type":"WHITE","ts":"2022-03-23T13:54:02.085Z"} Here, "type" refers to the color of the workpiece to order (one of three workpiece colors: RED, WHITE, or BLUE), and "ts" refers to the timestamp at which the message is published, which determines its place in the message queue and when the order process will actually start. Once the bridge is configured correctly, the factory will start to process the order according to the workpiece color specified in the message. Thanks for sticking with us through to the end of this process. We hope this methodology provides fresh insight for your IoT projects. Find a detailed tutorial and all the source code for this project on GitHub. Learn more about MongoDB for Manufacturing and IIoT . This is the second of an IIoT series from MongoDB’s Industry Solutions team. Read the first post, “ Manufacturing at Scale: MongoDB & IIoT .” In our next article, we will explore how to capture time series data from the factory using MongoDB Atlas and Kafka .

May 20, 2022

Open Banking: How to Future-Proof Your Banking Strategy

Open banking is on the minds of many in the fintech industry, leading to basic questions such as: What does it mean for the future? What should we do today to better serve customers who expect native open banking services? How can we align with open banking standards while they’re still evolving? In a recent panel discussion , I spoke with experts in the fintech space: Kieran Hines, senior banking analyst at Celent; Toine Van Beusekom, strategy director at Icon Solutions; and Charith Mendis, industry lead for banking at AWS. We discussed open banking standards, what the push to open banking means for innovation, and more. This article provides an overview of that discussion and offers best practices for getting started with open banking. Watch the panel discussion Open Banking: Future-Proof Your Bank in a World of Changing Data and API Standards to learn how you can future-proof your open banking strategy. Fundamentals To start, let’s answer the fundamental question: What is open banking ? The central tenet of open banking is that banks should make it easy for consumers to share their financial data with third-party service providers and allow those third parties to initiate transactions on their behalf — adding value along the way. But, as many have realized, facilitating open banking is not so easy. At the heart of the open banking revolution is data — specifically, the infrastructure of databases, data standards, and open APIs that make the free flow of data between banks, third-party service providers, and consumers possible. What does this practice mean for the banking industry? In the past, banks almost exclusively built their own products, which has always been a huge drain on teams, budgets, and infrastructure. With open banking, financial services institutions are now partnering with third-party vendors to distribute products, and many regulations have already emerged to dictate how data is shared. 
Because open banking is uncharted territory, it presents an array of both challenges — mostly regulatory — and opportunities for both established banks and disruptors to the space. Let’s dig into the challenges first. Challenges As open banking, and the technology practices that go along with it, evolve, related compliance standards are emerging and evolving as well. If you search for “open banking API,” you’ll find that nearly every vendor has their own take on open banking and that they are all incompatible to boot. As with any developing standard, open banking standards are not set in stone and will continue to evolve as the space grows. The fast-changing environment will hinder those banks that do not have a flexible data architecture that allows them to quickly adapt to provider standards as needed. An inflexible data architecture becomes an immediate roadblock with unforeseen consequences. Closely tied to the challenge of maintaining compliance with emerging regulations is the challenge that comes with legacy architecture. Established banks deliver genuine value to customers through time-proven, well-worn processes. In many ways, however, legacy operations and the technology that underpins them are doomed to stand in the way not only of open banking but also of operational efficiency goals and the ability to meet the customer experience expectations of a digital-native consumer base. To avoid being slowed down by clunky legacy systems, banks need an agile approach that ensures the flexibility to pivot to developing challenges. Opportunities The biggest opportunity for institutions transitioning into open banking is the potential for rapid innovation. Banking IP is headed in new and unprecedented directions. Pushing data to the cloud, untangling spaghetti architecture, or decentralizing your data by building a data mesh frees up your development teams to innovate, tap into new revenue streams, and achieve the ultimate goal: providing greater value to your customers.
As capital becomes scarce in banks, the ability to repeatedly invest in new pilots is limited. Instead of investing months’ or years’ worth of capital into an experiment, building new features from scratch, or going to the board to secure funding, banks need to succeed immediately, be able to scale from prototype to global operation within weeks, or fail fast with new technology. Without the limiting factors of legacy software or low levels of capital, experimentation powered by new data solutions is now both free and low risk. Best Practices Now that we’ve described the potential that open banking presents for established and emerging industry leaders, let’s look at some open banking best practices, as described in the panel discussion . Start with your strategy. What’s your open banking strategy in the context of your business strategy? Ask hard questions like: Why do you want to transform? What’s wrong with what’s going on now? How can you fix current operations to better facilitate open banking? What new solutions do you need to make this possible? A wholesale shift to open banking means an entirely new business strategy, and you need to determine what that strategy entails before you implement sweeping changes. View standards as accelerators, not inhibitors. Standards can seem like a burden on financial institutions, and in most cases, they do dictate change that can be resource intensive. But you can also view changing regulations as the catalyst needed to modernize. While evolving regulations may be the impetus for change, they can also open up new opportunities once you’re aligned with industry standards. Simplify and unify your data. Right now, your data likely lives all over the place, especially if you’re an established bank. Legacy architectures and disparate solutions slow down and complicate the flow of data, which in turn inhibits your adoption of open banking standards.
Consider how you can simplify your data by reducing the number of places it lives. Migrating to a single application data platform makes it faster and easier to move data from your financial institution to third parties and back again. Always consider scale. When it comes to open banking, your ability to scale up and scale down is crucial — and is also tied to your ability to experiment, which is also critical. Consider the example of “buy now, pay later” service offerings to your clients. On Black Friday, the biggest shopping day of the year, financial institutions will do exponentially more business than on, say, a regular Tuesday in April. So, to meet consumer demand, your payments architecture needs to be able to scale up to meet the influx of demand on a single, exceptional day and scale back down on a normal day to minimize costs. Without the ability to scale, you may struggle to meet the expectations of customers. Strive for real time. Today, everyone — from customers to business owners to developers — expects the benefits of real-time data. Customers want to see their exact account balance when they want to see it, which is already challenging enough. If you add the new layer of open banking to the mix, with data constantly flowing from banks to third parties and back, delivering data in real time to customers is more complex than ever. That said, with the right data platform underpinning operations, the flow of data between systems can be simplified and made even easier when your data is unified on a single platform. If you can unlock the potential of open banking, you can innovate, tap into new revenue streams, shake off the burden of legacy architecture, and ultimately achieve a level of differentiation likely to bring in new customers. Watch the panel discussion to learn more about open banking and what it means for the future of banks.

May 19, 2022

Collaborative User Story Mapping with Avion and MongoDB

When companies think about their products, they often fall into the trap of planning without truly considering their user’s journey and experience. Perhaps it’s time to start thinking about products from the customer's perspective. Avion was founded by James Sear and Tim Ramage with one thing in mind: to provide the most intuitive and enjoyable user story mapping experience for agile teams to use, from product inception to launch (and beyond). The key, Sear said, is that user story mapping gives you a way of thinking about your product and its features, typically software, from the perspective of your customers or users. This is facilitated by defining things that the user can do (user stories) within the context of your core user journeys. Built with MongoDB spoke with Sear about the idea of user story mapping, how he and Ramage started Avion, and what it’s been like to work with MongoDB. Built with MongoDB: What is Avion all about? James Sear : Avion is a digital user story mapping tool for product teams. It helps them to break down complexity, map out user journeys, build out the entire scope of their product, and then decide what to deliver and in what order. It’s a valuable tool that is typically underused. Not everyone understands what story mapping is, as it’s quite a specific technique, and you do have to put the time in to learn it in order to get the most out of it. But once you have, there is so much value to be unlocked in terms of delivering better outcomes for your users, as opposed to just building stuff for the sake of it. Built with MongoDB: What made you decide to start Avion? Sear: My co-founder Tim Ramage and I met around 2014, and we were jointly involved in teams that were building lots of different software products for various companies, both big and small.
And while we were very involved in their technical implementation, we were also both really interested in the product management side of delivery, because it’s just so crucial to being successful. That includes everything from UX decisions, product roadmapping, prioritization, customer feedback, and metrics to managing the team; it all really interested us. However, one part of the process that we found particularly difficult was taking your clients’ big ideas and translating them into some sort of actionable development plan. We tried a few different approaches for this, until we stumbled across a technique called user story mapping. User story mapping manages to pull together all of your core user journeys, the scope of all features that could be built, and how you plan to deliver them. On top of that, it conveys the order in which you should be working on things. Once you have this powerful asset, you can have effective conversations with your team and answer the most important questions, such as: What’s the minimum we can build to make this valuable to users? Where does this feature actually appear for our users? What are we going to build next, and why? It really does allow you to communicate more effectively with stakeholders. For instance, you could use it to update your CEO and talk them through what you’re building now, answering those difficult questions like why you’re not building feature X or feature Y. You’ve got this outline right in front of you that makes sense to a product person, a developer, or even an outside stakeholder. Built with MongoDB: Initially, you started to build out a collaborative tool for product teams, and Avion has evolved into more. What else has changed in your journey at Avion? Sear: Our goal at launch was to provide our customers with a best-in-class story mapping experience in the browser. This meant nailing the performance and user interaction, so creating a story map just felt fluid and easy.
After this, we focused on tightly integrating with more traditional backlog tools, like Jira and Azure DevOps. We always maintain that our customers shouldn’t have to give up their existing tooling to get value from Avion — so we built it to sit in the middle of their stack and assist them with planning and delivery. Built with MongoDB: What are some of the challenges that you’ve faced in such a crowded productivity space? Sear: It’s difficult to stick out amongst the crowd, but our unique value proposition is actually quite niche. This allows us to show our potential customers a different side of product planning that they might not have seen before. And for anyone that already knows about story mapping, Avion is an opinionated and structured canvas for them to just get work done and be productive quickly. Ultimately, we try to stick out by providing value in a vertical slice of product planning that is often overlooked. Built with MongoDB: What kind of experiences have you had working with MongoDB? Sear: There have been many scenarios where we’ve been debugging difficult situations with production scaling issues, and we just cannot work out why the apps have gone down overnight. There are so many tricky things that come up when you’re running in production. But we have always managed to find something in MongoDB Atlas that can help us pinpoint the issue, whether it’s usage graphs or some kind of metrics that allows us to really dig down into the collections, the queries, and everything. MongoDB has been excellent for that in terms of features. It just gives you peace of mind. We’ve had customers delete stuff of their own accord and get really upset, but we’ve been able to help them by going back to snapshot backups and retrieving that data for them. From a customer support perspective, it’s massive to have that option on the table. MongoDB Atlas is really useful to us, and we don’t have to configure anything; it’s just amazing.
The MongoDB upgrades are completely seamless, and help us stay on the latest version of the database which is a huge win for security. Learn more about user story mapping with Avion , and start planning a more user-centric backlog. Interested in learning more about MongoDB for Startups? Learn more about us on the MongoDB Startups page .

May 19, 2022

Atlas Charts Adds a Dedicated Hub for Managing Embedded Charts and Dashboards

Since the release of the Charts Embedding SDK in May 2020, developers have been exploring powerful new ways to visualize and share data from their MongoDB Atlas clusters. Embedding charts and dashboards is a valuable use case for Charts users, and the new Embedding Page streamlines the embedding experience for first-time users and veterans alike. Everything you need on one screen Don’t worry if the concept of embedding within the MongoDB Charts platform is new to you. The Getting Started tab provides configuration guidance and links to video references, code snippets, live sandboxes, and other resources to help you get started. But just as your applications may evolve according to your needs, your embedding requirements may also change over time. Once you have set up an embedded dashboard or chart, the Items tab acts as the landing page. Think of this as a live snapshot of your current embedding environment. You’ll see a list of all of your charts grouped by their dashboards, be able to search based on title or description, and filter the list to show only dashboards. Within each row, you can view a chart or dashboard’s embedded status, see which type of embedding is enabled, view and copy the embedding ID, and access the full suite of embedding settings available for each item. This means that you can add filters or change your embedding method without having to know exactly where every chart or related setting lives. This approach also lets you operate with confidence on one single page. How cool is that? Authentication settings The Charts SDK allows you to configure unauthenticated embedding for dashboards or charts, making for a painless way to share these items in a safe and controlled environment. Depending on your use case, this setup may be a little more flexible than you’d like. The Authentication Settings tab contains authentication provider settings, giving project owners a single source of truth for adding and maintaining providers.
Our focus for this feature is on simplicity and consolidation. We believe wholeheartedly that if we can enable you to spend less time hunting down where to configure settings or find resources, you can focus more on what really matters and build great software. For more information on authentication options, read our documentation . New to MongoDB Atlas Charts? Get started today by logging in to or signing up for MongoDB Atlas , deploying or selecting a cluster, and activating Charts for free.

May 18, 2022

The Insider's Guide to MongoDB World 2022

Join us from June 7 to June 9 at MongoDB World 2022, which will be held at New York City’s Javits Center. Enjoy three packed days of keynotes, workshops, talks, technical panels, networking, community building, and more. Whether you’re eager to reconnect with your peers in person or are slightly overwhelmed by the choice of sessions and activities, you’ll find everything you need to know in this post. We will highlight special events at MongoDB World, preview what to expect and how to prepare, and provide tips on getting the most out of the conference. Plan your itinerary Space for workshops, talks, and other sessions is limited, so make sure to check out the World 2022 agenda and sign up for the activities that interest you. “Take time to create a list — and budget time between sessions,” advises Ben Flast, a MongoDB product management lead and featured World speaker . “There’s a lot going on, so have a plan to make sure to see the sessions that are most important to you.” Pick your learning path Whether they’re conference tracks, Chalk Talks, or keynotes, each event has a different audience, purpose, and skill level. The must-see keynotes from MongoDB CEO Dev Ittycheria, CTO Mark Porter, and chief product officer Sahir Azam will showcase announcements and new releases — and explain how they fit into the MongoDB ecosystem. Additionally, we are excited to announce that renowned technologist Ray Kurzweil has been confirmed as a keynote speaker. A distinguished thinker, inventor, and leader, Kurzweil has transformed multiple areas of technology, pioneering industry-leading products such as flatbed scanners, the first text-to-speech synthesizer, and much more. Don’t miss this exciting speech from a legend of the tech industry. Talks and workshops are divided into eight tracks, each of which includes a variety of sessions.
The tracks include: Partner talks; the MongoDB Application Data Platform; Community Cafe; Governance, Compliance, and Security; Industry and Solutions Data Architecture; Modern Application Development; Make It Matter; Schema Design and Modeling; and the keynote speeches. Make It Matter, a track on inclusion, diversity, equity, and accessibility (IDEA), will be held in our dedicated IDEA Lounge. “People learn in all sorts of ways,” explains Karen Huaulme, a principal developer advocate at MongoDB. “That’s why we have hour-long sessions, 15-minute lightning talks, and everything in between. Feel free to mix and match so that you can learn in a way that works for you.” See the entire World 2022 agenda and mix and match your sessions. For instance, Jumpstarts are high-level tutorials that introduce newcomers to basic (but important) MongoDB skills and best practices. This year, we’re running Jumpstarts on data and schema modeling, MongoDB Atlas, and Atlas Search, all of which will be moderated by seasoned MongoDB product managers and users. In contrast, Chalk Talks are highly interactive, small-group sessions for everyone from beginners to experts. Chalk Talks tend to be short (around 30 minutes), with plenty of audience participation, whiteboarding, and free-flowing discussion. For something more immersive, try a workshop — the long meal to the Chalk Talk’s snack break. Held only on Day 3, workshops are deep dives into highly technical topics. The first two hours will set the tone with onboarding, configuration, and lectures, and the second half will center on relevant real-world scenarios and attendee needs. If you want to practice using a specific technology and figure out how to make it work in your environment, sign up for a workshop . If you’re curious about the big picture, attend a Product Announcement or a Product Vision talk. Announcements will cover individual releases, how to use them, and how they fit into the MongoDB product family.
Vision talks will marry new and existing products in order to explore different themes and workflows. Examples include " Serverless: The Future of Application Development " and " Going Real-Time With MongoDB Atlas ." More information can be found on the MongoDB World Agenda , which is updated regularly. Come prepared Speakers and facilitators will be in touch in advance to share all the necessary prerequisites, whether it’s downloadable modules, syllabi, or any other required materials. “Preparation will depend on the specific event,” says Jesse Hall, a senior developer advocate at MongoDB and workshop presenter. “For example, my workshop takes a serverless approach — setting up MongoDB in JAMstack — so be sure to bring a laptop with a basic development environment (like Node.js or VSCode).” Don't miss the hallway track Let serendipity take the wheel as you mix and mingle with other attendees, speakers, customers, partners, and other industry leaders between sessions, at the Community Cafe, and elsewhere. “Don’t be afraid to explore different events or exchange ideas with new people,” Huaulme suggests. “That’s where the magic happens. Don’t be intimidated by the idea of chatting with speakers or presenters. They’re very approachable, down-to-earth, and happy to hear from you.” “Keep an open mind and an open ear, and definitely reach out to anyone wearing MongoDB swag,” says Flast. “They’re working on something interesting.” Meet MongoDB partners Get to know our partners and learn how they build the future with MongoDB. Many of our top partners will be presenting talks at MongoDB World on topics from building operational data stores to working with edge devices , and they’ll also be running booths at the Partner Promenade. These organizations include major cloud companies, along with leaders in streaming data, real-time analytics, and much more. 
Matt Asay, MongoDB’s vice president of Partner Marketing , encourages visitors to make time to learn how each partner complements the MongoDB application data platform, and to see how these partners help enterprises of all sizes build the future. For his part, Asay looks forward to moderating a panel with leaders from Vercel, Prisma, and Apollo GraphQL, and to learning more about how these cutting-edge companies build for — and with — developers. Try something new Check out events that are off the beaten track, like the Builders’ Fest and the Community Cafe. At both venues, you’ll be able to unleash your creativity and pick up new skills. Check out the unique workshops at the Builders’ Pods, relaxed areas with lots of comfortable chairs, tables, and monitors. In the past, participants have learned to pick locks, create ice sculptures, construct machine learning algorithms, and develop games, among other things. For Huaulme, the Builders’ Fest sessions are a personal favorite. “The last time around, I learned to pick locks, while others learned to jump rope,” she recalls. “Builders’ Fest is a great place to learn new, fun skills — not all of which are related to tech.” Builders’ Fest will also include competition alongside discovery and exploration. Head to the nearby stages, where you can choose from coding challenges (like Code Golf) and play popular video games such as MarioKart, Donkey Kong, and more. Test your skills — whether it’s your mastery of code or your fast reflexes — against your peers. Stop by the Community Cafe to recharge. Lounge with coffee, thumb through the products at the swag store, and take a break from the action. Don’t forget to check out the silk-screen booth, where you can customize T-shirt designs and watch as they’re printed before your eyes. Register today for MongoDB World, and use code MDBW22BLOG to save 25% off your tickets. We hope to see you in NYC from June 7 to June 9!

May 18, 2022

From Core Banking to Componentized Banking: Temenos Transact Benchmark with MongoDB

Banking used to be a somewhat staid, hyper-conservative industry, seemingly evolving over eons. But banking in recent years has dramatically changed. Under pressure from demanding consumers and nimble new competitors, development cycles measured in years are no longer sufficient in a market expecting new products, such as Buy-Now-Pay-Later, to be introduced within months or even weeks. Just ask Temenos, the world's largest financial services application provider, providing banking for more than 1.2 billion people . Temenos is leading the way in banking software innovation and offers a seamless experience for their client community. Financial institutions can embed Temenos components, which deliver new functionality in their existing on-premises environments (or in their own environment in their cloud deployments), or go through a full banking-as-a-service experience with Temenos T365 powered by MongoDB on various cloud platforms. Temenos embraces a cloud-first, microservices-based infrastructure built with MongoDB, giving customers flexibility, while also delivering significant performance improvements. This new MongoDB-based infrastructure enables Temenos to rapidly innovate on its customers' behalf, while improving security, performance, and scalability. Architecting for a better banking future Banking solutions often have a life cycle of 10 or more years, and some systems I am involved in upgrading date back to the 1980s. Upgrades and changes, often focused on regulatory or technical needs (for example, operating system versions), hardware upgrades, and new functionality, are bolted on. The fast pace of innovation, a mobile-first world, competition, crypto, and DeFi are demanding a massive change for the banking industry, too. The definition of new products and rollouts measured in weeks and months versus years requires an equally drastic change in technology adoption. Banking is following a path similar to the retail industry.
Retail was built upon a static design approach with monolithic applications connected through ETL (Extract, Transform, and Load) and the “unloading of data,” an approach that was robust and built for the times. The accelerated move to omnichannel requirements brought a component-driven architecture design to fruition, one that allowed faster innovation, with fit-for-purpose components added to (or discarded from) a solution. The codification of this is called MACH (Microservices, API-first, Cloud-native, and Headless), and a great example is the flexibility brought to bear through companies such as Commercetools . Temenos is taking the same direction for banking. Its concept of components that are seamlessly added to existing Temenos Transact implementations empowers banks to start an evolutionary journey from their existing on-premises environments to a flexible hybrid landscape delivering best-of-breed banking experiences. Key for this journey is a flexible data concept that meshes the existing environments with the requirements of fast-changing components available on premises and in the cloud. Temenos and MongoDB joined forces in 2019 to investigate the path toward data in a componentized world. Over the past few years, our teams have collaborated on a number of new, innovative component services to enhance the Temenos product family, and several banking clients are now using those components in production. However, the approach we've taken allows banks to upgrade on their own terms. By putting components “in front” of the Temenos Transact platform , banks can start using a componentization solution without disrupting their ability to serve existing customer requirements. Similarly, Temenos offers MongoDB's critical data infrastructure with an array of deployment capabilities, from full-service multi- or hybrid cloud offerings to on-premises self-managed, depending on local regulations and the client’s risk appetite.
In these and other ways, Temenos makes it easier for its banking clients to embrace the future without upsetting existing investments. From an architectural perspective, this is how component services utilize the new event system of Temenos Transact and enable a new way of operating:

Temenos Transact optimized with MongoDB

Improved performance and scale

All of which may sound great, but you may still be wondering whether this combination of MongoDB and Temenos Transact can deliver the high throughput needed by Tier 1 banks. Based on extensive testing and benchmarking, the answer is a resounding yes. Having been in the benchmark business for a long time, I know that you should never trust just ANY benchmark. (In fact, my colleague, MongoDB distinguished engineer John Page, wrote a great blog post about how to benchmark a database.) But Temenos, MongoDB, and AWS jointly felt the need to scratch this nagging itch and deliver a true statement on performance, providing the client community with proof of a superior solution.

Starting with the goal of reaching a throughput of 25,000 transactions per second, it quickly became obvious that this rather conservative target could easily be smashed, so we decided to quadruple it to 100,000 transactions per second using a more elaborate environment. The newly improved version of Temenos Transact, in conjunction with component services, proved to be a performance giant. One hundred thousand financial transactions per second with a MongoDB response time under 1ms is a major milestone compared with earlier benchmarks, such as a 79ms response time with Oracle. Naturally, this result is in large part due to the improved component behavior and the AWS Lambda functions that now run the business functionality, but the document model of MongoDB, in conjunction with its idiomatic driver concept, has proven superior to the outdated relational engines of legacy systems. Below, I have included some details from the benchmark.
As Page once said, “You should never accept single benchmark numbers at face value without knowing the exact environment they were achieved in.”

Configuration:

| J-meter Scripts | Number of Balance Services | Number of Transact Services | MongoDB Atlas Cluster | Number of Docs in Balance | Number of Docs in Transaction |
| --- | --- | --- | --- | --- | --- |
| 6 (GetBalance - 4, GetTransactions - 2) | 3 | 4 | M80 (2TB) | 110M | 200M |

Test Results:

| Functional | TPS | API Latency (ms) | DB Latency (ms) |
| --- | --- | --- | --- |
| Get Balance | 46751 | 79.45 | 0.36 |
| Get Transaction | 22340 | 16.58 | 0.36 |
| Transact Service | 31702 | 117.15 | 1.07 |
| Total | 100793 | 71.067 | 0.715 |

The underlying environment consists of 200 million accounts and 100 million customers, which demonstrates the scale this configuration can handle. This setup would be suitable for the largest Tier 1 banking organizations. The well-versed MongoDB user will notice that the cluster configuration used for MongoDB is small. The M80 cluster, 32 vCPUs with 128GB of RAM, is configured with five nodes. Many banking clients prefer these larger five-node configurations for higher availability protection and better read distribution across multiple AWS Availability Zones and regions, which would improve performance even more. In the case of an Availability Zone outage, or even a regional outage, the MongoDB Atlas platform will continue to serve traffic via the additional region as a backup. The low latency shows that the MongoDB Atlas M80 was not even fully utilized during the benchmark.

The diagram shows a typical configuration for such a cluster setup for the American market: one East Coast location, one West Coast location, and an additional node outside both regions, in Canada. MongoDB Atlas allows the creation of such a cluster within seconds, configured to the specific requirements of the solution deployed. The total landscape is shown in the following diagram:

Signed, sealed, and delivered
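As a quick sanity check, the per-service throughput figures reported for the benchmark do add up to the stated total. The short script below is purely illustrative and is not part of the benchmark harness:

```javascript
// Per-service throughput (TPS) figures from the benchmark results.
const results = [
  { functional: "Get Balance", tps: 46751 },
  { functional: "Get Transaction", tps: 22340 },
  { functional: "Transact Service", tps: 31702 },
];

// Sum the individual services to confirm the reported total of 100,793 TPS.
const totalTps = results.reduce((sum, r) => sum + r.tps, 0);

console.log(totalTps); // 100793
```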
This benchmark should give clients peace of mind that the combination of core banking with Temenos Transact and MongoDB is indeed ready for prime time. While thousands of banks rely on MongoDB for many parts of their operations, ranging from login management and online banking to risk and treasury management systems, Temenos' adoption of MongoDB is a milestone. It shows that there is significant value in moving from legacy database technology to the innovative MongoDB application data platform, allowing faster innovation, eliminating technical debt along the way, and simplifying the landscape for financial institutions, their software vendors, and service providers. If you would like to learn more about MongoDB in the financial services industry, take a look at our guide: The Road to Smart Banking: A Guide to Moving from Mainframe to Data Mesh and Data-as-a-Product.

May 18, 2022

A Hub for Eco-Positivity

In this guest blog post, Natalia Goncharova, founder and web developer of EcoHub — an online platform where people can search for and connect with more than 13,000 companies, NGOs, and governmental agencies across 200-plus countries — describes how the company uses MongoDB to generate momentum around global environmental change.

There is no denying that sustainability has become a global concern. In fact, the topic has gone mainstream. A 2021 report by the Economist Intelligence Unit (EIU) shows a 71% rise in the popularity of searches for sustainable goods over the past five years. The report “measures engagement, awareness and action for nature in 27 languages, across 54 countries, covering 80% of the world’s population.” The EIU report states that the sustainability trend is accelerating in developing and emerging countries, including Ecuador and Indonesia.

For me, it’s not a lack of positive sentiment that is holding back change; it is our inability to turn ideas and goodwill into action. We need a way of harnessing this collective sentiment.

In 2020, the decision to found EcoHub and devote so much time to it was a difficult one to make. I had just been promoted to team leader at work, and things were going well. Leaving my job with the goal of helping to protect our environment sounded ridiculous at times. Many questions raced through my mind, the most insistent one being: Will I actually be able to make a difference? However, as you’ll see in this post, my decision was ultimately quite clear.

What is EcoHub?

When I created EcoHub, my principal aim was to connect ecological NGOs and businesses. Now, EcoHub enables users to search a database of more than 10,000 organizations in more than 200 countries. You can search via a map or by keyword. By making it easier to connect, EcoHub lets users quickly build networks of sustainably minded organizations. We believe networks are key to spreading good ideas, stripping out duplication, and building expertise.
Building the platform has been a monumental task. I have developed it myself over the past few months, acting as product manager, project manager, and full-stack developer. (It wouldn’t be possible without my research, design, and media teams as well.)

During the development of the EcoHub platform on MongoDB, the flexible schema made it easy to edit documents and add new fields, because the process doesn’t require predefining data types. At one point, we needed to change the schema and apply the change to every document in the database; modifying the entire collection in MongoDB didn’t take an experienced developer long. Additionally, MongoDB’s document-oriented data model works well with the way developers think. The model reflects how we see the objects in the codebase and makes the process easier. In my experience, the best place to find answers when I ran into a question or issue was the MongoDB documentation. It provides a good explanation of almost anything you want to do in your database.

Search is everything

In technical terms, my choices were ReactJS, NodeJS, and MongoDB. It is the latter that is so important to the effectiveness of the EcoHub platform. Search is everything. The easier we can make it for individuals or organizations to find like minds, the better. I knew from the start that I’d need a cloud-based database with strong querying abilities. As an experienced developer, I had worked with MongoDB before and knew the company to be reliable, with excellent documentation and a really strong community of developers. It was a clear choice from the start. Choosing our partners carefully is also important. If EcoHub is to build awareness of environmental issues and foster collaboration, then we must ensure we make intelligent choices in terms of the companies we work with.
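EcoHub's actual migration code isn't shown here, so the sketch below only mimics the idea with plain JavaScript objects; the collection name (organizations) and the new field (verified) are hypothetical. In MongoDB itself, the equivalent backfill is a single updateMany call, along the lines of db.organizations.updateMany({ verified: { $exists: false } }, { $set: { verified: false } }):

```javascript
// Hypothetical documents, as they might sit in an "organizations" collection.
// The two documents deliberately don't share an identical shape: with no
// predeclared schema, a new field can simply appear on newer documents.
const organizations = [
  { name: "Green Rivers NGO", country: "Ecuador" },
  { name: "Clean Air Lab", country: "Indonesia", verified: true },
];

// In-memory stand-in for updateMany({ field: { $exists: false } },
// { $set: { [field]: value } }): backfill a default onto older documents.
function backfillField(docs, field, value) {
  for (const doc of docs) {
    if (!(field in doc)) doc[field] = value;
  }
  return docs;
}

backfillField(organizations, "verified", false);
console.log(organizations.map((o) => o.verified)); // [ false, true ]
```

Because existing values are left untouched, a backfill like this can be run safely at any time; documents that already carry the field keep whatever was set.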
I have been impressed with MongoDB’s sustainability commitments, particularly around diversity and inclusion, carbon reduction, and its appetite for exploring the impact the business has globally and locally. EcoHub search is built on the community version of MongoDB, which enables us to work quickly, implement easily, and deliver the right performance. Importantly, as EcoHub grows and develops, MongoDB also allows us to make changes on the fly. As environmental concerns continue to grow, our database will expand. MongoDB enables our users to search, discover, and connect with environmental organizations all over the world. I believe these connections are key to sharing knowledge and expertise and helping local citizens coordinate their sustainability efforts.

Commitment to sustainability

When it came down to it, the decision to build EcoHub wasn’t as difficult as I initially thought. My commitment to sustainability actually started when I was young: I can remember myself at 8 years old, glued to the window, waiting for the monthly Greenpeace magazine to arrive. Later, that commitment grew as I went to university and graduated with a degree in Environmental Protection and Engineering. Soon after, I founded my first ecology organization and rallied our city against businesses wanting to cut down our beautiful city parks. Starting EcoHub was a natural and exciting next step, despite the risks and unknown factors. I hope we can all join hands to create a sustainable future for ourselves, our children, and our animals and plants, and keep our planet beautiful and healthy.

MongoDB Atlas makes operating MongoDB a snap at any scale. Determine the costs and benefits with our cost calculator.

May 11, 2022

Shared Responsibility: More Agility, Less Risk

The tension between agility, security, and operational uptime can keep IT organizations from innovating as fast as they’d like. On one side, application developers want to move fast and continually deliver innovative new releases. On the other side, InfoSec and IT operations teams aim to continually reduce risk, which developers often perceive as slowing down the process. This perception couldn’t be further from the truth. Modern InfoSec and IT operations are evolving into SecOps and DevOps, and the idea that they want to stop developers from innovating by restricting them to old, centrally controlled paradigms is a long-held prejudice that needs to be resolved. What security and site reliability teams really want is for developers to operate with agility as well as safety, so that risks are appropriately governed. The shared responsibility model can reduce risk while still allowing for innovation. The way to enable developers to move fast while ensuring the level of security SecOps and DevOps require is to abstract granular controls away from developers so they can focus on building applications while, in the background, secure defaults that cannot be disabled are in place at every level.

Doers get more done

Working with a cloud provider, whether you’re talking about infrastructure as a service (IaaS) or a hyperscaler, is like walking into a home improvement store and seeing all the tools and materials. It gives you a sense of empowerment. That’s the same feeling you get in front of an administrative console for AWS, Google Cloud, or Azure. The aisles at home improvement stores, however, can contain some pretty raw materials. Imagine asking a team of developers to build a new, state-of-the-art kitchen out of lumber, pipes, and fittings without even a blueprint. You’re going to wind up with pipes that leak, drawers that don’t close, and cabinets that don’t fit.
This approach understandably worries InfoSec and IT operations teams and can cause them to be perceived as innovation blockers, because they don’t want developers attempting do-it-yourself security. So how do you find a place where the raw materials provide exactly what you need so that you can build with confidence? That’s the best of both worlds: Developers move faster by not having to deal with the plumbing, and InfoSec and IT operations get the security and reliability assurance they need. This is where the shared responsibility model comes in.

Shared responsibility in the cloud

When considering cloud security and resilience, some responsibilities fall clearly on the business. Others fall on public cloud providers, and still others fall on the vendors of the cloud services being used. This is known as the shared responsibility model. Security and resilience in the cloud are only possible when everyone is clear on their roles and responsibilities. Shared responsibility recognizes that cloud vendors, such as MongoDB, must ensure the security and availability of their services and infrastructure, and that customers must also take appropriate steps to protect the data they keep in the cloud.

The security defaults in MongoDB Atlas enable developers to be agile while also reducing risk. Atlas gives developers the necessary building blocks to move fast without having to worry about the minutiae of administrative security tasks. Atlas enforces strict security policies for things like authentication and network isolation, and it provides tools for applying secure best practices, such as encryption, database access controls, auto-scaling, and granular auditing.

Testing for resilience

The shared responsibility model attempts to strike a balance between agility, security, and resilience. Cloud vendors must meet the responsibilities of their service-level agreements (SLAs), but businesses also have to be conscientious about their cloud resources.
Real-world scenarios can cause businesses to experience outages, and avoiding them is the essence of the shared responsibility model. To avoid such outages, MongoDB Atlas does everything possible to keep database clusters continuously available; the customer holds the responsibility of provisioning appropriately sized clusters for their workloads. That can be an uphill battle when you’re talking about an intensive workload running on an undersized cluster.

Consider a typical laptop as an example. It has an SLA of sorts, in that it has specifications that determine what it can do. If you try to drive a workload that exceeds the laptop’s specifications, it will freeze. Was the laptop to blame, or was it the workload? With the cloud, there’s an even greater expectation that there are more than enough resources to handle any given workload. But those resources are based on real infrastructure with specs, just like the laptop.

This example illustrates both the essence and the ambiguity of the shared responsibility model. As the customer, you’re supposed to know whether that stream of data is something your compute resources can handle. The challenge is that you don’t know until you start running into the boundaries of your resources, and pushing the limits of those boundaries means risking the availability of those resources. It’s not hard to imagine a developer, who may be working under considerable stress, running a workload on under-provisioned resources, which then leads to a freeze or outage.

It’s essential, therefore, for companies to have a test environment that closely mimics their production environment. This allows them to validate that the MongoDB Atlas cluster can keep up with what they’re throwing at it. Anytime companies make changes to their applications, there is risk. Some of that risk may be mitigated by things like auto-scaling and elasticity, but the level of protection they afford is limited. Having a test environment can help companies better predict the outcome of changes they make.
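To make the sizing point concrete, here is a deliberately simplified back-of-the-envelope check of the kind a team might run before a load test. Every number in it (the cluster's sustainable operation rate, the request rate, the operations per request) is made up for illustration; it is not a MongoDB sizing formula.

```javascript
// Assumed sustained capacity of the provisioned cluster tier (made-up figure).
const maxSustainedOpsPerSec = 50000;

// Assumed application traffic profile (also made-up figures).
const peakRequestsPerSec = 12000;
const dbOpsPerRequest = 3; // average reads + writes per request

// Expected peak database load, and the headroom left on the cluster.
const peakOps = peakRequestsPerSec * dbOpsPerRequest; // 36000
const headroom = 1 - peakOps / maxSustainedOpsPerSec; // ~0.28

// Flag the workload for resizing if less than 20% headroom remains.
console.log(headroom >= 0.2 ? "headroom OK" : "resize before launch");
```

The point of a check like this is simply to force the conversation about capacity before the boundaries are hit in production, which is exactly the customer's side of the shared responsibility model.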
The cloud has evolved to a point where security, resilience, and agility can peacefully coexist. MongoDB Atlas comes with strict security policies right out of the box. It offers automated infrastructure provisioning, default security features, database setup, maintenance, and version upgrades so that developers can shift their focus from administrative tasks to innovation when building applications. By abstracting away some of the security and resilience responsibilities through the shared responsibility model, MongoDB Atlas allows developers to move fast while giving SecOps the reassurances they need to support their efforts.

May 11, 2022