MongoDB Applied

Customer stories, use cases and experience

What Does the Executive Order on Supply Chain Security Mean for Your Business? Security Experts Weigh In on SBOMs

In the wake of high-profile software supply chain attacks, the White House issued an executive order requiring more transparency in the software supply chain. The Executive Order (14028) on Improving the Nation’s Cybersecurity requires software vendors to provide a software bill of materials (SBOM). An SBOM is a list of ingredients used by software — that is, the collection of libraries and components that make up an application, whether they are third-party, commercial off-the-shelf, or open source software. By providing visibility into all the individual components and dependencies, SBOMs are seen as a critical tool for improving software supply chain security. The new executive order affects every organization that does or seeks to do business with the federal government. To learn more about the requirements and implementation, MongoDB invited a few supply chain security experts for a panel discussion. In our conversation, Lena Smart, MongoDB’s Chief Information Security Officer, was joined by three expert panelists: Allan Friedman, PhD, senior advisor and strategist, CISA; Clinton Herget, principal solutions engineer, Snyk; and Patrick Dwyer, CycloneDX SBOM project co-lead, Open Web Application Security Project. Background In early 2020, hackers broke into Texas-based SolarWinds’ systems and added malicious code to the company's Orion software system, which is used by more than 33,000 companies and government entities to manage IT resources. The code created a backdoor into affected systems, which hackers then used to conduct spying operations. In December 2021, a vulnerability in the open source Log4j logging library was discovered. The vulnerability enables attackers to execute code remotely on any targeted computer. The vulnerability resulted in massive reconnaissance activity, according to security researchers, and it leaves many large corporations that use the Log4j library exposed to malicious actors. Also in mid-2021, the Russian ransomware gang REvil exploited flaws in software from Kaseya, an IT management application popular with MSPs. The attacks multiplied before warnings could be issued, resulting in malicious encryption of data and ransom demands as high as $5 million. In our panel discussion, Dr. Friedman kicked off the conversation by drawing on the “list of ingredients” analogy, noting that knowing what’s in the package at the grocery store won’t help you keep your diet or protect you from allergens by itself — but good luck doing so without it. What you do with that information matters. So the data layer is where we will start to see security practitioners implement new intelligence and risk-awareness approaches, Friedman says. SBOM Use Cases The question of what to do with SBOM data was top-of-mind for all of the experts in the panel discussion. Friedman says that when the idea of SBOMs was first introduced, it was in the context of on-premises systems and network or firewall security. Now, the discussion is centered on SaaS products. What should customers expect from an SBOM for a SaaS product? As Senior Advisor and Strategist at the Cybersecurity and Infrastructure Security Agency (CISA), Friedman says this is where the focus will be over the next few months as they engage in public discussions with the software community to define those use cases. A few of the use cases panelists cited included pre-purchase due diligence, forensic and security analysis, and risk assessment. 
“At the end of the day, we're doing this hopefully to make the world of software more secure,” Smart says. No one wants to see another Log4j, the panelists agreed, but chances are we'll see something similar. A tool such as an SBOM could help determine exposure to such risks or prevent them from happening in the first place. Dwyer waded into the discussion by emphasizing the need for SBOM production and consumption to fit into existing processes. “Now that we're automating our entire software production pipeline, that needs to happen with SBOMs as well,” Dwyer says. Herget agreed on the need to understand the use cases and edge cases, and to integrate them. “If we're just generating SBOMs to store them in a JSON file on our desktop, we’ve missed the point,” he says. “It's one thing to say that Maven can generate an SBOM for all Java dependencies in a given project, which is amazing until you get to integrating non-Java technologies into that application.” Herget says that in the era of microservices, you could be working with an application that has 14 different top-level languages involved, with all of their corresponding sets of open source dependencies handled by an orchestrated, cloud-based continuous integration pipeline. “We need a lot more tooling to be able to do interesting things with SBOMs,” Herget continued. “Wouldn't it be great to have search-based tooling to be able to look at dependency tree relationships across the entire footprint?” For Herget, future use cases for SBOM data will depend on a central question: What do we have that is a scalable, orchestrated way to consume SBOM data that we can then throw all kinds of tooling against to determine interesting facts about our software footprint that we wouldn't necessarily have visibility into otherwise? SBOMs and FedRAMP In the past few years, Smart has been heavily involved in FedRAMP (Federal Risk and Authorization Management Program), which provides a standardized approach to government security authorizations for Cloud Service Offerings. She asked the panelists whether SBOMs should be part of the FedRAMP SSP (System Security Plan). Friedman observed that FedRAMP is a “passed once, run anywhere” model, which means that once a cloud service is approved by one agency, any other government agency can also use it. “The model of scalable data attestations that are machine-readable I think does lend itself as a good addition to FedRAMP,” Friedman says. Herget says that vendors will follow if the government chooses to lead on implementing SBOMs. “If we can work toward a state where we're not talking about SBOMs as a distinct thing or even an asset that we're working toward but something that’s a property of software, that's the world we want to get to.” The Role of Developers in Supply Chain Security As always, the role of the developer is one of the most critical factors in improving supply chain security, as Herget points out. “The complexity level of software has exceeded the capacity for any individual developer, even a single organization, to understand where all these components are coming from,” Herget says. “All it takes is one developer to assign their GitHub merge rights to someone else who's not a trusted party and now that application and all the applications that depend on it are subject to potential supply chain attack.” Without supply chain transparency or visibility, Herget explains, there’s no way to tell how many assets are implicated in the event of an exploit. 
And putting that responsibility on developers isn’t fair because there are no tools or standardized data models that explain where all the interdependencies in an application ultimately lead. Ingredient lists are important, Herget says, but what’s more important are the relationships between them, which components are included in a piece of software and why, who added them and when, and to have all that in a machine-readable and manipulable way. “It's one thing to say, we have the ingredients,” Herget says, “But then what do you do with that, what kind of analysis can you then provide, and how do you get actionable information in front of the developer so they can make better decisions about what goes into their applications?” SBOM Minimum Requirements The executive order lays out the minimum requirements of an SBOM, but our panelists expect that list of requirements to expand. For now, there are three general buckets of requirements: Each component in an SBOM requires a minimum amount of data, including the supplier of the component, the version number, and any other identifiers of the component. SBOMs must exist in a widely used, machine-readable format, which today is either CycloneDX or SPDX . Policies and practices around how deep the SBOM tree should go in terms of dependencies. Moving forward, the panelists expect the list of minimum requirements to expand to include additional identifiers, such as a hash or digital fingerprint of a component, and a requirement to update an SBOM anytime you update software. They also expect additional requirements for the dependency tree, like a more complete tree or at least the ability to generate the complete tree. “Log4j taught people a lot about the value of having as complete a dependency tree as possible,” Friedman said, “because it was not showing up in the top level of anyone's dependency graph.” SBOMs for Legacy Systems One of the hurdles with implementing SBOMs universally is what to do with legacy systems, according to Smart. Johannes Ullrich, Dean of Research for SANS Technology Institute, has said that it may be unrealistic to expect 10- or 20-year-old code to ever have a reasonable SBOM. Friedman pointed to the use of binary analysis tools to assess software code and spot vulnerabilities, noting that an SBOM taken from the build process is far different from one built using a binary analysis tool. While the one taken from the build process represents the gold standard, Friedman says, there could also be incredible power in the binary analysis model, but there needs to be a good way to compare them to ensure an apples-to-apples approach. “We need to challenge ourselves to make sure we have an approach that works for software that is in use today, even if it's not necessarily software that is being built today,” Herget says. As principal solutions engineer at Snyk, Herget says these are precisely the conversations they’re having around what is the right amount of support for 30-year-old applications that are still utilized in production, but were built before the modern concept of package management became integrated into the day-to-day workflows of developers. 
“I think these are the 20% of edge cases that SBOMs do need to solve for,” Herget says, “Because if it’s something that only works for modern applications, it's never going to get the support it needs on both the government and the industry side.” Smart closed the topic by saying, “One of the questions that we've gotten in the Q&A is, ‘What do you call a legacy system?’ The things that keep me awake at night, that's what I call legacy systems.” Perfect Ending Finally, the talk turned to perfection, how you define it, and whether it’s worth striving for perfection before launching something new in the SBOM space. Herget, half-joking, said that perfection would be never having these talks again. “Think about how we looked at DevOps five or 10 years ago — it was this separate thing we were working to integrate within our build process,” he says. “You don’t see many panel talks on how we will get to DevOps today because it's already part of the water we’re all swimming in.” Dwyer added that perfection to him is when SBOMs are just naturally embedded in the modern software development lifecycle — all the tooling, the package ecosystems. “Perfection is when it's just a given that when you purchase software, you get an SBOM, and whenever it's updated, you get an SBOM, but you actually don't care because it's all automated,” Dwyer says. “That’s where we need to be.” According to Friedman, one of the things that SBOMs have started to do is to expose some of the broader challenges that exist in the software ecosystem. One example is software naming and software identity. Friedman says that in many industries, we don't actually have universal ways of naming things. “And, it’s not that we don't have any standards, it’s that we have too many standards,” he explains. “So, for me, perfection is saying SBOMs are now driving further work in these other areas of security where we know we've accumulated some debt but there hasn't been a forcing function to improve it until now.”
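The minimum elements discussed above are easiest to picture in one of the two machine-readable formats the order names. The following is a rough, hand-assembled sketch of a CycloneDX-style JSON document built with Python. The component, group, and version shown are purely illustrative, and in practice an SBOM would be generated by build tooling rather than written by hand:

```python
import json

# Minimal, hand-rolled sketch of a CycloneDX-style SBOM document.
# Component names and versions are illustrative; real SBOMs are normally
# emitted by build tooling (for example, Maven or npm plugins).
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.4",
    "version": 1,
    "components": [
        {
            "type": "library",
            "group": "org.apache.logging.log4j",  # supplier/namespace
            "name": "log4j-core",                 # component name
            "version": "2.17.1",                  # component version
            "purl": "pkg:maven/org.apache.logging.log4j/log4j-core@2.17.1",
        }
    ],
}

print(json.dumps(sbom, indent=2))
```

The purl (package URL) gives downstream tooling a stable identifier to match against vulnerability feeds, which is what makes a question like "do we ship log4j-core anywhere?" answerable at scale.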

May 23, 2022
Applied

MongoDB & IIoT: A 4-Step Data Integration

The Industrial Internet of Things (IIoT) is driving a new era of manufacturing, unlocking powerful new use cases to forge new revenue streams, create holistic business insights, and provide agility based on global and consumer demands. In our previous article, “Manufacturing at Scale: MongoDB & IIoT,” we gave an overview of the adoption and implementation of IIoT in manufacturing processes, testing various use cases with a model-size smart factory (Figure 1). In this post, we’ll look at how MongoDB’s flexible, highly available, and scalable data platform allows for end-to-end data integration using a four-step framework. Figure 1: Architecture diagram of MongoDB's application data platform with MQTT-enabled devices. 4-step framework for end-to-end data integration The four stages of this framework (Figure 2) are: Connect: Establish an interface to “listen” and “talk” to the device(s). Collect: Gather and store data from devices in an efficient and reliable manner. Compute: Process and analyze data generated by IoT devices. Create: Create unique solutions (or applications) through access to transformational data. Figure 2: The four-step framework for shop floor data integration During the course of this series, we will explore each of the four steps in detail, covering the tools and methodology and providing a walkthrough of our implementation process, using the Fischertechnik model as a basis for testing and development. All of the steps, however, are applicable to any environment that uses a Message Queuing Telemetry Transport (MQTT) API. The first step of the process is Connect. The first step: Connect The model factory contains a variety of sensors that are generating data on everything from the camera angle to the air quality and temperature — all in real time. The factory uses the MQTT protocol to send and receive input, output, and status messages related to the different factory components. You may wonder why we don’t immediately jump to the data collection stage. The reason is simple: we must first be able to “see” all of the data coming from the factory, which will allow us to select the metrics we are interested in capturing and configure our database appropriately. As a quick refresher on the architecture diagram of the factory, we see in Figure 3 that any messages transmitted in or out of the factory are routed through the Remote MQTT Broker. The challenge is to successfully read and write messages to and from the factory, respectively. Figure 3: Architecture diagram of the model smart factory It is important to remember that the method of making this connection between the devices and MongoDB depends on the communication protocols the device is equipped with. On the shop floor, multiple protocols are used for device communication, such as MQTT and OPC-UA, which may require different connector technologies, such as Kafka, among other off-the-shelf IoT connectors. In most scenarios, MongoDB can be integrated easily, regardless of the communication protocol, by adding the appropriate connector configuration. (We will discuss more about that implementation in our next blog post.) For this specific scenario, we will focus on MQTT. Figure 4 shows a simplified version of our connection diagram. Figure 4: Connecting the factory's data to MongoDB Atlas and Realm Because the available communication protocol for the factory is MQTT, we will do the following: Set up a remote MQTT broker and test its connectivity. Create an MQTT bridge. Send MQTT messages to the device(s). 
Note that these steps can be applied to any devices, machinery, or environment that come equipped with MQTT, so you can adapt this methodology to your specific project. Let’s get started. 1. Set up a remote MQTT broker To focus on the connection of the brokers, we used a managed service from HiveMQ to create a broker and the necessary hosting environment. However, this setup would work just as well with any self-managed MQTT broker. HiveMQ Cloud has a free tier, which is a great option for practice and for testing the desired configuration. You can create an account to set up a free cluster and add users to it. These users will function as clients of the remote broker. We recommend using different users for different purposes. Test the remote broker connectivity We used the Mosquitto CLI client to directly access the broker(s) from the command line. Then, we connected to the same network used by the factory, opened a terminal window, and started a listener on the local TXT broker using this command: mosquitto_sub -h 192.168.0.10 -p 1883 -u txt -P xtx -t f/o/# Next, in a new terminal window, we published a message to the remote broker on the same topic as the listener. A complete list of all topics configured on the factory can be found in the Fischertechnik documentation. You can fill in the command below with the information of your remote broker. mosquitto_pub -h <hivemq-cloud-host-address> -p 8883 -u <hivemq-client-username> -P <hivemq-client-password> -t f/o/# -m "Hello" If the bridge has been configured correctly, you will see the message “Hello” displayed on the first terminal window that contains your local broker listener. Now we get to the good part. We want to see all the messages that the factory is generating for all of the topics. Because we are a bit more familiar with the Mosquitto CLI, we started a listener on the local TXT broker using this command: mosquitto_sub -h 192.168.0.10 -p 1883 -u txt -P xtx -t # Where the topic “#” essentially means “everything.” And just like that, we can get a sense of which parameters we can hope to extract from the factory into our database. As an added bonus, the data is already in JSON. This will simplify the process of streaming the data into MongoDB Atlas once we reach the data collection stage, because MongoDB runs on the document model, which is also JSON-based. The following screen recording shows the data stream that results from starting a listener on all topics to which the devices publish while running. You will notice giant blocks of data, which are the encoding of the factory camera images taken every second, as well as other metrics, such as stock item positions in the warehouse and temperature sensor data, all of which is sent at regular time intervals. This is a prime example of time series data, which we will describe how to store and process in a future article. Video: Results of viewing all device messages on all topics 
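If you would rather inspect that stream programmatically than through the Mosquitto CLI, the same wildcard subscription can be reproduced in a few lines of Python. This is a minimal sketch, assuming the paho-mqtt package (1.x-style callbacks) and the local TXT broker address and credentials used in the commands above:

```python
import json
import paho.mqtt.client as mqtt  # pip install paho-mqtt

BROKER_HOST = "192.168.0.10"  # local TXT controller broker from the commands above
BROKER_PORT = 1883

def on_message(client, userdata, msg):
    # Most factory payloads are JSON; fall back to a size note for binary camera frames.
    try:
        payload = json.loads(msg.payload)
    except ValueError:
        payload = f"<{len(msg.payload)} bytes of binary data>"
    print(f"{msg.topic}: {payload}")

client = mqtt.Client()                 # paho-mqtt 1.x-style client setup
client.username_pw_set("txt", "xtx")   # credentials used in the CLI commands above
client.on_message = on_message
client.connect(BROKER_HOST, BROKER_PORT)
client.subscribe("#")                  # '#' subscribes to every topic
client.loop_forever()
```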
2. Create an MQTT bridge An MQTT bridge (Figure 5) is a uni/bi-directional binding of topics between two MQTT brokers, such that messages published to one broker are relayed seamlessly to clients subscribed to that same topic on the other broker. Figure 5: Message relays between MQTT brokers In our case, the MQTT broker on the main controller is configured to forward/receive messages to/from the remote MQTT broker via the following MQTT bridge configuration:

connection remote-broker
address <YOUR REMOTE MQTT BROKER IP ADDRESS:PORT>
bridge_capath /etc/ssl/certs
notifications false
cleansession true
remote_username <HIVEMQ CLIENT USERNAME>
remote_password <HIVEMQ CLIENT PASSWORD>
local_username txt
local_password xtx
topic i/# out 1 "" ""
topic o/# in 1 "" ""
topic c/# out 1 "" ""
topic f/i/# out 1 "" ""
topic f/o/# in 1 "" ""
try_private false
bridge_attempt_unsubscribe false

This configuration file is created and loaded directly into the factory broker via SSH. 3. Send MQTT messages to the device(s) We can test our bridge configuration by sending a meaningful MQTT message to the factory through the HiveMQ websocket client (Figure 6). We signed into the console with one of the users (clients) previously created and sent an order message to the “f/o/order” topic used in the previous step. Figure 6: Sending a test message using the bridged broker The format for the order message is: {"type":"WHITE","ts":"2022-03-23T13:54:02.085Z"} “Type” refers to the color of the workpiece to order. We have a choice of three workpiece colors: RED, WHITE, BLUE; “ts” refers to the timestamp of when the message is published. This determines its place in the message queue and when the order process will actually be started. Once the bridge is configured correctly, the factory will start to process the order according to the workpiece color specified in the message. Thanks for sticking with us through to the end of this process. We hope this methodology provides fresh insight for your IoT projects. Find a detailed tutorial and all the source code for this project on GitHub. Learn more about MongoDB for Manufacturing and IIoT. This is the second of an IIoT series from MongoDB’s Industry Solutions team. Read the first post, “Manufacturing at Scale: MongoDB & IIoT.” In our next article, we will explore how to capture time series data from the factory using MongoDB Atlas and Kafka.
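As a footnote to step 3: the same order message can also be published from code rather than the HiveMQ websocket console. The sketch below assumes the paho-mqtt package and the placeholder HiveMQ Cloud host and client credentials created in step 1; it publishes to the bridged f/o/order topic over TLS on port 8883:

```python
import json
from datetime import datetime, timezone

import paho.mqtt.client as mqtt  # pip install paho-mqtt

# Placeholders -- use the HiveMQ Cloud host and client credentials you created in step 1.
REMOTE_HOST = "<hivemq-cloud-host-address>"
USERNAME = "<hivemq-client-username>"
PASSWORD = "<hivemq-client-password>"

order = {
    "type": "WHITE",  # RED, WHITE, or BLUE
    "ts": datetime.now(timezone.utc)
        .isoformat(timespec="milliseconds")
        .replace("+00:00", "Z"),
}

client = mqtt.Client()
client.username_pw_set(USERNAME, PASSWORD)
client.tls_set()                      # HiveMQ Cloud requires TLS on port 8883
client.connect(REMOTE_HOST, 8883)
client.loop_start()                   # run the network loop in the background
info = client.publish("f/o/order", json.dumps(order), qos=1)
info.wait_for_publish()               # block until the broker has acknowledged the message
client.loop_stop()
client.disconnect()
```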

May 20, 2022
Applied

Open Banking: How to Future-Proof Your Banking Strategy

Open banking is on the minds of many in the fintech industry, leading to basic questions such as: What does it mean for the future? What should we do today to better serve customers who expect native open banking services? How can we align with open banking standards while they’re still evolving? In a recent panel discussion , I spoke with experts in the fintech space: Kieran Hines, senior banking analyst at Celent; Toine Van Beusekom, strategy director at Icon Solutions; and Charith Mendis, industry lead for banking at AWS. We discussed open banking standards, what the push to open banking means for innovation, and more. This article provides an overview of that discussion and offers best practices for getting started with open banking. Watch the panel discussion Open Banking: Future-Proof Your Bank in a World of Changing Data and API Standards to learn how you can future-proof your open banking strategy. Fundamentals To start, let’s answer the fundamental question: What is open banking ? The central tenet of open banking is that banks should make it easy for consumers to share their financial data with third-party service providers and allow those third parties to initiate transactions on their behalf — adding value along the way. But, as many have realized, facilitating open banking is not so easy. At the heart of the open banking revolution is data — specifically, the infrastructure of databases, data standards, and open APIs that make the free flow of data between banks, third-party service providers, and consumers possible. What does this practice mean for the banking industry? In the past, banks almost exclusively built their own products, which has always been a huge drain on teams, budgets, and infrastructure. With open banking, financial services institutions are now partnering with third-party vendors to distribute products, and many regulations have already emerged to dictate how data is shared. Because open banking is uncharted territory, it presents an array of both challenges — mostly regulatory — and opportunities for both established banks and disruptors to the space. Let’s dig into the challenges first. Challenges As open banking, and the technology practices that go along with it, evolve, related compliance standards are emerging and evolving as well. If you search for “open banking API,” you’ll find that nearly every vendor has their own take on open banking and that they are all incompatible to boot. As with any developing standard, open banking standards are not set in stone and will continue to evolve as the space grows. The fast-changing environment will hinder those banks that do not have a flexible data architecture that allows them to quickly adapt to provider standards as needed. An inflexible data architecture becomes an immediate roadblock with unforeseen consequences. Closely tied to the challenge of maintaining compliance with emerging regulations is the challenge that comes with legacy architecture. Established banks deliver genuine value to customers through time-proven, well-worn processes. In many ways, however, legacy operations and the technology that underpins them are doomed to stand in the way not only of open banking but also operational efficiency goals and the ability to meet the customer experience expectations of a digital-native consumer base. To avoid the slow down of clunky legacy systems, banks need an agile approach to ensure the flexibility to pivot to developing challenges. 
Opportunities The biggest opportunity for institutions transitioning into open banking is the potential for rapid innovation. Banking IP is headed in new and unprecedented directions. Pushing data to the cloud, untangling spaghetti architecture, or decentralizing your data by building a data mesh frees up your development teams to innovate, tap into new revenue streams, and achieve the ultimate goal: Providing greater value to your customers. As capital becomes scarce in banks, the ability to repeatedly invest in new pilots is limited. Instead of investing months or years worth of capital into an experiment, building new features from scratch, or going to the board to secure funding, banks need to succeed immediately, be able to scale from prototype to global operation within weeks, or fail fast with new technology. Without the limiting factors of legacy software or low levels of capital, experimentation powered by new data solutions is now both free and low risk. Best Practices Now that we’ve described the potential that open banking presents for established and emerging industry leaders, let’s look at some open banking best practices, as described in the panel discussion . Start with your strategy. What’s your open banking strategy in the context of your business strategy? Ask hard questions like: Why do you want to transform? What’s wrong with what’s going on now? How can you fix current operations to better facilitate open banking? What new solutions do you need to make this possible? An entire shift for a business to open banking means an entirely new business strategy, and you need to determine what that strategy entails before you implement sweeping changes. View standards as accelerators, not inhibitors. Standards can seem like a burden on financial institutions, and in most cases, they do dictate change that can be resource intensive. But you can also view changing regulations as the catalyst needed to modernize. While evolving regulations may be the impetus for change, they can also open up new opportunities once you’re aligned with industry standards. Simplify and unify your data. Right now, your data likely lives all over the place, especially if you’re an established bank. Legacy architectures and disparate solutions slow down and complicate the flow of data, which in turn inhibits your adoption of open banking standards. Consider how you can simplify your data by reducing the number of places it lives. Migrating to a single application data platform makes it faster and easier to move data from your financial institution to third parties and back again. Always consider scale. When it comes to open banking, your ability to scale up and scale down is crucial — and is also tied to your ability to experiment, which is also critical. Consider the example of “buy now pay later” service offerings to your clients. On Black Friday, the biggest shopping day of the year, financial institutions will do exponentially more business than, say, a regular Tuesday in April. So, to meet consumer demand, your payments architecture needs to be able to scale up to meet the influx of demand on a single, exceptional day and scale back down on a normal day to minimize costs. Without the ability to scale, you may struggle to meet the expectations of customers. Strive for real time. Today, everyone — from customers to business owners to developers — expect the benefits of real-time data. Customers want to see their exact account balance when they want to see it, which is already challenging enough. 
If you add the new layer of open banking to the mix, with data constantly flowing from banks to third parties and back, delivering data in real-time to customers is more complex than ever. That said, with the right data platform underpinning operations, the flow of data between systems can be simplified and made even easier when your data is unified on a single platform. If you can unlock the potential of open banking, you can innovate, tap into new revenue streams, shake off the burden of legacy architecture, and ultimately, achieve a level of differentiation likely to bring in new customers. Watch the panel discussion to learn more about open banking and what it means for the future of banks.

May 19, 2022
Applied

Collaborative User Story Mapping with Avion and MongoDB

When companies think about their products, they often fall into the trap of planning without truly considering their users’ journey and experience. Perhaps it’s time to start thinking about products from the customer's perspective. Avion was founded by James Sear and Tim Ramage with one thing in mind: to provide the most intuitive and enjoyable user story mapping experience for agile teams to use, from product inception to launch (and beyond). The key, Sear said, is that user story mapping gives you a way of thinking about your product and its features, typically software, from the perspective of your customers or users. This is facilitated by defining things that the user can do (user stories) within the context of your core user journeys. Built with MongoDB spoke with Sear about the idea of user story mapping, how he and Ramage started Avion, and what it’s been like to work with MongoDB. Built with MongoDB: What is Avion all about? James Sear: Avion is a digital user story mapping tool for product teams. It helps them to break down complexity, map out user journeys, build out the entire scope of their product and then decide what to deliver and in what order. It’s a valuable tool that is typically underused. Not everyone understands what story mapping is, as it’s quite a specific technique and you do have to put the time in to learn it in order to get the most out of it. But once you have, there is so much value to be unlocked in terms of delivering better outcomes for your users, as opposed to just building stuff for the sake of it. Built with MongoDB: What made you decide to start Avion? Sear: My co-founder Tim Ramage and I met around 2014, and we were jointly involved in teams that were building lots of different software products for various companies, both big and small. And while we were very involved in their technical implementation, we were also both really interested in the product management side of delivery, because it’s just so crucial to be successful. That includes everything from UX decisions, product roadmap prioritization, customer feedback, metrics, and managing the team; it all really interested us. However, one thing that we found particularly difficult was taking your clients’ big ideas and translating them into some sort of actionable development plan. We tried a few different approaches for this, until we stumbled across a technique called user story mapping. User story mapping manages to pull together all of your core user journeys, the scope of all features that could be built, and how you plan to deliver them. On top of that, it conveys the order in which you should be working on things. Once you have this powerful asset, you can have effective conversations with your team and answer the most important questions, such as: What’s the minimum we can build to make this valuable to users? Where does this feature actually appear for our users? What are we going to build next, and why? It really does allow you to communicate more effectively with stakeholders. For instance, you could use it to update your CEO and talk them through what you’re building now, answering those difficult questions like why you’re not building feature X or feature Y. You’ve got this outline right in front of you that makes sense to a product person, a developer, or even an outside stakeholder. Built with MongoDB: Initially, you started to build out a collaborative tool for product teams, and Avion has evolved into more. 
What else has changed in your journey at Avion? Sear: Our goal at launch was to provide our customers with a best-in-class story mapping experience in the browser. This meant nailing the performance and user interaction, so creating a story map just felt fluid and easy. After this, we focused on tightly integrating with more traditional backlog tools, like Jira and Azure DevOps. We always maintain that our customers shouldn’t have to give up their existing tooling to get value from Avion — so we built it to sit in the middle of their stack and assist them with planning and delivery. Built with MongoDB: What are some of the challenges that you’ve faced in such a crowded productivity space? Sear: It’s difficult to stick out amongst the crowd, but our unique value proposition is actually quite niche. This allows us to show our potential customers a different side of product planning that they might not have seen before. And for anyone that already knows about story mapping, Avion is an opinionated and structured canvas for them to just get work done and be productive quickly. Ultimately, we try to stick out by providing value in a vertical slice of product planning that is often overlooked. Built with MongoDB: What kind of experiences have you had working with MongoDB? Sear: There have been many scenarios where we’ve been debugging difficult situations with production scaling issues, and we just cannot work out why the apps have gone down overnight. There are so many tricky things that come up when you’re running in production. But we have always managed to find something in MongoDB Atlas that can help us just try and pinpoint that issue, whether it’s some usage graphs, or some kind of metrics that allows us to really dig down into the collections, the queries, and everything so MongoDB has been excellent for that in terms of features. It just gives you that peace of mind, we’ve had customers delete stuff of their own accord, and get really upset, but we’ve been able to help them by going back to snapshot backups and retrieving that data for them. From a customer support perspective, it’s massive to have that option on the table. MongoDB Atlas is really useful to us and we don’t have to configure anything, it’s just amazing. The MongoDB upgrades are completely seamless, and help us stay on the latest version of the database which is a huge win for security. Learn more about user story mapping with Avion , and start planning a more user-centric backlog. Interested in learning more about MongoDB for Startups? Learn more about us on the MongoDB Startups page .

May 19, 2022
Applied

From Core Banking to Componentized Banking: Temenos Transact Benchmark with MongoDB

Banking used to be a somewhat staid, hyper-conservative industry, seemingly evolving over eons. But banking in recent years has dramatically changed. Under pressure from demanding consumers and nimble new competitors, development cycles measured in years are no longer sufficient in a market expecting new products, such as Buy-Now-Pay-Later, to be introduced within months or even weeks. Just ask Temenos, the world's largest financial services application provider, which provides banking software for more than 1.2 billion people. Temenos is leading the way in banking software innovation and offers a seamless experience for its client community. Financial institutions can embed Temenos components, which deliver new functionality in their existing on-premises environments (or in their own environment in their cloud deployments), or adopt a full banking-as-a-service experience with Temenos T365 powered by MongoDB on various cloud platforms. Temenos embraces a cloud-first, microservices-based infrastructure built with MongoDB, giving customers flexibility while also delivering significant performance improvements. This new MongoDB-based infrastructure enables Temenos to rapidly innovate on its customers' behalf, while improving security, performance, and scalability. Architecting for a better banking future Banking solutions often have a life cycle of 10 or more years, and some systems I am involved in upgrading date back to the 1980s. Upgrades and changes, often focused on regulatory or technical needs (for example, operating system versions), hardware refreshes, and new functionality, are bolted on. The fast pace of innovation, a mobile-first world, competition, crypto, and DeFi are demanding a massive change for the banking industry, too. The definition of new products and rollouts measured in weeks and months versus years requires an equally drastic change in technology adoption. Banking is following a path similar to the retail industry. Retail was built upon a static design approach with monolithic applications connected through ETL (Extract, Transform, and Load) and “unloading of data,” an approach that was robust and built for the times. The accelerated move to omnichannel requirements brought a component-driven architecture design to fruition that allowed faster innovation and fit-for-purpose components being added to (or discarded from) a solution. The codification of this is called MACH (Microservices, API first, Cloud-native, and Headless), and a great example is the flexibility brought to bear through companies such as Commercetools. Temenos is taking the same direction for banking. Its concept of components that are seamlessly added to existing Temenos Transact implementations empowers banks to start an evolutionary journey from their existing on-premises environments to a flexible hybrid landscape delivering best-of-breed banking experiences. Key to this journey is a flexible data concept that meshes the existing environments with the requirements of fast-changing components available on premises and in the cloud. Temenos and MongoDB joined forces in 2019 to investigate the path toward data in a componentized world. Over the past few years, our teams have collaborated on a number of new, innovative component services to enhance the Temenos product family, and several banking clients are now using those components in production. However, the approach we've taken allows banks to upgrade on their own terms. 
By putting components “in front” of the Temenos Transact platform, banks can start using a componentization solution without disrupting their ability to serve existing customer requirements. Similarly, Temenos offers MongoDB's critical data infrastructure with an array of deployment capabilities, from full-service multi- or hybrid cloud offerings to on-premises self-managed, depending on local regulations and the client’s risk appetite. In these and other ways, Temenos makes it easier for its banking clients to embrace the future without upsetting existing investments. From an architectural perspective, this is how component services utilize the new event system of Temenos Transact and enable a new way of operating: Temenos Transact optimized with MongoDB Improved performance and scale All of which may sound great, but you may still be wondering whether this combination of MongoDB and Temenos Transact can deliver the high throughput needed by Tier 1 banks. Based on extensive testing and benchmarking, the answer is a resounding yes. Having been in the benchmark business for a long time, I know that you should never trust just ANY benchmark. (In fact, my colleague, MongoDB distinguished engineer John Page, wrote a great blog post about how to benchmark a database.) But Temenos, MongoDB, and AWS jointly felt the need to remove this nagging itch and deliver a true statement on performance, delivering proof of a superior solution for the client community. Starting with the goal of reaching a throughput of 25,000 transactions per second, it quickly became obvious that this rather conservative goal could easily be smashed, so we decided to quadruple the number to 100,000 transactions per second using a more elaborate environment. The newly improved version of Temenos Transact in conjunction with component services proved to be a performance giant. One hundred thousand financial transactions per second with a MongoDB response time under 1ms was a major milestone compared to earlier benchmarks with 79ms response time with Oracle, for example. Naturally, this result is in large part due to the improved component behavior and the AWS Lambda functions that now run the business functionality, but the document model of MongoDB in conjunction with the idiomatic driver concept has proven superior to the outdated relational engine of the legacy systems. Below, I have included some details from the benchmark. As Page once said, “You should never accept single benchmark numbers at face value without knowing the exact environment they were achieved in.”

Configuration:
- J-meter scripts: 3
- Balance services: 6 (GetBalance: 4, GetTransactions: 2)
- Transact services: 4
- MongoDB Atlas cluster: M80 (2TB)
- Number of docs in Balance: 110M
- Number of docs in Transaction: 200M

Test results:

Functional | TPS | API latency (ms) | DB latency (ms)
Get Balance | 46751 | 79.45 | 0.36
Get Transaction | 22340 | 16.58 | 0.36
Transact Service | 31702 | 117.15 | 1.07
Total | 100793 | 71.067 | 0.715

The underlying environment consists of 200-million accounts with 100-million customers, which shows the scalability the configuration is capable of working with. This setup would be suitable for the largest Tier 1 banking organizations. The well-versed MongoDB user will realize that the cluster configuration used for MongoDB is small. The M80 cluster, 32 vCPUs with 128GB RAM, is configured with 5 nodes. 
Many banking clients prefer those larger 5-node configurations for higher availability protection and better read distribution over multiple AWS Availability Zones and regions, which would improve the performance even more. In the case of an Availability Zone outage or even a regional outage, the MongoDB Atlas platform will continue to provide service via the additional region as a backup. The low latency shows that the MongoDB Atlas M80 was not even fully utilized during the benchmark. The diagram shows a typical configuration for such a cluster setup for the American market: one East Coast location, one West Coast location, and an additional node outside both regions, in Canada. MongoDB Atlas allows the creation of such a cluster within seconds, configured to the specific requirements of the solution deployed. The total landscape is shown in the following diagram: Signed, sealed, and delivered. This benchmark should give clients peace of mind that the combination of core banking with Temenos Transact and MongoDB is indeed ready for prime time. While thousands of banks rely on MongoDB for many parts of their operations, ranging from login management and online banking to risk and treasury management systems, Temenos' adoption of MongoDB is a milestone. It shows that there is significant value in moving from a legacy database technology to the innovative MongoDB application data platform, allowing faster innovation, eliminating technical debt along the way, and simplifying the landscape for financial institutions, their software vendors, and service providers. If you would like to learn more about MongoDB in the financial services industry, take a look at our guide: The Road to Smart Banking: A Guide to Moving from Mainframe to Data Mesh and Data-as-a-Product
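As an aside for developers reading the results table: the GetBalance-style calls ultimately reduce to single-document, indexed reads against the document model. The PyMongo sketch below is illustrative only, since the database, collection, and field names are hypothetical rather than the actual Temenos schema; it simply shows the query shape and how reads can optionally be spread across a multi-region cluster:

```python
from pymongo import MongoClient, ReadPreference  # pip install pymongo

# Connection string is a placeholder for your own Atlas cluster.
client = MongoClient("mongodb+srv://<user>:<password>@<cluster>.mongodb.net")
db = client["banking"]

# Hypothetical balance lookup: a single indexed find against one document,
# the kind of read the benchmark reports sub-millisecond database latency for.
balance = db.balances.find_one({"accountId": "ACC-0001234567"})

# Reads can optionally be routed to the nearest node of a multi-region cluster.
nearest_balances = db.get_collection(
    "balances", read_preference=ReadPreference.NEAREST
)
recent = list(nearest_balances.find({"accountId": "ACC-0001234567"}).limit(10))
```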

May 18, 2022
Applied

A Hub for Eco-Positivity

In this guest blog post, Natalia Goncharova, founder and web developer for EcoHub — an online platform where people can search for and connect with more than 13,000 companies, NGOs, and governmental agencies across 200-plus countries — describes how the company uses MongoDB to generate momentum around global environmental change. There is no denying that sustainability has become a global concern. In fact, the topic has gone mainstream. A 2021 report by the Economist Intelligence Unit (EIU) shows a 71% rise in the popularity of searches for sustainable goods over the past five years. The report “measures engagement, awareness and action for nature in 27 languages, across 54 countries, covering 80% of the world’s population.” The EIU report states that the sustainability trend is accelerating in developing and emerging countries including Ecuador and Indonesia. For me, it’s not a lack of positive sentiment that is holding back change; it is our ability to turn ideas and goodwill into action. We need a way of harnessing this collective sentiment. In 2020, the decision to found EcoHub and devote so much time to it was a difficult one to make. I had just been promoted to team leader at work, and things were going well. Leaving my job with the goal of helping to protect our environment sounded ridiculous at times. Many questions raced through my mind, the most insistent one being: Will I be able to actually make a difference? However, as you’ll see in this post, my decision was ultimately quite clear. What is EcoHub? When I created EcoHub, my principal aim was to connect ecological NGOs and businesses. Now, EcoHub enables users to search a database of more than 10,000 organizations in more than 200 countries. You can search via a map or keyword. By making it easier to connect, EcoHub lets users quickly build networks of sustainably minded organizations. We believe networks are key to spreading good ideas, stripping out duplication, and building expertise. Building the platform has been a monumental task. I have developed it myself over the past few months, acting as product manager, project manager, and full-stack developer. (It wouldn’t be possible without my research, design, and media teams as well.) During the development of the EcoHub platform on MongoDB, the flexible schema helped us edit and add new fields in a document because the process doesn’t require defining data types. We had a situation in which it was necessary to change the schema and implement changes for all documents in the database. In this case, modifying the entire collection with MongoDB didn’t take long for an experienced developer. Additionally, MongoDB’s document-oriented data model works well with the way developers think. The model reflects how we see the objects in the codebase and makes the process easier. In my experience, the best resource to find answers when I ran into a question or issue was MongoDB documentation . It provides a good explanation of almost anything you want to do in your database. Search is everything In technical terms, my choices were ReactJS, NodeJS, and MongoDB. It is the latter that is so important to the effectiveness of the EcoHub platform. Search is everything. The easier we can make it for individuals or organizations to find like minds, the better. I knew from the start that I’d need a cloud-based database with strong querying abilities. 
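The map and keyword searches described above typically reduce to two query shapes in MongoDB. The snippet below is an illustrative sketch only; the collection, fields, and indexes are hypothetical rather than EcoHub's actual schema:

```python
from pymongo import MongoClient, GEOSPHERE  # pip install pymongo

client = MongoClient("mongodb+srv://<user>:<password>@<cluster>.mongodb.net")
orgs = client["ecohub"]["organizations"]  # hypothetical collection

# Keyword search: a text index over name/description supports simple $text queries.
orgs.create_index([("name", "text"), ("description", "text")])
by_keyword = orgs.find({"$text": {"$search": "reforestation"}}).limit(20)

# Map search: a 2dsphere index lets us find organizations near a point (lng, lat).
orgs.create_index([("location", GEOSPHERE)])
nearby = orgs.find({
    "location": {
        "$near": {
            "$geometry": {"type": "Point", "coordinates": [2.3522, 48.8566]},
            "$maxDistance": 50_000,  # meters
        }
    }
}).limit(20)
```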
As an experienced developer, I had previous experience with MongoDB and knew the company to be reliable, with excellent documentation and a really strong community of developers. It was a clear choice from the start. Choosing our partners carefully is also important. If EcoHub is to build awareness of environmental issues and foster collaboration, then we must ensure we make intelligent choices in terms of the companies we work with. I have been impressed with MongoDB’s sustainability commitments, particularly around diversity and inclusion, carbon reduction, and its appetite for exploring the way the business has an impact globally and locally. EcoHub search is built on the community version of MongoDB, which enables us to work quickly, implement easily, and deliver the right performance. Importantly, as EcoHub grows and develops, MongoDB also allows us to make changes on the fly. As environmental concerns continue to grow, our database will expand. MongoDB enables our users to search, discover, and connect with environmental organizations all over the world. I believe these connections are key to sharing knowledge and expertise and helping local citizens coordinate their sustainability efforts. Commitment to sustainability When it came down to it, the decision to build EcoHub wasn’t as difficult as I initially thought. My commitment to sustainability actually started when I was young: I can remember myself at 8 years old, glued to the window, waiting for the monthly Greenpeace magazine to arrive. Later, that commitment grew as I went to university and graduated with a degree in Environmental Protection and Engineering. Soon after, I founded my first ecology organization and rallied our city against businesses wanting to cut down our beautiful city parks. Starting EcoHub was a natural and exciting next step, despite the risks and unknown factors. I hope we can all join hands to create a sustainable future for ourselves, our children, and our animals and plants, and keep our planet beautiful and healthy. MongoDB Atlas makes operating MongoDB a snap at any scale. Determine the costs and benefits with our cost calculator.

May 11, 2022
Applied

Shared Responsibility: More Agility, Less Risk

The tension between agility, security, and operational uptime can keep IT organizations from innovating as fast as they’d like. On one side, application developers want to move fast and continually deliver innovative new releases. On the other side, InfoSec and IT operations teams aim to continually reduce risk, which can result in a slowed down process. This perception couldn’t be further from the truth. Modern InfoSec and IT operations are evolving into SecOps and DevOps, and the idea that they want to stop developers from innovating by restricting them to old, centrally controlled paradigms is a long-held prejudice that needs to be resolved. What security and site reliability teams really want is for developers to operate with agility as well as safety so that risks are appropriately governed. The shared responsibility model can reduce risk while still allowing for innovation. The challenge of how to enable developers to move fast while ensuring the level of security necessary for SecOps and DevOps is to abstract granular controls away from developers so they can focus on building applications while, in the background, secure defaults that cannot be disabled are in place at every level. Doers get more done Working with a cloud provider, whether you’re talking about infrastructure as a service (IaaS) or a hyperscaler, is like going into a home improvement store and seeing all the tools and materials. It gives you a sense of empowerment. That’s the same feeling you get when you’re in front of an administrative console for AWS, Google Cloud, or Azure. The aisles at home improvement stores, however, can contain some pretty raw materials. Imagine asking a team of developers to build a new, state-of-the-art kitchen out of lumber, pipes, and fittings without even a blueprint. You’re going to wind up with pipes that leak, drawers that don’t close, and cabinets that don’t fit. This approach understandably worries InfoSec and IT operations teams and can cause them to be perceived as innovation blockers because they don’t want developers attempting do-it-yourself security. So how do you find a place where the raw materials provide exactly what you need so that you can build with confidence? That’s the best of both worlds. Developers can move faster by not having to deal with the plumbing, and InfoSec and IT operations get the security and reliability assurance they need. This is where the shared responsibility model comes in. Shared responsibility in the cloud When considering cloud security and resilience, some responsibilities fall clearly on the business. Others fall on public cloud providers, and still others fall on the vendors of the cloud services being used. This is known as the shared responsibility model. Security and resilience in the cloud are only possible when everyone is clear on their roles and responsibilities. Shared responsibility recognizes that cloud vendors, such as MongoDB, must ensure the security and availability of their services and infrastructure, and customers must also take appropriate steps to protect the data they keep in the cloud. The security defaults in MongoDB Atlas enable developers to be agile while also reducing risk. Atlas gives developers the necessary building blocks to move fast without having to worry about the minutiae of administrative security tasks. 
Atlas enforces strict security policies for things like authentication and network isolation, and it provides tools for ensuring secure best practices, such as encryption, database access, auto-scaling, and granular auditing. Testing for resilience The shared responsibility model attempts to strike a balance between agility, security, and resilience. Cloud vendors must meet the responsibilities of their service-level agreements (SLAs), but businesses also have to be conscientious of their cloud resources. Real-world scenarios can cause businesses to experience outages, and avoiding them is the essence of the shared responsibility model. To avoid such outages, MongoDB Atlas does everything possible to keep database clusters continuously available; the customer holds the responsibility of provisioning appropriately sized workloads. That can be an uphill battle when you’re talking about an intensive workload for which the cluster is undersized. Consider a typical laptop as an example. It has an SLA in so far as it has specifications that determine what it can do. If you try to drive a workload that exceeds the laptop’s specifications, it will freeze. Was the laptop to blame, or was it the workload? With the cloud, there’s an even greater expectation that there are more than enough resources to handle any given workload. But those resources are based on real infrastructure with specs, just like the laptop. This example illustrates both the essence and the ambiguity of the shared responsibility model. As the customer, you’re supposed to know whether that stream of data is something your compute resources can handle. The challenge is that you don’t know it until you start running into the boundaries of your resources, and pushing the limits of those boundaries means risking the availability of those resources. It’s not hard to imagine a developer, who may be working under considerable stress, over-provisioning a workload, which then leads to a freeze or outage. It’s essential, therefore, for companies to have a test environment that closely mimics their production environment. This allows them to validate that the MongoDB Atlas cluster can keep up with what they’re throwing at it. Anytime companies make changes to their applications, there is a risk. Some of that risk may be mitigated by things like auto-scaling and elasticity, but the level of protection they afford is limited. Having a test environment can help companies better predict the outcome of changes they make. The cloud has evolved to a point where security, resilience, and agility can peacefully coexist. MongoDB Atlas comes with strict security policies right out of the box. It offers automated infrastructure provisioning, default security features, database setup, maintenance, and version upgrades so that developers can shift their focus from administrative tasks to innovation when building applications. By abstracting away some of the security and resilience responsibilities through the shared responsibility model, MongoDB Atlas allows developers to move fast while giving SecOps the reassurances they need to support their efforts.

May 11, 2022
Applied

Semeris Demystifies Legal Documents Using MongoDB

Sorting through endless legal documents can be a time-consuming and burdensome process, but one startup says it doesn’t have to be that way. Semeris strives to demystify legal documentation by using the latest artificial intelligence and natural language processing techniques. Semeris’s goal is to put the information its customers need at their fingertips when and where they need it. Semeris aims to bring structure to capital market legal documents, while providing a first-class service to customers and blending together the disciplines of finance, law, natural language processing, and artificial intelligence. In this edition of Built with MongoDB, we talk with Semeris about how they use MongoDB Atlas Search to help customers analyze documents and extract data as quickly as possible. Built with MongoDB spoke with Semeris CEO Peter Jasko about his vision for the company, working with MongoDB, the company’s relationship with venture capital firm QVentures, and the value of data. In this video, Peter Jasko explains how MongoDB Atlas's fully managed service and support has been a key factor in helping Semeris scale. Built with MongoDB: Can you tell us about Semeris? Peter Jasko: We help our investor banking and lawyer clients analyze legal documentation. We help them extract information from the documentation that they look at. A typical transaction might have 500 to 1,000 pages of documentation, and we help them to analyze that really quickly and pull out the key information that they need to be able to review that documentation within a couple of hours, rather than the 7 or 8 hours it would normally take. Built with MongoDB: What is the value of data in your space? Peter: Data is essential in what we do because we build models around the publicly available documentation that we see. We store that data, we analyze it, we build machine learning models around it, and then we use that to analyze less seen documentation or more private documentation that our clients have internally. Built with MongoDB: How has your partnership with QVentures helped Semeris? Peter: Our partnership with QVentures is not just a financial one where they’ve invested some money into our firm; they’ve also helped us uncover contacts within the market. They introduced us to the MongoDB partnership that has helped us get some credits and build out our technology onto the MongoDB platform. Built with MongoDB: What has it been like using MongoDB’s technology? Peter: We chose MongoDB because it’s a scalable solution, and it has a strong developer following. It’s easier for us to hire tech developers who understand the technology because MongoDB has such a strong following in the community. If we have small issues with the technology, we’re very quickly able to search and find the answer to learn how we need to resolve that. Additionally, scalability is really important to us. And, what we found is that the MongoDB platform scales both in compute and also in storage seamlessly. We get a notification that more storage is required, and we can upgrade that online with no customer impact and no downtime. It's really, really seamless. Another reason we chose MongoDB is that it’s cloud agnostic. We're on AWS now, but we're almost certainly at some point going to be asked by customers to look at Azure or Google. So it's really beneficial to us that MongoDB works on all the different platforms that we look at. Built with MongoDB: What are some of the features you use within MongoDB? 
Peter: We use MongoDB Atlas Search because of its ability to retrieve thousands of data points from multiple documents. We use the indexing capability there, and the key thing we find is that our customers want to retrieve thousands of data points from multiple different documents. A lot of our customers are analysts or investment portfolio managers, and they want that information in their hands as quickly as possible.

Built with MongoDB: What is some advice you’d give to aspiring founders and CEOs?

Peter: Try lots of things and try them quickly. Try lots of little spikes, take the ones that work well, and eventually put those into production. Really focus on what your customers want. Ultimately, we tried a lot of different ideas, some of which we thought were great. But you have to put it in front of your customers to be able to decide which ones are really worth spending time on and putting into production quality and which ones you should just let fall by the wayside as research done but not ultimately used.

Find out more about Semeris Docs. Interested in learning more about MongoDB for Startups? Check out our Startups page.
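For readers curious what a text query like the one Peter describes might look like in practice, here is a minimal sketch using PyMongo and the Atlas Search $search aggregation stage. The connection string, database, collection, index, and field names are hypothetical, and a search index is assumed to already exist in Atlas; this is an illustration, not Semeris's actual implementation.

```python
from pymongo import MongoClient

# Connect to an Atlas cluster (connection string is a placeholder).
client = MongoClient("mongodb+srv://<user>:<password>@cluster0.example.mongodb.net")
collection = client["legal"]["documents"]

# Full-text query against a hypothetical Atlas Search index named "default",
# looking for a clause across a large set of capital-markets documents.
pipeline = [
    {"$search": {
        "index": "default",
        "text": {"query": "change of control", "path": "clauses.text"},
    }},
    {"$limit": 10},
    {"$project": {"dealName": 1, "clauses.heading": 1, "score": {"$meta": "searchScore"}}},
]

for doc in collection.aggregate(pipeline):
    print(doc)
```

Because $search runs as the first stage of an ordinary aggregation pipeline, the relevance-ranked results can be filtered, projected, or joined with other data in the same query.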

May 4, 2022
Applied

MACH Aligned for Retail: Microservices

MACH is an approach to architecting modern applications through open tech ecosystems. It is an acronym representing Microservices, API-first, Cloud-native SaaS, and Headless. With the accelerating digitalization of retail experiences requiring new technology stacks that provide agility, flexibility, and performance at scale, MACH is especially relevant for retail and ecommerce, a far cry from current legacy, monolithic architectures. The MACH Alliance is an organization, of which MongoDB is a member, dedicated to educating on and driving adoption of the MACH framework and to “future proof enterprise technology and propel current and future digital experiences.” This is the first in a series of blog posts dedicated to MACH and how retail organizations are leveraging this framework to gain a competitive advantage. Let us begin with the first letter of MACH: microservices.

What are microservices and why should I care?

In simplest terms, microservices are an approach to building applications in which business functions are broken down into smaller, self-contained components called services. These services function autonomously and are usually developed and deployed independently. This independence means the failure or outage of one microservice will not affect another. Each service serves a particular business function or objective. For a deeper look into the technical details of microservices, check out MongoDB’s guides dedicated to this topic.

The benefits of a microservices-based architecture are clear. The modular approach of microservices provides companies with quicker time to market and value, ultimately leading to a better customer experience. Development teams can work independently on different app functionalities, shortening development cycles to get more features deployed in less time, which means reaction to changing customer demands improves dramatically. Also, because services are deployed in independent environments, scalability concerns are managed in a much more convenient (and efficient) way, and resilience is strengthened significantly because there is no single point of failure, as there would be with monolithic applications. Microservices provide a modern architecture for app development, which ultimately delivers the best experience for customers. Learn how Boots modernized its stack with MongoDB and microservices.

Applying microservices for retail

What does a microservice-based application look like in a real-world scenario? Let’s say an ecommerce application is being built. Microservices help address the following challenges:

Dynamic product catalog: An ecommerce app might involve a large number of products (possibly from different suppliers) with changing availability. With each supplier and/or product category handled by its own microservice, it becomes easier and more efficient to manage and provide an always up-to-date product catalog for users (see the sketch after this list).

Changing customer needs: A microservice-based architecture increases the speed of development and testing, ultimately allowing new features to be deployed faster and enabling developers to quickly pivot to new customer needs. Different teams can work in parallel and independently, with little to no dependencies, rolling out or rolling back features as needed without risk.

Scale flexibly: Independently scale app functionalities up during peaks or down for valleys with on-demand, cloud-based microservices.
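To make the product-catalog example above concrete, here is a minimal sketch of a catalog service that owns its own MongoDB collection and stores flexible product documents. The database, collection, and field names are hypothetical; the point is that suppliers with different attributes can coexist in one collection without schema changes.

```python
from pymongo import MongoClient

# Each microservice owns its data; here the catalog service uses its own database.
client = MongoClient("mongodb://localhost:27017")
catalog = client["catalog_service"]["products"]

# Products from different suppliers can carry different attributes
# because the document model does not force a single rigid schema.
catalog.insert_many([
    {"sku": "TSHIRT-001", "supplier": "Acme Apparel", "name": "Basic Tee",
     "sizes": ["S", "M", "L"], "price": 9.99, "inStock": True},
    {"sku": "BLENDER-042", "supplier": "KitchenCo", "name": "Pro Blender",
     "wattage": 1200, "warrantyYears": 2, "price": 79.00, "inStock": False},
])

# The storefront asks the catalog service only for items that are available right now.
for product in catalog.find({"inStock": True}, {"_id": 0, "sku": 1, "name": 1, "price": 1}):
    print(product)
```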
The world before microservices

Before microservices were an option, the typical data infrastructure looked like a data access layer on top of a database, used to fetch all the datasets containing the information needed to run the application, as seen in Figure 1. There would be many databases to pull data from and various information silos, making for a painful process. Business logic had to be written to transform these datasets into specific functions, namely a product catalog, cart, checkout, payments, and the like. Before building any application, the relational data objects would need to be mapped out to match an object-oriented programming paradigm.

Figure 1: The monolithic application diagram before microservices

This is not easily scalable or flexible for modern applications, because every change in a dataset needs to be pushed upstream, and every new feature request for the app implies a data schema change downstream. This complicates life for developers and makes adapting to customer needs a nightmare.

Decoupled app functionality with microservices

With microservices, business functions are decoupled as much as possible in order to create bounded contexts that are clearly independent of one another, meaning a failure or outage in one does not affect the others. This often means having a separate database per service, as seen in Figure 2.

Figure 2: A first approach to microservices

In this first approach to microservices, decoupled application functionalities can be developed, maintained, and scaled independently. However, having a separate database for each business function is not ideal. It adds operational complexity, defeating the purpose of a microservices approach, since maintaining and scaling a distributed system is not a simple task. But there is light in all of this: a middle ground between strong decoupling and operational efficiency can be found with MongoDB.

MongoDB and microservices

MongoDB is built on a number of fundamental technology principles that ensure companies can reap the advantages of microservices, specifically around a flexible data model, redundancy, automation, and scalability. MongoDB can be deployed in any environment (on-premises or in the cloud, for example), but most industries are moving or have already moved toward the cloud, with its lower cost of ownership and flexibility. Retail is no exception. The cloud is also the natural next step for microservices: It provides dynamic scalability and high availability, freeing teams of the operational burden of maintaining a distributed system in-house. This is why MongoDB customers are choosing MongoDB Atlas as their cloud database-as-a-service to deploy applications based on microservices.

As a step toward modernization, MongoDB can be used as an operational data store, as seen in Figure 3. Legacy data silos are connected via change data capture (CDC) and/or ETL processes in order to keep an up-to-date copy of the data, stored as JSON documents. This is a first major advantage, since applications can now be developed against a data model that fits how developers think and code, providing unprecedented agility to the development cycle.

Figure 3: Microservices with MongoDB acting as an operational data store. Applications can be developed taking advantage of its flexible data model and scalability.

MongoDB Atlas adds the benefits and flexibility of a fully managed, API-driven database. It allows for environment automation without dealing with every detail of database operation and scaling, making development teams more agile so that they can evolve applications at the pace customers expect and require today. Learn more about how MongoDB and MACH are changing the game for retail, and stay tuned for the next blog in this series, in which we will discuss how an API-first approach helps retailers simplify development processes, increase interoperability, and reduce inefficiencies.
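One brief, illustrative aside on the operational data store pattern described above: once data lands in MongoDB, downstream services can react to it using MongoDB change streams, one (of several) ways to propagate updates between decoupled services. The sketch below is a hypothetical consumer with made-up database and field names, not a prescribed architecture; change streams require a replica set or an Atlas cluster.

```python
from pymongo import MongoClient

# Hypothetical setup: the operational data store receives product data from
# legacy systems, and a downstream service wants to react to new arrivals.
client = MongoClient("mongodb+srv://<user>:<password>@cluster0.example.mongodb.net")
products = client["ods"]["products"]

# Watch only for newly inserted documents.
pipeline = [{"$match": {"operationType": "insert"}}]

with products.watch(pipeline) as stream:
    for change in stream:
        # For inserts, the full new document is included in the change event.
        new_product = change["fullDocument"]
        print("New product available:", new_product.get("name"))
```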

April 28, 2022
Applied

Celebrating Earth Day With Three MongoDB Customers

Every April 22nd, citizens across the globe come together to celebrate the environmental movement on Earth Day. This year’s official theme is “Invest in our Planet.” According to the Earth Day organization, “for Earth Day 2022, we need to act (boldly), innovate (broadly), and implement (equitably). It’s going to take all of us. All in. Businesses, governments, and citizens — everyone accounted for, and everyone accountable. A partnership for the planet.” On this important day, we’re highlighting three MongoDB customers that have taken great strides to make a positive impact on our environment. They are shining examples of the power of MongoDB and what it means to be eco-friendly.

University of Bremen

At Germany’s University of Bremen, the Collaborative Research Center (CRC) initiative Farbige Zustände is a cross-disciplinary effort to reinvent an entire field of research: the discovery of new materials. “It’s not just harder, lighter, stronger materials,” Dr. Nils Ellendt, CEO of the CRC, says. “It’s finding materials that need less refining, that are more compatible with a sustainable environment. How few elements can we use, not how many.” When the CRC first planned its data infrastructure, the center looked at standard structured databases, but it quickly realized that the datasets researchers would be using were quite heterogeneous and better suited to unstructured database techniques. That’s where MongoDB came in. MongoDB proved it could handle unstructured data at scale, so the CRC built its entire testing process around it, having determined that MongoDB was well suited to its unique approach. The center has its sights on a complete revolution in materials science, not simply through the creation of a massive new catalog of potential engineering materials, but also by pioneering data and automation in creative engineering. Read our full profile of the CRC “Farbige Zustände.”

Journey Foods

Journey Foods is a machine learning–powered software platform for food companies, designed to revolutionize the future of food. Just a few years after its launch, Journey Foods has raised more than $2.5 million from investors and partnered with global consortiums such as Future Food Network, FoodTank, and the University of Chicago on sustainability and data. “We are trying to focus on developing our service to accurately provide nutrition insights, sustainability insights, and help save our customers money,” said Riana Lynn, CEO. “We are prioritizing partnerships that will help us build out a big and dynamic ecosystem.” Lynn said the company chose MongoDB because of its seamless user experience, ease of scalability, and recommendations from other companies. She cited the consistent and always-available support and follow-up from MongoDB; because of that, her developers appreciate how easy it is to use the platform and to share and collaborate on different projects. Read our full interview with Journey Foods’ CEO.

Kode Labs

The commercial and real estate markets are being transformed by new technologies that reduce the carbon footprint of these energy-intensive businesses. Kode Labs was born because its founders recognized the importance of sustainable buildings and how they rely on advanced software to achieve LEED and other sustainability certifications. Kode Labs launched in 2017 to provide intuitive, easy-to-use software for building management that enables sustainability, operational efficiency, and comfort.
The company uses MongoDB Atlas for a fully managed database that allows it to effortlessly deploy new projects, infrastructure components, and more when starting to work with a new client or building out further projects with an existing one. “Everyone wants to be more energy efficient, healthier, and have modern places to live and work,” says Etrit Demaj, co-founder of Kode Labs. With MongoDB Atlas, “we help building managers and construction firms deliver on these growing expectations.” Read more about Kode Labs’ mission to support sustainable buildings.

April 21, 2022
Applied

The 5-Step Guide to Mainframe Modernization for Banks

Enriched, convenient, and personalized are the watchwords for any business building a modern, digital customer experience. It’s no different for traditional retail banks, especially as they try to fend off challenger banks and design their own online banking and in-branch experiences to win new business and retain existing customers. But in order to beat the competition and build experiences that best those offered by neobanks, established retail banks need to master their data estate. Specifically, they need to free themselves from the rigid data architectures associated with legacy mainframes and monolithic enterprise banking applications. Only then can established banks put their developers to work building high-quality customer-facing applications rather than managing thousands of SQL tables, scrambling to rework schemas, or maintaining creaky legacy systems. The first step on this journey is modernizing the mainframe.

Enriched modernization in 5 phases

The best way to modernize is through a phased model that uses an operational data layer (ODL). An ODL acts as a bridge between a bank’s existing systems and its new ones. Using an ODL allows for an iterative approach, letting banks see progress toward modernization at each step along the way while still protecting existing assets and business-critical operations. Banks can see rapid improvements in a relatively short amount of time while preserving the legacy components for as long as they’re needed to keep the business running. MongoDB’s five-phase approach to modernization enables banks to modernize iteratively while balancing performance and risk. If banks are eager to modernize and their customers are demanding modern banking experiences, what’s taking banks so long to move away from the legacy systems that are restricting their ability to innovate? And why do so many legacy modernization efforts fall short? Download The 5 Phases of Banking Modernization to start plotting your path forward.

Mainframe modernization techniques

With an ODL, the legacy infrastructure can be switched off piece by piece and retired as more functionality is added. In this scenario, database operations become much more efficient because objects are stored together rather than in disjointed locations. Reads are executed in parallel across the nodes of a replica set, while writes are largely unaffected. To bring similar benefits to writes, banks may choose to implement an ODL with sharding and regional shards, bringing writes closer to the actual user. Workloads can then be gradually moved from legacy systems to the ODL, with the ultimate goal of decommissioning the legacy system.

The beauty of this approach to modernization is that it starts with the use case: What problems does the bank face in its data management, and what functionalities are customers requesting? If the first priority is giving customers access to historical transaction data, then banks can tackle that problem immediately by building a repository (or domain) to offload customer data from the mainframe. If the priority is cost reduction, then an ODL can act as an interim layer, allowing applications to access the data they need without running expensive queries against mainframe data.

The advantages of an ODL

MongoDB’s application data platform is ideal for connecting legacy mainframes and databases to newer architectures, such as a data mesh, by way of an ODL. An ODL has a number of advantages.
Combined, these advantages make data massively easier to access and use — and applications easier and faster to build.

An ODL allows an organization to process and augment data that resides in separate silos, and then use that data to power a downstream product, such as a website or an ATM. With an ODL, data is physically copied to a new location. A bank’s legacy systems remain in place, but new applications can access data through the ODL rather than interacting directly with legacy systems.

An ODL can draw data from one or many source systems and power one or many consuming applications, unifying data from multiple systems into a single real-time platform.

An ODL relieves the mainframe of workloads. One useful by-product is avoiding consumer service interruptions brought about by maintenance windows on legacy systems, like Oracle Exadata.

An ODL can be used to serve only reads, accept writes that are then written back to source systems, or evolve into a system of record that eventually replaces legacy systems and simplifies the enterprise architecture.

Because of its ability to work with legacy systems, or to gradually replace them, and its support for an evolutionary approach to legacy modernization, many banks find that an ODL is a critical step on the path to full modernization of their enterprise architecture. In terms of architectural setup, some banks may want one ODL for each of their data domains, while others may find that certain domains can share an ODL. The ODS/ODL template can be applied in a variety of ways — without breaking the bank’s internal standards.

For example, imagine an ATM terminal connected to a MongoDB-based ODL. With the ODL in place, data from the mainframe is replicated in real time and made available for the consumer to check their most recent transactions and account balance at the ATM. Customer balance information, however, also still resides on the source system. Using the ODL to replicate and display information from the mainframe spares customers the annoying delays they would otherwise face while waiting for information to load from the mainframe. At the same time, risk management and regulatory reports can still be run against the mainframe as a batch “end of day” process. With an ODL in place, data can flow from the mainframe to a newer architecture, giving the ATM broader capabilities that expand customers’ banking experiences, such as the ability to pay invoices, change addresses, or even open additional accounts. (A query sketch for this scenario appears at the end of this article.)

Nightly batch, bulk load, or real-time updates: MongoDB is flexible enough to connect to any data source, be it classic DB2 for zOS, Oracle, SQL Server, Hadoop-based legacy, or even Excel spreadsheets. MongoDB has the appropriate connectivity to ingest any data at any time from anywhere.

Enrichment, data domains, and data marketplaces: With its document data model, MongoDB can bring data into data domains rather than relying on convoluted table schemas and ETL processes. The domains emerge naturally based on application and user community requirements.

Security, schemas, and validation: MongoDB provides multiple layers of security, including password protection, encryption in flight and at rest, and granular field-level encryption, all with external key management.

MongoDB can be used as an operational data layer

Take the next step in mainframe modernization

Because many core banking capabilities are transactional and can be handled with daily batch processing, mainframes remain the backbone of our financial system.
Mainframe modernization might sound daunting, but it doesn’t have to be. Banks can choose to proceed along a straightforward and predictable path that allows them to modernize iteratively. They can realize the benefits of modernization in one area of the organization even if other groups are earlier in their modernization path. It’s possible to do this while supporting increasingly complex data privacy regulations and, importantly, minimizing risk. Banks and other financial institutions that have successfully modernized have seen cost reductions, faster performance, simpler compliance practices, and more rapid development cycles. New, flexible architectures have accelerated the creation of value-added services for consumers and corporate clients. If you’re ready to learn more about how you can accelerate your digital transformation and minimize risk, download “The 5 Phases of Banking Modernization” now.
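To make the ATM scenario above more concrete, here is a minimal sketch of how a consuming application might read a customer's most recent transactions from a MongoDB-based ODL. The connection string, collection, and field names are hypothetical, and routing reads to secondary replica set members is just one option for spreading read load; this is an illustration, not a reference architecture.

```python
from pymongo import MongoClient, ReadPreference

# Hypothetical ODL cluster fed from the mainframe via CDC or nightly batch loads.
client = MongoClient("mongodb+srv://<user>:<password>@odl-cluster.example.mongodb.net")

# Route these reads to secondary members so the primary stays free for the
# writes arriving from the replication pipeline.
transactions = client["odl"].get_collection(
    "transactions", read_preference=ReadPreference.SECONDARY_PREFERRED
)

def recent_transactions(account_id, limit=10):
    # Latest activity for one account, newest first, as the ATM would display it.
    cursor = (
        transactions.find(
            {"accountId": account_id},
            {"_id": 0, "date": 1, "amount": 1, "description": 1},
        )
        .sort("date", -1)
        .limit(limit)
    )
    return list(cursor)

for txn in recent_transactions("ACC-1234"):
    print(txn["date"], txn["amount"], txn["description"])
```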

April 21, 2022
Applied

Forward Together: How MongoDB Invests in (and Helps Grow) Modernization Consultancies for Our Customers

At MongoDB, our partners are critical to our ability to care for our customers. For example, our Global Systems Integrator (GSI) partners help MongoDB customers modernize, migrate, and manage their data infrastructure in the cloud at scale. Notwithstanding the continued success of our GSI partners, many of our customers and the GSIs themselves expressed the need for assistance that was highly tailored to their specific industry or that addressed an extremely complex technical challenge. As a result of this feedback, MongoDB has launched a modernization investment program focused on specialist system integrators, which we are calling Boutique Systems Integrators (BSIs). This post explains our rationale behind the program and offers key details on how it functions. Even though this article is focused on our system integrator program, customers may also appreciate understanding the thought process behind a key partner initiative that strives to deliver continued customer value.

Why did we create the BSI program?

Many BSIs offer smaller and more specialized consultancy services, enabling us to expand MongoDB’s global reach in a focused and thoughtful manner. Furthermore, because these BSIs are specially suited to particular industries or sectors within those industries, they help us care for customers in businesses that have historically been difficult to move off of legacy infrastructure. This approach has allowed us to improve the MongoDB customer experience while also creating beneficial opportunities for our partners.

In addition to the need for industry-specific expertise, it was also evident that both our customers and our GSI partners (which tend to be larger corporations) were looking for smaller, more specialized organizations to complement their efforts. For instance, if a GSI didn’t possess the necessary expertise for a given use case, a BSI could help deliver and implement a solution jointly.

Our BSI program immediately delivered value to our customers and GSI partners. It was a great solution to a difficult problem. However, we quickly realized that customer demand for BSI services exceeded capacity. We needed to determine how to accelerate our BSI program and scale our existing partners in order to best support our customers. In the winter of 2020, during some of the most difficult periods of the pandemic, we launched an initiative in which we directly invested in and took an equity stake in a handful of BSIs. Our investments allowed the BSIs to rapidly accelerate the onboarding of highly skilled consultants, launch additional go-to-market offerings, drive marketing programs, and remain maniacally focused on driving modernization value for our customers.

How did we create the BSI program — and how are things going?

To select the most appropriate partners to invest in, we created a simple but effective request for proposal (RFP) process. The criteria focused on straightforward qualifications: We sought out organizations led by founders or executives with extensive enterprise experience, established organizations with 50 employees or fewer looking to scale, and aspiring partners with existing MongoDB skills and a proven track record of success in modernization. The vetting process took a wide range of factors into consideration, including growth potential and customer references. Essentially, MongoDB screened for organizations with strong foundations, which took the form of seasoned leadership, finely honed business processes, and the ability to scale with us.
We also assessed competence in related areas like training and team development, overall organizational alignment on goals, industry expertise, and recruiting. To take things one step further, the most promising companies were selected to become investment partners, a higher tier of the BSI program. Today, after more than a year of hard work, five investment partners have joined our program: PeerIslands, Exafluence, Gravity9, Wekan Enterprise Solutions, and Clarity Business Solutions. In total, our investment partners can field more than 300 modernization consultants — all of whom are thoroughly familiar with MongoDB — and empower customers to fully leverage the potential of the MongoDB application data platform.

Where we are with each of our investment partners

Let’s review some of the exceptional work these five BSI partners have done over the past year.

PeerIslands won the BSI Partner of the Year Award in 2021. A strong organization that has excelled across all of our success criteria, PeerIslands possesses strong delivery skills in the independent software vendor (ISV) replatform and retail segments. The company has a very capable team of ex-Cognizant leaders with close alignment to MongoDB products. Customers love the company’s onshore-offshore model, and we are happy to empower PeerIslands to expand its headcount and organizational abilities — and thrilled to see where the team will go next.

Exafluence is a leader in modernizing payments and healthcare platforms, with a particular focus on Fast Healthcare Interoperability Resources (FHIR). Given the documented compatibility problems between healthcare information solutions, Exafluence is in an excellent position to facilitate the transformation of obsolete, legacy technology — and help improve vital outcomes for patients and providers alike.

Wekan Enterprise Solutions, based in India and the United States, specializes in IoT, mobile development and modernization, and fleet logistics. This creates a natural alignment with Realm, MongoDB’s toolkit for application developers, which includes key services, app logic, and data sync. Wekan, the strongest partner in our ecosystem with Realm expertise, has excelled in reimagining mobile and backend data infrastructure for its customers.

Gravity9, headquartered in the U.K., has considerable expertise in accelerating clients’ transformation journeys by modernizing applications for the cloud, developing operational data stores, and creating single-view solutions. As a result, the company possesses deep technical capabilities and is well qualified to bring MongoDB to a new slate of enterprises.

Clarity Business Solutions specializes in application modernization for U.S. government agencies. With its experienced team (many of whom already possess top security clearances and other federal qualifications), Clarity is well positioned to help public sector customers get more from their MongoDB and FedRAMP investments.

How our customers have benefited from this program

Our investment partners work with clients to:

Accelerate their modernization journeys with MongoDB's application data platform

Provide access to highly skilled modernization consultants with expertise in the MongoDB stack and the cloud

Bring in industry specialization for MongoDB use cases

Deliver end-to-end solutions using MongoDB's products and services

Dr. Narendra Kini, the chief medical officer at our customer VitalProbe Inc., says, “Converting an array of healthcare data (structured, unstructured, and semi-structured) across critical variables into formats that can be stored and accessed for advanced analytics to drive better health outcomes is the core competency of Exafluence.” Further, says Dr. Kini, “They are able to achieve this interoperability through their MongoDB Atlas–centric data platform, with FHIR APIs being accessed through mobile apps that leverage MongoDB Realm sync technology. Our company is a great example of a customer that benefits from Exafluence’s expertise and more specifically from the FHIR platform built by them.”

“Three years ago, when we decided to transform our data engineering and ML workloads to the cloud and build the world’s best revenue cycle management platform for healthcare providers, we needed a partner who could step up to the challenge. PeerIslands came our way then,” says Sriram Upadhyayula, the CTO at CloudMed. “We have since worked with them for over three years, and I have been more than satisfied with the solutions PeerIslands has developed and delivered. They have grown to be recognized for their experience in digital transformation and for their expertise in MongoDB. The quality of talent at PeerIslands is among the best I’ve worked with,” says Upadhyayula. “They go above and beyond to deliver quality results, time and again.”

Where we go from here

Going forward, MongoDB looks to grow our modernization investment program for BSIs and continue to empower our existing investment partners as we seek to jointly care for customers. With the help of our talented partners, we aim to support MongoDB adoption — and provide a better data experience — for many more customers across various industries and time zones. If you are a customer interested in using one of our investment partners for a modernization project or would like to become one of our BSIs, please contact us at partners@mongodb.com. We look forward to hearing from you!

April 7, 2022
Applied
