MongoDB Blog

Articles, announcements, news, updates and more

What Does the Executive Order on Supply Chain Security Mean for Your Business? Security Experts Weigh In on SBOMs

In the wake of high-profile software supply chain attacks, the White House issued an executive order requiring more transparency in the software supply chain. Executive Order 14028 on Improving the Nation’s Cybersecurity requires software vendors to provide a software bill of materials (SBOM). An SBOM is a list of ingredients used by software — that is, the collection of libraries and components that make up an application, whether they are third-party, commercial off-the-shelf, or open source software. By providing visibility into all the individual components and dependencies, SBOMs are seen as a critical tool for improving software supply chain security. The new executive order affects every organization that does or seeks to do business with the federal government.

To learn more about the requirements and their implementation, MongoDB invited a few supply chain security experts for a panel discussion. In our conversation, Lena Smart, MongoDB’s Chief Information Security Officer, was joined by three expert panelists: Dr. Allan Friedman, senior advisor and strategist, CISA; Clinton Herget, principal solutions engineer, Snyk; and Patrick Dwyer, CycloneDX SBOM project co-lead, Open Web Application Security Project.

Background

In early 2020, hackers broke into Texas-based SolarWinds’ systems and added malicious code to the company’s Orion software, which is used by more than 33,000 companies and government entities to manage IT resources. The code created a backdoor into affected systems, which hackers then used to conduct spying operations. In December 2021, a vulnerability in the open source Log4j logging library was disclosed. The vulnerability enables attackers to execute code remotely on any targeted computer. It triggered massive reconnaissance activity, according to security researchers, and left many large corporations that use the Log4j library exposed to malicious actors. In mid-2021, the Russian ransomware gang REvil exploited flaws in software from Kaseya, an IT management application popular with managed service providers. The attacks multiplied before warnings could be issued, resulting in malicious encryption of data and ransom demands as high as $5 million.

In our panel discussion, Dr. Friedman kicked off the conversation by drawing on the “list of ingredients” analogy, noting that knowing what’s in the package at the grocery store won’t help you keep your diet or protect you from allergens by itself — but good luck doing so without it. What you do with that information matters. So the data layer is where we will start to see security practitioners implement new intelligence and risk-awareness approaches, Friedman says.

SBOM Use Cases

The question of what to do with SBOM data was top of mind for all of the experts in the panel discussion. Friedman says that when the idea of SBOMs was first introduced, it was in the context of on-premises systems and network or firewall security. Now, the discussion is centered on SaaS products. What should customers expect from an SBOM for a SaaS product? As senior advisor and strategist at the Cybersecurity and Infrastructure Security Agency (CISA), Friedman says this is where the focus will be over the next few months as the agency engages in public discussions with the software community to define those use cases. A few of the use cases panelists cited included pre-purchase due diligence, forensic and security analysis, and risk assessment.
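To make the forensic and security analysis use case a little more concrete, here is a minimal, hypothetical Python sketch (not something discussed by the panel) of how a team might scan a CycloneDX-style SBOM in JSON form for a vulnerable component such as Log4j. The file name, the component fields read, and the fixed-version cutoff are illustrative assumptions only.

```python
import json

# Hypothetical example: flag Log4j components below a patched release
# (2.17.1 is used here purely for illustration).
AFFECTED_NAME = "log4j-core"
FIXED_VERSION = (2, 17, 1)

def parse_version(text):
    """Best-effort numeric parse of a semantic-style version string."""
    parts = []
    for piece in text.split("."):
        digits = "".join(ch for ch in piece if ch.isdigit())
        parts.append(int(digits) if digits else 0)
    return tuple(parts)

with open("sbom.json") as f:          # assumed CycloneDX JSON export
    sbom = json.load(f)

for component in sbom.get("components", []):
    name = component.get("name", "")
    version = component.get("version", "")
    if AFFECTED_NAME in name and parse_version(version) < FIXED_VERSION:
        print(f"Potentially exposed: {name} {version}")
```

This kind of check, run across an inventory of SBOMs, is the sort of tooling the panelists describe building around SBOM data rather than letting the documents sit unused.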
“At the end of the day, we're doing this hopefully to make the world of software more secure,” Smart says. No one wants to see another Log4j, the panelists agreed, but chances are we'll see something similar. A tool such as an SBOM could help determine exposure to such risks or prevent them from happening in the first place.

Dwyer waded into the discussion by emphasizing the need for SBOM production and consumption to fit into existing processes. “Now that we're automating our entire software production pipeline, that needs to happen with SBOMs as well,” Dwyer says.

Herget agreed on the need to understand the use cases and edge cases, and to integrate them. “If we're just generating SBOMs to store them in a JSON file on our desktop, we’ve missed the point,” he says. “It's one thing to say that Maven can generate an SBOM for all Java dependencies in a given project, which is amazing until you get to integrating non-Java technologies into that application.” Herget says that in the era of microservices, you could be working with an application that has 14 different top-level languages involved, with all of their corresponding sets of open source dependencies handled by an orchestrated, cloud-based continuous integration pipeline. “We need a lot more tooling to be able to do interesting things with SBOMs,” Herget continued. “Wouldn't it be great to have search-based tooling to be able to look at dependency tree relationships across the entire footprint?” For Herget, future use cases for SBOM data will depend on a central question: What do we have that is a scalable, orchestrated way to consume SBOM data that we can then throw all kinds of tooling against to determine interesting facts about our software footprint that we wouldn't necessarily have visibility into otherwise?

SBOMs and FedRAMP

In the past few years, Smart has been heavily involved in FedRAMP (Federal Risk and Authorization Management Program), which provides a standardized approach to government security authorizations for cloud service offerings. She asked the panelists whether SBOMs should be part of the FedRAMP SSP (System Security Plan). Friedman observed that FedRAMP is a “passed once, run anywhere” model, which means that once a cloud service is approved by one agency, any other government agency can also use it. “The model of scalable data attestations that are machine-readable I think does lend itself as a good addition to FedRAMP,” Friedman says. Herget says that vendors will follow if the government chooses to lead on implementing SBOMs. “If we can work toward a state where we're not talking about SBOMs as a distinct thing or even an asset that we're working toward but something that’s a property of software, that's the world we want to get to.”

The Role of Developers in Supply Chain Security

As always, the role of the developer is one of the most critical factors in improving supply chain security, as Herget points out. “The complexity level of software has exceeded the capacity for any individual developer, even a single organization, to understand where all these components are coming from,” Herget says. “All it takes is one developer to assign their GitHub merge rights to someone else who's not a trusted party, and now that application and all the applications that depend on it are subject to potential supply chain attack.” Without supply chain transparency or visibility, Herget explains, there’s no way to tell how many assets are implicated in the event of an exploit.
And putting that responsibility on developers isn’t fair, because there are no tools or standardized data models that explain where all the interdependencies in an application ultimately lead. Ingredient lists are important, Herget says, but what’s more important is the relationships between them: which components are included in a piece of software and why, who added them and when, and having all of that in a machine-readable and manipulable form. “It's one thing to say, we have the ingredients,” Herget says. “But then what do you do with that, what kind of analysis can you then provide, and how do you get actionable information in front of the developer so they can make better decisions about what goes into their applications?”

SBOM Minimum Requirements

The executive order lays out the minimum requirements of an SBOM, but our panelists expect that list of requirements to expand. For now, the requirements fall into three general buckets:

Each component in an SBOM requires a minimum amount of data, including the supplier of the component, the version number, and any other identifiers of the component.

SBOMs must exist in a widely used, machine-readable format, which today is either CycloneDX or SPDX.

Policies and practices around how deep the SBOM tree should go in terms of dependencies.

Moving forward, the panelists expect the list of minimum requirements to expand to include additional identifiers, such as a hash or digital fingerprint of a component, and a requirement to update an SBOM anytime you update software. They also expect additional requirements for the dependency tree, such as a more complete tree or at least the ability to generate the complete tree. “Log4j taught people a lot about the value of having as complete a dependency tree as possible,” Friedman said, “because it was not showing up in the top level of anyone's dependency graph.”

SBOMs for Legacy Systems

One of the hurdles to implementing SBOMs universally is what to do with legacy systems, according to Smart. Johannes Ullrich, dean of research for the SANS Technology Institute, has said that it may be unrealistic to expect 10- or 20-year-old code to ever have a reasonable SBOM. Friedman pointed to the use of binary analysis tools to assess software code and spot vulnerabilities, noting that an SBOM taken from the build process is far different from one built using a binary analysis tool. While the one taken from the build process represents the gold standard, Friedman says, there could also be incredible power in the binary analysis model, but there needs to be a good way to compare the two to ensure an apples-to-apples approach. “We need to challenge ourselves to make sure we have an approach that works for software that is in use today, even if it's not necessarily software that is being built today,” Herget says. As principal solutions engineer at Snyk, Herget says these are precisely the conversations they’re having about the right amount of support for 30-year-old applications that are still used in production but were built before the modern concept of package management became integrated into the day-to-day workflows of developers.
“I think these are the 20% of edge cases that SBOMs do need to solve for,” Herget says, “because if it’s something that only works for modern applications, it's never going to get the support it needs on both the government and the industry side.” Smart closed the topic by saying, “One of the questions that we've gotten in the Q&A is, ‘What do you call a legacy system?’ The things that keep me awake at night, that's what I call legacy systems.”

Perfect Ending

Finally, the talk turned to perfection: how you define it, and whether it’s worth striving for perfection before launching something new in the SBOM space. Herget, half-joking, said that perfection would be never having these talks again. “Think about how we looked at DevOps five or 10 years ago — it was this separate thing we were working to integrate within our build process,” he says. “You don’t see many panel talks on how we will get to DevOps today because it's already part of the water we’re all swimming in.” Dwyer added that, to him, perfection is when SBOMs are just naturally embedded in the modern software development lifecycle — all the tooling, the package ecosystems. “Perfection is when it's just a given that when you purchase software, you get an SBOM, and whenever it's updated, you get an SBOM, but you actually don't care because it's all automated,” Dwyer says. “That’s where we need to be.”

According to Friedman, one of the things SBOMs have started to do is expose some of the broader challenges that exist in the software ecosystem. One example is software naming and software identity. Friedman says that in many industries, we don't actually have universal ways of naming things. “And it’s not that we don't have any standards, it’s that we have too many standards,” he explains. “So, for me, perfection is saying SBOMs are now driving further work in these other areas of security where we know we've accumulated some debt but there hasn't been a forcing function to improve it until now.”

May 23, 2022
Applied

MongoDB & IIoT: A 4-Step Data Integration

The Industrial Internet of Things (IIoT) is driving a new era of manufacturing, unlocking powerful new use cases to forge new revenue streams, create holistic business insights, and provide agility based on global and consumer demands. In our previous article, “Manufacturing at Scale: MongoDB & IIoT,” we gave an overview of the adoption and implementation of IIoT in manufacturing processes, testing various use cases with a model-size smart factory (Figure 1). In this post, we’ll look at how MongoDB’s flexible, highly available, and scalable data platform allows for end-to-end data integration using a four-step framework.

Figure 1: Architecture diagram of MongoDB's application data platform with MQTT-enabled devices

4-step framework for end-to-end data integration

The four stages of this framework (Figure 2) are:

Connect: Establish an interface to “listen” and “talk” to the device(s).
Collect: Gather and store data from devices in an efficient and reliable manner.
Compute: Process and analyze data generated by IoT devices.
Create: Create unique solutions (or applications) through access to transformational data.

Figure 2: The four-step framework for shop floor data integration

During the course of this series, we will explore each of the four steps in detail, covering the tools and methodology and providing a walkthrough of our implementation process, using the Fischertechnik model as a basis for testing and development. All of the steps, however, are applicable to any environment that uses a Message Queuing Telemetry Transport (MQTT) API. The first step of the process is Connect.

The first step: Connect

The model factory contains a variety of sensors that generate data on everything from the camera angle to the air quality and temperature — all in real time. The factory uses the MQTT protocol to send and receive input, output, and status messages related to the different factory components. You may wonder why we don’t immediately jump to the data collection stage. The reason is simple: We must first be able to “see” all of the data coming from the factory, which will allow us to select the metrics we are interested in capturing and configure our database appropriately.

As a quick refresher on the architecture of the factory, Figure 3 shows that any messages transmitted in or out of the factory are routed through the remote MQTT broker. The challenge is to successfully read messages from the factory and write messages to it.

Figure 3: Architecture diagram of the model smart factory

It is important to remember that the method of making this connection between the devices and MongoDB depends on the communication protocols the device is equipped with. On the shop floor, multiple protocols are used for device communication, such as MQTT and OPC-UA, which may require different connector technologies, such as Kafka, among other off-the-shelf IoT connectors. In most scenarios, MongoDB can be integrated easily, regardless of the communication protocol, by adding the appropriate connector configuration. (We will discuss more about that implementation in our next blog post.) For this specific scenario, we will focus on MQTT. Figure 4 shows a simplified version of our connection diagram.

Figure 4: Connecting the factory's data to MongoDB Atlas and Realm

Because the available communication protocol for the factory is MQTT, we will do the following:

1. Set up a remote MQTT broker and test its connectivity.
2. Create an MQTT bridge.
3. Send MQTT messages to the device(s).

A brief sketch of where these steps ultimately lead follows below.
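The following is a minimal Python sketch, not part of the official tutorial (which uses the Mosquitto CLI here and a Kafka-based pipeline later), showing the general pattern this connection eventually feeds: subscribe to the factory's MQTT topics and write each JSON message into MongoDB Atlas. The Atlas connection string, database, and collection names are placeholders.

```python
import json

import paho.mqtt.client as mqtt          # pip install paho-mqtt
from pymongo import MongoClient          # pip install pymongo

# Placeholder connection details -- substitute your own broker and cluster.
MQTT_HOST, MQTT_PORT, MQTT_TOPIC = "192.168.0.10", 1883, "#"
ATLAS_URI = "mongodb+srv://<user>:<password>@<cluster>.mongodb.net"

collection = MongoClient(ATLAS_URI)["factory"]["raw_messages"]

def on_message(client, userdata, message):
    # Each factory message is already JSON, so it maps directly to a document.
    try:
        doc = json.loads(message.payload)
    except ValueError:
        doc = {"raw": message.payload.decode(errors="replace")}
    doc["topic"] = message.topic
    collection.insert_one(doc)

client = mqtt.Client()                    # note: paho-mqtt 2.x also expects a callback API version argument here
client.username_pw_set("txt", "xtx")      # local factory broker credentials used in this tutorial
client.on_message = on_message
client.connect(MQTT_HOST, MQTT_PORT)
client.subscribe(MQTT_TOPIC)
client.loop_forever()
```

The series itself stores this kind of stream as time series data via Kafka, as discussed in the next article; the sketch above only illustrates the shape of the connection.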
Note that these steps can be applied to any devices, machinery, or environment that come equipped with MQTT, so you can adapt this methodology to your specific project. Let’s get started.

1. Set up a remote MQTT broker

To focus on the connection of the brokers, we used a managed service from HiveMQ to create a broker and the necessary hosting environment. However, this setup would work just as well with any self-managed MQTT broker. HiveMQ Cloud has a free tier, which is a great option for practice and for testing the desired configuration. You can create an account to set up a free cluster and add users to it. These users will function as clients of the remote broker. We recommend using different users for different purposes.

Test the remote broker connectivity

We used the Mosquitto CLI client to directly access the broker(s) from the command line. Then, we connected to the same network used by the factory, opened a terminal window, and started a listener on the local TXT broker using this command:

mosquitto_sub -h 192.168.0.10 -p 1883 -u txt -P xtx -t f/o/#

Next, in a new terminal window, we published a message to the remote broker on the same topic as the listener. A complete list of all topics configured on the factory can be found in the Fischertechnik documentation. You can fill in the command below with the information of your remote broker:

mosquitto_pub -h <hivemq-cloud-host-address> -p 8883 -u <hivemq-client-username> -P <hivemq-client-password> -t f/o/# -m "Hello"

If the bridge has been configured correctly, you will see the message “Hello” displayed in the first terminal window, which contains your local broker listener.

Now we get to the good part. We want to see all the messages that the factory is generating for all of the topics. Because we are a bit more familiar with the Mosquitto CLI, we started a listener on the local TXT broker using this command:

mosquitto_sub -h 192.168.0.10 -p 1883 -u txt -P xtx -t #

Here, the topic “#” essentially means “everything.” And just like that, we can get a sense of which parameters we can hope to extract from the factory into our database. As an added bonus, the data is already in JSON. This will simplify the process of streaming the data into MongoDB Atlas once we reach the data collection stage, because MongoDB runs on the document model, which is also JSON-based.

The following screen recording shows the data stream that results from starting a listener on all topics to which the devices publish while running. You will notice giant blocks of data, which are the encoding of the factory camera images taken every second, as well as other metrics, such as stock item positions in the warehouse and temperature sensor data, all of which is sent at regular time intervals. This is a prime example of time series data, which we will describe how to store and process in a future article.

Video: Results of viewing all device messages on all topics

2. Create an MQTT bridge

An MQTT bridge (Figure 5) is a uni- or bi-directional binding of topics between two MQTT brokers, such that messages published to one broker are relayed seamlessly to clients subscribed to the same topic on the other broker.
Figure 5: Message relays between MQTT brokers

In our case, the MQTT broker on the main controller is configured to forward/receive messages to/from the remote MQTT broker via the following MQTT bridge configuration:

connection remote-broker
address <YOUR REMOTE MQTT BROKER IP ADDRESS:PORT>
bridge_capath /etc/ssl/certs
notifications false
cleansession true
remote_username <HIVEMQ CLIENT USERNAME>
remote_password <HIVEMQ CLIENT PASSWORD>
local_username txt
local_password xtx
topic i/# out 1 "" ""
topic o/# in 1 "" ""
topic c/# out 1 "" ""
topic f/i/# out 1 "" ""
topic f/o/# in 1 "" ""
try_private false
bridge_attempt_unsubscribe false

This configuration file is created and loaded directly into the factory broker via SSH.

3. Send MQTT messages to the device(s)

We can test our bridge configuration by sending a meaningful MQTT message to the factory through the HiveMQ websocket client (Figure 6). We signed into the console with one of the users (clients) previously created and sent an order message to the “f/o/order” topic used in the previous step.

Figure 6: Sending a test message using the bridged broker

The format for the order message is:

{"type":"WHITE","ts":"2022-03-23T13:54:02.085Z"}

“type” refers to the color of the workpiece to order. We have a choice of three workpiece colors: RED, WHITE, and BLUE. “ts” refers to the timestamp of when the message is published. This determines its place in the message queue and when the order process will actually be started. Once the bridge is configured correctly, the factory will start to process the order according to the workpiece color specified in the message.

Thanks for sticking with us through to the end of this process. We hope this methodology provides fresh insight for your IoT projects. Find a detailed tutorial and all the source code for this project on GitHub. Learn more about MongoDB for Manufacturing and IIoT.

This is the second post in an IIoT series from MongoDB’s Industry Solutions team. Read the first post, “Manufacturing at Scale: MongoDB & IIoT.” In our next article, we will explore how to capture time series data from the factory using MongoDB Atlas and Kafka.

May 20, 2022
Applied

Open Banking: How to Future-Proof Your Banking Strategy

Open banking is on the minds of many in the fintech industry, leading to basic questions such as: What does it mean for the future? What should we do today to better serve customers who expect native open banking services? How can we align with open banking standards while they’re still evolving?

In a recent panel discussion, I spoke with experts in the fintech space: Kieran Hines, senior banking analyst at Celent; Toine Van Beusekom, strategy director at Icon Solutions; and Charith Mendis, industry lead for banking at AWS. We discussed open banking standards, what the push to open banking means for innovation, and more. This article provides an overview of that discussion and offers best practices for getting started with open banking.

Watch the panel discussion, Open Banking: Future-Proof Your Bank in a World of Changing Data and API Standards, to learn how you can future-proof your open banking strategy.

Fundamentals

To start, let’s answer the fundamental question: What is open banking? The central tenet of open banking is that banks should make it easy for consumers to share their financial data with third-party service providers and allow those third parties to initiate transactions on their behalf — adding value along the way. But, as many have realized, facilitating open banking is not so easy. At the heart of the open banking revolution is data — specifically, the infrastructure of databases, data standards, and open APIs that makes the free flow of data between banks, third-party service providers, and consumers possible.

What does this practice mean for the banking industry? In the past, banks almost exclusively built their own products, which has always been a huge drain on teams, budgets, and infrastructure. With open banking, financial services institutions are now partnering with third-party vendors to distribute products, and many regulations have already emerged to dictate how data is shared. Because open banking is uncharted territory, it presents an array of both challenges — mostly regulatory — and opportunities for established banks and disruptors alike. Let’s dig into the challenges first.

Challenges

As open banking, and the technology practices that go along with it, evolve, related compliance standards are emerging and evolving as well. If you search for “open banking API,” you’ll find that nearly every vendor has their own take on open banking and that those takes are all incompatible with one another. As with any developing standard, open banking standards are not set in stone and will continue to evolve as the space grows. This fast-changing environment will hinder banks that do not have a flexible data architecture that allows them to quickly adapt to provider standards as needed. An inflexible data architecture becomes an immediate roadblock with unforeseen consequences.

Closely tied to the challenge of maintaining compliance with emerging regulations is the challenge that comes with legacy architecture. Established banks deliver genuine value to customers through time-proven, well-worn processes. In many ways, however, legacy operations and the technology that underpins them are doomed to stand in the way not only of open banking but also of operational efficiency goals and the ability to meet the customer experience expectations of a digital-native consumer base. To avoid the slowdown of clunky legacy systems, banks need an agile approach that gives them the flexibility to pivot as challenges develop.
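To illustrate the flexibility point, the short Python sketch below (an illustration only, with invented field names and no reference to any particular open banking specification) shows how a document database can accept differently shaped account payloads from two providers in the same collection, without a schema migration in between.

```python
from pymongo import MongoClient

# Placeholder connection string and names -- purely illustrative.
accounts = MongoClient("mongodb+srv://<user>:<password>@<cluster>.mongodb.net")["bank"]["accounts"]

# Provider A exposes balances as a flat structure...
accounts.insert_one({
    "provider": "provider_a",
    "accountId": "A-1001",
    "currency": "GBP",
    "balance": 1523.40,
})

# ...while provider B nests balances and adds consent metadata.
# Both shapes coexist in one collection; fields can be added later as
# standards evolve, without altering existing documents.
accounts.insert_one({
    "provider": "provider_b",
    "accountId": "B-2002",
    "balances": [{"type": "available", "amount": {"value": 87.10, "currency": "EUR"}}],
    "consent": {"scope": ["ReadBalances"], "expires": "2022-12-31"},
})

# A single query can still work across both shapes.
for doc in accounts.find({"provider": {"$in": ["provider_a", "provider_b"]}}):
    print(doc["accountId"])
```

This is only a sketch of the data-model idea; production open banking integrations would layer validation, consent handling, and API gateways on top.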
Opportunities

The biggest opportunity for institutions transitioning to open banking is the potential for rapid innovation. Banking IP is headed in new and unprecedented directions. Pushing data to the cloud, untangling spaghetti architecture, or decentralizing your data by building a data mesh frees up your development teams to innovate, tap into new revenue streams, and achieve the ultimate goal: providing greater value to your customers.

As capital becomes scarce in banks, the ability to repeatedly invest in new pilots is limited. Instead of investing months’ or years’ worth of capital into an experiment, building new features from scratch, or going to the board to secure funding, banks need to be able to succeed quickly, scale from prototype to global operation within weeks, or fail fast with new technology. Without the limiting factors of legacy software or low levels of capital, experimentation powered by new data solutions becomes both inexpensive and low risk.

Best Practices

Now that we’ve described the potential that open banking presents for established and emerging industry leaders, let’s look at some open banking best practices, as described in the panel discussion.

Start with your strategy. What’s your open banking strategy in the context of your business strategy? Ask hard questions like: Why do you want to transform? What’s wrong with what’s going on now? How can you fix current operations to better facilitate open banking? What new solutions do you need to make this possible? A shift to open banking means an entirely new business strategy, and you need to determine what that strategy entails before you implement sweeping changes.

View standards as accelerators, not inhibitors. Standards can seem like a burden on financial institutions, and in most cases, they do dictate change that can be resource intensive. But you can also view changing regulations as the catalyst needed to modernize. While evolving regulations may be the impetus for change, they can also open up new opportunities once you’re aligned with industry standards.

Simplify and unify your data. Right now, your data likely lives all over the place, especially if you’re an established bank. Legacy architectures and disparate solutions slow down and complicate the flow of data, which in turn inhibits your adoption of open banking standards. Consider how you can simplify your data by reducing the number of places it lives. Migrating to a single application data platform makes it faster and easier to move data from your financial institution to third parties and back again.

Always consider scale. When it comes to open banking, your ability to scale up and scale down is crucial — and it is also tied to your ability to experiment, which is equally critical. Consider the example of offering “buy now, pay later” services to your clients. On Black Friday, the biggest shopping day of the year, financial institutions will do exponentially more business than on, say, a regular Tuesday in April. To meet consumer demand, your payments architecture needs to be able to scale up for the influx of demand on a single, exceptional day and scale back down on a normal day to minimize costs. Without the ability to scale, you may struggle to meet the expectations of customers.

Strive for real time. Today, everyone — from customers to business owners to developers — expects the benefits of real-time data. Customers want to see their exact account balance when they want to see it, which is already challenging enough.
If you add the new layer of open banking to the mix, with data constantly flowing from banks to third parties and back, delivering data to customers in real time is more complex than ever. That said, with the right data platform underpinning operations, the flow of data between systems can be simplified, and it becomes even easier when your data is unified on a single platform.

If you can unlock the potential of open banking, you can innovate, tap into new revenue streams, shake off the burden of legacy architecture, and, ultimately, achieve a level of differentiation likely to bring in new customers. Watch the panel discussion to learn more about open banking and what it means for the future of banks.

May 19, 2022
Applied

Collaborative User Story Mapping with Avion and MongoDB

When companies think about their products, they often fall into the trap of planning without truly considering their users’ journey and experience. Perhaps it’s time to start thinking about products from the customer's perspective. Avion was founded by James Sear and Tim Ramage with one thing in mind: to provide the most intuitive and enjoyable user story mapping experience for agile teams to use, from product inception to launch (and beyond). The key, Sear said, is that user story mapping gives you a way of thinking about your product and its features, typically software, from the perspective of your customers or users. This is facilitated by defining things that the user can do (user stories) within the context of your core user journeys. Built with MongoDB spoke with Sear about the idea of user story mapping, how he and Ramage started Avion, and what it’s been like to work with MongoDB.

Built with MongoDB: What is Avion all about?

James Sear: Avion is a digital user story mapping tool for product teams. It helps them break down complexity, map out user journeys, build out the entire scope of their product, and then decide what to deliver and in what order. It’s a valuable tool that is typically underused. Not everyone understands what story mapping is, as it’s quite a specific technique, and you do have to put the time in to learn it in order to get the most out of it. But once you have, there is so much value to be unlocked in terms of delivering better outcomes for your users, as opposed to just building stuff for the sake of it.

Built with MongoDB: What made you decide to start Avion?

Sear: My co-founder Tim Ramage and I met around 2014, and we were jointly involved in teams that were building lots of different software products for various companies, both big and small. And while we were very involved in their technical implementation, we were also both really interested in the product management side of delivery, because it’s just so crucial to being successful. That includes everything from UX decisions, product roadmapping and prioritization, customer feedback, metrics, and managing the team; it all really interested us. However, one thing that we found a particularly difficult part of the process was taking your clients’ big ideas and translating them into some sort of actionable development plan. We tried a few different approaches for this, until we stumbled across a technique called user story mapping.

User story mapping manages to pull together all of your core user journeys, the scope of all the features that could be built, and how you plan to deliver them. On top of that, it conveys the order in which you should be working on things. Once you have this powerful asset, you can have effective conversations with your team and answer the most important questions, such as: What’s the minimum we can build to make this valuable to users? Where does this feature actually appear for our users? What are we going to build next, and why? It really does allow you to communicate more effectively with stakeholders. For instance, you could use it to update your CEO and talk them through what you’re building now, answering those difficult questions like why you’re not building feature X or feature Y. You’ve got this outline right in front of you that makes sense to a product person, a developer, or even an outside stakeholder.

Built with MongoDB: Initially, you started to build out a collaborative tool for product teams, and Avion has evolved into more.
What else has changed in your journey at Avion?

Sear: Our goal at launch was to provide our customers with a best-in-class story mapping experience in the browser. This meant nailing the performance and user interaction, so creating a story map just felt fluid and easy. After this, we focused on tightly integrating with more traditional backlog tools, like Jira and Azure DevOps. We always maintain that our customers shouldn’t have to give up their existing tooling to get value from Avion — so we built it to sit in the middle of their stack and assist them with planning and delivery.

Built with MongoDB: What are some of the challenges that you’ve faced in such a crowded productivity space?

Sear: It’s difficult to stick out among the crowd, but our unique value proposition is actually quite niche. This allows us to show our potential customers a different side of product planning that they might not have seen before. And for anyone who already knows about story mapping, Avion is an opinionated and structured canvas for them to just get work done and be productive quickly. Ultimately, we try to stick out by providing value in a vertical slice of product planning that is often overlooked.

Built with MongoDB: What kind of experiences have you had working with MongoDB?

Sear: There have been many scenarios where we’ve been debugging difficult situations with production scaling issues, and we just cannot work out why the apps have gone down overnight. There are so many tricky things that come up when you’re running in production. But we have always managed to find something in MongoDB Atlas that can help us pinpoint the issue, whether it’s usage graphs or other metrics that allow us to really dig down into the collections, the queries, and everything else. MongoDB has been excellent for that in terms of features.

It also gives you peace of mind. We’ve had customers delete things of their own accord and get really upset, but we’ve been able to help them by going back to snapshot backups and retrieving that data for them. From a customer support perspective, it’s massive to have that option on the table. MongoDB Atlas is really useful to us, and we don’t have to configure anything; it’s just amazing. The MongoDB upgrades are completely seamless and help us stay on the latest version of the database, which is a huge win for security.

Learn more about user story mapping with Avion, and start planning a more user-centric backlog. Interested in learning more about MongoDB for Startups? Learn more about us on the MongoDB Startups page.

May 19, 2022
Applied

Atlas Charts Adds a Dedicated Hub for Managing Embedded Charts and Dashboards

Since the release of the Charts Embedding SDK in May 2020, developers have been exploring powerful new ways to visualize and share data from their MongoDB Atlas clusters. Embedding charts and dashboards is a valuable use case for Charts users, and the new Embedding Page streamlines the embedding experience for first-time users and veterans alike.

Everything you need on one screen

Don’t worry if the concept of embedding within the MongoDB Charts platform is new to you. The Getting Started tab provides configuration guidance and links to video references, code snippets, live sandboxes, and other resources to help you get started. But just as your applications may evolve according to your needs, your embedding requirements may also change over time. Once you have set up an embedded dashboard or chart, the Items tab acts as the landing page. Think of this as a live snapshot of your current embedding environment. You’ll see a list of all of your charts grouped by their dashboards, be able to search based on title or description, and filter the list to show only dashboards. Within each row, you can view a chart or dashboard’s embedded status, see which type of embedding is enabled, view and copy the embedding ID, and access the full suite of embedding settings available for each item. This means that you can add filters or change your embedding method without having to know exactly where every chart or related setting lives. This approach also lets you operate with confidence on one single page. How cool is that?

Authentication settings

The Charts SDK allows you to configure unauthenticated embedding for dashboards or charts, making for a painless way to share these items in a safe and controlled environment. Depending on your use case, though, this setup may be a little more flexible than you’d like. The Authentication Settings tab contains authentication provider settings, giving project owners a single source of truth for adding and maintaining providers.

Our focus for this feature is on simplicity and consolidation. We believe wholeheartedly that if we can enable you to spend less time hunting down where to configure settings or find resources, you can focus more on what really matters and build great software. For more information on authentication options, read our documentation.

New to MongoDB Atlas Charts? Get started today by logging in to or signing up for MongoDB Atlas, deploying or selecting a cluster, and activating Charts for free.

May 18, 2022
Updates

From Core Banking to Componentized Banking: Temenos Transact Benchmark with MongoDB

Banking used to be a somewhat staid, hyper-conservative industry, seemingly evolving over eons. But banking in recent years has dramatically changed. Under pressure from demanding consumers and nimble new competitors, development cycles measured in years are no longer sufficient in a market expecting new products, such as buy now, pay later, to be introduced within months or even weeks.

Just ask Temenos, the world's largest financial services application provider, providing banking for more than 1.2 billion people. Temenos is leading the way in banking software innovation and offers a seamless experience for its client community. Financial institutions can embed Temenos components, which deliver new functionality, in their existing on-premises environments (or in their own environment in their cloud deployments), or they can opt for a full banking-as-a-service experience with Temenos T365 powered by MongoDB on various cloud platforms. Temenos embraces a cloud-first, microservices-based infrastructure built with MongoDB, giving customers flexibility while also delivering significant performance improvements. This new MongoDB-based infrastructure enables Temenos to rapidly innovate on its customers' behalf, while improving security, performance, and scalability.

Architecting for a better banking future

Banking solutions often have a life cycle of 10 or more years, and some systems I am involved in upgrading date back to the 1980s. Upgrades and changes, often focused on regulatory or technical needs (for example, operating system versions), hardware refreshes, and new functionality, are bolted on. The fast pace of innovation, a mobile-first world, competition, crypto, and DeFi are demanding a massive change for the banking industry, too. The definition of new products and rollouts measured in weeks and months versus years requires an equally drastic change in technology adoption.

Banking is following a path similar to the retail industry. Retail was built upon a static design approach with monolithic applications connected through ETL (Extract, Transform, and Load) and “unloading of data,” which was robust and built for the times. The accelerated move to omnichannel requirements brought a component-driven architecture design to fruition that allowed faster innovation and fit-for-purpose components to be added (or discarded) from a solution. The codification of this is called MACH (Microservices, API-first, Cloud-native, and Headless), and a great example is the flexibility brought to bear through companies such as Commercetools.

Temenos is taking the same direction for banking. Its concept of components that are seamlessly added to existing Temenos Transact implementations empowers banks to start an evolutionary journey from existing on-premises environments to a flexible hybrid landscape delivering best-of-breed banking experiences. Key to this journey is a flexible data concept that meshes the existing environments with the requirements of fast-changing components available on premises and in the cloud. Temenos and MongoDB joined forces in 2019 to investigate the path toward data in a componentized world. Over the past few years, our teams have collaborated on a number of new, innovative component services to enhance the Temenos product family, and several banking clients are now using those components in production. However, the approach we've taken allows banks to upgrade on their own terms.
By putting components “in front” of the Temenos Transact platform, banks can start using a componentization solution without disrupting their ability to serve existing customer requirements. Similarly, Temenos offers MongoDB's critical data infrastructure with an array of deployment capabilities, from full-service multi-cloud or hybrid cloud offerings to on-premises self-managed deployments, depending on local regulations and the client’s risk appetite. In these and other ways, Temenos makes it easier for its banking clients to embrace the future without upsetting existing investments. From an architectural perspective, the diagram below shows how component services utilize the new event system of Temenos Transact and enable a new way of operating:

Temenos Transact optimized with MongoDB

Improved performance and scale

All of which may sound great, but you may still be wondering whether this combination of MongoDB and Temenos Transact can deliver the high throughput needed by Tier 1 banks. Based on extensive testing and benchmarking, the answer is a resounding yes. Having been in the benchmark business for a long time, I know that you should never trust just ANY benchmark. (In fact, my colleague, MongoDB distinguished engineer John Page, wrote a great blog post about how to benchmark a database.) But Temenos, MongoDB, and AWS jointly felt the need to remove this nagging itch and deliver a true statement on performance, delivering proof of a superior solution for the client community.

Starting with the goal of reaching a throughput of 25,000 transactions per second, it quickly became obvious that this rather conservative goal could easily be smashed, so we decided to quadruple the number to 100,000 transactions per second using a more elaborate environment. The newly improved version of Temenos Transact in conjunction with component services proved to be a performance giant. One hundred thousand financial transactions per second with a MongoDB response time under 1ms was a major milestone compared to earlier benchmarks with 79ms response time with Oracle, for example. Naturally, this result is in large part due to the improved component behavior and the AWS Lambda functions that now run the business functionality, but the document model of MongoDB in conjunction with the idiomatic driver concept has proven superior to the outdated relational engine of the legacy systems.

Below, I have included some details from the benchmark. As Page once said, “You should never accept single benchmark numbers at face value without knowing the exact environment they were achieved in.”

Configuration:

JMeter scripts: 3
Balance services: 6 (GetBalance: 4, GetTransactions: 2)
Transact services: 4
MongoDB Atlas cluster: M80 (2TB)
Documents in Balance: 110M
Documents in Transaction: 200M

Test results:

Functional       | TPS    | API latency (ms) | DB latency (ms)
Get Balance      | 46751  | 79.45            | 0.36
Get Transaction  | 22340  | 16.58            | 0.36
Transact Service | 31702  | 117.15           | 1.07
Total            | 100793 | 71.067           | 0.715

The underlying environment consists of 200 million accounts with 100 million customers, which shows the scale the configuration is capable of working with. This setup would be suitable for the largest Tier 1 banking organizations. The well-versed MongoDB user will notice that the cluster configuration used for MongoDB is small: the M80 cluster, 32 vCores with 128GB RAM, is configured with 5 nodes.
Many banking clients prefer those larger 5-node configurations for higher availability protection and better read distribution across multiple AWS Availability Zones and regions, which would improve performance even more. In the case of an Availability Zone outage, or even a regional outage, the MongoDB Atlas platform will continue to serve traffic via the additional region as backup. The low latency shows that the MongoDB Atlas M80 was not even fully utilized during the benchmark. The diagram below shows a typical configuration for such a cluster setup for the American market: one East Coast location, one West Coast location, and an additional node outside both regions, in Canada. MongoDB Atlas allows the creation of such a cluster within seconds, configured to the specific requirements of the solution deployed. The total landscape is shown in the following diagram:

Signed, sealed, and delivered. This benchmark should give clients peace of mind that the combination of core banking with Temenos Transact and MongoDB is indeed ready for prime time. While thousands of banks rely on MongoDB for many parts of their operations, ranging from login management and online banking to risk and treasury management systems, Temenos' adoption of MongoDB is a milestone. It shows that there is significant value in moving from legacy database technology to the innovative MongoDB application data platform, allowing faster innovation, eliminating technical debt along the way, and simplifying the landscape for financial institutions, their software vendors, and service providers.

If you would like to learn more about MongoDB in the financial services industry, take a look at our guide: The Road to Smart Banking: A Guide to Moving from Mainframe to Data Mesh and Data-as-a-Product.

May 18, 2022
Applied

A Hub for Eco-Positivity

In this guest blog post, Natalia Goncharova, founder and web developer of EcoHub — an online platform where people can search for and connect with more than 13,000 companies, NGOs, and governmental agencies across 200-plus countries — describes how the company uses MongoDB to generate momentum around global environmental change.

There is no denying that sustainability has become a global concern. In fact, the topic has gone mainstream. A 2021 report by the Economist Intelligence Unit (EIU) shows a 71% rise in the popularity of searches for sustainable goods over the past five years. The report “measures engagement, awareness and action for nature in 27 languages, across 54 countries, covering 80% of the world’s population.” The EIU report states that the sustainability trend is accelerating in developing and emerging countries, including Ecuador and Indonesia. For me, it’s not a lack of positive sentiment that is holding back change; it is our ability to turn ideas and goodwill into action. We need a way of harnessing this collective sentiment.

In 2020, the decision to found EcoHub and devote so much time to it was a difficult one to make. I had just been promoted to team leader at work, and things were going well. Leaving my job with the goal of helping to protect our environment sounded ridiculous at times. Many questions raced through my mind, the most insistent one being: Will I be able to actually make a difference? However, as you’ll see in this post, my decision was ultimately quite clear.

What is EcoHub?

When I created EcoHub, my principal aim was to connect ecological NGOs and businesses. Now, EcoHub enables users to search a database of more than 10,000 organizations in more than 200 countries. You can search via a map or by keyword. By making it easier to connect, EcoHub lets users quickly build networks of sustainably minded organizations. We believe networks are key to spreading good ideas, stripping out duplication, and building expertise.

Building the platform has been a monumental task. I have developed it myself over the past few months, acting as product manager, project manager, and full-stack developer. (It wouldn’t be possible without my research, design, and media teams as well.) During the development of the EcoHub platform on MongoDB, the flexible schema helped us edit and add new fields in a document, because the process doesn’t require defining data types. We had a situation in which it was necessary to change the schema and implement changes for all documents in the database. In this case, modifying the entire collection with MongoDB didn’t take long for an experienced developer. Additionally, MongoDB’s document-oriented data model works well with the way developers think. The model reflects how we see the objects in the codebase and makes the process easier. In my experience, the best resource for finding answers when I ran into a question or issue was the MongoDB documentation. It provides a good explanation of almost anything you want to do in your database.

Search is everything

In technical terms, my choices were ReactJS, NodeJS, and MongoDB. It is the latter that is so important to the effectiveness of the EcoHub platform. Search is everything. The easier we can make it for individuals or organizations to find like minds, the better. I knew from the start that I’d need a cloud-based database with strong querying abilities.
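To give a rough sense of the kind of map and keyword queries such a directory can serve (the collection and field names here are hypothetical, and EcoHub's own stack is Node.js rather than Python), the searches might look like this:

```python
from pymongo import MongoClient, GEOSPHERE, TEXT

# Hypothetical connection string, database, and field names.
orgs = MongoClient("mongodb+srv://<user>:<password>@<cluster>.mongodb.net")["ecohub"]["organizations"]

# Indexes that support keyword search and map-based search.
orgs.create_index([("name", TEXT), ("description", TEXT)])
orgs.create_index([("location", GEOSPHERE)])

# Keyword search: organizations matching a term.
for org in orgs.find({"$text": {"$search": "recycling"}}).limit(10):
    print(org["name"])

# Map search: organizations within roughly 50 km of a point (longitude, latitude).
nearby = orgs.find({
    "location": {
        "$near": {
            "$geometry": {"type": "Point", "coordinates": [2.3522, 48.8566]},
            "$maxDistance": 50_000,  # meters
        }
    }
})
```

The collection-wide schema change mentioned earlier is similarly compact in practice: typically a single update_many call with $set across all documents.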
As an experienced developer, I had worked with MongoDB before and knew the company to be reliable, with excellent documentation and a really strong community of developers. It was a clear choice from the start. Choosing our partners carefully is also important. If EcoHub is to build awareness of environmental issues and foster collaboration, then we must ensure we make intelligent choices in terms of the companies we work with. I have been impressed with MongoDB’s sustainability commitments, particularly around diversity and inclusion, carbon reduction, and its appetite for exploring the impact the business has globally and locally.

EcoHub search is built on the community version of MongoDB, which enables us to work quickly, implement easily, and deliver the right performance. Importantly, as EcoHub grows and develops, MongoDB also allows us to make changes on the fly. As environmental concerns continue to grow, our database will expand. MongoDB enables our users to search, discover, and connect with environmental organizations all over the world. I believe these connections are key to sharing knowledge and expertise and helping local citizens coordinate their sustainability efforts.

Commitment to sustainability

When it came down to it, the decision to build EcoHub wasn’t as difficult as I initially thought. My commitment to sustainability actually started when I was young: I can remember myself at 8 years old, glued to the window, waiting for the monthly Greenpeace magazine to arrive. Later, that commitment grew as I went to university and graduated with a degree in Environmental Protection and Engineering. Soon after, I founded my first ecology organization and rallied our city against businesses wanting to cut down our beautiful city parks. Starting EcoHub was a natural and exciting next step, despite the risks and unknown factors. I hope we can all join hands to create a sustainable future for ourselves, our children, and our animals and plants, and keep our planet beautiful and healthy.

MongoDB Atlas makes operating MongoDB a snap at any scale. Determine the costs and benefits with our cost calculator.

May 11, 2022
Applied

Shared Responsibility: More Agility, Less Risk

The tension between agility, security, and operational uptime can keep IT organizations from innovating as fast as they’d like. On one side, application developers want to move fast and continually deliver innovative new releases. On the other side, InfoSec and IT operations teams aim to continually reduce risk, which developers often perceive as slowing them down. That perception couldn’t be further from the truth. Modern InfoSec and IT operations are evolving into SecOps and DevOps, and the idea that they want to stop developers from innovating by restricting them to old, centrally controlled paradigms is a long-held prejudice that needs to be resolved. What security and site reliability teams really want is for developers to operate with agility as well as safety, so that risks are appropriately governed. The shared responsibility model can reduce risk while still allowing for innovation. The key to enabling developers to move fast while ensuring the level of security that SecOps and DevOps require is to abstract granular controls away from developers so they can focus on building applications while, in the background, secure defaults that cannot be disabled are in place at every level.

Doers get more done

Working with a cloud provider, whether you’re talking about infrastructure as a service (IaaS) or a hyperscaler, is like going into a home improvement store and seeing all the tools and materials. It gives you a sense of empowerment. That’s the same feeling you get when you’re in front of an administrative console for AWS, Google Cloud, or Azure. The aisles at home improvement stores, however, can contain some pretty raw materials. Imagine asking a team of developers to build a new, state-of-the-art kitchen out of lumber, pipes, and fittings without even a blueprint. You’re going to wind up with pipes that leak, drawers that don’t close, and cabinets that don’t fit. This approach understandably worries InfoSec and IT operations teams and can cause them to be perceived as innovation blockers, because they don’t want developers attempting do-it-yourself security. So how do you find a place where the raw materials provide exactly what you need so that you can build with confidence? That’s the best of both worlds. Developers can move faster by not having to deal with the plumbing, and InfoSec and IT operations get the security and reliability assurance they need. This is where the shared responsibility model comes in.

Shared responsibility in the cloud

When considering cloud security and resilience, some responsibilities fall clearly on the business. Others fall on public cloud providers, and still others fall on the vendors of the cloud services being used. This is known as the shared responsibility model. Security and resilience in the cloud are only possible when everyone is clear on their roles and responsibilities. Shared responsibility recognizes that cloud vendors, such as MongoDB, must ensure the security and availability of their services and infrastructure, and that customers must also take appropriate steps to protect the data they keep in the cloud.

The security defaults in MongoDB Atlas enable developers to be agile while also reducing risk. Atlas gives developers the necessary building blocks to move fast without having to worry about the minutiae of administrative security tasks.
Atlas enforces strict security policies for things like authentication and network isolation, and it provides tools for ensuring secure best practices, such as encryption, database access controls, auto-scaling, and granular auditing.

Testing for resilience

The shared responsibility model attempts to strike a balance between agility, security, and resilience. Cloud vendors must meet the responsibilities of their service-level agreements (SLAs), but businesses also have to be conscientious about their cloud resources. Real-world scenarios can cause businesses to experience outages, and avoiding them is the essence of the shared responsibility model. To avoid such outages, MongoDB Atlas does everything possible to keep database clusters continuously available; the customer holds the responsibility of provisioning appropriately sized clusters for their workloads. That can be an uphill battle when you’re talking about an intensive workload for which the cluster is undersized.

Consider a typical laptop as an example. It has an SLA insofar as it has specifications that determine what it can do. If you try to drive a workload that exceeds the laptop’s specifications, it will freeze. Was the laptop to blame, or was it the workload? With the cloud, there’s an even greater expectation that there are more than enough resources to handle any given workload. But those resources are based on real infrastructure with specs, just like the laptop. This example illustrates both the essence and the ambiguity of the shared responsibility model. As the customer, you’re supposed to know whether that stream of data is something your compute resources can handle. The challenge is that you don’t know it until you start running into the boundaries of your resources, and pushing the limits of those boundaries means risking the availability of those resources. It’s not hard to imagine a developer, who may be working under considerable stress, sizing a cluster too small for its workload, which then leads to a freeze or outage.

It’s essential, therefore, for companies to have a test environment that closely mimics their production environment. This allows them to validate that the MongoDB Atlas cluster can keep up with what they’re throwing at it. Anytime companies make changes to their applications, there is risk. Some of that risk may be mitigated by things like auto-scaling and elasticity, but the level of protection they afford is limited. Having a test environment can help companies better predict the outcome of changes they make.

The cloud has evolved to a point where security, resilience, and agility can peacefully coexist. MongoDB Atlas comes with strict security policies right out of the box. It offers automated infrastructure provisioning, default security features, database setup, maintenance, and version upgrades so that developers can shift their focus from administrative tasks to innovation when building applications. By abstracting away some of the security and resilience responsibilities through the shared responsibility model, MongoDB Atlas allows developers to move fast while giving SecOps the reassurances they need to support their efforts.

May 11, 2022
Applied

Survey of 2,000 IT Professionals Reveals the Importance of Innovation, and Its Challenges

We all know that innovation is hard. Yet innovation is high on the strategic agendas of most organizations, partly because there is no choice. Highly innovative organizations are more successful across a number of measures, including profitability. Increasingly, that innovation must be delivered through software. Early in the digital age, just using software was enough to set a company apart. But today, off-the-shelf software (or off-the-shelf cloud services) doesn’t provide a lasting competitive advantage, because your competitors have access to the exact same software and services. It’s up to internal teams to build the innovations that set organizations apart. The 2022 MongoDB Report on Data and Innovation surveyed 2,000 developers and IT decision-makers in the Asia Pacific region, covering Australia, China, Hong Kong, India, New Zealand, South Korea, and Taiwan. The report details our findings on the importance of innovation, the technical challenges of building new things, and the consequences of failing to do so. Among the key findings: Fully 73% of respondents agreed that working with data is the hardest part of building and evolving applications; 55% say their data architectures are complex, and 38% of organizations surveyed use 10 or more databases; developers spend 28% of their time building new features or applications versus 27% maintaining existing data, applications, and systems; and the top blockers of innovation include developer workloads, data architecture, and legacy technologies and technical debt. To unlock all our findings and understand more about the need for increased attention to innovation, download the full report.

May 9, 2022
Home

Semeris Demystifies Legal Documents Using MongoDB

Sorting through endless legal documents can be a time-consuming and burdensome process, but one startup says it doesn’t have to be that way. Semeris strives to demystify legal documentation by using the latest artificial intelligence and natural language processing techniques. Semeris’s goal is to put the information its customers need at their fingertips when and where they need it. Semeris aims to bring structure to capital markets legal documents, while providing a first-class service to customers and blending together the disciplines of finance, law, natural language processing, and artificial intelligence. In this edition of Built with MongoDB, we talk with Semeris about how they use MongoDB Atlas Search to help customers analyze documents and extract data as quickly as possible. Built with MongoDB spoke with Semeris CEO Peter Jasko about his vision for the company, working with MongoDB, the company’s relationship with venture capital firm QVentures, and the value of data. In this video, Peter Jasko explains how MongoDB Atlas’s fully managed service and support has been a key factor in helping Semeris scale.

Built with MongoDB: Can you tell us about Semeris?

Peter Jasko: We help our investor, banking, and lawyer clients analyze legal documentation. We help them extract information from the documentation that they look at. A typical transaction might have 500 to 1,000 pages of documentation, and we help them analyze that really quickly and pull out the key information they need, so they can review that documentation within a couple of hours rather than the 7 or 8 hours it would normally take.

Built with MongoDB: What is the value of data in your space?

Peter: Data is essential in what we do because we build models around the publicly available documentation that we see. We store that data, we analyze it, we build machine learning models around it, and then we use that to analyze less seen documentation or more private documentation that our clients have internally.

Built with MongoDB: How has your partnership with QVentures helped Semeris?

Peter: Our partnership with QVentures is not just a financial one where they’ve invested some money into our firm; they’ve also helped us uncover contacts within the market. They introduced us to the MongoDB partnership that has helped us get some credits and build out our technology onto the MongoDB platform.

Built with MongoDB: What has it been like using MongoDB’s technology?

Peter: We chose MongoDB because it’s a scalable solution, and it has a strong developer following. It’s easier for us to hire tech developers who understand the technology because MongoDB has such a strong following in the community. If we have small issues with the technology, we’re very quickly able to search and find the answer to learn how we need to resolve it. Additionally, scalability is really important to us. What we found is that the MongoDB platform scales both in compute and in storage seamlessly. We get a notification that more storage is required, and we can upgrade online with no customer impact and no downtime. It’s really, really seamless. Another reason we chose MongoDB is that it’s cloud agnostic. We’re on AWS now, but at some point we’re almost certainly going to be asked by customers to look at Azure or Google. So it’s really beneficial to us that MongoDB works on all the different platforms that we look at.

Built with MongoDB: What are some of the features you use within MongoDB?
Peter: We use MongoDB Atlas Search because of its ability to retrieve thousands of data points from multiple documents. We use the indexing capability there, and the key thing we find is that our customers, many of whom are analysts or investment portfolio managers, want that information in their hands as quickly as possible.

Built with MongoDB: What is some advice you’d give to aspiring founders and CEOs?

Peter: Try lots of things and try them quickly. Try lots of little spikes, take the ones that work well, and eventually put those into production. Really focus on what your customers want. Ultimately, we tried a lot of different ideas, some of which we thought were great. But you have to put them in front of your customers to decide which ones are really worth spending time on and bringing up to production quality, and which ones you should let fall by the wayside as research done but not ultimately used.

Find out more about Semeris Docs. Interested in learning more about MongoDB for Startups? Check out our Startups page.
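For readers curious what an Atlas Search query looks like in practice, here is a hedged sketch using the aggregation pipeline’s $search stage from PyMongo. The index name, connection string, collection, and field paths are hypothetical placeholders for illustration, not Semeris’s actual schema.

```python
# Illustrative Atlas Search query: full-text search over a hypothetical
# "documents" collection using an Atlas Search index named "default".
from pymongo import MongoClient

client = MongoClient("mongodb+srv://user:<password>@cluster0.example.mongodb.net")
coll = client["legal"]["documents"]  # placeholder database/collection names

pipeline = [
    {
        "$search": {
            "index": "default",                      # assumed index name
            "text": {
                "query": "change of control",        # example clause to look for
                "path": ["title", "clauses.text"],   # hypothetical field paths
            },
        }
    },
    {"$project": {"title": 1, "score": {"$meta": "searchScore"}}},
    {"$limit": 10},
]

for doc in coll.aggregate(pipeline):
    print(doc["title"], doc["score"])
```

The $search stage runs against an Atlas Search index that must be created on the cluster beforehand; everything after it is an ordinary aggregation pipeline, so results can be scored, filtered, and reshaped in the same query.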

May 4, 2022
Applied

From Enterprise Account Executive to Regional Director: How Lucile Tournier Has Accelerated Her Career with MongoDB France

Lucile Tournier joined MongoDB France as an Enterprise Account Executive in 2020. From learning new technology to becoming a new mom and taking on a leadership role, Lucile has had an incredible journey over the past two years. In this article, I talk with Lucile to learn more about her experience on the Enterprise Sales team in France and how she has grown her career to become a Regional Director at MongoDB. Click here to read this blog post in French.

Jackie Denner: Hi, Lucile. Thank you for sharing a bit about your career journey. How did you come to join MongoDB, and why were you interested in the company?

Lucile Tournier: MongoDB is my first experience working in the software industry. My previous roles were with French services companies, where I had very different experiences in terms of sales cycles, corporate culture (MongoDB being an American company), and even technicality (databases being the only stack I had never dealt with). I was certainly in my comfort zone in my previous positions. I said to myself, “If I am looking for a new challenge, why not try the software industry? Is it for me? Is it possible to switch from a services company to a software vendor?” I decided to contact Alexandre Esculier, Regional VP of France for MongoDB (at the time Regional Director), who had made exactly that shift. Who better than him to answer my questions? After many discussions with him and other members of the MongoDB team in France, I was convinced and decided to go through the recruitment process. You might wonder why I chose MongoDB in particular. Three years ago, I co-founded a capital markets startup within a services company. It was an exciting experience, in the fast lane, full of challenges and great successes. I liked the “speed boat” aspect (fast and adaptable) within an established company. For my next chapter, I wanted to join a company that was fast-paced and innovative. I really found the best of both worlds at MongoDB: an established company with clear processes and disruptive technology, all while having a startup spirit with hypergrowth and agility. I made the right decision.

JD: Tell me a bit about your experience in the Enterprise Account Executive role.

LT: Like a roller coaster. Over six months of intensive onboarding, I was able to quickly go into the field alongside very valuable teams: my manager, Solutions Architects, Customer Success, and Partner teams, to name a few. I started to improve my skills, sign my first contracts with major accounts who trusted me (as did my management), open up new territories, and expand existing ones. I learned a lot about the technology, the sales process (based on MEDDIC, co-built by John McMahon, who is a member of the MongoDB board), and especially about myself, thanks to a feedback culture that is at the very heart of MongoDB. Learning about yourself is not so easy. It requires being able to question yourself every single day, but what a great opportunity to grow.

JD: What makes enterprise sales at MongoDB a unique career opportunity?

LT: It is unique on several levels: the technology, the processes, the fast pace, the results of the company, and the people! Everything is amazing. What I particularly remember is the kindness. During my first year at MongoDB, I had the immense joy of becoming the mother of a little boy, Dorian. Starting a new job and becoming pregnant in the process is not quite what I had planned. I am grateful that the leadership team was open-minded, supportive, and more than happy for me.
I was able to successfully carry out my two great journeys: performing at MongoDB and becoming a mom. I don’t think it could have gone better anywhere else.

JD: You were promoted from Enterprise Account Executive to Regional Director. What learning and development opportunities helped you achieve this, and how did sales leadership support your transition?

LT: If I hadn’t had the trust and support of my entire line management, this transition would have been very difficult, if not impossible. I already had a team management role at my previous company. However, it was important for me, as for MongoDB, to go back to the field before returning to a team management position. Coming from a completely different world, how could I have properly guided a team without going through the field first? So I honed my skills, I proved I was 100% committed, and I listened as much as possible to the feedback I was given. I tried; I lost; I won. I did things differently, and I started again and again. In summary, I had confidence in my environment, and I was able to give my all while being well guided. I had regular development sessions, training, and, above all, an attentive ear from Alexandre Esculier and Jérôme Delozière, VP for continental Europe, who helped me to be self-aware and ask myself the right questions. After 18 months as an Enterprise Account Executive, I successfully transitioned to a Regional Director role managing five Enterprise Account Executives.

JD: What is most exciting about being part of the Enterprise Sales team at MongoDB?

LT: Everything! First, MongoDB’s technology is amazing. It is important to emphasize this, because it would be impossible for me to work for a company whose customers are not happy with its products. I want to be able to believe in what I am selling, and I believe in it. The R&D teams are always looking for the latest developments that allow us to be five years ahead of the market. Additionally, selling through the MEDDIC methodology has taught me a lot. I had the art, and MongoDB gave me the science. Even after 10 years in sales, I keep learning. Most importantly, the people! Everyone is trying to be the best version of themselves and one of the builders of this great adventure. It’s really nice to work with so much positive, collective drive.

JD: What is our Sales team culture like?

LT: To describe it in one word: transparent. With transparency, we can progress. We have to share with each other, help each other, point out our weaknesses, and listen. The same goes for customers. Transparency is the key.

JD: What skills and qualities make someone successful on the Enterprise Sales team?

LT: I think success comes from hard work. Nothing comes ready-made in this environment, and there is no relying on luck. You have to work, learn, question yourself, and move things forward. Luck comes later.

JD: Is there anything else that you think someone should know about our Enterprise Sales team in France?

LT: I’m hiring, so do not hesitate to reach out to me via LinkedIn!

Interested in joining MongoDB’s Sales team? We have several open roles across the globe and would love for you to transform your career with us!

May 3, 2022
Culture

How MongoDB Could Have Changed Star Wars and Saved the Jedi

May the 4th be with you! Here at MongoDB, lots of us love Star Wars. It got us thinking about how the events of the movie franchise could have unfolded differently had MongoDB products and features been available. So, to celebrate Star Wars Day, this article takes a light(side)-hearted look at exactly that!

MongoDB Atlas: How has nobody heard of the Jedi?

One of the questions fans asked when Star Wars: Episode VII – The Force Awakens was released was how Rey, Finn, and many others in the Star Wars universe didn’t know that the Jedi were real, let alone still existed. This can be explained by Emperor Palpatine ensuring that all Jedi Knights, temples, and traces of the Jedi were erased. But what if this information had been stored in MongoDB Atlas, our application data platform in the cloud? One of the core features of MongoDB Atlas is a document database-as-a-service (DBaaS), which stores data as JSON-like documents in collections in the cloud, accessible from anywhere with an internet connection. Under the hood, this database supports high availability using replica sets: sets of nodes (three by default, and at minimum), with one acting as the primary node and the others as secondary nodes. Data is replicated across the nodes, and availability is handled by Atlas automatically. If the primary node goes down, the replica set promotes a secondary node to primary. Imagine if, after Emperor Palpatine and Darth Vader destroyed evidence of the Jedi Order, the data could have recovered itself thanks to the high availability of clusters on Atlas. Atlas cloud recovery would also have helped prevent the deletion of data from the Jedi Archives. In Star Wars: Episode II – Attack of the Clones, Obi-Wan Kenobi visits the Jedi Archives on Coruscant to locate the planet Kamino, where he expects to find answers about who attempted to assassinate Senator Padmé Amidala. However, Obi-Wan finds himself having to call for the help of the librarian, Jocasta Nu, because he cannot find any trace of the planet in the archives. She famously says that if the planet is not in the archives, then it simply does not exist. Atlas’s database stores data in the cloud, available from anywhere you can access it, so the information in the archives would have been available everywhere, not just on the one server within the Jedi Archives. Luminous beings we might be, but database specialists, the Jedi were not.
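To see what that high availability looks like from an application’s point of view, here is a minimal sketch using PyMongo against a hypothetical Atlas cluster; the connection string, database, collection, and document are placeholders. The driver discovers the replica set topology on its own, and with retryable writes enabled it can transparently retry a write that lands in the middle of a primary election.

```python
# Illustrative sketch: inspect replica set topology and write with automatic
# failover handling. Names and connection string are placeholders.
from pymongo import MongoClient

client = MongoClient(
    "mongodb+srv://archivist:<password>@jedi-archives.example.mongodb.net/"
    "?retryWrites=true&w=majority"
)

# The "hello" command reports the replica set members the driver has discovered.
topology = client.admin.command("hello")
print("primary:", topology.get("primary"))
print("members:", topology.get("hosts"))

# With w="majority", the write is acknowledged only after a majority of nodes
# have it, so it survives the loss of the primary. If the primary steps down
# mid-operation, the retryable write is replayed against the new primary.
archives = client["jedi"]["planets"]
archives.insert_one({"name": "Kamino", "sector": "Abrion", "in_archives": True})
```

Even if Darth Vader takes out the primary, the remaining members elect a new one and the archives stay online.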
Security: You don't belong here!

In a world where ever more data is consumed and stored, people are becoming more aware of how secure their data is (or is not). When looking to use MongoDB Atlas, developers often ask how safe their data is in the cloud. MongoDB Atlas comes with many security features pre-configured from the start, including isolation, authorization, and encryption. We firmly believe that your data should be private and visible only to those with the rights to see it. In Star Wars: Episode VI – Return of the Jedi, the Alliance learns of the construction of a second Death Star and discovers that an energy shield generator protecting it sits on the forest moon of Endor. Leia, Han Solo, Chewbacca, R2-D2, C-3PO, and the Ewoks fight in the Battle of Endor for access to the bunker containing the generator. After R2-D2 and C-3PO draw away the Imperial forces, the Ewoks attack, and Chewbacca is able to steal an AT-ST and rescue his allies, who are attempting to hack into the bunker. They gain entry, plant explosives, and destroy the generator, exposing the new Death Star and allowing it to be destroyed by the Rebel Fleet. However, if MongoDB security had been involved, the rebels wouldn’t have gained access to the bunker, the energy shield protecting the Death Star would have remained, and the Empire could have won. Death Star II would have been free to strike fear into the hearts of many, and perhaps this would have prevented the creation of Starkiller Base in The Force Awakens and its destruction of the entire Hosnian system, saving millions of lives. Then again, while the Empire could have wiped out the Alliance and taken control of the galaxy with the second Death Star, it would have had to remain within range of its shield generator on the forest moon of Endor, unable to terrorize the galaxy at large as Starkiller Base eventually did. The First Order and Kylo Ren may never have risen to power. Luke Skywalker may not have escaped the Emperor or redeemed his father. Life would probably look very different for those millions of lives saved in the Hosnian system. We may, in fact, have seen Darth Vader and Luke Skywalker rule the galaxy as father and son. Scary thought!

Data API: Bye-Bye, R2-D2

MongoDB Atlas Data API is a new feature, currently in preview, that allows developers to access their Atlas-hosted database cluster via HTTP calls over the web using just a unique endpoint URL and an API key. This opens up the possibility of using the power of the MongoDB document database model in more ways and more scenarios. You might choose it because you don’t want the overhead of installing and using a driver for your chosen language or platform, perhaps because you are prototyping or simply prefer the API approach. You may not even have a driver available for your scenario but can still make HTTP requests. One example is the Internet of Things (IoT), where a driver often isn’t an option but calls over the web are easy. Another scenario is calling from a SQL stored procedure. That might sound controversial, but what if you want to push data to both a relational database and an Atlas cluster at the same time to help migrate to the most popular non-relational database in the world? A driver is a programming interface that lets an application communicate with the system it was built for. The MongoDB driver, available for multiple programming languages, handles communication with a MongoDB database via a connection string, so developers don’t have to write complicated low-level code. In the Star Wars universe, you can think of droids as an interface to the data in the world around them. R2-D2 is an astromech droid whose primary function is navigation but who is able to do far more, including interfacing with other computers via a SCOMP link, disabling the autopilot on the Naboo starfighter, picking up distress signals, locating Emperor Palpatine on Grievous’s ship, and, of course, sharing the Death Star plans with the Alliance. So, if the MongoDB Atlas Data API existed in the Star Wars universe, what might it look like? It could be a simple data pad, similar to a smartphone.
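Outside the galaxy far, far away, a Data API call from such a device is just an HTTPS request. Here is a hedged sketch of what a findOne call could look like in Python; the app endpoint URL, API key, and the data source, database, collection, and field names are all placeholders for illustration, not a real deployment.

```python
# Illustrative Data API call: fetch one document over HTTPS, no driver needed.
# The endpoint URL, API key, and names below are placeholders.
import requests

ENDPOINT = "https://data.mongodb-api.com/app/<app-id>/endpoint/data/v1/action/findOne"
API_KEY = "<data-api-key>"

payload = {
    "dataSource": "Cluster0",          # assumed Atlas cluster name
    "database": "galaxy",              # hypothetical database
    "collection": "plans",             # hypothetical collection
    "filter": {"name": "Death Star"},  # example query
}

response = requests.post(
    ENDPOINT,
    headers={"api-key": API_KEY, "Content-Type": "application/json"},
    json=payload,
    timeout=10,
)
response.raise_for_status()
print(response.json().get("document"))
```

The same endpoint pattern covers other actions such as find, insertOne, and aggregate, which is what would let a humble data pad read and write the archives without an astromech in the loop.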
Instead of relying on R2-D2, BB-8, or Chopper to act as an interface to the information in different computers around the galaxy, data pads could do the job, providing access to the data stored in Atlas. Using MongoDB, the Death Star plans might have been added to a collection in the database and made accessible to everyone who was authorized. This would have prevented some of the danger seen at the start of Star Wars: Episode IV – A New Hope, when Princess Leia had to upload the plans into R2-D2. Of course, R2-D2 would still have proved useful in other situations, such as in battle, putting out fires in the Millennium Falcon, or throwing Luke his lightsaber during a battle in Star Wars: Episode VI – Return of the Jedi. But some of the key roles he played could have been made redundant if a Data API-enabled data pad had been available instead.

Sharding: Where art thou, Ahch-To?

Speaking of R2-D2, another event he was involved in could have gone differently had another MongoDB feature existed in the Star Wars universe: sharding. When you have a really large data set, like, I don’t know, all the information in the galaxy’s HoloNet, you might want to break it down into smaller pieces to make it faster and easier to search through. Sharding is exactly that. It works by segregating your data into smaller pieces based on a field in your documents. A common real-world comparison is a library. In a library, books aren’t just thrown on a shelf in the order they were acquired; they are broken down across different shelves, sorted by author surname. The database equivalent is a shard key, which tells you exactly where to look first, saving you time and effort. In Star Wars: Episode VII – The Force Awakens, the Resistance, including Rey, wants to find Luke Skywalker. If the galaxy’s information had been sharded, it could have been searched for Luke’s location much faster, without the need to find a particular map to fill out a local data set. Access to Ahch-To, the location of the ancient Jedi Temple and the galaxy’s only green-milk-producing thala-sirens, would be just a query away. This also ties in nicely to the previous section about the Data API. Without the need to use R2-D2 for the missing piece, but instead using a data pad to query all the known information on the galaxy, the Resistance may have found Luke much sooner, especially if they were able to use MongoDB’s powerful query language to perform complex queries on the data using the aggregation pipeline.
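For the curious, here is a hedged sketch of what that could look like: sharding a hypothetical collection of galactic locations by a shard key and then running an aggregation to find Luke. The database, collection, and field names are invented for illustration, and the enableSharding/shardCollection commands assume a sharded (not just replicated) cluster.

```python
# Illustrative sketch: shard a hypothetical collection, then query it with the
# aggregation pipeline. All names are invented; sharding needs a sharded cluster.
from pymongo import MongoClient

client = MongoClient("mongodb+srv://user:<password>@holonet.example.mongodb.net")

# Distribute the "locations" collection across shards by region and system,
# so queries that include the shard key are routed to the relevant shards.
client.admin.command("enableSharding", "holonet")
client.admin.command("shardCollection", "holonet.locations",
                     key={"region": 1, "system": 1})

locations = client["holonet"]["locations"]

# Aggregation pipeline: latest reported sighting of Luke per system
# in the Unknown Regions (documents are assumed to be sighting records).
pipeline = [
    {"$match": {"region": "Unknown Regions", "person": "Luke Skywalker"}},
    {"$sort": {"date": -1}},
    {"$group": {"_id": "$system", "latest_sighting": {"$first": "$$ROOT"}}},
    {"$limit": 5},
]

for result in locations.aggregate(pipeline):
    print(result["_id"], result["latest_sighting"]["planet"])
```

Because the shard key (region, system) appears in the $match stage, the query can be routed to the shards holding those chunks rather than broadcast to the whole cluster, which is the whole point of sharding the HoloNet in the first place.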
There we have it: a trip through the galaxy and the events of Star Wars to see how the timeline might have been different had MongoDB been around. MongoDB is a non-relational database that can be used by all living things. It surrounds documents, penetrates analytics, and binds the galaxy together. The knowledge of the Jedi wouldn’t have been erased and their reputation tarnished had MongoDB Atlas and its high availability been available. The energy shield generator on Endor would have survived, meaning Death Star II may never have been destroyed, allowing the Empire to take full control of the galaxy but thwarting the rise of the First Order. R2-D2 might not have been so important had the MongoDB Atlas Data API been available on data pads, allowing direct access to the data over the internet instead of requiring a driver. And Luke Skywalker may have been found much sooner had sharding been available, alongside powerful querying functionality such as the aggregation pipeline, to bypass the need to find a map and get the missing piece from R2-D2. How can you use the power of MongoDB Atlas today to change your own universe? Get started today with our free-forever M0 tier. MongoDB World returns to NYC this June, and in honor of May the Fourth we are offering tickets at only $400 from May 4-6. Register now and join us for announcement-packed keynotes, hands-on workshops, and more, June 7-9.

