
MACH Aligned for Retail: Cloud-Native SaaS

MongoDB is an active member of the MACH Alliance, a non-profit consortium of technology companies fostering the adoption of composable architecture principles that promote agility and innovation. Each letter in the MACH acronym corresponds to a different concept that should be leveraged when modernizing heritage solutions and creating brand-new experiences: MACH stands for Microservices, API-first, Cloud-native SaaS, and Headless. In previous articles in this series, we explored the importance of Microservices and the API-first approach. Here, we will focus on the third principle championed by the alliance: Cloud-native SaaS. Let’s dive in.

What is cloud-native SaaS?

Cloud-native SaaS solutions are vendor-managed applications developed in and for the cloud that leverage all the capabilities the cloud has to offer, such as fully managed hosting, built-in security, auto-scaling, cross-regional deployment, automatic updates, built-in analytics, and more.

Why is cloud-native SaaS important for retail?

Retailers are pressed to transform their digital offerings to meet rapidly shifting consumer needs and remain competitive. Traditionally, this means establishing areas of improvement for your systems and instructing your development teams to refactor components to introduce new capabilities (e.g., analytics engines for personalization or mobile app support) or to streamline architectures to make them easier to maintain (e.g., moving from monolith to microservices). These approaches can yield good results but require a substantial investment in time, budget, and internal technical knowledge.

Now, retailers have an alternative tool at their disposal: cloud-native SaaS applications. These solutions are readily available off the shelf and require minimal configuration and development effort. Adopting them as part of your technology stack can accelerate the transformation and time to market of new features, without requiring specific in-house technical expertise. Many cloud-native SaaS solutions focused on retail use cases are available (see Figure 1), including Vue Storefront, which provides a front-end presentation layer for ecommerce, and Amplience, which enables retailers to customize their digital experiences.

Figure 1: Some MACH Alliance members providing retail solutions.

At the same time, in-house development should not be totally discarded, and you should aim to strike the right balance between the two options based on your objectives. Figure 2 shows the pros and cons of the two approaches.

Figure 2: Pros and cons of cloud-native SaaS and in-house approaches.

MongoDB is a great fit for cloud-native SaaS applications

MongoDB’s product suite is cloud-native by design and is a great fit if your organization is adopting this principle, whether you prefer to run your database on-premises, leveraging MongoDB Community and Enterprise Advanced, or as SaaS with MongoDB Atlas. MongoDB Atlas, our developer data platform, is particularly suitable in this context. It supports the three major cloud providers (AWS, GCP, Azure) and leverages the cloud platforms’ features to achieve cloud-native principles and design:
- Auto-deployment and auto-healing: Database clusters are provisioned, set up, and healed automatically, reducing operational and DBA effort.
- Automatically scalable: Built-in auto-scaling capabilities enable the database RAM, CPU, and storage to scale up or down depending on traffic and data volume. A MongoDB Serverless instance abstracts the infrastructure even further, letting you pay only for the resources you need.
- Globally distributed: The global nature of the retail industry requires data to be efficiently distributed to ensure high availability and compliance with data privacy regulations, such as GDPR, while implementing strict privacy controls. MongoDB Atlas leverages the flexibility of the cloud with its replica set architecture and multi-cloud support, meaning that data can be easily distributed to meet complex requirements.
- Secure from the start: Network isolation, encryption, and granular auditing capabilities ensure data is only accessible to authorized individuals, thereby maintaining confidentiality.
- Always up to date: Security patches and minor upgrades are performed automatically with no intervention required from your team. Major releases can be integrated effortlessly, without modifying the underlying OS or working with package files.
- Monitorable and reliable: MongoDB Atlas provides a set of utilities for real-time reporting of database activity, so you can monitor and improve slow queries, visualize data traffic, and more. Backups are also fully managed, ensuring data integrity.

Independent software vendors (ISVs) increasingly rely on capabilities like these to build cloud-native SaaS applications addressing retail use cases. For example, Commercetools offers a fully managed ecommerce platform underpinned by MongoDB Atlas (see Figure 3). Their end-to-end solution provides retailers with the tools to transform their ecommerce capabilities in a matter of days, instead of building a solution in-house. Commercetools is also a MACH Alliance member, fully embracing the composable architecture paradigms explored in this series.

Adopting Commercetools as your ecommerce platform of choice lets you automatically scale your ecommerce as traffic increases, and it integrates with many third-party systems, ranging from payment platforms to front-end solutions. Additionally, its headless nature and strong API layer allow your front end to be adapted to your brands, currencies, and geographies. Commercetools runs on and natively ingests data from MongoDB. Leveraging MongoDB for your other home-grown applications means that you can standardize your data estate while taking advantage of the many capabilities that the MongoDB data platform has to offer. The same principles can be applied to other SaaS solutions running on MongoDB.

Figure 3: MongoDB Atlas and Commercetools capabilities.

Find out more about the MongoDB partnership with Commercetools. Learn how Commercetools enabled Audi to integrate its in-car commerce solution and adapt it to 26 countries.

MongoDB supports your home-grown applications

MongoDB offers a powerful developer data platform, providing the tools to leverage composable architecture patterns and build differentiating experiences in-house. The same benefits of MongoDB’s cloud-native architecture explored earlier also apply in this context and are leveraged by many retailers globally, such as Conrad Electronics, which runs its B2B ecommerce platform on MongoDB Atlas.

Summary

Cloud-native principles are an essential component of modern systems and applications. They support ISVs in developing powerful SaaS applications and can be leveraged to build proprietary systems in-house. In both scenarios, MongoDB is strongly positioned to deliver on the cloud-native capabilities that should be expected from a modern data platform.
Stay tuned for our final blog of this series on Headless, and check out our previous blogs on Microservices and API-first.

September 22, 2022

How a Data Mesh Facilitates Open Banking

Open banking shows signs of revolutionizing the financial world. In response to pressure from regulators, consumers, or both, banks around the world continue to adopt the central tenet of open banking: Make it easy for consumers to share their financial data with third-party service providers and allow those third parties to initiate transactions.

To meet this challenge, banks need to transition from sole owners of financial data and the customer relationship to partners in a new, distributed network of services. Instead of competing with other established banks, they now compete with fintech startups and other non-bank entities for consumer attention and the supply of key services. Despite fundamental shifts in both the competition and the customer relationship, however, open banking offers a huge commercial opportunity, which we’ll look at more closely in this article. After all, banks still hold the most important currency in this changing landscape: trust.

Balancing data protection with data sharing

Established banks hold a special position in the financial system. Because they are long-standing, heavily regulated, and backed by government agencies that guarantee deposits (e.g., the FDIC in the United States), established banks are trusted by consumers over fintech startups when it comes to making their first forays into open banking. A study by Mastercard of 4,000 U.S. and Canadian consumers found that the majority (55% and 53%, respectively) strongly trusted banks with their financial data. Only 32% of U.S. respondents and 19% of Canadians felt the same way about fintech startups.

This position of trust extends to the defensive and risk-averse stance of established banks when it comes to sharing customer data. Even when sharing data internally, these banks have strict, permission-based data access controls and risk-management practices. They also maintain extensive digital audit trails.

Open banking challenges these traditional data access practices, however, moving banks to a model where end customers are empowered to share their sensitive financial data with a growing number of third parties. Some open banking standards, such as Europe’s Payment Services Directive (PSD2), specifically promote informed-consent data sharing, further underlining the shift to consumers as the ultimate stewards of their data.

At the same time, banks must comply with evolving global privacy laws, such as Europe’s General Data Protection Regulation (GDPR). These laws add another layer of risk and complexity to data sharing, granting consumers (or “data subjects” in GDPR terms) the right to explicit consent before data is shared, the right to withdraw that consent, data portability rights, and the right to erasure of that data — the famed “right to be forgotten.”

In summary, banks are under pressure from regulators and consumers to make data more available, and customers now make the final decision about which third parties will receive that data. Banks are also responsible for managing:

- Different levels of consent for different types of data
- The ability to redact certain sensitive fields in a data file, while still sharing the file
- Compliance with data privacy laws, including "the right to be forgotten"

The open opportunity for banks

In spite of the competition and added risks for established banks, open banking greatly expands the global market of customers, opens up new business models and services, and creates new ways to grow customer relationships.
In an open banking environment, banks can leverage best-of-breed services from third parties to bolster their core banking services and augment their online and mobile banking experiences. Established banks can also create their own branded or “white label” services, like payment platforms, and offer them as services for others to use within the open banking ecosystem. For customers, the ability of third parties to get access to a true 360-degree view of their banking and payment relationships creates new insights that banks would not have been able to generate with just their own data.

Given the risks, and the huge potential rewards, how do banks satisfy the push and pull of data sharing and data protection? How do they systematically collect, organize, and publish the most relevant data from across the organization for third parties to consume? Banks need a flexible data architecture that enables the deliberate collection and sharing of customer data both internally and externally, coupled with fine-grained access, traceability, and data privacy controls down to the individual field level. At the same time, this new approach must also provide a speed of development and flexibility that limits the cost of compliance with these new regulations and evolving open banking standards.

Rise of the data mesh

Open banking requires a fundamental change in a bank’s data infrastructure and its relationship with data. The technology underlying the relational databases and mainframes in use at many established banks was first developed in the 1970s. Conceived long before the cloud computing era, these technologies were never intended to support the demands of open banking, nor the volume, variety, and velocity of data that banks must deal with today. Banks are overcoming these limitations and embracing open banking by remodeling their approach to data and by building a data mesh using a modern developer data platform.

What is a data mesh?

A data mesh is an architectural framework that helps banks decentralize their approach to sharing and governing data, while also enabling self-service consumption of that data. It achieves this by grouping a bank’s data into domains. Each domain in a data mesh contains related data from across the bank. For example, a "consumer" domain may contain data about accounts, addresses, and relationship managers from across every department of the bank.

Each data domain is owned by a different internal stakeholder group or department within the bank, and these owners are responsible for collecting, cleansing, and distributing the data in their domain across the enterprise and to consumers. With open banking, domain owners are also responsible for sharing data with third parties. This decentralized, end-to-end approach to data ownership encourages departments within the bank to adopt a “product-like” mentality toward the data within their domain, ensuring that it is maintained and made available like any other service or product they deliver. For this reason, the term data-as-a-product is synonymous with data mesh. Data domain owners are also expected to:

- Create and maintain relevant reshaped copies of data, rather than pursue a single-source-of-truth or canonical model.
- Serve data by exposing data product APIs. This means cleansing and curating data as close as possible to the source, rather than moving data through complex data pipelines to multiple environments.
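To ground the domain idea in the document model, here is a minimal sketch in TypeScript using the Node.js MongoDB driver. The database, collection, and field names are invented for illustration; they are not taken from any bank's implementation or from a data mesh standard.

```typescript
import { MongoClient } from "mongodb";

// Invented shape for a "consumer" domain document. Each top-level section is
// written by a different department, but consumers read a single document.
interface ConsumerProfile {
  _id: string;
  accounts: { iban: string; type: string }[];            // owned by core banking
  addresses: { line1: string; country: string }[];       // owned by CRM
  relationshipManager: { name: string; email: string };  // owned by branch ops
}

// A minimal "data product API": the domain team curates and serves the data.
async function getConsumerProfile(customerId: string): Promise<ConsumerProfile | null> {
  const client = new MongoClient("mongodb://localhost:27017"); // hypothetical URI
  try {
    await client.connect();
    return await client
      .db("consumerDomain")                 // invented database name
      .collection<ConsumerProfile>("profiles") // invented collection name
      .findOne({ _id: customerId });
  } finally {
    await client.close();
  }
}
```

The point of the sketch is that a domain's "product" can be served as one self-describing document, with field-level ownership made explicit by the document's structure rather than scattered across normalized tables.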
The successful implementation of a data mesh, and the adoption of a data-as-a-product culture, requires a fundamental understanding of localized data. It also requires proper documentation, design, management, and, most important, flexibility, as in the ability to extend the internal data model. The flexibility of the document model is, therefore, critical for success.

Conclusion

Open banking holds great potential for the future of the customer experience and will help established financial institutions meet ever-evolving customer expectations. Facilitated by a data mesh, you can open new doors for responsible, efficient data sharing across your financial institution, and this increase in data transparency leads to better outcomes for your customers — and your bottom line.

Want to learn more about the benefits of open banking? Watch the panel discussion Open Banking: Future-Proof Your Bank in a World of Changing Data and API Standards.

September 22, 2022

What’s New in Atlas Charts: Streamlined Data Sources

We’re excited to announce a major improvement to managing data sources in MongoDB Atlas Charts: Atlas data is now available for visualization automatically, with zero setup required.

Every visualization relies on an underlying data source. In the past, Charts made adding Atlas data as a source fairly straightforward, but teams still needed to manually choose the clusters and collections that would power their dashboards. Streamlined data sources eliminates those manual steps, further optimizing your data visualization workflow by automatically making the clusters, serverless instances, and federated database instances in your project available as data sources within Charts. For example, if you start up a new cluster or collection and want to create a visual quickly, you can simply go into one of your dashboards and start building a chart immediately.

Check out streamlined data sources in action: See how the new data sources experience streamlines your data visualization workflow in Charts.

Maintain full control of your data

Although all project data will be available automatically to project members by default, we know how important it is to be able to control what data can be used by your team. For example, you may have sensitive customer data or company financials in a cluster. Project owners maintain full control over limiting access to data like this when needed. As shown in the following image, with a few clicks you can select any cluster or collection, confirm whether any charts are using a data source, and disconnect when ready. If you have collections that you want some of your team to access but not others, this can easily be configured under Data Access in collection settings, as seen in the following image.

With every release, our goal is to make visualizing Atlas data more frictionless and powerful. The streamlined data sources feature is a big step in this direction. Building data visualizations just got even easier with Atlas Charts. Give it a try today!

New to Atlas Charts? Get started today by logging into or signing up for MongoDB Atlas, deploying or selecting a cluster, and activating Charts for free.

September 21, 2022

How to Leverage Enriched Queries with MongoDB 6.0

MongoDB introduces useful new functions and features with every release, and MongoDB 6.0, released this summer, offers many notable improvements, including deeper insights from enriched queries via the MongoDB Query API. This set of query enhancements was announced at MongoDB World 2022 by senior product manager Katya Kamenieva. You can watch her presentation below.

Watch Katya Kamenieva’s MongoDB World presentation on queries.

Users can now take advantage of upgraded operators and change stream features. In this post, we’ll look at several of these updates, along with examples of how you can put them to use.

Top N accumulators

With this new feature, users can compute the top items in each group based on sort criteria ($topN, $bottomN), the current order of documents ($firstN, $lastN), or the value of a field ($maxN, $minN). This functionality is useful, for example, if you have a collection of restaurants with ratings and want to see the three highest-rated restaurants for each type of cuisine: group by cuisine and use $topN to return the top three restaurants by rating (a combined code sketch follows the next two sections).

Ability to sort arrays

The new $sortArray operator lets users sort the elements of an array. For example, suppose you have posted content with hundreds of user comments, and you want to sort the comments based on how many likes they received. In this case, $sortArray can pull those comments and prioritize them to the top of the comments list.

Densification and gap-filling

These new additions to the aggregation framework help build out time series data more completely. When creating histograms of data over time, the new stages, $densify and $fill, allow you to fill gaps in that data to create smoother and more complete graphs using linear interpolation, last/next observed value carried forward, or a constant value. This capability is helpful, for example, if you want to create a graph that shows the amount of inventory in a warehouse every day for a year, but the inventory was only recorded once a week. The $densify stage will fill the gaps in the timeline, while $fill will produce values for the inventory data based on the previous observation.
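To make these three aggregation additions concrete, here is a minimal sketch using the Node.js driver in TypeScript. The connection string, collection names, and field names (restaurants, posts, inventory, and so on) are hypothetical stand-ins for the scenarios described above.

```typescript
import { MongoClient } from "mongodb";

// Hypothetical connection string and database; adjust for your deployment.
const client = new MongoClient("mongodb://localhost:27017");
const db = client.db("demo");

async function run(): Promise<void> {
  await client.connect();

  // $topN: the three highest-rated restaurants within each cuisine.
  const topRestaurants = await db.collection("restaurants").aggregate([
    {
      $group: {
        _id: "$cuisine",
        top3: {
          $topN: {
            n: 3,
            sortBy: { rating: -1 },
            output: { name: "$name", rating: "$rating" },
          },
        },
      },
    },
  ]).toArray();

  // $sortArray: surface the most-liked comments first on each post.
  const sortedPosts = await db.collection("posts").aggregate([
    {
      $project: {
        comments: { $sortArray: { input: "$comments", sortBy: { likes: -1 } } },
      },
    },
  ]).toArray();

  // $densify + $fill: create one document per day, then carry the last
  // observed inventory value forward into the gaps ("locf").
  const dailyInventory = await db.collection("inventory").aggregate([
    { $densify: { field: "date", range: { step: 1, unit: "day", bounds: "full" } } },
    { $fill: { sortBy: { date: 1 }, output: { quantity: { method: "locf" } } } },
  ]).toArray();

  console.log(topRestaurants, sortedPosts, dailyInventory);
  await client.close();
}

run().catch(console.error);
```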
Joining sharded collections

With this new feature, when joining collections using $lookup or performing a recursive search with $graphLookup, the collections on both sides can be sharded. Before 6.0, only the originating collection could be sharded. An example use case is enriching records in an “accounts” collection with the corresponding orders stored in an “orders” collection. In the past, only the “accounts” collection could be sharded; starting with 6.0, both the “accounts” and “orders” collections can be sharded.

Change streams pre- and post-images

Change streams now offer point-in-time (PIT) pre- and post-image capabilities, allowing users to include the state of the document before and after a change in the output of the change stream. This functionality can be useful in many situations. For example, suppose a company is tracking flight times. If a flight is delayed, the system can compare the departure and arrival times from before and after the delay and trigger an automatic rewrite of the schedule for the new flight timeline, including schedules for the entire crew.

Atlas Search across multiple collections

This improvement to MongoDB Atlas Search allows users to search across multiple collections with a single query, using $search inside the $unionWith or $lookup stages. $search can provide these results quickly, using only one query.

Enriched queries are not the only improvements in MongoDB 6.0. Read about the 7 reasons to upgrade to MongoDB 6.0 and discover the possibilities.

Try MongoDB Atlas for Free Today

September 20, 2022

MongoDB Connector for Apache Kafka 1.8 Available Now

MongoDB has released version 1.8 of the MongoDB Connector for Apache Kafka, with new monitoring and debugging capabilities. In this article, we’ll highlight key features of this release.

JMX monitoring

The MongoDB Connector works with Apache Kafka Connect to provide a way for users to easily move data between MongoDB and Apache Kafka. The connector is written in Java and now implements Java Management Extensions (JMX) interfaces that give you access to metrics reporting. These metrics will make troubleshooting and performance tuning easier. JMX technology, which is part of the Java platform, provides a simple, standard way for applications to report metrics, with many third-party tools available to consume and present the data.

For those who might not be familiar with JMX monitoring, let’s look at a few key concepts. An MBean is a managed Java object that represents a particular component being measured or controlled. Each component can have one or more MBean attributes. The MongoDB Connector for Apache Kafka publishes MBeans under the “com.mongodb.kafka.connector” domain. Many open source tools are available to monitor JMX metrics, such as the console-based JmxTerm or more feature-complete monitoring and alerting tools like Prometheus. JConsole is also available as part of the Java Development Kit (JDK).

Note: Regardless of your client tool, MBeans for the connector are only available when there are active source or sink configurations defined on the connector.

Visualizing metrics

Figure 1: Source task JMX metrics from JConsole.

Figure 1 shows some of the metrics exposed by the source connector using JConsole. In this example, a sink task was created and by default is called “sink-task-0”. The applicable metrics are shown in the JConsole MBeans panel. A complete list of both source and sink metrics will be available in the MongoDB Kafka Connector online documentation shortly after the release of 1.8.

MongoDB Atlas is a great platform to store, analyze, and visualize monitoring metrics produced by JMX. If you’d like to try visualizing JMX metrics generated by the connector in MongoDB Atlas, check out jmx2mongo. This tool continuously writes JMX metrics to a MongoDB time series collection. Once the data is in MongoDB Atlas, you can easily create charts from the data like the following:

Figure 2: MongoDB Atlas Chart showing successful batch writes vs. writes greater than 100ms.

Figure 2 shows the number of successful batch writes performed by a MongoDB sink task and the number of those batch writes that took longer than 100ms to execute. There are many other monitoring use cases; check out the latest MongoDB Kafka Connector documentation for more information.
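As a hedged illustration of that pattern (the document shape below is invented, not taken from jmx2mongo's documentation), you could store JMX samples in an Atlas time series collection and chart them from there. In TypeScript with the Node.js driver:

```typescript
import { MongoClient } from "mongodb";

const client = new MongoClient("mongodb://localhost:27017"); // hypothetical URI
const db = client.db("monitoring");

async function setup(): Promise<void> {
  await client.connect();

  // A time series collection keyed on the sample timestamp, with the MBean
  // identity in the metaField. Collection and field names are illustrative.
  await db.createCollection("jmxMetrics", {
    timeseries: { timeField: "ts", metaField: "meta", granularity: "seconds" },
  });

  // One document per attribute reading; the bean name and attribute shown
  // here are hypothetical examples under the connector's stated JMX domain.
  await db.collection("jmxMetrics").insertOne({
    ts: new Date(),
    meta: { domain: "com.mongodb.kafka.connector", task: "sink-task-0" },
    name: "batch-writes-successful", // illustrative attribute name
    value: 42,
  });

  await client.close();
}

setup().catch(console.error);
```

With samples landing in a collection like this, an Atlas Chart such as the one in Figure 2 is just a matter of plotting `value` over `ts`, filtered by the metadata.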
Extended debugging

Over the years, the connector team has collected requests from users to enhance error messages or provide additional debug information for troubleshooting. In 1.8, you will notice additional log messages and more descriptive errors. For example, before 1.8, if you set the copy.existing parameter, you might get the log message "Shutting down executors." This message is not clear. To address this lack of clarity, the message now reads, "Finished copying existing data from the collection(s)." These debugging improvements, in combination with the new JMX metrics, will make it easier for you to gain insight into the connector and help troubleshoot issues you may encounter.

If you have ideas for additional metrics or scenarios where additional debugging messages would be helpful, please let us know by filing a JIRA ticket. For more information on the latest release, check out the MongoDB Kafka Connector documentation. To download the connector, go to the MongoDB Connector repository on GitHub or download it from the Confluent Hub.

September 19, 2022

Network, Build, and Learn at MongoDB.local Events — Now Free to Attend

Panel discussion at MongoDB.local London, 2021.

Every year, MongoDB hosts popular MongoDB.local events in major cities around the world. Packed with workshops, talks, and keynotes, these one-day, in-person gatherings bring together engineers, entrepreneurs, and executives from the surrounding area. This year, for the first time, admission to MongoDB.local events is free. (Note that admission is granted on a first-come, first-served basis, limited only by seating capacity.)

Five upcoming events

Five MongoDB.local events are scheduled for the remainder of 2022, and you can register for the .local event near you through the links below or through the MongoDB.local hub page.

- Frankfurt, September 27, 2022
- San Francisco, October 20, 2022
- Dallas, October 27, 2022
- London, November 15, 2022
- Toronto, December 15, 2022

From sessions on the future of serverless to demos of next-generation technology, here’s what to expect at a MongoDB.local event near you.

Learn from the experts

Whether you attend keynote presentations or participate in customer discussions, you can tap into a wealth of knowledge from people and organizations that are thoroughly familiar with today’s technology landscape. You’ll learn from MongoDB experts, who will share hard-earned knowledge, practical solutions, and technical insight based on firsthand experience with common issues. You can also attend talks from MongoDB customers, which are generally centered on a specific use case and solution — a sort of shared retrospective for the public. At .local Frankfurt, for example, an engineer from Bosch will discuss the company’s evolution from individual documents to time series data in an IoT environment.

All MongoDB.local events include sessions for a wide array of skill levels and specialties, such as a deep dive into the new Queryable Encryption feature or an introduction to building a basic application using Atlas Device Sync and React. These workshops offer practical, actionable advice that you can implement immediately upon returning to your office.

Expand your professional network

MongoDB.local events also offer many opportunities to expand your personal and professional network. In particular, these gatherings are a great way to connect with members of your local MongoDB User Group, who are likely working with the same technologies (or facing similar challenges) that you are. Whether you’re searching for a new job or business opportunity, looking for tips and techniques to implement in your own environment, or just browsing for inspiration, you’ll likely find what you seek at MongoDB.local.

Explore the latest products

Product booths are another highlight of MongoDB.local events. Staffed by MongoDB product teams, these booths are where you can pick up limited-edition stickers, discuss the latest developments with expert engineers, and see new MongoDB features in action. Every event also features booths where third-party partners, vendors, and allies demonstrate cutting-edge technology, show how their platforms and services work in tandem with MongoDB, and answer any questions you may have. Stop by these booths to explore the next big thing in data, see how MongoDB can provide new solutions for pressing problems, and come away with helpful, personalized advice for your own challenges.

Enjoy a one-of-a-kind experience

From Frankfurt’s Klassikstadt to London’s Tobacco Dock, MongoDB.local events are held at unique, memorable venues.
Step inside refurbished historical sites, such as a former factory turned automobile museum or a shipping wharf converted into a top-tier event space. In addition to a full day of talks and tutorials, attendees can enjoy breakfast, lunch, snacks, and drinks at every MongoDB.local.

Join us for a day packed with learning and networking opportunities in a venue near you. Whether you’re a decision-maker or a developer, you’ll find something interesting, enlightening, or useful at MongoDB.local. Learn more about our upcoming MongoDB.local events in Frankfurt, San Francisco, Dallas, London, and Toronto, and register for your free ticket.

September 15, 2022

Built With MongoDB: Vanta Automates Security and Compliance for Fast-Growing Businesses

Organizations pay a high price for running afoul of regulations. Several eight- and nine-figure fines have already been issued for GDPR violations in the four years since the far-reaching privacy regulation went into effect. Although the biggest fines are reserved for the biggest offenders, small businesses and startups, which can least afford financial and reputational setbacks, have no choice but to take compliance seriously.

San Francisco-based startup Vanta knows what a challenge security and compliance can be for companies. Vanta co-founder Christina Cacioppo worked on Dropbox’s collaborative document project, Paper, when she and her team encountered resistance from the company’s legal team. From legal's perspective, the Paper project was jeopardizing compliance with Dropbox’s customer contracts. Cacioppo helped found Vanta to build a software solution to the compliance problem.

Vanta helps companies scale security practices and automate compliance for the most prevalent data security and privacy regulatory frameworks, including SOC 2, ISO 27001, HIPAA, PCI DSS, GDPR, and CCPA. The company's platform gives organizations the tools they need to automate up to 90% of the work required for security audits, and more than 1,500 customers have signed on since its founding in 2016. Vanta is part of the MongoDB for Startups program, which helps early-stage, high-growth startups build faster and scale further, and has used MongoDB as its database of record since its inception.

Next-level security monitoring

Vanta launched in the wake of several high-profile data breaches. Although the company's founders understood that online security was becoming more important, they also knew how hard it could be for fast-growing companies to invest the time and resources needed to build a security foundation. So, they set about building a platform that could withstand not just today's threats but tomorrow's as well.

Robbie Ostrow, now engineering manager, was the first employee the company hired. "Historically, the way proving security worked was that a company would have an auditor look at its platform once a year and issue a piece of paper that says, 'you seem secure,'" Ostrow says. "We check all the same items that an auditor would check, but instead of checking 1% of it once a year, we check 100% once an hour."

Ostrow acknowledges how helpful MongoDB Atlas has been in ensuring state-of-the-art security practices. "As a security company, one thing that's really important is ensuring that our data is separate from everybody else's data and that we are not accidentally exposing random ports to the internet," Ostrow says. "One awesome thing about MongoDB Atlas is a feature called VPC peering, which allows us to take our virtual private cloud (VPC) and communicate with our database cluster while not exposing any cruft to the world."

Integration and scaling

According to Ostrow, Vanta’s decision to use MongoDB from the start has been critical to its success. "We originally chose MongoDB because it was a perfect tool with which we could prototype," Ostrow says. "But we also found that it's a great tool for production systems. And we don't really believe in MVPs for the sake of MVPs because they eventually end up becoming production systems. So luckily we chose MongoDB, which helped us prototype really quickly because we didn't have to build tooling and migrate it to another system. And then it ended up being a tool that was able to scale with us."
Once Vanta moved past an MVP, its growth was intricately tied to how fast it could integrate with other tools and build new features. "The key to the growth we've had is in the number of integrations we've been able to build and the new features we've been able to add on top of those integrations," Ostrow says. "MongoDB has helped a lot to allow us to build and ship quickly without any downtime."

Vanta software engineer David Zhu agrees. "MongoDB makes it easy for us to model our data and access it in ways that are very flexible," Zhu says. "As a security company, we're monitoring a lot of different resources, and our understanding of those resources changes over time."

Flexible and familiar

As a company that prizes the ability to iterate rapidly, Vanta finds great value in the flexibility of the document model that underpins MongoDB Atlas. "We have a really strict code base," Ostrow says, "but the flexibility of the data model allows us to move quickly while still feeling safe about the changes we're making."

Getting the developer experience right is key to maximizing the productivity of a limited and costly resource. "Whenever we make changes or need to think about how we want to model our information," Zhu says, "MongoDB has the flexibility to let us make changes on the fly and speed up our development process."

Drew Gregory, a software engineer at Vanta, also highlights the benefit of familiarity when developing in Atlas. "MongoDB's API abstractions tend to feel like JavaScript and JSON objects," Gregory says. "We really enjoy trying to make our entire stack feel and look like TypeScript. So MongoDB, cosmetically, aesthetically, and even programmatically, feels like working with JavaScript the whole way down."

Zhu echoes a similar point: "Our technical stack is very straightforward. MongoDB slots right in. All of the data looks similar, and all engineers can work really easily across all aspects of our stack."

That familiarity is important at Vanta because it helps with recruiting efforts. "One thing I like to tell people I'm recruiting is that Vanta tries to move fast and not break too many things," Ostrow says. "Because we're a startup, we need to grow incredibly quickly. But we're also a security company that our customers depend on. And we want to make sure that, while we're able to ship features really quickly, we're not going to violate customers' trust while we're doing so. Hiring people who are able to do this and ensuring that the tools you're using are able to scale are really important." To that end, Ostrow points out: "We're hiring quickly and looking for great new engineers. So get in touch if you're interested."

A program for success

MongoDB for Startups offers startups access to a wide range of resources, including free credits to our best-in-class developer data platform, MongoDB Atlas, personalized technical advice, co-marketing opportunities, and access to our robust developer community. Ostrow credits the MongoDB for Startups program with helping Vanta optimize its Atlas deployment. "MongoDB sent us a consultant who was able to help optimize the way we were using it and gave us a report with excellent advice across the board," Ostrow says. "We still refer to that report all the time."

Are you part of a startup and interested in joining the MongoDB for Startups program? Apply now.

September 14, 2022

How to Use MongoDB Atlas to Make Your CRM More Efficient

As part of digital transformation, many companies want to optimize their internal business processes, gain more visibility into important business metrics, and create new automation routines. Data is always at the core of business processes and metrics, and most business-critical data is often located in one or a few repositories, such as a customer relationship management (CRM) system.

Historically, business users have relied on spreadsheets and enterprise data warehouses to bring data together and make decisions. These solutions can range from a disjointed set of dashboards to an all-in-one central console. But businesses that need to move fast must iterate on their data and processes quickly, and they can't do that if implementing a change in the CRM takes months or if things are done manually in spreadsheets. This article describes how MongoDB Professional Services created an internal solution to address these issues.

Our approach

In MongoDB Professional Services, we also needed to streamline our business processes and get out of spreadsheets for business management, especially for revenue forecasting. As the organization grew, the amount of manual labor associated with spreadsheet maintenance became untenable, and making sense of the data became more difficult, especially when the data might be inconsistent, stale, or even inaccurate.

Ordinarily, a good CRM or Professional Services Automation (PSA) system can help solve this problem. At MongoDB, for example, we use Salesforce, which provides decent flexibility but also requires heavy customization and has limitations. We’ve also seen MongoDB customers address the problem by building ETL pipelines into MongoDB Atlas and taking advantage of MongoDB’s flexible schema, query language and aggregation framework, and Atlas Search. The data from source systems is ingested as-is or remapped to create a single view. The best approach we’ve found, however, is to optimize the schema for how the data will be consumed, with different parts of documents potentially coming from different source systems. Atlas App Services provides a serverless abstraction layer that allows fine-grained but flexible control over the schema to help you avoid conflicts and iterate without breaking compatibility.

After considering alternatives, we created an internal CRM/PSA-augmenting system built on top of the MongoDB Atlas platform to provide us with additional capabilities and flexibility. This solution allows Professional Services to rapidly deliver advanced functionality, such as revenue forecasting, automation, and visibility into complex business metrics. It also allows Professional Services to address business systems' needs and react promptly to changes, with functionality beyond what is typically provided by other systems.

MongoDB’s internal solution, at its core, is serverless and data-centric, leveraging Atlas App Services functions and triggers for processing the data and Atlas Search for full-text search. It uses the Connector for BI, the Atlas GraphQL API, and the App Services wire protocol and Atlas Functions to access and manipulate data from other components. Its components include a React-based console application, Atlas Charts, Tableau dashboards, Google Sheets, and microservices for data import and integrations.

Project view of our internal solution console.

Revenue forecasting module in our internal solution console.

MongoDB Charts shows business metrics.
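For a flavor of the trigger-driven processing mentioned above, here is a minimal, hypothetical sketch of an Atlas database trigger function. Atlas Functions run JavaScript, and every database, collection, and field name below is invented rather than taken from the actual internal system.

```javascript
// Hypothetical Atlas database trigger function: fires on inserts/updates in
// an invented "projects" collection and keeps a pre-aggregated revenue
// forecast document up to date.
exports = async function (changeEvent) {
  const doc = changeEvent.fullDocument;
  const forecasts = context.services
    .get("mongodb-atlas")       // default Atlas service name
    .db("psa")                  // invented database name
    .collection("forecasts");   // invented collection name

  // Upsert a summary keyed by quarter; the fields are illustrative only.
  await forecasts.updateOne(
    { quarter: doc.quarter },
    { $inc: { expectedRevenue: doc.revenueDelta || 0 } },
    { upsert: true }
  );
};
```

The design point is that the aggregated view stays current as source documents change, without any polling service or batch job to operate.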
Solution architecture

The data architecture in our internal solution builds on the single-view approach and the data-mart concept. The main idea is to ingest relevant data from Salesforce and other systems, enrich it, and build on it quickly, as shown in the following image. We followed these eight key principles to enable this functionality:

1. Focus on bringing in data in the form that makes the most sense for the business, and find the right balance between making the ETL easy and optimizing for the foreseen application use cases.
2. Apply transformations in the ETL process to make the ingested data intuitive, including document hierarchy, field names, and data types.
3. Clearly define the data lifecycle in terms of data producers and consumers. Data producers can only overwrite documents and fields that they "own," and only those. For example, the ETL process from the source system should overwrite the data in MongoDB documents as needed, but it should only modify those fields that are actually coming from the pipeline.
4. Aim to structure MongoDB documents in a way that makes it clear which fields are owned by which producer. Atlas App Services schema and rules can help ensure that the most critical documents and fields are correctly accessed and modified.
5. Use Atlas Functions and the App Services wire protocol in applications and services, as opposed to connecting directly to the Atlas instance. This allowed us to use Google SSO in the console without requiring any sophisticated security mechanisms for regular CRUD operations from within the application.
6. For complex data logic and on-the-fly calculations, use App Functions. Use database triggers for propagating changes and generating data-driven events. Use scheduled triggers for generating aggregated views and periodic work.
7. Use external services for communicating with the outside world (e.g., an email sender or ETL job). The external services are invoked asynchronously by listening on change streams from their respective namespaces (a pub-sub model). All external services work independently of each other.
8. Don't overthink. MongoDB Atlas's developer data platform offers a lot of flexibility, and, if these principles are followed, making changes and iterating on a working system is surprisingly easy.

To reiterate the last point, our internal solution is easy to modify and extend because of MongoDB's flexible schema and the independence of external components. Users can access the data through available tools and integrations, and developers can update specific parts of the system or introduce new ones without delays, making this solution efficient in terms of both cost and effort.

Conclusion

Through this example of our internal solution, we demonstrated that by leveraging MongoDB Atlas in full force, you can solve seemingly intractable business problems with speed, efficiency, and robustness beyond what regular systems can do. Whether you're optimizing your company's business processes, building business dashboards, or improving automation, the MongoDB Atlas developer data platform can help make the process easier.

Learn how MongoDB's consulting engineers can help you with design and architecture decisions and accelerate your development efforts. Contact us to learn more.

September 12, 2022

3 Factors Limiting Developers’ Innovation

Software has steadily become the engine of business growth and innovation, which has led the demand for new applications — for business or consumers — to grow exponentially. According to the International Data Corporation, 750 million new applications will be built by 2025. That means more applications will be built over the next few years than were built in the software industry's first 40 years.

With many thousands of new applications rolling out every month, businesses need more developers who can innovate. Indeed, the U.S. Bureau of Labor Statistics reports that in the United States alone, the workforce will need 400,000 more developers by the end of this decade. But in addition to more developers, organizations also need to ensure that their development teams are productive, efficient, and able to innovate. A recent MongoDB survey suggests that developers are struggling with that.

How do developers spend their time?

The goal for developers is to define and build new features and applications. This type of innovation is crucial to business success, since software innovation leads to benefits such as improved customer experience, cost reduction, and increased productivity. MongoDB's 2022 Report on Data and Innovation, a survey of 2,000 Asia-Pacific technology professionals, found that companies share two top goals for innovation: increasing internal efficiency and productivity, and building better products. In other words: building better stuff, faster.

But is this happening? The survey says "not really." Here is a breakdown of how those 2,000 IT professionals reported spending their time: Only 28% of technology teams' time is spent defining and building new features and applications, compared to a whopping 72% spent managing infrastructure and completing administrative tasks and projects. Needless to say, this is not conducive to innovation.

What is limiting developer innovation?

What is blocking developers from spending more time building new software? The survey points to three top contributors:

- High developer workloads: One Haystack survey reports that 80% of developers describe themselves as burned out. Burnout obviously affects an employee's ability to innovate and create quality work, and with the continued growth of data volume and app creation, it is only getting worse. This problem can only be addressed by giving developers the proper tools and a simplified data architecture, both of which reduce their overall cognitive load and let them build applications more efficiently.
- Complex data architecture: Our survey found that complexity limits innovation. Whether it's a legacy system with decades of organic sprawl or a cloud environment that has become overly complex as more and more components have been added, a "spaghetti architecture" requires developers to spend significant time learning, connecting, and maintaining disparate technologies.
- Legacy systems and technical debt: The systems that businesses use, especially outdated technology and overly complex systems, are often major blockers for developers and for an organization's innovation. Huge amounts of time and resources go into maintenance and into building ways to connect old systems to newer technology. Even as digital transformation efforts move many companies to the cloud, a McKinsey survey found that 60% of CIOs saw their technical debt increase over the previous three years.
This means that IT decisions made years or decades ago hobble the agility of today's developers.

Want to learn more about developers, data, and innovation? Download MongoDB's 2022 Data and Innovation Report.

September 8, 2022

MongoDB Partners With Codecademy on New “Learn MongoDB” Course

MongoDB is pleased to announce the release of the new "Learn MongoDB" course, created together with Codecademy. Hosted on the Codecademy platform, the course teaches students the basics of MongoDB and how to perform CRUD operations, query and analyze data, and create and use indexes. With interactive tutorials and quizzes throughout, developers in the "Learn MongoDB" course can educate themselves on the breadth of MongoDB's developer data platform and learn best practices for building applications on top of MongoDB. And by completing additional coursework on the programming language of their choice on Codecademy, early-career developers can learn how to code across the full application stack. This is a free, self-paced course that takes approximately eight hours to complete.

For developers building new applications, identifying the right solution for their data layer is critical. But equally important is learning how to use their solution of choice for maximum reliability and scalability. In addition to documentation and how-to guides, educational courses like "Learn MongoDB" are invaluable for helping developers harness new solutions to their full potential. MongoDB offers MongoDB University, which helps developers advance their careers with MongoDB courses and certifications; we also partner with leading third-party providers of developer educational experiences like Codecademy.

If you're a developer new to working with MongoDB, Codecademy's "Learn MongoDB" course is a great way to get started. Sign up today!

September 7, 2022

Free your data with the MongoDB Relational Migrator

Nothing is more frustrating than data that is just out of reach. Imagine wanting to combine customer behavior data from your CRM and usage data from your legacy product to trigger tailored promotions in your new mobile app, but not being able to locate the required data in the sea of tables in your relational database.

As MongoDB CTO Mark Porter explains in his MongoDB World keynote, the data that can make a difference might be locked up "somewhere that you can't use." Relying on his own hard-earned experience with data, Porter adds that this information can be trapped "in a schema with hundreds or thousands of tables that have built up over decades."

"Schema is a huge part of this problem," MongoDB product manager Tom Hollander explains during a presentation on MongoDB Relational Migrator at MongoDB World 2022. "So we've spent a lot of time building out the tools to enable you to map your tabular relational schema into a document schema and make use of the full power of the MongoDB document model."

To see MongoDB Relational Migrator in action, check out this introduction and demo from MongoDB World 2022, featuring MongoDB product manager Tom Hollander.

What is MongoDB Relational Migrator?

MongoDB Relational Migrator streamlines migrations from legacy data infrastructure to MongoDB by helping developers analyze relational database schemas, convert them into MongoDB schemas, and then migrate data from the source database to MongoDB. Currently, Relational Migrator is compatible with four of the most common relational databases: Oracle, SQL Server, MySQL, and PostgreSQL. Migrator not only moves data from your relational database to MongoDB but also transforms it according to your new schema.

As Hollander and MongoDB product marketing director Eric Holzhauer point out, developers often use a mix of software and tools (e.g., extract-transform-load pipelines, change data capture (CDC), message queues, and streaming) to execute migrations, which can be complicated, risky, and error-prone. Relational Migrator provides a single tool that streamlines the process while ensuring that your data lands in an organized, logical manner. By simplifying schema translation — one of the most complex, difficult parts of any relational migration — Relational Migrator grants developers and other technical teams a greater degree of control over (and increased visibility into) their new MongoDB schema. The result is data that is more accessible for analysis and decision making. "Now I can get at the data in my program without going through a translation layer," Porter explains.

A visual representation of how Migrator maps relational schema to document schema.

Migration mode: Snapshot or ongoing?

Migrator provides two modes of data transfer: a one-time snapshot or a continuous sync (which will be available later this year). To decide which mode to use, consider whether you can move over to MongoDB and immediately decommission your previous database, or whether you need to keep your existing relational database up and running. Organizations may wish to keep their relational database for various reasons, such as testing the effectiveness of a proposed document schema, running out a contract or licensing agreement to avoid expensive fees, or keeping old databases available for audits. In this situation, you can keep your relational database running, and Relational Migrator will continue to push data from your source to your new MongoDB clusters.
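To illustrate the kind of schema transformation involved (an invented example in TypeScript, not Migrator's actual output), a customers table and an orders table joined by a foreign key might collapse into a single document with the orders embedded:

```typescript
// Hypothetical relational rows before migration (invented schema):
//   customers: { id: 1, name: "Ada" }
//   orders:    { id: 10, customer_id: 1, total: 25.0 }
//              { id: 11, customer_id: 1, total: 9.5 }

// One possible MongoDB document after migration, with orders embedded so a
// customer's purchase history comes back in a single read:
interface Order {
  orderId: number;
  total: number;
}

interface Customer {
  _id: number;
  name: string;
  orders: Order[];
}

const migrated: Customer = {
  _id: 1,
  name: "Ada",
  orders: [
    { orderId: 10, total: 25.0 },
    { orderId: 11, total: 9.5 },
  ],
};
```

Embedding is only one of the mapping choices available when designing a document schema; referencing between collections remains an option where relationships are better kept separate.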
The limits of Relational Migrator

As Hollander points out, Relational Migrator is only a tool — one intended to facilitate schema mapping, providing many abilities and options for effective schema design. "It's not a silver bullet that will immediately modernize your application portfolio," Hollander says. "It's not going to do everything for you. You still have to do the planning." Furthermore, because database schema design is a tricky topic even for seasoned experts, Hollander recommends working with architects, consultants, and partners — especially for teams not yet familiar with MongoDB or schema design best practices.

Relational Migrator does not yet support continuous replication, which would enable your relational database and MongoDB clusters to coexist for an extended period of time. However, Hollander says that work on this feature is ongoing and that it will be available in the future, along with additional capabilities like schema recommendations, an integration with the MongoDB Atlas developer data platform, and more.

MongoDB Relational Migrator is currently in early access, for use on non-production workloads with assistance from our Product and Field Engineering teams. To learn more, get in touch with your MongoDB rep or contact us via our Migrator page to discuss your workload and next steps.

September 6, 2022

Moving From Monolith to Microservices: Mark Porter and Accenture’s Michael Ljung Explain

The first step in digital transformation for many organizations is to migrate from legacy on-premises environments and move as many workloads as possible into the public cloud. As seen in the first in our series of conversations between Mark Porter, CTO of MongoDB, and Michael Ljung, Accenture's Global Lead of Software Engineering at Accenture Cloud First, this is not always easy, but with the right tools and planning, the migration can reap great benefits.

The next step in many organizations' transformation is to dismantle their monolithic applications — which often limit a business's ability to innovate quickly — and move to applications built on a microservices architecture. Many organizations are already well on their way: Research shows that 36% of large companies, 50% of medium companies, and 44% of small companies are already using microservices in production and development.

To explain this migration away from the monolith, Porter and Ljung sat down to discuss the benefits of microservices, how to size those services properly for best results, and how an Accenture customer used a microservices approach to quickly roll out new features to help provide COVID-19 vaccinations. Watch their full discussion:

Why microservices?

Although teams choose a microservices architecture for a variety of reasons and use cases, one driving force is that businesses now rely so heavily on software for competitive advantage that they require a more rapid development cycle for new releases. A monolithic approach does not support the fast time-to-market cycles needed, nor does it provide the working environment developers need to speed the release process. In their conversation, Porter and Ljung cover several benefits of moving away from the monolith and adopting right-sized microservices, including the following:

- Microservices align with how humans work best together. A large, monolithic codebase leads to complexity and creates immense cognitive load for developers.
- They offer protection from complete downtime. Microservices allow for compartmentalization to avoid a single point of failure. By contrast, with a monolithic application, if something goes wrong, everything goes wrong.
- They allow for better application scaling. With a microservices architecture, only the features that require extra performance need to be scaled.
- And they allow you to increase your speed to market. Some teams have reported that moving to microservices and containers brought a 13x increase in the frequency of software releases.

Read the first installment in this cloud migration series, "Migrating to the Cloud Isn't As Easy As Most People Think."

September 1, 2022