MongoDB Applied

Customer stories, use cases and experience

Modernizing Core Banking: A Shift Toward Composable Systems

Modernizing core banking systems with MongoDB can bring many benefits, including faster innovation, flexible deployment, and instant scalability. According to McKinsey & Company, it is critical for banks to modernize their core banking platforms with a "flexible back end" in order to stay competitive and adapt to new business models. With the emergence of better data infrastructure based on JSON and the ongoing evolution of software design, the next generation of composable core banking processes can be built on MongoDB's developer data platform, offering greater flexibility and adaptability than traditional systems.

The current market: Potential core banking solutions

Financial disruptors such as fintechs and challenger banks are growing their businesses and attracting customers by building on process-centric core banking systems, while traditional banks struggle with inflexible legacy systems. As seen in Figure 1 below, two potential solutions are the core banking "platform" and "suite". The platform solution involves a single vendor and several closely integrated modules, along with a single, large database and a single roadmap. The suite solution, on the other hand, refers to multiple vendors, multiple loosely integrated modules, and multiple databases and roadmaps. Both of these approaches are inflexible and result in vendor lock-in, preventing the adoption of best-of-breed functionality from other vendors.

Figure 1: Core banking solutions: platform, suite and composable ecosystem.

A new approach, known as a composable ecosystem (seen on the far right of Figure 1), is being adopted by some financial institutions. This approach consists of distinct, independent services and functions: the ability to incorporate "best of breed" functionality without major integration challenges, multiple loosely coupled roadmaps, and individual component deployment without vendor lock-in. It allows for specialization and the development of advanced individual components that can be combined to deliver the best products and services, and it makes new technologies and approaches easier to adopt.

Composable ecosystems with MongoDB's developer data platform

MongoDB's developer data platform is the best choice for financial institutions building a composable core banking ecosystem. Such an ecosystem is made up of four key building blocks, as seen below in Figure 2: JSON, BIAN, MACH, and data domains. JSON is a widely used data format in the financial industry, and MongoDB's BSON extension allows for the storage of additional data types (see the sketch at the end of this article). BIAN is a standard that defines a component business blueprint for banking, and MongoDB's technology supports BIAN and embodies MACH principles. MACH is a set of design principles for component-based architectures, and data domains enable the mapping of business capabilities to applications and data. By using MongoDB's developer data platform, financial institutions can implement flexible, scalable core banking systems that adapt to ever-changing market demands.

Figure 2: MongoDB, the developer data platform for your core banking system.

MongoDB in action: Core banking use cases

Companies such as Temenos and Current have utilized MongoDB's capabilities to deliver innovative services and improve performance. As Tony Coleman, CTO of Temenos, said, "Implementing a good data model is a great start. Implementing a great database technology that uses the data model correctly, is vital. MongoDB is a really great fit for banking."
MongoDB and Temenos have worked on a number of new, component-based services to enhance the Temenos product family. Financial institutions can embed Temenos components to deliver new functionality in their existing on-premises environments or through a full banking-as-a-service experience with Temenos T365, powered by MongoDB on various cloud platforms. Temenos has a cloud-first, microservices-based infrastructure built with MongoDB, which gives customers flexibility while improving performance.

Current is a digital bank that was founded with the aim of providing its customers with a modern, convenient, and user-friendly banking experience. To achieve this, the company needed a robust, scalable, and flexible technology platform, so it decided to build its core technology ecosystem in-house, using MongoDB as the underlying database technology. "MongoDB gave us the flexibility to be agile with our data design and iterate quickly," said Trevor Marshall, CTO of Current. In addition, MongoDB's strong security features make it a secure choice for handling sensitive financial data. Overall, MongoDB's capabilities make it a powerful choice for driving innovation and simplifying landscapes in the financial sector.

Conclusion

The financial industry needs to modernize its core banking systems to stay competitive in the face of rising disruptors and new business models. A composable ecosystem, built on a developer data platform like MongoDB, offers greater flexibility and adaptability than traditional legacy systems. If you'd like to learn more about how MongoDB can optimize your core banking functionalities, take a look at our white paper: Componentized Core Banking: The next generation of composable banking processes built upon MongoDB.
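To make the JSON/BSON building block discussed above concrete, here is a minimal, illustrative mongosh sketch of a banking document using BSON types that plain JSON cannot represent natively. The collection, fields, and values are invented for this example:

```javascript
// Illustrative only: a simplified account document using BSON types
// (Decimal128 for exact monetary values, Date for native timestamps).
db.accounts.insertOne({
  accountNo: "DE00 0000 0000 0000 0000 00",  // placeholder IBAN
  type: "CURRENT",
  currency: "EUR",
  balance: NumberDecimal("1024.50"),         // exact decimal arithmetic
  openedAt: new Date("2023-01-26"),
  holders: [{ customerId: 4711, role: "PRIMARY" }]
});
```

Decimal128 avoids the rounding errors of binary floating point, which matters when the field holds money.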

January 26, 2023
Applied

5 Ways to Learn MongoDB

MongoDB offers a variety of ways for users to gain product knowledge, get certified, and advance their careers. In this guide, we'll provide an overview of the top five ways to get MongoDB training, resources, and certifications.

#1: MongoDB University

The best place to go to get MongoDB-certified and improve your technical skills is MongoDB University. At our last MongoDB.local London event, we announced the launch of a brand new, enhanced university experience, with new courses and features and a seamless path to MongoDB certification to help you take your skills and career to the next level. MongoDB University offers courses, learning paths, and certifications in a variety of content types and programming languages. Some of the key features that MongoDB University offers are:

Hands-on labs and quizzes
Bite-sized video lectures
Badges for certifications earned
Study guides and materials

Getting certified from MongoDB University is a great way to start your developer journey. Our education offerings also include benefits for students and educators.

#2: MongoDB Developer Center

For continued self-paced learning, the MongoDB Developer Center is the place to go. The Developer Center houses the latest MongoDB tutorials, videos, community forums, and code examples in your preferred languages and tools. The MongoDB Developer Center is a global community of more than seven million developers. Within the Developer Center, you can code in different languages, get access to integrations with technologies you already use, and start building with MongoDB products, including:

MongoDB, the original NoSQL database
MongoDB Atlas, the cloud document database as a service and the easiest way to deploy, operate, and scale MongoDB
MongoDB Atlas App Services, the easy way to get new apps into the hands of your users faster

#3: Instructor-led training

As an IT leader, you can help your team succeed with MongoDB instructor-led training taught live by expert teachers and consultants. With MongoDB's instructor-led training offering, you can access courses aimed at various roles. Our Developer and Operations learning paths cover the fundamental skills needed to build and manage critical MongoDB deployments. Beyond that, our specialty courses help learners master their skills and explore advanced MongoDB features and products. You can also choose how you want to learn. MongoDB offers public remote courses, which are perfect for individuals or teams who want to send a few learners at a time. If your goal is to upskill your entire team with MongoDB, our courses can be delivered privately, either onsite or remotely. Instructor-led training also offers the opportunity for Q&A, with answers to your specific questions.

#4: Resources

Beyond formal training programs, MongoDB is committed to providing thought leadership resources for those looking to dive deeper and learn more about MongoDB and database technologies in general. Our website offers an active blog with ongoing thought leadership and how-to articles, along with additional coding documentation, guides, and drivers. You can also check out the MongoDB Podcast for information about new and emerging technology, MongoDB products, and best practices.

#5: Events

You can also engage with MongoDB experts at our many events, including MongoDB World, our annual conference for developers and other IT leaders. After MongoDB World, we take our show on the road with MongoDB .local events across the globe.
These events give you the opportunity to learn in a hands-on fashion and meet other MongoDB users. MongoDB also hosts MongoDB days in various global regions, focusing on developer workshops and leveling up skills. Beyond that, you can keep up with our webinars and other learning opportunities through our Events page.

Build your own MongoDB story

Of course, many people like to learn by doing. To get started using MongoDB Atlas in minutes, register for free.
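If you'd rather start hands-on right away, here is a minimal sketch using the Node.js driver. The connection string is a placeholder for your own cluster, and the query assumes Atlas's optional sample_mflix sample dataset has been loaded:

```javascript
// Minimal sketch: connect to an Atlas cluster and run a first query.
const { MongoClient } = require("mongodb");

async function main() {
  // Replace with the connection string from your own Atlas cluster.
  const client = new MongoClient("mongodb+srv://<user>:<password>@<cluster>.mongodb.net");
  try {
    await client.connect();
    const movies = await client
      .db("sample_mflix")
      .collection("movies")
      .find({ year: 1999 })
      .limit(5)
      .toArray();
    console.log(movies);
  } finally {
    await client.close();
  }
}

main().catch(console.error);
```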

January 20, 2023
Applied

Predictions 2023: Modernization Efforts in the Financial Services Industry

As a global recession looms, banks are facing tough economic conditions in 2023. Lowering costs will be vital for many organizations to remain competitive in a data-intensive and highly regulated environment. Thus, it's important that any IT investments accelerate digital transformation with innovative technologies that break down data silos, increase operational efficiency, and build personalized customer experiences. Read on to learn about areas in which banks are looking to modernize in 2023 to build better customer experiences at a lower cost and at scale.

Shaping a better banking future with composable designs

With banks eager to modernize and innovate, institutions must move away from the legacy systems that are restricting their ability to show progress. Placing consumers at the center of a banking experience made up of interconnected, yet independent, services offers technology-forward banks the chance to reshape their business models and subsequently grow market share and increase profitability. These opportunities have brought to fruition a composable architecture design that enables faster innovation and improved operational efficiency, and creates new revenue streams by extending the portfolio of services and products. Banks are thus able to adopt the best-of-breed, perfect-fit-for-purpose software available by orchestrating strategic partnerships with relevant fintechs and software providers. This new breed of suppliers can provide everything from know your customer (KYC) services to integrated booking, loan services, or basic marketing and portfolio management functionalities. This approach is more cost efficient for institutions than building and maintaining the infrastructure themselves, and it is significantly faster in terms of time to market and time to revenue. Banks adopting such an approach are seeing fintechs less as competitors and more as part of an ecosystem to collaborate with to accelerate innovation and reach customers.

Operational efficiency with intelligent automation

Financial institutions will continue to focus on operational efficiency and cost control by automating previously manual, paper-driven processes. Banks have made some progress digitizing and automating what were once almost exclusively paper-based, manual processes. But the primary driver of this transformation has been compliance with local regulations rather than an overarching strategy for really getting to know the client and achieving true customer delight. The market is eager for better automated and data-driven decisions, and legacy systems can't keep up. Creating the hyper-personalized experiences that customers demand, which include things like chatbots, self-service portals, and digital forensics, is difficult for institutions using outdated technology. And having data infrastructure in silos prohibits any truly integrated, modern experience. Using a combination of robotic process automation (RPA), machine learning (ML), and artificial intelligence (AI), financial institutions are able to streamline processes, freeing the workforce to focus on tasks that drive a bigger impact for the customer and business. Institutions must not digitize without considering the human interaction that will be replaced, as customers prefer a hybrid approach. The ability to act on real-time data is the way forward for driving value and transforming customer experiences, and it must be accompanied by the modernization of the underlying data architecture.
The prerequisite for this goal is the de-siloing of data and sources into a holistic data landscape. Some call it a data mesh, others composable data sources or virtualized data.

Solving ESG data challenges

Along with high inflation, the cost-of-living crisis, energy turmoil, and rising interest rates, environmental, social, and governance (ESG) concerns are also in the spotlight. There is growing pressure from regulators to provide ESG data and from investors to make sure portfolios are sustainable. The role of ESG data in conducting market analysis, supporting asset allocation and risk management, and providing insights into the long-term sustainability of investments continues to expand. The nature and variability of many ESG metrics is a major challenge facing companies today. Unlike financial datasets that are mostly numerical, ESG metrics can include both quantitative and qualitative data to help investors and other stakeholders understand a company's actions and intentions. This complexity, coupled with the lack of a universally applicable ESG reporting standard, means institutions must consider different standards with different data requirements. To master ESG reporting, including the integration of relevant KPIs, high-quality data is needed that is at the right level of granularity and covers the required industries and regions. Given the data volume and complexity, financial institutions are building ESG platforms underpinned by modern data platforms that are capable of consolidating different types of data from various providers, creating customized views, modeling data, and performing operations with no barriers.

Digital payments: Unlocking an enriched experience

Pushed by new technologies and global trends, the digital payments market is flourishing globally. With a valuation of more than $68 billion in 2021 and expectations of double-digit growth over the next decade, emerging markets are leading the way in terms of relative expansion. This growth has been driven by pandemic-induced cashless payments, e-commerce, government initiatives, and fintechs. Digital payments are transforming the payments experience. While it was once enough for payment service providers to supply account information and orchestrate simple transactions, consumers now expect an enriched experience where each transaction offers new insights and value-added services. Meeting these expectations is difficult, especially for companies that rely on outdated technologies created long before transactions were carried out with a few taps on a mobile device. To meet the needs of customers, financial institutions are modernizing their payments data infrastructure to create personalized, secure, and real-time payment experiences — all while protecting consumers from fraud. This modernization allows financial institutions to ingest any type of data, launch services more quickly at a lower cost, and have the freedom to run in any environment, from on-premises to multi-cloud.

Security and risk management

Data is critical to every financial institution; it is recognized as a core asset to drive customer growth and innovation. As the need to leverage data efficiently increases, however, according to 57% of decision makers, the legacy technology that still underpins many organizations is too expensive and doesn't fulfill the requirements of modern applications. Not only is this legacy infrastructure complex, it is unable to meet current security requirements.
Given the huge amount of confidential client and customer data that the financial services industry deals with on a daily basis — and the strict regulations surrounding that data — security must be of the highest priority. The perceived value of this data also makes financial services organizations a primary target for data breaches. Fraud protection, risk management, and anti-money laundering are high priorities for any new data platform, according to Forrester's What's Driving Next-Generation Data Platform Adoption in Financial Services study. To meet these challenges, adoption of next-generation data platforms will continue to grow as financial institutions realize their full potential to manage costs, maximize security, and foster innovation. Download Forrester's full study — What's Driving Next-Generation Data Platform Adoption in Financial Services — to learn more.

January 17, 2023
Applied

How Startups Stepped Up in 2022

After muddling through the global pandemic in 2021, entrepreneurs emerged in 2022 ready to transform the way people live, learn, and work. Through the MongoDB for Startups program, we got a close-up view of their progress. What we observed was a good indication of how critical data is to delivering the transformative experiences users expect.

Data access vs. data governance

The increasing importance of data in the digital marketplace has created a conflict that a handful of startups are working to solve: granting access to data to extract value from it while simultaneously protecting it from unauthorized use. In 2022, we were excited to work with promising startups seeking to strike a balance between these competing interests. Data access service provider Satori enables organizations to accelerate their data use by simplifying and automating access policies while helping to ensure compliance with data security and privacy requirements. At most organizations, providing access to data is a manual process often handled by a small team that's already being pulled in multiple directions by different parts of the organization. It's a time-consuming task that takes precious developer resources away from critical initiatives and slows down innovation. Data governance is a high priority for organizations because of the financial penalties of running afoul of data privacy regulations and the high cost of data breaches. While large enterprises make attractive targets, small businesses and startups in particular need to be vigilant because they can less afford financial and reputational setbacks. San Francisco-based startup Vanta is helping companies scale security practices and automate compliance for the most prevalent data security and privacy regulatory frameworks. Its platform gives organizations the tools they need to automate up to 90% of the work required for security audits.

Futurology

The Internet of Things (IoT), artificial intelligence (AI), virtual reality (VR), and natural language processing (NLP) remain at the forefront of innovation and are only beginning to fulfill their potential as transformative technologies. Through the MongoDB for Startups program, we worked with several promising ventures that are leveraging these technologies to deliver game-changing solutions for both application developers and users. Delaware-based startup Qubitro helps companies bring IoT solutions to market faster by making the data collected from mobile and IoT devices accessible anywhere it's needed. Qubitro creates APIs and SDKs that let developers activate device data in applications. With billions of devices producing massive amounts of data, the potential payoff in enabling data-driven decision making in modern application development is huge. London-based startup Concured uses AI technology to help marketers know what to write about and what's working for themselves and their competitors. It also enables organizations to personalize experiences for website visitors. Concured uses NLP to generate semantic metadata for each document or article and understand the relationship between articles on the same website. Another London-based startup using AI and NLP to deliver transformative experiences is Semeris. Analyzing legal documents is a tedious, time-consuming process, and Semeris enables legal professionals to reduce the time it takes to extract information from documentation.
The company's solution creates machine learning (ML) models based on publicly available documentation to analyze the less common or more private documentation that clients hold internally.

The language we use in day-to-day communication says a lot about our state of mind. Sydney-based startup Pioneera looks at language and linguistic markers to determine if employees are stressed out at work or at risk of burnout. When early warning signs are detected, the person gets the help they need to reduce stress, promote wellness, and improve productivity, confidentially and in real time. Technologies like AR and VR are also transforming learning for students. Palo Alto-based startup Inspirit combines 3D and VR instruction to create an immersive learning experience for middle and high school students. The platform helps students who love science engage with the subject matter more deeply, and those who dislike it to experience it in a more compelling format.

No code and low code

The startup space is rich with visionary thinkers and ideas. But the truth is that you can't get far with an idea if you don't have access to developer talent, which is scarce and costly in today's job market. We've worked with a couple of companies through the MongoDB for Startups program that are helping entrepreneurs breathe life into their ideas with low- and no-code solutions for building applications and bringing them to market. Low- and no-code platforms enable users with little or no coding background to satisfy their own development needs. For example, Alloy Automation is a no-code integration solution that integrates with and automates ecommerce services, such as CRM, logistics, subscriptions, and databases. Alloy can automate SMS messages, automatically start a workflow after an online transaction, determine if follow-up action should be taken, and automate actions in coordination with connected apps. Another example is Thunkable, a no-code platform that makes it easy to build custom mobile apps without any advanced software engineering knowledge or certifications. Thunkable's mission is to democratize mobile app development. It uses a simple drag-and-drop design and powerful logic blocks to give innovators the tools they need to breathe life into their app designs.

The startup journey

Although startups themselves are as diverse as the people who launch them, all startup journeys begin with the identification of a need in the marketplace. The MongoDB for Startups program helps startups along the way with free MongoDB Atlas credits, one-on-one technical advice, co-marketing opportunities, and access to a vast partner network. Are you a startup looking to build faster and scale further? Join our community of pioneers by applying to the MongoDB for Startups program. Apply now.

January 16, 2023
Applied

Improving Building Sustainability with MongoDB Atlas and Bosch

Every year, developers from more than 45 countries head to Berlin to participate in the Bosch Connected Experience (BCX) hackathon — one of Europe's largest AI and Internet of Things (AIoT) hackathons. This year, developers were tasked with creating solutions to tackle a mix of important problems, from improving sustainability in commercial building operations and facility management to accelerating innovation of automotive-grade, in-car software stacks, using a variety of hardware and software solutions made available through Bosch, Eclipse, and their ecosystem partners. MongoDB also took part in this event and even helped one of the winning teams build their solution on top of MongoDB Atlas. I had the pleasure of connecting with a participant from that winning team, Jonas Bruns, to learn about his experience building an application for the first time with MongoDB Atlas.

Ashley George: Tell us a little bit about your background and why you decided to join this year's BCX hackathon.

Jonas Bruns: I am Jonas, an electrical engineering student from Friedrich Alexander University in Erlangen-Nürnberg. Before I started my master's program, I worked in the automotive industry in the Stuttgart area. I was familiar with the BCX hackathon from my time in Stuttgart and, together with two friends from my studies, decided to set off to Berlin this year to take part in this event. The BCX hackathon is great because there are lots of partners on site to help support the participants and provide knowledge on both the software and hardware solutions available to them — allowing teams to turn their ideas into a working prototype within the short time available. We like being confronted with new problems and felt this was an important challenge to take on, so participation this year was a must for us.

AG: Why did you decide to use MongoDB Atlas for your project?

JB: We started with just the idea of using augmented reality (AR) to improve the user experience (UX) of smart devices. To achieve this goal, we needed not only a smartphone app but also a backend in which all of our important data is stored. Due to both limited time and the fact that no one on our team had worked with databases before, we had to find a solution that would grow with our requirements and allow us to get started as easily as possible. Ideally, the solution would also be fully managed, sparing us from having to take care of security on our own. After reviewing our options, we quickly decided on MongoDB Atlas.

AG: What was it like working with MongoDB Atlas, especially having not worked with a database solution before?

JB: The setup was super easy and went pretty fast. Within just a short time, we were able to upload our first set of data to Atlas using MongoDB Compass. As we started to dive in and explore Atlas a bit more, we discovered the trigger functionality (Atlas Triggers), which we were able to use to simplify our infrastructure. Originally, we planned to use a server connected to the database, which would react to changed database entries and then send requests to control the desired peripherals. The possibility to configure triggers directly in the database made a server superfluous and saved us a lot of time. We configured the trigger so that it executes a JavaScript function when a change is made to the database. This function evaluates data from the database and executes corresponding requests, which directly control the peripherals.
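A trigger along the lines Jonas describes might look like the following sketch. This is an illustrative reconstruction, not the team's actual code: the collection, field names, and endpoint URL are invented.

```javascript
// Hypothetical Atlas trigger function: fires on updates to a "lights"
// collection and forwards the new state to an external HTTP endpoint.
exports = async function (changeEvent) {
  const light = changeEvent.fullDocument; // the document after the change

  // Only act when the light's on/off state actually changed.
  const desc = changeEvent.updateDescription || {};
  const updated = desc.updatedFields || {};
  if (!("state" in updated)) {
    return;
  }

  // Call an external endpoint that authenticates against, and controls,
  // the physical device.
  await context.http.post({
    url: `https://example.com/lights/${light._id}`, // placeholder URL
    body: { state: light.state },
    encodeBodyAsJSON: true
  });
};
```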
Initially, we hit a minor roadblock in determining how to handle the authentication (creating the security tokens) that the peripherals expect during a request. To solve this, we stored the security tokens on an AWS server that listens for an HTTP request. From Atlas, we then just have to call the URL, and the AWS instance does the authentication and control of the lights. After we solved this problem, we were thrilled with how little configuration was needed and how intuitive Atlas is. The next steps, like connecting Atlas to the app, were easy. We achieved this by sending data from Flutter to Atlas over HTTPS with the Atlas Data API.

AG: How did Atlas enable you to build your winning application?

JB: By the end of the challenge, we had developed our idea into a fully functional prototype using Google ARCore, Flutter, MongoDB Atlas, and the Bosch Smart Home hardware (Figure 1). We built a smartphone application that uses AR to switch a connected light in a smart building on and off. The position and state of the light (on or off) are stored in the database. If the state of the light should change, the app manipulates the corresponding value in the database. The change triggers a function that then sets the light to the desired state (on or off). The fact that we were able to achieve this within a short time, despite our limited prior knowledge, is mainly due to the ease and intuitive nature of Atlas. The simple handling allowed us to quickly learn and use the available features to build the functionality our app needed.

Figure 1: Tech stack for the project's prototype.

AG: What additional features within Atlas did you find the most valuable in building your application?

JB: We created different users to easily control the access rights of the app and the smart devices. By eliminating the need for another server to communicate with the smart devices and using the trigger function of Atlas, we were able to save a lot of time on the prototype. In addition, the provided preconfigured code examples in various languages facilitated easy integration with our frontend and helped us avoid errors. Anyone who is interested can find the results of our work in the GitHub repo.

AG: Do you see yourself using Atlas more in the future?

JB: We will definitely continue to use Atlas in the future. The instance from the hackathon is still online, and we want to get to know the other functionalities that we haven't used yet. Given how intuitive Atlas was in this project, I am sure that we will continue to use it for future projects as well.

Through this project, Jonas and team were able to build a functional prototype that can help commercial building owners have more control over their buildings and take steps to help reduce CO₂ emissions.
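For readers curious what the Data API step looks like, here is a rough sketch of the kind of HTTPS call involved. The app ID, API key, and document shape are placeholders, not the team's actual values:

```javascript
// Rough sketch of an Atlas Data API call made over HTTPS.
const response = await fetch(
  "https://data.mongodb-api.com/app/<app-id>/endpoint/data/v1/action/insertOne",
  {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "api-key": "<data-api-key>"
    },
    body: JSON.stringify({
      dataSource: "Cluster0",          // placeholder cluster name
      database: "smart_home",          // placeholder database
      collection: "lights",            // placeholder collection
      document: { name: "desk lamp", state: "off" }
    })
  }
);
console.log(await response.json()); // e.g., { insertedId: "..." }
```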

January 12, 2023
Applied

Build Analytics-Driven Apps with MongoDB Atlas and the Microsoft Intelligent Data Platform

Customers increasingly expect engaging applications informed by real-time operational analytics, yet meeting these expectations can be difficult. MongoDB Atlas is a popular operational data platform that makes it straightforward to manage critical business data at scale. For some applications, however, enterprises may also want to apply insights gleaned from data warehouse, business intelligence (BI), and related solutions, and many enterprises depend on the Microsoft Intelligent Data Platform to apply analytics and governance solutions to operational data stores. MongoDB and Microsoft have partnered to make it simple to use the Microsoft Intelligent Data Platform to glean and apply comprehensive analytical insights to data stored in MongoDB. This article details how enterprises can successfully use MongoDB with the Microsoft Intelligent Data Platform to build more engaging, analytics-driven applications.

Microsoft Intelligent Data Platform + MongoDB

MongoDB Atlas provides a unified interface for developers to build distributed, serverless, and mobile applications, with support for diverse workload types including operational, real-time analytics, and search. With the ability to model graph, geospatial, tabular, document, time series, and other forms of data, developers don't have to turn to multiple niche databases, which result in highly complex, polyglot architectures. The Microsoft Intelligent Data Platform offers a single platform for databases, analytics, and data governance by integrating Microsoft's database, analytics, and data governance products. In addition to all Azure database services, the Microsoft Intelligent Data Platform includes Azure Synapse Analytics for data warehousing and analytics, Power BI for BI reporting, and Microsoft Purview for enterprise data governance requirements. Although customers have always been able to apply the Microsoft Intelligent Data Platform services to MongoDB data, doing so hasn't always been as simple as it could be. Through this new integration, customers gain a seamless way to run analytics and data warehousing operations on the operational data they store in MongoDB Atlas. Customers can also more easily use Microsoft Purview to manage and run data governance policies against their most critical MongoDB data, thereby ensuring compliance and security. Finally, through Power BI, customers can easily query and extract insights from MongoDB data using powerful built-in and custom visualizations. Let's dive deep into each of these integrations.

Operationalize insights with MongoDB Atlas and Azure Synapse Analytics

MongoDB Atlas is an operational data platform that can handle multiple workload types, including transactional, search, and operational analytics, and can serve multiple application types, including distributed, serverless, and mobile. For data warehousing workloads, long-running analytics, and AI/ML, it complements Azure Synapse Analytics very well. MongoDB Atlas can easily be integrated as a source or as a sink resource in Azure Synapse Analytics. This connector is useful to:

Fetch all of the MongoDB Atlas historical data into Synapse
Retrieve incremental data for a period, based on filter criteria, in batch mode, to run SQL-based or Spark-based analytics

The sink connector allows you to store the analytics results back in MongoDB, which can then power applications built on top of it.
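To make the source side of this more tangible, the sketch below shows one plausible shape for a MongoDB Atlas linked-service definition in the Azure Data Factory/Synapse pipeline style. Treat the exact property names as assumptions and check the current connector documentation; the connection string and database are placeholders:

```json
{
  "name": "MongoDbAtlasLinkedService",
  "properties": {
    "type": "MongoDbAtlas",
    "typeProperties": {
      "connectionString": "mongodb+srv://<user>:<password>@<cluster>.mongodb.net",
      "database": "<database>"
    }
  }
}
```

A copy activity in the pipeline can then reference this linked service as its source (for full or incremental loads) or as its sink (to write analytics results back to Atlas).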
Many enterprises require real-time analytics, for example, in fraud detection, anomaly detection for IoT devices, stock depletion prediction, and machinery maintenance, where a delay in getting insights could cause serious repercussions. MongoDB and Microsoft have worked together on a best-practice architecture for these scenarios, which can be found in this article.

Figure 1: Schematic showing integration of MongoDB with Azure Synapse Analytics.

Business intelligence reporting and visualization with Power BI

Together, MongoDB Atlas and Microsoft Power BI offer a sophisticated real-time data platform, providing customers with the ability to present specialized operational and analytical query engines on the same data sets. Information on connecting from Power BI Desktop to MongoDB is available in the official documentation. MongoDB is also excited to announce the forthcoming MongoDB Atlas Power BI Connector, which will expose the richness of JSON document data to Power BI (see Figure 2). The connector allows users to unlock access to their Atlas cloud data.

Figure 2: Schematic showing integration of MongoDB and Microsoft Power BI.

Beyond providing mere access to MongoDB Atlas data, the connector will provide a SQL interface that lets you interact with semi-structured JSON data in a relational way, ensuring you can take full advantage of Power BI's rich business intelligence capabilities. Importantly, support is planned for two connectivity modes: import and direct. The new MongoDB Atlas Power BI Connector will be available in the first half of 2023.

Conclusion

Together with the Microsoft Intelligent Data Platform offerings, MongoDB Atlas can help operationalize the insights derived from customers' data spread across siloed legacy databases and help build modern applications with ease. With MongoDB Atlas on Microsoft Azure, developers get access to the most comprehensive, secure, scalable, cloud-based developer data platform in the market. Now, with the availability of Atlas on the Azure Marketplace, it's never been easier for users to start building with Atlas while streamlining procurement and billing processes. Get started today through the MongoDB Atlas on Azure Marketplace listing.

January 10, 2023
Applied

Break Down Silos with a Data Mesh Approach to Omnichannel Retail

Omnichannel experiences are increasingly important for customers, yet still hard for many retailers to deliver. In this article, we'll cover an approach to unlock data from legacy silos and make it easy to operate across the enterprise — perfect for implementing an omnichannel strategy.

Establishing an omnichannel retail strategy

An omnichannel strategy connects multiple, siloed sales channels (web, app, store, phone, etc.) into one cohesive and consistent experience, allowing customers to purchase through multiple channels with a consistent experience (Figure 1). Most established retailers started with a single point of sale or "channel" — the first store — then moved to multiple stores and introduced new channels like ecommerce, mobile, and B2B. Omnichannel is the next wave in this journey, offering customers the ability to start a journey on one channel and end it on another.

Figure 1: Omnichannel experience examples.

Why are retailers taking this approach? In a super-competitive industry, an omnichannel approach lets retailers maximize customer experience, with a subsequent effect on spend and retention. Looking at recent stats, Omnisend found that purchase frequency is 250% higher on omnichannel, and Harvard Business Review's research saw omnichannel customers spend 10% more online and 4% more in-store.

Omnichannel: What's the challenge?

So, if all retailers want to provide these capabilities to their customers, why aren't they? The answer lies in the complex, siloed data architectures that underpin their application architecture. Established retailers who have built up their business over time traditionally incorporated multiple off-the-shelf products (e.g., ERP, PIMS, CMS) running on legacy data technologies into their stack (mainframe, RDBMS, file-based). With this approach, each category of data is stored in a different technology, platform, and rigid format — making it impossible to combine this data to serve omnichannel use cases (e.g., combining in-store stock with ecommerce to offer same-day click and collect). See Figure 2.

Figure 2: Data sources for omnichannel.

The next challenge is the separation of operational and historical data — older data is moved to archives, data lakes, or warehouses. Perhaps you can see today's stock in real time, but you can't compare it to stock on the same day last year, because that is held in a different system. Any business comparison occurs after the fact. To meet the varied volume and variety of requests, retailers must extract, transform, and load (ETL) data into different databases, creating a complex, disjointed web of duplicated data. Figure 3 shows a typical retailer architecture: a document database for key-value lookup, a cache added for speed, wide-column storage for analytics, graph databases to look up three degrees of separation, time series to track changes over time, and so on.

Figure 3: An example of a typical data architecture sprawl in modern retailers.

The problem is that ETL'd data becomes stale as it moves between technologies, lagging behind real time and losing context. This sprawl of technology is complex to manage and difficult to develop against — inhibiting retailers from moving quickly and adapting to new requirements. If retailers want to create experiences that can be used by consumers in real time — operational or analytical — this architecture does not give them what they need. Additionally, if they want to use AI or machine learning models, they need access to current behavior for accuracy.
Thus, the obstacle to delivering omnichannel experiences is a data problem that requires a data solution. Let's look at a smart approach to fixing it.

Modern retailers are taking a data mesh approach

Retail architectures have gone through many iterations, starting from vendor solutions per use case, moving toward a microservices approach, and landing on domain-driven design (Figure 4).

Vendor applications: Each vendor decides the framework and governance of the data layer; the enterprise has no control over the app or data. Data is not interoperable between components.
Microservices: Microservices pull data from the API layer. DevOps teams control their microservices, but data is managed by a centralized enterprise team.
Domain-driven design: Microservices and core datasets are combined into bounded contexts by business function. DevOps teams control microservices AND data.

Figure 4: Architecture evolution.

Domain-driven design has emerged through an understanding that the team with domain expertise should have control over the application layer and its associated data — this is the "bounded context" for their business function. This means they can change the data to innovate quickly, without reliance on another team. Of course, if data remains in its bounded context only, we end up with the same situation as the commercial off-the-shelf (COTS) and legacy architecture model. Where we see value is when the data in each domain can be used as a product throughout the organization. Data as a product is a core data mesh concept — it includes data, metadata, and the code and infrastructure to use it. Data as a product is expected to be discoverable (searchable), addressable, self-identifying, and interoperable (Figure 5). In a retail example, the product, customer, and store can each be thought of as bounded contexts. The product bounded context contains the product data and the microservices/applications that are built for product use cases. But for a cross-domain use case like personalized product recommendations, the data from both the customer and product domains must be available "as a product."

Figure 5: Bounded contexts and data as a product.

What we're creating here is a data mesh — an enterprise data architecture that combines intentionally distributed data across distinctly defined, bounded contexts. It is a business domain-oriented, decentralized data ownership and architecture, where each domain makes its data available as an interoperable "data product." The key is that the data layer must serve all real-time workloads that are required of the business — both operational and real-time analytical (Figure 6).

Figure 6: Data mesh.

Why use MongoDB for omnichannel data mesh

Let's look at the data layer requirements needed for a data mesh move to be successful and how MongoDB can meet those requirements.

Capable of handling all operational workloads:

An expressive query language, including joining data, ACID transactions, and IoT collections, makes MongoDB great for multiple workloads.
MongoDB is known for its performance and speed. The ability to use secondary indexes means that several workloads can run performantly.
Search is key for retail applications — MongoDB Atlas has the Lucene search engine built in for full-text search with no data movement.
Omnichannel experiences often involve mobile interaction. MongoDB Realm and Flexible Device Sync can seamlessly ensure consistency between mobile and backend.
Capable of handling analytical workloads:

MongoDB's distributed architecture means analytical workloads can run on a real-time data set, without ETL or additional technology and without disturbing operational workloads.
For real-time analytical use cases, the aggregation framework can be used to perform powerful data transformations and run ad hoc exploratory queries (see the sketch at the end of this article).
For business intelligence or reporting workloads, data can be queried by Atlas SQL or piped through the BI Connector to other data tools (e.g., Tableau and Power BI).

Capable of serving data as a product:

When serving data as a product, it is often by API: MongoDB's BSON-based document model maps well to JSON-based API payloads for speed and ease. MongoDB Atlas provides both the Data API and the GraphQL API, fully hosted.
Depending on the performance needed, direct access may also be required. MongoDB has drivers for all common programming languages, meaning that other teams using different languages can easily interact with it. Rules for access must of course be defined, and one option is to use MongoDB App Services.
Real-time data can also be published to Apache Kafka topics using the MongoDB Kafka Connector, which can act as a sink and a source for data. For example, one bounded context could publish data in real time to a named Kafka topic, allowing another context to consume it and store it locally to serve latency-sensitive use cases.
The tunable schema allows for flexibility in non-product fields, while schema validation capabilities enforce specific fields and data types in a collection to provide consistent datasets.

Resilient, secure, and scalable:

MongoDB Atlas has a 99.995% uptime guarantee and provides auto-healing capability, with multi-region and multi-cloud resiliency options.
MongoDB provides the ability to scale up or down to meet your application requirements — vertically and horizontally.
MongoDB follows a best-in-class security protocol.

Choose the flexible data mesh approach

Providing customers with omnichannel experiences isn't easy, especially with legacy siloed data architectures. Omnichannel requires a way of making your data work easily across the organization in real time, giving access to data to those who need it while also giving the power to innovate to the domain experts in each field. A data mesh approach provides the capability and flexibility to continuously innovate. Ready to build deeper business insights with in-app analytics and real-time business visibility? Read our new white paper: Application-Driven Analytics: In-App and Real-Time Insights for Retailers.
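As referenced in the aggregation bullet above, here is a minimal sketch of the kind of real-time analytical query the aggregation framework supports, runnable in mongosh. The collection and field names are invented for illustration:

```javascript
// Revenue and order count per store for today's orders, computed
// directly on the operational data set (no ETL step).
db.orders.aggregate([
  // Keep only orders created since midnight local time.
  { $match: { createdAt: { $gte: new Date(new Date().setHours(0, 0, 0, 0)) } } },
  // Group by store and total up revenue and order count.
  { $group: { _id: "$storeId", revenue: { $sum: "$total" }, orders: { $sum: 1 } } },
  // Highest-revenue stores first.
  { $sort: { revenue: -1 } }
]);
```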

January 10, 2023
Applied

Securing Multi-Cloud Applications with MongoDB Atlas

The rise of multi-cloud applications offers more versatility and flexibility for teams and users alike. Developers can leverage the strengths of different cloud providers, such as availability in certain regions, improved resilience, and more diverse features for use cases such as machine learning or events. As organizations transition to a public, multi-cloud environment, however, they also need to adjust their mindset and workflows — especially where it concerns security. Using multiple cloud providers requires teams to understand different security policies and take extra steps to avoid potential breaches. In this article, we'll examine three security challenges associated with multi-cloud applications and explore how MongoDB Atlas can help you mitigate or reduce the risks posed by these challenges.

Challenge 1: More clouds, more procedures, more complexity

Security protocols, such as authentication, authorization, and encryption, vary between cloud providers. And, as time goes on, cloud providers will continue to update their features to stay current with the market and remain competitive, adding more potential complications to multi-cloud environments. Although there are broad similarities between AWS, Azure, and GCP, there are also many subtle differences. AWS Identity and Access Management (IAM) is built around root accounts and identities, such as users and roles. Root accounts are essentially administrators with unlimited access to resources, services, and billing. Users represent credentials for humans or applications that interact with AWS, whereas roles serve as temporary access permissions that can be assumed by users as needed. In contrast, Azure and GCP use role-based access control (RBAC) and implement it in different ways. Azure Active Directory allows administrators to nest different groups of users within one another, forming a hierarchy of sorts — and making it easier to assign permissions. GCP, however, uses roles, which include both preset and customizable permissions (e.g., editor or viewer), and scopes, or permissions that are allotted to a specific identity concerning a certain resource or project. For example, one scope could grant read-only viewing on one project but editing on another. Given these differences, keeping track of security permissions across various cloud providers can be tricky. As a result, teams may fail to grant access to key clients in a timely manner or accidentally authorize the wrong users, causing delays or even security breaches.

Challenge 2: Contributing factors

Security doesn't exist in a vacuum, and some factors (organizational and otherwise) can complicate the work of security teams. For example, time constraints can make it harder to implement or adhere to security policies. Turnover can also create security concerns, including lost knowledge (e.g., a team may lose its AWS expert) or stolen credentials. To avoid the latter, organizations must immediately revoke access privileges for departing employees and promptly grant credentials to incoming ones. However, one study found that 50% of companies took three days or longer to revoke system access for departing employees, while 72% of companies took one week or longer to grant access to new employees.

Challenge 3: Misconfigurations and human error

According to the Verizon Data Breach Investigations Report, nearly 13% of breaches involved human error — primarily misconfigured cloud storage.
Overall, the Verizon team found that the human element (which includes phishing and stolen credentials) was responsible for 82% of security incidents, and misconfigurations are among the most common of these mistakes, accounting for the majority of data breaches. For example, AWS governs permissions and resources through JSON files called policies. However, unless you're an expert in AWS IAM, it's hard to understand what a policy might really mean. Figure 1 shows a read-only policy that was accidentally altered to include writes through the addition of a single line of code, thereby inadvertently opening it to the public. That data could be sensitive personally identifiable information (PII); for example, it could be financial data — something that really shouldn't be modified.

Figure 1: Two examples of read-only policies laid out side by side, demonstrating how a single line of code can impact your security.

Although the Verizon report concluded that misconfigurations have decreased during the past two years, these mistakes (often AWS S3 buckets improperly configured for public access) have resulted in high-profile leaks worldwide. In one instance, a former AWS engineer created a tool to find and download user data from misconfigured AWS accounts. She gained access to Capital One and more than 100 million customer credentials and credit card applications. The penalties for these vulnerabilities and violations are heavy. For example, the General Data Protection Regulation (GDPR) carries a penalty of up to four percent of an organization's worldwide revenue or €20,000,000 — whichever is larger. In the aftermath of its security event, Capital One was fined $80 million by regulators; other incidents have resulted in fines ranging from $35 million to $700 million.

Where does MongoDB Atlas come in?

MongoDB Atlas is secure by default, which means minimal configuration is required, and it's verified by leading global and regional certifications and assurances. These assurances include critical industry standards, such as ISO 27001 for information security, HIPAA for protected healthcare information, PCI-DSS for payment card transactions, and more. By abstracting away the details of policies, roles, and other protocols, Atlas centralizes and simplifies multi-cloud security controls. Atlas provides a regional selection option to control data residency, default virtual private clouds (VPCs) for resource isolation, RBAC for fine-tuning access permissions, and more. These tools support security across an entire environment, meaning you can simply configure them as needed, without worrying about the nuances of each cloud provider. Atlas is also compatible with many of the leading security technologies and key managers, including Google Cloud KMS, Azure Key Vault, and AWS KMS, enabling users either to bring their own keys or to secure their clusters with the software of their choice. Additionally, data is always encrypted in transit and at rest. You can even run rich queries on fully encrypted data using Queryable Encryption, which allows you to extract insights without compromising security. Data is only decrypted when the results are returned to the driver — where the key is located — otherwise, encrypted fields display as randomized ciphertext. One real-world example involves a 2013 data breach at a supermarket chain in the United Kingdom, where a disgruntled employee accessed the personal data of nearly 100,000 employees.
If Queryable Encryption had been available and in use at the time, the perpetrator would have downloaded only ciphertext. With MongoDB Atlas, securing multi-cloud environments is simple and straightforward. Teams can use a single, streamlined interface to manage their security needs, with no need to balance different security procedures and structures or keep track of different tools like hyperscalers or key management systems. Enjoy a streamlined, secure multi-cloud experience — sign up for a free MongoDB Atlas cluster today.

January 9, 2023
Applied

How to Get Mobile Data Sync Right with Mobile Backend as a Service (MBaaS)

Twenty years ago, Watts Humphrey, known today as the "Father of Software Quality," declared that every business is a software business. While his insight now seems obvious, digital technology has evolved to where we can add to it: every business is also a mobile business. According to Gartner, 75% of enterprise data will be generated and processed away from the central data center by 2025. And according to data.ai, 84% of enterprises attribute growth in productivity to mobile apps. Today, mobile tech transforms every aspect of business. It enables the workforce through point-of-sale, inventory, service, and sales. It streamlines critical business processes like self-checkout and customer communications. And it powers essential work devices, from telemetry to IoT to manufacturing.

The data businesses capture on mobile and edge devices can be used to improve operational efficiency, drive process improvements, and deliver richer, real-time app experiences. But all of this requires a solution for synchronizing mobile data with backend systems, where it can be combined with other historical data, analyzed, or fed into predictive intelligence algorithms to surface new insights and trigger other value-add activities. Syncing mobile data with backend systems can be hard for a number of reasons. Mobile devices are constantly going in and out of coverage. When connections break and then resume, conflicts emerge between edits made on devices while offline and other data being processed on the backend. Conflict resolution thus becomes a crucial part of ensuring that changes on the mobile device are captured on the backend in a way that preserves data integrity.

Sync and swim

Apps that are not designed with backend sync in mind can take a long time to load, are prone to crashing, and show stale information. When apps don't deliver positive experiences, people stop trusting them — and stop using them. On the other hand, an app with robust sync between a device's local data store and the backend lets workers see live data across users and devices, allowing for real-time collaboration and decision-making. According to Deloitte, 70% of workers don't sit at a desk every day, so the ability to sync data will increasingly drive business outcomes. Indian startup FloBiz uses MongoDB Atlas Device Sync to handle the difficult job of keeping its mobile, desktop, and web apps in sync. This means that even if multiple users are using the same account, going offline and online, there are no issues, duplications, or lost data.

Why data sync is difficult

A lot of organizations choose to build their own sync solutions. DIY solutions can go one of two ways: overly complex or oversimplified, resulting in sync that happens only a few times a day or in only one direction. It can be complicated and time-consuming for developers to write their own conflict-resolution code, because building data sync the right way takes potentially thousands of lines of code. Developers frequently underestimate the challenge because it seems straightforward on the surface. They assume sync consists simply of the application making a request to the server, receiving some data, and using that data to update the app's UI on the device. But when building for mobile devices, this is a massive oversimplification. When developers attempt to build their own sync tool, they typically use RESTful APIs to connect the mobile app with the backend and exchange data between them.
Mobile apps are often built more like web apps in the beginning. But once the need to handle offline scenarios arises, and because some functionality requires local persistence, it becomes necessary to add a mobile database. Syncing with that mobile database then becomes a challenge. The exchange of data between the device and the backend gets complicated, requiring the developer to anticipate numerous offline use cases and write complex conflict-resolution code. It can be done, but it's a time-consuming process that's not guaranteed to cover all use cases. When data is requested, applications need to understand whether a network is available and, if not, whether the appropriate data is stored locally, leading to complex query, retry, and error-handling logic. The worst part about all this complexity is that it's non-differentiating, meaning it doesn't set the business apart from the competition. Users expect the functionality powered by data sync and won't tolerate anything less.

An integrated, out-of-the-box solution

MongoDB's Atlas Device Sync, combined with Realm, is a mobile backend as a service (MBaaS) solution that enables developers to build offline-first applications that automatically refresh when a connection is reestablished. Local and edge data persistence is managed by Realm, a development platform designed for modern, data-driven applications. Developers use Realm to build mobile, web, desktop, and IoT apps. Realm is a fast and scalable alternative to SQLite and Core Data for client-side persistence. The bidirectional data synchronization service between Realm and MongoDB Atlas allows businesses to do more with their data at the edge by tapping into some of MongoDB's more powerful data processing capabilities in the cloud. Complex synchronization problems, such as conflict resolution, are handled automatically by MongoDB's built-in sync. To learn more about the challenges of building real-time mobile apps that scale, with sample use cases about how thousands of businesses are handling it today, download our white paper, Building Real-time Mobile Apps that Scale.
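To give a feel for the out-of-the-box approach, here is a minimal sketch of opening a synced database with the Realm JavaScript SDK and Flexible Sync. The app ID and the Task schema are placeholders invented for this example:

```javascript
// Sketch of an offline-first client using Realm with Atlas Device Sync.
const Realm = require("realm");

const TaskSchema = {
  name: "Task",
  primaryKey: "_id",
  properties: { _id: "objectId", text: "string", done: "bool" }
};

async function openSyncedRealm() {
  const app = new Realm.App({ id: "<your-app-id>" }); // placeholder App ID
  const user = await app.logIn(Realm.Credentials.anonymous());
  const realm = await Realm.open({
    schema: [TaskSchema],
    sync: { user, flexible: true }
  });
  // Subscribe to the data this device should keep locally and in sync.
  await realm.subscriptions.update((subs) => subs.add(realm.objects("Task")));
  return realm; // reads/writes work offline; sync resumes automatically
}
```

Conflict resolution between offline edits and backend changes is handled by the sync service rather than by hand-written reconciliation code.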

January 4, 2023
Applied

Demystifying Sharding with MongoDB

Sharding is a critical part of modern databases, yet it is also one of the most complex and least understood. At MongoDB World 2022, sharding software engineer Sanika Phanse presented Demystifying Sharding in MongoDB, a brief but comprehensive overview of the mechanics behind sharding. Read on to learn why sharding is necessary, how it is executed, and how you can optimize the sharding process for faster queries. Watch this deep-dive presentation on the ins and outs of sharding, featuring MongoDB sharding software engineer Sanika Phanse. What is sharding, and how does it work? In MongoDB Atlas, sharding is a way to horizontally scale storage and workloads in the face of increased demand, splitting them across multiple machines. In contrast, vertical scaling requires the addition of more physical hardware, for example, in the form of servers or components like CPUs or RAM. Once you’ve hit the capacity of what your servers can support, sharding becomes your solution: past a certain point, vertical scaling requires teams to spend significantly more time and money to keep pace with demand. Sharding, however, spreads data and traffic across your servers, so it’s not subject to the same physical limitations. Theoretically, sharding could enable you to scale indefinitely, but in practice you scale proportionally to the number of servers you add. Each additional shard increases both storage and throughput, so your servers can simultaneously store more data and process more queries. How do you distribute data and workloads across shards? At a high level, sharding data storage is straightforward. First, a user must specify a shard key, that is, a subset of fields to partition the data by. Then, data is migrated across shards by a background process called the balancer, which ensures that each shard contains roughly the same amount of data. Once you specify what your shard key will be, the balancer does the rest. A common form of distribution is ranged sharding, which assigns data to shards by ranges of shard key values. Using this approach, one shard will contain all the data with shard key values ranging from 0-99, the next 100-199, and so forth. In theory, sharding workloads is also simple. For example, if you receive 1,000 queries per second on a single server, sharding your workload across two servers would divide the traffic equally, with each server receiving 500 queries per second. However, these ideal conditions aren’t always attainable, because workloads aren’t always evenly distributed across shards. Imagine a group of 50,000 students whose grades are split between two shards. If half of them decide to check their grades, and all of their records happen to fall in the same shard key range, then all of that data will live on the same shard, and all the traffic will be routed to one shard server. Note that both of these examples are highly simplified; real-world situations are not as neat. Shards won’t always contain a balanced range of shard keys, because data might not be evenly divided across shards. Additionally, 50,000 students, while a large group, is still too small a dataset to warrant a sharded cluster. How do you map and query sharded data? Without an elegant solution, users may encounter latency or failed queries when they try to retrieve sharded data. The challenge is to tie together all your shards so that it feels like you’re communicating with one database, rather than several.
This solution starts with the config server, which holds metadata describing the sharded cluster, as well as the most up-to-date routing table, which maps shard key ranges to shard connection strings. To increase efficiency, routers regularly contact the config server to create a cached copy of this routing table. Nonetheless, at any given point in time, the config server’s version of the routing table is the single source of truth. To query sharded data, your application sends the command to a router. The router then uses the shard key from the command’s query, in conjunction with its cached copy of the routing table, to direct the query to the correct location. Rather than using the entire document, the user selects only one field (or combination of fields) to serve as the shard key. The query then makes its way to the correct shard, executes the command, applies any update, and returns a successful result to the router. Operations aren’t always so simple, especially when queries do not specify shard keys. In that case, the router does not know where the data lives, so it sends the query to all the shards and waits to gather all the responses before returning to the application. Although this scatter-gather query is slow if you have many shards, it might not pose a problem if it is infrequent or uncommon. How do you optimize shards for faster queries? Shard keys are critical for seamless operations. When selecting a shard key, use a field that is present on all (or most) of your documents and has high cardinality. This ensures granularity among shard key values, which allows the data to be distributed evenly across shards. Additionally, your data can be resharded as needed, to fit changing requirements or to improve efficiency. Users can also accelerate queries with thoughtful planning and preparation, such as optimizing their data structures for the most common, business-critical query patterns. For example, if your workload makes lots of age-based queries and few _id-based queries, then it might make sense to shard data by age to enable more targeted queries. Hospitals are good examples, as they pose unique challenges. Assuming the hospital’s patient documents contain fields such as insurance provider, _id value, and first and last names, which of these values would make sense as a shard key? Patient name is one possibility, but it is not unique, as many people might share the same name. Similarly, insurance provider can be eliminated, because there are only a handful of providers, and some people might not have insurance at all; this key would violate both the high-cardinality principle and the requirement that every document have the field populated. The best candidate for a shard key is the patient ID number or _id value: whether one patient visits says nothing about whether another patient will (or will not) visit, so traffic spreads naturally across shards, and the uniqueness of the _id value enables targeted queries to the one document that is relevant to the patient. Faced with repeating values, users can instead create compound shard keys. By combining multiple fields, such as the _id value, patient name, and provider, a compound shard key can help reduce query bottlenecks and latency. Ultimately, sharding is a valuable tool for any developer, as well as a cost-effective way to scale out your database capacity.
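As a concrete illustration of the steps above, here is a hedged TypeScript sketch using the official MongoDB Node.js driver to enable sharding and shard a patients collection; the connection string, database, and collection names are hypothetical, and a compound-key variant is shown in comments:

```typescript
import { MongoClient } from "mongodb";

// Hypothetical connection string pointing at a mongos router.
const client = new MongoClient("mongodb://mongos.example.net:27017");

async function shardPatientsCollection(): Promise<void> {
  await client.connect();
  const admin = client.db("admin");

  // Enable sharding for the database, then shard the collection.
  await admin.command({ enableSharding: "hospital" });

  // High-cardinality single-field shard key, as discussed above.
  await admin.command({
    shardCollection: "hospital.patients",
    key: { _id: 1 },
  });

  // Alternatively, a compound shard key combining multiple fields:
  // await admin.command({
  //   shardCollection: "hospital.patients",
  //   key: { provider: 1, lastName: 1, _id: 1 },
  // });

  await client.close();
}
```

Once the collection is sharded, queries that include the shard key are routed to a single shard, while queries that omit it fall back to the scatter-gather pattern described above.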
Although it may seem complicated at first, sharding (and working effectively with sharded data) can be very intuitive with MongoDB. To learn more about sharding — and to see how you can set it up in your own environment — contact the MongoDB Professional Services team today.

January 3, 2023
Applied

Zero Trust will be a Critical Practice for Security Professionals in 2023

Being a security professional in 2022 was no walk in the park. In a year that saw thousands of data breaches, even the most seasoned security professionals had their hands full. In our latest episode of the MongoDB Podcast, MongoDB Chief Information Security Officer Lena Smart joined tech legend and MongoDB co-founder Dwight Merriman to discuss the changing IT security landscape and the trends that will shape best practices for the future. Technology anti-trends As a technology entrepreneur who has been involved in a half-dozen startups, Merriman developed a sense for trends in technology that intersect with user needs. In 1995, the internet was one of those trends. Others that followed include LANs, smartphones, and AI. But what's different about security, Merriman says, is that it acts more like an anti-trend: a problem that only seems to get harder to solve. "Information security has always been an issue," Merriman says. "But every year it gets harder. Pre-internet it was a bit easier, when you're not plugged into the entire planet. Today, the inherent complexity of modern software means there are more attack vectors." As the IT complexity anti-trend coincides with an increase in the sophistication of hackers, the job of security professionals only gets harder. "You've got everything from the kid in their basement hacking around to more sophisticated attacks like organized crime and nation-state actors," Merriman says. "How do you defend against that as a company when you have orders of magnitude less resources? As a CISO, security person, or developer, it's just getting harder every day." Merriman predicts that it's going to get harder every year for the next 10 years, and the stakes are only going to get higher. "You cannot be too paranoid," he says. "We still need to get work done. So I'm a big proponent of, you know, you can't create too much friction." Controlling what you can control Ensuring security while reducing friction is one of the core principles of data governance, which encompasses the processes required to establish proper handling of an organization's data. Whether you're using third-party services, integrating with the software supply chain to build new applications or services, or working across internal departments, the best approach from a security perspective is to start with as little trust as possible. "Zero Trust is a big term these days," Merriman says. "Part of your supply chain is your internal supply chain. In large companies like a Fortune 500 company, where it's so big, you might as well be separate companies. So, whatever you think about when you think about security and supply chain, do that internally too. Think of each department as a supply chain if it is a supplier for you." The concept of the Zero Trust model is based on three principles: Never trust, always verify — This ensures that anyone who accesses company data is verified at the onset of access to network resources. Provide the least amount of privilege possible — Being judicious about who can access what data is essential to keeping data protected. By limiting employee and external access to only the data needed to perform a specific task, you reduce the likelihood of a breach (a brief example appears at the end of this article). Apply network segmentation — By dividing data (as with MongoDB clusters), you isolate and protect it, rather than keeping it all in one place where a single breach puts all data at risk. “Identity is your new security perimeter.
You can never be too paranoid or too vigilant when it comes to determining who can access your business’s data,” says Merriman. Breaking new ground in security The security imperative is what drove MongoDB to partner with pioneers in the academic community to develop a groundbreaking new form of security, Queryable Encryption. Working with Brown University cryptographer Seny Kamara and his long-time collaborator Tarik Moataz, the team developed the world's first truly searchable encrypted database. It enables organizations to encrypt sensitive data on the client side, store it as fully randomized encrypted data on the database server side, and run expressive queries on the encrypted data. Queryable Encryption extends the idea of Zero Trust by adding an extra layer of security for data while it's in use by anyone tasked with handling it. Designed by our Advanced Cryptography Research Group, which has 20 years of experience designing peer-reviewed, state-of-the-art encrypted search algorithms, Queryable Encryption is available in preview now. Listen to the full conversation with MongoDB Chief Information Security Officer Lena Smart and tech legend and MongoDB co-founder Dwight Merriman. If your organization needs a way to construct database architectures that are not only scalable but also secure, consider using MongoDB Atlas to build the next big thing.
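As one small, concrete illustration of the least-privilege principle discussed above (this sketch demonstrates standard MongoDB role-based access control, not Queryable Encryption), here is a hedged TypeScript example using the Node.js driver's createRole and createUser database commands; every name and credential is hypothetical:

```typescript
import { MongoClient } from "mongodb";

// Hypothetical deployment; connect as an administrator.
const client = new MongoClient("mongodb://localhost:27017");

async function createLeastPrivilegeUser(): Promise<void> {
  await client.connect();
  const db = client.db("app");

  // Custom role that can only read the 'orders' collection:
  // nothing else in the database is visible to it.
  await db.command({
    createRole: "ordersReader",
    privileges: [
      {
        resource: { db: "app", collection: "orders" },
        actions: ["find"],
      },
    ],
    roles: [],
  });

  // A service account granted nothing beyond that single role.
  await db.command({
    createUser: "reporting-svc",
    pwd: "change-me", // use a secrets manager in practice
    roles: [{ role: "ordersReader", db: "app" }],
  });

  await client.close();
}
```

Scoping each account to exactly the data it needs, as sketched here, is the "least amount of privilege possible" principle in action.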

January 3, 2023
Applied

Telco Scaling Strategies: Modernizing Business Support Systems for Flexible Revenue Growth

Consumers and businesses alike are driving huge demand for innovative telecommunications technology that tests the limits of monolithic, traditional business support system (BSS) architecture. Competition within the industry is fierce, pushing telcos to differentiate their businesses with fresh digital services like low-latency mobile apps, ultra-fast streaming services, virtual reality, and IoT solutions. The worldwide adoption of 5G is driving the change, bringing with it the need to simplify architectures to accommodate the complexity of modern 5G networks, which require multiple assurance, orchestration, provisioning, and charging functions to aggregate data effectively and manage services. Alongside the development of headline-grabbing technologies, telecommunications enterprises are also busy building increasingly customer-centric experiences. Fast, reliable communication is essential for everyone, from the average consumer who expects flawless performance to the enterprises that run mission-critical business processes over telecommunications networks. These interactions matter immensely for customer loyalty. In partnership, Tech Mahindra and MongoDB have ushered telcos through their ongoing BSS modernization journeys, enabling business growth and operational efficiencies with solutions ranging from core network functions through to product catalog and customer management systems. Today, the biggest hurdles standing in the way of telco innovation are legacy data architectures that eat up developers’ time with maintenance work. Building a consolidated view: Drilling into customer data Billing modernization is a big market. The global telecommunications billing and revenue management market reached $13 billion in 2019 and is forecast to expand at a compound annual growth rate of 11.6% to more than $31 billion between 2020 and 2027, according to research prepared by Forrester Research, Inc. Consolidated core customer data acts as an enabler for many related solutions, all of which need a solid, reliable record of the customer. The common factor across payment processing, customer loyalty programs, service provisioning, service usage, and, finally, bill generation is core customer data. Figure 1: Customer centricity in billing Like many large corporations, communications service providers (CSPs) are often made up of siloed application stacks broken out by product area, such as VoIP, mobile, cable, and so on. Since customers use products and services that exist within multiple silos, changes to customer data need to be propagated to multiple systems. The lack of a single, consolidated view is often a result of historical mergers and acquisitions. Without the ability to analyze data within a single view, chances to capitalize on analytics and uncover cross-selling and up-selling opportunities are lost. Furthermore, enterprises managing multiple parallel billing implementations and the systems’ associated data synchronization infrastructure can incur hefty costs and architectural complexity. The best way to address these challenges is to modernize core customer data systems to create a single view of customers and their billing-related data. At the heart of these modernization projects is the move to a new platform.
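To show what a single view of the customer might look like in a document database, here is a hedged TypeScript sketch using the MongoDB Node.js driver; the database, collection, field names, and business key are all invented for illustration:

```typescript
import { MongoClient } from "mongodb";

// Hypothetical single-view customer document consolidating data that
// would otherwise be spread across separate BSS silos.
const customerDoc = {
  name: "Asha Rao",
  products: [
    { type: "mobile", plan: "5G Unlimited", msisdn: "+91-98xxxxxx01" },
    { type: "broadband", plan: "Fiber 300" },
  ],
  billing: { cycle: "monthly", balanceDue: 42.5, currency: "EUR" },
};

async function upsertCustomer(uri: string): Promise<void> {
  const client = new MongoClient(uri);
  await client.connect();
  try {
    // One document per customer; products and billing ride along,
    // so a single read returns the consolidated view.
    await client
      .db("bss")
      .collection("customers")
      .updateOne(
        { customerId: "CUST-001" }, // hypothetical business key
        { $set: customerDoc },
        { upsert: true }
      );
  } finally {
    await client.close();
  }
}
```

Because the consolidated record lives in one document, downstream systems such as loyalty, provisioning, and bill generation can all read from the same source of truth instead of synchronizing across silos.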
Product catalog simplification and the hybrid cloud approach Like global-scale retailers with some of the most complex product catalogs around, telcos have a complicated array of offerings that require configure-price-quote (CPQ) processes to combine them. Whether a telco is combining handsets, tariffs, warranties, add-ons, or promotions, managing data in a streamlined and scalable infrastructure is a crucial modernization strategy (a minimal document sketch appears at the end of this section). What’s more, telcos must deliver personalized, real-time shopping experiences to customers across web, mobile, phone, and in-store channels to stay competitive. The modernization and simplification of a telco’s product catalog architecture can quickly turn into a complicated mess when certain existing legacy systems must stay in place. A swift rip-and-replace move can be a big risk, with costly implications for long-term transformation projects. Today, more and more successful modernization strategies are achieving omnichannel implementation and multi-play service goals by migrating incrementally. This lets telcos set the stage for a successful migration at the right pace for the company. A key part of a modernization strategy is often a move from a self-managed, on-premises architecture to a cloud-based one. Initially, a hybrid cloud strategy is often more cost-efficient, and it acts as an important stepping stone in any enterprise’s digital transformation journey. The limits of existing legacy systems inhibit telcos’ ability to scale and grow. As the telecommunications industry reimagines how to apply new technology in a digital-first world, heavier reliance on the public cloud is delivering operational and competitive advantages. But for an industry with complex legacy infrastructure and large volumes of personal data, moving every workload to the cloud isn’t feasible yet. Through implementation projects with telcos around the world, Tech Mahindra has shown that a central commercial catalog, integrated with the right legacy technical catalogs and BSS stacks, improves time-to-market for launching multiple bundles and offers that are still processed and billed to a single customer. The benefits of this hybrid approach are immediately apparent: faster time-to-market, increased sales, improved customer satisfaction, reduced handling time, and fewer fallouts and errors, along with less training time. Through Tech Mahindra’s BlueMarble Commerce solution, underpinned by MongoDB, telecommunications enterprises are quickly overhauling their omnichannel strategy without overhauling their business or existing systems. A true end-to-end omnichannel multi-play solution for telcos, BlueMarble automates channel sales, order management, CPQ, and fulfillment, including reverse logistics. Combating the complexity of BSS modernization By putting the needs of their customers first, telco, cable, and media service providers are building a bridge toward seamless, consistent experiences across both digital and physical touchpoints. BlueMarble Commerce was built with this principle in mind.
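As the sketch promised above, here is a hedged TypeScript illustration of how a sellable bundle might be modeled as a single catalog document, combining handset, tariff, warranty, and promotion in one place; every SKU, field, and value is hypothetical:

```typescript
// Hypothetical catalog entry: one document captures a sellable bundle,
// its components, pricing, and channel eligibility, avoiding joins
// across separate handset, tariff, and promotion systems.
interface CatalogBundle {
  sku: string;
  name: string;
  components: { type: string; ref: string }[];
  price: { monthly: number; currency: string };
  channels: string[]; // e.g., web, retail, call-center
  promotions?: { code: string; discountPct: number }[];
}

const fiveGBundle: CatalogBundle = {
  sku: "BNDL-5G-001",
  name: "5G Handset + Unlimited Tariff",
  components: [
    { type: "handset", ref: "HS-PIXEL-7" },
    { type: "tariff", ref: "TRF-5G-UNLTD" },
    { type: "warranty", ref: "WTY-24M" },
  ],
  price: { monthly: 49.99, currency: "EUR" },
  channels: ["web", "retail", "call-center"],
  promotions: [{ code: "NEWYEAR", discountPct: 10 }],
};

console.log(`${fiveGBundle.name}: ${fiveGBundle.components.length} components`);
```

Because each bundle is a self-contained document, a CPQ flow can read, price, and quote an offer in a single lookup, which is one reason a flexible document model suits catalog consolidation.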

With its ability to connect with legacy systems using custom-built adaptors and APIs, BlueMarble combines multiple channels in a uniform, seamless manner, helping to simplify telco modernization. MongoDB delivers a multi-cloud database service built for resilience, scale, and the highest levels of data privacy and security. This is critical when building a platform that enables a cohesive, integrated suite of offerings capable of managing modern data requirements for building applications in a microservices framework without sacrificing speed, security, developer experience, or the ability to scale. These capabilities were the primary reasons for choosing MongoDB for the BlueMarble platform. With these features of MongoDB Atlas, BlueMarble acts as a federated, overlay solution that masks the complexity of legacy systems and enables the creation of new digital functionalities by integrating new digital architectures with existing legacy stacks, eliminating the need to build new bespoke applications. In conclusion, as telcos compete to provide the exciting new technologies driving change in the industry, they must not lose sight of the customer or of delivering customer value. Tech partners like MongoDB and Tech Mahindra are leading the charge in supplying cloud-native, microservices-based architectures for business support systems. This essay appears in the new TM Forum report: Evolving BSS for future services. Access the full report here.

December 8, 2022
Applied
