How a Data Mesh Facilitates Open Banking
Open banking shows signs of revolutionizing the financial world. In response to pressure from regulators, consumers, or both, banks around the world continue to adopt the central tenet of open banking: make it easy for consumers to share their financial data with third-party service providers and allow those third parties to initiate transactions. To meet this challenge, banks need to transition from sole owners of financial data and the customer relationship to partners in a new, distributed network of services. Instead of competing only with other established banks, they now compete with fintech startups and other non-bank entities for consumer attention and the supply of key services. Despite fundamental shifts in both the competition and the customer relationship, open banking offers a huge commercial opportunity, which we’ll look at more closely in this article. After all, banks still hold the most important currency in this changing landscape: trust.

Balancing data protection with data sharing

Established banks hold a special position in the financial system. Because they are long-standing, heavily regulated, and backed by government agencies that guarantee deposits (e.g., the FDIC in the United States), established banks are trusted by consumers over fintech startups when it comes to making their first forays into open banking. A study by Mastercard of 4,000 U.S. and Canadian consumers found that the majority (55% and 53%, respectively) strongly trusted banks with their financial data. Only 32% of U.S. respondents and 19% of Canadians felt the same way about fintech startups.

This position of trust is reflected in the defensive, risk-averse stance established banks take when it comes to sharing customer data. Even when sharing data internally, these banks enforce strict, permission-based data access controls and risk-management practices. They also maintain extensive digital audit trails.
Open banking challenges these traditional data access practices, however, pushing banks toward a model where end customers are empowered to share their sensitive financial data with a growing number of third parties. Some open banking standards, such as Europe’s Payment Services Directive (PSD2), specifically promote informed-consent data sharing, further underlining the shift to consumers as the ultimate stewards of their data. At the same time, banks must comply with evolving global privacy laws, such as Europe’s General Data Protection Regulation (GDPR). These laws add another layer of risk and complexity to data sharing, granting consumers (or “data subjects” in GDPR terms) the right to explicit consent before data is shared, the right to withdraw that consent, data portability rights, and the right to erasure of that data — the famed “right to be forgotten.”

In summary, banks are under pressure from regulators and consumers to make data more available, and customers now make the final decision about which third parties will receive that data. Banks are also responsible for managing:

- Different levels of consent for different types of data
- The ability to redact certain sensitive fields in a data file, while still sharing the file
- Compliance with data privacy laws, including “the right to be forgotten”

The open opportunity for banks

In spite of the competition and added risks for established banks, open banking greatly expands the global market of customers, opens up new business models and services, and creates new ways to grow customer relationships. In an open banking environment, banks can leverage best-of-breed services from third parties to bolster their core banking services and augment their online and mobile banking experiences. Established banks can also create their own branded or “white label” services, like payment platforms, and offer them as services for others to use within the open banking ecosystem.
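The redaction responsibility mentioned above, sharing a data file while withholding fields the customer has not consented to, can be sketched in a few lines. This is a minimal illustration only; the field names and consent labels are hypothetical and are not drawn from PSD2, GDPR, or any open banking standard:

```python
# Map each sensitive field to the consent it requires.
# None means the field is never shareable with third parties in this sketch.
SENSITIVE_FIELDS = {
    "balance": "share_balances",
    "transactions": "share_transactions",
    "ssn": None,
}

def redact_for_third_party(record, consents):
    """Return a copy of the record with non-consented fields removed."""
    shared = {}
    for field, value in record.items():
        if field not in SENSITIVE_FIELDS:
            shared[field] = value          # non-sensitive: always shareable
            continue
        required = SENSITIVE_FIELDS[field]
        if required is not None and required in consents:
            shared[field] = value          # customer consented to this field
        # otherwise: redact the field but still share the rest of the file
    return shared

customer = {"name": "A. Jones", "balance": 1042.17, "ssn": "***", "branch": "Leeds"}
print(redact_for_third_party(customer, {"share_balances"}))
```

In a real system, the consent set would itself be customer-managed data, subject to withdrawal at any time, which is exactly why field-level controls need to live close to the data rather than be baked into each sharing pipeline.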
For customers, the ability of third parties to access a true 360-degree view of their banking and payment relationships creates new insights that banks would not have been able to generate with their own data alone. Given the risks, and the huge potential rewards, how do banks satisfy the push and pull of data sharing and data protection? How do they systematically collect, organize, and publish the most relevant data from across the organization for third parties to consume?

Banks need a flexible data architecture that enables the deliberate collection and sharing of customer data both internally and externally, coupled with fine-grained access, traceability, and data privacy controls down to the individual field level. At the same time, this new approach must also provide a speed of development and flexibility that limits the cost of compliance with these new regulations and evolving open banking standards.

Rise of the data mesh

Open banking requires a fundamental change in a bank’s data infrastructure and its relationship with data. The technology underlying the relational databases and mainframes in use at many established banks was first developed in the 1970s. Conceived long before the cloud computing era, these technologies were never intended to support the demands of open banking, nor the volume, variety, and velocity of data that banks must deal with today. Banks are overcoming these limitations and embracing open banking by rethinking their approach to data and by building a data mesh on a modern developer data platform.

What is a data mesh?

A data mesh is an architectural framework that helps banks decentralize their approach to sharing and governing data, while also enabling self-service consumption of that data. It achieves this by grouping a bank’s data into domains. Each domain in a data mesh contains related data from across the bank.
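To make the idea of a domain concrete, here is a hedged sketch of what a single document in a hypothetical “consumer” domain might look like once data from several departments is gathered into one data product. Every field name is illustrative rather than a real banking schema:

```python
# A hypothetical "consumer" domain document: account, address, and
# relationship-manager data that would otherwise live in separate
# departmental systems, assembled into one shareable record.
consumer_domain_doc = {
    "customer_id": "C-1001",
    "accounts": [
        {"type": "checking", "opened": "2019-03-01"},
        {"type": "mortgage", "opened": "2021-07-15"},
    ],
    "addresses": [{"kind": "home", "city": "Chicago"}],
    "relationship_manager": {"name": "P. Singh", "branch": "Loop"},
}

# Consumers of the data product read this document without needing to
# know which department each field originally came from.
print(sorted(consumer_domain_doc))
```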
For example, a "consumer" domain may contain data about accounts, addresses, and relationship managers from across every department of the bank. Each data domain is owned by a different internal stakeholder group or department within the bank, and these owners are responsible for collecting, cleansing, and distributing the data in their domain across the enterprise and to consumers. With open banking, domain owners are also responsible for sharing data with third parties.

This decentralized, end-to-end approach to data ownership encourages departments within the bank to adopt a “product-like” mentality toward the data within their domain, ensuring that it is maintained and made available like any other service or product they deliver. For this reason, the term “data as a product” is closely associated with the data mesh. Data domain owners are also expected to:

- Create and maintain relevant reshaped copies of data, rather than pursue a single-source-of-truth or canonical model.
- Serve data by exposing data product APIs. This means doing the cleansing and curation of data as close as possible to the source, rather than moving data through complex data pipelines to multiple environments.

The successful implementation of a data mesh, and the adoption of a data-as-a-product culture, requires a fundamental understanding of localized data. It also requires proper documentation, design, management, and, most important, flexibility: the ability to extend the internal data model. The flexibility of the document model is, therefore, critical for success.

Conclusion

Open banking holds great potential for the future of the customer experience and will help established financial institutions meet ever-evolving customer expectations. Facilitated by a data mesh, you can open new doors for responsible, efficient data sharing across your financial institution, and this increase in data transparency leads to better outcomes for your customers — and your bottom line.
Want to learn more about the benefits of open banking? Watch the panel discussion Open Banking: Future-Proof Your Bank in a World of Changing Data and API Standards.
4 Ways Telcos Deliver Mission-Critical Network Performance and Reliability
Tech leaders like Google, Apple, and Netflix have set a new standard for customer service. Today’s customers expect intuitive, always-on, seamless service. These expectations put telecommunications companies’ network performance and reliability to the test. This article examines several ways that companies can meet these challenges through an automated, data-driven approach.

How a modern data platform can help

A fully integrated, customer-centric, and data-driven approach to service delivery and assurance is needed to remain competitive. Modern telecommunications enterprises are tackling this problem by investing in areas like AI and machine learning, which can help them identify correlations between disparate, diverse sources of data and automate end-to-end network operations, including:

- Network security
- Fraud mitigation
- Network optimization
- Customer experience

Furthermore, by adopting a modern data platform, companies can easily answer questions that are nearly impossible to resolve with legacy technology, such as:

- Is an event likely to have a customer impact?
- Are customer-facing service SLAs being met?
- Where should cell sites be placed for maximum ROI?
- Is new equipment deployed and configured correctly?

MongoDB’s developer data platform can help companies provide the necessary performance and reliability to meet customers’ expectations in four key areas: reducing data complexity, service assurance automation, network intelligence and automation, and TM Forum Open APIs.

Reducing data complexity

One recent study found that data scientists spend about 45% of their time loading and cleansing data. To have a true impact on your organization, you need to free up that time so data scientists can focus on mission-critical projects and innovation. Additionally, architectural complexity, with bolted-on solutions and legacy technology, prevents you from harnessing your data and having a true impact on network performance and reliability.
MongoDB’s developer data platform addresses this complexity problem by supporting a diverse range of workloads from a single data platform. Reducing the channels for data flow allows companies to establish a single source of truth, achieve a customer-centric approach that is critical for competitive advantage, and increase service assurance.

Figure 1. MongoDB’s developer data platform reduces complexity in telecommunications workloads, resulting in more reliable network service for customers.

With continuous uptime and advanced automation, MongoDB’s developer data platform ensures performance, no matter the scale.

Service assurance automation

In telecommunications, always-on, always-available service for both the end user and internal IT teams is critical. While outdated service assurance processes may have been viable decades ago, the volume of data and number of users have grown exponentially, making the manually intensive processes of the past untenable. This volume increase will continue to stress existing business support systems, and without modernization, it will hamper the development of new revenue streams. Moving from a reactive to a proactive and then predictive model, as shown in Figure 2, will enhance service assurance and enable organizations to meet the expectations of the digital-native customer.

Figure 2. The transition from a reactive to proactive to predictive data model opens up new opportunities to use innovative technologies like artificial intelligence.

Network intelligence and automation

Consider the essential task of configuring and managing radio access networks. On a daily basis, engineers change the angles of antenna towers, the configuration of the radio, the nearest-neighbor relations, and other settings that the system tracks and manages.
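As a rough sketch of this pattern, configuration changes can be recorded as timestamped documents and replayed to reconstruct a tower’s current state, rather than relying on a daily snapshot. The schema and field names below are hypothetical, not Verizon’s or TM Forum’s:

```python
from datetime import datetime, timezone

change_log = []  # stands in for a collection in the data mediation layer

def record_change(tower_id, field, new_value):
    """Append one timestamped configuration change as a document."""
    change_log.append({
        "tower_id": tower_id,
        "field": field,
        "new_value": new_value,
        "at": datetime.now(timezone.utc),
    })

def current_config(tower_id):
    """Replay the log in order to reconstruct the latest configuration."""
    config = {}
    for change in change_log:
        if change["tower_id"] == tower_id:
            config[change["field"]] = change["new_value"]
    return config

record_change("T-17", "antenna_tilt_deg", 4)
record_change("T-17", "antenna_tilt_deg", 6)
record_change("T-17", "neighbor_cells", ["T-16", "T-18"])
print(current_config("T-17"))
```

Because every change is its own document, engineers get both the current state and a full audit trail of who-changed-what-when from the same data.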
With an intuitive developer data platform, any change in the configuration is saved in the data mediation layer (DML) for anyone to see and track, making it easy for engineers to go to the DML to check the configuration of a particular tower. Information that was previously captured in one snapshot per day is now propagated in real time.

Another example — intent-based automation — abstracts the complexity of underlying software-defined networking components by allowing intent to be specified and by providing automatic translation. This type of automation allows teams to process intent generated either by end-user activity or via service assurance processes; that intent is then translated into the underlying network state. Network events are monitored to confirm that the network is in the desired, stable state, and unintended states are addressed via automation, potentially using TM Forum Network-as-a-Service APIs.

TM Forum Open APIs

The TM Forum (TMF) is an alliance of more than 850 companies that accelerates digital innovation through its TMF Open APIs, which provide a standard interface for the exchange of different telco data models. Users of TMF Open APIs range from providers of off-the-shelf software to the largest telecommunications providers building proprietary systems. In working with many of the world’s largest communication service providers (CSPs) and their related software provider ecosystems, MongoDB has seen a significant number of organizations leverage these APIs to develop new microservices in days, rather than weeks or months. By exposing common interfaces, CSPs can adopt a modular architecture made up of best-of-breed components (either internally or externally developed) while minimizing the time, effort, and cost required to integrate them. The TMF Network-as-a-Service APIs, in particular, hold significant potential for network automation.
This API component suite supports a set of operational domains that expose and manage network services. The abstraction layer between network automation tooling and the underlying network infrastructure provides a flexible, modular architecture. Network optimization is vital to the survival of telcos in today’s competitive market, and with a modern developer data platform underpinning your network, you’ll be equipped to meet and exceed customer expectations.

Read our ebook to learn more about implementing TM Forum Open APIs with MongoDB.
4 Critical Features for a Modern Payments System
The business systems of many traditional banks rely on solutions that are decades old. These systems, built on outdated, inflexible relational databases, prevent traditional banks from competing with industry disruptors and those already adopting more modern approaches. Such outdated systems are ill-equipped to handle one of the core offerings that customers expect from banks today: instantaneous, cashless, digital payments.

The relational database management systems (RDBMSes) at the core of these applications require breaking data structures into a complex web of tables. Originally, this tabular approach was necessary to minimize memory and storage footprints, but as hardware has become cheaper and more powerful, those advantages have become less relevant. Instead, the complexity of this model results in data management and programmatic access issues. In this article, we’ll look at how a document database can simplify complexity and provide the scalability, performance, and other features required in modern business applications.

Document model

To stay competitive, many financial institutions will need to update their foundational data architecture and introduce a data platform that enables a flexible, real-time, and enriched customer experience. Without this, new apps and other services won’t be able to deliver significant value to the business. A document model eliminates the need for an intricate web of related tables. Adding new data to a document is relatively easy and quick, since it can be done without the lengthy reorganization that RDBMSes usually require.

What makes a document database different from a relational database?

- An intuitive data model simplifies and accelerates development work.
- A flexible schema allows modification of fields at any time, without disruptive migrations.
- An expressive query language and rich indexing enhance query flexibility.
- The universal JSON standard lets you structure data to meet application requirements.
- A distributed approach improves resiliency and enables global scalability.

With a document database, there is no need for complicated multi-level joins for business objects, such as a bill or even a complex financial derivative, which often require object-relational mapping with complex stored procedures. Such stored procedures, written in custom languages, not only increase the cognitive load on developers but are also fiendishly hard to test. The resulting lack of automated tests is a major impediment to the adoption of agile software development methods.

Required features

Let’s look at four critical features that modern applications require for a successful overhaul of payment systems and how MongoDB can help address those needs.

1. Scalability

Modern applications must operate at scales that were unthinkable just a few years ago, both in transaction volume and in the number of development and test environments needed to support rapid development. Evolving consumer trends have also put higher demands on payment systems. Not only has the number of transactions increased, but the responsive experiences that customers expect have increased the query load, and data volumes are growing super-linearly. The fully transactional RDBMS model is ill suited to support this level of performance and scale. Consequently, most organizations have created a plethora of caching layers, data warehouses, and aggregation and consolidation layers that add complexity, consume valuable developer time and cognitive load, and increase costs. To work efficiently, developers also need to be able to quickly create and tear down development and test environments, and this is only possible by leveraging the cloud. Traditional RDBMSes, however, are ill suited for cloud deployment. They are very sensitive to network latency, because business objects spread across multiple tables can only be retrieved through multiple sequential queries.
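To illustrate the single-retrieval point, the sketch below models a bill as one self-contained document, the kind of business object that a normalized RDBMS would reassemble from several tables via joins or stored procedures. The schema is illustrative only, not a real payments data model:

```python
# A bill as one document: payer, line items, and currency travel together,
# so one lookup returns the whole business object with no joins.
bill = {
    "_id": "INV-2024-0042",
    "payer": {"name": "Acme GmbH", "iban": "DE00 0000 0000 0000 0000 00"},
    "lines": [
        {"desc": "wire transfer fee", "amount": 12.50},
        {"desc": "FX spread", "amount": 3.20},
    ],
    "currency": "EUR",
}

# Business logic operates on the embedded structure directly.
total = sum(line["amount"] for line in bill["lines"])
print(total)
```

Because the document is retrieved in a single query, the latency sensitivity described above (multiple sequential queries per business object) largely disappears.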
MongoDB provides the scalability and performance that modern applications require. MongoDB’s developer data platform also ensures that the same data is available for other frequent consumption patterns, like time series and full-text search, so there is no need for custom replication code between the operational and analytical datastores.

2. Resiliency

Many existing payment platforms were designed and architected when networking was expensive and slow. They depend on high-quality hardware with low redundancy for resilience. Not only is this approach very expensive, but redundancy alone can never match the resiliency of a distributed system. At the core of MongoDB’s developer data platform is MongoDB Atlas, the most advanced cloud database service on the market. MongoDB Atlas can run in any cloud, or even across multiple clouds, and offers 99.995% uptime. That is far less downtime than is typically needed just to apply necessary security updates to a monolithic legacy database system.

3. Locality and global coverage

Modern computing demands are at once ubiquitous and highly localized. Customers expect to be able to view their cash balances wherever they are, but client secrecy and data availability rules set strict guardrails on where data can be hosted and processed. The combination of geo-sharding, replication, and edge data addresses these problems, and MongoDB Atlas in combination with MongoDB for Mobile brings these powerful tools to the developer. During the global pandemic, more consumers than ever began using their smartphones as payment terminals. To enable these rich functions, data must be held at the edge. Developing the data synchronization yourself is difficult, however, and not a differentiator for financial institutions. MongoDB for Mobile, in combination with MongoDB’s geo-sharding capability on Atlas cloud, offloads this complexity from the developer.

4. Diverse workloads and workload isolation

As more services and opportunities are developed, the demand to use the same data for multiple purposes is growing. Although legacy systems are well suited to support functions such as double-entry accounting, when the same information has to be served up to a customer portal, the central credit engine, or an AI/ML algorithm, the limits of relational databases become obvious. These limitations have resulted in developers following what is often called “best-of-breed” practices. Under this approach, data is replicated from the transactional core to a secondary, read-only datastore based on technology that is better suited to the particular workload. A typical example is a transactional data store being copied nightly into a data lake to be available for AI/ML modelers. The additional hardware and licensing costs for this replication are not prohibitive, but the complexity of the replication and synchronization, and the complicated semantics introduced by batch dumps, slow down development and increase both development and maintenance costs. Often, three or more different technologies are necessary to facilitate these usage patterns. With its developer data platform, MongoDB has integrated this replication, eliminating this complexity for developers. When a document is updated in the transactional datastore, MongoDB automatically makes it available for full-text search and time series analytics.

The pace of change in the payments industry shows no signs of slowing. To stay competitive, it’s vital that you reassess your technology architecture. MongoDB Atlas is emerging as the technology of choice for many financial services firms that want to free their data, empower developers, and embrace disruption. Replacing legacy relational databases with a modern document database is a key step toward enhancing agility, controlling costs, better addressing consumer expectations, and achieving compliance with new regulations.
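As a toy illustration of the diverse-workloads idea, the sketch below serves two consumption patterns, a memo search and a per-day rollup, from one set of payment documents. In MongoDB Atlas these patterns are provided by the platform itself; plain Python stands in here purely for illustration, and all data is invented:

```python
# One set of payment documents...
payments = [
    {"day": "2024-05-01", "memo": "coffee shop", "amount": 4.0},
    {"day": "2024-05-01", "memo": "grocery store", "amount": 31.5},
    {"day": "2024-05-02", "memo": "coffee shop", "amount": 4.5},
]

# ...serving a search-style workload...
def search(term):
    return [p for p in payments if term in p["memo"]]

# ...and a time-series-style daily rollup, with no copy to a second system.
def daily_totals():
    totals = {}
    for p in payments:
        totals[p["day"]] = totals.get(p["day"], 0) + p["amount"]
    return totals

print(len(search("coffee")), daily_totals())
```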
Learn more by downloading our white paper, “Modernize Your Payment Systems.”
Mobile Edge Computing, Part 1: Delivering Data Faster with Verizon 5G Edge and MongoDB
As you’ve probably heard, 5G is changing everything, and it’s unlocking new opportunities for innovators in one sector after another. By pairing the power of 5G networks with intelligent software, customers are beginning to embrace the next generation of industry: powering the IoT boom, enhancing smart factory operations, and more. But how can companies that already leverage data for daily operations start using data for innovation? In this article series, we’ll explore how the speed, throughput, reliability, and responsiveness of the Verizon network, paired with the sophistication of the next-generation MongoDB developer data platform, are poised to transform industries including manufacturing, agriculture, and automotive.

Mobile edge computing: The basics

Companies everywhere are facing a new cloud computing paradigm that combines the best experiences of hyperscaler compute and storage with the topological proximity of 5G networks. Mobile edge computing, or MEC, introduces a new mode of cloud deployment whereby enterprises can run applications — through virtual machines, containers, or Kubernetes clusters — within the 5G network itself, across both public and private networks. Before we dive in, let’s define a few key terms:

- Mobile edge computing: the ability to deploy compute and storage closer to the end user
- Public mobile edge computing: compute and storage deployed within the carrier’s data centers
- Private mobile edge computing: compute and storage provisioned on premises

Verizon 5G Edge, Verizon’s mobile edge compute portfolio, takes these concepts from theoretical to practical. By creating a unified compute mesh across both public and private networks, Verizon 5G Edge produces a seamless exchange of data and stateful workloads — a simultaneous deployment of both public and private MEC best characterized as a hybrid MEC. In this article, we’ll primarily focus on public MEC deployment.
Although MEC vastly increases the flexibility of data usage for both practitioners and end users, the technology is not without its challenges, including:

- Deployment: Given a dynamic fleet of devices, in an environment with 20-plus edge zones across both public and private MEC, to which edge(s) should the application be deployed?
- Orchestration: For Day 2 operations and beyond, what set of environmental changes — on the cloud, the network, or the devices — should trigger a change to my edge environment?
- Edge discovery: Throughout the application lifecycle, for a given connected device, which edge(s) is the optimal endpoint for connection?

Fortunately for developers, Verizon has developed a suite of network APIs tailored to answer these questions. From edge discovery and network performance to workload orchestration and network management, Verizon has drastically simplified the level of effort required to build resilient, highly available applications at the network edge, without the undifferentiated heavy lifting previously required.

Edge discovery API workflow

Using the Verizon edge discovery API, customers can let Verizon manage the complexity of maintaining the service registry and identifying the optimal endpoint for a given mobile device. In other words, with the edge discovery API workflow in place of self-implemented latency tests, a single request-response is all that is needed to identify the optimal endpoint, as shown in Figure 1.

Figure 1. A single request-response is used to identify the optimal endpoint.

Although this API addresses the challenges of service discovery, routing, and some advanced deployment scenarios, other challenges fall outside the scope of the underlying network APIs. In the case of stateful workloads, for example, how might you manage the underlying data generated by your device fleet? Should all of the data live at the edge, or should it be replicated to the cloud? What about replication to the other edge endpoints?
Using the suite of MongoDB services coupled with Verizon 5G Edge and its network APIs, we will describe popular reference architectures for data across the hybrid edge.

Delivering data with MongoDB

Through Verizon 5G Edge, developers can now deploy the parts of their application that require low latency at the edge of 4G and 5G networks using the same APIs, tools, and functionality they use today, while seamlessly connecting back to the rest of their application and the full range of cloud services running in a cloud region. For many of these use cases, however, a persistent storage layer is required that extends beyond the native storage and database capabilities of the hyperscalers at the edge. Given the number of different edge locations where an application can be deployed and consumers can connect, ensuring that the appropriate data is available at the edge is critical. It is also important to note that where consumers are mobile (e.g., vehicles), the optimal edge location can vary. At the same time, keeping a complete copy of the entire dataset at every edge location to cater for this scenario is neither desirable nor practical, due to the potentially large volumes of data being managed and the multi-edge data synchronization challenges that would be introduced.

The Atlas solution

The solution requires an instantaneous and comprehensive overview of the dataset stored in the cloud, while synchronizing only the required data to dedicated edge data stores on demand. For some cases, such as a digital twin, this synchronization needs to be bi-directional and may include conflict-resolution logic; for others, a simpler unidirectional data sync suffices. These requirements call for a next-generation data platform, equipped to simplify data management while also delivering data in an instant. MongoDB Atlas is the ideal solution for the central, cloud-based datastore.
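As a minimal sketch of what such conflict-resolution logic might look like, the function below applies a last-writer-wins rule between a cloud copy and an edge copy of a digital-twin document. This is one illustrative strategy under invented field names, not MongoDB’s actual sync algorithm:

```python
def resolve(cloud_doc, edge_doc):
    """Last-writer-wins: keep whichever copy was modified most recently."""
    return edge_doc if edge_doc["updated_at"] > cloud_doc["updated_at"] else cloud_doc

# The edge wrote more recently, so its state wins the merge.
cloud = {"_id": "twin-7", "state": "idle", "updated_at": 100}
edge = {"_id": "twin-7", "state": "running", "updated_at": 140}
print(resolve(cloud, edge)["state"])
```

Real deployments often need richer rules, such as per-field merges or domain-specific tie-breaking, which is precisely why the article distinguishes simple unidirectional sync from bi-directional sync with conflict resolution.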
Atlas provides organizations with a fully managed, elastically scalable application data platform upon which to build modern applications. MongoDB Atlas can be simultaneously deployed across any of the three major cloud providers (Amazon Web Services, Microsoft Azure, and Google Cloud Platform) and is a natural choice to act as the central data hub in an edge or multi-edge based architecture, because it enables diverse data to be ingested, persisted, and served in ways that support a growing variety of use cases. Central to MongoDB Atlas is the MongoDB database, which combines a flexible document-based model with advanced querying and indexing capabilities. Atlas is, however, more than just the MongoDB database and includes many other components to power advanced applications with diverse data requirements, like native search capabilities, real-time analytics, BI integration, and more. Read the next post in this blog series to explore the real-world applications and innovations being powered by mobile edge computing.
Mobile Edge Computing, Part 2: Computing in the Real World
It would be easy to conceptualize mobile edge computing (MEC) as a telecommunications-specific technology; in fact, edge computing has far-reaching implications for real-world use cases across many different industries. Any organization that needs to address common data usage challenges, such as low-latency data processing, cloud-to-network traffic management, Internet of Things (IoT) application development, and data sovereignty, can benefit from edge-based architectures. In our previous article, we discussed what mobile edge computing is, how it helps developers increase data usage flexibility, and how Verizon 5G Edge and MongoDB work in conjunction to enable data computing at the edge, as shown in Figure 1.

Figure 1. Verizon and MongoDB work in conjunction to deliver data to consumers and producers faster than ever with mobile edge computing.

In this article, we’ll look at real-world examples of how mobile edge computing is transforming the manufacturing, agriculture, and automotive industries.

Smart manufacturing

Modern industrial manufacturing processes are making greater use of connected devices to optimize production while controlling costs. Connected IoT devices exist throughout the process, from sensors on manufacturing equipment to mobile devices used by employees on the factory floor to connected vehicles transporting goods — all generating large amounts of data. For companies to realize the benefits of all this data, it is critical that the data be processed and analyzed in real time to enable rapid action. Moving this data from the devices to the cloud for processing introduces unnecessary latency and data transmission that can be avoided by processing at the edge. As seen in Figure 2, for example, sensors, devices, and other data sources in the smart factory use the Verizon 5G Edge Discovery Service to determine the optimal edge location.
After that, data is sent to the edge, where it is processed before being persisted and synchronized with MongoDB Atlas — all in an instant.

Figure 2. Data sources in smart factories use the Verizon 5G Edge Discovery Service to determine the optimal edge location.

Process optimization

Through real-time processing of telemetry data, it’s possible to make automated, near-instantaneous changes to the configuration of industrial machinery in response to data relayed from a production line. Potential benefits include improved product quality, increased yield, optimized use of raw materials, and the ability to track standard key performance indicators (KPIs), such as overall equipment effectiveness (OEE).

Preventative maintenance

Similarly, real-time processing of telemetry data can enable the identification of impending machinery malfunctions before they occur and result in production downtime. More critically, if a situation has the potential either to damage equipment or to pose a danger to those working in the vicinity, the ability to perform automatic shutdowns as soon as the condition is detected is vital.

Agriculture

One of the most powerful uses of data analytics at scale can be seen in the agriculture sector. For decades, researchers have grappled with challenges such as optimal plant breeding and seed design, which to date have been largely manual processes. Through purpose-built drones and ground robotics, new ways to conduct in-field inspection using computer vision have been used to collect information on height, biomass, and early vigor, and to detect anomalies. However, these robots are often purpose-built with large data systems on device, requiring manual labor to upload the data to the cloud for post-processing. Using the edge, this entire workflow can be optimized.
Starting with the ground robotics fleet, the device can be retrofitted with a 5G modem to disintermediate much of the persistent data collection. Instead, the device can collect data locally, extract relevant metadata, and immediately push data to the edge for real-time analytics and anomaly detection. In this way, field operators can collect insights about the entirety of their operations — across a given crop field or nationwide — without waiting for the completion of a given task.

Automotive

Modern vehicles are more connected than ever before, with almost all models produced today containing embedded SIM cards that enable even more connected experiences. Additionally, parallel advances are being made to enable roadside infrastructure connectivity. Together, these advances will power increased data sharing not just between vehicles but also between vehicles and the surrounding environment (V2V and V2X). In the shorter term, edge-based data processing has the potential to yield many benefits, both to road users and to vehicle manufacturers.

Data quality and bandwidth optimization

Modern vehicles have the ability to transmit large amounts of data, not only telemetry relating to the status of the vehicle but also the observed status of the roads. If a vehicle detects that it is in a traffic jam, for example, it might relay this information so that updates can be made available to other vehicles in the area to alert drivers or replan programmed routes, as shown in Figure 3.

Figure 3. Mobile edge computing enables data generated from multiple sources within a vehicle to be shared instantly.

Although this is a useful feature, many vehicles may be reporting the same information. By default, all of this information will be relayed to the cloud for processing, which can result in large amounts of redundant data. Instead, through edge-based processing:

- Data is shared more quickly between vehicles in a given area using only local resources.
- Costs relating to cloud-based data transfer are better controlled.
- Network bandwidth usage is optimized.

While improving control of network usage is clearly beneficial, arguably a more compelling use of edge-based processing in the automotive industry relates to aggregating data received from many vehicles to improve the quality of data sent to the cloud-based data store. In the example of a traffic jam, all of the vehicles transmitting information about the road conditions do so based on their understanding gained through GPS as well as internal sensors. Some vehicles will send more complete or accurate data than others, but aggregating the many different data feeds at the edge produces a more accurate, complete representation of the situation.

The future

Read Part 1 of this blog series. Download our latest book on computing at the edge.
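The edge-side aggregation described in the automotive section can be made concrete with a toy sketch: many vehicles report on the same road segment with varying accuracy, and the edge collapses them into one higher-quality record before anything is sent to the cloud. The report schema here is invented for illustration, not a real V2X message format.

```python
from statistics import median

# Toy edge-side aggregation: combine noisy per-vehicle road reports
# into a single consensus record (majority vote on the jam flag,
# median speed to discount outliers from faulty sensors).

def aggregate_reports(reports):
    """Collapse per-vehicle reports for one road segment into one record."""
    jam_votes = sum(1 for r in reports if r["traffic_jam"])
    return {
        "segment": reports[0]["segment"],
        "traffic_jam": jam_votes > len(reports) / 2,               # majority vote
        "avg_speed_kmh": median(r["speed_kmh"] for r in reports),  # robust to outliers
        "sample_size": len(reports),
    }

reports = [
    {"segment": "I-95-N-412", "traffic_jam": True,  "speed_kmh": 11},
    {"segment": "I-95-N-412", "traffic_jam": True,  "speed_kmh": 14},
    {"segment": "I-95-N-412", "traffic_jam": False, "speed_kmh": 95},  # faulty sensor
]
print(aggregate_reports(reports))
```

Only the single aggregated record needs to cross the network to the cloud data store, which is where the bandwidth and cost savings come from.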
Content Discovery: How to Win the Battle for Attention
Think of the last app you used today. For me, it was searching for the latest episode of Sesame Street on HBO Max for my toddler. For someone else, it was finding a YouTube video on how to bake a cake. Or listening to a song recommended by Spotify. All of these instances, steps we barely put any thought into, are examples of content discovery, the bidirectional process by which users and applications interact, ensuring users’ known and unknown content consumption needs are fulfilled. As content is generated at a nearly unfathomable and exponential pace (think 500 hours of video uploaded to YouTube every single minute), catching and holding consumers’ attention with content is only going to become more difficult. Delivering great content discovery experiences that meet evolving customer expectations will be the only way to keep up.

Content discovery happens in two ways, resembling push and pull forces:

- Push (recommendation engines): Content is suggested to the user. This can look like personalized landing pages or content recommendations.
- Pull (search): The customer searches for content, typically via a search bar. The user leads the action, and a new opportunity for suggesting relevant content is created.

Consider how you consume content. Maybe you’re searching for a show you want to watch, or, once that show is completed, the app you’re using recommends another similar show you might like. If media providers can master both of these processes – accurate search and intuitive recommendations – they can expect to fuel user engagement and decrease churn. Simple enough, right? Unfortunately, developing and deploying cutting-edge search and recommendation engines is easier said than done. A few major challenges stand in the way: integrating data from multiple sources with excruciating extract, transform, load (ETL) pipelines; adding and maintaining a separate search engine solution; losses in both time-to-value and developer productivity; and more.
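To make the “push” half concrete, here is a deliberately simplified recommender: it scores unseen titles by how much the viewing histories of other users overlap with the target user’s. The profiles and titles are made up, and a production system would use learned models rather than raw overlap counts.

```python
# Toy "users with similar profiles" recommender: weight each candidate
# title by the viewing-history overlap of the users who watched it.

def recommend(target, profiles):
    """profiles: {user: set of watched titles}. Returns ranked suggestions."""
    seen = profiles[target]
    scores = {}
    for user, watched in profiles.items():
        if user == target:
            continue
        overlap = len(seen & watched)        # similarity = shared titles
        for title in watched - seen:         # only titles the target hasn't seen
            scores[title] = scores.get(title, 0) + overlap
    return sorted(scores, key=lambda t: (-scores[t], t))

profiles = {
    "ana":   {"Dark", "Ozark", "Narcos"},
    "ben":   {"Dark", "Ozark", "Mindhunter"},
    "carla": {"Narcos", "Mindhunter"},
}
print(recommend("ana", profiles))  # → ['Mindhunter']
```

Real engines replace the overlap score with models trained on behavioral data, but the shape of the problem, turning other users’ activity into a ranked suggestion list, is the same.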
Having a unified data platform that can handle analytics at scale and search natively is a massive advantage for effective content discovery. Let’s look at how an advanced data platform like MongoDB Atlas makes the push and pull of content discovery possible.

The push: Real-time, relevant recommendations

Hitting users with the content they want, when they want it (whether they know it’s the content they want or not), is the aim of any recommendation engine. It’s particularly important in the media content consumption game, since there are so many competing platforms vying for user attention. As the volume and variety of user data increases by the second — generated from what they’re watching, what they stopped watching, and what devices they’re using to interact with content — recommendation engines need to move beyond simple if-then-else statements based on historical data to advanced machine learning models that learn from data captured in real time, such as causal inference models that predict what people might want to watch based on what other users with similar profiles and viewing habits are currently watching. MongoDB integrates natively with machine learning and artificial intelligence engines, using change streams to keep the ML models that provide recommendations up to date. The consumer profile is updated and saved in MongoDB, which acts as the persistence layer and effectively becomes the single-view consumer data platform, a critical component in the pursuit of real-time analytics informing recommendations. Developers now have a single view of data, and machine learning models use that unsiloed data to make lightning-fast, accurate recommendations that keep users engaged with content. MongoDB acts as the catalyst for real-time recommendations informed by customer behavior triggers.

The pull: Solving the search bar

Virtually every application today has a search function — but it is also challenging to get right.
Unlike database queries, where the user knows exactly what they are querying for, search has to give fast and relevant results to open-ended, natural language inputs, tolerating typos and partial search terms and essentially inferring the user’s intent. Ultimately, consumers expect a Google-level search experience, and if they don’t get it, they’ll move to the next content platform. Building your own search engine that meets user expectations, even as those expectations evolve, is costly in terms of the time and resources spent developing and maintaining it. Many more database indexes need to be added to support search queries, and the search workload will start to contend for system resources with the core data persistence and processing demands of the application. To avoid resource contention between these two workloads, the database needs to be carefully sized, closely monitored, and scaled, driving up operational overhead and cost. Also, developing a database search solution won’t offer you any advantage over the competition, since there are dedicated search engines on the market that can do that heavy lifting for you. This reality has led companies to bolt on a specialized search engine to their database – not that this is a simple solution, either. Bolting a search cluster onto your database requires adding a new query language to integrate your application with the search engine, which increases the operational and architectural complexity of your environment. The result is an elongated time to market for what could be a suboptimal search engine. Atlas Search solves the architectural and operational challenges of adding a separate search engine, since it’s fully integrated with the MongoDB Atlas data platform. Powered by the market-leading search engine Apache Lucene, it provides advanced search capabilities while reducing architectural sprawl. Customers have reported improved development velocities of 30% to 50% after adopting Atlas Search.
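As an illustration of what the “pull” side looks like in practice, an Atlas Search query is expressed as a regular aggregation pipeline using the $search stage. In this sketch, the index name, field paths, and collection are assumptions about the schema, not part of any specific application.

```python
# Sketch of a typo-tolerant Atlas Search query: the $search stage with
# the `text` operator and `fuzzy` matching (Lucene edit distance).
# Index ("default") and fields ("title", "description") are assumed.

def build_search_pipeline(term, limit=10):
    return [
        {"$search": {
            "index": "default",
            "text": {
                "query": term,
                "path": ["title", "description"],
                "fuzzy": {"maxEdits": 1},   # tolerate one-character typos
            },
        }},
        {"$limit": limit},
        {"$project": {"title": 1, "score": {"$meta": "searchScore"}}},
    ]

pipeline = build_search_pipeline("stranger thigns")  # note the typo
# Against a live cluster: results = db.shows.aggregate(pipeline)
```

Because the stage runs in the same aggregation framework as any other MongoDB query, no separate query language or bolt-on search cluster is involved.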
Atlas manages the required search infrastructure and automatically keeps the search indexes in sync with data mastered in the MongoDB database. Developers interact with search using the same universal interface they are accustomed to using with other data in the platform, which means no new solutions to learn and no decrease in developer productivity. Maintaining two separate systems adds complexity and lowers productivity compared to the unified platform offered by MongoDB Atlas. With MongoDB Atlas, you can deliver the right recommendations at the most opportune time and provide a best-in-class search experience to keep users engaged. No secondary solutions. No months of wasted development. Just a single, simplified process for game-changing content discovery. Take a deeper look into content discovery powered by MongoDB in our recent guide, Simplifying Content Discovery.
How to Build the Right App For Your Mobile Workforce
The average turnover rate in the retail industry is slightly above 60%. This high turnover rate translates into more than 230 million days of lost productivity and $19 billion in costs associated with recruiting, hiring, and training, according to Human Resources Today. When surveyed by Harvard Business Review, 86% of the organizations polled said frontline workers need better technology-enabled insights to be able to make good decisions in the moment. The survey also pointed out that leading retailers are starting to consider the impact tech can have on productivity. Combined, the data points to a growing chorus of evidence suggesting that a mobile workforce — where employees are empowered with the digital tools needed not only to provide a great customer experience but also to make their own jobs easier — is less likely to feel burnout and be dissatisfied with their jobs.

What a mobile workforce can do for your organization

With an intuitive, modern app, you can accomplish key business objectives.

- Improve the customer buying experience: Frontline staff equipped with mobile-first technologies can better match the fluency of their customers. These tools enable staff to serve customers better by providing accurate, real-time information, such as which items are in stock, or by making suggestions based on customer buying history.
- Increase employee productivity: According to Deloitte, workers spend as much as three hours each week looking for the information they need. Imagine the impact regaining those hours could have on worker productivity!
- Track and improve performance, sales, and the buying experience through data analysis: The potential of workforce enablement apps extends beyond just identifying which items are in stock at which stores. They can also gather valuable data that can reveal key patterns in everything from customer purchase habits and peak shopping times to individual worker metrics, such as the number of successful sales.
With those data insights, you can better allocate workers, assign workers based on strengths, stock items based on buying trends, and more.

Challenges when building a retail worker app

An always-connected and innovative retail workforce enablement app sounds great, but building this kind of intuitive app from the ground up presents a lot of challenges for already strained IT teams. Many retailers still rely heavily on relational databases that require additional support from a sprawl of supporting databases and technologies. As shown in this typical retail tech stack, legacy architectures are often made up of specialist NoSQL and relational databases plus additional mobile data and analytics platforms — all resulting in siloed data, slow data processing, and unnecessary complexity. This “spaghetti” architecture has several drawbacks when it comes to building a mobile app that truly empowers developers.

- The data from all these systems ends up siloed, requiring time-consuming ETL maneuvers to bring it together into a single view.
- Real-time access to data and insights (required to know what’s out of stock, who made a purchase for pickup, and more) becomes harder to orchestrate.
- It’s hard to ensure data synchronization between a worker’s app and the backend database when workers move in and out of connectivity (when they walk to the back of a warehouse or stockroom, for instance). It’s even harder with a sprawling data architecture to account for.
- The added complexity of managing multiple databases, analytics suites, and the connections between them slows down your development teams, burdening them with extra maintenance work.

As a result, IT teams will spend more time managing data silos and supporting old systems and applications than enabling mobile platforms to support new applications and empower frontline staff.
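The synchronization challenge above can be illustrated with a toy offline write queue: writes made while out of coverage are buffered on the device and replayed, last write wins, once the connection returns. Products such as MongoDB’s Atlas Device Sync are built to handle this (plus conflict resolution) for you; this sketch only shows the core idea.

```python
# Toy offline-sync model for a worker app: buffer writes while the
# device has no connectivity, replay them in order on reconnect.

class OfflineQueue:
    def __init__(self):
        self.pending = []   # writes buffered while offline
        self.backend = {}   # stands in for the backend database

    def write(self, key, value, online):
        if online:
            self.backend[key] = value
        else:
            self.pending.append((key, value))

    def reconnect(self):
        """Replay buffered writes in order: last write wins per key."""
        for key, value in self.pending:
            self.backend[key] = value
        self.pending.clear()

q = OfflineQueue()
q.write("sku-123.stock", 40, online=True)
q.write("sku-123.stock", 38, online=False)  # in the stockroom, no signal
q.write("sku-123.stock", 35, online=False)
q.reconnect()
print(q.backend["sku-123.stock"])  # → 35
```

A production sync layer must also resolve conflicting writes from multiple devices, which is exactly the complexity that grows unmanageable when it has to be hand-built across a sprawling data architecture.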
To learn more about these issues — and overcome them — read our latest whitepaper, Why It’s So Hard for Retailers to Build a Workforce Enablement App (and How to Do It Right).