Paresh Saraf


How DataSwitch And MongoDB Atlas Can Help Modernize Your Legacy Workloads

Data modernization is here to stay, and DataSwitch and MongoDB are leading the way forward. Research strongly indicates that the future of the Database Management System (DBMS) market is in the cloud, and the ideal way to shift from an outdated, legacy DBMS to a modern, cloud-friendly data warehouse is through data modernization.

There are a few key factors driving this shift. Increasingly, companies need to store and manage unstructured data in a cloud-enabled system, as opposed to a legacy DBMS, which is designed only for structured data. Moreover, the amount of data generated by a business is increasing at a rate of 55% to 65% every year, and the majority of it is unstructured. A modernized database that can improve data quality and availability provides tremendous benefits in performance, scalability, and cost optimization. It also provides a foundation for improving business value through informed decision-making. Additionally, cloud-enabled databases support greater agility, so you can upgrade current applications and build new ones faster to meet customer demand.

Gartner predicts that by 2022, 75% of all databases will be on the cloud – either by direct deployment or through data migration and modernization. But research shows that over 40% of migration projects fail, due to challenges such as:

● Inadequate knowledge of legacy applications and their data design
● Complexity of code and design from different legacy applications
● Lack of automation tools for transforming legacy data processing into cloud-friendly data and processes

It is essential to take a strategic approach and choose the right partner for your data modernization journey. We’re here to help you do just that.

Why MongoDB?

MongoDB is built for modern application developers and for the cloud era. As a general purpose, document-based, distributed database, it facilitates high productivity and can handle huge volumes of data. The document database stores data in JSON-like documents and is built on a scale-out architecture that is optimal for any developer who builds scalable applications through agile methodologies. Ultimately, MongoDB fosters business agility, scalability, and innovation.

Key MongoDB advantages include:

● Rich JSON documents
● Powerful query language
● Multi-cloud data distribution
● Security for sensitive data
● Quick storage and retrieval of data
● Capacity for huge volumes of data and traffic
● Design that supports greater developer productivity
● Extreme reliability for mission-critical workloads
● Architecture built for optimal performance and efficiency

Key advantages of MongoDB Atlas, MongoDB’s hosted database as a service, include:

● Multi-cloud data distribution
● Security for sensitive data
● Designed for developer productivity
● Reliable for mission-critical workloads
● Built for optimal performance
● Managed for operational efficiency

To be clear, JSON documents are the most productive way to work with data, as they support nested objects and arrays as values, along with schemas that are flexible and dynamic. MongoDB’s powerful query language enables sorting and filtering on any field, regardless of how deeply it is nested in a document. It also supports aggregations as well as modern use cases such as graph search, geo-based search, and text search. Queries are themselves expressed in JSON and are easy to compose. MongoDB supports joins in queries and handles relationships both by referencing and by embedding. It has all the power of a relational database and much, much more.
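To make the document model and query language described above concrete, here is a minimal PyMongo sketch; the connection string, database, and field names are invented for illustration and are not taken from any particular migration.

```python
from pymongo import MongoClient

# Placeholder connection string and names, for illustration only.
client = MongoClient("mongodb+srv://<user>:<password>@cluster0.example.mongodb.net")
db = client["retail"]

# A single document can embed related data as nested objects and arrays.
db.customers.insert_one({
    "name": "Ada Lovelace",
    "address": {"city": "London", "country": "UK"},
    "orders": [
        {"order_id": 1, "total": 120.50, "items": ["keyboard", "mouse"]},
        {"order_id": 2, "total": 45.00, "items": ["cable"]},
    ],
})

# Filter and sort on a nested field, no matter how deeply it is embedded.
for doc in db.customers.find({"address.city": "London"}).sort("name", 1):
    print(doc["name"])

# Aggregations run against the same documents, e.g. total spend per customer.
pipeline = [
    {"$unwind": "$orders"},
    {"$group": {"_id": "$name", "total_spend": {"$sum": "$orders.total"}}},
]
print(list(db.customers.aggregate(pipeline)))
```

The same nested structure that is written here is what the query and the aggregation operate on, with no joins involved.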
Companies of all sizes can use MongoDB, as it successfully operates on a large and mature platform ecosystem. Developers enjoy a great user experience, with the ability to provision MongoDB Atlas clusters and commence coding instantly. A global community of developers and consultants makes it easy to get the help you need, if and when you need it. In addition, MongoDB supports all major languages and provides enterprise-grade support.

Why DataSwitch as a partner for MongoDB? Automated schema re-design, data migration & code conversion

DataSwitch is a trusted partner for cost-effective, accelerated solutions for digital data transformation, migration, and modernization through a modern database platform. Our no-code and low-code solutions, along with cloud data expertise and unique, automated schema generation, accelerate time to market. We provide end-to-end data, schema, and process migration with automated replatforming and refactoring, thereby delivering:

● 50% faster time to market
● 60% reduction in total cost of delivery
● Assured quality with built-in best practices, guidelines, and accuracy

Data modernization: How “DataSwitch Migrate” helps you migrate from RDBMS to MongoDB

DataSwitch Migrate (“DS Migrate”) is a no-code and low-code toolkit that leverages advanced automation to provide intuitive, predictive, and self-serviceable schema redesign from a traditional RDBMS model to MongoDB’s document model with built-in best practices. Based on data volume, performance, and criticality, DS Migrate automatically recommends the appropriate ETTL (Extract, Transfer, Transform & Load) data migration process. DataSwitch delivers data engineering solutions and transformations in half the timeframe of typical existing data modernization solutions. Consider these key areas:

● Schema redesign – construct a new framework for data management. DS Migrate provides automated data migration and transformation based on your redesigned schema, as well as no-touch code conversion from legacy data scripts to MongoDB Atlas APIs. Users simply drag and drop the schema for redesign, and the platform converts it to a document-based JSON structure by applying MongoDB modeling best practices. The platform then automatically migrates data to the new, redesigned JSON structure and converts the legacy database scripts for MongoDB. This automated, user-friendly data migration is faster than anything you’ve ever seen. Here’s a look at how the schema designer works.
● Refactoring – change the data structure to match the new schema. DS Migrate handles this through auto code generation for migrating the data. This goes far beyond a mere lift and shift: DataSwitch takes care of refactoring and replatforming (moving from the legacy platform to MongoDB) automatically, a game-changing capability that performs all these tasks within a single platform.
● Security – mask and tokenize data while moving it from on-premises to the cloud. As the data moves to a potentially public cloud, you must keep it secure. DataSwitch’s tool can configure and apply security measures automatically while migrating the data.
● Data quality – ensure that data is clean, complete, trustworthy, and consistent. DataSwitch allows you to configure your own quality rules and automatically apply them during data migration.

In summary: first, the DataSwitch tool automatically extracts the data from an existing database, such as Oracle. It then exports the data and stores it locally before zipping and transferring it to the cloud.
Next, DataSwitch transforms the data by altering the data structure to match the redesigned schema and applying data security measures during the transform step. Lastly, DS Migrate loads the data and processes it into MongoDB in its entirety.

Process Conversion

Process conversion, where scripts and process logic are migrated from a legacy DBMS to a modern DBMS, is made easier thanks to a high degree of automation. Minimal coding and manual intervention are required, and the journey is accelerated. It involves:

● DML – Data Manipulation Language
● CRUD – typical application functionality (Create, Read, Update & Delete)
● Converting to the equivalent MongoDB Atlas API calls (an illustrative example of such converted calls appears at the end of this post)

Degree of automation DataSwitch provides during migration:

Schema Migration Activities – DS Automation Capabilities
● Application Data Usage Analysis – 70%
● 3NF to NoSQL Schema Recommendation – 60%
● Schema Re-Design Self Service – 50%
● Predictive Data Mapping – 60%

Process Migration Activities – DS Automation Capabilities
● CRUD-based SQL conversion (Oracle, MySQL, SQL Server, Teradata, DB2) to MongoDB API – 70%

Data Migration Activities – DS Automation Capabilities
● Migration Script Creation – 90%
● Historical Data Migration – 90%
● Catch-up Load – 90%

DataSwitch Legacy Modernization as a Service (LMaaS): Our consulting expertise combined with the DS Migrate tool allows us to harness the power of the cloud for data transformation of legacy RDBMS data systems to MongoDB. Our solution delivers legacy transformation in half the time frame through pay-per-usage. Key strengths include:

● Data Architecture Consulting
● Data Modernization Assessment and Migration Strategy
● Specialized Modernization Services

DS Migrate Architecture Diagram

Contact us to learn more.
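For illustration only (this is not DS Migrate output), the sketch below shows roughly what CRUD statements converted from SQL into MongoDB driver calls can look like; the connection string, collection, and field names are hypothetical.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder URI
orders = client["sales"]["orders"]  # hypothetical database and collection names

# CREATE -- roughly the converted form of:
#   INSERT INTO orders (order_id, status, amount) VALUES (1001, 'NEW', 250);
orders.insert_one({"order_id": 1001, "status": "NEW", "amount": 250})

# READ -- SELECT * FROM orders WHERE status = 'NEW' AND amount > 100;
new_orders = list(orders.find({"status": "NEW", "amount": {"$gt": 100}}))

# UPDATE -- UPDATE orders SET status = 'SHIPPED' WHERE order_id = 1001;
orders.update_one({"order_id": 1001}, {"$set": {"status": "SHIPPED"}})

# DELETE -- DELETE FROM orders WHERE order_id = 1001;
orders.delete_one({"order_id": 1001})
```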

May 13, 2021

Accelerate Data Modernization with Infosys Data Model Converter

Are you in the process of migrating applications from a relational database to MongoDB? If so, you’re likely trying to understand and decide how your enterprise data should best be modeled. Our previous blog discussed how Infosys Data Services Suite can help enterprises move data seamlessly from legacy relational databases to MongoDB. But moving data is only one part of the puzzle. The more significant step is choosing the target data model, or schema design, a process that usually requires many hours of highly skilled talent. That’s why we created this follow-up blog to help you get started.

Rethinking Schema Design

Ultimately, schema design can be the difference between an inefficient, disorganized database and a strategic one that empowers the entire company. Schema design in MongoDB requires a change in perspective for data architects, developers, and database administrators. They have to:

● Rethink the legacy relational data model, which flattens data into rigid two-dimensional tabular structures of rows and columns. The new data model is rich and dynamic, with embedded sub-documents and arrays.
● Rethink how the data platform works. In relational databases, it is extremely difficult to change the data platform as the application evolves. In MongoDB, however, the apps and APIs come first and the data platform dynamically accommodates the data.

Getting Schema Design Right

Begin the schema design process by considering the application’s requirements. You’ll want to model the data in a way that leverages the flexibility of the document model. In schema migrations, it may seem easy at first to simply mirror the flat schema of the relational database in the document model. However, this negates the advantages enabled by the rich, embedded data structures of the document model. For example, data that belongs to a parent-child relationship in two RDBMS tables can be collapsed (embedded) into a single document in MongoDB (see the sketch at the end of this section).

The application’s data access patterns should also drive schema design, with a specific focus on:

● The read/write ratio of database operations, and whether it is more important to optimize the performance of one operation over another
● The types of queries and updates performed by the database
● The lifecycle of the data and the growth rate of documents

Simplifying Schema Design with Infosys Data Model Converter

Infosys has developed a solution called Infosys Data Model Converter that processes the source relational schema and the above-mentioned signals as inputs and automatically provides target MongoDB schema suggestions. Infosys Data Model Converter is available as part of Infosys Modernization Suite, which accelerates enterprises’ modernization journeys. Each schema suggestion is accompanied by a detailed analysis report. The data modeler can use this as a starting point and iterate over the schema to arrive at the final MongoDB schema. The Infosys Data Model Converter reduces 50-60% of the effort typically spent on schema design.
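As a concrete example of the parent-child embedding mentioned above, here is a small Python sketch (not produced by Infosys Data Model Converter) that collapses a customer row and its order rows from two relational tables into one MongoDB document; all names and the connection string are invented.

```python
from pymongo import MongoClient

# Rows as they might come from two relational tables (hypothetical data).
customer_row = {"customer_id": 42, "name": "Acme Corp"}
order_rows = [
    {"order_id": 1, "customer_id": 42, "total": 99.0},
    {"order_id": 2, "customer_id": 42, "total": 150.0},
]

# Collapse the parent-child relationship into a single embedded document,
# so the application reads one document instead of joining two tables.
customer_doc = {
    "_id": customer_row["customer_id"],
    "name": customer_row["name"],
    "orders": [
        {"order_id": o["order_id"], "total": o["total"]}
        for o in order_rows
        if o["customer_id"] == customer_row["customer_id"]
    ],
}

client = MongoClient("mongodb://localhost:27017")  # placeholder URI
client["shop"]["customers"].insert_one(customer_doc)
```

Whether to embed or reference in a real schema still depends on the access patterns, document growth, and read/write ratio discussed above.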
Key Features

● Boosts productivity by augmenting the migration from RDBMS to a NoSQL database
● Saves time by automatically extracting schema, query, and data patterns from an existing RDBMS
● Comprehensively analyzes the RDBMS entity relations, data, and read-and-write patterns
● Applies a rich set of rules and generates a fully compliant NoSQL target-state data model
● Offers flexibility by externalizing the rules for organization-specific customizations
● Connects and deploys the model to the target NoSQL platform with sample data

Discover more ways in which Infosys can help you unlock value from modernization. Contact us for any modernization questions.

April 15, 2021

Optimize Data Modeling and Schema Design with Hackolade and MongoDB

Development teams are constantly searching for new ways to quickly enhance applications and satisfy the rapid progression of customer needs. The dynamic schema evolution in MongoDB enables such a reality through the power and flexibility of storing data in a JSON document format instead of in relational tables. Notably, developers love the flexibility and schema-less nature of the JSON document format. But as application complexity and scale increase in an enterprise environment, this flexibility must be skillfully organized to harness the power of the solution, maximize developer productivity, and lower total cost of ownership.

For large enterprises and government agencies, the key is to leverage the benefits of modern applications running on MongoDB Atlas while also ensuring proper data management and governance. This is where a data modeling tool designed specifically for MongoDB will greatly help. Enter Hackolade.

For decades, Entity-Relationship Diagrams (ERDs) have been used to visually represent the data structures of relational databases. But ERDs were originally designed for flat structures only. Hackolade, a MongoDB certified technology partner, has enhanced ERD capabilities to accommodate the representation of JSON hierarchical structures with nested objects and arrays. Hackolade is pioneering data modeling and schema design for NoSQL databases and REST APIs.

Why it Matters

A data model is an abstraction describing and documenting an organization’s information system. It is a collection of Entity-Relationship diagrams, descriptions, constraints, and metadata representing data structures:

Hackolade data model for MongoDB

A schema, on the other hand, is a “consumable” scope contract describing the layout or structure of a file, a transaction, or a database. It is an authoritative source for producers and consumers to agree on the structure being exchanged or accessed. While data models are useful for humans to understand structure, schemas are the technical artifact necessary for systems to interact. Hackolade provides both, allowing MongoDB customers to easily visualize the data model, intuitively create and enforce schema with MongoDB’s JSON Schema Validator, and iteratively change the schema as the applications evolve.

Automatically-generated JSON Schema Validator

Customer Benefits

Increase data agility with forward-engineering

An ERD provides an easy-to-understand picture of your data. As a communication tool, it helps facilitate dialog between application stakeholders such as business analysts, designers, architects, developers, and DBAs. With an ERD, you can evaluate different “what if” scenarios, identify the ideal way to denormalize data, and leverage the benefits of MongoDB Atlas technology. Simply apply a query-driven design of the schema after analyzing the access patterns of the application. You can then visualize and evaluate the impacts without writing a line of code, which is obviously a more productive approach than coding first and then realizing that much needs to be rewritten to accommodate everyone’s needs. The Hackolade software generates several artifacts, such as: collection creation with a validator script requiring no knowledge of JSON Schema syntax, sample JSON documents, Mongoose schemas, documentation in HTML, Markdown, or PDF, plotter output of ERD pictures, document and index sizing estimates, and more. The process is easily integrated into a Jenkins CI/CD pipeline by invoking a flexible Command-Line Interface.
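To show what an enforced schema looks like on the MongoDB side, here is a minimal sketch of creating a collection with a $jsonSchema validator in Python; the schema itself is invented for illustration and is not Hackolade-generated output.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder URI
db = client["inventory"]  # hypothetical database name

# Create a collection whose documents must satisfy the $jsonSchema rules below.
db.create_collection("products", validator={
    "$jsonSchema": {
        "bsonType": "object",
        "required": ["sku", "name", "price"],
        "properties": {
            "sku": {"bsonType": "string"},
            "name": {"bsonType": "string"},
            "price": {"bsonType": ["double", "int"], "minimum": 0},
            "tags": {"bsonType": "array", "items": {"bsonType": "string"}},
        },
    },
})

# Inserts that violate the schema (for example, a missing price) are rejected.
db.products.insert_one({"sku": "SKU-1", "name": "Cable", "price": 4.99})
```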
Ensure data quality and compliance through schema reverse-engineering

Deriving a data model from an existing MongoDB instance is not as easy as fetching a DDL from a relational database. Schemas must be inferred from a representative sample of documents in each collection. Hackolade has perfected its schema inference algorithms to accommodate the flexibility and polymorphism of JSON hierarchical structures. The derived models become a trusted source to feed data dictionaries and data governance suites. Reverse-engineering helps ensure data quality and compliance, with the use of an automated Command-Line Interface process.

Facilitate application modernization with the denormalization of legacy data structures

Hackolade can import a variety of structures from relational DDLs, logical data models in XSD format, JSON documents and schemas, and Excel templates. To leverage the benefits of MongoDB, these structures should evolve to embed information where applicable and avoid slow JOINs. This should not be done blindly, but based on a proper analysis of the application access patterns in the context of data volume estimates and relationship cardinality. Hackolade provides a handy feature to quickly evolve a relational data model towards a denormalized schema, thereby leveraging the benefits of MongoDB’s document model and facilitating modernization. The process easily hooks into the forward-engineering process described above, generating pictures, scripts, and documentation.

Implement continuous evolution and data management

The lifecycle of modernized applications does not stop after the initial data migration step. Applications must be successfully operated, and will continue to evolve, resulting in likely schema changes. Hackolade is designed to facilitate agile development approaches and the full lifecycle of modern software. It provides the necessary tooling to design and manage data models and schemas for successful application modernization on MongoDB Atlas.

Learn how to maximize developer productivity and lower total cost of ownership using data modeling with Hackolade, and the MongoDB University data modeling advanced course. Download the joint solution brief: MongoDB and Hackolade: Visual Data Modeling for MongoDB Schemas.

March 11, 2021

Capgemini Solutions that help customers modernize applications to MongoDB

Companies across every industry vertical continue to face the challenge of how to effectively migrate and quickly access massive amounts of enterprise data, all while keeping system performance up to par throughout the obstacle-ridden process. The complexities involved with the ubiquitous, traditional Relational Database Management Systems (RDBMS) are many. RDBMS systems can often inhibit performance, falter under heavy volumes, and slow down deployment. With MongoDB’s document-based, distributed database, however, performance and volume issues are easily addressed. But when it comes to speeding up time to market, the right auxiliary tools are still needed. Capgemini, a MongoDB partner and global leader in digital transformation, provides the final piece of the puzzle with a new tool rooted in automated intelligence. In this blog, we’ll explore three key Capgemini solutions that help customers modernize to MongoDB:

● Tools that expedite time to market
● Migration from legacy systems to MongoDB
● New development using MongoDB as a backend database

Whether your company is developing a new database or migrating from legacy systems to MongoDB, Capgemini’s new Database Convert & Compare (DCC) tool can help. Below, we’ll detail how DCC works, then walk through a few recent client examples and the immense benefits reaped.

Tool: Database Convert & Compare (DCC)

A powerful tool developed by the Capgemini team, DCC optimizes activities like database migration, data comparison, validation, and much more. The tool can perform data transformations with specific customization based on the source and target databases in scope. When migrating from RDBMS to MongoDB, DCC achieves 70% automation and 30% manual retrofit at the database level.

How does DCC work?

In the context of RDBMS-to-NoSQL migration, DCC performs the migration in three stages.

1) Assessment

● Source database schema assessment – DCC extracts source schema information and performs an assessment to generate a detailed inventory of data objects such as tables, views, stored procedures, and indexes. It also generates a detailed report on data volume from each table, which helps in estimating data migration time from source to target.
● Apply analytics to prepare a recommendation for the target database structure – the target structure varies based on parameters such as:
  - Table relationships (one to many, many to many, one to one)
  - Indexes applied on tables for performance requirements
  - Column data types

2) Schema Migration

● Customize the tool to apply the recommendations from step 1.2, generating the script for the target database
● Target schema script preparation – DCC generates a complete database schema script except for a few object types such as stored procedures and views
● Produce a detailed report of the schema migration, inclusive of objects that couldn’t be migrated
● Manual intervention is required to apply the business logic of the source database’s stored procedures and views to the target environment application

3) Data Migration

● Column mapping – the assessment report generates an inventory of source database table fields as well as the post-recommendation schema structure; the report also provides recommended field mapping from source to target based on the adopted recommendation and DCC customization
● Post-migration data validation script – DCC generates a data validation script after data migration is complete, which takes the field mapping into consideration from the related assessment and recommendation reports
● Data migration scripts for execution – DCC allows for the setup and configuration of different scripts for data migration, such as:
  - One-time data migration from source to target
  - Daily batch runs to sync up source and target database data
  - Intermittent data validation during the data migration process (if any discrepancies are found in validation, the job stops and generates a report with the potential root cause of the data migration issue)
● Standalone data comparison – DCC allows data validation between the source and target databases to be run in isolation. In this case, DCC generates source database table inventory details and extracts target database collection inventory details. Minimal manual intervention is required to perform the field mapping and set the configuration in the tool for data migration execution. Other configuration features, such as one-time migrations or daily batch migrations, can be configured as well.

The Capgemini team has successfully implemented and deployed the DCC tool for various banking customers for end-to-end RDBMS-to-NoSQL migration, including application retrofit and rewiring using other capable tools such as CAP360.

Case study 1: Migration from Mainframe to MongoDB for a Large European Investment Bank

A large banking client encountered significant challenges in terms of growth and scale-up, low resilience and increased risks, and increasing costs associated with the advent of mobile banking and a related significant increase in volume. To help the client evolve more quickly, Capgemini built an Operational Data Platform to offload expensive mainframe operations, as well as store and process customer transactions for business operations, analysis, and reporting.

The Challenge:

● Inefficiency and slowness in meeting customer demand for new digital banking services, due to heavy reliance on legacy infrastructure and apps
● Continued growth in traffic and the launch of new digital services, leading to increased cost of operations and decreased performance
● The mainframe was a single point of failure for many applications; outages resulted in poor customer service, brand erosion, and regulatory concerns

The Approach:

An analysis of digital channels revealed that 92% of traffic was generated by 25 interaction types, with 85% of these being read-only. To offload these operations from the mainframe, an operational data lake (ODL) was created. The MongoDB-based ODL was updated in near real-time via change data capture and a messaging queue to power existing apps, new digital services, and other APIs.
Outcome and Benefits:

● Accelerated time to market for new digital services, including personalization
● Improved stand-in capability to support resiliency during planned and unplanned mainframe outages
● Reduced number of read-only transactions hitting the mainframe (MIPS cost), freeing up resources for additional growth
● Saved the customer over 80% in year-on-year post-migration costs. The new MongoDB database seamlessly handled 25mn+ transactions per day, as well as a data volume of over 30 months of history with ~13b transactions held in 114m documents

Case study 2: Migration of a Large-scale Database from Legacy to MongoDB for a US-based Insurance Customer

A US-based insurance client faced disparate data spread across 100+ systems, making data aggregation a cumbersome process. The client wanted to access the many data points around a single customer without hindering performance of the entire system.

The Challenge:

● Reconciling different data schemas from multiple systems into a single schema is problematic and, in many cases, impossible
● When adding new data sources, it is difficult to iterate on the schema quickly
● Providing access to the data within the “Single View” requires ad hoc queries as well as multi-layer indexing and aggregation, which becomes complicated for relational databases to provide
● Lack of personalization and of the ability to provide context-based experiences in real time results in lost business opportunities

Approach:

In order to assist customer service reps in real time, we built “The Wall,” a single-view application that pulls disparate data from legacy systems for analytics. Additionally, we designed a flexible data model to aggregate disparate data into a single data store. MongoDB’s expressive query language and secondary indexes can reach any field in real time, making data access faster and easier. Our approach was designed on four key foundations:

● Document model – a rich and flexible data store. A single document can store up to 16 MB of data, and more than 20 data types provide flexibility in managing data
● Versatility – a variety of structured and non-structured data models defined
● Analytics – a strong aggregation framework to aggregate data related to a single customer (see the sketch at the end of this post)
● Workload isolation – parallel runs of operational and analytical workloads on the same cluster

Outcome and Benefits:

Our largest insurance customer was able to attain a single view of the customer within a 90-day timespan. A different insurance customer achieved a 360-degree view of 13 million customers on MongoDB Enterprise Advanced. And yet another esteemed healthcare customer achieved as much as a 300% reduction in processing times and increased processing throughput with 50% less hardware.

Ready to accelerate your digital transformation? Capgemini and MongoDB can help you re-envision your data and advance your business processes so you can focus on innovation. Reach out today to get started.

Download the Modernization Guide
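To make the "single view" pattern concrete, here is a minimal aggregation sketch in Python; it is not Capgemini's implementation, and the database, collections, and field names are invented for illustration.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder URI
db = client["insurance"]  # hypothetical database name

# Join each customer's claims onto the customer document and keep only the
# fields a service rep needs, yielding one "single view" document per customer.
pipeline = [
    {"$lookup": {
        "from": "claims",
        "localField": "customer_id",
        "foreignField": "customer_id",
        "as": "claims",
    }},
    {"$project": {
        "customer_id": 1,
        "name": 1,
        "policies": 1,
        "claims.claim_id": 1,
        "claims.status": 1,
        "claims.amount": 1,
    }},
]
single_view = list(db.customers.aggregate(pipeline))
```

In practice, such an aggregated view is often materialized into its own collection (for example with a $merge stage) so that reads stay fast under operational load.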

February 10, 2021

Legacy Modernization with MongoDB and Confluent

In many organizations, crucial enterprise data is locked in dozens or hundreds of silos that may be controlled by different teams and stuck in systems that aren’t able to serve new workloads or access patterns. This is a blocker for innovation and insight, and it ultimately hampers the business. For example, imagine building a new mobile app for your customers that enables them to view their account data in a single view. Designing the app could require months of time simply to navigate the internal processes necessary to gain access to the legacy systems, and even more time to figure out how to integrate them.

An Operational Data Layer, or ODL, can offer a “best of both worlds” approach, providing the benefits of modernization without the risk of a full rip and replace. Legacy systems are left intact – at least at first – meaning that existing applications can continue to work as usual without interruption. New or improved data consumers access the ODL rather than the legacy data stores, protecting those stores from new workloads that may strain their capacity and expose single points of failure. At the same time, building an ODL offers a chance to redesign the application’s data model, allowing for new development and features that aren’t possible with the rigid tabular structure of existing relational systems. With an ODL, it’s possible to combine data from multiple legacy sources into a single repository where new applications, such as a customer single view or artificial intelligence processes, can access the entire corpus of data. Existing workloads can gradually shift to the ODL, delivering value at each step. Eventually, the ODL can be promoted to a system of record and the legacy systems can be decommissioned. Read our blog covering DaaS with MongoDB and Confluent to learn more.

There’s also a push today for applications and databases to be entirely cloud-based, but the reality is that current business applications are often too complex to be migrated easily or completely. Instead, many businesses are opting to move application data between on-premises and cloud deployments in an effort to leverage the full advantage of public cloud computing without having to undertake a complete, massive data lift-and-shift. Confluent can be used for both one-time and real-time data synchronization between legacy data sources and modern data platforms like MongoDB, whose fully managed global cloud database service, MongoDB Atlas, is supported across AWS, Google Cloud, and Azure. Confluent Platform can be self-managed in your own data center, while Confluent Cloud can be used on the public clouds. Whether leaving your application on-premises is a personal choice or a corporate mandate, there are many good reasons to integrate with MongoDB Atlas:
● Bring your data closer to your users in more than 70 regions with Atlas’s global clusters
● Address your most intense workloads with one-click, automated sharding for scale-out and zero-downtime scale-up
● Quickly provision TBs of database storage, all on high-performance SSDs with dedicated I/O bandwidth
● Natively query and analyze data across AWS S3 and MongoDB Atlas with MongoDB Atlas Data Lake
● Perform full-text search queries with MongoDB Atlas Search
● Build native mobile applications that seamlessly synchronize data with MongoDB Realm
● Create powerful visualizations and dashboards of your MongoDB data with MongoDB Charts
● Off-load older data to cost-effective storage with MongoDB Atlas Online Archive

In this video, we show a one-time migration and real-time, continuous data synchronization from a relational system to MongoDB Atlas using Confluent Platform and the MongoDB Connector for Apache Kafka. We also discuss different ways to store and consume the data within MongoDB Atlas. The Git repository for the demo is here.

Learn more about the MongoDB and Confluent partnership here and download the joint Reference Architecture here. Click here to learn more about modernizing to MongoDB.
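The demo repository above is the authoritative reference for the pipeline shown in the video. As a rough, generic sketch of how such a sink is typically wired up, registering the MongoDB Kafka sink connector against a Kafka Connect worker from Python might look like the following; the worker URL, topic, credentials, and database names are placeholders, not values from the demo.

```python
import json
import requests  # assumes the requests package is installed

# Placeholder Kafka Connect worker URL and connector settings, for illustration.
connect_url = "http://localhost:8083/connectors"

sink_config = {
    "name": "mongodb-atlas-sink",
    "config": {
        # Class name of the MongoDB Kafka sink connector.
        "connector.class": "com.mongodb.kafka.connect.MongoSinkConnector",
        "topics": "legacy.customers",  # hypothetical source topic
        "connection.uri": "mongodb+srv://<user>:<password>@cluster0.example.mongodb.net",
        "database": "odl",             # hypothetical target database
        "collection": "customers",     # hypothetical target collection
        "key.converter": "org.apache.kafka.connect.storage.StringConverter",
        "value.converter": "org.apache.kafka.connect.json.JsonConverter",
        "value.converter.schemas.enable": "false",
    },
}

# Register the connector with the Kafka Connect REST API.
response = requests.post(connect_url, json=sink_config, timeout=30)
response.raise_for_status()
print(json.dumps(response.json(), indent=2))
```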

January 7, 2021

Part 1: The Modernization Journey with Exafluence and MongoDB

Welcome to the first in a series of conversations between Exafluence and MongoDB about how our partnership can use open source tools and the application of data, artificial intelligence/machine learning, and neuro-linguistic programming to power your business’s digital transformation. In this installment, MongoDB Senior Partner Solutions Architect Paresh Saraf and Director for WW Partner Presales Prasad Pillalamarri sit down with Exafluence CEO Ravikiran Dharmavaram and exf Insights Co-Founder Richard Robins to discuss how to start the journey to build resilient, agile, and quick-to-market applications.

From Prasad Pillalamarri: I first met Richard Robins, MD & Co-Founder of exf Insights at Exafluence, back in June 2016 at a MongoDB World event. Their approach towards building data-driven applications was fascinating to me. Since then, Exafluence has grown by leaps and bounds in the system integration space, and MongoDB has outperformed its peers in the database market. So Paresh and I decided to interview Richard to deep-dive into their perspective on modernization with MongoDB.

Prasad & Paresh: We first met the Exafluence team in 2016. Since then, MongoDB has created the Atlas cloud data platform that now supports multi-cloud clusters, and Exafluence has executed multiple projects on mainframe and legacy modernization. Could you share your perspective on the growth aspects and synergies of both companies from a modernization point of view?

Richard Robins: Paresh and Prasad, I’m delighted to share our views with you. We’ve always focused on what happens after you successfully offload read traffic from mainframes and legacy RDBMS to the cloud. That’s digital transformation and legacy app modernization. Early on, Exafluence made a bet that if the development community embraces something, we should too. That’s how we locked in on MongoDB when we formed our company.

Having earned our stripes in the legacy data world, we knew that getting clients to MongoDB would mean mining the often poorly documented IP contained in the legacy code. That code is often where long-retired subject matter expert (SME) knowledge resides. To capture it, we built tools to scan COBOL/DB2 and stored procedures to reverse engineer the current state. This helps us move clients to a modern cloud-native application, and it's an effective way to merge, migrate, and retire the legacy data stores all of our clients contend with. Once we’d mined the IP with those tools, we needed to provide forward-engineered transformation rules to reach the new MongoDB Atlas endpoint. Using a metadata-driven approach, we built a rules catalog that includes a full audit trail and a REST API to keep data governance programs and catalogs up to date as an additional benefit of our modernization efforts. We’ve curated these tools as exf Insights, and we bring them to each modernization project.

Essentially, we applied NLP, ML, and AI to data transformation to improve modernization analysts’ efficiency, and added a low-to-no-code transformation rule builder, complete with version control and rollback capabilities. All this has resulted in our clients getting world-class, resilient capabilities at a lower cost in less time. We’re delighted to say that our modernization projects have been successful by following simple tenets, embodied in the accelerator tools we’ve built: embrace what the development community embraces, and offer as much help as possible. That’s why we are so confident we'll continue our rapid growth.
P&P: How do you think re-architecting legacy applications with MongoDB as the core data layer will add value to your business?

RR: We believe that MongoDB Atlas will continue to be developers’ go-to document database, and that we’ll see our business grow 200-300% over the next three years. With MongoDB Atlas and Realm we can provide clients with resilient, agile applications that scale, are easily upgraded, and are able to run on any cloud as well as the popular mobile iOS and Android devices. Digital transformation is key to remaining competitive and being agile going forward. With MongoDB Atlas, we can give our clients the same capabilities we all take for granted on our mobile apps: they’re resilient, easy to upgrade, usually real-time, scale via Kubernetes clusters, and can be rolled back quickly if necessary. Most importantly, they save our clients money and can be automatically deployed.

P&P: At a high level, how will Exafluence help customers take this journey?

RR: We’re unusual as a services firm in that we spend 20% of gross revenue on R&D, so our platform and approach are proven. Thus, relatively small teams for our healthcare, financial services, and industrial 4.0 clients can leverage our approach, platform, and tools to deliver advanced analytical systems that combine structured and unstructured data across multiple domains. We built our exf Insights accelerator platform using MongoDB and designed it for interoperability, too. On projects we often encounter legacy ETL and messaging tools. To show how easy it is, we recently integrated exf Insights with SAP HANA and the SAP Data Intelligence platform. Further, we can publish JSON code blocks and provide Python code for integration into ETL platforms like Informatica and Talend.

Our approach is to reverse engineer by mining IP from legacy data estates and then forward engineer the target data estate, using these steps and tools:

Reverse Engineer
● Extract stored procedures, business logic, and technical data from the legacy estate and load it into our platform.
● Use our AI/ML/NLP algorithms to analyse business transformation logic and metadata, with outliers identified for cleansing.
● Provide DB scans to assess legacy data quality, cleanse and correct outliers, and provide tools to compare DB-level data reconciliations.

Forward Engineer
To produce a clean set of metadata and business transformation logic, baselined with version control, we:
● Extract, transform, and load metadata to the target state.
● Score metadata via NLP and ML to recommend matches to the analyst, who accepts, rejects, or overrides the recommendations. Analysts can then add additional transformations, which are catalogued.
● Deploy and load cleansed data to the target state platform so any transformations and gold copies may be built.
● Automate data governance via REST API and code block generation (Python/JSON) to provide enterprise catalogs with the latest transforms.

P&P: What are your keys to a successful transformation journey?

RR: Over the past several years we’ve identified these elements and observations:

● Subject matter experts and technologists must work together to provide new solutions.
● There’s a shortage of skilled technologists able to write, deploy, and securely manage next-generation solutions. Using accelerators and transferring skills are vital to mitigating the skills shortage.
● Existing IP that’s buried in legacy applications must be understood and mined in order for a modernization program to succeed.
● A data-driven approach that combines reverse and forward engineering speeds migration and also provides new data governance and data science catalog capabilities.
● The building, care, and feeding of new, open source-enabled applications is markedly different from the way monolithic legacy applications were built.
● The document model enables analytics and interoperability.
● Cybersecurity and data consumption patterns must be articulated and be part of the process, not afterthoughts.
● Even with aggressive transformation plans, new technology must co-exist with legacy applications for some time; progress works best if it’s not a big bang.
● Success requires business and technology to learn new ways to provide, acquire, and build agile solutions.

P&P: Can you talk about the solutions you have that will accelerate the modernization journey for customers?

RR: exf Insights helps our clients visualize what’s possible with extensive, pre-built, modular solutions for healthcare, financial services, and industrial 4.0. They show the power of MongoDB Atlas and also the power of speed layers using Spark and Confluent Kafka. These solutions are readily adaptable to client requirements and reduce the risk and time required to provide secure, production-ready applications.

● Source data loading. Analyze and integrate raw structured and unstructured data, including support for reference and transactional data.
● Metadata scan. Match data using AI/NLP, scoring results and providing side-by-side comparison.
● Source alignment. Use ML to check underlying data and score results for analysts, and leverage that learning to accelerate future changes.
● Codeless transformation. Empower data SMEs to build the logic with a multiple-sources-to-target approach and transform rules that support code value lookups and complex Boolean logic. Includes versioned gold copies of any data type (e.g., reference, transaction, client, product, etc.).
● Deployment. Deploy for scheduled or event-driven repeatability and dynamically populate Snowflake or other repositories. Generates code blocks that are usable in your estate or via REST API.

We used the same five-step workflow data scientists use when we enabled business analysts to accelerate the retirement of internal data stores to build and deploy the COVID-19 self-checking app in three weeks, including Active Directory integration and downloadable apps. We will be offering a Realm COVID-19 screening app on web, Android, and iOS to the entire MongoDB Atlas community in addition to our own clients. The accelerator integrates key data governance tools, including exf Insights repository management of all sources and targets with versioned lineage; as-built transformation rules for internal and client implementations; and a business glossary integrated into metadata repositories.

P&P: Usually one of the key challenges for businesses is data being locked in silos.

RR: We couldn’t agree more. Our data modernization projects routinely integrate with source transactional systems that were never built to work together. We provide scanning tools to understand disparate data as well as ways to ingest, align, and stitch them together. Using healthcare as an example, exf Insights provides a comprehensive analytical capability, able to integrate data from hospitals, claims, pharmaceutical companies, patients, and providers. Some of this is non-SQL data, such as radiological images; for pharma companies we provide capabilities to support clinical research organizations (CROs) via a follow-the-molecule approach.
Of course, we also have to work with and subscribe to Centers for Medicare & Medicaid Services (CMS) guidelines. Our data migration focuses on collecting the IP behind the data and making the source, logic, and any transformation rules available to our clients. In financial services, it’s critical to understand sources and targets. No matter how data is accessed (federated or direct store), with Spark and Kafka we can talk to just about any data repository.

P&P: Once we discover the data to be migrated, we need to model the data according to MongoDB’s data model paradigm. That requires multiple transformations before data is loaded to MongoDB. Can you explain more about how your accelerators help here?

RR: By understanding data consumption and then looking at existing data structures, we seek to simplify and then apply the capabilities of MongoDB’s document model. It’s not unlike what a data architect would do in the relational world, but with MongoDB Atlas it’s easier. We ourselves use MongoDB for our exf Insights platform to align, transform, and make data ready for consumption in new applications. We’re able to provide full rules lineage and audit trail, and even support rollback. For the real-time speed layer we use Spark and Kafka as well. This data-driven modernization approach also turns data governance into an active consumer of the rules catalog, so exf Insights works well for regulated industries.

P&P: It’s great that we have the data migrated now. Consider a scenario where it’s a mainframe application and we have lots of COBOL code in there. It has to be moved to a new programming language like Python, with a change in the data access layer to point to MongoDB. Do you have accelerators which can facilitate the application migration? If so, how?

RR: Yes, we do have accelerators that understand the COBOL syntax to create JSON and ultimately Java, which speeds modernization. We also found we had to reverse engineer stored procedures as part of our client engagements for Exadata migration.

P&P: Once we migrate the data from legacy databases to MongoDB, validation is the key step. As this is a heterogeneous migration, it can be challenging. How can Exafluence add value here?

RR: We’ve built custom accelerators that migrate data from the RDBMS world to MongoDB, and offer data comparisons as clients go from development to testing to production, documenting all data transformations along the way.

P&P: Now that we’ve talked about all your tools which can help in the modernization journey, can you tell us how you’ve already helped your customers achieve this?

RR: Certainly. We’ve already outlined how we’ve created solution starters for modernization, with sample solutions as accelerators. But that’s not enough; our key tenet for successful modernization projects is pairing SMEs and developers. That’s what enables our joint client and Exafluence teams to understand the business, key regulations, and technical standards. Our data-driven focus lets us understand the data regardless of industry vertical. We’ve successfully used exf Insights in financial services, healthcare, and industry 4.0. Whether it’s understanding the nuances of financial instruments and data sources for reference and transactional data, or medical device IoT sensors in healthcare, or shop-floor IoT and PLC data for predictive analytics and digital twin modeling, a data-driven approach reduces modernization risks and costs.
Below are some of the possibilities this data-driven approach has delivered for our healthcare clients using MongoDB Atlas. By aggregating provider, membership, claims, pharma, and EHR clinical data, we offer robust reporting that:

● Transforms health care data from its raw form into actionable insights that improve member care quality, health outcomes, and satisfaction
● Provides FHIR support
● Surfaces trends and patterns in claims, membership, and provider data
● Lets users access, visualize, and analyze data from different sources
● Tracks provider performance and identifies operational inefficiencies

P&P: Thank you, Richard!

Keep an eye out for upcoming conversations in our series with Exafluence, where we'll be talking about agility in infrastructure and data as well as interoperability.

MongoDB and Modernization

To learn more about MongoDB's overall Modernization strategy, read here.

December 9, 2020

Accelerating Mainframe Offload to MongoDB with TCS MasterCraft™

Tata Consultancy Services (TCS), a leading multinational information technology services and consulting company, leverages its IP-based solutions to accelerate and optimize service delivery. TCS MasterCraft™ TransformPlus uses intelligent automation to modernize and migrate enterprise-level mainframe applications to new, leading-edge architectures and databases like MongoDB. In this blog, we’ll review the reasons why organizations choose to modernize and how TCS has made the process easy and relatively risk-free.

Background: Legacy Modernization

Legacy modernization is a strategic initiative that enables you to refresh your existing database and applications portfolio by applying the latest innovations in development methodologies, architectural patterns, and technologies.

● At the current churn rate, about half of today’s S&P 500 firms will be replaced over the next 10 years
● $100T of economic value is ready to be unlocked over the next decade via digital transformation (Source)

Legacy System Challenges

Legacy technology platforms of the past, particularly monolithic mainframe systems, have always been challenged by the pace of disruptive digitalization. Neither the storage nor the accessibility of these rigid systems is agile enough to meet the increasing demands of volume, speed, and data diversity generated by modern digital applications. The result is noise between the legacy system of record and digital systems of engagement. This noise puts companies at a competitive disadvantage. It often manifests as a gap between customer service and user experience, impeding the delivery of new features and offerings and constraining the business from responding nimbly to changing trends.

Operational costs of mainframe and other legacy systems have also skyrocketed. With each million instructions per second (MIPS) costing up to $4,000 per year, these older systems can create the equivalent of nearly 40% of an organization’s IT budget in technical debt, significantly increasing the overall annual run cost. And as qualified staff age and retire over the years, it’s becoming harder to find and hire people with the required mainframe skills. To manage MIPS consumption, a large number of our customers are offloading commonly accessed mainframe data to an independent operational data layer (ODL), to which queries are redirected from consuming applications.

IT experts understand both the risk and the critical need to explore modernization options like encapsulation, rehosting, replatforming, refactoring, re-architecting, or rebuilding to replace these legacy systems. The key considerations when choosing an approach are familiar: risk of business disruption, cost, timelines, productivity, and the availability of the necessary skills.

MongoDB + TCS MasterCraft™ TransformPlus = Transformation Catalyst

To stay competitive, businesses need their engineering and IT teams to do these three things, among others:

● Build innovative digital apps fast
● Use data as a competitive moat to protect and grow their business
● Lower cost and risk while improving customer experience

Some customers use a “lift and shift” approach to move workloads off the mainframe to the cloud for immediate savings, but that process can’t unlock the value that comes with microservice architectures and document databases. Others gain that value by re-architecting and rewriting their applications, but this approach can be time consuming, expensive, and risky.
More and more, customers are using a tools-driven refactoring approach to intelligently automate code conversion.

What TCS MasterCraft™ TransformPlus Brings to the Table

TCS MasterCraft™ TransformPlus automates the migration of legacy applications and databases to modern architectures like MongoDB. It extracts business logic from decades-old legacy mainframe systems as a convertible, NoSQL document data model for deployment. This makes extraction faster, easier, and more economical, and reduces the risk that comes with rewriting legacy applications. With more than 25 years of experience, TCS’s track record includes:

● 60+ modernization projects successfully delivered
● 500M+ lines of COBOL code analyzed
● 25M+ lines of COBOL code converted to Java
● 50M+ new lines of Java code auto-generated

What MongoDB Brings to the Table

MongoDB’s document data model platform can help make development cycles up to 5 times faster. Businesses can drive innovation faster, cut costs by 70% or more, and reduce their risk at the same time. As a developer, MongoDB gives you:

● The best way to work with data
● The ability to put data where you need it
● The freedom to run anywhere

Why is TCS collaborating with MongoDB for Mainframe Offload?

● Cost. Redirecting queries away from the mainframe to the ODL significantly reduces costs. Even cutting just 20%-30% in MIPS consumption can save millions of dollars in mainframe operating costs.
● Agility. As an ODL built on a modern data platform, MongoDB helps developers build new apps and digital experiences 3-5 times faster than is possible on a mainframe.
● User Experience. MongoDB meets demands for exploding data volumes and user populations by scaling out on commodity hardware, with self-healing replicas that maintain 24x7 service.

More details can be found here.

How TCS MasterCraft™ TransformPlus Accelerates Mainframe Offload to MongoDB

Data Migration
● Configures the target document schema corresponding to the relational schema
● Automatically transforms relational data from mainframe sources to MongoDB documents (a rough sketch of this kind of transformation follows at the end of this post)
● Loads data to MongoDB Atlas with the latest connector support

Application Migration
● Facilitates a cognitive, code analysis-based application knowledge repository
● Ensures complete, comprehensive application knowledge extraction
● Automates conversion of application logic from COBOL to Java, with the data access layer accessing data from MongoDB
● Splits monolithic code into multiple microservices
● Automates migration of mainframe screens to an AngularJS-based UI

Together, TCS MasterCraft™ TransformPlus and MongoDB can simplify and accelerate your journey to the cloud, streamlining and protecting your data while laying the foundation for digital success. Download the Modernization Guide to learn more.
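To make the data migration step above more concrete, here is a minimal, hypothetical Python sketch (not TCS MasterCraft™ output) that reshapes flat, mainframe-style relational records into MongoDB documents and bulk-loads them; all names and the connection string are placeholders.

```python
from pymongo import MongoClient

# Placeholder Atlas connection string and names, for illustration only.
client = MongoClient("mongodb+srv://<user>:<password>@cluster0.example.mongodb.net")
accounts = client["odl"]["accounts"]

# Flat, mainframe-style rows (e.g., unloaded from a relational source) as dicts.
account_rows = [
    {"ACCT_NO": "0001", "CUST_NAME": "J SMITH", "BRANCH_CD": "NY1"},
    {"ACCT_NO": "0002", "CUST_NAME": "A PATEL", "BRANCH_CD": "LD2"},
]
txn_rows = [
    {"ACCT_NO": "0001", "TXN_AMT": 120.50, "TXN_TYPE": "DEP"},
    {"ACCT_NO": "0001", "TXN_AMT": -40.00, "TXN_TYPE": "WDL"},
]

def to_document(account, transactions):
    """Reshape one account row plus its transaction rows into a single document."""
    return {
        "_id": account["ACCT_NO"],
        "customer_name": account["CUST_NAME"].title(),
        "branch": account["BRANCH_CD"],
        # Child rows become an embedded array instead of a separate table.
        "transactions": [
            {"amount": t["TXN_AMT"], "type": t["TXN_TYPE"]}
            for t in transactions
            if t["ACCT_NO"] == account["ACCT_NO"]
        ],
    }

docs = [to_document(a, txn_rows) for a in account_rows]
accounts.insert_many(docs)  # bulk load into the target collection
```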

October 28, 2020

DaaS with MongoDB and Confluent

An operational data layer (ODL) is an architectural pattern that centrally integrates and organizes siloed enterprise data, making it available to consuming applications. It enables a range of board-level strategic initiatives such as legacy modernization and data as a service (DaaS), and use cases such as single view, real-time analytics, and mainframe offload. The simplest representation of this pattern is something like the diagram shown in Figure 1. An ODL is an intermediary between existing data sources and consumers that need to access that data. An ODL deployed in front of legacy systems can enable new business initiatives and meet new requirements that the existing architecture can't handle, without the difficulty and risk of a full rip and replace of legacy systems. It can reduce the workload on source systems, improve availability, reduce end-user response times, combine data from multiple systems into a single repository, serve as a foundation for re-architecting a monolithic application into a suite of microservices, and more. The ODL becomes a system of innovation, allowing the business to take an iterative approach to digital transformation.

Figure 1: An ODL centrally integrates and organizes siloed enterprise data, making it available to consuming applications.

Architecture

Figure 2: ODL architecture

Source Systems and Data Producers

Source systems and data producers are usually databases, but sometimes they are other kinds of systems or data stores. Generally, they are systems of record for one or more applications, either off-the-shelf packaged apps (ERP, CRM, and so forth) or internally developed custom apps. In some cases, there may be only one source system feeding the ODL. Usually, this is the case if the main goal of implementing an ODL is to add an abstraction layer on top of that single system. This could be for the purpose of caching or offloading queries from the source system, or it could be to create an opportunity to revise the data model for modernization or new uses that don't fit the structure of the existing source system. An ODL with a single source system is most useful when the source is a heavily used system of record and/or is unable to handle new demands being placed on it; often, this is a mainframe. More often, there are multiple source systems. In this case, the ODL can unify disparate datasets, providing a complete picture of data that otherwise would not be available.

Consuming Systems

An ODL can support any consuming systems that require access to data. These can be either internal or customer-facing. Existing applications can be updated to access the ODL instead of the source systems, while new applications (often delivered as domains of microservices) typically will use the ODL first and foremost. The requirements of a single application may drive the initial implementation of an ODL, but usage usually expands to additional applications once the ODL’s value has been demonstrated to the business. An ODL can also feed analytics, providing insights that were not possible without a unified data system. Ad hoc analytical tools can connect to an ODL for an up-to-the-minute view of the company, without interfering with operational workloads, while the data can also support programmatic real-time analytics to drive richer user experiences with dashboards and aggregations embedded directly into applications.

Data Loading

For a successful ODL implementation, the data must be kept in sync with the source systems.
Once the source systems’ producers have been identified, it’s important to understand the frequency and quantity of data changes in the producer systems. Similarly, consuming systems should have clear requirements for data currency. Once you understand these, it’s much easier to develop an appropriate data loading strategy.

1. Batch extract and load. This is typically used as an initial, one-time operation to load data from the source systems. Batch operations extract all required records from the source systems and load them into the ODL for subsequent merging. If none of the consuming systems requires up-to-the-second data currency and overall data volumes are low, it may also suffice to refresh the entire dataset with periodic (daily/weekly) data refreshes. Batch operations are also good for loading data from producers that are reference data sources, where data changes are typically less frequent, for example country codes, office locations, tax codes, and similar data. Commercial extract, transform, and load (ETL) tools or custom implementations with Confluent (built on Apache Kafka) are used for carrying out batch operations: extracting data from producers, transforming the data as needed, and then loading it into the ODL. If, after the initial load, the development team discovers that additional refinements are needed to the transformation logic, then the related data in the ODL may need to be dropped and the initial load repeated.

2. Delta extract and load. This is an ongoing operation that propagates incremental updates committed to the source systems into the ODL, in real time. To maintain synchronization between the source systems and the ODL, it’s important that the delta load starts immediately following the initial batch load. The frequency of delta operations can vary drastically. In some cases, they may be captured and propagated at regular intervals, for example every few hours. In other cases, they are event-based, propagated to the ODL as soon as new data is committed to the source systems. To keep the ODL current, most implementations use change data capture (CDC) mechanisms to catch the changes to source systems as they happen. Confluent is often used to store these real-time changes captured by the CDC mechanism, thanks to the multiple connectors available for various technologies. After the changes are safely stored in Kafka, you can use a streaming application, an ETL process, or custom handlers to transform the data into the required format for the ODL. Once the data is in the right format, you can leverage the MongoDB Connector for Apache Kafka sink to stream the new delta changes into the ODL. Increasingly, the message queue itself transforms the data, removing the need for a separate ETL mechanism.

Why MongoDB for DaaS?

Unified Data Infrastructure

The move to the cloud has brought forth efficiency and a self-service mindset by addressing the operational and administrative blockers in traditional on-premises environments. However, developer workflows have remained relatively unchanged, as cloud lift-and-shift initiatives often replicate pre-existing data infrastructure complexities, including technology sprawl. MongoDB Atlas unifies transactional, operational, and real-time analytics into a single cloud-native platform and API for MongoDB users. This delivers a far better developer experience, because it makes data easier to manipulate, find, and analyze by eliminating the need for migrations across fragmented data services.
Why MongoDB for DaaS?

Unified Data Infrastructure

The move to the cloud has brought efficiency and a self-service mindset by addressing the operational and administrative blockers of traditional on-premises environments. However, developer workflows have remained relatively unchanged, because cloud lift-and-shift initiatives often replicate pre-existing data infrastructure complexities, including technology sprawl. MongoDB Atlas unifies transactional, operational, and real-time analytics into a single cloud-native platform and API. This delivers a far better developer experience, because it makes data easier to manipulate, find, and analyze by eliminating the need for migrations across fragmented data services.

Atlas Online Archive: Age out older data into cost-effective storage while retaining the ability to query both warm and cold data with a single query.

Atlas Data Lake: Query heterogeneous data stored in Amazon S3 and MongoDB Atlas in place and in its native format by using the MongoDB Query Language (MQL).

Atlas Search: Build fast, Apache Lucene-based search capabilities on top of data in Atlas without the need to migrate it to a separate search platform.

Seamless Application Development

The MongoDB Realm Mobile Database allows developers to store data locally on iOS and Android devices, as well as on IoT edge gateways, using a rich data model that is intuitive to them. Combined with Realm Sync to Atlas, Realm makes it simple to build reactive, reliable apps that work even when users are offline, and allows developers to validate and build key features quickly. Application development services such as Realm Sync provide out-of-the-box bidirectional synchronization between the cloud and your devices, and Realm's GraphQL service lets you query data with any GraphQL client. Realm also provides features such as functions, triggers, and data access rules, ultimately simplifying the code required and letting you focus on adding business value to your applications instead of writing boilerplate.

MongoDB is the Best Way for an ODL to Work with Data

Ease. MongoDB's document model makes it simple to model, or remodel, data in a way that fits the needs of your applications. Documents are a natural way to describe data.

Flexibility. With MongoDB, there's no need to predefine a schema. Documents are polymorphic: fields can vary from document to document within a single collection (see the sketch after this list).

Speed. Using MongoDB for an ODL means you can get better performance when accessing data, and write less code to do so. In most legacy systems, accessing data for an entity such as a customer typically requires joining multiple tables together. JOINs carry a performance penalty even when optimized, and optimizing them takes time, effort, and advanced SQL skills.

Versatility. Building on the ease, flexibility, and speed of the document model, MongoDB enables developers to satisfy a range of application requirements, both in the way data is modeled and in how it is queried.

Data access and APIs. Consuming systems require powerful and secure access methods to the data in the ODL; if the ODL writes back to source systems, that channel also needs to be handled. MongoDB's drivers provide access to a MongoDB-based ODL from the language of your choice.
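The following minimal sketch, using the Python driver against a hypothetical `customers` collection, illustrates the points above: polymorphic documents with embedded orders, and a query on a nested field with no JOIN. All names and values are placeholders.

```python
# Minimal sketch of the document model in an ODL: a customer and their orders
# live in one document, so reading the full entity requires no JOIN.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
customers = client["odl"]["customers"]

# Documents in the same collection can vary: the second customer carries an
# extra loyalty sub-document without any schema migration.
customers.insert_many([
    {
        "customer_id": "C-1001",
        "name": "Ada Lovelace",
        "orders": [{"order_id": "O-1", "amount": 120.50, "status": "shipped"}],
    },
    {
        "customer_id": "C-1002",
        "name": "Alan Turing",
        "orders": [{"order_id": "O-2", "amount": 80.00, "status": "pending"}],
        "loyalty": {"tier": "gold", "points": 4200},
    },
])

# Query a nested field, however deeply it is embedded, using dot notation.
for doc in customers.find({"orders.status": "pending"}, {"_id": 0, "name": 1, "orders": 1}):
    print(doc)
```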
MongoDB Lets You Intelligently Distribute an ODL

Consuming systems depend on the ODL, so it needs to be reliable and scalable and to offer a high degree of control over data distribution to meet latency and data sovereignty requirements.

Availability. MongoDB maintains multiple copies of data by using replica sets. Replica sets are self-healing: failover and recovery are fully automated, so there is no need to intervene manually to restore a system after a failure, or to add the additional clustering frameworks and agents that many legacy relational databases require.

Scalability. To meet the needs of an ODL with large datasets and high throughput requirements, MongoDB provides horizontal scale-out on low-cost commodity hardware or cloud infrastructure by using sharding.

Workload isolation. MongoDB's replication provides a foundation for combining different classes of workload on the same MongoDB cluster, with each workload operating against its own copy of the data (see the sketch below).

Data locality. MongoDB allows precise control over where data is physically stored within a single logical cluster. For example, data placement can be controlled by geographic region for latency and governance requirements, or by hardware configuration and application features to meet specific classes of service for different consuming systems.
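To illustrate the workload-isolation point, here is a hedged sketch that routes analytical reads to secondaries carrying a specific replica set tag while operational traffic continues to use the primary. The connection string, collection, and the `nodeType: ANALYTICS` tag (used here as an example of how analytics nodes can be labeled in Atlas) are assumptions for illustration.

```python
# Hedged sketch: workload isolation on one cluster via read preference tags.
# Operational reads/writes use the primary; analytics reads target tagged members.
from pymongo import MongoClient
from pymongo.read_preferences import Secondary

client = MongoClient("mongodb+srv://<user>:<password>@cluster0.example.mongodb.net")
orders = client["odl"]["orders"]

# Operational workload: goes to the primary by default.
orders.insert_one({"order_id": "O-3", "amount": 42.0, "region": "EMEA"})

# Analytical workload: same collection, but reads are routed to replica set
# members tagged for analytics, so dashboards never compete with operational
# traffic for the primary's resources.
analytics_orders = orders.with_options(
    read_preference=Secondary(tag_sets=[{"nodeType": "ANALYTICS"}])
)
pipeline = [{"$group": {"_id": "$region", "revenue": {"$sum": "$amount"}}}]
print(list(analytics_orders.aggregate(pipeline)))
```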
MongoDB Gives You the Freedom to Run Anywhere

Portability. MongoDB runs the same everywhere: on premises in your data centers, on developers' laptops, in the cloud, or as MongoDB Atlas, an on-demand, fully managed database as a service.

Global coverage. MongoDB's distributed architecture allows a single logical cluster to be distributed around the world, situating data close to users. With MongoDB Atlas, global coverage is even easier: Atlas supports more than 70 regions across all the major cloud providers.

No lock-in. With MongoDB, you can reap the benefits of a multi-cloud strategy. Because Atlas clusters can be deployed on all major cloud providers, you get the advantages of an elastic, fully managed service without being locked into a single cloud provider.

What Role Does Confluent Play?

Confluent builds an enterprise-ready platform that complements Apache Kafka with advanced capabilities: it helps accelerate application development and connectivity, enables event transformations via stream processing, simplifies enterprise operations at scale, and meets stringent architectural and security requirements.

One of Confluent's goals is to democratize Kafka for a wider range of developers and accelerate how quickly they can build event streaming applications. Confluent enables this through a set of features, including clients for using Kafka in languages other than Java, a rich prebuilt ecosystem of more than 100 connectors so developers don't have to spend time building connectors themselves, and stream processing with the ease and familiarity of SQL.

Kafka can be complex and difficult to operate at scale. Confluent makes it easier with GUI-based management and monitoring, DevOps automation (including a Kubernetes Operator), and dynamic performance and elasticity when deploying Kafka.

Confluent also offers a set of features that many organizations consider prerequisites for deploying mission-critical apps on Kafka. These include security controls over who has access to what, audit logs for investigating potential security incidents, schema validation to keep "dirty" data out of Kafka so that only "clean" data enters the system, and resilience features so that, if a data center goes down, your customer-facing applications stay running.

Confluent offers all of this with freedom of choice: you can run self-managed software anywhere, including on premises, in a public or private cloud, in containers, or on Kubernetes, or you can choose Confluent's fully managed cloud service, available on all three major cloud providers. Underpinning all of this is Confluent's committer-led expertise: more than 1 million hours of Kafka experience, along with support, professional services, training, and a full partner ecosystem. Simply put, no other organization is better suited to be an enterprise partner for Kafka, or more capable of ensuring your success, and that means everything to the organizations Confluent works with.

Learn More:

DaaS Service with MongoDB
Get Started with the MongoDB connector for Apache Kafka
Announcing the MongoDB Atlas sink and source connectors in Confluent Cloud
Download the Modernization Guide

October 8, 2020