
1Data - PeerIslands Data Sync Accelerator

Today's enterprises are in the midst of digital transformation, but they're hampered by monolithic, on-prem legacy applications that lack the speed, agility, and responsiveness required for digital applications. To make the transition, enterprises are migrating to the cloud. MongoDB has partnered with PeerIslands to develop 1Data, a reference architecture and solution accelerator that helps users with their cloud modernization. This post details the challenges enterprises face with legacy systems and walks through how working with 1Data helps organizations expedite cloud adoption.

**Modernization Trends**

As legacy systems become unwieldy, enterprises are breaking them down into microservices and adopting cloud-native application development. Monolith-to-microservices migration is complex, but it provides value across multiple dimensions, including:

- Development velocity
- Scalability
- Cost-of-change reduction
- Ability to build multiple microservice databases concurrently

One common approach for teams adopting and building out microservices is to use domain-driven design to first break the overall business domain down into bounded contexts. They also often use the Strangler Fig pattern to reduce overall risk, migrate incrementally, and decommission the monolith once all required functionality has been migrated. While most teams find this approach works well for application code, it's particularly challenging to break down monolithic databases into databases that meet the specific needs of each microservice. There are several factors to consider during the transition:

- **Duration.** How long will the transition to microservices take?
- **Data synchronization.** How much and what types of data need to be synchronized between the monolith and microservice databases?
- **Data translation in a heterogeneous schema environment.** How are the same data elements processed and stored differently?
- **Synchronization cadence.** How much data needs syncing, and how often (real time, nightly, etc.)?
- **Data anti-corruption layer.** How do you ensure the integrity of transaction data and prevent the new data from corrupting the old?

**Simplifying Migration to the Cloud**

Created by PeerIslands and MongoDB, 1Data helps enterprises address the challenges detailed above, so you can migrate and synchronize your data with confidence:

- **Schema migration tool.** Convert legacy DB schemas and related components automatically to your target MongoDB instance. Use the GUI-based data mapper to track errors.
- **Real-time data sync pipeline.** Sync data between monolith and microservice databases in near real time with enterprise-grade components.
- **Conditional data sync.** Define how to slice the data you're planning to sync.
- **Data cleansing.** Translate data as it's moved.
- **DSLs for data transformation.** Apply domain-specific business rules to the MongoDB documents you want to create from your various aggregated source system tables. This layer also acts as an anti-corruption layer.
- **Data auditing.** Independently verify data sync between your source and target systems.
- **Go beyond the database.** Synchronize data from APIs, webhooks, and events.
- **Bidirectional data sync.** Replicate key microservice database updates back to the monolithic database as needed.

**Get Started with Real-Time Data Synchronization**

With the initial version of 1Data, PeerIslands addresses the core functionality of real-time data sync between source and target systems. Here's a view of the logical architecture:

- **Source System.** The source system can be a relational database like Oracle, where we'll rely on change data capture (CDC), or other sources like events, APIs, or webhooks.
- **Data Capture & Streaming.** Captures the required data from the source system and converts it into data streams, using either off-the-shelf DB connectors or custom connectors depending on the source type. In this phase, 1Data implements data sharding and throttling, which enable data synchronization at scale.
- **Data Transformation.** The core of the accelerator, where we convert the source data streams into target MongoDB document schemas. We use a LISP-based domain-specific language (DSL) to enable simple, rule-based data transformation, including user-defined rules (see the sketch after this list).
- **Data Sink & Streaming.** Captures the data streams that need to be applied to the MongoDB database through stream consumers. The actual update of the target DB is done through sink connectors.
- **Target System.** The MongoDB database used by the microservices.
- **Auditing.** Most data that gets migrated is enterprise-critical; 1Data audits the entire data synchronization process for missed data and incorrect updates.
- **Two-way sync.** The logical architecture also enables data synchronization from the MongoDB database back to the source database.

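To make the transformation step concrete: 1Data's real rules are written in its LISP-based DSL, but the rule-pipeline idea can be sketched in a few lines of JavaScript. Everything below — the table names, fields, and the `rules` array — is an illustrative assumption, not 1Data's actual DSL.

```javascript
// Illustrative sketch only: 1Data's real transformation rules are written
// in a LISP-based DSL; this JavaScript mirrors the rule-pipeline idea.
// All table and field names are made up.

const rules = [
  // Promote scalar columns from the main source table.
  (src, doc) => ({ ...doc, customerId: src.customer.ID, name: src.customer.FULL_NAME }),

  // Fold a child table into an embedded array on the target document.
  (src, doc) => ({
    ...doc,
    addresses: src.addresses.map(a => ({ type: a.ADDR_TYPE, city: a.CITY, zip: a.POSTAL_CODE }))
  }),

  // Anti-corruption rule: reject records that violate the new domain model.
  (src, doc) => {
    if (!doc.customerId) throw new Error('missing customerId; rejecting record');
    return doc;
  }
];

// Apply every rule in order to build one target MongoDB document.
const transform = (sourceRows) => rules.reduce((doc, rule) => rule(sourceRows, doc), {});

// Example input: rows captured from hypothetical CUSTOMER and ADDRESS tables.
console.log(transform({
  customer: { ID: 42, FULL_NAME: 'Ada Lovelace' },
  addresses: [{ ADDR_TYPE: 'home', CITY: 'London', POSTAL_CODE: 'EC1A 1AA' }]
}));
```

Keeping each rule small and composable is what makes user-defined rules practical: new business rules slot into the pipeline without touching the ones already in place.
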
We used MongoDB, Confluent Kafka, and Debezium to implement this initial version of 1Data. The technical architecture is cloud agnostic and can be deployed on-prem as well. We'll be customizing it for key cloud platforms, as well as fleshing out specific architectures to adopt for common data sync scenarios. A sketch of how these off-the-shelf pieces can be wired together follows.

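Purely as an illustration of the wiring, the sketch below registers a Debezium source connector and a MongoDB sink connector with a Kafka Connect cluster through its standard REST API. The connector classes are real, but every hostname, credential, topic, and table name is a placeholder, and exact Debezium property names vary across versions.

```javascript
// Illustrative wiring sketch only: placeholders throughout, not 1Data's
// actual deployment. Requires Node 18+ (built-in fetch).

const CONNECT_URL = 'http://kafka-connect.internal:8083/connectors';

// POST a connector definition to the Kafka Connect REST API.
async function register(definition) {
  const res = await fetch(CONNECT_URL, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(definition)
  });
  if (!res.ok) throw new Error(`Connect rejected ${definition.name}: ${res.status}`);
}

(async () => {
  // Source: Debezium streams row-level changes (CDC) out of the monolith's Oracle schema.
  await register({
    name: 'monolith-oracle-source',
    config: {
      'connector.class': 'io.debezium.connector.oracle.OracleConnector',
      'database.hostname': 'oracle.internal',
      'database.port': '1521',
      'database.user': 'cdc_user',
      'database.password': 'secret',
      'database.dbname': 'MONOLITH',
      'database.server.name': 'monolith',
      'table.include.list': 'APP.CUSTOMER,APP.ADDRESS',
      'database.history.kafka.bootstrap.servers': 'kafka.internal:9092',
      'database.history.kafka.topic': 'monolith.schema-history'
    }
  });

  // Sink: the MongoDB Kafka connector writes the (transformed) streams
  // into the microservice's database.
  await register({
    name: 'customer-mongodb-sink',
    config: {
      'connector.class': 'com.mongodb.kafka.connect.MongoSinkConnector',
      'connection.uri': 'mongodb+srv://user:secret@cluster0.example.mongodb.net',
      'topics': 'monolith.APP.CUSTOMER',
      'database': 'customer',
      'collection': 'customers'
    }
  });
})().catch(console.error);
```
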
**Conclusion**

The 1Data solution accelerator lends itself to multiple use cases, from single view to legacy modernization. Please reach out to us for technical details and implementation assistance, and watch this space as we develop the 1Data accelerator further.

October 15, 2020

Reacting to Auth Events using Stitch Triggers

MongoDB Stitch makes it easy to add authentication to your application. Several authentication providers are available and can be configured in the Stitch Admin Console. Recently, authentication triggers were added to Stitch: functions can now be executed in response to authentication events such as user creation, deletion, and login.

During my Stitchcraft live coding sessions, I've been creating an Instagram-like application that uses Google Authentication. The Google authentication provider can be configured to return metadata with the authenticated user. I set up my provider to retrieve the user's email, name, and profile picture. This works well as long as only the authenticated user needs to see that data. If you want other users to be able to access it, you're going to have to write it to a collection.

Before authentication triggers, this could have been an arduous task. Now it's as simple as executing a function that performs an insert on the CREATE operation. Because I also wanted to ensure that the data in this collection stayed up to date, I created triggers for CREATE and LOGIN and pointed them to a single upsert function, as seen below.

```javascript
exports = function(authEvent) {
  // Get a handle on the users collection via the Atlas service.
  const mongodb = context.services.get("mongodb-atlas");
  const users = mongodb.db('data').collection('users');

  const { user, time } = authEvent;
  const newUser = {
    user_id: user.id,
    last_login: time,
    full_name: user.data.name,
    first_name: user.data.first_name,
    last_name: user.data.last_name,
    email: user.data.email,
    picture: user.data.picture
  };

  // Upsert so CREATE inserts the document and LOGIN refreshes it.
  return users.updateOne({ user_id: user.id }, { $set: newUser }, { upsert: true });
};
```

During the last Stitchcraft session, I set up this authentication trigger and a database trigger that watched for changes to the full_name field. Check out the recording with the GitHub repo linked in the description.

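The exact database trigger from that session is in the linked repo; as a rough sketch of the shape it takes, a trigger configured for UPDATE operations receives a change event, and the function can react when full_name is among the updated fields. The posts collection and owner_* fields below are assumptions for illustration.

```javascript
// Sketch of a database trigger function reacting to full_name changes.
// Assumes the trigger is configured for UPDATE operations on data.users
// with Full Document enabled; the posts collection and owner_* fields
// are hypothetical.
exports = function(changeEvent) {
  const updated = changeEvent.updateDescription.updatedFields;

  // Only act when full_name actually changed.
  if (!updated.hasOwnProperty('full_name')) return;

  const mongodb = context.services.get('mongodb-atlas');
  const posts = mongodb.db('data').collection('posts');

  // Propagate the new name to any denormalized copies.
  return posts.updateMany(
    { owner_id: changeEvent.fullDocument.user_id },
    { $set: { owner_name: updated.full_name } }
  );
};
```
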
Follow me on Twitch and be notified of future Stitchcraft live coding sessions.

-Aydrian Howard
Developer Advocate, NYC
@aydrianh

Creating your first Stitch app? Start with one of the Stitch tutorials. Want to learn more about MongoDB Stitch? Read the white paper.

September 25, 2018