On November 27, all 10gen-supported drivers were updated with new error-checking and reporting defaults. Each driver now includes a MongoClient connection class to handle error checking. The same day also saw a server release with fixes for 2.2.
MongoQP: MongoDB Slow Query Profiler
Twice a year, 10gen’s Drivers and Innovations team gathers for a face-to-face meeting to work together and set goals for the upcoming six months. This year the team broke up into groups for an evening hackathon. MongoQP, a query profiler, was one of the hacks presented by Jeremy Mikola, PHP Evangelist at 10gen.

Logging slow queries is essential for any database application, and MongoDB makes doing so relatively painless with its database profiler. Unfortunately, making sense of the system.profile collection and tying its contents back to your application requires a bit more effort. The heart of mongoqp (Mongo Query Profiler) is a bit of map/reduce JS that aggregates those queries by their BSON skeleton (i.e., keys preserved, but values removed). With queries reduced to their bare structure, any of their statistics can be aggregated, such as average query time, index scans, and counts.

As a fan of Genghis, a single-file MongoDB admin app, I originally intended to contribute a new UI with the profiler results, but one night was not enough time to wrap my head around Backbone.js and develop the query aggregation. Instead, I whipped up a quick frontend using the Silex PHP micro-framework. But with the hack day deadline no longer looming, there should be plenty of time to get this functionality ported over to Genghis. Additionally, the map/reduce JS may also show up in Tyler Brock’s mongo-hacker shell enhancement package.

While presenting mongoqp to my co-workers, I also learned about Dan Crosta’s professor, which already provides many of the features I hoped to implement, such as incremental data collection. I think there is still a benefit to developing the JS innards of mongoqp and porting its functionality to other projects, but I would definitely encourage you to check out professor if you’d like a stand-alone query profile viewer. Contributions are welcome through GitHub.
COSMOS SQL Migration to MongoDB Atlas
Azure Cosmos DB is Microsoft's proprietary globally distributed, multi-model database service. Cosmos DB supports a SQL interface as one of its models, in addition to the Cosmos DB API for MongoDB. Even customers on the SQL interface choose Cosmos DB for its document model and the convenience of working with SQL. We have seen customers struggle with scalability issues and costs on Cosmos DB and want to move to MongoDB Atlas.

Migrating an application from Cosmos DB SQL to MongoDB Atlas involves both application refactoring and data migration from Cosmos DB to MongoDB. The current tool set for migrating data from Cosmos SQL to MongoDB Atlas is fairly limited. While the Azure Data Migration Tool can be used for a one-time export, it cannot satisfy the zero-downtime requirement customers frequently have: all writes to the source Cosmos SQL database must be stopped before the data migration can be performed. This puts a lot of pressure on the customer in terms of downtime requirements and planning out the migration.

PeerIslands has built a Cosmos SQL migrator tool that addresses these concerns. The tool provides a way to perform a Cosmos SQL migration with near-zero downtime. The architecture of the tool is explained below.

Initial Snapshot

The tool uses the native Data Migration Tool to export data as JSON files from the Azure Cosmos DB SQL API. The Data Migration Tool is an open-source solution that imports and exports data to and from Azure Cosmos DB. The exported JSON is then imported into MongoDB Atlas using mongoimport.

Figure 1: Initial snapshot processing stages.

Change Data Capture

The combination of the above tools completes the initial snapshot. But what happens to documents that are updated or newly inserted during the migration? Just prior to starting the initial snapshot, the migration tool starts the change capture process.
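The import half of the snapshot step might look like the following mongoimport invocation, assuming the Data Migration Tool has already exported a container as a JSON array. The URI, database, collection, and file names here are placeholders, not the migrator tool's actual configuration.

```shell
# Load a JSON-array export from the Azure Data Migration Tool into Atlas.
# Connection string and names below are illustrative placeholders.
mongoimport \
  --uri "mongodb+srv://user:pass@cluster0.example.mongodb.net/targetdb" \
  --collection orders \
  --file cosmos-export.json \
  --jsonArray
```

One such import would typically be run per exported container, mapping each Cosmos DB container to an Atlas collection.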
The migration tool listens to the ongoing changes in Cosmos DB using the Kafka source connector provided by Azure and pushes the changes to a Kafka topic. Optionally, KSQL can be used to perform any transformation required. Once the changes are in Kafka, the migration tool uses the Atlas sink connector to push the ongoing messages to the Atlas cluster. The diagram below depicts the flow of change stream messages from Cosmos SQL to MongoDB.

Figure 2: The flow of change stream messages from Cosmos SQL to MongoDB

The Cosmos SQL migration tool provides a GUI-based point-and-click interface that brings together the above capabilities for handling the entire migration process. Because the tool is capable of change data capture, it provides a lot of flexibility for migrating your data with near-zero downtime.

Figure 3: Cosmos SQL migration tool dashboard

In addition to data migration, PeerIslands can help with the complete application refactoring required for migrating off the Cosmos SQL interface. Reach out to email@example.com if you need to migrate from Cosmos SQL to MongoDB Atlas.
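As a rough sketch of the two ends of this pipeline, the Kafka Connect configuration might resemble the following. The connector classes are from the public Azure Cosmos DB and MongoDB Kafka connectors; all endpoints, keys, database, collection, and topic names are placeholders, and in practice the source and sink would be configured as separate connectors.

```
# Source connector: stream Cosmos DB changes into a Kafka topic
# (endpoint, key, and names are placeholders)
name=cosmosdb-source
connector.class=com.azure.cosmos.kafka.connect.source.CosmosDBSourceConnector
connect.cosmos.connection.endpoint=https://<account>.documents.azure.com:443/
connect.cosmos.master.key=<primary-key>
connect.cosmos.databasename=storedb
connect.cosmos.containers.topicmap=orders#orders-topic

# Sink connector: push the topic's messages into MongoDB Atlas
name=mongodb-atlas-sink
connector.class=com.mongodb.kafka.connect.MongoSinkConnector
topics=orders-topic
connection.uri=mongodb+srv://user:pass@cluster0.example.mongodb.net
database=targetdb
collection=orders
```

An optional KSQL transformation would sit between the two, reading from the source topic and writing the reshaped records to a new topic that the sink consumes.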