Jesse Krasnostein


Millions of Users and a Developer-Led Culture: How Blinkist Powers its Berlin Startup on MongoDB Atlas

Not unlike other startups, Blinkist grew its roots in a college dorm. Only, its creators didn’t know it at the time. It took years before the founders decided to build a business on their college study tricks. Blinkist condenses nonfiction books into pithy, but accessible 15-minute summaries which you can read or listen to via its app.

“It all started with four friends,” says Sebastian Schleicher, Director of Engineering at Blinkist. “After leaving university, they found jobs and built lifestyles that kept them fully occupied—but they were pretty frustrated because their packed schedules left them no time for reading and learning new things.”

Rather than resign themselves to a life without learning, they racked their brains as to how they could find a way to satisfy their craving for knowledge. They decided to revive their old study habits from university where they would write up key ideas from material that they’d read and then share it with each other. It didn’t take long for them to realise that they could build a business on this model of creating valuable, easily accessible content to inspire people to keep learning. In 2012, Blinkist was born.

Six years later, the Berlin-based outfit has nearly 100 employees, but instead of writers and editors, they have Tea Masters and Content Ninjas. Blinkist has no formal hierarchical management structure, having replaced bosses with BOS, the Blinkist Operating System. The app has over five million users and, at its foundation, it has MongoDB Atlas, the fully managed service for MongoDB, running on AWS. But it didn’t always.

“In four years, we had a million users and 2,500 books,” says Schleicher. “We’d introduced audiobooks and seen them become the most important delivery channel. We tripled our revenue, doubled our team, moved into a larger, open-plan office, and even got a dog. Things were good.”

Running into trouble with 3rd party MongoDB as a Service

Then came an unwelcome plot twist.
Blinkist had built its service on Compose, a third-party database as a service based on MongoDB. MongoDB had been an obvious choice, as the document model provided Blinkist with the flexibility needed to iterate quickly, but the team was too lean to spend time on infrastructure management.

In 2016, Compose unexpectedly decided to change the architecture of its database, creating major obstacles for Blinkist as they would become locked in to an old version of MongoDB. “They left us alone,” says Schleicher. “They said, ‘Here’s a tool, migrate your data.’ I asked if they’d help. No dice. I offered them money. Not interested, no support. After being a customer for all those years? I said goodbye.”

After years of issues, it became clear last year that Blinkist would need to leave Compose, which meant choosing a new database provider. “We looked at migrating to MySQL, we were that desperate. That would have meant freezing development and concentrating on the move ourselves. On a live service. It was bleak.”

Discovering MongoDB Atlas

By this time, MongoDB’s managed cloud service, Atlas, was well established and seemed to be the logical solution. “We downloaded MongoDB’s free mongomirror service to make the transition,” says Schleicher, “but we hit a brick wall. Compose had locked us into a very old version of the database and who knows what else, and we couldn’t work it out.”

At that point, Schleicher made a call to MongoDB. MongoDB didn’t say, ‘Do it yourself.’ Instead, they sent their own data ninja—or, in more conventional, business-card wording, a principal consulting engineer. “It was the easiest thing in the world,” Schleicher remembers. “In one day, he implemented four feature requests, got the migration done and our databases were in live sync. Such a great experience.”

Now that Blinkist is on Atlas, Schleicher feels like they have a very solid base for the future. “Performance is terrific.
Our mobile app developers accidentally coded in a distributed denial of service attack on our own systems. Every day at midnight, in each time zone, our mobile apps all simultaneously sync. This pushes the request load up from a normal peak of 7,500 requests a minute to 40,000 continuous. That would have slaughtered the old system, with real business impacts — killing sign-ups and user interactions. This time, nobody noticed anything was wrong.”

“Right now it feels like we have a big tech advantage. With MongoDB Atlas and AWS, we’re on the shoulders of people who can scale the world. I know for the foreseeable future I have partners I can really rely on.”
— Sebastian Schleicher, Director of Engineering, Blinkist

Schleicher adds: “We’re building our future through microarchitecture with all the frills. Developers know they don’t have to worry about what’s going on behind the API in MongoDB. It just works. We’re free to look at data analytics and AI—whatever techniques and tools we believe will help us grow—and not spend all our time maintaining a monolithic slab of code.”

With Blinkist’s global ambitions, scaling isn’t just a technical challenge; it tests company culture—no matter how modern—to the limits. MongoDB’s own customer-focused culture, it turns out, is proving as compatible as MongoDB’s data platform. “Talking to MongoDB isn’t like being exposed to relentless sales pressure. It’s cooperative, it’s reassuring. There are lots of good technical people on tap. It’s holistic, no silos, whatever it takes to help us.”

This partnership is helping make Blinkist a great place to be a developer. “A new colleague we hired last year told me we’ve created an island of happiness for engineers. Once you have an understanding of the business needs and vision, you get to drive your own projects. We believe in super transparency. Everyone is empowered.”

“Oh, and did I mention we have a dog?”

Atlas is the easiest and fastest way to get started with MongoDB.
Deploy a free cluster in minutes.

September 25, 2018

New to MongoDB Atlas — Global Clusters Enable Low-Latency Reads and Writes from Anywhere

The ability to replicate data across any number of cloud regions was introduced to MongoDB Atlas, the fully managed service for the database, last fall. This granted Atlas customers two key benefits. For those with geographically distributed applications, this functionality allowed them to leverage local replicas of their data to reduce read latency and provide a fast, responsive customer experience on a global scale. It also meant that an Atlas cluster could be easily configured to failover to another region during cloud infrastructure outages, providing customers with the ability to provision multi-region fault tolerance in just a few clicks.

But what about improving write latency and addressing increasingly demanding regulations, many of which have data residency requirements? In the past, users could address these challenges in a couple of ways. If they wanted to continue using a fully managed MongoDB service, they could deploy separate databases in each region. Unfortunately, this often resulted in added operational and application complexity. They could also build and manage a geographically distributed database deployment themselves and satisfy these requirements using MongoDB’s zone sharding capabilities.

Today we’re excited to introduce Global Clusters to MongoDB Atlas. This new feature makes it possible for anyone to effortlessly deploy and manage a single database that addresses all the aforementioned requirements. Global Clusters allow organizations with distributed applications to geographically partition a fully managed deployment in a few clicks, and control the distribution and placement of their data with sophisticated policies that can be easily generated and changed.

Improving app performance by reducing read and write latency

With Global Clusters, geographically distributed applications can write to (and of course, read from) local partitions of an Atlas deployment called zones.
This new Global Writes capability allows you to associate and direct data to a specific zone, keeping it in close proximity to nearby application instances and end users. In its simplest configuration, an Atlas zone contains a 3-node replica set distributed across the availability zones of its preferred cloud region. This configuration can be adjusted depending on your requirements. For example, you can turn the 3-node replica set into multiple shards to address increases in local write throughput. You can also distribute the secondaries within a zone into other cloud regions to enable fast, responsive read access to that data from anywhere.

The illustration above represents a simple Global Cluster in Atlas with two zones. For simplicity’s sake, we’ve labeled them blue and red. The blue zone uses a cloud region in Virginia as the preferred region, while the red zone uses one in London. Local application instances will write to and read from the MongoDB primaries located in the respective cloud regions, ensuring low-latency read and write access. Each zone also features a read-only replica of its data located in the cloud region of the other one. This ensures that users in North America will have fast, responsive read access to data generated in Europe, and vice versa.

Satisfying data residency for regulatory requirements

By allowing developers to easily direct the movement of data at the document level, Global Clusters provide a foundational building block that helps organizations achieve compliance with regulations containing data residency requirements. Data is associated with a zone and pinned to that zone unless otherwise configured. The illustration below represents an Atlas Global Cluster with 3 zones — blue, red, and orange. The configuration of the blue and red zones is very similar to what we already covered.
Local application instances read and write to nearby primaries located in the preferred regions — Virginia and London — and each zone includes a read-only replica in the preferred cloud region of every other zone for serving fast, global reads. What’s different is the orange zone, which serves Germany. Unlike data generated in North America and the UK, data generated in and around Germany is not replicated globally; instead, it remains pinned to the preferred cloud region located in Frankfurt.

Deploying your first Global Cluster

Now let’s walk through how easy it is to set up a Global Cluster with MongoDB Atlas. In the Atlas UI, when you go to create a cluster, you’ll notice a new accordion labelled Global Cluster Configuration. If you click into this and enable “Global Writes”, you’ll find two easy-to-use and customizable templates: Global Performance provides reasonable read and write latency to the majority of the global population, while Excellent Global Performance provides excellent read and write latency to the majority of the global population. Both options are available across AWS, Google Cloud Platform, and Microsoft Azure. You can also configure your own zones.

Let’s walk through the setup of a Global Cluster using the Global Performance template on AWS. After selecting the Global Performance template, you’ll see that the Americas are mapped to the North Virginia region, EMEA is mapped to Frankfurt, and APAC is mapped to Singapore. As your business requirements change over time, you are able to switch to the Excellent Global Performance template or fully customize your existing template.

Customizing your Global Cluster

Say you wanted to move your EMEA zone from Frankfurt to London. You can do so in just a few clicks. If you scroll down in the Create Cluster dialog, you’ll see the Zone configuration component (pictured below). Select the zone you want to edit and simply update the preferred cloud region.
Once you’re happy with the configuration, you can verify your changes in the latency map and then proceed to deploy the cluster. After your Global Cluster has been deployed, you’ll find that it looks just like any other Atlas cluster. If you click into the connect experience to find your connection string, you’ll find a simple and concise connection string that you can use in all of your geographically distributed application instances.

Configuring data for a Global Cluster

Now that your Global Cluster is deployed, let's have a look at the Atlas Data Explorer, where you can create a new database and collection. Atlas will walk you through this process, including the creation of an appropriate compound shard key — the mechanism used to determine how documents are mapped to different zones. This shard key must contain the location field. The second field should be a well-distributed identifier, such as userId. Full details on key selection can be found in the MongoDB Atlas docs.

To help show what documents might look like in your database, we’ve added a few sample documents to a collection in the Data Explorer. As you can see above, we’ve included a field called location containing an ISO 3166-1 alpha-2 country code ("US", "DE", "IN") or a supported ISO 3166-2 subdivision code ("US-DC", "DE-BE", "IN-DL"), as well as a field called userId, which acts as our well-distributed identifier. This ensures that location affinity is baked into each document. In the background, MongoDB Atlas will have automatically placed each of these documents in their respective zones. The document corresponding to Anna Bell will live in North Virginia and the document corresponding to John Doe will live in Singapore. Assuming we have application instances deployed in Singapore and North Virginia, both will use the same MongoDB connection string to connect to the cluster.
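To make the document shape concrete, here is a sketch of what the Anna Bell and John Doe documents might look like, together with a toy version of the country-to-zone mapping. All names, IDs, and the mapping table are illustrative; in a real Global Cluster, Atlas performs this routing for you based on the shard key.

```javascript
// Hypothetical documents for a Global Cluster collection. The compound
// shard key is { location: 1, userId: 1 }: `location` holds an ISO 3166
// code that Atlas maps to a zone, and `userId` is a well-distributed
// identifier that spreads writes within the zone.
const annaBell = { location: "US", userId: "a1b2c3", name: "Anna Bell" };
const johnDoe  = { location: "AU", userId: "d4e5f6", name: "John Doe" };

// A rough sketch of the mapping Atlas performs behind the scenes,
// using the zones from the Global Performance template.
const zoneForCountry = { US: "Americas", DE: "EMEA", AU: "APAC", IN: "APAC" };

function zoneFor(doc) {
  // Subdivision codes such as "US-DC" map by their country prefix.
  return zoneForCountry[doc.location.split("-")[0]];
}

console.log(zoneFor(annaBell)); // "Americas" -> North Virginia
console.log(zoneFor(johnDoe));  // "APAC"     -> Singapore
```

Because location affinity lives inside each document, the same `insertOne` call works from any application instance; the cluster, not the app, decides where the document is stored.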
When Anna Bell connects to our application from the US, she will automatically be working with data kept in close proximity to her. Similarly, when John Doe connects from Australia, he will be writing to the Singapore region.

Adding a zone to your Global Cluster

Now let’s say that you start to see massive adoption of your application in India and you want to improve the performance for local users. At any time, you can return to your cluster configuration, click “Add a Zone”, and select Mumbai as the preferred cloud region for the new zone. The global latency map will update, showing us the new zone and an updated view of the countries that map to it. When we deploy the changes, the documents that are tagged with relevant ISO country codes will gracefully be transferred across to the new zone, without downtime.

Scaling write throughput in a single zone

As we mentioned earlier in this post, it’s possible to scale out a single zone to address increases in local write throughput. Simply scroll to the “Zone Configuration”, click on “Additional Options” and increase the number of shards. By adding a second shard to a zone, you are able to double your write throughput.

Low-latency reads of data originating from other zones

We also referenced the ability to distribute read-only replicas of data from a zone into the preferred cloud regions of other zones, providing users with low-latency read access to data originating from other regions. This is easy to configure in MongoDB Atlas. In “Zone Configuration”, select “Add secondary and read-only regions”. Under “Deploy read-only replicas”, select “Add a node” and choose the region where you’d like your read-only replica to live. For global clusters, Atlas provides a shortcut to creating read-only replicas of each zone in every other zone. Under “Zone configuration summary”, simply select the “Configure local reads in every zone” button.
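The zone behaviors covered in this walkthrough — local writes to a preferred region, optional read-only copies in other zones, and pinned zones that never replicate globally — can be sketched as a small model. This is not an Atlas API payload; the region names and flags are illustrative, just a way to reason about where each zone's data can be read.

```javascript
// Illustrative sketch of a three-zone Global Cluster: blue (Virginia),
// red (London), and orange (Frankfurt, pinned for data residency).
const zones = {
  blue:   { preferredRegion: "us-east-1",    replicateGlobally: true  },
  red:    { preferredRegion: "eu-west-2",    replicateGlobally: true  },
  orange: { preferredRegion: "eu-central-1", replicateGlobally: false }, // pinned
};

// A zone's data is readable in a region if that region is the zone's
// preferred region, or if the zone distributes read-only copies globally.
function readableIn(zone, region) {
  const z = zones[zone];
  return z.preferredRegion === region || z.replicateGlobally;
}

console.log(readableIn("blue", "eu-west-2"));      // true  (global read-only copy)
console.log(readableIn("orange", "us-east-1"));    // false (pinned to Frankfurt)
console.log(readableIn("orange", "eu-central-1")); // true  (its own preferred region)
```

Writes always land on the primary in a zone's preferred region, which is what keeps the orange zone's data resident in Frankfurt even while the blue and red zones serve reads worldwide.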
MongoDB Atlas Global Clusters are very powerful, making it possible for practically any developer or organization to easily deploy, manage, and scale a distributed database layer optimized for low-latency reads and writes anywhere in the world. We're very excited to see what you build with this new functionality. Global clusters are available today on Amazon Web Services, Google Cloud Platform, and Microsoft Azure for clusters M30 and larger. New to MongoDB Atlas? Deploy a free database cluster in minutes.

July 4, 2018

New to MongoDB Atlas — Fully Managed Connector for Business Intelligence

Driven by emerging requirements for self-service analytics, faster discovery, predictions based on real-time operational data, and the need to integrate rich and streaming data sets, business intelligence (BI) and analytics platforms are one of the fastest growing software markets. Today, it’s easier than ever for MongoDB Atlas customers to make use of the MongoDB Connector for BI. The new BI Connector for Atlas is a fully managed, turnkey service that allows you to use your automated cloud databases as data sources for popular SQL-based BI platforms, giving you faster time to insight on rich, multi-structured data. The BI Connector for Atlas removes the need for additional BI middleware and custom ETL jobs, and relies on the underlying Atlas platform to automate potentially time-consuming administration tasks such as setup, authentication, maintaining availability, and ongoing management. Customers can use the BI Connector for Atlas along with the recently released MongoDB ODBC Driver to provide a SQL interface to fully managed MongoDB databases. This allows data scientists and business analysts responsible for analytics and business reporting on MongoDB data to easily connect to and use popular visualization and dashboarding tools such as Excel, Tableau, MicroStrategy, Microsoft Power BI, and Qlik. When deploying the BI Connector, Atlas designates a secondary in your managed cluster as the data source for analysis, minimizing the likelihood an analytical workload could impact performance on your operational data store. The BI Connector for Atlas also utilizes MongoDB’s aggregation pipeline to push more work to the database and reduce the amount of data that needs to be moved and computed in the BI layer, helping deliver insights faster. The BI Connector for Atlas is currently available for M10 Atlas clusters and higher. New to MongoDB Atlas? Deploy a free database cluster in minutes.
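Since the BI Connector presents a SQL interface, BI tools reach it through standard database drivers such as the MongoDB ODBC Driver mentioned above. As a rough sketch, a DSN entry might look like the following; the hostname, database, and credentials are placeholders, and exact key names can vary by driver version, so copy the real values from the Atlas connect dialog after enabling the BI Connector.

```ini
; Hypothetical odbc.ini entry for connecting a BI tool to the
; BI Connector for Atlas. All values below are placeholders.
[atlas-bi-connector]
Driver   = MongoDB ODBC Driver
SERVER   = cluster0-biconnector.example.mongodb.net
PORT     = 27015
DATABASE = sample
UID      = atlasUser
PWD      = atlasPassword
```

A BI tool such as Tableau or Excel can then reference this DSN like an ordinary SQL data source, while the connector translates queries into MongoDB aggregation pipelines behind the scenes.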

June 21, 2018

How to Integrate MongoDB Atlas and Segment using MongoDB Stitch

It can be quite difficult tying together multiple systems, APIs, and third-party services. Recently, we faced this exact problem in-house, when we wanted to get data from Segment into MongoDB so we could take advantage of MongoDB’s native analytics capabilities and rich query language. Using some clever tools we were able to make this happen in under an hour – the first time around. While this post is detailed, the actual implementation should only take around 20 minutes. I’ll start off by introducing our cast of characters (what tools we used to do this) and then we will walk through how we went about it.

The Characters

To collect data from a variety of sources including mobile, web, cloud apps, and servers, developers have been turning to Segment since 2011. Segment consolidates all the events generated by multiple data sources into a single clickstream. You can then route the data to over 200 integrations, all at the click of a button. Companies like DigitalOcean, New Relic, InVision, and Instacart all rely on Segment for different parts of their growth strategies.

To store the data generated by Segment, we turn to MongoDB Atlas – MongoDB’s database as a service. Atlas offers the best of MongoDB:

- A straightforward query language that makes it easy to work with your data
- Native replication and sharding to ensure data can live where it needs to
- A flexible data model that allows you to easily ingest data from a variety of sources without needing to know precisely how the data will be structured (its shape)

All this is wrapped up in a fully managed service, engineered and run by the same team that builds the database, which means that as a developer you actually can have your cake and eat it too.

The final character is MongoDB Stitch, MongoDB’s serverless platform. Stitch streamlines application development and deployment with simple, secure access to data and services – getting your apps to market faster while reducing operational costs.
Stitch allows us to implement server-side logic that connects third-party tools like Segment with MongoDB, while ensuring everything from security to performance is optimized.

Order of Operations

We are going to go through the following steps. If you have completed any of these already, feel free to just cherry-pick the relevant items you need assistance with:

1. Setting up a Segment workspace
2. Adding Segment’s JavaScript library to your frontend application – I’ve also built a ridiculously simple HTML page that you can use for testing
3. Sending an event to Segment when a user clicks a button
4. Signing up for MongoDB Atlas
5. Creating a cluster, so your data has somewhere to live
6. Creating a MongoDB Stitch app that accepts data from Segment and saves it to your MongoDB Atlas cluster

While this blog focuses on integrating Segment with MongoDB, the process we outline below will work with other APIs and web services. Join the community Slack and ask questions if you are trying to follow along with a different service.

Each time Segment sees new data, a webhook fires an HTTP POST request to Stitch. A Stitch function then handles the authentication of the request and, without performing any data manipulation, saves the body of the request directly to the database – ready for further analysis.

Setting up a Workspace in Segment

Head over to Segment and sign up for an account. Once complete, Segment will automatically create a Workspace for you. Workspaces allow you to collaborate with team members, control permissions, and share data sources across your whole team. Click through to the Workspace that you've just created. To start collecting data in your Workspace, we need to add a source. In this case, I’m going to collect data from a website, so I’ll select that option, and on the next screen, Segment will have added a JavaScript source to my workspace. Any data that comes from our website will be attributed to this source.
There is a blue toggle link I can click within the source that will give me the code I need to add to my website so it can send data to Segment. Take note of this as we will need it shortly.

Adding Segment to your Website

I mentioned a simple sample page I had created in case you want to test this implementation outside of other code you had been working on. You can grab it from this GitHub repo. In my sample page, you’ll see I’ve copied and pasted the Segment code and dropped it in between my page’s <head> tags. You’ll need to do the equivalent with whatever code or language you are working in. If you open that page in a browser, it should automatically start sending data to Segment. The easiest way to see this is by opening Segment in another window and clicking through to the debugger. Clicking on the debugger button in the Segment UI takes you to a live stream of events sent by your application.

Customizing the events you send to Segment

The Segment library enables you to get as granular as you like with the data you send from your application. As your application grows, you’ll likely want to expand the scope of what you track. Best practice requires you to put some thought into how you name events and what data you send. Otherwise, different developers will name events differently and will send them at different times – read this post for more on the topic.

To get us started, I’m going to assume that we want to track every time someone clicks a favorite button on a web page. We are going to use some simple JavaScript to call Segment’s analytics tracking code and send an event called a “track” to the Segment API. That way, each time someone clicks our favorite button, we'll know about it. You’ll see at the bottom of my web page that there is a jQuery function attached to the .btn class. Let’s add the following after the alert() function.
analytics.track("Favorited", {
  itemId: itemId,
  itemName: itemName
});

Now, refresh the page in your browser and click on one of the favorite buttons. You should see an alert box come up. If you head over to your debugger window in Segment, you’ll observe the track event streaming in as well. Pretty cool, right?

You probably noticed that the analytics code above is storing the data you want to send in a JSON document. You can add fields with more specific information anytime you like. Traditionally, this data would get sent to some sort of tabular data store, like MySQL or PostgreSQL, but then each time new information was added you would have to perform a migration to add a new column to your table. On top of that, you would likely have to update the object-relational mapping code that's responsible for saving the event in your database. MongoDB is a flexible data store, which means there are no migrations or translations needed; we will store the data in the exact form you send it in.

Getting Started with MongoDB Atlas and Stitch

As mentioned, we’ll be using two different services from MongoDB. The first, MongoDB Atlas, is a database as a service. It’s where all the data generated by Segment will live, long-term. The second, MongoDB Stitch, is going to play the part of our backend. We are going to use Stitch to set up an endpoint where Segment can send data; once data is received, Stitch validates that the request was sent from Segment, and then coordinates all the logic to save this data into MongoDB Atlas for later analysis and other activities.

First Time Using MongoDB Atlas?

Click here to set up an account in MongoDB Atlas. Once you’ve created an account, we are going to use Atlas’s Cluster Builder to set up our first cluster (every MongoDB Atlas deployment is made up of multiple nodes that help with high availability, that’s why we call it a cluster). For this demonstration, we can get away with an M0 instance – it's free forever and great for sandboxing.
It's not on dedicated infrastructure, so for any production workloads, it's worth investigating other instance sizes.

When the Cluster Builder appears on screen, the default cloud provider is AWS, and the selected region is North Virginia. Leave these as is. Scroll down and click on the Cluster Tier section, and this will expand to show our different sizing options. Select M0 at the top of the list. You can also customize your cluster’s name by clicking on the Cluster Name section. Once complete, click Create Cluster. It takes anywhere from 7-10 minutes to set up your cluster, so maybe go grab a drink, stretch your legs and come back… When you’re ready, read on.

Creating a Stitch Application

While the cluster is building, on the left-hand menu, click Stitch Apps. You will be taken to the Stitch applications page, from where you can click Create New Application. Give your application a name – in this case, I call it “SegmentIntegration” – and link it to the correct cluster. Click Create. Once the application is ready, you’ll be taken to the Stitch welcome page. In this case, we can leave anonymous authentication off. We do need to enable access to a MongoDB collection to store our data from Segment. For the database name I use “segment”, and for the collection, I use “events”. Click Add Collection.

Next, we will need to add a service. In this case, we will be manually configuring an HTTP service that can communicate over the web with Segment’s service. Scroll down and click Add Service. You’ll jump one page and should see a big sign saying, “This application has no services”… not for long. Click Add a Service… again. From the options now visible, select HTTP and then give the service a name. I’ll use “SegmentHTTP”. Click Add Service. Next, we need to add an Incoming Webhook. A Webhook is an HTTP endpoint that will continuously listen for incoming calls from Segment, and when called, it will trigger a function in Stitch to run.
Click Add Incoming Webhook. Leave the default name as is and change the following fields:

- Turn on Respond with Result, as this will return the result of our insert operation
- Change Request Validation to “Require Secret as Query Param”
- Add a secret code to the last field on the page

Important Note: We will refer to this as our “public secret” as it is NOT protected from the outside world; it’s more of a simple validation that Stitch can use before running the function we will create. Shortly, we will also define a “private secret” that will not be visible outside of Stitch and Segment. Finally, click “Save”.

Define Request Handling Logic with Functions in Stitch

We define custom behavior in Stitch using functions – simple JavaScript (ES6) that can be used to implement logic and work with all the different services integrated with Stitch. Thankfully, we don’t need to do too much work here. Stitch already has the basics set up for us. We need to define logic that does the following things:

- Grabs the request signature from the HTTP headers
- Uses the signature to validate the request’s authenticity (i.e., that it came from Segment)
- Writes the request to our collection in MongoDB Atlas

Getting an HTTP Header and Generating an HMAC Signature

Add the following to line 8, after the closing curly brace }.

const signature = payload.headers['X-Signature'];

And then use Stitch’s built-in Crypto library to generate a digest that we will compare with the signature.

const digest = utils.crypto.hmac(payload.body.text(), context.values.get("segment_shared_secret"), "sha1", "hex");

A lot is happening here, so I’ll step through each part and explain. Segment signs requests with a signature that is a combination of the HTTP body and a shared secret. We can attempt to generate an identical signature using the utils.crypto.hmac function if we know the body of the request, the shared secret, the hash function Segment uses to create its signatures, and the output format.
If we can replicate what is contained within the X-Signature header from Segment, we will consider this to be an authenticated request.

Note: This will be using a private secret, not the public secret we defined in the Settings page when we created the webhook. This secret should never be publicly visible. Stitch allows us to define values that we can use for storing variables like API keys and secrets. We will do this shortly.

Validating that the Request is Authentic and Writing to MongoDB Atlas

To validate the request, we simply need to compare the digest and the signature. If they’re equivalent, then we will write to the database. Add the following code directly after we generate the digest.

if (digest == signature) {
  // Request is valid
} else {
  // Request is invalid
  console.log("Request is invalid");
}

Finally, we will augment the if statement with the appropriate behavior needed to save our data. On the first line of the if statement, we will get our “mongodb-atlas” service. Add the following code:

let mongodb = context.services.get("mongodb-atlas");

Next, we will get our database collection so that we can write data to it.

let events = mongodb.db("segment").collection("events");

And finally, we write the data.

events.insertOne(body);

Click the Save button on the top left-hand side of the code editor. At the end of this, our entire function should look something like this:

exports = function(payload) {
  var queryArg = payload.query.arg || '';
  var body = {};

  if (payload.body) {
    body = JSON.parse(payload.body.text());
  }

  // Get the X-Signature header and create a digest for comparison
  const signature = payload.headers['X-Signature'];
  const digest = utils.crypto.hmac(payload.body.text(), context.values.get("segment_shared_secret"), "sha1", "hex");

  // Only write the data if the digest matches Segment's X-Signature!
  if (digest == signature) {
    let mongodb = context.services.get("mongodb-atlas");
    // Set the collection up to write data
    let events = mongodb.db("segment").collection("events");
    // Write the data
    events.insertOne(body);
  } else {
    console.log("Digest didn't match");
  }

  return queryArg + ' ' + body.msg;
};

Defining Rules for a MongoDB Atlas Collection

Next, we will need to update the rules that allow Stitch to write to our database collection. To do this, in the left-hand menu, click on “mongodb-atlas”. Select the collection we created earlier, called “segment.events”. This will display the Field Rules for our Top-Level Document. We can use these rules to define what conditions must exist for our Stitch function to be able to Read or Write to the collection. We will leave the read rules as is for now, as we will not be reading directly from our Stitch application. We will, however, change the write rule to “evaluate” so our function can write to the database. Change the contents of the “Write” box:

- Specify an empty JSON document {} as the write rule at the document level.
- Set Allow All Other Fields to Enabled, if it is not already set.

Click Save at the top of the editor.

Adding a Secret Value in MongoDB Stitch

As is common practice, API keys and passwords are stored as variables, meaning they are never committed to a code repo – visibility is reduced. Stitch allows us to create private variables (values) that may be accessed only by incoming webhooks, rules, and named functions. We do this by clicking Values on the Stitch menu, clicking Create New Value, and giving our value a name – in this case segment_shared_secret (we will refer to this as our private secret). We enter the contents in the large text box. Make sure to click Save once you’re done.

Getting Our Webhook URL

To copy the webhook URL across to Segment from Stitch, navigate using the Control menu: Services > SegmentHTTP > webhook0 > Settings (at the top of the page). Now copy the “Webhook URL”.
Adding the Webhook URL to Segment

Head over to Segment and log in to your workspace. Under Destinations, click Add Destination. Search for Webhook in the destinations catalog and click Webhooks. On the next page, click Configure Webhooks, then select any sources from which you want to send data and click Confirm Source. Next, we land on the destination settings page, where we need to configure our connection settings. Click the box that says Webhooks (max 5), copy your webhook URL from Stitch, and append your public secret to the end of it using the following syntax:

?secret=<YOUR_PUBLIC_SECRET_HERE>

Click Save. We also need to tell Segment what our private secret is so it can create a signature that we can verify within Stitch. Click the Shared Secret field, enter the same value you used for segment_shared_secret, and click Save. Finally, activate the webhook by clicking the switch at the top of the Destination Settings page.

Generate Events, and See Your Data in MongoDB

Now all we need to do is use our test HTML page to generate a few events that get sent to Segment; we can use Segment's debugger to ensure they are coming in. Once we see them flowing, they will also be going across to MongoDB Stitch, which writes the events to MongoDB Atlas. We'll take a quick look using Compass to ensure our data is visible. Once we connect to our cluster, we should see a database called "segment". Click on segment and you'll see our collection called "events". Click into it and you'll see a sample of the data generated by our frontend!

The End

Thanks for reading. Hopefully you found this helpful. If you're building new things with MongoDB Stitch, we'd love to hear about it.
Join the community Slack and ask questions in the #stitch channel!

May 17, 2018

New to MongoDB Atlas — Live Migrate Sharded Clusters and Deployments Running MongoDB 2.6

Live migration in MongoDB Atlas enables users to import data from MongoDB deployments running in other environments and cut over to a fully managed database service, giving you industry-best-practice security by default, advanced features to streamline operations and performance optimization, and the latest versions of MongoDB. Today, we're introducing two enhancements to MongoDB Atlas live migration that make it easier than ever for users to take advantage of the official cloud database service for MongoDB with minimal impact to their applications.

Previously, live migration could only be performed on a replica set running MongoDB version 3.0 or above. MongoDB Atlas now supports live migrations of replica sets running MongoDB 2.6, making it easier for users running older versions to transition to a fully managed service and a more recent version of the database software. Live migration now also supports sharded clusters, meaning that some of the world's largest MongoDB workloads can be moved to MongoDB Atlas with less effort and minimal impact to production applications.

Live migrate from MongoDB 2.6 to 3.2+

Upgrading to a new database version may seem like routine work to some but, as battle-hardened IT operators know, it can have complexities and require plenty of strategy and foresight. With all the applications and end users you have, the prospect of upgrading to a new release can be a major undertaking requiring significant planning. While some of our Enterprise and Community customers love to upgrade to the latest release as soon as possible to get new features and performance improvements, others take a more measured approach. To make upgrading easier, we are excited to announce that we have extended database version support for the live migration tool in MongoDB Atlas.
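The support matrix described above (a 2.6 or newer replica set as the source, a 3.2+ Atlas cluster as the target) can be expressed as a small eligibility check. This is a hypothetical helper for illustration, not an Atlas API:

```javascript
// Parse "major.minor.patch" into numeric [major, minor].
function parseVersion(v) {
  const [major, minor] = v.split('.').map(Number);
  return [major, minor];
}

// True if version v is at least major.minor.
function atLeast(v, major, minor) {
  const [a, b] = parseVersion(v);
  return a > major || (a === major && b >= minor);
}

// Replica set live migration: source must be 2.6+, target must be 3.2+.
function replicaSetEligible(sourceVersion, targetVersion) {
  return atLeast(sourceVersion, 2, 6) && atLeast(targetVersion, 3, 2);
}

console.log(replicaSetEligible('2.6.12', '3.4.14')); // true
console.log(replicaSetEligible('2.4.9', '3.2.19'));  // false
```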
MongoDB users running older versions of the database can now update to the latest versions and migrate to the best way to run MongoDB in the cloud, all at the same time. Using live migration, you can migrate from any MongoDB 2.6 replica set to a MongoDB 3.2+ cluster on MongoDB Atlas. This requires no backend configuration, no downtime, and no upgrade scripting. Once our migration service is able to communicate with your database, it will do all the heavy lifting. The migration service works by:

- Performing an initial sync between your source database and a target database hosted in MongoDB Atlas
- Syncing live data between your source database and the target database by tailing the oplog
- Notifying you when it's time to cut over to the MongoDB Atlas cluster

Given that you're upgrading a critical part of your application, you do need to be wary of how your application's compatibility with the database might change. For that, we recommend including the following stages in your upgrade plan:

- Upgrade your application to use the latest MongoDB drivers, and make any necessary code changes
- Create a staging environment on MongoDB Atlas
- Use the live migration tool to import your data from your existing MongoDB replica set
- Deploy a staging version of your updated application and connect it to your newly created MongoDB Atlas staging environment
- Perform thorough functional and performance tests to ensure behavior is as expected
- Re-use the live migration tool to import your production data when ready, and then perform the hard cutover in database and application versions

(Table: compatibility between source and destination cluster versions.)

Live migrate sharded clusters

Until today, migrating a sharded MongoDB deployment with minimal downtime has been difficult. The live migration tool now makes this possible for customers looking to move their data layer into MongoDB Atlas.
When performing a live migration on a sharded cluster, we recommend that, in addition to following the process listed above, you also consider the following:

- Our live migration service will need access to all your shards and config servers, in addition to your mongos servers
- You can only migrate across like database versions, e.g. 3.2 to 3.2, 3.4 to 3.4, etc.
- You must migrate from and to the same number of shards

For full details on sharded live migrations, click here.

Ready to migrate to MongoDB Atlas?
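As a recap, the sharded-cluster constraints listed above can be captured in a small pre-flight check. The function name and input shape are hypothetical, purely for illustration of the stated rules:

```javascript
// Pre-flight check for sharded live migration: source and target must run
// the same major.minor database version and have the same number of shards.
function canLiveMigrate(source, target) {
  const minorVersion = v => v.split('.').slice(0, 2).join('.');
  if (minorVersion(source.version) !== minorVersion(target.version)) {
    return { ok: false, reason: 'versions must match, e.g. 3.4 to 3.4' };
  }
  if (source.shards !== target.shards) {
    return { ok: false, reason: 'shard counts must match' };
  }
  return { ok: true };
}

console.log(canLiveMigrate({ version: '3.4.9', shards: 3 },
                           { version: '3.4.14', shards: 3 }).ok); // true
console.log(canLiveMigrate({ version: '3.2.19', shards: 3 },
                           { version: '3.4.14', shards: 3 }).ok); // false
```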

March 5, 2018