MongoDB 3.4.0-rc2 is out and is ready for testing. This is the culmination of the 3.3.x development series.
Fixed in this release candidate:
- SERVER-7306 Mongod as windows service should not claim to be 'started' until it is ready to accept connections
- SERVER-18908 Secondaries unable to keep up with primary under WiredTiger
- SERVER-26420 Make internal clients identify themselves in the isMaster handshake
- SERVER-26514 Create command should take idIndex option
- SERVER-26648 Tolerate bad collection metadata produced on version 2.4 or earlier
- SERVER-26652 Invalid definitions in systemd configuration for debian
- WT-1592 Dump detailed cache information via statistics
- WT-2954 Inserting multi-megabyte values can cause large in-memory pages
As always, please let us know of any issues.
-- The MongoDB Team
Microservices Webinar Recap
Recently, we held a webinar discussing microservices and how two companies, Hudl and UPS i-parcel, leverage MongoDB as the database powering their microservices environments. There have been a number of theoretical and vendor-led discussions about microservices over the past couple of years, so we thought it would be valuable to share real-world insights from companies that have actually adopted microservices, along with answers to questions we received from the audience during the live webinar. Jon Dukulil is the VP of Engineering at Hudl and Yursil Kidwai is the VP of Technology at UPS i-parcel.

How are microservices different from service-oriented architectures (SOAs) utilizing SOAP/REST with an enterprise service bus (ESB)?

Microservices and SOAs are related in that both approaches distribute applications into individual services. Where they differ, though, is in the scope of the problem each addresses. SOAs aim for flexibility at the enterprise IT level, which can be a complex undertaking because SOAs only work when the underlying services do not need to be modified. Microservices, by contrast, represent an architecture for an individual service and aim to facilitate continuous delivery and parallel development of multiple services. The following graphic highlights some of the differences.

One significant difference between SOAs and microservices revolves around the messaging system, which coordinates and synchronizes communication between the different services in an application. Enterprise service buses (ESBs) emerged as a solution for SOAs because of the need for service integration and a central point of coordination. As ESBs grew in popularity, enterprise vendors packaged more and more software and smarts into the middleware, making it difficult to decouple the different services that relied on the ESB for coordination. Microservices keep the messaging middleware focused on sharing data and events, pushing more of the intelligence to the endpoints.
This makes it easier to decouple and separate individual services.

How big should a microservice be?

There are many differing opinions about how large a microservice should be; it really depends on your application's needs. Here is how Hudl and UPS i-parcel approach that question.

Jon Dukulil (Hudl): We determine how big our microservices should be by the amount of work that can be completed by a squad. For us, a squad is a small, completely autonomous team consisting of four functions: product manager, developer, UI designer, and QA. When we are growing headcount we are not thinking of growing larger teams, we are thinking of adding more squads.

![](https://webassets.mongodb.com/_com_assets/cms/Microservices_MongoDB_Blog2-a6l74owk23.png)

Yursil Kidwai (UPS i-parcel): For us, we have defined a microservice as a single verb (e.g. Billing), and we are constantly challenging ourselves on how that verb should be defined. We follow the “two pizza” rule, in which a team should never be larger than what you can feed with two large pizzas. Whatever our “two pizza” team can deliver in one week is what we consider the right size for a microservice.

Why should I decouple databases in a microservices environment? Can you elaborate on this?

One of the core principles behind microservices is strong cohesion (i.e., related code grouped together) and loose coupling (i.e., a change to one service should not require a change to another). With a shared database architecture both of these principles are lost. Consumers are tied to a specific technology choice, as well as to a particular database implementation. Application logic may also be spread among multiple consumers: if a shared piece of information needs to be edited, you might need to change the behavior in multiple places and deploy all of those changes.
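The cohesion and coupling principle can be illustrated with a toy sketch (the service and method names here are hypothetical, invented for illustration): each service owns its own datastore, and other services reach that data only through the service's API, so the database implementation can change without touching any consumer.

```python
class BillingService:
    """Owns its own private datastore; no other service touches it directly."""

    def __init__(self):
        self._db = {}  # stand-in for this service's own database

    def record_charge(self, order_id, amount):
        self._db[order_id] = amount

    def charge_for(self, order_id):
        # The API is the only way other services see billing data, so the
        # storage implementation can change without breaking consumers.
        return self._db.get(order_id)


class ShippingService:
    """Coupled to BillingService's API, not to its database schema."""

    def __init__(self, billing):
        self._billing = billing

    def can_ship(self, order_id):
        # Ship only orders that billing has recorded a charge for.
        return self._billing.charge_for(order_id) is not None
```

If `BillingService` later swaps its in-memory dict for its own MongoDB database, `ShippingService` is unaffected, which is exactly the loose coupling a shared database would break.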
Additionally, in a shared database architecture a catastrophic infrastructure failure has the potential to affect multiple microservices and result in a substantial outage. It is therefore recommended to decouple any shared databases so that each microservice has its own.

Due to the distributed nature of microservices, there are more failure points. Given all these moving parts, how do you deal with failures to ensure you meet your SLAs?

Jon Dukulil (Hudl): For us it’s an important point. Keeping services truly separate, where they share as little as possible, definitely helps. You’ll hear people working with microservices talk about “minimizing the blast radius,” and that’s what I mean by the separation of services: when one service does have a failure, it doesn’t take everything else down with it. Another thing is that when you are building out your microservices architecture, take care with the abstractions that you create. Things that used to be a function call in a monolith are now a network call, so there are many more things that can fail: networks can time out, partitions can occur, and so on. Our developers are trained to think about what happens if we can’t complete the call. For us, it was also important to find a good circuit breaker framework, and we actually wrote our own .NET version of a framework that Netflix built called Hystrix. That has been pretty helpful for isolating points of access between services and stopping failures from cascading.

Yursil Kidwai (UPS i-parcel): One of the main ways we deal with failures and dependencies was the choice to go with MongoDB. The advantage for us is MongoDB’s ability to deploy a single replica set across multiple regions. We make sure our deployment strategy always includes multiple regions to create that high availability infrastructure.
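Jon's point about circuit breakers is worth making concrete. Here is a minimal sketch of the pattern in Python (illustrative only, not Hudl's actual .NET Hystrix port): after a run of consecutive failures the breaker "opens" and fails fast, then allows a trial call once a cooldown has elapsed.

```python
import time


class CircuitBreaker:
    """Minimal circuit breaker: open after N consecutive failures,
    fail fast while open, and allow a trial call after a cooldown."""

    def __init__(self, max_failures=3, reset_timeout=30.0):
        self.max_failures = max_failures
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def call(self, func, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                # Fail fast instead of hammering a broken downstream service.
                raise RuntimeError("circuit open: failing fast")
            # Cooldown elapsed: fall through and allow one trial call.
        try:
            result = func(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()  # trip the breaker
            raise
        else:
            self.failures = 0
            self.opened_at = None  # close the circuit on success
            return result
```

The fail-fast behavior is what stops a single slow or dead dependency from cascading: callers get an immediate error they can handle, rather than tying up threads waiting on timeouts.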
Our goal is to always be up, and the ability of MongoDB’s replica sets to recover from failures very quickly is key to that. Another approach was around monitoring. We built our own monitoring framework, which reports through Datadog, and we have multiple 80-inch TVs displaying dashboards of the health of all our microservices. The dashboards monitor the throughput of each microservice continuously, with alerts configured to notify our ops team if the throughput for a service falls below an acceptable threshold. Finally, it’s important for the team to be accountable. Developers can’t just write code and not worry about it; they own the code from beginning to end. It is therefore important for developers to understand the interdependencies between DevOps, testing, and release in order to properly design a service.

Why did you choose MongoDB and how does it fit in with your architecture?

Jon Dukulil (Hudl): First, from a scaling perspective, we have been really happy with MongoDB’s scalability. We have many small databases and a couple of very large databases. Our smallest database today is serving up just 9MB of data; that is pretty trivial, so we need these small databases to run on cost-effective hardware. Our largest database is orders of magnitude larger and is spread over 8 shards. The hardware needs of those databases are very different, but both run on MongoDB. Fast failovers are another big benefit for us. Failover is fully automated and really fast, on the order of 1-5 seconds for us, and more importantly it is really reliable. We’ve never had an issue where a failover hasn’t gone well. Lastly, since MongoDB has a dynamic schema, for us that means the code is the schema. If I’m working on a new feature and I have a property that last week was a string, but this week I want it to be an array of strings, I update my code and I’m ready to go. There isn’t much more to it than that.
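Jon's string-to-array example can be made concrete with a small sketch (the field names here are illustrative, not Hudl's actual schema): because documents in the same collection can have different shapes, reading code can normalize the field instead of requiring a migration.

```python
# Two generations of the same document can live side by side in one
# collection: last week `tags` was a string, this week it is an array.
old_doc = {"_id": 1, "title": "Week 1 highlights", "tags": "football"}
new_doc = {"_id": 2, "title": "Week 2 highlights", "tags": ["football", "playoffs"]}

def tags_of(doc):
    """Read `tags` regardless of shape, so old documents need no migration."""
    tags = doc.get("tags", [])
    return [tags] if isinstance(tags, str) else list(tags)

print(tags_of(old_doc))  # ['football']
print(tags_of(new_doc))  # ['football', 'playoffs']
```

The normalization lives in one small function, so updating the code really is updating the schema: old documents keep working as-is.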
Yursil Kidwai (UPS i-parcel): In many parts of the world, e-commerce rules governing cross-border transactions are still changing, so our business processes in those areas are constantly being refined. To handle the dynamic environment our business operates in, the ability to change the schema was paramount. For example, one country may require a tax identification number, while another may suddenly decide it needs your passport along with some other classification number. As these changes occur, we really need something behind us that will adapt with us, and MongoDB’s dynamic schema gave us the ability to quickly experiment and respond to our ever-changing environment. We also needed the ability to scale. We process 20M tracking events across 100 vendors daily, and tens of thousands of new parcels enter our system every day. MongoDB’s ability to scale out on commodity hardware and its elastic scaling features allow us to handle any unexpected inflows.

Next Steps

To understand more about the business-level drivers and architectural requirements of microservices, read the Microservices: Evolution of Building Modern Apps whitepaper. For a technical deep dive into microservices and containers, read the Microservices: Containers and Orchestration whitepaper.
10 Exciting Things About MongoDB.local London
After nearly two years of the coronavirus pandemic preventing in-person events, MongoDB is very excited to once again see people face-to-face at MongoDB.local London! This event is designed to help developers grow and will be packed with educational content to teach you how to build data-driven applications without distraction. MongoDB.local London will be run as a hybrid event, featuring both in-person and virtual attendance options. For those unable to attend in person, we will live stream most sessions, and all streamed content will be available on demand for 30 days after the event. In-person attendance is limited, so head over to our registration page and sign up today! MongoDB.local London takes place on November 9, 2021.

There will be something for everyone at .local London. Here are 10 exciting things about our upcoming event:

1. Hear It Here First: The keynote presentation will recap the products released in MongoDB 5.0 and highlight the new features in 5.1. Following the keynote, attendees can pose questions to MongoDB CTO Mark Porter, CPO Sahir Azam, and a larger panel of MongoDB experts.

2. Customer Stories: During these sessions, attendees will hear from MongoDB customers and community members about how they are using the MongoDB data platform to enhance the way they work with data. These sessions will include speakers from Boots, Vodafone, NatWest, and DWP Digital.

3. “Ask Me Anything" Panels: Attendees can have their questions answered and problems solved live by a panel of MongoDB engineers and product experts. Panel topics include Performance & Security, the Aggregation Pipeline, and Schema Design.

4. Technical Sessions: Over the course of the event, there will be 20+ educational technical sessions covering beginner, intermediate, and advanced content.
The information in these sessions has been selected specifically for this audience and will be delivered by the MongoDB experts who build the data platform.

5. MongoDB Product and Feature Demos: The MongoDB product teams will be conducting dozens of demos on everything MongoDB, from Atlas to Ops Manager. This is the perfect opportunity to learn more about MongoDB and how it can work for you.

6. Ask the Experts: Our MongoDB experts will offer free 1:1 technical consulting sessions where attendees can ask any technical questions they have. Only available to in-person attendees.

7. Deep-Dive Tutorials: Learn by doing in long-format, classroom-style sessions on the latest data trends with MongoDB. You will receive 1:1 attention from MongoDB experts while you get hands-on with the data platform. Only available to in-person attendees.

8. Community Café: Come to the Community Café stage for an “up close and personal” with MongoDB CTO Mark Porter, customer interviews, trivia, and so much more!

9. Happy Hour: In-person attendees can grab some food and drinks at the event happy hour. Here’s your chance to engage and network with other attendees.

10. Swag: Is it really a tech event if there isn’t some free swag? Stop by the event booths to get some swag from our MongoDB team members.

Register today to save your spot for the event! Whether you attend in person or virtually, we look forward to having you join us!