By Jesse Davis, Python Engineer at MongoDB
Last week, we published a draft of the Server Discovery And Monitoring Spec for MongoDB drivers. This spec defines how a MongoDB client discovers and monitors a single server, a set of mongoses, or a replica set. How does the client determine what type each server is? How does it keep this information up to date? How does the client find an entire replica set from a seed list, and how does it respond to a stepdown, election, reconfiguration, or network error?
In the past, each MongoDB driver answered these questions a little differently, and mongos differed a little from the drivers. We couldn’t answer questions like, “Once I add a secondary to my replica set, how long does it take for the driver to discover it?” Or, “How does a driver detect when the primary steps down, and how does it react?”
From now on, all drivers answer these questions the same way. Or, where there’s a legitimate reason for them to differ, there are as few differences as possible and each is clearly explained in the spec. Even in cases where several answers seem equally good, drivers agree on one way to do it.
The server discovery and monitoring method is specified in five sections. First, a client is constructed. Second, it begins monitoring the server topology by calling the ismaster command on all servers. (The algorithm for multi-threaded and asynchronous clients is described separately from single-threaded clients.) Third, as ismaster responses are received the client parses them, and fourth, it updates its view of the topology. Finally, the spec describes how drivers update their topology view in response to errors.
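To make steps three and four concrete, here is a minimal sketch in Python of how a client might classify a server from its ismaster response and fold that response into its topology view. The class and function names, and the simplified server-type strings, are my illustrative assumptions, not the spec’s actual data model:

```python
from dataclasses import dataclass, field

# Simplified server types; the names are illustrative.
UNKNOWN, STANDALONE, MONGOS, RS_PRIMARY, RS_SECONDARY = (
    "Unknown", "Standalone", "Mongos", "RSPrimary", "RSSecondary")


def server_type_from_ismaster(response):
    """Classify one server from its ismaster response (simplified)."""
    if not response.get("ok"):
        return UNKNOWN
    if response.get("msg") == "isdbgrid":  # mongos identifies itself this way
        return MONGOS
    if "setName" in response:              # a replica set member
        if response.get("ismaster"):
            return RS_PRIMARY
        if response.get("secondary"):
            return RS_SECONDARY
        return UNKNOWN                     # e.g. arbiter or recovering member
    return STANDALONE


@dataclass
class Topology:
    # Map of "host:port" -> server type, updated as responses arrive.
    servers: dict = field(default_factory=dict)

    def on_ismaster(self, address, response):
        self.servers[address] = server_type_from_ismaster(response)
        # A replica set member's response lists its peers, so the
        # client can discover hosts that were not in the seed list.
        for host in response.get("hosts", []):
            self.servers.setdefault(host, UNKNOWN)
```

Starting from a one-host seed list, a single primary’s response is enough for the client to learn about the rest of the set and begin monitoring those hosts too.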
I’m particularly excited about the unit tests that accompany the spec. We have 37 tests that are specified formally in YAML files, with inputs and expected outcomes for a variety of scenarios. For each driver we’ll write a test runner that feeds the inputs to the driver and verifies the outcome. This ends confusion about what the spec means, or whether all drivers conform to it.
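A test runner along these lines might look like the following Python sketch. The test-case shape (`phases`, `responses`, `outcome`) and the stub client are my illustrative assumptions about how such a runner could be structured, not the actual schema of the spec’s YAML files:

```python
# A hypothetical test case, in the shape a parsed YAML file might take.
test_case = {
    "description": "Discover primary from seed list",
    "phases": [{
        "responses": [
            ["a:27017", {"ok": 1, "setName": "rs", "ismaster": True,
                         "hosts": ["a:27017", "b:27017"]}],
        ],
        "outcome": {"servers": {
            "a:27017": {"type": "RSPrimary"},
            "b:27017": {"type": "Unknown"},
        }},
    }],
}


class StubClient:
    """Minimal stand-in for a driver: records the server types it infers.
    A real runner would drive the actual driver under test."""

    def __init__(self):
        self.types = {}

    def on_ismaster(self, address, response):
        if response.get("setName") and response.get("ismaster"):
            self.types[address] = "RSPrimary"
        elif response.get("setName") and response.get("secondary"):
            self.types[address] = "RSSecondary"
        else:
            self.types[address] = "Unknown"
        for host in response.get("hosts", []):
            self.types.setdefault(host, "Unknown")

    def server_type(self, address):
        return self.types.get(address, "Unknown")


def run_test(client, case):
    """Feed scripted ismaster responses to the client, then check its view."""
    for phase in case["phases"]:
        for address, response in phase["responses"]:
            client.on_ismaster(address, response)
        for address, info in phase["outcome"]["servers"].items():
            assert client.server_type(address) == info["type"], address
```

Because the inputs and outcomes live in data files rather than driver code, every driver can run the identical scenarios and be checked against the identical expectations.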
The Java driver 2.12.1 is the spec’s reference implementation for multi-threaded drivers, and I’m making the upcoming PyMongo 3.0 release conform to the spec as well. Mongos 2.6’s replica set monitor is the reference implementation for single-threaded drivers, with a few differences. The upcoming Perl driver 1.0 implements the spec to the letter.
Once we have multiple reference implementations and the dust has settled, the draft spec will be final. We’ll bring the rest of our drivers up to spec over the next year.
You can read more about the Server Discovery And Monitoring Spec at these links:
- A summary of the spec.
- The Server Discovery And Monitoring Spec.
- The spec source, including the YAML test files.
- PyMongo’s test runner.
- PyMongo’s core implementation of the spec.
We have more work to do. For one thing, the Server Discovery And Monitoring Spec only describes how the client gathers information about your server topology—it does not describe which servers the client uses for operations. My Read Preferences Spec only partly answers this second question. My colleague David Golden is writing an improved and expanded version of Read Preferences, which will be called the Server Selection Spec. Once that spec is complete, we’ll have a standard algorithm for all drivers that answers questions like, “Which replica set member does the driver use for a query? What about an aggregation? Which mongos does it use for an insert?” It’ll include tests of the same formality and rigor as the Server Discovery And Monitoring Spec does.
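To show the kind of question server selection answers, here is a Python sketch of how a driver might choose a replica set member for a read, given the read preference modes MongoDB already defines. The function and the flat `topology` dict are my illustrative assumptions; the forthcoming Server Selection Spec will define the real algorithm, including latency windows and tag sets omitted here:

```python
import random


def select_server(topology, read_preference="primary"):
    """Pick a server address for a read, given a simplified topology view.

    `topology` maps "host:port" -> server type string. Latency-based
    filtering and tag sets are omitted from this sketch.
    """
    primaries = [a for a, t in topology.items() if t == "RSPrimary"]
    secondaries = [a for a, t in topology.items() if t == "RSSecondary"]

    if read_preference == "primary":
        candidates = primaries
    elif read_preference == "secondary":
        candidates = secondaries
    elif read_preference == "primaryPreferred":
        candidates = primaries or secondaries
    elif read_preference == "secondaryPreferred":
        candidates = secondaries or primaries
    else:  # "nearest" -- nearness by latency is not modeled here
        candidates = primaries + secondaries

    if not candidates:
        raise RuntimeError(
            "No suitable server for read preference %r" % read_preference)
    return random.choice(candidates)
```

Writes and commands like inserts would always go to the primary (or, against a sharded cluster, to a mongos); the interesting cases the new spec must pin down are reads and aggregations under the non-primary modes.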
Looking further ahead, we plan to standardize the drivers’ APIs so we all do basic CRUD operations the same way. And since we’ll allow much larger replica sets soon, both the server-discovery and the server-selection specs will need amendments to handle large replica sets. In all cases, we’ll provide a higher level of rigor, clarity, and formality in our specs than we have before.
This was originally posted to Jesse’s blog, Empty Square