MongoDB 3.0.5-rc2 is out and is ready for testing. This is a release candidate containing only fixes since 3.0.4. The next stable release 3.0.5 will be a recommended upgrade for all 3.0 users.
Fixed in this release:
- SERVER-17254 WT: drop collection while concurrent oplog tailing may greatly reduce throughput
- SERVER-17689 Server crash during initial replication sync
- SERVER-18875 Oplog performance on WT degrades over time after accumulation of deleted items
- SERVER-18902 Retrieval of large documents slower on WiredTiger than MMAPv1
- SERVER-18926 Full text search extremely slow and uses a lot of memory under WiredTiger
- SERVER-19255 Listener::waitUntilListening may return before listening has started
- SERVER-19489 Assertion failure and segfault in WorkingSet::free in 3.0.5-rc0
- SERVER-19513 Truncating a capped collection may not unindex deleted documents in WiredTiger
As always, please let us know of any issues.
– The MongoDB Team
MongoDB Pluggable Storage Engines: State of the Union, Storage Engine Summit
Back at the inaugural MongoDB World user conference in June 2014, our co-founder and CTO, Eliot Horowitz, announced that MongoDB would incorporate a storage engine API into the next major version of the database, MongoDB 3.0. The goal of the storage engine API was to make it fast and easy for MongoDB and the community to build new pluggable storage engines that allow the database to be extended with new capabilities, and to be configured for specific hardware architectures.

12 months on, following both the MongoDB 3.0 release and the MongoDB World 2015 conference (keynotes and sessions from the event are now online), we hosted our first Storage Engine Summit, bringing together storage engine developers from across both MongoDB and the community. The summit's goals were to:
- Review the status of current MongoDB storage engines
- Provide developers with visibility into the roadmap for both the storage engine API itself and the database
- Collect storage engine developers' feature requirements
- Develop best practices for community collaboration and development of the API

In this blog, I'll provide an overview of the progress happening around MongoDB storage engines, and key outcomes from the summit.

Why Pluggable Storage Engines?

First, a bit of background. With users building increasingly complex data-driven apps, there is no longer a "one size fits all" database storage technology capable of powering every type of application built by the business. Modern applications need to support a variety of workloads with different access patterns and price/performance profiles: from low-latency, in-memory read and write applications, to real-time analytics, to highly compressed "active" archives.

The storage engine API allows MongoDB to be configured with a choice of storage engines, each optimized for specific workloads. This "pluggable" approach significantly reduces developer and operational complexity compared to running multiple databases.
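In practice, the engine is a startup-time configuration choice rather than an application-code change. As a minimal sketch (the path is a placeholder; option names follow the MongoDB 3.0 YAML configuration file format), a mongod configuration selecting an engine might look like this:

```yaml
# Sketch of a mongod config file selecting a storage engine.
# dbPath is a placeholder; adjust for your deployment.
storage:
  dbPath: /var/lib/mongodb
  engine: wiredTiger          # or "mmapv1", the 3.0 default
  wiredTiger:
    collectionConfig:
      blockCompressor: snappy # snappy (default), zlib, or none
```

Because the selection lives in configuration, the same application can later be pointed at a different engine without rewriting queries.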
Now users can leverage the same MongoDB query language, data model, scaling, security and operational tooling across different applications, each powered by pluggable MongoDB storage engines that are optimized for specific workloads.

So what progress has been made so far? Quite a bit, and in a very short time frame. MongoDB 3.0 shipped with two supported storage engines:
- The default MMAPv1 engine, an improved version of the engine used in prior MongoDB releases, enhanced with collection-level concurrency control.
- The new WiredTiger storage engine. For many applications, WiredTiger's more granular concurrency control and native compression provide significant benefits in terms of lower storage costs, greater hardware utilization, higher throughput, and more predictable performance.

Benchmarks show MongoDB 3.0 configured with WiredTiger delivers 7-10x higher performance than MongoDB 2.6 using the original MMAP storage engine. In addition, storage compression rates of up to 80% are fairly common. To demonstrate how developers could start to innovate, several experimental engines also shipped with MongoDB 3.0, including the in-memory and /dev/null storage engines.

Beyond the walls of MongoDB's engineering teams, the community also started releasing its own engines. Within one quarter of 3.0's availability:
- Facebook released mongo-rocks, a MongoDB storage engine based on its RocksDB embedded database project. Mongo-rocks is already in use for some production workloads at Facebook's Parse mobile backend-as-a-service division, which today supports over 500,000 applications on MongoDB.
- Percona published a release candidate of its TokuMXse storage engine.

What's Next?

Held in our New York offices on June 4th, the summit drew diverse representation from the community. Some attendees already had storage engines for MongoDB; some were in varying stages of development; while others were just starting to evaluate the opportunities.
Attendees included:
- Facebook, who already has RocksDB
- Percona, readying TokuMXse
- SanDisk, developing an engine optimized for its Fusion-io SSD storage devices
- Deep, who have developed a MySQL storage engine featuring an adaptive database kernel
- 8k Data, currently building a JSON database on top of Postgres

There was also representation from MongoDB's storage engine teams, who discussed additional engines planned for the MongoDB 3.2 timeframe, including:
- An encrypted storage engine protecting data at rest, with keys secured by the industry-standard Key Management Interoperability Protocol (KMIP). At-rest encryption is especially critical for regulated industries such as healthcare, financial services, retail and certain government agencies. And with so many breaches of sensitive data at high-profile companies over the past five years, increasingly all data is being encrypted.
- An in-memory engine designed to serve ultra-high-throughput, low-latency apps typical in finance, ad-tech, gaming, real-time analytics, session management and general cache use cases.
- A new option for insert-only workloads (e.g., streaming IoT sensor data, log file analysis, social media feed ingestion), based on the LSM option in the WiredTiger storage engine.

Much more than just sharing feeds and speeds, the intent of the storage engine summit was to bring together developers to share best practices in how to collaborate.

Mark Callaghan, one of our esteemed MongoDB Masters and a member of Facebook's technical staff, gave great insight into his experiences working as part of the MySQL storage engine community. He shared the good, the bad, and the downright ugly.

Dr Michael Cahill, co-founder of WiredTiger and now Director of Storage Engineering at MongoDB, shared his experiences implementing the WiredTiger storage engine, both while outside MongoDB and within the company.
Mathias Stearn and Geert Bosch, senior database kernel engineers, provided a deep-dive walkthrough of the storage engine API implementation and plans for the future. This gave all those present an opportunity to provide feedback on issues they had faced, solutions to move forward, and future requirements to shape the evolution of the API.

All of the talks gave a great frame of reference for how the MongoDB storage engine community should move forward. So what did the summit achieve in one short day?
- The formation of a focused community committed to building MongoDB storage engines.
- The establishment of a framework for communications, contributions, collaboration and future summits.
- A direct connection between engine developers and the MongoDB engineers designing and implementing API functionality. Not only has feedback already helped evolve the storage engine API in the current 3.0 release; additional enhancements will be reflected in MongoDB 3.2's API.

Wrapping Up

Huge progress has been made, and the MongoDB storage engine ecosystem is continuing to grow. This diversity helps MongoDB users solve new classes of use cases with a single database framework. The storage engine API helps developers innovate faster. And it helps storage engine partners get their technology in front of a whole new audience. To learn more about the promise of multiple storage engines, download the What's New in MongoDB 3.0 guide.
MongoDB at AWS re:Invent 2020
While 2020 has been a challenging year, it has also given rise to new levels of innovative collaboration and agile thinking. Where better to experience both than at AWS re:Invent 2020? At MongoDB, we're excited to partner with AWS on this free, three-week virtual event, providing unlimited access to hundreds of sessions led by cloud experts.

Although we'll miss the grand, buzzing halls of the Venetian Hotel and the celebratory sounds of slot machines this year, it's still important to approach AWS re:Invent with a focused plan. Think of this year's event as an opportunity to curate your own perfectly tailored experience. Check out this page for details of our fresh new lineup of deep dives, targeted jam sessions and, of course, the annual MongoDB late-night party. Here are some of the highlights.

AWS Jam: "Excel isn't a database!"

Imagine this: it's your first week in a new job, and the VP of sales has already given you an important data task. The good news? From the start of the year, all your current sales data has been stored in MongoDB Atlas, allowing operational and analytical workloads to run on the live data set. The not-so-good news? That wasn't always the case. For years before the switch, their database (well, "database") of choice was… Excel. Fortunately, someone took the initiative to export that data in CSV format and store it in S3, but now the sales team needs your help to analyze that data, and they need it fast.

In our "Excel isn't a database!" jam session, you'll test and upgrade your skills by connecting MongoDB Atlas Data Lake to CSV data that's been languishing in an S3 bucket. Then you'll run an aggregation to complete the challenge and claim points. Game on! This jam session will be available on-demand for the duration of AWS re:Invent.

Databases & S3: Auto-archiving Breakout Session

Databases are built for fast access, but this can also make them resource-intensive.
As data grows, you may want to optimize performance (or cost) by migrating old or infrequently used data into cheap object storage. But this presents its own problems: automating the archival process, ensuring data consistency during failures, and either querying two data stores separately or building a query federation system. In this talk, you'll learn how we approached these problems while building the Online Archive and Federated Query features in MongoDB Atlas, lessons learned from the experience, and how you can do the same.

MongoDB Late Nite

That's right: it's a party! In the spirit of Vegas, MongoDB will be hosting an interactive late-night bash complete with throwback entertainment at our virtual after-hours event. Like Vegas, there's something for everyone. Unlike Vegas, the odds are actually on your side.

Get your adrenaline going and dial in for exclusive swag at our Home Shopping Network. Just sign on and dial into our custom QVC reboot every hour for a chance to snag some really cool limited-release items. Stay tuned to the event website to find out what you can win, and when!

Are you a Jeopardy lover? MongoDB Late Nite is your time to shine. Exercise your mental reflexes and get those synapses firing with hundreds of other party people inside episodes of dev-focused live trivia. And what kind of revelry is complete without a resident psychic on board? Join us at the Future of Coding for an interactive reading by a VERY accurate psychic. So kick back, grab a beverage and join us at the party from home. Let's get in the spirit together!

Sponsor Page/Online Booth

Pop into our virtual sponsor booth at your convenience. Our product experts will be there to answer your questions one-on-one. Alternatively, if casually exploring resources is more your style, check out our self-serve content playlists. View these to dig deeper into MongoDB education, glean customer success stories and get up to speed on the latest product features.
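The jam session's core task, aggregating CSV sales data that once lived in Excel, boils down to a classic $group stage. As a local, hedged sketch (the field names and sample rows are invented for illustration; in the jam itself, the equivalent pipeline would run against an Atlas Data Lake collection backed by the CSV files in S3), here is the same computation in plain Python:

```python
# Local sketch of the kind of aggregation the "Excel isn't a database!" jam
# asks for: total sales per region from CSV rows. The column names and the
# sample data below are invented for illustration.
import csv
import io
from collections import defaultdict

# Stand-in for a CSV file exported from the old "Excel database" into S3.
csv_data = """region,amount
EMEA,1200.50
AMER,980.00
EMEA,300.25
APAC,410.00
"""

def total_sales_by_region(csv_text: str) -> dict:
    """Mirrors a $group stage like:
    [{"$group": {"_id": "$region", "total": {"$sum": "$amount"}}}]"""
    totals = defaultdict(float)
    for row in csv.DictReader(io.StringIO(csv_text)):
        totals[row["region"]] += float(row["amount"])
    return dict(totals)

print(total_sales_by_region(csv_data))
# {'EMEA': 1500.75, 'AMER': 980.0, 'APAC': 410.0}
```

The appeal of Data Lake federation is that this bookkeeping (parsing, grouping, summing) happens server-side over the S3-resident files, so the CSV data never has to be imported into the live cluster first.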