MongoDB 3.4.0-rc3 is out and ready for testing. It is the culmination of the 3.3.x development series.
Fixed in this release candidate:
- SERVER Configurable connection pool size for mongos
- SERVER Limit total memory utilization for bulk index builds
- SERVER Additional tests for views on sharded collections
- SERVER Over 25% regression on MongoDB using YCSB workload
- SERVER Minor speed regression (13%) and 'choppy' performance in 3.4 vs 3.2
- TOOLS A single invocation of mongoreplay replays the ops twice
- TOOLS Connections never closed during replay
As always, please let us know of any issues.
-- The MongoDB Team
Leaf in the Wild: How Loopd uses MongoDB to Power its Advanced Location-Tracking Platform for Conferences
Conferences can be incredibly hectic experiences for everyone involved. You have attendees wanting to meet and exchange information, sponsors and exhibitors looking to maximize foot traffic to their booths, and conference hosts trying to get a sense of how they can optimize their event and whether it was all worth it in the end. While sponsors usually do get a lead list immediately after an event for their troubles, attendees often struggle to remember who they actually spoke to, and event hosts are often left in the dark about what they can do to maximize the returns on their investments. Enter Loopd, an advanced events engagement platform. I sat down with their CEO, Brian Friedman, to understand how they're using MongoDB to help conference attendees and event hosts separate the signal from the noise.

Tell us about Loopd.

Loopd provides physical intelligence for corporate events. We help corporate marketers learn how people interact with each other, with their company, and with their company's products. The Loopd event engagement system is the industry's only bi-directional CRM solution that enables the exchange of content and contact information passively and automatically. We equip conference attendees with Loopd wearable badges, which can be used to easily exchange contact information or gain entry into sessions. Through our enterprise IoT analytics and sensors, we then gather and interpret rich data so that marketers have a more sophisticated understanding of business relationships and interactions at conferences, exhibits, and product activation events. Some of our clients include Intel, Box, Twilio, and MongoDB.

[Image: Bluetooth LE Loopd Badges]

How are you using MongoDB?

We use MongoDB to store millions of data points from connected advertising and Bluetooth LE Loopd Badges on the conference floor. All of the attendee movement data captured by the Loopd Badge at an event can be thought of as time series data associated with location information.
We track each Loopd Badge's location and movement path in real time during the event. As a result, we handle heavy write operations during an event to make sure any and all calculations are consistent, timely, and accurate. We also use the database for real-time analysis. For example, we calculate the number of attendee visits and returns, and average time durations, in near real time. We use the aggregation framework in MongoDB to make this happen.

What did you use before MongoDB?

Before MongoDB, we used PostgreSQL as our main data store, with Redis as a temporary buffer queue for new movement data. The data was dumped, inserted, and updated into rows in the SQL database once per second. The raw location data was then read and parsed from the SQL database into a user-readable format. We needed the temporary buffer because the high volume of insert and update requests drained available resources.

What challenges did you face with PostgreSQL?

With PostgreSQL, we needed a separate Redis caching server to buffer write and update operations before storing them in the database, which added architectural and operational complexity. It also wasn't easy to scale, as PostgreSQL isn't designed to be deployed across multiple instances.

How did MongoDB help you resolve those challenges?

When we switched from PostgreSQL to MongoDB, our write throughput increased significantly, removing the need for a separate caching server between the client and the database. We were able to halve our VM resource consumption (CPU and memory), which translated into significant cost savings. As a bonus, our simplified underlying architecture is now much easier to manage. Finally, one of the great things about MongoDB is its data model flexibility, which lets us rapidly adapt our schema to support new application demands without incurring downtime or managing complex schema migrations.

Please describe your MongoDB deployment.

We typically run one replica set per event.
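The near-real-time rollups described above (visit counts, return counts, and average dwell times) map naturally onto an aggregation pipeline. Here is a minimal sketch, expressed as plain Python data as it would be passed to pymongo's collection.aggregate(); the field names (event_id, badge_id, zone, duration_secs) and the collection name are illustrative assumptions, not Loopd's actual schema.

```python
# Hypothetical document shape (all field names are assumptions):
#   {"event_id": "...", "badge_id": "...", "zone": "booth-12",
#    "entered_at": <datetime>, "duration_secs": 140}
pipeline = [
    # Restrict to a single event's movement data.
    {"$match": {"event_id": "mdbw2016"}},
    # One bucket per floor zone: total visits, distinct badges, average dwell time.
    {"$group": {
        "_id": "$zone",
        "visits": {"$sum": 1},
        "unique_badges": {"$addToSet": "$badge_id"},
        "avg_duration_secs": {"$avg": "$duration_secs"},
    }},
    # Busiest zones first.
    {"$sort": {"visits": -1}},
]
# With a live connection this would run as:
#   results = db.movements.aggregate(pipeline)
```

Since the smallest analytics window in the application is a minute, a pipeline like this can tolerate being run against a slightly lagging secondary.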
The database size depends on the event: for MongoDB World 2016, we generated about 2 million documents over the course of a couple of days. We don't shard our MongoDB deployments yet, but having that ability in our back pocket will be very important for us going forward. At the moment, all of our read queries are executed on the secondaries in the replica set, which means write throughput isn't impacted by read operations. The smallest analytics window in our application is a minute, so we can tolerate the eventual consistency of secondary reads. Our MongoDB deployments are hosted on Google Cloud VM instances. We're exploring containers, but they're not currently in use in any production environment. We're also evaluating Spark and Hadoop for doing some more interesting things with the data we have in MongoDB.

What version of MongoDB are you running?

We use MongoDB 3.2. We find the document validation feature added in that release very valuable for checking data types. While we still perform application-level validation, we appreciate this added layer of safety.

What advice do you have for other companies looking to start with MongoDB?

MongoDB is flexible, scalable, and quite developer- and DBA-friendly, even if you're used to an RDBMS. We would recommend familiarizing yourself with the basic concepts of MongoDB first, leaning heavily on the community while learning. I'd also recommend reading the production notes to optimize system configuration and operational parameters.

Brian, thanks for taking the time to share your story with the MongoDB community.

Thinking of migrating to MongoDB from a relational database? Learn more from our guide: Download the RDBMS Migration Guide
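Document validation in MongoDB 3.2 is declared as a query-style expression attached to a collection, so a type check like the one mentioned above can be sketched as follows. The collection and field names are illustrative assumptions, and note that in 3.2 the $type operator takes numeric BSON type codes (string aliases arrived later).

```python
# Validator expression for MongoDB 3.2 document validation.
# BSON type codes: 2 = string, 9 = date, 1 = double.
validator = {
    "badge_id": {"$type": 2},       # must be a string
    "entered_at": {"$type": 9},     # must be a date
    "duration_secs": {"$type": 1},  # must be a double
}
# Applied at collection creation time with pymongo:
#   db.create_collection("movements", validator=validator)
# Inserts and updates that fail the check are rejected by the server,
# backstopping whatever validation the application layer already does.
```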
MongoDB at AWS re:Invent 2020
While 2020 has been a challenging year, it has also given rise to new levels of innovative collaboration and agile thinking. Where better to experience both than at AWS re:Invent 2020? At MongoDB, we're excited to partner with AWS on this free, three-week virtual event, providing unlimited access to hundreds of sessions led by cloud experts. Although we'll miss the grand, buzzing halls of the Venetian Hotel and the celebratory sounds of slot machines this year, it's still important to approach AWS re:Invent with a focused plan. Think of this year's event as an opportunity to curate your own perfectly tailored experience. Check out this page for details of our fresh new lineup of deep dives, targeted jam sessions, and, of course, the annual MongoDB late-night party. Here are some of the highlights.

AWS Jam: "Excel isn't a database!"

Imagine this: it's your first week in a new job, and the VP of sales has already given you an important data task. The good news? Since the start of the year, all your current sales data has been stored in MongoDB Atlas, allowing operational and analytical workloads to run on the live data set. The not-so-good news? That wasn't always the case. For years before the switch, the database (well, "database") of choice was... Excel. Fortunately, someone took the initiative to export that data in CSV format and store it in S3, but now the sales team needs your help to analyze it, and they need it fast. In our "Excel isn't a database!" jam session, you'll test and upgrade your skills by connecting MongoDB Atlas Data Lake to CSV data that's been languishing in an S3 bucket. Then you'll run an aggregation to complete the challenge and claim points. Game on! This jam session will be available on demand for the duration of AWS re:Invent.

Databases & S3: Auto-archiving Breakout Session

Databases are built for fast access, but this can also make them resource-intensive.
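The jam-session workflow sketched above (CSV files in S3 exposed through Atlas Data Lake, then summarized with an aggregation) might look roughly like this; the connection string placeholder, database and collection names, and the "region"/"amount" fields are all illustrative assumptions about the challenge data, not the actual jam content.

```python
# Once Atlas Data Lake maps the bucket's CSV files to a virtual collection,
# the data can be queried like any other MongoDB collection.
pipeline = [
    # Total sales per region, assuming each CSV row surfaces as a document
    # with "region" and "amount" fields.
    {"$group": {"_id": "$region", "total_sales": {"$sum": "$amount"}}},
    # Largest totals first, ready to hand back to the sales team.
    {"$sort": {"total_sales": -1}},
]
# With pymongo, pointed at the Data Lake connection string:
#   client = MongoClient("<data-lake-connection-string>")
#   results = client.sales_db.orders.aggregate(pipeline)
```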
As data grows, you may want to optimize performance (or cost) by migrating old or infrequently used data into cheap object storage. But this presents its own problems: automating the archival process, ensuring data consistency during failures, and either querying two data stores separately or building a query federation system. In this talk, you'll learn how we approached these problems while building the Online Archive and Federated Query features into MongoDB Atlas, lessons learned from the experience, and how you can do the same.

MongoDB Late Nite

That's right: it's a party! In the spirit of Vegas, MongoDB will be hosting an interactive late-night bash complete with throwback entertainment at our virtual after-hours event. Like Vegas, there's something for everyone. Unlike Vegas, the odds are actually on your side. Get your adrenaline going and dial in for exclusive swag at our Home Shopping Network: just sign on and tune into our custom QVC reboot every hour for a chance to snag some really cool limited-release items. Stay tuned to the event website to find out what you can win, and when! Are you a Jeopardy lover? MongoDB Late Nite is your time to shine. Exercise your mental reflexes and get those synapses firing with hundreds of other party people inside episodes of dev-focused live trivia. And what kind of revelry is complete without a resident psychic on board? Join us at the Future of Coding for an interactive reading by a VERY accurate psychic. So kick back, grab a beverage, and join us at the party from home. Let's get in the spirit together!

Sponsor Page/Online Booth

Pop into our virtual sponsor booth at your convenience. Our product experts will be there to answer your questions one-on-one. Alternatively, if casually exploring resources is more your style, check out our self-serve content playlists. View these to dig deeper into MongoDB education, glean customer success stories, and get up to speed on the latest product features.