Update 2/25/2016: The new UI has changed the way this process looks (users and roles now live under the “More” menu on the Deployment page), but the idea is the same. Feel free to open a ticket or chat with us about any questions you may have.
A question we are asked a lot is how to create a user that can tail the oplog using Cloud Manager Automation. This is a feature needed by Meteor users if they want to use MongoDB authentication to protect their database servers. Here’s how:
- Head to your Authorization & Roles page
- Create a new role (I called mine “oplogger”) that has permissions to read the local database
- Once you save this role, you can go to your “Authentication & Users” tab:
- Then you can create a user with the “oplogger” role (and any other roles you may want) and save it with a password you know
- Push your changes via “Review & Deploy” and then “Confirm & Deploy”
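For reference, outside of Cloud Manager the same role and user could be sketched in the mongo shell (a sketch only: the names and password are placeholders, and changes made this way are not tracked by Automation):

```javascript
// Run against the admin database of your deployment.
use admin

// A custom role granting read access to the "local" database,
// where the oplog (local.oplog.rs) lives.
db.createRole({
  role: "oplogger",
  privileges: [
    { resource: { db: "local", collection: "" }, actions: [ "find" ] }
  ],
  roles: []
})

// A user holding that role; add any other roles your app needs.
db.createUser({
  user: "oplogger",
  pwd: "a-password-you-know",
  roles: [ { role: "oplogger", db: "admin" } ]
})
```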
Once you configure your Meteor installation (via MONGO_OPLOG_URL) to connect with the new credentials, your app should work as expected, giving you live tracking of changes.
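As a minimal sketch, the connection string for MONGO_OPLOG_URL can be assembled like this. `buildOplogUrl` is a hypothetical helper, the host and credentials are placeholders, and the `authSource=admin` parameter assumes the user was created in the admin database, as Cloud Manager does:

```javascript
// Hypothetical helper: build a MONGO_OPLOG_URL for the "oplogger" user.
function buildOplogUrl(user, password, host, port) {
  // Percent-encode the credentials so special characters survive the URL.
  const u = encodeURIComponent(user);
  const p = encodeURIComponent(password);
  // Oplog tailing reads from the "local" database; auth happens against "admin".
  return `mongodb://${u}:${p}@${host}:${port}/local?authSource=admin`;
}

// Example: a password containing "@" must be escaped.
buildOplogUrl("oplogger", "p@ss", "db.example.com", 27017);
// -> mongodb://oplogger:p%40ss@db.example.com:27017/local?authSource=admin
```

Meteor then picks this up from the environment, e.g. `export MONGO_OPLOG_URL="mongodb://oplogger:secret@db.example.com:27017/local?authSource=admin"`.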
Securing MongoDB Part 3: Database Auditing and Encryption
Welcome back to our 4-part blog series presenting the best practices and controls available in MongoDB to help you create a secure, compliant database platform. In this installment, we’ll discuss database auditing and encryption. As a quick recap: in part 1, we took a look at the general requirements for data security and regulatory compliance, and in part 2 we reviewed MongoDB access control, which enforces authentication and authorization. In part 4, we’ll wrap up with environmental control and management. If you want a head start and would like to learn about all of these topics in one place, go ahead and download the MongoDB Security Architecture guide.

MongoDB Auditing

The auditing framework provided as part of MongoDB Enterprise Advanced logs all access and actions executed against the database. It captures administrative (DDL) actions such as schema operations, authentication and authorization activities, and read and write (DML) operations. Administrators can construct and filter audit trails for any operation against MongoDB, whether DML, DCL or DDL, without having to rely on third-party tools. For example, it is possible to log and audit the identities of users who retrieved specific documents, and any changes made to the database during their session.

**Figure 1**: MongoDB Maintains an Audit Trail of Administrative Actions Against the Database

Administrators can configure MongoDB to log all actions, or apply filters to capture only specific events, users or roles. The audit log can be written to multiple destinations in a variety of formats, including the console and syslog (in JSON format) and a file (JSON or BSON), which can then be loaded into MongoDB and analyzed to identify relevant events. MongoDB Enterprise Advanced also supports role-based auditing.
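As a sketch, auditing to a JSON file with an event filter might look like this in the mongod configuration file (the path and filter values are illustrative only, assuming MongoDB Enterprise):

```yaml
auditLog:
  destination: file      # also supported: console, syslog
  format: JSON           # the file destination also supports BSON
  path: /var/log/mongodb/audit.json
  # Capture only schema (DDL) events; omit "filter" to log all auditable events
  filter: '{ atype: { $in: [ "createCollection", "dropCollection", "createIndex" ] } }'
```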
It is possible to log and report activities by specific role, such as userAdmin or dbAdmin, coupled with any roles each user has inherited, rather than having to extract activity for each individual administrator.

Auditing adds performance overhead to a MongoDB system. The amount depends on several factors, including which events are logged, where the audit log is maintained (such as on an external storage device), and the audit log format. Users should weigh their application’s auditing needs against their performance goals in order to determine their optimal configuration. Learn more from the MongoDB auditing documentation.

MongoDB Encryption

Administrators can encrypt MongoDB data in motion over the network and at rest in permanent storage.

Network Encryption

Support for SSL/TLS allows clients to connect to MongoDB over an encrypted channel. Clients are defined as any entity capable of connecting to the MongoDB server, including:

- Users and administrators
- Applications
- MongoDB tools (e.g., mongodump, mongorestore, mongotop)
- Nodes that make up a MongoDB cluster, such as replica set members, query routers and config servers

It is possible to mix SSL/TLS with non-SSL/TLS connections on the same port, which can be useful when applying finer-grained encryption controls for internal and external traffic, as well as for avoiding downtime when upgrading a MongoDB cluster to support SSL. The TLS protocol is also supported with x.509 certificates.

MongoDB Enterprise Advanced supports FIPS 140-2 encryption when run in FIPS mode with a FIPS-validated cryptographic module. The mongod and mongos processes should be configured with the "sslFIPSMode" setting. In addition, these processes should be deployed on systems with an OpenSSL library configured with the FIPS 140-2 module. The MongoDB documentation includes a tutorial for configuring TLS/SSL connections.

Disk Encryption

There are multiple ways to encrypt data at rest with MongoDB.
Encryption can be implemented at the application level, or via external filesystem and disk encryption solutions. By introducing additional technology into the stack, both of these approaches add cost and complexity.

With the introduction of the Encrypted storage engine in MongoDB 3.2, protection of data at rest becomes an integral feature of the database. By natively encrypting database files on disk, administrators eliminate both the management and performance overhead of external encryption mechanisms. The new storage engine provides an additional level of defense, allowing only staff with the appropriate database credentials access to encrypted data.

**Figure 2:** End to End Encryption – Data In-Flight and Data At-Rest

Using the Encrypted storage engine, the raw database content, referred to as plaintext, is encrypted using an algorithm that takes a random encryption key as input and generates ciphertext that can only be read if decrypted with the corresponding key. The process is entirely transparent to the application. MongoDB supports a variety of encryption ciphers, with AES-256 (256-bit encryption) in CBC mode as the default; AES-256 in GCM mode is also supported. Encryption can be configured for FIPS 140-2 compliance.

The storage engine encrypts each database with a separate key. The key-wrapping scheme in MongoDB wraps all of the individual internal database keys with one external master key per server. The Encrypted storage engine supports two key management options; in both cases, the only key managed outside of MongoDB is the master key:

- Local key management via a keyfile
- Integration with a third-party key management appliance via the KMIP protocol (recommended)

Most regulatory requirements mandate that encryption keys be rotated and replaced with a new key at least once annually. MongoDB can achieve key rotation without incurring downtime by performing rolling restarts of the replica set.
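A minimal sketch of enabling the Encrypted storage engine with KMIP key management might look like this (hostnames and file paths are placeholders, assuming MongoDB Enterprise 3.2+):

```yaml
security:
  enableEncryption: true
  # Default cipher; AES256-GCM is also available on supported platforms
  encryptionCipherMode: AES256-CBC
  kmip:
    serverName: kmip.example.com          # your key management appliance
    port: 5696
    clientCertificateFile: /etc/mongodb/kmip-client.pem
    serverCAFile: /etc/mongodb/kmip-ca.pem
```

For local key management, the `kmip` block is replaced with `encryptionKeyFile: /path/to/keyfile`. With KMIP, annual master-key rotation can be performed during a rolling restart by starting each member with the `--kmipRotateMasterKey` option.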
When using a KMIP appliance, the database files themselves do not need to be re-encrypted, thereby avoiding the significant performance overhead imposed by key rotation in other databases. Only the master key is rotated, and the internal database keystore is re-encrypted.

The Encrypted storage engine is designed for operational efficiency and performance:

- Compatible with WiredTiger’s document-level concurrency control and compression
- Support for Intel’s AES-NI equipped CPUs for acceleration of the encryption/decryption process
- As documents are modified, only updated storage blocks need to be encrypted, rather than the entire database

Based on user testing, the Encrypted storage engine limits performance overhead to around 15% (this can vary, based on the data types being encrypted), which can be much less than the observed overhead imposed by some filesystem encryption solutions. The Encrypted storage engine is based on WiredTiger and is available as part of MongoDB Enterprise Advanced. Refer to the documentation to learn more, and see a tutorial on how to configure the storage engine.

MongoDB Atlas Encryption

As discussed in part 2 of the Securing MongoDB blog series, MongoDB Atlas is a database as a service for MongoDB, providing all of the features of the database without the operational heavy lifting required for any application. MongoDB Atlas has been engineered to deliver robust encryption controls. Data managed by the MongoDB Atlas service can be encrypted on the network and on disk. Support for TLS/SSL allows clients to connect to MongoDB over an encrypted channel, and all data transfers across the cluster are also encrypted. Data at rest can be protected using encrypted data volumes. Note that this uses the cloud provider’s native volume encryption solution, rather than the MongoDB Encrypted storage engine. Review the MongoDB Atlas documentation for more information on configuring the built-in security controls.
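Whether against a self-managed deployment with SSL/TLS enabled or an Atlas cluster, a client connection over an encrypted channel can be sketched as follows (the hostname, certificate path, and username are placeholders; option names are from the 3.x-era mongo shell):

```shell
# Connect the mongo shell over SSL/TLS, validating the server
# certificate against a local CA file.
mongo --ssl --host db.example.com --port 27017 \
      --sslCAFile /etc/ssl/ca.pem \
      -u appUser -p --authenticationDatabase admin
```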
Getting Started with MongoDB Security

With comprehensive controls for user rights management, auditing and encryption, coupled with management controls, MongoDB can meet the best practices and requirements discussed in this blog series. MongoDB Enterprise Advanced is the certified and supported production release of MongoDB, with advanced security features including Kerberos and LDAP authentication, encryption of data at rest, FIPS compliance, and maintenance of audit logs. These capabilities extend MongoDB’s security framework, which includes Role-Based Access Control, PKI certificates, Field-Level Redaction, and SSL/TLS data transport encryption.

In the final part of this blog post series, we will dive into environmental control and database management. You can learn about all of these capabilities now by reading the MongoDB Security Architecture guide. If you want to try them for yourself, [download MongoDB Enterprise](https://www.mongodb.com/download-center?#enterprise), free of charge for evaluation and development.

About the Author - Mat Keep

Mat is a director within the MongoDB product marketing team, responsible for building the vision, positioning and content for MongoDB’s products and services, including the analysis of market trends and customer requirements. Prior to MongoDB, Mat was director of product management at Oracle Corp., with responsibility for the MySQL database in web, telecoms, cloud and big data workloads. This followed a series of sales, business development and analyst/programmer positions with both technology vendors and end-user companies.
4 Common Misperceptions about MongoDB
One year ago, in the middle of the pandemic, Dev Ittycheria, the CEO of MongoDB, brought me on as Chief Technology Officer. Frankly, I thought I knew everything about databases and MongoDB. After all, I’d been in the database business for 32 years already. I’d been on MongoDB’s Board of Directors and used the products extensively. And of course I’d done my due diligence, met the leadership team, and analyzed earnings reports and product roadmaps. Even with all that knowledge, this past year as MongoDB’s CTO has taught me that many of my preconceived notions were just plain wrong. This made me wonder how many other people might also have the wrong impression about this company. This blog is my attempt to set those perceptions straight by sharing my four major revelations of the last year.

My first revelation is that MongoDB is not trying to become this generation’s relational database. For years I assumed that MongoDB basically wanted to be a better, more modern version of Oracle when it grew up. In other words, compete with the huge footprint of Oracle and other commercial RDBMSs that have been the industry archetype for so long. I was way off. The whole point of MongoDB is to leave all those forms of archaic, legacy database technology in the historical dust. This was never supposed to be an evolution, but instead a revolution. Our founders not only envisioned the world’s fastest and most scalable persistent store, but also one that would be programmed and operated differently. The combination of embedded documents and structures, automatic high availability, and almost-infinite distribution capability adds up to a fundamentally different way of working with data, building applications, and running those applications in production. Oracle, SQL Server, and the rest still hang their hats on E.F. Codd’s 51-year-old vision of rows and columns.
To obtain high availability and distribution of data, you need add-ons, options packages, baling wire, and duct tape. And you need a lot of database administrators. Not cheap. Even after all that, you’re still trailing the technological edge. This is how wrong I was. Our durable competitive advantages over these legacy data stores make competing with those products almost irrelevant. We instead focus on the modern needs of modern developers building modern applications. These developers need to create their own competitive advantage through language-native development, reliable deployments to production, and lightning-fast iteration. And the world is noticing; just check out the falling slope of Oracle and SQL Server and the rising slope of MongoDB on the db-engines website.

Which brings me to my second revelation: MongoDB was built for developers, by developers. I always knew that MongoDB was exceedingly fast and easy to program against. One time while I was bored in a meeting (yes, it happens here as well!), I built an Atlas database, loaded it with 350MB of data, downloaded and learned our Compass data discovery tool, built analytics aggregation pipelines, learned our Charts package, and embedded live charts in a web page. This took me all of 19 minutes, end to end. To build something like that for engineers, it just has to be built by engineers, ones that are free to focus on all the rough edges that creep into products as features are added. I was first exposed to software planning and management over 40 years ago, and my LinkedIn profile shows a pretty diverse tour around the industry. Now, one year in, I can emphatically state that engineering and product at MongoDB are both different and better than at any company I’ve ever had the privilege to work at.
Our executive leadership gives engineering and product broad brushstrokes of goals and desired outcomes, and then we work together to come up with detailed roadmaps, updated quarterly, that meet those goals in the way we think best, with no micromanagement. And we’re not afraid of 3-5 year projects, either. For example, multi-cloud was more than three years in the making. Also unlike any other company I’ve been at, we embrace the creation and repayment of tech debt, rather than sweeping it under the rug. We do this by giving our product and engineering teams huge amounts of context, delivered with candor and openness. And one more essential thing: we have an empowered program management team that improves processes (including killing them) as fast as we create them. In short, we paint the targets for our teams and let them decide how and when to shoot. They even design the bows and arrows. It’s true bottom-up engineering. Our engineers feel valued and understood. And that, in turn, empowers them to develop features that make our customers feel valued and understood, like a unified query language, or real-time analytics and charting directly in the console, or multi-region/multi-cloud clusters where all the networking cruft is taken care of for you.

And this brings me to my third revelation: MongoDB is built for even the most demanding mission-critical applications. Fast? Yes. Easy? Of course. But mission-critical? That’s not how I saw MongoDB when I used Version 2 for a massive student data project 10 years ago. While it was the only possible datastore we could have chosen for the amount of data and the speed of ingestion and processing needed, it was pretty hard to set up and use in a 24 x 365 environment. MongoDB had gotten ahead of itself in the early 2010s. There was a gap between our capabilities and the expectations of the market. And it was painful. Other databases had had more than 30 years to solidify their systems and operations. We’d had five.
But with Version 3 we added a new storage engine, and later full ACID transactions and search. We built on that with Version 4, and then again with Version 5, released this week at our .Live conference. I knew about all this progress intellectually when I joined, of course, but not viscerally. I came to realize that the security, durability, availability, scalability, and operability our platform offers (in addition to all the features that developers love) make it ideal for architecting fast-moving enterprise applications. And I found the proof in our customer list. It reads like a Who’s Who of major global banks, retailers, and telecommunications companies, running core systems like payments, IoT applications, content management, and real-time analytics. They use our database, data lake, analytics, search, and mobile products across their entire businesses, in every major cloud, on-premises, and on their laptops.

And that leads me to my fourth and final revelation: MongoDB is no longer just a database. Of course, the database is still the core. But MongoDB now provides an enterprise-class, mission-critical application data platform: a cohesive, integrated suite of offerings capable of managing modern data requirements across even the most sprawling digital estates, and of scaling to meet the level of any company’s ambition, without sacrificing speed or security.

Since the day I was first introduced to MongoDB’s products, I’ve had tremendous respect and admiration for the teams and their work. After all, I’m a developer, first and foremost. And it always felt like they “got” me. But had I known then what I know now, I would have jumped on this train a long time ago. In fact, I might have camped out on their doorstep with my resume in hand. And who knows? Maybe a bunch of people reading this will do just that, and have their own revelations about how fulfilling and exciting it can be to be at a great company, with a great culture, producing great products.
I’ll write another letter a year from now and let you know how it’s going then. In the meantime, please reach out to me here, or at @MarkLovesTech.