Update 2/25/2016: The new UI has changed the way this process looks (users & roles now live under the “More” menu on the Deployment page), but the idea is the same. Feel free to open a ticket or chat with us about any questions you may have.
A question we are asked a lot is how to create a user that can tail the oplog using Cloud Manager Automation. This is a feature needed by Meteor users if they want to use MongoDB authentication to protect their database servers. Here’s how:
- Head to your Authorization & Roles page
- Create a new role (I called mine “oplogger”) that has permissions to read the local database
- Once you save this role, you can go to your “Authentication & Users” tab:
- Then you can create a user with the “oplogger” role (and any other roles you may want) and save it with a password you know
- Push your changes via “Review & Deploy” and then “Confirm & Deploy”
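For those curious what Cloud Manager pushes under the hood, the steps above correspond roughly to the following command documents, as a driver such as pymongo would send them via `db.command`. This is a sketch, not the exact payload Automation generates; the user name and password are placeholders.

```python
# Hypothetical sketch of the createRole/createUser commands behind the
# walkthrough above. The role grants read access to the "local" database,
# where the oplog lives.
create_role = {
    "createRole": "oplogger",
    "privileges": [
        {"resource": {"db": "local", "collection": ""},
         "actions": ["find"]},
    ],
    "roles": [],
}

create_user = {
    "createUser": "meteor-app",          # placeholder user name
    "pwd": "a-password-you-know",        # placeholder password
    "roles": [
        {"role": "oplogger", "db": "admin"},
        # ...plus any other roles your app needs
    ],
}
```

Saving the role and user through the UI and deploying has the same effect as running these commands against the admin database.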
Once you configure your Meteor installation (via MONGO_OPLOG_URL) to connect with the new credentials, your app should work as expected, giving you live tracking of changes.
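As a sketch of what that connection string looks like, here is one way to build it, percent-encoding the credentials so special characters in the password don't break the URL. The host name, user, and password are placeholders; the oplog lives in the local database, and the user is assumed to have been created in admin.

```python
from urllib.parse import quote_plus

user = "meteor-app"       # placeholder credentials
password = "p@ss/word"    # special characters must be percent-encoded
host = "mongo1.example.com:27017"

# Tail the oplog in the "local" database; authenticate against "admin",
# where the user was defined.
mongo_oplog_url = (
    f"mongodb://{quote_plus(user)}:{quote_plus(password)}"
    f"@{host}/local?authSource=admin"
)
print(mongo_oplog_url)
# → mongodb://meteor-app:p%40ss%2Fword@mongo1.example.com:27017/local?authSource=admin
```

You would then export this value as MONGO_OPLOG_URL in the environment where Meteor runs.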
Securing MongoDB Part 3: Database Auditing and Encryption
Welcome back to our 4-part blog series presenting the best practices and controls available in MongoDB to help you create a secure, compliant database platform. In this installment, we'll discuss database auditing and encryption. As a quick recap, in part 1 we took a look at the general requirements for data security and regulatory compliance, and in part 2 we reviewed MongoDB access control, enforcing authentication and authorization. In part 4, we'll wrap up with environmental control and management. If you want to get a head start and learn about all of these topics in one installment, just go ahead and download the MongoDB Security Architecture guide.

MongoDB Auditing

The auditing framework provided as part of MongoDB Enterprise Advanced logs all access and actions executed against the database. It captures administrative (DDL) actions such as schema operations, authentication and authorization activities, and read and write (DML) operations. Administrators can construct and filter audit trails for any operation against MongoDB, whether DML, DCL, or DDL, without having to rely on third-party tools. For example, it is possible to log and audit the identities of users who retrieved specific documents, and any changes made to the database during their session.

**Figure 1**: MongoDB Maintains an Audit Trail of Administrative Actions Against the Database

Administrators can configure MongoDB to log all actions or apply filters to capture only specific events, users, or roles. The audit log can be written to multiple destinations in a variety of formats, including the console and syslog (in JSON format), and a file (JSON or BSON), which can then be loaded into MongoDB and analyzed to identify relevant events. MongoDB Enterprise Advanced also supports role-based auditing.
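As an illustration of event filtering, here is a sketch of how an audit filter might be assembled and passed to mongod. The filter expression, namespace, and file path are hypothetical examples, not recommendations.

```python
import json

# Hypothetical audit filter: capture authentication events, plus
# createCollection/dropCollection (DDL) commands on the "reports" database.
audit_filter = {
    "$or": [
        {"atype": "authenticate"},
        {"atype": {"$in": ["createCollection", "dropCollection"]},
         "param.ns": {"$regex": "^reports\\."}},
    ]
}

# mongod (Enterprise) accepts the filter as a JSON string via --auditFilter,
# alongside an audit destination and format:
mongod_args = [
    "mongod",
    "--auditDestination", "file",
    "--auditFormat", "BSON",
    "--auditPath", "/var/log/mongodb/audit.bson",
    "--auditFilter", json.dumps(audit_filter),
]
```

The same settings can equivalently be placed under the auditLog section of the mongod configuration file.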
It is possible to log and report activities by specific role, such as userAdmin or dbAdmin, coupled with any inherited roles each user has, rather than having to extract activity for each individual administrator.

Auditing adds performance overhead to a MongoDB system. The amount depends on several factors, including which events are logged, where the audit log is maintained (for example, on an external storage device), and the audit log format. Users should weigh their application's auditing needs against their performance goals in order to determine their optimal configuration. Learn more from the MongoDB auditing documentation.

MongoDB Encryption

Administrators can encrypt MongoDB data in motion over the network and at rest in permanent storage.

Network Encryption

Support for SSL/TLS allows clients to connect to MongoDB over an encrypted channel. Clients are defined as any entity capable of connecting to the MongoDB server, including:
- Users and administrators
- Applications
- MongoDB tools (e.g., mongodump, mongorestore, mongotop)
- Nodes that make up a MongoDB cluster, such as replica set members, query routers, and config servers

It is possible to mix SSL/TLS and non-SSL/TLS connections on the same port, which can be useful when applying finer-grained encryption controls to internal and external traffic, as well as for avoiding downtime when upgrading a MongoDB cluster to support SSL. The TLS protocol is also supported with x.509 certificates.

MongoDB Enterprise Advanced supports FIPS 140-2 encryption when run in FIPS mode with a FIPS-validated cryptographic module. The mongod and mongos processes should be configured with the "sslFIPSMode" setting. In addition, these processes should be deployed on systems with an OpenSSL library configured with the FIPS 140-2 module. The MongoDB documentation includes a tutorial for configuring TLS/SSL connections.

Disk Encryption

There are multiple ways to encrypt data at rest with MongoDB.
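Before moving on to disk encryption, the network encryption settings described above can be sketched as a mongod invocation. Certificate paths are placeholders, and the option names use the "ssl" spelling current in this era of MongoDB.

```python
# Sketch of starting mongod with SSL/TLS required for all connections.
# Setting --sslMode to "preferSSL" instead would allow mixed SSL and
# non-SSL connections on the same port, e.g. during a rolling upgrade.
tls_args = [
    "mongod",
    "--sslMode", "requireSSL",
    "--sslPEMKeyFile", "/etc/ssl/mongodb.pem",  # placeholder path
    "--sslCAFile", "/etc/ssl/ca.pem",           # placeholder path
]

# With a FIPS-validated OpenSSL build on the host, FIPS mode is one
# additional flag:
fips_args = tls_args + ["--sslFIPSMode"]
```

The same options map directly to the net.ssl section of the mongod configuration file.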
Encryption can be implemented at the application level, or via external filesystem and disk encryption solutions. By introducing additional technology into the stack, both of these approaches can add cost and complexity. With the introduction of the Encrypted storage engine in MongoDB 3.2, protection of data at rest becomes an integral feature of the database. By natively encrypting database files on disk, administrators eliminate both the management and performance overhead of external encryption mechanisms. This storage engine provides an additional level of defense, allowing only those staff with the appropriate database credentials access to encrypted data.

**Figure 2:** End to End Encryption – Data In-Flight and Data At-Rest

Using the Encrypted storage engine, the raw database content, referred to as plaintext, is encrypted using an algorithm that takes a random encryption key as input and generates ciphertext that can only be read if decrypted with the corresponding key. The process is entirely transparent to the application. MongoDB supports a variety of encryption schemes, with AES-256 (256-bit encryption) in CBC mode being the default; AES-256 in GCM mode is also supported. The encryption scheme can be configured for FIPS 140-2 compliance.

The storage engine encrypts each database with a separate key. The key-wrapping scheme in MongoDB wraps all of the individual internal database keys with one external master key per server. The Encrypted storage engine supports two key management options; in both cases, the only key managed outside of MongoDB is the master key:
- Local key management via a keyfile
- Integration with a third-party key management appliance via the KMIP protocol (recommended)

Most regulatory requirements mandate that encryption keys be rotated and replaced with a new key at least once annually. MongoDB can achieve key rotation without incurring downtime by performing rolling restarts of the replica set.
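As a sketch of the local keyfile option, the following generates a base64-encoded master key, stores it in a file readable only by the mongod user, and assembles the corresponding mongod flags. Paths are placeholders; with the recommended KMIP option, the keyfile flags would be replaced by KMIP server settings.

```python
import base64, os, stat, tempfile

# Generate a 256-bit master key and base64-encode it, as the Encrypted
# storage engine's keyfile format expects.
master_key = base64.b64encode(os.urandom(32))

keyfile = os.path.join(tempfile.gettempdir(), "mongodb-keyfile")
with open(keyfile, "wb") as f:
    f.write(master_key)
os.chmod(keyfile, stat.S_IRUSR | stat.S_IWUSR)  # 0600: owner-only access

encrypted_args = [
    "mongod",
    "--enableEncryption",
    "--encryptionKeyFile", keyfile,
    "--encryptionCipherMode", "AES256-CBC",  # the default cipher mode
]
```

In production the keyfile must live on persistent storage with strict permissions, which is precisely why KMIP-based key management is the recommended choice.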
When using a KMIP appliance, the database files themselves do not need to be re-encrypted, avoiding the significant performance overhead imposed by key rotation in other databases. Only the master key is rotated, and the internal database keystore is re-encrypted.

The Encrypted storage engine is designed for operational efficiency and performance:
- Compatible with WiredTiger’s document-level concurrency control and compression.
- Support for Intel’s AES-NI equipped CPUs for acceleration of the encryption/decryption process.
- As documents are modified, only updated storage blocks need to be encrypted, rather than the entire database.

Based on user testing, the Encrypted storage engine limits performance overhead to around 15% (this can vary based on the data types being encrypted), which can be much less than the observed overhead imposed by some filesystem encryption solutions. The Encrypted storage engine is based on WiredTiger and available as part of MongoDB Enterprise Advanced. Refer to the documentation to learn more, and see a tutorial on how to configure the storage engine.

MongoDB Atlas Encryption

As discussed in part 2 of this blog series, MongoDB Atlas is a database as a service for MongoDB, providing all of the features of the database without the operational heavy lifting. MongoDB Atlas has been engineered to deliver robust encryption controls. Data managed by the MongoDB Atlas service can be encrypted on the network and on disk. Support for TLS/SSL allows clients to connect to MongoDB over an encrypted channel, and all data transfers across the cluster are also encrypted. Data at rest can be protected using encrypted data volumes. Note that this uses the cloud provider’s native volume encryption solution, rather than the MongoDB Encrypted storage engine. Review the MongoDB Atlas documentation for more information on configuring the built-in security controls.
Getting Started with MongoDB Security

With comprehensive controls for user rights management, auditing, and encryption, coupled with management controls, MongoDB can meet the best practices and requirements discussed in this blog series. MongoDB Enterprise Advanced is the certified and supported production release of MongoDB, with advanced security features including Kerberos and LDAP authentication, encryption of data at rest, FIPS compliance, and maintenance of audit logs. These capabilities extend MongoDB’s security framework, which includes Role-Based Access Control, PKI certificates, Field-Level Redaction, and SSL/TLS data transport encryption.

In the final part of this blog post series, we will dive into environmental control and database management. You can learn about all of these capabilities now by reading the MongoDB Security Architecture guide. If you want to try them for yourself, [download MongoDB Enterprise](https://www.mongodb.com/download-center?#enterprise), free of charge for evaluation and development.

About the Author - Mat Keep

Mat is a director within the MongoDB product marketing team, responsible for building the vision, positioning, and content for MongoDB’s products and services, including the analysis of market trends and customer requirements. Prior to MongoDB, Mat was director of product management at Oracle Corp., with responsibility for the MySQL database in web, telecoms, cloud, and big data workloads. This followed a series of sales, business development, and analyst/programmer positions with both technology vendors and end-user companies.
MongoDB Query API Webinar: FAQ
Last week we held a live webinar on the MongoDB Query API and our lineup of idiomatic programming language drivers. There were many great questions during the session, and in this post I want to share the most frequently asked ones with you. But first, here is a quick summary of what the MongoDB Query API is all about, in case you are unfamiliar with it.

What is MongoDB Query API?

MongoDB is built upon the document data model. The document model is designed to be intuitive, flexible, universal, and powerful. You can easily work with a variety of data, and because documents map directly to the objects in your code, they fit naturally into your app development experience. The MongoDB Query API lets you work with data as code and build any class of application faster by giving you extensive query capabilities natively in any modern programming language. Whether you’re working with transactional data, looking for search capabilities, or trying to run sophisticated real-time analytics, the MongoDB Query API can meet your needs.

The MongoDB Query API has some unique features, like its expressive query language, primary and secondary indexes, powerful aggregations and transformations, on-demand materialized views, and more, enabling you to work with data of any structure, at any scale. Some key features to highlight:

Indexes

To optimize any workload and query pattern, you can take advantage of a large set of index types, like multi-key (for arrays), wildcard, geospatial, and more, and index any field no matter how deeply nested it is within your documents. Fully featured secondary indexes are document-optimized and include partial, unique, case-insensitive, and sparse options.

Aggregation Pipeline

The aggregation pipeline lets you group, transform, and analyze your data to support any class of workload. You can choose from dozens of aggregation stages and over 200 operators to build modular and expressive pipelines.
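As a sketch of what such a pipeline looks like in application code, here is one built in Python against a hypothetical products collection (the collection and field names are made up). Note that the $match bounds come from ordinary variables, which is also how you would pass runtime values such as a price range into a pipeline.

```python
# Hypothetical pipeline: among in-stock products within a price range,
# compute the average price and product count per category, highest
# average first. min_price/max_price could come from user input.
min_price, max_price = 10, 100

pipeline = [
    {"$match": {"in_stock": True,
                "price": {"$gte": min_price, "$lte": max_price}}},
    {"$group": {"_id": "$category",
                "avg_price": {"$avg": "$price"},
                "count": {"$sum": 1}}},
    {"$sort": {"avg_price": -1}},
]

# With pymongo this would run as: db.products.aggregate(pipeline)
```

Because the pipeline is plain data in your language, stages can be composed, reused, and parameterized like any other value in your program.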
You can also use low-code tools like MongoDB Compass to drag and drop stages, examine intermediate output, and export pipelines to your programming language of choice.

On-Demand Materialized Views

The powerful $merge aggregation stage allows you to combine the results of your aggregation pipeline with existing collections to update and enrich data without having to recompute your entire data set. You can output results to sharded and unsharded collections while simultaneously defining indexes on each view.

Geospatial and Graph

- Utilize MongoDB’s native, built-in ability to store and run queries against geospatial data
- Use operators like $graphLookup to quickly traverse connected data sets

These are just a few of the features we highlighted in the MongoDB Query API webinar. No matter what type of application you are building or managing, the MongoDB Query API can meet your needs as the needs of your users and application change.

FAQs for MongoDB Query API

Here are the most common questions asked during the webinar:

**Do we have access to the data sets presented in this webinar?**
Yes, you can easily create a cluster and load the sample data sets into Atlas. Instructions on how to get started are here.

**How can I access full-text search capabilities?**
Text search is a standard feature of MongoDB Atlas. You can go to cloud.mongodb.com to try it out using sample data sets.

**Does the VS Code plugin support aggregation?**
Yes, it does. You can learn more about the VS Code plugin on our docs page.

**If you need to pass variable values in the aggregation, say a price range from the app as an input, how would you do that?**
This is no different than sending a query: since you construct your aggregation in your application, you just fill in the field you want with the value or variable in your code.

**Is there any best practice document on the MongoDB Query API for getting stable performance while using minimum resources?**
Yes, we have tips and tricks on optimizing performance by utilizing indexes, filters, and tools here.

**Does MongoDB support the use of multiple different indexes to meet the needs of a single query?**
Yes, this can be accomplished through the use of compound indexes. You can learn more about it in our docs here.

**If you work with big data and create a collection, is it smarter to create indexes before or after the collection is filled (regarding the time to create them)?**
It is better to create the indexes first, as they will take less time to create if the collection is empty, but you still have the option to create an index once the data is in the collection. MongoDB’s indexing capabilities bring several benefits:
- When building indexes, there is no impact on your app’s availability, since the index build operation is online.
- Flexibility to add and remove indexes at any time.
- Ability to hide indexes to evaluate the impact of removing them before officially dropping them.

**Where do I go to learn more?**
Here are some resources to help you get started:
- MongoDB Query API page
- MongoDB University
- MongoDB Docs

You can also check out the webinar replay here.