Update 2/25/2016: The new UI has changed the way this process looks (users and roles now live under the “More” menu on the Deployment page), but the idea is the same. Feel free to open a ticket or chat with us if you have any questions about this.
A question we are asked a lot is how to create a user that can tail the oplog using Cloud Manager Automation. This is a feature needed by Meteor users if they want to use MongoDB authentication to protect their database servers. Here’s how:
- Head to your Authorization & Roles page
- Create a new role (I called mine “oplogger”) that has permissions to read the local database
- Once you save this role, you can go to your “Authentication & Users” tab:
- Then you can create a user with the “oplogger” role (and any other roles you may want) and save it with a password you know
- Push your changes via “Review & Deploy” and then “Confirm & Deploy”
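For reference, the role created in the steps above corresponds to a standard MongoDB custom role document. A minimal sketch (the role name matches the example above; the empty `collection` string means every collection in the `local` database) would look like:

```json
{
  "role": "oplogger",
  "db": "admin",
  "privileges": [
    { "resource": { "db": "local", "collection": "" }, "actions": [ "find" ] }
  ],
  "roles": []
}
```

The user you create in the next step then simply carries `{ "role": "oplogger", "db": "admin" }` in its roles array, alongside whatever application roles it needs.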
Once you configure your Meteor installation (via MONGO_OPLOG_URL) to connect with the new credentials, your app should work as expected, giving you live tracking of changes.
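For a Meteor app, that typically means setting both connection strings before starting the server. A sketch, with placeholder hosts, users, and passwords (note that the oplog URL points at the local database, with authSource=admin because the user was created in the admin database):

```shell
# Placeholders: substitute your own users, passwords, host, and app database.
export MONGO_URL="mongodb://appUser:appPassword@db.example.com:27017/myapp"
export MONGO_OPLOG_URL="mongodb://oplogUser:oplogPassword@db.example.com:27017/local?authSource=admin"
```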
Securing MongoDB Part 3: Database Auditing and Encryption
Welcome back to our 4-part blog series presenting the best practices and controls available in MongoDB to help you create a secure, compliant database platform. In this installment, we’ll discuss database auditing and encryption. As a quick recap: in part 1, we looked at the general requirements for data security and regulatory compliance; in part 2, we reviewed MongoDB access control, enforcing authentication and authorization. In part 4, we’ll wrap up with environmental control and management. If you want a head start and would like to learn about all of these topics in one place, download the MongoDB Security Architecture guide.

MongoDB Auditing

The auditing framework provided as part of MongoDB Enterprise Advanced logs all access and actions executed against the database. It captures administrative actions (DDL) such as schema operations, authentication and authorization activities, and read and write (DML) operations. Administrators can construct and filter audit trails for any operation against MongoDB, whether DML, DCL or DDL, without having to rely on third-party tools. For example, it is possible to log and audit the identities of users who retrieved specific documents, and any changes made to the database during their session.

**Figure 1**: MongoDB Maintains an Audit Trail of Administrative Actions Against the Database

Administrators can configure MongoDB to log all actions, or apply filters to capture only specific events, users or roles. The audit log can be written to multiple destinations in a variety of formats: to the console or syslog (in JSON format), or to a file (JSON or BSON), which can then be loaded into MongoDB and analyzed to identify relevant events. MongoDB Enterprise Advanced also supports role-based auditing.
It is possible to log and report activities by specific role, such as userAdmin or dbAdmin, coupled with any inherited roles each user has, rather than having to extract activity for each individual administrator.

Auditing adds performance overhead to a MongoDB system. The amount depends on several factors, including which events are logged, where the audit log is maintained (for example, on an external storage device), and the audit log format. Users should weigh their application’s auditing needs against their performance goals in order to determine the optimal configuration. Learn more from the MongoDB auditing documentation.

MongoDB Encryption

Administrators can encrypt MongoDB data in motion over the network and at rest in permanent storage.

Network Encryption

Support for SSL/TLS allows clients to connect to MongoDB over an encrypted channel. Clients are defined as any entity capable of connecting to the MongoDB server, including:

- Users and administrators
- Applications
- MongoDB tools (e.g., mongodump, mongorestore, mongotop)
- Nodes that make up a MongoDB cluster, such as replica set members, query routers and config servers

It is possible to mix SSL/TLS and non-SSL/TLS connections on the same port, which can be useful when applying finer-grained encryption controls to internal and external traffic, as well as for avoiding downtime when upgrading a MongoDB cluster to support SSL. The TLS protocol is also supported with x.509 certificates.

MongoDB Enterprise Advanced supports FIPS 140-2 encryption when run in FIPS mode with a FIPS-validated cryptographic module. The mongod and mongos processes should be configured with the "sslFIPSMode" setting. In addition, these processes should be deployed on systems with an OpenSSL library configured with the FIPS 140-2 module. The MongoDB documentation includes a tutorial for configuring TLS/SSL connections.

Disk Encryption

There are multiple ways to encrypt data at rest with MongoDB.
Encryption can be implemented at the application level, or via external filesystem and disk encryption solutions. By introducing additional technology into the stack, both of these approaches add cost and complexity.

With the introduction of the Encrypted storage engine in MongoDB 3.2, protection of data at rest becomes an integral feature of the database. By natively encrypting database files on disk, administrators eliminate both the management and performance overhead of external encryption mechanisms. The storage engine provides an additional level of defense, allowing only staff with the appropriate database credentials access to encrypted data.

**Figure 2:** End to End Encryption – Data In-Flight and Data At-Rest

Using the Encrypted storage engine, the raw database content, referred to as plaintext, is encrypted using an algorithm that takes a random encryption key as input and generates ciphertext that can only be read if decrypted with the corresponding key. The process is entirely transparent to the application. MongoDB supports a variety of encryption schemes, with AES-256 (256-bit encryption) in CBC mode being the default; AES-256 in GCM mode is also supported. The encryption scheme can be configured for FIPS 140-2 compliance.

The storage engine encrypts each database with a separate key. The key-wrapping scheme in MongoDB wraps all of the individual internal database keys with one external master key per server. The Encrypted storage engine supports two key management options; in both cases, the only key managed outside of MongoDB is the master key:

- Local key management via a keyfile
- Integration with a third-party key management appliance via the KMIP protocol (recommended)

Most regulatory requirements mandate that encryption keys be rotated and replaced with a new key at least once annually. MongoDB can achieve key rotation without incurring downtime by performing rolling restarts of the replica set.
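The key management options above map onto mongod configuration settings. A minimal sketch for the local keyfile option (the keyfile path is a placeholder; a KMIP deployment would use the security.kmip settings instead of encryptionKeyFile):

```yaml
security:
  enableEncryption: true
  encryptionCipherMode: AES256-CBC
  # Placeholder path; the keyfile holds the master key and must be
  # readable only by the mongod user (e.g., mode 600).
  encryptionKeyFile: /etc/mongodb/encryption-keyfile
```

Note that local keyfile management leaves the master key on the same host as the data, which is why the KMIP option is the recommended choice for regulated environments.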
When using a KMIP appliance, the database files themselves do not need to be re-encrypted, avoiding the significant performance overhead imposed by key rotation in other databases: only the master key is rotated, and the internal database keystore is re-encrypted.

The Encrypted storage engine is designed for operational efficiency and performance:

- Compatible with WiredTiger’s document-level concurrency control and compression.
- Support for Intel’s AES-NI equipped CPUs for acceleration of the encryption/decryption process.
- As documents are modified, only updated storage blocks need to be encrypted, rather than the entire database.

Based on user testing, the Encrypted storage engine limits performance overhead to around 15% (this can vary based on the data types being encrypted), which can be much less than the observed overhead imposed by some filesystem encryption solutions. The Encrypted storage engine is based on WiredTiger and available as part of MongoDB Enterprise Advanced. Refer to the documentation to learn more, and see a tutorial on how to configure the storage engine.

MongoDB Atlas Encryption

As discussed in part 2 of this blog series, MongoDB Atlas is a database as a service for MongoDB, providing all of the features of the database without the operational heavy lifting. MongoDB Atlas has been engineered to deliver robust encryption controls. Data managed by the MongoDB Atlas service can be encrypted on the network and on disk. Support for TLS/SSL allows clients to connect to MongoDB over an encrypted channel, and all data transfers across the cluster are also encrypted. Data at rest can be protected using encrypted data volumes. Note that this uses the cloud provider’s native volume encryption solution rather than the MongoDB Encrypted storage engine. Review the MongoDB Atlas documentation for more information on configuring the built-in security controls.
Getting Started with MongoDB Security

With comprehensive controls for user rights management, auditing and encryption, coupled with management controls, MongoDB can meet the best practices and requirements discussed in this blog series. MongoDB Enterprise Advanced is the certified and supported production release of MongoDB, with advanced security features including Kerberos and LDAP authentication, encryption of data at rest, FIPS compliance, and maintenance of audit logs. These capabilities extend MongoDB’s security framework, which includes Role-Based Access Control, PKI certificates, Field-Level Redaction, and SSL/TLS data transport encryption.

In the final part of this blog series, we will dive into environmental control and database management. You can learn about all of these capabilities now by reading the MongoDB Security Architecture guide. If you want to try them for yourself, [download MongoDB Enterprise](https://www.mongodb.com/download-center?#enterprise), free of charge for evaluation and development.

About the Author - Mat Keep

Mat is a director within the MongoDB product marketing team, responsible for building the vision, positioning and content for MongoDB’s products and services, including the analysis of market trends and customer requirements. Prior to MongoDB, Mat was director of product management at Oracle Corp., with responsibility for the MySQL database in web, telecoms, cloud and big data workloads. This followed a series of sales, business development and analyst/programmer positions with both technology vendors and end-user companies.
4 Critical Features for a Modern Payments System
The business systems of many traditional banks rely on solutions that are decades old. These systems, built on outdated, inflexible relational databases, prevent traditional banks from competing with industry disruptors and with those already adopting more modern approaches. Such outdated systems are ill equipped to handle one of the core offerings that customers expect from banks today: instantaneous, cashless, digital payments.

The relational database management systems (RDBMSes) at the core of these applications require breaking data structures into a complex web of tables. Originally, this tabular approach was necessary to minimize memory and storage footprints. But as hardware has become cheaper and more powerful, those advantages have become less relevant, while the complexity of the model continues to cause data management and programmatic access issues. In this article, we’ll look at how a document database can reduce that complexity and provide the scalability, performance, and other features required by modern business applications.

Document model

To stay competitive, many financial institutions will need to update their foundational data architecture and introduce a data platform that enables a flexible, real-time, and enriched customer experience. Without this, new apps and other services won’t be able to deliver significant value to the business.

A document model eliminates the need for an intricate web of related tables. Adding new data to a document is relatively easy and quick, since it can be done without the usually lengthy reorganization that RDBMSes require. What makes a document database different from a relational database?

- Intuitive data model simplifies and accelerates development work.
- Flexible schema allows modification of fields at any time, without disruptive migrations.
- Expressive query language and rich indexing enhance query flexibility.
- Universal JSON standard lets you structure data to meet application requirements.
- Distributed approach improves resiliency and enables global scalability.

With a document database, there is no need for complicated multi-level joins for business objects, such as a bill or even a complex financial derivative, which often require object-relational mapping backed by complex stored procedures. Such stored procedures, written in custom languages, not only increase the cognitive load on developers but are also fiendishly hard to test, and missing automated tests are a major impediment to the adoption of agile software development methods.

Required features

Let’s look at four critical features that modern applications require for a successful overhaul of payment systems, and how MongoDB can help address those needs.

1. Scalability

Modern applications must operate at scales that were unthinkable just a few years ago, in terms of both transaction volume and the number of development and test environments needed to support rapid development. Evolving consumer trends have also put higher demands on payment systems: not only has the number of transactions increased, but the responsive experiences that customers expect have increased the query load, and data volumes are growing super-linearly. The fully transactional RDBMS model is ill suited to this level of performance and scale. Consequently, most organizations have created a plethora of caching layers, data warehouses, and aggregation and consolidation layers that add complexity, consume valuable developer time and cognitive load, and increase costs.

To work efficiently, developers also need to be able to quickly create and tear down development and test environments, which is only possible by leveraging the cloud. Traditional RDBMSes, however, are ill suited to cloud deployment: they are very sensitive to network latency, as business objects spread across multiple tables can only be retrieved through multiple sequential queries.
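To make the document model described earlier concrete, here is a minimal sketch in Python, with illustrative field names, of a payment represented as one self-contained document rather than as rows joined across several tables:

```python
import json

# A payment as a single document (illustrative field names). In a
# relational schema, this data would typically be split across
# payments, parties, and event tables and reassembled with joins.
payment = {
    "_id": "pay-000123",                     # hypothetical identifier
    "amount": {"value": 250.00, "currency": "USD"},
    "sender": {"name": "Alice", "iban": "DE89370400440532013000"},
    "receiver": {"name": "Bob", "iban": "GB29NWBK60161331926819"},
    "status": "settled",
    "events": [                              # full lifecycle embedded in place
        {"type": "initiated", "ts": "2023-05-01T10:00:00Z"},
        {"type": "settled", "ts": "2023-05-01T10:00:02Z"},
    ],
}

# The whole business object round-trips as one JSON value; no joins
# or object-relational mapping layer are needed to reassemble it.
doc = json.loads(json.dumps(payment))
print(doc["sender"]["name"], doc["status"], len(doc["events"]))
```

Adding a new field (say, a fraud-score annotation) to documents like this requires no schema migration, which is the flexibility the bullet list above refers to.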
MongoDB provides the scalability and performance that modern applications require. MongoDB’s developer data platform also ensures that the same data is available for other frequent consumption patterns, such as time series and full-text search, so there is no need for custom replication code between the operational and analytical datastores.

2. Resiliency

Many existing payment platforms were designed and architected when networking was expensive and slow. They depend on high-quality hardware with low redundancy for resilience. Not only is this approach very expensive, but hardware redundancy alone can never match the resiliency of a distributed system.

At the core of MongoDB’s developer data platform is MongoDB Atlas, the most advanced cloud database service on the market. MongoDB Atlas can run in any cloud, or even across multiple clouds, and offers 99.995% uptime. That allowance is far less downtime than is typically needed simply to apply necessary security updates to a monolithic legacy database system.

3. Locality and global coverage

Modern computing demands are at once ubiquitous and highly localized. Customers expect to be able to view their cash balances wherever they are, but client secrecy and data availability rules set strict guardrails on where data can be hosted and processed. The combination of geo-sharding, replication, and edge data addresses these problems, and MongoDB Atlas in combination with MongoDB for Mobile brings these tools to the developer.

During the global pandemic, more consumers than ever began using their smartphones as payment terminals. To enable these rich functions, data must be held at the edge. Developing the synchronization of that data is difficult, however, and not a differentiator for financial institutions. MongoDB for Mobile, combined with MongoDB’s geo-sharding capability on Atlas, offloads this complexity from the developer.

4. Diverse workloads and workload isolation

As more services and opportunities are developed, the demand to use the same data for multiple purposes grows. Although legacy systems are well suited to functions such as double-entry accounting, when the same information has to be served to a customer portal, the central credit engine, or an AI/ML algorithm, the limits of relational databases become obvious.

These limitations have led developers to follow what is often called a “best-of-breed” practice: data is replicated from the transactional core to a secondary, read-only datastore based on technology better suited to the particular workload. Typical examples are transactional data stores being copied nightly into data lakes to be available to AI/ML modelers. The additional hardware and licensing costs of this replication are not prohibitive, but the complexity of the replication and synchronization, and the complicated semantics introduced by batch dumps, slow down development and increase both development and maintenance costs. Often, three or more different technologies are necessary to cover the usage patterns. With its developer data platform, MongoDB has integrated this replication, eliminating the complexity for developers: when a document is updated in the transactional datastore, MongoDB automatically makes it available for full-text search and time series analytics.

The pace of change in the payments industry shows no signs of slowing. To stay competitive, it’s vital that you reassess your technology architecture. MongoDB Atlas is emerging as the technology of choice for many financial services firms that want to free their data, empower developers, and embrace disruption. Replacing legacy relational databases with a modern document database is a key step toward enhancing agility, controlling costs, better addressing consumer expectations, and achieving compliance with new regulations.
Learn more by downloading our white paper, “Modernize Your Payment Systems.”