Nominate a Female Innovator to Attend MongoDB World
March 7, 2016
We can’t wait for this year’s MongoDB World, and we are working hard behind the scenes. We want to make sure our biggest event is a fun-filled learning experience, and that attendees reflect our diverse community. As part of our commitment to changing the gender ratio in technology, we're excited to launch the Female Innovators at MongoDB World initiative.
Nominees will be notified of their nomination by April 15, 2016. The first 50 nominees to register will receive complimentary admission to MongoDB World, June 28-29 in NYC.
At the event they’ll be able to:
- Network with other female innovators through sessions and activities at the Women Who Code Lounge
- Meet the engineers behind the technology
- Learn best practices to amp up their skills, and the inside scoop on who’s doing what with MongoDB
- Explore community activities, including the Leaf Lounge, poster sessions, and our famous after party
Fill out the short nomination form to invite a woman who works, or aspires to work, in tech to attend our global conference. You can also nominate yourself! Act fast: a limited number of tickets are available.
The deadline to submit a nomination is Friday, April 8. Eligible nominees will be notified of their acceptance status by April 15.
- Nominees must be 18 years old or older, and must identify as female.
- Open to new participants of MongoDB World only.
- Can you nominate yourself? Absolutely!
View terms and conditions.
At-Rest Encryption in MongoDB 3.2: Features and Performance
Introduction

MongoDB 3.2 introduces a new option for at-rest data encryption. In this post we take a closer look at the forces driving the need for increased encryption, the MongoDB features for encrypting your data, and the performance characteristics of the new Encrypted Storage Engine.

Data security is top of mind for many executives due to increased attacks, as well as a series of data breaches in recent years that have negatively impacted several high-profile brands. For example, in 2015 a major health insurer was the victim of a massive data breach in which criminals gained access to the Social Security numbers of more than 80 million people, resulting in an estimated cost of $100M. In the end, one of the critical vulnerabilities was that the health insurer did not encrypt sensitive patient data stored at rest.

Data encryption is a key part of a comprehensive strategy to protect sensitive data. However, encrypting and decrypting data is potentially very resource intensive, so it is important to understand the performance characteristics of your encryption technology in order to conduct capacity planning accurately.

MongoDB 3.2: Delivering Native Encryption At-Rest

MongoDB 3.2 provides a comprehensive encryption solution that protects your data, both in-flight and at-rest. For encryption-in-flight, MongoDB uses SSL/TLS, which secures communication between your database and clients, as well as intra-cluster traffic between nodes. Learn more about MongoDB and SSL/TLS.

With the latest version, 3.2, MongoDB also includes a fully integrated encryption-at-rest solution that reduces cost and performance overhead. Encryption-at-rest is part of MongoDB Enterprise Advanced only, but is freely available for development and evaluation. We will take a closer look at this new option later in the post. Before 3.2, the primary way to provide encryption-at-rest was to use third-party tools that encrypt files at the application, file system, or disk level.
These methods work well with MongoDB but can add extra cost, complexity, and overhead. Additionally, disk and file system encryption might not protect against every scenario. While disk-level encryption protects against someone removing the physical drive from the machine, it does not protect against someone who has physical access to the machine and can read the mounted file system. Similarly, file system encryption closes that gap, but does not preclude someone from gaining unauthorized access through the application or database layer. Database encryption mitigates these problems by adding an extra layer of security: even an administrator with access to the file system must first authenticate to the database before decrypting the data files.

MongoDB’s Encrypted Storage Engine supports a variety of encryption algorithms from the OpenSSL library. AES-256 in CBC mode is the default; other options include GCM mode, as well as a FIPS mode for FIPS 140-2 compliance. Encryption is performed at the page level to provide optimal performance: instead of encrypting or decrypting the entire file or database on each change, only the modified pages need to be processed.

Additionally, the Encrypted Storage Engine provides safe and secure management of the encryption keys. Each encrypted node contains an internal database key that is used to encrypt and decrypt the data files. The database key is wrapped with an external master key, which must be given to the node for it to initialize. MongoDB uses operating system protection mechanisms, such as VirtualLock and mlock, to lock the process’s virtual memory space into memory, ensuring that keys are never written or paged to disk in unencrypted form.

Evaluating Performance

Encrypting and decrypting data requires additional resources, and administrators will want to understand the performance impact to adjust capacity planning accordingly.
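To make this concrete, enabling the Encrypted Storage Engine is primarily a configuration change. The fragment below is a minimal sketch, assuming a local master key file for development and evaluation (production deployments would typically point at a KMIP key management server instead); the key file path is a placeholder:

```yaml
# Minimal mongod configuration sketch enabling encryption-at-rest
# (MongoDB Enterprise only; the key file path is a placeholder)
storage:
  engine: wiredTiger
security:
  enableEncryption: true
  encryptionCipherMode: AES256-CBC   # default; AES256-GCM is also available
  encryptionKeyFile: /path/to/master-key
```

The master key wraps the node's internal database key, so rotating the master key does not require re-encrypting the data files themselves.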
In our Encrypted Storage Engine benchmarking tests, we saw an average throughput overhead between 10% and 20%. Let’s take a closer look at some benchmark data for Insert-Only, Read-Only, and 50%-Read/50%-Insert workloads.

For our benchmark, we used Intel Xeon X5675 CPUs, which support the AES-NI instruction set, and ran the CPUs at high load (100%). We evaluated four configurations: “Working Set Fits In Memory”, “Working Set Exceeds Memory”, “Encrypted”, and “Unencrypted”. The “working set” refers to the amount of data and indexes actively used by your system.

Let’s first look at an Insert-Only workload. With a high CPU load, we see an encryption overhead of around 16%.

Now, let’s take a look at the results of our Read-Only workload. We ran the benchmark under two scenarios, “Working Set Fits In Memory” and “Working Set Exceeds Memory”. From the benchmark results, the decryption overhead for a Read-Only workload ranges between 5% and 20%.

Lastly, here are the benchmark results for a 50%-Read/50%-Insert workload. Here the encryption overhead ranges between 12% and 20%.

In addition to throughput, latency is also a critical component of encryption overhead. In our benchmark, average latency overheads ranged between 6% and 30%. Though the average latency overhead was slightly higher than the throughput overhead, latencies were still very low: all under 1ms.
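As a sanity check, the latency overhead percentages can be recomputed from the raw average latencies in the benchmark table that follows (a quick sketch using the published averages):

```python
# Recompute encryption latency overhead from the benchmark averages:
# overhead % = (encrypted - unencrypted) / unencrypted * 100
benchmarks = {
    "Insert Only": (32.4, 40.9),
    "Read Only, fits in memory": (230.5, 245.0),
    "Read Only, exceeds memory": (447.0, 565.8),
    "50/50, fits in memory": (276.1, 317.4),
    "50/50, exceeds memory": (722.3, 936.5),
}

def overhead_pct(unencrypted: float, encrypted: float) -> float:
    """Relative increase in average latency, in percent."""
    return (encrypted - unencrypted) / unencrypted * 100

for workload, (plain, enc) in benchmarks.items():
    print(f"{workload}: {overhead_pct(plain, enc):.1f}% overhead")
```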
| Workload | Unencrypted Avg Latency (µs) | Encrypted Avg Latency (µs) | % Overhead |
| --- | --- | --- | --- |
| Insert Only | 32.4 | 40.9 | 26.5% |
| Read Only, Working Set Fits In Memory | 230.5 | 245.0 | 6.3% |
| Read Only, Working Set Exceeds Memory | 447.0 | 565.8 | 26.6% |
| 50% Insert/50% Read, Working Set Fits In Memory | 276.1 | 317.4 | 15.0% |
| 50% Insert/50% Read, Working Set Exceeds Memory | 722.3 | 936.5 | 29.7% |

MongoDB Atlas Encryption At Rest

MongoDB Atlas is a database as a service that provides all the features of the database without the heavy lifting of operational setup. Developers no longer need to worry about provisioning, configuration, patching, upgrades, backups, and failure recovery. Atlas offers elastic scalability, either by scaling up on a range of instance sizes or scaling out with automatic sharding, all with no application downtime.

MongoDB Atlas provides encryption of data in-flight over the network and at rest on disk. Data-at-rest can optionally be protected using encrypted data volumes, which secure your data without the need for you to build, maintain, and secure your own key management infrastructure.

Summary

In this post, we looked at a few workloads to determine the impact of encryption with MongoDB's new Encrypted Storage Engine. The results demonstrate that the Encrypted Storage Engine provides a secure way to encrypt your data-at-rest while maintaining exceptional performance. With the Encrypted Storage Engine and diligent capacity planning, you shouldn't have to trade off high performance against strong security when encrypting data-at-rest. For users interested in a database as a service, MongoDB Atlas provides encrypted data volumes to ensure your data at rest is secure.

Environment

These tests were conducted on bare metal servers.
Each server had the following specification:

- CPU: 3.06GHz Intel Xeon Westmere (X5675 Hexcore)
- RAM: 6x16GB Kingston DDR3 2Rx4
- OS: Ubuntu 14.04-64
- Network Card: SuperMicro AOC-STGN-i2S
- Motherboard: SuperMicro X8DTN+_R2
- Document Size: 1KB
- Workload: YCSB
- Version: MongoDB 3.2

Learn More

Learn about encryption and all of the security features available for MongoDB by reading our MongoDB Security Architecture Guide.

Additional Resources

Try MongoDB’s new Encrypted Storage Engine: users can try it free for unlimited development and evaluation. Read our installing MongoDB Enterprise 3.2 documentation.

About the Author - Jason Ma

Jason is a Principal Product Marketing Manager based in Palo Alto, with extensive experience in technology hardware and software. He previously worked for SanDisk in Corporate Strategy doing M&A and investments, and as a Product Manager on the InfiniFlash all-flash JBOF. Before SanDisk, he worked as a hardware engineer at Intel and Boeing. Jason has a BSEE from UC San Diego, an MSEE from the University of Southern California, and an MBA from UC Berkeley.
Accelerating to T+1 - Have You Got the Speed and Agility Required to Meet the Deadline?
On May 28, 2024, the Securities and Exchange Commission (SEC) will implement a move to T+1 settlement for standard securities trades, shortening the settlement period from two business days after the trade date to one. The change aims to address market volatility and reduce credit and settlement risk.

The shortened T+1 settlement cycle can potentially decrease market risks, but most firms' current back-office operations cannot handle this change. This is due to several challenges with existing systems, including:

- Manual processes will be under pressure due to the shortened settlement cycle
- Batch data processing will not be feasible

To prepare for T+1, firms should take urgent action to address these challenges:

- Automate manual processes to streamline them and improve operational efficiency
- Replace batch processing with event-based, real-time processing for faster settlement

In this blog, we will explore how MongoDB can be leveraged to accelerate manual process automation and replace batch processes to enable faster settlement.

What is a T+1 and T+2 settlement?

T+1 settlement refers to the practice of settling a trade on the trading day after it is executed: for example, a transaction executed on Monday before the 4:30 pm cutoff settles on Tuesday. The settlement process involves the transfer of securities and/or funds from the seller's account to the buyer's account. This contrasts with T+2 settlement, where trades are settled two trading days after the trade date.

According to SEC Chair Gary Gensler, “T+1 is designed to benefit investors and reduce the credit, market, and liquidity risks in securities transactions faced by market participants.”
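To make the cycle concrete, here is a minimal sketch of the settlement-date arithmetic described above. It assumes the 4:30 pm cutoff rolls a late trade to the next trading day, and it skips weekends only (a real calendar would also observe exchange holidays):

```python
from datetime import date, datetime, time, timedelta

CUTOFF = time(16, 30)  # 4:30 pm trade cutoff

def next_trading_day(d: date) -> date:
    """Advance to the next weekday (exchange holidays ignored for simplicity)."""
    d += timedelta(days=1)
    while d.weekday() >= 5:  # 5 = Saturday, 6 = Sunday
        d += timedelta(days=1)
    return d

def settlement_date(executed_at: datetime, cycle_days: int) -> date:
    """Settlement date under a T+N cycle; post-cutoff trades book to the next day."""
    trade_date = executed_at.date()
    if executed_at.time() > CUTOFF:
        trade_date = next_trading_day(trade_date)
    for _ in range(cycle_days):
        trade_date = next_trading_day(trade_date)
    return trade_date

# A trade executed Monday 2024-06-03 at 10:00 settles Tuesday under T+1
# and Wednesday under T+2.
trade = datetime(2024, 6, 3, 10, 0)
print(settlement_date(trade, 1))  # 2024-06-04
print(settlement_date(trade, 2))  # 2024-06-05
```

Note how a Friday trade settles the following Monday under T+1, which is exactly why real-time, rather than batch, back-office processing matters when the cycle shrinks.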
Overcoming T+1 transition challenges with MongoDB: Two unique solutions

1. The multi-cloud developer data platform accelerates manual process automation

Legacy settlement systems may involve manual intervention for various tasks, including manual matching of trades, manual input of settlement instructions, allocation emails to brokers, reconciliation of trade and settlement details, and manual processing of paper-based documents. These manual processes are time-consuming and prone to errors. MongoDB (Figure 1 below) can help accelerate developer productivity in several ways:

- Easy to use: MongoDB is designed to be easy to use, which reduces the learning curve for developers who are new to the database.
- Flexible data model: Allows developers to store data in a way that makes sense for their application, reducing the need for complex data transformations or ORM mapping.
- Scalability: MongoDB is highly scalable, which means it can handle large volumes of trade data and support high levels of concurrency.
- Rich query language: Allows developers to perform complex queries without writing much code. MongoDB's Apache Lucene-based search can also help screen large volumes of data against sanctions and watch lists in real time.

Figure 1: MongoDB's developer data platform

Discover the developer productivity calculator. Developers spend 42% of their work week on maintenance and technical debt. How much does this cost your organization? Calculate how much you can save by working with MongoDB.

2. An operational trade store to replace slow batch processing

Back-office technology teams face numerous challenges when consolidating transaction data due to the complexity of legacy batch ETL and integration jobs. Legacy databases have long been the industry standard but are not optimal for post-trade management due to limitations such as rigid schemas, difficulty in horizontal scaling, and slow performance.
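In contrast with the rigid rows just described, a trade and its allocations can live together in one document and be retrieved with a dict-style filter. The sketch below uses plain Python dicts and a toy equality-only matcher so it runs standalone; with pymongo the same filter would simply be passed to `collection.find()` (the collection layout and field names here are hypothetical):

```python
# A trade modeled as a single document: nested allocations travel with the
# trade, so no joins are needed to assemble settlement instructions.
trades = [
    {"trade_id": "T-1001", "symbol": "ACME", "status": "unmatched",
     "qty": 500, "allocations": [{"account": "A-1", "qty": 500}]},
    {"trade_id": "T-1002", "symbol": "ACME", "status": "matched",
     "qty": 200, "allocations": [{"account": "A-2", "qty": 200}]},
]

def matches(doc: dict, query: dict) -> bool:
    """Toy equality-only stand-in for a MongoDB find() filter."""
    return all(doc.get(field) == value for field, value in query.items())

# With pymongo this would be: collection.find({"symbol": "ACME", "status": "unmatched"})
unmatched = [t for t in trades if matches(t, {"symbol": "ACME", "status": "unmatched"})]
print([t["trade_id"] for t in unmatched])
```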
For T+1 settlement, it is crucial to have real-time availability of consolidated positions across assets, geographies, and business lines, and an end-of-day batch cycle cannot meet this requirement. As a solution, MongoDB customers use an operational trade data store (ODS) to overcome these challenges and share data in real time.

By using an ODS, financial firms can improve their operational efficiency by consolidating transaction data in real time. This allows them to streamline their back-office operations, reduce the complexity of ETL and integration processes, and avoid the limitations of relational databases. As a result, firms can make faster, more informed decisions and gain a competitive edge in the market.

Using MongoDB (Figure 2 below), trade desk data is copied into an ODS in real time through change data capture (CDC), creating a centralized trade store that acts as a live source for downstream trade settlement and compliance systems. This enables faster settlement times, improves data quality and accuracy, and supports full transactionality. As the ODS evolves, it becomes a "system of record" or "golden source" for many back-office and middle-office applications, and powers AI/ML-based real-time fraud prevention and settlement-risk-failure systems.

Figure 2: Centralized Trade Data Store (ODS)

Managing trade settlement failure risk is critical to driving efficiency across the entire securities market ecosystem. Luckily, MongoDB's integration capabilities (Figure 3 below) with modern AI and ML platforms enable banks to develop AI/ML models that make managing potential trade settlement fails much more efficient from a cost, time, and quality perspective. Additionally, predictive analytics allow firms to project availability and demand and optimize inventories for lending and borrowing.
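The real-time consolidation pattern described above can be sketched as a small event fold: each incoming trade event immediately updates consolidated positions instead of waiting for an end-of-day batch run. This is a simplified, in-memory illustration; in production the events could arrive via MongoDB change streams (e.g. `collection.watch()`) feeding the ODS:

```python
from collections import defaultdict

# Simulated CDC events; in production these might arrive from a MongoDB
# change stream on the trade-capture collection.
events = [
    {"account": "A-1", "symbol": "ACME", "side": "buy",  "qty": 500},
    {"account": "A-1", "symbol": "ACME", "side": "sell", "qty": 200},
    {"account": "A-2", "symbol": "XYZ",  "side": "buy",  "qty": 100},
]

# Consolidated positions keyed by (account, symbol), updated per event
positions: defaultdict = defaultdict(int)

def apply_event(event: dict) -> None:
    """Fold one trade event into the consolidated position in real time."""
    delta = event["qty"] if event["side"] == "buy" else -event["qty"]
    positions[(event["account"], event["symbol"])] += delta

for e in events:
    apply_event(e)

print(dict(positions))
```

Because positions are always current, downstream settlement and compliance systems can query them at any point in the trading day rather than after the batch window closes.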
Figure 3: Event-driven application for real-time monitoring

Summary

Financial institutions face significant challenges in reducing settlement duration from two business days (T+2) to one (T+1), particularly when it comes to addressing existing back-office issues. However, it's crucial for them to achieve this goal within a year, as required by the SEC. This blog highlights how MongoDB's developer data platform can help financial institutions automate manual processes and adopt a best-practice approach to replace batch processes with a real-time data store (ODS). With the help of MongoDB's developer data platform and best practices, financial institutions can achieve operational excellence and meet the SEC's T+1 settlement deadline of May 28, 2024. And should T+0 settlement cycles become a reality, institutions with the most flexible data platform will be best equipped to adjust.

Top banks in the industry are already adopting MongoDB's developer data platform to modernize their infrastructure, leading to reduced time-to-market, lower total cost of ownership, and improved developer productivity.

Looking to learn more about how you can modernize, or what MongoDB can do for you?

- Zero downtime migrations using MongoDB’s flexible schema
- Accelerate your digital transformation with these 5 Phases of Banking Modernization
- Reduce time-to-market for your customer lifecycle management applications
- MongoDB’s financial services hub