MongoDB on EC2 Best Practices
In light of last week’s EBS issues, we wanted to make sure MongoDB users on EBS are configured to be as robust as possible. A basic setup would consist of a 3-node replica set. The nodes would be roughly laid out like this:

* A: us-east-1a, priority 1
* B: us-east-1b, priority 1
* C: us-west-1a, priority 0
During steady state, either A or B would be primary. If the primary went down for any reason (a system crash, or the loss of one availability zone), the other node in us-east would take over. This is guaranteed because the west coast node has a priority of 0, so it can never be elected primary. If instead the entire east coast region were lost, you would still have a full copy of the data on C. If you then decided to make the west coast your primary data center for the duration, you would just bring up a couple more nodes there and make a new replica set with the data from C. More information about running MongoDB on EC2 is available from our recent EC2 webinar. We are big fans of cloud computing in general and want MongoDB to only get more and more cloud-friendly over time. -Eliot
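The layout above maps to a replica set configuration along these lines — a sketch with hypothetical hostnames, expressed here as the Ruby hash you would translate into `rs.initiate()` in the mongo shell:

```ruby
# Sketch of the replica set config for the A/B/C layout above.
# Hostnames are hypothetical; priority 0 prevents the us-west node
# from ever being elected primary.
replica_set_config = {
  :_id => "myReplicaSet",
  :members => [
    { :_id => 0, :host => "east1a.example.com:27017", :priority => 1 }, # A: us-east-1a
    { :_id => 1, :host => "east1b.example.com:27017", :priority => 1 }, # B: us-east-1b
    { :_id => 2, :host => "west1a.example.com:27017", :priority => 0 }  # C: us-west-1a
  ]
}

# Only A and B are eligible to become primary:
eligible = replica_set_config[:members].select { |m| m[:priority] > 0 }
```

Because only the two us-east members have a non-zero priority, a failover can never promote the west coast node on its own.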
Getting started with VMware CloudFoundry, MongoDB and Rails
Listen to Jared Rosoff’s June 2 webinar “VMware Cloud Foundry with MongoDB”. Read about getting started with Cloud Foundry, MongoDB, and Node.js.

Last week, VMware launched Cloud Foundry: an open-source platform as a service. It’s pretty radical in that not only can you run your apps on infrastructure operated by VMware, you can also download Cloud Foundry itself and run it on your own machines! But what’s most awesome about Cloud Foundry is that it supports MongoDB right out of the box! In today’s post, we’re going to walk through the creation of a Rails application using MongoDB and Cloud Foundry. Here’s what we’re going to need to do:

1. Create a new Rails project (skipping Active Record)
2. Add the dependencies for MongoMapper
3. Build a simple app that interacts with the database
4. Set up our credentials so we can talk to Cloud Foundry’s MongoDB
5. Push our application to Cloud Foundry

Create our Rails project

First we’ll create our new Rails project. My preference is for MongoMapper, so I’m going to skip the ActiveRecord generators when I initialize my project.

```
$ rails new my_app --skip-active-record
```

Add the dependencies for MongoMapper

Next we need to set up our dependencies. Cloud Foundry will automatically detect that we’re a Rails app, and it will process our Gemfile if we’ve got one. So we need to add any dependencies to our Gemfile so they’ll get installed when we deploy. This is what our Gemfile looks like:

```ruby
source "http://rubygems.org"

gem "rails", "3.0.5"
gem "mongo_mapper"
gem "bson_ext"
```

I’m assuming you’re using Rails 3 here, but you can easily adapt these instructions for other versions.

Build a simple app that interacts with the database

Now we’ll write a little code to interact with the DB so we can be sure everything’s working.

```
$ script/rails generate scaffold messages message:string --orm mongo_mapper
```

I’m also going to set the root of my Rails app to be our new messages controller and remove our public/index.html.
Here’s my routes file:

```ruby
CloudFoundryRailsTutorial::Application.routes.draw do
  resources :messages
  root :to => "messages#index"
end
```

Be sure to delete your public/index.html!

Set up our credentials so we can talk to Cloud Foundry’s MongoDB

MongoMapper comes with a generator that will create a basic config/mongo.yml configuration file for us.

```
$ script/rails generate mongo_mapper:config
```

The file looks like this:

```yaml
defaults: &defaults
  host: 127.0.0.1
  port: 27017

development:
  <<: *defaults
  database: myapp_development

test:
  <<: *defaults
  database: myapp_test

# set these environment variables on your prod server
production:
  <<: *defaults
  database: myapp
  username: <%= ENV['MONGO_USERNAME'] %>
  password: <%= ENV['MONGO_PASSWORD'] %>
```

We need to modify this so that it can talk to Cloud Foundry’s infrastructure. When Cloud Foundry runs your app, it passes in a bunch of information through an environment variable. We need to pull the host, port, username, and password for MongoDB out of this environment variable. After some modification, the production section of your config/mongo.yml should look something like this:

```yaml
production:
  host: <%= JSON.parse( ENV['VCAP_SERVICES'] )['mongodb-1.8'].first['credentials']['hostname'] rescue 'localhost' %>
  port: <%= JSON.parse( ENV['VCAP_SERVICES'] )['mongodb-1.8'].first['credentials']['port'] rescue 27017 %>
  database: <%= JSON.parse( ENV['VCAP_SERVICES'] )['mongodb-1.8'].first['credentials']['db'] rescue 'cloud_foundry_mongodb_tutorial' %>
  username: <%= JSON.parse( ENV['VCAP_SERVICES'] )['mongodb-1.8'].first['credentials']['username'] rescue '' %>
  password: <%= JSON.parse( ENV['VCAP_SERVICES'] )['mongodb-1.8'].first['credentials']['password'] rescue '' %>
```

Note: the “rescue” clauses are there so that you can run this app locally. If you don’t include them and you try to run the app outside of Cloud Foundry, you’ll get an exception because there’s no VCAP_SERVICES environment variable passed into your app.
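To see what that ERB is doing, here’s the same lookup as plain Ruby against a hand-built VCAP_SERVICES payload. The values below are made up for illustration; Cloud Foundry supplies the real ones at runtime:

```ruby
require 'json'

# A fabricated VCAP_SERVICES payload with the same shape the mongo.yml
# above expects; real credentials are injected by Cloud Foundry.
ENV['VCAP_SERVICES'] = {
  'mongodb-1.8' => [
    { 'credentials' => {
        'hostname' => '172.30.48.71', 'port' => 25_001,
        'db'       => 'db-abc123',    'username' => 'user-xyz',
        'password' => 'secret' } }
  ]
}.to_json

# Dig out the credentials hash, falling back to local defaults if the
# environment variable is missing (i.e. when running outside Cloud Foundry).
creds = JSON.parse(ENV['VCAP_SERVICES'])['mongodb-1.8'].first['credentials']
host  = (creds['hostname'] rescue 'localhost')
port  = (creds['port']     rescue 27017)
```

When VCAP_SERVICES is absent, each `rescue` falls back to the local defaults — which is exactly what the production stanza above relies on.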
Push our application to Cloud Foundry

Now we’re ready to test things out. If you haven’t already, you should go ahead and install the vmc command line tool. There’s a getting started with VMC guide here. Here’s what it looked like when I deployed my app:

```
redeye:myapp jsr$ vmc push --runtime ruby19
Would you like to deploy from the current directory? [Yn]: y
Application Name: mongodb-on-cf-demo
Application Deployed URL: 'mongodb-on-cf-demo.cloudfoundry.com'?
Detected a Rails Application, is this correct? [Yn]: y
Memory Reservation [Default:256M] (64M, 128M, 256M or 512M) 256M
Creating Application: OK
Would you like to bind any services to 'mongodb-on-cf-demo'? [yN]: y
Would you like to use an existing provisioned service [yN]? n
The following system services are available:
1. mysql
2. mongodb
3. redis
Please select one you wish to provision: 2
Specify the name of the service [mongodb-a8a43]:
Creating Service: OK
Binding Service: OK
Uploading Application:
  Checking for available resources: OK
  Processing resources: OK
  Packing application: OK
  Uploading (5K): OK
Push Status: OK
Starting Application: OK
redeye:myapp jsr$
```

Now you can point your browser to http://mongodb-on-cf-demo.cloudfoundry.com/ and you should see the list of messages! Congrats! You’ve got your first Rails app up and running on Cloud Foundry and MongoDB! The code for this MongoDB + Rails + Cloud Foundry app is up on GitHub. Happy coding, and be sure to tell us about all the awesome apps you build on Cloud Foundry!

– Jared Rosoff
Accelerating to T+1 - Have You Got the Speed and Agility Required to Meet the Deadline?
On May 28, 2024, the Securities and Exchange Commission (SEC) will implement a move to T+1 settlement for standard securities trades, shortening the settlement period from two business days after the trade date to one business day. The change aims to address market volatility and reduce credit and settlement risk. The shortened T+1 settlement cycle can potentially decrease market risks, but most firms' current back-office operations cannot handle this change. This is due to several challenges with existing systems, including:

* Manual processes will be under pressure due to the shortened settlement cycle
* Batch data processing will not be feasible

To prepare for T+1, firms should take urgent action to address these challenges:

* Automate manual processes to streamline them and improve operational efficiency
* Replace batch processing with event-based, real-time processing for faster settlement

In this blog, we will explore how MongoDB can be leveraged to accelerate manual process automation and replace batch processes to enable faster settlement.

What is a T+1 and T+2 settlement?

T+1 settlement refers to the practice of settling a transaction on the trading day after it is executed, for transactions executed before the 4:30 pm cutoff. For example, if a transaction is executed on Monday before 4:30 pm, the settlement will occur on Tuesday. This settlement process involves the transfer of securities and/or funds from the seller's account to the buyer's account. This contrasts with T+2 settlement, where trades are settled two trading days after the trade date. According to SEC Chair Gary Gensler, “T+1 is designed to benefit investors and reduce the credit, market, and liquidity risks in securities transactions faced by market participants.”

Overcoming T+1 transition challenges with MongoDB: Two unique solutions
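Before turning to those solutions, the settlement-cycle arithmetic above can be made concrete. A minimal sketch in Ruby, skipping weekends only (real settlement calendars also exclude market holidays):

```ruby
require 'date'

# Add `days` business days to a trade date, skipping weekends.
# Simplification: market holidays are ignored in this sketch.
def settlement_date(trade_date, days)
  date = trade_date
  while days > 0
    date += 1
    days -= 1 unless date.saturday? || date.sunday?
  end
  date
end

monday = Date.new(2024, 6, 3)
settlement_date(monday, 1)  # Tuesday under T+1
settlement_date(monday, 2)  # Wednesday under T+2

friday = Date.new(2024, 6, 7)
settlement_date(friday, 1)  # the following Monday under T+1
```

The Friday case shows why the compressed cycle bites: under T+1 a Friday trade must settle the very next business day, leaving a single weekend to resolve any exceptions.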
1. The multi-cloud developer data platform accelerates manual process automation

Legacy settlement systems may involve manual intervention for various tasks, including manual matching of trades, manual input of settlement instructions, allocation emails to brokers, reconciliation of trade and settlement details, and manual processing of paper-based documents. These manual processes can be time-consuming and prone to errors. MongoDB (Figure 1 below) can help accelerate developer productivity in several ways:

* Easy to use: MongoDB is designed to be easy to use, which can reduce the learning curve for developers who are new to the database.
* Flexible data model: Allows developers to store data in a way that makes sense for their application. This can help accelerate development by reducing the need for complex data transformations or ORM mapping.
* Scalability: MongoDB is highly scalable, which means it can handle large volumes of trade data and support high levels of concurrency.
* Rich query language: Allows developers to perform complex queries without writing much code. MongoDB's Apache Lucene-based search can also help screen large volumes of data against sanctions and watch lists in real time.

Figure 1: MongoDB's developer data platform

Discover the developer productivity calculator. Developers spend 42% of their work week on maintenance and technical debt. How much does this cost your organization? Calculate how much you can save by working with MongoDB.

2. An operational trade store to replace slow batch processing

Back-office technology teams face numerous challenges when consolidating transaction data due to the complexity of legacy batch ETL and integration jobs. Legacy databases have long been the industry standard but are not optimal for post-trade management due to limitations such as rigid schemas, difficulty in horizontal scaling, and slow performance.
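To illustrate the flexible data model point in the post-trade context: a trade and its allocations can live in a single document shaped like the application's objects. The shape below is hypothetical, not a prescribed schema:

```ruby
# Hypothetical trade document: allocations are nested inside the trade,
# so reading a full confirmation needs no joins or ORM mapping.
trade = {
  :trade_id        => "T-20240528-0001",
  :symbol          => "ACME",
  :side            => "BUY",
  :quantity        => 10_000,
  :trade_date      => "2024-05-28",
  :settlement_date => "2024-05-29", # T+1
  :allocations     => [
    { :account => "ACCT-1", :quantity => 6_000 },
    { :account => "ACCT-2", :quantity => 4_000 }
  ]
}

# A simple consistency check across the nested allocations:
allocated = trade[:allocations].inject(0) { |sum, a| sum + a[:quantity] }
```

Because the allocations travel with the trade, a matching or reconciliation step can validate the whole confirmation in one read instead of stitching rows together from several tables.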
For T+1 settlement, it is crucial to have real-time availability of consolidated positions across assets, geographies, and business lines. It is important to note that waiting for the end of a batch cycle will not meet this requirement. As a solution, MongoDB customers use an operational trade data store (ODS) to overcome these challenges and enable real-time data sharing. By using an ODS, financial firms can improve their operational efficiency by consolidating transaction data in real time. This allows them to streamline their back-office operations, reduce the complexity of ETL and integration processes, and avoid the limitations of relational databases. As a result, firms can make faster, more informed decisions and gain a competitive edge in the market. Using MongoDB (Figure 2 below), trade desk data is copied into an ODS in real time through change data capture (CDC), creating a centralized trade store that acts as a live source for downstream trade settlement and compliance systems. This enables faster settlement times, improves data quality and accuracy, and supports full transactionality. As the ODS evolves, it becomes a "system of record/golden source" for many back office and middle office applications, and powers AI/ML-based real-time fraud prevention applications and settlement failure risk systems. Figure 2: Centralized Trade Data Store (ODS) Managing trade settlement failure risk is critical to driving efficiency across the entire securities market ecosystem. Luckily, MongoDB's integration capabilities (Figure 3 below) with modern AI and ML platforms enable banks to develop AI/ML models that make managing potential trade settlement fails much more efficient from a cost, time, and quality perspective. Additionally, predictive analytics allow firms to project availability and demand and optimize inventories for lending and borrowing.
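The CDC flow into the ODS can be sketched in miniature. Here, change events (shaped loosely like MongoDB change stream documents — an assumption for illustration) are applied in order to an in-memory stand-in for the trade store:

```ruby
# Miniature CDC replay: each event emitted by the trade desk systems is
# applied to the "ODS", keeping the consolidated view current without
# waiting for a batch cycle.
ods = {}

events = [
  { :op => :insert, :id => "T-1",
    :doc => { :symbol => "ACME", :status => "PENDING_SETTLEMENT" } },
  { :op => :insert, :id => "T-2",
    :doc => { :symbol => "INIT", :status => "PENDING_SETTLEMENT" } },
  { :op => :update, :id => "T-1", :fields => { :status => "SETTLED" } }
]

events.each do |event|
  case event[:op]
  when :insert then ods[event[:id]] = event[:doc].dup
  when :update then ods[event[:id]].merge!(event[:fields])
  end
end
```

Downstream settlement and compliance systems read the up-to-date documents in `ods` directly, which is the property that makes the store a live source rather than an end-of-day snapshot.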
Figure 3: Event-driven application for real-time monitoring

Summary

Financial institutions face significant challenges in reducing settlement duration from two business days (T+2) to one (T+1), particularly when it comes to addressing existing back-office issues. However, it's crucial for them to achieve this goal within a year, as required by the SEC. This blog highlights how MongoDB's developer data platform can help financial institutions automate manual processes and adopt a best-practice approach to replace batch processes with a real-time data store repository (ODS). With the help of MongoDB's developer data platform and best practices, financial institutions can achieve operational excellence and meet the SEC's T+1 settlement deadline of May 28, 2024. In the event of T+0 settlement cycles becoming a reality, institutions with the most flexible data platform will be better equipped to adjust. Top banks in the industry are already adopting MongoDB's developer data platform to modernize their infrastructure, leading to reduced time-to-market, lower total cost of ownership, and improved developer productivity.

Looking to learn more about how you can modernize or what MongoDB can do for you?

* Zero downtime migrations using MongoDB's flexible schema
* Accelerate your digital transformation with these 5 Phases of Banking Modernization
* Reduce time-to-market for your customer lifecycle management applications
* MongoDB's financial services hub