What We're Reading This Week
Here’s what we’re reading this week at MongoDB:
ComputerWorld: The Weather Channel forecasts heavy NoSQL ahead
Crain’s: 2014 Fast 50
Diginomica: Open source or bust – developer engagement, MongoDB style
Eliot’s Ramblings: The Road to MMS Automation
eWeek: MongoDB Management Service Tweaked for Flexibility
GigaOm: MongoDB targets ‘massive’ revenue stream with new cloud-based management service
MongoDB Community Blog: MongoDB Management Service Re-imagined: The Easiest Way to Run MongoDB
MongoDB Corporate Blog: Too Many Projects, Too Little Time: Deliver MongoDB-as-a-Service
Too Many Projects, Too Little Time: Deliver MongoDB-as-a-Service
What do a leading investment bank, a PaaS for mobile app developers, and one of the largest US government departments all have in common? Each needed to power a new wave of applications with a single database, and it didn't make sense for each database instance to run on its own infrastructure. So they built a shared service to standardize the way multiple applications and project teams consume the database. That database was MongoDB, and what they built was MongoDB-as-a-Service.

These organizations are not alone. MongoDB is the fastest-growing database community on the planet. As more companies move from initial pilots to full-scale production, IT groups are challenged to bring order to chaos: they need to maintain consistent operational best practices and enforce corporate governance mandates and business-unit accountability across multiple projects. This is where delivering MongoDB-as-a-Service comes in: one pool of shared resources, running in a private data center or in a public cloud, serving multiple tenants, each with unique workload requirements.

Building something like this from scratch isn't easy, but you don't have to reinvent the wheel either. We've assembled best practices from multiple "as-a-service" projects to create the top 10 considerations for delivering MongoDB-as-a-Service. So what are the top 10 things to think about? You should download the whitepaper, but as a summary:

Step 1: Identify Common Workload Requirements. Presents checklists you can use to capture both current and future database loads and technology specs, providing input to the infrastructure you need to provision.

Step 2: Hardware & OS Selection. Identifies the general-purpose building blocks you should use to power the service.

Step 3: Virtualization Strategy. Helps you choose the virtualization technology that will get the most out of your hardware.
Step 4: Enabling Multi-Tenant Services. What do you need: maximum density of tenants per server, or maximum isolation between tenants? It doesn't have to be either/or; you can blend multiple approaches to meet the demands of multiple apps.

Step 5: Enforcing Security Isolation Between Tenants. Guidance on how to maintain strict isolation between each project, with full account and auditing control.

Step 6: Meeting Service Level Agreement (SLA) Requirements. How can you be sure you can deliver continuous availability to your customers? How can you scale the apps that need it, when they need it? This section shows you how.

Step 7: Managing the MongoDB Service. You need to provision new services fast, proactively monitor to identify potential issues before an outage brings all your apps down, and ensure each team's data is safe and can be recovered after a disaster. We present the management platform you need to accomplish all of these things.

Step 8: Cost Accounting & Chargeback. There is no such thing as a free ride: you deliver value for money, so now it's time to make sure the project teams pay for what they consume.

Step 9: Define the Implementation Plan. Where to start? You need the right people on board, and this section helps you track down the (willing?) volunteers.

Step 10: Production-Grade DBaaS. You need your MongoDB instances to be certified, secure, and supported. We have just the thing.

If the above has piqued your interest, fill in a few details and download our new whitepaper now: MongoDB-as-a-Service: Top 10 Considerations.
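The chargeback model in Step 8 can be sketched as a simple usage-based calculation. The rates and metric names below are purely illustrative assumptions, not MongoDB pricing:

```python
# Hypothetical chargeback sketch: bill each tenant for the resources its
# MongoDB deployment consumed. Rates and metric names are illustrative,
# not actual MongoDB pricing.
RATES = {
    "storage_gb": 0.10,    # $ per GB-month of storage
    "ops_millions": 0.50,  # $ per million database operations
    "backup_gb": 0.05,     # $ per GB-month of backup storage
}

def monthly_charge(usage: dict) -> float:
    """Compute a tenant's monthly charge from a usage report."""
    return round(sum(RATES[metric] * qty for metric, qty in usage.items()), 2)

# Example usage report for one project team.
analytics_team = {"storage_gb": 500, "ops_millions": 120, "backup_gb": 500}
print(monthly_charge(analytics_team))  # 50.0 + 60.0 + 25.0 = 135.0
```

In practice the usage figures would come from the monitoring metrics Step 7 already collects, so chargeback adds accounting on top of data you have rather than new instrumentation.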
Accelerating to T+1 - Have You Got the Speed and Agility Required to Meet the Deadline?
On May 28, 2024, the Securities and Exchange Commission (SEC) will implement a move to T+1 settlement for standard securities trades, shortening the settlement period from two business days after the trade date to one. The change aims to address market volatility and reduce credit and settlement risk. While the shortened T+1 settlement cycle can potentially decrease market risk, most firms' current back-office operations cannot handle the change. This is due to several challenges with existing systems:

Manual processes will come under pressure from the shortened settlement cycle.
Batch data processing will no longer be feasible.

To prepare for T+1, firms should take urgent action to address these challenges:

Automate manual processes to streamline them and improve operational efficiency.
Replace batch processing with event-based, real-time processing for faster settlement.

In this blog, we will explore how MongoDB can be leveraged to accelerate manual process automation and replace batch processes to enable faster settlement.

What are T+1 and T+2 settlement? T+1 settlement is the practice of settling a transaction executed before the 4:30 pm cutoff on the following trading day. For example, if a transaction is executed on Monday before 4:30 pm, settlement occurs on Tuesday. The settlement process involves the transfer of securities and/or funds from the seller's account to the buyer's account. This contrasts with T+2 settlement, where trades settle two trading days after the trade date. According to SEC Chair Gary Gensler, "T+1 is designed to benefit investors and reduce the credit, market, and liquidity risks in securities transactions faced by market participants."

Overcoming T+1 transition challenges with MongoDB: Two unique solutions

1.
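The T+1 date arithmetic described above can be sketched in a few lines. This is a minimal sketch that assumes a 4:30 pm cutoff and skips weekends only; a real settlement calendar would also skip market holidays:

```python
from datetime import date, datetime, time, timedelta

# Sketch of T+1 settlement-date arithmetic. Assumes a 4:30 pm cutoff
# and skips weekends only; real calendars also skip market holidays.
CUTOFF = time(16, 30)

def next_business_day(d: date) -> date:
    """Return the first weekday strictly after d."""
    d += timedelta(days=1)
    while d.weekday() >= 5:  # 5 = Saturday, 6 = Sunday
        d += timedelta(days=1)
    return d

def t_plus_1_settlement(executed: datetime) -> date:
    """Settlement date for a trade under the T+1 regime."""
    trade_date = executed.date()
    # Trades executed after the cutoff are treated as next-day trades.
    if executed.time() > CUTOFF:
        trade_date = next_business_day(trade_date)
    return next_business_day(trade_date)

# A trade executed Monday 2024-06-03 at 2:00 pm settles Tuesday 2024-06-04.
print(t_plus_1_settlement(datetime(2024, 6, 3, 14, 0)))  # 2024-06-04
```

Under T+2 the same trade would settle on Wednesday; the code makes concrete how much less time the back office has to match, allocate, and fund the trade.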
The multi-cloud developer data platform accelerates manual process automation

Legacy settlement systems may involve manual intervention for various tasks: matching trades, entering settlement instructions, emailing allocations to brokers, reconciling trade and settlement details, and processing paper-based documents. These manual processes are time-consuming and prone to error. MongoDB (Figure 1 below) can help accelerate developer productivity in several ways:

Easy to use: MongoDB is designed to be easy to use, reducing the learning curve for developers who are new to the database.
Flexible data model: Developers can store data in a way that makes sense for their application, accelerating development by reducing the need for complex data transformations or ORM mapping.
Scalability: MongoDB is highly scalable, so it can handle large volumes of trade data and support high levels of concurrency.
Rich query language: Developers can perform complex queries without writing much code. MongoDB's Apache Lucene-based search can also screen large volumes of data against sanctions and watch lists in real time.

Figure 1: MongoDB's developer data platform

Discover the developer productivity calculator. Developers spend 42% of their work week on maintenance and technical debt. How much does this cost your organization? Calculate how much you can save by working with MongoDB.

2. An operational trade store to replace slow batch processing

Back-office technology teams face numerous challenges when consolidating transaction data due to the complexity of legacy batch ETL and integration jobs. Legacy relational databases have long been the industry standard but are not optimal for post-trade management due to limitations such as rigid schemas, difficulty scaling horizontally, and slow performance.
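The flexible data model point above can be made concrete. In the sketch below (field names are hypothetical, chosen for illustration), a trade and its broker allocations live in a single document rather than being spread across join tables, and a MongoDB query filter is just a plain dictionary, exactly as a driver like pymongo would send it:

```python
# Sketch of the flexible document model for post-trade data. Field
# names are hypothetical; a trade and its allocations form a single
# document, with no ORM mapping or join tables required.
trade = {
    "trade_id": "T-20240528-0001",
    "security": {"isin": "US0378331005", "symbol": "AAPL"},
    "side": "BUY",
    "quantity": 10_000,
    "price": 189.25,
    "executed_at": "2024-05-28T14:02:11Z",
    "status": "PENDING_SETTLEMENT",
    "allocations": [  # per-account splits stay embedded with the trade
        {"account": "FUND-A", "quantity": 6_000},
        {"account": "FUND-B", "quantity": 4_000},
    ],
}

# A MongoDB-style filter (as the plain dict a driver would send)
# selecting unsettled buys in one security, matching directly on the
# embedded allocation array with dot notation.
unsettled_filter = {
    "security.symbol": "AAPL",
    "status": "PENDING_SETTLEMENT",
    "allocations.account": "FUND-A",
}

# Sanity check: embedded allocations sum to the parent quantity.
assert sum(a["quantity"] for a in trade["allocations"]) == trade["quantity"]
```

Because the document mirrors how the application thinks about a trade, adding a new field (say, a settlement-fail reason) is a code change, not a schema migration.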
For T+1 settlement, it is crucial to have real-time availability of consolidated positions across assets, geographies, and business lines; waiting for the end of a batch cycle cannot meet this requirement. As a solution, MongoDB customers use an operational trade data store (ODS) to overcome these challenges and share data in real time. By using an ODS, financial firms can improve operational efficiency by consolidating transaction data in real time, allowing them to streamline back-office operations, reduce the complexity of ETL and integration processes, and avoid the limitations of relational databases. As a result, firms can make faster, better-informed decisions and gain a competitive edge in the market.

Using MongoDB (Figure 2 below), trade desk data is copied into the ODS in real time through change data capture (CDC), creating a centralized trade store that acts as a live source for downstream trade settlement and compliance systems. This enables faster settlement times, improves data quality and accuracy, and supports full transactionality. As the ODS evolves, it becomes a "system of record" or "golden source" for many back-office and middle-office applications, and powers AI/ML-based real-time fraud prevention and settlement-failure risk systems.

Figure 2: Centralized Trade Data Store (ODS)

Managing trade settlement failure risk is critical to driving efficiency across the entire securities market ecosystem. Fortunately, MongoDB's integration capabilities (Figure 3 below) with modern AI and ML platforms enable banks to develop AI/ML models that make managing potential settlement fails much more efficient in cost, time, and quality. Additionally, predictive analytics allow firms to project availability and demand and optimize inventories for lending and borrowing.
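The CDC-driven consolidation step can be sketched in pure Python. In production the events would come from a MongoDB change stream (e.g. `collection.watch()` in pymongo), whose documents carry the changed record in a `fullDocument` field; the trade fields here are hypothetical:

```python
# Pure-Python sketch of the ODS consolidation step: fold a stream of
# CDC-style change events into real-time net positions per account and
# symbol. In production these events would come from a MongoDB change
# stream (collection.watch() in pymongo); the trade fields below are
# hypothetical.
from collections import defaultdict

positions = defaultdict(int)  # (account, symbol) -> net quantity

def apply_event(event: dict) -> None:
    """Fold one trade-insert event into the consolidated positions."""
    doc = event["fullDocument"]
    sign = 1 if doc["side"] == "BUY" else -1
    positions[(doc["account"], doc["symbol"])] += sign * doc["quantity"]

cdc_events = [
    {"operationType": "insert",
     "fullDocument": {"account": "FUND-A", "symbol": "AAPL",
                      "side": "BUY", "quantity": 6000}},
    {"operationType": "insert",
     "fullDocument": {"account": "FUND-A", "symbol": "AAPL",
                      "side": "SELL", "quantity": 1500}},
]

for event in cdc_events:
    apply_event(event)

print(positions[("FUND-A", "AAPL")])  # 4500
```

The point of the sketch is the shape of the pipeline: positions update as each trade event arrives, so downstream settlement and compliance systems read a live consolidated view instead of waiting for an overnight batch.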
Figure 3: Event-driven application for real-time monitoring

Summary

Financial institutions face significant challenges in reducing settlement duration from two business days (T+2) to one (T+1), particularly in addressing existing back-office issues. However, it is crucial for them to achieve this goal within a year, as required by the SEC. This blog highlights how MongoDB's developer data platform can help financial institutions automate manual processes and adopt a best-practice approach to replacing batch processes with a real-time operational trade data store (ODS). With the help of MongoDB's developer data platform and best practices, financial institutions can achieve operational excellence and meet the SEC's T+1 settlement deadline of May 28, 2024. And if T+0 settlement cycles become a reality, institutions with the most flexible data platform will be best equipped to adjust. Top banks are already adopting MongoDB's developer data platform to modernize their infrastructure, leading to reduced time-to-market, lower total cost of ownership, and improved developer productivity.

Looking to learn more about how you can modernize or what MongoDB can do for you?

Zero downtime migrations using MongoDB's flexible schema
Accelerate your digital transformation with these 5 Phases of Banking Modernization
Reduce time-to-market for your customer lifecycle management applications
MongoDB's financial services hub