Top 5 Slides and Videos from MongoNYC
July 1, 2013 | Updated: May 22, 2015
#mongonyc #conferences #mongodb-and-sql #mongodb-replica-sets #mongodb-data-safety #the-right-way-to-implement-mongodb
Slides and videos are now up from the largest-ever MongoNYC, the one-day conference in New York City dedicated to MongoDB. Based on feedback from attendees, here are the top five videos from MongoNYC, ranging from use cases to best practices for running MongoDB in production.
- Growing Up MongoDB, Kiril Savino, CTO and Founder, GameChanger
- The Right and Wrong Ways to Implement MongoDB, Richard Kreuter, Consulting Manager, 10gen
- Managing a Maturing MongoDB Ecosystem, Charity Majors, Systems Engineer, Parse
- Real-Time Integration Between MongoDB and SQL Databases, Eugene Dvorkin, WebMD
- How to Keep Your Data Safe in MongoDB, Eliot Horowitz, CTO and Co-founder, 10gen
The Big Data Hoax That Wasn't
Welcome to the Age of Big Data. Or perhaps it's the Age of Big Data Agnosticism. In a Newtonian twist, what started as a wave of hype for data's transformational potential has turned into an equal and opposite backlash of big data naysaying. It is an understandable reaction to the great over-selling of big data as a kind of enterprise cure-all. Of course, in some companies, big data pilots have produced nothing but big piles of unfulfilled expectations. But the problem likely is not big data. Big data remains potentially the most powerful engine for business transformation to gain currency in the 21st century. The problem is that so much of what is sold as big data isn't. It's typically just lots of data.

"Big data, that's just data mining with a fancy new name." How often have you heard that? It's flatly false. The size or volume of the data is not what matters in genuine big data analytics. Savvy organizations already understand that big data is really about working with a mix of data types, structured and unstructured, from inside the organization and outside. It is CRM forms, but it is also tweets, Facebook posts, TripAdvisor rants, Gmail messages, Outlook entries, even voicemail. In most organizations this does not add up to petabytes of data, as I've written before. Terabytes is the usual quantity, even though that seems small by many measures. The complexity arises from the diversity of the data.

And that raises a problem. Not many databases have the flexibility to handle that many forms of data, and fewer still have the agility to permit modifications on the fly. "Shouldn't we add SMS data in here, too?" The right answer is: done. A database that cannot, with little fuss, accommodate a new kind of data is too rigid for true big data analysis, because the exciting (maybe maddening) bit about big data today is that there is always new input that may enhance the overall result.

Then there are the other questions: Why are you collecting big data in the first place? What do you want from your analysis of it? That second question is key, because without targeted analytics, big data is just hoarding. As an insightful story in The Guardian recently posited, "Companies need to focus on big answers not big data. Instead of focusing upon the concept of big data, organizations should concentrate on the intelligence data can offer." In other words, it's not about the data; it's about what intelligence can be drawn from it. The Guardian author calls himself a "big data sceptic" but, really, he isn't. He just shares the frustration over the many mislabeled big data projects that never were about big data, and over the data hoarding some companies do when they say they are committing to big data. Such projects rarely end well.

Real big data, unstructured and from multiple sources, coupled with real analytics, is a game changer that gives forward-thinking organizations insight where before there was merely guesswork. One Texas city ran analyses to determine exactly what happened in parts of the city that experienced higher-than-anticipated growth and a resulting increase in value. This was true big data: in the mix were police reports, zoning violations, construction permits, parking tickets, you name it. If the data existed, it was fed into the analysis, and the city began to see what it did, and didn't do, to spur growth. Where could it get out of the way? Where could it proactively spur growth? It was real big data in action.
And it’s why big data remains a big deal, despite the hype.
Accelerating to T+1 - Have You Got the Speed and Agility Required to Meet the Deadline?
On May 28, 2024, the Securities and Exchange Commission (SEC) will implement the move to T+1 settlement for standard securities trades, shortening the settlement period from two business days after the trade date to one. The change aims to address market volatility and reduce credit and settlement risk. A shortened T+1 settlement cycle can decrease market risk, but most firms' current back-office operations cannot handle the change. Existing systems face several challenges, including:

- Manual processes will come under pressure from the shortened settlement cycle
- Batch data processing will no longer be feasible

To prepare for T+1, firms should take urgent action to address these challenges:

- Automate manual processes to streamline operations and improve efficiency
- Replace batch processing with event-based, real-time processing for faster settlement

In this blog, we will explore how MongoDB can be leveraged to accelerate manual process automation and to replace batch processes, enabling faster settlement.

What is T+1 and T+2 settlement?

T+1 settlement refers to the practice of settling transactions on the trading day after execution. For example, if a transaction is executed on Monday before the 4:30 pm cutoff, settlement occurs on Tuesday. Settlement involves the transfer of securities and/or funds from the seller's account to the buyer's account. This contrasts with T+2 settlement, where trades settle two trading days after the trade date. According to SEC Chair Gary Gensler, "T+1 is designed to benefit investors and reduce the credit, market, and liquidity risks in securities transactions faced by market participants."

Overcoming T+1 transition challenges with MongoDB: Two unique solutions

1. The multi-cloud developer data platform accelerates manual process automation

Legacy settlement systems may involve manual intervention for various tasks: matching trades, inputting settlement instructions, sending allocation emails to brokers, reconciling trade and settlement details, and processing paper-based documents. These manual processes are time-consuming and prone to errors. MongoDB (Figure 1 below) helps accelerate developer productivity in several ways:

- Easy to use: MongoDB is designed to be easy to use, which reduces the learning curve for developers new to the database.
- Flexible data model: Developers store data in the way that makes sense for their application, reducing the need for complex data transformations or ORM mapping (see the sketch at the end of this section).
- Scalability: MongoDB is highly scalable, so it can handle large volumes of trade data and high levels of concurrency.
- Rich query language: Developers can perform complex queries without writing much code, and MongoDB's Apache Lucene-based search can screen large volumes of data against sanctions and watch lists in real time.

Figure 1: MongoDB's developer data platform

Discover the developer productivity calculator. Developers spend 42% of their work week on maintenance and technical debt. How much does this cost your organization? Calculate how much you could save by working with MongoDB.
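To make the flexible data model and rich query language concrete, here is a minimal sketch using PyMongo. The connection string, the `trading.trades` namespace, and all field names and values are hypothetical, chosen only to illustrate the pattern rather than to prescribe a trade schema.

```python
from datetime import datetime
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # illustrative connection string
trades = client["trading"]["trades"]               # hypothetical namespace

# Flexible data model: store a trade as it is shaped in the application.
# Adding a field later (e.g., an affirmation timestamp) needs no migration.
trades.insert_one({
    "trade_id": "T-1001",                          # hypothetical identifier
    "symbol": "AAPL",
    "side": "BUY",
    "quantity": 500,
    "price": 187.25,
    "trade_date": datetime(2024, 5, 28, 14, 3),
    "settlement_status": "PENDING",
    "counterparty": {"name": "Broker A", "lei": "EXAMPLE-LEI"},
})

# Rich query language: net pending position per symbol in one aggregation.
pipeline = [
    {"$match": {"settlement_status": "PENDING"}},
    {"$group": {
        "_id": "$symbol",
        "net_quantity": {"$sum": {"$cond": [
            {"$eq": ["$side", "BUY"]},
            "$quantity",
            {"$multiply": [-1, "$quantity"]},
        ]}},
        "trade_count": {"$sum": 1},
    }},
]
for position in trades.aggregate(pipeline):
    print(position)
```

Because the schema lives in the documents rather than in the database, the inevitable "shouldn't we capture this field too?" change during a T+1 migration becomes an application-level edit rather than a relational schema migration.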
2. An operational trade store to replace slow batch processing

Back-office technology teams face numerous challenges when consolidating transaction data due to the complexity of legacy batch ETL and integration jobs. Legacy databases have long been the industry standard, but they are not optimal for post-trade management due to rigid schemas, difficulty scaling horizontally, and slow performance. For T+1 settlement, consolidated positions across assets, geographies, and business lines must be available in real time, and waiting for the end of a batch cycle will not meet this requirement.

As a solution, MongoDB customers use an operational trade data store (ODS) to overcome these challenges for real-time data sharing. By consolidating transaction data in real time with an ODS, financial firms can streamline their back-office operations, reduce the complexity of ETL and integration processes, and avoid the limitations of relational databases. As a result, firms can make faster, better-informed decisions and gain a competitive edge in the market.

Using MongoDB (Figure 2 below), trade desk data is copied into an ODS in real time through change data capture (CDC), creating a centralized trade store that acts as a live source for downstream trade settlement and compliance systems (a minimal change-stream sketch appears at the end of this post). This enables faster settlement, improves data quality and accuracy, and supports full transactionality. As the ODS evolves, it becomes a system of record, or "golden source," for many back-office and middle-office applications, and powers AI/ML-based real-time fraud prevention and settlement risk failure systems.

Figure 2: Centralized trade data store (ODS)

Managing trade settlement failure risk is critical to driving efficiency across the entire securities market ecosystem. MongoDB's integration capabilities (Figure 3 below) with modern AI and ML platforms enable banks to develop AI/ML models that make managing potential settlement fails far more efficient in cost, time, and quality. Predictive analytics also allow firms to project availability and demand and to optimize inventories for lending and borrowing.

Figure 3: Event-driven application for real-time monitoring

Summary

Financial institutions face significant challenges in reducing settlement duration from two business days (T+2) to one (T+1), particularly in addressing existing back-office issues, and they must do so within a year, as required by the SEC. This blog has highlighted how MongoDB's developer data platform helps financial institutions automate manual processes and replace batch processes with a real-time operational data store (ODS). With that platform and these best practices, financial institutions can achieve operational excellence and meet the SEC's T+1 settlement deadline of May 28, 2024. And if T+0 settlement cycles become a reality, institutions with the most flexible data platform will be best equipped to adjust. Top banks are already adopting MongoDB's developer data platform to modernize their infrastructure, reducing time-to-market, lowering total cost of ownership, and improving developer productivity.

Looking to learn more about how you can modernize, or what MongoDB can do for you?

- Zero downtime migrations using MongoDB's flexible schema
- Accelerate your digital transformation with these 5 Phases of Banking Modernization
- Reduce time-to-market for your customer lifecycle management applications
- MongoDB's financial services hub
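Finally, as referenced in the ODS discussion above, here is a minimal sketch of the CDC flow using MongoDB change streams via PyMongo. The `trading.trades` and `ods.trade_store` namespaces and the `trade_id` field are hypothetical, and change streams require a replica set or sharded cluster; treat this as an illustrative pattern, not a production pipeline.

```python
from pymongo import MongoClient

# Change streams require a replica set or sharded cluster.
client = MongoClient("mongodb://localhost:27017/?replicaSet=rs0")
trades = client["trading"]["trades"]   # live trade desk collection (hypothetical)
ods = client["ods"]["trade_store"]     # centralized operational trade store (hypothetical)

# Forward only the event types that can carry a full trade document.
pipeline = [{"$match": {"operationType": {"$in": ["insert", "update", "replace"]}}}]

# full_document="updateLookup" asks the server to attach the current
# version of the document to update events.
with trades.watch(pipeline, full_document="updateLookup") as stream:
    for change in stream:
        doc = change.get("fullDocument")
        if doc is None:
            continue  # the document may have been deleted before the lookup ran
        # Upsert into the ODS so downstream settlement and compliance
        # systems always read a consolidated, real-time view.
        ods.replace_one({"trade_id": doc["trade_id"]}, doc, upsert=True)
```

A production listener would also persist the stream's resume token so it can restart after a failure without missing events.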