Tweet About MongoDB World. Win a Ticket.
On June 1-2, over 2,000 developers, sysadmins, and DBAs will converge in New York City for MongoDB World, our annual user conference. It’s your chance to get inspired, share ideas, and get the latest insights on using MongoDB.
If you’re active on social media, you may have a chance to win a free conference pass. Simply share your enthusiasm for MongoDB World on Twitter during the month of March.
How to Enter
- Follow @mongodb on Twitter
- Tell us why you’re excited about MongoDB World by tweeting with a mention of @mongodb, and make sure to use the #MongoDBWorld hashtag. Get creative! Start conversations, use humor, share photos, tell us which speakers you’re looking forward to hearing from.
Click to tweet: I can't wait for #MongoDBWorld w/ @MongoDB because...
Prizes
- 1st Prize: Ticket to MongoDB World 2015
- 2nd Prize: Fully loaded MongoDB World swag pack
- 3rd Prize: MongoDB T-shirt
How Winners Will Be Selected
MongoDB will pick the winners by April 3rd and notify them via Twitter direct message. Winners will be chosen based on a combination of how widely their content is shared and the creativity used to express their excitement about MongoDB World.
Call for Feedback: The New PHP and HHVM Drivers
In the beginning Kristina created the MongoDB PHP driver. Now the PECL mongo extension was new and untested, write operations tended to be fire-and-forget, and Boolean parameters made more sense than $options arrays. And Kristina said, "Let there be MongoCollection," and there was basic functionality.

Since the PHP driver first appeared on the scene, MongoDB has gone through many changes. Replica sets and sharding arrived early on, but things like the aggregation framework and command cursors were little more than a twinkle in Eliot's eye at the time. The early drivers were designed with many assumptions in mind: write operations and commands were very different; the largest replica set would have no more than a dozen nodes; cursors were only returned by basic queries. In 2015, we know that these assumptions no longer hold true.

Beyond MongoDB's features, our ecosystem has also changed. When the PHP driver, a C extension, was first implemented, there wasn't yet a C driver that we could utilize. Therefore, the 1.x PHP driver contains its own BSON and connection management C libraries. HHVM, an alternative PHP runtime with its own C++ extension API, also did not exist years ago, nor was PHP 7.0 on the horizon. Lastly, methods of packaging and distributing libraries have changed. Composer has superseded PEAR as the de facto standard for PHP libraries, and support for extensions (currently handled by PECL) is forthcoming.

During the spring of 2014, we worked with a team of students from Facebook's Open Academy program to prototype an HHVM driver modeled after the 1.x API. The purpose of that project was twofold: research HHVM's extension API and determine the feasibility of building a driver atop libmongoc (our then new C driver) and libbson. Although the final result was not feature complete, the project was a valuable learning experience.
The C driver proved quite up to the task, and HNI, which allows an HHVM extension to be written with a combination of PHP and C++, highlighted critical areas of the driver for which we'd want to use C.

This all leads up to the question of how best to support PHP 5.x, HHVM, and PHP 7.0 with our next-generation driver. Maintaining three disparate, monolithic extensions is not sustainable. We also cannot eschew the extension layer for a pure PHP library, like mongofill, without sacrificing performance. Thankfully, we can compromise! Here is a look at the architecture for our next-generation PHP driver:

At the top of this stack sits a pure PHP library, which we will distribute as a Composer package. This library will provide an API similar to what users have come to expect from the 1.x driver (e.g. CRUD methods, database and collection objects, command helpers), and we expect it to be a common dependency for most applications built with MongoDB. This library will also implement common specifications, in the interest of improving API consistency across all of the drivers maintained by MongoDB (and hopefully some community drivers, too).

Sitting below that library, we have the lower-level drivers (one per platform). These extensions will effectively form the glue between PHP and HHVM and our system libraries (libmongoc and libbson). These extensions will expose an identical public API for the most essential and performance-sensitive functionality:

- Connection management
- BSON encoding and decoding
- Object document serialization (to support ODM libraries)
- Executing commands and write operations
- Handling queries and cursors

By decoupling the driver internals and a high-level API into extensions and PHP libraries, respectively, we hope to reduce our maintenance burden and allow for faster iteration on new features. As a welcome side effect, this also makes it easier for anyone to contribute to the driver.
Additionally, an identical public API for these extensions will make it that much easier to port an application across PHP runtimes, whether the application uses the low-level driver directly or a higher-level PHP library.

GridFS is a great example of why we chose this direction. Although we implemented GridFS in C for our 1.x driver, it is actually quite a high-level specification. Its API is just an abstraction for accessing two collections: files (i.e. metadata) and chunks (i.e. blocks of data). Likewise, all of the syntactic sugar found in the 1.x driver, such as processing uploaded files or exposing GridFS files as PHP streams, can be implemented in pure PHP. Provided we have performant methods for reading from and writing to GridFS' collections (and thanks to our low-level extensions, we will), shifting this API to PHP is a win-win.

Earlier I mentioned that we expect the PHP library to be a common dependency for most applications, but not all. Some users may prefer to stick to the no-frills API offered by the extensions, or create their own high-level abstraction (akin to Doctrine MongoDB for the 1.x driver), and that's great! Hannes has talked about creating a PHP library geared for MongoDB administration, which provides an API for various user management and ops commands. I'm looking forward to building the next major version of Doctrine MongoDB ODM directly atop the extensions.

While we will continue to maintain and support the 1.x driver and its users for the foreseeable future, we invite everyone to check out our next-generation driver and consider it for any new projects going forward.
You can find all of the essential components across GitHub and JIRA:

| Project | GitHub | JIRA |
| --- | --- | --- |
| PHP Library | mongodb/mongo-php-library | PHPLIB |
| PHP 5.x Driver (phongo) | mongodb/mongo-php-driver | PHPC |
| HHVM Driver (hippo) | mongodb/mongo-hhvm-driver | HHVM |

The existing PHP project in JIRA will remain open for reporting bugs against the 1.x driver, but we would ask that you use the new projects above for anything pertaining to our next-generation drivers. If you're interested in hearing more about our upcoming PHP and HHVM drivers, Derick Rethans is presenting a new talk entitled One Extension, Two Engines at php[tek] 2015 in May.

About the Author - Jeremy

Jeremy Mikola is a software engineer at MongoDB's NYC office. As a member of the driver and evangelism team, he helps develop the PHP driver and contributes to various OSS projects, such as Doctrine ODM and React PHP. Jeremy lives in Hoboken, NJ and is known to enjoy a good sandwich.
Accelerating to T+1 - Have You Got the Speed and Agility Required to Meet the Deadline?
On May 28, 2024, the Securities and Exchange Commission (SEC) will implement a move to T+1 settlement for standard securities trades, shortening the settlement period from two business days after the trade date to one business day. The change aims to address market volatility and reduce credit and settlement risk.

The shortened T+1 settlement cycle can potentially decrease market risks, but most firms' current back-office operations cannot handle this change. This is due to several challenges with existing systems, including:

- Manual processes will be under pressure due to the shortened settlement cycle
- Batch data processing will not be feasible

To prepare for T+1, firms should take urgent action to address these challenges:

- Automate manual processes to streamline them and improve operational efficiency
- Replace batch processing with event-based, real-time processing for faster settlement

In this blog, we will explore how MongoDB can be leveraged to accelerate manual process automation and replace batch processes to enable faster settlement.

What is a T+1 and T+2 settlement?

T+1 settlement refers to the practice of settling transactions on the trading day after they are executed (for trades placed before the 4:30 pm cutoff). For example, if a transaction is executed on Monday before 4:30 pm, the settlement will occur on Tuesday. This settlement process involves the transfer of securities and/or funds from the seller's account to the buyer's account. This contrasts with T+2 settlement, where trades are settled two trading days after the trade date. According to SEC Chair Gary Gensler, “T+1 is designed to benefit investors and reduce the credit, market, and liquidity risks in securities transactions faced by market participants.”

Overcoming T+1 transition challenges with MongoDB: Two unique solutions
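The T+1 vs. T+2 definitions above come down to counting business days from the trade date. Here is a minimal Python sketch of that calculation; it is illustrative only, since it skips weekends but ignores market holidays, which any real settlement system must account for.

```python
from datetime import date, timedelta

def settlement_date(trade_date: date, cycle_days: int = 1) -> date:
    """Return the date `cycle_days` business days after the trade date.
    Illustrative only: skips weekends but ignores market holidays."""
    current = trade_date
    remaining = cycle_days
    while remaining > 0:
        current += timedelta(days=1)
        if current.weekday() < 5:  # Monday=0 .. Friday=4
            remaining -= 1
    return current

# A trade executed Monday 2024-06-03 settles Tuesday under T+1,
# but not until Wednesday under T+2.
print(settlement_date(date(2024, 6, 3), 1))  # 2024-06-04
print(settlement_date(date(2024, 6, 3), 2))  # 2024-06-05
# A Friday trade settles the following Monday under T+1.
print(settlement_date(date(2024, 6, 7), 1))  # 2024-06-10
```

The key operational point is visible in the last case: the weekend gives no extra breathing room, so a Friday trade must still be matched, affirmed, and funded by Monday.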
1. The multi-cloud developer data platform accelerates manual process automation

Legacy settlement systems may involve manual intervention for various tasks, including manual matching of trades, manual input of settlement instructions, allocation emails to brokers, reconciliation of trade and settlement details, and manual processing of paper-based documents. These manual processes can be time-consuming and prone to errors. MongoDB (Figure 1 below) can help accelerate developer productivity in several ways:

- Easy to use: MongoDB is designed to be easy to use, which can reduce the learning curve for developers who are new to the database.
- Flexible data model: Allows developers to store data in a way that makes sense for their application. This can help accelerate development by reducing the need for complex data transformations or ORM mapping.
- Scalability: MongoDB is highly scalable, which means it can handle large volumes of trade data and support high levels of concurrency.
- Rich query language: Allows developers to perform complex queries without writing much code. MongoDB's Apache Lucene-based search can also help screen large volumes of data against sanctions and watch lists in real time.

Figure 1: MongoDB's developer data platform

Discover the developer productivity calculator. Developers spend 42% of their work week on maintenance and technical debt. How much does this cost your organization? Calculate how much you can save by working with MongoDB.

2. An operational trade store to replace slow batch processing

Back-office technology teams face numerous challenges when consolidating transaction data due to the complexity of legacy batch ETL and integration jobs. Legacy databases have long been the industry standard but are not optimal for post-trade management due to limitations such as rigid schemas, difficulty in horizontal scaling, and slow performance.
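Conceptually, replacing the nightly batch means applying each trade change to a consolidated store the moment it happens, so downstream systems always see current state. Below is a minimal in-memory Python sketch of that idea; the event shape loosely mirrors a MongoDB change stream event (`operationType`, `documentKey`, `fullDocument`), but the dict standing in for the trade store, and the `apply_trade_event` helper, are illustrative inventions, not a MongoDB API.

```python
def apply_trade_event(store, event):
    """Apply one trade change event to a consolidated trade store as it
    arrives, instead of waiting for an end-of-day batch. `store` maps
    trade IDs to the latest full trade document."""
    op = event["operationType"]
    trade_id = event["documentKey"]["_id"]
    if op in ("insert", "update", "replace"):
        store[trade_id] = event["fullDocument"]  # keep the latest document
    elif op == "delete":
        store.pop(trade_id, None)
    return store

# Downstream settlement and compliance systems can read current
# positions at any time, not only after a batch cycle completes.
store = {}
apply_trade_event(store, {"operationType": "insert",
                          "documentKey": {"_id": "T1"},
                          "fullDocument": {"symbol": "XYZ", "qty": 100,
                                           "status": "pending"}})
apply_trade_event(store, {"operationType": "update",
                          "documentKey": {"_id": "T1"},
                          "fullDocument": {"symbol": "XYZ", "qty": 100,
                                           "status": "settled"}})
```

In production this role is played by change data capture feeding the ODS, as described in the next section; the sketch only shows why incremental application of events removes the end-of-batch bottleneck.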
For T+1 settlement, it is crucial to have real-time availability of consolidated positions across assets, geographies, and business lines. It is important to note that the end of the batch cycle will not meet this requirement. As a solution, MongoDB customers use an operational trade data store (ODS) to overcome these challenges for real-time data sharing.

By using an ODS, financial firms can improve their operational efficiency by consolidating transaction data in real time. This allows them to streamline their back-office operations, reduce the complexity of ETL and integration processes, and avoid the limitations of legacy databases. As a result, firms can make faster, more informed decisions and gain a competitive edge in the market.

Using MongoDB (Figure 2 below), trade desk data is copied into an ODS in real time through change data capture (CDC), creating a centralized trade store that acts as a live source for downstream trade settlement and compliance systems. This enables faster settlement times, improves data quality and accuracy, and supports full transactionality. As the ODS evolves, it becomes a "system of record/golden source" for many back office and middle office applications, and powers AI/ML-based real-time fraud prevention applications and settlement risk failure systems.

Figure 2: Centralized Trade Data Store (ODS)

Managing trade settlement risk failure is critical in driving efficiency across the entire securities market ecosystem. Luckily, MongoDB integration capabilities (Figure 3 below) with modern AI and ML platforms enable banks to develop AI/ML models that make managing potential trade settlement fails much more efficient from a cost, time, and quality perspective. Additionally, predictive analytics allow firms to project availability and demand and optimize inventories for lending and borrowing.
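To make the settlement-fail idea above concrete, here is a toy Python scorer over a few trade attributes. Every field name, threshold, and weight here is invented for illustration; a production system would instead use an ML model trained on historical settlement data, as described above.

```python
def settlement_fail_risk(trade):
    """Toy score (0.0-1.0) of a trade's risk of failing to settle.
    Fields and weights are hypothetical, purely for illustration."""
    score = 0.0
    if trade.get("manual_touch"):
        # Manually amended trades are assumed to fail more often.
        score += 0.3
    if trade.get("counterparty_fail_rate", 0.0) > 0.05:
        # Counterparty with a poor historical settlement record.
        score += 0.4
    if trade.get("quantity", 0) > 1_000_000:
        # Large blocks are assumed harder to source by the deadline.
        score += 0.2
    return min(score, 1.0)
```

High-scoring trades could then be surfaced to operations teams early in the T+1 window, which is where the real-time ODS and event-driven monitoring in Figure 3 come into play.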
Figure 3: Event-driven application for real-time monitoring

Summary

Financial institutions face significant challenges in reducing settlement duration from two business days (T+2) to one (T+1), particularly when it comes to addressing the existing back-office issues. However, it's crucial for them to achieve this goal within a year as required by the SEC. This blog highlights how MongoDB's developer data platform can help financial institutions automate manual processes and adopt a best practice approach to replace batch processes with a real-time data store repository (ODS). With the help of MongoDB's developer data platform and best practices, financial institutions can achieve operational excellence and meet the SEC's T+1 settlement deadline on May 28, 2024. In the event of T+0 settlement cycles becoming a reality, institutions with the most flexible data platform will be better equipped to adjust. Top banks in the industry are already adopting MongoDB's developer data platform to modernize their infrastructure, leading to reduced time-to-market, lower total cost of ownership, and improved developer productivity.

Looking to learn more about how you can modernize or what MongoDB can do for you?

- Zero downtime migrations using MongoDB’s flexible schema
- Accelerate your digital transformation with these 5 Phases of Banking Modernization
- Reduce time-to-market for your customer lifecycle management applications
- MongoDB’s financial services hub