MongoDB 3.0.13 is out and is ready for production deployment. This release contains only fixes since 3.0.12, and is a recommended upgrade for all 3.0 users.
Fixed in this release:
- SERVER: Add Debian 8 (Jessie) builds and an associated package repository
- SERVER: On RHEL 7/CentOS 7, mongod cannot be stopped if the PID file location in the config file differs from the one in the init.d script
- SERVER: A database/collection drop during initial sync can cause collMod to fail initial sync
- SERVER: Update to PCRE 8.39
- SERVER: Building a 2dsphere index uses excessive memory
- TOOLS: No numeric version in --version output
As always, please let us know of any issues.
-- The MongoDB Team
Announcing MongoDB 3.4
Today we are announcing MongoDB 3.4, another milestone in our march to becoming the default database for modern applications. 3.4 makes MongoDB more flexible than ever, allowing developers to consolidate even more use cases into their MongoDB deployment, even as we continue to mature the platform and its ecosystem.

MongoDB was created to make it easy for developers to work with their data, beginning with the document model itself. Documents are the best fundamental unit for a data store because they let you represent any kind of data and shape its structure however best suits your use case. Whether that means deep nesting, shallow nesting, or no nesting at all, documents can handle it.

The key is being able to run many kinds of queries and algorithms against that data. MongoDB 3.4 adds a stage to the aggregation pipeline that enables faceted search, greatly simplifying the query load for applications that browse and explore data. It also adds operators to power graph queries. As we continue to add query features, users can consolidate more use cases instead of bloating their application footprint with a proliferation of specialized data stores.

Just because it's easy to work with data in MongoDB doesn't mean we can't make it easier. In 3.4, the aggregation pipeline continues to mature, with more operators and expressions: enhanced string handling, more sophisticated use of array elements, type testing of fields, and support for branching logic. Financial calculations are made simple with the addition of a Decimal data type.

I think it was John Donne who said, "No database is an island." Whoever said it, they were very right. A database has to work as the heart of an ecosystem, and in 3.4 we continue to build that thriving ecosystem. Connecting MongoDB to the outside world is better than ever.
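The stages behind these two features are $facet and $graphLookup. As a rough sketch (the collection and field names here are hypothetical, and with a live server you would pass these pipelines to a driver's aggregate call), they can be expressed as plain documents:

```python
# Sketch of MongoDB 3.4 aggregation stages as plain Python dicts.
# Collection and field names ("products", "employees", etc.) are
# hypothetical examples, not from the announcement itself.

# $facet runs several sub-pipelines over the same input documents,
# producing multiple facets (groupings) in a single query.
faceted_search = [
    {"$match": {"category": "electronics"}},
    {"$facet": {
        "by_price": [
            {"$bucket": {
                "groupBy": "$price",
                "boundaries": [0, 100, 500, 1000],
                "default": "other",
            }}
        ],
        "by_brand": [
            {"$sortByCount": "$brand"}
        ],
    }},
]

# $graphLookup recursively follows a field from document to document,
# here walking a reporting chain up a management hierarchy.
org_chart = [
    {"$graphLookup": {
        "from": "employees",
        "startWith": "$manager_id",
        "connectFromField": "manager_id",
        "connectToField": "_id",
        "as": "reporting_chain",
    }},
]
```

The point of $facet is that one round trip returns every grouping a browse page needs, instead of one query per facet.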
MongoDB 3.4 introduces a ground-up rewrite of the BI connector, which improves performance, simplifies installation and configuration, and adds Windows support. 3.4 also includes an update to our Apache Spark connector, with support for Spark 2.0. We've also extended the platforms MongoDB runs on, adding ARM-64 and IBM's POWER8 and zSeries platforms.

MongoDB Compass is growing up with 3.4. It has new ways to depict data, such as a map view for geographic data, and it has become a data manipulation and performance tuning tool as well. In 3.4, Compass offers visual plan explanations, real-time stats, CRUD operations, and index creation, so you can now identify, diagnose, and fix performance and data problems all from within Compass.

Of course, MongoDB 3.4 is supported by our trifecta of enterprise-grade ops management platforms: Ops Manager, Cloud Manager, and MongoDB Atlas, each of which adds new features with this release. Ops Manager, for example, has improved its monitoring with built-in telemetry gathering tailored to each deployment platform, and now allows ops teams to create server pools to serve database-as-a-service to internal teams. Atlas introduces Virtual Private Cloud (VPC) Peering, allowing teams to use convenient private IPs to talk to their MongoDB service from within their AWS VPC.

There's a ton more than I can fit into a blog post; that's what release notes are for. But I shouldn't leave out a few highlights: tunable consistency control for replica sets, including linearizable reads; collations for queries and indexes; and read-only views, which bring field-level security to apps handling regulated data.

We're incredibly excited to ship MongoDB 3.4 to you, so it can help your data serve you, not the other way around. Our approach is to build a database that can handle any kind of data, with the capabilities to query that data however you need to.
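To make those closing highlights concrete, here is a sketch of what they look like at the command level. The "salaries" collection and its fields are hypothetical; the shapes of the createView command, collation document, and read concern document follow the MongoDB 3.4 feature set described above:

```python
# Sketch of three MongoDB 3.4 highlights as plain Python dicts.
# The "salaries" collection and its fields are hypothetical examples.

# A read-only view that projects only non-sensitive fields, giving
# field-level security: clients query the view, not the collection.
create_view_cmd = {
    "create": "employee_directory",   # name of the view
    "viewOn": "salaries",             # underlying collection
    "pipeline": [
        {"$project": {"name": 1, "department": 1}},  # salary omitted
    ],
}

# A collation with strength 2: base characters and accents matter,
# case does not. Usable on both queries and indexes.
collation = {"locale": "en", "strength": 2}

# A linearizable read concern, for reads that must reflect the most
# recent acknowledged write on the replica set.
read_concern = {"level": "linearizable"}
```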
To learn more about MongoDB 3.4, register for our upcoming webinar: Find out what's new

About the Author - Eliot Horowitz

Eliot Horowitz is CTO and Co-Founder of MongoDB, and one of the core MongoDB kernel committers. Previously, he was Co-Founder and CTO of ShopWiki, where he developed the crawling and data extraction algorithm at the core of its innovative technology. He quickly became one of Silicon Alley's up-and-coming entrepreneurs and was selected as one of BusinessWeek's Top 25 Entrepreneurs Under Age 25 nationwide in 2006. Earlier, Eliot was a software developer in the R&D group at DoubleClick (acquired by Google for $3.1 billion). He received a BS in Computer Science from Brown University.
How DataSwitch And MongoDB Atlas Can Help Modernize Your Legacy Workloads
Data modernization is here to stay, and DataSwitch and MongoDB are leading the way forward. Research strongly indicates that the future of the database management system (DBMS) market is in the cloud, and the ideal way to shift from an outdated legacy DBMS to a modern, cloud-friendly data warehouse is through data modernization.

A few key factors are driving this shift. Increasingly, companies need to store and manage unstructured data in a cloud-enabled system, whereas a legacy DBMS is designed only for structured data. Moreover, the amount of data generated by a business is increasing at a rate of 55% to 65% every year, and the majority of it is unstructured. A modernized database that can improve data quality and availability provides tremendous benefits in performance, scalability, and cost optimization. It also provides a foundation for improving business value through informed decision-making. Additionally, cloud-enabled databases support greater agility, so you can upgrade current applications and build new ones faster to meet customer demand.

Gartner predicts that by 2022, 75% of all databases will be in the cloud, either by direct deployment or through data migration and modernization. But research shows that over 40% of migration projects fail, due to challenges such as:

- Inadequate knowledge of legacy applications and their data design
- Complexity of code and design across different legacy applications
- Lack of automation tools for transforming legacy data processing into cloud-friendly data and processes

It is essential to take a strategic approach and choose the right partner for your data modernization journey. We're here to help you do just that.

Why MongoDB?

MongoDB is built for modern application developers and for the cloud era. As a general-purpose, document-based, distributed database, it facilitates high productivity and can handle huge volumes of data.
The document database stores data in JSON-like documents and is built on a scale-out architecture, making it a natural fit for developers building scalable applications through agile methodologies. Ultimately, MongoDB fosters business agility, scalability, and innovation.

Key MongoDB advantages include:

- Rich JSON documents
- Powerful query language
- Multi-cloud data distribution
- Security for sensitive data
- Quick storage and retrieval of data
- Capacity for huge volumes of data and traffic
- A design that supports greater developer productivity
- Extreme reliability for mission-critical workloads
- An architecture built for optimal performance and efficiency

Key advantages of MongoDB Atlas, MongoDB's hosted database as a service, include:

- Multi-cloud data distribution
- Security for sensitive data
- A design built for developer productivity
- Reliability for mission-critical workloads
- Optimal performance
- Managed operational efficiency

JSON documents are the most productive way to work with data because they support nested objects and arrays as values, along with schemas that are flexible and dynamic. MongoDB's powerful query language enables sorting and filtering on any field, no matter how deeply nested it is in a document. It also supports aggregations as well as modern use cases including graph search, geo-based search, and text search. Queries are themselves expressed in JSON and are easy to compose. MongoDB supports joins in queries, and it supports two types of relationships: referencing and embedding. It has all the power of a relational database and much, much more.

Companies of all sizes can use MongoDB, which operates on a large and mature platform ecosystem. Developers enjoy a great user experience, with the ability to provision MongoDB Atlas clusters and start coding instantly. A global community of developers and consultants makes it easy to get the help you need, if and when you need it.
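As a sketch of those last points, a filter on a nested field, a join by reference, and the embedding alternative might look like this (the collection and field names are hypothetical; with a live server these documents would be passed to a driver's find and aggregate calls):

```python
# Sketch of MongoDB query documents as plain Python dicts.
# "orders", "customers", and their fields are hypothetical examples.

# Filtering and sorting on a nested field uses dot notation,
# however deeply the field is embedded.
nested_filter = {"shipping.address.city": "Chennai"}
nested_sort = [("shipping.address.zip", 1)]  # 1 = ascending

# A join by reference: $lookup matches orders.customer_id against
# customers._id and embeds the matching customer documents.
join_pipeline = [
    {"$lookup": {
        "from": "customers",
        "localField": "customer_id",
        "foreignField": "_id",
        "as": "customer",
    }},
]

# The embedding alternative stores related data inside one document,
# so no join is needed at read time.
embedded_order = {
    "_id": 1001,
    "items": [{"sku": "A-1", "qty": 2}],
    "customer": {"name": "Asha", "city": "Chennai"},
}
```

Referencing keeps documents small and avoids duplication; embedding trades some duplication for single-document reads. Which to choose depends on how the data is accessed.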
In addition, MongoDB supports all major languages and provides enterprise-grade support.

Why DataSwitch as a partner for MongoDB?

Automated schema redesign, data migration, and code conversion: DataSwitch is a trusted partner for cost-effective, accelerated solutions for digital data transformation, migration, and modernization through a modern database platform. Our no-code and low-code solutions, along with cloud data expertise and unique automated schema generation, accelerate time to market. We provide end-to-end data, schema, and process migration with automated replatforming and refactoring, thereby delivering:

- 50% faster time to market
- 60% reduction in total cost of delivery
- Assured quality with built-in best practices, guidelines, and accuracy

Data modernization: How "DataSwitch Migrate" helps you migrate from an RDBMS to MongoDB

DataSwitch Migrate ("DS Migrate") is a no-code and low-code toolkit that leverages advanced automation to provide intuitive, predictive, and self-serviceable schema redesign from a traditional RDBMS model to MongoDB's document model, with built-in best practices. Based on data volume, performance, and criticality, DS Migrate automatically recommends the appropriate ETTL (Extract, Transfer, Transform & Load) data migration process. DataSwitch delivers data engineering solutions and transformations in half the timeframe of typical existing data modernization solutions.

Consider these key areas:

Schema redesign: construct a new framework for data management. DS Migrate provides automated data migration and transformation based on your redesigned schema, as well as no-touch code conversion from legacy data scripts to MongoDB Atlas APIs. Users simply drag and drop the schema for redesign, and the platform converts it to a document-based JSON structure by applying MongoDB modeling best practices. The platform then automatically migrates data to the new, redesigned JSON structure and converts the legacy database script for MongoDB.
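To illustrate the kind of redesign being described here, a minimal hand-written sketch (ours, not DataSwitch's code; the tables and fields are hypothetical) of folding a normalized customer/orders table pair into embedded documents:

```python
# Hand-written sketch of a 3NF-to-document redesign: fold rows from
# a normalized "orders" child table into their parent "customers"
# rows. This illustrates the shape of the transformation only; it is
# not DataSwitch code, and the tables/fields are hypothetical.

def embed_orders(customers, orders):
    """Return customer documents with their orders embedded."""
    by_customer = {}
    for order in orders:
        by_customer.setdefault(order["customer_id"], []).append(
            {"order_id": order["order_id"], "total": order["total"]}
        )
    return [
        {
            "_id": c["customer_id"],
            "name": c["name"],
            "orders": by_customer.get(c["customer_id"], []),
        }
        for c in customers
    ]

customers = [{"customer_id": 1, "name": "Asha"},
             {"customer_id": 2, "name": "Ravi"}]
orders = [{"order_id": 10, "customer_id": 1, "total": 250.0},
          {"order_id": 11, "customer_id": 1, "total": 80.0}]

docs = embed_orders(customers, orders)
# docs[0] is Asha with two embedded orders; docs[1] is Ravi with none.
```

The foreign key disappears: what was a join at query time becomes an array inside the parent document at migration time.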
This automated, user-friendly data migration is faster than anything you've ever seen. Here's a look at how the schema designer works.

Refactoring: change the data structure to match the new schema. DS Migrate handles this through auto-generated code for migrating the data. This is far beyond a mere lift and shift: DataSwitch takes care of refactoring and replatforming (moving from the legacy platform to MongoDB) automatically. Performing all of these tasks within a single platform is a game-changing capability.

Security: mask and tokenize data while moving it from on-premise to the cloud. Because the data may be moving to a public cloud, you must keep it secure. DataSwitch's tool can configure and apply security measures automatically while migrating the data.

Data quality: ensure that data is clean, complete, trustworthy, and consistent. DataSwitch allows you to configure your own quality rules and apply them automatically during data migration.

In summary: first, the DataSwitch tool automatically extracts the data from an existing database, such as Oracle. It then exports the data and stores it locally before zipping it and transferring it to the cloud. Next, DataSwitch transforms the data, altering the data structure to match the redesigned schema and applying data security measures during the transform step. Lastly, DS Migrate loads the data and processes it into MongoDB in its entirety.

Process Conversion

Process conversion, in which scripts and process logic are migrated from a legacy DBMS to a modern DBMS, is made easier thanks to a high degree of automation. Minimal coding and manual intervention are required, and the journey is accelerated.
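The masking and tokenization step described above can be sketched as follows. This is our illustration of the general technique, not DataSwitch's implementation; the field names and salt are hypothetical:

```python
import hashlib

# Sketch of deterministic tokenization during a transform step:
# sensitive values are replaced with stable, irreversible tokens
# so the data can move to the cloud without exposing originals.
# Illustration only, not DataSwitch code; names are hypothetical.

SALT = b"example-salt"  # in practice, a secret kept outside the data

def tokenize(value: str) -> str:
    """Replace a sensitive value with a stable, irreversible token."""
    return hashlib.sha256(SALT + value.encode("utf-8")).hexdigest()[:16]

def mask_row(row: dict, sensitive_fields: set) -> dict:
    """Return a copy of the row with sensitive fields tokenized."""
    return {k: tokenize(v) if k in sensitive_fields else v
            for k, v in row.items()}

row = {"name": "Asha", "ssn": "123-45-6789", "city": "Chennai"}
masked = mask_row(row, {"ssn"})
# masked["ssn"] is now a 16-character token; other fields unchanged.
```

Because the tokenization is deterministic, equal source values map to equal tokens, so joins and uniqueness checks still work on the masked data.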
Process conversion involves:

- DML: Data Manipulation Language
- CRUD: typical application functionality (Create, Read, Update & Delete)
- Converting to the equivalent MongoDB Atlas API calls

Degree of automation DataSwitch provides during migration:

Schema Migration Activities | DS Automation Capabilities
- Application Data Usage Analysis: 70%
- 3NF to NoSQL Schema Recommendation: 60%
- Schema Re-Design Self Services: 50%
- Predictive Data Mapping: 60%

Process Migration Activities | DS Automation Capabilities
- CRUD-based SQL conversion (Oracle, MySQL, SQL Server, Teradata, DB2) to MongoDB API: 70%

Data Migration Activities | DS Automation Capabilities
- Migration Script Creation: 90%
- Historical Data Migration: 90%
- Catch-up Load: 90%

DataSwitch Legacy Modernization as a Service (LMaaS): our consulting expertise, combined with the DS Migrate tool, allows us to harness the power of the cloud for data transformation of legacy RDBMS data systems to MongoDB. Our solution delivers legacy transformation in half the time frame through pay-per-usage. Key strengths include:

● Data Architecture Consulting
● Data Modernization Assessment and Migration Strategy
● Specialized Modernization Services

DS Migrate Architecture Diagram

Contact us to learn more.
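To illustrate the kind of CRUD conversion involved, here is a sketch of what a converter's output might look like: SQL statements mapped onto MongoDB-style operation documents. This is our illustration, not DataSwitch's converter, and the table and field names are hypothetical:

```python
# Sketch of CRUD conversion output: SQL statements mapped to
# MongoDB-style operation documents. Illustration only; not
# DataSwitch's converter, and all names are hypothetical.

# SQL: UPDATE customers SET city = 'Chennai' WHERE customer_id = 42
update_op = {
    "collection": "customers",
    "operation": "update_one",
    "filter": {"customer_id": 42},
    "update": {"$set": {"city": "Chennai"}},
}

# SQL: SELECT name, city FROM customers WHERE city = 'Chennai'
find_op = {
    "collection": "customers",
    "operation": "find",
    "filter": {"city": "Chennai"},
    "projection": {"name": 1, "city": 1, "_id": 0},
}

# With a live driver, these would become calls such as
# db.customers.update_one(filter, update) and
# db.customers.find(filter, projection).
```

The WHERE clause becomes a filter document, the SET clause a $set update document, and the column list a projection: the structural mapping that automated process conversion performs at scale.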