MongoDB 3.4.1 is out and is ready for production deployment. This release contains only fixes since 3.4.0, and is a recommended upgrade for all 3.4 users.
Fixed in this release:
- SERVER-27124 Disallow readConcern:majority reads on pv0
- SERVER-27201 $graphLookup triggers null pointer dereference
- SERVER-27207 Find on view with sort through mongos may incorrectly return empty result set
- SERVER-27213 Two $match stages combine incorrectly, yielding incorrect results
- SERVER-27300 Disallow indexing of BSONType::Symbol with a non-simple collation
- SERVER-27210 3.4.0 mongo shell unable to connect using MongoURI with "ssl=true"
- SERVER-27271 rolesInfo command raises System.InvalidOperationException: Duplicate element name 'roles'
- SERVER-26870 Sometimes collection data file is not removed even though collection is dropped
- TOOLS-1541 Support exporting views
As always, please let us know of any issues.
-- The MongoDB Team
What’s New in MongoDB 3.4, Part 1: Multimodel Done Right
MongoDB 3.4 is now Generally Available (GA) and ready for production deployment! MongoDB 3.4 is the latest release of the industry’s fastest growing database. It offers a major evolution in capabilities and enhancements that enable you to address emerging opportunities and use cases. This 3-part blog series aims to help you navigate everything that is new, and provides the most important resources to get you started:
- Part 1 covers the extended multimodel capabilities of MongoDB 3.4, including native graph processing, faceted navigation, rich real-time analytics, and powerful connectors for BI and Apache Spark.
- Part 2 discusses enhanced capabilities to support mission-critical applications, including geo-distributed MongoDB zones, elastic clustering, tunable consistency, and enhanced security controls.
- Part 3 concludes with the modernized DBA and Ops tooling available in the new release.
If you want the detail now on everything the new release offers, download the What’s New in MongoDB 3.4 white paper.
Why Multimodel?
Rather than the monolithic codebases of the past, today’s applications are increasingly decomposed into loosely coupled suites of microservices, each implementing specific functionality within an application. Different services can place very different demands on the database – from simple key-value lookups to complex analytics, aggregations, and graph traversals, through to rich search queries. Some data may need to be stored only in memory for predictable low latency, while other data sets may need to be encrypted on disk for regulatory compliance. Data sets may vary from billions of small records, each just a few KB in size, to large, multi-MB objects. To tame the complexity that would come from using a multitude of storage technologies, the industry is moving toward the concept of “multimodel” databases.
Such designs are based on the premise of presenting multiple data models within the same platform, thereby serving diverse application requirements. However, many self-described multimodel databases are little more than a collection of discrete technologies for data storage, search, and analytics, each with its own domain-specific language, API, and deployment requirements, and each working on its own copy of the data. This approach to multimodel fails to offer much of an improvement over running multiple independent databases, imposing high complexity, overhead, friction, and cost on developers and operations teams.
MongoDB Takes a Different Approach
How does MongoDB differ? MongoDB’s flexible document data model presents a superset of other database models. It allows data to be represented as simple key-value pairs and flat, table-like structures, through to rich documents and objects with deeply nested arrays and sub-documents. With an expressive query language, documents can be queried in many ways – from simple lookups to sophisticated processing pipelines for data analytics and transformations, through to faceted search, JOINs, and graph traversals. With a flexible storage architecture, application owners can deploy storage engines optimized for different workload and operational requirements. MongoDB’s approach to multimodel significantly reduces developer and operational complexity compared to running multiple, separate technologies to satisfy diverse application requirements. Users can leverage the same MongoDB query language, data model, scaling, security, and operational tooling across different applications, all within a single, integrated database platform. MongoDB 3.4 introduces native graph processing, faceted navigation, multi-language collations, additional aggregation pipeline operators, and a new decimal data type, along with enhanced connectors for BI and Apache Spark integration.
Native Graph Processing
Applications storing data in MongoDB frequently contain data that represents graph or tree type hierarchies. These connections can be as simple as a management reporting chain in an HR application, or as complex as the multi-directional, deeply nested relationships maintained by social networks, master data management, recommendation engines, disease taxonomies, fraud detection, and more. While special-purpose graph databases are effective at storing and querying graph data, it is often desirable to store and traverse graph data directly in MongoDB, where it can be processed, queried, and analyzed alongside all other operational data in real time, without the complexity of duplicating data across two separate databases. Graph and hierarchical data is commonly queried to uncover indirect or transitive relationships. For example, if company “A” is owned by company “B”, and “B” is owned by company “C”, then “C” indirectly owns company “A”. MongoDB 3.4 delivers this functionality via a new aggregation stage, $graphLookup, which recursively looks up the set of documents with a specific defined relationship to a starting document. Developers can specify the maximum depth of the recursion, and apply additional filters to search only nodes that meet specific query predicates. $graphLookup can recursively query within a single collection, or across multiple collections. Review the documentation to learn more about the MongoDB $graphLookup operator for graph processing.
Faceted Navigation
Faceting is a popular analytics and search capability that allows an application to group information into related categories by applying multiple filters to query results. Facets allow users to narrow their search results by selecting a facet value as a filter criterion. Facets also provide an intuitive interface for exploring a data set, and allow convenient navigation to the data that is of most interest.
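Returning to the graph section above, a minimal sketch of a $graphLookup stage that walks an HR reporting chain, written here as a pymongo-style Python document (the collection name "employees" and its field names are hypothetical):

```python
# Sketch: a $graphLookup stage that recursively resolves a management chain.
# "employees", "reportsTo", and "name" are illustrative names, not from the post.
reporting_chain = {
    "$graphLookup": {
        "from": "employees",             # collection to search (can be the same one)
        "startWith": "$reportsTo",       # value that seeds the recursion
        "connectFromField": "reportsTo", # field to follow on each hop
        "connectToField": "name",        # field it must match in "from"
        "maxDepth": 5,                   # optional limit on recursion depth
        "depthField": "level",           # optional: records each match's depth
        "as": "managementChain",         # output array of matched documents
    }
}

# With a live connection this stage would run as:
#   db.employees.aggregate([reporting_chain])
# An optional restrictSearchWithMatch filter can further limit which
# nodes are considered during the traversal.
```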
Most databases need to execute multiple GROUP BY statements to render facets, resulting in long-running queries and a poor user experience. MongoDB 3.4 introduces new aggregation pipeline stages for bucketing, grouping, and counting one or more facets in a single round trip to the database. As a result, developers can generate richer, more intuitive experiences to help users navigate complex data sets. Review the documentation to learn more about MongoDB faceted navigation.
Multi-Language Collations
Applications addressing global audiences must handle content that spans many languages. Each language has different rules governing the comparison and sorting of data. To create intuitive, localized user experiences, applications must handle non-English text with the appropriate rules for that language. For example, French has detailed rules for sorting names with accents, and German phonebooks order words differently than the German dictionary. MongoDB 3.4 significantly expands language support to allow users to build applications that adhere to language-specific comparison rules. Support for collations – the rules governing text comparison and sorting – has been added throughout the MongoDB query language and indexes, for over 100 different languages and locales. Each collation can be further customized to provide precise control over case sensitivity, numeric ordering, whitespace handling, and more. Developers can specify a collation for a collection, an index, a view, or for specific operations that support collation (e.g., find, aggregate, update). You can learn more about collation in MongoDB from the documentation.
Aggregation Pipeline Enhancements
MongoDB developers and data engineers rely on the aggregation pipeline for its power and flexibility in enabling the sophisticated processing and manipulation demanded by real-time analytics and data transformations.
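As a sketch of the single-round-trip faceting described above, the pipeline below groups a hypothetical product catalog two ways at once, using the $facet, $bucket, and $sortByCount stages added in 3.4 (collection and field names are illustrative):

```python
# Sketch: compute two facets over "products" in one database round trip.
# "products", "manufacturer", and "price" are hypothetical names.
facets = {
    "$facet": {
        # Facet 1: count products per manufacturer, most common first.
        "byManufacturer": [{"$sortByCount": "$manufacturer"}],
        # Facet 2: bucket products into fixed price ranges.
        "byPrice": [{
            "$bucket": {
                "groupBy": "$price",
                "boundaries": [0, 50, 100, 500],  # [0,50), [50,100), [100,500)
                "default": "Other",               # catch-all bucket
                "output": {"count": {"$sum": 1}},
            }
        }],
    }
}

# db.products.aggregate([facets]) would return a single document whose
# "byManufacturer" and "byPrice" fields hold each sub-pipeline's results.
```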
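Similarly, a collation is simply a document of options that can be attached to an individual operation or to an index. A minimal sketch, assuming a hypothetical contacts collection and pymongo-style syntax:

```python
# Sketch: a collation document for French, case- and accent-insensitive.
# The collection name "contacts" is hypothetical.
french_ci = {
    "locale": "fr",           # apply French comparison rules
    "strength": 1,            # primary strength: ignore case and accents
    "numericOrdering": True,  # compare digit strings numerically ("10" > "2")
}

# With a live connection, the same document could be supplied per operation
# or per index, for example:
#   db.contacts.find({"name": "élodie"}).collation(french_ci)
#   db.contacts.create_index([("name", 1)], collation=french_ci)
```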
MongoDB 3.4 continues to extend the aggregation pipeline, adding new capabilities within the database that simplify application-side code, as well as optimizer enhancements that improve performance. In addition to the graph and facet features described earlier, MongoDB 3.4 adds many other expressions, addressing string manipulation, array handling, type handling, and schema detection and transformation:
- String expressions for splitting and manipulating strings either in bytes or in code points (a code point can represent a single component of the string, e.g., a character, emoji, or formatting instruction).
- Array expressions for more sophisticated manipulation of, and computation on, arrays, including parallel array processing.
- New expressions for determining the types of fields.
- Case/switch expressions for branching.
- Support for ISO week date expressions.
MongoDB 3.4 also brings additional performance optimizations to the aggregation pipeline. Where possible, the query optimizer automatically moves the $match stage earlier in the pipeline and combines it with other stages, increasing the cases where indexes can be used to filter result sets. In most cases, no modifications to existing queries are needed. You can learn more about the many MongoDB 3.4 aggregation pipeline enhancements from the documentation.
Decimal Data Type
Decimal128 is a 16-byte decimal floating-point number format. It is intended for calculations on decimal numbers where high levels of precision are required, such as financial (e.g., tax calculations, currency conversions) and scientific computations. Decimal128 supports 34 decimal digits of significand and an exponent range of −6143 to +6144. MongoDB 3.4 adds support for the decimal data type, which represents decimal128 values. Unlike the double data type, which only stores approximations of decimal values, the decimal data type stores the exact value.
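The double-versus-decimal difference can be reproduced with Python's standard decimal module, which follows the same IEEE 754-2008 decimal arithmetic rules that decimal128 is based on (shown here as an analogy, not MongoDB code):

```python
from decimal import Decimal

# Constructing from a float exposes the value a binary double actually
# stores for "9.99" - an approximation, not the exact decimal.
stored_as_double = Decimal(9.99)
assert str(stored_as_double).startswith("9.9900000000000002")

# Constructing from a string keeps the exact decimal value,
# analogous to MongoDB's decimal type.
exact = Decimal("9.99")
assert exact == Decimal("9.99")

# Trailing zeros are retained in the representation, yet values still
# compare equal on numeric value - matching decimal128 semantics.
assert Decimal("0.10") == Decimal("0.1")
assert str(Decimal("0.10")) == "0.10"
```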
For example, a decimal type ("9.99") has a precise value of 9.99, while 9.99 represented as a double has an actual value of 9.99000000000000021316282072803, creating the potential for rounding errors when it is used in calculations. Decimal values are treated like any other numeric type: they compare and sort correctly against other types based on their actual numeric value. Operations on decimals are implemented in accordance with the decimal128 standard, so a value of 0.10 will retain its trailing zero while comparing equal to 0.1, 0.10000, and so on. Review the documentation to learn more about the new MongoDB decimal data type.
Visualizing MongoDB Data
The MongoDB Connector for BI was introduced in November 2015. For the first time, analysts, data scientists, and business users were able to seamlessly visualize semi-structured and unstructured data managed in MongoDB, alongside traditional data from their SQL databases, using the same BI tools deployed within millions of enterprises. Building on its initial release, the Connector for BI has been re-engineered to improve performance, simplify installation and configuration, and support Windows.
Figure 1: Uncover new insights with powerful visualizations generated from MongoDB
Performance and scalability have been improved by moving more query execution down to the MongoDB processes themselves. Queries and complex aggregations are executed natively within the database, reducing latency and bandwidth consumption. In addition, installation and authentication have been simplified: users now authenticate as an existing user already declared within MongoDB, with no need to create separate username and password credentials within the connector. The Connector for BI is part of the Advanced Analytics suite available with MongoDB Enterprise Advanced. Review the MongoDB Connector for BI documentation to learn more.
MongoDB Connector for Apache Spark
Following general availability in June 2016, the MongoDB Connector for Apache Spark has been updated to support the latest Spark 2.0 release. Spark 2.0 support in the connector provides access to the new SparkSession entry point, the unified DataFrame and Dataset API, enhanced SparkSQL and SparkR functionality, and the experimental Structured Streaming feature. The connector exposes all of Spark’s libraries, including Scala, Java, Python, and R. MongoDB data is materialized as DataFrames and Datasets for analysis through machine learning, graph, streaming, and SQL APIs. Already powering sophisticated analytics at organizations including China Eastern Airlines, Black Swan, and x.ai, the MongoDB Connector for Apache Spark takes advantage of MongoDB’s aggregation framework, rich queries, and secondary indexes to extract, filter, and process only the range of data it needs – for example, all customers located in a specific geography. To maximize performance across large, distributed data sets, the connector can co-locate Resilient Distributed Datasets (RDDs) with the source MongoDB node, thereby minimizing data movement across the cluster and reducing latency. You can download the MongoDB Connector for Apache Spark from GitHub, and sign up for a free Spark course from MongoDB University.
Next Steps
That wraps up the first part of our 3-part blog series. Remember, you can get the detail now on everything packed into the new release by downloading the What’s New in MongoDB 3.4 white paper. Alternatively, if you’ve had enough of reading about it and want to get started now:
- Download MongoDB 3.4
- Spin up your own MongoDB 3.4 cluster on the MongoDB Atlas database service
- Sign up for our free 3.4 training from MongoDB University
Solving Customer Challenges: Meet Consulting Engineer Paul-Emile Brotons
Our Professional Services team is growing. Hear from Paul-Emile Brotons about his Consulting Engineer (CE) role, the types of projects he works on for customers, how he continually learns, and what makes this role a great opportunity for people with technical backgrounds who enjoy solving a variety of problems. Jackie Denner: Thanks for sharing your experience as a Consulting Engineer. Can you tell me about the Consulting Engineer team within Professional Services at MongoDB? Paul-Emile Brotons: I joined MongoDB a year and a half ago. The Consulting Engineering team is responsible for assisting customers at every stage of their MongoDB journey to ensure they are successful. We assist customers with training, database design, architecture design, code reviews, preproduction audits and reviews, setup, and health checks. I’m part of the South European team and I’m based out of Paris, but the Consulting Engineering team is worldwide. Since we are solving challenging problems, the team is very close and meets daily to share ideas and discuss solutions. I always have colleagues available to help at any time of day. JD: As a junior engineer, why did you opt for a Consulting Engineer role instead of a traditional Product Engineer role? PEB: Before joining MongoDB, I was a full-stack engineer at a French startup specializing in revenue management. I learned great technical skills there, but, in the end, I felt I was missing the big picture: What other stacks exist on the market? What tools are other engineering teams at big companies or startups working with? That is exactly what the Consulting Engineer role made possible for me. Since our projects are usually short-term, a typical CE may see 50 projects in a year. In my current role, I have been working with almost every new and exciting technology. I also get to learn how people within product and engineering work in other organizations. 
I find this very valuable, and it’s not something you can easily find in a traditional Product Engineer role. JD: What does a day in your role look like? PEB: CEs are assigned to “missions,” which typically range from one to four days and concern a specific customer. Longer-term projects can span several months. My role generally starts the week before. Before each mission, I try to set up a short preconsult session where I meet with customers and discover the topics they want to discuss. Then, on the day of the mission, I provide training, performance evaluation, tuning, and more. I learn a lot in my role, and I try to find solutions to all the difficult problems the customer has not been able to solve alone. It’s challenging and very rewarding. In some cases, I may not be assigned to a customer and I will be working on preparation and continuous learning. I appreciate the freedom my role gives me. JD: What was your onboarding like, and what learning and growth opportunities are there on the Consulting Engineer team? PEB: To be completely honest, I was a bit scared when I joined. I was very impressed with the way people work here, and I had a feeling it would be hard for me to onboard. However, the ramp-up process is so well done that it almost felt easy. The first weeks were dedicated only to training. First, we have to learn a lot about MongoDB. A CE is a database expert. Since almost every software system needs a persistence layer, this expertise is very valuable. Second, we have to know our stuff when it comes to Linux, networking, cloud providers, architecture, coding, and more. Afterward, everything is done to gradually increase the level of difficulty; complex missions are not delivered by new hires. Management is really careful about that, which is reassuring. Once a CE is performing well in their role, they may be promoted to Senior and then Principal grades. Many of us also study to pass certifications.
I will soon start studying for a Linux sysadmin certification. The management team is very supportive and encourages continuous learning. JD: How do you interact with other teams at MongoDB? PEB: The CE role requires a lot of interaction with teams such as Sales, Presales Engineering, and Product Engineering. Consulting Engineers can be leveraged to help Sales and Solution Architects before the sale happens, since we are seen as trusted advisers. We also often speak to product teams to discuss the inner workings of a product, feature, or system. I’ve had the opportunity to meet many people within MongoDB. JD: What is one of the most interesting or challenging projects you’ve worked on? PEB: It is honestly difficult to choose, but I would pick a long project I worked on with a major container transportation and shipping company. It was challenging given the scope of the project and the number of interactions and subjects I had to deal with. The project was key for the customer, and it was technically demanding. We had to review the whole application architecture; analyze the front end to infer the requests and schema design needed on the database side; work with a wide range of professionals, including developers, solution architects, Linux engineers, and project managers; and test that everything would happen as expected. It was a great learning experience, from both a personal and professional perspective. JD: What makes someone successful in a CE role? PEB: Aside from sufficient knowledge of computer science, the CE role requires good communication and problem-solving skills. You have to know how to listen to and understand the problems customers encounter before you can think of a solution. Good customer contact is often the key to a mission’s success, and it makes the difference between a satisfied customer and a happy customer. JD: What advice would you offer someone looking to move into Professional Services at MongoDB? 
PEB: First, prepare well for the interviews — study up on algorithms, two programming languages, and basic database and hardware concepts. The interviews can be challenging, and there are a lot of rounds. Second, I would advise candidates to look at the beginner courses on the MongoDB University website. The courses are free and they’re the best I have taken on the web so far. Going deeper into learning MongoDB before joining the company saved me a lot of time. Last but not least, I would encourage candidates to contact CEs at MongoDB to get a clear view of the company and the role. My colleagues and I are more than happy to answer any questions that might help someone decide if this role is the right fit for them. Interested in a Professional Services career at MongoDB? We have several open roles on our team and would love for you to transform your career with us!