MongoDB Blog

Articles, announcements, news, updates and more

Australian Start-Up Ynomia Is Building an IoT Platform to Transform the Construction Industry and Its Hostile Environments

The trillion-dollar construction industry has not yet experienced the technology revolution you might expect. Low levels of R&D and difficult working environments have stifled innovation, and fundamental improvements have been slow to arrive. But one Australian start-up is changing that by building an Internet of Things (IoT) platform to harness construction and jobsite data in real time.

“Productivity in construction is down there with hunting and fishing as one of the least productive industries per capita in the entire world. It's a space that's ripe for people to come in and really help,” explains Rob Postill, CTO at Ynomia.

Ynomia has already been closely involved with many prestigious construction projects, including the residential N06 development in London’s famous 2012 Olympic Village. It was also integral to the construction of the Victoria University Tower in Australia.

“These projects involve massive outflows of money: think about glass facades on modern buildings, which can represent 20-30 percent of the overall project cost. They are largely produced in China and can take 12 weeks to get here,” says Postill. “Meanwhile, the plasterer, the plumber, the electrician are all waiting for those glass facades to be put on so it is safe for them to work. If you get it wrong, you can go into the deep red very quickly.”

To tackle these longstanding challenges, Ynomia aims to address the lack of connectivity, transparency, and data management on construction sites, which has traditionally resulted in the inefficient use of critical personnel, equipment, and materials; compressed timelines; and unpredictable cash flows. To optimize productivity, Ynomia offers a simple end-to-end technology solution that creates a Connected Jobsite, helping teams manage materials, tools, and people across the worksite in real time.

IoT in a Hostile Environment

The deployment of technology in construction is often fraught with risk. As a result, construction sites are still largely run on paper: blueprints, diagrams, and models, as well as the more traditional invoices and filing. At the same time, there is a constant need to track progress and monitor massive volumes of information across the entire supply chain. Engineers, builders, electricians, plumbers, and all the other associated professionals need to know what they need to do, where they need to be, and when they need to start.

“The environment is hostile to technology like GPS, computers, and mobile phone reception because you have a lot of Faraday cages and lots of water and dust,” explains Postill. “You can't have somebody wandering around a construction site with a laptop; it'll get trashed pretty quickly."

Enter MongoDB Atlas

“On a site, you might be talking about materials, then if you add to that plant and equipment, or bins, or tools, etc., you're rapidly getting into thousands and thousands of tags, talking all the time, every day,” said Postill. That means thousands of tags now send millions of readings on Ynomia building sites around the world. All these IoT data packets must be stored efficiently and accurately so Ynomia can reassemble the history of what has happened and track tagged inventory, personnel, and vehicles around the site. Many of the tag events are also safety-critical, so accuracy is vital and packets can't be missed.
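To picture what such an event stream can look like in the database, here is a minimal, hypothetical sketch in mongosh of how a single tag reading might be stored and replayed. The collection and field names are our own illustrative assumptions, not Ynomia's actual schema.

// Store one illustrative tag reading (field names are assumptions)
db.tagEvents.insertOne({
  tagId: "TAG-00042",                           // ID of the physical tag
  siteId: "SITE-MELB-01",                       // which jobsite sent the packet
  kind: "material",                             // material, tool, person, or vehicle
  location: { zone: "L12-N", gateway: "GW-7" },
  ts: new Date()                                // time the reading was captured
});

// A compound index keeps per-tag history queries fast
db.tagEvents.createIndex({ tagId: 1, ts: 1 });

// Reassemble the history of a single tagged asset in time order
db.tagEvents.find({ tagId: "TAG-00042" }).sort({ ts: 1 });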
To address these needs, Ynomia was looking for a database that was scalable, flexible, and resilient, and that could easily handle a wide variety of fast-changing sensor data captured from multiple devices. The final requirement Postill had for the database layer was freedom: a database that didn't lock the company into a single cloud platform, as Ynomia was still in the early stages of assessing cloud partners. The Commonwealth Scientific and Industrial Research Organisation, which Postill had worked with in the past, suggested MongoDB, a general-purpose, document-based database built for modern applications.

“The most important factor was that the database is event-driven, which I knew would be difficult in the traditional relational model. We deal with millions of tag readings a day, which is a massive wall of data,” said Postill.

A Cloud Database

Ynomia is using MongoDB Atlas, the global cloud database service, hosted on Microsoft Azure. Atlas offers best-in-class automation and proven practices that combine availability, scalability, and compliance with the most demanding data security and privacy standards.

“When we started we didn't know enough about the problem and we didn't want to be constrained," explained Postill. "MongoDB Atlas gives us a cloud environment that moves with us. It allows us to understand what is happening and make changes to the architecture as we go."

Postill says this combination of flexibility and management tooling also allows his developers to focus on business value, not undifferentiated code. One example Postill gave was cluster administration: "Cluster administration for a start-up like us is wasted work," he said. "We’re not solving the customer's problem. We're not moving anything on. We’re focusing on the wrong thing. For us to be able to just make that problem go away is huge. Why wouldn’t you?"

Atlas also gives Ynomia the option to spin up new clusters seamlessly anywhere in the world. This allows customers to keep data local to their construction site, improving latency and helping satisfy regional data regulations.

Real-Time Analytics

The company has also deployed MongoDB Charts, which takes this live data and automatically provides a real-time view. Charts is the fastest and easiest way to visualize event data directly from MongoDB in order to act instantly and decisively on the real-time insights generated by an event-driven architecture. It allows Ynomia to share dashboards so all the right people can see what they need to and collaborate accordingly.

“Charts enables us to quickly visualize information without having to build more expensive tools, both internally and externally, to examine our data,” comments Postill. “As a startup, we go through this journey of: what are we doing and how are we doing it? There's a lot of stuff we are finding out along the way on how we slice and re-slice our data using Charts.”

A Platform for Future Growth

Ynomia is targeting a huge market and is set for ambitious growth in the coming years. How the platform, and its underlying architecture, can continue to scale and evolve will be crucial to enabling that business growth.

“We do anything we can to keep things simple,” concluded Postill. “We pick technology partners that save us from spending time we shouldn't spend so we can solve real problems. We pick technologies that roll with the punches, and that's MongoDB.”

February 23, 2021
Applied

New MongoDB Shell Now Supports Client-Side Field-Level Encryption

Last summer, we introduced mongosh, the new MongoDB Shell with an enhanced user experience and a powerful, Node.js-based scripting environment. Since then, we have been adding new functionality and APIs to close the gap with the legacy mongo shell, on the path to making it the default shell for MongoDB. In addition to the set of CRUD and other commands that we supported in the first release, we recently added:

Bulk operations
Change Streams
Sessions and Transactions
Logging and profiling commands
Replica set and sharding configuration commands

Plus some other minor things and utility commands here and there. This week, we released mongosh 0.8 with support for Client-side Field-level Encryption (FLE).

Support for Client-side Field-level Encryption

MongoDB Client-Side Field-level Encryption (FLE) allows developers to selectively encrypt individual fields of a document on the client, using the MongoDB drivers (and now mongosh as well), before it is sent to the server. This keeps data encrypted (but still queryable) while it is in use in database memory, and protects it from the providers hosting the database, as well as from any user that has direct access to the database. Back in November, we announced that in addition to AWS KMS, Client-side FLE now supports key management systems in Azure and Google Cloud in beta. The most recent version of the MongoDB Shell makes it easy to test this functionality in a few easy steps:

Create a free Atlas cluster.
Install mongosh.
Check out our documentation to set up your KMS in Azure or GCP.
Start encrypting!

To make it easier to get started with Client-Side FLE, here are two simple scripts that you can edit and copy-paste into mongosh: mongosh-fle-gcp-kms to set up Client-side FLE with Google Cloud and mongosh-fle-local-kms to use a local key. In the screenshot below, you can see a document that was encrypted on the client with automatic encryption before it was sent across the wire and inserted into MongoDB. Fields are in clear text in the shell but are shown as encrypted when connecting with Compass to the same Atlas cluster.

A Powerful Scripting Environment

As mongosh is built on top of Node.js, it’s a great environment for scripting, whether you're checking the health status of your replica set or taking a quick look at the data to make sure it’s coming in from your application as you expect. With modules from npm, the experience becomes much richer and more interactive. For example, if I want to look at the sample_mflix collection available in the Atlas sample datasets and check the distribution of thriller movies over the years, I can put together a simple script that runs an aggregation and visually formats the results with an open source library called babar. This is just one of many ways you can extend the functionality of the MongoDB Shell by taking advantage of the great ecosystem of JavaScript libraries and modules that the community has built over the years and keeps on building every day.

Start Scripting and Let Us Know How It's Working for You!

As we added new functionality to the MongoDB Shell, we tried as much as possible to keep backwards compatibility with mongo, and we were mostly able to do that. In a limited number of cases, however, we took the opportunity to clean up the API and address some unexpected behaviors. Wondering what’s coming next in mongosh? We are working on adding support for load() and rc files to make it easy to load your scripts into the shell.
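To make the two examples above concrete, here is a minimal sketch of Client-side FLE with a local key in mongosh, loosely in the spirit of the mongosh-fle-local-kms script. The connection string is a placeholder, and the all-zero test key is an assumption for illustration only; never use such a key in production.

// Connect with auto-encryption options pointing at a local test key
const keyMongo = Mongo("mongodb+srv://<your-atlas-cluster>", {
  keyVaultNamespace: "encryption.__keyVault",
  kmsProviders: { local: { key: BinData(0, "A".repeat(128)) } } // 96 zero bytes, test only
});

// Create a data key, then encrypt a value explicitly
const keyVault = keyMongo.getKeyVault();
const keyId = keyVault.createKey("local");
const clientEncryption = keyMongo.getClientEncryption();
const encryptedSSN = clientEncryption.encrypt(
  keyId,
  "457-55-5462",
  "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic"
);

// Only ciphertext for the ssn field crosses the wire
keyMongo.getDB("hr").people.insertOne({ name: "Alice", ssn: encryptedSSN });

And the thriller-movies chart can be approximated with a script along these lines, assuming the Atlas sample datasets are loaded and babar has been installed with npm; the decade bucketing is our own illustrative choice.

// npm install babar, then require it from mongosh
const babar = require("babar");

db = db.getSiblingDB("sample_mflix");

// Count thrillers per decade in sample_mflix.movies
const points = db.movies.aggregate([
  { $match: { genres: "Thriller", year: { $type: "number" } } },
  { $group: {
      _id: { $subtract: ["$year", { $mod: ["$year", 10] }] },
      count: { $sum: 1 }
  } },
  { $sort: { _id: 1 } }
]).toArray().map(doc => [doc._id, doc.count]);

// Render an ASCII bar chart right in the shell
console.log(babar(points));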
If you find something that does not work as expected, please let us know! Simply create a bug in our JIRA project or reach out on Twitter.

February 22, 2021
Developer

Applying Maslow’s Hierarchy of Needs to Documentation

In his groundbreaking 1943 paper, psychologist Abraham Maslow theorized that all humans are motivated by five categories of needs: physiological, safety, love and belongingness, esteem, and self-actualization. Known today as the "Hierarchy of Needs," this theory is often depicted as a pyramid, for only when one stage is fulfilled can an individual move on to the next.

Abraham Maslow's Hierarchy of Needs (1943)

Not only does this theory apply to motivation, but it also applies to the efficacy of a user’s experience. Although Maslow's Hierarchy of Needs was originally intended for psychological analysis, a modified version can also be applied to users in today's digital world. At MongoDB Documentation, we strive to help users meet their learning objectives effectively and efficiently. With Maslow’s theory in mind, we created a framework for our projects that takes his principles into account. We call this framework "Documentation's Hierarchy of Needs."

Stage 1: Existence & Basic Needs

The first layer of the Doc’s Hierarchy of Needs is existence. At the fundamental level, if content does not exist, a user cannot use it. In order for content to effectively exist, the platform needs to:

Allow writers to write and publish documentation.
Have a frontend where users can easily access content, displayed in an accessible, intuitive manner.

To address this, the documentation platform team has engineered a toolchain that enables authors to write, preview, review, and publish content to the documentation corpus, where it can be accessed by any user. It sought to let writers focus on the content they were delivering rather than get bogged down in the tools they were using to write. The toolchain itself converts content into data, which allows the content to be easily organized, structured, reused, standardized, and tested. Whereas older technologies introduced friction into the design and development process, our new platform includes a more flexible frontend to quickly iterate and improve experiences for the users accessing the content. All of this means that content can easily be written, published, and accessed.

Stage 2: Quality Needs

The second layer of the Doc’s Hierarchy of Needs is quality. If the content isn’t of high quality, it isn’t beneficial to a user. From user research, we learned that higher-quality content should:

Be task- or use-case-centric.
Come off as approachable, helpful, and informative.
Create emotions of confidence, excitement, and determination.

We took these traits into consideration and rethought a few key touchpoints our users interact with, including a new docs homepage and a series of new docs product landing pages. As these pages are frequently first-touch experiences, it was important for us to provide a positive initial impression and introduction.

Docs Homepage

Prompting users with relatable tasks

Throughout these cards, users are given the opportunity to immediately get to the product documentation they need. In order to match the user’s mental model, all cards are written to emphasize tasks.

Leading users to a thorough introduction

Through extensive user research, we learned that users have difficulty understanding the fundamental differences between MongoDB and traditional relational databases. In this section, we wanted to give users a taste of what those differences are and leave them intrigued and informed on where to learn more.
Connecting users with other learning resources

This section keeps the ball rolling. At the beginning of their journey, the user receives a broad overview before working their way through basic concepts. At the end of the page, they are encouraged to continue their learning and explore our other educational platforms.

Docs Product Landing Pages

Creating consistency in user goals

At this touchpoint, users are entering a specific product learning experience. In order to supplement our users’ learning journeys, these pages are focused on increasing product fluency and adoption.

Creating emotions of excitement and confidence

In testing these designs, users felt that this specific section made them feel the most confident and excited. The use cases outlined quickly jumped out as relatable tasks, the small number of steps made the task feel easily achievable, and the interaction made the information exciting.

Stage 3: Findability Needs

The third layer of the Doc’s Hierarchy of Needs is findability. Here, we break through basic needs and head into psychological needs. Historically, users could still rely on external resources, such as Google, to find the information they required. That does not provide the ideal experience for our users, but it meets their basic needs. One of our main focuses this year was to improve findability and strengthen our navigational experience. We found that navigational experiences are mainly split between two persona types: advanced users and first-time learners. Advanced users are more likely to know exactly what they are looking for, leading them to rely heavily on an effective search experience. First-time learners, on the contrary, are less likely to know what they are looking for and just want to learn and explore.

Factors for Findability Success

After several rounds of user interviews, literature reviews, and meetings with subject matter experts, we identified the following characteristics of the ideal navigational experience:

Task-Centric Approach

In each round of research, such as card sorting or tree tests, we consistently found that users approach navigation based on their own experience or knowledge. Because of this finding, we implemented a task-centric approach in the revamp of Docs Navigation. By mirroring users’ mental models, this navigational model takes some of the heavy lifting off the user and creates an intuitive experience.

Importance of Efficiency and Accuracy

Users ranked efficiency and accuracy as the most important factors when navigating. In fact, many users, specifically developers, measure efficiency in number of clicks. To maximize the efficiency of the search engine, we provided context clues to users. This enabled them to determine the most relevant results and apply additional filters for improved accuracy. These findings became pivotal when envisioning a new Docs Search and pinpointing valuable features that would optimize for these factors.

A New Docs Nav

A New Docs Search

Small Snacks

Upon the release of these projects, the documentation platform team has enjoyed looking at the resulting analytics, which has inspired us to further improve findability and quality. For example, with information about what queries users are searching for, we can make decisions about what to optimize next. A fun tidbit we saw in our analytics concerned user preferences around full-page search vs. a modal.
In our research, we found a split in affinities toward each approach, and as a result it was difficult to make an informed decision on which to invest in. Instead, we decided to build both, as this extra work increased scope by only one engineering day. We have since found that they are being used to an equal degree. How fun! This leads us to believe that we are providing further psychological safety to our users by letting them navigate however they desire.

Stage 4: Experience Needs

The fourth layer of the Doc’s Hierarchy of Needs is experience, which encompasses the finishing touches: the difference between delight and neutrality, intuition and frustration, the ooo’s and the ugh’s. Internally, we've made improvements to the platform that increase writer efficiency and productivity so that writers can create better documentation. Research indicates that if employees are happy with their set of tools, the work they produce will be better as well.

Stage 5: Contribution Needs

The last layer of the Doc’s Hierarchy of Needs is contribution. Once the content exists — and it's of high quality, it's easily findable, and the experience is superb — users feel they should be able to contribute and be a part of the effort. From user research, we’ve heard that “contribution needs” include:

Feeling that they can help Docs improve.
Being able to report their own problems.
Joining a community.

Creating an open source platform

Users who regularly read the documentation are also able to contribute directly by making a pull request on GitHub. This directly relates to self-fulfillment as it is defined in Maslow’s Hierarchy of Needs, because we are encouraging users to achieve their full potential by participating in the growth of the platform.

Note: This graphic includes internal commits as well.

Improving the Feedback Widget

After receiving user feedback, we focused on the following points to improve in the next iteration:

Interface with content

The previous feedback widget visually covered the actual documentation content, and there was no way to hide or dismiss it. To address this, the new feedback widget was de-emphasized in the view, keeping the priority on the content itself.

Quality of feedback collected

Internally, the feedback widget was not helpful because it didn’t provide enough context for writers to make quality improvements. To address this, the new feedback widget introduced new categories that allow users to add specific classifiers to their entry.

Introduction of helpful next steps

In addition, users frequently confused the feedback widget with a support center. This created a large number of tickets that often could not be acted upon. To address this, the feedback widget now connects these users to better-fit resources such as the Community or the Support Center. This connection also creates an opportunity for users to join the rest of the MongoDB Community and connect with other like-minded individuals.

Results/Learnings

In doing so, we have successfully eliminated all noise in the feedback widget directly related to it interfering with the content on the page. We have seen an increase in the quality of feedback as a result of the more detailed rating system and self-selection of categories. We have also seen a broader decrease in the quantity of feedback — and thus, less chaff to sift through than before.

Looking Towards the Future

We like to think that this helps us create a holistic docs experience, as we are touching on key parts of the user journey.
It puts the user at the center of all product strategy and design, which is extremely important to us as a team. Additionally, it provides a helpful framework for what we plan to do next!

February 17, 2021
Home

Capgemini Solutions That Help Customers Modernize Applications to MongoDB

Companies across every industry vertical continue to face the challenge of how to effectively migrate and quickly access massive amounts of enterprise data—all while keeping system performance up to par throughout the obstacle-ridden process. The complexities involved with ubiquitous, traditional Relational Database Management Systems (RDBMS) are many. RDBMS systems can often inhibit performance, falter under heavy volumes, and slow down deployment. With MongoDB’s document-based, distributed database, however, performance and volume issues are easily addressed. But when it comes to speeding up time to market? The right auxiliary tools are still needed. Capgemini, a MongoDB partner and global leader in digital transformation, provides the final piece of the puzzle with a new tool rooted in automated intelligence. In this blog, we’ll explore three key Capgemini solutions that help customers modernize to MongoDB:

Tools that expedite time to market
Migration from legacy systems to MongoDB
New development using MongoDB as a backend database

Whether your company is developing a new database or migrating from legacy systems to MongoDB, Capgemini’s new Database Convert & Compare (DCC) tool can help. Below, we’ll detail how DCC works, then walk through a few recent client examples and the immense benefits reaped.

Tool: Database Convert & Compare (DCC)

A powerful tool developed by the Capgemini team, DCC optimizes activities like database migration, data comparison, validation, and much more. The tool can perform data transformations with specific customization based on the source and target database in scope. When migrating from RDBMS to MongoDB, DCC achieves 70% automation and 30% manual retrofit at the database level.

How does DCC work?

In the context of RDBMS-to-NoSQL migration, DCC performs the migration in three stages.

1) Assessment

Source database schema assessment: DCC extracts source schema information and performs an assessment to generate a detailed inventory of data objects such as tables, views, stored procedures, and indexes. It also generates a detailed report on data volume from each table, which helps in estimating data migration time from source to target.
Apply analytics to prepare a recommendation for the target database structure: the target structure varies based on parameters such as:

Table relationships (one to many, many to many, one to one)
Indexes applied on tables for performance requirements
Column data types

2) Schema Migration

Customize the tool to apply the recommendations from the assessment, generating the script for the target database.
Target schema script preparation: DCC generates a complete database schema script except for a few object types such as stored procedures, views, etc.
Produce a detailed report of the schema migration, inclusive of objects that couldn’t be migrated. Manual intervention is required to apply the business logic of source database stored procedures and views to the target environment application.

3) Data Migration

Column mapping: the assessment report generates an inventory of source database table fields as well as the recommended schema structure; the report also provides recommended field mapping from source to target based on the adopted recommendation and DCC customization.
Post-migration data validation script: DCC generates a data validation script after data migration is complete, which takes the field mapping from the related assessment and recommendation reports into consideration.
Data migration script for execution: DCC allows for the setup and configuration of different scripts for data migration, such as:

One-time data migration from source to target
Daily batch runs to sync up source and target database data
Intermittent data validation during the data migration (if any discrepancies are found in validation, the job stops and generates a report with the potential root cause of the issue)

Standalone data comparison: DCC allows data validation to be run in isolation between the source and target database. In this case, DCC generates source database table inventory details and extracts target database collection inventory details. Minimal manual intervention is required to perform the field mapping and set the configuration in the tool for data migration execution. Other configuration features, such as one-time migrations or daily batch migrations, can be configured as well.

The Capgemini team has successfully implemented and deployed the DCC tool for various banking customers for RDBMS-to-NoSQL end-to-end migration, including application retrofit and rewiring using other capable tools such as CAP360.
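As an illustration of what the post-migration validation stage checks, here is a small mongosh sketch of a count-based comparison. This is our own example of the general technique, not DCC's actual generated script, and the collection names and counts are invented.

// Expected row counts exported from the source RDBMS (illustrative values)
const expected = { customers: 1250000, orders: 8750000 };

// Compare each migrated collection's document count against its source table
for (const [coll, sourceCount] of Object.entries(expected)) {
  const targetCount = db.getCollection(coll).countDocuments();
  print(`${coll}: source=${sourceCount}, target=${targetCount} ` +
        (sourceCount === targetCount ? "OK" : "MISMATCH"));
}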
Case Study 1: Migration from Mainframe to MongoDB for a Large European Investment Bank

A large banking client encountered significant challenges in terms of growth and scale-up, low resilience and increased risks, and rising costs associated with the advent of mobile banking and a related significant increase in volume. To help the client evolve more quickly, Capgemini built an Operational Data Platform to offload expensive mainframe operations, as well as store and process customer transactions for business operations, analysis, and reporting.

The Challenge:

Inefficient and slow to meet customer demand for new digital banking services due to heavy reliance on legacy infrastructure and apps
Continued growth in traffic and the launch of new digital services led to increased cost of operations and decreased performance
The mainframe was the single point of failure for many applications; outages resulted in poor customer service, brand erosion, and regulatory concerns

The Approach:

An analysis of digital channels revealed that 92% of traffic was generated by 25 interaction types, with 85% of these being read-only. To offload these operations from the mainframe, an operational data lake (ODL) was created. The MongoDB-based ODL was updated in near real time via change data capture and a messaging queue to power existing apps, new digital services, and other APIs.

Outcome and Benefits:

Accelerated time to market for new digital services, including personalization
Improved stand-in capability to support resiliency during planned and unplanned mainframe outages
Reduced read-only transactions to the mainframe (MIPS cost), freeing up resources for additional growth
Saved the customer over 80% in year-on-year post-migration costs

The new MongoDB database seamlessly handles 25M+ transactions per day, as well as a data volume spanning more than 30 months of history, with ~13B transactions held in 114M documents.

Case Study 2: Migration of a Large-scale Database from Legacy to MongoDB for a US-based Insurance Customer

A US-based insurance client faced disparate data spread across 100+ systems, making data aggregation a cumbersome process. The client wanted to access the many data points around a single customer without hindering performance of the entire system.

The Challenge:

Reconciling different data schemas from multiple systems into a single schema is problematic and, in many cases, impossible.
When adding new data sources, it is difficult to iterate on the schema quickly.
Providing access to the data within the ‘Single View’ requires ad hoc queries as well as multi-layer indexing and aggregation, which becomes complicated for relational databases to provide.
Lack of personalization and the ability to provide context-based experiences in real time results in lost business opportunities.

The Approach:

In order to assist customer service reps in real time, we built “The Wall,” a single-view application that pulls disparate data from legacy systems for analytics. Additionally, we designed a flexible data model to aggregate disparate data into a single data store. MongoDB’s expressive query language and secondary indexes can reach any field in real time, making data access faster and easier. Our approach was designed on four key foundations:

Document model: a rich and flexible data store; a single document can store up to 16 MB of data, and support for 20+ data types provides flexibility in managing data
Versatility: a variety of structured and unstructured data models defined
Analytics: a strong aggregation framework to aggregate data related to a single customer
Workload isolation: parallel runs for operational and analytical workloads on the same cluster

Outcome and Benefits:

Our largest insurance customer was able to attain a single view of the customer within a 90-day timespan. A different insurance customer achieved a 360-degree view of 13 million customers on MongoDB Enterprise Advanced. And yet another esteemed healthcare customer was able to achieve as much as a 300% improvement in processing times and increased processing throughput with 50% less hardware.

Ready to accelerate your digital transformation? Capgemini and MongoDB can help you re-envision your data and advance your business processes so you can focus on innovation. Reach out today to get started.

Download the Modernization Guide

February 10, 2021
Home

MongoDB Connector for Apache Kafka 1.4 Available Now

As businesses continue to embrace event-driven architectures and tackle Big Data opportunities, companies are finding great success integrating Apache Kafka and MongoDB. These two complementary technologies provide the power and flexibility to solve these large-scale challenges. Today, MongoDB continues to invest in the MongoDB Connector for Apache Kafka, releasing version 1.4! Over the past few months, we’ve been collecting feedback and learning how to best help our customers integrate MongoDB within the Apache Kafka ecosystem. This article highlights some of the key features of this new release.

Selective Replication in MongoDB

Being able to track just the data that has changed is an important use case in many solutions. Change Data Capture (CDC) has been available on the sink since the original version of the connector. However, up until version 1.4, CDC events could only be sourced from MongoDB via the Debezium MongoDB Connector. With the latest release you can specify the MongoDB Change Stream Handler on the sink to read and replay MongoDB events sourced from MongoDB using the MongoDB Connector for Apache Kafka. This feature enables you to record insert, update, and delete activities on a namespace in MongoDB and replay them on a destination MongoDB cluster. In effect, you have a lightweight way to perform basic replication of MongoDB data via Kafka.

Let’s dive in and see what is happening under the hood. Recall that when the connector is used as a source, it starts a change stream on a specific namespace in MongoDB. Depending on how you configure the source connector, documents that match your namespace and pipeline criteria are written into a Kafka topic. These documents are, by default, in the change stream event format. Here is a partial message in the Kafka topic that was generated from the following statement:

db.Source.insert({proclaim: "Hello World!"});

{
  "schema": { "type": "string", "optional": false },
  "payload": {
    "_id": { "_data": "82600B38...." },
    "operationType": "insert",
    "clusterTime": { "$timestamp": { "t": 1611348141, "i": 2 } },
    "fullDocument": {
      "_id": { "$oid": "600b38ad6011ef6265c3acd1" },
      "proclaim": "Hello World!"
    },
    "ns": { "db": "Tutorial3", "coll": "Source" },
    "documentKey": { "_id": { "$oid": "600b38ad6011ef6265c3acd1" } }
  }
}

Now that our change stream message is in the Kafka topic, we can use the connector as a sink to read the stream of messages and replay them at the destination cluster. To set up the sink to consume these events, set the "change.data.capture.handler" property to the new com.mongodb.kafka.connect.sink.cdc.mongodb.ChangeStreamHandler class. Notice that one of the fields is "operationType". The sink connector only supports insert, update, and delete operations on the namespace; it does not support actions like the creation of database objects such as users, namespaces, indexes, views, and other metadata that occurs in more traditional replication solutions. In addition, this capability is not intended as a replacement for a full-featured replication system, as it cannot guarantee transactional consistency between the two clusters. That said, if all you are looking to do is move data and can accept that limitation, you have a simple solution using the new ChangeStreamHandler. To work through a tutorial on this new feature, check out Tutorial 3 of the MongoDB Connector for Apache Kafka Tutorials on GitHub.
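For reference, a minimal sink configuration for this replication scenario might look like the following sketch; the topic name, connection URI, and target namespace are placeholders based on the tutorial naming above, not a prescribed setup.

{"name": "mongo-replay-sink", "config": {
  "connector.class": "com.mongodb.kafka.connect.MongoSinkConnector",
  "topics": "Tutorial3.Source",
  "connection.uri": "mongodb://destination1:27017",
  "database": "Tutorial3",
  "collection": "Destination",
  "change.data.capture.handler": "com.mongodb.kafka.connect.sink.cdc.mongodb.ChangeStreamHandler"
}}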
Dynamic Namespace Mapping

When we use the MongoDB connector as a sink, we take data that resides on a Kafka topic and insert it into a collection. Prior to 1.4, once this mapping was defined it wasn’t possible to route topic data to another collection. In this release, we added the ability to dynamically map a namespace to the contents of the Kafka topic message. For example, consider a Kafka topic "Customers.Orders" that contains the following messages:

{"orderid":1,"country":"ES"}
{"orderid":2,"country":"US"}

We would like these messages to be placed in their own collections based upon the country value. Thus, the message with an "orderid" value of 1 will be copied into a collection called "ES". Likewise, the message with an "orderid" value of 2 will be copied into a collection called "US". To see how we configure this scenario, we define a sink using the new namespace.mapper property, configured with a value of "com.mongodb.kafka.connect.sink.namespace.mapping.FieldPathNamespaceMapper". Using this mapper, we can use a key or value field to determine the database and collection, respectively. In our example above, let’s define our config using the value of the country field as the collection name to sink to:

{"name": "mongo-dynamic-sink", "config": {
  "connector.class": "com.mongodb.kafka.connect.MongoSinkConnector",
  "topics": "Customers.Orders",
  "connection.uri": "mongodb://mongo1:27017,mongo2:27017,mongo3:27017",
  "database": "Orders",
  "collection": "Other",
  "value.converter": "org.apache.kafka.connect.json.JsonConverter",
  "value.converter.schemas.enable": "false",
  "namespace.mapper": "com.mongodb.kafka.connect.sink.namespace.mapping.FieldPathNamespaceMapper",
  "namespace.mapper.value.collection.field": "country"
}}

Messages that do not have a country value will by default be written to the namespace defined in the configuration, just as they would have been without the mapping. However, if you want messages that do not conform to the map to generate an error, simply set the property namespace.mapper.error.if.invalid to true. This will raise an error and stop the connector when messages cannot be mapped to a namespace due to missing fields or fields that are not strings. If you’d like more control over the namespace, you can use the new getNamespace method of the interface com.mongodb.kafka.connect.sink.namespace.mapping.NamespaceMapper. Implementations of this method can apply more complex business rules and can access the SinkRecord or SinkDocument as part of the logic to determine the destination namespace.

Dynamic Topic Mapping

Once the source connector is configured, change stream events flow from the namespace defined in the connector to a Kafka topic. The name of the Kafka topic is made up of three configuration parameters: topic.prefix, database, and collection. For example, if you had the following as part of your source connector configuration:

"topic.prefix": "Stocks",
"database": "Customers",
"collection": "Orders"

the Kafka topic that would be created is "Stocks.Customers.Orders". However, what if you didn’t always want the events in the Orders collection to go to this specific topic? What if you wanted to determine at run time which topic a specific message should be routed to? In 1.4, you can now specify a namespace map that defines which Kafka topic a namespace should be written to.
For example, consider the following map:

{"Customers": "CustomerTopic", "Customers.Orders": "Orders"}

This will map all change stream documents from the Customers database to CustomerTopic.<collectionName>, apart from any documents from the Customers.Orders namespace, which map to the Orders topic. If you need complex business logic to determine the route, you can implement the getTopic method in the new TopicMapper class to handle this mapping logic. Also note that 1.4 introduced a topic.suffix configuration property in addition to topic.prefix. Using our example above, you can configure:

"topic.prefix": "Stocks",
"database": "Customers",
"collection": "Orders",
"topic.suffix": "US"

This defines the topic to write to as "Stocks.Customers.Orders.US".

Next Steps

Download the latest MongoDB Connector for Apache Kafka 1.4 from the Confluent Hub!
Read the MongoDB Connector for Apache Kafka documentation.
Questions or need help with the connector? Ask the Community.
Have a feature request? Provide feedback or file a JIRA.

February 9, 2021
Developer

MongoDB Launches Sales Academy

We’re thrilled to announce our inaugural MongoDB Sales Academy! This program will prepare emerging professionals with the training and experience they need to jumpstart a career in sales. We’re looking for recent college graduates with an interest in technology to join our rapidly growing sales team.

“The creation of a program designed to develop recent college graduates into sales professionals is a natural extension of MongoDB’s culture of talent development. We have best-in-breed sales enablement and onboarding programs, and a ‘BDR to CRO’ program focused on accelerating sales careers. We have an opportunity to bring these world-class training programs to those who are starting their careers, and to turn emerging professionals into future leaders at MongoDB.” - Meghan Gill, VP Sales Operations & SDR

The Sales Academy will be a full-time, paid, 12-week training program based in Austin, TX. It will focus on training and developing future MongoDB Sales Development Representatives: upon completion, these recent college graduates will move into a full-time SDR position. Those who are part of the Sales Academy will have direct one-on-one support from their sales mentors, MongoDB’s leadership team, the Campus Team, and each other. These New Grads will complete a best-in-class training program, which includes both technical concepts and sales processes. Through regular coaching and professional development training, our Sales Academy New Grads will graduate from the program and become full-time members of the Sales team at MongoDB.

“Life at MongoDB is ever-evolving and a great start for anyone looking to take their career to the next level. You can expect to constantly learn new things about technology and your customers, work alongside some of the best sales professionals in the industry, and be on the forefront of innovation. If you want to understand technology like never before, work with customers modernizing today’s world, and get consistent feedback from peers and leadership, this is the right place for you.” - Maya Monico, SDR Manager

This isn’t the first time that MongoDB has hired students into our sales organization. Hannah Branfman was part of our SDR internship program and, upon graduating from her school, joined us full-time. When asked what sales at MongoDB is like, Hannah says:

“If you have ambition, are coachable, and have a strong desire to learn, MongoDB will be a great fit for you. You have to be willing to make mistakes and remain naturally curious — don’t stop asking questions! If you have the perseverance to not only get here, but to then set the bar high for yourself and surpass it, you will fit in great. Get ready to make an impact!” - Hannah Branfman, SDR

We’re eager to find recent college graduates who are ambitious and excited to learn. If you’re interested in kickstarting your sales career at MongoDB in our Austin office, this could be the perfect fit for you! The job post is now up, and we look forward to reviewing your application and getting to know you!

February 3, 2021
News

Appy Pie & MongoDB’s Seamless, No-Code Business Solutions for Mobile & Web Apps

The tech industry’s ceaseless and exponential growth is no longer a surprise. As long as clients and end users remain interested in faster, more efficient services, tech companies will continue to improve business processes to meet the demand. Simultaneously, these improvements reduce costs and maximize revenue. It’s a win-win, if, of course, it’s done correctly. So, what’s behind most success stories? How do some companies launch and maintain applications at such rapid, expansive scale? Often, the key to success lies in fostering core business processes that are driven by automation. For many tech-based organizations, Appy Pie Connect has been the go-to seamless integration platform that helps them get started. And now, with MongoDB Realm, it’s about to get even easier. Users build best-in-class apps across Android, iOS, and the web with MongoDB Realm’s mobile database, sync solution, and application development services.

Why use Appy Pie & MongoDB Realm?

Together, Appy Pie and MongoDB are driving seismic operational change. Originally, Appy Pie's AppMakr product moved to MongoDB Realm for local storage. But after experiencing the immense ease and advantages offered by Realm (specifically, its offline-first database that supports cross-platform app development), we decided to extend its benefits to the customers of Appy Pie Connect. As an automation platform, Appy Pie Connect helps businesses automate manual tasks through smart integrations, allowing for intuitive, instant sharing between apps less commonly connected, like MailChimp and LinkedIn or Stripe and Gmail. By integrating MongoDB and MongoDB Realm with Appy Pie Connect, customers can easily store or retrieve data within multiple database sources. This enables the storage of flexible schemas and maintains consistency and integrity. This unique “no code” technology allows organizations to extract and work with data from MongoDB and then apply that data to desired software through trigger-based actions. For example, users can set up a trigger for every event on their Google Calendar so that their Slack status corresponds and is updated at the start and end of each meeting. This way, data concurrency is maintained without any manual effort. Realm is a particularly great choice for the customers of Appy Pie Connect because of its effortless data syncing. View some of the common use cases below.

Example 1

In the Meter Billing example below, cost is calculated in real time based on usage (e.g., video viewing time), resulting in a transparent “pay as you go” model.

Example 2

With real-time data sync, any updates or changes to the application are immediately reflected without requiring users to update or reinstall.

Example 3

All API failure logs are conveniently displayed to the admin on the Appy Pie dashboard so that immediate troubleshooting actions can be taken.

Benefits of MongoDB Realm and MongoDB Atlas

Appy Pie Connect already uses MongoDB Atlas, so moving to Realm, a MongoDB product offered through Atlas, was a natural choice. Realm allows mobile users to sync data quickly and seamlessly between mobile devices and backend systems, even if they go offline (sync occurs when they reconnect), and Atlas enables it all.
Some benefits of using Atlas are as follows:

Scalability
Flexible data schema
Document-oriented storage
Ad hoc queries, indexing, and real-time aggregation
Powerful tools for data analysis
Serverless functions and GraphQL support
Easy hosting and the ability to quickly build REST APIs

Ultimately, Appy Pie Connect helps businesses convert MongoDB into a central data store by pulling in and replicating data from all its sources. This allows customers to create new MongoDB documents automatically from new Typeform entries, new files on Dropbox, new posts on WordPress, or other resources. Similarly, Appy Pie Connect can also send MongoDB data to other third-party apps, including WordPress, Salesforce, Slack, Mailchimp, Google Drive, and many more. This makes enterprise-wide communication and collaboration much more efficient. When data is pulled from MongoDB through automation, it can help streamline other areas of the business. For example, when you pull MongoDB data into Mailchimp, you can automatically add a new subscriber to Mailchimp. This ensures that your lists grow automatically, as fast as your business does.

Ex. Appy Pie Connect seamlessly sends MongoDB data to third-party apps

Use cases

Send data from MongoDB to LinkedIn (without any code!) to quickly post accurate job-related content
Extract retail product data into Google Sheets to record valuable data in an organized manner in one place
Post enterprise-related content from MongoDB to Twitter to streamline social media presence

How it works

Appy Pie Connect employs a trigger-action based function that allows you, as a platform user, to choose the two apps you want to connect. The process is very straightforward. Once you choose the apps you wish to integrate, you will be presented with multiple options to connect them. Simply click the “Connect” button. To integrate the selected applications' accounts, simply allow API access for Appy Pie Connect. Next, design the workflows by mapping all of your data synced from the applications you are connecting. Once complete, you are ready to test your brand-new Connect with your Trigger and Action apps. And that’s it! It is time to experience the magic of Appy Pie Connect at work. Let the automation workflows take over the mundane, repetitive tasks, and move on to more innovative, exciting work.

As the efficiency of an organization improves through automation, one of the most direct advantages is a marked reduction in cost. These integrations help save hundreds of hours of manual effort, thereby freeing up talented resources to instead focus their energy and intellect on more critical, innovative issues. With Appy Pie Connect and MongoDB Realm, businesses can ensure that their workforce is not only optimized but also inspired, a key factor in employee satisfaction and overall company success. Watch this demo to learn how to integrate MongoDB with Google Sheets using Appy Pie Connect to help you automate data exchange between MongoDB and Google Sheets with ease.

Click here to learn more about MongoDB Realm

February 2, 2021
Applied

Build Better Mobile Apps: Running MongoDB Realm and Google Cloud

We’re partnering with Google Cloud to offer MongoDB Realm as part of the MongoDB Cloud stack with Google Cloud, to serve users globally whether you’re building a new mobile app or modernizing an existing one. Realm’s integrated application development services make it easy for developers to build industry-leading apps on mobile devices and the web. With MongoDB Atlas running as a service with Google Cloud, it’s easy to connect your mobile database to Google services. Customers choose Google Cloud to:

avoid vendor lock-in by running multi-cloud and hybrid cloud deployments
take advantage of Google Cloud’s machine learning and advanced analytics abilities
stay secure with the same protections Google Cloud itself uses to guard its data, applications, and infrastructure

Why MongoDB Realm for Mobile?

Realm comes with three key features:

A cross-platform mobile database
A cross-platform mobile sync solution
Time-saving application development services

Mobile Database

Realm’s mobile database is an open source, developer-friendly alternative to CoreData and SQLite. With Realm’s open source database, mobile developers can build offline-first apps in a fraction of the time. Supported languages include Swift, C#, Xamarin, JavaScript, Java, React Native, Kotlin, and Objective-C. Realm’s database was built with a flexible, object-oriented data model, so it’s simple to learn and mirrors the way developers already code. Because it was built for mobile, applications built on Realm are reliable, highly performant, and work across platforms.

Sync Solution

Realm Sync is an out-of-the-box synchronization service that keeps data up to date between devices, end users, and your backend systems, all in real time. It eliminates the need to work with REST, simplifying your offline-first app architecture. Use Sync to back up user data, build collaborative features, and keep data up to date whenever devices are online, without worrying about conflict resolution or networking code. Powered by the Realm Mobile Database on the client side and MongoDB Atlas on the backend, Realm is optimized for offline use and scales with you. Building a first-rate app has never been easier.

Application Development Services

With Realm app development services, your team can spend less time integrating backend data for your web apps, and more time building the innovative features that push your business initiatives forward. Services include:

GraphQL
Functions
Triggers
Data access controls
User authentication

Use these products from Google to accelerate the development and deployment of backend services:

Google Kubernetes Engine (GKE)
Google Cloud Functions (FaaS)
Google App Engine (PaaS)

Realm and MongoDB Atlas with Google Cloud and Android

As Realm is a MongoDB product offered through Atlas, and Atlas is used by Realm to sync data between the database and clients, Google Cloud and Atlas abilities are key to the Realm user experience.

Figure 1: Screenshot of Realm offered through MongoDB Cloud UI

MongoDB Atlas and Google Cloud

MongoDB Atlas delivers a fully managed service on Google Cloud’s globally scalable and reliable infrastructure. Atlas allows users to manage their MongoDB databases easily through the UI or an API call. It’s simple to migrate to, and offers sophisticated features such as Global Clusters that provide low-latency read and write access anywhere across the globe.
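To give a flavor of the Realm developer experience described above, here is a minimal sketch of opening a synced realm with the Realm JavaScript SDK of that era; the app ID, schema, and partition value are placeholder assumptions, not taken from a real app.

const Realm = require("realm");

// A simple object schema; Task objects live in the local database
// and sync automatically whenever the device is online
const TaskSchema = {
  name: "Task",
  primaryKey: "_id",
  properties: { _id: "objectId", name: "string", done: "bool" },
};

async function openSyncedRealm() {
  // "<your-realm-app-id>" is a placeholder for a real Realm app ID
  const app = new Realm.App({ id: "<your-realm-app-id>" });
  const user = await app.logIn(Realm.Credentials.anonymous());
  return Realm.open({
    schema: [TaskSchema],
    sync: { user, partitionValue: "demo" }, // partition-based sync
  });
}

Once the realm is open, writes made on the device are persisted locally first and pushed to Atlas by Realm Sync when connectivity allows, which is what makes the offline-first pattern work.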
3 Key Abilities with MongoDB Atlas and Google Cloud

Geographic Presence

All Google Cloud regions have at least three availability zones, providing higher availability, resiliency, and geographic coverage. Other public clouds do not make the same reliability guarantees.

Network Offering: Cost and Customer Benefits

Global VPC: global resources that reduce complexity in networking implementation
Performance: the premium tier leverages the performance of the Google Cloud network, improving application performance and latency across tiers
Price: a better pricing ratio for network egress costs

Native Integrations

Security: Atlas offers native integrations with Google Auth through Realm, support for Google Cloud KMS for additional encryption at rest or MongoDB Client-Side Field Level Encryption, and OAuth-flow-based console integration
Billing: pay-as-you-go billing on Google Cloud Marketplace (Realm is purchased through Atlas credits, similarly on Marketplace)

Realm and Android

With Realm, you can create mobile applications for Android devices. Realm supports all versions of the Android API after level 9 (Android 2.3 Gingerbread). Below is a sample reference architecture that shows how to leverage MongoDB Atlas with Google Cloud as an Operational Data Layer (ODL) / Operational Data Store (ODS) and build mobile applications using MongoDB Mobile and Realm Sync.

Figure 2: Reference Architecture for ODL on MongoDB Atlas and Realm with Google Cloud

Realm Customer Story: A Leading New York Healthcare Payer

MongoDB has partnered with Exafluence to deliver a COVID employee self-assessment health checker app for a leading healthcare payer in New York. Since the onset of the pandemic, the organization has needed to adapt quickly to new operational standards as the situation with COVID evolves. MongoDB Atlas, Realm, Google Cloud, and Exafluence have all been key to allowing its onsite operations to continue. The CDC and New York State require organizations to keep track of which of their employees report to a physical office for work. As a result, the organization must monitor its New York-based employees who still come onsite to support its members. It needed an app that would capture employees’ health status and ask a series of questions to determine whether an associate was able to enter the facility. Exafluence, a MongoDB Global Strategic Partner working with the healthcare payer’s HR and business teams, was able to deliver a complete solution in only three weeks from start to go-live. This rapid deployment was made possible using MongoDB Atlas, Realm, and Google Cloud. The completed app includes:

support for mobile devices
a web portal to aggregate information
use of QR scans to confirm access on iPads deployed at facility entrances
integration with Active Directory and alerts to the fund's email system

The organization and Exafluence chose Realm because its application development services make it easy to work with data across both web and mobile applications. Realm works with React.js, provides offline sync, and is Atlas cloud-ready. MongoDB Atlas and Realm also make it easy to rapidly develop new features when the next stage of the pandemic changes app requirements. Exafluence will be able to quickly add app features tied to vaccination, like the ability for employees to disclose and share immunization certification via MongoDB’s FHIR API.
Prior to the COVID app, this healthcare payer chose to use Atlas on Google Cloud because the fully managed, global DBaaS accelerates development and allows them to manage both structured and unstructured data. They also needed a solution for analytics involving geocoding, machine learning, and dashboarding. With Atlas and Google Cloud, their teams get agility through elastic scaling and on-demand resource provisioning. Additional differentiators that drove the organization to select Google Cloud include:

Maps API
Airflow for scheduling
Cloud Identity
Kubernetes deployment and seamless integration with MongoDB and Realm for mobile development
Scalable VM environments
Meeting CISO requirements

They were able to automate and offload operational tasks while taking advantage of built-in security best practices, and this in turn reduced regulatory risk. With Atlas and Google Cloud, their teams can also elastically scale and provision on-demand resources to build more microservices, in line with their agile development requirements.

Click here to learn more about MongoDB Realm

February 2, 2021
Applied

MongoDB Rated an “Overall Leader” by Technology Analysts KuppingerCole

KuppingerCole, one of Europe’s leading technology analyst firms, has recognized MongoDB as an “overall leader” in a new report focused on enterprise databases in the cloud. In the report, KuppingerCole acknowledged MongoDB’s unique position as a database platform designed for the cloud age and well suited to the applications that drive disruption, innovation, and competitive advantage.

“Designed from scratch as a general-purpose database platform for the cloud age, MongoDB has grown into the world’s fastest-growing NoSQL ecosystem and one of the preferred engines for modern cloud-native applications,” wrote Alexei Balaganski, lead analyst for KuppingerCole.

The report noted that, as the only overall leader in cloud databases that is not also a cloud services provider, MongoDB is uniquely committed to making sure data can move easily between multiple cloud providers (AWS, Microsoft Azure, and Google Cloud). In addition to the overall leadership accolade, KuppingerCole gave MongoDB its “Strong positive” designation, the highest possible rating, in seven important areas: security, functionality, interoperability, usability, deployment, innovativeness, and market position.

Excellence in Product, Innovation, and Market Position

MongoDB was one of only four database services to achieve a leadership designation in each of the three areas key to the final categorization as an overall leader: product leadership, innovation leadership, and market leadership.

Product leadership. Here, KuppingerCole singled out MongoDB Atlas as the core of MongoDB’s cloud-agnostic database product, offered as a managed cloud service across the three major cloud platforms (AWS, Azure, GCP). Balaganski also refers to MongoDB's popularity among developers, who have rated it the “most wanted” database four years running in Stack Overflow’s annual survey. “MongoDB offers maximum deployment flexibility and an interface for developers so familiar that other vendors chose to implement the MongoDB protocol for their own databases,” writes Balaganski.

Innovation leadership. KuppingerCole defines leadership in innovation not so much as a constant flow of new features, but as a customer-oriented upgrade approach. The analyst firm wants to see backward compatibility, especially at the API level, and it wants to see new features that meet emerging customer requirements. Clearly, MongoDB delivers. KuppingerCole lauded MongoDB's history of listening to its customers and delivering new and relevant features. The result, the analyst firm said, is that MongoDB has been “one of the most popular NoSQL databases for years.” KuppingerCole specifically pointed out Atlas Search, Atlas Data Lake, and mobile support.

Market leadership. MongoDB was rated a leader in this category as well. While Oracle has “arguably the largest number of on-prem database customers,” it was far from assured that all of them would migrate to Oracle Cloud.
KuppingerCole said MongoDB “dominates the NoSQL database market across all major cloud platforms.”

MongoDB Atlas' Unique Advantages

In its analysis of the “overall leader” cohort, KuppingerCole showed why it could confidently claim that MongoDB and the MongoDB Atlas cloud platform “provide a simple, universal alternative for developers that do not want to replicate the complexity of their legacy on-prem infrastructures in the cloud and would rather avoid a plethora of specialized database engines, opting instead for a single interface to manage, query, and analyze all of their data.”

Two other prominent vendors, said KuppingerCole, focus on a database portfolio strategy, offering a purpose-built database for every conceivable need. But KuppingerCole said that approach runs the risk of locking data in silos and complicates analytics across them. The analyst firm said that management capabilities can vary between different database engines. It also noted that another leading vendor’s database engine was not well suited to highly distributed modern application architectures.

MongoDB addresses those challenges. Among its strengths, KuppingerCole noted that MongoDB provides a single platform for transactional, search, and analytics workloads. It also mentioned MongoDB’s multi-cloud availability on AWS, GCP, and Azure; its advanced security features, such as client-side field-level encryption; and comprehensive developer support with drivers, tools, and additional services. That makes it, as Balaganski writes, “a one-stop platform for developing modern, highly scalable, and distributed cloud-native applications.” And, of course, an overall leader.

January 29, 2021
Home
