All Blog Posts

Australian Start-Up Ynomia Is Building an IoT Platform to Transform the Construction Industry and Its Hostile Environments

The trillion-dollar construction industry has not yet experienced the technology revolution you might have expected. Low levels of R&D and difficult working environments have led to a lack of innovation, and fundamental improvements have been slow. But one Australian start-up is changing that by building an Internet of Things (IoT) platform to harness construction and jobsite data in real time.

"Productivity in construction is down there with hunting and fishing as one of the least productive industries per capita in the entire world. It's a space that's ripe for people to come in and really help," explains Rob Postill, CTO at Ynomia.

Ynomia has already been closely involved with many prestigious construction projects, including the residential N06 development in London's famous 2012 Olympic Village. It was also integral to the construction of the Victoria University Tower in Australia.

"These projects involve massive outflow of money: think about glass facades on modern buildings, which can represent 20-30 percent of the overall project cost. They are largely produced in China and can take 12 weeks to get here," says Postill. "Meanwhile, the plasterer, the plumber, the electrician are all waiting for those glass facades to be put on so it is safe for them to work. If you get it wrong, you can go into the deep red very quickly."

To tackle these longstanding challenges, Ynomia aims to address the lack of connectivity, transparency, and data management on construction sites, which has traditionally resulted in the inefficient use of critical personnel, equipment, and materials; compressed timelines; and unpredictable cash flows. To optimize productivity, Ynomia offers a simple end-to-end technology solution that creates a Connected Jobsite, helping teams manage materials, tools, and people across the worksite in real time.

IoT in a Hostile Environment

The deployment of technology in construction is often fraught with risk. As a result, construction sites are still largely run on paper: blueprints, diagrams, and models, as well as the more traditional invoices and filing. At the same time, there is a constant need to track progress and monitor massive volumes of information across the entire supply chain. Engineers, builders, electricians, plumbers, and all the other associated professionals need to know what they need to do, where they need to be, and when they need to start.

"The environment is hostile to technology like GPS, computers, and mobile phone reception because you have a lot of Faraday cages and lots of water and dust," explains Postill. "You can't have somebody wandering around a construction site with a laptop; it'll get trashed pretty quickly."

Enter MongoDB Atlas

"On a site, you might be talking about materials, then if you add to that plant and equipment, or bins, or tools, you're rapidly getting into thousands and thousands of tags, talking all the time, every day," said Postill. That means thousands of tags now send millions of readings on Ynomia building sites around the world. All these IoT data packets must be stored efficiently and accurately so Ynomia can reassemble the history of what has happened and track tagged inventory, personnel, and vehicles around the site. Many of the tag events are also safety critical, so accuracy is vital and packets can't be missed.
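As a rough illustration of the kind of payload involved, each tag reading can be modeled as a small self-describing document and written straight into a collection. The field names and values below are hypothetical, not Ynomia's actual schema:

// Hypothetical shape of a single tag reading, inserted from mongosh
db.tagEvents.insertOne({
  tagId: "TAG-0042",                      // hypothetical tag identifier
  assetType: "facade-panel",
  site: "melbourne-tower-03",
  zone: "level-12-east",                  // coarse zone rather than GPS, which is unreliable on site
  receivedAt: new Date(),
  reading: { rssi: -67, batteryPct: 92 }  // raw radio and battery telemetry
})

// Reassembling the history of a single asset is then one indexed query
db.tagEvents.find({ tagId: "TAG-0042" }).sort({ receivedAt: 1 })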
To address these needs, Ynomia was looking for a database that was scalable, flexible, and resilient, and that could easily handle a wide variety of fast-changing sensor data captured from multiple devices. The final thing Postill was looking for in a database layer was freedom: a database that didn't lock the company into a single cloud platform, as it was still in the early stages of assessing cloud partners. The Commonwealth Scientific and Industrial Research Organisation, which Postill had worked with in the past, suggested MongoDB, a general-purpose, document-based database built for modern applications.

"The most important factor was that the database is event-driven, which I knew would be difficult in the traditional relational model. We deal with millions of tag readings a day, which is a massive wall of data," said Postill.

A Cloud Database

Ynomia is using MongoDB Atlas, the global cloud database service, hosted on Microsoft Azure. Atlas offers best-in-class automation and proven practices that combine availability, scalability, and compliance with the most demanding data security and privacy standards.

"When we started we didn't know enough about the problem and we didn't want to be constrained," explained Postill. "MongoDB Atlas gives us a cloud environment that moves with us. It allows us to understand what is happening and make changes to the architecture as we go."

Postill says this combination of flexibility and management tooling also allows his developers to focus on business value, not undifferentiated code. One example Postill gave was cluster administration: "Cluster administration for a start-up like us is wasted work," he said. "We're not solving the customer's problem. We're not moving anything on. We're focusing on the wrong thing. For us to be able to just make that problem go away is huge. Why wouldn't you?"

Atlas also gives Ynomia the option to spin up new clusters seamlessly anywhere in the world. This allows customers to keep data local to their construction site, improving latency and helping solve for regional data regulations.

Real-Time Analytics

The company has also deployed MongoDB Charts, which takes this live data and automatically provides a real-time view. Charts is the fastest and easiest way to visualize event data directly from MongoDB, so teams can act instantly and decisively on the real-time insights generated by an event-driven architecture. It allows Ynomia to share dashboards so all the right people can see what they need to and collaborate accordingly.

"Charts enables us to quickly visualize information without having to build more expensive tools, both internally and externally, to examine our data," comments Postill. "As a startup, we go through this journey of: what are we doing and how are we doing it? There's a lot of stuff we are finding out along the way on how we slice and re-slice our data using Charts."

A Platform for Future Growth

Ynomia is targeting a huge market and is set for ambitious growth in the coming years. How the platform, and its underlying architecture, continues to scale and evolve will be crucial to enabling that business growth.

"We do anything we can to keep things simple," concluded Postill. "We pick technology partners that save us from spending time we shouldn't spend so we can solve real problems. We pick technologies that roll with the punches, and that's MongoDB."
"MongoDB Atlas gives us a cloud environment that moves with us. It allows us to understand what is happening and make changes to the architecture as we go. Rob Postill, CTO, Ynomia

February 23, 2021

New MongoDB Shell Now Supports Client-Side Field-Level Encryption

Last summer, we introduced mongosh, the new MongoDB Shell with an enhanced user experience and a powerful, Node.js-based scripting environment. Since then, we have been adding new functionality and APIs to close the gap with the legacy mongo shell, on the path to making it the default shell for MongoDB. In addition to the set of CRUD and other commands that we supported in the first release, we recently added:

- Bulk operations
- Change Streams
- Sessions and Transactions
- Logging and profiling commands
- Replica set and Sharding configuration commands

plus assorted utility commands here and there. This week, we released mongosh 0.8 with support for Client-side Field-level Encryption (FLE).

Support for Client-side Field-level Encryption

MongoDB Client-Side Field-Level Encryption (FLE) allows developers to selectively encrypt individual fields of a document using the MongoDB drivers (and now mongosh as well) on the client, before the document is sent to the server. This keeps data encrypted (but still queryable) while it is in use in database memory, and protects it both from the providers hosting the database and from any user with direct access to the database. Back in November, we announced that in addition to AWS KMS, Client-side FLE now supports key management systems in Azure and Google Cloud in beta. The most recent version of the MongoDB Shell makes it easy to test this functionality in a few easy steps:

1. Create a free Atlas cluster.
2. Install mongosh.
3. Check out our documentation to set up your KMS in Azure or GCP.
4. Start encrypting!

To make it easier to get started with Client-Side FLE, here are two simple scripts that you can edit and copy-paste into mongosh: mongosh-fle-gcp-kms to set up Client-side FLE with Google Cloud, and mongosh-fle-local-kms to use a local key. In the screenshot below, you can see a document that was encrypted on the client with automatic encryption before it was sent across the wire and inserted into MongoDB. Fields are in clear text in the shell but are shown as encrypted when connecting with Compass to the same Atlas cluster.

A Powerful Scripting Environment

As mongosh is built on top of Node.js, it's a great environment for scripting, whether you're checking the health status of your replica set or taking a quick look at the data to make sure it's coming in from your application as you expect. With modules from npm, the experience becomes much richer and more interactive. For example, if I want to look at the sample_mflix collection available in the Atlas sample datasets and check the distribution of thriller movies over the years, I can put together a simple script that runs an aggregation and visually formats the results with an open source library called babar. This is just one of many ways you can extend the functionality of the MongoDB Shell by taking advantage of the great ecosystem of JavaScript libraries and modules that the community has built over the years and keeps on building every day.
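A minimal sketch of such a script follows. It assumes the Atlas sample datasets are loaded and that babar is installed where mongosh can resolve it (for example, with npm install babar); the exact aggregation shape here is illustrative rather than taken from the original post:

// Count thriller movies per year and plot them as an ASCII chart
const babar = require("babar");

const points = db.getSiblingDB("sample_mflix").movies.aggregate([
  { $match: { genres: "Thriller", year: { $type: "number" } } },
  { $group: { _id: "$year", count: { $sum: 1 } } },
  { $sort: { _id: 1 } }
]).toArray().map(doc => [doc._id, doc.count]);

console.log(babar(points)); // babar expects an array of [x, y] points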
Start Scripting and Let Us Know How It's Working for You!

As we added new functionality to the MongoDB Shell, we tried as much as possible to keep backwards compatibility with the legacy mongo shell, and we were mostly able to do that. In a limited number of cases, however, we took the opportunity to clean up the API and address some unexpected behaviors. Wondering what's coming next in mongosh? We are working on adding support for load() and rc files to make it easy to load your scripts into the shell.

If you find something that does not work as expected, please let us know! Simply file a bug in our JIRA project or reach out on Twitter.

February 22, 2021

Applying Maslow’s Hierarchy of Needs to Documentation

In his groundbreaking 1943 paper, psychologist Abraham Maslow theorized that all humans are motivated by five categories of needs: physiological, safety, love and belongingness, esteem, and self-actualization. Known today as the "Hierarchy of Needs," this theory is often depicted as a pyramid, for only when one stage is fulfilled can an individual move on to the next.

Abraham Maslow's Hierarchy of Needs (1943)

This theory applies not only to motivation but also to the efficacy of a user's experience. Although Maslow's Hierarchy of Needs was originally intended for psychological analysis, a modified version can also be applied to users in today's digital world. At MongoDB Documentation, we strive to help users meet their learning objectives effectively and efficiently. With Maslow's theory in mind, we created a framework for our projects that takes his principles into account. We call this framework "Documentation's Hierarchy of Needs."

Stage 1: Existence & Basic Needs

The first layer of the Doc's Hierarchy of Needs is existence. At the fundamental level, if content does not exist, a user cannot use it. In order for content to effectively exist, the platform needs to:

- Allow writers to write and publish documentation.
- Have a frontend where users can easily access content, displayed in an accessible, intuitive manner.

To address this, the documentation platform team has engineered a toolchain that enables authors to write, preview, review, and publish content to the documentation corpus, where it can be accessed by any user. It was designed to let writers focus on the content they are delivering rather than get bogged down in the tools they use to write. The toolchain itself converts content into data, which allows the content to be easily organized, structured, reused, standardized, and tested. Whereas older technologies introduced friction into the design and development process, our new platform includes a more flexible frontend to quickly iterate and improve experiences for the users accessing the content. All of this means that content can easily be written, published, and accessed.

Stage 2: Quality Needs

The second layer of the Doc's Hierarchy of Needs is quality. If the content isn't of high quality, it isn't beneficial to a user. From user research, we learned that higher-quality content should:

- Be task- or use-case-centric.
- Come across as approachable, helpful, and informative.
- Create emotions of confidence, excitement, and determination.

We took these traits into consideration and rethought a few key touchpoints our users interact with, including a new docs homepage and a series of new docs product landing pages. As these pages are frequently first-touch experiences, it was important for us to provide a positive initial impression and introduction.

Docs Homepage

Prompting users with relatable tasks

Throughout these cards, users are given the opportunity to immediately get to the product documentation they need. In order to match the user's mental model, all cards are written to emphasize tasks.

Leading users to a thorough introduction

Through extensive user research, we learned that users have difficulty understanding the fundamental differences between MongoDB and traditional relational databases. In this section, we wanted to give users a taste of those differences and leave them intrigued and informed on where to learn more.
Connecting users with other learning resources

This section keeps the ball rolling. At the beginning of their journey, the user receives a broad overview before working their way through basic concepts. At the end of the page, they are encouraged to continue their learning and explore our other educational platforms.

Docs Product Landing Pages

Creating consistency in user goals

At this touchpoint, users are entering a specific product learning experience. In order to supplement our users' learning journeys, these pages are focused on increasing product fluency and adoption.

Creating emotions of excitement and confidence

In testing these designs, users felt that this specific section made them feel the most confident and excited. The use cases outlined quickly jumped out as relatable tasks, the small number of steps made each task feel easily achievable, and the interaction made the information exciting.

Stage 3: Findability Needs

The third layer of the Doc's Hierarchy of Needs is findability. Here, we break through basic needs and head into psychological needs. Historically, users could still rely on external resources, such as Google, to find the information they required. This does not provide the ideal experience for our users, but it meets their basic needs. One of our main focuses this year was to improve findability and strengthen our navigational experience. We found that navigational experiences mainly split between two persona types: advanced users and first-time learners. Advanced users are more likely to know exactly what they are looking for, leading them to rely heavily on an effective search experience. First-time learners, on the other hand, are less likely to know what they are looking for and just want to learn and explore.

Factors for Findability Success

After several rounds of user interviews, literature reviews, and meetings with subject matter experts, we identified the following characteristics of the ideal navigational experience:

Task-Centric Approach

In each round of research, such as card sorting or tree tests, we consistently found that users approach navigation based on their own experience or knowledge. Because of this finding, we implemented a task-centric approach in the revamp of Docs Navigation. By mirroring users' mental models, this navigational model takes some of the heavy lifting off the user and creates an intuitive experience.

Importance of Efficiency and Accuracy

Users ranked efficiency and accuracy as the most important factors when navigating. In fact, many users, specifically developers, measure efficiency in number of clicks. To maximize the efficiency of the search engine, we provided context clues that enable users to determine the most relevant results and apply additional filters for improved accuracy. These findings became pivotal when envisioning a new Docs Search and pinpointing valuable features that would optimize for these factors.

A New Docs Nav

A New Docs Search

Small Snacks

Upon the release of these projects, the documentation platform team has enjoyed digging into the resulting analytics, which has inspired us to further improve findability and quality. For example, with information about which queries users are searching for, we can decide what to optimize next. A fun tidbit we saw in our analytics concerned user preferences around full-page search vs. a modal.
In our research, we found a split in affinities toward each approach, and as a result it was difficult to make an informed decision on which to invest in. Instead, we decided to build both, as the extra work increased scope by only one engineering day. We have since found that they are used to an equal degree. How fun! This leads us to believe that we are providing further psychological safety to our users by letting them navigate however they desire.

Stage 4: Experience Needs

The fourth layer of the Doc's Hierarchy of Needs is experience, which encompasses the finishing touches: the difference between delight and neutrality, intuition and frustration, the ooo's and the ugh's. Internally, we've made improvements to the platform that increase writer efficiency and productivity so that writers can create better documentation. Research indicates that when employees are happy with their tools, the work they produce is better as well.

Stage 5: Contribution Needs

The last layer of the Doc's Hierarchy of Needs is contribution. Once the content exists, is of high quality, is easily findable, and the experience is superb, users feel they should be able to contribute and be a part of the effort. From user research, we've heard that "contribution needs" include:

- Feeling that they can help Docs improve.
- Being able to report their own problems.
- Joining a community.

Creating an open source platform

Users who regularly read the documentation can also contribute directly by making a pull request on GitHub. This relates directly to self-fulfillment as defined in Maslow's Hierarchy of Needs, because we are encouraging users to achieve their full potential by participating in the growth of the platform.

Note: This graphic includes internal commits as well.

Improving the Feedback Widget

After receiving user feedback, we focused on the following points to improve in the next iteration:

Interface with content

The previous feedback widget visually covered the actual documentation content, with no way to hide or dismiss it. To address this, the new feedback widget was de-emphasized in the view, keeping the priority on the content itself.

Quality of feedback collected

Internally, the feedback widget was not helpful because it didn't provide enough context for writers to make quality improvements. To address this, the new feedback widget introduced categories that allow users to add specific classifiers to their entry.

Introduction of helpful next steps

In addition, users frequently confused the feedback widget with a support center. This created a large number of tickets that often could not be acted upon. To address this, the feedback widget now connects these users to better-fitting resources such as the Community or the Support Center. This connection also creates an opportunity for users to join the rest of the MongoDB Community and connect with other like-minded individuals.

Results/Learnings

With these changes, we have successfully eliminated all noise in the feedback widget relating to it interfering with the content on the page. We have seen an increase in the quality of feedback as a result of the more detailed rating system and self-selection of categories. We have also seen a broader decrease in the quantity of feedback, and thus less chaff to sift through than before.

Looking Towards the Future

We like to think that this framework helps us create a holistic docs experience, as it touches on key parts of the user journey.
It puts the user at the center of all product strategy and design, which is extremely important to us as a team. Additionally, it provides a helpful framework for what we plan to do next!

February 17, 2021

Capgemini Solutions That Help Customers Modernize Applications to MongoDB

Companies across every industry vertical continue to face the challenge of how to effectively migrate and quickly access massive amounts of enterprise data, all while keeping system performance up to par throughout the obstacle-ridden process. The complexities involved with ubiquitous, traditional Relational Database Management Systems (RDBMS) are many. RDBMS systems can inhibit performance, falter under heavy volumes, and slow down deployment. With MongoDB's document-based, distributed database, performance and volume issues are easily addressed. But when it comes to speeding up time to market, the right auxiliary tools are still needed. Capgemini, a MongoDB partner and global leader in digital transformation, provides the final piece of the puzzle with a new tool rooted in automated intelligence. In this blog, we'll explore three key ways Capgemini helps customers modernize to MongoDB:

- Tools that expedite time to market
- Migration from legacy systems to MongoDB
- New development using MongoDB as a backend database

Whether your company is developing a new database or migrating from legacy systems to MongoDB, Capgemini's new Database Convert & Compare (DCC) tool can help. Below, we detail how DCC works, then walk through a few recent client examples and the benefits reaped.

Tool: Database Convert & Compare (DCC)

A powerful tool developed by the Capgemini team, DCC optimizes activities like database migration, data comparison, validation, and much more. The tool can perform data transformations with specific customization based on the source and target databases in scope. When migrating from RDBMS to MongoDB, DCC achieves 70% automation and 30% manual retrofit at the database level.

How does DCC work?

In the context of RDBMS-to-NoSQL migration, DCC performs the migration in three stages.

1) Assessment

- Source database schema assessment: DCC extracts source schema information and performs an assessment to generate a detailed inventory of data objects such as tables, views, stored procedures, and indexes. It also generates a detailed report on the data volume of each table, which helps in estimating data migration time from source to target.
- Apply analytics to prepare a recommendation for the target database structure. The target structure varies based on parameters such as:
  - Table relationships (one to many, many to many, one to one)
  - Indexes applied on tables for performance requirements
  - Column data types

2) Schema Migration

- Customize the tool to apply the recommendations from step 1.2, generating the script for the target database.
- Target schema script preparation: DCC generates a complete database schema script except for a few object types such as stored procedures and views.
- Produce a detailed report of the schema migration, inclusive of objects that couldn't be migrated.
- Manual intervention is required to apply the business logic of the source database's stored procedures and views to the target environment's application.

3) Data Migration

- Column mapping: The assessment report generates an inventory of source database table fields as well as the recommended schema structure; it also provides recommended field mappings from source to target based on the adopted recommendations and DCC customization.
- Post-migration data validation script: DCC generates a data validation script after data migration is complete, which takes the field mapping into consideration from the related assessment and recommendation reports.
- Data migration script for execution: DCC allows for the setup and configuration of different scripts for data migration, such as:
  - One-time data migration from source to target
  - A daily batch run to sync up source and target database data
  - Intermittent data validation during the migration (if any discrepancies are found in validation, the job stops and generates a report with the potential root cause of the data migration issue)
- Standalone data comparison: DCC allows data validation between the source and target databases to be run in isolation. In this case, DCC generates source database table inventory details and extracts target database collection inventory details. Minimal manual intervention is required to perform the field mapping and set the configuration in the tool for data migration execution. Other configuration features, such as one-time migrations or daily batch migrations, can be configured as well.

The Capgemini team has successfully implemented and deployed the DCC tool for various banking customers for end-to-end RDBMS-to-NoSQL migration, including application retrofit and rewiring using other capable tools such as CAP360.

Case study 1: Migration from Mainframe to MongoDB for a Large European Investment Bank

A large banking client encountered significant challenges around growth and scale-up, low resilience and increased risk, and rising costs associated with the advent of mobile banking and a related significant increase in volume. To help the client evolve more quickly, Capgemini built an Operational Data Platform to offload expensive mainframe operations, as well as store and process customer transactions for business operations, analysis, and reporting.

The Challenge:

- Inefficient and slow to meet customer demand for new digital banking services due to heavy reliance on legacy infrastructure and apps
- Continued growth in traffic and the launch of new digital services led to increased cost of operations and decreased performance
- The mainframe was the single point of failure for many applications; outages resulted in poor customer service, brand erosion, and regulatory concerns

The Approach:

An analysis of digital channels revealed that 92% of traffic was generated by 25 interaction types, with 85% of these being read-only. To offload these operations from the mainframe, an operational data lake (ODL) was created. The MongoDB-based ODL was updated in near real time via change data capture and a messaging queue to power existing apps, new digital services, and other APIs.
Outcome and Benefits:

- Accelerated time to market for new digital services, including personalization
- Improved stand-in capability to support resiliency during planned and unplanned mainframe outages
- Reduced read-only traffic to the mainframe (MIPS cost), freeing up resources for additional growth
- Saved the customer over 80% in year-on-year post-migration costs; the new MongoDB database seamlessly handles 25 million+ transactions per day and holds over 30 months of history, with ~13 billion transactions held in 114 million documents

Case study 2: Migration of a Large-Scale Database from Legacy to MongoDB for a US-Based Insurance Customer

A US-based insurance client faced disparate data spread across 100+ systems, making data aggregation a cumbersome process. The client wanted to access the many data points around a single customer without hindering performance of the entire system.

The Challenge:

- Reconciling different data schemas from multiple systems into a single schema is problematic and, in many cases, impossible.
- When adding new data sources, it is difficult to iterate on the schema quickly.
- Providing access to the data within the "Single View" requires ad hoc queries as well as multi-layer indexing and aggregation, which becomes complicated for relational databases to provide.
- Lack of personalization and the inability to provide context-based experiences in real time result in lost business opportunities.

The Approach:

In order to assist customer service reps in real time, we built "The Wall," a single-view application that pulls disparate data from legacy systems for analytics. Additionally, we designed a flexible data model to aggregate disparate data into a single data store. MongoDB's expressive query language and secondary indexes can reach any field in real time, making data access faster and easier. Our approach was designed on four key foundations:

- Document Model: A rich and flexible data store. A single document can store up to 16 MB of data, and 20+ data types provide flexibility in managing data.
- Versatility: A variety of structured and non-structured data models defined.
- Analytics: A strong aggregation framework to aggregate data related to a single customer.
- Workload Isolation: Operational and analytical workloads run in parallel on the same cluster.

Outcome and Benefits:

Our largest insurance customer was able to attain a single view of the customer within a 90-day timespan. A different insurance customer achieved a 360-degree view of 13 million customers on MongoDB Enterprise Advanced. And yet another healthcare customer reduced processing times to a third (a 300% improvement) and increased processing throughput with 50% less hardware.

Ready to accelerate your digital transformation? Capgemini and MongoDB can help you re-envision your data and advance your business processes so you can focus on innovation. Reach out today to get started.

Download the Modernization Guide

February 10, 2021

MongoDB Connector for Apache Kafka 1.4 Available Now

As businesses continue to embrace event-driven architectures and tackle Big Data opportunities, companies are finding great success integrating Apache Kafka and MongoDB. These two complementary technologies provide the power and flexibility to solve these large-scale challenges. Today, MongoDB continues to invest in the MongoDB Connector for Apache Kafka, releasing version 1.4! Over the past few months, we've been collecting feedback and learning how to best help our customers integrate MongoDB within the Apache Kafka ecosystem. This article highlights some of the key features of this new release.

Selective Replication in MongoDB

Being able to track just the data that has changed is an important use case in many solutions. Change Data Capture (CDC) has been available on the sink since the original version of the connector. However, up until version 1.4, CDC events could only be sourced from MongoDB via the Debezium MongoDB Connector. With the latest release you can specify the MongoDB Change Stream Handler on the sink to read and replay MongoDB events sourced from MongoDB using the MongoDB Connector for Apache Kafka. This feature enables you to record insert, update, and delete activities on a namespace in MongoDB and replay them on a destination MongoDB cluster. In effect, you have a lightweight way to perform basic replication of MongoDB data via Kafka.

Let's dive in and see what is happening under the hood. Recall that when the connector is used as a source, it starts a change stream on a specific namespace. Depending on how you configure the source connector, documents that match your criteria are written into a Kafka topic based on this namespace and pipeline. These documents are, by default, in the change stream event format. Here is a partial message in the Kafka topic that was generated from the following statement:

db.Source.insert({proclaim: "Hello World!"});

{
  "schema": { "type": "string", "optional": false },
  "payload": {
    "_id": { "_data": "82600B38...." },
    "operationType": "insert",
    "clusterTime": { "$timestamp": { "t": 1611348141, "i": 2 } },
    "fullDocument": {
      "_id": { "$oid": "600b38ad6011ef6265c3acd1" },
      "proclaim": "Hello World!"
    },
    "ns": { "db": "Tutorial3", "coll": "Source" },
    "documentKey": { "_id": { "$oid": "600b38ad6011ef6265c3acd1" } }
  }
}

Now that our change stream message is in the Kafka topic, we can use the connector as a sink to read the stream of messages and replay them at the destination cluster. To set up the sink to consume these events, set the "change.data.capture.handler" property to the new com.mongodb.kafka.connect.sink.cdc.mongodb.ChangeStreamHandler class.

Notice that one of the fields is "operationType". The sink connector only supports insert, update, and delete operations on the namespace, and does not support actions like the creation of database objects such as users, namespaces, indexes, views, and other metadata that occurs in more traditional replication solutions. In addition, this capability is not intended as a replacement for a full-featured replication system, as it cannot guarantee transactional consistency between the two clusters. That said, if all you are looking to do is move data and you can accept the lack of transactional consistency, then you have a simple solution using the new ChangeStreamHandler. To work through a tutorial on this new feature, check out Tutorial 3 of the MongoDB Connector for Apache Kafka Tutorials in GitHub.
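For reference, a minimal sink configuration for this replication scenario might look like the sketch below. The connector and handler class names are as described above; the topic name, database, collection, and connection details are placeholders to adapt to your environment:

{"name": "mongo-replay-sink", "config": {
  "connector.class": "com.mongodb.kafka.connect.MongoSinkConnector",
  "topics": "Tutorial3.Source",
  "connection.uri": "mongodb://dest1:27017,dest2:27017,dest3:27017",
  "database": "Tutorial3",
  "collection": "Destination",
  "change.data.capture.handler": "com.mongodb.kafka.connect.sink.cdc.mongodb.ChangeStreamHandler"
}}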
Dynamic Namespace Mapping

When we use the MongoDB connector as a sink, we take data that resides on a Kafka topic and insert it into a collection. Prior to 1.4, once this mapping was defined it wasn't possible to route topic data to another collection. In this release, we added the ability to dynamically map a namespace to the contents of the Kafka topic message. For example, consider a Kafka topic "Customers.Orders" that contains the following messages:

{"orderid":1,"country":"ES"}
{"orderid":2,"country":"US"}

We would like these messages to be placed in their own collections based upon the country value. Thus, the message with "orderid" 1 will be copied into a collection called "ES", and the message with "orderid" 2 will be copied into a collection called "US". To configure this scenario, we define a sink using the new namespace.mapper property, set to "com.mongodb.kafka.connect.sink.namespace.mapping.FieldPathNamespaceMapper". Using this mapper, we can use a key or value field to determine the database and collection, respectively. In our example above, let's define our config using the value of the country field as the collection name to sink to:

{"name": "mongo-dynamic-sink", "config": {
  "connector.class": "com.mongodb.kafka.connect.MongoSinkConnector",
  "topics": "Customers.Orders",
  "connection.uri": "mongodb://mongo1:27017,mongo2:27017,mongo3:27017",
  "database": "Orders",
  "collection": "Other",
  "value.converter": "org.apache.kafka.connect.json.JsonConverter",
  "value.converter.schemas.enable": "false",
  "namespace.mapper": "com.mongodb.kafka.connect.sink.namespace.mapping.FieldPathNamespaceMapper",
  "namespace.mapper.value.collection.field": "country"
}}

Messages that do not have a country value will by default be written to the namespace defined in the configuration, just as they would have been without the mapping. However, if you want messages that do not conform to the map to generate an error, simply set the property namespace.mapper.error.if.invalid to true. This raises an error and stops the connector when messages cannot be mapped to a namespace due to missing fields or fields that are not strings. If you'd like more control over the namespace, you can use the new "getNamespace" method of the interface com.mongodb.kafka.connect.sink.namespace.mapping.NamespaceMapper. Implementations of this method can apply more complex business rules and can access the SinkRecord or SinkDocument as part of the logic to determine the destination namespace.

Dynamic Topic Mapping

Once the source connector is configured, change stream events flow from the namespace defined in the connector to a Kafka topic. The name of the Kafka topic is made up of three configuration parameters: topic.prefix, database, and collection. For example, if you had the following as part of your source connector configuration:

"topic.prefix": "Stocks", "database": "Customers", "collection": "Orders"

the Kafka topic created would be "Stocks.Customers.Orders". However, what if you didn't always want the events in the Orders collection to go to this specific topic? What if you wanted to determine at run time which topic a specific message should be routed to? In 1.4, you can now specify a namespace map that defines which Kafka topic a namespace should be written to.
For example, consider the following map:

{"Customers": "CustomerTopic", "Customers.Orders": "Orders"}

This will map all change stream documents from the Customers database to CustomerTopic.<collectionName>, apart from any documents from the Customers.Orders namespace, which map to the Orders topic. If you need complex business logic to determine the route, you can implement the getTopic method in the new TopicMapper class to handle this mapping logic.

Also note that 1.4 introduced a topic.suffix configuration property in addition to topic.prefix. Using our example above, you can configure

"topic.prefix": "Stocks", "database": "Customers", "collection": "Orders", "topic.suffix": "US"

This defines the topic to write to as "Stocks.Customers.Orders.US".

Next Steps

- Download the latest MongoDB Connector for Apache Kafka 1.4 from the Confluent Hub!
- Read the MongoDB Connector for Apache Kafka documentation
- Questions? Need help with the connector? Ask the Community
- Have a feature request? Provide feedback or file a JIRA

February 9, 2021

How Hackathons Inspire Innovation and Creativity at MongoDB

When our engineers aren't creating the best products to help our customers bring their big ideas to life, they're working to bring their own ideas to fruition. Launched in 2013, hackathons are a big part of MongoDB's engineering culture, giving our teams the freedom to create, innovate, and learn.

About Hackathons at MongoDB

Once a year, members of our Engineering department (including Product Managers, Support Engineers, Developer Advocates, and more) spend a week working on a project of their choice. Whether with a team or solo, the sky's the limit. For some, it's about creating new features or product updates to serve our customers better. For others, it's about building internal tools and processes to make their day-to-day easier. Some engineers even use the time to work on passion projects or focus on self-improvement via online courses and reading a backlog of technical papers. No matter the goal, the hackathon is a much-needed and appreciated week for sparking new ideas, working with different people, and building useful knowledge and skills.

How It's Judged

Our engineers battle it out to be named the winner in one of several categories. To be considered, participants create a project demo and submit it on the Thursday afternoon of hackathon week. From there, the demos are divided among four groups of judges consisting of three or four judges each. By Friday morning, the judges select demos (which are open to all employees for viewing) to move into the final round of judging.

The Prizes

For our hackathons, engineers aim to get the most votes in 10 selected categories. Some categories include:

- Most Likely to Be Adored by the Support Team
- Most Likely to Make the Company 10 Million Dollars in 2021
- Most Likely to Be Deployed by Production
- Best (Ab)Use of Cloud/Ops Manager
- Best Eng/Non-Eng #BuildTogether Award

The Projects

Past winning projects that made their way into production include MongoDB Charts, custom JS expressions in the aggregation framework, and GraphQL support in MongoDB Realm Sync. Out of more than 120 submitted projects, here are a few that won our 2020 hackathon:

Leafy Catchy

Eileen Huang, a Product Designer based in MongoDB's New York City headquarters, pulled together a team of designers and engineers to build a game users can play while waiting for their cluster to build. "We wanted to show that even when doing something technical such as managing databases, people could always benefit from having a delightful moment," she says. "Although the game isn't live, it was a super fun week of exploring various game design techniques and trying to create a fully fleshed-out game with a playable character, sound, game UI, and more."

Evergreen Project Visualizations

David Bradford, a New York City-based Lead Engineer for the Developer Productivity team, built a tool to visualize the runtime and reliability of the test suites in MongoDB's continuous integration system. The tool plots the averages for all the test suites against each other and allows users to click into a given test suite to see a more detailed view of that suite's history. "The project was mostly to address a personal pain point," David explains. "We see the effects of long-running or unreliable tests fairly frequently, but given the number of tests we run, it takes some investigation to know which improvements would have the most impact. Building a tool that can visualize the data makes it easy to find which test suites provide the most benefits from improvements.
It also enables other teams and engineers to start the investigations themselves."

MongoDB Charts Social Sharing

Matt Fairbrass, a Senior Software Engineer on our Sydney team, originally wrote a proposal for MongoDB Charts Social Sharing as a Request for Comments. The hackathon gave him and Senior Software Engineer Hao Hu an opportunity to collaborate on a proof of concept. With the core focus on data sharing, their goal was to make it quick and easy to share individual charts with others, whether via email or by posting to one of the social networks. To do this, they added controls to the chart Embedding Dialog to make the task as simple as the single click of a button.

"As the discourse of the modern world unfortunately has shown us, being able to distinguish between what is factual and what is fake is becoming increasingly more important," Matt states. "As a result, data is now more than ever the most important tool we can use to surface the unbiased and unvarnished truth in social debate. But this is only true if the data is accessible to everyone."

Charts are visual by their very nature, he continues, "so it's somewhat ironic that the current experience of sharing a link to a publicly accessible chart on a social network is anything but visual. So, the second goal of our project was to generate rich preview images of the chart being shared dynamically, and automatically attach them to the social media post by using the Open Graph Protocol, all while respecting the security permissions of the chart as set by the author."

Matt and Hao successfully tested this by extending the existing infrastructure to run an instance of Puppeteer. The system worked so well that they were able to extend the same functionality to support dynamically generated screenshots of publicly linked shared dashboards as a stretch goal. "This project has also opened up other avenues for the MongoDB Charts team to explore for further enhancing the product, so this proof of concept has now been turned into a user story that will later be worked on by the broader team," Matt says.

Raspberry Pi Astronomical Database

Bruce Lucas, a Staff Engineer based in New York City, created a project inspired by his personal hobby: designing and 3D-printing an altazimuth telescope mount. "My goal was to leverage a queryable database of stars to write software that automatically captures images, points the scope, and tracks the moving sky by using a Raspberry Pi," he says. "To do this, I wanted to test a theory to see if a MongoDB database with geoqueries could be used and would run on the Raspberry Pi."

Pinwheel

Emily Cardner, a Campus Recruiting Manager based in New York, partnered with engineers on a project to help manage cohorts of employees. With MongoDB's robust New Grad Program that allows interns to rotate on various teams before being permanently placed, managing the entire process had become overly tedious and complicated, and she wanted an app to make it easier. "Even before the hackathon, I did some research to see if a platform like this existed, but I couldn't find anything," she explains. "I thought I could throw it out as an option to see if someone looking to join a project wanted to build an app. I knew it could be a cool project working with MongoDB's Realm product and that there could be an appetite for UI folks, but there was one problem: I'm not technical at all! So, I recruited a few folks via Slack and generated a bit of interest from various teams.
They came up with an awesome minimum viable product (MVP) after we had a few brainstorming sessions."

This project is important for a few reasons, she adds. "First, I'm now working with the Engineering Corps team that creates internal tools to turn the MVP into a real product. As it turns out, other folks at the company needed cohort management tools too, so now the L&D, Education, and Sales Enablement teams are all working with us on it," she says. "Second, I learned a lot about the engineering process through this project. It was really cool to create my own mockups and collaborate with the engineers to see how products are created. I think it will help me when working with engineers in the future."

Emily adds that she may have influenced a new hackathon award category. "I may or may not have made up my own award and then lobbied the judges to include it," she says. "I thought creating a #BuildTogether award would encourage more people like me who are not traditionally in Engineering to work with engineers and create cool products. The judges agreed, and we ended up winning!"

Why This Matters

Our engineers covet this time every year to explore, create, and tackle new problems. Hackathon week also offers an opportunity to connect and collaborate with others. Many projects have openings for additional members, allowing employees from various technical areas to partner with people they might not normally work with, establishing a stronger culture and fostering cross-departmental relationships. Hackathons also let our engineers work on projects that were dropped or pushed down the priority list by competing priorities. Even if the projects aren't implemented, seeing demos and having thoughtful conversations about them helps spin up new ideas for things to add to our product roadmap. By encouraging people to step out of the day-to-day, take a moment (or a week) to think differently, and work with other people who offer new perspectives, hackathons not only add value to our product offerings but also help our engineers expand their skills and creativity.

Interested in pursuing a career at MongoDB? We have several open roles on our teams across the globe, and we would love for you to build your career with us!

February 9, 2021

MongoDB Launches Sales Academy

We're thrilled to announce our inaugural MongoDB Sales Academy! This program will prepare emerging professionals with the training and experience they need to jumpstart a career in sales. We're looking for recent college graduates with an interest in technology to join our rapidly growing sales team.

"The creation of a program designed to develop recent college graduates into sales professionals is a natural extension of MongoDB's culture of talent development. We have best-in-breed sales enablement and onboarding programs, and a 'BDR to CRO' program focused on accelerating sales careers. We have an opportunity to bring these world-class training programs to those who are starting their careers, and to turn emerging professionals into future leaders at MongoDB." - Meghan Gill, VP Sales Operations & SDR

The Sales Academy will be a full-time, paid, 12-week training program based in Austin, TX. It will focus on training and developing future MongoDB Sales Development Representatives: upon completion, these recent college graduates will move into a full-time SDR position. Those who are part of the Sales Academy will have direct one-on-one support from their sales mentors, MongoDB's leadership team, the Campus Team, and each other. These New Grads will complete a best-in-class training program, which covers both technical concepts and sales processes. Through regular coaching and professional development training, our Sales Academy New Grads will graduate from the program and become full-time members of the Sales team at MongoDB.

"Life at MongoDB is ever-evolving and a great start for anyone looking to take their career to the next level. You can expect to constantly learn new things about technology and your customers, work alongside some of the best sales professionals in the industry, and be on the forefront of innovation. If you want to understand technology like never before, work with customers modernizing today's world, and get consistent feedback from peers and leadership, this is the right place for you." - Maya Monico, SDR Manager

This isn't the first time MongoDB has hired students into our sales organization. Hannah Branfman was part of our SDR Internship program and, upon graduating, joined us full time. When asked what sales at MongoDB is like, Hannah says:

"If you have ambition, are coachable, and have a strong desire to learn, MongoDB will be a great fit for you. You have to be willing to make mistakes and remain naturally curious — don't stop asking questions! If you have the perseverance to not only get here, but to then set the bar high for yourself and surpass it, you will fit in great. Get ready to make an impact!" - Hannah Branfman, SDR

We're eager to find recent college graduates who are ambitious and excited to learn. If you're interested in kickstarting your sales career at MongoDB in our Austin office, this could be the perfect fit for you! The job post is now up, and we look forward to reviewing your application and getting to know you!

February 3, 2021

MongoDB’s Customer Success Team Is Growing: Meet Members from Our EMEA Team

MongoDB is the perfect home for anybody looking to join a dynamic, fast-paced, and rapidly growing technology company that's blazing a trail in the database market. And because we're onboarding new customers constantly, from massive household brands to the newest startups, we need amazing people to set them up for success from day one. Customer Success (CS) is one team that does just that. MongoDB is currently looking for talented people worldwide to be part of a team that delivers next-generation solutions for driving digital transformation with a diverse roster of clients.

Want interview tips for our Customer Success roles? Read this blog.

As MongoDB's frontline resource, you'll share the journey with each customer from initial onboarding all the way through each phase of the customer's plan, developing strong and lasting partnerships along the way. Members of our EMEA-based CS team give their take on what to expect while working at MongoDB.

Diverse Backgrounds Are More Than Welcome

The Customer Success team is composed of creative teammates from a wide variety of backgrounds. As an inclusive community that values your ideas and embraces differences, the CS team believes all backgrounds and experiences can provide value to the role and the customers we serve. Across this diversity, team members all share two core characteristics: a passion for innovation and technology, and a zest for connecting with people.

Giuliana Alderisi, a Customer Success Specialist at MongoDB who oversees the Italian, Spanish, and Nordics region, speaks to the diversity of experiences across the CS team. "Our backgrounds as Customer Success Specialists are really heterogeneous," she says. "I'm a computer engineer, but I have teammates who come from very different backgrounds, such as economics, sales development, and marketing, just to name a few. Of course, to increase the level of support we provide to customers, we also come from different countries and speak different languages. I always enjoy the ability to look at things from a different perspective. So, needless to say, I love our coffee breaks where we share our experiences."

One of those teammates she enjoys meeting with is Lucia Fabrizio, a Customer Success Manager covering the Enterprise Italian market. "After spending some years in sales and enablement roles, I found myself eager to start a new challenge, and I really wanted to better understand what happens after the sale is closed," Lucia says. "I knew I enjoyed inspiring and educating others, as well as guiding them as they solved problems and tackled new opportunities, but I was unsure what my next career move could be. Then I came across MongoDB's Customer Success Manager role, and it ticked all the boxes. I would describe myself as an introvert, which doesn't mean I am shy. I simply enjoy listening and using my genuine curiosity to dive deeply into any situation and then act strategically. I've learned that this is a great quality for Customer Success Managers."

What You Do Matters

The opportunities for discovery and growth are seemingly boundless for MongoDB's CSMs. "The team is incredibly skilled and inclusive," says Giuliana. "It is rare that I spend a day without learning something new from my team members." So far for Giuliana, this has included everything from pipeline generation and work on expansions to improving soft skills and stakeholder management. And according to Giuliana, building together within the MongoDB community is an immensely enjoyable process.
"We all know each of us has different talents and different skills, so collaboration is not just essential — it is promoted. We brainstorm together and openly share the ideas we have to make our customers successful," she says. "MongoDB is big, so sometimes it might be difficult to identify the right person or department you should reach out to to get the task done. However, everyone at MongoDB is super friendly, and in a matter of minutes, you'll find the answer you're looking for."

Part of the golden learning opportunity for those on the CS team is the chance to familiarize yourself with the full range of exciting products at the company's disposal. You'll have the freedom to explore the many facets of MongoDB, gain an understanding of how the products work, and collaborate with a variety of talented individuals. "We work with a lot of different customers and industries," Giuliana says. "We're specialized in driving them to success while they use MongoDB products, no matter who the final user is. This also means we are product-certified and get to know the major MongoDB products so we can properly help our customers."

MongoDB does everything it can to provide team members with the tools, resources, and training needed to hit the ground running. We have a dedicated Customer Success boot camp that runs in parallel to our Sales boot camp, helping the team prepare to work with customers, including onboarding. In addition, the CS team has put together product certifications that focus on role-playing so members can practice working with customers. For those intimidated by high-level tech, the CS team is always surrounded by world-class experts who are generous with their time and eager to bring members up to speed on all of MongoDB's latest offerings. This includes partnering with the Product team to receive additional training, particularly for new products and tools.

Being Our Customers' Voice and Advocate

In the CS role, you don't just get to know emerging and cutting-edge products; you also cultivate lasting relationships with your customers. This includes everything from brainstorming creative ways for customers to adopt new features to ensuring their business is set up for scale, continuity, and sustainability. And because the CS team partners with a range of people in various job roles and companies, the top skills needed to successfully drive these relationships are:

- Technical acumen and interest in our technology
- Curiosity and eagerness to learn continuously
- Empathy for our customers

"The basis of MongoDB's Customer Success program — at least how I think of it — is moving from a 'vendor-customer' relationship to an actual partnership with our customers," says Lucia. "This is because we understand the importance of being our customers' advocate, not only supporting them through pain points but listening first and bringing their voice to our internal teams. When I meet with customers, I tell them to think of me as an 'orchestra director' who brings all the relevant MongoDB personas together to support them through each phase of their plan and create new goals together."

A Strong Culture Built on Core Values

Both Lucia and Giuliana speak glowingly about the culture at MongoDB. As Giuliana explains, the team is encouraged to work together on brainstorming sessions and lightning talks to compare notes and share knowledge with peers.
"We're also asked to take the time to explore new initiatives to help the CS program grow and find new ways to help our customers," Giuliana adds. "This was already great before COVID-19 and became even more important when the pandemic affected our lives." Giuliana also appreciates MongoDB's benefits, such as Emergency Care Leave, which helped ensure parents would not feel guilty taking care of their children during the height of the pandemic. As a matter of fact, she adds, "None of the customer-focused or new-hire programs, trainings, or onboardings stopped; MongoDB simply adapted and pivoted with a great effort of creativity and relentlessness."

Lucia has some parting wisdom for those hoping to join the team: "Be comfortable challenging the norm and bringing your own perspective," she says. "You are the CEO of your portfolio, but it is essential to 'build together' across the multitude of cross-functional teams here."

Interested in pursuing a career at MongoDB? We have several open roles on our teams across the globe, and we would love for you to build your career with us!

February 3, 2021

MongoDB Realm Sync Is GA

Every mobile developer wants to build an app that users will love - one that works regardless of signal strength, reacts to changes in data in real time, and won't drain your users' battery life or use excessive amounts of data. In June, we released MongoDB Realm, a set of integrated application development services that makes it possible for anyone to build a great app - whether you're a solo developer working to stand up your idea or part of a larger team shipping your latest release. As part of this, we announced a public beta for MongoDB Realm Sync, which makes it easier for you to keep data in sync across users, devices, and your back end, even when devices aren't always online.

We're excited to share that as of today, Realm Sync is Generally Available (GA). We believe Realm Sync offers a best-in-class solution for offline-first app developers who need to move data between a local client and the cloud. With the Realm Sync service, we've significantly reduced the code you need to write while also reducing the complexity of your app architecture. Crucially, we've done it while making sure everything is built to conserve battery power, CPU, and bandwidth. As a developer, you no longer need to write (or maintain) thousands of lines of complex conflict resolution and networking code. Realm Sync handles that for you, making it simple to move data between the local Realm Mobile Database and MongoDB Atlas on the back end. You can build features faster, reduce bugs, and deliver a better user experience - and do it all without having to worry about standing up or scaling servers.

Download the MongoDB Realm Whitepaper

Building for an Offline-First Environment

To many development teams, synchronizing data between the client and the back end sounds simple. But when connectivity isn't guaranteed, it becomes time-consuming and complex to achieve. MongoDB Realm simplifies data sync. Synchronization works bidirectionally, moving data between the Realm Mobile Database on the client side and MongoDB Atlas on the back end. Automatic conflict resolution resolves any data conflicts that emerge across multiple devices, users, and your back end, and ensures data is consistent whenever mobile devices come online. Because data is synced to Atlas, applications can easily scale infrastructure up or down as app usage changes. MongoDB Realm Sync also:

- Speeds feature innovation. The Realm Mobile Database, used to store data locally on-device, and MongoDB Realm Sync both reduce the code developers need to write, freeing up time to focus on building new features that provide unique business value.
- Works across platforms. The Realm Mobile Database and MongoDB Realm Sync work on any platform, for any mobile device.
- Is secure and stable. MongoDB Realm lets you encrypt data in flight and at rest, both in the cloud and on-device.
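To give a feel for how little sync code this leaves in the app, here is a minimal sketch using the Realm JavaScript SDK. The app ID, partition value, and schema below are placeholders, not from the original post:

const Realm = require("realm");

// Hypothetical task schema; any Realm object schema works the same way.
const TaskSchema = {
  name: "Task",
  primaryKey: "_id",
  properties: { _id: "objectId", name: "string", done: "bool" },
};

async function openSyncedRealm() {
  // Replace with your own Realm app ID from the MongoDB Realm UI.
  const app = new Realm.App({ id: "my-realm-app-abcde" });
  const user = await app.logIn(Realm.Credentials.anonymous());

  // Opening the realm with a sync config is all that's needed;
  // conflict resolution and retries are handled by Realm Sync.
  return Realm.open({
    schema: [TaskSchema],
    sync: { user, partitionValue: "demo" },
  });
}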
"[Managers] can start using devices immediately, rather than waiting 2-3 minutes to download the data on initial startup, like they used to. Data accuracy - especially around inventory when sales happen or shipments arrive - has really improved.” “We’re evaluating using Realm and Realm Sync to assist with inbound and outbound parcel shipping use cases,” said James Fairweather, Chief Innovation Officer, Pitney Bowes. “As an example, we are exploring building an app on Realm for our front-line workers to scan a package that would automatically sync the data back to MongoDB Atlas providing consistent reporting and up-to-date logistics throughout the shipping journey.” With MongoDB Realm Sync, mobile developers have the tools to make data sync simple, making sure they both build apps fast, while still making sure that even complex components like real-time data sync are built right. Try MongoDB Realm Sync, and get started building your offline-first app. Try MongoDB Realm Sync Today

February 2, 2021