Introducing: Atlas Operator for Kubernetes

The MongoDB Enterprise Operator automates and manages MongoDB clusters on self-managed infrastructure. While this integration provides complete control over self-managed MongoDB deployments from a single Kubernetes control plane, we’re taking it a step further by extending this functionality to our fully managed database, MongoDB Atlas. We’re excited to introduce the trial version of the Atlas Operator for Kubernetes. The Atlas Operator allows you to manage all your MongoDB Atlas clusters without ever leaving Kubernetes, keeping your workflow as seamless and optimized as possible by managing the lifecycle of your cloud-native applications from the place you want most. With the trial version of the Atlas Operator, you can provision and deploy fully managed MongoDB Atlas clusters on the cloud provider of your choice through Kubernetes. This is especially important for those seeking to unlock the power of multi-cloud with tools and services native to AWS, Google Cloud, and Azure, without adding complexity to the data management experience. With the new Atlas Operator, you get the best of all clouds with multi-cloud clusters on Atlas, coupled with the freedom to run your entire stack anywhere, all managed from one central location. “Trial version” simply means the Operator has all the core functionality needed to provision fully managed Atlas clusters; the bells and whistles are yet to come. In addition to encapsulating core Atlas functionality, it creates a Kubernetes Secret for each database user, which allows for easier management of sensitive data. The Atlas Operator also lets you create IP bindings so your applications can securely access clusters. If you’re interested in using the trial version of the Atlas Operator today, follow the quickstart guide below to get started!

Quickstart

Below you’ll find the steps to create your first cluster in Atlas using the Atlas Operator.
Note that you need a running Kubernetes cluster before deploying the Atlas Operator.

1. Register for or log in to Atlas and create API keys for your organization. This information, together with the organization ID, will be used to configure the Atlas Operator's access to Atlas.

2. Deploy the Atlas Operator:

```shell
kubectl apply -f \
    https://raw.githubusercontent.com/mongodb/mongodb-atlas-kubernetes/main/deploy/all-in-one.yaml
```

3. Create a Secret containing the connection information from step one. This Secret will be used by the Atlas Operator to connect to Atlas:

```shell
kubectl create secret generic mongodb-atlas-operator-api-key \
    --from-literal="orgId=<the_atlas_organization_id>" \
    --from-literal="publicApiKey=<the_atlas_api_public_key>" \
    --from-literal="privateApiKey=<the_atlas_api_private_key>" \
    -n mongodb-atlas-system
```

4. Create the AtlasProject Custom Resource:

```shell
cat <<EOF | kubectl apply -f -
apiVersion: atlas.mongodb.com/v1
kind: AtlasProject
metadata:
  name: my-project
spec:
  name: Test Atlas Operator Project
  projectIpAccessList:
    - ipAddress: "0.0.0.0/0"
      comment: "Allowing access to database from everywhere (only for Demo!)"
EOF
```

5. Create the AtlasCluster Custom Resource:

```shell
cat <<EOF | kubectl apply -f -
apiVersion: atlas.mongodb.com/v1
kind: AtlasCluster
metadata:
  name: my-atlas-cluster
spec:
  name: "Test-cluster"
  projectRef:
    name: my-project
  providerSettings:
    instanceSizeName: M10
    providerName: AWS
    regionName: US_EAST_1
EOF
```

You'll have to wait until the cluster is ready, when the "Ready" condition in the status field shows "True":

```shell
kubectl get atlasclusters my-atlas-cluster \
    -o=jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
True
```

6. Create a Secret for the password that will be used to log in to the Atlas cluster database:

```shell
kubectl create secret generic the-user-password \
    --from-literal="password=P@@sword%"
```

7. Create the AtlasDatabaseUser Custom Resource (it references the password Secret):

```shell
cat <<EOF | kubectl apply -f -
apiVersion: atlas.mongodb.com/v1
kind: AtlasDatabaseUser
metadata:
  name: my-database-user
spec:
  roles:
    - roleName: "readWriteAnyDatabase"
      databaseName: "admin"
  projectRef:
    name: my-project
  username: theuser
  passwordSecretRef:
    name: the-user-password
EOF
```

Shortly, the Atlas Operator will create a Secret containing the data necessary to connect to the Atlas cluster. You can mount it into your application Pod and read the connection string from a file or from an environment variable:

```shell
kubectl get secrets/test-atlas-operator-project-test-cluster-theuser \
    -o=jsonpath="{.data.connectionString\.standardSrv}" | base64 -d
mongodb+srv://theuser:P%40%40sword%25@test-cluster.peqtm.mongodb.net
```

Stay Tuned for More

Be on the lookout for updates in future blog posts! The trial version of the MongoDB Atlas Operator is currently available on multiple marketplaces, but we’ll be looking to make enhancements in the near future. For more information, check out our MongoDB Atlas & Kubernetes GitHub page and our documentation.
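As a sketch of the "mount it into your application Pod" step, a minimal, hypothetical Pod spec could surface the generated connection string as an environment variable. The Secret name and key here match the ones shown above; the Pod and image names are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app            # hypothetical application Pod
spec:
  containers:
    - name: app
      image: my-app:latest   # placeholder image
      env:
        - name: MONGODB_URI
          valueFrom:
            secretKeyRef:
              # Secret created by the Atlas Operator for this project/cluster/user
              name: test-atlas-operator-project-test-cluster-theuser
              key: connectionString.standardSrv
```

Your application can then read `MONGODB_URI` at startup and pass it straight to the driver.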

April 8, 2021

MongoDB Connector for Apache Kafka 1.5 Available Now

Today, MongoDB has released version 1.5 of the MongoDB Connector for Apache Kafka! This article highlights some of the key features of this new release, which also continues to improve the overall quality and stability of the connector.

DeleteOne Write Model Strategy

When messages arrive on Kafka topics, the MongoDB sink connector reads them and by default upserts them into the MongoDB cluster specified in the sink configuration. But what if you don’t always want to upsert them? This is where write model strategies come in: they give you the flexibility to define what you want to do with each document. While the concept of write model strategies is not new to the connector, this release adds a new strategy called DeleteOneBusinessKeyStrategy. It is useful when a topic contains records identifying data that should be removed from a collection in the MongoDB sink. Consider the following: you run an online store selling fashionable face masks. As part of your architecture, the website sends orders to a Kafka topic, “web-orders”, which upon message arrival kicks off a series of actions such as sending an email confirmation and inserting the order details into an “Orders” collection in a MongoDB cluster. A sample Orders document:

```js
{
  _id: ObjectId("6053684f2fe69a6ad3fed028"),
  'customer-id': 123,
  'order-id': 100,
  order: { lineitem: 1, SKU: 'FACE1', quantity: 1 }
}
```

This process works great. However, when a customer cancels an order, we need another business process to update our inventory, send the cancellation email, and remove the order from our MongoDB sink. In this scenario, a cancellation message is sent to another Kafka topic, “canceled-orders”. For messages in this topic, we don’t just want to upsert them into a collection; we want to read the message from the topic and use a field within the document to identify the documents to delete in the sink.
For this example, let’s use the order-id key field and define a sink connector using DeleteOneBusinessKeyStrategy as follows:

```json
{
  "connector.class": "com.mongodb.kafka.connect.MongoSinkConnector",
  "topics": "FaceMaskWeb.OrderCancel",
  "connection.uri": "mongodb://mdb1",
  "database": "FaceMaskWeb",
  "collection": "Orders",
  "writemodel.strategy": "com.mongodb.kafka.connect.sink.writemodel.strategy.DeleteOneBusinessKeyStrategy",
  "document.id.strategy": "com.mongodb.kafka.connect.sink.processor.id.strategy.PartialValueStrategy",
  "document.id.strategy.partial.value.projection.type": "AllowList",
  "document.id.strategy.partial.value.projection.list": "order-id",
  "value.converter": "org.apache.kafka.connect.json.JsonConverter",
  "value.converter.schemas.enable": false,
  "document.id.strategy.overwrite.existing": true
}
```

Now when messages arrive in the “FaceMaskWeb.OrderCancel” topic, the “order-id” field is used to delete documents in the Orders collection. For example, using the sample document above, putting this value into the OrderCancel topic:

```json
{ "order-id": 100 }
```

would cause the document in the Orders collection with an order-id of 100 to be deleted. For a complete list of write model strategies, check out the MongoDB Kafka Connector sink documentation.

Qlik Replicate

Qlik Replicate is recognized as an industry leader in data replication and ingestion. With this new release of the connector, you can now replicate and stream heterogeneous data from data sources like Oracle, MySQL, Postgres, and others to MongoDB via Kafka and the Qlik Replicate CDC handler. To configure the MongoDB Connector for Apache Kafka to consume Qlik Replicate CDC events, use “com.mongodb.kafka.connect.sink.cdc.qlik.rdbms.RdbmsHandler” as the value for the change data capture handler configuration parameter. The handler supports insert, refresh, read, update, and delete events.
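To make the AllowList projection concrete, here is a toy Python sketch (not the connector's actual implementation) of how a partial-value projection turns an incoming cancellation message into the filter used for the delete:

```python
# Toy sketch of PartialValueStrategy with an AllowList projection:
# only the allow-listed business-key fields from the message are kept
# and used as the MongoDB delete filter.

def build_delete_filter(message, allow_list):
    """Project the allow-listed business-key fields out of the message."""
    return {field: message[field] for field in allow_list if field in message}

# A cancellation message from the "FaceMaskWeb.OrderCancel" topic:
cancel_message = {"order-id": 100, "reason": "changed my mind"}

# With the projection list set to "order-id", only that field identifies
# the document(s) to delete in the sink collection:
print(build_delete_filter(cancel_message, ["order-id"]))  # {'order-id': 100}
```

The resulting filter is what a DeleteOne-style write model would pass to the sink, deleting the matching Orders document.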
Errant Record Reporting

Kafka Connect, the service that manages connectors integrating with a Kafka deployment, has the ability to write records to a dead letter queue (DLQ) topic if those records could not be serialized or deserialized. Starting with Apache Kafka version 2.6, support was added for error reporting within sink connectors. This gives sink connectors the ability to send individual records to the DLQ if the connector deems them invalid or problematic: for example, if you are projecting fields in the sink that do not exist in the Kafka message, or if your sink expects a JSON document and the message arrives in a different format. In these cases, an error is written to the DLQ instead of failing the connector.

Various Improvements

As with every release of the connector, we are constantly improving its quality and functionality, and this release is no different. You’ll also see pipeline errors now showing up in the Connect logs, and the sink connector can now be configured to write to the dead letter queue!

Next Steps

Download the latest MongoDB Connector for Apache Kafka 1.5 from the Confluent Hub! Read the MongoDB Connector for Apache Kafka documentation. Questions or need help with the connector? Ask the Community. Have a feature request? Provide feedback or file a JIRA.
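The routing decision described above can be sketched in a few lines of Python. This is an illustration of the idea, not Kafka Connect's code: a record that fails to parse, or lacks the fields the sink projects, goes to the DLQ rather than failing the connector:

```python
import json

def route_record(raw, expected_fields):
    """Return ("sink", doc) for valid records, ("dlq", None) for bad ones."""
    try:
        doc = json.loads(raw)
    except json.JSONDecodeError:
        return ("dlq", None)          # not a JSON document
    if not isinstance(doc, dict) or not set(expected_fields).issubset(doc):
        return ("dlq", None)          # missing projected fields
    return ("sink", doc)

print(route_record('{"order-id": 100}', {"order-id"}))  # ('sink', {'order-id': 100})
print(route_record('not json at all', {"order-id"}))    # ('dlq', None)
```

In the real connector, the "dlq" branch corresponds to handing the record to the errant record reporter, which publishes it to the configured DLQ topic.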

April 7, 2021

Global, Multi-Cloud Security at Scale with MongoDB Atlas

In October 2020, we announced the general availability of multi-cloud clusters on MongoDB Atlas. Since then, we’ve made several key improvements that allow customers to take advantage of the full breadth of MongoDB Atlas’ best-in-class data security and privacy capabilities across clouds on a global scale.

Cross-Cloud Security with MongoDB Atlas

A common question we get from customers about multi-cloud clusters is how security works. Each cloud provider offers protocols and controls to ensure that data within its ecosystem is securely stored and accessed. But what happens when your data is distributed across different clouds? Don’t worry, we have you covered. MongoDB Atlas is designed to ensure that our built-in best practices are enforced regardless of which cloud providers you choose, from dedicated network peering connections to customer-managed keys for encryption at rest and client-side field-level encryption.

Private Networking to Multiple Clouds

You can now create multiple network peering connections and/or private endpoints for a multi-cloud cluster to access data securely within each cloud provider. For example, say your operational workload runs on Azure, but you want to set up analytics nodes in Google Cloud and AWS so you can compare the performance of Datalab and SageMaker for machine learning. You can set up network peering connections for all three cloud providers in Atlas to allow each of your cloud environments to access cluster data in its respective nodes using private networks. For more details, take a look at our documentation on network peering architecture.

Integrate with Cloud KMS for Additional Control Over Encryption

Any data stored in Atlas can be encrypted with an external key from AWS KMS, Google Cloud KMS, or Azure Key Vault for an extra layer of encryption on top of MongoDB’s built-in encrypted storage engine.
You can also configure client-side field level encryption (client-side FLE) with any of the three cloud key management services to further protect sensitive data by encrypting document fields before they even leave your application (support for Azure Key Vault and Google Cloud KMS is available in beta with select drivers). This means data remains encrypted even while it is in memory and in use within your live database. The encrypted data remains queryable by the application but is inaccessible to any administrators running the database or underlying cloud infrastructure for you. Beyond security, client-side FLE is also a great way to comply with right-to-erasure requests that are part of modern privacy regulations such as the GDPR and the CCPA: simply destroy the user’s encryption key, and their PII becomes unreadable and irrecoverable in memory, on disk, in logs, and in backups. For multi-cloud clusters, this means you can take advantage of multiple layers of encryption that use keys from different clouds. For example, you can have PII encrypted client-side with AWS KMS keys, stored in both an AWS and a Google Cloud region on Atlas, and further encrypted at rest with a key managed via Azure Key Vault.

Global, Multi-Cloud Clusters on MongoDB Atlas

For workloads that reach users across continents, our customers leverage Global Clusters. These give you the unique ability to shard clusters across geographic zones and pin documents to a specific zone. Now that Atlas is multi-cloud, you can choose from the nearly 80 available regions across all three providers, expanding the potential reach of your client applications while making it easy to comply with data residency regulations. Consider a sample scenario where you’re based in the US and want to expand to reach audiences in Europe. To comply with the GDPR, you must store EU customer data within that region.
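The right-to-erasure pattern above is often called crypto-shredding, and a toy Python sketch makes the idea concrete. This is not the drivers' FLE implementation, and the "cipher" here is deliberately simplified (a SHA-256-based keystream, not real cryptography); the point is only that once the per-user key is destroyed, every copy of that user's ciphertext becomes unreadable:

```python
import hashlib
import secrets

key_store = {}  # per-user data keys; with FLE these would live in a cloud KMS

def _keystream(key, n):
    # Toy keystream (NOT real cryptography): chained SHA-256 of the key.
    out, block = b"", key
    while len(out) < n:
        block = hashlib.sha256(block).digest()
        out += block
    return out[:n]

def encrypt_field(user_id, plaintext):
    key = key_store.setdefault(user_id, secrets.token_bytes(32))
    return bytes(a ^ b for a, b in zip(plaintext, _keystream(key, len(plaintext))))

def decrypt_field(user_id, ciphertext):
    key = key_store[user_id]  # raises KeyError once the key is shredded
    return bytes(a ^ b for a, b in zip(ciphertext, _keystream(key, len(ciphertext))))

token = encrypt_field("user-123", b"alice@example.com")
assert decrypt_field("user-123", token) == b"alice@example.com"

del key_store["user-123"]  # "right to erasure": destroy the key, and every
# copy of the ciphertext (on disk, in logs, in backups) is now unreadable.
```

With FLE, the same effect holds at the database layer: the server, its logs, and its backups only ever see ciphertext, so deleting the key from the KMS erases the PII everywhere at once.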
With Global Clusters, you can configure a multi-cloud cluster with a US zone and an EU zone. In the US, you choose to run on AWS, but in Europe, you decide to go with Azure because it has more available regions. All of this can be configured in minutes using the Atlas UI: simply define your zones and ensure that your documents contain a location field that dictates which zone they should be stored in. For more details, follow our tutorial on how to configure a multi-cloud Global Cluster on Atlas.

Future-Proof Your Applications with Multi-Cloud Clusters

There are many reasons why companies are considering a multi-cloud strategy, from cross-cloud resiliency to geographic reach to being able to leverage the latest tools and services on the market. With MongoDB Atlas, you get best-in-class data security, operations, and intuitive admin controls, regardless of how many cloud providers you want to use. To learn more about how to deploy a multi-cloud cluster on MongoDB Atlas, check out our step-by-step tutorial, which includes best practices for node distribution, instructions for testing failover to another cloud, and more.

Safe Harbor

The development, release, and timing of any features or functionality described for our products remains at our sole discretion. This information is merely intended to outline our general product direction and should not be relied on in making a purchasing decision, nor is it a commitment, promise, or legal obligation to deliver any material, code, or functionality.
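To illustrate how a location field pins documents to a zone, here is a small Python sketch. The country codes and zone names are hypothetical, not an Atlas API; in a real Global Cluster, Atlas performs this mapping for you based on the zone configuration you define in the UI:

```python
# Hypothetical zone mapping for a two-zone Global Cluster:
# a US zone on AWS and an EU zone on Azure.
ZONE_BY_COUNTRY = {
    "US": "us-zone-aws",
    "DE": "eu-zone-azure",
    "FR": "eu-zone-azure",
}

def zone_for(document):
    """Pick the zone a document is pinned to from its location field."""
    return ZONE_BY_COUNTRY[document["location"]]

order = {"_id": 1, "location": "DE", "total": 42.0}
print(zone_for(order))  # eu-zone-azure
```

Because the EU documents land only on shards in the EU zone, customer data stays resident in the region, which is what makes GDPR compliance straightforward here.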

April 7, 2021

Built with MongoDB: Gryphon Online Safety

As friends and coworkers at an IoT company, John Wu and Arup Bhattacharya used to commiserate about the perils the internet posed for their children. It’s a problem most parents can relate to—especially now, when some children spend more than seven hours a day online. One day, John’s daughter saw something online that horrified him, and he and Arup decided they wanted to help bring the internet back into the hands of parents so they could curate online content for their children. With that, Gryphon Online Safety was formed. Gryphon is a cloud-managed network protection platform for homes and small businesses that blocks viruses, malware, and hackers while giving parents the chance to filter content and monitor what their children are doing. With $5.4 million in seed funding, more than 30,000 customers, and a team of 30 employees across three countries, Gryphon is growing quickly. The COVID-19 pandemic further accelerated adoption of Gryphon’s products; with children spending more time on devices at home and hacking activity increasing online, the company has seen a significant boost in users. In this edition of #BuiltWithMongoDB, we talk to CTO and Co-Founder Arup Bhattacharya and Senior Cloud Solutions Architect Sandip Das about the future of internet security and their experience building Gryphon with MongoDB. MongoDB: How has your business changed during COVID-19, given that families have been spending more time at home and online? Arup Bhattacharya: Our business has thrived during COVID. Although we typically add a thousand customers every month, during the pandemic that number has skyrocketed. More people are working from home and more children are attending virtual classes, which has caused families to think more about security and parental controls. Although we typically see two main cycles with our business, one in August and the other around the holiday season, our product isn’t that cyclical. 
People upgrade their hardware at different times, and when they look for high-performance mesh WiFi routers and security, we are an obvious solution. What’s funny is that while parents deeply appreciate our solution and the security it provides, children often hate us. I stumbled across a Reddit post in which a child wondered how he could get past the access filters his father had set up via Gryphon. Someone responded: “There’s nothing you can do but grow up and buy your own router.” With that said, there’s so much bad content out there, from bullying to games that hurt children, that it’s crucial we give parents an easy way to control the experience their children have online. MongoDB: At what point did you implement MongoDB, and what decision framework and criteria led to that decision? Sandip Das: We compared the big databases in terms of what solutions were available. We wanted something freely available for rapid prototyping that made integration easy. For the back end, we use JavaScript with the Node.js runtime, which is easily compatible with MongoDB; in fact, it’s the default choice for database integration. MongoDB maintains its own Node.js library, and combined with how simple the integration was, this made MongoDB a good choice for us. Another big factor was storage. With MongoDB Atlas, you can have any number of servers, and you can quickly scale up to whatever your demands are. We developed the service from the beginning and managed it ourselves. However, as the load increased and more customers came on board, we thought it was time to seek out a better and more scalable solution that’s also easy to manage. That’s how we found MongoDB Atlas. With MongoDB Atlas auto-scaling, we were able to achieve the flexibility we always wanted, along with automated backup solutions. MongoDB: Arup, you've held several senior engineering positions before becoming Co-Founder and CTO of Gryphon. What advice would you give to others looking to follow that path?
AB: The CTO position is very critical because it is the bridge between technology and business. The first thing you should think about when starting a company is the pain point you are solving. We started by first asking ourselves how our product will help society. How will it help people improve their lives? The starting point of a company shouldn’t just be to make money overnight. What will keep you motivated through the difficulty of building a business is thinking deeply about how your product will make a positive impact on people’s lives. Second, there inevitably will be low times and high times. At several points in the founder’s journey, you will experience real doubt and wonder whether you can really achieve your goals. The best thing to do is to keep on pushing for the highest-quality product possible. If your product is the best on the market and you are solving a genuine problem, the customers will find and appreciate you. Looking to build something cool? Get started with the MongoDB for Startups program.

April 6, 2021

Dive Deeper into Chart Data with New Drill-Down Capability

With the latest release of MongoDB Charts, you’re now able to dive deeper into the data that’s aggregated in your visualizations. At a high level, we generally create charts, graphs, and visualizations of our data to answer questions about our business or products. Oftentimes, we need to “double-click” on those visualizations to get insight into each individual data point that makes up the line, bar, column, etc.

How the drill-down functionality works:

Step 1: Right-click on the data point you’re interested in drilling down into.
Step 2: Click “Show data for this item.”
Step 3: View the data in tabular or document format.

Each view can be better for different circumstances. For data without too many fields or nested arrays, a table might be quicker and easier to read. On the other hand, the JSON view allows you to explore the structure of documents and click into arrays.

Scenarios where more detailed information can help:

Data visualization use cases are relatively broad, but they often fall into three main categories: monitoring data, finding insights, and embedding analytics into applications. I’ll be focusing on the first two of these three, as there are many different ways you could build drilling down into data via embedded charts. (Read more about our click events and embedded analytics.) For data or performance monitoring purposes, we’re not speaking so much about the performance of your actual database and its underlying infrastructure, but about the performance of the application or system built on top of the database. Imagine I have an application or website that takes reviews. If I build a chart like the one below, where I want to easily see when an interaction hits a threshold I want to dive deeper into, I now have the ability to quickly see the document that created that data point. This chart shows app ratings given after a user session in an app.
For this example, we want to dive into any rating that was below a 3 (out of 5). This scatter plot shows I have two such ratings that cross that threshold. With the drill-down capability, I can easily see all the details captured in that user session. For finding new insights, let’s imagine I’m tracking how many transactions happen on my e-commerce site over time. In the column chart below, you can see purchases by month for the last year and a half (note: there’s a gap because this example is for a seasonal business!). Just by glancing at the chart, I can quickly see that purchases have increased over time and that in-app purchases have increased my overall sales. However, I want to see more about the documents that were aggregated to create those columns, so I can quickly see details about the transaction amount and location without needing to create another chart or dashboard filter. In both examples, I was able to answer a deeper-level question that the original chart couldn’t answer on its own. We hope this new feature helps you and your stakeholders get more out of MongoDB Charts, regardless of whether you’re new to it or have been visualizing your Atlas data with it for months, if not years! If you haven’t tried Charts yet, you can get started for free by signing up for MongoDB Atlas and deploying a free tier cluster.

April 6, 2021

How Three College Friends Became MongoDB Coworkers

Siya Raj Purohit, Chaitanya Varanasi, and Sohail Shaikh first met while attending the University of Texas at Austin (UT Austin) as undergraduate students. Five years after graduating, they found themselves brought together again — this time by MongoDB. I recently sat down with Siya, Chai, and Sohail to talk about this friendship that has been sustained through divergent career paths and continues to grow alongside their roles at MongoDB. Jackie Denner: Tell us about your story leading up to MongoDB. How did the three of you meet and begin to grow your careers? Siya Raj Purohit: I studied electrical and computer engineering at UT Austin from 2010 to 2013. Although Chai, Sohail, and I weren’t in the same year, we became friends from hanging out and working through the rigorous engineering curriculum in the same study lounge. Outside of the engineering building, Austin’s tech scene was exploding; some of my favorite memories with Chai and Sohail are going to tech events together. We met Stephen Wolfram (from WolframAlpha), briefly hung out with Mark Cuban, and crashed many SXSW tech events. Since graduating from college, I’ve lived in four states and worked across startups and venture capital firms. At MongoDB, I help provide founders with the resources they need to push the tech industry forward. Chaitanya (Chai) Varanasi: I am an electrical and computer engineering major from UT Austin, class of 2015 (Hook ‘Em!). Electrical and computer engineering is a fairly small cohort of students who all share a building and sit in the same hall for introductory classes. It is always said that the hottest fires forge the strongest metal. In our situation, we all had to go through grueling labs and coding assignments that would keep us up all night and unite us toward a common goal of passing that class. What started as collaboration on class materials very quickly transitioned into late-night frozen yogurt hangouts, playing Catan, and discovering Austin together. 
Sohail and I used to travel across the country for various hackathons, which was how we started our careers in software engineering. One of my favorite memories is of Siya taking us to the meetup of a lifetime at the Capital Factory, a startup incubator in Austin; we even got a picture with Stephen Wolfram! After graduating, I joined a large financial institution in Dallas as a software engineer, and then I began my presales journey in the performance space. After realizing the potential of data and understanding the value companies gain from data insights, I joined MongoDB. Sohail Shaikh: My journey in tech began when I was 12 years old and built my first computer. Since then, I have always been fascinated with new technologies and learning more about them. I was a math major at UT Austin, class of 2015. I actually can’t remember the first time I met Siya or Chai, because it seems as if I have known them forever, and I felt an immediate bond with both of them from the start. I have vivid memories of our times at UT together: attending hackathons, collaborating on ideas, and spending a lot of time talking about the future and how we could bring change. In the five-and-a-half years since graduating, I have worked in Palo Alto and Dallas — at a startup, at AppDynamics, and now at MongoDB. I’m excited to be reunited with Chai and Siya; we are all very passionate about making a positive impact in this world, and we are all doing that today at MongoDB! JD: What is your role at MongoDB? SRP: I’m helping the next generation of developers to build great companies. There is so much great talent coming out of universities and startup accelerator programs, and MongoDB for Startups works with developers to ensure they have the right products and services to transform their ideas into innovative companies. More than 1,500 companies have #BuiltWithMongoDB so far — and we’re super excited to continue growing the ecosystem. CV: I am a Senior Solutions Architect. 
My day-to-day job consists of being a technical partner to our rock-star sales team and performing proof of concepts with our customers to continually grow our MongoDB presence. SS: I am a Solutions Architect at MongoDB for the South Central region. My day-to-day job is working with customers in the presales organization and showcasing why MongoDB is so amazing. JD: How did you maintain your friendship after college? SRP: After college, I lost touch with Chai and Sohail for a couple of years. I moved to Silicon Valley, and although we periodically caught up through mutual friends, we didn’t really reconnect until we all joined MongoDB. I joined a few weeks before Chai (mostly to be part of his welcoming crew) and was ecstatic when Sohail told us he was joining MongoDB too. Now, we have a private Slack channel (named after one of our favorite Bollywood films) where we talk about our jobs and lives and also share cute memes and gifs. CV: Sohail and I both lived in Dallas and worked on the same team at a previous company. We have done multiple trips together and spent way too many nights eating sushi and Whataburger! Siya and I lost touch for a little because of the distance, but we were able to make up for lost time after joining MongoDB. SS: I am horrible at maintaining relationships, but Chai and Siya keep me in check (it’s just the type of people they truly are). I would meet Chai once a year on a group trip, and one day I called him to learn more about his new role at AppDynamics; he didn’t hesitate to refer me in. Next thing I knew, I was working with him on his team. Two-and-a-half years later, Chai decided to move to MongoDB, and I couldn’t resist. After working with Chai, I am now convinced I talk to him more than his wife does. Siya and I reconnected during the pandemic through a socially distanced meetup at a park while I was visiting San Francisco. Now that we both work for MongoDB, our friendship has picked up right where we left off. 
JD: All three of you joined MongoDB during the COVID-19 pandemic. How was the remote onboarding experience? SRP: Honestly, I was sort of nervous about joining remotely. I had left a company where I had really strong relationships with my coworkers, and it was daunting to imagine building new connections while being entirely remote. During my interview process, I asked for advice on how to best onboard. I was recommended the book The First 90 Days, which provided a great framework and onboarding roadmap. The MongoDB onboarding week itself was awesome — I met many people across the company, joined a few employee affinity groups (MongoDB Women is my favorite!), and learned about the lives of my coworkers beyond work — I even virtually met some of their babies and pets! I’m really excited to spend time with coworkers in person once it’s safer to do so. CV: I had a phenomenal experience with onboarding. Everyone at MongoDB has been nothing short of helpful. This was the first time in my life that I got to meet an entire executive team in a small group setting within the first month of joining the company. Each MongoDB executive hosts a coffee chat once a quarter, which is a great way to get to know them more personally. That kind of exposure is unparalleled, and it truly showed me how a great culture is supported from both the bottom up and the top down. SS: Onboarding at MongoDB is the best I have ever seen! Training and role clarity have been phenomenal, even in a remote setting. The material is organized and easy to grasp, and I don’t feel as if I have been left to figure everything out on my own. The team is extremely helpful in answering all of my questions and helping me grow. In Sales, there is also boot camp, which is divided into two parts for my role. Boot camp lasted for a month to avoid any Zoom fatigue (given that we are all virtual), which also gave us more time to work on our assignments and properly learn the lay of the land.
JD: What are you most excited about? SRP: I am so excited about Chai moving to NYC so we can work out of the same office when it reopens. I’ve already mapped out the top 10 bubble tea shops in NYC for us to visit. CV: I am ready to explore New York with Siya and have future MongoDB lunches together. Sohail and I are ready to tackle our Sales Kickoff and have fun when we return to normal situations after the pandemic. We are all career-driven individuals, and I am excited to see how we can uplift each other as a family. SS: I am most excited to be learning about the database space and contributing to growing the business. I am also super excited to see where MongoDB goes in the future. As one of the world’s fastest-growing databases, it feels as if we are on a rocket ship. JD: What advice would you give to others who are looking for a new role? SRP: Recruiting is always hard. Find unique ways to showcase why you’re a fit for a certain role or company — passion is seen and rewarded. CV: Always keep your connections and networks alive. Keep interacting with the folks you care about. I am nothing without my work friends and my work family. MongoDB is on a rocket ship right now, and you will absolutely love working here. SS: Don’t be afraid to take a risk in your careers, and put in an application to MongoDB today! We love working with talented, hard-working folks, and the grass is truly green on this side! Interested in pursuing a career at MongoDB? We have several open roles on our teams across the globe and would love for you to build your career with us!

April 1, 2021

Built with MongoDB: ADEx

Anyone who has reviewed legal documents knows how tedious and time-consuming the process can be. In the high-stakes, detail-oriented legal environment, even experienced lawyers or paralegals can make mistakes. And those mistakes can be expensive. Enter ADEx. ADEx is an online legal document due-diligence platform that is transforming the way people interact with legal and financial documents. “Computers never get tired, no matter how many pages your legal document contains or how dense its language,” says ADEx Co-Founder and CTO Apoorv Khandelwal. “Our platform can abstract your legal documents faster and more reliably than a paralegal.” The company has hosted more than 7 million contracts and partnered with large companies including Salesforce, Box, and Colliers International. As part of our #BuiltWithMongoDB series, we spoke with Apoorv about the company’s growth, its tech stack, and his experience scaling with MongoDB. MongoDB: What's ADEx's tech stack like? Apoorv Khandelwal: For our back end, we use the Java-based Play and Spring frameworks. We use Angular for the front end and Electron for the desktop app. For various predictions, we have Python Flask applications, and the deep learning models themselves are trained with TensorFlow and Keras. We deploy our servers and applications with Kubernetes. We use various AWS services for storing clients’ legal documents, machine learning models, and other files. But the majority of our application data — ranging from contract summaries to our provision library to user events — is stored in MongoDB. MongoDB: How did you decide to use MongoDB? AK: Having worked at Amazon as a software development engineer, I was familiar with SQL databases and Hadoop. The team focused on machine learning, so its input data formats and sources were constantly evolving. My experiences showed me the pain associated with keeping SQL schemas up to date.
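Apoorv’s point about constantly evolving data formats is easy to see in practice. Below is a minimal sketch of the document-model flexibility he’s describing; the collection name, field names, and values are invented for illustration and are not taken from ADEx’s actual schema:

```python
# Two contract-summary documents with different shapes. In MongoDB the same
# collection can hold both, so when an extraction model starts emitting new
# fields there is no ALTER TABLE or backfill migration to run.
lease = {
    "title": "Office Lease - 5th Ave",
    "type": "lease",
    "parties": ["Acme Corp", "Landlord LLC"],
    "rent_per_month": 12000,
}
nda = {
    "title": "Mutual NDA",
    "type": "nda",
    "parties": ["Acme Corp", "Beta Inc"],
    # Fields added later by newer models -- only this document carries them.
    "effective_date": "2021-03-01",
    "confidentiality_term_months": 24,
}

contracts = [lease, nda]

# With the pymongo driver this would simply be:
#   from pymongo import MongoClient
#   db = MongoClient("<atlas-connection-uri>").get_database("demo")
#   db.contracts.insert_many(contracts)

# Each document keeps only the fields it actually has:
assert "rent_per_month" in lease and "rent_per_month" not in nda
assert {doc["type"] for doc in contracts} == {"lease", "nda"}
```

In a relational schema, adding `confidentiality_term_months` would mean altering the table (and backfilling) in every environment; here the new field simply travels with the documents that have it.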
When the choice came for ADEx, it was clear to me that we couldn’t use SQL. My experiences in successful startups showed me how we could successfully leverage the flexibility and scalability of MongoDB. I had worked before with Dynamo and other NoSQL platforms, but we didn’t want to get tied down to specific cloud providers. There were conversations about graph databases such as Neo4j as well, but they were not ideal for the majority of our queries, which execute bulk data scans or do not start from a known data point. In the end, MongoDB’s flexibility and large community support made it the best choice. Later, upon joining the Techstars Accelerator in 2019, we were able to get credits through the Techstars and MongoDB for Startups partnership. We worked with a technical advisor at MongoDB to set up private connections from our applications. The learning curve was very short compared to other databases I had used; the basic concepts were clear, and the documentation guided me through the more complex data modeling and architecture decisions. Between features such as end-to-end encryption, auto-scaling, and automated backups, much of the basic database management work is now handled by MongoDB Atlas. MongoDB: How has MongoDB been for you as you've scaled? AK: With Atlas, I don’t have to worry about scaling anymore. Given how intuitive and easy to use it is — especially with the metrics and visualizations — it has solved a bunch of problems. I don’t even have to think about storage, because the database capacity automatically adjusts based on current data usage. Often with SQL, a team of database engineers may be needed to manage and run the database. With Atlas, we don’t need any dedicated person at all. We’ve been pleasantly surprised by the gentle learning curve as we’ve gradually utilized more MongoDB features.
For example, as we’ve introduced more sophisticated use cases in our products, we have enjoyed using MongoDB’s powerful aggregation framework to offload data processing from our application servers. We have an M30 cluster for our cloud environment and an M20 for QA. MongoDB: What advice do you have for developers hoping to someday become CTOs? AK: Three things. First, get prior experience at a successful startup with a small engineering team. You will witness firsthand the growing pains a CTO has to deal with. These practical lessons can be invaluable for your own venture. Second, act as a filter between the business and technical teams. Imagine filling a small plate with food from a giant buffet. In a startup, the technical team has a limited capacity with which to build features or maintain the product. You should actively filter the flow of incoming ideas and features. Prioritizing the most crucial ones will prevent overflowing the technical team’s capacity while ensuring maximum value for customers. And third, get good technical mentors. It’s difficult to design sufficiently abstract data models that anticipate all potential future pivots. But a good debate with mentors can save plenty of technical debt later on. The first years were hard for me until I got technical mentors, such as Lalit Kapoor and Mihai Strusievici through Techstars. Looking to build something cool? Get started with the MongoDB for Startups program.
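As a closing aside, the aggregation-framework offloading Apoorv mentioned can be sketched in a few lines. This is a hypothetical example (the collection and field names are invented, not ADEx’s): the pipeline itself is plain data that MongoDB evaluates server-side, and the pure-Python tail shows what the `$match` and `$group` stages compute on a small sample:

```python
from collections import Counter

# Group active contracts by type and count them, sorted by count descending.
# MongoDB runs this on the server, so the application never pulls raw
# documents over the wire just to compute a rollup.
pipeline = [
    {"$match": {"status": "active"}},
    {"$group": {"_id": "$type", "count": {"$sum": 1}}},
    {"$sort": {"count": -1}},
]

# With the pymongo driver this would be executed as:
#   results = list(db.contracts.aggregate(pipeline))

# The same computation expressed in plain Python on sample data,
# to show what the server would return:
sample = [
    {"type": "lease", "status": "active"},
    {"type": "nda", "status": "active"},
    {"type": "lease", "status": "active"},
    {"type": "lease", "status": "expired"},
]
active = [doc for doc in sample if doc["status"] == "active"]
counts = Counter(doc["type"] for doc in active)
assert counts == {"lease": 2, "nda": 1}
```

Pushing this work into the database is exactly the “offload data processing from our application servers” pattern described above.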

March 30, 2021

Announcing the MongoDB SI Architect Certification Program for Modernization to the Cloud

You know the value of modernization as a strategic initiative. It’s not about refreshing your portfolio of legacy applications with the latest innovations simply for the sake of moving to the cloud. This is much more than just “lift and shift.” True modernization is about realizing your company’s full potential and gaining a competitive edge through development methodologies, architectural patterns, and technologies. And by modernizing with MongoDB, you can build new business functionality 3-5x faster, scale to millions of users wherever they are on the planet, and cut costs by 70% or more. If you’re familiar with our technology and our Modernization Program, you already understand the benefits. But do your customers? And, if not, how do you tell them? To help you get started, the MongoDB Partner team has created the MongoDB SI Architect Certification, a full-scale kit of assets related to modernization. This free, self-paced certification helps you improve the modernization experience for a variety of customer types as well as drive conversations with customers around data center exit plans and application qualification for assessing cloud data platforms. Consider this certification the next step in deepening your expertise so you can expand your business opportunities and help customers modernize to the cloud. Customized for System Integrator partners, our certification teaches you how to discuss the benefits of modernization with various customers on a cloud journey. It enables architects to have deep discussions on vertical-based stories, migration tools, best practices, and architecture guidelines. System Integrator partners will also learn the fundamental value of offerings, messaging, objection handling, and more. Most importantly, this certification program equips SI architects with the ability to communicate key takeaways to the customer in a language they understand.
Program Structure The free SI Architect certification program is self-paced, takes approximately 40 hours, and is divided into six key sections, complete with a final certification exam. Introduction allows partners to access the modernization webinars and modernization program offerings. Top use cases focus on how MongoDB is used in business-wide strategic initiatives, like legacy modernization, cloud data strategy, microservices, and more vertical-based stories. Customer case studies highlight how MongoDB is deployed and leveraged through real-life customer case studies and proof points. University classes allow participants to leverage MongoDB University online as well as on-demand courses relevant to architects. Competitive edge helps architects understand the true value of MongoDB in comparison to the competition. Final certification culminates the program with a “Talk to the experts” session and a final certification exam, where participants take a real-world industry use case or customer project and assess how to migrate it to the cloud. The “Talk to the experts” session provides users with the opportunity to ask experts questions about the final certification exam. It also introduces the messaging around “MongoDB: The Intelligent Operational Data Platform” and details an Atlas TCO and sizing exercise. In addition to these assets, partners also have access to self-paced developer training and database administrator training here. Note: Download the enhanced Modernization Guide to refresh your knowledge on MongoDB modernization. Dive Deeper into MongoDB Cloud Technology What’s one key lesson we know for certain? The data management platform you choose is a key factor in successfully migrating legacy applications to the cloud. The MongoDB Cloud section of our Architecture Guide discusses the unique value MongoDB can bring to organizations making the transition to the cloud.
Note: Download the Architecture Guide to refresh your knowledge on MongoDB Cloud. The key components of the MongoDB cloud platform are: At its core is MongoDB, the general-purpose operational database for modern applications. Nearly every application needs a fast database that can deliver single-digit millisecond response times, and when it comes to speed, MongoDB delivers. With our flexible document data model, transactional guarantees, rich and expressive query language, and native support for both vertical and horizontal scaling, MongoDB can be used for practically any use case, reducing the need for specialized databases even as your requirements change. With multi-cloud clusters on MongoDB Atlas, customers can realize the benefits of a multi-cloud strategy with true data portability and a simplified management experience. Multi-cloud clusters let you use best-in-class technology across multiple clouds in parallel, migrate workloads across cloud providers seamlessly, and improve high availability with cross-cloud redundancy. Realm Mobile Database extends this data foundation to the edge. Realm is a lightweight database embedded on the client side. Realm helps solve the unique challenges of building for mobile, making it simple to store data on-device while also enabling data access when offline. Realm Sync is seamlessly integrated and keeps data up to date across devices and users by automatically syncing data between the client and a backend Atlas cluster. Ready to boost your knowledge and expertise? The Modernization Guide, Architecture Guide, and SI Architect certification program are waiting for you. Get started today. Start the free MongoDB SI Architect certification program today!

March 24, 2021

The Innovation Tax: How much are unproductive and unhappy developers costing you?

I am not someone who believes that developers should be coddled. And I don’t subscribe to a culture of entitlement for developers, or any other part of an organization for that matter, including the C-suite. We are all professional adults operating in the real world. We should treat each other like grownups, regardless of role or responsibility. From the coder to the financial analyst to the sales rep, we all bring our unique value to the company. So execs like me need to strive to understand, appreciate, and foster the critical skills every team member brings to the table. Let’s start with developers, one of my favorite cohorts. We’ve all heard the now overused adage of the digital age: “Every company is becoming a software company.” What this trope is trying to convey is that innovation in the digital space - application development - is a major force in driving new business creation and competitive advantage. The speed with which a new application can be deployed, coupled with the quantity of innovative features in it, is a direct lever on the success of a business. If applications are the currency of the new economy, then development teams are the market makers. In my experience, however, despite the relentless strategic emphasis on speed and innovation in the digital economy, these teams continue to be misunderstood, mismanaged, and marginalized inside both large and small companies. It’s not rational. Worse, it’s incredibly costly. I think about this as a tax on the amount of innovation that a company can produce. Companies pay this tax when they fail to understand the nature of the work developers do, or to provide a safe and productive environment for them to do it. And if you don’t get that right, you’re not going to be in this game for very long. Though I don’t write any production code these days, at heart, I’m still a developer. And at MongoDB, I lead hundreds of developers spread across tens of teams, so I’m constantly exposed to developer issues.
Over the course of my career, I’ve learned a few things about how – and how not – to cultivate a productive culture for developers. This will be an ongoing discussion, for sure. But to get things started, here are a few things to think about if you’re trying to reduce the “Innovation Tax” you’re currently paying: Give your developers business context Don’t insult the intelligence or maturity of your developers. They can – and must – understand the business rationale for their work. In fact, painting the strategic target for developers will result in a better work product as they align their key decisions in the architecture and design of your software. Once they understand the business context, they’ll find better ways of achieving it bottom-up than any top-down leader, even a CTO like me, possibly can. Respect tech debt — and pay down the principal In my experience, the single biggest source of low morale among developers is the combination of too much tech debt and management’s dismissal of it. Taking on some debt to get a release out is fine - if you do it knowingly and pay down the principal later. But leaders who don’t pay attention to mounting debt demonstrate in a very visceral way to developers that they’ve become Gantt-chart leaders and lost touch with their ethos of engineering. Developers don’t do well with cognitive dissonance, so when you tell them to build the next great thing on top of a dumpster fire, you lose credibility, they lose patience, and your company loses money as the pace of innovation slows to a crawl. Understand what your developers are really doing I could talk about this one for days, but the bottom line is that if leaders don’t understand how developers spend their time, they have no business leading the teams.
It’s easy to just focus on new features, but you must acknowledge and address the fact that adjacent work like maintaining databases or a legacy staging environment is pure drudgery that provides no innovation value, costs a fortune in developer time, and saps morale. Listen to the developers when they say that they need to revamp an adjacent or dependent system to understand why it’s important. Remove OKRs and vanity metrics Top-down innovation is an oxymoron. You have to trust that developers want nothing more than to see their work come to life. The more management tells them how to do their job – through objectives and key results or any other key performance indicators – the more they limit the scope of innovation. Paint the target, then get out of the way. Align your goals This goes back to providing business context. Leaders and developers need to believe they are working together toward the same goal. An oppositional relationship takes developers out of flow, and you can lose a whole day of productivity from a single negative interaction. Again, I’m not advocating coddling; developers have their part to play in the complex recipe that builds a successful company, just like everybody else. But for that to work, you must align business, technical, and organizational goals, and build honest and transparent relationships with your devs. Like I said, I could riff on this topic for many more days (or posts). And keep in mind, mismanaging developers is just one form of innovation tax. I’ll be exploring other hidden levies in this space over the coming months. But hopefully this starter list of dos and don’ts gets the conversation going. Please feel free to add to it (or subtract from it) on Twitter at @MarkLovesTech .

March 23, 2021