How to Use MongoDB Atlas to Make Your CRM More Efficient

As part of digital transformation, many companies want to optimize their internal business processes, gain more visibility into important business metrics, and create new automation routines. Data is always at the core of business processes and metrics, and most business-critical data is often located in one or a few repositories, such as a customer relationship management (CRM) system. Historically, business users have relied on spreadsheets and enterprise data warehouses to bring the data together and make decisions. These solutions can range from a disjointed set of dashboards to an all-in-one central console. But businesses that need to move fast must iterate on their data and processes quickly, and they can't do that if implementing a change in the CRM takes months or if things are done manually in spreadsheets. This article describes how MongoDB Professional Services created an internal solution to address these issues.

Our approach

In MongoDB Professional Services, we also needed to streamline our business processes and get out of spreadsheets for business management, especially for revenue forecasting. As the organization grew, the amount of manual labor associated with spreadsheet maintenance became untenable, and making sense of the data became more difficult, especially when the data might be inconsistent, stale, or even inaccurate. Ordinarily, a good CRM or Professional Services Automation (PSA) system can help solve this problem. At MongoDB, for example, we use Salesforce, which provides decent flexibility but also requires heavy customization and has limitations. We've also seen MongoDB customers address the problem by building ETL pipelines into MongoDB Atlas and taking advantage of MongoDB's flexible schema, query language and aggregation framework, and Atlas Search. The data from source systems is ingested as-is or remapped to create a single view.
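The remapping step can be pictured as a plain transformation function applied during ingestion. Below is a minimal, illustrative Python sketch; the Salesforce-style field names and the single-view shape are assumptions for illustration, not our actual schema:

```python
# Illustrative sketch: remap a Salesforce-style record into a single-view
# document. Field names here are hypothetical, not MongoDB's actual schema.

def to_single_view(sf_record: dict) -> dict:
    """Remap a raw CRM record into the shape the application consumes."""
    return {
        "_id": sf_record["Id"],                      # reuse the source ID
        "account": {
            "name": sf_record.get("Account_Name__c"),
            "region": sf_record.get("Region__c"),
        },
        "opportunity": {
            "stage": sf_record.get("StageName"),
            "amount": float(sf_record.get("Amount", 0) or 0),
        },
        "source": "salesforce",                      # track the producer
    }

record = {"Id": "006A0", "Account_Name__c": "Acme", "Region__c": "EMEA",
          "StageName": "Closed Won", "Amount": "12500"}
doc = to_single_view(record)
```

The resulting document would then be upserted into the single-view collection by the ETL job.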
The best approach we've found, however, is to optimize the schema for how the data will be consumed, with different parts of documents potentially coming from different source systems. Atlas App Services provides a serverless abstraction layer that allows fine-grained but flexible control over the schema to help you avoid conflicts and iterate without breaking compatibility. After considering alternatives, we created an internal CRM/PSA-augmenting system built on top of the MongoDB Atlas platform to provide us with additional capabilities and flexibility. This solution allows Professional Services to rapidly deliver advanced functionality, such as revenue forecasting, automation, and visibility into complex business metrics. It also allows Professional Services to address business systems' needs and promptly react to changes, with functionality beyond what is typically provided by other systems. MongoDB's internal solution, at its core, is serverless and data-centric, leveraging Atlas App Services functions and triggers for processing the data and Atlas Search for full-text search. It uses the Connector for BI, the Atlas GraphQL API, the App Services wire protocol, and Atlas Functions to access and manipulate data from other components. Its components include a React-based console application, Atlas Charts, Tableau dashboards, Google Sheets, and microservices for data import and integrations.

Project view of our internal solution console. Revenue forecasting module in our internal solution console. MongoDB Charts shows business metrics.

Solution architecture

The data architecture in our internal solution builds on the single-view approach and the data-mart concept. The main idea is to ingest relevant data from Salesforce and other systems, enrich it, and build on it quickly, as shown in the following image.
We followed these eight key principles to help enable this functionality:

1. Focus on bringing in data in the form that makes the most sense for the business, and find the right balance between making the ETL easy and optimizing for the foreseen application use cases.
2. Apply transformations in the ETL process to make the ingested data intuitive, including document hierarchy, field names, and data types.
3. Clearly define the data lifecycle in terms of data producers and consumers. Data producers can only overwrite documents and fields that they "own," and only those. For example, the ETL process from the source system should overwrite the data in MongoDB documents as needed, but it should only modify the fields that actually come from the pipeline. Aim to structure MongoDB documents in a way that makes it clear which fields are owned by which producer. Atlas App Services schema and rules can help ensure that the most critical documents and fields are correctly accessed and modified.
4. Use Atlas Functions and the App Services wire protocol in applications and services, as opposed to connecting directly to the Atlas instance. This allowed us to use Google SSO in the console without requiring any sophisticated security mechanisms for regular CRUD operations from within the application.
5. For complex data logic and on-the-fly calculations, use App Functions.
6. Use database triggers for propagating changes and generating data-driven events, and scheduled triggers for generating aggregated views and periodic work.
7. Use external services for communicating with the outside world (e.g., email sender, ETL job). The external services are invoked asynchronously by listening on change streams from their respective namespaces (pub-sub model). All external services work independently of each other.
8. Don't overthink.
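The producer-ownership principle can be made concrete with a small helper that builds a `$set` update touching only the fields a given producer owns. This is an illustrative Python sketch under stated assumptions; the ownership map and field names are hypothetical:

```python
# Illustrative sketch of the "producers only overwrite fields they own"
# principle. The ownership map and field names are hypothetical.

OWNED_FIELDS = {
    "etl": ["account.name", "opportunity.stage", "opportunity.amount"],
    "console": ["forecast.notes", "forecast.confidence"],
}

def build_owned_update(producer: str, changes: dict) -> dict:
    """Return a $set update restricted to the producer's own fields."""
    allowed = set(OWNED_FIELDS[producer])
    rejected = [field for field in changes if field not in allowed]
    if rejected:
        raise ValueError(f"{producer} does not own: {rejected}")
    return {"$set": dict(changes)}

update = build_owned_update("etl", {"opportunity.stage": "Closed Won"})
# This update document would then be passed to update_one(), so the ETL
# pipeline never clobbers fields owned by the console.
```

In our case the enforcement ultimately lives in App Services schema and rules; a helper like this just makes the same contract visible in application code.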
MongoDB Atlas's developer data platform offers a lot of flexibility and, if these principles are followed, making changes and iterating on a working system is surprisingly easy. To reiterate the last point, our internal solution is easy to modify and extend because of MongoDB's flexible schema and the independence of external components. Users can access the data through available tools and integrations, and developers can update specific parts of the system or introduce new ones without delays, making this solution efficient in terms of both cost and effort.

Conclusion

Through this example of our internal solution, we demonstrated that by leveraging MongoDB Atlas in full force, you can solve seemingly intractable business problems with speed, efficiency, and robustness beyond what regular systems can do. Whether you're optimizing your company's business processes, building business dashboards, or improving automation, the MongoDB Atlas developer data platform can help make the process easier. Learn how MongoDB's consulting engineers can help you with design and architecture decisions and accelerate your development efforts. Contact us to learn more.

September 12, 2022

Introducing MongoDB’s Prometheus Monitoring Integration

Wouldn't it be great if you could connect your data stored in the world's leading document database to the leading open source monitoring solution? Absolutely! And now you can. Prometheus has long been a developer favorite, providing monitoring and alerting functionality for cloud-native environments. Its key features include a multi-dimensional data model with time series support, a flexible query language called PromQL to leverage that dimensionality, and no reliance on distributed storage.

MongoDB meets monitoring like never before

Our integration allows you to view MongoDB hardware and monitoring metrics all within Prometheus. If you used MongoDB and Prometheus before, this means you no longer have to jump back and forth between applications to view your data. Our official Prometheus integration provides complete feature parity with Atlas metrics in a secure and supported environment. With a few clicks in the UI, you can configure the integration and set up custom scraping intervals for your Atlas Admin API endpoints to ensure your view in Prometheus is consistently updated based on your preference. Best of all, this integration is free and available for use with MongoDB Atlas (clusters M10 and higher) and Cloud Manager. We truly believe in the freedom to run anywhere, and that includes viewing your data in your preferred monitoring solutions.

How the Prometheus integration works with MongoDB

The MongoDB Prometheus integration converts the results of a series of MongoDB commands into the Prometheus protocol, allowing Prometheus to scrape the metrics you can view through your MongoDB monitoring charts and more. Once Prometheus successfully collects your metrics, you can explore them in the Prometheus UI or create custom dashboards in Grafana.

Get started with the Prometheus integration

If you already have an Atlas account, get started by following the instructions below: Log into your Atlas account.
Click the vertical three-dot menu next to the project dropdown in the upper left-hand corner of the screen and select "Integrations." The Prometheus monitoring integration is listed there. Select "Configure" on the Prometheus tile, and follow the guided setup flow. If you don't have an Atlas account, create an M10 or higher Atlas cluster and follow the instructions above. Note: If you were one of the customers who requested this integration, we thank you! We appreciate your feedback and suggestions, and look forward to implementing more in the future. Input is always welcome at feedback.mongodb.com .
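For context on the scraping described above: a Prometheus target simply serves metrics in the plain-text exposition format. The sketch below illustrates what "converting command results into the Prometheus protocol" means in general; the metric name and labels are made up for illustration and are not the integration's actual output:

```python
# Rough illustration of the Prometheus text exposition format that any
# scrape target serves. The metric name and labels here are made up;
# the Atlas integration defines its own.

def to_exposition(name: str, help_text: str, samples: dict) -> str:
    """Render {(label pairs): value} samples as a gauge in exposition format."""
    lines = [f"# HELP {name} {help_text}", f"# TYPE {name} gauge"]
    for labels, value in samples.items():
        label_str = ",".join(f'{k}="{v}"' for k, v in labels)
        lines.append(f"{name}{{{label_str}}} {value}")
    return "\n".join(lines)

text = to_exposition(
    "mongodb_connections_current",
    "Current open connections",
    {(("cluster", "demo"), ("shard", "rs0")): 42},
)
print(text)
```

Prometheus scrapes text like this at the configured interval and stores each sample as a time series keyed by metric name and labels.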

March 16, 2022

Speed Up Your Workflow with Query Library in Atlas Charts

We're excited to announce Query Library for Atlas Charts! Getting started with Charts is already fast, easy, and powerful, and with Query Library, we have made it even easier to build charts with queries. When you log in to Charts, there are a few essential steps to visualize your data: you need to add a data source, create a dashboard, and from there create a chart. The Charts UI provides a user-friendly, drag-and-drop interface for building charts. But today, more than a quarter of users also leverage the MongoDB Query Language (MQL) to write custom queries when creating charts. To demonstrate a simple example of what using a query looks like, we'll use the sample movie data we make available to every Charts user through our sample dashboard. Below, we use MQL to filter for only movies in the comedy genre. Rather than dragging the genre field into the chart and adding a filter, with a little bit of MQL knowledge, a query can speed up the chart-building workflow. As you can see above, users can now also easily save a newly created query or load a previously saved one. Query Library builds on Charts' existing support for queries and aggregation pipelines and makes it even more powerful to leverage MQL when building charts. Rather than recreating queries across multiple dashboards, manually sharing them with team members, copying and pasting, or otherwise retrieving queries written in the past, Charts users can either save any new query for later use or load a saved query directly from the chart builder. Here's what it looks like to load a saved query: Best of all, these saved queries are available across your team. Any saved query is available to all members of your project. Check out our documentation for more details on saving, loading, and managing queries in Charts.

Simplifying visualization of your Atlas data

The goal of Atlas Charts is to create a data visualization experience native to MongoDB Atlas customers.
It's a quick, straightforward, and powerful tool to help you make business decisions and perform analytics on your applications. Capabilities like Query Library will help speed up your data visualization workflow to get you quickly in and out of your data and back to what matters for your team. To get started with Query Library today, navigate to the chart builder in any of your dashboards, write a query, and save it for later use! New to Atlas Charts? Get started today by logging into or signing up for MongoDB Atlas, deploying or selecting a cluster, and activating Charts for free.
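For reference, the comedy-genre filter mentioned above is just a standard MQL query document. A minimal sketch, assuming the sample movie data stores genres in an array field named `genres`:

```python
# A Charts query is a plain MQL filter document. The field name "genres"
# is assumed from the sample movie data set.
comedy_filter = {"genres": "Comedy"}  # matches docs whose genres array contains "Comedy"

# Mimic MongoDB's array-containment matching on a couple of sample docs:
movies = [
    {"title": "Modern Times", "genres": ["Comedy", "Drama"]},
    {"title": "Metropolis", "genres": ["Sci-Fi"]},
]
matches = [m["title"] for m in movies
           if comedy_filter["genres"] in m["genres"]]
```

Pasting a filter like `{genres: "Comedy"}` into the chart builder's query bar restricts the chart to matching documents, which is exactly what Query Library lets you save and reload.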

March 2, 2022

Building on Atlas to Accelerate Sales Efficiency at MongoDB

When it comes to customers and prospects, there is no such thing as having too much data. But it can sometimes seem that way. With so much data on prospects, the challenge is in sorting through it to truly understand which have the highest potential. As a result, companies are often left with disparate and disorganized spreadsheets, making Salesforce that much harder to use. Soon, sales leaders find themselves preoccupied with simply trying to access and understand data, rather than implementing strategy. MongoDB's own sales organization found itself in such a position in the summer of 2020 and decided that there had to be a better way to utilize the customer data it received from the vendor Scalestack. "There were hundreds of customized documents that each needed to be updated from multiple different data sources — repeatedly," said Matt Highland, account strategy manager at MongoDB. "The documents would often become outdated very quickly. We needed a current view of the data." First, the sales team conducted an in-house study of the process its sales leaders use to analyze accounts and territories in order to gain an idea of all the data points and workflows that optimize account allocation. Next, sales leaders evaluated vendor solutions but couldn't find any tool or service that met every need. From there, the decision was made to build a tool internally using MongoDB's own engineering resources and technology. With an assist from its Product Design department, the sales leaders translated the requirements and previous tools used into high-fidelity wireframes for the engineering team. The solution, named Argos, is a web application built on top of MongoDB Atlas. Launched in early 2021, it is now being used by more than 600 people in MongoDB's sales teams. Argos helps employees understand the potential spend for each of their accounts, which in turn informs account and territory planning at the regional, team, and rep level.
"The main benefits have been transparency and access to timely data," Highland said. "The relevant data is in one place, and it's up to date. Argos has also helped with speed, and how quickly we can get a new sales rep going, how quickly we can grow our teams, and how quickly we can adjust our strategy when situations change. It also gives us more clarity about how equitable the territories are." MongoDB's sales group is now far more agile in building and adjusting territories because it has become far easier to find key data points about customers and prospects. Additionally, Argos freed up analytical bandwidth, previously spent wrangling data, for more strategic projects. Thanks to its intuitive document data model, MongoDB Atlas made it easy for developers to build Argos — as well as for sales teams to spin up and use the web application. In addition, the flexibility of the document model made it easy for teams to alter the application as its requirements evolved. Because of how simple Atlas is to use, engineers could focus solely on the Argos implementation and not worry about database deployment, availability, or performance. The analytics available in Atlas also made fine-tuning queries and indexes straightforward, which translated into a better user experience through faster application performance. And the flexibility of the document data model made ingesting data quick and easy, which decreased the time from the start of the Argos project to a working prototype. "With Argos, the sales team has a shared single view of their accounts, with the decision-making data points available and up to date," Highland said. Next, Highland said, is to explore expanding Argos to other teams inside MongoDB to continue to streamline its go-to-market efforts. The MongoDB SalesOps team is hiring! Check out our job postings to see if you would be a good fit!

February 9, 2022

Joyce, a Decentralized Approach to Foster Business Agility

Despite all of the tools and methodologies that have arisen in the last few years, many companies, particularly those that have been in the market for decades, struggle when it comes to leveraging their operational data to build new digital products and services. According to research and surveys conducted by McKinsey over the last few years, the success rate of digital transformations is consistently low, with less than 30% succeeding at improving their company's performance. There are a lot of reasons for this, but most of them can be summarized in a sentence: A digital transformation is primarily an organizational and cultural change, and only then a technological shift. The question is not whether digital transformation is a good thing, nor whether moving to the cloud is a good choice. Companies need (badly, in some cases) a digital transformation and yes, the pros of moving to the cloud usually outweigh the cons. So, let's dig deeper and analyze three of the main problems companies face when they go on this journey.

Digital products development

Products by nature are customer-driven, but companies run their businesses on multiple back-end systems that are instead purpose-driven. Unless you run a very small business, different people with different objectives have ownership of these products and systems. Given this context, what happens when a company wants to launch a new digital product at speed? The back-end systems (CRMs, e-commerce, ERP, etc.) hold the data the product needs to bring to the customer. Some systems are SaaS, some are legacy, and perhaps others are custom applications created by the company that disrupted the market with innovative solutions back in the day: the perfect recipe for integration hell. The product manager needs to coordinate and negotiate multiple change requests with the systems' owners whilst trying to convince them to add their needs to the backlog in time to meet the deadline.
And things get even worse: the new product relies on the computational power of the source systems, and if those systems cannot handle the additional traffic, both the product and the core services will be affected.

Third-party integration

"Everybody wants the change, (almost) nobody wants to change." In this ever-growing digital world, partnering with third parties (whether they are clients or service providers) is crucial, but everyone who has tried to do so knows how challenging it is: non-standard interfaces, CSV files over FTP with fancy update rules, security issues… The list of unwanted things can grow indefinitely.

SaaS everywhere

The Software-as-a-Service model is extremely popular, and getting the service you want without worrying about the underlying infrastructure gives freedom and speed of adoption. But what happens when a big company relies on multiple SaaS products to run its business? Sooner or later, it experiences loss of control and higher costs in keeping a consistent view of the big picture. It has to deal with SaaS-internal representations of its own data, multiple views of the same domain concept, and unplanned expenses to export, interpret, and integrate the data from different sources with different formats.

Putting it all together

All the issues above fall into a well-known category of information technology: they are integration problems, and over the years, a lot of vendors have promised a definitive solution. Now, you can consider low-code/no-code platforms with hundreds of ready-made connectors and modern graphical interfaces. Problem solved, right? Well, not really. Low-code integration platforms simplify implementation.
They are really good at it, but in doing so they oversimplify the real challenge: creating and maintaining a consistent set of APIs shaped around the business value over time, and preventing the interfaces from leaking internal complexities to the rest of the company, something that has to be defined and maintained through architectural choices and proper skills (completely hidden behind the selling points of such platforms). There are two different ways to solve integration problems:

- Centralized, using adapters. The logic is pushed to the central orchestration component, with integration managed through a set of adapters. This is the rather old-school SOA approach, the one that the majority of market integration platforms are built on.
- Decentralized, pushing the logic to the edges and giving autonomous teams the freedom to define both the boundaries and the APIs that a domain must expose to deliver business value. This is a more modern approach that has arisen alongside the rise of microservices and, in the analytical world, with the concept of the data mesh.

The former gives speed at the starting point and the illusion of reducing the number of choices and skills needed to manage the problems, but in the long run it inevitably begins to accumulate technical debt. Due to the lack of the necessary degrees of freedom, you lose the ability to evolve the integration points over time, the same thing that caused the transition from SOA to microservices architectures. The latter needs the relevant skills, vision, and ability to execute, but it gives immediate results and allows you to flexibly manage the evolution of the enterprise architecture over time.

Old problems, new solutions

At Sourcesense, over the last 20 years, we have partnered on hundreds of projects to bring agility, speed, and new open-source technology to our customers.
Many times through the years, we have faced the integration challenges above, and yes, we tried to solve them with the technology available at the time: we built some integration solutions on SOA (when it was the best of breed) and interacted with many of the integration platforms on the market. Then, we struggled with the issues and limitations of the integration landscape and listened to our customers' needs and where expectations had fallen short. The rise of agile methodologies, cloud computing, and new techniques, technologies, and architectural styles has given an unprecedented boost to software evolution and the ability to support business needs, so we embraced the new wave and now have growing experience in solving problems with these tools. Along the way, we've seen a recurring pattern when we encountered integration problems: the effectiveness of data hubs as components of enterprise architectures to solve these challenges. So we built one of our own: Joyce.

Data hubs

"Data hub" is a relatively new term and refers to software platforms that collect data from different sources with the main purpose of distribution and sharing. Since this definition is broad and vague, let's add some other key elements that matter and help define the contours of our implementation. Collecting data from different sources can bring three major benefits:

- Computational decoupling from the sources. Pulling (or pushing) the data out of the originating systems means that client applications and services interact with the hub and not directly with the sources, preventing the sources from being slowed down by additional traffic.
- Catalog and discoverability. If data is collected correctly, this leads to the creation of a catalog, allowing people inside the organization to search, discover, and use the data inside the hub.
- Security. The main purpose of the hub is distribution and sharing, which leads immediately to a focus on access control and security hardening.
A single access point simplifies the overall security around the data because it significantly reduces the number of systems the clients have to interact with to gather the data they need.

Joyce, how it works

The cornerstone concept of Joyce is the schema. It allows you to shape the ingested data and how this data will be made available to client services. Using the same declarative approach made popular by Kubernetes, the schemas describe the expected result and the platform performs the actions to make it happen. Schemas are standard JSON Schema files stored and classified in a catalog. Their definitions fall into three categories:

- Input: how to gather and shape the source data. We leverage the Kafka Connect framework to provide ready-made connectors for a wide variety of sources. The ingested data can be filtered, formatted, and enriched with transformation handlers (domain-specific extensions of JSON Schema).
- Model: allows you to create new aggregates from the data stored in the platform. This feature gives the freedom to model the data the way client services need it.
- Export: bulk data export capability. An export can be any query run against the existing data, with an optional temporal filter.

Input and model data is made available to all client services with the proper authorization grants through auto-generated REST and GraphQL APIs. It is also possible to subscribe to a dedicated topic if an event-driven approach is more suitable for the use case.

MongoDB: the key for a flexible model and performance at scale

We rely heavily on MongoDB. Thanks to its flexibility, we can easily map any data structure the user defines to collect the data. Half of the schema definition is basically the definition of a MongoDB schema. (We also auto-generate one schema per collection to guarantee data integrity.) Joyce runs in a Kubernetes cluster, and all its services are inherently stateless to exploit the full potential of horizontal scaling.
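To make the schema-driven idea above concrete, here is a hypothetical sketch of a JSON-Schema-style input definition, expressed as a Python dict for illustration, together with a minimal required-fields check. The property names and the validator are invented for this example and are not Joyce's actual schema format:

```python
# Hypothetical sketch of a declarative, JSON-Schema-style input schema.
# Property names below are invented for illustration; they are not
# Joyce's actual schema format.

input_schema = {
    "$schema": "https://json-schema.org/draft/2020-12/schema",
    "type": "object",
    "properties": {
        "orderId": {"type": "string"},
        "total": {"type": "number"},
        "customerEmail": {"type": "string"},
    },
    "required": ["orderId", "total"],
}

def validate(doc: dict, schema: dict) -> list:
    """Minimal required-fields check in the spirit of schema validation."""
    return [f for f in schema["required"] if f not in doc]

missing = validate({"orderId": "A-1"}, input_schema)
```

In the declarative model, a platform reads a definition like this and takes care of wiring up ingestion, validation, and the generated APIs.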
The architecture is based on the CQRS pattern, which means that writes and reads are completely decoupled and can scale independently to meet the unique needs of the production environment. MongoDB is also the backing database of the API layer, so we can keep the promise of low latency, high throughput, and continuous availability across all the components of the stack. The platform is available as a fully managed PaaS on the three major cloud providers (AWS, Azure, GCP), but if needed, it can be installed on existing infrastructure (in the cloud or on premises).

Final considerations

There are many challenges leaders must face for a successful digital transformation. They need to guide their organizations through a process that involves changes on many levels. The exponential growth of technological solutions in the last few years adds more complexity and confusion. The evolution of organizational models and methodologies points in the direction of shared responsibility, people empowerment, and autonomous teams with light and effective central governance. The same evolution also permeates novel approaches to enterprise architectures, like the data mesh. Unfortunately, there's no silver bullet, just the right choices for the given context. Despite all the marketing and hype around this or that one-stop solution to all of your digital transformation needs, a long-term successful shift needs guidance, competence, and empowerment. We built Joyce with the aim of reducing the burden of repetitive tasks and boilerplate code, to get results faster and catch the low-hanging fruit, without trying to replace the architectural thinking necessary to properly define the current state and evolution of our customers' enterprise architectures. If you're struggling with the problems listed at the beginning of this article, you should give Joyce a try. Learn more about Joyce.

December 21, 2021

Introducing Pay as You Go MongoDB Atlas on AWS Marketplace

We're excited to introduce a new way of paying for MongoDB Atlas. AWS customers can now pay Atlas charges via our new AWS Marketplace listing. Through this listing, individual developers can enjoy a simplified payment experience via their AWS accounts, while enterprises now have another way to procure MongoDB in addition to privately negotiated offers, which were already supported via AWS Marketplace. Previously, customers who wanted to pay via AWS Marketplace had to commit to a certain level of usage upfront. Pay as you go has been available directly in Atlas via credit card, PayPal, and invoice, but not in AWS Marketplace, until today. With this new listing and integration, you can pay via AWS with no upfront commitments. Simply subscribe via AWS Marketplace and start using Atlas. You can get started for free with Atlas's free-forever tier, then scale as needed. You'll be charged in AWS only for the resources you use in Atlas, with no payment minimum. Deploy, scale, and tear down resources in Atlas as needed; you'll pay just for the hours that you're using them. Atlas comes with a Basic Support Plan via in-app chat. If you want to upgrade to another Atlas support plan, you can do so in Atlas. Usage and support costs will be billed together to your AWS account daily. If you're connecting Atlas to applications running in AWS, or integrating with other AWS services, you'll be able to see all your costs in one place in your AWS account. To get started with Atlas via AWS Marketplace, visit our Marketplace listing and subscribe using your account. You'll then be prompted to either sign in to your existing Atlas account or sign up for a new one. Try MongoDB Atlas for free today!

December 15, 2021

MongoDB Atlas for Government Achieves "FedRAMP In-process"

We are pleased to announce that MongoDB Atlas for Government has achieved the FedRAMP designation of "In-process." This status reflects MongoDB's continued progress toward a FedRAMP Authorized modern data platform for the US government. Earlier this year, MongoDB Atlas for Government achieved the designation of FedRAMP Ready. MongoDB is widely used across the federal government, including the Department of Veterans Affairs, the Department of Health & Human Services (HHS), the General Services Administration, and others. HHS is also sponsoring the FedRAMP authorization process for MongoDB.

What is MongoDB Atlas for Government?

MongoDB Atlas for Government is an independent environment of our flagship cloud product, MongoDB Atlas, built for US government needs. It allows federal, state, and local governments, as well as educational institutions, to build and iterate faster using a modern database-as-a-service platform. The service is available in AWS GovCloud (US) and AWS US East/West regions.

MongoDB Atlas for Government highlights:

- Atlas for Government clusters can be created in AWS GovCloud East/West or AWS East/West regions.
- Atlas for Government clusters can span regions within AWS GovCloud or within AWS.
- Atlas core features such as automated backups, AWS PrivateLink, AWS KMS, federated authentication, Atlas Search, and more are fully supported.
- Applications can use client-side field level encryption with AWS KMS in GovCloud or AWS East/West.

Getting started and pricing

MongoDB Atlas for Government is available to government customers and companies that sell to the US government. You can buy Atlas for Government through AWS GovCloud or the AWS Marketplace. Please fill out this form and a representative will get in touch with you. To learn more about Atlas for Government, visit the product page, check out the documentation, or read the FedRAMP FAQ.

September 22, 2021

Highlight What Matters with the MongoDB Charts SDK

We're proud to announce that with the latest release of the MongoDB Charts SDK you can now apply highlights to your charts. These allow you to emphasize and de-emphasize parts of your charts using MongoDB query operators. Build a richer interactive experience for your customers by highlighting with the MongoDB Charts embedding SDK. By default, MongoDB Charts allows for emphasizing parts of your charts by series when you click within a legend. With the new highlight capability in the Charts Embedding SDK, we put you in control of when this highlighting should occur and what it applies to.

Why would you want to apply highlights?

Highlighting opens up the opportunity for new experiences for your users. The two main reasons you may want to highlight are:

- To show user interactions: We use this in the click handler sandbox to make it obvious what the user has clicked on. You could also use it to show the documents affected by a query in a control panel.
- To attract the user's attention: There may be a part of the chart you want your users to focus on, such as the profit for the current quarter or the table rows of unfilled orders.

Getting started

With the release of the Embedding SDK, we've added the setHighlight method to the chart object, which uses MQL queries to decide what gets highlighted. This lets you attract attention to marks in a bar chart, lines in a line chart, or rows in a table. Most of our chart types are already supported, and more will be supported as time goes on. If you want to dive into the deep end, we've added a new highlighting example and updated the click event examples to use the new highlighting API:

- Highlighting sandbox
- Click events sandbox
- Click events with filtering sandbox

The anatomy of a click

In MongoDB Charts, each click produces a wealth of information that you can then use in your applications, as seen below. In particular, we generate an MQL expression called selectionFilter, which represents the mark selected.
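A highlight query of this kind is just an MQL filter document. Below is a minimal sketch, shown as a Python dict; the field names (`category`, `revenue`) are hypothetical, and in an embedded app a document like this would be passed to the chart's setHighlight method:

```python
# Hypothetical example of the kind of MQL filter a click's selectionFilter
# can contain, and of a compound highlight built from it. Field names
# ("category", "revenue") are made up for illustration.
selection_filter = {"category": "Electronics"}
compound_highlight = {
    "$and": [
        selection_filter,
        {"revenue": {"$gte": 10_000}},  # range operators are supported too
    ]
}

def matches(doc: dict, clauses: dict) -> bool:
    """Tiny evaluator covering only the equality and $gte cases above."""
    for clause in clauses["$and"]:
        for field, cond in clause.items():
            if isinstance(cond, dict):
                if not doc[field] >= cond["$gte"]:
                    return False
            elif doc[field] != cond:
                return False
    return True

# In an embedded app: chart.setHighlight(compound_highlight)
hit = matches({"category": "Electronics", "revenue": 12_000}, compound_highlight)
```

The evaluator is only there to show which marks such a filter would select; the SDK performs the actual matching against your chart's backing documents.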
Note that this filter uses the field names in your documents, not the channel names. Previously, you could use this filter to filter your charts with setFilter; now you can use the same filter to apply emphasis to your charts. All this requires is calling setHighlight on your chart with the selectionFilter query that you get from the click event, as seen in this sandbox.

Applying more complex highlights

Since we accept a subset of the MQL language for highlighting, it's possible to specify highlights that target multiple marks, as well as multiple conditions. We can use expressions like $lt and $gte to define ranges we want to highlight, and since the logical operators are supported as well, you can even use $and / $or. All the Comparison, Logical, and Element query operators are supported, so give it a spin!

Conclusion

The ability to highlight data makes your charts more interactive and helps you emphasize the most important information they contain. Check out the Embedding SDK to start highlighting today! New to Charts? You can start now for free by signing up for MongoDB Atlas, deploying a free tier cluster, and activating Charts. Have an idea on how we can make MongoDB Charts better? Feel free to leave it at the MongoDB Feedback Engine.

September 2, 2021

Fine-Tune Relevance in MongoDB Atlas Search with Function Scoring and Synonyms

MongoDB Atlas Search is an embedded full-text search solution in MongoDB Atlas that gives developers a seamless and scalable experience for building fast, relevance-based application features. We announced its general availability last year at MongoDB.live 2020, and over the past year we've introduced many new features, including a visual index builder, search query tester, custom analyzers, and wildcard path queries. This year at MongoDB.live 2021, we're excited to highlight two new capabilities that help developers tune the relevance of search results. See how easy it is to get started with MongoDB Atlas Search in this demo video by Marcus Eagan, Senior Product Manager for Atlas Search.

Building relevance into search results

Understanding the behavior of your users is essential when thinking about search result relevance. People don't always tell you what they want, and they sometimes use words or phrases that don't match your content exactly. To cover these scenarios, you can use full-text search features like function scoring and synonyms.

Influence search rankings with function scoring

There are often multiple factors that influence how search results should be ranked. For example, let's say you have a restaurant finder application. The explicit inputs are things like the user's location and what they're searching for, but what's implied is that they likely want to see highly rated restaurants or ones with more reviews.

What's Cooking: a sample restaurant finder application using MongoDB Atlas Search

Function scoring allows you to influence the order of results returned by manipulating the score of each result. In Atlas Search, that means you can take a numeric field in a document and apply a mathematical expression to it. For example, you might want to increase the score of restaurants that are sponsored or have higher star ratings.
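As a sketch of what that can look like in an aggregation pipeline, here is a $search stage using the function score option. The collection and field names (restaurants, cuisine, rating) are illustrative assumptions, not from the article:

```javascript
// A $search stage whose score multiplies the base relevance score
// by the document's "rating" field, boosting higher-rated restaurants.
const pipeline = [
  {
    $search: {
      text: {
        query: 'noodles',
        path: 'cuisine',
        score: {
          function: {
            multiply: [
              { score: 'relevance' },                     // the default text relevance score
              { path: { value: 'rating', undefined: 1 } } // fall back to 1 if rating is missing
            ]
          }
        }
      }
    }
  },
  { $limit: 10 }
];
// Run with: db.restaurants.aggregate(pipeline)
```

With this expression, two equally relevant matches are separated by their star rating, so a 4.5-star restaurant outranks a 3-star one for the same text match.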
This can easily be accomplished within the same search query by simply adding the function option to the score parameter of your query. Learn more about how to use function scores in our developer tutorial.

Show results for more search queries with synonyms

Synonyms are often used to define terms that are semantically similar to each other to improve search results. For example, someone searching for "noodles" might want to find results for "spaghetti", "chow mein", or "pad thai". Synonyms can also help with typos, especially on mobile and small keyboards. In Atlas Search, you can define collections of synonyms for a search index via the API. Synonyms can be explicit (one-way) or equivalent (two-way). Explicit synonyms are good for defining relationships between terms that are subsets of each other, like the noodle example above: "spaghetti", "chow mein", and "pad thai" are all explicit synonyms for "noodles", but not for each other (you don't want results for "chow mein" in a search for "spaghetti"). Equivalent synonyms are often used for terms that have regional variations or are otherwise interchangeable both ways, like soda and pop, or Kleenex and tissues.

What's next for Atlas Search

Developers are increasingly turning to full-text search to make content more discoverable and relevant for application end users. With Atlas Search, we hope to make building full-text search not only easier, but also more powerful and expressive. Join our community to ask questions and find out what other developers are building with Atlas Search, and let us know what you think we should build next in our feedback forums.

July 13, 2021

Launched Today: MongoDB 5.0, Serverless Atlas, and the Evolution of our Developer Data Platform

Today we welcome you to our annual MongoDB.live developer conference. Through our keynote and conference sessions we'll show you all the improvements, new features, and exciting things we've been working on since last year's conference. What I want to do in this blog post is provide you with a summary of what we are announcing, along with resources to help you learn more.

While it's easy to focus on what we are announcing at this year's event, we actually started out on this journey 12 years ago by releasing the world's most intuitive and productive database technology to develop with: MongoDB. And we believe the applications of the next 10 years will be built on data architectures that continue to optimize for the developer experience, allowing teams like yours to innovate at speed and scale. So how are we building on this vision? Today I am incredibly proud to announce three big things:

- The General Availability (GA) of MongoDB 5.0, the latest generation of our core database. It includes native support for time series workloads, new ways to future-proof your applications, and multi-cloud privacy controls, along with a host of other improvements and new features.
- The preview release of serverless instances on MongoDB Atlas, which makes it even easier for development teams who don't want to think about capacity management at all to get the database resources they need quickly and efficiently.
- Major enhancements to Atlas Data Lake, Atlas Search, and Realm Sync, which allow engineering teams to reduce architectural complexity and get more value out of their data.

MongoDB 5.0 GA

MongoDB 5.0 is the latest generation of the database most wanted by developers. Our new release makes it even easier to support a broader range of workloads, introduces new ways of future-proofing your apps, and further enhances privacy and security.
This major jump in version number from MongoDB 4.4 (our prior GA version) to 5.0 reflects a new era for MongoDB's release cadence: we want to get new features and improvements into your hands faster. Starting with MongoDB 5.0, we will be publishing new Rapid Releases every quarter, which will roll up into Major Releases once a year for those of you who want to maintain the existing annual upgrade cadence. You can learn more about the new MongoDB release cadence from our blog post published last October. Digging into MongoDB 5.0, here is what's new and improved:

Native Time Series

Designed for IoT and financial analytics, our new time series collections, clustered indexing, and window functions make it easier, faster, and lower cost to build and run time series applications, and to enrich your enterprise data with time series measurements. MongoDB automatically optimizes your schema for high storage efficiency, low latency queries, and real-time analytics against temporal data. Running your time series applications on MongoDB eliminates the time and complexity of stitching together multiple technologies yourself. You can manage the entire time series data lifecycle in MongoDB, from ingestion, storage, querying, real-time analysis, and visualization through to online archiving or automatic expiration as data ages. Time series collections can sit right alongside regular collections in your MongoDB database, making it really easy to combine time series data with your enterprise data within a single versatile, flexible database, using a single query API to power almost any class of workload. Our new time-series collections blog post gives you everything you need to get started.

Future-proof with the Versioned API and Live Resharding

Update January 31, 2022: "Versioned API" has been rebranded as "Stable API." Learn more about Stable API here.

Starting with MongoDB 5.0, the Versioned API future-proofs your applications.
You can fearlessly upgrade to the latest MongoDB releases without the risk of introducing backward-breaking changes that require application-side rework. Using the new Versioned API decouples your app lifecycle from the database lifecycle, so you only need to update your application when you want to introduce new functionality, not when you upgrade the database.

Future-proofing doesn't end with the Versioned API. MongoDB 5.0 also introduces Live Resharding, which allows you to easily change the shard key for your collections on demand, with no database downtime, as your workload grows and evolves. The way I like to think about this is that we've extended the flexibility the document model has always given you down to how you distribute your data. So as things change, MongoDB adapts without expensive schema or sharding migrations.

Next-Gen Privacy & Security

MongoDB's unique Client-Side Field Level Encryption now extends some of the strongest data privacy controls available anywhere to multi-cloud databases. And with the ability in 5.0 to reconfigure your audit log filters and rotate x509 certificates without downtime, you can maintain a strict security posture with no interruption to your applications.

Run MongoDB 5.0 Anywhere

MongoDB 5.0 is available today as a fully managed service in Atlas. You can of course also download and run MongoDB 5.0 on your own infrastructure, either with the community edition of MongoDB or with MongoDB Enterprise Advanced. The Enterprise Advanced offering provides sophisticated operational tooling via Ops Manager, advanced security controls, proactive 24x7 support, and more. MongoDB Ops Manager 5.0 enhancements include:

- Support for the automation, monitoring, and backup/restore of MongoDB 5.0 deployments.
- Improved load performance with parallelized client-side restores.
- A quick start experience for deploying MongoDB in Kubernetes with Ops Manager.
- A guided Atlas migration experience that walks users through provisioning a migration host to push data from their existing environment into the fully managed Atlas cloud service.

You can learn more about MongoDB 5.0 from our What's New guide.

New to MongoDB Atlas — Serverless Instances (Preview)

We want developers to be able to build MongoDB applications without having to think about database infrastructure or capacity management. With serverless instances on MongoDB Atlas, now available in Preview, you can automatically get the database resources you need based on your workload demand. It's really simple: the only decision you need to make is the cloud region hosting your data. After that, you'll get an on-demand database endpoint that dynamically adapts to your application traffic. Serverless instances will support the latest MongoDB 5.0 GA release, the Versioned API, and upcoming Rapid Releases, so you never have to worry about backwards compatibility or upgrades. Pay only for the reads and writes your application performs and the storage resources you use (up to 1TB of storage in Preview), and leave capacity management to MongoDB Atlas's best-in-class automation. We invite you to try it out today with a new or existing Atlas account. And the Preview release is just the beginning: we will be working with partners such as Vercel and Netlify to deliver an integrated serverless development experience in the coming months. In the longer term, we will continue to evolve our cloud-native backend architecture to abstract and automate even more infrastructure decisions and optimizations to deliver the best database experience on the market.

The New MongoDB Shell GA

The new MongoDB Shell has been redesigned from the ground up to provide a modern command-line experience with enhanced usability features and a powerful scripting environment.
It makes it even easier for users to interact with and manage their MongoDB data platform, from running simple queries to scripting admin operations. A great user experience, even on a command-line tool, should always be a major consideration. With the new MongoDB Shell we have introduced syntax highlighting, intelligent auto-complete, contextual help, and useful error messages, creating an intuitive, interactive experience for MongoDB users. Check out this blog post for more information.

MongoDB Charts and Atlas Data Lake: Better Together

MongoDB Charts' intuitive UI and ability to quickly create and share charts and graphs of JSON data is now integrated with Atlas Data Lake. You can now easily visualize JSON data stored in AWS S3 without any data movement, duplication, or transformation. Furthermore, you can run Atlas Data Lake's federated query to blend data across multiple Atlas databases and AWS S3, and visualize the results with Charts. By adding Atlas Data Lake as a data source in Charts, you can discover deeper, more meaningful insights in real time. Check out this blog post for more information.

Atlas Search — More Relevance Features

It's incredibly important for modern applications to deliver fast and relevant search functionality: it powers discoverability and personalization of content, which in turn drives user engagement and retention. Atlas Search, which delivers powerful full-text search functionality without the need for a separate search engine, has several new capabilities for building rich end-user experiences. We've recently added support for function scoring, which allows teams to apply mathematical formulas to fields within documents to influence their relevance, such as popularity or distance; for example, closer restaurants with more or better reviews will show up higher in a list of results. In addition, you can now define collections of synonyms for a particular search index.
By associating semantically equivalent terms with each other, you can respond to a wider range of user-initiated queries in your applications.

Realm

Realm gives you simple, powerful local persistence on mobile phones, tablets, and IoT devices like the Raspberry Pi. The Realm SDKs provide a set of APIs that let developers store and interact with native objects directly, reducing the amount of code required, as there is no need for ORMs or learning cryptic database syntax. In addition, we made MongoDB Realm Sync generally available earlier this year, making it easy to synchronize data between local storage on your devices and MongoDB Atlas on the backend. There's no need to worry about networking code or dealing with conflict resolution, as we handle all of that for you.

Today, we're excited to announce support for Unity. You can now use Realm to store your game data, like scores and player state, and sync it automatically across devices. Realm's support for Unity is now Generally Available and ready for production workloads. We're also investing in support for more cross-platform frameworks: the Kotlin Multiplatform and Flutter/Dart SDKs are now both available in Alpha. And finally, the team is working towards Realm Flexible Sync, a new way to synchronize data with more granular control. Flexible Sync will allow you to:

- Build applications that respond dynamically to users' needs.
- Let your end users decide what data they need, and when.
- Use more precise permissions that can adapt over time.

Check out this dedicated blog on our upcoming plans for Flexible Sync to learn more.

Getting Started

With everything we announced today, you can imagine it was a packed keynote! And there is so much more that we didn't cover. You can get all of the highlights from our new announcements page, where you will also find all the resources you need to get started.

July 13, 2021

Visualize Blended Atlas and AWS S3 Data From Atlas Data Lake with MongoDB Charts

As of June 2022, the functionality previously known as Atlas Data Lake is now named Atlas Data Federation. Atlas Data Federation's functionality is unchanged, and you can learn more about it here. Atlas Data Lake will remain in the Atlas platform, with newly introduced functionality that you can learn about here.

We're excited to announce that MongoDB Charts supports Atlas Data Lake as a data source! You can now use Charts to easily visualize data stored across different Atlas databases and AWS S3 buckets. Thanks to the aggregating power of Atlas Data Lake's federated query, creating charts and graphs from blended application and cloud object data is simpler than ever before. On the surface, this powerful integration is as simple as adding your Atlas Data Lake as a data source within Charts. However, it unlocks a deeper level of analysis while eliminating the need to create an Extract-Transform-Load (ETL) process across your Atlas and S3 data. The integration provides the ability to visualize data from the following combinations of sources without writing any code:

- Data from many Atlas databases or clusters, including multi-cloud clusters
- Cloud storage data from AWS S3
- Blended Atlas and cloud storage (AWS S3) data

Scenario: Finding insights from aggregated customer profile and contract data

Let's add a real-world scenario of how this can enhance the analytics you derive from your data. Along the way, we will walk through the steps of setting up your Atlas Data Lake, adding it as a data source to Charts, and getting the most out of your data with Charts' powerful visualization capabilities. For context, imagine we're analysts at a telecom company. First, we have contract data stored in MongoDB Atlas in a different cluster and database for each country we operate in: the United States and Canada. Second, we have offloaded data from our Customer Relationship Management (CRM) tool as a parquet file into an AWS S3 bucket.
All three datasets share a common "customerID" field.

Configure Atlas Data Lake

Because both "contracts" collections (or datasets) in MongoDB Atlas share the same fields, I simply mapped both into a single collection within the data lake. I mapped the customer profiles dataset into its own collection, since it only shares the "customerID" field. However, now that it's in the same data lake, I can easily join it to my contract data with a $lookup in my Charts aggregation pipeline or with a Lookup Field in the chart builder. (A $lookup in the MongoDB Query API is equivalent to a join in SQL.)

Configure Charts data source

In this scenario, I want to find insights from all contracts, both US and Canada. Once I have created a single Atlas Data Lake collection (DL_contracts.allcontracts) from the two separate databases, I need to add it as a data source in Charts. Simply click on "add data source" within Charts, add your data lake, and then choose the collections you want to use in the next step. For completeness, I also added the two Atlas collections (US and Canada contracts) as data sources in Charts by following the same steps.

Visualize data across multiple Atlas databases

With Atlas Data Lake's federated query capability, which effectively performs a union of data, I am able to build a column chart that shows the amount of all US and CA contracts in a single chart without writing any code. As you can see below, the chart shows both US and CA columns when connected to the data lake collection. When the data source is switched directly to either Atlas database, it only shows data for that respective database, or country in this example.

Visualize blended data from Atlas and an AWS S3 bucket

Lastly, let's take our insights to the next level by visualizing data from multiple Atlas databases and a parquet file stored in an AWS S3 bucket.
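The $lookup join mentioned above can be sketched as a small pipeline on the Charts data source. Only the "customerID" field and the DL_contracts.allcontracts collection come from the scenario; the "customer_profiles" collection name and the "profile" output field are assumptions for illustration:

```javascript
// Join each contract to its customer profile on the shared customerID field.
const pipeline = [
  {
    $lookup: {
      from: 'customer_profiles',  // data lake collection mapped from the S3 parquet file (name assumed)
      localField: 'customerID',
      foreignField: 'customerID',
      as: 'profile'
    }
  },
  // Flatten the joined array so each contract carries a single profile sub-document.
  { $unwind: { path: '$profile', preserveNullAndEmptyArrays: true } }
];
// In Charts, this pipeline can go in the data source's aggregation pipeline field;
// equivalently, run db.allcontracts.aggregate(pipeline) against the data lake.
```

The preserveNullAndEmptyArrays option keeps contracts that have no matching CRM profile, so the join doesn't silently drop rows from the chart.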
Adding the customer profile data that I offloaded from my CRM tool into S3 enables me to find more robust insights. (I could also visualize the data from the parquet file alone by connecting to that data lake collection.) Since the contract data and customer profile data are in different collections within my Atlas Data Lake, I created a $lookup in the aggregation pipeline of the Charts data source. I then created a table chart from three different data sources, with conditional formatting to quickly identify high-value customers. The columns with blue boxes include contract data from both Atlas clusters, while the columns with orange boxes include customer profile data from the parquet file in the AWS S3 bucket.

Note that I could also aggregate the data in Atlas Data Lake and use $out to create a new collection of the data, and then connect Charts to the new collection as a data source. For the purposes of this blog, I wanted to highlight Charts-specific aggregation capabilities.

We hope you're excited about the ability to easily visualize data from multiple sources, from Atlas databases to AWS S3 buckets, in one place! Remember, if you haven't used Charts before, you can get started for free by signing up for MongoDB Cloud, deploying an Atlas cluster, and activating Charts. Try MongoDB Atlas for free today!

July 9, 2021