MongoDB Atlas


Global, Multi-Cloud Security at Scale with MongoDB Atlas

In October 2020, we announced the general availability of multi-cloud clusters on MongoDB Atlas. Since then, we’ve made several key improvements that allow customers to take advantage of the full breadth of MongoDB Atlas’ best-in-class data security and privacy capabilities across clouds on a global scale.

Cross-Cloud Security with MongoDB Atlas

A common question we get from customers about multi-cloud clusters is how security works. Each cloud provider offers protocols and controls to ensure that data within its ecosystem is securely stored and accessed. But what happens when your data is distributed across different clouds? Don’t worry: we have you covered. MongoDB Atlas is designed to ensure that our built-in best practices are enforced regardless of which cloud providers you choose, from dedicated network peering connections to customer-managed keys for data encryption at rest and client-side field-level encryption.

Private Networking to Multiple Clouds

You can now create multiple network peering connections and/or private endpoints for a multi-cloud cluster to access data securely within each cloud provider. For example, say your operational workload runs on Azure, but you want to set up analytics nodes in Google Cloud and AWS so you can compare the performance of Datalab and SageMaker for machine learning. You can set up network peering connections for all three cloud providers in Atlas to allow each of your cloud environments to access cluster data in their respective nodes using private networks. For more details, take a look at our documentation on network peering architecture.

Integrate with Cloud KMS for Additional Control Over Encryption

Any data stored in Atlas can be encrypted with an external key from AWS KMS, Google Cloud KMS, or Azure Key Vault for an extra layer of encryption on top of MongoDB’s built-in encrypted storage engine. You can also configure client-side field level encryption (client-side FLE) with any of the three cloud key management services to further protect sensitive data by encrypting document fields before they even leave your application (support for Azure Key Vault and Google Cloud KMS is available in beta with select drivers). This means data remains encrypted even while it is in memory and in use within your live database. Even though the data is encrypted, it remains queryable by the application but is inaccessible to any administrators running the database or underlying cloud infrastructure for you. Beyond security, client-side FLE is also a great way to comply with right-to-erasure requests that are part of modern privacy regulations such as the GDPR or the CCPA: simply destroy the user’s encryption key, and their PII becomes unreadable and irrecoverable in memory, on disk, in logs, and in backups. For multi-cloud clusters, this means you can take advantage of multiple layers of encryption that use keys from different clouds. For example, you can have PII data encrypted client-side with AWS KMS keys, then stored in both an AWS and a Google Cloud region on Atlas and further encrypted at rest with a key managed via Azure Key Vault.
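To make that flow concrete, here is a minimal sketch of creating a data key and encrypting a field client-side with the Node.js driver and the mongodb-client-encryption package. The connection string, key ARN, and field names are placeholders, not values from this article:

```javascript
// Sketch: client-side field level encryption with an AWS KMS master key.
// All identifiers below (URI, ARN, namespaces) are illustrative placeholders.
const { MongoClient } = require("mongodb");
const { ClientEncryption } = require("mongodb-client-encryption");

async function run() {
  const client = new MongoClient("mongodb+srv://user:pass@cluster0.example.mongodb.net");
  await client.connect();

  const encryption = new ClientEncryption(client, {
    keyVaultNamespace: "encryption.__keyVault", // collection that stores wrapped data keys
    kmsProviders: {
      aws: {
        accessKeyId: process.env.AWS_ACCESS_KEY_ID,
        secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
      },
    },
  });

  // Create a data encryption key wrapped by the AWS KMS master key.
  const keyId = await encryption.createDataKey("aws", {
    masterKey: {
      region: "us-east-1",
      key: "arn:aws:kms:us-east-1:111122223333:key/placeholder", // hypothetical ARN
    },
  });

  // Encrypt a PII field in the application, before it ever reaches the server.
  const encryptedSsn = await encryption.encrypt("123-45-6789", {
    keyId,
    algorithm: "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic", // deterministic keeps the field queryable
  });
  await client.db("people").collection("customers").insertOne({ name: "Jane", ssn: encryptedSsn });

  await client.close();
}

run().catch(console.error);
```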
Global, Multi-Cloud Clusters on MongoDB Atlas

For workloads that reach users across continents, our customers leverage Global Clusters. This gives you the unique ability to shard clusters across geographic zones and pin documents to a specific zone. Now that Atlas is multi-cloud, you can choose from the nearly 80 available regions across all three providers, expanding the potential reach of your client applications while making it easy to comply with data residency regulations.

Consider a sample scenario where you’re based in the US and want to expand to reach audiences in Europe. To comply with the GDPR, you must store EU customer data within that region. With Global Clusters, you can configure a multi-cloud cluster with a US zone and an EU zone. In the US, you choose to run on AWS, but in Europe, you decide to go with Azure because it has more available regions. All of this can be configured in minutes using the Atlas UI: simply define your zones and ensure that your documents contain a location field that dictates which zone they should be stored in. For more details, follow our tutorial on how to configure a multi-cloud Global Cluster on Atlas.

Future-Proof Your Applications with Multi-Cloud Clusters

There are many reasons why companies are considering a multi-cloud strategy, from cross-cloud resiliency to geographical reach to being able to leverage the latest tools and services on the market. With MongoDB Atlas, you get best-in-class data security, operations, and intuitive admin controls, regardless of how many cloud providers you want to use. To learn more about how to deploy a multi-cloud cluster on MongoDB Atlas, check out our step-by-step tutorial, which includes best practices for node distribution, instructions for how to test failing over to another cloud, and more.

Safe Harbor

The development, release, and timing of any features or functionality described for our products remains at our sole discretion. This information is merely intended to outline our general product direction and it should not be relied on in making a purchasing decision nor is this a commitment, promise or legal obligation to deliver any material, code, or functionality.

April 7, 2021

Dive Deeper into Chart Data with New Drill-Down Capability

With the latest release of MongoDB Charts, you’re now able to dive deeper into the data that’s aggregated in your visualizations. At a high level, we generally create charts, graphs, and visualizations of our data to answer questions about our business or products. Oftentimes, we need to “double click” on those visualizations to get insight into each individual data point that makes up the line, bar, column, etc.

How the drill-down functionality works:

Step 1: Right-click on the data point you are interested in drilling down into.
Step 2: Click “show data for this item”.
Step 3: View the data in tabular or document format.

Each view can be better for different circumstances. For data without too many fields or nested arrays, it might be quicker and more easily viewed in a table. On the other hand, the JSON view allows you to explore the structure of documents and click into arrays.

Scenarios where more detailed information can help:

Data visualization use cases are relatively broad, but oftentimes they fall into three main categories: monitoring data, finding insights, and embedding analytics into applications. I’ll be focusing on the first two of these three, as there are many different ways you could potentially build drill-down into data via embedded charts. (Read more about our click events and embedded analytics.)

For data or performance monitoring purposes, we’re not speaking so much about the performance of your actual database and its underlying infrastructure, but the performance of the application or system built on top of the database. Imagine I have an application or website that takes reviews. If I build a chart like the one below, where I want to easily see when an interaction hits a threshold that I want to dive deeper into, I now have the ability to quickly see the document that created that data point. This chart shows app ratings given after a user session in an app. For this example, we want to dive into any rating that was below a 3 (out of 5). This scatter plot shows I have two such ratings that cross that threshold. With the drill-down capability, I can easily see all the details captured in that user session.

For finding new insights, let’s imagine I’m tracking how many transactions happen on my ecommerce site over time. In the column chart below, you can see I have purchases by month for the last year and a half (note: there’s a gap because this example is for a seasonal business!). Just by glancing at the chart, I can quickly see purchases have increased over time, and my in-app purchases have increased my overall sales. However, I want to see more about the documents that were aggregated to create those columns, so I can quickly see details about the transaction amount and location without needing to create another chart or dashboard filter.

In both examples, I was able to answer a deeper-level question that the original chart couldn’t answer on its own. We hope this new feature helps you and your stakeholders get more out of MongoDB Charts, regardless of whether you’re new to it or have been visualizing your Atlas data with it for months, if not years! If you haven’t tried Charts yet, you can get started for free by signing up for MongoDB Atlas and deploying a free tier cluster.

April 6, 2021

Atlas as a Service

Many of our customers provide MongoDB as a service to their development teams, where developers can request a MongoDB database instance and receive a connection string and credentials in minutes. As those customers move to MongoDB Atlas, they are similarly interested in providing the same level of timely service to their developers. Atlas has a very powerful control plane for provisioning clusters. In large organizations that have thousands of developers, however, it is not always practical to give so many people direct access to that interface. The goal of this article is to show how the Atlas APIs can be used to provide MongoDB as a service when MongoDB is managed by Atlas.

Specifically, we’ll demonstrate how an interface could be created that offers developers a set of choices for a MongoDB database instance. For simplicity, this represents how to provide developers with a list of memory and storage options to configure their cluster. Other options like cloud provider and region are abstracted away. We also demonstrate how to add labels to the Atlas clusters, which is a feature that the Atlas UI doesn’t support. For example, we’ve added a label for cluster description.

Architecture

Although the Atlas APIs could be called directly from the client interface, we elected to use a three-tier architecture, the benefits being:

- We can control the extent of functionality to what is needed.
- We can simplify the APIs exposed to the front-end developers.
- We can more granularly secure the API endpoints.
- We could take advantage of other backend features such as Triggers, Twilio integration, etc.

Naturally, we selected Realm to host the middle tier.

Implementation

Backend Atlas API

The Atlas APIs are wrapped in a set of Realm Functions. For the most part, they all call the Atlas API as follows (this is getOneCluster):

```javascript
/*
 * Gets information about the requested cluster. If clusterName is empty, all clusters will be fetched.
 * See https://docs.atlas.mongodb.com/reference/api/clusters-get-one
 */
exports = async function(username, password, projectID, clusterName) {
  const arg = {
    scheme: 'https',
    host: 'cloud.mongodb.com',
    path: 'api/atlas/v1.0/groups/' + projectID + '/clusters/' + clusterName,
    username: username,
    password: password,
    headers: {'Content-Type': ['application/json'], 'Accept-Encoding': ['bzip, deflate']},
    digestAuth: true
  };

  // The response body is a BSON.Binary object. Parse it and return.
  response = await context.http.get(arg);
  return EJSON.parse(response.body.text());
};
```

You can see each function’s source on GitHub.

MiniAtlas API

The next step was to expose the functions as endpoints that a frontend could use. Alternatively, we could have called the functions using the Realm Web SDK, but we elected to stick with the more familiar REST protocol for our frontend developers. Using Realm Third-Party Services, we developed the following six endpoints:

| API | Method | Endpoint |
| --- | --- | --- |
| Get Clusters | GET | /getClusters |
| Create a Cluster | POST | /createCluster |
| Get Cluster State | GET | /getClusterState?clusterName:cn |
| Modify a Cluster | PATCH | /modifyCluster |
| Pause or Resume Cluster | POST | /pauseCluster |
| Delete a Cluster | DELETE | /deleteCluster?clusterName:cn |
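As a quick illustration of how a client consumes these endpoints, the sketch below calls the Get Clusters endpoint with the Authorization header described in the security section further down. The base URL is a made-up placeholder; Realm generates the real Webhook URL when each webhook is saved:

```javascript
// Hypothetical example of calling the getClusters endpoint.
// baseURL stands in for the Webhook URL that Realm generates.
const baseURL = "https://webhooks.mongodb-realm.com/api/client/v2.0/app/<app-id>/service/MiniAtlasAPI/incoming_webhook";

async function getClusters(userId) {
  const response = await fetch(`${baseURL}/getClusters`, {
    method: "GET",
    headers: { Authorization: userId } // user id from the authentication provider
  });
  if (!response.ok) {
    throw new Error(`getClusters failed with status ${response.status}`);
  }
  return response.json();
}
```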
Here’s the source for getClusters. Note it pulls the username and password from Values & Secrets:

```javascript
/*
 * GET getClusters
 *
 * Query Parameters: None
 *
 * Response - Currently all values documented at https://docs.atlas.mongodb.com/reference/api/clusters-get-all/
 */
exports = async function(payload, response) {
  var results = [];

  const username = context.values.get("username");
  const password = context.values.get("apiKey");
  projectID = context.values.get("projectID");

  // Sending an empty clusterName will return all clusters.
  var clusterName = '';

  response = await context.functions.execute("getOneCluster", username, password, projectID, clusterName);
  results = response.results;
  return results;
};
```

You can see each webhook’s source on GitHub. When the webhook is saved, a Webhook URL is generated, which is the endpoint for the API.

API Endpoint Security

Only authenticated users can execute the API endpoints. The caller must include an Authorization header containing a valid user id, which the endpoint passes through this script function:

```javascript
exports = function(payload) {
  const headers = context.request.requestHeaders;
  const { Authorization } = headers;
  const user_id = Authorization.toString().replace(/^Bearer/, '');
  return user_id;
};
```

MongoDB Realm includes several built-in authentication providers, including anonymous users, email/password combinations, API keys, and OAuth 2.0 through Facebook, Google, and Apple ID. For this exercise, we elected to use Google OAuth, primarily because it’s already integrated with our SSO provider here at MongoDB. The choice of provider isn’t important. Whatever provider or providers are enabled will generate an associated user id that can be used to authenticate access to the APIs.

Frontend

The frontend is implemented in JQuery and hosted on Realm.

Authentication

The client uses the MongoDB Stitch Browser SDK to prompt the user to log into Google (if not already logged in) and sets the user’s Google credentials in the StitchAppClient:

```javascript
let credential = new stitch.GoogleRedirectCredential();
client.auth.loginWithRedirect(credential);
```

Then, the user id that must be sent in the API call to the backend can be retrieved from the StitchAppClient as follows:

```javascript
let userId = client.auth.authInfo.userId;
```

And set in the header when calling the API. Here’s an example calling the createCluster API:

```javascript
export const createCluster = (uid, data) => {
  let url = `${baseURL}/createCluster`;

  const params = {
    method: "post",
    headers: {
      "Content-Type": "application/json;charset=utf-8",
      ...(uid && { Authorization: uid })
    },
    ...(data && { body: JSON.stringify(data) })
  };

  return fetch(url, params)
    .then(handleErrors)
    .then(response => response.json())
    .catch(error => console.log(error));
};
```

You can see all the API calls in webhooks.js.

Tips

We had great success using Postman Team Workspaces to share and validate the backend APIs.

Conclusions

This prototype was created to demonstrate what’s possible, which by now you hopefully realize is anything! The guts of the solution are here; how you wish to extend it is up to you. Learn more about the MongoDB Atlas APIs.

March 2, 2021

Flowhub Relies on MongoDB to Meet Changing Regulations and Scale Its Business

The legal landscape for cannabis in the United States is in constant flux. Each year, new states and other jurisdictions legalize or decriminalize it, and the regulations governing how it can be sold and used change even more frequently. For companies in the industry, this affects not only how they do business, but also how they manage their data. Responding to regulatory changes requires speedy updates to software and a database that makes it easy to change the structure of your data as needed – and that’s not to mention the scaling needs of an industry that’s growing incredibly rapidly.

Flowhub makes software for the cannabis industry, and the company is leaping these hurdles every day. I recently spoke with Brad Beeler, Lead Architect at Flowhub, about the company, the challenges of working in an industry with complex regulations, and why Flowhub chose MongoDB to power its business. We also discussed how consulting from MongoDB not only improved performance but also saved the company money, generating a return on investment in less than a month.

Eric Holzhauer: First, can you tell us a bit about your company?

Brad Beeler: Flowhub provides essential technology for cannabis dispensaries. Founded in 2015, Flowhub pioneered the first Metrc API integration to help dispensaries stay compliant. Today, over 1,000 dispensaries trust Flowhub's point of sale, inventory management, business intelligence, and mobile solutions to process $3B+ cannabis sales annually.

Flowhub in use in a dispensary

EH: How is Flowhub using MongoDB?

BB: Essentially all of our applications – point of sale, inventory management, and more – are built on MongoDB, and we’ve been using MongoDB Atlas from the beginning. When I joined two and a half years ago, our main production cluster was on an M40 cluster tier, and we’ve now scaled up to an M80. The business has expanded a lot, both by onboarding new clients with more locations and by increasing sales in our client base. We’re now at $3 billion of customer transactions a year. As we went through that growth, we started by making optimizations at the database level before throwing more resources at it, and then went on to scale the cluster. One great thing about Atlas is that it gave us the metrics we needed to understand our growth. After we’d made some optimizations, we could look at CPU and memory utilization, check that there wasn’t a way to further improve query execution with indexes, and then know it was time to scale. It’s really important for usability that we keep latency low and that the application UI is responsive, and scaling in Atlas helps us ensure that performance.

We also deploy an extra analytics node in Atlas, which is where we run queries for reporting. Most of our application data access is relatively straightforward CRUD, but we run aggregation pipelines to create reports: day-over-day sales, running financials, and so forth. Those reports are extra intensive at month-end or year-end, when our customers are looking back at the prior period to understand their business trends. It’s very useful to be able to isolate that workload from our core application queries. I’ll also say that MongoDB Compass has been an amazing tool for creating aggregation pipelines.
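As an illustration of the kind of reporting pipeline described above (the collection and field names here are invented for this sketch, not Flowhub’s actual schema), a day-over-day sales report might look like this:

```javascript
// Illustrative day-over-day sales report, the sort of aggregation that
// benefits from being routed to a dedicated analytics node.
db.transactions.aggregate([
  // Restrict to the reporting period.
  { $match: { createdAt: { $gte: ISODate("2020-01-01"), $lt: ISODate("2021-01-01") } } },
  // Bucket transactions by calendar day and total them.
  { $group: {
      _id: { $dateToString: { format: "%Y-%m-%d", date: "$createdAt" } },
      totalSales: { $sum: "$amount" },
      transactionCount: { $sum: 1 }
  } },
  { $sort: { _id: 1 } }
]);
```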
EH: Can you tell us some more about what makes your industry unique, and why MongoDB is a good fit?

BB: The regulatory landscape is a major factor. In the U.S., there’s a patchwork of regulation, and it continues to evolve – you may have seen that several new states legalized cannabis in the 2020 election cycle. States are still exploring how they want to regulate this industry, and as they discover what works and what doesn’t, they change the regulations fairly frequently. We have to adapt to those changing variables, and MongoDB facilitates that. We can change the application layer to account for new regulations, and there’s minimal maintenance to change the database layer to match. That makes our development cycles faster and speeds up our time to market. MongoDB’s flexibility is great for moving quickly to meet new data requirements.

As a few concrete examples:

The state of Oregon wanted to make sure that consumers knew exactly how much cannabis they were purchasing, regardless of format. Since some dispensaries sell prerolled products, they need to record the weight of the paper associated with those products. So now that’s a new data point we have to collect. We updated the application UI to add a form field where the dispensary can input the paper weight, and that data flows right into the database.

Dispensaries are also issuing not only purchase receipts but exit labels, like what you’d find on a prescription from a pharmacy. And depending on the state, that exit label might include potency level, percentage of cannabinoids, what batch and package the cannabis came from, and so on. All of that is data we need to be storing, and potentially calculating or reformatting according to specific state requirements.

Everything in our industry is tracked from seed to sale. Plants get barcodes very early on, and that identity is tracked all the way through different growth cycles and into packaging. So if there’s a recall, for example, it’s possible to identify all of the products from a specific plant, or plants from a certain origin. Tracking that data and integrating with systems up the supply chain is critical for us. That data is all tracked in a regulatory system. We integrate with Metrc, which is the largest cannabis tracking system in the country. So our systems feed back into Metrc, and we automate the process of reporting all the required information. That’s much easier than a manual alternative – for example, uploading spreadsheets to Metrc, which dispensaries would otherwise need to do. We also pull information down from Metrc. When a store receives a shipment, it will import the package records into our system, and we’ll store them as inventory and get the relevant information from the Metrc API.

Flowhub user interface

EH: What impact has MongoDB had on your business?

BB: MongoDB definitely has improved our time to market in a couple of ways. I mentioned the differences in regulation and data requirements across states; MongoDB’s flexibility makes it easier to launch into a new state and collect the right data or make required calculations based on that data. We also improve time to market because of developer productivity. Since we’re a JavaScript shop, JSON is second nature to our developers, and MongoDB’s document structure is very easy to understand and work with.

EH: What version of MongoDB are you using?

BB: We started out on 3.4 and have since upgraded to MongoDB 4.0. We’re preparing to upgrade to 4.2 to take advantage of some of the additional features in the database and in MongoDB Cloud. One thing we’re excited about is Atlas Search: by running a true search engine close to our data, we think we can get some pretty big performance improvements.
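For a sense of what that looks like in practice, here is a minimal Atlas Search query sketch; the index, collection, and fields are hypothetical, not Flowhub’s:

```javascript
// Hypothetical product search using the $search aggregation stage,
// assuming an Atlas Search index named "default" on the collection.
db.products.aggregate([
  { $search: {
      index: "default",
      text: {
        query: "vaporizer",
        path: ["name", "description"],
        fuzzy: { maxEdits: 1 } // tolerate a single-character typo
      }
  } },
  { $limit: 10 },
  { $project: { name: 1, score: { $meta: "searchScore" } } }
]);
```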
Most of our infrastructure is built on Node.js, and we’re using the Node.js driver. A great thing about MongoDB’s replication and the driver is that if there’s a failover and a new primary is elected, the driver keeps chugging, staying connected to the replica sets and retrying reads and writes if needed. That’s prevented any downtime or connectivity issues for us.

EH: How are you securing MongoDB?

BB: Security is very important to us, and we rely on Atlas’s security controls to protect data. We’ve set up access controls so that our developers can work easily in the development environment, but only a few people can access data in the production environment. IP access lists let us control who and what can access the database, including a few third-party applications that are integrated into Flowhub. We’re looking into implementing VPC Peering for our application connections, which currently go through the IP access list. We’re also interested in Client-Side Field-Level Encryption. We already limit the amount of personally identifiable information (PII) we collect and store, and we’re very careful about securing the PII we do need to store. Client-Side Field-Level Encryption would let us encrypt that at the client level, before it ever reaches the database.

EH: You’re running on Atlas, so which underlying cloud provider do you use?

BB: We’re running everything on Google Cloud. We use Atlas on Google Cloud infrastructure, and our app servers are running in Google Kubernetes Engine. We also use several other Google services. We rely pretty heavily on Google Cloud Pub/Sub as a messaging backbone for an event-driven architecture. Our core applications initially were built with a fairly monolithic architecture, because it was the easiest approach to get going quickly. As we’ve grown, we’re moving more toward microservices. We’ve connected Pub/Sub to MongoDB Atlas, and we’re turning data operations into published events. Microservices can then subscribe to event topics and use the events to take action and maintain or audit local data stores. Our data science team uses Google BigQuery as the backend to most of our own analytics tooling. For most uses, we migrate data from MongoDB Atlas to BigQuery via in-house ETL processes, but for more real-time needs we’re using Google Dataflow to connect to MongoDB’s oplog and stream data into BigQuery.

EH: As you grow your business and scale your MongoDB usage, what’s been the most important resource for you?

BB: MongoDB’s Flex Consulting has been great for optimizing performance and scaling efficiently. Flowhub has been around for a number of years, and as we’ve grown, our database has grown and evolved. Some of the schema, query, and index decisions that we had made years ago weren’t optimized for what we’re doing now, but we hadn’t revisited them comprehensively. Especially when we were scaling our cluster, we knew that we could make more improvements. Our MongoDB Consulting Engineer investigated our data structure and how we were accessing data, performance, what indexes we had, and so on. We even got into the internals of the WiredTiger storage engine and what optimizations we could make there. We learned a ton about MongoDB, and the Consulting Engineer also introduced us to some tools so we could diagnose performance issues ourselves.
Based on our Consulting Engineer’s recommendations, we changed the structure of how we stored some data and reworked certain queries to improve performance. We also cleaned up a bunch of unnecessary indexes. We had created a number of indexes over the years for different query patterns, and our Consulting Engineer was able to identify which ones could be removed wholesale, and which indexes could be replaced with a single new one to cover different query patterns. We made some optimizations in Atlas as well, moving to a Low-CPU instance based on the shape of our workload and changing to a more efficient backup option. With the optimizations recommended in our consulting engagement, we were able to reduce our spend by more than 35%. MongoDB consulting paid for itself in less than a month, which was incredible. I had to develop a business case internally for investing in consulting, and this level of savings made it an easy sell.

The knowledge we picked up during our consulting engagement was invaluable. That’s something we’ll carry forward and that will continue to provide benefits. We’re much better at our indexing strategy, for example. Say you’re introducing a new type of query and thinking about adding an index: now we know what questions to ask. How often is this going to be run? Could you change the query to use an existing index, or change an existing index to cover this query? If we decide we need a new index, should we deprecate an old one?

“With the optimizations recommended in our consulting engagement, we were able to reduce our spend by more than 35%. MongoDB consulting paid for itself in less than a month, which was incredible.”
Brad Beeler, Lead Architect, Flowhub

EH: What advice would you give to someone who’s considering MongoDB for their next project?

BB: Take the time upfront to understand your data and how it’s going to be used. That’ll give you a good head start for structuring the data in MongoDB, designing queries, and implementing indexes. Obviously, Flex Consulting was very helpful for us on this front, so give that a look.

January 19, 2021

MongoDB Atlas Online Archive for Data Tiering is now GA

We’re thrilled to announce that MongoDB Atlas Online Archive is now Generally Available. Online Archive allows you to seamlessly tier your data across Atlas clusters and fully managed cloud object stores, while retaining the ability to query it through a single endpoint.

- Reduce storage costs: set the perfect price-to-performance ratio on your data.
- Automate data tiering: eliminate the need to manually migrate or delete valuable data.
- Queryable archives: easily federate queries across live and archival data using a unified connection string.

With Online Archive, you can bring new use cases to MongoDB Atlas that were previously cost-prohibitive, such as high-volume time-series workloads, data archival for auditing purposes, historical log keeping, and more. Manage your entire data lifecycle on MongoDB Atlas without replicating or migrating it across multiple systems.

What is Atlas Online Archive?

Online Archive is a fully managed data tiering solution that allows you to tier data across your "hot" database storage layer and "colder" cloud object storage to maintain queryability while optimizing on cost and performance. Online Archive is a good fit for many different use cases, including:

- Insert-heavy workloads, where data is immutable and has lower performance requirements as it ages
- Historical log keeping and time-series datasets
- Storing valuable data that would otherwise have been deleted using TTL indexes

We’ve received amazing feedback from the community over the past few months while the feature was in preview, and we’re now confident in supporting your production workloads. Our users have put the feature through a variety of use cases in production and development workloads, which has enabled us to make a wide range of improvements.

“Online Archive gives me the flexibility to store all of my data without incurring high costs, and feel safe that I won't lose it. It's the perfect solution.”
Ran Landau, CTO, Splitit

Autonomous Archival Management

It’s easy to get started with Online Archive, and it requires no ongoing maintenance once it’s been set up. To activate the feature, follow these simple steps:

1. Navigate to the “Online Archive” tab on your cluster card and begin the setup flow.
2. Set an archiving rule by selecting a date field, with dot notation if it’s nested, or by creating a custom filter.
3. Choose commonly queried fields that you want your archival queries to be optimized for, with a few things in mind:
   - Your data will always be “partitioned” by the date field in your archive, but it can be partitioned by up to two additional fields as well.
   - The fields that you most commonly query should be towards the top of the list (date can be moved to the top or bottom).
   - Query fields should be chosen carefully, as they cannot be changed after the fact and will have a large impact on query performance.
   - Avoid choosing a field that has unique values, as it will have negative performance impacts for queries that need to scan lots of data.

And you’re done! MongoDB Atlas will automatically move data off of your cluster and into a more cost-effective storage layer that can still be queried with a single connection string that combines cluster and archive data, powered by Atlas Data Lake.
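For teams that automate their infrastructure, archiving rules can also be managed programmatically. The sketch below shows the general shape of an archiving rule as accepted by the Atlas Admin API’s Online Archive endpoint at the time of writing; the database, collection, and partition fields are illustrative, and you should check the current API reference before relying on the exact field names:

```javascript
// Sketch of an Online Archive rule payload for
// POST /api/atlas/v1.0/groups/{PROJECT-ID}/clusters/{CLUSTER-NAME}/onlineArchives
// (illustrative names only).
const onlineArchiveRule = {
  dbName: "appData",
  collName: "sessionLogs",
  criteria: {
    dateField: "createdAt",  // documents are archived once this date is old enough
    expireAfterDays: 30
  },
  partitionFields: [
    { fieldName: "createdAt", order: 0 }, // the archive is always partitioned by the date field
    { fieldName: "userId", order: 1 }     // up to two additional commonly queried fields
  ]
};
```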
What's Next?

Along with announcing Online Archive as Generally Available, we’re excited to share a few additional product enhancements, which should be available in the coming months:

- Private Link support when querying your archive
- Incremental deletes of data from your archive
- Support for BYO-key encryption on your archival data
- Improved performance and stability

Try Atlas Online Archive

Online Archive allows you to right-size your Atlas clusters by storing hot data that is regularly accessed in live storage and moving colder data to a cheaper storage tier. Billing for this feature will include the cost to store data in our fully managed cloud object storage and usage-based pricing for querying archive data. We can’t wait to see what new workloads you’ll bring onto MongoDB Atlas with the new flexibility provided by Online Archive!

To get started, sign up for an Atlas account and deploy any dedicated cluster (M10 or higher). Have questions? Check out the documentation or head over to our community forums to get answers from fellow developers. And if we’re missing a feature you’d like to see, please let us know!

Safe Harbor Statement

The development, release, and timing of any features or functionality described for MongoDB products remains at MongoDB's sole discretion. This information is merely intended to outline our general product direction and it should not be relied on in making a purchasing decision nor is this a commitment, promise or legal obligation to deliver any material, code, or functionality. Except as required by law, we undertake no obligation to update any forward-looking statements to reflect events or circumstances after the date of such statements.

November 30, 2020

Splitit & MongoDB Atlas: Racing to Capture a Global Opportunity

Splitit is a global payment solution that allows businesses to offer installment plans for their customers. Unlike other buy now, pay later (BNPL) solutions, Splitit lets shoppers split their online purchases into monthly installments using their existing credit, without the need for registration, application, or approval. “We have a very different proposition than others in this space,” says Splitit’s CTO, Ran Landau. “We’re not a financing company. We utilize the customer’s existing credit card arrangement, which allows us to accommodate smaller average deal values and a broader range of installment schedules.”

Splitit works with online retailers across all market sectors and diverse price points, and recently raised $71.5 million in investment to fund global expansion. Following its IPO in January 2019, the business had seen strong growth as more consumers moved from brick and mortar to ecommerce. Then COVID-19 hit, and online shopping boomed. Landau recognized that the company needed to quickly scale its infrastructure in order to capture this large opportunity.

The Need for Speed

Landau joined Splitit in May 2019 and worked to modernize the company’s infrastructure. At the time, the team was using a traditional relational database. “As tech leaders, we need to make the right decision,” he says. “When I came to Splitit, I knew I needed a powerful NoSQL server so that my developers could develop faster and so that we could scale – both things that our relational databases were failing to deliver.”

In the interest of getting up and running quickly, Landau’s team thought they could move faster using a cloud-provider database that mimicked MongoDB functionality. He had used MongoDB before and saw that this solution offered the same drivers he was familiar with and claimed compatibility with MongoDB 3.6. Initially, the new solution seemed fine. But as the team migrated more data into the database, Landau noticed a few missing features. Scripts for moving documents from one collection to another were failing, and overall performance was deteriorating. The application became slow and unresponsive even though the load on the database was normal. “We were having issues with small things, like renaming collections. I couldn’t search or navigate through documents easily,” recalls Landau.

Offline Database: A Breaking Point

Then one day, the application was unable to communicate with the database for 20 minutes, and when the database finally came back online, something wasn’t right. Landau contacted support, but the experience was not very helpful. “We were not pleased with the response from the database vendor,” he explains. “They insisted that the issue was on our side. It wasn’t so collaborative.” Fortunately, he had taken a snapshot of the data, so Splitit was able to revert back to an earlier point in time. But the incident was troubling. Other teams also had been complaining about how difficult it was to debug problems and connect to the database successfully. Landau knew he needed to find a better solution as soon as possible.

MongoDB Atlas: A Reliable, Scalable Solution

Landau believed that MongoDB was still the right choice for Splitit and investigated whether the company offered a cloud solution. He discovered MongoDB Atlas and decided to give it a try. “The migration to MongoDB Atlas was so simple. I exported whatever data I had, then imported it into the new cluster.
I changed the connection strings and set up VPC peering in all of my environments,” says Landau. “It was incredibly easy.”

Not only was MongoDB Atlas built on actual MongoDB database software, but it was also secure, easy to use, and offered valuable features such as Performance Advisor. “It can tell you which indexes need to be built to increase speed. It’s such a powerful tool — you don’t need to think; it analyzes everything for you,” explains Landau. Another great feature was auto-scaling. “My biggest concern as I scale is that things keep working. I don’t have to stop, evaluate, and maintain the components in my system,” says Landau. “If we go back to doing database operations, we can’t build new features to grow the business.”

Auto-Archival Made Easy with Online Archive

As a business in the financial services industry, Splitit needs to comply with various regulations, including PCI DSS. A key requirement is logging every transaction and storing it for auditing purposes. For Splitit, that adds up to millions of logs per day. Landau knew that storing this data in the operational database was not a cost-effective, long-term solution, so he initially used an AWS Lambda function to periodically move batches of logs older than 30 days from one collection to another. A few months ago, he discovered Online Archive, a new feature released at MongoDB.live in June 2020. With it, Landau was able to define a simple rule for archiving data from a cluster into a more cost-effective storage layer and let Atlas automatically handle the data movement. “The gem of our transition to Atlas was finding Online Archive,” says Landau. “There’s no scripting involved, and I don’t have to worry about my aging data. I can store years of logs and know that it’s always available if I need it.”

“Online Archive gives me the flexibility to store all of my data without incurring high costs, and feel safe that I won't lose it. It's the perfect solution.”
Ran Landau, CTO, Splitit

With federated queries, the team can also easily analyze the data stored in both the cluster and the Online Archive for a variety of use cases.

Ready for Hypergrowth and Beyond

Looking back, Landau admits that he learned his lesson. In trying to move quickly, he selected a solution that appeared to work like MongoDB, but ultimately paid the price in reliability, features, and scalability.

“You wouldn't buy a fake shirt. You wouldn't buy fake shoes. Why buy a fake database? MongoDB Atlas is the real thing.”
Ran Landau, CTO, Splitit

Landau is confident that his investment in MongoDB puts in place a core building block for the business’ continued success. With a fully managed solution, his team can focus on building features that differentiate Splitit from competitors to capture more of the market. “We saw our growth triple in March due to COVID-19, but the sector as a whole is expanding,” he says. “Our technology is patent protected. Everything we build moving forward will be on MongoDB. As a company that’s scaling rapidly, the most important thing is not having to worry about my scaling. MongoDB Atlas takes care of everything.”

November 23, 2020

Introducing Multi-Cloud Clusters on MongoDB Atlas

One of the core pillars of MongoDB is the freedom to run anywhere. Since 2017, organizations have been able to use MongoDB Atlas, our fully managed global cloud database, across 70+ regions on the cloud provider of their choice: AWS, Azure, or Google Cloud. We’re increasingly seeing customers run independent workloads on different clouds — a common practice among enterprises with different applications and business units. However, we believe the real power of multi-cloud applications is yet to be realized in our industry.

So today, I’m proud to announce that multi-cloud clusters are generally available on MongoDB Atlas! With this groundbreaking capability, customers can distribute their data in a single cluster across multiple public clouds simultaneously, or move workloads seamlessly between them. Data — traditionally the hardest piece of an application stack to move — is now the easiest.

A New Multi-Cloud Paradigm

More organizations are moving towards a multi-cloud model, and they want the freedom and flexibility to use the best of each cloud provider for any and every application. The question is how engineering teams can do this efficiently and deliberately while dealing with challenges such as incompatible operations and the effects of data gravity. Read our eBook, Why the World is Going Multi-Cloud, for a high-level guide to today's fast-emerging cloud architecture.

With multi-cloud clusters on MongoDB Atlas, customers can realize the benefits of a multi-cloud strategy with true data portability and a simplified management experience. Developers no longer have to deal with manual data replication, and businesses can focus their technical resources on building differentiated software. This opens up a whole new set of possibilities that were previously difficult ― if not impossible ― to achieve, from being able to use best-of-breed services across multiple platforms to data mobility and cross-cloud resiliency.

Use best-in-class technology across multiple clouds in parallel

Developer productivity is critical to a company’s success, and CTOs know that enabling their teams to choose the best technology available is a major contributing factor. With MongoDB Atlas, developers get more freedom in deciding which building blocks to use, regardless of which cloud is storing application data. Some examples of popular cloud services that our customers like to use include AWS Lambda, Google Cloud AI Platform, and Azure Cognitive Services. With multi-cloud clusters, developers can now run operational and analytical workloads using different cloud tools on the same dataset, with no manual data replication required.

Migrate workloads across cloud environments seamlessly

Data mobility is another reason companies want a multi-cloud strategy. The world is constantly changing, and businesses never know if, or how, their cloud requirements are going to change. They may face mergers and acquisitions, be subject to new regulatory controls for data portability, find themselves in direct competition with a cloud provider, or find significant cost savings on another platform. With MongoDB Atlas, organizations can future-proof their applications and have the option to move them from one cloud to another if needed, without undergoing a costly data migration. Our built-in automation seamlessly handles cross-cloud data replication on a rolling basis so applications stay online and available to end users.
Improve high availability with cross-cloud redundancy

Any business with a mission-critical or user-facing application knows that downtime is unacceptable. Cloud disruptions vary in severity, from temporary capacity constraints to full-blown outages, and organizations need to mitigate as much risk as possible. By distributing data across multiple clouds, they can improve high availability and application resiliency without sacrificing latency. MongoDB Atlas extends the number of locations available by allowing users to choose from any of the nearly 80 regions available (with more coming) across AWS, Azure, and Google Cloud — the widest selection of any cloud database on the market.

This is particularly relevant for businesses that must comply with data sovereignty requirements but have limited deployment options due to sparse regional coverage on their primary cloud provider. In some cases, only one in-country region is available, leaving users especially vulnerable to disruptions in cloud service. For example, AWS and Google Cloud each offer only one region in Canada. With multi-cloud clusters, organizations can take advantage of both regions, and add additional nodes in the Azure Toronto and Quebec City regions for extra fault tolerance. With MongoDB Atlas, customers no longer need to make a trade-off between availability and compliance.

Reach more users with flexible deployment options

In order to deliver a world-class application experience, organizations must at a minimum meet end-user requirements for their products and services. For SaaS providers and B2C businesses, this may include cloud provider preferences or regional availability. While each of the cloud providers offers a large and growing list of regions globally, their data centers are still heavily concentrated in the USA, Europe, and eastern Asia. If multinational enterprises want to reach local users in other areas, they may not always find coverage on a single cloud. For example, AWS is the only provider to offer a cloud region in Bahrain, Azure Oslo is the only option in Norway, and only Google Cloud has data centers in Indonesia. To capture more global market share, companies may need a multi-cloud strategy to meet customers where they are.

An Integrated, More Secure Cloud Data Platform

MongoDB has consistently delivered innovations in the data management experience, including automated data tiering with Atlas Online Archive, integrated full-text search with Atlas Search, and Client-Side Field Level Encryption (FLE) for some of the strongest levels of data privacy available today. Client-Side FLE currently works with AWS Key Management Service (KMS) and will soon offer beta support for Azure Key Vault and Google Cloud KMS. With this expansion, it will be easier for organizations to further enhance the privacy and security of sensitive and regulated workloads across all major public cloud platforms. Read our guide to learn more about how Client-Side Field Level Encryption protects data in MongoDB.

Multi-Cloud Data Management Made Easy

Multi-cloud distribution can be enabled for both new and existing clusters starting today via the Atlas UI. Multi-cloud clusters come with all the features that our customers know and love, including built-in security defaults, fully managed backups and restores, automated patches and upgrades, intelligent performance advice, and more.
While multi-cloud clusters are generally available, we plan to release more capabilities in the coming months to deliver even more value to you. Whether you’re a startup just getting off the ground or a global enterprise in the midst of a multi-year cloud transformation initiative, our multi-cloud database solution abstracts away the toughest roadblock to unlocking your multi-cloud strategy. When your data can travel across clouds, there’s no limit to what you can build. Let us know where multi-cloud clusters on MongoDB Atlas take you, or tell us what you need to get there.

October 20, 2020

Free Your Genius With MongoDB Atlas Free Tier

What’s free, lasts forever, and can help you explore all your app ideas? You guessed it — a MongoDB Atlas free tier cluster. Obviously there are plenty of reasons to use a more powerful cluster, but before you do, let’s explore the capabilities of the Atlas free tier.

It’s Free — Forever

You heard it here first, folks. You never have to pay for your free tier cluster, and you can keep it on for as long as you’d like, on us. That means there’s no credit card required. We invest in the wonder of you, and all your great ideas. Why not test them all out?

New to Atlas?

MongoDB Atlas is our fully managed global cloud database, providing best-in-class infrastructure, automation, and proven practices that guarantee availability, scalability, and compliance with security standards. Atlas is available in more than 70 regions across AWS, GCP, and Azure on our M0 free tier or our paid tiers (starting at M2). The most popular features of Atlas are available on paid and free clusters. Features like MongoDB Atlas Search help you design and deliver top-notch built-in search capabilities. The Collections view lets you inspect and interact with your data. It also provides the Aggregation Pipeline Builder, which allows you to learn, test, and visualize MongoDB’s aggregation framework. You can also use your free tier cluster to test out MongoDB solutions that come integrated with Atlas, such as MongoDB Realm and MongoDB Charts, which both have their own free tier versions. Free tier clusters are a great opportunity to start innovating at no cost. If you decide you want something more powerful, upgrading to a larger tier has never been easier. To get yourself an account, sign up for Atlas today.

Atlas Veteran?

If you already know your way around a cloud database, why would you use a free tier cluster? No matter where you are in your Atlas journey, free tier clusters can be useful for development environments, proofs of concept, checking out new Atlas features, and demos. No matter how many projects you have, you can always spin up one free tier cluster per project.

It’s Advantageous

Free tier clusters are deployed on the latest battle-tested version of MongoDB, meaning you’re getting all the perks (on-demand materialized views, client-side FLE, wildcard indexes) with none of the cost. For more information on what you’d get with a free tier cluster today, read through this blog post, which discusses free tier features on MongoDB 4.2. Free tier clusters come with 512MB of storage. If you’re curious about what you can do with that, we have a few ideas to get you started: build a search engine in less than 10 minutes with MongoDB Atlas Search, test out MongoDB with new programming languages, or kickstart your Atlas learning process with our sample data sets.

It’s Distributed

Atlas free tier clusters are available on the cloud provider of your choice in the most popular regions, including AWS North Virginia, Google Cloud São Paulo, and Azure in the Netherlands. We’ll be expanding to more regions in the future, so if you don’t have access to your preferred region now, it won’t be long until you do. To see all the free tier options available, simply create a new cluster in the Atlas UI. The best way to check out all the features available on free tier is to play around with one yourself… What are you waiting for?

August 14, 2020