MongoDB Blog


Flowhub Relies on MongoDB to Meet Changing Regulations and Scale Its Business

The legal landscape for cannabis in the United States is in constant flux. Each year, new states and other jurisdictions legalize or decriminalize it, and the regulations governing how it can be sold and used change even more frequently. For companies in the industry, this affects not only how they do business, but also how they manage their data. Responding to regulatory changes requires speedy updates to software and a database that makes it easy to change the structure of your data as needed – and that’s not to mention the scaling needs of an industry that’s growing incredibly rapidly. Flowhub makes software for the cannabis industry, and the company is clearing these hurdles every day. I recently spoke with Brad Beeler, Lead Architect at Flowhub, about the company, the challenges of working in an industry with complex regulations, and why Flowhub chose MongoDB to power its business. We also discussed how consulting from MongoDB not only improved performance but also saved the company money, generating a return on investment in less than a month.

Eric Holzhauer: First, can you tell us a bit about your company?

Brad Beeler: Flowhub provides essential technology for cannabis dispensaries. Founded in 2015, Flowhub pioneered the first Metrc API integration to help dispensaries stay compliant. Today, over 1,000 dispensaries trust Flowhub's point of sale, inventory management, business intelligence, and mobile solutions to process $3B+ in cannabis sales annually.

Flowhub in use in a dispensary

EH: How is Flowhub using MongoDB?

BB: Essentially all of our applications – point of sale, inventory management, and more – are built on MongoDB, and we’ve been using MongoDB Atlas from the beginning. When I joined two and a half years ago, our main production cluster was on an M40 cluster tier, and we’ve now scaled up to an M80. The business has expanded a lot, both by onboarding new clients with more locations and by increasing sales in our client base.
We’re now at $3 billion of customer transactions a year. As we went through that growth, we started by making optimizations at the database level prior to throwing more resources at it, and then went on to scale the cluster. One great thing about Atlas is that it gave us the metrics we needed to understand our growth. After we’d made some optimizations, we could look at CPU and memory utilization, check that there wasn’t a way to further improve query execution with indexes, and then know it was time to scale. It’s really important for usability that we keep latency low and that the application UI is responsive, and scaling in Atlas helps us ensure that performance.

We also deploy an extra analytics node in Atlas, which is where we run queries for reporting. Most of our application data access is relatively straightforward CRUD, but we run aggregation pipelines to create reports: day-over-day sales, running financials, and so forth. Those reports are extra intensive at month-end or year-end, when our customers are looking back at the prior period to understand their business trends. It’s very useful to be able to isolate that workload from our core application queries. I’ll also say that MongoDB Compass has been an amazing tool for creating aggregation pipelines.

EH: Can you tell us some more about what makes your industry unique, and why MongoDB is a good fit?

BB: The regulatory landscape is a major factor. In the U.S., there’s a patchwork of regulation, and it continues to evolve – you may have seen that several new states legalized cannabis in the 2020 election cycle. States are still exploring how they want to regulate this industry, and as they discover what works and what doesn’t, they change the regulations fairly frequently. We have to adapt to those changing variables, and MongoDB facilitates that. We can change the application layer to account for new regulations, and there’s minimal maintenance to change the database layer to match.
That makes our development cycles faster and speeds up our time to market. MongoDB’s flexibility is great for moving quickly to meet new data requirements. As a few concrete examples:

The state of Oregon wanted to make sure that consumers knew exactly how much cannabis they were purchasing, regardless of format. Since some dispensaries sell prerolled products, they need to record the weight of the paper associated with those products. So now that’s a new data point we have to collect. We updated the application UI to add a form field where the dispensary can input the paper weight, and that data flows right into the database.

Dispensaries are also issuing not only purchase receipts, but exit labels like what you’d find on a prescription from a pharmacy. And depending on the state, that exit label might include potency level, percentage of cannabinoids, what batch and package the cannabis came from, and so on. All of that is data we need to be storing, and potentially calculating or reformatting according to specific state requirements.

Everything in our industry is tracked from seed to sale. Plants get barcodes very early on, and that identity is tracked all the way through different growth cycles and into packaging. So if there’s a recall, for example, it’s possible to identify all of the products from a specific plant, or plants from a certain origin. Tracking that data and integrating with systems up the supply chain is critical for us.

That data is all tracked in a regulatory system. We integrate with Metrc, which is the largest cannabis tracking system in the country. So our systems feed back into Metrc, and we automate the process of reporting all the required information. That’s much easier than a manual alternative – for example, uploading spreadsheets to Metrc, which dispensaries would otherwise need to do. We also pull information down from Metrc.
When a store receives a shipment, it will import the package records into our system, and we’ll store them as inventory and get the relevant information from the Metrc API.

Flowhub user interface

EH: What impact has MongoDB had on your business?

BB: MongoDB has definitely improved our time to market in a couple of ways. I mentioned the differences of regulation and data requirements across states; MongoDB’s flexibility makes it easier to launch into a new state and collect the right data or make required calculations based on data. We also improve time to market because of developer productivity. Since we’re a JavaScript shop, JSON is second nature to our developers, and MongoDB’s document structure is very easy to understand and work with.

EH: What version of MongoDB are you using?

BB: We started out on 3.4, and have since upgraded to MongoDB 4.0. We’re preparing to upgrade to 4.2 to take advantage of some of the additional features in the database and in MongoDB Cloud. One thing we’re excited about is Atlas Search: by running a true search engine close to our data, we think we can get some pretty big performance improvements. Most of our infrastructure is built on Node.js, and we’re using the Node.js driver. A great thing about MongoDB’s replication and the driver is that if there’s a failover and a new primary is elected, the driver keeps chugging, staying connected to the replica set and retrying reads and writes if needed. That’s prevented any downtime or connectivity issues for us.

EH: How are you securing MongoDB?

BB: Security is very important to us, and we rely on Atlas’s security controls to protect data. We’ve set up access controls so that our developers can work easily in the development environment, but there are only a few people who can access data in the production environment. IP access lists let us control who and what can access the database, including a few third-party applications that are integrated into Flowhub.
We’re looking into implementing VPC Peering for our application connections, which currently go through the IP access list. We’re also interested in Client-Side Field-Level Encryption. We already limit the amount of personally identifiable information (PII) we collect and store, and we’re very careful about securing the PII we do need to store. Client-Side Field-Level Encryption would let us encrypt that at the client level, before it ever reaches the database.

EH: You're running on Atlas, so what underlying cloud provider do you use?

BB: We’re running everything on Google Cloud. We use Atlas on Google Cloud infrastructure, and our app servers are running in Google Kubernetes Engine. We also use several other Google services. We rely pretty heavily on Google Cloud Pub/Sub as a messaging backbone for an event-driven architecture. Our core applications initially were built with a fairly monolithic architecture, because it was the easiest approach to get going quickly. As we’ve grown, we’re moving more toward microservices. We’ve connected Pub/Sub to MongoDB Atlas, and we’re turning data operations into published events. Microservices can then subscribe to event topics and use the events to take action and maintain or audit local data stores. Our data science team uses Google BigQuery as the backend to most of our own analytics tooling. For most uses, we migrate data from MongoDB Atlas to BigQuery via in-house ETL processes, but for more real-time needs we’re using Google Dataflow to connect to MongoDB’s oplog and stream data into BigQuery.

EH: As you grow your business and scale your MongoDB usage, what's been the most important resource for you?

BB: MongoDB’s Flex Consulting has been great for optimizing performance and scaling efficiently. Flowhub has been around for a number of years, and as we’ve grown, our database has grown and evolved.
Some of the schema, query, and index decisions that we had made years ago weren’t optimized for what we’re doing now, but we hadn’t revisited them comprehensively. Especially when we were scaling our cluster, we knew that we could make more improvements. Our MongoDB Consulting Engineer investigated our data structure and how we were accessing data, performance, what indexes we had, and so on. We even got into the internals of the WiredTiger storage engine and what optimizations we could make there. We learned a ton about MongoDB, and the Consulting Engineer also introduced us to some tools so we could diagnose performance issues ourselves.

Based on our Consulting Engineer’s recommendations, we changed the structure of how we stored some data and reworked certain queries to improve performance. We also cleaned up a bunch of unnecessary indexes. We had created a number of indexes over the years for different query patterns, and our Consulting Engineer was able to identify which ones could be removed wholesale, and which indexes could be replaced with a single new one to cover different query patterns. We made some optimizations in Atlas as well, moving to a Low CPU instance based on the shape of our workload and changing to a more efficient backup option.

With the optimizations recommended in our consulting engagement, we were able to reduce our spend by more than 35%. MongoDB consulting paid for itself in less than a month, which was incredible. I had to develop a business case internally for investing in consulting, and this level of savings made it an easy sell.

The knowledge we picked up during our consulting engagement was invaluable. That’s something we’ll carry forward and that will continue to provide benefits. We’re much better at our indexing strategy, for example. Say you’re introducing a new type of query and thinking about adding an index: now we know what questions to ask. How often is this going to be run?
Could you change the query to use an existing index, or change an existing index to cover this query? If we decide we need a new index, should we deprecate an old one?

EH: What advice would you give to someone who's considering MongoDB for their next project?

BB: Take the time upfront to understand your data and how it’s going to be used. That’ll give you a good head start for structuring the data in MongoDB, designing queries, and implementing indexes. Obviously, Flex Consulting was very helpful for us on this front, so give that a look.
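The indexing questions Brad lists map to a simple rule of thumb: a MongoDB compound index can serve any query that filters on a leading prefix of its keys. A minimal sketch of that mental model, with hypothetical field names rather than Flowhub's actual schema:

```javascript
// Can one compound index replace several single-field ones? In MongoDB,
// a compound index can serve queries on a leading prefix of its keys.
// (Field names are hypothetical, not Flowhub's actual schema.)
const compoundIndex = ["storeId", "saleDate", "productId"];

// A query on fields `queryFields` can use the index if those fields
// match a leading prefix of the index key pattern.
function coveredByPrefix(index, queryFields) {
  const prefix = index.slice(0, queryFields.length);
  return queryFields.every((f) => prefix.includes(f));
}

// Queries previously served by separate { storeId: 1 } and
// { storeId: 1, saleDate: 1 } indexes can use the compound index...
console.log(coveredByPrefix(compoundIndex, ["storeId"])); // true
console.log(coveredByPrefix(compoundIndex, ["storeId", "saleDate"])); // true
// ...but a query on saleDate alone cannot (it skips the leading key).
console.log(coveredByPrefix(compoundIndex, ["saleDate"])); // false
```

In practice you would confirm which index a query actually uses with explain() on the real cluster; this prefix check is only the reasoning shortcut.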

January 19, 2021
Applied

Imposter Syndrome and Public Speaking: 5 Tips for a Successful Tech Talk

I’ve done hundreds of presentations at large and small events. I’ve delivered keynotes in front of a thousand people. You probably think I enter the stage each time with full confidence that I’ll give a solid and engaging presentation. You’re wrong. I’m an imposter when it comes to public speaking, and I’m not the only one. Every time I accept a speaking gig, I think there must be people in the audience with much more experience who could deliver my talk more eloquently. I feel like a fraud, and I know lots of other speakers who have the same feeling. That’s imposter syndrome. Studies have shown that 70% of us have experienced this kind of anxiety at some stage in our lives. But none of this stops me from speaking, and it shouldn’t stop you. You’ll get so much out of being brave enough to tell your story in front of a small or large audience: people open up and share their experiences with you, you end up helping others by inspiring them with your story, and you meet a lot of interesting people. Here are five tips for how to overcome your imposter syndrome and deliver a successful tech talk with confidence.

1. Be Better

It helps to watch a lot of talks from conferences and note down what you like and don’t like about them. When preparing your talk, keep that list at hand and avoid including things you dislike. This will help you develop better practices in delivery, storytelling, slide design, or whatever area you have identified as important. When I put together my first presentation for an event in Munich, I watched some talks from the previous year. I was bored when speakers started with five minutes of talking about themselves and what they’d done, so I decided to always begin my talks by telling a personal, relatable story. It gave me the impression that my talks had a much better intro than all the others (which might not be true, but I felt that way).

2. Be Authentic

If you pretend to be someone you’re not, you’ll definitely suffer from imposter syndrome. Be honest with the audience. Share your experience about a specific topic. Audiences love to hear real stories and want to learn from your successes and mistakes. As I mentioned before, I always start talks by sharing a true and relatable story. That serves a few purposes: The talk has an interesting start, and I have people’s attention from the first minute. When I share my own struggles and successes, it helps make the audience feel that I’m one of them. I don’t have to pretend to be someone that I’m not. It’s my story and my experience. Being authentic helps you feel less like a fraud.

3. Grow Confidence

Get feedback at every stage of your talk development, from writing your conference submission, putting together your talk, and crafting your slides to practicing and refining. Ask coworkers, friends, or other speakers to give you feedback. I know this is very hard for us imposters. We’re afraid of getting caught being a fraud by people we care about. On the other hand, it’s a safe environment for receiving candid and honest feedback. I always test my new talks in front of a couple of coworkers. They know me and don’t want to hurt my feelings, but they also do not hold back if they don’t like parts of my presentation. It always makes my talks much better and gives me confidence that if they like it, the conference audience might like it too. If you’re just starting with public speaking, it might be an option to hold your presentation first at a smaller community gathering. These events are normally held in a much more relaxed atmosphere and have a smaller audience. It’ll boost your confidence if those user groups like your presentation, and you’ll feel more prepared for larger stages. Start small, grow confidence, and then go big (if you want).

4. The Audience Is Your Friend

Every speaker I know is nervous before the talk, no matter how many presentations they’ve given. Always remember: the audience wants you to succeed. They came to learn something. They don’t want you to fail. The audience is full of friends that want to support you. Bring some friends. It’s always good to see a friendly, familiar face in the audience (even if your talk is being delivered virtually). Talk to your friend right before you go on stage so your mind is not focused too much on the challenge ahead of you. Find friendly faces. Look around in the audience and find some friendly faces. You can use those attendees later in the talk to get visual confirmation that you’re doing great. If you’re giving your talk in person, talk to people at the entrance or in the first row. Have a casual conversation, or just say hi. Connecting with the audience helps you to not think of them as strangers who are going to raise critical questions.

5. Use Imposter Syndrome to Your Advantage

Imposter syndrome gives you the feeling of being a fraud talking about a specific topic. You think there must be experts at this conference who know much more about the topic than you do. Use that feeling as a catalyst to really double down on learning more about the topic. I once submitted a talk about software development automation to an event in Norway and got accepted. I knew a few concepts but wasn’t an expert. The next day I went into deep-learning mode, read tons of articles on the internet, and scheduled a dozen interviews with people from various organizations who ran development tools, were tech leads, or were simply innovative developers. I dug up so many interesting stories. I was scared as hell when giving the talk for the first time because I still did not feel like an expert on the topic, but people loved the stories I uncovered, and as a result, I got invited to give the talk as a keynote at another event in Italy.
Investing time in your research helps you to build confidence, so use the imposter syndrome as an accelerator.

How can you start putting together your tech talk abstract and getting ready to submit it to your next conference? At MongoDB University we’ve put together a free course on this topic. At MongoDB, we believe that everyone has an interesting story to share, and we want to help you bring it to life. MongoDB.live 2021, MongoDB’s biggest conference of the year, is back, and we’re currently looking for speakers to inspire and equip attendees with new technologies, ideas, and solutions. Submit your talk today.

January 19, 2021
Home

Modernize data between siloed data warehouses with Infosys Data Mesh and MongoDB

The Data Challenge in Digital Transformation

Enterprises that embark on a digital transformation often face significant challenges with accessing data in a timely manner—an issue that can quickly impede customer satisfaction. To deliver the best digital experience for customers, companies must create the right engagement strategy. This requires that all relevant data available in the enterprise be readily accessible. For example, when a customer contacts an insurance company, it is important that the company has a comprehensive understanding of the customer’s background as well as any prior interactions, so it can orchestrate the best possible experience. Data is available in both BI (Business Intelligence) systems, like enterprise data warehouses, and OI (Operational Intelligence) systems, like policy and claim systems. These BI and OI systems need to be brought together so that synchronization delays don’t disrupt digital functions. Data removed from an operational system loses context. Re-establishing this domain context and providing persona-based access to the data requires domain-oriented, decentralized data ownership and architecture. Ultimately, organizations seek to use data as a key to fueling the products and services they provide their customers. This data should minimize the cost of customer research—but the data needs to be trusted and high quality. Companies need access to these siloed sources of data in a seamless, self-service approach across various product life cycles.

The Challenge of Centralized Data

Historically, businesses have handled large amounts of data from various sources by ingesting it all into a centralized database (data warehouse, data lake, or data lake on cloud). They would then feed insight drivers, like reporting tools and dashboards as well as online transaction processing applications, from that central repository.
The challenge with this approach is the broken link between analytical systems and transactional systems, which impedes the digital experience. Centralized systems, like data warehouses, introduce latency and fail to meet the real-time response and performance levels needed to build next-generation digital experiences.

What is Infosys Data Mesh?

Data Mesh helps organizations bridge the chasm between analytics and application development teams within large enterprises. Data Mesh is an architecture pattern that takes a new approach to domain-driven distributed architecture and the decentralization of data. Its basic philosophy is to encapsulate the data, its relationships, context, and access functionality into a data product with guaranteed quality, trust, and ease of use for business consumption. Data Mesh is best suited for low-latency access to data assets used in digital transformations that are intended to improve experience through rich insights. With its richer domain flavor — distributed ownership, manageability, and low-latency access — Data Mesh is best positioned as a bridge between transactional systems (consuming applications) and analytical systems.

This diagram depicts the high-level solution view of Data Mesh:

Data Mesh Key Design Principles

- Domain-first approach.
- Data as a product. Data Mesh products share the following attributes, which maximize usability and minimize friction:
  - Self-described: metadata is precise and accurate
  - Discoverable and addressable: products are uniquely identifiable and easy to find
  - Secure and well-governed: only those who are granted access have it
  - Trustworthy: proper data quality controls are applied; SLAs/SLOs are maintained
  - Open standard and interoperable: data formats — XBRL, JSON
- Build new products easily. Any cross-functional team can build a new, enterprise-level product in an existing domain and/or from existing products.
- Simplified access for multiple technology stacks. Polyglot data and ports, cloud and non-cloud.
- Common infrastructure and services for all data pipelines and catalogs.

Platform Requirements for a Data Mesh

To build a data mesh, companies need a database platform that can create domain-driven data products that meet various enterprise needs. This includes:

- Flexible data structures — to accommodate new behaviors
- An API-driven construct — to access current data products and build new domain-data ones
- Support for high-performance query on large-scale data structures
- A shared, scalable infrastructure

Why MongoDB is the Optimal Platform for Infosys Data Mesh

MongoDB is the best platform for realizing the Infosys Data Mesh architecture and powering analytics-driven enterprises because it provides:

- A flexible document model and poly-cloud infrastructure availability, so teams can easily modify and enrich flat or hierarchical data models
- MongoDB Realm Webhooks to create service APIs which connect data across products and enable consumption needs based on business context
- A scalable, shared infrastructure and support for high-performance querying of large-scale data
- Service APIs for constructing Infosys Data Mesh

Two use cases:

Case 1: A wealth management firm offers a variety of products to its customers — things like checking and savings accounts, trading, credit and debit cards, insurance, and investment vehicles.

Challenges:
- Each product is serviced by a different system and technology infrastructure
- Internal consumers of this data have different needs: product managers analyze product performance, wealth managers and financial advisors rely on customer-centric analytics, and financial control teams track the firm’s revenue performance

Solution: Using the Infosys Data Mesh model, the firm’s data owners create domain-data products categorized by customer and product, and then curate and publish them through a technology-agnostic, API-driven service layer.
Consumers can then use this service layer to build the data products they need to carry out their business functions.

Case 2: The Risk and Finance unit of a large global bank has multiple regional data lakes catering to each region’s management information system and analytical needs. This poses multiple challenges for creating global data products:

Challenges:
- Technology varies across regions
- ETL can become less advantageous depending on circumstance
- Regulations govern cross-regional data transfer policies

Solution: To address these challenges, the bank creates an architecture of regional data hubs for region-specific products and, as with Case 1, makes those products available to authorized consumers through a technology-agnostic, API-driven service layer. Next, it implements an enterprise data catalog with an easy-to-use search interface on top of the API layer. The catalog’s query engine executes cross-hub queries, creating a self-service model for users to seamlessly discover and consume data products and to align newer ones with their evolving business needs. Enterprise security platform integration ensures that all regulatory and compliance requirements are fully met.
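To make the "data as a product" principle concrete, a catalog entry for a domain-data product can carry its own metadata describing ownership, access, and service levels. This is only an illustrative sketch — the field names and the publishability check are hypothetical, not part of the Infosys platform:

```javascript
// Hypothetical catalog record for a self-described domain-data product,
// mirroring the Data Mesh attributes above (names are illustrative).
const customerProduct = {
  productId: "wealth.customer-360", // addressable: unique identifier
  domain: "wealth-management", // domain-first ownership
  description: "Customer-centric analytics feed for advisors",
  schemaVersion: "1.2.0", // self-described metadata
  format: "JSON", // open standard, interoperable
  owners: ["wealth-data-team"], // decentralized ownership
  accessPolicy: { roles: ["advisor", "product-manager"] }, // well-governed
  sla: { freshnessMinutes: 15, availability: "99.9%" }, // trustworthy
};

// A minimal check a catalog might run before exposing a product
// for self-service discovery: all required metadata must be present.
function isPublishable(p) {
  return ["productId", "domain", "schemaVersion", "owners", "accessPolicy", "sla"]
    .every((field) => p[field] !== undefined);
}

console.log(isPublishable(customerProduct)); // true
```

An enterprise data catalog, as in Case 2, would run validations of this kind (plus data quality and security checks) before a product becomes discoverable by consumers.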
How Businesses Overall Can Benefit

- Data and insights become pervasive and consumable across applications and personas
- Speed-to-insights (including real time) enables newer digital experiences and better engagement, leading to superior business results
- Self-service through trusted data products is enabled

Infosys DNA Assets on MongoDB Accelerate the Creation of Industry-Specific Domain Data Products

- Infosys Genome: Creates the foundation for Data Mesh by unifying semantics across industries
- Infosys Data Prep: Guides consumers through the product creation process with a scalable data preparation framework
- Infosys Marketplace: Enables discovery and consumption of domain-data products via an enterprise data catalog

Download our Modernization Guide for information about which applications are best suited for modernization and tools to get started.

January 14, 2021
Developer

Legacy Modernization with MongoDB and Confluent

In many organizations, crucial enterprise data is locked in dozens or hundreds of silos that may be controlled by different teams and stuck in systems that aren’t able to serve new workloads or access patterns. This is a blocker for innovation and insight, ultimately hampering the business. For example, imagine building a new mobile app for your customers that enables them to view their account data in a single view. Designing the app could require months of time simply to navigate the internal processes necessary to gain access to the legacy systems, and even more time to figure out how to integrate them.

An Operational Data Layer, or ODL, can offer a “best of both worlds” approach, providing the benefits of modernization without the risk of a full rip and replace. Legacy systems are left intact – at least at first – meaning that existing applications can continue to work as usual without interruption. New or improved data consumers will access the ODL rather than the legacy data stores, protecting those stores from new workloads that may strain their capacity and expose single points of failure. At the same time, building an ODL offers a chance to redesign the application’s data model, allowing for new development and features that aren’t possible with the rigid tabular structure of existing relational systems. With an ODL, it’s possible to combine data from multiple legacy sources into a single repository where new applications, such as a customer single view or artificial intelligence processes, can access the entire corpus of data. Existing workloads can gradually shift to the ODL, delivering value at each step. Eventually, the ODL can be promoted to a system of record and legacy systems can be decommissioned. Read our blog covering DaaS with MongoDB and Confluent to learn more.
There’s also a push today for applications and databases to be entirely cloud-based, but the reality is that current business applications are often too complex to be migrated easily or completely. Instead, many businesses are opting to move application data between on-premises and cloud deployments in an effort to leverage the full advantage of public cloud computing without having to undertake a complete, massive data lift-and-shift. Confluent can be used for both one-time and real-time data synchronization between legacy data sources and modern data platforms like MongoDB, whose fully managed global cloud database service, MongoDB Atlas, is supported across AWS, Google Cloud, and Azure. Confluent Platform can be self-managed in your own data center, while Confluent Cloud can be used on the public clouds. Whether leaving your application on-premises is a personal choice or a corporate mandate, there are many good reasons to integrate with MongoDB Atlas:

- Bring your data closer to your users in more than 70 regions with Atlas’s global clusters
- Address your most intense workloads with one-click, automated sharding for scale-out and zero-downtime scale-up
- Quickly provision TBs of database storage, all on high-performance SSDs with dedicated I/O bandwidth
- Natively query and analyze data across AWS S3 and MongoDB Atlas with MongoDB Atlas Data Lake
- Perform full-text search queries with MongoDB Atlas Search
- Build native mobile applications that seamlessly synchronize data with MongoDB Realm
- Create powerful visualizations and dashboards of your MongoDB data with MongoDB Charts
- Off-load older data to cost-effective storage with MongoDB Atlas Online Archive

In this video, we show a one-time migration and real-time continuous data synchronization from a relational system to MongoDB Atlas using Confluent Platform and the MongoDB Connector for Apache Kafka. We also talk about different ways to store and consume the data within MongoDB Atlas.
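As a rough sketch of what that synchronization setup can look like, a MongoDB sink connector instance registered with Kafka Connect might be configured along these lines (the topic name, namespace, and connection string below are placeholders, not taken from the demo):

```json
{
  "name": "mongodb-atlas-sink",
  "config": {
    "connector.class": "com.mongodb.kafka.connect.MongoSinkConnector",
    "topics": "legacy.inventory.customers",
    "connection.uri": "mongodb+srv://<user>:<password>@<cluster>.mongodb.net",
    "database": "inventory",
    "collection": "customers",
    "key.converter": "org.apache.kafka.connect.storage.StringConverter",
    "value.converter": "org.apache.kafka.connect.json.JsonConverter",
    "value.converter.schemas.enable": "false"
  }
}
```

Paired with a source connector reading from the relational system (for example, a JDBC or Debezium source), records then flow through Kafka topics into Atlas continuously.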
Git repository for the demo is here. Learn more about the MongoDB and Confluent partnership here and download the joint Reference Architecture here. Click here to learn more about modernizing to MongoDB.

January 7, 2021
Developer

Built with MongoDB: Coursedog

Nicholas Diao and Justin Wenig had just joined the undergraduate program at Columbia University and were excited to take their first computer science class. They registered and, to their delight, were accepted into the course. When they showed up to the first class, however, they were greeted with distressing news: the class was double-booked. As they quickly discovered, their situation was hardly unique. Universities, such as their own, often lack the software to manage the complexities of class scheduling. They decided to fix that. What started as an undergraduate project to solve a local problem turned into Coursedog, a Y Combinator-backed curriculum success platform for higher education. Coursedog works with more than 70 universities to modernize the way they propose, schedule, and publish their classes to students. Coursedog has raised $4.2M in funding and has been building with MongoDB since Day 1. Both founders were recently named to Forbes 30 Under 30. In this edition of #BuiltWithMongoDB, we talk to Nicholas about being a student founder, building prototypes to find product-market fit, and growing with the MongoDB platform.

Siya Raj Purohit: Coursedog started while you were still in college. Let’s talk about how you came across the problem your platform is built to address.

Nicholas Diao: My co-founder Justin and I both wanted to be in a specific CS class. On the first day of class, we had our textbooks, our coffee, and our snacks, and were ready to go - and then we realized the professor wasn’t there. It turns out the professor had been double-booked. As aspiring CS majors, we thought “how can this be a problem of the 21st century?!” We assumed there was some automated system that ensured smooth scheduling for university classes. But it turned out that what we thought would be an automated system was a couple of overworked administrators with Microsoft Excel spreadsheets in a dark back room where they had to build the schedule themselves.
And, of course, that process is error-prone and not the best use of time for university administrators. Justin and I spoke to around 400 to 500 university administrators to better understand the scheduling process, which is far more complex than it appears and has an immense impact on students and their ability to get the courses they need to learn and graduate. We realized this was a problem that was mostly ignored. Columbia Law School joined us as a design partner, and we started building a simple prototype to address this problem and make the system better for all universities. SRP: What was your initial prototype like? ND: We used plain HTML, CSS, and JavaScript, with a Node.js server and a MongoDB database. Part of the reason we chose MongoDB is its ability to be really flexible, because we were learning and iterating day after day. A few months later, we ended up signing our first official school contract at Brigham Young University based on good references from Columbia Law School. In the winter of 2019, we entered Y Combinator. After we graduated from YC, we raised $4M in Series A funding, and now we're working with more than 70 institutions and have released three additional products focused on curriculum management, event management, and catalog management. We have a team of more than 40 people across three countries. SRP: What does your tech stack consist of? ND: It's the MEVN stack: MongoDB, Express.js, Vue.js, and Node.js. We use AWS for infrastructure. SRP: When you started building Coursedog, you were still a CS student. How did you choose MongoDB? ND: There are a couple of reasons why we started building with MongoDB. Early on, we wanted to quickly build demos that our customers could provide immediate feedback on.
We knew we were tackling a complex problem and we didn't know what our ultimate data structure would be, so we wanted a database that could be as flexible and iterative as possible. That pointed us toward a cloud-hosted NoSQL database, and MongoDB came to the top of that list. It was a great decision, because we made many modifications to our data model, and MongoDB managed the complexity of our solution while being easier to work with than any other database. What got our team to really fall in love with MongoDB was the power of what we were able to do with it. There are relationships between different data objects (for example, for an economics class, managing all the components you need to know: room size, class size, department, and so on). We're able to do very powerful joins, like you would in a SQL database, as well as complex filters. We can build out everything we need in an iterative fashion, and MongoDB has all the functions to enable us to build more complicated features over time. We once spoke with a MongoDB technical advisor too: aggregation pipelines were new to us, and our conversation gave us great footing for getting started. From there on, the MongoDB documentation has been detailed enough to help us navigate scaling challenges. SRP: Now you're used by more than 70 universities. How has your experience been scaling up with MongoDB? ND: We've found it really easy to scale because of the seamlessness and flexibility of the product and its clear communication. We have a clear sense of how much data we're using and what the performance metrics are, and we get timely notifications. We really appreciate that if you hit 80% of a specific metric, MongoDB will send you a notification. This has been hugely helpful to our DevOps and infrastructure folks for monitoring. We've actually taken some cues from that: if a room is 80% booked, we send a notification to our users.
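The cross-collection joins and aggregation pipelines Nicholas mentions can be sketched with a `$lookup` stage. This is a hypothetical example - the `sections` and `rooms` collections and their fields are illustrative, not Coursedog's actual schema:

```javascript
// Hypothetical aggregation pipeline: join each course section with its room
// document, then keep only sections whose enrollment has reached 80% of the
// room's capacity (the notification threshold mentioned in the interview).
const nearCapacityPipeline = [
  { $lookup: {
      from: "rooms",          // collection to join against
      localField: "roomId",   // field on the section document
      foreignField: "_id",    // field on the room document
      as: "room"
  }},
  { $unwind: "$room" },       // each section has exactly one room
  { $match: {
      $expr: { $gte: ["$enrolled", { $multiply: [0.8, "$room.capacity"] }] }
  }}
];

// With a connected Node.js driver this would run as:
// db.collection("sections").aggregate(nearCapacityPipeline)
```

The pipeline is just a plain array of stage documents, which is part of what makes it easy to build up queries iteratively in application code.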
As we've scaled and worked with an increasing amount of data (typically there are between 150 and 200 data types for each school), we have found that MongoDB has the flexibility and customization we required. This sometimes stands in contrast to AWS, our infrastructure provider. Doing the same things in AWS is much harder. For example, to change notifications (or even set them up), you have to read AWS's incomprehensible online documentation, and then there are about 30 different places you can go to make that change on the site. In contrast, MongoDB makes it so easy to manage the back end that we can focus on building the business. SRP: What advice would you have for college students who aspire to build their own company and move into the CTO role for that company? ND: The most important skill to pick up during college is collaboration. The way schools evaluate students is all content-focused (tests, papers, problem sets), but what really matters in your career is your ability to collaborate with other people. I wouldn't have gotten anywhere if Justin and I were not able to build a strong partnership and work with other smart, hard-working people to get Coursedog off the ground. For college students, I would say that in the long run, it doesn't matter what grade you get on that problem set or lab; it matters if you're able to work well with the people around you. My second piece of advice is when building solutions, start small, start local, and start trying to solve problems for the people around you - as we did with Coursedog. The best companies come from solving personal problems and then building out from there. Building something cool with MongoDB? Check out our developer resources, and let us know if you want your startup to be featured in our #BuiltWithMongoDB series.

January 6, 2021
Applied

Finding Inspiration and Motivation at MongoDB University

For many people across the globe, 2020 was a strange and challenging year. The new year has brought the hope of healthier and more prosperous times ahead, but inspiration to stay positive can still be tough to find. For MongoDB Certified Developer Kirk-Patrick Brown, the past months presented obstacles, but with perseverance he also experienced growth and even found ways to give back to his local community using what he learned at MongoDB University. Kirk-Patrick sat down with us virtually, from his home in Jamaica, to talk about his passion for MongoDB, getting certified through MongoDB University in the middle of the pandemic, and staying motivated. Can you tell us about yourself and your approach to software development? I'm Kirk-Patrick Brown, a senior software developer at Smart Mobile Solutions Jamaica. I consider myself an artist. I have a history in martial arts and poetry. I medaled in the Jamaica Taekwondo Championships and received a certificate of merit in a creative writing competition hosted by the Jamaica Cultural Development Commission. It was only natural to bring those artistic traits with me when moving into software development. For me, software development is also an artistic pursuit. It gives me a canvas to create and bring ideas to life, which in turn brings business value. When did you begin building with MongoDB? I had my first hands-on experience with MongoDB in 2018. I realized it was one of those rare gems that you see and are immediately curious about how it actually works, because it's not like what you're used to. In Jamaica there are a lot of organizations that run on some form of relational database. But once I learned about MongoDB and NoSQL, I became a self-motivated evangelist for MongoDB.
I understand that organizations may have used relational databases in the past, and that is understandable because there is a longer history and at one time that was the main type of database for your typical workload, but things have changed drastically. In this era there is more demand for data and for all different types of unstructured data. With the advent of big data, systems that were designed years ago may not be able to provide optimal storage and performance. MongoDB is a better alternative for such use cases and provides built-in features such as auto-sharding to horizontally scale and aid in the efficient storage and retrieval of big data. MongoDB keeps being so innovative. The other day I was preparing for a multi-cloud accreditation with Aviatrix, and it was so funny--at the very same time, MongoDB came out with multi-cloud clusters. It was just beautiful. You don't want to get locked into one cloud provider for your deployments. Even though these cloud providers offer availability zones for increased fault tolerance, things can still happen. Becoming multi-cloud allows you to become more resilient to disaster. Being in multiple clouds also lets you bring some of your replica set members closer geographically to your customers. By leveraging regional presences across multiple clouds, you can reduce network latency and increase your ability to fulfill queries faster. That's one of the main features of MongoDB replication--the ability to configure a member to be of higher priority than others, which could be driven by the location in which most of your queries originate. Multi-cloud clusters enable high availability and performance, and I think it was amazing of MongoDB to create such a feature. You call yourself a "self-motivated evangelist" for MongoDB. We're flattered! What has your experience been? I'm actively trying to get organizations to appreciate NoSQL. Recently I presented to a group of developers in the agile space.
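The member priority Kirk-Patrick describes is set in the replica set configuration document, of the kind passed to rs.initiate() or rs.reconfig() in the mongo shell. The hostnames and priority values below are purely illustrative:

```javascript
// Sketch of a replica set configuration with member priorities. The member
// geographically closest to most query traffic (a hypothetical Kingston node
// here) gets the highest priority, so it is preferred when electing a primary;
// a priority of 0.5 makes the distant member an unlikely primary.
const rsConfig = {
  _id: "rs0",
  members: [
    { _id: 0, host: "kingston.example.net:27017", priority: 2 },  // preferred primary
    { _id: 1, host: "us-east.example.net:27017",  priority: 1 },
    { _id: 2, host: "eu-west.example.net:27017",  priority: 0.5 } // rarely primary
  ]
};

// In the shell this would be applied with: rs.initiate(rsConfig)
// The highest-priority member is the one elections will favor:
const preferred = rsConfig.members.reduce(
  (a, b) => (a.priority >= b.priority ? a : b)
);
```

Elections still require a majority of voting members, so priority expresses a preference rather than a guarantee.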
I spoke to them about replication, sharding, indexes, performance, and how MongoDB ties into advanced security features such as authentication. I'm primarily pushing for developers and organizations to appreciate the Atlas offering from MongoDB. Right out of the box you can instantly have a deployed database out there in Atlas--with the click of a button, pretty much. You can get up and running immediately because MongoDB is a cloud-first database. Plus there's always customer support, even at the free tier. You don't feel alone with your database when you're using MongoDB Atlas. There has been some resistance, because NoSQL requires a bit of a mental shift to understand what it can provide. But we live in a world where things continually change. If you are not open to adapting, I don't even have to say what's going to happen, you know? You became MongoDB Certified through MongoDB University in the middle of the pandemic. Can you tell us about that experience? Even before the pandemic started I was studying courses at MongoDB University, traveling 100 kilometers to go to work every week, and caring for my family and three-year-old son back at home. There were some delays, but I was able to become MongoDB-certified in July 2020. Becoming MongoDB-certified has impacted me in positive ways. I've met people I did not know before. It has also given me a level of confidence as it relates to building a database that is highly available and scalable and provides good read performance via the different types of indexes and indexing techniques provided by MongoDB. I can create the database, perform and optimize CRUD operations, and apply security and performance practices alongside a highly available and scalable cluster, all thanks to the knowledge provided by MongoDB University. The courses at MongoDB University covered those aspects very well.
There is enough theory but also a great amount of practical application in the courses, so you leave with working knowledge that you can immediately use. What is the project you worked on during the pandemic that you're most proud of? One of the things I've worked on intensely during the pandemic is helping to develop a video verification application for a local company and building out most of the backend functionality. For that project, a great deal of research was needed into the technological tools and implementation to support recording verification videos of customers. I felt like it was my contribution to society at a time when it was dangerous for people to come into that local business. If I can develop something that allows even one person not to need to come into that physical location, that could be the difference between someone contracting the virus or not - a virus that has taken many lives and disrupted a lot of families this year. What advice do you have for other developers who are struggling right now with motivation to advance themselves and their careers? Don't ever give up. In anything that you do. There is nothing that you'll do that's going to be both good and easy. Being a developer, you experience different problems that you have to solve, but you have to keep moving forward. I don't believe in failure, because in anything you do, there is always a win. You have your experiences, and those experiences can guide your decision making. It's just like machine learning. Machines need a lot of data, and you can't give the machine all positive data. It needs some negative data for it to become a good training model. You need bad experiences as well as good ones. If we had all good experiences, our brains would not have the training models to make those correct decisions when we need them. Each day I make one definite step or positive decision.
And that may be as simple as going onto the MongoDB University site and saying “I’m going to complete this one course.” You just have to keep going at it. You plan for a lot of things in life, but things most of the time don’t happen when you want them to. There's going to be some delay or something. But you can’t give up. Because if you give up then everything is lost. As long as there is time and there is life then there is opportunity to keep doing this thing. And it may take a little bit to get there but eventually you will. But if you give up, you definitely won’t!

January 6, 2021
Developer

Data in 2021: Four Predictions For an Uncertain Future

What a year it's been. A global pandemic, a recession, and a U.S. presidential election unlike any in living memory made 2020 a tragic and tumultuous 12 months many want to forget, but can't. Despite the uncertainty, looking back we can be sure of at least one thing: we've seen several years of digital disruption in a matter of months. The race to digitize as fast as possible, our "next normal", has cut across all industries, accelerating several ancillary trends like cloud adoption, AI, and IoT. Ironically, one of the lasting effects of 2020's profound unpredictability is just how certain we now are of the growing centrality of digitization, and therefore data, as the primary driver of business success, consumer demand, and even societal change in 2021 and beyond. As such, we asked several of MongoDB's brightest minds to look ahead to the coming year and share their insights into how these trends in data management may play out.

Petabyte-Scale Goes Mainstream

The idea of "big data" isn't new, and many firms have been working with petabyte- and even exabyte-sized data sets for some time. 2021, however, may just be the year that data finally goes "big" for everyone else. For many organizations, particularly those mid-sized and smaller, data management has until now been confined to the realm of terabytes. However, trends like the explosion of connected devices, the rollout of 5G, and the continuation of 2020's headlong rush to digitize every aspect of business mean petabyte-scale data management is likely to become a reality for many more. And to paraphrase a famous saying: "Mo data, mo problems." Keeping petabytes of data accessible and safe, while at the same time using it to meaningfully enrich a business, is an order of magnitude more difficult and complex than what many mid-sized enterprises are used to. Petabyte-scale data management demands stricter tolerances for uptime, scalability, and performance.
In addition, the data is likely to be more distributed — on prem, in the cloud, and even across different clouds. Real-time analytics becomes a business necessity, as does taking advantage of features like automated tiering. The security and data privacy implications of holding that much data, and making it accessible to more people and connected "things," mean petabyte-scale data management is also a business opportunity tinged with considerable financial and reputational risk.

Data Privacy Continues to Be a Hot Button

The coming year will further define the relationship between consumers and their data. In November, California voters approved the California Privacy Rights Act (CPRA). Along with enhancements to the already enacted CCPA (the California Consumer Privacy Act), the CPRA establishes an independent watchdog, the California Privacy Protection Agency, to enforce the CCPA now, and the CPRA when it comes into effect on January 1, 2023. There's growing expectation that 2021 will also be the year the U.S. Federal government begins drafting a nationwide privacy law. With more states likely to follow California and enact their own CCPA-inspired privacy laws, and a new administration headed to Pennsylvania Avenue on January 20th, a national answer to the patchwork of state-based data privacy laws might finally see the light of day.

An online ad for one of Apple's latest releases, a credit card

Elsewhere, China and Canada are just two of several major world economies set to introduce new data privacy statutes, or overhaul existing laws, over the coming 12 months. For businesses, 2021 is also set to be a landmark year for the emergence of data privacy as a competitive advantage. The latest indicator of this trend came in the closing weeks of 2020.
In December, simmering tension between two of the largest and most influential companies on the planet spilled into open conflict when Facebook took out full-page advertisements in the New York Times, Wall Street Journal, and Washington Post declaring, "We're Standing Up To Apple For Small Businesses Everywhere."

A full-page ad Facebook took out in several national publications

The ads were a response to changes in Apple's iOS 14, which will prompt users to grant apps permission to gather data and track them as they move across other apps on their iPhone or iPad. That move will "break" parts of Facebook's ad targeting system, among other things. Apple CEO Tim Cook has staked the company's brand on becoming known as the big tech company that respects user privacy, in direct contrast to Facebook and other companies that rely heavily on customer data for their advertising-based business models. "You are not our product," Cook said in an interview with ABC's Diane Sawyer in 2019. "Our products are iPhones and iPads. We treasure your data. We want to help you keep it private and keep it safe." Make no mistake, Facebook vs. Apple is just one battle in a much larger conflict over data privacy and brand equity. No longer just a compliance challenge, the sanctity of customer data is now a business and brand-burnishing advantage too.

Real-time Analytics Becomes a Differentiator

It's one thing to ingest a lot of data, and quite another to put that data to use. As 2020's digitization stampede continues, the next frontier for enterprises is to mine the information they collect for insights that drive personalized customer experiences—at scale and in real time. And to achieve this level of near-instantaneous insight and response, 2021 will be the year businesses focus their attention on moving to converged data platforms.
Unlike the siloed databases of yesteryear, converged data platforms (otherwise known as translytical data platforms, like MongoDB!) combine transactional (System of Record), operational (System of Engagement), and analytical (System of Insight) workloads onto a single, unified data platform. A converged data platform allows businesses to exploit their mountains of data at the speed and efficiency consumers now demand, and all with lower complexity and risk. As business leaders seek an edge over their competition, those that prioritize real-time analytics and move to a converged data platform will pull further away from their peers.

Not Every Cloud Has a Silver Lining

From retail to recreation, hospitality to healthcare, moving data and operations to the cloud was already a rite of passage on the way to digital transformation. The COVID-19 pandemic simply accelerated this move. But with speed comes even greater risk, and embracing the cloud on an accelerated timeline is fraught with danger. Do it without proper planning—as in a simple "lift and shift" of your existing setup—and you may find the on-premises issues that currently hamper developer velocity and business agility simply follow you to the cloud.

"The COVID-19 pandemic has heightened the need for companies to adopt digital business models—and only cloud platforms can provide the agility, scalability, and innovation required for this transition." - McKinsey, The Next-Normal Recovery Will Be Digital

Additionally, all the advantages the cloud affords, such as the ease of scaling your infrastructure, can quickly lead to more architectural silos and technical complexity if handled incorrectly. Our warning is that, with so many companies rushing their move to the cloud in 2021, many will fail to seize on its transformational benefits, and will spend 2022 (and beyond) undoing bad architectural decisions.

December 31, 2020
Home

Next Generation Mobile Bank Current is Using MongoDB Atlas on Google Cloud to Make Financial Services Accessible and Affordable for All

Doing Banking Better by Serving Americans Overlooked by Traditional Banks

Current CEO Stuart Sopp knows a thing or two about banking and financial services. His background reads like a Who's Who of the industry: Morgan Stanley, Citi, Deutsche Bank, and BNY Mellon. Despite Sopp's depth of financial services expertise, he was determined to do banking even better. He founded Current with the belief that banking should be accessible and affordable for all. Current is now a leading U.S. challenger bank — using innovative approaches, services, and technologies — serving Americans overlooked by traditional banks, regardless of age or income level, and helping improve their financial outcomes. Current CTO Trevor Marshall joined us for a conversation on how the company is working with MongoDB and Google Cloud to translate those lofty goals into decisive action that has the potential to stand the financial services industry on its head.

The Challenge: How to Break Through Legacy Systems and Mindsets to Create Amazing Customer Experiences

Traditional banks are account-driven, not people-driven. Want to keep individual accounts while opening a joint account? You may have to go through the entire account enrollment process again, being treated as if you were a completely new customer. Your checking, savings, and credit card accounts are often cannibalized by different teams inside a bank: what the credit card team wants you to do may be fundamentally different from what the lending and banking teams want you to do. The teams sell against each other to meet their individual team objectives. That can create confusion, not to mention overdrafts and other shortfalls, and it increases the complexity of managing your most basic financial affairs. Unfortunately, legacy infrastructure locks this in by reinforcing and even creating data and organization silos.
And none of this places the focus where it should be: simplifying customers' lives and obtaining the best overall outcomes for them. Current's challenge was to break through walls of legacy thinking and technology to build bridges between the traditional financial world and a fully digital future, creating customer experiences that simply cannot exist in traditional systems. But it was even bigger than that. What Sopp, Marshall, and the team were contemplating was change on an order of magnitude that would shake (and reshape) the industry. It was very much like when the light flashed on, collectively, in the heads of the world's communications service providers (CSPs): they stopped thinking in terms of "phone numbers" and started looking holistically at the customer. In an era of ubiquitous free or low-cost communications services from OTT (over-the-top) providers, this single view of the customer may be what helps CSPs survive. Focusing on customers, not bank accounts, would differentiate Current from legacy banks and help it thrive. Sopp and Marshall had worked closely together over many years across different projects and companies, so they knew they were up to the challenge. What they needed were equally capable partners to help implement their vision.

The Solution: A Modern Database to Drive a Transformational Approach to Banking

To implement that vision, Current built its own ledgering system: Current Core, a retail banking platform providing true ledger state back to each of its banks. Marshall determined that Current needed a modern database that offered the most scalable, efficient way to optimize the success of this new application. Current Core, illustrated in Figure 1, features an event-driven architecture that collects every transaction event in a MongoDB collection. These events construct debits and credits on the customer's ledger.
The platform translates ACH (automated clearing house, or direct withdrawal) transfers, mobile check deposits, cash deposits, peer-to-peer payments, ATM withdrawals, point of sale (POS) debit card purchases, and all other transactions into events, and stores them in a ledger that lives inside MongoDB. "By collapsing a collection of events, you can derive the current state of the user. This has enabled Current to usher in an era of truly customer-focused, not legacy bank account-focused, financial services," said Marshall.

Figure 1: Current Core Platform

MongoDB Atlas Search adds search right on top of the data stored in Current Core without all of the overhead, such as another layer of synchronization, that would have been required to integrate a separate search engine like Elasticsearch, as illustrated in Figure 2.

Figure 2: Current's original proposed search architecture

The updated architecture, as illustrated in Figure 3, uses Atlas Search to simplify queries and enhance accuracy, while enabling other services such as user-to-user payments.

Figure 3: Current's simplified architecture using Atlas Search

Marshall, who has used MongoDB in previous roles since 2015, said Current chose MongoDB for its:

- Strongly consistent data model
- Enterprise security with Field Level Encryption to address security, audit, and compliance requirements
- Multi-document distributed transactions with ACID guarantees to maintain transactional data integrity across the system

"MongoDB gave us the flexibility to be agile with our data design and iterate quickly. The primary driver was the development velocity," said Marshall. Current chose MongoDB Atlas on Google Cloud because it needed a VPC-peered connection with its Google Kubernetes Engine (GKE) clusters, he explained. "Scaling MongoDB, we didn't want to manage it ourselves.
We wanted to ensure that we were running the latest and greatest versions of the MongoDB server, and that we had a team we could work with for support and guidance." "Google Cloud has the most cohesive offering of best-in-class cloud technologies. The way the components talk to each other allows for quick implementation for many use cases," said Marshall. "We came for the credits; we stayed for the Kubernetes." Current is also using MongoDB Compass to make Current Core more accessible to business users. In addition to GKE, Current is using other Google Cloud solutions including Dataflow, Pub/Sub, Memorystore, IAM + IAP, and Google BigQuery. Current also uses Neo4j, which handles data linkages and user householding to expedite some queries.

The Results: Great Customer Experiences, Lower TCO, 500% YoY Revenue Growth — and an Industry First

"By working with MongoDB and Google Cloud we are creating excellent customer experiences. Our company mission of creating better financial outcomes for people is reflected all the way down to the way data is stored in MongoDB. It is uncommon, especially in financial services, for the data model to support the business so directly," said Marshall. "Current Core, our proprietary banking technology, makes this possible by providing greater stability, faster money, and cost efficiencies that we pass on to our community of members." "When you have your initial interaction with Current, we know you as a valued member instead of 'knowing you' as a string of siloed accounts," he added.
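Marshall's point about collapsing a collection of events to derive the current state of the user is the essence of event sourcing. As a minimal sketch (the event shape, field names, and amounts here are illustrative, not Current's actual schema), a member's balance can be derived by folding over the stored transaction events:

```javascript
// Each transaction (ACH transfer, POS purchase, deposit, ...) is stored as an
// immutable event; credits are positive amounts and debits are negative.
const events = [
  { type: "direct_deposit", amount: 500 },  // credit
  { type: "pos_purchase",   amount: -42 },  // debit
  { type: "atm_withdrawal", amount: -60 },  // debit
  { type: "p2p_payment",    amount: 20 }    // credit from another member
];

// Collapse the event stream into the current ledger state.
function deriveBalance(events) {
  return events.reduce((balance, e) => balance + e.amount, 0);
}

const balance = deriveBalance(events);
// 500 - 42 - 60 + 20 = 418
```

Because the events themselves are never mutated, the ledger keeps a full audit trail, and the same fold can be replayed over any prefix of the stream to reconstruct historical state.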
Using the data model to directly support the business helps Current offer an enhanced level of services and features to all customers, including:

- Paychecks paid up to two days faster through direct deposit
- Free overdraft up to $100 with its Overdrive™ feature
- No minimum account balance or hidden fees
- Reward points on purchases redeemable for cash back
- 24/7 customer support

Current uses location data from the phone in combination with the transaction data over the card network to improve attribution of rewards purchases. Current also supports multiple attribution options for merchants setting up campaigns on Current's merchant platform. For example, merchants have the flexibility to set up campaigns either to be always-on once the user adds the offer, or to require an activation in the app before each purchase. Marshall said MongoDB Atlas has provided seamless performance and reliability as demand for Current Core has increased 30% week over week in some recent periods. "Atlas has enabled Current to reduce TCO by giving it the power to push-button scale its platform as needed to meet changing demand with zero Ops intervention," Marshall added. MongoDB's flexible data model also enables Current to release new services and features much faster than the competition — such as introducing the first point-of-sale rewards platform in the US that handles debit cards. Current members get another convenience feature, and Current continues to expand its business. Support, too, is key. "MongoDB Atlas has excellent support, and we've seen exceptional response times when needed," said Marshall.
Not only does all of this position Current better in a competitive marketplace, it also yields deeper economic benefit to Current and its members. "Our ability to directly integrate with financial service providers allows us to achieve best-in-industry unit economics, and we pass that financial benefit right back to members," said Marshall. Innovating with MongoDB and Google Cloud is paying off in other ways: Current has doubled its member base in less than six months to surpass two million members and increased its revenue more than 500 percent year over year, firmly establishing the challenger bank as an industry leader in the U.S. In late November 2020, Current announced it had raised $131 million in Series C funding, bringing it to over $180 million in total funding to date, with a valuation of $750 million. MongoDB and Google Cloud figure prominently in Current's future plans. Marshall said Current plans to take advantage of MongoDB's sharding and horizontal scaling, and to expand its use of Google Cloud services, including "a lot more Dataflow."

December 30, 2020
Applied

Vietnam's #1 Entertainment Network Accelerates International Growth with MongoDB Atlas and Google Cloud

With a large youth demographic and a population whose first phone was often a smartphone, Vietnam has unique conditions that are accelerating digital transformation throughout the country as it races to keep up with its neighbors. Digital entertainment is one of Vietnam's boom industries. POPS , founded in Vietnam in 2007, has grown to become the leading digital entertainment company in Southeast Asia. It is the #1 network in Vietnam, providing 830+ channels and generating 4.4 billion views per month. The trick, according to POPS’ Chief Technology Officer, Martin Papy, is “hyper-local content for every demographic.” POPS scours audience data to inform new content formats, programming, promotions, and deals with local and international content creators. Quickly developing and delivering new applications is central to POPS' go-to-market strategy. To achieve this, POPS made the strategic decision to run its suite of apps on MongoDB Atlas and Google Cloud. The main POPS App is an all-in-one content platform available on smartphones, tablets, smart TVs, and the web. To complement that app, POPS also recently launched POPS Kids . As the name suggests, POPS Kids is for under-12s and provides a wide range of local and international edutainment and entertainment content. “Deciding on the MongoDB database platform was a simple decision,” Martin says. “We could run it on-premises, it provides a straightforward architecture, and it comes with full support for RESTful APIs out of the box. It made it easy for a team of two engineers to build our POPS Kids application very quickly.” This setup worked fine while POPS operated only in Vietnam, with MongoDB running on POPS’ on-premises infrastructure, but it became problematic when the company looked at spiky growth in international markets. “To scale, it was obvious we needed a cloud-based infrastructure,” Martin explained.
The advantage of using MongoDB Atlas is that it allows us to expand in the region without significant time investment from our team.

Martin Papy, Chief Technology Officer, POPS

To solve that problem, POPS turned to MongoDB Atlas , MongoDB's fully managed cloud service, and chose Google Cloud as its cloud provider. Since POPS was already using YouTube and exporting its data from YouTube to Google BigQuery, moving its on-premises infrastructure to Google Cloud was the logical next step. Along with allowing for scale, MongoDB Atlas simplified database and backup management for the POPS team. Within two years, the POPS Kids app went from nothing to signing 1.5 million users and generating over 32 million views. This is not simply a story of rapid growth. POPS has also reimagined its developer culture to retain agility as it adds capacity, reconfiguring its monolithic architecture into smaller, more nimble microservices. Developers can now reuse existing components, saving time and freeing them to focus on service improvements. Martin has a team of 50 engineers working full-time on the platform. In just a few clicks, the team can configure, restore, or query the database at any point in its history. This supports both disaster recovery and the ability to quickly spin up environments for testing new features. “The advantage of building our application using MongoDB Atlas is that it allows us to expand in the region without significant time investment from our team. We can take advantage of the multi-region replication to maintain our level of service. That is incredibly valuable to us," said Martin. This flexibility and capability meant that MongoDB could match POPS’ fast growth curve, from ambitious startup to regional enterprise. For Martin, rapid expansion should not come at the expense of security and control. “MongoDB gives built-in control over all our data. 
It gives us enterprise-grade features to integrate with our own security protocols and compliance standards. We can deploy a dedicated cluster in a unique virtual private network with its own firewall," he added. MongoDB Atlas now provides all of POPS' apps with a fully managed service on Google’s globally scalable and reliable infrastructure. The broader business is thriving. POPS now provides music, esports, news, and game show content to over 212 million subscribers. Today, POPS is present in Vietnam, Thailand, and Indonesia, and plans to add new markets in 2021. POPS Kids has become the most beloved and well-known kids’ brand in Southeast Asia. Watch the full video from POPS' presentation at MongoDB.live here.
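The point-in-time restore capability POPS relies on can be pictured as replaying an ordered log of operations up to a cutoff timestamp. The toy sketch below is a conceptual illustration only, not how Atlas continuous backup works internally; the operation format and timestamps are invented for the example.

```python
from datetime import datetime

# A toy operation log: an ordered list of (timestamp, operation) entries.
oplog = [
    (datetime(2020, 12, 1, 9, 0), {"op": "insert", "id": 1, "doc": {"title": "Ep 1"}}),
    (datetime(2020, 12, 1, 9, 5), {"op": "insert", "id": 2, "doc": {"title": "Ep 2"}}),
    (datetime(2020, 12, 1, 9, 9), {"op": "delete", "id": 1}),
]

def restore_to(point_in_time):
    """Rebuild collection state by replaying operations up to a cutoff time."""
    state = {}
    for ts, entry in oplog:
        if ts > point_in_time:
            break  # ignore everything after the chosen moment
        if entry["op"] == "insert":
            state[entry["id"]] = entry["doc"]
        elif entry["op"] == "delete":
            state.pop(entry["id"], None)
    return state

restore_to(datetime(2020, 12, 1, 9, 6))  # before the delete: both docs present
```

Restoring to 9:06 recovers both documents, while restoring to 9:10 reflects the delete. The same replay idea is what lets a team spin up a test environment from any moment in the database's history.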

December 23, 2020
Applied
