MongoDB Joins Auth0 to Help Startups Combat Security Risks
We are excited to announce that MongoDB for Startups is collaborating with Auth0 for Startups to provide top security for applications built by the most innovative startups.

Why should a startup be part of the MongoDB and Auth0 startup programs?

Customers, investors, and stakeholders expect many different things from a company, but one common requirement is responsibly managing their data. Companies choose MongoDB because it accelerates application development and makes it easier for developers to work with data. Developers who are mindful of security, compliance, and data privacy use the robust Auth0 platform to create great customer experiences with features like single sign-on and multi-factor authentication.

“Auth0 and MongoDB are very complementary in nature. While MongoDB provides a strong, secure data platform to store sensitive workloads, Auth0 provides secure access for anyone with the proper authorization,” says Soumyarka Mondal, co-founder of Sybill.ai. “We are safely using Auth0 as one of the data stores for the encryption piece, as well as using those keys to encrypt all of our users’ confidential information inside MongoDB.”

What is the Auth0 for Startups Program?

Auth0, powered by Okta, takes a modern approach to identity and enables startups to provide secure access to any application, for any user. Through Auth0 for Startups, we are bringing the convenience, privacy, and security of Auth0 to early-stage ventures, allowing them to focus on growing their business quickly. The Auth0 for Startups program is free for one year and supports:

- 100,000 monthly active users
- Five enterprise connections
- Passwordless authentication
- Breached password detection
- 50+ integrations, 60+ SDKs, and 50+ social & IdP connections

What is the MongoDB for Startups Program?

MongoDB for Startups is focused on enabling the success of high-growth startups from ideation to IPO.
The program is designed to give startups access to the best technical database for their rapidly scaling ventures. Apply to our program and you will receive:

- $500 in credits for all MongoDB cloud products (valid for 12 months)
- A dedicated technical advisor for a two-hour, one-on-one consultation to help you with your data migration and optimization
- Co-marketing opportunities
- Access to the MongoDB developer ecosystem and access to our VC partners

Apply to Auth0 for Startups and the MongoDB for Startups Program today.
5 Key Questions for App-Driven Analytics
Note: This article originally appeared in The New Stack.

Data that powers applications and data that powers analytics typically live in separate domains in the data estate. This separation is mainly because they serve different strategic purposes for an organization: Applications are used for engaging with customers, while analytics are for insight. The two classes of workloads have different requirements—such as read and write access patterns, concurrency, and latency—so organizations typically deploy purpose-built databases and duplicate data between them to satisfy the unique requirements of each use case. As distinct as these systems are, they're also highly interdependent in today's digital economy. Application data is fed into analytics platforms, where it's combined and enriched with other operational and historical data; supplemented with business intelligence (BI), machine learning (ML), and predictive analytics; and sometimes fed back to applications to deliver richer experiences. Picture, for example, an ecommerce system that segments users by demographic data and past purchases and then serves relevant recommendations when they next visit the website. The process of moving data between the two types of systems is here to stay. But, today, that's not enough. The current digital economy, with the seamless user experiences customers have come to expect, requires that applications also become smarter, autonomously taking intelligent actions in real time on our behalf. Along with smarter apps, businesses want insights faster so they know what is happening “in the moment.” To meet these demands, we can no longer rely only on copying data out of our operational systems into centralized analytics stores. Moving data takes time and creates too much separation between application events and analytical actions. Instead, analytics processing must be “shifted left” to the source of the data—to the applications themselves.
We call this shift application-driven analytics. And it’s a shift that both developers and analytics teams need to be ready to embrace.

Find out why the MongoDB Atlas developer data platform was recently named a Leader in the Forrester Wave: Translytical Data Platforms, Q4 2022.

Defining required capabilities

Embracing the shift is one thing; having the capabilities to implement it is another. In this article, we break down the capabilities required to implement application-driven analytics into the following five critical questions for developers:

1. How do developers access the tools they need to build sophisticated analytics queries directly into their application code?
2. How do developers make sense of voluminous streams of time series data?
3. How do developers create intelligent applications that automatically react to events in real time?
4. How do developers combine live application data in hot database storage with aged data in cooler cloud storage to make predictions?
5. How can developers bring analytics into applications without compromising performance?

To take a deeper dive into app-driven analytics—including specific requirements for developers compared with data analysts and real-world success stories—download our white paper: Application-Driven Analytics.

1. How do developers access the tools they need to build sophisticated analytics queries directly into their application code?

To unlock the latent power of application data that exists across the data estate, developers rely on the ability to perform CRUD operations, sophisticated aggregations, and data transformations. The primary tool for delivering on these capabilities is an API that allows them to query data any way they need, from simple lookups to building more sophisticated data processing pipelines. Developers need that API implemented as an extension of their preferred programming language to remain "in the zone" as they work through problems in a flow state.
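As a rough illustration of what such a data processing pipeline looks like, here is a minimal in-memory sketch in Python: a filter stage followed by a grouping stage, mirroring the shape of a $match/$group aggregation. The data and stage functions are invented for illustration; a real application would express the pipeline through its driver's query API and let the database execute it.

```python
# Minimal in-memory sketch of an aggregation-style pipeline:
# a match stage filters documents, a group stage totals per key.
# Illustrative only -- in a real app the equivalent pipeline would
# be sent to the database via a driver's aggregate() call.

def match(docs, predicate):
    """Keep only documents satisfying the predicate ($match)."""
    return [d for d in docs if predicate(d)]

def group_sum(docs, key, value):
    """Total one field per distinct value of another ($group/$sum)."""
    totals = {}
    for d in docs:
        totals[d[key]] = totals.get(d[key], 0) + d[value]
    return totals

orders = [
    {"customer": "ada", "status": "complete", "total": 30},
    {"customer": "bob", "status": "pending",  "total": 15},
    {"customer": "ada", "status": "complete", "total": 20},
]

completed = match(orders, lambda d: d["status"] == "complete")
per_customer = group_sum(completed, key="customer", value="total")
print(per_customer)  # {'ada': 50}
```

Expressing the stages as composable functions is the point: the same chained-stage style is what lets developers build analytics directly into application code rather than bolting on a separate query tool.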
Alongside a powerful API, developers need a versatile query engine and indexing that returns results in the most efficient way possible. Without indexing, the database engine needs to go through each record to find a match. With indexing, the database can find relevant results faster and with less overhead. Once developers start interacting with the database systematically, they need tools that can give them visibility into query performance so they can tune and optimize. Powerful tools like MongoDB Compass let users monitor real-time server and database metrics as well as visualize performance issues. Additionally, column-oriented representation of data can be used to power in-app visualizations and analytics on top of transactional data. Other MongoDB Atlas tools can be used to make performance recommendations, such as index and schema suggestions, to further streamline database queries.

2. How do you make sense of voluminous streams of time series data?

Time series data is typical in many modern applications. Internet of Things (IoT) sensor data, financial trades, clickstreams, and logs enable businesses to surface valuable insights. To help, MongoDB developed the highly optimized time series collection type and clustered indexes. Built on a highly compressible columnar storage format, time series collections can reduce storage and I/O overhead by as much as 70%. Developers need the ability to query and analyze this data across rolling time windows while filling any gaps in incoming data. They also need a way to visualize this data in real time to understand complex trends. Another key requirement is a mechanism that automates the management of the time series data lifecycle. As data ages, it should be moved out of hot storage to avoid congestion on live systems; however, there is still value in that data, especially in aggregated form to provide historical analysis.
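To make the rolling-window idea concrete, here is a small, self-contained Python sketch that buckets raw readings into fixed time windows and marks empty windows as gaps. It is a plain-Python stand-in for what a time series query would do inside the database; the sample readings, window size, and function name are all invented for illustration.

```python
# Sketch: bucket sensor readings into fixed windows and mark empty
# windows with None, the way a time series query might surface gaps
# in incoming data. Plain Python, no database -- illustrative only.

def windowed_avg(readings, start, stop, width):
    """readings: list of (timestamp, value); returns the average
    value per window, with None marking windows that received no data."""
    buckets = {t: [] for t in range(start, stop, width)}
    for ts, value in readings:
        slot = start + ((ts - start) // width) * width
        if slot in buckets:          # ignore readings outside the range
            buckets[slot].append(value)
    return {
        slot: (sum(vals) / len(vals) if vals else None)  # None == gap
        for slot, vals in buckets.items()
    }

readings = [(0, 10.0), (5, 20.0), (22, 30.0)]  # no data between t=10 and t=20
print(windowed_avg(readings, start=0, stop=30, width=10))
# {0: 15.0, 10: None, 20: 30.0}
```

Surfacing the gap explicitly (rather than silently skipping the empty window) is what lets downstream visualization and analysis distinguish "no data arrived" from "the value was zero."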
So, organizations need a systematic way of tiering that data into low-cost object storage in order to maintain their ability to access and query that data for the insights it can surface.

3. How do you create intelligent applications that automatically react to events in real time?

Modern applications must be able to continuously analyze data in real time as they react to live events. Dynamic pricing in a ride-hailing service, recalculating delivery times in a logistics app due to changing traffic conditions, triggering a service call when a factory machine component starts to fail, or initiating a trade when stock markets move—these are just a few examples of in-app analytics that require continuous, real-time data analysis. MongoDB Atlas has a host of capabilities to support these requirements. With change streams, for example, all database changes are published to an API, notifying subscribing applications when an event matches predefined criteria. Atlas triggers and functions can then automatically execute application code in response to the event, allowing you to build reactive, real-time, in-app analytics.

4. How do you combine live application data in hot database storage with aged data in cooler cloud storage to make predictions?

Data is increasingly distributed across different applications, microservices, and even cloud providers. Some of that data consists of newly ingested time series measurements or orders made in your ecommerce store and resides in hot database storage. Other data sets consist of older data that might be archived in lower-cost object cloud storage. Organizations must be able to query, blend, and analyze fresh data coming in from microservices and IoT devices along with cooler data, APIs, and third-party data sources that reside in object stores in ways not possible with regular databases.
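The publish/subscribe pattern behind change streams and triggers (question 3 above) can be sketched in a few lines of plain Python. This is an illustrative stand-in, not the actual change streams API: subscribers register a predicate and a callback, and any change event matching the predicate fires the callback.

```python
# Sketch of the change-stream/trigger pattern: subscribers register a
# predicate plus a callback, and every change event matching the
# predicate fires the callback. Plain-Python stand-in for the real
# change streams API; the event fields are invented for illustration.

class ChangeFeed:
    def __init__(self):
        self.subscribers = []

    def watch(self, predicate, callback):
        """Register interest in events matching the predicate."""
        self.subscribers.append((predicate, callback))

    def publish(self, event):
        """Deliver a change event to every matching subscriber."""
        for predicate, callback in self.subscribers:
            if predicate(event):
                callback(event)

alerts = []
feed = ChangeFeed()
# React only to temperature readings above a threshold.
feed.watch(lambda e: e.get("temp", 0) > 100,
           lambda e: alerts.append("overheat on " + e["machine"]))

feed.publish({"machine": "press-1", "temp": 87})   # ignored
feed.publish({"machine": "press-2", "temp": 112})  # fires the callback
print(alerts)  # ['overheat on press-2']
```

The key design point is that the application never polls: the analytical check runs at the moment the data changes, which is what makes real-time, in-app reactions possible.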
The ability to bring all key data assets together is critical for understanding trends and making predictions, whether that's handled by a human or as part of a machine learning process.

5. How can you bring analytics into your applications without compromising their performance?

Live, customer-facing applications need to serve many concurrent users while ensuring low, predictable latency, and they need to do so consistently at scale. Any slowdown degrades customer experience and drives customers toward competitors. In one frequently cited study, Amazon found that just 100 milliseconds of extra load time cost them 1% in sales. So, it's critical that analytics queries on live data don't affect app performance. A distributed architecture can help you enforce isolation between the transactional and analytical sides of an application within a single database cluster. You can also use sophisticated replication techniques to move data to systems that are totally isolated but look like a single system to the app.

Next steps to app-driven analytics

As application-driven analytics becomes pervasive, the MongoDB Atlas developer data platform unifies the core data services needed to make smarter apps and improved business visibility a reality. Atlas does this by seamlessly bridging the traditional divide between transactional and analytical workloads in an elegant and integrated data architecture. With MongoDB Atlas, you get a single platform managing a common data set for both developers and analysts. With its flexible document data model and unified query interface, the Atlas platform minimizes data movement and duplication and eliminates data silos and architectural complexity while unlocking analytics faster and at lower cost on live operational data. It does all this while meeting the most demanding requirements for resilience, scale, and data privacy.
For more information about how to implement app-driven analytics and how the MongoDB developer data platform gives you the tools needed to succeed, download our white paper, Application-Driven Analytics.
MongoDB World 2022 Recap — Performance Gotchas of Replicas Spanning Multiple Data Centers
Indeed has more than 25 million open jobs online at any one time. It stores more than 225 million resumes on Indeed systems, and it has 250 million unique users every month. Indeed operates enterprise-wide global clusters in the cloud across multiple availability zones all around the world, including the United States, Asia-Pacific, Europe, and Australia. Indeed is also a MongoDB super user. About 50% of everything Indeed does is built on MongoDB.

In a recent session at MongoDB World 2022, Indeed senior cloud database engineer Alex Leong shared real-world experiences of performance issues when spanning replica sets across multiple data centers. He also covered how to identify these issues and, most importantly, how to fix them. This article provides highlights from Leong's presentation, including dealing with changes in sync sources, replication lags, and more.

Resilience and performance

Indeed maintains multiple data centers for resiliency. Having multiple data centers ensures there's no single point of failure and keeps data in close proximity to job seekers' locations. This approach facilitates faster response times and a better overall end-user experience. Running multiple data centers can introduce other performance issues, however. One issue involves the initial sync of new nodes in the system, which needs to happen as quickly as possible to avoid returning stale data. Write concern is another critical consideration: If there's an interruption on a primary node and a failover to a secondary, then any changes captured on the secondary while the system was running in failover mode must be preserved when the original primary rejoins. Also, when you're running multiple data centers, changes in sync sources can occur that go unnoticed, and replication lags can occur when data centers are located far apart from each other.
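As a concrete illustration of the write concern trade-off described above, the following Python sketch models when a write counts as acknowledged: with a "majority" write concern, more than half of the replica set members must confirm the write before the application treats it as durable. The function and numbers are invented for illustration; real drivers handle this through the writeConcern option.

```python
# Sketch of the write-concern idea: a write is only confirmed once
# enough replica set members acknowledge it. "majority" means more
# than half of the set. Illustrative stand-in, not driver code.

def write_acknowledged(acks, replica_set_size, concern="majority"):
    """Return True if `acks` acknowledgments satisfy the write concern."""
    if concern == "majority":
        required = replica_set_size // 2 + 1
    else:
        required = int(concern)  # numeric concern, e.g. "1" or "3"
    return acks >= required

# 5-node replica set spanning data centers: 3 acks form a majority.
print(write_acknowledged(acks=3, replica_set_size=5))               # True
print(write_acknowledged(acks=2, replica_set_size=5))               # False
print(write_acknowledged(acks=1, replica_set_size=5, concern="1"))  # True
```

The trade-off is visible in the numbers: a majority concern protects against losing writes during failover, but across distant data centers each extra required acknowledgment adds round-trip latency to every write.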
Overriding sync sources

When you have an environment with hundreds of millions of users and enormous volumes of data spanning several geographic regions, spinning up and synchronizing a new node in a replica set creates logistical hurdles. To start, you have to decide where the new node syncs from. It seems logical that the default decision would be to sync with the nearest node. But, as Leong said in his session, at times you may not get the nearest sync source, and you may have to override the default sync source to choose the best one. This decision needs to be made early, Leong said, because making it later means any progress you've made toward syncing the new node will have been wasted.

Replication lags

Replication lags can occur between the primary and secondary nodes for several reasons, including downtime (planned or unplanned) on the primary server, a network failure, or a disk failure. Whatever the reason, there are ways to speed things up. In his session, Leong illustrates how to tune the WiredTiger cache size to accelerate replication between nodes.

Changes in sync sources

Leong uses the term sync topology to describe how primary and secondary nodes are configured for syncing data between them. In some scenarios, a secondary node can change its sync source from one node to another, perhaps because the first node was busy at the time. MongoDB makes this change automatically, and it might go unnoticed without looking at the log.

Fixing cross-data center write concerns

According to Leong, when write performance decreases, 99% of the time it's because of a change in sync sources. To be proactive, Leong built a write performance monitor to identify and self-heal decreases in write performance so he doesn't have to find out the hard way (from users).
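A minimal version of the kind of write performance monitor Leong describes might look like the following Python sketch: it keeps a rolling window of recent write latencies and flags a regression when the average drifts past a threshold. The class name, window size, and threshold are all invented for illustration; a production monitor would feed its alerts into whatever remediation (such as re-pinning the sync source) the operator chooses.

```python
# Sketch of a simple write-latency monitor in the spirit of the
# self-healing monitor described above: keep a rolling window of
# write latencies and flag a regression when the recent average
# exceeds a threshold. All names and numbers are illustrative.

from collections import deque

class WriteLatencyMonitor:
    def __init__(self, window=5, threshold_ms=50.0):
        self.samples = deque(maxlen=window)  # only the N most recent writes
        self.threshold_ms = threshold_ms

    def record(self, latency_ms):
        self.samples.append(latency_ms)

    def degraded(self):
        """True when the recent average latency exceeds the threshold."""
        if not self.samples:
            return False
        return sum(self.samples) / len(self.samples) > self.threshold_ms

monitor = WriteLatencyMonitor(window=3, threshold_ms=50.0)
for ms in (12.0, 15.0, 11.0):
    monitor.record(ms)
print(monitor.degraded())  # False: writes are healthy

for ms in (80.0, 95.0, 120.0):  # e.g. after an unnoticed sync source change
    monitor.record(ms)
print(monitor.degraded())  # True: recent average is well above threshold
```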
Other critical performance issues covered in the session include chained replication (the process by which secondary nodes replicate from node to node), changing write concern when a secondary node goes down, and how to configure write concerns across Availability Zones in AWS. For more details, watch the complete session from MongoDB World 2022: Performance Gotchas of Replicas Spanning Multi Datacenters.
Built by MongoDB: Qubitro Makes Device Data Accessible Anywhere it's Needed
Increased cloud adoption and the expansion of 5G networks are expected to drive growth in IoT technologies over the next few years. Emergent IoT technologies are poised to transform businesses and the social fabric, including healthcare, smart homes and cities, and the government sector. Delaware-based startup Qubitro looks to capitalize on the potentially explosive growth in IoT technology by helping companies bring smart solutions to market faster. Qubitro, which is also a member of the MongoDB for Startups program, offers the fastest way of collecting and processing device data to activate it wherever it's needed.

Product vision

Qubitro founder and CEO Beray Bentesen estimates that there are now billions of devices producing massive amounts of data. The company's mission, he says, is to make device data accessible anywhere it's needed as fast as possible and at a lower cost than ever before. By collecting device data from multiple networks and providing various developer toolkits for activating data in applications, Qubitro enables data-driven decision making and modern application development. The company has two main products: the Qubitro Portal, a user interface where users can collaborate with other members of their internal team and create real-time actions such as rules and output integrations with their applications, and developer tools, including APIs and SDKs, that allow for custom solutions without having to develop data infrastructure from scratch. Bentesen wants Qubitro to become the fabric of a digital transformation powered by device data. "We aim to make any data published from devices flow over our network and make any application that relies on device data integrate with our services," Bentesen says. The ideal Qubitro customer is one that needs to put device data into their solutions. "It could be startups, IoT-adopting enterprises, or custom solution providers," Bentesen says.
The company has also been heavily investing in developer experience, he adds.

A platform to build upon

The secret to building a platform that can process data in milliseconds with privacy and user experience combined is, not surprisingly, another platform — specifically, the MongoDB Atlas developer data platform. "We offer managed connectivity solutions, user interface, and the APIs," Bentesen says. "So we process tons of data. And MongoDB is in the middle of all those inputs and outputs." The MongoDB for Startups program helps startups build faster and scale further with free MongoDB Atlas credits, one-on-one technical advice, co-marketing opportunities, and access to a vast partner network. Bentesen says the company has benefited from being in the program in a number of ways. "In the early days when we joined the program, we were able to get answers to questions that would take probably weeks or maybe more if you search on the internet," he says. "We were able to understand what to develop, which saved us a lot of time and, of course, expense." The MongoDB Atlas platform also helps their developers during those crucial stages prior to launching a new feature and as the product grows in popularity. "With MongoDB Atlas, we could test our development environment before going to production," Bentesen says. "And as we scale, we're able to observe the traffic through MongoDB Atlas and optimize thanks to the tools MongoDB offers, like MongoDB Compass, without dealing with code or complex environments." MongoDB's document model database made it an easy choice for the company's needs. "We decided to use MongoDB because it's a flexible environment," Bentesen says. "We knew we would have to build new features over time. So we needed to go with a flexible database. We're still adding more and more features without breaking the entire system. We wanted that flexibility, and in a managed cloud offering, which MongoDB gives us."
Bentesen also cites MongoDB's Time Series collections as one of the features he's most excited about, since the vast majority of IoT solutions rely on time series data.

Looking forward

Bentesen says Qubitro will likely add more enterprise features in the future. The more they grow, he says, the more insight they're getting into what customers want. The company also plans to invest heavily in growing its community of users and, of course, attracting more talent. Bentesen says the company fully embraces a remote-first culture and believes the team can work faster remotely. If you're looking forward to building the next generation of connected solutions, visit Qubitro.com, join the company's Discord server, or have a chat anytime, even weekends! Are you part of a startup and interested in joining the MongoDB for Startups program? Apply now. For more startups content, check out our previous blog on ChargeHub.
Built With MongoDB: ChargeHub Simplifies the Electric Charging Experience
MongoDB Supports Cutting-Edge Startups with NVIDIA Inception
We are excited to announce that MongoDB for Startups is collaborating with NVIDIA Inception to power cutting-edge startups in AI, autonomous driving, gaming, robotics, healthcare, and more. Here, we answer frequently asked questions to provide a brief overview of the initiative.

Why should a startup build its MVP with MongoDB for Startups and NVIDIA Inception?

If you are a founder looking to build innovative technologies in AI, data science, gaming, and other breakthrough industries, then you’ll want to leverage MongoDB for Startups (a program that helps startups build faster and scale further with free MongoDB Atlas credits, one-on-one technical advice, co-marketing opportunities, and access to our partner network) and NVIDIA Inception (a free global program designed to nurture cutting-edge startups) for your MVP. Together with MongoDB and NVIDIA Inception, founders can leverage the most intuitive and flexible way to work with data so they can adapt quickly to the changing needs of a growing business. Utilizing MongoDB Atlas, a fully managed multi-cloud developer data platform, startups can scale rapidly while enjoying the freedom to run anywhere with the best cloud environments. NVIDIA Inception members get a custom set of ongoing benefits, such as NVIDIA Deep Learning Institute credits, marketing support, and technology assistance, which provides startups with the fundamental tools to help them grow.

What is the MongoDB for Startups program?

MongoDB for Startups is focused on creating a valuable technical startup program that enables the success of high-growth startups from ideation to IPO. We designed this program to give startups access to the best technical database for their rapidly scaling ventures.
Apply to our program and you will receive:

- $500 in credits for all MongoDB cloud products (valid for 12 months)
- A dedicated technical advisor for a two-hour, one-on-one consultation to help you with your data migration and optimization
- Co-marketing opportunities
- Access to the MongoDB developer ecosystem and access to our VC partners

What is the NVIDIA Inception Program?

NVIDIA Inception helps startups in all industries accelerate growth and build their products faster. With more than 10,000 startup members, the program is free and available for tech startups in all stages. Inception members can receive:

- Up to $100K in AWS cloud credits
- Technical training and engineering guidance
- Product discounts
- Co-marketing and co-selling
- Customer introductions and VC exposure

Apply to NVIDIA Inception and the MongoDB for Startups Program today.
How to Model Data in Document Databases for Read and Write Performance
MongoDB is often seen as a good choice for storing unstructured data. The ability to persist data in MongoDB without defining what type of data it is or designing a schema for it is one of the reasons many of our customers choose us. But the idea that MongoDB is a "schemaless" database is not accurate. Although a document database does allow you to store data without defining what it is, the shape of that data matters if you plan to do more than simply retrieve whole documents by keys.

The Need to Model for NoSQL

At this year's MongoDB World event, Daniel Coupal, staff developer advocate at MongoDB, explained that the need to model data in a document database is due to the presence of constraints that must be taken into account when you're persisting data in MongoDB. Constraints include things like network and hard disk speed, the maximum size of documents, and features that you don't have now but might add later. "If you look at the stack of an application, you have the application that talks to MongoDB that talks to whatever layer you add — it could be the cloud or a physical machine — those constraints are going to map to some of those layers," Coupal said. "It's imperative to know the features of the products you use." MongoDB offers features like transactions, field-level encryption, data federation, and archives, which require that you model data differently. In relational databases, data modeling is fairly straightforward due to the nature of third normal form (3NF) — the schema design approach for relational databases. So, essentially, there's only one solution for modeling the database. With the document model and MongoDB, however, you have several options for data modeling. You can nest everything under a single collection, and possibly wind up with duplicate sets of data (and, therefore, data concurrency issues), or you can use separate collections for different datasets and avoid duplicate data altogether.
Ultimately, according to Coupal, "the optimal grouping of objects into collections is determined by the workload." During the session, Coupal provided a breakdown of a data modeling methodology that involves a three-phase process: starting with the workload, proceeding to relationships, and moving to patterns for optimization purposes. "In a lot of the solutions we're trying to build with NoSQL, performance is a top requirement," he said. He also cited better performance as one of the big reasons why people switch from SQL to MongoDB.

What are the Data Access Patterns?

In essence, with MongoDB, the way you plan to access the data determines the way you store it in the database. Data that is accessed together should be stored together. In the session, Coupal also presented an insightful analogy between the nature of data modeling in relational databases versus the document model in MongoDB. Essentially, the difference is that with the relational model, if you have a car and you want to model it, you'll take each part of the car individually and place it in its own table. Then, when you want to use the car, you have to reassemble it part by part (and table by table) before you can drive it. With MongoDB, you take the car and put it in the garage (the equivalent of a collection). When you want to use it, you take it out. That's it. "We do only one read on the disk to get everything we need together," Coupal said.

Techniques for Data Modeling in MongoDB

Coupal also provided an explanation of two different data modeling techniques: referencing and embedding. Embedding is a way to combine what would normally be two tables in a tabular database into one document using an array. "The array is the expression of the one-to-many relationship," Coupal said. Referencing is useful when the "many" side of the relationship is a huge number. Although MongoDB does support multi-document transactions, in almost all cases it's better to co-locate related data in a single document for more efficient read-write performance.
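The two techniques can be sketched with plain Python dicts standing in for documents (all field names are invented for illustration): an embedded one-to-many relationship lives in an array inside the parent document, while a referenced relationship keeps the "many" side in its own collection, keyed back to the parent.

```python
# Sketch of the two modeling techniques discussed above, using plain
# dicts as stand-in documents. Field names invented for illustration.

# Embedding: the one-to-many relationship is an array inside the
# parent document, so one read fetches everything ("the whole car").
order_embedded = {
    "_id": 1,
    "customer": "ada",
    "items": [
        {"sku": "A-100", "qty": 2},
        {"sku": "B-200", "qty": 1},
    ],
}

# Referencing: the "many" side is huge, so child documents live in a
# separate collection and point back at the parent by id.
products = [
    {"_id": "A-100", "reviews_count": 48210},
    {"_id": "B-200", "reviews_count": 91877},
]
reviews = [  # one document per review -- far too many to embed
    {"product_id": "A-100", "stars": 5},
    {"product_id": "A-100", "stars": 3},
]

# Embedded data comes back in a single lookup; referenced data needs
# a second query (or a join-like $lookup) on the reference field.
skus_in_order = [item["sku"] for item in order_embedded["items"]]
reviews_for_a100 = [r for r in reviews if r["product_id"] == "A-100"]
print(skus_in_order)          # ['A-100', 'B-200']
print(len(reviews_for_a100))  # 2
```

Which shape wins depends on the workload: if the order and its items are always read together, embedding gives the single-read access the car analogy describes; if the child set grows without bound, referencing keeps the parent document small.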
Developers are the ones most likely to understand the data access patterns for their applications. Properly designed schemas can increase performance for a given set of hardware by reducing computation, I/O, and contention. What really differentiates MongoDB from relational databases is the ability to co-locate related data in the atomic unit of storage, so multiple values for an attribute can exist within a single record rather than being broken up into rows and stored independently. A document database with a properly designed schema lets you filter and retrieve data with minimal computational overhead and in a single I/O operation. This approach can make finding and retrieving data far faster and less expensive. To see the complete session from MongoDB World 2022, which includes a list of 12 data modeling patterns and techniques for evolving schema in MongoDB, watch The Principles of Data Modeling for MongoDB.
Built With MongoDB: Thunkable Brings the Power of App Development to Non-Developers
Great ideas can come from anywhere. But, if you have a great idea for a mobile app, you won't get far without developer talent. In today's app market, there are so many ideas in various stages of development that developer talent has become scarce and costly. To fill the gap, low-code and no-code solutions have emerged. Thunkable is a no-code platform that makes it easy to build custom native mobile apps without any advanced software engineering knowledge or certifications. The platform has seen tremendous growth, recently expanding to more than three million users. MongoDB has been pivotal to that growth, enabling the company to offer its services at scale without having to worry about managing the database. Thunkable is also part of the MongoDB for Startups program, which has helped the company solve some of the technical hurdles involved with scaling to millions of users.

App creation for everyone

Just because you don't have a computer science degree doesn't mean you can't come up with a great idea for an app. Thunkable co-founder and CTO Wei Li says the company is on a mission to democratize mobile app development. "We are currently empowering more than three million users across the globe that can come to our platform to build and publish their apps and do it without writing a single piece of code," Li says. Thunkable uses a simple drag-and-drop design canvas and powerful logic blocks to give innovators the tools they need to breathe life into their app designs. "It's very exciting seeing people from all different backgrounds trying to build solutions," says Jose Dominguez, engineer at Thunkable. "The part that I'm most proud about," says Li, "is our global users. Every day, I hear wonderful stories. For example, we recently have had people using our platform to build mobile apps to coordinate relief efforts in Ukraine.
And since the pandemic, we have seen more users coming to our platform to build an app that addresses their needs for work, family, and community." Mobile phones are a transformational technology, but at the same time, there's untapped potential waiting to be exploited. Li believes Thunkable can help unleash the latent power lurking in our back pockets. "There are so many needs people could solve by using their smartphones," Li says. "But because they cannot program their phone, they become passive consumers. We want to empower them to become active creators."

Modern app development with MongoDB

Thunkable chose MongoDB early on because it mapped to its existing architecture; it stuck with MongoDB because it scaled when it became critical to do so. "We decided to build with MongoDB because it fits our data very naturally," Dominguez says. "We abstract our users' apps as documents, so it's a natural fit. We do a lot of writes, so we needed a system to handle those kinds of loads, and MongoDB was the perfect fit." Like a lot of startups, Thunkable has had to figure out how to achieve its goals with limited resources. Its engineering team consists of about four to six developers. "The engineering team has always been focused on building the product," Dominguez says. "So not having to worry about the database was a great win for us. It allowed us to iterate very fast and build new versions of the platform without having to worry about scaling the database or backups." After scaling to three million users, the Thunkable engineering team needed to rethink some of its design decisions about data. So, they talked with MongoDB engineers, courtesy of the startup program. Since then, their data storage needs have decreased while performance has improved. "Our partnership with MongoDB has been fundamental to our growth," Dominguez says. Li concurs: "As we scale, supporting more enterprise customers, we don't have to worry about database management issues.
We know MongoDB will do a great job helping us as we scale."

Building better together

In addition to free MongoDB Atlas credits and one-on-one technical support, participants in the MongoDB for Startups program enjoy co-marketing opportunities and access to our partner network. "It's wonderful to be part of the MongoDB for Startups program," Li says. "We have all the support we need, from database management to MongoDB upgrades and maintenance. You just go to one portal, one website. It's wonderful." MongoDB technical support has also been a lifesaver for Thunkable, says Dominguez. "We had an initial call with an engineer and went through our logs. The engineer spotted a few things that we could fix right away. We were having issues with our oplog — the log that MongoDB uses to replicate changes to other servers — and the engineers helped get us out of those issues." The MongoDB dashboard is another tool Dominguez cites as especially helpful. "For us, being able to log in to the dashboard and have all the functionality already there is a lifesaver," he says. "Not only being able to monitor the cluster, but digging into our collections and seeing if our indexes are performing properly gives us tremendous insight. The integration of MongoDB and Google Cloud makes our infrastructure much easier to maintain."

Future integrations

Although they've scaled to millions of users, the team at Thunkable isn't finished. "This upcoming year will be a big one for Thunkable," Li says. "We're going to use our recent funding to scale up our support for enterprise, mainly for team support, collaboration, and enterprise design integration. For example, we recently integrated with Figma so you can import your Figma design into the Thunkable platform and have a functional app." "We're also currently hiring," Li adds. "We're planning to double our team within the next six months." 
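Dominguez's point about abstracting each user's app as a document is easy to picture. Below is a minimal, illustrative sketch in plain Python of what such a document might look like; the structure and field names are invented for illustration and are not Thunkable's actual schema. The point is that a drag-and-drop app's nested screens and components map naturally onto a single document rather than across many relational tables.

```python
# A hypothetical no-code app stored as one document.
# All names and fields here are invented for illustration.
app_doc = {
    "name": "Relief Coordinator",
    "owner_id": "user_123",
    "screens": [
        {
            "title": "Home",
            "components": [
                {"type": "label", "text": "Welcome"},
                {"type": "button", "text": "Volunteer",
                 "on_click": {"navigate_to": "Signup"}},
            ],
        },
        {
            "title": "Signup",
            "components": [
                {"type": "form", "fields": ["name", "email"]},
            ],
        },
    ],
}

def component_count(doc):
    """Walk every screen and count its UI components."""
    return sum(len(screen["components"]) for screen in doc["screens"])

print(component_count(app_doc))  # 3
```

Because the whole app lives in one document, each drag-and-drop edit becomes a single document write, which fits the write-heavy workload Dominguez describes.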
To learn more about Thunkable, or to start building your own mobile apps for free, check out Thunkable. Are you part of a startup and interested in joining the MongoDB for Startups program? Apply now.
MongoDB World Recap: Why Serverless Is the Architecture Developers Have Been Waiting For
Serverless architecture is growing rapidly, for good reason. Developers, for the most part, do not enjoy provisioning or managing infrastructure. For applications with intermittent traffic and long idle times, provisioning becomes a moving target. Overprovision and you risk paying for idle resources. Underprovision and you risk slow response times, poor UX, and high app abandonment rates. Serverless architecture abstracts away server, storage, and network provisioning and management so developers can focus on building differentiating features and creating great app experiences. Enough developers and IT organizations are embracing serverless architecture that the market is expected to grow from $7 billion in 2020 to $37 billion by 2028.

History and Evolution of Serverless

The concept of serverless is older than its name. At this year's MongoDB World conference, Jeremy Daly, GM of Serverless Cloud at Serverless Inc., provided a brief history of serverless architecture—before it was called serverless—and explained why today's implementations are different from those early iterations. Serverless architecture represents the evolution of server environments from virtualization to containerization to cloud computing. In his presentation, Daly traces the beginning of serverless to the launch of Amazon Simple Storage Service (Amazon S3) in 2006. Then came services such as Amazon's CloudWatch, Simple Notification Service (SNS), CloudFront, Route 53, CloudFormation, DynamoDB, and Kinesis, all of which were serverless solutions before the term existed. In 2014, Amazon released what it called "functions as a service" in the form of AWS Lambda, the first serverless solution to see widespread adoption. But limitations with these early iterations prevented serverless from really taking off. The primary drawback was cold starts. 
"If something isn't pre-provisioned, then when you request it, it has to set up a container or function to make that available to respond to your request," Daly explained. "That's still a thing today, but to a much lesser degree." He also cited resource restrictions and limitations—such as the inability to call a Lambda function with an HTTP API request or to connect to virtual private clouds and access a database—and limited orchestration workflows as reasons why developers were slow to adopt serverless.

The Serverless Future

Serverless solutions have evolved significantly from those early iterations. Over the past several years, new services have come from AWS, Google, and Azure. And now MongoDB has announced the general availability of MongoDB Atlas Serverless at World 2022. Today's solutions solve many of the issues that existed early on. According to Daly, today's serverless solutions share five common traits:

No server management — Serverless eliminates routine administrative tasks such as having to SSH into a Linux box or provision Amazon Elastic Compute Cloud (Amazon EC2) instances.
Flexible scaling — Although there are autoscaling groups in Amazon EC2, there's still a minimum amount that you must provision. With current serverless implementations, you can and should be able to scale down to zero.
Pay for value — Sometimes referred to as "pay for use," this could entail paying for storage in a database or provisioning concurrency to eliminate cold starts, but ultimately, you're paying for the value of the services you're using.
High availability — Services are automatically provisioned across multiple availability zones for redundancy.
Event driven — When something happens, like a change in the database, it triggers a workflow, such as creating a new user account.

While the automation and functionality are there, developers still need to know how to use the services to take advantage of serverless. 
They have to know how to use different cloud services and SaaS solutions, infrastructure and cloud architecture, build and deployment pipelines, monitoring and observability, and security and compliance. These are things that a DevOps team used to handle but now are more likely to fall on the shoulders of developers. Despite the added responsibility this places on developers, Daly says serverless is the future of how developers will build apps. That's because an entire stack can be spun up independently, without having to worry about services that anyone else is running. Any developer can have their own version of the application to build on and test with. This provides fast, high-fidelity feedback loops, enables isolated stacks for different parts of the business, and ensures that development environments run the same resources as production. Daly also points out that serverless architecture doesn't replace the database as the backbone of the application. Although it's hard to build a serverless database, MongoDB Atlas Serverless has figured it out, he said. Daly cites the key characteristics developers need from a serverless database:

Fast and responsive (no cold starts)
Scale up and down quickly
Integration with serverless tools
Cloud flexibility and proximity (close to your application)
Consumption-based pricing (pay only for what you're using)

MongoDB Atlas Serverless

MongoDB Product Manager Kevin Jernigan, who co-presented with Daly, put a finer point on delivering the requirements developers are looking for in a serverless database. Creating a serverless database in Atlas takes only a few steps, and your database spins up in seconds. "What you have is an endpoint that will scale up and down automatically based upon the workload," Jernigan said. "It will scale down to zero when you're not using it." 
That essentially means you're not billed for anything except the storage you're using. And there's no cold-start penalty. "There's always infrastructure there, ready to respond to the next call you make to your database endpoint. We always have infrastructure running, waiting to respond," Jernigan said. Jernigan listed several capabilities that differentiate MongoDB Atlas Serverless from other solutions in the market. Atlas Serverless includes the full power of MongoDB, including the flexibility of the document model. There are no scaling trade-offs, so you don't have to worry about cold starts when you scale to zero. It's also available with tiered pricing that offers discounts for higher usage. Deployment flexibility allows you to move workloads back and forth between serverless and dedicated infrastructure, and you can deploy on any of the major public cloud providers. To watch the complete session from MongoDB World 2022, see Serverless: The Future of Application Development. To learn more about MongoDB Atlas Serverless, visit the serverless page on our website.
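The tiered, pay-for-value billing model described above can be made concrete with a short sketch. This is an illustrative Python example only: the tier boundaries and per-million-read rates below are invented for the example and are not MongoDB's actual prices. The key properties it demonstrates are that zero usage costs nothing (scale to zero) and that the marginal rate drops as usage grows.

```python
# Hypothetical usage tiers: (cap in millions of reads, price per million).
# These numbers are invented for illustration, not real Atlas pricing.
TIERS = [
    (50, 0.10),            # first 50M reads at $0.10 per million
    (500, 0.05),           # next 450M reads at $0.05 per million
    (float("inf"), 0.01),  # everything beyond 500M at $0.01 per million
]

def monthly_cost(millions_of_reads: float) -> float:
    """Sum the cost of each tier the month's usage passes through."""
    cost, prev_cap = 0.0, 0
    for cap, rate in TIERS:
        in_tier = min(millions_of_reads, cap) - prev_cap
        if in_tier <= 0:
            break
        cost += in_tier * rate
        prev_cap = cap
    return round(cost, 2)

print(monthly_cost(0))    # no usage, no charge -> 0.0
print(monthly_cost(200))  # 50 * 0.10 + 150 * 0.05 -> 12.5
```

Because each tier only charges for the usage that falls inside it, heavier workloads pay a lower blended rate, which is the "discounts for higher usage" behavior described above.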
Built With MongoDB: Satori Streamlines Secure Data Access
Handling data imposes contradictory responsibilities on organizations. On one hand, they need to protect data from unauthorized access. On the other, they need to extract value from data; otherwise, why collect it in the first place? The contradiction lies in the fact that to extract value from data, you have to grant access to it, but unregulated access can lead to misuse. Data access service provider Satori enables organizations to accelerate their data use by simplifying and automating access policies while helping to ensure compliance with data security and privacy requirements. In addition to being a member of the MongoDB for Startups program, Satori has just added support for MongoDB workloads, so organizations running MongoDB can now take advantage of Satori's secure data access service.

Balancing act

Despite the immense volume of sensitive personal, financial, and health-related data within most organizations, managing access to that data is often a manual process handled by a small team juggling competing priorities. Satori chief scientist Ben Herzberg says managing data access this way is slowing down innovation. "The majority of organizations are still managing access to data in a manual way," Herzberg says. "Everyone is feeling the bottleneck. The data analyst who wants to do their job in a meaningful way just wants to understand what data sets they can use and get access to them fast." Getting access to data can be an uphill battle, however. "Sometimes you have to go through three or four different teams to get access to data," Herzberg says. "It can take a week or two." Meanwhile, the data engineers who are primarily responsible for managing access to data are getting pulled away from their core responsibilities. "This places the company in an uncomfortable position of having time-intensive processes implemented by teams who would prefer to be working on other tasks," Herzberg says. 
Simple, fast, secure

As a data access service, Satori streamlines access to data, accelerates time-to-value, improves engineering productivity, and reduces complexity and operational risk, all while protecting sensitive data and maintaining compliance with relevant data privacy regulations. The first job in protecting sensitive data is identifying it, but according to Satori's research, few companies have a system in place that continuously monitors for and discovers sensitive data. Organizations that do monitor sensitive data typically do so only quarterly or annually. Herzberg says Satori continuously discovers sensitive data as it's being accessed. "As one of our customers said: I want to remain continuously compliant. I want to know where my sensitive data is at all times. We do that," Herzberg says. Data users can request access over Slack, through the Satori data portal, or through other integrations, and get immediate access to data without any engineering effort, changes to infrastructure, schemas, or tables, or creation of objects on the database. "When a lot of people want access to data, you need a simple, fast, and secure way to do it without exposing yourself to risk," Herzberg says. Instead of taking days or weeks to process data access requests, with Satori, it takes just minutes.

Build the next big thing with MongoDB

Satori chose MongoDB early on because of the inherent flexibility of the document data model. "We chose MongoDB to move quickly and without limitations," Satori software engineering manager Oleg Toubenshlak says. "We didn't know what type of data we would be storing or how we might want to extend objects, so we chose MongoDB because of the flexibility of the data model." "MongoDB is a core component of our infrastructure, where we keep customer configurations," Toubenshlak says. "We started with MongoDB deployed on-prem and moved to MongoDB Atlas." 
Toubenshlak cites continuous backups, easy deployment, and scalability as additional Atlas capabilities he finds valuable. "MongoDB allows us to move fast with development so we can focus on other areas. It's very simple in terms of security and network access. In terms of clients, MongoDB Atlas helps us provide extended capabilities to map our Java objects to BSON. It's very compatible and does this very quickly. Once we moved to Atlas, all our problems were solved," he says. Toubenshlak also appreciates the help he received as a member of the startup program. "We had startup credits, and we used professional services to make sure everything was configured properly," he says. "Satori is a small cluster for MongoDB, but I'm very surprised at the time investment we've received." The company is also excited about adding MongoDB Atlas to its list of supported platforms. "Adding MongoDB support is very exciting for us," Herzberg says. "We're already working with some design partners in different industries and helping them with their deployment. It's a meaningful step for us in NoSQL databases. We're seeing a lot of traction with existing customers that want to expand their MongoDB deployments and with new customers." If you're running MongoDB and are interested in simplifying data access, visit Satori to set up a demo or test drive. Are you part of a startup and interested in joining the MongoDB for Startups program? Apply now.
Built With MongoDB: Vanta Automates Security and Compliance for Fast-Growing Businesses
Built With MongoDB: Alloy Transforms Ecommerce With No-Code Integrations
Gregg Mojica and Sara Du knew there was a need for simpler integrations with ecommerce platforms because they had experienced the problem themselves. After becoming friends through the open source community, they started a Shopify store as a side project and became intrigued by the multitude of apps available in the Shopify ecosystem — a large selection of integrations for things like ERP, email and social media marketing, ads, marketing analytics, and more. Mojica and Du also found that stitching together these disparate tools was overly complex and that the tools were not geared toward ecommerce. Their company, Alloy Automation, is a no-code integration solution that integrates with and automates ecommerce services, such as CRM, logistics, subscriptions, and databases. For example, Alloy can automate SMS messages to go out when fulfillment milestones are reached. It can automatically start a workflow when an event occurs in an online store or in another app, create logic to define whether a follow-up action will be taken, and use conditions like order tags or customer location to set up automated actions that pull and push data from connected apps. If an order's status is updated to paid and the total value of the order is greater than $100, for example, Alloy can automatically send a text message with a discount for an additional purchase. Alloy is part of the MongoDB for Startups program, and this article looks at how Alloy uses MongoDB and benefits from the partnership to overcome startup challenges.

Jobs to be done

Mojica, co-founder and CTO of Alloy, sympathized with merchants that were trying to connect multistage workflows using the limited tools available. "A lot of merchants have relatively complex flows," he said. "They're cycling through abandoned carts, checking if certain line items are present, and setting up very aggressive rules that historically you would have to program yourself. 
But a lot of merchants don't have the operating budget to hire expensive engineers to set up these rules." Mojica applied the knowledge he had gained as an engineer in financial services to address the integration problems he and Du were experiencing as online merchants. Although Alloy was initially focused on solving general ecommerce problems, Mojica says he realized that the tools he was building could apply to more than just ecommerce. "Not only are we solving problems for merchants but also for software and SaaS companies," Mojica said. "Now anybody can build relatively complex automations without having engineering expertise. Alloy can templatize those things and offer them as recipes on our platform. We offer a business-facing product called Alloy Embedded that allows anyone to effortlessly connect to our integrations by implementing our SDK. Businesses can get started very easily with just a few lines of code."

Early stages

Alloy is a Y Combinator company — part of the cohort that was scheduled to demo their products in March 2020, the very moment the world “locked down” because of the COVID-19 pandemic. It still raised $5 million in a seed round, followed by $20 million in Series A funding in February 2022. In that time, the company has expanded its platform to include more than 220 integrations, including MongoDB. Alloy is a member of the MongoDB for Startups program, which provides Atlas credits among other benefits for young companies, and it uses MongoDB Atlas as its underlying database. Mojica cites several reasons for the close partnership between the two companies. "Atlas was the database we chose from the beginning. I personally have used MongoDB before, so I have a certain comfort level, and I was the first person who wrote code at Alloy," Mojica said. "But another big reason why I wanted to use MongoDB is the freeform nature of much of the data that we ingest. 
We connect over 220 integrations, each one has its own schema, and it's typically in JSON. So having a less structured way to store that information, compared with something highly delineated like SQL, has been very valuable to us."

Growing pains

Mojica and Du are acutely aware of the challenges startups face, especially managing technical resources. "We like the fact that MongoDB has really good support, built-in monitoring, and backups," Mojica said. "These things allow you to get going quickly. There's a lot of pressure, especially in the very beginning, to get into Y Combinator. You've got to build the product, get customers, and start your fundraise. That's a lot to do in three months. What you don't want to worry about is all the DevOps stuff." As startups begin to scale, they often become subject to compliance requirements that present new technology hurdles. Alloy went through the compliance process seamlessly thanks to the security capabilities and certifications behind MongoDB Atlas. "We're servicing larger clients and seeing different use cases," Mojica said. "The compliance process involved questions about where we're storing data and whether we're in different regions. Once your company is big enough, it's a major concern. Just having SOC 2 certification and making sure we're following all the various data privacy rules is really important. We're effectively an intermediary for customer data, so compliance is really important, like when we are deleting data for GDPR requests. MongoDB Atlas helps us with that. It's SOC 2 certified, and we can deploy in any region on any of the major cloud providers. For us, that meant setting up a network peering connection from Atlas to our AWS VPC. So, from a security and compliance perspective, we know that's all taken care of."

Making the MongoDB connection

"We added a MongoDB connector to our platform because we were hearing interest from our user base," Mojica explained. 
“If you want to integrate with a series of different tools and you're also sending data to MongoDB Atlas, instead of having to build those integrations every time, Alloy already has the infrastructure. You can just connect your system, stream the data, and we handle all the architecture. Something that would normally take weeks or months now takes only a few hours. That's the power of the no-code platform." The Alloy–MongoDB integration includes bidirectional sync. "Your connection with MongoDB Atlas can go both ways," Mojica added. "You can pull data and you can push data. You can run scheduled workflows once an hour or once a day, make a query, get some data from MongoDB, check if a record was added, and then send the data to another platform or destination. The bidirectional sync is really important, because integration really is the ability to get data, but also to push data."

Support for startups

As a member of the MongoDB for Startups program, Alloy enjoys access to a wide range of resources, including free credits for our best-in-class developer data platform, MongoDB Atlas, and personalized technical advice, among other perks. Alloy leveraged the program from an early stage, as Mojica explained: "The credits were very helpful in the beginning, especially when you're in Y Combinator and don't have a lot of money in the bank. We recently started getting in touch for support. In fact, just knowing that we have support is very valuable." To learn more about Alloy, check out runalloy.com. Are you part of a startup and interested in joining the MongoDB for Startups program? Apply now.
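The scheduled pull-check-push workflow Mojica describes, combined with the earlier paid-and-over-$100 rule, can be sketched in a few lines. This is an illustrative Python sketch only: plain dictionaries stand in for a live MongoDB collection, and the field names and watermark approach are assumptions for the example, not Alloy's actual implementation.

```python
from datetime import datetime, timezone

# Stand-in for a MongoDB orders collection; field names are invented.
orders = [
    {"_id": 1, "status": "paid", "total": 150.0,
     "updated_at": datetime(2022, 7, 1, 12, 0, tzinfo=timezone.utc)},
    {"_id": 2, "status": "pending", "total": 80.0,
     "updated_at": datetime(2022, 7, 1, 12, 5, tzinfo=timezone.utc)},
    {"_id": 3, "status": "paid", "total": 60.0,
     "updated_at": datetime(2022, 7, 1, 12, 10, tzinfo=timezone.utc)},
]

def sync_once(docs, watermark):
    """One scheduled run: pull documents changed since the last run,
    apply the rule, and return (actions to push, new watermark)."""
    changed = [d for d in docs if d["updated_at"] > watermark]
    actions = [
        {"order_id": d["_id"], "action": "send_discount_sms"}
        for d in changed
        if d["status"] == "paid" and d["total"] > 100  # the $100 rule
    ]
    # Remember how far we've synced so the next run only sees new changes.
    new_watermark = max((d["updated_at"] for d in changed), default=watermark)
    return actions, new_watermark

actions, wm = sync_once(orders, datetime(2022, 7, 1, tzinfo=timezone.utc))
print(actions)  # only order 1 is both paid and over $100
```

The watermark is what makes the hourly or daily schedule cheap: each run queries only documents updated since the previous run, and the returned actions are what would be pushed to the destination platform.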