What's New in Atlas Charts: Schedule Dashboard Reports to Share Data with Your Team
April 13, 2023
Today, we’re introducing an exciting feature addition for teams using Atlas Charts.
Charts project owners can now schedule dashboard reports to be sent via email to keep team members informed about key data. This feature has been heavily requested by some of our largest customers: there are many cases where a dashboard is valuable to your team, but you don't necessarily want to require anyone to do extra work to access and view the data. Enter scheduled dashboard reports in Atlas Charts!
In any dashboard that your team relies on for regular data review, simply schedule a dashboard report. The new Schedule button can be found at the top right of the dashboard screen:
Once you’ve chosen a dashboard from which to create a report, you will see a variety of options letting you customize the content and frequency of your report before you schedule. A report requires basic fields like a name or subject line, recipient list, and optionally, a message for the body of the email.
In addition to a link to the dashboard in Charts, you can choose whether to attach an image or PDF for quick reference in the message itself. Finally, you can set a schedule of daily, weekly, monthly, or quarterly delivery. You can also simply send a single email if you have a one-time need to share a report.
And once you’ve set everything up, your email will be sent on your defined schedule.
To help you manage reports as your usage grows, we've also created a Reports page where you can manage all reports in your project. Note that if you're on the free tier, you can try one scheduled report. If you're on an M2 cluster or higher, you can create up to 100 reports per project.
To learn more, please check out our documentation. We’re always listening to feature requests that will enhance using Charts across teams, so if you have any requests or feedback, please share them with us here.
Log in to Atlas Charts today to schedule your first report! If you’re new to Atlas Charts, get started today by logging into or signing up for MongoDB Atlas.
How slice Enables Credit Approval in Less Than a Minute for Millions of Indians
Building a fast, simple, and flexible way to make payments and offer credit to 12 million registered users

slice, which has emerged as a leading innovator in India's fintech ecosystem, calls itself a zillennial product built by zillennials ("zillennial" describes the demographic born at the end of the millennial generation and the beginning of Generation Z). The company believes that personalization, combined with an extreme focus on superior customer service, is the key to building long-lasting relationships with young people. It's not surprising, then, that the company's singular focus is on providing the best consumer payments experience in the world, according to Upendra Kumar Singh, Head of Engineering, slice. Speaking at the December 2022 MongoDB Day in Bangalore, Upendra described how slice is leading disruption and innovation in the fintech ecosystem and setting global benchmarks.

The company has more than 12 million registered users on its platform. The slice app provides a fast, simple, and flexible way for users to make payments and access credit. For the vast majority of Indians, getting credit is typically challenging due to stringent regulations and a lack of credit data. Through the use of modern underwriting systems, slice is helping broaden access to credit in India. The company is among the top prepaid card providers in the South Asian market.

The challenge
Providing a seamless credit experience for customers by transforming the 'Know Your Customer' (KYC) process

slice found itself compared to other fintech apps, which were typically ready to use immediately after downloading; they didn't require users to fill out forms or wait for approvals. Since slice offers a credit product, determining eligibility requires matching several variables to ascertain the creditworthiness of the user.
The team would run a reverse look-up process, matching the profile of the user against a network of other users to estimate the likelihood of default. This process was manual and quite time-consuming, which meant that users had to wait 24 to 48 hours for the credit underwriting process to complete. This was inconvenient, especially in scenarios such as flash sales or medical emergencies, where users need credit immediately. The other problem was dynamic spikes in traffic, which could bring down the entire infrastructure and, therefore, the whole service.

"The team needed the infrastructure to scale seamlessly through a dynamic scaling capability. We also needed the ability to preschedule clusters for scaling when high volumes were anticipated, such as in the event of marketing campaigns or monthly billing cycles," said Upendra Kumar Singh, Head of Engineering, slice.

The team was also keen to free up time spent managing databases so it could focus on building features that add value for customers.

The solution
Real-time computation of more than 100 variables to determine creditworthiness in minimal time

From early on, MongoDB was one of the core databases the company used, chosen for its flexible schema and speed of development. At first, however, the development team self-managed the database. As the scale grew, they found that dynamic spikes would often threaten to bring down the application and cause an outage. The team realized the need for dynamic scaling capabilities and moved to MongoDB Atlas.

slice uses MongoDB Atlas for a number of use cases. One example highlighted in slice's MongoDB Day presentation was a real-time feature store. The slice team took on the challenge of making the onboarding process for credit users smoother and quicker. Code-named 'Project Makhan' (makhan means butter in Hindi), the objective was to reduce processing time to less than a minute.
To achieve this, the team used Change Streams in MongoDB Atlas to evaluate user information in real time. "As and when the user filled out the application with details such as name and gender, we used the real-time feature store with MongoDB and ML models to compute 100+ direct and indirect variables in real time to determine if the user was eligible for credit," said Upendra Kumar Singh, Head of Engineering, slice.

When users start filling out details in the mobile app, the information is stored in Atlas and computation of derived variables begins. Once all the variables are computed, an AWS Step Functions workflow is triggered. In turn, this triggers the credit underwriting service, which feeds all the computed variables into the ML model. The model determines the user's score between 0 and 100, and a rules engine looks for red flags. The red flags are analyzed using MongoDB to determine the likelihood of the user defaulting. This was once a manual process, but with MongoDB Atlas, the resolution of red flags happens automatically in real time. The decision is then communicated to the user.

Currently, MongoDB supports 15+ clusters with 17.5 TB of data, used by more than 12 million registered users. slice carries out about 15,000 MongoDB input/output operations per second (IOPS) during peak hours. With auto-scaling, the team can add or remove nodes on demand using Atlas cluster APIs. Given that these capabilities are often used for critical workloads in emergency situations, uptime was important.

The result
Unprecedented processing speed and 99.995% uptime for more than 12 million registered users

"The solution enables credit underwriting decisions to be taken in under 30 seconds by calculating more than 100 user variables in real time. This solution triggered exponential growth for slice, as no other player in the industry was providing credit in 30 seconds.
With MongoDB's resilient distributed architecture, slice can achieve a 99.995% uptime SLA. There have been no outages except for those due to manual errors," said Upendra Kumar Singh, Head of Engineering, slice.

MongoDB Atlas provides the core capabilities slice requires to dynamically scale its business, including the flexibility of the document model, Change Streams, always-on security, continuous backup, easy migrations, real-time analytics, and native tooling.
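As a rough illustration of the change-stream-driven feature pipeline described above, the sketch below uses PyMongo to watch an applications collection and upsert computed features as users fill in details. The database, collection, and field names, and the compute_derived_variables helper, are all hypothetical stand-ins, not slice's actual schema or models.

```python
def compute_derived_variables(application: dict) -> dict:
    """Placeholder for the real ML feature computation; counting the
    filled-in fields stands in for the 100+ real variables."""
    return {"app_id": application["_id"], "num_fields": len(application)}


def run_feature_pipeline(uri: str) -> None:
    """Watch the (hypothetical) applications collection via a change
    stream and upsert computed features into a feature-store collection.
    Requires a live replica set or Atlas cluster, so the third-party
    driver import is kept local to this function."""
    from pymongo import MongoClient

    db = MongoClient(uri)["onboarding"]
    # React only to new or updated application documents.
    pipeline = [{"$match": {"operationType": {"$in": ["insert", "update"]}}}]
    with db["applications"].watch(pipeline, full_document="updateLookup") as stream:
        for change in stream:
            features = compute_derived_variables(change["fullDocument"])
            db["feature_store"].update_one(
                {"app_id": features["app_id"]},
                {"$set": features},
                upsert=True,
            )
```

In a setup like this, the downstream trigger (the AWS Step Functions workflow in slice's case) would fire once the feature-store document is complete; that orchestration step is omitted here.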
How Edenlab Built a High-Load, Low-Code FHIR Server to Deliver Healthcare for 40 Million Plus Patients
The Kodjin FHIR server has speed and scale in its DNA. Edenlab, the Ukrainian company behind Kodjin, built its original FHIR solution to digitize and serve the entire Ukrainian national health system. The learnings and technologies from that project informed the development of the Kodjin FHIR server.

"At Edenlab, we have always been driven by our passion for building solutions that excel in speed and scale. With Kodjin, we have embraced a modern tech stack to deliver unparalleled performance that can handle the demands of large-scale healthcare systems, providing efficient data management and seamless interoperability."
Eugene Yesakov, Solution Architect and author of Kodjin

Built for speed and scale

While most healthcare projects involve handling large volumes of data, including patient records, medical images, and sensor data, the Kodjin FHIR server is based on a system developed to handle tens of millions of patient records and thousands of requests per second, ensuring timely access and efficient decision-making for a population of over 40 million people. All of this information had to be processed and exchanged in real time or near real time, without delays or bottlenecks.

This article explores some of the architectural decisions the Edenlab team made when building Kodjin, specifically the role MongoDB played in enhancing performance and ensuring scalability. We will examine the benefits of leveraging MongoDB's scalability, flexibility, and robust querying capabilities, as well as its ability to handle the increasing velocity and volume of healthcare data without compromising performance.

About the Kodjin FHIR server

Kodjin is an ONC-certified and HIPAA-compliant FHIR server that offers hassle-free healthcare data management. It has been designed to meet the growing demands of healthcare projects, allowing for the efficient handling of increasing data volumes and concurrent requests.
Its architecture, built on a horizontally scalable microservices approach, utilizes technologies such as the Rust programming language, MongoDB, Elasticsearch, Kafka, and Kubernetes. These technologies enable Kodjin to provide users with a low-code approach while harnessing the full potential of the FHIR specification.

A deeper dive into the architecture: the role of MongoDB in Kodjin

When deciding on the technology stack for the Kodjin FHIR server, the Edenlab team knew that a document database would be required to serve as a transactional data store. In an FHIR server, a transactional data store ensures that data operations occur in an atomic and consistent manner, preserving the integrity and reliability of the data. Document databases are well suited for this purpose, as they provide a flexible schema and allow for storing complex data structures such as those found in FHIR data. FHIR resources are represented in a hierarchical structure and can be quite intricate, with nested elements and relationships. Document databases like MongoDB excel at handling such complex, hierarchical data structures, making them an ideal choice for storing FHIR data.

In addition to supporting document storage, the Edenlab team needed the chosen database to provide transactional capabilities for FHIR data operations. FHIR transactions, which encompass a set of related data operations that should either succeed or fail as a whole, are essential for maintaining data consistency and integrity; they can also be used to roll back changes if any part of the transaction fails. MongoDB provides support for multi-document transactions, enabling atomic operations across multiple documents within a single transaction. This aligns well with the transactional requirements of FHIR data and ensures data consistency in Kodjin.
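As a hedged sketch of what a multi-document FHIR write might look like at the driver level (Kodjin itself is written in Rust; this Python/PyMongo version, along with the database, collection, and field names, is purely illustrative), two related resources can be created atomically inside one transaction:

```python
def create_patient_with_observation(client, patient: dict, observation: dict):
    """Insert a Patient and a linked Observation atomically: either both
    writes commit, or neither does. Requires a replica set or Atlas
    cluster, since MongoDB transactions need one. The third-party
    MongoClient is passed in by the caller; names here are hypothetical."""
    db = client["fhir"]
    with client.start_session() as session:
        # start_transaction() commits on normal exit and aborts on error.
        with session.start_transaction():
            result = db["patients"].insert_one(patient, session=session)
            # Link the Observation to the new Patient before inserting it.
            observation["subject"] = {"reference": f"Patient/{result.inserted_id}"}
            db["observations"].insert_one(observation, session=session)
    return result.inserted_id
```

If the Observation insert raises, the transaction aborts and the Patient insert is rolled back as well, which mirrors the all-or-nothing semantics FHIR transaction bundles require.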
Implementation of GridFS as storage for terminologies

A terminology service plays a vital role in FHIR projects, requiring a reliable and efficient storage solution for the terminologies used. Kodjin employs GridFS, a file storage system within MongoDB designed for large files, which makes it ideal for handling terminologies. GridFS offers a convenient way to store and manage terminology files, ensuring easy accessibility and seamless integration within the FHIR ecosystem. By utilizing MongoDB's GridFS, Kodjin ensures efficient storage and retrieval of terminologies, enhancing the overall functionality of the terminology service.

Kodjin FHIR server performance

To evaluate the efficiency and responsiveness of the Kodjin FHIR server in various scenarios, we conducted multiple performance tests using Locust, an open-source load testing tool.

One of the performance metrics measured was the retrieval of resources by their unique IDs using the GET by ID operation. Kodjin with MongoDB achieved 1,721.8 requests per second (RPS) for this operation, indicating that the server can efficiently retrieve specific resources and enable quick access to desired data.

The search operation, which involves querying Elasticsearch to obtain the IDs of the searched resources and then retrieving them from MongoDB, achieved 1,896.4 RPS. This highlights the effectiveness of polyglot persistence in Kodjin, leveraging Elasticsearch for fast, efficient search queries and MongoDB for resource retrieval. The system demonstrated its ability to process search queries and retrieve relevant results promptly.

In terms of resource creation, Kodjin with MongoDB achieved 1,405.6 RPS for POST resource operations, showing that the system can effectively handle numerous resource-creation requests. The efficient processing and insertion of new resources into the MongoDB database ensure seamless data persistence and scalability.
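The GridFS-backed terminology storage described above can be sketched with PyMongo's gridfs module; the bucket name, filenames, and metadata below are illustrative assumptions, not Kodjin's actual layout:

```python
def store_terminology(client, name: str, payload: bytes):
    """Upload a terminology file into a (hypothetical) GridFS bucket
    and return its file id. Requires a running MongoDB, so the
    third-party import is kept local."""
    import gridfs

    fs = gridfs.GridFS(client["terminology"], collection="codesystems")
    return fs.put(payload, filename=name, metadata={"format": "json"})


def load_terminology(client, name: str) -> bytes:
    """Fetch the most recent version of a terminology file by name.
    GridFS keeps prior versions, so updates are simply new uploads."""
    import gridfs

    fs = gridfs.GridFS(client["terminology"], collection="codesystems")
    return fs.get_last_version(filename=name).read()
```

Because GridFS chunks files across documents, this pattern handles terminology files larger than MongoDB's 16 MB document limit without any special casing in the service code.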
Overall, the performance tests confirm that Kodjin with MongoDB delivers efficient, responsive performance across various FHIR operations. The high RPS values demonstrate the system's capability to handle significant workloads and provide timely access to resources through GET by ID, search, and POST operations.

Conclusion

Kodjin leverages a modern tech stack, including Rust, Kafka, and Kubernetes, to deliver the highest levels of performance. At the heart of Kodjin is MongoDB, which serves as a transactional data store. MongoDB capabilities such as multi-document transactions and a flexible schema ensure the integrity and consistency of FHIR data operations, while GridFS provides efficient storage and retrieval of terminologies, optimizing the functionality of the terminology service. To experience the power and potential of the Kodjin FHIR server firsthand, we invite you to contact the Edenlab team for a demo.

For more information on MongoDB's work in healthcare, and to understand why the world's largest healthcare companies trust MongoDB, read our whitepaper on radical interoperability.