MongoDB Blog

Articles, announcements, news, updates and more

Mainframe Data Modernization with MongoDB Powered by Wipro's "ModerniZ" Tool

This post highlights a practical, incremental approach to modernizing mainframe data and workloads into cloud-hosted microservices using MongoDB's modern, general purpose database platform.

Enterprises modernize their mainframes for a number of reasons: to increase agility, lower the risk of aging legacy skills, and reduce total cost of ownership (TCO). But the greatest underlying benefit, and reason for modernization, lies in a company's ability to access and make sense of its own data more quickly. Gleaning valuable business insights from real-time data and AI/ML models is at the heart of today's most successful and innovative companies. Consider the following business processes and their reliance on real-time insights:

- Real-time fraud detection, KYC, and score calculation
- Supporting new API requirements (PSD2, Open Banking, Fintech APIs, etc.)
- Payment pattern analysis
- Moving away from hundreds of canned reports to template-based, self-service configurable reporting
- Real-time management reporting

With the continued emergence of mobile and web apps, enterprises are looking to render content even faster, as well as scale up and down on demand. However, mainframes often serve as the true system of record (SoR) and maintain the golden copy of core data. In a typical enterprise, inquiry transactions on the mainframe contribute over 80% of the overall transaction volume, in some cases up to 95%. The goal for organizations is to increase the throughput of these inquiry transactions with improved response time. However, during real-time transaction processing, middleware must orchestrate multiple services, transform service responses, and aggregate core mainframe and non-core applications. This architecture prevents the legacy services from seamlessly scaling on demand with improved response time.
At the same time, CIOs have significant concerns about the risks of big bang mainframe exit strategies, especially in the financial services, insurance, and retail sectors, where these complex applications serve the core business capabilities. Wipro and MongoDB's joint offering, "ModerniZ," can help alleviate these risks significantly by introducing a practical, incremental approach to mainframe modernization.

An incremental solution: Offloading inquiry services data with ModerniZ

To keep up with changing trends in an agile manner, and to bridge the gap between legacy monoliths and digital systems of engagement, a tactical modernization approach is required. While complete mainframe modernization is a strategic initiative, offloading inquiry services data and making it available off the mainframe is a popular approach adopted by several enterprises. Below are a few business requirements driving mainframe data offload:

- Seamlessly scale volume handling by up to 5X
- Improve response time by up to 2X
- Reduce mainframe TCO by up to 25%
- Direct API enablement for B2B and partners (change the perception of being antiquated)
- Improve time to market on new enhancements by up to 3X
- Provision a separate security mechanism for inquiry-only services
- Access a single view data store for intra-day reporting, analytics, and events handling

These business requirements, as well as the challenges in current mainframe environments, warrant an offloaded data environment with aggregated and pre-enriched data from different sources, in a format that can be directly consumed by channels and front-end systems. This is where our joint offering for mainframe modernization with MongoDB and Wipro's ModerniZ tool comes into play. ModerniZ is Wipro's IP platform specially focused on modernizing the UI, services, and data layer of System Z (mainframe). ModerniZ has multiple in-house tools and utilities to accelerate every phase of the legacy modernization journey. Let's dig deeper into the solution elements.
"CQRS with ModerniZ" for transactional and operational agility

Command Query Responsibility Segregation (CQRS) is an architectural principle that classifies an operation as either a 'command,' which performs an action, or a 'query,' which returns data to the requestor, but never both. CQRS achieves this by separating the read data model from the write data model. Separating these two operations in a business process helps optimize performance, reduce the cost associated with inquiry transactions, and create a new model that can grow vertically and horizontally. MongoDB, with its document model and extensive set of capabilities, is well suited to house this offloaded 'read data model.' MongoDB's JSON/BSON document model helps in pre-enriching the data and storing it in an 'inquiry ready' format, which reduces the overhead for front-end consumers.

Enterprise use cases for CQRS:

- Customer demographic information (e.g., monetary and nonmonetary transaction inquiries in banking and financial services)
- Payer view in healthcare
- Single view across policy administration engines and pricing engines
- Consolidated participant view in benefit administration platforms
- Single view of manufacturing production control systems across plants or countries

The process below indicates the step-by-step approach to enabling CQRS by offloading the data and transactional volumes into a modernized landscape, and continuously syncing the data (based on business criticality) across platforms.

Figure 1. Mainframe data modernization process

The visual below indicates the conceptual target architecture, where the service delivery platform (API/middleware layer) identifies and routes each transaction to the respective system. Any update that happens in the legacy system is cascaded to the target MongoDB deployment based on the business criticality of the fields.

Figure 2. Post data modernization process view

Mainframe services will continue to be exposed as domain APIs via zCEE for the command services (update transactions), and the newly built microservices will serve the inquiry transactions by fetching data from MongoDB. Any data updates in the mainframe will be pushed to MongoDB. Java/Spark programs consume the JSON documents from the Kafka cluster, merge them into MongoDB, and create new documents. The table below indicates how different fields can be synced between the mainframe and MongoDB, along with their corresponding sync intervals.

Sync Type | Field Classification | Sync Strategy
Type-1 | Near real-time sync for critical fields | Kafka queues / CICS event triggers / DB2 / third-party CDC replicators / queue replicators
Type-2 | Scheduled batch polling sync for less critical fields | Mini intra-day batch / replicators / ELT
Type-3 | EoD batch sync for non-critical fields | Batch CDC sync / update / ELT

How MongoDB and Wipro's ModerniZ help

Wipro's ModerniZ platform provides multiple tools to accelerate the modernization journey across phases, from impact analysis to design to build to deployment. For data modernization, ModerniZ has five tool sets, such as PAN (Portfolio Analyzer), SQL Converter, and automated data migration, all of which can be leveraged to yield a minimum committed productivity gain of 20%.

Figure 3. Mainframe to cloud transformation using ModerniZ

Why MongoDB for mainframe modernization?

MongoDB is built for modern application developers and for the cloud era. As a general purpose, document-based, distributed database, it facilitates high productivity and can handle huge volumes of data. The document database stores data in JSON-like documents and is built on a scale-out architecture that is optimal for developers who build scalable applications through agile methodologies.
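The post describes Java/Spark consumers merging Kafka change events into the MongoDB read model. As an illustrative sketch of that merge logic (written in Python for brevity, with an in-memory dict standing in for the MongoDB collection, and all field names invented for illustration), a consumer might apply Type-1 critical fields immediately and leave less critical fields for a scheduled batch pass:

```python
# Illustrative sketch: merge a mainframe change event into an offloaded
# read model. A plain dict stands in for the MongoDB collection; a real
# deployment would perform an upsert via a MongoDB driver instead.

# Hypothetical classification of fields by sync type.
CRITICAL_FIELDS = {"balance", "status"}      # Type-1: near-real-time sync
LESS_CRITICAL_FIELDS = {"email", "phone"}    # Type-2: intra-day batch sync

def apply_event(read_model, event):
    """Apply one change event, honoring field-level sync criticality.

    Critical fields are merged immediately; everything else is returned
    so a scheduled batch job can sync it later.
    """
    doc = read_model.setdefault(event["customer_id"], {"_id": event["customer_id"]})
    deferred = {}
    for field, value in event["changes"].items():
        if field in CRITICAL_FIELDS:
            doc[field] = value          # near-real-time merge (Type-1)
        else:
            deferred[field] = value     # left for the batch sync (Type-2/3)
    return deferred

read_model = {}
leftover = apply_event(
    read_model,
    {"customer_id": "C001", "changes": {"balance": 250.0, "email": "a@b.com"}},
)
```

The same split-by-criticality idea generalizes to the three sync types in the table above; only the transport (Kafka, intra-day batch, EoD batch) changes.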
Ultimately, MongoDB fosters business agility, scalability, and innovation. Some key benefits include:

- Deploys across clouds in nearly all regions
- Provides a flexible document model that maps to how developers think and code
- Costs a fraction of the price of other offloading solutions built on relational or other NoSQL databases; even a larger, sharded MongoDB environment brings order-of-magnitude savings compared to the traditional mainframe MIPS-based licensing model
- Allows complex queries to be run against data via an extremely powerful aggregation framework; on top of that, MongoDB provides a BI Connector for dedicated reporting/business intelligence tools, as well as specific connectors for Hadoop and Spark to run sophisticated analytical workloads
- Offers enterprise-grade management through its Enterprise Advanced product and includes security features covering all areas of security (authentication, authorization, encryption, and auditing); competitors often offer only a subset of those capabilities
- Provides a unified interface to work with any data generated by modern applications
- Includes in-place, real-time analytics with workload isolation and native data visualization
- Supports distributed multi-document transactions that are fully ACID compliant

MongoDB has successfully implemented mainframe offloading solutions before, and customers have publicly spoken about their success (e.g., a top financial services enterprise in the EU).

Roadmap and contact details

Work is in progress on offloading mainframe read workloads to GCP native services and MongoDB. Contact us for more information.
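To make the aggregation framework mention concrete: analytics are expressed as a pipeline of stages passed to a driver's aggregate() call. A hypothetical pipeline (shown here as the plain Python list you would hand to PyMongo; the collection and field names are invented for illustration) that totals one day's payments per account might look like:

```python
# Hypothetical aggregation pipeline: total payment amounts per account for a
# single business day, sorted largest-first. This list is what you would pass
# to collection.aggregate(...); no server is needed just to construct it.
pipeline = [
    {"$match": {"type": "payment", "businessDate": "2021-09-16"}},
    {"$group": {"_id": "$accountId", "total": {"$sum": "$amount"}}},
    {"$sort": {"total": -1}},
]
```

Because pipelines are plain data, they are easy to build, inspect, and unit test before running them against a live deployment.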

September 16, 2021

Serverless Instances Now Offer Extended Regional and Cloud Provider Support

Today's applications are expected to just work, regardless of the time of day, user traffic, or where in the world they are accessed from. But to achieve this level of performance and scale, developers have to meticulously plan their infrastructure needs, sometimes before they even know how successful their application may be. In many cases this is not feasible and can lead to over-provisioning and overpaying. But what if you could forgo all of this planning and the database would seamlessly scale for you? Well, now you can, with serverless instances on MongoDB Atlas.

Since we announced serverless instances in preview, we have been actively working toward implementing new functionality to make them more robust and widely available. With our most recent release, serverless instances now offer expanded cloud provider and region support, as well as support for the MongoDB tools.

Deploy a serverless instance on the cloud provider of your choice

With our dedicated clusters on MongoDB Atlas, you have the flexibility to run anywhere, with global reach, on the cloud provider of your choice, so you can deliver responsive and reliable applications wherever your users are located. Our goal is to provide this same flexibility for serverless instances. We're happy to announce that you can now deploy a serverless instance in ten regions across AWS, Google Cloud, and Azure. When deploying a serverless instance, you'll see that more regions are now supported on AWS, as well as two available regions on both Google Cloud and Azure, so you can get started with the cloud provider that best suits your needs or the region that's closest to you. We will continue to add new regions over time to ensure coverage where you need it most.

Easily import your data with MongoDB tools

With this release, we have also made it easier to work with your data.
You can now easily import data from an existing MongoDB deployment using the MongoDB tools, including mongodump, mongorestore, mongoexport, and mongoimport. To use the MongoDB tools with serverless instances, you will need to be on the latest version. If you have additional feature requests that would make your developer experience better, share them with us in our feedback forums.

Database deployment made simple

With serverless instances, you can get started with almost no configuration needed. MongoDB Atlas will automatically scale to meet your workload needs, whether you have variable traffic patterns or you're looking for a sandbox database for your weekend hobby project. If you haven't yet given serverless instances a try, now is a great time to see what they can offer. If you have feedback or questions, we'd love to hear them! Join our community forums to meet other MongoDB developers and see what they're building with serverless instances. Create your own serverless instance on MongoDB Atlas. Try the Preview.
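As a sketch, migrating data from an existing deployment into a serverless instance with these tools might look like the following. The connection strings, database name, and dump directory are placeholders; substitute your own source deployment and the SRV string shown for your serverless instance in the Atlas UI:

```shell
# Dump a database from an existing deployment
# (hypothetical host and database name).
mongodump --uri="mongodb://source-host:27017" --db=inventory --out=./dump

# Restore it into your Atlas serverless instance
# (replace the placeholder SRV connection string with your own).
mongorestore --uri="mongodb+srv://user:password@myinstance.example.mongodb.net" ./dump
```

mongoexport and mongoimport work the same way for JSON/CSV data rather than BSON dumps.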

September 16, 2021

How to Prepare for Your Enterprise Account Executive Interview at MongoDB

At MongoDB, our Enterprise Sales team is growing rapidly as we strive to build a salesforce with a legendary reputation for excellence and integrity. Although we are eager to add new reps to our team, we are focused on ensuring we hire the right people for the job, and that we're the right company for you, too! Because of this, our interview process may not be as quick as, or look the same as, other companies'. We feel confident that we've designed our interviews to uncover a mutually beneficial opportunity that will allow anyone who joins the team to look back on their time at MongoDB as a career-defining point in their lives.

Our typical interview process includes three interviews, a sales profile assessment, and a final interview that we call "The Challenge." Throughout these interviews we want you to meet as many people on our team as possible. Your interview panelists may include the Regional Director (RD) you'd directly report to, along with RDs from other regions. You'll also typically meet with a Regional Vice President or SVP, depending on your location. We ultimately want you to be introduced and exposed to teams across the company so that you receive insight into our broader culture and can decide if MongoDB is the right fit for you. We recommend treating the recruitment process like a sales cycle, including preparation, qualification, and closing.

No matter the interview, you should be aware that all MongoDB Enterprise Account Executive interviews are built around the three Whys: Why MongoDB? Why you? Why now?

Why MongoDB

We want to ensure that we can support your career growth at MongoDB. At each stage of the interview process, leaders will want to dig in on the three P's:

1. People

Our executive leadership team is made up of some of the best in the industry. To understand who is behind the success of the company and how they got here, we recommend looking into some notable MongoDB figures, such as our Executive team and Board Members.
Prior to each interview, you will receive a guide with the names of the managers you'll be interviewing with. We recommend doing some research on these individuals and their team members, along with other Enterprise Account Executives at MongoDB. It's likely that you'll be asked about this research during your interview, so be prepared to discuss what you found.

2. Product

The MongoDB data platform is complex, which can make our sales process rather technical in certain use cases. While we don't expect you to come with database expertise, we do want to know why you have an interest in MongoDB and see the value in selling it! We recommend taking a look at our customer testimonials online to learn how MongoDB technology is applied. We also recommend researching our differentiators, which should help you understand why a C-suite executive should buy MongoDB. Below are some resources to help you get started:

- MongoDB Technology Overview
- Why MongoDB Atlas

3. Process

Come prepared to talk about your week, where you spend your time, and how you plan and prioritize your accounts. While our EAEs do handle some existing business, the main focus is on new pipeline generation as we continue to disrupt a huge market. Our Sales team follows the MEDDIC sales qualification methodology, as well as our own internal sales process. This provides the team with a proven roadmap based on how the most successful sellers have closed deals, and promotes a common language within our teams across the globe. We recommend you speak to the sales and qualification process you currently follow and understand how they compare.

Why you

We've spent a lot of time defining our sales process and how our Enterprise Account Executives can be successful. Because of this, we've been able to determine what top-performing reps at MongoDB have done differently and what characteristics help them quickly develop and achieve great records of closed deals.
- Coachability: There's a ton of enablement at MongoDB, and we want you to make use of it! If you enjoy coaching and development, this is a good environment for you.
- Drive: The database market is massive, and MongoDB owns less than 1% of it. To be successful, you'll need grit, a competitive nature, and a drive to disrupt one of the largest addressable markets in the software industry.
- Street smarts: Although the MongoDB product is technical, there is still a very human element to the sales process. We look for people who have emotional intelligence, the ability to "read the room," and empathy.
- Ability to build pipeline: It may seem obvious, but our top performers are great at generating business meetings that impact their number of closed deals. You'll need to excel at, and enjoy, hunting new business!
- Champion building: We strongly believe in making long-lasting connections and look for individuals who can identify and build a MongoDB Champion within their customers.

Why now

We believe that timing is important and want you to feel confident in your decision to join MongoDB. We encourage you to think about the following:

- Do you feel ready to leave your current role? If so, why do you believe now is the right time to do so?
- What are you not receiving in your current role that you're looking for in a new role?
- Do you feel confident in your decision to interview with MongoDB at this time?

These are things that will be discussed during your interview process, and we hope that you can happily articulate why you believe MongoDB is the next step for your development and career. Interested in pursuing a career at MongoDB? We have several open roles on our teams across the globe and would love for you to transform your career with us!

September 10, 2021

Preparing for Your Customer Success Interview at MongoDB

We're thrilled that you're interested in interviewing for a Customer Success role at MongoDB! Preparing for an interview can often feel overwhelming, but there are several steps you can take to set yourself up for a successful interview with our Customer Success team.

As with any job interview, you should take time to consider your goals and qualifications relative to the Customer Success role you are interviewing for and to MongoDB. It is good practice to review the job description and research our company to get an understanding of our products, services, mission, history, and overall culture, to help you decide if MongoDB seems like the right fit for you, your goals, and your interests. At each interview step, you'll have time to ask your interviewer(s) questions, so come prepared with anything and everything that you are interested in knowing more about! This is an opportunity for you to interview us too, and your questions will help us learn more about you and what is important to you. Take a look at the Customer Success interview steps below to learn more about how you can best prepare yourself for success.

Recruiter interview

The first step in our process is an interview with one of our Recruiters. Going into this conversation, be prepared to discuss your experience, qualifications, and interest in the opportunity. I recommend reviewing the job description and aligning your experience with the qualifications of the role. You should also be prepared to answer questions about why you are considering new job opportunities and why you are interested in MongoDB. Lastly, think through common interview questions and be ready to describe the day-to-day responsibilities you held in previous roles, along with your goals for the future. Some common questions you might hear are "What are you looking for in your next move, and why?" and "What comes after Customer Success?" At MongoDB, we invest in our team members and strive to support your passions and interests.
Knowing what your goals for the future are will help us better support your career progression!

Prior to each interview, I recommend doing some more research. This research will only help as you progress through the process. Here are some resources to get you started:

- Familiarize yourself with NoSQL
- Read about some of our customer use cases to get a feel for how MongoDB is being used in the field
- Familiarize yourself with our current product offerings
- These white papers (especially the ones under Business Strategy and Architecture) will help take your understanding much further
- Experience our managed service offering for yourself: spin up an Atlas cluster and read the Atlas FAQ
- Listen to Sahir Azam talking about Atlas and our ability to support customers' multi-cloud strategies

Customer Success Specialist

Hiring manager interview

Before your Hiring Manager interview, think through your experiences in the following areas to prepare yourself for interview questions relating to:

- Stakeholder relationships
- Interest in technology / technical situations you've encountered
- Interest in MongoDB's products
- Ability to prioritize
- Adaptability

I recommend preparing specific examples that you can share with your interviewer.

Peer role play interview

The goal of the peer interview is to assess your technical aptitude, your ability to understand MongoDB technology, and your teamwork, collaboration, and communication skills, as well as what you're hoping to contribute to the team. Your recruiter will schedule a prep call prior to this interview to give you some time to plan for success. You should dig into the materials provided, have a plan for how you are going to approach the role play, and use the prep call to ask clarifying questions. No question is off limits, so use the time to gain as much value as you can! We want to ensure you feel confident about this interview step.

Data assignment challenge

In your final interview, you'll be given a list of mock accounts with mock data.
The team will be assessing your ability to prioritize these accounts based on the account data provided. To prepare for this, I recommend determining your approach and being able to clearly explain the logic behind your thought process and how you would put it into action.

Customer Success Manager

Hiring manager interview

Before your Hiring Manager interview, think through your experiences in the following areas to prepare yourself for interview questions relating to:

- Customer-facing experience
- Interest in technology / technical situations you've encountered
- Day-to-day responsibilities
- Enterprise software experience
- Business/sales experience

I recommend preparing specific examples that you can share with your interviewer.

Peer and proof point interview

This is an opportunity for you to meet with a peer on the team to learn about the team culture and what a day in the life looks like for someone in the role. The peer will also ask questions to learn more about you and your experiences. The goal of the peer interview is to assess your teamwork, collaboration, and communication skills, as well as what you're hoping to contribute to the team.

The proof point is a use case we provide to you prior to the interview. It will be discussed for about 20 minutes, with the goal of assessing your technical aptitude and your ability to understand and articulate the value of MongoDB technology. While it is important to have an understanding of the technology, we don't expect anyone to be a MongoDB expert. You should think through how you would handle this account and the areas you would focus on with the customer if you had just inherited the account. Why did they choose MongoDB, and what else can you learn from the customer?

Mock onboarding challenge

Your recruiter will schedule a prep call prior to this interview to provide you with some time to plan for success.
You should dig into the materials provided, have a plan for how you are going to structure the meeting, and use the prep call to ask clarifying questions about the product or for guidance on overall meeting management.

Interested in pursuing a career at MongoDB? We have several open roles on our teams across the globe and would love for you to transform your career with us!

September 9, 2021

Drowning in Data: Why It's Time to End the Healthcare Data Lake

From digital check-ins to connected devices and telehealth programs, patients expect the benefits of a more digitized healthcare experience. At the same time, they're also demanding a more personalized approach from healthcare providers. This duality, the need to provide an experience that is both more convenient and more tailored to the patient, is fueling a wave of technology modernization efforts and the replacement of monolithic legacy IT systems.

With limited re-use outside of the context they were built for and a reliance on nightly batch processing, legacy IT systems fail to deliver the services healthcare IT teams need or provide the experiences patients demand. Modernization should come with a move to microservices that can be used by multiple applications, agile teams that embrace domain-driven design principles, and event buses like Kafka to deliver real-time data and functionality to users.

While this transformation is occurring, there's an 800-lb gorilla not being widely addressed: analytics. What the healthcare industry doesn't want to talk about is how costly analytics has become; the people, the software, the infrastructure, and particularly how difficult it is to move data in and out of data lakes and warehouses. It's hindering the industry's ability to deliver insights to patients and providers in a timely and efficient manner. And yet, so many organizations are modernizing their analytics data warehouses and data lakes with an approach that simply updates the underlying technology. It's a lift-and-shift effort of tremendous scale and cost, but one that does not address the underlying issues preventing the speedy delivery of meaningful insights.

Drowning in data: A 1980s model in the 2020s

While the business application landscape has changed, healthcare is still clinging to the same 1980s paradigm when it comes to analytics data.
It started by physically moving all the data from transactional systems into a single data warehouse or data lake (or worse, both), so as not to disrupt the performance of business applications by executing analytics queries against the transactional database. Eventually, as data warehouses accumulated enough relational tables and data, queries began to slow down, and even time out before delivering results to end users. This gave rise to data marts, yet another database to copy the warehouse data into, using a star schema model to return query results more efficiently than the relational warehouse.

In the last and current iteration of analytics data platforms, warehouses and data marts were augmented, and in some cases even replaced, by data lakes. Technologies like Hadoop promised a panacea where all sorts of structured and unstructured data could be stored, and where queries against massive datasets could be executed. In reality it turned out to be a costly distraction, and one that did not make an organization's data easier to work with or provide real-time data insights. Hence why it earned the nickname "data jail": it was hard to load data into, and even harder to get data out of.

New technology, same challenges

While Hadoop and other technologies did not last long, they hung around just long enough to negatively alter the trajectory of many analytics shops, which are now investing heavily in migrating away from Hadoop to cloud-based platforms. But are these cloud alternatives solving the challenges of the Hadoop era? Can your organization rapidly experiment, innovate, and serve up data insights from your data lake? Can you go from an idea to delivery in days? Or is it weeks, or even months? Despite the significant amounts of time, money, and people required to load data into these behemoth cloud data stores, they still exhibit the same challenges as their Hadoop-era predecessors.
They are difficult to load and even more difficult to change. They can never realistically offer the real-time, or even near-real-time, processing and response times that patients and providers expect. Worse, they contain so much data that making sense of it is a task often left either to a sophisticated add-on like AWS HealthLake or to specialized data engineering and data science teams. To add to this, cloud-based analytics systems are typically managed by a single team that's responsible for collecting, understanding, and storing data from all of the different domains within an organization.

This is what we like to call a modernized monolith: the pairing of updated technology with a failure to fundamentally address or improve the overall limitations or constraints of a system or process. It's an outdated and inefficient approach that's simply been "lifted and shifted" from one technology to another. Many data lake implementations take a modernized monolithic approach which, like their predecessors, results in a bottleneck and difficulty getting information out once it goes in. In a world where data is at the center of every innovative business, and real-time analytics is top of mind for executives, product owners, and architects alike, most data lakes don't deliver. Transforming your organization into a data-driven enterprise requires a more agile approach to managing and working with ever-growing sums of data.

The rise of the operational data layer: an ODS renaissance

To provide meaningful insights to patients in a timely and efficient manner, two very important things need to happen. Healthcare organizations need to overcome the limitations of legacy systems, and they need to make sense of a lot of very complex data. A lift-and-shift approach that migrates data into a data lake will not solve these problems.
In addition, it's not feasible or advisable to spend tens or even hundreds of millions of dollars replacing legacy systems as a precursor to a digital engagement strategy. The competition will leapfrog you before your efforts are even half complete. So what can be done? Can your organization make better sense of its data and, at the same time, mitigate the issues legacy systems impose? Can this be done without a herculean effort? The answer is yes.

The solution is an operational data layer (ODL), formerly known as the operational data store. It's a method that's been tried and tested by major corporations, and it is the underlying technology that powers many of the apps you interact with on your phone. An ODL lets you build new features without existing system limitations. It lets you summarize, analyze, and respond to data events in real time. It helps you migrate off legacy systems without incurring the cost and complexity of replacing them. It can give your teams the speed and agility that working against a data lake will simply never provide.

Data lakes and warehouses have their place, and the kinds of long-term data insights and data science benefits that can be gleaned from them are significant. The challenge, however, is reacting in real time and serving those insights to patients quickly. An ODL strategy offers the most cost- and time-efficient approach to mitigating legacy system issues, without the pain of replacing legacy systems. Investing in an ODL strategy will both solve your legacy modernization dilemma and help you deliver real-time data and analytics at the speed of an agile software delivery team. MongoDB is an ideal ODL provider. Not only does it offer the underlying flexible, document-based database, but it is also an application data platform, empowering your developers to focus on building features, not managing databases and data.
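As a minimal sketch of the "summarize, analyze, and respond to data events in real time" idea (written in Python, with an in-memory dict standing in for the ODL's document store; the event shapes and field names are invented for illustration), an ODL consumer might fold events from legacy systems into a single, query-ready patient summary as they arrive:

```python
# Illustrative ODL sketch: fold incoming events from legacy systems into a
# query-ready patient summary document. An in-memory dict stands in for the
# ODL's document store; a real ODL would upsert into MongoDB instead.

def ingest(odl, event):
    """Update one patient's summary document from a single event."""
    doc = odl.setdefault(event["patient_id"], {"visits": 0, "last_event": None})
    if event["kind"] == "visit":
        doc["visits"] += 1          # running summary, no batch job needed
    doc["last_event"] = event["kind"]
    return doc

odl = {}
ingest(odl, {"patient_id": "P1", "kind": "visit"})
ingest(odl, {"patient_id": "P1", "kind": "lab_result"})
```

The point of the pattern is that the summary is maintained incrementally at event time, so serving an insight is a single document read rather than an analytics query against a lake.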
If you're interested in learning about how MongoDB has enabled organizations large and small to successfully implement ODL strategies and tackle other burning healthcare issues, click here.

September 8, 2021
Mark Loves Tech

Four Tips for Writing and Applying with Your Engineering Resume

At MongoDB, we're always looking for creative and passionate engineers who want to make an impact on the world. If you're interested in a role on our Engineering team, we encourage you to apply! Before doing so, here are a few things you can do to make your engineering resume stand out at MongoDB.

1. Keep it concise

It can be very tempting to detail every project you've worked on and every course you've taken. However, I suggest highlighting the most important aspects of your background. Keeping your resume succinct demonstrates that you have a good understanding of your key accomplishments and that you can communicate effectively. I would recommend keeping your resume to one page, with the exception of candidates who are 15+ years into their careers; in that case, two pages is appropriate. I would also note that listing numerous programming languages, frameworks, and tools can often be confusing and distracting. Focus on what you have the most experience with. I'd recommend listing core technologies and tools that you have concrete examples of working with, whether from a recent work initiative or a personal project, and using those to provide context for the work you were doing.

2. Keep it clear

Choosing a clear font and format for your resume is very important. Consider using standard fonts like Times New Roman or Arial and keeping the layout clean and easy on the eyes. Formatting your resume in a way that is intuitive is also key. I'd recommend highlighting your most relevant experience closer to the top. For example, if you're a recent college graduate, note your education towards the top of your resume. If you're further into your career, ensure your experience is listed chronologically and that your accomplishments are sorted by relevance to the role you're interested in.
Unless you're interviewing for a more creative job such as UX or Product Design, I'd focus on clarity and a standard layout rather than heavy customization. For a software engineering role, clarity and legibility matter most.

3. Be intentional

Be intentional about the roles you apply to. If a job description calls for a certain skill set that you have, make a point to tailor your resume and highlight that skill set. This intentionality doesn't just apply to how you build your resume. I'd also recommend taking the time to look over all the roles on the career page and applying only to the ones that best suit your background and interests. Being intentional about the roles you apply to is also a great way to demonstrate that you understand your strengths. That being said, we don't expect candidates to have all of the skills listed in our job descriptions. If you are interested in a role and feel that it could be a good fit for your experience, we are happy to look at your resume.

4. Think about what makes you unique

Adding an interests section or a summary to your resume can add some color as to who you are as a person. Frequently, the first few minutes of a MongoDB interview, before diving into coding, will involve some form of the question "Tell me about yourself." Although we are certainly interested in your work experience and accomplishments, we are also interested in what makes you, you! For the summary section, I'd keep it related to your engineering background, the type of role or environment you thrive in, and your interest in MongoDB specifically. For interests, you could mention something exciting in the engineering space you're passionate about, or something completely unrelated, such as hobbies, genres of books you enjoy, or places you've traveled. Ultimately, your resume is your opportunity to be true to yourself and show a potential next employer what makes you special.
Highlighting your skills, keeping things clear and concise, and being intentional are the best ways to start your recruiting journey with MongoDB. We hope to see you in our interview process soon! Interested in pursuing a career at MongoDB? We have several open roles on our teams across the globe and would love for you to transform your career with us!

September 3, 2021

A Guide to Freeing Yourself from Legacy RDBMS

Oracle introduced the first commercial relational database (RDBMS) to the market in 1979 — more than a decade before the World Wide Web. Now, digital transformation is reshaping every industry at an accelerating pace. In an increasingly digital economy, a company's competitive advantage is defined by how well it builds software around its most critical asset — data. MongoDB and Palisade Compliance have helped some of the largest and most complex Oracle customers transform their architecture and shift to a cloud-first world. Although every client is unique, we have identified three important steps to moving away from Oracle software, reducing costs, and achieving digital transformation goals:

1. Understand your business and technical requirements for today and tomorrow, and identify the technical solution and company that will be by your side to help future-proof your organization.
2. Decipher your Oracle contracts and compliance positions to maximize cost reduction initiatives and minimize any risks from Oracle audits and non-compliance that may derail your ultimate goals.
3. Mobilize internal momentum and traction to make the move.

MongoDB can help with #1, Palisade Compliance assists with #2, and you have to supply #3. This is a guide to getting started, as outlined by the main pillars of success above.

1. Understand your requirements and find the right partner — MongoDB

The most common requirements we hear from organizations are that they need to move faster, increase developer productivity, and improve application performance and scale, all while reducing cost and breaking free from vendor lock-in. For example, to keep pace with demands from the business, Travelers Insurance modernized its development processes with a microservices architecture supported by agile and DevOps methodologies. But the rigidity of its existing Oracle and SQL Server databases created blockers to moving at the speed it needed.
The solution was MongoDB and its flexible data model. They eliminated the three-day wait to make any database changes, creating a software development pipeline supporting continuous delivery of new business functionality. Similarly, Telefonica migrated its customer personalization service from Oracle to MongoDB. Using Oracle, it took 7 developers, multiple iterations and 14 months to build a system that just didn't perform. Using MongoDB, a team of 3 developers built its new personalization service in 3 months, which now powers both legacy and new products across the globe. MongoDB helps Telefonica be more agile, save money and drive new revenue streams. While some organizations try to innovate by allowing siloed, modern databases to coexist with their legacy relational systems, many organizations are moving to fully replace RDBMS. Otherwise, a level of complexity remains that creates significant additional work for developers because separate databases are required for search, additional technologies are needed for local data storage on mobile devices, and data often needs to be moved to dedicated analytics systems. As a result, development teams move slowly, create fewer new features, and cost the organization more capital. MongoDB provides the industry’s first application data platform that allows you to accelerate and simplify how you build with data for any application. Developers love working with MongoDB’s document model because it aligns with how they think and code. 
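As an illustration of why the document model "aligns with how developers think and code," here is a small, hypothetical sketch. The customer and policy fields are invented for this example; they are not drawn from Travelers or Telefonica.

```javascript
// Illustrative only: a customer and their policies modeled as one
// document rather than rows joined across several relational tables.
// All names and values here are invented.
const customer = {
  _id: "cust-1001",
  name: "A. Developer",
  policies: [
    { type: "auto", premium: 1200 },
    { type: "home", premium: 900 },
  ],
};

// Reads the way you think about the data: one customer's total
// premium, with no joins.
const totalPremium = customer.policies.reduce((sum, p) => sum + p.premium, 0);
console.log(totalPremium); // 2100
```

Because related data lives together in one document, a schema change like adding a new policy field is a code change, not a multi-day database migration.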
The summarized functional requirements that we typically hear from leading companies and development teams regarding what they require from a data platform include:

- A data structure that is both natural and flexible for developers to work with
- Auto-scaling and multi-node replication
- Distributed multi-document transactions that are fully ACID compliant
- Fully integrated full-text search that eliminates the need for separate search engines
- A flexible local datastore with seamless edge-to-cloud sync
- In-place, real-time analytics with workload isolation and native data visualization
- The ability to run federated queries across your operational/transactional databases and cloud object storage
- Turnkey global data distribution for data sovereignty and fast access to Data Lake
- Industry-leading data privacy controls with client-side, field-level encryption
- The freedom to run anywhere, including the major clouds across many regions

MongoDB delivers everything you need from a modern data platform. But it's not just about being the right data platform; we're also the right modernization partner. Through our Modernization Program, we have built and perfected modernization guides that help you select and prioritize applications, review best practices, and design best-in-class, production-grade migration frameworks. We've built an ecosystem around accelerating and simplifying your journey that includes: deployment on the leading cloud providers to enable the latest innovations; technology companies that help with data modeling, migration, and machine learning; and expert system integrators who provide you with the tools, processes, and support to accelerate your projects. We are proud to be empowering development teams to create faster and develop new features and capabilities, all with a lower total cost of ownership.

2. Manage Oracle as you move away — Palisade Compliance

Oracle's restrictive contracts, unclear licensing rules, and the threat of an audit can severely impact a company's ability to transform and adopt the new technologies that a cloud-first world requires. To move away from Oracle and adopt new solutions, companies must be sure they can actually reduce their costs while staying in compliance and avoiding the risks associated with an audit. There will be a time when you are running your new solution and your legacy Oracle software at the same time. This is a critical phase in your digital transformation, as you do not want to be tripped up by Oracle's tactics and forced to stay with them. It may seem counterintuitive, but as you spend less with Oracle you must be even more careful with your licensing. As long as you keep spending money with Oracle and renewing those expensive contracts, the threat of an audit and non-compliance will remain low. Oracle is unlikely to audit a company that keeps giving it money. However, the moment you begin to move to newer technologies, your risk of an audit significantly increases. As a result, you must be especially vigilant to prevent Oracle from punishing you as you move away. Even once you've found a technical partner and managed your Oracle licenses and compliance to ensure no surprises, you still have to find a way to reduce your costs. It's not as simple as terminating Oracle licenses and watching your support costs go down. As stated above, Oracle contracts are designed to lock in customers and make it nearly impossible to actually reduce costs. Palisade Compliance has identified eleven ways to manage your Oracle licenses and reduce your Oracle support costs. It is critical that you understand and identify the options that work for your firm, and then build and execute on a plan that ensures your success.

3. Mobilize internal momentum and traction to make the move

Legacy technology companies excel at seeding doubt into organizations and preventing moves that threaten their antiquated solutions. Unfortunately, too many companies succumb to these tactics and are paralyzed into a competitive disadvantage in the market. In software, as in life, it's easier to stay the course than to follow through with change. But when it comes to technical and business decisions that impact the overall success and direction of an organization, innovation and change aren't just helpful; they're necessary to survive, especially in a world with high customer demands and easy market entry. Ensuring you have the right technical partner and Oracle advisor is the best way to build the confidence and momentum needed to make your move. Creating that momentum is easier with MongoDB's database platform, consisting of a fully managed service across 80+ regions, and Palisade's expertise in Oracle licensing and contracts.

Technical Alternative (MongoDB) + Independent Oracle Advisors (Palisade) ⇒ Momentum

Parting thoughts

To schedule a preliminary health check review and begin building the right strategy for your needs, fill out your information here. And to learn more about MongoDB's Modernization Program, visit this page.

About Palisade Compliance

With over 400 clients in 30 countries around the world, Palisade is the leading provider of Oracle-independent licensing, contracting, and cost reduction services. Visit the website to learn more. To schedule a complimentary one-hour Oracle consultation send an email to

September 2, 2021

Highlight What Matters with the MongoDB Charts SDK

We're proud to announce that with the latest release of the MongoDB Charts SDK, you can now apply highlights to your charts. These allow you to emphasize and deemphasize parts of your charts using MongoDB query operators. Build a richer interactive experience for your customers by highlighting with the MongoDB Charts embedding SDK. By default, MongoDB Charts allows for emphasizing parts of your charts by series when you click within a legend. With the new highlight capability in the Charts Embedding SDK, we put you in control of when this highlighting should occur, and what it applies to.

Why would you want to apply highlights?

Highlighting opens up the opportunity for new experiences for your users. The two main reasons you may want to highlight are:

- To show user interactions: We use this in the click handler sandbox to make it obvious what the user has clicked on. You could also use this to show documents affected by a query for a control panel.
- To attract the user's attention: If there's a part of the chart you want your users to focus on, such as the profit for the current quarter or the table rows of unfilled orders.

Getting started

With the release of the Embedding SDK, we've added the setHighlight method to the chart object, which uses MQL queries to decide what gets highlighted. This lets you attract attention to marks in a bar chart, lines in a line chart, or rows in a table. Most of our chart types are already supported, and more will be supported as time goes on. If you want to dive into the deep end, we've added a new highlighting example and updated the click event examples to use the new highlighting API:

- Highlighting sandbox
- Click events sandbox
- Click events with filtering sandbox

The anatomy of a click

In MongoDB Charts, each click produces a wealth of information that you can then use in your applications. In particular, we generate an MQL expression called selectionFilter, which represents the mark selected.
Note that this filter uses the field names in your documents, not the channel names. Previously, you could use this filter with setFilter to filter your charts; now you can use the same filter to apply emphasis. All this requires is calling setHighlight on your chart with the selectionFilter query that you get from the click event, as seen in this sandbox.

Applying more complex highlights

Since we accept a subset of the MQL language for highlighting, it's possible to specify highlights that target multiple marks, as well as multiple conditions. We can use expressions like $lt and $gte to define ranges we want to highlight. And since we support the logical operators as well, you can even use $and / $or. All the Comparison, Logical, and Element query operators are supported, so give it a spin!

Conclusion

The ability to highlight data will make your charts more interactive and help you emphasize the most important information in them. Check out the embedding SDK to start highlighting today! New to Charts? You can start now for free by signing up for MongoDB Atlas, deploying a free tier cluster, and activating Charts. Have an idea on how we can make MongoDB Charts better? Feel free to leave it at the MongoDB Feedback Engine.
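As a sketch of the range-based highlights described above, the small helpers below build an MQL highlight expression from a numeric range and combine conditions with $or. The helper functions and field names ("profit", "quarter") are invented for illustration; only the resulting filter object, the kind of value you would pass to chart.setHighlight, follows the API described in this post.

```javascript
// Hypothetical helpers for building highlight filters. Field names
// ("profit", "quarter") are invented; the output object is what you
// would pass to chart.setHighlight(...).
function rangeHighlight(field, min, max) {
  // Highlight marks whose field value falls inside [min, max).
  return { [field]: { $gte: min, $lt: max } };
}

function anyOf(...filters) {
  // Combine several highlight conditions with $or.
  return { $or: filters };
}

const highlight = anyOf(
  rangeHighlight("profit", 10000, 50000),
  { quarter: "Q3" }
);
// e.g. await chart.setHighlight(highlight);
```

Keeping the filter construction in plain functions like these makes it easy to reuse the same expressions for setFilter and setHighlight.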

September 2, 2021

MongoDB Atlas as a Data Source for Amazon Managed Grafana

Amazon Managed Grafana is a fully managed service based on open source Grafana. It makes it easy to visualize and analyze operational data at scale. With Amazon Managed Grafana, organizations can analyze data stored in MongoDB Atlas without having to provision servers, configure or update software, or do the heavy lifting involved in securing and scaling Grafana in production.

Connecting MongoDB Atlas to AMG

The MongoDB Grafana plug-in makes it easy to query MongoDB with Amazon Managed Grafana. Simply select MongoDB as a data source, then connect to the MongoDB cluster using an Atlas connection string and proper authentication credentials (see Figure 1).

Figure 1. Set up: MongoDB Grafana plug-in

Now, MongoDB is configured as a data source. To visualize the data through Amazon Managed Grafana, select the Explore tab in the side panel and ensure that MongoDB is selected as the data source. You can then write a first query in the query editor (see Figure 2):

sample_mflix.movies.aggregate([
  {"$match": { "year": {"$gt": 2000} }},
  {"$group": { "_id": "$year", "count": { "$sum": 1 }}},
  {"$project": { "_id": 0, "count": 1, "time": { "$dateFromParts": {"year": "$_id", "month": 2}}}}
]).sort({"time": 1})

Figure 2. AMG query editor

Grafana will graph the query, illustrating how certain fields change over time. For more granular detail, you can review the data view below the visualization (see Figure 3).

Figure 3. AMG data view

Using MongoDB as a data source in Amazon Managed Grafana lets you analyze MongoDB data alongside other data sources, affording a single point of reference for all of the most important data in an application. There's no hassle; once connected to MongoDB from Amazon Managed Grafana, it simply works. Try out MongoDB Atlas with Amazon Managed Grafana today.
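The same aggregation can be sanity-checked outside Grafana. Below is a small, hypothetical helper that builds the stages shown above so they can be inspected or passed to a driver's aggregate call; the function name is ours, not part of any MongoDB API, and the shell's trailing .sort() is expressed here as a $sort pipeline stage.

```javascript
// Hypothetical helper mirroring the Grafana query above: count movies
// per year after a cutoff and attach a date for the time axis.
function moviesPerYearPipeline(afterYear) {
  return [
    { $match: { year: { $gt: afterYear } } },
    { $group: { _id: "$year", count: { $sum: 1 } } },
    { $project: {
        _id: 0,
        count: 1,
        time: { $dateFromParts: { year: "$_id", month: 2 } },
    } },
    { $sort: { time: 1 } },
  ];
}

// e.g. db.collection("movies").aggregate(moviesPerYearPipeline(2000))
const pipeline = moviesPerYearPipeline(2000);
```

Building the pipeline as data also makes it easy to keep the Grafana panel query and any application-side query in sync from one definition.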

September 1, 2021
