
MongoDB Hackathon Winners Announced

Last December, the MongoDB Atlas Hackathon on DEV was launched. A month has passed, and we are very excited to announce this edition's winners. Once again, the MongoDB judging panel was highly impressed by the quality of submissions and would like to thank everyone who participated in the hackathon. Over 210 submissions were received, and the choice was not easy. The submissions were divided into five categories, and each category winner will receive a grand prize worth $1500. In addition to the grand prize winners, ten runner-up projects were selected for prizes worth $250.

Action Star

In this category, the projects needed to create an event-driven application that used MongoDB Realm Functions and Triggers.

Plant Monitor - Using IoT, MongoDB and Flutter

Plant monitor is a complete plant health tracking device that uses MongoDB behind the scenes to store all the data from the IoT device. In addition to using the database to store the data, Souvik created a Realm function to have an HTTP endpoint for the device to send the data directly to MongoDB Atlas. He also went an extra step and used MongoDB Charts for excellent analytics of the plant health.

Automation Innovation

Here, the participants needed to automate a task performed every week using Atlas Serverless.

New Year Resolution Tracker with Weekly Automated Reports

New Year resolutions are hard to keep, but it will be easier this year, thanks to this application. New Year Resolution tracker is an application to log daily exercises and follow your progress toward your established goal. A MongoDB Atlas Serverless instance was used to optimize the cluster usage. Alex decided to use the MongoDB JavaScript native driver to connect to the database, making the code easier to read.

E-commerce Creation

Searching through a catalog of products is now easier than ever with Atlas Search. In this category, the participants demonstrated the power of Atlas Search in an e-commerce demo application.

Groovemade - E-commerce app using MongoDB Atlas Search

Adding full-text search capabilities to an application can seem like a highly complex problem. However, Patrick proved this wrong in his project. Groovemade is an e-commerce website that uses MongoDB Atlas to store its catalog of products. The search bar uses Atlas Search's auto-completion and fuzzy search features to deliver relevant results instantly to users.

Prime Time

This category was specifically for IoT and heavy analytics projects. Here, the participants used MongoDB Time Series collections to store large amounts of data.

Temperature Sensing with Raspberry Pi into MongoDB

IoT sensors produce a lot of data. This data needs to be stored efficiently to provide analytics to the end-users. With his fish tank temperature sensor, Kai demonstrated how to make the best use of MongoDB Time Series collections to handle the data coming from his sensor. With a new entry in the database every three seconds, Time Series is a great way to handle this type of data at scale.

Choose Your Own Adventure

For participants who got extra creative, this was the category for their projects. Anything that did not fit in the above categories was eligible here.

Asteria: Asteroids approaching Earth today

The MongoDB team received many creative and inventive submissions in this category. Still, this project stood out and was unanimously picked as the winner. In this project, Valeria uses many advanced MongoDB Atlas features such as Realm authentication, functions, and hosting, to warn us of possible asteroids colliding with our planet. Let's just hope we'll never have to actually use this website to track a potential collision!

Runners-Up

In addition to those five grand prize winners, the jury also picked their ten favourite projects from all submissions. The ten runners-up, in no particular order, are:

- Lotir - Share link and images between your phone and your computer, by Julien
- RENT! e-commerce, submission for AtlasHackathon, by Matteo Bianchi
- Metrics Monitoring App with Anomaly Detection using MongoDB, by madalinfasie
- Explore Seattle City Bikes trips, by Benoît Durand
- Recipe Cards Collection - Powered by MongoDB, Responsively Designed, by Roxioxx Studios
- Watchkeeping: a timesheet compiling tool for Seafarers, by Chuong Tang
- Atlas hackathon submission (Refactored waffles), by Pranjal Jain
- HeatSat, by yvesnrd
- vaccineAvailability Application, by Bikram Bhusan Sinha
- Manage webhooks with MongoDB Functions and Triggers, by Pubudu Jayawardana

See you next time!

That concludes our latest MongoDB Atlas and DEV hackathon. We would like to thank all the participants for their great ideas and unique submissions. This will most likely not be the last hackathon, so stay tuned to learn more about future events coming up on Dev.to.

January 28, 2022

Solving Complex Technical Challenges with MongoDB’s Technical Services Team

MongoDB's Technical Services team works with our customers to ensure that their MongoDB deployments are running at their best. From a query performance question on a test Atlas cluster to helping upgrade large self-hosted sharded clusters run by some of the world's best-known global enterprises, the Technical Services team is available 24/7 to help our customers with any MongoDB product or feature. This deeply technical team is distributed globally, with a variety of backgrounds and expertise to ensure that they can best address any new issue or question. In addition to solving these complex customer challenges, the team also works on internal projects such as software development of support tools for performance tuning, benchmarking, and diagnostics. Hear from three team members about their career journeys and roles within Technical Services at MongoDB.

Francisco Alanis, Senior Technical Services Engineer, Austin

Tell me about your journey into tech. How did you get to where you are today?

I've liked technology ever since I was a little kid. I grew up in a small border town in Mexico where my opportunities to learn more about technology were limited to books and magazines. However, I was fortunate enough to be able to visit the U.S. every once in a while. My favorite store to visit was Radio Shack, where I could be more hands-on with technology and inspired by what I found. I eventually started some more formal training in electronics when I did my junior high in a technical middle school, which offered the chance to get a technical degree by the time I started high school. In those days I was studying the basics of how computers worked as a side project, and I felt more attracted to that, but I didn't have access to an actual computer. After several months of saving, my dad was able to buy a computer at my insistence. From that moment on, I started to learn everything I could by poking and prodding and getting every computer magazine I could. I couldn't get access to the Internet until much later.

During high school, I moved to the U.S. and started living on my own at about 17 years old. My main objective back then was to get into a college to study computer science or computer engineering. I still had to finish high school but was sent back to 9th grade due to my poor spoken English. I didn't let that stop me, though. I dropped out and got my GED a few months later, earned my Associate of Arts from a local community college two years later, then got accepted at the University of Texas - Pan American (now UT Rio Grande Valley) after that. There, I worked on projects specializing in networking, distributed systems to solve Physics problems (processing of LHC data), and later on, computer-assisted protein alignment. I completed my Master's degree a few years later and graduated, married, and started working at IBM all in the same week. At IBM, I worked on Power VIOS Virtual Device Drivers, then AIX Network Device Drivers, where I got experience in diagnosis, testing, and SR-IOV driver implementations, and finally at Watson Health, where I worked as a DevOps engineer until 2017. In 2017 I started working at MongoDB as a Technical Services Engineer.

What has your career path looked like at MongoDB?

Before starting at MongoDB I had been working for almost 10 years as a developer, but I had no experience interacting directly with customers. In addition to that, my experience was deep in very specific types of technologies, but my breadth of knowledge wasn't great beyond what I could learn on my own from personal projects. Because of these limitations, my main goals when I started at MongoDB were to get better experience communicating with customers and to expand my breadth of knowledge. In these last four years, I can say I've done that and much more. I still feel a great sense of pride and accomplishment every time I start a call with a customer to assist in an emergency situation and end that call with either a crisis averted or with the customer confident that they are in good hands, knowing their problem will be handled not only by myself but by the full Technical Services team backing me up. Four years ago, I couldn't even imagine being able to offer that kind of service with the level of confidence I can today.

In addition to that, it is a world of difference having experience in the design and development of applications versus actually seeing those applications used in the real world, especially the day-to-day consequences of design decisions that may seem inconsequential as a developer but that can profoundly affect customers' usage patterns and views of a product. It’s also interesting to see how different product stacks that include MongoDB can have different effects on the database, both positive and negative.

What is the most enjoyable part of your role at MongoDB?

Undoubtedly, the best part of MongoDB is the people I work with. I'm very grateful to have the opportunity to work daily with colleagues that are not only very smart but are also very passionate about technology and solving problems. On top of that, they are more than willing to share their knowledge. Our work in Technical Services is very collaborative since there's no single person that knows everything about the data platform. We are exposed to all kinds of different and sometimes unique issues. These issues frequently create learning opportunities that we then share with the team. Additionally, because MongoDB is being used in all kinds of use cases with both mature and emerging technologies, we get a lot of exposure to different solutions used in the field. This can give you accelerated experience in any well-known or new industry trends.

Linda Qin, Staff Engineer, Sydney

Tell me about your role as a Staff Engineer.

My day-to-day job includes casework and project work. When I start my day, I first review the cases in both my own queue and the support queue, then work on the cases based on the urgency and severity. Normally my team responds primarily to the cases submitted by our customers on the MongoDB Support Portal. For critical issues, we’ll set up or join a call with the customer to resolve the issue. I am experienced in MongoDB core databases utilizing sharding, so I regularly help the team with questions in these areas. I am also the Named Technical Services Engineer (NTSE) for some customers. Our NTSE service is a premium enterprise support offering. MongoDB NTSEs work closely with designated customers and have a deep understanding of their environment in order to provide holistic, context-sensitive support. I join regular NTSE meetings with our customers to review opened issues, work on planned activities, and follow up on cases for them.

Aside from the casework, I contribute to projects that help improve our productivity. For example, a colleague and I worked on a sharding analyzer to analyze the metadata in a sharded environment. Sharding is a method that MongoDB uses to distribute data across multiple machines. The sharding analyzer can be used to help us understand the data distribution and diagnose issues more efficiently.

How do you collaborate with other teams and engineers in your role?

Sometimes a case covers multiple areas and different subject expert teams work together to help our customers. For example, when an Atlas customer reports a performance issue, the issue could be caused by under-provisioning or could be related to the queries or indexing configuration. In those cases, I work with my colleagues from the Atlas support team on the investigations into the core database. Within the Technical Services team, we have technical experts with deep experience and particular responsibilities surrounding their subject matter area. For example, we create a product report to highlight the main pain points and highly-demanded feature requests for the product team. We write Knowledge Base Articles to share internally and with our customers. Additionally, we are often involved in the early stage of new products to review the product description and scope documents and provide feedback based on our field experiences.

I am a technical expert in sharding. Apart from the above contributions, I have been working on growing the Technical Services team's skills in this subject area. I have developed a sharding workshop that provides a real sharding deployment with many exercises. New hires or anyone on the support team can use this workshop to get hands-on experience with common issues related to sharding and to gain additional knowledge on the topic of sharding. I am currently working on adding functions to our internal diagnostics tool to automatically analyze MongoDB logs for issues on sharding.

What are you most looking forward to in 2022?

For MongoDB Technical Services, I am looking forward to more talented people joining our team. We currently have lots of openings in many different locations. For myself, I would like to continue working on projects related to sharding and issue diagnosis. I also plan to work with the other sharding experts to complete the next-level sharding workshop, which includes some deeper exercises and knowledge on sharding.

Emilio Scalise, Staff Engineer, Rome

Tell me a bit about your career journey at MongoDB.

I started working at MongoDB in 2015 as part of a small team of six Support Engineers in our Dublin office. The company grew considerably, and I had the opportunity to move back to my country (Italy) to work as a remote Technical Support Engineer in 2016. This role started as an undifferentiated Technical Services Engineer for any MongoDB product, but I then began to specialize in supporting our enterprise applications and integrations, a focus that was created the year after I started. This team specializes in supporting Ops Manager, Cloud Manager, and other applications that MongoDB provides in addition to MongoDB Enterprise Server. Over the years I became a Senior Technical Services Engineer, then a Technical Team Lead, and finally a Staff Engineer, which is my current role.

How has Technical Services leadership supported your career growth?

The Technical Services leadership team supported me greatly through the years. I’ve been given the opportunity to take on increasing responsibility and lead many internal projects, teach and coach new team members, and work together with other teammates to continuously improve our MongoDB Support Service to match our growing company needs and expectations. All of these experiences helped me become the Staff Engineer I am today.

What types of activities do you take part in as a Staff Engineer?

Besides daily casework like all other Technical Services Engineers, as a Staff Engineer, I try to track and contribute to the resolution of major customer escalations and product issues. I’ve been coordinating training and internal tools efforts together with my teammates. Coaching, training, and collaborating with teammates is something that happens continuously over the day, every day. I am also involved with the Technical Experts Program in our organization as an “Expert Champion” (I help recruit new Experts) and as a member of the Ops Manager Experts Team. Within the Experts program, we collaborate with our Product and Development organizations by sharing feedback with them regarding product features and issues, and we also suggest and discuss future improvements in our products.

Interested in a career in Technical Services at MongoDB? We have several open roles on our teams across the globe and would love for you to transform your career with us!

January 27, 2022

Is Relational the New COBOL? What the History of Technology Tells Us About Change

We all know that technology is continuously evolving — otherwise we’d all be riding around in horse-drawn carriages. But what causes one technology to become dominant while another fades away? Are these changes obvious while they’re in progress, or only in retrospect? And what seismic shifts are happening now? These and other provocative themes featured heavily in a presentation by MongoDB CTO Mark Porter at the recent AWS re:Invent. Porter’s talk was titled, “Is Relational the New COBOL? What the History of Technology Shows Us About Change.”

COBOL was introduced in 1959, and by the 1970s was the most widely used programming language in the world, powering most mainframe-based software. With the rise of PCs and other advances, COBOL fell from prominence and eventually became a punchline — a stand-in for obsolescence. The programming language never went away, however. There are an estimated 1 to 2 million active COBOL programmers, and around 220 billion lines of COBOL code still in use, often in mission-critical applications.

But that doesn’t mean COBOL is relevant to innovation. Developers aren’t using COBOL for any new type of development. The language is inefficient, and doesn’t provide nearly the amount of scalability that developers need to build their applications. Porter sees a similar fate for relational databases — still in use for legacy applications, but unfit for innovation and superseded by modern solutions.

The trouble with relational

Much like COBOL, relational databases have a long history. However, as Porter explains, we are long past the point where a relational database is the most productive way to support a new app. Rigid data models and unnatural programming requirements make relational databases far less attractive than modern data platforms, which are enterprise-grade, scalable, flexible, highly intuitive, and run-anywhere. Here are some of the most interesting takeaways from Porter’s presentation:

Because relational databases are not at the center of new innovation, developers simply aren’t interested in working with them. Porter shared an anecdote about a recent conversation he had with another technology executive. “He said to me, ‘Mark, I can’t hire relational people out of school. No one wants to work on relational anymore… The people at my company keep telling me that they will quit if I keep making them work on some of those commercial databases, such as Oracle or SQL Server.’”

As the COVID-19 pandemic continues, companies are scrambling to differentiate themselves through innovation. But companies that rely on relational databases are at a disadvantage when it comes to scaling and keeping pace with competitors. “Enterprises today cannot outsource their innovation. Enterprises during COVID are insourcing their innovation. And when they insource their innovation, they want to move fast. It’s one thing if you can’t scale, it’s another if your competitor beats you to market.”

Will relational really go the way of COBOL — widely used, but only in legacy applications? Porter sees some clues. “It’s just economics, just like all the technological changes you face in your organization. The articles I researched in 1910 [show that people] thought that cars were this ridiculous thing. They didn’t see it coming. That’s where we are today with relational.”

January 27, 2022

Powered by MongoDB, Bliinx is Changing the Way Software is Sold

Regardless of the industry, sales organizations often struggle to determine the best way to identify potential customers. There are many schools of thought as to what the best approach is and when the most opportune time for a sales executive to reach out might be. One startup company aims to make that process as simple and efficient as possible. Bliinx, based in Montreal, Quebec, Canada, was created to help revenue teams focus and act on the most qualified leads and accounts based on product usage, billing, firmographic, and marketing engagement data. Bliinx’s mission is to “change the way we sell software.” We spoke with Bliinx co-founders Fred Melanson and John Espinoza about starting the company, their journey, and where they see Bliinx headed in the future.

How did you decide to start Bliinx?

Melanson: I realized that it’s hard to build quality relationships with a lot of people, especially people that you’re trying to get investments from. I would ask people a lot of questions around relationship building, and the question became: how do you manage your clients’ relationships? Everyone would answer that they do everything manually, across siloed channels, and that it’s a pain to manage and scale. So I figured there must be something there; that was really the spark that we created Bliinx on.

What does Bliinx do?

Melanson: We are a lead prioritization platform for bottom-up B2B SaaS, so we help sales teams - mainly account executives - know who the best leads are at the best accounts to reach out to, and also identify when it’s the best time to reach out to them. The way we do that is by finding signals and insights in their sales conversations, their marketing engagement, and product usage. Our tool will plug into your system and find insights that are worth engaging on, scoring your accounts and your leads so the sales reps are focused on the best customers at the best time, without having to use generic one-size-fits-all automation. That can be great for top-of-funnel SDRs, but for CSMs, who are really about nurturing, closing, and expanding revenue, it has to be more thoughtful and more human, because it’s getting harder and harder to get people’s attention, and retention is immensely valuable for SaaS companies. So our tool helps find the best people at the best time to grow revenue faster.

What are some tools that Bliinx connects with?

Melanson: The basic one will plug into your email and calendar. We also have a LinkedIn integration, which is pretty unique, to sync your LinkedIn messages, and we plug into your CRM. It also connects with Slack to receive notifications, and right now we are building integrations with Segment, Intercom, Stripe, and Snowflake, so reps can have product insights. We are also building new integrations for LinkedIn and Twitter so that reps can also have content marketing engagement insights to act on.

Where are you right now with Bliinx? How has the journey been? Have you gone through accelerators, and are you funded by VCs?

Melanson: I started working on the project about a year-and-a-half to two years ago; it was really an idea out of college. After a lot of learning, we raised an angel round really quickly, and a couple of months later we got accepted to 500 Startups. From there we raised a pre-seed round, and we’ve been iterating on the product, trying to really find our positioning, find the people that have the problem, and figure out the best version of the problem that we can solve.

How did getting accepted into 500 Startups shape Bliinx?

Melanson: It’s a game changer. I don’t think we would have been here today if it wasn’t for 500 Startups. It was an amazing experience; you’re surrounded by so many smart people, and have access to expertise that you don’t normally have access to. You get what you take out of it, so I pushed it to the max: every time there was office hours, I would take it; every time there was an investor meeting open, I would take it. I would really, really push, and it got us to great results, and it’s through 500 Startups that I met our lead investor.

Can you tell us about your tech stack?

Espinoza: I want to keep it simple; this is the main rule of the company. We've built our system with microservices, use NodeJS and NoSQL for our back end, and have built a robust back-end infrastructure for our proprietary data orchestration engines. The rest of our platform is built on TypeScript, and we use MongoDB to manage our databases.

How did you decide to go with MongoDB?

Espinoza: At my first startup, we used MongoDB and had a great experience. We use MongoDB, and I really love it. We don’t have to care about backups, or anything to do with the infrastructure. It’s plug and play. What’s amazing for us is that I come from a background where you have to build everything, so going with a NoSQL database is fantastic because you don’t have to maintain all the schema, which can be really messy. Like I said, we try to keep it simple.

What excites you now about working with Bliinx?

Melanson: With the rise of companies that are product-led or marketing-led, and the fact that people are working remotely, sales is changing, and I think it’s for the better. Tools on the market need to adjust. Yes, people want to try a product out before they buy it, and they don’t want to go through a sales rep, but they still want to meaningfully connect with people in sales. And sales reps are a big part of that journey; it’s just that you don’t reach out cold to sell. You have them try it, and then you’re more of a consultant, or a hand holder along the way. So what excites me is figuring out a way for people to build meaningful connections in business, with us being so remote.

Espinoza: Everything that we build here is new for me, and that’s what excites me. Working with a lot of data coming from everywhere and building something valuable with it. This is the magic box that we’re building, and it’s a great opportunity.

What advice would you give to someone starting up their own company?

Melanson: 99% of people just don’t start, so my main advice is to just start. That’s really what the hurdle is; that’s the toughest part. People think it’s recruiting a technical co-founder, or that raising money is the toughest part, but it’s starting. You can go so far validating your idea without having a single line of code.

Espinoza: Don’t start with titles. In the beginning, you’re just people with a project. The other is to go talk to people who are doing the same thing. Finding other people to bounce ideas off of, just to validate ideas, is something that has helped me a lot.

Interested in learning more about MongoDB for Startups? Learn more about us here.

January 26, 2022

Scale Out Without Fear or Friction: Live Resharding in MongoDB

Live resharding was one of the key enhancements delivered in our MongoDB 5.0 Major Release. With live resharding you can change the shard key for your collection on demand as your application evolves, with no database downtime or complex data migrations. In this blog post, we will be covering:

- Product developments that have made sharding more flexible
- What you had to do before MongoDB 5.0 to reshard your collection, and how that changed with 5.0 live resharding
- Guidance on the performance and operational considerations of using live resharding

Before that, we should discuss why you should shard at all, and the importance of selecting a good shard key – even though you have the flexibility with live resharding to change it at any time. Go ahead and skip the next couple of sections if you are already familiar with sharding!

Why Shard your Database?

Sharding enables you to distribute your data across multiple nodes. You do that to:

- Scale out horizontally — accommodate growing data or application load by sharding once your application starts to get close to the capacity limits of a single replica set.
- Enforce data locality — for example, pinning data to shards that are provisioned in specific regions so that the database delivers low latency local access and maintains data sovereignty for regulatory compliance.

Sharding is the best way of scaling databases, and MongoDB was developed to support sharding natively. Sharding MongoDB is transparent to your applications and it’s elastic, so you can add and remove shards at any time.

The Importance of Selecting a Good Shard Key

MongoDB’s native sharding has always been highly flexible — you can select any field or combination of fields in your documents to shard on. This means you can select a shard key that is best suited to your application’s requirements. The choice of shard key is important as it defines how data is distributed across the available shards. Ideally you want to select a shard key that:

- Gives you low latency and high throughput reads and writes by matching data distribution to your application’s data access patterns.
- Evenly distributes data across the cluster so you avoid any one shard taking most of the load (i.e., a “hot shard”).
- Provides linear scalability as you add more shards in the future.

While you have the flexibility to select any field(s) of your documents as your shard key, it was previously difficult to change the shard key later on. This made some developers fearful of sharding. If you chose a shard key that doesn’t work well, or if application requirements change and the shard key doesn’t work well for the changed access patterns, the impact on performance could be significant. At this point in time, no other mainstream distributed database allows users to change shard keys, but we wanted to give users this ability.

Making Shard Keys More Flexible

Over the past few releases, MongoDB engineers have been working to provide more sharding flexibility to users:

- MongoDB 4.2 introduced the ability to modify a shard key’s value. Under the covers, the modification process uses a distributed, multi-document ACID transaction to change the placement of a document in a sharded cluster. This is useful when you want to rehome a document to a different geographic region or age data out to a slower storage tier.
- MongoDB 4.4 went further with the ability to refine the shard key for a collection by adding a suffix to an existing key, as sketched below.
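As a minimal mongosh sketch of that 4.4 refinement, suppose an existing test.orders collection is currently sharded on order_id; the database, collection, and field names here are purely illustrative:

```javascript
// Illustrative sketch (MongoDB 4.4+): refine the shard key of test.orders,
// currently sharded on { order_id: 1 }, by adding customer_id as a suffix.
// An index supporting the refined key must exist before refining.
db.getSiblingDB("test").orders.createIndex({ order_id: 1, customer_id: 1 });

db.adminCommand({
  refineCollectionShardKey: "test.orders",
  key: { order_id: 1, customer_id: 1 }  // existing key fields first, suffix field(s) last
});
```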
Both of these enhancements made sharding more flexible, but they didn’t help if you needed to reshard your collection using an entirely different shard key.

Manual Resharding: Before MongoDB 5.0

Resharding a collection was a manual and complex process that could only be achieved through one of two approaches:

- Dumping the entire collection and then reloading it into a new collection with the new shard key. This is an offline process, and so your application is down until data reloading is complete — for example, it could take several days to dump and reload a 10 TB+ collection on a three-shard cluster.
- Undergoing a custom migration that involved writing all the data from the old cluster to a new cluster with the resharded collection. You had to write the query routing and migration logic, and then constantly check the migration progress to ensure all data had been successfully migrated.

Custom migrations entail less downtime, but they come with a lot of overhead. They are highly complex, labor-intensive, risky, and expensive (as you had to run two clusters side-by-side). It took one MongoDB user three months to complete the live migration of 10 billion documents.

How this Changed with MongoDB 5.0: Live Resharding

We made manual resharding a thing of the past with MongoDB 5.0. With 5.0 you just run the reshardCollection command from the shell, point at the database and collection you want to reshard, specify the new shard key, and let MongoDB take care of the rest.

reshardCollection: "<database>.<collection>", key: <shardkey>

When you invoke the reshardCollection command, MongoDB clones your existing collection into a new collection with the new shard key, then starts applying all new oplog updates from the existing collection to the new collection. This enables the database to keep pace with incoming application writes. When all oplog updates have been applied, MongoDB will automatically cut over to the new collection and remove the old collection in the background.

Let's walk through an example where live resharding would really help a user:

- The user has an orders collection. In the past, they needed to scale out and chose the order_id field as the shard key.
- Now they realize that they have to regularly query each customer’s orders to quickly display order history. This query does not use the order_id field, so to return the results for such a query, all shards need to provide data for the query. This is called a scatter-gather query.
- It would have been more performant and scalable to have orders for each customer localized to a shard, avoiding scatter-gather, cross-shard queries. They realize that the optimal shard key would be "customer_id: 1, order_id: 1" rather than just the order_id.
- With MongoDB 5.0’s live resharding, the user can just run the reshard command, and MongoDB will reshard the orders collection for them using the new shard key, without having to bring the database and the application down. (A mongosh sketch of this workflow follows below.)

Watch our short Live Resharding talk from MongoDB.Live 2021 to see a demo with this exact example. Not only can you change the field(s) for a shard key, you can also review your sharding strategy, changing between range, hash, and zones.

Live Resharding: Performance and Operational Considerations

Even with the flexibility that live resharding gives you, it is still important to properly evaluate the selection of your shard key. Our documentation provides guidance to help you make the best choice of shard key.
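Before digging into those considerations, here is what the orders example above might look like end to end in mongosh. This is a minimal sketch, not the demo from the talk: the test database name is assumed, and the monitoring and early-commit commands shown are the documented companions to reshardCollection.

```javascript
// Minimal sketch of the orders example above (MongoDB 5.0+, mongosh).
// Assumes a sharded cluster with a "test.orders" collection currently
// sharded on { order_id: 1 }; all names are illustrative.

// 1. Kick off live resharding onto the new shard key.
//    This call blocks until the operation completes.
db.adminCommand({
  reshardCollection: "test.orders",
  key: { customer_id: 1, order_id: 1 }
});

// 2. From another session, monitor progress; the output includes an
//    estimate of the remaining time (remainingOperationTimeEstimatedSecs).
db.getSiblingDB("admin").aggregate([
  { $currentOp: { allUsers: true, localOps: false } },
  { $match: { type: "op", "originatingCommand.reshardCollection": "test.orders" } }
]);

// 3. While the operation is running, you can optionally force cutover
//    early (blocking writes until resharding completes) ...
db.adminCommand({ commitReshardCollection: "test.orders" });

// ... or abort the resharding operation entirely.
db.adminCommand({ abortReshardCollection: "test.orders" });
```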
Of course, live resharding makes it much easier to change that key should your original choice have not been optimal, or if your application changes in a way that you hadn’t previously anticipated. If you find yourself in this situation, it is essential to plan for live resharding.

What do you need to be thinking about before resharding?

Make sure you have sufficient storage capacity available on each node of your cluster. Since MongoDB is temporarily cloning your existing collection, spare storage capacity needs to be at least 1.2x the size of the collection you are going to reshard. This is because we need 20% more storage in order to buffer writes that occur during the resharding process. For example, if the size of the collection you want to reshard is 2 TB compressed, you should have at least 2.4 TB of free storage in the cluster before starting the resharding operation. (A short sizing check is sketched at the end of this section.)

While the resharding process is efficient, it will still consume additional compute and I/O resources. You should therefore make sure you are not consistently running the database at or close to peak system utilization. If you see CPU usage in excess of 80% or I/O usage above 50%, you should scale up your cluster to larger instance sizes before resharding. Once resharding is done, it's fine to scale back down to regular instance sizes.

Before you run resharding, you should update any queries that reference the existing shard key to include both the current shard key and the new shard key. When resharding is complete, you can remove the old shard key from your queries.

Review the resharding requirements documentation for a full run-down of the key factors to consider before resharding your collection.

What should you expect during resharding?

The total duration of the resharding process is dependent on the number of shards, the size of your collection, and the write load to your collection. For a constant data size, the more shards, the shorter the resharding duration. In a simple POC on MongoDB Atlas, a 100 GB collection took just 2 hours 45 minutes to reshard on a 4-shard cluster and 5 hours 30 minutes on a 2-shard cluster. The process scales up and down linearly with data size and number of shards – so a 1 TB collection will take 10 times longer to reshard than a 100 GB collection. Of course, your mileage may vary based on the read/write ratio of your application along with the speed and quality of your underlying hardware infrastructure.

While resharding is in flight, you should expect the following impacts to application performance:

- The latency and throughput of reads against the collection that is being resharded will be unaffected.
- Even though we are writing to the existing collection and then applying oplog entries to both its replicas and to the cloned collection, you should expect to see negligible impact to write latency given enough spare CPU. If your cluster is CPU-bound, expect a latency increase of 5 to 10% during the cloning phase and 20 to 50% during the applying phase (*).
- As long as you meet the aforementioned capacity requirements, the latency and throughput of operations to other collections in the database won't be impacted.

(*) Note: If you notice unacceptable write latencies to your collection, we recommend you stop resharding, increase your shard instance sizes, and then run resharding again. The abort and cleanup of the cloned collection are instantaneous.

If your application has time periods with less traffic, reshard your collection during that time if possible.
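As a quick sanity check of the 1.2x sizing guidance above, something like the following can be run in mongosh before kicking off resharding. This is a rough sketch with the illustrative test.orders names; stats().storageSize reports the collection's compressed on-disk size across shards.

```javascript
// Rough pre-flight check of the 1.2x free-storage rule (names illustrative).
const stats = db.getSiblingDB("test").orders.stats();

// Compressed on-disk size of the collection, in bytes.
const collectionBytes = stats.storageSize;

// Spare capacity needed cluster-wide before resharding: at least 1.2x.
const requiredFreeBytes = collectionBytes * 1.2;

print(`Collection size: ${(collectionBytes / 1e9).toFixed(1)} GB; ` +
      `free storage needed: ${(requiredFreeBytes / 1e9).toFixed(1)} GB`);
```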
All of your existing isolation, consistency, and durability guarantees are honored while resharding is running. The process itself is resilient and crash-safe, so if any shard undergoes a replica set election, there is no impact to resharding – it will simply resume when the new primary has been elected. You can monitor the resharding progress with the $currentOp pipeline stage. It will report an estimate of the remaining time to complete the resharding operation. You can also abort the resharding process at any time.

What happens after resharding is complete?

When resharding is done and the two collections are in sync, MongoDB will automatically cut over to the new collection and remove the old collection for you, reclaiming your storage and returning latency back to normal. By default, cutover takes up to two seconds — during which time the collection will not accept writes, and so your application will see a short spike in write latency. Any writes that time out are automatically retried by our drivers, so exceptions are not surfaced to your users.

The cutover interval is tunable: resharding will be quicker if you raise the interval above the two-second default, with the trade-off that the period of write unavailability will be longer. By dialing it down below two seconds, the interval of write unavailability will be shorter; however, the resharding process will take longer to complete, and the odds of the window ever being short enough to cut over will be diminished. You can block writes early to force resharding to complete by issuing the commitReshardCollection command. This is useful if the current time estimate to complete the resharding operation is an acceptable duration for your collection to block writes.

What you Get with Live Resharding

Live resharding is available wherever you run MongoDB – whether that’s in our fully managed Atlas application data platform in the cloud, with Enterprise Advanced, or if using the Community Edition of MongoDB. To recap how you benefit from live resharding:

- Evolve with your apps with simplicity and resilience: As your applications evolve or as you need to improve on the original choice of shard key, a single command kicks off resharding. This process is automated, resilient, and non-disruptive to your application.
- Compress weeks/months to minutes/hours: Live resharding is fully automated, so you eliminate disruptive and lengthy manual data migrations. To make scaling out even easier, you can evaluate the effectiveness of different shard keys in dev/test environments before committing your choice to production. Even then, you can change your shard key when you want to.
- Extend flexibility and agility across every layer of your application stack: You have seen how MongoDB’s flexible document data model instantly adapts as you add new features to your app. With live resharding you get that same flexibility when you shard. New features or new requirements? Simply reshard as and when you need to.

Summary

Live resharding is a huge step forward in the state of distributed systems, and is just the start of an exciting and fast-paced MongoDB roadmap that will make sharding even easier, more flexible, and automated. If you want to dig deeper, please take a look at the Live Resharding session recording from our developer conference and review the resharding documentation. To learn more about MongoDB 5.0 and our new Rapid Releases, download our guide to what’s new in MongoDB.

January 26, 2022

Introducing MongoDB Realm’s Flexible Sync – Now Available in Preview

Twelve months ago, we made MongoDB’s edge-to-cloud data synchronization service, Realm Sync, generally available. Since then, Sync has helped hundreds of our customers build reliable, offline-first mobile apps that serve data to millions of end users – from leading telematics providers to chart-topping consumer apps. Historically, Realm Sync has worked well for apps where data is compartmentalized and permissions rarely change, but dynamic use cases with evolving permissions required workarounds. We knew we could do more, so today we are excited to announce the next iteration of Realm Sync – Flexible Sync. With the introduction of Flexible Sync, we are redefining the sync experience by enabling even the most complex use cases out-of-the-box without requiring any custom code.

Intuitive query-based sync

Distinctly different from how Realm Sync operates today, Flexible Sync lets you use language-native queries to define the data synced to user applications. This more closely mirrors how you are used to building applications today – using GET requests with query parameters – making it easy to learn and fast to build to MVP. Flexible Sync also supports dynamic, overlapping queries based on user inputs. Picture a retail app that allows users to search available inventory. As users define inputs – show all jeans that are size 8 and less than $40 – the query parameters can be combined with logical ANDs and ORs to produce increasingly complex queries, and narrow down the search result even further. In the same application, employees can quickly limit inventory results to only their store’s stock, pulling from the same set of documents as the customer, without worrying about overlap. (A code sketch of this retail example appears at the end of this post.)

Document-level permissions

Whether it’s a company’s internal application or an app on the App Store, permissions are required in almost every application. That’s why we are excited by how seamless Flexible Sync makes applying a document-level permission model when syncing data – meaning synced documents can be limited based on a user’s role. Consider how an emergency room team would use their hospital’s application. A resident should only be able to access her patients’ charts, while her fellow needs to be able to see the entire care team’s charts. In Flexible Sync, a user’s role will be combined with the client-side query to determine the appropriate result set. For example, when the resident above filters to view all patient charts, the permission system will automatically limit the results to only her patients.

Real-time collaboration optimizations

Flexible Sync also enhances query performance and optimizes for real-time collaboration by treating a single object or document as the smallest entity for synchronization. This means synced data is shared between client devices more efficiently and conflict resolution incorporates changes faster and with less data transfer than before.

Getting started

Flexible Sync is available now. Simply sign up or log in to your cloud account, deploy a Realm app, select your sync type, and dive right in. Flexible Sync is compatible with MongoDB 5.0, which is available with dedicated Atlas database clusters (M10 and higher). Shared-tier cluster support for 5.0 and Flexible Sync will be made available mid-February. Have questions? Check out our documentation or the more detailed announcement post on the Developer Hub.

Looking ahead

Our goal with Flexible Sync is to deliver a sync service that can fit any use case or schema design pattern imaginable without custom code or workarounds.
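To make the retail example above concrete, here is a minimal sketch of what a flexible sync subscription might look like in the Realm JavaScript SDK. The Item model, its fields, the subscription name, and the app ID are all assumptions for illustration, not code from this announcement.

```javascript
import Realm from "realm";

// Hypothetical model for the retail inventory example above.
const ItemSchema = {
  name: "Item",
  primaryKey: "_id",
  properties: { _id: "objectId", category: "string", size: "int", price: "double" },
};

async function openInventory() {
  // Replace with your own Realm app ID.
  const app = new Realm.App({ id: "<your-app-id>" });
  const user = await app.logIn(Realm.Credentials.anonymous());

  // Open a realm with flexible sync instead of a partition value.
  const realm = await Realm.open({
    schema: [ItemSchema],
    sync: { user, flexible: true },
  });

  // Subscribe to a language-native query: jeans, size 8, under $40.
  await realm.subscriptions.update((mutableSubs) => {
    mutableSubs.add(
      realm.objects("Item").filtered("category == 'jeans' AND size == 8 AND price < 40"),
      { name: "affordableJeans" }
    );
  });

  return realm;
}
```

Because subscriptions are just queries, adding or replacing one (for example, an employee narrowing results to their own store) is another subscriptions.update call rather than a partition redesign.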
And while we are excited that Flexible Sync is now in preview, we’re nowhere near done. The Realm Sync team is planning to bring you more query operators, permissions integrations, and enhancements over the course of 2022. We look to you, our users, to help us drive the roadmap. Submit your ideas and feature requests to our feedback portal and ask questions in our Community forums. Happy building!

January 24, 2022

10 Signs Your Data Architecture Is Limiting Your Innovation: Part 3

When it comes to your database architecture, complexity can quickly lead to a drag on your productivity, frustration for your developers, and less time to focus on innovation while your team maintains the status quo. New feature rollouts take longer than they should, while your resources are consumed by tedious tasks that allow your app to survive, but not truly thrive. This complexity manifests in many different ways; as the symptoms accumulate, they can become a serious hindrance to your ability to bring innovative ideas to market. We think of the effect as a kind of tax — a tax that is directly rooted in the complexity of your data architecture. We call it DIRT — the Data and Innovation Recurring Tax. We have identified ten symptoms that can indicate your business is paying DIRT. For an in-depth view, read our white paper 10 Signs Your Data Infrastructure is Holding You Back.

Sign #5: New features are rolled out in months, not days

With a complex data architecture, your developers have to switch constantly between languages and think in different frameworks. They may use one language to work directly with a database, another to use the object-relational mapping (ORM) layer built on top of it, and yet another to access search functionality. That becomes a major drag on productivity. That slows down your individual developers, but it also has consequences for how they work as a team. If every application architecture is bespoke, it’s almost impossible for developers’ skills to be shared and put to use across an organization. Development slows down. When a key person leaves, there is no one who can effectively fill in, and you end up hiring for very specific skills. That’s hard enough, but you also don’t know if you’ll still need those skills in a year or three.

Sign #6: It takes longer to roll out schema changes than to build new features

If you’re rolling out application changes frequently — or trying to — and you’re using a relational database, then schema changes are hard to avoid. One survey found that 60% of application changes require modifications to existing schema, and, worse, those database changes take longer to deploy than the application changes they are supposed to support. Legacy relational databases require developers to choose a schema at the outset of a project, before they understand the entirety of the data they need or the ways in which their applications will be used. Over time, and with user feedback, the application takes shape — but often it’s not the shape that was originally anticipated. At that point, a fixed schema makes it very hard to iterate, leaving teams with a tough choice: try to achieve your new goals within the context of a schema that isn’t really suitable, or go through the painful process of changing it.

Learn more about the innovation tax and how to lessen it in our white paper DIRT and the High Cost of Complexity.

January 21, 2022

Faster Migrations to MongoDB Atlas on Google Cloud with migVisor by EPAM

As the needs of Google Cloud customers evolve and shift towards new user expectations, more and more customers are choosing the MongoDB Application Data Platform as an ideal alternative to legacy databases. Together, we’ve partnered with users looking to digitize and grow their businesses (such as Forbes), or meet increased demand due to COVID (such as our work with Boxed, the online grocer) by scaling up infrastructure and data processing within a condensed time frame. As a fully-managed service within the Google Cloud Marketplace, MongoDB Atlas enables our joint customers to quickly deploy applications on Google Cloud with a unified user experience and an integrated billing model.

Migrations to managed cloud database services vary in complexity, but even under the most straightforward circumstances, careful evaluation and planning are required. Customer database environments often leverage database technologies from multiple vendors, across different versions, and can run into thousands of deployments. This makes manual assessment cumbersome and error-prone. This is where EPAM Systems, a provider with strategic specialization in database and application modernization solutions, comes in. EPAM’s database migration assessment tool, migVisor, is a first-of-its-kind cloud database migration assessment product that helps companies analyze database workloads, configuration, and structure to generate a visual cloud migration roadmap that identifies potential quick wins as well as challenge areas. migVisor identifies the best migration path for databases using sophisticated scoring logic to rank the complexity of migrating them to a cloud-centric technology stack. Previously applicable only to migrations from RDBMS to cloud-based RDBMS, migVisor is now available for MongoDB to MongoDB Atlas migrations.

migVisor helps you:

- Analyze migration decisions objectively by providing a secure assessment of source and target databases that’s independent of deployed environments
- Accelerate time to migration by automating the discovery and assessment process, which reduces development cycles from a few weeks to a few days
- Easily understand tech insights by providing a visual overview of your entire journey, enabling better planning and improving stakeholder visibility
- Reduce database licensing costs by giving you intelligent insights on the target environment and recommended migration paths

Key features of migVisor for MongoDB

For several years, migVisor by EPAM has delivered automated assessments that have helped hundreds of customers migrate their relational databases to cloud-based or cloud-native databases. Now, migVisor adds support for the world’s leading modern data platform: MongoDB. As part of the initial release, migVisor will support self-managed MongoDB to MongoDB Atlas migration assessments. We plan to support TCO for MongoDB migrations, application modernization, migration assessment, and relational MongoDB migration assessments in future releases.

MongoDB is also a natural fit for Google Cloud’s Open Cloud strategy of providing customers a broad set of fully managed database services, as Google Cloud's own GM and VP of Engineering & Databases, Andi Gutmans, notes:

“We are always looking for ways to simplify migrations for our customers. Now, with EPAM's database migration assessment tool, migVisor, supporting MongoDB Atlas, our customers can easily complete database assessments—including TCO analyses and migration complexity assessments—and generate comprehensive migration plans. A simplified migration experience combined with our joint Marketplace success enables customers to consolidate their data workloads into the cloud while making the development and procurement process simple—so users can focus more on innovation.”

How the migVisor assessment works

migVisor analyzes source databases (on-prem or in any cloud environment) for migration assessment to a new target. The assessment includes the following steps:

1. The simple-to-use migVisor Metadata Collector (mMC) collects metadata from the source database, including the featureCompatibilityVersion value, journaling status for data-bearing nodes, MongoDB storage size used, replica set configuration, and more. (Figure 1: mMC GUI Edit Connection Screen)
2. On the migVisor Analysis Dashboard, you select the source/target pair (e.g., MongoDB to MongoDB Atlas on Google Cloud). (Figure 2: Source and Target Selection)
3. In the migVisor console, you can then view the automated assessment output created by migVisor’s migration complexity scoring engine, including classification of the migration into high/medium/low complexity and identification of potential migration challenges and incompatibilities. (Figure 3: Source Cluster Features)
4. Finally, you can export the assessment output in CSV format for further analysis in your preferred data analysis/reporting tool.

Conclusion

Together, Google Cloud and MongoDB have successfully worked with many organizations to streamline cloud migrations and modernize their legacy landscape. To build on the foundation of providing our customers with a best-in-class experience, we’ve worked closely with Google Cloud and EPAM Systems to integrate MongoDB Atlas with migVisor. Because of this, customers will now be able to better plan migrations, reduce risk and avoid missteps, identify quick wins for TCO reduction, review migration complexities, and appropriately plan migration phases for the best outcomes.

Learn more about how you can deploy, manage, and grow MongoDB on Google Cloud on our partner page. If you’d like guidance and migration advice, please reach out to mdb-gcp-marketplace@mongodb.com to get in touch with the Google, MongoDB, and EPAM Sales teams.

January 21, 2022

Starting a Career as a Solutions Architect in MongoDB’s Remote Presales Centre

MongoDB’s Remote Presales Centre is kickstarting presales careers and helping customers unlock the value of MongoDB technology. I spoke with Chris Dowling and Snehal Bhatia to learn more about the Remote Presales Centre Solutions Architect role, how they’re making an impact, and why this is an exciting opportunity for those interested in understanding the intersection of business and technology. Jackie Denner: Hi, Chris and Snehal. Thanks for sitting down with me today to discuss the Remote Presales Centre. What is MongoDB’s Remote Presales Centre team? Chris Dowling: The Remote Presales Centre Solutions Architect is an introductory Solutions Architect (SA) role. Our global team is spread across the Americas, EMEA, and APAC, and we are actively growing. We currently have SAs in EMEA covering French, German, Italian, Spanish, and English speaking customers. By joining the team, you’ll essentially be in an “incubation” period to gain experience in a presales role and exposure to sales cycles. Snehal Bhatia: Yes, this Solutions Architect role is for people who are earlier in their career and might not necessarily come from a presales background. We’re not dedicated to particular customers or accounts, rather we cover a wider perspective to help a larger volume of customers across various regions and Sales teams. Not only do we gain valuable experience, but we’re able to add value to the sales cycle by way of customer education through enablement sessions and workshops, along with engaging with customers at an earlier stage to bring technical value from the get-go. We’re also brought in to help qualify opportunities during discovery meetings. Overall, the biggest gap we see is that customers often have a difficult time understanding MongoDB technology, so we’re there to provide clarity, answer questions, and showcase the value of MongoDB. JD: So, what is a typical week like in your Solutions Architect role? CD: I’ve had 15 customer contacts this week. If you’re looking at strictly one-on-one sessions, the maximum number of customers someone on our team would handle per week is around 20. If you take into account some of the wider marketing events we help run as well, it could be as many as 100 customers, it really depends on the day. We don’t just do account-based activities, we also run wider campaigns like workshops and webinars. Snehal and I also had the opportunity to speak at MongoDB.local London in November 2021 on the topics of read and write concerns and how to set up your database for the tradeoffs you need and how ethical concerns need to be factored into technology and IoT design. We also get the chance to do things outside of core responsibilities and are able to work on side projects if we’d like. For example, I really enjoy management and education so I do a lot with sales reps to help them understand MongoDB technology. We really do a mixture of things. In a typical week, we’ll have one or two webinars, a few security questionnaires which is part of the end of a deal cycle and includes some technical questions that we need to respond to, then we have discovery meetings and prep calls with different reps, and we also have a day dedicated to enablement. SB: Yes, we have all of these customer engagements but the core of it is the prep that comes beforehand. 
We end up working with Marketing, Sales, Sales Managers, Product Owners, Professional Services; we work with a lot of different teams to get their insight so that we’re able to provide a complete view or solution to the customer. The internal prep meetings are a big part of that execution.

JD: Why would someone move from an implementation role into a Remote Presales Centre role?

CD: Snehal and I both come from an implementation background. I think you should join the Remote Presales Centre team if you’re interested in the architecture of how businesses run their systems and want to see how the sales process works. In this role, we’re uncovering the answers to “What is motivating the customer to do this? Why would they buy MongoDB? Does MongoDB work for them?” Every day is different for us. In an implementation role, you end up working on the same system and use cases day in and day out, whereas in our role we get to see everything under the sun that customers might want to do, and we get to go in and explore new pieces of technology. It’s exciting to see the newest things in tech.

SB: In my previous implementation role, the goal was to become an expert on just one product, which didn’t really help with broadening my skill set. When I came here, I had the opportunity to work with customers from financial services, telecom, banking, IoT, startups, and big enterprises; name an industry or company size and we’ve done something for them, or name a technology and we’ve likely worked with it. That variety is not something you’d get in an implementation role. Not to mention, in implementation roles you’re often told what to do: the requirements are already set, and you just have to meet them. In our roles as SAs, we’re really influencing the direction of things and understanding the bigger picture and the business implications of using the technology. We have the ability to influence customers in a positive way and provide value.

JD: Can you describe the learning curve for someone moving into the Remote Presales Centre from a more delivery-focused role?

SB: I would say the biggest mindset shift is that instead of immediately answering questions, you need to stop and ask why. If someone says, “We want to do this,” your first instinct may be to respond, “Yes, we have the capabilities to meet that,” but really you should stop and ask, “Why do you want to do this? What value is it going to bring for you? How is this going to influence your business direction?” You need curiosity to understand what the customer is trying to achieve instead of focusing on solving specific issues and pain points, which is very much the focus in an implementation role.

CD: It’s also learning the sales cycle and how sales operates, along with figuring out what drives reps and what they want out of the Remote Presales Centre. Sometimes reps need us to explain the technology, and sometimes we’re just there for credibility. It’s getting into the mindset of partnering with sales, not working for sales. There is obviously a technology learning curve as well, since MongoDB’s products are vast and often complex.

SB: I think that extends to the customers we work with as well. On every call, you’ll be meeting a different “customer persona”. Sometimes you’re talking to very technical people like developers and DBAs, so you need to tailor the conversation to their priorities.
But if you’re meeting with the CTO, you need to contextualize everything in business terms to relay what the business needs. It’s all about understanding your audience and tailoring the conversation.

JD: Aside from databases, what other technologies do you need to be familiar with or are you exposed to?

SB: Everything! You will never use a database by itself; you have to build an application on top of it. A lot of our role is understanding how the database contributes to the whole software development lifecycle and the overall project. At the end of the day, it’s one part of the tech stack, so you have to understand the whole stack, the underlying infrastructure, and the application that’s built on top of the database. It’s not just MongoDB that we talk and learn about, but every other database in the market and every technology the customer is working with. Every customer we talk to is working with a different tool, programming language, or software development methodology, and you need to be able to communicate with them.

JD: How do you stay connected with your colleagues when you are all working remote?

CD: If we’re running a workshop, it’s a team event, so we end up working closely on that. We also have weekly syncs where we talk about what we’re working on and talk through challenges, and we have things like enablement sessions and coffee chats.

SB: These sessions are also on a global level, so we have the opportunity to work with the team in the Americas. Since we operate on a volume basis, we’ll discuss workload distribution and try to prioritize tasks based on people’s interests.

CD: Yes, for example, I really like time series and search, so I’ll handle a lot of time series and search requests. There’s someone else on the team who loves Realm, our mobile database, so we give him all the Realm requests.

JD: Often people are reluctant to move into presales because they don’t consider themselves sales-oriented. How would you respond to that?

CD: Stop thinking of it as sales! Think of it as getting to talk to tons of customers about what they think the best technological solution is, and then providing insight into MongoDB and how our technology can improve what they’re trying to do. It’s a really technical job in the sense that you’re looking at organizations’ architectures and figuring out why customers do what they do. You get to ask a lot of questions and see a lot of new technology. You could end up building proofs of value out of that, which means you then get to play around with that new technology.

SB: I think presales is the best of both worlds. You get to interact with a lot of people in various scenarios, but you are the trusted advisor for the customer. You’re there to help them and are on their side, which means customers trust and confide in you.

JD: What learning and growth opportunities are there for someone on the Remote Presales Centre team?

CD: You start off doing simple things like learning about MongoDB products, building ground knowledge, learning customer stories, and understanding why customers use MongoDB. Then you move on to discovery calls with customers and learning how to scope things out for yourself. From there, as you spend more time in the Service Centre, you get further and further through the deal cycle.
For example, a few months ago I was in a workshop to determine the technical feasibility of MongoDB’s solution after we had already worked with the customer to determine business objectives and requirements. You eventually go through the whole sales cycle, with the goal being that you can execute it end to end by the time you leave to go into the field.

SB: Since the Service Centre is a fairly new team at MongoDB, you’re also part of discussing processes and helping determine what makes the team most efficient. You get to contribute to building a whole new team and company right now, which is not something you would get in a mature team with defined processes.

CD: As the team grows, there are a lot of mentorship opportunities as well. MongoDB is growing so quickly that new sales reps come in and are great at selling, but they don’t always have a technical background or understand MongoDB’s value proposition. We are that technical backup for them, and this gives the field SAs more time to do the really deep technical work that we’ll eventually get to do once we move into more senior positions.

JD: Why should someone join your team?

CD: You have the opportunity to learn so much about MongoDB’s technology and sales cycle, and you get to meet anyone and everyone. I could be talking to a Product Manager in the morning about the newest release and a Customer Success Manager in the afternoon. You really get to meet the whole organization. You’ll have a lot of internal visibility, which is great because it also provides pathways to transfer internally if you want to.

SB: You don’t get this visibility in most other roles because you’re usually aligned to a region or team. Here, we get to meet everyone in Europe. Chris and I put together a spreadsheet of all the sales reps in Europe, and there are only 12 we haven’t had the chance to work with yet. Not only do we get to work with all the reps, but we also work with Product Managers, Customer Success, Marketing, Information Security, plus all of their managers. It’s a great way to get introduced to the company.

Interested in a presales career at MongoDB? We have several open roles on our teams across the globe and would love for you to transform your career with us!

January 20, 2022