Announcing the MongoDB Certified Professional of the Year 2019: Rolando Martinez
2019's MongoDB Certified Professional of the Year is Rolando Martinez, a Staff Data Engineer at Vrbo (formerly HomeAway) in Austin, Texas. His was just one of the many entries we received after we put out the call for this year's Certified Professional of the Year, with a free trip to MongoDB World up for grabs. We sat down with Rolando to talk about how he became a MongoDB Certified Professional and how that has affected his career.
Using MongoDB Atlas's Sample Data Feature For Learning and Practice
It's easier than ever to get up and running with MongoDB thanks to the new MongoDB Atlas Load Sample Data feature. One click loads a set of essential datasets into your cluster, giving you the data you need to start your exploration.
Are you the next MongoDB Certified Professional of the Year?
Has becoming MongoDB Certified affected your life in any way, big or small? Whether being MongoDB Certified has helped you transform your career, connect with your community, or just see the world a little differently, we want to hear your story! MongoDB and the community want to learn from your success. Since 2013, MongoDB has recognized a current MongoDB Certified Professional who demonstrates ingenuity, hard work, and expertise as the MongoDB Certified Professional of the Year. Tell the story of your certification journey and you could be going to MongoDB World.
Building with Patterns: A Summary
As we wrap up the Building with Patterns series, it's a good opportunity to recap the problems each pattern solves and to highlight the benefits and trade-offs each one brings. The most frequent question asked about schema design patterns is "I'm designing an application to do X, how do I model the data?" As we hope you have discovered over the course of this blog series, there are a lot of things to take into consideration to answer that.
Building with Patterns: The Schema Versioning Pattern
It has been said that the only thing constant in life is change. This holds true for database schemas as well. Information we once thought wouldn't be needed, we now want to capture. Or new services become available and need to be included in a database record. Regardless of the reason behind the change, after a while, we inevitably need to make changes to the underlying schema design in our application. While this often poses challenges, and perhaps at least a few headaches, in a legacy tabular database system, in MongoDB we can use the Schema Versioning pattern to make the changes easier.
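The idea behind the pattern can be sketched in a few lines. Below is a minimal illustration using plain objects rather than a live database connection; the `schema_version` field name and the contact-info shapes are illustrative assumptions, not prescribed names:

```javascript
// Schema Versioning sketch: two document "versions" coexist in the same
// collection, and the application normalizes on read instead of running a
// big-bang migration. All field names here are illustrative assumptions.

// Version 1 stored a single phone number; version 2 stores a list of contacts.
const userV1 = { _id: 1, name: "Ada", phone: "555-0100", schema_version: 1 };
const userV2 = {
  _id: 2,
  name: "Grace",
  contacts: [{ method: "phone", value: "555-0199" }],
  schema_version: 2,
};

// The application checks schema_version and upgrades the old shape on the fly.
function getContacts(doc) {
  if (doc.schema_version === 2) return doc.contacts;
  return [{ method: "phone", value: doc.phone }];
}

console.log(getContacts(userV1)); // [{ method: "phone", value: "555-0100" }]
console.log(getContacts(userV2)); // [{ method: "phone", value: "555-0199" }]
```

Documents can then be migrated to the newer version lazily, as they are next written, rather than all at once.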
Building with Patterns: The Document Versioning Pattern
Databases, such as MongoDB, are very good at querying lots of data and updating that data frequently. In most cases, however, we are only performing queries on the latest state of the data. What about situations in which we need to query previous states of the data? What if we need some version-control functionality for our documents? This is where we can use the Document Versioning Pattern.
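The core move is to copy the current state of a document into a separate revisions collection before each update. Here is a minimal sketch with in-memory arrays standing in for collections; the `policies` and `policiesRevisions` names are illustrative assumptions:

```javascript
// Document Versioning sketch: before an update, the current document state is
// copied into a revisions "collection", so history stays queryable while the
// main collection holds only the latest state. Names are illustrative.

const policies = [{ _id: 1, holder: "Ada", coverage: 100000, revision: 1 }];
const policiesRevisions = []; // prior states of each document

function updatePolicy(id, changes) {
  const doc = policies.find((d) => d._id === id);
  policiesRevisions.push({ ...doc }); // snapshot the old state first
  Object.assign(doc, changes, { revision: doc.revision + 1 });
  return doc;
}

updatePolicy(1, { coverage: 250000 });
console.log(policies[0].coverage); // 250000 — latest state
console.log(policiesRevisions[0].coverage); // 100000 — previous state
```

Queries for current data hit only the main collection, so the common path stays fast; historical queries go to the revisions collection.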
Building with Patterns: The Preallocation Pattern
One of the great things about MongoDB is the document data model. It provides for a lot of flexibility not only in schema design but in the development cycle as well. Not knowing what fields will be required down the road is easily handled with MongoDB documents. However, there are times when the structure is known and being able to fill or grow the structure makes the design much simpler. This is where we can use the Preallocation Pattern.
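A classic case is a structure whose full shape is known up front, such as a theater's seat map. The sketch below preallocates every cell so later writes are simple in-place updates; the field names and seat layout are illustrative assumptions:

```javascript
// Preallocation sketch: when the structure is known in advance, create every
// element ahead of time, initialized empty, so the document never has to grow.
// Field names and the seat-map example are illustrative assumptions.

function preallocateSeatMap(rows, seatsPerRow) {
  const seats = [];
  for (let r = 0; r < rows; r++) {
    for (let s = 0; s < seatsPerRow; s++) {
      // Every seat exists from day one, initialized as unbooked.
      seats.push({ row: String.fromCharCode(65 + r), seat: s + 1, booked: false });
    }
  }
  return { _id: "show-2019-06-01", seats };
}

const show = preallocateSeatMap(2, 5);
console.log(show.seats.length); // 10

// Booking a seat is a simple in-place update of an existing element.
const target = show.seats.find((x) => x.row === "A" && x.seat === 3);
target.booked = true;
```

Because every slot already exists, updates never change the document's structure, only its values.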
Building with Patterns: The Tree Pattern
Many of the schema design patterns we've covered so far have stressed that saving time on JOIN operations is a benefit. Data that's accessed together should be stored together, and some data duplication is okay. A schema design pattern like Extended Reference is a good example. However, what if the data to be joined is hierarchical? For example, suppose you want to identify the reporting chain from an employee up to the CEO. MongoDB provides the $graphLookup aggregation stage to navigate the data as a graph, and that could be one solution. However, if you need to run a lot of queries against this hierarchical data structure, you may want to apply the same rule of storing together data that is accessed together. This is where we can use the Tree Pattern.
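One common form of the pattern stores each node's full ancestor chain in the node itself, so reading the hierarchy is a single document lookup. A minimal sketch, with illustrative employee documents and field names assumed for the example:

```javascript
// Tree pattern sketch (array-of-ancestors form): each employee document
// carries its full reporting chain, so reading the chain from an employee up
// to the CEO needs no JOIN or graph traversal. Names are illustrative.

const employees = [
  { _id: "ceo", name: "Dana", reportingChain: [] },
  { _id: "vp", name: "Sam", reportingChain: ["ceo"] },
  { _id: "eng", name: "Ada", reportingChain: ["vp", "ceo"] },
];

// The whole chain comes back in one document read.
function chainToCeo(id) {
  const doc = employees.find((e) => e._id === id);
  return doc.reportingChain;
}

console.log(chainToCeo("eng")); // ["vp", "ceo"]
```

The trade-off is that a reorganization means updating the stored chains in the affected documents, so the pattern fits hierarchies that are read far more often than they change.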
Building with Patterns: The Approximation Pattern
Imagine a decent-sized city of approximately 39,000 people. The exact number is pretty fluid as people move in and out of the city, babies are born, and people die. We could spend our days trying to get an exact count of residents each day. But most of the time, that 39,000 number is "good enough." Similarly, in many applications we develop, knowing a "good enough" number is sufficient. If a "good enough" number is good enough, then this is a great opportunity to put the Approximation Pattern to work in your schema design.
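One way to apply the pattern to the population counter is to write once per N events with an increment of N, cutting write load by a factor of N at the cost of exactness. A minimal sketch, with illustrative numbers:

```javascript
// Approximation pattern sketch: instead of one database write per event, the
// application tallies events and issues a single write of N once N events have
// accumulated. The counter is off by at most N, which is "good enough" here.

const N = 100;
let pendingEvents = 0; // application-side tally, never persisted
let storedPopulation = 39000; // the approximate counter in the database

function recordEvent() {
  pendingEvents++;
  if (pendingEvents >= N) {
    storedPopulation += N; // one write stands in for N events
    pendingEvents = 0;
  }
}

// Simulate 250 arrivals: only 2 writes occur instead of 250.
for (let i = 0; i < 250; i++) recordEvent();
console.log(storedPopulation); // 39200
```

Statistically, over- and under-counts tend to wash out over time, and the stored value stays within one increment of the truth.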
Building with Patterns: The Extended Reference Pattern
Throughout this Building With Patterns series, I hope you've discovered that a driving force in what your schema should look like is what the data access patterns for that data are. If we have a number of similar fields, the Attribute Pattern may be a great choice. Does accommodating access to a small portion of our data vastly alter our application? Perhaps the Outlier Pattern is something to consider. Some patterns, such as the Subset Pattern, reference additional collections and rely on JOIN operations to bring every piece of data back together. What about instances when there are lots of JOIN operations needed to bring together frequently accessed data? This is where we can use the Extended Reference pattern.
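The shape of an extended reference can be sketched in a few lines: keep a reference to the related document, plus a copy of the few fields that are always read alongside it. The customer/order documents below are illustrative assumptions:

```javascript
// Extended Reference sketch: an order keeps a reference to its customer PLUS
// copies of the frequently co-accessed customer fields, so rendering the order
// needs no second lookup. Field names are illustrative assumptions.

const customer = {
  _id: "cust-17",
  name: "Ada Lovelace",
  shippingAddress: "12 Analytical Way",
  loyaltyTier: "gold", // rarely needed with an order, so NOT duplicated
};

const order = {
  _id: "order-9",
  customer_id: customer._id, // reference back to the full customer record
  customer: {
    // extended reference: only the fields the order screen always uses
    name: customer.name,
    shippingAddress: customer.shippingAddress,
  },
  items: [{ sku: "widget", qty: 2 }],
};

// Displaying the order is a single document read.
console.log(order.customer.shippingAddress); // "12 Analytical Way"
```

The trade-off is duplication: if the copied fields change in the source document, the application must decide when, and whether, to propagate the change. Duplicating only rarely-changing, frequently-read fields keeps that cost low.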
Building with Patterns: The Subset Pattern
Some years ago, the first PCs had a whopping 256KB of RAM and dual 5.25" floppy drives. No hard drives, as they were incredibly expensive at the time. These limitations meant physically swapping floppy disks due to a lack of memory when working with large (for the time) amounts of data. If only there had been a way back then to bring into memory only the data I frequently used, as in a subset of the overall data. Modern applications aren't immune from exhausting resources. MongoDB keeps frequently accessed data, referred to as the working set, in RAM. When the working set of data and indexes grows beyond the physical RAM allotted, performance is reduced as disk accesses start to occur and data rolls out of RAM.
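The Subset Pattern addresses this by keeping only the most frequently used portion of related data inside the main document, with the full data in a separate collection. A minimal sketch, with an array standing in for a reviews collection and illustrative names throughout:

```javascript
// Subset pattern sketch: the product document carries only the few most
// recent reviews (the part of the data actually read on every page view);
// the complete review history lives in a separate "collection". All names
// here are illustrative assumptions.

const allReviews = []; // stands in for a full `reviews` collection
const product = { _id: "movie-42", title: "Example", recentReviews: [] };

const SUBSET_SIZE = 3;

function addReview(review) {
  allReviews.push({ ...review, product_id: product._id });
  // Keep only the newest few reviews inside the product document itself.
  product.recentReviews.unshift(review);
  if (product.recentReviews.length > SUBSET_SIZE) product.recentReviews.pop();
}

for (let i = 1; i <= 5; i++) addReview({ stars: 5, text: `review ${i}` });

console.log(product.recentReviews.length); // 3 — small, hot working set
console.log(allReviews.length); // 5 — full history still available
```

Because the product document stays small, the working set that must fit in RAM shrinks; the rarely-read full history is fetched from the other collection only when a user asks for it.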