MongoDB’s Diversity Scholarship program supports members of groups who are underrepresented in the technology industry. This includes, but is not limited to, people who identify as African American, Hispanic, LGBTQ, or women, as well as people from low-income backgrounds and people with disabilities who may not otherwise have the opportunity to attend MongoDB events.
Eligible candidates can apply online. Hurry – applications close this Friday, April 8!
Diversity Scholarship recipients receive:
- Complimentary admission to MongoDB World
- Complimentary admission to a pre-conference workshop of their choice
- A MongoDB certification voucher
- Three-month access to paid MongoDB University courses
Additionally, scholarship recipients may be featured in a blog post.
Applicants must be 18 years old or older, and must belong to a group that is underrepresented in the technology industry.
Scholarships are awarded based on a combination of need and impact. A selection committee will review each application, and all application information will be kept confidential. Recipients will be notified by April 15.
Don’t qualify, but would like to help? You can contribute to the Diversity Scholarship!
While MongoDB World registration is open, we're raising funds to support Diversity Scholarship recipients. There’s a donation opportunity when registering for the conference. Contributors will be listed as Diversity Champions on our website, unless otherwise requested.
Contact firstname.lastname@example.org with any questions.
Running MongoDB as a Microservice with Docker and Kubernetes
Update – November 2018: This post is now 2.5 years old, and neither MongoDB nor Kubernetes has been standing still! In particular, Kubernetes has introduced StatefulSets and we've introduced the MongoDB Enterprise Operator for Kubernetes. Both of these capabilities make working with MongoDB in Kubernetes much simpler and more robust. Read this post for the state of the art in running MongoDB in Kubernetes.

Introduction

Want to try out MongoDB on your laptop? Execute a single command and you have a lightweight, self-contained sandbox; another command removes all traces when you're done. Need an identical copy of your application stack in multiple environments? Build your own container image and let your development, test, operations, and support teams launch an identical clone of your environment.

Containers are revolutionizing the entire software lifecycle: from the earliest technical experiments and proofs of concept through development, test, deployment, and support. Read the Enabling Microservices: Containers & Orchestration Explained white paper.

Orchestration tools manage how multiple containers are created, upgraded, and made highly available. Orchestration also controls how containers are connected to build sophisticated applications from multiple microservice containers. The rich functionality, simple tools, and powerful APIs make container and orchestration functionality a favorite for DevOps teams, who integrate them into Continuous Integration (CI) and Continuous Delivery (CD) workflows.

This post delves into the extra challenges you face when attempting to run and orchestrate MongoDB in containers and illustrates how these challenges can be overcome.

Considerations for MongoDB

Running MongoDB with containers and orchestration introduces some additional considerations:

- MongoDB database nodes are stateful. In the event that a container fails and is rescheduled, it's undesirable for the data to be lost (it could be recovered from other nodes in the replica set, but that takes time). To solve this, features such as the Volume abstraction in Kubernetes can be used to map what would otherwise be an ephemeral MongoDB data directory in the container to a persistent location where the data survives container failure and rescheduling.
- MongoDB database nodes within a replica set must communicate with each other – including after rescheduling. All of the nodes within a replica set must know the addresses of all of their peers, but when a container is rescheduled, it is likely to be restarted with a different IP address. For example, all containers within a Kubernetes Pod share a single IP address, which changes when the pod is rescheduled. With Kubernetes, this can be handled by associating a Kubernetes Service with each MongoDB node, which uses the Kubernetes DNS service to provide a hostname for the service that remains constant through rescheduling (a minimal Service sketch appears at the end of this section).
- Once each of the individual MongoDB nodes is running (each within its own container), the replica set must be initialized and each node added. This is likely to require some additional logic beyond that offered by off-the-shelf orchestration tools. Specifically, one MongoDB node within the intended replica set must be used to execute the rs.initiate and rs.add commands.

If the orchestration framework provides automated rescheduling of containers (as Kubernetes does), then this can increase MongoDB's resiliency, since a failed replica set member can be automatically recreated, thus restoring full redundancy levels without human intervention.
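To make the second consideration concrete, here is a minimal sketch of such a per-node Service. It is illustrative only, not the white paper's exact configuration; the names (mongo-svc-a, and the name: mongo-node / instance: rod labels) match the example developed in the next section.

```yaml
# Minimal sketch (not the white paper's exact file): a Service that gives
# one MongoDB replica set member a stable identity that survives pod
# rescheduling.
apiVersion: v1
kind: Service
metadata:
  name: mongo-svc-a            # stable, DNS-resolvable service name
spec:
  type: LoadBalancer           # also provides a fixed external IP and port
  ports:
    - port: 27017              # port exposed by the service
      targetPort: 27017        # MongoDB's default port inside the container
  selector:                    # routes traffic to the matching pod
    name: mongo-node
    instance: rod
```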
It should be noted that while the orchestration framework might monitor the state of the containers, it is unlikely to monitor the applications running within the containers or back up their data. That means it's important to use a strong monitoring and backup solution such as MongoDB Cloud Manager, included with MongoDB Enterprise Advanced and MongoDB Professional. Consider creating your own image that contains both your preferred version of MongoDB and the MongoDB Automation Agent.

Implementing a MongoDB Replica Set using Docker and Kubernetes

As described in the previous section, distributed databases such as MongoDB require a little extra attention when being deployed with orchestration frameworks such as Kubernetes. This section goes to the next level of detail, showing how this can actually be implemented.

We start by creating the entire MongoDB replica set in a single Kubernetes cluster (which would normally be within a single data center – that clearly doesn't provide geographic redundancy). In reality, little has to be changed to run across multiple clusters, and those steps are described later.

Each member of the replica set will be run as its own pod with a service exposing an external IP address and port. This 'fixed' IP address is important, as both external applications and other replica set members can rely on it remaining constant in the event that a pod is rescheduled. The following diagram illustrates one of these pods and the associated Replication Controller and service.

**Figure 1:** MongoDB Replica Set member configured as a Kubernetes Pod and exposed as a service

Stepping through the resources described in that configuration, we have (a YAML sketch follows this list):

- Starting at the core there is a single container named mongo-node1. mongo-node1 includes an image called mongo, which is a publicly available MongoDB container image hosted on Docker Hub. The container exposes port 27017 within the cluster.
- The Kubernetes volumes feature is used to map the /data/db directory within the container to the persistent storage element named mongo-persistent-storage1, which in turn is mapped to a disk named mongodb-disk1 created in the Google Cloud. This is where MongoDB would store its data so that it is persisted over container rescheduling.
- The container is held within a pod which has labels that name the pod mongo-node and provide an (arbitrary) instance name of rod.
- A Replication Controller named mongo-rc1 is configured to ensure that a single instance of the mongo-node1 pod is always running.
- The LoadBalancer service named mongo-svc-a exposes an IP address to the outside world together with the port of 27017, which is mapped to the same port number in the container. The service identifies the correct pod using a selector that matches the pod's labels. That external IP address and port will be used by both an application and for communication between the replica set members. There are also local IP addresses for each container, but those change when containers are moved or restarted, and so aren't of use for the replica set.
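The Service half of Figure 1 was sketched in the previous section; below is a matching, minimal sketch of the Replication Controller, pod template, and volume mapping that the bullets above describe. It is illustrative only – the complete, tested configuration files are in the white paper – and the gcePersistentDisk stanza assumes the mongodb-disk1 disk has already been created in Google Cloud.

```yaml
# Minimal sketch of the Figure 1 Replication Controller (illustrative;
# see the white paper for the complete configuration files).
apiVersion: v1
kind: ReplicationController
metadata:
  name: mongo-rc1
spec:
  replicas: 1                        # keep exactly one copy of this pod running
  template:
    metadata:
      labels:
        name: mongo-node             # matched by the mongo-svc-a selector
        instance: rod
    spec:
      containers:
        - name: mongo-node1
          image: mongo               # public MongoDB image from Docker Hub
          ports:
            - containerPort: 27017   # MongoDB's default port
          volumeMounts:
            - name: mongo-persistent-storage1
              mountPath: /data/db    # MongoDB's data directory
      volumes:
        - name: mongo-persistent-storage1
          gcePersistentDisk:
            pdName: mongodb-disk1    # disk pre-created in Google Cloud
            fsType: ext4
```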
The next diagram shows the configuration for a second member of the replica set.

**Figure 2:** Second MongoDB Replica Set member configured as a Kubernetes Pod

90% of the configuration is the same, with just these changes:

- The disk and volume names must be unique, so mongodb-disk2 and mongo-persistent-storage2 are used.
- The pod is assigned labels of instance: jane and name: mongo-node2 so that the new service can distinguish it (using a selector) from the rod pod used in Figure 1.
- The Replication Controller is named mongo-rc2.
- The service is named mongo-svc-b and gets a unique, external IP address (in this instance, Kubernetes has assigned 184.108.40.206).

The configuration of the third replica set member follows the same pattern, and the following figure shows the complete replica set:

**Figure 3:** Full Replica Set configured as a Kubernetes Service

Note that even if running the configuration shown in Figure 3 on a Kubernetes cluster of three or more nodes, Kubernetes may (and often will) schedule two or more MongoDB replica set members on the same host. This is because Kubernetes views the three pods as belonging to three independent services.

To increase redundancy (within the zone), an additional headless service can be created. The new service provides no capabilities to the outside world (and will not even have an IP address), but it serves to inform Kubernetes that the three MongoDB pods form a service, and so Kubernetes will attempt to schedule them on different nodes.

**Figure 4:** Headless service to avoid co-location of MongoDB replica set members

The actual configuration files and the commands needed to orchestrate and start the MongoDB replica set can be found in the Enabling Microservices: Containers & Orchestration Explained white paper. In particular, there are some special steps required to combine the three MongoDB instances into a functioning, robust replica set, which are described in the paper.

Multiple Availability Zone MongoDB Replica Set

There is a risk associated with the replica set created above in that everything is running in the same GCE cluster, and hence in the same availability zone. If there were a major incident that took the availability zone offline, then the MongoDB replica set would be unavailable.

If geographic redundancy is required, then the three pods should be run in three different availability zones or regions. Surprisingly little needs to change in order to create a similar replica set that is split between three zones – which requires three clusters. Each cluster requires its own Kubernetes YAML file that defines just the pod, Replication Controller, and service for one member of the replica set. It is then a simple matter to create a cluster, persistent storage, and MongoDB node for each zone.

**Figure 5:** Replica set running over multiple availability zones

Next Steps

To learn more about containers and orchestration – both the technologies involved and the business benefits they deliver – read the Enabling Microservices: Containers & Orchestration Explained white paper. The same paper provides the complete instructions to get the replica set described in this post up and running on Docker and Kubernetes in the Google Container Engine.

Interested in learning more about microservices? Check out our Microservices Resources.

About the Author - Andrew Morgan

Andrew is a Principal Product Marketing Manager working for MongoDB. He joined at the start of last summer from Oracle, where he spent 6+ years in product management focused on High Availability. He can be contacted @andrewmorgan or through comments on his blog (clusterdb.com).
MongoDB Doubles Down on Aotearoa as Part of Continued APAC Expansion
MongoDB is expanding its business in New Zealand to help Kiwi organisations build modern applications and take advantage of the AI opportunity that exists today. With hundreds of customers already in Aotearoa, including Pathfinder, Rapido, and Tourism Holdings, we're continuing to hire and invest to grow our community in the country.

Powering the next generation of modern applications

Interest and excitement in AI, and particularly generative AI, has exploded. With a proud history of innovation, it's no surprise that many New Zealand companies are early adopters of this incredible technology. In fact, an AI Forum report has revealed that AI has the potential to increase New Zealand's GDP by as much as $54 billion by 2035. No matter what you think of the veracity of those bold predictions, one thing is sure: almost every company is trying to figure out how to take advantage of data and software to help them build better products, more efficiently and more quickly.

Jake McInteer speaking at MongoDB.local Auckland

As organisations transform into digital-first businesses, they’re faced with a growing list of application and data requirements. Modern applications are complex – they need to handle transactional workloads, app-driven analytics, full-text search, AI-enhanced experiences, stream data processing, and more. Companies are being asked to do all of this while reducing data infrastructure sprawl and complexity – and often cutting costs as well.

What we are seeing globally is that our developer data platform solves this challenge and complexity, since it integrates all of the data services organisations need to build modern applications into a unified developer experience. We also allow our customers to easily run anywhere in the world, with more than 110 locations, making us uniquely placed to enable Kiwi companies to adapt to a multicloud future. We also have strong local partnerships with all three cloud hyperscalers, all of which plan to open new cloud regions in New Zealand in the coming years.

With the support of our cloud partners, we've already seen great adoption of MongoDB Atlas in New Zealand, from the largest established enterprises through to cutting-edge startups. Here are a couple of examples.

Pathfinder: Protecting vulnerable children

Pathfinder, headquartered in Auckland, is a global leader in software development specialising in protecting vulnerable children. The company's mission centres on empowering law enforcement agencies with state-of-the-art technology, meticulously designed to combat the reprehensible crime of child exploitation.

"We are committed to delivering investigators the most advanced tools. We cannot accept delays in removing a child from harm due to investigations being overwhelmed by large amounts of disparate data. In situations where every minute impacts a child's well-being, these tools must enable investigators to swiftly navigate data challenges and rapidly apprehend perpetrators," said Bree Atkinson, CEO of Pathfinder Labs.

Pathfinder’s Paradigm service is being built on MongoDB Atlas, running on AWS, and takes advantage of the wider developer data platform features to enable the next generation of data-driven investigative capabilities. By using MongoDB Atlas Vector Search, a native part of the MongoDB Atlas platform, the Pathfinder team are also able to match images and details within images (such as people and objects), classify documents and text, and build better search experiences for their users via semantic search.
This enables Paradigm to efficiently aid law enforcement in identifying victims and apprehending offenders.

Bree Atkinson, CEO of Pathfinder Labs, and Peter Pilley, DevOps Architect at Pathfinder Labs, with the MongoDB team in Auckland at the recent .local event

"MongoDB Atlas allows our team to focus on our strengths: developing outstanding technology. It works with us, not against us, enhancing integration, which enables us to build better user experiences," said Peter Pilley, DevOps Architect at Pathfinder Labs. "Take MongoDB Atlas Vector Search, for example. Before MongoDB, we would have needed to incorporate multiple tools to achieve that functionality. Now we can handle it all from a single platform, removing complexity and architecture that wasn't needed. With MongoDB Atlas, we're able to make data-driven decisions swiftly, boosting our productivity and decision-making speed."

Peter's team at Pathfinder also uses MongoDB's Performance Advisor. They say it's like having an extra team member who suggests the best indexes for accessing their data, which is critical in an industry where getting to a specific piece of data could make all the difference.

Rapido: Optimising B2B revenue and distribution

Rapido has been utilising MongoDB Atlas for over five years. The team was originally part of MongoDB for Startups, a programme that offers startups free credits and technical advice to help them build faster and scale further. Their eagerness to adopt new technologies has enabled them to effectively harness MongoDB Atlas's evolving features.

Working with the Accredo ERP system, Rapido has harnessed MongoDB Atlas to innovate in business-to-business (B2B) transactions. Using features like MongoDB Atlas Vector Search, the 'moreLikeThis' operator, and MongoDB App Services, they've transformed business interactions, offering precise product recommendations and improved real-time visibility via change streams. Rapido's platform, which has processed orders collectively worth more than $100m to date, is essential for many wholesale businesses in New Zealand.

Adam Holt, CEO of Rapido, summarises their experience: "Our journey with MongoDB Atlas has been transformative. By building on a cohesive developer data platform, we don't need to bolt on and learn special technologies for every requirement. Continuously integrating new features keeps our platform advanced in the fast-paced B2B market. It's about leveraging technology to innovate and deliver better solutions to our clients."

MongoDB expands in Aotearoa

The increased demand from Kiwi organisations looking to innovate faster and take advantage of cutting-edge technologies like AI means MongoDB is now doubling down on its New Zealand footprint. Earlier this month, MongoDB established its local operations in Aotearoa, New Zealand. Jake McInteer, a native Kiwi, has officially transferred from MongoDB’s Australia business to lead the organisation in New Zealand. MongoDB already has a large, engaged community, more than 200 customers, and an extensive partner network.

Lumin CEO Max Ferguson presents at the Christchurch MongoDB user group

We are incredibly excited about the opportunity to invest in and contribute to the Kiwi tech ecosystem, supporting Kiwi startups like Lumin and Marsello as well as established companies like Tourism Holdings, Figured, and Foster Moore. To support our growth, we have roles open on our Sales and Solutions Architecture teams.
If you are based in NZ and interested in joining our incredible team, working in our hybrid environment, please check out and apply for the roles here:

- Enterprise Account Executive, Acquisition
- Senior Solutions Architect

Additionally, read here about the massive opportunity at MongoDB in APAC from our SVP, Simon Eid.