Today at MongoDB World 2018 in New York, we are excited to announce the beta release of MongoDB Charts, available now.
MongoDB Charts is the fastest and easiest way to build visualizations of MongoDB data. Now you can easily build visualizations with an intuitive UI and analyze complex, nested data—like arrays and subdocuments—something other visualization technologies designed for tabular databases struggle with. Then assemble those visualizations into a dashboard for at-a-glance, up-to-the-minute information. Dashboards can be shared with other users, either for collaboration or viewing only, so that entire groups and organizations can become data-driven and benefit from data visualizations and the insights they provide. When you connect to a live data source, MongoDB Charts will keep your charts and dashboards up to date with the most recent data.
Charts allows you to connect to any MongoDB instance on which you have access permissions, or use existing data sources that other Charts users share with you. With MongoDB’s workload isolation capabilities—enabling you to separate your operational from analytical workloads in the same cluster—you can use Charts for a real-time view without having any impact on production workloads.
Unlike other visualization products, MongoDB Charts is designed to natively handle MongoDB’s rich data structures. MongoDB Charts makes it easy to visualize complex arrays and subdocuments without flattening the data or spending time and effort on ETL. Charts will automatically generate an aggregation pipeline from your chart design which is executed on your MongoDB server, giving you full access to the power of MongoDB when creating visualizations.
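As an illustration, a chart that counts values inside an array field would be backed by a pipeline along these lines. This is a hedged sketch: the collection and field names (`orders`, `items`, `items.category`) are hypothetical, and the exact stages Charts generates depend on your chart design.

```json
[
  { "$unwind": "$items" },
  { "$group": { "_id": "$items.category", "count": { "$sum": 1 } } },
  { "$sort": { "count": -1 } }
]
```

Because a pipeline like this runs on the MongoDB server, the array is unwound in place; no flattening or ETL step is needed before charting.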
Charts already supports all common visualization types, including bar, column, line, area, scatter and donut charts, with table views and other advanced charts coming soon. Multiple visualizations can be assembled into dashboards, providing an at-a-glance understanding of all of your most important data. Dashboards can be used to track KPIs, understand trends, spot outliers and anomalies, and more.
Graphs and charts are a great way to communicate insights with others, but we know data can often be sensitive. Charts lets you stay in control over which users in your organization have access to your data sources and dashboards. You can choose to share with your entire organization, select individuals or keep things private to yourself. Dashboards can be shared as read-only or as read-write, depending on whether you want to communicate or collaborate.
Since this is a beta release, it’s free for anyone to try, but keep in mind that we may add or change functionality before the final release. MongoDB Charts is available as a Docker image which you can install on a server or VM within your environment. Once the Docker container is running, people in your organization can access Charts from any desktop web browser. To download the image and get started with Charts, head to the MongoDB Download Center.
We think that MongoDB Charts is the best way to get quick, self-service visualizations of the data you’re storing in MongoDB. But we’re only just getting started. Once you try out the beta, please use the built-in feedback tool to report issues and to let us know what you do and don’t like, and we’ll use this input to make future releases even better.
The development, release, and timing of any features or functionality described for our products remains at our sole discretion. This information is merely intended to outline our general product direction and it should not be relied on in making a purchasing decision nor is this a commitment, promise or legal obligation to deliver any material, code, or functionality.
Introducing the MongoDB Enterprise Operator for Kubernetes and OpenShift
Today more DevOps teams are leveraging the power of containerization, and technologies like Kubernetes and Red Hat OpenShift, to manage containerized database clusters. To support teams building cloud-native apps with Kubernetes and OpenShift, we are introducing a Kubernetes Operator (beta) that integrates with Ops Manager, the enterprise management platform for MongoDB. The operator enables a user to deploy and manage MongoDB clusters from the Kubernetes API, without having to manually configure them in Ops Manager. With this Kubernetes integration, you can consistently and effortlessly run and deploy workloads wherever they need to be, standing up the same database configuration in different environments, all controlled with a simple, declarative configuration. Operations teams can also offer developers new services like MongoDB-as-a-Service, which could provide a fully managed database, alongside other products and services, managed by Kubernetes and OpenShift.

In this blog, we’ll cover the following:

- A brief discussion of the container revolution
- An overview of MongoDB Ops Manager
- How to install and configure the MongoDB Enterprise Operator for Kubernetes
- Troubleshooting
- Where to go for more information

The containerization movement

If you have ever visited an international shipping port or driven down an interstate highway, you may have seen large rectangular metal containers, generally referred to as intermodal containers. These containers are designed and built to the same specifications even though their contents can vary greatly. The consistent design not only enables these containers to move freely from ship to rail to truck, it also allows that movement without unloading and reloading the cargo. The same concept of a container can be applied to software applications, where the application, along with its supporting frameworks and libraries, is the contents of the container.
The container can be freely moved from one platform to another, all without disturbing the application. This capability makes it easy to move an application from an on-premises datacenter server to a public cloud provider, or to quickly stand up replica environments for development, test, and production use. MongoDB 4.0 introduces the MongoDB Enterprise Operator for Kubernetes, which enables a user to deploy and manage MongoDB clusters from the Kubernetes API, without the user having to connect directly to Ops Manager or Cloud Manager (the hosted version of Ops Manager, delivered as a service). While MongoDB is fully supported in a containerized environment, you need to make sure that the benefits you get from containerizing the database exceed the cost of managing the configuration. As with any production database workload, these containers should use persistent storage and will require additional configuration depending on the underlying container technology used. To help facilitate the management of the containers themselves, DevOps teams are leveraging the power of orchestration technologies like Kubernetes and Red Hat OpenShift. While these technologies are great at container management, they are not aware of application-specific configurations and deployment topologies such as MongoDB replica sets and sharded clusters. For this reason, Kubernetes supports Custom Resources and Operators, which allow third parties to extend the Kubernetes API and enable application-aware deployments. Later in this blog you will learn how to install and get started with the MongoDB Enterprise Operator for Kubernetes. First, let’s cover MongoDB Ops Manager, which is a key piece in efficient MongoDB cluster management.

Managing MongoDB

Ops Manager is an enterprise-class management platform for MongoDB clusters that you run on your own infrastructure.
The capabilities of Ops Manager include monitoring, alerting, disaster recovery, scaling, deploying and upgrading of replica sets and sharded clusters, and other MongoDB products, such as the BI Connector. While a thorough discussion of Ops Manager is out of scope for this blog, it is important to understand the basic components that make up Ops Manager, as they will be used by the Kubernetes Operator to create your deployments.

Figure 1: MongoDB Ops Manager deployment screen

A simplified Ops Manager architecture is shown in Figure 2 below. Note that there are other agents that Ops Manager uses to support features like backup, but these are outside the scope of this blog and not shown. For complete information on MongoDB Ops Manager architecture, see the online documentation at https://docs.opsmanager.mongodb.com/current/.

Figure 2: Simplified Ops Manager deployment

The Ops Manager HTTP Service provides a web application for administration. These pages are simply a front end to a robust set of Ops Manager REST APIs that are hosted in the Ops Manager HTTP Service. It is through these REST APIs that the Kubernetes Operator interacts with Ops Manager.

MongoDB Automation Agent

With a typical Ops Manager deployment there are many management options, including upgrading the cluster to a different version, adding secondaries to an existing replica set, and converting an existing replica set into a sharded cluster. So how does Ops Manager go about upgrading each node of a cluster or spinning up new MongoD instances? It does this by relying on a locally installed service called the Ops Manager Automation Agent, which runs on every MongoDB node in the cluster. This lightweight service is available on multiple operating systems, so whether your MongoDB nodes are running in a Linux container, on a Windows Server virtual machine, or on your on-prem PowerPC server, there is an Automation Agent available for that platform.
The Automation Agents receive instructions from the Ops Manager REST APIs to perform work on the cluster node.

MongoDB Monitoring Agent

When Ops Manager shows statistics such as database size and inserts per second, it is receiving this telemetry from the individual nodes running MongoDB. Ops Manager relies on the Monitoring Agent to connect to your MongoDB processes, collect data about the state of your deployment, and then send that data to Ops Manager. There can be one or more Monitoring Agents deployed in your infrastructure for reliability, but only one primary agent per Ops Manager Project collects data. Ops Manager is all about automation: as soon as you have the Automation Agent deployed, other supporting agents, like the Monitoring Agent, are deployed for you. In the scenario where the Kubernetes Operator has issued a command to deploy a new MongoDB cluster in a new project, Ops Manager will take care of deploying the Monitoring Agent into the containers running your new MongoDB cluster.

Getting started with MongoDB Enterprise Operator for Kubernetes

Ops Manager is an integral part of automating a MongoDB cluster with Kubernetes. To get started you will need access to an Ops Manager 4.0+ environment or MongoDB Cloud Manager. The MongoDB Enterprise Operator for Kubernetes is compatible with Kubernetes v1.9 and above. It has also been tested with OpenShift version 3.9. You will need access to a Kubernetes environment. If you do not have access to a Kubernetes environment, or just want to stand up a test environment, you can use minikube, which deploys a local single-node Kubernetes cluster on your machine. For additional information and setup instructions, see https://kubernetes.io/docs/setup/minikube. The following sections cover the three-step installation and configuration of the MongoDB Enterprise Operator for Kubernetes.
The order of installation is as follows:

Step 1: Install the MongoDB Enterprise Operator via Helm or a yaml file
Step 2: Create and apply a Kubernetes ConfigMap file
Step 3: Create the Kubernetes Secret object which will store the Ops Manager API Key

Step 1: Installing MongoDB Enterprise Operator for Kubernetes

To install the MongoDB Enterprise Operator for Kubernetes you can use Helm, the Kubernetes package manager, or pass a yaml file to kubectl. The instructions for both of these methods are as follows; pick one and continue to Step 2.

To install the operator via Helm, first clone the public repo https://github.com/mongodb/mongodb-enterprise-kubernetes.git, change directories into the local copy, and run the following command on the command line:

```shell
helm install helm_chart/ --name mongodb-enterprise
```

To install the operator via a yaml file, run the following command from the command line:

```shell
kubectl apply -f https://raw.githubusercontent.com/mongodb/mongodb-enterprise-kubernetes/master/mongodb-enterprise.yaml
```

At this point the MongoDB Enterprise Operator for Kubernetes is installed and now needs to be configured. First, we must create and apply a Kubernetes ConfigMap file. A Kubernetes ConfigMap holds key-value pairs of configuration data that can be consumed in pods. In this use case, the ConfigMap will store configuration information about the Ops Manager deployment we want to use.

Step 2: Creating the Kubernetes ConfigMap file

For the Kubernetes Operator to know which Ops Manager you want to use, you will need to obtain some properties from the Ops Manager console and create a ConfigMap file. These properties are as follows:

Base Url - The URL of your Ops Manager or Cloud Manager.
Project Id - The id of the Ops Manager Project which the Kubernetes Operator will deploy into.
User - An existing Ops Manager username.
Public API Key - Used by the Kubernetes Operator to connect to the Ops Manager REST API endpoint.

If you already know how to obtain these values, copy them down and proceed to Step 3.

Base Url

The Base Url is the URL of your Ops Manager or Cloud Manager. Note: If you are using Cloud Manager, the Base Url is https://cloud.mongodb.com. To obtain the Base Url in Ops Manager, copy the URL used to connect to your Ops Manager server from your browser's navigation bar. It should be something similar to http://servername:8080. You can also perform the following: log in to Ops Manager and click on the Admin button, then select the “Ops Manager Config” menu item. You will be presented with a screen similar to the figure below:

Figure 3: Ops Manager Config page

Copy down the value displayed in the URL To Access Ops Manager box. Note: If you don’t have access to the Admin drop-down, you will have to copy the URL used to connect to your Ops Manager server from your browser's navigation bar.

Project Id

The Project Id is the id of the Ops Manager Project which the Kubernetes Operator will deploy into. An Ops Manager Project is a logical organization of MongoDB clusters and also provides a security boundary. One or more Projects are a part of an Ops Manager Organization. If you need to create an Organization, click on your user name at the upper right side of the screen and select “Organizations”. Next click on the “+ New Organization” button and provide a name for your Organization. Once you have an Organization you can create a Project.

Figure 4: Ops Manager Organizations page

To create a new Project, click on your Organization name. This will bring you to the Projects page; from here click on the “+ New Project” button and provide a unique name for your Project. If you are not an Ops Manager administrator you may not have this option and will have to ask your administrator to create a Project.
Once the Project is created, or if you already have a Project created on your behalf by an administrator, you can obtain the Project Id by clicking on the Settings menu option as shown in the figure below.

Figure 5: Project Settings page

Copy the Project ID.

User

The User is an existing Ops Manager username. To see the list of Ops Manager users, return to the Project and click on the “Users & Teams” menu. You can use any Ops Manager user who has at least Project Owner access. If you’d like to create another username, click on the “Add Users & Team” button as shown in Figure 6.

Figure 6: Users & Teams page

Copy down the email of the user you would like the Kubernetes Operator to use when connecting to Ops Manager.

Public API Key

The Ops Manager API Key is used by the Kubernetes Operator to connect to the Ops Manager REST API endpoint. You can create an API Key by clicking on your username in the upper right hand corner of the Ops Manager console and selecting “Account” from the drop-down menu. This will open the Account Settings page as shown in Figure 7.

Figure 7: Public API Access page

Click on the “Public API Access” tab. To create a new API Key, click on the “Generate” button and provide a description. Upon completion you will receive an API Key as shown in Figure 8.

Figure 8: Confirm API Key dialog

Be sure to copy the API Key, as it will be used later as a value in a configuration file. It is important to copy this value while the dialog is up, since you cannot read it back once you close the dialog. If you missed writing the value down, you will need to delete the API Key and create a new one. Note: If you are using MongoDB Cloud Manager, or have Ops Manager deployed in a secured network, you may need to whitelist the IP range of your Kubernetes cluster so that the Operator can make requests to Ops Manager using this API Key.
Now that we have acquired the necessary Ops Manager configuration information, we need to create a Kubernetes ConfigMap file for the Kubernetes Project. To do this, use a text editor of your choice and create the following yaml file, substituting the placeholders for the values you obtained in the Ops Manager console. For sample purposes we can call this file “my-project.yaml”.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: <configmap-name>
  namespace: mongodb
data:
  projectId: <project-id>
  baseUrl: <base-url>
```

Figure 9: Sample ConfigMap file

Note: The format of the ConfigMap file may change over time as features and capabilities get added to the Operator. Be sure to check the MongoDB documentation if you are having problems submitting the ConfigMap file. Once you create this file you can apply the ConfigMap to Kubernetes using the following command:

```shell
kubectl apply -f my-project.yaml
```

Step 3: Creating the Kubernetes Secret

For a user to be able to create or update objects in an Ops Manager Project, they need a Public API Key. Earlier in this section we created a new API Key, and you hopefully wrote it down. This API Key will be held by Kubernetes as a Secret object. You can create this Secret with the following command, replacing the placeholders with the User and Public API Key values you obtained from your Ops Manager console:

```shell
kubectl -n mongodb create secret generic <credentials-name> \
  --from-literal="user=<username>" \
  --from-literal="publicApiKey=<api-key>"
```

You can pick any name for the credentials; just make a note of it, as you will need it later when you start creating MongoDB clusters. Now we're ready to start deploying MongoDB clusters!

Deploying a MongoDB Replica Set

The Kubernetes Operator can deploy a MongoDB standalone, replica set, or sharded cluster.
To deploy a 3-node replica set, create the following yaml file, substituting the placeholders for the names you chose earlier:

```yaml
apiVersion: mongodb.com/v1
kind: MongoDbReplicaSet
metadata:
  name: <replica-set-name>
  namespace: mongodb
spec:
  members: 3
  version: 3.6.5
  persistent: false
  project: <configmap-name>
  credentials: <credentials-name>
```

Figure 10: simple-rs.yaml file describing a three node replica set

The name of your new cluster can be any name you choose. The name of the Ops Manager Project ConfigMap and the name of the credentials Secret were defined previously. To submit the request for Kubernetes to create this cluster, simply pass the name of the yaml file you created to the following kubectl command:

```shell
kubectl apply -f simple-rs.yaml
```

After a few minutes your new cluster will show up in Ops Manager, as shown in Figure 11.

Figure 11: Servers tab of the Deployment page in Ops Manager

Notice that Ops Manager installed not only the Automation Agents on these three containers running MongoDB; it also installed the Monitoring Agent and Backup Agents.

A word on persistent storage

What good would a database be if, any time the container died, your data went to the grave as well? Probably not a good situation, and maybe one where tuning up the resumé might be a good thing to do as well. Until recently, the lack of persistent storage and consistent DNS mappings were major issues with running databases within containers. Fortunately, recent work in the Kubernetes ecosystem has addressed this concern, and new features like PersistentVolumes and StatefulSets have emerged, allowing you to deploy databases like MongoDB without worrying about losing data because of hardware failure or because the container was moved elsewhere in your datacenter. Additional configuration of the storage is required on the Kubernetes cluster before you can deploy a MongoDB cluster that uses persistent storage. In Kubernetes there are two types of persistent volumes: static and dynamic. The Kubernetes Operator can provision MongoDB objects (i.e., standalones, replica sets, and sharded clusters) using either type.
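As a minimal sketch, the replica set spec shown earlier can request persistent storage by flipping its persistent flag. The placeholders are the same hypothetical names used above, and any additional storage options (storage classes, volume sizes) are left out here because they vary by Operator version and cluster setup; check the Operator documentation for what your version supports.

```yaml
# Sketch only: a replica set spec asking the Operator to back each
# member with a PersistentVolume. Placeholder names are hypothetical.
apiVersion: mongodb.com/v1
kind: MongoDbReplicaSet
metadata:
  name: <replica-set-name>
  namespace: mongodb
spec:
  members: 3
  version: 3.6.5
  persistent: true   # was false in the earlier example
  project: <configmap-name>
  credentials: <credentials-name>
```

With persistent storage enabled, the underlying StatefulSet keeps each member's data volume and DNS identity stable across container restarts and rescheduling.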
Connecting your application

Connecting to MongoDB deployments in Kubernetes is no different than in other deployment topologies. However, it is likely that you'll need to address the network specifics of your Kubernetes configuration. To abstract deployment-specific information such as the hostnames and ports of your MongoDB deployment, the MongoDB Enterprise Operator for Kubernetes uses Kubernetes Services.

Services

Each MongoDB deployment type will have two Kubernetes Services generated automatically during provisioning. For example, suppose we have a single 3-node replica set called "my-replica-set"; then we can enumerate the services using the following statement:

```shell
kubectl get all -n mongodb --selector=app=my-replica-set-svc
```

This statement yields the following results:

```
NAME                   READY   STATUS    RESTARTS   AGE
pod/my-replica-set-0   1/1     Running   0          29m
pod/my-replica-set-1   1/1     Running   0          29m
pod/my-replica-set-2   1/1     Running   0          29m

NAME                                  TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)           AGE
service/my-replica-set-svc            ClusterIP   None             <none>        27017/TCP         29m
service/my-replica-set-svc-external   NodePort    10.103.220.236   <none>        27017:30057/TCP   29m

NAME                              DESIRED   CURRENT   AGE
statefulset.apps/my-replica-set   3         3         29m
```

Note the string "-svc" appended to the name of the replica set. The service ending in "-external" is a NodePort, which means it's exposed to the overall cluster DNS name on port 30057.

Note: If you are using minikube you can obtain the IP address of the running replica set by issuing the following:

```shell
minikube service list
```

In our example, which used minikube, the result set contained the following information:

```
mongodb   my-replica-set-svc-external   http://192.168.39.95:30057
```

Now that we know the IP of our MongoDB cluster we can connect using the Mongo Shell or whatever application or tool you would like to use.

Basic Troubleshooting

If you are having problems submitting a deployment, you should read the logs. Issues like authentication problems and other common errors can be easily detected in the log files.
You can view the MongoDB Enterprise Operator for Kubernetes log files via the following command:

```shell
kubectl logs -f deployment/mongodb-enterprise-operator -n mongodb
```

You can also use kubectl to see the logs of the database pods. The main container process continually tails the Automation Agent logs, which can be seen with the following statement:

```shell
kubectl logs <pod-name> -n mongodb
```

Note: You can enumerate the list of pods using:

```shell
kubectl get pods -n mongodb
```

Another common troubleshooting technique is to shell into one of the containers running MongoDB. Here you can use common Linux tools to view the processes, troubleshoot, or even check mongo shell connections (sometimes helpful in diagnosing network issues):

```shell
kubectl exec -it <pod-name> -n mongodb -- /bin/bash
```

An example of the processes running inside the container is as follows:

```
UID       PID    PPID   C   STIME   TTY     TIME       CMD
mongodb   1      0      0   16:23   ?       00:00:00   /bin/sh -c supervisord -c /mongo
mongodb   6      1      0   16:23   ?       00:00:01   /usr/bin/python /usr/bin/supervi
mongodb   9      6      0   16:23   ?       00:00:00   bash /mongodb-automation/files/a
mongodb   25     9      0   16:23   ?       00:00:00   tail -n 1000 -F /var/log/mongodb
mongodb   26     1      4   16:23   ?       00:04:17   /mongodb-automation/files/mongod
mongodb   45     1      0   16:23   ?       00:00:01   /var/lib/mongodb-mms-automation/
mongodb   56     1      0   16:23   ?       00:00:44   /var/lib/mongodb-mms-automation/
mongodb   76     1      1   16:23   ?       00:01:23   /var/lib/mongodb-mms-automation/
mongodb   8435   0      0   18:07   pts/0   00:00:00   /bin/bash
```

From inside the container we can connect to the local MongoDB node easily by running the mongo shell via the following command:

```shell
/var/lib/mongodb-mms-automation/mongodb-linux-x86_64-3.6.5/bin/mongo --port 27017
```

Note: The MongoDB version in the directory path may be different than 3.6.5; be sure to check the directory path.

Where to go for more information

More information will be available on the MongoDB documentation website in the near future.
Until then, check out these resources for more information:

Slack: #enterprise-kubernetes (sign up at https://launchpass.com/mongo-db)
GitHub: https://github.com/mongodb/mongodb-enterprise-kubernetes

To see all MongoDB operations best practices, download our whitepaper: https://www.mongodb.com/collateral/mongodb-operations-best-practices
Transitioning from Teacher to MongoDB’s New Enterprise Modernization Team: Meet Gabriela Preiss
As a global company, MongoDB has amazing employees with interesting backgrounds and stories. I recently sat down with Gabriela Preiss, an Enterprise Modernization Consultant, to learn more about her journey across the globe from the U.S. to Barcelona, Spain, and her experience transitioning from teaching to becoming the first hire for MongoDB’s brand-new Enterprise Modernization Team, shifting enterprises toward innovation and generating a ton of compelling content along the way. Andrew Bell: Thank you for sharing your story, Gabriela. I’d love to know how you got to where you are today in your role. What skills are important for someone on your team to be successful? Gabriela Preiss: My career journey has been from one end of the spectrum to the other. Originally, I studied English and education, and I was a high school teacher for four years. I loved teaching, and I encourage anyone who wants to pursue it to do just that, but eventually, I hit a block and craved more mobility. So I moved from the U.S. to Portugal and studied web and mobile development. Finding myself back as a junior in a new industry, I worked my way up by freelancing as a web developer, building a curriculum for a coding school, and then quickly finding my way into a lead tech support role with a popular web application organization, where I also led the QA process. So, how does all of this add up to working in and with data? I truly believe every professional experience is the chance to extract something positive — a learning takeaway. This diverse background has challenged me and shaped me, as well as helped me to be confident in my choices, to trust I’m taking steps in the right direction, because ultimately each career move has been better than the last and has led me to where I am now, with MongoDB, as an Enterprise Modernization Consultant. Ultimately a career risk led me to a job that didn’t even exist a year ago on a new team.
So, we can never truly say what the future holds for us; we may be headed toward a killer career that hasn’t even been invented yet. When it comes to being successful on my team, I think this role is open to so much diversity. I’m trying to narrow down any specific skills, but I think anyone who is ambitious, independent, takes ownership of what they produce, and is curious will succeed here. Curiosity is a huge asset — someone who is open to learning and diving deep into what they don’t yet understand, eager to keep growing, and tech-curious. A big part of what we do involves us keeping our finger on the pulse of tech and data innovation, so we can confidently discuss, debate, or write about it. This means feeding ourselves with the right tech news content. AB: I’d love to know more about the modernization team. What’s your role and your day-to-day like? GP: Our reach is quite broad, but if I had to define it, I’d say the Enterprise Modernization Team (EMT) assists, educates, and helps inspire large enterprises to move toward modernization and innovation. Often, large enterprises have the most complex, costly legacies in their systems and need macro and micro aid and insights to not only modernize but also to visualize and tally the endpoint. EMT Principals and Consultants have the industry expertise and capability to translate our value proposition to senior executives and engineering management. This includes generating training content for internal teams; meeting with other teams for potential and ongoing accounts; delivering webinars, published content, and interactive exposition presentations; and meeting with clients so they have a stronger understanding of how MongoDB helps them to modernize from the most basic format, such as adopting the document model, to truly leading in innovation, such as data science, machine learning, and real-time analytics.
So, EMT is a bridge between sales, technical sales, and marketing for complex industry use cases and solutions. These are the teams we collaborate most often with, working closely with sales reps and solutions architects, collaborating with solution providers, and closely aligning with the marketing team to produce diverse content and product alignments. So, if you ask me what exactly is my role, I’d say it’s all of the above. Our team is small, although it’s growing quickly, and we have big plans to expand exponentially in the near future. That said, we have a democratic way of dividing the work. We’re made up of our Global Head, Boris Bialek, our Principal, Steve Dalby, and the two Consultants, including myself and Vanda Friedrichs. And we’re all expected to bring equally to the table, regardless of who has more seniority. This lets us all have an idea of what everyone is working on, and we frequently dip into each other’s projects either to help out or request aid. Each project is free roaming for all: as long as we’re aware of the objective and deadline, we can get creative with how we reach the endpoint. My projects are constantly evolving and regenerating, and I could joke that the only thing they have in common with each other is they all have to do with MongoDB. However, when I was hired, Boris was very clear and direct that each day would be different, and his promise has held true. I don’t have a day-to-day like most others might in regard to consistent projects, but the objective is always the same for each: how can we showcase MongoDB’s value in modernization and innovation in regard to data and tech? Because my projects are so diverse, and often more creative-oriented than anything else, I make up for what some may call a “lack of structure” by being very structured in how I plan my day.
Before each day, I predetermine how my next day is going to be divided hourly by projects, tasks, and follow-ups, and I reserve some time for “self-learning,” where I take time to continue my training curriculum, since that’s an ongoing track. AB: Since this is a new role, what tools and resources (e.g., Sales Bootcamp) were you given to help you ramp up? GP: True, this was a new role when I first stepped in, so I didn’t totally know what to expect. There was a running joke that I was drinking from a fire hose, just having everything blasted at me, and something was bound to stick. MongoDB sets all employees up with boundless learning resources, so I created a curriculum for myself. I prioritized from the top down, based on what I needed to understand ASAP, such as MongoDB’s services and functions, and from there I had freedom to roam based on what interested me the most and what my weak spots were, and was given time to dive in deep technically. For example, I ran POVs to see the data in action from a locally set up database. I know other teams within the company have established curriculums for onboarding, but because this was a new role, I used the resources available and that worked for me. I was given a lot of liberty with my learning because it was mostly autonomous and self-driven, but that’s not to say my learning is over. The company really promotes a learning culture, and every week there are new resources with webinars, learning materials, training materials, and so on. Early into my onboarding, I participated in what’s called our Sales Bootcamp. It’s a two-week intensive training that dives deep into MongoDB’s services as a whole and lays a strong foundation to build on. It’s usually something that’s done in person at MongoDB’s headquarters in New York City, but since this is the COVID-19 era, it was done virtually, with a big cohort of new hires included from Europe and the Americas. This was a cool experience, because I got to meet a lot of new faces.
Professionally, my background is originally in education, so I used to write my own curricula for my students, and I’ve been impressed with what the MongoDB enablement and Learning & Development teams generate.

AB: What content have you created, and what will you create? What is the purpose of this content? How is it leveraged?

GP: Among many other roles, the EMT is a content-generating team, so we’re constantly working on creating something new or collaborating with other teams to create new content. As of today, I’ve been with MongoDB for four months, and in that short time, I’ve been able to generate a lot of interesting, challenging pieces. Each project I’m given is a chance to dive deeper into a subject, like data science or fintech, and expand my understanding of it. One of my first projects was the chance to write a blog post about MongoDB’s partnership with Iguazio, and how our data platform is the ideal persistence layer for Iguazio’s data science and MLOps platform, which is used to develop, deploy, and manage AI applications. Clearly, each project is a team effort, but this one gave me the opportunity to dive into a topic I find personally interesting while building connections with some of our most innovative partners.

In my first or second week, I was introduced to an internal deck created by one of our Solutions Architects, Pascal Jensen. It was a sort of think piece on how data is being driven by the growing uncertainties of the world, in a political, social, and economic sense, and how the most innovative leading companies are responding. We decided to turn this into a more holistic, complete white paper to reach a wider audience.
With that, after really digesting the deck and conducting multiple interviews with the Solutions Architects who contributed to it, I built an extensive paper around it, breathing life into the expression “digital by default.” This was something I was quite proud of, because it came so early in my time with MongoDB, and it let me dive into truly interesting topics. I was able to build on the holistic elements of data and how it’s reshaping even the most mundane parts of the world, propelling us into the future with innovative technologies and solutions for some of the most crucial global concerns, such as hunger and healthcare.

Last month, I presented my first corporate webinar with MongoDB, discussing transitioning from a relational database to MongoDB’s document model. It was a huge opportunity, because we were focusing on Spanish-speaking countries in Latin America. For me, this was almost a beta project, because I didn’t know what to expect in terms of reception. In the end, it was a massive success: overall, we had more than 6,500 registrants. That was a really exciting experience, because I knew as a team and a company we were clearly doing something right, engaging with the right audience, and connecting with the right people. Positive responses are still pouring in from that webinar, and I was happy to be a part of it, especially as a rookie. Again, it just speaks to how much autonomy and freedom to create I’ve been given. My manager never holds me back from any opportunity and really encourages our success. In the spring, we’ll repeat the endeavor with another webinar, in Spanish, covering a different topic I’m currently preparing.

AB: What was it like starting in a new role on a new team, especially during the pandemic? How do you stay connected to the team despite living in different countries?

GP: Despite the pandemic, there was a lot to dive into because the company was running at full speed.
It can be slightly intimidating being the new person on a fast-paced team, but I felt very included and seen from day one, and there was more than enough work and training to keep me busy. I haven’t really considered what it would’ve been like to work at MongoDB pre-pandemic, because at this point, this is all I’ve known. Staying connected with my direct team, though, has been the easiest part for me. I’ve never once felt disconnected, despite never having met them in person. As of now, we’re dispersed across Dublin, London, Zurich, and Barcelona, and we’re growing. Our backgrounds are even more diverse considering where we’ve lived, where we’re from, and the languages we speak. It’s refreshing to be part of a team that isn’t limited to one geographic region, because it opens our minds and team discussions to diverse views and ideas.

AB: How would you describe the team’s culture? And how do you maintain this culture during COVID-19?

GP: The team culture is really positive, inclusive, and ambitious. Every team meeting feels like a brainstorming session, because part of our job is innovation. We’re all given a voice and are expected to use it as we work through ideas and ongoing projects. Overall, our team culture is casual in the sense that we engage with each other informally, but we all recognize what we need to be working on and by when. We’re each expected to take ownership of our work, and we’re given a lot of creative and structural autonomy. That means independently owning whatever we’re working on, and it goes for professional learning too. MongoDB creates a lot of internal resources that I take advantage of, from guided training and courses to reading material, interactive training, webinars, and so forth.
I was paired with one of our Solutions Architects, Benjamin Schubert, and he patiently made himself available to guide me through some of the more technical aspects of our databases as I learned to navigate them myself, for which I’m eternally grateful. Of course, we have support any time we need it, and I can easily seek out resources or set up a Zoom call with an internal expert if I have questions. But at the end of the day, the ticker moves forward only if everyone is doing their part, so each of us takes our part seriously.

Interested in pursuing a career at MongoDB? We have several open roles on our teams across the globe and would love for you to build your career with us!