We’re thrilled to announce our inaugural MongoDB Sales Academy! This program will equip emerging professionals with the training and experience they need to jumpstart a career in sales. We’re looking for recent college graduates with an interest in technology to join our rapidly growing sales team.
“The creation of a program designed to develop recent college graduates into sales professionals is a natural extension of MongoDB’s culture of talent development. We have best-in-breed sales enablement and onboarding programs, and a ‘BDR to CRO’ program focused on accelerating sales careers. We have an opportunity to bring these world-class training programs to those who are starting their careers, and to turn emerging professionals into future leaders at MongoDB.” - Meghan Gill, VP Sales Operations & SDR
The Sales Academy will be a full-time, paid, 8-week training program based in Austin, TX. It will focus on training and developing future MongoDB Sales Development Representatives (SDRs); upon completion, these recent college graduates will move into full-time SDR positions. Those who are part of the Sales Academy will have direct one-on-one support from their sales mentors, MongoDB’s leadership team, the Campus Team, and each other. These New Grads will complete a best-in-class training program covering both technical concepts and sales processes. Through regular coaching and professional development training, our Sales Academy New Grads will graduate from the program and become full-time members of the Sales team at MongoDB.
“Life at MongoDB is ever-evolving and a great start for anyone looking to take their career to the next level. You can expect to constantly learn new things about technology and your customers, work alongside some of the best sales professionals in the industry, and to be on the forefront of innovation. If you want to understand technology like never before, work with customers modernizing today’s world, and get consistent feedback from peers and leadership, this is the right place for you.” - Maya Monico, SDR Manager
This isn’t the first time that MongoDB has hired students into our sales organization. Hannah Branfman was part of our SDR Internship program and, upon graduating, joined us full-time. When asked what sales at MongoDB is like, Hannah says:
“If you have ambition, are coachable and have a strong desire to learn, MongoDB will be a great fit for you. You have to be willing to make mistakes and remain naturally curious — don’t stop asking questions! If you have the perseverance to not only get here, but to then set the bar high for yourself and surpass it, you will fit in great. Get ready to make an impact!” - Hannah Branfman, SDR
We’re eager to find recent college graduates who are ambitious and excited to learn. If you’re interested in kickstarting your sales career at MongoDB in our Austin office, this could be the perfect fit for you! The job post is now up and we look forward to reviewing your application and getting to know you!
MongoDB’s Customer Success Team Is Growing: Meet Members from Our EMEA Team
MongoDB is the perfect home for anybody looking to join a dynamic, fast-paced, and rapidly growing technology company that’s blazing a trail in the database market. And because we’re constantly onboarding new customers — from massive household brands to the newest startups — we need amazing people to set them up for success from day one. Customer Success (CS) is one team that does just that. MongoDB is currently looking for talented people worldwide to be part of a team that delivers next-generation solutions for driving digital transformation with a diverse roster of clients. Want interview tips for our Customer Success roles? Read this blog. As MongoDB’s frontline resource, you’ll share the journey with each customer from initial onboarding all the way through each phase of the customer’s plan, developing strong and lasting partnerships along the way. Members of our EMEA-based CS team give their take on what to expect while working at MongoDB.

Diverse Backgrounds Are More Than Welcome

The Customer Success team is composed of creative teammates from a wide variety of backgrounds. As an inclusive community that values your ideas and embraces differences, the CS team believes all backgrounds and experiences can provide value to the role and the customers we serve. Despite this diversity, team members all share two core characteristics: a passion for innovation and technology, and a zest for connecting with people. Giuliana Alderisi, a Customer Success Specialist at MongoDB who oversees the Italian, Spanish, and Nordics regions, speaks to the diversity of experiences across the CS team. “Our backgrounds as Customer Success Specialists are really heterogeneous,” she says. “I’m a computer engineer, but I know teammates who come from very different backgrounds, such as economics, sales development, and marketing, just to name a few.
Of course, to increase the level of support we provide to customers, we also come from different countries and speak different languages. I always enjoy the ability to look at things from a different perspective. So, needless to say, I love our coffee breaks where we share our experiences.” One of those teammates she enjoys meeting with is Lucia Fabrizio, a Customer Success Manager covering the Enterprise Italian market. “After spending some years in sales and enablement roles, I found myself eager to start a new challenge, and I really wanted to better understand what happens after the sale is closed,” Lucia says. “I knew I enjoyed inspiring and educating others, as well as guiding them as they solved problems and tackled new opportunities, but I was unsure what my next career move could be. Then I came across MongoDB’s Customer Success Manager role, and it ticked all the boxes. I would describe myself as an introvert, which doesn’t mean I am shy. I simply enjoy listening and using my genuine curiosity to dive deeply into any situation and then act strategically. I’ve learned that this is a great quality for Customer Success Managers.”

What You Do Matters

The opportunities for discovery and growth are seemingly boundless for MongoDB’s CSMs. “The team is incredibly skilled and inclusive,” says Giuliana. “It is rare that I spend a day without learning something new from my team members.” So far for Giuliana, this has included everything from pipeline generation and work on expansions to improving soft skills and stakeholder management. And according to Giuliana, building together within the MongoDB community is an immensely enjoyable process. “We all know each of us has different talents and different skills, so collaboration is not just essential — it is promoted. We brainstorm together and openly share the ideas we have to make our customers successful,” she says.
“MongoDB is big, so sometimes it might be difficult to identify the right person or department you should reach out to get the task done. However, everyone at MongoDB is super friendly, and in a matter of minutes, you’ll find the answer you’re looking for.” Among the golden learning opportunities for those on the CS team is the chance to familiarize yourself with the full range of exciting products at the company’s disposal. You’ll have the freedom to explore the many facets of MongoDB, gain an understanding of how the products work, and collaborate with a variety of talented individuals. “We work with a lot of different customers and industries,” Giuliana says. “We specialize in driving them to success while they use MongoDB products, no matter who the final user is. This also means we are product-certified and get to know the major MongoDB products so we can properly help our customers.” MongoDB does everything it can to provide team members with the tools, resources, and training needed to hit the ground running. We have a dedicated Customer Success boot camp that runs in parallel to our Sales boot camp, helping the team prepare to work with customers, including onboarding. In addition, the CS team has put together product certifications that focus on role-playing so members can practice working with customers. For those intimidated by high-level tech, the CS team is always surrounded by world-class experts who are generous with their time and eager to bring members up to speed on all of MongoDB’s latest offerings. This includes partnering with the Product team to receive additional training, particularly for new products and tools.

Being Our Customers' Voice and Advocate

In the CS role, you don’t just get to know the emerging and cutting-edge products; you also cultivate lasting relationships with your customers.
This includes everything from brainstorming creative ways for customers to adopt new features to ensuring their business is set up for scale, continuity, and sustainability. And because the CS team partners with a range of people in various job roles and companies, the top skills needed to successfully drive these relationships are:

Technical acumen and interest in our technology
Curiosity and eagerness to learn continuously
Empathy for our customers

“The base of MongoDB’s Customer Success program — at least how I think of it — is moving from a ‘vendor-customer’ relationship to an actual partnership with our customers,” says Lucia. “This is because we understand the importance of being our customers’ advocate, not only supporting them through pain points but by listening first and bringing their voice to our internal teams. When I meet with customers, I tell them to think of me as an ‘orchestra director’ who brings all the relevant MongoDB personas together to support them through each phase of their plan and create new goals together.”

A Strong Culture Built on Core Values

Both Lucia and Giuliana speak glowingly about the culture at MongoDB. As Giuliana explains, the team is encouraged to work together on brainstorming sessions and lightning talks to compare notes and share knowledge with their peers. “We’re also asked to take the time to explore new initiatives to help the CS program grow and find new ways to help our customers,” Giuliana adds. “This was already great before COVID-19 and became even more important when the pandemic affected our lives.” Giuliana also appreciates MongoDB’s benefit offerings, such as Emergency Care Leave, which helped ensure parents would not feel guilty taking care of their children during the height of the pandemic.
As a matter of fact, she adds, “None of the customer-focused or new-hire programs, trainings, or onboardings stopped; MongoDB simply adapted and pivoted with a great effort of creativity and relentlessness.” Lucia has some parting wisdom for those hoping to join the team: “Be comfortable challenging the norm and bringing your own perspective,” she says. “You are the CEO of your portfolio, but it is essential to ‘build together’ across the multitude of cross-functional teams here.” Interested in pursuing a Customer Success career at MongoDB? We have several open roles on our team and would love for you to build your career with us!
How to Get Started with MongoDB Atlas and Confluent Cloud
Every year, more and more applications are leveraging the public cloud and reaping the benefits of elastic scale and rapid provisioning. Forward-thinking companies such as MongoDB and Confluent have embraced this trend, building cloud-based solutions such as MongoDB Atlas and Confluent Cloud that work across all three major cloud providers. Companies across many industries have been leveraging Confluent and MongoDB to drive their businesses forward for years. From insurance providers gaining a customer-360 view for a personalized experience to global retail chains optimizing logistics with a real-time supply chain application, the connected technologies have made it easier to build applications with event-driven data requirements. The latest iteration of this technology partnership simplifies getting started with a cloud-first approach, ultimately improving developers’ productivity when building modern cloud-based applications with data in motion. Today, the MongoDB Atlas source and sink connectors are generally available within Confluent Cloud. With Confluent’s cloud-native service for Apache Kafka® and these fully managed connectors, setup of your MongoDB Atlas integration is simple. There is no need to install Kafka Connect or the MongoDB Connector for Apache Kafka, or to worry about scaling your deployment. All the infrastructure provisioning and management is taken care of for you, enabling you to focus on what brings you the most value — developing and releasing your applications rapidly. Let’s walk through a simple example of taking data from a MongoDB cluster in Virginia and writing it into a MongoDB cluster in Ireland. We will use a Python application to write fictitious data into our source cluster.

Step 1: Set Up Confluent Cloud

First, if you’ve not done so already, sign up for a free trial of Confluent Cloud. You can then use the Quick Start for Apache Kafka Using Confluent Cloud tutorial to create a new Kafka cluster.
Once the cluster is created, you need to enable egress IPs and copy the list of IP addresses. This list of IPs will be used as an IP allow list in MongoDB Atlas. To locate this list, select “Cluster Settings” and then the “Networking” tab. Keep this tab open for future reference: you will need to copy these IP addresses into the Atlas cluster in Step 2.

Step 2: Set Up the Source MongoDB Atlas Cluster

For a detailed guide on creating your own MongoDB Atlas cluster, see the Getting Started with Atlas tutorial. For the purposes of this article, we have created an M10 MongoDB Atlas cluster using the AWS cloud in the us-east-1 (Virginia) data center to be used as the source, and an M10 MongoDB Atlas cluster using the AWS cloud in the eu-west-1 (Ireland) data center to be used as the sink. Once your clusters are created, you will need to configure two settings in order to make a connection: database access and network access.

Network Access

You have two options for allowing secure network access from Confluent Cloud to MongoDB Atlas: you can use AWS PrivateLink, or you can secure the connection by allowing only specific IP connections from Confluent Cloud to your Atlas cluster. In this article, we cover securing via IPs. For information on setting up using PrivateLink, read the article Using the Fully Managed MongoDB Atlas Connector in a Secure Environment. To accept external connections in MongoDB Atlas via specific IP addresses, launch the “IP Access List” entry dialog under the Network Access menu. Here you add all the IP addresses that were listed in Confluent Cloud in Step 1. Once all the egress IPs from Confluent Cloud are added, you can configure the user account that will be used to connect from Confluent Cloud to MongoDB Atlas. Configure user authentication in the Database Access menu.

Database Access

You can authenticate to MongoDB Atlas using username/password, certificates, or AWS Identity and Access Management (IAM) authentication methods.
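For the username/password option, the user can also be created programmatically through the Atlas Administration API rather than the UI described next. This is a minimal, hedged sketch of building the request body; the group ID, credentials, and role shown here are placeholders, so check the current Atlas Administration API reference before relying on the exact shape:

```python
# Sketch of the JSON body for creating a SCRAM (username/password) database
# user via the Atlas Administration API. All values here are placeholders.
def database_user_payload(group_id: str, username: str, password: str) -> dict:
    """Body for POST /api/atlas/v1.0/groups/{group_id}/databaseUsers."""
    return {
        "groupId": group_id,
        "username": username,
        "password": password,
        # SCRAM users authenticate against the admin database
        "databaseName": "admin",
        "roles": [{"roleName": "readWriteAnyDatabase", "databaseName": "admin"}],
    }

body = database_user_payload("GROUP_ID_PLACEHOLDER", "kafkauser", "CHANGE_ME")
```

The UI route below accomplishes the same thing; scripting is mainly useful if you provision many projects.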
To create a username and password that will be used for the connection from Confluent Cloud, select the “+ Add new Database User” option from the Database Access menu. Provide a username and password and make a note of this credential, because you will need it in Step 3 and Step 4 when you configure the MongoDB Atlas source and sink connectors in Confluent Cloud. Note: In this article, we are creating one credential and using it for both the MongoDB Atlas source and MongoDB Atlas sink connectors. This is because both of the clusters used in this article are in the same Atlas project. Now that the Atlas clusters are created, the Confluent Cloud egress IPs are added to the MongoDB Atlas allow list, and the database access credentials are defined, you are ready to configure the MongoDB Atlas source and MongoDB Atlas sink connectors in Confluent Cloud.

Step 3: Configure the Atlas Source

Now that you have two clusters up and running, you can configure the MongoDB Atlas connectors in Confluent Cloud. To do this, select “Connectors” from the menu, and type “MongoDB Atlas” in the Filters textbox. Note: When configuring the MongoDB Atlas source and MongoDB Atlas sink, you will need the connection hostname of your Atlas clusters. You can obtain this hostname from the MongoDB connection string. An easy way to do this is by clicking the “Connect” button for your cluster. This will launch the Connect dialog; you can choose any of the Connect options. For purposes of illustration, if you click “Connect using MongoDB Compass,” you will see the following: The highlighted part in the above figure is the connection hostname you will use when configuring the source and sink connectors in Confluent Cloud.

Configuring the MongoDB Atlas Source Connector

Selecting “MongoDbAtlasSource” from the list of Confluent Cloud connectors presents you with several configuration options.
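Before filling those in, it may help to see the hostname extraction described above expressed in code. This is a quick illustration only; the cluster name and credentials in the example string are made up:

```python
# Extract the connection host needed by the Confluent Cloud connector forms
# from a MongoDB Atlas connection string. The cluster name and credentials
# below are placeholders, not real values.
from urllib.parse import urlparse

def connection_host(connection_string: str) -> str:
    """Return the host portion of a mongodb+srv:// connection string."""
    return urlparse(connection_string).hostname

example = "mongodb+srv://kafkauser:CHANGE_ME@cluster0.ab1cd.mongodb.net/?retryWrites=true"
print(connection_host(example))  # cluster0.ab1cd.mongodb.net
```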
The “Kafka Cluster credentials” choice is an API-based authentication that the connector will use to authenticate with the Kafka broker. You can generate a new API key and secret by using the hyperlink. Recall that the connection host is obtained from the MongoDB connection string; details on how to find this are described at the beginning of this section. The “Copy existing data” choice tells the connector, upon initial startup, to copy all the existing data in the source collection into the desired topic. Any changes to the data that occur during the copy process are applied once the copy is completed. By default, messages from the MongoDB source are sent to the Kafka topic as strings. The connector supports outputting messages in formats such as JSON and Avro. Recall that the MongoDB source connector reads change stream data as events, and change stream event metadata is wrapped in the message sent to the Kafka topic. If you want just the message contents, you can set the “Publish full document only” option to true. Note: For source connectors, the number of tasks will always be “1”; otherwise you would run the risk of duplicate data being written to the topic, because multiple workers would effectively be reading from the same change stream. To scale the source, you could create multiple source connectors, each defining a pipeline that looks at only a portion of the collection. Currently, this capability for defining a pipeline is not yet available in Confluent Cloud.

Step 4: Generate Test Data

At this point, you could run your Python data generator application and start inserting data into the Stocks.StockData collection at your source.
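As a rough, hypothetical sketch of what such a generator could look like (the actual stockgen.py schema may differ; the field names here are made up):

```python
# Hypothetical sketch of a fictitious stock-tick generator, in the spirit of
# stockgen.py. The schema (company_symbol, price, tx_time) is illustrative,
# not the actual generator's. Pass the Atlas connection string as argv[1].
import random
import sys
from datetime import datetime, timezone

def make_stock_doc(symbol: str) -> dict:
    """Build one fictitious stock tick document."""
    return {
        "company_symbol": symbol,
        "price": round(random.uniform(50, 500), 2),
        "tx_time": datetime.now(timezone.utc),
    }

if __name__ == "__main__" and len(sys.argv) > 1:
    from pymongo import MongoClient  # pip install pymongo dnspython
    client = MongoClient(sys.argv[1])
    coll = client["Stocks"]["StockData"]
    for _ in range(100):
        coll.insert_one(make_stock_doc("MDB"))
```

Whether you use the real generator or a sketch like this, it is the inserts into the source collection that drive the connector.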
This will cause the connector to automatically create the topic “demo.Stocks.StockData.” To use the generator, git-clone the stockgenmongo folder in the above-referenced repository and launch the data generation as follows:

python stockgen.py -c "< >"

where the MongoDB connection URL is the full connection string obtained from the Atlas source cluster. An example connection string is as follows: mongodb+srv://kafkauser:email@example.com Note: You might need to pip-install pymongo and dnspython first. If you do not wish to use this data generator, you will need to create the Kafka topic before configuring the MongoDB Atlas sink. You can do this by using the Add a Topic dialog in the Topics tab of the Confluent Cloud administration portal.

Step 5: Configure the MongoDB Atlas Sink

Selecting “MongoDB Atlas Sink” from the list of Confluent Cloud connectors will present you with several configuration options. After you pick the topic to source data from Kafka, you will be presented with additional configuration options. Because you chose to write your data in the source by using JSON, you need to select “JSON” as the input message format. The Kafka API key is an API key and secret used for connector authentication with Confluent Cloud. Recall that you obtain the connection host from the MongoDB connection string; details on how to find this are described at the beginning of Step 3. The “Connection details” section allows you to define behavior such as creating a new document for every topic message or updating an existing document based upon a value in the message. These behaviors are known as document ID and write model strategies. For more information, check out the MongoDB Connector for Apache Kafka sink documentation. If the order of the data in the sink collection is not important, you can spin up multiple tasks to gain an increase in write performance.
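To build intuition for the document ID and write model strategies mentioned above, here is a conceptual sketch only, not connector code: with a replace-style write model keyed on a document ID, each Kafka message upserts its document rather than appending a new one. A plain dict stands in for the sink collection:

```python
# Conceptual illustration of a replace-style write model keyed on _id:
# re-delivering a message with the same _id replaces the document instead
# of creating a duplicate. A dict simulates the sink collection here.
def apply_replace_write_model(collection: dict, message: dict) -> None:
    """Upsert the message into the simulated collection, keyed by _id."""
    collection[message["_id"]] = message

sink = {}
apply_replace_write_model(sink, {"_id": 1, "price": 100})
apply_replace_write_model(sink, {"_id": 1, "price": 105})  # replaces, not duplicates
print(len(sink))  # 1
```

The default behavior (a new document per topic message) would instead grow the collection with every message; which strategy you want depends on whether your topic carries events or current state.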
Step 6: Verify Your Data Arrived at the Sink

You can verify that the data has arrived at the sink via the Atlas web interface. Navigate to the collection data via the Collections button. Now that your data is in Atlas, you can leverage many of the Atlas platform capabilities, such as Atlas Search, Atlas Online Archive for easy data movement to low-cost storage, and MongoDB Charts for point-and-click data visualization. Here is a chart created in about one minute using the data generated from the sink cluster.

Summary

Apache Kafka and MongoDB help power many strategic business use cases, such as modernizing legacy monolithic systems, single views, batch processing, and event-driven architectures, to name a few. Today, Confluent Cloud and MongoDB Atlas provide fully managed solutions that enable you to focus on the business problem you are trying to solve rather than spinning your wheels on infrastructure configuration and maintenance. Register for our joint webinar to learn more!