Resources

Take Advantage of Low-Latency Innovation with MongoDB Atlas, Realm, and AWS Wavelength

The emergence of 5G networking signals future growth for low-latency business opportunities. Whether it’s the ever-popular world of gaming, AR/VR, and AI/ML, or the more critical areas of autonomous vehicles and remote surgery, there has never been a better opportunity for companies to leverage low-latency application services and connectivity. This kind of near-instantaneous communication over 5G is still largely in its nascent stages, but customers are adapting to its benefits quickly. New end-user expectations mean back-end service providers must meet growing demand. At the same time, business customers expect to seamlessly deploy the same cloud-based back-end services they’re familiar with, close to their data sources or end users.

With MongoDB Realm and AWS Wavelength, you can now develop applications that take advantage of the low latency and higher throughput of 5G, and you can do it with the same tools you’re familiar with. This post explores the benefits of AWS Wavelength, MongoDB Atlas, and Realm, as well as how to set up and use each service in order to build better web and mobile applications and evolve the user experience. We’ll also walk through a real-world use case, featuring a smart factory as the example.

Introduction to MongoDB Atlas & Realm on AWS

MongoDB Atlas is a global cloud database service for modern applications. Atlas is the best way to run MongoDB on AWS because, as a fully managed database-as-a-service, it offloads the burden of operations, maintenance, and security to the world’s leading MongoDB experts while running on industry-leading and reliable AWS infrastructure. MongoDB Atlas enables you to build applications that are highly available, performant at a global scale, and compliant with the most demanding security and privacy standards. When you use MongoDB Atlas on AWS, you can focus on driving innovation and business value instead of managing infrastructure. Services like Atlas Search, Realm, Atlas Data Lake, and more are also offered, making MongoDB Atlas the most comprehensive data platform in the market. MongoDB Atlas seamlessly integrates with many AWS products. Click here to learn more about common integration patterns.

Why use AWS Wavelength?

AWS Wavelength is an AWS infrastructure offering optimized for mobile edge computing applications. Wavelength Zones are AWS infrastructure deployments that embed AWS compute and storage services within communications service providers’ (CSP) data centers. AWS Wavelength allows customers to use industry-leading and familiar AWS tools while moving user data closer to them in 13 cities in the US, as well as London, UK; Tokyo and Osaka, Japan; and Daejeon, South Korea. Pairing Wavelength with MongoDB’s flexible data model and responsive Realm database for mobile and edge applications gives customers a familiar platform that can run anywhere and scale to meet changing demands.

Why use Realm?

Realm’s integrated application development services make it easy for developers to build industry-leading apps on mobile devices and the web. Realm comes with three key features:

- A cross-platform mobile and edge database
- A cross-platform mobile and edge sync solution
- Time-saving application development services

1. Mobile & edge database

Realm’s mobile database is an open source, developer-friendly alternative to Core Data and SQLite. With Realm’s open source database, mobile developers can build offline-first apps in a fraction of the time.
Supported languages and frameworks include Swift, C#, Xamarin, JavaScript, Java, React Native, Kotlin, and Objective-C. The Realm database was built with a flexible, object-oriented data model, so it’s simple to learn and mirrors the way developers already code. Because it was built for mobile, applications built on Realm are reliable, highly performant, and work across platforms.

2. Mobile and edge sync solution

Realm Sync is an out-of-the-box synchronization service that keeps data up to date between devices, end users, and your backend systems, all in real time. It eliminates the need to work with REST, simplifying your offline-first app architecture. Use Sync to back up user data, build collaborative features, and keep data up to date whenever devices are online, without worrying about conflict resolution or networking code.

Figure 2: High-level architecture of implementing Realm in a mobile application

Powered by the Realm mobile and edge database on the client side and MongoDB Atlas on the backend, Realm is optimized for offline use and capable of scaling with you. Building a first-rate app has never been easier.

3. Application development services

With Realm app development services, your team can spend less time integrating backend data for your web apps and more time building the innovative features that push your business initiatives forward. Services include:

- GraphQL
- Functions
- Triggers
- Data access controls
- User authentication

Reference Architecture

High-level design

In terms of terminology, we will be discussing three main tiers for data persistence: Far Cloud, Edge, and Mobile/IoT.

The Far Cloud is the traditional cloud infrastructure business customers are used to. Here, the main parent AWS Regions (such as US-EAST-1 in Virginia, US-WEST-2 in Oregon, etc.) are used for centralized retention of all data. While these regions are well known and trusted, not many users or IoT devices are located in close proximity to these massive data centers, and internet-routed traffic is not optimized for low latency.

As a result, we use AWS Wavelength Zones as our Edge Zones. An Edge Zone synchronizes the relevant subset of data from the centralized Far Cloud to the Edge. Partitioning principles are used so that users’ data is stored closer to them in one or a handful of these Edge Wavelength Zones, typically located in major metropolitan areas.

The last layer of data persistence is on the mobile or IoT devices themselves. On modern 5G infrastructure, data can be synchronized to a nearby Edge Zone with low latency. For less latency-critical applications, or in areas where the parent AWS Regions are closer than the nearest Wavelength Zone, data can also go directly to the Far Cloud.

Figure 3: High-level design of modern edge-aware apps using 5G, Wavelength, and MongoDB

Smart factory use case: Using Wavelength, MQTT, & Realm Sync

Transitioning from the theoretical, let’s dig one level deeper into a reference architecture. One common use case for 5G and low-latency applications is a smart factory. Here, IoT devices in a factory can connect to 5G networks for both telemetry and command/control. Typically signaling over MQTT, these sensors can send messages to a nearby Wavelength Edge Zone. Once there, machine learning and analysis can occur at the edge, and data can be replicated back to the Far Cloud parent AWS Regions. This is critical, as compute capabilities at the edge, while low-latency, are not always full-featured.
As a result, centralizing many factories together makes sense for many applications as it relates to long-term storage, analytics, and multi-region sync. Once data is in the Edge or the Far Cloud, consumers of this data (such as AR/VR headsets, mobile phones, and more) can access it with low latency for needs such as maintenance, alerting, and fault identification.

Figure 4: High-level three-tiered architecture of what we will be building through this blog post

Latency-sensitive applications cannot simply write to Atlas directly. Instead, Realm is powerful here because it can run on mobile devices as well as on servers (such as in the Wavelength Zone) and provide low-latency local reads and writes. It seamlessly synchronizes data in real time from its local partition to the Far Cloud, and from the Far Cloud back out to other Edge Zones. Developers do not need to write complex sync logic; instead, they can focus on driving business value by writing applications that deliver high performance and low latency.

For highly available applications, AWS services such as Auto Scaling groups can be used to meet the availability and scalability requirements of the individual factory. Traditionally, this would be fronted by a load-balancing service from AWS or an open-source solution like HAProxy. Carrier gateways are deployed in each Wavelength Zone, and the carrier or client can handle routing to the nearest Edge Zone.

Setting up Wavelength

Deploying your application into Wavelength requires the following AWS resources:

- A Virtual Private Cloud (VPC) in your region
- A carrier gateway, the service that allows inbound/outbound traffic to/from the carrier network
- A Carrier IP, an address that you assign to a network interface that resides in a Wavelength Zone
- A public subnet
- An EC2 instance in the public subnet
- An EC2 instance in the Wavelength Zone with a Carrier IP address

We will be following the “Get started with AWS Wavelength” tutorial located here. At least one EC2 compute instance in a Wavelength Zone will be required for the subsequent Realm section below. The high-level steps to achieve that are:

- Enable Wavelength Zones for your AWS account
- Configure networking between your AWS VPC and the Wavelength Zone
- Launch an EC2 instance in your public subnet; this will serve as a bastion host for the subsequent steps
- Launch the Wavelength application
- Test connectivity

Setting up Realm

The Realm components we listed above can be broken out into three independent steps:

- Set up a Far Cloud MongoDB Atlas cluster on AWS
- Configure the Realm serverless infrastructure (including enabling sync)
- Write a reference application utilizing Realm

1. Deploying your Far Cloud with Atlas on AWS

For this first section, we will be using a very basic Atlas deployment. For demonstration purposes, even the MongoDB Atlas free tier (called an M0) suffices. You can leverage the AWS MongoDB Atlas Quickstart to launch the cluster, so we will not enumerate the steps in specific detail. However, the high-level instructions are:

- Sign up for a MongoDB Atlas account at cloud.mongodb.com and then sign in
- Click the Create button to display the Create New Database Deployment dialog
- Choose a “Shared” cluster, then choose the size of M0 (free)
- Be sure to choose AWS as the cloud; here we will be using US-EAST-1
- Deploy and wait for the cluster to complete deployment

Once the cluster is up, you can optionally verify connectivity from your development machine, as sketched below.
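As a quick sanity check (not part of the original tutorial), a short C# program using the official MongoDB .NET driver can ping the new cluster. The connection string below is a placeholder and should be replaced with the SRV string Atlas shows under “Connect” for your own deployment:

```csharp
using System;
using MongoDB.Bson;
using MongoDB.Driver;

class AtlasPing
{
    static void Main()
    {
        // Hypothetical connection string -- substitute the one Atlas
        // displays for your own cluster.
        var client = new MongoClient(
            "mongodb+srv://appUser:<password>@cluster0.example.mongodb.net");

        // "ping" is a standard server command; { ok : 1 } means the
        // Far Cloud cluster is reachable from this machine.
        var result = client.GetDatabase("admin")
            .RunCommand<BsonDocument>(new BsonDocument("ping", 1));
        Console.WriteLine(result);
    }
}
```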
2. Configuring Realm and Realm Sync

Once the Atlas cluster has completed deploying, the next step is to create a Realm application and enable Realm Sync. Realm has a full user interface inside of the MongoDB Cloud Platform at cloud.mongodb.com; however, it also has a CLI and API, which allow connectivity to CI/CD pipelines and processes, including integration with GitHub. The steps we follow here are a high-level overview of a reference application located here. Since Realm configurations can be exported, the configuration can be imported into your environment from that repository. The high-level steps to create this configuration are as follows:

- While viewing your cluster at cloud.mongodb.com, click the Realm tab at the top
- Click “Create a New App” and give it a name such as RealmAndWavelength
- Choose the target cluster for sync to be the cluster you deployed in the previous step

Now we have a Realm app deployed. Next, we need to configure the app to enable sync. Sync requires credentials for each sync application. You can learn more about authentication here. Our application will use API key authentication. To turn that on:

- Click Authentication on the left
- On the Authentication Providers tab, find API Keys, and click Edit
- Turn on the provider and Save

If Realm has Drafts enabled, a blue bar will appear at the top where you need to confirm your changes. Confirm and deploy the change. You can now create an API key by pressing the “Create API Key” button and giving it a name. Be sure to copy this down for our application later, as it cannot be retrieved again for security reasons. Also, in the top left of the Realm UI there is a button to copy the Realm App ID. We will need this ID and API key when we write our application shortly.

Lastly, we can enable Sync. The Sync configuration relies on a schema of the data being written. This allows the objects (e.g., C# or Node.js objects) from the application we write in the next step to be translated into MongoDB documents. You can learn more about schemas here. We also need to identify a partition key. Partition keys are used to decide which subset of data should reside on each Edge node or each mobile device. For Wavelength deployments, this is typically a variation on the region name. A good partition key could be one unique value per API key, or the name of the Wavelength region (e.g., “BOS” or “DFW”). With this latter example, your Far Cloud retains data for all zones, but the Wavelength Zone in Boston will only have data tagged with “BOS” in the _pk field.

The two ways to define a schema are to write the JSON by hand or to generate it automatically. For the former, we would go to the Sync configuration, edit the Configuration tab, choose the cluster we deployed earlier, define a partition key (such as _pk as a string), and then define the rules of what that user is allowed to read and write. Then you must write the schema in the Schema section of the Realm UI. However, it is often easier to let Realm auto-detect and write the schema for you. This can be done by putting Sync into “Development Mode.” While you still choose the cluster and partition key, you only need to specify which database you want to sync all of your data to. After that, in the application written below, you can define classes, and upon connection to Realm Sync, the sync engine automatically translates each class defined in your application into the underlying JSON representing that schema. From the application’s point of view, connecting to a partition looks roughly like the sketch below.
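As an illustration (not the reference repository’s exact code), here is roughly what opening a partition-synced Realm looks like, assuming a recent Realm .NET SDK where the partition-based configuration class is PartitionSyncConfiguration. The “BOS” partition value and argument order are hypothetical choices for this example:

```csharp
using System.Threading.Tasks;
using Realms;
using Realms.Sync;

class SyncSketch
{
    static async Task Main(string[] args)
    {
        // App ID and API key are the values recorded earlier in the Realm UI,
        // passed in here as CLI arguments.
        var app = App.Create(args[0]);
        var user = await app.LogInAsync(Credentials.ApiKey(args[1]));

        // The partition value selects which slice of the Far Cloud data this
        // process syncs; e.g., the Boston Wavelength Zone only sees "BOS".
        var config = new PartitionSyncConfiguration("BOS", user);
        var realm = await Realm.GetInstanceAsync(config);
    }
}
```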
3. Writing an application using Realm Sync: MQTT broker for a smart factory

Now that the back-end data storage is configured, it is time to write the application. As a reminder, we will be writing an MQTT broker for a smart factory. IoT devices will write MQTT messages to this broker over 5G, and our application will take each packet of information and insert it into the Realm database. After that, because we completed the sync configuration above, Edge-to-Far-Cloud synchronization is automatic, and it works bidirectionally.

The reference application mentioned above is available in this GitHub repository. It is based on creating a C# console application with the documentation here. The code is relatively straightforward:

- Create a new C# console application in Visual Studio
- Like any other C# console application, have it take in as CLI arguments the Realm App ID and API key. These should be passed in via Docker environment variables later, and their values are the ones you recorded in the previous sync setup step
- Define the RealmObject, which is the data model to write to Realm
- Process incoming MQTT messages and write them to Realm

The data model for Realm objects can be as complex as makes sense for your application. To prove this all works, we will keep to a basic model:

```csharp
public class IOTDataPoint : RealmObject
{
    [PrimaryKey]
    [MapTo("_id")]
    public ObjectId Id { get; set; } = ObjectId.GenerateNewId();

    [MapTo("_pk")]
    public string Partition { get; set; }

    [MapTo("device")]
    public string DeviceName { get; set; }

    [MapTo("reading")]
    public int Reading { get; set; }
}
```

To sync an object, it must inherit from the RealmObject class. After that, just define getters and setters for each data point you want to sync.

The C# implementation will vary depending on which MQTT library you choose. Here we have used MQTTnet, so we simply create a new broker with MqttFactory().CreateMqttServer() and then start it with specific MqttServerOptionsBuilder options, where we define anything unique to the setup, such as port, encryption, and other basic broker information. However, we need to hook incoming messages with .WithApplicationMessageInterceptor() so that any time a new MQTT packet comes into the broker, we send it to a method that writes it to Realm.

The actual Realm code is also simple:

- Create an app with App.Create(), which takes the App ID (passed in as a CLI argument)
- Log in with app.LogInAsync(Credentials.ApiKey()), where the API key is again passed in as a CLI argument from what we generated before
- To insert into the database, all writes for Realm need to be done in a transaction. The syntax is straightforward: instantiate an object based on the RealmObject class we defined previously, then perform the write with realm.Write(() => realm.Add(message))

Finally, we need to wrap this up in a Docker container for easy distribution. Microsoft has a good tutorial on how to run this application inside of a Docker container with auto-generated Dockerfiles. On top of the auto-generated Dockerfile, be sure to pass in the Realm App ID and API key arguments to the application as we defined earlier.

Learning the inner workings of writing a Realm application is largely outside the scope of this blog post. However, there is an excellent tutorial within MongoDB University if you would like to learn more about the Realm SDK. A condensed sketch of how these broker pieces fit together appears below.
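For orientation only, here is a minimal sketch of the broker’s core wiring, reusing the IOTDataPoint class defined above and assuming MQTTnet 3.x and the Realm .NET SDK. The partition value and payload format are hypothetical, and production code would add validation and error handling:

```csharp
using System;
using System.Text;
using System.Threading.Tasks;
using MQTTnet;
using MQTTnet.Server;
using Realms;
using Realms.Sync;

class BrokerSketch
{
    static async Task Main(string[] args)
    {
        // App ID and API key arrive as CLI arguments (or Docker env vars).
        var app = App.Create(args[0]);
        var user = await app.LogInAsync(Credentials.ApiKey(args[1]));
        var config = new PartitionSyncConfiguration("BOS", user); // hypothetical partition

        // Intercept every published MQTT message and persist it to Realm.
        var options = new MqttServerOptionsBuilder()
            .WithDefaultEndpointPort(1883)
            .WithApplicationMessageInterceptor(context =>
            {
                // Realm instances are thread-confined, so open one per callback.
                using var realm = Realm.GetInstance(config);
                realm.Write(() => realm.Add(new IOTDataPoint
                {
                    Partition = "BOS", // must match the sync partition
                    DeviceName = context.ApplicationMessage.Topic,
                    Reading = int.Parse(Encoding.UTF8.GetString(
                        context.ApplicationMessage.Payload))
                }));
            })
            .Build();

        var broker = new MqttFactory().CreateMqttServer();
        await broker.StartAsync(options);
        Console.ReadLine(); // keep the broker alive until interrupted
    }
}
```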
Now that the application is running, and in Docker, we can deploy it in a Wavelength Edge Zone as we created above.

Bringing Realm and Wavelength together

To access the application server in the Wavelength Zone, we must go through the bastion host we created earlier. Once we’ve gone through that jump box to reach the EC2 instance in the Wavelength Zone, we can install any prerequisites (such as Docker) and start the Docker container running the Realm edge database and MQTT application. Any new inbound messages received by this MQTT broker are first written to the Edge and then seamlessly synced to Atlas in the Far Cloud. There is a sample MQTT random-number-generator container suitable for testing this environment in the GitHub repository mentioned earlier.

Our smart factory reference application is complete! At this point:

- Smart devices can write to a 5G Edge with low latency, courtesy of AWS Wavelength Zones
- MQTT messages written to the broker in the Wavelength Zone have low-latency writes and are immediately available for reads, since everything happens at the Edge through MongoDB Realm
- Those messages are automatically synchronized to the Far Cloud for permanent retention, analysis, or synchronization to other Zones via MongoDB Realm Sync and Atlas

What's Next

Get started with MongoDB Realm on AWS for free:

- Create a MongoDB Realm account
- Deploy a MongoDB backend in the cloud with a few clicks
- Start building with Realm
- Deploy AWS Wavelength in your AWS account

Build a Single View of Your Customers with MongoDB Atlas and Cogniflare's Customer 360

The key to successful, long-lasting commerce is knowing your customers. If you truly know your customers, then you understand their needs and wants and can identify the right product to deliver to them, at the right time and in the right way. However, for most B2C enterprises, building a single view of the customer poses a major hurdle due to copious amounts of fragmented data. Businesses gather data from their customers in multiple locations, such as ecommerce platforms, CRM, ERP, loyalty programs, payment portals, web apps, mobile apps, and more. Each data set can be structured, semi-structured, or unstructured, delivered as a stream, or require batch processing, which makes compiling already fragmented customer data even more complex. This has led some organizations to build bespoke solutions, which still only provide a partial view of the customer.

Siloed data sets make running operations like customer service, targeted marketing, and advanced analytics (such as churn prediction and recommendations) highly challenging. Only with a 360-degree view of the customer can an organization deeply understand their needs, wants, and requirements, as well as how to satisfy them. A single view of that 360-degree data is therefore vital for a lasting relationship.

In this blog, we’ll walk through how to build a single view of the customer using MongoDB’s database and Cogniflare’s Calleido Customer 360 tool. We’ll also explore a real-world use case focused on sentiment analysis.

Building a single view with Calleido's Customer 360

With a Customer 360 database, organizations can access and analyze various individual interactions and touchpoints to build a holistic view of the customer. This is achieved by acquiring data from a number of disparate sources. However, routing and transforming this data is a complex and time-consuming process, and many existing big data tools aren’t compatible with cloud environments. These challenges inspired Cogniflare to create Calleido.

Figure 1: Calleido Customer 360 Use Case Architecture

Calleido is a data processing platform built on top of battle-tested open source tools such as Apache NiFi. Calleido comes with over 300 processors to move structured and unstructured data from and to anywhere. It facilitates batch and real-time updates and handles simple data transformations. Critically, Calleido seamlessly integrates with Google Cloud and offers one-click deployment. It uses Google Kubernetes Engine to scale up and down based on demand, and it provides an intuitive and slick low-code development environment.

Figure 2: Calleido Data Pipeline to Copy Customers From PostgreSQL to MongoDB

A real-world use case: Sentiment analysis of customer emails

To demonstrate the power of Cogniflare’s Calleido, MongoDB Atlas, and the Customer 360 view, consider the use case of conducting sentiment analysis on customer emails. To streamline the build of a Customer 360 database, the team at Cogniflare created flow templates for implementing data pipelines in seconds. In the upcoming sections, we’ll walk through some of the most common data movement patterns for this Customer 360 use case and showcase a sample dashboard.

Figure 3: Sample Customer Dashboard

The flow commences with a processor pulling IMAP messages from an email server (ConsumeIMAP). Each new email that arrives in the chosen inbox (e.g., customer service) triggers an event. Next, the process extracts email headers to determine topline details about the email content (ExtractEmailHeaders).
Using the sender's email address, Calleido identifies the customer (UpdateAttribute) and extracts the full email body by executing a script (ExecuteScript). With all the data collected, a message payload is prepared and published through Google Cloud Platform (GCP) Pub/Sub (Kafka can also be used) for consumption by downstream flows and other services.

Figure 4: Translating Emails to Cloud PubSub Messages

The GCP Pub/Sub messages from the previous flow are then consumed (ConsumeGCPPubSub). This is where the power of the MongoDB Atlas integration comes in, as we verify each sender in the MongoDB database (GetMongo). If a customer exists in our system, we pass the email data to the next flow. Other emails are ignored.

Figure 5: Validating Customer Email with MongoDB and Calleido

Analysis of the email body copy is then conducted. For this flow, we use a processor to prepare a request body, which is then sent to Google Cloud Natural Language AI to assess the tone and sentiment of the message. The results from the language processing API then go straight into MongoDB Atlas so they can be pulled through into the dashboard.

Figure 6: Making Cloud AutoML Call with Calleido

The end result is visible in the dashboard. The Customer 360 database can be used in internal back-office systems to supplement and inform customer support. With a single view, it’s quicker and more effective to troubleshoot issues, handle returns, and resolve complaints. Leveraging information from previous client conversations ensures each customer is given the most appropriate and effective response. These data sets can then be fed into analytics systems to generate learnings and optimizations, such as associating negative sentiment with churn rate.

How MongoDB's document database helps

In the example above, Calleido takes care of copying and routing data from the business source systems into MongoDB Atlas, the operational data store (ODS). Thanks to MongoDB’s flexible data structure, we can transfer data in its original format and subsequently implement any necessary schema transformations in an iterative manner. There is no need to run complex schema migrations. This allows for the quick delivery of a single view database.

Figures 7 & 8: Calleido Data Pipelines to Copy Products and Orders From PostgreSQL to MongoDB Atlas

Calleido allows us to make this transition in just a few simple steps. The tool runs a custom SQL query (ExecuteSQL) that joins all the required data from the source tables and compiles the results in order to parallelize the processing. The data arrives in Avro format; Calleido then converts it into JSON (ConvertAvroToJSON) and transforms it to the schema designed for MongoDB (JoltTransformJSON). The single-view document shape this produces is illustrated in the sketch below.
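For illustration only, here is a hypothetical single-view customer document and lookup, expressed with the MongoDB C# driver; the collection, field names, and document shape are assumptions, not Calleido’s actual output schema:

```csharp
using MongoDB.Bson;
using MongoDB.Driver;

// Hypothetical single-view document: one customer with embedded orders and
// support interactions, so no joins are needed to assemble the 360° view.
var customer = new BsonDocument
{
    { "email", "jane.doe@example.com" },
    { "name", "Jane Doe" },
    { "orders", new BsonArray
        {
            new BsonDocument { { "orderId", 1001 }, { "total", 59.99 }, { "status", "delivered" } }
        }
    },
    { "interactions", new BsonArray
        {
            new BsonDocument { { "channel", "email" }, { "sentiment", "negative" }, { "score", -0.4 } }
        }
    }
};

var customers = new MongoClient("mongodb+srv://<your-atlas-uri>")
    .GetDatabase("customer360")
    .GetCollection<BsonDocument>("customers");
customers.InsertOne(customer);

// The whole 360° view is then served by a single indexed lookup:
var view = customers.Find(new BsonDocument("email", "jane.doe@example.com")).FirstOrDefault();
```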
MongoDB Atlas is the market-leading choice for the Customer 360 database. Here are the core reasons for its world-class standard:

- MongoDB can efficiently handle non-standardized schemas coming from legacy systems and efficiently store any custom attributes.
- Data models can include all the related data as nested documents. Unlike SQL databases, MongoDB avoids complicated join queries, which are difficult to write and not performant.
- MongoDB is rapid. The current view of a customer can be served in milliseconds without the need to introduce a caching layer.
- The MongoDB flexible schema model enables agility with an iterative approach. In the initial extraction, the data can be copied nearly exactly in its original shape. This drastically reduces latency. In subsequent phases, the schema can be standardized and the quality of the data can be improved without complex SQL migrations.
- MongoDB can store dozens of terabytes of data across multiple data centers and easily scale horizontally. Data can be sharded across multiple regions to help navigate compliance requirements. Separate analytics nodes can be set up to avoid impacting the performance of production systems.
- MongoDB has a proven record of acting as a single view database, with legacy and large organizations up and running with prototypes in two weeks and into production within a business quarter.
- MongoDB Atlas can autoscale out of the box, reducing costs and handling traffic peaks.
- Data can be encrypted both in transit and at rest, helping to accomplish compliance with security and privacy standards, including GDPR, HIPAA, PCI-DSS, and FERPA.

Upselling the customer: Product recommendations

Upselling customers is a key part of modern business, but the secret to doing it successfully is that it’s less about selling and more about educating. It’s about using data to identify where the customer is in the customer journey, what they may need, and which product or service can meet that need. Using a customer's purchase history, Calleido can help prepare product recommendations by routing data to the appropriate tools, such as BigQuery ML. These recommendations can then be promoted through the call center and marketing teams for both online and mobile app recommendations. There are two flows to achieve this: preparing the training data and generating the recommendations.

Preparing training data

First, the relevant data is transferred from PostgreSQL to BigQuery using the ExecuteSQL processor; this data pipeline is scheduled to execute periodically. In the next step, data is fetched from PostgreSQL and divided into 1K-row chunks with the ExecuteSQLRecord processor. These chunks are then passed to the next processor, which uses load balancing to utilize all available nodes, and all of that data is inserted into a BigQuery table using the PutBigQueryStreaming processor.

Figure 9: Copying Data from PostgreSQL to BigQuery with Calleido

Generating product recommendations

Next, we move on to generating product recommendations. First, you must purchase BigQuery capacity slots, which offer the most affordable way to take advantage of BigQuery ML features. Here, Calleido invokes an SQL procedure with the ExecuteSQL processor, then ensures that the requested BigQuery capacity is ready to use. The next ExecuteSQL processor executes an SQL query responsible for creating and training the matrix factorization ML model using the data copied by the first flow. Next in the queue, Calleido uses the ExecuteSQL processor to query our trained model, acquire all the predictions, and store them in a dedicated BigQuery table. Finally, the Wait processor holds until the capacity slots are removed, as they are no longer required.

Figure 10 & 11: Generating Product Recommendations with Calleido

Then we remove the old recommendations through the power of two processors. First, the ReplaceText processor updates the content of incoming flow files, setting the query body. This is then used by the DeleteMongo processor to perform the removal action.

Figure 12: Remove Old Recommendations

The whole flow ends with copying the new recommendations into MongoDB. The ExecuteSQL processor fetches and aggregates the top 10 recommendations per user, in chunks of 1K rows. Then the following two processors (ConvertAvroToJSON and ExecuteScript) prepare the data to be inserted into the MongoDB collection by the PutMongoRecord processor.

Figure 13: Copy Recommendations to MongoDB

The net effect of the delete-and-reload steps is sketched below.
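For illustration only, the DeleteMongo and PutMongoRecord steps amount to roughly the following operations, expressed here with the MongoDB C# driver; the collection, field names, and document shape are hypothetical:

```csharp
using System.Collections.Generic;
using MongoDB.Bson;
using MongoDB.Driver;

// Hypothetical collection holding the per-user recommendation documents.
var recommendations = new MongoClient("mongodb+srv://<your-atlas-uri>")
    .GetDatabase("customer360")
    .GetCollection<BsonDocument>("recommendations");

// Step 1 (DeleteMongo): drop the user's stale recommendations.
var userId = 42;
recommendations.DeleteMany(Builders<BsonDocument>.Filter.Eq("userId", userId));

// Step 2 (PutMongoRecord): load the fresh top-10 list computed by BigQuery ML.
var fresh = new List<BsonDocument>
{
    new BsonDocument { { "userId", userId }, { "productId", 1337 }, { "score", 0.87 } },
    // ...nine more predicted products for this user
};
recommendations.InsertMany(fresh);
```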
The end result is displayed in the Customer 360 dashboard (the data used in this example is autogenerated).

Benefits of Calleido's 360 customer database on MongoDB Atlas

Once the data is available in a centralized operational data store like MongoDB, Calleido can be used to sync it with an analytics data store such as Google BigQuery. Thanks to the Customer 360 database, internal stakeholders can then use the data to:

- Improve customer satisfaction through segmentation and targeted marketing
- Accurately and easily access compliance audits
- Build demand planning forecasts and analyses of market trends
- Reward customer loyalty and reduce churn

Ultimately, a single view of the customer enables organizations to deliver the right message to prospective buyers, funnel those at the brand awareness stage into the conversion stage, and ensure that retention and post-sales mechanics are working effectively. Historically, building a 360-degree view of the customer was a complex and fragmented process, but with Cogniflare’s Calleido and MongoDB Atlas, a Customer 360 database has become the most powerful and cost-efficient data management stack an organization can harness.

MongoDB Employees Share Their Coming Out Stories: (Inter)National Coming Out Day 2021

National Coming Out Day is celebrated annually on October 11 and is widely recognized in the United States. MongoDB proudly supports and embraces the LGBTQIA+ community across the globe, so we’ve reimagined this celebration as (Inter)National Coming Out Day. In our yearly tradition of honoring (Inter)National Coming Out Day, we asked employees who are members of the LGBTQIA+ community to share their coming out experiences. These are their stories.

Jamie Ivanov, Escalation Manager

For as long as I can remember, I always wanted to play with dolls and felt closer to my female cousins. This was rather difficult for someone who was male at birth being brought up in a fairly conservative family. At a young age, I knew that I was different but lacked a way to describe it. I certainly didn't have the support I needed, so I was brought up as a male. My father went out of his way to “make a man out of me” and toughen me up in ways that weren't exactly the most productive.

Going through school, I still knew that I was different because I kept feeling attracted to both genders, but I was too afraid to admit to it. I found a youth group for LGBT teenagers that gave me a safe place to be myself and admit to people who I really was. Outside of that group was still pretty scary; I knew that I had to be straight or I would risk being beaten up or harassed, so I tried to push my queerness aside.

In my 30s, after going through the Army and having three children, I realized that I couldn't keep pretending anymore -- who I was wasn't the true me. I started telling people that I was bisexual and hoping that they wouldn't see me as less of a person. Most of the responses I received were "yeah, we kinda figured.” Having that weight off of my shoulders was immensely relieving, but something still wasn't quite right; while admitting that helped explain who I was interested in, it still didn't explain who I was. Through a series of fortunate unfortunate events, a lot of the facade I had built up for so many years came down, and I realized that who I was didn't match the body that I was given. It was terrifying to talk to anyone about how I was feeling or who I was, but I finally told people that I am a transgender woman. It was one of the scariest things that I have ever done. Some people didn't understand, and I did lose some family over it, but most people accepted me for who I am with open arms!

Since being true to myself, more weight has been lifted off of me, and my only regret is not having the resources and courage to admit who I really was years and years ago. Since I've come out as bi/pansexual and a transgender woman, I've built stronger relationships and felt much more comfortable with myself, even to the point of liking photos of myself (which is something I've always hated and realized was because it wasn't the real me).

When a MongoDB recruiter reached out to me, I asked him the same question I asked other recruiters: "How LGBT friendly is MongoDB (with an emphasis on the transgender part)?" The response I got back from my technical recruiter Bryan Spears was the best response I had received from ANY recruiter or company and was the deciding factor in why I chose to work at MongoDB. Here’s what he said:

“MongoDB is a company that truly does its best to follow our values like embracing the power of differences; we have many employees who identify as LGBTQ+ or are allies of the LGBTQ+ community.
We also have two ERGs, MongoDB Queeries and UGT (Underrepresented Genders in Tech), which both aim to create and maintain a safe environment for those identifying as LGBTQ+ or questioning. From a benefits standpoint, we have expanded the amount of WPATH Standards of Care services available for people who identify as Transgender, Gender Nonconforming, or Transsexual through Cigna. While I know none of the information I have shared tells you what life is like at MongoDB, I hope that it shows we are doing our best to make sure that everyone feels respected and welcome here.”

I didn't always have the support I needed to be myself at some previous jobs, but MongoDB has raised the bar to a level that is hard to compete with. I'm happy to finally find a place that truly accepts me for who I am.

Ryan Francis, VP of Global Demand Generation & Field Marketing

Growing up in the 90s in what I used to call “the buckle of the Bible Belt,” I did not believe coming out was in the cards. In fact, I would sit up at night to devise my grand escape to New York City after being disowned (how I planned on paying for said escape remains unknown). I was, however, out to my best friend, Maha. During the summer between my sophomore and junior years of high school, I spent time with her family in Egypt. On the return trip, I bought a copy of The Advocate to learn about the big gay life that awaited me after my great escape.

Later that month, my mother stumbled upon that magazine when she was cleaning the house. She waited six months to bring it up, but one day in January she sat me down in the living room and asked, “Are you gay?” I paused for a moment and said… “yup.” She started crying and thanked me for being honest with her. A month later, she picked up a rainbow coffee mug at a yard sale and has been Mrs. PFLAG ever since, organizing pride rallies in our little Indiana hometown and sitting on the Episcopal church vestry this year in order to push through our parish’s blessing of same-sex marriage. Needless to say, I didn’t have to escape.

My father was also unequivocally accepting. This is a good thing because my sister Lindsay is a lesbian, so they sure would have had a tough time given that 100% of their kids turned out gay. Lindsay is the real hero here, who stayed in our homeland to raise her children with her wife, changing minds every day so that, hopefully, there will be fewer and fewer kids who actually have to make that great escape.

Angie Byron, Principal Community Manager

Growing up in the Midwest in the 80s and 90s, I was always a “tomboy”; as a young kid, I gravitated to toys like Transformers and He-Man and refused to wear pink or dresses. Since we tended to have a lot in common, most of my best friends growing up were boys; I tended to feel awkward and shy around girls and didn’t really understand why at the time. I was also raised both Catholic and Bahá’í, which led to a very interesting mix of perspectives. While both religions have vastly different belief and value systems, the one thing they could agree on was that homosexuality was wrong (“intrinsically immoral and contrary to the natural law” in the case of Catholicism, and “an affliction that should be overcome” in the case of Bahá’í). Additionally, being “out” as queer at that time in that part of the United States would generally get you made fun of, if not the everlasting crap kicked out of you, so finding other queer people felt nearly impossible.
As a result, I was in strong denial about who I was for most of my childhood and gave several valiant but ultimately failed attempts at the whole “trying to date guys” thing as a teenager (I liked guys just fine as friends, but when it came to kissing and stuff it was just, er… no). In the end, I came to the reluctant realization that I must be a lesbian. I knew no other queer people in my life, and so was grappling with this reality alone, feeling very isolated and depressed. So, I threw myself into music and started to find progressively more feminist/queer punk bands whose songs resonated with my experiences and what I was feeling: Bikini Kill, Team Dresch, The Need, Sleater-Kinney, and so on.

I came out to my parents toward the end of junior high, quite by accident. Even though I had no concrete plan for doing so, I always figured Mom would be the more accepting one, given that she was Bahá’í (a religion whose basic premise is the unity of religions and the equality of humanity), and I’d have to work on Dad for a bit, since he was raised Catholic and came from a family with more conservative values from an even smaller town in the Midwest.

Imagine my surprise when one day, Mom and I were watching Ricki Lake or Sally Jessy Raphael or one of those daytime talk shows. The topic was something like “HELP! I think my son might be gay!” My mom said something off-handed like “Wow, I don’t know what I would do if one of you came out to me as gay...” And, in true 15-year-old angsty fashion, I said, “Oh YEAH? Well you better FIGURE IT OUT because I AM!” and ran into my room and slammed the door.

I remember Mom being devastated, wondering what she did wrong as a parent, and so on. I told her, truly, nothing. My parents were both great parents; home was my sanctuary from bullying at school, and my siblings and I were otherwise accepted exactly as we were, tomboys or otherwise. After we’d finished talking, she told me that I had better go tell my father, so I begrudgingly went downstairs. “Dad… I’m gay.” Instead of a lecture or expressing disdain, he just said, “Oh really? I run a gay support group at your junior high!” and I was totally mind-blown. Bizarro world. He was the social worker at my school, so this makes sense, but it was the exact opposite of the reaction I was expecting. An important life lesson in not prejudging people.

When I moved on to high school, we got… drumroll… the Internet. Here things take a much happier turn. Through my music, I was able to find a small community of fellow queers (known as Chainsaw), including a ton of us from various places in the Midwest. I was able to learn that I was NOT a freak, I was NOT alone, there were SO many other folks who felt the exact same way, and they were all super rad! We would have long talks into the night, support each other through hardships, and more than a few of us met each other in person and hung out in “real life.” Finding that community truly saved my life, and the lives of so many others. (Side note: This is also how I got into tech, because the chat room was essentially one gaping XSS vulnerability, and I taught myself HTML by typing various tags in and seeing how they rendered.)

I never explicitly came out to anyone in my hometown. I was too scared to lose important relationships (it turns out I chose my friends well, and they were all completely fine with it, but the prospect of further isolating myself as a teenager was too terrifying at the time).
Because of that, when I moved to a whole new country (Canada) and went to college, the very first thing I did on my first day was introduce myself as, “Hi, I’m Angie. I’ve been building websites for fun for a couple of years. Also, I’m queer, so if you’re gonna have a problem with that, it’s probably best we get it out of the way now so we don’t waste each other’s time.”

Flash forward to today: my Mom is my biggest supporter, has rainbow stickers all over her car, and has gone to dozens of Pride events. Hacking together HTML snippets in a chat room led to a full-blown career in tech. I gleaned a bit more specificity around my identity and now identify as a homoromantic asexual. Many of those folks I met online as a teenager have become life-long friends. And I work for a company that embraces people for who they are and celebrates our differences. Life is good.

Learn more about Diversity & Inclusion at MongoDB.

Interested in joining MongoDB? We have several open roles on our teams across the globe and would love for you to transform your career with us!