GIANT Stories at MongoDB

MongoDB Stitch Functions – The AWS re:Invent Stitch Rover Demo

Using MongoDB Stitch functions to aggregate sensor data from our re:Invent MongoDB Mobile/Stitch Mars rover

MongoDB Stitch Mobile Sync – The AWS re:Invent Stitch Rover Demo

MongoDB Stitch Mobile Sync powers the MongoDB rover by synchronizing commands between MongoDB Atlas in the cloud and MongoDB Mobile running on a Raspberry Pi.

MongoDB Stitch QueryAnywhere – The AWS re:Invent Stitch Rover Demo

The MongoDB Stitch rover demonstrates MongoDB Stitch QueryAnywhere by recording commands in MongoDB Atlas directly from the web app frontend.

MongoDB Stitch/Mobile Mars Rover Lands at AWS re:Invent

Powered by MongoDB Mobile, MongoDB Stitch, and AWS Kinesis, the MongoDB Rover debuted at AWS re:Invent. Hundreds of delegates stopped by our demo to see if they could navigate the rover around a treacherous alien landscape.

Building iOS and Android Apps with the MongoDB Stitch React Native SDK

Create React Native mobile apps for iOS and Android using the MongoDB Stitch React Native SDK

Stitch & Mobile Webinar Questions & Replay

Responses to questions from my recent MongoDB Mobile and Stitch webinar – also includes a recording to watch at your leisure.

Scaling to the Cloud With MongoDB Atlas and Pivotal Cloud Foundry

Sani Chabi Yo

Cloud, Atlas

MongoDB Atlas is a fully-managed cloud database developed by the same people who build MongoDB. Atlas handles all the complexity of deploying, managing, and healing your deployments on the cloud service provider of your choice (AWS, Azure, or GCP).

An application deployed within Cloud Foundry can leverage MongoDB Atlas in a variety of ways:

  • App configuration: Manually create a cluster, then use its connection string directly in your application's configuration.
  • User-provided service: This option extends the previous one: the connection string is used to create what Cloud Foundry calls a user-provided service, which is simply a means for developers to use services that are not available in the marketplace (see the example below).
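
For example, creating and binding a user-provided service for an existing Atlas cluster takes only a couple of cf CLI commands. This is just a sketch: the connection string, service name, and app name below are placeholders, not values from this article.

$ cf create-user-provided-service my-atlas-db -p '{"uri":"mongodb+srv://<user>:<password>@<cluster>.mongodb.net/test"}'
$ cf bind-service my-app my-atlas-db
$ cf restage my-app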

However, these options have some drawbacks:

  • Manual provisioning
  • Not the self-service experience we are looking for
  • No way to set quotas to control service usage

There is a third way. The ultimate holy grail for providing an amazing self-service experience is to use the Open Service Broker API project. This project is an open source effort sponsored by Google, IBM, Pivotal, Red Hat, SAP, and many others. The intent is to provide a simple set of API endpoints which can be used to provision, gain access to, and manage service offerings. In our scenario, we will create a service broker that Cloud Foundry uses to provision MongoDB deployments in MongoDB Atlas.

Cloud Foundry exposes its services in the Marketplace for users to consume. The service broker manages the provisioning and de-provisioning of MongoDB deployments and provides the necessary credentials to applications.
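
Under the hood, the Open Service Broker API is just a small set of REST endpoints that the platform calls on the broker. As a rough illustration only (the broker URL is a placeholder, and real calls also require authentication, API-version headers, and JSON request bodies):

$ curl http://broker.example.com/v2/catalog                                                                # list the services and plans the broker offers
$ curl -X PUT http://broker.example.com/v2/service_instances/<instance-id>                                 # provision a service instance
$ curl -X PUT http://broker.example.com/v2/service_instances/<instance-id>/service_bindings/<binding-id>   # create credentials for an application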

Pivotal Cloud Foundry with MongoDB Atlas

Don’t We Already Have a PCF Tile for MongoDB?

For those who wish to deploy and manage their own MongoDB deployment within a PCF infrastructure, there is a PCF tile for MongoDB. While using this tile does allow you to deploy MongoDB within your infrastructure, it requires setting up MongoDB Ops Manager in your environment, as well as making sure you have enough resources to accommodate the MongoDB deployments.

By leveraging the service broker and MongoDB Atlas, you reduce management complexity and overhead, and also gain:

  • A true Marketplace experience
  • Support for many compliance standards (e.g., HIPAA, PCI)
  • Access to more comprehensible deployment sizes (T-shirt sizing)
  • A multi-cloud deployment strategy
  • The flexibility to deploy into MongoDB Atlas’s 56 regions across any of the three major cloud providers (AWS, GCP, Azure)
  • Immediate access for end users to any new MongoDB Atlas capabilities

Getting Started With the Service Broker for MongoDB Atlas

To get started, create a MongoDB Atlas account and create an Organization. You’ll also need to configure access to the MongoDB Atlas API by creating an API key. Please refer to the links given for full details.

Next, get the newly created Organization’s ID. This is simply done by clicking on the “Settings” menu on the left.

MongoDB Atlas Organization ID

The next step is to authorize PCF to communicate with your MongoDB Atlas account. If you have properly followed Cloud Foundry deployment recommendations, this part is really easy. Below is a simplified view of the recommended deployment topology.

Recommended Cloud Foundry deployment topology

Without a NAT VM, you will require additional configuration, including configuring all Diego Cells with static public IPs, which will then need to be added to the Atlas IP whitelist. This is not ideal and can get complicated very quickly, especially when your network topology keeps changing.

Now, let’s add the NAT VM’s public IP to the Atlas API whitelist. If you are testing this using PCF Dev, this will simply be your laptop’s public IP. In the Atlas interface, in the account view, there is an API Whitelist section; in the upper right of that section, click on the “Add” button.

MongoDB Atlas API Whitelisting

Then add the NAT VM public IP as an entry.

MongoDB Atlas API Whitelisting

The last step is to define the Atlas tiers that you want to make available to the developers and deploy the Service Broker for Atlas. Here is the GitHub link to work through the process.
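
The exact steps are described in the repository, but at a high level, deploying and registering any service broker with PCF looks something like the following. The broker name, credentials, and URL here are placeholders, not values from this article:

$ cf push atlas-service-broker
$ cf create-service-broker atlas-broker <broker-username> <broker-password> https://atlas-service-broker.example.com
$ cf enable-service-access mongodb-atlas-aws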

Let’s Check Out the Developer Experience

Let's check what plans are available in the marketplace by running the following command:

Command

$ cf marketplace -s mongodb-atlas-aws

Text Output


$ cf marketplace -s mongodb-atlas-aws                                                                                                              
Getting service plan information for service mongodb-atlas-aws as admin...
OK

service plan         description                                                                                                                    free or paid
aws-dev              Please use this for Dev (This is a multitenant environment)                                                                    free
aws-qa               Please use this for QA                                                                                                         paid
aws-prod             Please use this for any Production deployment                                                                                  paid
aws-global_cluster   Please use this for any Production deployment that requires a global cluster. It includes 2 zones in US_EAST and US_CENTRAL    paid

Here we can see the list of plans we have previously configured.

Now let’s request a MongoDB cluster for our Dev environment by issuing the following command:

Command

$ cf create-service mongodb-atlas-aws aws-dev dev-db

Text Output

                                                                                     
Creating service instance dev-db in org pcfdev-org / space pcfdev-space as admin...
OK

Create in progress. Use 'cf services' or 'cf service dev-db' to check operation status.

Since this is an asynchronous process, you can check the provisioning progress by running the following command:

Command

$ cf service dev-db

Text Output


Showing info of service dev-db in org pcfdev-org / space pcfdev-space as admin...

name:            dev-db
service:         mongodb-atlas-aws
tags:            
plan:            aws-dev
description:     MongoDB Atlas Service on AWS
documentation:   
dashboard:       https://cloud.mongodb.com/v2/5bf305dbf2a30bc3a008384f#clusters

Showing status of last operation from service dev-db...

status:    create succeeded
message:   
started:   2018-11-19T18:50:07Z
updated:   2018-11-19T18:52:14Z

There are no bound apps for this service.

What happened here is that a project was created and its IP whitelist configuration adjusted to allow connections from any application living inside PCF. Then the cluster was deployed inside that project according to the selected plan (i.e., tier). This process took about 1-2 minutes.

A dashboard URL is returned, which gives you access to the project within Atlas.

MongoDB Atlas Dashboard

You can also verify that the project IP Whitelist has been properly configured to allow communication from the PCF environment.

MongoDB Atlas Dashboard

Now, as a developer, you might want to bind the newly created cluster to your existing application. This is achieved by “binding” your app to a service, which you do by running the following command:

Command

$ cf bind-service spring-music dev-db

Text Output


Binding service dev-db to app spring-music in org pcfdev-org / space pcfdev-space as admin...
OK
TIP: Use 'cf restage spring-music' to ensure your env variable changes take effect

In the command above I’m using the Spring Music app.

Once that command was issued, the service broker created a database user and granted it the Atlas admin role. A password was also randomly generated for that user and returned to the application, along with the connection string. The application can then just leverage that information to connect to the new cluster.

On the Atlas dashboard, you can check that a user has indeed been successfully created.

MongoDB Atlas Dashboard

Finally, we can verify in PCF that the credentials have been successfully added to the application’s environment variables by issuing the following command:

Command

$ cf env spring-music

Text Output


Getting env variables for app spring-music in org pcfdev-org / space pcfdev-space as admin...
OK

System-Provided:
{
 "VCAP_SERVICES": {
  "mongodb-atlas-aws": [
   {
    "credentials": {
     "database": "test",
     "groupId": "5bf305dbf2a30bc3a008384f",
 "mongodbUri": "mongodb+srv://79f7e2411058421aa5692fae44f0f0de-4xa0q.mongodb.net/test?retryWrites=true",
     "password": "FH0m92BeSDJj",
     "uri": "mongodb+srv://79f7e2411058421aa5692fae44f0f0de-4xa0q.mongodb.net/test?retryWrites=true",
     "username": "79f7e241-1058-421a-a569-2fae44f0f0de"
    },
    "label": "mongodb-atlas-aws",
    "name": "dev-db",
    "plan": "aws-dev",
    "provider": null,
    "syslog_drain_url": null,
    "tags": [
     "MongoDB",
     "Atlas",
     "AWS"
    ],
    "volume_mounts": []
   }
  ]
 }
}
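
How an application consumes these credentials depends on its framework, but in a Node.js app you could read them along these lines. This is only a sketch: the service instance name (dev-db) matches the one above, while the collection name is made up purely for illustration.

const { MongoClient } = require('mongodb');

// Cloud Foundry injects the broker-provided credentials into VCAP_SERVICES
const vcap = JSON.parse(process.env.VCAP_SERVICES);
const creds = vcap['mongodb-atlas-aws']
  .find(instance => instance.name === 'dev-db')
  .credentials;

// Connect to the Atlas cluster provisioned by the service broker
MongoClient.connect(creds.uri, { useNewUrlParser: true })
  .then(client => client.db(creds.database).collection('albums').countDocuments())
  .then(count => console.log(`albums: ${count}`))
  .catch(err => console.error(err));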

Conclusion

With a service broker for Atlas, the world is truly yours. If you are a platform engineer leveraging Cloud Foundry in your enterprise, then you already know that an important part of your job is bringing value into the platform and reducing complexity and redundancy as much as possible. A service broker for Atlas can help you better deliver on that promise.

Get started with MongoDB Atlas today and try out service brokers for yourself!

New to MongoDB Atlas — Get Started with Free Fully Automated Databases on Microsoft Azure

Leo Zheng

Releases, Cloud, Atlas

We’re excited to announce that teams can now use MongoDB Atlas — the global cloud database for MongoDB — for free on Microsoft Azure. The newly available free tier on Azure Cloud, known as the M0, grants users 512 MB of storage and is ideal for learning MongoDB, prototyping, and early development.

The Atlas free tier will run MongoDB 4.0 and grant users access to some of the latest database features, including multi-document transactions, which make it even easier to address a complete range of use cases with MongoDB; type conversions, which allow teams to perform sophisticated transformations natively in the database without costly and fragile ETL; and updated security defaults (SHA-256 and TLS 1.1+).
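
As a quick illustration of the type conversion operators, you can cast fields inside an aggregation pipeline directly in the database. The collection and field names here are made up purely for illustration:

// Convert string prices such as "12.99" into decimals on the fly
db.orders.aggregate([
  {
    $addFields: {
      priceDecimal: { $convert: { input: "$price", to: "decimal", onError: null } }
    }
  }
]);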

Like larger MongoDB Atlas cluster types, M0 clusters grant users optimal security with end-to-end encryption, high availability, and fully managed upgrades. M0 clusters also enable faster development by allowing teams to perform CRUD operations against their data right from their browsers via the built-in Data Explorer.

Finally, free tier clusters on Azure can be paired with MongoDB Stitch — a powerful suite of serverless platform services for apps using MongoDB — to simplify the handling of backend logic, database triggers, and integrations with the wider Azure ecosystem.

At launch, the MongoDB Atlas free tier will be available in 3 Azure regions:

  • East US (Virginia)
  • East Asia (Hong Kong)
  • West Europe (Netherlands)

Creating a free tier is easy. When building a new Atlas cluster, select Azure as your cloud of choice and one of the regions above.

Next, select M0 in the “Cluster Tier” dropdown.

Then, give the cluster a name and hit the “Create Cluster” button. Your free MongoDB Atlas cluster will be deployed in minutes.

New to MongoDB Atlas? Deploy a free cluster in minutes.

Mitigating the "fat-finger delete" with Queryable Backups

When a user accidentally deletes data, sometimes the only way to retrieve that data is through a full database restore. In MongoDB Ops Manager and MongoDB Atlas, it is easy to make a client connection to a backup and perform read-only queries. No need to restore the backup, either!

Using AWS Rekognition to Analyse and Tag Uploaded Images Using Stitch

Computers can now look at a video or image and know what’s going on and, sometimes, who’s in it. Amazon Web Services (AWS) Rekognition gives your applications the eyes they need to label visual content. In the following, you can see how to use Rekognition along with MongoDB Stitch to supplement new content with information as it is inserted into the database.

You can easily detect labels or faces in images or videos in your MongoDB Stitch application using the built-in AWS service. Just add the AWS service and use the Stitch client to execute the AWS service request right from your React.js application, or create a Stitch function and trigger. In a recent Stitchcraft live coding session on my Twitch channel, I wanted to tag an image using label detection. I set up a trigger that executed a function after an image was uploaded to my S3 bucket and its metadata was inserted into a collection.

exports = function(changeEvent) {
  // Stitch service clients configured for this app
  const aws = context.services.get('AWS');
  const mongodb = context.services.get("mongodb-atlas");

  // The document that fired the trigger (the uploaded image's metadata)
  const insertedPic = changeEvent.fullDocument;

  // Point Rekognition at the object in S3 and ask for up to 10 labels
  // with at least 75% confidence
  const args = {
    Image: {
      S3Object: {
        Bucket: insertedPic.s3.bucket,
        Name: insertedPic.s3.key
      }
    },
    MaxLabels: 10,
    MinConfidence: 75.0
  };

  // Detect labels, then store them as tags on the image's document
  return aws.rekognition()
    .DetectLabels(args)
    .then(result => {
      return mongodb
        .db('data')
        .collection('picstream')
        .updateOne({_id: insertedPic._id}, {$set: {tags: result.Labels}});
    });
};

With just a couple of service calls, I was able to take an image, stored in S3, analyse it with Rekognition, and add the tags to its document. Want to see how it all came together? Watch the recording on YouTube with the GitHub repo in the description. Follow me on Twitch to join me and ask questions live.

-Aydrian Howard
Developer Advocate
NYC
@aydrianh