GIANT Stories at MongoDB

MongoDB wins 2018 Customer Impact award at SpringOne Platform!

Sarah Branfman
September 24, 2018
Technology, Events, OEM

MongoDB has won the Independent Software Vendor 2018 Pivotal Partner Award for Customer Impact at the Pivotal SpringOne Platform summit. This award recognizes partners that have delivered technology contributing to notable customer success.

We are humbled and honored!

In a world with endless options, the most sophisticated and demanding organizations choose to run their business on MongoDB. This is no coincidence. MongoDB constantly pushes the pace of innovation to address the most challenging problems for our customers, and strategically partners with leaders like Pivotal to deliver robust solutions that accelerate development and success.

We launched MongoDB for Pivotal Cloud Foundry® (PCF) earlier this year and already have amazing, very public successes. For example, Merrill Corporation launched a new category for M&A professionals with MongoDB, Pivotal, and Microsoft. Our joint solution yielded 20x faster deployment, client-identified bugs fixed in hours, a 25% increase in sales, and a true transformation into a product-led technology organization!

Staying close to customer requirements and focused on customer success is why it's now easier than ever to rapidly deploy MongoDB-powered applications on Pivotal Cloud Foundry: the tile abstracts away the complexities of ensuring a consistent, predictable, and secure underlying infrastructure that can scale. MongoDB will continue with frequent updates and releases as we evolve our product in line with customer needs. Customers can download a BOSH-deployed tile for Pivotal Application Service® (PAS), and we are thrilled to announce the beta of MongoDB for the Pivotal Container Service® (PKS) as well!

Learn more about the tile in these great posts by my colleagues: On Demand MongoDB Enterprise Server on Pivotal Cloud Foundry and MongoDB Enterprise Server for Pivotal Cloud Foundry goes GA

For those with us in Washington D.C. for the SpringOne Platform conference, we have two amazing sessions for you:

Join MongoDB’s Jeff Yemin (Lead Engineer, Database Engineering) and Pivotal’s Christoph Strobl (Software Engineer) for ‘Next Generation MongoDB: Sessions, Streams, Transactions’, and Diana Esteves (MongoDB Senior Engineer) for ‘MongoDB + CredHub = Secure By Default Data Services on PCF’. And stop by our meeting room to say hello!

We’re honored to have received this prestigious award from Pivotal and look forward to continued success for our joint customers as MongoDB and Pivotal help tackle their biggest challenges!

Implementing an end-to-end IoT solution in MongoDB: From sensor to cloud

Robert Walters
September 20, 2018
IoT, Technical

Many companies across the world have chosen MongoDB as the data platform for their IoT workloads. MongoDB makes it easy to store a variety of heterogeneous sensor data in a natural, intuitive way and blend it with enterprise data, allowing you to integrate IoT apps across your organization. The referenced article walks you through setting up your own temperature-sampling solution.
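
As a taste of what that looks like in practice, here is a minimal sketch (database, collection, and field names are hypothetical, not taken from the article) of storing a single self-describing temperature reading with the Node.js driver. A humidity or vibration sensor with entirely different fields could write to the same collection without any schema changes:

const { MongoClient } = require('mongodb')

// Each reading is a self-describing document, so heterogeneous
// sensor types can share one collection.
const reading = {
  sensorId: 'greenhouse-42',   // hypothetical device name
  type: 'temperature',
  value: 21.7,
  units: 'C',
  ts: new Date(),
  meta: { firmware: '1.0.3', room: 'north-wing' }
}

async function main() {
  const client = await MongoClient.connect('mongodb://localhost:27017')
  try {
    await client.db('iot').collection('readings').insertOne(reading)
  } finally {
    client.close()
  }
}

main().catch(console.error)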

MongoDB On The Road - Seattle CodeCamp

Ken W. Alger
September 20, 2018

Seattle CodeCamp was held in the Pigott Building on the beautiful Seattle University campus. With the scenic Puget Sound just a few blocks to the west down Madison St and Lake Washington to the east down Cherry St, Seattle CodeCamp was situated in a magnificent venue.

This year, on Saturday, September 15, 2018, 450 developers attended the event. The sponsorship hall had representatives from a few of the conference sponsors, including GitHub, Flatiron School, and the College of Science and Engineering from Seattle University. Plenty of stickers and sponsor information were up for grabs, along with some great company representatives to talk with.

Seattle Code Camp Swag

The conference included over 65 sessions. One of the things I really enjoy about the CodeCamp events I’ve attended is the wide variety of speakers and session topics available. Everything from front-end to back-end topics is fair game and available to learn.

Interested in IoT topics? There were sessions on those. Microservices, yup, Vidya Vrat Agarwal from T-Mobile gave a talk on the what, why, and how of those. Interested in the JAQ (Java + Angular + database) Stack? Adobe’s Suren Konathala gave an amazing talk on that. My former Treehouse colleague, James Churchill, gave a talk comparing JavaScript Frameworks.

And that’s just a small sample of the topics covered at this year’s Seattle CodeCamp. I presented a talk on MongoDB & Node.js to a room of about 25 people. I brought along a supply of MongoDB socks as swag for session attendees, which went over well. A large percentage of people in the room were unfamiliar with MongoDB in general and the MEAN/MERN stack specifically.

MongoDB Swag Socks

As a result, I tailored my talk to discuss the technologies themselves before showing how an API is built with Node.js, Express.js, and MongoDB. I built an API that served up restaurants indexed by location. After building a functioning API, I showed some of the features of MongoDB Compass for exploring the data, performing CRUD operations, and leveraging the geospatial data stored inside MongoDB.
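
For anyone who missed the session, a minimal sketch of that kind of endpoint might look like the following (this is illustrative, not the talk's actual code; database, collection, and field names are assumed). A 2dsphere index on the collection lets MongoDB answer "near this point" queries, which Express exposes via query-string parameters:

const express = require('express')
const { MongoClient } = require('mongodb')

async function start() {
  const client = await MongoClient.connect('mongodb://localhost:27017')
  const restaurants = client.db('demo').collection('restaurants')

  // A 2dsphere index enables geospatial queries on GeoJSON points.
  await restaurants.createIndex({ location: '2dsphere' })

  const app = express()

  // GET /restaurants/near?lng=-122.33&lat=47.61
  app.get('/restaurants/near', async (req, res) => {
    const results = await restaurants
      .find({
        location: {
          $near: {
            $geometry: {
              type: 'Point',
              coordinates: [Number(req.query.lng), Number(req.query.lat)]
            },
            $maxDistance: 1000 // meters
          }
        }
      })
      .limit(10)
      .toArray()
    res.json(results)
  })

  app.listen(3000)
}

start().catch(console.error)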

Several MongoDB-specific questions came up during the session about the differences between the way legacy relational databases store information and how a next-generation database such as MongoDB handles similar schema design and queries. It was a lively discussion and a great opportunity to educate developers on the flexibility of MongoDB’s document model and the resulting increase in development speed. You can find the project code on GitHub along with the talk slides here.

MongoDB is the easiest and fastest way to work with data. Download MongoDB Compass today and start making smarter decisions about document structure, querying, indexing, and more.

Listing Your MongoDB Atlas Resources

If you want to use the MongoDB Atlas API to manage your clusters, one of the first things you will discover is that resource IDs are the keys to the kingdom. To use the API, you will need an API key, and you will need to grant your program access via the API whitelist.

You can set up your API keys and API whitelist on this screen.

Atlas Account Settings

Once they are set up, you can use them to run the py-atlas-list.py script to get a list of all resources.

$ ./py-atlas-list.py -h
usage: py-atlas-list.py [-h] [--username USERNAME] [--apikey APIKEY]
                        [--org_id ORG_ID]

optional arguments:
  -h, --help           show this help message and exit
  --username USERNAME  MongoDB Atlas username
  --apikey APIKEY      MongoDB Atlas API key
  --org_id ORG_ID      specify an organization to limit what is listed
$

If you run this on the command line, you will get output like the following:

Py Atlas List

The project and org IDs have been redacted for security purposes. As you can see, the organization ID, project IDs, and cluster names are displayed. These will be required by other parts of the API.
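
If you prefer to make the underlying calls yourself, here is a rough JavaScript sketch of the same walk (this is not the py-atlas-list.py source). The Atlas v1.0 API authenticates with HTTP digest; the request library supports digest when sendImmediately is false:

const request = require('request')

const base = 'https://cloud.mongodb.com/api/atlas/v1.0'
const auth = {
  user: process.env.ATLAS_USERNAME,
  pass: process.env.ATLAS_APIKEY,
  sendImmediately: false // required for digest authentication
}

function get(path, cb) {
  request({ url: base + path, auth, json: true }, (err, res, body) => cb(err, body))
}

// List each project (group), then the clusters inside it.
get('/groups', (err, projects) => {
  if (err) throw err
  projects.results.forEach(project => {
    get(`/groups/${project.id}/clusters`, (err, clusters) => {
      if (err) throw err
      clusters.results.forEach(cluster =>
        console.log(`${project.name}\t${cluster.name}`))
    })
  })
})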

Give it a spin. There is a Pipfile.lock for pipenv users.

Time Series Data and MongoDB: Part 3 – Querying, Analyzing, and Presenting Time-Series Data

Robert Walters
September 19, 2018
Technical

This blog series seeks to provide best practices as you build out your time-series application on MongoDB. In this post, part three, we will cover how to query, analyze, and present time-series data stored in MongoDB.
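
As a small preview of the querying side (hypothetical collection and field names, not code from the post), this mongo-shell aggregation buckets the last 24 hours of readings by hour and averages them:

// Average a measurement per hour over the last 24 hours.
db.measurements.aggregate([
  { $match: { ts: { $gte: new Date(Date.now() - 24 * 60 * 60 * 1000) } } },
  { $group: {
      _id: { $dateToString: { format: '%Y-%m-%dT%H:00', date: '$ts' } },
      avgValue: { $avg: '$value' },
      samples: { $sum: 1 }
  } },
  { $sort: { _id: 1 } }
])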

Building a REST API with MongoDB Stitch

Andrew Morgan
September 19, 2018
Technical, Cloud

MongoDB Stitch QueryAnywhere often removes the need to create REST APIs, but when you do need one, Stitch webhooks let you create it in minutes.
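
For context, a Stitch incoming webhook is just a hosted function. A minimal sketch might look like this (the service name is Stitch's default for a linked Atlas cluster; the database and collection names are illustrative), with Stitch serializing the returned value as the HTTP response:

// A Stitch HTTP service webhook: Stitch invokes the exported
// function for each incoming request.
exports = function(payload) {
  return context.services
    .get('mongodb-atlas')   // default linked-cluster service name
    .db('store')
    .collection('items')
    .find({})
    .limit(10)
    .toArray()              // Stitch resolves the promise into the response
}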

MongoDB 4.0: Non-Blocking Secondary Reads

Mat Keep
September 19, 2018
Technical, MongoDB 4.0

Many MongoDB users scale read performance by distributing their query load across secondary replicas. With the MongoDB 4.0 release, reads are no longer blocked while oplog entries are applied. Here's how.
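
Routing reads to secondaries is a driver-side choice. As a minimal sketch (hostnames and names are placeholders), a Node.js client can opt in through the connection string:

const { MongoClient } = require('mongodb')

// readPreference=secondary sends this client's queries to secondary
// members; on 4.0 those reads no longer wait for oplog batches to apply.
const uri =
  'mongodb://host1,host2,host3/test?replicaSet=rs0&readPreference=secondary'

MongoClient.connect(uri)
  .then(client => client.db('test').collection('orders').find().limit(5).toArray())
  .then(docs => console.log(docs))
  .catch(console.error)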

Handling Files using MongoDB Stitch and AWS S3

Aydrian Howard
September 18, 2018
mongodb

MongoDB is the best way to work with data. As developers, we are faced with design decisions about data storage. For small pieces of data, it’s often an easy decision. Storing all of Shakespeare’s works (38 plays, 154 sonnets, and assorted poems) takes up 5.6MB of space in plain text. That’s simple to handle in MongoDB. What happens, however, when we want to include rich information with images, audio, and video? We can certainly store that information inside the database, but another approach is to leverage cloud data storage. Services such as AWS S3 allow you to store and retrieve any amount of data in buckets, and you can keep references to it in your MongoDB database.

With a built-in AWS Service, MongoDB Stitch provides the means to easily update and track files uploaded to an S3 bucket without having to write any backend code. In a recent Stitchcraft live coding session on my Twitch channel, I demonstrated how to upload a file to S3 and record it in a collection directly from a React.js application. After I added an AWS Service (which just required putting in IAM credentials) to my Stitch application and set up my S3 Bucket, I only needed to add the following code to my React.js application to handle uploading my file from a file input control:


handleFileUpload(file) {
  if (!file) {
    return
  }

  const key = `${this.client.auth.user.id}-${file.name}`
  const bucket = 'stitchcraft-picstream'
  const url = `http://${bucket}.s3.amazonaws.com/${encodeURIComponent(key)}`

  return convertImageToBSONBinaryObject(file)
    .then(result => {
      // AWS S3 Request
      const args = {
        ACL: 'public-read',
        Bucket: bucket,
        ContentType: file.type,
        Key: key,
        Body: result
      }

      const request = new AwsRequest.Builder()
        .withService('s3')
        .withAction('PutObject')
        .withRegion('us-east-1')
        .withArgs(args)
        .build()

      return this.aws.execute(request)
    })
    .then(result => {
      // MongoDB Request
      const picstream = this.mongodb.db('data').collection('picstream')
      return picstream.insertOne({
        owner_id: this.client.auth.user.id,
        url,
        file: {
          name: file.name,
          type: file.type
        },
        ETag: result.ETag,
        ts: new Date()
      })
    })
    .then(result => {
      // Update UI
      this.getEntries()
    })
    .catch(console.error)
}

To watch me put it all together, check out the recording of the Stitchcraft live coding session; the link to the GitHub repo is in the description. Be sure to follow me on Twitch and tune in for future Stitchcraft live coding sessions.

-Aydrian Howard
@aydrianh
Developer Advocate
NYC

Leading digital cryptocurrency exchange cuts developer time by two-thirds and overcomes scaling challenges with MongoDB Atlas

Cryptocurrency investing is a wild ride. And while many have contemplated the lucrative enterprise of building an exchange, few have the technical know-how, robust engineering, and nerves of steel to succeed. Discidium Internet Labs decided it qualified and launched Koinex, a multi-cryptocurrency trading platform, in India in August 2017. By the end of the year, it was the largest digital asset exchange by volume in the country.

“India loves cryptocurrencies like Bitcoin and Ripple,” says Rakesh Yadav, Co-Founder & CTO of Koinex. “We wanted to be the first local exchange to operate in accordance with global best practices. But we needed to provide a great user experience fast with a small development team. Transaction speed is one thing, but developer bandwidth is the real limiting factor.”

Those who follow the cryptocurrency markets will know that challenges come fast and furious, with huge swings in prices and trading volumes driven by unpredictable developments, all in an environment of rapidly changing regulations. For an exchange with the scope and ambition of Koinex – it currently offers trading in more than 50 pairs of cryptocurrencies – that leads to a lot of exposure to market volatility.

For example, Rakesh says, three months after the launch, Koinex saw a huge spike in Ripple (XRP) transactions. “It was 50 times the volume we’d seen before,” he says. A group of Japanese credit card companies had just announced Ripple support, giving it a huge increase in trustworthiness. The trading volumes ramped up and stayed up. But the PostgreSQL deployment (a tabular database) underpinning the Koinex platform couldn’t keep pace with surging demand.

“Everything was stored in PostgreSQL, and it wasn’t keeping up. We had overwhelming growth in data, with read and write times slowing because of large indexes, and CPUs spiking. Moving our deployment to Aurora RDS gave us a twofold improvement. But it wasn’t enough, as we could not scale beyond a single instance for writes. We were seeing just one thousand transactions per second, and we wanted to aim for 100,000.” If one spike in one cryptocurrency could cause such problems, what would a really busy market look like? Time to aim high.

“We decided to move the 80 percent of data that needed real-time responses to MongoDB’s fully managed database as a service, Atlas, and run it all on AWS.”

The MongoDB Atlas experience

The move started in January as part of the development of a new trading engine and, as Ankush Sharma, Senior Platform Engineer at Koinex, explains, MongoDB Atlas looked like a good fit for a number of reasons. “It had sharding out of the box, which we saw as essential, as this gave us the ability to distribute write loads across multiple servers without any application changes at all. Atlas also meant fewer code changes, less frequent resizing or cluster changes, and as little operational input from us as possible.”

Other aspects of the database seemed promising. “Its flexible data model made it a great fit for blockchain RPC-based communications as it meant we could handle any cryptocurrency regardless of its data structure. MongoDB Atlas is fully managed so it’s zero DevOps resources to run, and it’s got an easy learning curve.”

That last aspect was as important as the technical suitability. “Allocating developer bandwidth is completely crucial,” says Ankush. “If we’d stuck with Postgres, creating the new trading engine would have taken three to four months. That wouldn’t have been survivable. With MongoDB we did it in 30 to 40 days.” And although he initially wasn’t sure MongoDB Atlas would be a long-term solution, its performance convinced him otherwise.

“It scaled out as we needed, and it scaled back so gracefully. There are times when the market is slower, so it lets us track costs to market liquidity. It’s working really well for us.”

It’s continued to free up developer bandwidth, too. “We started off with just the one product on MongoDB, but we have eight or nine on it now. We wouldn’t have been able to concentrate on the mobile app, or provide historical data on demand to traders, if our DevOps team didn’t find the database so easy to work with and so rich in features.”

And long lead times on new products aren’t an option in the cryptocurrency market. Over just 17 days in July, Koinex built out and launched a new service called Loop, a novel peer-to-peer digital token exchange system designed to deal with controversial regulatory moves by the Indian central bank. “Digital currencies are complex. Policies and technologies are changing all the time, so our business often depends on being able to build new features quickly, sometimes in just a few weeks. Not only does it have to be done fast, but it has to be tested, robust, and at scale. It’s a financial platform: you can’t compromise. Time we don’t spend managing the database is time we can spend on new features and products, and that’s a huge payback.”

MongoDB also has the right security features to fit in with a financial exchange, says Ankush: “We have solid protocols limiting who in the company can see what data, with strong access controls, encryption and proper separation of production and development environments. We look to global best practices, and these are all implemented by default in the MongoDB Atlas service.”

For a company barely a year old, Koinex has big plans for the future. “Koinex has been leading the digital asset revolution in India,” says Ankush. “We give users a world-class experience. The long-term plan is to have multiple digital asset management products available, not just cryptocurrencies. Whole new ecosystems are going to develop. With MongoDB Atlas, we’re going to be able to do all the things that other top exchanges do as well as add in our own extras and features.”

New to MongoDB Atlas — Data Explorer Now Available for All Cluster Sizes

Ken W. Alger
September 18, 2018
Release Notes, Cloud

At the recent MongoDB.local Chicago event, MongoDB CTO and Co-Founder Eliot Horowitz made an exciting announcement about the Data Explorer feature of MongoDB Atlas: it is now available for all Atlas cluster sizes, including the free tier.

The easiest way to explore your data

What is the Data Explorer? This powerful feature allows you to query, explore, and take action on your data residing inside MongoDB Atlas (with full CRUD functionality) right from your web browser. Of course, we've thought about security: Data Explorer access, including whether or not a user can modify documents, is tied to her role within the Atlas project. Actions performed via the Data Explorer are also logged in the Atlas alerting window.

Bringing this feature to the "shared" Atlas cluster sizes — the free M0s, M2s, and M5s — allows for even faster development. You can now perform actions on your data while developing your application, which is where these shared cluster sizes really shine.

Check out this short video to see the Data Explorer in action.

Atlas is the easiest and fastest way to get started with MongoDB. Deploy a free cluster in minutes.