GIANT Stories at MongoDB

Transforming Credit Management with Credisense and MongoDB

Credit Management is a grind -- clunky, time-consuming, and laden with risk.

It takes millions of dollars to capture consumer attention and nurture prospects through the sales cycle. And then comes the arduous credit assessment, throwing a wrench into the promise of a seamless digital customer experience. In fact, up to 90% of new bank customer applications drop out due to slow onboarding 1.

Even once a deal closes, there is still plenty of work to do. The next hurdle is invoice collections, with default and delinquency rates averaging anywhere between 0.2-54.5% internationally 2.

In today’s digitally-driven market, consumers are demanding quicker turnaround times and instant approvals. This makes simplifying and streamlining the entire credit management process more critical than ever.

As MongoDB’s OEM business continues to rapidly expand, it’s a personal priority to work with organizations that are solving serious market needs with the most innovative technology. Credisense, our newest OEM partner, and their MongoDB-powered, end-to-end origination and credit decisioning solution is a phenomenal example of this.

They’re already making a splash worldwide. For example, CTOS Data Systems (Malaysia’s largest credit reporting agency) is enabling banks, utilities, non-bank credit issuers, fintechs, and lenders in the P2P lending space to make real-time credit decisions using the Credisense platform, enabling things like instant loan approvals for credit cards and auto loans!

I had the opportunity to discuss the Credisense platform and the data technology behind it with Richard Brooks, Co-founder and Director.

Tell us a bit about yourself and the genesis of the company?

Our three co-founders have different, but complementary, backgrounds. I have worked for bureau and data companies my entire career and have been involved extensively in the automation side. Our second co-founder and CEO, Sean Hywood, is a software expert who has built up several software companies over his career, focusing on low-code technologies. We combined our knowledge with the technical expertise of our third co-founder and CTO, Waylon Turney-Mizen, with the vision of providing enterprise-grade functionality to organizations of all sizes. The aim is to allow all businesses to make smarter decisions, faster.

For anyone that isn’t familiar with Credisense yet, could you describe why you set out to build this and the problem it’s solving?

Credit is a highly regulated, complex, often manual, and costly process. McKinsey 3 rightly points out that there are currently five key pressures on credit providers:

  1. Changing customer expectations, specifically digital and the customer experience
  2. Tighter regulatory controls such as AML/CFT and GDPR
  3. Data management, increasing reliance on clean data for analysis and decisions
  4. Market disruptors such as P2P lenders and digital banks
  5. Cost pressures driving down returns

There are some sobering stats that show how important these are, from over $200 billion 4 of regulatory fines in the US alone since the GFC to the fact that traditional lenders have lost over 30 percent of personal loan market share 5 to agile financial technology companies. All of these add up to some serious issues for businesses, some that even threaten their very existence. Our aim when creating Credisense was to tackle these issues, both by assisting traditional corporates to embrace a digital strategy and by providing this same technology and expertise to smaller businesses so they can compete on a level playing field.

How would you describe the platform and the unique advantages that Credisense gives its customers?

Our platform is born in the cloud and offers a “no-code” build capability allowing organizations to build out the functionality internally and grow the solution with their business. We have a unique graphical interface, and this coupled with the “no-code” technology allows business people -- not IT -- to build, own and manage the system.

The platform itself revolves around the decision and scoring engine which powers the advanced assessment and risk decisions for organizations.

Our MongoDB backend means we can confidently scale to handle millions of credit applications and still support real-time workflow and decision making in seconds.

How did you land on MongoDB to help you solve these challenges?

We needed a database to support a minimum of 100,000 transactions a day across a cloud platform. There are only a handful of NoSQL databases that can support that level of transactions with the ability to scale further if required. MongoDB ticked all the boxes. Add to that MongoDB’s great documentation, security, tooling, support, and APIs, and it made MongoDB the right choice for our development teams.

What advice would you give someone who is considering using MongoDB for their next project?

MongoDB offered us extensibility to be on-premises, which is something other cloud database platforms would not offer. It made sense to go with a database platform that offered both so that we could in turn offer this to our customers that require data to be held within their own environments for security reasons. Also, reach out and talk to MongoDB early in your process. The support they give you up front will help ensure you’re making the best decisions.

How are you securing MongoDB?

We utilize MongoDB Atlas for our Continuous Integration and Testing environment and will have a managed service offering. This is secured with an IP whitelist, secure passwords, and SSL connections, which was easy to set up with Atlas and Atlas Professional.

We also have a customer-managed deployment secured out of the box behind a VPN that connects the app server to the MongoDB server. It also utilizes a strong username/password combination with minimum length and character requirements.

Through our OEM arrangement with MongoDB, we package MongoDB Enterprise as part of our product to ensure our customers have highly secure and enterprise-grade solutions.

Where have you deployed MongoDB? On-premises, in the cloud, via MongoDB Atlas? What tools are you using to deploy, monitor MongoDB?

All! The requirement for extensibility across platforms without any changes to the code was one of the key reasons for MongoDB selection.

MongoDB Atlas removes operational overhead and mitigates risk by automating many otherwise manual processes (operating system configuration, upgrades, backups, and restores). This means we can focus on ensuring our customers have the robust platform they need to provide instant loan approvals.

We also have a production environment on-premises. Soon, we will introduce the use of MongoDB Cloud Manager for monitoring and alerts of on-premises production environments. With over 100 metrics and proactive alerting, we’ll be able to catch issues before they arise.

References:

  1. https://thefinancialbrand.com/66143/7-steps-to-improved-customer-onboarding/
  2. https://data.worldbank.org/indicator/FB.AST.NPER.ZS?year_high_desc=true
  3. https://www.mckinsey.com/business-functions/risk/our-insights/the-value-in-digitally-transforming-credit-risk-management
  4. https://www.mckinsey.com/business-functions/risk/our-insights/the-value-in-digitally-transforming-credit-risk-management
  5. https://qz.com/1334899/personal-loans-are-surging-in-the-us-fueled-by-fintech-startups/

Go Migration Guide

Scott L'Hommedieu

Migrating from community drivers to the official MongoDB Go Driver

Introduction

MongoDB has released an official driver for the Go language that is appropriate for all supported database operations. This driver implements the Core and API specs, and supports MongoDB 3.2 and above.

Many developers have been using community-contributed Go drivers such as mgo and its variants or forks. Developers interested in migrating to the official MongoDB Go Driver have many considerations when approaching a migration of their application code. This migration guide is intended to provide guidance on some commonly found differences in client code when using the MongoDB Go Driver, and to present potential actions to take during a migration.

What is the MongoDB Go Driver

The MongoDB Go Driver is a pure Go package that can be included in client applications using Go package management. It allows applications to connect to MongoDB and execute commands and operations.

In order to connect to MongoDB and execute commands or operations against it, a client must communicate using the MongoDB Wire Protocol. The MongoDB Go Driver implements the MongoDB Driver Specs, which indicate how a driver should present the wire protocol to a user and what the user can expect as a response from the driver. There are drivers for many languages that all implement and adhere to this common set of specs, providing developers idiomatic access to connect and work with MongoDB in their chosen language.

Why migrate to the MongoDB Go Driver

While several options exist for community supported drivers, the MongoDB Go Driver is the only Go Driver supported by MongoDB engineers. The driver is released with Spec Compliant APIs and functionality, as well as support for the latest MongoDB functionality such as multi-document ACID transactions and logical sessions. The MongoDB Go Driver includes a BSON library that is flexible, easy to use, and performant for building, parsing, and iterating BSON data.

Where is the MongoDB Go Driver

The MongoDB Go Driver is available from Github and the documentation is available from the MongoDB Documentation site as well as GoDoc.

How to use the MongoDB Go Driver

If you are not migrating from mgo or another community driver you can get started by reviewing the docs and examples, or by reading this tutorial.

If you are migrating from mgo or another community driver, read further to understand some of the considerations when migrating.

What is involved in migrating from a community driver to the MongoDB Go Driver

Users who have previously connected to MongoDB from applications written in Go have used various client libraries/packages to enable database connectivity. In order to move to the MongoDB Go Driver, the client application must depend on different packages and change certain programmatic calls to those packages.

Advice for Migration

Assessing size and impact of changes

Locating code points for migration

Start by establishing the locations of all code that relies on your previous Go/MongoDB driver. For example, locate all lines in the code which match mgo, bson, txn or mongo and then scan forward from those locations for further use of the driver. This code will need to be changed as part of the migration.

Using Extended JSON

Consider moving code that uses mgo’s Extended JSON to the MongoDB Go Driver’s Extended JSON implementation (the *ExtJSON* functions).

Tests

Tests may require changes to new packages and functions. Once tests are migrated to new APIs, results may help to indicate where further changes beyond driver API calls are required to ensure that application behavior and performance are not affected by the migration.

APIs and functions in community libraries that are not supported or have modified interfaces or behaviors in MongoDB Go Driver

General Changes

Support for MongoDB server versions prior to 3.2 is not available in the MongoDB Go Driver during early BETA releases. Consider delaying migration until BETA 2 is released.

MongoDB Go Driver connection strings are the most direct and convenient way to configure the client connection. Consider client Options for more complex configuration needs.

BSON Changes

Moving from the mgo bson package to the MongoDB Go Driver bson package will require type changes, and some types have moved from the bson package to the primitive package. bson.D and bson.M are unchanged, but most other types require some further consideration during migration.

Changing from mgo.Session to mongo.Client

mgo.Session

  • session.Run() runs a command on the admin database, and optionally takes a string for {: 1} style commands. Consider adding common Run() and RunString() helpers on a SessionProvider.
  • mongo.Client doesn't support changing the socket timeout after construction (as in SetSocketTimeout(0)). Customize the socket timeout via a URI option instead.
  • SetPrefetch is an mgo cursor optimization not available in the MongoDB Go driver; instead, the future implementation of exhaust support for OP_MSG should be used when available.
  • SetSafe is irrelevant, as the MongoDB Go driver defaults to safe operation.
  • SetMode in mgo can be used for emulating a direct connection; this will need to be replaced with configuration for a direct connection instead.
  • Copy is used for session cloning; the MongoDB Go driver handles this internally.
  • session.Create() has no equivalent in the MongoDB Go driver, but is trivially replaceable with RunCommand.
  • session.Close() should not be called in the MongoDB Go client unless you want to close/disconnect the shared mongo.Client.

One of the most common usage patterns in mgo is `session.DB("foo").C("bar")`. This can be changed easily with text-editing tools and regular expressions, such as this substitution in Vim:

    
        :%s/\vsession\.DB\(([^\)]+)\).C\(([^\)]+)\)/client.Database(\1).Collection(\2)/cg

CRUD changes

  • The Repair() method is not available.
  • mgo's cursor iterator has an All method to populate a slice of results. With the MongoDB Go Driver, loop over the cursor and decode each result instead.
  • mgo custom dialer "Connection" types were unnecessary and have been removed, along with build flags for 'ssl' and 'sasl' -- the MongoDB Go driver supports these directly.
  • mgo's API for Sort takes strings with a possible '-' prefix. These need to be converted to a proper sort document for use with the MongoDB Go driver.
  • mgo's Index model is very different from the Go driver's implementation, so you may need to change a lot of code that expects certain options fields into lookups, or copy the type and inflate into it.
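The sort-string conversion above can be handled by a small helper. The following is a hypothetical, standard-library-only sketch (parseSortSpec and sortField are names invented for illustration); in real code the (field, direction) pairs would become bson.D elements passed to the driver's sort option:

```go
package main

import (
	"fmt"
	"strings"
)

// sortField pairs a field name with a sort direction
// (1 ascending, -1 descending), mirroring a bson.D element.
type sortField struct {
	Key       string
	Direction int
}

// parseSortSpec converts mgo-style sort strings such as "-age"
// into ordered (field, direction) pairs.
func parseSortSpec(specs ...string) []sortField {
	fields := make([]sortField, 0, len(specs))
	for _, s := range specs {
		if strings.HasPrefix(s, "-") {
			fields = append(fields, sortField{Key: s[1:], Direction: -1})
		} else {
			fields = append(fields, sortField{Key: s, Direction: 1})
		}
	}
	return fields
}

func main() {
	fmt.Println(parseSortSpec("-age", "name"))
	// [{age -1} {name 1}]
}
```

Note that mgo also accepts an optional '+' prefix and special fields like "$natural"; a production helper would need to handle those cases too.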

Config Changes

  • mgo allows setting a Kerberos "ServiceHost" -- a host name for SASL auth that is different from the connection host name. The MongoDB Go driver has no equivalent (nor does the auth spec).
  • mgo supports a deferred Find query, later calling Count, Iter, or other modifiers on it as needed. This concept doesn't exist in the MongoDB Go driver and needs to be emulated.
    • Consider a common library to hold filter and hint options and emulate Iter() and Count()
    • Examples:

      // mgo.Collection: Find(...).Hint(...) translation
      cursor = coll.Find(filter).Hint(doc).Iter()

      // mongo.Collection
      cursor, err = coll.Find(nil, filter, findopt.Hint(doc))

      // mgo.Collection: Find(...).One(...) translation
      err = coll.Find(filter).One(&doc)

      // mongo.Collection
      result = coll.FindOne(nil, filter)
      err = result.Decode(&doc)
  • Cursor iteration (N.B. tools often don't call iter.Close()):

    // mgo.Iter
    var result resultType
    for iter.Next(&result) {
        // ... do stuff with result ...
    }
    if err := iter.Close(); err != nil {
        return err
    }

    // mongo.Cursor (requires non-nil context until GODRIVER-579 is done)
    ctx := context.Background()
    defer cur.Close(ctx)
    for cur.Next(ctx) {
        result := resultType{}
        if err := cur.Decode(&result); err != nil {
            // ...
        }
        // ... do stuff with result ...
    }
    if err := cur.Err(); err != nil {
        // ...
    }

Ongoing Development and Feedback

As a beta release, the MongoDB Go Driver is under active development and as such, some features and functions available in community drivers or previous driver versions may not be available. Please file requests for missing features or any other issues you discover as part of your migration. You can also find support and guidance by joining and posting to the MongoDB Go Driver mailing list.

Official MongoDB Go Driver Now Available for Beta Testing

Mat Keep

We’re pleased to announce that the official MongoDB Go driver is moving into beta, ready for the wider Go and MongoDB community to put it to the test – we think you’ll really like it.

In this blog, we will discuss:

  1. The growing importance of Go
  2. How we use it today at MongoDB
  3. Our rationale for building a new driver
  4. Resources to get you started with it.

Since its initial release by Google in 2009, Go has become an increasingly important programming language, both within the developer community, and internally here at MongoDB. The Go language broke into Redmonk’s top 20 programming language rankings in January 2015, climbing to 14th spot by June 2018, overtaking a variety of venerable programming languages that have been around 2 to 3 times longer. Its growing popularity can be attributed to its lightweight design, ease of use, efficient memory management, and concurrency – all of which make it well suited to developing apps in modern microservices patterns, and increasingly for data science tasks as well.

Back in January 2018, we announced a plan to build an officially supported MongoDB driver for the Go language to supersede the community-supported mgo driver. This project was driven by the fact that we, along with many other companies, use Go to build essential parts of our software. Our Ops Manager agents, MongoDB Atlas cloud service, the MongoDB Stitch serverless platform, our command line tools, and Evergreen continuous integration system all rely on Go. So we really wanted to bring the Go language into the officially supported fold of languages.

It was also important to sync Go support with our other official MongoDB drivers, both functionally and structurally. We want the community to have the best open source tools at their disposal so they can be even more effective with MongoDB. This is because MongoDB is constantly evolving and there are many new features arriving which can be hard for community projects to track. Like all MongoDB-developed drivers, the new Go driver is idiomatic to the Go programming language, providing fast, easy, and natural app development, and will be fully supported by MongoDB engineers. It exposes all of the rich query, indexing, and aggregation features of the MongoDB API, along with ACID transactions, durability, and consistency controls, all fully integrated with MongoDB’s authentication and encryption mechanisms.

The official MongoDB Go driver entered alpha when we made our announcement back in January, and over the past months our developers have worked with the Go community to refine and perfect an up-to-date, elegant, idiomatic driver. With that alpha phase over, it’s time for your feedback on what the developers have created. Here are the key resources to get you started:

We look forward to updating you all on the progress of the Go driver as we approach GA, and thank you in advance for any feedback you provide.

MongoDB Go Driver Tutorial

With the official MongoDB Go Driver recently moving to beta, it's now regarded as feature complete and ready for a wider audience to start using. This tutorial will help you get started with the MongoDB Go Driver. You will create a simple program and learn how to:

  • Install the MongoDB Go Driver
  • Connect to MongoDB using the Go Driver
  • Use BSON objects in Go
  • Send CRUD operations to MongoDB

You can view the complete code for this tutorial on this GitHub repository. In order to follow along, you will need a MongoDB database to which you can connect. You can use a MongoDB database running locally, or easily create a free 500 MB database using MongoDB Atlas.

Install the MongoDB Go Driver

The MongoDB Go Driver is made up of several packages. If you are just using go get, you can install the driver using:

go get github.com/mongodb/mongo-go-driver

The output of this may look like a warning stating something like package github.com/mongodb/mongo-go-driver: no Go files in (...). This is expected output.

If you are using the dep package manager, you can install the main mongo package as well as the bson and mongo/options package using this command:

dep ensure --add github.com/mongodb/mongo-go-driver/mongo \
github.com/mongodb/mongo-go-driver/bson \
github.com/mongodb/mongo-go-driver/mongo/options

Create the wireframe

Create the file main.go and import the bson, mongo, and mongo/options packages:

package main

import (
    "context"
    "fmt"
    "log"

    "github.com/mongodb/mongo-go-driver/bson"
    "github.com/mongodb/mongo-go-driver/mongo"
    "github.com/mongodb/mongo-go-driver/mongo/options"
)

// You will be using this Trainer type later in the program
type Trainer struct {
    Name string
    Age  int
    City string
}

func main() {
    // Rest of the code will go here
}

This code also imports some standard libraries and defines a Trainer type. You will be using these later in the tutorial.

Connect to MongoDB using the Go Driver

Once the MongoDB Go Driver has been imported, you can connect to a MongoDB deployment using the mongo.Connect() function. You must pass a context and connection string to mongo.Connect(). Optionally, you can also pass in an options.ClientOptions object as a third argument to configure driver settings such as write concerns, socket timeouts, and more. The options package documentation has more information about what client options are available.

Add this code in the main function:

client, err := mongo.Connect(context.TODO(), "mongodb://localhost:27017")

if err != nil {
    log.Fatal(err)
}

// Check the connection
err = client.Ping(context.TODO(), nil)

if err != nil {
    log.Fatal(err)
}

fmt.Println("Connected to MongoDB!")

Once you have connected, you can now get a handle for the trainers collection in the test database by adding the following line of code at the end of the main function:

collection := client.Database("test").Collection("trainers")

The following code will use this collection handle to query the trainers collection.

It is best practice to keep a client that is connected to MongoDB around so that the application can make use of connection pooling - you don't want to open and close a connection for each query. However, if your application no longer requires a connection, the connection can be closed with client.Disconnect() like so:

err = client.Disconnect(context.TODO())

if err != nil {
    log.Fatal(err)
}
fmt.Println("Connection to MongoDB closed.")

Run the code (go run main.go) to test that your program can successfully connect to your MongoDB server. Go will complain about the unused bson and mongo/options packages and the unused collection variable, since we haven't done anything with them yet. You have to comment these out until they are used to make your program run and test the connection.

Use BSON Objects in Go

JSON documents in MongoDB are stored in a binary representation called BSON (Binary-encoded JSON). Unlike other databases that store JSON data as simple strings and numbers, the BSON encoding extends the JSON representation to include additional types such as int, long, date, floating point, and decimal128. This makes it much easier for applications to reliably process, sort, and compare data. The Go Driver has two families of types for representing BSON data: The D types and the Raw types.

The D family of types is used to concisely build BSON objects using native Go types. This can be particularly useful for constructing commands passed to MongoDB. The D family consists of four types:

  • D: A BSON document. This type should be used in situations where order matters, such as MongoDB commands.
  • M: An unordered map. It is the same as D, except it does not preserve order.
  • A: A BSON array.
  • E: A single element inside a D.

Here is an example of a filter document built using D types which may be used to find documents where the name field matches either Alice or Bob:

bson.D{{
    "name",
    bson.D{{
        "$in",
        bson.A{"Alice", "Bob"},
    }},
}}

The Raw family of types is used for validating a slice of bytes. You can also retrieve single elements from Raw types using the Lookup() method. This is useful if you don't want the overhead of unmarshalling the BSON into another type. This tutorial will just use the D family of types.

CRUD Operations

Once you have connected to the database, it's time to start adding and manipulating some data. The Collection type has several methods which allow you to send queries to the database.

Insert documents

First, create some new Trainer structs to insert into the database:

ash := Trainer{"Ash", 10, "Pallet Town"}
misty := Trainer{"Misty", 10, "Cerulean City"}
brock := Trainer{"Brock", 15, "Pewter City"}

To insert a single document, use the collection.InsertOne() method:

insertResult, err := collection.InsertOne(context.TODO(), ash)
if err != nil {
    log.Fatal(err)
}

fmt.Println("Inserted a single document: ", insertResult.InsertedID)

To insert multiple documents at a time, the collection.InsertMany() method will take a slice of objects:

trainers := []interface{}{misty, brock}

insertManyResult, err := collection.InsertMany(context.TODO(), trainers)
if err != nil {
    log.Fatal(err)
}

fmt.Println("Inserted multiple documents: ", insertManyResult.InsertedIDs)

Update documents

The collection.UpdateOne() method allows you to update a single document. It requires a filter document to match documents in the database and an update document to describe the update operation. You can build these using bson.D types:

filter := bson.D{{"name", "Ash"}}

update := bson.D{
    {"$inc", bson.D{
        {"age", 1},
    }},
}

This code will then match the document where the name is Ash and will increment Ash's age by 1 - happy birthday Ash!

updateResult, err := collection.UpdateOne(context.TODO(), filter, update)
if err != nil {
    log.Fatal(err)
}

fmt.Printf("Matched %v documents and updated %v documents.\n", updateResult.MatchedCount, updateResult.ModifiedCount)

Find documents

To find a document, you will need a filter document as well as a pointer to a value into which the result can be decoded. To find a single document, use collection.FindOne(). This method returns a single result which can be decoded into a value. You'll use the same filter variable you used in the update query to match a document where the name is Ash.

// create a value into which the result can be decoded
var result Trainer

err = collection.FindOne(context.TODO(), filter).Decode(&result)
if err != nil {
    log.Fatal(err)
}

fmt.Printf("Found a single document: %+v\n", result)

To find multiple documents, use collection.Find(). This method returns a Cursor. A Cursor provides a stream of documents through which you can iterate and decode one at a time. Once a Cursor has been exhausted, you should close the Cursor. Here you'll also set some options on the operation using the options package. Specifically, you'll set a limit so only 2 documents are returned.

// Pass these options to the Find method
options := options.Find()
options.SetLimit(2)

// Here's an array in which you can store the decoded documents
var results []*Trainer

// Passing nil as the filter matches all documents in the collection
cur, err := collection.Find(context.TODO(), nil, options)
if err != nil {
    log.Fatal(err)
}

// Finding multiple documents returns a cursor
// Iterating through the cursor allows us to decode documents one at a time
for cur.Next(context.TODO()) {

    // create a value into which the single document can be decoded
    var elem Trainer
    err := cur.Decode(&elem)
    if err != nil {
        log.Fatal(err)
    }

    results = append(results, &elem)
}

if err := cur.Err(); err != nil {
    log.Fatal(err)
}

// Close the cursor once finished
cur.Close(context.TODO())

fmt.Printf("Found multiple documents (array of pointers): %+v\n", results)

Delete Documents

Finally, you can delete documents using collection.DeleteOne() or collection.DeleteMany(). Here you pass nil as the filter argument, which will match all documents in the collection. You could also use collection.Drop() to delete an entire collection.

deleteResult, err := collection.DeleteMany(context.TODO(), nil)
if err != nil {
    log.Fatal(err)
}
fmt.Printf("Deleted %v documents in the trainers collection\n", deleteResult.DeletedCount)

Next steps

You can view the final code from this tutorial in this GitHub repository. Documentation for the MongoDB Go Driver is available on GoDoc. You may be particularly interested in the documentation about using aggregations or transactions.

If you have any questions, please get in touch in the mongo-go-driver Google Group. Please file any bug reports on the Go project in the MongoDB JIRA. We would love your feedback on the Go Driver, so please get in touch with us to let us know your thoughts.

MongoDB Stitch Mobile Sync – The AWS re:Invent Stitch Rover Demo

MongoDB Stitch Mobile Sync powers the MongoDB rover by synchronizing commands between MongoDB Atlas in the cloud and MongoDB Mobile running on a Raspberry Pi.

The Top 12 BSON Data Types you won't find in JSON

Dj Walker-Morgan

People call MongoDB a JSON database, but MongoDB actually uses BSON, a binary version of JSON. One of the big things about BSON is that it has many more data types than JSON, which means you can query things with a lot more precision. How many data types? Let's run down this list of data types you won't find in JSON….

There are two BSON data types that didn't make the list: Boolean and Null. Boolean maps to two JSON values, true and false, so you will find it in JSON. Null is also a value in JSON, but in BSON it turns into a type.

12: MinKey and 11: MaxKey: Although these are BSON data types, they exist only to represent the extremes of keys - the minimum key value and the maximum key value. Why? So you can write a query as a document that describes a range of keys that starts from the smallest key or ends with the largest key.

10: Binary Data: Also known as BinData, this BSON data type is for arrays of bytes because representing bit arrays efficiently is important when you're storing and searching data.

9: Timestamp: It's a type for representing time, but it's mostly for internal use. If you want to store dates and times, you want...

8: Date: This is actually a date and time, stored as a signed 64-bit integer of milliseconds since the Unix epoch, in UTC (Coordinated Universal Time).

7: ObjectID: Part of the magic of MongoDB, the ObjectID is like a UUID but smaller. MongoDB generates them to ensure documents are uniquely identified. It's actually 12 bytes in size, filled in with 4 bytes for seconds since the epoch, 5 bytes of randomness, and 3 bytes from an incrementing counter initialized to a random value.

6: Regular Expression: The handy pattern-matching power of regular expression strings is used so often that we thought it needed its own type to save on that "convert from string" step. This comes into its own when you are writing database objects with validation patterns or matching triggers.

5: JavaScript: Like regular expressions, JavaScript functions can be stored in BSON as their own type. For MongoDB applications, this is generally used for stowing away snippets of code to be run in the client application. It's really not a good idea to run the snippets in the server.

4: Double: JSON calls anything with numbers a Number. That leaves it up to implementations to figure out how to turn it into the nearest native data type. In BSON, Double is the default replacement for JSON's Number and that means we can have more specialized number types like...

3: Int: 32 bits of integer precision, for when you have small integer values and don't want to store them as a sequence of digits. Simple, precise, and when you run out of 32-bit integers, there's always...

2: Long: 64 bits of integer precision, for when you have bigger integer values and you really don't want to store them as a sequence of digits. Simple, precise, and when you want huge numbers with lots of floating point range, you can move up to the star of the type show...
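Long also matters on the client side: a JavaScript number is a Double under the hood, so it can only represent integers exactly up to 2^53 - 1, well short of Long's 64-bit range. A quick sketch of why drivers expose a dedicated Long type (or BigInt) rather than a plain number:

```javascript
// JavaScript numbers are doubles, exact only up to 2^53 - 1.
console.log(Number.MAX_SAFE_INTEGER); // 9007199254740991

// Beyond that, adjacent integers collapse together:
console.log(9007199254740992 === 9007199254740993); // true

// A full 64-bit BSON Long goes up to 2^63 - 1, which BigInt can hold exactly:
console.log(2n ** 63n - 1n); // 9223372036854775807n
```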

1: Decimal128: As the name says, it is 128 bits of high-precision decimal representation: 34 decimal digits of precision, with a maximum value of around 10^6145 and a minimum of, you may have guessed it, -10^6145. This is the IEEE 754-2008 128-bit decimal floating point number, for when you absolutely have to store huge, or tiny, numbers without losing decimal precision.

Oh, do note that you'll have to be prepared to convert Decimal128 values into your favorite language's big-decimal type, because few languages handle 128-bit decimals natively. You'll also want MongoDB 3.4 or later to enjoy our top BSON data type, but who doesn't want huge number support?

So, that's our top 12 BSON data types. If we missed your favorite or you've got your own ranking, let us know in the comments!

MongoDB Stitch QueryAnywhere – The AWS re:Invent Stitch Rover Demo

The MongoDB Stitch rover demonstrates MongoDB Stitch QueryAnywhere by recording commands in MongoDB Atlas directly from the web app frontend.

MongoDB Charts Beta, Now Available in Atlas

Earlier in the year, we announced the availability of MongoDB Charts Beta, the fastest and easiest way to build visualizations of MongoDB data. Today at MongoDB.local San Francisco, we are excited to announce that an update to the beta is now available and integrated into MongoDB Atlas, our hosted database as a service platform. This means that Atlas users can now visualize their data and share it with their team, without the need to install or maintain any servers or tools.

Getting started with MongoDB Charts in Atlas couldn’t be simpler. After logging into Atlas, select the Project with the clusters containing the data you want to visualize and click the Charts link in the left navigation bar. After a one-time step to activate Charts, you will be ready to start charting!

MongoDB Charts (Beta) inside MongoDB Atlas

If you’ve used MongoDB Charts before, the new Atlas-integrated version will be instantly familiar. The main difference is that you can easily add data sources from any Atlas clusters in your project without needing to enter a connection URI. You’re also freed from the burden of managing users separately: all Atlas Project members can access Charts with their existing Atlas credentials, provided they have the Data Access Read Only role or higher.

MongoDB Charts (Beta): New Data Source

We’ve also been busy adding some of the most requested features to the charting experience. Charts has always been great at handling MongoDB’s flexible schema, allowing you to build charts from document-based data that contains nested documents or arrays. In this latest release, we’ve added a number of options for chart authors to customize their charts, including changing axis titles, colors, date formats and more.

Sample chart with MongoDB Charts
Sample Line Chart

After you’ve created a few charts, you can arrange them on a dashboard to get all of the information you need at a glance. Dashboards can be kept private, shared with selected individuals, or with everyone in your project team.

Sample MongoDB Charts Dashboard
Sample Dashboard

If you’re not currently using Atlas, we haven’t forgotten about you. MongoDB Charts Beta is also still available to install into your own server environment, allowing you to visualize data from any MongoDB server. We’ll be refreshing the on-premises beta to include the same charting enhancements as seen in the new Atlas version over the coming weeks.

We hope you enjoy this update and that it helps you get the insight you need from your data. If you have any questions or feature requests, you can always send a note to the Charts team by clicking the support button on the bottom of every page.

Happy Charting!

MongoDB Stitch/Mobile Mars Rover Lands at AWS re:Invent

Powered by MongoDB Mobile, MongoDB Stitch, and AWS Kinesis, the MongoDB Rover debuted at AWS re:Invent. Hundreds of delegates stopped by our demo to see if they could navigate the rover around a treacherous alien landscape.

Boosting JavaScript: From MongoDB's shell to Node.js

Dj Walker-Morgan

Node.js, mongo

Moving a script from MongoDB’s JavaScript-powered shell to Node.js offers a chance to get to use an enormous range of tools and libraries. Find out how to do this with only a few extra lines of code and how to then optimize the resulting script.