GIANT Stories at MongoDB

Listing Your MongoDB Atlas Resources

If you want to use the MongoDB Atlas API to manage your clusters, one of the first things you will discover is that resource IDs are the keys to the kingdom. To use the API you will need an API key, and you will need to grant access to your program via the API whitelist.

You can set up your API keys and API whitelist on this screen.

Atlas Account Settings

Once they are set up you can use them to run the py-atlas-list.py script to get a list of all resources.

$ ./py-atlas-list.py -h
usage: py-atlas-list.py [-h] [--username USERNAME] [--apikey APIKEY]
                        [--org_id ORG_ID]

optional arguments:
  -h, --help           show this help message and exit
  --username USERNAME  MongoDB Atlas username
  --apikey APIKEY      MongoDB Atlas API key
  --org_id ORG_ID      specify an organization to limit what is listed
$

If you run this on the command line you will get output like the following:

Py Atlas List

The project and org IDs have been redacted for security purposes. As you can see, the Organization ID, Project IDs, and Cluster names are displayed. These will be required by other parts of the API.
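If you are curious about what the script does under the hood, here is a minimal sketch of the same idea using the Atlas API v1.0 directly with HTTP digest authentication (the credentials are placeholders and the field handling is simplified):

import requests  # pip install requests
from requests.auth import HTTPDigestAuth

BASE_URL = "https://cloud.mongodb.com/api/atlas/v1.0"
auth = HTTPDigestAuth("ATLAS_USERNAME", "ATLAS_API_KEY")  # placeholder credentials

# List the organizations visible to this API key
orgs = requests.get(BASE_URL + "/orgs", auth=auth).json()
for org in orgs["results"]:
    print("Organization: {} ({})".format(org["name"], org["id"]))
    # Each project (group) in the organization carries the project ID
    projects = requests.get(BASE_URL + "/orgs/{}/groups".format(org["id"]), auth=auth).json()
    for project in projects["results"]:
        print("  Project: {} ({})".format(project["name"], project["id"]))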

Give it a spin. There is a Pipfile.lock for pipenv users.

PyMongo Monday: PyMongo Create

Last time we showed you how to set up your environment.

In the next few episodes we will take you through the standard CRUD operators that every database is expected to support. In this episode we will focus on the Create in CRUD.

Create

Let's look at how we insert JSON documents into MongoDB.

First, let's start a single local instance of mongod using m.

$ m use stable
2018-08-28T14:58:06.674+0100 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
2018-08-28T14:58:06.689+0100 I CONTROL [initandlisten] MongoDB starting : pid=43658 port=27017 dbpath=/data/db 64-bit host=JD10Gen.local
2018-08-28T14:58:06.689+0100 I CONTROL [initandlisten] db version v4.0.2
2018-08-28T14:58:06.689+0100 I CONTROL [initandlisten] git version: fc1573ba18aee42f97a3bb13b67af7d837826b47
2018-08-28T14:58:06.689+0100 I CONTROL [initandlisten] allocator: syste

etc...

The mongod starts listening on port 27017 by default. As every MongoDB driver defaults to connecting on localhost:27017, we won't need to specify a connection string explicitly in these early examples.

Now, we want to work with the Python driver. These examples are using Python 3.6.5 but everything should work with versions as old as Python 2.7 without problems.

Unlike SQL databases, databases and collections in MongoDB only have to be named to be created. As we will see later this is a lazy creation process, and the database and corresponding collection are actually only created when a document is inserted.

$ python
Python 3.6.5 (v3.6.5:f59c0932b4, Mar 28 2018, 03:03:55)
[GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>>
>>> import pymongo
>>> client = pymongo.MongoClient()
>>> database = client[ "ep002" ]
>>> people_collection = database[ "people_collection" ]
>>> result=people_collection.insert_one({"name" : "Joe Drumgoole"})
>>> result.inserted_id
ObjectId('5b7d297cc718bc133212aa94')
>>> result.acknowledged
True
>>> people_collection.find_one()
{'_id': ObjectId('5b62e6f8c3b498fbfdc1c20c'), 'name': 'Joe Drumgoole'}
>>>

First we import the pymongo library. Then we create the local client proxy object with client = pymongo.MongoClient(). The client object manages a connection pool to the server and can be used to set many operational parameters related to server connections.

We can leave the parameter list to the MongoClient call blank. Remember, the server by default listens on port 27017 and the client by default attempts to connect to localhost:27017.

Once we have a client object, we can create a database, ep002, and a collection, people_collection. Note that we do not need an explicit DDL statement.

Using Compass to examine the database server

A database is effectively a container for collections. A collection provides a container for documents. Neither the database nor the collection will be created on the server until you actually insert a document. If you check the server by connecting with MongoDB Compass you will see that there are no databases or collections on this server before the insert_one call.

screen shot of compass at start

These commands are lazily evaluated. So, until we actually insert a document into the collection, nothing happens on the server.
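A quick way to see this from the Python shell is to check the server's database names before and after the first insert. A minimal sketch in a fresh session (list_database_names requires PyMongo 3.6 or later):

>>> import pymongo
>>> client = pymongo.MongoClient()
>>> database = client["ep002"]
>>> people_collection = database["people_collection"]
>>> "ep002" in client.list_database_names()
False
>>> result = people_collection.insert_one({"name" : "Joe Drumgoole"})
>>> "ep002" in client.list_database_names()
True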

Once we insert a document:

>>> result=database.people_collection.insert_one({"name" : "Joe Drumgoole"})
>>> result.inserted_id
ObjectId('5b7d297cc718bc133212aa94')
>>> result.acknowledged
True
>>> people_collection.find_one()
{'_id': ObjectId('5b62e6f8c3b498fbfdc1c20c'), 'name': 'Joe Drumgoole'}
>>>

We will see that the database, the collection, and the document are created.

screen shot of compass with collection

And we can see the document in the database.

screen shot of compass with document

_id Field

Every object that is inserted into a MongoDB database gets an automatically generated _id field. This field is guaranteed to be unique for every document inserted into the collection. This unique property is enforced as the _id field is automatically indexed and the index is unique.
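Because of that unique index, inserting a second document with the same _id value fails with a DuplicateKeyError. A short sketch of what that looks like in the interpreter, continuing with the collection from earlier and using an explicit _id for illustration:

>>> from pymongo.errors import DuplicateKeyError
>>> people_collection.insert_one({"_id" : 1, "name" : "Joe Drumgoole"})
<pymongo.results.InsertOneResult object at 0x...>
>>> try:
...     people_collection.insert_one({"_id" : 1, "name" : "Someone Else"})
... except DuplicateKeyError:
...     print("duplicate _id rejected")
...
duplicate _id rejected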

The value of the _id field is defined as follows:

ObjectID

The _id field is generated on the client and you can see the PyMongo generation code in the objectid.py file. Just search for the def _generate string. All MongoDB drivers generate _id fields on the client side. The _id field allows us to insert the same JSON object many times, with each one uniquely identified. The _id field even gives a temporal ordering, which you can get from an ObjectId via the generation_time property.

>>> from bson import ObjectId
>>> x=ObjectId('5b7d297cc718bc133212aa94')
>>> x.generation_time
datetime.datetime(2018, 8, 22, 9, 14, 36, tzinfo=<bson.tz_util.FixedOffset object at 0x...>)
>>> print(x.generation_time)
2018-08-22 09:14:36+00:00
>>>

Wrap Up

That is Create in MongoDB. We started a mongod instance, created a MongoClient proxy, created a database and a collection, and finally made them spring to life by inserting a document.

Next up we will talk more about the Read part of CRUD. In MongoDB this is the find query, which we saw a little bit of earlier in this episode.


For direct feedback please pose your questions on twitter/jdrumgoole. That way everyone can see the answers.

The best way to try out MongoDB is via MongoDB Atlas, our Database as a Service. It’s free to get started with MongoDB Atlas, so give it a try today.

PyMongo Monday: Setting Up Your PyMongo Environment

Welcome to PyMongo Monday. This is the first in a series of regular blog posts that will introduce developers to programming MongoDB using the Python programming language. It’s called PyMongo Monday because PyMongo is the name of the client library (in MongoDB speak we refer to it as a "driver") we use to interact with the MongoDB Server. Monday because we aim to release each new episode on Monday.

To get started we need to install the toolchain used by a typical MongoDB Python developer.

Installing m

First up is m. Hard to find online unless you search for "MongoDB m", m is a tool to manage and use multiple installations of the MongoDB Server in parallel. It is an invaluable tool if you want to try out the latest and greatest beta version but still continue mainline development on the current stable release.

The easiest way to install m is with npm the Node.js package manager (which it turns out is not just for Node.js).

$ sudo npm install -g m
Password:******
/usr/local/bin/m -> /usr/local/lib/node_modules/m/bin/m
+ m@1.4.1
updated 1 package in 2.361s
$

If you can’t or don’t want to use npm you can download and install directly from the github repo. See the README there for details.

For today we will use m to install the current stable production version (4.0.2 at the time of writing).

We run the stable command to achieve this.

$ m stable
MongoDB version 4.0.2 is not installed.
Installation may take a while. Would you like to proceed? [y/n] y
... installing binary

######################################################################## 100.0%
/Users/jdrumgoole
... removing source
... installation complete
$

If you need to use the path directly in another program you can get that with m bin.

$ m bin 4.0.2
/usr/local/m/versions/4.0.2/bin
$

To run the corresponding binary, use m use stable:

$ m use stable
2018-08-28T11:41:48.157+0100 I CONTROL  [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
2018-08-28T11:41:48.171+0100 I CONTROL  [initandlisten] MongoDB starting : pid=38524 port=27017 dbpath=/data/db 64-bit host=JD10Gen.local
2018-08-28T11:41:48.171+0100 I CONTROL  [initandlisten] db version v4.0.2
2018-08-28T11:41:48.171+0100 I CONTROL  [initandlisten] git version: fc1573ba18aee42f97a3bb13b67af7d837826b47
< other server output >
...
2018-06-13T15:52:43.648+0100 I NETWORK  [initandlisten] waiting for connections on port 27017

Now that we have a server running we can confirm that it works by connecting via the mongo shell.

$ mongo
MongoDB shell version v4.0.0
connecting to: mongodb://127.0.0.1:27017
MongoDB server version: 4.0.0
Server has startup warnings:
2018-07-06T10:56:50.973+0100 I CONTROL  [initandlisten]
2018-07-06T10:56:50.973+0100 I CONTROL  [initandlisten] ** WARNING: Access control is not enabled for the database.
2018-07-06T10:56:50.973+0100 I CONTROL  [initandlisten] **          Read and write access to data and configuration is unrestricted.
2018-07-06T10:56:50.973+0100 I CONTROL  [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2018-07-06T10:56:50.973+0100 I CONTROL  [initandlisten]
2018-07-06T10:56:50.973+0100 I CONTROL  [initandlisten] ** WARNING: This server is bound to localhost.
2018-07-06T10:56:50.973+0100 I CONTROL  [initandlisten] **          Remote systems will be unable to connect to this server.
2018-07-06T10:56:50.973+0100 I CONTROL  [initandlisten] **          Start the server with --bind_ip < address> to specify which IP
2018-07-06T10:56:50.973+0100 I CONTROL  [initandlisten] **          addresses it should serve responses from, or with --bind_ip_all to
2018-07-06T10:56:50.973+0100 I CONTROL  [initandlisten] **          bind to all interfaces. If this behavior is desired, start the
2018-07-06T10:56:50.973+0100 I CONTROL  [initandlisten] **          server with --bind_ip 127.0.0.1 to disable this warning.
2018-07-06T10:56:50.973+0100 I CONTROL  [initandlisten]

---
Enable MongoDB's free cloud-based monitoring service to collect and display
metrics about your deployment (disk utilization, CPU, operation statistics,
etc).

The monitoring data will be available on a MongoDB website with a unique
URL created for you. Anyone you share the URL with will also be able to
view this page. MongoDB may use this information to make product
improvements and to suggest MongoDB products and deployment options to you.

To enable free monitoring, run the following command:
db.enableFreeMonitoring()
---

>

These warnings are standard. They flag that this database has no access controls set up by default and that it is only listening to connections coming from the machine it is running on (localhost). We will learn how to set up access control and listen on a broader range of IP addresses in later episodes.

Installing the PyMongo Driver

But this series is not about the MongoDB Shell, which uses JavaScript as its coin of the realm, it’s about Python. How do we connect to the database with Python?

First we need to install the MongoDB Python Driver, PyMongo. In MongoDB parlance a driver is a language-specific client library that allows developers to interact with the server in the idiom of their own programming language.

For Python that means installing the driver with pip. In Node.js the driver is installed using npm, and in Java you can use Maven.

$ pip3 install pymongo
Collecting pymongo
  Downloading https://files.pythonhosted.org/packages/a1/e0/51df08036e04c1ddc985a2dceb008f2f21fc1d6de711bb6cee85785c1d78/pymongo-3.7.1-cp27-cp27m-macosx_10_13_intel.whl (333kB)
    100% |████████████████████████████████| 337kB 4.1MB/s
Installing collected packages: pymongo
Successfully installed pymongo-3.7.1
$

We recommend you use a virtual environment to isolate your PyMongo Monday code. This is not required but is very convenient for isolating different development streams.
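For example, one way to set that up with the built-in venv module (the directory name is arbitrary):

$ python3 -m venv pymongo-monday
$ source pymongo-monday/bin/activate
(pymongo-monday) $ pip3 install pymongo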

Now we can connect to the database:

$ python
Python 3.6.5 (v3.6.5:f59c0932b4, Mar 28 2018, 03:03:55)
[GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import pymongo                                                  (1)
>>> client = pymongo.MongoClient(host="mongodb://localhost:27017")  (2)
>>> result = client.admin.command("isMaster")                       (3)
>>> import pprint
>>> pprint.pprint(result)
{'ismaster': True,
 'localTime': datetime.datetime(2018, 6, 13, 21, 55, 2, 272000),
 'logicalSessionTimeoutMinutes': 30,
 'maxBsonObjectSize': 16777216,
 'maxMessageSizeBytes': 48000000,
 'maxWireVersion': 6,
 'maxWriteBatchSize': 100000,
 'minWireVersion': 0,
 'ok': 1.0,
 'readOnly': False}
>>>

First we import the PyMongo library (1). Then we create a local client object (2) that holds the connection pool and other status for this server. We generally don’t want more than one MongoClient object per program as it provides its own connection pool.

Now we are ready to issue a command to the server. In this case it's the standard MongoDB server information command, which is called, rather anachronistically, isMaster (3). This is a hangover from the very early, pre-1.0 versions of MongoDB (which is over ten years old at this stage). The isMaster command returns a dict which details a bunch of server information. In order to format this in a more readable way we import the pprint library.

Conclusion

That’s the end of episode one. We have installed MongoDB, installed the Python client library (aka driver), started a mongod server and established a connection between the client and server.

Next week we will introduce CRUD operations on MongoDB, starting with Create.

For direct feedback please pose your questions on twitter/jdrumgoole. That way everyone can see the answers.

The best way to try out MongoDB is via MongoDB Atlas, our fully managed Database as a Service available on AWS, Google Cloud Platform (GCP), and Azure.

Introduction to MongoDB Transactions in Python

Multi-document transactions arrived in MongoDB 4.0 in June 2018. MongoDB has always been transactional around updates to a single document. Now, with multi-document transactions we can wrap a set of database operations inside a start and commit transaction call. This ensures that even with inserts and/or updates happening across multiple collections and/or databases, the external view of the data meets ACID constraints.
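In PyMongo that start/commit pattern looks like the following minimal sketch (it assumes a replica set is running locally on port 27100 as set up later in this post; the database, collection, and field names are illustrative, and in MongoDB 4.0 the collections must already exist before the transaction starts):

import pymongo

client = pymongo.MongoClient("mongodb://localhost:27100/?replicaSet=txntest")
db = client["PYTHON_TXNS_EXAMPLE"]

with client.start_session() as session:
    # start_transaction() commits when the block exits cleanly
    # and aborts if an exception is raised
    with session.start_transaction():
        db.seats.insert_one({"seat": "1A"}, session=session)
        db.payments.insert_one({"seat": "1A", "price": 330}, session=session)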

To demonstrate transactions in the wild we use a trivial example app that emulates a flight booking for an online airline application. In this simplified booking we need to undertake three operations:

  1. Allocate a seat (seat_collection)
  2. Pay for the seat (payment_collection)
  3. Update the count of allocated seats and sales (audit_collection)

For this application we will use three separate collections for these documents, as detailed above. The code in transaction_main.py updates these collections in series unless the --usetxns argument is used, in which case the complete set of operations is wrapped inside an ACID transaction. The code in transaction_main.py is built directly using the MongoDB Python driver (PyMongo 3.7.1). See the section on client sessions for an overview of the new transactions API in 3.7.1.

The goal of this code is to demonstrate to the Python developer just how easy it is to convert existing code to transactions if required, or to port older SQL-based systems.

Setting up your environment

The following files can be found in the associated github repo, pymongo-transactions.

  • gitignore : Standard Github .gitignore for Python
  • LICENSE : Apache's 2.0 (standard Github) license
  • Makefile : Makefile with targets for default operations
  • transaction_main.py : Run a set of writes with and without transactions. Run python transaction_main.py -h for help.
  • transaction_retry.py : The file containing the transaction retry functions.
  • watch_transactions.py : Use a MongoDB change stream to watch collections as they change when transaction_main.py is running
  • kill_primary.py : Starts a MongoDB replica set (on port 27100) and kills the primary on a regular basis. This is used to emulate an election happening in the middle of a transaction.
  • featurecompatibility.py : Check and/or set feature compatibility for the database (it needs to be set to "4.0" for transactions)

You can clone this repo and work alongside us during this blog post (please file any problems on the Issues tab for the repo).

We assume for all that follows that you have Python 3.6 or greater correctly installed and on your path.

The Makefile outlines the operations that are required to setup the test environment.

All the programs in this example use a port range starting at 27100 to ensure that this example does not clash with an existing MongoDB installation.

Preparation

To set up the environment you can run through the following steps manually. If you have make, you can speed up installation by using the make install command.

Set up a Python virtualenv

$ cd pymongo-transactions
$ virtualenv -p python3 venv
$ source venv/bin/activate

Install the Python MongoDB Driver, pymongo

Install the latest version of the PyMongo MongoDB Driver (3.7.1 at the time of writing).

pip install --upgrade pymongo

Install Mtools

MTools is a collection of helper scripts to parse, filter, and visualize MongoDB log files (mongod, mongos). mtools also includes mlaunch, a utility to quickly set up complex MongoDB test environments on a local machine. For this demo we are only going to use the mlaunch program.

pip install mtools

The mlaunch program also requires the psutil package.

pip install psutil

The mlaunch program gives us a simple command to start a MongoDB replica set, as transactions are only supported on replica sets.

Start a replica set whose name is txntest (see the make init_server target for details):

mlaunch init --port 27100 --replicaset --name "txntest"

Using the Makefile for configuration

There is a Makefile with targets for all these operations. For those of you on platforms without access to make, it should be easy enough to cut and paste the commands out of the targets and run them on the command line.

Running the Makefile

cd pymongo-transactions
make

You will need to have MongoDB 4.0 on your path. There are other convenience targets for starting the demo programs:

  • make notxns : start the transactions client without using transactions
  • make usetxns : start the transactions client with transactions enabled
  • make watch_seats : watch the seats collection changing
  • make watch_payments : watch the payment collection changing

Running the transactions example

The transactions example consists of two Python programs: transaction_main.py and watch_transactions.py.

Running transaction_main.py

$ python transaction_main.py -h
usage: transaction_main.py [-h] [--host HOST] [--usetxns] [--delay DELAY]
                           [--iterations ITERATIONS]
                           [--randdelay RANDDELAY RANDDELAY]

optional arguments:
  -h, --help            show this help message and exit
  --host HOST           MongoDB URI [default: mongodb://localhost:27100,localh
                        ost:27101,localhost:27102/?replicaSet=txntest&retryWri
                        tes=true]
  --usetxns             Use transactions [default: False]
  --delay DELAY         Delay between two insertion events [default: 1.0]
  --iterations ITERATIONS
                        Run N iterations. 0 means run forever
  --randdelay RANDDELAY RANDDELAY
                        Create a delay set randomly between the two bounds
                        [default: None]

You can choose to use --delay or --randdelay; if you use both, --delay takes precedence. The --randdelay parameter creates a random delay between a lower and an upper bound that will be added between each insertion event.
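For example, to add a random delay of between two and five seconds to each insertion event you might run (an illustrative invocation):

$ python transaction_main.py --randdelay 2 5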

The transaction_main.py program knows to use the txntest replica set and the right default port range.

To run the program without transactions you can run it with no arguments:

$ python transaction_main.py
using collection: SEATSDB.seats
using collection: PAYMENTSDB.payments
using collection: AUDITDB.audit
Using a fixed delay of 1.0

1. Booking seat: '1A'
1. Sleeping: 1.000
1. Paying 330 for seat '1A'
2. Booking seat: '2A'
2. Sleeping: 1.000
2. Paying 450 for seat '2A'
3. Booking seat: '3A'
3. Sleeping: 1.000
3. Paying 490 for seat '3A'
4. Booking seat: '4A'
4. Sleeping: 1.000
^C

The program runs a function called book_seat() which books a seat on a plane by adding documents to three collections. First it adds the seat allocation to the seats_collection, then it adds a payment to the payments_collection, and finally it updates an audit count in the audit_collection. (This is a much simplified booking process used purely for illustration.)

The default is to run the program without using transactions. To use transactions we have to add the command line flag --usetxns. Run this to test that you are running MongoDB 4.0 and that the correct featureCompatibility is configured (it must be set to 4.0). If you install MongoDB 4.0 over an existing /data directory containing 3.6 databases then featureCompatibility will be set to 3.6 by default and transactions will not be available.

Note: If you get the following error running python transaction_main.py --usetxns it means you are picking up an older version of PyMongo (older than 3.7.x) which has no multi-document transaction support.

Traceback (most recent call last):
  File "transaction_main.py", line 175, in 
    total_delay = total_delay + run_transaction_with_retry( booking_functor, session)
  File "/Users/jdrumgoole/GIT/pymongo-transactions/transaction_retry.py", line 52, in run_transaction_with_retry
    with session.start_transaction():
AttributeError: 'ClientSession' object has no attribute 'start_transaction'

Watching Transactions

To actually see the effect of transactions we need to watch what is happening inside the collections SEATSDB.seats and PAYMENTSDB.payments.

We can do this with watch_transactions.py. This script uses MongoDB Change Streams to see what's happening inside a collection in real-time. We need to run two of these in parallel so it's best to line them up side by side.
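The core of a change stream watcher is only a few lines of PyMongo. Here is a minimal sketch of the idea (not the actual watch_transactions.py source; the database and collection names are illustrative):

import pymongo

client = pymongo.MongoClient("mongodb://localhost:27100/?replicaSet=txntest")
collection = client["PYTHON_TXNS_EXAMPLE"]["seats"]

# watch() returns a cursor that blocks until the next change arrives
with collection.watch() as stream:
    for change in stream:
        print(change["operationType"], change.get("fullDocument"))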

Here is the watch_transactions.py program:

$ python watch_transactions.py -h
usage: watch_transactions.py [-h] [--host HOST] [--collection COLLECTION]

optional arguments:
  -h, --help            show this help message and exit
  --host HOST           mongodb URI for connecting to server [default:
                        mongodb://localhost:27100/?replicaSet=txntest]
  --collection COLLECTION
                        Watch <collection> [default:
                        PYTHON_TXNS_EXAMPLE.seats_collection]

We need to watch each collection, so start a watcher in two separate terminal windows.

Window 1:

$ python watch_transactions.py --collection seats
Watching: seats
...

Window 2:

$ python watch_transactions.py --collection payments
Watching: payments
...

What Happens when you run without transactions?

Let's run the code without transactions first. If you examine the transaction_main.py code you will see a function called book_seat().

def book_seat(seats, payments, audit, seat_no, delay_range, session=None):
    '''
    Run two inserts and an audit update in sequence.
    If session is not None we are in a transaction

    :param seats: seats collection
    :param payments: payments collection
    :param audit: audit collection
    :param seat_no: the number of the seat to be booked (defaults to row A)
    :param delay_range: A tuple indicating a random delay between two ranges or a single float fixed delay
    :param session: Session object required by a MongoDB transaction
    :return: the delay_period for this transaction
    '''
    price = random.randrange(200, 500, 10)
    if type(delay_range) == tuple:
        delay_period = random.uniform(delay_range[0], delay_range[1])
    else:
        delay_period = delay_range

    # Book Seat
    seat_str = "{}A".format(seat_no)
    print(count(seat_no, "Booking seat: '{}'".format(seat_str)))
    seats.insert_one({"flight_no" : "EI178",
                      "seat"      : seat_str,
                      "date"      : datetime.datetime.utcnow()},
                     session=session)
    print(count( seat_no, "Sleeping: {:02.3f}".format(delay_period)))
    #pay for seat
    time.sleep(delay_period)
    payments.insert_one({"flight_no" : "EI178",
                         "seat"      : seat_str,
                         "date"      : datetime.datetime.utcnow(),
                         "price"     : price},
                        session=session)
    audit.update_one({ "audit" : "seats"}, { "$inc" : { "count" : 1}}, upsert=True, session=session)
    print(count(seat_no, "Paying {} for seat '{}'".format(price, seat_str)))

    return delay_period

This program emulates a very simplified airline booking with a seat being allocated and then paid for. These are often separated by a reasonable time frame (e.g. seat allocation vs external credit card validation and anti-fraud check) and we emulate this by inserting a delay. The default is 1 second.

Now with the two watch_transactions.py scripts running for seats_collection and payments_collection we can run transaction_main.py as follows:

$ python transaction_main.py

The first run is with no transactions enabled.

The bottom window shows transaction_main.py running. On the top left we are watching the inserts to the seats collection. On the top right we are watching inserts to the payments collection.

watching without transactions

We can see that the payments window lags the seats window as the watchers only update when the insert is complete. Thus seats sold cannot be easily reconciled with corresponding payments. If we CTRL-C the program after the third seat has been booked, we can see that it exits before writing the payment. This is reflected in the Change Stream for the payments collection, which only shows payments for seats 1A and 2A versus seat allocations for 1A, 2A, and 3A.

If we want payments and seats to be instantly reconcilable and consistent we must execute the inserts inside a transaction.

What happens when you run with Transactions?

Now let's run the same system with --usetxns enabled.

$ python transaction_main.py --usetxns

We run with the exact same setup but now set --usetxns.

watching with transactions

Note now how the change streams are interlocked and are updated in parallel. This is because all the updates only become visible when the transaction is committed. Note how we aborted the third transaction by hitting CTRL-C. Now neither the seat nor the payment appears in the change streams, unlike the first example where the seat went through.

This is where transactions shine in a world where all or nothing is the watchword. We never want to keep seats allocated unless they are paid for.

What happens during failure?

In a MongoDB replica set all writes are directed to the Primary node. If the primary node fails or becomes inaccessible (e.g. due to a network partition) writes in flight may fail. In a non-transactional scenario the driver will recover from a single failure and retry the write. In a multi-document transaction we must recover and retry in the event of these kinds of transient failures. This code is encapsulated in transaction_retry.py. We both retry the transaction and retry the commit to handle scenarios where the primary fails within the transaction and/or the commit operation.

def commit_with_retry(session):
    while True:
        try:
            # Commit uses write concern set at transaction start.
            session.commit_transaction()
            print("Transaction committed.")
            break
        except (pymongo.errors.ConnectionFailure, pymongo.errors.OperationFailure) as exc:
            # Can retry commit
            if exc.has_error_label("UnknownTransactionCommitResult"):
                print("UnknownTransactionCommitResult, retrying "
                      "commit operation ...")
                continue
            else:
                print("Error during commit ...")
                raise

def run_transaction_with_retry(functor, session):
    assert (isinstance(functor, Transaction_Functor))
    while True:
        try:
            with session.start_transaction():
                result=functor(session)  # performs transaction
                commit_with_retry(session)
            break
        except (pymongo.errors.ConnectionFailure, pymongo.errors.OperationFailure) as exc:
            # If transient error, retry the whole transaction
            if exc.has_error_label("TransientTransactionError"):
                print("TransientTransactionError, retrying "
                      "transaction ...")
                continue
            else:
                raise

    return result

In order to observe what happens during elections we can use the script kill_primary.py. This script will start a replica-set and continuously kill the primary.

$ make kill_primary
. venv/bin/activate && python kill_primary.py
no nodes started.
Current electionTimeoutMillis: 500
1. (Re)starting replica-set
no nodes started.
1. Getting list of mongod processes
Process list written to mlaunch.procs
1. Getting replica set status
1. Killing primary node: 31029
1. Sleeping: 1.0
2. (Re)starting replica-set
launching: "/usr/local/mongodb/bin/mongod" on port 27101
2. Getting list of mongod processes
Process list written to mlaunch.procs
2. Getting replica set status
2. Killing primary node: 31045
2. Sleeping: 1.0
3. (Re)starting replica-set
launching: "/usr/local/mongodb/bin/mongod" on port 27102
3. Getting list of mongod processes
Process list written to mlaunch.procs
3. Getting replica set status
3. Killing primary node: 31137
3. Sleeping: 1.0

kill_primary.py resets electionTimeoutMillis to 500ms from its default of 10000ms (10 seconds). This allows elections to resolve more quickly for the purposes of this test as we are running everything locally.

Once kill_primary.py is running we can start up transaction_main.py again using the --usetxns argument.


$ make usetxns
. venv/bin/activate && python transaction_main.py --usetxns
Forcing collection creation (you can't create collections inside a txn)
Collections created
using collection: PYTHON_TXNS_EXAMPLE.seats
using collection: PYTHON_TXNS_EXAMPLE.payments
using collection: PYTHON_TXNS_EXAMPLE.audit
Using a fixed delay of 1.0
Using transactions

1. Booking seat: '1A'
1. Sleeping: 1.000
1. Paying 440 for seat '1A'
Transaction committed.
2. Booking seat: '2A'
2. Sleeping: 1.000
2. Paying 330 for seat '2A'
Transaction committed.
3. Booking seat: '3A'
3. Sleeping: 1.000
TransientTransactionError, retrying transaction ...
3. Booking seat: '3A'
3. Sleeping: 1.000
3. Paying 240 for seat '3A'
Transaction committed.
4. Booking seat: '4A'
4. Sleeping: 1.000
4. Paying 410 for seat '4A'
Transaction committed.
5. Booking seat: '5A'
5. Sleeping: 1.000
5. Paying 260 for seat '5A'
Transaction committed.
6. Booking seat: '6A'
6. Sleeping: 1.000
TransientTransactionError, retrying transaction ...
6. Booking seat: '6A'
6. Sleeping: 1.000
6. Paying 380 for seat '6A'
Transaction committed.
...

As you can see, during elections the transaction will be aborted and must be retried. If you look at the transaction_retry.py code you will see how this happens. If a write operation encounters an error it will throw one of the following exceptions: pymongo.errors.ConnectionFailure or pymongo.errors.OperationFailure.

Within these exceptions there will be a label called TransientTransactionError. This label can be detected using the has_error_label(label) function, which is available in PyMongo 3.7.x. Transient errors can be recovered from, and the retry code in transaction_retry.py retries both writes and commits (see above).

Conclusions

Multi-document transactions are the final piece of the jigsaw for SQL developers who have been shying away from trying MongoDB. ACID transactions make the programmer's job easier and give teams that are migrating from an existing SQL schema a much more consistent and convenient transition path.

As most migrations involve a move from highly normalised data structures to more natural and flexible nested JSON documents, one would expect that the number of required multi-document transactions will be smaller in a properly constructed MongoDB application. But where multi-document transactions are required, programmers can now include them using very similar syntax to SQL.

With ACID transactions in MongoDB 4.0 it can now be the first choice for an even broader range of application use cases.

Why not try out transactions today by setting up your first cluster on MongoDB Atlas, our Database as a Service offering.

To try it locally download MongoDB 4.0.


Join us at MongoDB Europe 2018 for deep-dive technical sessions and hands-on tutorials.

MongoDB 3.6: Here to SRV you with easier replica set connections

If you have logged into MongoDB Atlas recently – and you should, the entry-level tier is free! – you may have noticed a strange new syntax on 3.6 connection strings.

MongoDB Seed Lists

What is this mongodb+srv syntax?

Well, in MongoDB 3.6 we introduced the concept of a seed list that is specified using DNS records, specifically SRV and TXT records. You will recall from using replica sets with MongoDB that the client must specify at least one replica set member (and may specify several of them) when connecting. This allows a client to connect to a replica set even if one of the nodes that the client specifies is unavailable.

You can see an example of this URL on a 3.4 cluster connection string:

Note that without the SRV record configuration we must list several nodes (in the case of Atlas we always include all the cluster members, though this is not required). We also have to specify the ssl and replicaSet options.

With the 3.4 or earlier driver, we have to specify all the options on the command line using the MongoDB URI syntax.

The use of SRV records eliminates the requirement for every client to pass in a complete set of state information for the cluster. Instead, a single SRV record identifies all the nodes associated with the cluster (and their port numbers) and an associated TXT record defines the options for the URI.

Reading SRV and TXT Records

We can see how this works in practice on a MongoDB Atlas cluster with a simple Python script.

import srvlookup  # pip install srvlookup
import sys
import dns.resolver  # pip install dnspython

host = None

if len(sys.argv) > 1:
    host = sys.argv[1]

if host:
    services = srvlookup.lookup("mongodb", domain=host)
    for i in services:
        print("%s:%i" % (i.hostname, i.port))
    for txtrecord in dns.resolver.query(host, 'TXT'):
        print("%s: %s" % (host, txtrecord))
else:
    print("No host specified")

We can run this script using the node specified in the 3.6 connection string as a parameter.

$ python mongodb_srv_records.py freeclusterjd-ffp4c.mongodb.net
freeclusterjd-shard-00-00-ffp4c.mongodb.net:27017
freeclusterjd-shard-00-01-ffp4c.mongodb.net:27017
freeclusterjd-shard-00-02-ffp4c.mongodb.net:27017
freeclusterjd-ffp4c.mongodb.net: "authSource=admin&replicaSet=FreeClusterJD-shard-0"
$

You can also do this lookup with nslookup:

JD10Gen-old:~ jdrumgoole$ nslookup
> set type=SRV
> _mongodb._tcp.rs.joedrumgoole.com
Server:        10.65.141.1
Address:    10.65.141.1#53

Non-authoritative answer:
_mongodb._tcp.rs.joedrumgoole.com    service = 0 0 27022 rs1.joedrumgoole.com.
_mongodb._tcp.rs.joedrumgoole.com    service = 0 0 27022 rs2.joedrumgoole.com.
_mongodb._tcp.rs.joedrumgoole.com    service = 0 0 27022 rs3.joedrumgoole.com.

Authoritative answers can be found from:
> set type=TXT
> rs.joedrumgoole.com
Server:        10.65.141.1
Address:    10.65.141.1#53

Non-authoritative answer:
rs.joedrumgoole.com    text = "authSource=admin&replicaSet=srvdemo"

You can see how this could be used to construct a 3.4 style connection string by comparing it with the 3.4 connection string above.

As you can see, the complexity of the cluster and its configuration parameters are stored in the DNS server and hidden from the end user. If a node's IP address or name changes or we want to change the replica set name, this can all now be done completely transparently from the client’s perspective. We can also add and remove nodes from a cluster without impacting clients.

So now whenever you see mongodb+srv you know you are expecting an SRV and TXT record to deliver the client connection string.
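From PyMongo, a seed list URI is used just like any other connection string; here is a minimal sketch (the mongodb+srv scheme requires the dnspython package, and the user credentials are placeholders):

import pymongo  # mongodb+srv URIs require: pip install dnspython

client = pymongo.MongoClient(
    "mongodb+srv://user:password@freeclusterjd-ffp4c.mongodb.net/test")
print(client.admin.command("ismaster")["ismaster"])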

Creating SRV and TXT records

Of course, SRV and TXT records are not just for Atlas. You can also create your own SRV and TXT records for your self-hosted MongoDB clusters. All you need for this is edit access to your DNS server so you can add SRV and TXT records. In the examples that follow we are using the AWS Route 53 DNS service.

I have set up a demo replica set on AWS with a three-node setup. They are :

rs1.joedrumgoole.com
rs2.joedrumgoole.com
rs3.joedrumgoole.com

Each has a mongod process running on port 27022. I have set up a security group that allows access to my local laptop and the nodes themselves so they can see each other.

I also set up the DNS names for the above nodes in AWS Route 53.

We can start the mongod processes by running the following command on each node.

$ sudo /usr/local/m/versions/3.6.3/bin/mongod --auth --port 27022 --replSet srvdemo --bind_ip 0.0.0.0 --keyFile mdb_keyfile

Now we need to set up the SRV and TXT records for this cluster.

The SRV record points to the server or servers that will comprise the members of the replica set. The TXT record defines the options for the replica set, specifically the database that will be used for authorization and the name of the replica set. It is important to note that the mongodb+srv format URI implicitly adds “ssl=true”. In our case SSL is not used for the demo so we have to append “&ssl=false” to the client connector. Note that the SRV record is specifically designed to look up the mongodb service referenced at the start of the URL.

The settings in AWS Route 53 are:

Which leads to the following entry in the zone file for Route 53.
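In zone-file form the SRV entries look something like this (reconstructed from the nslookup output shown earlier; the TTLs are illustrative):

_mongodb._tcp.rs.joedrumgoole.com. 300 IN SRV 0 0 27022 rs1.joedrumgoole.com.
_mongodb._tcp.rs.joedrumgoole.com. 300 IN SRV 0 0 27022 rs2.joedrumgoole.com.
_mongodb._tcp.rs.joedrumgoole.com. 300 IN SRV 0 0 27022 rs3.joedrumgoole.com.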

Now we can add the TXT record. By convention, we use the same name as the SRV record (rs.joedrumgoole.com) so that MongoDB knows where to find the TXT record.

We can do this on AWS Route 53 as follows:

This will create the following TXT record.
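In zone-file form, again with an illustrative TTL, it reads:

rs.joedrumgoole.com. 300 IN TXT "authSource=admin&replicaSet=srvdemo"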

Now we can access this service as:

mongodb+srv://rs.joedrumgoole.com/test

This will retrieve a complete URL and connection string which can then be used to contact the service.

The whole process is outlined below:

Once your records are set up, you can easily change port numbers without impacting clients and also add and remove cluster members.

SRV records are another way in which MongoDB is making life easier for database developers everywhere.

You should also check out full documentation on SRV and TXT records in MongoDB 3.6.

---

You can sign up for a free MongoDB Atlas tier which is suitable for single user use.

Find out how to use your favorite programming language with MongoDB via our MongoDB drivers.

Please visit MongoDB University for free online training in all aspects of MongoDB.

Follow Joe Drumgoole on twitter for more news about MongoDB.


Meet the team that builds MongoDB in-person at MongoDB World.

SEGA HARDlight Migrates to MongoDB Atlas to Simplify Ops and Improve Experience for Millions of Mobile Gamers

It was way back in the summer of ‘91 that Sonic the Hedgehog first chased rings across our 2D screens. Gaming has come a long way since then. From a static TV and console setup in ‘91, to online PC gaming in the noughties and now to mobile and virtual reality. Surprisingly, for most of those 25 years, the underlying infrastructure that powered these games hasn’t really changed much at all. It was all relational databases. But with ever increasing need for scale, flexibility and creativity in games, that’s changing fast. SEGA HARDlight is leading this shift by adopting a DevOps culture and using MongoDB Atlas, the cloud hosted MongoDB service, to deliver the best possible gaming experience.

Bringing Icons to Mobile Games

SEGA HARDlight is a mobile development studio for SEGA, a gaming company you might have heard of. Based in the UK’s Royal Leamington Spa, SEGA HARDlight is well known for bringing the much-loved blue mascot Sonic the Hedgehog to the small screen. Along with a range of Sonic games, HARDlight is also responsible for building and running a number of other iconic titles such as Crazy Taxi: City Rush and Kingdom Conquest: Dark Empire.

Sonic Dash

Earlier versions of the mobile games such as Sonic Jump and Sonic Dash didn’t require a connection to the internet and had no server functionality. As they were relatively static games, developers initially supported the releases with an in-house tech stack based around Java and MySQL and hosted in SEGA HARDlight’s own data centre.

The standard practice for launching these games involved load testing the servers to the point of breaking, then provisioning the resources to handle an acceptable failure point. This limited application functionality, and could cause service outages when reaching the provisioned resources’ breaking point. As the games started to add more online functionality and increased in popularity, that traditional stack started to creak.

Massive Adoption: Spiky Traffic

Mobile games have an interesting load pattern. People flock in extreme numbers very soon after the release. For the most popular games, this can mean many millions of people in just a few days or even hours. The peak is usually short and then quickly drops to a long tail of dedicated players. Provisioning for this kind of traffic with a dynamic game is a major headache. The graph from the Crazy Taxi: City Rush launch in 2014 demonstrates just how spiky the traffic can be.

Typical usage curve for a popular mobile game

We spoke with Yordan Gyurchev, Technical Director at SEGA HARDlight, who explained: “With these massive volumes even minor changes in the database have a big impact. To provide a perfect gaming experience developers need to be intimately familiar with the performance trade-offs of the database they’re using.”

Crazy Taxi : City Rush

Supersonic Scaling

SEGA HARDlight knew that the games were only going to get more online functionality and generate even more massive bursts of user activity. Much of the gaming data was also account-based so it didn’t fit naturally into the rows and columns of relational databases. In order to address these limitations, the team searched for alternatives. After reviewing Cassandra and Couchbase, but feeling they were either too complex to manage or didn’t have the mature support needed to meet the company’s SLAs, the HARDlight engineers looked to MongoDB Atlas, the MongoDB database as a service.

Then came extensive evaluations and testing across multiple dimensions such as cost, maintenance, monitoring and backups. It was well known that MongoDB natively had the scalability and flexibility to handle large volumes and always-on deployments but HARDlight’s team had to have support on the operations side too.

Advanced operational tooling in MongoDB Atlas gave a small DevOps team of just two staffers the ability to handle and run games even as millions of people join the fray. They no longer had to worry about maintenance, upgrades or backups. In fact, one of the clinchers was the point-in-time backup and restore feature, which meant that they can roll back to a checkpoint with the click of a button. With MongoDB Atlas running on AWS, SEGA HARDlight was ready to take on even Boss Level scaling.

“At HARDlight we’re passionate about finding the right tool for the job. For us we could see that using a horizontally scalable document database was a perfect fit for player-account based games,” said Yordan.

“The ability to create a high traffic volume, highly scalable solution is about knowing the tiny details. To do that, normally engineers need to focus on many different parts of the stack but MongoDB Atlas and MongoDB’s support gives us a considerable shortcut. If this was handled in-house we would only be as good as our database expert. Now we can rely on a wealth of knowledge, expertise and best in class technology.”

Sonic Forces

HARDlight’s first MongoDB-powered game was Kingdom Conquest: Dark Empire, which was a frictionless launch from the start and gave the engineers their first experiences of MongoDB. Then in a weekend in late 2017 Sonic Forces: Speed Battle was launched on mobile. It’s a demanding, always-on application that requires a constant connection to the internet and shared leaderboards. In the background a three-shard cluster running on MongoDB Atlas easily scaled to handle the complex loads as millions of gamers joined the race. The database was stable with low latencies and not a single service interruption. All of this resulted in a low stress launch, a happy DevOps team and a very enthusiastic set of gamers.

The latest SEGA HARDlight mobile game: Sonic Forces: Speed Battle

Yordan concluded: “With MySQL, it had taken multiple game launches to get the database backend right. With MongoDB Atlas, big launches were a success right from the start. That’s no mean feat.”

Just as the gaming platforms have evolved and transformed through the years, so too has the database layer had to grow and adapt. SEGA HARDlight is now expanding its use of MongoDB Atlas to support all new games as they come online. By taking care of the operations, management and scaling, MongoDB Atlas lets HARDlight focus on building and running some of the most iconic games in the world. And doing it with confidence.

Gone is the 90s infrastructure. Replaced by a stack that is every bit as modern, powerful and fast as the famous blue hedgehog.

SEGA HARDlight is looking for talented engineers to join the team. If you are interested, check out the careers page or email: hardlightcareers@sega.co.uk


Start your Atlas journey today for free. What are you waiting for?

Hacking Unemployment: How DWP Digital and MongoDB are Working Together to Empower Developers and Tackle Some of the Biggest Challenges in the UK

Technology and businesses exist to do social good. We all have bills to pay and families to support, but beyond that, it has to be about more than profit. I also believe that developers in particular have a huge influence on what an organisation can achieve, both its social impact and the bottom line. The Department for Work and Pensions’ Digital team (DWP Digital) is the perfect example of a group that understands and embraces the important role developers can play solving major issues. This year we’ve been lucky enough to work with DWP Digital and its developers in the ultimate hope of tackling some of the UK’s biggest challenges.

The Department for Work and Pensions (DWP) is the UK’s biggest public service department. It’s responsible for allocating government help to those in need. This includes a range of benefits including the state pension, disability allowances and more. Over 22 million citizens rely on the £168 billion that DWP releases every year.

The DWP Digital team is the group responsible for building and supporting the applications that make this all possible. They operate more than 1,000 applications and estimate that more than 50 million lines of code have been written for these applications. Currently, there’s a major shift happening at DWP Digital, as much of the most important work is coming back in-house and developers are adopting a more agile approach to delivery. The aim is to deliver better, more efficient and more customer-focused services; and they could not do that without an engaged, skilled and creative team of developers.

Hack the North: MongoDB Sponsored DWP Digital’s Manchester base Hackathon

Hack the North

For those who don’t know, a hackathon is an event that gives developers a chance to try out new technologies, solve new problems and experiment with new approaches. Basically, there are three things you want to get out of a hackathon: learn something, have fun and try to do some good. However, before we get into the hackathon, some statistics: In Manchester City and its surrounding areas there are more than 75,000 unemployed people (Source: DWP’s Churchill application, June 2017) and the overall unemployment rate is above the national average with 5.5% of residents out of work (Source: Nomis, official labour market statistics). Jobs in the science, research, engineering and technology professions make up just 4.69% of the total workforce in Manchester. However, vacancies in that category make up 18% of the total vacancies advertised (Source: City Council Quarterly Economy Dashboard Q1 2016/2017).

So when DWP Digital decided to run a hackathon ahead of the opening of its Manchester digital hub in early 2018, the big challenge they’d want to tackle was obvious. Hack the North was a two-day public hackathon focused on finding solutions to help address the unemployment problems in the city. It is usually done off-site in order to take the participants out of the headspace of day-to-day activity. There is normally plenty of food (pizza), beverages and competitive banter.

The project board at Hack the North

As DWP Digital is one of the biggest users of MongoDB in Europe and our developer advocacy team have experience running hackathons, a number of our team went up to support the event along with other sponsors ThoughtWorks and TechHub Manchester. I’ve been at a few hackathons through the years and, I have to say, this was one of the best I’ve been involved in. The quality of ideas, the execution and enthusiasm from all involved was fantastic.

We had more than 70 people onsite who divided into 10 distinct teams, each with a mission to deliver a new working solution in just two days using available data from public sources such as Churchill (DWP’s public data repository – which is also built on MongoDB).

The final solutions were wide-ranging, creative and impressive. We had everything from an engine that helped the onboarding process for the newly unemployed, right through to a platform that gamified CV and aptitude testing. However, the eventual winner was a team called UpSkill. UpSkill built an application using MongoDB Atlas that could match people’s skills to the requirements of employers, and has an API to allow people to access resources to boost their skills. It was a very slick, very well executed final product and first among a great crop of ideas.

Admittedly we haven’t completely solved unemployment in Manchester, but to my eyes, the two-day event was a roaring success with the developers learning a lot and building some powerful proof of concepts. If you do want to see more, check out the #HackTheNorth Twitter moment or this excellent blog post from my fellow judge Dan Tanham, a Deputy Director at DWP Digital.

Learning to teach, teaching to learn

You’ve never truly learnt a lesson until you’ve taught it to someone else. Alongside the hackathon, another way DWP Digital keeps its team at the forefront of development best practices is by presenting at developer conferences. We were delighted to have dozens of the DWP Digital team come along to MongoDB Europe 2017 in London in November of last year, but what was really special was to have Rob Thompson, CTO of DWP Digital, deliver one of the morning keynotes.

You can see the full video of his presentation below and you won’t be shocked by its thesis. After giving an overview of DWP Digital, Rob talks about how MongoDB and agile development are key tools to help the UK’s biggest public service department transform its data infrastructure and build a number of flagship digital services across pensions, health, benefits and analytics. Rob believes passionately that developers are the key difference between success and failure in most projects.

In the breakout sessions, Rob’s colleague David Parry got into even more detail on how DWP Digital is using agile development, Java and MongoDB in the cloud to create a microservices architecture. This architecture is making it possible to rapidly iterate from proof of concept to hundreds of services as they are rolled out nationally. Unfortunately, we couldn’t film every session, so if you would like to see this type of presentation you’ll just have to make sure you’re at MongoDB Europe later this year.

It’s been a gratifying few months getting to work so closely with the DWP Digital team. Not only are they using MongoDB in incredibly powerful ways but even more importantly I’ve gotten to see first-hand how developer-centric the organization is. You wouldn’t think of a big government department as a hotbed of developer innovation but thankfully they certainly can be. DWP Digital is proving to be every bit as forward-thinking, agile and end-user focused as the cream of Silicon Valley. And society is the better for it.

Find out more about open positions at DWP Digital on the DWP Digital Jobs Twitter account or go to careers.dwp.gov.uk. And if you’d like to know more about MongoDB’s developer focus and the events we run then follow me @jdrumgoole.

MongoDB Europe 2017 is Coming

On the 8th of November 2017 we will host our premier European developer event in London, MongoDB Europe. This event is for you, the developer, and like the MongoDB database, our event is designed to make your life easier in 2018 and beyond.

You might have missed a memo or two so let me get you up to speed. In 2015 we demoed Compass, our GUI for developers. In 2016, we launched MongoDB Atlas, our fully managed database service in the cloud.

This year we launched MongoDB Stitch, our Backend as a Service that combines the Atlas database with the key services a developer needs to launch applications in the cloud including authentication, file storage, third party services, and orchestration.

Eliot Horowitz, CTO of MongoDB, giving last year’s keynote

MongoDB Europe ‘17 is the best place to catch up on all these technologies whether you are a newbie or a seasoned veteran of the 2.x era.

MongoDB 3.6 is coming and this is the conference to find out how retryable writes, change streams, schema validation, and aggregation enhancements will simplify client side programming for MongoDB developers everywhere.

We will also have killer keynotes from Dr. Hannah Fry and James Governor.

Dr. Hannah Fry

In the morning Hannah will take you on a tour of the intriguing insights we’ve uncovered by looking at ourselves through the eyes of data and show you how a mathematical view of what it means to be human can shape the way we design our society, from dating and healthcare to catching serial killers and everything in between.

James Governor

In the afternoon James will examine the trends driving grassroots-led tech adoption today, showing how convenience is always the key to success. Developers are the new Kingmakers, and the platforms that win have the shortest mean time to dopamine.

Introduction: Shard N

But it gets better. For wizard-level experts and above, this year we are offering Shard N, so called because shard N is the last shard in the prototypical unbounded cluster.

Introduction to Shard N

Shard N’s talks start where the other talks end. We have some of our most technically hardcore, talented, and tenured speakers on deck. Keith Bostic, John Page, Asya Kamsky and Drew Di Palma will run extended sessions that will go deep into storage engines, distributed consensus, the aggregation framework, and our new MongoDB Stitch service.

But it’s more than just talk tracks. Come and play ping pong against your peers and get a chance to beat our CEO in a head to head match. Try out our retrogames arcade (Pacman, Asteroids, Space Invaders and more) or hang out with MongoDB staff at our beers around the world event at the end of the day.

There has never been a better time to find out about MongoDB, so be at the Intercontinental on the 8th of November and tell them ‘Joe’ sent you (use code JOE for a 25% discount off of the super low entry fee of £199).

Register today!


MUGs are coming in Europe

We are planning a lot of MongoDB User Groups (MUGs) for the next few months. Eleven in total.

We will be covering schema design, the upcoming 3.6 release of MongoDB, and a host of other topics. At the Munich and London events, Amadeus, one of the world’s largest travel technology companies, handling 750m bookings every year, will be talking about their use of MongoDB.

Date         City       Country      Event
11-Sep-2017  Madrid     Spain        Madrid MUG
19-Sep-2017  Paris      France       Paris MUG
19-Sep-2017  Berlin     Germany      Berlin MUG
20-Sep-2017  Stockholm  Sweden       Stockholm MUG
26-Sep-2017  Bern       Switzerland  CloudFoundry Meetup
2-Oct-2017   Nice       France       Nice MUG
5-Oct-2017   Munich     Germany      Munich MUG
11-Oct-2017  Dublin     Ireland      Dublin MUG
18-Oct-2017  Vienna     Austria      Vienna MUG
25-Oct-2017  London     England      London MUG
29-Nov-2017  Gent       Belgium      Belgium MUG

Meetups are free and there is usually pizza and beer available. Whether you are an expert or a newbie, this is a great place to catch up with other MongoDB users in your city. Click on the map below and it will take you to an interactive map browser for MUGs worldwide.

If you can’t find a meetup near your location, why not set one up? We have a complete guide. Better still, drop me a line at Joe.Drumgoole@mongodb.com and I can setup the meetup.com page for you under the MongoDB banner.

We are always looking for speakers, so if you are a MongoDB user with an interesting project that you would like to talk about contact your local Meetup Organizer. If you don’t get a response let me know and I will find a meetup for you.

We look forward to seeing you in the coming months.

Finally, don’t forget about MongoDB Europe, our annual marquee event in Europe which this year is held in London on the 8th of November. I will be an MC at the event and this year we are introducing Shard-N, a session devoted to longer, more in-depth technical talks.

Go register!


MongoDB Europe - It's a Wrap!

On the 15th of November we held our first ever MongoDB Europe event. Over 1000 attendees were present for a day focussed on content for MongoDB DBAs, DevOps, and developer personnel.

Eliot presents at MongoDB Europe

Eliot Horowitz, CTO and Founder, MongoDB

Eliot Horowitz, our CTO, opened the technical sessions with a keynote on the MongoDB 3.4 release which entered into general availability on the 29th November, 2016.

Professor Brian Cox presents at MongoDB Europe

Professor Brian Cox

Professor Brian Cox followed with the ultimate big data presentation, talking about the Big Bang and what the Sloan Digital Sky Survey can tell us about the origins of the universe. Brian’s presentation set the tone for the show, which centered on the giant ideas of MongoDB users.

We ran three “shards” – or streams of talks – throughout the day. Amazing customer sessions from Amadeus, SNCF, and Proximus, combined with highly technical talks on subjects like the Wired Tiger Storage engine, Blockchain, and how to build highly resilient MongoDB applications, all contributed to a high level of learning for all in attendance.

The show was packed all day long, with attendees mingling in the exhibition hall between talks.

MongoDB Europe Coffee time

Coffee time at MongoDB Europe 2016

We ran a ping pong ladder all day and our CEO Dev Ittycheria graciously agreed to play the winner. Dev is a bit of a demon on the ping pong table, so our ladder winner had to be content with a runner’s up prize.

Dev Ittycheria, CEO, plays table tennis at MongoDB Europe

Dev Ittycheria - CEO, MongoDB

MongoDB Europe Closing Keynote - Eliot Horowitz

The Closing Keynote from Eliot Horowitz

We’d especially like to thank all of our Sponsors, who helped make the day so successful.

MongoDB Europe Sponsors

What did we learn from our first MongoDB Europe event? The feedback overall was that it was a very successful event. The one consistent stream of constructive feedback was “make it even more technical.” As long as you are up for the challenge, so are we.

--

Registration is open for MongoDB World 2017:

Come to MongoDB World 2017!