Joe Drumgoole

PyMongo Monday - Episode 4 - Update

This is part 4 of PyMongo Monday. Previously we have covered:

EP1 - Setting up your MongoDB Environment
EP2 - Create - the C in CRUD
EP3 - Read - the R in CRUD

We are now into Update, the U in CRUD. The key aspect of update is the ability to change a document in place. In order for this to happen we must have some way to select the document and change parts of it. In the pymongo driver this is achieved with two functions:

update_one
update_many

Each update operation can take a range of update operators that define how we can mutate a document during update.

Let's get a copy of the zipcode database hosted on MongoDB Atlas. As our copy hosted in Atlas is not writable we can't test update directly on it. However, we can create a local copy with this simple script:

$ mongodump --host demodata-shard-0/demodata-shard-00-00-rgl39.mongodb.net:27017,demodata-shard-00-01-rgl39.mongodb.net:27017,demodata-shard-00-02-rgl39.mongodb.net:27017 --ssl --username readonly --password readonly --authenticationDatabase admin --db demo
2018-10-22T01:18:35.330+0100    writing demo.zipcodes to
2018-10-22T01:18:36.097+0100    done dumping demo.zipcodes (29353 documents)

This will create a backup of the data in a dump directory in the current working directory. To restore the data to a local mongod, make sure you are running mongod locally and just run mongorestore in the same directory as you ran mongodump:

$ mongorestore
2018-10-22T01:19:19.064+0100    using default 'dump' directory
2018-10-22T01:19:19.064+0100    preparing collections to restore from
2018-10-22T01:19:19.066+0100    reading metadata for demo.zipcodes from dump/demo/zipcodes.metadata.json
2018-10-22T01:19:19.211+0100    restoring demo.zipcodes from dump/demo/zipcodes.bson
2018-10-22T01:19:19.943+0100    restoring indexes for collection demo.zipcodes from metadata
2018-10-22T01:19:20.364+0100    finished restoring demo.zipcodes (29353 documents)
2018-10-22T01:19:20.364+0100    done

You will now have a demo database on your local mongod with a single collection called zipcodes.

$ python
Python 3.6.5 (v3.6.5:f59c0932b4, Mar 28 2018, 03:03:55)
[GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import pymongo
>>> client = pymongo.MongoClient()
>>> database=client['demo']
>>> zipcodes=database["zipcodes"]
>>> zipcodes.find_one()
{'_id': '01001', 'city': 'AGAWAM', 'loc': [-72.622739, 42.070206], 'pop': 15338, 'state': 'MA'}
>>>

Each document in this database has the same format:

{
  '_id': '01001',                  # ZIP code
  'city': 'AGAWAM',                # City name
  'loc': [-72.622739, 42.070206],  # Geospatial coordinates
  'pop': 15338,                    # Population within the ZIP code area
  'state': 'MA',                   # Two-letter state code (MA = Massachusetts)
}

Let's say we want to change the population to reflect the most current value. Today the population of 01001 is approximately 16769. To change the value we would execute the following update:

>>> zipcodes.update( {"_id" : "01001"}, {"$set" : { "pop" : 16769}})
{'n': 1, 'nModified': 1, 'ok': 1.0, 'updatedExisting': True}
>>> zipcodes.find_one({"_id" : "01001"})
{'_id': '01001', 'city': 'AGAWAM', 'loc': [-72.622739, 42.070206], 'pop': 16769, 'state': 'MA'}
>>>

Here we see the $set operator in action. The $set operator will set a field to a new value, or create that field if it doesn't exist in the document.
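The transcript above happens to use the legacy update method, which returns a raw reply document. The update_one function introduced earlier instead returns an UpdateResult object whose counters tell you what actually happened. A minimal sketch (not from the original post), run against the local demo database restored above:

from pymongo import MongoClient

client = MongoClient()
zipcodes = client["demo"]["zipcodes"]

# update_one returns an UpdateResult rather than a raw reply document
result = zipcodes.update_one({"_id": "01001"}, {"$set": {"pop": 16769}})

print(result.matched_count)   # documents matched by the filter (0 or 1 for update_one)
print(result.modified_count)  # documents actually changed on the server

If the new value equals the old one, matched_count will be 1 but modified_count will be 0, which is a handy way to spot no-op updates.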
We add a new field by doing:

>>> zipcodes.update_one( {"_id" : "01001"}, {"$set" : { "population_record" : []}})
<pymongo.results.UpdateResult object at 0x1042dc488>
>>> zipcodes.find_one({"_id" : "01001"})
{'_id': '01001', 'city': 'AGAWAM', 'loc': [-72.622739, 42.070206], 'pop': 16769, 'state': 'MA', 'population_record': []}
>>>

Here we are adding a new field called population_record. This field is an array field and has been set to the empty array for now. Now we can update the array with a history of the population for the ZIP code area:

>>> zipcodes.update_one({"_id" : "01001"}, { "$push" : { "population_record" : { "pop" : 15338, "timestamp": None }}})
<pymongo.results.UpdateResult object at 0x106c210c8>
>>> zipcodes.find_one({"_id" : "01001"})
{'_id': '01001', 'city': 'AGAWAM', 'loc': [-72.622739, 42.070206], 'pop': 16769, 'state': 'MA', 'population_record': [{'pop': 15338, 'timestamp': None}]}
>>> from datetime import datetime
>>> zipcodes.update_one({"_id" : "01001"}, { "$push" : { "population_record" : { "pop" : 16769, "timestamp": datetime.utcnow() }}})
<pymongo.results.UpdateResult object at 0x106c21908>
>>> zipcodes.find_one({"_id" : "01001"})
{'_id': '01001', 'city': 'AGAWAM', 'loc': [-72.622739, 42.070206], 'pop': 16769, 'state': 'MA', 'population_record': [{'pop': 15338, 'timestamp': None}, {'pop': 16769, 'timestamp': datetime.datetime(2018, 10, 22, 11, 37, 5, 60000)}]}
>>> x=zipcodes.find_one({"_id" : "01001"})
>>> x
{'_id': '01001', 'city': 'AGAWAM', 'loc': [-72.622739, 42.070206], 'pop': 16769, 'state': 'MA', 'population_record': [{'pop': 15338, 'timestamp': None}, {'pop': 16769, 'timestamp': datetime.datetime(2018, 10, 22, 11, 37, 5, 60000)}]}
>>> import pprint
>>> pprint.pprint(x)
{'_id': '01001',
 'city': 'AGAWAM',
 'loc': [-72.622739, 42.070206],
 'pop': 16769,
 'population_record': [{'pop': 15338, 'timestamp': None},
                       {'pop': 16769,
                        'timestamp': datetime.datetime(2018, 10, 22, 11, 37, 5, 60000)}],
 'state': 'MA'}
>>>

Here we have appended two documents to the array so that we have a history of the changes in population. The original value of 15338 was captured at an unknown time in the past, so we set that timestamp to None. We updated the other value today, so we can set that timestamp to the current time. In both cases we use the $push operator to push new elements onto the end of the array population_record. You can see how we use pprint to produce the output in a slightly more readable format.

If we want to apply changes to more than one document we use update_many; every document that matches the filter will be updated. So imagine we wanted to add the city sales tax to each city. First, we add the city sales tax to all the ZIP code regions in New York:
>>> zipcodes.update_many( {'city': "NEW YORK"}, { "$set" : { "sales tax" : 4.5 }})
<pymongo.results.UpdateResult object at 0x1042dcd88>
>>> zipcodes.find( {"city": "NEW YORK"})
<pymongo.cursor.Cursor object at 0x101e09410>
>>> cursor=zipcodes.find( {"city": "NEW YORK"})
>>> cursor.next()
{u'city': u'NEW YORK', u'loc': [-73.996705, 40.74838], u'sales tax': 4.5, u'state': u'NY', u'pop': 18913, u'_id': u'10001'}
>>> cursor.next()
{u'city': u'NEW YORK', u'loc': [-73.987681, 40.715231], u'sales tax': 4.5, u'state': u'NY', u'pop': 84143, u'_id': u'10002'}
>>> cursor.next()
{u'city': u'NEW YORK', u'loc': [-73.989223, 40.731253], u'sales tax': 4.5, u'state': u'NY', u'pop': 51224, u'_id': u'10003'}
>>>

The final kind of update operation we want to talk about is upsert. We can add the upsert flag to any update operation to insert the target document even when the filter matches no existing document. When is this useful?

Imagine we have a read-only collection of ZIP code data and we want to create a new collection (call it zipcodes_new) that contains updates to the ZIP codes, including changes in population. As we collect new population stats ZIP code by ZIP code, we want to update the zipcodes_new collection with new documents containing the updated ZIP code data. To simplify this process we can do the updates as an upsert. Below is a fragment of code from update_population.py:

zip_doc = zipcodes.find_one({"_id": args.zipcode})
zip_doc["pop"] = {"pop": args.pop, "timestamp": args.date}
zipcodes_new.update({"_id": args.zipcode}, zip_doc, upsert=True)
print("New zipcode data: " + zip_doc["_id"])
pprint.pprint(zip_doc)

The upsert=True flag ensures that if we don't match the initial clause {"_id": args.zipcode} we will still insert the zip_doc document. This is a common pattern for upsert usage: initially we insert based on a unique key; as the number of inserts grows, the likelihood that we will be updating an existing key, as opposed to inserting a new key, grows with it. The upsert=True flag allows us to handle both situations in a single update statement.

There is a lot more to update and I will return to it later in the series. For now just remember that update is generally used for mutating existing documents using a range of update operators. Next time we will complete our first pass over CRUD operations with the final function, delete.
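As a side note, the update_population.py fragment uses the legacy collection.update method; in current PyMongo the same whole-document upsert is usually written with replace_one. A minimal sketch under the same assumptions (zipcodes_new is a hypothetical writable collection, and the new population value is hard-coded for illustration):

from datetime import datetime
from pymongo import MongoClient

client = MongoClient()
zipcodes = client["demo"]["zipcodes"]
zipcodes_new = client["demo"]["zipcodes_new"]  # hypothetical target collection

zip_doc = zipcodes.find_one({"_id": "01001"})
zip_doc["pop"] = {"pop": 16769, "timestamp": datetime.utcnow()}

# upsert=True: replace the document if "_id" matches, insert it otherwise
result = zipcodes_new.replace_one({"_id": zip_doc["_id"]}, zip_doc, upsert=True)
print(result.upserted_id)  # the _id of the inserted doc, or None if it was an update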

October 29, 2018

Why You Need to Be at MongoDB Europe 2018

MongoDB Europe 2018 is just around the corner. On the 8th of November, our premier European event will bring together over 1000 members of the MongoDB developer community to learn about our existing technology, find out what’s around the corner, and hear from our CTO, Eliot Horowitz. It is also a chance to celebrate the satisfaction of working with the world’s most developer-focussed data platform.

This year we are back at Old Billingsgate, which is a fabulous venue for a tech event. There will be three technical tracks (or Shards as we call them) and, of course, this year we see the return of Shard N. Shard N hosts our high-end technical tutorial sessions, where members of MongoDB technical staff get more time to cover more material in depth. These sessions are designed for our most seasoned developers to get new insights into how our products and offerings can be used to solve the most challenging business problems. This year's sessions include John Page on comparing RDBMS and MongoDB performance, and the real skinny on workload isolation from everyone’s favourite MongoDB Ninja, Asya Kamsky.

In the main Shards we have Keith Bostic talking about how we built the new transactions engine, and lots of sessions on our new serverless platform, MongoDB Stitch. Remember, regardless of whether you are a veteran of MongoDB or coming to the database for the first time, the four parallel tracks will ensure that there is always something on for everybody.

The people in white coats will be back again this year. Who are they? They are members of our MongoDB Consulting and Solution Architecture teams, and nobody knows more about MongoDB than these folks. You can book a slot with them via a calendaring system that will be sent out after registration.

All attendees will receive:

A MongoDB Europe 2018 hoodie and other exclusive swag such as MongoDB Europe stickers, buttons, and pins
3 months of free on-demand access to MongoDB University (courses in Java, Python, and Node.js are included)
50% off MongoDB Certification exams
Future discounts on MongoDB events as Alumni

We will have the top-of-the-line London street food initiative, Kerb, catering the day, and other fun stuff like a nitro-ice-cream parlour and all-day table tennis tournaments. The day will finish off with a drinks reception on us!

Register today for your tickets. Get a 25% discount per person for groups of 3 or more. And just for reading this far you get another 20% off by using the code JOED20. What’s not to like?

See you all on the 8th of November at Old Billingsgate.

October 2, 2018

PyMongo Monday - Episode 3 - Read

Previously we covered:

Episode 1: Setting Up Your MongoDB Environment
Episode 2: CRUD - Create

In this episode (episode 3) we are going to cover the Read part of CRUD. MongoDB provides a query interface through the find function. We are going to demonstrate Read by doing find queries on a collection hosted in MongoDB Atlas. The MongoDB connection string is:

mongodb+srv://demo:demo@demodata-rgl39.mongodb.net/test?retryWrites=true

This is a cluster running a database called demo with a single collection called zipcodes. Every ZIP code in the US is in this database. To connect to this cluster we are going to use the Python shell.

$ cd ep003
$ pipenv shell
Launching subshell in virtual environment…
JD10Gen:ep003 jdrumgoole$ . /Users/jdrumgoole/.local/share/virtualenvs/ep003-blzuFbED/bin/activate
(ep003-blzuFbED) JD10Gen:ep003 jdrumgoole$ python
Python 3.6.5 (v3.6.5:f59c0932b4, Mar 28 2018, 03:03:55)
[GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from pymongo import MongoClient
>>> client = MongoClient(host="mongodb+srv://demo:demo@demodata-rgl39.mongodb.net/test?retryWrites=true")
>>> db = client["demo"]
>>> zipcodes=db["zipcodes"]
>>> zipcodes.find_one()
{'_id': '01069', 'city': 'PALMER', 'loc': [-72.328785, 42.176233], 'pop': 9778, 'state': 'MA'}
>>>

The find_one query will get the first record in the collection. You can see the structure of the fields in the returned document. The _id is the ZIP code. The city is the city name. The loc is the GPS coordinates of each ZIP code. The pop is the population size, and the state is the two-letter state code. We are connecting with the default user demo with the password demo. This user only has read-only access to this database and collection.

So what if we want to select all the ZIP codes for a particular city? Querying in MongoDB consists of constructing a partial JSON document that matches the fields you want to select on. So to get all the ZIP codes in the city of PALMER we use the following query:

>>> zipcodes.find({'city': 'PALMER'})
<pymongo.cursor.Cursor object at 0x104c155c0>
>>>

Note we are using find() rather than find_one() as we want to return all the matching documents. In this case find() will return a cursor. To print the cursor contents just keep calling .next() on the cursor as follows:

>>> cursor=zipcodes.find({'city': 'PALMER'})
>>> cursor.next()
{'_id': '01069', 'city': 'PALMER', 'loc': [-72.328785, 42.176233], 'pop': 9778, 'state': 'MA'}
>>> cursor.next()
{'_id': '37365', 'city': 'PALMER', 'loc': [-85.564272, 35.374062], 'pop': 1685, 'state': 'TN'}
>>> cursor.next()
{'_id': '50571', 'city': 'PALMER', 'loc': [-94.543155, 42.641871], 'pop': 1119, 'state': 'IA'}
>>> cursor.next()
{'_id': '66962', 'city': 'PALMER', 'loc': [-97.112214, 39.619165], 'pop': 276, 'state': 'KS'}
>>> cursor.next()
{'_id': '68864', 'city': 'PALMER', 'loc': [-98.241146, 41.178757], 'pop': 1142, 'state': 'NE'}
>>> cursor.next()
{'_id': '75152', 'city': 'PALMER', 'loc': [-96.679429, 32.438714], 'pop': 2605, 'state': 'TX'}
>>> cursor.next()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/jdrumgoole/.local/share/virtualenvs/ep003-blzuFbED/lib/python3.6/site-packages/pymongo/cursor.py", line 1197, in next
    raise StopIteration
StopIteration

As you can see, cursors follow the Python iterator protocol and will raise a StopIteration exception when the cursor is exhausted.
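Because the cursor implements the iterator protocol, an ordinary for loop is the idiomatic way to drain it without handling StopIteration yourself. A small sketch (not from the original post), reusing the read-only Atlas connection string above:

from pymongo import MongoClient

client = MongoClient("mongodb+srv://demo:demo@demodata-rgl39.mongodb.net/test?retryWrites=true")
zipcodes = client["demo"]["zipcodes"]

# A cursor is a Python iterator, so a for loop consumes it until it is exhausted
for doc in zipcodes.find({"city": "PALMER"}):
    print(doc["_id"], doc["state"], doc["pop"])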
However, calling .next() continuously is a bit of a drag. Instead, you can import the pymongo_shell package and call the print_cursor() function. It will print out twenty records at a time.

>>> from pymongo_shell import print_cursor
>>> print_cursor(zipcodes.find({'city': 'PALMER'}))
{'_id': '01069', 'city': 'PALMER', 'loc': [-72.328785, 42.176233], 'pop': 9778, 'state': 'MA'}
{'_id': '37365', 'city': 'PALMER', 'loc': [-85.564272, 35.374062], 'pop': 1685, 'state': 'TN'}
{'_id': '50571', 'city': 'PALMER', 'loc': [-94.543155, 42.641871], 'pop': 1119, 'state': 'IA'}
{'_id': '66962', 'city': 'PALMER', 'loc': [-97.112214, 39.619165], 'pop': 276, 'state': 'KS'}
{'_id': '68864', 'city': 'PALMER', 'loc': [-98.241146, 41.178757], 'pop': 1142, 'state': 'NE'}
{'_id': '75152', 'city': 'PALMER', 'loc': [-96.679429, 32.438714], 'pop': 2605, 'state': 'TX'}
>>>

If we don't need all the fields in the document we can use a projection to remove some fields. The projection is the second doc argument to the find() function, and it specifies explicitly which fields to return:

>>> print_cursor(zipcodes.find({'city': 'PALMER'}, {'city':1,'pop':1}))
{'_id': '01069', 'city': 'PALMER', 'pop': 9778}
{'_id': '37365', 'city': 'PALMER', 'pop': 1685}
{'_id': '50571', 'city': 'PALMER', 'pop': 1119}
{'_id': '66962', 'city': 'PALMER', 'pop': 276}
{'_id': '68864', 'city': 'PALMER', 'pop': 1142}
{'_id': '75152', 'city': 'PALMER', 'pop': 2605}

To include multiple fields in a query just add them to the query doc. Each additional field is combined with the others as a boolean AND when selecting the documents to return:

>>> print_cursor(zipcodes.find({'city': 'PALMER', 'state': 'MA'}, {'city':1,'pop':1}))
{'_id': '01069', 'city': 'PALMER', 'pop': 9778}
>>>

To pick documents with one field or the other we can use the $or operator.
>>> print_cursor(zipcodes.find({ '$or' : [ {'city': 'PALMER' }, {'state': 'MA'}]}))
{'_id': '01069', 'city': 'PALMER', 'loc': [-72.328785, 42.176233], 'pop': 9778, 'state': 'MA'}
{'_id': '01002', 'city': 'CUSHMAN', 'loc': [-72.51565, 42.377017], 'pop': 36963, 'state': 'MA'}
{'_id': '01012', 'city': 'CHESTERFIELD', 'loc': [-72.833309, 42.38167], 'pop': 177, 'state': 'MA'}
{'_id': '01073', 'city': 'SOUTHAMPTON', 'loc': [-72.719381, 42.224697], 'pop': 4478, 'state': 'MA'}
{'_id': '01096', 'city': 'WILLIAMSBURG', 'loc': [-72.777989, 42.408522], 'pop': 2295, 'state': 'MA'}
{'_id': '01262', 'city': 'STOCKBRIDGE', 'loc': [-73.322263, 42.30104], 'pop': 2200, 'state': 'MA'}
{'_id': '01240', 'city': 'LENOX', 'loc': [-73.271322, 42.364241], 'pop': 5001, 'state': 'MA'}
{'_id': '01370', 'city': 'SHELBURNE FALLS', 'loc': [-72.739059, 42.602203], 'pop': 4525, 'state': 'MA'}
{'_id': '01340', 'city': 'COLRAIN', 'loc': [-72.726508, 42.67905], 'pop': 2050, 'state': 'MA'}
{'_id': '01462', 'city': 'LUNENBURG', 'loc': [-71.726642, 42.58843], 'pop': 9117, 'state': 'MA'}
{'_id': '01473', 'city': 'WESTMINSTER', 'loc': [-71.909599, 42.548319], 'pop': 6191, 'state': 'MA'}
{'_id': '01510', 'city': 'CLINTON', 'loc': [-71.682847, 42.418147], 'pop': 13269, 'state': 'MA'}
{'_id': '01569', 'city': 'UXBRIDGE', 'loc': [-71.632869, 42.074426], 'pop': 10364, 'state': 'MA'}
{'_id': '01775', 'city': 'STOW', 'loc': [-71.515019, 42.430785], 'pop': 5328, 'state': 'MA'}
{'_id': '01835', 'city': 'BRADFORD', 'loc': [-71.08549, 42.758597], 'pop': 12078, 'state': 'MA'}
{'_id': '01845', 'city': 'NORTH ANDOVER', 'loc': [-71.109004, 42.682583], 'pop': 22792, 'state': 'MA'}
{'_id': '01851', 'city': 'LOWELL', 'loc': [-71.332882, 42.631548], 'pop': 28154, 'state': 'MA'}
{'_id': '01867', 'city': 'READING', 'loc': [-71.109021, 42.527986], 'pop': 22539, 'state': 'MA'}
{'_id': '01906', 'city': 'SAUGUS', 'loc': [-71.011093, 42.463344], 'pop': 25487, 'state': 'MA'}
{'_id': '01929', 'city': 'ESSEX', 'loc': [-70.782794, 42.628629], 'pop': 3260, 'state': 'MA'}
Hit Return to continue

We can do range selections by using the $lt and $gt operators:

>>> print_cursor(zipcodes.find({'pop' : { '$lt':8, '$gt':5}}))
{'_id': '05901', 'city': 'AVERILL', 'loc': [-71.700268, 44.992304], 'pop': 7, 'state': 'VT'}
{'_id': '12874', 'city': 'SILVER BAY', 'loc': [-73.507062, 43.697804], 'pop': 7, 'state': 'NY'}
{'_id': '32830', 'city': 'LAKE BUENA VISTA', 'loc': [-81.519034, 28.369378], 'pop': 6, 'state': 'FL'}
{'_id': '59058', 'city': 'MOSBY', 'loc': [-107.789149, 46.900453], 'pop': 7, 'state': 'MT'}
{'_id': '59242', 'city': 'HOMESTEAD', 'loc': [-104.591805, 48.429616], 'pop': 7, 'state': 'MT'}
{'_id': '71630', 'city': 'ARKANSAS CITY', 'loc': [-91.232529, 33.614328], 'pop': 7, 'state': 'AR'}
{'_id': '82224', 'city': 'LOST SPRINGS', 'loc': [-104.920901, 42.729835], 'pop': 6, 'state': 'WY'}
{'_id': '88412', 'city': 'BUEYEROS', 'loc': [-103.666894, 36.013541], 'pop': 7, 'state': 'NM'}
{'_id': '95552', 'city': 'MAD RIVER', 'loc': [-123.413994, 40.352352], 'pop': 6, 'state': 'CA'}
{'_id': '99653', 'city': 'PORT ALSWORTH', 'loc': [-154.433803, 60.636416], 'pop': 7, 'state': 'AK'}
>>>

Again, sets of $lt and $gt clauses are combined as a boolean AND. If you need different logic you can use the boolean operators.

Conclusion

Today we have seen how to query documents using a query template, how to reduce the output using projections, and how to create more complex queries using the boolean and $lt and $gt operators. Next time we will talk about the Update portion of CRUD.
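Cursors can also be shaped on the server before any documents come back. A short sketch (not from the original post) combining a filter, a projection, a sort, and a limit against the same read-only cluster:

from pymongo import MongoClient

client = MongoClient("mongodb+srv://demo:demo@demodata-rgl39.mongodb.net/test?retryWrites=true")
zipcodes = client["demo"]["zipcodes"]

# The ten least populated non-empty Massachusetts ZIP codes, smallest first
cursor = (zipcodes.find({"state": "MA", "pop": {"$gt": 0}},
                        {"city": 1, "pop": 1})
          .sort("pop", 1)   # 1 = ascending, -1 = descending
          .limit(10))
for doc in cursor:
    print(doc)

Because sort and limit run on the server, only the ten matching documents cross the network.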
MongoDB has a very rich and full-featured query language, including support for querying using full-text, geospatial coordinates, and nested queries. Give the query language a spin with the Python shell using the tools we outlined above. The complete zip codes dataset is publicly available for read queries at the MongoDB URI:

mongodb+srv://demo:demo@demodata-rgl39.mongodb.net/test?retryWrites=true

Try MongoDB Atlas via the free tier today: a free MongoDB cluster for your own personal use, forever!

October 1, 2018

PyMongo Monday: PyMongo Create

Last time we showed you how to set up your environment. In the next few episodes we will take you through the standard CRUD operators that every database is expected to support. In this episode we will focus on the Create in CRUD.

Create

Let's look at how we insert JSON documents into MongoDB. First let's start a local single instance of mongod using m.

$ m use stable
2018-08-28T14:58:06.674+0100 I CONTROL  [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
2018-08-28T14:58:06.689+0100 I CONTROL  [initandlisten] MongoDB starting : pid=43658 port=27017 dbpath=/data/db 64-bit host=JD10Gen.local
2018-08-28T14:58:06.689+0100 I CONTROL  [initandlisten] db version v4.0.2
2018-08-28T14:58:06.689+0100 I CONTROL  [initandlisten] git version: fc1573ba18aee42f97a3bb13b67af7d837826b47
2018-08-28T14:58:06.689+0100 I CONTROL  [initandlisten] allocator: syste
etc...

The mongod starts listening on port 27017 by default. As every MongoDB driver defaults to connecting on localhost:27017, we won't need to specify a connection string explicitly in these early examples.

Now, we want to work with the Python driver. These examples are using Python 3.6.5 but everything should work with versions as old as Python 2.7 without problems.

Unlike SQL databases, databases and collections in MongoDB only have to be named to be created. As we will see later, this is a lazy creation process: the database and corresponding collection are actually only created when a document is inserted.

$ python
Python 3.6.5 (v3.6.5:f59c0932b4, Mar 28 2018, 03:03:55)
[GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import pymongo
>>> client = pymongo.MongoClient()
>>> database = client[ "ep002" ]
>>> people_collection = database[ "people_collection" ]
>>> result=people_collection.insert_one({"name" : "Joe Drumgoole"})
>>> result.inserted_id
ObjectId('5b7d297cc718bc133212aa94')
>>> result.acknowledged
True
>>> people_collection.find_one()
{'_id': ObjectId('5b62e6f8c3b498fbfdc1c20c'), 'name': 'Joe Drumgoole'}
>>>

First we import the pymongo library. Then we create the local client proxy object, client = pymongo.MongoClient(). The client object manages a connection pool to the server and can be used to set many operational parameters related to server connections. We can leave the parameter list to the MongoClient call blank. Remember, the server by default listens on port 27017 and the client by default attempts to connect to localhost:27017.

Once we have a client object, we can now create a database, ep002, and a collection, people_collection. Note that we do not need an explicit DDL statement.

Using Compass to examine the database server

A database is effectively a container for collections. A collection provides a container for documents. Neither the database nor the collection will be created on the server until you actually insert a document. If you check the server by connecting with MongoDB Compass you will see that there are no databases or collections on this server before the insert_one call. These commands are lazily evaluated. So, until we actually insert a document into the collection, nothing happens on the server.
Once we insert a document:

>>> result=database.people_collection.insert_one({"name" : "Joe Drumgoole"})
>>> result.inserted_id
ObjectId('5b7d297cc718bc133212aa94')
>>> result.acknowledged
True
>>> people_collection.find_one()
{'_id': ObjectId('5b62e6f8c3b498fbfdc1c20c'), 'name': 'Joe Drumgoole'}
>>>

we will see that the database, the collection, and the document are created, and we can see the document in the database.

The _id Field

Every object that is inserted into a MongoDB database gets an automatically generated _id field. This field is guaranteed to be unique for every document inserted into the collection. This unique property is enforced as the _id field is automatically indexed and the index is unique. The _id field is generated on the client and you can see the PyMongo generation code in the objectid.py file; just search for the def _generate string. All MongoDB drivers generate _id fields on the client side.

The _id field allows us to insert the same JSON object many times and have each one be uniquely identified. The _id field even gives a temporal ordering, which you can recover from an ObjectId via its generation_time method (ObjectIds can also be built from a timestamp, as the sketch at the end of this post shows).

>>> from bson import ObjectId
>>> x=ObjectId('5b7d297cc718bc133212aa94')
>>> x.generation_time
datetime.datetime(2018, 8, 22, 9, 14, 36, tzinfo=<bson.tz_util.FixedOffset object at 0x...>)
>>> print(x.generation_time)
2018-08-22 09:14:36+00:00
>>>

Wrap Up

That is Create in MongoDB. We started a mongod instance, created a MongoClient proxy, created a database and a collection, and finally made them spring to life by inserting a document.

Next up we will talk more about the Read part of CRUD. In MongoDB this is the find query, which we saw a little bit of earlier on in this episode.

For direct feedback, please pose your questions on twitter/jdrumgoole; that way everyone can see the answers. The best way to try out MongoDB is via MongoDB Atlas, our Database as a Service. It’s free to get started with MongoDB Atlas, so give it a try today.
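As promised above, here is a sketch (not from the original post) of building an ObjectId from a datetime using bson's ObjectId.from_datetime. Such an id contains only the timestamp portion, which makes it useful as a boundary in creation-time range queries over the auto-generated _id field:

from datetime import datetime, timezone
from bson.objectid import ObjectId
from pymongo import MongoClient

client = MongoClient()
people_collection = client["ep002"]["people_collection"]

# An ObjectId built from a datetime carries only the timestamp bytes,
# so it sorts before any real id generated at or after that moment
boundary = ObjectId.from_datetime(datetime(2018, 8, 22, tzinfo=timezone.utc))

# Find documents whose auto-generated _id was created on or after the boundary
for doc in people_collection.find({"_id": {"$gte": boundary}}):
    print(doc)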

September 17, 2018

Introduction to MongoDB Transactions in Python

Multi-document transactions arrived in MongoDB 4.0 in June 2018. MongoDB has always been transactional around updates to a single document. Now, with multi-document transactions, we can wrap a set of database operations inside a start and commit transaction call. This ensures that even with inserts and/or updates happening across multiple collections and/or databases, the external view of the data meets ACID constraints.

To demonstrate transactions in the wild we use a trivial example app that emulates a flight booking for an online airline application. In this simplified booking we need to undertake three operations:

Allocate a seat (seat_collection)
Pay for the seat (payment_collection)
Update the count of allocated seats and sales (audit_collection)

For this application we will use three separate collections for these documents, as noted in parentheses above. The code in transaction_main.py updates these collections in serial unless the --usetxns argument is used, in which case the complete set of operations is wrapped inside an ACID transaction. The code is built directly on the MongoDB Python driver (PyMongo 3.7.1); see the section on client sessions for an overview of the new transactions API in 3.7.1.

The goal of this code is to demonstrate to the Python developer just how easy it is to convert existing code to transactions if required, or to port older SQL-based systems.

Setting up your environment

The following files can be found in the associated github repo, pymongo-transactions:

.gitignore: standard Github .gitignore for Python
LICENSE: Apache 2.0 (standard Github) license
Makefile: Makefile with targets for default operations
transaction_main.py: run a set of writes with and without transactions; run python transaction_main.py -h for help
transaction_retry.py: the file containing the transactions retry functions
watch_transactions.py: use a MongoDB change stream to watch collections as they change when transaction_main.py is running
kill_primary.py: starts a MongoDB replica set (on port 27100) and kills the primary on a regular basis; this is used to emulate an election happening in the middle of a transaction
featurecompatibility.py: check and/or set feature compatibility for the database (it needs to be set to "4.0" for transactions)

You can clone this repo and work alongside us during this blog post (please file any problems on the Issues tab of the repo). We assume for all that follows that you have Python 3.6 or greater correctly installed and on your path.

The Makefile outlines the operations that are required to set up the test environment. All the programs in this example use a port range starting at 27100 to ensure that this example does not clash with an existing MongoDB installation.

Preparation

To set up the environment you can run through the following steps manually. People that have make can speed up installation by using the make install command.

Set up a Python virtualenv:

$ cd pymongo-transactions
$ virtualenv -p python3 venv
$ source venv/bin/activate

Install the Python MongoDB driver, pymongo. Install the latest version of the PyMongo MongoDB driver (3.7.1 at the time of writing):

pip install --upgrade pymongo

Install mtools. MTools is a collection of helper scripts to parse, filter, and visualize MongoDB log files (mongod, mongos). mtools also includes mlaunch, a utility to quickly set up complex MongoDB test environments on a local machine. For this demo we are only going to use the mlaunch program:
pip install mtools

The mlaunch program also requires the psutil package:

pip install psutil

The mlaunch program gives us a simple command to start a MongoDB replica set, as transactions are only supported on a replica set. Start a replica set whose name is txntest (see the make init_server target for details):

mlaunch init --port 27100 --replicaset --name "txntest"

Using the Makefile for configuration

There is a Makefile with targets for all these operations. For those of you on platforms without access to make, it should be easy enough to cut and paste the commands out of the targets and run them on the command line.

Running the Makefile:

cd pymongo-transactions
make

You will need to have MongoDB 4.0 on your path. There are other convenience targets for starting the demo programs:

make notxns: start the transactions client without using transactions
make usetxns: start the transactions client with transactions enabled
make watch_seats: watch the seats collection changing
make watch_payments: watch the payments collection changing

Running the transactions example

The transactions example consists of two Python programs: transaction_main.py and watch_transactions.py.

Running transaction_main.py:

$ python transaction_main.py -h
usage: transaction_main.py [-h] [--host HOST] [--usetxns] [--delay DELAY]
                           [--iterations ITERATIONS]
                           [--randdelay RANDDELAY RANDDELAY]

optional arguments:
  -h, --help            show this help message and exit
  --host HOST           MongoDB URI [default: mongodb://localhost:27100,localhost:27101,localhost:27102/?replicaSet=txntest&retryWrites=true]
  --usetxns             Use transactions [default: False]
  --delay DELAY         Delay between two insertion events [default: 1.0]
  --iterations ITERATIONS
                        Run N iterations. 0 means run forever
  --randdelay RANDDELAY RANDDELAY
                        Create a delay set randomly between the two bounds [default: None]

You can choose to use --delay or --randdelay; if you use both, --delay takes precedence. The --randdelay parameter creates a random delay between a lower and an upper bound that will be added between each insertion event.

The transaction_main.py program knows to use the txntest replica set and the right default port range. To run the program without transactions you can run it with no arguments:

$ python transaction_main.py
using collection: SEATSDB.seats
using collection: PAYMENTSDB.payments
using collection: AUDITDB.audit
Using a fixed delay of 1.0

1. Booking seat: '1A'
1. Sleeping: 1.000
1. Paying 330 for seat '1A'
2. Booking seat: '2A'
2. Sleeping: 1.000
2. Paying 450 for seat '2A'
3. Booking seat: '3A'
3. Sleeping: 1.000
3. Paying 490 for seat '3A'
4. Booking seat: '4A'
4. Sleeping: 1.000
^C

The program runs a function called book_seat() which books a seat on a plane by adding documents to three collections. First it adds the seat allocation to the seats_collection, then it adds a payment to the payments_collection, and finally it updates an audit count in the audit_collection. (This is a much simplified booking process used purely for illustration.)

The default is to run the program without using transactions. To use transactions we have to add the command line flag --usetxns. Run this to test that you are running MongoDB 4.0 and that the correct featureCompatibility is configured (it must be set to "4.0"). If you install MongoDB 4.0 over an existing /data directory containing 3.6 databases then featureCompatibility will be set to "3.6" by default and transactions will not be available.
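The repo's featurecompatibility.py handles this check; the underlying server commands are getParameter and setFeatureCompatibilityVersion. A minimal sketch of the same idea (not the repo's actual code):

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27100")

# Read the current featureCompatibilityVersion from the admin database
reply = client.admin.command({"getParameter": 1, "featureCompatibilityVersion": 1})
print(reply["featureCompatibilityVersion"])

# Raise it to 4.0 so multi-document transactions become available
client.admin.command("setFeatureCompatibilityVersion", "4.0")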
Note: If you get the following error running python transaction_main.py --usetxns, it means you are picking up an older version of pymongo (older than 3.7.x) for which there is no multi-document transactions support:

Traceback (most recent call last):
  File "transaction_main.py", line 175, in <module>
    total_delay = total_delay + run_transaction_with_retry(booking_functor, session)
  File "/Users/jdrumgoole/GIT/pymongo-transactions/transaction_retry.py", line 52, in run_transaction_with_retry
    with session.start_transaction():
AttributeError: 'ClientSession' object has no attribute 'start_transaction'

Watching Transactions

To actually see the effect of transactions we need to watch what is happening inside the collections SEATSDB.seats and PAYMENTSDB.payments. We can do this with watch_transactions.py. This script uses MongoDB Change Streams to see what's happening inside a collection in real time. We need to run two of these in parallel, so it's best to line them up side by side. Here is the watch_transactions.py program:

$ python watch_transactions.py -h
usage: watch_transactions.py [-h] [--host HOST] [--collection COLLECTION]

optional arguments:
  -h, --help            show this help message and exit
  --host HOST           mongodb URI for connecting to server [default: mongodb://localhost:27100/?replicaSet=txntest]
  --collection COLLECTION
                        Watch [default: PYTHON_TXNS_EXAMPLE.seats_collection]

We need to watch each collection, so in two separate terminal windows start the watchers.

Window 1:

$ python watch_transactions.py --watch seats
Watching: seats
...

Window 2:

$ python watch_transactions.py --watch payments
Watching: payments
...

What happens when you run without transactions?

Let's run the code without transactions first. If you examine the transaction_main.py code you will see a function book_seat:

def book_seat(seats, payments, audit, seat_no, delay_range, session=None):
    '''
    Run two inserts in sequence.
    If session is not None we are in a transaction

    :param seats: seats collection
    :param payments: payments collection
    :param seat_no: the number of the seat to be booked (defaults to row A)
    :param delay_range: A tuple indicating a random delay between two ranges or a single float fixed delay
    :param session: Session object required by a MongoDB transaction
    :return: the delay_period for this transaction
    '''
    price = random.randrange(200, 500, 10)
    if type(delay_range) == tuple:
        delay_period = random.uniform(delay_range[0], delay_range[1])
    else:
        delay_period = delay_range

    # Book Seat
    seat_str = "{}A".format(seat_no)
    print(count(i, "Booking seat: '{}'".format(seat_str)))
    seats.insert_one({"flight_no": "EI178",
                      "seat": seat_str,
                      "date": datetime.datetime.utcnow()},
                     session=session)
    print(count(seat_no, "Sleeping: {:02.3f}".format(delay_period)))

    # Pay for seat
    time.sleep(delay_period)
    payments.insert_one({"flight_no": "EI178",
                         "seat": seat_str,
                         "date": datetime.datetime.utcnow(),
                         "price": price},
                        session=session)
    audit.update_one({"audit": "seats"}, {"$inc": {"count": 1}}, upsert=True)
    print(count(seat_no, "Paying {} for seat '{}'".format(price, seat_str)))

    return delay_period

This program emulates a very simplified airline booking, with a seat being allocated and then paid for. These two steps are often separated by a reasonable time frame (e.g. seat allocation versus external credit card validation and anti-fraud check) and we emulate this by inserting a delay. The default is 1 second.
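The watchers introduced above are built on PyMongo's change stream API. A minimal sketch of the core loop (collection names assumed; this is not the script's actual code):

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27100/?replicaSet=txntest")
seats = client["SEATSDB"]["seats"]

# watch() returns a blocking iterator of change events for this collection;
# it requires a replica set, which mlaunch has already given us
with seats.watch() as stream:
    for change in stream:
        print(change["operationType"], change.get("fullDocument"))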
Now, with the two watch_transactions.py scripts running for the seats and payments collections, we can run transaction_main.py as follows:

$ python transaction_main.py

The first run is with no transactions enabled. The bottom window shows transaction_main.py running. On the top left we are watching the inserts to the seats collection. On the top right we are watching inserts to the payments collection. We can see that the payments window lags the seats window, as the watchers only update when the insert is complete. Thus seats sold cannot be easily reconciled with corresponding payments. If, after the third seat has been booked, we CTRL-C the program, we can see that the program exits before writing the payment. This is reflected in the change stream for the payments collection, which only shows payments for seats 1A and 2A versus seat allocations for 1A, 2A, and 3A.

If we want payments and seats to be instantly reconcilable and consistent, we must execute the inserts inside a transaction.

What happens when you run with transactions?

Now let's run the same system with --usetxns enabled:

$ python transaction_main.py --usetxns

We run with the exact same setup but now set --usetxns. Note how the change streams are now interlocked and are updated in parallel. This is because all the updates only become visible when the transaction is committed. Note how we aborted the third transaction by hitting CTRL-C. Now neither the seat nor the payment appears in the change streams, unlike the first example where the seat went through. This is where transactions shine: in a world where all or nothing is the watchword, we never want to keep seats allocated unless they are paid for.

What happens during failure?

In a MongoDB replica set all writes are directed to the primary node. If the primary node fails or becomes inaccessible (e.g. due to a network partition), writes in flight may fail. In a non-transactional scenario the driver will recover from a single failure and retry the write. In a multi-document transaction we must recover and retry in the event of these kinds of transient failures. This code is encapsulated in transaction_retry.py. We both retry the transaction and retry the commit to handle scenarios where the primary fails within the transaction and/or the commit operation.

def commit_with_retry(session):
    while True:
        try:
            # Commit uses write concern set at transaction start.
            session.commit_transaction()
            print("Transaction committed.")
            break
        except (pymongo.errors.ConnectionFailure, pymongo.errors.OperationFailure) as exc:
            # Can retry commit
            if exc.has_error_label("UnknownTransactionCommitResult"):
                print("UnknownTransactionCommitResult, retrying "
                      "commit operation ...")
                continue
            else:
                print("Error during commit ...")
                raise

def run_transaction_with_retry(functor, session):
    assert (isinstance(functor, Transaction_Functor))
    while True:
        try:
            with session.start_transaction():
                result = functor(session)  # performs transaction
                commit_with_retry(session)
            break
        except (pymongo.errors.ConnectionFailure, pymongo.errors.OperationFailure) as exc:
            # If transient error, retry the whole transaction
            if exc.has_error_label("TransientTransactionError"):
                print("TransientTransactionError, retrying "
                      "transaction ...")
                continue
            else:
                raise

    return result

In order to observe what happens during elections we can use the script kill_primary.py. This script will start a replica set and continuously kill the primary:

$ make kill_primary
. venv/bin/activate && python kill_primary.py
no nodes started.
Current electionTimeoutMillis: 500
1. (Re)starting replica-set
no nodes started.
1. Getting list of mongod processes
Process list written to mlaunch.procs
1. Getting replica set status
1. Killing primary node: 31029
1. Sleeping: 1.0
2. (Re)starting replica-set
launching: "/usr/local/mongodb/bin/mongod" on port 27101
2. Getting list of mongod processes
Process list written to mlaunch.procs
2. Getting replica set status
2. Killing primary node: 31045
2. Sleeping: 1.0
3. (Re)starting replica-set
launching: "/usr/local/mongodb/bin/mongod" on port 27102
3. Getting list of mongod processes
Process list written to mlaunch.procs
3. Getting replica set status
3. Killing primary node: 31137
3. Sleeping: 1.0

kill_primary.py resets electionTimeoutMillis to 500ms from its default of 10000ms (10 seconds). This allows elections to resolve more quickly for the purposes of this test, as we are running everything locally.

Once kill_primary.py is running we can start up transaction_main.py again using the --usetxns argument:

$ make usetxns
. venv/bin/activate && python transaction_main.py --usetxns
Forcing collection creation (you can't create collections inside a txn)
Collections created
using collection: PYTHON_TXNS_EXAMPLE.seats
using collection: PYTHON_TXNS_EXAMPLE.payments
using collection: PYTHON_TXNS_EXAMPLE.audit
Using a fixed delay of 1.0
Using transactions

1. Booking seat: '1A'
1. Sleeping: 1.000
1. Paying 440 for seat '1A'
Transaction committed.
2. Booking seat: '2A'
2. Sleeping: 1.000
2. Paying 330 for seat '2A'
Transaction committed.
3. Booking seat: '3A'
3. Sleeping: 1.000
TransientTransactionError, retrying transaction ...
3. Booking seat: '3A'
3. Sleeping: 1.000
3. Paying 240 for seat '3A'
Transaction committed.
4. Booking seat: '4A'
4. Sleeping: 1.000
4. Paying 410 for seat '4A'
Transaction committed.
5. Booking seat: '5A'
5. Sleeping: 1.000
5. Paying 260 for seat '5A'
Transaction committed.
6. Booking seat: '6A'
6. Sleeping: 1.000
TransientTransactionError, retrying transaction ...
6. Booking seat: '6A'
6. Sleeping: 1.000
6. Paying 380 for seat '6A'
Transaction committed.
...

As you can see, during elections the transaction will be aborted and must be retried. If you look at the transaction_retry.py code you will see how this happens. If a write operation encounters an error it will throw one of the following exceptions:

pymongo.errors.ConnectionFailure
pymongo.errors.OperationFailure

Within these exceptions there will be a label called TransientTransactionError. This label can be detected using the has_error_label(label) method, which is available in pymongo 3.7.x. Transient errors can be recovered from, and the retry code in transaction_retry.py retries both writes and commits (see above).

Conclusions

Multi-document transactions are the final piece of the jigsaw for SQL developers who have been shying away from trying MongoDB. ACID transactions make the programmer's job easier and give teams that are migrating from an existing SQL schema a much more consistent and convenient transition path. As most migrations involve a move from highly normalised data structures to more natural and flexible nested JSON documents, one would expect the number of required multi-document transactions to be smaller in a properly constructed MongoDB application. But where multi-document transactions are required, programmers can now include them using very similar syntax to SQL.
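All of the retry machinery above wraps a very small core API. A stripped-down sketch of a transactional booking (collection and field names are illustrative, not taken verbatim from the repo):

import pymongo

client = pymongo.MongoClient(
    "mongodb://localhost:27100,localhost:27101,localhost:27102/?replicaSet=txntest")
seats = client["SEATSDB"]["seats"]
payments = client["PAYMENTSDB"]["payments"]

with client.start_session() as session:
    # Exiting the with-block commits the transaction; an exception aborts it
    with session.start_transaction():
        seats.insert_one({"flight_no": "EI178", "seat": "1A"}, session=session)
        payments.insert_one({"flight_no": "EI178", "seat": "1A", "price": 330},
                            session=session)

Both writes become visible to other clients atomically at commit time, which is exactly the interlocked behaviour the change stream windows showed.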
With ACID transactions in MongoDB 4.0, it can now be the first choice for an even broader range of application use cases. Why not try transactions today by setting up your first cluster on MongoDB Atlas, our Database as a Service offering? To try it locally, download MongoDB 4.0. Join us at MongoDB Europe 2018 for deep-dive technical sessions and hands-on tutorials.

August 2, 2018

SEGA HARDlight Migrates to MongoDB Atlas to Simplify Ops and Improve Experience for Millions of Mobile Gamers

It was way back in the summer of 1991 that Sonic the Hedgehog first chased rings across our 2D screens. Gaming has come a long way since then: from a static TV-and-console setup, to online PC gaming in the noughties, and now to mobile and virtual reality. Surprisingly, for most of those 25 years the underlying infrastructure that powered these games hasn’t really changed much at all. It was all relational databases. But with the ever-increasing need for scale, flexibility, and creativity in games, that’s changing fast. SEGA HARDlight is leading this shift by adopting a DevOps culture and using MongoDB Atlas, the cloud-hosted MongoDB developer data platform, to deliver the best possible gaming experience.

Bringing Icons to Mobile Games

SEGA HARDlight is a mobile development studio for SEGA, a gaming company you might have heard of. Based in the UK’s Royal Leamington Spa, SEGA HARDlight is well known for bringing the much-loved blue mascot, Sonic the Hedgehog, to the small screen. Along with a range of Sonic games, HARDlight is also responsible for building and running a number of other iconic titles such as Crazy Taxi: City Rush and Kingdom Conquest: Dark Empire.

Sonic Dash

Earlier versions of the mobile games, such as Sonic Jump and Sonic Dash, didn’t require a connection to the internet and had no server functionality. As they were relatively static games, developers initially supported the releases with an in-house tech stack based around Java and MySQL, hosted in SEGA HARDlight’s own data centre. The standard practice for launching these games involved load testing the servers to the point of breaking, then provisioning the resources to handle an acceptable failure point. This limited application functionality and resulted in service outages when the provisioned resources reached their breaking point.

As the games started to add more online functionality and increased in popularity, that traditional stack started to creak. Mobile games have an interesting load pattern: people flock in extreme numbers very soon after the release. For the most popular games, this can mean many millions of people in just a few days or even hours. The peak is usually short and then quickly drops to a long tail of dedicated players. Provisioning for this kind of traffic with a dynamic game is a major headache. The graph from the Crazy Taxi: City Rush launch in 2014 demonstrates just how spiky the traffic can be.

We spoke with Yordan Gyurchev, Technical Director at SEGA HARDlight, who explained: “With these massive volumes even minor changes in the database have a big impact. To provide a perfect gaming experience developers need to be intimately familiar with the performance trade-offs of the database they’re using.”

Supersonic Scaling

SEGA HARDlight knew that the games were only going to get more online functionality and generate even more massive bursts of user activity. Much of the gaming data was also account-based, so it didn’t fit naturally into the rows and columns of relational databases. In order to address these limitations, the team searched for alternatives. After reviewing Riak, Cassandra, and Couchbase, but feeling they were either too complex to manage or didn’t have the mature support needed to satisfy the company’s SLAs, the HARDlight engineers looked to MongoDB Atlas, the MongoDB developer data platform. Then came extensive evaluations and testing across multiple dimensions such as cost, maintenance, monitoring, and backups.
It was well known that MongoDB natively had the scalability and flexibility to handle large volumes and always-on deployments, but HARDlight’s team had to have support on the operations side too. Advanced operational tooling in MongoDB Atlas gave a small DevOps team of just two staffers the ability to handle and run games even as millions of people join the fray. They no longer had to worry about reliability, maintenance, upgrades, or backups. In fact, one of the clinchers was the point-in-time backup and restore feature, which meant that they could roll back to a checkpoint with the click of a button. With MongoDB Atlas running on AWS, SEGA HARDlight was ready to take on even Boss Level scaling.

“At HARDlight we’re passionate about finding the right tool for the job. For us we could see that using a horizontally scalable document database was a perfect fit for player-account-based games,” said Gyurchev. “The ability to create a high traffic volume, highly scalable solution is about knowing the tiny details. To do that, normally engineers need to focus on many different parts of the stack, but MongoDB Atlas and MongoDB’s support gives us a considerable shortcut. If this was handled in-house we would only be as good as our database expert. Now we can rely on a wealth of knowledge, expertise, and best-in-class technology.”

Sonic Forces

HARDlight’s first MongoDB-powered game was Kingdom Conquest: Dark Empire, which was a frictionless launch from the start and gave the engineers their first experience of MongoDB. Then, over a weekend in late 2017, Sonic Forces: Speed Battle was launched on mobile. It’s a demanding, always-on application that maintains a constant connection to the internet and features shared leaderboards. In the background, a three-shard cluster running on MongoDB Atlas easily scaled to handle the complex loads as millions of gamers joined the race. The database was stable with low latency and not a single service interruption. All of this resulted in a low-stress launch, a happy DevOps team, and a very enthusiastic set of gamers.

Gyurchev concluded: “With MySQL, it had taken multiple game launches to get the database backend right. With MongoDB Atlas, big launches were a success right from the start. That’s no mean feat.”

Just as the gaming platforms have evolved and transformed through the years, so too has the database layer had to grow and adapt. SEGA HARDlight is now expanding its use of MongoDB Atlas to support all new games as they come online. By taking care of the operations, management, and scaling, MongoDB Atlas lets HARDlight focus on building and running some of the most iconic games in the world, and doing it with confidence. Gone is the 90s infrastructure, replaced by a stack that is every bit as modern, powerful, and fast as the famous blue hedgehog.

SEGA HARDlight is looking for talented engineers to join the team. If you are interested, check out the careers page or email: hardlightcareers@sega.co.uk.

Start your Atlas journey today for free.

March 15, 2018