Our MongoDB cluster is running version 4.4.1.
You can connect to it using MongoDB Compass, the Mongo Shell, SQL, or any MongoDB driver supporting at least MongoDB 4.4, with the following URI:
readonly is both the username and the password; they are not meant to be replaced.
Upgraded the cluster to 4.4.
Improved the Python data import script to calculate the daily values from the existing cumulative values using an aggregation pipeline.
Renamed the field "city" to "county" and "cities" to "counties" where appropriate. They contain the data from the "Admin2" column in the JHU CSVs.
The covid19 database now has 5 collections. More details in our README.md.
The covid19.statistics collection is renamed to covid19.global_and_us for clarity.
The dataset is updated hourly, so any commit made by JHU will be reflected in our cluster at most one hour later.
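The aggregation pipeline itself isn't reproduced here, but the transformation it performs is simple to describe: each day's value is the difference between consecutive cumulative totals. A plain-Python sketch of that idea:

```python
def daily_from_cumulative(cumulative):
    """Derive daily new counts from a cumulative time series.

    Mirrors, in plain Python, what the import script computes with an
    aggregation pipeline: each day's value is the difference between
    consecutive cumulative totals.
    """
    daily = []
    previous = 0
    for total in cumulative:
        daily.append(total - previous)
        previous = total
    return daily

# Cumulative confirmed cases over five days:
print(daily_from_cumulative([10, 15, 15, 22, 30]))  # [10, 5, 0, 7, 8]
```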
As the COVID-19 pandemic has swept the globe, the work of JHU (Johns Hopkins University) and its COVID-19 dashboard has become vitally important in keeping people informed about the progress of the virus in their communities, their countries, and the world. JHU not only publishes its dashboard, but also makes the data powering it freely available for anyone to use. However, that data is delivered as flat CSV files which you need to download each time before you can query them.

We've set out to make this up-to-date data more accessible so that people can build other analyses and applications directly on top of the data set. We are now hosting a service with a frequently updated copy of the JHU data in MongoDB Atlas, our database in the cloud. This data is free for anyone to query using the MongoDB Query Language and/or SQL. We also support a variety of BI tools directly, so you can query the data with Tableau, Qlik, and Excel.

With the MongoDB COVID-19 dataset, there are no more manual downloads and no more frequent format changes. This service delivers a consistent JSON and SQL view every day with no downstream ETL required. None of the actual data is modified; it is simply structured to make it easier to query, by placing it within a MongoDB Atlas cluster and by creating some convenient APIs.

All the data we use to create the MongoDB COVID-19 dataset comes from the JHU dataset. In turn, here are the sources they are using:
the World Health Organization,
the National Health Commission of the People's Republic of China,
the United States Centers for Disease Control and Prevention,
the Australia Government Department of Health,
the European Centre for Disease Prevention and Control,
Using the CSV files they provide, we produce two different databases in our cluster:
covid19jhu contains the raw CSV files imported with the mongoimport tool,
covid19 contains the same dataset, but with a clean MongoDB schema design following all the good practices we recommend.
Here is an example of a document in the
The document above was obtained by joining the UID_ISO_FIPS_LookUp_Table.csv file with the time series CSV files you can find in this folder.
Some fields might not exist in all the documents because they are not relevant or are just not provided by JHU. If you want more details, run a schema analysis with MongoDB Compass on the different collections available.
global (the data from the time series global files)
us_only (the data from the time series US files)
global_and_us (the most complete one)
countries_summary (same as global but countries are grouped in a single doc for each date)
In the following sections, we will also show you how to consume this dataset using the Java, Node.js and Python drivers.
We will show you how to perform the following queries in each language:
Retrieve the last 5 days of data for a given place,
Retrieve all the data for the last day,
Make a geospatial query to retrieve data within a certain distance of a given place.
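As a preview of what those queries look like, here is a pymongo-style sketch of the three filters. The field names (country, date, loc) reflect the schema described above, and the geospatial filter assumes a 2dsphere index on loc; treat the exact shapes as an illustration rather than the canonical driver examples.

```python
from datetime import datetime, timedelta

def last_five_days_filter(country, most_recent):
    """Filter matching the last 5 days of data for a given country."""
    return {
        "country": country,
        "date": {"$gte": most_recent - timedelta(days=4)},
    }

def latest_day_filter(most_recent):
    """Filter matching every document for the most recent day."""
    return {"date": most_recent}

def near_filter(longitude, latitude, max_meters):
    """Geospatial filter: documents whose loc field (a GeoJSON point
    with a 2dsphere index, assumed here) lies within max_meters of
    the given coordinates."""
    return {
        "loc": {
            "$nearSphere": {
                "$geometry": {
                    "type": "Point",
                    "coordinates": [longitude, latitude],
                },
                "$maxDistance": max_meters,
            }
        }
    }

# With pymongo (not executed here), usage would look like:
# coll = MongoClient(uri).covid19.global_and_us
# for doc in coll.find(last_five_days_filter("France", latest_date)): ...
print(last_five_days_filter("France", datetime(2020, 4, 20)))
```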
Explore the Dataset with MongoDB Charts
If you want to create your own MongoDB Charts dashboard, you will need to set up your own free MongoDB Atlas cluster and import the dataset into your cluster, either using the import scripts, or using
mongoexport & mongoimport, or
mongodump & mongorestore. See this section for more details: Take a copy of the data.
Explore the Dataset with MongoDB Compass
For MongoDB Compass or your driver, you can use this connection string.
Explore the Dataset with the MongoDB Shell
Because we store the data in MongoDB, you can also access it via the MongoDB Shell or using any of our drivers. We've limited access to these collections to 'read-only'. You can find the connection strings for the shell and Compass below, as well as driver examples for Java, Node.js, and Python to get you started.
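The connection string itself is not reproduced in this excerpt, so the sketch below builds one from a placeholder host; only the readonly/readonly credentials and the covid19 database name come from the text above.

```python
# The credentials are literally readonly/readonly (see above).
# <CLUSTER_HOST> is a placeholder for the cluster's real hostname,
# which is not reproduced in this excerpt.
username = password = "readonly"
cluster_host = "<CLUSTER_HOST>"
uri = f"mongodb+srv://{username}:{password}@{cluster_host}/covid19"

# With pymongo installed (not executed here):
# from pymongo import MongoClient
# client = MongoClient(uri)
# print(client["covid19"].list_collection_names())
print(uri)
```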
Accessing the Data with Java
Here is the main class of our Java example. Of course, you will need the three POJOs from the Java GitHub folder to make this work.
Accessing the Data with Node.js
Accessing the Data with Python
Accessing the Data with Golang
Accessing the Data with Google Colaboratory
The sample code shows how to install pymongo and use it to connect to the MongoDB COVID-19 dataset. There are some example queries which show how to query the data and display it in the notebook, and the last example demonstrates how to display a chart using Pandas & Matplotlib!
If you want to modify the notebook, you can take a copy by selecting "Save a copy in Drive ..." from the "File" menu, and then you'll be free to edit the copy.
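The notebook's chart boils down to loading query results into a Pandas DataFrame and plotting one column. Here is a minimal sketch using hypothetical sample documents (the field names are assumed for illustration, and the plotting call is left as a comment since it needs Matplotlib):

```python
import pandas as pd

# Hypothetical sample documents, shaped like results from a
# covid19.global_and_us query (field names assumed for illustration).
docs = [
    {"date": "2020-04-18", "confirmed": 100},
    {"date": "2020-04-19", "confirmed": 130},
    {"date": "2020-04-20", "confirmed": 170},
]

df = pd.DataFrame(docs)
df["date"] = pd.to_datetime(df["date"])
df = df.set_index("date")

# In the notebook, the chart is then a single call away
# (requires Matplotlib):
# df["confirmed"].plot(title="Confirmed COVID-19 cases")
print(df)
```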
You can get lots of value from the dataset without any programming at all. We've enabled the Atlas BI Connector, which exposes an SQL interface to MongoDB's document structure. This means you can use data analysis and dashboarding tools like Tableau, Qlik Sense, and even MySQL Workbench to analyze, visualise and extract understanding from the data.
Here's an example of a visualisation produced in a few clicks with Tableau:
Tableau is a powerful data visualisation and dashboard tool, and can be connected to our COVID-19 data in a few steps. We've written a short tutorial to get you up and running. As mentioned above, the Atlas BI Connector is activated so you can connect any SQL tool to this cluster using the following connection information:
Database: covid19 or covid19jhu (depends which version of the dataset you want to see),
Username: readonly or readonly?source=admin,
Accessing our copy of this data in a read-only database is useful, but it won't be enough if you want to integrate it with other data within a single MongoDB cluster. You can obtain a copy of the database, either to use offline using a different tool outside of MongoDB, or to load into your own MongoDB instance.
mongoexport is a command-line tool that produces a JSONL or CSV export of data stored in a MongoDB instance. First, follow these instructions to install the MongoDB Database Tools.
Now you can run the following in your console to download the metadata and global_and_us collections as jsonl files in your current directory:
Use the --jsonArray option if you prefer to work with a JSON array rather than a JSONL file.
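As a sketch, with <CLUSTER_HOST> standing in for the actual hostname from the connection string (not reproduced in this excerpt), the two exports could look like this:

```shell
# <CLUSTER_HOST> is a placeholder; substitute the hostname from the
# cluster's connection string.
URI="mongodb+srv://readonly:readonly@<CLUSTER_HOST>/covid19"

mongoexport --uri="$URI" --collection=metadata --out=metadata.jsonl
mongoexport --uri="$URI" --collection=global_and_us --out=global_and_us.jsonl
```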
Documentation for all the features of
mongoexport is available on the MongoDB website and with the command
Once you have the data on your computer, you can use it directly with local tools, or load it into your own MongoDB instance using mongoimport.
Note that you cannot run these commands against our cluster because the user we gave you (
readonly:readonly) doesn't have write permission on this cluster.
Another smart way to duplicate the dataset into your own cluster is to use
mongodump & mongorestore. Apart from being more efficient, this approach will also capture the index definitions along with the data.
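A sketch of that dump-and-restore round trip, with placeholder hostnames since neither connection string is reproduced in this excerpt:

```shell
# Dump the covid19 database from the read-only cluster
# (<CLUSTER_HOST> is a placeholder for its hostname).
mongodump --uri="mongodb+srv://readonly:readonly@<CLUSTER_HOST>/covid19"

# Restore it, indexes included, into your own cluster
# (<YOUR_OWN_CLUSTER_URI> is a placeholder, and requires write access).
mongorestore --uri="<YOUR_OWN_CLUSTER_URI>" dump/
```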
We see the value and importance of making this data as readily available to everyone as possible, so we're not stopping here. Over the coming days, we'll be adding a GraphQL and REST API, as well as making the data available within Excel and Google Sheets.
We've also launched an Atlas credits program for anyone working on detecting, understanding, and stopping the spread of COVID-19. If you are having any problems accessing the data, or have other data sets you would like to host, please contact us on the MongoDB community. We would also love to showcase any services you build on top of this data set. Finally, please send in PRs for any code changes you would like to make to the examples.