mongoimport

Synopsis
The mongoimport tool imports content from an Extended JSON, CSV, or TSV export created by mongoexport, or potentially from another third-party export tool.
Run mongoimport from the system command line, not the mongo shell.
Tip
See also:
mongoexport, which provides the corresponding structured data export capability.
You can use the MongoDB Database Tools to migrate from a self-hosted deployment to MongoDB Atlas. MongoDB Atlas is the fully managed service for MongoDB deployments in the cloud. To learn more, see Seed with mongorestore. To learn all the ways you can migrate to MongoDB Atlas, see Migrate or Import Data.
Versioning
Starting with MongoDB 4.4, mongoimport is released separately from the MongoDB Server and uses its own versioning, with an initial version of 100.0.0. Previously, mongoimport was released alongside the MongoDB Server and used matching versioning.
For documentation on the MongoDB 4.2 or earlier versions of mongoimport, refer to the MongoDB Server Documentation for that version of the tool.
This documentation is for version 100.8.0 of mongoimport.
Compatibility
MongoDB Server Compatibility
mongoimport version 100.8.0 supports the following versions of the MongoDB Server:
MongoDB 6.0
MongoDB 5.0
MongoDB 4.4
MongoDB 4.2
While mongoimport may work on earlier versions of the MongoDB Server, any such compatibility is not guaranteed.
Platform Support
mongoimport version 100.8.0 is supported on the following platforms:

Platform | x86_64 | ARM64 | PPC64LE | s390x
---|---|---|---|---
Amazon Linux 2023 | ✓ | ✓ | |
Amazon 2 | ✓ | | |
Amazon 2013.03+ | ✓ | | |
Debian 10 | ✓ | | |
Debian 9 | ✓ | | |
Debian 8 | ✓ | | |
RHEL / CentOS 9 | ✓ | ✓ | |
RHEL / CentOS 8 | ✓ | ✓ | |
RHEL / CentOS 7 | ✓ | ✓ | ✓ |
RHEL / CentOS 6 | ✓ | | |
SUSE 15 | ✓ | | |
SUSE 12 | ✓ | | |
Ubuntu 20.04 | ✓ | ✓ | |
Ubuntu 18.04 | ✓ | ✓ | |
Ubuntu 16.04 | ✓ | ✓ | ✓ |
Windows 8 and later | ✓ | | |
Windows Server 2012 and later | ✓ | | |
macOS 11 and later | ✓ | ✓ | |
macOS 10.12 - 10.15 | ✓ | | |
Installation
The mongoimport tool is part of the MongoDB Database Tools package.
➤ Follow the Database Tools Installation Guide to install mongoimport.
Syntax
The mongoimport command has the following form:
mongoimport <options> <connection-string> <file>
Run mongoimport from the system command line, not the mongo shell.
Behavior
Type Fidelity
If you need to preserve all rich BSON data types when using mongoexport to perform full instance backups, be sure to specify Extended JSON v2.0 (Canonical mode) with the --jsonFormat option to mongoexport, in the following fashion:
mongoexport --jsonFormat=canonical --collection=<coll> <connection-string>
If --jsonFormat is unspecified, mongoexport outputs data in Extended JSON v2.0 (Relaxed mode) by default.
mongoimport automatically uses the JSON format found in the specified target data file when restoring. For example, it uses Extended JSON v2.0 (Canonical mode) if the target data export file was created by mongoexport with --jsonFormat=canonical specified.
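The distinction between the two modes matters because relaxed output discards type detail. As a rough illustration, the following Python sketch uses hand-written Extended JSON strings (not produced by any BSON library) to show how the same 64-bit integer field round-trips in each mode:

```python
import json

# The same BSON document { "count": NumberLong(5) } rendered in the two
# Extended JSON v2.0 modes (hand-written here for illustration):
relaxed = '{"count": 5}'
canonical = '{"count": {"$numberLong": "5"}}'

# Relaxed mode parses back as a plain JSON number; the fact that the
# field was a 64-bit integer is lost, so a round-trip cannot restore
# the original BSON type.
assert json.loads(relaxed)["count"] == 5

# Canonical mode wraps the value in a type wrapper, preserving the
# 64-bit integer type information through export and import.
assert json.loads(canonical)["count"] == {"$numberLong": "5"}
```

This is why canonical mode is recommended for full instance backups: the type wrappers give mongoimport enough information to reconstruct the original BSON types.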
JSON Format
mongoimport requires import data to be in either Extended JSON v2.0 (Canonical) or Extended JSON v2.0 (Relaxed) format by default. For import data formatted using Extended JSON v1.0, specify the --legacy option.
Tip
In general, the versions of mongoexport and mongoimport should match. That is, to import data created from mongoexport, you should use the corresponding version of mongoimport.
Encoding
mongoimport only supports data files that are UTF-8 encoded. Using other encodings will produce errors.
FIPS
mongoimport automatically creates FIPS-compliant connections to a mongod or mongos that is configured to use FIPS mode.
Write Concern
If you specify write concern in both the --writeConcern option and the --uri connection string option, the --writeConcern value overrides the write concern specified in the URI string.
Batches
mongoimport uses a maximum batch size of 100,000 to perform bulk insert/upsert operations.
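As an illustrative sketch (not mongoimport's actual implementation), splitting an input stream into bulk batches capped at 100,000 operations looks like this:

```python
# Illustrative sketch of splitting documents into bulk batches of at
# most 100,000 operations, mirroring the batch-size cap described
# above. This is not mongoimport's source code.
def batches(docs, batch_size=100_000):
    """Yield successive slices of at most batch_size documents."""
    for start in range(0, len(docs), batch_size):
        yield docs[start:start + batch_size]

docs = [{"_id": n} for n in range(250_000)]
# 250,000 documents split into two full batches plus a remainder.
assert [len(b) for b in batches(docs)] == [100_000, 100_000, 50_000]
```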
Required Access
In order to connect to a mongod that enforces authorization with the --auth option, you must use the --username and --password options. The connecting user must possess, at a minimum, the readWrite role on the database into which they are importing data.
Warning
Data Import and Export Conflicts With ($) and (.)
Starting in MongoDB 5.0, document field names can be ($) prefixed and can contain a (.). However, mongoimport and mongoexport should not be used with field names that make use of these characters.
MongoDB Extended JSON v2 cannot differentiate between type wrappers and fields that happen to have the same name as type wrappers. Do not use Extended JSON formats in contexts where the corresponding BSON representations might include ($) prefixed keys. The DBRef mechanism is an exception to this general rule.
There are also restrictions on using mongoimport and mongoexport with (.) in field names. Since CSV files use the (.) to represent data hierarchies, a (.) in a field name will be misinterpreted as a level of nesting.
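The nesting hazard can be made concrete. This sketch mimics the dot-as-path interpretation a CSV importer applies (illustrative only, not mongoimport's actual parser):

```python
# Why a (.) in a CSV field name is risky: importers treat the dot as
# a path separator. This helper mimics that interpretation
# (illustrative, not mongoimport's implementation).
def expand_dotted(row):
    doc = {}
    for key, value in row.items():
        parts = key.split(".")
        target = doc
        for part in parts[:-1]:
            target = target.setdefault(part, {})
        target[parts[-1]] = value
    return doc

# A column literally named "total.usd" becomes a nested document,
# not a single flat field named "total.usd":
assert expand_dotted({"total.usd": "9.99"}) == {"total": {"usd": "9.99"}}
```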
Options
--help
Returns information on the options and use of mongoimport.
--verbose, -v
Increases the amount of internal reporting returned on standard output or in log files. Increase the verbosity with the -v form by including the option multiple times (e.g. -vvvvv).
--quiet
Runs mongoimport in a quiet mode that attempts to limit the amount of output. This option suppresses:
output from database commands
replication activity
connection accepted events
connection closed events
--version
Returns the mongoimport release number.
--config=<filename>
New in version 100.3.0.
Specifies the full path to a YAML configuration file containing sensitive values for the --password, --uri, and --sslPEMKeyPassword options to mongoimport. This is the recommended way to specify a password to mongoimport, aside from specifying it through a password prompt.
The configuration file takes the following form:
password: <password>
uri: mongodb://mongodb0.example.com:27017
sslPEMKeyPassword: <password>
Specifying a password in the password: field and providing a connection string in the uri: field which contains a conflicting password will result in an error.
Be sure to secure this file with appropriate filesystem permissions.
Note
If you specify a configuration file with --config and also use the --password, --uri, or --sslPEMKeyPassword option to mongoimport, each command line option overrides its corresponding option in the configuration file.
--uri=<connectionString>
Specifies the resolvable URI connection string of the MongoDB deployment, enclosed in quotes:
--uri "mongodb://[username:password@]host1[:port1][,host2[:port2],...[,hostN[:portN]]][/[database][?options]]"
Starting with version 100.0 of mongoimport, the connection string may alternatively be provided as a positional parameter, without using the --uri option:
mongoimport mongodb://[username:password@]host1[:port1][,host2[:port2],...[,hostN[:portN]]][/[database][?options]]
As a positional parameter, the connection string may be specified at any point on the command line, as long as it begins with either mongodb:// or mongodb+srv://. For example:
mongoimport --username joe --password secret1 mongodb://mongodb0.example.com:27017 --ssl
Only one connection string can be provided. Attempting to include more than one, whether using the --uri option or as a positional argument, will result in an error.
For information on the components of the connection string, see the Connection String URI Format documentation.
Note
Some components in the connection string may alternatively be specified using their own explicit command-line options, such as --username and --password. Providing a connection string while also using an explicit option and specifying conflicting information will result in an error.
Note
If using mongoimport on Ubuntu 18.04, you may experience a cannot unmarshal DNS error message when using SRV connection strings (in the form mongodb+srv://) with the --uri option. If so, use one of the following options instead:
the --uri option with a non-SRV connection string (in the form mongodb://)
the --host option to specify the host to connect to directly
Warning
On some systems, a password provided in a connection string with the --uri option may be visible to system status programs such as ps that may be invoked by other users. Consider instead:
omitting the password in the connection string to receive an interactive password prompt, or
using the --config option to specify a configuration file containing the password.
--host=<hostname><:port>, -h=<hostname><:port>
Default: localhost:27017
Specifies the resolvable hostname of the MongoDB deployment. By default, mongoimport attempts to connect to a MongoDB instance running on the localhost on port number 27017.
To connect to a replica set, specify the replSetName and a seed list of set members, as in the following:
--host=<replSetName>/<hostname1><:port>,<hostname2><:port>,<...>
When specifying the replica set list format, mongoimport always connects to the primary.
You can also connect to any single member of the replica set by specifying the host and port of only that member:
--host=<hostname1><:port>
If you use IPv6 and use the <address>:<port> format, you must enclose the portion of an address and port combination in brackets (e.g. [<address>]).
Alternatively, you can also specify the hostname directly in the URI connection string. Providing a connection string while also using --host and specifying conflicting information will result in an error.
--port=<port>
Default: 27017
Specifies the TCP port on which the MongoDB instance listens for client connections.
Alternatively, you can also specify the port directly in the URI connection string. Providing a connection string while also using --port and specifying conflicting information will result in an error.
--ssl
Enables connection to a mongod or mongos that has TLS/SSL support enabled.
Alternatively, you can also configure TLS/SSL support directly in the URI connection string. Providing a connection string while also using --ssl and specifying conflicting information will result in an error.
For more information about TLS/SSL and MongoDB, see Configure mongod and mongos for TLS/SSL and TLS/SSL Configuration for Clients.
--sslCAFile=<filename>
Specifies the .pem file that contains the root certificate chain from the Certificate Authority. Specify the file name of the .pem file using relative or absolute paths.
Alternatively, you can also specify the .pem file directly in the URI connection string. Providing a connection string while also using --sslCAFile and specifying conflicting information will result in an error.
For more information about TLS/SSL and MongoDB, see Configure mongod and mongos for TLS/SSL and TLS/SSL Configuration for Clients.
--sslPEMKeyFile=<filename>
Specifies the .pem file that contains both the TLS/SSL certificate and key. Specify the file name of the .pem file using relative or absolute paths.
This option is required when using the --ssl option to connect to a mongod or mongos that has CAFile enabled without allowConnectionsWithoutCertificates.
Alternatively, you can also specify the .pem file directly in the URI connection string. Providing a connection string while also using --sslPEMKeyFile and specifying conflicting information will result in an error.
For more information about TLS/SSL and MongoDB, see Configure mongod and mongos for TLS/SSL and TLS/SSL Configuration for Clients.
--sslPEMKeyPassword=<value>
Specifies the password to decrypt the certificate-key file (i.e. --sslPEMKeyFile). Use the --sslPEMKeyPassword option only if the certificate-key file is encrypted. In all cases, mongoimport will redact the password from all logging and reporting output.
If the private key in the PEM file is encrypted and you do not specify the --sslPEMKeyPassword option, mongoimport will prompt for a passphrase. See TLS/SSL Certificate Passphrase.
Alternatively, you can also specify the password directly in the URI connection string. Providing a connection string while also using --sslPEMKeyPassword and specifying conflicting information will result in an error.
For more information about TLS/SSL and MongoDB, see Configure mongod and mongos for TLS/SSL and TLS/SSL Configuration for Clients.
Warning
On some systems, a password provided directly using the --sslPEMKeyPassword option may be visible to system status programs such as ps that may be invoked by other users. Consider using the --config option to specify a configuration file containing the password instead.
--sslCRLFile=<filename>
Specifies the .pem file that contains the Certificate Revocation List. Specify the file name of the .pem file using relative or absolute paths.
For more information about TLS/SSL and MongoDB, see Configure mongod and mongos for TLS/SSL and TLS/SSL Configuration for Clients.
--sslAllowInvalidCertificates
Bypasses the validation checks for server certificates and allows the use of invalid certificates. When using the allowInvalidCertificates setting, MongoDB logs the use of the invalid certificate as a warning.
Warning
Although available, avoid using the --sslAllowInvalidCertificates option if possible. If the use of --sslAllowInvalidCertificates is necessary, only use the option on systems where intrusion is not possible.
Connecting to a mongod or mongos instance without validating server certificates is a potential security risk. If you only need to disable the validation of the hostname in the TLS/SSL certificates, see --sslAllowInvalidHostnames.
Alternatively, you can also disable certificate validation directly in the URI connection string. Providing a connection string while also using --sslAllowInvalidCertificates and specifying conflicting information will result in an error.
For more information about TLS/SSL and MongoDB, see Configure mongod and mongos for TLS/SSL and TLS/SSL Configuration for Clients.
--sslAllowInvalidHostnames
Disables the validation of the hostnames in TLS/SSL certificates. Allows mongoimport to connect to MongoDB instances even if the hostname in their certificates does not match the specified hostname.
Alternatively, you can also disable hostname validation directly in the URI connection string. Providing a connection string while also using --sslAllowInvalidHostnames and specifying conflicting information will result in an error.
For more information about TLS/SSL and MongoDB, see Configure mongod and mongos for TLS/SSL and TLS/SSL Configuration for Clients.
--username=<username>, -u=<username>
Specifies a username with which to authenticate to a MongoDB database that uses authentication. Use in conjunction with the --password and --authenticationDatabase options.
Alternatively, you can also specify the username directly in the URI connection string. Providing a connection string while also using --username and specifying conflicting information will result in an error.
If connecting to a MongoDB Atlas cluster using the MONGODB-AWS authentication mechanism, you can specify your AWS access key ID in:
this field,
the connection string, or
the AWS_ACCESS_KEY_ID environment variable.
See Connect to a MongoDB Atlas Cluster using AWS IAM Credentials for an example of each.
--password=<password>, -p=<password>
Specifies a password with which to authenticate to a MongoDB database that uses authentication. Use in conjunction with the --username and --authenticationDatabase options.
To prompt the user for the password, pass the --username option without --password or specify an empty string as the --password value, as in --password "".
Alternatively, you can also specify the password directly in the URI connection string. Providing a connection string while also using --password and specifying conflicting information will result in an error.
If connecting to a MongoDB Atlas cluster using the MONGODB-AWS authentication mechanism, you can specify your AWS secret access key in:
this field,
the connection string, or
the AWS_SECRET_ACCESS_KEY environment variable.
See Connect to a MongoDB Atlas Cluster using AWS IAM Credentials for an example of each.
Warning
On some systems, a password provided directly using the --password option may be visible to system status programs such as ps that may be invoked by other users. Consider instead:
omitting the --password option to receive an interactive password prompt, or
using the --config option to specify a configuration file containing the password.
--awsSessionToken=<AWS Session Token>
If connecting to a MongoDB Atlas cluster using the MONGODB-AWS authentication mechanism, and using session tokens in addition to your AWS access key ID and secret access key, you can specify your AWS session token in:
this field,
the AWS_SESSION_TOKEN authMechanismProperties parameter to the connection string, or
the AWS_SESSION_TOKEN environment variable.
See Connect to a MongoDB Atlas Cluster using AWS IAM Credentials for an example of each.
Only valid when using the MONGODB-AWS authentication mechanism.
--authenticationDatabase=<dbname>
Specifies the authentication database where the specified --username has been created. See Authentication Database.
If using the GSSAPI (Kerberos), PLAIN (LDAP SASL), or MONGODB-AWS authentication mechanisms, you must set --authenticationDatabase to $external.
Alternatively, you can also specify the authentication database directly in the URI connection string. Providing a connection string while also using --authenticationDatabase and specifying conflicting information will result in an error.
--authenticationMechanism=<name>
Default: SCRAM-SHA-1
Specifies the authentication mechanism the mongoimport instance uses to authenticate to the mongod or mongos.
Changed in version 100.1.0: Starting in version 100.1.0, mongoimport adds support for the MONGODB-AWS authentication mechanism when connecting to a MongoDB Atlas cluster.

Value | Description
---|---
SCRAM-SHA-1 | RFC 5802 standard Salted Challenge Response Authentication Mechanism using the SHA-1 hash function.
SCRAM-SHA-256 | RFC 7677 standard Salted Challenge Response Authentication Mechanism using the SHA-256 hash function. Requires featureCompatibilityVersion set to 4.0.
MONGODB-X509 | MongoDB TLS/SSL certificate authentication.
MONGODB-AWS | External authentication using AWS IAM credentials for use in connecting to a MongoDB Atlas cluster. See Connect to a MongoDB Atlas Cluster using AWS IAM Credentials. New in version 100.1.0.
GSSAPI (Kerberos) | External authentication using Kerberos. This mechanism is available only in MongoDB Enterprise.
PLAIN (LDAP SASL) | External authentication using LDAP. You can also use PLAIN for authenticating in-database users. PLAIN transmits passwords in plain text. This mechanism is available only in MongoDB Enterprise.

Alternatively, you can also specify the authentication mechanism directly in the URI connection string. Providing a connection string while also using --authenticationMechanism and specifying conflicting information will result in an error.
--gssapiServiceName=<serviceName>
Specify the name of the service using GSSAPI/Kerberos. Only required if the service does not use the default name of mongodb.
This option is available only in MongoDB Enterprise.
--gssapiHostName=<hostname>
Specify the hostname of a service using GSSAPI/Kerberos. Only required if the hostname of a machine does not match the hostname resolved by DNS.
This option is available only in MongoDB Enterprise.
--db=<database>, -d=<database>
Specifies the name of the database on which to run the mongoimport.
Alternatively, you can also specify the database directly in the URI connection string. Providing a connection string while also using --db and specifying conflicting information will result in an error.
--collection=<collection>, -c=<collection>
Specifies the collection to import. If you do not specify --collection, mongoimport takes the collection name from the input filename, omitting the file's extension if it has one.
--fields=<field1[,field2]>, -f=<field1[,field2]>
Specify a comma separated list of field names when importing CSV or TSV files that do not have field names in the first (i.e. header) line of the file.
To also specify the field type as well as the field name, use --fields with --columnsHaveTypes.
If you attempt to include --fields when importing JSON data, mongoimport will return an error. --fields is only for CSV or TSV imports.
--fieldFile=<filename>
As an alternative to --fields, the --fieldFile option allows you to specify a file that holds a list of field names if your CSV or TSV file does not include field names in the first line of the file (i.e. header). Place one field per line.
To also specify the field type as well as the field name, use --fieldFile with --columnsHaveTypes.
If you attempt to include --fieldFile when importing JSON data, mongoimport will return an error. --fieldFile is only for CSV or TSV imports.
--ignoreBlanks
Ignores empty fields in CSV and TSV exports. If not specified, mongoimport creates fields without values in imported documents.
If you attempt to include --ignoreBlanks when importing JSON data, mongoimport will return an error. --ignoreBlanks is only for CSV or TSV imports.
--type=<json|csv|tsv>
Specifies the file type to import. The default format is JSON, but it's possible to import CSV and TSV files.
The csv parser accepts data that complies with RFC 4180. As a result, backslashes are not a valid escape character. If you use double-quotes to enclose fields in the CSV data, you must escape internal double-quote marks by prepending another double-quote.
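Python's csv module follows the same RFC 4180 quoting convention, so it can illustrate the doubled-quote rule described above:

```python
import csv
import io

# RFC 4180 quoting: an embedded double-quote is escaped by doubling
# it, not with a backslash. A field containing  she said "hi"  is
# therefore written as  "she said ""hi""".
raw = '"id","comment"\r\n"1","she said ""hi"""\r\n'
rows = list(csv.reader(io.StringIO(raw)))

assert rows[0] == ["id", "comment"]
assert rows[1] == ["1", 'she said "hi"']
```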
--file=<filename>
Specifies the location and name of a file containing the data to import. If you do not specify a file, mongoimport reads data from standard input (e.g. "stdin").
--drop
Modifies the import process so that the target instance drops the collection before importing the data from the input.
--headerline
If using --type csv or --type tsv, uses the first line as field names. Otherwise, mongoimport will import the first line as a distinct document.
If you attempt to include --headerline when importing JSON data, mongoimport will return an error. --headerline is only for CSV or TSV imports.
--useArrayIndexFields
New in version 100.0.0.
Interpret natural numbers in fields as array indexes when importing CSV or TSV files.
Field names must be in the form <colName>.<arrayIndex> where arrayIndex is a natural number beginning with 0 and increasing sequentially by 1 for each member of the array.
For example, with the following CSV file:
a.0,a.1,a.2,a.3
red,yellow,green,blue
An import with the --useArrayIndexFields option would result in the following document:
"a" : [ "red", "yellow", "green", "blue" ]
If using the --columnsHaveTypes option as well, use the form <colName>.<arrayIndex>.<type>(<arg>) to specify both the array index and type for each field. See --columnsHaveTypes for more information.
Numerical keys with leading zeros (e.g. a.000, a.001) are not interpreted as array indexes.
If the first part of a key is a natural number (e.g. 0.a, 1.a), it is interpreted as a document key, and not an array index.
If using the --ignoreBlanks option with --useArrayIndexFields, mongoimport will log an error if you attempt to import a document that contains a blank value (e.g. "") for an array index field.
The --useArrayIndexFields option has no effect when importing JSON data, as arrays are already encoded in JSON format.
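The folding rules above, including the leading-zero exclusion, can be sketched in a few lines of Python (illustrative only, not mongoimport's implementation):

```python
# Sketch of the array-index interpretation described above: columns
# a.0, a.1, ... fold into a single array field. Keys with leading
# zeros (a.000) are left as plain field names.
def fold_array_indexes(row):
    doc = {}
    for key, value in row.items():
        name, _, index = key.partition(".")
        # "0" qualifies as an index; "000" (leading zeros) does not.
        if index.isdigit() and str(int(index)) == index:
            doc.setdefault(name, []).append((int(index), value))
        else:
            doc[key] = value
    return {k: [v for _, v in sorted(vals)] if isinstance(vals, list) else vals
            for k, vals in doc.items()}

row = {"a.0": "red", "a.1": "yellow", "a.2": "green", "a.3": "blue"}
assert fold_array_indexes(row) == {"a": ["red", "yellow", "green", "blue"]}
```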
--mode=<insert|upsert|merge|delete>
Default: insert
Specifies how the import process should handle existing documents in the database that match documents in the import file.
By default, mongoimport uses the _id field to match documents in the collection with documents in the import file. To specify the fields against which to match existing documents for the upsert, merge, and delete modes, use --upsertFields.

Value | Description
---|---
insert | Insert the documents in the import file. mongoimport will log an error if you attempt to import a document that contains a duplicate value for a field with a unique index, such as _id.
upsert | Replace existing documents in the database with matching documents from the import file. mongoimport will insert all other documents. Replace Matching Documents during Import describes how to use --mode upsert.
merge | Merge existing documents that match a document in the import file with the new document. mongoimport will insert all other documents. Merge Matching Documents during Import describes how to use --mode merge.
delete | Delete existing documents in the database that match a document in the import file. mongoimport takes no action on non-matching documents. Delete Matching Documents describes how to use --mode delete. New in version 100.0.0.
--upsertFields=<field1[,field2]>
Specifies a list of fields for the query portion of the import process. --upsertFields can be used with --mode upsert, merge, and delete.
Use this option if the _id fields in the existing documents don't match the field in the document, but another field or field combination can uniquely identify documents as a basis for performing upsert operations.
If you do not specify a field, --upsertFields will upsert on the basis of the _id field.
To ensure adequate performance, indexes should exist for the field or fields you specify with --upsertFields.
--stopOnError
Forces mongoimport to halt the insert operation at the first error rather than continuing the operation despite errors.
By default, mongoimport continues an operation when it encounters duplicate key and document validation errors. To ensure that the program stops on these errors, specify --stopOnError.
--jsonArray
Accepts the import of data expressed with multiple MongoDB documents within a single JSON array. Limited to imports of 16 MB or smaller.
Use --jsonArray in conjunction with mongoexport --jsonArray.
--legacy
Indicates that the import data is in Extended JSON v1 format instead of the default Extended JSON v2 format.
Tip
In general, the versions of mongoexport and mongoimport should match. That is, to import data created from mongoexport, you should use the corresponding version of mongoimport.
For example, if the import data is in v1 format:
{"_id":1.0,"myregfield":{"$regex":"foo","$options":"i"}}
Importing without the --legacy option results in the following document in the collection:
{ "_id" : 1, "myregfield" : { "$regex" : "foo", "$options" : "i" } }
Importing with the --legacy option results in the following document in the collection:
{ "_id" : 1, "myregfield" : { "$regularExpression" : { "pattern" : "foo", "options" : "i" } } }
--maintainInsertionOrder
Default: false
If specified, mongoimport inserts the documents in the order of their appearance in the input source. That is, both the bulk write batch order and document order within the batches are maintained.
Specifying --maintainInsertionOrder also enables --stopOnError and sets numInsertionWorkers to 1.
If unspecified, mongoimport may perform the insertions in an arbitrary order.
--numInsertionWorkers=<int>
Default: 1
Specifies the number of insertion workers to run concurrently.
For large imports, increasing the number of insertion workers may increase the speed of the import.
--writeConcern=<document>
Default: majority
Specifies the write concern for each write operation that mongoimport performs.
Specify the write concern as a document with w options:
--writeConcern "{w:'majority'}"
If the write concern is also included in the --uri connection string, the command-line --writeConcern overrides the write concern specified in the URI string.
--bypassDocumentValidation
Enables
mongoimport
to bypass document validation during the operation. This lets you insert documents that do not meet the validation requirements.
--columnsHaveTypes
Instructs mongoimport that the field list specified in --fields, --fieldFile, or --headerline specifies the types of each field.
Field names must be in the form of <colName>.<type>(<arg>). You must backslash-escape the following characters if you wish to include them in an argument: (, ), and \.

type | Supported Arguments | Example Header Field
---|---|---
auto() | None. | misc.auto()
binary(<arg>) | | user thumbnail.binary(base64)
boolean() | None. | verified.boolean()
date(<arg>) | Alias for date_go(<arg>). Go Language time.Parse format. | created.date(2006-01-02 15:04:05)
date_go(<arg>) | | created.date_go(2006-01-02 15:04:05)
date_ms(<arg>) | | created.date_ms(yyyy-MM-dd H:mm:ss)
date_oracle(<arg>) | | created.date_oracle(YYYY-MM-DD HH24:MI:SS)
decimal() | None. | price.decimal()
double() | None. | revenue.double()
int32() | None. | followerCount.int32()
int64() | None. | bigNumber.int64()
string() | None. | zipcode.string()

See Import CSV with Specified Field Types for sample usage.
If you attempt to include --columnsHaveTypes when importing JSON data, mongoimport will return an error. --columnsHaveTypes is only for CSV or TSV imports.
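The <colName>.<type>(<arg>) header form can be made concrete with a small sketch. This is an illustrative subset (a few simple types, no backslash escaping), not mongoimport's actual parser:

```python
import re

# Illustrative casts for a handful of the types in the table above.
CASTS = {
    "int32": int,
    "int64": int,
    "double": float,
    "boolean": lambda v: v.lower() == "true",
    "string": str,
}

def apply_typed_header(header, raw):
    """Parse '<colName>.<type>(<arg>)' and coerce a raw CSV value."""
    name, type_name, _arg = re.match(r"(.+)\.(\w+)\((.*)\)$", header).groups()
    return name, CASTS[type_name](raw)

assert apply_typed_header("followerCount.int32()", "42") == ("followerCount", 42)
assert apply_typed_header("verified.boolean()", "true") == ("verified", True)
```

Note how string() keeps values like ZIP codes intact: apply_typed_header("zipcode.string()", "02134") preserves the leading zero that a numeric type would drop.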
--parseGrace=<grace>
Default: stop
Specifies how mongoimport handles type coercion failures when importing CSV or TSV files with --columnsHaveTypes. --parseGrace has no effect when importing JSON documents.

Value | Description
---|---
autoCast | Assigns a type based on the value of the field. For example, if a field is defined as a double and the value for that field was "foo", mongoimport would make that field value a string type.
skipField | For the row being imported, mongoimport does not include the field whose type does not match the expected type.
skipRow | mongoimport does not import rows containing a value whose type does not match the expected type.
stop | mongoimport returns an error that ends the import.
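The four behaviors can be sketched as a single decision function (illustrative only, not mongoimport's implementation):

```python
# Sketch of the four --parseGrace behaviors when a typed CSV value
# fails to coerce. Returns a partial document, None to drop the row,
# or re-raises to abort the import.
def coerce_field(name, raw, cast, grace):
    try:
        return {name: cast(raw)}
    except ValueError:
        if grace == "autoCast":
            return {name: raw}   # keep the value, falling back to a string
        if grace == "skipField":
            return {}            # drop just this field from the row
        if grace == "skipRow":
            return None          # signal the caller to drop the whole row
        raise                    # "stop": abort the import with an error

assert coerce_field("price", "foo", float, "autoCast") == {"price": "foo"}
assert coerce_field("price", "foo", float, "skipField") == {}
assert coerce_field("price", "foo", float, "skipRow") is None
```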
Examples
Run mongoimport from the system command line, not the mongo shell.
Simple Import
mongoimport restores a database from a backup taken with mongoexport. Most of the arguments to mongoexport also exist for mongoimport.
In the following example, mongoimport imports the JSON data from the contacts.json file into the collection contacts in the users database.
mongoimport --db=users --collection=contacts --file=contacts.json
Replace Matching Documents during Import
With --mode upsert, mongoimport replaces existing documents in the database that match a document in the import file with the document from the import file. Documents that do not match an existing document in the database are inserted as usual. By default, mongoimport matches documents based on the _id field. Use --upsertFields to specify the fields to match against.
Consider the following document in the people collection in the example database:
{ "_id" : ObjectId("580100f4da893943d393e909"), "name" : "Crystal Duncan", "region" : "United States", "email" : "crystal@example.com" }
The following document exists in a people-20160927.json JSON file. The _id field of the JSON object matches the _id field of the document in the people collection.
{ "_id" : ObjectId("580100f4da893943d393e909"), "username" : "crystal", "likes" : [ "running", "pandas", "software development" ] }
To import the people-20160927.json file and replace documents in the database that match the documents in the import file, specify --mode upsert, as in the following:
mongoimport -c=people -d=example --mode=upsert --file=people-20160927.json
The document in the people collection would then contain only the fields from the imported document, as in the following:
{ "_id" : ObjectId("580100f4da893943d393e909"), "username" : "crystal", "likes" : [ "running", "pandas", "software development" ] }
Merge Matching Documents during Import
With --mode merge, mongoimport enables you to merge fields from a new record with an existing document in the database. Documents that do not match an existing document in the database are inserted as usual. By default, mongoimport matches documents based on the _id field. Use --upsertFields to specify the fields to match against.
The people collection in the example database contains the following document:
{ "_id" : ObjectId("580100f4da893943d393e909"), "name" : "Crystal Duncan", "region" : "United States", "email" : "crystal@example.com" }
The following document exists in a people-20160927.json JSON file. The _id field of the JSON object matches the _id field of the document in the people collection.
{ "_id" : ObjectId("580100f4da893943d393e909"), "username" : "crystal", "email": "crystal.duncan@example.com", "likes" : [ "running", "pandas", "software development" ] }
To import the people-20160927.json file and merge documents from the import file with matching documents in the database, specify --mode merge, as in the following:
mongoimport -c=people -d=example --mode=merge --file=people-20160927.json
The import operation combines the fields from the JSON file with the original document in the database, matching the documents based on the _id field. During the import process, mongoimport adds the new username and likes fields to the document and updates the email field with the value from the imported document, as in the following:
{ "_id" : ObjectId("580100f4da893943d393e909"), "name" : "Crystal Duncan", "region" : "United States", "email" : "crystal.duncan@example.com", "username" : "crystal", "likes" : [ "running", "pandas", "software development" ] }
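The merge behavior can likewise be sketched in Python. Again this is only an illustrative model of the semantics described above, with dicts standing in for BSON documents.

```python
# Minimal sketch of --mode=merge semantics: fields from the import file are
# combined with the existing document; new fields are added, existing fields
# are updated, and fields absent from the import file are preserved.
def merge(collection, imported, match_field="_id"):
    for doc in collection:
        if doc.get(match_field) == imported.get(match_field):
            doc.update(imported)              # merge fields into the existing doc
            return collection
    collection.append(imported)               # no match: insert as usual
    return collection

existing = [{"_id": 1, "name": "Crystal Duncan", "email": "crystal@example.com"}]
incoming = {"_id": 1, "email": "crystal.duncan@example.com", "username": "crystal"}
merge(existing, incoming)
```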
Delete Matching Documents
New in version 100.0.0.
With --mode delete, mongoimport deletes existing documents in the database that match a document in the import file. Documents that do not match an existing document in the database are ignored. By default, mongoimport matches documents based on the _id field. Use --upsertFields to specify the fields to match against.
Note
With --mode delete, mongoimport only deletes one existing document per match. Ensure that documents from the import file match a single existing document from the database.
The people collection in the example database contains the following document:
{ "_id" : ObjectId("580100f4da893943d393e909"), "name" : "Crystal Duncan", "region" : "United States", "email" : "crystal@example.com", "employee_id" : "5463789356" }
The following document exists in a people-20160927.json JSON file. The _id field of the JSON object matches the _id field of the document in the people collection.
{ "_id" : ObjectId("580100f4da893943d393e909"), "username" : "crystal", "email": "crystal.duncan@example.com", "likes" : [ "running", "pandas", "software development" ], "employee_id" : "5463789356" }
To delete the documents in the database that match a document in the people-20160927.json file, specify --mode delete, as in the following:
mongoimport -c=people -d=example --mode=delete --file=people-20160927.json
Because the _id fields match between the database and the input file, mongoimport deletes the matching document from the people collection. The same results could also have been achieved by using --upsertFields to specify the employee_id field, which also matches between the database and the input file.
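The delete behavior, including matching on an alternate field such as employee_id via --upsertFields, can be sketched as follows. This is an illustrative Python model of the semantics, not mongoimport's implementation; note that each import document removes at most one existing document.

```python
# Minimal sketch of --mode=delete semantics: each document in the import file
# deletes at most ONE matching existing document; unmatched import documents
# are ignored. match_field models the --upsertFields option.
def delete_matches(collection, imported_docs, match_field="_id"):
    for imp in imported_docs:
        for i, doc in enumerate(collection):
            if doc.get(match_field) == imp.get(match_field):
                del collection[i]             # delete only one document per match
                break
    return collection

db = [{"_id": 1, "employee_id": "5463789356"}, {"_id": 2, "employee_id": "99"}]
delete_matches(db, [{"employee_id": "5463789356"}], match_field="employee_id")
```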
Import JSON to Remote Host Running with Authentication
In the following example, mongoimport imports data from the file /opt/backups/mdb1-examplenet.json into the contacts collection within the marketing database on a remote MongoDB database with authentication enabled.
mongoimport connects to the mongod instance running on the host mongodb1.example.net over port 37017. It authenticates with the username user; the example omits the --password option so that mongoimport prompts for the password:
mongoimport --host=mongodb1.example.net --port=37017 --username=user --collection=contacts --db=marketing --file=/opt/backups/mdb1-examplenet.json
CSV Import
General CSV Import
In the following example, mongoimport imports the CSV formatted data in the /opt/backups/contacts.csv file into the collection contacts in the users database on the MongoDB instance running on localhost port 27017.
Specifying --headerline instructs mongoimport to determine the names of the fields from the first line of the CSV file.
mongoimport --db=users --collection=contacts --type=csv --headerline --file=/opt/backups/contacts.csv
If -c or --collection is unspecified, mongoimport uses the input file name, without the extension, as the collection name. The following example is therefore equivalent:
mongoimport --db=users --type=csv --headerline --file=/opt/backups/contacts.csv
Import CSV with Specified Field Types
When specifying the field name, you can also specify the data type. To specify field names and types, include --columnsHaveTypes with either --fields, --fieldFile, or --headerline.
Specify field names and data types in the form <colName>.<type>(<arg>).
For example, a /example/file.csv file contains the following data:
Katherine Gray, 1996-02-03, false, 1235, TG9yZW0gaXBzdW0gZG9sb3Igc2l0IGFtZXQsIGNvbnNlY3RldHVyIGFkaXBpc2NpbmcgZWxpdCwgc2VkIGRvIGVpdXNtb2QgdGVtcG9yIGluY2lkaWR1bnQgdXQgbGFib3JlIGV0IGRvbG9yZSBtYWduYSBhbGlxdWEuIFV0IGVuaW0gYWQgbWluaW0gdmVuaWFtLCBxdWlzIG5vc3RydWQgZXhlcmNpdGF0aW9uIHVsbGFtY28gbGFib3JpcyBuaXNpIHV0IGFsaXF1aXAgZXggZWEgY29tbW9kbyBjb25zZXF1YXQuIER1aXMgYXV0ZSBpcnVyZSBkb2xvciBpbiByZXByZWhlbmRlcml0IGluIHZvbHVwdGF0ZSB2ZWxpdCBlc3NlIGNpbGx1bSBkb2xvcmUgZXUgZnVnaWF0IG51bGxhIHBhcmlhdHVyLiBFeGNlcHRldXIgc2ludCBvY2NhZWNhdCBjdXBpZGF0YXQgbm9uIHByb2lkZW50LCBzdW50IGluIGN1bHBhIHF1aSBvZmZpY2lhIGRlc2VydW50IG1vbGxpdCBhbmltIGlkIGVzdCBsYWJvcnVtLg==
Albert Gilbert, 1992-04-24, true, 13, Q3VwY2FrZSBpcHN1bSBkb2xvciBzaXQgYW1ldCB0b290c2llIHJvbGwgYm9uYm9uIHRvZmZlZS4gQ2FuZHkgY2FuZXMgcGllIGNyb2lzc2FudCBjaG9jb2xhdGUgYmFyIGxvbGxpcG9wIGJlYXIgY2xhdyBtYWNhcm9vbi4gU3dlZXQgcm9sbCBjdXBjYWtlIGNoZWVzZWNha2Ugc291ZmZsw6kgYnJvd25pZSBpY2UgY3JlYW0uIEp1anViZXMgY2FrZSBjdXBjYWtlIG1hY2Fyb29uIGRhbmlzaCBqZWxseS1vIHNvdWZmbMOpLiBDYWtlIGFwcGxlIHBpZSBnaW5nZXJicmVhZCBjaG9jb2xhdGUgc3VnYXIgcGx1bS4gU3dlZXQgY2hvY29sYXRlIGNha2UgY2hvY29sYXRlIGNha2UganVqdWJlcyB0aXJhbWlzdSBvYXQgY2FrZS4gU3dlZXQgc291ZmZsw6kgY2hvY29sYXRlLiBMaXF1b3JpY2UgY290dG9uIGNhbmR5IGNob2NvbGF0ZSBtYXJzaG1hbGxvdy4gSmVsbHkgY29va2llIGNha2UgamVsbHkgYm==
The following operation uses mongoimport with the --fields and --columnsHaveTypes options to specify both the field names and the BSON types of the imported CSV data.
mongoimport --db=users --collection=contacts --type=csv \
   --columnsHaveTypes \
   --fields="name.string(),birthdate.date(2006-01-02),contacted.boolean(),followerCount.int32(),thumbnail.binary(base64)" \
   --file=/example/file.csv
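The per-column conversions requested by a field spec like the one above can be sketched in Python. This is only an illustrative model: the converter table and the short base64 value are hypothetical stand-ins, and mongoimport's date() argument uses the Go-style reference layout 2006-01-02, which corresponds to the strptime pattern "%Y-%m-%d" used here.

```python
import base64
from datetime import datetime

# Illustrative converters for the types in the example field spec:
# string(), date(2006-01-02), boolean(), int32(), binary(base64).
converters = {
    "name": str,
    "birthdate": lambda v: datetime.strptime(v, "%Y-%m-%d"),
    "contacted": lambda v: v.strip().lower() == "true",
    "followerCount": int,
    "thumbnail": lambda v: base64.b64decode(v),
}

# A hypothetical CSV row (short base64 payload for readability).
row = ["Katherine Gray", "1996-02-03", "false", "1235", "aGVsbG8="]
doc = {name: conv(val.strip()) for (name, conv), val in zip(converters.items(), row)}
```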
Ignore Blank Fields
Use the --ignoreBlanks option to ignore blank fields. For CSV and TSV imports, this option provides the desired functionality in most cases because it avoids inserting fields with null values into your collection.
The following example imports the data from data.csv, skipping any blank fields:
mongoimport --db=users --collection=contacts --type=csv --file=/example/data.csv --ignoreBlanks
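The effect of --ignoreBlanks can be modeled as follows. This is an illustrative Python sketch with hypothetical sample data, showing that blank CSV fields are omitted from the resulting documents rather than stored as empty or null values.

```python
import csv
import io

# Simulate --ignoreBlanks: drop fields whose CSV value is empty, so the
# resulting document omits them entirely.
data = "name,email,phone\nCrystal Duncan,crystal@example.com,\n"
reader = csv.DictReader(io.StringIO(data))
docs = [{k: v for k, v in row.items() if v != ""} for row in reader]
# docs[0] has no "phone" field
```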
Connect to a MongoDB Atlas Cluster using AWS IAM Credentials
New in version 100.1.0.
To connect to a MongoDB Atlas cluster which has been configured to support authentication via AWS IAM credentials, provide a connection string to mongoimport similar to the following:
mongoimport 'mongodb+srv://<aws access key id>:<aws secret access key>@cluster0.example.com/testdb?authSource=$external&authMechanism=MONGODB-AWS' <other options>
Connecting to Atlas using AWS IAM credentials in this manner uses the MONGODB-AWS authentication mechanism and the $external authSource, as shown in this example.
If you are also using an AWS session token, provide it with the AWS_SESSION_TOKEN authMechanismProperties value, as follows:
mongoimport 'mongodb+srv://<aws access key id>:<aws secret access key>@cluster0.example.com/testdb?authSource=$external&authMechanism=MONGODB-AWS&authMechanismProperties=AWS_SESSION_TOKEN:<aws session token>' <other options>
Note
If the AWS access key ID, secret access key, or session token includes any of the following characters:
: / ? # [ ] @
those characters must be converted using percent encoding.
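For example, the percent encoding can be produced with Python's standard library. This is an illustrative snippet; the secret value and the access key ID are hypothetical placeholders.

```python
from urllib.parse import quote_plus

# Percent-encode an AWS secret that contains reserved characters (: / @)
# before placing it in the connection string.
secret = "abc/def:ghi@jkl"   # hypothetical secret access key
encoded = quote_plus(secret)
uri = ("mongodb+srv://AKIAEXAMPLE:" + encoded +
       "@cluster0.example.com/testdb"
       "?authSource=$external&authMechanism=MONGODB-AWS")
```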
Alternatively, the AWS access key ID, secret access key, and optionally the session token can each be provided outside of the connection string using the --username, --password, and --awsSessionToken options instead, like so:
mongoimport 'mongodb+srv://cluster0.example.com/testdb?authSource=$external&authMechanism=MONGODB-AWS' --username <aws access key id> --password <aws secret access key> --awsSessionToken <aws session token> <other options>
When provided as command line parameters, these three options do not require percent encoding.
You may also set these credentials on your platform using standard AWS IAM environment variables. mongoimport checks for the following environment variables when you use the MONGODB-AWS authentication mechanism:
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
AWS_SESSION_TOKEN
If set, these credentials do not need to be specified in the connection string or via their explicit options.
Note
If you choose to use the AWS environment variables to specify these values, you cannot mix and match them with the corresponding explicit or connection string options for these credentials. Either use the environment variables for the access key ID and secret access key (and session token, if used), or specify each of these using the explicit or connection string options instead.
The following example sets these environment variables in the bash shell:
export AWS_ACCESS_KEY_ID='<aws access key id>'
export AWS_SECRET_ACCESS_KEY='<aws secret access key>'
export AWS_SESSION_TOKEN='<aws session token>'
Syntax for setting environment variables in other shells will be different. Consult the documentation for your platform for more information.
You can verify that these environment variables have been set with the following command:
env | grep AWS
Once set, the following example connects to a MongoDB Atlas cluster using these environment variables:
mongoimport 'mongodb+srv://cluster0.example.com/testdb?authSource=$external&authMechanism=MONGODB-AWS' <other options>