
Import Archived Data

Note

This feature is not available for M0 Free clusters or Flex clusters. To learn more about which features are unavailable, see Atlas M0 (Free Cluster) Limitations.

You can use mongoimport and mongorestore to restore archived data from Amazon Web Services S3, Azure Blob Storage, or Google Cloud Platform storage. This page provides sample procedures that use the aws, azcopy, or gsutil CLI (depending on the data source) and the MongoDB Database Tools to import archived data and rebuild indexes.

Before you begin, you must:

1

Copy the archived data to a local folder and extract it:
aws s3 cp s3://<bucketName>/<prefix> <downloadFolder> --recursive
gunzip -r <downloadFolder>

Where:

<bucketName>

Name of the AWS S3 bucket.

<prefix>

Path to the archived data in the bucket. The path has the following format:

/exported_snapshots/<orgId>/<projectId>/<clusterName>/<initiationDateOfSnapshot>/<timestamp>/

<downloadFolder>

Path to the local folder to which you want to copy the archived data.

For example, run a command similar to the following:

Example

aws s3 cp
s3://export-test-bucket/exported_snapshots/1ab2cdef3a5e5a6c3bd12de4/12ab3456c7d89d786feba4e7/myCluster/2021-04-24T0013/1619224539
mybucket --recursive
gunzip -r mybucket
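
For reference, the prefix in the example above encodes the org ID, project ID, cluster name, snapshot initiation date, and timestamp described by the path format; a small Python illustration, using the hypothetical values from the example:

```python
# Split an export prefix of the form documented above:
# exported_snapshots/<orgId>/<projectId>/<clusterName>/<initiationDateOfSnapshot>/<timestamp>
prefix = ("exported_snapshots/1ab2cdef3a5e5a6c3bd12de4/"
          "12ab3456c7d89d786feba4e7/myCluster/2021-04-24T0013/1619224539")
_, org_id, project_id, cluster_name, initiation_date, timestamp = prefix.split("/")
print(cluster_name, initiation_date)  # myCluster 2021-04-24T0013
```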
2

Copy the following script and save it as massimport.sh:
#!/bin/bash
regex='/(.+)/(.+)/.+'
dir=${1%/}
connstr=$2
# iterate through the subdirectories of the downloaded and
# extracted snapshot export and restore the docs with mongoimport
find "$dir" -type f -not -path '*/\.*' -not -path '*metadata\.json' | while read -r line ; do
[[ $line =~ $regex ]]
db_name=${BASH_REMATCH[1]}
col_name=${BASH_REMATCH[2]}
mongoimport --uri "$connstr" --mode=upsert -d "$db_name" -c "$col_name" --file "$line" --type json
done
# create the required directory structure and copy/rename files
# as needed for mongorestore to rebuild indexes on the collections
# from exported snapshot metadata files and feed them to mongorestore
find "$dir" -type f -name '*metadata\.json' | while read -r line ; do
[[ $line =~ $regex ]]
db_name=${BASH_REMATCH[1]}
col_name=${BASH_REMATCH[2]}
mkdir -p "${dir}/metadata/${db_name}/"
cp "$line" "${dir}/metadata/${db_name}/${col_name}.metadata.json"
done
mongorestore "$connstr" "${dir}/metadata/"
# remove the metadata directory, which is no longer needed; this leaves
# the snapshot directory in the same state as before the import
rm -rf "${dir}/metadata/"

Here:

  • --mode=upsert enables mongoimport to handle duplicate documents from the archive.

  • --uri specifies the connection string for the Atlas cluster.
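
The script derives each file's target database and collection from the last two directory components of its path; a minimal sketch of that mapping in Python, applied to a hypothetical file path:

```python
import re

# Same pattern used in massimport.sh: the last two directory
# components of a data file's path name the database and collection.
PATTERN = re.compile(r"/(.+)/(.+)/.+")

def db_and_collection(path):
    """Return (database, collection) for an exported data file path."""
    match = PATTERN.search(path)
    return match.group(1), match.group(2)

print(db_and_collection("mybucket/sample_mflix/movies/part-0.json"))
# ('sample_mflix', 'movies')
```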

3

Run the massimport.sh utility to import the archived data into your Atlas cluster.
sh massimport.sh <downloadFolder> "mongodb+srv://<connectionString>"

Where:

<downloadFolder>

Path to the local folder to which you copied the archived data.

<connectionString>

Connection string for the Atlas cluster.

For example, run a command similar to the following:

Example

sh massimport.sh mybucket "mongodb+srv://<myConnString>"
1

Copy the archived data to a local folder:
azcopy copy "https://<storageAccountName>.blob.core.windows.net/<containerName>/<prefix>/*" "<downloadFolder>" --recursive

Where:

<storageAccountName>

Name of the Azure storage account that the blob storage container belongs to.

<containerName>

Name of the Azure blob storage container.

<prefix>

Path to the archived data in the container.

<downloadFolder>

Path to the local folder to which you want to copy the archived data.

Example

azcopy copy "https://mystorageaccount.blob.core.windows.net/mycontainer/myTextFile.txt" "~/downloads" --recursive
2

Copy the following script and save it as massimport.sh:
#!/bin/bash
regex='/(.+)/(.+)/.+'
dir=${1%/}
connstr=$2
# iterate through the subdirectories of the downloaded and
# extracted snapshot export and restore the docs with mongoimport
find "$dir" -type f -not -path '*/\.*' -not -path '*metadata\.json' | while read -r line ; do
[[ $line =~ $regex ]]
db_name=${BASH_REMATCH[1]}
col_name=${BASH_REMATCH[2]}
mongoimport --uri "$connstr" --mode=upsert -d "$db_name" -c "$col_name" --file "$line" --type json
done
# create the required directory structure and copy/rename files
# as needed for mongorestore to rebuild indexes on the collections
# from exported snapshot metadata files and feed them to mongorestore
find "$dir" -type f -name '*metadata\.json' | while read -r line ; do
[[ $line =~ $regex ]]
db_name=${BASH_REMATCH[1]}
col_name=${BASH_REMATCH[2]}
mkdir -p "${dir}/metadata/${db_name}/"
cp "$line" "${dir}/metadata/${db_name}/${col_name}.metadata.json"
done
mongorestore "$connstr" "${dir}/metadata/"
# remove the metadata directory, which is no longer needed; this leaves
# the snapshot directory in the same state as before the import
rm -rf "${dir}/metadata/"

Here:

  • --mode=upsert enables mongoimport to handle duplicate documents from the archive.

  • --uri specifies the connection string for the Atlas cluster.

3

Run the massimport.sh utility to import the archived data into your Atlas cluster.
sh massimport.sh <downloadFolder> "mongodb+srv://<connectionString>"

Where:

<downloadFolder>

Path to the local folder to which you copied the archived data.

<connectionString>

Connection string for the Atlas cluster.

Example

sh massimport.sh "~/downloads" "mongodb+srv://<myConnString>"
1

Copy the archived data to a local folder and extract it:

gsutil -m cp -r "gs://<bucketName>/<prefix>" <downloadFolder>
gunzip -r <downloadFolder>

Where:

<bucketName>

Name of the Google Cloud Platform storage bucket.

<prefix>

Path to the archived data in the bucket. The path has the following format:

/exported_snapshots/<orgId>/<projectId>/<clusterName>/<initiationDateOfSnapshot>/<timestamp>/

<downloadFolder>

Path to the local folder to which you want to copy the archived data.

Example

gsutil -m cp -r "gs://export-test-bucket/exported_snapshots/1ab2cdef3a5e5a6c3bd12de4/12ab3456c7d89d786feba4e7/myCluster/2021-04-24T0013/1619224539" mybucket
gunzip -r mybucket
2

Copy the following script and save it as massimport.sh:
#!/bin/bash
regex='/(.+)/(.+)/.+'
dir=${1%/}
connstr=$2
# iterate through the subdirectories of the downloaded and
# extracted snapshot export and restore the docs with mongoimport
find "$dir" -type f -not -path '*/\.*' -not -path '*metadata\.json' | while read -r line ; do
[[ $line =~ $regex ]]
db_name=${BASH_REMATCH[1]}
col_name=${BASH_REMATCH[2]}
mongoimport --uri "$connstr" --mode=upsert -d "$db_name" -c "$col_name" --file "$line" --type json
done
# create the required directory structure and copy/rename files
# as needed for mongorestore to rebuild indexes on the collections
# from exported snapshot metadata files and feed them to mongorestore
find "$dir" -type f -name '*metadata\.json' | while read -r line ; do
[[ $line =~ $regex ]]
db_name=${BASH_REMATCH[1]}
col_name=${BASH_REMATCH[2]}
mkdir -p "${dir}/metadata/${db_name}/"
cp "$line" "${dir}/metadata/${db_name}/${col_name}.metadata.json"
done
mongorestore "$connstr" "${dir}/metadata/"
# remove the metadata directory, which is no longer needed; this leaves
# the snapshot directory in the same state as before the import
rm -rf "${dir}/metadata/"

Here:

  • --mode=upsert enables mongoimport to handle duplicate documents from the archive.

  • --uri specifies the connection string for the Atlas cluster.

3

Run the massimport.sh utility to import the archived data into your Atlas cluster.

sh massimport.sh <downloadFolder> "mongodb+srv://<connectionString>"

Where:

<downloadFolder>

Path to the local folder to which you copied the archived data.

<connectionString>

Connection string for the Atlas cluster.

Example

sh massimport.sh mybucket "mongodb+srv://<myConnString>"
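
Before running the script, it may help to sanity-check the extracted folder. The sketch below (a hypothetical helper, assuming the <database>/<collection>/<file> layout shown above) counts data files and index-metadata files, roughly mirroring how massimport.sh partitions them:

```python
from pathlib import Path

def summarize_export(download_folder):
    """Count data files and index-metadata files in an extracted
    snapshot export, split the way massimport.sh treats them."""
    counts = {"data_files": 0, "metadata_files": 0}
    for f in Path(download_folder).rglob("*"):
        if f.is_file():
            key = "metadata_files" if f.name.endswith("metadata.json") else "data_files"
            counts[key] += 1
    return counts
```

If the counts are zero, the download or extraction step likely did not produce the expected layout.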
