Some databases have been created by sharding

I don't understand why some databases with long names have appeared and are taking up a lot of space. I'm only testing a couple of collections with very little data, and databases like these have shown up:

shard0:centos7:27117:anonymous:test:PRIMARY> show dbs
5ea30f4d15f4800d42dced74       4.392GB
5efca35993905fd511323efc_sync  0.053GB
5efca35993905fd511323eff_sync  0.000GB
5efca35993905fd511323f02_sync  0.000GB

shard0:centos7:27117:anonymous:5ea30f4d15f4800d42dced74:PRIMARY> show collections
oplog_config-rset1-5ee9f32d93905fd5112d53fd
oplog_shard0
oplog_shard1

shard0:centos7:27117:anonymous:5ea30f4d15f4800d42dced74:PRIMARY> db.oplog_shard0.stats().wiredTiger.uri
statistics:table:collection-6--3400962913216231357

/data/replSet/1A> l collection-6--3400962913216231357*
-rw------- 1 mongod mongod 4710215680 Sep 15 17:56 collection-6--3400962913216231357.wt

Is this normal? I don't understand why that file is so big and keeps being updated. Thanks in advance for the explanation. Also, is it possible to drop those databases?

Hi @Willy_Latorre,

The names of those databases follow the Ops Manager naming convention for oplog stores and sync stores.

Those components are created by the backup feature and are used by Ops Manager to store backups.

Have you used your backed-up deployment as the target for those databases? If so, this is fundamentally wrong.

Oplog stores need to be on a separate instance or replica set.

Thanks
Pavel

Hi @Willy_Latorre,

EDIT: @Pavel_Duchovny answered while I was typing this :smiley:! I hope you find what you need in our answers :+1:.

Looks like you are connected to shard0 of your MongoDB sharded cluster.
You should not perform operations directly on a shard in a sharded cluster; your queries should go through the mongos.
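For example, rather than connecting to the shard member on port 27117, you would point the shell at the mongos process. A minimal sketch (the host and port below are placeholders, use whatever your mongos actually listens on):

mongo --host centos7 --port 27017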

Here are the collections I can see from the mongos of a fresh sharded cluster with just one sharded collection, test.testcol.

MongoDB Enterprise mongos> show dbs 
admin   0.000GB
config  0.003GB
test    0.000GB
MongoDB Enterprise mongos> use admin
switched to db admin
MongoDB Enterprise mongos> show collections
system.keys
system.version
MongoDB Enterprise mongos> use config
switched to db config
MongoDB Enterprise mongos> show collections
actionlog
changelog
chunks
collections
databases
lockpings
locks
migrations
mongos
shards
system.indexBuilds
tags
transactions
version
MongoDB Enterprise mongos> use test
switched to db test
MongoDB Enterprise mongos> show collections
testcol

In my case, sh.status() reports that my only chunk for test.testcol is on my shard2. Here is the content of my shard2 replica set.

MongoDB Enterprise shard2:PRIMARY> show dbs 
admin   0.000GB
config  0.001GB
local   0.001GB
test    0.000GB
MongoDB Enterprise shard2:PRIMARY> use admin 
switched to db admin
MongoDB Enterprise shard2:PRIMARY> show collections
system.version
MongoDB Enterprise shard2:PRIMARY> use config 
switched to db config
MongoDB Enterprise shard2:PRIMARY> show collections
cache.chunks.config.system.sessions
cache.chunks.test.testcol
cache.collections
cache.databases
rangeDeletions
system.indexBuilds
system.sessions
transactions
MongoDB Enterprise shard2:PRIMARY> use local
switched to db local
MongoDB Enterprise shard2:PRIMARY> show collections
oplog.rs
replset.election
replset.initialSyncId
replset.minvalid
replset.oplogTruncateAfterPoint
startup_log
system.replset
system.rollback.id
MongoDB Enterprise shard2:PRIMARY> use test
switched to db test
MongoDB Enterprise shard2:PRIMARY> show collections
testcol

So the above is what you should expect in your cluster too.

Now, from what I see above, it looks like you are wondering what the oplog collection is. It's normal for this collection to be a bit large, as it's the collection that drives the replication process in this shard / replica set. It contains the latest write operations this particular replica set has performed so far. The oplog is also a capped collection, meaning its size depends on the size you chose when you configured your mongod nodes. The bigger it is, the more history it can store; the oldest operations are overwritten by new ones.
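If you are curious, you can also peek at the most recent oplog entry from the local database. A minimal sketch, purely for inspection:

MongoDB Enterprise shard2:PRIMARY> db.getSiblingDB("local").oplog.rs.find().sort({$natural: -1}).limit(1).pretty()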

You can actually see the oplog size along with the times of the first and last entries with this command:

MongoDB Enterprise shard2:PRIMARY> db.getReplicationInfo()
{
	"logSizeMB" : 15910.21054649353,
	"usedMB" : 1.5,
	"timeDiff" : 2833,
	"timeDiffHours" : 0.79,
	"tFirst" : "Tue Sep 15 2020 21:04:28 GMT+0200 (CEST)",
	"tLast" : "Tue Sep 15 2020 21:51:41 GMT+0200 (CEST)",
	"now" : "Tue Sep 15 2020 21:51:44 GMT+0200 (CEST)"
}

In a prod cluster, the more oplog you have, the merrier! A larger oplog gives a node that has been stopped more room to come back and catch up with the rest of the replica set.
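If you ever need a bigger oplog on a running node, there is an admin command for that. A minimal sketch, assuming you want roughly 2 GB (the 2000 MB value is just an illustration, pick a size that fits your workload):

MongoDB Enterprise shard2:PRIMARY> db.adminCommand({replSetResizeOplog: 1, size: 2000})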
I hope this helps.

Cheers,
Maxime.

Hi, I'm not using backup, it's disabled in Ops Manager, so the big question is: can I drop those databases?

shard0:centos7:27117:anonymous:test:PRIMARY> db.getReplicationInfo()
{
	"logSizeMB" : 990,
	"usedMB" : 986.48,
	"timeDiff" : 1280352,
	"timeDiffHours" : 355.65,
	"tFirst" : "Wed Sep 02 2020 00:42:24 GMT+0200 (CEST)",
	"tLast" : "Wed Sep 16 2020 20:21:36 GMT+0200 (CEST)",
	"now" : "Wed Sep 16 2020 20:21:42 GMT+0200 (CEST)"
}

it's only 990 MB

MongoDB Enterprise mongos> show  dbs
5ea30f4d15f4800d42dced74       4.392GB
5efca35993905fd511323efc_sync  0.053GB

Hi @Willy_Latorre,

If you are confident that there is no Ops Manager deployment using these in your organisation, you can drop them. Be aware that the related backups will be corrupted and unrecoverable.
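In case it helps, dropping one of them from the mongo shell would look like this (using one of the database names from your output; repeat for each database you want to remove):

MongoDB Enterprise mongos> use 5ea30f4d15f4800d42dced74
switched to db 5ea30f4d15f4800d42dced74
MongoDB Enterprise mongos> db.dropDatabase()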

Best
Pavel

Hi Pavel, I have Ops Manager currently, and backup is not enabled on it. I mostly use the mongo shell.