All my MongoDB documents are being deleted after a few hours

Hey, I own a VPS. On it we run three Minecraft servers and one Discord bot, and we also have a local MongoDB instance running. I connect to it from Node.js using the Mongoose library; this question is about the Discord bot. When I create documents in MongoDB, it works, and the code reads everything fine. Then, after about 2-3 hours, every single document I created is deleted. It's really annoying and makes the bot completely useless. There are no errors and no logs; the documents just disappear. This is quite urgent, as we have users who want to use the bot. When hosting MongoDB publicly, everything works and nothing gets deleted, so it's not the code. Any help is appreciated.

Hi @Lapirate_N_A,

When did this start happening? Have you checked that there are no TTL indexes on the collection where the documents are being deleted (i.e. any indexed time/date fields on those documents)?

Have you also checked the logs to see if you can locate the deletes? You may need to alter the profiling level to log the deletes that are happening.

Could you clarify my understanding here: is this a test instance where you are seeing data being removed, while your production/public instance is working as normal (no unexpected deletions)? It would also be good to verify that no changes were made by other members of your team (if any).

Regards,
Jason


Hey, thank you for the fast reply. I will try to answer to the best of my ability, but I am still new to this. I haven't checked whether there are any TTL indexes, as I couldn't find any documentation on how to check with Mongoose, but I haven't added any `expireAt` fields to any of my schemas. Is there an easy way to check for indexes using Mongoose?

I have no idea how logs work. The VPS runs a Pterodactyl panel; how would I check the MongoDB logs on it?

This is a DB running locally on the VPS. I connect to it using this format: `mongodb://IP:PORT`. When running it publicly using Atlas, nothing is deleted.

I used this snippet to check for indexes, and it doesn't seem like there are any TTL indexes, but documents keep getting deleted. Any other fixes?


```
const mongoose = require('mongoose');

const connection = mongoose.connection;

connection.on('error', err => {
  console.error('Connection error:', err);
});

connection.once('open', async () => {
  console.log('Connected to MongoDB');

  // Get the list of collections
  const collections = await connection.db.listCollections().toArray();
  console.log(collections);

  // Loop through each collection
  for (const collection of collections) {
    const collectionName = collection.name;

    // Fetch indexes for the current collection
    const indexes = await connection.db.collection(collectionName).indexInformation();

    // Print information for all indexes in the current collection
    console.log(`Collection: ${collectionName}`);
    console.log('Indexes:', indexes);
  }
});
```

If the collection or database is being dropped, these events will be in the MongoDB server log.

Here is me dropping the collection `baz` from the `test` db with `db.getSiblingDB('test').baz.drop()`:

```
{"t":{"$date":"2024-01-13T13:55:59.913+00:00"},"s":"I", "c":"COMMAND", "id":518070, "ctx":"conn6","msg":"CMD: drop","attr":{"namespace":"test.baz"}}
```

Here is me dropping the database `test` with `db.getSiblingDB('test').dropDatabase()`:

```
{"t":{"$date":"2024-01-13T13:56:08.945+00:00"},"s":"I", "c":"COMMAND", "id":20337, "ctx":"conn6","msg":"dropDatabase - starting","attr":{"db":"test"}}
{"t":{"$date":"2024-01-13T13:56:08.945+00:00"},"s":"I", "c":"COMMAND", "id":20338, "ctx":"conn6","msg":"dropDatabase - dropping collection","attr":{"db":"test","namespace":"test.fizz"}}
{"t":{"$date":"2024-01-13T13:56:08.945+00:00"},"s":"I", "c":"REPL", "id":7360105, "ctx":"conn6","msg":"Wrote oplog entry for dropDatabase","attr":{"namespace":"test.$cmd","opTime":{"ts":{"$timestamp":{"t":0,"i":0}},"t":-1},"object":{"dropDatabase":1}}}
{"t":{"$date":"2024-01-13T13:56:08.946+00:00"},"s":"I", "c":"STORAGE", "id":22206, "ctx":"conn6","msg":"Deferring table drop for index","attr":{"index":"_id_","namespace":"test.fizz","uuid":{"uuid":{"$uuid":"b75b0cdf-ef9d-416f-a40b-d4545c702fa4"}},"ident":"index-20--9135526063718830413","commitTimestamp":{"$timestamp":{"t":0,"i":0}}}}
{"t":{"$date":"2024-01-13T13:56:08.946+00:00"},"s":"I", "c":"STORAGE", "id":22214, "ctx":"conn6","msg":"Deferring table drop for collection","attr":{"namespace":"test.fizz","ident":"collection-19--9135526063718830413","commitTimestamp":{"$timestamp":{"t":0,"i":0}}}}
{"t":{"$date":"2024-01-13T13:56:08.946+00:00"},"s":"I", "c":"COMMAND", "id":20336, "ctx":"conn6","msg":"dropDatabase","attr":{"db":"test","numCollectionsDropped":1}}
```

If the document deletion is due to a delete command, then this will not be logged unless it is a 'slow query'; by default that means 100 ms or more. However, the threshold can be changed so that every query is logged. :warning: this can generate a lot of logs :warning:

`db.setProfilingLevel(0, 0)` sets the slow query threshold to 0 milliseconds, so every operation is logged. It can be reset to the default 100 ms via `db.setProfilingLevel(0, 100)`.

Now my `db.bar.deleteMany({})`, which deleted 2000 documents, is logged:

```
{"t":{"$date":"2024-01-13T14:05:51.189+00:00"},"s":"I", "c":"WRITE", "id":51803, "ctx":"conn6","msg":"Slow query","attr":{"type":"remove","ns":"test.bar","appName":"mongosh 2.1.1","command":{"q":{},"limit":0},"planSummary":"COLLSCAN","keysExamined":0,"docsExamined":2000,"ndeleted":2000,"keysDeleted":2000,"numYields":2,"locks":{"ParallelBatchWriterMode":{"acquireCount":{"r":3}},"FeatureCompatibilityVersion":{"acquireCount":{"w":3}},"ReplicationStateTransition":{"acquireCount":{"w":3}},"Global":{"acquireCount":{"w":3}},"Database":{"acquireCount":{"w":3}},"Collection":{"acquireCount":{"w":3}}},"flowControl":{"acquireCount":3},"storage":{},"cpuNanos":46080111,"remote":"172.17.0.1:56680","durationMillis":46}}
```

There are a couple of ways to get the logs.
If a log path is configured, it can be found via `db.serverCmdLineOpts()`:

```
db.serverCmdLineOpts().parsed.systemLog
{ destination: 'file', path: '/var/log/mongodb/mongodb.log' }
```

If destination is syslog, or if systemLog is not present, then more investigation will be required to find where the logs are going.

The other option is to use the `getLog` command to fetch up to the 1024 most recent log lines. These can be loaded into a collection to be queried, or you could output them to a file to be inspected.

```
db.adminCommand({getLog:'global'}).log.forEach(x=>{db.getSiblingDB('log').log.insert(JSON.parse(x))})
```

Based on the connection string provided (`mongodb://IP:PORT`, with no credentials), it appears no authentication is enabled on the deployment. Enabling authentication prevents unauthorised access, which is a common cause of data loss like this: unsecured MongoDB instances reachable from the internet are routinely found and wiped by automated scripts.
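For completeness, access control is typically switched on in the `mongod` configuration file and the server restarted (after first creating an administrative user). A minimal fragment, assuming a stock `mongod.conf` layout:

```yaml
# mongod.conf (fragment): require clients to authenticate
security:
  authorization: enabled
```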

refs:
https://www.mongodb.com/docs/manual/reference/log-messages/#log-messages
https://www.mongodb.com/docs/manual/tutorial/manage-the-database-profiler/#specify-the-threshold-for-slow-operations
https://www.mongodb.com/docs/manual/reference/command/getLog/#getlog
https://www.mongodb.com/docs/manual/tutorial/enable-authentication/#enable-access-control


Using `mongoose.set('debug', true)`, I tried logging the event, but as I said, there are no logs at all. I am 100% sure I don't have any deleteMany calls in the code. Documents just get randomly deleted without any logs at all. How would I use the `setProfilingLevel(0, 0)` method in Mongoose?

Perhaps run the populated MongoDB without any client/bot connected and see if the issue still occurs. That could narrow the cause down to either the database setup or the bot's behaviour.

@Lapirate_N_A did you figure this out? I am encountering the same issue and it's really frustrating. I also don't have any TTL indexes, and the logs don't show documents getting deleted.