Duplicate data getting added into collection

Hi, I have created a Mongoose model as below and set the recordId field to `unique: true` to prevent duplicate entries. But when multiple API calls arrive within milliseconds, the collection still allows duplicate entries. Can anyone please tell me how to handle this error?

const RecordList = mongoose.model(
  'record_list',
  mongoose.Schema({
    recordId: {
      type: String,
      unique: true,
      required: [true, 'Please enter record id!'],
    },
    campaign: {
      type: String,
    },
  })
);

Thank You.


Hello @Krushna_Chandra_Rout, Welcome to the MongoDB Community forum!

With Mongoose ODM, when you create a field in a schema with the property unique: true, it means that a unique constraint is to be created on that field. In fact, it does create such a unique index in the database for that collection.

For example, the following code defines such a schema and inserts one document. I can verify the collection, the document, and the unique index from mongosh or Compass. In the shell, db.collection.getIndexes() prints the newly created index's details.

When I run the same program again, or try to insert another document with the same name: 'john', there is an error: MongoError: E11000 duplicate key error collection: test.records index: name_1 dup key: { name: "john" }.

Please include the version of MongoDB and Mongoose you are working with.

Example Code:

const mongoose = require('mongoose');
const url = 'mongodb://127.0.0.1:27017/test';
mongoose.connect(url, { useNewUrlParser: true, useUnifiedTopology: true });

const Schema = mongoose.Schema;

// unique: true tells Mongoose to create a unique index on the "name" field.
const RecordSchema = new Schema({
    name: { type: String, required: true, unique: true }
}, { collection: 'records' });

const Record = mongoose.model('Record', RecordSchema);

// Inserting a second document with the same name fails with an E11000 duplicate key error.
const r1 = new Record({ name: 'john' });
r1.save(function (err) {
    if (err) throw err;
    console.log('r1 record saved.');
});
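
To address the original question of handling this error in an API: one common approach (a sketch, not part of the example above) is to check for MongoDB's duplicate key error code 11000 in the save callback and respond accordingly instead of letting the request fail:

// Handling the duplicate key error rather than letting it crash the request.
// 11000 is MongoDB's error code for a unique index violation.
const r2 = new Record({ name: 'john' });
r2.save(function (err) {
    if (err && err.code === 11000) {
        console.log('Duplicate name, record not inserted:', err.message);
        return;
    }
    if (err) throw err;
    console.log('r2 record saved.');
});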

This may be because the index is created through Mongoose with the background: true option. This option may not build the index immediately, which can allow duplicate entries on the indexed field in the meantime.

One option for you is to create the index from mongosh or Compass initially; you can still keep the Mongoose definition as it is. This will ensure the duplicate key error is triggered immediately.
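
For example, from mongosh (a sketch; assuming Mongoose mapped the 'record_list' model to the record_lists collection):

// Create the unique index up front so duplicate recordId values are rejected immediately.
db.record_lists.createIndex({ recordId: 1 }, { unique: true })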


A quick query on my index data showed that the index was created with the background: true option [*]:

{
    "v" : 2,
    "unique" : true,
    "key" : {
        "name" : 1
    },
    "name" : "name_1",
    "ns" : "test.records",
    "background" : true       // <---- [*]
}

NOTE: This was using Mongoose 6.2.4 and MongoDB v5.0.6 (Atlas cluster).

Hi @Prasad_Saya, thank you for this solution.
Actually, one of the unique fields was deleted from Compass by mistake; that's why the above issue was occurring.

Thank You.

That is a bad situation. Database objects like collections, databases, indexes, etc., need to be carefully and securely managed.

Hi @Prasad_Saya ,
I am having similar kind of issue. My operation is based on the last generated call on mongodb. I am generating a key in incremental order and for that I need to fetch the last generated key and then I am incrementing it based on last generated. Issue I am getting is when i hit the api in 0 sec for 50 keys the keys are not uniquely generated it points to the same thread. So is there any lock mechanism in mongodb to generate only one key at a time? Please suggest

The issue is that 2 or more processes/tasks/threads read the same value, increment it to the same new value, and store back that same value. This is a typical problem in a badly designed distributed system; it is not the way to do things in a distributed system. You need to read and increment in an ACID way. It is not clear how you fetch the last generated key, but if you do it with a sort on your collection then I do not know how you could make it safe. Maybe with a transaction. Maybe by repeating an upsert until you no longer get a duplicate. One way to do it without a sort is to keep the value in a collection and use findOneAndUpdate to increment it atomically. But either sorting or a findOneAndUpdate counter is still a big NO NO in my view, as it means 2 or more server requests per insert. Why don't you use an $oid, UUID, or GUID?
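
To illustrate the findOneAndUpdate counter approach mentioned above, here is a minimal sketch (the Counter model and counters collection are illustrative, not something from this thread). $inc is applied server-side as a single atomic operation, so concurrent requests each receive a distinct value:

const mongoose = require('mongoose');

// Hypothetical counters collection: one document per named sequence.
const CounterSchema = new mongoose.Schema({
    _id: String,                        // sequence name, e.g. 'recordId'
    seq: { type: Number, default: 0 }
}, { collection: 'counters' });

const Counter = mongoose.model('Counter', CounterSchema);

// Atomically increments and returns the next value of a named sequence.
async function nextSequence(name) {
    const doc = await Counter.findOneAndUpdate(
        { _id: name },
        { $inc: { seq: 1 } },
        { new: true, upsert: true }
    );
    return doc.seq;
}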

Hi @steevej ,
Thanks for the answer.
Why don’t you use an $oid, UUID, GUID?
Actually, our requirement is that each alphanumeric key is generated only once, never repeated, and is 6 characters long; the length will be increased later once the permutations are exhausted. I tried to lock the key using pessimistic locking, but I don't know if that is the correct approach. As per the requirement, I am generating the key in incremental order using modulus operations: I repeatedly divide an incrementing number by alphabets.size() to get a 6-character alphanumeric key that never repeats, which is why I need the last generated number. Can I use pessimistic locking?
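
Roughly, the encoding I mean looks like this (a simplified sketch; the alphabet and padding here are only illustrative):

// Illustrative alphabet; the real one may differ.
const alphabets = '0123456789abcdefghijklmnopqrstuvwxyz'.split('');

// Converts an incrementing counter into a fixed-length alphanumeric key
// by repeated modulus/division (essentially base-36 encoding padded to 6 chars).
function toKey(counter, length = 6) {
    let n = counter;
    let key = '';
    while (key.length < length) {
        key = alphabets[n % alphabets.length] + key;
        n = Math.floor(n / alphabets.length);
    }
    return key;
}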

From what I understand, you are not using MongoDB to generate your unique key. It looks like you are developing your own function in some kind of library. If this is the case, then you have to make sure that your function never returns 2 identical keys. This seems to be a JS question rather than a MongoDB question.

It would be best to share your code so that we fully understand what you are doing. But Stack Overflow might be a better venue since it is a JS question.

I am not getting any solution for the repetition of data entries in MongoDB. Can anyone guide me?

I am pretty sure that your issue is different from the one discussed in this thread.

Please start a new thread and explain what you do and how the results differ from your expectation.

Dear @Vishal_Yadav3, could you please provide closure on this thread? Thanks.

Thank you for guiding me. I have posted my issue in a new thread. Looking forward to a solution.
