MongoDB: unique values across multiple fields to prevent duplicates under high traffic

My function's logic does not allow duplicate data under normal operation:

```
{"name": "paul", "age": "21"}
{"name": "goerge", "age": "21"}
{"name": "paul", "age": "44"}
{"name": "paul", "age": "21"}
```

My function does not allow the last one, because paul with age 21 is already in the database. But when traffic is high, that document is sometimes added anyway. To solve this problem, I created a compound unique index with `db.test.createIndex({"name": 1, "age": 1}, {unique: true})`. However, this does not allow any other data starting with paul.

Is there any way to stop duplicate data?

Most likely you accomplish this in the wrong way: you first do a `find()` on `{ paul, 21 }` and then an `insert()` if nothing is found. As you discovered, another request can add a `{ paul, 21 }` document between your find and your insert.
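The racy check-then-insert pattern described above looks roughly like this in the mongo shell (the collection name `test` is taken from the question's `createIndex` call; the exact code in the question is not shown, so this is a sketch):

```javascript
// Racy check-then-insert: two concurrent requests can both pass the
// findOne() check before either insertOne() runs, producing a duplicate.
if (db.test.findOne({ name: "paul", age: "21" }) === null) {
  // Another request may insert the same document right here.
  db.test.insertOne({ name: "paul", age: "21" });
}
```

The gap between the read and the write is the window in which a second concurrent request slips in; no amount of application-side checking closes it.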

Having a unique index like you did is a sure way to prevent that. However, your logic is still wrong: you will still try to create a duplicate `{ paul, 21 }`, and your code will then get a duplicate key error.
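One option is to keep the insert and treat the duplicate key error (server error code 11000) as "already exists". A sketch using the Node.js driver, assuming `collection` is an already-connected `Collection` handle:

```javascript
// Let the unique index do the work: attempt the insert and
// swallow only the duplicate-key error, rethrowing anything else.
try {
  await collection.insertOne({ name: "paul", age: "21" });
} catch (err) {
  if (err.code === 11000) {
    // E11000 duplicate key: the document already exists; not a failure.
  } else {
    throw err;
  }
}
```

This is correct under concurrency, but it turns an expected condition into an exception path, which the upsert approach below avoids.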

If your use case queries on the name and age fields, you should definitely keep your index. But to prevent the duplicate key error, rather than `find()` followed by `insert()`, you should do an `update()` with `upsert: true`. See https://www.mongodb.com/docs/manual/reference/method/db.collection.update/
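With the unique index in place, an upsert makes the check-and-insert a single server-side operation. A sketch in the mongo shell (the `createdAt` field is hypothetical, added only to show `$setOnInsert`; the name/age fields come from the filter):

```javascript
// Insert { paul, 21 } only if no matching document exists.
// On upsert, MongoDB builds the new document from the filter's
// equality fields plus the $setOnInsert fields; if a matching
// document already exists, this update is a no-op.
db.test.updateOne(
  { name: "paul", age: "21" },
  { $setOnInsert: { createdAt: new Date() } },  // hypothetical extra field
  { upsert: true }
);
```

Note that two simultaneous upserts can still both attempt an insert, in which case one of them fails against the unique index, so keep the index regardless.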