I have a service that drops all the collections in a db and then rebuilds them. The problem is that every time I do this I get a duplicate key error at random (on a different collection in the db each time).
Everything works fine against the local db, so I believe it has to be related to the indexes and the time it takes to propagate the changes to the other replicas.
How should I address this? Is there something I am missing here?
I don’t share that suspicion. All writes are performed on the primary first. If something is wrong, it almost certainly (99% sure) has nothing to do with replication.
What I suspect is that you do some of these things in parallel, and when running locally some of those parallel requests end up more sequential than on a more powerful system. Maybe you start rebuilding some collections before you have handled the responses of all your drop-collection calls.
If you are absolutely sure the culprit is replication, then make sure you do all writes with write concern "majority".
More details about your code are needed before pointing any fingers.
the above code drops the collection and logs its success
then here is the code which tries to create the snapshot:
// GetSnapshot returns the snapshot with the given id, creating it
// first if it does not exist yet.
func (snapshotDAObj *SnapshotsDataAccess) GetSnapshot(id string) (*models.Snapshot, error) {
	snapshot, err := snapshotDAObj.FindSnapshotById(id)
	if err != nil {
		if !errors.Is(err, mongo.ErrNoDocuments) {
			return nil, err
		}
		// Not found: build a fresh snapshot from the id (a unix timestamp).
		timestamp, parseErr := strconv.ParseUint(id, 10, 64)
		if parseErr != nil {
			// Return the error instead of log.Fatal, which would kill the process.
			return nil, parseErr
		}
		snapshot = &models.Snapshot{
			ID:             id,
			Timestamp:      timestamp,
			EventCount:     0,
			ChainSnapshots: make(map[string]models.ChainsSnapshotType),
			ChainPairs:     make(map[string]models.ChainPair),
		}
		if _, err := snapshotDAObj.snapshots.InsertOne(context.TODO(), snapshot); err != nil {
			return nil, err
		}
	}
	return snapshot, nil
}
I am sure that we are creating the snapshot data sequentially, yet we still get a duplicate key error, and for the same data we see no issue at all in the local environment.
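One thing worth noting about the code above: FindSnapshotById and InsertOne are two separate operations, so if two callers (or a retry after the drop) both see "not found", both will attempt the insert and the loser gets the duplicate key error; a slower machine just makes that window harder to hit. A minimal sketch of making get-or-create a single atomic server-side operation via an upsert, assuming the official Go driver (go.mongodb.org/mongo-driver); the bson field names here are guesses and must match your struct tags:

```go
// GetOrCreateSnapshot atomically finds the snapshot or inserts it if
// missing. $setOnInsert only applies when the upsert actually creates
// the document, so concurrent callers can never both insert.
func (snapshotDAObj *SnapshotsDataAccess) GetOrCreateSnapshot(id string) (*models.Snapshot, error) {
	timestamp, err := strconv.ParseUint(id, 10, 64)
	if err != nil {
		return nil, err
	}
	filter := bson.M{"_id": id}
	update := bson.M{"$setOnInsert": bson.M{
		"timestamp":      timestamp,  // field names assumed; match your bson tags
		"eventCount":     0,
		"chainSnapshots": bson.M{},
		"chainPairs":     bson.M{},
	}}
	opts := options.FindOneAndUpdate().
		SetUpsert(true).
		SetReturnDocument(options.After) // return the found or newly inserted doc
	var snapshot models.Snapshot
	if err := snapshotDAObj.snapshots.
		FindOneAndUpdate(context.TODO(), filter, update, opts).
		Decode(&snapshot); err != nil {
		return nil, err
	}
	return &snapshot, nil
}
```

With this shape the duplicate key race disappears regardless of how the drops and rebuilds interleave, because the server performs the "check then insert" as one operation.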