Ticket: User Management - duplicate email error

I have a test failure with the following error message. What can I do to correct the test?

def test_no_duplicate_registrations(client):
    result = add_user(test_user.get('name'),
                      test_user.get('email'),
                      test_user.get('password'))
    assert result == {'error': "A user with the given email already exists."}

E AssertionError: assert {'success': True} == {'error': 'A user wi…ail already exists.'}
E Left contains more items:
E {'success': True}
E Right contains more items:
E {'error': 'A user with the given email already exists.'}
E Use -v to get the full diff

tests/test_user_management.py:37: AssertionError

================================================== 1 failed, 3 passed, 39 deselected, 12 warnings in 5.74 seconds ===================================================

I am facing the same error. Please help.

I believe our code does not have a problem. Instead, the Users collection in the database is messed up. Somebody needs to clean it.

I have a feeling that something is wrong with this, but it is possible to complete the ticket by doing the unique-email check yourself before inserting.

But I get an OK result on the status page and a failing one from the tests.

Check the number of users and the number of distinct emails in the users collection.

I am experiencing this issue as well.

I have run a script that removes all users with duplicated emails so there are only unique user emails.
In db.py I have checked for duplicates prior to inserting a new user. I also tried using upsert instead of insert.

I am still getting “User Management: duplicate emails should not be allowed”

I think pytest returns as expected:
platform linux -- Python 3.5.2, pytest-4.0.0, py-1.7.0, pluggy-0.8.0
rootdir: /home/rmorden/Documents/Courses/MongoDB.University/M220P/mflix-python, inifile:
plugins: flask-0.10.0
collected 43 items / 43 deselected

======================== 43 deselected in 0.10 seconds =========================

Have I missed something right in front of me?


It seems to me as well that the Users collection is lacking the appropriate index.

I am trying to get info about it using the debugger …

import pdb; pdb.set_trace()

… and then all it seems to find is one index

(Pdb) db["users"].index_information()
{'_id_': {'key': [('_id', 1)], 'v': 2, 'ns': 'mflix.users'}}

i.e. one index, on "_id", if I'm reading it right.

Also, I could insert as many documents as I liked with the same "name" field, so I suppose the required unique index that would trigger the expected exception is entirely missing.

Hope someone will look into this soon …

Just found this older ticket - not sure if you guys had a look?

I’m going down for reboot now …

Okay, so the unique index is missing, and creating it removes the blocker.
As stated in the ticket linked above: first connect to the cluster, then remove the duplicate entries from the mflix.users collection until a statement similar to the one below succeeds, e.g. from the mongo shell …

(I put the constraint on email, although I vaguely recall the problem stating that it’s on “name” or similar … I think it will work anyway.)

db.users.ensureIndex({email: 1}, { unique: true } )

… then the test should pass as long as the (otherwise quite simple) implementation is correct.

Hi @brezniczky,

Thank you for your hard work. When I try to create the index, it is apparently already built.

{
    "numIndexesBefore" : 2,
    "numIndexesAfter" : 2,
    "note" : "all indexes already exist",
    "ok" : 1,
    "operationTime" : Timestamp(1544118746, 2),
    "$clusterTime" : {
        "clusterTime" : Timestamp(1544118746, 2),
        "signature" : {
            "hash" : BinData(0,"lUheYf5roXwUUFN7dm+N9y91LEk="),
            "keyId" : NumberLong("6623856481248739329")


Thank you for the follow up!
I hope it all works for you now.
However, if you are getting the same problem as Bing_75881 mentioned, then maybe you're being too cautious with the check. You mentioned you are checking for duplicates; my first guess (sorry if it's too obvious) is that the conflicting unique values should instead trigger an exception at insertion time, and the except clause should do the rest of the job. So just let it fail inside the try…except block?
Mine works that way. (I find it a little difficult to read through pytest messages, but I believe Bing's test is complaining about the lack of an error being reported.)

(I am also wondering if we use the same database, or perhaps someone heard our prayers & fixed the indexes overnight/day for each one of us - my index was surely missing - good luck in either case!)

I faced the same problem. After reading this topic, I tried to run:
db.users.ensureIndex({email: 1}, { unique: true } )
{
    "operationTime" : Timestamp(1544182286, 2),
    "ok" : 0,
    "errmsg" : "E11000 duplicate key error collection: 5bf7d2ae79358ef8f589f577_mflix.users index: email_1 dup key: { : null }",
    "code" : 11000,
    "codeName" : "DuplicateKey",
    "$clusterTime" : {
        "clusterTime" : Timestamp(1544182286, 2),
        "signature" : {
            "hash" : BinData(0,"hvPNIAkgo4blpkMthtypydE3v2U="),
            "keyId" : NumberLong("6624107603691569153")
When I tried to find users with empty emails, I got many results:
db.users.find({email: null})
{ "_id" : ObjectId("5bf91e1a9e611a4f49e3bf16"), "some_field" : "some_value", "some_other_field" : "some_other_value" }
{ "_id" : ObjectId("5bf91e2d9e611a4f49e3bf19"), "some_field" : "some_value" }

These documents are the result of running the status check in the mflix web application: when I ran the status check for the first tickets, the "User Management" test was also run.
I deleted all these bad documents from the DB, ran the index creation again, found several other duplicates, and finally created the index without errors. After that, the User Management status check turned green with the right value.
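For anyone in the same spot: in the mongo shell the null-email leftovers can be removed with db.users.deleteMany({email: null}). As a quick client-side sketch of that same filter in Python (the function name here is made up for illustration):

```python
def drop_null_email_docs(docs):
    """Keep only documents that actually carry an email value
    (server-side equivalent: db.users.deleteMany({email: null}))."""
    return [d for d in docs if d.get("email") is not None]
```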


Hey @brezniczky,

Unfortunately it’s not working yet. I have tried running with and without the duplicate check with the same result.

It is possible there is another concurrent issue. When index.html is loaded it seems to hang before/during validation of user management and/or user preferences.

I hope an admin or tutor can step in and provide some advice/assistance because I work full time and have active kids so I don’t necessarily have enough time to sort this out on my own before this and chapter 3 are due on Tuesday.

I am not so familiar with js or html so I really don’t know where to start debugging it.

Despite the result I got when I tried to create the email index earlier, I signed into Atlas and the only indexes on users were on _id and name, and I successfully created an index on email there. I just need to wait until I get home to check that it works.


Hi @Robert_43103, the answer to this question is no. The integration test tries a random email, different from the ones in the database, since it generates user names longer than the maximum length of the ones in the current dataset.

In any case, go ahead and connect to your cluster and run the following commands:

use mflix;

If you get a different count between the number of distinct emails and the number of documents in your users collection, this means you have some dirty data that is preventing your tests from passing.
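The shell commands for that check are presumably something like comparing db.users.count() with db.users.distinct("email").length. The same comparison, sketched client-side in Python (the function name is invented here for illustration):

```python
def surplus_documents(docs):
    """How many more documents there are than distinct email values.
    0 means clean; anything positive means duplicates and/or repeated nulls."""
    emails = [d.get("email") for d in docs]
    return len(emails) - len(set(emails))
```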

Also, make sure that you are using the same cluster for both the TEST and PROD environments (defined in your dotini file).
You might be applying corrective measures on the wrong cluster/database.



The numbers are different for me: the distinct value is 243 and the count value is 309. How can I clean this data?

Find and delete the duplicates 🙂

I think $sortByCount is a useful aggregation stage in this case. There were only a few offenders for me, and I was remove({…})'ing them one address at a time.

Thinking back, it's potentially a bit too much (the first occurrence might need to be preserved in some cases), but I ended up passing the tests with what I believe was the 'brutal' approach anyway.
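A sketch of the gentler variant: find the offenders (server-side, a pipeline like [{ $sortByCount: "$email" }] surfaces them), keep the first document per email, and collect the rest for deletion. Function names are made up, and the sketch works over plain dicts rather than a live collection:

```python
from collections import Counter

def duplicate_emails(docs):
    """Emails that occur more than once (the $sortByCount offenders)."""
    counts = Counter(d.get("email") for d in docs)
    return [email for email, n in counts.most_common() if n > 1]

def ids_to_delete(docs):
    """_ids of every document after the first occurrence of each email."""
    seen, doomed = set(), []
    for d in docs:  # assumes docs iterate in insertion order
        email = d.get("email")
        if email in seen:
            doomed.append(d["_id"])
        else:
            seen.add(email)
    return doomed
```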


Creating the index on email fixed the issue 🙂
Thank you everyone.


@dschupp, this seems like an important oversight in the lab materials. I believe the course indicates an index already exists, doesn't it?

Hey, I had the same problem and just ran db.users.deleteMany({}) and db.users.createIndex({email: 1}, {unique: true}) and it worked.

edit: adding unique to the createIndex call