How can I delete duplicates in MongoDB?


Tags: mongodb, indexing, duplicates, duplicate-removal

I have a large collection (~2.7 million documents) in MongoDB, and it contains a lot of duplicates. I tried running ensureIndex({id: 1}, {unique: true, dropDups: true}) on the collection. Mongo churns away at it for a while before failing with "too many dups on index build with dropDups=true".

How can I add the index and get rid of the duplicates? Or the other way around, what's the best way to delete some dups so that mongo can successfully build the index?

For bonus points, why is there a limit to the number of dups that can be dropped?
    

asked Sep 24, 2015 by ukohale




2 Answers


For bonus points, why is there a limit to the number of dups that can be dropped?

MongoDB is likely doing this to defend itself: if you run dropDups on the wrong field, you could wipe out most of the dataset and tie up the DB with delete operations (which are as expensive as writes).

How can I add the index and get rid of the duplicates?

So the first question is: why are you creating a unique index on the id field at all?

MongoDB creates a default _id field that is automatically unique and indexed. By default MongoDB populates _id with an ObjectId; however, you can override it with any value you like. So if you already have a ready set of ID values, you can use those.

If you cannot re-import the values, then copy the documents to a new collection, changing id into _id as you go. You can then drop the old collection and rename the new one. (Note that you will get a bunch of duplicate key errors; make sure your code catches and ignores them.)
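Against a live database, this is a find().forEach over the old collection that inserts into the new one with _id set to id. The first-wins behaviour of that copy can be sketched in plain JavaScript — hypothetical data and a mock insert stand in for the collection, so no running MongoDB is needed:

```javascript
// Plain-JS sketch of the copy step (hypothetical data; no live MongoDB
// needed). The Map plays the role of the unique _id index: insert()
// throws on a duplicate key, the loop catches and ignores the error,
// so the FIRST document seen for each id survives.
const source = [
  { id: 1, name: "alpha" },
  { id: 2, name: "beta" },
  { id: 1, name: "alpha (dup)" }, // duplicate of id 1
];

const target = new Map(); // _id -> document

function insert(doc) {
  if (target.has(doc._id)) {
    throw new Error("duplicate key"); // mimics Mongo's E11000 error
  }
  target.set(doc._id, doc);
}

for (const { id, ...rest } of source) {
  try {
    insert({ _id: id, ...rest }); // promote id to _id
  } catch (e) {
    // duplicate key error: this id was already copied; skip it
  }
}

console.log(target.size); // 2 -- one document per distinct id
console.log(target.get(1).name); // "alpha" (the first occurrence wins)
```

With a real collection the same shape applies: loop over the source, set doc._id = doc.id, delete doc.id, insert into the target, and swallow the duplicate key errors before dropping and renaming.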

answered Sep 24, 2015 by ashishshukla

I came across this question while trying to find a workaround for the "too many dups" problem without re-creating the collection from source. The way I finally did it was to create a new collection c2, add a unique index on the needed field(s) (purely to speed things up), and then do an upsert:

db.c1.find().forEach(function (x) {
    // upsert on the compound key; the unique index on c2 makes the lookup fast
    db.c2.update({field1: x.field1, field2: x.field2}, x, {upsert: true});
});

where the combination of field1 and field2 should be unique. Then you can just drop the initial collection c1 and rename the new one. This solution, as shown, works for one field or several.
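The upsert semantics can be illustrated in plain JavaScript (hypothetical data; a Map keyed on the stringified compound key stands in for c2 and its unique index). One thing worth noting: with upsert the last duplicate seen overwrites earlier ones, whereas the insert-and-catch approach above keeps the first.

```javascript
// Plain-JS sketch of the upsert dedup (hypothetical data; no live
// MongoDB needed). The Map key "field1|field2" stands in for c2's
// compound unique index -- assumes field values contain no "|".
const c1 = [
  { field1: "a", field2: 1, payload: "first" },
  { field1: "b", field2: 2, payload: "only" },
  { field1: "a", field2: 1, payload: "last" }, // duplicate key {a, 1}
];

const c2 = new Map(); // compound key -> document

for (const doc of c1) {
  // update(..., {upsert: true}): insert when the key is absent,
  // replace the existing document when it is present
  c2.set(`${doc.field1}|${doc.field2}`, doc);
}

console.log(c2.size); // 2 distinct key combinations
console.log(c2.get("a|1").payload); // "last" -- the LAST duplicate wins
```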

answered Sep 24, 2015 by deven.bendale
