We version most of our collections in MongoDB. The versioning mechanism we chose works as follows:
{ "docId": 174, "v": 1, "attr1": 165 } /* version 1 */
{ "docId": 174, "v": 2, "attr1": 165, "attr2": "A-1" }
{ "docId": 174, "v": 3, "attr1": 184, "attr2": "A-1" }
So, whenever we query, we always have to go through the aggregation framework like this to make sure we get the latest version of each document:
db.docs.aggregate( [
    { "$sort": { "docId": -1, "v": -1 } },
    { "$group": { "_id": "$docId", "doc": { "$first": "$$ROOT" } } },
    { "$match": { <query> } }
] );
The problem with this approach is that once the grouping stage has run, the pipeline is operating on an in-memory set of documents that is detached from the collection, so our indexes can no longer be used for the `$match`. As a result, the more documents the collection holds, the slower the query gets.
Is there any way to speed this up?
If not, I will consider moving to one of the approaches described in this good post: http://www.askasya.com/post/trackversions/