80

I am interested in optimizing a "pagination" solution I'm working on with MongoDB. My problem is straightforward. I usually limit the number of documents returned using the limit() function. This forces me to issue a redundant query without limit() in order to also capture the total number of documents matching the query, so I can pass that to the client, letting them know they'll have to issue additional request(s) to retrieve the rest of the documents.

Is there a way to condense this into 1 query? Get the total number of documents but at the same time only retrieve a subset using limit()? Is there a different way to think about this problem than I am approaching it?

randombits
  • 47,058
  • 76
  • 251
  • 433
  • I have had this scenario and have written the approach as an article for others to use in here https://beingnin.medium.com/implement-server-side-pagination-in-mongodb-with-total-count-cfbb11b5c956 – Beingnin Jan 22 '21 at 06:18

16 Answers

96

MongoDB 3.4 introduced the $facet aggregation stage,

which processes multiple aggregation pipelines within a single stage on the same set of input documents.

Using $facet and $group you can fetch a page of documents with $limit and get the total count at the same time.

You can use the aggregation below in MongoDB 3.4:

db.collection.aggregate([
  { "$facet": {
    "totalData": [
      { "$match": { }},
      { "$skip": 10 },
      { "$limit": 10 }
    ],
    "totalCount": [
      { "$group": {
        "_id": null,
        "count": { "$sum": 1 }
      }}
    ]
  }}
])

You can also use the $count stage, which was introduced in MongoDB 3.6.

You can use the aggregation below in MongoDB 3.6:

db.collection.aggregate([
  { "$facet": {
    "totalData": [
      { "$match": { }},
      { "$skip": 10 },
      { "$limit": 10 }
    ],
    "totalCount": [
      { "$count": "count" }
    ]
  }}
])
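For reference, the pipelines above return a single document in which both facets are nested arrays, so the count has to be unwrapped on the client side. A minimal sketch (the sample values are made up for illustration):

```javascript
// Shape of the single document returned by the $facet pipeline above
// (sample values are made up for illustration).
const facetResult = [{
  totalData: [{ _id: 1, name: "doc11" }, { _id: 2, name: "doc12" }],
  totalCount: [{ count: 42 }]
}];

// Unwrap the pieces; totalCount is an empty array when no documents match.
const [{ totalData, totalCount }] = facetResult;
const total = totalCount.length > 0 ? totalCount[0].count : 0;

console.log(total);            // 42
console.log(totalData.length); // 2
```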
Ashh
  • 44,693
  • 14
  • 105
  • 132
  • 7
    If you're looking to get a total count of the data after the $match has happened, placing the $match before the $facet achieves this. – Hedley Smith Mar 25 '21 at 09:20
  • 3
    [This answer](https://stackoverflow.com/a/49483919/3362244) explains the same thing but much clearer. – teuber789 Sep 08 '21 at 13:41
  • This answer returns the total count properly, but it's failing to return the totalData – Rigin Oommen Sep 14 '21 at 08:55
  • 1
    Regarding the performance: the $facet stage, and its sub-pipelines, cannot make use of indexes, even if its sub-pipelines use $match or if $facet is the first stage in the pipeline. The $facet stage will always perform a COLLSCAN during execution. – Prisacari Dmitrii Oct 07 '22 at 21:36
23

No, there is no other way: two queries - one for the count, one with limit(). Or you have to use a different database. Apache Solr, for instance, works the way you want: every query there is limited and returns the total count.
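The two-query approach can at least be issued in parallel so the extra round trip overlaps with the page fetch. A minimal sketch with the Node.js driver (the `paginate` helper and its parameters are illustrative, not from the original answer):

```javascript
// Two queries, run in parallel: one fetches the page, one counts the total.
// `collection` is assumed to be a MongoDB driver collection handle.
async function paginate(collection, query, page, pageSize) {
  const [items, total] = await Promise.all([
    collection.find(query).skip((page - 1) * pageSize).limit(pageSize).toArray(),
    collection.countDocuments(query) // ignores skip/limit by design
  ]);
  return { items, total, hasMore: page * pageSize < total };
}
```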

heinob
  • 19,127
  • 5
  • 41
  • 61
  • 9
    I'm not sure if "No" is quite the answer anymore now that we have mongoDb 3.4. See https://stackoverflow.com/a/39784851/3654061 – Felipe Feb 13 '18 at 05:18
  • 1
    There are multiple ways of doing this, as I've been searching for a solution myself. You can create an aggregation operation to return the total count as well as the full documents according to a condition. You can also do one findAll based on conditions, store the length of that array, and then slice out values according to your limit/offset values. Both of these options are only one call to the DB. The expense of the aggregation depends on how complex it is, same with the slice that you run on the returned array. Thoughts on this? – Sam Gruse Dec 07 '18 at 05:24
  • How about this answer? https://stackoverflow.com/a/56693959 for me seems to work. Compared to aggregation with a limit of 100 docs, runs even slightly (~2-3ms) faster on avg for me... – sznrbrt Jan 04 '20 at 03:09
  • It could be done with just one query by using $facet sub-pipelines; however, the downside of this solution is that the $facet stage is much slower, as it can't use indexes even if a $match is used within it. The difference can be noted with 10M documents. So, it would be better to have separate queries than just one. – rasfuranku Aug 09 '23 at 03:59
19

MongoDB allows you to use cursor.count() even when you pass limit() or skip().

Let's say you have a db.collection with 10 items.

You can do:

async function getQuery() {
  const query = db.collection.find({}).skip(5).limit(5); // cursor over the last 5 items in the db
  const countTotal = await query.count(); // returns 10 -- does not take `skip` or `limit` into consideration
  const countWithConstraints = await query.count(true); // returns 5 -- takes `skip` and `limit` into consideration
  return { query, countTotal, countWithConstraints };
}
dev
  • 863
  • 7
  • 14
16

Here's how to do this with MongoDB 3.4+ (with Mongoose) using $facet. This example returns a $count based on the documents after they have been matched.

const facetedPipeline = [
  { "$match": { "dateCreated": { $gte: new Date('2021-01-01') } } },
  { "$project": { 'exclude.some.field': 0 } },
  {
    "$facet": {
      "data": [
        { "$skip": 10 },
        { "$limit": 10 }
      ],
      "pagination": [
        { "$count": "total" }
      ]
    }
  }
];

const results = await Model.aggregate(facetedPipeline);

This pattern is useful for getting pagination information to return from a REST API.
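For example, the faceted result can be flattened into a typical REST payload like this (a sketch; the `toRestResponse` helper and its field names are illustrative):

```javascript
// `results` is the single-document array produced by the faceted pipeline above:
// [{ data: [...page of documents...], pagination: [{ total: N }] }]
function toRestResponse(results, page, pageSize) {
  const [{ data, pagination }] = results;
  // `pagination` is an empty array when no documents matched.
  const total = pagination.length > 0 ? pagination[0].total : 0;
  return {
    data,
    page,
    pageSize,
    total,
    totalPages: Math.ceil(total / pageSize)
  };
}
```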

Reference: MongoDB $facet

Hedley Smith
  • 1,307
  • 15
  • 12
  • Note that when you do the $match first in the pipeline and then the $facet, you are able to hit the indexes. You can't hit indexes from within $facet. – Willem van der Veen Dec 23 '22 at 09:08
12

Times have changed, and I believe you can achieve what the OP is asking by using aggregation with $sort, $group and $project. For my system, I needed to also grab some user info from my users collection. Hopefully this answers any questions around that as well. Below is the aggregation pipeline; the last three stages ($sort, $group and $project) are what handle getting the total count and then provide the pagination capabilities.

db.posts.aggregate([
  { $match: { public: true } },
  { $lookup: {
    from: 'users',
    localField: 'userId',
    foreignField: 'userId',
    as: 'userInfo'
  } },
  { $project: {
    postId: 1,
    title: 1,
    description: 1,
    updated: 1,
    userInfo: {
      $let: {
        vars: {
          firstUser: {
            $arrayElemAt: ['$userInfo', 0]
          }
        },
        in: {
          username: '$$firstUser.username'
        }
      }
    }
  } },
  { $sort: { updated: -1 } },
  { $group: {
    _id: null,
    postCount: { $sum: 1 },
    posts: {
      $push: '$$ROOT'
    }
  } },
  { $project: {
    _id: 0,
    postCount: 1,
    posts: {
      $slice: [
        '$posts',
        currentPage ? (currentPage - 1) * RESULTS_PER_PAGE : 0,
        RESULTS_PER_PAGE
      ]
    }
  } }
])
TestWell
  • 734
  • 10
  • 19
  • What will be the response for this query? Will it return the count as well as the result? – Kumar Sep 07 '17 at 06:57
  • 1
    @Kumar yes, the count is calculated during $group using $sum and the array result comes from $push. You can see in the $project that I include the post count (postCount) then take only a section from the result array using $slice. The final response returns the number of total posts along with only a section of them for pagination. – TestWell Sep 08 '17 at 16:37
12

There is a way in MongoDB 3.4: $facet

You can do:

db.collection.aggregate([
  {
    $facet: {
      data: [{ $match: {} }],
      total: [{ $count: 'total' }]
    }
  }
])

Both sub-pipelines then run within a single query.

Matthew
  • 121
  • 2
  • 5
9

By default, the count() method ignores the effects of cursor.skip() and cursor.limit() (MongoDB docs).

Because count() excludes the effects of limit and skip, you can use cursor.count() to get the total count:

 const cursor = database.collection(collectionName).find(query).skip(offset).limit(limit)
 return {
    data: await cursor.toArray(),
    count: await cursor.count() // gives the count of all matching documents, ignoring .skip() and .limit()
 };
sznrbrt
  • 993
  • 2
  • 11
  • 32
5

It all depends on the pagination experience you need as to whether or not you need to do two queries.

Do you need to list every single page, or even a range of pages? Does anyone even go to page 1051 - conceptually, what does that actually mean?

There has been lots of UX work on pagination patterns - Avoid the pains of pagination covers various types of pagination and their scenarios, and many don't need a count query to know if there's a next page. For example, if you display 10 items on a page and you limit to 13, you'll know if there's another page.
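The "fetch a few extra" trick mentioned above can be sketched as: request pageSize + 1 documents, and the presence of the extra one tells you whether a next page exists without any count query (the `fetchPage` helper is illustrative; `collection` is assumed to be a driver collection handle):

```javascript
// Fetch pageSize + 1 items; the extra item only signals that a next page exists.
async function fetchPage(collection, query, page, pageSize) {
  const docs = await collection.find(query)
    .skip((page - 1) * pageSize)
    .limit(pageSize + 1) // one more than we display
    .toArray();
  const hasNextPage = docs.length > pageSize;
  return { items: docs.slice(0, pageSize), hasNextPage };
}
```

This avoids the count query entirely, at the cost of not knowing the total number of pages.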

Ross
  • 17,861
  • 2
  • 55
  • 73
3

MongoDB has introduced a new method for getting only the count of the documents matching a given query and it goes as follows:

const result = await db.collection('foo').count({name: 'bar'});
console.log('result:', result) // prints the matching doc count

Recipe for usage in pagination:

const query = {name: 'bar'};
const skip = (pageNo - 1) * pageSize; // assuming pageNo starts from 1
const limit = pageSize;

const [listResult, countResult] = await Promise.all([
  db.collection('foo')
    .find(query)
    .skip(skip)
    .limit(limit)
    .toArray(),

  db.collection('foo').count(query)
])

return {
  totalCount: countResult,
  list: listResult
}

For more details on db.collection.count visit this page

Akash Babu
  • 950
  • 6
  • 10
1

Thought of providing a caution while using aggregation for pagination. It's better to use two queries if the API is used frequently by users to fetch data. This is at least 50 times faster than getting the data using an aggregate on a production server when more users are accessing the system online. Aggregation and $facet are better suited for dashboards, reports and cron jobs that are called less frequently.

0

It is possible to get the total result size without the effect of limit() using count() as answered here: Limiting results in MongoDB but still getting the full count?

According to the documentation you can even control whether limit/pagination is taken into account when calling count(): https://docs.mongodb.com/manual/reference/method/cursor.count/#cursor.count

Edit: in contrast to what is written elsewhere - the docs clearly state that "The operation does not perform the query but instead counts the results that would be returned by the query". Which - from my understanding - means that only one query is executed.

Example:

> db.createCollection("test")
{ "ok" : 1 }

> db.test.insert([{name: "first"}, {name: "second"}, {name: "third"}, 
{name: "forth"}, {name: "fifth"}])
BulkWriteResult({
    "writeErrors" : [ ],
    "writeConcernErrors" : [ ],
    "nInserted" : 5,
    "nUpserted" : 0,
    "nMatched" : 0,
    "nModified" : 0,
    "nRemoved" : 0,
    "upserted" : [ ]
})

> db.test.find()
{ "_id" : ObjectId("58ff00918f5e60ff211521c5"), "name" : "first" }
{ "_id" : ObjectId("58ff00918f5e60ff211521c6"), "name" : "second" }
{ "_id" : ObjectId("58ff00918f5e60ff211521c7"), "name" : "third" }
{ "_id" : ObjectId("58ff00918f5e60ff211521c8"), "name" : "forth" }
{ "_id" : ObjectId("58ff00918f5e60ff211521c9"), "name" : "fifth" }

> db.test.count()
5

> var result = db.test.find().limit(3)
> result
{ "_id" : ObjectId("58ff00918f5e60ff211521c5"), "name" : "first" }
{ "_id" : ObjectId("58ff00918f5e60ff211521c6"), "name" : "second" }
{ "_id" : ObjectId("58ff00918f5e60ff211521c7"), "name" : "third" }

> result.count()
5 (total result size of the query without limit)

> result.count(1)
3 (result size with limit(3) taken into account)
Community
  • 1
  • 1
mrechtien
  • 35
  • 4
  • If you downvote, please add a reason so I have the chance to understand - which might also improve future answers! – mrechtien Apr 25 '17 at 14:12
  • 3
    I'm not sure about the downvote but just an FYI: `count()` only works with `find()` and thus is not helpful with `aggregate` queries – Felipe Feb 13 '18 at 05:01
0

Try as below. cursor.count(false, callback) gives the total count, ignoring skip and limit:

core.db.users.find(query, {}, {skip:0, limit:1}, function(err, cursor){
    if(err)
        return callback(err);

    cursor.toArray(function(err, items){
        if(err)
            return callback(err);

        cursor.count(false, function(err, total){
            if(err)
                return callback(err);

            console.log("cursor", total)

            callback(null, {items: items, total:total})
        })
    })
 })
surinder singh
  • 1,463
  • 13
  • 12
0

We can do it using two queries:

    const limit = parseInt(req.query.limit || 50, 10);
    let page = parseInt(req.query.page || 0, 10);
    if (page > 0) { page = page - 1; }

    const doc = await req.db.collection('bookings').find().sort({ _id: -1 }).skip(page * limit).limit(limit).toArray(); // skip whole pages, not documents
    const count = await req.db.collection('bookings').find().count();
    res.json({ data: doc, count: count });
Aravin
  • 6,605
  • 5
  • 42
  • 58
0

I took the two queries approach, and the following code has been taken straight out of a project I'm working on, using MongoDB Atlas and a full-text search index:

return new Promise( async (resolve, reject) => {
  try {

    const search = {
      $search: {
        index: 'assets',
        compound: { 
          should: [{
            text: {
              query: args.phraseToSearch,
              path: [
                'title', 'note'
              ]
            }
          }]
        }
      }
    }

    const project = {
      $project: {
        _id: 0,
        id: '$_id',
        userId: 1,
        title: 1,
        note: 1,
        score: {
          $meta: 'searchScore'
        }
      }
    }

    const match = {
      $match: {
        userId: args.userId
      }
    }

    const skip = {
      $skip: args.skip
    }

    const limit = {
      $limit: args.first
    }

    const group = {
      $group: {
        _id: null,
        count: { $sum: 1 }
      }
    }

    const searchAllAssets = await Models.Assets.schema.aggregate([
      search, project, match, skip, limit
    ])

    const [ totalNumberOfAssets ] = await Models.Assets.schema.aggregate([
      search, project, match, group
    ])

    resolve({
      searchAllAssets: searchAllAssets,
      totalNumberOfAssets: totalNumberOfAssets.count
    })

  } catch (exception) {
    return reject(new Error(exception))
  }
})
Wayne Smallman
  • 1,690
  • 11
  • 34
  • 56
0

I had the same problem and came across this question. The correct solution to this problem is posted here.

Saurav Ghimire
  • 500
  • 2
  • 9
-1

You can do this in one query: first run count() on the cursor, and within its callback run limit().

In Node.js and Express.js, you will have to use it like this to be able to use the count function along with toArray's result.

var curFind = db.collection('tasks').find(query);

Then you can run two functions after it like this (one nested in the other)

curFind.count(function (e, count) {

// Use count here

    curFind.skip(0).limit(10).toArray(function(err, result) {

    // Use result here and count here

    });

});
Vibhu Tewary
  • 282
  • 2
  • 11
  • This is not the correct method. You are just searching all documents instead of the first 10 documents in each request. For each request, every time, you are searching the whole set of documents, not just the first 10. – Udit Kumawat Apr 29 '17 at 06:29
  • thanks for the comment. at the time this is a solution we came up with. it may not be perfect when it comes to efficiency. do suggest a solution to improvise. – Vibhu Tewary Apr 30 '17 at 06:52