
I am using Mongoose with Node.js for MongoDB. I need to make 20 parallel find queries on my database, each with a limit of 4 documents, like the query shown below; only the brand_id changes for each brand.

areamodel.find({ brand_id: brand_id }, { '_id': 1 }, { limit: 4 }, function(err, docs) {
    if (err) {
        console.log(err);
    } else {
        console.log('fetched');
    }
});
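
For context, this is roughly how the 20 separate queries could be fired in parallel (a minimal sketch only, assuming Node.js with Promise support and that areamodel is the Mongoose model above; the brand_ids array is abbreviated here):

var brand_ids = ["brandid1", "brandid2", /* ... */ "brandid20"];

// One limited find per brand, all started concurrently.
var queries = brand_ids.map(function (brand_id) {
    return areamodel.find({ brand_id: brand_id }, { '_id': 1 })
        .limit(4)   // at most 4 documents per brand
        .exec();    // returns a promise
});

Promise.all(queries)
    .then(function (results) {
        // results[i] holds up to 4 documents for brand_ids[i]
        console.log('fetched');
    })
    .catch(function (err) {
        console.log(err);
    });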

Now, instead of running all these queries separately in parallel, I thought about putting all 20 brand_id values into an array of strings and then using a single $in query to get the results, but I don't know how to specify the limit of 4 for every matched brand_id.
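
To make the problem concrete, here is a sketch of the naive combined query (my own illustration, not from the original post): a single find with $in plus .limit(4) caps the entire result at 4 documents in total, not 4 per brand, which is exactly the issue.

areamodel.find({ brand_id: { $in: brand_ids } }, { '_id': 1 })
    .limit(4)   // limits the whole result set to 4, not 4 per brand_id
    .exec(function (err, docs) {
        // docs contains at most 4 documents across all brands
    });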

I wrote the code below using aggregation, but I don't know where to specify the limit for each element of my array.

var brand_ids = ["brandid1", "brandid2", "brandid3", "brandid4", "brandid5", "brandid6", "brandid7", "brandid8", "brandid9", "brandid10", "brandid11", "brandid12", "brandid13", "brandid14", "brandid15", "brandid16", "brandid17", "brandid18", "brandid19", "brandid20"];
areamodel.aggregate([
    { $match: { brand_id: { $in: brand_ids } } },
    { $project: { _id: 1 } }
], function(err, docs) {
    if (err) {
        console.error(err);
    } else {
        // docs holds the matched _id values, with no per-brand limit yet
    }
});

Can anyone please tell me how I can solve my problem using only one query?

UPDATE: Why I don't think $group will be helpful for me.

Suppose my brand_ids array contains these strings

brand_ids = ["id1", "id2", "id3", "id4", "id5"] 

and my database has the documents below:

{
   "brand_id": "id1",
   "name": "Levis",
   "loc": "india"
},
{
   "brand_id": "id1",
   "name": "Levis",
   "loc": "america"
},
{
   "brand_id": "id2",
   "name": "Lee",
   "loc": "india"
},
{
   "brand_id": "id2",
   "name": "Lee",
   "loc": "america"
}

Desired JSON output

{
   "name": "Levis"
},
{
   "name": "Lee"
}

For the above example, suppose I have 25,000 documents with "name" as "Levis" and 25,000 documents where "name" is "Lee". Now, if I use $group, all 50,000 documents will be queried and grouped by "name".

But with the solution I want, once the first documents with "Levis" and "Lee" are found, I won't have to look through the remaining thousands of documents.

UPDATE: I think if any of you can answer this, I can probably get to my solution.

Consider a case where I have 1,000 total documents in MongoDB, and suppose that out of those 1,000, 100 will pass my match query.

Now, if I apply a limit of 4 to this query, will it take the same time to execute as the query without any limit, or not?

Why I am thinking about this case:

Because if my query takes the same time either way, then I don't think $group will increase my query time, since all documents will be queried anyway.

But if the time taken without the limit is more than the time taken with the limit, then:

  • If I can apply a limit of 4 to each array element, then my question will be solved.

  • If I cannot apply a limit to each array element, then I don't think $group will be useful, since in that case I would have to scan all the documents to get the results.

FINAL UPDATE: As I read in the answer below and in the MongoDB docs, using $limit does not change the time the query takes; it only reduces the amount of data sent over the network. So if anyone can tell me how to apply a limit on array fields (using $group or anything else), my problem will be solved.

mongodb: will limit() increase query speed?

SOLUTION: My thinking about MongoDB was wrong. I thought that adding a limit to a query decreases the time it takes, but that is not the case, which is why I spent so many days before trying the answer that Gregory NEUT and JohnnyHK pointed me to. Thanks a lot, both of you; I would have found the solution on day one if I had known this. I really appreciate the help.

Prakash Kumar
  • Possible dupe of http://stackoverflow.com/questions/34325714/how-to-get-lastest-n-records-of-each-group-in-mongodb – JohnnyHK Nov 20 '16 at 18:01
  • Possible dup of http://stackoverflow.com/questions/24594049/mongodb-aggregation-pipeline-how-to-limit-a-group-push – styvane Nov 21 '16 at 04:55
  • Just a thought that came to my mind: why can't you use findOne() instead of find(), then group, and stop query execution on the first match? findOne() is supposed to work like that. Since I don't know your business requirement, I am proposing findOne() if you are really only interested in the first matched document. – satish chennupati Nov 21 '16 at 18:51
  • Why don't you run all your queries in parallel separately? – hyades Nov 21 '16 at 22:17
  • @JohnnyHK You were right, it was a duplicate of your post; thanks a lot, it finally got me to my solution. Thanks again. – Prakash Kumar Nov 22 '16 at 22:16

2 Answers


I propose that you use the $group aggregation stage to group all the data you get from $match by brand_id, and then limit each group with $slice.

Take a look at this Stack Overflow post:

db.collection.aggregate([
    {
        $sort: {
            created: -1
        }
    },
    {
        $group: {
            _id: '$city',
            title: {
                $push: '$title'
            }
        }
    },
    {
        $project: {
            _id: 0,
            city: '$_id',
            mostRecentTitle: {
                $slice: ['$title', 0, 2]
            }
        }
    }
])
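
Adapted to the documents in the question, a minimal sketch might look like the following (assuming the brand_ids array and areamodel model from the question, and MongoDB 3.2+ for the $slice expression; untested):

areamodel.aggregate([
    // Keep only the brands of interest.
    { $match: { brand_id: { $in: brand_ids } } },
    // Collect the _id values per brand.
    { $group: { _id: '$brand_id', ids: { $push: '$_id' } } },
    // Keep at most 4 _id values per brand.
    { $project: { _id: 0, brand_id: '$_id', ids: { $slice: ['$ids', 4] } } }
], function (err, docs) {
    if (err) {
        console.error(err);
    } else {
        // docs is one document per brand, each with up to 4 ids
        console.log(docs);
    }
});

Note that $group still reads every matched document before $slice trims each group; it reduces the size of the result, not the number of documents scanned.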

Orelsanpls
  • OK, but you are not getting my point. Suppose I have 100 docs matching a brand_id; according to your query, it will first fetch all 100 docs and then slice only 4. Don't you think that increases the time taken by the query? In the best case, after matching 4 docs it should not query the remaining 96, so the time taken would equal the time to query only 4 documents, not the whole 100. I hope you get my point. Anyway, thanks for answering; it would be better if there were a solution that queries only 4 docs. – Prakash Kumar Nov 18 '16 at 16:33
  • Are you there @gregory, I am waiting for your reply. – Prakash Kumar Nov 20 '16 at 17:50
  • Can you please have a relook at my question. – Prakash Kumar Nov 20 '16 at 17:55
  • Sorry, I'm working and don't have time right now; I'll take a look as soon as possible. – Orelsanpls Nov 21 '16 at 13:10
  • Can you please check my Updated post. – Prakash Kumar Nov 22 '16 at 18:07
  • Sorry, I hadn't tried your answer properly, but thanks a ton, you're a lifesaver. Thanks again :) (y) – Prakash Kumar Nov 22 '16 at 22:13
  • Sorry for my unavailability; really glad you found a solution to your problem :) – Orelsanpls Nov 23 '16 at 08:31
  • Can you please help me out here? I have another question on aggregation, maybe you could help: http://stackoverflow.com/questions/41042482/change-type-of-field-inside-mongodb-aggregation-and-does-lookup-utilises-index – Prakash Kumar Dec 08 '16 at 15:10
  • Can you please help me out here: http://stackoverflow.com/questions/41142265/update-in-mongodb-with-in-query?noredirect=1#comment69488392_41142265 – Prakash Kumar Dec 14 '16 at 14:33

I propose using distinct, since that will return all different brand names in your collection. (I assume this is what you are trying to achieve?)

db.runCommand ( { distinct: "areamodel", key: "name" } )

MongoDB docs

In Mongoose, I think it is: areamodel.db.db.command({ distinct: "areamodel", key: "name" }) (untested).
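
Mongoose models also expose a distinct() helper, which may be a simpler way to get the same result (a sketch only, assuming the areamodel model from the question):

// Returns the distinct values of the 'name' field across the collection.
areamodel.distinct('name', function (err, names) {
    if (err) {
        console.error(err);
    } else {
        // e.g. ['Levis', 'Lee']
        console.log(names);
    }
});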

raubas