
I have a MongoDB database with about a year of temperature readings from 4 different sensors (one reading per sensor every 5 minutes) running on a Raspberry Pi 3. I am using Mongoose in Node.js to query the results. I realize there are limitations to reading large datasets from an SD card, but I am wondering if there is anything I can do in code to reduce the query time. I have tried two methods to get the data and get varying results depending on the number of results requested. The "old" method was my first attempt, and I have read that it does not scale to a large number of entries. The "new" method I found here and modified to work for my situation. Is there a method I have missed in my search for faster queries?

// "new" method: sort newest-first, take the most recent `number` readings,
// then reverse back into chronological order.
console.time(`new ${number}`);
Readings.find({
  sensor: 'Freezer'
}).sort('-time').limit(number).exec((err, readings) => {
  if (err) {
    console.log(err);
    return;
  }
  readings.reverse();
  // readings.forEach(reading => {
  //   console.log(new Date(reading.time).toLocaleString());
  // });
  console.timeEnd(`new ${number}`);
  return;
});

// "old" method: count the matching documents first, then skip ahead to the
// last `number` readings in ascending time order.
console.time(`old ${number}`);
Readings.count({
  sensor: 'Freezer'
}, (err, count) => {
  if (err) {
    console.log(err);
    return;
  }
  Readings.find({
    sensor: 'Freezer'
  }).sort({ 'time': 1 }).skip(count - number).limit(number).exec((err, readings) => {
    if (err) {
      console.log(err);
      return;
    }
    // readings.forEach(reading => {
    //   console.log(new Date(reading.time).toLocaleString());
    // });
    console.timeEnd(`old ${number}`);
    return;
  });
});

the "new" method is faster for small or medium query's while the "old" method is faster getting a large number of readings.

results     new method   old method
288         2719 ms      3831 ms
8640        3288 ms      4184 ms
103680      3364 ms      2795 ms

    You can use "forward paging" as opposed to `.skip()` for a marked improvement. See [Implementing pagination in mongodb](https://stackoverflow.com/a/28105442/2313887) for an explanation. Also adding [`.lean()`](http://mongoosejs.com/docs/api.html#query_Query-lean) where you can simply return "plain objects" rather than casting to complex mongoose documents is going to save a lot as well. You may even be able to stream the cursor rather than the default array conversion that mongoose does. But we'd really need to know "what you do with the results" to advise further on that. – Neil Lunn Oct 28 '17 at 15:12
  • As far as "what you do with the results" in the example above: I would send the returned array to the client using Express so the client can render graphs with Chart.js. – jimi dough10 Oct 28 '17 at 15:19
  • It sounds like more of an architecture problem here. My suggestion would be to store the data in a central instance and use the Raspis only as clients. A smallish DO droplet should be cheap enough while giving you way better performance. – Markus W Mahlberg Oct 28 '17 at 18:42
  • @MarkusWMahlberg I agree the main issue for performance here is hardware. My goal was to keep as much of the system as possible internal and inexpensive. It was a quick job for a friend for his coolers, so I designed the system to have one Pi as web server & database (in reality this could be any machine, as it doesn't rely on the Pi's GPIOs) and two client Pis in different locations that take readings and upload them over a websocket. I am just looking to tune the code as much as I can, so that if I scale the DB and web server to a more powerful machine I will still get the best response time possible. – jimi dough10 Oct 28 '17 at 20:18
  • @jimidough10 Well, it sounds like you do not need a full-blown DBMS anyway. Something lightweight like an embedded KV store or etcd could do the job. – Markus W Mahlberg Oct 28 '17 at 20:20
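To illustrate the forward-paging and `.lean()` suggestions from the first comment, here is a minimal sketch. It assumes the field names from the question (`sensor` and `time`); the compound index line and the `paged` label are illustrative additions, not from the original post.

// Assumed compound index so the filter and sort can be served from the
// index (illustrative; the original schema is not shown in the question):
// ReadingSchema.index({ sensor: 1, time: -1 });

console.time(`paged ${number}`);
Readings.find({
  sensor: 'Freezer'
}).sort({ time: -1 }).limit(number).lean().exec((err, readings) => {
  if (err) {
    console.log(err);
    return;
  }
  // `.lean()` returns plain objects instead of full Mongoose documents,
  // which skips a lot of per-document hydration work.
  if (!readings.length) return;
  // For the next page, continue from the oldest time seen so far
  // instead of counting and skipping:
  const oldest = readings[readings.length - 1].time;
  Readings.find({
    sensor: 'Freezer',
    time: { $lt: oldest }
  }).sort({ time: -1 }).limit(number).lean().exec((err, nextPage) => {
    // ...render or send the next page...
    console.timeEnd(`paged ${number}`);
  });
});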
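And since the asker mentions sending the array to the client with Express for Chart.js, the documents could also be streamed from a Mongoose query cursor rather than buffered as one big array in the Pi's memory, as the first comment hints. A sketch under the same assumptions; the route path, query parameter, and default limit are hypothetical.

const express = require('express');
const app = express();

// Hypothetical route: GET /readings/Freezer?limit=8640
app.get('/readings/:sensor', (req, res) => {
  res.type('json');
  res.write('[');
  let first = true;
  Readings.find({ sensor: req.params.sensor })
    .sort({ time: -1 })
    .limit(Number(req.query.limit) || 288)
    .lean()
    .cursor()                      // emit documents one at a time
    .on('data', (doc) => {
      // Write each document as it arrives, building a JSON array by hand.
      res.write((first ? '' : ',') + JSON.stringify(doc));
      first = false;
    })
    .on('error', (err) => {
      console.log(err);
      res.end(']');
    })
    .on('end', () => res.end(']'));
});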
