
I use a raspberry pi to log a whole bunch of sensor data into a mongo database.
After some time I consolidate the data into hour and day aggregated data.
Right now I use a single collection containing one document per sensor, with a description and some other metadata. Each of those documents holds arrays of the actual data points, to which I append. When I aggregate the data over longer time spans, I append to separate arrays such as 'data_1h' or 'data_1d'.
The datapoints themselves are documents with the actual data, a timestamp and a few other bits.
This seemed to work well for some time, but I have over 700 different sensors, and after two years of data collection MongoDB uses more memory than the Raspberry Pi has, so it starts to choke.
So I was wondering: would it be a better idea to avoid large arrays and instead write the data into a separate collection as single documents, one document per data point, as you would in, say, SQL?
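A minimal sketch of that per-datapoint layout, for illustration only (the collection and field names here are hypothetical, not the actual schema):

```python
from datetime import datetime, timezone

# One document per sensor in a 'sensors' collection: metadata only,
# no embedded data arrays.
sensor_doc = {
    "_id": "sensor_042",          # hypothetical sensor id
    "name": "greenhouse_temp",
    "unit": "degC",
}

# One document per data point in a separate 'readings' collection,
# referencing the sensor by its _id (like a foreign key in SQL).
reading_doc = {
    "sensor_id": "sensor_042",
    "ts": datetime(2023, 5, 1, 12, 0, tzinfo=timezone.utc),
    "value": 21.7,
    "resolution": "raw",          # or "1h" / "1d" for aggregated points
}
```

With pymongo the write would then be a plain `db.readings.insert_one(reading_doc)`, and each insert touches only one small document instead of rewriting an ever-growing array.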

FalcoGer

1 Answer


Yes, creating a separate collection and mapping the point documents to your sensor documents (by storing the sensor document's id in each point document) would be the better idea. Growing arrays makes edits and updates more memory-consuming, and once an array gets large, querying a single sensor's data loads the entire array of points, which takes a lot of memory.
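To make the memory argument concrete, here is a hedged sketch of what a range query looks like with one document per point (names like `readings` and `sensor_id` are assumptions carried over from the illustration above, not a prescribed schema):

```python
from datetime import datetime, timezone

# Query one sensor's readings in a time window. With embedded arrays,
# fetching any point pulls the whole sensor document (and its full
# array) into memory; with one document per point, MongoDB streams
# only the matching documents.
start = datetime(2023, 5, 1, tzinfo=timezone.utc)
end = datetime(2023, 5, 2, tzinfo=timezone.utc)

query = {
    "sensor_id": "sensor_042",
    "ts": {"$gte": start, "$lt": end},
}

# A compound index lets the range scan skip non-matching points:
index_spec = [("sensor_id", 1), ("ts", 1)]

# With pymongo (not executed here) this would be roughly:
#   db.readings.create_index(index_spec)
#   for doc in db.readings.find(query).sort("ts", 1):
#       ...
```

The compound `(sensor_id, ts)` index is the key design choice: it keeps per-sensor time-range queries cheap even with hundreds of millions of point documents.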

Apil Pokharel