I have the following scenario:

On a Raspberry Pi (Raspbian Jessie) I run a Python script that receives a large number of individual data sets from a measuring device via UART. These data sets are "cached" in a MariaDB database (engine: MEMORY) and from there either sent over the internet to a database on a remote server, or to a database on a USB drive on the Raspberry Pi itself (in case of a temporary loss of connection, or no connection at all). If at all possible, the MEMORY database should have no downtime whatsoever, even when the USB drive has to be changed because it is full. There will be hardly any reads from the USB database and no deletes or complicated restructurings. This setup is intended to run for years.
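
To make the intended data flow concrete, here is a simplified sketch of the forwarding part of the script (the table names, credentials and the second instance's port are placeholders for my actual setup, not a finished implementation):

    import pymysql

    # Connection parameters are placeholders; the USB-drive database is
    # assumed to be served by a second local MariaDB instance on port 3307.
    MEMORY_DB = dict(host="localhost", user="pi", password="...", database="cache")
    REMOTE_DB = dict(host="remote.example.com", user="pi", password="...", database="archive")
    USB_DB = dict(host="localhost", port=3307, user="pi", password="...", database="archive")

    def flush_cache():
        """Forward all cached rows to the remote DB, or to the USB DB as fallback."""
        mem = pymysql.connect(**MEMORY_DB)
        with mem.cursor() as cur:
            cur.execute("SELECT id, payload FROM cache_table")
            rows = cur.fetchall()
        if not rows:
            mem.close()
            return
        try:
            target = pymysql.connect(connect_timeout=5, **REMOTE_DB)
        except pymysql.err.OperationalError:
            # No internet connection: write to the USB-drive instance instead.
            target = pymysql.connect(**USB_DB)
        with target.cursor() as cur:
            cur.executemany(
                "INSERT INTO data_table (id, payload) VALUES (%s, %s)", rows)
        target.commit()
        target.close()
        # Delete from the cache only after the rows are safely stored elsewhere.
        with mem.cursor() as cur:
            cur.executemany(
                "DELETE FROM cache_table WHERE id = %s", [(r[0],) for r in rows])
        mem.commit()
        mem.close()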

According to my research, I have three options:

1) Mounting the USB drive into the datadir of the MEMORY database (as has been suggested here)

2) Creating and running two instances of MariaDB, one (MEMORY database) running permanently, one (USB database) being stopped occasionally to allow for a change of the USB drive (something along the lines of this; a small connection sketch follows this list)

3) Running two instances of MariaDB in a sandbox (e.g. following this description)
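
If I understand options 2 and 3 correctly, the two instances would listen on separate ports (or sockets), so the script could keep writing to the MEMORY instance while the USB instance is stopped for a drive swap. A rough probe of the second instance could look like this (port 3307 and the credentials are assumptions on my part):

    import pymysql

    def usb_instance_available(port=3307):
        """Probe the second MariaDB instance (assumed to serve the USB drive)."""
        try:
            conn = pymysql.connect(host="localhost", port=port, user="pi",
                                   password="...", database="archive",
                                   connect_timeout=2)
            conn.close()
            return True
        except pymysql.err.OperationalError:
            # The instance is down, e.g. while the USB drive is being swapped;
            # the MEMORY instance on the default port keeps running untouched.
            return False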

My question is: what is the best way to achieve the functionality described above? I worry that in scenario 1 the whole MariaDB instance might crash if I mess (unmounting, formatting, etc.) with parts of its datadir. Scenarios 2 and 3 seem preferable, but I don't know which one to choose, or whether I am mistaken altogether.

  • If the dataset is static, consider using memcached. – Rick James Sep 13 '16 at 16:09
  • Static in the meaning of "always structured the same way", or in the meaning of "always exactly the same data/nothing is being written into the cache"? The cached data amounts to approximately 100 MB/day and is therefore forwarded to the USB/server database every few seconds and deleted from local memory afterwards. But I will look into memcached, thank you. – Fantilein1990 Sep 14 '16 at 10:16

0 Answers