I'm trying to get my head around IndexedDB. I have an SQL database which I access via REST, and I'm planning to provide some local caching using IndexedDB.
My SQL structure uses a large (and variable) number of tables, each table storing an array of data (time sequence and value) for a specific sensor. Ideally, I'd create a new object store for each of my MySQL tables. However, it seems that object stores can only be created during a version-change upgrade when the database is opened, which is a bit of a pain.
So, I see a number of options:
I could use a single object store and add two indexes - one for the time, and one for the sensor. I'm a little worried that this might have performance issues, but I'm not sure how data is stored under the hood.
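Here's roughly what I mean by the single-store option (the names `sensorCache`, `readings`, `toRecord`, etc. are just placeholders I made up):

```javascript
// Shape each row from the REST API into a record for the store.
function toRecord(sensor, time, value) {
  return { sensor, time, value };
}

// Open (or create) the cache with one store for all sensors.
function openCache() {
  return new Promise((resolve, reject) => {
    const req = indexedDB.open('sensorCache', 1);
    req.onupgradeneeded = () => {
      // Compound primary key: [sensor, time] keeps each sensor's
      // readings contiguous in key order.
      const store = req.result.createObjectStore('readings', {
        keyPath: ['sensor', 'time'],
      });
      // Secondary index for time-only queries across all sensors.
      store.createIndex('byTime', 'time');
    };
    req.onsuccess = () => resolve(req.result);
    req.onerror = () => reject(req.error);
  });
}
```

With the compound key, one sensor's time range could presumably be read with `IDBKeyRange.bound([sensor, t0], [sensor, t1])`, which is why I'm unsure the extra index on sensor is even needed.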
I could probably detect a new sensor somehow, and open the database with a new version number. This just feels a little wrong to me.
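For the version-bump option, this is the kind of thing I had in mind — open without a version to find the current one, then reopen one higher to add the missing store (`ensureSensorStore` and `nextVersion` are names I invented for the sketch):

```javascript
// Next schema version to request when adding a store.
function nextVersion(db) {
  return db.version + 1;
}

// Make sure a per-sensor object store exists, bumping the
// database version to trigger onupgradeneeded if it doesn't.
function ensureSensorStore(dbName, sensorId) {
  return new Promise((resolve, reject) => {
    // Opening with no version connects at the current version.
    const probe = indexedDB.open(dbName);
    probe.onsuccess = () => {
      const db = probe.result;
      if (db.objectStoreNames.contains(sensorId)) {
        resolve(db);
        return;
      }
      const ver = nextVersion(db);
      db.close(); // must close before a higher-version open can proceed
      const req = indexedDB.open(dbName, ver);
      req.onupgradeneeded = () => {
        req.result.createObjectStore(sensorId, { keyPath: 'time' });
      };
      req.onsuccess = () => resolve(req.result);
      req.onerror = () => reject(req.error);
    };
    probe.onerror = () => reject(probe.error);
  });
}
```

Part of what feels wrong is that other open tabs holding the old version would block the upgrade until they close their connections.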
I could alternatively use different databases for each sensor, but I've read somewhere that it's not recommended to use multiple databases (although it's unclear why since this is possibly the easiest solution).
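The multiple-database variant would look something like this (again, `dbNameFor` and the `sensor-` prefix are just illustrative):

```javascript
// Derive a database name from a sensor id.
function dbNameFor(sensorId) {
  return `sensor-${sensorId}`;
}

// One database per sensor: the schema never changes, so the
// version can stay at 1, but every sensor needs its own connection.
function openSensorDb(sensorId) {
  return new Promise((resolve, reject) => {
    const req = indexedDB.open(dbNameFor(sensorId), 1);
    req.onupgradeneeded = () => {
      req.result.createObjectStore('readings', { keyPath: 'time' });
    };
    req.onsuccess = () => resolve(req.result);
    req.onerror = () => reject(req.error);
  });
}
```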
I'd welcome any thoughts people have regarding the best structure for this sort of data, that will provide good performance.