I am reading in statistical data from runs of a program with different configurations. Say there are 6 configuration parameters (a, b, ..., f). The parameter values don't necessarily vary uniformly, so if you think of the measurements as a table, there could be gaps in it. The question is about how to structure these statistical data in memory.
The first thing that comes to mind is to read these configurations into a dynamically allocated 6-deep array of arrays:
struct data ******measurements;
Now this works fine. You have very little memory overhead (only the configurations that actually have data are allocated) and the access time is O(1).
Besides the fact that most people don't like ****** pointers, this has the downside that adding a configuration parameter means adding a dimension to the array, which could get ugly unless reads and writes to the array are encapsulated in functions. (Writes are already encapsulated to take care of allocation when necessary, so this in fact is not such a big deal.)
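For concreteness, here is a minimal sketch of what the encapsulated read and write could look like. The payload field, the dimension size, and the function names are assumptions for illustration; the writer allocates each level lazily so that only measured configurations cost memory:

```c
#include <stdlib.h>

/* Hypothetical payload; the real struct data holds the measurements. */
struct data { double value; };

/* Illustrative size for each of the six parameter dimensions. */
enum { N = 3 };

struct data ******measurements;

/* Write: allocate each level on demand, so only configurations that
 * actually receive data consume memory. calloc'd pointers are treated
 * as NULL (true on all mainstream platforms). Returns 0 on success. */
int put(int a, int b, int c, int d, int e, int f, double v)
{
    if (!measurements && !(measurements = calloc(N, sizeof *measurements)))
        return -1;
    if (!measurements[a] && !(measurements[a] = calloc(N, sizeof *measurements[a])))
        return -1;
    if (!measurements[a][b] && !(measurements[a][b] = calloc(N, sizeof *measurements[a][b])))
        return -1;
    if (!measurements[a][b][c] && !(measurements[a][b][c] = calloc(N, sizeof *measurements[a][b][c])))
        return -1;
    if (!measurements[a][b][c][d] && !(measurements[a][b][c][d] = calloc(N, sizeof *measurements[a][b][c][d])))
        return -1;
    if (!measurements[a][b][c][d][e] && !(measurements[a][b][c][d][e] = calloc(N, sizeof *measurements[a][b][c][d][e])))
        return -1;
    measurements[a][b][c][d][e][f].value = v;
    return 0;
}

/* Read: O(1); NULL means this configuration was never measured. */
struct data *get(int a, int b, int c, int d, int e, int f)
{
    if (!measurements || !measurements[a] || !measurements[a][b] ||
        !measurements[a][b][c] || !measurements[a][b][c][d] ||
        !measurements[a][b][c][d][e])
        return NULL;
    return &measurements[a][b][c][d][e][f];
}
```

With this, `put(0, 1, 2, 0, 1, 2, 3.14)` followed by `get(0, 1, 2, 0, 1, 2)` yields a non-NULL pointer to the stored value, while `get` on a configuration that was never written returns NULL.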
Another idea that comes to mind is to use a map from struct config to struct data, implemented with an AVL tree or something similar (which I already have, so there is no implementation overhead). This solves the problem of extending the configuration parameters, but increases the access time to O(log(n)).
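To make the extensibility point concrete, here is a sketch of what the tree key could look like. The field layout and the comparator are assumptions; any AVL implementation that accepts a user-supplied comparison function can use them. Keeping the parameters in an array means adding a parameter is a one-constant change, with no structural rework:

```c
/* Number of configuration parameters; adding one is a single bump here. */
#define NPARAMS 6

/* Hypothetical key type: one slot per parameter (a..f). */
struct config { int param[NPARAMS]; };

/* Total order on configurations for the AVL tree: compare the
 * parameters lexicographically, most significant first. */
int config_cmp(const struct config *x, const struct config *y)
{
    for (int i = 0; i < NPARAMS; i++)
        if (x->param[i] != y->param[i])
            return x->param[i] < y->param[i] ? -1 : 1;
    return 0;
}
```

A gap in the parameter table then simply corresponds to a key that was never inserted, so the sparseness comes for free.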
The number of tests could get considerably large, enough for the O(log(n)) lookups to make a noticeable difference.
My question is: Is using a 6-deep nested array here justified? Or is there a better way of doing this?