I'm running simulations which I control using Python. Each simulation is uniquely defined by a set of integers and floats which are stored in a dict, for example:
dict1 = {'paramName1': 1, 'paramName2': 54, 'paramName3': 34.621}
After the simulation is done, additional keys are added for the results, so the dict might look like
dict1 = {'paramName1': 1, 'paramName2': 54, 'paramName3': 34.621, 'result1': 0.0345}
For each simulation, I want to store this dict in a central database. Before new simulations are started, the program should compare the new parameters against the entries in the database and check whether an identical simulation has already been done.
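Roughly, the check I have in mind looks like this (a sketch only; the list-of-dicts "database" and the function name are placeholders for whatever storage ends up being used):

```python
def already_done(params, database):
    """Return True if some stored entry matches all given parameters.

    `params` holds only the input parameters; stored entries may carry
    extra result keys, which are ignored by the comparison.
    """
    for entry in database:
        if all(entry.get(key) == value for key, value in params.items()):
            return True
    return False


stored = [{'paramName1': 1, 'paramName2': 54, 'paramName3': 34.621,
           'result1': 0.0345}]
print(already_done({'paramName1': 1, 'paramName2': 54,
                    'paramName3': 34.621}, stored))  # True
```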
What would be the cleanest and most performant implementation? Right now I am using a single NumPy file per simulation, and I read all of those files back in before new simulations are launched. This is of course slow. But I'm also wondering how to compare the float values when using, say, an sqlite3 database, since equality comparisons there take no tolerance value or anything similar, right?
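To make the float-comparison concern concrete, here is the kind of tolerance-based lookup I imagine would be needed (an in-memory sqlite3 sketch; the table schema mirrors my example dict and the tolerance value is made up):

```python
import sqlite3

TOL = 1e-9  # hypothetical absolute tolerance for float parameters

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE sims (paramName1 INTEGER, paramName2 INTEGER, "
    "paramName3 REAL, result1 REAL)"
)
conn.execute("INSERT INTO sims VALUES (1, 54, 34.621, 0.0345)")

params = {'paramName1': 1, 'paramName2': 54, 'paramName3': 34.621}
# Integer parameters are matched exactly; the float parameter is matched
# within TOL using SQLite's built-in ABS() function in the WHERE clause.
row = conn.execute(
    "SELECT * FROM sims WHERE paramName1 = ? AND paramName2 = ? "
    "AND ABS(paramName3 - ?) < ?",
    (params['paramName1'], params['paramName2'], params['paramName3'], TOL),
).fetchone()
print(row is not None)  # True: a matching simulation already exists
```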