I'm trying to create a container that closely matches how my file spec works. It's like a vector, but the layout of the elements is defined by a hash table.
If I knew the type at compile time, I could just write something like this:
#include <cstdint>
#include <vector>

struct foo {
    float a, b, c;
    int d;
    std::uint8_t e, f;
};

std::vector<foo> foovector;
foovector.push_back(foo{});
I don't have the struct at compile time, though; I only have a schema read from the file header. All elements are the same size, and each field sits at the same byte offset in every element. The container's hash table is defined before any elements can be added.
typedef unsigned int Toffset; // byte offset
typedef unsigned int Ttype;   // enum of types
std::unordered_map<std::string, std::pair<Toffset, Ttype>> itemKey;
itemKey["a"] = { 0, TYPE_FLOAT };  // TYPE_* stand in for my type enum
itemKey["b"] = { 4, TYPE_FLOAT };
itemKey["c"] = { 8, TYPE_FLOAT };
itemKey["d"] = { 12, TYPE_INT };
itemKey["e"] = { 16, TYPE_BYTE };
itemKey["f"] = { 17, TYPE_BYTE };
nstd::interleaved_vector superfoo(itemKey, 10); // hash table, pre-allocation size
nstd::interleaved_vector::iterator myIteratorGlobal = superfoo.begin();
nstd::interleaved_vector::iterator myIteratorA = superfoo["a"].begin();
nstd::interleaved_vector::iterator myIteratorB = superfoo["b"].begin();
*myIteratorB = 2.0f;
(*myIteratorGlobal)["d"] = 512;
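To make this concrete, here is roughly what I picture under the hood. Everything below is a placeholder sketch of my own (the class name, methods, and layout), not any real library:

#include <cstddef>
#include <cstdint>
#include <cstring>
#include <string>
#include <unordered_map>
#include <utility>
#include <vector>

typedef unsigned int Toffset; // byte offset
typedef unsigned int Ttype;   // enum of types

// Sketch only: one contiguous byte buffer, element i starts at
// i * stride, field offsets come from the runtime schema.
class interleaved_vector_sketch {
public:
    interleaved_vector_sketch(
        std::unordered_map<std::string, std::pair<Toffset, Ttype>> key,
        std::size_t stride, std::size_t reserveCount)
        : key_(std::move(key)), stride_(stride) {
        buf_.reserve(stride_ * reserveCount);
    }

    std::size_t size() const { return buf_.size() / stride_; }

    // Append one element's raw bytes, e.g. straight out of the file.
    void push_back_raw(const void* src) {
        const std::uint8_t* p = static_cast<const std::uint8_t*>(src);
        buf_.insert(buf_.end(), p, p + stride_);
    }

    // Read field `name` of element `i` as T. memcpy rather than a
    // reinterpret_cast, because field offsets (e.g. 17) may leave T
    // unaligned inside the buffer.
    template <typename T>
    T get(std::size_t i, const std::string& name) const {
        T out;
        std::memcpy(&out, buf_.data() + i * stride_ + key_.at(name).first,
                    sizeof(T));
        return out;
    }

    template <typename T>
    void set(std::size_t i, const std::string& name, const T& value) {
        std::memcpy(buf_.data() + i * stride_ + key_.at(name).first, &value,
                    sizeof(T));
    }

    // Whole buffer for bulk I/O: one fread/fwrite moves every element.
    const std::uint8_t* data() const { return buf_.data(); }
    std::size_t bytes() const { return buf_.size(); }

private:
    std::unordered_map<std::string, std::pair<Toffset, Ttype>> key_;
    std::size_t stride_;
    std::vector<std::uint8_t> buf_;
};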
The idea is that I can memcpy the raw data in and out of files quickly, and iterator offsets are trivial to compute (with the 18-byte stride of the foo above, field "b" of element i is just i * 18 + 4). My questions are:
Does anything do this already?
Is this a bad idea? Should I just create a vector and new up each element? I expect to have millions of elements, and foo will range from 20 to 200 bytes.
Or is it a bad idea for a different reason, and I should instead create coupled vectors, one for each field (see the sketch after these questions)?
Or is this "interleaved_vector" a good solution to my problem?
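For reference, by "coupled vectors" I mean something like this structure-of-arrays layout. Again just a sketch, and the names fieldBuf/fieldSize are hypothetical, since the fields are only known at runtime:

#include <cstddef>
#include <cstdint>
#include <string>
#include <unordered_map>
#include <vector>

// One byte buffer per field; element i of field "a" starts at
// fieldBuf["a"].data() + i * fieldSize["a"].
std::unordered_map<std::string, std::vector<std::uint8_t>> fieldBuf;
std::unordered_map<std::string, std::size_t> fieldSize;

That would keep each field contiguous (nice for scanning a single field), but it turns the one-memcpy file load into one copy per field, since the file stores elements interleaved.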