I'm working on a synth, and some of my code is running slower than I'd like. I've narrowed the culprit down to some nested for loops that iterate over a 3-dimensional vector of floats. Since this part of my code is the current bottleneck, I'd like to optimize it as best I can.
My current understanding of 2D C arrays is that they are really just one long contiguous array with some fancy indexing syntax.
int myArray[3][3];
pseudoSetEntriesToRowNum(myArray); // fills each entry with its row number
int* flat = &myArray[0][0]; // the whole 2D array is one contiguous block
for (int i = 0; i < 9; i++) {
    cout << flat[i];
}
// output: 000111222
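To double-check the "fancy syntax" claim, here's a small self-contained test I put together (the fill loop stands in for my pseudo helper): it confirms that myArray[i][j] and flat[i * 3 + j] name the same element.

#include <iostream>

int main() {
    int myArray[3][3];
    for (int i = 0; i < 3; i++)      // fill each entry with its row number
        for (int j = 0; j < 3; j++)
            myArray[i][j] = i;
    int* flat = &myArray[0][0];      // view the 2D array as 9 contiguous ints
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++)
            std::cout << (myArray[i][j] == flat[i * 3 + j]); // prints 111111111
    std::cout << '\n';
}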
As for vectors, when 1D the performance seems to be entirely sufficient (see Using arrays or std::vectors in C++, what's the performance gap?), with issues arising more from resizing than from accessing/setting. But when 2D, my understanding suggests that the contiguous-layout optimization is lost, and that a pointer must be followed for each dimension.
vector<vector<int>> myVector = pseudoMake2dVectorWithRowNumAsEntries(3, 3);
int* flat = &(myVector[0][0]); // only points into the first inner vector's storage
for (int i = 0; i < 9; i++) {
    cout << flat[i]; // this should not be okay past the first row
}
// (my guess of) output: 0,0,0, then BAD_ACCESS for the remaining 6
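To test the pointer-chasing theory, here's a quick check I can run (sizes hard-coded by me): it prints where each inner vector's storage actually lives, and I'd expect the rows not to sit 3 * sizeof(int) apart in one block.

#include <iostream>
#include <vector>

int main() {
    std::vector<std::vector<int>> myVector(3, std::vector<int>(3, 0));
    for (int i = 0; i < 3; i++) {
        // each inner vector owns a separate heap allocation, so these
        // addresses are (almost certainly) not one contiguous 3x3 block
        std::cout << static_cast<void*>(myVector[i].data()) << '\n';
    }
}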
My questions are: first, am I even thinking about this correctly? If I am, does that imply that 2D+ vectors are a poor fit for time-sensitive operations? And if they are not ideal, what are some good alternatives?
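For context, the alternative I've been sketching is a single flat std::vector<float> with the index computed by hand; the Flat3D name and at() helper below are just my own sketch, not from any library:

#include <cstddef>
#include <vector>

// one contiguous allocation for the whole 3D grid; indexing is
// (i * ny + j) * nz + k instead of two extra pointer hops
struct Flat3D {
    std::size_t nx, ny, nz;
    std::vector<float> data;

    Flat3D(std::size_t x, std::size_t y, std::size_t z)
        : nx(x), ny(y), nz(z), data(x * y * z, 0.0f) {}

    float& at(std::size_t i, std::size_t j, std::size_t k) {
        return data[(i * ny + j) * nz + k];
    }
};

So instead of myVector[i][j][k] I'd write grid.at(i, j, k). Is that the usual approach, or is there something better-established?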