I am currently working on a piece of software that is highly performance-critical, so every optimization counts. One critical situation occurs very frequently inside a loop in which I calculate two `int` indices into a `std::vector` from `double` values (to clarify: I convert metric positions to map positions).

In my opinion there are three sensible ways to do that:
```cpp
//first possibility
int indexX, indexY;
for(int x = 0; x <= xMax; ++x)
{
    for(int y = 0; y <= yMax; ++y)
    {
        indexX = //calculate value using x somehow
        indexY = //calculate value using y somehow
        //do multiple things with indexX and indexY
    }
}
```
```cpp
//second possibility
for(int x = 0; x <= xMax; ++x)
{
    for(int y = 0; y <= yMax; ++y)
    {
        int indexX = //calculate value using x somehow
        int indexY = //calculate value using y somehow
        //do multiple things with indexX and indexY
    }
}
```
```cpp
//third possibility
for(int x = 0; x <= xMax; ++x)
{
    for(int y = 0; y <= yMax; ++y)
    {
        const int indexX = //calculate value using x somehow
        const int indexY = //calculate value using y somehow
        //do multiple things with indexX and indexY
    }
}
```
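For concreteness, here is roughly what the loop body does in my case. The helper, its name, and the grid parameters are placeholders, not my real code; the actual calculation is similar but not identical:

```cpp
#include <cassert>

// Hypothetical helper: convert a metric position to a map index.
inline int toIndex(double metricPos, double cellSize)
{
    return static_cast<int>(metricPos / cellSize); // truncates toward zero
}

// The inner loop filled in with the placeholder helper above
// (third possibility: const locals, initialised once per iteration).
void processMap(int xMax, int yMax, double cellSize)
{
    for(int x = 0; x <= xMax; ++x)
    {
        for(int y = 0; y <= yMax; ++y)
        {
            const int indexX = toIndex(x * 0.37, cellSize); // 0.37 m step, assumed
            const int indexY = toIndex(y * 0.37, cellSize);
            // do multiple things with indexX and indexY
            assert(indexX >= 0 && indexY >= 0);
        }
    }
}
```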
After some searching on SO, people generally seem to recommend against the first case and say that better optimizations are possible if variables are declared as locally as possible. I have tested that and it seems to be correct so far, IF optimizations are turned on during compilation.
However, I am not sure about cases 2/3. All topics on `const` I could find on SO concern using the keyword as a modifier for function parameters rather than local variables. "Sell me on const correctness" is a very general discussion of the topic and mostly deals with accidental-error protection. The accepted answer also states that compiler optimizations "are possible", but I could not observe any performance difference in my case.
I understand that the compiler will most likely convert something like `const int number = 5` to the actual number (as stated here; it's about C#, but I don't expect it to differ for C++). However, in my case the value is not known at compile time.
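To illustrate the distinction (a toy sketch, not my real code): both locals below are `const` and assigned exactly once, but only the first has a value the compiler can see at compile time.

```cpp
int scaled(int runtimeValue)
{
    const int factor = 3;          // value known at compile time: foldable
    const int base = runtimeValue; // const, but value only exists at run time
    return base * factor;          // factor can be folded in; base cannot
}
```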
Does the compiler detect that a local variable is only assigned once, and is it therefore guaranteed to treat both cases the same? Could one of them lead to better optimizations than the other? Are those optimizations always better for one case, or could it switch between the two? Might it depend on the platform?
Edit: I should mention that the code WILL be compiled on "highly different platforms", and unfortunately I cannot inspect the assembly output in most cases.