The context here is a function that needs to run roughly once per frame, so it is very performance-critical. It contains a loop with some operations inside it.
private int MyFunction(int number)
{
// Code
for (int i = 0; i <= 10000; i++)
{
var value = i * number;
var valuePow2 = value * value;
// Some code which uses valuePow2 several times
}
return 0; // Not actual line
}
Now, because of a basic mathematical identity, we know that (a * b)² is equal to a² * b².
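For example, (3 * 4)² = 12² = 144, and 3² * 4² = 9 * 16 = 144.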
So, it would be possible to make my function into this:
private int MyFunction(int number)
{
// Code
var numberPow2 = number * number;
for (int i = 0; i <= 10000; i++)
{
var iPow2 = i * i;
var valuePow2 = numberPow2 * iPow2;
// Some code which uses valuePow2 several times
}
return 0; // Not actual line
}
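Note that int arithmetic in C# wraps on overflow, but the rewrite is still exact: multiplication mod 2³² is commutative and associative, so both versions produce bit-identical results. A quick throwaway check (the class name and the test value are mine, purely for illustration):
using System;

class EquivalenceCheck
{
    static void Main()
    {
        const int number = 123456; // arbitrary test value
        for (int i = 0; i <= 10000; i++)
        {
            unchecked // wrapping is the default anyway; made explicit here
            {
                int v1 = (i * number) * (i * number); // first version
                int v2 = (number * number) * (i * i); // second version
                if (v1 != v2) Console.WriteLine($"Mismatch at i={i}");
            }
        }
        Console.WriteLine("Check finished; no mismatch lines means the versions agree.");
    }
}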
Intuitively, this seems like it should be faster, since number² does not vary and is now calculated only once, outside the loop. At the very least, this would be much faster for a human to do, because the squaring inside the loop is performed on a much smaller number.
What I am wondering is: in C#, with a type like int, is the multiplication actually faster when the operands are smaller?
For example, will 5 * 5 execute faster than 5000 * 5000?
If so, then the second version is better, even if only by a small margin.
But if, for a given data type, the time is constant regardless of operand size, then the first version is better: the fact that half of the loop's multiplications now use smaller numbers would no longer matter, I do the same number of multiplications inside the loop either way, and the second version adds one extra multiplication before the loop starts.
I am aware that, for all intents and purposes, the performance difference is negligible. The second version was suggested to me in a Code Review because the function is critical, and I can't find any documentation to support either view.
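For reference, this is the kind of rough measurement I could run (class and method names are mine; micro-benchmarks like this are noisy, and the JIT may hoist or fold away constant work, so the numbers would only be indicative):
using System;
using System.Diagnostics;

class MulTiming
{
    // Times `iterations` multiplications of a * b. Perturbing `a` each pass
    // keeps the product from being hoisted out of the loop as loop-invariant.
    static void TimeMultiplies(int a, int b, int iterations)
    {
        long sink = 0; // accumulate so the JIT cannot discard the work as dead code
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
        {
            sink += a * b;
            a ^= 1; // flip the low bit: the operand keeps its order of magnitude
        }
        sw.Stop();
        Console.WriteLine($"~{b} * {b}: {sw.ElapsedMilliseconds} ms (sink={sink})");
    }

    static void Main()
    {
        const int n = 100_000_000;
        TimeMultiplies(5, 5, n);       // warm-up pass
        TimeMultiplies(5, 5, n);       // small operands
        TimeMultiplies(5000, 5000, n); // large operands
    }
}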