I just read about denormalized floating point numbers: should I replace all zero literals with an almost-zero literal to get better performance?
I am afraid that the evil zero constants in my code could pollute my performance. Example:
Program 1:
float a = 0.0f;
Console.WriteLine(a);
Program 2:
float b = 1.401298E-45f;
Console.WriteLine(b);
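For reference, here is a quick sketch I used to look at the raw bit patterns of the two literals (just BitConverter on the values from the two programs above; the expected output is in the comments):

using System;

class BitPatterns
{
    static void Main()
    {
        float a = 0.0f;
        float b = 1.401298E-45f; // this is float.Epsilon, the smallest positive denormal

        // Raw IEEE 754 single-precision bit patterns of the two literals.
        Console.WriteLine("0.0f          -> 0x" + BitConverter.ToUInt32(BitConverter.GetBytes(a), 0).ToString("X8")); // 0x00000000
        Console.WriteLine("1.401298E-45f -> 0x" + BitConverter.ToUInt32(BitConverter.GetBytes(b), 0).ToString("X8")); // 0x00000001
    }
}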
Shouldn't program 2 be 1,000,000 times faster than program 1, since b can be represented in IEEE floating point in canonical form, whereas program 1 has to work with "zero", which is not directly representable?
If so, the whole software development industry is flawed. A simple field declaration:
float c;
would automatically initialize it to zero, which would cause the dreaded performance hit.
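To make that concrete, this is the kind of default initialization I am talking about (a minimal sketch; the Sensor class is just an example I made up):

using System;

class Sensor
{
    public float Reading; // an instance field of type float is automatically initialized to 0.0f
}

class Program
{
    static void Main()
    {
        var s = new Sensor();
        Console.WriteLine(s.Reading); // prints 0, even though I never wrote a zero literal
    }
}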
Please spare me the hassle of "premature optimization is the root of all evil, blah blah". Delayed knowledge of how compiler optimizations work could result in the explosion of a nuclear plant. So I would like to know ahead of time what I am paying, so that I can safely ignore optimizing it.
P.S. I don't care if a float becomes denormalized as the result of a mathematical operation; I have no control over that, so I don't care.
Proof: x + 0.1f is 10 times faster than x + 0; see "Why does changing 0.1f to 0 slow down performance by 10x?"
Question Synopsis: is 0.0f evil? And is everyone who has used it as a constant also evil?
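In case it helps, here is roughly the kind of measurement I have in mind (a minimal sketch, not the code from the linked question; the array size, loop counts, and the 1e-40f seed are values I picked, and how big a slowdown shows up depends on the CPU and its flush-to-zero settings):

using System;
using System.Diagnostics;

class DenormalTiming
{
    static void Main()
    {
        Time(1.0f); // warm-up so JIT compilation does not skew the first measurement

        Console.WriteLine("normal seed (1.0f):     " + Time(1.0f) + " ms");
        Console.WriteLine("denormal seed (1e-40f): " + Time(1e-40f) + " ms");
    }

    // Runs the same multiply-and-accumulate loop over an array seeded with
    // either a normal or a denormal value and returns the elapsed time.
    static long Time(float seed)
    {
        float[] y = new float[1024];
        for (int i = 0; i < y.Length; i++) y[i] = seed;

        float acc = 0f;
        var sw = Stopwatch.StartNew();
        for (int iter = 0; iter < 20000; iter++)
            for (int i = 0; i < y.Length; i++)
                acc += y[i] * 0.5f; // with the denormal seed, every multiply and add touches a denormal operand
        sw.Stop();

        Console.WriteLine("  (checksum so the loop is not optimized away: " + acc + ")");
        return sw.ElapsedMilliseconds;
    }
}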