I have always assumed that dividing a double by an integer literal leads to faster code, because the compiler will select better microcode for the computation:
double a;
double b = a / 3.0;
double c = a / 3; // assumed to compute faster than b
For a single operation it does not matter, but for repeated operations it can make a difference. Is my assumption always correct, or does it depend on the compiler, the CPU, or something else?
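In case it helps, here is a minimal pair of functions I could feed to a compiler (for example with -O2, or an online compiler explorer) to compare the generated code for the two forms; the function names are just placeholders:

// Two versions of the same operation; only the divisor literal differs.
double div_by_double_literal(double a) { return a / 3.0; }
double div_by_int_literal(double a)    { return a / 3;   }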
The same question applies to multiplication: will 3 * a be faster than 3.0 * a?
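And here is a rough timing sketch I could use to measure all four forms myself; the iteration count, the lambda wrappers, and the printed "guard" value are arbitrary choices meant to keep the optimizer from discarding the loops, not a rigorous benchmark:

#include <chrono>
#include <cstdio>

// Times one operation applied to a varying double operand.
template <class Op>
static double time_op(const char* label, Op op) {
    const long N = 200000000;
    double sink = 0.0;
    auto t0 = std::chrono::steady_clock::now();
    for (long i = 0; i < N; ++i)
        sink += op(static_cast<double>(i));
    auto t1 = std::chrono::steady_clock::now();
    auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(t1 - t0).count();
    std::printf("%-10s %5lld ms\n", label, (long long)ms);
    return sink; // returned and later printed so the work is not optimized away
}

int main() {
    double guard = 0.0;
    guard += time_op("a / 3.0", [](double a) { return a / 3.0; });
    guard += time_op("a / 3",   [](double a) { return a / 3;   });
    guard += time_op("3.0 * a", [](double a) { return 3.0 * a; });
    guard += time_op("3 * a",   [](double a) { return 3 * a;   });
    std::printf("guard = %f\n", guard);
    return 0;
}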