float FLOAT = 1.0f;
double DOUBLE = 2.0;
float a = FLOAT / DOUBLE;
double b = FLOAT / DOUBLE;
Are a and b calculated in the same way?
How are FLOAT and DOUBLE converted at compile time?
It seems the default is up-conversion (promotion to the wider type) to prevent loss of precision.
Actually I'm doing a precision-sensitive calculation on the GPU, and the code looks like this:
float a = FLOAT / 2.0 + 1.0/3.0;
where the code contains very long expressions with many literals and variables (it is actually generated from Matlab code).
How do I control this conversion behavior, other than writing every literal with an f suffix like 2.0f (there are thousands of numbers in the expressions)?