Recently we've run into some floating-point precision issues. Specifically:
float values[10];
values[0] = 123.45;
values[1] = 567.89;
or:
float values[10];
values[0] = 123.45f;
values[1] = 567.89f;
or:
float values[10];
values[0] = float(123.45);
values[1] = float(567.89);
Here are my questions:

Although a literal written as XX.XX is a double, if it is assigned directly to a float lvalue, is it still compiled into a double value in the program's constant region, or is it converted to a 32-bit single-precision value at compile time?

Do XX.XXf and float(XX.XX) have the same meaning? Or does the latter form actually store a double constant and convert it to single precision at run time?
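For reference, here is a minimal sketch (the names a, b, c and the test program itself are just illustrative) that forces all three forms into constexpr floats, so any double-to-float conversion has to be done by the compiler rather than at run time:

#include <cstdio>

int main() {
    // All three initializers are constant expressions, so the
    // conversions below are performed at compile time, not at run time.
    constexpr float a = 123.45;        // double literal narrowed to float
    constexpr float b = 123.45f;       // float literal
    constexpr float c = float(123.45); // explicit conversion of a double literal

    // a and c both go decimal -> double -> float, so they are identical.
    static_assert(a == c, "same double-to-float conversion");

    std::printf("%.10f %.10f %.10f\n", a, b, c);
    return 0;
}

Whether b can ever differ from a and c by one ulp (because the float literal is rounded directly from decimal, while the other two round decimal to double and then to float) is part of what I'm asking.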