Can someone explain to me how to choose the precision of a float with a C function?
Examples:
theFatFunction(0.666666666, 3)
returns 0.667
theFatFunction(0.111111111, 3)
returns 0.111
You can't do that, since precision is determined by the data type (i.e. float, double, or long double). If you want to round it for printing purposes, you can use the proper format specifier in printf(), e.g. printf("%.3f\n", 0.666666666).
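If you need the rounded value itself rather than just rounded output, a minimal sketch of a rounding function might look like the following (theFatFunction is the asker's hypothetical name; keep in mind the result is still only approximate, since values like 0.667 have no exact binary floating-point representation):

#include <math.h>
#include <stdio.h>

/* Sketch: round x to n decimal places numerically.
   The result is approximate, because most decimal fractions
   cannot be represented exactly in binary floating point. */
double theFatFunction(double x, int n)
{
    double scale = pow(10.0, n);
    return round(x * scale) / scale;
}

int main(void)
{
    printf("%f\n", theFatFunction(0.666666666, 3)); /* prints 0.667000 */
    printf("%f\n", theFatFunction(0.111111111, 3)); /* prints 0.111000 */
    return 0;
}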
You can't. Precision depends entirely on the data type. You've got float, double, and long double, and that's it.
Most systems follow the IEEE-754 floating-point standard, which defines several floating-point types. On these systems, float is usually the IEEE-754 binary32 single-precision type: it has 24 bits of precision. double is the binary64 double-precision type: it has 53 bits of precision. These precisions are fixed by the IEEE-754 standard and cannot be changed.
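You can check these figures on your own implementation with the *_MANT_DIG macros from <float.h>, which give the precision of each type in base-FLT_RADIX digits (bits, on IEEE-754 systems):

#include <float.h>
#include <stdio.h>

int main(void)
{
    /* On IEEE-754 systems these typically print 24 and 53. */
    printf("float  precision: %d bits\n", FLT_MANT_DIG);
    printf("double precision: %d bits\n", DBL_MANT_DIG);
    return 0;
}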
When you print values of floating-point types with functions of the fprintf family (e.g., printf), the precision of the %f conversion is the number of digits printed after the decimal point, and it defaults to 6. You can override the default with a . followed by a decimal number in the conversion specification. For example:
printf("%.10f\n", 4.0 * atan(1.0)); // prints 3.1415926536
whereas
printf("%f\n", 4.0 * atan(1.0)); // prints 3.141593