I'm having precision problems when importing floats from CSV files created in Python into a program written in C. The following code is an example of what happens in my program: Python writes a float32 value to a file (in this case 9.8431373e+00), I read it into a string in C and use strtof to convert it back to a float, but the result differs in the last decimal place.
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    char* a;                      /* end pointer filled in by strtof */
    char x[10] = "9.8431373";     /* the value as written by Python */
    printf("%s\n", x);
    float f = strtof(x, &a);      /* parse the string as a float */
    printf("%.7e\n", f);
    return 0;
}
Output:
9.8431373
9.8431377e+00
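
To dig a little deeper, I also tried a small variant that prints the parsed value with more digits than a float can actually hold (assuming the usual IEEE-754 binary32 float); on my machine it shows 9.8431377410888672e+00:

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    char* a;
    float f = strtof("9.8431373", &a);
    printf("%.7e\n", f);            /* 9.8431377e+00 */
    printf("%.16e\n", (double)f);   /* more digits than the float stores */
    return 0;
}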
Now, correct me if I am wrong, but a float in C is 32 bits wide, which should give about 7 significant decimal digits of precision regardless of where the decimal point falls. So there shouldn't be any precision error as long as I'm not asking for more precision than a float allows.
If I did indeed declare a number more precise than a C float allows, then how did Python accept "9.8431373e+00" as a float32 without correcting it? Do Python and C have different standards for 32-bit floats?
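
For reference, here is a quick check I put together (not sure this is the right way to inspect it) to look at the representable float values on either side of my input, again assuming IEEE-754 binary32 and that nextafterf and FLT_DIG are available (linking with -lm may be needed):

#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <float.h>

int main(void) {
    float f = strtof("9.8431373", NULL);
    /* the two representable floats bracketing the parsed value */
    printf("previous: %.9e\n", nextafterf(f, 0.0f));
    printf("parsed:   %.9e\n", f);
    printf("next:     %.9e\n", nextafterf(f, INFINITY));
    printf("FLT_DIG = %d\n", FLT_DIG);  /* decimal digits a float is guaranteed to preserve */
    return 0;
}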