I'm writing a C function that transforms any float or double into a string of 32 '0' and '1' characters (following the IEEE 754 single-precision layout). I'm deliberately not using printf, since the objective is to understand how the representation works and to be able to store the resulting string.
I took the calculation method from this video: https://www.youtube.com/watch?v=8afbTaA-gOQ. It let me decompose a float into 1 bit for the sign, 8 bits for the exponent, and 23 bits for the mantissa.
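To check my converter's output, I compare it against the raw bits the machine actually stores. This helper is just a reference check, not part of the converter (the name print_raw_bits is mine; memcpy type punning is well-defined, unlike pointer casts):

#include <stdint.h>
#include <string.h>
#include <stdio.h>

/* Reference check: copy the float's bytes into a uint32_t and print
 * all 32 bits, most significant first, to compare with my converter. */
void print_raw_bits(float f)
{
    uint32_t bits;
    int      i;

    memcpy(&bits, &f, sizeof(bits));
    i = 31;
    while (i >= 0)
        putchar(((bits >> i--) & 1) ? '1' : '0');
    putchar('\n');
}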
I'm getting some pretty good results, but my converter is still not accurate: the mantissa is often wrong in its last bits. The method I use to calculate the mantissa bits is the following (ft_strnew just mallocs a zero-filled string of the given length):
char *ft_double_decimals(double n, int len)
{
    char *decimals;
    int  i;

    if (!(decimals = ft_strnew(len)))
        return (NULL);
    i = 0;
    while (i < len)
    {
        n = n * 2;                            /* shift the next bit left of the point */
        decimals[i++] = (n >= 1) ? '1' : '0'; /* record it */
        n = n - (int)n;                       /* drop the integer part, keep the fraction */
    }
    return (decimals);
}
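For context, this is roughly how I call it: normalize the value to 1.xxx * 2^e, then hand the fractional part to ft_double_decimals for the 23 mantissa bits (this snippet is illustrative, not my exact code; zero and negative inputs are not handled here):

float  f = 0.1f;
double n = f;       /* exact: every float value fits in a double */
int    e = 0;
char   *mantissa;

while (n >= 2.0)
{
    n /= 2.0;
    e++;
}
while (n < 1.0)
{
    n *= 2.0;
    e--;
}
mantissa = ft_double_decimals(n - 1.0, 23);  /* fraction bits only */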
For a float such as 0.1 I get the mantissa 1001 1001 1001 1001 1001 100, where I should get 1001 1001 1001 1001 1001 101: the last bit looks truncated instead of rounded up. This is so frustrating! I'm obviously missing something here, and I guess it has something to do with how the trailing bits are approximated, so if someone knows what method I should use instead of the one I'm using I'll be very grateful!
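In case it helps pinpoint the problem, here is a minimal sketch of the rounding I suspect is missing: compute one extra "guard" bit after the 23 stored bits and propagate a carry when it is set. This is round-half-up, not the true IEEE 754 round-to-nearest-even (which would also need tie-breaking on the remaining "sticky" bits); ft_strnew is assumed, as above.

char *ft_double_decimals_rounded(double n, int len)
{
    char *decimals;
    int  i;

    if (!(decimals = ft_strnew(len)))
        return (NULL);
    i = 0;
    while (i < len)
    {
        n = n * 2;
        decimals[i++] = (n >= 1) ? '1' : '0';
        n = n - (int)n;
    }
    if (n * 2 >= 1)                    /* guard bit (bit len + 1) is 1: round up */
    {
        i = len - 1;
        while (i >= 0 && decimals[i] == '1')
            decimals[i--] = '0';       /* carry ripples through a run of 1s */
        if (i >= 0)
            decimals[i] = '1';
        /* if i < 0 the mantissa overflowed to all zeros and the
         * exponent would need to be incremented; not handled here */
    }
    return (decimals);
}

On my 0.1 example this turns the truncated ...100 into ...101, since the 24th bit of the fraction is 1.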