I am reading C Primer Plus by Stephen Prata, and one of the first things it says when introducing floats is that they are only accurate up to a point. Specifically: "The C standard provides that a float has to be able to represent at least six significant figures...A float has to represent accurately the first six numbers, for example, 33.333333"
This is odd to me, because it makes it sound as if a float is exact up to six digits, but that is not true: 1.4 is stored as 1.39999..., and so on. You still get rounding errors.
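For example, this little test (I am assuming IEEE 754 single-precision floats here, so the exact digits may differ on other machines) shows the stored value is not exactly 1.4:

    #include <stdio.h>

    int main(void)
    {
        float f = 1.4f;
        /* Ask printf for far more digits than a float actually carries */
        printf("%.20f\n", f);   /* on my machine: 1.39999997615814208984 */
        return 0;
    }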
So what exactly is being provided? Is there a cutoff for how accurate a number is supposed to be?
In C, writing more than six significant figures into a float gets me a compiler warning, but why? If I use more than six figures, the value seems to be stored just as accurately.
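Here is the kind of thing I tried (again assuming IEEE 754 floats), and the error looks to me like the same small rounding error as with 1.4:

    #include <stdio.h>

    int main(void)
    {
        float pi = 3.14159265f;   /* nine significant figures in the source */
        printf("%.8f\n", pi);     /* I see 3.14159274: off only in the eighth significant digit */
        return 0;
    }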
This is made even more confusing by the section on underflow and subnormal numbers. When I take the smallest value a float can hold and divide it by 10, the result does not seem obviously "subnormal"; the error just looks like the ordinary rounding error mentioned above.
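Here is roughly what I experimented with (using FLT_MIN and fpclassify from the standard headers, and assuming the compiler is not flushing subnormals to zero):

    #include <stdio.h>
    #include <float.h>
    #include <math.h>

    int main(void)
    {
        float smallest_normal = FLT_MIN;         /* smallest normal float, about 1.17549e-38 */
        float tenth = smallest_normal / 10.0f;   /* drops below FLT_MIN, into the subnormal range */

        printf("FLT_MIN      = %e\n", smallest_normal);
        printf("FLT_MIN / 10 = %e\n", tenth);
        printf("subnormal?     %s\n",
               fpclassify(tenth) == FP_SUBNORMAL ? "yes" : "no");
        return 0;
    }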
So why does the book say floats are accurate to six digits, and how is a subnormal result different from an ordinary rounding error?