Why do the frexp/ldexp functions use a significand in the range [0.5, 1.0) when IEEE 754 floating point values actually have a significand in the range [1.0, 2.0)?
-
The rationale documents accompanying the C89 and C99 standards are silent on this matter. A reasonable guess would be that this normalization was chosen because it was familiar to the people who created C, as floating-point formats on DEC architectures used a mantissa normalized to [0.5, 1), rather than the [1, 2) chosen for the IEEE formats introduced later. – njuffa Aug 29 '15 at 06:31
-
"So why does frexp() put the radix point to the left of the implicit bit, and return a number in [0.5, 1) instead of scientific-notation-like [1, 2)" - "Perhaps the format returned by frexp made sense with the PDP-11's floating-point format(s)" – phuclv Aug 29 '15 at 07:37
1 Answer
For any valid floating point value that is not 0 or a denormal, the high bit of the mantissa is always 1. IEEE-754 takes advantage of this by not storing that bit in the binary encoding, thus squeezing out one extra bit of precision: double has 53 bits of precision but encodes only 52 of them. With the implicit bit restored, the mantissa is never less than 1; its range is [1.0 .. 2).
But as soon as the actual value is needed, for example when you printf() it or calculations need to be done, this bit has to be restored from the encoded value. That is typically done internally, inside the floating point execution unit of the processor; it was also the inspiration behind the infamous 80 bit internal format of the Intel x87 FPU, which stores the leading bit explicitly. The frexp function works with actual, decoded values, and the range it normalizes the significand to is [0.5 .. 1).

-
I don't follow. What is being restored and what do you mean by "actual values"? – Chris_F Aug 29 '15 at 07:34
-
Hmya, you have to understand more about the way floating point values are encoded to make sense of it. It is not terribly intuitive. The key is that you get 53 bits of precision out of 52 stored bits. Wikipedia has lots of material about it; check the "Double-precision floating-point format" article, for example. – Hans Passant Aug 29 '15 at 08:19
-
I know perfectly well how IEEE 754 floating point values work. I still do not see what the implied bit of the mantissa has to do with the range [0.5, 1.0). That is what you need to explain. – Chris_F Aug 29 '15 at 11:31
-
The implied bit changes the range of the *encoded* mantissa value to [1.0 .. 2). That's the oddball, the one you never observe in a program. [0.5 .. 1) is the normal range. – Hans Passant Aug 29 '15 at 11:34
-
I have to disagree with your assertion that the non-encoded bit somehow implies a normal range of [0.5, 1.0). If floats were 33 bits with a full 24-bit mantissa it would change nothing. It seems to me to be an arbitrary choice, and a less logical one at that. – Chris_F Aug 29 '15 at 17:07
-
For normalized floats the mantissa has a range of [1.0, 2.0) and for denormals it has a range of [0.0, 1.0), neither of which is [0.5, 1.0). – Chris_F Aug 29 '15 at 17:17