I think IEEE754 is an engineering miracle, and I rely on it daily. I don't have a question about floating-point representation, computation, or using it in a programming language.
My question is about the design choices behind how bits are allocated by IEEE754: 8 and 23 bits for the exponent and fraction fields of a 32-bit float, and 11 and 52 bits for those of a 64-bit double. Why those particular values?
The basic techniques, namely interpreting the exponent field with a bias (tweaked when the exponent is all 0s), giving the fraction field an implicit leading 1 (for normalized values but not denormalized ones), and reserving an all-1s exponent for special values, would all work just as well with 7 and 24 bits, or 9 and 22 bits, in the exponent and fraction fields of a 32-bit float, and would still allow representing a wide range of values with a roughly scale-invariant distribution.
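To make that concrete, here is a minimal sketch of those decoding rules parameterized by the exponent width E and fraction width F. The `decode` function and the 7/24 example are my own illustration of how the scheme generalizes; nothing here beyond the binary32 case is part of the standard itself.

```python
# Minimal sketch of IEEE 754-style decoding, parameterized by exponent width E
# and fraction width F, to show the same rules work for splits other than 8/23.

def decode(bits: int, E: int, F: int) -> float:
    """Decode a (1 + E + F)-bit pattern using the IEEE 754 binary rules."""
    sign = (bits >> (E + F)) & 1
    exp  = (bits >> F) & ((1 << E) - 1)   # biased exponent field
    frac = bits & ((1 << F) - 1)          # fraction (trailing significand)
    bias = (1 << (E - 1)) - 1             # 127 when E == 8, 1023 when E == 11

    if exp == (1 << E) - 1:               # all-1s exponent: infinity or NaN
        return float('nan') if frac else (-float('inf') if sign else float('inf'))
    if exp == 0:                          # all-0s exponent: zero or denormalized
        value = (frac / (1 << F)) * 2.0 ** (1 - bias)        # no implicit leading 1
    else:                                 # normalized: implicit leading 1
        value = (1 + frac / (1 << F)) * 2.0 ** (exp - bias)
    return -value if sign else value


# Standard binary32: the bit pattern of 1.0 is 0x3F800000 with E=8, F=23.
assert decode(0x3F800000, 8, 23) == 1.0

# The same rules with a hypothetical 7/24 split: 1.0 is sign=0,
# exponent=63 (the bias is 2**6 - 1 = 63), fraction=0.
assert decode(0b0_0111111_000000000000000000000000, 7, 24) == 1.0
```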
I'm assuming that, like everything else in IEEE754, there's wisdom behind these choices, but I've never seen it explained in any description of IEEE754 that I've read, hence my question here. Is there some measure of the distribution of representable values that is optimized by choosing 8 and 23? My naive guesses are undercut by the fact that for 64-bit doubles the ratio of fraction bits to exponent bits is about 5-to-1, rather than the roughly 3-to-1 of 32-bit floats.
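For what it's worth, here is the quick arithmetic behind those ratios, along with two obvious candidate measures one might try to compare: decimal digits of precision and dynamic range in decades. These are my own back-of-the-envelope figures, not something the standard states.

```python
# Back-of-the-envelope comparison of the two standard formats:
# fraction-to-exponent ratio, decimal digits of precision ((F + 1) * log10(2),
# counting the implicit bit), and rough dynamic range in decades (~2**E * log10(2)).
import math

for name, E, F in [("binary32", 8, 23), ("binary64", 11, 52)]:
    digits  = (F + 1) * math.log10(2)     # ~7.2 for binary32, ~16.0 for binary64
    decades = (2 ** E) * math.log10(2)    # ~77 for binary32, ~616 for binary64
    print(f"{name}: F/E = {F/E:.1f}, ~{digits:.1f} digits, ~{decades:.0f} decades")
```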