
ISO/IEC 9899:2011 §5.2.4.2.2 ¶10 (p48) says:

The presence or absence of subnormal numbers is characterized by the implementation-defined values of FLT_HAS_SUBNORM, DBL_HAS_SUBNORM, and LDBL_HAS_SUBNORM:

-1 indeterminable
0 absent (type does not support subnormal numbers)
1 present (type does support subnormal numbers)

What the! So on some platforms I cannot write `double d = 33.3;`? Or will the compiler automatically convert this to `333E-1`? What is the practical significance of the presence or absence of non-normalized floating-point numbers?

Keith Thompson
Vorac
  • You need to understand what a [subnormal number](http://en.wikipedia.org/wiki/Denormal_number) is. – Paul R Apr 29 '13 at 13:52
  • Neither 33.3 nor 333E-2 is a subnormal number on any normal implementation of floating point arithmetic. Assuming that you're using IEEE 754 floating point arithmetic, sub-normal numbers have an exponent equal to the smallest possible exponent and a mantissa that starts with a 0 bit instead of a 1 bit as in normal numbers. The leading 1 bit is normally implicit; it is not physically stored. Subnormal numbers are tiny, in other words. – Jonathan Leffler Apr 29 '13 at 14:03

1 Answer


Subnormal numbers are the nonzero floating-point numbers between -FLT_MIN and FLT_MIN (for type float) and -DBL_MIN and DBL_MIN (for type double). The constant FLT_MIN is typically 1.17549435E-38F, that is, small. If you do only a little programming with floating-point numbers, you may never have encountered a subnormal number.

On a compilation platform with FLT_HAS_SUBNORM == 0, there are only the numbers +0. and -0. between -FLT_MIN and FLT_MIN.

Subnormal numbers are usually handled in software (since they have exceptional behavior and do not happen often). One reason not to handle them at all is to avoid the slowdown that can occur when they happen. This can be important in real-time contexts.

The next Intel desktop processor generation (or is it the current one?) will handle subnormals in hardware.

The notion of subnormal number has nothing to do with the notations 33.3 and 333E-1, which represent the same double value.

The justification for subnormals and the history of their standardization in IEEE 754 can be found in these reminiscences by Kahan, under “Gradual Underflow”.

EDIT:

I could not find a source for Intel handling subnormals in hardware in its next generation of processors, but I found one for Nvidia's Fermi platform doing so already.

Pascal Cuoq
  • A small quibble: `33.3` and `333E-1` are two *different* floating-point literals that represent the same `double` *value*. – Keith Thompson Apr 29 '13 at 14:42
  • Rather than describing what subnormals are, it may be more meaningful to describe the consequence of their omission. Subnormals are supported to ensure that the difference between two representable floating-point values will be a multiple of the smallest floating-point value. Without them, even though `float` values 2.5E-38 and 2.6E-38 are both representable within a part per million, neither their difference, nor any other number smaller than 2.35E-38, would be representable at all. – supercat Aug 20 '13 at 23:32
  • @supercat If I remember correctly the contents of cs.berkeley.edu/~wkahan/ieee754status/754story.html , linked to in my answer, I have no hope of doing justice to the real story. The property you give is one. The Sterbenz lemma is another. Which one of these two, or which other property, should be called the more fundamental property that the committee eventually decided had to hold and justified the complexity of subnormals? Us discussing it is only speculation (unless you were there). Kahan discussing it is authoritative. – Pascal Cuoq Aug 21 '13 at 07:37
  • @PascalCuoq: Fair enough. I was merely intending to suggest that even people who don't want to delve too deep into things might quickly recognize the potential problems with having `a>b` not imply that `(a-b)>0`. I can imagine other possible solutions, such as forcing all values to be rounded to multiples of the smallest normalized value, or a "positive infinitesimal" value which would simultaneously compare greater than zero and equal to it, along with "negative infinitesimal". Some code which would "just work" with denormals would have to explicitly handle infinitesimals, but... – supercat Aug 21 '13 at 15:08
  • ...infinitesimals would make it possible for code to detect underflow conditions. Infinitesimals would allow for symmetric handling of divisions by zero-ish things (+value/+infinitesimal would yield +infinity, while +value/zero would yield NaN). If the pattern now used for "negative zero" represented the difference of seemingly-equal numbers ("unsigned infinitesimal"), adding that to an infinitesimal could yield unsigned infinitesimal even if adding "true zero" to an infinitesimal would leave it unchanged. BTW, do you know the rationale for NaN!=NaN? To my mind, "x==y"... – supercat Aug 21 '13 at 15:13
  • ...should indicate that "x should be considered numerically indistinguishable from y", a trait which should be just as true for NaN as for any other value. Since floating-point equality between calculated values doesn't really mean they results were numerically equal, but merely that they can't be shown to be different, and since "==" behaves as an equivalence relation everywhere except with NaN, I'm curious what merited making equality tests not be an equivalence relation there. – supercat Aug 21 '13 at 15:24
  • @supercat We have a member of the IEEE 754 committee! He explains! http://stackoverflow.com/a/1573715/139746 – Pascal Cuoq Aug 21 '13 at 16:05
  • @PascalCuoq: Thanks for that link, though I don't find any cited reasons particularly compelling. I have no problem with `NaN > x` values being unordered with regard to anything else; I don't think I've ever seen any code which was simplified by having `NaN != NaN`. If a loop is going to run until two things become equal (a common pattern, and one which denormals exist to handle), having them both become `NaN` would seem like a good reason to stop; reporting that a function converged to `NaN` would seem better than having it run forever. Have you ever seen code where Nan!=Nan was useful? – supercat Aug 21 '13 at 16:40