18

I want to directly calculate the minimum value of the float type, and here is my algorithm (suppose that the encoding of floating-point numbers conforms to the IEEE 754 standard):

#include <math.h>
#include <limits.h>
#include <float.h>
#include <stdio.h>

float float_min()
{
    int exp_bit = CHAR_BIT * sizeof(float) - FLT_MANT_DIG;
    float exp = 2 - pow(2, exp_bit - 1);

    float m = pow(2, -(FLT_MANT_DIG - 1));

    return m * pow(2, exp);
}

int main()
{
    printf("%g\n", float_min());
}

The output is 1.4013e-45. However, I find that the value of FLT_MIN in C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\include\float.h is 1.175494351e-38F. Which one is wrong?

Shangchih Huang
  • 319
  • 3
  • 11
  • 1
    Have you stepped through the code to check that the values of all intermediate variables are what you think they should be? And that the final calculation in the `return` statement doesn't add to the rounding problems? Or that the conversion to a `double` for the `printf` call isn't a problem? – Some programmer dude Aug 15 '17 at 12:30
  • 2
    Your code returns 0 for me: http://ideone.com/94xHPT – interjay Aug 15 '17 at 12:32
  • @interjay Try adding more significant figures, like `%.50f`. – Joseph Thomson Aug 15 '17 at 12:33
  • 7
    Also read up on denormal/subnormal floating point numbers. – Gene Aug 15 '17 at 12:34
  • 1
    You can edit the code. And please don't post code you haven't tested. – interjay Aug 15 '17 at 12:35
  • @Someprogrammerdude The `exp` is `-126`, and the `m` is `1.19209e-007` – Shangchih Huang Aug 15 '17 at 12:40
  • In C99 you can just do `nextafterf(0,1)` instead of assuming ieee 754. Also, `ldexpf` instead of `pow(2, ...)` is a good idea. – Art Aug 15 '17 at 12:57
  • Tip: Using `"%a"` is useful in gaining insight in the details of FP numbers instead of `"%g"`. – chux - Reinstate Monica Aug 15 '17 at 14:22
  • Why would you expect any (non-literal) expression (or sequence of evaluations) exists to portably, reliably generate `MIN_FLT`? (Maybe mucking about with `ldexp()`... But I don't see any reason that you should expect that `FLT_MIN` is necessarily in the range of the function `pow()` or the range of the multiplication operator.) – Eric Towers Aug 15 '17 at 23:25

2 Answers

41

Although this question has been asked and answered several times before, I don't see any answer that is actually correct. The key is that FLT_MIN is the smallest normalized value that can be represented. Back in the olden days that was all that mattered. Then Intel came along and introduced subnormal values, which reduce precision in order to represent values closer to 0. Subnormals are values with the minimum exponent and a fraction whose high bits are all zeros. It follows that the smallest non-zero subnormal value has a fraction that's all zeros except for the lowest bit, which is a 1. That's the smallest value that can be represented, but when you're down there, changing a bit here and there makes a large change in the value, so these things have to be used with great care.

EDIT, to clarify "normalization":

Suppose we're writing decimal values: 6.02*10^23, 0.602*10^24, 60.2*10^22. Those all represent the same value, but they clearly look different. So let's introduce a rule for writing decimal values: every value must have exactly one non-zero digit to the left of the decimal point. So the "normalized" form of that value is 6.02*10^23, and if we have a value written in a non-normalized form we can move the decimal point and adjust the exponent to preserve the value and put it into normalized form.

IEEE floating-point does the same thing: the rule is that the high bit of the fraction must always be 1, and any calculation has to adjust the fraction and the exponent of its result to satisfy that rule.

When we write decimal values that are really close to 0 that's not a problem: we can make the exponent as small as we need to, so we can write numbers like 6.02*10^-16384. With floating-point values we can't do that: there is a minimum exponent that we can't go below. In order to allow smaller values, the IEEE requirements say that when the exponent is the smallest representable value, the fraction doesn't have to be normalized, that is, it doesn't have to have a 1 in its high bit. In writing decimal values, that's like saying we can have a 0 to the left of the decimal point. So if our decimal rule said that the lowest allowable exponent is -100, the smallest normalized value would be 1.00*10^-100, but smaller values could be represented as non-normalized: 0.10*10^-100, 0.01*10^-100, etc.

Now add a requirement to our decimal rules that we can only have three digits: one to the left of the decimal point and two to the right. That's like the floating-point fraction in that it has a fixed number of digits. So for small normal values we have three digits to play with: 1.23*10^-100. For smaller values we use leading zeros, and the remaining digits have less precision: 0.12*10^-100 has two digits, and 0.01*10^-100 has only 1. That's also how floating-point subnormals work: you get fewer and fewer significant bits as you get farther and farther below the minimum normalized value, until you run out of bits and you get 0.

EDIT: to clarify terminology, the IEEE-754 standard referred to those values that are greater than 0 and less than the minimum normalized value as denormals; the latest revision of IEEE-754 refers to them as subnormals. They mean the same thing.

Pete Becker
  • 74,985
  • 8
  • 76
  • 165
  • 3
    Any reference for the claim that it was Intel who introduced denormals? I can't seem to find one. – Ruslan Aug 15 '17 at 15:04
  • 11
    @Ruslan William Kahan introduced denormals (with Jerome Coonen and Harold Stone) in the KCS draft. Kahan was at the time paid as a consultant by Intel to design the 8087. Some of the story is told here: https://blogs.mathworks.com/cleve/2014/07/21/floating-point-denormals-insignificant-but-controversial-2/ – Pascal Cuoq Aug 15 '17 at 15:07
  • I wonder how the cost of supporting denormals would compare with the cost of forcing small values to be rounded to an increment equal to the smallest normalized value (e.g. if the smallest normalized float would be 2^-127, add and then subtract 2^-104) if a value is positive and smaller than that, or subtract and then add if the value was negative and small). – supercat Aug 15 '17 at 18:29
  • @supercat: There are very good reasons for denormals to exist. Rounding them to normals or flushing them to zero significantly harms numerical stability properties in some important applications. There are good explanations of this out there (maybe even in "what everyone should know...") but I don't have a list of links to give you. – R.. GitHub STOP HELPING ICE Aug 15 '17 at 20:15
  • @R..: It is extremely desirable that the smallest possible difference between two float values be representable; that is typically accomplished using denormals, but other approaches would be usable as well. The rounding I suggested would lose 23 bits of dynamic range on `float` compared to supporting denormals, which would be somewhat irksome, but most applications don't really use the full dynamic range, and it might be cheaper to support in hardware. – supercat Aug 15 '17 at 20:26
  • @supercat: Ah, I see. That looks like it should work, at the expense of some dynamic range, but there may be issues with representability of reciprocals. – R.. GitHub STOP HELPING ICE Aug 15 '17 at 20:31
  • @supercat Using your [idea](https://stackoverflow.com/questions/45692834/the-minimum-positive-float-value-by-direct-calculation-is-different-from-the-flt#comment78356758_45693086), would `if (a != b) { d = a - b; ...}` ever result in `d` with a value of 0.0? – chux - Reinstate Monica Aug 17 '17 at 14:43
  • @chux: If `a` and `b` were values that had been produced by the indicated style of calculations, no. With some bit patterns that could never occur as the result of calculations performed that way, yes. – supercat Aug 17 '17 at 15:08
16

Your result 1.4013e-45 is the minimal positive denormal (subnormal) float value, also known as FLT_TRUE_MIN, which is equal to 1.401298464e-45F.

FLT_MIN is the minimal positive normalized float value (1.175494351e-38F).

ikleschenkov
  • 972
  • 5
  • 11