
I just encountered a behaviour I don't understand in a C program that I'm using.

I guess it's due to floating-point numbers, maybe an int to float cast, but I would still like someone to confirm that this is normal behaviour, and explain why.

Here is my C program:

#include <stdio.h>
#include <float.h>

int main(void)
{
  printf("FLT_MIN : %f\n", FLT_MIN);
  printf("FLT_MAX : %f\n", FLT_MAX);

  float valueFloat = 0.000000;
  int valueInt = 0;

  if (valueInt < FLT_MIN) {
    printf("1- integer %d < FLT_MIN %f\n", valueInt, FLT_MIN);
  }
  if (valueFloat < FLT_MIN) {
    printf("2- float %f < FLT_MIN %f\n", valueFloat, FLT_MIN);
  }

  if (0 < 0.000000) {
    printf("3- 0 < 0.000000\n");
  } else if (0 == 0.000000) {
    printf("4- 0 == 0.000000\n");
  } else {
    printf("5- 0 > 0.000000\n");
  }

  if (valueInt < valueFloat) {
    printf("6- %d < %f\n", valueInt, valueFloat);
  } else if (valueInt == valueFloat) {
    printf("7- %d == %f\n", valueInt, valueFloat);
  } else {
    printf("8- %d > %f\n", valueInt, valueFloat);
  }

  return 0;
}

And here is my command to compile and run it:

gcc float.c -o float ; ./float

Here is the output:

FLT_MIN : 0.000000
FLT_MAX : 340282346638528859811704183484516925440.000000
1- integer 0 < FLT_MIN 0.000000
2- float 0.000000 < FLT_MIN 0.000000
4- 0 == 0.000000
7- 0 == 0.000000

A C developer I know considers it normal that line "1-" is displayed, because of the loss of precision in the comparison. Let's admit that.

  • But why doesn't line "3-" appear then, since it's the same comparison?
  • Why does line "2-" appear, since I'm comparing the same numbers? (or at least I hope so)
  • And why do lines "4-" and "7-" appear? It seems to be different behaviour from line "1-".

Thanks for your help.

Thibault

3 Answers


Your confusion is probably over the line:

printf("FLT_MIN : %f\n", FLT_MIN);

Change it to:

printf("FLT_MIN : %g\n", FLT_MIN);

and you will see that FLT_MIN is actually NOT zero, but a tiny bit larger than zero.

Kai Petzke

FLT_MIN is not 0; it's just above 0. You just need to show more decimal places to see that. FLT_MIN is the smallest normalized floating-point number above 0 that the computer can represent. Since floating-point values are almost always approximations, printf and friends round when printing, unless you ask for the precision:

printf("FLT_MIN : %.64f\n", FLT_MIN);

3 does not actually appear in your output because 0 is not less than 0.

4 is comparing 0 with 0; the computer has no problem representing both of those exactly (0 is a special case for floats), so they compare equal.

7 is the same case as 4, just with intermediate assignments.

Ryan Haining

This is correct behaviour. Under IEEE 754, zero is exactly representable as a float, so it can be 'equal' to integer zero (although 'equivalent' would be a better term). FLT_MIN is the smallest-magnitude normalized number that can be represented as a float and still be distinguished from zero. Even though the standard %f format specifier to printf() will show FLT_MIN as 0.000000, it is not zero. A literal 0.00... will be interpreted by the compiler as floating-point 0, which is not equal to FLT_MIN, even though the default six-decimal-place %f format prints them the same.

David G