#include <float.h>
#include <stdio.h>

int main(int argc, char** argv)
{
  printf("[0] %f\n", FLT_MAX);
  printf("[1] %lf\n", FLT_MAX);
  printf("[2] %Lf\n", FLT_MAX); // gcc warning: expects argument of type 'long double'
  printf("[3] %f\n", DBL_MAX);
  printf("[4] %lf\n", DBL_MAX);
  printf("[5] %Lf\n", DBL_MAX); // gcc warning: expects argument of type 'long double'

  // using C++ and std::numeric_limits<float/double>::max() gives the same results

  return 0;
}

Linux x64: lsb_release -d prints "Description: Ubuntu 15.04", gcc --version prints "gcc (Ubuntu 4.9.2-10ubuntu13) 4.9.2", ldd --version prints "ldd (Ubuntu GLIBC 2.21-0ubuntu4) 2.21"

[0] 340282346638528859811704183484516925440.000000 
[1] 340282346638528859811704183484516925440.000000 
[2] --> warning-line disabled 
[3] 179769313486231570814527423731704356798070567525844996598917476803157260780028538760589558632766878171540458953514382464234321326889464182768467546703537516986049910576551282076245490090389328944075868508455133942304583236903222948165808559332123348274797826204144723168738177180919299881250404026184124858368.000000 
[4] 179769313486231570814527423731704356798070567525844996598917476803157260780028538760589558632766878171540458953514382464234321326889464182768467546703537516986049910576551282076245490090389328944075868508455133942304583236903222948165808559332123348274797826204144723168738177180919299881250404026184124858368.000000 
[5] --> warning-line disabled

Windows 7 x64: VS2010 (latest Version 10.0.40219.1 SP1Rel) Debug/Win32

[0] 340282346638528860000000000000000000000.000000 
[1] 340282346638528860000000000000000000000.000000 
[2] 340282346638528860000000000000000000000.000000
[3] 179769313486231570000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000.000000 
[4] 179769313486231570000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000.000000 
[5] 179769313486231570000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000.000000    

Difference on FLT_MAX:
VS2010:    340282346638528860000000000000000000000.000000
GCC 4.9.2: 340282346638528859811704183484516925440.000000

That's a difference of 1.8829581651548307456e+20 (not that small), and it gets much bigger using doubles.

UPDATE: actual question

Is there a way (with only a small change to the code) to get the same result on Linux and Windows (and others), or do I need to use the very same implementation on all systems? I'd rather not maintain my own implementation for my Windows/Linux/Linux-ARM/VxWorks/Solaris platforms.

llm

3 Answers


The printf function is implemented differently on these platforms.

Look at this code:

#include <stdio.h>

int main()
{
    printf("%lf\n", ((double)1e100)/3);
    return 0;
}

This program compiled with VC++ gives:

3333333333333333200000000000000000000000000000000000000000000000000000000000000000000000000000000000.000000

while the same program compiled with g++ gives:

3333333333333333224453896013722304246165110619355184909726539264904319486405759542029132894851563520.000000
dlask
  • Sorry for being unclear: it is clear to me that the result is implementation-dependent - but I've never thought that the integer part of the float could be "interpreted" that differently by different implementations. – llm Jun 19 '15 at 11:03
  • 2
    @llm: If that's not what you wanted, then maybe you should ask an actual question instead of just stating facts. – mastov Jun 19 '15 at 11:04
  • 1
    @llm You are dealing with floating point numbers so only *relative error* is important there, the internal representation is unable to support fixed absolute error. – dlask Jun 19 '15 at 11:06
  • Actual question: is there a way (small change) to get the same result on Linux/Windows (and others), or do I need to use the very same implementation on both (all) systems? I'm afraid of having my own implementation for my Win/Linux/Linux-ARM/VxWorks/Solaris platforms. – llm Jun 19 '15 at 11:09
  • 1
    @llm You have to respect the floating point precision and you should not rely on digits that are beyond that. In other words: Write your program in a way that's insensitive to such differences. – dlask Jun 19 '15 at 11:13
  • @dlask: Even if you respect the precision, it might be desirable (for example for improved UX) to get one version or the other on *all* platforms. I think that's a valid concern. – mastov Jun 19 '15 at 11:20
  • @llm: Please edit your question to include the "actual question" you just stated here. – mastov Jun 19 '15 at 11:20

The difference between the platforms is in how the numbers are printed, not in the numbers themselves.

You seem to misunderstand how floating-point numbers work. Their accuracy is relative to their magnitude: the magnitude is represented by the number's exponent, the value by its mantissa. The size of the mantissa is fixed; for float it is 23 bits plus one implicit bit. Converted to decimal, this means that you can represent about seven significant decimal digits accurately.

FLT_MAX is about 3.40282346639e+38. The next smaller number that can be represented as a float is about 3.40282326356e+38. That's a difference of 2.02824096037e+31, about eleven orders of magnitude larger than your perceived error.

Even if the apparent difference between the numbers seems huge, both printed values are much closer to FLT_MAX than to any other single-precision floating-point number, and re-converting the textual representation to float should yield FLT_MAX.

In short: Both implementations of printf are valid.

M Oehm

Is there a way (with only a small change of the code) to get the same result on Linux and Windows?

Yes - mostly.

  1. Use gcc on Windows. By "Windows", OP is certainly referring to a Visual Studio compiler or related product. gcc is available on Windows, Linux, and many other platforms, and gives more consistent results than OP's examples. It is really a compiler/library issue and not an OS one.

  2. Use base-2/16 output.

    printf("%a\n", FLT_MAX);
    // 0x1.fffffep+127  gcc 4.9.2
    // 0x1.fffffep+127  VS 2010
    
  3. Use "%.*e" with limited precision. The C spec only guarantees a minimum of 6 significant digits for float and 10 for double, so use FLT_DIG/DBL_DIG.

To fix the 2/3-digit exponent difference, see How to control the number of exponent digits ... and Visual Studio's _set_output_format.
Note that the precision field in "%.*e" is the number of digits after the leading digit, which is why the code uses -1.

    printf("%.*e\n", FLT_DIG - 1, FLT_MAX);
    // 3.40282e+38     gcc 4.9.2
    // 3.40282e+038    VS 2010
  4. Use "%.*e" with more, but not excessive, precision. FLT_DECIMAL_DIG/DBL_DECIMAL_DIG is the number of digits needed to print a value so it can be read back and yield the same float. Printing with more digits leads to OP's problem. Consider double: notice that VS printed, in OP's post, 17 correctly rounded significant digits. If defined in VS, DBL_DECIMAL_DIG would be 17 there; VS prints 17 digits to preserve "round-tripping" of numbers. By directing gcc to print 17 significant digits, we get the same result.

    #ifdef FLT_DECIMAL_DIG
      //  FLT_DECIMAL_DIG/DBL_DECIMAL_DIG typically not available in VS
      #define OP_FLT_Digs (FLT_DECIMAL_DIG)
      #define OP_DBL_Digs (DBL_DECIMAL_DIG)
    #else  
      #define OP_FLT_Digs (FLT_DIG + 3)
      #define OP_DBL_Digs (DBL_DIG + 2)
    #endif
    printf("%.*e\n", OP_FLT_Digs - 1, FLT_MAX);
    // 3.40282347e+38     gcc 4.9.2
    // 3.40282347e+038    VS 2010
    
  5. More on "%.*e". There is value in not printing more than FLT_DECIMAL_DIG/DBL_DECIMAL_DIG significant digits, due to corner cases where scanning the number back results in the next FP number. It is essentially a double-rounding issue, somewhat deep for this post, so no details.

  6. Of course all this is moot if the various systems use vastly different FP formats, which is quite probable with long double. Exact FP consistency is difficult, but the above will certainly help minimize differences.

chux - Reinstate Monica
  • How do you figure the C standard specifies “a *minimum* precision”? C 1999 7.19.6.1 8 says, for `e`, “… the number of digits after it [the decimal point] is equal to the precision; if the precision is missing, it is taken as 6;…” Draft N1570 for 2011 says the same thing, and 2018 official says the same thing. – Eric Postpischil Jan 26 '21 at 12:22