
I tried to see what is happening in this code "under the hood" using GDB.

At the moment GDB works for me only in a Linux terminal and, as stated in the title, I get the expected (logical) output whenever I run the code there.

I think the problem lies around the return from `func()`.

Running the same code in cmd gives "not equal", while in the terminal it gives "equal". Why is this happening?

I use gcc to compile the code.

Here is the code:

#include <stdio.h>

double func()
{
    double y = 5;
    return (double)y / 3;    /* division performed at run time */

    /* Code that works as expected:
    double y = (double)5/3;
    return y;
    */
}

int main()
{
    double x;
    x = (double)5 / 3;       /* constant expression, typically evaluated at compile time */

    if (x == func())
        printf("%lf equal to %lf\n", x, func());
    else
        printf("%lf not equal to %lf\n", x, func());

    return 0;
}
Robert
  • Welcome to [floating point math](https://floating-point-gui.de). What is the difference between `x` and `func()` on both systems? That's where you start to figure this out. It could be something absurdly small yet non-zero, like 1e-29 (see the diagnostic sketch after these comments). – tadman Mar 19 '20 at 21:08
  • @tadman So cmd interprets double variables in a different way than a terminal would. But even so, I suppose that in cmd `func()` and `x` are handled in the same manner since both of them are of `double` datatype. – Robert Mar 19 '20 at 21:25
  • What toolchain are you using for the Windows build? There will be a debugger for that. You can use GDB in Windows too in any case (if using MinGW). – Clifford Mar 19 '20 at 21:26
  • I'm running this in cmd using `gcc version 8.1.0 (x86_64-posix-seh-rev0, Built by MinGW-W64 project)` and it works fine, the result is `equal to`. – anastaciu Mar 19 '20 at 21:33
  • It's not a function of `cmd`, it's likely something else to do with your compiler or compiler settings. A `double` can be represented both as a register (often 80 bits) and as a 64-bit in-memory value. Shifting between these two can cause tiny differences in the values that mean they're not "equivalent". Compiler optimizations can throw a wrench in here, too, representing it in the most efficient way for any given situation. – tadman Mar 19 '20 at 21:36
  • No, it is nothing to do with "cmd" or "terminal" (or more specifically bash or whatever shell you are running in _terminal_) - that is just the method of launching the executable.
  • Ok - the statement "_problem is that GDB works in linux terminal_" can be interpreted as meaning you can only run GDB in Linux - which is not true. Are you instead saying that _your program_ works in GDB in Linux? It is very unclear, and your comment about it working in MinGW seems contrary to what you are saying in the question. Moreover, the information in your comment is relevant and should be included in the question, not a comment - the stuff about terminal and cmd is irrelevant; the toolchains used (for both) are relevant. – Clifford Mar 19 '20 at 21:51
  • A further issue with this question is _"Windows Program runs in Linux Terminal but not in Windows cmd"_. It is either a Windows program or a Linux program, and you cannot run one on the other without recompilation with a _different_ compiler. Please clarify. – Clifford Mar 20 '20 at 00:18
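Following tadman's suggestion, here is a minimal diagnostic sketch (assuming the same `func()` as in the question) that prints both values and their difference to 17 significant digits, which is enough to distinguish any two distinct `double` values:

#include <stdio.h>

double func()
{
    double y = 5;
    return (double)y / 3;    /* run-time division, as in the question */
}

int main()
{
    double x = (double)5 / 3;        /* constant expression */
    double f = func();

    printf("x      = %.17g\n", x);
    printf("func() = %.17g\n", f);
    printf("diff   = %.17g\n", x - f);
    return 0;
}

If the two values are identical the difference prints as 0; otherwise it will likely be a single unit in the last place, on the order of 2e-16.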

1 Answer


You are comparing a compiler-generated constant with a value calculated at run time, and then compiling with different compilers and running in different environments, likely on different processors. That this non-deterministic code yields different results should not surprise you.

Further, `double` is a 64-bit type, while the x86 FPU supports 80-bit floating point. This extended precision can be used for intermediate calculations, but not all compilers will do so, either for compile-time constants or for run-time calculations.

All of these are factors in the non-deterministic nature of the least significant digits of floating-point results, and as a rule comparing floating-point types for equality is ill-advised. Rather, you might test for some acceptably small difference:

#include <float.h>   /* FLT_EPSILON */
#include <math.h>    /* fabs() */

#define EQUALITY_LIMIT FLT_EPSILON

if ( fabs(x - func()) < EQUALITY_LIMIT )
    printf("%lf equal to %lf\n", x, func());
else
    printf("%lf not equal to %lf\n", x, func());
Clifford
  • So `return (double)y/3` is a run-time generated calculation and this should have about 80 bits. `x` has 64 bits and that means 15 digits are stored. I suppose `return (double)y/3` stores more than that and that's why the equals sign won't work. That makes me wonder why it works in a terminal. – Robert Mar 19 '20 at 22:19
  • @Robert - no, I am not saying that - it has type `double` and will be 64 bits. 80 bits may or may not be used internally before converting to `double`. I am merely suggesting some issues that _may_ contribute to the folly of comparing floating-point values for equality. You'd have to compare the assembly-level code and the target processor's FPU implementation to determine exactly why the results differ in this case, and you have not specified these (and are still weirdly fixated on "terminal" vs "cmd" rather than the generated code). – Clifford Mar 20 '20 at 00:02
  • @Robert : Moreover 15 digits are not stored. A double precision _binary_ floating point representation happens to be sufficient to approximate a _real_ number to _at least_ 15 _significant_ _decimal_ digits of precision. All the italicised parts of the previous sentence are critical to understanding the concept. There is no exact coincidence between decimal and binary representation, and what you are comparing are the binary values so it is down to differences in the least significant bits, not the least significant decimal digits. – Clifford Mar 20 '20 at 00:09
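To actually see which bits differ, a small dump of the raw representation can help (a hypothetical helper `dump_bits`, assuming the `func()` and `x` from the question):

#include <inttypes.h>
#include <stdio.h>
#include <string.h>

/* Print a double's value and its exact bit pattern; any mismatch
   shows up in the low-order hexadecimal digits of the significand. */
static void dump_bits(const char *label, double d)
{
    uint64_t bits;
    memcpy(&bits, &d, sizeof bits);   /* well-defined way to reinterpret the bytes */
    printf("%-8s %.17g  0x%016" PRIx64 "\n", label, d, bits);
}

Calling `dump_bits("x", x)` and `dump_bits("func()", func())` on both systems shows whether the stored binary values differ, and by how many least significant bits.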