
I wrote some code to find the derivative of a function at a given point. The code reads:

#include"stdafx.h"
#include<iostream>


using namespace std;

double function(double x) {
    return (3 * x * x);
}



int main() {
    double x, y, dy, dx;
    cin >> x;
    y = function(x);
    dx = 0.00000001;
    dy = function(x + dx) - y;
    cout << "Derivative of function at x = " << x << " is " << (double)dy / dx;
    cin >> x;
}

Now my college uses Turbo C++ as its IDE and compiler, while at home I have Visual Studio (because TC++ looks very bad on a 900p screen, but jokes apart). When I tried a similar program on the college PCs, the result was quite messed up and much less accurate than what I get at home. For example:


x = 3

@College result = 18.something

@Home result = 18 (precise without a decimal point)

x = 1

@College result = 6.000.....something

@Home result = 6 (precise without a decimal point)

The Very big Question:

Why are different compilers giving different results?

Suhrid Mulay
  • Worth reading: [Is floating point math broken?](https://stackoverflow.com/questions/588004/is-floating-point-math-broken) – user4581301 Jun 13 '17 at 15:14
  • Are you sure the result is mathematically different? It looks to me like a simple formatting issue. – François Andrieux Jun 13 '17 at 15:15
  • Also know that Turbo C++ is a Cretaceous-Era C++ compiler. It comes from the days before Standard C++, and it does things very differently from a modern C++ compiler. – user4581301 Jun 13 '17 at 15:16
  • See https://stackoverflow.com/questions/554063/how-do-i-print-a-double-value-with-full-precision-using-cout – François Andrieux Jun 13 '17 at 15:16
  • Please, *don't* use Turbo C++ - period. It's old, pre-standard and has no business being used in *any* capacity in 2017. All it can do is teach you *bad* C++ where you should focus on learning *modern* C++ (which it can't compile). – Jesper Juhl Jun 13 '17 at 15:35

1 Answer


I’m 90% sure the result is the same in both cases, and the only reason you see a difference is output formatting. For 64-bit IEEE double math, the precise results of those computations are probably 17.9999997129698385833762586116790771484375 and 6.0000000079440951594733633100986480712890625, respectively.

If you want to verify that hypothesis, you can print your double values this way:

#include <cstdio>    // printf
#include <cstdint>   // uint64_t
#include <cinttypes> // PRIx64

void printDoubleAsHex( double val )
{
    const uint64_t* p = (const uint64_t*)( &val );
    printf( "%" PRIx64 "\n", *p );
}

Then check whether both compilers print the same bits.

However, there’s also a 10% chance that your two compilers indeed compiled the code differently, so the results genuinely differ. That’s not uncommon; it can even happen with the same compiler under different settings/flags/options.

The most likely reason is different instruction sets. By default, many modern compilers generate SSE instructions for code like yours, while older ones produce legacy x87 code (x87 operates on 80-bit floating-point values on its register stack, SSE on 32- or 64-bit FP values in vector registers, hence the difference in precision). Another reason is different rounding modes. Yet another is compiler-specific optimizations, such as the /fp switch in Visual C++.

Soonts
  • How is your 90% case valid when he uses the same approach (i.e. `cout`) to output the data in both cases? Do you assume that `cout` is implemented differently (e.g. has different defaults, like `precision`)? – Yuki Jun 13 '17 at 22:46
  • @Yuki Yep. I think those two versions of the standard libraries, both C++ and the underlying CRT (they’re both very different between those two environments), use different output precision values. – Soonts Jun 13 '17 at 23:04