
I have a CUDA loop in which a variable `cumulative_value` accumulates a sum in double precision:

double cumulative_value = 0.0;
loop(...)
{
    // ...
    double valueY = computeValueY();
    // ...
    cumulative_value += valueY;
}

This code is compiled with different CUDA SDK versions and run on two machines:

 M1 : Tesla M2075, CUDA 5.0
 M2 : Tesla M2075, CUDA 7.5

At step 10, the results differ. The values for this addition (double-precision representation in hexadecimal) are:

   0x 41 0d d3 17 34 79 27 4d    => cumulative_value
+  0x 40 b6 60 1d 78 6f 09 b0    => valueY
-----------------------------------------------------
=  0x 41 0e 86 18 20 3c 9f 9b    (for M1)
=  0x 41 0e 86 18 20 3c 9f 9a    (for M2)
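
Bit patterns like the ones above can be dumped with a few lines of host code; here is a minimal sketch (not from the original code):

#include <cstdio>
#include <cstring>
#include <cstdint>

// Print the raw 64-bit pattern of a double, most significant byte first,
// in the same layout as the values shown above.
void print_double_bits(double d)
{
    std::uint64_t bits;
    std::memcpy(&bits, &d, sizeof bits);
    std::printf("0x");
    for (int shift = 56; shift >= 0; shift -= 8)
        std::printf(" %02x", (unsigned)((bits >> shift) & 0xffu));
    std::printf("\n");
}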

The rounding mode is not specified, as far as I can see in the generated PTX (the instruction is a plain `add.f64`), but M1 seems to use round toward plus infinity and M2 some other mode.

If I force M2 to use one of the four rounding modes (`__dadd_XX()`) for this instruction, `cumulative_value` always differs from M1, even before step 10.
But if I force both M1 and M2 to the same rounding mode, their results match each other, yet are not equal to the results M1 produced before the modification.
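
For reference, forcing a fixed rounding mode on the accumulation looks roughly like this (a minimal, simplified sketch; `computeValueY()` here is just a placeholder for the real computation):

// Placeholder for the real per-step computation from the loop above.
__device__ double computeValueY(int i)
{
    return 1.0 / (double)(i + 1);
}

__global__ void accumulate_forced_rounding(double* out, int n)
{
    double cumulative_value = 0.0;
    for (int i = 0; i < n; ++i)
    {
        double valueY = computeValueY(i);
        // Round to nearest even; __dadd_rz / __dadd_ru / __dadd_rd select the other three modes.
        cumulative_value = __dadd_rn(cumulative_value, valueY);
    }
    *out = cumulative_value;
}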

My aim is to reproduce the M1 (CUDA 5.0) results on the M2 machine (CUDA 7.5), but I don't understand the default rounding-mode behavior at runtime. I am wondering whether the rounding mode is dynamic at runtime when it is not specified. Do you have any idea?

Calex
  • Just a rough idea: you might try creating CUBIN files and having a look at them with the [CUDA binary utilities](http://docs.nvidia.com/cuda/cuda-binary-utilities). This might bring some insight into what the PTX files are actually compiled to. – Marco13 Feb 11 '16 at 11:25
  • You really need to disassemble the binary code with cuobjdump to be certain of what is going on at the instruction level. Could you add an actual repro case to your question? I think compilable code is required to understand what is going on here. The default architecture has changed between CUDA 5 and 7.5; it might be as simple as you now compiling for a different instruction set if you use the default compilation settings. – talonmies Feb 11 '16 at 11:34
  • Thank you for your answers. I will try to reproduce this behavior in a minimal CUDA kernel and look at the binary file. – Calex Feb 11 '16 at 13:20
  • After another PTX analysis, in my case valueY is computed with an FMA instruction on CUDA 5.0, while the CUDA 7.5 compiler uses MUL and ADD instructions. The CUDA documentation explains that there is only one rounding step when a single FMA instruction is used, whereas there are two rounding steps with MUL and ADD. Thank you very much. – Calex Feb 12 '16 at 09:20
  • @Calex can you post this as an answer? – havogt Feb 13 '16 at 10:52
  • Sure, I just did it. Thanks – Calex Feb 17 '16 at 08:44

1 Answer

After another PTX analysis, it turns out that in my case valueY is computed with an FMA instruction on CUDA 5.0, while the CUDA 7.5 compiler uses separate MUL and ADD instructions. The CUDA documentation explains that there is only one rounding step when a single FMA instruction is used, whereas there are two rounding steps with MUL and ADD. Thank you very much for helping me :)
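
For anyone hitting the same difference, the two code-generation variants can be pinned down explicitly in source. This is only a sketch, assuming valueY comes from a multiply-add of the form a * b + c:

// CUDA 5.0 behavior in my case: one fused multiply-add, a single rounding step.
__device__ double valueY_fused(double a, double b, double c)
{
    return fma(a, b, c);
}

// CUDA 7.5 behavior in my case: separate multiply and add, two rounding steps.
// The _rn intrinsics also prevent the compiler from contracting them back into an FMA.
__device__ double valueY_separate(double a, double b, double c)
{
    return __dadd_rn(__dmul_rn(a, b), c);
}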

Calex
  • Automatic generation of FMAs is an optimization, and different compiler versions (as well as different optimization levels) may generate different numbers of FMAs for a given code. If numerical requirements mandate the use of an FMA somewhere, I would suggest coding it explicitly by using `fma()` or `fmaf()`. To disable automatic generation of FMAs, use compiler switch `-fmad=false`; this may negatively impact accuracy and performance. To prevent contraction of individual adds or multiplies into FMAs, they can be coded using device intrinsics: `__fadd_rn(), __fmul_rn(), __dadd_rn(), __dmul_rn()`. – njuffa Feb 17 '16 at 18:07