
I'm trying to build some CUDA code using GCC 6.2.1, the default compiler of my distribution. (Note: this is not a GCC version officially supported by CUDA, so you can call this experimental.) The same code builds fine with GCC 4.9.3 under both CUDA 7.5 and CUDA 8.0.

However, if I build the following close-to-minimal example:

#include <tuple>

int main() { return 0; }

with the command-line

nvcc -std=c++11 -Wno-deprecated-gpu-targets -o main main.cu

I get the following errors:

/usr/local/cuda/bin/../targets/x86_64-linux/include/math_functions.h(8897): error: cannot overload functions distinguished by return type alone

/usr/local/cuda/bin/../targets/x86_64-linux/include/math_functions.h(8901): error: cannot overload functions distinguished by return type alone

2 errors detected in the compilation of "/tmp/tmpxft_000071fe_00000000-9_b.cpp1.ii".

Why is that? How can I correct/circumvent this?

einpoklum
  • Can you post a repo case? I suspect that there is something going on in your code which is causing this to break, not anything in the CUDA headers themselves. – talonmies Oct 03 '16 at 10:53
  • @talonmies: reproducible example posted. – einpoklum Oct 03 '16 at 14:26
  • For better and/or worse, CUDA uses tight integration with the host toolchain, including making use of the host's `math.h`. As host toolchain vendors diddle (unnecessarily?) with this header file, their changes can trigger conflicts with CUDA's header file `math_functions.h`. This is one of the reasons that CUDA needs to be adjusted to, and validated with, specific host toolchain versions. Problems can be avoided entirely by sticking to the supported host toolchain versions enumerated in the "Getting Started" document for each OS platform. – njuffa Oct 03 '16 at 16:23
  • @njuffa: I understand, but sometimes the tight integration with a certain version does not mean subsequent versions are unusable, but rather that they require some tweaking. For this specific issue that's actually the case (although quite possibly not for all of the issues CUDA has with GCC 6.2.x) – einpoklum Oct 04 '16 at 08:11

1 Answer


TL;DR: Forget about it. Use CUDA 8.x only with GCC 5.x, and CUDA 9 or later with GCC 6.x.

It seems other people have seen this issue with GCC 6.1.x, and the suggestion is to add the following flags to nvcc: `-Xcompiler -D__CORRECT_ISO_CPP11_MATH_H_PROTO` (yes, two successive flags; see `nvcc --help` for details). However, I can't report complete success, since other issues pop up instead.
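Applied to the command line from the question, the suggested workaround invocation would look like this (a sketch based on the flags above; not guaranteed to get you all the way to a clean build):

```shell
# Forward -D__CORRECT_ISO_CPP11_MATH_H_PROTO to the host compiler via -Xcompiler,
# so glibc's math.h and CUDA's math_functions.h agree on the C++11 prototypes
nvcc -std=c++11 -Wno-deprecated-gpu-targets \
     -Xcompiler -D__CORRECT_ISO_CPP11_MATH_H_PROTO \
     -o main main.cu
```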

But remember that GCC 5.4.x is the latest supported version, and there is probably a good reason for that, so forcing GCC 6.x onto CUDA 8.x is somewhat of a wild goose chase - especially now that CUDA 9 is available.

einpoklum
  • Tried adding these flags to the `CMakeLists.txt` of `libethash-cuda` directory while compiling [Genoil's ethereum](https://github.com/Genoil/cpp-ethereum) project. But I still get the same error. GCC version is 6.3.0 20170516 – asgs Dec 05 '17 at 22:32
  • @asgs: CUDA 9 is your friend. – einpoklum Dec 05 '17 at 23:49
  • Sure, I'll try to grab it manually. Debian repos have only till 8, though – asgs Dec 06 '17 at 05:05
  • @asgs: You'll need to venture into the dangerous world of running the CUDA installer. Or - have a mixed set of apt sources with debian sid sources on lower priority which you could force for the case of CUDA 9. – einpoklum Dec 06 '17 at 08:18
  • @JoeyMallone: Consider removing these comments and asking+answering a question regarding using CUDA with clang, or if this already exists - replacing it with a comment linking to the clang-related question. – einpoklum Aug 03 '18 at 07:50