
I have installed the CUDA toolkit on Windows 7 and have run CUDA code using VS 2017 successfully. Now, I want to configure Dev-C++ on Windows to run my CUDA code.

  • You can't. You must use the Microsoft toolchain with CUDA on Windows – talonmies Dec 04 '19 at 11:45
  • Even if you find a way to work around the toolchain interlocks as is being suggested in the answer, you should be aware that toolchain integration is not a merely mechanical process. These interlocks exist because the host toolchain and device toolchain must agree on a number of important **behaviors**, some of which are covered in the CUDA programming guide. There is no design intent by NVIDIA to support other host compilers on Windows (besides MSFT `cl.exe`), and any attempt to work around the mechanics to do so means you are in untested and unsupported territory. – Robert Crovella Dec 05 '19 at 16:21
  • Thanks for the guidance, @talonmies and Robert. – Mohammad K Fallah Dec 07 '19 at 16:55
  • @RobertCrovella NVRTC, as originally suggested in my answer, can be used independent of the host toolchain *by design*. However, I agree that my answer, in its original version, did make the issue of toolchain integration sound a bit more simple of a problem than it actually is. I have expanded my answer with more details to hopefully cover these points… – Michael Kenzel Dec 09 '19 at 16:40
  • The use of NVRTC doesn't negate in any way the requirement for it to be used with a supported host toolchain. The NVRTC documentation itself states that it is part of the CUDA toolkit, and therefore the requirements for the proper use of the CUDA toolkit apply to NVRTC as well. For example, if you had a host toolchain that interpreted `int` on Linux as a 64-bit quantity (totally legal from a language perspective), that would break if any `int` parameters were passed as part of a NVRTC kernel call, because CUDA device code on Linux (NVRTC or not) interprets `int` as a 32-bit quantity. – Robert Crovella Jan 10 '20 at 18:03

1 Answer


Dev-C++ does not seem to be in active development anymore, and it appears to have been that way for quite a long while [1]. Development of the original Dev-C++ seems to have ceased around 2005. There have been two forks since then. The latest version of any fork I can find is from April 2015, which is getting close to 5 years ago at this point…

I'm not a user of Dev-C++, nor have I ever been. However, given how outdated the software appears to be, I can't help but recommend using something else. There is certainly no official support for CUDA with Dev-C++, nor has there ever been as far as I'm aware. On Windows, the CUDA toolkit officially supports the Visual Studio IDE, which is what I'd recommend using.

All that being said, it does seem possible to use a custom Makefile to build your project with Dev-C++. Thus, you could, in theory, manually add targets for building CUDA code to such a Makefile and go with that (a sketch follows below). However, the official CUDA toolchain is designed to interoperate with the MSVC toolchain; it is not designed to interoperate with other toolchains on Windows. Thus, nvcc requires a compatible version of the Microsoft Visual C++ compiler to use as a host compiler on Windows, and will, in fact, refuse to run without an acceptable version of MSVC being available.
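To illustrate, here is a minimal, untested sketch of what such a Makefile might look like. The MSVC installation path is a placeholder you would have to adjust for your machine; nvcc's documented `-ccbin` option tells it where to find `cl.exe`:

```makefile
# Hypothetical Makefile for Dev-C++'s custom-Makefile option.
# The MSVC path below is a placeholder; adjust it to your installation.
NVCC      := nvcc
CCBIN     := "C:/Program Files (x86)/Microsoft Visual Studio/2017/Community/VC/Tools/MSVC/14.16.27023/bin/Hostx64/x64"
NVCCFLAGS := -ccbin $(CCBIN) -arch=sm_50 -O2

# Build the executable from the CUDA source; nvcc splits the work
# between the device compiler and cl.exe as the host compiler.
app.exe: kernel.cu
	$(NVCC) $(NVCCFLAGS) -o $@ $^

clean:
	del app.exe
```

Note that even with this in place, MSVC still has to be installed; the Makefile merely hides the fact that Dev-C++'s own GCC toolchain is not involved in the CUDA build at all.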

There are a number of issues, such as ABI compatibility, that you have to be aware of when considering a host toolchain other than an officially supported one. nvcc and friends go to great lengths to keep things like the sizes of basic data types and object layout consistent with whatever supported host compiler is used; host and device code must agree on what a declaration means in order to share data structures between CPU and GPU. nvcc in particular also generates a lot of code under the hood to, e.g., deal with CUDA modules, access device variables and kernels through undecorated names, and instantiate and call kernel templates. Using the runtime API in the manner documented in the CUDA Programming Guide requires this kind of compiler support.
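As a concrete example, consider this minimal runtime-API program. The `<<<...>>>` launch syntax and the implicit registration of the `__global__` function are not plain C++; they only work because nvcc rewrites them into CUDA runtime calls behind the scenes, which is exactly the compiler support an arbitrary host toolchain lacks:

```cuda
// Minimal runtime-API sketch (error handling elided for brevity).
#include <cuda_runtime.h>

__global__ void scale(float* data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= factor;
}

int main()
{
    const int n = 256;
    float* d;
    cudaMalloc(&d, n * sizeof(float));

    // nvcc translates this launch into runtime-API calls; a plain
    // host compiler has no idea what to do with this syntax.
    scale<<<(n + 127) / 128, 128>>>(d, 2.0f, n);

    cudaDeviceSynchronize();
    cudaFree(d);
    return 0;
}
```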

An alternative would be to use the driver API and NVRTC to compile your code at run time. This route requires only libraries from the CUDA toolkit and no particular toolchain integration. When doing so, however, you have to be aware of, and specifically craft your host code to interoperate with, NVRTC-compiled device code according to the ABI that NVRTC observes. Also, note that NVRTC is a library that has to be called from a program. So if you want to compile CUDA code at build time using this approach, you will have to write your own command-line tool and build integration…
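A minimal sketch of this route might look like the following (error handling elided throughout). The kernel is declared `extern "C"` so it can be looked up by its unmangled name; since nvcc is not involved, the host side is ordinary C++ linked against the `cuda` and `nvrtc` libraries:

```cuda
// NVRTC + driver API sketch: compile device code at run time,
// then load and launch it through the driver API.
#include <vector>
#include <cuda.h>
#include <nvrtc.h>

const char* kSource = R"(
extern "C" __global__ void scale(float* data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
})";

int main()
{
    // compile CUDA C++ source to PTX at run time
    nvrtcProgram prog;
    nvrtcCreateProgram(&prog, kSource, "scale.cu", 0, nullptr, nullptr);
    nvrtcCompileProgram(prog, 0, nullptr);
    size_t ptxSize;
    nvrtcGetPTXSize(prog, &ptxSize);
    std::vector<char> ptx(ptxSize);
    nvrtcGetPTX(prog, ptx.data());
    nvrtcDestroyProgram(&prog);

    // load the PTX and launch the kernel via the driver API
    cuInit(0);
    CUdevice dev;
    cuDeviceGet(&dev, 0);
    CUcontext ctx;
    cuCtxCreate(&ctx, 0, dev);
    CUmodule mod;
    cuModuleLoadData(&mod, ptx.data());
    CUfunction fn;
    cuModuleGetFunction(&fn, mod, "scale");

    int n = 256;
    CUdeviceptr d;
    cuMemAlloc(&d, n * sizeof(float));
    float factor = 2.0f;
    void* args[] = { &d, &factor, &n };
    cuLaunchKernel(fn, (n + 127) / 128, 1, 1,   // grid dimensions
                   128, 1, 1,                   // block dimensions
                   0, nullptr, args, nullptr);
    cuCtxSynchronize();

    cuMemFree(d);
    cuModuleUnload(mod);
    cuCtxDestroy(ctx);
    return 0;
}
```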

Another option would be to use clang as an open-source alternative to nvcc. clang can compile both host and device code and takes care of ABI issues and code generation in ways similar to nvcc (the main difference being that clang does not require a separate host compiler). The programming interface is basically the same as that offered by nvcc (sans a few minor details). More information here. This would most likely be the simplest to integrate into Dev-C++. Since clang is not an officially supported toolchain, it will generally lag behind a bit on features. It seems to work just fine for most things nowadays, however.
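As a rough idea, a clang invocation for a CUDA source file looks something like the following, in the style of the LLVM CUDA documentation. The GPU architecture and the CUDA library path are placeholders you would have to adjust, and linking details on Windows may differ:

```sh
# Hypothetical invocation; adjust the GPU architecture and CUDA path.
clang++ -x cuda kernel.cu -o app.exe \
    --cuda-gpu-arch=sm_50 \
    -L"%CUDA_PATH%/lib/x64" -lcudart
```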

Michael Kenzel