
When I try to run Python code that uses CUDA, I come across the following error:

pycuda.driver.CompileError: nvcc compilation of C:\Users\user\AppData\Local\Temp\tmplh36ro6y\kernel.cu failed
[command: nvcc --cubin -arch sm_75 -m64 -IC:\Users\user\Documents\deep-master\Deep\cubic -Ic:\users\user\anaconda3\envs\deep-master1\lib\site-packages\pycuda\cuda kernel.cu]
[stdout:
C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30037\include\vcruntime.h(197): error: invalid redeclaration of type name "size_t"

C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30037\include\vcruntime_new.h(48): error: first parameter of allocation function must be of type "size_t"

C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30037\include\vcruntime_new.h(53): error: first parameter of allocation function must be of type "size_t"

C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30037\include\vcruntime_new.h(59): error: first parameter of allocation function must be of type "size_t"

...

C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.1\include\sm_32_intrinsics.hpp(120): error: asm operand type size(8) does not match type/size implied by constraint 'r'

C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.1\include\sm_32_intrinsics.hpp(122): error: asm operand type size(8) does not match type/size implied by constraint 'r'

C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.1\include\sm_32_intrinsics.hpp(123): error: asm operand type size(8) does not match type/size implied by constraint 'r'

C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.1\include\sm_32_intrinsics.hpp(124): error: asm operand type size(8) does not match type/size implied by constraint 'r'

Error limit reached.
100 errors detected in the compilation of "kernel.cu".
Compilation terminated.
kernel.cu
]

Environment setup:
Python: 3.6.13
PyCuda: 2020.1
CUDA Toolkit: 10.1/ 11.1
MSVC: 2019

I installed PyCUDA from this link using pycuda-2020.1+cuda101-cp36-cp36m-win_amd64.whl, and I also installed both CUDA Toolkit 10.1 and 11.1. I set either 10.1 or 11.1 (one at a time) in the system path:
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1\bin
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1\libnvvp
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1\lib\x64
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1\extras\CUPTI\lib64
Either set of settings gives the same errors. (A quick check of which tools actually get picked up from PATH is sketched below.)
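To double-check which `nvcc` and `cl` Windows actually resolves from PATH (and whether `cl` is the 32-bit or 64-bit build), I run a small diagnostic sketch like the one below. It is not part of the failing script, just a check, and it assumes both tools are on PATH:

```python
import shutil
import subprocess

# Show which executables Windows resolves from PATH.
for tool in ("nvcc", "cl"):
    print(tool, "->", shutil.which(tool))

# Print the compiler banners. nvcc reports the CUDA release (10.1 vs 11.1),
# and cl's banner ends with "for x86" or "for x64", which indicates whether
# the 32-bit or 64-bit host compiler would be used by nvcc.
subprocess.run(["nvcc", "--version"])
subprocess.run(["cl"])
```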

I don't quite understand this error. What is the purpose of kernel.cu, and why does it need to be compiled? Has anyone come across similar errors, or does anyone have any hints? Much appreciated! Thanks a lot.

ryanf
  • PyCUDA drops the CUDA C++ code you write (or that it generates) into a file, which is then compiled; that is what kernel.cu is (see the sketch after these comments). And this error is probably being caused by your installation trying to use a 32-bit version of the host compiler – talonmies Jan 03 '22 at 22:57
  • @talonmies I see, thanks for your explanation. Does "host compiler" mean MSVC 2019? I tried to configure MSVC for 64-bit as mentioned here: [link](http://www.gaia-gis.it/gaia-sins/msvc_how_to.html). Before the modification, `cl` reported `Microsoft (R) C/C++ Optimizing Compiler Version 19.29.30040 for x86`, and now it reports `Microsoft (R) C/C++ Optimizing Compiler Version 19.29.30040 for x64`, but I still get the same error. Is this the right way to change the compiler to 64-bit? Thanks a lot! :) – ryanf Jan 13 '22 at 13:53
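
For context on what PyCUDA does with kernel.cu, here is a minimal sketch of the compile-on-the-fly workflow (the kernel below is illustrative, not the one from the failing script): calling `SourceModule` writes the CUDA C++ source to a temporary kernel.cu and invokes nvcc on it, which is the step that fails above.

```python
import numpy as np
import pycuda.autoinit            # initializes the CUDA context
import pycuda.driver as drv
from pycuda.compiler import SourceModule

# SourceModule writes this CUDA C++ source to a temporary kernel.cu
# and runs nvcc on it -- this is where the CompileError is raised.
mod = SourceModule("""
__global__ void double_values(float *a)
{
    int idx = threadIdx.x + blockIdx.x * blockDim.x;
    a[idx] *= 2.0f;
}
""")

double_values = mod.get_function("double_values")

a = np.random.randn(256).astype(np.float32)
double_values(drv.InOut(a), block=(256, 1, 1), grid=(1, 1))
print(a)
```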

0 Answers