I'm trying to run a small test on the GPU of a remote machine. The code is:
import torch
foo = torch.tensor([1,2,3])
foo = foo.to('cuda')
I'm getting the following error:
Traceback (most recent call last):
  File "/remote/blade/test.py", line 3, in <module>
    foo = foo.to('cuda')
RuntimeError: CUDA error: out of memory
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
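Since the failure happens on the very first transfer to the device, I also want to rule out the GPU being invisible to PyTorch or already full (the machine is shared, so nvidia-smi may show other users' processes holding memory). A quick sanity check like the following, using only standard torch.cuda calls, should show what PyTorch actually sees:

import torch

print('cuda available:', torch.cuda.is_available())
print('device count  :', torch.cuda.device_count())
for i in range(torch.cuda.device_count()):
    # name of each GPU visible to this process
    print(i, torch.cuda.get_device_name(i))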
From this discussion, it seems that a mismatch between the CUDA and PyTorch versions may be the cause of the error. I ran the following to get the versions:
import sys
import torch

print('python v. : ', sys.version)
print('pytorch v. :', torch.__version__)
print('cuda v. :', torch.version.cuda)
which prints:
python v. : 3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0]
pytorch v. : 1.11.0.dev20211206
cuda v. : 10.2
Does anything here look off?
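Separately, one workaround I'm considering, in case the default GPU is simply occupied by other users: pinning the process to a different device via CUDA_VISIBLE_DEVICES before CUDA is initialised. The device index '1' below is just a placeholder for whichever GPU happens to be free:

import os
# Must be set before the first CUDA call in the process;
# '1' is a placeholder index, not a recommendation.
os.environ['CUDA_VISIBLE_DEVICES'] = '1'

import torch
foo = torch.tensor([1, 2, 3])
foo = foo.to('cuda')  # 'cuda' now maps to physical GPU 1

But I'd rather understand the root cause than work around it.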