
Is there a way to reliably enable CUDA on the whole model?


I want to run the training on my GPU. I read on some forums that I need to apply `.cuda()` to anything I want to use CUDA with, and I've applied it to everything I could without making the program crash. Surprisingly, this makes the training even slower.

Then I found that you can call `torch.set_default_tensor_type('torch.cuda.FloatTensor')` to use CUDA. With both enabled, nothing changes. What is happening?

Mateen Ulhaq
G. Ramistella
    Possible duplicate of [If I'm not specifying to use CPU/GPU, which one is my script using?](https://stackoverflow.com/questions/50495053/if-im-not-specifying-to-use-cpu-gpu-which-one-is-my-script-using) – Omegastick Jun 21 '18 at 08:22
  • `MyModel()` is presumably just an example variable name for the model being used in the code. – iacob Mar 13 '21 at 15:15
    Does this answer your question? [How to run PyTorch on GPU by default?](https://stackoverflow.com/questions/43806326/how-to-run-pytorch-on-gpu-by-default) – iacob Mar 13 '21 at 19:20

2 Answers


You can use the `tensor.to(device)` method to move a tensor to a device.

The `.to()` method is also used to move a whole model to a device, as in the post you linked to.

Another possibility is to set the device of a tensor during creation using the `device=` keyword argument, as in `t = torch.tensor(some_list, device=device)`.

To set the device dynamically in your code, you can use

`device = torch.device("cuda" if torch.cuda.is_available() else "cpu")`

to set CUDA as your device if it is available.

There are various code examples in the PyTorch Tutorials and in the documentation linked above that could help you.
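Putting the pieces above together, a minimal sketch looks like this (the single linear layer is a hypothetical example model; any `nn.Module` works the same way):

```python
import torch
import torch.nn as nn

# Pick the GPU if one is available, otherwise fall back to the CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Create a tensor directly on the chosen device
t = torch.tensor([1.0, 2.0, 3.0], device=device)

# Move an existing model to the same device
model = nn.Linear(3, 1)  # hypothetical example model
model = model.to(device)

# The forward pass now runs on `device`
output = model(t)
```

Because `device` is chosen at runtime, the same script runs unchanged on machines with and without a GPU.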

ted
M. Deckers
    When calling `tensor.to(device)`, for the `device` argument you can use 'cpu', 'cuda', 'cuda:0', 'cuda:1', etc. 'cuda' and 'cuda:0' mean the same thing in most circumstances. Click on the PyTorch tab within [Section 5.6.1](https://d2l.ai/chapter_deep-learning-computation/use-gpu.html#computing-devices) of d2l.ai for more details. – Josiah Yoder May 08 '21 at 15:38
    You can check if a tensor is located on the GPU by printing `tensor.device`. – Josiah Yoder May 08 '21 at 15:40

With both enabled, nothing changes.

That is because you have already moved every tensor to the GPU.

Is there a way to reliably enable CUDA on the whole model?

model.to('cuda')

I've applied it to everything I could

You only need to apply it to tensors the model will be interacting with, generally:

  • the model's parameters: `model.to('cuda')`
  • the feature data: `features = features.to('cuda')`
  • the target data: `targets = targets.to('cuda')`
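In a training loop, that typically looks like the following sketch (the linear model, loss, and random batch here are hypothetical placeholders):

```python
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Linear(4, 2).to(device)  # model's parameters on the device
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# hypothetical batch of features and targets
features = torch.randn(8, 4)
targets = torch.randn(8, 2)

for epoch in range(3):
    # move each batch to the same device as the model
    features_d = features.to(device)
    targets_d = targets.to(device)

    optimizer.zero_grad()
    loss = criterion(model(features_d), targets_d)
    loss.backward()
    optimizer.step()
```

Anything else (e.g. the optimizer or the loss function object) does not need `.to()`; only the tensors that flow through the model must live on the same device as its parameters.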
iacob