14

In PyTorch, if I don't specify anything about using the CPU/GPU, and my machine supports CUDA (torch.cuda.is_available() == True):

  1. What is my script using, CPU or GPU?
  2. If CPU, what should I do to make it run on GPU? Do I need to rewrite everything?
  3. If GPU, will this script crash if torch.cuda.is_available() == False?
  4. Does this do anything about making the training faster?
  5. I'm aware of Porting PyTorch code from CPU to GPU but this is old. Does this situation change in v0.4 or the upcoming v1.0?
xxbidiao
  • Does this answer your question? [How to run PyTorch on GPU by default?](https://stackoverflow.com/questions/43806326/how-to-run-pytorch-on-gpu-by-default) – iacob Mar 13 '21 at 19:20
  • I think I was asking quite a different question, regarding implicit behavior in `torch` (which can roughly be described as "what would happen if no explicit call to `torch.set_default_tensor_type()` is done"), but I'm not even sure whether this API exists in <0.4. – xxbidiao Mar 14 '21 at 01:38

6 Answers

7

PyTorch defaults to the CPU, unless you use the .cuda() methods on your models and the torch.cuda.XTensor variants of PyTorch's tensors.
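
For instance, a minimal sketch of both (the linear layer and shapes here are just placeholders):

import torch
import torch.nn as nn

x = torch.FloatTensor(2, 3)   # CPU tensor (uninitialised)
model = nn.Linear(3, 1)       # parameters live on the CPU

if torch.cuda.is_available():
    x = x.cuda()              # now a torch.cuda.FloatTensor
    model.cuda()              # moves the parameters in place

y = model(x)                  # runs on whichever device both live on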

Omegastick
7

My way is like this (below PyTorch 0.4):

dtype = torch.cuda.FloatTensor if torch.cuda.is_available() else torch.FloatTensor
torch.zeros(2, 2).type(dtype)

UPDATE PyTorch 0.4:

use_cuda = torch.cuda.is_available()
device = torch.device("cuda" if use_cuda else "cpu")
model = MyRNN().to(device)

from PyTorch 0.4.0 Migration Guide.
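
Input batches have to be moved to the same device as the model. A short sketch in the spirit of the migration guide (train_loader is assumed to be an existing DataLoader):

for data, target in train_loader:
    data, target = data.to(device), target.to(device)   # move each batch to the chosen device
    output = model(data)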

Ria
6

1. What is my script using, CPU or GPU?

The "script" does not have any device alegiance. Where computations are done (CPU or GPU) depends on the specific tensor being operated on. Hence it depends on how the tensor was created.

However, the default device for tensors created with the torch.tensor function is the CPU:

torch.FloatTensor()                   # CPU tensor
torch.cuda.FloatTensor()              # GPU tensor

torch.tensor([1, 2], device='cpu')    # CPU tensor
torch.tensor([1, 2], device='cuda')   # GPU tensor

torch.tensor([1, 2])                  # CPU tensor  <--
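
Either way, you can confirm where a tensor lives via its .device attribute (the second line assumes a CUDA device is present):

torch.tensor([1, 2]).device                  # device(type='cpu')
torch.tensor([1, 2], device='cuda').device   # device(type='cuda', index=0)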

2. If CPU, what should I do to make it run on GPU?

You can change the default type of each newly created torch.tensor with:

# Approach 1
torch.set_default_tensor_type('torch.cuda.FloatTensor')

Or you can manually copy each tensor to the GPU:

# Approach 2
device = "cuda" if torch.cuda.is_availble() else "cpu"

my_tensor = my_tensor.to(device)
my_model.to(device) # Operates in place for model parameters
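
Note that for tensors .to() is not in-place: it returns a copy that has to be reassigned (a quick illustration; the names are arbitrary):

t = torch.zeros(3)
t.to(device)        # returns a moved copy; t itself is unchanged
t = t.to(device)    # reassign to actually use the moved tensor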

3. If GPU, will this script crash if torch.cuda.is_available() == False?

Yes, in Approach 1 the script would crash with the following error:

RuntimeError: No CUDA GPUs are available

In Approach 2 it will simply fall back to the CPU.


4. Does this do anything about making the training faster?

That depends. For most common PyTorch neural-net training scenarios, yes, moving to the GPU will speed up training.
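
As a rough illustration, a hedged micro-benchmark sketch (the sizes are arbitrary and the absolute numbers depend entirely on your hardware):

import time
import torch

a = torch.randn(4096, 4096)
start = time.time()
a @ a                                  # large matmul on the CPU
print("cpu:", time.time() - start)

if torch.cuda.is_available():
    b = a.cuda()
    b @ b                              # warm-up, excludes CUDA start-up cost
    torch.cuda.synchronize()
    start = time.time()
    b @ b
    torch.cuda.synchronize()           # GPU kernels are asynchronous; wait for them
    print("gpu:", time.time() - start)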


5. I'm aware of Porting PyTorch code from CPU to GPU but this is old. Does this situation change in v0.4 or the upcoming v1.0?

There are a number of ways to port code from CPU to GPU:

# Syntax 1
my_tensor = my_tensor.cuda()

# Syntax 2
device = 'cuda' if torch.cuda.is_available() else 'cpu'
my_tensor = my_tensor.to(device)

Syntax 2 is often preferred because it allows switching between CPU and GPU by changing a single variable.
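
With Syntax 2 the same device string can also be passed at creation time, so new tensors land on the right device without an extra copy (a small sketch; the shapes are made up):

device = 'cuda' if torch.cuda.is_available() else 'cpu'

weights = torch.randn(10, 10, device=device)   # created directly on the chosen device
bias = torch.zeros(10, device=device)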

iacob
  • Changing the default type (approach #1) was a life saver for me; I was struggling for hours to find a solution that adds '.cuda()' to the torch.tensor depending on gpu/cpu. Thank you very much!! – TefoD Jun 29 '21 at 08:40
2

You should write your code so that it will use GPU processing if torch.cuda.is_available() == True:

if torch.cuda.is_available():
    model.cuda()
else:
    pass  # do nothing; run on the CPU
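
The inputs then have to follow the model onto the GPU as well; for example (a minimal sketch, the layer and shapes are placeholders):

import torch
import torch.nn as nn

model = nn.Linear(10, 2)
x = torch.randn(4, 10)

if torch.cuda.is_available():
    model.cuda()
    x = x.cuda()     # inputs must be on the same device as the model

out = model(x)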
takethelongsh0t
0

It will use the default one. You can change the default GPU to GPU 1 using the following code before creating your model:

import torch as th
th.cuda.set_device(1)
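
For example (a short sketch, assuming the machine actually has at least two GPUs):

import torch as th

th.cuda.set_device(1)

x = th.zeros(2, 2).cuda()   # a bare .cuda() now lands on cuda:1 instead of cuda:0
print(x.device)             # device(type='cuda', index=1)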
devil in the detail
  • Is 0 shorthand for `cpu` in this case? What about `cuda:0` and `cuda:1` if I have more than 1 GPU? – xxbidiao Mar 15 '21 at 05:20
0

What is my script using, CPU or GPU?

By default all tensors are allocated on the CPU.


If CPU, what should I do to make it run on GPU? Do I need to rewrite everything?

You can use set_default_device to change the default device.

For Nvidia GPU:

torch.set_default_device('cuda')

If you have multiple GPUs, you can select a specific one:

torch.set_default_device('cuda:1')

For CPU use 'cpu':

torch.set_default_device('cpu')

You can check the default device by creating a simple tensor and reading its device attribute: torch.tensor([1.2, 3.4]).device returns device(type='cuda', index=0) when the first GPU is selected.


You can also wrap your code in a device context:

with torch.device('cuda:0'):
    t = torch.tensor([1.2, 3.4])

In this case the default is only changed for the wrapped code.


If GPU, will this script crash if torch.cuda.is_available() == False?

torch.set_default_device and torch.device allow selecting a non-existent device, but creating tensors will then fail with an exception.
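
For example (a hedged sketch; the device index is made up and assumed not to exist on the machine):

import torch

torch.set_default_device('cuda:7')     # accepted even if no such GPU exists

try:
    t = torch.tensor([1.2, 3.4])       # the failure only surfaces here
except Exception as e:                 # the exact exception type depends on the setup
    print(e)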

Simon