1. What is my script using, CPU or GPU?
The "script" itself has no device allegiance. Where a computation runs (CPU or GPU) depends on the specific tensor being operated on, which in turn depends on how that tensor was created.
However, the default device for tensors created with torch.tensor is 'cpu':
torch.FloatTensor() # CPU tensor
torch.cuda.FloatTensor() # GPU tensor
torch.tensor(device='cpu') # CPU tensor
torch.tensor(device='cuda') # GPU tensor
torch.tensor([1,2]) # CPU tensor <-- the default
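To see which device a given tensor lives on, you can inspect its device attribute (or the is_cuda flag); a minimal sketch:

```python
import torch

t = torch.tensor([1, 2])
print(t.device)   # device(type='cpu') for a default-created tensor
print(t.is_cuda)  # False unless the tensor was placed on a GPU
```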
2. If CPU, what should I do to make it run on GPU?
You can change the default type for newly created tensors with:
# Approach 1
torch.set_default_tensor_type('torch.cuda.FloatTensor')
Or you can manually copy each tensor to the GPU:
# Approach 2
device = "cuda" if torch.cuda.is_available() else "cpu"
my_tensor = my_tensor.to(device)
my_model.to(device) # Operates in place for model parameters
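Putting Approach 2 together, both the model and its input tensors need to end up on the same device. A short sketch with a toy nn.Linear standing in for your network (the names model and x are illustrative):

```python
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"

model = nn.Linear(4, 2)           # toy model standing in for your network
model.to(device)                  # moves parameters in place

x = torch.randn(8, 4).to(device)  # input tensors must be moved explicitly
out = model(x)
print(out.device)                 # output lives on the same device as the model
```

Note that .to() returns a new tensor for plain tensors, so you must reassign (x = x.to(device)), whereas for modules it moves the parameters in place.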
3. If GPU, will this script crash if torch.cuda.is_available() == False?
Yes, in Approach 1 the script would crash with the following error:
RuntimeError: No CUDA GPUs are available
In Approach 2 it will simply fall back to the CPU.
4. Does this do anything about making the training faster?
That depends. For most common PyTorch neural-network training workloads, yes: moving computation to the GPU improves speed.
5. I'm aware of Porting PyTorch code from CPU to GPU but this is old. Does this situation change in v0.4 or the upcoming v1.0?
There are a number of ways to port code from CPU to GPU:
# Syntax 1
my_tensor = my_tensor.cuda()
# Syntax 2
device = 'cuda' if torch.cuda.is_available() else 'cpu'
my_tensor = my_tensor.to(device)
Syntax 2 is often preferred because it lets you switch between CPU and GPU by changing a single variable, and it degrades gracefully on machines without CUDA.