
Often, I code on my laptop, which is not equipped with a GPU (a MacBook, if it makes a difference). The files are then transferred to a server with a GPU. I just want to perform a sanity check on my code before running it on the server, in order to avoid errors related to tensors being on different devices. I am looking for a GPU emulator, which takes in some tensors and outputs some other random tensors.

iacob
Arman
  • Thanks, I found a simulator named GPGPU-Sim, which apparently is going to solve my problem. I have not tried it yet; I will try it and update if it is successful. – Arman Mar 22 '21 at 18:38

1 Answer


Just add .to('cuda:0') to your model (which inherits from nn.Module) and to any tensor you create within the forward/backward pass.

Note that cuda:0 refers to the GPU at index 0.
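For instance, a minimal sketch (the model and tensor shapes here are only illustrative, and the CPU fallback is added so the same script also runs on a machine without a GPU):

```python
import torch
import torch.nn as nn

# Pick the target device once; fall back to CPU when no GPU is present.
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')

model = nn.Linear(4, 2).to(device)   # moves the module's parameters
x = torch.randn(8, 4).to(device)     # move input tensors the same way
y = model(x)                         # inputs and parameters are now on one device
```

Moving both the module and every input through the same device variable is what prevents the "expected all tensors to be on the same device" class of errors.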

Moreover, I like to define a hyperparameter dictionary to pass to the model. One can easily set hparams['device'] = 'cpu'/'cuda:0'/'cuda:1' in the dict, and when the model is initialized, the property self.device = hparams['device'] is set, so that any tensor/module in the model can be migrated to the configured device by adding .to(self.device).
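A sketch of that pattern (the class name Net and the hparams keys are illustrative, not a fixed API):

```python
import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self, hparams):
        super().__init__()
        self.device = hparams['device']   # e.g. 'cpu', 'cuda:0', 'cuda:1'
        self.fc = nn.Linear(4, 2)
        self.to(self.device)              # migrate all registered parameters

    def forward(self, x):
        # Any tensor created inside forward goes to the configured device too.
        noise = torch.randn_like(x).to(self.device)
        return self.fc(x + noise)

hparams = {'device': 'cuda:0' if torch.cuda.is_available() else 'cpu'}
net = Net(hparams)
out = net(torch.randn(3, 4).to(hparams['device']))
```

Because the device is read from one place, switching the whole model between CPU and any GPU is a one-line config change.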

namespace-Pt
  • Thanks, that is what I normally do. Unfortunately, there are too many details, and some might be forgotten. I need a way to check before running the code. – Arman Mar 22 '21 at 17:24