  1. Is there a way to allocate the remaining memory on each GPU to my task?
  2. Can I split my task across multiple GPUs?

nvidia-smi output, for reference:

su19as
1 Answer

  1. Yes. PyTorch can use whatever GPU memory remains, provided enough is free for your task. You only need to specify which GPUs to use: https://stackoverflow.com/a/39661999/10702372
  2. Yes. Multi-GPU parallelism is implemented with PyTorch's DistributedDataParallel.
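For point 1, a minimal sketch of picking a specific device, assuming PyTorch is installed; the `cuda:1` index is illustrative, and the code falls back to CPU so it runs anywhere:

```python
import torch

# Prefer the second GPU if one exists, otherwise the first GPU,
# otherwise the CPU. Any tensors/modules moved to this device will
# allocate from that GPU's remaining free memory.
if torch.cuda.device_count() > 1:
    device = torch.device("cuda:1")
elif torch.cuda.is_available():
    device = torch.device("cuda:0")
else:
    device = torch.device("cpu")

model = torch.nn.Linear(4, 2).to(device)
x = torch.randn(8, 4, device=device)
out = model(x)
print(out.shape)  # torch.Size([8, 2])
```

Alternatively, setting `CUDA_VISIBLE_DEVICES=1` in the environment before launching Python hides all other GPUs from the process, as the linked answer describes.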
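For point 2, a minimal single-process DistributedDataParallel sketch, assuming PyTorch is installed; it uses the CPU "gloo" backend so it runs without a GPU. In real multi-GPU training you would launch one process per GPU (e.g. with torchrun) and pass `device_ids=[local_rank]`:

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

# DDP requires an initialized process group; for a local
# single-process demo, point the rendezvous at localhost.
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")
dist.init_process_group(backend="gloo", rank=0, world_size=1)

# Wrapping the model in DDP synchronizes gradients across
# processes during backward(); with world_size=1 it is a no-op
# but exercises the same code path.
model = DDP(torch.nn.Linear(4, 2))
out = model(torch.randn(8, 4))
print(out.shape)  # torch.Size([8, 2])

dist.destroy_process_group()
```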
Jason Adhinarta