- Is there a way to allocate the remaining memory in each GPU for your task?
- Can I split my task across multiple GPUs?

su19as
1 Answer
- Yes. PyTorch will use whatever GPU memory remains, provided enough is free for your workload. You only need to specify which GPUs to use: https://stackoverflow.com/a/39661999/10702372
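A minimal sketch of restricting PyTorch to specific GPUs, assuming PyTorch is installed; the device IDs `0,2` are hypothetical and the code falls back to CPU when no GPU is visible:

```python
import os

# Restrict this process to GPUs 0 and 2 (hypothetical IDs).
# Must be set before CUDA is initialized, i.e. before importing torch.
os.environ["CUDA_VISIBLE_DEVICES"] = "0,2"

import torch

# "cuda:0" now refers to the first *visible* GPU (physical GPU 0 above);
# fall back to CPU if no GPU is available.
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
x = torch.randn(4, 4).to(device)
```

Tensors and models moved to `device` will then allocate only on the GPUs you exposed.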
- Yes. Multi-GPU parallelism is implemented with PyTorch's DistributedDataParallel, which replicates the model on each GPU and splits each batch across the replicas.
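A minimal single-process sketch of the DistributedDataParallel setup, assuming PyTorch with the gloo backend; a real multi-GPU run would launch one process per GPU (e.g. via `torchrun`) with the appropriate rank and world size, and the tiny `nn.Linear` model here is only for illustration:

```python
import os

import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

# Rendezvous info normally supplied by the launcher (e.g. torchrun).
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")

# World size 1 for demonstration; with N GPUs you would run N processes,
# each with its own rank, and use the "nccl" backend instead of "gloo".
dist.init_process_group("gloo", rank=0, world_size=1)

# Wrapping the model in DDP synchronizes gradients across processes.
model = DDP(nn.Linear(10, 2))
out = model(torch.randn(8, 10))

dist.destroy_process_group()
```

Each process trains on its own shard of the data; DDP all-reduces gradients during `backward()` so the replicas stay in sync.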

Jason Adhinarta