
How can I set a GPU memory limit in PyTorch?

When I start uWSGI with 2 workers, the two workers share all of the GPU memory between them.

For example, on a 4 GB GPU: the first worker uses 2 GB and the second worker uses 2 GB.

How can I make the first worker use only 1 GB and the second worker use only 1 GB?

edited by talonmies · asked by Frank Liao
    PyTorch doesn't use a memory manager like TensorFlow, so the only way to force it to reduce memory usage is to reduce the batch size, the model size or run parts of the model in the `with torch.no_grad():` context manager (if parts of your model are forward only), see also: https://stackoverflow.com/questions/49529372/force-gpu-memory-limit-in-pytorch – Jan Jun 22 '20 at 05:12
  • 1
    Does this answer your question? [Force GPU memory limit in PyTorch](https://stackoverflow.com/questions/49529372/force-gpu-memory-limit-in-pytorch) – Jan Jun 22 '20 at 05:13
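As the linked question notes, older PyTorch releases offered no hard cap, but PyTorch 1.8+ added `torch.cuda.set_per_process_memory_fraction`, which limits the caching allocator of the *current process* to a fraction of a device's total memory. A minimal sketch (assuming a 4 GB card and PyTorch ≥ 1.8; the 0.25 fraction and device index are illustrative) that each uWSGI worker would run once at startup:

```python
import torch

def limit_gpu_memory(fraction: float = 0.25, device: int = 0) -> None:
    """Cap this process's CUDA caching allocator at `fraction` of the
    device's total memory. Allocations beyond the cap raise a CUDA
    out-of-memory error instead of consuming the whole card.

    Available in PyTorch 1.8+; earlier versions have no equivalent,
    as the comments above point out.
    """
    if torch.cuda.is_available():
        # e.g. 0.25 of a 4 GB GPU ≈ 1 GB per worker process
        torch.cuda.set_per_process_memory_fraction(fraction, device=device)

limit_gpu_memory()
```

With uWSGI, each worker is a separate process, so calling this in the app's startup code (e.g. in a `@postfork` hook, so it runs after the workers are forked) gives every worker its own 1 GB cap. Note this bounds only PyTorch's allocator; the CUDA context itself still consumes a few hundred MB per process.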

0 Answers