2

I am using Keras 2.0.8 with TensorFlow 1.3.0 on Windows 10. Do you know why it is not using all of the GPU memory, or at least close to it?

Output when I start running a process:

Found device 0 with properties:
name: GeForce GTX 1060
major: 6 minor: 1 memoryClockRate (GHz) 1.6705 pciBusID 0000:01:00.0
Total memory: 6.00GiB
Free memory: 4.96GiB

In this example (when it crashes because of OOM), the log shows that the limit is about 5 GB instead of 6 GB. Why?

Limit: 5016036966
InUse: 5008119296
MaxInUse: 5015917568
NumAllocs: 329
MaxAllocSize: 3879002624
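
For scale, a quick conversion of those byte counts to GiB (1 GiB = 2^30 bytes) shows the limit is much closer to the reported 4.96 GiB of free memory than to the 6 GiB total:

```python
# Convert the allocator's byte counts to GiB (1 GiB = 2**30 bytes).
limit_bytes = 5016036966
in_use_bytes = 5008119296

print(limit_bytes / 2**30)   # ~4.67 GiB -- close to the 4.96 GiB reported as free
print(in_use_bytes / 2**30)  # ~4.66 GiB -- the allocator is essentially full
```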

Thanks for your help!

P.S.: These are the nvidia-smi results while nothing is running and while it is running.

– HectorAnadon

2 Answers

-1

Because it doesn't need to use all the memory. Your data is kept in your system RAM, and every batch is copied to GPU memory. Therefore, increasing your batch size will increase the GPU memory usage. In addition, your model size will affect how much GPU memory TensorFlow uses.
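
As a rough illustration of the batch-size effect, here is a back-of-the-envelope sketch, not an exact accounting (the input shape and batch sizes are made up, and real usage also includes weights, activations, gradients, and framework overhead):

```python
import numpy as np

# Hypothetical input shape and batch sizes, just to show the scaling;
# only the float32 input tensor itself is counted here.
input_shape = (224, 224, 3)   # e.g. an RGB image
bytes_per_float32 = 4

for batch_size in (32, 64, 128):
    batch_bytes = batch_size * np.prod(input_shape) * bytes_per_float32
    print(batch_size, round(batch_bytes / 2**20, 1), "MiB for the input batch alone")
```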

Not allocating all GPU memory is actually quite handy if, for example, you want to run multiple TensorFlow sessions at the same time. However, you can also set the fraction of GPU memory used by a TensorFlow session. For information on a fixed GPU memory fraction or dynamic memory usage, check this question.
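
A minimal sketch of both options for Keras 2.0.x on the TensorFlow 1.x backend (the fraction value is just an example):

```python
import tensorflow as tf
from keras import backend as K

config = tf.ConfigProto()

# Option 1: cap this process at a fixed share of the GPU's memory.
config.gpu_options.per_process_gpu_memory_fraction = 0.5  # example value

# Option 2 (alternative): grow the allocation on demand instead of pre-allocating.
# config.gpu_options.allow_growth = True

# Hand the configured session to Keras before building or training any model.
K.set_session(tf.Session(config=config))
```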

If you want to know why your GPU isn't using 100% of its computing power, check this question.

– Wilmar van Ommeren
  • TensorFlow allocates all the GPU memory even if it is not needed, in order to make computation faster. https://groups.google.com/forum/#!topic/keras-users/MFUEY9P1sc8. Even if I run a small network, it always allocates the same 4.96 GB; why not more? – HectorAnadon Sep 05 '17 at 14:22
  • Apparently TensorFlow doesn't allocate all memory in your case. I don't know why you want to allocate all the memory, as you won't gain any speed. But if you want to, just set `per_process_gpu_memory_fraction` to 1 in `tf.GPUOptions` (before you start the session). – Wilmar van Ommeren Sep 05 '17 at 14:49
  • This seems to be my problem: https://social.technet.microsoft.com/Forums/windows/en-US/15b9654e-5da7-45b7-93de-e8b63faef064/windows-10-does-not-let-cuda-applications-to-use-all-vram-on-especially-secondary-graphics-cards?forum=win10itprohardware Setting `per_process_gpu_memory_fraction` to 1 crashes; the limit is about 0.89, which allocates 5244 MiB out of 6144 MiB. – HectorAnadon Sep 06 '17 at 15:00
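
Following up on that comment's numbers, the usable share on such a Windows 10 / WDDM setup can be estimated from what nvidia-smi reports (the values below are the ones quoted in the comment, so they are machine-specific):

```python
# Values quoted in the comment above (machine-specific, not universal).
total_mib = 6144    # total VRAM reported by nvidia-smi
usable_mib = 5244   # what the process could actually allocate on Windows 10

fraction = usable_mib / total_mib
print(round(fraction, 2))  # ~0.85 -- so per_process_gpu_memory_fraction=1.0 overshoots and crashes
```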
-1

Please check whether you have installed the GPU version of Keras. Otherwise it will automatically detect the CPU and run on it. You can use the Anaconda platform for this: install the Keras GPU build with the command `conda install -c anaconda keras-gpu`.
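
If you want to verify which devices TensorFlow actually sees after installing, a quick check on TF 1.x is to list the local devices; a GPU build should report a GPU entry in addition to the CPU:

```python
from tensorflow.python.client import device_lib

# Prints one entry per device TensorFlow can use; a GPU build of
# TensorFlow/Keras should list a GPU device alongside the CPU.
for device in device_lib.list_local_devices():
    print(device.name, device.device_type)
```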

– 9113303