
I have a really small neural network:

_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
scores_input (InputLayer)    (None, 2)                 0         
_________________________________________________________________
dense1 (Dense)               (None, 1)                 2         
_________________________________________________________________
bn (BatchNormalization)      (None, 1)                 4         
_________________________________________________________________
sigmoid (Activation)         (None, 1)                 0         
=================================================================
Total params: 6
Trainable params: 4
Non-trainable params: 2

However, it takes about 1.3 GB of GPU memory, as shown by nvidia-smi:

Wed Aug 28 08:41:38 2019       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 384.130                Driver Version: 384.130                   |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 960     Off  | 00000000:01:00.0  On |                  N/A |
| 30%   42C    P2    29W / 120W |   1763MiB /  1988MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0      1258      G   /usr/lib/xorg/Xorg                           251MiB |
|    0      2336      G   compiz                                       231MiB |
|    0     25537      G   .../innereye/qtcreator-4.9.2/bin/qtcreator     2MiB |
|    0     30098      G   ...-token=DA8EE4CD7070EDEBCD3537BAAD982629    37MiB |
|    0     30436      C   python                                      1227MiB |
+-----------------------------------------------------------------------------+

I need to load another (larger) network, but the two don't fit in GPU memory together. Any help would be appreciated.

akshayks
    Please see the duplicate question link; this is something TensorFlow always does: by default it allocates all GPU memory and manages it itself. – Dr. Snoopy Aug 28 '19 at 06:18
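As the comment notes, TensorFlow grabs nearly all free GPU memory at session creation regardless of model size. A minimal sketch of the usual workaround, assuming the standalone Keras / TF 1.x-style session API is in use (the `tf.compat.v1` names also work under TF 2.x):

```python
# Sketch: make TensorFlow allocate GPU memory on demand instead of
# claiming (almost) all of it up front.
import tensorflow as tf

config = tf.compat.v1.ConfigProto()
config.gpu_options.allow_growth = True  # grow the allocation as needed
# Alternatively, cap the fraction of total GPU memory this process may use:
# config.gpu_options.per_process_gpu_memory_fraction = 0.3

session = tf.compat.v1.Session(config=config)
tf.compat.v1.keras.backend.set_session(session)  # make Keras use this session
```

With `allow_growth` the process still never releases memory back, but it only takes what the loaded models actually need, so a second network can fit alongside the first.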

0 Answers