
I am currently trying to use the VGG16 model from the Keras library, but whenever I create an instance of the model with

from keras.applications.vgg16 import VGG16
model = VGG16()

I get the following message 3 times.

tensorflow/core/framework/allocator.cc:124] Allocation of 449576960 exceeds 10% of system memory

Following this, my computer freezes. I am on a 64-bit machine with 4 GB of RAM running Linux Mint 18, and I have no access to a GPU.

Does this problem have something to do with my RAM?

As a temporary solution I am running my Python scripts from the command line, because my computer freezes less there than in any IDE. Also, this does not happen when I use an alternative model like InceptionV3.

I have tried the solution provided here, but it didn't work.

Any help is appreciated.

1 Answer


You are most likely running out of memory (RAM). Try running top (or htop) in parallel to watch your memory utilization.
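If you prefer to check from inside Python rather than in a separate terminal, a rough sketch using the third-party psutil package (my own suggestion, not something from your question) would be:

import psutil  # third-party package: pip install psutil

# snapshot of system memory; run this while your script is loading the model
mem = psutil.virtual_memory()
print("total RAM:     %.1f GB" % (mem.total / 1e9))
print("available RAM: %.1f GB" % (mem.available / 1e9))
print("used:          %.0f%%" % mem.percent)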

In general, VGG models are rather big and require a decent amount of RAM. That said, the actual requirement depends on batch size: a smaller batch means smaller activation tensors.

For example, a 6-image batch would consume about a gigabyte of RAM (reference). As a test, you could lower your batch size to 1 and see if that fits in your memory.
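As a minimal sketch of what that looks like in Keras (x and y below are hypothetical preprocessed input arrays, not something from your question):

from keras.applications.vgg16 import VGG16

model = VGG16()

# batch_size=1 keeps the per-step activation memory as small as possible
predictions = model.predict(x, batch_size=1)

# the same batch_size argument applies when training:
# model.fit(x, y, batch_size=1, epochs=1)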

Tautvydas
  • How can I reduce the batch size? I could not even make it to the training stage yet. – Aditya rawat Jul 06 '19 at 16:38
  • Yeah, then you're running out of memory. You can try closing all applications but that won't get you far. You need more memory. Either upgrade your setup or rent a box on the cloud. – Tautvydas Jul 07 '19 at 19:45
  • As a temporary solution I decided to train my model on Google Colab and download it later. It's ridiculously fast – Aditya rawat Jul 08 '19 at 14:30