I've written a network in Keras to find a certain attribute in 3D graphs. The graphs represent events in a particle physics detector, and the data specifies the coordinates of each hit (which basically corresponds to a certain amount of energy being deposited at a specific coordinate), along with the amount of energy deposited.

I first load my data from a .h5 file into a variable. I then have a function that loops over that data and crops each event to reduce the size of the input fed to the neural network (this process takes about 20 minutes). After that, I have the configuration of my network in Keras, and then I train it.

It trains fine, but if I change the network slightly to see if I can get a higher accuracy, it doesn't even begin to train and gives me a memory allocation error. If I re-run the step where I process (crop) the data and load it into a variable, then it trains fine again. Has anyone had this problem before and know how to solve it?
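A rough sketch of what the pipeline looks like (the file name, dataset names, and cropping logic here are just illustrative, not my actual code):

```python
import h5py
import numpy as np
from tensorflow import keras

# 1. Load the events from the .h5 file into a variable
#    (file and dataset names are placeholders).
with h5py.File("events.h5", "r") as f:
    hits = f["hits"][:]      # e.g. (n_events, max_hits, 4): x, y, z, energy
    labels = f["labels"][:]

# 2. Crop each event around its energy-weighted centre to shrink the input
#    (this loop is the ~20 minute step).
def crop_event(event, size=32.0):
    centre = np.average(event[:, :3], axis=0, weights=event[:, 3] + 1e-9)
    mask = np.all(np.abs(event[:, :3] - centre) < size / 2, axis=1)
    return event[mask]

cropped = [crop_event(e) for e in hits]

# 3. Define and compile the Keras network, then train it on the cropped events.
model = keras.Sequential([
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```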
-
It is impossible to answer questions like this without the relevant details: you should include the actual code and the architectures that work and don't work. Any detail you think is irrelevant might be very relevant. Also include the full error messages and traceback. – Dr. Snoopy Jun 19 '20 at 01:48
-
So for some reason I am no longer getting the error; I'm not sure why, as I did not change anything... – Always Learning Forever Jun 19 '20 at 13:16
2 Answers
You should call `keras.backend.clear_session()` to free unused memory; also see this question for more details.

The reason the error disappeared is likely that Colab gives you a different type of GPU each time, or may even have you share a GPU with other users.
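For example, something along these lines before you rebuild the modified network (this assumes `model` is the Keras model you trained previously; the exact import path depends on your TF/Keras version):

```python
import gc
from tensorflow.keras import backend as K

# Drop the previous model and clear the backend state it holds on the GPU
# before defining the modified architecture.
del model          # `model` is assumed to be the previously trained Keras model
gc.collect()
K.clear_session()

# ...now define, compile, and fit the new architecture...
```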

BlackBear
This could be related to the data already being loaded and memory having been allocated previously, so that you run out; whether it still fits can vary with small changes to the network. Try loading the data in chunks to avoid this in the future.
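For example, with h5py you can read the dataset slice by slice instead of pulling everything into memory at once (the file name, dataset name, and chunk size below are placeholders):

```python
import h5py

CHUNK = 1024  # number of events per slice (placeholder)

with h5py.File("events.h5", "r") as f:
    dset = f["hits"]                       # h5py dataset; stays on disk
    for start in range(0, dset.shape[0], CHUNK):
        batch = dset[start:start + CHUNK]  # only this slice is read into RAM
        # ...crop / preprocess / train on this batch...
```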

David Warshawsky