I have been trying to train a 3D CNN with a specific architecture, and I wanted to create a Dockerfile with all the steps necessary to get the network working. The issue is that if I run the neural network on the host, I have no problem; everything works fine. But doing almost the same thing in a Docker container, I always get a "segmentation fault (core dumped)" error.
The two installations are not exactly the same, but the variations (maybe some extra package installed) shouldn't be a problem, right? Besides, I don't get any error until it starts iterating, so it seems like a memory problem. The GPU works in the Docker container and is the same GPU as on the host, and the Python code is the same.
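To make "not exactly the same" concrete, I have been diffing the `pip freeze` output of both environments with a small helper like the one below (the package names and versions shown are placeholders, not my actual lists):

```python
def freeze_diff(host_freeze: str, container_freeze: str) -> dict:
    """Compare two `pip freeze` outputs and report the differences."""
    def parse(text: str) -> dict:
        pkgs = {}
        for line in text.strip().splitlines():
            if "==" in line:
                name, version = line.split("==", 1)
                pkgs[name.strip().lower()] = version.strip()
        return pkgs

    host, cont = parse(host_freeze), parse(container_freeze)
    return {
        # Packages present only in one environment
        "only_host": sorted(set(host) - set(cont)),
        "only_container": sorted(set(cont) - set(host)),
        # Packages installed in both but with different versions
        "version_mismatch": sorted(
            (name, host[name], cont[name])
            for name in set(host) & set(cont)
            if host[name] != cont[name]
        ),
    }

# Placeholder data for illustration only:
host_pkgs = "tensorflow-gpu==1.12.0\nnumpy==1.16.4\nh5py==2.9.0"
container_pkgs = "tensorflow-gpu==1.12.0\nnumpy==1.17.0"
print(freeze_diff(host_pkgs, container_pkgs))
```

This is how I concluded the variations are only a few extra packages, but maybe one of the mismatches matters more than I think.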
The neural network in the Docker container starts training with the data, but during epoch 1 it hits the "segmentation fault (core dumped)" error.
So my question is the following: is it possible to have critical differences between the host and a Docker container even if they have exactly the same packages installed, especially in relation to TensorFlow and the GPU? The error must come from outside the code, given that the same code works in a similar environment.
I hope I explained myself well enough to get the idea of my question across. Thank you.