I am using AWS to train a CNN on a custom dataset. I launched a p2.xlarge instance, uploaded my (Python) scripts to the virtual machine, and I am running my code via the CLI.
I activated a virtual environment for TensorFlow (+Keras2) with Python 3 (CUDA 10.0 and Intel MKL-DNN), which is one of the environments AWS provides by default.
I am now running my code to train the network, but it seems like the GPU is not being used: training is just as slow as when I run it locally on my CPU.
This is the script that I am running:
https://github.com/AntonMu/TrainYourOwnYOLO/blob/master/2_Training/Train_YOLO.py
I also tried to alter the script by adding with tf.device('/device:GPU:0'):
after the parser (line 142) and indenting everything below it into that block, as sketched below. However, this doesn't seem to have changed anything.
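Concretely, the change I made looks roughly like this (a sketch; the body of the with block is all of the original code from the script, unchanged apart from the indentation):

```python
import tensorflow as tf

# Added after the argument parser (around line 142 of Train_YOLO.py);
# everything below this point in the original script was indented one
# level so it runs inside this device context.
with tf.device('/device:GPU:0'):
    ...  # rest of Train_YOLO.py unchanged
```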
Any tips on how to enable the GPU, or on how to check whether it is actually being used?
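For example, would checks like the following be the right way to verify that TensorFlow can see the GPU? (A minimal sketch, assuming the environment runs TensorFlow 1.x, which the CUDA 10.0 build suggests.)

```python
import tensorflow as tf
from tensorflow.python.client import device_lib

# True only if TensorFlow was built with CUDA support and a GPU is visible
print(tf.test.is_gpu_available())

# Lists every device TensorFlow can use; a working GPU shows up
# with device_type 'GPU' (e.g. name '/device:GPU:0')
print(device_lib.list_local_devices())

# Logging device placement prints which device each op actually runs on
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
```

Separately, I assume I could watch nvidia-smi on the instance while training runs and check whether GPU utilization is nonzero. Is that a reliable indicator?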