I'm trying to run a CuDNNLSTM layer on a Tesla V100-SXM2 GPU, but an error appears because tensorflow-gpu 2.0.0 is installed (I cannot downgrade because it is a shared server).
ConfigProto options are deprecated in tf 2.0.0, so previous threads like this one do not help. This is what I tried:
import os
import tensorflow as tf

os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"] = "2"  # Or 2, 3, etc. other than 0
tf.config.gpu.set_per_process_memory_growth(True)
tf.config.set_soft_device_placement(True)
If I use these lines of code, another error shows up:

ModuleNotFoundError: No module named 'tensorflow.contrib'
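For reference, my understanding from the TF 2.0 migration notes is that the memory-growth option moved to `tf.config.experimental.set_memory_growth`, and that `CuDNNLSTM` was folded into `tf.keras.layers.LSTM`, which is supposed to pick the cuDNN kernel automatically on GPU with default arguments. A sketch of that approach (untested on the shared server):

```python
import os
import tensorflow as tf

os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"] = "2"

# TF 2.0 replacement for ConfigProto's allow_growth option:
for gpu in tf.config.experimental.list_physical_devices("GPU"):
    tf.config.experimental.set_memory_growth(gpu, True)

tf.config.set_soft_device_placement(True)

# In TF 2.0 there is no CuDNNLSTM; tf.keras.layers.LSTM is expected to
# use the cuDNN implementation when running on GPU with default args
# (tanh activation, sigmoid recurrent activation, no unrolling).
layer = tf.keras.layers.LSTM(64)
```

I have not been able to confirm whether this actually engages the cuDNN kernel on the V100, which is why I am asking.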