TF 2.0.0-gpu, CUDA 10.0, RTX 2070 Super
Hi. I have a problem with GPU memory allocation. The initial allocation is about 7 GB, as shown here:
Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 6994 MB memory)
2020-01-11 22:19:22.983048: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2020-01-11 22:19:23.786225: I tensorflow/stream_executor/cuda/cuda_driver.cc:830] failed to allocate 2.78G (2989634304 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory
2020-01-11 22:19:24.159338: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10.0
Allocator stats at that point:
Limit: 7333884724 (~7334 MB)
InUse: 5888382720 (~5888 MB)
MaxInUse: 6255411968
NumAllocs: 1264
MaxAllocSize: 2372141056
However, only about 5900 MB ever gets used, and allocating the rest always fails.
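In case it is relevant: as far as I understand, TF 2.0 tries to reserve almost the whole GPU up front. The sketch below is not in my actual script; it is just how I would switch to on-demand growth, and I am not sure whether it has anything to do with the failed 2.78G allocation:

import tensorflow as tf

# Sketch only: let TF grow GPU memory on demand instead of mapping ~7 GB at startup.
# This has to run before anything touches the GPU.
gpus = tf.config.experimental.list_physical_devices('GPU')
for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)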
My guess was that in order to use the whole GPU memory of the RTX 2070 Super I should work with two data types (float16 and float32), so I enabled a mixed-precision policy with this code:
import tensorflow as tf  # TF 2.0.0-gpu

opt = tf.keras.optimizers.Adam(1e-4)
opt = tf.train.experimental.enable_mixed_precision_graph_rewrite(opt)
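For context, this is roughly how the rewritten optimizer gets used (a minimal sketch with a placeholder model and random data, not my real training code):

import numpy as np

# Placeholder model, just to show where the rewritten optimizer is plugged in.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(1024, activation='relu', input_shape=(256,)),
    tf.keras.layers.Dense(10),
])

model.compile(optimizer=opt,
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])

# Random stand-in data; my real input pipeline is much larger.
x = np.random.rand(512, 256).astype('float32')
y = np.random.randint(0, 10, size=(512,))
model.fit(x, y, batch_size=64, epochs=1)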
Still, the allocation always fails.