
I know that I can set the data type of placeholders and tensors using the dtype=tf.<DTYPE> argument.

Is there a way to explicitly force weights inside tf.layers (say tf.layers.conv2d) to be float64 or do the layer's weights always take the exact data type of their inputs?

I am trying to train with the following settings:

  1. Input: float32, weights: float32
  2. Input: float32, weights: float64
  3. Input: float64, weights: float32
  4. Input: float64, weights: float64

I would like to know whether the above combinations are possible, and how to explicitly prevent TensorFlow from casting one data type to match the other's.
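
For concreteness, here is a minimal sketch of what I mean (TF 1.x graph mode; the layer, shapes, and names are just illustrative):

    import tensorflow as tf

    # Settings 1 and 4: the layer's weights seem to take the dtype of the
    # input, so changing the placeholder dtype changes the weight dtype too.
    x32 = tf.placeholder(tf.float32, [None, 28, 28, 1])
    y32 = tf.layers.conv2d(x32, filters=8, kernel_size=3)  # weights: float32?

    x64 = tf.placeholder(tf.float64, [None, 28, 28, 1])
    y64 = tf.layers.conv2d(x64, filters=8, kernel_size=3)  # weights: float64?

    # Settings 2 and 3: is there a way to keep the input float32 but force
    # the kernel/bias variables to float64 (or vice versa)?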

gokul_uf

1 Answer


I don't think you can do that efficiently. Most operations, such as tf.matmul, require their operands to have the same type, so you will end up upcasting your tf.float32 tensors to tf.float64 whenever you want the computation to happen at that precision.
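
For illustration, a minimal sketch (TF 1.x; the values and shapes are arbitrary):

    import tensorflow as tf

    a = tf.constant([[1.0, 2.0]], dtype=tf.float32)
    b = tf.constant([[3.0], [4.0]], dtype=tf.float64)

    # tf.matmul(a, b)  # fails: both operands must share the same dtype

    # To compute in float64 you have to upcast the float32 operand yourself:
    c = tf.matmul(tf.cast(a, tf.float64), b)  # result dtype: float64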

From a computational point of view, consider that graphics cards are commonly much less capable at FP64 operations than at FP32. For example, the P5000, P6000, and GTX 1080 have only 1/32 as many FP64 cores as FP32 cores. The Titan V, with a ratio of 1/2, is one of the best you can get.

Finally, specifically in deep learning, precision of the computation has never been a problem. Actually, adding noise to the computation (mostly via stochastic gradient descent) is what most people think makes learning work, and one can successfully train models with half-precision floating-point numbers.

P-Gn
  • I agree with your statements, and maybe mixed precision is not possible / trivial. But it still doesn't answer the question of whether TF does the casting to `float32` / `float64` correctly for the layers API – gokul_uf May 29 '18 at 11:10
  • @gokul_uf As said, they require their operands to share the same type, so there is no casting to be done. I don't know of any TF operation that would do a silent type casting and that is probably for the best. – P-Gn May 29 '18 at 11:26