I know that I can set the data type of placeholders and tensors using the dtype=tf.<DTYPE> argument. Is there a way to explicitly force the weights inside tf.layers (say, tf.layers.conv2d) to be float64, or do a layer's weights always take the exact data type of their inputs?
I am trying to train with the following settings:

- Input: float32, weights: float32
- Input: float32, weights: float64
- Input: float64, weights: float32
- Input: float64, weights: float64
I would like to know whether the above combinations are possible, and how to explicitly prevent TensorFlow from changing the data type of one to match the other.
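The closest workaround I have found is casting around the layer: since tf.layers builds its variables in the dtype of its input, I can cast the input up before the layer and cast the output back afterwards. A minimal sketch of that idea (written via tf.compat.v1 only so it also runs under TF 2; the compat wrapper is my addition, not part of the original tf.layers API I am asking about):

```python
import tensorflow as tf

tf1 = tf.compat.v1           # TF 1.x-style graph API
tf1.disable_eager_execution()

x = tf1.placeholder(tf.float32, [None, 28, 28, 1])  # float32 input

# float32 input, float64 weights: cast the input up so the layer
# creates its kernel/bias variables in float64, then cast back down.
y64 = tf1.layers.conv2d(tf.cast(x, tf.float64), filters=8, kernel_size=3)
y32 = tf.cast(y64, tf.float32)

# The kernel variable took the (cast) input's dtype, not the placeholder's.
kernel = [v for v in tf1.trainable_variables() if "kernel" in v.name][0]
print(kernel.dtype.base_dtype)  # float64
print(y32.dtype)                # float32
```

This handles the mixed cases by casting, but it is indirect; I am asking whether the weight dtype can be set explicitly on the layer itself instead.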