Can TensorFlow automatically cache computations if they involve multiple calls to the same computation (sub-)graph?

For example, I have a matrix `F` in which each entry represents a computation based on trainable variables `W`. My objective function multiplies this matrix several times with different vectors (each time with `W` unchanged).

Will TensorFlow recompute, for example, `F[1,2]` whenever I access it, or will it cache that value?
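Roughly, the setup I have in mind looks like this (a minimal TF 1.x graph-mode sketch; the per-entry computation, shapes, and names are just placeholders I made up):

```python
import tensorflow as tf  # written against TF 1.x graph mode

# Trainable parameters that every entry of F depends on.
W = tf.Variable(tf.random_normal([3, 3]), name="W")

# F: each entry is some (possibly expensive) computation on W.
# reduce_sum(W * W) + i + j is only a stand-in for that computation.
entries = [[tf.reduce_sum(W * W) + float(i + j) for j in range(3)]
           for i in range(3)]
F = tf.stack([tf.stack(row) for row in entries])

# The objective multiplies F with several different vectors,
# with W unchanged between the multiplications.
v1 = tf.constant([[1.0], [2.0], [3.0]])
v2 = tf.constant([[0.5], [0.5], [0.5]])
objective = tf.reduce_sum(tf.matmul(F, v1)) + tf.reduce_sum(tf.matmul(F, v2))

# Gradients w.r.t. W must still flow through every entry of F.
grad_W = tf.gradients(objective, [W])[0]
```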
In theory, one could precompute the matrix `F` for a fixed `W`, such that each entry in `F` is a `tf.constant`. But that would prevent the correct computation of the gradients with respect to `W`.
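To make that second option concrete, continuing the illustrative sketch above (reusing `F`, `v1`, and `W` from it):

```python
# "Precomputing" here means: evaluate F once for the current W, then
# bake the resulting numbers back into the graph as constants.
sess = tf.Session()
sess.run(tf.global_variables_initializer())

F_fixed = tf.constant(sess.run(F))  # entries are now plain numbers, detached from W
objective_fixed = tf.reduce_sum(tf.matmul(F_fixed, v1))

# The path from the objective back to W is gone:
# tf.gradients(objective_fixed, [W]) returns [None].
```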