In a TensorFlow graph, is there a way to find out whether a node depends on a placeholder, something like node.depends(placeholder) -> Bool?
import tensorflow as tf

x = tf.placeholder(name='X', dtype=tf.int64, shape=[])
y = tf.placeholder(name='Y', dtype=tf.int64, shape=[])
p = tf.add(x, y)  # depends on both x and y
q = tf.add(x, x)  # depends only on x

sess = tf.Session()
result = sess.run([p, q], feed_dict={x: 1, y: 2})
print(result)
result = sess.run([p, q], feed_dict={x: 1, y: 3})  # only y changes
print(result)
In the code example above, q does not depend on y. In the second call of sess.run, we modify only y, so q does not need to be evaluated again. Does the session automatically reuse existing values in these cases? If so, is there a way to find out which nodes were actually evaluated during .run?
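For what it's worth, here is a minimal sketch of one way to inspect which nodes a particular .run call executed, assuming the TF 1.x tracing hooks tf.RunOptions and tf.RunMetadata (the step_stats field layout below is my understanding of the protobuf and worth verifying):

run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)
run_metadata = tf.RunMetadata()
sess.run([p, q], feed_dict={x: 1, y: 3},
         options=run_options, run_metadata=run_metadata)
# step_stats records every node that was executed, grouped by device
for device_stats in run_metadata.step_stats.dev_stats:
    for node_stats in device_stats.node_stats:
        print(node_stats.node_name)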
Otherwise, if I can quickly find out which nodes depend on the placeholders I modify, I can pass only those to run and reuse the existing values for the rest (kept in a dictionary as a cache).
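For illustration, a minimal sketch of such a dependency check, built on the TF 1.x graph API (tf.Tensor.op, tf.Operation.inputs, tf.Operation.control_inputs); depends_on is a hypothetical helper, not an existing TensorFlow function:

def depends_on(tensor, placeholder):
    # Hypothetical helper: walk backwards from tensor's op through data
    # and control inputs, and report whether placeholder's op is reachable.
    target = placeholder.op
    seen = set()
    stack = [tensor.op]
    while stack:
        op = stack.pop()
        if op is target:
            return True
        if op in seen:
            continue
        seen.add(op)
        stack.extend(inp.op for inp in op.inputs)  # data dependencies
        stack.extend(op.control_inputs)            # control dependencies
    return False

print(depends_on(p, y))  # True:  p = x + y
print(depends_on(q, y))  # False: q = x + x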
The idea is to avoid costly evaluations and, more importantly, to minimize the costly operations (outside TensorFlow) that my application needs to trigger whenever the output node values change.