Yes, it's even easier than you think:
x = tf.placeholder(tf.float32, None)
# create a bool tensor the same shape as x
condition = x < SmallConst
# create tensor same shape as x, with values greater than or equal to SmallConst set to 0
to_remove = x*tf.to_float(condition)
# set all values of x less than SmallConst to 0
x_clipped = x - to_remove
I'd normally just put that into one line like:
x_clipped = x - x*tf.to_float(x < SmallConst)
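For reference, here's a minimal runnable sketch of the whole thing (the threshold value 0.5 and the sample input are just placeholders I picked for illustration):
import tensorflow as tf
import numpy as np

SmallConst = 0.5  # assumed threshold; use whatever value you need
x = tf.placeholder(tf.float32, None)
# zero out every element of x that is below SmallConst
x_clipped = x - x * tf.to_float(x < SmallConst)

with tf.Session() as sess:
    print(sess.run(x_clipped, {x: np.array([0.1, 0.6, 0.4, 2.0], dtype=np.float32)}))
    # gives [0.  0.6 0.  2. ] -- values below SmallConst are zeroed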
Note: using tf.to_float on a tensor of type bool will give you 0.0s in place of Falses and 1.0s in place of Trues.
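For example (assuming an already-open session named sess, like in the later snippets):
flags = tf.constant([True, False, True])
floats = tf.to_float(flags)
sess.run(floats)  # gives array([1., 0., 1.], dtype=float32)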
Additional information for cleaner code:
The numerical operators (e.g. <, >=, +, -, etc., but not ==) are overloaded for TensorFlow tensors, so you can use native Python variables with tensors to get a new tensor that is the result of that operation. Because of this, tf.constant() is actually fairly rarely needed. Example of this in action:
a = tf.placeholder(tf.int32)
b = a + 1
c = a > 0
print(b) # gives "<tf.Tensor 'add:0' shape=<unknown> dtype=int32>"
print(c) # gives "<tf.Tensor 'Greater:0' shape=<unknown> dtype=bool>"
sess.run(b, {a: 1}) # gives scalar int32 numpy array with value 2
sess.run(c, {a: 1}) # gives scalar bool numpy array with value True
This is also true of numpy.
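For comparison, the same pattern with plain numpy arrays:
import numpy as np
a = np.array([1, -2, 3])
print(a + 1)  # gives [ 2 -1  4]
print(a > 0)  # gives [ True False  True]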
tf.assign() only works on Variables because it will "Update 'ref' by assigning 'value' to it."
Tensors in TensorFlow are immutable: the result of any operation on a tensor is another tensor, but the original tensor never changes. Variables, however, are mutable, and you change their value with tf.assign().
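A minimal sketch of that in action (assuming a fresh graph and session):
import tensorflow as tf

v = tf.Variable(0)
increment = tf.assign(v, v + 1)  # op that writes v + 1 back into v

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(increment)
    print(sess.run(v))  # gives 1 -- the Variable's value actually changed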