I believe what you need is the assign_slice_update
discussed in ticket #206. It is not yet available, though.
UPDATE: This is now implemented. See jdehesa's answer: https://stackoverflow.com/a/43139565/6531137
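For reference, the now-available approach should look roughly like the sketch below. This is a hedged illustration assuming the sliced-assignment support on Variables described in that answer (a[i, j, :].assign(...)); see jdehesa's answer for the exact API.

import tensorflow as tf
a = tf.Variable(tf.ones([10, 36, 36]))
# Assumed API: assign directly to a slice of the Variable (see linked answer)
update_op = a[3, 5, :].assign(5 * tf.ones([36]))
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(update_op)
    print(a.eval()[3, 4:7, :])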
Until assign_slice_update (or scatter_nd()) is available, you could build a block of the desired row containing the values you don't want to modify along with the desired values to update, like so:
import tensorflow as tf
a = tf.Variable(tf.ones([10,36,36]))
i = 3
j = 5
# Gather values inside the a[i,...] block that are not on column j
idx_before = tf.concat(1, [tf.reshape(tf.tile(tf.Variable([i]), [j]), [-1, 1]), tf.reshape(tf.range(j), [-1, 1])])
values_before = tf.gather_nd(a, idx_before)
idx_after = tf.concat(1, [tf.reshape(tf.tile(tf.Variable([i]), [36-j-1]), [-1, 1]), tf.reshape(tf.range(j+1, 36), [-1, 1])])
values_after = tf.gather_nd(a, idx_after)
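# values_before has shape [j, 36] and values_after has shape [36-j-1, 36]:
# together they hold every row of a[i, :, :] except row j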
# Build a subset of tensor `a` with the values that should not be touched and the values to update
block = tf.concat(0, [values_before, 5*tf.ones([1, 36]), values_after])
d = tf.scatter_update(a, i, block)
with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())
    sess.run(d)
    print(a.eval()[3,4:7,:])  # Print a subset of the tensor to verify
The example generates a tensor of ones and performs a[i,j,:] = 5. Most of the complexity lies in gathering the values that we don't want to modify, a[i,~j,:] (otherwise scatter_update() would replace those values).
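To make that concrete: scatter_update() replaces whole first-dimension blocks, so passing only the new row would wipe out the rest of a[i,:,:]. A hypothetical counter-example of what not to do:

# This would overwrite ALL of a[i, :, :] with 5s, not just a[i, j, :],
# because scatter_update() swaps out the entire a[i] block
d_wrong = tf.scatter_update(a, i, 5 * tf.ones([36, 36]))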
If you want to perform T[i,k,:] = a[1,1,:] as you asked, you need to replace 5*tf.ones([1, 36]) in the previous example with tf.gather_nd(a, [[1, 1]]).
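Concretely, the line that builds block would become something like this sketch (a hypothetical variant reusing values_before and values_after from the example above):

# Copy the row a[1, 1, :] into position [i, j, :] instead of writing 5s
row_to_copy = tf.gather_nd(a, [[1, 1]])  # shape [1, 36]
block = tf.concat(0, [values_before, row_to_copy, values_after])
d = tf.scatter_update(a, i, block)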
Another approach would be to build a mask, use tf.select() to pick the desired elements, and assign the result back to the variable, like so:
import tensorflow as tf
a = tf.Variable(tf.zeros([10,36,36]))
i = tf.Variable([3])
j = tf.Variable([5])
# Build a boolean mask that selects the a[i,j,:] slice
atleast_2d = lambda x: tf.reshape(x, [-1, 1])
indices = tf.concat(1, [atleast_2d(tf.tile(i, [36])), atleast_2d(tf.tile(j, [36])), atleast_2d(tf.range(36))])
mask = tf.cast(tf.sparse_to_dense(indices, [10, 36, 36], 1), tf.bool)
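# mask is True exactly on the 36 elements of a[i, j, :] and False everywhere else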
to_update = 5*tf.ones_like(a)
out = a.assign(tf.select(mask, to_update, a))
with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())
    sess.run(out)
    print(a.eval()[2:5,5,:])
It is potentially less efficient in terms of memory, since it needs a second a-shaped tensor for to_update, but you could easily modify this last example to obtain a gradient-preserving operation from the tf.select(...) node. You might also be interested in this other StackOverflow question: Conditional assignment of tensor values in TensorFlow.
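As a rough illustration of that gradient-preserving idea (a sketch, assuming you keep the tf.select() output as a regular tensor in the graph instead of assigning it back to the variable, with a made-up loss):

# Keep the selection as a tensor so gradients flow through both the
# updated and the untouched elements (hypothetical loss for illustration)
masked = tf.select(mask, to_update, a)
loss = tf.reduce_sum(tf.square(masked))
grads = tf.gradients(loss, [a])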
Those inelegant contortions should be replaced with a call to the proper TensorFlow function once it becomes available.