
Two parts to this question:

(1) What is the best way to update a subset of a tensor in TensorFlow? I've seen several related questions:

Adjust Single Value within Tensor -- TensorFlow
How to update a subset of 2D tensor in Tensorflow?

and I'm aware that Variable objects can be assigned using Variable.assign() (and/or scatter_update, etc.), but it seems very strange to me that TensorFlow does not have a more intuitive way to update part of a Tensor object. I have searched the TensorFlow API docs and Stack Overflow for quite some time now and can't find a simpler solution than what is presented in the links above. This seems particularly odd given that Theano has an equivalent in Tensor.set_subtensor(). Am I missing something, or is there really no simple way to do this through the TensorFlow API at this point?
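
For concreteness, here is the kind of Variable-only update I'm referring to (the values are illustrative, run in a TF1-style session):

import tensorflow as tf
v = tf.Variable([1., 2., 3., 4.])
# This works because v is a Variable; I can't find an analogous op for a plain Tensor.
update = tf.scatter_update(v, [1, 2], [20., 30.])
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(update)
    print(sess.run(v))  # [ 1. 20. 30.  4.]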

(2) If there is a simpler way, is it differentiable?

Thanks!

joeliven
  • Would it be enough for you to initialize the tensor values from a numpy array? If so, I'd recommend that approach. – Jin Oct 28 '16 at 06:24
  • In recent versions of Tensorflow, you can update variables using numpy-like slicing, like this: `v[2:4].assign([1, 2])`, where `v` is a `Variable`. Does that answer your question? – Peter Hawkins Oct 28 '16 at 19:48
  • Thanks both, appreciate the thoughts/comments. Unfortunately it's not quite what I'm looking for, though... updating a Variable using numpy-like slicing would be exactly it, except that it only applies to Variables, not Tensors. I've redesigned my model to avoid the explicit need for this op, because it seems Tensor objects are completely immutable in tf (unlike Variable objects). Thanks again for the thoughts! – joeliven Dec 09 '16 at 21:55

1 Answer

I suppose the immutability of Tensors is required for the construction of a computation graph; a Tensor can't update some of its values without becoming a different Tensor, or there would be nothing to put in the graph before it. The same issue comes up in Autograd.

It's possible to do this (though it's ugly) using boolean masks (make them Variables and use assign, or even define them beforehand in numpy). That would be differentiable, but in practice I'd avoid having to update subtensors.
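
For instance, a minimal sketch of the mask idea, using a float 0/1 mask instead of a boolean one so that plain arithmetic does the selection (the values are illustrative):

import tensorflow as tf
a = tf.constant([1., 2., 3., 4.])
# Values to write, pre-placed at the target positions.
b_full = tf.constant([0., 20., 30., 0.])
# Float mask: 1 where a should be overwritten, 0 where it is kept.
mask = tf.constant([0., 1., 1., 0.])
# Plain arithmetic, so this is differentiable w.r.t. a and b_full.
result = a * (1 - mask) + b_full * mask  # -> [1., 20., 30., 4.]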

If you really have to (and I really hope there is a better way), here is how to do it in 1D using tf.dynamic_stitch and tf.setdiff1d:

import tensorflow as tf

def set_subtensor1d(a, b, slice_a, slice_b):
    # Returns a new tensor equal to a, except that a[slice_a] = b[slice_b].
    a_range = tf.range(tf.shape(a)[0])
    # Flat indices of a that are NOT overwritten by the slice.
    _, a_from = tf.setdiff1d(a_range, a_range[slice_a])
    a_to = a_from  # kept elements stay at their original positions
    # Source indices into b, and the destination indices in a they fill.
    b_from, b_to = tf.range(tf.shape(b)[0])[slice_b], a_range[slice_a]
    # Interleave the kept elements of a with the selected elements of b.
    return tf.dynamic_stitch([a_to, b_to],
                             [tf.gather(a, a_from), tf.gather(b, b_from)])
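
For example (illustrative values, run in a TF1-style session):

a = tf.constant([0., 0., 0., 0., 0.])
b = tf.constant([1., 2.])
updated = set_subtensor1d(a, b, slice(1, 3), slice(0, 2))
with tf.Session() as sess:
    print(sess.run(updated))  # [0. 1. 2. 0. 0.]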

For higher dimensions this could be generalised by abusing reshape. It needs an nd_slice helper that applies a tuple of Python slices to a tensor; one minimal way to write it (tensors support numpy-style __getitem__ with slice tuples, though there is probably a better way):
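
def nd_slice(t, slice_tuple):
    # Tensors support numpy-style indexing, so applying a tuple of
    # Python slice objects is just __getitem__.
    return t[slice_tuple]

With that helper in place, the generalisation itself: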

def set_subtensornd(a, b, slice_tuple_a, slice_tuple_b):
    # Returns a new tensor equal to a, except that a[slice_tuple_a] = b[slice_tuple_b].
    # Label every element of a with its flat index.
    a_range = tf.range(tf.reduce_prod(tf.shape(a)))
    a_idxed = tf.reshape(a_range, tf.shape(a))
    # Flat indices of a that fall inside the target slice (to be dropped).
    a_dropped = tf.reshape(nd_slice(a_idxed, slice_tuple_a), [-1])
    _, a_from = tf.setdiff1d(a_range, a_dropped)
    a_to = a_from  # kept elements stay where they were
    # Flat indices of b that fall inside the source slice.
    b_range = tf.range(tf.reduce_prod(tf.shape(b)))
    b_idxed = tf.reshape(b_range, tf.shape(b))
    b_from = tf.reshape(nd_slice(b_idxed, slice_tuple_b), [-1])
    b_to = a_dropped  # b's elements land where a's were dropped
    # Stitch in flat (1D) space, then restore a's shape.
    a_flat, b_flat = tf.reshape(a, [-1]), tf.reshape(b, [-1])
    stitched = tf.dynamic_stitch([a_to, b_to],
                                 [tf.gather(a_flat, a_from), tf.gather(b_flat, b_from)])
    return tf.reshape(stitched, tf.shape(a))
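
Again for example (illustrative values):

a = tf.zeros([3, 3])
b = tf.ones([2, 2])
# a[0:2, 1:3] = b[0:2, 0:2]
updated = set_subtensornd(a, b, (slice(0, 2), slice(1, 3)), (slice(0, 2), slice(0, 2)))
with tf.Session() as sess:
    print(sess.run(updated))  # [[0. 1. 1.] [0. 1. 1.] [0. 0. 0.]]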

I have no idea how slow this will be; I'd guess quite slow. I also haven't tested it much beyond running it on a couple of tensors.

gngdb