You can first create a ones tensor for each length, pad each one to a common length, and then stack all the tensors together at the end.
import tensorflow as tf

a = tf.constant([4, 2, 1, 3], dtype=tf.int32)

def pad_to_same(t):
    # t ones followed by 5 - t zeros, so every row has length 5
    return tf.pad(tf.ones(t, dtype=tf.int32), [[0, 5 - t]], constant_values=0)

res = tf.stack([pad_to_same(t) for t in a])
# <tf.Tensor: shape=(4, 5), dtype=int32, numpy=
# array([[1, 1, 1, 1, 0],
#        [1, 1, 0, 0, 0],
#        [1, 0, 0, 0, 0],
#        [1, 1, 1, 0, 0]], dtype=int32)>
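As a side note, if the final goal is just this kind of 0/1 mask built from lengths, tf.sequence_mask produces the same result directly, without padding each row by hand. A minimal sketch, assuming the same maximum length of 5 as above:

# Equivalent mask built directly from the lengths in `a`
mask = tf.sequence_mask(a, maxlen=5, dtype=tf.int32)
# array([[1, 1, 1, 1, 0],
#        [1, 1, 0, 0, 0],
#        [1, 0, 0, 0, 0],
#        [1, 1, 1, 0, 0]], dtype=int32)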
Update
If you want to avoid the for-loop, you can use tf.map_fn:
res = tf.map_fn(pad_to_same, a)  # pad_to_same as defined above
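One detail worth noting: tf.map_fn infers the output dtype from the elements of a, which happens to match here because both are int32. If the mask should come out as a different dtype (say float32), recent TF 2.x versions let you declare that with fn_output_signature (older versions use the dtype argument). A minimal sketch reusing pad_to_same:

# Same mapping, but casting each row to float32.
# fn_output_signature tells map_fn the output dtype differs from the input's.
res_f = tf.map_fn(lambda t: tf.cast(pad_to_same(t), tf.float32), a,
                  fn_output_signature=tf.float32)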