Yes, there is. Provided you don't need to change the rank of the tensor, it's very simple.
tf.pad()
accepts regular Python lists, including ones containing tensors. The padding is specified as a list of pairs: one [pad_before, pad_after] pair per dimension of the input.
e.g.
import tensorflow as tf

t = tf.constant([[1, 2], [3, 4]])
# pad the second dimension (columns) out to length 4
paddings = [[0, 0], [0, 4 - tf.shape(t)[1]]]
out = tf.pad(t, paddings, 'CONSTANT', constant_values=-1)

sess = tf.Session()
sess.run(out)
# gives:
# array([[ 1,  2, -1, -1],
#        [ 3,  4, -1, -1]], dtype=int32)
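If the shape of t is known statically, the paddings can of course be plain Python ints with no tensors involved at all (a minimal variant of the snippet above):
t = tf.constant([[1, 2], [3, 4]])  # shape [2, 2], known at graph-construction time
out = tf.pad(t, [[0, 0], [0, 2]], 'CONSTANT', constant_values=-1)
sess.run(out)  # same result as above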
If you want to generalise this to a useful function, you could do something like:
def pad_up_to(t, max_in_dims, constant_values):
    # how much is missing in each dimension
    diff = max_in_dims - tf.shape(t)
    # turn [d0, d1, ...] into [[0, d0], [0, d1], ...] by prepending a zero column
    paddings = tf.pad(diff[:, None], [[0, 0], [1, 0]])
    return tf.pad(t, paddings, 'CONSTANT', constant_values=constant_values)
where max_in_dims is essentially the desired shape of the output. Note: this function will fail if you provide a target shape that is strictly smaller than the shape of t in any dimension, since tf.pad rejects negative padding amounts.
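If you would rather have oversized inputs truncated than raise an error, one option (a sketch, not part of the function above; pad_up_to_or_truncate is just an illustrative name) is to slice t down to the target shape first, so the remaining difference is guaranteed non-negative:
def pad_up_to_or_truncate(t, max_in_dims, constant_values):
    # clip each dimension of t to at most the target size
    t = tf.slice(t, tf.zeros_like(max_in_dims), tf.minimum(tf.shape(t), max_in_dims))
    diff = max_in_dims - tf.shape(t)  # now non-negative in every dimension
    paddings = tf.pad(diff[:, None], [[0, 0], [1, 0]])
    return tf.pad(t, paddings, 'CONSTANT', constant_values=constant_values)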
You can use it like:
t = tf.constant([[1, 2], [3, 4]]) # shape = [2, 2]
t_padded = pad_up_to(t, [2, 4], -1) # shape = [2, 4], padded with -1s
or
import numpy as np

t = tf.placeholder(tf.float32, [None, None])    # shape = [?, ?]
t_padded = pad_up_to(t, [5, 5], -1)             # shape = [5, 5], padded with -1s
t_np = np.random.uniform(0, 1, [3, 4])          # the unpadded input, shape = [3, 4]
t_padded_out = sess.run(t_padded, {t: t_np})    # shape = [5, 5]
t_np2 = np.random.uniform(0, 1, [2, 1])         # the unpadded input, shape = [2, 1]
t_padded_out2 = sess.run(t_padded, {t: t_np2})  # shape = [5, 5]
Although the dimension sizes are calculated dynamically, the number of dimensions is not, so make sure that max_in_dims
has the same number of elements as t.shape.
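If you want a rank mismatch to fail loudly at graph-construction time rather than with a confusing broadcasting error, a cheap check can be done in plain Python (a sketch; pad_up_to_checked is just an illustrative wrapper name):
def pad_up_to_checked(t, max_in_dims, constant_values):
    # the rank of t must be known statically for pad_up_to to make sense
    assert t.shape.ndims == len(max_in_dims), (
        'expected rank %d, got rank %d' % (len(max_in_dims), t.shape.ndims))
    return pad_up_to(t, max_in_dims, constant_values)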