Since I need to repeat an array along a specific axis, I want to avoid unnecessary memory allocation as much as possible.
For example, given a numpy array A of shape (3, 4, 5), I want to create a view named B of shape (3, 4, 100, 5) on the original A, in which the 3rd axis of A is repeated 100 times.
In numpy, this can be achieved like this:
B=numpy.repeat(A.reshape((3, 4, 1, 5)), repeats=100, axis=2)
or:
B=numpy.broadcast_to(A.reshape((3, 4, 1, 5)), (3, 4, 100, 5))
The former allocates new memory and copies the data, while the latter just creates a view over A without any extra allocation. This can be verified by the method described in the answer to Size of numpy strided array/broadcast array in memory?.
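To make the difference concrete, here is a small check (my own sketch, using `np.shares_memory` and the array strides) showing that `broadcast_to` returns a zero-copy view while `repeat` allocates fresh memory:

```python
import numpy as np

A = np.arange(3 * 4 * 5).reshape(3, 4, 5)

# repeat materializes the result: new memory, real copy
B_copy = np.repeat(A.reshape(3, 4, 1, 5), repeats=100, axis=2)

# broadcast_to only fakes the repeated axis by giving it stride 0
B_view = np.broadcast_to(A.reshape(3, 4, 1, 5), (3, 4, 100, 5))

print(B_copy.shape, B_view.shape)   # both are (3, 4, 100, 5)
print(B_view.strides)               # stride 0 on the repeated axis
print(np.shares_memory(A, B_view))  # True: no data was copied
print(np.shares_memory(A, B_copy))  # False: repeat allocated new memory
```

Both arrays compare equal element-wise, but only `B_view` reuses A's buffer.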
In Theano, however, theano.tensor.repeat seems to be the only way to do this, which is of course not preferable since it copies.
I wonder if there is a `numpy.broadcast_to`-like method in Theano that can do this efficiently?