How do I fill a pytorch tensor with a non-scalar value?
For example, let's say I want to fill a PyTorch tensor `X` of shape `(n_samples, n_classes)` with a 1D vector `a` of shape `(n_classes,)`. Ideally, I'd like to be able to write:
    X = torch.full((n_samples, n_classes), a)
where the vector `a` is the `fill_value` in `torch.full`. However, `torch.full` only accepts a scalar as the `fill_value` (source), so this code won't work.
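For concreteness, here's a minimal repro of the failure (the sizes are toy values I picked for illustration):

```python
import torch

n_samples, n_classes = 4, 3  # toy sizes for illustration
a = torch.rand(n_classes)    # the 1D vector I want to fill with

err = None
try:
    X = torch.full((n_samples, n_classes), a)
except (RuntimeError, TypeError) as e:  # exact exception type varies by PyTorch version
    err = e

# err is set, confirming a 1D tensor is rejected as fill_value
print(type(err).__name__)
```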
I have two questions:
1. If I can't use `torch.full`, what is a fast way to fill `X` with `n_samples` copies of `a`?
2. Why does `torch.full` only accept scalar fill values? Is there a good reason why the `torch.full` implementation cannot accept tensor fill values?
Regarding question 1, I'm thinking of simply writing:
    X = torch.ones((n_samples, n_classes)) * a
However, is there a faster/more efficient way to do this?
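For concreteness, a sanity check that the broadcast-multiply above does produce `n_samples` copies of `a` (toy sizes chosen for illustration):

```python
import torch

n_samples, n_classes = 4, 3  # toy sizes for illustration
a = torch.arange(n_classes, dtype=torch.float32)

# Broadcasting: a of shape (n_classes,) is multiplied elementwise into each
# row of the ones tensor, so every row of X ends up equal to a
X = torch.ones((n_samples, n_classes)) * a

print(X.shape)               # torch.Size([4, 3])
print(torch.equal(X[1], a))  # True: every row equals a
```

This gives the result I want; my question is whether the extra ones-allocation and multiply can be avoided.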
For reference, I've already checked out the following Stack Overflow posts:
- In pytorch, how to fill a tensor with another tensor?
- Fill tensor with another tensor where mask is true
- Efficiently filling torch.Tensor at equal index positions
- Add blocks of values to a tensor at specific locations in PyTorch
but none of these directly answer my question.
Thanks!