45

As the title asks, what does -1 do in PyTorch's view()?

>>> a = torch.arange(1, 17)
>>> a
tensor([  1.,   2.,   3.,   4.,   5.,   6.,   7.,   8.,   9.,  10.,
         11.,  12.,  13.,  14.,  15.,  16.])

>>> a.view(1,-1)
tensor([[  1.,   2.,   3.,   4.,   5.,   6.,   7.,   8.,   9.,  10.,
          11.,  12.,  13.,  14.,  15.,  16.]])

>>> a.view(-1,1)
tensor([[  1.],
        [  2.],
        [  3.],
        [  4.],
        [  5.],
        [  6.],
        [  7.],
        [  8.],
        [  9.],
        [ 10.],
        [ 11.],
        [ 12.],
        [ 13.],
        [ 14.],
        [ 15.],
        [ 16.]])

Does -1 generate an additional dimension? Does it behave the same as -1 in numpy.reshape()?

asked by aerin (edited by iacob)
  • As far as I know (I'm no pro), the dimension given -1 will be adapted to fit the other ones. So `a.view(-1,1)` will result in a tensor of shape `16x1` because there are 16 values, and `a.view(1,-1)` will result in a `1x16` tensor. – bene Jun 11 '18 at 07:31
  • If you are wondering what `x.view(-1)` does: it flattens the tensor. Why? Because it has to construct a new view with only one dimension and infer its size -- so it flattens it. In addition, this operation avoids the very nasty bugs `.resize()` brings, since the order of the elements is respected. Fyi, PyTorch now has an op for flattening: https://pytorch.org/docs/stable/generated/torch.flatten.html or see my answer https://stackoverflow.com/a/66500823/1601580 – Charlie Parker Mar 02 '22 at 17:59

6 Answers

68

Yes, it does behave like -1 in numpy.reshape(), i.e. the actual value for this dimension will be inferred so that the number of elements in the view matches the original number of elements.

For instance:

import torch

x = torch.arange(6)

print(x.view(3, -1))      # inferred size will be 2 as 6 / 3 = 2
# tensor([[ 0.,  1.],
#         [ 2.,  3.],
#         [ 4.,  5.]])

print(x.view(-1, 6))      # inferred size will be 1 as 6 / 6 = 1
# tensor([[ 0.,  1.,  2.,  3.,  4.,  5.]])

print(x.view(1, -1, 2))   # inferred size will be 3 as 6 / (1 * 2) = 3
# tensor([[[ 0.,  1.],
#          [ 2.,  3.],
#          [ 4.,  5.]]])

# print(x.view(-1, 5))    # throw error as there's no int N so that 5 * N = 6
# RuntimeError: invalid argument 2: size '[-1 x 5]' is invalid for input with 6 elements

# print(x.view(-1, -1, 3))  # throw error as only one dimension can be inferred
# RuntimeError: invalid argument 1: only one dimension can be inferred
answered by benjaminplanche (edited by NpnSaddy)
  • what if we have -1 on its own? e.g. I have in front of me this: `correct[:k].view(-1)`. What does that do in that special case? – Charlie Parker Mar 05 '21 at 23:08
  • @CharlieParker: this would flatten the tensor (similar to [`torch.flatten(correct)`](https://pytorch.org/docs/stable/generated/torch.flatten.html)), i.e., returning a tensor with a single dimension containing all the elements. E.g., running `x.view(-1)` after the commands in my answer would return `tensor([0., 1., 2., 3., 4., 5.])`, i.e., a tensor with a single dimension, of size 6. – benjaminplanche Mar 08 '21 at 16:52
7

I love the answer that Benjamin gives (https://stackoverflow.com/a/50793899/1601580):

Yes, it does behave like -1 in numpy.reshape(), i.e. the actual value for this dimension will be inferred so that the number of elements in the view matches the original number of elements.

but I think the edge case that might not be intuitive (at least it wasn't for me) is calling it with a single -1, i.e. tensor.view(-1). My guess is that it works exactly the same way as always, except that since you are giving a single number to view, it assumes you want a single dimension. If you had tensor.view(-1, Dnew), it would produce a tensor of two dimensions/indices but would make sure the first dimension is of the correct size according to the original dimensions of the tensor. Say you had (D1, D2); if Dnew = D1*D2, then the new first dimension would be 1.

For real examples with code you can run:

import torch

x = torch.randn(1, 5)
x = x.view(-1)       # a single -1 flattens: (1, 5) -> (5,)
print(x.size())

x = torch.randn(2, 4)
x = x.view(-1, 8)    # -1 inferred as 1, since 2 * 4 = 1 * 8
print(x.size())

x = torch.randn(2, 4)
x = x.view(-1)       # flattens: (2, 4) -> (8,)
print(x.size())

x = torch.randn(2, 4, 3)
x = x.view(-1)       # flattens: (2, 4, 3) -> (24,)
print(x.size())

output:

torch.Size([5])
torch.Size([1, 8])
torch.Size([8])
torch.Size([24])

History/Context

A good example of a common pattern from early PyTorch, before the Flatten layer was officially added, is this code:

import torch.nn as nn

class Flatten(nn.Module):
    def forward(self, input):
        # input.size(0) usually denotes the batch size so we want to keep that
        return input.view(input.size(0), -1)

for use inside nn.Sequential (a usage sketch follows below). Compared to this layer, a bare x.view(-1) would flatten the batch dimension away as well; keeping input.size(0) as the first dimension is usually important for the code to actually run.
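
Here is a minimal sketch of how such a layer was typically used; the model and layer sizes are made up for illustration:

import torch
import torch.nn as nn

class Flatten(nn.Module):
    def forward(self, input):
        # keep the batch dimension, flatten everything else
        return input.view(input.size(0), -1)

# hypothetical tiny model: the sizes are chosen only to make the shapes line up
model = nn.Sequential(
    nn.Conv2d(1, 4, kernel_size=3),  # (N, 1, 8, 8) -> (N, 4, 6, 6)
    Flatten(),                       # (N, 4, 6, 6) -> (N, 144)
    nn.Linear(4 * 6 * 6, 10),
)

x = torch.randn(2, 1, 8, 8)
print(model(x).shape)  # torch.Size([2, 10])

Modern PyTorch makes the custom class unnecessary: nn.Flatten() does the same thing, flattening everything after the batch dimension by default.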


Example 2

If you are wondering what x.view(-1) does: it flattens the tensor. Why? Because it has to construct a new view with only one dimension and infer its size -- so it flattens it. In addition, this operation avoids the very nasty bugs .resize() brings, since the order of the elements is respected. Fyi, PyTorch now has an op for flattening: https://pytorch.org/docs/stable/generated/torch.flatten.html

#%%
"""
Summary: view(-1, ...) keeps the remaining dimensions as given and infers the size at the -1 position such that the
total number of elements of the original tensor is respected. If it's only .view(-1), the result has a single
dimension, so it ends up flattening the tensor.

ref: my answer https://stackoverflow.com/a/66500823/1601580
"""
import torch

x = torch.arange(6)
print(x)

x = x.reshape(3, 2)
print(x)

print(x.view(-1))

output

tensor([0, 1, 2, 3, 4, 5])
tensor([[0, 1],
        [2, 3],
        [4, 5]])
tensor([0, 1, 2, 3, 4, 5])

See, the original flat tensor is recovered!

answered by Charlie Parker
2

I guess this works similarly to np.reshape:

The new shape should be compatible with the original shape. If an integer, then the result will be a 1-D array of that length. One shape dimension can be -1. In this case, the value is inferred from the length of the array and remaining dimensions.

If you have a = torch.arange(1, 19) (18 elements), you can view it in various ways like a.view(-1,6), a.view(-1,9), a.view(3,-1), etc., as sketched below.
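
A quick sketch of those calls (the 18-element tensor is chosen so all the divisions work out):

import torch

a = torch.arange(1, 19)     # 18 elements
print(a.view(-1, 6).shape)  # torch.Size([3, 6]): 18 / 6 = 3 rows inferred
print(a.view(-1, 9).shape)  # torch.Size([2, 9]): 18 / 9 = 2 rows inferred
print(a.view(3, -1).shape)  # torch.Size([3, 6]): 18 / 3 = 6 columns inferred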

answered by Krishna (edited by iacob)
  • what if we have -1 on its own? e.g. I have in front of me this: `correct[:k].view(-1)`. What does that do in that special case? – Charlie Parker Mar 05 '21 at 23:11
1

From the PyTorch documentation:

>>> x = torch.randn(4, 4)
>>> x.size()
torch.Size([4, 4])
>>> y = x.view(16)
>>> y.size()
torch.Size([16])
>>> z = x.view(-1, 8)  # the size -1 is inferred from other dimensions
>>> z.size()
torch.Size([2, 8])
answered by Miladiouss
  • what if we have -1 on its own? e.g. I have in front of me this: `correct[:k].view(-1)`. What does that do in that special case? – Charlie Parker Mar 05 '21 at 23:11
0

Here -1 is inferred to be 2. For instance:

>>> a = torch.rand(4,4)
>>> a.size()
torch.Size([4, 4])
>>> y = a.view(16)
>>> y.size()
torch.Size([16])
>>> z = a.view(-1,8)  # -1 is inferred as 2, i.e. (2,8)
>>> z.size()
torch.Size([2, 8])
answered by Jithin Palepu
  • what if we have -1 on its own? e.g. I have in front of me this: `correct[:k].view(-1)`. What does that do in that special case? – Charlie Parker Mar 05 '21 at 23:24
0

-1 is a PyTorch alias for "infer this dimension given that the others have all been specified" (i.e. the quotient of the total number of elements by the product of the other dimensions). It is a convention taken from numpy.reshape().

Hence t.view(1,16) in the example above (where t has 16 elements) would be equivalent to t.view(1,-1) or t.view(-1,16); a quick check follows.
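
A minimal check of that equivalence, using the 16-element tensor from the question:

import torch

t = torch.arange(1, 17)  # 16 elements, as in the question
assert torch.equal(t.view(1, 16), t.view(1, -1))   # -1 inferred as 16
assert torch.equal(t.view(1, 16), t.view(-1, 16))  # -1 inferred as 1
print(t.view(-1, 16).shape)  # torch.Size([1, 16])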

answered by iacob