I've recently been a bit confused about tensors. If we have a tensor with shape (3, 2, 3, 4), does that mean the first dimension holds 3 groups of numbers, or does it mean there are exactly 3 individual numbers along the first dimension?
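For example, here is a quick check I ran in the interpreter to see what a single index along the first dimension gives back (my own experiment, in case it clarifies what I'm asking):

>>> import torch
>>> t = torch.randn(3, 2, 3, 4)
>>> t.shape
torch.Size([3, 2, 3, 4])
>>> t[0].shape    # one slice along the first dimension
torch.Size([2, 3, 4])
>>> len(t)        # number of such slices along the first dimension
3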
This leads to my second question: given a tensor a with shape (3, 2), why does torch.max(a, 0) return 2 max values instead of 3, considering that the first dimension has size 3?
>>> import torch
>>> a = torch.randn(3, 2)
>>> a
tensor([[-1.1254, -0.1549],
        [-0.5308,  1.0427],
        [-0.1268,  1.0866]])
>>> torch.max(a, 0)
torch.return_types.max(
values=tensor([-0.1268, 1.0866]),
indices=tensor([2, 2]))
I mean, why doesn't it return a list of 3 max values?
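For comparison, when I pass dim=1 on the same tensor a, I do get 3 values, one per row, which is what I intuitively expected dim=0 to give me:

>>> torch.max(a, 1)
torch.return_types.max(
values=tensor([-0.1549,  1.0427,  1.0866]),
indices=tensor([1, 1, 1]))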
Then my third question: if we have two tensors with shapes (3, 3, 10, 2) and (2, 4, 10, 1), can we concatenate them along the third dimension, given that they have the same size (10) there? And if it is feasible, what is the reason it works?
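Roughly, this is the call I have in mind (just a sketch of what I'm trying, I haven't confirmed it's valid):

>>> x = torch.randn(3, 3, 10, 2)
>>> y = torch.randn(2, 4, 10, 1)
>>> torch.cat([x, y], dim=2)    # the third dimension (index 2) has size 10 in both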
I'd really appreciate any help in understanding this!