PyTorch doesn't seem to have documentation for tensor.stride().
Can someone confirm my understanding?
My questions are threefold.
1. Stride is used for accessing an element in the storage, so the length of the stride tuple is the same as the number of dimensions of the tensor. Correct?
2. For each dimension, the corresponding element of stride tells how many positions one must move along the 1-dimensional storage to advance by one index in that dimension. Correct? (I try to verify both right after the example below.)
For example:
In [15]: x = torch.arange(1,25)
In [16]: x
Out[16]:
tensor([ 1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12, 13, 14, 15, 16, 17, 18,
        19, 20, 21, 22, 23, 24])
In [17]: a = x.view(4,3,2)
In [18]: a
Out[18]:
tensor([[[ 1,  2],
         [ 3,  4],
         [ 5,  6]],

        [[ 7,  8],
         [ 9, 10],
         [11, 12]],

        [[13, 14],
         [15, 16],
         [17, 18]],

        [[19, 20],
         [21, 22],
         [23, 24]]])
In [20]: a.stride()
Out[20]: (6, 2, 1)
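If my reading is right, the stride tuple has one entry per dimension, and element a[i, j, k] lives at flat offset i*6 + j*2 + k*1 in the storage. A quick check of questions 1 and 2 (just a sketch; the indices i, j, k are arbitrary ones I picked for illustration):

import torch

x = torch.arange(1, 25)
a = x.view(4, 3, 2)

# One stride entry per dimension (question 1).
assert len(a.stride()) == a.dim()  # both are 3

# a[i, j, k] sits at storage offset i*6 + j*2 + k*1 (question 2).
i, j, k = 2, 1, 0
offset = i * a.stride()[0] + j * a.stride()[1] + k * a.stride()[2]
# x shares its storage with a, so it stands in for the flat storage here.
assert a[i, j, k] == x[offset]  # both are 15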
3. How does having this information help perform tensor operations efficiently? The stride is basically just describing the memory layout, so how does knowing it help?
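To make question 3 concrete: my guess is that strides are what make zero-copy views possible, e.g. transpose just swaps stride entries over the same storage instead of moving any data. A minimal sketch of what I mean (please correct me if this is off):

import torch

a = torch.arange(1, 25).view(4, 3, 2)
b = a.transpose(0, 2)  # shape (2, 3, 4)

print(a.stride())  # (6, 2, 1)
print(b.stride())  # (1, 2, 6) -- strides swapped, no data moved
print(a.data_ptr() == b.data_ptr())  # True: both views share one storage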