I'm learning about tensor storage from a blog post (written in Vietnamese, my native language), and after experimenting with its examples I ran into something I find hard to understand. Given three tensors x, zzz, and x_t as below:
import torch
x = torch.tensor([[3, 1, 2],
                  [4, 1, 7]])
zzz = torch.tensor([1, 2, 3])
# Transpose of the tensor x
x_t = x.t()
When I assign the storage of each tensor to its own variable, their ids are all different from each other:
x_storage = x.storage()
x_t_storage = x_t.storage()
zzz_storage = zzz.storage()
print(id(x_storage), id(x_t_storage), id(zzz_storage))
print(x_storage.data_ptr())
print(x_t_storage.data_ptr())
Output:
140372837772176 140372837682304 140372837768560
94914110126336
94914110126336
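As far as I understand, the equal data_ptr() values mean the two storages point at the same memory even though the wrapper ids differ; comparing them directly seems to confirm this:
# Different Python wrapper objects, but the same underlying data buffer
print(id(x_storage) == id(x_t_storage))                # False
print(x_storage.data_ptr() == x_t_storage.data_ptr())  # True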
But when I call the storage() method on each original tensor inside a single print statement, I get the same id for all of them, no matter how many times I try:
print(id(x.storage()), id(x_t.storage()), id(zzz.storage()))
# 140372837967904 140372837967904 140372837967904
The situation gets even weirder when I print them on separate lines: sometimes the results are different and sometimes they are the same:
print(id(x.storage()))
print(id(x_t.storage()))
# Output:
# 140372837771776
# 140372837709856
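My guess is that this has to do with id() in CPython being the object's memory address, which can be reused as soon as a temporary object is freed. A pure-Python sketch (no PyTorch involved; Dummy is just a throwaway class I made up) shows the same pattern:
class Dummy:
    pass

def make():
    # Returns a fresh object each call; nothing keeps it alive afterwards.
    return Dummy()

# Each temporary is freed right after id() returns, so CPython is free to
# reuse the same memory address for the next one -- the ids often coincide.
print(id(make()), id(make()), id(make()))

# Here both objects stay alive at the same time, so their ids must differ.
a = make()
b = make()
print(id(a), id(b))
If something like this is also happening with the storage objects, it might explain the behaviour, but I would like confirmation.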
So my questions are: why are the ids of the storages different in the first case but identical in the second (and where does that id come from)? And what is happening in the third case?
I would also like to ask about the data_ptr() method: it was suggested as a replacement for id in a question I saw on the PyTorch discussion forum, but the PyTorch docs give almost no detail about it. I would be glad if anyone could give me detailed answers to any or all of these questions.
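For context, this is how I am currently trying to check whether two tensors share the same underlying buffer; same_buffer is just a helper name I made up, and I am not sure this is the intended way to use data_ptr():
import torch

def same_buffer(a, b):
    # Compares the start address of each tensor's storage.
    # Note: this only checks where the storage begins, not offsets or strides.
    return a.storage().data_ptr() == b.storage().data_ptr()

x = torch.tensor([[3, 1, 2],
                  [4, 1, 7]])
x_t = x.t()      # transpose is a view, so it shares storage with x
y = x.clone()    # clone copies the data into a new storage

print(same_buffer(x, x_t))   # True
print(same_buffer(x, y))     # False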