
I tried to print the address of a variable on the GPU and on the CPU separately using Python's id() function, but the two values look close to each other, as if both tensors lived in host memory. I don't know why.

>>> a = torch.tensor([1, 2, 3])
>>> print(id(a))
139651301379824
>>> b = a.to('cuda:2')
>>> print(id(b))
139651301392848
  • But you are not printing the address of the device tensor in GPU memory, you are printing its address in CPU memory – talonmies Jul 14 '22 at 02:50
  • @talonmies Could you tell me how to print address of a device tensor, thanks. – Jacob Jul 14 '22 at 02:59
  • I don't use Pytorch, but I would guess this is what you want (for both CPU and GPU tensors) https://pytorch.org/docs/stable/generated/torch.Tensor.data_ptr.html#torch.Tensor.data_ptr – talonmies Jul 14 '22 at 03:09
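To make talonmies's suggestion concrete, here is a minimal sketch (the `torch.cuda.is_available()` guard is my addition, and I dropped the explicit device index for portability):

```python
import torch

a = torch.tensor([1, 2, 3])
print(a.data_ptr())          # address of a's data in host (CPU) memory

if torch.cuda.is_available():
    b = a.to('cuda')         # .to() copies the data to device memory
    print(b.data_ptr())      # address of b's data in GPU memory
```

Unlike `id()`, which reports where the Python wrapper object lives (always host memory), `data_ptr()` reports where the tensor's actual data buffer lives on its device.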

1 Answer


You can inspect the underlying storage with `a.storage().data_ptr()`, which returns the address of the tensor's data (for a CUDA tensor, this is the device address). Be careful with `id(a.storage)`: `a.storage` without parentheses is a bound method, not the storage itself, so equal `id()` values there say nothing about memory.
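As an aside, comparing `id(a.storage)` can appear to show equality by accident. This is general CPython behavior, not PyTorch-specific: each attribute access creates a temporary bound-method object, and `id()` of a freed temporary may be reused.

```python
import torch

a = torch.tensor([1, 2, 3])
# Each access to a.storage creates a fresh bound-method object; the first is
# garbage-collected as soon as id() returns, so CPython may hand the second
# one the same address.
print(id(a.storage), id(a.storage))  # the two values may coincide, by accident
```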

I found these links (1 2 3) useful, and I copied the code below for checking whether two tensors share the same storage.

import torch

def same_storage(x, y):
    # Two tensors share storage iff their storages start at the same address.
    return x.storage().data_ptr() == y.storage().data_ptr()

# Two views of the same tensor share its storage.
t = torch.rand((3, 3))
t1 = t[0, :]
t2 = t[:, 0]
print(same_storage(t1, t2))  # prints True

x = torch.arange(10)
y = x[1::2]                  # slicing creates a view, not a copy
print(same_storage(x, y))    # prints True
z = y.clone()                # clone() allocates new storage
print(same_storage(x, z))    # prints False
print(same_storage(y, z))    # prints False
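One caveat (my addition, not from the linked snippets): `storage().data_ptr()` compares the base address of the whole storage, while `Tensor.data_ptr()` includes the view's element offset, so two views that share storage can still report different data addresses:

```python
import torch

x = torch.arange(10)
y = x[1::2]  # a view: same storage as x, but starting one element in

# Same storage base address...
print(x.storage().data_ptr() == y.storage().data_ptr())  # prints True
# ...but y's first element sits one element past x's first element.
print(y.data_ptr() - x.data_ptr() == x.element_size())   # prints True
```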
x pie