As Sparsh Choudhary explained in their answer, because you used torch.from_numpy, the array and the tensor reference the same memory location. When using += (i.e. __iadd__), the memory address stays the same, since the operation is performed in-place when possible (see this question). But when you perform aray = aray + 1, you create a brand-new array and rebind the name aray to it, so aray no longer references the original memory. tensor is unaffected and still points to the original buffer.
We can check this by getting the memory pointers for the numpy array and pytorch tensor (the methods for each are slightly different, see this answer).
import torch
import numpy as np

aray = np.arange(1, 11)
tensor = torch.from_numpy(aray)  # tensor shares aray's underlying buffer

print(f"aray: {aray}")
print(f"tensor: {tensor}")
print(f"aray pointer: {aray.ctypes.data}")
print(f"tensor pointer: {tensor.data_ptr()}")

print("Inplace Addition:")
aray += 1  # in-place: the memory address is unchanged
print(f"aray: {aray}")
print(f"tensor: {tensor}")
print(f"aray pointer: {aray.ctypes.data}")
print(f"tensor pointer: {tensor.data_ptr()}")

print("Normal Addition")
aray = 1 + aray  # creates a new array and rebinds the name aray
print(f"aray: {aray}")
print(f"tensor: {tensor}")
print(f"aray pointer: {aray.ctypes.data}")
print(f"tensor pointer: {tensor.data_ptr()}")
Output:
aray: [ 1 2 3 4 5 6 7 8 9 10]
tensor: tensor([ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
aray pointer: 62868976
tensor pointer: 62868976
Inplace Addition:
aray: [ 2 3 4 5 6 7 8 9 10 11]
tensor: tensor([ 2, 3, 4, 5, 6, 7, 8, 9, 10, 11])
aray pointer: 62868976
tensor pointer: 62868976
Normal Addition
aray: [ 3 4 5 6 7 8 9 10 11 12]
tensor: tensor([ 2, 3, 4, 5, 6, 7, 8, 9, 10, 11])
aray pointer: 62869120
tensor pointer: 62868976
As you can see, the array and the tensor share the same memory address after creation, and this persists through the += operation. But once you perform aray = 1 + aray, the name aray is rebound to a new location in memory, while tensor keeps pointing to the original one.
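The same link can be checked from the other direction as well. As a small sketch (using np.shares_memory rather than raw pointers): an in-place operation on the tensor side, such as tensor.add_(1), is visible through the numpy array, while a rebinding assignment on the tensor side breaks the link in exactly the same way as aray = 1 + aray.

```python
import numpy as np
import torch

aray = np.arange(1, 11)
tensor = torch.from_numpy(aray)

# np.shares_memory confirms both objects view the same buffer
print(np.shares_memory(aray, tensor.numpy()))  # True

# An in-place op on the tensor side also updates the numpy array
tensor.add_(1)
print(aray)  # [ 2  3  4  5  6  7  8  9 10 11]

# Rebinding the name creates a new tensor, so the sharing ends
tensor = tensor + 1
print(np.shares_memory(aray, tensor.numpy()))  # False
```

tensor.numpy() is used here because np.shares_memory expects array-like inputs; it returns a view over the tensor's storage, so the check still compares the underlying buffers.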