18

The title says it all. I want to convert a PyTorch autograd.Variable to its equivalent numpy array. The official documentation advocates using a.numpy() to get the equivalent numpy array (for a PyTorch tensor), but this gives me the following error:

    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/home/bishwajit/anaconda3/lib/python3.6/site-packages/torch/autograd/variable.py", line 63, in __getattr__
        raise AttributeError(name)
    AttributeError: numpy

Is there any way I can circumvent this?

Bishwajit Purkaystha
  • 1,975
  • 7
  • 22
  • 30

2 Answers

29

Two possible cases:

  • Using GPU: If you try to convert a CUDA float tensor directly to numpy, as shown below, it will throw an error.

    x.data.numpy()

    RuntimeError: numpy conversion for FloatTensor is not supported

    So you can't convert a CUDA float tensor directly to numpy; you have to convert it into a CPU float tensor first, and then convert that to numpy, as shown below.

    x.data.cpu().numpy()

  • Using CPU: Converting a CPU tensor is straightforward.

    x.data.numpy()
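Both cases above can be sketched together. This is a minimal, hedged example using plain tensors (in the old Variable API you would first take `x.data`, as in the answer above); the GPU branch is guarded so it only runs when CUDA is actually available:

```python
import torch
import numpy as np

# CPU tensor: direct conversion works
x_cpu = torch.ones(2, 3)
a = x_cpu.numpy()          # numpy array sharing memory with the tensor
print(type(a))             # <class 'numpy.ndarray'>

# GPU tensor: must be moved to the CPU first
if torch.cuda.is_available():
    x_gpu = torch.ones(2, 3).cuda()
    b = x_gpu.cpu().numpy()  # .numpy() directly on x_gpu would raise an error
```

Note that the CPU conversion shares memory: mutating `a` in place also mutates `x_cpu`.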

blitu12345
  • 3,473
  • 1
  • 20
  • 21
  • How does this compare to `x.cpu().data.numpy()` and `np.asarray(torch_tensor)`? I can't find any documentation that mentions this. – Alex Walczak Apr 15 '18 at 17:02
  • If you want to keep training a tensor on GPU, you'd have to duplicate the numpy array then run `x.data.cuda()` after this, no? (or would the numpy array remain.. perhaps?) – drevicko Aug 26 '19 at 10:36
7

I have found the way. I can first extract the Tensor data from the autograd.Variable using a.data. The rest is then simple: a.data.numpy() gives the equivalent numpy array. Here are the steps:

a = a.data  # a is now torch.Tensor
a = a.numpy()  # a is now numpy array
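Put together, the two steps look like this. A minimal sketch, assuming the old Variable API from the question (in recent PyTorch versions, Variable is deprecated and `a.detach().numpy()` on a plain tensor achieves the same thing):

```python
import torch
from torch.autograd import Variable  # Variable API, as used in the question

v = Variable(torch.arange(4.0))  # wraps a float tensor [0., 1., 2., 3.]
t = v.data                       # extract the underlying torch.Tensor
arr = t.numpy()                  # equivalent numpy array
print(arr)                       # [0. 1. 2. 3.]
```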
Bishwajit Purkaystha
  • 1,975
  • 7
  • 22
  • 30