How to convert a PyTorch autograd.Variable to NumPy?

The title says it all. I want to convert a PyTorch autograd.Variable to its equivalent numpy array. The official documentation advocates using a.numpy() to get the equivalent numpy array (for a PyTorch tensor), but this gives me the following error:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/bishwajit/anaconda3/lib/python3.6/site-packages/torch/autograd/", line 63, in __getattr__
    raise AttributeError(name)
AttributeError: numpy
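For reference, a minimal snippet that reproduces the error (assuming a Variable wrapping a plain CPU FloatTensor, on a PyTorch release where Variable and Tensor are still separate types):

import torch
from torch.autograd import Variable

a = Variable(torch.ones(2, 3))  # a is an autograd.Variable
a.numpy()                       # raises AttributeError: numpy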

Is there any way I can circumvent this?



answered 10 months ago Bishwajit Purkaystha #1

I have found the way. Actually, I can first extract the Tensor data from the autograd.Variable by using a.data. The rest is really simple: I just use a.data.numpy() to get the equivalent numpy array. Here are the steps:

a = a.data  # a is now torch.Tensor
a = a.numpy()  # a is now numpy array
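Put together, a self-contained sketch (assuming a plain CPU Variable; note that the resulting numpy array shares memory with the underlying tensor):

import torch
from torch.autograd import Variable

a = Variable(torch.ones(2, 3))  # autograd.Variable wrapping a FloatTensor
t = a.data                      # torch.Tensor, unwrapped from the Variable
n = t.numpy()                   # numpy.ndarray sharing the tensor's memory
print(type(n), n.shape)         # <class 'numpy.ndarray'> (2, 3)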

answered 10 months ago blitu12345 #2

There are two possible cases:

  • Using GPU: if you try to convert a CUDA float tensor directly to numpy, it will throw the following error.

    RuntimeError: numpy conversion for FloatTensor is not supported

    So you can't convert a CUDA float tensor directly to numpy; you have to convert it into a CPU float tensor first and then convert that to numpy, as in the sketch after this list.

  • Using CPU: converting a CPU tensor is straightforward, as the same sketch below shows.
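A sketch covering both cases (assuming a CUDA-capable machine for the GPU half; the .cpu() call is what makes the numpy conversion possible):

import torch
from torch.autograd import Variable

# CPU case: straightforward
a = Variable(torch.ones(2, 3))
a_np = a.data.numpy()            # CPU FloatTensor -> numpy array

# GPU case: move the tensor back to host memory first
b = Variable(torch.ones(2, 3).cuda())
# b.data.numpy()                 # would raise the RuntimeError quoted above
b_np = b.data.cpu().numpy()      # cuda FloatTensor -> CPU FloatTensor -> numpy array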
