The title says it all. I want to convert a PyTorch autograd.Variable to its equivalent numpy array. The official documentation recommends a.numpy() to get the equivalent numpy array (for a PyTorch tensor). But this gives me the following error:
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/bishwajit/anaconda3/lib/python3.6/site-packages/torch/autograd/variable.py", line 63, in __getattr__
    raise AttributeError(name)
AttributeError: numpy
Is there any way I can circumvent this?
I have found the way. I can first extract the tensor data from the autograd.Variable using a.data; the rest is simple. I just use a.data.numpy() to get the equivalent numpy array. Here are the steps:

a = a.data    # a is now a torch.Tensor
a = a.numpy() # a is now a numpy array
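As a self-contained sketch of the steps above (the variable name a and the tensor shape are illustrative, and this assumes the old Variable API from the question):

```python
import numpy as np
import torch
from torch.autograd import Variable

# Illustrative Variable wrapping a 2x3 CPU tensor.
a = Variable(torch.ones(2, 3))

a = a.data     # a is now a torch.Tensor
a = a.numpy()  # a is now a numpy array

print(type(a))  # <class 'numpy.ndarray'>
```

The two lines can also be chained as a.data.numpy().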
There are two possible cases:
Using GPU: If you try to convert a CUDA float tensor directly to numpy, it will throw an error:

RuntimeError: numpy conversion for FloatTensor is not supported

So you can't convert a CUDA float tensor directly to numpy; you have to move it to a CPU tensor first (with .cpu()) and then convert that to numpy.
Using CPU: Converting a CPU tensor is straightforward.
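Both cases can be sketched together (the variable v and its shape are illustrative; the CUDA branch only runs if a GPU is actually available):

```python
import numpy as np
import torch
from torch.autograd import Variable

v = Variable(torch.rand(2, 3))

if torch.cuda.is_available():
    v = v.cuda()                # v now wraps a CUDA float tensor
    # v.data.numpy() would raise an error here,
    # so copy the tensor to host memory first:
    arr = v.data.cpu().numpy()
else:
    # CPU case: direct conversion works.
    arr = v.data.numpy()

print(arr.shape)  # (2, 3)
```

Note that .cpu() is a no-op on a tensor that is already on the CPU, so v.data.cpu().numpy() is safe to use in both cases.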