
When building and using neural networks with PyTorch, you frequently need to convert between NumPy arrays and torch tensors. This article gives a brief overview of the conversions between PyTorch and NumPy and the points to watch out for.[1]
Converting a tensor to an array
import torch

a = torch.ones(5)
print(a)
out:
tensor([1., 1., 1., 1., 1.])
Convert it using the tensor's numpy() method:
b = a.numpy()
print(b)
out:
[1. 1. 1. 1. 1.]
Note that the two objects (the NumPy array and the tensor) now share the same underlying memory, so changing one also changes the other:
a.add_(1)
print(a)
print(b)
out:
tensor([2., 2., 2., 2., 2.])
[2. 2. 2. 2. 2.]
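If you need an independent NumPy copy rather than a shared view, one option (a minimal sketch, not the only way; the name c is just for illustration) is to clone the tensor before converting:
c = a.clone().numpy()   # clone() allocates new memory, so c no longer tracks a
a.add_(1)
print(c)                # still [2. 2. 2. 2. 2.], even though a has changed again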
Converting an array to a tensor
Use from_numpy():
import numpy as np
a = np.ones(5)
b = torch.from_numpy(a)
np.add(a, 1, out=a)
print(a)
print(b)
out:
[2. 2. 2. 2. 2.]
tensor([2., 2., 2., 2., 2.], dtype=torch.float64)
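If instead you want a tensor that owns its own copy of the data, a common alternative (sketched here; the name c is just for illustration) is torch.tensor(), which copies rather than shares memory:
c = torch.tensor(a)     # copies the array's data instead of sharing memory
np.add(a, 1, out=a)
print(a)                # [3. 3. 3. 3. 3.]
print(c)                # tensor([2., 2., 2., 2., 2.], dtype=torch.float64), unchanged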
There are also CUDA tensors, which can run computations on the GPU.
First check whether CUDA is available:
torch.cuda.is_available()
If it returns True, CUDA is installed and working.
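In practice, a common pattern (a sketch, assuming you want an automatic CPU fallback) is to pick the device once and reuse it:
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
x = torch.randn(3, device=device)   # created directly on the chosen device
print(x.device)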
Try creating tensors on the GPU:
if torch.cuda.is_available():
    x = torch.randn(1)
    device = torch.device("cuda")          # a CUDA device object
    y = torch.ones_like(x, device=device)  # directly create a tensor on GPU
    x = x.to(device)                       # or just use strings ``.to("cuda")``
    z = x + y
    print(z)
    print(z.to("cpu", torch.double))       # ``.to`` can also change dtype together!
The official documentation puts it this way:
Tensors can be moved onto any device using the .to method.
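One related caveat worth knowing: .numpy() only works on CPU tensors, so a CUDA tensor has to be moved back to the CPU first, for example:
if torch.cuda.is_available():
    z_cpu = z.to("cpu")      # or equivalently z.cpu()
    print(z_cpu.numpy())     # calling .numpy() directly on a CUDA tensor raises an error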
References
- ^ Official documentation: https://pytorch.org/tutorials/beginner/deep_learning_60min_blitz.html