How to np.concatenate list with tensors?


I have a list with tensors:

[tensor([[0.4839, 0.3282, 0.1773,  ..., 0.2931, 1.2194, 1.3533],
        [0.4395, 0.3462, 0.1832,  ..., 0.7184, 0.4948, 0.3998]],
        device='cuda:0'),
 tensor([[1.0586, 0.2390, 0.2315,  ..., 0.9662, 0.1495, 0.7092],
        [0.6403, 0.0527, 0.1832,  ..., 0.1467, 0.8238, 0.4422]],
        device='cuda:0')]

I want to stack all [1xfeatures] matrices into one with np.concatenate(X), but this error appears:

TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.

How to fix it?

CodePudding user response:

Your tensors are still on the GPU, while NumPy operations happen on the CPU. You can either move both tensors back to the CPU first, e.g. numpy.concatenate((a.cpu(), b.cpu())), as the error message indicates.
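A minimal sketch of that route, using two small random tensors for a and b (placed on the GPU only if one is available):

import numpy as np
import torch

device = 'cuda:0' if torch.cuda.is_available() else 'cpu'
a = torch.rand(2, 4, device=device)
b = torch.rand(2, 4, device=device)

# .cpu() copies each tensor to host memory so NumPy can read it
result = np.concatenate((a.cpu().numpy(), b.cpu().numpy()))
print(result.shape)  # (4, 4)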

Or you can avoid moving off the GPU and use torch.cat():

import torch

a = torch.ones(6)
b = torch.zeros(6)

# concatenate along the first (and only) dimension
torch.cat([a, b], dim=0)
# tensor([1., 1., 1., 1., 1., 1., 0., 0., 0., 0., 0., 0.])
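Note that torch.cat() also accepts the list from the question directly, so all of the GPU tensors can be concatenated in one call (my_list is a placeholder name for your list):

import torch

device = 'cuda:0' if torch.cuda.is_available() else 'cpu'
my_list = [torch.rand(2, 4, device=device), torch.rand(2, 4, device=device)]

# the tensors are concatenated on the device they already live on
merged = torch.cat(my_list, dim=0)  # shape: torch.Size([4, 4])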

CodePudding user response:

The NumPy function np.concatenate() expects inputs it can convert to NumPy arrays, but your data are tensors. The error occurs because that conversion fails while the tensors are located on your GPU.

You may want to keep these tensors on your GPU, in which case you can use either:

  • the torch.cat() function if you're using PyTorch
  • the tf.concat() function if you're using TensorFlow

Alternatively, you may move the tensors to the CPU. To do that, simply add .cpu() to your tensors before calling np.concatenate(), as the error message indicates.
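If you eventually need a NumPy array anyway, one reasonable pattern (a sketch, assuming your list is called my_list) is to concatenate on the GPU first and make a single transfer to host memory at the end:

import numpy as np
import torch

device = 'cuda:0' if torch.cuda.is_available() else 'cpu'
my_list = [torch.rand(2, 4, device=device), torch.rand(2, 4, device=device)]

# one concatenation on the GPU, then one copy to the CPU
result = torch.cat(my_list, dim=0).cpu().numpy()
print(type(result), result.shape)  # <class 'numpy.ndarray'> (4, 4)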

CodePudding user response:

NumPy works on the CPU, but your tensors are on the GPU.

Use torch.cat() instead.
