TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first


I've read questions about the same error here on Stack Overflow, but unfortunately their answers did not work for me. I have the following function defined:

import matplotlib.pyplot as plt

def plot_loss(train_loss, validation_loss, title):
    plt.grid(True)
    plt.xlabel("subsequent epochs")
    plt.ylabel('average loss')
    plt.plot(range(1, len(train_loss) + 1), train_loss, 'o-', label='training')
    plt.plot(range(1, len(validation_loss) + 1), validation_loss, 'o-', label='validation')
    plt.legend()
    plt.title(title)
    plt.show()

and the problem is that in line

    plt.plot(range(1, len(validation_loss) + 1), validation_loss, 'o-', label='validation')

this error occurs. The traceback then goes through gca().plot(...) in pyplot.py and ends at return self.numpy() in tensor.py.
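matplotlib converts its inputs with NumPy, which cannot read GPU memory, so any CUDA tensor inside train_loss or validation_loss triggers this error. One safe pattern is to store plain Python floats (via .item()) rather than tensors in the loss lists; a minimal sketch, with made-up loss values for illustration:

```python
import torch

# Use the GPU when present, otherwise fall back to the CPU
device = "cuda" if torch.cuda.is_available() else "cpu"

# Hypothetical per-epoch losses, as they might come out of a training loop
losses = [torch.tensor(v, device=device) for v in (0.5, 0.25, 0.125)]

# .item() copies a scalar tensor to host memory and returns a Python
# float, so the resulting list is safe to hand to plt.plot()
loss_values = [l.item() for l in losses]
print(loss_values)
```

Because the conversion happens once per epoch on a scalar, the extra GPU-to-host copy is negligible.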

In the testing phase I have the following code:

    test_loss = 0
    correct = 0
    with torch.no_grad():
        for data, target in test_loader:
            data, target = data.to(device), target.to(device)
            output = model(data)
            # calculate and sum up batch loss
            test_loss += F.nll_loss(output, target, reduction='mean')
            # get the index of the class with the max log-probability
            prediction = output.argmax(dim=1)
            # item() returns the value of the given tensor
            correct += prediction.eq(target).sum().item()
    test_loss /= len(test_loader)
    return test_loss

I've tried to change the line

  prediction = output.argmax(dim=1)

as described in other questions about the same error, but unfortunately it did not help.

I've tried to run this code on Google Colab and also on my local machine with a GPU (CUDA is available), but unfortunately the same error occurs in both cases.
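A quick way to narrow down which tensor is still on the GPU is to inspect its .device (or .is_cuda) before plotting. A minimal sketch, which falls back to the CPU when no GPU is present:

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
t = torch.zeros(3, device=device)

# A tensor's device tells you whether a .cpu() call is needed
# before converting it to numpy or passing it to matplotlib
print(t.device.type, t.is_cuda)
```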

EDIT: I've managed to find the solution in this link. It seems to be connected with moving data between CUDA and the CPU. I invoked .cpu() and it's solved:

    test_loss = 0
    correct = 0
    with torch.no_grad():
        for data, target in test_loader:
            data, target = data.to(device), target.to(device)
            output = model(data)
            # calculate and sum up batch loss
            test_loss += F.nll_loss(output, target, reduction='mean')
            # get the index of the class with the max log-probability
            prediction = output.argmax(dim=1)
            # item() returns the value of the given tensor
            correct += prediction.eq(target).sum().item()
    test_loss /= len(test_loader)
    return test_loss.cpu()
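An alternative that sidesteps the final .cpu() call entirely is to accumulate the loss with .item(), so test_loss is a plain Python float from the start. The model and loader below are made-up stand-ins just to keep the sketch runnable; substitute your own:

```python
import torch
import torch.nn.functional as F

device = "cuda" if torch.cuda.is_available() else "cpu"

# Hypothetical stand-ins for the real model and test_loader
model = torch.nn.Linear(4, 3).to(device)
test_loader = [(torch.randn(8, 4), torch.randint(0, 3, (8,))) for _ in range(5)]

def evaluate(model, test_loader):
    model.eval()
    test_loss, correct = 0.0, 0
    with torch.no_grad():
        for data, target in test_loader:
            data, target = data.to(device), target.to(device)
            output = F.log_softmax(model(data), dim=1)
            # .item() returns a Python float, so test_loss never lives on the GPU
            test_loss += F.nll_loss(output, target, reduction='mean').item()
            prediction = output.argmax(dim=1)
            correct += prediction.eq(target).sum().item()
    return test_loss / len(test_loader)

loss = evaluate(model, test_loader)
print(type(loss))  # a plain float, safe to plot directly
```

This also avoids keeping the whole accumulated loss graph-free tensor on the GPU between batches.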
