PyTorch RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same


I've found a lot of answers on this topic but none of them helped.

The error is:

RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same

The training loop:

import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = BrainModel()
model.to(device)
loss_function = nn.BCELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)

for epoch in range(EPOCHS):
    for sequences, labels in train_dataloader:
        optimizer.zero_grad()
        # reshape the labels and move both tensors to the device
        sequences, labels = sequences.to(device), labels.view(BATCH_SIZE, -1).to(device)
        print(next(model.parameters()).is_cuda, sequences.get_device(), labels.get_device())
        out = model(sequences) # ERROR HERE
        out, labels = out.type(torch.FloatTensor), labels.type(torch.FloatTensor)
        loss = loss_function(out, labels)
        loss.backward()
        optimizer.step()

There is one print inside the loop; its output is:

True 0 0

which means that everything - the model, the x, and the y - is on CUDA. The same code works fine when I use the CPU but not the GPU. I do not understand what else I need to move to the device. I have always written it this way and it has always worked fine :C

CodePudding user response:

You need to move the labels to the CUDA device as well before computing the loss:

loss = loss_function(out, labels.to(device))
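For context, a minimal sketch of the device-mismatch rule (assuming a CUDA device is available; the shapes and values are made up): nn.BCELoss requires its input and target to live on the same device.

import torch
import torch.nn as nn

device = torch.device("cuda")
loss_function = nn.BCELoss()

out = torch.rand(4, 1, device=device)  # model output, already on the GPU
labels = torch.rand(4, 1)              # target still on the CPU

# loss_function(out, labels) would raise a device-mismatch RuntimeError;
# moving the target over first fixes it:
loss = loss_function(out, labels.to(device))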

CodePudding user response:

What fixed it for me: using nn.ModuleList instead of a plain Python list, so that the convolution layers are registered as submodules and moved by model.to(device):

        self.convolutions1 = nn.ModuleList([nn.Conv2d(1, 3, 5, 2, 2) for _ in range(sequence_size)])
        emb_dim = calc_embedding_size(self.convolutions1[0], input_size)
        self.convolutions2 = nn.ModuleList([nn.Conv2d(3, 6, 3, 1, 0) for _ in range(sequence_size)])
        emb_dim = calc_embedding_size(self.convolutions2[0], emb_dim)
        self.convolutions3 = nn.ModuleList([nn.Conv2d(6, 9, 5, 1, 0) for _ in range(sequence_size)])
        emb_dim = calc_embedding_size(self.convolutions3[0], emb_dim)
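
This matters because submodules stored in a plain Python list are invisible to PyTorch: model.to(device) never moves their weights to the GPU, and model.parameters() does not see them either, so the convolutions stayed on the CPU while the input was on CUDA. A minimal sketch of the difference (the class names and layer sizes here are made up for illustration):

import torch.nn as nn

class Broken(nn.Module):
    def __init__(self):
        super().__init__()
        # Plain Python list: these layers are NOT registered, so
        # .to(device), .parameters(), and state_dict() all miss them.
        self.convs = [nn.Conv2d(1, 3, 5) for _ in range(2)]

class Fixed(nn.Module):
    def __init__(self):
        super().__init__()
        # nn.ModuleList registers each layer as a submodule.
        self.convs = nn.ModuleList(nn.Conv2d(1, 3, 5) for _ in range(2))

print(len(list(Broken().parameters())))  # 0 -- the optimizer would see nothing
print(len(list(Fixed().parameters())))   # 4 -- 2 weights + 2 biases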

And use torch.cuda.FloatTensor when training on GPU:

out, labels = out.type(torch.cuda.FloatTensor), labels.type(torch.cuda.FloatTensor)
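
A device-agnostic alternative (my suggestion, not part of the original answer) is to cast with .float(), which changes only the dtype and leaves each tensor on whatever device it already occupies, so the same line works for both CPU and GPU runs:

# .float() converts to torch.float32 without moving the tensor
# between devices.
out, labels = out.float(), labels.float()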