RuntimeError: expected scalar type Double but found Float


I am working with a GCNN (graph convolutional neural network). My input data is in float64, but whenever I run my code this error is raised. I tried converting all tensors to double, and it didn't work. My data is originally in NumPy arrays, which I then convert into PyTorch tensors.

Here is my data: I convert the NumPy arrays into tensors and wrap the tensors in a torch_geometric Data object to run the GCNN.

e_index1 = torch.tensor(edge_index)  # edge index (NumPy int64 array)
x1 = torch.tensor(x)                 # node features (NumPy float64 array)
y1 = torch.tensor(y)                 # labels (NumPy float64 array)

print(x.dtype)
print(y.dtype)
print(edge_index.dtype)

from torch_geometric.data import Data
data = Data(x=x1, edge_index=e_index1, y=y1)

Output:

float64
float64
int64
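As far as I understand, torch.tensor keeps the dtype of the source NumPy array, and NumPy defaults to float64, which is why x1 and y1 come out as double tensors. A minimal check (with a dummy array, not my real data):

import numpy as np
import torch

a = np.zeros(3)          # NumPy defaults to float64
t = torch.tensor(a)      # torch.tensor preserves the NumPy dtype
print(a.dtype, t.dtype)  # float64 torch.float64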

Here is my GCNN class and the rest of the code:

import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv


class GCN(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = GCNConv(data.num_node_features, 16)
        self.conv2 = GCNConv(16, data.num_node_features)

    def forward(self, data):
        x, edge_index = data.x, data.edge_index

        x = self.conv1(x, edge_index)
        x = F.relu(x)
        x = F.dropout(x, training=self.training)
        x = self.conv2(x, edge_index)

        return F.log_softmax(x, dim=1)

device = torch.device('cpu')
model = GCN().to(device)
data = data.to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=5e-4)

model.train()
for epoch in range(10):
    optimizer.zero_grad()
    out = model(data)
    loss = F.nll_loss(out[data.train_mask], data.y[data.train_mask])
    loss.backward()
    optimizer.step()

Error log:

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-148-e816c251670b> in <module>
      7 for epoch in range(10):
      8     optimizer.zero_grad()
----> 9     out = model(data)
     10     loss = F.nll_loss(out[data.train_mask], data.y[data.train_mask])
     11     loss.backward()

5 frames
/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
   1188         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1189                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1190             return forward_call(*input, **kwargs)
   1191         # Do not call functions when jit is used
   1192         full_backward_hooks, non_full_backward_hooks = [], []

<ipython-input-147-c1bfee724570> in forward(self, data)
     13         x, edge_index = data.x.type(torch.DoubleTensor), data.edge_index
     14 
---> 15         x = self.conv1(x, edge_index)
     16         x = F.relu(x)
     17         x = F.dropout(x, training=self.training)

/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
   1188         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1189                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1190             return forward_call(*input, **kwargs)
   1191         # Do not call functions when jit is used
   1192         full_backward_hooks, non_full_backward_hooks = [], []

/usr/local/lib/python3.8/dist-packages/torch_geometric/nn/conv/gcn_conv.py in forward(self, x, edge_index, edge_weight)
    193                     edge_index = cache
    194 
--> 195         x = self.lin(x)
    196 
    197         # propagate_type: (x: Tensor, edge_weight: OptTensor)

/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
   1188         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1189                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1190             return forward_call(*input, **kwargs)
   1191         # Do not call functions when jit is used
   1192         full_backward_hooks, non_full_backward_hooks = [], []

/usr/local/lib/python3.8/dist-packages/torch_geometric/nn/dense/linear.py in forward(self, x)
    134             x (Tensor): The features.
    135         """
--> 136         return F.linear(x, self.weight, self.bias)
    137 
    138     @torch.no_grad()

RuntimeError: expected scalar type Double but found Float

I also tried the solutions given in Stack Overflow posts, but they didn't work; the same error keeps appearing.

CodePudding user response:

You can use model.double() to convert all of the model's parameters to double precision. This should give a compatible model, given that your input data is double. Keep in mind, though, that double precision is usually slower than single precision because of its higher-precision arithmetic.
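For example, a minimal sketch of your training setup with the model cast to double (variable names taken from your snippet):

model = GCN().double().to(device)  # cast all float parameters and buffers to float64
data = data.to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=5e-4)

model.train()
for epoch in range(10):
    optimizer.zero_grad()
    out = model(data)  # data.x is float64 and the weights now are too
    loss = F.nll_loss(out[data.train_mask], data.y[data.train_mask])  # note: nll_loss expects long (int64) class targets
    loss.backward()
    optimizer.step()

Alternatively, if you don't need the extra precision, you can go the other way and cast the node features to single precision once, e.g. data.x = data.x.float(), and keep the model at its default float32.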
