Is it possible to use PyTorch's `BatchNorm1d` with `BCEWithLogitsLoss`?

Time:03-22

I am attempting to normalize the outputs of my classifier, which uses BCEWithLogitsLoss as part of its loss function. As far as I know, this loss applies the sigmoid function internally and then computes the binary cross-entropy.

I want to normalize the output of the sigmoid function prior to calculating the loss. Is it possible to use BatchNorm1d together with BCEWithLogitsLoss? Or is the only option to apply torch.sigmoid and BatchNorm1d manually and then calculate BCELoss separately?

Thanks.

CodePudding user response:

You can use BCELoss instead of BCEWithLogitsLoss, which lets you place your own layers between the sigmoid and the loss. BCEWithLogitsLoss is described in the PyTorch docs as:

This loss combines a Sigmoid layer and the BCELoss in one single class. This version is more numerically stable than using a plain Sigmoid followed by a BCELoss.

Since the sigmoid is baked into BCEWithLogitsLoss, you cannot insert BatchNorm1d between them; with BCELoss you can apply the sigmoid and batch norm yourself.
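As a sanity check on that doc description (a minimal sketch, not from the original answer): BCEWithLogitsLoss on raw logits should match BCELoss on sigmoid(logits), since the former just fuses the two steps.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
logits = torch.randn(4, 3)               # raw, unnormalized scores
target = torch.empty(4, 3).random_(2)    # random 0/1 targets

# Fused version: sigmoid is applied inside the loss.
fused = nn.BCEWithLogitsLoss()(logits, target)

# Manual version: sigmoid first, then plain BCELoss.
manual = nn.BCELoss()(torch.sigmoid(logits), target)

print(torch.allclose(fused, manual, atol=1e-6))
```

The two values agree up to floating-point tolerance; the fused version is simply the more numerically stable implementation.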

For example,

import torch
import torch.nn as nn

m = nn.Sigmoid()
bn = nn.BatchNorm1d(3)                      # normalize across the 3 features
loss = nn.BCELoss()                         # expects probabilities in [0, 1]
input = torch.randn((2, 3), requires_grad=True)
target = torch.empty(2, 3).random_(2)       # random 0/1 targets
output = loss(m(bn(input)), target)         # batch norm, then sigmoid, then BCELoss
output.backward()
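Note that the example normalizes the logits before the sigmoid, not after it (batch norm after a sigmoid can push values outside [0, 1], which BCELoss rejects). If normalizing the logits is acceptable for your use case, you can keep BCEWithLogitsLoss as-is and feed it the BatchNorm1d output directly; a minimal sketch under that assumption:

```python
import torch
import torch.nn as nn

bn = nn.BatchNorm1d(3)                      # normalize the raw logits
loss = nn.BCEWithLogitsLoss()               # sigmoid is applied internally
input = torch.randn((2, 3), requires_grad=True)
target = torch.empty(2, 3).random_(2)       # random 0/1 targets
output = loss(bn(input), target)            # batch norm feeds straight into the loss
output.backward()
print(output.item() >= 0.0)
```

This keeps the numerical-stability benefit of the fused loss while still batch-normalizing the classifier outputs.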