Building a Neural Network for Binary Classification on Top of Pre-Trained Embeddings Not Working?


I am trying to build a neural network on top of the embeddings that a pre-trained model outputs. Specifically: I have the logits of a base model saved to disk, where each example is an array of shape 512 (originally corresponding to an image) with an associated label (0 or 1). This is what I am doing right now:
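The embeddings are read back with a simple Dataset/DataLoader along these lines (the EmbeddingDataset class, the .npy file names and the batch size below are just a sketch of my setup, not the exact code):

import numpy as np
import torch
from torch.utils.data import Dataset, DataLoader

class EmbeddingDataset(Dataset):
    # each saved example is a 512-d embedding with a 0/1 label
    def __init__(self, embeddings_path='embeddings.npy', labels_path='labels.npy'):
        self.embeddings = np.load(embeddings_path)  # shape (N, 512)
        self.labels = np.load(labels_path)          # shape (N,)

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        return {
            # kept as (1, 512) / (1,) per example, hence the squeeze() calls later on
            'embeddings': torch.tensor(self.embeddings[idx], dtype=torch.float32).reshape(1, 512),
            'labels': torch.tensor([self.labels[idx]], dtype=torch.long),
        }

train_loader = DataLoader(EmbeddingDataset(), batch_size=16, shuffle=True)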

Here's the model definition and training loop that I have. Right now it is just a single Linear layer, to make sure that it works; however, when I run this script, the loss starts at ~0.4 and not at ~0.7, which is what you would expect at initialization for binary classification. Can anyone spot where I am going wrong?

import torch
import torch.nn as nn
import torch.optim as optim
from transformers.modeling_outputs import SequenceClassifierOutput

class ClassNet(nn.Module):
    def __init__(self, num_labels=2):
        super(ClassNet, self).__init__()
        self.num_labels = num_labels
        # single linear layer mapping the 512-d embedding to 2 class logits
        self.classifier = nn.Linear(512, num_labels) if num_labels > 0 else nn.Identity()

    def forward(self, inputs, labels=None):
        logits = self.classifier(inputs)
        loss = None
        if labels is not None:
            # only compute the loss when labels are passed in;
            # in the training loop below the loss is computed outside the model
            loss_fct = nn.CrossEntropyLoss()
            loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
        return SequenceClassifierOutput(
            loss=loss,
            logits=logits
        )


model = ClassNet()
optimizer = optim.Adam(model.parameters(), lr=1e-4, weight_decay=5e-3)  # L2 regularization
loss_fct = nn.CrossEntropyLoss()



for epoch in range(2):  # loop over the dataset multiple times

    running_loss = 0.0
    for i, data in enumerate(train_loader, 0):
        # get the inputs; data is a dict of {'embeddings': ..., 'labels': ...}
        # data['embeddings'] -> torch.Size([1, 512])
        # data['labels'] -> torch.Size([1])
        inputs, labels = data['embeddings'], data['labels']

        # zero the parameter gradients
        optimizer.zero_grad()

        # forward + backward + optimize
        outputs = model(inputs)
        loss = loss_fct(outputs.logits.squeeze(1), labels.squeeze())
        loss.backward()
        optimizer.step()

        # print statistics
        running_loss += loss.item()
        if i % 2000 == 1:    # print every 2000 mini-batches
            print('[%d, %5d] loss: %.3f' %
                  (epoch + 1, i + 1, running_loss / 2000))
            running_loss = 0.0

An example of printing outputs.logits.squeeze(1) and labels.squeeze():

#outputs.logits.squeeze(1)
tensor([[-0.2214,  0.2187],
        [ 0.3838, -0.3608],
        [ 0.9043, -0.9065],
        [-0.3324,  0.4836],
        [ 0.6775, -0.5908],
        [-0.8017,  0.9044],
        [ 0.6669, -0.6488],
        [ 0.4253, -0.5357],
        [-1.1670,  1.1966],
        [-0.0630, -0.1150],
        [ 0.6025, -0.4755],
        [ 1.8047, -1.7424],
        [-1.5618,  1.5331],
        [ 0.0802, -0.3321],
        [-0.2813,  0.1259],
        [ 1.3357, -1.2737]], grad_fn=<SqueezeBackward1>)
#labels.squeeze()
tensor([1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0])
#loss
tensor(0.4512, grad_fn=<NllLossBackward>)

CodePudding user response:

You are only printing from the second iteration onward. The condition above effectively fires at steps i = 2000k + 1, but i starts at 0:

if i % 2000 == 1:    # print every 2000 mini-batches
    print('[%d, %5d] loss: %.3f' %
          (epoch + 1, i + 1, running_loss / 2000))

i.e. one gradient descent step has already occurred by the time you first print. That might be enough to go from the initial loss value of -log(1/2) ≈ 0.69 down to the ~0.45 you observed.
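You can sanity-check this by evaluating the loss on a batch before any optimizer step; a rough sketch (reusing your ClassNet and train_loader; the exact value will vary slightly with the random initialization):

import torch
import torch.nn as nn

model = ClassNet()
loss_fct = nn.CrossEntropyLoss()

# with a freshly initialized linear layer the 2-class softmax is close to uniform,
# so the cross-entropy should be near -log(1/2) = 0.693
with torch.no_grad():
    data = next(iter(train_loader))
    inputs, labels = data['embeddings'], data['labels']
    outputs = model(inputs)
    initial_loss = loss_fct(outputs.logits.squeeze(1), labels.squeeze())
    print(initial_loss.item())  # ~0.69 before the first gradient step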
