How do I finetune a model while preserving layer names

When I fine-tune a pretrained resnet152 model, I seem to lose all the named layers I'd like access to. I've included the code for the simple fine-tuned model, along with a printout of the named layers of both the pretrained and fine-tuned versions. I'd like to keep the layer names so I can visualize their outputs in a Class Activation Map.

Code

import torch.nn as nn
import torchvision.models as models

class ConvNet3(nn.Module):
    def __init__(self):
        super().__init__()
        model = models.resnet152(pretrained=True)
        model.fc = nn.Linear(2048, 10)  # replace the classifier head for 10 classes
        self.model = model

    def forward(self, x):
        return self.model(x)  # [batch_size, 10]

model = ConvNet3().eval()
print([n for n, _ in model.named_children()])

model = models.resnet152(pretrained=True).eval()
print([n for n, _ in model.named_children()])

Output

['model']
['conv1', 'bn1', 'relu', 'maxpool', 'layer1', 'layer2', 'layer3', 'layer4', 'avgpool', 'fc']

CodePudding user response:

The layers are not lost; you are encapsulating the original ResNet model inside your own class. Since the ResNet model is stored under the model attribute of ConvNet3, you can reach its named layers with:

print([n for n, _ in model.model.named_children()])

Unless you need it for another reason, the wrapper class seems unnecessary; a simpler approach would be something like the following:

model = models.resnet152(pretrained=True)
model.fc = nn.Linear(2048, 10)
model.eval()
print([n for n, _ in model.named_children()])