Get input of fully connected layer of ResNet model during runtime


Found a solution; left it as an answer to this question down below :)

Info about the project: Classification task with 2 classes.

I am trying to get the input of the fully connected layer of my model (the 2048-dimensional feature vector) for each image I put into the model during runtime. I plan to use these features, once the model is done training or testing on all images, to visualize them with UMAP.

The model:

#Load resnet
import torchvision
import torch.nn as nn

def get_model():
    model = torchvision.models.resnet50(pretrained=True)
    num_ftrs = model.fc.in_features
    model.fc = nn.Linear(num_ftrs, 2) #replace the head for 2 classes
    return model
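
For reference, a quick shape check (a sketch, assuming a standard 224x224 RGB input) showing that the replaced head outputs 2 logits per image, while the 2048-dimensional vector is the input to fc:

import torch

model = get_model()
model.eval()
with torch.no_grad():
    out = model(torch.randn(1, 3, 224, 224))
print(out.shape)             # torch.Size([1, 2]) -- output of fc
print(model.fc.in_features)  # 2048 -- size of the input to fc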

The relevant part of the pl module:

class classifierModel(pl.LightningModule):
    def __init__(self, model):
        super().__init__()
        self.model = model
        self.learning_rate = 0.0001

    def training_step(self, batch, batch_idx):
        x = batch['image']
        y = batch['targets']
        x_hat = self.model(x)
        output = nn.CrossEntropyLoss()
        loss = output(x_hat, y)
        return loss

    def test_step(self, batch, batch_idx):
        x = batch['image']
        y = batch['targets']
        x_hat = self.model(x)
Is it possible to do this by adding an empty list to the __init__ of the pl module and then appending the output after x_hat = model(x) is executed? How would I know that, after x_hat = model(x) is executed, the features aren't immediately deleted/discarded?

CodePudding user response:

x_hat is the output of the final fc layer, so with the replaced head its shape is [batch_size, 2], not [batch_size, 2048] (the 2048-dimensional vector is the input to fc, not its output). If x_hat is the vector you want to keep, just modify your training step to also store it:

class classifierModel(pl.LightningModule):
    def __init__(self, model):
        super().__init__()
        self.model = model
        self.learning_rate = 0.0001
        self.fc_outputs = []

    def training_step(self, batch, batch_idx):
        x = batch['image']
        y = batch['targets']
        x_hat = self.model(x)
        self.fc_outputs.append(x_hat)
        output = nn.CrossEntropyLoss()
        loss = output(x_hat, y)
        return loss

The values of x_hat will not be deleted unless you explicitly call del x_hat before assigning them elsewhere. Once you have assigned the values to another variable (in your case, by appending the tensor to a list), the underlying memory is not deallocated, because the list still holds a reference to it even after the original variable (x_hat) goes out of scope. Python is relatively safe in this respect: it tracks references at runtime and only frees memory once nothing refers to it anymore.
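
One caveat worth adding (not from the original answer): during training, x_hat still carries its autograd graph, so appending it as-is keeps the entire graph for every batch alive in memory. Detaching it (and optionally moving it to CPU) before storing avoids this:

self.fc_outputs.append(x_hat.detach().cpu())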

CodePudding user response:

I was able to do this by registering a forward hook on the avgpool layer (whose output is exactly the input to fc) and saving it on each test_step:

#Define hook:
def get_features(name):
    #returns a hook that stores the layer's output in the global `features` dict
    def hook(model, input, output):
        features[name] = output.detach()
    return hook

Now when I load my model, I register the hook:

#Load resnet model:
from torchvision import models

def get_model():
    model = models.resnet50(pretrained=True)
    num_ftrs = model.fc.in_features
    model.fc = nn.Linear(num_ftrs, 2)
    model.avgpool.register_forward_hook(get_features('feats')) #register the hook
    return model
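
As a side note (my addition, not from the original post): register_forward_hook returns a handle, so the hook can be removed once feature extraction is done:

handle = model.avgpool.register_forward_hook(get_features('feats'))
# ... run the test loop ...
handle.remove()  # stop capturing features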

I did not need to change the __init__ of the PyTorch Lightning module, only the test_step function:

FEATS = []     # placeholder for all extracted features
features = {}  # placeholder for the current batch's features (filled by the hook)

class classifierModel(pl.LightningModule):
    def __init__(self, model):
        super().__init__()
        self.model = model
        self.learning_rate = 0.0001

    def test_step(self, batch, batch_idx):
        x = batch['image']
        y = batch['targets']
        x_hat = self.model(x)
        FEATS.append(features['feats'].cpu().numpy()) #added this line to save output

Now we have the output FEATS[0].shape --> (16, 2048, 1, 1), which is what I wanted to get (16 is the batch size I use).
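
Since the stated goal was UMAP visualization, here is a minimal sketch of feeding the collected features to it (assuming the umap-learn package; names follow the code above):

import numpy as np
import umap  # pip install umap-learn

feats = np.concatenate(FEATS, axis=0)                       # (N, 2048, 1, 1)
feats = feats.reshape(feats.shape[0], -1)                   # flatten to (N, 2048)
embedding = umap.UMAP(n_components=2).fit_transform(feats)  # (N, 2)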
