How to set specific values for the weight and bias in a neural net?


I want to initialize the weights and biases of the linear layers in my PyTorch neural network with specific values. Below is the code for my network:

import torch
import torch.nn as nn

class NeuralNet(nn.Module):
    def __init__(self, weights, bias):
        super(NeuralNet, self).__init__()

        self.weights = weights
        self.bias = bias

        self.nn = nn.Sequential(
            nn.Linear(3, 3),
            nn.ReLU(),
            nn.Linear(3, 3),
            nn.ReLU(),
            nn.Linear(3, 3),
            nn.ReLU(),
            nn.Linear(3, 1),
            nn.ReLU(),
        )



    def forward(self, a, b, c):
        a = torch.flatten(a)  # shape: (n,)
        b = torch.flatten(b)  # shape: (n,)
        c = torch.flatten(c)  # shape: (n,)

        y = torch.stack((a, b, c), 1)  # shape: (n, 3)
        
        y1 = self.nn(y)

        return y1


weights = torch.rand(5)
bias = torch.rand(5)

net = NeuralNet(weights, bias)

Based on my understanding, each layer in the neural net is currently associated with five parameters (weights, bias, a, b, and c). Say I have a list of weight and bias values that I know are pretty close to the actual values; I want to assign them to the corresponding layers in the neural net. How do I go about doing that?

To clarify:

The PyTorch documentation says that nn.Linear has two variables, weight (~Linear.weight) and bias (~Linear.bias). I want to be able to assign values to each of these two attributes for every linear layer in my network. Is there a way to index each linear layer inside nn.Sequential and set values for its weight and bias?

CodePudding user response:

First, when assigning a weight to a linear layer, the new matrix must have the same size as the existing one. The weight matrices of the first three Linear(3, 3) layers are 3×3, and the weight matrix of the final Linear(3, 1) layer is 1×3.

Second, perform the assignment inside a torch.no_grad() block so that the modification is not tracked by autograd.

Third, to access the layers inside an nn.Sequential, index it with square brackets. In the code below, element 0 of self.l is an nn.Linear, element 1 is an nn.ReLU, element 2 is an nn.Linear, and so on.

Finally, wrap the new tensor in torch.nn.Parameter when assigning it, so that it is registered as a parameter of the module.

Code:

import torch
import torch.nn as nn

class NeuralNet(nn.Module):
    def __init__(self):
        super(NeuralNet, self).__init__()
        self.l = nn.Sequential(
            nn.Linear(3, 3),
            nn.ReLU(),
            nn.Linear(3, 3),
            nn.ReLU(),
            nn.Linear(3, 3),
            nn.ReLU(),
            nn.Linear(3, 1)
        )

    def forward(self, a, b, c):
        a = torch.flatten(a)
        b = torch.flatten(b)
        c = torch.flatten(c)
        y = torch.stack((a, b, c), 1)
        
        y = self.l(y)

        return y


net = NeuralNet()
with torch.no_grad():
    print(net.l[0].weight)                              # original weight of the first Linear layer
    net.l[0].weight = nn.Parameter(torch.randn(3, 3))   # assign a new 3x3 weight matrix
    net.l[0].bias = nn.Parameter(torch.randn(3))        # assign a new bias vector of length 3
    print(net.l[0].weight)                              # updated weight

Result:

Parameter containing:
tensor([[ 0.0684, -0.2947,  0.0059],
        [-0.0430, -0.0716, -0.1406],
        [-0.4114, -0.3912, -0.5121]], requires_grad=True)
Parameter containing:
tensor([[-0.5050, -0.1787, -0.5368],
        [ 0.6825, -0.5514,  2.1333],
        [-1.5563,  2.4367, -0.6201]], requires_grad=True)
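
To address the original question of assigning a whole list of known weight and bias values, here is a minimal sketch that loops over every Linear layer inside the Sequential container, using the NeuralNet class from the answer above. The names known_weights, known_biases, and linear_layers are illustrative, and the torch.randn calls stand in for whatever values you already have; each tensor must match the shape of the corresponding layer.

import torch
import torch.nn as nn

net = NeuralNet()

# Placeholder values standing in for the known weights and biases.
# The first three Linear layers take 3x3 weights and length-3 biases;
# the final Linear layer takes a 1x3 weight and a length-1 bias.
known_weights = [torch.randn(3, 3), torch.randn(3, 3), torch.randn(3, 3), torch.randn(1, 3)]
known_biases = [torch.randn(3), torch.randn(3), torch.randn(3), torch.randn(1)]

with torch.no_grad():
    # Keep only the Linear modules; the ReLU modules have no parameters.
    linear_layers = [m for m in net.l if isinstance(m, nn.Linear)]
    for layer, w, b in zip(linear_layers, known_weights, known_biases):
        layer.weight = nn.Parameter(w)
        layer.bias = nn.Parameter(b)

An alternative is to call layer.weight.copy_(w) and layer.bias.copy_(b) inside the torch.no_grad() block, which copies the values into the existing Parameter objects instead of replacing them.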