How to limit the weights of gradient descent?

Time:11-25

Here is my code for calculating the weights with stochastic gradient descent. I want to limit the weights to a range, for example (-1, 1). How do I do that? Should I write a class?

def coefficients_sgd(train, l_rate, n_epoch):
  coef = [0.0 for i in range(len(train[0]))]
  for epoch in range(n_epoch):
    sum_error = 0
    for row in train:
      yhat = predict(row, coef)
      error = yhat - row[-1]
      sum_error += error**2
      coef[0] = coef[0] - l_rate * error
      for i in range(len(row)-1):
        coef[i + 1] = coef[i + 1] - l_rate * error * row[i]
    print('>epoch=%d, lrate=%.3f, error=%.3f' % (epoch, l_rate, sum_error))
  return coef

CodePudding user response:

This can be viewed as the problem of mapping a number x into the interval (-1, 1). There are many functions that can do this.

f(x) -> [-1, 1]

Perhaps use a scaled sigmoid curve centred at the origin: f(x) = 2*sigmoid(x) -1

import math

def sigmoid(x):
  return 1 / (1 + math.exp(-x))

def normalise(x):
  return 2*sigmoid(x)-1
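For instance, you could pass each coefficient through normalise after training to squash it into (-1, 1). A minimal sketch (raw_coef is a made-up example, not output from the question's code):

import math

def sigmoid(x):
  # Standard logistic function: maps any real x into (0, 1)
  return 1 / (1 + math.exp(-x))

def normalise(x):
  # Rescale the sigmoid output from (0, 1) to (-1, 1); normalise(0) == 0
  return 2 * sigmoid(x) - 1

# Hypothetical raw coefficients, squashed element-wise into (-1, 1)
raw_coef = [3.7, -0.2, -5.1]
coef = [normalise(w) for w in raw_coef]

Note that this is a nonlinear squashing, not a clamp: large positive weights approach 1 and large negative weights approach -1, but the ordering of the weights is preserved.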

CodePudding user response:

You can perform a simple min-max normalization by doing the following:

import numpy as np

coef = np.array(coef)
coef = (coef - min(coef)) / (max(coef) - min(coef))

This will ensure all elements of coef lie in the interval [0, 1] (the minimum maps to 0 and the maximum to 1) -- note this is not yet the (-1, 1) range the question asks for.
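If the target range is (-1, 1) rather than (0, 1), a further affine rescale of the min-max result gets you there. A sketch, assuming numpy and made-up coefficient values:

import numpy as np

# Hypothetical coefficients to rescale
coef = np.array([3.0, -1.0, 0.5])

# Min-max normalisation into [0, 1]
coef01 = (coef - coef.min()) / (coef.max() - coef.min())

# Affine rescale from [0, 1] into [-1, 1]
coef11 = 2 * coef01 - 1

Unlike the sigmoid approach above, this is a linear transform, so relative spacing between the weights is preserved exactly; the endpoints -1 and 1 are actually attained by the smallest and largest weights.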
