Simple Linear Regression - what am I doing wrong?

I am new to ML and tried to build a linear regression model by myself. The objective is to predict Fahrenheit values from Celsius values. This is my code:

import numpy as np

celsius_q = np.array([-40, -10, 0, 8, 15, 22, 38], dtype=float)
fahrenheit_a = np.array([-40, 14, 32, 46, 59, 72, 100], dtype=float)

inputs = celsius_q
output_expected = fahrenheit_a

# y = m * x + b
m = 100
b = 0
m_gradient = 0
b_gradient = 0
learning_rate = 0.00001
# Forward propagation
for i in range(10000):
    for i in range(len(inputs)):
        m_gradient += (m + (b * inputs[i] - output_expected[i]))
        b_gradient += inputs[i] * (m + (b * inputs[i]) - output_expected[i])

m_new = m - learning_rate * (2/len(inputs)) * m_gradient 
b_new = b - learning_rate * (2/len(inputs)) * b_gradient
    

The code generates wrong weights for m and b, no matter how much I change the learning_rate and the number of epochs. The weights that minimize the loss function have to be:

b = 1.8
m = 32

(These follow from the exact conversion F = 1.8 * C + 32: in my gradient code the prediction is m + b * inputs[i], so m acts as the intercept and b as the slope.)

What am I doing wrong?

CodePudding user response:

The update of m and b needs to happen on every step of the outer loop, and the accumulated gradients need to be reset at the start of each epoch, but even that is not enough. You also need a much larger learning rate: 0.00001 is too small to converge in 10,000 epochs, so the code below uses 0.0002:

import numpy as np
celsius_q = np.array([-40, -10, 0, 8, 15, 22, 38], dtype=float)
fahrenheit_a = np.array([-40, 14, 32, 46, 59, 72, 100], dtype=float)

inputs = celsius_q
output_expected = fahrenheit_a

# model: y = m + b * x  (m is the intercept, b the slope)
m_new = m = 100.0
b_new = b = 0.0
m_gradient = 0.0
b_gradient = 0.0
learning_rate = 0.0002
# Gradient descent
for _ in range(10000):
    # reset the accumulated gradients at the start of each epoch
    m_gradient, b_gradient = 0.0, 0.0

    for i in range(len(inputs)):
        # prediction is m_new + b_new * x; accumulate the error terms
        m_gradient += (m_new + (b_new * inputs[i] - output_expected[i]))
        b_gradient += inputs[i] * (m_new + (b_new * inputs[i]) - output_expected[i])

    # update the parameters once per epoch
    m_new -= learning_rate * m_gradient
    b_new -= learning_rate * b_gradient

print(m_new, b_new)

Getting:

31.952623523538897 1.7979482813813066

which is close to the expected 32 and 1.8.
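
You can double-check that this really is the minimum with a closed-form least-squares fit, for example via np.polyfit (a quick sanity check, assuming NumPy is available):

import numpy as np

celsius_q = np.array([-40, -10, 0, 8, 15, 22, 38], dtype=float)
fahrenheit_a = np.array([-40, 14, 32, 46, 59, 72, 100], dtype=float)

# a degree-1 fit returns the coefficients (slope, intercept)
slope, intercept = np.polyfit(celsius_q, fahrenheit_a, 1)
print(intercept, slope)  # ~31.95 and ~1.798, matching the gradient descent result

The minimum is not exactly 32 and 1.8 because the sample Fahrenheit values are rounded to whole degrees (e.g. 8 °C is exactly 46.4 °F but appears as 46 in the data).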

CodePudding user response:

You should update your parameters continually, on every step of the outer loop (and reset the accumulated gradients each epoch). Something like:

for _ in range(10000):
    m_gradient, b_gradient = 0, 0

    for i in range(len(inputs)):
        m_gradient += (m + (b * inputs[i] - output_expected[i]))
        b_gradient += inputs[i] * (m + (b * inputs[i]) - output_expected[i])

    m -= learning_rate * m_gradient
    b -= learning_rate * b_gradient

(But I didn't check your math.)
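
Note that with the question's original learning_rate = 0.00001 this loop converges very slowly and will not reach 32 and 1.8 within 10,000 epochs, so combine the fix with the larger rate from the first answer. The inner loop can also be vectorized with NumPy; a minimal sketch of the same update rule, reusing the arrays from the question:

import numpy as np

celsius_q = np.array([-40, -10, 0, 8, 15, 22, 38], dtype=float)
fahrenheit_a = np.array([-40, 14, 32, 46, 59, 72, 100], dtype=float)

m, b = 100.0, 0.0        # intercept and slope, as in the question's parameterization
learning_rate = 0.0002   # the larger rate from the first answer

for _ in range(10000):
    # residuals for all samples at once: prediction (m + b * x) minus target
    error = m + b * celsius_q - fahrenheit_a
    # same accumulated gradients as the explicit loops above
    m -= learning_rate * error.sum()
    b -= learning_rate * (celsius_q * error).sum()

print(m, b)  # ~31.95, ~1.798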
