Simple AND Gate Perceptron Learning in Python


I am trying to code a simple algorithm that learns the weights and the threshold needed to draw the line w1*x + w2*y = threshold so that it separates the data of any training set (in this case the AND gate training set). However, my program does not seem to be learning: the error stays at -3 no matter how many iterations I let it have. Below is my code:

import numpy
import random

w1 = random.uniform(-0.2, 0.2)
w2 = random.uniform(-0.2, 0.2)
threshhold = random.uniform(-0.2, 0.2)

training_x = numpy.asarray([[0,0], [0,1], [1,0], [1,1]])
out = [0,0,0,1]

def positive(number):
    if(number >= 0):
        return 1
    else:
        return 0
    
error = numpy.array([0,0,0,0])
for j in range(len(training_x)):
    check = positive(numpy.dot(numpy.asarray([w1,w2]), training_x[j]) + threshhold)
    error[j] = out[j] - check
errornumber = numpy.sum(error)


iterations = 1000
count = 1
eta = 0.1

values = [w1, w2, threshhold]
while count < iterations and errornumber != 0:
    for j in range(len(training_x)):
        check = positive(numpy.dot(numpy.asarray([w1,w2]), training_x[j]) + threshhold)
        error[j] = out[j] - check
        w1 = values[0] + eta * error[j]*training_x[j][0]
        w2 = values[1] + eta * error[j]*training_x[j][0]
        threshhold = values[2] + eta*training_x[j][0]
    values = [w1, w2, threshhold]
    errornumber = numpy.sum(error)

    print("ERRORS: "   str(errornumber))
    count  = 1
    
print("w1 "   str(values[0])   "w2 "   str(values[1])   "theta "   str(values[2]))

print("count "   str(count))

I would appreciate any help.

By the way, I took inspiration from this website: https://medium.com/analytics-vidhya/implementing-perceptron-learning-algorithm-to-solve-and-in-python-903516300b2f

Thanks in advance!

CodePudding user response:

I rewrote the algorithm and it now works perfectly.

import random
trainingset = [[0,0,0], [0,1,0], [1,0,0], [1,1,1]]
eta = 0.3
maxiterations = 100
w1 = random.uniform(-0.2, 0.2)
w2 = random.uniform(-0.2, 0.2)
w0 = random.uniform(-0.2, 0.2)
error = random.uniform(-0.2, 0.2)
count = 0
while count < maxiterations and error != 0:
    error = 0
    for array in trainingset:
        target = array[2]
        output = 0
        summation = w1*array[0] + w2*array[1] - w0
        if(summation > 0):
            output = 1
        else:
            output = 0
            
        if(output != target):
            error += 1
            
        w1 += eta*(target - output)*array[0]
        w2 += eta*(target - output)*array[1]
        w0 += eta*(target - output)*(-1)

        print("output " + str(output) + " target " + str(target))
        print("ERROR " + str(error))
    count += 1
print("COUNT " + str(count))
print("ENDING ERROR " + str(error))
print("w1 " + str(w1) + " w2 " + str(w2) + " w0 " + str(w0))


CodePudding user response:

Try adjusting the learning rate "eta". When I set eta = 0.005, it converged, although I sometimes had to re-run it 3-5 times so that it started from a different random initialization.

Example output:
ERRORS: 1
ERRORS: 1
ERRORS: 1
ERRORS: 1
ERRORS: 0
w1 -0.142054790891668 w2 0.11580039178422052 theta -0.09818819751599175
count 6

Edit:
After running it a few more times, I obtained this solution:
ERRORS: 1
ERRORS: 1
ERRORS: 1
...
ERRORS: 1
ERRORS: 1
ERRORS: 0
w1 0.04909721513572815 w2 0.04279261178484681 theta -0.08081630077876589
count 20
This solution does divide the plane the way you wanted for an AND gate.
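
For later readers, here is a minimal, self-contained sketch that combines the two suggestions above: the perceptron update rule from the first answer plus a smaller eta and re-randomization on failure from the second. The train_and_gate name, the max_epochs cap, and the retry loop are my own illustrative choices, not code from either post:

import random

# AND-gate training set: each row is (x1, x2, target)
TRAINING_SET = [(0, 0, 0), (0, 1, 0), (1, 0, 0), (1, 1, 1)]

def train_and_gate(eta=0.005, max_epochs=1000):
    """Train a single perceptron on AND. Returns (w1, w2, w0, converged)."""
    w1 = random.uniform(-0.2, 0.2)
    w2 = random.uniform(-0.2, 0.2)
    w0 = random.uniform(-0.2, 0.2)  # bias weight; its input is fixed at -1

    for _ in range(max_epochs):
        errors = 0
        for x1, x2, target in TRAINING_SET:
            output = 1 if (w1 * x1 + w2 * x2 - w0) > 0 else 0
            delta = target - output
            if delta != 0:
                errors += 1
            # Perceptron update rule; delta is 0 when the output is already correct
            w1 += eta * delta * x1
            w2 += eta * delta * x2
            w0 += eta * delta * (-1)
        if errors == 0:
            return w1, w2, w0, True
    return w1, w2, w0, False

# Re-run with a fresh random initialization if a run does not converge
for attempt in range(5):
    w1, w2, w0, converged = train_and_gate(eta=0.005)
    if converged:
        print("converged: w1", w1, "w2", w2, "w0", w0)
        break
else:
    print("did not converge in 5 attempts")

A smaller eta just makes each weight update smaller, so convergence may take more epochs, but the result depends less on where the random initialization happens to start.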
