What is the difference between the threshold and the bias in a neural network?

Time:11-23

Hi everyone, I'd like to ask what the difference is between the threshold and the bias in a neural network, and how to set them. I wrote a program based on someone else's code (reference:
https://blog.csdn.net/qq_41076797/article/details/102458547
). My main question is about the lines that compute `a1 + b1` and `a2 + b2`: if b1 and b2 are interpreted as thresholds, shouldn't they be subtracted instead of added? But when I change the `+` to `-` and run it, the mean squared error no longer converges, whereas with the `+` it keeps getting smaller. I also tried drawing b1 and b2 from a standard normal distribution; since standard normal data is negative about half the time, adding or subtracting should arguably have the same effect, so this is very strange. I hope someone can help me with it, thank you very much!
PS: This is my first post, so please forgive any problems with it.
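For context, the two terms differ only by a sign convention: a neuron described as firing when the weighted input reaches a threshold theta can be rewritten with a bias b = -theta added to the pre-activation. A minimal sketch of that equivalence (the names `theta`, `out_threshold`, and `out_bias` are mine, not from the referenced code):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# some example pre-activations w.x for three inputs (illustrative values)
a = np.array([0.3, -1.2, 2.0])

# threshold convention: subtract theta before the activation
theta = 0.5
out_threshold = sigmoid(a - theta)

# bias convention: add b = -theta before the activation
b = -theta
out_bias = sigmoid(a + b)

# both conventions describe the exact same neuron
assert np.allclose(out_threshold, out_bias)
```

So a "threshold" and a "bias" parameterize the same shift of the activation; only the sign in front of the parameter differs.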
 
import numpy as np

# target outputs
y = np.array([[0, 0, 0, 1]], dtype=np.float64)

def sigmoid(x):
    return 1 / (1 + np.exp(-x))


x = np.array([[0, 0],
              [0, 1],
              [1, 0],
              [1, 1]], dtype=np.float64)

w1 = np.random.rand(2, 4)  # hidden layer: four neurons, two input features
b1 = np.random.rand(1, 4)
print('w1:\n', w1)
w2 = np.random.rand(4, 4)  # output layer: four neurons, fed by the four hidden-layer outputs
b2 = np.random.rand(1, 4)

lr = 0.1  # learning rate


for n in range(1000):
    for index, i in enumerate(x):
        i = i.reshape(1, 2)
        # print(i)
        # forward-propagate each sample x
        # input to the hidden layer
        a1 = np.dot(i, w1)      # [1, 4]
        # output of the hidden layer
        y1 = sigmoid(a1 + b1)   # [1, 4]

        # input to the output layer
        a2 = np.dot(y1, w2)     # [1, 4]
        # output of the output layer
        y2 = sigmoid(a2 + b2)   # [1, 4]
        # back propagation
        g = np.multiply((y - y2), np.multiply(y2, (1 - y2)))
        # update the output-layer weights w2 and bias b2
        w2 = w2 + lr * np.dot(y1.T, g)
        b2 = b2 + lr * g
        # print("sample " + str(index + 1))
        # print("updated output-layer weights w2:", w2)
        # print("updated output-layer bias b2:", b2)
        # now update the hidden-layer weights w1 and bias b1
        w1 = w1 + lr * i.reshape(2, 1) * np.multiply(np.multiply(y1, (1 - y1)), np.dot(w2, g.T).T)
        b1 = b1 + lr * np.multiply(np.multiply(y1, (1 - y1)), np.dot(w2, g.T).T)
        # print("updated hidden-layer weights w1:", w1)
        # print("updated hidden-layer bias b1:", b1)
    error = 0.5 * np.sum((y2 - y) ** 2)
    if np.mod(n, 100) == 0:
        print('error:\n', error)
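One hedged observation on the non-convergence: if the forward pass subtracts the parameter (sigmoid(a - theta) instead of sigmoid(a + b)), the gradient of the error with respect to that parameter flips sign, so update rules of the form `b2 = b2 + lr * g` copied unchanged from the bias version push the threshold in the wrong direction. A small finite-difference sketch of this sign flip (the scalar values `a`, `y`, `b` are illustrative, not taken from the code above):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

a = np.array([0.4])   # pre-activation before the bias/threshold (illustrative)
y = np.array([1.0])   # target (illustrative)

def loss_plus(b):      # bias convention: sigmoid(a + b)
    return 0.5 * np.sum((sigmoid(a + b) - y) ** 2)

def loss_minus(theta):  # threshold convention: sigmoid(a - theta)
    return 0.5 * np.sum((sigmoid(a - theta) - y) ** 2)

b = 0.3
theta = -b             # same network output, since a + b == a - theta
eps = 1e-6

# central-difference estimates of the loss gradients
g_b = (loss_plus(b + eps) - loss_plus(b - eps)) / (2 * eps)
g_theta = (loss_minus(theta + eps) - loss_minus(theta - eps)) / (2 * eps)

# identical network, but the gradient with respect to the subtracted
# parameter has the opposite sign, so an update rule with the same
# sign as the bias version moves the threshold the wrong way
assert np.isclose(g_b, -g_theta)
```

Under this reading, subtracting b1 and b2 in the forward pass is fine only if the corresponding update lines flip their signs as well; changing the forward pass alone would explain the divergence, regardless of how b1 and b2 were initialized.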