Deep Learning - Lecture 4
To update the weights and biases inside a simple neural network, we can use the
perceptron rule as follows:

w_i ← w_i + e · p_i
b ← b + e

Where e = (t − o) denotes the difference between the label t and the output o of
the model, and p denotes the input of the neural network.
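The perceptron rule above can be sketched as a single update step; the function and variable names (perceptron_update, w, b, p, t, eta) are illustrative, not from the lecture, and eta is an optional learning rate that the rule above implicitly sets to 1.

```python
def step(z):
    """Threshold activation: output 1 if the weighted sum is non-negative."""
    return 1 if z >= 0 else 0

def perceptron_update(w, b, p, t, eta=1.0):
    """One perceptron-rule update: w_i <- w_i + eta*e*p_i, b <- b + eta*e."""
    o = step(sum(wi * pi for wi, pi in zip(w, p)) + b)  # model output o
    e = t - o                                           # error e = t - o
    w = [wi + eta * e * pi for wi, pi in zip(w, p)]     # weight update
    b = b + eta * e                                     # bias update
    return w, b

print(perceptron_update([0.0, 0.0], 0.0, (1, 1), 0))  # ([-1.0, -1.0], -1.0)
```

Note that the weights only change when the model misclassifies the example (e ≠ 0); a correctly classified input leaves w and b untouched.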
The Delta Rule
● This is another way to learn the weights (and bias) of the neural network
during the training process, which is formulated as follows:

Δw_i = η · (t − o) · p_i
Δb = η · (t − o)

● Where η denotes the learning rate, t the label, o the output of the model,
and p_i the i-th input.
Source: https://www.youtube.com/watch?v=7VV_fUe6ziw
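A minimal sketch of one delta-rule update, assuming a linear unit o = w·p + b trained by gradient descent on the squared error E = ½(t − o)²; the names (delta_update, eta) are illustrative, not from the lecture.

```python
def delta_update(w, b, p, t, eta=0.1):
    o = sum(wi * pi for wi, pi in zip(w, p)) + b  # linear output (no threshold)
    e = t - o                                     # error e = t - o
    # Gradient descent on E = 1/2 * e^2 gives dE/dw_i = -e * p_i,
    # so the update is w_i <- w_i + eta * e * p_i (and b <- b + eta * e).
    w = [wi + eta * e * pi for wi, pi in zip(w, p)]
    b = b + eta * e
    return w, b

print(delta_update([0.0, 0.0], 0.0, (1, 1), 1))  # ([0.1, 0.1], 0.1)
```

Unlike the perceptron rule, the delta rule uses the raw linear output rather than the thresholded one, so every example nudges the weights, not only the misclassified ones.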
Implementing The Rules
Suppose that we want to train a neural network to mimic the logical AND
function as follows:
Input  | Output
X1  X2 | AND
 1   1 |  1
 1   0 |  0
 0   1 |  0
 0   0 |  0
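Training a single unit on the AND table above with the perceptron rule can be sketched as below; the initial weights and epoch cutoff are illustrative choices, not from the lecture.

```python
# The four rows of the AND truth table: ((x1, x2), target).
and_data = [((1, 1), 1), ((1, 0), 0), ((0, 1), 0), ((0, 0), 0)]

def step(z):
    return 1 if z >= 0 else 0

w, b = [0.0, 0.0], 0.0
for epoch in range(20):                  # a few passes over the table
    errors = 0
    for p, t in and_data:
        o = step(w[0] * p[0] + w[1] * p[1] + b)
        e = t - o                        # e = t - o
        if e != 0:                       # update only on misclassification
            errors += 1
            w = [wi + e * pi for wi, pi in zip(w, p)]
            b += e
    if errors == 0:                      # every row classified correctly
        break

print(w, b)
```

Because AND is linearly separable, the perceptron convergence theorem guarantees this loop reaches zero errors after finitely many updates.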
Can we also use the previous neural network model, with the perceptron learning
rule or the delta rule, for the XOR problem as follows?
Input  | Output
X1  X2 | XOR
 1   1 |  0
 1   0 |  1
 0   1 |  1
 0   0 |  0
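Running the same perceptron-rule loop on the XOR table suggests the answer: XOR is not linearly separable, so no choice of weights and bias classifies all four rows, and the error count never reaches zero (the epoch cutoff below is an illustrative choice).

```python
# The four rows of the XOR truth table: ((x1, x2), target).
xor_data = [((1, 1), 0), ((1, 0), 1), ((0, 1), 1), ((0, 0), 0)]

def step(z):
    return 1 if z >= 0 else 0

w, b = [0.0, 0.0], 0.0
for epoch in range(100):
    errors = 0
    for p, t in xor_data:
        o = step(w[0] * p[0] + w[1] * p[1] + b)
        e = t - o
        if e != 0:
            errors += 1
            w = [wi + e * pi for wi, pi in zip(w, p)]
            b += e

print(errors)  # still > 0: the loop never converges on XOR
```

This is the classic limitation of a single-layer model: solving XOR requires at least one hidden layer, so neither the perceptron rule nor the delta rule alone is enough here.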