08_NN
The Brain
[Figure: a neuron model with inputs x1, x2, ..., xN feeding a weighted sum and an activation function]
• A threshold unit
– "Fires" if the weighted sum of its inputs exceeds a threshold
– Electrical engineers call this a threshold gate
Artificial Neuron Model
An artificial neuron computes the weighted sum of its inputs (called its net input), adds its bias, and passes this value through an activation function.
We say that the neuron fires (i.e. becomes active) if its output is above zero.
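As a minimal sketch (my own Python, not from the slides), a neuron with a threshold activation can be written as:

```python
def neuron_output(inputs, weights, bias):
    # net input: the weighted sum of the inputs
    net = sum(w * x for w, x in zip(weights, inputs))
    # add the bias, then apply a step activation function
    return 1 if net + bias > 0 else 0

# The neuron "fires" (outputs 1) when net input plus bias exceeds zero:
# 0.5 + 0.0 + 0.5 - 0.8 = 0.2 > 0
print(neuron_output([1, 0, 1], [0.5, 0.5, 0.5], -0.8))  # 1
```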
What is bias?
Bias can be incorporated as another weight clamped to a fixed input of 1.
This free variable (the bias) makes the neuron more powerful: it lets the decision threshold shift away from the origin.
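The "bias as an extra weight" trick can be checked directly (a small sketch with made-up numbers):

```python
def net_with_bias(x, w, b):
    # explicit bias term added after the weighted sum
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def net_bias_as_weight(x, w, b):
    # same value: append the bias as one more weight,
    # clamped to a fixed input of 1
    return sum(wi * xi for wi, xi in zip(w + [b], x + [1]))

print(net_with_bias([2, 3], [0.1, 0.2], -0.5)
      == net_bias_as_weight([2, 3], [0.1, 0.2], -0.5))  # True
```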
What is an Activation Function?
Also called the squashing function, as it limits the amplitude of the neuron's output.
Many types of activation functions are used.
Summary of Activation Functions
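The slide's summary table is not reproduced here; as a sketch, four commonly used activation functions are:

```python
import math

def step(v):      # threshold unit: fires when v > 0
    return 1 if v > 0 else 0

def sigmoid(v):   # squashes the output into (0, 1)
    return 1 / (1 + math.exp(-v))

def tanh(v):      # squashes the output into (-1, 1)
    return math.tanh(v)

def relu(v):      # passes positive values, clips negatives to 0
    return max(0.0, v)
```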
Classes of Neural Networks (network topology)
Single-layer feed-forward networks (Perceptron)
– The input layer projects directly onto the output layer
Multi-layer feed-forward networks
– One or more hidden layers
– Each layer projects only onto the next layer (no connections that skip layers or loop back)
Recurrent networks
– A network with feedback, where some of its outputs are connected back to some of its inputs (discrete time)
Class 1: Perceptron
A single artificial neuron that computes its weighted net input and applies a threshold activation function.
It effectively separates the input space into two categories by a hyperplane.
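A sketch of this separation (weights and bias are my own illustrative choice): the line x + y − 1.5 = 0 splits the plane into two classes.

```python
def perceptron(x, w, b):
    # class 1 on one side of the hyperplane w·x + b = 0, class 0 on the other
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# the line x + y - 1.5 = 0 separates the four corner points
w, b = [1.0, 1.0], -1.5
print([perceptron(p, w, b) for p in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 0, 0, 1]
```

Only (1, 1) lies above the line, so this particular perceptron computes logical AND.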
Class 2: Multi-layer Feed Forward network
Applications of neural networks
Boolean functions
The first Neural network
Boolean functions with a real perceptron
[Figure: plots of Boolean functions over inputs X and Y with corner points (0,1), (1,1), etc.; a perceptron's decision line separates the classes with error = 0, so the perceptron represents the function]
Example using the Perceptron learning algorithm
After the fourth epoch no instance produces an error, so we stop training.
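The slide's worked example is not reproduced here; the algorithm itself can be sketched as below, trained on a hypothetical AND dataset (my assumption, not the slide's data, so the epoch count will differ):

```python
def train_perceptron(data, lr=1.0, max_epochs=20):
    # data: list of (input_vector, target) pairs; bias stored as w[0]
    w = [0.0, 0.0, 0.0]
    for epoch in range(1, max_epochs + 1):
        errors = 0
        for x, t in data:
            xb = [1] + list(x)                      # clamp a fixed input of 1 for the bias
            y = 1 if sum(wi * xi for wi, xi in zip(w, xb)) > 0 else 0
            if y != t:                              # perceptron rule: w <- w + lr*(t - y)*x
                errors += 1
                w = [wi + lr * (t - y) * xi for wi, xi in zip(w, xb)]
        if errors == 0:                             # stop after an epoch with no error
            return w, epoch
    return w, max_epochs

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # logical AND
w, epochs = train_perceptron(data)
print(w, epochs)
```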
Example on SLP with delta rule
The delta rule needed only 2 epochs, versus 4 epochs with the Perceptron learning algorithm.
This method applies only to SLPs.
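A sketch of the delta (Widrow-Hoff) rule on the same kind of hypothetical AND data (learning rate and dataset are my assumptions, not the slide's):

```python
def train_delta(data, lr=0.2, epochs=2):
    # delta rule: update against the *linear* net output, not the thresholded one,
    # so every pattern contributes a proportional gradient step
    w = [0.0, 0.0, 0.0]                              # bias stored as w[0]
    for _ in range(epochs):
        for x, t in data:
            xb = [1] + list(x)
            net = sum(wi * xi for wi, xi in zip(w, xb))
            w = [wi + lr * (t - net) * xi for wi, xi in zip(w, xb)]
    return w

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # logical AND
w = train_delta(data)
# classify by thresholding the learned linear output at 0.5
preds = [1 if sum(wi * xi for wi, xi in zip(w, [1] + list(x))) > 0.5 else 0
         for x, _ in data]
print(preds)  # [0, 0, 0, 1]
```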
Example 1
A non-linear function: one perceptron is not enough.
[Figure: a non-linearly-separable dataset over inputs X and Y; yellow points are class 1, blue points are class 0. No single decision line separates the classes, so two lines are needed.]
Each line is found separately using an SLP.
We then need to merge the two lines, so we add another layer with weights w5 and w6.
Multi-layer perceptron
[Figure: an MLP computing XOR; inputs X and Y connect through a hidden layer to an output unit, with weights of ±1 and 2]
• MLPs can compute more complex functions
• MLPs can compute the XOR Boolean function
• MLPs can compute any Boolean function
– Since they can emulate individual gates
• MLPs are universal Boolean functions
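The XOR case can be sketched with hand-set weights (one standard choice, not necessarily the figure's): one hidden unit computes OR, the other AND, and the output layer merges the two lines.

```python
def step(v):
    return 1 if v > 0 else 0

def xor_mlp(x, y):
    # hidden layer: two threshold units, one line each
    h_or  = step(x + y - 0.5)   # fires on OR
    h_and = step(x + y - 1.5)   # fires on AND
    # output layer merges the two lines: fire on OR but not AND
    return step(h_or - h_and - 0.5)

print([xor_mlp(x, y) for x, y in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 1, 1, 0]
```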
MLP as Boolean Functions
[Figure: a network of threshold gates over inputs X, Y, Z, A, wired with weights of ±1 and 2 to emulate a Boolean circuit]