
08_NN

Neural networks (NNets) are computational models that mimic the brain's network of neurons, using perceptrons as their basic units. They can be classified into various types, including single-layer perceptrons, multi-layer feedforward networks, and recurrent networks, each with distinct structures and functions. Activation functions and bias play critical roles in determining the output of artificial neurons, enabling the computation of complex functions like XOR through multi-layer perceptrons.


Neural Networks

The Brain

• In their basic form, NNets mimic the networked structure of the brain
• The brain is composed of networks of neurons

Neural Function

NNets and the Brain

• Neural nets are composed of networks of computational models of neurons called perceptrons
Simulation of an Artificial Neural Network

[Figure: inputs x1, x2, x3, …, xN feeding into a unit with an activation function]
• A threshold unit
– “Fires” if the weighted sum of inputs exceeds a threshold
– Electrical engineers will call this a threshold gate
Artificial Neuron Model
An artificial neuron computes the weighted sum of its inputs (called its net input), adds its bias, and passes this value through an activation function.
We say that the neuron fires (i.e., becomes active) if its output is above zero.
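The computation described above can be sketched in a few lines. This is a minimal illustration, not code from the slides; the AND-gate weights in the example are an assumption chosen for demonstration.

```python
# Minimal sketch of an artificial neuron with a step activation.
def neuron_output(inputs, weights, bias):
    """Weighted sum of inputs plus bias (the 'net input'),
    passed through a threshold activation function."""
    net = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if net > 0 else 0  # "fires" when the net input exceeds 0

# Illustrative example: weights [1, 1] with bias -1.5 implement logical AND.
print(neuron_output([1, 1], [1, 1], -1.5))  # fires only when both inputs are 1
```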
What is bias?
Bias can be incorporated as another weight clamped to a fixed input of 1.
This free variable (the bias) makes the neuron more powerful, since the decision boundary no longer has to pass through the origin.
Artificial Neuron Model
What is an Activation Function?
Also called the squashing function, as it limits the amplitude of the neuron's output.
Many types of activation functions are used.
Summary of Activation Functions
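A few of the commonly used activation functions can be sketched as follows. This is an illustrative selection, not the slides' exact summary table:

```python
import math

# Common activation ("squashing") functions -- a sketch, not an exhaustive list.
def step(net):     return 1 if net > 0 else 0       # threshold unit
def sigmoid(net):  return 1 / (1 + math.exp(-net))  # squashes output to (0, 1)
def tanh(net):     return math.tanh(net)            # squashes output to (-1, 1)
def relu(net):     return max(0.0, net)             # not a squashing function, but widely used

print(sigmoid(0))  # 0.5, the midpoint of the sigmoid
```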
Classes of Neural Networks (Network Topology)

• Single-layer feed-forward networks (Perceptron)
  – The input layer projects directly onto the output layer
• Multi-layer feed-forward networks
  – One or more hidden layers
  – Activity projects only from one layer to the next
• Recurrent networks
  – A network with feedback, where some of its outputs are connected to some of its inputs (discrete time)
Class 1: Perceptron
A single artificial neuron that computes its weighted input and uses a threshold activation function.

It effectively separates the input space into two categories by a hyperplane (the set of points where the net input equals zero).
Class 2: Multi-layer Feed Forward network

• Neural networks are made up of nodes or units, connected by links
• Each link has an associated weight and activation level
• Each node has an input function (typically summing over weighted inputs), an activation function, and an output
Feed-Forward Process
• Input layer units are set by some exterior function (think of these as sensors), which causes their output links to be activated at the specified level
• Working forward through the network, the input function of each unit is applied to compute the input value
  – Usually this is just the weighted sum of the activation on the links feeding into this node
• The activation function transforms this input value into a final output
  – Typically this is a nonlinear function, often a sigmoid function corresponding to the “threshold” of that node
Class 3: Recurrent Neural network (RNN)

Applications of Neural Networks
Boolean functions
The first Neural network
Boolean functions with a real perceptron
[Figure: three plots over the unit square with corners (0,0), (0,1), (1,0), (1,1), each showing a linear decision boundary for a Boolean function of X and Y]

• Boolean perceptrons are also linear classifiers


– Purple regions are 1
Nonlinear functions: one perceptron is not enough

[Figure: XOR over the unit square — no single line separates the two classes]

• A single perceptron cannot compute XOR
• MLPs can compute XOR
Example on training a Perceptron
How to update weights during learning
Neural network steps for an SLP or MLP
Example using the Perceptron learning algorithm

First we need to check whether the problem is linearly separable, so that it can be solved with a perceptron.
Methods to update Weights
Example using the Perceptron learning algorithm (continued)

When the error is 0, the weights are not updated.

Output of the second epoch.

Output of the third epoch.

Output of the fourth epoch, where no error exists on any instance, so we stop.
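The epoch-by-epoch procedure above can be written out as code. This is a minimal sketch assuming an AND-gate dataset; the slides' actual training data and weight values are not reproduced here.

```python
# Sketch of the Perceptron learning rule: update weights only on misclassified
# instances, and stop after an epoch with no errors.
def train_perceptron(data, lr=0.1, max_epochs=100):
    w, b = [0.0, 0.0], 0.0
    for epoch in range(max_epochs):
        errors = 0
        for x, target in data:
            out = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            error = target - out
            if error != 0:                  # error == 0 -> no weight update
                errors += 1
                w = [wi + lr * error * xi for wi, xi in zip(w, x)]
                b += lr * error
        if errors == 0:                     # stop when an epoch is error-free
            return w, b, epoch + 1
    return w, b, max_epochs

# Assumed dataset: logical AND (linearly separable, so training converges).
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b, epochs = train_perceptron(data)
```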
Example on SLP with the delta rule

Using the delta rule, we needed only 2 epochs, fewer than with the Perceptron learning algorithm.
This method is used only with an SLP.
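The delta (Widrow-Hoff) rule can be sketched as follows. Unlike the Perceptron rule, the error is taken on the raw net input, so weights move even when the thresholded output happens to be correct; the dataset below is an assumed AND gate, not the slides' example.

```python
# Sketch of the delta (Widrow-Hoff / LMS) rule for a single-layer perceptron.
def delta_rule_epoch(w, b, data, lr=0.1):
    for x, target in data:
        net = sum(wi * xi for wi, xi in zip(w, x)) + b
        error = target - net                # error on the linear (net) output
        w = [wi + lr * error * xi for wi, xi in zip(w, x)]
        b += lr * error
    return w, b

# Assumed dataset and a short training run.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = [0.0, 0.0], 0.0
for _ in range(20):
    w, b = delta_rule_epoch(w, b, data)
```

Because the updates minimize squared error on the net input, the total error shrinks with each pass rather than jumping only on misclassifications.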
Example 1
Nonlinear functions: one perceptron is not enough

[Figure: XOR over the unit square — yellow points are class 1, blue points are class 0; no single line separates them]

• A single perceptron cannot compute XOR
• MLPs can compute XOR

How to change to one line

[Figure: the two separating lines for XOR, each handled by its own perceptron]
Each line will be solved separately using an SLP.
Now we need to merge the two lines, so we will use another layer with weights w5 and w6.
Multi-layer perceptron

[Figure: a two-input MLP for XOR with inputs X and Y, a hidden layer, and weights and thresholds labeled on the connections]

• MLPs can compute the XOR
• MLPs can compute more complex Boolean functions
• MLPs can compute any Boolean function
  – Since they can emulate individual gates
• MLPs are universal Boolean functions
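One such XOR network can be sketched with threshold units. The exact weights in the slides' figure are not recoverable from the text, so the values below are a common textbook assignment, not necessarily the slides':

```python
# A two-layer threshold-unit MLP computing XOR (illustrative weights).
def step(net):
    return 1 if net > 0 else 0

def xor_mlp(x, y):
    h1 = step(x + y - 0.5)      # hidden unit 1 acts as OR
    h2 = step(x + y - 1.5)      # hidden unit 2 acts as AND
    return step(h1 - h2 - 0.5)  # output: OR and not AND, i.e. XOR
```

Each hidden unit implements one of the two separating lines from the previous slide, and the output layer merges them.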
MLP as Boolean Functions
[Figure: a deeper MLP over inputs X, Y, Z, A with weights and thresholds labeled on each unit]

• MLPs are universal Boolean functions


– Any function over any number of inputs and any number of outputs
• But how many “layers” will they need?
MLP (Backpropagation)

This method is used only with an MLP.

To understand the idea of backpropagation, consider an example of the backward pass.