
DEEP LEARNING

• Deep learning is a branch of machine learning based on artificial neural networks.
• It is capable of learning complex patterns and relationships within data.
• In deep learning, we don’t need to explicitly program everything.
• It has become increasingly popular in recent years due to the advances in processing
power and the availability of large datasets.
• Deep learning is a class of machine learning algorithms that uses multiple layers to
progressively extract higher-level features from the raw input.
• For example, in image processing, lower layers may identify edges, while higher
layers may identify the concepts relevant to a human such as digits or letters or faces.
Applications of Deep Learning
• The main applications of deep learning can be divided into computer vision, natural language processing (NLP), and reinforcement learning.
• In computer vision, deep learning models enable machines to identify and understand visual data.
Some of the main applications of deep learning in computer vision include:
• Object detection and recognition
• Image classification
• In NLP, deep learning models enable machines to understand and generate human language.
Some of the main applications of deep learning in NLP include:
• Automatic Text Generation
• Language translation
• In reinforcement learning, deep learning is used to train agents to take actions in an environment so as to maximize a reward.
Some of the main applications of deep learning in reinforcement learning include:
• Game playing
• Robotics
UNIT - I: Feed Forward Neural Network
Introduction: Various paradigms of learning problems, Perspectives and Issues in deep
learning framework, review of fundamental learning techniques. Feed forward neural
network: Artificial Neural Network, activation function, multi-layer neural network
UNIT - II: Training Neural Network: Risk minimization, loss function, back propagation, regularization, model selection, and optimization. Deep Neural Networks: Difficulty of training deep neural networks, greedy layer-wise training.
UNIT - III: Deep Learning
Deep Feed Forward network, regularizations, training deep models, dropouts, Convolution
Neural Network, Recurrent Neural Network, and Deep Belief Network.
UNIT - IV: Probabilistic Neural Networks:
Hopfield Net, Boltzmann machine, RBMs, Sigmoid net, Autoencoders.
UNIT - V: Applications
Applications: Object recognition, sparse coding, computer vision, natural language
processing. Introduction to Deep Learning Tools: TensorFlow, Caffe, Theano, Torch.
UNIT - I: Feed Forward Neural Network
Feed forward neural networks are artificial neural networks in which nodes do not form
loops.
• The feed-forward model is the basic type of neural network in which the input is only
processed in one direction.
Introduction
Neural networks are functions that connect inputs with outputs.
• In theory, a neural network can approximate any sort of function, no matter how complex it is.
Neural Network Architecture

• The first layer is the input layer.
• The output layer is the final layer.
• The dataset and the type of problem determine the number of neurons in the first and final layers.
• The number of hidden layers, and the number of neurons in each, is typically determined by trial and error.
• A weight is assigned to each input to an artificial neuron.
• First, the inputs are multiplied by their weights, and then a bias is added to the result.
• After that, the weighted sum is passed through an activation function, which is typically non-linear.
• An activation function decides whether a neuron should be activated or not.
• In other words, it decides, using simple mathematical operations, whether a neuron's contribution is important for the prediction.
• All the inputs from the previous layer will be connected to the first neuron from the
first hidden layer.
• The second neuron in the first hidden layer will be connected to all of the preceding
layer’s inputs, and so on.
• The outputs of the previous hidden layer serve as inputs for the neurons in the second hidden layer, and each of these neurons is connected to all of the preceding neurons.
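As a rough sketch of this computation, the following NumPy code forwards an input through two fully connected layers. The layer sizes, random weights, and choice of sigmoid activation are illustrative assumptions, not values from the text:

```python
import numpy as np

def dense_forward(x, W, b, activation):
    """One fully connected layer: weighted sum of inputs plus bias,
    then a non-linear activation applied to the result."""
    z = W @ x + b          # weighted sum + bias
    return activation(z)   # non-linear activation

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, -1.2, 3.0])          # 3 input features
W1 = np.random.randn(4, 3) * 0.1        # 4 hidden neurons, each connected to all 3 inputs
b1 = np.zeros(4)
h = dense_forward(x, W1, b1, sigmoid)   # outputs of the first hidden layer

W2 = np.random.randn(2, 4) * 0.1        # second layer takes h as its input
b2 = np.zeros(2)
y = dense_forward(h, W2, b2, sigmoid)   # outputs of the second layer
print(y)
```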
Various paradigms of learning problems
• Deep learning is a vast field centered around the neural network: an algorithm whose shape is determined by millions or even billions of parameters that are constantly being adjusted.
• Modern deep learning can be broken down into three fundamental learning paradigms: hybrid learning, composite learning, and reduced learning.
• Hybrid learning: how can modern deep learning methods cross the boundary between supervised and unsupervised learning to make use of the vast amount of unused, unlabeled data?
• Hybrid Learning Problems
Semi-Supervised Learning
Self-Supervised Learning
Multi-Instance Learning
• Semi-supervised learning is supervised learning where the training data contains
very few labeled examples and a large number of unlabeled examples.
• Semi-supervised learning is a type of ML that falls in between supervised and
unsupervised learning.
• The goal of semi-supervised learning is to learn a function that can accurately predict
the output variable based on the input variables.
• Self-Supervised Learning (SSL) is one such methodology that can learn complex
patterns from unlabeled data.
• SSL allows AI systems to work more efficiently when deployed due to its ability to
train itself, thus requiring less training time.
• In self-supervised learning, the model, when fed unstructured data as input, generates data labels automatically.
• Multiple-instance learning (MIL) is a type of supervised learning: instead of receiving individually labeled instances, the learner receives a set of labeled bags, each containing many instances.
• Composite learning: how can different models or components be combined in creative ways to produce a composite model?
• Reduced learning: how can the size and information flow of models be reduced, for both performance and deployment purposes, while maintaining the same or greater predictive power?
Perspectives and Issues in deep learning framework
Deep learning has made significant advancements in various fields.
Trojan horses go neural.
• Every new technology comes with new security threats.
• In neural Trojan attacks, "malicious functionality is embedded into the weights of a neural network."
• Such attacks call for new defense strategies.
How does backpropagation work?
• This concerns both the mathematical operations and the practical aspects of using backpropagation in the context of neural networks.
How to bring together time-series forecasting and deep learning.
• Time-series forecasting has been around for a long time, and data scientists usually leverage a limited set of established methods to work on it.
Deep learning is making inroads in computer graphics.
• Using computers to create physically correct and realistic 3D images is a complex
task.
• Neural networks are now used to generate increasingly sophisticated visual representations.
Issues in Deep Learning:
Here are some of the main challenges in deep learning:
• Data availability: Deep learning requires large amounts of data, and gathering enough data for training is a major concern.
• Computational resources: Training a deep learning model is computationally expensive because it requires specialized hardware such as GPUs and TPUs.
• Time-consuming: Training, especially on sequential data, can take days or even months depending on the computational resources available.
• Interpretability: Deep learning models are complex and work like black boxes, so their results are very difficult to interpret.
• Overfitting: When a model is trained over and over, it becomes too specialized to the training data, leading to overfitting and poor performance on new data.
Review of Fundamental Learning Techniques
• Deep learning is useful for data scientists who are responsible for gathering,
analyzing, and understanding massive volumes of data.
Deep Learning Techniques
1. Classic Neural Networks
• Also known as fully connected neural networks, these are identified by their multilayer perceptrons, in which the neurons of one layer are connected to those of the adjacent layer.
Functions included in this model are:
Linear function: represents a single line that multiplies its inputs by a constant multiplier.
Non-Linear function: A nonlinear function is a function whose plotted graph does
not form a straight line but a curved line.
• The exponent of the variable in a nonlinear equation is greater than 1.
2. Convolutional Neural Networks
A CNN is a kind of network architecture for deep learning algorithms and is
specifically used for image recognition and tasks that involve the processing of pixel
data.
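As an illustration, a small CNN for 28×28 grayscale images might be sketched in Keras as follows; the layer sizes and counts here are arbitrary choices, not prescribed by the text:

```python
import tensorflow as tf
from tensorflow.keras import layers

# A small CNN: convolution layers detect local pixel patterns (edges, corners),
# pooling shrinks the spatial size, and dense layers do the final classification.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),             # 28x28 grayscale images
    layers.Conv2D(32, (3, 3), activation="relu"),  # lower layer: edge-like features
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),  # higher layer: composite features
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(10, activation="softmax"),        # e.g. 10 digit classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```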
3. Recurrent Neural Networks (RNN)
• Recurrent neural networks (RNNs) are a deep learning approach for modeling sequential data.
• In a standard neural network, all inputs and outputs are independent of one another; in some circumstances, however, such as predicting the next word of a phrase, the previous words are necessary and must be remembered.
• The most important component of an RNN is the hidden state, which remembers specific information about a sequence.
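A minimal Keras sketch of an RNN for next-word prediction might look like the following; the vocabulary size, sequence length, and layer sizes are illustrative assumptions:

```python
import tensorflow as tf
from tensorflow.keras import layers

vocab_size, seq_len = 1000, 20   # illustrative vocabulary size and sequence length
model = tf.keras.Sequential([
    tf.keras.Input(shape=(seq_len,)),
    layers.Embedding(vocab_size, 32),               # map word ids to dense vectors
    layers.SimpleRNN(64),                           # hidden state carries earlier words forward
    layers.Dense(vocab_size, activation="softmax")  # distribution over the next word
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```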
4. Generative Adversarial Networks
• It is a combination of two neural networks: a generator and a discriminator.
• While the generator network produces artificial data, the discriminator learns to discern between real and fake data.
5. Transfer Learning
• The reuse of a pre-trained model on a new problem is known as transfer learning.
• In transfer learning, a machine uses the knowledge learned from a prior task to improve its predictions on a new task.
6. Autoencoders
• These are self-supervised learning models which are used to reduce the size of input
data by recreating it.
An autoencoder is made up of two components:
1. Encoder: It works as a compression unit that compresses the input data.
2. Decoder: It decompresses the compressed input by reconstructing it.
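A minimal sketch of this two-part structure in Keras, assuming a 784-dimensional input compressed to a 32-dimensional code (both sizes are illustrative):

```python
import tensorflow as tf
from tensorflow.keras import layers

# Encoder compresses a 784-dim input to a 32-dim code;
# decoder reconstructs the original input from that code.
inputs = tf.keras.Input(shape=(784,))
code = layers.Dense(32, activation="relu")(inputs)      # encoder (compression)
recon = layers.Dense(784, activation="sigmoid")(code)   # decoder (reconstruction)

autoencoder = tf.keras.Model(inputs, recon)
autoencoder.compile(optimizer="adam", loss="mse")  # trained to reproduce its own input
```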
Feed forward neural network:
• The simplest feed-forward neural network is the single-layer perceptron.
• A sequence of inputs enter the layer and are multiplied by the weights in this model.
• The weighted input values are then summed together to form a total.
• If the sum of the values is more than a predetermined threshold, which is normally
set at zero, the output value is usually 1, and if the sum is less than the threshold,
the output value is usually -1.
• The single-layer perceptron is a popular feed-forward neural network model that is
frequently used for classification.
Figure: single-layer perceptron
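A minimal NumPy sketch of this thresholding rule; the example inputs and weights are made up for illustration:

```python
import numpy as np

def perceptron(x, w, threshold=0.0):
    """Single-layer perceptron: weighted sum of the inputs,
    output 1 if the sum reaches the threshold, else -1."""
    total = np.dot(w, x)
    return 1 if total >= threshold else -1

x = np.array([1.0, 0.5])    # example inputs
w = np.array([0.4, -0.7])   # example weights
print(perceptron(x, w))     # -> 1 (weighted sum 0.05 >= 0)
```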

Artificial Neural Network (ANN):


• The Artificial Neural Network (ANN) is a deep learning algorithm inspired by the biological neural networks of the human brain.
• An ANN works in a way very similar to a biological neural network.
• An ANN accepts only numeric, structured data as input.
• To accept unstructured and non-numeric data formats such as images, text, and speech, convolutional neural networks (CNNs) are used.
The main components of a deep neural network are described below.
Input layer
• The input layer consists of inputs that are independent variables. These inputs can be
loaded from an external source such as a web service or a CSV file.
• In simple terms, these variables are known as features.
Figure: Artificial Neural Network (ANN)

Weights
• Weights play an important role in a neural network; every node/neuron has its own weights.
• Neural networks learn through their weights: by adjusting the weights, the network decides whether certain features are important or not.
Hidden Layer
• These lie between the input layer and the output layer.
• In this layer, the neurons take in a set of weighted inputs and produce an output with
the help of the activation function.
• In this step we apply the activation function: these neurons apply non-linear transformations to the input data.
• Many activation functions are used in deep learning; some of them are ReLU, the threshold function, and sigmoid.
Output Layer
• This is the last layer of the neural network; it receives input from the nodes in the last hidden layer. Its output can be:
• Continuous (e.g., a stock price)
• Binary (0 or 1)
• Categorical (e.g., cat, dog, or duck)
There are two phases in the neural network cycle: the training phase and the prediction phase.
• The process of finding the weight and bias values occurs in the training phase.
• The process where the neural network processes input to produce predictions comes
under the prediction phase.
• The learning process of a neural network can be seen as an iterative cycle of pass and return.
• The pass is the forward propagation of information, and the return is the backward propagation of information.

• In forward propagation, given some data, each hidden neuron computes the dot product of the input values with its assigned weights, adds a bias, and applies the activation function to the result.
• The output of one layer then acts as the input to the next layer; this is repeated until we get the final output vector y.
• The obtained output value is known as the predicted value.
• We compare the predicted value with the actual value; the difference between them is the error, which is measured by a cost (loss) function.
• The loss is inversely related to accuracy: the lower the loss, the higher the accuracy, so our goal is to minimize the loss function.
• A common choice of loss function is the mean squared error: C = (1/2) Σᵢ (ŷᵢ − yᵢ)², where ŷᵢ is the predicted value and yᵢ is the actual value.
• After calculating the loss, we feed this information back into the neural network, where it travels backward through the weights and the weights are updated; this method is called backpropagation.
• This process is repeated many times until the network has learned the data and appropriate weights for the features.
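Putting the whole cycle together, here is a minimal NumPy sketch of the training loop (forward propagation, loss, backpropagation, weight updates) for a small two-layer network. The architecture, learning rate, and XOR-style toy data are illustrative choices, not values from the text:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Toy data: 4 samples with 2 features each, binary targets (here: XOR).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros(4)   # input -> hidden
W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros(1)   # hidden -> output
lr = 1.0                                          # learning rate

for epoch in range(5000):
    # Forward propagation
    h = sigmoid(X @ W1 + b1)
    y_hat = sigmoid(h @ W2 + b2)
    loss = np.mean((y_hat - y) ** 2)              # mean squared error

    # Backward propagation: chain rule applied layer by layer
    d_out = (y_hat - y) * y_hat * (1 - y_hat)     # error signal at the output layer
    d_hid = (d_out @ W2.T) * h * (1 - h)          # error propagated back to the hidden layer

    # Gradient descent weight updates
    W2 -= lr * h.T @ d_out / len(X); b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * X.T @ d_hid / len(X); b1 -= lr * d_hid.mean(axis=0)

# Predictions should approach the targets [0, 1, 1, 0] as the loss shrinks.
print(np.round(y_hat, 2))
```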

Gradient Descent
• Gradient descent is an optimization technique used to improve neural network-based models by minimizing the cost function.
• This process occurs in the backpropagation step.
• It allows us to adjust the weights of the features in order to reach the global minima.
• A global minimum is a point where the function value is smaller than at all other
feasible points.
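A minimal sketch of the idea on a one-dimensional cost function C(w) = (w − 3)², whose global minimum is at w = 3 (the function, starting point, and learning rate are illustrative):

```python
# Gradient descent on the cost function C(w) = (w - 3)^2.
grad = lambda w: 2 * (w - 3)   # derivative of the cost w.r.t. the weight

w, lr = 0.0, 0.1               # initial weight and learning rate
for step in range(50):
    w -= lr * grad(w)          # move against the gradient, toward the minimum
print(w)                       # -> approximately 3.0, the global minimum
```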

Activation Function:
• An activation function computes the output of a node; it is also known as a transfer function.
• It determines the output of the neural network (for example, yes or no) by mapping the resulting values into a range such as 0 to 1 or −1 to 1, depending on the function.
• Activation functions can be divided into two basic types:
Linear Activation Function
Non-linear Activation Functions
Linear or Identity Activation Function
• The function is linear (a straight line), so the output is not confined to any range.
Equation: f(x) = x
Range: (−infinity, infinity)

Non-linear Activation Function
• Non-linear activation functions are the most widely used activation functions.
• They make it easier for the model to generalize and adapt to a variety of data, and to differentiate between outputs.
• The main terminologies needed to understand for nonlinear functions are:
• Derivative or Differential: Change in y-axis w.r.t. change in x-axis. It is also known as
slope.
• Monotonic function: A function which is either entirely non-increasing or non-
decreasing.
• Non-linear activation functions are mainly divided on the basis of their range or curves:
1. Sigmoid or Logistic Activation Function
• The sigmoid function curve looks like an S-shape.
• The sigmoid function is especially used in models where we have to predict a probability as the output, since probabilities exist only in the range between 0 and 1.
• The softmax function is a more generalized logistic activation function which is used
for multiclass classification.
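A short NumPy sketch of both functions; the test values are arbitrary:

```python
import numpy as np

def sigmoid(z):
    """S-shaped curve squashing any real number into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    """Generalized logistic function for multiclass outputs:
    exponentiate, then normalize so the values sum to 1."""
    e = np.exp(z - np.max(z))   # subtract the max for numerical stability
    return e / e.sum()

print(sigmoid(0.0))                        # -> 0.5
print(softmax(np.array([2.0, 1.0, 0.1])))  # probabilities over 3 classes, summing to 1
```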
2. Tanh or hyperbolic tangent Activation Function
• Tanh is similar to the logistic sigmoid, but often works better.
• The range of the tanh function is (−1, 1).
• Tanh is also sigmoidal (S-shaped).
• Its advantage is that negative inputs are mapped strongly negative and zero inputs are mapped near zero on the tanh graph.
• The tanh function is mainly used for classification between two classes.
• Both the tanh and logistic sigmoid activation functions are used in feed-forward nets.
3. ReLU (Rectified Linear Unit) Activation Function
• The ReLU is the most used activation function.
• It is used in almost all convolutional neural networks and other deep learning models.
• ReLU is half-rectified (from the bottom): f(z) is zero when z is less than zero, and f(z) equals z when z is greater than or equal to zero.
• Range: [0, infinity)
• The function and its derivative both are monotonic.
• The issue is that all negative values become zero immediately, which decreases the ability of the model to fit or train on the data properly (the "dying ReLU" problem).
4. Leaky ReLU
• It is an attempt to solve the dying ReLU problem.
• f(z) = z for z ≥ 0, and f(z) = a·z for z < 0, where a is a small constant.
• The leak increases the range of the ReLU function; usually, the value of a is around 0.01.
• When a is not fixed but chosen randomly, the function is called Randomized ReLU.
• Therefore the range of the Leaky ReLU is (-infinity to infinity).
• Both Leaky and Randomized ReLU functions are monotonic and their derivatives are
also monotonic in nature.
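For comparison, a short NumPy sketch of tanh, ReLU, and Leaky ReLU side by side; the test inputs are arbitrary:

```python
import numpy as np

tanh = np.tanh                                             # range (-1, 1)
relu = lambda z: np.maximum(0.0, z)                        # 0 for z < 0, z otherwise
leaky_relu = lambda z, a=0.01: np.where(z >= 0, z, a * z)  # small slope a for z < 0

z = np.array([-2.0, -0.5, 0.0, 1.5])
print(tanh(z))        # negative inputs map strongly negative, zero maps near zero
print(relu(z))        # [0. 0. 0. 1.5] -- negatives become exactly zero
print(leaky_relu(z))  # [-0.02 -0.005 0. 1.5] -- the "leak" keeps negatives alive
```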
Multi-layer neural network:
Multilayer Perceptron
• The Multilayer Perceptron is a neural network where the mapping between inputs
and output is non-linear.
• A Multilayer Perceptron has input and output layers, and one or more hidden layers
with many neurons stacked together.
• Each neuron must have an activation function, such as ReLU or sigmoid.
• The multilayer perceptron falls under the category of feedforward algorithms: each linear combination is propagated to the next layer.
• Each layer feeds the next one with the result of its computation.
• This goes all the way through the hidden layers to the output layer.
• Backpropagation is the learning mechanism that allows the Multilayer Perceptron to
iteratively adjust the weights in the network, with the goal of minimizing the cost
function.

• Gradient descent is the optimization method used in the multilayer perceptron.
• In each iteration, after the weighted sums are forwarded through all layers, the gradient of the mean squared error is computed across all input and output pairs.
• The gradient is then propagated backward, layer by layer, from the output layer toward the input layer, and the weights of each layer are updated using the gradient.
• In this way, the error signal travels back to the starting point of the neural network.
• This process is repeated until the error is minimized.
One Iteration of Gradient Descent:
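As a worked stand-in for this step, the following NumPy sketch performs one iteration of gradient descent for a single linear neuron ŷ = w·x + b under the mean squared error; the data values (y = 2x) and learning rate are illustrative assumptions:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 4.0, 6.0])     # true relationship: y = 2x
w, b, lr = 0.0, 0.0, 0.1          # initial weight, bias, learning rate

y_hat = w * x + b                 # forward pass through the (single) layer
mse = np.mean((y_hat - y) ** 2)   # cost across all input/output pairs
dw = np.mean(2 * (y_hat - y) * x) # gradient of the MSE w.r.t. w
db = np.mean(2 * (y_hat - y))     # gradient of the MSE w.r.t. b

w, b = w - lr * dw, b - lr * db   # propagate back: update the parameters
print(mse, w, b)                  # mse ~ 18.67, w ~ 1.87, b ~ 0.8
```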

XOR Problem:
• The XOR problem is not linearly separable, so a single perceptron cannot solve it.
• Let’s analyze how this MLP works.
• We assume here that all the neurons use a threshold (step) activation function, i.e., a function whose value is 1 for all non-negative inputs and 0 for all negative inputs.
• The top hidden neuron is connected only to the first input x₁ with a connection
weight of 1, and it has a bias of -1.
• Therefore, this neuron fires only when x₁ = 1, in which case its net input is 1 × 1 + (−1) = 0, which is non-negative.
• For example, let’s compute the forward propagation of this MLP for the inputs x₁ = 1
and x₂ = 0.
• The activations of the hidden neurons in this case are:

• We can see that only the top hidden neuron fires in this case.
• The activation of the output neuron is therefore:

• The output neuron fires in this case, which is what we expect the output of XOR to be
for the inputs x₁ = 1 and x₂ = 0.
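A minimal Python sketch of an MLP that solves XOR with step activations. Note that the weights below follow a common textbook construction (an OR-like and an AND-like hidden unit) and are assumptions; they need not match the exact wiring of the figure described above:

```python
def step(z):
    """Threshold activation: 1 for non-negative inputs, 0 for negative ones."""
    return 1 if z >= 0 else 0

def xor_mlp(x1, x2):
    """MLP with two hidden neurons and one output neuron (assumed weights)."""
    h1 = step(1 * x1 + 1 * x2 - 1)    # fires when at least one input is 1 (OR)
    h2 = step(1 * x1 + 1 * x2 - 2)    # fires only when both inputs are 1 (AND)
    return step(1 * h1 - 2 * h2 - 1)  # fires when OR is true but AND is false

for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print((x1, x2), "->", xor_mlp(x1, x2))   # prints 0, 1, 1, 0
```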
