
Deep Learning - Question Bank

UNIT 1

Unit I Revisiting Machine Learning Fundamentals

The Neural Network, The Neuron, Expressing Linear Perceptrons as Neurons, Feed-Forward Neural
Networks, Linear Neurons and their Limitations, Sigmoid, Tanh, and ReLU Networks, Softmax Output
Layers. Training Feed-Forward Neural Networks, Gradient Descent, The Backpropagation Algorithm,
Test Sets, Validation Sets, and Overfitting, Preventing Overfitting in Deep Neural Networks.
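As a quick revision aid for the topics above (neurons, gradient descent, why a single perceptron fails on XOR), here is a minimal sketch, in pure Python with illustrative weights and learning rate, of one sigmoid neuron trained by gradient descent on the linearly separable AND function:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy setup: learn AND with a single sigmoid neuron (illustrative only).
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1, w2, b = 0.0, 0.0, 0.0
lr = 0.5

for epoch in range(2000):
    for (x1, x2), y in data:
        p = sigmoid(w1 * x1 + w2 * x2 + b)
        g = p - y              # cross-entropy gradient w.r.t. the pre-activation
        w1 -= lr * g * x1      # gradient descent update, one sample at a time
        w2 -= lr * g * x2
        b  -= lr * g

preds = [round(sigmoid(w1 * x1 + w2 * x2 + b)) for (x1, x2), _ in data]
# preds is [0, 0, 0, 1]: AND is linearly separable, so one neuron suffices.
# XOR is not, which is exactly why a single perceptron cannot solve it.
```

The same loop structure carries over to multi-layer networks once the gradient `g` is computed by backpropagation instead of directly.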

1) Explain the architecture of a deep learning neural network.


2) Explain different applications of DL.
3) What are the challenges in applying DL?
4) Explain the term: artificial neuron.
5) A neuron is a linear classifier. Explain.
6) Compare activation functions.
7) Explain the gradient descent algorithm.

8) Explain the XOR problem. Why is a single perceptron model not able to solve it?
9) Explain learning rate, epoch, training set, test set.
10) Write a note on the backpropagation algorithm.
11) What is overfitting?
12) What is regularization in DL?
13) Differentiate between MAE, MSE, and cross entropy.
14) Explain architectures of deep neural nets.
15) What is data normalization and why do we need it?
16) What will happen if we don’t make use of any activation function in a neural
network?
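The activation-function questions above can be grounded with a short sketch (pure Python, standard textbook definitions) comparing sigmoid, tanh, and ReLU together with their derivatives; the derivative magnitudes are what matter for vanishing gradients:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def d_sigmoid(z):            # peaks at 0.25, vanishes for large |z|
    s = sigmoid(z)
    return s * (1.0 - s)

def d_tanh(z):               # peaks at 1.0, vanishes for large |z|
    return 1.0 - math.tanh(z) ** 2

def relu(z):
    return max(0.0, z)

def d_relu(z):               # constant 1 for every positive input
    return 1.0 if z > 0 else 0.0

# At z = 5, the sigmoid and tanh gradients are already tiny while ReLU's is
# still 1. And without any activation function at all, a stack of layers
# collapses into a single linear map, losing all expressive depth.
```

This also answers the last question above: removing the nonlinearity makes every hidden layer redundant.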
UNIT 2

Unit II Deep Networks

Deep feed-forward networks, Gradient-Based Learning, Hidden Units, Architecture Design, The
Challenges with Gradient Descent, Learning Rate Adaptation, AdaGrad—Accumulating Historical
Gradients, RMSProp—Exponentially Weighted Moving Average of Gradients, Adam—Combining
Momentum and RMSProp, Regularization for deep learning.
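The three optimizers named in the syllabus can be summarized in a few lines each. The following is a hedged sketch of the canonical update rules (hyperparameters are illustrative, not recommendations), each minimizing the toy objective f(w) = (w − 3)²:

```python
import math

def grad(w):                 # f(w) = (w - 3)^2, so f'(w) = 2(w - 3)
    return 2.0 * (w - 3.0)

def adagrad(w=0.0, lr=0.5, eps=1e-8, steps=200):
    s = 0.0
    for _ in range(steps):
        g = grad(w)
        s += g * g                          # accumulate ALL squared gradients
        w -= lr * g / (math.sqrt(s) + eps)  # effective rate shrinks forever
    return w

def rmsprop(w=0.0, lr=0.1, beta=0.9, eps=1e-8, steps=200):
    s = 0.0
    for _ in range(steps):
        g = grad(w)
        s = beta * s + (1 - beta) * g * g   # exponentially weighted average
        w -= lr * g / (math.sqrt(s) + eps)
    return w

def adam(w=0.0, lr=0.1, b1=0.9, b2=0.999, eps=1e-8, steps=500):
    m = v = 0.0
    for t in range(1, steps + 1):
        g = grad(w)
        m = b1 * m + (1 - b1) * g           # momentum (first moment)
        v = b2 * v + (1 - b2) * g * g       # RMSProp-style second moment
        m_hat = m / (1 - b1 ** t)           # bias correction
        v_hat = v / (1 - b2 ** t)
        w -= lr * m_hat / (math.sqrt(v_hat) + eps)
    return w
```

All three drive w toward 3; the AdaGrad run also illustrates its stated disadvantage, since the accumulated sum `s` only grows and the effective learning rate decays monotonically.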

1. What is forward propagation in Neural Networks?

2. What is backward propagation in Neural Networks?
3. What is the difference between forward propagation and backward propagation in
Neural Networks?
4. What is the Gradient Descent Algorithm?
5. Explain the different types of Gradient Descent in detail.
6. What is learning rate decay? (learning rate adaptation)
7. How does the learning rate affect the training of the Neural Network?
8. What are the different hyperparameters?
(epochs, batch size, batch normalization, hidden layers,learning rate, drop out…..)
9. What is the difference between Epoch, Batch, and Iteration in Neural Networks?
10. What is momentum ?
11. What do you mean by Optimizer?
12. Compare and contrast Adam, AdaGrad, and RMSProp.
13. What is the vanishing gradient problem?
14. What is regularization and what are the different regularization techniques available in
deep learning?
15. State the difference between Batch Gradient Descent and Stochastic Gradient Descent.
16. What is adaptive optimization? State the disadvantage of Adagrad.
17. How does NAG help in optimizing the weights of a neural network?
18. Explain how we can say that changing the weights will ultimately change the loss.

UNIT 3
Unit III Convolutional Networks

The Convolution Operation, Pooling, Variants of the Basic Convolution Function, Case
study: Building a Convolutional Network for CIFAR-10, Visualizing Learning in
Convolutional Networks, Introduction to pre-trained CNNs: VGG16, Inception, ResNet,
Case study.

1.h a. State the number of neurons required in the input layer of a
multilayered neural network for an image of size 64x64x3.
b. State two reasons why a CNN has fewer parameters.

2.h a. If the size of the input image is 64x64x1, kernel size is 3x3, stride = 1,
and padding = valid, what is the size of the output feature map?
b. What do you understand by pooling? State the different types of
pooling used in CNN.

3.h a. If the size of the input image is 64x64x1 and kernel size is 3x3,
stride =1, padding = same. What is the size of the output feature
map in this case?
b. What do you understand by stride? What is the similarity between
pooling and stride operation in CNN?

4.h a. Given the input size is 64x64x3, kernel size is 3x3, stride = 1,
padding = valid, number of filters = 5. What is the size of the output
feature map along with its depth?
b. State the significance of padding in CNN.
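The size questions in 1–4 above all follow from one formula. A small helper makes the arithmetic explicit (hedged: this mirrors the common "valid"/"same" convention used by TensorFlow-style frameworks):

```python
import math

def conv_out(n, k, stride=1, padding="valid"):
    """Output spatial size for an n x n input and a k x k kernel."""
    if padding == "valid":        # no padding: kernel stays inside the image
        return (n - k) // stride + 1
    if padding == "same":         # pad so a stride-1 output matches the input
        return math.ceil(n / stride)
    raise ValueError("padding must be 'valid' or 'same'")

# 64x64 input, 3x3 kernel, stride 1, valid  -> 62x62
# 64x64 input, 3x3 kernel, stride 1, same   -> 64x64
# With 5 filters the output depth is 5, i.e. 62x62x5 in the 'valid' case,
# and a 64x64x3 image needs 64*64*3 = 12288 input neurons in a plain MLP.
```

Output depth always equals the number of filters, independent of stride and padding.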

5.m Explain how a pretrained model can be used as a feature extractor.

6.m Explain the concept of vanishing gradient with sigmoid as an activation
function. Suggest two ways to overcome this problem.

7.h How is ResNet different from a plain network?
Discuss the AlexNet & ZFNet pretrained models in deep learning.
Discuss different pretrained models in detail.

8.h State the motivation behind the Inception Network. Give a diagrammatic
representation of an Inception module.

9.l What are activation maps and what is the idea behind visualizing an
activation map for an input image?

10.l Draw and explain the various components of the CNN architecture.
Explain the convolution operations with a suitable example and discuss
the feature map matrix.
State the advantages of CNN over classical neural networks.

11. Discuss the limitations of CNNs and the different advanced CNN
models (R-CNN, Fast R-CNN, Faster R-CNN, YOLO, etc.).
Explain sparsity of connections.

12. Can the pooling operation be replaced by stride to give the same effect as a
pooling layer? If yes, give an explanation.
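For the pooling questions above, here is a minimal sketch of 2x2 max pooling with stride 2 in pure Python (the feature map values are illustrative):

```python
def max_pool(fmap, size=2, stride=2):
    """Max pooling over a 2-D feature map given as a list of rows."""
    rows = range(0, len(fmap) - size + 1, stride)
    cols = range(0, len(fmap[0]) - size + 1, stride)
    return [[max(fmap[r + i][c + j]
                 for i in range(size) for j in range(size))
             for c in cols]
            for r in rows]

fmap = [[1, 3, 2, 1],
        [4, 6, 5, 0],
        [7, 2, 9, 8],
        [1, 0, 3, 4]]
# max_pool(fmap) -> [[6, 5], [7, 9]]
```

Note the similarity to a strided convolution: both slide a window with a step and downsample, but pooling applies a fixed reduction (max) instead of a learned filter.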

13. CNN is able to provide translation invariance. Justify the given statement.

14. Distinguish between VGG16 and InceptionV3.

15. What is a convolutional kernel? Explain how the change in kernel size will
affect the feature map generated.

16. With a neat diagram distinguish between 3D convolution and 1x1
convolution.

17. Differentiate between stride = 1 and stride = 2 with a suitable example.

UNIT 4

Unit IV Sequence Modeling

Recurrent Neural Networks, Bidirectional RNNs, Encoder-Decoder Sequence-to-Sequence
Architectures, Deep Recurrent Networks, Recursive Neural Networks, The Long
Short-Term Memory and Other Gated RNNs, Case study: Sentiment Analysis Model.
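As an anchor for the LSTM questions that follow, here is a single-step, scalar LSTM cell sketch in pure Python. The gate naming follows the standard formulation; all weights are illustrative placeholders, not trained values:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x, h_prev, c_prev, W):
    """One LSTM time step. W maps each gate name to a
    (input-weight, recurrent-weight, bias) triple."""
    f = sigmoid(W["f"][0]*x + W["f"][1]*h_prev + W["f"][2])    # forget gate
    i = sigmoid(W["i"][0]*x + W["i"][1]*h_prev + W["i"][2])    # input gate
    o = sigmoid(W["o"][0]*x + W["o"][1]*h_prev + W["o"][2])    # output gate
    g = math.tanh(W["g"][0]*x + W["g"][1]*h_prev + W["g"][2])  # candidate
    c = f * c_prev + i * g        # memory cell: keep old state + admit new
    h = o * math.tanh(c)          # hidden state / output
    return h, c
```

The additive cell update `c = f * c_prev + i * g` is what lets gradients flow across many time steps, which is the usual argument for why LSTMs handle long dependencies better than plain RNNs.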

1. Illustrate the concept of sequence-to-sequence architecture with an
example.

2. Describe the working of the encoder and decoder in a sequence-to-sequence
architecture with an example.

3. Justify how a bidirectional RNN is more efficient than a traditional RNN.

4. Can you differentiate between Deep Recurrent Networks and Recursive
Neural Networks?
5. Justify how an LSTM is more efficient than a plain RNN.

6. With a neat sketch explain the working of a GRU.

7. Can you differentiate between RNN, LSTM, and GRU and conclude
which one is most efficient?

8. List and explain the use cases of RNNs and their types with real-world examples.

9. Discuss the Teacher Forcing and Professor Forcing concepts for RNNs
with output recurrence, with suitable examples.

10. Define the following terms:

a. Recurrent Network

b. Recursive Neural Network

c. Deep Network

d. Deep Recurrent Network

11. Define the following terms and mention their use cases:

a. Input Gate in LSTM

b. Output Gate in LSTM

c. Forget Gate in LSTM

d. Memory cell

12. Discuss the Bag of Words Concept in NLP with suitable examples.
13. Discuss the TF-IDF concept in NLP with a suitable example.
14. Discuss the CBOW model concept in NLP with a suitable example.
15. Differentiate between CBOW and SkipGram Model.
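The TF-IDF question above can be made concrete with a few lines of pure Python. This sketch uses the plain tf × log(N/df) variant; many weighting schemes exist, and the corpus here is an invented toy example:

```python
import math

# Toy corpus of three tokenized documents (illustrative only).
docs = [["deep", "learning", "is", "fun"],
        ["machine", "learning", "is", "useful"],
        ["deep", "networks", "learn", "features"]]

def tf_idf(term, doc, docs):
    tf = doc.count(term) / len(doc)            # term frequency in this doc
    df = sum(1 for d in docs if term in d)     # how many docs contain it
    idf = math.log(len(docs) / df)             # rare terms get a boost
    return tf * idf

# "learning" appears in 2 of 3 documents, so its idf is low; "fun" appears
# in only 1 of 3, so it is weighted higher within the same document.
```

By contrast, a plain bag-of-words representation would just use the raw counts, with no downweighting of terms common across the corpus.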

UNIT 5

Unit V Advances in Deep Learning and Applications


Autoencoders, Undercomplete Autoencoders, Regularized Autoencoders, Representational Power, Layer
Size and Depth, Stochastic Encoders and Decoders, Denoising Autoencoders, Contractive Autoencoders,
Deep Generative Models, Applications of Deep Learning
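To make the undercomplete-autoencoder idea from the syllabus concrete, here is a hedged sketch of a linear autoencoder with a one-unit bottleneck, trained by plain SGD on 2-D points lying on the line y = x. Because the data is intrinsically one-dimensional, the bottleneck loses nothing (all weights and hyperparameters are illustrative):

```python
import random

random.seed(0)
data = [(t, t) for t in (random.uniform(-1, 1) for _ in range(50))]

e1, e2 = 0.3, 0.1        # encoder weights (illustrative initialization)
d1, d2 = 0.2, 0.4        # decoder weights
lr = 0.1

for _ in range(500):
    for x1, x2 in data:
        z = e1 * x1 + e2 * x2          # encode: 2-D input -> 1-D latent code
        r1, r2 = d1 * z, d2 * z        # decode: latent code -> reconstruction
        g1, g2 = r1 - x1, r2 - x2      # gradient of squared reconstruction error
        gz = g1 * d1 + g2 * d2         # backpropagate into the latent code
        d1 -= lr * g1 * z              # decoder update
        d2 -= lr * g2 * z
        e1 -= lr * gz * x1             # encoder update
        e2 -= lr * gz * x2

# After training, reconstruction error on the data line is near zero even
# though every point was squeezed through a single latent number.
z = e1 * 0.5 + e2 * 0.5
err = (d1 * z - 0.5) ** 2 + (d2 * z - 0.5) ** 2
```

A denoising autoencoder would follow the same structure but corrupt the input before encoding while still reconstructing the clean target; an overcomplete bottleneck would instead need regularization to avoid learning the identity map.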

1. What do you mean by autoencoders? Illustrate the working of autoencoders with their architecture.

2. List the different types of autoencoders. How is a sparse autoencoder different from an
undercomplete autoencoder?

3. Describe the Denoising Autoencoder and mention the importance of denoising autoencoders in
deep networks.

4. Mention the importance of representational power, layer size, and depth in autoencoders.

5. Illustrate the working of a stochastic encoder and mention its advantages.

6. Illustrate the working of a deep autoencoder and mention its advantages.

7. Illustrate the working of an undercomplete autoencoder and mention its advantages.

8. Illustrate the working of variational autoencoders and mention their advantages.

9. Write the advantages and drawbacks of

a. Denoising Autoencoder

b. Undercomplete Autoencoder

10. Write the advantages and drawbacks of

a. Sparse Autoencoder

b. Deep Autoencoder

11. Write the advantages and drawbacks of

a. Convolutional Autoencoder

b. Variational Autoencoder
