Deep Learning - Question Bank
UNIT 1
The Neural Network, The Neuron, Expressing Linear Perceptrons as Neurons, Feed-Forward Neural
Networks, Linear Neurons and their Limitations, Sigmoid, Tanh, and ReLU Networks, Softmax Output
Layers. Training Feed-Forward Neural Networks, Gradient Descent, The Backpropagation Algorithm,
Test Sets, Validation Sets, and Overfitting, Preventing Overfitting in Deep Neural Networks.
7) Explain the XOR problem. Why is a single perceptron model not able to solve it?
8) Explain learning rate, epoch, training set, and test set.
9) Write a note on the backpropagation algorithm.
10) What is overfitting?
11) What is regularization in DL?
12) Differentiate between MAE, MSE, and cross-entropy.
13) Explain the architectures of deep neural networks.
14) What is data normalization and why do we need it?
15) What will happen if we do not use any activation function in a neural
network? (A short sketch follows below.)
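For question 15, a minimal NumPy sketch (my own illustration, with hypothetical layer sizes) of why a network without activation functions is no more expressive than a single linear layer:

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=(4, 3))        # a small batch of 4 inputs with 3 features

    W1 = rng.normal(size=(3, 5))       # first "layer" weights (hypothetical sizes)
    W2 = rng.normal(size=(5, 2))       # second "layer" weights

    two_layers = x @ W1 @ W2           # two linear layers with no activation in between
    one_layer = x @ (W1 @ W2)          # a single equivalent linear layer

    print(np.allclose(two_layers, one_layer))   # True: extra depth adds no expressive power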
UNIT 2
Deep feedforward networks, Gradient-Based Learning, Hidden Units, Architecture Design, The
Challenges with Gradient Descent, Learning Rate Adaptation, AdaGrad—Accumulating Historical
Gradients, RMSProp—Exponentially Weighted Moving Average of Gradients, Adam—Combining
Momentum and RMSProp, Regularization for deep learning.
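A minimal NumPy sketch of the Adam update named in this unit, combining a momentum-style first moment with an RMSProp-style second moment; the quadratic loss is invented purely for illustration and the hyperparameter values are the commonly quoted defaults:

    import numpy as np

    # Toy objective: L(w) = 0.5 * ||w||^2, so the gradient is simply w.
    w = np.array([5.0, -3.0])
    m = np.zeros_like(w)                   # first moment (momentum term)
    v = np.zeros_like(w)                   # second moment (RMSProp-style accumulator)
    lr, beta1, beta2, eps = 0.1, 0.9, 0.999, 1e-8

    for t in range(1, 201):
        g = w                              # gradient of the toy loss at the current weights
        m = beta1 * m + (1 - beta1) * g    # exponentially weighted average of gradients
        v = beta2 * v + (1 - beta2) * g**2 # exponentially weighted average of squared gradients
        m_hat = m / (1 - beta1**t)         # bias correction for the zero initialization
        v_hat = v / (1 - beta2**t)
        w -= lr * m_hat / (np.sqrt(v_hat) + eps)

    print(w)                               # ends up near the minimum at [0, 0]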
UNIT 3: Convolutional Networks
The Convolution Operation, Pooling, Variants of the Basic Convolution Function, Case
study: Building a Convolutional Network for CIFAR-10, Visualizing Learning in
Convolutional Networks, Introduction to pre-trained CNNs: VGG-16, Inception, ResNet,
Case study.
2. a. If the size of the input image is 64x64x1, the kernel size is 3x3, stride = 1,
and padding = valid, what is the size of the output feature map?
b. What do you understand by pooling? State the different types of
pooling used in CNN.
3. a. If the size of the input image is 64x64x1, the kernel size is 3x3,
stride = 1, and padding = same, what is the size of the output feature
map in this case?
b. What do you understand by stride? What is the similarity between
the pooling and stride operations in CNN?
4. a. Given an input of size 64x64x3, kernel size 3x3, stride = 1,
padding = valid, and number of filters = 5, what is the size of the output
feature map along with its depth? (A worked size-formula sketch for
questions 2-4 appears after this unit's questions.)
b. State the significance of padding in CNN.
9. What are activation maps, and what is the idea behind visualizing an
activation map for an input image?
10. Draw and explain the various components of the CNN architecture.
Explain the convolution operation with a suitable example and discuss
the feature map matrix (see the convolution-and-pooling sketch after this unit's questions).
State the advantages of CNN over classical neural networks.
11. Discuss the limitations of CNN and the different advanced CNN
models (R-CNN, Fast R-CNN, Faster R-CNN, YOLO, etc.).
Explain sparsity of connections.
12. Can the pooling operation be replaced by stride to give the same effect as a
pooling layer? If yes, explain how (see the convolution-and-pooling sketch after this unit's questions).
13. CNNs are able to provide translation invariance. Justify this statement.
15. What is a convolutional kernel? Explain how a change in kernel size
affects the generated feature map.
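For the size arithmetic in questions 2-4 and 15 above, a minimal sketch of the standard output-size formula; the helper function name is my own, and the "same" branch assumes an odd kernel with stride 1:

    import math

    def conv_output_size(n, k, stride=1, padding="valid"):
        # Spatial output size of a convolution: floor((n + 2p - k) / stride) + 1.
        p = 0 if padding == "valid" else (k - 1) // 2   # "same" padding for odd kernels, stride 1
        return math.floor((n + 2 * p - k) / stride) + 1

    print(conv_output_size(64, 3, 1, "valid"))  # 62 -> a 62x62x1 map   (question 2a)
    print(conv_output_size(64, 3, 1, "same"))   # 64 -> a 64x64x1 map   (question 3a)
    # Question 4a: a 64x64x3 input with 5 filters of size 3x3, valid padding, stride 1
    # gives spatial size 62x62, and the depth equals the number of filters: 62x62x5.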
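For questions 10 and 12 above, a minimal NumPy sketch (input and kernel values invented for illustration) of the convolution operation producing a feature-map matrix, followed by a comparison of 2x2 max pooling with plain stride-2 subsampling of that map:

    import numpy as np

    def conv2d_valid(image, kernel):
        # Valid cross-correlation, the operation deep learning libraries call convolution.
        H, W = image.shape
        kh, kw = kernel.shape
        out = np.zeros((H - kh + 1, W - kw + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
        return out

    image = np.array([[1, 2, 0, 1, 3],
                      [0, 1, 3, 2, 1],
                      [1, 0, 2, 1, 0],
                      [2, 1, 0, 0, 2],
                      [0, 3, 1, 2, 1]], dtype=float)
    kernel = np.array([[1, 0],
                       [0, -1]], dtype=float)          # a tiny hand-made filter

    fmap = conv2d_valid(image, kernel)                 # 4x4 feature-map matrix
    print(fmap)

    # Question 12: 2x2 max pooling (stride 2) versus plain stride-2 subsampling.
    pooled = fmap.reshape(2, 2, 2, 2).max(axis=(1, 3)) # keeps the strongest response per window
    strided = fmap[::2, ::2]                           # keeps only the top-left sample per window
    print(pooled.shape, strided.shape)                 # both (2, 2): the same downsampling factor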
UNIT 4
7. Differentiate between RNN, LSTM, and GRU, and conclude which one is the
most efficient.
8. List and explain the use cases of RNN and its types with real-world examples:
a. Recurrent Network
c. Deep Network
11. Define the following terms and mention their use cases (an LSTM gate sketch
appears after this unit's questions):
c. Forget gate in LSTM
d. Memory cell
12. Discuss the Bag-of-Words concept in NLP with suitable examples.
13. Discuss the TF-IDF concept in NLP with a suitable example (a worked
Bag-of-Words/TF-IDF sketch appears after this unit's questions).
14. Discuss the CBOW model concept in NLP with a suitable example.
15. Differentiate between the CBOW and Skip-gram models.
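For question 11 above, a minimal NumPy sketch of how the forget gate and memory cell interact in one LSTM step; the weight matrices, sizes, and inputs are made-up placeholders rather than learned values:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    rng = np.random.default_rng(0)
    x_t = rng.normal(size=3)                   # current input
    h_prev = rng.normal(size=4)                # previous hidden state
    c_prev = rng.normal(size=4)                # previous memory cell

    # Hypothetical weight matrices; a real LSTM learns these during training.
    Wf, Wi, Wc = (rng.normal(size=(4, 7)) for _ in range(3))

    z = np.concatenate([h_prev, x_t])          # [h_{t-1}; x_t]
    f_t = sigmoid(Wf @ z)                      # forget gate: how much of the old cell to keep
    i_t = sigmoid(Wi @ z)                      # input gate: how much new content to admit
    c_tilde = np.tanh(Wc @ z)                  # candidate memory content
    c_t = f_t * c_prev + i_t * c_tilde         # memory cell carries long-range information

    print(c_t)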
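For questions 12 and 13 above, a minimal pure-Python sketch of Bag-of-Words counts and TF-IDF weights on a toy two-document corpus; the documents are invented for illustration:

    import math
    from collections import Counter

    corpus = ["the cat sat on the mat", "the dog sat on the log"]
    docs = [doc.split() for doc in corpus]
    vocab = sorted({w for doc in docs for w in doc})

    # Bag of Words: each document becomes a vector of raw term counts.
    bow = [[Counter(doc)[w] for w in vocab] for doc in docs]

    # TF-IDF: term frequency scaled down by how many documents contain the term.
    def tfidf(doc):
        counts = Counter(doc)
        weights = []
        for w in vocab:
            tf = counts[w] / len(doc)
            df = sum(1 for d in docs if w in d)
            idf = math.log(len(docs) / df)
            weights.append(tf * idf)
        return weights

    print(vocab)
    print(bow)                       # "the" has the highest raw count in both documents...
    print([tfidf(d) for d in docs])  # ...but its TF-IDF weight is 0, since it occurs everywhere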
UNIT 5
1. What do you mean by autoencoders? Illustrate the working of an autoencoder with its
architecture (a minimal sketch appears at the end of this unit).
2. List the different types of autoencoders. How is a sparse autoencoder different from an
undercomplete autoencoder?
5. Illustrate the working of a stochastic encoder and mention its advantages.
6. Illustrate the working of a deep autoencoder and mention its advantages.
7. Illustrate the working of an undercomplete autoencoder and mention its advantages.
8. Illustrate the working of a variational autoencoder and mention its advantages.
a. Denoising Autoencoder
b. Undercomplete Autoencoder
a. Sparse Autoencoder
b. Deep Autoencoder
a. Convolutional Autoencoder
b. Variational Autoencoder
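For the autoencoder questions above, a minimal PyTorch sketch (arbitrary layer sizes and random toy data, purely for illustration) of an undercomplete autoencoder: the narrow 8-dimensional code forces a compressed representation, and training minimizes reconstruction error:

    import torch
    import torch.nn as nn

    # Undercomplete autoencoder: the 8-dimensional code is a bottleneck.
    encoder = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 8))
    decoder = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 64))
    model = nn.Sequential(encoder, decoder)

    x = torch.randn(128, 64)                   # a toy batch of 128 samples
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()                     # reconstruction error

    for step in range(200):
        opt.zero_grad()
        loss = loss_fn(model(x), x)            # reconstruct the input from the code
        loss.backward()
        opt.step()

    print(loss.item())                         # final reconstruction loss on the toy batch
    print(encoder(x).shape)                    # torch.Size([128, 8]) -- the learned code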