
Roll No:......................

Dr B R Ambedkar National Institute of Technology, Jalandhar


B Tech 6th Semester (Electrical Engineering)
EEPE – 354, Soft Computing Techniques
Mid Semester Examination, March 2024
Duration: 120 Minutes Max. Marks: 30 Date: 19th March 2024

Marks Distribution & Mapping of Questions with Course Outcomes (COs)


Question Number    1   2   3   4   5
Marks              6   6   6   6   6
CO No.             1   2   3   2   3
Learning Level     3   1   2   3   1

Answer all the following questions and assume any missing data.

1) A CNN network architecture is given below:

Input: 32x32x3 (RGB image)
Layer 1: Convolutional; kernel size: 5x5; input channels: 3 (RGB); output channels: 16; stride: 1; padding: 'same'
Layer 2: Max pooling; pool size: 2x2; stride: 2
Layer 3: Convolutional; kernel size: 3x3; input channels: 16 (from Layer 1); output channels: 32; stride: 1; padding: 'same'
Layer 4: Max pooling; pool size: 2x2; stride: 2
Layer 5: Fully connected; input size: 8x8x32 (from Layer 4); output size: 64
Layer 6: Fully connected (output); input size: 64 (from Layer 5); output size: 10 (for 10 classes)

1.1) Explain the formula used to calculate the number of parameters for each
layer type (convolutional, pooling, and fully connected). (2 marks)
1.2) Provide the calculations for each layer in the given architecture, clearly
stating any assumptions or simplifications made. (2 marks)
1.3) Show the total number of trainable parameters in the entire CNN
model. (2 marks)
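The counting scheme asked for in 1.1–1.3 can be sanity-checked with a short sketch (illustrative only, assuming every convolutional and fully connected layer has bias terms; pooling layers contribute no parameters):

```python
# Parameter counts for the architecture above.
def conv_params(k, c_in, c_out):
    # each filter has k*k*c_in weights plus one bias; c_out filters in total
    return (k * k * c_in + 1) * c_out

def fc_params(n_in, n_out):
    # full weight matrix plus one bias per output unit
    return n_in * n_out + n_out

layer1 = conv_params(5, 3, 16)       # 1216
layer3 = conv_params(3, 16, 32)      # 4640
layer5 = fc_params(8 * 8 * 32, 64)   # 131136
layer6 = fc_params(64, 10)           # 650
total = layer1 + layer3 + layer5 + layer6  # pooling layers add nothing
print(total)  # 137642
```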

2.1) Explain the difference between a softmax layer and a sigmoid layer. (2 marks)

2.2) A dataset consists of 1000 images. If mini-batch gradient descent is used (mini-batch size of 100) and the number of epochs is 20, identify how many times the weights are updated after completing all epochs. If stochastic gradient descent is used instead, how many times will the weights be updated after 20 epochs? (2 marks)

2.3) Explain why convolutional neural networks are better suited for computer vision problems compared to fully connected neural networks. (2 marks)

3.1) For the input x = [2, 1, -1], calculate the output values when it is passed through a) sigmoid, b) softmax, c) ReLU, and d) tanh activation functions. (2 marks)
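The four activations in 3.1 can be evaluated with a standard-library sketch (illustrative, using the usual elementwise definitions; only softmax couples the components):

```python
import math

x = [2.0, 1.0, -1.0]

sigmoid = [1 / (1 + math.exp(-v)) for v in x]        # ~[0.8808, 0.7311, 0.2689]
exps = [math.exp(v) for v in x]
softmax = [e / sum(exps) for e in exps]              # ~[0.7054, 0.2595, 0.0351]
relu = [max(0.0, v) for v in x]                      # [2.0, 1.0, 0.0]
tanh = [math.tanh(v) for v in x]                     # ~[0.9640, 0.7616, -0.7616]

print([round(v, 4) for v in sigmoid])  # [0.8808, 0.7311, 0.2689]
```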
3.2) Explain vanishing gradient and exploding gradient problem in neural
networks. (2 marks)
3.3) Consider a single neuron with two input features (x1, x2), weights (w1, w2), and a bias (b). If x1 = 0.5, x2 = 0.1 and the weights are w1 = w2 = b = 0.2, calculate the output when ReLU activation is used in the neuron. (2 marks)
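The single-neuron computation in 3.3 is a weighted sum followed by ReLU (illustrative sketch using the values given in the question):

```python
x1, x2 = 0.5, 0.1
w1, w2, b = 0.2, 0.2, 0.2

z = w1 * x1 + w2 * x2 + b   # 0.1 + 0.02 + 0.2 = 0.32
out = max(0.0, z)           # ReLU: z is positive, so it passes through
print(round(out, 2))        # 0.32
```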
4.1) Write the commands, with reference to a model developed with TensorFlow: (2 marks)
a) For saving the trained model completely
b) For saving only the weights of the trained model
c) To load the saved model
d) To view the summary of the model

4.2) In which format (extension) is the complete model saved in TensorFlow, and explain the advantages of using that particular format. (2 marks)

4.3) What are the distinct roles of the compile and fit methods in TensorFlow when training a neural network model, and how do they differ in terms of functionality? (2 marks)

5.1) Pictorially represent the dropout technique and explain whether the dropout layer will be used during testing. (2 marks)
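The train-versus-test behaviour asked about in 5.1 can be sketched with plain Python (illustrative "inverted dropout", the variant commonly used in practice; at test time the layer acts as the identity, so no separate rescaling is needed):

```python
import random

def dropout(activations, p, training):
    # training: zero each unit with probability p and rescale the
    # survivors by 1/(1-p) so the expected activation is unchanged.
    # testing: the layer is a pass-through (dropout is not applied).
    if not training:
        return list(activations)
    return [0.0 if random.random() < p else a / (1 - p) for a in activations]

random.seed(0)
a = [1.0, 2.0, 3.0, 4.0]
print(dropout(a, 0.5, training=True))   # some units zeroed, survivors doubled
print(dropout(a, 0.5, training=False))  # unchanged: [1.0, 2.0, 3.0, 4.0]
```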

5.2) How is Adam optimisation different from RMSprop? (2 marks)

5.3) Identify true or false in the following statements: (2 marks)

a) Max pooling increases the spatial dimensions of the input volume.
b) A larger stride value results in a more significant reduction in spatial dimensions.
c) The number of filters in a convolutional layer determines the depth of the output volume.
d) Increasing the number of filters reduces the network's capacity to learn complex features.
