
ADITYA COLLEGE OF ENGINEERING

(UGC-Autonomous Institution)
(Approved by AICTE, New Delhi & Affiliated to JNTUA, Anantapuramu)
Madanapalle -517325, Annamayya Dist., A.P. www.acem.ac.in

DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING


(ARTIFICIAL INTELLIGENCE)

QUESTION BANK

III B.Tech I SEMESTER


COMPUTER VISION and IMAGE PROCESSING

Regulation – 2023

Academic Year 2025 – 2026 (ODD)

Prepared by

Dr.P.GANGADHARA REDDY

Associate Professor, Department of ECE


Unit I: Introduction to Computer Vision and Image Processing

MULTIPLE CHOICE QUESTIONS (MCQs)


1. What is the primary goal of Computer Vision?
A) Enhance images
B) Compress images for storage
C) Understand and interpret visual information
D) Convert text to speech

2. Which of the following is a common application of Computer Vision?


A) Voice recognition
B) Face detection
C) Text summarization
D) Sound filtering

3. What is the first step in a typical image processing pipeline?


A) Feature extraction
B) Image segmentation
C) Image acquisition
D) Object classification

4. Which library is most commonly used for Computer Vision tasks in Python?
A) NumPy
B) TensorFlow
C) OpenCV
D) Pandas

5. Converting a color image to grayscale is an example of:


A) Feature extraction
B) Image segmentation
C) Image enhancement
D) Image transformation

6. What does thresholding do in image processing?

A) Enhances colors
B) Finds edges
C) Converts grayscale images to binary images
D) Removes noise

7. Which of the following is a morphological operation?


A) Fourier Transform
B) Histogram Equalization
C) Dilation
D) Gaussian Blur

8. Which edge detection technique is considered more accurate in detecting true edges?
A) Prewitt
B) Sobel
C) Laplacian
D) Canny

9. In Computer Vision, segmentation refers to:


A) Resizing an image
B) Enhancing color contrast
C) Dividing an image into meaningful parts
D) Converting image to grayscale

10. Which of the following is NOT a goal of image processing?


A) Image acquisition
B) Object classification
C) Image enhancement
D) Noise reduction

11. Which of the following is NOT a component of computer vision?


A. Image understanding
B. Image restoration
C. Natural language translation
D. Object recognition

12. Which of the following devices is used for image acquisition?
A. Printer
B. Modem
C. Digital Camera
D. Speaker

13. Which of the following color models is most commonly used in digital imaging?
A. CMYK
B. RGB
C. HSV
D. YCbCr

14. What is the primary goal of histogram equalization?


A. Edge detection
B. Image compression
C. Improving image contrast
D. Color conversion

15. Which of the following is a point operation?


A. Convolution
B. Histogram equalization
C. DFT
D. Color transformation

16. The Discrete Fourier Transform is used to:


A. Sharpen an image
B. Convert image data to the frequency domain
C. Blur an image
D. Perform segmentation

17. Which filter is typically used for edge detection in spatial filtering?
A. Gaussian filter
B. Mean filter
C. Sobel filter

D. Median filter

18. Quantization in image processing results in:


A. Spatial resolution
B. Temporal resolution
C. Intensity resolution
D. Color correction

19. Which of the following is not a frequency domain operation?


A. Low-pass filtering
B. High-pass filtering
C. Histogram equalization
D. Band-reject filtering

20. Sampling in image processing refers to:


A. Measuring pixel intensity
B. Reducing image size
C. Selecting discrete spatial locations from a continuous image
D. Increasing contrast

FILL IN THE BLANKS

21. Computer Vision aims to enable machines to __understand__ and __interpret__ images and visual data like humans.

22. Image acquisition methods include devices such as __cameras__ and __scanners__.

23. In image processing, the process of converting a continuous image to a discrete image is called __sampling__.

24. The process of assigning discrete intensity values to sampled pixels is known as __quantization__.

25. RGB, HSV, and CMYK are examples of __color models (color spaces)__.

26. __Point__ operations in image processing involve modifying pixel values independently.

27. Histogram __equalization__ is used to improve the contrast of an image.

28. Spatial filtering applies a __kernel (mask)__ over an image to highlight certain features.

29. The __Fourier__ Transform converts spatial domain data to frequency domain.

30. __Image restoration__ is the process of recovering a degraded image.

2-MARK QUESTIONS (Short Answer)


1. What is Computer Vision?

Computer Vision is a field of Artificial Intelligence (AI) and Computer Science that enables
computers and systems to derive meaningful information from digital images, videos, and other
visual inputs—and take action or make recommendations based on that information.

Examples:

• Face recognition (e.g., in smartphones)


• Object detection (e.g., in autonomous vehicles)
• Medical imaging analysis
• Surveillance systems
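
For reference, a minimal face-detection sketch in Python is given below. It is only an illustration, assuming the opencv-python package is installed and that a local image file named group_photo.jpg exists (the file name is hypothetical); it uses the frontal-face Haar cascade bundled with OpenCV.

import cv2

# Load the pre-trained frontal-face Haar cascade shipped with opencv-python
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_cascade = cv2.CascadeClassifier(cascade_path)

img = cv2.imread("group_photo.jpg")            # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)   # detection runs on grayscale

# detectMultiScale returns one (x, y, w, h) rectangle per detected face
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("faces_marked.jpg", img)
print(f"Detected {len(faces)} face(s)")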

2. What is Image Processing?

Image Processing refers to a set of techniques for manipulating and analyzing digital images to
improve their quality or extract useful information.

Types:

• Analog Image Processing – done on physical media (e.g., photography)


• Digital Image Processing – done using computers and digital techniques

3. Difference Between Computer Vision and Image Processing



Feature   | Image Processing                     | Computer Vision
Purpose   | Improve image quality                | Understand image content
Output    | Enhanced image                       | Description/interpretation of image
Examples  | Noise removal, contrast enhancement  | Face recognition, object tracking

4. Key Stages in Image Processing

1. Image Acquisition – capturing the image using sensors or cameras


2. Pre-processing – improving image quality (e.g., noise removal, resizing)
3. Segmentation – dividing the image into meaningful parts or regions
4. Feature Extraction – identifying important patterns (edges, shapes)
5. Image Recognition / Interpretation – analyzing features for classification
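
As a rough illustration, the sketch below maps these five stages onto a small OpenCV workflow; the input file coins.jpg and the area threshold are assumptions chosen only for illustration.

import cv2

# 1. Image acquisition: here, simply read a stored image (a camera capture in practice)
img = cv2.imread("coins.jpg")

# 2. Pre-processing: grayscale conversion and noise reduction
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (5, 5), 0)

# 3. Segmentation: separate objects from background with Otsu thresholding
_, binary = cv2.threshold(blurred, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# 4. Feature extraction: object contours and their areas
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
areas = [cv2.contourArea(c) for c in contours]

# 5. Interpretation: a trivial "classification" based on area (illustrative rule only)
large_objects = [a for a in areas if a > 500]
print(f"Found {len(contours)} objects, {len(large_objects)} of them large")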

5. Common Image Processing Operations

• Grayscale conversion
• Image filtering (smoothing, sharpening)
• Histogram equalization
• Thresholding
• Edge detection (e.g., using Sobel or Canny operators)
• Morphological operations (e.g., dilation, erosion)
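
Each of the operations listed above has a direct OpenCV counterpart. The sketch below is a minimal one-line-per-operation reference, assuming the opencv-python and numpy packages and a placeholder input file input.png.

import cv2
import numpy as np

img = cv2.imread("input.png")                                   # placeholder file
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)                    # grayscale conversion
smooth = cv2.GaussianBlur(gray, (5, 5), 0)                      # smoothing filter
sharp = cv2.addWeighted(gray, 1.5, smooth, -0.5, 0)             # unsharp-mask sharpening
equalized = cv2.equalizeHist(gray)                              # histogram equalization
_, binary = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)    # thresholding
edges = cv2.Canny(gray, 100, 200)                               # Canny edge detection
kernel = np.ones((3, 3), np.uint8)
dilated = cv2.dilate(binary, kernel, iterations=1)              # morphological dilation
eroded = cv2.erode(binary, kernel, iterations=1)                # morphological erosion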

6. Tools and Libraries

• OpenCV (Open Source Computer Vision Library) – Most widely used for image/video
processing
• PIL / Pillow – For basic image manipulation in Python
• scikit-image – For scientific image processing in Python
• MATLAB – Common in academic settings

7. Applications of Computer Vision and Image Processing

• Healthcare – Diagnosing diseases from X-rays, MRIs


• Autonomous Vehicles – Lane detection, pedestrian recognition
• Agriculture – Monitoring crop health
• Retail – Customer behavior analysis using cameras
• Security – Face and activity recognition in CCTV footage

8. Name any two image acquisition devices.


9. What is sampling in the context of image representation?
10. Define quantization in image processing.
11. What is the purpose of histogram equalization?
12. List any two point operations in image processing.
13. What is the role of a kernel in spatial filtering?
14. Write the full form of DFT and mention its use.
15. What is the basic idea behind frequency domain filtering?
16. What is image restoration?
17. Give two differences between spatial and frequency domain processing.
18. Mention one use of Fourier Transform in image processing.

ESSAY TYPE QUESTIONS (5/10 Marks)


1. Explain the scope and applications of computer vision in real-life scenarios.
2. Describe the historical development of computer vision and its evolution.
3. Discuss the methods of image acquisition and how images are formed and represented
digitally.
4. Explain the concepts of sampling and quantization in digital image processing with
suitable diagrams.
5. Compare and contrast different color spaces like RGB, HSV, and YCbCr. Explain
their relevance in image processing.
6. Describe various point operations used in image enhancement such as brightness and
contrast adjustments.
7. Explain histogram processing and its types with examples.
8. Describe spatial filtering techniques. How do smoothing and sharpening filters work?
9. Explain the concept of Fourier Transform in image processing. How does the Discrete
Fourier Transform (DFT) help in frequency analysis of images?
10. Discuss the various frequency domain filtering techniques and their applications.
11. Explain the concept and steps involved in image restoration. Give examples.

12. Write a detailed note on the differences between spatial domain and frequency domain
techniques in image processing.
13. Define Computer Vision. Explain its significance and list any five real-world
applications.
Hint: Discuss how it mimics human visual perception, fields like healthcare, autonomous vehicles,
etc.

14. Differentiate between Computer Vision and Image Processing with suitable examples.
Hint: Focus on goals (understanding vs. improving images), mention examples like noise removal
vs. object recognition.
15. Describe the key steps in a typical digital image processing pipeline.
Hint: Include image acquisition, pre-processing, segmentation, feature extraction, interpretation.
16. Discuss the role of OpenCV in Computer Vision and Image Processing. Mention some
basic functions it offers.
Hint: Python/C++ support, image filtering, face detection, edge detection, object tracking.
17. Explain the importance of image pre-processing in computer vision. Discuss any four
techniques used in pre-processing.
Hint: Techniques like resizing, normalization, noise removal, grayscale conversion.
18. Describe the concept of image segmentation and its importance in Computer Vision.
Provide examples of segmentation techniques.
Hint: Thresholding, edge-based, region-based methods, semantic segmentation in medical
imaging or self-driving cars.
19. Explain edge detection in image processing. Compare any two popular edge detection
techniques.
Hint: Talk about Sobel, Canny, Laplacian operators, and their strengths/weaknesses.
20. Discuss the applications of Computer Vision in the following fields: Healthcare,
Agriculture, Retail, and Surveillance.
Hint: Disease diagnosis, crop health monitoring, customer tracking, face recognition in CCTV.
21. What are morphological operations in image processing? Explain any two operations
with suitable diagrams.
Hint: Dilation, Erosion, Opening, Closing – and their effects on binary images.
22. Explain how a grayscale image is represented in a computer. Why is it important in
image processing?
Hint: Pixel intensity values, matrices, simplifying computation and reducing data size.


Unit II: Image Analysis Techniques

FILL IN THE BLANKS:

1. The Sobel operator is used to detect __edges__ in an image.

2. The Canny edge detector involves smoothing the image using a __Gaussian__ filter.

3. Prewitt operator is similar to Sobel but uses a different set of __kernels (masks)__.

4. Corner detection helps in identifying points of __interest__ in the image.

5. Thresholding is a basic technique for __image__ segmentation.

6. K-means and Mean-Shift are types of __clustering__-based segmentation.

7. Erosion removes pixels from the __boundary__ of an object in a binary image.

8. Dilation adds pixels to the __boundaries__ of objects in an image.

9. Opening is erosion followed by __dilation__.

10. Closing is dilation followed by __erosion__.

11. Co-occurrence matrices are used for __texture__ analysis.

12. Gabor filters are useful for detecting __frequency__ and texture orientation.

13. Morphological operations are mainly applied to __binary__ images.

14. Mean-Shift does not require pre-specifying the number of __clusters__.

15. Edge detection is a fundamental step in many __pattern__ recognition systems.
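
As a concrete reference for the edge and corner detectors named above, the following minimal sketch applies Sobel, Canny, and Harris corner detection with OpenCV; shapes.png is an assumed test image.

import cv2
import numpy as np

gray = cv2.imread("shapes.png", cv2.IMREAD_GRAYSCALE)   # assumed test image

# Gradient-based edges: Sobel in x and y, combined into a gradient magnitude image
gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
sobel_mag = cv2.magnitude(gx, gy)

# Canny: Gaussian smoothing, gradient, non-maximum suppression, hysteresis thresholding
edges = cv2.Canny(gray, threshold1=100, threshold2=200)

# Harris corner response; large positive values indicate corners
harris = cv2.cornerHarris(np.float32(gray), blockSize=2, ksize=3, k=0.04)
corner_mask = harris > 0.01 * harris.max()
print(f"Corner pixels: {int(corner_mask.sum())}")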

MULTIPLE CHOICE QUESTIONS (MCQs):

1. Which of the following is a gradient-based edge detector?
A. Gabor filter
B. Prewitt operator
C. K-means clustering
D. Mean-Shift
2. Canny edge detector performs edge detection in how many stages?
A. 2
B. 3
C. 4
D. 5
3. Which technique is most commonly used for corner detection?
A. K-means
B. Harris detector
C. Gabor filter
D. Thresholding
4. Which of the following segmentation techniques is not region-based?
A. Thresholding
B. Region growing
C. Split-and-merge
D. K-means
5. In K-means clustering, the number of clusters (K) is:
A. Automatically determined
B. Given as input
C. Always 2
D. Based on mean values
6. What is the main application of morphological operations?
A. Texture detection
B. Shape analysis
C. Color transformation
D. Fourier transform
7. Which morphological operation helps to remove small white noise?
A. Dilation
B. Erosion

C. Opening
D. Closing

8. A co-occurrence matrix is used to analyze:


A. Edge maps
B. Binary shapes
C. Pixel intensity relationships
D. Color values

9. Gabor filters are best suited for:


A. Edge detection
B. Texture analysis
C. Color segmentation
D. Histogram equalization

10. Which of the following is NOT a clustering-based segmentation method?


A. K-means
B. Mean-Shift
C. Otsu’s method
D. Spectral clustering

2-MARK QUESTIONS (Short Answer)

1. What is edge detection? Why is it important in image processing?


2. Differentiate between Sobel and Prewitt operators.
3. List the steps involved in the Canny edge detection algorithm.
4. What is a corner in an image? Give one example of a corner detection technique.
5. Define thresholding. Mention one application.
6. What is region-based image segmentation?
7. Write a short note on K-means clustering for image segmentation.
8. What is the advantage of Mean-Shift clustering over K-means?
9. Define erosion in morphological image processing.
10. What is dilation and when is it used?
11. What is the purpose of morphological opening?
12. What operation follows dilation in morphological closing?
13. Mention two applications of morphological operations.
14. What is a co-occurrence matrix in texture analysis?
15. What is the use of Gabor filters in texture analysis?

ESSAY-TYPE QUESTIONS (5/10 Marks)

1. Explain gradient-based edge detection techniques using Sobel and Prewitt operators.
Compare their masks and outputs.
2. Describe the Canny edge detection process in detail. Explain why it is considered a
robust method.
3. Discuss different techniques used for corner and interest point detection. How are these
points useful in computer vision?
4. Explain the various image segmentation techniques: thresholding, region-based
methods, and clustering-based methods.
5. Compare and contrast K-means and Mean-Shift clustering techniques for image
segmentation. Include their advantages and limitations.
6. Describe erosion and dilation with diagrams. How are they used in binary image
processing?
7. Explain morphological operations: opening and closing. Discuss their applications in
shape analysis and noise removal.
8. What is texture in an image? Explain how statistical and transform-based methods are
used for texture analysis.
9. Describe the construction and use of a co-occurrence matrix in texture classification.
10. What are Gabor filters? Explain their role in transform-based texture analysis. Give
examples of their application.

Unit III: 3D Vision and Motion Analysis in Computer Vision

FILL IN THE BLANKS:

1. Epipolar geometry describes the geometric relationship between two __camera__ views of the same scene.

2. The line corresponding to a point in the other camera's image is called the __epipolar__ line.

3. The difference in image location of the same 3D point when projected onto two image planes is called __disparity__.

4. Depth estimation can be derived using disparity and the cameras' __baseline__.

5. Structure from Motion (SfM) uses multiple images to recover 3D structure and __camera motion__.

6. In SfM, features are tracked across multiple __frames__.

7. Lucas-Kanade and Horn-Schunck methods are used to compute __optical__ flow.

8. Optical flow represents the __motion__ of objects between consecutive frames.

9. Motion segmentation separates moving objects from the __background__.

10. Intrinsic parameters of a camera include the focal length and the __principal__ point (optical center).

11. Extrinsic parameters represent the camera's position and __orientation__ in 3D space.

12. The process of estimating a camera's internal characteristics is known as camera __calibration__.

13. A 3D representation of a scene can be constructed using a __point__ cloud.

14. Stereo vision relies on two slightly different views to estimate __depth__.

15. Feature tracking is essential in computing both optical flow and __structure__ from motion.
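
The depth relationship implied above (depth = focal length × baseline / disparity) can be tried with OpenCV's block-matching stereo, as in the minimal sketch below; the rectified pair left.png/right.png and the focal length and baseline values are assumptions for illustration only.

import cv2
import numpy as np

# Rectified stereo pair (hypothetical file names)
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block-matching stereo: numDisparities must be a multiple of 16
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # fixed-point to pixels

# Depth from disparity: Z = f * B / d  (f in pixels, B in metres, both assumed values)
focal_px = 700.0
baseline_m = 0.12
valid = disparity > 0
depth = np.zeros_like(disparity)
depth[valid] = focal_px * baseline_m / disparity[valid]
print("Median scene depth (m):", np.median(depth[valid]))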


MULTIPLE CHOICE QUESTIONS (MCQs):

16. What does epipolar geometry help with in stereo vision?


A. Color correction
B. Image segmentation
C. Finding corresponding points in two images
D. Histogram equalization

17. Disparity is inversely proportional to:


A. Brightness
B. Depth
C. Color
D. Motion

18. Which method is used to estimate 3D structure from multiple images over time?
A. Image fusion
B. Depth from focus
C. Structure from Motion (SfM)
D. Optical flow

19. The Lucas-Kanade method assumes:


A. Large motion
B. Uniform brightness
C. Discontinuous flow
D. Known depth

20. Which of the following techniques is NOT used for optical flow estimation?
A. Horn-Schunck
B. Lucas-Kanade
C. Gabor filter
D. Farneback

21. Which component describes the internal geometry of the camera?
A. Extrinsic parameters
B. Epipolar geometry
C. Intrinsic parameters
D. Disparity map

22. Which technique helps generate a 3D point cloud?


A. Thresholding
B. Region growing
C. Stereo triangulation
D. Histogram matching

23. Motion segmentation is useful in:


A. Static object detection
B. Removing color noise
C. Separating moving objects from the background
D. Histogram matching

24. Camera calibration is required to estimate:


A. Histogram
B. Image depth
C. Camera’s intrinsic and extrinsic parameters
D. Color balance

25. Structure from Motion (SfM) primarily requires:


A. Single static image
B. Labeled dataset
C. Multiple video frames or images
D. Edge maps only

2-MARK QUESTIONS (Short Answer)

1. Define epipolar geometry.


2. What is disparity in stereo vision?
3. Mention any two depth estimation techniques used in stereo vision.
4. What is the significance of the baseline in stereo vision?
5. Define Structure from Motion (SfM).
6. What is the role of feature tracking in SfM?
7. List two applications of 3D reconstruction from motion.
8. What is optical flow?
9. Name two optical flow estimation methods.
10. Differentiate between the Lucas-Kanade and Horn-Schunck methods.
11. What is motion segmentation?
12. Define intrinsic parameters of a camera. Give an example.
13. What are extrinsic parameters in camera calibration?
14. Mention two common camera calibration techniques.
15. What is a 3D point cloud, and how is it generated?

ESSAY TYPE QUESTIONS (5-10 Marks)

1. Explain epipolar geometry with a neat diagram. How is it used in stereo vision for
correspondence matching?
2. Describe the process of disparity mapping and how it is used for depth estimation in
stereo vision.
3. Discuss different depth estimation techniques in stereo vision and their significance in
3D perception.
4. Explain the concept of Structure from Motion (SfM). Describe the steps involved in 3D
reconstruction using SfM.
5. Write a detailed note on feature tracking across multiple frames. How does it help in
recovering 3D structure?
6. Compare and contrast Lucas-Kanade and Horn-Schunck methods for optical flow
computation.
7. What is motion segmentation? Describe its importance in dynamic scene understanding
with suitable examples.

8. Explain camera calibration. Describe the intrinsic and extrinsic parameters and their
role in 3D reconstruction.
9. Describe the complete process of camera calibration using a known object or
checkerboard pattern.
10. What is a 3D point cloud? Explain the techniques used to generate point clouds from
stereo or motion-based systems.
11. Discuss the applications of 3D vision and motion analysis in real-world scenarios such as
robotics, autonomous vehicles, and medical imaging.

Unit IV: Object Recognition and Machine Learning in Vision

FILL IN THE BLANKS

1. SIFT stands for __Scale-Invariant Feature Transform__.

2. SURF is an acronym for __Speeded-Up Robust Features__.

3. Feature descriptors such as SIFT and SURF are used to identify and match __keypoints (features)__ across images.

4. Template matching involves comparing a template image with a __target__ image.

5. A deformable part model allows for object detection by modeling both parts and their __spatial relationships__.

6. CNN stands for __Convolutional Neural Network__.

7. CNNs are widely used in vision tasks because they can automatically learn __features__ from raw image data.

8. In supervised learning, the model is trained using input-output pairs called __labeled data__.

9. SVM is short for __Support Vector Machine__.

10. Decision trees and random forests are examples of __supervised__ learning algorithms.

11. Autoencoders are used for unsupervised learning and __compression (representation learning)__ of data.

12. RNNs are especially useful for handling __sequential__ data.

13. A GAN consists of a generator and a __discriminator__.

14. Random forests are made by combining several __decision__ trees.

15. Unsupervised learning discovers hidden patterns in data without using __output (target)__ labels.
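
A minimal sketch of SIFT keypoint detection and matching is shown below; it assumes an OpenCV build of version 4.4 or later (where cv2.SIFT_create is available) and two overlapping images scene1.jpg and scene2.jpg chosen purely for illustration.

import cv2

img1 = cv2.imread("scene1.jpg", cv2.IMREAD_GRAYSCALE)   # assumed image pair
img2 = cv2.imread("scene2.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)   # keypoints + 128-D descriptors
kp2, des2 = sift.detectAndCompute(img2, None)

# Brute-force matching with Lowe's ratio test to keep only distinctive matches
bf = cv2.BFMatcher(cv2.NORM_L2)
matches = bf.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
print(f"{len(kp1)} / {len(kp2)} keypoints, {len(good)} good matches")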

MULTIPLE CHOICE QUESTIONS (MCQs)

1. Which of the following is a keypoint descriptor technique?


A. CNN
B. SVM
C. SIFT
D. RNN

2. Which feature descriptor is known for its speed over SIFT?


A. HOG
B. CNN
C. SURF
D. GAN

3. In template matching, the best match is found by:


A. Random selection
B. Histogram comparison
C. Minimizing a similarity measure

D. Clustering

4. Deformable part models are useful because:


A. They work only with rigid objects
B. They ignore spatial arrangements
C. They handle object deformations
D. They require labeled keypoints

5. Which model is most suitable for image classification tasks?


A. CNN
B. RNN
C. SIFT
D. GAN

6. Support Vector Machines (SVMs) are used for:


A. Feature extraction
B. Object rendering
C. Classification and regression
D. Clustering

7. Which algorithm combines many decision trees to improve accuracy?


A. Autoencoder
B. SVM
C. Random Forest
D. K-means

8. Autoencoders are primarily used for:


A. Classification
B. Dimensionality reduction
C. Clustering
D. Object detection

9. Which deep learning model is used to generate new, synthetic data?
A. RNN
B. CNN
C. GAN
D. SVM

10. Which of the following is best suited for time-series or sequential data?
A. CNN
B. RNN
C. GAN
D. Random Forest

2-MARK QUESTIONS (Short Answer)

1. What is the main advantage of SIFT in feature detection?


2. How does SURF differ from SIFT in terms of performance?
3. What is the purpose of feature matching in object recognition?
4. Define template matching in object detection.
5. What is a deformable part model?
6. List two advantages of using CNNs for image classification.
7. Differentiate between supervised and unsupervised learning.
8. What is the role of Support Vector Machines (SVM) in vision applications?
9. Mention one advantage and one disadvantage of decision trees.
10. What is a random forest, and how does it improve accuracy?
11. State one application of autoencoders in computer vision.
12. What kind of data is best suited for Recurrent Neural Networks (RNNs)?
13. What are the two main components of a Generative Adversarial Network (GAN)?
14. List any two machine learning techniques used in object recognition.
15. Define feature descriptor and give one example.

ESSAY-TYPE QUESTIONS (5/10 Marks)

1. Explain SIFT and SURF algorithms. How are they used for feature detection and matching
in images?
2. Describe the steps involved in template matching. What are its advantages and limitations?
3. Explain deformable part models in object recognition. How do they handle variations in
object appearance?
4. Discuss the architecture of a Convolutional Neural Network (CNN). How does it work for
object classification?
5. Compare supervised and unsupervised learning with suitable examples. How are they
applied in vision tasks?
6. Write a detailed note on Support Vector Machines (SVMs). Explain their use in
classification problems with examples.
7. Explain the working of decision trees and random forests. Highlight their differences and
use cases in vision.
8. Describe autoencoders and their applications in vision, such as noise reduction and
representation learning.
9. Explain Recurrent Neural Networks (RNNs). How are they different from CNNs in terms
of data processing?
10. What are Generative Adversarial Networks (GANs)? Explain the working of generator
and discriminator with applications.
11. Discuss the role of machine learning in object recognition. Mention key algorithms and
their strengths.
12. Compare deep learning architectures (CNN, RNN, Autoencoders, GANs) used in vision
applications.

Unit V: Applications and Advanced Topics

FILL IN THE BLANKS

1. JPEG is an example of a __lossy__ compression technique.

2. PNG format uses __lossless__ compression, preserving all image data.

3. In lossy compression, some data is __discarded__ to reduce file size.

4. Morphological operations like erosion and dilation are applied to __binary__ images.

5. Dilation adds pixels to object __boundaries__ in binary images.

6. The process of removing small noise from an image is achieved using the __opening__ operation.

7. Closing is used to fill small __holes__ in binary images.

8. Morphological processing is widely used in __shape__ analysis.

9. Face recognition systems use techniques from feature extraction and __machine__ learning.

10. Automated visual inspection is commonly used in __quality__ control applications.

11. Medical image analysis aids in the diagnosis of diseases by analyzing __clinical (medical)__ images.

12. Opening is a combination of erosion followed by __dilation__.

13. Closing is a combination of dilation followed by __erosion__.

14. In visual inspection systems, cameras are used to detect surface __defects__ in manufactured items.

15. Morphological operations are based on the shape and __geometry__ of structures within an image.
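
To make the opening/closing and lossy-versus-lossless points concrete, the minimal sketch below cleans a binary mask with OpenCV morphology and then writes the result as both JPEG and PNG to compare file sizes; mask.png and the JPEG quality value are assumptions.

import os
import cv2
import numpy as np

mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)     # assumed binary image
kernel = np.ones((5, 5), np.uint8)

# Opening (erosion then dilation) removes small white noise specks
opened = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
# Closing (dilation then erosion) fills small holes inside objects
closed = cv2.morphologyEx(opened, cv2.MORPH_CLOSE, kernel)

# Lossy vs lossless storage of the cleaned result
cv2.imwrite("cleaned.jpg", closed, [cv2.IMWRITE_JPEG_QUALITY, 80])  # lossy JPEG
cv2.imwrite("cleaned.png", closed)                                  # lossless PNG
print("JPEG bytes:", os.path.getsize("cleaned.jpg"))
print("PNG bytes:", os.path.getsize("cleaned.png"))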

MULTIPLE CHOICE QUESTIONS (MCQs)

1. Which of the following is a lossy image compression standard?


A. PNG
B. BMP
C. JPEG
D. TIFF

2. Which format uses lossless compression?


A. JPEG
B. PNG
C. GIF
D. MP4

3. In image compression, the trade-off is usually between image quality and __________.
A. Sharpness
B. File size
C. Brightness
D. Saturation

4. Which morphological operation is used to remove small white noise?


A. Dilation
B. Erosion
C. Opening
D. Closing

5. Which operation is used to fill small holes in binary objects?
A. Opening
B. Dilation
C. Erosion
D. Closing
6. In morphological image processing, dilation is followed by erosion in which operation?
A. Closing
B. Opening
C. Skeletonization
D. Thinning

7. What is the primary goal of automated visual inspection systems?


A. Increase storage capacity
B. Detect manufacturing defects
C. Enhance contrast
D. Compress images

8. Which technology is commonly used in face recognition systems?


A. Fourier transform
B. Feature matching
C. Template matching
D. All of the above
9. Medical image analysis is mainly used for:
A. Animation
B. Surveillance
C. Disease diagnosis
D. Audio filtering

10. Which of the following is not a morphological operation?


A. Erosion
B. Smoothing
C. Dilation
D. Closing


2-MARK QUESTIONS (Short Answer)

1. What is the difference between lossy and lossless image compression?


2. Give an example each for lossy and lossless compression formats.
3. What is the main purpose of image compression?
4. Define dilation and state its effect on binary images.
5. What is erosion in morphological processing?
6. Define morphological opening and state its practical use.
7. What is the difference between opening and closing operations?
8. How is shape analysis useful in image processing?
9. Mention one application each of morphological operations in medical and industrial
imaging.
10. What is the role of computer vision in face recognition systems?
11. Name two challenges in automated visual inspection systems.
12. How is image processing applied in medical diagnostics?
13. List any two image compression standards and mention their characteristics.
14. What type of noise is best removed by morphological opening?
15. State one real-time application of shape analysis in industry.

ESSAY TYPE QUESTIONS (5/10 Marks)

1. Explain lossy and lossless image compression techniques. Describe JPEG and PNG
standards with their applications.
2. Discuss the role of morphological operations (dilation, erosion, opening, closing) with
suitable examples and diagrams.
3. What is shape analysis in image processing? Explain how morphological techniques are
applied to shape-based object recognition.
4. Describe the structure and working of a face recognition system. What are the challenges
in implementing such systems?
5. Explain the applications of computer vision in automated visual inspection. How does it
improve quality control?
6. Write a detailed note on medical image analysis. Discuss techniques used for enhancement
and diagnosis.
7. Compare the advantages and limitations of JPEG and PNG formats. In which scenarios
would you use each?

8. Discuss the role of morphological operations in preprocessing. How do they improve the
accuracy of subsequent recognition tasks?
9. What are the challenges of real-time visual inspection systems in industrial
environments? Suggest image processing methods to address them.
10. Give a comparative overview of image compression techniques. Discuss their impact on
image quality and file size.
