UNIVERSITY INSTITUTE OF ENGINEERING
COMPUTER SCIENCE ENGINEERING
Computer Vision (CST-422)
Prepared By: Payal Thakur (E12720)
DISCOVER . LEARN . EMPOWER
Topic: Introduction to Features: Detecting Features, Extracting Features, Matching Features
Computer Vision
• Computer vision is a multidisciplinary field that focuses on
developing algorithms and techniques to enable computers to
interpret, analyze, and understand visual information from
images or videos. It aims to replicate human vision capabilities
by processing and extracting meaningful information from
visual data.
Applications of Computer Vision
Computer vision has a wide range of applications across various domains:
a. Autonomous vehicles: Computer vision is crucial for self-driving cars to perceive
the environment, detect obstacles, and navigate safely.
b. Surveillance systems: Computer vision is used in security and surveillance systems
to detect and track objects, identify suspicious activities, and enhance video
analytics.
c. Medical imaging: Computer vision techniques assist in medical image analysis,
enabling tasks such as tumor detection, tissue segmentation, and diagnosis support.
d. Augmented reality: Computer vision is employed in augmented reality applications
to overlay digital information onto the real world and interact with virtual objects.
e. Robotics: Computer vision plays a vital role in robotic perception, object
recognition, robot navigation, and manipulation of objects in unstructured
environments.
Importance of Features in
Computer Vision Tasks
Features play a crucial role in various computer vision tasks, including object
recognition, image matching, and tracking.
• Object Recognition: Features provide distinct and discriminative information
about objects, enabling their identification and classification.
• Image Matching: Features help establish correspondences between different
images, facilitating tasks such as image alignment, image retrieval, and
image stitching.
• Tracking: Features aid in tracking objects or regions of interest across video
frames, enabling applications like visual surveillance and motion analysis.
Role of Features in Representing Visual
Patterns
• Features are key elements that represent and describe distinct visual
patterns within images.
• Visual patterns can include edges, corners, textures, color blobs, or any
other salient characteristics.
• Features capture the essential information necessary for identifying
and distinguishing objects or regions in an image.
• By extracting and representing features, computer vision algorithms
can focus on relevant information while reducing the computational
burden.
Characteristics of Effective Features
• Distinctiveness: Features should possess unique characteristics that enable
them to be reliably detected and distinguished from other elements in an
image.
• Invariance: Features should be invariant to various transformations such as
rotation, scale changes, and illumination variations.
• Robustness: Features should be able to withstand noise, occlusion, and
partial visibility, ensuring their reliability in challenging image conditions.
• Efficiency: Features should be computationally efficient to detect, extract,
and match, allowing real-time or near-real-time processing.
Popular Feature Detection and
Description Techniques
• Feature Detection: Algorithms such as Harris corner detection,
Scale-Invariant Feature Transform (SIFT), and Speeded-Up Robust
Features (SURF) are widely used for detecting distinctive points or
regions in an image.
• Feature Description: Techniques like Histogram of Oriented
Gradients (HOG), Local Binary Patterns (LBP), and SIFT descriptors
are employed to extract descriptive information from detected
features.
Applications of Features in Computer
Vision
• Object Recognition: Features are used to represent objects,
enabling recognition and categorization in applications like image
classification, object detection, and facial recognition.
• Image Matching: Features facilitate the alignment of images,
enabling tasks such as image stitching, image retrieval, and
structure from motion.
• Tracking: Features assist in tracking objects or regions of interest
across video frames, enabling applications like visual surveillance,
augmented reality, and gesture recognition.
Introduction to Feature Detection
• Feature detection is a fundamental task in computer vision that
aims to locate distinctive points or regions in an image.
• Features are important for various computer vision applications
such as object recognition, image matching, and tracking.
• Feature detection algorithms identify points or regions that
possess unique characteristics, making them robust to changes in
scale, rotation, and lighting conditions.
Popular Feature Detection Techniques
1. Harris Corner Detection:
• Harris corner detection is a widely used feature detection algorithm.
• It identifies corners: points where the image intensity varies significantly in all
directions, unlike edges, where it varies in only one.
• The algorithm builds a second-moment (structure) matrix from the image gradients in a
local window around each pixel and computes a corner response function from the
matrix's determinant and trace.
• Corners are detected at points whose corner response exceeds a chosen threshold,
typically followed by non-maximum suppression.
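The steps above can be sketched in a few lines of NumPy. This is a simplified illustration, not a full Harris implementation: a 3x3 box window stands in for the Gaussian weighting used in practice, and there is no non-maximum suppression.

```python
import numpy as np

def box3(a):
    """3x3 box filter (a stand-in for the Gaussian window of full Harris)."""
    p = np.pad(a, 1)
    h, w = a.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3))

def harris_response(img, k=0.04):
    """Corner response R = det(M) - k * trace(M)^2, where M is the
    windowed second-moment matrix of image gradients."""
    Iy, Ix = np.gradient(img.astype(float))
    Sxx, Syy, Sxy = box3(Ix * Ix), box3(Iy * Iy), box3(Ix * Iy)
    return (Sxx * Syy - Sxy ** 2) - k * (Sxx + Syy) ** 2

# Toy image: a bright 7x7 square. Corners respond strongly (R > 0),
# edge midpoints respond negatively, flat regions give R = 0.
img = np.zeros((15, 15))
img[4:11, 4:11] = 1.0
R = harris_response(img)
corners = np.argwhere(R > 0.5 * R.max())  # crude threshold on the response
```

The `k` parameter is the standard empirical constant (commonly 0.04-0.06); the threshold factor is arbitrary here.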
2. Scale-Invariant Feature Transform (SIFT):
• SIFT is a powerful feature detection algorithm that is robust to scale, rotation, and affine
transformations.
• It extracts keypoint descriptors that can be matched across different images.
• SIFT detects keypoints by identifying stable and distinctive regions using a scale-space
extrema detection approach.
• Keypoint descriptors are computed by considering the local image gradients and
orientations within the region surrounding each keypoint.
Popular Feature Detection Techniques
3. Speeded-Up Robust Features (SURF):
• SURF is an efficient and robust feature detection algorithm
inspired by SIFT.
• It approximates SIFT's Gaussian derivative filters with box filters
evaluated on integral images, resulting in much faster processing times.
• SURF utilizes a multiscale approach to identify scale-invariant
interest points.
• Keypoint descriptors are generated by analyzing the distribution
of local intensity values in the region surrounding each keypoint.
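SURF's speed comes largely from integral images (summed-area tables), which make any box-filter sum a constant-time operation regardless of box size. A minimal sketch:

```python
import numpy as np

def integral_image(img):
    """Summed-area table S where S[i, j] = sum of img[:i, :j]."""
    S = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    S[1:, 1:] = img.cumsum(0).cumsum(1)
    return S

def box_sum(S, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] in O(1) via four table lookups."""
    return S[r1, c1] - S[r0, c1] - S[r1, c0] + S[r0, c0]

img = np.arange(16, dtype=float).reshape(4, 4)
S = integral_image(img)
total = box_sum(S, 0, 0, 4, 4)      # -> 120.0, the sum of the whole image
inner = box_sum(S, 1, 1, 3, 3)      # -> 30.0, the sum of the central 2x2 block
```

Because each box sum costs four lookups, SURF can evaluate its box-filter approximations at any scale without re-filtering the image.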
Popular Feature Detection Techniques
Comparison of Feature Detection Techniques:
• Harris corner detection, SIFT, and SURF differ in their underlying
principles, computational complexity, and robustness to different
types of image transformations.
• Harris corner detection is computationally efficient but less robust
to scale and affine transformations.
• SIFT is more computationally intensive but provides excellent
robustness to various image transformations.
• SURF offers a balance between computational efficiency and
robustness.
Popular Feature Detection Techniques
Applications of Feature Detection:
• Feature detection algorithms are extensively used in computer
vision applications such as object recognition, image stitching, and
visual tracking.
• They enable the identification and matching of distinctive image
features across different images, facilitating tasks like image
alignment and correspondence establishment.
Feature Extraction
Introduction to Feature Extraction:
• Feature extraction is the process of extracting descriptive
information from detected features to represent them in a
compact and informative manner.
• Feature descriptors encode the visual characteristics of keypoints
or regions, allowing for efficient and effective comparison and
matching of features.
Feature Descriptor Algorithms
1. Histogram of Oriented Gradients (HOG):
• HOG is a feature descriptor algorithm commonly used for object detection
and pedestrian detection tasks.
• It represents local shape and texture information by computing histograms
of gradient orientations within image regions.
• HOG calculates the gradients of pixel intensities and forms orientation
histograms based on gradient magnitudes and orientations.
• The histograms capture the dominant orientations and their distribution,
providing a representation of local image structure.
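A minimal sketch of a single-cell HOG histogram follows. It uses hard binning of unsigned gradient orientations and omits the bin interpolation and block normalization that full HOG adds.

```python
import numpy as np

def cell_histogram(cell, n_bins=9):
    """Orientation histogram of one cell, weighted by gradient magnitude.
    Unsigned gradients: orientations are folded into [0, 180) degrees."""
    gy, gx = np.gradient(cell.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0
    bins = np.minimum((ang / (180.0 / n_bins)).astype(int), n_bins - 1)
    hist = np.zeros(n_bins)
    np.add.at(hist, bins.ravel(), mag.ravel())   # accumulate magnitudes per bin
    return hist

# A vertical step edge: all gradient energy is horizontal, so the entire
# histogram mass lands in the 0-degree bin.
cell = np.tile(np.r_[np.zeros(4), np.ones(4)], (8, 1))
h = cell_histogram(cell)
```

In full HOG, such per-cell histograms are concatenated over overlapping blocks and contrast-normalized, which is what gives the descriptor its illumination robustness.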
Feature Descriptor Algorithms
2. Local Binary Patterns (LBP):
• LBP is a widely used feature descriptor algorithm for texture analysis and
facial recognition.
• It encodes the local texture information by comparing the intensity values
of a central pixel with its neighboring pixels.
• LBP creates a binary pattern by thresholding the differences between the
central pixel and its neighbors.
• The resulting binary patterns are then used to construct histograms or
other statistical measures to represent the texture characteristics of the
image region.
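The basic 3x3 LBP operator can be sketched directly in NumPy. The bit order assigned to the neighbors is a convention; implementations differ on it.

```python
import numpy as np

def lbp_8neighbors(img):
    """Basic 3x3 LBP: each interior pixel gets an 8-bit code, one bit per
    neighbor whose intensity is >= the center value."""
    c = img[1:-1, 1:-1]
    # Neighbors in a fixed clockwise order, starting at the top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dr, dc) in enumerate(offsets):
        n = img[1 + dr:img.shape[0] - 1 + dr, 1 + dc:img.shape[1] - 1 + dc]
        code |= ((n >= c).astype(np.uint8) << bit)
    return code

img_spot = np.zeros((3, 3)); img_spot[1, 1] = 5.0   # bright center pixel
img_pit = np.ones((3, 3)); img_pit[1, 1] = 0.0      # dark center pixel
codes_spot = lbp_8neighbors(img_spot)   # all neighbors darker -> code 0
codes_pit = lbp_8neighbors(img_pit)     # all neighbors brighter -> code 255
hist = np.bincount(codes_spot.ravel(), minlength=256)  # texture histogram
```

The 256-bin histogram of codes over a region is the texture representation the slide describes; practical variants restrict it to "uniform" patterns to shrink it.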
Feature Descriptor Algorithms
3. Scale-Invariant Feature Transform (SIFT) Descriptors:
• SIFT descriptors are not only used for feature detection but also for feature
extraction.
• They provide a distinctive representation of keypoints that is robust to
scale, rotation, and affine transformations.
• SIFT descriptors capture the local gradients and orientations around
keypoints, forming a highly informative representation.
• The descriptors are computed by dividing the region surrounding a
keypoint into smaller subregions and generating histograms of gradient
orientations within each subregion.
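The subregion histogramming can be sketched as follows, yielding the familiar 4 x 4 x 8 = 128-dimensional vector. This omits the Gaussian weighting, rotation normalization, and trilinear interpolation that full SIFT applies.

```python
import numpy as np

def sift_like_descriptor(patch):
    """128-D descriptor from a 16x16 patch: a 4x4 grid of 8-bin gradient
    orientation histograms, L2-normalized at the end."""
    assert patch.shape == (16, 16)
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx) % (2 * np.pi)          # signed orientation, [0, 2pi)
    bins = np.minimum((ang / (2 * np.pi / 8)).astype(int), 7)
    desc = np.zeros((4, 4, 8))
    for i in range(4):
        for j in range(4):
            sl = np.s_[4 * i:4 * i + 4, 4 * j:4 * j + 4]   # one 4x4 subregion
            np.add.at(desc[i, j], bins[sl].ravel(), mag[sl].ravel())
    desc = desc.ravel()
    norm = np.linalg.norm(desc)
    return desc / norm if norm > 0 else desc

patch = np.random.default_rng(0).random((16, 16))   # stand-in keypoint patch
d = sift_like_descriptor(patch)
```

Normalizing the final vector is what makes the descriptor robust to overall illumination changes; full SIFT additionally clips large components before renormalizing.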
Feature Descriptor Algorithms
Comparison of Feature Descriptor Algorithms:
• HOG, LBP, and SIFT descriptors differ in their approach to capturing and
representing the visual characteristics of keypoints or regions.
• HOG focuses on capturing shape and texture information through gradient
orientations.
• LBP emphasizes texture analysis by encoding local binary patterns.
• SIFT descriptors provide a robust representation of keypoints by
considering local gradients and orientations.
Feature Extraction
Applications of Feature Extraction:
• Feature extraction plays a crucial role in various computer vision tasks such as
object recognition, image retrieval, and image classification.
• Extracted feature descriptors enable efficient matching and comparison of
features across different images or datasets.
• They facilitate tasks like object localization, image similarity assessment, and
content-based image retrieval.
Feature Matching
Introduction to Feature Matching:
• Feature matching is a crucial task in computer vision that involves comparing
and establishing correspondences between features in different images.
• It is used to align images, track objects, and recognize objects across different
viewpoints or frames.
Methods for Feature Matching
1. Brute-Force Matching:
• Brute-force matching is a simple and straightforward method for feature matching.
• It involves exhaustively comparing each feature in one image with all features in the other
image.
• The matching process typically utilizes a distance metric, such as Euclidean distance or Hamming
distance, to measure the similarity between feature descriptors.
• Brute-force matching can be computationally expensive for large feature sets.
2. Nearest Neighbor Search:
• Nearest neighbor search is a commonly used approach for feature matching.
• It involves finding the closest matching feature in one image for each feature in the other image.
• The matching is based on the similarity of feature descriptors using distance metrics like
Euclidean distance or cosine similarity.
• Various data structures, such as kd-trees or hash tables, can be employed to speed up the
nearest neighbor search process.
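Brute-force nearest-neighbor matching under Euclidean distance can be sketched with broadcasting. Its cost is O(n_a * n_b) descriptor comparisons, which is exactly why kd-trees and hashing are used for large feature sets.

```python
import numpy as np

def match_nearest(desc_a, desc_b):
    """For each descriptor in A, return the index of the closest descriptor
    in B under Euclidean distance, plus that distance."""
    # Pairwise squared distances via broadcasting: shape (n_a, n_b).
    d2 = ((desc_a[:, None, :] - desc_b[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1), np.sqrt(d2.min(axis=1))

# Toy 2-D "descriptors" (real ones are e.g. 128-D SIFT vectors).
a = np.array([[0.0, 0.0], [1.0, 1.0]])
b = np.array([[1.1, 1.0], [0.1, 0.0], [5.0, 5.0]])
idx, dist = match_nearest(a, b)   # idx -> [1, 0]
```

For binary descriptors, the squared-distance line would be replaced by a Hamming distance over packed bits; the matching logic is otherwise identical.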
Methods for Feature Matching
3. Lowe's Ratio Test:
• Lowe's ratio test is a technique used to improve the reliability of feature
matching.
• It compares the distances between the nearest and second nearest neighbors of
a feature.
• If the ratio of the nearest-neighbor distance to the second-nearest distance
falls below a threshold (Lowe suggested about 0.8), the match is considered reliable.
• This test helps discard ambiguous matches and improves the robustness of
feature matching.
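The ratio test itself is only a few lines on top of nearest-neighbor search. The 0.75 threshold below is illustrative; values around 0.7-0.8 are common in practice.

```python
import numpy as np

def ratio_test_matches(desc_a, desc_b, ratio=0.75):
    """Keep a match only when the nearest neighbor in B is clearly better
    than the second nearest (Lowe's ratio test)."""
    d2 = ((desc_a[:, None, :] - desc_b[None, :, :]) ** 2).sum(-1)
    order = np.argsort(d2, axis=1)          # neighbors sorted by distance
    matches = []
    for i, (j1, j2) in enumerate(order[:, :2]):
        d1, d2nd = np.sqrt(d2[i, j1]), np.sqrt(d2[i, j2])
        if d1 < ratio * d2nd:               # nearest is clearly closer: keep
            matches.append((i, j1))
    return matches

a = np.array([[0.0, 0.0]])
b = np.array([[0.1, 0.0], [2.0, 0.0]])    # unambiguous neighbor -> kept
c = np.array([[0.1, 0.0], [0.12, 0.0]])   # two near-equal neighbors -> discarded
matches_good = ratio_test_matches(a, b)
matches_bad = ratio_test_matches(a, c)
```

The second example shows why the test works: when the two best candidates are nearly equidistant, the match is ambiguous and is dropped rather than risked.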
Feature Matching
Applications of Feature Matching:
• Feature matching plays a crucial role in various computer vision applications
such as image stitching, object recognition, and 3D reconstruction.
• It enables the alignment of multiple images, establishment of correspondences,
and recognition of objects based on their distinctive features.
References
Books:
• "Computer Vision: A Modern Approach" by David Forsyth and Jean Ponce. Chapter 6: Features and Descriptors provides an in-
depth explanation of feature detection, extraction, and matching algorithms.
• "Computer Vision: Algorithms and Applications" by Richard Szeliski. Chapter 4: Image Features covers various feature
detection, extraction, and matching techniques.
Video Links
• "Feature Detection and Description" by Stanford University - Computer Vision (CS231n). Available on YouTube:
https://www.youtube.com/watch?v=H-HVZJ7kGI0
• "Feature Matching and Homography" by University of Washington - Computer Vision (CSE 576). Available on YouTube:
https://www.youtube.com/watch?v=uvSCXyYpG9k
• "Introduction to Feature Detection and Matching" by OpenCV. Available on YouTube: https://www.youtube.com/watch?v=AWoG8vdw4pA
THANK YOU
For queries
Email: [email protected]