Edge Detection
Dr. K. Adi Narayana Reddy
IFHE
What is an Edge?
• Sharp changes in image brightness are interesting for many reasons
• Object boundaries often generate sharp changes in brightness
• A light object may lie on a dark background
• Reflectance changes often generate sharp changes in brightness, which can be quite distinctive
• Zebras have stripes and leopards have spots
• Sharp changes in surface orientation are often associated with sharp changes in image brightness
• Points in the image where brightness changes particularly sharply are often called edges or edge points
Origin of Edges
• Surface normal discontinuity
• Depth discontinuity
• Surface color discontinuity
• Illumination discontinuity
• Surface discontinuities create edges derived from features on the shape of an object like curves or corners.
• Depth discontinuities are the edges that define the boundary between the background and an object in the foreground.
• Illumination discontinuities are essentially shadows.
• Color discontinuities are, as you would have guessed, changes in the color of an object.
Source: Steve Seitz
Closeup of edges
Types of Edges
Edge detection using Gradient
• The gradient operator is based on the first derivatives of the image
• Let us start with a simple example of a 1D signal, 𝑓(𝑥)
• We know that an edge is a rapid change in image intensity within a small region.
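For a 2D image 𝑓(𝑥, 𝑦), the gradient collects both first derivatives; its magnitude measures edge strength and its orientation gives the edge direction:

\[ \nabla f = \left(\frac{\partial f}{\partial x},\; \frac{\partial f}{\partial y}\right), \qquad \|\nabla f\| = \sqrt{f_x^2 + f_y^2}, \qquad \theta = \tan^{-1}\!\left(\frac{f_y}{f_x}\right) \]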
2D Edge Detection
Estimating Derivatives with Finite Differences
• To estimate a derivative of an image represented by a discrete set of pixels, we need to resort to an approximation
• Derivatives are rather naturally approximated by finite differences
• We might estimate a partial derivative as a symmetric difference:
\[ \frac{\partial f}{\partial x} \approx \frac{f(x+1,\,y) - f(x-1,\,y)}{2} \]
• This is the same as a convolution, where the convolution kernel is \(\frac{1}{2}\,(-1 \;\; 0 \;\; 1)\)
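A minimal sketch of applying the symmetric difference along the x axis; the test image and the SciPy call are illustrative choices, not from the slides:

```python
import numpy as np
from scipy.ndimage import correlate1d

# Hypothetical test image: a vertical step edge (dark left, bright right).
img = np.zeros((5, 6), dtype=float)
img[:, 3:] = 1.0

# Symmetric-difference weights: df/dx ~ (f(x+1) - f(x-1)) / 2.
# correlate1d applies the weights without flipping them, keeping signs readable.
weights = np.array([-0.5, 0.0, 0.5])
dfdx = correlate1d(img, weights, axis=1, mode="nearest")
print(dfdx)  # strongest response in the columns around the step
```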
Gradient Operators
Sobel Filter
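Up to sign and scale conventions, the standard 3×3 Sobel kernels for the x and y derivatives are:

\[ S_x = \begin{pmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{pmatrix}, \qquad S_y = \begin{pmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{pmatrix} \]

Each kernel differences along one axis while smoothing along the other, which makes it less sensitive to noise than a bare finite difference.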
Example: gradient with respect to y and gradient with respect to x (figures)
• Finite differences give a most unsatisfactory estimate of the derivative
• The filter has a strong response to fast changes due to noise, as well as those due to signal
Differentiation and Noise
• Differentiating a function is the same as multiplying its Fourier transform by a frequency variable
• Differentiating a function must set the constant component to zero, and the amplitude of the derivative of a sinusoid goes up with its frequency
• Stationary additive Gaussian noise has uniform energy at each frequency, but if we differentiate the noise, we will emphasize the high frequencies
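This is easy to see numerically. A short sketch (the signal, noise level, and seed are all illustrative) comparing noise before and after differentiation:

```python
import numpy as np

rng = np.random.default_rng(0)  # hypothetical seed for reproducibility

# Smooth signal plus stationary additive Gaussian noise (mean 0, sigma 0.05).
x = np.linspace(0.0, 4.0 * np.pi, 2000)
signal = np.sin(x)
noisy = signal + rng.normal(0.0, 0.05, size=x.size)

# Finite-difference derivatives of the clean and the noisy signal.
d_clean = np.gradient(signal, x)
d_noisy = np.gradient(noisy, x)

# The derivative amplifies the high-frequency noise far more than the signal.
print("noise std in the signal:    ", np.std(noisy - signal))
print("noise std in the derivative:", np.std(d_noisy - d_clean))
```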
Effect of Noise
Laplacians and edges
• What is the derivative of the derivative?
• Where the first derivative is increasing, the second derivative has positive values
• Where the first derivative is decreasing, it has negative values
• Where the first derivative reaches a peak, the second derivative is zero
• Exactly at the location of the edge, we get a sharp change from positive to negative values, called a zero-crossing
Laplacian Operator
• The Laplacian is simply the sum of the second derivative of the image with respect to 𝑥 and the second derivative of the image with respect to 𝑦
• When we apply the Laplacian operator to an image, we end up with zero-crossings where the edges lie
• The Laplacian operator does not provide the direction, or the orientation, of the edge.
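In symbols, together with a common 3×3 discrete approximation:

\[ \nabla^2 f = \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2}, \qquad \begin{pmatrix} 0 & 1 & 0 \\ 1 & -4 & 1 \\ 0 & 1 & 0 \end{pmatrix} \]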
LoG
• Applying the Laplacian operator to the Gaussian gives what is called a Laplacian of Gaussian (LoG) operator
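A minimal sketch of LoG edge detection, assuming SciPy; the function name and the sign-change test are illustrative implementation choices, not from the slides:

```python
import numpy as np
from scipy import ndimage

def log_zero_crossings(image, sigma=2.0):
    """Filter with a Laplacian of Gaussian, then mark zero-crossings.

    sigma sets the scale: larger values respond to coarser edges.
    """
    response = ndimage.gaussian_laplace(image.astype(float), sigma=sigma)
    # A zero-crossing is a sign change between a pixel and its neighbor.
    edges = np.zeros(response.shape, dtype=bool)
    edges[:-1, :] |= np.signbit(response[:-1, :]) != np.signbit(response[1:, :])
    edges[:, :-1] |= np.signbit(response[:, :-1]) != np.signbit(response[:, 1:])
    return edges
```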
Zero crossings of the Laplacian of Gaussian for various scales and at various gradient magnitude thresholds (figure)
Gradient vs Laplacian
Noise
• Additive Stationary Gaussian Noise
• Each pixel has added to it a value chosen independently from the same Gaussian probability distribution
• Almost always, the mean of this distribution is zero
• The standard deviation is a parameter of the model
Linear Filter Response to Additive Gaussian Noise
• We have a discrete linear filter whose kernel is G
• We apply it to a noise image N consisting of stationary additive Gaussian noise with mean μ and standard deviation σ
• The response of the filter at some point i, j will be:
\[ R(i,j) = \sum_{u,v} G_{u,v}\, N_{i-u,\,j-v} \]
• Because the noise is stationary, the expectations that we compute will not depend on the point
• Assume the kernel has finite support, so that only some subset of the noise variables contributes to the expectation; write this subset as \(n_{0,0}, \ldots, n_{r,s}\)
Cont….
• The expected value of this response must be:
\[ E[R] = E\Big[\sum_{u,v} G_{u,v}\, n_{u,v}\Big] = \sum_{u,v} G_{u,v}\, E[n_{u,v}] = \mu \sum_{u,v} G_{u,v} \]
• because the \(n_{u,v}\) are independent identically distributed Gaussian random variables with mean μ
Cont….
• The variance of the noise response is obtained just as easily:
\[ \mathrm{Var}(R) = E\Big[\Big(\sum_{u,v} G_{u,v}\,(n_{u,v}-\mu)\Big)^{2}\Big] = \sum_{u,v}\sum_{a,b} G_{u,v}\, G_{a,b}\, E\big[(n_{u,v}-\mu)(n_{a,b}-\mu)\big] \]
Expanding
• This expression expands into a sum of two kinds of integral: terms with (u, v) = (a, b), where the expectation is the variance σ², and cross terms with (u, v) ≠ (a, b), which vanish because the noise variables are independent
• The variance of the response is therefore
\[ \mathrm{Var}(R) = \sigma^{2} \sum_{u,v} G_{u,v}^{2} \]
Finite Difference Filters and Gaussian Noise
• Assume we have an image of stationary Gaussian noise of zero mean
• We shall use the kernel (1, −1) to estimate the first derivative
• A second derivative is simply a first derivative applied to a first derivative, so the kernel will be (1, −2, 1)
Finite Difference Filters and Gaussian Noise
• The kernel coefficients of the k'th derivative come from the (k+1)'th row of Pascal's triangle, with appropriate flips of sign
• For each of these derivative filters, the mean response to Gaussian noise is zero, but the variance of this response goes up sharply
• For the k'th derivative it is the sum of squares of the (k+1)'th row of Pascal's triangle times the noise variance σ², as the worked values below show
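A quick check of the first few derivative kernels:

\[ (1, -1):\; 1^2 + 1^2 = 2 \;\Rightarrow\; \mathrm{Var} = 2\sigma^2 \]
\[ (1, -2, 1):\; 1 + 4 + 1 = 6 \;\Rightarrow\; \mathrm{Var} = 6\sigma^2 \]
\[ (1, -3, 3, -1):\; 1 + 9 + 9 + 1 = 20 \;\Rightarrow\; \mathrm{Var} = 20\sigma^2 \]

So each further derivative amplifies the noise variance by roughly a factor of three.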
Edges and Gradient-based Edge Detectors
• Estimating Gradients
Estimating gradients
• Simple finite difference filters tend to give strong responses to noise, so applying two finite difference filters is a poor way to estimate a gradient
Canny Edge Detection
• Canny edge detection is one of the most popular edge-detection methods in use today because it is so robust and flexible
• The algorithm proceeds in four stages:
• Noise Reduction
• Calculating the Intensity Gradient of the Image
• Suppression of False Edges
• Hysteresis Thresholding
Noise Reduction
• In Canny edge detection, a Gaussian blur filter is used to remove or minimize unnecessary detail that could lead to undesirable edges
Calculating the Intensity Gradient of the Image
• The smoothed image is filtered with a Sobel kernel, both horizontally and vertically, to obtain the gradient magnitude and direction at each pixel
Suppression of False Edges
• The algorithm in this step uses a technique called non-maximum suppression of edges to filter out unwanted pixels; a sketch follows this list
• To accomplish this, each pixel is compared to its neighboring pixels in the positive and negative gradient direction
• If the gradient magnitude of the current pixel is greater than that of its neighboring pixels, it is left unchanged
• Otherwise, the magnitude of the current pixel is set to zero
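A minimal sketch of non-maximum suppression, assuming NumPy; quantizing the gradient direction to four bins is one common implementation choice, not prescribed by the slides:

```python
import numpy as np

def non_max_suppress(mag, theta):
    """Keep a pixel only if it is a local maximum along its gradient direction.

    mag   -- gradient magnitude array
    theta -- gradient direction in radians
    """
    out = np.zeros_like(mag)
    # Quantize the direction to 0, 45, 90, or 135 degrees.
    angle = np.rad2deg(theta) % 180.0
    bins = (np.round(angle / 45.0).astype(int) % 4) * 45
    offsets = {0: (0, 1), 45: (-1, 1), 90: (-1, 0), 135: (-1, -1)}
    for i in range(1, mag.shape[0] - 1):
        for j in range(1, mag.shape[1] - 1):
            di, dj = offsets[bins[i, j]]
            # Compare against both neighbors along the gradient direction.
            if mag[i, j] >= mag[i + di, j + dj] and mag[i, j] >= mag[i - di, j - dj]:
                out[i, j] = mag[i, j]
    return out
```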
Hysteresis Thresholding
• If the gradient magnitude value is higher than the larger threshold value, those pixels are associated with solid edges and are included in the final edge map
• If the gradient magnitude values are lower than the smaller threshold value, the pixels are suppressed and excluded from the final edge map
• All the other pixels, whose gradient magnitudes fall between these two thresholds, are marked as 'weak' edges (i.e. they become candidates for being included in the final edge map)
• If the 'weak' pixels are connected to those associated with solid edges, they are also included in the final edge map
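The whole pipeline is available in OpenCV; a minimal end-to-end sketch (the file name, kernel size, and thresholds are illustrative values):

```python
import cv2

# Hypothetical input file; any grayscale-readable image will do.
img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)

# Stage 1: Gaussian blur to suppress noise before differentiation.
blurred = cv2.GaussianBlur(img, (5, 5), sigmaX=1.4)

# Stages 2-4: cv2.Canny computes Sobel gradients, applies non-maximum
# suppression, and performs hysteresis with the two thresholds below.
edges = cv2.Canny(blurred, threshold1=50, threshold2=150)

cv2.imwrite("edges.png", edges)
```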
Example
Why Smooth with a Gaussian?
• It is convenient because it has a number of important properties
• Firstly, if we convolve a Gaussian with a Gaussian, the result is another Gaussian:
\[ G_{\sigma_1} * G_{\sigma_2} = G_{\sqrt{\sigma_1^2 + \sigma_2^2}} \]
• Secondly, it is common to want to see versions of an image smoothed by different amounts
• Efficiency
• For σ of one pixel, points outside a 5×5 integer grid centered at the origin have values less than \(e^{-4} \approx 0.018\), and points outside a 7×7 integer grid centered at the origin have values less than \(e^{-9} \approx 0.0001234\)
• The Central Limit Theorem
• For an important family of functions, convolving any member of that family of functions with itself repeatedly will eventually yield a Gaussian
• Gaussians are Separable: a 2D Gaussian factors into the product of two 1D Gaussians, so 2D smoothing can be done as two 1D passes (see the sketch below)
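A minimal sketch verifying separability with SciPy (the array size and σ are arbitrary):

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(1)
img = rng.random((64, 64))  # hypothetical test image

sigma = 2.0
# Full 2D Gaussian smoothing...
smoothed_2d = ndimage.gaussian_filter(img, sigma=sigma)
# ...equals two 1D passes, one per axis, because the kernel is separable.
tmp = ndimage.gaussian_filter1d(img, sigma=sigma, axis=0)
smoothed_sep = ndimage.gaussian_filter1d(tmp, sigma=sigma, axis=1)

print(np.allclose(smoothed_2d, smoothed_sep))  # True, up to float error
```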
Derivative of Gaussian Filters
• Smoothing an image and then differentiating it is the same as convolving it with the derivative of a smoothing kernel
• That is, given a function I(x, y) and a smoothing kernel S:
\[ \frac{\partial}{\partial x}\,(S * I) = \Big(\frac{\partial S}{\partial x}\Big) * I \]
• The Laplacian of a function in 2D is defined as:
\[ \nabla^2 f = \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2} \]
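SciPy exposes derivative-of-Gaussian filtering directly through the order argument of gaussian_filter; a minimal sketch (σ and the test image are arbitrary):

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(2)
img = rng.random((64, 64))  # hypothetical test image

sigma = 1.5
# order=(0, 1): smooth along axis 0 (y), first derivative along axis 1 (x),
# i.e. a single pass computes d/dx of (Gaussian * image).
dx = ndimage.gaussian_filter(img, sigma=sigma, order=(0, 1))
dy = ndimage.gaussian_filter(img, sigma=sigma, order=(1, 0))

magnitude = np.hypot(dx, dy)  # smoothed gradient magnitude
```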
Identifying Edge Points from Filter Outputs
• Non-maximum Suppression
The central limit theorem states that repeated convolution of a positive kernel with itself will eventually limit towards a kernel that is a scaling of a Gaussian.