Image Processing Qb
The equations (1) and (2) are known as the Fourier transform pair.
36. Specify the properties of the 2D Fourier transform.
The properties are:
1. Separability 2. Translation 3. Periodicity and conjugate symmetry 4. Rotation
5. Distributivity and scaling 6. Average value 7. Laplacian 8. Convolution and correlation 9. Sampling
37. Explain the separability property of the 2D Fourier transform
The advantage of the separability property is that F(u, v) and f(x, y) can be obtained by
successive application of the 1D Fourier transform or its inverse.
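A minimal NumPy sketch of this property (the 64×64 image is a random stand-in): the 2D DFT equals a 1D DFT applied to every row followed by a 1D DFT applied to every column.

```python
import numpy as np

f = np.random.rand(64, 64)              # stand-in for an image f(x, y)

F_direct = np.fft.fft2(f)               # direct 2D DFT
F_rows = np.fft.fft(f, axis=1)          # 1D DFT of every row
F_sep = np.fft.fft(F_rows, axis=0)      # then 1D DFT of every column

print(np.allclose(F_direct, F_sep))     # True: the two results agree
```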
38. Give the Properties of one-dimensional DFT
1. The DFT and unitary DFT matrices are symmetric.
2. The extensions of the DFT and unitary DFT of a sequence and their inverse
transforms are periodic with period N.
3. The DFT or unitary DFT of a real sequence is conjugate symmetric about N/2.
39. Give the Properties of two-dimensional DFT
1. Symmetric 2. Periodic extensions 3. Sampled Fourier transform 4. Conjugate symmetry.
40. Define circulant matrix.
A square matrix in which each row is a circular shift of the preceding row, and the first row is a
circular shift of the last row, is called a circulant matrix.
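A small sketch of building a circulant matrix by circularly shifting an arbitrary first row (the row values are made up for illustration):

```python
import numpy as np

first_row = np.array([1, 2, 3, 4])
# Each row is a circular shift of the preceding row; the first row is a
# circular shift of the last row.
C = np.stack([np.roll(first_row, k) for k in range(len(first_row))])
print(C)
# [[1 2 3 4]
#  [4 1 2 3]
#  [3 4 1 2]
#  [2 3 4 1]]
```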
41. Give the equation for the singular value decomposition of an image
The SVD of an image matrix A is A = U Σ Vᵀ, where
U – orthogonal matrix whose columns form an orthonormal set
V – orthogonal matrix whose columns form an orthonormal set
Σ – diagonal matrix of the singular values of A
This equation is called the singular value decomposition of the image.
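A hedged NumPy sketch of the decomposition and the rank-k reconstruction it allows (the image matrix and the rank k are illustrative stand-ins):

```python
import numpy as np

A = np.random.rand(128, 128)                      # stand-in image matrix
U, s, Vt = np.linalg.svd(A, full_matrices=False)  # A = U @ diag(s) @ Vt

k = 20                                            # illustrative choice of rank
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]       # best rank-k approximation

err = np.linalg.norm(A - A_k) / np.linalg.norm(A)
print(f"relative reconstruction error at rank {k}: {err:.3f}")
```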
42. Write the properties of Singular value Decomposition(SVD)?
1. The SVD transform varies drastically from image to image.
2. The SVD transform gives best energy packing efficiency for any given image.
3. The SVD transform is useful in the design of filters, in finding the least-squares minimum-norm
solution of linear equations, and in finding the rank of large matrices.
43. Why KL transform is called optimal transform? (Dec’12)
It has high information packing capability over a few coefficients and is known as a
dimensionality reduction technique. Multispectral images exhibit a large correlation between bands,
and hence the KL transform is often used for them.
44. Why DCT is preferred for image compression?
Since DCT has high energy packing capabilities, it is preferred for image compression.
45. What are the properties of DCT?
1. DCT is real and symmetric 2. DCT has very high energy compaction over few coefficients
3. DCT is separable
46. What is monochrome image and gray image?(Dec’13)
A monochrome image is a single-colour image with a neutral background. A gray image is an image
with black and white levels and gray levels in between; an 8-bit gray image has 256 gray levels.
PART-B
1. Explain the steps involved in digital image processing.
(or)
Explain various functional block of digital image processing
1. Image acquisition 2. Image filtering and enhancement 3. Image restoration 4. Color image
processing 5. Wavelets and multiresolution processing 6. Compression 7. Morphological
processing 8. Segmentation 9. Representation and description 10. Object recognition
PROPERTIES:
1. Once the eigenvectors 𝜙m for m = 1,…,r are known, the eigenvectors 𝜓m can be determined;
they are orthonormal.
2. It is not a unitary transform.
3. For any k, the least-squares error between U and a partial sum Ûk is minimized when Ûk is the
partial sum formed from the k largest singular values.
Advantages:
Energy packing efficiency
Used to find the pseudo-inverse of singular matrices
Used for analyzing various image processing problems
Drawbacks:
SVD varies drastically from image to image.
No fast transform substitute available
High computational effort is required
Applications:
In image compression
Face recognition
Watermarking
Texture classification
UNIT II IMAGE ENHANCEMENT
PART-A
1. Specify the objective of image enhancement technique.
The objective of enhancement technique is to process an image so that the result is more suitable
than the original image for a particular application.
2. Explain the 2 categories of image enhancement. (Dec’12)
i) Spatial domain methods refer to the image plane itself; approaches in this category are based on
direct manipulation of the pixels in the image.
ii) Frequency domain methods are based on modifying the Fourier transform of the image.
3. What is contrast stretching? (Dec’13)
Contrast stretching produces an image of higher contrast than the original by darkening the levels
below m and brightening the levels above m in the image.
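A minimal sketch of piecewise-linear contrast stretching with two control points (r1, s1) and (r2, s2); the control-point values and the test image are illustrative assumptions, not from the text:

```python
import numpy as np

def contrast_stretch(img, r1=70, s1=30, r2=180, s2=220):
    img = img.astype(np.float64)
    out = np.empty_like(img)
    lo = img <= r1                        # dark levels: compressed toward 0
    mid = (img > r1) & (img <= r2)        # mid levels: stretched
    hi = img > r2                         # bright levels: pushed toward 255
    out[lo] = img[lo] * (s1 / r1)
    out[mid] = s1 + (img[mid] - r1) * (s2 - s1) / (r2 - r1)
    out[hi] = s2 + (img[hi] - r2) * (255 - s2) / (255 - r2)
    return out.astype(np.uint8)

stretched = contrast_stretch(np.random.randint(0, 256, (64, 64), dtype=np.uint8))
```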
4. What is grey level slicing?
Highlighting a specific range of grey levels in an image is often desired. Applications include
enhancing features such as masses of water in satellite imagery and enhancing flaws in X-ray images.
5. Define image subtraction.
The difference between 2 images f(x,y) and h(x,y) expressed as g(x,y)=f(x,y)-h(x,y) is obtained by
computing the difference between all pairs of corresponding pixels from f and h.
6. What is the purpose of image averaging?
An important application of image averaging is in the field of astronomy, where imaging with very
low light levels is routine, causing sensor noise frequently to render single images virtually useless
for analysis.
7. What is meant by masking?
A mask is a small 2-D array in which the values of the mask coefficients determine the nature of the
process. The enhancement technique based on this type of approach is referred to as mask processing.
8. Define histogram.
The histogram of a digital image with gray levels in the range [0, L-1] is a discrete function h(rk)=nk.
rk-kth gray level; nk-number of pixels in the image having gray level rk.
9. What is meant by histogram equalization? (June’12)
s_k = T(r_k) = (L-1) * Σ_{j=0..k} p_r(r_j) = (L-1) * Σ_{j=0..k} n_j / n; where k = 0,1,2,…,L-1
This transformation is called histogram equalization.
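A minimal NumPy sketch of histogram equalization for an 8-bit image (the test image is a random stand-in):

```python
import numpy as np

def equalize(img, L=256):
    hist = np.bincount(img.ravel(), minlength=L)   # h(r_k) = n_k
    p = hist / img.size                            # p_r(r_k) = n_k / n
    cdf = np.cumsum(p)                             # running sum of p_r(r_j)
    s = np.round((L - 1) * cdf).astype(np.uint8)   # mapping r_k -> s_k
    return s[img]                                  # apply as a lookup table

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
eq = equalize(img)
```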
10. Define Derivative filter
For a function f(x, y), the gradient ∇f at coordinates (x, y) is defined as the vector
∇f = [∂f/∂x, ∂f/∂y]ᵀ, and its magnitude is used in derivative (sharpening) filters.
The two composite Laplacian sharpening masks (centre coefficient A+4 for the 4-neighbour
Laplacian, A+8 for the 8-neighbour Laplacian) are:
 0   -1    0          -1   -1   -1
-1   A+4  -1          -1   A+8  -1
 0   -1    0          -1   -1   -1
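A short sketch of sharpening with the first composite Laplacian mask (A = 1 is an illustrative choice, and the input image is a random stand-in):

```python
import numpy as np
from scipy.ndimage import convolve

A = 1
mask = np.array([[ 0,    -1,    0],
                 [-1, A + 4,   -1],
                 [ 0,    -1,    0]], dtype=np.float64)

img = np.random.rand(64, 64)
sharpened = convolve(img, mask, mode='reflect')   # spatial-domain convolution
```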
PART-B
1. Explain the types of gray level transformation used for image enhancement.
Basic intensity transformation functions
(i) Image negatives (ii) Log transformations (iii) Power law transformations
Piecewise-Linear transformation
(i) Contrast stretching (ii) Intensity level slicing (iii) Bit plane slicing
2. What is histogram? Explain histogram equalization and matching (Dec’12)
The histogram of a digital image with gray levels in the range [0, L-1] is a discrete function
h(rk)=nk. rk-kth gray level; nk-number of pixels in the image having gray level rk.
The method used to generate a processed image that has a specified histogram is called histogram
matching or histogram specification
3. Discuss the image smoothing filter with its model in the spatial domain.
Averaging filter and low pass filter
Weighted averaging filter
Blurring effect
4. What are image sharpening filters? Explain the various types of it.
Foundation
Second derivative for image sharpening-Laplacian
Unsharp masking and high boost filtering
First derivative - Gradient
5. Explain spatial filtering in image enhancement.
Fundamentals of spatial filtering
Smoothing spatial filters
6. Explain image enhancement in the frequency domain.
Low pass filters
Butterworth low pass filters
Gaussian low pass filters (a Gaussian low-pass filtering sketch follows this list)
Image sharpening using frequency domain filters
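A hedged sketch of frequency-domain smoothing with a Gaussian low-pass filter H(u,v) = exp(-D²(u,v) / (2·D0²)); the cutoff D0 and the test image are illustrative assumptions:

```python
import numpy as np

def gaussian_lowpass(img, D0=30.0):
    M, N = img.shape
    u = np.arange(M) - M / 2
    v = np.arange(N) - N / 2
    V, U = np.meshgrid(v, u)                  # centred frequency coordinates
    D2 = U ** 2 + V ** 2                      # squared distance from centre
    H = np.exp(-D2 / (2 * D0 ** 2))           # Gaussian transfer function

    F = np.fft.fftshift(np.fft.fft2(img))     # centred spectrum
    return np.real(np.fft.ifft2(np.fft.ifftshift(H * F)))

smoothed = gaussian_lowpass(np.random.rand(128, 128))
```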
7. Explain Homomorphic filtering in detail. (Dec’12)
Homomorphic filtering is used to improve the appearance of an image by simultaneous intensity
range compression and contrast enhancement
8. Explain different noise models in image processing
Explain the following noise models with distributions
Gaussian white noise, Rayleigh noise, Erlang noise, impulse noise, uniform noise
9. Explain Geometric mean filter, Harmonic mean filter and Contraharmonic mean filter
(Dec’12)
Geometric mean filter
Each restored pixel is given by the product of the pixels in the subimage window, raised to the
power 1/mn.
Harmonic mean filter
This filter works well for salt noise, but fails for pepper noise
Contraharmonic mean filter
This filter is well suited for reducing or virtually eliminating the effects of salt and pepper noise.
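A minimal sketch of the contraharmonic mean filter of order Q over a size×size window (the window size, Q and the test image are illustrative assumptions; Q > 0 reduces pepper noise, Q < 0 reduces salt noise, Q = 0 gives the arithmetic mean):

```python
import numpy as np

def contraharmonic(img, size=3, Q=1.5):
    img = img.astype(np.float64)
    pad = size // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.zeros_like(img)
    eps = 1e-10                                        # avoid division by zero
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            w = padded[i:i + size, j:j + size]         # local window
            out[i, j] = np.sum(w ** (Q + 1)) / (np.sum(w ** Q) + eps)
    return out

restored = contraharmonic(np.random.rand(32, 32) * 255)
```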
Degradation model block diagram: f(x,y) → H → (+ η(x,y)) → g(x,y)
A system operator H, which together with an additive white noise term η(x,y) operates on an input
image f(x,y) to produce a degraded image g(x,y).
5. Explain the homogeneity property of a linear operator
H[k1 f1(x,y)] = k1 H[f1(x,y)]
The homogeneity property says that the response to a constant multiple of any input is equal to the
response to that input multiplied by the same constant.
6. Give the relation for the degradation model for a continuous function
g(x,y) = ∫∫ f(α,β) h(x,α,y,β) dα dβ + η(x,y)
7. What is the Fredholm integral of the first kind?
The integral g(x,y) = ∫∫ f(α,β) h(x,α,y,β) dα dβ
is called the superposition or convolution or Fredholm integral of the first kind. It states
that if the response of H to an impulse is known, the response to any input f(α,β) can be
calculated by means of the Fredholm integral.
8. What is the concept of the algebraic approach?
The concept of the algebraic approach is to estimate the original image by minimizing a predefined
criterion of performance.
9. What are the two methods of algebraic approach?
1. Unconstrained restoration approach 2. Constrained restoration approach
10. Define Gray-level interpolation
Gray-level interpolation deals with the assignment of gray levels to pixels in the spatially
transformed image
11. What is meant by Noise probability density function?
The spatial noise descriptor is the statistical behavior of gray level values in the noise component of
the model.
12. What is geometric transformation? (June’12)
A geometric transformation is used to alter the coordinate description of an image.
The basic geometric transformations are 1. Image translation 2. Scaling 3. Image rotation
13. What is image translation and scaling?
Image translation means repositioning the image from one coordinate location to another along a
straight-line path. Scaling is used to alter the size of the object or image, i.e., a coordinate system
is scaled by a factor.
14. Why the restoration is called as unconstrained restoration?
In the absence of any knowledge about the noise n, a meaningful criterion function is to
seek an estimate f̂ such that H f̂ approximates g in a least-squares sense, by assuming the noise
term n̂ = g − H f̂ is as small as possible, where H is the system operator, f̂ is the estimated input
image, and g is the degraded image.
15. Which is the most frequent method to overcome the difficulty to formulate the spatial
relocation of pixels?
The use of tiepoints is the most frequent method; these are subsets of pixels whose locations in the
input (distorted) and output (corrected) images are known precisely.
16. What are the three methods of estimating the degradation function?
1. Observation 2. Experimentation 3. Mathematical modeling.
The simplest approach to restoration is direct inverse filtering, which forms an estimate F̂(u,v) of the
transform of the original image simply by dividing the transform of the degraded image G(u,v)
by the degradation function H(u,v): F̂(u,v) = G(u,v) / H(u,v).
17. What is pseudo inverse filter? (Dec’13)
It is the stabilized version of the inverse filter. For a linear shift-invariant system with frequency
response H(u,v), the pseudo-inverse filter is defined as
H⁻(u,v) = 1/H(u,v),  H(u,v) ≠ 0
H⁻(u,v) = 0,         H(u,v) = 0
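A hedged sketch of pseudo-inverse filtering: divide by H(u,v) only where |H(u,v)| is above a small threshold and output 0 elsewhere. The blur transfer function H, the threshold and the degraded image are illustrative assumptions:

```python
import numpy as np

def pseudo_inverse_restore(g, H, eps=1e-3):
    G = np.fft.fft2(g)
    Hinv = np.zeros_like(H, dtype=complex)
    mask = np.abs(H) > eps                  # treat tiny |H(u,v)| as zero
    Hinv[mask] = 1.0 / H[mask]
    return np.real(np.fft.ifft2(G * Hinv))

u = np.fft.fftfreq(128)
v = np.fft.fftfreq(128)
V, U = np.meshgrid(v, u)
H = np.exp(-(U ** 2 + V ** 2) * 50)         # assumed blur transfer function
g = np.random.rand(128, 128)                # stand-in degraded image
f_hat = pseudo_inverse_restore(g, H)
```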
18. What is meant by least mean square filter or wiener filter? (Dec’12)
The limitation of the inverse and pseudo-inverse filters is that they are very sensitive to noise. Wiener
filtering is a method of restoring images in the presence of blur as well as noise.
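A short frequency-domain sketch of the Wiener filter, F̂(u,v) = [conj(H) / (|H|² + K)] · G(u,v), where K stands in for the noise-to-signal power ratio; H, K and the degraded image are illustrative assumptions:

```python
import numpy as np

def wiener_restore(g, H, K=0.01):
    G = np.fft.fft2(g)
    W = np.conj(H) / (np.abs(H) ** 2 + K)   # Wiener transfer function
    return np.real(np.fft.ifft2(W * G))

u = np.fft.fftfreq(128)
v = np.fft.fftfreq(128)
V, U = np.meshgrid(v, u)
H = np.exp(-(U ** 2 + V ** 2) * 50)         # assumed blur transfer function
f_hat = wiener_restore(np.random.rand(128, 128), H)
```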
19. What is meant by blind image restoration?
Information about the degradation must be extracted from the observed image either explicitly or
implicitly. This task is called blind image restoration.
20. What are the two approaches for blind image restoration?
1. Direct measurement 2. Indirect estimation
21. What is meant by Direct measurement?
In direct measurement, the blur impulse response and noise levels are first estimated from an
observed image, and these parameters are then utilized in the restoration.
22. What is blur impulse response and noise levels?
Blur impulse response: This parameter is measured by isolating an image of a suspected object
within a picture.
Noise levels: The noise of an observed image can be estimated by measuring the image covariance
over a region of constant background luminance.
23. What is meant by indirect estimation?
Indirect estimation methods employ temporal or spatial averaging either to obtain a restoration or to
obtain key elements of an image restoration algorithm.
24. Give the difference between Enhancement and Restoration
An enhancement technique is based primarily on the pleasing aspects it might present to the viewer,
for example contrast stretching, whereas removal of image blur by applying a deblurring function is
considered a restoration technique.
25. What do you mean by Point processing?
Image enhancement in which the result at any point in an image depends only on the gray level at
that point is often referred to as point processing.
26. What is Image Negatives?
The negative of an image with gray levels in the range [0, L-1] is obtained by using the negative
transformation, which is given by the expression
s = L-1-r, where s is the output pixel and r is the input pixel.
27. Give the formula for negative and log transformation.
Negative: s = L-1-r; Log: s = c log(1+r), where c is a constant and r ≥ 0
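A two-line NumPy sketch of both point transformations for an 8-bit image (the input image and the scaling constant c are illustrative assumptions):

```python
import numpy as np

img = np.random.randint(0, 256, (64, 64)).astype(np.float64)
L = 256

negative = (L - 1) - img                 # s = L - 1 - r
c = (L - 1) / np.log(L)                  # keeps the output within [0, 255]
log_tf = c * np.log(1 + img)             # s = c * log(1 + r)
```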
28. What is meant by bit plane slicing? (Dec’13)
Instead of highlighting gray level ranges, highlighting the contribution made to total image
appearance by specific bits might be desired. Suppose that each pixel in an image is represented by 8
bits. Imagine that the image is composed of eight 1-bit planes, ranging from bit plane 0 for LSB to
bit plane-7 for MSB.
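A minimal sketch of extracting the eight bit planes of an 8-bit image and reassembling them (the test image is a random stand-in):

```python
import numpy as np

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
planes = [(img >> b) & 1 for b in range(8)]    # plane 0 = LSB, plane 7 = MSB

# Reassemble the image from its bit planes to check the decomposition.
recon = sum(p.astype(np.uint8) << b for b, p in enumerate(planes))
print(np.array_equal(recon, img))              # True
```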
PART-B
1. Explain unconstrained restoration and constrained restoration problem in image restoration.
Constrained restoration
Unconstrained restoration
2. What is the use of wiener filter or least mean square filter in image restoration. Explain.
(Dec’12)
PART-B
1. What is image segmentation? Explain in detail.
1. Basic fundamentals 2. Point detection 3. Line detection 4. Edge detection
2. Explain Edge Detection in details?
1. Edge models 2. Basic edge detection 3. Image gradient and its properties 4. Gradient operators
5. Combining the gradient with thresholding
3. Define thresholding and explain the various methods of thresholding in detail.
1. The basics of intensity thresholding 2. The role of noise in image thresholding 3. The role of
illumination and reflectance 4. Basic global thresholding (see the sketch after this list)
5. Optimum global thresholding
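A minimal sketch of the basic global thresholding algorithm: T is iteratively updated as the midpoint of the mean gray levels of the two classes it separates. The test image and the stopping tolerance are illustrative assumptions:

```python
import numpy as np

def basic_global_threshold(img, tol=0.5):
    T = img.mean()                         # initial estimate
    while True:
        g1 = img[img > T]                  # class above the threshold
        g2 = img[img <= T]                 # class below the threshold
        T_new = 0.5 * (g1.mean() + g2.mean())
        if abs(T_new - T) < tol:
            return T_new
        T = T_new

img = np.random.randint(0, 256, (64, 64))
T = basic_global_threshold(img)
binary = img > T
```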
4. Discuss region based image segmentation techniques. Compare threshold and region based
techniques.
1. Region growing 2.Region splitting and merging
5. Define and explain the various representation approaches?
1. Boundary (Border following) 2. Chain codes 3. Polygonal approximations using minimum-
perimeter polygons
6. Explain Boundary descriptors.
1. Simple descriptors 2. Shape numbers 3. Fourier descriptors 4. Statistical moments
7. Explain regional descriptors
1. Simple descriptors 2. Topological descriptors 3. Texture 4. Statistical approaches
8. How are lines detected? Explain through the operators.
Laplacian detector; horizontal, vertical and diagonal line detection masks
9. Explain global processing using Hough transform (Dec’12)
Global properties-parameter space-accumulator cells
Steps in Hough transform
1. Obtain a binary edge image using any edge detection techniques
2. Specify subdivisions in the rho-theta plane
3. Examine the counts of the accumulator cells for high pixel concentration
4. Examine the relationship between pixels in a chosen cell (a minimal accumulator sketch follows)
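A minimal accumulator sketch of these steps in the (rho, theta) parameter space; the binary edge image and the quantisation of rho and theta are illustrative assumptions:

```python
import numpy as np

def hough_lines(edges, n_theta=180):
    rows, cols = edges.shape
    thetas = np.deg2rad(np.arange(n_theta) - 90)     # -90 .. 89 degrees
    diag = int(np.ceil(np.hypot(rows, cols)))
    rhos = np.arange(-diag, diag + 1)                # rho subdivisions
    acc = np.zeros((len(rhos), n_theta), dtype=np.int64)

    ys, xs = np.nonzero(edges)                       # edge pixel coordinates
    for x, y in zip(xs, ys):
        r = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[r + diag, np.arange(n_theta)] += 1       # one vote per cell
    return acc, rhos, thetas

edges = np.zeros((64, 64), dtype=bool)
edges[np.arange(64), np.arange(64)] = True           # a diagonal line
acc, rhos, thetas = hough_lines(edges)
print(np.unravel_index(acc.argmax(), acc.shape))     # peak accumulator cell
```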
10. What do you understand by dilation and erosion in morphological operation? Explain in
detail
(Dec’12)
The basic effect of dilation on a binary image is that it gradually enlarges the boundaries of regions,
while the small holes present in the image become smaller.
The objective of the erosion operator is to make an object smaller by removing its outer layer of
pixels: if a black (object) pixel has a white neighbour, that pixel is made white.
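A short sketch of both operations on a binary test image using a 3×3 structuring element (the object and the structuring element are illustrative assumptions):

```python
import numpy as np
from scipy.ndimage import binary_dilation, binary_erosion

img = np.zeros((20, 20), dtype=bool)
img[8:12, 8:12] = True                          # a 4x4 object

se = np.ones((3, 3), dtype=bool)                # structuring element
dilated = binary_dilation(img, structure=se)    # object grows outward
eroded = binary_erosion(img, structure=se)      # outer layer of pixels removed

print(img.sum(), dilated.sum(), eroded.sum())   # 16 36 4
```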
11. Elaborate the process of dam construction along with the watershed segmentation
algorithm (Dec'12)
Catchment basin – single connected component –dilation -flooding
PART-B
1. What is data redundancy? Explain three basic data redundancy?
Irrelevant or repeated information in an image is said to be redundant. The relative data redundancy
is given by R = 1 - 1/C, where C is the compression ratio.
The three basic data redundancies are coding redundancy, spatial (interpixel) redundancy
and psychovisual redundancy.
2. What is image compression? Explain variable length coding compression schemes.
Reducing the amount of irrelevant and repeated information in an image is called image compression.
Explain the Shannon-Fano coding, Huffman coding and Golomb coding techniques with examples
(a Huffman coding sketch follows).
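A minimal Huffman coding sketch for a symbol sequence (the example message is an illustrative assumption); the two least probable nodes are merged repeatedly, prefixing '0' and '1' to their codes:

```python
import heapq
from collections import Counter
from itertools import count

def huffman_codes(message):
    freq = Counter(message)
    tie = count()                                  # unique heap tie-breaker
    heap = [(f, next(tie), {sym: ''}) for sym, f in freq.items()]
    heapq.heapify(heap)
    if len(heap) == 1:                             # degenerate one-symbol case
        return {sym: '0' for sym in heap[0][2]}
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)            # two least probable nodes
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: '0' + c for s, c in c1.items()}
        merged.update({s: '1' + c for s, c in c2.items()})
        heapq.heappush(heap, (f1 + f2, next(tie), merged))
    return heap[0][2]

codes = huffman_codes("aaaabbbccd")
encoded = ''.join(codes[s] for s in "aaaabbbccd")
print(codes, len(encoded), "bits")                 # variable-length prefix codes
```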
3. Explain about Image compression model?
Encoder Decoder
4. Explain about Error free Compression?
Lossless predictive coding has an encoder, a decoder and a predictor. The predictor generates the
anticipated value of each sample based on a specified number of past samples.
5. Explain about Lossy compression?
Lossy predictive coding has an encoder, a decoder, a quantizer and a predictor. Delta modulation is
one such lossy compression scheme.
6. Explain the schematics of image compression standard JPEG.
JPEG defines three different coding systems: (1) a lossy baseline coding system, which is based
on the DCT and is adequate for most compression applications; (2) an extended coding system
for greater compression, higher precision or progressive reconstruction applications; and (3) a
lossless independent coding system for reversible compression. To be JPEG compatible, a product
must include support for the baseline system.
7. Explain how compression is achieved in transform coding and explain about
DCT
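A hedged sketch of block transform coding with the 2D DCT: split the image into 8×8 blocks, transform each block, keep only a low-frequency zone of coefficients and inverse transform. The image, the block size and the zonal cutoff K are illustrative assumptions:

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_block_compress(img, block=8, K=10):
    out = np.zeros_like(img, dtype=np.float64)
    u, v = np.meshgrid(np.arange(block), np.arange(block), indexing='ij')
    zone = (u + v) < K                              # low-frequency zonal mask
    for i in range(0, img.shape[0], block):
        for j in range(0, img.shape[1], block):
            B = img[i:i + block, j:j + block].astype(np.float64)
            C = dctn(B, norm='ortho')               # forward 2D DCT
            C[~zone] = 0.0                          # discard high frequencies
            out[i:i + block, j:j + block] = idctn(C, norm='ortho')
    return out

img = np.random.rand(64, 64) * 255
recon = dct_block_compress(img)
print(np.abs(img - recon).mean())                   # mean reconstruction error
```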