Image Processing
Q-1 Define the following:
1. Electromagnetic Spectrum (EMS): when a beam of sunlight is passed through a glass prism, the emerging beam of light is not white but consists instead of a continuous spectrum of colors ranging from violet at one end to red at the other.
2. Low-Level processes: computerized processes that involve primitive operations such as image preprocessing to reduce noise, contrast enhancement, and image sharpening.
3. High-Level processes: the processes involved in acquiring an image of text, preprocessing it, extracting (segmenting) the individual characters, describing the characters in a form suitable for computer processing, and recognizing those individual characters; all of these are within the scope of digital image processing.
4. Visible light: the portion of the EM spectrum to which the eye is sensitive; the relative sensitivity of the average human eye to electromagnetic waves varies with wavelength.
6. Image acquisition: Digital imaging or digital image acquisition is the creation of a representation
of the visual characteristics of an object, such as a physical scene or the interior structure of an
object. The term is often assumed to imply or include the processing, compression, storage, printing,
and display of such images.
7. Image Enhancement: one of the simplest and most appealing areas of digital image processing; it aims to bring out detail that is obscured, or simply to highlight certain features of interest in an image.
8. Image Segmentation: one of the most difficult tasks in digital image processing; its procedures partition an image into its constituent parts or objects.
9. Image Recognition: the process that assigns a label (e.g., vehicle) to an object based on its descriptors.
10-Pattern Recognition : is the process that assigns a label (e.g., vehicle) to an object based on its
descriptors.
11. Color Image Processing: an area that has been gaining in importance because of the significant increase in the use of digital images over the Internet. Color is also used as the basis for extracting features of interest in an image.
12. Digital image: a two-dimensional function f(x, y), where x and y are spatial coordinates. The amplitude of f at any pair of coordinates (x, y) is called the intensity or gray level of the image at that point. When x, y, and the amplitude values of f are all finite, discrete quantities, we call the image a digital image.
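As a minimal sketch of this definition (using NumPy; the array values are made up for illustration), a digital image is simply a finite 2-D array of discrete intensity values:

import numpy as np

# A tiny 3x3 digital image: f(x, y) takes only finite, discrete values.
f = np.array([[0,  64, 128],
              [32, 96, 160],
              [64, 128, 255]], dtype=np.uint8)

print(f[1, 2])   # gray level of the pixel at coordinates (1, 2) -> 160
print(f.shape)   # (3, 3): the image has a finite number of samples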
13. Pixel: the basic unit of image size; the value of each pixel is called its gray level.
14. Gray Level: the intensity value of a pixel. Increasing the number of image points together with the number of gray levels leads to a larger image file and longer computation times; 256 gray levels are encoded with 8 bits, and 256 gray levels is also what a typical computer screen provides. Modern devices can distinguish 1024, 4096, or more levels.
15. Grayscale: a group of shades without any visible color. Each pixel of a grayscale image carries an amount of light, ranging from the weakest amount, or black, to the strongest amount, or white. Grayscale contains only brightness information, not color.
16. Grayscale image: each pixel is a shade of gray, normally from 0 (black) to 255 (white). This range means that each pixel can be represented by eight bits, or exactly one byte.
17. 8-bit image: another name for a grayscale image, since each pixel is stored in eight bits.
18. Binary image: each pixel is just black or white. Since there are only two possible values for each pixel, we need only one bit per pixel, so such images can be very efficient in terms of storage. Images for which a binary representation may be suitable include text (printed or handwritten), fingerprints, and architectural plans.
19. Gray image: an image in which each pixel is a shade of gray, normally from 0 to 255. This range means that each pixel can be represented by eight bits, or exactly one byte.
20. RGB image: an image in which each pixel has a particular color, that color being described by the amounts of red, green, and blue in it, each of these components having a range of 0 to 255.
21-Radiance: is the total amount of energy that flows from the light source, and it is usually
measured in watts (W)
22. Brightness: a subjective descriptor of light perception that is practically impossible to measure.
23. Luminance: measured in lumens (lm), gives a measure of the amount of energy an observer perceives from a light source.
24. Image Formation Model: the process in which three-dimensional (3D) scene points are projected onto two-dimensional (2D) image plane locations, both geometrically and optically.
25. Image Sampling: the process of digitizing the coordinate values of an image.
26. Image Quantization: the process of digitizing the amplitude (intensity) values of an image.
27. Spatial Resolution: one of the main parameters of digitization; it represents the number of samples in the grid.
28. Intensity Resolution: the smallest discernible change in intensity level.
29. Image quality: the quality of an image increases as the resolution and the bits per pixel increase.
30. Image resolution: one of the main parameters of digitization; it represents the number of samples in the grid.
31-Morphological processing: deals with tools for extracting image components that are useful in the
representation and description of shape.
32. Image element: a value in the image matrix; also called picture element, pixel, or pel.
33. Number of intensity levels: L = 2^k, where k is the number of bits used to represent each pixel; the number of bits b required to store an M × N digitized image is b = M × N × k.
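A quick worked example of the storage formula (the image dimensions are assumed for illustration):

# Storage for an M x N image with k bits per pixel: b = M * N * k
M, N, k = 1024, 1024, 8       # assumed example dimensions and bit depth
L = 2 ** k                    # number of intensity levels -> 256
b = M * N * k                 # total bits -> 8,388,608
print(L, b, b // 8)           # 256 levels, 8388608 bits, 1048576 bytes (1 MB)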
35-Edges of the image: Edges represent the boundaries between two different regions within an
image. More precisely, the edge of an object within a digital image is the set of contiguous pixels
where an abrupt change of intensity occurs.
36. Smooth region in the image: a region with low frequency content in the image (less noise and less pixelation).
37. False contouring: ridges that appear in areas of constant or nearly constant intensity, caused by the use of an insufficient number of intensity levels in smooth areas of a digital image; so called because the ridges resemble topographic contours in a map.
39. Isopreference curve: a curve in the Nk-plane along which images are of equal subjective quality, obtained from experiments in which observers were asked to rank sets of images of different types, with varying N and k, according to their subjective quality. In the course of the experiments it was found that the isopreference curves tended to shift right and upward (toward better quality). The key point of interest in the present context is that isopreference curves tend to become more vertical as the detail in the image increases.
40. Nk-plane: each point in the Nk-plane represents an image having values of N and k equal to the coordinates of that point.
45. Magnitude of the Fourier transform (FT), |F(u)|: the magnitude (spectrum, modulus) of the Fourier transform; it is used to measure how strong the change in image intensity is.
46. Power Spectrum of the FT, |F(u)|²: the power spectrum of the Fourier transform, also called the intensity. It shows the power as the mean squared amplitude at each frequency, but includes no phase information.
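A minimal NumPy sketch of both quantities (the input array is a stand-in; the names are illustrative):

import numpy as np

img = np.random.rand(64, 64)            # stand-in grayscale image
F = np.fft.fftshift(np.fft.fft2(img))   # 2-D Fourier transform, zero frequency centered

magnitude = np.abs(F)                   # |F(u, v)|: spectrum / modulus
power = magnitude ** 2                  # |F(u, v)|^2: power spectrum, no phase info
phase = np.angle(F)                     # the phase is carried separately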
49. Rotation Property of FT: rotating the image f(x, y) by an angle θ₀ rotates its Fourier transform F(u, v) by the same angle, and vice versa.
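In polar coordinates (x = r cos θ, y = r sin θ; u = ω cos φ, v = ω sin φ) the property can be written as:

f(r, \theta + \theta_0) \;\Longleftrightarrow\; F(\omega, \varphi + \theta_0)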
50. Parseval's formula: the integral of the power spectrum (squared modulus) of a function in the spatial domain is equal to the integral of the power spectrum (squared modulus) in the frequency domain.
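Stated as a formula for the 1-D continuous case:

\int_{-\infty}^{\infty} |f(x)|^2 \, dx \;=\; \int_{-\infty}^{\infty} |F(u)|^2 \, du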
52. Auto-correlation function: Auto-correlation, also known as serial correlation, is the correlation of
a signal with a delayed copy of itself as a function of delay. Informally, it is the similarity between
observations as a function of the time lag between them.
53. Cross-correlation function: a measure of the similarity between two waveforms as a function of their relative delay.
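A minimal NumPy sketch of both functions (the signals are made up for illustration):

import numpy as np

x = np.array([1.0, 2.0, 3.0, 2.0, 1.0])
y = np.roll(x, 1)                          # a copy of x delayed by one sample

auto  = np.correlate(x, x, mode='full')    # correlation of x with itself vs. lag
cross = np.correlate(x, y, mode='full')    # similarity of x and y vs. relative delay

print(auto.argmax()  - (len(x) - 1))       # 0: autocorrelation peaks at zero lag
print(cross.argmax() - (len(x) - 1))       # -1: the peak reveals the one-sample delay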
54. Blurring function (impulse function): the impulse function depends on the position of each element (pixel) in the function.
55. Space Variant Point Spread Function (SVPSF): a PSF that changes its shape with the position of the pixel in the image.
56. Space Invariant Point Spread Function (SIPSF): a PSF that does not change its shape with the position of the pixel in the image.
57. Linear systems: systems characterized by how they respond to impulses, that is, by their impulse responses. The output image from a linear system is equal to the input image convolved with the system's impulse response.
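A minimal sketch of that statement (using SciPy; the averaging kernel is an assumed example of an impulse response):

import numpy as np
from scipy.ndimage import convolve

img = np.random.rand(32, 32)             # stand-in input image
h = np.ones((3, 3)) / 9.0                # impulse response: 3x3 averaging (blur) kernel

out = convolve(img, h, mode='reflect')   # output = input convolved with the impulse response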
58. Point Spread Function (PSF): determines the energy distribution in the image plane for a point source located in the object plane.
59. The Optical Transfer Function (OTF): a central concept in Fourier optics. For each component of spatial frequency in the object intensity, it determines the strength and phase of the corresponding component in the image.
5. Contrast of the image: the difference in luminance or color that makes an object distinguishable.
Histogram of the image: a plot of the number of pixels at each intensity level; it is the basis for numerous spatial domain processing techniques.
35. Probability density function of the image: the PDF is used to specify the probability of a random variable falling within a particular range of values, as opposed to taking on any one specific value.
38. RGB image: an image in which each pixel has a particular color, that color being described by the amounts of red, green, and blue in it. If each of these components has a range of 0 to 255, this gives a total of 256³ = 16,777,216 different possible colors in the image.
Interest in digital image processing methods stems from two principal application areas: improvement of pictorial information for human interpretation, and processing of image data for storage, transmission, and representation for autonomous machine perception.
The common applications of DIP in the medical field are:
Gamma-ray imaging
PET scan
X-ray imaging
Medical CT
UV imaging
Gamma-ray imaging
X-ray imaging
Acoustic imaging
Electron imaging
2. Industry (manufacturing)
3. Medicine
The sampling rate determines the spatial resolution of the digitized image, while the quantization level determines the number of grey levels in the digitized image.
• Align images that were taken at different times or with different sensors
8. What is the effect of reducing the number of intensity levels while keeping the number of samples constant?
Sol/ The spatial detail is retained, but smooth areas develop ridge-like artifacts (false contouring) when too few intensity levels are used.
9. What is the effect of reducing the number of samples while keeping the number of intensity levels constant?
Sol/ The image becomes increasingly blocky and pixelated (the checkerboard effect), and fine detail is lost.
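A minimal NumPy sketch of both experiments (assuming an 8-bit grayscale array img; here a random stand-in):

import numpy as np

img = (np.random.rand(256, 256) * 255).astype(np.uint8)  # stand-in 8-bit image

# Q8: reduce intensity levels from 256 to 2^k, samples unchanged -> false contouring
k = 4
step = 256 // (2 ** k)
fewer_levels = (img // step) * step

# Q9: reduce samples by a factor of 4, intensity levels unchanged -> checkerboard effect
fewer_samples = img[::4, ::4]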
11. Compare the resolving power of the human eye and the CCD camera.
Sol/ In terms of raw resolving power, a medium-resolution Charge-Coupled Device (CCD) imaging chip can have a comparable number of elements in a receptor array no larger than 5 × 5 mm². While the ability of humans to integrate intelligence and experience with vision makes these types of number comparisons somewhat superficial, keep in mind for future discussions that the basic ability of the eye to resolve detail is certainly comparable to that of current electronic imaging sensors.
An image function f(x,y) must be digitized both spatially and in amplitude in order to
become suitable for digital processing. Typically, a frame grabber or digitizer is used to
sample and quantize the analogue video signal. Therefore, in order to
create a digital image, we need to convert continuous data into digital form. This conversion from analog to digital involves two processes: sampling and quantization.
Image processing is a method of processing images using mathematical operations, as in signal processing, where the input is an image, a group of images, or a video. This is done in order to extract a part of the image or to obtain some important information from the image. The output of image processing may be an image, some part of an image, or a specific set of parameters related to the image. In this process the system treats the images as a two-dimensional signal while applying pre-set signal processing techniques to them. Images can also be processed as a three-dimensional signal, with the time axis as the third dimension.
Its purposes are twofold: improvement of pictorial information for human interpretation; and processing of image data for storage, transmission, and representation.
Here is a short list just to give some indication of the range of image processing applications.
Image sharpening and restoration / Medical field / Remote sensing / Transmission and encoding
Machine/Robot vision
Microscopic Imaging
2. Prove that the convolution process in the spatial domain is equivalent to simple multiplication in the frequency domain.
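Sol/ A one-dimensional proof sketch for the continuous case (the 2-D case is analogous). Write the Fourier transform of the convolution, interchange the order of integration, and substitute s = x − τ:

\mathcal{F}\{f * g\}(u)
  = \int_{-\infty}^{\infty} \left[ \int_{-\infty}^{\infty} f(\tau)\, g(x-\tau)\, d\tau \right] e^{-j 2\pi u x}\, dx
  = \int_{-\infty}^{\infty} f(\tau)\, e^{-j 2\pi u \tau} \left[ \int_{-\infty}^{\infty} g(s)\, e^{-j 2\pi u s}\, ds \right] d\tau
  = F(u)\, G(u)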
Q-5 List only the:
Camera
Photographs,
Photographic film,
Printed paper,
Medical field
Remote sensing
Machine/Robot vision
Color processing
Pattern recognition
Video processing
1. Low-level processes
2. Mid-level processes
3. High-level processes
Image acquisition.
Image enhancement
Image restoration
Color image processing
Wavelets and multiresolution processing
Compression
Morphological processing
Segmentation
Representation and description
Object recognition
1. Binary image
2. Greyscale image
7. Types of Neighbors: 4-neighbors N4(p), diagonal neighbors ND(p), and 8-neighbors N8(p).
27. What are the basic quantities used to describe the quality of an achromatic light source?
Radiance
Luminance
Brightness
2- Pixel accuracy ("Gray Level Resolution"): how many bits per sample are used.
1- Electromagnetic Waves:
a- γ-Rays b- X-Rays c- UV d- Visible e- IR f- Microwaves
2- Sound Waves:
a- Sonic b- Ultrasonic
3- Electron Beam
4- Magnetic Waves
Types of quantization:
1- Uniform Quantization
2- Non-Uniform Quantization
Types of sampling:
1- Uniform Sampling
2- Non-Uniform Sampling
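A minimal sketch of uniform quantization (equal step size over the full range; the values are assumed for illustration):

import numpy as np

signal = np.linspace(0.0, 1.0, 11)       # continuous amplitudes in [0, 1]
levels = 8                               # number of quantization levels
delta = 1.0 / levels                     # uniform step size
idx = np.minimum((signal / delta).astype(int), levels - 1)  # level index 0..7
quantized = (idx + 0.5) * delta          # reconstruct each sample at its level midpoint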