Unit I – Digital Image Processing
Digital Image Processing – Origins-Examples of fields that use Digital Image Processing-
Fundamental steps in digital image processing-Components of an image processing system
Digital Image Fundamentals
The field of digital image processing refers to processing digital images by means of a digital computer. A digital
image is composed of a finite number of elements, each of which has a particular location and value. These
elements are called picture elements, image elements, pels, or pixels. Pixel is the term used most widely to
denote the elements of a digital image. An image is a two-dimensional function that represents a measure of
some characteristic, such as brightness or color, of a viewed scene. An image is a projection of a 3-D scene onto
a 2-D projection plane.
What is Digital Image Processing?
An image may be defined as a two-dimensional function f(x,y), where x and y are spatial (plane) coordinates,
and the amplitude of f at any pair of coordinates (x,y) is called the intensity of the image at that point.
The term gray level is often used to refer to the intensity of monochrome images. Color images are formed by
a combination of individual 2-D images.
For example, in the RGB color system a color image consists of three individual component images (red,
green, and blue). For this reason, many of the techniques developed for monochrome images can be extended
to color images by processing the three component images individually.
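As a concrete sketch of this idea, the Python fragment below applies a simple monochrome operation to each RGB component image independently. The tiny 2x2 image and the brighten() helper are illustrative assumptions, not a specific library's API.

```python
# Sketch: extending a monochrome (grayscale) operation to a color image
# by processing the three RGB component images individually.
# The image data and brighten() are made up for illustration.

def brighten(channel, amount):
    """A simple monochrome operation: add a constant, clipped to [0, 255]."""
    return [[min(255, p + amount) for p in row] for row in channel]

def process_rgb(r, g, b, op):
    """Apply a monochrome operation to each component image separately."""
    return op(r), op(g), op(b)

# A tiny 2x2 color image stored as three component images.
r = [[10, 20], [30, 40]]
g = [[50, 60], [70, 80]]
b = [[90, 100], [110, 250]]

r2, g2, b2 = process_rgb(r, g, b, lambda c: brighten(c, 10))
print(b2)  # [[100, 110], [120, 255]]  -- note clipping at 255
```

The same pattern works for any per-pixel monochrome technique; only operations that mix channels (e.g., color-space conversions) need to treat the three components jointly.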
An image may be continuous with respect to the x- and y-coordinates, and also in amplitude. Converting such
an image to digital form requires that the coordinates, as well as the amplitude, be digitized.
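The digitization just described can be sketched in Python: the coordinates are sampled on a grid (sampling) and each amplitude is mapped to one of a finite set of gray levels (quantization). The continuous function f(x, y) below is a made-up scene for illustration only.

```python
# Sketch of digitizing a continuous image f(x, y): sample the coordinates
# on a grid, then quantize each amplitude into discrete gray levels.

def f(x, y):
    """A 'continuous' scene: brightness varies smoothly with position."""
    return (x + y) / 2.0          # amplitude in [0.0, 1.0] for x, y in [0, 1]

def digitize(f, n_samples, n_levels):
    """Sample f on an n_samples x n_samples grid, then quantize each
    amplitude to one of n_levels integer gray levels."""
    step = 1.0 / (n_samples - 1)
    image = []
    for i in range(n_samples):
        row = []
        for j in range(n_samples):
            amplitude = f(i * step, j * step)          # sampling
            level = round(amplitude * (n_levels - 1))  # quantization
            row.append(level)
        image.append(row)
    return image

print(digitize(f, 3, 4))  # a 3x3 digital image with gray levels 0..3
```

Increasing n_samples improves spatial resolution; increasing n_levels improves intensity resolution.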
The Origins of Digital Image Processing
● One of the first applications of digital images was in the newspaper industry, where pictures
were transmitted by submarine cable between London and New York.
● The introduction of the Bartlane cable picture transmission system in the early 1920s
reduced the transmission time from more than one week to less than three hours.
● Specialized printing equipment was used to code images for cable transmission and
reconstruct them at the receiving end.
● Early digital images faced problems in visual quality, mainly due to improper printing
procedures and poor distribution of intensity levels.
● Digital images require large storage capacity and high computational power; therefore,
progress in digital image processing depended on the development of digital computers
and supporting technologies such as data storage, display, and transmission systems.
● A digital image is composed of a finite number of elements, each having a specific location
and value.
● These elements are called picture elements, image elements, or pixels.
● The term pixel is commonly used to denote the smallest element of a digital image.
● Digital Image Processing involves:
o Acquiring an image,
o Preprocessing the image,
o Extracting individual characters,
o Describing the characters in a form suitable for computer processing, and
o Recognizing those characters.
● Digital image processing techniques began to be widely used in the late 1960s and early
1970s in fields such as:
o Medical imaging
o Remote Earth resource observation
o Astronomy
● The invention of Computerized Axial Tomography (CAT), also known as Computed
Tomography (CT), in the early 1970s was a major milestone.
● In CT imaging:
o X-rays pass through the object,
o Detectors collect the rays at the opposite end,
o The source rotates and the process is repeated.
● Tomography algorithms use the sensed data to construct images representing slices of the
object.
● Computer procedures are used to:
o Enhance image contrast,
o Convert intensity levels into color,
o Improve interpretation of X-rays and other images used in industry, medicine, and
biological sciences.
● Image enhancement and restoration techniques are used to process degraded images of:
o Unrecoverable objects
o Experimental results that are expensive or impossible to reproduce.
● Image processing methods have successfully restored blurred images that were the only
records of rare artifacts that were later lost or damaged.
Examples of Fields that Use Digital Image Processing
Digital image processing has many applications, so organizing by image source (e.g., visual, X-ray)
helps understanding. Most images today use electromagnetic energy as the principal source.
Electromagnetic Spectrum
The electromagnetic (EM) spectrum is the range of all types of electromagnetic radiation.
It includes visible light and invisible radiation such as X-rays and gamma rays.
Electromagnetic waves propagate as sinusoidal waves with different wavelengths.
Different EM waves carry different amounts of energy.
High-energy waves include gamma rays and X-rays.
Low-energy waves include infrared and radio waves.
Medical imaging commonly uses X-rays and gamma rays.
Visible light is a small part of the electromagnetic spectrum.
Among visible colors, red light has the longest wavelength.
Medical imaging
Medical imaging is an important field of digital image processing. Its main imaging modalities are
1. Gamma Ray Imaging
2. X-Ray Imaging
3. Imaging in the Ultraviolet Band
4. Imaging in the Visible and Infrared Bands
It uses image processing techniques to improve the quality of medical images and assist doctors
in diagnosis.
Image subtraction is a technique used in medical imaging to highlight changes between two
images.
Mask mode radiography is a common application of image subtraction, where unnecessary
background details are removed to clearly visualize blood vessels and organs.
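The image subtraction used in mask mode radiography can be sketched as follows: a "mask" image taken before a contrast medium is injected is subtracted from a "live" image taken after, so only the structures that changed remain visible. The 2x2 arrays below stand in for the radiographs and are illustrative only.

```python
# Sketch of image subtraction for mask mode radiography: subtracting a
# pre-injection mask from a post-injection live image removes the
# unchanged background and leaves only the contrast-filled vessels.

def subtract(live, mask):
    """Pixel-wise difference, clipped to the valid [0, 255] range."""
    return [[max(0, min(255, l - m)) for l, m in zip(lr, mr)]
            for lr, mr in zip(live, mask)]

mask = [[100, 100], [100, 100]]     # background anatomy only
live = [[100, 180], [100, 100]]     # same scene, one vessel now filled

print(subtract(live, mask))  # [[0, 80], [0, 0]] -- only the change shows
```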
1. Gamma Ray Imaging
Gamma rays are high-energy photons used in medical and astronomical imaging.
Gamma-ray imaging is also used in bone pathology.
It is used to find tumors and infections in the body.
In nuclear medicine, a patient is injected with a radioactive isotope that emits gamma
rays as it decays.
The images are produced and collected by gamma-ray detectors.
2. X-Ray Imaging
X-rays are one of the oldest electromagnetic radiation sources used for imaging.
In medical applications, X-rays are generated using an X-ray tube, which is a vacuum
tube with a cathode and an anode.
Angiography is another major application in an area called contrast enhancement
radiography.
Angiography is used to obtain images of blood vessels.
A catheter is a small, flexible, and hollow tube.
Higher-energy X-rays are used in industrial processes.
3. Imaging in the Ultraviolet Band
Ultraviolet (UV) imaging is used in lithography, industrial inspection, microscopy, biological
imaging, lasers, and astronomy.
UV light is used in fluorescence microscopy.
It is one of the fastest-growing areas of microscopy.
Fluorescence is a phenomenon discovered in the middle of the nineteenth century.
Ultraviolet light is not visible to the human eye.
UV photons excite electrons in fluorescent materials to a higher energy level.
Excited electrons return to a lower level and emit visible light.
Only the emitted light reaches the detector, producing high-contrast images.
This technique is useful for detecting diseased or abnormal materials.
4. Imaging in the Visible and Infrared Band
Visible spectrum is most widely used in imaging.
Infrared (IR) imaging is often combined with visible imaging.
Applications include light microscopy, astronomy, remote sensing, industry, and law
enforcement.
Light microscopy examples: pharmaceuticals, microinspection, materials characterization.
Image processing is used for enhancement and measurements.
Visible and IR images can be processed to extract useful information and detect features.
Fundamental Steps in Digital Image Processing
The steps involved in image processing fall into two categories:
1. Methods whose inputs and outputs are images.
2. Methods whose inputs may be images but whose outputs are attributes extracted from those images.
i) Image acquisition: Image acquisition is the first step in digital image processing. Generally, image
acquisition involves preprocessing, such as scaling.
The image is captured by a sensor (e.g., camera) and digitized.
If the camera or sensor output is not already digital, it is converted using an Analog-to-Digital
Converter (ADC).
Image acquisition refers to the process of capturing real-world images and storing them in digital form.
ii) Image Enhancement: It is among the simplest and most appealing areas of digital image processing.
The idea behind enhancement is to bring out details that are obscured, or simply to highlight certain features
of interest in an image. Image enhancement is a very subjective area of image processing.
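As a concrete illustration of enhancement, the sketch below performs linear contrast stretching, one simple way to bring out detail crowded into a narrow intensity range. The sample image values are assumptions made for the example.

```python
# A minimal enhancement sketch: linear contrast stretching, which maps a
# narrow intensity range onto the full [0, 255] scale to reveal detail.

def contrast_stretch(image, out_max=255):
    """Map the image's [min, max] intensity range linearly onto [0, out_max]."""
    lo = min(min(row) for row in image)
    hi = max(max(row) for row in image)
    if hi == lo:                      # flat image: nothing to stretch
        return [row[:] for row in image]
    scale = out_max / (hi - lo)
    return [[round((p - lo) * scale) for p in row] for row in image]

dim = [[100, 110], [120, 150]]       # detail crowded into [100, 150]
print(contrast_stretch(dim))         # [[0, 51], [102, 255]]
```

Whether the stretched result looks "better" is exactly the subjective judgment the text mentions; no single mathematical criterion defines a good enhancement.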
iii) Image Restoration: It deals with improving the appearance of an image. It is an objective approach,
in the sense that restoration techniques tend to be based on mathematical or probabilistic models of image
degradation. Enhancement, on the other hand, is based on human subjective preferences regarding what
constitutes a “good” enhancement result.
iv) Color image processing: It is an area that has been gaining importance because of the increasing use of
digital images over the internet. Color image processing deals basically with color models and their
implementation in image processing applications.
v) Wavelets and Multiresolution Processing: Wavelets are used to represent images at
different levels of resolution. Multiresolution processing is mainly used for image data compression.
The original image is divided into four sub-bands:
o LL – Low frequency (main image information)
o LH – Vertical details
o HL – Horizontal details
o HH – Edge and fine details
This helps in:
o Reducing image size
o Storing important information efficiently
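The four-sub-band split above can be sketched with a single-level 2-D Haar transform, written out directly rather than via any particular wavelet library. The sub-band naming follows the list above; which detail band is labeled "vertical" vs. "horizontal" varies by convention, so treat the mapping as illustrative.

```python
# Sketch of a single-level 2-D Haar wavelet decomposition, splitting an
# image into LL (average), LH, HL, and HH (detail) sub-bands, each
# half the width and height of the original.

def haar_subbands(image):
    """Return (LL, LH, HL, HH) for an image with even width and height."""
    h, w = len(image), len(image[0])
    LL = [[0.0] * (w // 2) for _ in range(h // 2)]
    LH = [[0.0] * (w // 2) for _ in range(h // 2)]
    HL = [[0.0] * (w // 2) for _ in range(h // 2)]
    HH = [[0.0] * (w // 2) for _ in range(h // 2)]
    for i in range(0, h, 2):
        for j in range(0, w, 2):
            a, b = image[i][j],     image[i][j + 1]
            c, d = image[i + 1][j], image[i + 1][j + 1]
            LL[i // 2][j // 2] = (a + b + c + d) / 4   # average (main info)
            LH[i // 2][j // 2] = (a + b - c - d) / 4   # vertical detail
            HL[i // 2][j // 2] = (a - b + c - d) / 4   # horizontal detail
            HH[i // 2][j // 2] = (a - b - c + d) / 4   # fine/diagonal detail
    return LL, LH, HL, HH

LL, LH, HL, HH = haar_subbands([[8, 8], [4, 4]])
print(LL, LH, HL, HH)   # [[6.0]] [[2.0]] [[0.0]] [[0.0]]
```

Most of the energy lands in LL, which is why coding LL carefully and the detail bands coarsely compresses well.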
vi) Compression: It deals with techniques for reducing the storage required to save an image, or the
bandwidth required to transmit it over a network. It has two major approaches:
a) Lossless Compression - without degrading the quality.
b) Lossy Compression - acceptable degradation
The most common image compression standard is JPEG (Joint Photographic Experts Group)
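As a minimal lossless example (far simpler than JPEG), run-length encoding stores each run of repeated pixel values as a (value, count) pair; decoding reproduces the data exactly, which is what "without degrading the quality" means.

```python
# A minimal lossless-compression sketch: run-length encoding (RLE).
# Each run of repeated pixel values becomes a [value, count] pair.

def rle_encode(pixels):
    """Compress a row of pixels into [value, count] runs."""
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1
        else:
            runs.append([p, 1])
    return runs

def rle_decode(runs):
    """Expand [value, count] runs back into the original pixels."""
    return [p for p, n in runs for _ in range(n)]

row = [255, 255, 255, 0, 0, 7]
encoded = rle_encode(row)
print(encoded)                        # [[255, 3], [0, 2], [7, 1]]
assert rle_decode(encoded) == row     # lossless: round-trip is exact
```

RLE only wins when images contain long uniform runs; JPEG instead achieves much higher (lossy) ratios by discarding visually insignificant frequency content.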
vii) Morphological processing: It deals with tools for extracting image components that are useful in
the representation and description of the shape and boundary of objects. It is mainly used in automated
inspection applications.
viii) Image Segmentation: Segmentation procedures partition an image into its constituent parts or
objects. Autonomous segmentation is one of the most difficult tasks in digital image processing. In general, the
more accurate the segmentation, the more likely recognition is to succeed.
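Global thresholding is one of the simplest segmentation procedures; the sketch below partitions a tiny, assumed image into object and background pixels using a fixed intensity threshold.

```python
# A basic segmentation sketch: global thresholding, which partitions an
# image into object (1) and background (0) pixels.

def threshold(image, t):
    """Label each pixel 1 if its intensity exceeds t, else 0."""
    return [[1 if p > t else 0 for p in row] for row in image]

image = [[ 12,  15, 200],
         [ 10, 220, 210],
         [ 14,  11,  13]]

print(threshold(image, 128))   # bright (object) pixels become 1
# [[0, 0, 1], [0, 1, 1], [0, 0, 0]]
```

Real segmentation is harder than this sketch suggests: choosing t automatically, and handling uneven illumination or textured objects, is where the difficulty noted above comes from.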
ix) Representation and Description: It almost always follows the output of the segmentation step, which is
raw pixel data constituting either the boundary of a region or all the points in the region itself. In either case,
converting the data to a form suitable for computer processing is necessary.
x) Recognition: It is the process that assigns a label (e.g., tiger, elephant, vehicle) to an object based on its
descriptors. It is the last step of image processing and typically makes use of artificial intelligence software.
Knowledge base: Knowledge about a problem domain is coded into an image processing system in the
form of a knowledge base. This knowledge may be as simple as detailing regions of an image where the
information of interest is known to be located, thus limiting the search that has to be conducted in seeking
that information. The knowledge base can also be quite complex, such as an interrelated list of all major possible
defects in a materials inspection problem, or an image database containing high-resolution satellite images of
a region in connection with a change-detection application.
COMPONENTS OF AN IMAGE PROCESSING SYSTEM
An image processing system consists of a set of components used to acquire, process, store, display, and
transmit digital images. These components work together to convert a real-world scene into a meaningful
digital image.
1. Problem Domain
The problem domain refers to the real-world scene or object that is to be imaged and analysed.
Examples:
● CCTV surveillance area
● Medical X-ray image
● Satellite images
● Industrial inspection objects
2. Image Sensors (CCD / CMOS)
Image sensors are devices used to capture images by converting light energy into electrical signals.
In digital image processing systems, the sensor acts as the physical device.
Image Sensing
Image sensing is the process of acquiring an image from the problem domain. It requires two main elements:
a) Physical Device (Image Sensor)
● The physical device is a sensor that is sensitive to energy radiated or reflected by the object.
● It detects physical energy and converts it into an electrical signal.
Types of Sensors
i) CCD Sensor (Charge-Coupled Device)
● CCD sensor converts light into electrical charge.
● Produces high-quality images with low noise.
● Commonly used where accuracy and image quality are very important.
Examples:
● X-ray machines
● CT scan systems
● MRI imaging systems
● Satellites
● Scientific and astronomical cameras
ii) CMOS Sensor (Complementary Metal-Oxide Semiconductor)
● CMOS sensor converts light into electrical signal directly.
● Requires less power and is low cost (cheap).
● Commonly used in consumer devices.
Examples:
● Mobile phone cameras
● CCTV cameras
● Webcams
● Laptop cameras
Functions of Physical Device (Sensor)
● Detects physical energy such as:
o Visible light
o X-rays
o Infrared radiation
● Converts this energy into an analog electrical signal.
b) Digitizer
● The digitizer converts the analog electrical signal produced by the sensor into digital form.
● This digital data can be processed by a computer.
Functions:
● Performs analog-to-digital conversion
● Produces digital image data
Example:
● In a digital camera, the sensor generates an electrical signal proportional to light intensity,
and the digitizer converts it into digital image data.
3. Specialized Image Processing Hardware
This hardware performs fast, low-level image processing operations, especially in real-time systems.
Functions:
Arithmetic and logical operations
Image averaging
Noise reduction
Parallel processing
This unit is sometimes called the front-end subsystem.
Example: image averaging performed in real time for noise reduction.
4. Computer
The computer acts as the central processing and control unit of the image processing system.
Functions:
● Executes image processing algorithms
● Controls system operations
● Provides user interface
A general-purpose computer can range from a PC to a supercomputer, but a well-equipped PC is
sufficient for most applications.
5. Image Processing Software
Image processing software performs the actual manipulation and analysis of images.
Operations include:
● Image enhancement
● Image restoration
● Image segmentation
● Feature extraction
● Object detection and recognition
Examples:
MATLAB, OpenCV, Photoshop, CCTV analytics software.
6. Mass Storage
Mass storage is required to store large amounts of image and video data.
Providing adequate storage capability is a must in image processing applications. Digital storage for image
processing applications falls into three principal categories:
A) Short-term storage for use during processing (RAM)
B) On-line storage for relatively fast recall (hard disk, SSD)
C) Archival storage, characterized by infrequent access (CD, DVD, cloud storage)
Example:
An image of size 1024 × 1024 pixels with 8 bits per pixel requires 1 MB of storage space
(uncompressed).
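The storage figure above follows directly from (width x height x bits-per-pixel) / 8 bytes, and can be checked in a few lines:

```python
# Verifying the uncompressed-storage example: bytes needed for an image
# is width * height * bits_per_pixel / 8.

def image_bytes(width, height, bits_per_pixel):
    """Uncompressed storage requirement in bytes."""
    return width * height * bits_per_pixel // 8

size = image_bytes(1024, 1024, 8)
print(size)                    # 1048576 bytes = 1 MB (2**20 bytes)
print(size == 2 ** 20)         # True
```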
7. Image Display: Image display devices are used to view processed images or videos. These are mainly
color monitors, driven by the outputs of image and graphics display cards that are an integral part of the
system.
Examples: Computer monitors, mobile screens, medical displays.
8. Hardcopy Devices: Hardcopy devices produce physical copies of digital images.
Examples: printers, film cameras, and optical and CD-ROM discs.
9. Image Communication / Network: This component allows images and videos
to be transmitted over communication networks.
Networking is a default function for any computer system.
Examples: LAN, WAN, Internet, wireless networks.
Elements of Visual Perception
i ) Structure of Human eye:
The eye is nearly a sphere with an average diameter of approximately 20 mm. The eye is enclosed by
three membranes:
a. The cornea and sclera – the cornea is a tough, transparent tissue that covers the anterior surface of
the eye. The rest of the optic globe is covered by the sclera.
b. The choroid – it contains a network of blood vessels that serve as the major source of
nutrition to the eye. It helps to reduce extraneous light entering the eye. It has two parts:
(1) Iris diaphragm – it contracts or expands to control the amount of light that enters the
eye.
(2) Ciliary body
c. Retina – it is the innermost membrane of the eye. When the eye is properly focused, light from
an object outside the eye is imaged on the retina. There are various light receptors over the
surface of the retina. The two major classes of receptors are:
1) Cones – they number about 6 to 7 million and are located in the central portion
of the retina, called the fovea. They are highly sensitive to color. Humans can resolve fine
details with the cones because each one is connected to its own nerve end. Cone
vision is called photopic or bright-light vision.
2) Rods – they are far greater in number, from 75 to 150 million, and are distributed over
the entire retinal surface. The large area of distribution, and the fact that several rods are
connected to a single nerve end, give a general, overall picture of the field of view. They are not
involved in color vision and are sensitive to low levels of illumination. Rod vision is
called scotopic or dim-light vision. The area of the retina where receptors are absent is called the blind spot.
ii) Image formation in the eye:
The major difference between the lens of the eye and an ordinary optical lens is that the
former is flexible. The shape of the lens of the eye is controlled by tension in the fibers of the
ciliary body. To focus on a distant object, the controlling muscles cause the lens to be relatively
flattened; to focus on an object near the eye, they allow the lens to become thicker. The distance
between the center of the lens and the retina is called the focal length, and it varies from about
17 mm to 14 mm as the refractive power of the lens increases from its minimum to its
maximum. When the eye focuses on an object farther away than about 3 m, the lens exhibits
its lowest refractive power. When the eye focuses on a nearby object, the lens is most strongly
refractive. The retinal image is focused primarily in the area of the fovea. Perception then
takes place by the relative excitation of light receptors, which transform radiant energy into
electrical impulses that are ultimately decoded by the brain.
iii) Brightness adaptation and discrimination:
Digital images are displayed as a discrete set of intensities. The range of light intensity levels to
which the human visual system can adapt is enormous, on the order of 10^10, from the scotopic
threshold to the glare limit. Experimental evidence indicates that subjective brightness is a
logarithmic function of the light intensity incident on the eye.
The curve represents the range of intensities to which the visual system can adapt. But the
visual system cannot operate over such a dynamic range simultaneously. Rather, this large
variation is accomplished by changes in its overall sensitivity, a phenomenon called brightness
adaptation. For any given set of conditions, the current sensitivity level of the visual system is
called the brightness adaptation level, Ba in the curve. The small intersecting curve represents
the range of subjective brightness that the eye can perceive when adapted to this level. It is
restricted at level Bb, at and below which all stimuli are perceived as indistinguishable blacks.
The upper portion of the curve is not actually restricted, but extending it far simply raises the
adaptation level higher than Ba. The ability of the eye to discriminate between changes in light
intensity at any specific adaptation level is also of considerable interest. Take a flat, uniformly
illuminated area large enough to occupy the entire field of view of the subject. It may be a
diffuser, such as opaque glass, that is illuminated from behind by a light source whose intensity,
I, can be varied. To this field is added an increment of illumination, ΔI, in the form of a
short-duration flash that appears as a circle in the center of the uniformly illuminated field. If
ΔI is not bright enough, the subject cannot see any perceivable change.
As ΔI gets stronger, the subject may indicate a perceived change. ΔIc is the increment of
illumination discernible 50% of the time against background illumination I. The quantity
ΔIc/I is called the Weber ratio. A small value means that a small percentage change in
intensity is discernible, representing “good” brightness discrimination. A large value of the
Weber ratio means that a large percentage change in intensity is required, representing
“poor” brightness discrimination.
iv) Optical Illusion:
In an optical illusion, the eye fills in non-existing information or wrongly perceives the
geometrical properties of objects.