Digital Image Processing Introduction
Digital Image Processing (DIP) refers to the manipulation and analysis of images using digital
computers with the objective of enhancing image quality, extracting meaningful information, or
enabling automated interpretation.
Digital image processing is not limited to adjusting the spatial resolution of everyday photographs captured by a camera, nor to increasing the brightness of a photo, and so on. It is far more than that.
Electromagnetic waves can be thought of as a stream of particles, each moving at the speed of light. Each particle carries a bundle of energy, and this bundle of energy is called a photon.
The electromagnetic spectrum, ordered by photon energy, is shown below.
In this electromagnetic spectrum, we are only able to see the visible portion. The visible spectrum mainly includes seven different colors, commonly abbreviated VIBGYOR: violet, indigo, blue, green, yellow, orange, and red.
But that does not nullify the existence of the rest of the spectrum. The human eye can see only the visible portion, in which we perceive everyday objects. A camera, however, can capture what the naked eye cannot, for example X-rays and gamma rays. Hence the analysis of that imagery, too, is done in digital image processing.
This discussion leads to another question:
Why do we need to analyze all that other stuff in the EM spectrum too?
The answer lies in the fact that those other bands are widely used. X-rays, for instance, are widely used in the medical field. The analysis of gamma rays is necessary because they are used widely in nuclear medicine and astronomical observation. The same goes for the rest of the EM spectrum.
Applications of Digital Image Processing
Some of the major fields in which digital image processing is widely used are mentioned below
• Image sharpening and restoration
• Medical field
• Remote sensing
• Transmission and encoding
• Machine/Robot vision
• Color processing
• Pattern recognition
• Video processing
• Microscopic Imaging
• Others
Image sharpening and restoration
Image sharpening and restoration here refers to processing images captured by modern cameras to improve them, or to manipulating them to achieve a desired result, much as Photoshop usually does. This includes zooming, blurring, sharpening, grayscale-to-color conversion and vice versa, edge detection, image retrieval, and image recognition. The common examples are:
[Figure: the original image (Einstein), a zoomed version, a blurred version, a sharpened version, and the detected edges]
Medical field
The common applications of DIP in the medical field are:
Gamma ray imaging
PET scan
X Ray Imaging
Medical CT
UV imaging
Remote sensing
In the field of remote sensing, an area of the earth is scanned by a satellite or from very high ground and then analyzed to obtain information about it. One particular application of digital image processing in remote sensing is detecting infrastructure damage caused by an earthquake.
The area affected by an earthquake is sometimes so wide that it is not possible to examine it with the human eye in order to estimate the damage; even where it is possible, the procedure is hectic and time consuming. A solution is found in digital image processing: an image of the affected area is captured from above the ground and then analyzed to detect the various types of damage done by the earthquake.
The key steps involved in the analysis are:
• The extraction of edges
• Analysis and enhancement of various types of edges
Transmission and encoding
The very first image transmitted over wire was sent from London to New York via a submarine cable. The picture that was sent is shown below.
The picture took three hours to travel from one place to the other.
Now consider that today we can watch a live video feed, or live CCTV footage, from one continent to another with a delay of mere seconds. Clearly a great deal of work has been done in this field too. The field does not focus only on transmission but also on encoding: many different formats have been developed for high or low bandwidth to encode photos and then stream them over the internet.
Machine/Robot vision
Apart from the many challenges that a robot faces today, one of the biggest is still improving the robot's vision: making a robot able to see things, identify them, detect obstacles, and so on. Much work has been contributed by this field, and a complete separate field, computer vision, has been introduced to work on it.
Hurdle detection
Hurdle detection is one of the common tasks done through image processing: identifying the different types of objects in the image and then calculating the distance between the robot and the hurdles.
Line follower robot
Many robots work by following a line and are thus called line follower robots. This helps a robot move along its path and perform certain tasks, and it too has been achieved through image processing.
Color processing
Color processing includes the processing of colored images and the different color spaces that are used, for example the RGB color model, YCbCr, and HSV. It also involves studying the transmission, storage, and encoding of these color images.
Pattern recognition
Pattern recognition involves techniques from image processing and from various other fields, including machine learning (a branch of artificial intelligence). In pattern recognition, image processing is used to identify the objects in an image, and machine learning is then used to train the system on changes in the pattern. Pattern recognition is used in computer-aided diagnosis, handwriting recognition, image recognition, etc.
Video processing
A video is nothing but a very fast sequence of pictures. The quality of a video depends on the number of frames per second and the quality of each frame. Video processing involves noise reduction, detail enhancement, motion detection, frame rate conversion, aspect ratio conversion, color space conversion, etc.
II. Fundamental Steps in Digital Image Processing
An image is defined as a two-dimensional function f(x, y), where (x, y) is a spatial coordinate (or location) and f is the intensity at that point. If x, y, and f are all finite and discrete, the image is said to be a digital image. A digital image consists of finite, discrete image elements called pixels, each having a location and an intensity value. In digital image processing, we process digital images using a digital computer.
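As a concrete illustration of this definition, here is a minimal sketch (the pixel values are invented for the example) of a digital image as a finite, discrete 2D array:

```python
import numpy as np

# A tiny 4x4 grayscale "digital image": each entry f(x, y) is an
# intensity value at a discrete (x, y) location, i.e. a pixel.
image = np.array([
    [  0,  64, 128, 255],
    [ 32,  96, 160, 224],
    [ 64, 128, 192, 255],
    [  0,   0, 255, 255],
], dtype=np.uint8)

height, width = image.shape   # finite spatial extent
intensity = image[1, 2]       # f at row 1, column 2 -> 160
print(height, width, intensity)
```

Indexing the array at a discrete location returns the pixel's intensity value, exactly the f(x, y) of the definition above.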
Digital image processing is not a field reserved for high-end applications. There are various fundamental steps in digital image processing, and we will discuss the steps and processes that can be applied to different images.
Classification
We can categorise the steps in digital image processing as three types of computerised processing,
namely low level, mid level and high level processing.
Low Level Processing
Low level processing involves basic operations such as image preprocessing, image enhancement,
image restoration, image sharpening, etc. The main characteristic of low level processing is that
both its inputs and outputs are images.
Mid Level Processing
Mid level processing involves tasks like image classification, object identification, image segmentation, etc. The main characteristic of mid level processing is that its inputs are generally images whereas its outputs are attributes extracted from those images.
High Level Processing
High level processing involves making sense of an ensemble of recognised objects, together with the cognitive tasks associated with computer vision.
Fundamental Steps in Digital Image Processing
Image Acquisition
Image acquisition is the first step in digital image processing; in this step we obtain the image in digital form. This is done using sensing materials, such as sensor strips and sensor arrays, together with an electromagnetic light source. The light source illuminates an object and is reflected or transmitted, and the sensing material captures this light. The sensor outputs the image as a voltage waveform in response to the electrical power supplied to it. Reflected light is captured, for example, with a visible light source, whereas with X-ray sources the transmitted rays are captured.
The captured image is an analog image, as the output is continuous. To digitise it, we use sampling and quantization, which discretize the image: sampling discretizes the image's spatial coordinates, whereas quantization discretizes its amplitude (intensity) values.
Image Enhancement
Image enhancement is the manipulation of an image for a specific purpose and objective; it is heavily used, for example, in photo-beautifying applications. Enhancement is performed using filters, which reduce noise in an image; each filter is suited to a specific situation. A correlation operation between the filter and the input image matrix yields the enhanced output image. To simplify the process, we can instead perform multiplication in the frequency domain, which gives the same result: we transform the image from the spatial domain to the frequency domain using the discrete Fourier transform (DFT), multiply by the filter, and then return to the spatial domain using the inverse discrete Fourier transform (IDFT). Filters used in the frequency domain include the Butterworth filter and the Gaussian filter.
The most commonly used filters are the low pass filter and the high pass filter. A low pass filter smooths an image by averaging neighbouring pixel values, thus reducing random noise; it gives a blurring effect and softens sharp edges. A high pass filter is used to sharpen images using spatial differentiation; examples are the Laplacian filter and the high-boost filter. There are also nonlinear filters for other purposes; for example, a median filter is used to eliminate salt-and-pepper noise.
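To make the median filter concrete, here is a minimal NumPy sketch of how this nonlinear filter removes salt-and-pepper noise; the window size and edge-padding choice are illustrative assumptions, not part of the text above:

```python
import numpy as np

def median_filter(img, size=3):
    """Replace each pixel with the median of its size x size neighbourhood.

    Edges are handled by padding with the nearest border value. Unlike
    an averaging (low pass) filter, the median discards isolated
    outliers instead of smearing them into neighbouring pixels.
    """
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            window = padded[y:y + size, x:x + size]
            out[y, x] = np.median(window)
    return out

# A flat grey patch corrupted with one "salt" and one "pepper" pixel.
noisy = np.full((5, 5), 100, dtype=np.uint8)
noisy[1, 1] = 255   # salt
noisy[3, 3] = 0     # pepper
clean = median_filter(noisy)
```

Since every 3x3 window contains at most one outlier, the median of each window is the background value and both noise pixels vanish completely.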
Image Restoration
Like image enhancement, image restoration is concerned with improving an image, but enhancement is largely subjective whereas restoration is largely objective. Restoration is applied to a degraded image in an attempt to recover the original: first we estimate the degradation model, and then we find the restored image.
We can estimate the degradation by observation, experimentation, or mathematical modelling. Observation is used when we know nothing about the setup or environment in which the image was taken. In experimentation, we find the point spread function of an impulse using a similar setup. In mathematical modelling, we additionally consider the environment in which the image was taken; this is the best of the three methods.
To find the restored image, we generally use one of three filters: the inverse filter, the minimum mean square error (Wiener) filter, or the constrained least squares filter. Inverse filtering is the simplest method but cannot be used in the presence of noise. The Wiener filter minimises the mean square error. Constrained least squares filtering imposes a constraint and is the best of the three.
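The Wiener approach can be sketched in a few lines of NumPy. This is a simplified, hypothetical illustration, assuming the degradation is a circular convolution with a known point spread function and that a constant K stands in for the noise-to-signal power ratio; it is not a production deconvolution routine:

```python
import numpy as np

def wiener_deconvolve(blurred, psf, K=0.01):
    """Minimal Wiener-style deconvolution sketch (illustrative helper).

    With K = 0 this degenerates to plain inverse filtering, which is
    unstable in the presence of noise -- exactly the weakness noted in
    the text above.
    """
    H = np.fft.fft2(psf, s=blurred.shape)   # degradation in the frequency domain
    G = np.fft.fft2(blurred)                # degraded image spectrum
    # Wiener filter: F_hat = conj(H) / (|H|^2 + K) * G
    F_hat = np.conj(H) / (np.abs(H) ** 2 + K) * G
    return np.real(np.fft.ifft2(F_hat))

# Noise-free demo: blur a random image with a 3x3 box psf, then restore.
rng = np.random.default_rng(0)
original = rng.random((16, 16))
psf = np.ones((3, 3)) / 9.0
H = np.fft.fft2(psf, s=original.shape)
blurred = np.real(np.fft.ifft2(np.fft.fft2(original) * H))
restored = wiener_deconvolve(blurred, psf, K=1e-9)  # tiny K: no noise here
```

In a real noisy image K must be chosen (or estimated) carefully; the tiny K above works only because this demo adds no noise.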
Colour Image Processing
Colour image processing is motivated by the fact that colour makes classification easier and that the human eye can distinguish thousands of colours but far fewer shades between black and white. Colour image processing is divided into two types: pseudo-colour (or reduced-colour) processing and full-colour processing. In pseudo-colour processing, colours are assigned to ranges of grey levels; it was used earlier. Nowadays, full-colour processing is used with full-colour sensors such as digital cameras and colour scanners, as the price of full-colour sensor hardware has fallen significantly.
There are various colour models, such as RGB (Red, Green, Blue), CMY (Cyan, Magenta, Yellow), and HSI (Hue, Saturation, Intensity). Different colour models are used for different purposes: RGB suits computer monitors, whereas CMY suits printers, so internal hardware converts RGB to CMY and vice versa. Humans, however, do not naturally describe colours in RGB or CMY terms; their perception is closer to HSI.
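The RGB-to-CMY conversion is simple enough to sketch directly: for 8-bit values each CMY channel is the complement of the corresponding RGB channel (a standard simplification that ignores printer-specific colour management):

```python
import numpy as np

def rgb_to_cmy(rgb):
    """Convert 8-bit RGB to CMY: each channel is the 255-complement."""
    return 255 - np.asarray(rgb, dtype=np.uint8)

def cmy_to_rgb(cmy):
    """The inverse conversion is the same complement operation."""
    return 255 - np.asarray(cmy, dtype=np.uint8)

# Pure red contains no cyan but full magenta and yellow.
cmy_red = rgb_to_cmy([255, 0, 0])
```

Because the complement is its own inverse, converting to CMY and back reproduces the original RGB triple exactly.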
Wavelets
Wavelets represent an image at various degrees of resolution. The wavelet transform is one member of the class of linear transforms that also includes the Fourier, cosine, sine, Hartley, slant, Haar, and Walsh-Hadamard transforms. A transform's coefficients are those of a linear expansion that decomposes a function into a weighted sum of orthogonal or biorthogonal basis functions. All these transforms are reversible and interconvertible; they express the same information and energy and are hence equivalent, varying only in how the information is represented.
Compression
Compression deals with decreasing the storage required for image information or the bandwidth required to transmit it. Compression technology has grown widely in this era; many people know of it through the common image extension JPEG (Joint Photographic Experts Group), which is a compression standard. Compression is done by removing redundant and irrelevant data. In the encoding process, the image goes through a series of stages: mapper, quantizer, and symbol encoder. The mapper may be reversible or irreversible; an example of a mapper is run-length encoding. The quantizer reduces accuracy and is an irreversible process. The symbol encoder assigns short codes to more frequent values and is a reversible process.
To get back the original image, we perform decompression through the stages of symbol decoder and inverse mapper. Compression may be lossy or lossless: if decompression reproduces the exact original image, the compression is lossless; otherwise it is lossy. Examples of lossless techniques are Huffman coding, bit-plane coding, LZW (Lempel-Ziv-Welch) coding, and pulse code modulation (PCM). JPEG is a common lossy format (PNG, by contrast, is lossless). Lossy compression is widely preferred in practice because the loss is barely visible to the naked eye while it saves far more storage or bandwidth than lossless compression.
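Run-length encoding, the mapper example mentioned above, can be sketched in a few lines of Python; this toy version operates on a single row of pixel values:

```python
def rle_encode(pixels):
    """Run-length encoding: a reversible mapper that replaces runs of
    identical pixel values with (value, run_length) pairs."""
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1] = (p, runs[-1][1] + 1)
        else:
            runs.append((p, 1))
    return runs

def rle_decode(runs):
    """Invert the mapper: expand each (value, run_length) pair."""
    out = []
    for value, length in runs:
        out.extend([value] * length)
    return out

row = [255, 255, 255, 0, 0, 255]
encoded = rle_encode(row)   # three runs instead of six pixel values
```

Decoding the runs reproduces the row exactly, which is what makes this mapper reversible and hence usable in lossless compression.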
Morphological Image Processing
In morphological image processing, we try to understand the structure of the image by finding the image components present in it. It is useful for representing and describing an image's shape and structure: we find boundaries, holes, connected components, convex hulls, thinning, thickening, skeletons, etc. It is a fundamental step for the stages that follow.
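As an illustrative sketch (with an assumed square structuring element, not taken from the text), binary erosion and the classic "object minus its erosion" boundary extraction can be written directly in NumPy:

```python
import numpy as np

def erode(binary, size=3):
    """Binary erosion with a size x size square structuring element:
    a pixel survives only if its whole neighbourhood is foreground."""
    pad = size // 2
    padded = np.pad(binary, pad, mode="constant")
    h, w = binary.shape
    out = np.zeros_like(binary)
    for y in range(h):
        for x in range(w):
            out[y, x] = padded[y:y + size, x:x + size].all()
    return out

def boundary(binary):
    """Morphological boundary: the object minus its erosion."""
    return binary - erode(binary)

square = np.zeros((5, 5), dtype=np.uint8)
square[1:4, 1:4] = 1          # a 3x3 solid square
edge = boundary(square)       # only the square's outline remains
```

Eroding the 3x3 square leaves only its centre pixel, so subtracting the erosion keeps exactly the eight outline pixels, i.e. the boundary.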
Segmentation
Segmentation extracts information from images on the basis of two properties: similarity and discontinuity. For example, a sudden change in intensity value represents an edge. Detection of isolated points, line detection, and edge detection are some of the tasks associated with segmentation. Segmentation can be done by various methods such as thresholding, clustering, superpixels, graph cuts, region growing, region splitting and merging, and morphological watersheds.
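The simplest of these methods, global thresholding, can be sketched in one line of NumPy; the threshold value here is an arbitrary choice for the example:

```python
import numpy as np

def threshold_segment(img, t):
    """Global thresholding: pixels brighter than t become foreground (1),
    the rest background (0) -- segmentation by intensity similarity."""
    return (np.asarray(img) > t).astype(np.uint8)

# Two dark columns and two bright columns; t = 100 separates them.
img = np.array([
    [10, 12, 200, 210],
    [11, 13, 205, 220],
])
mask = threshold_segment(img, 100)
```

Real systems usually choose the threshold automatically (e.g. from the histogram) rather than fixing it by hand as done here.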
Feature Extraction
Feature extraction is the next step after segmentation. We extract features from images, regions, and boundaries; corner detection is one example. These features should be independent of, and insensitive to, variations in parameters such as scaling, rotation, translation, and illumination. Boundary features can be described by boundary descriptors such as shape numbers and chain codes, Fourier descriptors, and statistical moments.
Image Pattern Classification
In image pattern classification, we assign labels to images on the basis of the features extracted, for example classifying an image as a cat image. Classical methods for image pattern classification are the minimum-distance classifier, correlation, and the Bayes classifier. Modern methods use neural networks and deep learning models such as deep convolutional neural networks, which are well suited to image data.
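A minimal sketch of the classical minimum-distance classifier (the feature values below are made up for the example): each feature vector is assigned to the class whose mean is nearest in Euclidean distance:

```python
import numpy as np

def minimum_distance_classify(feature, class_means):
    """Assign a feature vector to the class with the nearest mean
    (Euclidean distance) -- the classical minimum-distance classifier."""
    means = np.asarray(class_means, dtype=float)
    distances = np.linalg.norm(means - np.asarray(feature, dtype=float), axis=1)
    return int(np.argmin(distances))

# Hypothetical 2-D features (say, mean intensity and edge density)
# for two classes, summarised by their class means.
means = [[10.0, 2.0],    # class 0
         [200.0, 50.0]]  # class 1
label = minimum_distance_classify([190.0, 45.0], means)
```

The class means would in practice be estimated from labelled training samples; the classifier itself is just a nearest-mean lookup.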
Applications
• In medical diagnosis, gamma-ray imaging, X-ray imaging, ultrasound imaging, and MRI are used to examine the internal organs and bones of the body.
• In satellite imaging and astronomy, infrared imaging is used.
• In forensics, digital image processing is used for biometrics such as thumbprints and retina scans.
• We can find defects in manufactured packaged goods using microwave imaging.
• We can inspect circuit boards and microprocessors.
• Using image restoration, we can identify the number plates of moving cars from CCTV footage for police investigations.
• Beautifying filters on social media platforms use image enhancement.
• We can classify and identify images using deep learning models.
III. Components of Image Processing System
Image Processing System is the combination of the different elements involved in the digital image
processing. Digital image processing is the processing of an image by means of a digital computer.
Digital image processing uses different computer algorithms to perform image processing on the
digital images. It consists of following components:-
Image Sensors: Image sensors sense the intensity, amplitude, coordinates, and other features of the image and pass the result to the image processing hardware. The sensor stage includes the problem domain.
Image Processing Hardware: Image processing hardware is dedicated hardware that processes the raw output obtained from the image sensors and passes the result to the general purpose computer.
Computer: Computer used in the image processing system is the general purpose computer that
is used by us in our daily life.
Image Processing Software: Image processing software is the software that includes all the
mechanisms and algorithms that are used in image processing system.
Mass Storage: Mass storage stores the pixels of the images during the processing.
Hard Copy Device: Once the image is processed, it can be recorded with a hard copy device, or stored on a pen drive or other external storage.
Image Display: It includes the monitor or display screen that displays the processed images.
Network: Network is the connection of all the above elements of the image processing system.
III. Digital Image Fundamentals: Image Formation
Digital image processing means processing a digital image by means of a digital computer. We can also say that it is the use of computer algorithms to obtain an enhanced image or to extract useful information from it.
Digital image processing is the use of algorithms and mathematical models to process and
analyze digital images. The goal of digital image processing is to enhance the quality of
images, extract meaningful information from images, and automate image-based tasks.
The basic steps involved in digital image processing are:
Image acquisition: This involves capturing an image using a digital camera or scanner,
or importing an existing image into a computer.
Image enhancement: This involves improving the visual quality of an image, such as
increasing contrast, reducing noise, and removing artifacts.
Image restoration: This involves removing degradation from an image, such as
blurring, noise, and distortion.
Image segmentation: This involves dividing an image into regions or segments, each
of which corresponds to a specific object or feature in the image.
Image representation and description: This involves representing an image in a
way that can be analyzed and manipulated by a computer, and describing the features of an
image in a compact and meaningful way.
Image analysis: This involves using algorithms and mathematical models to extract
information from an image, such as recognizing objects, detecting patterns, and quantifying
features.
Image synthesis and compression: This involves generating new images or
compressing existing images to reduce storage and transmission requirements.
Digital image processing is widely used in a variety of applications, including medical
imaging, remote sensing, computer vision, and multimedia.
Types of an image
BINARY IMAGE- A binary image, as its name suggests, contains only two pixel values, 0 and 1, where 0 refers to black and 1 refers to white. This image is also known as a monochrome image.
BLACK AND WHITE IMAGE- The image which consist of only black and white
color is called BLACK AND WHITE IMAGE.
8 bit COLOR FORMAT- It is the most famous image format. It has 256 different shades of gray and is commonly known as a grayscale image. In this format, 0 stands for black, 255 stands for white, and 127 stands for mid-gray.
16 bit COLOR FORMAT- It is a color image format. It has 65,536 different colors and is also known as the High Color format. In this format the distribution of color is not the same as in a grayscale image.
Image formation is an analog-to-digital conversion of an image with the help of 2D sampling and quantization techniques, performed by capturing devices such as cameras. In general, we see a 2D view of the 3D world, and analog image formation works the same way: it is essentially a projection of the 3D world onto a 2D plane, which is then converted into our digital image.
Generally, a frame grabber or a digitizer is used for sampling and quantizing the analog signals.
Imaging
The mapping of a 3D world object onto a 2D digital image plane is called imaging. In order to do so, each point on the 3D object must correspond to a point on the image plane. Light reflects from every object that we see, enabling us to capture all those light-reflecting points on our image plane.
Various factors determine the quality of the image like spatial factors or the lens of the capturing
device.
Fundamentals of Image Formation
Optical Systems
The lenses and mirrors are crucial in focusing the light coming from the 3D scene to produce the
image on the image plane. These systems define how light is collected and where it is directed and
consequently affects the sharpness and quality of the image produced.
Image Sensors
The goal of image sensors, such as CCD or CMOS sensors, is simply to transform the optical image into an electronic signal. These sensors differ in sensitivity and in the resolution they deliver, which affects the image as a whole.
Resolution and Sampling
Resolution is defined as the sharpness of an image; technically it is the number of pixels an image holds. Sampling is the act of taking samples, i.e. discretizing a continuous analog signal and representing it as a grouping of discrete values. Higher resolution and an appropriate sampling rate are required in order to produce detailed and accurate images.
Image Processing
Image processing can be described as the act of modifying and enhancing digital images using algorithms. Pre-processing includes activities like filtering, noise reduction, and color correction that enhance image quality and aid information extraction.
Color and Pixelation
In digital imaging, a frame grabber, which acts like a sensor, is placed at the image plane. Light reflected by the 3D object is focused on it, and the continuous image is pixelated. The light focused on the sensor generates an electronic signal. Each resulting pixel may be coloured or grey depending on the sampling and quantization of the reflected light and the electronic signal generated from it. All these pixels together form a digital image. The density of these pixels determines the image quality: the higher the density, the clearer and higher-resolution the image.
Forming a Digital Image
In order to form or create a digital image, we need to convert continuous data into digital form. Two main steps are required:
Sampling (2D): Sampling sets the spatial resolution of the digital image, and the sampling rate determines the quality of the digitized image. It is related to the coordinate values of the image.
Quantization: Quantization sets the number of grey levels in the digital image. The transition from the continuous values of the image function to their digital equivalents is called quantization. It is related to the intensity values of the image.
The human eye needs a high number of quantization levels to perceive the fine shading details of an image; more quantization levels result in a clearer image.
Advantages
1) Improved Accuracy: Digital imaging is less susceptible to human factors and gives accurate
output of the object with high detailed capture.
2) Enhanced Flexibility: Digital images are easy to manipulate, edit or analyse as per the
requirements through different software hence they provide flexibility of post processing.
3) High Storage Capacity: Digital images can be stored in large quantities at very high resolution and quality, and they do not suffer physical wear and tear.
4) Easy Sharing and Distribution: The use of digital images allows them to be quickly duplicated
and transmitted across various channels and to various gadgets, helping to speed up the work.
5) Advanced Analysis Capabilities: Digital imaging enables the application of analytical tools,
including image recognition and machine learning, which can provide better insights and increase
productivity.
Disadvantages
1) Data Size: Large digital images can occupy substantial storage space and computational power, and may therefore be expensive to handle.
2) Image Noise: Digital images may be compromised by noise and artifacts, which degrade image quality, mainly when photographing at night or using low-quality image sensors.
3) Dependency on Technology: Digital imaging entails the use of sophisticated technology and
equipment that may be costly and there may be constant need to service or replace the equipment.
4) Privacy Concerns: The ability to take and circulate photographs digitally also poses concern
because personal information can be photographed without the subject’s permission.
5) Data Loss Risks: Digital image repositories, however, are prone to data loss caused by hardware
failures, corrupting software, or unintentional erasure.
Applications
1) Medical Imaging: Digital imaging is employed in medicine in diagnostic processes such as X-ray pictures, MRI scans, and CT scans, which provide views of the body's interior.
2) Surveillance and Security: Digital cameras and imaging systems are greatly needed for various
security or surveillance purposes as they offer live feed and are also useful in acquiring data for
investigations.
3) Remote Sensing: Digital imaging plays an important role in remote sensing applications, monitoring and mapping the environment and disasters using data captured from satellite and aerial systems.
4) Entertainment and Media: The entertainment industry involves the use of digital imaging in
films, video games, and virtual reality to deliver improved visual impact.
5) Scientific Research: Digital imaging helps scientific studies by providing high-quality images in research fields like astronomy, biology, and materials science.
Conclusion
This article outlined what a digital image is and covered key aspects of digital image formation, including sampling and quantization. It pointed out the strengths and weaknesses of digital imaging, such as enhanced precision and flexibility on the one hand, and large file sizes and privacy issues on the other. Digital imaging is applied in fields such as radiology, physics, chemistry, and astronomy, among others, underpinning the significance of the technology. As digital imaging technology improves, it is evident that it will remain fundamental to the analysis of visual data.
IV. Image Sampling and Quantization
Image sampling vs Quantization
Image sampling and image quantization are two fundamental steps in converting real‑world
images into digital form. Sampling determines how many pixels are used to represent an image,
while quantization decides how many intensity levels or colours each pixel can store. Together,
they control the quality, clarity and file size of digital images.
1. Image Sampling
Image sampling is the process of converting a continuous image into a discrete form by selecting a finite number of points (pixels) from the image plane. It deals with spatial resolution, i.e., how many pixels are used to represent the image in the horizontal and vertical directions. In simple terms, sampling determines where pixel values are taken from the image.
Converts continuous spatial coordinates into a discrete grid.
Directly controls image resolution (e.g., 256×256, 512×512).
Higher sampling rate means more pixels which means more detail.
Lower sampling rate means fewer pixels leading to loss of spatial detail.
Poor sampling can cause aliasing effects.
Working
The continuous image is overlaid with a rectangular grid.
Each grid intersection corresponds to a pixel location.
The image intensity is measured at each grid point.
The spacing between grid points defines the sampling rate.
The sampled values form a 2D array of pixels.
Use Cases
Digital cameras: Determining sensor resolution.
Medical imaging: CT and MRI scan resolution control.
Satellite imagery: Choosing ground spatial resolution.
Image resizing: Downsampling or upsampling images.
Example
A real-world scene is sampled into a 1024×1024 pixel image.
Reducing it to 256×256 means fewer samples, resulting in blurred edges and loss of fine details.
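The reduction in this example can be sketched by simple subsampling (a real resampler would low-pass filter first to avoid aliasing; this illustration deliberately skips that step):

```python
import numpy as np

def downsample(img, factor):
    """Reduce the sampling rate by keeping every `factor`-th pixel in
    each direction. Bare subsampling like this risks aliasing; proper
    resampling would smooth the image first."""
    return img[::factor, ::factor]

# A 1024x1024 "image" with a distinct value per pixel, for traceability.
fine = np.arange(1024 * 1024, dtype=np.uint32).reshape(1024, 1024)
coarse = downsample(fine, 4)   # 1024x1024 -> 256x256
```

Each pixel of the coarse image corresponds to one of every sixteen pixels of the fine image, which is exactly the loss of spatial detail described above.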
2. Image Quantization
Image quantization is the process of mapping a large set of continuous or discrete intensity values to a smaller, finite set of intensity levels. It deals with intensity resolution, i.e., how many grey levels or colors are used to represent pixel values. In simple terms, quantization determines how accurately pixel values are represented.
Converts continuous intensity values into discrete levels.
Controls the number of grey levels or colors.
More levels means smoother intensity transitions.
Fewer levels means visible banding and distortion.
Quantization introduces quantization error.
Working
Pixel intensity values are first sampled.
A finite set of intensity levels is defined (e.g., 8-bit → 256 levels).
Each pixel value is rounded to the nearest allowed level.
The difference between original and assigned value is the quantization error.
The result is a digitally representable image.
Use Cases
Image storage: Reducing memory requirements.
Image compression: JPEG and PNG encoding.
Display systems: Limited color depth screens.
Computer vision: Simplifying intensity ranges for processing.
Example
An image with intensities from 0–255 (8-bit) is quantized to 16 levels.
Smooth gradients appear as visible steps (posterization effect).
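This quantization can be sketched directly: mapping the 256 possible 8-bit intensities onto 16 evenly spaced levels leaves only 16 distinct values, which is what produces the visible posterization steps:

```python
import numpy as np

def quantize(img, levels):
    """Map 8-bit intensities (0-255) onto `levels` evenly spaced values.
    Each pixel is rounded down to its bin; fewer levels mean coarser
    tonal steps (posterization)."""
    img = np.asarray(img, dtype=np.float64)
    step = 256 / levels
    binned = np.floor(img / step)           # which of the `levels` bins
    return np.clip(binned * step, 0, 255).astype(np.uint8)

gradient = np.arange(256, dtype=np.uint8)   # smooth 0..255 ramp
coarse = quantize(gradient, 16)             # only 16 distinct values remain
```

The difference between each original value and its assigned level is the quantization error discussed in the Working steps above.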
Relationship between Image Sampling and Image Quantization
Image sampling and image quantization are complementary steps in the digitization of a
continuous image and both must be applied to obtain a complete digital representation.
Sampling discretizes the spatial domain of an image by deciding the number and
arrangement of pixels along the horizontal and vertical axes, thereby determining the image
resolution and structural detail.
Quantization discretizes the intensity domain by assigning each sampled pixel a finite set
of intensity or color levels, thereby controlling brightness accuracy and tonal smoothness.
Inadequate sampling results in spatial distortion, such as aliasing and loss of fine details,
even if intensity values are represented accurately.
Inadequate quantization results in intensity distortion, such as banding and false contours,
even if the spatial resolution is high.
High-quality digital images require both sufficient sampling density and sufficient
quantization levels, as improving one cannot compensate for deficiencies in the other.
Comparison
Let's compare them:
Basic meaning: Sampling is the process of selecting discrete spatial points (pixels) from a continuous image; quantization is the process of mapping pixel intensity values to a finite set of levels.
Domain of operation: Sampling acts on the spatial domain (x, y coordinates); quantization acts on the intensity/amplitude domain.
Controls: Sampling controls image resolution and the level of spatial detail; quantization controls grey-level or color resolution.
Key parameter: For sampling, the sampling rate or number of pixels; for quantization, the bit depth or number of intensity levels.
Main information loss: Sampling loses fine spatial detail and causes aliasing; quantization loses intensity precision and causes false contours.
Visual impact: Sampling affects image sharpness and structure; quantization affects image smoothness and tonal transitions.
V. Basic Relationships between Pixels
Relationships between pixels (Neighbours and Connectivity)
An image is denoted by f(x,y) and p,q are used to represent individual pixels of the image.
Neighbours of a pixel
A pixel p at (x, y) has 4 horizontal/vertical neighbours at (x+1, y), (x-1, y), (x, y+1) and (x, y-1). These are called the 4-neighbours of p: N4(p).
A pixel p at (x, y) has 4 diagonal neighbours at (x+1, y+1), (x+1, y-1), (x-1, y+1) and (x-1, y-1). These are called the diagonal neighbours of p: ND(p).
The 4-neighbours together with the diagonal neighbours of p are called the 8-neighbours of p: N8(p).
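These neighbourhood definitions translate directly into code; a small sketch:

```python
def n4(p):
    """4-neighbours of pixel p = (x, y): horizontal and vertical."""
    x, y = p
    return {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}

def nd(p):
    """Diagonal neighbours of p."""
    x, y = p
    return {(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)}

def n8(p):
    """8-neighbours: the union of N4(p) and ND(p)."""
    return n4(p) | nd(p)
```

Note that for pixels on the image border some of these coordinates fall outside the image; code using these sets must filter out-of-range positions.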
Adjacency between pixels
Let V be the set of intensity values used to define adjacency.
In a binary image, V = {1} if we are referring to adjacency of pixels with value 1. In a gray-scale
image, the idea is the same, but set V typically contains more elements.
For example, in the adjacency of pixels with a range of possible intensity values 0 to 255, set V
could be any subset of these 256 values.
We consider three types of adjacency:
a) 4-adjacency: Two pixels p and q with values from V are 4-adjacent if q is in the set N4 (p).
b) 8-adjacency: Two pixels p and q with values from V are 8-adjacent if q is in the set N8 (p).
c) m-adjacency (mixed adjacency): Two pixels p and q with values from V are m-adjacent
if
q is in N4(p), or
q is in ND(p) and the set N4(p)∩N4(q) has no pixels whose values are from V.
Connectivity between pixels
It is an important concept in digital image processing.
It is used for establishing boundaries of objects and components of regions in an image.
Two pixels are said to be connected:
if they are adjacent in some sense (neighbouring pixels under 4-, 8-, or m-adjacency), and
if their gray levels satisfy a specified criterion of similarity (e.g. equal intensity levels).
There are three types of connectivity on the basis of adjacency. They are:
a) 4-connectivity: Two or more pixels are said to be 4-connected if they are 4-adjacent to each other.
b) 8-connectivity: Two or more pixels are said to be 8-connected if they are 8-adjacent to each other.
c) m-connectivity: Two or more pixels are said to be m-connected if they are m-adjacent to each other.
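Connectivity is what connected-component labelling builds on. As an illustrative sketch (breadth-first search, with the similarity criterion fixed to V = {1}), the following labels foreground components under 4- or 8-connectivity, and shows that two diagonal pixels form one object under 8-connectivity but two under 4-connectivity:

```python
from collections import deque

import numpy as np

def label_components(binary, connectivity=4):
    """Label connected components of foreground (value 1) pixels via
    breadth-first search using 4- or 8-adjacency."""
    binary = np.asarray(binary)
    labels = np.zeros_like(binary, dtype=int)
    if connectivity == 4:
        offsets = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    else:
        offsets = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                   if (dy, dx) != (0, 0)]
    current = 0
    h, w = binary.shape
    for y in range(h):
        for x in range(w):
            if binary[y, x] == 1 and labels[y, x] == 0:
                current += 1                     # start a new component
                queue = deque([(y, x)])
                labels[y, x] = current
                while queue:
                    cy, cx = queue.popleft()
                    for dy, dx in offsets:
                        ny, nx = cy + dy, cx + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and binary[ny, nx] == 1
                                and labels[ny, nx] == 0):
                            labels[ny, nx] = current
                            queue.append((ny, nx))
    return labels, current

# Two diagonal pixels: separate under 4-connectivity, one object under 8.
img = np.array([[1, 0],
                [0, 1]])
_, n4_components = label_components(img, connectivity=4)
_, n8_components = label_components(img, connectivity=8)
```

This is exactly how connectivity is used to establish the boundaries of objects and regions in an image, as stated above.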
VI. Color Image Processing: Color fundamentals
Color Fundamentals
Visible light is composed of a relatively narrow band of frequencies in the electromagnetic spectrum.
If the light is achromatic (void of color), its only attribute is its intensity, or amount.
Achromatic light is what viewers see on a black and white television set.
Chromatic light spans the electromagnetic spectrum from approximately 400 to 700 nm.
Three basic quantities are used to describe the quality of a chromatic light
source: radiance, luminance, and brightness.
Radiance is the total amount of energy that flows from a light source, and it is usually measured in watts (W).
Luminance, measured in lumens (lm), gives a measure of the amount of energy an
observer perceives from a light source.
Brightness is a subjective descriptor that is practically impossible to measure. It embodies
the achromatic notion of intensity and is one of the key factors in describing color
sensation.