Preprocessing
Digital image processing of satellite images can be divided into:
Pre-processing
Rectification and Restoration
Enhancement and Transformations
Classification and Feature extraction
Preprocessing consists of:
Geometric correction: conversion of data to ground coordinates by
removal of distortions from sensor geometry
Radiometric correction: removal of sensor or atmospheric 'noise'
so the data more accurately represent ground conditions:
to correct data loss, remove haze, and enable mosaicking and comparison
Radiometric and geometric correction
[Figure: corrected image scene in 'map' orientation vs. uncorrected data in 'path' orientation]
Why is rectification needed?
Raw remote sensing data contain distortions that prevent overlay with map layers
and comparison between image scenes, and carry no geographic coordinates
To provide georeferencing
To compare/overlay multiple images
To merge with map layers
To mosaic images
e.g. google maps / google earth
Paths and rows
Image distortions
In air photos, errors include:
topographic and radial displacement;
airplane tip, tilt and swing (roll, pitch and yaw).
These errors are smaller in satellite data, owing to the platform's altitude and stability.
The main source of geometric error in satellite data is satellite path orientation (non-polar)
Sources of geometric error (main ones in bold)
a. Systematic distortions
Scan skew: the ground swath is not normal to the ground track, owing to
the forward motion of the platform during each mirror sweep
Mirror-scan velocity and panoramic distortion: along-scan distortion (pixels at the
edge are slightly larger); greater for off-nadir sensors.
Earth rotation: the earth rotates during scanning, offsetting successive rows (~122 pixels per Landsat scene)
b. Non-systematic distortions
Topography: requires a DEM, otherwise ~ 6 pixel offset in mountains
Altitude and attitude variations in satellite: these are minor
Geocorrection
Rectification – assigning coordinates to (~6) known locations - GCPs
GCP = Ground Control Point
Resampling - resetting the pixels (rows and columns) to match the GCPs
Rectification
Data pixels must be related to ground locations, e.g. in UTM coordinates
Two main methods:
- Image to image (to a geocorrected image)
(to an uncorrected image this would be 'registration', not rectification)
- Image to vectors (to a digital file)
(in the figure, black arrows point to known locations
whose coordinates come from the vectors or images)
Ortho-rectification: the same process, but also using a DEM (routine since ~2000)
to take the topography into account
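As a sketch of the rectification step, a first-order (affine) polynomial can be fitted to the GCP pairs by least squares. The GCP coordinates below are invented for illustration (consistent with a 10 m pixel grid); real work would use well-spread GCPs and check the RMS error.

```python
import numpy as np

# Toy GCPs: (col, row) in the raw image vs. (easting, northing) in UTM metres.
# These values are invented and consistent with a 10 m pixel grid.
image_xy = np.array([[10, 20], [200, 30], [50, 180],
                     [220, 210], [120, 100], [30, 150]], dtype=float)
ground_en = np.array([[500100, 5399800], [502000, 5399700], [500500, 5398200],
                      [502200, 5397900], [501200, 5399000], [500300, 5398500]],
                     dtype=float)

# Fit E = a0 + a1*x + a2*y (and likewise N) by least squares: an affine
# transform handles shift, scale, rotation and shear between image and map.
A = np.column_stack([np.ones(len(image_xy)), image_xy])
coef_e, *_ = np.linalg.lstsq(A, ground_en[:, 0], rcond=None)
coef_n, *_ = np.linalg.lstsq(A, ground_en[:, 1], rcond=None)

def pixel_to_ground(x, y):
    """Map a raw pixel (col, row) to ground coordinates (E, N)."""
    return (coef_e[0] + coef_e[1] * x + coef_e[2] * y,
            coef_n[0] + coef_n[1] * x + coef_n[2] * y)

# RMS error over the GCPs indicates the quality of the fit
pred = A @ np.column_stack([coef_e, coef_n])
rmse = float(np.sqrt(np.mean(np.sum((pred - ground_en) ** 2, axis=1))))
```

With ~6 GCPs a first-order fit is the usual minimum; higher-order polynomials need more points and risk warping the image away from the GCPs.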
Resampling methods
New DN values are assigned in 3 ways:
a. Nearest Neighbour: the pixel in the new grid gets the value of the closest pixel from the old grid – retains the original DNs
b. Bilinear Interpolation: the new pixel gets a value from the weighted average of the 4 (2 x 2) nearest pixels; smoother but 'synthetic'
c. Cubic Convolution (smoothest): new pixel DNs are computed by weighting the 16 (4 x 4) surrounding DNs
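The three schemes can be compared on a toy grid. This sketch uses SciPy's `map_coordinates`; note its order-3 spline is an analogue of, not identical to, the cubic-convolution kernel in image-processing packages.

```python
import numpy as np
from scipy.ndimage import map_coordinates

# Toy 4 x 4 band of DN values standing in for the old (uncorrected) grid
old = np.array([[10., 20., 30., 40.],
                [20., 30., 40., 50.],
                [30., 40., 50., 60.],
                [40., 50., 60., 70.]])

# New-grid pixel centres expressed in old-grid coordinates (a half-pixel
# shift, as if the image had been shifted during rectification)
rows, cols = np.meshgrid(np.arange(3) + 0.5, np.arange(3) + 0.5, indexing="ij")
coords = np.vstack([rows.ravel(), cols.ravel()])

nearest  = map_coordinates(old, coords, order=0).reshape(3, 3)  # keeps original DNs
bilinear = map_coordinates(old, coords, order=1).reshape(3, 3)  # 2x2 weighted average
smooth   = map_coordinates(old, coords, order=3).reshape(3, 3)  # 4x4 neighbourhood
```

Nearest neighbour only ever outputs DNs present in the input, which is why it is preferred before classification; the interpolating methods produce new, 'synthetic' values.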
Resampling
Good rectification is required for image registration – no 'movement' between images
[Figure pair: Central Coast Mountains, Landsat September 2003 and August 2006 – 100 m of retreat over 3 years]
Radiometric correction
Radiometric correction is used to modify DN values to account for noise,
i.e. contributions to the DN that are a result of…
a. the intervening atmosphere
b. the sun-sensor geometry
c. the sensor itself
We may need to correct for the following reasons:
a. Variations within an image (speckle or striping)
b. between adjacent or overlapping images (for mosaicking)
c. between bands (for some multispectral techniques)
d. between image dates (temporal data) and sensors
Atmospheric Interference - haze
Shorter wavelengths are subject to haze, which falsely increases the DN values.
The simplest method is known as dark object subtraction, which assumes some
pixel would have a DN of 0 if there were no haze, e.g. deep water in the near infra-red.
An integer value is subtracted from all DNs so that this pixel becomes 0.
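Dark object subtraction is a one-line operation per band; a minimal sketch with an invented toy band:

```python
import numpy as np

def dark_object_subtraction(band):
    """Subtract the scene's minimum DN from every pixel, assuming the
    darkest pixel (e.g. deep water in the NIR) would be 0 without haze,
    so any positive minimum is an additive haze contribution."""
    band = np.asarray(band, dtype=np.int32)
    haze = band.min()
    return np.clip(band - haze, 0, None)

# Toy band whose darkest pixel (deep water) reads 8 instead of 0
band = np.array([[8, 60, 120],
                 [45, 8, 200],
                 [90, 130, 75]])
corrected = dark_object_subtraction(band)
```

The same offset is applied to every pixel, so relative differences within the band are preserved.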
Atmospheric Interference: clouds
Clouds affect all visible and IR bands, hiding features twice: once with the
cloud, once with its shadow. We CANNOT eliminate clouds, although we
might be able to assemble cloud-free parts of several overlapping scenes (if
illumination is similar) and correct for cloud shadows (advanced).
[Only in the microwave region can energy penetrate clouds.]
Advanced slide: Reflectance to Radiance Conversion
DN reflectance values can be converted to absolute radiance values.
This is useful when comparing the actual reflectance from different
sensors e.g. TM and SPOT, or TM versus ETM (Landsat 5 versus 7)
DN = aL + b, where a = gain and b = offset
The radiance value (L) can be calculated as:
L = [Lmax - Lmin] * DN / 255 + Lmin
where Lmax and Lmin are known from the sensor calibration.
This will create 32 bit (decimal) values.
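The conversion above is straightforward to apply per band; the Lmin/Lmax values below are illustrative placeholders, not real TM or ETM+ calibration figures (those come from the sensor's published calibration).

```python
import numpy as np

def dn_to_radiance(dn, l_min, l_max):
    """Convert 8-bit DN values to radiance:
    L = (Lmax - Lmin) * DN / 255 + Lmin,
    where Lmin/Lmax are the sensor's calibration limits."""
    dn = np.asarray(dn, dtype=float)
    radiance = (l_max - l_min) * dn / 255.0 + l_min
    return radiance.astype(np.float32)  # 32-bit (decimal) values

# Illustrative calibration values only
radiance = dn_to_radiance(np.array([0, 128, 255]), l_min=-1.5, l_max=152.1)
```

A DN of 0 maps to Lmin and a DN of 255 maps to Lmax, so the full 8-bit range spans the sensor's calibrated radiance range.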
Sensor Failure & Calibration
Sensor problems show as striping or missing lines of data.
Missing data due to detector failure appears as a dropped line of DN values –
every 16th line for TM data, since there are 16 detectors per band
scanning 16 lines at a time (or every 6th line for MSS).
[Figures: MSS 6-line banding (raw scan and georectified); TM data 16-line banding; sample DNs – shaded DNs are higher]
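A simple cosmetic fix for dropped lines is to replace each bad row with the average of its neighbours; a minimal sketch on an invented toy band:

```python
import numpy as np

def fill_missing_lines(band, bad_rows):
    """Replace dropped scan lines with the mean of the rows above and
    below – a cosmetic fix for periodic line drop-out (e.g. every
    16th TM line when one of the 16 detectors fails)."""
    fixed = np.asarray(band, dtype=float).copy()
    for r in bad_rows:
        above = fixed[r - 1] if r > 0 else fixed[r + 1]
        below = fixed[r + 1] if r < fixed.shape[0] - 1 else fixed[r - 1]
        fixed[r] = (above + below) / 2.0
    return fixed

# Toy band with row 2 dropped to zero by a failed detector
band = np.array([[10, 12, 14],
                 [11, 13, 15],
                 [0,  0,  0],
                 [13, 15, 17]], dtype=float)
repaired = fill_missing_lines(band, bad_rows=[2])
```

The interpolated row is synthetic: fine for display or mosaicking, but it should be flagged before quantitative analysis.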
Landsat ETM+ scan line corrector (SLC) – failed May 31 2003
The SLC compensates for the forward
motion of the satellite during each scan
Canadian Arctic mosaic
See also Google Maps, [Link]/imap, etc.
Northern Land Cover of Canada – Circa 2000
More on Principal Components Analysis ….
[Figure: face-morph sequence from 100% Marilyn to 100% Margaret]
PCA components
1. Brightness – overall average reflectance
2.
3.