COMP5111-W04-preprocessing

The document discusses the pre-processing of remotely-sensed data, focusing on correcting geometric, radiometric, and atmospheric deficiencies before the data is utilized. It outlines various techniques such as cosmetic operations, geometric correction, and atmospheric correction, emphasizing the importance of these processes for accurate data interpretation and integration with GIS. The document also details specific methods for correcting issues like missing scan lines and systematic banding in imagery from satellite sensors.

COMP 5111 - Remote Sensing

4. Pre-processing of Remotely-Sensed
Data
4.1 Introduction
4.2 Cosmetic Operations
4.3 Geometric Correction and Registration
4.4 Atmospheric Correction
4.5 Illumination and View Angle Effects
4.6 Sensor Calibration
4.7 Terrain Effects

4.1 Introduction
In their raw form, as received from imaging sensors mounted on satellite
platforms, remotely-sensed data generally contain flaws or deficiencies with
respect to a particular application.

The correction of deficiencies and the removal of flaws present in the data are termed preprocessing because, quite logically, such operations are carried out before the data are used for a particular purpose.

Some corrections are carried out at the ground receiving station, while others must be performed by the user.

Correction for
- geometric,
- radiometric and
- atmospheric deficiencies, and the
removal of data errors or flaws,
will be covered here despite the fact that not all of these operations will
necessarily be applied in all cases.

The preprocessing techniques discussed in this section should, rather, be seen as being applicable in certain circumstances and in particular cases.

The preprocessing techniques described in Section 4.2 “Cosmetic Operations”
are concerned with the removal of data errors and of unwanted or distracting
elements of the image.
These errors are caused by detector imbalance.

Many actual and potential uses of remotely-sensed data require that these data
conform to a particular map projection so that information on image and map can
be correlated, for example within a geographical information system (GIS).
Where an image is geometrically corrected so as to have the coordinate and scale
properties of a map, it is said to be georeferenced.
“Geometric correction and registration” of images is the topic of Section 4.3.

Atmospheric effects on electromagnetic radiation (due primarily to scattering and
absorption) are described in the first Chapter. These effects add to or reduce the
true ground-leaving radiance, and act differentially across the spectrum.
Section 4.4 “Atmospheric Correction” provides an introductory review of
atmospheric correction techniques.

Sections 4.5–4.7 are concerned with the radiometric correction of images.

4.2 Cosmetic Operations
Two topics are discussed in this section.

Missing Scan Lines


This is the correction of digital images that contain either partially or entirely
missing scan lines. Such defects can be due to errors in the scanning or sampling
equipment, in the transmission or recording of image data, or in the reproduction
of the media containing the data.

De-striping
This is a brief discussion of methods of 'de-striping' imagery produced by electromechanical scanners. Scanners such as the Landsat TM and ETM+ instruments record 16 scan lines for each spectral band on each sweep of the scanning mirror. The radiance values along each of these scan lines are recorded by separate detectors, so a systematic pattern can be superimposed upon the image, repeating every 16 lines.
4.2.1 Missing Scan Lines

When missing scan lines occur on an image (Figure 4.1) the missing data have
gone for ever.

It is, nevertheless, possible to attempt to estimate what those values might be by looking at the image data values in the scan lines above and below the missing values.

Figure 4.1 Illustrating dropped scan lines on a Landsat MSS false colour composite image (bands 7, 5 and 4) of south Wales and north Devon.
Method 1
The simplest method for estimating a missing pixel value along a dropped scan line
involves its replacement by the value of the corresponding pixel on the immediately
preceding scan line.

If the missing pixel value is denoted by vij, meaning the value v of pixel i on scan line j, then the algorithm is simply:

vij = vi,j-1
Method 2
Method 2 is slightly more complicated;
it requires that the missing value be replaced by the average of the corresponding pixels on the scan lines above and below the defective line, that is:

vij = (vi,j-1 + vi,j+1) / 2

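Methods 1 and 2 can be sketched in a few lines of Python, treating the image as a list of scan-line lists (the function names are illustrative):

```python
def replace_with_previous_line(image, j):
    """Method 1: copy each pixel of missing scan line j from line j - 1."""
    image[j] = list(image[j - 1])
    return image

def replace_with_line_average(image, j):
    """Method 2: replace line j by the average of the lines above and below."""
    image[j] = [(above + below) / 2 for above, below in zip(image[j - 1], image[j + 1])]
    return image
```

Method 2 clearly needs valid lines both above and below the dropped line, so it cannot be used on the first or last line of an image.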
Method 3
Method 3 relies on the fact that two or more bands of imagery are often available.
Thus, Landsat TM produces seven bands, ETM+ produces eight.
If the pixels making up two of these bands are correlated on a pair-by-pair basis then
high correlations are generally found for bands in the same region of the spectrum.
For instance, the Landsat ETM+ bands 1 and 2 in the blue and green wavebands of
the visible spectrum are normally highly correlated.

The missing pixels in band k might be estimated by considering contributions from
(i) the equivalent pixels in another, highly correlated, band and
(ii) neighbouring pixels in the same band.

If the neighbouring, highly correlated, band is denoted by the subscript r then the algorithm can be represented by

vij(k) = M [vij(r) - (vi,j-1(r) + vi,j+1(r)) / 2] + (vi,j-1(k) + vi,j+1(k)) / 2

The symbol M in this expression is the ratio of the standard deviation of the pixel values in band k to the standard deviation of the pixel values in band r.
4.2.2 Destriping Methods

The presence of a systematic horizontal banding pattern is sometimes seen on images produced by electromechanical scanners such as Landsat's TM (Figure 4.2).

This pattern, known as banding, is most apparent when seen against a dark, low-radiance background such as an area of water.

It is effectively caused by imbalance between the detectors that are used by the scanner.

Figure 4.2 Horizontal banding effects can be seen on this Landsat-4 TM band 1 image of part of the High Peak area of Derbyshire, UK. The banding is due to detector imbalance. As there are 16 detectors per band, the horizontal banding pattern repeats every 16th scan line.
Two methods of destriping Landsat imagery are considered in this section.

For the sake of simplicity, they are illustrated with reference to Landsat MSS images (which have only 6 detectors per band) rather than to Landsat TM or ETM+ images, which have 16 detectors per spectral band.

Both methods are based upon the shapes of the histograms of pixel values
generated by each of the detectors;
these histograms are calculated from
lines 1, 7, 13, 19, . . . (histogram 1),
lines 2, 8, 14, 20, . . .(histogram 2),
lines 3, 9, 15, 21, . . .(histogram 3)
and so on
until six histograms have been computed (in the case of Landsat MSS).

4.2.2.1 Linear Method

This method uses a linear expression to model the relationship between the input
and output values. The underlying idea is based upon the assumption that each of
the 6 detectors ‘sees’ a similar distribution of all the land-cover categories that are
present in the image area.

If this assumption is satisfied, then the histograms generated for a given band from
the pixel values produced by the 6 detectors should be identical. This implies that
the means and standard deviations of the data measured by each detector should
be the same.

To eliminate the striping effects of detector imbalance, the means and standard
deviations of the 6 histograms are equalized, that is, forced to equal a chosen value.
Usually the means of the 6 individual histograms are made to equal the mean of
all of the pixels in the image, and the standard deviations of the 6 individual
histograms are similarly forced to be equal to the standard deviation of all of the
pixels in the image.
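The equalization described above can be sketched with numpy, assuming the image is a 2D array whose scan lines cycle through the detectors in order (the function name and `n_detectors` default are illustrative):

```python
import numpy as np

def destripe_linear(image, n_detectors=6):
    """Force each detector's lines to the global image mean and standard
    deviation (the linear destriping method)."""
    out = image.astype(float)
    target_mean, target_std = out.mean(), out.std()
    for d in range(n_detectors):
        lines = out[d::n_detectors]            # every n-th scan line belongs to detector d
        m, s = lines.mean(), lines.std()
        if s > 0:
            out[d::n_detectors] = (lines - m) / s * target_std + target_mean
    return out
```

After this transformation each detector's lines share the global mean and standard deviation, which removes the systematic offset between detectors.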

4.2.2.2 Histogram Matching
The method described in this section uses the shape of the cumulative frequency
histogram of each detector to find an estimate of the non-linear transfer function.
The ideal or target transfer function is taken to be defined by the shape of the
cumulative frequency histogram of the whole image, which is easily found by
carrying out a class-by-class summation of the 6 individual detector histograms.

Our aim is to adjust the individual cumulative histograms so that they match the
shape of the target cumulative histogram as closely as possible.
This is done by adjusting the class numbers of the individual histograms.

In order to determine the class number in the target histogram to be equated to class number k in the individual histogram, we find the first class in the target histogram for which the cumulative frequency count equals or exceeds the cumulative frequency value of class k in the individual histogram.

The procedure is applied separately to all 256 values for each of the 6 detectors.

The result is generally a reduction in the banding effect, though much depends
on the nature of the image.
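The class-matching rule can be sketched as follows, assuming the per-detector and whole-image histograms have already been computed (e.g. over 256 grey-level classes); the helper name is hypothetical:

```python
import numpy as np

def match_to_target(detector_hist, target_hist):
    """Build a lookup table mapping each class k of one detector's histogram
    to the first class of the target histogram whose cumulative frequency
    equals or exceeds the cumulative frequency of class k."""
    det_cum = np.cumsum(detector_hist).astype(float)
    tgt_cum = np.cumsum(target_hist).astype(float)
    tgt_cum *= det_cum[-1] / tgt_cum[-1]       # put both on the same total count
    return np.searchsorted(tgt_cum, det_cum)   # lookup table: old class -> new class
```

The resulting table is applied to every pixel recorded by that detector; one table is built per detector.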
4.3 Geometric Correction and
Registration
Remotely-sensed images are not maps.

Frequently, however, information extracted from remotely-sensed images is integrated with map data in a GIS or presented to consumers in a map-like form (for example gridded 'weather pictures' on TV or in a newspaper).

If images from different sources are to be integrated (for example multispectral data from Landsat ETM+ and SAR data from ERS-1 and -2) then the images from these different sources must be expressed in terms of a common coordinate system.

The transformation of a remotely-sensed image so that it has the scale and projection properties of a given map projection is called geometric correction.

A related technique, called registration, is the fitting of the coordinate system of one image to that of a second image of the same area.

Geometric correction is a generic term covering all techniques, however
approximate, of converting the data for a specified image band
from row/column  to latitude/longitude (lat/long) format.

Rectification or registration means the equalization of one image coordinate system to another.
For example, a multi-temporal series of images could be rectified to the first image
in the sequence without any consideration of latitude and longitude, north
orientation or reference ellipsoid.

The term geometric correction can include
- georeferencing,
- geocoding, and
- orthorectification.

Geo-referencing usually implies that the four corners of the image have geographical coordinates but the individual pixels are not given a lat/long pair. No specific account is taken of ellipsoids or projections. This is the simplest form of geometric correction and is described in the next sections (Sections 4.3.1 and 4.3.2).

Geocoding means that the image has all the properties of a map.

Orthorectification means that the terrain elevation has been included in the
correction process, implying that all pixels are viewed as if from above. This is the
most accurate form of geometric correction.

Orthorectification

A map projection is a technique for the representation of a curved surface (that of
the Earth) on a flat sheet of paper (the map sheet).

Many different map projections are in common use.

Each projection represents an effort to preserve some property of the mapped area, such as uniform representation of areas or shapes, or preservation of correct bearings (angles).

Cylindrical, conic and plane projections.
Geometric correction of remotely-sensed images is required when the remotely-sensed data are to be used in one of the following circumstances:

1. to transform an image to match a map projection
2. to locate points of interest on map and image
3. to bring adjacent images into registration
4. to overlay temporal sequences of images of the same area, perhaps acquired by different sensors and
5. to overlay images and maps within a GIS.

The sources of geometric error in moderate spatial resolution imagery are:
(i) instrument error,
(ii) panoramic distortion,
(iii) Earth rotation and
(iv) platform instability.

Instrument errors include distortions in the optical system, non-linearity of the scanning mechanism and non-uniform sampling rates.

Panoramic distortion is a function of the angular field of view of the sensor and
affects instruments with a wide angular field of view (such as the AVHRR and VIIRS)
more than those with a narrow field of view, such as the Landsat ETM+ and the
SPOT HRV.

Platform instabilities include variations in altitude and attitude.

Earth rotation velocity varies with latitude. The effect of Earth rotation is to skew
the image.

Consider the Landsat satellite as it moves southwards above the Earth’s surface.
At time t , its ETM+ sensor scans image lines 1–16.
At time t + 1, lines 17–32 are scanned.

But the Earth has moved eastwards during the period between time t and time t + 1.

Therefore the start of scan lines 17-32 is slightly further west than the start of scan lines 1-16. Similarly, the start of scan lines 33-48 is slightly further west than the start of scan lines 17-32.

The effect is shown in Figure 4.3.

Figure 4.3 Effects of Earth rotation on the geometry of a line-scanned image.
Due to the Earth’s eastwards rotation, the start of each swath (of 16 scan lines,
in the case of the Landsat-7 ETM+) is displaced slightly westwards.
The process of geometric correction can be considered to include:

(i) the determination of a relationship between the coordinate system of map and
image (or image and image in the case of registration);
(ii) the establishment of a set of points defining pixel centres in the corrected
image that define an image with the desired cartographic properties; and
(iii) the estimation of pixel values to be associated with those points.

A simple method based on orbital parameters is described in Section 4.3.1, while the map-based method is covered in Section 4.3.2.

The estimation of pixel (grey) values to be associated with these output points is
considered in Section 4.3.3.

4.3.1 Orbital Geometry Model

Orbital geometry methods are based on knowledge of the characteristics of the orbit of the satellite platform.

A simple method of correcting the coordinate system of remotely-sensed images using approximate orbit parameters, described by Landgrebe et al. (1974), is used here to illustrate the principles involved.

It is not recommended as an operational method, because it is based upon nominal rather than actual orbital parameters, which implies that the accuracy of the geometrically corrected image produced by this technique is not high.

Note that the image coordinate system has its origin in the top left corner,
at cell (1, 1) or sometimes assumed as (0,0).

The x-axis runs horizontally and increases in value to the right, while the y-axis runs vertically, with values increasing downwards.

Thus, the x-axis gives the pixel position across the scan line, and the y-axis gives the scan line number.

4.3.1.1 Aspect Ratio

Some sensors, such as the Landsat MSS, produce images with pixels that are not square: the GSD is 79 m along the y-axis and 56 m along the x-axis.

As we generally require square rather than rectangular pixels, we can choose 79 x 79 m pixels to overcome the problem.

The aspect ratio (the ratio of the x : y dimensions) is 56 : 79 = 1 : 1.41.

The first transformation matrix, M1, which corrects the image to a 1 : 1 aspect
ratio, is therefore

4.3.1.2 Skew Correction
Landsat TM and ETM+ images are skewed with respect to the north–south axis of
the Earth.
Landsats-4–5 and -7 had an orbital inclination of 98.2◦.
The satellite heading (the direction of the forward motion of the satellite) at the
Equator is therefore 8.2◦, increasing with latitude (Figure 2.1).

The skew angle θ at latitude L is given (in degrees) by

where θE is the satellite heading at the Equator.

Given the value of θ, the coordinate system of the image can be rotated through θ degrees anticlockwise so that the scan lines of the corrected image are oriented in an east-west direction using the transformation matrix M2:

4.3.1.3 Earth Rotation Correction
As the satellite moves southwards over the illuminated hemisphere of the Earth,
the Earth rotates beneath it in an easterly direction with a surface velocity
proportional to the latitude of the nadir or subsatellite point.
Earth rotation correction is applied using the following formula.

where
ωE is the Earth’s angular velocity,
ωO is the satellite’s angular velocity and θ and L are defined above.

The transformation matrix M3 is:

The three transformation matrices M1, M2 and M3 given above are not applied separately. Instead, a composite transformation matrix, M, is obtained by multiplying the three separate transformation matrices:

M = M3 M2 M1
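As an illustration of how the three corrections combine, the sketch below builds stand-in 2 x 2 matrices and composes them. The numeric forms here are assumptions, not the actual M1, M2 and M3 of the slides: a y-axis scaling for the aspect ratio, a rotation through an assumed skew angle, and an eastwards shear for Earth rotation.

```python
import numpy as np

theta = np.deg2rad(9.0)                          # assumed skew angle (illustrative)

M1 = np.array([[1.0, 0.0],                       # stand-in aspect-ratio scaling
               [0.0, 79.0 / 56.0]])
M2 = np.array([[np.cos(theta),  np.sin(theta)],  # rotation through the skew angle
               [-np.sin(theta), np.cos(theta)]])
M3 = np.array([[1.0, 0.15],                      # stand-in eastwards shear (Earth rotation)
               [0.0, 1.0]])

M = M3 @ M2 @ M1                                 # single composite transformation
corrected_xy = M @ np.array([10.0, 20.0])        # applied to one coordinate pair
```

Composing the matrices once and applying the product is cheaper than applying three transformations to every pixel coordinate.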

4.3.2 Transformation Based on Ground
Control Points
An alternative method (rather than the orbital geometry model) is to look at the problem from the opposite point of view and, rather than attempting to construct a physical model that defines the sources of error and the direction and magnitude of their effects, use an empirical method which compares differences between the positions of common points that can be identified both on the image and on a map of a suitable scale for the same area.

The aim of the procedures is to produce a method of converting map coordinates
to image coordinates, and vice versa.
Two pieces of information are required.
The first is the map coordinates of the image corners. Once the image is outlined
on the map, the map coordinates of the pixel centres (at a suitable scale) can
be found (Figure 4.4).

The map coordinates of the image corners are found by determining an image-to-
map coordinate transformation.
The map coordinates of the required pixel centres are converted to image
coordinates by a map-to-image coordinate transformation.

Both transformations are explained in this section.


The final stage, that of associating pixel values with calculated (map) pixel
positions, is discussed in the next section under the heading of resampling.

Figure 4.4 The area of the corrected image is shown by the rectangle that encloses the
oblique uncorrected image.

The coordinates of selected points, the GCPs (Ground Control Points), are measured
on map and image.
GCPs are well defined and easily recognizable features that can be located
accurately both on a map and on the corresponding image. They can be located
on the ground by the use of GPS rather than by map measurement.

Figure 4.5 shows the area around London Heathrow airport. Three 'good' GCPs lie within the white circles, which are all motorway junctions. They are good because their positions do not change over time.

Figure 4.5 Extract of Landsat-5 TM image of the area around Heathrow Airport. The white circles enclose 'good' ground control points.

Let (xi, yi) denote the map coordinates of the i-th GCP and (ci, ri) the image column and row coordinates of the same point.

The bivariate linear least squares function is used to find the least squares coefficients for the following four expressions:
1. x = f(c, r),
2. y = f(c, r),
3. c = f(x, y) and
4. r = f(x, y).

If the coefficients of each of these regression functions are known, it is possible to transform from map (x, y) to image (c, r) coordinates or from image (c, r) to map (x, y) coordinates.
In general, a high-order (third-order) polynomial least squares function is used:

ŝ = a00 + a10t + a01u + a20t^2 + a11tu + a02u^2 + a30t^3 + a21t^2u + a12tu^2 + a03u^3

The terms s, t and u are replaced by the appropriate coordinates. If, for instance, we wished to estimate y as a function of c and r we would replace s, t and u in the polynomial expansion by y, c and r.

There is thus one polynomial function for each of the four coordinate transformations:

y = a00 + a10c + a01r + a20c^2 + a11cr + a02r^2 + a30c^3 + a21c^2r + a12cr^2 + a03r^3

x = b00 + b10c + b01r + b20c^2 + b11cr + b02r^2 + b30c^3 + b21c^2r + b12cr^2 + b03r^3

r = d00 + d10x + d01y + d20x^2 + d11xy + d02y^2 + d30x^3 + d21x^2y + d12xy^2 + d03y^3

c = e00 + e10x + e01y + e20x^2 + e11xy + e02y^2 + e30x^3 + e21x^2y + e12xy^2 + e03y^3
Before we consider methods of evaluating polynomial expressions for given sets of (x, y) and (c, r) coordinates we should consider:

(i) the size of the sample of control points needed to give reliable estimates of the coefficients aij,
(ii) the spatial distribution of the control points and
(iii) the accuracy with which they are located.

In mathematical terms we need to take a sample of at least 10 control points in order to solve a third-order equation.

These numbers of control points are necessary purely and simply to ensure that it is mathematically possible to evaluate the equations defining the coefficients aij.

We can conclude not only that control points should be sufficient in number but also that they should be evenly spread, as far as possible, over the image area.

Procedures for estimating the coefficients aij in the least squares functions above
relating map and image coordinate systems are now considered.

The following description assumes that we wish to estimate the map coordinate y from the image column and row coordinates c and r for a set of n control points.

The method of least squares is used to find the vector of estimates y according to
the following model:

y = Pa

y = a00 + a10c + a01r + a20c^2 + a11cr + a02r^2 + a30c^3 + a21c^2r + a12cr^2 + a03r^3

In matrix form this is y = Pa, where y = [y1, y2, ..., yn]^T holds the map coordinates of the n GCPs, a = [a00, a10, a01, a20, a11, a02, a30, a21, a12, a03]^T is the vector of unknown coefficients, which are to be estimated from the GCP data, and P is the n x 10 matrix whose i-th row is

[1  ci  ri  ci^2  ci ri  ri^2  ci^3  ci^2 ri  ci ri^2  ri^3]

The least-squares formula for the evaluation of a is:

a = (P^T P)^-1 P^T y
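The fit can be sketched with numpy; the names are illustrative, and `numpy.linalg.lstsq` is used rather than forming (P^T P)^-1 explicitly, which is numerically safer but solves the same least-squares problem:

```python
import numpy as np

def design_matrix(c, r):
    """One row per GCP: [1, c, r, c^2, cr, r^2, c^3, c^2 r, c r^2, r^3]."""
    c, r = np.asarray(c, float), np.asarray(r, float)
    return np.column_stack([np.ones_like(c), c, r, c**2, c * r, r**2,
                            c**3, c**2 * r, c * r**2, r**3])

def fit_coefficients(c, r, y):
    """Least-squares estimate of a in y = Pa."""
    P = design_matrix(c, r)
    a, *_ = np.linalg.lstsq(P, np.asarray(y, float), rcond=None)
    return a
```

The same routine is reused for each of the four transformations by swapping which coordinate plays the role of y.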

4.3.3 Resampling Procedures

Once the 4 transformation equations relating image and map coordinate systems
are known, the next step is
to find the location on the map of the four corners of the image area to be
corrected, and
to work out the number of and spacing (in metres) between the pixel centres
necessary to achieve the correct map scale.

46
The uncorrected image corners (A, B, C, D) are transformed to the corrected image corners (P, Q, R, S) using the image (c, r) to map (x, y) coordinate transformation polynomials:

y = a00 + a10c + a01r + a20c^2 + a11cr + a02r^2 + a30c^3 + a21c^2r + a12cr^2 + a03r^3

x = b00 + b10c + b01r + b20c^2 + b11cr + b02r^2 + b30c^3 + b21c^2r + b12cr^2 + b03r^3

Figure 4.8(a) Schematic representation of the resampling process.
Then, the pixel size and pixel number along the east and north
directions are determined.

Every pixel in the corrected image is resampled using the map (x, y) to image (c, r) coordinate transformation polynomials:

r = d00 + d10x + d01y + d20x^2 + d11xy + d02y^2 + d30x^3 + d21x^2y + d12xy^2 + d03y^3

c = e00 + e10x + e01y + e20x^2 + e11xy + e02y^2 + e30x^3 + e21x^2y + e12xy^2 + e03y^3

In most cases, r and c are non-integral values and lie somewhere between the pixel centres in the uncorrected image. An interpolated value is computed using a procedure called resampling.
Three methods of resampling are in common use.
- Nearest neighbour resampling,
- Bilinear interpolation, and
- Bicubic interpolation.

The first is simple – take the value of the pixel in the raw image that is closest to
the computed (c, r) coordinates.
This is called the nearest neighbour method.

It has two advantages: it is fast, and its use ensures that the pixel values in the output image are 'real' in that they are copied directly from the raw image.

The second method of resampling is bilinear interpolation (Figure 4.9). This method assumes that a surface fitted to the pixel values in the immediate neighbourhood of (c, r) will be planar, like a roof tile. The four pixel centres nearest to (c, r) (i.e. points P1-P4 in Figure 4.9) lie at the corners of this tile; call their values vij.

The interpolated value V at (c, r) is obtained from:
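Nearest neighbour and bilinear resampling can be sketched together on a numpy array; the function name is hypothetical and no bounds checking is done at the image edges:

```python
import numpy as np

def resample(image, c, r, method="bilinear"):
    """Estimate a pixel value at non-integral image coordinates:
    c is the column (x) position and r the row (y) position."""
    if method == "nearest":
        return image[int(round(r)), int(round(c))]
    c0, r0 = int(np.floor(c)), int(np.floor(r))
    dc, dr = c - c0, r - r0                      # fractional offsets within the cell
    v11, v12 = image[r0, c0], image[r0, c0 + 1]  # the four nearest pixel centres
    v21, v22 = image[r0 + 1, c0], image[r0 + 1, c0 + 1]
    return ((1 - dr) * ((1 - dc) * v11 + dc * v12)
            + dr * ((1 - dc) * v21 + dc * v22))  # planar ('roof tile') fit
```

The bilinear estimate weights each of the four surrounding values by the area of the opposite sub-rectangle, so it reduces to the exact pixel value when (c, r) falls on a pixel centre.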

The third spatial interpolation technique that is in common use for estimating pixel
values in the corrected image is called bicubic because it is based on the fitting of
two third-degree polynomials to the region surrounding the point (c, r).

The 16 nearest pixel values in the uncorrected image are used to estimate the value
at (c, r) on the output image.

This technique is more complicated than either the nearest neighbour or the
bilinear methods discussed above, but it tends to give a more natural-looking
image without the blockiness of the nearest neighbour or the oversmoothing of the
bilinear method.

4.3.4 Other Geometric Correction Methods

The least squares polynomial procedure described in Section 4.3.2 is one of the most widely used methods for georeferencing medium-resolution images produced by sensors such as the Landsat ETM+, which has a nominal spatial resolution of around 30 m.

The accuracy of the resulting geometrically corrected image depends, as we have already noted, on the number and spatial distribution of GCPs.

Significant points to note are:

(i) the least squares polynomial produces a global approximation to the unknown correction function and
(ii) the method assumes that the area covered by the image to be georeferenced is flat.

The effects of terrain relief can produce very considerable distortions in a geometric correction procedure based on empirical polynomial functions (Figure 4.10).

The only effective way of dealing with relief effects is to use a mathematical model that takes into account
- the orbital parameters of the satellite,
- the properties of the map projection, and
- the nature of the relief of the ground surface.

Where high-resolution images (GSD < 1 m) such as those produced by IKONOS and QuickBird are used, the question of accurate, relief-corrected geocoded images becomes critical.
GeoEye, the company which owns and operates the IKONOS system, does not release details of the orbital parameters of the satellite to users (though QuickBird orbital data are available). Instead, they provide the coefficients of a set of rational polynomials with their stereo products. The term rational means 'ratios of': each image coordinate is expressed as a ratio of two polynomials in the ground coordinates, for example P1/P2 and P3/P4, where P1, P2, P3 and P4 are usually polynomials of maximum degree 3 (each with 20 coefficients).
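A hedged sketch of how such rational polynomials might be evaluated; the 20-term ordering below is illustrative only and is not the ordering used by any particular RPC product:

```python
import numpy as np

def cubic_poly(coeffs, X, Y, Z):
    """Evaluate a 20-coefficient cubic polynomial in the ground coordinates
    (X, Y, Z); this term ordering is illustrative."""
    terms = np.array([1.0, X, Y, Z, X*Y, X*Z, Y*Z, X**2, Y**2, Z**2,
                      X*Y*Z, X**3, X**2*Y, X**2*Z, X*Y**2, Y**3, Y**2*Z,
                      X*Z**2, Y*Z**2, Z**3])
    return float(np.dot(coeffs, terms))

def rpc_coordinate(num_coeffs, den_coeffs, X, Y, Z):
    """One image coordinate as a ratio of two cubic polynomials (e.g. P1/P2)."""
    return cubic_poly(num_coeffs, X, Y, Z) / cubic_poly(den_coeffs, X, Y, Z)
```

There are 20 terms because a polynomial of total degree 3 in three variables has 1 + 3 + 6 + 10 = 20 monomials.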

4.4 Atmospheric Correction

4.4.1 Background

A given pixel location on a remotely-sensed image is not a record of the true ground-leaving radiance at that point, for the magnitude of the ground-leaving signal is attenuated due to atmospheric absorption and its directional properties are altered by scattering.

Figure 4.11 shows, in a simplified form, the components of the signal received by
a sensor above the atmosphere.

All of the signal appears to originate from the point P on the ground whereas, in fact:
- scattering at S2 redirects some of the incoming electromagnetic energy within the atmosphere into the field of view of the sensor (the atmospheric path radiance) and
- some of the energy reflected from point Q is scattered at S1 so that it is seen by the sensor as coming from P.

This scattered energy, called 'environmental radiance', produces what is known as the 'adjacency effect'.

To add to these effects, the radiance from P (and Q) is attenuated as it passes through the atmosphere.
The relationship between the radiance received at a sensor above the atmosphere and the radiance leaving the ground surface is commonly written as

Ls = ρ T Htot / π + Lp

where
Htot is the total downwelling radiance in a specified spectral band,
ρ is the reflectance of the target (the ratio of upwelling to downwelling radiance),
T is the atmospheric transmittance, and
Lp is the atmospheric path radiance.

The downwelling radiance is attenuated by the atmosphere as it passes from the top of the atmosphere to the target. Further attenuation occurs as the signal returns through the atmosphere from the target to the sensor.

The path radiance term Lp varies in magnitude inversely with wavelength, because scattering increases as wavelength decreases. Hence, Lp will contribute differing amounts to measurements in individual wavebands.
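Assuming the common single-layer relationship Ls = ρ T Htot / π + Lp between at-sensor radiance and target reflectance, the reflectance can be recovered by inversion (a sketch; the function name is illustrative):

```python
import math

def surface_reflectance(L_sensor, L_path, T, H_tot):
    """Invert L = rho * T * H_tot / pi + Lp for the target reflectance rho."""
    return math.pi * (L_sensor - L_path) / (T * H_tot)
```

In practice Lp, T and Htot must themselves be estimated, which is what the correction methods in the following subsections attempt to do.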
4.4.2 Image-Based Methods
The first method of atmospheric correction that is considered is the estimation of the path radiance term, Lp, and its subtraction from the signal received by the sensor.

The regression method is applicable to areas of the image that have dark pixels (clear water, deep shadow, or dark coloured rocks). In terms of the Landsat ETM+ sensor, pixel values in the near-infrared band 4 are plotted against the values in the other bands in turn, and a best-fit (least-squares) straight line is computed for each using standard regression methods. The offset a on the x-axis for each regression represents an estimate of the atmospheric path radiance term for the associated spectral band (Figure 4.12).

Figure 4.12 Regression of selected pixel values in spectral band A against the corresponding pixel values in band B. Band B is normally a near-infrared band (such as Landsat ETM+ band 4).
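A minimal numpy sketch of the regression estimate, assuming band arrays of identical shape restricted to dark-pixel areas. The function names are hypothetical, and whether the offset is read from the x- or y-axis depends on how the regression is set up; here band k is regressed on the NIR band and the intercept is taken as the path-radiance estimate:

```python
import numpy as np

def path_radiance_offset(band_k, band_nir):
    """Regress band k against the near-infrared band over dark-pixel areas;
    the regression offset is taken as the path-radiance estimate for band k."""
    slope, offset = np.polyfit(band_nir.ravel(), band_k.ravel(), 1)
    return offset

def subtract_path_radiance(band_k, offset):
    """Subtract the estimated path radiance, clipping negative values to zero."""
    return np.clip(band_k.astype(float) - offset, 0.0, None)
```

The correction is applied to the whole band, not only the dark pixels used to estimate the offset.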
4.4.3 Empirical Line Method
Two targets, one light and one dark, are selected, and their reflectance is measured on the ground, using a field radiometer, to give values R on the y-axis of Figure 4.14.

The radiances recorded by the sensor (shown by the x-axis, L, in Figure 4.14) are computed from the image pixel values.

Finally, the slope, s, and intercept, a, of the line joining the two target points are calculated. The term a represents the atmospheric radiance. This equation is computed for all spectral bands of interest.

Figure 4.14 Empirical line method of atmospheric correction. Two targets (light and dark) whose reflectance (R) and at-sensor radiance (L) are known are joined by a straight line with slope s and intercept a.
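Under the reading that a is the intercept of the line on the radiance (x) axis, the empirical line calibration can be sketched as follows (names illustrative):

```python
def empirical_line(L_dark, R_dark, L_light, R_light):
    """Slope s and radiance-axis intercept a of the line joining the two
    calibration targets in (at-sensor radiance L, reflectance R) space."""
    s = (R_light - R_dark) / (L_light - L_dark)
    a = L_dark - R_dark / s
    return s, a

def to_reflectance(L, s, a):
    """Convert an at-sensor radiance to surface reflectance: R = s * (L - a)."""
    return s * (L - a)
```

By construction the line passes through both calibration targets, so each target's radiance maps back to its field-measured reflectance.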
4.5 Illumination and View Angle Effects
The magnitude of the signal received at a satellite sensor is dependent on several
factors, particularly:
• reflectance of the target
• nature and magnitude of atmospheric interactions
• slope and aspect of the ground target area relative to the solar azimuth
• angle of view of the sensor
• solar elevation angle.

In this section we consider the effects of:
(i) the solar elevation angle,
(ii) the view angle of the sensor and
(iii) the slope and aspect angles of the target.

Correction for the effects of variation in the solar elevation angle from one image to another of a given area can be accomplished simply if the reflecting surface is Lambertian.

The correction is simply

L' = L cos(x) / cos(θ)

where
θ is the solar zenith angle (measured from the vertical),
L is the observed radiance, and
x is the desired standard angle.

This formula may be used to standardize a set of multitemporal images to a standard solar illumination angle.
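A sketch of this cosine standardization, assuming the form L' = L cos(x) / cos(θ) with x the desired standard solar zenith angle (this reading of x is an assumption; the function name is illustrative):

```python
import math

def standardize_illumination(L, theta_deg, x_deg):
    """Normalize radiance observed at solar zenith angle theta to a standard
    zenith angle x for a Lambertian surface: L' = L * cos(x) / cos(theta)."""
    return L * math.cos(math.radians(x_deg)) / math.cos(math.radians(theta_deg))
```

Note that the Lambertian assumption is rarely exact for real land surfaces, so this correction is only approximate.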
