REMOTE SENSING
WITH TERRSET® 2020 /
IDRISI®
A Beginner’s Guide
Timothy A. Warner
David J. Campagna
Florencia Sangermano
Geocarto International Centre Ltd.
Copyright © 2021 Geocarto International Centre Ltd.
All rights reserved
No part of this book may be reproduced, or stored in a retrieval system, or transmitted in any form or
by any means, electronic, mechanical, photocopying, recording, or otherwise, without express written
permission of the publisher.
Trademarks TerrSet® and IDRISI® are registered trademarks of Clark Labs, Clark University,
Worcester, MA, USA. Windows® is a trademark of Microsoft Corporation.
Cover design by: Albert Larose
Printed in the United States of America
CONTENTS
Chapter 1 Introduction
1.1 Guide to Using the Manual
1.1.1 Objectives
1.1.2 Organization
1.1.3 Sample Data
1.1.4 Working with the Manual
1.2 TerrSet Software
1.2.1 What’s in a Name?
1.2.2 History and Overview of TerrSet / IDRISI
1.3 Starting TerrSet and the TerrSet Workspace
1.3.1 Downloading the Data
1.3.2 Starting TerrSet
1.3.3 TerrSet On-Line Help
1.3.4 Managing TerrSet Project Files with the TerrSet
EXPLORER
1.3.5 Basic File Management with the TerrSet
EXPLORER
1.3.6 Working with Metadata Using the TerrSet
EXPLORER
1.3.7 Working with Collections in the TerrSet
EXPLORER
Chapter 2 Displaying Remotely Sensed Data
2.1 Introduction to Data Types
2.1.1 Example: Landsat Data
2.1.2 Surface (Elevation and Bathymetry)
2.2 Satellite Image Display
2.2.1 Preparation
2.2.2 Image Display
2.2.3 Image Statistics and a Simple Contrast Stretch
2.2.4 Creating False Color Composite Images
2.2.5 Map Annotation
2.2.6 Printing a Map
2.3 3-D Visualizations
2.3.1 Preparation
2.3.2 ORTHO Perspective Display
2.3.3 Fly-Through Visualization
2.3.4 Recording and playing back Fly-Through movies
Chapter 3 Importing, Pre-Processing and Exporting
3.1 Importing Data into the IDRISI file format
3.1.1 GeoTIFF Format
3.1.2 Concatenation
3.1.3 Concatenation using the MOSAIC program
3.2 Georeferencing
3.2.1 Introduction to Map Projections
3.2.2 Converting Between Projections Using TerrSet-
Defined Projections
3.2.3 Converting Between Projections Using User-
Defined Projections
3.2.4 Resample - Transformations with control points
3.3 Mosaicking Images
3.3.1 Background
3.3.2 Mosaicking the Hong Kong data
3.4 Landsat Import
3.5 Subsetting an image
3.6 Radiometric correction: Atmospheric correction
3.6.1 Background
3.6.2 Atmospheric correction of the Hong Kong Data
3.7 Exporting Images
3.7.1 Exporting to GEOTIFF
3.7.2 Exporting to KML
Chapter 4 Enhancing Images Spatially
4.1 Convolution
4.1.1 Smoothing Filters
4.1.2 High-pass and Edge Filters
4.1.3 Sharpening Filters
4.2 Multiresolution Merge
4.2.1 Creating user-defined multiresolution merge
procedures
4.2.2 Multiresolution merge using PANSHARPEN
4.2.3 Multiplicative Merge
Chapter 5 Spectral Enhancement Techniques
5.1 Introduction
5.2 Download Data for this Chapter
5.3 Enhancing Highly Correlated Data using Data
Transformations
5.3.1 Background
5.3.2 Preparation
5.3.3 Principal Component Analysis (PCA)
5.3.4 PCA Decorrelation Stretch
5.3.5 HLS Stretch
5.4 Segmenting and Density Slicing Images for Advanced
Display
5.4.1 Preparation
5.4.2 Developing a Land Mask
5.4.3 Displaying Patterns in Water
5.4.4 Density Slicing Landsat Band 3
5.4.5 Combination False Color Composite for Land and
Water
Chapter 6 Image Ratios
6.1 Introduction to Image Ratios
6.2 Download Data for this Chapter
6.3 Vegetation Indices
6.3.1 Background
6.3.2 Preparation
6.3.3 Exploratory investigation of the AVHRR data of
Africa
6.3.4 NDVI image of Africa
6.4 Discriminating Snow from Clouds
6.4.1 Overview
6.4.2 Snow and Cloud Properties
6.4.3 Preparation
6.4.4 A Color Composite to Discriminate Snow from
Clouds
6.4.5 A Ratio to Discriminate Snow
6.5 Mineral Ratios
6.5.1 Hypothetical Ratio Example
6.5.2 Preparation
6.5.3 Exploratory Investigation of the Puna de Atacama
TM Data
6.5.4 Calculating Ratios
6.6 Other indices
6.6.1 Background
6.6.2 Preparation
6.6.3 Normalized Burn Ratio
6.6.4 Normalized difference water index
Chapter 7 Introduction to Classifying Multispectral Images
7.1 Introduction to Multispectral Classification Methods
7.1.1 Supervised Versus Unsupervised Classification
7.1.2 Soft Versus Hard Classification
7.1.3 Relative Versus Absolute Classifiers
7.2 Download Data for this Chapter
7.3 Unsupervised classification
7.3.1 Overview
7.3.2 Preparation
7.3.3 Develop the List of Land Cover Types
7.3.4 Group Pixels into Spectral Classes: CLUSTER
7.3.5 Determine the informational classes
7.3.6 Assign Each Spectral Class to an Informational
Class
7.3.7 Update the Legend and Create a Custom Image
Palette File
7.3.8 Iterative Self Organizing Clustering: ISOCLUST
7.3.9 Determine the Informational Class
7.3.10 Reassign Each Spectral Class to an Informational
Class
7.3.11 Update the Legend and Create a Custom Image
Palette File
7.4 Comparing classifications
Chapter 8 Supervised Classification
8.1 Background
8.2 Preparation
8.3 Develop the List of Land Cover Types
8.4 Digitize Training Classes
8.5 Characterize training statistics
8.5.1 Generate Class Signatures with MAKESIG
8.5.2 Group the Class Signatures into a Single Signature
Collection
8.5.3 Assessing and Comparing Signatures
8.6 Assigning pixels to classes
8.6.1 Parallelepiped Classification
8.6.2 Maximum Likelihood Classification
8.6.3 Multi-layer Perceptron Neural Network
Classification
Chapter 9 Soft Classification
9.1 Introduction
9.2 Download Data for this Chapter
9.3 Preparation
9.3.1 Setting up TerrSet project
9.3.2 Digitize Training Class Data
9.3.3 Create Signature Files of Training Data
9.4 Soft classification with Linear Unmixing
9.4.1 Summarize the Output of the Forest Classification
9.5 Soft Classification with Mahalanobis Typicalities
9.5.1 Summarize the outputs
Chapter 10 Classification Error Assessment
10.1 Introduction
10.2 Download Data for this Chapter
10.3 Classification error analysis
10.3.1 Overview
10.3.2 Preparation
10.3.3 Generate the Random Sample
10.3.4 Interpret the True Land Cover Class for Each
Sample Point
10.3.5 Rasterize the Recode of the Sample Points
10.3.6 Calculate the Classification Accuracy using
ERRMAT
Chapter 11 Image Change Analysis
11.1 Introduction
11.1.1 Change Detection Methods
11.1.2 Required Preprocessing
11.2 Download Data for this Chapter
11.3 Preparation
11.4 Import and radiometric correction
11.4.1 Preliminary Image Differencing
11.5 Change Detection Pre-Processing: Image
Normalization through Regression
11.5.1 Create Multitemporal Display
11.5.2 Digitize Change Mask
11.5.3 Rasterize the Vector Mask
11.5.4 Regression of the Masked Imagery
11.5.5 Normalize the 2013 data using the regression
equation
11.6 Spectral Change Detection
11.6.1 Image Subtraction
11.6.2 Principal Component Analysis
11.6.3 Change Vector Analysis
11.7 Post Classification Change Detection
11.7.1 Preparation
11.7.2 Develop a List of Spectral Classes
11.7.3 Digitize Training Polygons for the 1984 Image
11.7.4 Create Signatures with MAKESIG
11.7.5 Classify the 1984 Image
11.7.6 Harden the Mahalanobis typicality 1984
classification.
11.7.7 Collapse Spectral Classes to Informational Classes
11.7.8 Classification of the 2013 Image
11.7.9 Overlay of Two Independent Classifications using
CROSSTAB
References
Appendix
A. Sources of Free Data
B. Sources of Data for Sale
C. Download Data with Earth Explorer
ABOUT THE AUTHORS
Timothy A. Warner is Emeritus Professor of Geology and Geography at
West Virginia University, in Morgantown, West Virginia, USA. He received
a BSc (Hons) in Geology from the University of Cape Town, South Africa
and a PhD in remote sensing from Purdue University, USA. He served as the
editor-in-chief of the International Journal of Remote Sensing from 2014 to
2020, and also served as an associate editor of Remote Sensing Letters and
Progress in Physical Geography. He was a co-editor of the book, The SAGE
Handbook of Remote Sensing. He has had Fulbright appointments at the
University of Louis Pasteur, in Strasbourg, France, and at the Universidad de
Concepción, Chile. He is a Fellow of the American Society of
Photogrammetry and Remote Sensing (ASPRS).
David J. Campagna is a professional geologist with over 35 years of
experience in industry, academic and non-profit organizations. He has
worked worldwide with numerous energy firms as a consultant, most recently
as a Principal Scientist for CNOOC Intl. and has consulted for environmental
groups and was the founding board member for the non-profit, SkyTruth, Inc.
He has served as an adjunct professor of Geology and Geography at West
Virginia University. Dr. Campagna holds a Ph.D. in structural geology and
remote sensing from Purdue University, MS in geology from the University
of Kentucky and a BA in geology from Knox College.
Florencia Sangermano is an Assistant Professor in Geography at Clark
University in Worcester, Massachusetts, USA. She received a BSc/MSc in
Biology from Universidad Nacional de Mar del Plata, Argentina, and a MA
and PhD in Geography from Clark University. Her research focuses on
climate and land cover change impacts on ecosystems and biodiversity
through the lens of geospatial analysis, to support conservation planning and
ecosystem management. Before joining the Graduate School of Geography,
she was a research scientist at Clark Labs, where she developed methods to
facilitate the analysis and modeling of changes in climate and land cover and
taught training workshops in North America, South America, and Europe.
ACKNOWLEDGMENTS
The authors are grateful for the patient support of Mr. K. N. Au and Geocarto
International Centre Ltd. through the long process of the writing and
production of this manual.
Over many versions we would like to acknowledge the efforts of Joan
Vlasschaert, Lucy Kammer, Paula Hunt, Ann Stock, Diane Sutter, and
Camille Ferreol. We appreciate the support of West Virginia View. A special
thanks to Caroline Williams of Clark Labs for the conversion and editing of
the manuscript to digital format, and also Albert Larose, Gloria Pappalardo,
and James Toledano of Clark Labs.
CHAPTER 1
INTRODUCTION
1.1 Guide to Using the Manual
1.1.1 Objectives
The objectives of this manual are twofold. The first objective is to introduce
the reader to the display and basic processing procedures for enhancement,
analysis and classification of satellite imagery. The second objective is to
train the user in how to accomplish these tasks within the TerrSet
environment, specifically in the use of TerrSet’s IDRISI Image Processing
suite of tools. TerrSet is an excellent software suite for illustrating image
processing in that the program has a wide array of basic modules, which the
user combines in order to undertake an analysis. This ensures that each
operation within the overall analysis is transparent and ultimately
understandable.
You will be exposed to a wide variety of image analysis approaches and
techniques through this manual. However, no single manual could ever be a
comprehensive guide to either remote sensing or TerrSet. Nevertheless, at the
end of this manual you should have the confidence and experience to
continue exploring the wide range of functionality in TerrSet, and learning
new approaches to image analysis. Indeed, probably the most important skill
you will learn from this manual is not how to use TerrSet, but rather how to
approach remote sensing problems.
1.1.2 Organization
This training manual is primarily designed to be a stand-alone self-study
guide. The skills you need before starting this training manual are only those
of basic familiarity with the personal computer environment. Specifically,
you need an understanding of files and directory structures, and the ability to
maneuver around in the Windows environment. Basic knowledge of image
processing concepts is useful, but not essential. However, access to a general
remote sensing text (Table 1.1.2.a) is strongly recommended as a supplement
to the coverage of topics introduced in this manual. More advanced texts may
also be a useful supplement (Table 1.1.2.b).
Table 1.1.2.a Example introductory remote sensing texts.
Table 1.1.2.b Example advanced remote sensing texts.
The format of this manual was chosen to help the reader perform the included
topical exercises. A section covering a specific image analysis topic begins
with a brief general introduction to the subject matter, and then is followed
by detailed instructions associated with example exercises.
The exercise instructions are contained within diamond separators, i.e.,
◆ ◆ ◆
These exercises give step-by-step instructions in performing tasks in
TerrSet. The first line of the exercise instructions provides a summary of
the activity. The second line gives the menu location of the TerrSet
module described. For example, the instructions below between the
diamond separators illustrate step-by-step instructions for using
DISPLAY LAUNCHER.
◆ ◆ ◆
Displaying an image
Menu Location: File – Display – DISPLAY LAUNCHER
1. Start the DISPLAY LAUNCHER.
2. Within the DISPLAY LAUNCHER window, select a file by
clicking on the browse button (…) and then click on the
etm_pan raster file in the pick list window.
3. Select GreyScale in the Palette File section in the lower
right corner of the DISPLAY LAUNCHER window.
4. Click on OK to display the image.
◆ ◆ ◆
Other concepts used by this manual include:
• Tools and module names, such as DISPLAY LAUNCHER, are given in
capitals.
• The names of dialog boxes and windows are printed in italics, e.g.
DISPLAY LAUNCHER window.
• Text within the TerrSet module windows and dialog boxes is also
printed in italics. For example, the name of an input text box might be
Input file name.
• Names of files that already exist are printed in bold italics (e.g.
etm_pan).
• Text that you, the reader, should enter in a program (e.g. through a text
box) is given in bold, including the names for new files you will generate.
For example, the text may specify: Enter the file name pca_123 in the text
box.
• Terms that we wish to highlight are also shown in bold, for example
ground control point.
• Finally, the sequence of menu options you should select to start
programs is also highlighted in bold: e.g. File - Display - DISPLAY
LAUNCHER.
1.1.3 Sample Data
This manual comes with sample data covering a number of different locations
around the globe (Table 1.1.3.a). The locations were chosen to cover a
variety of natural and human-modified environments.
The first data set, used in Chapters 1 through 4, comprises Landsat Enhanced
Thematic Mapper (ETM) imagery of the coastal region of Hong Kong,
China, and part of the Pearl River estuary. We will be using these data to
illustrate the importing, displaying, merging and creating of maps. In
addition, a combination of imagery and elevation data will be used to create
3D displays and fly-throughs. The elevation data were acquired through the
Shuttle Radar Topography Mission (SRTM), in which the Spaceborne
Imaging Radar-C (SIR-C) was flown aboard the NASA Space Shuttle
Endeavour during 11-22 February, 2000. The SRTM mission generated a
near-global digital elevation model (DEM) of the Earth using overlapping
radar images, through a process called radar interferometry (Maune, 2001).
Table 1.1.3.a Image data and directory location.
Chapter 4 is followed by two chapters on image enhancement. In Chapter 5,
we will use Thermal Infrared Multispectral Scanner (TIMS) imagery of lava
flows from Hawaii to investigate a range of enhancement methods for highly
correlated data. The second half of the chapter draws on the Hong Kong
Landsat ETM data set, already used in Chapters 1-4, to investigate
segmentation and non-standard false color composites.
Chapter 6 explores image ratios. In this chapter, we first use coarse-resolution
Advanced Very High Resolution Radiometer (AVHRR) imagery of Africa to generate a
continental scale vegetation map. We will use Landsat data of Washington
State to develop an index to discriminate snow from clouds. We will then use
Landsat TM imagery from the Andes of the Atacama region along the Chile-
Bolivia border, where we will examine how image ratios can be used for
geologic mapping. Finally, we will use OLI images from Argentina to study
indices to detect fire and vegetation moisture.
For the classification in Chapter 7 and 8 we will use Landsat OLI imagery of
Morgantown, West Virginia, and the neighboring areas in the Appalachian
Plateau of West Virginia, USA. The area is marked by deciduous forest and
some farming activity. Chapter 9 uses Autumn Landsat ETM imagery from
Morgantown, WV to demonstrate the concepts of soft classification methods,
including linear spectral unmixing and Mahalanobis typicalities.
Chapter 10 covers the estimation of classification errors, using the results
from Chapter 8 and a digital orthophotography mosaic.
The final data set comprises three images of Las Vegas, Nevada, USA, from
Landsat TM, ETM and OLI, which is used in Chapter 11. The arid city of Las
Vegas and its surrounding valley will be used to demonstrate the mapping of
change, in this case, urban growth around the city of Las Vegas over three
decades.
1.1.4 Working with the Manual
This manual is written assuming that you will work progressively through the
material. Thus, instructions are more detailed in the beginning chapters,
especially Chapters 1-4. The instructions are generally slightly briefer the
second and subsequent times any program is described. If you should find
that the instructions for any program are too brief, you may wish to return to
the earlier sections, to review the particular program or procedures described,
and also draw on the extensive Help in TerrSet, as described below.
Some readers may prefer to sample the manual selectively. This should be
fine, but do note that many chapters draw on images prepared earlier in the
chapter, or skills developed in prior sections. If this is a barrier to your
completing the exercise, you will need to do the earlier work first.
1.2 TerrSet Software
1.2.1 What’s in a Name?
As first released, the IDRISI software was named after the cartographer and
botanist, Abu Abd Allah Mohammed al-Idrisi (1100-1165 or 1166 A.D.)
(Wikipedia 2005, Eastman 2009). Al-Idrisi was one of the most important
medieval scholars, producing maps for the Norman King, Roger II of Sicily,
that would serve as a primary reference for the next 500 years. In addition, he
made a major contribution in cataloging medicinal and other plants, which
had not previously been recorded.
1.2.2 History and Overview of TerrSet /
IDRISI
First released in 1987, the IDRISI software, now known as TerrSet, provides
a wide range of raster-based geospatial tools in a single integrated package
(Warner and Campagna 2004).
The latest version, TerrSet 2020, is designed for Windows, and is the
nineteenth release. This edition of the manual is substantially the same as the
previous version, Remote Sensing with TerrSet/IDRISI: A Beginner’s Guide,
with only minor changes to correct errors found in that edition. The TerrSet
2020 software has minimal changes to the particular programs discussed in
this manual. However, if the reader does find any inconsistencies between this
manual and the TerrSet software, the authors would be grateful to be informed.
For simplicity, the name TerrSet is used in this text to refer to both versions
of the software.
The TerrSet software includes eight integrated spatial components for
modeling the earth:
- IDRISI GIS
- IDRISI Image Processing
- Land Change Modeler
- Habitat and Biodiversity Modeler
- GeOSIRIS
- Ecosystem Services Modeler
- Earth Trends Modeler
- Climate Adaptation Modeler
TerrSet specializes in analytical functionality covering a wide spectrum of
spatial analysis from database query, to environmental modeling, to image
enhancement and classification. Although TerrSet is primarily oriented
towards the use of raster data, vector data can also be displayed and used in
some of the programs. TerrSet includes routines for environmental modeling
and natural resource management, such as change and time series analysis,
climate adaptation and monitoring, land change prediction, multi-criteria and
multi-objective decision support, uncertainty analysis and simulation
modeling. Spatial operations include interpolation, Kriging and conditional
simulation. For image processing, a suite of tools for image restoration,
enhancement and spectral transformation are available. TerrSet has a
particularly sophisticated range of classification algorithms, including
traditional “hard” classifiers, in which each pixel is assigned to a single class,
as well as “soft” classifiers, in which multiple classes are potentially
associated with each pixel. In addition, TerrSet offers hyperspectral image
classification procedures designed for use with images with hundreds of
spectral bands (a band is an image layer associated with a specific
wavelength region of the electromagnetic spectrum). TerrSet is specifically
designed to allow programmers and modelers to incorporate TerrSet routines
into their own applications, including an interface to Python. Despite the
highly sophisticated nature of these capabilities, the system is still easy to
use.
The more than 300 modules that make up TerrSet are organized into nine major
menu groups, with most of the analytical functionality used in this manual
concentrated in the IDRISI GIS Analysis and IDRISI Image Processing
Menus. Because TerrSet tends to generate many individual files, a program
for file management is an important component of the File menu.
A particularly effective TerrSet program module is the graphical MACRO
MODELER, which allows users to develop and link a sequence of TerrSet
modules. This program is useful for designing an image analysis in a
conceptual manner, for speeding up implementation of a complex sequence
of tasks, for building a macro for repeating an analysis sequence on different
data sets, and for documenting analytical procedures.
TerrSet is highly modular in design. This modular design tends to make an
analysis in TerrSet more complex because of the many steps
involved. However, this approach makes TerrSet a superb teaching tool,
because the user is forced to understand every step in each procedure.
1.3 Starting TerrSet and the TerrSet
Workspace
1.3.1 Downloading the Data
Before starting the TerrSet program, we will need to create and organize file
folders for our project data.
We will begin by creating a folder called RSGuide (short for Remote
Sensing Guide), in which we will store our data. Depending on your
preferences, you may wish to create this new folder on your desktop, the root
directory (C:\) or elsewhere. We find that there is some advantage in keeping
the paths (directory names) short, and so for this manual the examples we
give will be based on creating this directory in the root directory. The choice
of where to place the data, however, is up to you.
There are a number of ways to create a folder on your desktop, and one way
is described below.
◆ ◆ ◆
TerrSet folder creation
1. Right-click on the Windows icon (typically in the bottom-
left corner of your screen) and select File Explorer.
2. Double click on the C:\ drive. (Note if you want to create
your new folder in a location other than C:\, then you should
now navigate to that location).
3. Right-click in the folders panel.
4. Select New, then select Folder and name the folder
RSGuide.
◆ ◆ ◆
We are now ready to download some of the data provided with this project to
your computer.
◆ ◆ ◆
Data transfer
1. Download the data from the Clark Labs website:
clarklabs.org/download. There will be a link on this page
specifically for the data.
2. Place the downloaded file in your RSGuide folder.
3. Unzip the downloaded file by right-clicking on the file
name, and selecting Extract All....
◆ ◆ ◆
At the start of each section you will be prompted for the appropriate data and
data folder for that section.
1.3.2 Starting TerrSet
TerrSet is started by double clicking on the TerrSet icon on your
desktop. Alternatively, you can use the Windows® Start menu, by selecting
TerrSet within the TerrSet menu.
The TerrSet workspace window will now open. If this is the first time you
have opened TerrSet, the Quick Start Navigation Guide will be displayed
(Figure 1.3.2.a). Clicking on any of the icons will take you to that particular
application within the program. By clicking on the Next tip link, you can
review the Quick Start Navigation Guide that shows how to navigate and
display images in TerrSet.
If you do not want the guide to display every time you open TerrSet, check
the box in the lower left corner labeled Don’t show this again. (If you change
your mind, and want to redisplay the Quick Start Navigation Guide, simply
use the main TerrSet menu to access File – User Preferences, and then, on
the System settings pane, check the check box for Show tip screen on
startup). Close the navigation guide by clicking on the red x in the lower right
corner.
Figure 1.3.2.a The TerrSet workspace with the Quick Start Navigation
Guide, which may be displayed when TerrSet first starts.
Figure 1.3.2.b The TerrSet workspace.
The TerrSet workspace includes the toolbar, the menu system, the shortcut
utility, and the status bar (Figure 1.3.2.b). There are many ways to start an
individual module. One of the simplest ways is through the menu system,
which is at the top of the application window (Figure 1.3.2.b).
You can activate the File, IDRISI GIS Analysis, and IDRISI Image
Processing menus by clicking on the menu with the mouse. If you select a
menu option that includes a right-pointing arrow, a submenu will appear. You
can navigate through the submenus using the arrows on the keyboard (on the
number pad) or using the Enter key. Clicking on a menu option without a
right-pointing arrow will cause a dialog box for that module to appear.
Vertical applications (Land Change Modeler, Habitat and Biodiversity
Modeler, GeOSIRIS, Ecosystem Services Modeler, Earth Trends Modeler,
and Climate Adaptation Modeler) are accessed directly by clicking on the
corresponding Menu link. Some programs can be accessed directly, via the
icons, or buttons below the menu. These buttons are collectively known as
the tool bar. Each icon represents either a program module or an interactive
operation that can be selected by clicking on that button with the mouse.
Hold the cursor over an icon to momentarily display the name of the function
or module represented by that icon. The set of icons represents interactive
display functions as well as some of the most commonly used modules.
A third method for selecting a program is from the shortcut utility, a pull-
down menu with a scroll bar. You can navigate through this menu with the
mouse, or you can type the program name in the box directly. Note that you
can turn the Shortcut utility on or off on the main TerrSet window through
the User Preferences dialog box, obtained from the menu File-User
Preferences.
The status bar (Figure 1.3.2.c) provides a variety of information about
program operation. When maps and map layers are displayed on the screen
and the mouse is moved over one of these windows, the status bar will
indicate the position of the cursor within that map in both column and row
image coordinates and X and Y map reference system coordinates. In
addition, the status bar indicates the scale of the screen representation as a
Representative Fraction (RF).
Figure 1.3.2.c also shows some of the major windows within TerrSet that we
will be using extensively. On the left is the TerrSet EXPLORER window,
which is used for organizing data. The Display window, shown in the figure
in the center of the TerrSet workspace, is the major tool for visualizing
images. The Composer dialog box, on the far right, allows one to manipulate
how images are displayed in the Display window.
If one or more program modules are currently still processing, the status bar
will also indicate the progress of the most recently launched analytical
operation with a percent done measure, and sometimes a graphic bar (Figure
1.3.2.d). Note that the program dialog box remains open, whether the
program is still running or has finished. This “persistent window” approach
is useful because it facilitates running programs multiple times without
having to reopen the dialog box each time. You can, if you want, turn off this
persistent window feature through the File - User Preferences menu, and
unchecking the option for Enable persistent dialogs. Despite the benefit of
persistent windows, exercise caution with this capability: do not try to open
a file that is still being processed.
Figure 1.3.2.c The TerrSet status bar and related main windows.
Figure 1.3.2.d Progress indicator within the status bar.
Since TerrSet has been designed to permit multi-tasking of operations, it is
possible that more than one operation may be working simultaneously. To
check active processes and their status, simply double click on the bottom
right hand part of the status bar panel. A Progress of modules window will
open, listing the current programs running. Modules may also be terminated
from this window.
Figure 1.3.2.d also shows a typical program dialog box. Dialog boxes are
used to provide information to the modules regarding input and output data,
as well as important processing parameters.
1.3.3 TerrSet On-Line Help
TerrSet has excellent on-line documentation and help. In our own use of
TerrSet, and in developing this manual, we have drawn extensively on the
TerrSet help material, and we encourage all users of this manual to take
advantage of this outstanding resource.
You can access the TerrSet Help from the main TerrSet tool bar: Help –
Using Help.
Another way to access the on-line help is through the button labeled Help,
found in the dialog box that controls each module (Figure 1.3.3.a). This
button automatically opens the help material for the particular module from
which the help was called. The help material is very useful for understanding
the general nature of each program, as well as the limitations, such as the type
of data that can be used. The on-line help also provides a comprehensive
glossary and an index, which can be found through the Glossary and Index
tabs on the left-hand side of the Help window (Figure 1.3.3.b). The glossary
is helpful for clarifying the meaning of remote sensing terminology in
general, and TerrSet terms in particular. The index provides a tool for rapid
searches through the main sections of the Help system. You can also identify
the specific subsections within the Help by clicking on the Contents tab, and
navigating through the hierarchical structure, as shown in Figure 1.3.3.b.
Figure 1.3.3.a Typical dialog box showing button for on-line help.
Figure 1.3.3.b The TerrSet on-line Help.
In addition, PDF-format tutorial and manual documents can be accessed from
the main TerrSet menu: Help – TerrSet Tutorial / Help – TerrSet Manual.
Moreover, video tutorials are also provided for several introductory and
advanced applications: Help – TerrSet Tutorial Videos.
1.3.4 Managing TerrSet Project Files with
the TerrSet EXPLORER
In a typical image analysis activity you will produce many files. It is
therefore essential to have an effective method to organize your files. Firstly,
you should specify meaningful file names for files you generate. Do not
simply use the default names. Instead use names that have meaning within
your analysis, perhaps referring to the program used, or crucial parameters
used in the processing. In addition, you should keep good notes on the names
of the input and output files, as well as all parameters selected.
In addition to using appropriate file names and keeping good notes, it makes
good sense to organize your data in a series of folders, much as the data for
the exercises in this manual are organized. Note that the use of spaces or
illegal characters can cause problems when processing your data. We
recommend the use of underscore “_” instead of spaces, and refraining from
using characters other than letters or numbers in the file names.
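To make this recommendation concrete, the short Python sketch below replaces spaces with underscores and strips any remaining character that is not a letter, number, or underscore from a proposed file name. (Python is also the language of TerrSet's scripting interface, mentioned in Section 1.2.2; the function here is purely our own illustration, not a TerrSet module.)

    import re

    def sanitize_filename(name):
        # Replace spaces with underscores, then drop any character
        # that is not a letter, digit, or underscore.
        name = name.replace(" ", "_")
        return re.sub(r"[^A-Za-z0-9_]", "", name)

    print(sanitize_filename("pca band 1 (stretched)"))  # pca_band_1_stretched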
A powerful tool to assist you in organizing your data folders is the concept of
a TerrSet Project. A TerrSet Project is a file that keeps track of the working
folders and resource folders for a particular task. The working folder is the
main location for the data for your project. It is also the default location for
the output for the files created by the different modules. A resource folder is
a location where additional files can be stored. For example, in a large
project, you might store the original copies of your orthophotography,
satellite imagery, and elevation data in separate resource folders.
There is no limit to the number of resource folders used in a single
project. Resource folders are listed in file pick lists, and are searched in the
order they are specified (after the working folder) for file names that you type
in manually. If files with the same name exist in multiple resource folders,
a module will use the first one it encounters; we therefore recommend
avoiding duplicate file names across resource folders.
In summary, the TerrSet Project file determines the default locations where
modules will look for data, and the location that will be used to write the
output files. Although you can over-ride the Project file in determining file
locations, it can be tedious to do so, and the chance of becoming confused
and making mistakes becomes quite high.
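To make the search order concrete, the following Python sketch (our own illustration, not TerrSet code) mimics how a file name typed without a path is resolved: the working folder is checked first, then each resource folder in the order listed, and the first match wins.

    import os

    def resolve(filename, working_folder, resource_folders):
        # Search the working folder first, then each resource folder
        # in the order listed; return the first match found.
        for folder in [working_folder] + resource_folders:
            candidate = os.path.join(folder, filename)
            if os.path.exists(candidate):
                return candidate
        raise FileNotFoundError(filename)

    # Example, using the folders set up later in this chapter:
    # resolve("etm1.rst", r"C:\RSGuide\Chap1-4",
    #         [r"C:\RSGuide\Chap1-4\Raw Images"])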
TerrSet Projects may be stored anywhere; however, the default folder for
projects is the Projects subfolder within the TerrSet program folder.
A single Project Environment file, default.env, is automatically created. Some
users will find it convenient simply to change the working folder of this
default Project Environment file each time they work on a new data set.
However, you may wish to set a new Project file for each section of this
manual, in order to facilitate switching back and forth between the different
sections.
◆ ◆ ◆
Creating a new project file and specifying the working
folders with the TerrSet EXPLORER
1. Start the TerrSet EXPLORER from the toolbar or by clicking
on the “+” icon.
2. If a Start Here window opens, read the contents, and when
you are done, close it, by clicking on the red x in the lower
right corner.
3. The TerrSet EXPLORER window will open on the left side
of the TerrSet workspace. It is anchored to this location, and
although you can change the width of the window, you cannot
move it to other locations.
4. Practice minimizing the window by clicking on the “-“ icon
on the top left of the TerrSet EXPLORER window. The
window will collapse into the left hand side of the TerrSet
workspace.
5. Reopen the TerrSet EXPLORER by clicking on the “+”
icon in the minimized window.
6. Select the Projects tab (Figure 1.3.4.a).
◆ ◆ ◆
Figure 1.3.4.a The Projects and Editor panes in the TerrSet EXPLORER.
◆ ◆ ◆
Creating a new project file and specifying the working
folders with TerrSet EXPLORER (cont.)
7. There should now be two panes within the TerrSet
EXPLORER window. The first is the Projects pane, and
below that should be the Editor pane (Figure 1.3.4.a). If the
Editor pane is not shown right click in the Projects pane, and
select Show Editor. (It is possible to close the Editor pane by
clicking on the red “x” in that pane, hence the need to have a
way to open the pane again.)
8. Note also that the boundary between the Projects and
Editor panes can be dragged with the mouse, in order to
change the relative size of the two panes. This is convenient if
the Editor pane is obscuring information in the Projects pane.
9. Right click within the Projects pane, and select the New
Project Ins option (Figure 1.3.4.b).
◆ ◆ ◆
Figure 1.3.4.b Right click within the Projects pane to select the New Project
Ins option.
◆ ◆ ◆
Creating a new project file and specifying the working
folders with the TerrSet EXPLORER (cont.)
10. A Browse Folder window will open. Use this window to
navigate to the folder Chap1-4, which you copied into the
new RSGuide folder on your computer in Section 1.3.1.
11. Click OK.
12. A new project file will now be created.
13. Note that you can switch between the original default
environment and the new environment by selecting the
appropriate radio buttons in the Project pane.
14. The project’s default name is the directory name. If you
wish, you can rename the project by editing the text in the
Name text box, within the Editor pane.
15. You can also modify the working folder by clicking on the
name of the current file listed in the text box next to Working
folder. This will generate a browse icon (a button with three
dots). Clicking on the browse icon will open a Browse For
Folder window. If we wanted to specify a new folder, we
would now navigate to it, and then click OK. However, for
now, we will keep the existing folder, and therefore press
Cancel.
16. Add a resource folder by right clicking in the Editor pane,
and selecting Add folder.
17. A new line in the Editor pane will open, with the text
Resource folder (1) in the left cell. Click in the right cell, and
a browse icon will appear (a button with three dots). (Figure
1.3.4.c).
18. Click on this browse button, and the Browse for Folder
window will open. Navigate to the Raw Images subfolder,
within the RSGuide\Chap1-4\ folder on your computer. Click
on OK. Click No when asked whether to add subfolders (Figure 1.3.4.c).
◆ ◆ ◆
Figure 1.3.4.c Adding a Resource Folder.
◆ ◆ ◆
Creating a new project file and specifying the working
folders with the TerrSet EXPLORER (cont.)
19. Once the Browse for Folder closes, your project should
now be specified. If necessary, you can drag the TerrSet
EXPLORER window boundary to the right, in order to
provide more room to show the complete paths for the
folder locations (Figure 1.3.4.d).
◆ ◆ ◆
The new project file now points to the location of the working folder and
resource folder for Chapters 1-4 in this manual, as shown in Figure 1.3.4.d.
When you exit and re-launch TerrSet, the project file most recently used is
retained. Therefore, you only need to change the project file information
when you start a new project. For example, we will create a new project file
when we start Chapter 5.
Figure 1.3.4.d The new project file successfully created.
1.3.5 Basic File Management with the
TerrSet EXPLORER
The TerrSet EXPLORER is a powerful tool that has functionality well
beyond that of simply managing project files. For example, you can manage
all aspects of TerrSet-specific files, including regular file maintenance as well
as examining a file’s metadata, binary contents and structure.
Images in TerrSet are stored in what is called the IDRISI raster file format.
An image in IDRISI raster file format has two separate files, one that contains
the raw image brightness values (.rst), and a second file with the metadata
(.rdc), which specifies how the image is to be constructed (for example, the
number of rows and columns) from the raw image brightness values.
Because of this linking of files, doing file maintenance in the Windows
Explorer, instead of the TerrSet EXPLORER is likely to lead to disaster.
For example, if you were to use the Windows Explorer to rename the image
file etm1.rst to HongKong.rst, you would not be able to display it, unless you
also remembered to change the associated etm1.rdc file to
HongKong.rdc. On the other hand, the TerrSet EXPLORER would
automatically change both files, if you changed the name of the etm1.rst.
Thus, it is crucial always to use the TerrSet EXPLORER for all file
maintenance work, especially for tasks such as copying, deleting or
renaming files.
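The danger can be made concrete with a short Python sketch: a safe rename must move the .rst data file and its companion .rdc metadata file together, which is what the TerrSet EXPLORER does automatically. The stand-alone function below is our own illustration of that behavior, not TerrSet code.

    import os

    def rename_idrisi_raster(old_base, new_base):
        # Rename an IDRISI raster by moving both the data file (.rst)
        # and its metadata file (.rdc) to the new base name.
        for ext in (".rst", ".rdc"):
            os.rename(old_base + ext, new_base + ext)

    # rename_idrisi_raster("etm1", "HongKong")  # renames etm1.rst AND etm1.rdc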
We will now examine how we can use the TerrSet EXPLORER to manage
files.
◆ ◆ ◆
Basic file management with the TerrSet EXPLORER
1. If the TerrSet EXPLORER window is not already open,
open it again.
2. Click on the tab for Filters (Figure 1.3.5.a).
3. Observe the file types listed. These are only the
primary extensions. The many different types of file
extensions that TerrSet recognizes give some indication of the
wide range of functionality that TerrSet offers.
4. Note that it is possible to check which types of files one
wants to list in the Files pane, an option that we will look at in
a moment. The default is to list raster Image files, as well as
six other file types.
◆ ◆ ◆
Figure 1.3.5.a TerrSet EXPLORER showing the Filters pane.
◆ ◆ ◆
Basic file management with the TerrSet EXPLORER
(cont.)
5. Click on the small downward pointing triangle on the top
right-hand side of the TerrSet Primary Extensions pane
(Figure 1.3.5.a), representing the icon to switch between the
primary and entire list of TerrSet file extensions.
6. After clicking on the downward pointing triangle, observe
the much longer list of file types displayed.
7. If you want to check all the file types listed, in order to
display all the files in the TerrSet EXPLORER window, there
is a shortcut that is much quicker than clicking on each box.
Simply right click in the Filters pane, and in the resulting
pop-up menu click on Select All.
8. Before leaving this section of the TerrSet EXPLORER,
right click in the Filters pane once again, and select Clear
Filter. This will uncheck all the check boxes.
9. Now scroll to the check box for Raster Image (*.rst), and
click in the box.
10. Click on the tab for Files (Figure 1.3.5.b).
11. Practice clicking on the directory name to show and hide
the file names (Figure 1.3.5.b). In many cases when you open
the TerrSet Explorer the files may be hidden, and you need to
click on the directory to show the files.
12. If necessary, drag the Metadata pane boundary up, so that
you can see more of the metadata information.
13. Click on the file etm1.rst in the Files pane.
◆ ◆ ◆
Figure 1.3.5.b The Files and Metadata panes in the TerrSet EXPLORER.
The Files tab has two panes (Figure 1.3.5.b). The top pane shows the list of
files based on the list of filter selections chosen through the options available
via the Filters tab, and also listed in the text box above the Metadata pane.
The lower pane shows the metadata for the file selected in the Files pane. The
slider bar on the right allows one to scroll through the entire metadata file.
The basic file maintenance routines in the TerrSet EXPLORER window are
accessed by selecting one or more files, and then right clicking in the Files
pane. A pop-up menu will appear (Figure 1.3.5.c). The menu has options for
Copy, Move, Rename and Delete. These functions work in the way you would
expect.
Figure 1.3.5.c The pop-up menu for basic file maintenance commands.
The TerrSet EXPLORER also provides shortcuts for displaying images and
also the raw data that underlie an image. When a raster or vector file is
selected in the TerrSet EXPLORER, right clicking and selecting the option
for Display Map will display the image using default options. We will learn
about the program DISPLAY LAUNCHER, which allows greater control
over how an image is displayed, in Chapter 2.
The TerrSet EXPLORER pop-up menu for the Files pane also offers options
to view the underlying data from within a file. A file may be viewed in its
byte-by-byte binary and/or ASCII representation by choosing the Show
Binary menu option. This is useful for viewing binary raster images to
determine their file structure for importing into IDRISI raster format. (Binary
is a dense type of coding typically used for images or other large, structured
data sets.) Images can also be shown as a grid of pixel values through the
Show Structure pop-up menu option. The topic of how an image is
constructed from a grid of numbers is explored further in the introduction to
Chapter 2.
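For readers curious about what such a raw binary file contains, the Python sketch below reads an IDRISI raster into a grid of numbers, assuming the simplest case: byte (8-bit) pixel values, and a plain-text .rdc metadata file containing rows and columns entries of the form “rows : 1000”. This is a simplified illustration under those assumptions, not a general-purpose IDRISI reader.

    import numpy as np

    def read_idrisi_byte_raster(base):
        # Parse "key : value" lines from the plain-text .rdc metadata file.
        meta = {}
        with open(base + ".rdc") as f:
            for line in f:
                if ":" in line:
                    key, _, value = line.partition(":")
                    meta[key.strip().lower()] = value.strip()
        rows, cols = int(meta["rows"]), int(meta["columns"])
        # The .rst file holds the raw pixel values, one byte per pixel.
        data = np.fromfile(base + ".rst", dtype=np.uint8)
        return data.reshape(rows, cols)

    # grid = read_idrisi_byte_raster("etm1")  # 2-D array of DN values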
1.3.6 Working with Metadata Using the
TerrSet EXPLORER
Note that the last option on the pop-up menu in the Files pane of the TerrSet
EXPLORER is Metadata. You can control whether the Metadata pane is
displayed by checking or unchecking the Metadata option in the pop-up
menu.
To investigate this concept of metadata further, click on the etm_pan.rst file
in the Files pane of the TerrSet EXPLORER. Scroll through the metadata
values for that file in the Metadata pane below. Note that in the Metadata
window, each row has two cells. The left cell is a category, for example
Name. The right cell is the attribute. In order to change the attribute, you
simply click in the cell, and type the new value. Be warned, though: if the
new values you enter are inappropriate, you can make it impossible to display
the file.
Observe the type of data about the image that is recorded in the metadata file:
the data type (which determines the potential range of the values stored),
number of columns and rows, pixel size, map information, file lineage as well
as user-supplied titles, legends and notes. Notice that no title has been
specified for the etm_pan file. In the next exercise, we will add the image
title information to the metadata, so that when the image is displayed we will
have the option of automatically also displaying a title.
◆ ◆ ◆
Modifying image metadata with TerrSet EXPLORER
1. In the TerrSet EXPLORER, click on the tab for Files.
2. If the files are not listed in the Files pane, double click on
the directory name to display the files.
3. In the Files pane, select etm_pan.rst.
4. If the Metadata pane is not displayed, right click in the
Files pane, and select the option for Metadata.
5. In the Metadata pane, click in the text cell to the right of
File title, and enter Hong Kong Landsat Panchromatic
Band (Figure 1.3.6.a).
6. Click on the Save icon at the bottom left side of the
Metadata pane (Figure 1.3.6.a). (Note: the Save icon is grayed
out most of the time, and is only shown in color when the
metadata has been changed, and therefore can be saved.)
7. A message will pop up warning that modification of
certain metadata parameters may corrupt the image if
specified incorrectly. Click on the “Yes” option.
◆ ◆ ◆
Figure 1.3.6.a Modifying the metadata of an image.
1.3.7 Working with Collections in the
TerrSet EXPLORER
A useful management tool in TerrSet is the concept of collections. A layer
collection is a group of layers that are associated with each other, for example
the different image bands that together make up a single satellite
image. Collections are used to facilitate the input of filenames to dialog
boxes. They may also be required as input for particular analytical modules.
Finally, raster files that are grouped into a collection and linked when
displayed can be viewed in a systematic way, such as through linked zooming
and panning.
In this part of the exercise, we will use the TerrSet EXPLORER to create a
raster group file with the Hong Kong Landsat data, as a preparatory step for
displaying two bands as linked displays in Chapter 2. TerrSet also offers a
dedicated program for dealing with collections, available through the menu:
File – Collection Editor. However, generally, the TerrSet EXPLORER
provides a more powerful interface for working with collections.
◆ ◆ ◆
Creating a file collection with the TerrSet EXPLORER
1. If the TerrSet EXPLORER window is not already open,
open it again.
2. Click on the tab for Files.
3. Highlight the files etm1.rst through etm7.rst (note that
there is no etm6.rst in this data set; Landsat band 6 is a
thermal band, and we will work with it later, in Chapter 5).
You can select multiple bands by clicking on each
file sequentially, while simultaneously pressing the Ctrl key.
Alternatively, you can click on etm1.rst, then, while
simultaneously pressing the Shift key, click on the etm7.rst
file. This will highlight the beginning and end files, as well as
all those in between. (Figure 1.3.7.a).
4. Right click in the Files pane. Select the menu option for
Create – Raster Group (Figure 1.3.7.a).
◆ ◆ ◆
Figure 1.3.7.a The pop-up menu for creating a raster group file.
◆ ◆ ◆
Creating a file collection with the TerrSet EXPLORER
(cont.)
5. A new file, Raster Group.rgf, will be listed in the Files
pane.
6. Right click on this file.
7. Select Rename from the list of options and enter a new
name by typing over the default name of Raster Group. Since
this is an entire collection of satellite image bands, we will
enter hk_etm_all (Figure 1.3.7.b).
8. Press Enter on your computer keyboard.
9. Refresh the TerrSet EXPLORER view by pressing the F5
key on the keyboard or by right clicking on the working folder
and selecting the Refresh option.
◆ ◆ ◆
Figure 1.3.7.b Entering a new name for the raster group file.
The Metadata pane has additional powerful built-in functionality to add,
delete, and reorder the individual layers in the collection. This can be
observed by highlighting the hk_etm_all.rgf raster group file in the Files
pane, and noting how the Metadata pane lists the file names associated with
this collection. Now click on the Metadata cell for Group Item (5), and then
right click. A pop-up menu will list options such as Remove, Move up and
Move down (Figure 1.3.7.c). The latter two options change the order of the
layers within the collection. This can be important if the order of the layers in
a group file has an associated meaning. You can also use the icons at the
bottom of the pane to add an item, remove an item, or move an item up or
down.
Figure 1.3.7.c Manipulating individual files in a raster group file.
CHAPTER 2
DISPLAYING REMOTELY
SENSED DATA
2.1 Introduction to Data Types
Remote sensing image data are usually organized as an array of pixels, where
each pixel has an intensity value (also referred to as a gray level, brightness
value, or digital number (DN)). Pixel is a composite word, derived from
“picture element.”
Figure 2.1.a shows that when a digital image is enlarged significantly, the
individual pixels, and the discrete area each pixel represents, become
apparent. The legend shows the scheme by which the DN values have been
mapped to image gray tones. Table 2.1.a shows the same information as
Figure 2.1.a, except as numbers instead of gray tones. Note that pixels with
dark gray tones in Figure 2.1.a correspond to low DN values in Table 2.1.a.
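This mapping from DNs to gray tones can be reproduced in a few lines of Python with the numpy library. The DN values below are arbitrary stand-ins for illustration, not the actual values in Table 2.1.a.

    import numpy as np

    # A tiny image: 3 rows by 4 columns of digital numbers (DNs).
    dn = np.array([[ 12,  40,  80, 200],
                   [ 25,  60, 120, 180],
                   [ 10,  90, 150, 255]], dtype=np.uint8)

    # Linear mapping of DNs to display gray tones in the range 0-255:
    # low DNs become dark tones, high DNs become light tones.
    gray = ((dn - dn.min()) / (dn.max() - dn.min()) * 255).astype(np.uint8)
    print(gray)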
The fundamental characteristics of remotely sensed data depend upon the
resolution of the sensor used to acquire the data (Jensen 2016). There are
four types of resolution: radiometric, spatial, spectral and temporal (Warner
et al. 2009a).
Figure 2.1.a A highly enlarged image that comprises only 4 columns and 3
rows of pixels.
Table 2.1.a The pixel values for the image shown in Figure 2.1.a.
Figure 2.1.b Simulated spectral curves for a forested pixel as seen by
AVIRIS and Landsat TM sensors.
Radiometric resolution refers to the sensitivity of the sensor to incoming
radiance (i.e., how much change in radiance is required to result in a change
in recorded brightness value?). This sensitivity to different signal levels will
determine the total range of values that can be generated by the sensor.
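Because the signal is quantized digitally, an n-bit sensor can record 2^n distinct gray levels, a relationship we can verify with one line of Python per case:

    for bits in (6, 8, 12):
        print(bits, "bits:", 2 ** bits, "gray levels")
    # 6 bits: 64 levels, 8 bits: 256 levels, 12 bits: 4096 levels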
Traditionally, spatial resolution is considered the minimum distance
between two objects that can be differentiated from one another (Sabins
1997, Jensen 2016). For aerial photography the spatial resolution is usually
measured from a test pattern of numerous white and black lines of a defined
brightness contrast, but varying thickness. Resolution can then be measured
directly from a photograph of the test pattern as the maximum number of line
pairs per millimeter that can be resolved. In practice, photographic resolution
is determined not only by the camera properties, including focal length and
configuration, but also the aircraft height and stability, as well as the
resolution of the film used.
For satellite-borne sensors, resolution is often loosely specified as the
dimension of the ground area that falls within the instantaneous field of view
of a single detector within the imaging array. In this terminology, spatial
resolution is equivalent to unit pixel size in ground-based units. However, it
is important to realize that this equivalency is not the same as the formal
definition of resolution. In fact, using the word resolution to imply the pixel
area is somewhat misleading. Most objects that are similar in size to the pixel
area will not be large enough to be resolved on the image, since the object is
unlikely to be imaged by just a single pixel. Nevertheless, using the pixel
size to refer to resolution is a convenient shorthand.
Sensors also are characterized by the wavelength regions of the
electromagnetic spectrum for which they record data. A single sensor may
record one or more separate measurements per pixel, with each measurement
associated with a particular part of the spectrum, usually referred to as a
band, or sometimes as a channel. An instrument's spectral resolution is
determined by the number of bands, and the width of the electromagnetic
spectrum each band covers. A sensor may detect energy from a wide region
of the electromagnetic spectrum, but have poor spectral resolution, if it has a
small number of wide bands. Another sensor that is sensitive to the same
portion of the electromagnetic spectrum but has many small bands would
have greater spectral resolution. Like spatial resolution, the goal of finer
spectral sampling is to enable the analyst, human or computer, to distinguish
between scene elements. More detailed information about how individual
elements in a scene reflect or emit different wavelengths of electromagnetic
energy increases the probability of finding relatively distinct characteristics
for a given element, allowing it to be distinguished from other elements in the
scene.
Figure 2.1.b illustrates this principle by showing simulated spectral
reflectance curves, or spectral signatures, generated when two sensors,
Landsat Thematic Mapper and AVIRIS, are used to image the same pinyon
pine forest (an evergreen species, common in the US Southwest and in
Mexico). AVIRIS (Airborne Visible/Infrared Imaging Spectrometer) is a
research sensor, developed by NASA’s Jet Propulsion Laboratory (for more
information on AVIRIS see http://aviris.jpl.nasa.gov/). Landsat Thematic
Mapper (TM) is a satellite-borne sensor flown by NASA on a series of
spacecraft since 1982. In the following section (2.1.1), the Landsat program
is discussed in more detail.
It is a subtle but important point that the data recorded by these sensors is
actually radiance (energy) measured at the sensor, not reflectance, which is
what is shown in Figure 2.1.b. The radiance measured at the sensor is a
function of the reflectance of the ground materials, as well as the solar
illumination and atmospheric transmission, which both vary with time of day,
season, and atmospheric properties. Therefore, to make the simulated data
from the two sensor measurements comparable, it is necessary to convert
them to reflectance.
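As an illustration of what such a conversion involves, the Python sketch below applies the documented Landsat 8 Level-1 formula for top-of-atmosphere (TOA) reflectance, in which each DN is rescaled by a per-band multiplicative and additive factor and divided by the sine of the sun elevation angle. The coefficient values shown are placeholders for illustration; the real values are supplied in the MTL metadata file delivered with each scene.

    import numpy as np

    def toa_reflectance(dn, mult, add, sun_elev_deg):
        # rho = (M * DN + A) / sin(sun elevation)
        return (mult * dn.astype(np.float64) + add) / \
               np.sin(np.radians(sun_elev_deg))

    # Placeholder coefficients; real ones come from the scene's MTL file
    # (REFLECTANCE_MULT_BAND_x, REFLECTANCE_ADD_BAND_x, SUN_ELEVATION).
    dn = np.array([[8500, 9200], [7800, 10400]])
    rho = toa_reflectance(dn, mult=2.0e-5, add=-0.1, sun_elev_deg=45.0)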
Both Landsat TM and AVIRIS cover the same broad range of the
electromagnetic spectrum, from 400 to 2,500 nm. However, AVIRIS has
approximately 210 unique, contiguous bands, each 10 nm wide. (In Figure
2.1.b, AVIRIS bands over the atmospheric moisture absorption regions have
been deleted, hence the gaps in the spectrum.) Landsat TM, on the other
hand, has just six bands in the 400 to 2,500 nm region, with a seventh band,
not shown, in the thermal region. The Landsat bands are indicated in the
figure by horizontal black lines. Each line represents a single band, for which
a single radiance value is recorded.
As can be seen from Figure 2.1.b, although both AVIRIS and Landsat capture
the overall shape of the spectrum, the AVIRIS curve has much greater
detail. For example, in the 2,000 to 2,500 nm region, Landsat has just one
band (from 2,080 to 2,350 nm), whereas AVIRIS has approximately 50 bands
across this region. In addition, the AVIRIS sensor has another 50 bands in the
900-1,400 nm region, whereas Landsat misses this region entirely. Note that
the AVIRIS spectrum has some interesting absorption features (lows) in this
region.
The spectral limitations of Landsat may be significant if one were trying to
differentiate very similar land cover types. On the other hand, if your main
interest is differentiating forest from soil, then the Landsat sampling of the
spectrum is more than likely sufficient.
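The way a broad-band sensor such as Landsat TM “sees” a fine-resolution spectrum can be simulated by averaging the fine samples that fall within each broad band. The Python sketch below does this with a synthetic stand-in spectrum and equal weighting of wavelengths within each band; real sensors have non-uniform spectral response functions, so treat this as a simplified illustration. The band limits are the approximate TM reflective band ranges.

    import numpy as np

    # A fine-resolution spectrum: wavelength (nm) and reflectance.
    wl = np.arange(400, 2500, 10)            # ~10 nm sampling, like AVIRIS
    refl = 0.3 + 0.1 * np.sin(wl / 300.0)    # synthetic stand-in spectrum

    # Approximate Landsat TM reflective band limits in nm (bands 1-5 and 7).
    tm_bands = [(450, 520), (520, 600), (630, 690),
                (760, 900), (1550, 1750), (2080, 2350)]

    # Simulate each broad band as the mean of the fine samples it spans.
    tm_refl = [refl[(wl >= lo) & (wl < hi)].mean() for lo, hi in tm_bands]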
2.1.1 Example: Landsat Data
One of the most important series of satellites for civilian remote sensing is
Landsat (Lauer et al. 1997). The Landsat Project, which began in the early
1970s, has incorporated a sequence of satellites that have been placed in earth
orbit. The most recent of the Landsat series of satellites is Landsat 8 (Figure
2.1.1.a), which was launched on February 11, 2013, and orbits the Earth at an
altitude of approximately 438 miles (705 kilometers) with a sun-synchronous
98-degree inclination and a descending equatorial crossing time of 10:11
a.m.
Figure 2.1.1.a Landsat 8 satellite (from: https://www.usgs.gov/core-science-
systems/nli/landsat)
Landsat images are spatially referenced by the Landsat World-Wide-
Reference system (WRS), which comprises 57,784 scenes, each 114 miles
(183 kilometers) wide by 106 miles (170 kilometers) long. Each image
consists of approximately 2 Gigabytes of data. The WRS is based on the orbital tracks of the satellite (paths, in WRS terminology) and arbitrary rows that divide each track into discrete scenes. Thus, Landsat images are usually referenced by WRS path and row.
Landsat-8 extends the temporal coverage of its predecessors: the Multispectral Scanner System (MSS) on Landsats 1-5, the Thematic Mapper (TM) instrument on Landsats 4 and 5 (Landsat 6 was destroyed during launch), and the ETM+ instrument on Landsat-7. Landsat-8 carries two
pushbroom sensors: the Operational Land Imager (OLI) and the Thermal
Infrared Sensor (TIRS). OLI acquires two basic types of data: a single band
panchromatic image and multispectral images comprising 8 bands, each
sensitive to a different wavelength, from the visible, through the near and
shortwave infrared. The TIRS sensor collects images in two bands of the
thermal infrared part of the electromagnetic spectrum.
Landsat-8 sensors offer an improvement over the Landsat-7 Enhanced
Thematic Mapper Plus (ETM+) sensor, carrying 4 extra bands (a
coastal/aerosol band, two Thermal Infrared bands, and a Cirrus band).
Moreover, the Near Infrared band (Band 5) was refined to exclude water
vapor absorption. The radiometric resolution was also improved to 12-bit,
which translates to 4096 potential gray levels compared to 256 gray levels in
previous instruments.
2.1.1.1 Panchromatic Data
Similar to Landsat-7 ETM+, OLI includes a panchromatic band (band 8).
This band is narrower than its predecessor, covering the 0.5-0.68 μm
spectrum and has a 15 meter pixel size. The term “panchromatic”
traditionally referred to black and white photographic film that is sensitive to
the entire wavelength region of visible light. When used, this film was
generally filtered to remove blue wavelengths. Likewise, panchromatic
digital satellite imagery usually excludes blue wavelengths, in order to
minimize the effect of atmospheric scattering. While Landsat 7 ETM+
included a portion of the near-infrared spectrum, OLI covers only the visible
part of the spectrum. Because the signal-to-noise ratio of a sensor is limited by the amount of energy reaching the detector, a panchromatic sensor, which gathers energy over a wide spectral range, can be designed to acquire data at a higher spatial resolution than a sensor that splits the incoming energy into multiple bands (i.e. higher spectral resolution).
Panchromatic imagery is sometimes used in image processing to sharpen or
increase the spatial resolution of the coarser resolution multispectral imagery.
Panchromatic satellite images are also used on their own for mapping
endeavors such as generating elevation data, and are also quite suitable for
detecting the shapes of objects by their boundaries and shadows.
2.1.1.2 Multispectral Data
The Landsat-8 system (OLI and TIRS) collects data in eight multispectral
bands of reflected energy and two bands of emitted energy (Table 2.1.1.2.a).
The spatial resolution is 30 meters for the visible, near-infrared, and shortwave-infrared bands (bands 1-7 and 9), while the thermal-infrared bands (bands 10 and 11) have a spatial resolution of 100 meters (resampled to 30 m for distribution).
Table 2.1.1.2.a Landsat 8 OLI/TIRS bands, spectral resolution, spatial
resolution and equivalency with Landsat 7 ETM+ bands.
* TIRS data is usually resampled to 30 m for ease of comparison to the multispectral bands.
The eight multispectral bands of OLI data are used to discriminate between
Earth surface materials through the development of spectral signatures. The
term spectral signature was coined based on the idea that for each material,
the proportion of incident radiation that is reflected varies by wavelength, and
is a characteristic of that material. The basic premise of using spectral
signatures is that the signatures, or reflectance patterns, may be sufficiently
different to make it possible to distinguish between different classes of
materials. The term spectral signature is, however, somewhat misleading, as in practice spectral signatures have inherent variability that makes it challenging to separate them. Thus, automated identification of surface materials based on spectral signatures is almost never 100% accurate.
In displaying multispectral image bands, we can select combinations of
individual bands that highlight particular types of signatures. Such
multispectral images normally exploit color to provide a visual representation
of the earth’s surface.
2.1.2 Surface (Elevation and Bathymetry)
A Digital Elevation Model (DEM) is a data set that contains information on
the heights or depths of a surface. The format of DEMs can be either a
regular grid of elevations or an irregular set of points. The United States
Geological Survey distributes DEMs for much of the United States through
the National Map (https://nationalmap.gov/).
In the past, elevations were almost exclusively produced, directly or indirectly, by photogrammetric analyses, in which images from different vantage points are compared to determine ground elevations. In the last two decades, two
competing technologies have become very common: interferometric radar
(the DEM used in Section 2.3 was originally developed from interferometric
radar) and lidar. Interferometric radar uses a comparison of two radar images
acquired from very slightly different perspectives, whereas lidar uses the
measured time it takes for a pulse of light to travel from the sensor, to a
reflecting object, and back to the sensor (Maune 2001).
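The lidar calculation itself is simple: the range to the reflecting object is half the round-trip distance traveled by the pulse at the speed of light. A short Python sketch, with an assumed travel time:

    # Lidar ranging: the pulse travels sensor -> target -> sensor,
    # so the one-way range is half the round-trip distance
    C = 299_792_458.0         # speed of light, m/s
    round_trip_s = 6.67e-6    # assumed round-trip travel time, seconds
    range_m = C * round_trip_s / 2.0
    print(f"Range to target: {range_m:.0f} m")   # approximately 1,000 m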
2.2 Satellite Image Display
As discussed in Section 2.1 above, the brightness of a particular pixel in an
image is proportional to the pixel DN value. The DN value in turn is related
to the intensity of incident solar radiation, and the reflectance properties of
the surface material. Thus, a panchromatic image may be interpreted in a
manner similar to that of a black-and-white aerial photograph of the same
area. A multispectral image consists of several bands of image data. For
visual display, each band may be displayed individually as a gray scale
image, or in a combination of three bands as a multispectral color composite
image.
2.2.1 Preparation
For this section, we will be using the Hong Kong Thematic Mapper
multispectral and panchromatic data from the Chap1-4 folder.
If you have not already downloaded the data as described in Section 1.3.1,
please do so now and move the data to a new folder in your workspace (e.g.
C:\RSGuide\Chap1-4).
Start TerrSet. In the previous chapter, we set the project file and working folder; however, to ensure that nothing has changed since we did that work, we will first check that the data are organized as we expect.
◆ ◆ ◆
Check project file and working folder with TerrSet
EXPLORER
1. The TerrSet EXPLORER window is automatically opened
if it was open when you last shut down TerrSet. Therefore, it
may not be necessary to re-open the TerrSet EXPLORER.
However, if it is not open, do so.
2. Click on the Projects tab to ensure the Chap1-4 project is
listed, and the radio button next to that project has been
selected as the current project. If the Editor pane is obscuring
part or all of the Projects information, drag the boundary for
the Editor pane down to create more room for the Projects
pane.
3. Confirm in the Editor pane that the working folder
correctly points to the folder you created with the data you
downloaded (e.g. C:\RSGuide\Chap1-4). (If necessary, return
to Section 1.3.4 to review the procedures to create or edit the
project file.)
4. Click on the Files tab.
5. Check if all the satellite image files (raster) are listed in the
Files pane.
6. If the files are not listed in the Files pane, double click on
the directory name to display the files.*
7. In order to see the full list of files, it may be necessary to drag the boundary of the Metadata pane to make the latter pane smaller. Also, if the space to show the full list of files is still not sufficient, you may have to use the slider bar in order to see the entire list of files.
8. There should be eight .rst files present: a DEM
(hong_kong_dem.rst), one panchromatic image
(etm_pan.rst), and 6 multispectral images numbered 1-5 and
7 (etm1.rst, etm2.rst … etm5.rst, and etm7.rst) (note that
band 6, a thermal band, is not included). In addition, if you
have completed the exercise in Section 1.3.7, there will also
be the hk_etm_all.rgf, a raster group file.
*Note: If you know the files are in the folder but they are not
listed in the Files pane, check under the Filter pane that the
box for the type of file you want to see is checked.
◆ ◆ ◆
If the files you expect are not listed in the Files pane, then the working folder
has not been set correctly. You will need to return to the Projects pane, and
set the correct folder.
2.2.2 Image Display
In this exercise, you will display a single band of satellite data as a gray scale
image, and investigate the nature of a contrast stretch. Panchromatic image
data are most commonly displayed as a gray tone image, just like a black and
white photograph.
◆ ◆ ◆
Displaying a panchromatic image with the DISPLAY
LAUNCHER
Menu Location: File – Display – DISPLAY LAUNCHER
1. Start the DISPLAY LAUNCHER program from the main
menu, or the tool bar.
2. In the DISPLAY LAUNCHER dialog box (Figure 2.2.2.a),
start the process to select a file by clicking on the browse
button (…) in the center left column.
3. A file Pick List window will open. In the new window, if
only the directory is listed, click on the plus sign (+) to list the
files.
4. Select the etm_pan raster file (Figure 2.2.2.b) by either
double clicking on the file name, or clicking once, and then
clicking on OK.
5. Select GreyScale in the Palette File section of the Display
Launcher window. (Figure 2.2.2.a).
6. Click on OK to display the image.*
*Note: You can also double click on the file within TerrSet
EXPLORER to display it with the Default palette.
◆ ◆ ◆
Figure 2.2.2.a Display Launcher window, with parameters chosen for
displaying the etm_pan.rst image.
Figure 2.2.2.b Pick list window. Left: directories can be expanded by clicking on the plus sign. Right: directory with an image file highlighted.
The etm_pan image is a subset of a panchromatic image from the Landsat
Enhanced Thematic Mapper Plus (ETM+) sensor. The image represents
green to infrared radiance (0.5-0.9 μm). This image should therefore at
least somewhat correspond to how we view the world, because humans tend
to see best in the green to red regions of the spectrum. However, you should
be cautious about interpreting this image, because it also includes information
from the near infrared (0.7-0.9 μm), to which our eyes are not sensitive.
As a single band image, this image is best displayed in shades of gray. This is
true of any single band image. This may be confusing, because you might
feel, for example, that an image that represents red radiance should perhaps
be displayed in shades of red. Although colors are indeed used when multiple
bands are displayed in a single composite image, for the display of single
band images, gray shades should be used. This is at least in part because the eye is much more sensitive to brightness variations in shades of gray than to brightness variations within a particular color hue.
Adding further confusion to this issue of how to represent a single band of
satellite data, the TerrSet default is to display all single band images with the
IDRISI Default Quantitative palette. A palette in TerrSet terminology is a file
that specifies the relationship between the pixel DN values and the colors or
gray shades used to represent those pixels on the monitor. The IDRISI
Default Quantitative palette is a rainbow of colors from blue to red, and although it is useful for showing patterns, it is not ideal for most raw images, which are best thought of as analogs to black and white photographs. It is for this reason that we specifically selected the GreyScale palette file in displaying the image; you should always make a point of using GreyScale for single band images, unless you have a specific reason not to.
Examine the etm_pan image that you have displayed (Figure 2.2.2.c) and note that the image is dark, with poor contrast. The image is not optimal for visual interpretation. This problem arises because the DISPLAY
LAUNCHER automatically applies a stretch to an image based on the
minimum and maximum values of the image. However, most images have a
rather limited range of values, with a few outliers that are either very dark or
very bright. Thus, the stretch applied in this case is not optimal. In the next
section, we will investigate this idea of the contrast stretch, including why it
is generally necessary and what it means in practice.
Figure 2.2.2.c Landsat panchromatic image based on the default stretch.
2.2.3 Image Statistics and a Simple Contrast
Stretch
A common format for optical satellite data is for the DN values to be scaled
over a potential range from 0 to 255. This range was chosen because it is an
efficient way to store numbers in a binary computer file. Computers use a
counting system based on the powers of 2, unlike our conventional counting,
which is based on powers of 10. The smallest unit of number representation on a computer is the bit, a location that holds a value of 0 or 1. Bits are grouped in sets of eight, called bytes. This makes an 8-bit number (2⁸, or 2×2×2×2×2×2×2×2 = 256) a convenient unit of computer storage.
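The arithmetic is easy to verify. A two-line Python sketch that also anticipates the 12-bit data discussed in Section 2.1.1 (4,096 gray levels):

    # Number of distinct values representable with a given number of bits
    for bits in (8, 12):
        print(f"{bits}-bit data: {2 ** bits} gray levels (0 to {2 ** bits - 1})")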
Within this 0-255 range, the optical settings of the sensor are designed to
cover the broadest range of possible landscapes, from highly reflective snow
and beach sand, to very dark material such as basalt rock, water and
shadow. However, any individual scene is unlikely to include the full range
of landscape cover types. Thus, the range of DN values in a single image is
likely to be rather limited. In this section, you will adjust the contrast in the
image so that the small range of DN values found within a scene is mapped to
a wider range of display values, utilizing the full range of 256 brightness
levels available for viewing on the computer monitor.
The frequency histogram produced by HISTO is shown as a bar graph (Figure 2.2.3.a). The vertical axis shows the number of pixels in the image that have each particular DN value, as indicated on the horizontal axis.
◆ ◆ ◆
Analyzing the image data distribution with HISTO
Menu Location: File – Display – HISTO
1. Start the HISTO program from the main menu or the
toolbar.
2. In the HISTO dialog box, click on the browse button (…)
next to the text box for the Input file name.
3. The Pick list window will open. Use it to select the
etm_pan data set.
4. Set the class width to 1.
5. Leave the remaining parameters set at their default values.
6. Click on OK.
◆ ◆ ◆
Note that Figure 2.2.3.a shows that the full 2⁸ (0-255) range is not represented in this image, and that the population is not a normal Gaussian distribution (i.e. does not follow a bell-shaped curve). In fact, many satellite images reveal a bimodal frequency distribution, particularly if the scene contains both land and water areas. Also, the minimum and maximum values, 0 and 161, lie well outside the range of the majority of pixel values.
Notice how the graph can be updated dynamically. You can convert the graph to a cumulative plot or a line graph by clicking on the appropriate options in the Histogram window. You can also set new maximum and minimum values for the graph by editing the text boxes labeled Display ... from ... to ..., and then pressing the Update button. For example, you might
enter 18 and 80 in these boxes, respectively, and by doing so, obtain a new
graph that displays just that part of the histogram.
Figure 2.2.3.a Histogram Menu and Histogram graphical display of
etm_pan.rst image.
TerrSet has persistent windows, which means that the dialog boxes that
control programs do not close once the program has run. Therefore, the
HISTO dialog box should still be open in your TerrSet workspace and
available to be run again, though you may need to move the Histogram
window to one side to see the dialog box.
◆ ◆ ◆
Analyzing the image data distribution with HISTO (cont.)
7. In the HISTO dialog box (i.e. not in the Histogram display
window, but the original HISTO dialog box that produced the
Histogram window), select the radio button for Numeric
output, instead of the default Graphic.
8. Click OK.
9. The numeric output will be displayed in a new window,
Module Results. If this window is not visible, try using the
slider at the bottom of the TerrSet window to explore other
parts of the main pane.
◆ ◆ ◆
This time, because we selected Numeric as the output format, the program
will generate a text file with the number of pixels for each DN value (Figure
2.2.3.b). From this file, we can learn that there are only 15 pixels with a value
of zero, out of a total of 17.7 million pixels. Furthermore, the cumulative
frequency of all pixels only reaches approximately 0.0005 at a DN value of
17. This means that if you sum all the pixels with a value of 17 or less, they
would comprise less than 0.05% of the image (0.0005 x 100).
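For readers who script, the cumulative-proportion calculation is easy to reproduce. A minimal NumPy sketch, using a randomly generated stand-in array in place of the actual etm_pan band:

    import numpy as np

    # Stand-in for the etm_pan band: an 8-bit array of DN values 0-161
    dn = np.random.randint(0, 162, size=(4000, 4400), dtype=np.uint8)

    counts = np.bincount(dn.ravel(), minlength=256)   # pixels at each DN value
    cum_prop = np.cumsum(counts) / counts.sum()       # cumulative proportion

    # For the real etm_pan band, this value is approximately 0.0005
    print("Proportion of pixels with DN <= 17:", cum_prop[17])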
Figure 2.2.3.b Histogram numeric output for the etm_pan.rst image.
Use the slider bar on the right of the Module Results window showing the
numeric histogram data to observe the DN value associated with a cumulative
proportion of 0.9985. This should be a DN value of 73. We can therefore
summarize our findings to say that only 0.20% of the image DN values lie
outside the range 18-73 (0.05% is below the range and 0.15% is above the
range).
Now, using the graphic and numeric histogram data as a guide, let’s develop
an enhancement of the image contrast that makes the patterns in the majority
of the image much clearer. This procedure is called a contrast stretch. In
carrying out our contrast stretch, we have to accept that we will lose
discrimination of changes in DN values at the extremes of the distribution,
such as between DN values 0 and 18, and between 73 and 161. However,
because there are so few pixels with such values, overall the image will look
much clearer.
At this stage you can close the HISTO dialog box, and the graphic and
numeric data windows that the program generated.
◆ ◆ ◆
Contrast Enhancement through modifying the Display
settings in the Composer Window
1. If you have closed the image displayed in Section 2.2.2,
redisplay the etm_pan.rst image using the DISPLAY
LAUNCHER and a GreyScale palette file (see Section 2.2.2
for further instructions, if necessary.)
2. In the Composer dialog box (this window is automatically
also opened in the Workspace when an image is displayed),
click on the button for Layer Properties (see icon highlighted
in Figure 2.2.3.c).
3. The Layer Properties dialog box will open (Figure 2.2.3.c).
Note that the window has three tabs, with the Display
Parameters tab as the default.
4. Use the sliders Display Min and Display Max to set the
minimum to 18 and the maximum to 73, or alternatively,
simply type in the appropriate numbers in the text boxes to
the right of the sliders.
5. Click on the button for Apply, Save and then Close.
◆ ◆ ◆
Figure 2.2.3.c The contrast-stretched etm_pan.rst image and the Layer
Properties dialog box.
Note how the image contrast has improved greatly (Figure 2.2.3.c). This is
because we have assigned a black color on the screen to 18, instead of 0, and
white to 73, instead of 161. Shades of gray between black and white are assigned linearly to DN values from 18 to 73. Thus, middle gray corresponds to a DN value of about 45 (half way between 18 and 73), instead of 80, as was the case when we were using a 0-161 stretch. If we refer back to the histogram of the
data distribution, we can see that 45 is within the main data distribution,
whereas 80 is greater than nearly all the values in the image.
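The display mapping just described is a simple linear function with clipping at the ends. A minimal NumPy sketch of what the display engine is doing (a simplified stand-in, not TerrSet's actual code), using the 18 and 73 limits found above:

    import numpy as np

    def linear_stretch(dn, low=18, high=73):
        # Map DNs in [low, high] linearly onto display values 0-255;
        # DNs below low saturate to black (0), DNs above high to white (255)
        scaled = (dn.astype(np.float64) - low) / (high - low) * 255.0
        return np.clip(scaled, 0, 255).astype(np.uint8)

    print(linear_stretch(np.array([0, 18, 45, 73, 161])))  # [  0   0 125 255 255]

Note that a DN of 45 maps to a display value of 125, close to middle gray, consistent with the discussion above.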
The contrast adjustment we have made so far is not applied to the original
data, but only to the mapping function for display of the data on the screen.
This is a key concept, because it means that the original data are unchanged,
which may be important if we are to apply further processing.
By clicking on the button for Save, we are storing the minimum and
maximum values in the metadata. This means that every time the image is
displayed, the enhancement is applied automatically, through the DISPLAY
LAUNCHER’s Autoscale option. In the future, when this image is displayed,
the program will automatically use these values to determine the contrast
stretch. In the next exercise, we will check the metadata to see where the
values are recorded, and then redisplay the image to observe the automatic
image stretch.
Figure 2.2.3.d Using the TerrSet EXPLORER to view Display min and
Display max.
◆ ◆ ◆
Contrast Enhancement (cont.)
6. If the TerrSet EXPLORER window is not open, click on the
TerrSet EXPLORER icon.
7. Select the Files tab.
8. If the files are not listed in the Files pane, double click on
the directory name to display the files.
9. In the Files pane, click on the etm_pan.rst image.
10. In the Metadata pane, drag the slider down until the lines
labeled Display min and Display max are visible.
11. Confirm that the Display min text box shows a value of 18
(upper arrow in Figure 2.2.3.d), and the Display max text box
a value of 73.
12. Note that it is possible to change these values by simply
typing new values in the text boxes.
13. Close the image in the display window, if it is not already
closed.
14. Redisplay the etm_pan.rst image again. Be sure to select
the GreyScale palette file, as with any single band satellite
image.
◆ ◆ ◆
The redisplayed image should show a good contrast, as it did when we
applied the contrast stretch manually. In this case, however, the contrast is
applied automatically.
End this section by closing any windows that are still open in the TerrSet
workspace. You can do this by clicking on the menu Window and then select
the option Close all map windows.
2.2.4 Creating False Color Composite
Images
In this section, we will create a standard false color infrared composite of
three Landsat ETM image bands (i.e. the ETM+ green band displayed in blue
on the screen, the red band displayed as green, and the near infrared band
displayed as red).
The concept of a false-color image stems from false color infrared aerial
photography. When infrared photography was first developed, an infrared
layer was substituted for the blue-sensitive layer in the film. However, it was
noticed that the vegetation patterns were more obvious if the near infrared
was displayed in red colors, and the red and green layers were depicted in
green and blue shades, respectively. Today, we refer to any 3-band composite
image that uses a color assignment different from that of a natural color
composite (i.e. blue, green and red displayed as blue, green and red) as a false
color composite. The specific combination we will produce, however, is
sometimes known as a standard or traditional false color composite, because
it is so common.
◆ ◆ ◆
Creating a standard false color composite image
Menu Location: File – Display – COMPOSITE
1. Start the COMPOSITE program.
2. In the COMPOSITE dialog box, select the browse icon (…)
next to the text box for the Blue image band. (Figure 2.2.4.a)
3. A Pick List window will open. Select the etm2 raster file.
Click twice on the file, or once and click on OK.
4. Repeat for the Green and Red image bands, selecting the
files etm3 and etm4, respectively. (Note: you can also type
the names of the files directly in the text boxes.)
5. Enter the Output image filename in the text box provided:
etm234_composite.
6. Select the radio button for Contrast stretch type as Linear
with saturation points.
7. In the Output type section, select the radio button for
Create 24 bit composite, with original values and stretched
saturation points. (This is the default.)
8. Click OK to create and display the false color composite.
◆ ◆ ◆
Figure 2.2.4.a COMPOSITE dialog box.
Linear with saturation stretch is much better than a simple linear stretch that uses the minimum and maximum values as the end points of the stretch. As we discovered in the previous section, the minimum and maximum values found in the raster data set are not always an effective measure for the optimal display. The linear stretch with saturation instead determines the range that encompasses a certain percentage of the data (in this case 98%), and uses this range in the stretch – much like our determination of the best maximum and minimum values to use for the panchromatic display in Section 2.2.3. Note that when the image has no-data values or background values stored as zeros, these values will otherwise be included in the contrast stretch of each band. Checking the box Omit zeroes from calculation in stretch will ignore background values of zero, and will give a better contrast in the color composite.
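A linear stretch with saturation points is straightforward to express in code. The sketch below is a simplified stand-in for what COMPOSITE does, not its actual algorithm: it finds the percentile cut points that enclose the central portion of the data (here 98%, i.e. the 1st to 99th percentiles), stretches linearly between them, and lets the tails saturate; zeros can optionally be omitted:

    import numpy as np

    def saturation_stretch(band, lower_pct=1.0, upper_pct=99.0, omit_zeros=True):
        # Stretch linearly between percentile cut points; tails saturate to 0/255
        values = band[band > 0] if omit_zeros else band
        low, high = np.percentile(values, [lower_pct, upper_pct])
        scaled = (band.astype(np.float64) - low) / (high - low) * 255.0
        return np.clip(scaled, 0, 255).astype(np.uint8)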
After you have clicked on the OK button in the COMPOSITE dialog box,
TerrSet will generate the composite image and automatically display the
result.
Figure 2.2.4.b Landsat false color composite image of Hong Kong, with
bands 2,3,4 as blue, green, red (B,G,R).
Try bringing up the metadata for the false color composite. The simplest way
to do this is through the Composer window, which is always present in the
TerrSet workspace when an image is displayed. Click on the Layer
Properties button in the Composer window, and the Layer Properties
window will open. The tab for the Display Parameters will show the
stretches applied to each image band. Alternatively, you can use the TerrSet
EXPLORER, as we have learned to do in Section 2.2.3. Note, however, if
you use the TerrSet EXPLORER, you may need to refresh the Files window
pane. This can be done by right clicking in the Files pane, and selecting
Refresh from the pop-up menu. You will need to scroll down in the Metadata
pane to see the triplet of values (one for each band) for both the Display
min and Display max.
Note that the COMPOSITE module automatically enters the Display
Minimum and Display Maximum values in the metadata file of the color
composite. This way, when you display the composite raster image, it will
always be enhanced optimally.
2.2.4.1 Create a non-standard false color composite
False color composites can be generated with many possible combinations of
bands. Bands are assigned the blue, green and red colors depending on the
landscape feature that needs to be highlighted. For example, the green, near
infrared, shortwave infrared (displayed as Blue-Green-Red) color composite
highlights vegetation and vegetation moisture. Since water absorbs in all three bands, it appears black in this color composite. When sediments are present, however, water will appear blue, since sediments reflect in the visible part of the spectrum. Vegetation reflects strongly in the near infrared, and therefore appears green in this color composite. Vegetation also reflects in the shortwave infrared, where the amount of reflection depends on the moisture content. Dry vegetation reflects more in this part of the spectrum, and will therefore appear yellow (a combination of green and red).
◆ ◆ ◆
Creating a false color composite image
Menu Location: File – Display – COMPOSITE
1. Start the COMPOSITE program.
2. Open TerrSet EXPLORER and locate the etm3 raster file.
Select it and drag it to the Blue image band
3. Repeat for the Green and Red image bands, selecting the
files etm4 and etm5 files, respectively.
4. Enter the Output image filename in the text box provided:
etm345_fcc
5. In the section labeled Output type, select the radio button
for Create 24 bit composite, with stretched values.
6. Figure 2.2.4.1.a shows the COMPOSITE window with
parameters specified.
7. Click OK to create and display the false color composite
(Figure 2.2.4.1.b).
8. Close the COMPOSITE window, once the new image has
been created.
◆ ◆ ◆
Figure 2.2.4.1.a The COMPOSITE window.
Figure 2.2.4.1.b Non-standard false color composite of Hong Kong.
2.2.5 Map Annotation
We can also design and create maps within the Display window. Maps not
only show a representation of the Earth’s surface, such as a satellite image,
but also include useful and important information such as the reference grid,
the north direction, and scale. In addition, maps usually have a title and
provide information about the map projection and data source.
We will now use the Hong Kong false color image as the basis for creating a
map composition (Figure 2.2.5.a). The first step in creating a map is to resize
your Display window so that enough background around the image is
available to accommodate our planned map elements.
◆ ◆ ◆
Map composition construction
Menu Location: File – Display – DISPLAY LAUNCHER
1. Open the DISPLAY LAUNCHER from the main menu or
icon bar.
2. Select the raster file etm234_composite.
3. Click on the OK button to display the image.
4. Your image should have a title automatically displayed. If
not, it means you did not enter a title when you created the
image. You should return to Section 2.2.4, or enter the title
through the Metadata pane in TerrSet EXPLORER, and then
redisplay the image.
5. To resize the display window, make sure that the auto
arrange is not selected.
6. Resize the Display Window by clicking on one of the
corners, and dragging the window to an appropriate size to
encompass the annotation planned (Figure 2.2.5.a).
7. Double click on the image within the Display window.
8. Click and drag the image to center it horizontally beneath
the title. The image display area is fixed so you may need to
zoom out if you do not want your image to be cropped.
9. Double click on the title in the Display window, to select
the title. The title will be boxed by a series of black squares,
indicating the title box has been selected.
10. Click and drag the title box to a position centered above
the image.
11. When you are done, click elsewhere in the Display
window to de-select the title box.
◆ ◆ ◆
Figure 2.2.5.a The re-sized display window, with space for planned map
elements.
We will now create the other components of our map including the map grid,
a north arrow, and a scale bar. The tools to create these features are found in
the Map Properties window.
◆ ◆ ◆
Map composition construction (cont.): Applying a map
grid
12. Open the Map Properties window, by either right-clicking
on the image in the Display window, or by clicking on the
Map Properties button located in the Composer window
(Figure 2.2.5.b).
13. Select the Map Grid icon to open the Map Grid pane
(Figure 2.2.5.b).
14. Click to place a check mark in the box labeled Visible.
15. In each of the two text boxes labeled Increment X and
Increment Y, enter 10000.
16. Change the value in the Decimal places text box to 0.
17. In the Text Options area of the Map Grid pane, select the
radio button for Number inside.
18. Click on the button for Select Font.
19. In the Font window, use the pull-down menu under the
heading Color to select the color labeled Silver. We are
selecting this color as a relatively neutral color that has
sufficient contrast with the image to be discernable.
20. Click on the OK button in the Font window, and OK in
the Map Properties window to apply the grid and close the
Map Properties window.
◆ ◆ ◆
The changes to the map are made immediately after closing the Map
Properties window (Figure 2.2.5.c). Let’s continue, and create all the
remaining elements necessary for our map.
Figure 2.2.5.b The Map Properties dialog box with Map Grid pane selected.
Figure 2.2.5.c Initial map composition.
◆ ◆ ◆
Map composition construction (cont.): Applying north
arrow, scale bar, additional text, and background color
21. Click on the Map Properties button in the Composer
window.
22. Select the North Arrow icon.
23. Within the North Arrow pane, click in the Visible check
box.
24. Select the north arrow type you wish to use from the four
options. The option you select will be indicated by a black
box.
25. Click on the Scale Bar tab.
26. Within the Scale Bar pane, click on the Visible check box.
27. The Units box should indicate Meters.
28. Type 20000 in the Length (in Ground Units) text box.
29. Click on the Titles tab.
30. In the Titles pane, click on the Visible check box for both
subtitle and caption text.
31. Type Landsat 7 ETM+ Bands 2,3,4 as B,G,R in the
Subtitle text box.
32. Click on the Select Font button below the Subtitle text
box.
33. In the Font window, set the Font as Arial, Font Style as
Bold, color as Black and Font Size as 14.
34. Click OK to close the Font window.
35. In the Titles pane, type UTM Zone 49N WGS84 Datum
in the Captions text box.
36. Use the Select Font button below the Captions text box to
set the Font as Arial, Font Style as Regular and Font Size as
8.
37. Within the Map Properties dialog box, select the
Background tab.
38. Within the Background pane, click on box labeled Map
Window Background Color.
39. A Color dialog box will open. Select the white color chip
in the Basic colors section of the window. Click OK to close
the Color dialog box.
40. Click in the check box for Assign map window
background color to all map components.
41. Click on OK.
◆ ◆ ◆
Note that many of the map components that we just created are located in
arbitrary positions that obscure parts of the map. We must now move these elements to more organized and aesthetically pleasing locations.
Figure 2.2.5.d The completed Hong Kong Landsat Satellite Image Map.
◆ ◆ ◆
Map composition construction (cont.): Arranging map
elements
42. Double click on the North arrow.
43. Click and hold the cursor over the highlighted North
arrow, and move the arrow to the lower left corner. You can
resize the arrow, if necessary, by dragging one of the corners
of the highlighted box.
44. Double click on the scale bar, so that the scale bar is
selected.
45. Click and drag the highlighted scale bar to a location near
the bottom-center of the map.
46. Double click on the caption text, UTM Zone…
47. Drag the text to a location at the bottom right corner of the
image.
48. Double click on the subtitle text, Landsat 7…
49. Drag the text to a location at the bottom-center of the
map, below the scale bar.
◆ ◆ ◆
Congratulations! You have now made your first TerrSet map. The map
composition should look something like the map in Figure 2.2.5.d.
There are several options for saving your composition. The simplest is to save it in the map composition (MAP) file format; another is to save it as a graphic file, such as a Windows Bitmap (BMP). We will save our composition in both formats.
◆ ◆ ◆
Map composition construction (cont.): Save to MAP
format
50. Select the Save button in the Composer window.
51. The Save Composition dialog box will open.
52. Click on the Save composition to MAP* file radio button,
if it is not already selected.
53. In the text box for Output file name, enter Hong Kong
ETM map.
54. Click on OK. The Save Composition dialog box will
close.
*Note: the map file format does not store the images. In order to re-display this map composition, all of the files used need to be in the working and/or resource folders.
◆ ◆ ◆
Once saved, a map composition can be redisplayed at any time with the
DISPLAY LAUNCHER, as we will see in Section 2.2.6 below.
◆ ◆ ◆
Map composition construction (cont.): Save to BMP
format
55. Re-open the Save Composition dialog box by clicking on
the Save button in the Composer window.
56. Click on the Save to Windows bitmap (BMP) radio button.
57. In the text box for Output file name, enter Hong Kong
ETM map.
58. Click on OK. The Save Composition dialog box will
close.
◆ ◆ ◆
The BMP file provides a useful graphic for importing into reports and
presentations. Note that you can also use the Copy to clipboard option in the
Save Composition dialog box, to paste a figure directly into another Windows
application. However, in our experience this is not always so reliable, and
sometimes the resulting image has artifacts. If you find this problem, then
you should choose the BMP option.
To end this section, close all the files in the TerrSet workspace.
2.2.6 Printing a Map
The dialog box for printing maps or images from the Display window is
accessed through the Composer window. Importantly, the print function is
the only one where you can precisely control the scale of your map. We will
now print our Hong Kong image map as a scaled print at 1:500,000.
◆ ◆ ◆
Printing a map
Menu Location: File – Display – DISPLAY LAUNCHER
1. Start the DISPLAY LAUNCHER.
2. In the File Type section of the DISPLAY LAUNCHER
dialog box, select the Map composition radio button.
3. Click on file pick button (…)
4. Select the Hong Kong ETM map file we created in the
previous section (2.2.5).
5. In the DISPLAY LAUNCHER window, click on OK.
6. Select the Print button in the Composer window.
7. The Print Composition dialog box will open.
8. Click on the Printer Setup button. A Printer Setup dialog
box will open. Make sure that your printer is set for Portrait
orientation. Close the Printer Setup dialog box.
9. In the Print Composition dialog box, in the section labeled
Rendering, click on the radio button for Highest Quality.
10. In the Scaling section, select the radio button for Print to
scale.
11. In the Scale: 1/ text box, type in 500000
12. Click on the Print button.
◆ ◆ ◆
TerrSet will render and send to your printer a map at the precise scale of
1:500,000. Measure the scale bar to check (it should be exactly 4.0 cm).
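The expected length follows directly from the scale: 20,000 ground meters divided by 500,000 is 0.04 m, or 4.0 cm on paper. In code:

    ground_m = 20_000   # scale bar length, in ground units
    scale = 500_000     # map scale denominator (1:500,000)
    print(f"Printed scale-bar length: {ground_m / scale * 100:.1f} cm")  # 4.0 cm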
Under printer setup, if available in your Windows configuration, you can
save the map composition as a PDF by selecting the option Microsoft Print to
PDF.
Figure 2.2.6.a Print Composition dialog box.
2.3 3-D Visualizations
TerrSet has a number of modules that allow the user to create three
dimensional displays and visualizations. Typically, these modules require the
use of digital elevation data (DEM) as the basis for the orthographic view.
The surface represented can be used to control a palette, or a second file may
be draped over the surface. A common choice for the drape files is a satellite
image, which results in a perspective view of the Earth’s surface. These
perspective views can be animated to create fly-through visualizations. We
will create a visualization over Hong Kong.
2.3.1 Preparation
Let us first examine the DEM of Hong Kong supplied with this manual. The
DEM is a subset of the nearly global DEM derived from NASA's Shuttle Radar Topography Mission (SRTM), an international project that obtained high-resolution digital elevation data on a near-global scale. SRTM consisted of a specially modified radar system (based on Spaceborne Imaging Radar-C, SIR-C, hardware) that flew onboard the Space Shuttle
Endeavour during an 11-day mission in February of 2000. The SRTM DEM
was derived from a process known as radar interferometry. In radar
interferometry, two images are made of the same scene by two separate radar
antennas, separated in the "range" direction, perpendicular to the line of
flight. The two radar images have very slight differences between them.
These differences allow the calculation of the elevation of the ground surface
that was imaged by the radar system. The SRTM experiment aboard the
shuttle consisted of one radar antenna in the shuttle payload bay, and a
second radar antenna attached to the end of a mast extended 60 meters (195
feet) out from the shuttle.
There are many sources for the SRTM DEM data, including the USGS Earth
Explorer site (https://earthexplorer.usgs.gov/).
One of the most straightforward sources of SRTM is provided by The
Consortium for Spatial Information (CGIAR-CSI). CGIAR maintains a 90m
SRTM database of the entire world in mosaicked 5 degree by 5 degree tiles.
The data is available for download at http://srtm.csi.cgiar.org in both ArcInfo
ASCII and GeoTIFF formats. A data search can be accomplished either by
manually inputting the geographic coordinates for the area of interest, or by
selecting the 5 degree tile on a graphical interface.
If you download your own data set, you would need to (1) import the DEM
into TerrSet (for example, using the program GEOTIFF/TIFF, if the image is
in TIFF format), (2) reproject the data to the same projection and pixel size as
the imagery you are using (for example using PROJECT), and then (3) subset
the data to the same dimensions as your image data (for example, using
WINDOW).
◆ ◆ ◆
Display DEM data
Menu Location: File – Display – DISPLAY LAUNCHER
1. Open the DISPLAY LAUNCHER.
2. Click on file pick button (…).
3. Select hong_kong_dem.
4. Click on the pick button (…) in the Palette file section of
the DISPLAY LAUNCHER.
5. Select the \TerrSet\Symbols folder. (Note that the entire
path will be displayed, not just the final folder names as
indicated here. The specifics of the entire path will depend on
your installation of TerrSet.) (Figure 2.3.1.a.)
6. Scroll down and select the terrain palette.
7. Click on OK to close the Palette window.
8. In the DISPLAY LAUNCHER, click on OK to display the
image.
◆ ◆ ◆
Figure 2.3.1.a Selecting a palette file from within the \TerrSet\Symbols
folder.
Note that the data are a bit noisy with some data drop-outs (Figure 2.3.1.b).
The data has yet to be quality checked by NASA and is delivered “as is.” The
terrain palette also shows the zero elevation value as a dark green color. We
have included a custom palette along with the DEM data that depicts the zero
value (sea level) as a dark blue color. Let’s learn how to change the image
palette file of a displayed image.
Figure 2.3.1.b The Hong Kong digital elevation model.
◆ ◆ ◆
Display DEM data (cont.): Change the palette file
9. In the Composer window, select the Layer Properties
button.
10. The Layer Properties window will open, with the Display
Parameters tab selected.
11. In this new window, click on the Advanced
Palette/Symbol Selection button.
12. The Advanced Palette/Symbol Selection dialog box will
open. In this window, click on the file pick button (…) next
to the Current Selection field.
13. Select the terrain_water palette, which will be listed
under the RSGuide/Ch1-4/ folder, which is your main
working folder for the current TerrSet project.
14. Close the File pick window by clicking on OK.
15. Close the Advanced Palette/Symbol Selection dialog box
by clicking on OK.
16. Click on the Apply button in the Layer Properties
window.
◆ ◆ ◆
The DEM should now have a dark blue color for the zero elevations (Figure
2.3.1.c).
Figure 2.3.1.c The Hong Kong digital elevation model with an alternate
palette.
2.3.2 ORTHO Perspective Display
In this section, we will create an orthographic display of Hong Kong using
the DEM and Landsat false color raster files. If you were to check the
metadata file of the Hong Kong DEM raster data set, you would observe that
the image dimensions are exactly equal to that of the Hong Kong Landsat
raster files. This is a requirement for the orthographic displays and
visualizations if we wish to use a drape image. Another requirement for the
drape image is that it is either a composite image or a raster file with byte (binary) or integer data in the range 0-255. Luckily, the included data sets meet all these requirements, so we can continue and make our displays.
◆ ◆ ◆
Orthographic display
Menu Location: File – Display – ORTHO
1. Start the ORTHO program from the main menu.
2. In the ORTHO dialog box, double click in the Surface
image text box. (Figure 2.3.2.a)
3. In the resulting pick list, double click on the file
hong_kong_dem.
4. In the ORTHO dialog box, double click in the Use drape
image text box.
5. In the resulting pick list, double click on the file
etm234_composite.
6. In the Output image text box, type in the name of a new
file: HK_ortho_1.
7. Click in the Title check box.
8. The default resolution is 640 x 480 pixels. If you have a
large computer monitor, you can select one of the larger
output file sizes by clicking on the appropriate radio button
under Output resolution.
9. Click on OK.
◆ ◆ ◆
Figure 2.3.2.a The ORTHO dialog box.
Figure 2.3.2.b Orthographic image of Hong Kong with a vertical
exaggeration factor of 1.
The display window appears with your new orthographic display of Hong Kong, viewed from a compass direction of 50 degrees (northeast) (Figure 2.3.2.b). One aspect of the display is immediately apparent: the vertical relief looks very unnatural, due to an exaggerated display of the terrain. The ORTHO module does not employ metric scaling, which means that the program does not take into account the units of the DEM values. Therefore, the degree of exaggeration is entirely qualitative. The most
expedient way to control the degree of exaggeration is to set the viewing
angle, and then alter the exaggeration factor to create an appropriate
perspective. Specifying an exaggeration factor value of “0.5” will halve the
amount of exaggeration, while a value of “2” will double it.
Let’s alter our display to create a better orthographic display by modifying
the exaggeration factor. We’ll try a value of 0.4.
◆ ◆ ◆
Orthographic display (cont.)
10. Close the Display window with the orthographic view.
11. Assuming you have the persistent windows option
specified in TerrSet, the ORTHO dialog box will still be open.
If not, restart the ORTHO dialog box, and enter the same
parameters as above.
12. Change the last digit of the file name of the Output image
from 1, to a 2, so the name will now be HK_ortho_2.
13. Type 0.4 in the Vertical Exaggeration Factor box (replace
the default value of 1)
14. Click on OK.
◆ ◆ ◆
The resulting image with a reduced vertical exaggeration (Figure 2.3.2.c)
looks a little more realistic, but you may want to try different values,
depending on your preferences.
Figure 2.3.2.c. Orthographic image of Hong Kong with a vertical
exaggeration factor of 0.4.
2.3.3 Fly-Through Visualization
The FLY-THROUGH program lets the user interactively control the viewing perspective of an orthographic display in real time. This creates a perception of “flying” above and around the surface depicted. Such displays are commonly used as engaging animations that help the viewer gain an appreciation for the area depicted.
We will use the same data sets as those used in creating our orthographic perspective of Hong Kong. When you open the FLY-THROUGH module, pay special attention to the keypad commands, as these are the only means of controlling the display once it is activated.
◆ ◆ ◆
Fly-through interactive display
Menu Location: File – Display – FLY-THROUGH
1. Start the FLY-THROUGH program from the menu or the
icon bar.
2. In the FLY-THROUGH dialog box, click on the pick button
(…) to the right of the text box for the Surface image. (Figure
2.3.3.a.)
3. Select the hong_kong_dem file.
4. Click on pick button (…) to the right of the Use drape
Image text box, and select the etm345_fcc file.
5. Select the Slow button in the Initial velocity section.
6. Use the slider bar to reduce the Exaggeration factor to
25%.
◆ ◆ ◆
Figure 2.3.3.a FLY-THROUGH dialog box.
The lower left-hand section of the Fly-Through dialog box shows graphically
the keypads that control the view angle, elevation, and direction of movement
as you move through the image display (Figure 2.3.3.a).
When we start the FLY-THROUGH display, the initial position is located to
the southwest of the surface area. We will try to fly above Hong Kong,
making several turns as we go. See if you can use the description in the
instructions to fly over the Hong Kong airport, and then over the main part of
the city.
Note: If the FLY-THROUGH program is unable to run because of memory
issues, try following the suggestions in the TerrSet on-line Help. There is a
direct link to the Help for this program in the FLY-THROUGH dialog box.
◆ ◆ ◆
Fly-through interactive display (cont.)
7. In the FLY-THROUGH window, click on OK.
8. A new window, a FLY-THROUGH viewer, should open.
9. Hold down the Up Arrow Key and watch the display move
in the FLY-THROUGH viewer.
10. Tap the Up Arrow Key a few times and note how you can
move incrementally as well as smoothly.
11. Continue holding the Up Arrow Key as you advance, until
some of the first islands you reach have disappeared from
view.
12. Press the Page Down key to change your view angle
downwards.
13. Press and hold the Left Arrow Key to fly to the North West, so
that the airport should pass on the right.
14. Now use the Right Arrow key to turn North East to fly
over the airport.
15. Continue bearing right a little further (this will take you
due East).
16. Press the Ctrl Key to decrease elevation.
17. Fly East across the main built-up area of Hong Kong, until
you reach the far side of the image.
◆ ◆ ◆
Figure 2.3.3.b A FLY-THROUGH perspective of the Hong Kong airport.
2.3.4 Recording and playing back Fly-
Through movies
Now that you are familiar with the FLY-THROUGH module, you are ready
to become your own fly-through movie director! There are two broad ways of
creating fly-through movies in TerrSet:
1. Save a path, which the FLY-THROUGH program can use to replay a
fly-through.
2. Record a fly-through as an .AVI file.
The path approach is attractive because it is very simple: the path is a small text file that takes only minimal disk space. You can also create the text file manually, which gives you tremendous control over creating new movies.
On the other hand, the AVI file recording is also very simple. Once you have
created the file, you can play it in any AVI player, such as the Windows
Media Player. TerrSet also offers an AVI player, called MEDIA VIEWER.
Like many uncompressed video files, AVI files are large, which makes them
hard to email. Nevertheless, the advantage of creating an AVI file is that,
once you have created it, you can play it without access to TerrSet.
In this exercise, we will record both a FLY-THROUGH path and an AVI movie. As the base image for our movie, we will use the non-standard false color composite (etm345_fcc) created with the COMPOSITE program in Section 2.2.4.1. We will also take advantage of a shortcut for selecting files: instead of clicking on the file browse button (a button with “…” on it), we will simply double click in the text box to open the pick list, and then double click on a file to insert that file name in the text box. This is demonstrated in the next sub-section.
2.3.4.1 Save a fly-through path
We will now create a new fly-through, following similar procedures to
Section 2.3.3. This time, however, we will use the new non-standard false
color composite. You might also try different combinations of settings for
starting the FLY-THROUGH program, to see what combination works best
for your computer. The settings suggested below are not necessarily the
optimal settings for you.
◆ ◆ ◆
Saving a fly-through path
Menu Location: File – Display – FLY-THROUGH
1. Start the FLY-THROUGH program from the menu or the
icon bar.
2. In the FLY-THROUGH dialog box, double click in the
Surface image text box. In the resulting pick list, double click
on hong_kong_dem.
3. Double click in the Use drape image text box, and in the
resulting text box, double click on etm345_fcc.
4. In the System resource use section, select the radio button
for Low.
5. In the Initial velocity section, select the radio button for
Slow.
6. Use the slider bar to reduce the Exaggeration factor to
50%.
7. Click on OK.
8. The FLY-THROUGH viewer will open.
9. Move the cursor over the FLY-THROUGH viewer, and
right click with the mouse.
10. A pop-up menu will appear. Note the range of options
available, and the associated F-key short cuts (Figure
2.3.4.2.a). Some options are grayed out at this stage, but will
become available later.
11. Select Smooth pixel.
12. Right click again in the FLY-THROUGH viewer. This
time, in the pop-up menu, select Record. (Note that the
shortcut key F8 is an alternative for starting to record the
movie.)
13. Immediately fly across the Hong Kong image, following
the path across the airport as in Section 2.3.3. Change height,
perspective and direction, as you wish.
14. When you are done, right click in the viewer, and select
Stop from the pop-up menu. (Note that the shortcut key F11 is
an alternative.)
15. Right-click in the image, and from the pop-up menu,
select Save path.
16. A Save as dialog box will open. In the file name text box,
enter flight1.
17. Click on Save.
◆ ◆ ◆
Figure 2.3.4.2.a The FLY-THROUGH pop-up menu.
2.3.4.2 Play a fly-through based on a saved path
The process of saving a path also automatically loads it, and makes it
available for subsequent use.
◆ ◆ ◆
Play a fly-through from a previously recorded path
1. The FLY-THROUGH viewer should still be open from the
Save as step in the previous section (2.3.4.1).
2. Right-click in the FLY-THROUGH viewer, and from the
pop-up menu, select Play. (Note that F9 is an equivalent
command.)
3. This should generate a fly-through based on your
previously recorded flight path.
◆ ◆ ◆
Once the fly-through is completed, you can experiment with playing the pre-
recorded flight path. Note that the flight path merely specifies the path, and
not the images. Thus, if you were starting from scratch, you would need to specify the surface image and the drape file. In this case, because we are
continuing from the previous step, with the fly-through viewer still displayed,
there is no need to specify the images once again.
◆ ◆ ◆
Play a fly-through from a previously recorded path (cont.)
4. Right-click in the image, and select Load path from the
pop-up menu.
5. In the subsequent pick list, double click on flight_ex1.csv.
6. Right-click in the image, and select Play from the pop-up
menu.
7. The pre-recorded fly-through path should now play.
◆ ◆ ◆
2.3.4.3 Record a fly-through as an AVI file
As a final step in this exercise, we will record an AVI file from the flight path
recorded in Section 2.3.4.1, above. It is worth noting that it is not necessary
to first save the path; you could instead save your recorded path directly to an
AVI. However, there is some advantage in first saving the path, as that allows you to preview the flight before generating the movie.
◆ ◆ ◆
Record an AVI-format fly-through movie
1. The FLY-THROUGH viewer should still be open from the
Save as step in the previous sections (2.3.4.1 and 2.3.4.2).
2. Right-click in the FLY-THROUGH viewer, and from the
pop-up menu, select Save as AVI.
3. A Save as dialog box will open. In the File name text box,
type flight1.avi.
4. Click on Save.
5. The fly-through will play as it is generated and saved in
AVI format.
◆ ◆ ◆
The file, flight1.avi, can be played in TerrSet, through the main menu: File -
Display – MEDIA VIEWER. You would then simply open the file, and the
movie starts automatically. It is perhaps more useful, however, to see how
this movie can be played in a standard AVI format player, such as the
Windows Media Player. We provide an example AVI file, flight_ex1.avi,
which you can also play.
◆ ◆ ◆
Play a previously recorded AVI-format fly-through movie
1. Use the Windows Explorer to navigate to your data
directory (e.g. C:\RSGuide\Chap1-4).
2. Find the file flight1.avi.
3. Double click on the file. This should automatically start the
Windows Media Player, and start playing the movie.
4. Alternatively, you can first open the Windows Media
Player from the main Windows Start menu, and then open the
flight1.avi file in that program.
◆ ◆ ◆
CHAPTER 3
IMPORTING, PRE-
PROCESSING AND
EXPORTING
In this chapter, we will learn how to use TerrSet to import data, georeference
the data to our preferred projection, combine data to cover our area of
interest, and export it to a general software-independent format. We will use
subsets of the four Landsat images required to cover all of Hong Kong, and
will end up with a mosaic similar to the Hong Kong images we explored
earlier.
3.1 Importing Data into the IDRISI file format
Data formats are numerous, and vary widely from those of government agencies and data providers, such as the USGS and SPOT Image, to software-specific formats, such as those of ERDAS Imagine and ArcInfo. TerrSet comes with import routines that cover many of the most common formats. One important format for raster data is the GeoTIFF format, which is probably the closest we have to an international standard for raster data. Moreover, TerrSet has tools to facilitate importing Landsat and Sentinel data.
In the following exercises, we will import different GeoTIFF scenes. We will
then project and combine all images into a mosaic. Finally, we will perform
atmospheric correction.
Note: TerrSet offers a powerful alternative tool for importing images, called
GDALIDRISI, which can import most raster formats.
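If you work outside TerrSet, a similar conversion can be scripted with GDAL itself. The following is a minimal sketch, assuming the GDAL Python bindings are installed (RST is GDAL's driver name for the IDRISI raster format); it is not the GDALIDRISI module itself:

from osgeo import gdal

# Convert a GeoTIFF to an IDRISI raster (.rst); 'RST' is GDAL's
# short name for the IDRISI raster driver.
gdal.Translate('etm_p121r44_b3.rst', 'etm_p121r44_b3.tif', format='RST')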
3.1.1 GeoTIFF Format
Our original Landsat data from Hong Kong was provided by the vendor in
the GeoTIFF format, an image format that is growing in popularity. We will
use some of this original data to demonstrate the importing of data into
IDRISI raster format. GeoTIFF is a special case of the Tagged
Image File Format (TIFF). TIFF is an image format in the public domain,
capable of supporting compression, tiling, and extension to include other
metadata. GeoTIFF incorporates geographic metadata, such as coordinates
and projection type, using compliant TIFF tags and structures.
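As an aside, the geographic tags in a GeoTIFF can also be inspected programmatically; here is a minimal sketch assuming the open-source rasterio library is installed (this is independent of TerrSet):

import rasterio

# Open the GeoTIFF and print the georeferencing stored in its tags.
with rasterio.open('etm_p121r44_b3.tif') as src:
    print(src.crs)        # coordinate reference system, e.g. the UTM zone
    print(src.bounds)     # minimum and maximum X and Y coordinates
    print(src.transform)  # affine transform from row/column to map coordinates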
Let’s import a single band (band 3, corresponding to the red spectrum) from
subsets of four different Landsat ETM+ scenes of Hong Kong.
◆ ◆ ◆
Import data
Menu Location: File – Import – Desktop Publishing
Formats – GEOTIFF/TIFF
1. This program has no associated icon on the main toolbar,
so use the menu as described in the title above to start the
GEOTIFF/TIFF program.
2. The GEOTIFF/TIFF dialog box will open. (Figure 3.1.1.a
shows the dialog box with the options described below
selected.)
3. Note that the dialog box has radio buttons for selecting the
options for importing and exporting IDRISI file format files.
For this exercise, the default, which is importing files, is what
we want.
4. To select an input file, click on the pick button (…) for the
GeoTIFF file name text box.
5. A Pick list window will open. Double click on the \Chap1-
4\Raw images\ subfolder title.
6. The plus sign next to the title will change to a minus sign,
and a list of four file names will be displayed.
7. Click on etm_p121r44_b3.
8. Click on OK to close the Pick list window.
9. TerrSet will automatically put the same name as the
potential output filename in the Idrisi image to create text
box. The IDRISI file has a different extension (.rst) from that
of the GeoTIFF raw data (.tif), so using the same name will
not cause any problems.
10. Click on OK to start the import.
◆ ◆ ◆
Figure 3.1.1.a The GEOTIFF/TIFF dialog box.
The import process is monitored in the status bar. Once the import process
finishes, the DISPLAY LAUNCHER is automatically started, and the image
is displayed (Figure 3.1.1.b). Note that the image imported here is a subset of
the original Landsat 7 scene. Images at medium spatial resolution, such as
those from Landsat, occupy large amounts of disk space. To make the images
manageable for this tutorial the scenes were cropped to a smaller extent. In a
subsequent exercise, we will import a full scene and create a subset as an
example.
Figure 3.1.1.b Successful import of a GeoTIFF Landsat band 3 into TerrSet.
The image is displayed as a gray-scale image, as in Figure 3.1.1.b. As you
can see, the actual image only covers part of the display. The TIFF file was
generated with bounds that correspond to our area of interest over Hong
Kong. More often than not, one's area of interest spans beyond a single
frame of imagery, and multiple images are required to cover it completely. In
TerrSet, it is necessary to combine the overlapping images into a single
spatial composite to view the complete area of interest.
Let’s see what information was embedded in the GeoTIFF file, by viewing
the metadata of the new file. Can you remember how? If so, go ahead and
open the metadata. If not, just follow the instructions below, which describe
how to open the Layer Properties window. (You can also access the same
information through the TerrSet Explorer.)
◆ ◆ ◆
Import data (cont.): Viewing image metadata
11. With the newly created image still open in a display
viewer, select the Layer Properties button in the Composer
window.
12. Click on the Properties tab to open the Properties pane
(Figure 3.1.1.c).
◆ ◆ ◆
Figure 3.1.1.c Imported data layer properties.
Note that the image comes with projection reference information including
the UTM zone (UTM-50n), and the bounding coordinates (minimum and
maximum X and Y).
Close the Layer Properties window before continuing.
Now import the remaining three GeoTIFF images.
◆ ◆ ◆
Import data (cont.): Import remaining three images
13. Because of TerrSet’s persistent windows, the
GEOTIFF/TIFF dialog box should still be open, though you
may need to move or minimize the other windows in the
TerrSet workspace. If you do not have the option for
persistent windows set, or you have closed the dialog box, use
the menu to restart this module.
14. As before, click on the pick button (…) for the GeoTIFF
file name text box, and select the RSGuide\Chap1-4\Raw
images\ subfolder title in the Pick list window.
15. Select the etm_p121r45_b3 file.
16. Note how once again the output file name automatically
changes to the same name as this new input file.
17. Click on OK to import the file.
18. Once the status bar indicates the file has been imported,
find the GEOTIFF/TIFF dialog box again, and this time
select etm_p122r44_b3 for the GeoTIFF file name (input
data).
19. Click on OK to import this file.
20. Finally, select etm_p122r45_b3 for the input file, and
click on OK in the GEOTIFF/TIFF dialog box to import the
last data set.
◆ ◆ ◆
Check the layer properties of each file, as we did for the first image we
imported. Note that the path 121 images (i.e. those with p121 as part of the
file name) are georeferenced with UTM50n and the path 122 images (i.e.
those with p122 as part of the file name) are georeferenced with the
UTM49n.
Now you can see that the image of Hong Kong that we displayed in the
previous section (Figure 2.2.4.b) is a mosaic of subsets from four image
frames. One can easily see the vertical join line in the original mosaic. The
difference in the colors, and thus spectral response, is because the two
Western images were acquired on a different date from that of the two
Eastern images. However, as we have seen in the importing exercise, the
mosaic comprises four images. Can you see any East-West lines that show
the remaining joins? The joining of each pair of image subsets from North to
South involved data acquired along the same track of the satellite orbit,
during the same North to South overpass. Therefore, the data can be joined
almost seamlessly since they were acquired under the same conditions.
3.1.2 Concatenation
Note: TerrSet offers two modes of spatially joining images, CONCAT and
MOSAIC. In the next section (3.1.2), we will utilize the CONCAT module to
join the Landsat data with common path acquisition. As an alternative, you
can complete section 3.1.3, which uses MOSAIC. You do not need to
complete both sections.
The similarity or disparity of the spectral qualities of the data at acquisition,
as shown in the previous section, will influence how we choose to join data
sets.
CONCAT is a program module to concatenate, or join, multiple images or
vector files to form a larger file. This program may also be used to paste a
portion of an image over another image. Some preprocessing may be
necessary because all data to be joined must be of the same data type, and
have the same spatial resolution (pixel size) and reference system. Since
CONCAT does not modify the DN values of the component images, the
program works best if the images have comparable spectral characteristics –
like along-path satellite images – or if the data have been normalized to a
standard, as with elevation data.
Landsat data frames along the same path generally meet the requirement of
spectral similarity. Such images are essentially subsets of a single continuous
data acquisition along the path of the satellite, and are sub-divided into
arbitrary individual images based on the predefined row grid. Thus,
CONCAT is well-suited to perform the operation of rejoining Landsat images
along the same path.
We will use the CONCAT module in TerrSet to join the path images that
form each half of the Hong Kong mosaic.
◆ ◆ ◆
Concatenation of Images with CONCAT
Menu Location: File – Reformat – CONCAT
1. Start the CONCAT program from the main menu.
2. The CONCAT dialog box will open.
3. Under Placement type, select the radio button for
Automatic placements using reference coordinates.
◆ ◆ ◆
The data sets that we will join were originally georeferenced to UTM
projection. It is often best to have all data properly georeferenced before
joining. This avoids the need for manual placement and also maintains better
control of the geographic coordinates of each pixel. When Automatic
placement is selected, a new section appears in which one can select the
images to be joined.
◆ ◆ ◆
Concatenation of images with CONCAT (cont.)
4. In the Images to be concatenated section of the CONCAT
dialog box, click on the up arrow button next to Number of
files, to set the number of input images to 2 (Figure 3.1.2.a).
5. Click in the first text box under Filename (Figure 3.1.2.a).
Select the browse files button (…), and the Pick list window
will open.
6. If necessary, click on the folder name \Chap1-4 to list the
files in the subdirectory. Select the file etm_p121r44_b3, and
click OK.
7. Click in the second text box, open the Pick list window,
select etm_p121r45_b3, and click OK.
◆ ◆ ◆
Now with the appropriate files selected, one last choice is the manner in
which we will deal with the overlap regions of the two files. In the
Concatenation Type section of the CONCAT window, one has a choice of
either Opaque, or Transparent. For Opaque concatenation, the first image
overwrites the second image, regardless of the values in either image. The
second option, Transparent, operates in the same way, with the exception of
pixels in the first image that have a 0 DN value. These 0 DN pixels are
assumed to be null values, and are therefore treated as if they were
transparent, allowing the DNs of the second image to be retained, and not
overwritten.
Figure 3.1.2.a CONCAT dialog box, with Automatic Placement option
selected.
Image data are almost always joined transparently, whereas for
other data that may have negative values or actual zero values (such as
elevation data), we typically use the opaque option. However, we do need to
be careful, as some images also have 0 DN values that represent real image
data. Note that when working with images whose background values
differ from zero (e.g. -9999), those values can be reclassified to zero using
the module RECLASS.
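Conceptually, the two concatenation types are just per-pixel rules applied over the overlap. A minimal NumPy sketch, with hypothetical 2 x 2 arrays standing in for two co-registered images already placed on the common output grid:

import numpy as np

# Hypothetical overlap regions; 0 marks background (null) pixels in img1.
img1 = np.array([[10, 0], [0, 30]], dtype=np.uint8)
img2 = np.array([[99, 20], [25, 99]], dtype=np.uint8)

opaque = img1                                  # first image always wins
transparent = np.where(img1 != 0, img1, img2)  # zeros in img1 let img2 show through
print(transparent)                             # [[10 20] [25 30]]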
We will now choose the transparent option, and then give an appropriate
output name for the concatenated image.
◆ ◆ ◆
Concatenation of images with CONCAT (cont.)
8. In the Concatenation Type section of the CONCAT dialog
box, select the radio button for Transparent.
9. In the Output image textbox, type the name etm_p121r44-
45_b3.
10. Click on OK to run the CONCAT module.
◆ ◆ ◆
When the program module has completed the process of concatenating these
two images, the image will be displayed automatically (Figure 3.1.2.b). The
default for TerrSet for a single band display is a color palette. As we
discussed in Section 2.2.2, a gray scale palette is more appropriate for single
band images. Therefore, we really should use the Composer window to
change the palette file for this image to GreyScale. However, since we are
only interested in checking to see that the file has been joined correctly, we
will not bother to change the palette file this time. Also, since the eye is more
sensitive to color variations than gray tone variations, you could argue this
non-standard representation is useful in this instance.
The joined image shows no seam between the two individual subsets (Figure
3.1.2.b). However, while much of Hong Kong is covered by the image, there
still remains a significant portion in the West of the image that is blank. We
will need the Path 122 images to cover this region. Therefore, we will need to
concatenate the path 122 images just as we did for the path 121 images.
Figure 3.1.2.b Landsat Path 121 images joined by concatenation.
◆ ◆ ◆
Concatenation of images with CONCAT (cont.)
11. The CONCAT dialog box should still be open from the
concatenation operation. We will use the same parameters,
but simply change the input and output files.
12. Click in the first text box under Filename. Select the
browse files button (…), and the Pick list window will open.
13. Select the file etm_p122r44_b3, and click OK.
14. Click in the second text box, open the Pick list window,
select etm_p122r45_b3, and click OK.
15. In the Output image textbox, type the name etm_p122r44-
45_b3. (Note you can simply edit the file name we used the
previous time, changing the path from p121 to p122.)
16. Click on OK to run the CONCAT module.
◆ ◆ ◆
Once again the concatenated image will be displayed automatically, with a
color palette applied (Figure 3.1.2.c).
Figure 3.1.2.c Landsat Path 122 images joined by concatenation.
Find the Composer window, click on the Layer Properties button, and then
select the Properties tab, to examine the metadata for each of the two newly
concatenated images. Notice how when you switch the focus between the two
concatenated images (i.e. when you click in each image, bringing it to the
front), you don’t have to reopen the Layer Properties window to see the
attributes of that image; the attributes are updated automatically. However,
you do have to click on the Properties tab again, as it is not the default pane.
Notice that the concatenated Landsat path images retain the projection
information from the original TIFF files. Thus, the path 122 image is
georeferenced to UTM-49n, and the path 121 image is georeferenced to
UTM-50n. The different projections are a problem in TerrSet in that you will
encounter a warning if you try to join these images now, to form one single
image.
Therefore, let us learn a bit more about map projections and georeferencing
in section 3.2 so that we can transform these images to the same projection,
and join them to make a mosaic over Hong Kong.
3.1.3 Concatenation using the MOSAIC
program
Note: This section is an alternative to section 3.1.2 above. See the note at the
start of section 3.1.2 about the choice between sections 3.1.2 and 3.1.3. You
do not need to complete section 3.1.3 if you have successfully completed
section 3.1.2.
The MOSAIC program allows us to join two images that are on the same
projection.
◆ ◆ ◆
Concatenation of Images with MOSAIC
Menu Location: IDRISI Image Processing – Restoration –
MOSAIC
1. Start the MOSAIC program from the main menu.
2. The MOSAIC dialog box will open.
3. In the Images to be processed pane, click in the white area
under Filenames. A Pick list button will be displayed (…).
4. Click on the Pick list button.
5. A Pick list window will open.
6. Select the file etm_p121r44_b3. Click on OK to close the
Pick list.
7. The first file name, etm_p121r44_b3, should now be listed
in the MOSAIC window.
8. Click in the white space below the first file name. A Pick
list button will be displayed.
9. Follow steps 4-7, except this time, choose the file
etm_p121r45_b3.
10. In the text box labeled Output mosaicked image, type
etm_p121r44-45_b3.
11. Uncheck the option for Match image grey level.
12. Accept all other defaults.
13. Figure 3.1.3.a shows the complete dialog box.
14. Click on OK to run the concatenation.
◆ ◆ ◆
Figure 3.1.3.a. The MOSAIC dialog box with parameters chosen for
concatenating the two Landsat images. The Match image grey level check
box should be unchecked.
Figure 3.1.3.b Landsat Path 121 images joined by the MOSAIC program.
The MOSAIC program offers a powerful tool for matching the histograms of
images to try to compensate for brightness differences between them. In
this case, however, we do not need the image matching tool because the DN
values are equivalent between the images we are joining.
When the program module has completed the process of concatenating or
mosaicking these two images, the image will be displayed automatically
(Figure 3.1.3.b). The joined image shows no seam between the two individual
subsets, confirming that there was no need to match the image brightness
values with the Match image grey level tool in MOSAIC. Although much of
the Hong Kong region is covered by the image, there still remains a
significant portion in the West of the image that is blank. We will need the
Path 122 images to cover this region. Therefore, we will concatenate the path
122 images just as we did for the path 121 images.
◆ ◆ ◆
Concatenation of images with MOSAIC (cont.)
15. The MOSAIC dialog box should still be open from the
concatenation operation. We will use the same parameters,
but simply change the input and output files.
16. Click in the first text box under Filenames. Select the
browse files button (…), and the Pick list window will open.
17. Select the file etm_p122r44_b3, and click OK.
18. Click in the second text box, open the Pick list window,
select etm_p122r45_b3, and click OK.
19. In the Output image textbox, type the name etm_p122r44-
45_b3. (Note you can simply edit the file name we used the
previous time, changing the path from p121 to p122.)
20. Click on OK to run the MOSAIC module.
◆ ◆ ◆
Once again the concatenated image will be displayed automatically (Figure
3.1.3.c).
Figure 3.1.3.c Landsat Path 122 images joined using the MOSAIC program.
Find the Composer window, click on the Layer Properties button, and then
select the Properties tab, to examine the metadata for each of the two newly
concatenated images. Notice how when you switch the focus between the two
concatenated images (i.e. when you click in each image, bringing it to the
front), you don’t have to reopen the Layer Properties window to see the
attributes of that image; the attributes are updated automatically. However,
you do have to click on the Properties tab again, as it is not the default pane.
Notice from the Layer Properties window that the mosaicked Landsat path
images retain the projection information from the original TIFF files. Thus,
the path 122 image is georeferenced to UTM-49n, and the path 121 image is
georeferenced to UTM-50n. The different projections are a problem in
TerrSet in that you will encounter an error if you try to join these images
now, to form one single image.
Therefore, let us learn a bit more about map projections and georeferencing
in section 3.2 so that we can transform these images to the same projection,
and join them to make a mosaic over Hong Kong.
3.2 Georeferencing
3.2.1 Introduction to Map Projections
Georeferencing is an important but rather technical topic. Fortunately, much
of the complexity of the subject is hidden from us, because TerrSet will take
care of the mathematical specifics for us. However, it is necessary for the
user to have a qualitative understanding of the principles involved. In the
following section we provide a short and highly abbreviated overview of the
topic. The reader is encouraged to read further on the topic. For example, the
TerrSet manual has an excellent section on georeferencing (Eastman 2016).
In addition, most of the texts listed in Table 1.1.2.a have extensive
discussions on georeferencing and the issues involved in resampling onto a
projection.
A map projection is a mathematical procedure that converts between a
spherical or ellipsoidal representation of the earth and a flat planar map
surface. Although many projections have been designed over the centuries,
just a few are widely used today. The process of geographic referencing of
images is known by many names, such as georeferencing, geocoding, and
georectification, but all refer to the process of transforming the image data
from a simple matrix reference system (row and column) to a geographic map
reference system.
The geographic referencing of individual images allows for the identification
of the relative distance and arrangement between features on the image, as
well as both the absolute location of features in the imagery, and the
comparison of features over time through the overlay of multiple images
acquired on different dates. The importance of having an accurate
geographic reference system linked to the imagery cannot be overstated. A
common coordinate system is the essential element of any GIS database.
No matter how sophisticated the mathematical equations associated with each
projection, the Earth’s surface can never be converted perfectly to a flat
map. There is always some distortion, great or small, in each map.
An important attribute of a projection is the geodetic datum. A geodetic
datum is the representation of the earth’s shape, usually chosen to be an
ellipsoid. The geodetic datum therefore includes the parameters that define
the ellipsoid, as well as the associated coordinate system origin and
orientation. A geodetic datum can be a global datum if it is defined by the
center of the Earth, as is the case for WGS84. Otherwise, a local datum
defines a specific origin position and azimuth relative to a specific location
on the ellipsoid. By locating a datum near one’s area of interest, the error
associated with projecting onto a flat surface is minimized.
Another aspect of transforming your data to a projection is the decision of
how to resample or interpolate your data to fit the projection grid.
Resampling is necessary, because the new grid will have center points for
each pixel that differ from the old grid. There are a number of different
strategies for estimating the pixel DN values at the new pixel location,
including nearest neighbor, bilinear interpolation, and cubic convolution.
• Nearest neighbor is the simplest resampling method. This approach
assumes the best estimate of the new pixel value is simply the original
DN value of the closest pixel from the input image. Output DN values are
the same as those in the original image.
• Bilinear interpolation uses the distance-weighted average of the values
of the four nearest cells in the input image for the new pixel value. The
output values are always within the range of original values.
• Cubic convolution uses the sixteen nearest neighbors and fits a
smoothing curve to the values. The output values can be outside the range
of original values.
Choose nearest neighbor whenever it is critical that original pixel values
remain unchanged. However, because original pixel values are unchanged,
nearest neighbor resampling tends to be very blocky in appearance. Choose
bilinear interpolation where averaging seems appropriate for better visual
quality. Bear in mind, however, that smoothing will tend to blur the data
somewhat.
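To make the first two strategies concrete, here is a minimal sketch (ignoring image edges) of estimating one output pixel at a fractional input position:

import numpy as np

def nearest_neighbor(img, r, c):
    # Take the DN of the single closest input pixel (values unchanged).
    return img[int(round(r)), int(round(c))]

def bilinear(img, r, c):
    # Distance-weighted average of the four surrounding input pixels.
    r0, c0 = int(np.floor(r)), int(np.floor(c))
    fr, fc = r - r0, c - c0
    top = (1 - fc) * img[r0, c0] + fc * img[r0, c0 + 1]
    bottom = (1 - fc) * img[r0 + 1, c0] + fc * img[r0 + 1, c0 + 1]
    return (1 - fr) * top + fr * bottom

img = np.array([[10.0, 20.0], [30.0, 40.0]])
print(nearest_neighbor(img, 0.4, 0.6))  # 20.0 (the closest original DN)
print(bilinear(img, 0.4, 0.6))          # 24.0 (a new, averaged value)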
TerrSet has two modules that may be used for geographic referencing. The
first is the module PROJECT, which automatically transforms data from one
known geographic projection to another. The second module is RESAMPLE,
which allows for the generation of a polynomial equation based on control
points picked by the user. Thus, if the data are already georeferenced, but you
need to convert the image to another projection, you would use PROJECT.
On the other hand, if your data are not georeferenced or need spatial
adjustments to match another image, you would use RESAMPLE to convert
the image. It is important to note that RESAMPLE requires a georeferenced
map or image of the same area to serve as a base map for developing the
transformation equation.
3.2.2 Converting Between Projections Using
TerrSet-Defined Projections
As discussed above, PROJECT transforms raster images from one known
geographic reference system to another known system. TerrSet uses
Reference System Parameter Files to identify the complete characteristics of
a projection, including the datum, origin, units, etc. Thus, to apply the
PROJECT module, we will need a Reference System Parameter File for both
the input and output reference systems.
The TerrSet on-line Help gives the source of the algorithms used in the
PROJECT module. The projection transformations are based on the formulas
of Snyder (1987). Datum transformations are accomplished using the
Molodensky transform process, which assumes that the axes of the source
and target coordinates are parallel. In the case of conversions between
NAD27 and NAD83 within the continental US, TerrSet uses the US National
Geodetic Survey's NADCON procedure.
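As a conceptual illustration only (not TerrSet's implementation, which works on geodetic coordinates directly), a three-parameter datum shift amounts to a simple translation of earth-centered (geocentric) coordinates:

def datum_shift(x, y, z, dx, dy, dz):
    # Translate geocentric coordinates (all in meters) between datums.
    return x + dx, y + dy, z + dz

# Hypothetical geocentric position, shifted by the Hong Kong 1980
# "delta WGS84" values listed in Section 3.2.3.
print(datum_shift(-2414000.0, 5385000.0, 2421000.0, 162.619, -276.959, -161.764))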
TerrSet incorporates over 500 Reference System Parameter Files for a wide
variety of projections and datums. These include files for a geodetic system
using latitude and longitude and the WGS84 datum, the UTM system (one
each for the 60 UTM zones, for both the northern and southern hemispheres)
using the WGS84 datum, and, for the United States, the UTM system using
NAD27 and NAD83. TerrSet also includes all US State Plane Coordinate
(SPC) systems based on the Lambert Conformal Conic and Transverse
Mercator projections.
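Incidentally, the UTM zone that applies at a given longitude is easy to compute; a minimal sketch (standard zone numbering, ignoring the exceptions around Norway and the poles):

def utm_zone(longitude_deg):
    # Standard 6-degree UTM zones, numbered 1-60 eastward from 180 degrees W.
    return int((longitude_deg + 180) // 6) + 1

print(utm_zone(114.17855))  # 50 -- the Hong Kong 1980 origin longitude is in zone 50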
Furthermore, TerrSet allows the user to create new reference systems by
developing new Reference System Parameter Files, which are given
the extension .ref. Often the best way to create a new .ref file is to copy an
existing one, and then edit it. The user will need to change the parameters
necessary for the new system, and save the file with a new name.
Let us try using the PROJECT module to create maps of our Hong Kong
image in different projections. The original Hong Kong images were
provided in one of the most common map projections in use today, the
Universal Transverse Mercator (UTM) projection. We will transform these
images from this projection to a Geographic projection that uses latitude
and longitude degrees as its coordinates.
◆ ◆ ◆
Converting between projections using PROJECT
Menu Location: File – Reformat – PROJECT
1. Use the main menu to start the PROJECT module.
2. The PROJECT dialog box will open (Figure 3.2.2.a).
3. Use the radio button to specify that the Type of file to be
transformed is raster.
4. Click on the browse button (…) next to the text box for
specifying the Input file name, and select the etm1 file from
the Pick list window.
5. Click on OK to close the input file Pick list.
6. Note that PROJECT will examine the input raster file’s
documentation file to determine the reference system in use,
and will automatically enter the name of the reference system
as it appears in the documentation file in the Input reference
system text box (utm-49n, in this case).
7. Enter a new name for the transformed file in the Output file
name text box: etm1_latlong.
8. We will now choose the reference file that defines the grid
referencing system for the new output file. To find the list of
available reference files, start by clicking on the browse
button (…) next to the text box for Reference file for output
result.
9. The Pick list window will open. Double click on
TerrSet2020/Georef to see the files in that folder.
10. Scroll down the list and select latlong.
11. Click the OK button to return to the PROJECT dialog box.
12. For the Resample type, we will use the default option of
Nearest Neighbor.
13. Likewise, for the Background value, we will use the
default of 0.
◆ ◆ ◆
Figure 3.2.2.a The PROJECT dialog box with parameters selected.
The background value is the value that will be used for all new pixel
locations that lie outside the bounds of the old image. Because a slight
rotation of the image is common in most projection changes, there will likely
be regions in the new image for which we don’t have data from the old
image. Thus, specifying a value for such locations is very important.
◆ ◆ ◆
Converting between projections using PROJECT (cont.)
14. Click on the Output reference information… button.
15. The Reference Parameters window will open (Figure
3.2.2.b).
◆ ◆ ◆
Figure 3.2.2.b The Reference Parameters window.
The PROJECT operation automatically calculates the output boundaries,
given the user-selected reference system or the unit distance of the output
image. The calculated values are inserted automatically in the Reference
Parameters window. It is very important for the user to consider carefully the
calculated number of columns and rows that will span that region, as this will
define the resolution (pixel size) of the output image. You are free to set any
resolution you desire by altering either the values under resolution in X and
resolution in Y, or by changing the number of rows and columns for the
output data. However, in most cases, it is preferable to maintain the original
resolution of the data.
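The bookkeeping here is simple arithmetic; a sketch with hypothetical numbers for a grid spanning three degrees of longitude:

# Pixel size follows directly from the bounds and the number of columns.
min_x, max_x, columns = 113.0, 116.0, 11133  # hypothetical values
resolution_x = (max_x - min_x) / columns
print(resolution_x)  # ~0.00027 degrees per pixel, roughly 30 m of ground
                     # distance at the equator (somewhat less at Hong Kong)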
If you are transforming the image into a projection already in use by another
image and you wish to have the same coverage and resolution, use the Copy
from existing file option in the dialog box. In our case, we will choose the
defaults calculated by the PROJECT program (Figure 3.2.2.b), and therefore
we will close the Reference Parameters window without changing any
parameters.
◆ ◆ ◆
Converting between projections using PROJECT (cont.)
16. In the Reference Parameters window, click on OK to
close the window.
17. In the PROJECT window, click the OK button to start the
resampling operation.
◆ ◆ ◆
The resampling to create the new image may take a few minutes. TerrSet will
automatically display the new image with a color palette applied (Figure
3.2.2.c). In this case, the color palette shows the rotation very clearly,
because the background pixels, which lie outside the boundaries of the
original image, are shown in black.
Figure 3.2.2.c The Landsat image with a geographic projection, and the
color palette automatically applied.
This might be a good time to review our skills in modifying a display and
creating a map. In the description below, only brief instructions will be
provided, since this material has already been covered in Section 2.2.5, on
Map Annotation. If you find the instructions below too brief, you will need to
review the detailed instructions in Section 2.2.5. Figure 3.2.2.d shows the
map we are trying to create.
Figure 3.2.2.d Hong Kong Landsat image map using the Geographic
projection.
◆ ◆ ◆
Creating a map: Change the palette file and apply a
contrast stretch
1. In the Composer window, select the Layer Properties
button.
2. In the Layer Properties window, change the palette file
from the default quant, by clicking on the browse button (…)
next to the Palette file text box. The Pick list window will
open. Open the TerrSet/Symbols folder, and scroll down to
select the greyscale palette, and click on OK.
3. The image should now be displayed in gray tones.
4. Change the Display Min/Max contrast settings to 60 and
100.
5. Click on Apply to apply the contrast stretch.
6. Close the Layer Properties window by clicking on OK.
◆ ◆ ◆
Note that, in this example, we have provided appropriate contrast stretch
values for the display in order to speed things along. Normally you would run
HISTO to get initial estimates of appropriate values for the contrast stretch,
followed by some manual experimentation if necessary.
The image should now have a more useful contrast stretch applied, and we
can work on the map presentation itself.
◆ ◆ ◆
Creating a map (cont.): Specify and apply the map
annotation
7. In the Composer window, click on Map Properties.
8. In the Map Properties window, select the tab for Legends.
9. Uncheck the box for Visible (to remove the legend from the
display).
10. Select the tab for Map Grid and check the box for Visible.
11. Click on the Map Grid Bounds radio button for Current
View.
12. Set both Increment X and Increment Y to 0.25.
13. Under Text Options, make sure the radio button for
Number inside has been selected.
14. Set X Axis orientation to Vertical. Use the Select Font
button to change the font color to white.
15. Select the tab for North Arrow.
16. Make sure the box for visible has been checked.
17. Select a North Arrow style by clicking on one of the arrow
icons.
18. Click on the tab for Scale Bar.
19. Make sure the box for visible has been checked.
20. Set the Length (in Ground Units) to 0.25.
21. Click on the tab for Titles.
22. Type in the Title text box: Hong Kong Landsat
Geographic Projection Map.
23. Select the Background tab.
24. Change the Map Window Background Color box to white
by double clicking in the box, and selecting the appropriate
color chip.
25. Check the box for Assign map window background color
to all map components.
26. Click OK to close the Map Properties window.
27. Resize the Display window, so that you have room below
the image for the scale bar and north arrow.
28. Drag the individual map components so that the image,
scale bar, and title are centered.
29. Note that you can resize the north arrow if necessary.
30. If you are satisfied with the map, you should save it: In
the Composer window, select the button for Save.
31. The Save Composition dialog box will open. Select the
radio button for Save composition to MAP file.
32. Enter a new file name in the text box: Geographic
Projection Map.
33. Click on OK to save the file, and close the Save
Composition dialog box.
◆ ◆ ◆
The final map composition should look something like Figure 3.2.2.d. If
necessary, you may need to go back and redo some part of the map by
altering some parameter in the Map Properties window.
Since you have saved the map composition, you can close the DISPLAY
window.
3.2.3 Converting Between Projections Using
User-Defined Projections
Although TerrSet supplies more than 500 specific map projections with
specific datum and coordinate origins, occasionally you might need to use a
projection that is not included with the software. Fortunately, TerrSet does
allow one to specify a new projection, provided that the new projection is a
derivative of an existing TerrSet projection, and one knows how its datum
shift, spheroid, and geographic origin differ from those of the TerrSet-
supplied projection.
In this exercise, we will create a new projection reference system parameter
file for the Hong Kong 1980 grid. The file will contain all the necessary data
for calculating the transformation from a projection based on the WGS84
datum to the HK80 datum.
The Hong Kong grid was created and used by the government of Hong Kong
to provide highly accurate positions within that region. The HK1980 Grid is a
local rectangular grid system based on the Transverse Mercator projection
and Hong Kong 1980 Geodetic Datum. The details of the Hong Kong 1980
grid are:
Reference System: Hong Kong 1980 Grid System
Projection: Transverse Mercator
Datum: Hong Kong 1980
delta WGS84: 162.619 -276.959 -161.764
Ellipsoid: International 1924
Major s-ax: 6378388 meters
Minor s-ax: 6356911.946 meters
Origin long: 114.17855 degrees
Origin Lat: 22.3121333 degrees
Origin X (False easting): 836694.05 meters
Origin Y (False northing): 819069.8 meters
Scale factor: 1.00
Units: meters
Parameters: 0
We will use the TerrSet text editor to modify an existing reference system
parameter file to create the Hong Kong 1980 grid file.
◆ ◆ ◆
Modifying a Reference System Parameter File
Menu Location: File – Data Entry – Edit
1. Use the main menu or icon bar to open the TerrSet TEXT
EDITOR window.
2. In the TEXT EDITOR window, use the menu to select File -
Open…
3. The OPEN FILE window will open.
4. Browse to the main TerrSet program directory (for
example, C:\Program Files (x86)\TerrSet). Within the TerrSet
folder, double click on the Georef subfolder to list the files
within that directory.
5. Select the LATLONG.REF file.
6. Click on OPEN.
7. In the TEXT EDITOR window, use the menu to select File
– Save as…
8. Once again navigate to the TerrSet2020\georef* folder, and
then in the File name text box enter HK80. Click Save.
9. Use the information provided about the Hong Kong Grid to
change the details of the file to the new projection. Use the
original file to guide you as to the appropriate format. For
example, observe that the major s-ax and minor s-ax fields do
not include the designation of the units (meters), only the
number. The units are specified later in the file.
10. When you are done, use the TEXT EDITOR menu to
select File – Save.
*If you do not have permission to save in this folder, you can
navigate to your working folder and save the ref file there.
◆ ◆ ◆
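For reference, the edited HK80 file should end up with contents along the following lines. This is a sketch based on the parameter list above; the exact field labels and their order should match those in the LATLONG.REF file you copied:

ref. system : Hong Kong 1980 Grid System
projection  : Transverse Mercator
datum       : Hong Kong 1980
delta WGS84 : 162.619 -276.959 -161.764
ellipsoid   : International 1924
major s-ax  : 6378388
minor s-ax  : 6356911.946
origin long : 114.17855
origin lat  : 22.3121333
origin X    : 836694.05
origin Y    : 819069.8
scale fac   : 1.0
units       : m
parameters  : 0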
Now that the new projection file has been developed, the procedure to apply
the Hong Kong 1980 projection in a PROJECT operation will be very
similar to what we did in the previous section (Section 3.2.2) using the
TerrSet-supplied projection. Therefore, the instructions will be slightly
briefer in this section, as it is assumed that you are familiar with the module
steps.
◆ ◆ ◆
Applying a user-specified projection
Menu Location: File – Reformat – PROJECT
1. Use the main menu to start the PROJECT module.
2. In the PROJECT dialog box, use the radio button to specify
that the Type of file to be transformed is raster.
3. Click on the browse button (…) next to the text box for
specifying the Input file name, and select the etm1 file from
the Pick list window.
4. Click on OK to close the Pick list.
5. Enter a new name for the transformed file in the Output file
name text box: etm1_hk80.
6. Specify the Reference file for output result by clicking on
the browse button (…) next to the text box. In the resulting
Pick list window, double click on TerrSet/Georef* to see the
files in that folder.
7. Scroll down the list and select HK80.
8. Click the OK button to return to the PROJECT dialog box.
9. Note that the PROJECT dialog box OK button is grayed
out. TerrSet forces us to check the output parameters.
Therefore, click on the button for Output reference
information.
10. The Reference Parameters dialog box will open. Click on
OK in this dialog box to close it.
11. The PROJECT dialog box OK button will now be
enabled. Click on the button to start the resampling operation.
*Or your working folder, if you saved the reference file there.
◆ ◆ ◆
The final image is now projected with coordinates consistent with
topographic maps published by the government of Hong Kong. When done
processing, TerrSet will automatically display the image with a color palette
applied. The image should look very similar to the image created with the
geographic projection (Figure 3.2.2.c).
See if you can, on your own, create a map display, as we did for the
geographic coordinate projection (Figure 3.2.2.d). You can follow the
instructions at the end of Section 3.2.2 for creating the map, if you need to be
reminded of the steps. However, because this map has a different projection,
in the Map Grid pane you will need to choose values of 20000 for the
Increment X and Increment Y, with 0 decimal places. Also, in the Scale Bar
pane, you will need to choose 20000 for the Length (in Ground Units)
parameter. Don’t forget to save your map composition when you are done,
through the Save button on the Composer window. The end results should
look something like Figure 3.2.3.a.
Figure 3.2.3.a Hong Kong Landsat image map using Hong Kong 1980
national grid projection.
3.2.4 Resample - Transformations with
control points
So far, we have studied reprojecting an image based on the
mathematical formulae of the projections themselves, using the PROJECT
module. Reprojecting implies the image is already on a map projection, and
that we would like to change that projection. However, in many cases the
original image is not projected on a formal map projection, but is simply
organized based on the view of the sensor at the time of acquisition.
Sometimes, even if the image is already on a projection, we may find that the
georeferencing was only approximate, and we need to do a more precise
registration of the image. For example, when we do change detection analysis
in Chapter 7, in which we will identify and map changes on the landscape, it
is essential that we have a very precise co-registration between two images.
Likewise, when we mosaic two images, it is very important that the images
match well, so there is no obvious misregistration at the join between the two
images.
In the circumstances described above, we need to develop an empirical
georeferencing, where we calculate our own formula for the relationship for
the transformation from the original image orientation to the map projection.
In TerrSet, this is done with the RESAMPLE module.
RESAMPLE performs a matrix transformation on a raster file using an
equation determined by a series of user-defined ground control points.
These ground control points are points on both images that correspond to
identical ground features. The feature could be a road intersection, or a
bridge, or a natural feature like a stream confluence. The error of the
transformation can then be controlled by the accurate placement of the
control points, as well as by the order of the polynomial equation.
Using these control points, a set of polynomial equations is developed to
describe the transformation of data from its original (input) grid to a new
(output) one. Often this is accomplished using least-squares fit to a
polynomial of the form:

x' = a_0 + a_1x + a_2y + a_3xy + a_4x^2 + a_5y^2 + ... + e_x
y' = b_0 + b_1x + b_2y + b_3xy + b_4x^2 + b_5y^2 + ... + e_y

where e_x and e_y are residual errors after the transformation. TerrSet includes
the option of using the linear, quadratic or cubic mapping functions (the first,
second and third orders of the polynomial equation).
The simplest transformation is a static shift of the coordinate system and
would only involve the first terms from the equations above (a_0 and b_0). A
simple linear transformation would also include rotating and scaling the image
(the next terms of the equations, a_1x + a_2y and b_1x + b_2y), and the second
and third order terms of the equations would account for nonlinear
transformations that
would correct skew, roll, keystone effects, etc. Note that a least-squares
polynomial fit cannot and does not correct for parallax caused by topography.
To correct for parallax, a process called orthorectification is required, a
capability not currently available in TerrSet.
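To make the fitting step concrete, here is a minimal NumPy sketch (not TerrSet's internal implementation) of a first-order least-squares fit, using hypothetical GCP coordinates:

import numpy as np

# Hypothetical GCPs: (x, y) in the input image, (X, Y) in the output system.
inp = np.array([[10.0, 12.0], [85.0, 20.0], [40.0, 90.0], [70.0, 65.0]])
out = np.array([[110.5, 213.0], [186.0, 218.5], [138.0, 291.2], [169.5, 266.0]])

# Design matrix [1, x, y] for the linear mapping function.
A = np.column_stack([np.ones(len(inp)), inp[:, 0], inp[:, 1]])
coef_x, *_ = np.linalg.lstsq(A, out[:, 0], rcond=None)  # a_0, a_1, a_2
coef_y, *_ = np.linalg.lstsq(A, out[:, 1], rcond=None)  # b_0, b_1, b_2

# Residuals: differences between actual and predicted output positions.
residuals = out - A @ np.column_stack([coef_x, coef_y])
print(coef_x, coef_y, residuals)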
One characteristic of the RESAMPLE operation is that the program does not
automatically calculate the output boundaries, reference system or the unit
distance of the output image. It is very important for the user to know the
minimum and maximum X and Y coordinates of the final output image as
well as the number of columns and rows that will span that region. Note that
the span of the X and Y coordinates and the number of columns and rows
define the resolution of the output image. You are free to set any resolution
you desire; however, in most cases it is preferable to maintain the original
resolution of the data. Fortunately, one can also copy the reference
parameters from an existing file, just as we did in the PROJECT module.
Now that we have some background in the georeferencing of images of an
unknown projection, we will apply the TerrSet RESAMPLE procedure to the
Hong Kong Landsat ETM+ images. As we have seen already, these images
were provided by the vendor in two different projections; the images of path
121 are referenced to UTM 50 and the path 122 images are referenced to
UTM 49. We will use the RESAMPLE module to provide an accurate
transformation of the etm_p121r44-45_b3 image to the UTM-49n projection.
Although we could also use the PROJECT module in this case, since the
images are already on a projection, the RESAMPLE approach is perhaps the
best choice because the original georeferencing was only an approximate
correction, not a precision correction. Thus, PROJECT will not be able to align the
images from two different satellite orbit paths with the accuracy we would
like.
Before we begin, we need to set the display minimum and maximum DN
values in the metadata of the images we will use, so that the images are
optimally stretched in the display process. If this preparatory step is
confusing, you may need to review Section 2.2.3.
◆ ◆ ◆
Specify metadata values for an optimal display stretch
1. Open the TerrSet EXPLORER window using the main
menu or icon bar.
2. In the TerrSet EXPLORER window, select the Files tab.
3. If the files are not listed in the Files pane, double click on
the directory name to display the files.
4. In the Files pane, click on the etm_p121r44-45_b3.rst
image.
5. In the Metadata pane below the file listing, drag the slider
down until the categories of Display min and Display max are
visible.
6. Type 18 in the text box to the right of Display min field.
7. Type 100 in the text box to the right of Display max field.
8. Click on the Save icon (the floppy disk icon) in the bottom
left hand corner of the Metadata pane.
9. In the Files pane, click on the etm_p122r44-45_b3.rst
image.
10. In the Metadata pane, once again drag the slider down
until the categories of Display min and Display max are
visible.
11. Type 10 in the text box to the right of Display min field.
12. Type 120 in the text box to the right of Display max field.
13. Click on the Save icon in the bottom left hand corner of
the Metadata pane.
◆ ◆ ◆
In the instructions above, we have specified optimal display minimum and
maximum values to save time. Be aware, however, that normally you would
identify the appropriate values by first running the HISTO program to get
approximate values, and then selecting values by interactively modifying the
values in the Layer Properties window, which in turn is accessed through the
Composer Window.
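The stretch itself is a simple linear rescaling; a minimal sketch of the standard formulation (TerrSet's exact screen scaling may differ in detail), using the Display min/max values just entered for the path 121 image:

import numpy as np

def display_stretch(dn, display_min, display_max):
    # Map DNs linearly to 0-255 screen intensities; values outside
    # the chosen range saturate to black or white.
    scaled = (dn.astype(float) - display_min) / (display_max - display_min)
    return np.clip(scaled, 0.0, 1.0) * 255.0

dn = np.array([10, 18, 59, 100, 140])
print(display_stretch(dn, 18, 100))  # [0. 0. 127.5 255. 255.]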
◆ ◆ ◆
Georeferencing using RESAMPLE
Menu Location: IDRISI Image Processing – Restoration –
RESAMPLE
1. Use the menu to start the RESAMPLE program.
2. The RESAMPLE window will open (Figure 3.2.4.a).
◆ ◆ ◆
Figure 3.2.4.a The RESAMPLE window.
Let’s take a moment to examine the RESAMPLE window. The window
comprises five major sections:
• The Resample file specification section defines the input and output file
names, as well as the output bounds and resolution.
• The Ground control points section allows an interactive designation of
the control points by providing displays of the reference image and the
image to be transformed.
• Mapping function determines the type of equations used in the
transformation.
• Resampling type gives the option of bilinear or cubic convolution
resampling, which involve a smoothing of the data during the
transformation, or nearest neighbor resampling, which retains the spectral
integrity of the original pixels.
• The Magnify section specifies the amount of zoom for the displayed
images.
Let’s now fill in the Resample file specification section.
◆ ◆ ◆
Georeferencing using RESAMPLE (cont.): Resample file
specification
3. In the Resample file specification section of the
RESAMPLE window, click on the browse button (…) to the right
of the Input Image text box.
4. From the Pick List, select etm_p121r44-45_b3, which you
created in Section 3.1 from the two original ETM+ path 121
images.
5. Click OK to close the Pick list.
6. In the Output Image text box, enter etm_p121r44-
45_b3_UTM49.
7. Accept the default value of 0 to use as the Background
Value.
8. Click on the button for Output reference parameters...
9. In the Reference Parameters dialog box, click in the check
box for Copy from existing file.
10. Click on the browse button (…) to the right of the text box
for Copy from existing file.
11. Select file etm_p122r44-45_b3 from the Pick list window,
and then click on the OK button to close the Pick list.
12. Note that the fields in the Reference Parameters window
were populated with information obtained from the
etm_p122r44-45_b3 metadata file (Figure 3.2.4.b).
◆ ◆ ◆
Figure 3.2.4.b The Reference parameters window with parameters specified
from an existing file.
The transformed path 121 image generated by the RESAMPLE program will
have the same projection, bounds and resolution as the path 122 image. This
will facilitate mosaicking the path 121 and 122 images.
We will now begin to choose our ground control points (GCPs). The GCPs
are important because they are used to determine the polynomial equation
developed by RESAMPLE and subsequently used in the transformation. The
geographic coordinates of the GCPs may be determined by using a GPS
receiver at the locations themselves (hence the name “ground control points”),
or by locating the exact same feature in the second raster file that has a
known and accurate geographic reference system.
For our Hong Kong example, we will use the latter option, that of obtaining
the location from another image, in this case the satellite image from the
adjacent path. This may seem a bit surprising, since we earlier said that we
would use RESAMPLE precisely because the georeferencing for these
images is only approximate, which would seem to contradict using the
adjacent path's image for GCPs. However, in this case,
our concern is not absolute georeferencing, but rather obtaining a high quality
relative georeferencing of the one image to the other, so that they can be
mosaicked without an obvious misalignment at the join.
There are two methods for entering the locations of control points in the
RESAMPLE dialog box: by image matching or by manually keying in the
data. With both methods, a correspondence file is created that records the
input and output coordinates for each GCP. Ground control points are entered
into the grid on the form. Each line of the grid has text boxes for four
numbers. The first two text boxes are used to specify the input X and Y
coordinates of a point in the input reference system. The last two text boxes
are used to specify the coordinates of that same point in the output reference
system.
If points are being digitized for both the input and output reference images,
after three control points are entered, subsequent output points will be
interpolated linearly, and automatically placed on the output reference grid.
To enter GCPs by image matching, we need to display both the
input and output reference files.
◆ ◆ ◆
Georeferencing using RESAMPLE (cont.): Specifying the
reference files
13. Close the Reference Parameters window by clicking on OK.
14. Within the RESAMPLE window, click on the DISPLAY
LAUNCHER icon to the right of the Input reference text box.
This will bring up the DISPLAY LAUNCHER dialog box.
15. Within the DISPLAY LAUNCHER, select the raster layer
etm_p121r44-45_b3.
16. Use the radio button to specify the palette file as
GreyScale.
17. Click on OK to display the image.
18. Within the RESAMPLE window, click on the DISPLAY
LAUNCHER icon to the right of the Output reference text
box. This will once again bring up the DISPLAY LAUNCHER
dialog box.
19. Set the raster layer as etm_p122r44-45_b3, the palette file
as GreyScale, and click OK.
◆ ◆ ◆
The displayed images are now linked to the RESAMPLE module for
automatic input of locations into the record of GCP locations. However, the
displays themselves are not linked to one another, so take care in zooming to
create similar zoomed displays.
A potentially useful feature in the RESAMPLE window is the ability to
display a zoom window that magnifies the location of your cursor within the
main image display. The Magnify window gives a detailed view of a portion
of the main display, to help locate your control point more precisely. It is
very important that each control point be located within one pixel of the
correct location, and so the Magnify window is very important for obtaining a
satisfactory transformation.
We will now modify the zoom window to aid us in placing our ground
control points.
◆ ◆ ◆
Georeferencing using RESAMPLE (cont.): Using the
Magnifier window
20. In the Magnify section of the RESAMPLE window, move
the Zoom factor slider to 4x.
21. Uncheck the Show cursor check box, as we will want a
clear view in the zoom window without the cursor present.
22. Move your cursor over one of the two main displayed
images and observe how the Magnify display works in
creating a zoomed view in the RESAMPLE window (Figure
3.2.4.c).
◆ ◆ ◆
Figure 3.2.4.c The RESAMPLE window, including the Magnifier display.
The Magnifier magnifies what is displayed in the DISPLAY VIEWER, and
does not return to the original image to show the full resolution of the data
set. Thus, if the image in the DISPLAY VIEWER is not at full resolution, then
the image in the Magnifier will also not be at full resolution. This is because
your screen monitor has a limited resolution, and cannot display the entire
image at full resolution. Instead, the DISPLAY VIEWER automatically
shows a reduced resolution image, in order to fit the image on the screen.
Only by showing a subset of the image can we see the image at full
resolution, as we will see in the next few steps.
Once we have all the displays as we want, let’s begin placing GCPs using the
Digitize GCP option in the top right corner of the Ground control points
section of the RESAMPLE window.
◆ ◆ ◆
Georeferencing using RESAMPLE (cont.): Identifying
the first GCP
23. Make the Input reference display window (i.e. the left
display window showing etm_p121r44-45_b3) the focus
window by clicking in or on the borders of the window. This
will make the window frame a different color and bring it to
the front of any other windows with which it may overlap.
24. We will now need to select an area for the first GCP.
Specifically, we’d like a feature that is found on both images,
and has a very distinctive shape so we can identify a location
down to the specific pixel.
25. Click on the Zoom Window icon in the main tool bar.
26. Move the cursor into the Input reference display.
27. Click the left mouse button, and keeping the button
depressed, draw a box around the general vicinity of the first
GCP. (Figure 3.2.4.d highlights the region we will select,
which is an island in the bay. Make sure you can find the
island on both displays, input and output, before you start
zooming in.)
28. If necessary, you can refine the area that you have zoomed
in on by repeatedly using the Zoom Window icon, or using the
Zoom window icon in conjunction with the Zoom in / Center
and Zoom out / Center tools.
◆ ◆ ◆
Figure 3.2.4.d General vicinity of the first GCP as shown by the white box.
◆ ◆ ◆
29. Perform the same set of operations to zoom in to the same
location in the right Output reference display.
30. In the RESAMPLE window, in the Ground Control Points
section, and near the words Digitize GCP, find the Input
button, and click on it. A GCP will appear, located in the
center of the input reference image, with a numeric identifier
(1, for this first GCP).
31. Move the GCP to an identifiable location you can find in
both displays by selecting the GCP with the cursor and
dragging it to the feature. The edge of the breakwater makes a
good choice for such a feature.
32. In the RESAMPLE window, click on the (Digitize GCP)
Output button. A GCP with the same numeric identifier as
before (1, for this first GCP) will appear in the center of the
output reference image.
33. Move the GCP to the location of the same feature
identified in the input display window (Figure 3.2.4.e).
◆ ◆ ◆
Figure 3.2.4.e First GCP location.
Note that the RESAMPLE window now has the coordinates of the first GCP
recorded on the first line of the Ground Control Points form (Figure 3.2.4.e).
Now that we have mastered the capability to locate ground control points, we
will pick at least six additional points. By picking at least seven GCPs, we
will have a number of redundant points, so that we can get a reasonable
estimate of the error in the transformation. If we were using a higher order
transformation we would need even more points.
Figure 3.2.4.f GCPs and RMS error in the RESAMPLE window and
displays.
Here are some suggestions you should consider as you choose your GCPs:
• You should aim to have all GCPs as well-distributed around the images
as possible (Figure 3.2.4.f). Thus, in general, try to select each new GCP
some distance from the previous GCPs.
• In picking GCPs, make sure that you are not confusing boat wakes or
other temporary features with permanent features that are likely to be
present in both images.
• It can be quite difficult to find suitable objects, so you may want to
zoom in slowly, first looking at a general region, and then zoom in a
second, and even possibly a third time.
• The best objects to use are road intersections or other linear objects. Do
not use indeterminate objects, for example, the center of an island, unless
that island is only 1-2 pixels in size.
• Once three GCPs have been selected, a linear solution is calculated, and
the total root mean square error (RMS) and the residual error for each point
in the GCP table are displayed; with only three points the linear fit is exact,
so these are all zero. The next time you pick a
GCP in the input display, the position of the GCP in the reference image
is estimated automatically for you. It is very, very important that you
do not simply accept this estimated location. You must check the
location very carefully, and in general you probably will find you need to
move the GCP slightly. If you do not conscientiously move the
automatically selected points, you will end up with a completely false
estimate of the error in your transformation.
• The RMS is an estimate of the average error of the points you have
selected (a short sketch of the calculation follows this list). It is
important to realize it is only an estimate based on the points you have
selected, and not for every pixel in the image. This again emphasizes the
importance of a well-distributed set of GCPs, so that your estimate of
error is representative of most of the image.
• The Remove GCP button in the RESAMPLE window will delete an
entire row in the grid, and is useful if you feel a point is not worth
keeping.
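As an aside, the total RMS that RESAMPLE reports is simply the square root of
the mean of the squared residuals of the included GCPs. The following minimal
Python sketch illustrates the calculation with hypothetical residual values
in meters (TerrSet computes this for you; the code is purely illustrative):

import numpy as np

# Hypothetical per-GCP residual errors, in meters (not from the exercise).
residuals = np.array([8.2, 11.5, 6.9, 14.1, 9.8, 12.3, 7.4])

# Total RMS: the square root of the mean of the squared residuals.
rms = np.sqrt(np.mean(residuals ** 2))
print(round(rms, 1))  # about 10.3; aim for less than half the pixel size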
◆ ◆ ◆
Georeferencing using RESAMPLE (cont.): Identifying
additional GCPs
34. Select Input reference display window by clicking in the
left image.
35. Return the image to its original zoom and extents by
clicking on the Full extent normal icon on the main menu bar.
36. Select the Output reference display window by clicking in
the right image.
37. Return the image to its original zoom and extents by
clicking on the Full extent normal icon on the main menu bar.
38. Identify another area that is present on both images and
potentially has features that you may be able to use as GCPs.
39. Select the Input reference display window by clicking in
the left window.
40. Click on the Zoom Window icon in the main tool bar.
41. Move the cursor into the Input reference display.
42. Zoom in around the general vicinity of the next GCP.
Make sure you can find the same area on both displays, input
and output, before you start zooming in.
43. Click on the Input button in the Ground Control Points
section and a GCP will be located in the center of the input
reference image with a numeric identifier.
44. Move the GCP to an identifiable location observed in both
displays by selecting it with the cursor and dragging it to a
feature that you can identify in both images.
45. Click on the digitize Output button. A GCP with the same
numeric identifier as that of the Input reference image will
appear in the center of the Output reference image.
46. Move the GCP to the location of the same feature
identified in the Input Reference display window.
47. Pick at least seven GCPs in total (Figure 3.2.4.f).
◆ ◆ ◆
Your aim should be for an RMS of less than one half the resolution of the
input image. For the Hong Kong image, we have 30 meter pixels, and
therefore our aim is for an RMS of 15 m or less. An error of 30 m or more
means that on average your image registration is off by one pixel or more.
We suggest that in this exercise, if your error is more than 30 meters, you
should review all the GCPs.
One solution to a high RMS is to discard the GCPs that have the highest
residuals. These points lie furthest away from the average transformation
calculated for the images. You can omit any GCP by changing the Include
option in the RESAMPLE window from Yes to No. This is done by simply
clicking in the appropriate cell from the Include column, and using the drop-
down menu to select No. Note that once you change the Include attribute of a
GCP, the RMS and residuals are automatically recalculated.
It is important, however, that the ground control points cover the image area
evenly to control the error. If one were not to pick any control points in an
area of the image, the transformation of the image in that area could result in
large error even though the overall RMS is within an acceptable range.
Therefore, if you have to exclude more than one GCP, or if after excluding
even one GCP you find that your points are no longer well-distributed across
the image, you should probably add one or more points.
The GCP list can be modified, saved and retrieved. You can save your GCPs
at any time using the Save GCP button. This will save the GCPs to an IDRISI
correspondence file (.cor), which can be retrieved at a later time. Use the
Retrieve GCP button to retrieve saved GCPs. Let’s save our GCPs now in
case we wish to use them again.
◆ ◆ ◆
Georeferencing using RESAMPLE (cont.): Saving GCPs
48. In the RESAMPLE window, click on Save GCP as.
49. Type in the file name as Hong_Kong_GCP. (TerrSet
will automatically add the .cor extension.)
50. Click on Save.
◆ ◆ ◆
Once you are satisfied with the GCP list and the resulting RMS error, the
resample process can be performed. Before performing the resample,
however, the last criteria needed are the mapping function and the resampling
type to perform on the input files. The mapping function is simply the order
of polynomial fit desired: linear (first order), quadratic (second order), or
cubic (third order). A lower order of polynomial often provides a reasonable
solution since the error associated with poor control point designation
increases as the order of equation increases. Since we are simply
transforming from adjacent UTM zones in this case, the first order (linear)
mapping function is adequate.
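For readers curious about what the linear (first order) mapping function
looks like, the sketch below fits the six coefficients of an affine
transformation, x' = a0 + a1*x + a2*y and y' = b0 + b1*x + b2*y, to
hypothetical GCP coordinate pairs by least squares. This only illustrates the
idea; RESAMPLE performs the fit internally.

import numpy as np

# Hypothetical GCP coordinates in the input (old) and output (new) systems.
x_in = np.array([1200.0, 4800.0, 2500.0, 6100.0, 3300.0])
y_in = np.array([900.0, 1500.0, 5200.0, 4400.0, 2800.0])
x_out = np.array([801210.0, 804790.0, 802515.0, 806090.0, 803305.0])
y_out = np.array([2450905.0, 2451490.0, 2455215.0, 2454395.0, 2452810.0])

# Design matrix for the first-order polynomial: [1, x, y].
A = np.column_stack([np.ones_like(x_in), x_in, y_in])
a_coef, *_ = np.linalg.lstsq(A, x_out, rcond=None)  # x' = a0 + a1*x + a2*y
b_coef, *_ = np.linalg.lstsq(A, y_out, rcond=None)  # y' = b0 + b1*x + b2*y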
The new pixel locations in the geographically referenced grid, as determined
by the polynomial equations, may not align exactly with any existing pixel
centers in the original data grid. TerrSet offers three procedures to
determine the new pixel's digital number value: nearest neighbor, bilinear
interpolation, and cubic convolution. In nearest neighbor interpolation, the
value of the input cell closest to the position of the output cell is used.
In bilinear interpolation, a distance-weighted linear average of the four
closest cells is used. In cubic convolution, the 16 nearest cells are used
to fit a smoothing function. Since we have no need to smooth the data,
nearest neighbor can be chosen.
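The difference between the first two options can be seen in a small sketch.
Assume a hypothetical output location that falls at fractional offsets
(0.3, 0.6) within a block of four input pixels:

import numpy as np

# A 2 x 2 block of hypothetical input DNs; rows are y, columns are x.
dn = np.array([[52.0, 60.0],
               [48.0, 70.0]])
fx, fy = 0.3, 0.6  # fractional position of the output cell in the block

# Nearest neighbor: take the value of the closest input cell.
nearest = dn[round(fy), round(fx)]

# Bilinear: distance-weighted average of the four surrounding cells.
bilinear = (dn[0, 0] * (1 - fx) * (1 - fy) + dn[0, 1] * fx * (1 - fy) +
            dn[1, 0] * (1 - fx) * fy + dn[1, 1] * fx * fy)
print(nearest, bilinear)  # 48.0 and 54.52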
The reference unit is simply the unit of measure used in the reference
coordinate system (e.g. meters). The unit distance refers to the actual ground
distance spanned by one reference unit. The unit distance will be 1.0 in most
cases. One exception would be the case of a latitude-longitude reference
system where the unit distance could be in fractions of a degree.
◆ ◆ ◆
RESAMPLE (cont.): Running the program
51. Accept the default Mapping function of Linear.
52. Use the default Resampling function of Nearest Neighbor.
53. Click on OK in the bottom of the RESAMPLE
window. This will begin the resample process and the
resulting image will be displayed in a new window.
◆ ◆ ◆
The new path 121 image is displayed automatically with the same bounds,
resolution, and projection as the path 122 image. Note that the automatic
display uses the default color palette, and therefore will not look quite the
same as the Input reference image. This is easily corrected, using the method
of changing palette files that we have already experimented with earlier. You
are probably quite familiar with the method now, but for completeness’ sake,
we repeat it here.
◆ ◆ ◆
Change display properties of an image
1. In the Composer window, select the button for Layer
Properties.
2. In the Layer Properties window, click on the button for
Advanced Palette/Symbol Selection.
3. This will open the Advanced Palette/Symbol Selection
window. In this new window, click on the browse button (…)
next to the Current selection text box.
4. In the file Pick list, click on the TerrSet\Symbols folder,
and scroll down to greyscale. Click on OK in the Pick list,
and in the Palette\Symbol Selection window.
5. Note that you may need to adjust the display minimum and
maximum values in the Layer Properties window. Use the
values we chose at the start of this section: 18 and 100.
◆ ◆ ◆
Now that the two images for paths 121 and 122 are on the same projection,
we will join them to create a single mosaic image of Hong Kong in the next
section.
3.3 Mosaicking Images
3.3.1 Background
So far we have focused on the geometric challenges of joining images. There
is, however, an added radiometric problem: images acquired at different
times typically have different radiometric properties, so the DN values for
a particular area in one image are not identical to those in another image
of the same area. Among the many reasons for this variation are
illumination differences due to changes in sun angle with time of day or
season, changing atmospheric properties (especially water vapor and
pollutants), and sensor differences.
Joining images of dissimilar radiometric properties is a major challenge in
image processing. Generally, the goal in mosaicking is to blend adjacent
images in an effort to make the join appear seamless. The balancing of the
radiometric properties between imagery can be a highly subjective process
based on human perception of what a person believes looks "good."
The mosaicking process is relatively effective in areas of sparse development
and low terrain relief. However, we should be aware that sometimes there are
real differences between images that are not an artifact of the image
acquisition. Thus, for example there may be vegetation differences due to
seasonal or climate variations, variation in the presence of snow on the
ground, changes in the landscape due to fires, agriculture or urbanization, and
even differences due to the presence of clouds. Generally, there is very little
hope for removing such image differences.
Numerous methods have been developed to make the image join less
discernable, for example by blending the overlapping regions, matching the
histogram of the images, or using a nonlinear cut line. One approach is to
blend the DN values across a join region, rather than leaving a sharp line
where the images join or overlap. If a blend region is used, one could
average the DN values of the two input images. Alternatively, feathering, a
special case of the averaging method, could be used. With feathering, the
new DN value is a weighted average, where the weights are a function of the
distance across the join. Thus, the overlap region becomes a progressive
blending of one image into the other.
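A minimal sketch of feathering, with hypothetical DN values for one row of
the overlap region, might look like this (MOSAIC's Average option is a
simple average; this weighted version is shown only to illustrate the
general idea):

import numpy as np

# Hypothetical DNs for one row of the overlap region from each image.
overlap_left = np.array([80.0, 82.0, 81.0, 79.0, 78.0])
overlap_right = np.array([70.0, 71.0, 73.0, 72.0, 74.0])

# Feathering: weights fall linearly from 1 to 0 across the join, so the
# output blends progressively from the left image into the right image.
w = np.linspace(1.0, 0.0, overlap_left.size)
blended = w * overlap_left + (1.0 - w) * overlap_right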
3.3.2 Mosaicking the Hong Kong data
We did not run into this problem of radiometric matching between images in
our concatenation exercise (Section 3.1.2) because when we joined the
images that were along the same satellite paths, there was essentially no
difference in time, atmospheric conditions or sensor properties between them.
However, we have not yet joined the images from the different paths,
specifically paths 121 and 122. Since the paths were imaged on different
days, we should expect there to be radiometric differences between the
images.
In TerrSet, the MOSAIC module creates a new image by spatially orienting
overlapping images and balancing the overlap regions by numerically
averaging the individual pixels in the overlap region. We will use MOSAIC
to join the two path images for Hong Kong. If you have not already done the
exercises in Section 3.1, you will need to do them now in order to generate
the files needed for this section.
◆ ◆ ◆
MOSAIC
Menu Location: IDRISI Image Processing – Restoration –
Mosaic
1. Use the main menu bar to start the MOSAIC program.
2. The MOSAIC dialog box will open. In the Images to be
processed section of the dialog box, click in the first text box
below Filenames, to bring up the browse button (…).
3. Click on the browse button.
4. Select the path image generated using the RESAMPLE
program, etm_p121r44-45_b3_utm49.
5. Click OK.
6. Click in the second text box below Filenames, to bring up
the browse button (…).
7. Click on the browse button.
8. Select the path image, etm_p122r44-45_b3.
9. Click OK.
◆ ◆ ◆
Note that the module allows for different background values to be selected.
The Output background value is the DN value the program will assign to
pixels that lie outside the bounds of the two images we are joining. A DN of
0 is the default, and the most common background value used. The Default
input background value is the DN value which will be regarded as not part of
the input images. This is a very important option, because our images have
large blank areas, which we expressly do not want the MOSAIC program to
regard as part of the data to be used in the mosaicking process.
The MOSAIC dialog box has a default procedure for the radiometric
matching between the images. If this option is deselected, then the MOSAIC
module operates much like the CONCAT module in that no radiometric
adjustment is applied.
There are two options for determining the output pixel DN values in the
overlap region: Cover and Average. We will test both options to see which
works best for us in this case.
◆ ◆ ◆
MOSAIC (cont.): Cover option for Overlap Method
10. Type HK_b3_mosaic_cover in the Output mosaicked
image text box.
11. Leave remaining options at their default settings (Figure
3.3.2.a).
12. Click OK to run MOSAIC.
◆ ◆ ◆
Figure 3.3.2.a MOSAIC dialog box with initial parameter selection.
The image is automatically displayed after the processing is complete. In this
case, the mosaic has a GreyScale palette applied, which is appropriate for the
data. However, remember to adjust the contrast stretch values. In the
Composer window, click on the Layer Properties button, and in the Layer
Properties window, set the Display min and Display max to 18 and 100,
respectively. Figure 3.3.2.b shows the image after the contrast stretch has
been applied.
The first thing that is obvious upon examining the output image is that there
is a sharp line at the boundary between the two images. However, this line is
only evident in the water part of the image. For the land part of the image, the
two images appear to be well balanced and almost seamlessly joined. This
suggests that the image differences in the sea portion of the mosaic represent
a real difference in water quality between the two images. For example, the
amount of sediment in the ocean might be different between the two dates.
Figure 3.3.2.b Hong Kong Landsat mosaic.
This difference between the two images in the sea portion of the mosaic
could possibly be minimized by further processing or more involved
mosaicking procedures not available in TerrSet. However, it is doubtful that
these differences could totally be overcome.
Overall, one could be satisfied with this mosaic. However, let’s try the
Average cover option for the overlap region to see if it yields any
improvement.
◆ ◆ ◆
MOSAIC (cont.): Average option for Overlap Method
13. Because of TerrSet’s persistent windows, the MOSAIC
window should still be open in your TerrSet workspace.
14. Type HK_b3_mosaic_average in the Output mosaicked
image text box.
15. Change the Overlap method to Average, by clicking on
the radio button.
16. Leave remaining parameters as before.
17. Click on OK to run MOSAIC.
18. Once the program has finished processing the image,
don’t forget to change the display min and display max to 18
and 100 in the Layer Properties dialog box.
◆ ◆ ◆
The resulting image (Figure 3.3.2.c) has perhaps a less noticeable join in the
water part of the image. However, there is a striking diagonal pattern across
the left side of the image. This artifact is due to the characteristics of the
preprocessing of the original data. The edge effects on the left side of the path
121 image are due to the jagged scene edges and shutter intrusion at the end
of each scan. The jagged edges have non-zero data and thus are not ignored
in the average option, unlike the background areas.
Figure 3.3.2.c Hong Kong Landsat mosaic using “average” option.
Thus, while the Average option did indeed help create a smoother join in the
water areas, averaging changes the actual radiometric values of the images
and may cause difficulties in classifying the imagery later.
Before moving to the next exercise, close all windows.
3.4 Landsat Import
TerrSet provides tools to facilitate the import of specific image products
from satellite programs and sensors such as Landsat, Sentinel and MODIS.
The LANDSAT import module reads the metadata (MTL) file in order to
process the images. It is important, therefore, that the MTL file is present
with the band data in order to use the import routine for LANDSAT. The
TerrSet LANDSAT module currently imports MTL files from
EarthExplorer (see Appendix). EarthExplorer provides all bands in a
compressed format; the files for this exercise have already been
decompressed. To begin working, add the folder RSGuide\Chap1-4\Raw
images\HK122045 as a resource folder (see Section 1.3.4 if you don’t
remember how to do this). This is the original uncropped Hong Kong data for
path-row 122-45, that we used in previous exercises within this chapter.
◆ ◆ ◆
Import data
Menu Location: File – Import – Government Data
Provider Formats – Landsat Data Archive
1. This program has no associated icon on the main toolbar,
so use the menu as described in the title to this instruction box
to start the LANDSAT module.
2. The LANDSAT dialog box will open. (Figure 3.4.a shows
the dialog box with the options described below selected.)
3. To select the input file, click on the pick button (…) for the
Landsat metadata (MTL) file text box.
4. A Pick list window will open. Double click on the
RSGuide\Chap1-4\Raw images\HK122045 subfolder title.
5. The plus sign next to the title will change to a minus sign,
and a list of three file names will be displayed.
6. Click on
LE07_L1TP_122045_20011120_20170202_01_T1_MTL.
7. Click on OK, to close the Pick list window.
8. TerrSet will automatically input all bands within the folder.
9. You can specify which bands to import by selecting "yes"
or "no" under the Include column. For this exercise, we will
include only bands 1, 2, 3, and 4 (blue, green, red, and near
infrared); for all other bands select "no" under the Include
option (be sure to scroll down to specify "no" for the QA
band).
10. For each of the bands to include you need to specify an
Output Image Name. The default is to have the same output
name as the input. We will change these to HK_b1, HK_b2,
HK_b3 and HK_b4 for bands 1, 2, 3, and 4 respectively.
11. Under Multispectral Bands select Raw DN. The
LANDSAT module allows us to import the bands as Raw DN
values, or to convert the image to Top of the Atmosphere
(TOA) radiance or to Reflectance (using an apparent
reflectance model, a dark object subtraction model or a Cos(t)
model). In this case, we will import them as Raw and will do
the atmospheric correction in a subsequent exercise.
12. Click OK to run LANDSAT.
◆ ◆ ◆
Figure 3.4.a LANDSAT import module.
All bands will be imported although TerrSet will automatically display the
first band. You can visualize other bands by going to TerrSet EXPLORER.
Note that the extent of this scene is much larger than the etm_p122_r45
images used in previous exercises. The section of the image used in previous
exercises is in the top right corner of this scene; zoom in to that region and
identify the Hong Kong airport.
3.5 Subsetting an image
In many cases when your study area is smaller than the scene, you will want
to crop the image to remove unwanted data. Although TerrSet can work
effectively with full scenes, by subsetting the image we decrease the amount
of storage space needed and the processing time. We will now subset the four
imported bands to show only the top right corner.
To do this, we use a TerrSet module called WINDOW. WINDOW allows us
to subset one or more images based on the row/column positions or
geographic locations of the four new image corners. WINDOW also allows us
to copy the window parameters from an existing windowed image.
In order to start this exercise, we will first create a raster group file with all 4
bands (HK_b1, HK_b2, HK_b3, and HK_b4). If you don’t remember how
to do this, review the instructions in Section 1.3.7. Rename the raster group
file to All_HK.
◆ ◆ ◆
Subset image
Menu Location: File – Reformat – WINDOW
1. Use the menu as described in the title to this instruction box
to start the module WINDOW. Figure 3.5.1.a shows the
dialog box with the options described below selected.
2. Click on Insert layer group and select the raster group file
All_HK. All four bands within the raster group file will
populate under Image files.
3. Since we are subsetting more than one image we will
specify a prefix for the output. Enter win_ as the Output
prefix.
4. Select the radio button for the option Add prefix to file
name.
5. Under Window specified by, select the radio button
Geographical positions.
6. In the right side of the dialog we will enter the values of the
minimum and maximum X and Y corners of the new subset
image. Minimum X coordinate: 786525, Maximum X
coordinate: 837165, Minimum Y coordinate: 2447085 and
Maximum Y coordinate: 2482725.
7. Click OK to run the WINDOW module.
◆ ◆ ◆
Figure 3.5.1.a WINDOW dialogue box
Let’s now display the cropped images as a true color composite.
◆ ◆ ◆
Create a color composite image
Menu Location: File – Display – COMPOSITE
1. Start the COMPOSITE program using the main menu or
tool bar.
2. In the COMPOSITE dialog box, specify the file name for
the Blue image band win_HK_b1. Click on OK to close the
Pick list.
3. Specify the Green image band as win_HK_b2.
4. Specify the Red image band as win_HK_b3.
5. Select the radio button for the option Linear with saturation
points under contrast stretch type.
6. Set the Percent to be saturated from each end of the
gray scale to 0.1.
7. Enter the Output image filename in the text box provided:
HK_tcc.
8. Leave all other defaults and click OK to create and display
the true color composite (Figure 3.5.1.b ).
◆ ◆ ◆
Note that we changed the percent saturation from the default to avoid a
strong contrast stretch. By displaying the image with minimal contrast
modification, we see the color characteristics of the raw imagery
(Figure 3.5.1.b). Note that the colors in this true color composite are
washed out and have a blue tint. This is due to the effect of the
atmosphere, which we will remove in the following exercise.
Figure 3.5.1.b True color composite.
3.6 Radiometric correction: Atmospheric
correction
3.6.1 Background
In passive remote sensing, satellite sensors capture the energy from the sun
(irradiance, Li) after it interacts with both the atmosphere and the surface of
the earth. When the electromagnetic radiation from the sun reaches the
atmosphere, some energy is scattered, some is absorbed and some is
transmitted to the ground. The energy that reaches the ground interacts with
the ground and then the energy reflected from the ground reaches the sensor
after it passes once again through the atmosphere. The DN value recorded in
the pixel is a brightness value that is directly proportional to the spectral
radiance that reaches the sensor (Figure 3.6.1.a).
Figure 3.6.1.a Solar irradiance (Li), radiance coming from the ground (Lg),
path radiance (Lp) and its relationship with DN values.
Gases in the atmosphere (in particular carbon dioxide, water vapor, and
ozone) absorb parts of the electromagnetic radiation from the sun. The parts
of the spectrum that are absorbed by the atmosphere are called absorption
bands. However, the areas of the spectrum where absorption is low (or
transmission is high) are called atmospheric windows. Sensors designed to
monitor the surface of the earth detect parts of the electromagnetic spectrum
in these windows, while sensors designed to monitor the atmosphere (e.g.
meteorological satellites) are sensitive to the wavelengths in the absorption
bands.
Atmospheric gases (e.g. carbon dioxide, water vapor, nitrogen, oxygen) and
suspended particles in the atmosphere (such as smog or smoke) interact with
the solar energy, affecting the data recorded at the sensor. Atmospheric
scattering happens when the electromagnetic radiation is redirected from its
path in multiple directions after interacting with the gases and particles. The
most important scattering process is called Rayleigh scattering, and happens
when the particles in the atmosphere are smaller than the wavelength of the
incident energy. The wavelengths of visible energy range from 400 to 700
nanometers (nm), and are much larger than the size of the main gas
molecules of the atmosphere (argon, oxygen and nitrogen, with sizes of
roughly 0.3 nm, 0.29 nm and 0.31 nm respectively). Rayleigh scattering is
therefore always present, and affects shorter wavelengths more than longer
ones. Rayleigh scattering is responsible for the blue color of the sky, and
it appears as the blue tint in the true color composite in Figure 3.5.1.b.
DN values can be converted into radiances through a linear transformation
(Figure 3.6.1.a). The radiance is the DN value multiplied by a slope
coefficient (Gain), plus an intercept (Bias or Offset).
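Using the band 1 Gain and Offset values that we will extract later in this
chapter (Table 3.6.2.1.a), the conversion for a hypothetical DN of 58 looks
like this:

# Radiance = Gain * DN + Bias, in mW cm-2 sr-1 um-1.
gain, bias = 0.077874, -0.697874  # band 1 values from Table 3.6.2.1.a
dn = 58                           # a hypothetical pixel value
radiance = gain * dn + bias       # about 3.82 mW cm-2 sr-1 um-1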
When scattering happens, the sensor not only receives the energy reflected
from the ground (Lg in Figure 3.6.1.a), but also the scattered energy, called
path radiance (Lp in Figure 3.6.1.a). Because of the effect of the atmosphere,
the DN values are a combination of the radiance leaving the surface of the
earth and the path radiance.
Since we are interested in using satellite images to obtain information from
the ground, we need to remove the radiance coming from atmospheric
interactions. This process is called atmospheric correction.
There are many different methods for atmospheric correction; in this
exercise we will use Chavez's Cos(t) method (Chavez 1996). Cos(t) correction
is based on estimating the path radiance from an object of high absorption
(such as deep water or a dark shadow). The method assumes that these objects
should have a radiance close to zero, and that non-zero values come from the
effect of scattering. Correction is done by extracting, for each band, the
DN value of the dark object (the path radiance) and subtracting it from the
DN values of that band. The Cos(t) method estimates the transmittance of the
atmosphere based on the cosine of the sun's zenith angle. The method assumes
that the scene has an object of high absorption and that haze is uniform
across the image.
Atmospheric correction produces a new image where the DN value of a pixel
is transformed into reflectance. Reflectance is the radiance normalized by the
amount of incident energy. Reflectance values range from 0 to 1, where zero
means that the surface absorbs all energy, and perfect reflecting surfaces have
a reflectance of 1.
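The following Python sketch summarizes the Cos(t) sequence just described,
following Chavez (1996). The solar irradiance (ESUN) and Earth-Sun distance
values below are hypothetical placeholders; ATMOSC derives the actual values
from the acquisition date, and the sketch is only meant to make the steps
concrete.

import numpy as np

# Band 1 Gain and Offset from Table 3.6.2.1.a (mW cm-2 sr-1 um-1).
gain, bias = 0.077874, -0.697874
dn, dn_haze = 120, 58                   # hypothetical pixel DN and dark-object DN
esun = 197.0                            # hypothetical solar irradiance (mW cm-2 um-1)
d = 0.988                               # hypothetical Earth-Sun distance (AU)
theta_z = np.radians(90.0 - 42.948940)  # Sun's zenith angle from sun elevation

l_sensor = gain * dn + bias             # at-sensor radiance
l_haze = gain * dn_haze + bias          # path radiance from the dark object
tau = np.cos(theta_z)                   # Cos(t) estimate of transmittance

# Reflectance: haze-corrected radiance normalized by the incident energy.
reflectance = (np.pi * d ** 2 * (l_sensor - l_haze)) / (esun * np.cos(theta_z) * tau)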
3.6.2 Atmospheric correction of the Hong
Kong Data
In the previous section, we saw that Figure 3.5.1.b had a blue tint related to
Rayleigh scattering. In this exercise, we will use the module ATMOSC,
which provides several methods for the conversion of DN values into
reflectance. In this section, we will convert the subset Hong Kong images
created in Section 3.5 into reflectance by performing an atmospheric
correction using Chavez's Cos(t) method.
The process of atmospheric correction requires knowledge about the images
we are processing. This information is contained in the metadata that is
associated with the distributed imagery. For the Landsat ETM+ the
information needed is within a text file with extension MTL.
3.6.2.1 Extracting parameters from metadata and bands
The first step in the process of atmospheric correction is to extract the
parameters needed from the metadata and from the imagery. From the
metadata, we are interested in the date and time of image acquisition, which
allows the module to estimate the spectral solar irradiance. Also, the sun
elevation at the time of acquisition is needed, as this allows the module to
calculate the Sun’s zenith angle. The metadata also provides information on
Gain and Bias (or Offset) that allows us to transform DN values into
radiance. Path radiance from Rayleigh scattering is estimated by extracting
the pixel values across bands of a dark object. This path brightness is called
DN haze in TerrSet. It is a good practice, when doing atmospheric correction,
to keep track of parameters in a table.
If you have not done the exercises in Sections 3.4 and 3.5, you will need to
do them now to generate the files needed for this section.
◆ ◆ ◆
Reading the Metadata and Identifying DN haze values
1. Use the main menu or icon bar to open the TerrSet TEXT
EDITOR.
2. Open the file
LE07_L1TP_122045_20011120_20170202_01_T1_MTL
which is located in RSGuide\Chap1-4\Raw
images\HK122045.
3. Read the metadata and identify the following parameters
needed for the atmospheric correction:
DATE_ACQUIRED = 2001-11-20
SCENE_CENTER_TIME = "02:41:02"
SUN_ELEVATION = 42.94894063
RADIANCE_MULT_BAND (Gain in Table
3.6.2.1.a)
RADIANCE_ADD_BAND (Bias in Table 3.6.2.1.a)
4. Open TerrSet EXPLORER and select win_HK_b1,
win_HK_b2, win_HK_b3 and win_HK_b4. Then right click
and select Add layer(s). This will add all four layers to the
composition.
5. Change the colors of the layers to create a true color
composite and zoom in around the mountains in the center of
the Hong Kong Airport Island (Chek Lap Kok Island), which
has some dark shadowed slopes (Figure 3.6.2.1.a).
6. Select from the icon bar the IDENTIFY tool and click on
the dark shadow pixels. The values of pixels in all bands will
be shown in the identify window. These will be the DN haze
values. Make a note of the values as shown in Table 3.6.2.1.a.
◆ ◆ ◆
Note that the Gain and Offset in the metadata are given in W m-2 sr-1 µm-1.
These values should be divided by 10 in order to convert them to the units
used in TerrSet for the atmospheric correction analysis (mW cm-2 sr-1 µm-1).
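As a quick check of that conversion: 1 W m-2 is 1000 mW spread over
10,000 cm2, i.e. 0.1 mW cm-2, hence the division by 10. For example:

# RADIANCE_MULT_BAND_1 from the MTL file, in W m-2 sr-1 um-1.
gain_mtl = 0.77874
gain_terrset = gain_mtl / 10  # 0.077874 in mW cm-2 sr-1 um-1 (Table 3.6.2.1.a)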
When extracting the DN haze using IDENTIFY, you may need to try
different places in order to find the lowest value. Alternatively, you can
generate a histogram with the module HISTO to find the lowest non-
background value in the image. The edges of the scene, near background
areas, may have artifacts; avoid taking DN haze values from those regions.
Figure 3.6.2.1.a Composite with four bands and DN haze in dark shadow
area.
Table 3.6.2.1.a DN Haze extracted with the IDENTIFY tool. Offset (Adding
factor) and Gain (Multiplicative factor) extracted from the metadata (MTL
file) and converted to mW cm-2 sr-1 µm-1.
3.6.2.2 Atmospheric correction
After all parameters are extracted from both the metadata and the imagery,
we can now start the process of atmospheric correction. This process is run
for each band separately.
◆ ◆ ◆
Atmospheric correction
Menu Location: IDRISI Image Processing – Restoration –
ATMOSC
1. Use the menu as described in the title to this instruction box
to start the module ATMOSC. (Figure 3.6.2.2.a shows the
dialog box with the options described below selected.)
2. Click on the browse button next to Input image and select
win_HK_b1.
3. Specify the Output image name Ch_HK_b1.
4. Select the radio button for Cos(t) model located under
Atmospheric correction model.
5. Fill in the parameters for Year (2001), Day (20), Month
(11) and Time (2.7). Note that the hours and minutes in the
metadata must be converted to decimal hours (02:41 is
2 + 41/60, or approximately 2.7).
6. Specify the wavelength of band center (in microns) for
band 1 (0.485). This information is given in Table 3.6.2.1.a,
and can be found in the documentation of the sensor provided
by the distributor of the imagery.
7. Under Radiance calibration option, select the radio button
Offset/Gain, and input the corresponding values from Table
3.6.2.1.a Offset: -0.697874 and Gain: 0.077874.
8. The satellite viewing angle is 0, and the sun elevation is
42.948940.
9. Finally, enter the DN haze extracted for band 1 (in our case
this value is 58, but you should enter the value you extracted).
10. Click OK to run ATMOSC. Do not close the ATMOSC
module.
◆ ◆ ◆
Figure 3.6.2.2.a ATMOSC with example of parameters.
We will now need to repeat the process for all other bands: win_HK_b2,
win_HK_b3 and win_HK_b4. Note that the only parameters that need to be
changed for each band are: Input image, Output image, wavelength of band
center, Offset/Gain, and DN haze. All other parameters remain the same.
◆ ◆ ◆
Atmospheric correction (cont.)
Menu Location: IDRISI Image Processing – Restoration –
ATMOSC
11. Click on the browse button next to Input image and select
the next band to be processed (e.g. win_HK_b2).
12. Specify the corresponding Output image name (e.g.
Ch_HK_b2).
13. Specify the wavelength of band center (in microns) for the
band being processed from Table 3.6.2.1.a.
14. Input corresponding Offset and Gain values from Table
3.6.2.1.a.
15. Finally, enter the DN haze for the band being processed.
16. Click OK to run ATMOSC. Do not close the ATMOSC
module.
17. Repeat steps 11 through 16 for band 3 and 4, calling the
outputs Ch_HK_b3 and Ch_HK_b4.
◆ ◆ ◆
You should now have the 4 cropped bands corrected for atmospheric effects
of Rayleigh scattering and absorption. Let’s now display the atmospherically
corrected bands as a true color composite.
◆ ◆ ◆
Create a color composite image
Menu Location: Display – COMPOSITE
1. Start the COMPOSITE program using the main menu or
tool bar.
2. In the COMPOSITE dialog box, specify the file name for
the Blue image band: Ch_HK_b1. Click on OK to close the
Pick list.
3. Specify the Green image band as Ch_HK_b2.
4. Specify the Red image band as Ch_HK_b3.
5. Select the radio button for the option Linear with saturation
points under contrast stretch type.
6. Set the Percent to be saturated from each end of the
gray scale to 0.1.
7. Enter the Output image filename in the text box provided:
Ch_HK_tcc.
8. Leave all other defaults and click OK to create and display
the true color composite (Figure 3.6.2.2.b).
◆ ◆ ◆
Figure 3.6.2.2.b True color composite after atmospheric correction.
By comparing Figure 3.5.1.b and Figure 3.6.2.2.b, we can see the effect of
removing the scattering in the visible bands. Figure 3.5.1.b shows a blue
tint related to increased scattering in the blue band due to atmospheric
gases; that effect is gone in Figure 3.6.2.2.b.
3.7 Exporting Images
At the start of this chapter (Section 3.1) we saw that TerrSet could be used to
import images into the IDRISI raster file format. TerrSet also supports the
exporting of data and composite imagery into many formats. The menu
divides these formats into three groups, each with its own submenu.
• General Conversion Tools are utilities for converting image DN values
into text files of various formats.
• Desktop Publishing Formats are modules for exporting raster data,
and include GeoTIFF as well as JPEG, KML and BMP formats. The
Desktop Publishing Formats work well with the composite images
created in TerrSet.
• The Software-Specific Formats are modules that export the TerrSet
format to those of other remote sensing and GIS programs. This group of
programs generally works best for single band data, rather than the
composite images.
3.7.1 Exporting to GEOTIFF
We will now export the atmospherically corrected Hong Kong color
composite created in Section 3.6.2 to GeoTIFF format, using one of the
modules in the Desktop Publishing Formats category.
◆ ◆ ◆
EXPORT
Menu Location: File – Export – Desktop Publishing
Formats – GEOTIFF/TIFF
1. Use the main menu to start the GEOTIFF/TIFF module.
2. The GEOTIFF/TIFF dialog box will open.
3. Select the radio button for the Idrisi to GeoTIFF/Tiff
option.
4. Click the browse button (…) next to the Idrisi file name
text box.
5. Select Ch_HK_tcc and click OK.
6. Note that TerrSet automatically inserts the same name in
the output GeoTIFF file to create text box. Because the output
file will have a .tif extension, there will be no confusion
between the IDRISI format file and the TIFF format file.
However, if you wish to change the name of the output file,
you can do so now.
7. Click OK in the GeoTIFF/TIFF window to start the export
routine (Figure 3.7.1.a).
◆ ◆ ◆
Figure 3.7.1.a The GEOTIFF/TIFF dialog box.
The export is complete when the lower right pane in the main window is
clear. You can now open the exported file in most image display or word
processing software.
3.7.2 Exporting to KML
The module KMLIDRISI allows the conversion of IDRISI raster images into
KML (Keyhole Markup Language) files, which can be opened in Google
Earth and Google Maps. Exporting to this type of format is highly beneficial
for publishing the data online or when you want to compare it to high-
resolution imagery provided by Google Earth (e.g. for accuracy assessment).
3.7.2.1 Projecting to Plate Carrée
To export to KML, the data has to be in the Plate Carrée projection
(LatLong). We will now project the atmospherically corrected true color
composite created in exercise 3.6.2 from UTM-49n into LatLong. If you have
not done exercise 3.6.2 you will need to complete it before starting this
section. Since we have used the module PROJECT before, the instructions
will be brief. You can review section 3.2.2 if needed.
◆ ◆ ◆
Changing projection
Menu Location: File – Reformat – PROJECT
1. Start the PROJECT module.
2. In the PROJECT dialog box, specify that the Type of file to
be transformed is raster.
3. Select Ch_HK_tcc as the Input file name.
4. Enter Ch_HK_tcc_latlong as the Output file name.
5. Specify the Reference file for the output result to be
Latlong.
6. Click on Output reference information and leave the
default parameters.
7. Leave Resample type and Background value as the defaults.
8. Click OK to start the operation.
◆ ◆ ◆
The resulting image will be in the LatLong reference system. Note that
several modules run during this operation because we are projecting a color
composite rather than a single band: TerrSet first runs GENERICRASTER to
decompose the composite into its component bands, then projects each band
independently with PROJECT, and finally reassembles the projected bands
into a color composite using COMPOSITE.
3.7.2.2 Exporting to KML
Now that the color composite is in the appropriate reference system, we can
export it into KML so that it can be opened in Google Earth.
◆ ◆ ◆
EXPORT
Menu Location: File – Export – Desktop Publishing
Formats – KMLIDRISI
1. Use the main menu to start the KMLIDRISI module.
2. The KMLIDRISI dialog box will open (Figure 3.7.2.2.a).
3. Select the radio button for the Idrisi to KML option.
4. Select the radio button for the Raster to KML option.
5. Click the browse button (…) next to the Input image file
text box.
6. Select Ch_HK_tcc_latlong and click OK.
7. Under Output option, select the radio button for Simple and
uncheck the pixel smoothing option.
8. Leave the default under Altitude mode.
9. Specify Ch_HK_tcc_latlong as the output folder name.
TerrSet will create a folder within the working folder with
this name and the exported KML file will be saved to this
location.
10. No need to change the default Palette to export with
image. Since our image is a color composite, this will be
ignored.
11. Click OK to run the module.
◆ ◆ ◆
Figure 3.7.2.2.a KMLIDRISI interface with options selected.
If you have Google Earth installed, you can browse to RSGuide\Chap1-
4\Ch_HK_tcc_latlong from Windows Explorer and double click on the KML
file. Google Earth will open and the color composite will be displayed on top
of all other Google Earth layers.
This visualization is particularly useful when doing accuracy assessment, as
you can take advantage not only of the high-resolution imagery but also of
geocoded pictures.
To end this chapter, close all DISPLAY windows and dialog boxes.
CHAPTER 4
ENHANCING IMAGES
SPATIALLY
This chapter, as well as the subsequent two chapters, deals with the
enhancement of images. The term enhancement is used in remote sensing to
refer to image processing that makes patterns in the data stand out more
clearly.
Contrast enhancement, which we already covered in Section 2.2.3, and which
manipulates the look up table used for image display, is a good example of a
basic enhancement technique.
In this manual, we separate enhancement techniques into two categories:
spectral and spatial.
• Spectral enhancement is covered in Chapters 5 and 6, and involves
various mathematical combinations of image bands, usually with each
pixel treated independently of its neighbors.
• Spatial enhancement, the topic for this chapter, includes processing
that explicitly focuses on the spatial properties of an image, and typically
involves processing that draws on a local neighborhood of pixels.
Spatial enhancement is often performed to improve the visual quality of an
image. For example, spatial filters may be employed to either increase the
contrast between features, or to reduce the random noise inherent in the
image. Another common enhancement technique is to merge data of differing
spatial resolutions.
Combining data of differing spatial resolution is of interest because many
satellite-borne sensors record information in bands of different spatial
resolution. A common sensor design is to include a single high spatial
resolution band and a group of lower spatial resolution bands sensitive to a
variety of wavelengths. The high resolution band is termed the
panchromatic band, following the naming convention of black and white
aerial film. The panchromatic band provides detailed spatial information. The
lower resolution bands are termed the multispectral bands, and they provide
the spectral information, or, by analogy, the color information. The aim in a
multi-spatial resolution merge is to combine the spatial information from the
panchromatic band with the spectral information from the multispectral
bands, to obtain a multi-band, high resolution image.
In this chapter we will examine the ways we can spatially enhance the Hong
Kong Landsat image. We will initially develop a so-called high-pass filter
that enhances spatial features in a single band image. We will then merge the
high-resolution panchromatic band with a combination of multispectral bands
to form a multi-resolution product.
4.1 Convolution
A convolution filter is a local image processing operation which involves the
use of moving windows. A window is a local group of pixels, for example,
one pixel and its eight immediate neighbors (Figure 4.1.a), and is by
definition a subset of the image. Windows can be of any size; Figure 4.1.a
shows an example of a 3x3 window, with three rows and three columns of
pixels. An important part of the concept of the moving window is that the
template that defines the local neighborhood can be moved sequentially
across the image, so that each pixel is the center of a local group of pixels.
Pixels from the edge of the image require special handling, as they do not
have neighbors on all sides.
Figure 4.1.a A 3 x 3 window of pixels (a = center pixel, b = neighbors).
In convolution, some mathematical operation is used to combine the DN
values in the window. The result of this local analysis is written to a new
image, the convolution image. Specifically, the convolution value is usually
written out to the new image at the pixel location equivalent to the center of
the window as it was placed over the old image. The precise nature of the
mathematical combination performed in the convolution operation is
controlled by the relative values stored in the matrix kernel. Specifically, the
convolution output value is the sum of the products of each pixel value and
its corresponding kernel value. Then the matrix is moved by one pixel
location, and the operation is repeated. The filter operation can be
normalized by dividing the result by the sum of the kernel values.
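The following minimal Python sketch implements the operation just described
for a 3 x 3 kernel, skipping the edge pixels for simplicity. It is an
illustration of the concept, not TerrSet's actual code.

import numpy as np

def convolve3x3(image, kernel):
    """Convolve an image with a 3 x 3 kernel, leaving edge pixels as zero."""
    rows, cols = image.shape
    out = np.zeros((rows, cols), dtype=float)
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            # Sum of products of the kernel and the local 3 x 3 window.
            window = image[r - 1:r + 2, c - 1:c + 2]
            out[r, c] = np.sum(window * kernel)
    return out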
4.1.1 Smoothing Filters
Noise suppression and/or removal can be accomplished with convolution
filtering. The simplest type of smoothing filter is the low pass filter, also
known as a mean filter, which simply adds all the pixel values in the window
and divides the sum by the number of pixels in the window. In this particular
case, the matrix kernel has a value of 1/9 in each kernel location (Figure
4.1.1.a), since no pixel is given a greater weighting than another. Note that
there is no significant difference between a 3 x 3 kernel with values of 1/9
in each location and one with values of 1 in each location, except that the
latter would produce DNs that were on average nine times those of the
original image. It is for this reason that normalization of the filter is
often applied, to ensure the output image has a radiometric range similar to
that of the input image.
Figure 4.1.1.a. A kernel for a smoothing filter.
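Using the convolve3x3 sketch given earlier in Section 4.1, the mean filter of
Figure 4.1.1.a amounts to the following; the image values here are
hypothetical DNs.

import numpy as np

image = np.array([[50, 52, 51, 90],
                  [49, 53, 52, 88],
                  [51, 50, 54, 91],
                  [48, 52, 53, 89]], dtype=float)

# A 3 x 3 mean (low pass) kernel: every weight is 1/9, so the kernel sums to 1.
mean_kernel = np.full((3, 3), 1.0 / 9.0)
smoothed = convolve3x3(image, mean_kernel)  # convolve3x3 as sketched above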
TerrSet offers a wide variety of filters in the FILTER module. For example,
the Gaussian filter is a more sophisticated smoothing filter than the simple
uniform averaging of the type illustrated in Figure 4.1.1.a. The Gaussian
filter fits a Gaussian or bell curve to the kernel. The Gaussian filter thus gives
the greatest weight to the value at the center of the kernel, and the pixels
further out from the center are given a progressively lower weighting.
Another filter offered, the median filter, can be useful for noise suppression,
as is the adaptive box filter, which is good for correcting "salt-and-pepper"
random noise.
We are now ready to investigate our first filter operation. The data for this
chapter is the same as we used for Chapters 1-3. Therefore, if you have been
working through the manual sequentially, there is no preparatory work you
need to do, as the data should already be available, and the resource folder set
up. If not, you will need to follow the instructions in Chapter 1 to download
the data and set up a resource folder (see Section 1.3). You will also need to
set the Display Min and Display Max in the etm_pan.rst metadata, as was
described in Section 2.2.3.
Let’s begin by displaying the Hong Kong Landsat panchromatic data,
etm_pan. If you need to, you can refer back to Section 2.2.2 to refresh your
memory on how to display an image. However, be sure to display this image
with a GreyScale palette. If, after you have displayed the file, the image has a
color palette, then you forgot to set the palette file correctly, and you should
simply redisplay the image with the correct GreyScale palette.
We will compare the display of the etm_pan image with our results from the
filtered version of the image, generated through the FILTER module.
4.1.1.1 Mean filter
In this first exercise we will employ the Mean low pass filter to smooth out
noise from the Hong Kong panchromatic band etm_pan. We will start by
opening the FILTER module.
◆ ◆ ◆
Low pass filter
Menu Location: IDRISI Image Processing – Enhancement
– FILTER
1. Use the main menu bar to start the FILTER module.
2. The FILTER dialog box will open.
◆ ◆ ◆
The FILTER dialog box shows a list of pre-defined filters on the left; on
the upper right, a view of the values of the selected matrix kernel (filter
dimensions), though note that initially no values are shown; and in the
middle, radio buttons for selecting different kernel sizes. The default
filter is the Mean filter.
Figure 4.1.1.1.a The FILTER dialog box with the Mean filter selected.
As already discussed, mean filtering is commonly applied to smooth an
image and remove some of the random noise. We will now apply the Mean
filter to our Hong Kong panchromatic data.
◆ ◆ ◆
Low pass filter (cont.)
3. In the FILTER dialog box, specify the Input image by
clicking on the browse button (…) next to the appropriate text
box, and then double click on the etm_pan file in the Pick list
window.
4. In the Output image text box, type the name of the file we
will generate: etm_pan_mean3x3.
5. Leave the Filter type as Mean.
6. In the FILTER dialog box, click on OK to generate the
filtered image.
◆ ◆ ◆
When the program is done, the image will be displayed automatically with
the appropriate GreyScale palette. However, bear in mind that the program
does not automatically set a contrast enhancement similar to that of the
unfiltered data. You will therefore need to set the contrast enhancement for
the new image.
◆ ◆ ◆
Contrast enhancement of an image
1. Make sure the new etm_pan_mean3x3 image is the focus
of the TerrSet workspace by clicking in the image.
2. Find the Composer window, and select the button for Layer
Properties.
3. In the Layer Properties window, enter in the Display Min
and Display Max text boxes the values for the contrast stretch
we used for the etm_pan image, i.e., 18 and 73.
4. Still in the Layer Properties window, click on the buttons
for Apply, Save and OK.
◆ ◆ ◆
Once the contrast stretch has been applied, the original etm_pan and the
filtered etm_pan_mean3x3 should look fairly similar. This is in part because
of the scale that we are using to look at the data. In order to see the effects of
the filter operation, which is a local operation, we need to see the image in
greater detail.
◆ ◆ ◆
Zooming into an already displayed image
1. Click on the Zoom Window icon from the main icon bar.
2. Draw a box in the etm_pan Display window around the
Hong Kong airport by using the left mouse button to click on
the top left corner of the desired box and, keeping the mouse
button depressed, drag the cursor to the lower right corner
(Figure 4.1.1.1.b).
3. When you release the mouse button, the
display should automatically zoom in to the desired area. If
the area is not quite what you wanted, you can either zoom in
further, by selecting the zoom button again, or return the
zoom to its default extent with the Full extent normal icon,
and start again. You can also use the Zoom in / Center and
Zoom out / Center icons.
◆ ◆ ◆
Figure 4.1.1.1.b Drawing the zoom box around the Hong Kong airport.
When you are satisfied with the zoomed window for the etm_pan image
(Figure 4.1.1.1.c), repeat the zoom operation for the filtered
etm_pan_mean3x3 image (Figure 4.1.1.1.d).
Compare the two zoomed-in images (Figures 4.1.1.1.c and 4.1.1.1.d). Note
the Mean filter reduces the grainy texture of the original image, thus reducing
the internal variability in each cover class. This would be an improvement for
image classification (Chapter 7). However, the reduction in variability is not
without a cost: the detail of the features found within the airport, particularly
associated with the terminal building, is clearly reduced.
Figure 4.1.1.1.c Original Panchromatic image of the Hong Kong airport.
Figure 4.1.1.1.d Mean filtered Panchromatic image of the Hong Kong
airport.
4.1.1.2 Adaptive box filter
An adaptive box filter is useful for removing noise while preserving features.
The adaptive box filter compares the center pixel to its neighbors, and
replaces it with the local average only if the center pixel is markedly
different from the surrounding pixels (that is, likely noise).
◆ ◆ ◆
Adaptive Box filter
Menu Location: IDRISI Image Processing – Enhancement
– FILTER
1. If you closed the FILTER module from the previous
exercise, open it again by using the main menu bar (Figure
4.1.1.2.a).
2. In the FILTER dialog box, specify the Input image by
clicking on the browse button (…) next to the appropriate text
box, and then double clicking on the etm_pan file in the Pick
list window.
3. Change the Filter type to Adaptive Box Filter.
◆ ◆ ◆
TerrSet provides two options for the Adaptive Box filter: 1) to replace invalid
pixels with zeroes, and 2) to replace invalid pixels with local averages. The
first option allows you to identify which pixels are noise, and therefore are
candidates for the correction. The second option produces an output image
where pixels considered noise are replaced by the average of their neighbors.
A pixel is identified as noise if two conditions are satisfied: 1) the pixel,
when compared to its neighbors, is beyond a standard deviation threshold
specified by the user, and 2) the difference between the value of the center
pixel and the mean of all neighbors is larger than the user-specified
difference. If set, only pixels within user-specified minimum and maximum
values are considered.
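A minimal sketch of that two-part noise test for a single 3 x 3 window might
look like the following; the thresholds mirror the dialog box parameters, but
the function itself is only illustrative.

import numpy as np

def is_noise(window, n_std=1.5, min_diff=0.0):
    """Flag the center pixel of a 3 x 3 window as noise if it is beyond
    n_std standard deviations of its neighbors AND differs from their
    mean by more than min_diff."""
    center = window[1, 1]
    neighbors = np.delete(window.ravel(), 4)  # the 8 surrounding pixels
    mean, std = neighbors.mean(), neighbors.std()
    return abs(center - mean) > n_std * std and abs(center - mean) > min_diff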
◆ ◆ ◆
Adaptive Box filter (cont.)
4. Under Adaptive box filter options, check the radio button to
Replace invalid pixels with zeros.
5. Specify the Threshold standard deviation as 1.5 and the
Threshold difference as 0.
6. Leave unchecked the Change minimum/maximum values
option, and filter size as 3x3.
7. Type etm_pan_Abox_zero as the Output image name.
8. Click OK to run FILTER.
◆ ◆ ◆
Figure 4.1.1.2.a The FILTER dialog box with the Adaptive Box filter
selected and exercise parameters set.
When the program is done, the image will be displayed automatically with
the appropriate GreyScale palette. As in the previous exercise, we will need
to set the contrast enhancement similar to the unfiltered data.
◆ ◆ ◆
Contrast enhancement of an image
9. Make sure the new etm_pan_Abox_zero image is the focus
of the TerrSet workspace by clicking in the image.
10. Find the Composer window, and select the button for
Layer Properties.
11. In the Layer Properties window, enter in the Display Min
and Display Max text boxes the values for the contrast stretch
we used for the etm_pan image, namely 18 and 73.
12. Still in the Layer Properties window, click on the buttons
for Apply, Save and OK. The resulting image will be similar
to Figure 4.1.1.2.b.
13. Do not close the FILTER form.
◆ ◆ ◆
Figure 4.1.1.2.b Adaptive box filtered Panchromatic image of the Hong
Kong airport using the Replace invalid pixels with zeros option.
The resulting image now shows very marked speckle. The zero-valued pixels
are those whose values differ from the mean of their neighbors by more than
±1.5 standard deviations. These noise pixels can be replaced with the
average value of their neighbors.
◆ ◆ ◆
Adaptive Box filter (cont.)
14. Under Adaptive box filter options, check the radio button
to Replace invalid pixels with local averages.
15. Leave unchanged all other parameters set in the previous
exercise.
16. Type etm_pan_Abox_avg in the Output image name text
box.
17. Click OK to run FILTER. The result will look similar to
Figure 4.1.1.2.c.
◆ ◆ ◆
Figure 4.1.1.2.c Adaptive box filtered Panchromatic image of the Hong
Kong airport using the Replace invalid pixels with local averages option.
Compare the etm_pan_mean3x3 and the etm_pan_Abox_avg results. Note
that both remove speckle, but that the mean filter smooths the entire image,
while the adaptive box filter maintains feature details. This characteristic
makes the adaptive box filter particularly useful for removing salt and pepper
noise.
4.1.2 High-pass and Edge Filters
Fundamentally, a high pass filter is a type of edge detector that emphasizes
abrupt changes relative to regions of gradual change within the image.
Specialized filters available in TerrSet that focus on edge effects include
Laplacian Edge Enhancement and the Sobel Edge Detector.
4.1.2.1 High pass filter
The high pass filter generates an image that indicates where abrupt
boundaries are found in an image. Other, non-spatial information, such as the
average brightness information, is lost in generating a high pass image. An
image with only the boundaries shown can be useful, however, and we will
see such an example in Section 4.2 when dealing with a multi-resolution
merge. We will now try this filter, and see what it does to the Landsat
Panchromatic image of Hong Kong.
◆ ◆ ◆
High pass filter
Menu Location: IDRISI Image Processing – Enhancement
– FILTER
1. The FILTER dialog box should still be open from the Low
Pass Filter operation. If not, use the main menu to open the
FILTER dialog box, and specify the input file as etm_pan.
2. In the FILTER dialog box, select the radio button for the
High Pass filter.
◆ ◆ ◆
After you select the High Pass filter, note the values in the filter dimensions
(Figure 4.1.2.1.a). The high pass kernel calculates the difference between the
center pixel and the average of its surrounding neighbors.
Figure 4.1.2.1.a The FILTER dialog box, with the High Pass filter selected.
◆ ◆ ◆
High pass filter (cont.)
3. Type etm_pan_highpass3x3 in the Output image text box.
4. Click on OK.
◆ ◆ ◆
After the program has completed processing, it will automatically display the
resultant image. The image should appear almost featureless, with a dominant
middle gray image tone. If you observe the legend, you will notice the values
are centered on a DN of zero. This is a very different result from that of the
Mean filtered image.
Let’s display the image histogram to understand the filtered results better.
◆ ◆ ◆
Analyzing the image data distribution with HISTO
Menu Location: Display – HISTO
1. Start the HISTO program from the main menu or the
toolbar.
2. In the HISTO dialog box, click on the browse button (…)
next to the text box for the Input file name.
3. The Pick list window will open, select
etm_pan_highpass3x3.
4. Set the class width to 0.5.
5. Click OK.
◆ ◆ ◆
The result of the HISTO program shows that the mean of the filtered image is
centered on zero (Figure 4.1.2.1.b). The reason the result of the High Pass
filter is so different from that of the Mean filter can be understood by looking
in detail at the values of the filter kernels. Add up the kernel weights in the
Mean filter and compare the result to the sum of the kernel weights in the
High Pass filter. The Mean filter kernel sum is equal to 1 (9 x 1/9 = 1),
whereas the sum of the values in the High Pass kernel is equal to 0 (8 x [-1/9]
+ 8/9 = 0).
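A quick numpy check of this arithmetic (an illustration only; the kernel values are those shown in the FILTER dialog):

import numpy as np

mean_kernel = np.full((3, 3), 1 / 9)   # all nine weights are 1/9
high_pass = np.full((3, 3), -1 / 9)
high_pass[1, 1] = 8 / 9                # center pixel minus the average of its neighbors

print(mean_kernel.sum())   # ~1.0: overall brightness is retained
print(high_pass.sum())     # ~0.0: mean brightness is removed, output centers on zero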
Figure 4.1.2.1.b Histogram of 3x3 High Pass filtered Panchromatic image.
The sum of the values in the kernel is essentially a measure of how much
information of the original image is retained in the filtered result. The Mean
filter retains most of the image statistical information – the mean and
standard deviation are similar to the original image – but the image is
smoothed and boundaries blurred. On the other hand, the High Pass filter,
with its zero-sum kernel, removes most of the original image brightness
information. The High Pass image has a mean centered on zero and a small
standard deviation. Figure 4.1.2.1.b shows that the High Pass image has
some large outliers (anomalous DN values), and this may partly explain the
lack of contrast we observed in the displayed image. Therefore, let’s adjust
the contrast of the high-pass filtered image so that we can better see what the
filter did to the image.
◆ ◆ ◆
Apply a contrast enhancement to an image
1. Make sure the new etm_pan_highpass3x3 image is the
focus of the TerrSet workspace by clicking in the image.
2. Find the Composer window, and select the button for Layer
Properties.
3. In the Layer Properties window, enter in the Display
Min and Display Max text boxes the values of -7 and 7,
respectively.
4. Click on the buttons for Apply, Save and OK.
◆ ◆ ◆
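The Display Min and Display Max settings affect only how the data are displayed, not the data themselves. Conceptually, the stretch amounts to the following sketch (our illustration, not TerrSet code):

import numpy as np

def display_stretch(img, dmin, dmax):
    """Clip the data to [dmin, dmax] and map that range linearly to 0-255."""
    scaled = (np.clip(img, dmin, dmax) - dmin) / (dmax - dmin)
    return (scaled * 255).astype(np.uint8)

# e.g. display_stretch(highpass, -7, 7) mirrors the Display Min/Max of -7 and 7 set above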
The result should show more contrast, but still appear to be mainly noise. To
see the details, we need to zoom to a scale where the edges become more
apparent. However, we are faced with a challenge – how do we find the
airport in this rather featureless image? One way to do this is to carefully note
the location of the airport in the etm_pan image, and then zoom in on the
same general region in the etm_pan_highpass3x3 image. Another more
precise method, which we will try in the next section, zooms in on a file
using the zoom of a companion image, based on the TerrSet Group Link
concept.
4.1.2.2 TerrSet Group Link Display
In order to take advantage of the Group Link we need to create a Raster
Group File that links the files. The images are displayed using their Raster
Group identities. If the Group Link icon on the main tool bar is then
depressed, zooming in on one file will result in a similar zoom for the other
linked files. This procedure will be described in more detail below. If you
want additional information on creating Raster Group Files, you may wish to
review the material in Section 1.3.7.
Before you start this section, close any currently displayed images.
◆ ◆ ◆
Creating a file collection with the TerrSet EXPLORER
1. If the files are not listed in the Files pane, double click on
the directory name to display the files.
2. Click on etm_pan.rst. The file name will then be
highlighted.
3. Scroll down to the etm_pan_highpass3x3 file. Now be
very careful. Hold the CTRL key down, and click on the
etm_pan_highpass3x3 file, thus highlighting it
simultaneously with etm_pan.rst.
4. Release the CTRL key, then press the right mouse button.
5. A pop-up menu will appear. Within this menu, scroll down
to Create, and then select Raster Group.
6. This will create a new file called Raster Group.rgf. Right
click on this file name in the Files pane.
7. Select Rename from the list of options and enter the new
name filter by typing over the default name of Raster
Group.
8. Press Enter on your computer keyboard.
9. Refresh the TerrSet EXPLORER view by pressing the F5
key on the keyboard or by right clicking on the working folder
and selecting the Refresh option.
◆ ◆ ◆
Now that we have created the Raster Group File, we can display the two
images within the context of the Raster Group File. TerrSet calls this
displaying the images “with their full ‘dot-logic’ filenames.” The logic of
these terms will become clearer in a moment.
◆ ◆ ◆
Displaying images that form part of a Raster Group File
Menu Location: File – Display – DISPLAY LAUNCHER
1. Open the DISPLAY LAUNCHER using the main toolbar
or main menu.
2. Within the DISPLAY LAUNCHER window, click on the
browse button (…).
3. The Pick List will open. Scroll down to filter. Click on the
plus sign (+).
4. The plus sign will change to a minus (-), and the two files,
etm_pan and etm_pan_highpass3x3 should both be listed
below. Note that both image files are actually listed twice in
the Pick List window, once independently, and once as part of
a Raster Group File (Figure 4.1.2.2.a).
◆ ◆ ◆
Figure 4.1.2.2.a The Pick List showing the image etm_pan listed as part of
the filter Raster Group File.
◆ ◆ ◆
Displaying images that form part of a Raster Group File
(cont.)
5. Select the etm_pan image associated with the filter Raster
Group File from the Pick list window, and click on OK.
◆ ◆ ◆
Observe the name of the file we have just selected, as listed in the DISPLAY
LAUNCHER (Figure 4.1.2.2.b). The file is listed as filter.etm_pan. Thus the
format of the image is Raster Group File name “dot” image file name. The
dot in this context specifically implies that the etm_pan image is a part of the
raster group. When the image is identified with its full name, indicating the
image and the raster group it is a part of, it is termed the image file name
“with full dot logic.”
Figure 4.1.2.2.b The DISPLAY LAUNCHER showing the image file name
associated with a Raster Group File.
◆ ◆ ◆
Displaying images that form part of a Raster Group File
(cont.)
6. In the DISPLAY LAUNCHER window, select a GreyScale
palette.
7. Click on OK.
8. Once again open the DISPLAY LAUNCHER using the main
toolbar.
9. Within the DISPLAY LAUNCHER window, click on the
browse button (…).
10. The Pick List will open. Scroll down to filter. Click on the
plus sign (+).
11. The plus sign will change to a minus (-). Select the
etm_pan_highpass3x3 image associated with the filter Raster
Group File, and click on OK.
12. In the DISPLAY LAUNCHER window, select a GreyScale
palette.
13. Click on OK.
◆ ◆ ◆
We are now finally ready to apply our linked zoom.
◆ ◆ ◆
Applying a linked zoom with Raster Group File images
1. Click on the Group Link icon from the main menu bar.
2. Click in the etm_pan image, to bring this window to the
front of the TerrSet workspace.
3. Click on the Zoom Window icon from the main menu bar.
4. Draw a zoom box around the airport by clicking at the
upper left corner of the airport, and dragging the mouse to the
bottom right corner of the airport.
5. Both images should zoom in to the same place and extent.
You may need to move the etm_pan image display, to see the
etm_pan_highpass3x3 image.
◆ ◆ ◆
The zoomed image shows that the High Pass filter highlights the boundaries
between features by creating contrasting negative and positive pixels (Figure
4.1.2.2.c). Look closely at the boundary between the airport and the sea. Note
that the boundary is marked by a dark line, with an adjacent parallel white
line. This distinctive result emphasizes the edges of the features in the image.
Also, observe how noise within the image is heightened. High-pass
filters tend to enhance any noise present, since any variability,
especially variation that is not spatially correlated, is amplified.
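The dark/bright pairing at boundaries can be reproduced with a one-dimensional analogue of the high pass kernel applied to a step edge (an illustration only):

import numpy as np
from scipy.ndimage import convolve1d

signal = np.array([10., 10., 10., 50., 50., 50.])   # a step edge, dark to bright
kernel = np.array([-0.5, 1.0, -0.5])                # 1-D zero-sum high-pass kernel
print(convolve1d(signal, kernel, mode='nearest'))
# -> [0, 0, -20, 20, 0, 0]: a negative (dark) value on the dark side of the
#    edge and a positive (bright) value on the bright side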
Figure 4.1.2.2.c Zoomed High Pass filtered image.
4.1.2.3 Sobel Edge Detector
Another edge filter, called the Sobel Edge Detector filter, is very useful in
highlighting long continuous linear features in your imagery. Linear features
in imagery, commonly called lineaments, are sometimes used in geological
studies. Natural geomorphic features that exhibit relatively linear patterns
may indicate structural features, such as faults or fracture systems controlled
by joints. We will use the Sobel Edge Detector filter to highlight linear
features found in the etm_pan image.
◆ ◆ ◆
Sobel edge detector FILTER
Menu Location: IDRISI Image Processing – Enhancement
– FILTER
1. The FILTER dialog box should still be open from the Low
Pass Filter operation. If not, use the main menu to open the
FILTER dialog box, and specify the Input image as etm_pan.
(If necessary, review Section 4.1.1.)
2. Select the Sobel Edge Detector radio button.
3. In the Output image text box, type etm_pan_Sobel.
4. Click on OK to generate the filtered image.
◆ ◆ ◆
After processing, the image is automatically displayed with an appropriate
GreyScale palette. However, we do need to set the appropriate contrast.
◆ ◆ ◆
Apply a contrast enhancement to an image
1. Click in the etm_pan_Sobel window.
2. Find the Composer window, and click on the Layer
Properties button.
3. In the Layer Properties window, type 20 in the Display
Min text box, and 45 in the Display Max text box.
4. In the Layer Properties window, click on the buttons for
Apply, Save and OK.
◆ ◆ ◆
Figure 4.1.2.3.a Sobel filter of the area south of the airport.
Now zoom in on the Sobel Edge Detection image around the airport, and
especially the adjacent island, Lantau (Figure 4.1.2.3.a). Note the strong NE
trending linear features. These features suggest a distinctive NE structural
geology trend.
Note: In working through this exercise you may have decided to redisplay
one or more of the images. For example, you may have closed the
etm_pan_mean3x3 image, and may wish to view it again to compare it with the
subsequent images. If so, be sure to use the GreyScale palette each time you
display the image. However, if you have saved the Display Min and Display
Max settings you have entered, it should not be necessary to reapply the
contrast stretch.
4.1.3 Sharpening Filters
For satellite optical data such as Landsat data, convolution filtering is
typically employed to sharpen the image. Often referred to as high-pass
filtering, this operation is somewhat analogous to focusing a lens – it
enhances the boundaries between features of distinctly different digital
values. The difference between sharpened images and typical high-pass and
edge filtered images is that most of the image information is retained in the
final sharpened image.
In addition to the preset filters, TerrSet allows the user to define custom
values for the kernel. We will now use this capability to define a filter that
will sharpen our image without losing most of the image content. To retain
the image content, we will design a matrix kernel whose sum is equal to 1 to
keep the results comparable with the mean filter kernel.
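The steps below build this kernel in the FILTER dialog box. For reference, here is a short numpy sketch of the same kernel and its arithmetic (the array names are ours, and the random placeholder data merely stands in for etm_pan):

import numpy as np
from scipy.ndimage import convolve

sharpen17 = np.full((3, 3), -1 / 9)
sharpen17[1, 1] = 17 / 9
print(sharpen17.sum())     # (17 - 8) / 9 = 1, so the image statistics are retained

pan = np.random.randint(0, 256, (100, 100)).astype(float)  # placeholder data
sharpened = convolve(pan, sharpen17, mode='nearest')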
Figure 4.1.3.a FILTER dialog box for custom filter creation.
◆ ◆ ◆
Custom filter
Menu Location: IDRISI Image Processing – Enhancement
– FILTER
1. The FILTER dialog box should still be open from the Sobel
Edge Detector Filter operation. If not, use the main menu to
open the FILTER dialog box, and specify the Input image as
etm_pan.
2. Select the radio button for User-defined filter (variable size
kernel).
3. In the User-defined filter kernel area, click on the center
kernel position.
4. Type 17/9.
5. In the surrounding 8 kernel positions, click and type -1/9 in
each cell. You may find it easier to type the value in one cell,
copy it (press the keys Cntrl and c simultaneously), and paste
the value (Cntrl and v) in each subsequent cell.
6. Make sure that the normalize option is not selected (we
have already normalized the kernel by dividing it by the
number of kernel positions, i.e. 9).
7. In the Filter file name text box, type sharpen17.
8. Click on the Save filter button to the right of the Filter file
name text box.
9. In the Output image text box, type etm_pan_sharpen17.
10. Figure 4.1.3.a shows the FILTER dialog box with the
parameters specified.
11. Click on OK.
◆ ◆ ◆
The filtered image retains most of the image information of the original
image, and shows a similar range of values. Therefore, we now need to set
the contrast once again.
◆ ◆ ◆
Change contrast enhancement of an image
1. Make sure the new etm_pan_sharpen17 image is the focus
of the TerrSet workspace by clicking in the image.
2. Find the Composer window, and select the button for Layer
Properties.
3. In the Layer Properties window, enter in the Display Min
and Display Max text boxes the values for the contrast stretch
we used for the etm_pan image, namely 18 and 73.
4. Still in the Layer Properties window, click on the buttons
for Apply, Save and OK.
◆ ◆ ◆
The image should now have much better contrast.
Zoom in on the airport and compare the filtered results with the original
panchromatic image (Figures 4.1.3.b and 4.1.3.c). (If you need to redisplay
the panchromatic image, remember to use a GreyScale palette.) The sharpen-
filtered image shows better definition to the features in the airport such as the
fuel tank field, the variations of the internal field, and the jets at the terminal.
However, the filtered image also appears noisier.
Figure 4.1.3.b Sharpened image of etm_pan.
Figure 4.1.3.c Hong Kong Landsat Panchromatic data.
4.2 Multiresolution Merge
Another approach to enhancing images spatially is to merge high-resolution
panchromatic data with multispectral imagery of lower spatial resolution. As
noted previously in Section 2.1.1, multispectral imagery, which because of its
multi-band nature can be displayed in a false color composite format, tends to
have a lower spatial resolution than single band panchromatic data. An ideal
merge of the two data sets would result in retaining the spectral integrity of
the multispectral bands, while incorporating the spatial resolution of the
panchromatic data. In this section, we will learn how to use different methods
to merge data of differing resolution. The first two methods will draw on
color theory, and the third will take advantage of spatial filters and
mathematical operators to merge the data sets.
4.2.1 Creating user defined multiresolution
merge procedures
Many methods exist for merging multiresolution imagery.
Some of these approaches are implemented in TerrSet (as we will see in
exercise 4.2.2). Sometimes the analyst may need to develop an approach that
is not supported within TerrSet. The process of resolution enhancement
requires many steps to obtain the final product. When using methods for
multiresolution merge not provided by TerrSet, the step-by-step processing
can be very time consuming and laborious, especially if we need to repeat the
process due to some adjustment required somewhere along the process
stream. Thus, we will learn how to create a custom defined multiresolution
merge employing a very useful feature of TerrSet, the MACRO MODELER.
MACRO MODELER has a number of significant benefits:
• It provides an excellent way to develop a sequence of program steps.
• It tends to be a more efficient way of running programs, especially those
that are sequential.
• It provides a record of a processing sequence.
• The sequence of processing steps can easily be adapted to apply to a
new data set, thus making it possible to develop “canned” procedures.
• It is easy to repeatedly run through a processing sequence, thus
facilitating scenario modeling, where for example, multiple alternative
values in some key processing parameters are compared.
4.2.1.1 Getting started with MACRO MODELER: Building a simple model
Macro Modeler is a graphical user interface in which the user can easily
develop and execute multiple operations in a sequential fashion. In essence,
the interface allows the user to plan complex operations by creating a process
stream that includes data inputs, operations, and temporary file outputs as
well as final outputs. Let’s work on becoming acquainted with this interface.
◆ ◆ ◆
MACRO MODELER
Menu Location: IDRISI GIS Analysis – Model
Deployment Tools – MACRO MODELER
1. Start the MACRO MODELER either from the main menu
or the icon bar.
2. The MACRO MODELER graphical interface will open
(Figure 4.2.1.1.a).
◆ ◆ ◆
The Macro Modeler window has a standard pull-down menu and an icon
toolbar. We will be using the icon toolbar exclusively, so let’s see what is
available. Run your cursor over each icon, and note the brief name that pops
up as you read the summary information below on the groups of icons.
Figure 4.2.1.1.a MACRO MODELER graphical Interface.
• Model Operations. The blue model operations icons on the left include
commands to create a new model, open a model file, save the model to a
file, copy the graphical representation of the model to the Windows
clipboard, and print the graphic representation of the model.
• Delete. The red x icon allows the user to delete elements of the model.
• Input. The green icons are the data elements that are input into
processing modules, including raster files, raster group files, vector files,
and attribute data (the latter two are for GIS operations).
• Command Elements. The next two icons in red are the command
elements of a module and a sub-model. Sub-models are previously
constructed modules that can be used within a larger model. When a
command element is placed into the model, an output file of the appropriate
data type, with a default temporary file name, is automatically connected to
the module.
• Connector. The next icon, the blue arrow, creates connections between
model elements. Connectors control the flow of input data to the
command modules.
• Start/Stop. The red triangle and square icons are for running and
stopping the model. (The other icons will not be used here.)
In order to demonstrate the Macro Modeler’s capability, let’s first build a
simple one-step model before we attempt building more complicated models.
Let’s build a model that creates a color composite image from three separate
data layers.
◆ ◆ ◆
MACRO MODELER (cont.)
3. Click on the Raster Layer icon to insert a data input into
the model. (Figure 4.2.1.1.b shows the icon and the result of
the operation described here.)
4. In the Pick window, select the data layer etm2.
5. Click on OK.
◆ ◆ ◆
Figure 4.2.1.1.b A raster layer in a Macro Model, with the arrow indicating
the icon to insert the layer.
The Macro Modeler creates a purple rectangle as the graphical representation
of a data input layer and places the name of the input file inside the rectangle
(Figure 4.2.1.1.b). Click on the rectangle and the rectangle’s border becomes
a bold line indicating that the data input is selected. You can also move the
element by selecting the element, holding the mouse click down, and
dragging it to your desired location. Continue now to add the other two data
inputs.
◆ ◆ ◆
MACRO MODELER (cont.)
6. Click on the Raster Layer icon to insert a data input into
the model.
7. In the Pick window, select the data layer etm3.
8. Click on OK.
9. Click on the Raster Layer icon to insert a data input into
the model.
10. In the Pick window, select the data layer etm4.
11. Click on OK.
◆ ◆ ◆
Now we will add the command module.
◆ ◆ ◆
MACRO MODELER (cont.)
12. Click on the Module icon to insert a command module
into the model.
13. In the Pick window, select the module composite.
14. Click on OK.
◆ ◆ ◆
The MACRO MODELER adds the command module, which is represented
in the model by a pink parallelogram with the label composite (Figure
4.2.1.1.c). Notice that the modeler automatically creates an output file with
the name tmp000 (tmp followed by a series of numbers), and shows the
connection from the composite module.
Figure 4.2.1.1.c The model with three input data sets and the composite
module specified.
We will now need to establish the connection and sequence of the input files
to the composite module. First, we must learn the order that the composite
module expects the input files, and then we can establish the connectors. The
MACRO MODELER establishes the order of the data input files into a
command module by the sequence in which you connect the data to the
module. So, it is very important to know beforehand what order to physically
connect the inputs to the module. Let’s try with the model we are building.
◆ ◆ ◆
MACRO MODELER (cont.)
15. Place your cursor over the pink composite module
parallelogram, and right click.
16. A Parameters window appears for the composite module
(Figure 4.2.1.1.d). This window shows the input files, output
file name, and the parameters used in creating the output.
17. Notice the inputs are arranged from top to bottom as Blue
band, Green band, and Red band. This is the sequence that
the module expects the input files to be connected.
18. Try clicking in the fields for the inputs and output. Notice
that you cannot change these attributes.
19. You can, however, change the values in the section
labeled Additional parameters. Click on each parameter
attribute to view the possible options. For this exercise, leave
the options unchanged (default options).
20. Click on the OK button to close the Parameters window.
◆ ◆ ◆
Figure 4.2.1.1.d Parameters window for the composite module.
◆ ◆ ◆
MACRO MODELER (cont.)
21. Click on the Connect icon, and move your cursor into the
model. Notice that the cursor shape has changed to a hand
with a pointing finger.
22. Place the cursor over the purple rectangle representing the
input raster layer that we want to be the blue band for the
composite module, namely the Landsat ETM+ band 2, etm2.
23. Click, and holding the mouse button down, drag the
cursor over to the composite module and release.
24. The model now shows the etm2 data layer connecting to
the composite module, with the arrow in the flow chart
pointing towards composite, indicating that etm2 is an input
data file for the composite module.
◆ ◆ ◆
Figure 4.2.1.1.e The model with the first data layer connected.
◆ ◆ ◆
MACRO MODELER (cont.)
25. Right click with the cursor over the composite module to
bring up the Parameters window once again. Note that the
Blue band field now indicates etm2, indicating that you have
successfully connected the data layer to the module. Let’s
continue and add the other data inputs.
26. Click on the Connect icon and move your cursor into the
model.
27. Place the hand cursor over the rectangle representing the
etm3 raster layer input.
28. Click, and keeping the mouse button depressed, drag the
cursor over to the composite module, and release.
29. Click on the Connect icon and move your cursor into the
model.
30. Place the hand cursor over the rectangle representing the
etm4 raster layer input.
31. Click, and keeping the mouse button depressed, drag the
cursor over to the composite module, and release.
◆ ◆ ◆
One final step before we run our model is to create an output data layer
with an appropriate name. The tmp (temporary) filename prefix is typically
used for intermediate data layers rather than final products. The model creates
data layers for all the outputs and doesn’t automatically clean up the
temporary files. We should clean up these temporary files after we run the
model by either manually deleting the files or by using the Delete all
temporary files command found in the File pull-down menu. Let’s continue
and name our output.
◆ ◆ ◆
MACRO MODELER (cont.)
32. Place your cursor over the tmp001 data output and right
click.
33. A Change layer name window appears, with the original
name in the field.
34. Type in the new file name as etm234.
35. Click OK.
◆ ◆ ◆
Figure 4.2.1.1.f The model completely specified.
One aspect of the MACRO MODELER that we need to consider is the
standard output directory for all the data layers created when you run the
model. All data are automatically written to the standard output directory in
the Working Folder. One can always use the TerrSet EXPLORER to move
the final data layers to another folder location if necessary.
Finally, let’s save and run our new model.
◆ ◆ ◆
MACRO MODELER (cont.)
36. Click on the Save icon.
37. Specify that the Model file name is composite234.
38. Click OK.
39. Click on the run icon.
40. A window will open in which you are warned that the
files created by the model will overwrite any existing files
with those names. Click on Yes to all.
41. Your new raster file is created and displayed (Figure
4.2.1.1.g).
◆ ◆ ◆
Figure 4.2.1.1.g Landsat 234 (RGB) composite created using the MACRO
MODELER.
Notice how the pink parallelogram representing the composite module turns
green as the module runs. This is a useful feature of the MACRO
MODELER, in that it indicates the precise stage of the processing. When you
combine multiple modules in a single model, you can track progress through
the model in this way.
4.2.1.2 Color Transformation Merge
A common method to merge data uses the concept of color spaces. Color
space is a mathematical model describing the way colors can be represented
as a combination of three values. For example, the RGB (Red, Green, Blue)
color space is one we are already familiar with. The three axes of the color
space are represented by the Red, Green, and Blue components of the color.
Because almost all the colors we perceive can be evoked by mixing these
three components, a point in the RGB color space represents a distinctive
color.
RGB is not the only possible representation of color. An alternative color
space is the HLS (Hue, Lightness, Saturation) model. In this case, Hue,
Lightness, and Saturation are the axes of the color space, just as Red, Green
and Blue are the axes in the RGB model.
• Hue refers to the characteristic tint of a color. For computer-based
color, the color wheel is based on the RGB model and includes secondary
colors of cyan, magenta, and yellow. Since the color wheel is circular, the
number assigned to the Hue value is typically an angle, from 0° to 360°
(Figure 4.2.1.2.a). TerrSet normalizes this 360-degree value to an 8-bit
range (0-255).
• Lightness (or intensity) refers to the brightness, which extends from
black to white.
• Saturation refers to the purity of the color, so that low saturation colors
tend to gray, and have a typically pastel quality, whereas high saturation
colors are the purest colors. With no saturation at all, the hue becomes a
shade of gray equivalent to lightness (Figure 4.2.1.2.b).
Sometimes the HLS system is referred to as IHS, for intensity (instead of
lightness), hue, and saturation. Soil scientists refer to this system by the
terms hue, value (for lightness) and chroma (for saturation). For this
exercise, we will use the term HLS.
There are additional color spaces, which are sometimes used in preference
to either the RGB or HLS models. The reader seeking
additional information about color spaces should refer to a text such as
Jensen (2016).
Although there exist different IHS methods, their main characteristic is the
removal of the lightness (or intensity) from the coarse resolution imagery,
while adding back the high resolution panchromatic based intensity. The
main difference across the methods is the procedure used to extract the
intensity image.
In this exercise, multiresolution merge is based on the transformations of
bands between color spaces. These transformations are essentially matrix
rotations of the data between the different color space axes. The differences
between the RGB and HLS color spaces can be visualized in Figure 4.2.1.2.b.
The Lightness (or intensity) component in the HLS space is the useful
component for merging datasets of different resolutions. In Landsat ETM+
data, the panchromatic band has four 15 m pixels for every one 30 m pixel of
the multispectral bands. The panchromatic band, which is sensitive to 0.5-0.9
μm electromagnetic radiation, gives a good average reflectance of the pixel.
We will use TerrSet’s color space transformation functions to merge the
Landsat panchromatic band with the multispectral Landsat bands 4, 3, and 2.
We will use the MACRO MODELER to develop the multi-step merge
procedure. The model will first transform three multispectral bands from
RGB to HLS color space. We will then expand the resolution of the resulting
Hue and Saturation outputs to match that of the panchromatic band. The
panchromatic band will be substituted for the Lightness data, and then the
data will be transformed back to RGB color space. The final step will be to
create the color composite image to view the merged image.
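For readers who find code easier to follow than flow charts, the sketch below condenses the substitution step into a few lines using Python's standard colorsys module. It is our illustration, not the colspace module itself, and it assumes the inputs are already co-registered at the pan resolution and scaled to the 0-1 range (in the exercise, the hue and saturation images are first expanded by a factor of 2 to match the pan band):

import numpy as np
import colorsys

def hls_merge(red, green, blue, pan):
    """RGB -> HLS, swap the panchromatic band in for lightness, HLS -> RGB."""
    rgb_to_hls = np.vectorize(colorsys.rgb_to_hls)
    hls_to_rgb = np.vectorize(colorsys.hls_to_rgb)
    h, l, s = rgb_to_hls(red, green, blue)
    return hls_to_rgb(h, pan, s)   # pan substitutes for lightness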
◆ ◆ ◆
HLS resolution merge with the MACRO MODELER
Menu Location: IDRISI GIS Analysis – Model
Deployment Tools – MACRO MODELER
1. Start the MACRO MODELER either from the main menu
or the icon bar.
2. In the MACRO MODELER window, use the menu to
select File and then Delete all temporary files.
◆ ◆ ◆
Figure 4.2.1.2.a Left: Hue color wheel. Right: Lightness and saturation
variation associated with a red hue.
Figure 4.2.1.2.b RGB color space and HLS color space.
◆ ◆ ◆
3. Click on File and select Set/Reset temporary file counter.
4. The Set/Reset TMP File Counter window will open. Click
on Reset.
5. If the value for Current counter value is not 0, change the
value in the Set next counter window to 0. Click OK to close
the Set/Reset TMP File Counter window.
6. Click on the Raster Layer icon to insert a data input into
the model.
7. In the Pick list window, select the data layer etm4.
8. Click on OK.
9. Repeat the above three steps to insert raster layer etm3, and
again to insert raster layer etm2.
10. Click on the Module icon to insert a command module
into the model.
11. In the Pick list window, select the module colspace.
12. Click on OK.
13. Right click with the cursor over the colspace module to
bring up the Parameters: colspace window.
14. Within the Parameters window, note that the input images
are listed as hue image band, lightness image band and
saturation image band.
15. Find the Additional Parameters section. In that section,
the Conversion type should list the default option of HLS to
RGB.
16. Click in the HLS to RGB field, and select RGB to HLS
from the pop-up menu (Figure 4.2.1.2.c).
17. Note that the input images are now listed as Red image
band, Green image band and Blue image band, in that order
(Figure 4.2.1.2.c). Therefore, we will want first to connect the
Landsat band that will be displayed as red, then green, then
blue.
◆ ◆ ◆
Figure 4.2.1.2.c The colspace Parameters window after the RGB to HLS
option has been selected.
◆ ◆ ◆
HLS resolution merge with the MACRO MODELER
(cont.)
18. Click on OK to close the Parameters window.
19. Click on the Connect icon and move your cursor into the
model.
20. Place cursor over the red band for the colspace module,
which will be etm4. Click, and keeping the left mouse button
depressed, drag the cursor over to the colspace module, and
release.
21. The model now shows the etm4 data layer as connecting
to the colspace module with the arrow indicating that it is an
input data file.
22. Repeat the previous three steps to connect etm3 to the
colspace module, then etm2 to the colspace module, in that
order.
◆ ◆ ◆
Your model should now show the three input Landsat ETM+ images
connected to the colspace module (Figure 4.2.1.2.d). The colspace module
provides for three outputs that currently have default names, such as tmp003,
tmp004 and tmp005. These files represent, in order, hue, lightness and
saturation. We will next expand the resolution of the hue and saturation data
to match the Landsat panchromatic image.
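Assuming expand performs simple pixel replication, the operation amounts to the following sketch (placeholder data; the array names are ours):

import numpy as np

hue = np.arange(9.0).reshape(3, 3)   # placeholder standing in for a 30 m hue image
hue_expand = np.repeat(np.repeat(hue, 2, axis=0), 2, axis=1)
print(hue_expand.shape)              # (6, 6): each 30 m pixel becomes four 15 m pixels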
Figure 4.2.1.2.d The colspace module within MACRO MODELER.
◆ ◆ ◆
HLS resolution merge with the MACRO MODELER
(cont.)
23. Click on the Module icon to insert a command module
into the model.
24. In the Pick list window, select the module expand.
25. Click on OK.
26. Repeat the above three steps to insert a second expand
module in the model. (The new module is placed in the model
somewhere to the right of the existing model elements. You
may need to scroll over to find them.)
27. Drag each of the expand modules and their outputs to line
up with the first and third raster file outputs from colspace,
which correspond to the hue and saturation outputs from the
module (for example, tmp003 and tmp005 in Figure
4.2.1.2.d).
28. Use the Connect icon to connect the first and third raster
output files from the colspace module to expand modules.
29. Right click on the first expand module. The Parameters
window will open.
30. In the blank text box next to Expansion factor, type 2
(Figure 4.2.1.2.e).
31. Click OK.
32. Right-click on the output file icon from this expand
operation, currently indicated with a temporary name such as
tmp006. The Change layer name window opens. Change the
file name to hue_expand.
33. Repeat steps 29-31 to specify an Expansion factor of 2 for
the second expand module. Rename the output file
sat_expand (sat is short for saturation) (Figure 4.2.1.2.e).
◆ ◆ ◆
Figure 4.2.1.2.e Entering the expansion factor of 2 in the expand module.
Figure 4.2.1.2.f The model with colspace and expand modules.
Now we will insert the panchromatic band into the model, substituting it for
the lightness data in the reverse of the color space transform, from HLS back
to RGB. The final step will be to construct a composite for viewing our
results.
◆ ◆ ◆
HLS resolution merge with the MACRO MODELER
(cont.)
34. Click on the Raster Layer icon to insert a data input into
the model.
35. In the Pick list window, select the data layer etm_pan.
36. Click on OK.
37. Line up the raster file etm_pan between the expanded hue
and saturation raster files (hue_expand and sat_expand in
Figure 4.2.1.2.f).
38. Click on the Module icon to insert a command module
into the model.
39. In the pick window, select the module colspace.
40. Click on OK.
41. This time we need to convert from HLS to RGB, which
we saw above was the default option in colspace. Therefore
we can directly connect the input files to the new colspace
module.
42. Use the connector icon to connect the expanded hue
output (hue_expand in Figure 4.2.1.2.f), etm_pan, and the
expanded saturation output (sat_expand), in that order.
43. Rename the temporary output files (e.g. tmp008, tmp009,
tmp010), by right-clicking on the purple rectangle
representing each file (starting with the file with the lowest
number), and changing the names to red_band, green_band
and blue_band, respectively.
44. Click on the Module icon to insert a command module
into the model.
45. In the Pick list window, select the module composite.
46. Click on OK.
47. To connect files to modules, it is very important to
connect the files in the correct order. The composite module
expects input files in the order blue, green, red. Therefore,
connect the output files from the colspace module in this
order: blue_band, then green_band, and then red_band.
48. The output from the composite module will have a
temporary name, such as tmp011. Right click on the purple
rectangle representing the temporary file, and in the Change
Layer Name dialog box, enter the new name,
432_pan_hls_merge.
49. Click on the Save icon, and when prompted, enter the
name for the model as hls_merge.
50. Click OK.
51. Review the model to see that all the components are
correctly linked (Figure 4.2.1.2.g).
52. Click on the run icon.
53. A window will open in which you are warned that the
files created by the model will overwrite any existing files of
those names. Click on Yes to all.
◆ ◆ ◆
Figure 4.2.1.2.g HLS resolution merge model.
As the model runs, the progress through the various steps is clearly illustrated
by the sequence of modules that turn green. After the program is completed,
the merged image will be displayed automatically.
Let us now compare the standard false color composite of bands 2, 3 and 4
we created in Section 4.2.1.1 with the merged image comprising bands 2, 3
and 4 with the panchromatic band.
Use the DISPLAY LAUNCHER to display the etm234 image. Select an area,
for example the east end of Deep Bay (the significant embayment to the north
of the airport), and zoom into the same area in both images (Figure 4.2.1.2.h).
Note how the detail is sharpened in the merged image. The spectral quality,
however, is slightly changed in the merged image. Both the water areas and
agricultural areas have different shades of blue from the original. This slight
mismatch in the spectral quality is due to the fact that the Landsat
panchromatic data are statistically different from the 234 lightness band. This
is because the method used to decompose bands in the HLS space assumes
that the green, red and near infrared bands contribute equally to the degree of
illumination in the lightness band (i.e. they are equally weighted in the
transformation). The Landsat ETM+ panchromatic band, however, overlaps
more with the near infrared part of the spectrum than with the visible. The
simple substitution of the panchromatic data for lightness results in a slightly
different spectral character to the resulting RGB image, where the green and
red bands have higher lightness than the original while the near infrared
exhibits less lightness.
Before starting the next section, close all files, including the MACRO
MODELER.
Figure 4.2.1.2.h Landsat 4,3,2 (RGB) false color composite image of the
eastern end of Deep Bay with 30 m resolution (top), and HLS merge of
Landsat bands 4,3,2 (RGB) with the panchromatic band, with approximately
15 m resolution (bottom).
4.2.2 Multiresolution merge using
PANSHARPEN
As mentioned previously, many approaches exist for merging images of
different resolutions. TerrSet provides a module
called PANSHARPEN that uses two approaches, Intensity-Hue-Saturation
(IHS) and Hyperspherical Color Sharpening (HCS). The two differ in how
they calculate and replace the Intensity component. In
this section, we will use the Intensity-hue-saturation approach, which is
mathematically equivalent to the HLS method described in section 4.2.1, but
produces a lightness image that does not assume equal weight across bands.
4.2.2.1 Intensity-Hue-Saturation approach
The IHS method within PANSHARPEN establishes the relationship between
the panchromatic band and the bands that overlap the range of wavelengths
of the panchromatic band (usually in the visible part of the spectrum). This
relationship is established through a multiple linear regression. In this
regression, the multispectral bands (e.g., Blue, Green, and Red), are
resampled to the resolution of the panchromatic band, and used as
independent variables, while the panchromatic band (Pan) is used as the
dependent variable.
The regression equation can be used to generate a predicted illumination
image (Pan’). This illumination image is subtracted from the original bands.
Illumination is restored by adding the original panchromatic band.
Pan′ = a + (B_blue × Blue band) + (B_green × Green band) + (B_red × Red band)
Band_merge = Band − Pan′ + Pan
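A minimal numpy sketch of this regression-and-substitution logic (our illustration of the equations above, not the PANSHARPEN code; the function name is hypothetical):

import numpy as np

def ihs_regression_merge(bands, pan):
    """bands: multispectral arrays already resampled to the pan grid;
    pan: the panchromatic array."""
    X = np.column_stack([b.ravel() for b in bands])
    X = np.column_stack([np.ones(X.shape[0]), X])         # intercept term a
    coef, *_ = np.linalg.lstsq(X, pan.ravel(), rcond=None)
    pan_pred = (X @ coef).reshape(pan.shape)              # Pan'
    return [b - pan_pred + pan for b in bands]            # Band - Pan' + Pan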
We will now create a panchromatic sharpened image using PANSHARPEN.
◆ ◆ ◆
PANSHARPEN
Menu Location: IDRISI Image Processing – Enhancement
– PANSHARPEN
1. Start PANSHARPEN from the main menu.
2. The PANSHARPEN graphical interface will open (Figure
4.2.2.1.a).
3. Select the radio button for Intensity-hue-saturation (IHS)
option, located under Pansharpen method.
4. Insert the bands etm2, etm3, and etm4 under Bands to be
processed.
5. Leave the default Resample type option (bilinear).
6. Check the option Background value, and set it to 0.
7. Leave the Clip minimum and Clip maximum value as
defaults.
8. Input etm_pan as the Panchromatic image used for
enhancement.
9. Type IHS as the prefix for the output files.
10. Click OK to run PANSHARPEN.
◆ ◆ ◆
In order to do the panchromatic merge, all multispectral bands are first
resampled to the resolution of the panchromatic band. As seen in the previous
chapter, resampling involves the transformation of the image and
recalculation of pixel values. In this case, we chose to use a bilinear
resampling type, which modifies output pixel values. If the original values of
the pixels are required, a nearest neighbor transformation should be used
instead.
When a background value is specified, that value is ignored during the
multiple regression operation. The Clip minimum and Clip maximum values
restrict the output to a range defined by the user. For example, if the Clip
minimum is set to zero, any pixel value less than zero after the
transformation is reclassified to zero.
Here we use bands 2 through 4 (green, red, and near infrared), since the
panchromatic band for Landsat 7 ETM+ overlaps that range of the
electromagnetic spectrum. If we were using Landsat 8 OLI instead, we could
only do the transformation for the blue, green and red bands, since the OLI
panchromatic band overlaps only with the visible wavelengths.
Figure 4.2.2.1.a PANSHARPEN interface with parameters.
After the process finishes, the multiresolution merge for one band is
displayed. If you check the metadata for the newly created files, you will see
that the IHS bands have a resolution of 15 meters.
We can create a false color composite to compare the enhancement to the
original bands.
◆ ◆ ◆
Create a false color composite image
Menu Location: File – Display – COMPOSITE
1. Start the COMPOSITE program using the main menu or
tool bar.
2. In the COMPOSITE dialog box, specify the file name for
the Blue image band as IHS_etm2. Click on OK to close the
Pick list.
3. Specify the Green image band as IHS_etm3.
4. Specify the Red image band as IHS_etm4.
5. Select the radio button for the option Linear with saturation
points under Contrast stretch type.
6. Leave the default Percent to be saturated from each end of
the grey scale.
7. Enter the Output image filename in the text box
provided: IHS_234_fcc.
8. Leave all other defaults and click OK to create and display
the false color composite (Figure 4.2.2.1.b).
◆ ◆ ◆
If you zoom in to the bay area within the IHS_234_fcc color composite
(Figure 4.2.2.1.b), you will see that the coastline and roads are more defined
than in the color composite created with the original bands (etm234). In
IHS/HLS based transformations, the bands merged should have a spectral
range similar to that of the panchromatic band. Therefore, if the aim is to spatially
enhance bands outside the spectral coverage of the panchromatic band (e.g.
shortwave infrared), this method is not appropriate.
Figure 4.2.2.1.b False color composite (zoom in to the bay area), before
(top), and after IHS pansharpening (bottom).
4.2.3 Multiplicative Merge
Our aim in this section will be to develop a merge procedure that does not
change the relative spectral properties of the bands (i.e. all bands are adjusted
with the same amount of brightness), as was the case in the HLS
transformation of section 4.2.1.2. One way to do this is to use a multiplicative
approach that uniformly enhances the lower resolution spectral bands with
the panchromatic band.
Rather than simply multiplying the panchromatic band directly with the
multispectral bands, we will first filter the panchromatic band. If you
remember from our earlier exercise in filtering, a high pass filter will
emphasize the edges of features, while simultaneously removing much of the
average brightness signal associated with the feature. By using a high pass
filtered image as an input into the image merge, we will sharpen the
boundaries of features, while also retaining the integrity of the spectral
characteristic of the feature. Let’s create the model and compare the results to
the IHS/HLS merge.
Note: In this model, we will assume you have gained familiarity with the
MACRO MODELER, and therefore the instructions will be shortened
somewhat for commands that we have already used a number of times. If
necessary, you can always refer back to Section 4.2.1 to see more detailed
instructions for these commands.
◆ ◆ ◆
Multiplicative resolution merge with the MACRO
MODELER
Menu Location: IDRISI GIS Analysis – Model
Deployment Tools – MACRO MODELER
1. Start the MACRO MODELER either from the main menu
or the icon bar.
2. In the MACRO MODELER window, click on File and
select Delete all temporary files.
3. Click on File and select Set/Reset temporary file counter.
4. The Set/Reset TMP File Counter window will open. Click
the Reset button. If the value for Current counter value is not
0, change the value in the Set next counter window to 0. Click
OK.
5. Use the Raster Layer icon to insert data layer etm2 into the
model. Repeat to insert etm3 and etm4 into the model, in that
order.
6. Use the Module icon to insert the expand command module
into the model. Repeat to insert two more expand modules, to
create a total of three expand modules.
7. Arrange the expand modules, one each in front of the three
input files, etm2, etm3 and etm4.
8. Use the Connector icon to connect each of the three purple
rectangles representing the raster layers, etm2, etm3 and
etm4, to an expand module (Figure 4.2.3.a) (Note that your
temporary output names may have different numbers, e.g.
tmp002 where the figure shows tmp000. This is not
significant. The important issue is how the files are
connected, and for some program modules, such as
composite, the order in which they are combined.)
9. Sequentially right-click in each expand command module,
and each time the Parameter window opens, set the
Expansion factor to 2.
◆ ◆ ◆
Figure 4.2.3.a The initial model with three expand modules.
Having added the Landsat multispectral bands and expanded them to the
same resolution as the Landsat panchromatic band, we will now add the
panchromatic band to the module, and filter it to highlight feature boundaries.
Once we have filtered the panchromatic band, we will scale it so that its
histogram and mean are centered on one.
◆ ◆ ◆
Multiplicative resolution merge with the MACRO
MODELER (cont.)
10. Use the Raster Layer icon to insert data layer etm_pan
into the model.
11. Arrange the etm_pan rectangle, in line under the other
input data layers.
12. Use the Module icon to insert the filter command module
into the model.
13. Use the Connect icon to connect the etm_pan layer to the
filter module.
14. Right-click on the filter module to bring up the
Parameters window.
15. Right-click in the field to the right of the filter type, and
select High pass (Figure 4.2.3.b).
16. Click on OK to close the Parameters window.
◆ ◆ ◆
Figure 4.2.3.b Setting the Filter type to High Pass.
◆ ◆ ◆
Multiplicative resolution merge with the MACRO
MODELER (cont.)
17. Use the Module icon to insert a stretch command module
in the model.
18. Move the stretch module and its output below the filter
module, to save space.
19. Use the Connect icon to connect the output of the filtered
etm_pan image (tmp003 in Figure 4.2.3.c) to the stretch
module.
20. Right-click on the stretch module to bring up the
Parameters window.
21. Click in the field to the right of Output data type, and
select Real from the pop-up menu.
22. Click in the field to the right of Exclude background?, and
select No.
23. Click in the field to the right of Lowest non-background
output value, and enter 0.
24. Click in the field to the right of Highest non-background
output value, and enter 2.
25. Leave the remaining fields with their default values
(Figure 4.2.3.d), and close the window by clicking on OK.
◆ ◆ ◆
Figure 4.2.3.c The model with the filter and stretch modules added and
connected.
Figure 4.2.3.d The stretch module Parameter window.
The Parameters window for stretch (Figure 4.2.3.d) had # symbols in the
fields for Lower bound and Upper bound. For this module, # is used to
indicate that the program should use the appropriate values from the image
itself. Thus, in this case, the input image will be queried, and the stretch
will extend from the minimum to the maximum value found in the image,
mapped to the specified output range.
For the fields Lowest non-background output value and Highest non-
background output value, we chose values of 0 and 2, so that the average
value after the stretch would be approximately 1.0 (i.e. half way between the
two extremes).
We are now ready to combine the filtered and stretched panchromatic band
with the multispectral bands by a simple multiplication. Since the filtered
panchromatic band is centered on approximately 1.0, the only changes in the
multispectral bands will be on the edges of features, and these will be made
slightly lower or higher, thus accentuating those edges.
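Condensed into code, the whole multiplicative merge amounts to something like the following sketch (our illustration of the model we are about to build, with a simple min-max rescaling standing in for the stretch module):

import numpy as np
from scipy.ndimage import convolve

def multiplicative_merge(band_expanded, pan):
    hp = np.full((3, 3), -1 / 9)
    hp[1, 1] = 8 / 9
    edges = convolve(pan.astype(float), hp, mode='nearest')   # high-pass the pan band
    stretched = 2 * (edges - edges.min()) / (edges.max() - edges.min())  # 0-2, mean near 1
    return band_expanded * stretched   # accentuates edges, leaves flat areas unchanged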
◆ ◆ ◆
Multiplicative resolution merge with the MACRO
MODELER (cont.)
26. Click on the Module icon and insert an overlay command
module.
27. Repeat the previous step twice, to create a total of three
overlay modules in the model.
28. Line up each pink parallelogram that represents an
overlay module with an output temporary file from the three
expand modules.
29. Use the Connect icon to connect each output temporary
file from the expand modules to the adjacent overlay module.
30. Use the Connect icon another three times to connect the
output file from the stretch module (tmp004 in Figure 4.2.3.e)
to each overlay module.
31. Right-click on the first overlay module to bring up the
Parameters window.
32. Click on the field to the right of Operations, and select
Multiply from the pop-up menu.
33. Repeat the previous two instructions for each overlay
module, so that each module is set to multiply*.
*Alternatively, you could use the multiply module instead of
overlay.
◆ ◆ ◆
Figure 4.2.3.e The model with the three overlay modules connected.
We are now ready to create a false color composite image of the merged data.
◆ ◆ ◆
Multiplicative resolution merge with the MACRO
MODELER (cont.)
34. Use the Module icon to insert the composite command
module in the model.
35. Connect the temporary raster files from the overlay
modules so that if you follow along a row, the result of the
expansion and overlay operation for the etm2 file is connected
first, then etm3, and finally etm4. In Figure 4.2.3.f, this would
be in the order tmp005, tmp006 and tmp007.
36. The output from the composite module will have a
temporary name, such as tmp008. Right click on the purple
rectangle representing this temporary file, and in the Change
Layer Name dialog box, enter the new name,
432_pan_mult_merge.
37. Click on OK.
38. Click on the Save icon.
39. Specify the Model file name mult_merge.
40. Click on OK.
41. Review the model to see that all the components are
correctly linked (Figure 4.2.3.f).
42. Click on the run icon.
43. A window will open in which you are warned that the
files created by the model will overwrite any existing files of
those names. Click on Yes to all.
◆ ◆ ◆
Figure 4.2.3.f The multiplicative resolution merge model.
As we did with the HLS merge, let us now compare the standard false color
composite of bands 2, 3 and 4 created in Section 4.2.1 to the image
comprising bands 2, 3 and 4 merged with the panchromatic band. Use the
DISPLAY LAUNCHER to display the etm234 image. Zoom into the same
area of the eastern end of Deep Bay as we did before (Figure 4.2.3.g). Note
that we have been able to match accurately the spectral character of the mud
flats that extend into Deep Bay, as well as the spectral signature of the
exposed soil on the tops of hills. We can see that we successfully retained the
relative spectral characteristics of the image. Also, looking at the urban areas
in the zoomed images, we can see a definite sharpening of the roads and
buildings.
Figure 4.2.3.g Landsat 4,3,2 (RGB) false color composite image of the
eastern end of Deep Bay with 30m resolution (top) and multiplicative merge
of Landsat bands 4,3,2 (RGB) and the panchromatic band, with
approximately 15m resolution (bottom).
Note that when creating the color composites, we are stretching the bands.
The different color composites may be stretched differently, depending on
the ranges of values within the bands, and the differences seen may therefore
be partly due to this differential stretching. To corroborate the differences, you can
display the merged bands together in a composition and compare pixel values
using the IDENTIFY tool as shown in Figure 4.2.3.h.
Figure 4.2.3.h Example of pixel value comparison for the original band 4
(etm4), the IHS merged band 4 (ihs_etm4), the COLSPACE based HLS
merged band 4 (red_band) and the multiplicative merged band 4 (tmp007).
Note that the IHS method provided in PANSHARPEN produces pixel values
more similar to the original values than either the multiplicative or HLS
approach. Although the pixel values of the multiplicative approach are
different from those of the original bands, the color composite appears
similar because all bands are adjusted by the same amount of brightness.
CHAPTER 5
SPECTRAL ENHANCEMENT
TECHNIQUES
5.1 Introduction
This chapter is a companion to Chapter 6 on ratios and presents different
spectral enhancement techniques. We will start by investigating methods for
enhancing information in highly correlated bands. With highly correlated
data, the colors in the image will not appear very vibrant. We will explore
techniques to address this problem. One of the techniques, principal
component analysis, has very widespread use in remote sensing because it is
an effective method of dealing with another problem commonly encountered,
namely the need to visualize more than three bands at one time. Principal
component analysis also allows us to remove image noise.
After the section on highly correlated data, we then look at specialized image
enhancement. We will segment an image, separating water from land. We
will then apply different false color composites to the water and land features,
thus creating an overall optimal image.
5.2 Download Data for this Chapter
In this chapter, we will work with two different image sets, one for the
section on handling highly correlated data (5.3) and one for the section on
segmenting and density slicing for advanced display (5.4). If you have not done so
already, download the data from the Clark Labs’ website for Chapter 5 and
place it into a new subfolder within the \RSGuide folder on your computer.
Note: Section 1.3.1 provides detailed instructions on how to download the
data. Also, the procedure for setting up the RSGuide folder on your computer
is described.
5.3 Enhancing Highly Correlated Data using
Data Transformations
5.3.1 Background
The NASA airborne Thermal Infrared Multispectral Scanner (TIMS)
instrument collects six bands of long wavelength infrared radiation (Table
5.3.1.a). The thermal infrared part of the electromagnetic spectrum includes
an atmospheric window from 8-12 μm, which is the region where the TIMS
bands are located. Thermal wavelengths are characterized by emission of
energy from objects that are at approximately room temperature. The
measured thermal energy also includes a reflected component, but the
magnitude is generally small, and can be ignored.
Table 5.3.1.a. Approximate TIMS band passes.
Each multispectral thermal band is dominated by the temperature of the
surface radiating energy. This is because the total radiance is proportional to
the fourth power of the temperature of the surface (the Stefan-Boltzmann
radiation law). Furthermore, the wavelength at which maximum radiance
occurs is a function of the inverse of the temperature (Wien's displacement
law). The dominant influence of temperature in multispectral thermal data
has the effect of making the bands highly correlated. False color images made
from highly correlated data are characterized by gray tones, and very pale
colors, with low saturation. In this exercise, we shall investigate three
methods of enhancing highly correlated data: principal component analysis
(PCA), decorrelation stretch (which is based on PCA), and an intensity-hue-
saturation (IHS) stretch. These methods, especially PCA, also have general
use in image processing, for example in the analysis of Landsat imagery.
Thus, for example, we will also use PCA as a change detection method.
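As a quick numerical check of the two radiation laws cited above (the constants are approximate):

b_wien = 2898.0                  # Wien's displacement constant, in um*K
T = 300.0                        # roughly room temperature, in kelvin
print(b_wien / T)                # ~9.7 um: the emission peak falls inside the 8-12 um window

print((301.0 / 300.0) ** 4 - 1)  # ~0.013: a 1 K change shifts total radiance by ~1.3%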
The data we will work with is a subscene from a flight line over Mauna Loa,
Hawaii, and was collected at 22:03 GMT, September 30, 1989. The image
covers an area approximately 2.5 kilometers on a side. The band 3 (9.0 to 9.3
μm) radiance image is shown in Figure 5.3.1.a. This is a daytime image, and
the image is clearly dominated by topographic effects. For example, notice
the way temperature differences associated with solar heating cause the
cinder cones to stand out. The cinder cones are quite distinctive because they
have a conical shape, with a central depression.
Figure 5.3.1.a TIMS Band 3 (9.0 - 9.3 μm) image of lava flows on Mauna
Loa, Hawaii.
It is apparent from Figure 5.3.1.a that there are slight temperature differences
associated with the different lava flows. These lava flows are all historic
flows, and thus have long since cooled from their original molten state.
Therefore, the brightness variations are unrelated to the original molten lava
temperatures. Instead, the temperature differences you can see in this image
are entirely due to differences in heating, due to slope and aspect effects, as
well as differences in the rate at which heat is absorbed, due to variations in
surface properties. As an example of how surface properties can affect local
temperature, you might think about the difference in temperature between
dark vehicles, which absorb more heat, and light or shiny vehicles.
The differences in temperature between the different lava flows are relatively
small, and mapping the different lava flows from Figure 5.3.1.a would be
quite difficult. In this exercise, we will investigate spectral enhancement
methods to make the different lava flows clearer.
5.3.2 Preparation
In Section 5.2 you should have already downloaded the data. However, we
still need to set the Project and Working Folders for the TIMS data.
Before starting you should close any dialog boxes or displayed images in the
TerrSet workspace.
◆ ◆ ◆
Create a new project file and specify the working folders
with the TerrSet EXPLORER
1. Start the TerrSet EXPLORER from the main menu, or by
clicking the (+) sign in the vertical tab located on the left side
of the TerrSet workspace.
2. In the TerrSet EXPLORER window, select the Projects
tab.
3. Right click within the Projects pane, and select the New
Project Ins option.
4. A Browse For Folder window will open. Use this window
to navigate to the RSGuide folder, which is the folder you
created on your computer for this manual's data. Now
navigate to the Chap5_3 subfolder, within the
RSGuide/Chap5 folder.
5. Click OK in the Browse For Folder window.
6. A new project file, Chap5_3, will now be listed in the
Project pane of the TerrSet EXPLORER. The working folder
will also be listed in the Editor pane.
7. Minimize the TerrSet EXPLORER by clicking on the (-)
sign in the upper left corner of the TerrSet EXPLORER
window.
◆ ◆ ◆
Before we begin our enhancements, let’s first simply look at the data. We will
display two bands as single band images, and thus we will use a GreyScale
palette. In addition, we will create a false color composite, using three
different bands.
◆ ◆ ◆
Initial display of images
Menu: File – Display – DISPLAY LAUNCHER
1. Start the DISPLAY LAUNCHER from the main menu or
icon bar.
2. In the DISPLAY LAUNCHER window, double click in the
text box for the file name, and select TIMSb1.
3. Select a GreyScale palette.
4. Click on OK to display the image.
5. Start the DISPLAY LAUNCHER again.
6. In the DISPLAY LAUNCHER window, double click in the
text box for the file name, and select TIMSb3.
7. Select a GreyScale palette.
8. Click on OK to display the image.
◆ ◆ ◆
Compare the two images, and note how similar, and thus how highly
correlated, the two images appear to be. We will now create the false color
composite. Refer to Table 5.3.1.a for the wavelength regions associated with
each band.
◆ ◆ ◆
Create a color composite image
Menu Location: File – Display – COMPOSITE
1. Start the COMPOSITE program using the main menu or
tool bar.
2. In the COMPOSITE dialog box, specify the file name for
the Blue image band TIMSb1. Click on OK to close the Pick
list.
3. Specify the Green image band as TIMSb3.
4. Specify the Red image band as TIMSb5.
5. Enter the Output image filename in the text box provided:
135fcc.
6. Accept all other defaults, and click OK to create and
display the false color composite (Figure 5.3.2.a).
◆ ◆ ◆
Figure 5.3.2.a TIMS false color composite. Band 1 (8.15 - 8.5 μm) as blue,
band 3 (9.0 - 9.3 μm) as green and band 5 (10.3 - 11.1 μm) as red.
The resulting false color composite (Figure 5.3.2.a) shows distinct
differences in colors between the different lava flows, suggesting that there
are chemical or weathering differences between the various flows. Although
the false color composite helps a great deal in separating the different flows,
it is still rather difficult to separate the different units because the colors are
rather pale.
5.3.3 Principal Component Analysis (PCA)
Principal component analysis (PCA) is a statistical method for generating
new, uncorrelated variables, from a data set. If you are not familiar with
PCA, you should consult a remote sensing text. Most remote sensing texts,
including Lillesand et al. 2015, and Jensen 2016, have excellent descriptions
of this method. For the sake of completeness, however, we provide a very short
reminder of the purpose and concepts of PCA.
TerrSet offers two PCA methods. In the forward t-mode process, each image
band is analyzed as a (temporal) variable, resulting in a new set of principal
component images that are uncorrelated with each other and explain
progressively less of the variance found in the original set of bands. A table
of the component loadings and eigenvectors is also output. The second
method, the forward s-mode process, treats each pixel location in the original
image bands as a (spatial) variable. As a result, new images are produced that
are the component loadings and eigenvectors of the input series
transformation. The output table produced is the uncorrelated principal
component scores.
We will be using the t-mode process. It involves a rotation and translation of
the original band axes to produce an equal number of new bands that are
orthogonal (at right angles to each other in the data space) and uncorrelated
(Figure 5.3.3.a). The first principal component band, PCA band 1, is oriented
to capture the maximum variance. Thus, in the case of Figure 5.3.3.a, PCA
band 1 is oriented along the diagonal of the bispectral plot, along the
direction of the main data distribution. Subsequent bands are oriented to
capture the maximum remaining variance, and are perpendicular to the earlier
bands. PCA produces as many new bands as there were old bands, although
most of the information is usually concentrated in the first few new
bands, which comprise most of the variance. T-mode PCA produces
component images and loading coefficients, where the images represent the
spatial pattern that explains the largest amount of variability across all bands,
and the loadings represent how correlated that pattern is with the different
bands.
Figure 5.3.3.a Bispectral plot of two band data showing original band axes
and the new axes associated with the principal components.
◆ ◆ ◆
Apply a principal component analysis with PCA
Menu location: IDRISI Image Processing –
Transformation – PCA
1. Start the PCA program using the main menu.
2. In the PCA dialog box window, click on the button to
Insert layer group. In the Pick List window, select the tims
raster group file.
3. Set the Number of components to be extracted to 6 (the
maximum possible, if there are 6 input files).
4. In the text box next to Prefix for output files (can include
path), enter PCA.
5. In the Text output section of the PCA dialog box,
select the radio button for Complete output.
6. Accept all other defaults. (See Figure 5.3.3.b for the
completed dialog box.)
7. Click on OK.
◆ ◆ ◆
The Module Results window will display a text file of the results of the
analysis (Table 5.3.3.a).
Figure 5.3.3.b The PCA dialog box with the TIMS data specified.
Table 5.3.3.a PCA results for TIMS data of Hawaii
One of the difficulties of using PCA is that the output images can be difficult
to interpret. Nevertheless, by carefully examining the output text from the
PCA program (Table 5.3.3.a), some interpretation can usually be made.
Therefore, these results should be saved, for example by clicking on the Save
to File button at the bottom of the Module Results window.
The Module Results includes information on:
• The variance/covariance matrix (i.e. the variability of the bands, and
how they relate to one another).
• The correlation matrix (the relationship between the bands).
• The eigenvalues of the principal components (amount of variance
explained, or accounted for by each new component).
• The eigenvalues expressed as a proportion of the total (“% var.”) in the
output.
• The eigenvectors, which give the equation to convert the input data to
the output data.
• The Loadings, which provide information on the correlation between
the original bands and the new components.
For the discussion below, you should refer to the relevant images, and the
Module Results (Table 5.3.3.a), to see if you can verify the interpretations
suggested.
The files created by the PCA module have names that are generated
systematically. Each name starts with PCA (the prefix that we specified in
the program), followed by the method used (T-Mode) and the suffix cmp#,
where the # indicates the component number.
Use the TerrSet DISPLAY LAUNCHER to display the 6 output files, each
time using a GreyScale Palette, as described below.
◆ ◆ ◆
Display the PCA images
Menu: File – Display – DISPLAY LAUNCHER
1. Start the DISPLAY LAUNCHER from the main menu or
icon bar.
2. In the DISPLAY LAUNCHER window, double click in the
text box for the file name, and select pca_t-mode_cmp1.
3. Select a GreyScale palette.
4. Click on OK to display the image.
5. Repeat steps 1-4 above, five times, to display the five files,
pca_t-mode_cmp2 through pca_t-mode_cmp6. Remember to
use the GreyScale palette each time.
◆ ◆ ◆
Note that principal component images 3 and 5 are very dark, and therefore
you will need to change the contrast stretch for these images.
◆ ◆ ◆
Change palette and contrast enhancement of an image
1. Make sure the pca_t-mode_cmp3 image is the focus of the
TerrSet workspace by clicking in the image.
2. Find the Composer window, and select the button for Layer
Properties.
3. In the Layer Properties window, move the slider for
Display Max until the display has a better contrast. (A good
value appears to be about 16, however the choice is quite
subjective.)
4. Now move the Display Min slider to improve the contrast
further. (A good value appears to be about -30.0)
5. Click on the buttons for Apply, Save and OK.
6. Repeat steps 1-5 for pca_t-mode_cmp5, selecting
appropriate Display Max and Display Min values.
◆ ◆ ◆
Components should be interpreted by looking at both the spatial pattern in the
image components and the values in the text output tables. In interpreting the
values for the TIMS data (Table 5.3.3.a), we see that the first component (C1
= PCA_T-Mode_cmp1) accounts for over 99.4% of the original
variance. This suggests that the majority of the variability in the images is
common to all the images. In this case, the common information is the
temperature of the rocks. The remaining 5 components represent only 0.6%
of the variance in the data. However, it is this 0.6% that is of interest to us.
Note how the images appear to get progressively noisier with higher
numbers. For example, PCA_T-Mode_cmp2 and PCA_T-Mode_cmp3 show
the pattern of lava flows clearly; the pattern is weaker in PCA_T-
Mode_cmp4, while PCA_T-Mode_cmp5 and PCA_T-Mode_cmp6 are
dominated by noise, presenting no clear geographic pattern.
The eigenvectors, as explained before, represent the formula for the
calculation of the new principal component bands. Thus, they can help us
understand what each output band means. For example, we find that the
eigenvectors for C1 are all positive, and similar (0.44 to 0.32). This suggests
that C1 (the image PCA_T-Mode_cmp1) represents an average of all the
bands. Indeed, the loadings, the last section of the table, show that C1 is
highly correlated with all the input bands (the values vary between 0.97 and
0.99).
Likewise, we can interpret the eigenvectors of each of the remaining principal
components, or output bands, in terms of the original input values. For C2,
the eigenvectors are negative for the first three bands, and positive for the
remaining three. This suggests that PCA_T-Mode_cmp2 can be understood
to be the difference between the first three bands and the second three
bands. Component 3 (C3), on the other hand, is generated by the difference
between the first two bands and the third band. However, without knowledge
of the emittance spectra of the lava flows, interpreting the significance of the
bands is difficult. We can, however, see from the eigenvectors that C1 is an
average of the input data, and that Components 2 to 6 are all enhancing subtle
spectral features in the original images. A simple visual inspection of the
images tells us that PCA_T-Mode_cmp2, PCA_T-Mode_cmp3 and PCA_T-
Mode_cmp4 have some interesting information, whereas PCA_T-
Mode_cmp5 and PCA_T-Mode_cmp6 have relatively little.
As a final step, the PCA components can be visualized as a false color
composite, using the program COMPOSITE and principal components 2, 3
and 4 as the input bands, as described below.
◆ ◆ ◆
Create a color composite image
Menu Location: File – Display – COMPOSITE
1. Start the COMPOSITE program using the main menu or
tool bar.
2. In the COMPOSITE dialog box, specify the file name for
the Blue image band PCA_ T-Mode_Cmp2. Click on OK to
close the Pick list.
3. Specify the Green image band as PCA_ T-Mode_Cmp3.
4. Specify the Red image band as PCA_ T-Mode_Cmp4.
5. Enter the Output image filename in the text box provided:
PCAfcc234.
6. Accept all other defaults (Figure 5.3.3.c), and click OK to
create and display the false color composite (Figure 5.3.3.d).
◆ ◆ ◆
Figure 5.3.3.c COMPOSITE dialog box, with the data specified.
Figure 5.3.3.d False color composite of TIMS data. Principal components
2,3,4 as BGR.
Based on the discussion above, we can understand that by excluding PCA_T-
Mode_cmp1, we are excluding the majority of the temperature information,
which is of less interest to us. The false color composite without this first
principal component (Figure 5.3.3.d) is remarkably impressive, given that
the components displayed comprise less than 1% of the original variance. The lava flows are now very
clearly differentiated.
Note, however, that the PCA false color composite does appear rather noisy,
with a distinctive striping. Since Component 4 had noise in the form of
stripes, this noise was carried to the composite. The stripes are from the scan-
lines. It is inevitable that we will enhance the noise in this type of PCA
operation, since we are boosting a minor part of the signal (the spectral
variation) at the expense of the majority of the signal (the temperature
information).
5.3.4 PCA Decorrelation Stretch
PCA is a powerful data transformation technique that has many applications
and variations. One variation on PCA is the decorrelation stretch. In a
decorrelation stretch, the image is first transformed with PCA. Selected
principal component bands are then stretched, and a reverse PCA
transformation is applied, in which the data are retransformed back to the
original data space. Figure 5.3.4.a illustrates how the data shown in Figure
5.3.3.a might appear after a decorrelation stretch. In comparing the two
figures, note how the data have been stretched out in the direction of PC2,
thus filling the bispectral plot area to a much greater degree.
Gillespie et al. (1986) point out that a very useful attribute of the
decorrelation stretch is that, if a false color composite is made of the
decorrelated data, the color saturation will be much stronger compared to
that of a false color composite of the original bands. However, the hues
should be unchanged, thus making it possible to interpret the colors in terms
of the original spectral bands. (Hue refers to the dominant color, such as red,
or green, and saturation refers to the purity of the color. For example,
pink has less saturation than red.)
Figure 5.3.4.a Bispectral plot showing the effects of a decorrelation stretch.
Compare to Figure 5.3.3.a, which shows the data prior to the stretch.
Further information on these terms is provided in Section 4.2.2, where color
terminology is explained in detail. Let us investigate whether this
improvement of the colors seems to work with the Hawaii TIMS data.
In Section 5.3.3 we already calculated the principal components for the
Hawaii data. Therefore, in this exercise we only need to apply a stretch to the
selected bands, and then re-transform the principal components back to the
original bands. The TerrSet PCA program, which we used in Section 5.3.3,
has an option to do the reverse transformation.
We will specifically only stretch principal components 2, 3 and 4, as these
components appear to carry most of the spectral information. Principal
component 1 is mostly temperature and will be used in the retransformation
back to the original space, but it is not stretched. Principal components 5 and
6 are mainly noise, and will be excluded entirely.
The stretching is applied with the TerrSet program SCALAR, a program for
applying simple arithmetic (scalar) operations to an image, including
multiplication, addition, division, exponentiation, and subtraction.
◆ ◆ ◆
Multiplying an image by a number with SCALAR
Menu Location: IDRISI GIS Analysis – Mathematical
Operations – SCALAR
1. Start the SCALAR program using the main menu.
2. In the SCALAR dialog box, use the pick list button (…) to
specify the name of the Input image as PCA_T-Mode_Cmp2.
Click on OK to close the Pick list.
3. In the text box labeled Output image, type PCA_T-
Mode_Cmp2b.
4. In the text box labeled Scalar value, type 2.
5. In the Operation section of the window, select the radio
button for Multiply (Figure 5.3.4.b).
6. Click OK.
7. The stretched image will be displayed automatically;
however, you can close the image, as we don’t need to see it.
8. Now stretch PCA_T-Mode_Cmp3 by a factor of 2, to
create the output PCA_T-Mode_Cmp3b. The simplest way to do
this is to type over the number 2 in the file names in the
SCALAR text boxes, changing the Input image from PCA_T-
Mode_Cmp2 to PCA_T-Mode_Cmp3, and the Output image to
PCA_T-Mode_Cmp3b. Click OK.
9. Repeat the previous procedure to stretch PCA_T-
Mode_Cmp4 by a factor of 2, to create the output PCA_T-
Mode_Cmp4b.
◆ ◆ ◆
Figure 5.3.4.b The SCALAR dialog box, with the Multiply option selected.
Now that we have stretched the data, we have two minor steps to complete
before we can run the inverse PCA program. First, we must create a raster
group file. A raster group file tells TerrSet that a set of files
belongs together as a single collection. Raster group files were introduced in
Section 1.3.7. If you have trouble with this section, you may want to review
that material.
◆ ◆ ◆
Create a raster group file collection with the TerrSet
EXPLORER
1. If the TerrSet EXPLORER window is not already open,
open it again using the (+) sign on the TerrSet EXPLORER
panel located in the left portion of the TerrSet workspace, or
using the menu icon.
2. Click on the tab for Files.
3. If the files are not listed in the Files pane, double click on
the directory name to display the files.
4. If need be, slide the divider for the Metadata pane down, so
you can see all the files we need to work with: PCA_T-
Mode_Cmp1.rst through PCA_T-Mode_Cmp6.rst, as well as
the three new files PCA_T-Mode_Cmp2b.rst, PCA_T-
Mode_Cmp3b.rst and PCA_T-Mode_Cmp4b.rst.
5. Click on PCA_T-Mode_Cmp1.rst, so the file is
highlighted.
6. Keeping the keyboard Ctrl button pressed, now click on the
following files in the order listed:
PCA_T-Mode_Cmp2b.rst
PCA_T-Mode_Cmp3b.rst
PCA_T-Mode_Cmp4b.rst
PCA_T-Mode_Cmp5.rst
PCA_T-Mode_Cmp6.rst
7. You should now have 6 files highlighted.
8. Right click in the Files pane. Select the menu option for
Create – Raster Group (Figure 5.3.4.c). This will create a file
Raster group.rgf.
9. Right Click on the file Raster group.rgf in the Files pane.
Select Rename, and change the name to PCA2. Press the
Enter key on the keyboard.
◆ ◆ ◆
Figure 5.3.4.c Selecting the raster files to combine into a Raster Group File.
We are ready now for the final step for reconstructing the bands. The process
of band reconstruction (or inverse T-mode PCA) is done through a weighted
linear combination of components, where weights are the corresponding
eigenvectors for each band. To reconstruct TIMS band 1 in this exercise, for
example, component 1 (which was not stretched) is multiplied by the band 1
element of component 1's eigenvector (0.443115), the stretched image of
component 2 is multiplied by the band 1 element of component 2's
eigenvector (-0.241007), the stretched image of component 3 is multiplied
by the band 1 element of component 3's eigenvector (-0.700565), and the
stretched image of component 4 is multiplied by the band 1 element of
component 4's eigenvector (-0.119028). All the results are then added
together. The general formula is the following:
$$\text{Reconstructed Band}_x = EV_{x,C_1} \cdot C_1 + EV_{x,C_2} \cdot C_2 + \cdots + EV_{x,C_n} \cdot C_n$$
where x is the band being reconstructed. In the calculation of the new band,
you can choose which components to include. A common implementation of
PCA is to exclude components that represent noise (e.g. components where
the pattern is dominated by salt-and-pepper speckle, stripes, or other noise). In this
example, C5 and C6 have a clear striping pattern that dominates the images
and therefore will not be included in the reconstruction.
◆ ◆ ◆
Perform an inverse principal component analysis with
PCA
Menu location: IDRISI Image Processing –
Transformation – PCA
1. Start the PCA program using the main menu.
2. In the PCA window, select the radio button for Inverse T-
Mode. The options in the dialog box will immediately change.
3. For the T-Mode Components RGF file name, use the pick
list button (…) to select the raster group file we have just
created: PCA2. Click OK to close the Pick List window.
4. For the Input T-Mode eigen file (*.eig) select PCA_T-
Mode.
5. In the text box for List of components to be used (e.g. 1-4,
6), enter 1-4. (We only select the first 4 PCA bands, as
discussed above, because the remaining two are dominated by
noise).
6. In the text box next to Prefix for output files (can include
path):, enter Decor.
7. In the text box next to Output bands (e.g. 1-4, 6), enter 1-6.
8. See Figure 5.3.4.d for the completed dialog box.
9. Click on OK.
◆ ◆ ◆
Figure 5.3.4.d Inverse PCA transformation in the PCA dialog box.
We can now create the false color composite, and see whether we have
indeed improved the colors of the lava flows.
◆ ◆ ◆
Create a color composite image
Menu Location: File – Display – COMPOSITE
1. Start the COMPOSITE program using the main menu or
tool bar.
2. In the COMPOSITE dialog box, specify the file name for
the Blue image band Decor1. Click on OK to close the Pick
list.
3. Specify the Green image band as Decor3.
4. Specify the Red image band as Decor5.
5. Enter the Output image filename in the text box provided:
135decor.
6. Accept all other defaults, and click OK to create and
display the false color composite (Figure 5.3.4.e).
◆ ◆ ◆
Figure 5.3.4.e Decorrelation stretch of TIMS bands 1,3,5 as BGR.
You should compare this decorrelation stretch to the original false color
composite, 135fcc, created in Section 5.3.2 (Figure 5.3.2.a). You should
notice that the colors are indeed much brighter. The details of the lava flow
on the left-hand side should now be more evident.
When noise removal is desired without changing the brightness of the bands,
the reconstruction is performed with the raw (unstretched) components,
including only those components that do not exhibit noise.
5.3.5 HLS Stretch
In section 4.2.2, we introduced the concept of color transformations. We
provide only the briefest summary here. If you need a refresher on
color space concepts, you should review Section 4.2.2, and possibly also refer
to a text such as Jensen (2016).
We saw in Section 4.2.2 that colors on a computer monitor are normally
specified in terms of red, green and blue (RGB values), and therefore this is a
convenient system for addressing many remote sensing problems. However,
in some instances, it is useful to work in the alternative hue, lightness, and
saturation (HLS) color system. The term hue is relatively intuitive and refers
to the characteristic tint. Lightness refers to the brightness, which extends
from black to white. Saturation refers to the purity of the color, so that low
saturation colors tend to gray, and have a typically pastel quality, whereas
high saturation colors are the purest colors.
It is important to understand that HLS and RGB are equivalent ways of
specifying color, and thus it is possible to move back and forth between
systems. In this exercise, we will transform an image between RGB and HLS
space. This will allow us to increase the saturation of the image, making the
colors much purer. Since we don’t alter the hue, the tints of the colors should
not change at all. This will help discriminate between the different lava flows
in the image, but because we don’t change the hues, the color tints will still
be useful for interpreting the different colors in the image.
◆ ◆ ◆
Convert three image bands from RGB to HLS space
Menu location: IDRISI Image Processing –
Transformation – COLSPACE
1. Start the COLSPACE program from the menu.
2. In the COLSPACE window, select the radio button for RGB
to HLS.
3. In the section for Input files, note that it is a little bit
confusing that TerrSet specifies the input files here in the
reverse order of that used for the COMPOSITE program,
namely as Red, Green, and then Blue.
4. Use the pick list button (…) next to the text box for Red
image band, to select the original band 5 TIMS image,
timsb5. Click OK to close the Pick List window.
5. Repeat the previous step to select timsb3 for the Green
image band.
6. Repeat the previous step to select timsb1 for the Blue
image band.
7. In the section for Output files, type the filename in the
textbox for Hue image band: hue.
8. In the textbox next to Lightness image band, type
lightness.
9. In the textbox next to Saturation image band, type
saturation.
10. Compare your completion of the text box options to
Figure 5.3.5.a.
11. Click OK. Unlike most other TerrSet programs,
COLSPACE does not automatically display an image when it
completes.
◆ ◆ ◆
Figure 5.3.5.a. The COLSPACE dialog box for RGB to HLS transformation.
Use the TerrSet DISPLAY LAUNCHER to view the saturation image
created by the COLSPACE program, as described below.
◆ ◆ ◆
Initial display of images
Menu: File – Display – DISPLAY LAUNCHER
1. Start the DISPLAY LAUNCHER from the main menu or
icon bar.
2. In the DISPLAY LAUNCHER window, double click in the
text box for the file name, and select saturation. Click OK in
the Pick List window.
3. Select a GreyScale palette.
4. Click on OK to display the image (Figure 5.3.5.b).
◆ ◆ ◆
Figure 5.3.5.b Saturation image of the TIMS data.
As shown by Figure 5.3.5.b, the saturation image is very dark, and although
the image is scaled over the range 0-255, most of the image values are in the
lower half of the DN range, as indicated by the radiometric scale bar. This
image therefore confirms what we had observed visually, namely that the
saturation of this image is very low. We can improve the image by stretching
the saturation, and then transforming the hue, stretched saturation, and
lightness (HLS) bands back to RGB. However, first we need to decide how
much to stretch the saturation, and we will do that by looking at the image
histogram.
◆ ◆ ◆
Display the image data distribution with HISTO
Menu Location: File – Display – HISTO
1. Start the HISTO program from the main menu or the
toolbar.
2. In the HISTO dialog box, click on the browse button (…)
next to the text box for the Input file name to select
saturation. Click on OK to close the Pick List window.
3. Set the class width to 1.
4. Leave the remaining parameters set at their default values.
5. Click on OK.
6. The histogram will appear in a new HISTOGRAM
window. In this new window, in the section labeled Mode,
click on the option for Cumulative. The graph will
automatically update (Figure 5.3.5.c).
◆ ◆ ◆
Figure 5.3.5.c Histogram of the saturation image, in Cumulative mode.
From Figure 5.3.5.c, we can see that the majority of the DN values are less
than 75. However, to increase the saturation to give an even clearer image,
we arbitrarily select 50 as the maximum for the scaling. This means that any
saturation value of 50 or more will be scaled to the maximum, 255. From the
graph we can see this will still leave about 75% of the image with saturation
values less than the maximum of 255.
There are a variety of ways to do the rescaling of the saturation values. One
simple way is through the program STRETCH. The concept of a stretch
operation should be familiar to you, as a stretch operation is typically used in
displaying data. This is done so that the image brightness levels are shown in
an optimal manner on the screen (Chapter 2). The type of stretch applied in
displaying an image is normally temporary, and does not change the original
file values. For this exercise, however, we need to create a new file with the
stretch applied permanently.
◆ ◆ ◆
Rescaling the DN values of an image with STRETCH
Menu location: IDRISI Image Processing – Enhancement
– STRETCH
1. Use the main menu to start the STRETCH program.
2. In the STRETCH window, click on the Pick List button (…)
next to the Input image text box to select the image
saturation. Click OK to close the Pick List window.
3. In the text box next to Output image, enter sat_str.
4. Check the box for Specify an upper bound other than
maximum.
5. In the text box that will open to the right of the Specify an
upper bound other than maximum, enter 50.
6. Accept the remaining defaults (Figure 5.3.5.d), and Press
OK.
◆ ◆ ◆
Figure 5.3.5.d The STRETCH dialog box.
We can now confirm that the data have indeed been stretched by running the
HISTO program once again, using sat_str as the input file. See if you can
complete this without instructions. If you do have problems, return to the
HISTO instructions earlier in this Section. Figure 5.3.5.e shows the output
you should obtain.
Figure 5.3.5.e Histogram of the sat_str image, in Cumulative mode.
In comparing the two histograms shown in Figures 5.3.5.c and 5.3.5.e, you
should note that in the latter figure only a small portion of the image has low
DN values, and approximately 25% of the image has the maximum DN
value. This confirms that the DN values in the sat_str image are now much
higher with the stretch applied.
Therefore, we are now ready to transform the HLS data back to RGB space.
◆ ◆ ◆
Transform the HLS images back to RGB images
Menu location: IDRISI Image Processing –
Transformation – COLSPACE
1. Start the COLSPACE program from the menu.
2. In the COLSPACE window, select the radio button for HLS
to RGB. (This is the default if you have just opened the
program window.)
3. In the section for Input files, double click in the text box for
Hue Image Band, to select hue. Click OK to close the Pick
List window.
4. For the Lightness Image Band, select lightness.
5. For the Saturation Image Band select sat_str (i.e. the
stretched saturation image).
6. In the section for Output files, in the textbox next to Red
image band, enter b5_str.
7. In the textbox next to Green image band, enter b3_str.
8. In the textbox next to Blue image band, enter b1_str.
9. Compare your completion of the text box options to Figure
5.3.5.f.
10. Press OK. As before, the program will not automatically
display the output file.
◆ ◆ ◆
Figure 5.3.5.f COLSPACE dialog box for the reverse transformation of HLS
to RGB.
The final step is to create a false color composite with the three new
saturation-stretched files with the program COMPOSITE.
◆ ◆ ◆
Create a color composite image
Menu Location: File – Display – COMPOSITE
1. Start the COMPOSITE program using the main menu or
tool bar.
2. In the COMPOSITE dialog box, specify the file name for
the Blue image band b1_str. Click on OK to close the Pick
list.
3. Specify the Green image band as b3_str.
4. Specify the Red image band as b5_str.
5. Enter the Output image filename in the text box provided:
135sat_str.
6. Accept all other defaults, and click OK to create and
display the false color composite (Figure 5.3.5.g).
◆ ◆ ◆
Figure 5.3.5.g TIMS false color composite of saturation-stretched data.
Band 1 as blue, band 3 as green and band 5 as red.
Compare the false color composite made with the stretched saturation,
135sat_str (Figure 5.3.5.g), with the false color composite made from the original data,
135fcc (Figure 5.3.2.a), by redisplaying the latter image if necessary. In
comparing the two images, evaluate whether the hues are indeed unchanged
in the stretched saturation image. You should also compare these two images
to the decorrelation image, 135decor (Figure 5.3.4.e). Of the three images,
which is the best for interpreting the lava flows?
In doing this exercise, you may have noted that in applying the stretch to the
saturation image, we selected an arbitrary cut-off of 50 DN. You may want to
experiment with values of 10 (an extreme stretch) and 100 (a rather mild
stretch), to see whether a different value might give you better results.
5.4 Segmenting and Density Slicing Images for
Advanced Display
For some applications, it is useful to be able to use non-standard display
options. In this section, we will therefore explore some alternative ways of
displaying an image, including masking of features not of interest, and
density slicing.
5.4.1 Preparation
In Section 5.2 you should have already downloaded the data. Specifically, we
will work with the data in the folder Chap5\Chap5_4\, which, like the folder
for Chapters 1-4, contains ETM+ Hong Kong imagery. However, the area we
will study in this section is the Pearl River Estuary.
To create a more manageable data size, the original image, with its 30 meter
pixels, has been degraded by pixel averaging to produce 90 meter pixels. If
you would prefer to work with the original 30 meter pixel data, the original
30 meter data are available in the folder Chap5\Chap5_4alt\. If you do use
this alternative data set, in the instructions that follow, set the subfolder to
this alternative location. When prompted to use files that have names such as
hk_etm_b1, substitute the name of the files in the new directory, e.g.
hk_etm_large_b1, etc.
Before starting you should close any dialog boxes or displayed images in the
TerrSet workspace. Now, set the Project and Working Folders for this new
data, as described below.
◆ ◆ ◆
Create a new project file and specify the working folders
with the TerrSet EXPLORER
1. Start the TerrSet EXPLORER from the toolbar, or by
clicking on the (+) sign in the vertical tab located in the top
left corner of the TerrSet workspace.
2. The TerrSet EXPLORER window will open on the left side
of the TerrSet workspace.
3. Select the Projects tab.
4. Right click within the Projects pane, and select the New
Project Ins option.
5. A Browse For Folder window will open. Use this window
to navigate to the RSGuide folder, which is the folder you
created on your computer for the data for this manual. Now
navigate to the Chap5_4 subfolder, within the
RSGuide/Chap5 folder.
6. Click OK in the Browse For Folder window.
7. A new project file, Chap5_4, will now be listed in the
Project pane of the TerrSet EXPLORER. The working folder
will also be listed in the Editor pane.
8. Note that you can switch between the Chap5_3 project and
this new Chap5_4 project by selecting the appropriate radio
buttons in the Project pane of the TerrSet EXPLORER.
9. Minimize the TerrSet EXPLORER by clicking on the (-
) in the upper left corner of the TerrSet EXPLORER window.
◆ ◆ ◆
On starting a new project, it is always good to look at the data. We will
therefore create a standard color composite, with simulated natural colors.
We will increase the percentage of pixels that are saturated at the extremes
of the brightness range, in order to increase the image contrast.
◆ ◆ ◆
Create a color composite image
Menu Location: File – Display – COMPOSITE
1. Start the COMPOSITE program using the main menu or
tool bar.
2. In the COMPOSITE dialog box, double click in the text
box for the Blue image band. The Pick List will open
automatically. Double click on hk_etm_b1 to select the blue
band (band 1).
3. Repeat this selection process to specify the Green image
band as hk_etm_b2.
4. Specify the Red image band as hk_etm_b3.
5. Enter the Output image filename in the text box provided:
hk123.
6. Change the value of the Percent to be saturated from each
end of the grey scale from 1.0 to 7.5.
7. Accept all other defaults, and click OK to create and
display the simulated natural color composite.
◆ ◆ ◆
Notice how the pattern of sediment in the Pearl River Estuary is apparent in
this image (Figure 5.4.1.a).
Figure 5.4.1.a Simulated natural color composite of the Pearl River Delta,
using ETM+ bands 1, 2 and 3 as blue, green and red.
5.4.2 Developing a Land Mask
In order to explore further the patterns in the water, it would be useful to
develop a mask, so that we can ignore the land. A straightforward method of
developing this mask is to use a threshold value in ETM+ Band 5 (1.55-1.75
μm). Water absorbs strongly in the mid infrared, and therefore we will
assume that all dark pixels in the mid infrared band are water. The only
difficult step is to choose the value of the threshold between water and land.
In a tidal area, the boundary is obviously a zone, not an absolute line. In
addition, waterlogged soils will tend to have similar spectral characteristics
to water.
In the next step we will modify the image palette file, in order to display the
image DN values around the potential threshold very clearly. Specifically, we
want a clear idea of the area that would be included in the mask for each
potential threshold value.
◆ ◆ ◆
Displaying an image with an alternative palette file
Menu Location: File – Display – DISPLAY LAUNCHER
1. Start the DISPLAY LAUNCHER from the main menu or
icon bar.
2. In the DISPLAY LAUNCHER window, double click in the
text box for the file name, and double click on hk_etm_b5 in
the Pick List window that opens automatically.
3. Click on the tab for Advanced Palette/Symbol Selection.
This will open up additional options in the dialog box.
4. Click on the brightly colored color ramp in the bottom right
column (Figure 5.4.2.a). The palette file’s name, RADAR, will
be displayed in the data entry line labeled Current Selection.
5. Accept all the remaining default options.
6. Click on OK to display the image (Figure 5.4.2.b).
◆ ◆ ◆
Figure 5.4.2.a. The DISPLAY LAUNCHER, with the Advanced
Palette/Symbol Selection tab and the RADAR palette.
Figure 5.4.2.b ETM+ Band 5 image with the RADAR palette file. The Layer
properties icon in the Composer window is indicated by the arrow.
Note that in this rendition of the Band 5 image (Figure 5.4.2.b), the water is
generally shown in cool colors, such as blue. We will now adjust the range of
DN values over which the color ramp is applied, in order to select a precise
threshold that discriminates between land and water.
◆ ◆ ◆
Adjusting the thresholds for color ramp display in the
Composer Window
1. Find the Composer window, which is automatically opened
whenever an image is displayed.
2. In the Composer window, select Layer Properties (see
Figure 5.4.2.b). The Layer Properties dialog box will open.
3. Slowly adjust the Display max slider to successively lower
positions. Observe how, when you slide the pointer to lower
values, more and more of the land area of the image is
displayed as white. Stop moving the slider when you reach a
DN value of between 18 and 22.
4. Now use the legend and the colors in the image to help you
choose an optimum DN threshold that differentiates between
the water and land. Specifically, we want a value below
which most pixels are water. (For example, 18 might seem a
good value.)
5. Make a note of the value you selected for the threshold.
◆ ◆ ◆
Now that we have selected the threshold, we need a mechanism to apply the
threshold so as to assign all pixels that have a value below the threshold as
water, and all those above the threshold as land. There are at least two ways
to do this in TerrSet. One way is to use the program RECLASS, which is
available from the main menu from IDRISI GIS Analysis – Database Query –
RECLASS. However, we will use an alternative method, using the IMAGE
CALCULATOR, a particularly powerful tool with broad application to image
analysis.
◆ ◆ ◆
Creating a land mask with the IMAGE CALCULATOR
Menu location: IDRISI GIS Analysis – Mathematical
Operators – IMAGE CALCULATOR
1. Use the main menu or the icon tool bar to open the IMAGE
CALCULATOR.
2. In the IMAGE CALCULATOR window, click on the radio
button for Operation Type: Logical Expression.
◆ ◆ ◆
Before continuing, we will stop and take a moment to familiarize ourselves
with the IMAGE CALCULATOR (Figure 5.4.2.c). The interface for this
program is based on the concept of a hand calculator. However, unlike a hand
calculator, the IMAGE CALCULATOR can operate on images.
Figure 5.4.2.c IMAGE CALCULATOR window.
The top part of the IMAGE CALCULATOR window has radio buttons for
specifying whether you wish to create a Mathematical expression or a
Logical expression. (The latter option, Logical expression, is indicated by the
upper arrow in Figure 5.4.2.c.) Below the radio buttons are two text boxes, on
the left for the Output file name, and the right for the Expression to process.
Developing an expression, or formula, is easy using the buttons in the large,
main area below the two text boxes. In particular, the Insert image button
(indicated by the lower arrow in Figure 5.4.2.c) is used to place an entire
image in the expression. There are additional buttons for logical
operations. At the bottom of the window are some basic commands for
processing, saving, and opening expressions.
◆ ◆ ◆
Creating a land mask with the IMAGE CALCULATOR
(cont.)
3. In the Output file name text box, enter the file name
landmask.
4. Click on the button for Insert Image.
5. A Pick List will open. Double click on hk_etm_b5.
6. The Expression to Process text box will now contain the
image name in square brackets: [hk_etm_b5].
7. Click on the button for the less than sign (“<”), and then
enter the threshold you determined in the previous step. Your
equation should now look something like the following:
[hk_etm_b5]<18.
8. Figure 5.4.2.d shows the resulting IMAGE
CALCULATOR expression.
9. Click on the Process Expression button.
◆ ◆ ◆
Figure 5.4.2.d IMAGE CALCULATOR with a threshold expression.
TerrSet will automatically display the processed land mask image (Figure
5.4.2.e). The image has values of 0 for land, and 1 for water. At this stage, if
you decide, on reviewing the image, that your threshold was too high, and
there is too much land classified as water, then it is a simple step to change
the value in the IMAGE CALCULATOR expression, and recreate the
file. Likewise, if you decide that the threshold value was too low, and there is
too much land where there should be water, then you can adjust the threshold
to a higher number.
Figure 5.4.2.e The land mask image.
5.4.3 Displaying Patterns in Water
In this section we will use our mask to suppress all land area pixels, leaving
only the water pixels for an image of water patterns. We will then apply a
look-up table (palette file) that highlights patterns of sediment in the water in
the Pearl River Estuary, and the surrounding lakes.
The land mask, created in the previous section, has values of 1 in the areas
interpreted to be water, and 0 elsewhere. We therefore can apply the mask
simply by multiplying the land mask by a selected image band.
◆ ◆ ◆
Applying the land mask with OVERLAY
Menu location: IDRISI GIS Analysis – Mathematical
Operators – OVERLAY
1. Start the OVERLAY module from the main menu or the
tool bar.
2. In the OVERLAY dialog box, double click in the text box
for the First Image, and then in the automatically opened Pick
List, double click on hk_etm_b3.
3. Double click in the text box for the Second Image, and
select the land mask image, landmask.
4. In the window for the Output image, enter b3_landmask.
5. In the Overlay option section of the dialog box, select the
radio button for the option for First * Second (i.e. first times
second images).
6. Click on OK to process the overlay operation, and also
display the image.
◆ ◆ ◆
The masked image, with a color ramp applied by TerrSet automatically
(Figure 5.4.3.a), shows the complex sediment patterns in the bay very clearly.
Higher DN values, shown in oranges and reds, indicate more sediment-laden
water, or shallower water. Lower values, shown in greens, indicate
clearer, deeper water.
The ETM+ thermal band also shows interesting patterns in the water. We can
follow the same procedure as with the red band to mask the thermal band.
Try doing this on your own using the OVERLAY module. If you have trouble,
follow the instructions below.
Figure 5.4.3.a ETM+ band 3 (red), with land masked out and color ramp
applied.
◆ ◆ ◆
Applying the land mask with OVERLAY
Menu location: IDRISI GIS Analysis – Mathematical
Operators – OVERLAY
1. Start the OVERLAY program from the main menu or the
tool bar.
2. In the OVERLAY dialog box, double click in the text box
for the First Image, and then in the automatically opened Pick
List, double click on hk_etm_b6.
3. Double click in text box for the Second Image, and select
the land mask image, landmask.
4. In the window for the Output image, enter b6_landmask.
5. In the Overlay option section of the dialog box, select the
radio button for the option for First * Second (i.e. first times
second images).
6. Click on OK to process the overlay operation, and also
display the image (Figure 5.4.3.b).
◆ ◆ ◆
Figure 5.4.3.b ETM+ band 6 (thermal) with land masked out.
This time, the image that is created (Figure 5.4.3.b) appears to be dominated
by just one color in the water, namely red. You might therefore think that
there is no thermal variation in the water. However, you may notice from the
legend that the color ramp has been applied from 0, and that red
encompasses a range of values. Therefore, if we select a higher value than 0
for the lower end of the color ramp, we may well see differentiation in the
water. Note that a similar issue applied to the red band data, processed
earlier. In that case, however, the issue wasn’t as noticeable, since the
maximum value was so much lower.
We will therefore adjust the thresholds for the display, just as we did in
Section 5.4.2.
◆ ◆ ◆
Adjusting the thresholds for color ramp display in the
Composer Window
1. Find the Composer window, which is automatically opened
whenever an image is displayed.
2. In the Composer window, select Layer Properties. The
Layer Properties dialog box will open.
3. Slowly adjust the Display min slider to successively higher
positions. Observe how patterns in the water appear and vary
as you move the slider.
4. Slowly adjust the Display max slider to successively lower
positions.
5. The optimal values for Display Min and Display Max are
somewhat arbitrary. However, one set of values that gives a
good visual representation of the patterns is 121 and 150,
respectively (Figure 5.4.3.c). You can manually enter these
values in the relevant text boxes, and click on Apply.
◆ ◆ ◆
Figure 5.4.3.c ETM+ band 6 (thermal) with land masked out and modified
color ramp. White arrows point to thermal pollution sources, and a black
arrow to noise in the data.
The masked thermal image with the adjusted display range (Figure 5.4.3.c)
looks very different compared to the original display of the masked data
(Figure 5.4.3.b). This reminds us that the nature of the display stretch can be
very important in interpreting an image.
The thermal image also has many interesting features, and shows patterns not
evident in the red band image (Figure 5.4.3.a). A major source of thermal
pollution is evident as a dark red (high DN value) plume extending from the
island in the bottom right corner of the image (indicated by a white arrow in
Figure 5.4.3.c). The source for this warm water is the discharge from a power
station. A second major plume, also indicated by a white arrow, is evident
where the main channel of the Pearl River becomes much narrower, at the top
of the image. With careful examination, additional, smaller plumes can be
made out at other locations along the coastline.
Note also that there is some noise in the data, as shown by the narrow line of
warm temperatures that crosses the bay. This feature is indicated by a black
arrow in Figure 5.4.3.c.
5.4.4 Density Slicing Landsat Band 3
Sometimes it may be useful to summarize the complexity of the multiple DN
values into just a few discrete classes, a process termed density
slicing. Density slicing can be conceptualized as a simple type of
classification, using just one band of data.
Figure 5.4.4.a RECLASS dialog box with options for Equal-interval reclass.
◆ ◆ ◆
Density slicing an image with RECLASS
Menu location: IDRISI GIS Analysis – Database Query –
RECLASS
1. Start the RECLASS module from the main menu or
toolbar.
2. The RECLASS dialog box will open.
3. Double click in the text box for Input file, and double click
on b3_landmask to select that file.
4. Enter a name for the Output file: b3_densityslice.
5. In the Classification type section of the RECLASS dialog
box, click on the radio button for Equal-interval reclass. The
controls in the dialog box will be changed automatically to
reflect this option.
6. Change the Minimum value to consider to 27.
7. Leave the Maximum value to consider at the default (105 in
this case).
8. Click on the Number of classes radio button, and enter 15
in the text box that opens to the right.
9. Figure 5.4.4.a shows the dialog box, with the parameters
specified.
10. Click on OK.
◆ ◆ ◆
In the above instructions, you were given a minimum value of 27 to enter,
rather than the default of 0. You can verify that this is a good choice by
examining the b3_landmask histogram, available with the program HISTO.
Once the RECLASS module has completed, the image will be displayed
automatically (Figure 5.4.4.b). In comparing this image to the original
masked band 3 data (b3_landmask, shown in Figure 5.4.3.a), you should note
that the range of the color ramps is different. In addition, the density slicing
has resulted in discrete steps in the colors, instead of the appearance of a
smooth surface.
Figure 5.4.4.b Density-sliced and land-masked ETM+ band 3 data.
5.4.5 Combination False Color Composite
for Land and Water
Occasionally it may be useful to create a false color composite that uses a
different band combination for different parts of the image. For example, a
band combination that is good for the land is not always good for water,
which has very different spectral properties. Therefore, for an effective
display, it may be useful to use one band combination for the water, and
another for the land. Water quality patterns are most apparent in the visible
wavelengths, because water’s peak transmissivity is in the green part of the
electromagnetic spectrum. However, land cover materials, especially
vegetation, benefit from a combination of visible, near and shortwave
infrared wavelengths. Therefore, for this exercise we will create a
combination of Landsat bands 1, 2 and 3 (i.e. visible wavelengths) for water
areas, and bands 3, 4 and 5 (i.e. red, near infrared and shortwave infrared) for
the land.
It is important to note that combination false color composites such as the one
we are creating here should be used with caution. A combination false color
composite may be misleading to users who are not aware of the processing
history of the image. In addition, unless the user has a reference image that
shows the band assignments for each part of the image, even an experienced
user might be misled by such a product.
The steps involved in this exercise are fairly numerous. Therefore, we will
take advantage of a very powerful tool in TerrSet: the MACRO MODELER.
The MACRO MODELER has already been explained in some detail in
Section 4.2, and if this section seems confusing to you, you should review
that material.
For this exercise, we will see an example of the use of the Macro Modeler as
a way to employ “canned” models, which have been prepared previously.
◆ ◆ ◆
Adapting and running a previously created MACRO
MODELER model
Menu location: IDRISI GIS Analysis – Model Deployment
Tools – MACRO MODELER
1. Start the MACRO MODELER from the main menu or
main icon bar.
2. The MACRO MODELER graphical interface will open.
3. In the MACRO MODELER window, click on the Open icon
(second from left), or use the MACRO MODELER menu: File
– open. (Note that if the MACRO MODELER window is
highlighted, and you put your cursor over an icon, the icon
name is shown.)
4. A Pick List window will open. Double click on
segment_composite to select this file.
◆ ◆ ◆
The model will be shown automatically in the MACRO MODELER window
(Figure 5.4.5.a) as a series of linked icons. Although discussed extensively in
Section 4.2, we provide a short review here.
Within the MACRO MODELER window, the purple squares represent data
layers, and the pink parallelograms are TerrSet program modules. The dark
blue arrows indicate the input and output for the processes. Note that the
model we have opened is missing one input data layer, as well as a composite
generation module and a final output data layer, all of which we will provide.
Figure 5.4.5.a MACRO MODELER, showing the model when first opened.
Models with multiple inputs, outputs and modules, such as the
segment_composite model, can be a bit daunting to interpret at first. However,
this model is actually quite simple. The input files are all on the left, and
comprise five of the seven Landsat bands. In the model, the five bands are
each stretched. The first three are then each processed through an overlay
operation. This is followed by a second set of overlay operations.
Note that ETM+ band 3 is linked as input for two different stretch modules.
This demonstrates that it is possible for one file to serve as input for multiple
modules.
◆ ◆ ◆
Adapting and running a previously created model (cont.)
5. In the MACRO MODELER window, click on the Raster
Layer icon.
6. The Pick List window will open. Select LANDMASK, a
file created in Section 5.4.2. (This image has DN values of 1
for the water areas, and 0 for land areas.)
7. A new raster layer, indicated by a purple rectangle and
labeled LANDMASK, will appear in your model.
8. Use the mouse to drag the landmask raster layer to a
position above the second column of input files (i.e. above
raster layer tmp000).
9. Click on the Connect icon (a blue bent arrow). Now click
on the landmask raster layer, and, keeping the mouse button
depressed, move the mouse to the top of the first Overlay
module. Remove your finger from the mouse button. This will
connect landmask to the Overlay function (Figure 5.4.5.b).
◆ ◆ ◆
Figure 5.4.5.b MACRO MODELER with the Landmask raster layer
added. Arrow points to the new link to the Overlay function.
◆ ◆ ◆
Adapting and running a previously created model (cont.)
10. Repeat the previous step two more times, connecting the
landmask raster layer to the two Overlay modules below the
first one. The result will be that the land mask will be
connected to all three Overlay modules.
11. The connection procedure described above automatically
enters the land mask as the second layer in each of the
Overlay module functions. To confirm that this has indeed
taken place, do the following: Place the mouse over the first
Overlay module (the one used in step 9, above). Right click.
A Parameters: Overlay window will open (Figure 5.4.5.c),
which is essentially a table showing the processing
parameters for Overlay. Confirm that the Second input image
is specified as landmask. You should also confirm that in the
bottom line of the window the overlay Operation parameter is
specified as Multiply.
12. Click on OK to close the Parameters: Overlay window.
◆ ◆ ◆
Figure 5.4.5.c Parameters: Overlay window showing Landmask as the
second input image.
Figure 5.4.5.d MACRO MODELER with final model.
◆ ◆ ◆
Adapting and running a previously created model (cont.)
13. Make the MACRO MODELER window a little larger, by
dragging the lower frame down a short way.
14. Now add a new module to the model, by clicking on the
Module icon.
15. A Pick List window will open. Double click on
Composite.
16. A new module, labeled Composite, will appear in the
model.
17. Use the mouse to drag the Composite module and its
output raster layer to the bottom right corner of the MACRO
MODELER window.
18. Click on the Connect icon. Click on raster layer tmp009,
and keeping the mouse button depressed, move the mouse
until you are over the Composite module. Remove your finger
from the mouse button. The tmp009 raster layer should now
be connected as one of the three inputs for the composite
Module.
19. Repeat the previous step to connect tmp010 as the second
input to the composite module.
20. Repeat again to connect tmp011 as the third input to the
composite module.
21. Change the output filename of the Composite module
from the default name (which will begin with temp, and is
followed by 3 numbers) by right-clicking with the mouse on
that raster layer icon.
22. A Change Layer Name window will open. Enter the new
file name: segment_fcc.
23. The resulting model is shown in Figure 5.4.5.d.
24. Save the model by clicking on the Save icon.
25. Run the model by clicking on the Run icon.
26. When prompted “The layer tmp007 will be overwritten if
it exists. Continue?”, click on Yes to All.
27. As the model runs, you can track its progress by
observing which module is highlighted in green.
◆ ◆ ◆
When the program is complete, the image will be displayed automatically
(Figure 5.4.5.e). The image only has meaning within the context of the land
mask image, landmask (Figure 5.4.2.e). You should compare the image to a
regular false color composite, such as hk123, to decide if you feel this
combination false color composite image does indeed provide more
information.
Figure 5.4.5.e Combination false color composite. Water: bands 3,2,1
(R,G,B). Land: 5,4,3 (R,G,B).
In the first part of the macro (Figure 5.4.5.f box 1), the blue, green and red
bands are stretched linearly using a 7% saturation (meaning that the lowest
7% and highest 7% of the histogram values will be set to 0 and 255,
respectively). This saturation was specified to highlight variability in the
water. Then, these stretched images are multiplied by the land mask in order
to extract only the values within the water (Figure 5.4.5.f box 2).
Land is stretched independently from water using a 2% saturation (Figure
5.4.5.f box 3). For land, the red, near infrared and shortwave infrared bands
are used, as they highlight vegetation features better than the visible bands.
Finally, land and water are combined using a cover operation (the first image
covers the second except where it is zero), and the color composite is created
(Figure 5.4.5.f box 4).
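For readers who prefer to see the model's logic in code form, here is a minimal NumPy sketch of one display channel, assuming the input bands are already loaded as arrays. The array names are hypothetical, and the sketch illustrates the logic of the macro rather than TerrSet's internal implementation:

```python
import numpy as np

def percent_stretch(band, pct):
    # Linear stretch to 0-255, saturating pct% of pixels at each tail
    lo, hi = np.percentile(band, [pct, 100 - pct])
    return np.clip((band - lo) / (hi - lo) * 255.0, 0, 255)

# Hypothetical inputs: water_band and land_band are the two bands used
# for one display channel; landmask holds 1 for water and 0 for land.
water = percent_stretch(water_band, 7) * landmask  # Overlay: Multiply
land = percent_stretch(land_band, 2)               # independent land stretch
# Overlay: Cover -- the first image covers the second except where zero
channel = np.where(water != 0, water, land)
```

The same pattern is repeated for the other two channels before the three results are combined into the color composite.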
Figure 5.4.5.f MACRO MODELER with final model and explanation
boxes.
CHAPTER 6
IMAGE RATIOS
6.1 Introduction to Image Ratios
In this chapter, we will explore four different applications of band ratios. We
will start with one of the most common ratios, the vegetation index. We will
then look at how a ratio might be designed to separate snow from clouds. In
the later sections of the chapter we will look at ratios in a more in-depth
fashion: we will design three ratios to highlight different mineral
compositions, and use those ratios in a false color composite. Finally, we will
explore ratios designed to identify burnt areas and vegetation moisture.
6.2 Download Data for this Chapter
Starting with this chapter, we will use different data sets for each chapter, and
even for different sections within each chapter. Thus, in this chapter we will
work with four different image sets, for the vegetation, snow, rock, and burn
and water ratios. Therefore, if you have not done so already, download the
data from the Clark Labs’ website for Chapter 6 and place it into a new
subfolder within the \RSGuide folder on your computer.
Note: Section 1.3.1 provides detailed instructions on how to download the
data. That section also describes the procedure for setting up the RSGuide
folder on your computer.
6.3 Vegetation Indices
6.3.1 Background
6.3.1.1 The normalized difference vegetation index (NDVI)
Some enhancement techniques help the analyst explore a data set, and do not
require any preconceived ideas about the potential spectral properties of the
objects in the scene. Sometimes, however, the analyst would like to enhance
a specific land cover, such as vegetation. There is a very long history in the
remote sensing literature of using ratios as a method of estimating vegetation
abundance (i.e. biomass estimates) and greenness associated with the
seasonal cycle of deciduous vegetation.
Green vegetation is particularly well suited for spectral enhancement because
the pigment, chlorophyll, absorbs blue and red wavelengths strongly, and
thus the reflectance of leaves at these wavelengths is very low (Figure
6.3.1.1). On the other hand, leaves typically reflect strongly at near infrared
wavelengths, providing a very strong contrast between the red and near
infrared. It is this contrast that the vegetation ratios capitalize on. In this
section, we will see how a contrast in spectral properties between two
different wavelengths provides a much more reliable method of
distinguishing a particular land cover than just a single wavelength does.
Figure 6.3.1.1 Graph of the spectral reflectance of vegetation and soil, with
the locations of Landsat TM bands 3 (red) and 4 (near infrared) shown
(equivalent to Landsat OLI bands 4 and 5).
The ratio we use is the normalized difference vegetation index, or NDVI.
This ratio is defined:
NDVI = (Near Infrared – Red) / (Near Infrared + Red).
NDVI is a variant of the simple ratio of Near Infrared/Red. However, by
constructing the ratio as the difference of the two wavelengths over the sum
of the two wavelengths, NDVI is normalized, so that the range falls between
-1 and +1, and the middle value is zero. Furthermore, NDVI is designed such
that high values of the ratio (close to +1) indicate abundant green vegetation,
and values near zero or less indicate an absence of vegetation.
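As a concrete illustration of the formula, here is a minimal NumPy sketch of the NDVI calculation, assuming the red and near infrared bands are already loaded as arrays. The array names are hypothetical; within TerrSet, the VEGINDEX module used in Section 6.3.4 performs this calculation for you:

```python
import numpy as np

def ndvi(nir, red):
    # Normalized difference vegetation index, range -1 to +1
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    total = nir + red
    # Guard against division by zero in background (all-zero) pixels
    return np.where(total == 0, 0.0, (nir - red) / total)
```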
6.3.1.2 Background on the data used in this exercise
The data for this exercise are from the National Oceanic and Atmospheric
Administration (NOAA) Advanced Very High Resolution Radiometer
(AVHRR). The AVHRR sensor has been flown on NOAA satellites since the
1970s, and therefore there is a large archive of AVHRR data, providing a
very rich source of information for multi-temporal studies. AVHRR data are
usually classified as coarse resolution, as the nominal pixel size at nadir is 1.1
kilometer. However, the sensor has a relatively broad field of view
(approximately 2400 kilometers), thus facilitating the construction of global
image mosaics. Although different AVHRR sensors have had slightly
different band combinations over time, all AVHRR sensors have included a
red band (band 1, which measures 0.58-0.68 μm radiation) and a near
infrared band (band 2, 0.72-1.10 μm), in addition to channels in the 3-5 μm
and 8-12 μm ranges. For this exercise, we will only work with AVHRR
bands 1 and 2.
In continental or global scale analyses of the Earth's land surface, the presence of
clouds is a particular problem, because it is extremely rare that cloud free
scenes are obtained over large regions. However, with coarse resolution data
there is a simple solution that draws on the short revisit time of the sensor. In
essence, the procedure is to overlay a large number of images acquired within
a short period of time, typically one to four weeks. This multi-temporal stack
of images is queried to find the image with the lowest amount of cloud, for
each pixel independently. This information is then used to assemble a single
multi-temporal composite image that is cloud-free, or as nearly cloud-free as
can be achieved from the input data.
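The per-pixel selection logic of such compositing can be sketched in a few lines of NumPy. This sketch is purely illustrative: the array names and the cloudiness score are assumptions, and operational AVHRR composites often rank the candidate dates by other criteria, such as maximum NDVI:

```python
import numpy as np

# Hypothetical inputs: stack has shape (n_dates, rows, cols), one layer
# per acquisition date; cloud_score has the same shape, lower = clearer.
best_date = np.argmin(cloud_score, axis=0)  # clearest date for each pixel
rows, cols = np.indices(best_date.shape)
composite = stack[best_date, rows, cols]    # assemble the composite image
```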
The data for this exercise comprise two AVHRR images of Africa. The data
are multi-temporal mosaics representing the months of February and July
2001. The data have been resampled to a 20 kilometer grid, to make the file
size more manageable, and reduce noise.
6.3.2 Preparation
Start TerrSet.
We will now create a new project file and specify the working folders for that
project. If you find the instructions below too brief, you may need to review
Section 1.3.4, which provides greater detail. However, just to remind you, the
project file is the file used by TerrSet to keep track of the data locations for a
particular exercise. The working folder is the specific folder where your data
are found. Additional resource folders can also be specified.
◆ ◆ ◆
Create a new project file and specify the working folder
with the TerrSet EXPLORER
1. Start the TerrSet EXPLORER from the toolbar, or by
maximizing the vertical tab by clicking on the (+) sign
located in the top left corner of the TerrSet workspace.
2. In the TerrSet EXPLORER window, select the Projects tab.
3. If the Editor pane obscures the listing of project files, drag
the boundary of the Editor pane down, to show the Projects
pane (Figure 6.3.2.a).
4. Right click within the Projects pane, and select the New
Project Ins option.
5. A Browse For Folder window will open. Use this window
to navigate to the RSGuide folder, which is the folder you
created on your computer for the data for this manual. Now
navigate to the Chap6_3 subfolder, within the
RSGuide/Chap6 folder.
6. Click OK in the Browse For Folder window.
7. A new project file, Chap6_3, will now be listed in the
Project pane of the TerrSet EXPLORER. The working folder
will also be listed in the Editor pane (Figure 6.3.2.b).
8. Minimize the TerrSet EXPLORER by clicking on the (-) in
the upper left corner of the TerrSet EXPLORER window.
◆ ◆ ◆
Figure 6.3.2.a Editor pane obscuring the Projects pane. Arrow points to the
boundary of the Editor pane.
Figure 6.3.2.b Projects and Editor panes visible, new project specified.
6.3.3 Exploratory investigation of the
AVHRR data of Africa
◆ ◆ ◆
Initial display and enhancement of AVHRR images
1. Start the DISPLAY LAUNCHER.
2. In the DISPLAY LAUNCHER window, click on the browse
button (…) to select the file name for display.
3. Select the feb_b1 image (the AVHRR red band) from the
Pick list window, and click on OK to close that window.
4. In the DISPLAY LAUNCHER window, select a GreyScale
palette.
5. Click on OK to display the image.
6. Find the Composer window in the TerrSet workspace. In
the Composer window, click on the Layer Properties button.
7. In the Layer Properties window, adjust the Display Max
slider so that the image has more contrast, and the pattern in
the Sahara Desert (North Africa) is clearer. A value of 675
appears to provide a good contrast (Figure 6.3.3.a).
8. In the Layer Properties window, click Apply, Save, and
then OK.
9. Now repeat the above steps 1-5 to display in another
viewer the feb_b2 image (the near infrared band), also with a
GreyScale palette (Figure 6.3.3.a). This image has better
contrast, and does not appear to need the additional steps to
specify a greater contrast.
◆ ◆ ◆
Figure 6.3.3.a AVHRR data of Africa, February 2001. Left: Band 1
(red). Right: Band 2 (near infrared).
The dark region across central Africa in the Band 1 (red radiance) image is
dense tropical vegetation (Figure 6.3.3.a). This dark region is associated with
relatively high Band 2 (near infrared radiance) values, a common attribute of
vegetation. We might therefore think that we could simply use high values in
Band 2 to identify vegetation. However, this will not work, because the
Sahara Desert, an area of very little vegetation, also has particularly high
values in the near infrared band. Clearly, we need a combination of the red
and near infrared bands to identify vegetation.
One simple way of combining two bands is to create a false color
composite. The creation of a false color composite was introduced in detail in
Section 2.2.4, and that section should be consulted if you find the instructions
too brief here.
◆ ◆ ◆
Create a false color composite image
Menu Location: File – Display – COMPOSITE
1. Start the COMPOSITE program using the main menu or
tool bar.
2. In the COMPOSITE dialog box, specify the file name for
the Blue image band by clicking on the adjacent browse
button (…), and in the resulting Pick list selecting feb_b1.
Click on OK to close the Pick list.
3. Repeat the previous step for the Green image band,
specifying the feb_b2 image.
4. Again, repeat the previous step for the Red image band,
this time specifying the feb_b1 image (i.e. the same as for the
Blue image band).
5. Enter the Output image filename in the text box provided:
feb_fcc.
6. Accept all other defaults, and click OK to create and
display the false color composite (Figure 6.3.3.b).
◆ ◆ ◆
Figure 6.3.3.b Africa February AVHRR false color composite image.
In the above steps, we are forced to use one of the two bands twice because
we only have two AVHRR bands, instead of the three we need for a false
color composite. Nevertheless, the results are quite impressive, especially
compared to the original black and white images. You should be able to
discern a very distinct east-west boundary between the lush green along the
West African coast, and the dry interior, transitioning to the very arid Sahara
Desert. This transition region is known as the Sahel.
However, although the false color composite is very useful to look at, it
retains a great deal of detail that is not relevant to vegetation, including
patterns associated with dunes and mountain ranges in the Sahara.
Furthermore, color is inherently subjective. Thus, we could not easily identify
one or more thresholds that differentiated between relatively dense and less
dense vegetation regions. On the other hand, a ratio such as NDVI is well-
suited for such tasks.
6.3.4 NDVI image of Africa
TerrSet’s IDRISI Image Processing offers a built-in module, VEGINDEX,
that facilitates the calculation of 19 different vegetation ratios. We will use
this program to calculate one of the simplest and most enduring ratios, NDVI.
◆ ◆ ◆
Calculating NDVI with VEGINDEX
Menu location: IDRISI Image Processing –
Transformation – VEGINDEX
1. Start VEGINDEX from the main menu.
2. The VEGINDEX dialog box will open.
3. Select the radio button for NDVI.
4. Click on the browse button (…) next to the Red band text
box, to identify the input file feb_b1.
5. Click on the browse button (…) next to the Infrared band
text box, to identify the input file as feb_b2.
6. In the Output image text box, type feb_NDVI.
7. Click on OK to generate and display the image.
8. After the program has finished processing, find the
VEGINDEX dialog box again, which may be hidden by the
displayed image.
9. Change the file specified for the Red band to july_b1.
10. Change the file specified for the Infrared band to july_b2.
11. Type a new Output image filename: july_NDVI.
12. Click on OK.
◆ ◆ ◆
The two images should show the vegetation patterns quite well. However, to
be able to compare the two images, it is necessary to adjust the display
properties so that a particular DN value has the same associated color in both
images. The current images have default stretches, based on the minimum
and maximum values in each image. The fact that the two images have
different stretches is immediately apparent when you compare the ocean
background areas.
Figure 6.3.4.a Africa NDVI for February.
◆ ◆ ◆
Setting the image contrast
1. Find the Composer window, which is automatically present
when any image is displayed.
2. Click in the feb_NDVI image, and then click on the Layer
properties button in the Composer window.
3. In the Layer Properties window, clear the Display Min text
box, and then enter -1.
4. Clear the Display Max text box, and then enter 1.
5. Click on Apply, and then Save.
6. The image should now have a legend that extends from -1
to +1.
7. Click in the july_NDVI image, and then click on the Layer
properties button in the Composer window.
8. In the Layer Properties window, clear the Display Min text
box, and then enter -1.
9. Clear the Display Max text box, and then enter 1.
10. Click on Apply, Save, and then OK.
◆ ◆ ◆
In comparing the February (Figure 6.3.4.a) and July data, note how the band
of high values (deep green) just south of the Sahara Desert has moved north
in July. In contrast, southern Africa, with the exception of the tip of Africa
near Cape Town, is now relatively dry. These two images capture the major
seasonal patterns of Africa and, in particular, show how the seasons follow
the sun.
In January, the sun’s rays are most intense in the southern hemisphere. The
low pressure belt, and the heavy rainfall caused by air rising due to intense
heating, are then south of the equator, and the Sahel region is relatively dry.
By July, however, the latitude of the sun’s most intense illumination has
migrated to the northern hemisphere. Likewise, the low pressure belt also
moves north, bringing welcome rains to the Sahel. For the island of
Madagascar, off the south east coast of Africa, only the east coast is
relatively wet in July. On-shore easterly winds and local orographic
precipitation bring moisture to an otherwise dry region.
6.4 Discriminating Snow from Clouds
6.4.1 Overview
One of the most basic image enhancement procedures is the false color
composite. The combination of three bands, each assigned to a different
primary color, is a powerful method for visualizing the spectral information
in an image. As the number of spectral bands in a data set increases, the
number of potential band combinations for false color composites
increases rapidly. It is therefore important to consider what factors
make a good false color composite, and how the colors in the image can be
interpreted if fundamental information is available regarding the spectral
properties of the surfaces in the image.
In this exercise, we will choose a band combination to separate snow and
clouds, and also predict the associated colors based on the band combination
and color assignment we choose. We will then develop a snow ratio, or
index, and use that to set a threshold that maps snow. Thus, this exercise will
start with enhancement, and end with a simple classification. The procedure
we will follow is loosely based on Dozier (1989).
6.4.2 Snow and Cloud Properties
We know from everyday experience that both fresh snow and clouds are
typically very bright in the visible part of the spectrum. In fact, snow and
clouds have a reflectance close to 100% in the visible (Figure
6.4.2.a). However, their spectral properties beyond the visible, in the
shortwave infrared, are very different. Clouds are also bright in the shortwave
infrared, but snow has very low reflectance beyond 1.4 μm (Dozier 1989).
Figure 6.4.2.a Comparison of the spectral reflectance of snow and cloud, as
well as the Landsat 8 OLI spectral band passes. (Snow modeled as 200 μm
radii grains, cloud as 5 μm radii water droplets, modified from Dozier 1989).
Based on Figure 6.4.2.a, we can see that with careful selection of the spectral
bands it should be possible to create a false color composite that shows snow
and cloud as different colors. A false color composite has the advantage of
combining information from different wavelengths. For example, although
Figure 6.4.2.a appears to suggest that a single band, such as the 1.57 – 1.65
μm OLI band 6, can potentially differentiate snow and clouds, we need to
remember that there are usually other spectral classes that may confuse our
interpretation. Thus, snow is not the only substance that has a low reflectance
in band 6; water also has a low reflectance in that band. Adding further
complexity is the effect of varying illumination on slopes of different
steepness and topographic aspect. Furthermore, mixed pixels dilute the
characteristic spectral properties of the cover classes. Combinations of bands,
including both false color composites for visual interpretation, and ratios of
bands for either visual or more quantitative interpretation, can in many cases
overcome these problems.
6.4.3 Preparation
For this section we will use Landsat 8 OLI data from Mount Rainier in
Washington State, USA. You should have already downloaded the data in
Section 6.2; however, we still need to set the Project and Working Folder for
this new data set.
Before starting you should close any dialog boxes or displayed images in the
TerrSet workspace.
◆ ◆ ◆
Create a new project file and specify the working folder
with the TerrSet EXPLORER
1. Start the TerrSet EXPLORER, if it is not already open.
2. In the TerrSet EXPLORER window, select the Projects tab.
3. Right click within the Projects pane, and select the New
Project Ins option.
4. A Browse For Folder window will open. Use this window
to navigate to the RSGuide folder, which is the folder you
created on your computer for the data for this manual. Now
navigate to the Chap6_4 subfolder, within the
RSGuide/Chap6 folder.
5. Click OK in the Browse For Folder window.
6. A new project file, Chap6_4, will now be listed in the
Project pane of the TerrSet EXPLORER. The working folder
will also be listed in the Editor pane.
7. Minimize the TerrSet EXPLORER.
◆ ◆ ◆
6.4.4 A Color Composite to Discriminate
Snow from Clouds
We will first create a simulated natural color composite of the OLI data. We
call this a natural color composite because it somewhat replicates the colors
we might see with our eyes, if we were to fly over the landscape. We should
remember, however, that the OLI bands are not perfect matches for the
wavelengths the eye is sensitive to, and furthermore, each band is stretched in
the composite generation. Therefore the colors will not be identical to natural
colors.
◆ ◆ ◆
Create a simulated natural color composite image
Menu Location: File – Display – COMPOSITE
1. Start the COMPOSITE program using the main menu or
tool bar.
2. In the COMPOSITE dialog box, specify the file name for
the Blue image band by clicking on the browse button (…),
and in the resulting Pick list selecting OLI_MtRainier_B2.
Click on OK to close the Pick list.
3. Repeat the previous step for the Green image band,
specifying the OLI_MtRainier_B3 image.
4. Again, repeat the previous step for the Red image band,
this time specifying the OLI_MtRainier_B4 image.
5. Enter the Output image filename in the text box provided:
234comp.
6. Change the percent to be saturated from each end of the
greyscale to 5. This will increase the image contrast.
7. Accept all other defaults, and click OK to create and
display the color composite (Figure 6.4.4.a).
◆ ◆ ◆
Figure 6.4.4.a Simulated natural color Landsat 8 OLI image of Mount
Rainier with snow and clouds in places.
In the simulated natural color image (Figure 6.4.4.a), snow and clouds are
both white, so it is hard to distinguish between these two objects within the
image. Clouds are usually accompanied by shadows; in this case the shadows,
although difficult to see, are located to the northwest of each cloud (the image
is oriented so that north is approximately in the “up” direction). The presence
of shadows makes it possible for a human interpreter to differentiate the
clouds from the snow. However, in some cases patches of snow may be
small, or cloud shadows may not be easily identifiable, making the use of
shadows alone challenging for snow and cloud differentiation. Furthermore,
such a spatial rule is difficult to implement using automated image
processing, and most remote sensing enhancement is spectrally based.
Referring back to the graph shown in Figure 6.4.2.a, it is apparent that the
standard false color composite has a major limitation in that it does not
include a short wave infrared band, where snow and clouds have contrasting
reflectance. Therefore, a better choice would be a false color composite
produced from bands in the visible, near infrared, and shortwave infrared
(e.g., OLI bands 3, 5, and 6).
For this reason, we will now create another false color composite. This time
we will assign OLI bands 3, 5 and 6, to blue, green and red, respectively.
◆ ◆ ◆
Create a false color composite image
Menu Location: File – Display – COMPOSITE
1. If necessary, start the COMPOSITE program using the
main menu or tool bar.
2. Specify the file name for the Blue image band as
OLI_MtRainier_B3.
3. Specify the file name for the Green image band as
OLI_MtRainier_B5.
4. Specify the file name for the Red image band as
OLI_MtRainier_B6.
5. Enter the Output image filename in the text box provided:
356fcc.
6. Set the percent to be saturated from each end of the
greyscale to 1.
7. Accept all other defaults, and click OK to create and
display the false color composite.
8. Click Close, to remove the COMPOSITE dialog box.
◆ ◆ ◆
For this false color composite image, we can predict the colors that snow and
clouds should be, based on a comparison of the expected relative intensities
in each of the three bands used (3, 5 and 6), as shown in Figure 6.4.2.a, and
the color assigned to each of those bands in the image (blue, green and red,
respectively). In addition, we need some information about the mixing of
colors of light, based on the additive mixing system that applies to computer
monitors (Figure 6.4.4.b).
• Clouds, which are bright in all three bands, should be represented by high
red, green and blue values in the false color composite. A combination of
red, green and blue makes white in the additive color scheme used on
computer monitors (Figure 6.4.4.b).
• Snow, on the other hand, is bright in only the first two bands: green and
near infrared (OLI 3 and 5, represented by blue and green in the false
color composite), but not the shortwave infrared (OLI band 6, represented
by red). The combination of blue and green makes cyan (Figure 6.4.4.b).
Thus, in summary, for the OLI Bands 3, 5, 6 false color composite, we
predict clouds will be white, and snow cyan.
Figure 6.4.4.b Additive colors.
The non-standard false color composite (Figure 6.4.4.c) does indeed show
snow in cyan, and clouds in white. In addition, vegetation is shown in green
because of its strong near infrared (band 5) radiance, which is assigned to the
monitor’s green gun.
Figure 6.4.4.c Washington State false color composite. Bands 3,5,6 as
BGR.
6.4.5 A Ratio to Discriminate Snow
The false color composite shown in Figure 6.4.4.c draws on the distinctive
spectral reflectance pattern of snow: strong absorption in the shortwave
infrared OLI band 6, and strong reflectance in the visible and near infrared,
including OLI bands 3 and 5. This contrast lends itself to the development of
a snow ratio. One possible ratio is (Green – Shortwave Infrared) / (Green +
Shortwave Infrared) (Dozier, 1989), which is equivalent to OLI (Band 3 – Band
6) / (Band 3 + Band 6). This is not a ratio that TerrSet offers as a prepared
program. However, the module OVERLAY provides a simple way of
implementing any ratio.
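Like NDVI, this snow index follows the generic normalized difference pattern, which is easy to sketch in NumPy. The band array names below are hypothetical; the OVERLAY steps that follow produce the same result within TerrSet:

```python
import numpy as np

def normalized_difference(a, b):
    # Generic (a - b) / (a + b) ratio, guarding against zero sums
    a, b = a.astype(np.float64), b.astype(np.float64)
    total = a + b
    return np.where(total == 0, 0.0, (a - b) / total)

snow_index = normalized_difference(oli_b3, oli_b6)  # hypothetical arrays
```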
◆ ◆ ◆
Calculating ratios with OVERLAY
Menu location: IDRISI GIS Analysis – Mathematical
Operators – OVERLAY
1. Start the OVERLAY program from the main menu or the
icon tool bar.
2. In the OVERLAY dialog box, double click in the text box
next to First image, and select OLI_MtRainier_B3 from the
Pick list. Click OK.
3. Select OLI_MtRainier_B6 for the Second image, and click
OK.
4. Enter the Output image name: snow3by6.
5. Select the radio button for First – Second / First + Second.
6. Click on OK, and then Close.
◆ ◆ ◆
The ratio image will be displayed automatically (Figure 6.4.5.a). Ratio values
that are high, those shown in red and yellow colors, indicate snow.
Figure 6.4.5.a OLI (Band 3 – Band 6) / (Band 3 + Band 6) ratio image. The
higher DN values (assigned to red colors in the figure) indicate areas of
snow.
Finally, to make the location of the snow most clear, we will make a map that
indicates where snow is located. This map could be used, for example, as a
mask, to blank out any snow-covered areas. To create our map, we need to
identify a threshold value in the snow ratio image that differentiates snow.
The value we select is somewhat arbitrary, since the boundary is actually a
fuzzy one; there will be a complete gradation from 100% snow-covered
pixels to pixels without any snow at all. We will choose our threshold by
examining the ratio image histogram. We observe that snow makes up only a
relatively small proportion of the image. Therefore, in the ratio image, it
should comprise the anomalously high values.
◆ ◆ ◆
Compute the histogram of DN values with HISTO
Menu location: IDRISI GIS Analysis – Database Query –
HISTO
1. Start the HISTO program from the main menu or the
toolbar.
2. Double click in the Input file name text box, and select
snow3by6. Click on OK.
3. Select the radio button for Numeric.
4. Accept all other defaults.
5. Click on OK.
◆ ◆ ◆
Observe the resulting table of data. Note that the fourth column represents the
histogram frequency, or the number of pixels in the image within the range of
DN values specified in the second and third columns. The frequency values
peak at 279,136 for DN values between -0.04 and -0.03, and then decline
rapidly. The frequency values reach a local minimum of 1317 between the
values of 0.34 and 0.35, and then increase slightly again. We will therefore
select 0.34 as a threshold, on the assumption that the slight increase is the
influence of the snow pixels.
Thresholding is an operation that divides a single-band image into a small
number of classes, in this case just two. We will set up a rule such that any
input DN value below the threshold is classed as 0, and any DN value at or
above the threshold is classed as 1.
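This rule, which we will implement with the RECLASS module below, is equivalent to a one-line NumPy threshold (a sketch, assuming the ratio image is loaded as an array named snow3by6):

```python
import numpy as np

# 1 where the snow index is at or above the threshold, 0 elsewhere
snowmap = (snow3by6 >= 0.34).astype(np.uint8)
```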
◆ ◆ ◆
Applying a threshold to an image with RECLASS
Menu location: IDRISI GIS Analysis – Database Query –
RECLASS
1. Start the RECLASS program from the main menu or
toolbar.
2. The RECLASS dialog box will open.
3. Double click in the text box for Input file, select snow3by6
from the Pick list, and click OK.
4. Enter a name for the Output file: snowmap.
5. Below the Output file name is the table of Reclass
parameters. Complete the first line of the table as follows:
Assign a new value of: 0
To all values from: -1
To just less than: 0.34
6. Complete the second line of the table:
Assign a new value of: 1
To all values from: 0.34
To just less than: 1
7. Check that the dialog box has been completed correctly
(Figure 6.4.5.b).
8. Click on OK.
9. TerrSet will generate a warning notice: Warning: the input
file contains real values. Would you like to convert the output
file to integer?
10. Click on Yes to accept this default of an integer output
file, as we want our output to contain only integer values (0 or
1).
◆ ◆ ◆
Figure 6.4.5.b RECLASS dialog box with parameters entered.
The output image will be displayed automatically. Compare the image to the
false color composite (Figure 6.4.4.c). Overall, the thresholded image
provides a good map of the snow.
Figure 6.4.5.c Snow map from the snow ratio image.
We can calculate the area that the snow covers in this image by using the
HISTO program one more time.
◆ ◆ ◆
Compute the histogram of DN values with HISTO
Menu location: IDRISI GIS Analysis – Database Query –
HISTO
1. Start the HISTO program from the main menu or the
toolbar, if it is not already open.
2. Double click in the Input file name text box, and select
snowmap. Click on OK.
3. Select the radio button for Numeric.
4. Accept all other defaults, and click on OK.
◆ ◆ ◆
Observe the resulting tabular data (Figure 6.4.5.d). The Lower Limit is the
minimum DN value; the Upper Limit should be interpreted as “just less
than.” Since our data are integer, Class 0 will therefore only include pixels
with a DN of 0, and class 1, pixels with a DN value of 1. The Frequency
column (fourth from the left) gives information on the number of pixels in
each category. Thus, we see that there are 3,214,360 pixels with a DN value
of 0, and 93,295 with a DN value of 1. (Your numbers should be similar,
unless you chose a different threshold for discriminating snow.) Summing the
non-snow and snow pixels gives us 3,307,655. This sum is equal to the
total number of pixels in the image, 1685 rows by 1963 columns (1963 x
1685 = 3,307,655).
Figure 6.4.5.d HISTO output for the snowmap image map.
We can now estimate the area of snow. One pixel represents 30 meters by 30
meters, or 0.09 hectares (30 m x 30 m = 900 m², 1 hectare = 10,000
m²). Therefore, 93,295 x 0.09 ha = 8,396.55 ha of snow.
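The same bookkeeping in code form, reusing the snowmap array from the earlier threshold sketch (the 30 m pixel size is taken from the text above):

```python
pixel_area_ha = (30 * 30) / 10_000   # 900 m^2 = 0.09 ha per pixel
snow_pixels = int(snowmap.sum())     # count of pixels with DN = 1
print(f"snow area: {snow_pixels * pixel_area_ha:.2f} ha")
```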
Another way of calculating areas is by using the AREA module available in
IDRISI GIS Analysis – Database Query. A simpler way of accessing the
calculation of areas for images in byte format is by right clicking on the
legend and selecting Calculate Area and then the units (e.g. Hectares). The
module AREA will run and the areas for each class will be displayed in a pop
up window (Figure 6.4.5.e).
Figure 6.4.5.e Calculation of areas from legend shortcut.
6.5 Mineral Ratios
In Section 6.3 we saw how the NDVI index is an excellent way to enhance
information about vegetation in an image, and in the previous section we saw
that snow could be mapped quite reliably with a custom “snow index.” In this
section we will explore the idea of ratios further. We will investigate what
makes a good ratio, and how ratio data can be combined to enhance different
mineral types.
Our study area is a volcanic region near Puna de Atacama, Bolivia. The rocks
in this region have been altered extensively by hydrothermal fluids, which
consist of hot ground water. As these hot ground waters circulate, they
chemically alter the surrounding rock. Hydrothermal fluids often carry
dissolved chemicals that may precipitate, and may eventually form economic
mineral deposits. Thus, hydrothermally altered regions make very good
mineral exploration targets.
Early work using Landsat MSS data, which has just four spectral bands,
showed that ratios could be used to enhance mineral alteration in Nevada
(Rowan et al., 1974). In addition to enhancing subtle spectral differences,
ratios tend to suppress topographically-caused illumination differences in a
scene. We saw this topographic normalization effect to a certain extent in the
snow ratio data.
A ratio is created by dividing brightness values, pixel by pixel, of one band
by another. The primary purpose of such ratios is to enhance the contrast
between materials by dividing brightness values at peaks and troughs in a
spectral reflectance curve. Specifically, the absorption feature (trough)
should be used for the denominator of the ratio, and the peak brightness
(peak) as the numerator. This combination of peak and trough will tend to
result in higher numbers for the ratio when the class of interest is present.
It is very important that image analysts who use ratios understand the theory
behind them, and have specific absorption features in mind for the analysis.
Figure 6.5.a can be used to illustrate why the ratios identified in Table 6.5.a
can be used to highlight the types of minerals listed. Figure 6.5.a also shows
the band passes of the Landsat TM sensor, and thus serves to remind us that it
is only possible to use ratios in spectral regions for which we have data.
Kaolinite has an absorption feature at 1.4 μm, but Landsat has no spectral
band in this region, and thus this feature cannot be exploited in an analysis
using Landsat TM data. In addition, we need to keep in mind that each band
integrates all the energy across its entire width. Thus, although the mineral
jarosite has a deep absorption feature within the band pass of Landsat Band 7
(2.3 μm), the Band 7 DN value for a pixel of pure jarosite would be a single
number, and would include radiance from the adjacent, relatively high
reflectance regions. Therefore, the signal of jarosite in Band 7 would not be
as distinctive as might be expected at first.
Figure 6.5.a Spectra of selected rocks and minerals and Landsat TM spectral
band passes. (Source of spectra: Andesite: Johns Hopkins Spectral Library,
Hematite & Jarosite: USGS Spectral Library (Clark et al. 2003), Kaolinite:
JPL Spectral Library).
Table 6.5.a Typical Landsat TM Mineral Ratios
The Andesite spectrum (Figure 6.5.a) is typical of the country rock in our
study area in Bolivia. It is a relatively flat spectrum, with few spectral
features. Therefore, andesite serves as a background spectrum against which
we hope to highlight the other minerals.
The spectra shown in Figure 6.5.a suggest that a ratio of TM band 5 (SWIR 1)
by TM band 7 (SWIR 2) will result in a relatively large value for kaolinite,
and also, to a lesser extent, for jarosite. Jarosite, however, will have a very
high value for a TM band 5 by TM band 4 ratio. Several minerals will have
strong TM band 3 by TM band 1 ratios, but the ratio will be particularly
strong for hematite, because its strong red color (high reflectance in the red
area of the spectrum) contrasts against the lower band 1 (blue-green) values.
In developing the link from Figure 6.5.a to the image we will produce in this
exercise, we need to remember that the spectra are library mineral spectra,
whereas the image spectra are composites of many minerals. In some cases
there are also non-linear mixing effects, especially when dealing with iron
minerals, which add further to the difficulty of predicting the results.
The next section is a purely hypothetical example, to illustrate the idea of
ratios further, and should be completed prior to working with the real data.
6.5.1 Hypothetical Ratio Example
Figure 6.5.1.a presents two hypothetical spectral curves that might be
collected from a field spectrometer for two materials, A and B. Note that the
vertical axis is on an arbitrary 8-bit scale, instead of the more usual
reflectance scale of 0-1. The figure also shows the spectral band
passes of a 3-band imaging sensor, with 0.1 μm wide bands centered on 0.6,
0.8 and 1.0 μm.
Now, we will investigate the ratios from the three band sensor that would
give us the best separation of cover type A from B. The first step is to
complete Table 6.5.1.a, estimating the value that the sensor would record for
each cover type, in each band. You will simply estimate an average value for
the curve, within the sensitivity region of the band you are dealing with. To
provide an example, the first row, for the 0.6 μm band, has been completed
for you. In the image band (or channel) centered on 0.6 μm, material A has a
reflectance response that starts at 175 and peaks at about 195, but the sensor
only records a single number, representing an average. In this case the
average is probably close to 185. The value recorded for material B is similar,
but a little higher on average.
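If you want to check your estimates numerically, the band integration can be mimicked by averaging a spectral curve over each band pass. Below is a hedged sketch using an entirely made-up curve for material A; only the averaging step is the point here:

```python
import numpy as np

# Hypothetical field spectrum, sampled every 0.01 um from 0.4 to 1.2 um
wl = np.arange(0.40, 1.20, 0.01)
spectrum_a = 150 + 45 * np.exp(-((wl - 0.60) ** 2) / 0.005)

in_band = (wl >= 0.55) & (wl < 0.65)  # the 0.1 um band centered on 0.6 um
dn_a = spectrum_a[in_band].mean()     # the single DN the band records
print(round(dn_a))                    # close to the 185 estimated above
```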
Figure 6.5.1.a Hypothetical field-collected spectral reflectance curves and
image sensor wavelength bands.
Table 6.5.1.a Table of expected DN values for the two cover types as
imaged by the three band sensor (compare to Figure 6.5.1.a).
Once you have completed the remaining two rows of Table 6.5.1.a, you are
ready to try developing your own ratio by completing Table 6.5.1.b.
Specifically, you should try to develop a ratio that highlights cover type A in
bright tones, and B in dark tones. As an example, the first row has been
completed using the ratio 1.0 μm / 0.6 μm. The DN values used in this
example are taken from the table of integrated values for each band that you
have just completed (Table 6.5.1.a).
The 1.0 μm / 0.6 μm ratio wasn’t very successful, because the values
obtained are very similar, 1.0 and 1.1. If we look at Figure 6.5.1.a, we can see
that this is not surprising, because at those wavelengths cover type A does not
have any distinctive absorption features. Instead, we need to select two
wavelengths, for one of which A has an absorption feature, and B
doesn’t. For the second wavelength both A and B should have relatively high
reflectance. Remember to put the wavelength that has the absorption feature
for cover type A in the denominator of the ratio you choose.
See if you can get a better result with a more carefully chosen set of band
ratios, based on the absorption features in Figure 6.5.1.a.
Hints for completing Table 6.5.1.b: There are two possible ratios that give a
high value for A, and a low value for B. Both choices give a ratio for A that is
about 50% higher than the value obtained for B. If your ratio is higher for B
than A, then your ratio has the wrong wavelength band in the denominator.
Table 6.5.1.b Worksheet for developing a ratio that highlights A relative to
B.
6.5.2 Preparation
For this section, we will use Thematic Mapper data from a volcanic region in
Puna de Atacama, Bolivia. In Section 6.2 you should have already
downloaded the data. However, we still need to set the Project and Working
Folder for the Bolivian data.
Before starting you should close any dialog boxes or displayed images in the
TerrSet workspace.
◆ ◆ ◆
Create a new project file and specify the working folder
with the TerrSet EXPLORER
1. Maximize the TerrSet EXPLORER.
2. In the TerrSet EXPLORER window, select the Projects tab.
3. Right click within the Projects pane, and select the New
Project Ins option.
4. A Browse For Folder window will open. Use this window
to navigate to the RSGuide folder, which is the folder you
created on your computer for the data for this manual. Now
navigate to the Chap6_5 subfolder, within the
RSGuide/Chap6 folder.
5. Click OK in the Browse For Folder window.
6. A new project file, Chap6_5, will now be listed in the
Project pane of the TerrSet EXPLORER. The working folder
will also be listed in the Editor pane.
7. Minimize the TerrSet EXPLORER.
◆ ◆ ◆
6.5.3 Exploratory Investigation of the Puna
de Atacama TM Data
For this section, we will use TM data. Table 6.5.3.a lists the wavelengths of
the TM bands.
We will start by creating a simulated natural color composite of the TM data,
just as we did for the Washington State image used in Section 6.4. We will
use the program COMPOSITE, and assign bands 1, 2 and 3 to blue, green
and red, respectively.
Table 6.5.3.a Thematic Mapper bands
*Note that Landsat TM Band 6 (thermal infrared) is not included with the
data for this exercise.
◆ ◆ ◆
Create a simulated natural color composite image
Menu Location: File – Display – COMPOSITE
1. Start the COMPOSITE program using the main menu or
tool bar.
2. In the COMPOSITE dialog box, specify the file name for
the Blue image band by clicking on the browse button (…),
and in the resulting Pick list selecting landsat1. Click on OK
to close the Pick list.
3. Repeat the previous step for the Green image band,
specifying the landsat2 image.
4. Again, repeat the previous step for the Red image band,
this time specifying the landsat3 image.
5. Enter the Output image filename in the text box provided:
123bolivia.
6. Accept all other defaults, and click OK to create and
display the color composite (Figure 6.5.3.a).
◆ ◆ ◆
Figure 6.5.3.a Simulated natural color composite of Puna de Atacama TM
data.
Figure 6.5.3.a illustrates that a simulated natural color composite works well
in this arid environment to show the hydrothermal alteration. Note the central
bright area, which is the main region of hydrothermal alteration, and which
provides a strong contrast against the dark volcanic rocks. Erosion products
from the hydrothermal area are redistributed to the northeast and southwest
along stream courses.
6.5.4 Calculating Ratios
We will calculate three ratios to highlight the different minerals present in
these rocks: Band 5 / Band7, Band 5 / Band 4, and Band 3 / Band 1 (Table
6.5.3.a). We will use the program OVERLAY.
As with the snow ratio of Section 6.4, we will generate a normalized
difference ratio (First – Second) / (First + Second), rather than a simple ratio
(First / Second), because the former gives a more balanced range of values,
from -1 to +1, with 0 representing the middle value, namely a flat
spectrum. For simplicity’s sake, however, we will refer to each ratio as if it
were a simple ratio. Thus, (Band 5 – Band 7) / (Band 5 + Band 7) will be
referred to as Band 5 / Band 7.
◆ ◆ ◆
Calculating ratios with OVERLAY
Menu location: IDRISI GIS Analysis – Mathematical
Operators – OVERLAY
1. Start the OVERLAY program from the main menu or the
icon tool bar.
2. In the OVERLAY dialog box, double click in the text box
next to First image, and select landsat5 from the Pick list.
Click OK.
3. Select landsat7 for the Second image, and click OK.
4. Enter the Output image name 5by7.
5. Select the radio button for First – Second / First + Second.
6. Click on OK.
◆ ◆ ◆
The resulting image does not look impressive. This is partly because ratio
images are inherently noisy: a ratio suppresses the majority of the signal,
which derives from variation in illumination due to topography, and enhances
a minor part of the signal, the spectral differences between bands. Another
reason the image looks unimpressive is that the default palette is the quant
palette. This ratio would make more sense as a black and white
image. Therefore, follow the instructions below to change the palette and
increase the contrast.
◆ ◆ ◆
Change palette and contrast enhancement of an image
1. Make sure the new 5by7 image is the focus of the TerrSet
workspace by clicking in the image.
2. Find the Composer window, and select the button for Layer
Properties.
3. In the Layer Properties window, select the browse button
(…) next to the text box below Palette File.
4. In the Pick list window, click on the plus sign next to the
TerrSet\symbols folder.
5. Scroll down, until you can select GreyScale. Click on OK
to close the Pick list window.
6. Back in the Layer Properties window, enter in the Display
Min text box -0.05.
7. Enter in the Display Max text box 0.20.
8. Still in the Layer Properties window, click on the buttons
for Apply, Save and OK.
◆ ◆ ◆
The resulting image (Figure 6.5.4.a) should show the central hydrothermally
altered area very distinctly.
Figure 6.5.4.a Ratio of Landsat bands 5 / 7, with greyscale palette and
contrast enhanced.
Having produced the first ratio, now run the OVERLAY operation two more
times to create the remaining two ratio combinations (Band 5 / Band 4 and
Band 3 / Band 1).
◆ ◆ ◆
Calculating ratios with OVERLAY
Menu location: IDRISI GIS Analysis – Mathematical
Operators – OVERLAY
1. If necessary, start the OVERLAY program again from the
main menu or the icon tool bar.
2. In the OVERLAY dialog box, double click in the text box
next to First image.
3. The Pick List window will open. Select landsat5 from the
Pick list. If necessary, click on the plus symbol (+) to see the
names of the individual bands within the Chap6_5 folder.
Click OK to close the Pick list, if necessary.
4. Select landsat4 for the Second image, and click OK.
5. Enter the Output image name 5by4.
6. Select the radio button for First – Second / First + Second.
7. Click on OK.
8. The image will display automatically, with a quant palette
file.
9. Now alter the file names to create the third and last ratio.
10. In the OVERLAY dialog box, double click in the text box
next to First image, and select landsat3 from the Pick list.
Click OK.
11. Select landsat1 for the Second image, and click OK.
12. Enter the Output image name 3by1.
13. Click on OK.
◆ ◆ ◆
Once you have the three ratio images, you can combine them in a false color
composite, with the program COMPOSITE. Use the Band 5 / Band 7 ratio
for blue, Band 3 / Band 1 for green, and Band 5 / Band 4 for red.
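In array terms, COMPOSITE contrast-stretches each ratio image and stacks the results into the red, green and blue channels. A minimal sketch, with hypothetical array names r5by7, r3by1 and r5by4 standing in for the 5by7, 3by1 and 5by4 images:

```python
import numpy as np

def stretch(img, pct=1):
    # Linear 0-255 stretch, saturating pct% of pixels at each tail
    lo, hi = np.percentile(img, [pct, 100 - pct])
    return np.clip((img - lo) / (hi - lo) * 255, 0, 255).astype(np.uint8)

# Channel order R, G, B: 5/4 as red, 3/1 as green, 5/7 as blue
rgb = np.dstack([stretch(r5by4), stretch(r3by1), stretch(r5by7)])
```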
◆ ◆ ◆
Create a color composite image
Menu Location: File – Display – COMPOSITE
1. Start the COMPOSITE program using the main menu or
tool bar.
2. In the COMPOSITE dialog box, specify the Blue image
band as 5by7.
3. Specify the Green image band as 3by1.
4. Specify the Red image band as 5by4.
5. Enter the Output image filename in the text box provided:
ratiofcc.
6. Accept all other defaults, and click OK to create and
display the false color composite (Figure 6.5.4.b).
◆ ◆ ◆
Figure 6.5.4.b Ratio false color composite. 5/7 as blue, 3/1 as green, and 5/4
as red.
With Figure 6.5.4.b to guide you, you should now be able to draw a sketch
map of the main minerals present in the image. For example, the central core
of the alteration zone has a blue color, indicating high values in the 5/7 ratio,
which in turn indicates the presence of clay minerals. In interpreting all the
colors in the image, remember that mixing green and blue gives cyan (a light,
sky blue color), green and red gives yellow, and red and blue gives magenta
(purple) (see the color mixtures shown by Figure 6.4.4.b).
6.6 Other Indices
6.6.1 Background
We saw in Section 6.3 that we could create a normalized difference index that
highlights vegetation, and in Section 6.4 we created an index to highlight
snow, separating it from clouds. In Section 6.5, we generated some mineral
indices and learned that, if we know the spectral signatures of specific
features, we can create an index image to highlight them. In this section, we
will look at two other indices: the normalized burn ratio index, and a
normalized difference water index.
6.6.1.1 Normalized Burn Ratio index
The burn ratio index allows us to separate vegetation from burned areas.
Burned areas reflect strongly in the shortwave infrared part of the spectrum
(between 2.08 and 2.35 μm), while absorbing in the near infrared. On the
other hand, healthy vegetation reflects strongly in the near infrared and
absorbs in the shortwave infrared, with the amount of absorption in this part
of the spectrum depending on the vegetation moisture content (Figure
6.6.1.1.a).
Figure 6.6.1.1.a Spectral curves for healthy vegetation and burned areas.
Modified from USFS.
This difference in reflectance between burned areas and vegetation allows the
generation of the delta normalized burn ratio index (dNBR). This index
compares normalized burn ratio indices from images before and after a fire.
First a normalized burn ratio (NBR) is calculated for both the pre-fire and
post-fire images:
NBR = (NIR – SWIR 2) / (NIR + SWIR 2).
In the equation above, NIR refers to the NIR band (Landsat 8 OLI band 5),
and SWIR 2 refers to the second SWIR band (Landsat 8 OLI band 7). Then
the delta NBR (dNBR) is calculated by subtracting the post-fire NBR from
the pre-fire NBR image:
dNBR = NBR pre-fire – NBR post-fire.
Note that the NBR was designed to be used in this delta (or differenced)
calculation. Because of this, the band subtraction is the inverse of the
convention we learned in Section 6.5, and it is therefore negative NBR values
that represent the presence of fire.
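A minimal NumPy sketch of the full dNBR calculation, assuming the pre- and post-fire NIR and SWIR 2 bands are loaded as floating point arrays (the array names are hypothetical; the OVERLAY steps in Section 6.6.3 perform the same arithmetic within TerrSet):

```python
import numpy as np

def nbr(nir, swir2):
    # Normalized burn ratio: (NIR - SWIR2) / (NIR + SWIR2)
    total = nir + swir2
    return np.where(total == 0, 0.0, (nir - swir2) / total)

dnbr = nbr(pre_b5, pre_b7) - nbr(post_b5, post_b7)
# Burned areas appear as large positive dNBR values
```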
6.6.1.2 Normalized difference water index
Different water indices exist, some designed to identify standing water
and others to identify moisture in vegetation. For example, since water
reflects in the visible part of the spectrum and absorbs at longer
wavelengths, the contrast between the green and the shortwave infrared can
be used to identify standing water. On the other hand, when the goal is to
evaluate differences in the water content of vegetation, we can take advantage
of the differences in reflectance of green and dry vegetation in the near
infrared and shortwave infrared parts of the spectrum. Water absorbs in the
SWIR part of the electromagnetic spectrum; green vegetation therefore
reflects less in the SWIR than dry vegetation. Moreover, vegetation stress is
reflected in a decrease in NIR reflectance. The NDWI developed by Gao
(1996) is calculated as follows:
NDWI = (NIR – SWIR) / (NIR + SWIR).
This index has negative values for areas with exposed soil or non-
photosynthetic vegetation (e.g. dead vegetation), and positive values for
green vegetation. Higher positive values represent higher water content in
leaves, as the difference in reflectance between the NIR and SWIR becomes
larger.
6.6.2 Preparation
In section 6.2 you should have downloaded all the data that we need for this
exercise. We will use Landsat 8 OLI data from Cordoba, Argentina, to
evaluate the vegetation pattern of the region using both NDVI and EVI. The
data were collected during the Austral spring (October) of 2013 and were
imported and atmospherically corrected using the LANDSAT module and
Cos(t) method.
We will now set the new Project and Working Folders. Before starting, close
any dialog boxes or displayed images in the TerrSet workspace.
◆ ◆ ◆
Create a new project file and specify the working folders
with the TerrSet EXPLORER
1. Maximize TerrSet EXPLORER.
2. In the TerrSet EXPLORER window, select the Projects tab.
3. Right click within the Projects pane, and select the New
Project Ins option.
4. A Browse For Folder window will open. Navigate to the
RSGuide folder and select the Chap6_6 subfolder.
5. Click OK in the Browse For Folder window.
6. A new project file, Chap6_6, will now be listed in the
Project pane of TerrSet EXPLORER. The working folder will
also be listed in the Editor pane.
◆ ◆ ◆
Before we begin our exercise, let’s first look at the data. We will display
individual bands to see the characteristics of the images used. We will display
and generate color composites using TerrSet EXPLORER and the
COMPOSER.
◆ ◆ ◆
Initial display of images
TerrSet EXPLORER
1. In TerrSet EXPLORER, go to the Files tab and double
click on the working folder to see the images within the
folder.
◆ ◆ ◆
You will see two sets of images: some start with the prefix
OLI_Aug2013, and some with OLI_Oct2013. In this exercise we
will start by looking at the October images (OLI_Oct2013).
◆ ◆ ◆
Initial display of images (cont.)
TerrSet EXPLORER
2. Click on the image OLI_Oct2013_B4.
3. Press the Ctrl Key on your keyboard and, without releasing
it, click on OLI_Oct2013_B5 and OLI_Oct2013_B7. You
should have the three bands selected in blue.
4. Release the Ctrl Key and right click on the selection.
5. Select the option Display Map (or hit the Enter Key while
pressing the Shift Key: Shift+Enter, Figure 6.6.2.a).
◆ ◆ ◆
Figure 6.6.2.a Displaying multiple images from TerrSet Explorer.
The three images should be displayed in separate display windows. Note the
range of values: these are not raw DN values; instead, they represent pixel
reflectance, as the images have already been atmospherically corrected.
◆ ◆ ◆
Creating color composite with TerrSet EXPLORER and
COMPOSER
1. Click on the image OLI_Oct2013_B4.
2. Press the Ctrl Key on your keyboard and, without releasing
it, click on OLI_Oct2013_B5 and OLI_Oct2013_B7. You
should have the three bands highlighted in blue.
3. Now release the Ctrl Key and right click on the selection.
4. Select the option Add Layer(s) (or hit the Ins Key while
pressing the Shift Key: Shift+Ins).
5. The three images should now be added within the same
map composition window.
6. Select the display window that contains the three bands
overlaid one on top of the other. The COMPOSER
should list the three bands.
7. In COMPOSER, select OLI_Oct2013_B7 (it should be
highlighted in blue) and click on the red square icon located
in the first row of icons at the bottom of COMPOSER.
8. Then select OLI_Oct2013_B5 and click on the green icon.
9. Finally, select OLI_Oct2013_B4 and click on the blue icon.
◆ ◆ ◆
Figure 6.6.2.b False color composite showing COMPOSER with bands 4,5
and 7 assigned to blue, green and red colors.
The resulting image (Figure 6.6.2.b) is a false color composite that displays
the shortwave infrared (OLI band 7) in red, the near infrared (OLI band 5) in
green, and the red band (OLI band 4) in blue. The color composite generated
using COMPOSER is a temporary assignment of colors to the different
bands, and it is not saved to disk as a new image. This way of generating
temporary color composites is useful when exploring different band
combinations to highlight a particular characteristic of the environment
(burnt areas in this case). Once the appropriate band combination is chosen,
COMPOSITE can be used to generate a permanent image file on disk, as
done in previous exercises.
You can see agricultural areas to the east (north is at the top of the image),
and the Cordoba hills to the west of the scene. The city of Alta Gracia
appears in blue/purple colors, and you can see the Los Molinos Dam lake to
the south. Note that the title of the image says “After Fire”. This image was
acquired after large wildfires affected the region in September 2013, fires
driven primarily by increased temperatures and high winds. The fire scar can
be identified by the deep red colors that relate to the high reflectance in the
shortwave infrared part of the spectrum. Vegetation appears green in this
color composite, since the near infrared is assigned to green, and healthy
vegetation reflects strongly in this part of the spectrum.
6.6.3 Normalized Burn Ratio
We will create a delta normalized burn ratio (dNBR) index in order to
categorize the burn severity of this fire. Figure 6.6.1.1.a shows the spectral
differences between healthy vegetation and burned areas, with burned areas
having a high reflectance in the SWIR 2 part of the spectrum (OLI band 7)
and a low reflectance in the NIR (OLI band 5). To distinguish burned areas
from healthy vegetation, using the equation in Section 6.6.1.1, we can
calculate the index for Landsat OLI as: (Band 5 – Band 7) / (Band 5 + Band
7). TerrSet does not offer a module to automatically calculate this index.
However, we saw in Section 6.4.5 that we can use the module OVERLAY
for this task.
We will first calculate the NBR index for the post fire image of October
2013.
◆ ◆ ◆
Calculating ratios with OVERLAY
Menu location: IDRISI GIS Analysis – Mathematical
Operators – OVERLAY
1. Start the OVERLAY program from the main menu or the
icon tool bar.
2. In the OVERLAY dialog box, double click in the text box
next to First image, and select OLI_Oct2013_B5 from the
Pick list. Click OK.
3. Select OLI_Oct2013_B7 for the Second image, and click
OK.
4. Enter the Output image name: NBR-postfire.
5. Select the radio button for First – Second / First + Second.
6. Click on OK.
◆ ◆ ◆
The ratio image is displayed automatically (Figure 6.6.3.a). Remember from
the description of the index in Section 6.6.1.1 that the NBR shows burned
areas with large negative values.
Figure 6.6.3.a NBR postfire OLI (Band 5 – Band 7) / (Band 5 + Band 7).
Burned areas shown with negative values.
The next step is to calculate the NBR for the pre-fire image. In this case, we
have an image of August 2013.
◆ ◆ ◆
Calculating ratios with OVERLAY
Menu location: IDRISI GIS Analysis – Mathematical
Operators – OVERLAY
1. In the OVERLAY dialog box, double click in the text box
next to First image, and select OLI_Aug2013_B5 from the
Pick list. Click OK.
2. Select OLI_Aug2013_B7 for the Second image, and click
OK.
3. Enter the Output image name: NBR-prefire.
4. Select the radio button for First – Second / First + Second.
5. Click on OK.
◆ ◆ ◆
Note that the pre-fire NBR image has positive values, or negative values very
close to zero. Figure 6.6.3.b shows the comparison between pre- and post-
fire. It is easier to compare and interpret indices if the palette is
symmetrically stretched around zero. In past chapters, we learned to change
the display minimum and maximum using TerrSet EXPLORER and
modifying the metadata. Here we will symmetrically stretch the values using
COMPOSER.
◆ ◆ ◆
Symmetric stretch with COMPOSER
1. Click on the displayed NBR-prefire image so that the
COMPOSER is activated.
2. On the bottom row of icons within COMPOSER, select the
icon in the middle. The image should stretch symmetrically
around zero.
3. Now activate COMPOSER for the NBR-postfire image by
clicking on the displayed image.
4. Click on the symmetric stretch icon (Figure 6.6.3.b).
◆ ◆ ◆
Figure 6.6.3.b Left: pre-fire NBR, Right: post-fire NBR.
Finally, we need to calculate the pixel-by-pixel delta between these two
images. We will use the module OVERLAY for this task. We want to
subtract the post-fire from the pre-fire image (NBR-prefire – NBR-postfire).
◆ ◆ ◆
Calculating differences with OVERLAY
Menu location: IDRISI GIS Analysis – Mathematical
Operators – OVERLAY
1. In the OVERLAY dialog box, double click in the text box
next to First image, and select NBR-prefire from the Pick list.
Click OK.
2. Select NBR-postfire for the Second image, and click OK.
3. Enter the Output image name: dNBR.
4. Select the radio button for First – Second.
5. Click on OK, and then Close.
◆ ◆ ◆
The final dNBR image will be displayed (Figure 6.6.3.c). Burned areas are
shown with large positive values. The larger the value, the more severe the
burn.
Figure 6.6.3.c dNBR result showing burned areas with high positive values.
As in exercise 6.4, we can now extract the burned areas by identifying
which dNBR values correspond to them. For this we will
use the module HISTO. Since fire areas are smaller than non-fire areas, we
expect a bimodal histogram, with a large peak at small dNBR values
representing areas not burned, and a smaller peak at higher dNBR values
representing the burned areas.
◆ ◆ ◆
Compute the histogram of dNBR values with HISTO
Menu location: IDRISI GIS Analysis – Database Query –
HISTO
1. Start the HISTO program from the main menu or the
toolbar.
2. Double click in the Input file name text box, and select
dNBR. Click on OK.
3. Select the radio button for Numeric.
4. Accept all other defaults.
5. Click on OK.
◆ ◆ ◆
Observing the histogram frequency (fourth column) in the resulting data table,
we see the expected two peaks: frequency values peak at 449,535 for dNBR
values between 0.01 and 0.03, decline rapidly to a local minimum of 5,846
for dNBR values between 0.21 and 0.23, and then increase slightly again. We
can therefore select the dNBR value of 0.21 as the threshold below which no
burned areas are found.
We will use the module RECLASS to make values below the 0.21 threshold
zero, while leaving unchanged the dNBR values above the threshold.
◆ ◆ ◆
Applying a threshold to an image with RECLASS
Menu location: IDRISI GIS Analysis – Database Query –
RECLASS
1. Start the RECLASS module from the main menu or
toolbar.
2. The RECLASS dialog box will open.
3. Double click in the text box for Input file, select dNBR
from the Pick list, and click OK.
4. Enter a name for the Output file: BurnedAreas.
5. Below the Output file name is the table of Reclass
parameters. Complete the first line of the table as follows:
Assign a new value of: 0
To all values from: <
To just less than: 0.21
6. Check that the dialog box has been completed correctly
(Figure 6.6.3.d).
7. Click on OK.
8. TerrSet will generate a warning notice: Warning the input
file contains real values. Would you like to convert the output
file to integer? Click on No, as we want to keep the dNBR
values above the threshold as real values.
◆ ◆ ◆
Figure 6.6.3.d The RECLASS module with parameters set to reclassify non-
burned areas to zero.
Compared to our reclassification in section 6.4, we did two things differently
in this RECLASS operation. First, we used the symbol “<” to denote
that we want to reclassify all values from the minimum of the image. The use
of this symbol is very convenient as you do not need to know the exact
minimum value. Then, we only used one line of reclassification parameters
specifying which values we wanted to convert to zero. Since we did not
specify what to do with the values above the threshold, they will remain
unchanged, resulting in an image with zero values for non-burned areas and
dNBR values for burned areas (Figure 6.6.3.e).
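The same logic can be expressed compactly outside TerrSet. The sketch below, a minimal illustration using synthetic dNBR values, mimics the HISTO and RECLASS steps: it bins the values, looks for the valley between the two histogram peaks, and zeroes out everything below it. The peak-finding here is deliberately crude and the data are invented; in the exercise, the 0.21 threshold was read from the HISTO table.

```python
import numpy as np

# Synthetic dNBR values: a large unburned peak near 0, a smaller burned peak
rng = np.random.default_rng(0)
dnbr = np.concatenate([rng.normal(0.02, 0.05, 9000),
                       rng.normal(0.45, 0.10, 1000)])

counts, edges = np.histogram(dnbr, bins=np.arange(-0.5, 1.0, 0.02))

# Find the valley between the main (unburned) and secondary (burned) peaks
peak1 = counts.argmax()
tail = counts.copy()
tail[:peak1 + 5] = 0                      # ignore bins at and near the main peak
peak2 = tail.argmax()
valley = peak1 + counts[peak1:peak2].argmin()
threshold = edges[valley]                 # roughly 0.2 for this synthetic data

# RECLASS equivalent: zero values below the threshold, keep the rest unchanged
burned = np.where(dnbr < threshold, 0.0, dnbr)
```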
Figure 6.6.3.e dNBR for only areas affected by fire.
We can see in Figure 6.6.3.e that there is variability of burn severity within
the fire scar, with areas to the east having higher severity (larger values) than
areas to the west. We also see smaller fires scattered across the region.
6.6.4 Normalized Difference Water Index
We will now use the post-fire image to evaluate vegetation moisture. We will
first explore the image creating a false color composite that highlights
differences in vegetation moisture.
◆ ◆ ◆
Creating a color composite with TerrSet EXPLORER and
COMPOSER
1. Click on the image OLI_Oct2013_B4.
2. Press the Ctrl Key of your keyboard and, without releasing
it, click on OLI_Oct2013_B5 and OLI_Oct2013_B6. You
should have the three bands selected in blue.
3. Now release the Ctrl Key and right click on the selection.
4. Select the option Add Layer(s) (or hit the Ins key while
pressing the Shift Key – Shift+Ins).
5. The three images should now be added within the same
map composition window.
6. Select the display window that contains the three bands
overlaid one on top of the other. COMPOSER should list the
three bands.
7. In COMPOSER select OLI_Oct2013_B6 (it should be
highlighted in blue) and click on the red square icon located
in the first row of icons at the bottom of COMPOSER.
8. Then select OLI_Oct2013_B5 and click on the green icon.
9. Finally, select OLI_Oct2013_B4 and click on the blue icon
(Figure 6.6.4.a).
◆ ◆ ◆
The false color composite shows healthy vegetation in green, since it reflects
highly in the NIR. Water absorbs in the SWIR; therefore, vegetation with
high moisture will look bright green, while drier vegetation will look brown.
Areas with high soil exposure have high reflectance in the red and NIR,
producing purple to magenta colors for wetter soils (with lower SWIR
reflectance) and white colors for very dry soils (with high SWIR
reflectance).
Our image was taken in October, which is the end of the dry winter months.
Clearly, the image shows very dry conditions and high soil exposure. There
are a few green areas in wetter valleys, riparian areas, pine plantations (in the
southwest), and some irrigated agricultural fields.
Figure 6.6.4.a False color composite of post fire image. Landsat 8 OLI
Bands 4,5,6 as B,G,R.
We will now calculate the Normalized Difference Water Index designed to
measure moisture in vegetation. This index was described in section 6.6.1.2.
For Landsat 8 OLI, it is calculated as (Band 5 – Band 6) / (Band 5 + Band 6).
◆ ◆ ◆
Calculating ratios with OVERLAY
Menu location: IDRISI GIS Analysis – Mathematical
Operators – OVERLAY
1. Start the OVERLAY module from the main menu or the
icon tool bar.
2. In the OVERLAY dialog box, double click in the text box
next to First image, and select OLI_Oct2013_B5 from the
Pick list. Click OK.
3. Select OLI_Oct2013_B6 for the Second image, and click
OK.
4. Enter the Output image name Oct_NDWI.
5. Select the radio button for First – Second / First + Second.
6. Click on OK.
◆ ◆ ◆
We will change the palette to a custom NDWI palette that shows dry
vegetation in brown, transitioning through white into blue to represent
vegetation with high water content (we will learn how to create palettes in
the next chapter). We will then stretch the image symmetrically using
COMPOSER.
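For reference, a symmetric stretch amounts to setting the display minimum and maximum to equal magnitudes, so that zero falls at the palette midpoint. Below is a minimal sketch of this idea, assuming a hypothetical NDWI array and a 0–255 display range; how COMPOSER implements it internally is not documented here.

```python
import numpy as np

ndwi = np.array([[-0.35, -0.10], [0.05, 0.22]])  # hypothetical NDWI values

# Symmetric stretch: center the display range on zero so the palette midpoint
# (white, in the custom ndwi palette) corresponds exactly to NDWI = 0
limit = max(abs(ndwi.min()), abs(ndwi.max()))
display_min, display_max = -limit, limit

# Scale to 0..255 display levels for rendering
scaled = np.round(255 * (ndwi - display_min) / (display_max - display_min))
scaled = scaled.astype(np.uint8)
```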
◆ ◆ ◆
Changing palette of displayed image using COMPOSER
1. Click on the displayed Oct_NDWI image to activate
COMPOSER.
2. Click on the Layer properties icon (the first one in the
second row of icons at the bottom of the COMPOSER
window).
3. In the LAYER PROPERTIES window click on the browse
icon (…).
4. Under Chap6\Chap6_6 select ndwi.
5. Click on OK.
6. Click Apply and then OK to close the LAYER
PROPERTIES window.
7. Finally, click on the symmetric stretch icon, which is the
center icon in the last row of icons at the bottom of
COMPOSER.
◆ ◆ ◆
Figure 6.6.4.b Normalized Difference Water Index calculated as OLI (Band
5 – Band 6) / (Band 5 + Band 6).
The resulting index should look like Figure 6.6.4.b. The dryness of the area is
revealed by predominant negative values, represented in brown colors. Areas
with higher vegetation moisture are shown with positive values, and are
represented with blue colors in the chosen palette. We can see pine forest
plantations having greater vegetation moisture than the surrounding
vegetation. Agricultural patches with center pivot irrigation are seen to the
east, and identified by their typical circular shape. Note that the index is
designed to measure vegetation moisture and therefore should only be
interpreted within vegetated areas. For example, index values within the lake
do not represent vegetation moisture.
Before moving to the next exercise, close all TerrSet Windows.
CHAPTER 7
INTRODUCTION TO
CLASSIFYING
MULTISPECTRAL IMAGES
This is the first of three chapters dedicated to image classification. In these
chapters, you will learn how to classify a scene based on the spectral
properties of a pixel, a procedure that is analogous to classification using just
the colors of objects. Thus, multispectral classification is quite different
from our human vision system, which is mainly based on spatial
patterns and context. Humans have no trouble identifying objects in black
and white images, but the multispectral classification programs we are using
do very poorly with such data.
Multispectral classification usually requires some knowledge of the scene.
This differentiates multispectral classification from hyperspectral
classification, where, at least in theory, it may be possible to identify classes
entirely automatically, using generic spectral reflectance libraries. The
information about the scene may come from personal knowledge of the area,
field trips, or aerial photography. In addition, this information is usually
supplemented by image interpretation by the analyst.
Classification is a grouping or generalization of the data. Thus, it involves a
simplification. Consider for a moment a hypothetical three-band, 8-bit data
set. The potential number of unique combinations of DN values is 255³, or
about 16.6 million. The number of unique combinations grows exponentially with
the number of bands. Nevertheless, the number of useful classes that can be
identified reliably in a typical multispectral image is usually quite small,
perhaps ten or so. This is partly because usually only a small number of the
potential DN combinations are found in real data, and partly because there is
considerable variation within each class. Thus, the process of multispectral
classification involves not just identifying the average DN values of each
class, but also the variability of each class.
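To make this concrete, the sketch below computes the per-band mean and standard deviation for a handful of hypothetical training pixels. Together, these two statistics are a minimal description of a class: its center and its spread.

```python
import numpy as np

# Hypothetical 3-band DN values for pixels known to belong to one class
water = np.array([[12, 30, 8],
                  [14, 28, 9],
                  [11, 33, 7],
                  [13, 29, 8]], dtype=float)

class_mean = water.mean(axis=0)  # average DN per band: the class "center"
class_std = water.std(axis=0)    # per-band variability around that center
```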
In classification, it is also important to differentiate between spectral classes
and informational classes. The spectral classes are the groups in the data;
informational classes are the map classes, the groups the analyst would like to
identify. There may be a 1:1 relationship between spectral and informational
classes, but in general that is unlikely to be so. For example, an analyst may
wish to identify the class Water. However, this class might consist of two
relatively distinct spectral classes such as deep, clear water, and shallow,
muddy water.
7.1 Introduction to Multispectral Classification
Methods
There are many classification methods. To characterize the differences
between these many methods, a number of terms are defined:
• Supervised versus unsupervised classification
• Soft versus hard classification
• Relative versus absolute classification
Understanding these terms helps illuminate some of the important
characteristics of classifiers that are dealt with in this section.
7.1.1 Supervised Versus Unsupervised
Classification
The difference between supervised and unsupervised classification relates to
when the analyst uses knowledge of the scene to guide the
classification. Unsupervised classification, which will be covered in this
chapter, uses an algorithm to identify the spectral classes, and the analyst
subsequently assigns informational class names to the algorithm-identified
spectral classes. With supervised classification (Chapter 8), the analyst
identifies regions in the image, known as training areas, to represent the
typical spectral classes that make up the informational classes. The
classification algorithm then classifies each pixel in the rest of the image
based on comparisons with training data, or more commonly, summary
properties of the training data.
In the abstract, unsupervised classification sounds like a more reliable and
less subjective process. In practice, however, both supervised and
unsupervised classification require considerable subjective judgment and skill.
7.1.2 Soft Versus Hard Classification
The cartographic tradition is that of maps comprising discrete areas, with
sharp boundaries, and distinct, contrasting characteristics. This tradition has
been transferred to remote sensing classification, where the aim is usually to
assign each pixel to only one of a number of classes. The real world is not
necessarily so simple: classes are likely to grade into one another, and a
location may have characteristics of two or more classes. For example, the
classes Clean Water and Muddy Water may represent a continuum, and
Muddy Water in turn may grade through Wetlands into Dry Land. In
addition, a pixel that falls on a boundary between two or more classes would
be expected to have attributes of both classes.
In fuzzy classification (Chapter 9), a type of soft classification, a degree of
membership in each class is generated for each pixel location, rather than just
a value representing the class number, as is the case with hard classifiers.
(Note that the TerrSet documentation makes the argument that soft
classification is preferable to fuzzy classification as a generic term.)
7.1.3 Relative Versus Absolute Classifiers
It is relatively difficult to determine if two objects are the same, since
one is forced to define how much variation is acceptable in the quality of
“sameness.” It is generally much easier to invert the process and find
differences. Thus, most classifiers are relative classifiers, assigning an
unknown pixel to a class only after comparing the unknown with each class,
and choosing the one to which it is most similar (or, the least different). In
contrast, an absolute classifier usually stops comparing an unknown pixel to
the training classes once a match has been found. Relative classifiers require
the user to identify training data for all the spectral classes, whereas absolute
classifiers only need training data for the class or classes of interest.
However, one disadvantage with absolute classifiers is that they are poor
generalizers, and often leave many pixels unclassified.
7.2 Download Data for this Chapter
For Chapters 7 and 8 we will use the same data. If you have not done so
already, download the data from the Clark Labs’ website for Chapter 7-8 and
place it into a new subfolder within the \RSGuide folder on your computer.
Note: Section 1.3.1 provides detailed instructions on how to download the
data and how to set up the RSGuide folder on your computer.
7.3 Unsupervised classification
7.3.1 Overview
The sequence of operations in unsupervised classification is summarized in
Figure 7.3.1.a.
Figure 7.3.1.a Overview of unsupervised classification.
The first step is to identify the list of informational classes based on
knowledge of the area and usually an examination of the image to determine
the likely classes that might be discriminated.
The next step is to cluster the image to produce the spectral classes, typically
many more than the expected number of final informational classes. The
analyst then views each spectral class, and develops a list of the original
spectral class numbers and the informational class number to which each
spectral class is assigned. Developing this list can be difficult, because often
the spectral class consists of more than one informational class. In other
words, when you try to decide which informational class to assign the
spectral class to, you find that there are two or even more informational
classes that seem appropriate. There are four options when this happens:
1. Label the spectral class according to its dominant informational class.
2. Create informational classes that are mixtures.
3. Start again from the beginning, choosing clustering parameters that will
give you even more classes.
4. Follow a “cluster-buster” procedure, where mixed classes are subjected to
a second round of clustering, in the hope that this will split them into purer
classes.
In this exercise, we will follow option (1) from above, and try to identify the
dominant class for each cluster.
After the relationship between each spectral and informational class is
established, the image classes are recoded to produce a new image with only
informational classes.
7.3.2 Preparation
In Section 7.2, you should have already downloaded the data. However, we
still need to set the Project and Working Folders for this section.
Before starting, you should close any dialog boxes or displayed images in the
TerrSet workspace.
◆ ◆ ◆
Create a new project file and specify the working folders
with the TerrSet EXPLORER
1. Maximize TerrSet EXPLORER.
2. In the TerrSet EXPLORER window, select the Projects tab.
3. Right click within the Projects pane, and select the New
Project Ins option.
4. A Browse For Folder window will open. Use this window
to navigate to the RSGuide folder, which is the folder you
created on your computer for the data for this manual. Now
navigate to the Chap7-8 subfolder.
5. Click OK in the Browse For Folder window.
6. A new project file Chap7-8 will now be listed in the
Project pane of the TerrSet EXPLORER. The working folder
will also be listed in the Editor pane.
7. Minimize the TerrSet EXPLORER.
◆ ◆ ◆
For the classification exercises of Chapters 7 and 8, we will investigate
multispectral classification using Landsat OLI data of the area around
Morgantown, West Virginia, USA. Begin by creating a false color composite,
in which OLI bands 4, 5, and 6 are assigned blue, green and red, as described
below.
◆ ◆ ◆
Create a color composite image
Menu Location: File – Display – COMPOSITE
1. Start the COMPOSITE program using the main menu or
tool bar.
2. In the COMPOSITE dialog box, specify the file name for
the Blue image band L8_OLI_Morgantown_B4. Click on OK
to close the Pick list.
3. Specify the Green image band as
L8_OLI_Morgantown_B5.
4. Specify the Red image band as L8_OLI_Morgantown_B6.
5. Enter the Output image filename in the text box provided:
OLI_456fcc.
6. Accept all other defaults, and click OK to create and
display the false color composite.
◆ ◆ ◆
Figure 7.3.2.a shows the resulting image, which will be displayed
automatically. The image has been annotated with the major land use types.
Figure 7.3.2.a False color composite of Morgantown, WV (Bands 4, 5, 6 as
B, G, R). Major land use types are indicated.
The false color composite image we have just created (Figure 7.3.2.a) will be
used for interpretation of the land cover types.
Before we go on to run the unsupervised classification, we need to generate a
Raster Group File for the input data. This format is useful, as it means we
can specify just one file name (the group), instead of the multitude of
individual bands. The procedure to create a raster group file is described
below briefly. If you want additional discussion on creating Raster Group
Files, you may wish to review the material in Section 1.3.7.
◆ ◆ ◆
Creating a file collection with the TerrSet EXPLORER
1. Maximize the TerrSet EXPLORER window from the menu
or the main icon toolbar.
2. Click on the tab for Files. If the files in the directory are
not listed, double click on the directory name (e.g.
RSGuide\Chap7-8), as listed in the Files pane.
3. Click on L8_OLI_Morgantown_B2.rst. The file name will
then be highlighted.
4. Press and hold the CTRL key down, and click on
L8_OLI_Morgantown_B3.rst,
L8_OLI_Morgantown_B4.rst,
L8_OLI_Morgantown_B5.rst, L8_OLI_Morgantown_B6.rst,
and L8_OLI_Morgantown_B7.rst.
5. You should now have 6 Landsat 8 OLI files highlighted.
Release the CTRL key and press the right mouse button.
6. A pop-up menu will appear. Within this menu, scroll down
to Create, and then select Raster Group File.
7. A new file should be listed in the Files pane, Raster
Group.rgf. Select that file by right clicking on the filename.
8. Select the option Rename and type: OLI_all.
9. Press Enter on the computer keyboard.
◆ ◆ ◆
7.3.3 Develop the List of Land Cover Types
Table 7.3.3.a lists five major land cover types, which will form the basis of
the informational classes for the unsupervised classification. The information
in the table should be compared against the false color composite you have
generated to see if you can identify all the classes present (Figure
7.3.2.a). The numbers given in the left-hand column will be used to represent
the classes in the final classification. Note that the table does not include the
Coal Waste class shown in Figure 7.3.2.a. The reason for this is that this class
is difficult to differentiate with unsupervised classification, and for the
moment we will not try to separate it. Instead, for this exercise, you should
include the Coal Waste as part of the Commercial/Industrial/Transportation
class. For the supervised classification (Chapter 8), we will separate out this
class.
Table 7.3.3.a Table of land cover classes for unsupervised classification.
7.3.4 Group Pixels into Spectral Classes:
CLUSTER
The CLUSTER module performs a Histogram Peak Cluster Analysis
technique. In the Histogram Peak Clustering technique, classes are
automatically identified through the evaluation of modified image
histograms. Histogram peaks are identified based on different criteria (fine or
broad). Then, each pixel in the image is associated with its nearest peak to
produce the clusters.
The advantage of this technique is that the analyst does not need to identify
the number of output clusters, as this is determined by the number of peaks
found in the histogram.
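The sketch below illustrates the principle on a single hypothetical band: find the local maxima of the histogram, then attach each pixel to its nearest peak. The real CLUSTER module operates on a multi-band histogram and applies the Broad/Fine peak criteria described below, so treat this strictly as a one-dimensional analogy.

```python
import numpy as np

values = np.array([3, 4, 4, 5, 9, 10, 10, 11, 20, 21, 21, 22])  # hypothetical DNs
counts, edges = np.histogram(values, bins=np.arange(0, 25))
centers = edges[:-1] + 0.5

# Histogram peaks: bins higher than both of their neighbors
is_peak = (counts[1:-1] > counts[:-2]) & (counts[1:-1] > counts[2:])
peaks = centers[1:-1][is_peak]

# Each pixel joins the cluster of its nearest peak (class IDs start at 1)
clusters = np.abs(values[:, None] - peaks[None, :]).argmin(axis=1) + 1
```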
CLUSTER has several parameters that need to be specified, and these affect the
classification output. TerrSet has excellent help documentation, as
already described (Section 1.3.3). You should get into the habit of checking
the on-line help to find out more about how the different modules work, or to
understand the implications of the different options offered, especially when
you are faced with options or parameters that you are unsure how to deal
with.
◆ ◆ ◆
Clustering an image with CLUSTER
Menu location: IDRISI Image processing – Hard
classifiers – CLUSTER
1. Start the CLUSTER program from the menu.
2. In the CLUSTER dialog box, click on the Insert layer group
button.
3. A Pick list window will open. Double click on OLI_all.
4. The six OLI bands (OLI_B2, OLI_B3,…. OLI_B6,
OLI_B7) should all be listed now in the CLUSTER dialog
box, in the Bands to be processed-Filename text box (Figure
7.3.4.a).
5. Type CBroad as the Output image.
6. Change the Clustering Rule to Drop least significant
clusters and the Percent to 5.
7. Leave all other defaults and click OK to run CLUSTER.
◆ ◆ ◆
Figure 7.3.4.a CLUSTER module with parameters set.
The histogram used for the evaluation of peaks is not the one from the
original image. Each image is stretched based on a number of grey levels and
percent saturation. The greater the number of grey levels and the lower the
saturation, the larger the number of peaks that could be identified, and
therefore the higher the number of output spectral classes. The default
parameters usually work well; however, if you find that you have too many or
too few classes, you can adjust them.
The generalization level (Broad or Fine) specifies how histogram peaks are
identified. Broad clustering produces fewer clusters and is good for an
overview or generalized classification, while Fine clustering separates the
spectral classes more finely and therefore provides more detail.
Finally, the clustering rule allows you to either retain all possible clusters
(with a maximum of 256 spectral classes), set a maximum number of clusters, or
remove clusters that cover less than a certain percentage of the image (i.e. are
not representative of large areas). In this case, if a cluster is smaller than 5%
of the image, the cluster is dropped and its pixels are reassigned to larger
clusters. The resulting image should look like Figure 7.3.4.b.
Figure 7.3.4.b Spectral classes from CLUSTER Broad classification.
The final image will display automatically; however, if you need to redisplay
the image, you should use the default Qualitative palette, as this is no longer
a raw image of radiance values, but a classified image.
This classification produced 10 clusters, representing 10 different spectral
classes. In the next step, we will identify to which information classes the
spectral classes correspond. This can be done through visual inspection of
each of the clusters. We will compare the cluster result to the color
composite.
7.3.5 Determine the informational classes
Although we now have a classification, we have only just begun our work.
The next stage is to build a table of values listing the spectral classes from 1-
10 (the number of spectral classes obtained in the CLUSTER module), and
the associated informational class number for each of those spectral classes.
We will use a visual interpretation of the false color composite of Bands 4, 5,
and 6 (OLI_456fcc, Figure 7.3.2.a) to determine the appropriate
informational class.
Note that you should expect at least some informational classes to have many
spectral classes. Unfortunately, the reverse relationship is not allowed, and
you cannot assign one spectral class to two informational classes.
Your final list of classes will have two columns, one each for the spectral
class (CLUSTER output), and associated informational class (final class
number). Thus, for example, you may develop a table such as:
1 2
2 3
3 1
...
10 4
There are three ways to decide on the informational class for a spectral
class. You may wish to use a combination of the methods.
• You can left click with the mouse on the color icon in the legend for the
map display, and switch between a display with just that class, shown in
red, and all other pixels in black. To make this option even more
effective, you can add the cluster result on top of the color composite and
assign the transparency layer option. In this way, when clicking on the
color chip, the class will be highlighted and all other pixels become
transparent, letting you see the color composite behind. This is the
simplest and most effective way of evaluating spectral classes. Note that
when you click a class, the value or identifier (ID) of that class appears
within the color chip (Figure 7.3.5.a).
• You can click in the image, and the class number for that pixel will be
displayed in a pop-up window. This is easiest with the classes with a
large number of pixels, such as the sinuous green cluster (Cluster 3). This
cluster can be identified as Water, by comparing it to the false color
composite.
• In a more time-consuming way of evaluating classes, you can
systematically work through the classes by setting them to distinctive
colors. This procedure is described in more detail in section 7.3.9.
Figure 7.3.5.a Class 3 highlighted with transparent display setting, letting us
see through other classes to the color composite.
For this exercise, we will use the simplest approach. The first step is to
display the result from the unsupervised classification on top of the color
composite.
First start by closing all displayed windows.
◆ ◆ ◆
Displaying multiple images with the TerrSet EXPLORER
1. Maximize TerrSet EXPLORER window from the menu or
the main icon toolbar.
2. Click on CBroad.rst. The file name will then be
highlighted.
3. While holding the CTRL key down, click on
OLI_456fcc.rst color composite.
4. Right click on the selection and from the list of options
select Add Layer(s).
◆ ◆ ◆
The two images will be displayed one on top of the other, with CBroad on
the top since it was the file that was selected first. If the color composite is on
top, you might have selected the files in a different order. You can adjust the
order of the layers by dragging them up or down within COMPOSER.
Note that the displayed image does not have a legend. We need to add it.
◆ ◆ ◆
Adding Legend with Map Properties
1. Right click anywhere on the displayed image. A window
will pop-up.
2. Select Map Properties.
3. Under the Legends tab check the Visible box for Legend 1.
4. Under Layer, click on the down arrow and select CBroad.
5. Click OK. The legend for the cluster image should be
displayed.
◆ ◆ ◆
Now we will use the transparency option of COMPOSER. This option allows
you to see through the classes that are not selected.
◆ ◆ ◆
Changing layer transparency with COMPOSER
1. In COMPOSER, select CBroad so that it is highlighted in
blue.
2. Click on the Transparent Layer icon (last icon on the first
row).
3. Click on a class to confirm that the transparency is set.
◆ ◆ ◆
If the transparency is set correctly, the icon to the left of CBroad should
change to the transparency icon. When clicking on a spectral class the pixels
associated with that class should be highlighted in red and every other pixel
should become transparent, letting you see through to the color composite, as
shown in Figure 7.3.5.a.
Explore each cluster and identify, in Table 7.3.5.a, the informational class
corresponding to each cluster. Cluster 3 has been done as an example.
You might have noticed that, in some cases, it is difficult to assign a spectral
class to a single informational class, as it may be associated with multiple
informational classes. As mentioned at the beginning of the chapter, you can
assign multiple spectral classes to one informational class, but you cannot
assign a spectral class to multiple informational classes. In those cases, you
will need to choose the informational class that best matches the spectral
class.
Table 7.3.5.a Correspondence between spectral and informational classes.
7.3.6 Assign Each Spectral Class to an
Informational Class
This part of the procedure has two steps. First, we use the TerrSet EDIT tool
to enter the list of spectral and informational classes in a table, then we use
that table with the ASSIGN module. The advantage of developing a table is
that we have a record of the assignment we made, and it is easy to reapply the
recoding operation if we want to change a few values.
◆ ◆ ◆
Enter the recode values table with EDIT
Menu location: File – Data Entry – EDIT
1. Open the program Edit from the main menu or the main
icon bar.
2. The text editor window will open.
3. In the blank window enter the spectral class number, leave
a space, and then enter the informational class number. Enter
each spectral class and the associated informational class on a
new line. Start with spectral class 1 and end with 10. For
example, if you want to assign spectral class 1 to
informational class 2, your first line would be: 1 2. Use the
class correspondence identified in Table 7.3.5.a. See Figure
7.3.6.a for an example of how the completed list might look.
4. From the Edit menu select to save the file: File – Save As.
5. A Save As dialog box will open. In the Save As type text
field, click on the pull-down list, and select Attribute Values
File. This automatically gives the file you create an AVL
extension.
6. Enter the file name reclass1.
7. Click SAVE.
◆ ◆ ◆
Figure 7.3.6.a Edit window, with reclass data.
Now, use the Attribute Values file you have just created to create a new
image, with DN values recoded according to the scheme specified in the text
file.
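Conceptually, ASSIGN performs a lookup-table recode: each spectral class ID indexes into a table that returns the corresponding informational class ID. A minimal sketch, with hypothetical class pairs standing in for the contents of reclass1.avl:

```python
import numpy as np

cbroad = np.array([[1, 2, 3],
                   [3, 10, 5]])  # hypothetical CLUSTER output (spectral class IDs)

# Spectral class -> informational class pairs (illustrative values only)
recode = {1: 2, 2: 3, 3: 1, 5: 4, 10: 4}

# Build a lookup table indexed by spectral class ID, then apply it per pixel
lut = np.zeros(max(recode) + 1, dtype=np.int32)
for spectral, informational in recode.items():
    lut[spectral] = informational

cbroad_reclass = lut[cbroad]  # the recoded classification image
```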
◆ ◆ ◆
Create a new image with informational class numbers,
using ASSIGN
Menu location: File – Data Entry – ASSIGN
1. Use the TerrSet main menu to start the ASSIGN module.
2. In the ASSIGN window, use the pick list button (…) to
specify the Feature definition image as the output of the
CLUSTER program, CBroad.
3. The Output image is the new classified map with
informational classes. Therefore, type the new file name:
CBroad_reclass.
4. The Attribute values file is the text file reclass1.
5. Enter an Output title, such as: Unsupervised CLUSTER
classification.
6. Figure 7.3.6.b shows the dialog box with the parameters
specified.
7. Click on OK.
◆ ◆ ◆
Figure 7.3.6.b ASSIGN dialog box with parameters specified.
The program will automatically display the reclassed image when processing
is complete (Figure 7.3.6.c).
Figure 7.3.6.c The reclassed classification.
The output image has the five informational classes that we identified in
Section 7.3.3 as our classification target. Note that if you want to change one
or more class assignments, you can simply do so by changing the values in
the Edit window, re-saving the file as an AVL, and running ASSIGN again
(you will be asked if you want to overwrite the original file; click Yes).
If you are satisfied with the output, you are ready to create a meaningful
legend and palette for this classification.
7.3.7 Update the Legend and Create a
Custom Image Palette File
It is important to create a legend with the names of each informational class,
as well as to display each class with an appropriate color.
◆ ◆ ◆
Update the classified image with TerrSet EXPLORER
1. Maximize TerrSet EXPLORER from the main menu, or the
main icon bar.
2. In the TerrSet EXPLORER window, click on the Files tab.
3. If the files are not listed in the Files pane, double click on
the directory name to display the files.
4. In the Files pane, click on the CBroad_reclass.rst
classified image file.
5. In the Metadata pane below the Files pane, scroll down,
and find the blank cell next to the label Categories (Figure
7.3.7.a).
◆ ◆ ◆
Figure 7.3.7.a Categories cell in the Metadata pane of TerrSet EXPLORER.
◆ ◆ ◆
Update the classified image with the TerrSet EXPLORER
(cont.)
6. Click in the blank Categories cell. The cell will turn white,
and a Pick list button (…) will appear. Click on that button.
7. A Categories dialog box will open.
8. In the first cell below Code, enter 1.
9. In the cell below Category, enter Water.
10. Find the Add line icon on the right of the Categories
dialog box (Figure 7.3.7.b). Click on this icon.
11. In the new line enter the Code 2 and the Category Forest.
12. Repeat the previous two steps in order to enter the
remaining classes on three additional rows:
3 Pasture
4 Comm / Indus / Trans
5 Residential
13. Figure 7.3.7.c shows the Categories dialog box with the
legend specified.
14. Click on OK to close the Categories dialog box.
15. The TerrSet EXPLORER Metadata pane should now have
the number 5 in the cell next to Legend cats, indicating we
have specified category names for 5 classes.
16. Click on the icon for Save, in the bottom left corner of the
Metadata pane (accept the warning message). After the file is
saved, the icon will go blank.
◆ ◆ ◆
Figure 7.3.7.b Categories dialog box with the Add line icon indicated by the
arrow.
Figure 7.3.7.c Categories dialog box with the 5 legend categories specified.
The final step is to update the classified image palette file (color scheme)
with the SYMBOL WORKSHOP.
◆ ◆ ◆
Create a color look up table for the final map with
SYMBOL WORKSHOP
Menu location: File – Display – SYMBOL WORKSHOP
1. Start the SYMBOL WORKSHOP from the main menu or
the main icon bar.
2. Once the SYMBOL WORKSHOP dialog box and window
have opened, use the window menu for File – New.
3. In the New Symbol File dialog box, click in the radio button
for Palette.
4. Enter a file name: infclasses.
5. Click on OK to close the New Symbol File dialog box.
6. The SYMBOL WORKSHOP window will now change to
red squares (Figure 7.3.7.d).
7. Place the cursor over the second cell from the top left.
Confirm from the label that will be shown that this is cell 1.
Click in this cell.
8. A color dialog box will open. Since class 1 is Water, click
on a dark blue color chip. (Figure 7.3.7.e). You can adjust the
lightness (illumination) of the color on the vertical palette
located on the right-hand side of the dialog box.
9. Click on OK to close the Color dialog box.
10. Repeat steps 8 and 9 above to specify 2 (Forest) as a dark
green, 3 (Pasture) as light green, 4 (Com/Indus/Trans) as light
blue, and 5 (Residential) as pink.
11. When you have completed specifying the five colors, save
the palette file through the SYMBOL WORKSHOP window
menu File – Save.
◆ ◆ ◆
Figure 7.3.7.d SYMBOL WORKSHOP window after specifying a new
palette file.
Figure 7.3.7.e COLOR dialog box.
Finally, redisplay your image, as described below. The displayed image
should now have the updated palette file and legend applied.
◆ ◆ ◆
Displaying an image with a custom palette file using the
DISPLAY LAUNCHER
Menu Location: File – Display – DISPLAY LAUNCHER
1. Start the DISPLAY LAUNCHER program from the main
menu, or the tool bar.
2. In the DISPLAY LAUNCHER dialog box, click on the
option to browse for a file by selecting the Pick list button
(…) in the center left column.
3. A file Pick List window will open. Double click on the
CBroad_reclass raster file.
4. In the Palette File section of the DISPLAY LAUNCHER
window, click on the Pick list button (…).
5. Select the infclasses palette file by double clicking on the
filename.
6. Click on OK to display the image (Figure 7.3.7.f).
◆ ◆ ◆
Figure 7.3.7.f Final CLUSTER classification map.
In the next exercise, we will try a different unsupervised classification
method. Before we start, close all windows.
7.3.8 Iterative Self Organizing Clustering:
ISOCLUST
Another unsupervised classification method is the Iterative Self Organizing
Cluster analysis (ISOCLUST). ISOCLUST starts the unsupervised
classification with a seed of clusters, and then adjusts the cluster centroids
iteratively in order to improve the characterization of the classes. Unlike
CLUSTER, ISOCLUST requires the analyst to specify the output number of
spectral classes. The ISOCLUST process starts by running a histogram peak
cluster analysis (CLUSTER) using the Fine clustering option. Then, a
histogram of the clusters is computed in order to identify the spectral classes
that dominate the image. Those clusters are used as seeds in the iterative
process. All pixels in the image are re-evaluated and assigned to the most
similar cluster based on a Maximum Likelihood procedure (we will see this
method in Chapter 8). This process of cluster re-organization is repeated until
all specified iterations are completed.
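This reassignment loop is similar in spirit to the well-known k-means algorithm. The sketch below substitutes a simple nearest-centroid rule for ISOCLUST's maximum likelihood assignment, so it is an analogy rather than a reimplementation; the pixel values and seed centers are hypothetical.

```python
import numpy as np

def iterative_clusters(pixels, seeds, iterations=5):
    """Nearest-centroid reassignment loop, a simplified stand-in for ISOCLUST.

    pixels: (n_pixels, n_bands) array; seeds: (k, n_bands) initial centers.
    In ISOCLUST the seeds come from a Fine CLUSTER run and assignment uses
    maximum likelihood rather than Euclidean distance.
    """
    centers = seeds.astype(np.float64)
    for _ in range(iterations):
        # Assign each pixel to its closest current center
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute each center as the mean of its assigned pixels
        for k in range(len(centers)):
            if np.any(labels == k):
                centers[k] = pixels[labels == k].mean(axis=0)
    return labels + 1, centers  # class IDs start at 1, as in TerrSet output

pix = np.array([[10, 12], [11, 13], [50, 52], [49, 55]], dtype=float)
labels, centers = iterative_clusters(pix, seeds=np.array([[0.0, 0.0], [60.0, 60.0]]))
```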
The module ISOCLUST has a number of parameters that you, as the analyst,
have to specify. These parameters affect the classification output. Although
we will explain the parameters here, we recommend reviewing the TerrSet
Help, as already described in Section 1.3.3.
The first step is to identify which bands will be used to determine the spectral
classes in the unsupervised classification.
◆ ◆ ◆
Clustering an image with ISOCLUST
Menu location: IDRISI Image processing – Hard
classifiers – ISOCLUST
1. Start the ISOCLUST program from the menu.
2. In the ISOCLUST dialog box, click on the Insert layer
group button.
3. A Pick list window will open. Double click on OLI_all.
4. The six OLI bands (OLI_B2, OLI_B3,….OLI_B6,
OLI_B7) should all be listed in the ISOCLUST dialog box, in
the Bands to be processed - Filename text box (Figure
7.3.8.a).
5. Click on Next.
◆ ◆ ◆
Figure 7.3.8.a ISOCLUST dialog box with the raster group file bands
specified.
The program will then generate an analysis of the potential classes and the
comparative number of pixels that fall into these classes. This information is
reported in a HISTOGRAM window (Figure 7.3.8.b). A new ISOCLUST
window (with a new set of control parameters) will also open.
Figure 7.3.8.b Histogram of potential classes.
The purpose of the histogram is to assist in the process of deciding on the
number of spectral classes to generate through the ISOCLUST procedure. In
examining Figure 7.3.8.b, it is apparent that most of the data fall in a small
number of classes, each with many pixels (You can change the Display graph
maximum, from its default value of 255 to 60. Then click the Update button
to refresh the histogram). The remainder of the pixels fall in a large number
of classes, each with very few pixels. Thus there is a diminishing return as
higher numbers of classes are selected.
The TerrSet on-line help suggests looking for a break in the histogram curve
to decide on a logical value. The number of spectral classes we choose should
be much larger than the number of informational classes. Since we are
interested in 5 classes, and the histogram seems to flatten out around 15. We
shall then select 15 spectral classes. This choice of 15 is somewhat arbitrary,
however, and the reader might like to experiment with different values, after
completing the initial guided exercise.
◆ ◆ ◆
Clustering an image with ISOCLUST (cont.)
6. In the ISOCLUST dialog box, specify the Number of
clusters desired as 15.
7. Leave the Number of iterations and Minimum sample size
per class (pixels) as the default values.
8. Specify the Output image filename as Isoclust15.
9. Figure 7.3.8.c shows the dialog box with the parameters
specified.
10. Click on OK.
◆ ◆ ◆
The number of iterations parameter identifies the number of times a pixel will
be re-evaluated and organized into new clusters. The larger the number of
iterations, the more refined the final classes. However, beyond some optimal
re-arrangement, increasing the number of iterations will not significantly
improve the allocation of pixels. The default usually works
well, as we are starting with a pre-organized seed of clusters. You are
encouraged to try experimenting with other numbers of iterations after
completing the exercise.
Figure 7.3.8.c ISOCLUST dialog box with clustering parameters.
The classification will take a while to process the image. You can watch the
Status bar and progress indicators in the outer TerrSet frame for messages on
progress. When the program is finished it will automatically display the
classification (Figure 7.3.8.d).
Figure 7.3.8.d Results of ISOCLUST program, with 15 classes.
Note that the output image is no longer in radiance values, but a classified
image. Therefore, when displaying this classified image, you should use the
TerrSet Default Qualitative palette, and display the legend.
7.3.9 Determine the Informational Class
As in section 7.3.5, even though we now have a classification, we need to
build a table of values listing the spectral classes from 1 to 15 (the number of
spectral classes we specified in the ISOCLUST program), and the associated
informational class number for each of those spectral classes. Here again we
will use a visual interpretation of the false color composite of Bands 4, 5,
and 6 (OLI_456fcc, Figure 7.3.2.a) to determine the appropriate informational
class; however, we will use a different approach.
This time we will systematically work through the classes by setting them to
distinctive colors. This procedure is described in more detail below. This part
of the exercise is a somewhat tedious process!
We will use the SYMBOL WORKSHOP. If you feel that the instructions are
too brief, please refer to section 7.3.7.
◆ ◆ ◆
Palette file interactive modification with SYMBOL
WORKSHOP
Menu location: File – Display – SYMBOL WORKSHOP
1. Start the SYMBOL WORKSHOP utility from the main
icon bar, or the main menu.
2. In the SYMBOL WORKSHOP window, select the menu
option for File – New.
3. In the resulting New Symbol File dialog box, select Palette
as the choice for Symbol File Type.
4. Enter the File name in the text box provided: iso15.
5. Click on OK to close the New Symbol File dialog box.
6. The SYMBOL WORKSHOP window will immediately
change from a grid of circles to a uniform red grid.
◆ ◆ ◆
We will now set up a gray scale range from 0 to 15, the maximum class
number. We will specify 0 as black, and 15 as white. The values between 0
and 15 will be progressive shades of gray between those two extremes.
◆ ◆ ◆
Palette file interactive modification with SYMBOL
WORKSHOP (cont.)
7. Run your cursor over the red cells. Note how a small
yellow rectangle appears next to the cursor. The number in
this rectangle indicates the image DN number associated with
that cell.
8. Select the upper left cell, cell 0, by clicking on it.
9. A Color dialog box will appear.
10. In the Color dialog box, click on the color chip for black
(the bottom left color chip in the Basic colors section).
11. Click on OK.
12. The SYMBOL WORKSHOP window should now have cell
zero set to black (Figure 7.3.9.a).
◆ ◆ ◆
Figure 7.3.9.a SYMBOL WORKSHOP with cell 0 set to black.
◆ ◆ ◆
Palette file interactive modification with SYMBOL
WORKSHOP (cont.)
13. Now select cell 15 (run the cursor over the cells, to
identify which cell is 15).
14. The COLOR dialog box will open. Select the chip for
white (bottom right color chip, in the Basic colors section).
15. Click OK to close the COLOR dialog box, and return to
the SYMBOL WORKSHOP window.
16. There should now be two cells that are not red: cell 0,
which is black, and cell 15, which is white.
17. In the text box labeled To, which is situated next to the
Blend button, enter 15.
18. Click on the Blend button.
19. The result should be a gray scale color ramp from black to
white for the first 15 cells (Figure 7.3.9.b).
◆ ◆ ◆
Figure 7.3.9.b SYMBOL WORKSHOP, with gray blend applied. Arrow
points to the To text box.
Having established a palette file that goes from black to white for image
values from 0 to 15, we can now modify a few selected values to other
colors. In this way, when we apply this palette file to the ISOCLUST image,
we will be able to see the general pattern of the classes in gray tones, but just
focus on a few classes at a time in the distinctive colors. Note that we start at
cell 1, and not 0, since TerrSet has assigned the first class to 1, not 0.
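The Blend operation is simply a linear interpolation between the two anchor colors. The sketch below builds an equivalent 16-entry gray ramp and overrides cells 1 through 3 with distinctive colors; the RGB values are arbitrary choices for illustration, not the TerrSet defaults.

```python
import numpy as np

# Gray ramp: cell 0 black, cell 15 white, linear blend in between
palette = np.linspace([0, 0, 0], [255, 255, 255], 16).round().astype(np.uint8)

# Override a few cells with distinctive colors to inspect three classes at a time
palette[1] = [0, 0, 255]    # class 1: blue
palette[2] = [0, 255, 0]    # class 2: green
palette[3] = [255, 0, 0]    # class 3: red

# Applying the palette to a classified image with DN values 0..15
isoclust = np.array([[1, 2], [3, 15]])
rgb = palette[isoclust]     # (rows, cols, 3) array ready for display
```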
◆ ◆ ◆
Palette file interactive modification with SYMBOL
WORKSHOP (cont.)
20. In the SYMBOL WORKSHOP window, select cell 1, and
this time set this cell to blue. (The specific blue color chip is
not important; we simply want any bright and distinctive
color.)
21. Select cell 2, and set it to green.
22. Select cell 3, and set it to red.
23. Use the menu in the SYMBOL WORKSHOP window, to
save the palette file: File – Save.
◆ ◆ ◆
Figure 7.3.9.c shows the SYMBOL WORKSHOP window, with the gray
scale ramp, and three cells set to distinctive colors.
Figure 7.3.9.c SYMBOL WORKSHOP with 3 cells set to distinctive colors.
The next step is to apply the palette file to the image produced by the
ISOCLUST program. Therefore, find the DISPLAY window, with the
Isoclust15 image already displayed. If you have closed the window, redisplay
the image.
◆ ◆ ◆
Palette file interactive modification with SYMBOL
WORKSHOP (cont.)
24. Click on the map window that displays Isoclust15. This is
to bring the image to the front of your screen (i.e., active).
25. Now find the Composer window. This window is
automatically opened whenever a DISPLAY window is
opened.
26. In the Composer window, click on Layer Properties.
27. The Layer Properties window will open.
28. In the Layer Properties window, select the Display
parameters tab.
29. Click on the Pick List icon (…) next to the Palette file text
box, and in the Pick List that will open, double click on the
name of the palette file you have just created: Iso15.
30. Click on Apply. The image should now be displayed in
black and white, except for the selected classes we have set to
blue, green and red, respectively (Figure 7.3.9.d).
31. Compare the patterns in the classified image to the false
color composite (OLI_456fcc), which should be redisplayed
if necessary.
32. Note the informational class numbers for each spectral
class. (Hint: Classes 1 and 2 appear to be forest, while class 3
appears to be Commercial / Industrial / Transportation. Forest
is class number 2 and Comm/Indus/Trans, class 4 – see Table
7.3.3.a).
◆ ◆ ◆
Figure 7.3.9.d ISOCLUST image with the custom palette applied.
Now that you have decided on the informational class numbers for the first 3
classes, we will modify the first three cells back to the gray scale
ramp, and assign DN values 4, 5, and 6 to blue, green and red, respectively.
◆ ◆ ◆
Palette file interactive modification with SYMBOL
WORKSHOP (cont.)
33. Find the SYMBOL WORKSHOP window.
34. Click on the button for Blend. This should remove the
colors that were applied to cells 1, 2 and 3, and the color ramp
should once again be from black to white without additional
colors in between.
35. Click on cell 4, and select the blue color chip.
36. Click on cell 5, and select the green color chip.
37. Click on cell 6, and select the red color chip.
38. Save the file by using the SYMBOL WORKSHOP window
menu for File - Save.
39. Find the DISPLAY window with the Isoclust15 image.
Click in that window.
40. In the Composer window, click on Layer Properties.
41. In the Layer properties window, click on Apply to update
the palette file used for displaying the image.
42. Determine which informational class each spectral cluster
belongs to.
◆ ◆ ◆
At this stage you should have a list of the spectral class numbers and the
associated informational class numbers for the first 6 of the 15 classes.
Repeat this procedure for the remaining 9 classes, so that you have a
complete table of values.
7.3.10 Reassign Each Spectral Class to an
Informational Class
This part of the procedure has two steps. First we will use the TerrSet
program EDIT to enter the list of spectral and informational classes in a table,
then we use that table in the program ASSIGN. The advantage of developing
a table is that we have a record of the assignment we made, and it is easy to
reapply the recoding operation if we want to change a few values.
◆ ◆ ◆
Enter the recode values table with EDIT
Menu location: File – Data Entry – EDIT
1. Open the program EDIT from the main menu or the main
icon bar.
2. The TerrSet TEXT EDITOR window will open.
3. In the blank window enter the spectral class number, leave
a space, and then enter the informational class number. Enter
each spectral class and the associated informational class on a
new line. Start with spectral class 1 and end with 15. For
example, if you want to assign spectral class 1 to
informational class 2, your first line would be: 1 2. See
Figure 7.3.10.a for an example of how the completed list
might look.
4. From the Edit menu select to save the file: File – Save As.
5. A SAVE AS dialog box will open. In the Save as type text
field, click on the pull-down list, and select Attribute Values
File. This automatically gives the file you create an AVL
extension.
6. Enter the file name reclass2.
7. Click SAVE.
8. A Values File Information dialog box will open. Take the
default Integer option, and click on OK.
◆ ◆ ◆
Figure 7.3.10.a TerrSet TEXT EDITOR window, with reclass data.
Now, use the Attribute Values file you have just created to create a new
image, with DN values recoded according to the scheme specified in the text
file.
◆ ◆ ◆
Create a new image with informational class numbers,
using ASSIGN
Menu location: File – Data Entry – ASSIGN
1. Use the TerrSet main menu to start the ASSIGN program.
2. In the ASSIGN window, use the Pick List button (…) to
specify the Feature definition image as the output of the
ISOCLUST program, Isoclust15.
3. The Output image is the new classified map with
informational classes. Therefore, type the new file name:
Isoclust15_reclass.
4. The Attribute values file is the text file reclass2.
5. Figure 7.3.10.b shows the dialog box with the parameters
specified.
6. Click on OK.
◆ ◆ ◆
Figure 7.3.10.b ASSIGN dialog box with parameters specified.
The program will automatically display the reclassed image when processing
is complete (Figure 7.3.10.c.).
Figure 7.3.10.c The reclassed classification.
If you are satisfied with the output, you are ready to proceed to the next
section (7.3.11). However, if you decide that you would prefer to change one
or more class assignments, you can simply edit the values you want to change
in the Edit window, and click on Save. Then find the ASSIGN window, and
click on OK. You will be asked if you want to over-write the original file.
Click Yes. A new image will automatically be displayed.
7.3.11 Update the Legend and Create a
Custom Image Palette File
The final output is almost complete. However, we need to enter the
informational class names and choose colors for each class. Since we already
created an appropriate legend and palette in Section 7.3.7, we will just apply
them to this new classification.
◆ ◆ ◆
Update the classified image with the TerrSet EXPLORER
1. In the TerrSet EXPLORER window, click on the tab for
Files.
2. If the files are not listed in the Files pane, double click on
the directory name to display the files.
3. In the Files pane, click on the Isoclust15_reclass.rst
classified image file.
4. In the Metadata pane below the Files pane, scroll down,
and find the blank cell next to the label Categories (as you did
in 7.3.7 - Figure 7.3.7.a).
5. Click in the blank Categories cell. The cell will turn white,
and a Pick list button (…) will appear. Click on that button.
6. A Categories dialog box will open.
7. Click on Copy from… and a pick list will open.
8. Select CBroad_reclass from the Pick List, and click OK
(Figure 7.3.11.a).
9. Click OK. The categories should be automatically
completed with the names of the 5 classes (Figure 7.3.11.b).
10. Save the Metadata (accept the Warning message).
◆ ◆ ◆
Figure 7.3.11.a Categories dialog box with the Copy from option clicked and
the pick list open.
Figure 7.3.11.b Categories dialog box with the 5 legend categories
specified.
The final step is to re-display the image with the palette file (color scheme)
created in section 7.3.7. You can refer back to earlier in this exercise (Section
7.3.9) for more details, if the instructions below are too brief.
◆ ◆ ◆
Menu Location: File – Display – DISPLAY LAUNCHER
1. Start the DISPLAY LAUNCHER program from the main
menu, or the tool bar.
2. In the DISPLAY LAUNCHER dialog box, click on the
option to browse for a file by selecting the Pick List button
(…) in the center left column.
3. A file Pick List window will open. Double click on the
Isoclust15_reclass raster file.
4. In the Palette File section of the DISPLAY LAUNCHER
window, click on the Pick list button (…).
5. Select the infclasses file by double clicking on it.
6. Click on OK to display the image (Figure 7.3.11.c).
◆ ◆ ◆
The final output should look similar to Figure 7.3.11.c.
Figure 7.3.11.c Final ISOCLUST classification map.
7.4 Comparing classifications
We have now classified two images for the same region. Let’s compare these
two unsupervised classification outputs. A simple visual inspection of the two
images reveals that the main patterns are similar, although with some
differences. For example, the water and commercial areas in the output from
ISOCLUST seem more refined than in the CLUSTER result. We can visualize
the similarities and differences between the two classifications using a pixel
cross tabulation. A cross tabulation performs a pixel overlay union, where the
output represents the combination of classes in the two input images. For
example, if a pixel was identified as class 1 in the first classification and as
class 2 in the second classification, the crosstabulation output will assign a
new class ID to represent the combination of the two classes, i.e. 1 | 2.
The module CROSSTAB in TerrSet allows us to do this pixel union,
comparing the results from CLUSTER and ISOCLUST. In Chapter 11 we
will use it to detect changes across time.
◆ ◆ ◆
Overlay union operation with CROSSTAB
Menu location: IDRISI GIS Analysis – Database Query –
CROSSTAB
1. Open the CROSSTAB module from the main TerrSet menu
(Figure 7.4.a).
2. In the CROSSTAB window, leave the default Hard
Classification and double click in the text box next to First
image (column). In the resulting pick list, double click on
CBroad_reclass.
3. Double click in the text box next to Second image (row). In
the resulting pick list, double click on Isoclust15_Reclass.
4. In the region marked Output type, select the radio button
for Both cross-classification and tabulation.
5. In the Output image text box, enter Crosstab.
6. Click on OK.
◆ ◆ ◆
Figure 7.4.a CROSSTAB window with parameters specified.
The output of the CROSSTAB program is a map (Figure 7.4.b) and a table (Table 7.4.a) that show all similarities and differences between the two classifications. Note that, since the result of your classifications may be slightly different, the map and table will also differ from the ones shown; however, the overall pattern should be similar.
Figure 7.4.b Crosstabulation of the reclassified result of CLUSTER and
ISOCLUST.
Although each possible combination of classes has its unique pixel ID, the legend is composed of pairs of numbers separated by a vertical bar. The first legend category, 1|1, represents the combination of class 1 in CBroad_reclass and class 1 in Isoclust15_reclass. Since we reclassed both the CLUSTER and ISOCLUST results to have the same informational classes, class 1 in both images represents water. Because of this, the 1|1 category in the CROSSTAB output represents all areas that were classified as water with both classification methods. The category 1|2, on the other hand, represents locations that were classified as water in CBroad_reclass, but classified as forest in the Isoclust15_reclass image.
If you click on the color chip for category 1|2, the category will be
highlighted in red. Doing this, you can see the spatial pattern of the
differences. In this case, areas that were classified as water in
CBroad_reclass and as forest in Isoclust15_reclass correspond to forest with
shadows. You can explore in this way the differences between the two
classifications.
The table result (Table 7.4.a) gives information about the number of pixels in each combination of classes. In this case, the class number in the
columns corresponds to the CBroad_reclass image, while the class number in
the rows corresponds to the Isoclust15_reclass image. The table diagonal
(1|1, 2|2, 3|3, 4|4, 5|5) shows the number of pixels that were consistently
assigned to the same class with both classification methods. Table cells off
the diagonal represent the number of pixels where the two classifications
differ. The overall Kappa gives a measure of agreement between the two
images, where values closer to 1 represent a better agreement.
Table 7.4.a CROSSTAB Tabular result.
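To make the computation concrete, the following short Python sketch (our own illustration with hypothetical names, not TerrSet code) cross-tabulates two classified arrays and computes the overall Kappa described above.

import numpy as np

def crosstab_kappa(class_a, class_b, n_classes):
    """Cross-tabulate two classified images and compute overall Kappa.

    class_a, class_b: integer arrays of class IDs (1..n_classes).
    Returns the contingency table (columns = first image, rows = second
    image, matching the convention in the text) and the overall Kappa.
    """
    a, b = class_a.ravel(), class_b.ravel()
    table = np.zeros((n_classes, n_classes), dtype=np.int64)
    np.add.at(table, (b - 1, a - 1), 1)        # count each class pair

    n = table.sum()
    observed = np.trace(table) / n             # proportion on the diagonal
    expected = (table.sum(0) * table.sum(1)).sum() / n**2  # chance agreement
    return table, (observed - expected) / (1 - expected)

# Toy example: two 5-class "classifications" of a 100 x 100 scene,
# agreeing on roughly 80% of the pixels.
rng = np.random.default_rng(0)
img1 = rng.integers(1, 6, size=(100, 100))
img2 = np.where(rng.random((100, 100)) < 0.8, img1,
                rng.integers(1, 6, size=(100, 100)))
table, kappa = crosstab_kappa(img1, img2, n_classes=5)
print(table)
print(f"Overall Kappa: {kappa:.3f}")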
CHAPTER 8
SUPERVISED
CLASSIFICATION
8.1 Background
In the introduction of Chapter 7 we described the differences between
supervised and unsupervised classification. This chapter concentrates on
methods of supervised classification.
Figure 8.1.a illustrates the steps involved in supervised classification. First,
we determine which informational classes our classification will have. Then,
we identify small areas of known land cover. These areas are called training sites. The image DN values (or reflectance values, if the image was transformed) within the training sites are generalized to represent the typical spectral properties of the training classes, and are then used to create signatures for each class. Finally, the rest of the image is compared on a pixel-by-pixel basis to the generalized data of the training classes, to determine which class each pixel belongs to.
Figure 8.1.a Overview of the Supervised Classification Process.
The first step is to define the training sites. Each known land cover type will
be assigned an integer identifier, and one or more training sites will be
identified for each integer.
8.2 Preparation
It is recommended that you do this exercise only after completing the
unsupervised classification of Section 7.3. If you choose to do this section
first, or have not saved your results from Section 7.3, you will need to
complete Section 7.2 (copy the data) and 7.3.2 (set the project, create a false
color composite and a raster group file) before continuing with this exercise.
Open the TerrSet EXPLORER from the main menu or the main icon bar,
click on the Projects tab, and check that the radio button for the Chap7-8
project has been selected.
8.3 Develop the List of Land Cover Types
In Section 7.3.3 a list of five major land cover classes was developed. In this
exercise, we will add a sixth class, Coal Waste. We did not separate out this
class in the unsupervised classification (Section 7.3) because coal waste was
not clearly separable in that exercise. You will see, however, that it is at least
partially separable through supervised classification. Table 8.3.a lists the full
six classes for this exercise. The numbers in the table will be the DN used to
represent the classes in the final classification. Figure 7.3.2.a shows the false
color composite with the examples of each class indicated.
Table 8.3.a Supervised classification classes.
8.4 Digitize Training Classes
Before we begin digitizing, a general background and tips regarding
digitizing may help improve the effectiveness of your work.
• Select typical areas. The training polygons are supposed to represent
the typical properties of the class. Thus, it is important that your training
sites are as homogeneous as possible (i.e. they should contain only that
land cover type, and typically only one dominant color on the monitor).
However, the training sites should also cover the range of typical values
for that cover-type.
• Digitize efficiently. Do not spend hours digitizing each class - it is
important to be able to outline a "typical" area rapidly. You probably
should zoom in (i.e. expand the image on the screen), to make digitizing
easier. However, it is not worthwhile to zoom in so much that you are
digitizing individual pixels. In most cases you should be able to digitize a
polygon quite rapidly.
• Digitize a reasonable number of pixels. Your training sites should
contain an adequate sample of pixels for statistical characterization. A
general rule of thumb is that the number of pixels in each training set (i.e.
the total over all sites for a single spectral class) should not be less than
10 times the number of bands. Therefore, since we will use 6 bands of
OLI data, we should aim to have no less than 60 pixels per signature. It is
not necessary to count the number of pixels when you are digitizing the
polygon, as you will be warned later if the polygons are too small.
However, if you think a polygon is too small, or you don’t think the
polygon encapsulates the range of values in the class, you can always
digitize several polygons, all with the same number, and that will
automatically be combined later into one large spectral class.
• Bear in mind the difference between spectral classes and
informational classes. Many classification methods require the
distribution of values within the training sites to be unimodal. Because of this, it is important not to include different spectral classes for the same informational class within the same training site. For example, assume you
have two water bodies, one with clean, deep water, and one with shallow,
muddy water. This might give rise to two very different spectral classes
(Clean Water and Muddy Water), even though we would regard them as
one informational class (Water). Multiple training sites can be grouped
into one informational class by simply assigning the polygons the same
value (DN value). Alternatively, you might want to have different
numbers for each training site. With this latter approach, once you are
finished with the entire classification, you would then go back and reclass
the different spectral classes as one informational class, as we did when
combining spectral classes into informational classes in Section 7.3.6 (by
using the TerrSet modules ASSIGN or RECLASS). In this exercise,
however, we will take the simpler approach of assigning multiple spectral
classes to one DN value.
• Keep careful track of the class name you are digitizing. The most
common problem in this exercise is losing track of the class you are
digitizing. Always make a conscious effort to keep track of which class
you are digitizing. This will help avoid confusion.
We now need to display an image to digitize on. Therefore, as described below, open the false color composite created earlier in Section 7.3.2.
◆ ◆ ◆
Displaying a false color composite using the DISPLAY
LAUNCHER
Menu Location: File – Display – DISPLAY LAUNCHER
1. Start the DISPLAY LAUNCHER program from the main
menu, or the tool bar.
2. In the DISPLAY LAUNCHER dialog box, click on the
option to browse for a file by selecting the Pick list button
(…) in the center left column.
3. A Pick List window will open. Double click on the
OLI_456fcc raster file to insert this file name into the
DISPLAY LAUNCHER dialog box.
4. Click on OK to display the image.
◆ ◆ ◆
The next step is to digitize the training polygons, at least one for each
informational class, on the image we have just opened.
◆ ◆ ◆
Digitizing polygons
Menu location: Main tool bar icons only (Note that the
icons are grayed out if an image is not yet displayed.)
1. An image should already be open in the TerrSet workspace.
2. Click on the Full extent maximized icon from the main tool
bar to enlarge the image display to the maximum possible
within the constraints of your monitor.
3. Zoom in to a region with a large area of water. To do this,
first select the Zoom Window icon from the main tool bar.
4. Now, use the left mouse button to click in the image, and,
with the left button still depressed, stretch the screen box to
delineate the area you want to zoom into. Release the left
mouse button, and the display should only show the zoomed
area, much enlarged.
5. Click on the Digitize icon. A dialog box labeled Digitize
will open.
6. Enter the file name Water in the text box labeled Name of
layer to be created. Leave all other values at their defaults.
7. Click on OK to close the Digitize window.
8. Move the cursor over the image. Note how when the cursor
is over the image, it takes on a form similar to the Digitize
icon, instead of the normal arrow.
9. Move the cursor over the water feature. Digitize the first
vertex of the polygon that will represent Water, by pressing
the left mouse button. Continue clicking in the image to
specify the outer boundaries of the polygon you wish to
digitize.
10. Close the polygon by pressing the right mouse button.
Note that the program automatically closes the polygon by
duplicating the first point; so you do not need to attempt to
close the polygon by digitizing the first point again.
11. Figure 8.4.a shows an example of what the Water
digitized polygon might look like.
◆ ◆ ◆
Figure 8.4.a Digitized Water training polygon.
Correcting a mistake is relatively easy. You can delete an entire polygon,
after it has been closed, by following the procedure below. The first step,
perhaps somewhat counter-intuitively, is to save the polygon.
◆ ◆ ◆
Digitizing polygons (cont.): Deleting a polygon
12. Click on the Save digitized data icon in the main TerrSet
toolbar.
13. In the Save Prompt window, click on Yes.
14. Click on the Delete Feature icon in the main TerrSet
toolbar.
15. After clicking on the icon, the cursor becomes a pointing
hand.
16. Click on the polygon you wish to delete, and it will be
highlighted.
17. Now press the delete key on the computer keyboard.
18. A Delete feature dialog box will open.
19. Select Yes, if you wish to confirm deleting the polygon,
otherwise click No.
◆ ◆ ◆
The polygon we have just digitized is in an area of deep water. To the south
(i.e. bottom of the image), that same body of water is quite shallow, and the
spectral properties differ from the area we have just digitized (these
differences are more evident in a true color composite). Additionally, the
river to the west (left) is also a little different spectrally. This variability
within informational classes can be handled in different ways. If the spectral
differences are pronounced, it is better to keep them as different spectral
classes in order to increase signature separability. If the spectral differences
are small, we can consider them all part of the same spectral signature. In this
classification, we will assume that the spectral differences within
informational classes are minimal. Therefore, we will add two more polygons
to this vector layer, using the same number to represent all three polygons.
For this additional digitizing of the water class, and for the subsequent
digitizing of additional classes, it will be helpful to get a general idea of the
size and distribution of the training classes from Figure 8.4.b.
Figure 8.4.b Training classes for supervised classification.
◆ ◆ ◆
Digitizing polygons (cont.): Adding additional polygons to
a class
20. Return the view to the original full image, by clicking on
the Full extent normal icon in the main TerrSet tool bar.
21. Then click on the Full extent maximized icon.
22. Use the Zoom window icon to zoom in on the bottom right
hand corner of the image, where the river channel is very
narrow. As the river is narrow, and it will be harder to digitize
here, you should zoom in a great deal. If necessary, you can
repeat the zoom function to zoom in multiple times.
23. Click on the Digitize icon.
24. A dialog box labeled Digitize will open.
25. Accept the default to Add features to the currently active
vector layer, and click on OK.
26. The Digitize dialog box will now show the digitizing
parameters. It is very important that we change the number in
the ID or Value text box to the value of the previous polygon,
since we want to add it to that class, and not have a new
spectral class. Therefore, the ID or Value text box, which
currently should show the number 2, should be changed to 1.
27. Click on OK.
28. Now digitize the river channel. Be sure not to include any
pixels from the forested river banks.
29. Close the polygon by clicking with the right mouse
button. If you feel the polygon is not quite what you want,
review the material above for deleting a polygon, and try
again.
30. Now digitize a third polygon for the Water class, this time
for the river on the west (left) side of the image. Remember to
set the ID or Value to 1 in the Digitize dialog box.
31. If you are satisfied with this third polygon, save the
polygon file by clicking on the Save digitized data icon.
32. A Save prompt dialog box will open. Click on Yes.
◆ ◆ ◆
We will now add the Forest class. In order to make our work simpler, each
time we add a new class, we will digitize it in a separate file. An alternative
approach is to digitize all the classes in the same file. However, digitizing in
one file can get confusing.
◆ ◆ ◆
Digitizing polygons (cont.): Digitizing a second class
33. Return the view to the original full image, by first clicking
on the Full extent normal icon in the main TerrSet tool bar.
34. Then click on the Full extent maximized icon.
35. Use the Zoom window icon to zoom in on the bottom right
hand corner of the image, which is dominated by forest.
Include both the poorly illuminated (darker color) steep
slopes and brightly illuminated (brighter color) shallow
slopes.
36. Click on the Digitize icon.
37. A dialog box labeled Digitize will open.
38. Click on the radio button to Create a new layer for the
features to be digitized.
39. Click on OK.
40. The Digitize dialog box will now provide a text box for
the Name of layer to be created. Enter the new class name:
Forest.
41. Digitize a polygon that includes both poorly illuminated
forest areas and brightly illuminated areas. Close with a right
click.
42. Use the Zoom window icon to zoom in to the center of the
image, to get another example of forest (you can follow the
locations in Figure 8.4.b).
43. Click on the Digitize icon.
44. A dialog box labeled Digitize will open.
45. Accept the default to Add features to the currently active
vector layer, and click on OK.
46. The Digitize dialog box will now show the digitizing
parameters. Remember to change the number in the ID or
Value text box to the value of the previous polygon, 1.
47. Click on OK.
48. Now digitize the forest in this area. Be sure not to include
pixels from other classes (e.g. residential areas).
49. Close the polygon with a right click. Save the file by
clicking on the Save digitized data icon.
50. A Save prompt dialog box will open. Click on Yes.
◆ ◆ ◆
There are a couple of shortcuts you can use when digitizing. If you want to close a polygon and continue digitizing with the same ID, as we did with forest and water, you can close the polygon by right clicking while pressing the Shift key. Alternatively, if you want separate spectral signatures for the same class, or want to change the ID of the next digitized feature, you can close the polygon by right clicking while pressing the Alt key on the keyboard.
◆ ◆ ◆
Digitizing polygons (cont.): Digitizing other classes
51. Repeat steps 36 to 39 for the next class, Pasture. Note
that the pasture and grass class comprises mostly agricultural
pasture, which has a distinctive pink to brown color (Pasture 1
in Figure 8.4.b). However, the class also includes golf
courses, which are irrigated, and therefore have a different
color, yellow (Pasture 2 in Figure 8.4.b). Therefore, it will be
necessary to digitize two polygons for this class, as we did for Water. Follow the procedure outlined in steps 20 to 32, being sure to set the ID or Value text box to 1 when you digitize the second polygon in the Pasture file.
52. Repeat steps 36 to 39 for Residential. The Residential
class is a difficult class because it is characterized more by a
rough texture than a distinct color. Select two polygons as
indicated in Figure 8.4.b, by following the procedure outlined in steps 20 to 32.
53. Now digitize the class Commercial. The Commercial
class has a distinctive dark blue color and a rough texture.
54. Because the area of the region digitized for the first
Commercial polygon is quite small (indicated by the polygon
labeled Commercial 1 in Figure 8.4.b), we will need to add a
second polygon for this class (indicated by Commercial 2 in
Figure 8.4.b). Follow the procedure outlined in steps 20 to 32, being sure to set the ID or Value text box to 1.
55. Now digitize the class Coal. The region shown in Figure
8.4.b for the class Coal consists of a large settling pond,
where coal is cleaned to remove waste.
56. Close the image before proceeding to the next step. If
there are any unsaved polygons, you will be prompted
whether you wish to save them. Click on Yes in the
Save/Update dialog box to save the polygons.
◆ ◆ ◆
8.5 Characterize training statistics
8.5.1 Generate Class Signatures with
MAKESIG
Once you are satisfied with your digitized training sites, you can create signature files for each cover class. These files contain statistical information about the reflectance values in each band for each site, derived from the polygons we have just digitized.
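To make the idea of a signature concrete, the short Python sketch below (our own illustration under stated assumptions, not MAKESIG's file format) computes the kind of per-band statistics a signature file summarizes, from the pixels inside a hypothetical training mask.

import numpy as np

def make_signature(bands, training_mask):
    """Summarize the training pixels of one class, band by band.

    bands: array of shape (n_bands, rows, cols) of DN values.
    training_mask: boolean array (rows, cols), True inside the digitized
    training polygons for this class.
    """
    pixels = bands[:, training_mask]           # shape (n_bands, n_pixels)
    return {
        "n_pixels": pixels.shape[1],
        "mean": pixels.mean(axis=1),
        "std": pixels.std(axis=1, ddof=1),
        "min": pixels.min(axis=1),
        "max": pixels.max(axis=1),
        "cov": np.cov(pixels),                 # band-to-band covariance
    }

# Toy example: 6 bands and one rectangular training polygon of
# 10 x 15 = 150 pixels, comfortably above the 60-pixel rule of thumb.
rng = np.random.default_rng(1)
bands = rng.integers(0, 256, size=(6, 200, 200)).astype(float)
mask = np.zeros((200, 200), dtype=bool)
mask[50:60, 80:95] = True
sig = make_signature(bands, mask)
print(sig["n_pixels"], sig["mean"].round(1))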
◆ ◆ ◆
Create signature files using MAKESIG
Menu location: IDRISI Image Processing – Signature
development – MAKESIG
1. Start the MAKESIG program from the main menu.
2. In the MAKESIG dialog box, click on the Pick List button
(…) to the right of the text box of Vector file defining training
sites.
3. In the Pick List window, double click on the Water file.
4. In the MAKESIG dialog box, click on the button Enter
signature filenames.
5. An Enter signature filenames dialog box will open. In the
blank cell to the right of 1, type Water.
6. Uncheck the box labeled Create signature group file
(Figure 8.5.1.a).
7. Click on OK to close the Enter signature filenames dialog
box.
8. In the MAKESIG dialog box, click on the button for Insert
layer group.
9. From the Pick list, double click on OLI_all.
10. The MAKESIG dialog box should now show the six bands
of the OLI data (Figure 8.5.1.b).
11. Note: in the MAKESIG dialog box the text box for the
minimum sample size has been set automatically to 60 (10
pixels x the number of bands).
12. Click OK.
◆ ◆ ◆
Figure 8.5.1.a Enter Signature file names dialog box.
Figure 8.5.1.b MAKESIG dialog box with parameters set.
After the program has completed, the MAKESIG dialog box will remain
open, because of TerrSet’s persistent windows.
If you have sufficient pixels in each class, the program should execute
normally, and you will see a brief indication that the program is running in
the status bar of the TerrSet workspace.
However, if you do not have enough pixels, you will receive a warning
message. You will then need to either (a) increase the number of pixels in the class, or (b) reduce the threshold for the number of pixels required. Adding
additional pixels is relatively straightforward, and is described below.
However, if the program did end normally, you can skip the instructions below.
◆ ◆ ◆
Digitizing additional polygons in an existing vector file
Menu location: Main tool bar – Icons only
1. An image should already be open in the TerrSet workspace,
with the training polygons overlaid. If not, first display the
image OLI_456fcc. Then add the vector file you wish to add
to by clicking on the Composer window button for Add layer.
Add the vector layer you wish to work on, and then select the
radio button for a Qualitative Symbol file.
2. If the image is still open from the Digitizing exercise, find
the Composer window, and click on the name of the class you
wish to add to. This will highlight the name of the class in
Composer window. Figure 8.5.1.c shows the Commercial
class selected.
3. You can now follow the procedures you are already
familiar with to add an additional polygon, as summarized
below.
4. Maximize the viewer by clicking on the Full extent normal
icon in the main TerrSet tool bar.
5. Click on the Full extent maximized icon. This enlarges the
display window to the largest size possible with the TerrSet
window.
6. Zoom in to the area of interest by clicking on the Zoom
window icon. Draw a rectangle around the area of interest.
7. Click on the Digitize icon.
8. When the Digitize dialog box opens, take the option to Add
features to the currently active vector layer, and remember to
change the ID or Value back to 1.
9. Digitize with the left mouse button.
10. Close the polygon with the right mouse button.
11. Save the file when you are done.
12. You will then need to rerun the MAKESIG program for
the class for which you added one or more new polygons.
◆ ◆ ◆
Figure 8.5.1.c Selecting a vector layer prior to adding additional polygons.
So far we have created only one signature file, for Water. We now need to
create the signature files for the remaining classes. Each signature will be in
its own separate file.
◆ ◆ ◆
Create signature files using MAKESIG (cont.): Remaining
signatures
1. The MAKESIG dialog box should still be open from
creating the Water signature. The Bands to be processed
should still list the OLI bands 2 through 7.
2. In the MAKESIG dialog box, click on the Pick List button
(…) for Vector file defining training sites.
3. In the Pick list window, double click on the Forest file.
4. In the MAKESIG dialog box, click on the button Enter
signature filenames.
5. An Enter signature filenames dialog box will open. In the
cell to the right of 1, type Forest.
6. Ensure the check box for Create signature group file is not
checked.
7. Click on OK to close the Enter signature filenames dialog
box.
8. Click on OK in the MAKESIG dialog box, to create the
Forest signature file.
9. Repeat steps 2 through 8 above to create the signature files
for the remaining vector files: Pasture, Commercial (this
class also includes Industrial and Transportation, but we
shorten the name to Commercial for simplicity’s sake),
Residential and Coal (the Coal Waste class).
◆ ◆ ◆
If you need to recreate any signature (perhaps you are unhappy with the signature, something didn't work, or at a later stage you decide you did not collect all the cover classes needed), re-display the image using DISPLAY, digitize the boundaries of the new polygon, save the file (either with a new name, or with the old name, thus over-writing the file), and then run MAKESIG again on the new vector file.
8.5.2 Group the Class Signatures into a
Single Signature Collection
We have now created the six signatures, each of which is in a separate
file. Although we could now proceed to the classification step, it is
convenient to group the signature files into a single collection. This will make
repeat handling of the files simpler, in that we will then only need to specify
the collection, instead of each file individually. Section 1.3.7 discusses
working with collections in some detail, and you may wish to review that
section if the instructions below are not sufficient. In addition, in Section
5.3.4, we worked with a raster collection to streamline the PCA decorrelation
stretch. In this exercise, we work with a signature collection, but the principle
is the same.
◆ ◆ ◆
Creating a signature file collection with the TerrSet
EXPLORER
1. Maximize the TerrSet EXPLORER.
2. In the TerrSet EXPLORER window, click on the tab for
Filters.
3. Uncheck all the boxes that are checked (these are the
default files that will be displayed in the TerrSet EXPLORER
window). You can do this by right clicking and selecting the
option Clear Filter.
4. Now click on the box to check the option to display
Signature (*.sig, *.spf) files (Figure 8.5.2.a).
5. Click on the tab for Files.
6. If the files are not listed in the Files pane, double click on
the directory name to display the files.
7. The Files pane should list the six signature files you have
created. Each signature will be listed twice: once as a *.sig
file, and once as a *.spf file. We will only work with the *.sig
files.
8. Highlight the signature files for the six signatures in this
order: Water.sig, Forest.sig, Pasture.sig, Commercial.sig, Residential.sig, and Coal.sig. Select multiple files by
clicking on each file sequentially, while simultaneously
pressing the Ctrl key on the computer keyboard.
9. Right click in the Files pane. Select the menu option for
Create – Signature Group (Figure 8.5.2.b).
10. Check the Metadata pane to see that the six signatures are
listed in the same order as in step 8 above, i.e. Water.sig as
Group item (1), Forest.sig as Group item (2), etc. If the order is not the same, you should redo steps 8 and 9.
◆ ◆ ◆
The order of the files will correspond with the order of the classes in the
classification. The first item in the signature group will be class 1, the second
signature class 2, and so on. If the order in your signature group is different,
the classification output will look different, as classes will have different IDs
than the ones in the book.
Figure 8.5.2.a TerrSet EXPLORER Filters pane, with Signature and
Signature Group filter checked.
Figure 8.5.2.b TerrSet EXPLORER Files pane, with signatures selected, and
pop-up menu for creating a signature group (collection) file.
◆ ◆ ◆
Creating a signature file collection with the TerrSet
EXPLORER (cont.)
11. Click on the new signature group collection file name
(Signature Group.sgf) to highlight the file in the Files pane.
12. Right click, select Rename, and enter a new name, typing
over the default name of Signature Group. Since this is the
collection of classification training signatures, we will enter
Train.
13. Press the Enter key on your computer keyboard.
14. The name Train.sgf will immediately be updated in the
Files window.
15. Finally, restore the filters by going back to the Filters tab
and leaving checked only the boxes for: Map composition
(*.map), Raster Group (*.rgf), Raster Image(*.rst), and
Vector Features (*.vct).
16. Go back to the Files tab. You should see all your raster,
vector and raster group files.
◆ ◆ ◆
8.5.3 Assessing and Comparing Signatures
Before classification, we need to assess the statistical characterization of our
training sites. In order for our training sites to be effective for classification,
they have to be pure, with no overlap of signatures. The purity of the training
site implies that only the specific spectral class was digitized during the
selection of training sites. For the spectral class to be pure, a histogram of the
signature should be close to unimodal across all bands. Bimodality implies that you mixed either spectral or informational classes within the same training site. Signature overlap implies that the spectral classes are too similar. The larger the overlap, the lower the separability of the signatures. If the signatures are too similar, the classes will be more difficult to separate, and therefore more difficult to classify correctly. There are
different ways to explore and compare signature purity and overlap. For
purity, we will use signature histograms, and for separability we will use two
different methods: multispectral signature plots (SIGCOMP), and feature-
space scatter plots (SCATTER).
8.5.3.1 Signature histograms
We will plot signature histograms for each band, evaluating the frequency of pixel values within the training sites to assess potential errors.
◆ ◆ ◆
Evaluate training site purity with HISTO
Menu location: File – Display – HISTO
1. Start the HISTO program from the main menu or the
toolbar.
2. Select the option Signature file.
3. Browse for the signature water under Input file name.
4. Change the class width to 20.
5. Click OK. You should get a stacked histogram similar to
Figure 8.5.3.1.a.
◆ ◆ ◆
Note that your histograms may differ depending on the sites you selected; however, you should see multiple peaks, especially in the visible and near infrared bands (2, 4 and 5). Remember that when we created training sites, we
decided to consider the areas with different water characteristics (deep and
shallow) within the same spectral class. The multi-modality of the histogram
is therefore expected.
◆ ◆ ◆
Evaluate training site purity with HISTO (cont.)
Menu location: File – Display – HISTO
6. Now browse for the signature Forest under Input file name.
7. Click OK to get the multiple histogram for this class.
8. Repeat 6-7 for Pasture, Commercial, Residential and
Coal, evaluating for each histogram the presence of multi-
modality.
◆ ◆ ◆
Make a note of any bimodality within your training sites. This information, together with the separability analyses, will be useful for evaluating whether any training site needs to be revised before the classification.
Figure 8.5.3.1.a Signature histogram for the water class.
8.5.3.2 Multispectral signature plot: SIGCOMP
As mentioned before, it can be useful to visualize the statistics of your
training signatures both as a check for gross errors, and also to help
understand which classes are spectrally similar, and thus potentially poorly
classified.
◆ ◆ ◆
Compare the statistics of the training data with
SIGCOMP
Menu location: IDRISI Image Processing – Signature
development – SIGCOMP
1. Use the main menu bar to start the SIGCOMP module.
2. In the SIGCOMP window, click on the button to Insert
signature group.
3. A Pick list will open. Double click on the Train file.
4. In the SIGCOMP window, select the radio button to
compare all signatures based on their means (Figure
8.5.3.2.a).
◆ ◆ ◆
Figure 8.5.3.2.a SIGCOMP window.
A graph of the signature means will open automatically in a new window
(Figure 8.5.3.2.b). Note that your graph should look somewhat similar, but
will not be exactly the same, as it is unlikely you have digitized precisely the
same pixels.
Figure 8.5.3.2.b Signature means for the six classes.
The graph in Figure 8.5.3.2.b tells us a lot about the classes we have
identified, especially if we mentally associate the band numbers with their
respective wavelengths (Table 2.1.1.2.a). For example, we note that Forest
and Pasture both have a very distinct pattern of high near infrared (OLI band
5) and low visible (especially OLI Band 4) values. This is typical of
vegetation and this spectral pattern is exploited in vegetation ratios (Section
6.3.4). We can also see that Residential is somewhat similar to the Forest
class, a result of the many trees in residential areas in this city. It is apparent
that Coal and Water are similar, with the latter being just slightly darker at all
wavelengths. It is this similarity of spectral shape that makes water and coal
waste difficult to separate, especially in unsupervised classification (Section
7.3). The graph also reveals that the near infrared and shortwave infrared bands separate the different classes better than the visible bands do.
You may also want to experiment with the other options in the SIGCOMP
dialog box. For example, you can select radio buttons to compare the
signatures based on their maxima and minima, and then click on the button
for OK to generate the new graph. A graph of maxima and minima is useful
for understanding signature overlap. However, since the maximum and
minimum graphs get very confusing with many signatures displayed
simultaneously, it is best to select a subset of the signatures – for example,
just the Commercial and Residential signatures. Simply highlight the file you would like to exclude in the SIGCOMP window, and click on the Remove file button.
Figure 8.5.3.2.c shows a comparison of the minima and maxima of
Residential and Commercial. It is apparent that the two classes overlap
substantially in all bands. As an aside, it is worth mentioning here that this
type of graph shows only first order statistics (the statistics of each band
individually). Some remote sensing image classifiers can exploit second
order statistics (the statistics of the relationships between bands), and thus we
should not automatically assume that the Commercial and Residential classes
are not separable based on Figure 8.5.3.2.c.
Figure 8.5.3.2.c Comparison of minima and maxima of Commercial and
Residential.
8.5.3.3 Feature-Space Scatter Plots (SCATTER)
A way of visualizing the statistics across bands is by plotting signature
ellipses on top of a scatterplot. In the scatterplot, the X and Y axes
correspond to the two bands being compared. The ellipses are generated
based on the mean, variance and co-variance statistics of the training sites.
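The Python sketch below (our own illustration with hypothetical names, not TerrSet's rendering code) shows one standard way such an ellipse can be derived: the eigenvectors of the two-band covariance matrix give the ellipse axes, and the square roots of the eigenvalues give their lengths.

import numpy as np

def signature_ellipse(mean2, cov2, n_std=2.0, n_points=100):
    """Outline of an ellipse n_std standard deviations around a
    two-band class mean, from the 2 x 2 training covariance matrix."""
    eigvals, eigvecs = np.linalg.eigh(cov2)
    angles = np.linspace(0, 2 * np.pi, n_points)
    circle = np.stack([np.cos(angles), np.sin(angles)])      # unit circle
    # Scale the circle by n_std * sqrt(eigenvalue) along each eigenvector.
    ellipse = eigvecs @ (n_std * np.sqrt(eigvals)[:, None] * circle)
    return ellipse + np.asarray(mean2, dtype=float)[:, None]

# Toy example: a "Forest"-like class with low red (x) and high NIR (y)
# values, and the two bands negatively correlated.
xy = signature_ellipse([40.0, 160.0], [[25.0, -15.0], [-15.0, 100.0]])
print(xy.shape)    # (2, 100): x and y coordinates of the ellipse outline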
◆ ◆ ◆
Compare the statistics of the training data with
SCATTER
Menu location: IDRISI Image Processing – Signature
development – SCATTER
1. Use the main menu bar to start the SCATTER module
(Figure 8.5.3.3.a).
2. In the SCATTER window, Click on the Pick List button
(…) to the right of the text box of File 1 [Y-Axis].
3. In the Pick List window double click on OLI_B5.
4. Repeat step 2 for File 2 [X-Axis], selecting OLI_B4.
5. Type Scatter54 as the Output file name.
6. Check the Create signature plot file box.
7. Click on the Pick List button (…) next to the Signature
group file text box and select Train.sgf from the pick list.
8. Click OK to run SCATTER.
◆ ◆ ◆
Figure 8.5.3.3.a SCATTER module with parameters set.
The result is a composition similar to Figure 8.5.3.3.b. You can see in the
COMPOSER that the results consist of two files: a raster image and a vector
feature. In Figure 8.5.3.3.b, we blended the raster image and changed the
vector palette so that it could be better visualized in the printed form. The
raster image is a two-dimensional scatter plot where the X axis corresponds
to the pixel values in OLI_B4 (red) and the Y axis to the pixel values in
OLI_B5 (near infrared). The colors in the image correspond to the frequency
of pixels with the particular combination of red and NIR values. The vector
ellipses correspond to the statistical characterization of the different training
sites. If the vector Scatter54 is highlighted in the COMPOSER, when you
click on an ellipse the vector line feature will be highlighted in red, and the
class ID will be shown. The ID corresponds to the order of the files within the signature group file, which in our case is: 1=Water, 2=Forest, 3=Pasture,
4=Commercial, 5=Residential, 6=Coal.
Figure 8.5.3.3.b Result from SCATTER.
This type of scatter plot between the red and NIR bands is commonly used to identify different land cover types, particularly vegetation, water, and soil.
We know from the spectral characteristics of these land cover types, that
water absorbs in both the red and NIR. Because of this, the lower left corner
of the scatter plot corresponds to water. Vegetation absorbs in the red and
reflects in the NIR, therefore, the top left (low X, high Y) of the plot
corresponds with vegetation pixels. You can identify a diagonal line below which the frequencies are zero, meaning that no pixel has those combinations of red and NIR values. This line is called the "soil line" and corresponds to the absence of
vegetation. Dry or bright soils have high reflectance in both the red and NIR,
therefore these types of soils correspond to the upper section of the soil line.
On the other hand, dark or wet soils, reflect less in the red and NIR, and
therefore correspond to the lower portion of the soil line.
Now that we know how to interpret the scatter plot, we can evaluate our
training sites. The Water (class 1) ellipse, as expected, is in the lower left
corner of the graph. The Forest (class 2) ellipse encloses low red and high
NIR values, and the Pasture (class 3) ellipse has a similar pattern, with
slightly higher values of red and NIR. The higher values in the red may be related to the inclusion of dry pasture areas, while the higher NIR values may be related to healthy irrigated pastures. Commercial areas (class 4) are mostly paved, so their ellipse is the one closest to the no-vegetation (soil) line. The
Residential ellipse (class 5) encompasses a large range of values. This is
because residential areas are a mixture of different amounts of pavement and
vegetation. In Section 8.5.3.2, we saw that Commercial (class 4) and Residential (class 5) had large overlaps when visualized on single bands.
When looking at how the two vary across two bands, we see that, although
there exists some overlap, it is not very pronounced.
The larger the overlap of ellipses, the less separable the classes. Here we
evaluated the separability in just the red and NIR, however, two signatures
may overlap in this combination, but not in another. If you find considerable
overlap in a band combination, you should try a different one. Finally, if two
or more classes overlap considerably across all bands, two things could be happening: 1) the training sites are not appropriate (e.g. pixels not belonging to the class were included within the training site), or 2) the spectral bands used are not sufficient to differentiate the classes. If the former,
training sites should be revised by deleting mixed training areas and
digitizing new class examples. If the spectral bands are not sufficient to
separate the classes, these can be merged after classification (e.g. merging
residential and commercial into an urban class), or you can try including
other ancillary information (e.g. texture), to see if that helps in separating the
spectrally similar classes.
8.6 Assigning pixels to classes
In previous steps, we generated our examples of classes, and evaluated the
signatures. We will now assign each pixel in the image to one of the 6
classes. We will use three different methods. First we will use the parallelepiped classification, then we will do a maximum likelihood classification, and finally we will run a neural network classification.
8.6.1 Parallelepiped Classification
We will now run the first of three classifiers which we will use to classify the
Landsat data. In this section we will do a parallelepiped classification.
Parallelepiped is an absolute classifier, which simply checks each unknown
pixel to see if it falls within the range of DN values for each band, for each of
the training classes. Either an unknown pixel falls within the ranges defined
by the training data, or it doesn’t. In the latter case, the pixel would remain
unclassified. Note that it is possible for a pixel to fall into more than one
class. In this case, usually an arbitrary decision is made, for example, based
on the order the classes were entered into the classifier. More details on
parallelepiped classification can be found in most basic remote sensing texts.
We will run the parallelepiped classification twice. Parallelepiped is a
relatively simple classifier (and therefore a very quick classifier), and is
susceptible to outliers (noise, slight variations) in the training data. The
second time we run the classification we will see how using a more
conservative definition of the class extents will help the classification. The Z-
score option defines the parallelepipeds based on the number of standard
deviations for each class. This approach, which assumes each class has a
Gaussian distribution, allows the analyst to exclude outliers in a systematic
fashion. However, note that by making the class boundaries narrower, a
greater percentage of pixels fall outside the range of the training classes, and
therefore will remain unclassified.
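Both variants are simple enough to sketch in a few lines of Python. The sketch below is our own illustration (with hypothetical names and first-match tie-breaking), not TerrSet's implementation; it assumes the per-class signature dictionaries from the sketch in Section 8.5.1.

import numpy as np

def piped_classify(bands, signatures, z=None):
    """Parallelepiped classification.

    bands: array (n_bands, rows, cols).
    signatures: list of dicts with per-band 'mean', 'std', 'min' and
    'max' arrays, in class-ID order.
    z: if None, the boxes are the training min/max; otherwise they are
    mean +/- z * std (the z-score option).
    Returns 0 for unclassified pixels, otherwise the class ID. A pixel
    falling in more than one box keeps the first (lowest-ID) match here;
    a real classifier's tie-breaking rule may differ.
    """
    out = np.zeros(bands.shape[1:], dtype=np.int32)
    for class_id, sig in enumerate(signatures, start=1):
        if z is None:
            lo, hi = sig["min"], sig["max"]
        else:
            lo = sig["mean"] - z * sig["std"]
            hi = sig["mean"] + z * sig["std"]
        inside = np.all((bands >= lo[:, None, None]) &
                        (bands <= hi[:, None, None]), axis=0)
        out[(out == 0) & inside] = class_id
    return out

# e.g. classified = piped_classify(bands, [water_sig, forest_sig], z=1.96)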
◆ ◆ ◆
Classify an image with PIPED
Menu location: IDRISI Image processing – Hard
classifiers – PIPED
1. Start the PIPED program from the menu.
2. In the PIPED dialog box click on the Insert signature
group button.
3. In the Pick list that opens, double click on Train to select
that signature group file.
4. For the Output filename, enter Piped-min-max.
5. The PIPED dialog box should now appear as in Figure
8.6.1.a.
6. Click on OK.
7. The classification should appear in a new window.
8. In the PIPED window, select the radio button for Z-Score,
leaving the default 1.96 z-score.
9. Change the Output filename to Piped-Z.
10. Click on OK.
◆ ◆ ◆
Figure 8.6.1.a PIPED dialog box.
At this stage you should have produced two classifications. In order to
compare this classification to our previous unsupervised classification
(Section 7.3) and our future maximum likelihood classification, it will be
useful to apply our earlier infclasses palette file so that the colors in the map
are the same as those we used before. However, we will need to add a color
for the Coal class. Therefore, before we proceed, we will modify our
previous palette file, created in Section 7.3.7.
(Note: If you did not do Section 7.3, or you did not save those files, you will
need to follow the instructions in Section 7.3.7 to create a new palette file,
and assign colors for DN values 1-5, before following the instructions
below.)
◆ ◆ ◆
Modify an existing look up table with SYMBOL
WORKSHOP
Menu location: File – Display – SYMBOL WORKSHOP
1. Start the SYMBOL WORKSHOP from the main menu or
the main icon bar.
2. Once the SYMBOL WORKSHOP dialog box and window
have opened, use the window menu for File – Open.
3. In the Open Symbol File dialog box, click in the radio
button for Palette.
4. Click on the File name pick list button (…).
5. In the Pick list window, double click on infclasses.
6. Click on OK.
7. The SYMBOL WORKSHOP window should now display
the palette file that you developed for the unsupervised
classification. Specifically, it should have different colors
assigned to DN values 1-5, and the remaining values should
all be red.
8. Place the cursor over the cell for a DN value of 6 (this
should be the first red cell to the right of the cells previously
assigned colors). Confirm that the label shows that this is cell
6. Click in this cell.
9. Since class 6 is Coal, click on a black color chip in the
Color dialog box that opens automatically.
10. Click on OK to close the Color dialog box.
11. Repeat steps 8 and 9 above to specify white for a DN
value of 0 (the first cell in the grid).
12. Save the palette file through the SYMBOL WORKSHOP
window menu File – Save as.
◆ ◆ ◆
Figure 8.6.1.b SYMBOL WORKSHOP with the new palette file specified.
◆ ◆ ◆
13. In the Save symbol file as dialog box, enter the new file
name: Sup-6.
14. Click OK to close the Save symbol file as dialog box.
15. Figure 8.6.1.b shows the SYMBOL WORKSHOP with the
two new colors specified.
◆ ◆ ◆
Now that we have created the modified palette file, we can apply it to each of
the two classified images.
◆ ◆ ◆
Apply a custom palette file to a previously displayed
image
1. Click in the display window for the Piped-min-max
classification to give it focus (bring this window to the front).
2. Find the Composer window, which is always opened when
an image is displayed. Click on this window to give it focus.
3. In the Composer window, click on the Layer Properties
button.
4. Click on the tab for Display parameters.
5. Click on the Pick List button (…) next to the Palette file
text box.
6. In the Pick List window that will open automatically,
double click on the Sup-6 Palette file.
7. In the Layer properties window, click on Apply.
8. This should update the color scheme for the Piped-min-
max classification. (Figure 8.6.1.c shows the results. Keep in
mind that your training classes will be slightly different from
those used to produce the figure, therefore your results will
not be identical.)
9. Repeat steps 1-8 for the Piped-Z image. (See Figure
8.6.1.d, but bear in mind that your results will also be slightly
different for this figure, too.)
◆ ◆ ◆
Figure 8.6.1.c Piped-min-max classification.
It is important to note that the legend beside each classification does not
include an important class: 0. The 0 class is Unclassified. An Unclassified class is a typical characteristic of an absolute classifier, such as the parallelepiped classifier. Since we set a DN of 0 to white, any pixel that is white is unclassified. It is immediately apparent that the z-scores
approach (Figure 8.6.1.d) produces far more unclassified pixels than the min-
max approach (Figure 8.6.1.c). Although the latter might seem a better
product, close inspection of the original Landsat image suggests that the min-
max image has a very low overall accuracy.
Figure 8.6.1.d Piped-Z classification.
In particular, an excessive number of pixels appear to be classified as
Residential. In comparison, though the z-scores image has many unclassified
pixels, those that are classified appear to be relatively accurate. The z-scores
image appears to have a more reasonable distribution of the Residential class,
and the confusion between commercial and residential, and between pasture and residential, appears to be much more limited.
The z-score approach defines the class extents based on a statistical distribution. Using a z-score of 1.96, we expect to leave unclassified the 5% of pixels that are least similar to the class. Try running the classification again with a larger number (e.g. 3), and then with a smaller number (e.g. 1) to see the tradeoffs associated with changing this parameter. Remember to apply the new palette file to the image after the classification. The larger the z-score, the smaller the number of pixels that will be excluded. For example, a z-score of 3 will exclude only the 0.3% of pixels least similar to the training class, while a z-score of 1 would leave approximately 32% of the pixels unclassified.
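These percentages follow directly from the Gaussian assumption: the fraction of a class expected to fall outside mean ± z standard deviations in a single band is 2(1 − Φ(z)), where Φ is the standard normal cumulative distribution function. A quick check in Python (our own sketch, per band, under that assumption):

from math import erf, sqrt

# Standard normal CDF, and the per-band excluded fraction 2 * (1 - Phi(z)).
phi = lambda z: 0.5 * (1 + erf(z / sqrt(2)))
for z in (1.0, 1.96, 3.0):
    print(f"z = {z:4.2f}: {2 * (1 - phi(z)):.1%} excluded")
# z = 1.00: 31.7% excluded
# z = 1.96: 5.0% excluded
# z = 3.00: 0.3% excluded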
8.6.2 Maximum Likelihood Classification
Maximum Likelihood is a powerful classification technique. It draws on
differences between the class means, as well as differences between the
covariance matrices (i.e. the variability and the degree and type of correlation
between bands). The latter parameter, difference between covariance
matrices, is difficult to visualize in an image or as graphs, but can be as
important, if not more so, than the difference between the class means. This
is particularly true as the number of bands increases. An additional
parameter, prior probability, can also be used to discriminate between classes. Prior probability is the chance that a certain class occurs, estimated before you even run the classification. For example, assume that, based on prior
knowledge of an area, you were able to estimate that your scene is likely to
be approximately 20% Water, 50% Forest and 30% Pasture. You could then
enter values of 0.2, 0.5 and 0.3 as the prior probabilities for those classes.
(Prior probability values sum to 1.0 by convention). In practice, however, this
is difficult to do, and most of the time we just set the prior probabilities as
equal values, which is the default in TerrSet. Additional information on
maximum likelihood can be found in most introductory remote sensing texts.
In short, the maximum likelihood classifier is based on Bayes' Theorem, calculating the probability that a pixel belongs to each class by combining the probability associated with the training sites (the conditional probability) with the prior probability. After evaluating the probability that a pixel belongs to each class, the pixel is assigned to the class with the highest probability (the most likely class).
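As an illustration of this logic (our own sketch with hypothetical names, not TerrSet's implementation; the signature dictionaries are those from the sketch in Section 8.5.1), a Gaussian maximum likelihood rule can be written compactly: each pixel receives the class that maximizes the log of the class-conditional density plus the log of the prior.

import numpy as np

def maxlike_classify(bands, signatures, priors=None):
    """Gaussian maximum likelihood classification (a minimal sketch).

    bands: array (n_bands, rows, cols).
    signatures: list of dicts with 'mean' (n_bands,) and 'cov'
    (n_bands, n_bands) from the training pixels of each class.
    priors: optional prior probabilities (default: equal priors).
    """
    n_bands, rows, cols = bands.shape
    x = bands.reshape(n_bands, -1).T            # (n_pixels, n_bands)
    if priors is None:
        priors = [1.0 / len(signatures)] * len(signatures)
    scores = []
    for sig, prior in zip(signatures, priors):
        diff = x - sig["mean"]
        inv = np.linalg.inv(sig["cov"])
        _, logdet = np.linalg.slogdet(sig["cov"])
        mahal = np.einsum("ij,jk,ik->i", diff, inv, diff)
        # Log posterior, dropping constants shared by all classes.
        scores.append(-0.5 * (mahal + logdet) + np.log(prior))
    return (np.argmax(scores, axis=0) + 1).reshape(rows, cols)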
◆ ◆ ◆
Classify an image with MAXLIKE
Menu location: IDRISI Image Processing – Hard
classifiers – MAXLIKE
1. Start the MAXLIKE program from the menu.
2. In the MAXLIKE dialog box, click on the Insert signature
group button.
3. In the Pick list, double click on the Train signature group
file.
4. In the MAXLIKE dialog box, specify the Output image as
maxlike.
5. Leave the option for prior probabilities as the default (Use
equal prior probabilities for each signature), as well as the
Maximum likelihood for classification (between 0.0-1.0) at the
default of 0%.
6. Figure 8.6.2.a shows the dialog box completed.
7. Click on OK to run the classification.
◆ ◆ ◆
Figure 8.6.2.a MAXLIKE dialog box.
As a final step, we will now apply the palette file developed in Section 8.6.1
to this image, to aid comparison with the parallelepiped classification.
◆ ◆ ◆
Apply a custom palette file to a previously displayed
image
1. Click in the display window for the Maxlike classification
to give it focus (bring this window to the front).
2. Find the Composer window, which is always opened when
an image is displayed. Click on this window to give it focus.
3. In the Composer window, click on the Layer Properties
button.
4. Click on the tab for Display parameters.
5. Click on the Pick list button (…) next to the Palette file text
box.
6. In the Pick List window that will open automatically,
double click on the Sup-6 Palette file.
7. In the Layer properties window, click on Apply.
8. This should update the color scheme for the Maxlike
classification. (Figure 8.6.2.b shows the results. However,
bear in mind that your training classes will be slightly
different from that used to produce the figure, therefore your
results will not be identical.)
◆ ◆ ◆
Figure 8.6.2.b Maximum likelihood classification.
Classification is typically an iterative process. Your first classification is
likely to have some classes that are very poorly classified. You almost
certainly should return to the signature collection stage, and refine some or all
of your signatures.
For example, Figure 8.6.2.b has some green pasture areas classified as
residential. This suggests we need to add one or more training polygons to
the existing pasture vector file, specifically in the areas incorrectly labeled as
residential. To do this, you should follow the instructions for Digitizing
additional polygons in an existing vector file in Section 8.4, this time
selecting the Pasture vector file for the overlay. You would then need to run
the MAKESIG program again for this class. As long as you overwrite the existing Pasture.sig signature file when you run MAKESIG, you do not need
to worry about changing the signature group file. Instead you can run the
MAXLIKE program directly, after you have specified a new file name (e.g.
maxlike2).
Figure 8.6.2.c shows the results of an improved classification, simply by
digitizing some extra pasture polygons, and illustrates how iterative
improvements can increase the accuracy of a classification greatly. Although every pixel is classified, we can exclude from the classification pixels that do not reach a certain threshold of likelihood, by changing the Maximum likelihood for classification option to a value different from zero. For
example, if we change it to 0.2, all pixels with a probability of belonging to
any of the classes less than 0.2 will remain unclassified. Changing this
threshold uncovers locations that are dissimilar to all classes and therefore
may need to be included in the training, as part of existing signatures or as
new ones.
Finally, you should take a moment to compare the parallelepiped and
maximum likelihood classifications. The latter is a much more powerful
classification approach, although it is computationally intensive. Generally,
we would expect a much more accurate product from maximum likelihood,
and that does indeed appear to be the case for our study area.
Figure 8.6.2.c Maximum likelihood classification with modified Pasture class statistics applied.
8.6.3 Multi-layer Perceptron Neural
Network Classification
The Multi-Layer Perceptron Neural Network (MLP) is a machine learning
classification algorithm that attempts to simulate the structure of the human
brain through a network of neurons. The structure of the neural network
allows the classifier to uncover complex relationships that may be more
difficult to find with maximum likelihood or parallelepiped classifiers. The
neural network algorithm is much more complex than the parallelepiped or
the maximum likelihood methods. You will see that the parameterization of MLP can be daunting; however, the implementation in TerrSet allows for auto-training, which makes the optimization of the most important parameters easier.
8.6.3.1 Prepare training data for MLP
Unlike MAXLIKE or PIPED, which run on the statistical characterization of the training sites (extracted with MAKESIG), MLP does not use this
information. MLP requires a vector or raster file with all the training sites,
where each class has a different ID. When we digitized our training sites, we
did it in separate files. Therefore, instead of having one single vector file with
all our training sites, we have 6 of them. We need to combine them into a
single file.
This combination can be easily done with the MACRO MODELER. In order
to save time, we already developed a model that you can open and explore. If
you find that the instructions below are too brief, please refer to section
4.2.1.1 (how to create a model), and section 5.4.5 (how to open and edit an
existing model).
◆ ◆ ◆
Adapting and running a previously created MACRO
MODELER model
Menu location: IDRISI GIS Analysis – Model deployment
tools – MACRO MODELER
1. Start the MACRO MODELER from the main menu or
main icon bar.
2. The MACRO MODELER graphical interface will open.
3. In the MACRO MODELER window, click on the Open icon
(second from left), or use the MACRO MODELER menu: File
– open. (Note that if the MACRO MODELER window is
highlighted, and you put your cursor over an icon, the icon
name is shown.)
4. A Pick List window will open. Double click on merge-
training to select this file. The macro file will open (Figure
8.6.3.1.a).
5. Click on the run icon.
6. A window will open in which you are warned that the files
created by the model will overwrite any existing files of those
names. Click on Yes to all.
7. Your new raster file is created and displayed (Figure
8.6.3.1.b).
◆ ◆ ◆
Figure 8.6.3.1.a shows the complete model that we will run. The input and output boxes look different from those in our previous models (yellow instead of purple), since we are working with vector files instead of raster files. When we digitized the classes, all vectors had values of 1. The first step is to reclass the polygon values into the class values specified in Table 8.3.a. We do not do this step for water, since its class ID is already 1. The next step is to concatenate all the reclassed vector files. We used the CONCAT module in Section 3.1.2 to concatenate raster images; here we use it to merge all the vector polygons into a single file. Since we can concatenate only two files at a time, we need to repeat the procedure.
Figure 8.6.3.1.a Merging of vector training sites with the MACRO MODELER.
The final output is a vector file called trainingsites that should look similar to
Figure 8.6.3.1.b. Remember that your digitized polygons are not the same as
ours, therefore your file will look slightly different.
Figure 8.6.3.1.b Merged vector training sites.
8.6.3.2 Classifying with MLP
Now that our training sites are merged, we can run MLP. Classifying an
image with MLP consists of two steps: training the network, and running the
classification.
◆ ◆ ◆
Classify an image with MLP: Training
Menu location: IDRISI Image processing – Hard
classifiers – MLP
1. Start the MLP program from the menu. Figure 8.6.3.2.a
shows the module window with parameters set.
2. In the MLP dialog box, click on the Insert layer group
button.
3. In the Pick list, double click on the OLI_all raster group
file.
4. Under Input specifications, select Vector.
5. Click on the Pick List icon (…) and select the vector file
trainingsites.
6. Under Training parameters check the box next to Use
automatic training.
7. Under output options check the boxes for Hard
Classification and Map output activation levels.
8. Finally, specify the Output files. For Hard Classification
Image type MLP, and for Output layer activation files prefix
type MLP-Act.
9. Leave all other parameters as the default.
10. Click on Train to start the classification training process.
◆ ◆ ◆
Figure 8.6.3.2.a. The MLP module window with parameters set.
The structure of a typical neural network (Figure 8.6.3.2.b) consists of at least
three layers: an input layer (composed of the bands used in the classification),
one (or more) hidden layer, and an output layer (composed of the classes
from the training sites). Each layer contains nodes (also called neurons),
which are interconnected through different weights. The classification starts
by connecting the nodes with a random set of weights; then each pixel is
evaluated in turn. For each pixel, an output is generated that indicates
the similarity between the input and the corresponding class. The error of this
first output is evaluated and propagated backwards (back propagation) to
adjust the weights. The process is repeated for a set number of iterations.
Figure 8.6.3.2.b Structure of a neural network
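The forward and backward passes described above can be sketched outside TerrSet. The following is a minimal example in Python with NumPy (not TerrSet code; the layer sizes, learning rate, and pixel values are invented for illustration). It pushes one pixel through a single-hidden-layer network and adjusts the weights by back propagation:

import numpy as np

rng = np.random.default_rng(0)

n_bands, n_hidden, n_classes = 6, 8, 6   # illustrative network sizes
lr = 0.05                                # illustrative learning rate

# The network starts with a random set of weights, as described above.
W1 = rng.normal(scale=0.5, size=(n_bands, n_hidden))
W2 = rng.normal(scale=0.5, size=(n_hidden, n_classes))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# One training pixel: its band values and its desired class (one-hot).
x = rng.random(n_bands)
target = np.zeros(n_classes)
target[2] = 1.0

for iteration in range(1000):
    # Forward pass: input layer -> hidden layer -> output activations.
    hidden = sigmoid(x @ W1)
    output = sigmoid(hidden @ W2)

    # The output error is propagated backwards (back propagation)
    # to adjust both layers of weights.
    err = output - target
    d_out = err * output * (1 - output)
    d_hid = (d_out @ W2.T) * hidden * (1 - hidden)
    W2 -= lr * np.outer(hidden, d_out)
    W1 -= lr * np.outer(x, d_hid)

print(np.round(output, 3))   # the activation for the target class approaches 1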
A common property of machine learning algorithms is the capacity to over-
fit. Overfitting happens when the model fits the training data so closely that it
is not capable of generalizing (classifying pixels not given as training). In
order to minimize potential overfitting, MLP partitions the initial samples for
each class into two groups of equal size. One group is used for training the
MLP network (in the process of weight adjustment), and the other is used for
testing (evaluating how well the model fits the samples not used in the
training process). While MLP runs, it is important to monitor the two curves
that appear in the error monitoring box. These curves represent the training
and testing errors mentioned above. The training error curve shows, for each
iteration, how well the model fits the training data, and the testing error curve
shows how well the model predicts locations not used in training. If the
model fits the training data markedly better than the testing data (training
error much lower than testing error), overfitting is occurring. Although the
automatic training option in MLP automatically restarts the training process
(assigning new weights) if the errors oscillate, the analyst should stop and re-
train the model manually if substantial overfitting is detected.
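The logic of monitoring the two error curves can be demonstrated with a toy example. In the sketch below (Python/NumPy; a deliberately over-parameterized linear model stands in for the neural network, and the data are synthetic), half the samples are used to fit the model and half to test it. The training RMS keeps falling, while the testing RMS eventually stalls or rises, which is the overfitting signature described above:

import numpy as np

rng = np.random.default_rng(1)

# Synthetic samples: 40 "pixels" with 20 noisy features and a target.
X = rng.normal(size=(40, 20))
y = X[:, 0] + 0.1 * rng.normal(size=40)

# MLP-style partition: two groups of equal size for training and testing.
Xtr, Xte, ytr, yte = X[:20], X[20:], y[:20], y[20:]

w = np.zeros(20)
lr = 0.01
for it in range(1, 2001):
    # Gradient descent on the training half only.
    w -= lr * Xtr.T @ (Xtr @ w - ytr) / len(ytr)
    if it % 400 == 0:
        rms_tr = np.sqrt(np.mean((Xtr @ w - ytr) ** 2))
        rms_te = np.sqrt(np.mean((Xte @ w - yte) ** 2))
        print(f"iteration {it:4d}: training RMS {rms_tr:.3f}, testing RMS {rms_te:.3f}")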
The running statistics box gives an overview of the resulting model,
providing information on the total number of pixels used in training and
testing, the learning rate, the number of iterations, the accuracy and skill of
the model, and finally the training and testing RMS (Root Mean Square
error). For more information on these or other parameters we recommend
reading the TerrSet Help for MLP.
After all iterations of the training process are completed, an HTML report is
displayed, giving information on the parameters used for training, the weights
used, and the contribution of the bands to the identification of the classes. If
we are satisfied with the training (e.g. we have a good accuracy and skill, and
no indication of overfitting), we can use the weights generated to produce the
classification output. Please see the TerrSet HELP for the description of other
results within this HTML report.
◆ ◆ ◆
Classify an image with MLP: Classify
Menu location: IDRISI Image processing – Hard
classifiers – MLP
11. Click on the Save weights button. Although saving
weights is optional, it is recommended in case you need to re-
do the same classification.
12. Click on the Classify button.
◆ ◆ ◆
MLP will produce a different output every time it is run, because both the
training sample partition and the initial weights are generated randomly. If
your first run does not produce a satisfactory output, therefore, try running
the model again a couple of times before changing other parameters, such as
the network structure or learning rate.
The structure of the network can be set under Network topology options
within the MLP interface. The numbers of input and output layer nodes are
determined by the number of bands and the number of classes in the training
sites, and cannot be changed. The user can specify the number of hidden
layers (1 or 2), and the number of hidden layer nodes. Increasing the number
of nodes allows the neural network to uncover more complex relationships
and interactions; however, the default works well in most cases. Of the
training parameters, the learning rate is the most important; it controls the
amount of adjustment of the weights at each iteration. When adjusting
parameters and running the model, the aim is a model with high accuracy and
low overfitting. Many runs may be needed to produce a satisfactory
classification.
The next step is to label the output classes by updating the legend. We
already did this in Section 7.3.11, so you can refer to that section if the
instructions here are too brief.
◆ ◆ ◆
Update the classified image with the TerrSet EXPLORER
1. In the TerrSet EXPLORER window, click on the tab for
Files.
2. If the files are not listed in the Files pane, double click on
the directory name to display the files.
3. In the Files pane, click on the MLP.rst classified image
file.
4. In the Metadata pane below the Files pane, scroll down,
and find the blank cell next to the label Categories (as you did
in Section 7.3.7, Figure 7.3.7.a).
5. Click in the blank Categories cell. The cell will turn white,
and a Pick List button (…) will appear. Click on that button.
6. A Categories dialog box will open.
7. Click on Copy from… and a Pick List will open.
8. Select maxlike from the Pick List.
9. Click OK. The categories should be automatically
completed with the names of the six classes.
10. Save the Metadata (accept the Warning message).
◆ ◆ ◆
Finally, display the classification with the custom color palette.
◆ ◆ ◆
Menu Location: File – Display – DISPLAY LAUNCHER
1. Start the DISPLAY LAUNCHER program from the main
menu, or the tool bar.
2. In the DISPLAY LAUNCHER dialog box, click on the
option to browse for a file by selecting the Pick list button
(…) in the center left column.
3. A file Pick List window will open. Double click on the
MLP raster file.
4. In the Palette File section of the DISPLAY LAUNCHER
window, click on the Pick list button (…).
5. Select the Sup-6 file by double clicking on it.
6. Click on OK to display the image. (Figure 8.6.3.2.c).
◆ ◆ ◆
Figure 8.6.3.2.c shows our MLP result. Compared to the results of
MAXLIKE and PIPED, it appears to be a better classification: MLP
characterizes the forest, pasture and residential areas better. However, some
commercial areas (in downtown Morgantown) have been classified as Coal.
Just how accurate these classifications are is
something we can, and indeed should, quantify. In Chapter 10 we will see
how to evaluate classification errors.
Figure 8.6.3.2.c MLP Hard classification result.
We will now explore another set of output images produced by MLP: the
activation outputs. In TerrSet EXPLORER you will see several files with the
prefix MLP-Act, followed by a number from 1 to 6. These activation images
indicate, for each pixel, the degree of membership to each possible class. A
high value indicates that the pixel is very similar to the class, while low
values indicate that the pixel is very dissimilar to that class. Let’s display one
of these images.
◆ ◆ ◆
Display image with TerrSet EXPLORER
1. In the TerrSet EXPLORER, click on the tab for Files.
2. If the files are not listed in the Files pane, double click on
the directory name to display the files.
3. In the Files pane, double click on the MLP-Act_5.rst image
file (Figure 8.6.3.2.d).
◆ ◆ ◆
Figure 8.6.3.2.d shows the activation image for class 5 (residential). We can
see that values range from zero to one, where one represents a high similarity
with the training class. Water areas have values close to zero (very dissimilar
to the residential class); forest and pasture areas, although their values are
low, show some similarity to the residential class. You can use the
information from the activation layers to refine the training sites: for
example, if a location has low values for all classes, its spectral
characteristics are not represented within the training sites.
Figure 8.6.3.2.d Activation layer result for the residential class.
Classifiers that produce continuous results for each class, representing a
degree of class membership, or a class proportion, are called soft (or fuzzy)
classifiers. Here we see that MLP can act as either a hard or soft classifier,
however there exist classifiers that only produce soft outputs, as we will
describe in the next chapter.
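The relationship between the soft activation outputs and the hard classification can be illustrated in a few lines. This sketch (Python/NumPy; the activation values are invented, not read from the MLP-Act files) hardens a stack of per-class activation images by assigning each pixel to its highest-activation class:

import numpy as np

rng = np.random.default_rng(2)

# Invented stack of activation images: 6 classes, 4 x 5 pixels,
# with values between 0 and 1 as in the MLP-Act outputs.
activations = rng.random((6, 4, 5))

# Hard classification: each pixel takes the class with the highest
# activation (class IDs 1 to 6, matching the legend order).
hard = activations.argmax(axis=0) + 1
print(hard)

# Pixels with a low best activation are poorly represented by the
# training sites and may indicate a missing spectral class.
print((activations.max(axis=0) < 0.2).sum(), "poorly represented pixels")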
CHAPTER 9
SOFT CLASSIFICATION
Chapter 7 and Chapter 8 provided an introduction to unsupervised and
supervised hard classification methods. It is assumed that you have
completed those chapters before you start this one. The major focus of this
chapter is on supervised soft (or fuzzy) classification. We will cover two
approaches: Linear unmixing and Mahalanobis Typicality.
9.1 Introduction
Soft classification is a particularly interesting topic in remote sensing and is
an area of intensive research. TerrSet is notably strong in classification
techniques in general, including sophisticated approaches such as fuzzy
classification. Even if you never use
fuzzy classification in your own work, the theoretical issues associated with
this topic are well worth considering, and have ramifications in more
traditional methods.
Soft classification is an alternative to the relatively simple hard classification
methods discussed so far, where each pixel is assigned to one class only. In
soft classification methods, a pixel can potentially be associated with more
than one class.
There are three main applications for soft classification:
1. Classification of mixed pixels that arise from integrating discrete
areas of different classes. For example, an individual pixel along a river
bank may include both water and land areas. Mixed pixels are a direct
result of the spatial integration across the instantaneous field of view
(IFOV) of the sensor. The standard way to analyze mixed pixels is linear
pixel unmixing, however, other soft classifiers can also be used to
investigate class mixing.
2. Classification of conceptually fuzzy classes that arise from
variability in the underlying classes. Where classes have transitional,
rather than discrete boundaries, pixels from the transitional areas will
typically have a spectral radiance that is intermediate between those of
the pure classes. For example, river water may vary from deep and clear
to shallow and muddy. Thus, the deep and clear water is simply one end
of a continuum of classes, and the variability is inherent in the land cover,
not the mechanics of imaging the site.
3. Investigation of the confidence the user has in the classification of
each pixel. For example, how do we know that we haven’t ignored an
important spectral class in defining our class training areas?
In this chapter, we will investigate the use of soft classification for unmixing
pixels, and for identifying how similar a pixel is to the training sites through a
Mahalanobis typicality classification. In forest cover mapping, the IFOV of
the Landsat ETM+ sensor, at approximately 30 meters, is much larger than
the typical size of the canopy of a single tree. Thus, we can consider forest
spectral radiance as a mixture of the trees within each pixel. Foresters
traditionally recognize forest communities as groups of trees. However, the
boundaries of those communities may be gradational, and the same species
may occur in more than one community. Thus, it would seem that forest
applications are ideally suited to a fuzzy approach.
Our study site for this exercise is Chestnut Ridge, West Virginia, on the
extreme eastern edge of the area we examined in the
supervised and unsupervised classification exercises. More background on
this area can be found in Nellis, et al. (2000).
9.2 Download Data for this Chapter
If you have not done so already, download the data from the Clark Labs’
website for Chapter 9 and place it into a new subfolder within the \RSGuide
folder on your computer.
Note: Section 1.3.1 provides detailed instructions on how to download the
data; it also describes the procedure for setting up the RSGuide folder on
your computer.
9.3 Preparation
9.3.1 Setting up TerrSet Project
In Section 9.2 you should have already copied the data into the RSGuide
folder. However, we still need to set the Project and Working Folders for this
chapter.
Before starting, you should close any dialog boxes or displayed images in the
TerrSet workspace.
◆ ◆ ◆
Create a new project file and specify the working folders
with the TerrSet EXPLORER
1. Maximize the TerrSet EXPLORER or open it from the
toolbar.
2. In the TerrSet EXPLORER window, select the Projects tab.
3. Right click within the Projects pane, and select the New
Project Ins option.
4. A Browse For Folder window will open. Use this window
to navigate to the RSGuide folder, which is the folder you
created on your computer for the data for this manual. Now
navigate to and select the Chap9 subfolder.
5. Click OK in the Browse For Folder window.
6. A new project file Chap9 will now be listed in the Project
pane of the TerrSet EXPLORER. The working folder will also
be listed in the Editor pane.
7. Minimize the TerrSet EXPLORER.
◆ ◆ ◆
The data for this section consists of Enhanced Thematic Mapper Plus
imagery from October 30, 2000. The bands are labeled etm30oct1 to
etm30oct7, where the last digit is the band number.
It is always useful to become familiar with the data at the start of a project.
Therefore, we will now create a simulated natural color composite. By now
you should be familiar with the process of creating color composites. To
create a simulated natural color composite, you need to match the display
colors of a band to the approximate wavelengths of the sensor (Table
2.1.1.2.a). Thus, for a simulated natural color image, the blue image display
band should be ETM band 1, the green band should be ETM band 2, and the
red band ETM band 3.
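If you are curious what the COMPOSITE operation does internally, the following sketch (Python with NumPy and Matplotlib, outside TerrSet; the arrays are synthetic stand-ins for etm30oct1 to etm30oct3) assigns band 3 to the red display channel, band 2 to green, and band 1 to blue, with a simple min-max stretch:

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(3)

def stretch(band):
    # Linear min-max stretch of one band to the 0-1 display range.
    b = band.astype(float)
    return (b - b.min()) / (b.max() - b.min())

# Synthetic stand-ins for ETM+ bands 1 (blue), 2 (green) and 3 (red);
# in practice these would come from the image files.
band1, band2, band3 = (rng.integers(0, 256, (100, 100)) for _ in range(3))

# Simulated natural color: display red = band 3, green = band 2,
# blue = band 1, matching the wavelength assignment described above.
rgb = np.dstack([stretch(band3), stretch(band2), stretch(band1)])
plt.imshow(rgb)
plt.show()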
◆ ◆ ◆
Create a color composite image
Menu Location: File – Display – COMPOSITE
1. Start the COMPOSITE program using the main menu or
tool bar.
2. In the COMPOSITE dialog box, double click in the Blue
image band text box, and then double click on the file
etm30oct1 in the subsequent Pick list.
3. Double click in the Green image band text box, and then
double click on etm30oct2.
4. Double click in the Red image band text box, and then
double click on etm30oct3.
5. Enter the Output image filename in the text box provided:
etm123.
6. Accept all other defaults, and click OK to create and
display the natural color composite.
7. Once the image has been displayed, you can close the
COMPOSITE window.
◆ ◆ ◆
Figure 9.3.1.a shows the results of the image, which will be displayed
automatically. The sinuous band of blue in the lower left of the image is a
river (the Cheat River), and the sinuous band of white is a major highway
(Interstate 68). A faint diagonal linear feature can be observed; this indicates
the zone of cleared vegetation associated with an electricity power line.
Figure 9.3.1.a Simulated natural color ETM+ image of Chestnut Ridge.
It is apparent from Figure 9.3.1.a that this is a very rugged area. The average
elevation of the upland areas is over 700 meters; the river below lies at
approximately 270 meters. Note the presence of steep slopes in the image,
and the associated shadows. Since this is a northern hemisphere image, and
the image was acquired in the morning at approximately 10 a.m., the shadows
are on the northwest facing slopes.
The forest here is predominantly deciduous. Since this is an autumn image,
the colors in the image are much more varied than would be found with, for
example, a summer image. Unlike in the image from early October we were
classifying earlier, in this image from later in the month, two main forest
communities are evident. Oaks (Quercus spp.) tend to have a red to brown
color in this image, and yellow poplar (which we will refer to as “poplar”)
(Liriodendron tulipifera) has a yellow to green color.
9.3.2 Digitize Training Class Data
Since the soft classification methods used in this chapter are supervised, we
will begin by digitizing training areas. Because of the variation inherent in
the autumn forest colors, we will need to digitize multiple training areas for
each of the two forest classes. We will also digitize a third class, shade, to
account for some forest areas that are shaded by the steep topography (Figure
9.3.2.a). Figure 9.3.2.a gives an overview of the recommended number and
distribution of training polygons that should be digitized.
Table 9.3.2.a Training classes for linear pixel unmixing.
Figure 9.3.2.a Training classes for the soft classification methods (Red =
Oak, Yellow = Poplar, and Blue = shadow).
Since you should be familiar with digitizing from Section 8.4, a relatively
brief description of digitization will be provided below. If this description is
not sufficiently clear, please review Section 8.4 before proceeding. Note that
Section 8.4 also includes a description of how to correct mistakes by deleting
polygons. Since in Section 8.4 we digitized each training site in a separate
file, in this exercise we will learn how to digitize all the training sites within a
single vector file. Note that you will have to keep track of the class ID you
are digitizing. We will digitize Oak, Poplar and Shadow using the class IDs
and characteristics described in Table 9.3.2.a.
◆ ◆ ◆
Digitizing training polygons
Menu location: Main tool bar icons (Note that the icons
are grayed out if an image is not yet displayed.)
1. The simulated natural color image etm123 should already
be displayed in a Viewer.
2. Click on the Full extent normal icon from the main tool
bar, to display the entire image, if not already at the default
zoom.
3. Click on Full extent maximized to maximize the size of the
display window.
4. If necessary, click on the Zoom window icon to zoom in on
an area of oak trees (Table 9.3.2.a and Figure 9.3.2.a).
5. Click on the Digitize icon. A dialog box labeled Digitize
will open.
6. Enter the file name Training in the text box labeled Name
of layer to be created. Leave all other values at their
defaults.
7. Click on OK to close the Digitize window.
8. Move the cursor over the image. Note how when the cursor
is over the image, it takes on a form similar to the Digitize
icon, instead of the normal arrow.
9. Digitize the first vertex of the first oak polygon by pressing
the left mouse button. Continue clicking in the image to
specify the outer boundaries of the polygon you wish to
digitize.
10. Close the polygon by pressing the right mouse button.
Note that the program automatically closes the polygon by
duplicating the first point; you do not need to attempt to close
the polygon by digitizing the first point again.
11. If you have zoomed in on the image, repeat the process to
return the image to full extent, and zoom in on another oak
area.
12. Now start digitizing the second polygon by clicking on the
Digitize icon again.
13. The Digitize dialog box will open again.
14. Confirm that the radio button for Add features to the
currently active vector layer is selected.
15. Click OK.
16. The ID for the polygon will be incremented automatically.
Be sure to set it back to 1, as all the oak polygons should be
digitized as part of the general Oak class.
17. Digitize the next polygon.
18. Repeat the process of adding polygons to the Oak class
until you have created all five oak training areas.
19. Save the polygon file by clicking on the Save digitized
data icon.
20. A Save prompt dialog box will open. Click on Yes.
21. Now start digitizing the second class (Poplar) by clicking
on the Digitize icon again.
22. The DIGITIZE dialog box will open again.
23. Select the radio button for Add features to the currently
active vector layer.
24. Click OK.
25. Change the ID value to 2 (the ID class for Poplar).
26. Digitize the poplar polygons. Remember that you should
now set the ID value back to 2 each time you start the
process of adding a new Poplar polygon to the file.
27. Save the polygon file by clicking on the Save digitized
data icon.
28. A Save/Update dialog box will open. Click on Yes.
29. Repeat steps 21 to 28 for the Shadow class. In this case,
you must set the ID value to 3 each time you start the process
of adding a new shadow polygon to the file. Remember to
save the file.
30. When you are finished digitizing all the training areas,
close the viewer. If you have any unsaved vector files, you
will be given an opportunity to save them.
◆ ◆ ◆
Remember that you can use the shortcut keys to facilitate adding polygons
with the same or different ID. If, after you finish digitizing one polygon, you
want to add a new one with the same ID, you can just close the polygon using
Shift+Right click. On the other hand, if after creating a polygon you want to
start digitizing a new one with the next ID, you can close the polygon using
Alt+Right click.
Unlike Exercise 8.4 where we created different vector files for each training
class, here we created one single vector that contains all the different classes.
Each class is identified with a separate ID within the vector. Your final
training site will look similar to Figure 9.3.2.a, with different colors for each
class.
9.3.3 Create Signature Files of Training
Data
The process of creating signature files is described in detail in Section 8.5.1.
Here we provide a brief overview of the process. Unlike the exercise in 8.5.1,
since we have all training sites within a single vector, we will run MAKESIG
just once.
◆ ◆ ◆
Create signature files using MAKESIG
Menu location: IDRISI Image Processing – Signature
Development – MAKESIG
1. Start the MAKESIG program from the main menu.
2. In the MAKESIG dialog box, click on the Pick List button
(…) to the right of the text box Vector file defining training
sites.
3. In the Pick List window, double click on the Training file.
4. In the MAKESIG dialog box, click on the button Enter
signature filenames.
5. An Enter signature filenames dialog box will open. In the
blank cell to the right of 1, type Oaks. In the blank cell to the
right of 2 type Poplar, and finally type Shadow for class 3
(Figure 9.3.3.a).
6. Check the box labeled Create signature group file, and
change the default name to Sig_All. This will automatically
create a signature group file.
7. Click on OK to close the Enter signature file names dialog
box.
8. In the MAKESIG dialog box, click on the button for Insert
layer group.
9. From the Pick list, double click on etm30oct_all.
10. The MAKESIG dialog box should now show the six bands
of the ETM data.
11. Click OK.
◆ ◆ ◆
Figure 9.3.3.a Signature file names dialog box for composite vector training
file, with parameters set.
9.4 Soft classification with Linear Unmixing
Linear spectral unmixing assumes that each pixel can be modeled as a linear
function of the input classes. Thus, the spectral value of a pixel that
comprises 50% of each of two different classes should lie halfway between
the two classes in the feature space (the spectral dimension of the data)
(Figure 9.4.a). The method assumes that the classes specified in the training
data are the only classes present in the image; if a pixel is composed of a
class that was not specified, the proportions assigned to that pixel will be
erroneous. A residual output file is also created, which specifies how well the
pixel values match the calculated mixture of classes. The higher the residual
value, the less likely it is that the calculated mixture exists.
Figure 9.4.a Example of the reflectance of a hypothetical mixed pixel that is
composed 50% of class 1 and 50% of class 2.
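The mixing model can also be written out. If the mean spectra of the classes form the columns of a matrix M, a pixel's spectrum p is modeled as p = M f, where f holds the class fractions; solving for f and measuring how far M f falls from p gives the fraction and residual outputs. Below is a minimal sketch in Python with NumPy and SciPy, outside TerrSet; the spectra and mixture are invented, and appending a heavily weighted sum-to-one row is just one simple way to approximate the constraints UNMIX enforces:

import numpy as np
from scipy.optimize import nnls

# Invented mean spectra (6 bands) for three classes, as columns of M.
M = np.array([[0.10, 0.12, 0.02],
              [0.15, 0.20, 0.03],
              [0.12, 0.14, 0.02],
              [0.45, 0.55, 0.05],
              [0.30, 0.35, 0.04],
              [0.20, 0.22, 0.03]])

# A mixed pixel: 50% class 1, 30% class 2, 20% class 3, plus noise.
true_f = np.array([0.5, 0.3, 0.2])
pixel = M @ true_f + 0.002 * np.random.default_rng(4).normal(size=6)

# Non-negative least squares, with a heavily weighted row of ones
# appended so the fractions (approximately) sum to one.
A = np.vstack([M, 100.0 * np.ones(3)])
b = np.append(pixel, 100.0)
fractions, _ = nnls(A, b)

residual = np.linalg.norm(M @ fractions - pixel)
print("estimated fractions:", np.round(fractions, 3))
print("unmixing residual:", round(residual, 4))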
◆ ◆ ◆
Unmix an image
Menu location: IDRISI Image Processing – Soft
Classifiers/Mixture Analysis – UNMIX
1. Start the UNMIX program from the main TerrSet menu.
2. In the UNMIX dialog box, select the option for Linear
spectral unmixing.
3. Double click in the text box labeled Input signature group
file.
4. In the subsequent pick list, double click on Sig_All.
5. In the Output prefix text box, type Unmixed.
6. Figure 9.4.b shows the UNMIX dialog box, with files
specified.
7. Click on OK.
◆ ◆ ◆
Figure 9.4.b The UNMIX dialog box.
TerrSet automatically displays the unmixing residual image, with a
quantitative look up table (Figure 9.4.c). Note that your image will be slightly
different from that in the figure, because your training classes will not be
identical. However, the overall pattern should be similar.
Figure 9.4.c Linear spectral unmixing residuals.
As mentioned above, the unmixing residual gives an estimate of how well the
training data can produce the original pixels: larger values (represented in
yellow and red colors) indicate a poor match; low values (represented in
black and dark purple in the image) indicate a good match. In our case, since
we did not collect training data for the water or road classes, these areas have
high residuals, and therefore the calculated mixtures of oak, poplar and
shadow in those areas could be erroneous. It is also interesting to note that the
hillside on the northeast side of the river does not appear to be well
represented by the training classes, suggesting that perhaps this area is
covered by a third forest community.
Now examine the three images that represent the proportion of each class for
each pixel. These images have the prefix Unmixed, and the rest of the name
is the original class signature name. For example, UnmixedOaks is the
proportion of Oaks in each pixel (Figure 9.4.d). You will need to use the
TerrSet DISPLAY module to display each of the images or double click on
the image within TerrSet EXPLORER, as they are not automatically shown.
When you display the images, use all default options.
The three images showing the proportions of each class in each pixel reveal
that, although there are some areas that are very likely one forest class or the
other, most areas are a combination of proportions of the two forest classes.
Figure 9.4.d Proportion of oaks in each pixel.
9.4.1 Summarize the Output of the Forest
Classification
The images of the proportion of the Oak and Poplar communities might be
regarded as an endpoint to the analysis. However, these data are difficult to
visualize in their entirety, because the information is spread across multiple
images and the output is complex. Therefore, many soft
classification exercises end by returning to a modified version of the
traditional approach, where each pixel is summarized by the class with the
highest mixture proportion. TerrSet includes the module HARDEN to
produce hard classifications from the soft classifiers in an automated fashion.
In this exercise, we will take a slightly different approach in order to preserve
the fuzzy information in the forest communities, despite simplifying the
classification. We will do this by manually recoding the image to an output
that includes mixture classes.
◆ ◆ ◆
Recode linear unmixing images with RECLASS
Menu Location: IDRISI GIS Analysis – Database Query –
RECLASS
1. Open the RECLASS program from the main icon bar or the
main menu.
2. In the RECLASS window, enter the Input file name by
double clicking in the text box, and selecting UnmixedOaks.
3. Enter the Output file as Reclass_UnmixedOaks.
4. In the Reclass Parameters section of the RECLASS
window, enter the values to complete the table as indicated
below:
5. Figure 9.4.1.a shows the RECLASS dialog box with the
files specified and the table completed.
◆ ◆ ◆
Figure 9.4.1.a RECLASS dialog box.
Remember from previous reclass operations that the > symbol indicates, in
this case, that all pixel values above 0.75 will be reclassed to 3.
In the table, 1 represents a low proportion of oak in the pixel, 2 a moderate
proportion, and 3 a dominant proportion.
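The same three-bin recoding is easy to express programmatically. In this sketch (Python/NumPy; the proportion values are invented, and the bin edges of 0.25 and 0.75 are inferred from the thresholds discussed in this section), proportions are mapped to the class codes 1, 2 and 3:

import numpy as np

# Invented proportion values standing in for the UnmixedOaks image.
proportions = np.array([[0.05, 0.30, 0.80],
                        [0.60, 0.90, 0.10]])

# Bins: below 0.25 -> 1 (low), 0.25 to 0.75 -> 2 (moderate),
# above 0.75 -> 3 (dominant).
reclassed = np.digitize(proportions, bins=[0.25, 0.75]) + 1
print(reclassed)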
◆ ◆ ◆
Recode linear unmixing images with RECLASS (cont.)
6. Click the Save as .RCL file button.
7. In the Save as window, enter the File name as
Recode_table.
8. Click on Save to close the Save as window.
9. In the RECLASS window, click on the OK button.
10. A dialog box will open with the question: Warning: The
input data contains real values. Would you like to change the
output file to integer? This question is prompted by TerrSet
having recognized that you are converting the image from a
continuous variable (numbers with decimals) to an integer
(whole numbers). Accept the default, and click on Yes.
11. The reclassed image will be displayed automatically.
12. We will now reclass the second file. In the RECLASS
window text box for Input file, double click in the Input file
text box. In the pick list, double click on UnmixedPoplar.
13. Change the Output file name to
Reclass_UnmixedPoplar.
14. Click on OK. Select Yes in response to the Warning dialog
box.
◆ ◆ ◆
Figure 9.4.1.b Unmixed Oak image reclassed into 3 classes.
So far, we have produced two 3-class images, one for each species. We will
now combine them into a single image in two steps: first we will run the
program CROSSTAB to combine the two images, and then we will run
another RECLASS to produce a single summary image.
◆ ◆ ◆
Combine the two 3-class images with CROSSTAB
Menu Location: IDRISI GIS Analysis – Database Query –
CROSSTAB
1. Start the CROSSTAB program from the main menu.
2. In the CROSSTAB window, double click in the First image
(column) text box, and in the subsequent pick list, double
click on Reclass_UnmixedOaks.
3. Double click in the Second image (row) text box, and in the
subsequent pick list, double click on
Reclass_UnmixedPoplar.
4. Enter the Output image file name as:
Crosstab_oak_poplar.
5. Accept the default for the Type of analysis (Hard
classification) and Output type (Cross-classification image).
6. Figure 9.4.1.c shows the CROSSTAB window with the files
specified.
7. Click on OK.
◆ ◆ ◆
Figure 9.4.1.c CROSSTAB window.
The resulting image is displayed automatically (Figure 9.4.1.d). The image
shows the six possible combinations of the two input files. The first number
in the legend represents the DN value in the first image (Oaks, in our
exercise), and the second the DN value in the second image (Poplar). Table
9.4.1.a lists these classes and the underlying composition of the classes. The
final column in the table is the number we will use for the new value in the
second RECLASS operation.
Figure 9.4.1.d Cross tabulation image of forest classes.
Table 9.4.1.a CROSSTAB output classes.
An interesting case is the class “2 | 3” or “3 | 2”. These are cases where
pixels are composed of exactly 25% Oaks and 75% Poplar, or 75% Oaks and
25% Poplar: because the class proportions sum to one, a pixel cannot have a
moderate proportion of one forest class and a dominant proportion of the
other except at exactly those boundary values.
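Conceptually, CROSSTAB gives every pair of input classes its own output class. The sketch below (Python/NumPy; the two toy class maps are invented) shows one way of encoding the combinations and tallying their frequencies:

import numpy as np

# Toy 3-class maps standing in for the reclassed Oaks image (first
# number in the legend) and the reclassed Poplar image (second number).
oak = np.array([[1, 2, 3], [3, 2, 1]])
poplar = np.array([[3, 2, 1], [1, 2, 3]])

# Each (oak, poplar) pair gets a unique code: with three classes in
# the second image, code = (oak - 1) * 3 + poplar.
crosstab = (oak - 1) * 3 + poplar

# Frequency of each combination, as in the cross-tabulation table.
for code, count in zip(*np.unique(crosstab, return_counts=True)):
    print(f"class {(code - 1) // 3 + 1} | {(code - 1) % 3 + 1}: {count} pixels")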
As a final step in our analysis, we will now reclass the output of the
CROSSTAB program to just four classes, using the program RECLASS.
◆ ◆ ◆
Recode the results of the cross tabulation output with
RECLASS
Menu Location: IDRISI GIS Analysis – Database Query –
RECLASS
1. Start the RECLASS program from the main menu or the
main icon tool bar.
2. In the RECLASS window, enter the Input file name by
double clicking in the text box and selecting
Crosstab_Oak_Poplar.
3. Enter the Output file as Reclass_Crosstab_Oaks_Poplar.
4. In the Reclass Parameters section of the RECLASS
window, enter the values to complete the table as indicated below:
Make sure you understand why these values were chosen
by comparing each of these classes to Table 9.4.1.a. Also,
double check that you have entered the values correctly in
the table.
5. Click on the Save as .RCL file button.
6. In the Save as window, enter the Filename as
Recode_table2.
7. Click on Save to close the Save as window.
8. In the RECLASS window, click on the OK button.
9. The reclassed image will be displayed automatically
(Figure 9.4.1.e).
◆ ◆ ◆
Figure 9.4.1.e Automatic display of the RECLASS program.
Finally, add a descriptive legend using TerrSet Explorer. Additional details
of this procedure are described in Section 7.3.7.
◆ ◆ ◆
Update the classified image with the TerrSet EXPLORER
1. Start the TerrSet EXPLORER.
2. In the TerrSet EXPLORER window, click on the tab for
Files.
3. If the files are not listed in the Files pane, double click on
the directory name to display the files.
4. In the Files pane, click on the
Reclass_Crosstab_Oaks_Poplar file.
5. In the Metadata pane below the Files pane, scroll down,
and find the blank cell next to the label Categories.
6. Double click in the blank Categories cell.
7. A Categories dialog box will open.
8. In the first cell below Code, enter 1.
9. In the cell below Category, enter Non-vegetation or
shade.
10. Find the Add line icon (the third icon on the right side of
the Categories dialog box), and click on it.
11. In the new line enter the Code 2 and the Category Oaks.
12. Repeat the previous two steps in order to enter the
remaining classes on two additional rows:
3 Yellow Poplar
4 Mixed Oak and Yellow Poplar
13. Click on OK to close the Categories dialog box.
14. The TerrSet EXPLORER Metadata pane should now have
the number 4 in the cell next to Legend cats, indicating we
have specified category names for 4 classes.
15. Click on the icon for Save, in the bottom left corner of the
Metadata pane, and accept the warning message. After the
file is saved, the icon will go blank.
◆ ◆ ◆
Finally, redisplay the classification to show the new legend.
◆ ◆ ◆
Displaying a previously created image using the DISPLAY
LAUNCHER
Menu Location: File – Display – DISPLAY LAUNCHER
1. Start the DISPLAY LAUNCHER program from the main
menu, or the tool bar.
2. In the DISPLAY LAUNCHER dialog box, double click in
the text box in the left side of the window.
3. A Pick List window will open. Double click on
Reclass_Crosstab_Oaks_Poplar.
4. Click on OK to display the image (Figure 9.4.1.f).
◆ ◆ ◆
Figure 9.4.1.f Final mixture image, with updated legend.
The final image (Figure 9.4.1.f) lends strong support to the notion that the
forest communities are mixtures. Although some areas are relatively pure,
almost one quarter of the scene appears to be a mixture of Oak and Yellow
Poplar. It is noteworthy that in Figure 9.4.1.f we have produced a map that
shows the relatively pure and mixed classes, although our training areas were
exclusively based on pure classes. It is only through the linear unmixing that
the mixed classes were identified.
Before we start the next exercise close all windows.
9.5 Soft classification with Mahalanobis
Typicalities
The Mahalanobis classifier (MAHALCLASS) calculates the similarity
between the spectral characteristics of a pixel and the spectral characteristics
of the training sites, using Mahalanobis distances. The smaller the distance in
the multivariate spectral space, the more similar the unknown pixel is to the
class. Mahalanobis distances range from zero to infinity, and are transformed
into a measure called typicality, which ranges from zero to 1. Typicalities
express the probability of finding a pixel whose Mahalanobis distance to a
particular class is greater than or equal to that of the pixel in question (the
probability of finding a pixel that is more dissimilar). A typicality of 1
therefore represents pixels with spectral characteristics equal to the mean
characteristics of the training data. The lower the typicality, the lower the
similarity with the training data. As with linear spectral unmixing, the
Mahalanobis classifier produces one output per class, plus an overall
uncertainty image.
Unlike UNMIX, where pixel values across classes added up to 1, in the
Mahalanobis classifier the sum of typicalities for a pixel can be greater than
one. This happens in cases where the pixel is similar to more than one class
(e.g. with overlapping signatures). Moreover, the sum of the typicalities can
be very low for all classes, meaning that the pixel is very dissimilar to the
mean characteristics found in all the training sites. Typicalities should be
interpreted with caution; having a low similarity to the mean spectral
characteristics of the class as represented in the training site does not
automatically indicate that the pixel does not belong to that class. In general,
only a very low (close to zero) typicality value could be considered as not
representing the class.
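The typicality transform itself is straightforward to reproduce. The squared Mahalanobis distance of a pixel x to a class with mean vector m and covariance matrix S is d2 = (x - m)' S^-1 (x - m); under a multivariate normal model this quantity follows a chi-square distribution with as many degrees of freedom as there are bands, so the typicality is its upper-tail probability. A sketch (Python with NumPy and SciPy; the class statistics are invented rather than taken from a MAKESIG signature):

import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(5)
n_bands = 6

# Invented class statistics: a mean spectrum and a covariance matrix.
mean = rng.random(n_bands)
A = rng.normal(size=(n_bands, n_bands))
cov = A @ A.T + n_bands * np.eye(n_bands)

def typicality(x, mean, cov):
    # Probability of finding a pixel even farther from the class mean.
    diff = x - mean
    d2 = diff @ np.linalg.solve(cov, diff)   # squared Mahalanobis distance
    return chi2.sf(d2, df=len(mean))

print(typicality(mean, mean, cov))   # at the class mean: typicality of 1
print(typicality(mean + 5 * np.sqrt(np.diag(cov)), mean, cov))   # far away: near 0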
A very useful characteristic of this soft classifier is that the training does not
need to exhaustively include all the classes present in the image. Unlike all
the other classifiers we have seen, MAHALCLASS allows incomplete
training classes, and can even classify a single class.
Since we already created the training sites and signatures, we can directly run
the module MAHALCLASS to obtain the soft classification for each class.
◆ ◆ ◆
Classify an image with MAHALCLASS
Menu location: IDRISI Image Processing – Soft
Classifiers/Mixture Analysis – MAHALCLASS
1. Start the MAHALCLASS module from the menu (Figure
9.5.a).
2. In the MAHALCLASS dialog box, click on the Insert
signature group button.
3. In the Pick list, double click on the Sig_All signature group
file.
4. In the MAHALCLASS dialog box, specify the Output prefix
as Mahal.
5. Click OK to run the classification.
◆ ◆ ◆
Figure 9.5.a MAHALCLASS module.
After running, the module will automatically display the classification
uncertainty image. Note that the areas of the river and road, for which we did
not have training data, have high uncertainties.
Now we will display all output images as a stack of layers. First close the
image that was automatically displayed.
◆ ◆ ◆
Displaying multiple images with TerrSet EXPLORER
1. Open TerrSet EXPLORER.
2. Click on the image MahalClu.
3. Press the Ctrl key of your keyboard and, without releasing
it, click on MahalOaks, then click MahalPoplar, and then
click on MahalShadow. You should have the four images
highlighted in blue.
4. Now release the Ctrl key and right click on the selection.
5. Select the option Add Layer(s) (or hit the Ins key while
pressing the Shift key: Shift+Ins).
6. The four images should now be added within the same map
composition window.
◆ ◆ ◆
The map composition will have the classification uncertainty image displayed
on top (Figure 9.5.b), since this was the first image selected. We will now
explore the Mahalanobis typicality values in different areas of uncertainty.
◆ ◆ ◆
Exploring pixel values across images with the IDENTIFY
tool
1. Activate the Identify tool by clicking on its icon in the main
toolbar.
2. Click on different pixels and look at their values across
outputs in the Identify window (Figure 9.5.b). Try selecting
pixels with low, medium and high uncertainty.
◆ ◆ ◆
Figure 9.5.b Multiple images displayed in Composer and Identify tool report
after clicking on the road.
After exploring the output typicality for each class, and the classification
uncertainty, you might have noticed a pattern. The more similar the typicality
values across all classes (MahalOaks, MahalPoplar, and MahalShadow),
the higher the uncertainty (MahalClu). The uncertainty image reflects not
only the degree of commitment to one class, but also the dispersion of
typicality values across classes: if all classes have the same or very similar
typicality for a pixel, the result is high uncertainty. Areas of high uncertainty
can be used to identify either missing classes (areas with very low typicality
values across all classes) or classes with low separability (high typicality
values across all classes). Information from this classification can therefore
be used to refine the supervised classification by adding new sites, or by
refining or modifying existing training samples.
9.5.1 Summarize the outputs
We will now use RECLASS to combine the results from the Oak and Poplar
classes. If the two classes are mixed, we expect the pixel to have some
similarities to both Oak and Poplar.
◆ ◆ ◆
Recode Mahalanobis Typicality images with RECLASS
Menu Location: IDRISI GIS Analysis – Database Query –
RECLASS
1. Open the RECLASS program from the main icon bar or the
main menu.
2. In the RECLASS window, enter the Input file name by
double clicking in the text box and selecting MahalOaks.
3. Enter the Output file as Reclass_MahalOaks.
4. In the Reclass Parameters section of the RECLASS
window, enter the values to complete the table as indicated
below*:
*Figure 9.5.1.a shows the RECLASS dialog box with the files
specified and the table completed.
◆ ◆ ◆
In the following steps, we will use RECLASS to create two new images, one
to represent areas where Oaks are present (i.e. have some degree of similarity
with the Oaks class), and one where Yellow Poplar is present. The next step
is to use CROSSTAB to identify areas where both Oak and Poplar species are
present, and therefore represent mixed communities.
Since the result of Mahalanobis typicalities is a measure of similarity and not
proportions, we cannot use the same thresholds identified for the unmixing
classification in exercise 9.4. In the introduction to this section we explained
that a low typicality does not necessarily mean that the class is absent.
Because of this, to classify the typicality image into one where pixels are
assigned to either the Oaks or no Oaks class, we chose a very low threshold.
In this reclassification, we are assigning a value of 1 to areas with typicalities
less than 0.07, i.e., to areas that do not contain the class. Areas with some
degree of typicality are assigned a value of 2, representing pixels that do
contain the class.
Figure 9.5.1.a RECLASS dialog box.
◆ ◆ ◆
Recode Mahalanobis Typicality images with RECLASS (cont.)
5. In the RECLASS window, click on the OK button.
6. A dialog box will open with the question: Warning: The
input data contains real values. Would you like to convert the
output file to integer? This question is prompted by TerrSet
having recognized that you are converting the image from a
continuous variable (numbers with decimals) to an integer
(whole numbers). Accept the default, and click on Yes.
7. The reclassed image will be displayed automatically
(Figure 9.5.1.b).
8. We will now reclass the second file. In the RECLASS
module text box for Input file, double click in the Input file
text box. In the pick list, double click on MahalPoplar.
9. Change the Output file name to Reclass_MahalPoplar.
10. Click on OK and select Yes to the warning message.
◆ ◆ ◆
Figure 9.5.1.b Reclassed Typicalities into Oaks and no Oaks.
We can now run the CROSSTAB operation to find out which areas are only
Oaks, Oaks and Poplar, only Poplar, or none of them.
◆ ◆ ◆
Combine the two class images with CROSSTAB
Menu Location: IDRISI GIS Analysis – Database Query –
CROSSTAB
1. Start the CROSSTAB program from the main menu.
2. In the CROSSTAB window, double click in the First image
(column) text box, and in the subsequent pick list, double
click on Reclass_MahalOaks.
3. Double click in the Second image (row) text box, and in the
subsequent pick list, double click on Reclass_MahalPoplar.
4. Enter the Output image file name as:
MahalCrosstab_oak_poplar.
5. Accept the default for the Type of analysis (Hard
classification) and Output type (Cross-classification image).
6. Click on OK. The CROSSTAB output with the combination
of classes will be displayed (Figure 9.5.1.c).
◆ ◆ ◆
Figure 9.5.1.c Cross tabulation of forest classes identified from
MAHALCLASS.
In the output cross tabulation, class “1 | 1” shows areas with very low
typicalities for both Oaks and Poplar, and therefore represents non-vegetation
or shade areas. The second class (“2 | 1”) shows areas with similarity to
Oaks but not to Poplar, and therefore represents the pure Oak community.
The third class (“1 | 2”) identifies areas similar to Poplar but not to Oaks,
representing the pure Poplar community. Finally, class “2 | 2” represents
areas with similarities to both the Poplar and Oaks classes, and is therefore
the mixed Oaks and Yellow Poplar community.
Finally, we will add a descriptive legend. In this case we will use a shortcut
to update the legend. This shortcut works well when an image already has a
legend defined (as is the case here) and we want to update or modify it.
◆ ◆ ◆
Update the Classified Image Legend
1. Click on the MahalCrosstab_oak_poplar window to bring
it to the front.
2. Right click on the color chip corresponding to the first
class. A set of options will be displayed.
3. Select the option Update Legend.
4. In the Edit Legend window, under Update current legend
caption, delete the old legend (1 | 1) and type Non-vegetation
or shade (Figure 9.5.1.d).
5. Click OK.
6. Right click on the color chip for the second class (2 | 1),
and select the option Update Legend.
7. In the Edit Legend window, under Update current legend
caption, delete the old legend and type Oaks.
8. Click OK.
9. Repeat the above steps to change the third category to
Yellow Poplar and the fourth category to Mixed Oak and
Yellow Poplar (Figure 9.5.1.e).
◆ ◆ ◆
Figure 9.5.1.d Edit Legend window for Update Legend shortcut option
Figure 9.5.1.e Cross tabulation output with updated legend.
The final image (Figure 9.5.1.e) shows the pure and mixed communities
identified with MAHALCLASS. Let’s compare it to the result produced
with UNMIX (Figure 9.4.1.f). Although the general pattern is similar, there
are many differences between the two images. The result from
MAHALCLASS has more area defined as Non-vegetation and Mixed than
the result from UNMIX. This is due to the differences between the types of
output produced (proportion vs. typicality), and to the thresholds used to
define the Oak and Poplar classes in Section 9.5.1. You can try changing the
thresholds used in the definition of the Oak and Poplar classes, to see
whether that decreases the amount of mixed area (note that although we used
the same threshold for both classes, the two thresholds do not have to be
equal).
CHAPTER 10
CLASSIFICATION ERROR
ASSESSMENT
10.1 Introduction
Chapters 7 and 8 introduced classification methods. These chapters should
be completed before beginning Chapter 10. In this chapter, we focus on error
analysis. There is a strong tradition in remote sensing classification of always
conducting an error analysis. The error analysis provides a statement
regarding the reliability of the classification, and is therefore essential
information for the map user.
The error should be estimated using an independent source of information to
provide a check of selected points. Ideally, the independent data source would
be based on a field visit. Oftentimes, limited access, expense, or changes in
land cover over time make a field visit impossible. Therefore, a visual
interpretation of aerial photography or some other high resolution imagery
(e.g. from Google Earth) is often a more practical alternative.
10.2 Download Data for this Chapter
If you have not done so already, download the data from the Clark Labs’
website for Chapter 10 and place it into a new subfolder within the \RSGuide
folder on your computer. Section 1.3.1 provides detailed instructions on how
to download the data. The procedure for setting up the RSGuide folder on
your computer is also described.
If you wish to use your own classification data (that were produced in
Chapters 7 and 8) for this error analysis exercise, you should copy the data,
including the raster file and associated metadata (e.g. MLP.rst and MLP.rdc
from the \RSGuide\Chap7-8 folder) to the folder for this exercise:
\RSGuide\Chap10.
10.3 Classification error analysis
10.3.1 Overview
Figure 10.3.1.a provides an overview of the error analysis procedure: sample
points are selected, the correct land cover class for those points is determined
independently, and then those points are used to estimate the overall
classification error. The figure also shows the main TerrSet programs we will
use to conduct the error analysis.
Figure 10.3.1.a Overview of the classification error analysis procedure.
10.3.2 Preparation
In Section 10.2 you should have already copied the data. However, we still
need to set the Project and Working Folders for this section.
Before starting you should close any dialog boxes or displayed images in the
TerrSet workspace.
◆ ◆ ◆
Create a new project file and specify the working folders
with the TerrSet EXPLORER
1. Start the TerrSet EXPLORER.
2. In the TerrSet EXPLORER window, select the Projects tab.
3. Right click within the Projects pane, and select the New
Project Ins option.
4. A Browse For Folder window will open. Use this window
to navigate to the RSGuide folder, which is the folder you
created on your computer for the data for this manual. Now
navigate and select the Chap10 subfolder.
5. Click OK in the Browse For Folder window.
6. A new project file Chap10 will now be listed in the Project
pane of the TerrSet EXPLORER. The working folder will also
be listed in the Editor pane.
◆ ◆ ◆
Check that the project directory has been set correctly by displaying the
images, as described below. If TerrSet is not able to find the image, then you
have not set the directory correctly. The DOQQ.rst is a false color composite
mosaic of USGS digital orthophoto quarter quadrangles (DOQQs) with 1
meter pixels. The Maxlike1.rst image is the improved maximum likelihood
classification, produced in Section 8.6.2.
◆ ◆ ◆
Initial display of images
Menu: File – Display – DISPLAY LAUNCHER
1. Start the DISPLAY LAUNCHER from the main menu or
icon bar.
2. In the DISPLAY LAUNCHER window, double click in the
text box for the file name, and then double click on the file
DOQQ.
3. Click on OK to display the image (Figure 10.3.2.a).
4. Start the DISPLAY LAUNCHER again.
5. In the DISPLAY LAUNCHER window, double click in the
text box for the file name.
6. From the pick list that will open, double click on either the
file you have copied from the Chap7-8 directory or
maxlike1.
7. Double click in the Palette file text box, and then in the
resulting pick list, double click on Sup-6.
8. Click on OK to display the image (Figure 8.6.2.b).
◆ ◆ ◆
Figure 10.3.2.a False color DOQQ mosaic of Morgantown, WV.
Compare the two images you have just displayed. The DOQQ is a false color
image. TerrSet automatically scales an image so that the entire image can be
displayed on your monitor. Therefore, even if you have a very large monitor,
you are probably not seeing the full resolution of the data. However, it is easy
to zoom in to see more detail in selected areas.
In standard false color images, red typically represents green vegetation. The
DOQQ photographs were acquired in the very early spring (April), before the
deciduous trees had leafed out. Therefore, the deciduous trees are a rather
dark red or even magenta to blue, and evergreen vegetation has a very strong
red color. Grass typically has a light red color, trending to pink or even white
where the grass is sparse.
An interesting feature of Figure 10.3.2.a is the small patches of white, and
occasionally red, in the water. The white areas are sun-glint, and not rapids or
other features in the water. The red features in the water are related to the sun
glint and the mosaicking process, and are therefore also artifacts.
10.3.3 Generate the Random Sample
◆ ◆ ◆
Generate a random sample of test points with SAMPLE
Menu Location: IDRISI Image Processing – Accuracy
Assessment – SAMPLE
1. Start the SAMPLE program from the main menu.
2. In the SAMPLE dialog box, double click in the text box
labeled Reference image.
3. Double click on the name of the image which will be
evaluated for its accuracy (maxlike1, if you are using the
image provided).
4. From the list under the heading Sampling scheme, select
the option for Stratified random.
◆ ◆ ◆
As you can see from the SAMPLE dialog box, TerrSet provides three sample
selection strategies:
1. Random: the points are distributed randomly in no clear pattern
throughout the entire image. Some will randomly be close together, some
farther away from each other.
2. Systematic: the points are distributed an equal distance apart from each
other across the entire image. This is a dangerous option to select if there is
any structure to your data. For example, in an agricultural landscape, if the
fields are a typical size, systematic sampling may result in over-sampling or
under-sampling some cover types.
3. Stratified random: the points are distributed randomly, within a
geographic stratum, such as a land cover class. A related approach is
stratified, unaligned, systematic sampling (Chuvieco 2016). The strata used
are created simply by applying a matrix or grid to the image; each grid cell is
then sampled randomly, as sketched below.
It is important to be aware that classes that are relatively rare (in our
classified image, this would include the Water, Commercial and Coal classes)
will have fewer samples than classes that are more common. TerrSet allows
for sampling each class independently by specifying a reference image as a
mask.
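As a small illustration of the grid-based stratification used by the stratified random scheme, the sketch below (Python/NumPy, outside TerrSet; the image size and grid dimensions are invented, and this is not necessarily how SAMPLE works internally) draws one random pixel location from each cell of a grid laid over the image:

import numpy as np

rng = np.random.default_rng(6)

rows, cols = 600, 600   # illustrative image size in pixels
grid = 6                # a 6 x 6 grid of geographic strata

points = []
for i in range(grid):
    for j in range(grid):
        # One random location within each grid cell (stratum).
        r = rng.integers(i * rows // grid, (i + 1) * rows // grid)
        c = rng.integers(j * cols // grid, (j + 1) * cols // grid)
        points.append((r, c))

print(len(points), "stratified random points; first five:", points[:5])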
◆ ◆ ◆
Generate a random sample of test points with SAMPLE
(cont.)
5. Enter the number of points as 30.
6. Enter the output vector file name: sample.
7. Figure 10.3.3.a shows the dialog box with the parameters
completed.
8. Click on OK. A blank image with 30 points (the stratified
random sample) indicated by small black dots will be
generated.
◆ ◆ ◆
Figure 10.3.3.a The SAMPLE dialog box.
A total of just 30 points is probably much too low for a reliable estimate of
the accuracy of the classification. At least 100 points would be a
recommended minimum, with a number closer to 300 preferred. However,
this exercise with 30 points should at least illustrate the procedures involved,
and give an approximate idea of the overall map accuracy. We should be
particularly cautious about interpreting the accuracy of the individual classes,
however, as some classes will have very few, if any, test samples. This is a
consequence of not stratifying by class. Note that the resulting image may not
have exactly 30 points due to the geographic stratification. If that is the case,
re-run SAMPLE until you get 30 points.
10.3.4 Interpret the True Land Cover Class
for Each Sample Point
The next step involves careful image interpretation. You will examine the
DOQQ to determine the “true” land cover for each of the 30 random points.
Because each random sample generation is unique, your selection of 30 points
will not be the same as the one used for illustration in this manual. Therefore, in
the next section, only the general procedure is described. You will have to
use your best judgment in interpreting the DOQQ.
We will record the true land cover class (coded by the DN values of each
class) in a simple text file, using the program EDIT. The values will be stored
in an Attribute Values File (AVL extension).
The final AVL text file will have the point number (from 1 to 30), followed
by the class number. For example, if point 1 was associated with class 3, and
point 2 with class 5, then the first two lines of your table will be
1 3
2 5
Note that each point is on a new line and there is a single space between the
point number, and the associated class. Since you have 30 points, your file
will have 30 lines. At this initial stage we will only list the point numbers; we
will add the interpreted class numbers subsequently.
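If you prefer, the skeleton of the values file can be generated programmatically rather than typed. A minimal sketch (plain Python; note that TerrSet also expects the accompanying documentation that EDIT creates when it saves the file, so this only illustrates the text layout):

# Write one point number per line, from 1 to 30; the interpreted
# class code is appended after each number later (e.g. "1 3").
with open("landcover.avl", "w") as f:
    for point in range(1, 31):
        f.write(f"{point}\n")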
◆ ◆ ◆
Build the initial values for the recode values table with
EDIT
Menu location: File – Data Entry – EDIT
1. Start the EDIT program from the main menu or the main
icon tool bar.
2. The TerrSet TEXT EDITOR window will open.
3. In the TerrSet TEXT EDITOR, sequentially enter each
sample point number, followed by a carriage return (“Enter”
on your keyboard), starting with 1 and ending with 30. Thus
each point number will be on its own line.
4. Use the menu in the TerrSet TEXT EDITOR window to
initiate saving the file: File – Save As.
5. In the Save file window, select from the Save as type pull-
down list Attribute values file.
6. In the File name text box, enter landcover. The file will
automatically be given an AVL extension.
7. Click on Save.
8. A Values File Information window will open. In this new
window, select the radio button for Integer, and click on OK.
9. Figure 10.3.4.a shows the landcover.AVL file in the EDIT
window.
10. Do not close the TerrSet TEXT EDITOR window, as we
will enter our land use interpretations directly into the file.
◆ ◆ ◆
Figure 10.3.4.a TerrSet TEXT EDITOR window with the Landcover.AVL
file.
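Because the AVL file is plain text, it could equally well be created outside of TerrSet. As a minimal illustration only (the file name follows this exercise; TerrSet does not require this approach), the 30-line skeleton could be written in Python as:

# Write the initial landcover.AVL skeleton: one point number per line.
# The interpreted class codes are added later as a second column.
with open("landcover.avl", "w") as f:
    for point in range(1, 31):
        f.write(f"{point}\n")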
We are now ready to start the interpretation. Table 10.3.4.a gives a short
description of each class, as well as the associated code (DN value). The DN
for each class is the same as the scheme used in Chapter 8 and thus for the
maxlike1.rst image.
Table 10.3.4.a Maxlike class numbers, to be used in coding the random
points generated by the SAMPLE program.
◆ ◆ ◆
Display vector file as overlay on DOQQ
1. If the file DOQQ.rst is not still displayed from Section
10.3.2, open it now, following the directions in that section.
Otherwise, click on the DOQQ display, to give it focus (bring
it to the front).
2. Find the Composer dialog box, which is automatically
opened when an image is displayed.
3. In the Composer dialog box, click on the button to Add
Layer.
4. In the Add Layer dialog box, double click in the left hand
text box.
5. A pick list will open; double click on the file sample.
6. Click on OK to close the dialog box, and add the points as
an overlay to the DOQQ image. However, at this stage the
points are too small to be visible. We will therefore make
them clearer by making them much larger.
7. In the Composer dialog box, click on the Layer Properties
button.
8. The Layer Properties dialog box will open. In this new
dialog box, click on the button for Advanced Palette/Symbol
Selection.
9. The Advanced Palette/Symbol Selection dialog box will
open.
10. In this new dialog box, in the Data Relationship area,
select Qualitative.
11. In the area labeled Symbol Size, select the radio button for
16.
12. Click on OK.
13. The vector points should now be displayed on the DOQQ
as very large circles. Figure 10.3.4.b shows an example of
what your display should look like. Note, however, that the
specific locations of your sample points will be different,
since the selection of locations is supposed to be random.
◆ ◆ ◆
Figure 10.3.4.b False color DOQQ showing the 30 sample points
overlaid. Note the Composer and Layer Properties dialog boxes. In the
Composer dialog box, the sample layer is highlighted.
Now that we have set up the TerrSet TEXT EDITOR window and the display
of the DOQQ with the SAMPLE overlay, we are ready to compile the list of
land use / land cover interpretations. In many cases, determining the correct
value to assign to a point is difficult. What if the point falls on the
boundary of two classes? Technically, you should estimate the dominant land
use class over the 30 meter Landsat pixel centered on the point. Since the
DOQQ has 3 meter pixels, this would be a 10 by 10 pixel window. This is
actually rather difficult to do, so we usually just take the land use class
directly at the point itself.
Though deciding on the class for each sample point can be hard, the exercise
illustrates well why classification is so complex. For example, you
may find a point that falls in a large patch of trees in a residential area. The
land use is clearly Residential (and this is how you should code the
point). Nevertheless, the land cover, which is what is observed by the remote
imaging device, is Forest.
If you truly don’t know how to label a pixel, you could label it 0, which will
effectively delete that point. Use this approach with caution, though, as it will
reduce your number of sample points, and potentially bias your results.
Note: If you get RGB values instead of the point number when you try to
query a point as described in the next section, make sure that the sample
vector layer is highlighted in the Composer window (Figure 10.3.4.b).
◆ ◆ ◆
Interpret land use / land cover class for each point
1. In the DOQQ image display, select each sample point in
turn. For example, start in the bottom left corner. Query the
value of the sample point by clicking on the circle
representing the sample in the image display. Note the value
that will be indicated for this point. (It should be a value
between 1 and 30.)
2. Now select the Zoom Window icon from the main tool bar,
and zoom in to an area around the sample point. You may
need to zoom in several times until you are satisfied with the
display. You should zoom in enough that you can clearly see
the land cover, but not so much that you don’t see the
context.
3. You may also find it useful to maximize the window using
the Full extent maximized icon.
4. Photo-interpret the land use / land cover class, and identify
the correct DN code for the class (Table 10.3.4.a).
5. Find the TerrSet TEXT EDITOR window, and scroll down
to the correct line in the file for the point you have just
worked with (i.e. the sample number, from 1 to 30).
6. Next to the correct sample number, enter a space and then
the DN code for the land use / land cover.
7. Figure 10.3.4.c shows sample 1, with the interpreted land
cover (Residential, coded as 5), entered in the AVL file. (Note
that your sample #1 will not be in the same location, and
therefore will not necessarily have the same land use / land
cover class number.)
8. Return the Display window to the full image display, by
clicking on the icon for Full extent normal from the main
TerrSet tool bar.
9. Iterate through the process for interpreting each point (steps
1 through 8 above) until you have developed a complete table
of the “true” land cover class for each of the 30 samples.
10. Figure 10.3.4.d shows the completed table. (Note that
your table will have a different set of land use / land cover
classes, because your sample will be different.)
11. Save the text file by using the TerrSet TEXT EDITOR
main menu: File – Save.
12. A Values File Information window will open. In this new
window, select the radio button for Integer, and click on OK.
◆ ◆ ◆
Figure 10.3.4.c A zoomed in window around a sample, and the associated
value for Residential entered in the landcover.AVL file. (Note that your
sample 1 will not be in the same location as shown.)
Figure 10.3.4.d Example of completed landcover.AVL file. Your list will
have different land cover class numbers in the second column.
10.3.5 Rasterize and Recode the Sample
Points
Now that we have interpreted what each sample represents, we need to
convert our vector file to a raster file. This raster file will have zeros
everywhere, except where the samples are located. For the samples, the DN
value will initially be equal to the sample number. We will then use the text
landcover.AVL file to recode the values in this image to the “true” values.
◆ ◆ ◆
Rasterize the sample vector file with RASTERVECTOR
Menu Location: File – Reformat – RASTERVECTOR
1. Open the RASTERVECTOR program from the main
menu.
2. In the RASTERVECTOR dialog box, select the radio button
for Vector to Raster.
3. Select the radio button for Point to Raster.
4. Select the radio button for Change cells to record the
identifiers of points.
5. Click on the file pick list button (…) next to the Vector
point file text box.
6. In the pick list window that will open, double click on the
sample vector file.
7. In the Image file to be updated text box enter
point_locations.
8. Figure 10.3.5.a shows the RASTERVECTOR dialog box
completed.
9. Click on OK.
10. A dialog box will open with the following message:
Image to be updated (point_locations.rst) does not exist. Bring
up INITIAL to create this image?
11. Click on Yes.
12. The window for the program INITIAL will open. This
program creates a blank file, into which the rasterized vector
points are subsequently inserted.
13. In the INITIAL window, find and double click in the text
box for Image to copy parameters from.
14. In the Pick list that will open, double click on maxlike1.
15. Figure 10.3.5.b shows the INITIAL window with the
parameters selected.
16. Accept all other defaults, and click on OK.
17. The rasterized sample image will open in a new DISPLAY
viewer. The points themselves are very small, and may be
hard to see, and thus the image may appear to be almost
entirely black (DN = 0).
◆ ◆ ◆
Figure 10.3.5.a RASTERVECTOR window.
Figure 10.3.5.b INITIAL window.
The rasterized image that you have created will have a DN value equal to the
sample number (1 to 30), or zero if there is no sample at that location. For our
error assessment we need the DN value for the “true” land use / land
cover. Therefore, we will now use the Attribute Values file to create a new
image of the sample points, with values according to the true classes of those
points.
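Conceptually, ASSIGN is a table lookup: each pixel’s sample number is replaced by the class code listed against that number in the attribute values file. The numpy sketch below illustrates the idea with the two example values from the text above (an illustration only, not TerrSet’s implementation):

import numpy as np

# Hypothetical rasterized sample image: 0 = background,
# 1-30 = sample numbers.
point_locations = np.zeros((100, 100), dtype=int)
point_locations[10, 20] = 1
point_locations[55, 70] = 2

# Attribute table as read from landcover.AVL: point number -> class code
# (point 1 = class 3, point 2 = class 5, as in the earlier example).
avl = {1: 3, 2: 5}

# Build a lookup array; index 0 (background) stays 0.
lut = np.zeros(31, dtype=int)
for point, cls in avl.items():
    lut[point] = cls

truth_sample = lut[point_locations]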
◆ ◆ ◆
Create new image with land cover class values using
ASSIGN
Menu location: File – Data Entry – ASSIGN
1. Use the main TerrSet menu to start the ASSIGN program.
2. In the ASSIGN window, select the radio button for Raster
file.
3. Double click in the Feature definition image text box, and
in the subsequent pick list, double click on point_locations.
4. In the textbox for Output image, enter truth_sample.
5. Double click in the Attribute values file text box, and in the
subsequent pick list, double click on landcover.
6. In the Output title text box, enter Photo-interpreted land
use / land cover.
7. Figure 10.3.5.c shows the ASSIGN text box with
parameters selected.
8. Click on OK, and the new image of sample points will be
displayed automatically. Again, the image may appear to be
entirely black, with the points hardly visible.
◆ ◆ ◆
Figure 10.3.5.c ASSIGN window.
10.3.6 Calculate the Classification Accuracy
using ERRMAT
In the previous steps we created an image that is blank everywhere, except
for the 30 pixels for which we determined the “true” land cover from the
visual interpretation of the DOQQ mosaic. We can now overlay this image
with the classification to determine the overall accuracy of the latter.
◆ ◆ ◆
Calculate classification accuracy using ERRMAT
Menu Location: IDRISI Image Processing – Accuracy
Assessment – ERRMAT
1. Use the main TerrSet menu to start the ERRMAT program.
2. In the ERRMAT window, double click in the text box next
to Ground truth image.
3. In the pick list that will open, double click on
truth_sample.
4. Double click in the Categorical map image text box.
5. In the Pick list that will open, double click on the name of
the file you would like to evaluate the accuracy of. If you are
using the classified image from Chap10, this will be
maxlike1.
6. Figure 10.3.6.a shows the ERRMAT window with the files
specified.
7. Click on OK.
8. The accuracy assessment results will open in a new text
window, labeled Module Results. Note that you can save the
results to a file, or to the clipboard, thus facilitating pasting
the results in a text editor.
◆ ◆ ◆
Figure 10.3.6.a ERRMAT window.
Figure 10.3.6.b Example of ERRMAT output.
The ERRMAT accuracy assessment shows a detailed breakdown of how each
of your sample pixels in the “truth” file compares to its assignment in the
classified file. Figure 10.3.6.b shows an example output. Remember your
results will be different, as your random sample is different.
The resulting error matrix is normally expected to be square; however, we can
see in Figure 10.3.6.b that, because no random samples were identified in the
photo interpretation as the Coal class (DN = 6), the last column of the table is
absent.
In the error matrix, correctly classified pixels are listed down the diagonal.
Because of the missing column in Figure 10.3.6.b, the diagonal is a bit
difficult to identify. An error matrix indicating a 100% accurate
classification would have zeros everywhere except on the diagonal. Errors are
the non-zero values that are not on the diagonal.
The error of the overall classification is the proportion of incorrectly
classified test pixels. In Figure 10.3.6.b, the overall error is reported as 0.40
(i.e. 40%) in the bottom right cell in the main table. A 40% error rate is
higher than that normally desired for a remote sensing classification. Bear in
mind however that we only evaluated 30 points. Adding more points may
produce a different portrait of classification accuracy.
The Overall Kappa statistic is an attempt to adjust the accuracy for the
anticipated chance agreement that could be obtained by just randomly
assigning pixels to classes. The Kappa value is thus lower than the overall
accuracy. For Figure 10.3.6.b, the accuracy is 18/30 = 0.60 (i.e. 1.0 minus the
error, or 1.0 – 0.4 = 0.6). The reported Kappa is only 0.47, 0.13 lower than
the accuracy.
Many remote sensing scientists feel that Kappa provides crucial insight into
the success of a classification. This is because it provides a measure of how
much better than random the classification is, and also because it allows
calculation of confidence intervals around the derived value (Congalton and
Green, 2009). Nevertheless, the use of Kappa in remote sensing is very
controversial, with suggestions that its value is limited at best, and potentially
even misleading at worst (Stehman and Foody, 2009; Pontius and Millones,
2011).
When it comes to the pixels off the diagonal, there are two types of errors -
errors of commission, and errors of omission. For example, say you classified
all of the Morgantown image as Water. Clearly your errors of omission for
Water would be low, since you would not omit a single pixel that should have
been Water, because the whole classification is assigned to Water! However,
the errors of commission would be high, since you would commit an error in
calling the rest of Morgantown Water, when it clearly is not.
If we look at the commission errors (rows), we see that mapped class 5
(Residential) has a commission error of 0.73. This means that 73% of the
areas classified as residential were in fact something else. It also means that
27% of the areas identified as residential were in fact that category. This
latter value is called the User’s accuracy, and is calculated as 1 minus the
commission error. The user’s accuracy gives us the probability that a
pixel assigned to a class actually belongs to that category. Imagine you are
interested in visiting residential areas using this map. The commission error
tells us that if we visit a location classified as residential, 73% of the time we
will find a different category.
Omission errors (columns) tell us a different story about the
classification. They give us an indication of how well the different land cover
types were classified. If we look at the forest (class 2), it has a 0.23 omission
error. This means that 23% of the real forest areas were classified as
something else. It also means that 77% (1 minus the omission error) of the
forests were classified correctly. This measure is called the Producer’s
accuracy. The producer’s accuracy allows the producer of the map to identify
problem categories and improve the classification. For example, for forest we
see that of the 13 areas that were forest, 3 (23%) were classified as
residential. Therefore, our classification is confusing the forest and residential
classes.
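All of these statistics are simple arithmetic on the error matrix. As a purely conceptual illustration (the 3 by 3 matrix below is invented and does not correspond to Figure 10.3.6.b; ERRMAT computes these values for you), a Python sketch of the calculations might be:

import numpy as np

# Invented 3 x 3 error matrix: rows = classified map, columns = reference.
errmat = np.array([[10, 2, 1],
                   [3, 8, 0],
                   [0, 1, 5]], dtype=float)

n = errmat.sum()
diag = np.diag(errmat)

overall_accuracy = diag.sum() / n               # 23/30, about 0.77
users_accuracy = diag / errmat.sum(axis=1)      # 1 - commission error
producers_accuracy = diag / errmat.sum(axis=0)  # 1 - omission error

# Kappa: observed agreement adjusted for expected chance agreement.
p_o = overall_accuracy
p_e = (errmat.sum(axis=1) * errmat.sum(axis=0)).sum() / n**2
kappa = (p_o - p_e) / (1 - p_e)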
Although we have discussed in detail the table shown in Figure 10.3.6.b, it is
probably best not to pay too much attention to the errors of commission and
omission, since the number of pixels for each class is so small. In order to
extract meaningful conclusions about the errors in the image, we need an
appropriate proportion of each class sampled. An appropriate sample size can
be calculated from binomial probability theory (Jensen 2016), as a function
of the expected percentage accuracy of the map (p), the expected percentage
of errors (q, i.e., q = 100 - p), the maximum error allowed (L), and z, the
standard normal score for the desired level of confidence (e.g. z = 1.96
represents a 95% confidence interval): N = (z² × p × q) / L².
This means that, if we desire to identify the number of samples we need, to
evaluate our classification with a 95% confidence, assuming that we allow no
more than 10% error in the estimate and predict an 80% accuracy as likely,
our calculation will be:

N = (1.96² × 80 × 20) / 10² ≈ 61.5

meaning that we need at least 62 points. The lower the maximum error
allowed, the larger the number of samples. For example, if we decrease our
maximum allowed error to 5%, we would need 246 samples.
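If you would like to experiment with other values of p, L, or z, the formula is easy to script (a plain Python function, not a TerrSet module):

def sample_size(p, q, L, z=1.96):
    """N = z^2 * p * q / L^2, with p, q and L expressed in percent."""
    return z**2 * p * q / L**2

print(sample_size(80, 20, 10))  # about 61.5, so at least 62 points
print(sample_size(80, 20, 5))   # about 245.9, so 246 points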
As a final step in this exercise, you may want to calculate the accuracy of
your MLP and parallelepiped classifications, and compare them to your
maximum likelihood classification. Carrying out such a task is quite
straightforward, since after copying the *.rst and *.rdc files to your
\RSGuide\Chap10\ directory, you can immediately run the ERRMAT program
(this Section, 10.3.6), just replacing the categorical map image with the
corresponding classification that needs to be evaluated (MLP.rst or PIPED-
z.rst). You do not need to replicate the preparatory steps of generating the
random sample, rasterizing it, and recoding it to the correct land use / land
cover classes.
CHAPTER 11
IMAGE CHANGE ANALYSIS
Change detection is one of the principal applications of remote
sensing. Reasons for this include:
• Archived images are often the only available record of past conditions.
• Remote sensing can be very effective and accurate for identifying
change.
• There is growing interest in monitoring change from local to global
scales.
In principle, change detection is very simple: changes in image brightness
values are assumed to represent change in ground conditions.
Broadly speaking, there are two types of change: change in degree, and
change in class. Change in degree is a change in some continuous variable,
such as biomass, whereas change in class implies a change from one material
to another, such as from a forest to a road.
Change detection methods can be grouped into two classes: spectral analysis of
change, and post-classification comparison of previously classified
data. Post-classification is the simplest to understand, and is based on a GIS
overlay operation. Spectral analysis of change involves a comparison of the
image brightness values of the two dates. Spectral analysis of change
generally results in one or more continuous variables of change, though often
the results are summarized into a few change classes.
The case study for this exercise is imagery of Las Vegas, Nevada. Las Vegas
was the fastest growing city in the U.S. during the last couple of decades of
the twentieth century, more than tripling in population between 1980 and
2000, from 460,000 to 1.6 million (including Las Vegas and nearby
communities). The growth of the city has been dominantly radial, outwards from the center. Most
new construction, and consequently most of the land cover change, occurs on
the fringes of the city. The arid climate also makes for excellent visibility for
remote sensing.
In this exercise, we will investigate post classification comparison and three
spectral change analysis techniques: image subtraction, principal component
analysis, and change vector analysis. These four methods are described in
more detail below. However, as with the other chapters, the reader is urged to
consult a remote sensing text (Table 1.1.2.a) or Warner et al. 2009b for
additional information, as the material below is only a very brief summary of
these topics.
11.1 Introduction
11.1.1 Change Detection Methods
11.1.1.1 Image Subtraction
Image subtraction involves the direct comparison of the DN values of two
dates. TerrSet offers four different outputs:
1. The raw differenced values.
2. The differenced values as a percentage of the original value of the pixel.
3. Change values expressed as z-scores, where the change value is expressed
in terms of the number of standard deviations it is from the mean.
4. Change value z-scores grouped into 6 classes of change.
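These four outputs reduce to a few lines of array arithmetic. The numpy sketch below is purely conceptual (the two bands are invented random arrays, and the six class breaks shown are an assumption for illustration, not necessarily the breaks TerrSet uses):

import numpy as np

rng = np.random.default_rng(0)
date1 = rng.random((100, 100))  # hypothetical earlier-date band
date2 = rng.random((100, 100))  # hypothetical later-date band

diff = date2 - date1                   # 1. raw differenced values
pct = 100 * diff / date1               # 2. difference as percent of original
z = (diff - diff.mean()) / diff.std()  # 3. difference as z-scores

# 4. z-scores grouped into six classes, e.g. with breaks at -2, -1, 0, 1, 2.
classes = np.digitize(z, bins=[-2, -1, 0, 1, 2])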
11.1.1.2 Principal Component Analysis (PCA)
Principal component analysis (PCA) was originally introduced in Section
5.3.3 as a spectral enhancement technique. As explained in that section, PCA
is a rotation and translation of the band axes. The new bands are orthogonal
and uncorrelated. The first band is oriented to capture the maximum
variance. Subsequent bands are oriented to capture the maximum remaining
variance. PCA produces as many new bands as there were old bands,
although it is usually assumed that most of the information is present in the
first few new bands, which have most of the variance.
PCA can also be used for change detection. Instead of applying the PCA
technique to the image of one date, as is done for regular image enhancement,
the method is applied to two or more images simultaneously. For change
detection, it is assumed that the first PCA band of the multi-date set is likely
to be an average image, and therefore is likely to be of less interest. Change is
usually isolated in the second and subsequent bands.
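To make the mechanics concrete, the sketch below runs a generic unstandardized PCA on an invented two-date band stack. This is ordinary numpy linear algebra, not TerrSet’s PCA module, and the data are random:

import numpy as np

rng = np.random.default_rng(1)
rows, cols, nbands = 100, 100, 6  # e.g. 3 bands from each of 2 dates

# Hypothetical stack: every pixel is a row, every band is a column.
stack = rng.random((rows * cols, nbands))

# Unstandardized PCA: eigendecomposition of the covariance matrix.
cov = np.cov(stack, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]  # largest variance first
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Project the pixels onto the components and reshape to image form.
components = (stack - stack.mean(axis=0)) @ eigvecs
pca_images = components.T.reshape(nbands, rows, cols)

pct_var = 100 * eigvals / eigvals.sum()  # variance explained per component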
11.1.1.3 Change Vector Analysis
Change vector analysis separates change into two components: magnitude
and direction.
The magnitude component is the simpler component: it is a measure of how
far in the spectral space (also known as feature space) the pixel DN values
have changed. It is thus a multiple-band variation of the image subtraction
method. Therefore, the magnitude component provides a measure of the
amount of change that has taken place, integrated over all wavelengths.
The direction component is a measure of the type of change. Normally the
direction is based on just two bands, or derived bands, such as Brightness and
Greenness in a Tasseled Cap Transformation. However, a multiband
extension to direction changes has been found to be effective, too (Warner,
2005).
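In array terms, the magnitude is a Euclidean distance in spectral space, and the direction is an angle in a plane defined by two bands. A conceptual numpy sketch follows (invented data; the two bands chosen for the direction are an assumption for illustration):

import numpy as np

rng = np.random.default_rng(2)
# Hypothetical (bands, rows, cols) stacks for the two dates.
date1 = rng.random((4, 100, 100))
date2 = rng.random((4, 100, 100))

delta = date2 - date1

# Magnitude: Euclidean distance in spectral space over all bands.
magnitude = np.sqrt((delta ** 2).sum(axis=0))

# Direction: angle of the change vector in a two-band plane,
# here arbitrarily bands 0 and 1 of this invented stack.
direction = np.degrees(np.arctan2(delta[1], delta[0]))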
11.1.1.4 Post Classification Comparison
In post classification, two independently produced classifications are
overlaid, and new classes showing the change from one date to the next are
generated. It is important that the two dates are classified to produce the same
classes, in the same order (i.e. the relationship between DN values and
classes is the same for both images).
11.1.2 Required Preprocessing
11.1.2.1 Geometric Co-Registration
The quality of co-registration in all change analysis studies must be excellent
(less than 1 pixel of error); otherwise, artifacts will be introduced into the
analysis. With nadir-viewing satellite imagery such as Landsat, high
quality co-registration can usually be achieved. In rugged terrain,
orthorectification is usually required for imagery from off-nadir viewing
sensors such as SPOT. For aerial imagery, high quality orthorectification is
essential.
11.1.2.2 Radiometric Normalization
Radiometric normalization is a process by which the image DN values are
adjusted so that the specific values of each date can be compared
directly. Radiometric normalization is an attempt to reduce the effect of
variations in illumination and variations in atmospheric transmission and
scattering properties.
Unlike co-registration, radiometric normalization is not always needed. Post
classification comparison does not require that the images be radiometrically
normalized, but doing so can be useful, because then the same class
signatures can be applied to both images. Radiometric normalization is also
not required for PCA. Radiometric normalization is required for image
subtraction and CVA.
Before discussing normalization methods, it is useful to first briefly review
some terminology and background seen in Section 3.6. The concentration of
the sun’s energy is dependent on the angle of incidence of the energy, in
other words on the sun’s elevation in the sky. Furthermore, the sun’s energy
has to travel through the atmosphere, which absorbs and scatters the energy.
At the molecular level, atmospheric gases cause Rayleigh scattering that
progressively affects shorter wavelengths (causing, for example, the sky to
appear blue). Also, specific atmospheric components, such as ozone and
water vapor, cause absorption of energy at selected
wavelengths. Furthermore, aerosol particulates, the primary determinant of
haze, create Mie scattering, a largely non-selective (i.e., affecting all
wavelengths equally) effect. In visible wavelengths, water vapor has a
major effect on atmospheric properties.
The problem of atmospheric normalization has received considerable
attention from researchers in remote sensing, and two broad classes of image
radiometric normalization have been developed. The first method is to
convert image DN values to exoatmospheric radiance, which in turn is
usually converted to estimated reflectance, using physical or empirical
models of radiative transfer. Radiance refers to the flux of energy per solid
angle in a given direction. While radiance corresponds to brightness in a
given direction, it is sometimes confused with reflectance, which is the ratio
of reflected and irradiant energy (illumination). Radiance is the energy
measured at the sensor and is dependent on the reflectance of the surface
being observed, the irradiant energy, and the interaction of the energy with
the atmosphere.
Conversion to radiance or reflectance is the best approach. However,
sometimes it is not possible to implement because of a lack of information
about atmospheric transmissivity and scattering, or lack of knowledge of the
conversion factors from DN values to radiance.
The second class of image radiometric normalization is an empirical
regression of one date on the other. This approach is relatively easy to
implement, but does require clearly identifiable regions where the image is
assumed not to have changed. In some environments, change may be so
pervasive, or so complex, that it may not be possible to use this method.
Because radiometric normalization is so important for some change detection
methods, and can also be so difficult to implement, both methods will be
illustrated. We will first import Landsat TM and ETM for May 1984 and
May 2003, respectively, and convert them to reflectance through the Cos(t)
method using the LANDSAT module (as seen in Section 3.4). Landsat 8 OLI
bands for May 2013 will be radiometrically normalized to the 2003 image
through an empirical regression.
11.2 Download Data for this Chapter
In this chapter we will work with images from Landsat 5 TM, Landsat 7
ETM, and Landsat 8 OLI. We will import the raw TM and ETM images into
TerrSet as reflectance, while the OLI bands have already been imported as
raw DN values. Note that although we will be importing the images, these are
not the original bands as you would download them. For this exercise, the
bands have been previously windowed and only bands 2, 3, 4 and 5 were
included, in order to minimize the amount of data to download.
If you have not done so already, download the data from the Clark Labs’
website for Chapter 11 and place it into a new subfolder within the \RSGuide
folder on your computer. Section 1.3.1 provides detailed instructions on how
to download the data, and describes the procedure for setting up the RSGuide
folder.
11.3 Preparation
In the previous section (Section 11.2) you should have already downloaded
the data. However, we still need to set the Project and Working Folders for
this section.
Before starting, you should close any dialog boxes or displayed images in the
TerrSet workspace.
◆ ◆ ◆
Create a new project file and specify the working folders
with the TerrSet EXPLORER
1. In the TerrSet EXPLORER window, select the Projects
tab.
2. Right click within the Projects pane, and select the New
Project Ins option.
3. A Browse For Folder window will open. Use this window
to navigate to the RSGuide folder, which is the folder you
created on your computer for the data for this manual. Now
navigate to the Chap11 subfolder.
4. Click OK in the Browse For Folder window.
5. A new project file Chap11 will now be listed in the
Projects pane of the TerrSet EXPLORER. The working folder
will also be listed in the Editor pane.
◆ ◆ ◆
At this point you will see only the OLI bands; the TM and ETM images are
not visible because we have not yet imported them.
11.4 Import and Radiometric Correction
In Section 3.6.2 we performed Chavez’s Cos(t) atmospheric correction, and
in Section 3.4 we learned how to import using the LANDSAT module. In this
exercise, we will import Landsat TM and ETM using the LANDSAT module
and automatically perform a conversion to reflectance values using the Cos(t)
method.
◆ ◆ ◆
Import Landsat images
Menu Location: File – Import – Government/Data
Provider Formats – Landsat Data Archive
1. This program has no associated icon on the main toolbar,
so use the menu as described in the title to this instruction box
to start the LANDSAT module.
2. The LANDSAT dialog box will open. (Figure 11.4.a shows
the dialog box, with the options selected, described below.)
3. To select the input file, click on the pick button (…) for the
Landsat metadata (MTL) file text box.
4. A Pick list window will open. Double click on the
RSGuide\Chap11 title.
5. Click on L5_1984.txt.
6. Click on OK, to close the Pick list window. TerrSet will
automatically input all available bands within the folder.
◆ ◆ ◆
Note that for this exercise we are providing only bands 2 to 5 (green, red,
NIR and SWIR).
◆ ◆ ◆
Import Landsat images (cont.)
7. Specify to include all 4 bands by leaving the label “yes”
under the Include column.
8. Specify an Output Image Name. The default is to have the
same output name as the input. We will change these to be
1984_b2, 1984_b3, 1984_b4 and 1984_b5 (for the raw bands
ending in _B2, _B3, _B4, and _B5 respectively).
9. Under Multispectral Bands select Convert to Reflectance.
10. Under Reflectance correction select Cos(t) model.
11. Click OK to run LANDSAT. Don’t close the LANDSAT
window.
◆ ◆ ◆
Figure 11.4.a LANDSAT module.
◆ ◆ ◆
Import Landsat images (cont.)
12. When the previous process has finished running, select the
input file by clicking on the pick button (…) for the Landsat
metadata (MTL) file text box.
13. A Pick list window will open. Double click on the
RSGuide\Chap11 title.
14. Click on L7_2003.txt.
15. Specify to include all 4 bands by leaving the label “Yes”
under the Include column.
16. Change the Output Image Name to: 2003_b2, 2003_b3,
2003_b4 and 2003_b5 (for the raw bands ending in _B2, _B3,
_B4, and _B5 respectively).
17. Leave the options Convert to Reflectance and Cos(t) model selected.
18. Click OK to run LANDSAT.
◆ ◆ ◆
When you open TerrSet EXPLORER, you should now see 8 images. The
1984 image was acquired by Landsat 5 TM on May 15. The 2003 image was
acquired by Landsat 7 ETM+ on May 28. As stated previously, these are
smaller subsets of the original bands. Images were cropped in order to make
the data for this exercise more manageable.
We will now display these two dates as false color composites.
◆ ◆ ◆
Create a color composite image
Menu Location: File – Display – COMPOSITE
1. Start the COMPOSITE program using the main menu or
tool bar.
2. In the COMPOSITE dialog box, double click in the Blue
image band text box. The Pick list window will open.
3. In the Pick list window, double click on the file 1984_B2.
If necessary, click on the plus symbol to display the files.
4. Double click in the Green image band text box, and then
double click on 1984_B3.
5. Double click in the Red image band text box, and then
double click on 1984_B4.
6. Enter the Output image filename in the text box provided:
1984_fcc.
7. Click OK to create and display the false color composite.
8. Repeat steps 2-7, using 2003_B2, 2003_B3, and 2003_B4
to create the image 2003_fcc.
9. Once the images have been displayed, you can close the
COMPOSITE window.
◆ ◆ ◆
The two false color composite images provide a dramatic demonstration of
how rapidly the city of Las Vegas has grown over 19 years (Figure 11.4.b).
Figure 11.4.b False color composites of Las Vegas, Nevada. Left: May 15,
1984. Right: May 28, 2003.
In these images, you can clearly see the city expansion. Red areas are healthy
vegetation, since the near infrared is displayed in this color. Note that the
vegetation in urban areas contrasts strongly with the dry desert
vegetation. The dark area to the right of the image is part of Lake Mead.
For this study, we will use data that have already been co-registered. Co-
registration and georeferencing are dealt with in more detail in Chapter 3,
especially Section 3.2.4.
11.4.1 Preliminary Image Differencing
The two images have been converted to reflectance values, instead of
arbitrary image raw DN values. The simplest way to test the success of this
operation, and to illustrate the application of radiometric correction, is to run
a preliminary image differencing operation. The topic of image differencing
will be dealt with in more detail later in this chapter.
◆ ◆ ◆
Preliminary image differencing as a test for image
normalization
Menu Location: IDRISI GIS Analysis – Mathematical
Operators – OVERLAY
1. Open the OVERLAY program through the main menu or
the main toolbar.
2. In the OVERLAY window, double click in the text box for
First image. In the resulting pick list, double click on
2003_B3.
3. Double click in the text box for Second image. In the
resulting pick list, double click on 1984_B3.
4. In the text box for Output image, enter 03-84_B3.
5. Select from the list of Overlay options the radio button for
First – Second.
6. Figure 11.4.1.a shows the OVERLAY window with
processing parameters specified.
7. Click on OK.
◆ ◆ ◆
Figure 11.4.1.a OVERLAY window.
The difference image should have been displayed automatically (Figure
11.4.1.b). From the legend, note the color associated with a value closest to
0. These should be areas of no change. Although the city shows extensive
change (principally a reduction in red brightness values, as shown by the
negative values represented in green colors in the image), most of the
surrounding desert shows little change, as would be expected.
Note the increase in red reflectance between 1984 and 2003 (represented by
positive values) surrounding Lake Mead. Can you guess what this change is
related to? Tip: you can zoom in to that area in both false color composites.
To end this section, close all open windows.
Figure 11.4.1.b Preliminary change detection of Las Vegas, Nevada, red
bands, 2003-1984.
11.5 Change Detection Pre-Processing: Image
Normalization through Regression
Note that this section presents an alternative to radiometric correction
through conversion to reflectance (or to radiance). Of the change
detection methods presented in this manual, radiometric normalization is only
required as a preprocessing step for image differencing and change vector
analysis.
In the previous section, we used the automated method to convert to
reflectance provided by TerrSet’s LANDSAT import module. In this section,
we investigate an alternative method, based on developing a regression
relationship between two images. This procedure can be used if you do not
have the required metadata information to perform the conversion to radiance
or to reflectance. This method can also be used if, after doing the radiometric
correction, the images still show large discrepancies in reflectance values in
areas that did not experience change. This can occur when comparing images
from sensors whose equivalent bands have different spectral definitions.
Although Landsat TM and ETM+ have the same spectral definition of bands,
Landsat OLI bands cover slightly different parts of the spectrum. OLI bands
are narrower (higher spectral resolution) for bands 5 (near infrared) and 6
(shortwave infrared). For this exercise, we will correct Landsat OLI bands
collected in May 2013 to match the reflectance values of the 2003 bands.
The procedure for regression-based radiometric normalization requires
identifying areas of no change, and developing empirical models that convert
the values of one date to be equivalent to those of the second
date. One can select small areas of no change, and use a spreadsheet, or
another statistical package, to calculate the regression parameters. Our
approach, however, will be the opposite: we will first develop a mask to
indicate broad regions of change, and then use the remaining pixels to
develop the regression.
Before starting this section, close all the displayed windows you may have
open.
11.5.1 Create Multitemporal Display
The change mask is digitized manually on a multitemporal false color
composite. In the following sections, the preparation of the multitemporal
false color composite is first described, then the digitization of the mask.
◆ ◆ ◆
Preparing a multitemporal false color composite
1. Open TerrSet EXPLORER and go to the Files tab.
2. Within the Chap11 working folder, find and double click
on the file 2003_B3. This will automatically display the file.
3. Find the Composer dialog box, which is automatically
opened when an image is displayed.
4. In the Composer dialog box, click on the red button (see
arrowed button in Figure 11.5.1.a).
5. The image should now be displayed in tones of red (Figure
11.5.1.a).
◆ ◆ ◆
Figure 11.5.1.a 2003 Las Vegas image after clicking on red button.
◆ ◆ ◆
Preparing a multitemporal false color composite (cont.)
6. In TerrSet EXPLORER find the image 2013_B4_red_raw.
7. Right click on the image and select the option Add Layer to
add it to the map composition. The image will now be
displayed in the same viewer as the 2003_B3 image, with the
second image, 2013_B4_red_raw, opaquely covering the
first. The Composer dialog box will now show the names of
both the 2003_B3 and 2013_B4_red_raw images.
8. In the Composer dialog box, click on the button for the
green layer. The image should now be dominated by yellow
colors, with some red and minor areas of green. Yellow
indicates areas that are bright in both dates, because the two
different dates are displayed in red and green, and red combined with
green makes yellow.
◆ ◆ ◆
Although we can use the result from step 8 to identify areas of change, the
colors do not have much contrast. Because of this, we will assign the color
magenta to the 2003 image. Composer only offers the three primary colors
(blue, green and red); however, magenta is the combination of blue and red,
so we can obtain it by displaying the 2003 image in both the red and blue
layers.
◆ ◆ ◆
9. Add 2003_B3 for a second time. Find 2003_B3 in TerrSet
EXPLORER, right click and select Add Layer.
10. Again, the new image will opaquely cover the previously
displayed images.
11. In the Composer dialog box, click on the button for the
blue layer.
12. The result is displayed in Figure 11.5.1.b, and shows a
multitemporal false color composite, with unchanged pixels
in shades of gray, and changed pixels in either green (areas
where the red values are higher in 2013) or magenta (where
the values are lower).
◆ ◆ ◆
Figure 11.5.1.b Las Vegas multitemporal false color composite: 2003 as
magenta, 2013 as green.
In this study region, urban areas reflect less in the red part of the spectrum
than the surrounding natural vegetation cover (since the urban cover is mostly dark
asphalt), while areas of vegetation reflect more. Areas with high red
reflectance in 2003 and low red reflectance in 2013 will be displayed with the
magenta color. These are areas that were converted from natural vegetation
cover to urban (a transition from high to low red reflectance). On the other
hand, green areas are those that had lower red reflectance in 2003 than in
2013. These can be locations where more soil is exposed. We can
identify two types of change: areas of urban expansion (identified by linear
patterns), and areas of natural change (occurring with no particular
geometric pattern). Although we can identify the change, pinpointing its
cause is more difficult and requires knowledge of
the area. We will interpret these changes in more detail in Section 11.6.1.
11.5.2 Digitize Change Mask
The next step is to digitize change areas with a value of 0, to mask them out
from the analysis. By now you should be familiar with digitizing. However, if
you feel the instructions below are too brief, please review Section 8.4 for
more details.
◆ ◆ ◆
Digitizing change areas on the multitemporal composite
1. The multi-temporal composite, as described above and
shown in Figure 11.5.1.b, should be displayed in the
DISPLAY window.
2. From the main icon toolbar, click on the icon for Full
extent normal and then on the icon for Full extent maximized.
3. Select the Digitize icon.
4. The Digitize dialog box will open. In the textbox for Name
of layer to be created, enter Mask.
5. In the text box for ID or Value enter 0.
6. Click on OK.
7. Digitize a large, single polygon that encloses the majority
of the changed pixels, i.e. pixels with bright magenta and
green colors (Figure 11.5.2.a). The unchanged areas, left
outside the polygon, should have a wide range of brightness
levels (gray tones), to give the most reliable regression. Note
that it is not necessary to digitize in very fine detail, because a
few changed pixels will not affect the overall regression, as
the sample will have many thousands of pixels.
8. Right-click to close the polygon.
9. Click on the icon Save digitized data.
10. Close the DISPLAY window.
◆ ◆ ◆
Figure 11.5.2.a Digitizing the changed areas. Note that the polygon outline
is somewhat rough, as the selection of change pixels is very broad.
11.5.3 Rasterize the Vector Mask
The mask we have created is a vector file. In order to apply the mask, it must
be rasterized. The rasterization process has two parts. First, a blank file, with
a value of 1 in every pixel location, is created. This file will have the
dimensions of the 2003 image. In the second step, the vector file is used to
over-write 0 in each pixel within the polygon just created.
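The same two-step logic can be sketched in a few lines of Python (a conceptual illustration only, with an invented polygon; in this exercise the actual work is done by INITIAL and RASTERVECTOR):

import numpy as np
from matplotlib.path import Path

rows, cols = 200, 200

# Step 1: an INITIAL-style blank image, with a value of 1 everywhere.
mask = np.ones((rows, cols), dtype=np.uint8)

# Step 2: over-write 0 in every pixel inside a hypothetical change polygon.
polygon = Path([(50, 40), (160, 60), (140, 170), (40, 150)])
cc, rr = np.meshgrid(np.arange(cols), np.arange(rows))
pts = np.column_stack([cc.ravel(), rr.ravel()])
inside = polygon.contains_points(pts).reshape(rows, cols)
mask[inside] = 0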
◆ ◆ ◆
Create a new file prior to rasterization
Menu location: File – Data entry – INITIAL
1. Start the INITIAL program from the main menu.
2. In the INITIAL window, confirm that the radio button has
been selected for Copy spatial parameters from another
image.
3. Also confirm that the Output data type has been set to byte.
4. Enter the Output image file name: Mask.
5. Double click in the text box next to Image to copy
parameters from. In the resulting pick list, double click on
2003_B2 (or any other 2003 band).
6. Change the Initial value from the default 0 to 1.
7. Figure 11.5.3.a shows the INITIAL window with the
processing parameters specified.
8. Click on OK.
9. Because the image is uniform, with every pixel having a
value of 1, the image is not displayed. Therefore, once you
have confirmed from the progress meter that the program has
completed, close the INITIAL window.
◆ ◆ ◆
Figure 11.5.3.a The INITIAL window.
We now carry out the second step, in which the vector file is used to
over-write 0 in each pixel within the polygon just digitized.
◆ ◆ ◆
Rasterize the vector file with the polygon of change areas
Menu location: File – Reformat – RASTERVECTOR
1. Start the RASTERVECTOR program from the main menu
(Figure 11.5.3.b).
2. In the RASTERVECTOR window, select the radio button
for Polygon to Raster.
3. Double click in the Vector polygon file text box, and in the
resulting pick list double click on Mask (this is the vector we
digitized).
4. Double click in the Image file to be updated text box, and
in the resulting pick list, double click on Mask (This is the
raster file we created with INITIAL).
5. Click on OK.
6. The mask will be displayed automatically (Figure
11.5.3.c).
◆ ◆ ◆
Figure 11.5.3.b RASTERVECTOR window.
Figure 11.5.3.c Rasterized mask of change areas.
You should now confirm, by checking the legend of your rasterized mask
(Figure 11.5.3.c), that the central area, corresponding to the region of the
city, has a value of 0, and that the surrounding areas have a value of 1.
11.5.4 Regression of the Masked Imagery
In the regression analysis, we would like to find an equation that converts the
2013 DN values so that in areas that did not change in the ten year interval,
the values will be similar to those of the 2003 image (which in this case
represent reflectance). The formula we are looking for has the form:
2003 ref = a + b * 2013 DN
where a and b are the regression parameters. Therefore, we will specify the
2003 image as the dependent variable (y), and 2013 as the independent
variable (x) in the regression, even though it is actually the 2013 data that
will be modified in the application of the formula for the image
normalization.
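The fit-then-apply logic of the next two sections can be expressed compactly in numpy. The sketch below is conceptual only, with invented arrays; in this exercise the actual work is done with the REGRESS and IMAGE CALCULATOR modules:

import numpy as np

rng = np.random.default_rng(3)
ref_2003 = rng.random((100, 100))              # 2003 reflectance (y)
raw_2013 = rng.integers(0, 55000, (100, 100))  # 2013 raw DN values (x)
mask = rng.integers(0, 2, (100, 100))          # 1 = unchanged, 0 = masked

# Fit y = a + b*x using only the unchanged (mask == 1) pixels.
x = raw_2013[mask == 1].astype(float)
y = ref_2003[mask == 1]
b, a = np.polyfit(x, y, deg=1)  # slope b, intercept a

# Apply the equation to the whole 2013 band (the IMAGE CALCULATOR step).
norm_2013 = a + b * raw_2013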
◆ ◆ ◆
Regression estimation
Menu location: IDRISI GIS analysis – Statistics –
REGRESS
1. Start the REGRESS program from the main menu.
2. In the REGRESS window, double click in the Independent
variable text box. In the resulting pick list, double click on
2013_B4_red_raw.
3. Double click in the Dependent variable text box. In the
resulting pick list, double click on 2003_B3.
4. Double click in the Mask name (optional) text box. In the
resulting pick list, double click on Mask. Uncheck the box for
Mask output and specify flag value. Leave the flag value of 0.
5. Figure 11.5.4.a shows the REGRESS window, with the
processing parameters specified.
6. Click on OK.
◆ ◆ ◆
Figure 11.5.4.a The REGRESS window.
The program output is a scatter plot, along with a listing of the regression
parameters (Figure 11.5.4.b). Note the regression at the top left of the output
window (Y = -0.105790 + 0.000018 X for the mask used in the example;
your equation should be similar, but will not be precisely the same because
your digitized mask will be different). Write down the equation from the
regression for later use. A perfect relationship between the two bands should
have an intercept of zero (a = 0) and a slope of one (b = 1).
Figure 11.5.4.b Regression of 2013 (x) and 2003 (y) red band images.
We will now repeat the regression calculation using the same dates, and
mask, but changing the bands from the red (ETM band 3, OLI band 4), to the
near infrared (ETM band 4, OLI band 5).
◆ ◆ ◆
Regression estimation (cont.)
7. The REGRESS window should still be open from the first
regression.
8. In the REGRESS window, double click in the Independent
variable text box. In the resulting pick list, double click on
2013_B5_NIR_raw.
9. Double click in the Dependent variable text box. In the
resulting pick list, double click on 2003_B4.
10. Confirm that the Mask name (optional) text box still
contains the file name Mask.
11. Click on OK.
◆ ◆ ◆
Write down this second regression equation.
Although not necessary for the rest of the exercises, you can repeat the
process for the green and shortwave infrared bands if desired.
11.5.5 Normalize the 2013 data using the
regression equation
We have now gathered the data we need for the normalization, and are ready
to apply the correction from the regression equations to the 2013 images. A
convenient tool for applying the equations is the IMAGE
CALCULATOR. The IMAGE CALCULATOR program was introduced in
Section 5.4.2.
Figure 11.5.5.a The IMAGE CALCULATOR window. The multiply (*) and
Insert Image buttons are arrowed.
◆ ◆ ◆
Apply normalization equation
Menu location: IDRISI GIS Analysis – Model Deployment
Tools – IMAGE CALCULATOR
1. Open the IMAGE CALCULATOR program from the main
menu or toolbar.
2. In the IMAGE CALCULATOR window, enter in the Output
file name text box 2013_b4_red.
3. Enter the equation you wrote down for the red band
regression (i.e. the first regression), in the Expression to
process text box.
Use * to represent multiply instead of entering x. Click on the
button for Insert Image. In the resulting pick list, double click
on 2013_b4_red_raw. This will place the file name in
parentheses in the equation. Thus, in the example in this text,
the equation becomes:
-0.105790 + 0.000018*[2013_b4_red_raw].
(Note that the values of your equation will vary slightly, but
should be in the same range.)
4. Figure 11.5.5.a shows the equation entered in the IMAGE
CALCULATOR window.
5. Click on Process Expression.
6. The output image will be displayed automatically.
◆ ◆ ◆
We will now repeat the process, applying the second regression equation to the 2013 NIR band.
◆ ◆ ◆
Apply normalization equation (cont.)
7. In the IMAGE CALCULATOR window, change the Output
file name text box to 2013_b5_nir.
8. Enter the equation you wrote down for the band 4
regression (i.e. the second regression), in the Expression to
process text box. Be sure to change the file that is processed
from [2013_b4_red_raw] to [2013_b5_nir_raw]. Thus, for
the data in the example in this text, the equation becomes:
-0.064142+ 0.000013*[2013_b5_nir_raw]
(Your equation will vary slightly from this equation.)
9. Generate the normalized image by clicking on Process
Expression.
◆ ◆ ◆
The 2013 normalized red and near infrared bands have now been created.
These images have values comparable to the equivalent bands in the 2003
images.
Note: In this case we deliberately did not provide the metadata for the 2013
image, so that you could try this empirical correction approach. The actual
metadata coefficients for converting this date’s DN values to reflectance are
-0.1 and 0.00002, which are very close to the values we obtained from the
regression.
11.6 Spectral Change Detection
11.6.1 Image Subtraction
Image subtraction is very simple to perform, because it requires only the
subtraction of the DN values of one date from the other. It is essential,
though, that the images be normalized radiometrically first (see the
discussion in Section 11.1.2). Therefore, the radiometric normalization
exercises in Sections 11.4 and 11.5 must be completed prior to doing this section.
In the exercise below we will use the results from Section 11.5, the
normalization using regression, to see changes between 2003 and 2013. We
will then use the results from Section 11.4, the normalization using the
TerrSet routine that converts DNs to reflectance, to see changes between
1984 and 2003.
Image subtraction can be performed using the IMAGE CALCULATOR, or
alternatively, with the dedicated change detection program, IMAGEDIFF.
We will use the latter program, which gives you four choices for the type of
output, as discussed in the introduction to this chapter. We will first select
the output option in which the differenced image is grouped into
six classes. The classes are calculated by converting the differenced values to
z-scores. Z-scores are simply a type of transformation, where the values are
expressed as a distance from the mean of the change in reflectance values, in
units of the standard deviation of the distribution. Standard deviation units
are often used as a way of grouping the otherwise continuous change values.
We will start evaluating the changes between 1984 and 2003.
◆ ◆ ◆
Image differencing
Menu location: IDRISI GIS Analysis – Change / Time
Series – IMAGEDIFF
1. Start the IMAGEDIFF program from the main TerrSet
menu.
2. In the IMAGEDIFF window, double click in the text box
for Earlier image. In the resulting pick list, double click on
1984_B3.
3. Double click in the text box for the Later image, and in the
resulting pick list, double click on 2003_B3.
4. In the text box for Output filename, enter 84-03-imagediff.
5. For the Output options, select the radio button for the
Standardized class image.
6. Figure 11.6.1.a shows the IMAGEDIFF window with the
processing parameters specified.
7. Click on OK.
8. TerrSet will automatically display the resulting image
(Figure 11.6.1.b).
◆ ◆ ◆
Figure 11.6.1.a The IMAGEDIFF window.
Figure 11.6.1.b Results of image differencing using the red band for 1984
and 2003.
The image differencing result (Figure 11.6.1.b) shows a very interesting and
distinct pattern of land use change.
To understand this pattern, we need to understand how vegetation and urban
areas affect red reflectance. Desert soil will be brightest, especially where
salts have accumulated. The chlorophyll in vegetation absorbs red radiance,
and thus vegetated areas will be darker. Desert vegetation is relatively sparse,
and not very green. Thus, natural desert vegetation will be darker than
exposed soil, but not as dark as irrigated green vegetation, especially
residential lawns and golf courses. Asphalt will also absorb red. With that
background, we can now interpret the image.
The center of the city has not changed appreciably. It is therefore shown in
the change classification as having low differences from the mean (values
close to the mean) in the pastel colors of yellow and pale green, representing,
respectively, a small drop or a small rise in radiance. Surrounding this
unchanged core is an area of large changes (large differences from the
mean), shown in red and orange, and representing urban expansion in all
directions. The new urban land cover, especially residential development
with lawns, is much darker in the red band than the relatively sparsely
vegetated desert. On the edge of the city is a ring of higher red radiance,
shown in shades of green. These areas of higher reflectance represent new
construction, and general disturbance of desert vegetation, for example by
off-road vehicle traffic. The areas surrounding Lake Mead also show an
increase in red reflectance. Water absorbs the red part of the spectrum almost
completely, so these areas represent change from water to dry land and
indicate a shrinkage of the lake.
Let’s now evaluate the changes from 2003 to 2013.
◆ ◆ ◆
Image differencing
Menu location: IDRISI GIS Analysis – Change / Time
Series – IMAGEDIFF
1. Start the IMAGEDIFF program from the main TerrSet
menu.
2. In the IMAGEDIFF window, double click in the text box
for Earlier image. In the resulting pick list, double click on
2003_B3.
3. Double click in the text box for the Later image, and in the
resulting pick list, double click on 2013_B4_red.
4. In the text box for Output filename, enter 03-13-imagediff.
5. For the Output options, select the radio button for the
Standardized class image.
6. Click on OK.
7. TerrSet will automatically display the resulting image
(Figure 11.6.1.c).
◆ ◆ ◆
Figure 11.6.1.c Results of image differencing using the red band for 2003
and 2013.
The image differencing result (Figure 11.6.1.c) shows a similar pattern to
Figure 11.6.1.b. In these 10 years, urban growth seems to have occurred
mostly to the north and south. Interestingly, there are not many areas of
increased exposed soil with a pattern that would indicate new construction.
However, we can see more soil exposed in the southern part of the Red
Rock Canyon National Conservation Area, located in the southwest corner of
the image (we noted this in Figure 11.5.1.b, too), and surrounding the
shoreline of Lake Mead. The continuing shrinkage of the lake (we saw this also in
Figure 11.6.1.b) is related to increased population not only in Las Vegas, but
also in Los Angeles, San Diego, and Phoenix, as well as increased water
consumption for irrigation. The causes of vegetation decline in the Red Rock
Canyon area may be related to the extensive 2012-2013 drought affecting the
United States and Mexico. This drought began as a result of a strong La Niña
event (cooling of the water surface of the equatorial Pacific Ocean) coupled
with a pronounced heatwave.
11.6.2 Principal Component Analysis
Principal component analysis (PCA) is one of the easiest analysis methods to
run, and it usually captures a great deal of information. However, as we saw in the
discussion of PCA for spectral enhancement, Section 5.3.3, the interpretation
of the PCA output requires some thought. PCA does not require image
normalization; however, values in all bands should represent the same
measure (e.g. all bands should be in either DN values or reflectance, so we
couldn’t do a PCA between the 2003 images and the raw 2013 images). In
the instructions below, note that we select a total of six output components.
This is because we will use six input bands, and therefore the PCA
processing produces up to six output bands. It is good practice to always
produce all the output bands, in order to evaluate the information in them.
We first need to group the 1984 and 2003 bands into raster group files. We
have done this process multiple times already, so if the instructions below are
too brief please refer to Section 1.3. For this exercise, we will only work with
the visible and NIR bands.
◆ ◆ ◆
Creating a file collection with the TerrSet EXPLORER
1. Open TerrSet EXPLORER and go to the Files tab.
2. Click on 1984_B2, and press the Ctrl key.
3. Keeping the Ctrl key depressed, click on 1984_B3, and
1984_B4 (in that order).
4. Right click in the Files pane. Select the menu option for
Create – Raster Group.
5. A new file, Raster Group.rgf, will be listed in the Files
pane.
6. Right click on this file and select Rename from the list of
options.
7. Enter the name 1984_all by typing over the default name of
Raster Group.
8. Press Enter on your computer keyboard.
9. Refresh the TerrSet EXPLORER view by pressing the
F5 key of the keyboard or by right clicking on the working
folder and selecting the Refresh option.
10. Repeat 2 to 9 to create a raster group file for the 2003
bands, calling this raster group 2003_all.
◆ ◆ ◆
We can now run the principal component analysis.
◆ ◆ ◆
Multitemporal PCA calculation
Menu location: IDRISI Image Processing –
Transformation – PCA
1. Start the PCA program from the main menu.
2. In the PCA window, select Forward T-Mode as Analysis
Type and Covariance Matrix (unstandardized) as Matrix
Type.
3. Then click on the button to Insert layer group… In the
resulting pick list, double click on 1984_all.
4. For a second time, click on the Insert layer group… button.
In the resulting pick list, double click on 2003_all.
5. In the Number of components to be extracted text box,
enter 6.
6. In the Prefix for output files (can include path) text box,
enter PCA.
7. Select the radio button for Complete output in the Text
output for Forward T-Mode section.
8. Figure 11.6.2.a shows the PCA window with the processing
parameters specified.
9. Click on OK.
10. The text results are displayed in a new window, Module
Results. The images are not displayed automatically.
◆ ◆ ◆
Figure 11.6.2.a The PCA Window.
After the processing is completed, the Module Results window will display a
text file of the results of the analysis (Table 11.6.2.a). As mentioned above,
one of the difficulties of using PCA is that the output images can be difficult
to interpret. Nevertheless, by carefully examining the output as shown in the
Module Results, some interpretation can usually be made. Therefore, these
results should be saved, for example by clicking on the Save to File button,
because they help to interpret the PCA images.
The Module Results includes information on:
• The variance/covariance matrix (i.e. the variability of the bands, and
how they relate to one another).
• The correlation matrix (the relationship between the bands).
• The principal component eigenvalues (amount of variance explained, or
accounted for by each new component), expressed in two forms:
o As raw units of covariance, and
o As a proportion of the total variance (“% var.”) in the output.
• The eigenvectors, which give the coefficients of the linear equations that
transform the input bands into the output components.
• The Loadings, which give the correlation between each original band and
each new component (see the sketch following Table 11.6.2.a).
Table 11.6.2.a The PCA Module Results output.
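The loadings reported in the Module Results are not extra information: they
can be derived from the eigenvalues, eigenvectors, and band standard
deviations alone. A minimal sketch, continuing the Python/NumPy
illustration above:

```python
import numpy as np

def loadings(eigvals, eigvecs, cov):
    """Correlation (loading) of each input band with each component.

    loading[i, j] = eigvec[i, j] * sqrt(eigval[j]) / std(band i),
    where std(band i) is the square root of the covariance matrix diagonal.
    """
    band_std = np.sqrt(np.diag(cov))
    return eigvecs * np.sqrt(eigvals)[None, :] / band_std[:, None]
```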
In order to interpret the text output, it will be useful to have the PCA images
displayed. Therefore, use the TerrSet Display Launcher to display the six
output files, each time using a GreyScale Palette file (Figure 11.6.2.b shows
the first four PCA images). For the discussion below, you should refer to the
relevant images, Table 11.6.2.a, as well as the two original false color
composites of 1984 (1984_fcc) and 2003 (2003_fcc), to verify the
interpretations suggested.
Figure 11.6.2.b Upper left: PCA_Tmode_Cmp1. Upper right:
PCA_Tmode_Cmp2. Lower left: PCA_Tmode_Cmp3. Lower right:
PCA_Tmode_Cmp4.
In interpreting the values for the Las Vegas change data (Table 11.6.2.a), we
see that the first four components (C1, C2, C3 and C4) together account for
over 99% of the original variance (i.e. 83.22% + 11.74% + 3.54% + 0.88% ≈
99.4%). This suggests that the majority of the information has been captured
in these first four components. Note how the images appear to get
progressively noisier as the component number increases. For example, notice
how PCA_Tmode_Cmp1 shows the pattern of land use clearly, but
PCA_Tmode_Cmp6 has noticeably more noise present.
The eigenvectors and loadings help us understand what the output bands
mean. For example, we find that the eigenvectors for C1 are all positive
(ranging from 0.01 to 0.47). This suggests that C1 (the image PCA_Tmode_Cmp1)
represents an average of all the bands, of both dates. Indeed, the loadings
show that C1 is highly correlated with all the input bands (the values are
approximately 0.87 or greater).
The second component (C2), on the other hand, has negative eigenvectors for
the bands for date 1, and positive values for the bands for date 2. This implies
that C2 (the image PCA_Tmode_Cmp2), is a difference image of 2003-1984,
and is thus somewhat similar to the results of the image differencing exercise
we completed earlier (although in our previous exercise we only compared
the red band across time).
The third component (C3), is notable for having negative values for the
visible bands (TM and ETM+ bands 2 and 3, corresponding to the green and
red parts of the spectrum) for the 1984 and 2003 dates, but positive values for
the near infrared band (TM and ETM+ bands 4). Since the difference
between the visible and near infrared bands is a measure of vegetation
presence, this suggests C3 (PCA_Tmode_Cmp3) is an average of the
vegetation for the two dates.
Component four (C4) is similar to C3, in that it has negative values for the
visible bands for 1984, and positive values for the near infrared band for the
same year. However, C4 has the opposite pattern for the 2003 bands, thus
suggesting this component is a vegetation difference image of 1984-
2003. Note that this means that new vegetation in 2003 will show as dark, not
bright, in the image PCA_Tmode_Cmp4.
As a final step, the PCA components can be visualized as a false color
composite, using components 1, 2, and 4. Since component 3 is an average
vegetation image for the two dates, it contains little change information,
and is therefore not used. Based on the
discussion above, however, before using component 4, we should first
reverse this image. This will make new vegetation bright in the color
composite, instead of dark, and thus make interpretation of the false color
composite easier. This reversal will make no other difference to the outcome.
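The reversal itself is a trivial operation; in the Python/NumPy terms used in
the earlier sketches it is a single multiplication (with a hypothetical array
standing in for PCA_Tmode_Cmp4):

```python
import numpy as np

cmp4 = np.array([[0.3, -0.8], [1.2, 0.1]])  # stand-in for PCA_Tmode_Cmp4
cmp4a = -1.0 * cmp4                         # SCALAR: Multiply by -1
# A linear display stretch is symmetric under the sign flip, so only the
# sense of bright versus dark changes, not the information content.
```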
◆ ◆ ◆
Create inverse image with SCALAR
Menu location: IDRISI GIS Analysis – Mathematical
operators – SCALAR
1. Start the SCALAR program from the main TerrSet menu.
2. In the SCALAR window, double click in the Input image
text box. In the resulting pick list, double click on
PCA_Tmode_Cmp4.
3. In the Output image text box, enter PCA_Tmode_Cmp4a.
4. In the Scalar value text box, enter -1.
5. Select the radio button for Multiply.
6. Figure 11.6.2.c shows the SCALAR window with the
processing parameters selected.
7. Click on OK.
8. The image is automatically displayed.
◆ ◆ ◆
Figure 11.6.2.c The SCALAR window.
You are now ready to create the false color composite, as described below.
◆ ◆ ◆
Create a color composite image
Menu Location: File – Display – COMPOSITE
1. Start the COMPOSITE program using the main menu or
tool bar.
2. In the COMPOSITE dialog box, double click in the Blue
image band text box, and then double click on the file
PCA_Tmode_Cmp1 in the subsequent Pick list.
3. Double click in the Green image band text box, and then
double click on PCA_Tmode_Cmp2.
4. Double click in the Red image band text box, and then
double click on PCA_Tmode_Cmp4a.
5. Enter the Output image filename in the text box provided:
pca124fcc.
6. Click OK to create and display the false color composite.
◆ ◆ ◆
Figure 11.6.2.d Multitemporal PCA false color composite.
In the resulting image (Figure 11.6.2.d), see if you can relate the colors to
your knowledge of the original bands. For example, red should be new
vegetation in 2003, and green to cyan should represent areas where the
albedo (average brightness) increased (roads and new construction
areas). Overall, the image presents an interesting overview of the
pattern of change in Las Vegas, with most growth occurring in a ring
pattern, and comparatively little growth to the east.
11.6.3 Change Vector Analysis
Change vector analysis, like image differencing, requires radiometrically
normalized data. It is therefore essential to complete sections 11.4 and 11.5
prior to working through this exercise. We will use the regression-normalized
data (from Section 11.5) for this exercise, to identify changes between 1984
and 2013.
Although some users of change vector analysis use spectrally transformed
data as input into change vector analysis, we will use the radiometrically
normalized bands. Using these bands will make it easier to compare the
results to other methods.
Figure 11.6.3.a The CVA window.
◆ ◆ ◆
Change vector analysis
Menu location: IDRISI GIS Analysis – Change / Time
Series – CVA
1. Start the CVA program from the main TerrSet menu.
2. In the CVA window, double click in the text box for the
Earlier date and Band 1. In the resulting pick list, double
click on 1984_B3, the TM red band.
3. Double click in the text box for the Earlier date and Band
2. In the resulting pick list, double click on 1984_B4, the TM
near infrared band.
4. Now choose the files for the Later date. Start by double
clicking in the Band 1 text box. In the resulting pick list,
double click on 2013_B4_red, the normalized OLI red
band.
5. Double click in the Later date and Band 2 text box. In the
resulting pick list, double click on 2013_B5_NIR, the
normalized OLI near infrared band.
6. In the Output magnitude image text box, enter cvamag.
7. In the Output direction image text box, enter cvadir.
8. Figure 11.6.3.a shows the CVA window with the processing
parameters specified.
9. Click on OK.
◆ ◆ ◆
Unlike most other TerrSet modules, CVA does not automatically display the
output images. Start by opening the change magnitude image.
◆ ◆ ◆
Display image
Menu: Display – DISPLAY LAUNCHER
1. Start the DISPLAY LAUNCHER from the main menu or
icon bar.
2. In the DISPLAY LAUNCHER window, double click in the
text box for the file name, and then double click on the file
cvamag.
3. Select the radio button for the GreyScale palette file.
4. Click on OK to display the image.
◆ ◆ ◆
After the image has opened, adjust the contrast of the image.
◆ ◆ ◆
Adjust image contrast
1. Find the Composer window and click on the Stretch
current view icon, to show the patterns of change more clearly
(Figure 11.6.3.b).
◆ ◆ ◆
Figure 11.6.3.b Magnitude of the spectral change stretched.
Figure 11.6.3.b displays the areas of change very clearly. The center of the
city has not changed appreciably. On the other hand, change is concentrated
across the north, west and south of the city, and to a slightly lesser extent to
the east. Only isolated areas of change are indicated in the desert, associated
with local changes such as landslides, changes in the Las Vegas Wash (the
river running through Las Vegas), water levels in Lake Mead, and mining.
Note that the magnitude image shows if the amount of change in reflectance
within a pixel over time is large or small, but does not tell us in which
direction reflectance is changing (e.g. is reflectance increasing or
decreasing?).
Now display the change direction image using the Default Quantitative
palette.
◆ ◆ ◆
Display image
Menu: File – Display – DISPLAY LAUNCHER
1. Start the DISPLAY LAUNCHER from the main menu or
icon bar.
2. In the DISPLAY LAUNCHER window, double click in the
text box for the file name, and then double click on the file
cvadir.
3. Click on OK to display the image (Figure 11.6.3.c).
◆ ◆ ◆
Figure 11.6.3.c Change direction image.
Interpreting the change direction image is more difficult than the magnitude
image. However, with a little background information, the meaning of this
image becomes much clearer. Note that the direction is only meaningful in
areas where the change magnitude is appreciable.
Use the Identify tool from the main menu to query the pixel values in this
image. The DN values are in units of degrees, and can be interpreted with
reference to the legend and also Figure 11.6.3.d, as will be explained in more
detail below.
Figure 11.6.3.d Interpreting the CVA direction angles.
Consider a pixel which has a combination of DN values associated with the
point labeled Date 1 in Figure 11.6.3.d (note that in our case they are
reflectance instead of the DN values, however the interpretation is the
same). Hypothetically, this point might be a pixel from the undisturbed
desert, with only sparse vegetation cover. This type of ground cover is likely
to have moderate to high reflectance in both the red and near infrared bands.
If this pixel is subsequently converted to lush vegetation, for example one of
the many golf courses in Las Vegas, the reflectance value for the red band
will decrease due to chlorophyll absorption, whereas the infrared band will
increase due to scattering off the spongy mesophyll of the grass
leaves. Remember from section 6.3 that vegetation is characteristically bright
in the near infrared. A change from desert to green grass will therefore result
in the movement of the point up and to the left in the graph (see the point
labeled Date 2(a) in Figure 11.6.3.d). This direction of movement, measured
from the second point’s location, is an angle of 135°. With reference to the
legend of the change direction image for Las Vegas (Figure 11.6.3.c), we see
that change of approximately 135° is represented in shades of green.
It is important to realize that CVA separates change magnitude from change
direction. For example, the greening of established suburbs will have the
same change direction as the new vegetation of a golf course built on what
was previously desert. However, the greening of a suburb will, at least over
the short term, have a much smaller magnitude of change than the new golf
course. Thus, not all the green in the change direction image represents
change between land cover classes; some of it may instead represent subtle
changes within a relatively consistent land cover class.
An alternative change scenario to the one just described is presented by the
pixel labeled Date 2(b) in Figure 11.6.3.d. This pixel could represent a desert
area that is disturbed, for example, due to construction activities or even
recreational off-road vehicle use. In this circumstance, the loss of the sparse
vegetation and erosion of surface material will result in both red and near-
infrared DN values increasing. This direction of change is associated with a
225° change, and is shown in the change direction image legend as an orange
color. It is clear from Figure 11.6.3.c that this degradation of land cover is
extensive on the periphery of Las Vegas. This same direction of change
(both red and NIR increasing) also occurs along the shores of Lake
Mead, showing that the direction of change alone is not enough to interpret
the type of change taking place.
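In two-band form, CVA reduces to a few lines of arithmetic. The sketch
below (Python/NumPy) computes the magnitude as the Euclidean length of
the change vector; the direction formula is our assumption, chosen so that it
reproduces the examples above (135° for red decreasing and NIR increasing,
225° for both increasing). Verify the exact angle convention against the
CVA entry in the TerrSet Help.

```python
import numpy as np

def cva(e_red, e_nir, l_red, l_nir):
    """Two-band change vector analysis sketch (earlier vs. later date)."""
    d_red = l_red - e_red                  # change in the red band
    d_nir = l_nir - e_nir                  # change in the NIR band
    mag = np.hypot(d_red, d_nir)           # length of the change vector
    # Assumed convention: azimuth-style angle giving 135 deg for
    # (red down, NIR up) and 225 deg for (red up, NIR up), as in
    # Figure 11.6.3.d.
    ang = np.degrees(np.arctan2(-d_red, -d_nir)) % 360.0
    return mag, ang
```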
11.7 Post-Classification Change Detection
Post-classification change detection is the GIS overlay of two previously
classified images. Therefore, the major part of the work in post-
classification change detection is the classification itself.
Chapters 7, 8 and 9 describe classification in detail, and the sections on
supervised and soft classification (Chapters 8 and 9) should be completed
prior to doing this exercise. The instructions for classification will be rather
brief in this section. If it has been some time since you completed those
chapters, or you find the instructions below too brief, you may wish to
refresh your memory by rereading that material.
The classification of land use in Las Vegas is a challenge, largely because
the geology of the area is so varied, and some of the geological classes are
spectrally similar to the urban land use classes, particularly the commercial
class. Readers wishing to skip over the classifications, and directly apply
the change analysis operation may go directly to Section 11.7.9, and use the
classifications provided with this chapter’s data.
11.7.1 Preparation
You should have already downloaded the data, as described in Section
11.2. In addition, as described in Section 11.3, you should have established
the Chap11 project file within the TerrSet EXPLORER.
For this section, we will use the 1984 reflectance data and the original 2013
raw data, instead of the normalized images created in the previous
sections. When doing post-classification change detection, it is not necessary
for the different dates to be radiometrically calibrated to one another.
Before starting, close all windows that you may have open from previous
exercises.
Display the false color composites created in Section 11.4, and the 2013 false
color composite that is provided as described below.
◆ ◆ ◆
Initial display of images
Menu: File – Display – DISPLAY LAUNCHER
1. Start the DISPLAY LAUNCHER from the main menu or
icon bar.
2. In the DISPLAY LAUNCHER window, double click in the
text box for the file name, and then double click on the file
1984_fcc.
3. Click on OK to display the image.
4. Start the DISPLAY LAUNCHER again.
5. In the DISPLAY LAUNCHER window, double click in the
text box for the file name, and then double click on the file
2013_fcc.
6. Click on OK to display the image.
◆ ◆ ◆
11.7.2 Develop a List of Spectral Classes
The first step in supervised classification is to develop the list of spectral
classes. Examine the 1984 false color composite (1984_fcc). Five
informational classes can be identified, as listed in Table 11.7.2.a. The Desert
class is very variable, and consists of at least 8 to 10 different spectral
classes. Because of this difficulty, instead of working with a classification
method that requires defining all classes present in the imagery (e.g.
Maximum Likelihood), we will use the Mahalanobis Typicality soft
classification method, which allows us to classify selected classes. Using this
method, we need to collect training signatures for Water, Vegetation,
Commercial/Industrial/Transportation and Residential classes only. If you are
interested in using other supervised methods, you will need to define your
classes completely, and therefore you will need to make multiple spectral
classes for the Desert class (e.g. Desert1, Desert2, etc.).
Table 11.7.2.a Supervised classification classes for Las Vegas.
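Conceptually, the Mahalanobis typicality of a pixel is derived from its
Mahalanobis distance to a class signature. The sketch below (Python with
NumPy and SciPy) uses the common definition of typicality as one minus the
chi-square cumulative probability of the squared distance, with degrees of
freedom equal to the number of bands; treat this as an illustration and check
the TerrSet Help for the module's exact formulation.

```python
import numpy as np
from scipy.stats import chi2

def mahal_typicality(pixels, mean, cov):
    """Mahalanobis typicality of pixel vectors for one class signature.

    pixels: (n_pixels, n_bands) array; mean and cov come from the class
    training data. A typicality near 1 means 'very typical' of the
    class; near 0 means very dissimilar.
    """
    d = pixels - mean                          # deviations from the class mean
    d2 = np.einsum('ij,jk,ik->i', d, np.linalg.inv(cov), d)
    return chi2.sf(d2, df=pixels.shape[1])     # 1 - chi-square CDF
```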
11.7.3 Digitize Training Polygons for the
1984 Image
Our overall aim is to classify each of the two images using Mahalanobis
Typicalities classification in separate classifications. This means that we will
need to collect a set of class signatures for each image independently.
Figure 11.7.3.a Zoomed false color composite of TM 1984 data with
training polygons overlaid. Note the multiple polygons for vegetation taken at
different areas to capture the variability of this class.
After you have developed the list of spectral classes, digitize the polygons
over the 1984 image. Figure 11.7.3.a shows a recommended number and
extent of the polygons required for this classification. The instructions below
describe the procedures for digitizing the polygons for the training areas.
Those who feel the instructions are too brief might review Section 8.4.
◆ ◆ ◆
Digitizing training polygons
Menu location: Main tool bar icons only (Note that the
icons are grayed out if an image is not yet displayed.)
1. The false color image 1984_fcc should already be
displayed in a Viewer.
2. Click on the Full extent normal icon from the main tool
bar, to return the image to the default zoom if you have
zoomed in on part of the image.
3. Click on the Full extent maximized icon from the main tool
bar, to enlarge the image display.
4. Click on the Zoom window icon. Draw a zoom box around
the lake on the right side of the image (this is a corner of Lake
Mead, created by the Hoover Dam, on the Colorado River).
5. Click on the Digitize icon. A dialog box labeled DIGITIZE
will open.
6. Enter the file name Water in the text box labeled Name of
layer to be created. Leave all other values at their defaults.
7. Click on OK to close the DIGITIZE window.
8. Move the cursor over the image, and the cursor will
become a Digitize icon, instead of the normal arrow.
9. Digitize the first vertex of the water polygon by pressing
the left mouse button. Continue clicking in the image to
specify the outer boundaries of the polygon you wish to
digitize. Note: Try to avoid the small islands in the lake, but
be sure to include areas near the shore. If you display the
green band by itself you will notice that this area has slightly
different reflectance in this band, probably due to increased
sediment content.
10. Close the polygon by pressing the right mouse button.
Note that the program automatically closes the polygon by
duplicating the first point, so you do not need to digitize the
first point again.
11. Save the polygon file by clicking on the Save digitized
data icon.
12. A Save / Update dialog box will open. Click on Yes.
13. Click on the Full extent normal icon from the main tool
bar to return the image to the default zoom.
14. Click on the Full extent maximized icon from the main
tool bar.
15. Click on the Zoom window icon. Draw a zoom box around
the bright red vegetation in the irrigated fields to the right
(east) of the city.
16. Now start digitizing the polygon for vegetation by
clicking on the Digitize icon.
17. The DIGITIZE dialog box will open again.
18. Select the radio button for Create a new layer for the
features to be digitized.
19. Click OK.
20. Another dialog box labeled DIGITIZE will open.
21. Enter the file name Vegetation in the text box labeled
Name of layer to be created. Leave all other values at their
defaults.
22. Click on OK to close the DIGITIZE window.
23. Digitize the vegetation polygon with left mouse clicks.
Close the polygon with a right mouse click.
24. Zoom in to golf courses or city parks with bright red color
(high near infrared) signature.
25. Click on the Digitize icon.
26. This time, when the DIGITIZE dialog box opens, accept
the default radio button for Add features to the currently
active vector layer.
27. Click OK.
28. Another DIGITIZE dialog box will open.
29. The ID for the polygon will be incremented automatically.
Be sure to set it back to 1, as this new polygon should be
digitized as part of the same class.
30. Digitize a new vegetation polygon.
31. Repeat 24-30 for several golf courses and parks to capture
the variability of this class. Remember to change the ID back
to 1 each time.
32. Save the polygon file by clicking on the Save digitized
data icon.
33. A Save / Update dialog box will open. Click on Yes.
34. Repeat the sequence of restoring, maximizing and
zooming (steps 13-15 above), only this time zoom in on the
dark downtown region, representing the commercial core of
the city. Name the file commercial.
35. Add to the commercial class by digitizing a second
polygon, this time over the airport runway, south of the city.
(The runway is a good example of the transportation part of
this class.) Follow steps 13-15 to zoom in on the runway.
36. Click on the Digitize icon, selecting the option Add
features to the currently active vector layer.
37. Click OK.
38. In the DIGITIZE dialog box, be sure to set it back to 1, as
both commercial polygons should be digitized as part of the
same class.
39. Digitize the polygon.
40. Save the polygon file by clicking on the Save digitized
data icon.
41. A Save / Update dialog box will open. Click on Yes.
42. Now digitize the residential class. Follow steps 13-15 to
zoom in on a dark red area of the more mature suburbs.
43. Click on the Digitize icon.
44. The DIGITIZE dialog box will open again.
45. Select the radio button for Create a new layer for the
features to be digitized.
46. Click OK.
47. In the text box labeled Name of layer to be created, enter
Residential.
48. Click on OK.
49. Digitize the residential polygon.
50. Save the polygon file by clicking on the Save digitized
data icon.
51. A Save / Update dialog box will open. Click on Yes.
◆ ◆ ◆
We have created the polygons for all the training areas, except the
desert. Because the desert is so complex, we will treat anything that is not
similar to Water, Vegetation, Commercial or Residential as belonging to the
Desert class.
11.7.4 Create Signatures with MAKESIG
We are now ready to run the MAKESIG program to generate the signature
files. (See also Section 8.5.1).
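Conceptually, a signature file stores little more than per-class statistics of
the training pixels. A minimal sketch, with hypothetical inputs (bands as an
n_bands × rows × cols array and a boolean mask marking the digitized
training pixels):

```python
import numpy as np

def make_signature(bands, mask):
    """Per-class mean vector and covariance matrix of training pixels."""
    X = bands[:, mask].T                  # (n_training_pixels, n_bands)
    return X.mean(axis=0), np.cov(X, rowvar=False)
```

The mean and covariance are exactly the statistics that a Mahalanobis-based
classifier relies on in the following sections.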
We will first create a raster group file containing all 1984 bands.
◆ ◆ ◆
Creating a file collection with the TerrSet EXPLORER
1. Maximize the TerrSet EXPLORER window from the menu
or the main icon toolbar.
2. Click on the tab for Files. If the files in the directory are
not listed, double click on the directory name (e.g.
RSGuide\11), as listed in the Files pane.
3. Click on 1984_B2.rst. The file name will then be
highlighted.
4. Press and hold the CTRL key, and click on
1984_B3.rst, 1984_B4.rst, and 1984_B5.rst.
5. You should now have the four Landsat 5 TM files
highlighted. Release the CTRL key, and press
the right mouse button.
6. A pop-up menu will appear. Within this menu, scroll down
to Create, and then select Raster Group File.
7. A new file should be listed in the Files pane, Raster
Group.rgf. Select that file by right clicking on the file name.
8. Select the option Rename and type: 1984_all_sig.
9. Press Enter on the computer keyboard.
◆ ◆ ◆
◆ ◆ ◆
Create signature files using MAKESIG
Menu location: IDRISI Image Processing – Signature
development – MAKESIG
1. Start the MAKESIG program from the main menu.
2. In the MAKESIG dialog box, click on the Pick list button
(…) to the right of the text box of Vector file defining training
sites.
3. In the Pick list window, double click on the Water file.
4. In the MAKESIG dialog box, click on the button Enter
signature file names.
5. The Enter signature filenames dialog box will open. In the
blank cell to the right of 1, type 84-Water.
6. Uncheck the box labeled Create signature group file.
7. Click on OK to close the Enter signature file names dialog
box.
8. In the MAKESIG dialog box, click on the button for Insert
layer group.
9. From the Pick list, double click on 1984_all_sig.
10. The MAKESIG dialog box should now show the four
bands of the TM data.
11. Click OK to create the signature file.
12. Now create the next signature file, for vegetation. Start by
double clicking in the MAKESIG dialog box text box for
Vector file defining training sites.
13. In the resulting pick list, double click on Vegetation.
14. In the MAKESIG dialog box, click on the button Enter
signature file names.
15. The Enter signature file names dialog box will open. In
the blank cell to the right of 1, type 84-Vegetation.
16. Click on OK to close the Enter signature file names dialog
box.
17. In the MAKESIG dialog box, click OK to create the
signature file.
18. Repeat steps 12-17 for each of the remaining vector files:
commercial and residential. Click the Enter signature file
names button, and change the right cell to the appropriate file
name (84-Commercial and 84-Residential, respectively).
◆ ◆ ◆
Now create a signature group file with the combined list of signatures. In
Section 8.5.2, we used the TerrSet EXPLORER. In this section we will use
the TerrSet COLLECTION EDITOR.
◆ ◆ ◆
Create signature group file with the COLLECTION
EDITOR
Menu location: File – COLLECTION EDITOR
1. Open the COLLECTION EDITOR from the main menu.
2. In the COLLECTION EDITOR window, use the menu for
File – New.
3. In the resulting New file window, click on the pull-down
menu next to the text box for Files of type. Select Signature
group files (*.sgf).
4. In the File name text box, type 1984-sig.
5. Click on Open.
6. The COLLECTION EDITOR window will now show the
list of potential signatures in the left column. The right
column, currently blank, is where signatures will be entered
(Figure 11.7.4.a).
7. In the left column click on 84-Water.
◆ ◆ ◆
Figure 11.7.4.a The COLLECTION EDITOR window after specifying the
new signature group file name.
◆ ◆ ◆
Create signature group file with the COLLECTION
EDITOR (cont.)
8. Click on the button Insert after >.
9. The Water signature should now be listed in the right
column.
10. Repeat steps 7 and 8 to insert, in the following order, 84-
Vegetation, 84-Commercial, and 84-Residential (Figure
11.7.4.b).
11. In the COLLECTION EDITOR window, use the menu to
select File – Save.
12. Close the COLLECTION EDITOR window. (It is
important to close the window, even though we will use the
program again later.)
◆ ◆ ◆
Figure 11.7.4.b The COLLECTION EDITOR window with signatures
specified in order.
11.7.5 Classify the 1984 Image
Classify the 1984 TM raster group file using the Mahalanobis typicalities
module, as described below.
◆ ◆ ◆
Classify an image with MAHALCLASS
Menu location: IDRISI Image processing – Soft classifiers
/ Mixture Analysis – MAHALCLASS
1. Start the MAHALCLASS program from the menu (Figure
11.7.5.a).
2. In the MAHALCLASS dialog box, click on the Insert
signature group button.
3. In the Pick List, double click on the 1984-sig signature
group file.
4. In the MAHALCLASS dialog box, specify the Output prefix
as Mahal-.
5. Click on OK to run the classification.
6. The classification uncertainty is displayed automatically
(Figure 11.7.5.b).
7. Close the MAHALCLASS dialog box, even though we will
use this program again later.
◆ ◆ ◆
Figure 11.7.5.a The MAHALCLASS dialog box.
Figure 11.7.5.b MAHALCLASS classification uncertainty. Note the high
uncertainty in the desert, since we did not specify training sites for this class.
The classification uncertainty image (Figure 11.7.5.b) shows high uncertainty
in desert areas, since we did not specify training sites for this class. We are
interested, however, in knowing the typicalities for each of the four classes,
so we will display them individually.
◆ ◆ ◆
Displaying images with the TerrSet EXPLORER
1. Maximize TerrSet EXPLORER window from the menu or
the main icon toolbar.
2. Click on the tab for Files.
3. Double click on Mahal-84-residential. The image will be
displayed with the default palette (Figure 11.7.5.c).
◆ ◆ ◆
Figure 11.7.5.c Typicalities for the 1984 Residential class.
The Mahalanobis typicalities for the 1984 residential class are high within the
town. Note that the natural sparse vegetation in the western part of the scene
has a low typicality (around 0.06), but not zero, meaning that it has some
similarity to the training data used for Residential. This is because residential
areas also contain vegetation. Areas very dissimilar to residential (black in
the image) have values of 0.0001, and therefore any threshold used to
exclude dissimilar areas must be above this value (e.g. 0.0002).
We can use the slider in Layer Properties to find the minimum typicality
above which values correspond to the residential class.
◆ ◆ ◆
Finding typicality thresholds with COMPOSER
4. In COMPOSER, select the Layer Properties icon.
5. The Layer Properties window will display.
6. Move the slider for Display Min to the right and see how
the low values “disappear”. (Figure 11.7.5.d)
7. Make a note of the values that appear to the right of the
Display min slider.
8. Repeat steps 1-7 for the other three classes.
◆ ◆ ◆
Figure 11.7.5.d Changing display minimum with Layer Properties.
When you slide the display minimum to the right, typicality values below the
number shown to the right of the slider (0.11 in Figure 11.7.5.d) will be
displayed in black and seem to disappear. If you move the slider too far to
the right, residential areas will also start to vanish. It is important to find a
balance between overprediction (identifying more residential area than
actually exists) and underprediction (identifying less area than actually
exists).
Now that we have identified the minimum typicality above which each class is
present, we can transform the soft output into a hard classification. In Chapter
9, we used RECLASS for this. Here we will use the module HARDEN.
11.7.6 Harden the Mahalanobis typicality
1984 classification
The module HARDEN allows us to transform a soft classification output into
a hard classification. To do this you need to specify the value above which
the class will be considered to be present. If a pixel has soft classification
values (typicalities in this case) above the specified threshold for more than
one class, the pixel is assigned to the class that has the highest value.
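In other words, HARDEN is a thresholded argmax across the typicality
images. A minimal sketch of that logic (Python/NumPy, hypothetical inputs):

```python
import numpy as np

def harden(typicalities, thresholds):
    """Thresholded argmax over a stack of soft-classification images.

    typicalities: (n_classes, rows, cols) array of typicality images.
    thresholds:   per-class minimum typicality from the slider exercise.
    Pixels that fail every threshold get 0 (unclassified), as with the
    Desert areas in this exercise.
    """
    t = np.asarray(typicalities, dtype=np.float64)
    thr = np.asarray(thresholds)[:, None, None]
    masked = np.where(t >= thr, t, -np.inf)     # suppress failing classes
    winner = masked.argmax(axis=0) + 1          # class IDs start at 1
    return np.where(np.isneginf(masked.max(axis=0)), 0, winner)
```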
◆ ◆ ◆
Harden soft classification
Menu location: IDRISI Image Processing – Soft
Classifiers / Mixture Analysis – HARDEN
1. Use the TerrSet main menu to start the HARDEN program.
2. In the HARDEN window, select the radio button for
Typicalities from MAHALCLASS.
3. Click on Insert layer group and select Mahal-.
4. In the Output file text box type: 1984harden.
5. Now you need to complete the Minimum typicality column
with the minimum thresholds extracted in the previous
section. Note that your thresholds will be different from the
ones presented here since you used different training sites
(Figure 11.7.6.a).
6. Click on OK. The classified image will be displayed
automatically (Figure 11.7.6.b).
◆ ◆ ◆
Figure 11.7.6.a HARDEN module with parameters filled.
Note that the threshold that we specified for water is 0.0002, so that very
dissimilar areas (which have typicalities of 0.0001 instead of 0) remain
unclassified. Your specified minimum typicality should therefore always be
above 0.0001.
Figure 11.7.6.b HARDEN output for the 1984 classification.
The resulting classification (Figure 11.7.6.b) is a good representation of the
residential, commercial and urban vegetation areas; however, some desert
areas are classified as commercial. If you are not happy with the
classification, you can re-evaluate the typicality thresholds, or collect new
(or additional) training sites until your classes are well defined. In our case,
we will accept these errors, since they affect only a small part of the image.
Note that desert areas have values of zero, since we did not classify them. We
will now recode our classes so that the Desert informational class has its own
ID.
11.7.7 Collapse Spectral Classes to
Informational Classes
The method for accomplishing the reassignment of classes is described in
Section 7.3.10. In this case, we only need to change the ID of the Desert class
from zero to five. We will use RECLASS for this, since only the values that
satisfy the conditions specified in the module are reassigned, leaving all other
values unchanged.
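The recode itself is a one-line conditional assignment; a sketch with a
hypothetical stand-in for the HARDEN output:

```python
import numpy as np

harden_1984 = np.array([[0, 3], [4, 0]])  # stand-in for 1984harden
final_1984 = np.where(harden_1984 == 0, 5, harden_1984)  # 0 -> Desert (5)
```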
◆ ◆ ◆
Recode output with RECLASS
Menu Location: IDRISI GIS Analysis – Database Query –
RECLASS.
1. Start the RECLASS program from the main menu or the
main icon tool bar.
2. In the RECLASS window, enter the Input file name by
double clicking in the text box and selecting 1984harden.
3. Enter the Output file as 1984final.
4. In the Reclass Parameters section of the RECLASS
window, enter the values to complete the table as indicated
below:
Assign a new value of 5
To all values from 0
To just less than 1
5. In the RECLASS window, click on the OK button.
6. The reclassed image will be displayed automatically;
however, it will display better if you re-display it from the
TerrSet EXPLORER (Figure 11.7.7.a).
◆ ◆ ◆
Figure 11.7.7.a Result of the RECLASS program.
The next step is to add the legend categories.
◆ ◆ ◆
Modifying image metadata with TerrSet EXPLORER
1. In the TerrSet EXPLORER, click on the tab for Files.
2. If the files are not listed in the Files pane, double click on
the directory name to display the files.
3. If the Metadata pane is not shown below the Files pane,
right click on 1984final.rst. In the pop-up menu, select
Metadata.
4. If the Metadata pane is already shown, click on
1984final.rst.
5. In the Metadata pane, scroll down to Categories.
6. Double click in the blank cell next to Categories.
7. The CATEGORIES window will open.
8. Click four times on the Add Line icon on the right of the
CATEGORIES window, so that five blank lines are shown.
9. In the first blank line, under Code, enter 1.
10. In the next cell, under Category, enter Water.
11. In the subsequent blank line, enter 2 and Vegetation, and
in the following lines 3 and Commercial, 4 and Residential,
and 5 and Desert.
12. Figure 11.7.7.b shows the CATEGORIES window with
the classes specified.
13. Click on OK.
14. In the bottom left corner of the Metadata pane of the
TerrSet EXPLORER, click on the Save icon. Accept the
warning message.
◆ ◆ ◆
Figure 11.7.7.b The CATEGORIES window.
Now redisplay the 1984final image using the DISPLAY LAUNCHER as
described below.
◆ ◆ ◆
Display image
Menu: File – Display – DISPLAY LAUNCHER
1. Start the DISPLAY LAUNCHER from the main menu or
icon bar.
2. In the DISPLAY LAUNCHER window, double click in the
text box for the file name, and then double click on the file
1984final.
3. Under Palette file, select the Browse icon (…) and select
the file LVclass, which is a palette we already created for this
land cover.
4. Click on OK to display the image.
◆ ◆ ◆
11.7.8 Classification of the 2013 Image
Once you have completed the classification for the 1984 data, you should
repeat the classification exercise, except this time using the 2013 OLI data
(2013_fcc). You will need to collect a new set of signatures. It is essential,
however, that you should end up with the same specific informational classes,
which are associated with the same DN values. For example, since 1
represents Water for the 1984 data, 1 should also represent Water for the
2013 data.
Note that TerrSet places each signature in its own file, as well as linking them
through the group file. Therefore, for the 2013 signatures, it is very important
that you specify a name that differs from those of the 1984 signatures. One
way to ensure all the signature names are different is to add the numerals 13
in front of each name, thus Water becomes 13-Water.
The final image, after running the RECLASS program, should be called
2013final.
Figure 11.7.8.a Classification of the 1984 Landsat 5 TM image with an
updated legend and palette.
Figure 11.7.8.b shows the final 2013 image classification with an updated
legend and palette. Compare with Figure 11.7.8.a to see how the city of Las
Vegas has grown in 29 years.
Figure 11.7.8.b Classification of the 2013 Landsat 8 OLI image of Las
Vegas.
11.7.9 Overlay of Two Independent
Classifications using CROSSTAB
After you have classified the 2013 OLI image, you are ready for the final
step, which is the overlay of the two separate classifications to obtain a single
change map.
Note: If you have skipped the previous classification steps, Sections 11.7.1
through 11.7.8, and wish to use the classifications provided with the data for
this exercise, simply use the following files in the CROSSTAB
program: 1984_book and 2013_book, instead of 1984final and 2013final.
◆ ◆ ◆
Overlay operation with CROSSTAB
Menu location: IDRISI GIS Analysis – Database Query –
CROSSTAB
1. Open the CROSSTAB program from the main TerrSet
menu.
2. In the CROSSTAB window, double click in the text box
next to First image (column). In the resulting pick list, double
click on 1984final.
3. Double click in the text box next to Second image (row). In
the resulting pick list, double click on 2013final.
4. In the region marked Output type, select the radio button
for Both cross-classification and tabulation.
5. In the Output file text box, enter 84-13-Crosstab.
6. Figure 11.7.9.a shows the CROSSTAB window, with
parameters specified.
7. Click on OK.
◆ ◆ ◆
Figure 11.7.9.a CROSSTAB window.
The output of the CROSSTAB program is a post-classification change map
(Figure 11.7.9.b) and a table (Table 11.7.9.a). Note that unless you are using
the classifications provided with the book, your change map and table will
differ slightly from those shown here; however, the overall patterns should
be similar.
Figure 11.7.9.b Cross-tabulation of the 1984 and 2013 classifications.
We already worked with CROSSTAB in Section 7.4. We will now review the
interpretation of the cross-tabulation output. The change map has a separate
category for each possible transition, so the legend is shown as a series of
pairs of numbers, each pair separated by a vertical bar. The first legend
category, 1|1, represents the combination (or change transition) of class 1 in
1984 and class 1 in 2013 (i.e. pixels that remained Water in both
years). This class is dominated by the small area of Lake Mead on the right
side of the image, displayed in red. Likewise, 1|2 represents the transition
from Water in 1984 to Vegetation in 2013.
Since each classification had five classes, there are 5*5 = 25 possible
transitions. However, only a small number of transitions dominate the map,
as can be observed by looking at Table 11.7.9.a. It is notable from the table
that in Las Vegas, most of the largest change classes (i.e. off-diagonal
elements in the table) are in column 5 (i.e. pixels that were Desert in
1984). Note in particular the large number of pixels that have changed from
Desert to Commercial (3) and Residential (4). Since each pixel represents 30
by 30 meters (900 m2, or 0.09 hectares), it is a simple matter to convert the
change data to area or percentages. For example, the area of Desert that
was converted to Residential between 1984 and 2013 is 391,692 pixels, or
391,692 x 0.09 ha = 35,252 ha (352.5 km2), equivalent to about 1,216 ha
(12.2 km2) per year over the 29 years. Note that the area of unchanged
Residential (67,593 pixels, or 60.83 km2) represents about one seventh of the
total area in Residential by 2013 (465,571 pixels, or 419 km2).
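The same bookkeeping is easy to reproduce programmatically. The sketch
below (Python/NumPy, hypothetical inputs) cross-tabulates two classified
arrays and converts the pixel counts to hectares using the 0.09 ha per pixel
factor from the text:

```python
import numpy as np

def crosstab_areas(earlier, later, n_classes=5, pixel_ha=0.09):
    """Cross-tabulate two classifications; entries in hectares.

    Entry [i, j] is the area that was class i+1 in the earlier image
    and class j+1 in the later image (a 30 m pixel is 900 m2 = 0.09 ha).
    """
    idx = (earlier.astype(int) - 1) * n_classes + (later.astype(int) - 1)
    counts = np.bincount(idx.ravel(), minlength=n_classes * n_classes)
    return counts.reshape(n_classes, n_classes) * pixel_ha

# e.g. Desert (5) in 1984 that became Residential (4) by 2013:
# crosstab_areas(img1984, img2013)[4, 3]
```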
A simpler way of converting the transitions to areas is to right click on the
legend, select the option Calculate area, and then select the desired
units.
Table 11.7.9.a. Cross-tabulation of 1984 and 2013 classifications as
calculated with CROSSTAB.
The cross-classification table also shows that the second largest transition
involves 78.8 km2 of Desert changing to Commercial. Water also
experienced changes, with 13.6 km2 becoming Desert (1 | 5). The table
also shows a transition of 4 km2 of Water into Residential (1 | 4); however,
this might be due to confusion between some residential and desert areas in
the classification. When interpreting the cross-classification image and table,
it is important to distinguish between plausible changes and apparent
changes that may be caused by errors in the classification.
APPENDIX
A. Sources of Free Data
A surprisingly large number of websites offer free or relatively inexpensive
imagery, often available for direct download on-line.
A.1 United States Geological Survey (USGS) Earth Explorer
Landsat data can be accessed from multiple places. Only data downloaded
from the USGS Earth Explorer is suitable for import with the LANDSAT
module. In Section C we provide an explanation on how to download data
from this website. The USGS Earth Explorer also provides Sentinel (see
A.2), AVHRR, and multiple other datasets.
https://earthexplorer.usgs.gov/
A.2 European Space Agency
Sentinel data can be accessed from the European Space Agency (ESA)
website. Only Sentinel data from this source can be imported using the
SENTINEL import tool. Sentinel data from other sources should be imported
with the GDAL tool within TerrSet.
https://sentinel.esa.int/web/sentinel/home
A.3 Land Viewer
The Land Viewer is an online tool for visualizing the full archive of Landsat 7,
Landsat 8, Sentinel and MODIS. It is a very handy tool when selecting data,
as it allows you not only to visualize the images at full resolution, but also to
explore different pre-defined band combinations and indices. Images can be
downloaded from the same place and imported into TerrSet using the GDAL
or GEOTIFF tools. If you desire to use the LANDSAT or SENTINEL import
tools you will need to search for the same image in Earth Explorer or ESA.
(Note that, at the time of writing this manual, the dates reported in Land
Viewer interface were one day off. You should check the metadata to find the
correct acquisition date).
https://lv.eosda.com/
A.4 USGS GloVis
The Global Visualization tool (GloVis) provides an interface for visualizing
and querying data sets including the Landsat archive, ASTER, and MODIS data.
https://glovis.usgs.gov/
A.5 AmericaView
Another good source of imagery of the United States is AmericaView.
Individual states that are members of the AmericaView consortium maintain
their own websites, and each has its own slightly different collection of
imagery. Browse each member's website to obtain their data.
http://www.americaview.org/
B. Sources of Data for Sale
Most major commercial and national space agencies have data for sale.
Below we present a partial list:
SPOT Image https://www.intelligence-airbusds.com/geostore/
Digital Globe www.digitalglobe.com
Many organizations that collect data sell their data through local agents. For
example, Geocarto International Centre Ltd., publisher of this manual, also
acts as a reseller for a wide variety of satellite imaging companies.
http://www.geocarto.com/
C. Download Data with Earth Explorer
One of the most useful sources for a wide variety of free imagery is the
USGS Earth Explorer site, http://earthexplorer.usgs.gov/. Since this site is so
useful, we provide here a short tutorial for accessing data from the site in
preparation for importing into TerrSet.
Create an Account
a) Go to http://earthexplorer.usgs.gov/.
b) Click on the REGISTER link at the top right of the screen. It will ask
you for a username, password, affiliation, and address. You should
receive an email confirmation.
c) Once you have an account created, login to your account.
You have several parameters to enter to start the search for images. You will
have to move sequentially through the following tabs:
1. Search criteria tab.
a. Select the area of interest:
i. Search for a place name or a path/row; after identifying the
location, click on the Show button, or
ii. Select a region by clicking in the map.
iii. Then select the date range or specific months.
2. Datasets tab
a. Select the desired dataset by clicking on the + sign and then checking
the box next to the dataset name. To download Landsat go to Landsat
Archive, click on the + next to Collection 1 Level-1, and check the box
next to the Landsat sensor you want to download (TM, ETM+ or
OLI/TIRS).
3. Additional criteria tab
a. For each dataset, you can select the amount of cloud cover allowed in
the image and other characteristics (e.g. day/night). When searching TM
data (the longest archive), it is recommended to restrict the results by
date and/or cloud cover.
4. Results tab
a. All images matching the search criteria will be shown in the Results
tab.
b. Click Show/Browse Overlay to add the overlay to the map.
c. Click to download the image.
Download
1. In the download options, select Level 1 GeoTIFF Data product. You
need to be logged in to be able to access this option.
2. The files are compressed in two levels (zip and tar). You can use a
free unzipping tool such as 7zip to decompress them:
a. Decompress the zip file; the output will be a file with extension
.tar. This file should also be decompressed so that the output contains
all .tif bands and metadata (.MTL).
3. After you have extracted the .tif and metadata (.MTL) files, you are ready
to import them into TerrSet using the GEOTIFF or LANDSAT modules.
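If you prefer to script the two-level decompression, the following minimal
Python sketch (with a hypothetical file name) performs both steps using only
the standard library:

```python
import tarfile
import zipfile

# Hypothetical download name; substitute the scene you actually retrieved.
with zipfile.ZipFile('LC08_scene.zip') as z:
    z.extractall('scene')                  # step 1: yields the inner .tar
with tarfile.open('scene/LC08_scene.tar') as t:
    t.extractall('scene/bands')            # step 2: .tif bands + .MTL metadata
```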