Gis Rs 2 of 3f

AGRI 313 covers GIS and Remote Sensing Techniques, focusing on the components, data models, input methods, and spatial analysis techniques of Geographic Information Systems (GIS). It details the importance of hardware, software, data, people, and methods in GIS, along with the advantages and disadvantages of vector and raster data formats. The document also outlines various spatial analysis operations and techniques, including attribute queries, spatial queries, and multi-layer operations, emphasizing the role of GIS in analyzing geographic data effectively.

AGRI 313 : GIS & Remote Sensing Techniques

UNIT II

A geographic information system (GIS) is a computer system for capturing, storing, checking, and displaying data related to positions on Earth’s surface.

A “geographic information system” (GIS) is a computer-based tool that allows you to create, manipulate, analyze, store, and display information based on its location. GIS makes it possible to integrate different kinds of geographic information, such as digital maps, aerial photographs, satellite images, and Global Positioning System (GPS) data, along with associated tabular database information.

A geographic information system (GIS) lets us visualize, question, analyze, and interpret
data to understand relationships, patterns, and trends.

Components of a GIS

GIS has 5 components:
i) Hardware
ii) Software
iii) Data
iv) People
v) Methods
Hardware

Hardware is the computer on which a GIS operates. Today, GIS software runs on a wide
range of hardware types, from centralized computer servers to desktop computers used in
stand-alone or networked configurations.

Software

GIS software provides the functions and tools needed to store, analyze, and display
geographic information. Key software components are:
 Tools for the input and manipulation of geographic information
 A database management system (DBMS)
 Tools that support geographic query, analysis, and visualization
 A graphical user interface (GUI) for easy access to tools.

Data

The most important component of a GIS is the data. Geographic data and related tabular
data can be collected in-house or purchased from a commercial data provider. A GIS
can integrate spatial data with other existing data resources, often stored in a corporate
DBMS. The integration of spatial data and tabular data stored in a DBMS is a key
functionality afforded by GIS.

People

GIS technology is of limited value without the people who manage the system and
develop plans for applying it to real-world problems. GIS users range from technical
specialists who design and maintain the system to those who use it to help them perform
their everyday work.
Methods

A successful GIS operates according to a well-designed plan and business rules, which
are the models and operating practices unique to each organization.

Data Models in GIS:

Geographic data can be stored in a vector graphics or a raster graphics format.

Vector

Using a vector format, two-dimensional data is stored in terms of x and y coordinates. A road or a river can be a series of x and y coordinate points. Nonlinear features such as town boundaries can be stored as a closed loop of coordinates. The vector model is good for describing well-delineated features.

Raster

A raster data format expresses data as a regularly spaced grid of cells, suited to continuously varying phenomena. The raster model is better for portraying subtle changes, such as soil type patterns, over an area.
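To make the contrast concrete, here is a minimal sketch (with invented coordinates and grid values) of one road held in each model:

```python
# Hypothetical sketch: one road stored under the two data models.
# Vector: an ordered list of (x, y) coordinate pairs along the road.
road_vector = [(0.0, 0.0), (2.0, 1.0), (4.0, 1.5), (6.0, 3.0)]

# Raster: a grid of cells, 1 where the road passes through a cell, 0 elsewhere.
road_raster = [
    [0, 0, 0, 0, 0, 0, 1],
    [0, 0, 0, 1, 1, 1, 0],
    [1, 1, 1, 0, 0, 0, 0],
]

vertex_count = len(road_vector)                    # exact positions recorded
road_cells = sum(sum(row) for row in road_raster)  # presence per fixed cell
print(vertex_count, road_cells)
```

The vector form keeps precise positions; the raster form trades that precision for a regular grid at a fixed resolution.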

Advantages of raster:

• Simple data structure
• Easy and efficient overlaying
• Compatible with satellite imagery
• High spatial variability is efficiently represented
• Simple for own programming
• Same grid cells for several attributes

Disadvantages of raster:

• Inefficient use of computer storage
• Errors in perimeter and shape
• Difficult network analysis
• Inefficient projection transformations
• Loss of information when using cells
• Less accurate (although interactive) maps

Advantages of vector:

• Compact data structure
• Efficient for network analysis
• Efficient projection transformation
• Accurate map output

Disadvantages of vector:

• Complex data structure
• Difficult overlay operations
• High spatial variability is inefficiently represented
• Not compatible with satellite imagery.
Data Input

Database creation is the most important, expensive, and time-consuming part of any GIS project. Data input is the operation of encoding the data and writing them to the database (storage). The creation of clean digital data is the most important and complex task, upon which the usefulness of the GIS depends.

Two aspects of the data need to be considered separately for geographic information systems: first the positional data, and second the associated attributes. The way cartographic features are recorded in terms of their spatial and non-spatial attributes is the main distinguishing criterion between automated cartography (where the non-spatial data relate mainly to color, line type, symbolism, etc.) and geographic information processing (where the non-spatial data may record land use, vegetation types, soil types, etc.).

In geographic information systems, data input can be described in the following points:

o Entering the spatial data (digitizing).
o Entering the non-spatial associated attributes.
o Linking the spatial data to the non-spatial data.

At each stage, there should be necessary proper data verification and checking
procedures to ensure that the resultant database is as free as possible from error.

Since the input of attribute data is usually quite simple, the discussion of data input techniques will be limited to spatial data only. There is no single method of entering spatial data into a GIS. Rather, several mutually compatible methods can be used singly or in combination.

The choice of data input method is governed largely by the application, the available
budget, and the type and complexity of data being input.

There are at least four basic procedures for inputting spatial data into a GIS. These are:
o Manual digitizing
o Automatic scanning
o Entry of coordinates using coordinate geometry
o Conversion of existing digital data.

Data Verification

Six clear steps stand out in the data editing and verification process for spatial data.
These are :
o Visual review. This is usually by check plotting.
o Cleanup of lines and junctions. This process is usually done by software first
and interactive editing second.
o Weeding of excess coordinates. This process involves the removal of redundant
vertices by the software for linear and/ or polygonal features.
o Correction for distortion and warping. Most GIS software has functions for
scale correction and rubber sheeting. However, the distinct rubber sheet algorithm
used will vary depending on the spatial data model, vector, or raster, employed by
the GIS. Some raster techniques may be more intensive than vector based
algorithms.
o Construction of polygons. Since the majority of data used in GIS is polygonal,
the construction of polygon features from lines/arcs is necessary. Usually, this is
done in conjunction with the topological building process.

o The addition of unique identifiers or labels. Often this process is manual. However, some systems do provide the capability to automatically build labels for a data layer.

These data verification steps occur after the data input stage and before or during the
linkage of the spatial data to the attributes. Data verification ensures the integrity between
the spatial and attribute data. Verification should include some brief querying of attributes
and cross-checking against known values.
Spatial analysis

Spatial analysis is a vital part of GIS. The results of a geographic analysis can be communicated in the form of maps, reports, or both. Integration involves bringing together diverse information from a variety of sources and analyzing multi-parameter data to provide answers and solutions to defined problems. Spatial analysis can be done in two ways: vector-based and raster-based analysis. Two fundamental functions of GIS have been widely realized: the generation of maps and the generation of tabular reports.

If the purpose is only to generate tabular output, then a simpler database management system or a statistical package may be a more efficient solution. It is spatial analysis that requires the logical connections between attribute data and map features, and the operational procedures built on the spatial relationships among map features. These capabilities make GIS a much more powerful and cost-effective tool than automated cartographic packages, statistical packages, or database management systems.

USING GIS FOR SPATIAL ANALYSIS

Spatial analysis in GIS involves three types of operations: attribute query (also known as non-spatial query), spatial query, and the generation of new data sets from the original database (Burrough, 1987). The scope of spatial analysis ranges from a simple query about a spatial phenomenon to complicated combinations of attribute queries, spatial queries, and alterations of the original data. Various spatial analysis methods are available, viz. single-/multi-layer operations (overlay); spatial modeling; geometric modeling; point pattern analysis; network analysis; surface analysis; raster/grid analysis, etc.
Attribute Query: Requires the processing of attribute data exclusive of spatial information. In other words, it is the process of selecting information by asking logical questions.

Example: From a database of a city parcel map where every parcel is listed with a land
use code, a simple attribute query may require the identification of all parcels for a specific
land use type. Such a query can be handled through the table without referencing the
parcel map (Fig. 1). Because no spatial information is required to answer this question,
the query is considered an attribute query. In this example, the entries in the attribute
table that have land use codes identical to the specified type are identified.
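A minimal sketch of such an attribute query, using an invented parcel table (all IDs and land-use codes below are hypothetical):

```python
# Hypothetical parcel attribute table: each record lists a parcel ID,
# a land-use code, and an area. No spatial data is involved.
parcels = [
    {"parcel_id": 101, "land_use": "residential", "area_ha": 0.12},
    {"parcel_id": 102, "land_use": "commercial",  "area_ha": 0.30},
    {"parcel_id": 103, "land_use": "residential", "area_ha": 0.18},
    {"parcel_id": 104, "land_use": "industrial",  "area_ha": 1.05},
]

# "Find all residential parcels" — answered from the table alone, with no
# reference to the parcel map, so this is an attribute query.
residential = [p["parcel_id"] for p in parcels if p["land_use"] == "residential"]
print(residential)  # [101, 103]
```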

Spatial Query:

Involves selecting features based on location or spatial relationships, which requires the
processing of spatial information. For instance, a question may be raised about parcels
within 2 km of an “XYZ” location. In this case, the answer can be obtained either
from a hardcopy map or by using a GIS with the required geographic information. While
basic spatial analysis involves some attribute queries and spatial queries, complicated
analysis typically requires a series of GIS operations including multiple attribute and
spatial queries, alteration of original data, and generation of new data sets. The methods
for structuring and organizing such operations are a major concern in spatial analysis.
Effective spatial analysis is one in which the best available methods are appropriately
employed for different types of attribute queries, spatial queries, and data alteration.
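A minimal sketch of this kind of spatial query, assuming invented parcel centroids with coordinates in kilometres:

```python
import math

# Hypothetical data: an "XYZ" query location and parcel centroids,
# all with coordinates in km. A spatial query needs these positions.
xyz_location = (5.0, 5.0)
parcel_centroids = {
    101: (4.2, 5.5),
    102: (9.0, 1.0),
    103: (6.1, 6.4),
    104: (0.5, 9.5),
}

def within(p, q, radius_km):
    """True if the Euclidean distance between p and q is <= radius_km."""
    return math.dist(p, q) <= radius_km

# Select parcels within 2 km of the query location.
nearby = sorted(pid for pid, c in parcel_centroids.items()
                if within(c, xyz_location, 2.0))
print(nearby)  # [101, 103]
```

Unlike the attribute query above, this selection cannot be answered from the table alone: it depends on where each parcel sits.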

ANALYSIS TECHNIQUES

A simple typology of spatial analysis methods appropriate for different geographic data types, along with some basic guidelines for spatial analysis, is given below.

GIS Usage in Spatial Analysis

GIS can interrogate geographic features and retrieve associated attribute information; this is called identification. It can generate new maps through query and analysis, and it can derive new information through spatial operations. Described here are some analytical procedures applied with a GIS. GIS operational procedures and analytical tasks that are particularly useful for spatial analysis include:

o Single layer operations
o Multi-layer operations/ Topological overlay
o Spatial modeling
o Geometric modeling
o Calculating the distance between geographic features
o Calculating area, length, and perimeter
o Geometric buffers.
o Point pattern analysis
o Network analysis
o Surface analysis
o Raster/ Grid analysis
o Fuzzy Spatial Analysis
o Geostatistical Tools for Spatial Analysis

Single layer operations are procedures that correspond to queries and alterations of data operating on a single data layer.

Example: Creating a buffer zone around all streets of a road map is a single layer
operation as shown in the figure below

Multi layer operations are useful for the manipulation of spatial data on multiple data layers. The figure below depicts the overlay of two input data layers representing a soil map and a land use map, respectively. The overlay of these two layers produces a new map of the different combinations of soil and land use.

Topological overlays: These are multi-layer operations, which allow combining features
from different layers to form a new map and give new information and features that were
not present in the individual maps.
Point pattern analysis: It deals with the examination and evaluation of spatial patterns
and the processes of point features. The below figure shows the distribution of an
endangered species examined in a point pattern analysis.
Network analysis: Designed specifically for line features organized in connected
networks, typically applies to transportation problems and location analysis such as
school bus routing, passenger plotting, walking distance, bus stop optimization, optimum
path finding, etc.

Surface analysis deals with the spatial distribution of surface information in terms of a three-dimensional structure.

The generation of DEM (Digital Elevation Model) from a series of GIS operations is a kind
of surface analysis.

Grid analysis involves the processing of spatial data in a special, regularly spaced form.

The darkest cells in the grid represent the area where a fire is currently underway. A fire
probability model, which incorporates fire behavior in response to environmental
conditions such as wind and topography, delineates areas that are most likely to burn in
the next two stages. Lighter shaded cells represent these areas. Fire probability models
are especially useful to firefighting agencies for developing quick-response, effective
suppression strategies.
Fuzzy spatial analysis is based on fuzzy set theory. Fuzzy set theory is a generalization of Boolean algebra to situations where zones of gradual transition are used to divide classes, instead of conventional crisp boundaries.

VECTOR BASED SPATIAL DATA ANALYSIS

Multi-layer operations allow combining features from different layers to form a new map, giving new information and features that were not present in the individual maps.

Topological overlays:
Selective overlay of polygons, lines, and points enables users to generate a map containing features and attributes of interest, extracted from different themes or layers. Overlay operations can be performed on either raster (grid) or vector maps; in the raster case, a map calculation tool is used to perform the overlay. In topological overlays, polygon features of one layer can be combined with point, line, or polygon features of another layer.

Polygon-in-polygon overlay:
Output is polygon coverage.
Coverages are overlaid two at a time.
There is no limit on the number of coverages to be combined.
A new feature attribute table is created containing information about each newly created feature.

Line-in-polygon overlay:
Output is line coverage with an additional attribute.
No polygon boundaries are copied.
A new arc-node topology is created.

Point-in-polygon overlay:
Output is point coverage with additional attributes.
No new point features are created.
No polygon boundaries are copied.
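The core geometric test behind a point-in-polygon overlay can be sketched with the standard ray-casting algorithm (the polygon and test points below are invented):

```python
# Ray casting: a point is inside a polygon if a ray cast from the point
# crosses the polygon boundary an odd number of times.
def point_in_polygon(x, y, polygon):
    """Return True if (x, y) lies inside the polygon (list of vertices)."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does this edge straddle the horizontal line through y?
        if (y1 > y) != (y2 > y):
            # x-coordinate where the edge crosses that horizontal line
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

soil_zone = [(0, 0), (4, 0), (4, 4), (0, 4)]  # a hypothetical square zone
print(point_in_polygon(2, 2, soil_zone))  # True: point falls inside
print(point_in_polygon(5, 5, soil_zone))  # False: point falls outside
```

In a real overlay, each point that passes this test would then receive the attributes of the enclosing polygon.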

Logical Operators: Overlay analysis manipulates spatial data organized in different layers to create combined spatial features according to logical conditions specified in Boolean algebra, with the help of logical and conditional operators. The logical conditions are specified with operands (data elements) and operators (relationships among data elements). In vector overlay, such operations are performed with the help of logical operators; there is no direct way to perform arithmetic on the geometries themselves.

Common logical operators include AND, OR, XOR (exclusive OR), and NOT. Each operation is characterized by specific logical checks of decision criteria to determine if a condition is true or false. The table below shows the true/false conditions of the most common Boolean operations, where A and B are two operands, one (1) implies a true condition, and zero (0) implies false. Thus, if the A condition is true while the B condition is false, then the combined condition A AND B is false, whereas the combined condition A OR B is true.
AND - common area / intersection / clipping operation
OR - union / addition
NOT - inverter / complement
XOR - symmetric difference (in exactly one of the two)
The most common basic multi-layer operations are the union, intersection, and identity operations. All three merge spatial features on separate data layers to create new features from the original coverages. The main difference among these operations is in the way spatial features are selected for processing.
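The truth table described above can be reproduced directly in code; A and B below stand for "feature present in layer A / layer B" at a given location:

```python
# The four input cases for two operands A and B.
cases = [(True, True), (True, False), (False, True), (False, False)]

# For each case compute A AND B (intersection), A OR B (union),
# and A XOR B (symmetric difference: in exactly one layer).
truth_table = [(A and B, A or B, A != B) for A, B in cases]

for (A, B), (AND, OR, XOR) in zip(cases, truth_table):
    print(int(A), int(B), "->", int(AND), int(OR), int(XOR))
```

Note that A AND B is true only in the first case, matching the example in the text: A true with B false gives A AND B false but A OR B true.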

Overlay operations

Different types of vector overlay operations give flexibility for geographic data
manipulation and analysis. In polygon overlay, features from two map coverages are
geometrically intersected to produce a new set of information. Attributes for these new
features are derived from the attributes of both the original coverages, thereby containing
new spatial and attribute data relationships.

One of the overlay operations is AND (or INTERSECT) in vector layer operations, in
which two coverages are combined. Only those features in the area common to both are
preserved. Feature attributes from both coverages are joined in the output coverage.
RASTER BASED SPATIAL DATA ANALYSIS

Grid Operations used in Map Algebra


Common operations in grid analysis consist of the following functions, which are used in Map Algebra to manipulate grid files. The Map Algebra language is a programming language developed to perform cartographic modeling. Map Algebra performs the following four basic operations:

1. Local functions: that work on every single cell.
2. Focal functions: that process the data of each cell based on the information of a specified neighborhood.
3. Zonal functions: that provide operations that work on each group of cells of identical values.
4. Global functions: that work on a cell based on the data of the entire grid.

Local Functions

Local functions process a grid on a cell-by-cell basis, that is, each cell is processed based
solely on its values, without reference to the values of other cells. In other words, the
output value is a function of the value or values of the cell being processed, regardless of
the values of surrounding cells.

For single layer operations, a typical example is changing the value of each cell by
adding or multiplying a constant. In the following example, the input grid contains values
ranging from 0 to 4. Blank cells represent NODATA cells. A simple local function
multiplies every cell by a constant of 3. The results are shown in the output grid at the
right. When there is no data for a cell, the corresponding cell of the output grid remains
blank.
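The local function described above (multiply every cell by 3, leaving NODATA cells blank) can be sketched as follows, using None for NODATA and an invented input grid:

```python
# Hypothetical input grid; None marks NODATA cells.
NODATA = None
input_grid = [
    [0, 1, NODATA],
    [2, 3, 4],
]

# Local function: each output cell depends only on its own input cell.
output_grid = [
    [cell * 3 if cell is not NODATA else NODATA for cell in row]
    for row in input_grid
]
print(output_grid)  # [[0, 3, None], [6, 9, 12]]
```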

Local functions can also be applied to multiple layers represented by multiple grids of the same geographic area.

Local functions are not limited to arithmetic computations. Trigonometric, exponential, logarithmic, and logical expressions are all acceptable for defining local functions.

Focal Functions
Focal functions process cell data depending on the values of neighboring cells. For instance, computing the sum of a specified neighborhood and assigning the sum to the corresponding cell of the output grid is the “focal sum” function. A 3 x 3 kernel defines the neighborhood. For cells near the edge, where the full kernel is not available, a reduced kernel is used and the sum is computed accordingly. For instance, a 2 x 2 kernel applies to the upper left corner cell: the sum of the four values 2, 0, 2, and 3 yields 7, which becomes the value of this cell in the output grid. The value of the cell in the second row, second column, is the sum of nine elements, 2, 0, 1, 2, 3, 0, 4, 2, and 2, which equals 16.
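The focal-sum computation, including the reduced kernel at the edges, can be sketched as follows; the grid below reproduces the 3 x 3 top-left values from the worked example:

```python
# Top-left 3 x 3 corner of the grid from the worked example.
grid = [
    [2, 0, 1],
    [2, 3, 0],
    [4, 2, 2],
]

def focal_sum(g, r, c):
    """Sum cell (r, c) and its available 3 x 3 neighbors (reduced at edges)."""
    rows, cols = len(g), len(g[0])
    return sum(g[i][j]
               for i in range(max(0, r - 1), min(rows, r + 2))
               for j in range(max(0, c - 1), min(cols, c + 2)))

print(focal_sum(grid, 0, 0))  # 7: the reduced 2 x 2 corner kernel (2+0+2+3)
print(focal_sum(grid, 1, 1))  # 16: the full 3 x 3 kernel
```

A focal mean, maximum, minimum, or range follows the same pattern with a different aggregation over the neighborhood.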

Another focal function is the mean of the specified neighborhood, the “focal mean”
function. In the following example, this function yields the mean of the eight adjacent cells
and the center cell itself. This is the smoothing function to obtain the moving average in
such a way that the value of each cell is changed into the average of the specified
neighborhood.

Other commonly employed focal functions include standard deviation (focal standard
deviation), maximum (focal maximum), minimum (focal minimum), and range (focal
range).

Zonal Functions

Zonal functions process the data of a grid in such a way that cells of the same zone are
analyzed as a group. A zone consists of several cells that may or may not be contiguous.
A typical zonal function requires two grids – a zone grid, which defines the size, shape,
and location of each zone, and a value grid, which is to be processed for analysis. In the
zone grid, cells of the same zone are coded with the same value, while zones are
assigned different zone values.

The objective of this function is to identify the zonal maximum for each zone. In the input
zone grid, there are only three zones with values ranging from 1 to 3. The zone with a
value of 1 has five cells, three at the upper right corner and two at the lower left corner.
The procedure involves finding the maximum value among these cells from the value grid.
Typical zonal functions include zonal mean, zonal standard deviation, zonal sum, zonal

minimum, zonal maximum, zonal range, and zonal variety. Other statistical and geometric
properties may also be derived from additional zonal functions. For instance, the zonal
perimeter function calculates the perimeter of each zone and assigns the returned value
to each cell of the zone in the output grid.
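A zonal maximum can be sketched with a pair of grids as described above (both grids below are invented):

```python
# Zone grid: cells with the same value belong to the same zone,
# whether or not they are contiguous.
zone_grid = [
    [1, 2, 1],
    [2, 2, 3],
    [3, 3, 1],
]
# Value grid: the data to be summarized per zone.
value_grid = [
    [5, 1, 9],
    [4, 7, 2],
    [6, 8, 3],
]

# Walk both grids in parallel and keep the maximum value seen per zone.
zonal_max = {}
for zone_row, value_row in zip(zone_grid, value_grid):
    for zone, value in zip(zone_row, value_row):
        zonal_max[zone] = max(value, zonal_max.get(zone, value))

print(zonal_max)  # {1: 9, 2: 7, 3: 8}
```

In the output grid, every cell of a zone would then be assigned that zone's maximum.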

Global Functions
For global functions, the output value of each cell is a function of the entire grid. As an
example, the Euclidean distance function computes the distance from each cell to the
nearest source cell, where source cells are defined in an input grid. In a square grid, the
distance between two orthogonal neighbors is equal to the size of a cell, or the distance
between the centroid locations of adjacent cells. Likewise, the distance between two
diagonal neighbors is equal to the cell size multiplied by the square root of 2. Distance
between non-adjacent cells can be computed according to their row and column
addresses.

In the figure below, the grid at the left is the source grid, in which two clusters of source cells exist. The source cells labeled 1 form the first cluster, and the cell labeled 2 is a single-cell source. The Euclidean distance at any source cell is always 0. For any other cell, the output value is the distance to its nearest source cell.

In this example, measuring the distance for any cell must consider the entire source grid; therefore this analytical procedure is a global function.
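A brute-force sketch of the Euclidean-distance global function, with an invented source grid and a cell size of 1 unit:

```python
import math

# Source grid: 1 marks source cells, 0 marks all other cells.
source_grid = [
    [1, 0, 0],
    [0, 0, 0],
    [0, 0, 1],
]
sources = [(r, c) for r, row in enumerate(source_grid)
           for c, v in enumerate(row) if v]

# Global function: every output cell depends on the entire source grid,
# since any source cell could be the nearest one.
distance_grid = [
    [min(math.hypot(r - sr, c - sc) for sr, sc in sources)
     for c in range(len(source_grid[0]))]
    for r in range(len(source_grid))
]
print(distance_grid[0][0])  # 0.0: a source cell is at distance zero
print(distance_grid[1][1])  # the center cell is one diagonal step from a source
```

Orthogonal neighbors of a source are at distance 1 (the cell size), and diagonal neighbors at the cell size times the square root of 2, as described above.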

Advantages of using the Raster Format in Spatial Analysis

Efficient processing: Because geographic units are regularly spaced with identical
spatial properties, multiple layer operations can be processed very efficiently.

Numerous existing sources: Grids are the common format for numerous sources of
spatial information including satellite imagery, scanned aerial photos, and digital elevation
models, among others. These data sources have been adopted in many GIS projects and
have become the most common sources of major geographic databases.

Different feature types are organized in the same layer: For instance, the same grid
may consist of point features, line features, and area features, as long as different
features are assigned different values.

Grid Format Disadvantages

Data redundancy: When data elements are organized in a regularly spaced system,
there is a data point at the location of every grid cell, regardless of whether the data
element is needed or not. Although several compression techniques are available, the
advantages of gridded data are lost whenever the gridded data format is altered through
compression. In most cases, the compressed data cannot be directly processed for
analysis. Instead, the compressed raster data must first be decompressed to take
advantage of spatial regularity.

Resolution confusion: Gridded data give an unnatural look and an unrealistic presentation unless the resolution is sufficiently high. Moreover, spatial resolution dictates spatial properties: some spatial statistics derived from a distribution may differ if the spatial resolution varies, a result of the well-known scale problem.
Cell value assignment difficulties: Different methods of cell value assignment may
result in quite different spatial patterns.
UNIT III

INTRODUCTION MAPS AND SPATIAL INFORMATION

A map is a generalized representation of real-world geography. Cartographers, or the technicians who make maps, use symbols to represent real-world features: lines for rivers or roads, points for cities, and polygons for regions or districts. During the map making process, information is usually generalized to make maps clearer and easier to understand. For example, the mapmaker might choose to show only those cities with populations greater than 25,000 rather than cluttering up the map with every settlement with a population count recorded in the census database.

Modern day computer-assisted cartography (map making assisted by computers) is faster and more efficient than traditional cartography. Current geographic information systems (GIS) and computer aided design (CAD) applications allow for the rapid development of many map products and an effective means of communicating results. Before commencing the physical production of a map it is important to understand several principles involved in composing a map. The importance of the quality and suitability of the datasets used for the job cannot be overemphasized. For maps to be effective they need to convey relevant information to the expected audience.

There are two main categories of maps, whether displayed on screen or as hard copy:
o General reference maps.
o Thematic maps.

Most atlases are considered general reference maps and typically contain numerous
features, none of which predominate. Reference maps are generally rich in detail and take
longer to produce than other maps.

Thematic maps are at the other end of the spectrum of cartographic products. They
generally emphasize one or two map features relative to other background items. A map
showing land use zoning is an example of a thematic map. Zoning is highlighted over any
other map feature. Thematic maps are generally easier and faster to produce than a good
general reference map.

Maps can be classified anywhere in the continuum between reference and thematic
maps. For example, a road map may be rich in detail thus resembling a reference map,
but the highways tend to be more predominantly displayed, making it more of a thematic
map.

For all maps, the following elements form the basis of a ‘good’ map:

1. A descriptive title.
2. The map itself, including the symbolization of geographic features.
3. A legend explaining the geographic symbols.
4. The map scale.
5. The map projection.
6. A north arrow (or compass).
7. Copyright, source, and publisher statements.
The map

The map itself is a generalized representation of the real-world geography of an area. During the map making process, information is usually generalized to make maps clearer and easier to understand. For example, the map maker might choose to show only those cities with populations greater than 25,000 rather than cluttering the map with every settlement with a population count recorded in the census database.

Map Legend

The map legend clearly explains the symbols used to represent geographic features on
the map. A legend does not necessarily need to include every symbol used in the map.
For example, most map readers understand that wavy blue lines represent a river. The
major symbols or themes however should always be prominent in the legend.

Map scale

Maps present a view of geography that is smaller than the real world, and as such it is necessary to note the scale of the map on the final map product. Scale can be shown as a unit measure (e.g. 1:50 000) or as a graphic scale bar. Maps at a scale of 1:50 000 or larger are considered large-scale maps, whereas maps at a scale of 1:500 000 or smaller are classed as small-scale maps. Large-scale maps generally show more geographic detail than small-scale maps.
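As a worked example of reading a unit-measure scale (the measured distance below is invented):

```python
# At a scale of 1:50 000, one unit on the map represents 50 000 of the
# same units on the ground.
scale_denominator = 50_000
map_distance_cm = 4.0  # hypothetical distance measured on the map

# Ground distance: map distance times the denominator, converted cm -> km.
ground_distance_km = map_distance_cm * scale_denominator / 100_000
print(ground_distance_km)  # 2.0
```

So 4 cm on a 1:50 000 map corresponds to 2 km on the ground; the same 4 cm on a 1:500 000 map would cover ten times that distance, which is why the larger denominator means a smaller scale.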

Map projection

Map projections allow the cartographer to represent a portion of the 3-D curved surface of
the Earth on a flat (or 2-D) piece of paper. A map projection is either set in the geographic
data when it is created (and should be noted in the metadata), or it can be added or
modified within most spatial information system applications.
North arrow

Most spatial information systems and mapping applications enable a north arrow or
compass to be included on the map document. Depending on the map’s extent and
projection, the geographic north may be directed at the top of a page or slightly to the
right or left of the top.
Copyright, source, and publisher statements

A source statement informs users of where the map data originated and at what scale the
data was captured. A publisher statement identifies who produced the map and when the
current version was printed. A copyright statement identifies any copyright details. As part
of best practice procedures, the copyright information must be included.

Things to consider before making a map

Before making a map, the following points need to be considered:
o The intended audience.
o Data sources.
o Composition tools.
Audience

Most spatial information systems and other mapping applications can produce a wide
variety of map products, from simple letter-sized maps to large wall maps printed on A0
plotters. It is important to consider and understand the intended requirements of the
primary audience when producing a map product.
Data sources

It is widely acknowledged that approximately 90% of the time invested in a typical spatial information project involves capturing or building geographic data. When the time arrives to compile the data and produce a map, the map maker must understand the data. For example: What projection is the data in? At what scale was the data captured? When was the data gathered, and by whom? If the map is saved, where is it on the local government network? This information should be available from the metadata associated with each data theme, which emphasizes the importance of producing and maintaining metadata.

Design Process

Producing a map that is simple, clear, uncomplicated, and pleasing to the eye requires planning, and above all, the map has to convey its information accurately. When a user requests a map to assist in making a decision, the map must reflect what the user wants to see. For example, if the issue is to display council ward boundaries and town planning scheme zones, the first things the user should note when viewing the map are the zones and ward boundaries, and then any other information. Note: when maps are viewed from a cartographic perspective, GIS practitioners must be aware of the basic elements of graphic design, as well as where and how to apply them.

Cartographic design principles


There are four basic principles to consider during the cartographic design process:
o Legibility.
o Visual contrast.
o Borders and neat lines.
o Hierarchical organization of layers.

Legibility (Clarity)
Map symbols must be legible to the reader. For example, lines representing roads need to
be differentiated from lines representing rivers. Circular points symbolizing settlements
must be different from points symbolizing traffic monitoring locations. Map feature labels
should be easily read by the map user within the context that the map is designed for.

Visual contrast
Thematic maps in which map symbols represent data should have good contrast with
other map features so that attention is drawn to contrasting shapes and colors. The layer
or theme that contains the important data should stand out from the background or other
layers. The role of the mapmaker is to ensure the reader’s eye is drawn to the
features that define the map’s purpose and is not confused with other less
important information.
Borders
The use of borders can aid the overall presentation and give a map a professional finish.
Borders can be placed around the whole map and/or around other elements (e.g. the
legend, source, copyright, and publisher statements). Mapmakers should ensure that
borders are aligned and distinguishable.

DATA SOURCES

Data sources for creating new data include remotely sensed data (Satellite images,
Aerial photographs), GPS (Global Positioning System) data, paper maps, and so on.
Remotely sensed data and GPS data are the primary data sources and paper maps are
secondary data sources.

i. Remotely sensed data: Remotely sensed data, such as digital orthophotos and
satellite images, are data acquired by a sensor from a distance. Remotely sensed
data are raster data, but they are useful for vector data input. Digital orthophotos are
digitized aerial photographs that have been differentially rectified or corrected to
remove image displacements.

ii. GPS Data: GPS data include the horizontal location based on the geographic grid
or a coordinate system. It has become a useful tool for spatial data input.

iii. Paper Maps: These include all types of hard copy maps.

Digitizing
There is no single method of entering spatial data into a GIS; rather, there are several
mutually compatible methods that can be used singly or in combination. The choice of
method is governed largely by the application, the available budget, and the type of data
being input. The types of data encountered are existing maps, including field sheets and
hand-drawn documents, aerial photographs, remotely-sensed data from satellite or
airborne scanners, point-sample data (e.g. soil profiles), and data from censuses or other
surveys in which the spatial nature of the data is more implicit than explicit. The actual
method of data input is also dependent on the structure of the database of the
geographical system.

Manual input to a vector system


The source data are envisaged as points, lines, or areas. The coordinates of the data are
obtained from the reference grid already on the map, or from reference to a graticule or
overlaid grid or point data using GPS. They can then be simply typed into a file or input to
a program.

Digitizing is the process of converting data from analog to digital format. A digitizer is a
commonly used device to perform this task: an electronic or electromagnetic device
consisting of a table on which a map or document can be placed.
Different sizes of digitizers (by active area) are as follows:

12" x 18" (A3 size)
24" x 36" (A1 size)
36" x 48" (A0 size)
42" x 60" (A00 size)

Digitizer accuracy is limited by the resolution of the digitizer itself and by the skill of the
operator. The coordinates of a point on the surface of the digitizer are sent to the
computer by a hand held magnetic pen or a simple device called a ‘mouse’ or a ’puck’.
For mapping where considerable accuracy is required, a puck consisting of a coil
embedded in plastic with an accurately located window with cross hairs is used. The
coordinates of a point are digitized by placing the crosshairs over it by pressing the
control button on the puck.

The principal aim of the digitizer is to input quickly and accurately the coordinates
of points and bounding lines.

Digitizing usually begins with a set of control points, which are later used for converting
the digitized map to real world coordinates. To digitize point features each point is clicked
once to record its location. Digitizing lines or polygon features can follow either point
mode or stream mode.

Manual Digitizing

Manually operated digitizers probably provide the most widely used means of converting
pre-existing maps into digital form. Spatial data are recorded in the form of single
coordinate pairs representing points, and series of coordinates representing lines and
area boundaries.

The main components of a manual digitizer are a flat surface, ranging in size from small
tablets about 30 x 30 cm to large tables 120 x 80 cm or more; a hand-held puck or cursor,
used by the operator to indicate positions to be recorded; and a keyboard for entering
alphanumeric data and possibly commands. It is the larger devices that are
generally of the most use in cartography. The exact positioning of the puck is made
possible by a crosshair mounted within a flat glass panel, which may sometimes include a
magnifying lens. Also mounted on the puck are buttons that may be used for controlling
data entry.

The most commonly used technology for digitizing tables is electromagnetic, in which a
table inlaid with a fine grid of wires is associated with the puck which contains a metal
coil. The grid of wires in the table and the coil in the puck act either as transmitter and
receiver, or receiver and transmitter, respectively. If the puck is the transmitter, the
position of the crosshair is found by scanning the x- and y-coordinate grid wires to identify
those nearest the puck. The exact position is then found by interpolating between the
adjacent wires based on the nature of the signals received. Small format, lower resolution
digitizing tablets may sometimes use a stylus with a small coil in its tip, rather than a puck,
as the locating device.
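The final interpolation step can be sketched as simple linear interpolation between the two nearest wires, weighting by the relative signal strengths. This is a purely illustrative model with made-up numbers, not any manufacturer's actual signal processing:

```python
def interpolate_position(wire_a, wire_b, signal_a, signal_b):
    """Estimate the crosshair position between two adjacent grid wires
    from the relative strengths of the signals they pick up."""
    return wire_a + (wire_b - wire_a) * (signal_b / (signal_a + signal_b))

# wires at x = 10.0 mm and x = 11.0 mm; equal signals place the puck midway
print(interpolate_position(10.0, 11.0, 0.5, 0.5))  # → 10.5
```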
In point mode, the digitizing operator specifically selects and encodes those points
deemed "critical" to represent the geomorphology of the line, or significant coordinate
pairs. This requires some knowledge of the line representation that will be needed.
In stream mode, the digitizing device automatically selects points based on a distance or
time parameter, which sometimes generates an unnecessarily high density of coordinate
pairs.
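The distance-based variant of stream mode can be illustrated with a small sketch (hypothetical code, not any vendor's firmware): a coordinate pair from the continuous cursor track is recorded only once the cursor has moved a minimum distance from the last recorded point.

```python
import math

def stream_mode(track, min_dist=2.0):
    """Record a point from a continuous cursor track only once the cursor
    has moved at least min_dist from the last recorded point."""
    recorded = [track[0]]
    for x, y in track[1:]:
        last_x, last_y = recorded[-1]
        if math.hypot(x - last_x, y - last_y) >= min_dist:
            recorded.append((x, y))
    return recorded

track = [(0, 0), (0.5, 0), (1.5, 0), (2.5, 0), (4.0, 0)]
print(stream_mode(track))  # → [(0, 0), (2.5, 0)]
```

A smaller `min_dist` (or a time parameter instead of a distance one) yields a denser stream of coordinate pairs, which is exactly the over-density problem noted above.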

Semi-automatic line-following digitizers

Because manual digitizing is such a time-consuming procedure, considerable effort has
gone into attempts to develop automatic digitizers. This has resulted in the development
of automated line-following devices, which in practice are only semi-automatic in that they
must be positioned manually at the beginning of each linear feature to be digitized. They
may sometimes also need to be guided manually when they encounter junctions between
two or more linear features. The earliest line-following digitizers were based on
mechanical designs, typically involving a cross slide moving over a flat surface. These were
superseded by laser-based technology, exemplified by the Laser Scan Company's
Fasttrak and Lasertrak systems. Here the source map was represented on a transparent
sheet. A laser beam, deflected by mirrors, executed a local raster scanning pattern over a
portion of the line to be digitized and recorded the image on the film negative. This was
automatically analyzed to determine the path of the center of the line. The resulting
coordinates then served the additional purpose of helping determine where the next local
scans should be centered to follow the line.

Automated scanning

Scanners that scan entire documents are designed to create a digital representation of
the source in the form of a 2D array of pixel values. For the majority of cartographic and
GIS applications, this array or raster must then be analyzed to derive a vector
representation of the geographical features and the annotation. In respect of their primary
function, commercial scanners have usually been very effective in generating high-
resolution rasters of binary, greyscale, or color values. It is only recently,
however, that raster-to-vector conversion software has become sufficiently
sophisticated to be able to identify and correctly digitize (in vector format) a significant
proportion of the graphical and textual information found on topographic maps. Even
so, considerable effort must often be expended in validation, feature coding, and
interactive graphical editing of the vectorized data.

Early application of scanner systems was confined to digitizing simple high-quality line
work, such as the color separates of contours of published maps. Current systems use
pattern recognition techniques to distinguish between, and hence provide feature
identification codes for, a variety of point, line, and area symbols. They can also interpret,
to a high degree of success, both printed and hand-written text. It may be appreciated that
the latest scanning systems have enormous potential for dealing with the backlog of
traditional 'analog' maps which must be digitized by many of the organizations wishing to
take advantage of GIS technology. With the recent improvements in the processing of
scanned data, it can be expected that these systems will continue to become more widely
used.

Raster scanners are usually based on either a drum or a flat-bed design. Drum scanners,
such as those manufactured by Optronics, Scitex, and Tektronix, wrap the document on a
drum that is rotated adjacent to a photodetector head that moves incrementally along the
length of the drum. The documents may be monochromatic requiring a single detector, or
full color, in which case several photodetectors may be used simultaneously, each with its
own color filter. As an alternative to the relatively large and expensive scanners, small-
format devices are available that use movable linear or area arrays of charge-coupled
devices (CCDs). The CCDs can be combined with a lens to form a camera which, in the
case of the linear arrays, moves the length of the document in a single scan.

Spatial data already in digital raster form

All satellite sensors and multispectral sensing devices used in aeroplanes for low altitude
surveys use scanners to form an electronic image of the terrain. These electronic images
can be transmitted by radio to ground receiving stations or stored on magnetic media
before being converted to a visual image by a computer system.
The scanned data are retained in the form of pixels. Each pixel has a value representing
the amount of radiation within a given bandwidth received by the scanner from the area of
the earth's surface covered by the pixel. This value can be represented visually by a
greyscale or color. Because each cell can only contain a single value, many scanners are
equipped with sensors that are tuned to a range of carefully chosen wavelengths. For
example, the scanners on the original LANDSAT 1 were tuned to four wavebands to be
able to record differences in water, vegetation, and rock.
QUALITY ISSUES & SOURCES OF ERRORS

The GIS database is a model of the real world, the real world being what the observer
perceives. Therefore, there is an inherent discrepancy between the GIS database and the
real world it represents. This is so because all models are approximations. Different sources
of errors will be encountered in using GIS at different stages e.g. data collection, data
input, data storage, data manipulation, data output, and use of results. However, errors in
data collection, data input, and data manipulation are the main concerns in a GIS
database. Sources of data collection can be broadly classified as primary and secondary
sources. Primary sources of data collection include techniques of geodesy,
photogrammetry, photo-interpretation, digital image processing of remotely sensed data,
and surveying. Secondary sources of data collection are the existing topographical maps
and the existing categorical coverage maps. There are three main groups of factors
governing the errors as given below (P. A. Burrough)

o Obvious sources of error.
o Errors resulting from natural variation or from the original measurements
(qualitative and quantitative), including data entry or output faults, observer
bias, and natural variation.
o Errors arising through processing.

Data Errors in GIS

Data editing and verification are in response to the errors that arise during the encoding of
spatial and non-spatial data. The editing of spatial data is a time consuming, interactive
process that can take as long, if not longer, than the data input process itself.

Several kinds of errors can occur during data input. They can be classified as:

o The incompleteness of the spatial data. This includes missing points, line
segments, and/or polygons.
o Locational placement errors of spatial data. These types of errors usually are the
result of careless digitizing or poor quality of the original data source.
o Distortion of the spatial data. This kind of error is usually caused by base maps
that are not scale-correct over the whole image, e.g. aerial photographs, or from the
material stretch, e.g. paper documents.
o Incorrect linkages between spatial and attribute data. This type of error is
commonly the result of incorrect unique identifiers (labels) being assigned during
manual key in or digitizing. This may involve the assigning of an entirely wrong label
to a feature, or more than one label assigned to a feature.
o Attribute data is wrong or incomplete. Often the attribute data does not match
exactly with the spatial data. This is because they are frequently from independent
sources and often from different periods. Missing data records or too many data
records are the most common problems.

The identification of errors in spatial and attribute data is often difficult. Most spatial errors
become evident during the topological building process. The use of check plots to
determine where spatial errors exist is a common practice. Most topological building
functions in GIS software identify the geographic location of the error and indicate the
nature of the problem. Comprehensive GIS software allows users to graphically walk
through and edit spatial errors. Others merely identify the type and coordinates of the
error. Since this is often a labour intensive and time consuming process, users should
consider the error correction capabilities very important during the evaluation of GIS
software offerings.

Spatial Data Errors

A variety of common data problems occur in converting data into a topological structure.
These stem from the original quality of the source data and the characteristics of the data
capture process. Usually, data is input by digitizing. Digitizing allows a user to trace
spatial data from a hard copy product, e.g. a map, and have it recorded by the computer
software. Most GIS software has utilities to clean the data and build a topologic structure.
If the data is unclean to start with, for whatever reason, the cleaning process can be very
lengthy. Interactive editing of data is a distinct reality in the data input process.

Experience indicates that in the course of any GIS project 60 to 80 % of the time required
to complete the project is involved in the input, cleaning, linking, and verification of the
data. The most common problems that occur in converting data into a topological
structure include:
o Slivers and gaps in the line work;
o Dead ends, e.g. also called dangling arcs, resulting from overshoots and
undershoots in the line work; and
o Bow ties or weird polygons from inappropriate closing of connecting features.

Of course, topological errors only exist with linear and areal features. They become most
evident with polygonal features. Slivers are the most common problem when cleaning
data. Slivers frequently occur when coincident boundaries are digitized separately, e.g.
once each for adjacent forest stands, once for a lake and once for the stand boundary, or
after polygon overlay. Slivers often appear when combining data from different sources,
e.g. forest inventory, soils, and hydrography. It is advisable to digitize data layers
with reference to an existing data layer, e.g. hydrography, rather than attempting to match
data layers later. A proper plan and definition of priorities for inputting data layers will save
many hours of interactive editing and cleaning.

Dead ends usually occur when data has been digitized in a spaghetti mode, or without
snapping to existing nodes. Most GIS software will clean up undershoots and overshoots
based on a user defined tolerance, e.g. distance. The definition of an inappropriate
distance often leads to the formation of bow ties or weird polygons during topological
building. Tolerances that are too large will force arcs to snap one another that should not
be connected. The result is small polygons called bow ties. The definition of a proper
tolerance for cleaning requires an understanding of the scale and accuracy of the data
set.
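The tolerance trade-off can be sketched in a few lines, assuming nodes are simple coordinate tuples (an illustrative function, not any particular GIS package's cleaning routine):

```python
import math

def snap(node, existing_nodes, tolerance):
    """Snap a dangling node to the nearest existing node if one lies
    within the cleaning tolerance; otherwise leave the node unchanged."""
    best, best_dist = node, tolerance
    for candidate in existing_nodes:
        d = math.hypot(node[0] - candidate[0], node[1] - candidate[1])
        if d <= best_dist:
            best, best_dist = candidate, d
    return best

nodes = [(0.0, 0.0), (10.0, 0.0)]
print(snap((9.8, 0.1), nodes, 0.5))  # undershoot pulled onto (10.0, 0.0)
print(snap((5.0, 5.0), nodes, 0.5))  # beyond tolerance: left unchanged
```

Raising the tolerance far enough would eventually snap the two unrelated nodes themselves together, which is how bow ties arise.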

The other problem that commonly occurs when building a topologic data structure is
duplicate lines. These usually occur when data has been digitized or converted from a
CAD system. The lack of topology in these types of drafting systems permits the
inadvertent creation of elements that are exactly duplicated. However, most GIS
packages afford the automatic elimination of duplicate elements during the topological
building process. Accordingly, it may not be a concern with vector based GIS software.
Users should be aware of duplicate elements that retrace themselves, e.g. a three-vertex
line where the first point is also the last. Some GIS packages do not identify these feature
inconsistencies and will build such a feature as a valid polygon, because the
topological definition is mathematically correct; however, it is not geographically correct.
Most GIS software will provide the capability to eliminate bow ties and slivers through a
feature elimination command based on area, e.g. polygons less than 100 square meters.
The ability to define custom topological error scenarios and provide for semi-automated
correction is a desirable capability for GIS software.
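The area-based elimination described above can be sketched with the shoelace formula. This is an illustrative example operating on raw coordinate lists; real GIS packages eliminate features within a topological structure:

```python
def polygon_area(coords):
    """Unsigned polygon area via the shoelace formula."""
    area = 0.0
    n = len(coords)
    for i in range(n):
        x1, y1 = coords[i]
        x2, y2 = coords[(i + 1) % n]
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

def eliminate_slivers(polygons, min_area=100.0):
    """Drop polygons whose area falls below the threshold, mimicking a
    feature-elimination command (e.g. polygons under 100 square meters)."""
    return [p for p in polygons if polygon_area(p) >= min_area]

square = [(0, 0), (20, 0), (20, 20), (0, 20)]    # 400 square meters: kept
sliver = [(0, 0), (30, 0), (30, 0.1), (0, 0.1)]  # 3 square meters: dropped
print(len(eliminate_slivers([square, sliver])))  # → 1
```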

The adjoining figure illustrates some typical errors described above. Can you spot them?
They include undershoots, overshoots, bow ties, and slivers. Most bow ties occur when
inappropriate tolerances are used during the automated cleaning of data that contains
many overshoots. This particular set of spatial data is a prime candidate for numerous
bow tie polygons.
Attribute Data Errors

The identification of attribute data errors is usually not as simple as spatial errors. This is
especially true if these errors are attributed to the quality or reliability of the data. Errors
as such usually do not surface until later on in the GIS processing. Solutions to these
types of problems are much more complex and often do not exist entirely. It is much more
difficult to spot errors in attribute data when the values are syntactically good, but
incorrect.

Simple errors of linkage, e.g. missing or duplicate records, become evident during the
linking operation between spatial and attribute data. Again, most GIS software contains
functions that check for and identify problems of linkage during attempted operations. This
is also an area of consideration when evaluating GIS software.

Polygon errors

SPATIAL INTERPOLATION

What is Interpolation?

Assume we are dealing with a variable that has a meaningful value at every point within a
region (e.g., temperature, elevation, the concentration of some mineral). Then, given the
values of that variable at a set of sample points, we can use an interpolation method to
predict its value at every other point. For any unknown point, we take some form of
weighted average of the values at surrounding points to predict the value there. In other
words, we create a continuous surface from a set of points. As an example used
throughout this presentation, imagine we have data on the concentration of gold in
western Pennsylvania at a set of 200 sample locations.
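A simple weighted-average interpolator of this kind is inverse distance weighting (IDW), in which each sample is weighted by the inverse of its distance to the unknown point raised to a power. The sketch below is illustrative (made-up sample values), not the implementation of any particular package:

```python
import math

def idw(known, x, y, power=2):
    """Inverse distance weighting: predict the value at (x, y) as a
    weighted average of (xi, yi, value) samples, weighting each sample
    by 1 / distance**power."""
    num = den = 0.0
    for xi, yi, value in known:
        d = math.hypot(x - xi, y - yi)
        if d == 0.0:
            return value          # exactly on a sample point
        w = 1.0 / d ** power      # nearer samples get larger weights
        num += w * value
        den += w
    return num / den

samples = [(0, 0, 10.0), (10, 0, 20.0), (0, 10, 30.0)]
print(idw(samples, 5, 5))  # all three samples equidistant, so ≈ their mean
```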
Appropriateness of Interpolation
o Interpolation should not be used when there isn't a meaningful value of the
variable at every point in space (within the region of interest)
o That is, when points represent merely the presence of events (e.g., crime),
people, or some physical phenomenon (e.g., volcanoes, buildings), interpolation
does not make sense
o Whereas interpolation tries to predict the value of your variable of interest
at each point, density analysis (available, for instance, in ArcGIS's Spatial
Analyst) "takes known quantities of some phenomena and spreads it across the
landscape based on the quantity that is measured at each location and the
spatial relationship of the locations of the measured quantities"

CURRENT AND POTENTIAL USES OF GIS IN AGRICULTURAL PLANNING

GIS is playing an increasing role in agriculture production throughout the world by helping
farmers increase production, access information faster, reduce costs, and manage their
land more efficiently. Some of the challenges faced in the Agricultural sector are:

o Pre & Post harvest crop losses


o Heavy livestock losses to diseases and pests
o Low & declining soil fertility
o Inadequate disaster preparedness & response
o Inadequate storage, processing, water, and infrastructure to support the
agricultural sector
o Insufficient market knowledge

GIS technology has become a vital tool for crop management. Geographic data about soil
conditions help farmers to be more efficient in segmenting arable land to apply differential
rates of fertilizer, and forecasting to determine when, where, and what to plant in what is
known as precision agriculture. Satellite and aerial imagery are used to analyze the
existing conditions of the land and soil samples taken from the fields are used to
create a more precise understanding of the condition of a farm. By understanding the
condition of the land on a micro scale, farmers and those in the agriculture field can better
manage fertilizer and water application, resulting in reduced costs and better crop yields.

Using GIS one can focus on the following points in agriculture

o Production
o Landscape and its effect on the crop.
o Risk Assessment
o Pest Control
o Agricultural Monitoring
o Agribusiness
o Precision Farming
o Water Resource Management
o Soil Erosion
o Results related to variance in crop yield, leaching potential, erosion risk and
economics on a farm field scale, and many more

The application of GIS allows for optimizing the use of resources on a site-specific basis
thereby contributing to minimizing detrimental environmental impacts.
RECENT TRENDS IN GIS

GIS technology has been making rapid strides, keeping pace with technological progress.
Since its advent, it has taken many forms, from a mapping platform to an analytics tool to
modeling and decision making. Geographical Information Systems (GIS) have evolved
largely because of advancements in many parallel growing technologies that started
being called Geo-ICT technologies. Very recently, a string of new evolutionary terms,
from enterprise GIS to geography networks, interoperability, distributed computing, web
services, mobile GIS, grid computing, and mash-ups, has captured the geotechnology
community's imagination as well as its attention.

A few technological trends in the field are discussed in this lecture material:

1. Distributed GIS
2. Location based services
3. GIS & Cloud computing
4. Volunteered in Geographic Information/Crowd Sourcing
5. Coupling process models with spatial models
6. Spatio-temporal modeling
7. Agent-based modelling
8. Geoinformation Management
9. Expert Systems/Spatial DSS
10. Geographic Uncertainty modeling and error propagation analysis
11. Computational Geometry
12. Human Computer Interaction
13. Computer Vision Applications in GIS
14. Distributed and Parallel algorithms for GIS
15. GPU and Novel Hardware Solutions for GIS
16. Image and Video Understanding
17. Location Privacy, Data Sharing, and security
18. Ontology and Semantics
19. Spatio-Temporal Data Analysis & Management
20. Spatial Data Mining and Knowledge Discovery
21. Uncertainty Modelling of Spatial Data
22. Spatial Data Warehousing, OLAP, and Decision Support
23. Spatial Modeling and Reasoning
24. Spatial Query Processing and Optimization
25. Spatio-Temporal Sensor Networks
26. Standardization and Interoperability for GIS
27. Traffic Telematics & Transportation
28. Urban and Environmental Planning
29. Visual Languages and Querying
30. Web and Real Time Applications
SOFTWARE COMPONENTS USED IN GIS

GIS software provides the functions and tools needed to store, analyze and display
geographic information. Key software components are

o Tools for the input and manipulation of geographic information.


o A database management system (DBMS).
o Tools that support geographic query, analysis, and visualization.
o A graphical user interface (GUI) for easy access to tools.

GIS technology is of limited value without trained technical experts who manage the
system and develop plans for applying it to real-world problems. GIS users range from
technical specialists who design and maintain the system to those who use it to help them
perform their everyday work.

Flowchart of the components of GIS

Maps Images Statistical report


Cartographic display system


This system allows users to select and extract a particular database or map output on the
screen or printer etc.
Map digitizing system
This system enables users to convert existing paper maps to digital form, further aiding
in developing a database.
Database management system

This system can analyze the attribute data. The term "attribute" refers to qualities or
characteristics of places with spatial and location information.
o Software that provides cartographic display, map digitizing, and database query
capabilities is often referred to as an automated mapping/facilities management
(AM/FM) system.
Geographic analysis system
This component can analyze the true spatial characteristics. The term "spatial" refers to
any two or three-dimensional data whether or not it relates directly to the surface of the
earth.
Image processing system
It helps to analyze and classify the remotely sensed images (digital images) according to
various classification techniques, which could be interpreted with the help of training data.

Statistical analysis system


This helps in the statistical analysis of spatial and temporal data, which is required in
scenario analyses.

DATA ACQUISITION:

Data acquisition is the process of sampling signals that measure real world physical
conditions and converting the resulting samples into digital numeric values that can be
manipulated by a computer.

The components of data acquisition systems include:

 Sensors, which convert physical parameters to electrical signals.
 Signal conditioning circuitry, which converts sensor signals into a form that can be
digitized.
 Analog-to-digital converters, which convert the conditioned sensor signals to digital values.
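The last step can be sketched as follows: an ideal n-bit analog-to-digital converter maps a voltage in [0, v_ref] to one of 2^n integer codes. This is an idealized model with illustrative parameter names; real ADCs also add noise and nonlinearity.

```python
def adc(voltage, v_ref=5.0, bits=10):
    """Quantize an analog voltage in [0, v_ref] into an n-bit digital
    code, the job of the analog-to-digital converter."""
    levels = 2 ** bits
    code = int(voltage / v_ref * levels)
    return max(0, min(levels - 1, code))  # clamp to the valid code range

print(adc(0.0))  # → 0
print(adc(2.5))  # → 512 (half of the 5 V reference on a 10-bit converter)
print(adc(5.0))  # → 1023 (full scale clamps to the top code)
```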

The remote sensing data acquisition process involves the following seven elements:

1. Energy source - the first requirement for remote sensing is an energy source that
provides electromagnetic energy.
2. Radiation and the atmosphere – as the energy travels from its source to the target, it
will come in contact with and interact with the atmosphere it passes through. This
interaction may take place a second time (active remote sensing) as the energy travels
from the target to the sensor.
3. Interaction with the target - once the energy makes its way to the target through the
atmosphere, it interacts with the target depending on the properties of both the target and
the radiation.
4. Recording of energy by the sensor - after the energy has been reflected by, or emitted
from the target, we require a sensor (remote - not in contact with the target) to detect and
record the electromagnetic radiation.
5. Transmission, reception, and processing - the energy recorded by the sensor has to
be transmitted, often in electronic form, to a receiving and processing station where the
data are processed into an image (hardcopy and/or digital).
6. Interpretation and analysis - the processed image is interpreted, visually and/or digitally,
to extract information about the target.
7. Application - the final element of the remote sensing process is achieved when we
apply the information we have been able to extract from the imagery about the target
to better understand it, reveal some new information, or assist in solving a particular
problem.

Visual Image Interpretation

Virtually all people live with a visual perception of his/her environment. This experience is
also used to interpret images (in 2D) and 3-dimensional structures and specimens.

The visual interpretation of satellite images is a complex process. It includes the meaning
of the image content but also goes beyond what can be seen on the image to recognize
spatial and landscape patterns. Interpreters rely on the visual elements of tone, shape,
size, pattern, texture, shadow, and association. Identifying targets in remotely sensed
images based on these visual elements allows us to interpret and analyze them further.
This process can be roughly divided into two levels:

1. The recognition of objects such as streets, fields, rivers, etc. The quality of
recognition depends on the expertise in image interpretation and visual perception.
2. A true interpretation can be reached through conclusions drawn from previously
recognized objects about situations, recovery, etc. Subject-specific knowledge and
expertise are crucial.

Digital Image Processing:

 Digital Image Processing (DIP) is a technique that involves the manipulation of a
digital image to extract information.
 When satellite images are being manipulated in such a manner, this technique is also
referred to as satellite image processing.
The whole process of Digital Image Processing can be classified into three parts.

Digital Image Pre-Processing:

The raw satellite images may contain a variety of errors in their geometry and radiometry
(cosmetic appearance). Hence it is important to rectify these images before starting their
interpretation.

This typically involves the initial processing of raw satellite images for correcting
geometric distortions, radiometric corrections & calibration, and noise removal from the
data.

This process is also referred to as image rectification. It is called pre-processing because
it is done before the enhancement, manipulation, interpretation, and classification of
satellite images.

Digital Image Enhancement

Before starting to visually interpret satellite images, some image enhancement
techniques are applied to improve and enhance the features of interest. This helps in
better interpretation of the images and in segregating one feature type from another.
Image enhancement involves the use of some statistical and image manipulation
functions provided in the image processing software. These include contrast
enhancement, histogram equalization, density slicing, spatial filtering, image ratio (like
RVI, NDVI, TVI, etc.), principal components analysis (PCA), color transformations, image
fusion, image stacking, etc.
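Two of the enhancement functions listed above can be sketched in a few lines: the NDVI band ratio and a min-max contrast stretch. The arrays are small illustrative reflectance values, and `ndvi` and `linear_stretch` are hypothetical helper names, not functions from any particular software package.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).
    Values near +1 indicate dense green vegetation; near 0, bare surfaces."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + 1e-10)  # epsilon avoids divide-by-zero

def linear_stretch(band, out_min=0.0, out_max=255.0):
    """Min-max contrast stretch: rescale a band to the full display range."""
    band = np.asarray(band, dtype=float)
    lo, hi = band.min(), band.max()
    return (band - lo) / (hi - lo) * (out_max - out_min) + out_min

nir = np.array([[200.0, 60.0], [180.0, 40.0]])  # near-infrared band
red = np.array([[50.0, 55.0], [60.0, 38.0]])    # red band
print(ndvi(nir, red))           # high values where NIR >> Red (vegetation)
print(linear_stretch(nir))      # same band stretched to 0-255 for display
```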

Digital Image Classification:

It is a software-based image classification technique that involves automated
information extraction and the subsequent classification of multispectral satellite
images. Statistical decision rules group pixels into different feature classes.
Digital classification techniques are less time consuming than visual techniques.
Digital satellite images can be classified using a supervised, unsupervised, or hybrid
type of image classification (these will be discussed in detail in a separate chapter).
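As a rough sketch of how a statistical decision rule can group pixels into feature classes, the hypothetical `kmeans_classify` function below implements a minimal k-means clustering, one common basis for unsupervised classification (assigning each pixel's spectral vector to its nearest cluster mean).

```python
import numpy as np

def kmeans_classify(pixels, k=2, iters=10):
    """Minimal unsupervised classification: k-means clustering of pixel
    spectral vectors by nearest cluster mean (a simple decision rule)."""
    pixels = np.asarray(pixels, dtype=float)
    # deterministic initialisation: evenly spaced pixels as starting means
    centers = pixels[np.linspace(0, len(pixels) - 1, k).astype(int)].copy()
    for _ in range(iters):
        # distance of every pixel to every cluster centre
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):  # recompute each cluster mean
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    return labels

# Two spectrally distinct groups, e.g. water-like vs. vegetation-like pixels
pix = np.array([[0.10, 0.10], [0.12, 0.09], [0.80, 0.90], [0.85, 0.88]])
labels = kmeans_classify(pix, k=2)
print(labels)  # the first two pixels share one class, the last two another
```

Operational software adds many refinements (better initialisation, convergence tests, class merging), but the grouping principle is the same.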

Geometric distortions manifest themselves as errors in the position of a pixel relative
to other pixels in the scene and in its absolute position within some defined map
projection. If left uncorrected, these geometric distortions render any data extracted
from the image useless. This is particularly so if the information is to be compared to
other data sets, whether from another image or from a GIS data set. Distortions occur
for many reasons.

For instance, distortions occur due to changes in platform attitude (roll, pitch, and
yaw), altitude, earth rotation, earth curvature, panoramic distortion, and detector
delay. Most of these distortions can be modeled mathematically and are removed before
you buy an image. Changes in attitude, however, can be difficult to account for
mathematically, so a procedure called image rectification is performed. Satellite
systems are geometrically quite stable, and geometric rectification is a simple
procedure based on a mapping transformation relating real ground coordinates, say in
easting and northing, to image line and pixel coordinates.

Rectification is a process of geometrically correcting an image so that it can be
represented on a planar surface, conform to other images, or conform to a map (Fig. 3).
That is, it is the process by which the geometry of an image is made planimetric. It is
necessary when accurate area, distance, and direction measurements are required to be
made from the imagery. It is achieved by transforming the data from one grid system
into another grid system using a geometric transformation.

Rectification is not necessary if there is no distortion in the image. For example, if
an image file is produced by scanning or digitizing a paper map that is in the desired
projection system, then that image is already planar and does not require rectification
unless there is some skew or rotation of the image. Scanning and digitizing produce
images that are planar but do not contain any map coordinate information. These images
need only to be geo-referenced, which is a much simpler process than rectification. In
many cases, the image header can simply be updated with new map coordinate information.
This involves redefining the map coordinate of the upper left corner of the image and
the cell size (the area represented by each pixel).
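Updating the header in this way can be sketched as follows: once the upper-left map coordinate and the cell size are known, the ground position of any pixel follows directly. The coordinate and cell-size values below are purely illustrative.

```python
# Minimal geo-referencing sketch. The header stores only the map coordinate
# of the upper-left corner and the cell size; everything else is arithmetic.
# The values below are hypothetical, not from any real scene.
UL_EASTING = 500000.0    # upper-left corner easting (m)
UL_NORTHING = 4650000.0  # upper-left corner northing (m)
CELL_SIZE = 30.0         # ground distance represented by each pixel (m)

def pixel_to_map(row, col):
    """Map coordinates of a pixel centre, derived from the image header."""
    easting = UL_EASTING + (col + 0.5) * CELL_SIZE
    # northing decreases as the row index increases down the image
    northing = UL_NORTHING - (row + 0.5) * CELL_SIZE
    return easting, northing

print(pixel_to_map(0, 0))  # centre of the upper-left pixel
print(pixel_to_map(1, 2))  # one row down, two columns across
```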

Ground Control Points (GCPs) are specific pixels in the input image for which the
output map coordinates are known. By using more points than are strictly necessary to
solve the transformation equations, a least-squares solution may be found that
minimizes the sum of the squares of the errors. Care should be exercised when selecting
ground control points, as their number, quality, and distribution affect the result of
the rectification.
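The least-squares fit of a mapping transformation from GCPs can be sketched with a first-order (affine) model, the simplest transformation commonly used in rectification. The four GCPs below are fabricated for illustration; in practice they would be measured from the image and a reference map.

```python
import numpy as np

def fit_affine(image_xy, map_xy):
    """Least-squares fit of a first-order (affine) mapping transformation
    from image (col, row) coordinates to map (easting, northing). Using
    more GCPs than the minimum of 3 lets lstsq minimize the squared errors."""
    image_xy = np.asarray(image_xy, dtype=float)
    map_xy = np.asarray(map_xy, dtype=float)
    A = np.column_stack([image_xy, np.ones(len(image_xy))])  # [x, y, 1] rows
    coeffs, *_ = np.linalg.lstsq(A, map_xy, rcond=None)
    return coeffs  # 3x2 array: one column each for easting and northing

# Hypothetical GCPs whose true mapping is E = 10*col + 1000, N = -10*row + 2000
img = [(0, 0), (100, 0), (0, 100), (100, 100)]
gnd = [(1000, 2000), (2000, 2000), (1000, 1000), (2000, 1000)]
c = fit_affine(img, gnd)
pred = np.array([50, 50, 1]) @ c  # predicted map coordinates of pixel (50, 50)
print(pred)
```

Higher-order polynomial transformations follow the same pattern, with more columns in the design matrix and correspondingly more GCPs required.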

Once the mapping transformation has been determined, a procedure called resampling is
employed. Resampling matches the coordinates of image pixels to their real-world
coordinates and writes a new image pixel by pixel. Since the grid of pixels in the
source image rarely matches the grid of the reference image, the pixels are resampled
so that new data file values for the output file can be calculated.
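A minimal sketch of resampling, assuming the simplest scheme (nearest neighbour, which preserves the original data file values rather than interpolating new ones):

```python
import numpy as np

def resample_nearest(src, scale):
    """Nearest-neighbour resampling: each output pixel takes the value of
    the closest source pixel, so no new data file values are invented."""
    rows, cols = src.shape
    out_rows, out_cols = int(rows * scale), int(cols * scale)
    # map each output index back to its nearest source index
    r_idx = np.minimum((np.arange(out_rows) / scale).astype(int), rows - 1)
    c_idx = np.minimum((np.arange(out_cols) / scale).astype(int), cols - 1)
    return src[np.ix_(r_idx, c_idx)]

src = np.array([[1, 2], [3, 4]])
out = resample_nearest(src, 2)  # 2x upsample -> 4x4 grid
print(out)
```

Bilinear and cubic convolution resampling give smoother output but alter the pixel values, which matters if the image will later be classified.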
Applications of Remote Sensing in Agriculture:

Major Highlights

o Establishment of the Mahalanobis National Crop Forecast Centre in the Department of
Agriculture & Cooperation, Ministry of Agriculture, Government of India, for
operational use of space technology to provide in-season crop forecasts and
assessment of drought situations
o Crop production forecasting for 8 major crops
o National agricultural drought assessment and monitoring
o Country-wide agricultural land-use mapping
o Horticultural crop inventory
o Agro-meteorological parameter retrieval and inputs to agro-advisory services
o Methane emission inventory & carbon accounting

Major Benefits

o Agricultural policy decisions
o Declaration of drought and shortfall in food grains, and contingency planning
o Support to crop damage-assessment
o Advanced crop planning and diversification
o Timely tailoring of agronomic practices
o Demand-based irrigation scheduling

Operational Products / Services

o Acreage and production estimates of 8 major crops (rice [Kharif & Rabi], wheat,
mustard, jute, cotton, sugarcane, potato, sorghum) at the district level
o Periodic agricultural drought assessment in 13 States
o Annual agricultural land-use mapping for crop intensification
o Horticultural crop inventory
o Cropping system analysis
o Satellite-based bio-geophysical products (vegetation index, rainfall, solar radiation) for
agricultural crop monitoring and agromet-advisory services
o Capacity building in remote sensing & GIS applications for sustainable agriculture

* * * * * * *
