
INSTITUTE OF PHYSICS PUBLISHING REPORTS ON PROGRESS IN PHYSICS

Rep. Prog. Phys. 65 (2002) 1–25 PII: S0034-4885(02)12698-4

The greenhouse effect and climate change revisited


F W Taylor

Department of Physics, Oxford University, Clarendon Laboratory, Oxford OX1 3PU, UK

Received 8 November 2001


Published 12 December 2001
Online at stacks.iop.org/RoPP/65/1

Abstract

On any planet with an atmosphere, the surface is warmed not only by the Sun directly but also
by downward-propagating infrared radiation emitted by the atmosphere. On the Earth, this
phenomenon, known as the greenhouse effect, keeps the mean surface temperature some 33 K
warmer than it would otherwise be and is therefore essential to life.
The radiative processes which are responsible for the greenhouse effect involve mainly
minor atmospheric constituents, the amounts of which can change either naturally or as a
by-product of human activities. The growth due to the latter is definitely tending to force
a general global surface warming, although because of problems in modelling complicated
feedback processes, for example those involving water vapour, ozone, clouds and the oceans,
the precise rates of change and the local patterns which should be expected are not simple to
predict.
This article updates an earlier review which discussed the physical processes involved in
the greenhouse effect and theoretical and experimental work directed towards an understanding
of the effect on the climate of recent and expected changes in atmospheric composition. In the
last ten years, progress in data acquisition and analysis, and in numerical climate modelling, has
tended to confirm earlier predictions of the likelihood of significant rises in the mean surface
temperature of the planet in the next 50–100 years, although this remains controversial.
(Some figures in this article are in colour only in the electronic version)

0034-4885/02/010001+25$90.00 © 2002 IOP Publishing Ltd Printed in the UK



Contents

1. Introduction
   1.1. Plan of the paper
   1.2. Climate change and global warming
   1.3. Factors controlling climate
   1.4. Climate and climate change on the terrestrial planets
2. Atmospheric composition and structure
   2.1. Origin of the atmosphere
   2.2. Vertical structure
   2.3. Atmospheric composition
   2.4. Anthropogenic emissions
   2.5. Feedback cycles affecting atmospheric composition
   2.6. Atmosphere–ocean coupling
   2.7. Clouds and aerosols
   2.8. The biosphere
3. Models of climate change
   3.1. Simple models
   3.2. Complex models
4. Latest model predictions
5. Experimental studies
   5.1. Ground-based programmes
   5.2. Satellite measurements
6. Conclusion
References

1. Introduction

1.1. Plan of the paper


Taylor (1991) provided an introduction for physicists interested in understanding the basis
for current concern that the ‘greenhouse effect’ may be changing the climate. The present
revision follows the same outline, omitting all but a brief summary of the basics, which
remain unchanged. These include the origin and basic structure of the atmosphere; its
composition and photochemistry; natural biogenic and anthropogenic contributions to the
budgets of atmospheric minor constituents; feedback processes, especially those involving
hydrodynamics, clouds and aerosols; and the oceans as reservoirs and transporters of heat,
water vapour and carbon dioxide for the atmosphere.
The most important progress in the last decade has been in the acquisition of new data,
better use of existing data due to improved computers and new techniques, and the further
development of models which parametrize the processes involved and their interactions and
which can be used to make forecasts. The prognoses provided by even the most sophisticated
current model studies remain controversial, although substantially less so than before. This
is partly because of unavoidable uncertainties in the future changes in industrial emissions
and other such factors which provide input to the models. Of more interest to physicists are
the fundamental limitations in the models themselves, and the extent to which we can design
experiments to understand the key processes which they must represent.
There are also more elusive questions about how well we can ever expect to predict
the future evolution of a complex and essentially chaotic system. In considering this aspect in
particular, it is useful to consider the role of the greenhouse effect on the other terrestrial planets,
where the same physics acting within different boundary conditions produces illuminating and
often tantalizing results.

1.2. Climate change and global warming


Climate change can refer to many things, but the effect receiving the greatest attention at present
is the phenomenon generally referred to as global warming. The problem is to understand the
physical processes that maintain the mean (globally and seasonally averaged) temperature of
the surface of the Earth, and how this may change in response to factors such as expected
future changes in atmospheric composition due to pollution and other human activities. We
may then go on to attempt the more difficult task of tracing the broad timescale on which
change is predicted to occur, and global spatial variations in these changes.
This is climate forecasting, which differs from weather forecasting principally in
addressing longer, primarily decadal, timescales, and in being inevitably restricted to low
resolution in space and time. It remains a key question to ask on what ranges and scales such
forecasts are reliable and to what degree. They depend on advanced numerical models and
sophisticated observing systems, which are constantly being improved, as will be illustrated
by examples. We shall review the latest climate predictions, their reliability and possible ways
to improve their physical basis further.
It is known of course that the Earth’s climate has shown large variations in the past. We
have the evidence of the ice ages, for example, and more modern measurements show changes
of smaller amplitude on a variety of timescales. These fluctuations, usually discussed in terms
of temperature as the most important and best-measured parameter, are of great interest because
they may express the chaotic behaviour of the Earth-atmosphere–ocean-cryosphere system,
or they may be the response to external forcing such as changes in the energy arriving from
the Sun, or a combination of both. The system is so complicated and the historical record so
incomplete that, at the present time, only modest progress has been made in understanding the
mechanisms underlying natural variability and no physical model exists which can explain from
first principles all of the fluctuations which appear in any kind of climate record. Reasonable
success has now been obtained in accounting approximately for the smoothed changes over
the last hundred years, as we shall see below.
With ‘hindcasting’ as calibration, the overwhelming task facing modellers remains
forecasting possible anthropogenic climate modifications, that is, quasi-permanent changes
driven by human activities. It is generally, although still not universally, recognized that
observed changes are the consequence of present-day levels of atmospheric pollution, and that
further changes are inevitable and likely to lead to serious environmental problems within the
lifetimes of people alive now. Options for avoidance strategies are limited and costly, putting a
high premium on the credibility of any prediction and leading to a massive effort to understand
the geosystem better, with more and better measurements and realistic, physics-based computer
models two of the top priorities.

1.3. Factors controlling climate


The Earth’s annual mean surface temperature is determined by the balance between the
incoming energy from the Sun, mostly at wavelengths near that of visible light, and outgoing
infrared energy from the surface and atmosphere. From a knowledge of the effective (equivalent
blackbody) temperature of the surface of the Sun (about 6000 K), the application of the Stefan–
Boltzmann law and some simple geometry provides a value for the effective temperature of the
Earth of approximately 250 K, or −23 °C. This, the average equivalent blackbody temperature
at which the Earth radiates to space, is some 33 K less than the mean surface temperature. The
reason for the difference is of course the atmosphere, which lies over the surface of the Earth
like a blanket, with higher temperatures on the inside than the outside.
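The energy-balance arithmetic in this paragraph is easy to reproduce. A minimal sketch, using standard textbook values for the solar constant and Bond albedo (neither figure is quoted in the article):

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1368.0       # solar constant at 1 AU, W m^-2 (textbook value)
ALBEDO = 0.30     # Earth's Bond albedo (approximate)

def effective_temperature(solar_constant, albedo):
    """Equivalent blackbody temperature of a planet.

    The absorbed flux, averaged over the sphere, is S(1 - A)/4; the
    factor 4 is the ratio of the planet's surface area to the
    cross-section it presents to the Sun. Equating this to sigma*T^4
    and solving for T gives the effective temperature.
    """
    absorbed = solar_constant * (1.0 - albedo) / 4.0
    return (absorbed / SIGMA) ** 0.25

T_eff = effective_temperature(S0, ALBEDO)
print(f"Effective temperature: {T_eff:.0f} K ({T_eff - 273.15:.0f} degC)")
```

With these inputs the result is about 255 K, consistent with the 'approximately 250 K' quoted above; the gap between this and the roughly 288 K mean surface temperature is the 33 K natural greenhouse warming.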
The energy from the Sun reaches the surface of the planet because solar radiation consists
mostly of photons at visible wavelengths and the atmospheric blanket is nearly transparent
at those wavelengths. The balancing, cooling radiation, however, is in the infrared part
of the spectrum, because the source is so much cooler, a few hundred degrees instead of a
few thousand. The opacity at these longer wavelengths is controlled by a cocktail of minor
constituents in which H2O and CO2 dominate but CH4, N2O, halocarbons and other trace
gases, as well as clouds and aerosols, are also significant.
The contribution of each species depends on how much of it is present, and on the number,
strength and position of absorption bands in its infrared spectrum. The spectral bands saturate,
so that adding more of a gas has a relatively small effect through its strong bands, which
are already absorbing nearly all the radiation available to them. This is the case for the strong fundamental band of CO2 near
15 µm, for example. Thus adding a quantity of a gas present initially only in extremely small
amounts (like the halocarbons) will tend to have a larger effect than adding the same amount of
a relatively abundant minor constituent like carbon dioxide, provided of course that the former
has spectral bands which are not obliterated by overlapping bands of strongly absorbing gases
like CO2 . The most common halocarbons have bands in the 8–14 µm atmospheric ‘window’, an
otherwise largely transparent region, and hence are relatively efficient greenhouse enhancers if
their abundance increases. This efficiency can be quantified by defining the ‘radiative forcing’
of a gas as the increase in the energy flux at the surface produced by a given change in the
amount of the gas. Estimates of this quantity for various gases for the period from 1750 to 2000
are shown in figure 1. Note especially the estimates of ‘level of scientific understanding’ along
the abscissa, and that the factors labelled ‘very low’ in this regard (principally those due to
aerosols, including clouds) could collectively represent a cooling effect capable of cancelling
the (better understood) warming effects of greenhouse gases like CO2 and CH4. This is the
global warming controversy in a nutshell.

Figure 1. The relative contributions of the principal greenhouse gases and other factors to the
radiative forcing of global warming, expressed as the difference between the years 2000 and 1750
(IPCC 2001).
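For CO2, the radiative forcing defined above is often estimated with the simplified logarithmic fit ΔF = 5.35 ln(C/C0) W m−2 used in the IPCC (2001) assessment. The sketch below applies it, with 280 ppm assumed as the pre-industrial reference; the logarithmic form is precisely the band-saturation effect just described:

```python
import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Simplified CO2 radiative forcing in W m^-2 relative to a
    reference concentration c0 (pre-industrial, ~280 ppm).

    The logarithmic form reflects band saturation: each doubling of
    CO2 adds roughly the same forcing, about 3.7 W m^-2.
    """
    return 5.35 * math.log(c_ppm / c0_ppm)

# Forcing for roughly the 25% rise since the industrial revolution
print(f"280 -> 350 ppm: {co2_forcing(350.0):.2f} W m^-2")
print(f"Doubling:       {co2_forcing(560.0):.2f} W m^-2")
```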
CO2 is the most important pollutant for greenhouse purposes, not because of its efficiency
(the strong CO2 bands are already saturated, as discussed above) but because there is so much of
it being produced, and because its second-strongest spectral band, the bending fundamental at
15 µm, falls right at the peak of the Planck function for typical terrestrial temperatures. Figure 2
shows how the three principal greenhouse gases have increased over the last 1000 years. CO2
has increased by 25% since the industrial revolution, and continues to rise at a rate of 0.5% p.a.;
methane concentrations have doubled in the same period and are increasing at 1% p.a.
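These growth rates compound, so a short calculation shows what they imply over a century; the 360 ppm starting concentration below is an assumed round number, not a figure from the text:

```python
import math

def project(c0, rate, years):
    """Concentration after `years` of steady fractional growth `rate` p.a."""
    return c0 * (1.0 + rate) ** years

c0 = 360.0  # assumed present-day CO2 concentration, ppm (illustrative)
print(f"After 100 years at 0.5% p.a.: {project(c0, 0.005, 100):.0f} ppm")

# Doubling time for steady exponential growth: ln(2)/ln(1 + r)
t_double = math.log(2.0) / math.log(1.005)
print(f"Doubling time at 0.5% p.a.: {t_double:.0f} years")
```

At 0.5% p.a. the concentration takes roughly 140 years to double, which is why the 50–100 year horizon of the abstract is the natural timescale for these projections.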
The ‘secondary’ greenhouse agents, water vapour and cloud, which have an even larger
effect but which do not depend directly on human activities, are not included in these figures.
The mean level of water vapour is in a complex state of dynamic equilibrium with the liquid
and solid water in the oceans and cryosphere; any increase in mean surface temperature due
to increases in other gases such as CO2 will tend to be amplified by the increased capacity of
the lower atmosphere for water vapour. H2O is in fact the most important greenhouse gas in
terms of its contribution in degrees to the surface temperature, since it is present in enormous
quantities. However, anthropogenic release of water vapour is not itself a direct contributor to
enhanced greenhouse warming, since the role of water vapour is one of amplifying changes
forced by other gases. Water vapour is the most effective component of the unperturbed
greenhouse, contributing about 65% of the 33 K of ‘natural’ warming which the planet enjoys.
The areas of greatest uncertainty in understanding the changing greenhouse effect are
undoubtedly those which involve clouds and aerosols. These have a large effect on the albedo
of the Earth and it has frequently been pointed out by sceptics that a tendency towards a hotter
Earth could be offset by a related tendency towards increased cloud cover, which reduces
the solar heating by reflecting more radiation to space. In fact, cloud–climate feedback is
extremely complex, and changes in the mean cloud cover or thickness can also amplify any
warming which produced them by reducing the net thermal emission of the Earth to space. It
all depends on the height and composition of the cloud, and on its microphysical properties such
as particle or drop size. Modelling the occurrence and properties of clouds and background
aerosols (i.e. the turbidity due to airborne particles which is present even when clouds are
absent) as a function of air temperature, composition and other factors remains one of the biggest
problems in climate forecasting.

Figure 2. Global concentrations of three greenhouse gases as a function of time over the last
millennium.

1.4. Climate and climate change on the terrestrial planets


The greenhouse effect is not, of course, unique to the Earth. It plays a major role in determining
the surface environment on the other terrestrial planets with atmospheres, Mars and Venus,
and the same greenhouse gases (principally carbon dioxide and water vapour) are responsible.
The existence of these planets offers an important opportunity to see how the greenhouse effect
works in situations other than that we observe on the Earth.
Venus is a particularly interesting subject, because there an extreme case of the greenhouse
effect raises the surface temperature to around 730 K, which is higher than the melting points
of lead, zinc and tin. This happens in spite of the fact that the net solar input is significantly less
than for the Earth, because the very high albedo of Venus (76% compared with around 30%)
more than offsets its greater proximity to the Sun. Venus is, in fact, an example of how clouds
increase the greenhouse effect. The sulphuric acid of which the Venusian clouds are composed
forms droplets which scatter very conservatively, diffusing a fraction of the incoming sunlight
down to the surface. At the same time, they are also very opaque in the infrared. The cloud
blanket therefore makes a large contribution to the backwarming effect, which outweighs the
albedo effect overall when combined with the contribution of around a million times as much
CO2 as the Earth.
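The claim that the high albedo of Venus more than offsets its proximity to the Sun can be checked with the inverse-square law. The orbital radius, albedos and solar constant below are standard values assumed for illustration, not figures taken from the article:

```python
S0 = 1368.0      # solar constant at 1 AU, W m^-2
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def absorbed_flux(r_au, albedo):
    """Globally averaged absorbed solar flux, W m^-2.

    The solar constant falls off as 1/r^2 with distance from the Sun;
    the factor 4 spreads the intercepted beam over the whole sphere.
    """
    return (S0 / r_au**2) * (1.0 - albedo) / 4.0

earth = absorbed_flux(1.0, 0.30)    # Earth: 1 AU, albedo ~30%
venus = absorbed_flux(0.723, 0.76)  # Venus: 0.723 AU, albedo ~76%
print(f"Earth absorbs {earth:.0f} W m^-2, Venus only {venus:.0f} W m^-2")

# The corresponding effective temperature of Venus is well below Earth's,
# so the ~730 K surface is entirely a greenhouse phenomenon
T_venus = (venus / SIGMA) ** 0.25
print(f"Venus effective temperature: {T_venus:.0f} K (surface: ~730 K)")
```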
The atmosphere of Mars is also warmed by airborne particles, in this case windblown dust
from the surface. The great size of the effect—around 30 °C near the surface, and nearly 100 °C
at the tropopause—emphasizes the potential scale of the aerosol problem on the Earth, and
the importance of dust-producing processes like volcanic eruptions and nuclear explosions,
particularly if these occur on a global scale.
Mars also possesses the solar system’s most dramatic evidence of past climate change.
Today’s frozen rocks and desert bear unmistakable features of rivers, lakes and seas. The origin
of these is highly controversial, but one leading theory is that Mars may in the past have had a
much thicker atmosphere. This would still have consisted primarily of CO2 , but being warmer
would have held much more water vapour, and possibly contained other constituents including
clouds. Together these might have produced a greenhouse effect large enough to raise the
temperature and pressure to values which could support liquid water on the surface. If so, how
could Mars have changed so much since then? The answer may lie in the high eccentricity of
its orbit and the large fluctuations in sunfall which result from resonances between this and
other orbital parameters. Alternatively, the heating could have been primarily due to volcanic
activity which has since subsided. In this case, water vapour, CO2 and other gases and particles
from the interior of Mars could have warmed the planet by greenhouse action and produced
lakes and rivers for as long as the volcanoes were sufficiently active. Yet another possibility
is that Mars lost enough of its early atmosphere in a collision with a large asteroid to cause
temperatures to fall below the freezing point of water. The resulting loss of water vapour
and clouds from the remaining atmosphere would further reduce the surface temperature and
eventually lead to the situation we find today, where the water not lost in the collision mostly
lies frozen beneath the surface and in the polar caps.
The climates of Mars and Venus clearly deserve much closer study. The general message
that Earth-like planets apparently can be subject to extremely large and variable greenhouse
warmings is an important and often under-rated one.

2. Atmospheric composition and structure

In this section, we recap briefly some of the fundamental properties of the major components
of the climate system, beginning with the origin and basic structure of the atmosphere. In
considering the crucial topic of atmospheric composition, it is necessary to take account of
a number of complex processes and cycles, including photochemistry, which modifies the
composition of the atmosphere in important ways, including the production of ozone; the role
of the biosphere, including the anthropogenic production of greenhouse gases; condensation
processes involving water vapour and clouds; and the interaction of the oceans with the
atmosphere. All of these involve feedback loops, are difficult to isolate in observations and
indeed are often coupled together, all of which means that even when they are understood
qualitatively, they may be difficult to incorporate accurately into models.

2.1. Origin of the atmosphere


The Earth’s atmosphere is believed to be of secondary origin. This means that the gases
surrounding the solid planet at the time it condensed were lost to space at an early stage and
the present atmosphere subsequently formed as a result of outgassing from the crust. This
process was augmented to an unknown degree by the infall of icy cometary material. The
primitive atmosphere thus formed would have had a composition quite different from that
which exists today, consisting primarily of carbon dioxide, methane, ammonia and water
vapour with little free nitrogen or oxygen. The large proportion of nitrogen now present arose
as a consequence of the photodissociation of ammonia into nitrogen and hydrogen. The latter
gas is so light that it can escape from the Earth even at the present time, whereas most heavier
gases are gravitationally trapped now that the Earth is cooler and the Sun less active. As the
atmosphere, and life, evolved, most of the methane was oxidized to form carbon dioxide and
water vapour, although enough is produced (for example, by the decay of vegetable matter)
to leave a substantial amount in the atmosphere to the present day. Free oxygen exists as a
consequence of the presence of life, in the form of plants which convert carbon dioxide into
oxygen by photosynthesis. Thus, we owe the present state of the atmosphere to evolution in the
presence of a series of complex and dynamic processes. For a longer review see, for example,
Levine (1998).

2.2. Vertical structure


Figure 3 shows a simplified model of the various regions of the atmosphere, which are named
for their temperature structure. As noted above, a substantial fraction of the energy from
the Sun reaches the ground where it is absorbed. The lower atmosphere (troposphere) is then
heated from below by the ground and becomes convectively unstable. Large-scale overturning,
coupled with some radiative and turbulent transfer, advects energy upwards until it reaches a
level where the overlying atmosphere is of a sufficiently low optical thickness that significant
amounts of radiative cooling to space in the thermal infrared can occur. At this level, the
tropopause, convection ceases and the temperature tends to become constant with height. The
tropopause occurs at a level of about 10–16 km above the surface, depending mainly on latitude,
and is generally quite a sharp feature in the temperature profile.
The temperature gradient in the troposphere, called the lapse rate, is constant for a given
composition and equal to approximately 10 K km−1 for dry air. For moist air, dT/dz is smaller in
magnitude and can be as little as 3 K km−1; a useful average value for the Earth is 6.5 K km−1. The lapse rate
above the tropopause, where convection stops, tends to zero (i.e. constant temperature with
height) because there is no longer enough absorption above the layer to stop emitted photons
reaching space. Then each layer is heated by radiation from the optically thick atmosphere
below, and cooled by radiating to space, to the same degree; to first order height is no longer
important. This region is called the stratosphere, since it is stratified in the sense that the layers
are not convectively unstable as they are in the troposphere.
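The quoted dry value of about 10 K km−1 is the dry adiabatic lapse rate, Γ = g/cp. A minimal check, using standard values for g and the specific heat of dry air (the 288 K surface temperature is an assumed global mean):

```python
g = 9.81      # gravitational acceleration, m s^-2
cp = 1004.0   # specific heat of dry air at constant pressure, J kg^-1 K^-1

# Dry adiabatic lapse rate: a rising dry parcel cools at g/cp per metre
gamma_dry = g / cp * 1000.0   # converted to K per km
print(f"Dry adiabatic lapse rate: {gamma_dry:.1f} K/km")

# With the 6.5 K/km mean lapse rate, the temperature at an 11 km tropopause:
T_surface = 288.0             # assumed mean surface temperature, K
T_tropopause = T_surface - 6.5 * 11.0
print(f"Temperature at an 11 km tropopause: {T_tropopause:.1f} K")
```

The result, near 217 K, is consistent with the sharp temperature minimum described above at the 10–16 km tropopause level.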
Above about 20 km, temperature is observed to increase (figure 3) to a maximum value of
around 270 K near 50 km altitude. This is a consequence of the absorption of solar ultraviolet
radiation by the stratospheric ozone layer. The height of the temperature maximum is known as
the stratopause. Above the stratopause, the temperature declines again, reaching an absolute
minimum of about 190 K at the mesopause near 85 km, where the pressure is only a few
millionths of the standard surface pressure.

Figure 3. The mean temperature structure of the atmosphere, showing the names given to various
regions.

2.3. Atmospheric composition


Table 1 shows the typical composition of dry air near the surface. The amount of water which is
present in addition to these non-condensable species varies, but can be as high as a few per cent.
Some of the other species (e.g. ozone, carbon monoxide) are also very variable even in non-
polluted environments. Although its overall budget is poorly understood, tropospheric ozone
originates both on and near the surface and in downward propagation from the stratosphere
where the concentration is higher. Tropospheric ozone is generally on the increase, in contrast
to the stratospheric ozone layer, for which there are much-publicized fears of a trend towards
lower concentrations. Stratospheric ozone depletion and global warming due to the greenhouse
effect are separate but related problems, the main connection being that the ozone layer has a
small but not insignificant effect on the flux of solar energy reaching the surface, and a small but
again significant contribution to the opacity of the atmosphere in the thermal infrared. Both are
cases where the natural state is being perturbed by increasing amounts of man-made materials.
Ozone is a very unstable molecule, and the amount which is present in a given parcel of air
at a given time depends on a balance between production and loss mechanisms, both natural
and anthropogenic, the latter mainly due to catalytic cycles involving reactive nitrogen (NOx),
halogen, especially chlorine (ClOx), and hydroxyl (HOx) compounds, in which small amounts
of pollutant can destroy large amounts of ozone (Brasseur et al 1999).

Table 1. Composition of clean dry air near the surface (% by volume).

N2    78.08
O2    20.95
CO2   0.033
Ar    0.934
Ne    1.82 × 10−3
He    5.24 × 10−4
Kr    1.14 × 10−4
Xe    8.7 × 10−6
H2    5.0 × 10−5
CH4   2.0 × 10−4
N2O   5.0 × 10−5

2.4. Anthropogenic emissions


Anthropogenic emission rates can be estimated reasonably reliably, but the actual rate of
buildup of greenhouse gases in the global atmosphere is much harder to determine, because it
requires estimates of the loss rates due to processes such as rainout, sedimentation, photolysis
and photosynthesis, which are difficult to quantify. For species like the halocarbons, lifetimes
are on the order of centuries and the gases may accumulate essentially indefinitely. The lifetime
of methane is a few years, and the product is another important greenhouse gas—stratospheric
water vapour. (Because of the cold-trapping effect of the tropopause, methane oxidation is at
least as important as transport from the troposphere as a source of stratospheric H2O.) The
budget of CO2 , in particular, involves interactions with the oceans and plant life. Estimates of
the future buildup of the key greenhouse gases have been made, in spite of the difficulties in so
doing, since they are essential to any attempt to make a quantitative forecast of future global
warming, and will be further discussed below.
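The balance between emission and loss sketched above is often idealized as a one-box model, dC/dt = E − C/τ, whose steady-state burden is Eτ; for equal emission rates, a long-lived gas therefore accumulates a far larger burden than a short-lived one. The emission rate and lifetimes below are illustrative, not budget figures from the article:

```python
def box_model(emission, lifetime, years, dt=0.1):
    """Integrate dC/dt = E - C/tau with forward Euler, starting from C = 0.

    Returns the burden (in the same units as emission * lifetime) after
    `years`. The steady-state burden is emission * lifetime.
    """
    c = 0.0
    for _ in range(int(years / dt)):
        c += (emission - c / lifetime) * dt
    return c

E = 1.0  # arbitrary emission rate, units per year
for tau in (10.0, 100.0):  # short-lived (methane-like) vs long-lived gas
    c = box_model(E, tau, years=50.0)
    print(f"lifetime {tau:>5.0f} yr: burden after 50 yr = {c:.1f} "
          f"(steady state {E * tau:.0f})")
```

The short-lived gas has essentially reached equilibrium after 50 years, while the century-lifetime gas is still far from it and continues to accumulate, which is the sense in which halocarbons 'accumulate essentially indefinitely'.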

2.5. Feedback cycles affecting atmospheric composition


As already noted, if the lower atmosphere warms due to the addition of more anthropogenic
greenhouse gases like CO2, it can hold more water vapour, which of course is freely available
by evaporation from the surface. Thus, water vapour tends to amplify any increase due to other
gases. Calculations indicate that this amplification factor may be as much as a factor of two.
A smaller, but still important, amplification factor may be due to methane, the production of
which in marshland etc is temperature dependent. It has been postulated that very substantial
additional amounts of methane may be released from subarctic soils and tundra as they thaw,
thus amplifying any warming trend.
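Feedback amplification of this kind is conveniently written as ΔT = ΔT0/(1 − f), where ΔT0 is the warming with the feedback held fixed and f is the feedback gain; the factor-of-two amplification mentioned above corresponds to f = 0.5. A sketch with an illustrative ΔT0:

```python
def amplified_warming(dT0, f):
    """Temperature response including a linear feedback of gain f.

    f = 0 reproduces the no-feedback response; positive f amplifies it,
    and the response diverges as f approaches 1 (runaway feedback).
    """
    if not 0.0 <= f < 1.0:
        raise ValueError("gain must satisfy 0 <= f < 1")
    return dT0 / (1.0 - f)

dT0 = 1.2  # illustrative no-feedback warming, K
for f in (0.0, 0.25, 0.5):
    print(f"gain f = {f:4.2f}: warming = {amplified_warming(dT0, f):.1f} K")
```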

2.6. Atmosphere–ocean coupling


The oceans are vast reservoirs of heat, momentum, moisture and dissolved gases, all critical
components of the climate system, and all of them quite efficiently exchanged between ocean
and atmosphere. Consider, for example, that the top five metres of the ocean stores more
thermal energy than the whole of the atmosphere, and that the oceans contain 50 times as much
carbon dioxide as the atmosphere. Heat is transported globally by ocean currents (driven by
winds) and released, with major effects on regional climates. The general circulations of the
atmosphere and oceans are so intimately interconnected that most leading efforts to construct
sophisticated climate models include the dynamics of both systems, and their coupling, in spite
of the extra strain such a step imposes on computing resources.
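The comparison between the top five metres of the ocean and the whole atmosphere follows from column heat capacities: ρcd for a layer of water against (p0/g)cp for the full air column. The densities and specific heats below are standard values assumed for illustration:

```python
# Column heat capacity of the top 5 m of ocean, J m^-2 K^-1
rho_w, c_w, depth = 1025.0, 4000.0, 5.0   # seawater density, specific heat, depth
ocean_column = rho_w * c_w * depth

# Column heat capacity of the entire atmosphere: the column mass per
# unit area is surface pressure / g, multiplied by cp of air
p0, g, cp_air = 1.013e5, 9.81, 1004.0
atmos_column = (p0 / g) * cp_air

print(f"5 m of ocean: {ocean_column:.2e} J m^-2 K^-1")
print(f"atmosphere:   {atmos_column:.2e} J m^-2 K^-1")
```

Even after weighting the ocean column by the roughly 71% of the globe that is ocean, it still exceeds the atmospheric value, so the statement holds.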

2.7. Clouds and aerosols


The enhancement of the greenhouse effect by increased tropospheric humidity may be offset,
or under certain circumstances amplified, by corresponding changes in the cloud cover. The
infrared properties of clouds, as well as cloud height and cover and the abundance of water
vapour, are highly variable quantities and, with aerosols, represent the greatest source of
uncertainty in the understanding of the climate system. The main effect of low clouds is to
increase the albedo of the Earth and hence to cool the whole troposphere including the surface.
High clouds, however, are made of large ice crystals and are generally strongly forward-
scattering at short (solar) wavelengths, while having large absorption coefficients for upwelling
long-wave radiation. These clouds could increase the greenhouse effect substantially, even in
modest amounts, such as those which could be produced by fleets of high-flying aircraft.
The atmosphere contains numerous other cloud types and thinner layers of aerosols which
may have important roles. Tropospheric aerosols are necessary for cloud nucleation, and
changes in the number or type of aerosols could be crucial in determining how the Earth’s
cloud cover evolves in the future. In the stratosphere, the Junge layer, which consists of
small sulphuric acid droplets, is thought to be increasing in density due to increasing sulphur
emissions added to volcanic activity, with an important effect by increasing the albedo of the
planet. It has been estimated that an increase in the average albedo of the Earth by only 10%
could decrease the surface temperature to that of the last ice age.
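The sensitivity of the effective temperature to albedo follows from T ∝ (1 − A)^1/4. A sketch, assuming the standard present-day albedo of about 0.30 (a 10% relative increase then takes it to 0.33):

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1368.0       # solar constant, W m^-2

def t_eff(albedo):
    """Equivalent blackbody temperature for a given Bond albedo."""
    return (S0 * (1.0 - albedo) / (4.0 * SIGMA)) ** 0.25

dT = t_eff(0.30) - t_eff(0.33)   # 10% relative increase in albedo
print(f"Effective-temperature drop for a 10% albedo increase: {dT:.1f} K")
```

This gives a cooling of roughly 3 K at the emission level; the surface response would differ once feedbacks are included, but the order of magnitude is comparable to glacial–interglacial changes, which makes the quoted estimate plausible.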
The understanding of cloud physics which is needed to tackle their role in the present and
future climate system is daunting. Computing the radiative properties of clouds, in particular
their transmissivity and reflectivity over the whole range of wavelengths important to the energy
balance of the Earth (about 0.2–100 µm), requires knowledge of not only the macroscopic
properties of the cloud, such as its extent and thickness, but also its microphysics, i.e. the
number density, shape and size distribution of droplets. For water clouds the droplets can
be assumed to be spherical, but ice crystals and some aerosol particles are often shaped like
plates or needles or irregular forms with no spherical symmetry. The need to include the shape
and orientation of the crystals is usually avoided by introducing an equivalent spherical mode
radius, citing probable random orientation of the particles as justification.
Then, if the refractive index of the cloud material is known, the cross-section for absorption
and scattering, and the phase function which describes the directional distribution of the
scattered photons, may be obtained for a single droplet from Mie theory (e.g. Goody and Yung
1989). Several algorithms exist (Hansen and Travis 1974) which allow the contributions of an
assemblage of droplets to be built up into functions or matrices which describe the behaviour
of a complete cloud. Even with the various simplifications mentioned, the computer power
required for radiative transfer calculations in clouds is formidable, and the climate modeller
generally resorts to relatively crude empirical parametrizations which by-pass the microphysics
altogether.
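The 'equivalent sphere' reduction mentioned above is usually expressed through an effective radius, the area-weighted mean of the size distribution, since scattering scales with droplet cross-section. A minimal sketch (the gamma-type distribution and its parameters are invented for illustration):

```python
import numpy as np

def effective_radius(r, n):
    """r_eff = integral(r^3 n dr) / integral(r^2 n dr), by midpoint sums."""
    dr = np.diff(r)
    rm = 0.5 * (r[1:] + r[:-1])
    nm = 0.5 * (n[1:] + n[:-1])
    return float(np.sum(rm**3 * nm * dr) / np.sum(rm**2 * nm * dr))

# Hypothetical gamma-type droplet size distribution (radii in micrometres)
r = np.linspace(0.1, 50.0, 2000)
n = r**2 * np.exp(-0.375 * r)        # assumed shape; arbitrary number units

r_eff = effective_radius(r, n)       # analytic value for this shape: 5/0.375 = 13.3 um
```

For this distribution the integrals can be done in closed form, which makes the numerical reduction easy to check.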

2.8. The biosphere


Life-forms on the Earth modify the atmospheric greenhouse in many ways, the two most
often discussed being the roles of plant life on land and in the sea in determining the carbon
12 F W Taylor

Figure 4. The global carbon cycle, including estimates of the amounts of carbon fluxes (in Gt/year)
between elements of the climate system (Schimel et al 1995).

dioxide abundance in the atmosphere, and the role of humans in producing CO2 and the
other greenhouse gases as a by-product of their burgeoning industrial activities. The latter
is, of course, a matter of great popular and political concern at the present time and a major
motivation for research into the relevant mechanisms and their likely consequences.
The global carbon cycle, illustrated schematically in figure 4, includes feedback processes,
some tending to accelerate and others to slow the buildup of atmospheric CO2 . Despite the
publicity which they receive, the principal man-made fluxes of CO2 , which are those due to
fossil fuel burning and deforestation, are small compared with the natural exchange between
plants, the soil and the oceans. Thus, it is necessary to be concerned not only about man-made
production of CO2 itself, but also any other pollution which may affect the natural equilibrium,
for example by reducing the population of biota in the oceans. On the other hand, the take-up
of CO2 by the oceans may increase if warmer water leads to an increase in aquatic plant life,
in spite of a reduction in the solubility of the gas at higher temperatures. More vigorous plant
growth on land, as a result of higher atmospheric CO2 concentrations, may also tend to slow
down the buildup of the gas in the atmosphere.

3. Models of climate change

Studies of greenhouse change, like virtually all problems involving complex coupled systems
and a multitude of different processes, and certainly all of those for which forecasting the
future is a goal, rely on numerical models. Simple versions provide useful insight into the
physics involved, while complex parametrizations are necessary to obtain the greatest possible
accuracy and to attempt to incorporate all of the feedback processes. As already noted, it is still
not possible to produce full and accurate representations of the past and present climate from
first principles, and there is still no complete consensus about which aspects of the apparently
chaotic climate system can be predicted on any particular timescale in the future.
Nevertheless, the current state of the art has advanced to the point where some fairly
general global forecasts can be made with sufficient precision to at least begin to address the
basic requirements of policy makers. For example, from model studies it can be said with a
high probability that mean temperatures will rise, and stratospheric ozone amounts decline, by
amounts of a few degrees and several per cent respectively in the next few decades. Models
are capable in principle of providing regional detail as well, and some already do so, but the
uncertainties are much larger. In this section, we look at the basics of climate models before
considering some of the most recent results.

3.1. Simple models


Very simple climate models, with some realistic features, are useful as an introduction to the
methodology. In the simplest, the atmosphere is represented by a single homogeneous layer
of gas, transparent at those wavelengths corresponding to most of the incoming solar energy,
and opaque at those wavelengths at which the Earth emits most of its thermal energy. Thus, a
flux of energy equal to one solar constant (1.37 kW m−2 ) falls entirely on the surface, raising
its temperature according to the Stefan–Boltzmann law to about 250 K. The outgoing energy
to space, here entirely from the atmosphere, must also be equal to the solar constant in order to
achieve energy balance. However, the flux from the atmosphere occurs in both the upward and
downward directions, and so in equilibrium the surface receives a second contribution equal
to that from the Sun, raising its temperature by a factor of 2^(1/4) to 297 K.
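The single-slab argument can be checked numerically (a sketch using only the Stefan–Boltzmann law; the absorbed flux is chosen to reproduce the 250 K surface of the text):

```python
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

# Absorbed flux that balances a 250 K black surface, as in the text
F = SIGMA * 250.0**4

T_bare = (F / SIGMA) ** 0.25        # no atmosphere: 250 K
T_greenhouse = 2**0.25 * T_bare     # opaque slab returns an equal downward flux: ~297 K
```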
If the atmosphere were homogeneous in reality, this enhancement would be the upper
limit on what could be achieved. In fact, it is clear (from studying Venus, for example) that the
greenhouse enhancement can be not one but many solar constants. This becomes possible if
we consider models in which the lower atmosphere is still opaque in the infrared, but warmer
in its lower regions (which warm the surface) than its upper (which radiate to space). As a
further refinement, we can allow crudely for the spectral band structure in the atmospheric
opacity profile.
The lapse rate of the atmosphere in the troposphere and the radiative equilibrium
temperature of the stratosphere can be obtained from elementary expressions obtained from
first principles, as follows.
The temperature gradient in the convectively mixed region can be calculated, assuming
that hydrostatic equilibrium applies, so that pressure P is related to density ρ at a given height
z by
dP = −gρ dz. (1)
If we assume that there is no net exchange of energy between a parcel of air and its surroundings,
then the specific heat at constant volume cv , temperature T , pressure P and volume V are
related by
cv dT + P dV = 0. (2)
Using the perfect gas law P V = RT /M (R = gas constant per mole, M = molecular weight)
and the relationship cp − cv = R/M (cp = specific heat at constant pressure) we find that the
temperature gradient with height dT /dz is given by


dT /dz = −g/cp , (3)
and, since cp is equal to about 1000 J kg−1 K−1 , this corresponds to a lapse rate of approximately
10 K km−1 for dry air as noted earlier.
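Equation (3) evaluated with round numbers (cp for dry air taken as approximately 1004 J kg−1 K−1):

```python
g = 9.81       # m s^-2
cp = 1004.0    # J kg^-1 K^-1, dry air at constant pressure

lapse_rate_per_km = 1000.0 * g / cp   # ~9.8 K km^-1, the "about 10" quoted in the text
```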
The stratospheric temperature Tx may be estimated by treating the region as if it were a
single slab of gas which is optically thin at all wavelengths, rather than just on average, which
is the real situation. (‘Optically thin’ means that a photon traversing the slab will do so with
a probability of not more than 1/e of being absorbed or scattered.) Then Tx is related to the
effective radiative temperature of the Earth TE , which is obtained using the fact that the effective
(equivalent blackbody) temperature, TS , of the surface of the Sun is about 6000 K, its radius
RS is about 0.7 million kilometres and its distance from the Earth DES is about 150 million
kilometres. Applying the Stefan–Boltzmann law we obtain for the total radiant power of the
Sun
ESun = 4π RS2 σ TS4 W (4)
where the constant σ is equal to 5.670 × 10−8 W m−2 K−4 . The solar constant S, which is
defined as the power per unit area arriving at the Earth from the Sun, is then given by
S = σ TS4 (RS /DES )2 W m−2 (5)
from which we find that S is approximately 1.37 kW m−2 . In order for the planet to be in
equilibrium overall, the power emitted by the Earth over its whole area, EE , must be equal to
the total incoming power of the Sun which is intercepted by the projected area of the Earth,
less the fraction A which is reflected, i.e.
EE = 4π RE2 σ TE4 = (1 − A)S πRE2 W, (6)
from which we obtain the result that TE is approximately 250 K, or −23 ◦ C.
The expression for the energy balance of the stratosphere, treated as a slab of emissivity
and absorptivity ε, is given by
εσ TE4 = 2εσ Tx4 (7)
whence
Tx = TE /2^(1/4) = 250/2^(1/4) ≈ 210.2 K. (8)
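Equations (4)–(8) chain together into a few lines. This is only a sketch: with the text's round 6000 K solar temperature S comes out nearer 1.6 kW m−2, so the more accurate 5780 K is assumed here, and the resulting TE is nearer 255 K than the rounded 250 K quoted above.

```python
SIGMA = 5.670e-8     # Stefan-Boltzmann constant, W m^-2 K^-4
T_SUN = 5780.0       # K, assumed solar effective temperature
R_SUN = 0.7e9        # m, solar radius
D_ES = 1.5e11        # m, Earth-Sun distance
ALBEDO = 0.3

S = SIGMA * T_SUN**4 * (R_SUN / D_ES)**2          # equation (5): ~1.37 kW m^-2
T_E = ((1 - ALBEDO) * S / (4 * SIGMA)) ** 0.25    # equation (6): ~255 K
T_x = T_E / 2**0.25                               # equation (8): ~215 K
```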
A hierarchy of model atmospheres can now be constructed (figure 5). These are all
characterized by the same vertical lapse rate, −g/cp in the troposphere and 0 in the stratosphere,
but the tropopause height depends on the abundance of greenhouse gases like CO2 , while the
stratospheric temperature depends on the value assumed for the mean albedo or reflectivity of
the Earth. The tropopause height varies because it must occur at that level where the optical
depth, measured from the top of the atmosphere down, reaches unity, i.e. where the atmosphere
makes the transition from being optically thick, so that vertical convection is the most efficient
means of transferring energy upwards, to becoming optically thin, where radiation takes over
and the temperature becomes approximately constant with height. Increasing CO2 makes
the atmosphere more opaque in the infrared, tending to raise the height of the tropopause
and producing a corresponding increase in surface temperature (since the lapse rate is fixed).
The stratospheric temperature does not depend on composition because, in the optically thin
approximation, this depends only on the equilibrium temperature of the planet.
The opacity of the stratosphere measured vertically is due principally to the strong
absorption lines of species such as carbon dioxide and water vapour. For this kind of absorption,
the optical depth is a function of the square root of the product of pressure p and absorber
Figure 5. Simple climate models: (a) unperturbed; (b) increasing the albedo A of the Earth cools
the stratosphere and the surface while the composition, tropopause height and the tropospheric
lapse rate are unchanged; (c) increasing the abundance of CO2 while keeping the albedo and the
tropospheric lapse rate constant raises the tropopause height and warms the surface.

mass m (see, for example, Goody and Yung 1989). The path of absorber in the isothermal
column above the tropopause at pressure pt is H apt , where a is the mixing ratio by volume
of the absorbing gas (assumed constant with height, a reasonable approximation for CO2 ,
which is overwhelmingly the most important infrared absorber in the stratosphere) and H is
the atmospheric scale height (again constant with height, a fair approximation in the lower
stratosphere). The optical depth, and hence the equivalent grey absorption coefficient, is
therefore a function of the product of the pressure and the square root of the mixing ratio, so that

τ = f (ptrop √a) = 1. (9)
We do not need the form of the function f for present purposes, but see Goody and Yung
(1989) for a discussion.
An albedo of A = 0.3, a tropopause height of 11 km and a pressure ptrop of 227 mb
are considered to be representative mean values for the present-day atmosphere. Now consider the
effect of changing the absorber amount, including the case where the amounts are doubled, a
standard case for climate modellers (the resulting change in T0 is sometimes called the climate
sensitivity) and a situation which may occur on Earth (for CO2 at least) during the next 50 years
or so. Then, according to (9), ptrop will move to 1/√2 times its present value. Physically, what is
happening is an increase in the depth of the convective layer to offset the greater opacity of the
atmosphere, so that heat is still brought up to the unit optical depth level where it can radiate
to space. (Note that the model does not depend on the relationship τ = 1 at the tropopause;
any constant value of τ would have the same effect. However, it can be shown from simple
radiative transfer theory that most of the heat reaches space from levels near τ = 1.)
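The scaling in equation (9) can be sketched numerically. The scale height and mean lapse rate below are assumed values, and the resulting surface warming is only indicative: it comes out larger than the ~9 ◦C quoted in the text, which depends on model details not reproduced here.

```python
import math

H = 7.0e3        # m, assumed atmospheric scale height
P_TROP = 227.0   # mb, present tropopause pressure (text value)
GAMMA = 6.5e-3   # K m^-1, assumed mean tropospheric lapse rate

# Doubling the mixing ratio a keeps ptrop * sqrt(a) fixed, so ptrop falls by sqrt(2)
p_doubled = P_TROP / math.sqrt(2.0)          # ~160 mb
dz = H * math.log(P_TROP / p_doubled)        # tropopause rises ~2.4 km
dT_surface = GAMMA * dz                      # fixed lapse rate: surface warms by Gamma * dz
```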
This simple model predicts a surface temperature rise of about 9 ◦ C for the extreme case of
doubling the greenhouse gases in the atmosphere. An increase of CO2 from 330 to 350 ppmv,
roughly corresponding to the observed increase between 1975 and 1990, produces a surface
temperature rise of around 0.75 ◦ C, perhaps twice that observed (figure 6). This difference
can be explained by the lag introduced by the thermal inertia of the oceans, i.e. the time for
the climate system to reach equilibrium after a change in the ‘forcing’, which is orders of
magnitude longer than 15 years.
Despite the gratifying success of this simple calculation, it is of course necessary to
consider more accurate models that include feedback processes. The two most important are the
Figure 6. Observed global temperatures from 1880 to 2000 (US National Climatic Data Center).

increased humidity of the air which would follow an increase in mean surface temperature, and
changes, probably increases, in the amounts of various types of cloud. Although water vapour is
the single most important absorber of those which produce the greenhouse effect, in the simple
model, the effect of increasing water vapour is negligible provided the increase is confined to
the troposphere. The physical reason is that the lower atmosphere is already optically thick,
and the vertical transfer of energy is dominated not by radiation but by convection. This is
represented by the fixed lapse rate in the model.
A second feedback process, this time probably tending to oppose any increase in the
surface temperature, is the increase in cloudiness which may result from increased humidity
and convection in the troposphere. Clouds are generally more reflective than the surface
they conceal, so increasing cloud cover tends to increase the albedo of the planet, which was
one of the parameters used to formulate the simple model. In the simple model, the surface
temperature is reduced by about 0.1 ◦C for each 1% increase in the albedo of the
Earth.
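The quoted sensitivity can be cross-checked against equation (6): differentiating the effective temperature gives a cooling of roughly 0.3 K per 1% relative albedo increase, the same order of magnitude as the ~0.1 ◦C surface response of the simple model (S and A below are the text's values).

```python
SIGMA = 5.670e-8   # W m^-2 K^-4
S = 1370.0         # W m^-2, solar constant
A = 0.30           # present-day mean albedo

def effective_temp(albedo):
    """Equilibrium effective temperature from equation (6)."""
    return ((1 - albedo) * S / (4 * SIGMA)) ** 0.25

# Cooling for a 1% *relative* increase in albedo (0.300 -> 0.303)
dT = effective_temp(A) - effective_temp(1.01 * A)
```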
Such a change could easily pass undetected, since albedo depends on quantities such as
cloud height, particle size, geographical location and so forth. Recently, satellite experiments
have been developed which measure the albedo and total thermal emission of the Earth from
space, as will be discussed below (section 5.2). Even these cannot detect changes in net
albedo without collecting many years of data, and the problem of separating trends from slow
fluctuations has to be surmounted.
In summary, very simple models give some insight into the principal processes at work,
and allow us to go through the motions of accounting for recently observed small changes
and of predicting major changes in the future. Obviously, the detailed values obtained
from such a simple and in some ways arbitrary model are not very reliable, but they do
emphasize the important principle that very basic physics implies non-trivial changes in mean
surface temperature and/or global cloud cover in response to changes in the atmospheric CO2
concentration. In the next section we shall consider the extent to which this conclusion can be
refined using sophisticated climate models.

3.2. Complex models


Complex climate models are developed for two main reasons. The first is that by introducing
more detailed physics, and more exact treatments of processes, we might hope eventually
to achieve more reliable predictions over long periods of time. The second is to recognize
that climate change is a regional affair and that changes in, for example, the mean surface
temperature of the Earth will occur as a result of large increases in some regions with smaller,
or even negative, increases in others. There will also be changes in the patterns of variability,
not only of temperature but also of important quantities such as rainfall which meteorologists
wish to understand and predict. Models with good spatial and temporal resolution, and
comprehensive formulation or parametrization of radiative, dynamical and even chemical
processes are required to address these goals.
It is beyond the scope of this review to describe the workings of complex climate models in
detail. All of them are based on general circulation models of one kind or another, increasingly
atmospheric and oceanic GCMs coupled together. The basic building blocks of any GCM
are the so-called primitive equations (e.g. Gill 1982), which express momentum balance,
hydrostatic balance, conservation of mass and conservation of energy. To these must be added
schemes for radiative transfer, cloud formation, surface interactions and all the processes
described in the preceding sections. These frequently involve the simplification, for example
by linearization, of complex processes, and constantly demand improvement. The resulting
models all give a heavily damped, essentially linear response to changing boundary conditions,
which is fairly unrealistic especially for long-term forecasts, and explains why models cannot
at present reproduce observed interannual variability.

4. Latest model predictions

Predictions obtained using existing climate models have been reviewed by the
Intergovernmental Panel on Climate Change of the World Meteorological Organization (IPCC
2001). This survey involved the work of nearly 1000 scientists and considered the results
from a number of quasi-independent models, and a wide range of ‘scenarios’ for the future
emission of greenhouse gases. It therefore represents the most authoritative set of conclusions
we have for a synthesis of all the available predictions and an assessment of the uncertainty.
The IPCC notes first that the global average temperature has increased over the 20th century
by 0.6 ± 0.2 ◦ C. Model studies suggest that this is mostly of anthropogenic origin, and in
particular that it tracks the increased emission of greenhouse gases and aerosols due to human
activities (figure 7). These and other tests, plus better computers and a more physical basis
have strengthened confidence in the ability of models to predict future climate.
Among other types of evidence, tide gauge data shows that the global average sea level
rose by 10–20 cm during the last century, while satellite and other observations show a decrease
in snow and ice cover as well. On the other hand, some important climate indicators appear
not to have changed, at least in recent decades when good measurements have been available,
most notably the mean temperatures in some parts of the southern hemispheric oceans and
Antarctica.
The range of predictions for the mean global surface temperature corresponds to an
increase of 1.4–5.8 ◦C in the next century (figure 8). An increase larger than that of the last century,
and without precedent in the last 10 000 years according to palaeoclimate data, is considered very
likely. Sea level is expected to rise by 10–80 cm, due to the thermal expansion of the oceans
and, to a lesser extent, melting of glaciers and the polar icecaps. (The time constant for polar
melting is so long that its effect is unlikely to be significant for 100 years or more.) Models
indicate that extreme weather, deforestation and desertification could also be consequences
of global warming although the uncertainty of specific predictions with this level of detail
remains quite high.
Figure 7. Model simulations of annual global mean surface temperatures (IPCC 2001).

5. Experimental studies

Progress with understanding the greenhouse effect, and planning to ameliorate the social
and environmental effects of climate change, depends critically on our ability to design and
carry out programmes of measurements. Theory and modelling alone cannot improve our
understanding of the details of the physics on which models rely, and only good data can
test and improve specific models and parametrizations. There are few direct experimental
approaches to observing the greenhouse effect, however. We must probe the physical processes
in the atmosphere and oceans at the same time as we monitor the climate system for variations
and secular change. In this section we consider a representative selection of the ways in which
this is being done.

5.1. Ground-based programmes


Most of the existing body of knowledge about the climate system, and the evidence that global
change has begun to occur, comes from traditional instrumentation on ships and aircraft and in
meteorological stations around the world. These provide a long baseline in time, but face the
problem that truly global quantities are very difficult to obtain from discrete data. For example,
it might seem at first to be simple to obtain a mean global surface temperature by taking readings
from meteorological stations distributed on a grid, and then forming a spatial and seasonal
Figure 8. Global mean temperature and sea level changes in the next century, as predicted by
the Intergovernmental Panel on Climate Change, based on all of the available model calculations.
The upper three frames show the emission scenarios for carbon dioxide and sulphur dioxide as
representatives of the assumptions on which the predictions are based (IPCC 2001).

average. However, even today, this is very non-trivial, since the distribution of measurement
is non-uniform in space and time and aliasing of seasonal and non-secular changes can easily
occur. When long runs of data are required to search for trends, problems of changing
measurement techniques and unreliable intercalibration are encountered. Measurements made
within the urban ‘heat island’ are generally higher than those made at airports etc, and the last
50 years has seen the gradual move of met stations from one to the other.
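The gridding problem has a simple core: stations on a regular latitude–longitude grid oversample high latitudes, so any global mean must weight each reading by the cosine of its latitude. A sketch with invented zonal-mean temperatures:

```python
import math

def global_mean(lats_deg, temps):
    """Area-weighted mean over readings on a regular latitude grid:
    each latitude band's area is proportional to cos(latitude)."""
    weights = [math.cos(math.radians(lat)) for lat in lats_deg]
    return sum(w * t for w, t in zip(weights, temps)) / sum(weights)

# Hypothetical zonal-mean temperatures (deg C) every 30 degrees of latitude
lats = [-75, -45, -15, 15, 45, 75]
temps = [-30.0, 5.0, 25.0, 26.0, 8.0, -20.0]

T_mean = global_mean(lats, temps)   # warmer than the naive unweighted average
```

The naive average of these numbers is about 2.3 ◦C; the area-weighted mean is nearer 12 ◦C, because the cold polar bands cover far less of the globe than an unweighted grid suggests.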

5.1.1. Network for the detection of stratospheric change. The last decade has seen the
organization and improvement of measurement systems on a vast scale. As an example,
Taylor (1991) described the setting up of the Network for the Detection of Stratospheric Change
(NDSC), which commenced operation in January 1991. It consists of about 20 primary stations
and a larger number of complementary stations covering the globe. These address the rate of
build-up of greenhouse gases in the atmosphere as a whole by making upper-air measurements,
since those at the surface are confused by local effects such as the concentration of pollutants
near population centres, rainout and so forth. The stratosphere contains the main source for
some greenhouse gases (e.g. ozone), and the main sink for others (e.g. methane). Predicting the
rate of global change depends on understanding processes there, and furthermore most climate
models predict cooling of the stratosphere in response to an increase in the CO2 mixing ratio,
which is greater than the corresponding heating at the surface and therefore easier to detect (as
indeed it has been for about the last 20 years).
The measurements made by the network vary with site but the following are brief
descriptions of some of the most important.
The differential absorption lidar technique uses light at two or more different wavelengths,
one of which is much more strongly absorbed by the species of interest than the other. For the
example of ozone, wavelengths of 0.353 and 0.308 µm are typical. The returned signals are
small but, given cloud-free skies and a long integration time (up to several hours) night-time
profiles from 5 to 50 km with vertical resolution of 200 m and absolute errors of less than 10%
can be achieved.
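Underlying the DIAL technique is the relation n(z) = (1/2Δσ) d/dz ln(P_off /P_on), where Δσ is the differential absorption cross-section and the factor 2 accounts for the two-way path. A schematic retrieval on synthetic signals (the cross-section value below is an assumed, illustrative figure):

```python
import math

DSIGMA = 1.3e-23   # m^2, assumed differential O3 cross-section (0.308 vs 0.353 um)

def dial_density(z, p_on, p_off, dsigma=DSIGMA):
    """Absorber density between gates from the gate-to-gate change of
    ln(P_off / P_on); range and instrument factors cancel in the ratio."""
    out = []
    for i in range(len(z) - 1):
        dz = z[i + 1] - z[i]
        ratio = (p_off[i + 1] / p_on[i + 1]) / (p_off[i] / p_on[i])
        out.append(math.log(ratio) / (2.0 * dsigma * dz))
    return out

# Synthetic signals: arbitrary range falloff, uniform ozone of 5e18 m^-3
n_true = 5.0e18
z = [i * 200.0 for i in range(11)]                    # 200 m gates up to 2 km
p_off = [100.0 / (1.0 + zz / 1000.0) for zz in z]
p_on = [p * math.exp(-2.0 * DSIGMA * n_true * zz) for p, zz in zip(p_off, z)]

n_retrieved = dial_density(z, p_on, p_off)            # recovers ~5e18 in every gate
```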
Temperature measurements by lidar rely on Rayleigh scattering from air molecules,
typically at the wavelength of Nd lasers (0.532 µm). The signal strength is proportional to the
air density and hence can be related to the temperature. The technique is valuable from about
30–80 km altitude, yielding temperature to an accuracy of better than 2 K with 200 m vertical
resolution. Below 30 km, backscatter from atmospheric aerosols confuses the temperature
signal and below about 26 km this contribution is so dominant that profiles of the aerosols
themselves can be derived. Water vapour lidars observing the Raman backscatter from water
vapour at 408 nm obtain vertical profiles from near the surface to the lower stratosphere.
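The Rayleigh temperature retrieval works by integrating hydrostatic balance downward through the measured relative density profile, tied on to an assumed temperature at the top. A sketch, tested here on a synthetic isothermal profile, which it should return unchanged:

```python
import math

K_B = 1.380649e-23   # J K^-1
M_AIR = 4.81e-26     # kg, mean mass of an air molecule
G = 9.81             # m s^-2

def lidar_temperature(z, n_rel, t_top):
    """T(z) = [n_top * t_top + (m g / k) * integral_z^ztop n dz'] / n(z);
    only relative densities are needed, as in the Rayleigh lidar method."""
    T = [0.0] * len(z)
    T[-1] = t_top
    integral = 0.0
    for i in range(len(z) - 2, -1, -1):
        integral += 0.5 * (n_rel[i] + n_rel[i + 1]) * (z[i + 1] - z[i])
        T[i] = (n_rel[-1] * t_top + (M_AIR * G / K_B) * integral) / n_rel[i]
    return T

# Synthetic isothermal atmosphere at 240 K between 30 and 80 km
T0 = 240.0
H = K_B * T0 / (M_AIR * G)                 # ~7 km scale height
z = [30000.0 + 500.0 * i for i in range(101)]
n = [math.exp(-zz / H) for zz in z]

T_ret = lidar_temperature(z, n, T0)        # ~240 K at every level
```

In practice the tie-on temperature at the top introduces an error that decays rapidly with depth, which is one reason the technique works best below about 80 km.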
Infrared spectroscopy is the most versatile tool of the experimental atmospheric scientist.
All of the key greenhouse gases, their source and sink species and their reaction partners
have vibration–rotation bands in the 2–15 µm spectral region, all of which is accessible
simultaneously to a Fourier-type spectrometer such as a Michelson interferometer. The
instruments used by NDSC achieve spectral resolutions in the region of 0.002 cm−1 , which
provide limited vertical profile information and will separate the stratospheric from the (often
much larger) tropospheric contribution to the spectral lines of interest. The Sun is used as the
source, and is observed over a range of solar zenith angles. The range of air masses thus obtained
makes it easier to observe both weakly and more strongly absorbing species with the same set-up.
Ultraviolet and visible spectrometers, measuring in absorption using the Sun or the Moon
as a source, can obtain column abundance measurements which complement or are superior
to infrared data for certain species, including ozone, nitrogen dioxide, nitrogen trioxide, and
chlorine and bromine oxides. This is an update of the classic Dobson technique, first developed
for stratospheric ozone monitoring in the 1920s (Dobson 1930). Grating instruments with a
spectral resolution on the order of 1 nm and integration times of 20 s–20 min are used, and
precisions of around 1% (for O3 and NO2 ) and around 10% for the other species, are claimed.
Thermal emission from optically thin stratospheric lines is monitored
using microwave receivers operating at frequencies from 22.2 GHz (for water vapour) and
110 GHz (for ozone) to 279 GHz (for chlorine oxide). High spectral resolution defines the
shapes of the pressure-broadened lines and allows vertical profiles to be retrieved, but with
much less vertical resolution than is obtainable with active techniques such as lidar. Profiles
with a resolution of around one scale height (6–8 km) are obtained for around 30 min integration
times for water vapour (20–85 km), ozone (20–75 km) and chlorine oxide (25–45 km).

5.1.2. Cloud studies using aircraft. The key problem involving clouds is to relate their for-
mation processes, which can be parametrized in models, to their radiative effects. Much of
the current work is being done using instrumented aircraft, which sample the cloud directly in
order to measure its physical thickness, water content, temperature and pressure characteris-
tics. At nearly the same time, radiometers on the aircraft can obtain the upward and downward
fluxes of radiation above and below the cloud, in bandpasses corresponding to the solar and
planetary fluxes.
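The bulk absorptance follows directly from the four broadband fluxes just mentioned, as the difference between the net downward flux at cloud top and at cloud base (the numbers here are invented for illustration):

```python
def cloud_absorptance(f_down_top, f_up_top, f_down_base, f_up_base):
    """Fraction of the incident flux absorbed in the layer: the net downward
    flux at cloud top minus that at cloud base, over the incident flux (W m^-2)."""
    net_top = f_down_top - f_up_top
    net_base = f_down_base - f_up_base
    return (net_top - net_base) / f_down_top

# Hypothetical broadband solar fluxes measured by the aircraft (W m^-2)
a = cloud_absorptance(800.0, 480.0, 300.0, 60.0)   # 10% of the incident flux absorbed
```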
Going beyond basic cloud characteristics to an understanding of the physical processes—
principally scattering, emission and absorption—which occur in clouds, requires more sophis-
ticated measurements. It is possible, for instance, to measure the size (and to some extent the
shape if the particles are ice) of cloud droplets using optical methods whereby the drops are
illuminated so that their shadows fall onto a detector array. Such data, if combined with
multispectral measurements of the visible and near- and far-infrared flux in selected spectral
intervals, allow the radiative properties of real clouds to be calculated and compared with the
measured fluxes at their boundaries. Parametrizations useful in climate models can then be developed.

Figure 9. The estimated global carbon budget and its main components as a function of time
(Woods Hole Research Center).

5.1.3. Carbon dioxide exchange with the oceans. Estimates of the global carbon budget, and
hence of the carbon dioxide amounts available to drive greenhouse warming, include a large
unidentified component (figure 9). This may be due in part to difficulties in measuring the
exchange of CO2 between the atmosphere and the oceans.
Shipborne measurements remain the only way to obtain data below the surface layer of the
oceans, and global and seasonal coverage is obviously a problem. International programmes,
such as the Joint Global Ocean Flux Study, coordinate the collection of vast numbers of data
and their analysis. Samples are collected from many depths using towed, floating and moored
sediment traps, pumps and coring drills. Water samples are analysed for their dissolved gas
and material content. The analysis aims to establish how much of the CO2 absorbed annually
by the oceans is subsequently re-released, and how, where and when. For instance, it has been
found that the exchange varies by a factor of 4 depending on the occurrence of the well known
El Niño phenomenon, because of differences in the temperature structure and the degree of
upwelling in the ocean, and the weakening of the winds in the eastern half of the Pacific during
El Niño events.
Some of the CO2 is retained permanently, after undergoing chemical and biological
changes and sedimentation. A large amount is in solution, perhaps to be more freely released
when global warming occurs (since the solubility of CO2 in water decreases with increasing
temperature) to provide an additional positive feedback mechanism. Minute plant life (phyto-
plankton) in the top 200 m of water consumes CO2 through photosynthesis, but the variability
of this process, its temperature dependence and the ultimate fate of the CO2 is obscure.
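The solubility feedback mentioned here can be quantified with a van 't Hoff extrapolation of Henry's law; the constant and its temperature coefficient below are approximate textbook values for CO2, used only to indicate the size of the effect.

```python
import math

KH_298 = 0.034       # mol L^-1 atm^-1, Henry's law constant for CO2 near 298 K (approx.)
DLNK_DINVT = 2400.0  # K, temperature coefficient d(ln kH)/d(1/T) (approx.)

def henry_co2(T):
    """Van 't Hoff extrapolation of CO2 solubility to temperature T (kelvin)."""
    return KH_298 * math.exp(DLNK_DINVT * (1.0 / T - 1.0 / 298.15))

# Warming surface water from 15 C to 17 C reduces equilibrium CO2 solubility
drop = 1.0 - henry_co2(290.15) / henry_co2(288.15)   # ~5% less dissolved CO2
```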

5.2. Satellite measurements


Observations from satellites are ideal in principle for global atmospheric studies, since the
whole planet can be surveyed regularly and efficiently, and in three dimensions. Unlike in situ
programmes, the limiting factor is not a limited data gathering capability, but rather the need
to develop sensors and techniques which are capable of obtaining the required parameters with
sufficient resolution and precision. Again, selected examples will illustrate the current state
of the art.

5.2.1. Earth Radiation Budget measurements. Since the 1960s satellites have measured the
outgoing reflected solar radiation, i.e. the albedo, and the outgoing thermally emitted radiation,
from Earth orbit. This is a less simple task than it may at first appear; the absolute calibration of
accurate radiometers is notoriously difficult, particularly in space where the hostile environment
can lead to deterioration of reference targets which cannot be checked. Furthermore, no real
instrument can obtain a uniform spectral response over the wide wavelength ranges (roughly
0.4–4.0 µm for solar and 4.0–100 µm for thermal fluxes) required to measure local energy
balance. Finally, albedo, although a simple concept, in practice is difficult to derive from
data because integration over a 2π solid angle is required of a field which may have strong
directional components (the thermal flux is easier because in this case it is reasonable to expect
that cylindrical symmetry applies).
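The 2π integration at the heart of the albedo problem, F = ∫ I cos θ dΩ, can be sketched with simple quadrature. For the isotropic (Lambertian) scene assumed below the exact answer is πI; a real anisotropic radiance field needs the empirical angular models discussed in the next paragraph.

```python
import math

def hemispheric_flux(radiance, n_theta=200, n_phi=200):
    """F = integral over 2*pi sr of I(theta, phi) * cos(theta) dOmega,
    evaluated by midpoint quadrature in zenith and azimuth angle."""
    flux = 0.0
    dtheta = (math.pi / 2) / n_theta
    dphi = (2 * math.pi) / n_phi
    for i in range(n_theta):
        theta = (i + 0.5) * dtheta
        for j in range(n_phi):
            phi = (j + 0.5) * dphi
            flux += (radiance(theta, phi) * math.cos(theta)
                     * math.sin(theta) * dtheta * dphi)
    return flux

# Lambertian (isotropic) scene of unit radiance: exact result is pi
F = hemispheric_flux(lambda th, ph: 1.0)
```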
The instrumentation on NASA’s Earth Radiation Budget Experiment (ERBE) satellites
addressed these difficulties by using redundant standard radiance targets, by breaking the
wavelength range into segments which are measured and calibrated separately and by using
angular scans which later can be integrated into hemispherical fluxes. The last of these is
problematical because the angular coverage over a given region from a single satellite obviously
can never be complete; it is necessary to fit the data to empirical models of the reflectance
properties of different types of surface and integrate the model, with a corresponding addition
to the error budget.
There is still the difficulty of covering the globe with a small number of satellites, given
that the quantities of interest are highly variable in space and time, including a powerful diurnal
component. ERBE sensors flew on two sun-synchronous polar orbiting satellites (NOAA-9
and NOAA-10) and on the Earth Radiation Budget Satellite (ERBS), which was launched
into a 57◦ inclination orbit which precesses with a 36 day cycle. The first of an improved
version known as CERES (Clouds and the Earth’s Radiant Energy System) was launched into
a 35◦ inclination orbit in 1998 and the second in 2000 on Terra, part of NASA’s new Earth
Observation System series of satellites. In Europe, plans are well advanced to include radiation
budget sensors on a geostationary platform, using a field of view which covers the whole Earth
so as to obtain integrated global coverage.
Satellite data have already provided the first reliable information on such basic climate
variables as the mean cloud cover of the Earth (more than 60%, or 10% more than earlier
estimates). Oceans are cloudier than continents, at 67%, and clouds over the ocean are lower
and about 10% less reflective than those over land. The tropics and the temperate zones are
generally cloudier than higher latitudes, but the latter are typically almost twice as reflective.

5.2.2. European Environmental Satellite (Envisat). We turn now from radiation budget
satellites, which are essentially data gathering systems, to examples of scientific satellites,
where the instruments are experiments investigating atmospheric processes and the goal, so
far as forecasting is concerned, is less to look at the boundary conditions on the climate
system and more to improve the physics in the models. At the time of writing, the European
Space Agency is preparing to launch Envisat, a large (2.673 t) polar-orbiting satellite which will
provide measurements of the atmosphere, ocean, land and ice over a 5 year period (ESA 1998).

Figure 10. The Envisat spacecraft showing the sensors. See text for key to acronyms.

The objectives relevant to climate include quantitative monitoring of ocean–atmosphere
heat and momentum exchange, sea surface temperature, atmospheric composition and
associated chemical processes, ocean dynamics and variability, atmospheric temperature, water
vapour and cloud top height and coverage.
To accomplish this the following sensors are carried: a Medium-Resolution Imaging
Spectrometer (MERIS) for the observation of the ocean and coastal waters, and three
atmospheric sounding instruments, the Michelson Interferometer for Passive Atmospheric
Sounding (MIPAS), the Global Ozone Monitoring by Occultation of Stars experiment
(GOMOS) and the Scanning Imaging Absorption Spectrometer for Atmospheric Chartography
(SCIAMACHY). Ocean observations use a radar altimeter, an Advanced Synthetic Aperture
Radar (ASAR) and the Advanced Along-Track Scanning Radiometer (AATSR), a dual-angle
infrared radiometer for accurate sea surface temperature measurement. A microwave
radiometer measures the integrated atmospheric water vapour column and cloud liquid water
content, as correction terms for the radar altimeter signal and for surface emissivity and soil
moisture measurements and ice characterization. A diagram of the spacecraft showing these
instruments in place appears in figure 10.

5.2.3. The Earth Observing System (EOS). The Earth Observing System is NASA’s latest
series of giant satellites dedicated to climate studies. The first in the series, called Terra, was
launched in December 1999; Aqua and Aura are scheduled to follow in 2001 and 2003.
A key feature of the advanced instruments on these satellites is that they address much
more comprehensively than before the various difficulties of measuring the atmosphere near
the surface from space. These include fluctuations in the spectral properties of the surface,
cloud contamination of radiances and the fact that the limb-viewing technique cannot be used
because the troposphere is too opaque when viewed tangentially. This last point means that the
vertical resolution which can be obtained is limited, and the abundance retrievals are subject to
errors introduced by the lack of knowledge of the shape of the vertical profile. Nevertheless,
tropospheric studies clearly are an essential part of a long-term programme to understand
climatic change and must be tackled.
MOPITT (for Measurements Of Pollution in the Troposphere) on Terra is a gas correlation
instrument viewing vertically downwards and obtaining a small footprint a few kilometres
square on the surface (Drummond et al 1996). It uses the pressure-modulated gas correlation
technique to separate the spectral lines of its target species, in this case carbon monoxide and
methane, from the forest of lines due to water vapour and other more abundant molecules.
With high-spatial-resolution maps, the problem of finding cloud-free regions is reduced and
MOPITT offers the possibility of studying the production and subsequent evolution of
concentrations of these two important greenhouse gases. CO has a relatively short lifetime
of a few months and shows substantial variability across the globe, for example a factor of
three difference between the mean values in the northern and southern hemispheres. MOPITT
measurements aim to identify the surface sources and sinks, natural and anthropogenic, and
the transport processes from the surface to the upper atmosphere. Early results show CO
concentrated over areas of industrial activity and of biomass burning in South America.
Methane has a long lifetime (7 years) and as a result is well mixed in the troposphere, with
only a few per cent inter-hemispheric difference and seasonal variation. It remains to be shown
how well these subtle features can be extracted from MOPITT data.
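The correlation principle that lets MOPITT isolate its target lines can be sketched with a toy radiative model. Everything below is schematic: the line positions, strengths, widths and cell opacities are invented for illustration and bear no relation to real CO or H2O spectroscopy. The key point is that the difference between the signals seen through the cell at its two modulation pressures weights only the spectral neighbourhood of the target-gas lines.

```python
import numpy as np

def lorentz(nu, centers, strength, width):
    """Sum of Lorentzian lines: absorption coefficient on the grid nu."""
    k = np.zeros_like(nu)
    for c in centers:
        k += strength * width / ((nu - c) ** 2 + width ** 2)
    return k

nu = np.linspace(0.0, 200.0, 8001)              # arbitrary wavenumber grid
target_lines = [40.0, 80.0, 120.0, 160.0]       # invented target-gas positions
interferer_lines = [20.0, 60.0, 100.0, 140.0]   # invented interfering positions

k_target = lorentz(nu, target_lines, 1.0, 0.5)
k_interf = lorentz(nu, interferer_lines, 1.0, 0.5)

# Transmission of the instrument's internal gas cell at the two pressures of
# the modulation cycle: higher pressure gives broader, stronger absorption.
t_lo = np.exp(-0.5 * lorentz(nu, target_lines, 1.0, 0.3))
t_hi = np.exp(-1.0 * lorentz(nu, target_lines, 1.0, 0.6))
weight = t_lo - t_hi    # appreciable only close to the target-gas lines

def modulated_signal(tau_target, tau_interf):
    """Difference signal for a scene with the given (scaled) gas opacities."""
    scene = np.exp(-tau_target * k_target - tau_interf * k_interf)
    return np.trapz(scene * weight, nu)

s_ref = modulated_signal(1.0, 1.0)
s_more_interf = modulated_signal(1.0, 2.0)   # doubled interfering gas
s_more_target = modulated_signal(2.0, 1.0)   # doubled target gas
print(s_ref, s_more_interf, s_more_target)
```

Doubling the interfering gas barely moves the difference signal, while doubling the target gas changes it strongly: the modulation acts as a spectral filter matched to the target species.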
Even more advanced is the Tropospheric Emission Spectrometer instrument on Aura; TES
is an immensely sophisticated Fourier transform spectrometer with mechanically cooled optics
and detectors, which has the potential to measure thermal emission from tropospheric minor
constituents with high spectral and high spatial resolution, over a wide spectral range (Beer
et al 2001). The main focus is on tropospheric ozone, a key constituent in pollution studies, but
global measurements of a range of species (NOy, CO, SO2 and H2O as well as ozone), from the
surface up through the middle atmosphere, will also advance the atmospheric chemistry part
of the greenhouse puzzle rapidly once it is available. Finally, alongside TES on Aura will be
the High-Resolution Dynamics Limb Sounder HIRDLS, which aims to elucidate the complex
coupling between atmospheric chemistry and dynamics. A prominent example of this is the
way the budget of important trace constituents in both the troposphere and the stratosphere
depends on transport in both directions across the tropopause. To first order, such transport is
inhibited by the temperature structure and the ways in which it actually occurs are very poorly
understood. However, key issues like quantifying the sources of tropospheric ozone, and the
rates at which ozone-depleting halogen compounds are introduced into the stratosphere, and
eventually removed, depend crucially on new progress being made.

6. Conclusion

In 1990, the position according to Taylor (1991) was that ‘it will probably be another
decade at least before definite experimental evidence for or against global warming is in
hand’. Although not everyone would agree, this has probably turned out to be an accurate
prediction. It is also the case that advances in measurements and modelling and increased
understanding of many key atmospheric processes have placed forecasts of future warming
on a much firmer footing (although not unassailable—see, for example, Lindzen 1995). The
best of these forecasts (IPCC 2001) suggests continued global warming which could have
serious consequences for living things on the Earth on timescales of less than a century
(see also Hansen et al 1998, 2000). The largest advances to be anticipated in the next 10 years
will probably come in aspects of more regional climate forecasting, leading to a better
understanding of the expected impacts on different parts of the globe, and on the frequency of
extreme events such as hurricanes.

References

Beer R, Glavich T A and Rider D M 2001 Tropospheric emission spectrometer for the Earth Observing System AURA
satellite Appl. Opt. 40 2356–67
Brasseur G P, Orlando J and Tyndall G 1999 Atmospheric Chemistry and Global Change (New York: Oxford University
Press)
Dobson G M B 1930 Proc. R. Soc. A 129 411–17
Drummond J R and Mand G S 1996 The measurements of pollution in the troposphere (MOPITT) instrument:
overall performance and calibration requirements J. Atmos. Ocean. Technol. 13 314
ESA 1998 ENVISAT-1 Mission and System Summary (European Space Agency)
Gill A E 1982 Atmosphere and Ocean Dynamics (New York: Academic)
Goody R M and Yung Y 1989 Atmospheric Radiation (Oxford: Oxford University Press)
Hansen J E et al 1998 Climate forcings in the industrial era Proc. Natl Acad. Sci. USA 95 12753–8
Hansen J, Sato M, Ruedy R, Lacis A and Oinas V 2000 Global warming in the twenty-first century: an alternative
scenario Proc. Natl Acad. Sci. USA 97 9875–80
Hansen J E and Travis L D 1974 Space Sci. Rev. 16 527–610
IPCC 2001 (Houghton J T, Ding Y, Griggs D J, Noguer M, van der Linden P J and Xiaosu D (ed)) Climate Change 2001:
The Scientific Basis. Contribution of Working Group 1 to the Third Assessment Report of the Intergovernmental
Panel on Climate Change (IPCC) (Cambridge: Cambridge University Press) p 944
Lindzen R S 1995 How cold would we get under CO2-less sky? Phys. Today 48 78–80
Lunine J I 1998 Earth: Evolution of a Habitable World (Cambridge: Cambridge University Press) p 344
Schimel et al 1995 CO2 and the carbon cycle Climate Change 1994 (Cambridge: Cambridge University Press)
Taylor F W 1991 The greenhouse effect and climate change Rep. Prog. Phys. 54 881–918