A R T I C L E  I N F O

Article history:
Received 9 November 2009
Received in revised form 23 August 2010
Accepted 27 August 2010
Available online 12 October 2010

Keywords:
Vulnerability
Adaptive capacity
Indicator
Index
Assessment

A B S T R A C T

The issue of "measuring" climate change vulnerability and adaptive capacity by means of indicators divides policy and academic communities. While policy increasingly demands such indicators, an increasing body of literature criticises them. This misfit results from a twofold confusion. First, there is confusion about what vulnerability indicators are and which arguments are available for building them. Second, there is confusion about the kinds of policy problems to be solved by means of indicators. This paper addresses both sources of confusion. It first develops a rigorous conceptual framework for vulnerability indicators and applies it to review the scientific arguments available for building climate change vulnerability indicators. Then, it opposes this availability with the following six diverse types of problems that vulnerability indicators are meant to address according to the literature: (i) identification of mitigation targets; (ii) identification of vulnerable people, communities, regions, etc.; (iii) raising awareness; (iv) allocation of adaptation funds; (v) monitoring of adaptation policy; and (vi) conducting scientific research. It is found that vulnerability indicators are only appropriate for addressing the second type of problem, and only at local scales, when systems can be narrowly defined and inductive arguments can be built. For the other five types of problems, either vulnerability is not the adequate concept or vulnerability indicators are not the adequate methodology. I conclude that both the policy and academic communities should collaboratively attempt to use a more specific terminology for speaking about the problems addressed and the methodologies applied. The one-size-fits-all vulnerability label is not sufficient. Speaking of "measuring" vulnerability is particularly misleading, as this is impossible and raises false expectations.

© 2010 Elsevier Ltd. All rights reserved.
doi:10.1016/j.gloenvcha.2010.08.002
Parris and Kates, 2003; Böhringer and Jochem, 2007). Judging from this literature, indicators seem to be a typical example of failed science–policy communication.

This paper argues that these opposing views result from two sources of conceptual confusion present in the academic and policy communities. First, there is confusion about what indicators are and what they can accomplish in the domain of climate change vulnerability, not least because vulnerability and related concepts such as adaptive capacity and sensitivity themselves remain vague and inconsistently defined (Adger, 2006; Hinkel, 2008; Ionescu et al., 2009; Wolf et al., 2010). Furthermore, it is not clear what "measuring vulnerability" means and some authors even argue that vulnerability can, in principle, not be measured (Moss et al., 2001; Patt et al., 2008). Finally, the methodologies applied in the development of vulnerability indicators are often not presented transparently (Gallopin, 1997; Eriksen and Kelly, 2006; Klein, 2009).

Second, there is confusion with respect to the purpose of assessing vulnerability in general and indicating vulnerability in particular. Most policy and academic documents in fact remain silent about the purpose for which the developed vulnerability indicators shall be used. Policy and research questions addressed are generally not stated, or only vaguely so. This circumstance is particularly disconcerting because vulnerability assessments are said to be carried out for very different purposes, ranging from identifying global mitigation targets to selecting local adaptation measures (Füssel and Klein, 2006; Smit and Wandel, 2006; Patt et al., 2008).

This paper addresses both sources of confusion in order to clarify the science–policy interface in the context of climate change vulnerability. Section 2 reviews the state-of-the-art of climate change vulnerability, bringing in findings from recent conceptual work carried out in the context of the ADAM¹ and FAVAIA² projects. Section 3 then presents a rigorous conceptual framework of climate change vulnerability indicators and Section 4 applies it to review which deductive, inductive, normative and other arguments are available for developing vulnerability indicators. Next, Section 5 reviews the different purposes of indicating vulnerability and evaluates, for each purpose identified, to what extent vulnerability indicators are the right means for addressing it. Finally, the gap that remains between the intended purposes and what vulnerability indicators can accomplish is discussed in Section 6 and some concluding recommendations are given in Section 7.

¹ Adaptation and Mitigation Strategies: Supporting European climate policy; http://www.adamproject.eu.
² Formal Approaches to Vulnerability Assessment that Informs Adaptation; http://www.pik-potsdam.de/favaia.

2. Vulnerability

The conceptual state-of-the-art of the field of vulnerability to climate change can be described as "Babylonian confusion" (Janssen and Ostrom, 2006). There are many definitions of the term – Thywissen (2006), for example, lists 35 – and there is also a "bewildering array of terms" (Brooks, 2003) that either express similar ideas (e.g., risk, sensitivity and fragility) or inversely similar ideas (e.g., resilience, adaptability, adaptive capacity and stability). All of these terms overlap in their meanings, however non-trivially so (Gallopin, 2006; Hinkel, 2008; Wolf et al., 2010). Furthermore, the diversity of definitions is accompanied by a similar diversity of methodologies for assessing vulnerability. Methodologies include participatory, simulation-model-based and indicator-based approaches and are applied to a great diversity of different systems, as well as spatial and temporal scales.

The plurality of definitions of and approaches to assessing vulnerability has led to intensive conceptual work that attempts to clarify concepts and methodologies. Glossaries have been compiled (e.g., ISDR, 2004; IATF/DR, 2006; McCarthy et al., 2001; Thywissen, 2006; Parry et al., 2007), overarching frameworks developed (e.g., Cutter, 1996; Jones, 2001; Brooks, 2003; Turner et al., 2003; Luers, 2005) and different types of approaches have been classified (e.g., Timmerman, 1981; Kates, 1985; Kelly and Adger, 2000; Füssel and Klein, 2006; O'Brien et al., 2007; Füssel, 2007). A general discussion of definitions, methodologies and conceptual frameworks is beyond the scope of this paper. For recent summaries see Adger (2006), Eakin and Luers (2006) and Wolf et al. (2010).

To date, the conceptual work has, however, not resolved the terminological and methodological confusion associated with vulnerability and related concepts. Little agreement has been reached beyond that there are competing conceptualisations of vulnerability and that vulnerability is place-based and context-specific (Cutter et al., 2003).

Recent work carried out in the ADAM and FAVAIA projects addressed the conceptual and methodological confusion more rigorously than previous attempts by applying methods of linguistic analysis and formalisation (Hinkel, 2008; Ionescu et al., 2009; Wolf et al., 2010). The usage of vulnerability and related concepts was analysed in conceptual papers and case studies from the climate change, disaster risk, poverty and food security literature. Common elements that occurred in the usage were abstracted and represented formally. Both theoretical definitions as well as methodologies applied for assessing vulnerability were considered. This work comes to the following conclusions:

1. All definitions and methodologies analysed follow a common form in that vulnerability is a measure of possible future harm. The measure of harm refers to a value judgement on the "badness" of a state. Commonly used measures of harm are mortality, the number of people affected by floods and loss of ecosystem services. The possible future refers to the forward-looking aspect of vulnerability. Not current but future harm is of interest and, importantly, this future harm may or may not happen.
2. Beyond this common form and these two elements, little more commonality could be found in definitions. Scientific definitions and frameworks are generally not more precise than ordinary language definitions or our intuitive understanding of the concept.
3. Scientific definitions provide little, if any, guidance for designing methodologies for assessing vulnerability. Generally, there is ambiguity in making definitions operational due to the generality and vagueness of the terms involved in the definitions. As a result, methodologies are generally only loosely connected to the theoretical definitions that they make operational.
4. Since scientific definitions do not provide much guidance for assessing vulnerability, guidance must come from the specific case considered. Hence, methodologies for assessing (and indicating) vulnerability must be developed based on the specific research or policy question addressed instead of on general definitions.
5. Finally, due to the normative value judgement involved, making definitions operational, that is, designing methodologies for assessing vulnerability, requires normative choices to be made. The meaning of harm needs to be defined for the specific case considered.

The vagueness in definitions and the associated "weak" link between definitions and methodologies can be illustrated with the help of the definition of vulnerability developed within Working Group II of the Intergovernmental Panel on Climate Change (IPCC), arguably the most authoritative one in the context of climate change.
[Fig. 1. The relations between the concept of vulnerability and its defining concepts (exposure, sensitivity, adaptive capacity, climate stimuli, climate variability), as given in the Working Group II glossary of the IPCC Third Assessment Report. The arrows point from the defined concepts to the defining ones.]
The Third Assessment Report (TAR) of the IPCC defines vulnerability as:

"the degree to which a system is susceptible to, and unable to cope with, adverse effects of climate change, including climate variability and extremes. Vulnerability is a function of the character, magnitude, and rate of climate change and variation to which a system is exposed, its sensitivity, and its adaptive capacity" (McCarthy et al., 2001, p. 995).

One difficulty in making this definition operational arises from the circumstance that the defining concepts themselves are vague and difficult to make operational. Fig. 1 decomposes the IPCC definition on the basis of the definitions of the defining concepts given in the glossary of the TAR (McCarthy et al., 2001, pp. 981–996). The arrows point from the defined concepts to the defining ones. The concepts at the bottom of the figure are those left undefined. In order for the IPCC definition to be clear, the "bottom" concepts would need to be clear, which is, however, not the case. The concepts "ability to adjust" or "ability to cope", for example, are hardly more precise than "vulnerability" itself. Furthermore, many of the defining concepts such as "adverse effect" or "significant climate variations" contain a strong normative or subjective connotation.

A second difficulty lies in the lack of clarity on how the defining concepts are combined, which is best illustrated by the second sentence of the above-cited definition. This sentence has been very influential in the design of methodologies for assessing vulnerability and often only this sentence is cited as being the IPCC definition. That the second sentence is, however, not a good definition can easily be illustrated by the attempt to define a car to be a function of tires, engine and coachwork. A "true" definition would have to name the form of the function. This misinterpretation has led to the circumstance that many assessments have focused on assessing the three arguments of this function (i.e. exposure, sensitivity and adaptive capacity) separately, paying little or no attention to how to combine these arguments. This combination is, however, essential, just as is the way tires, engine and coachwork are combined in order to attain a car.

3. Vulnerability indicators

3.1. Measurement and indicators

Measurement is the systematic process of assigning a number to a phenomenon (i.e. to something we can observe). For example, the phenomenon "heat" can be measured by associating a number called temperature to it. By systematic I mean that the association needs to follow certain rules. Warmer phenomena, for example, should receive a higher number than colder ones. The application of different sets of rules leads to the different measurement scales (e.g., ordinal, interval and ratio).

Measurement thus is based on the notions of comparative or quantitative concepts, that is, concepts that can take on different values. These concepts will be called variables (Bernard, 2000). Comparability is key to the notion of vulnerability (Ionescu et al., 2009; Barnett et al., 2008; Wolf et al., 2010). We either compare in time, that is, we compare how one entity (e.g. a system, region or group of people) changes over time, or we compare in space, that is, between different geographic or social entities.

How can vulnerability be measured? Strictly speaking it cannot, because vulnerability does not denote an observable phenomenon (Moss et al., 2001; Patt et al., 2008). Vulnerability is a theoretical concept, in opposition to, for example, heat or income, which are observable ones. This distinction between observable and theoretical concepts has been subject to a lot of debate in the philosophy of science (Stegmüller, 1974; Carnap, 1995). The bottom line is that there is no clear cut and observability is a convention: if the members of a scientific discipline have agreed upon a simple and canonical way of measuring a concept, it is said to be observable. To give an example, there is not much debate about how to measure body height, while there is about measuring intelligence or vulnerability.

Since vulnerability is a theoretical concept, it is more accurate to speak about making the concept operational instead of measuring it. Making a theoretical concept operational consists in providing a method (an operation) for mapping it to observable concepts; that method is then called the operational definition (Schnell et al., 1999; Bernard, 2000; Copi and Cohen, 2005). In the case of vulnerability, the operational definition is generally called the methodology of a vulnerability assessment. I will follow this usage here.

Indicators constitute one approach to making theoretical concepts operational. An indicator is a function from observable variables (Gallopin, 1997), called indicating variables, to theoretical variables. The simplest kind of indicator is a scalar indicator, which maps one observable variable to one theoretical variable. For example, the presence of a certain lichen species (observable variable O) is often used to indicate air quality (theoretical variable T):

scalar indicator: O → T

Note that in the literature, the term indicator is often used for referring only to the indicating variables rather than to the whole function. This usage is, however, misleading, because an observable variable only becomes an indicator when associated (by means of a function) to a further variable, the one to be indicated.
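To make this function view concrete, consider the following minimal sketch in Python. The lichen-abundance scale and the particular mapping are invented for illustration only; the essential properties are that the function is simple and monotone.

```python
# Minimal sketch of a scalar indicator: a simple, monotone function from one
# observable variable (here, an invented lichen-abundance score in [0, 1]) to
# one theoretical variable (an air-quality score in [0, 1]).

def air_quality_indicator(lichen_abundance: float) -> float:
    """Map observed lichen abundance to an air-quality score.

    The particular functional form is arbitrary; what matters is that it is
    simple and monotonically increasing, so that more lichen never indicates
    worse air quality.
    """
    if not 0.0 <= lichen_abundance <= 1.0:
        raise ValueError("lichen_abundance must lie in [0, 1]")
    return lichen_abundance ** 0.5


if __name__ == "__main__":
    for abundance in (0.0, 0.25, 1.0):
        print(abundance, "->", round(air_quality_indicator(abundance), 2))
```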
It is also important to note that indicators are "simple" functions. Often they are linear, and they should always be monotonically increasing or decreasing. Non-monotonic functions would be misleading; e.g. we would not want a certain lichen species to indicate both high and low air quality.

Often, several indicating variables are needed for making a concept operational. A composite indicator or an index is an indicator that maps (or aggregates) a vector of observable variables to one scalar theoretical variable. The Human Development Index (HDI; UNDP, 1990), for example, maps values of the four observable variables life expectancy, adult literacy, mean years of schooling and income (O1, O2, ..., On) to the theoretical variable human development (T):

composite indicator: (O1, O2, ..., On) → T

Finally, a vector-valued indicator maps a vector of observable variables into a vector of theoretical variables:

vector-valued indicator: (O1, O2, ..., On) → (T1, ..., Tm)

Vector-valued indicators are often displayed in the form of so-called spider diagrams or radar charts (OECD, 2008), which are diagrams of three or more variables represented on axes starting from one central point. See, for example, the "Water Poverty Index" (Sullivan, 2002) or the "Livelihood Vulnerability Index" (Hahn et al., 2009).

3.2. Two unique challenges involved in developing vulnerability indicators

For the purpose of this paper, I distinguish between three steps involved in the development of vulnerability indicators. For a broader discussion of the steps involved in assessing vulnerability see Schröter et al. (2004) or Patt et al. (2008), and in developing indicators see UNEP (2001) and OECD (2008).

The first step is the definition of what is to be indicated. In the case of climate change vulnerability indicators, this would be the vulnerability of an entity to climate change. A wide array of different entities such as individuals, households, communities, ecosystems, regions, economic sectors and countries are considered. Generally, these entities can be conceptualised as socio-ecological systems (SES; Gallopín, 1991) or coupled human–environmental systems (Turner et al., 2003), because vulnerability is determined by the interaction of a social (or human) and an ecological (or environmental) sub-system. Following this literature, I will use the term system as a synonym for entity. Defining the entity thus includes defining the system's boundaries.

The second step is the selection of the indicating variables. Technically speaking, this consists in defining the domain of the indicator function. A possible but not necessary next step is the aggregation of the indicating variables. This third step consists in defining the indicator function itself.

The development of vulnerability indicators in particular involves two unique challenges. The first challenge lies in the difficulty of exactly defining the vulnerable entity. On the one hand, this is due to many assessments being interested in systems with wide system boundaries such as, for example, the vulnerability of a whole country (including its regions, economic sectors and social groups) to all climate-related hazards (including both primary and secondary ones) and possibly other hazards. On the other hand, even local assessments targeting individuals or communities need to take into account the wide political, institutional, economic and social context that determines vulnerability, as expressed by the concept of "contextual vulnerability" (O'Brien et al., 2007).

The second unique challenge in developing vulnerability indicators is the forward-looking aspect of vulnerability. As discussed above, vulnerability indicators must indicate a possibility, i.e. some state that might or might not come about in the future (see also Patt et al., 2008; Ionescu et al., 2009). The "usual" indicators, however, indicate a state and not the potentiality of a future state. The HDI, for example, indicates the current state of development rather than the possibility of future development.

Due to this forward-looking aspect of vulnerability, developing a vulnerability indicator includes building a predictive model, a task similar to the case of developing a simulation model. In both cases, a function is built that, based on the observed present state, returns information on possible future states. The difference between the two approaches is one of complexity and the treatment of time. In the indicator-based approach the function (i.e. the indicator) is, by definition, simple (see above) and time-independent (in the sense that it does not contain time as an argument). A vulnerability indicator does not give us information on when in the future harm will occur. In the simulation-model-based approach the function (i.e. the simulation model) is complex and time-dependent, in the sense that it is a computer program representing a dynamical system that is iterated over time, including feedbacks and non-linearity.

It is thus important to distinguish between:

1. Harm indicators, which are indicators that evaluate a state of an entity based on normative judgements of what constitutes a good or bad state. These indicators do not include the forward-looking aspect.
2. Vulnerability indicators, which are indicators of possible future harm. These indicators include both the forward-looking aspect as well as the normative aspect of defining harm.
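To make the composite and vector-valued indicator functions of Section 3.1 concrete, the following minimal Python sketch implements an HDI-style, equal-weight composite indicator and a vector-valued indicator over the same observables. The variable names, the min–max normalisation bounds and the equal-weight arithmetic mean are assumptions chosen purely for illustration. Note that such indicators only describe a current state; they do not capture the forward-looking aspect just discussed.

```python
# Illustrative sketch only. Variable names and min/max bounds are invented;
# the equal-weight arithmetic mean is just one possible (normative) choice.

from typing import Dict, Tuple

# (min, max) bounds used to rescale each observable variable to [0, 1]
BOUNDS: Dict[str, Tuple[float, float]] = {
    "life_expectancy": (25.0, 85.0),
    "adult_literacy": (0.0, 100.0),
    "schooling_years": (0.0, 15.0),
    "log_income": (2.0, 5.0),
}

def normalise(name: str, value: float) -> float:
    lo, hi = BOUNDS[name]
    return min(max((value - lo) / (hi - lo), 0.0), 1.0)

def composite_indicator(observations: Dict[str, float]) -> float:
    """Map a vector of observables to one scalar theoretical variable
    (a development score), using equal weights and arithmetic averaging."""
    scores = [normalise(name, value) for name, value in observations.items()]
    return sum(scores) / len(scores)

def vector_valued_indicator(observations: Dict[str, float]) -> Dict[str, float]:
    """Map the same observables to a vector of theoretical variables, i.e.
    keep the dimensions separate instead of aggregating them."""
    return {name: normalise(name, value) for name, value in observations.items()}

if __name__ == "__main__":
    obs = {"life_expectancy": 71.0, "adult_literacy": 88.0,
           "schooling_years": 9.5, "log_income": 3.8}
    print("composite:", round(composite_indicator(obs), 3))
    print("vector-valued:", vector_valued_indicator(obs))
```

The choice of equal weights and of arithmetic averaging is itself a normative argument; Section 4.3 returns to this point.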
4. Which arguments are available for developing vulnerability indicators?

In principle, there are three kinds of substantial arguments available for developing vulnerability indicators: (i) deductive ones, which are based on existing theory, (ii) inductive ones, which are based on data of both the indicating variables as well as observed harm, and (iii) normative ones, which are based on value judgements. A fourth, non-substantial kind of argument is also available, which is based only on data of the indicating variables and is thus irrespective of knowledge on vulnerability. Generally, developments of indicators combine the different types of arguments. This section explores which deductive, inductive, normative and non-substantial arguments are available for the case of developing vulnerability and harm indicators and shows, with the help of a couple of examples from the literature, how these arguments are typically combined.

4.1. Deductive arguments

Using deductive arguments in the development of vulnerability indicators means using available scientific knowledge in the form of frameworks, theories or models about the vulnerable system of interest in the selection and aggregation of indicating variables. By framework I mean a set of concepts, by theory a set of general relations that hold amongst these concepts and by model a set of more specific relations (Ostrom, 2005). The capability theory of Sen (1983), for example, suggests that the ability to participate in political activities is important for reducing poverty. From this theory one can deduce that variables that measure non-participation in policy may indicate vulnerability to poverty.

What frameworks, theories and models are available for developing climate change vulnerability indicators? For SESs in general there are no theories or models available, but only some general frameworks such as those of Turner et al. (2003) and Ostrom (2009). These frameworks are abstract and only provide some rough guidance for selecting potential indicating variables but not for aggregating them. Frameworks in general do not provide arguments for aggregation, because they say nothing about the processes through which the different variables interact and may lead to future harm and thus cannot capture the necessary forward-looking aspect of vulnerability. A widely used approach therefore is to separate SESs into their social and bio-physical sub-systems.

For the social sub-system, there is no general, let alone global, theory available either, but there are some frameworks that are frequently used. Two prominent examples are the "root causes" of vulnerability put forward by Blaikie (1994) and the eight determinants of adaptive capacity introduced by the IPCC (Smit et al., 2001) and further elaborated by Yohe and Tol (2002) and Adger et al. (2007). Again, these frameworks only help to select indicating variables and not to aggregate them. Furthermore, the deductive arguments for selecting indicating variables built on the available frameworks are rather weak with respect to climate change, because they are based on the climate-change-unspecific idea that the indicating variables indicate the current (positive or negative) potential to encounter or prevent harm in the future. Prominent indicating variables used are GDP and other types of "capitals" such as technology, education and infrastructure. The indicators developed are generally called adaptive capacity or social vulnerability indicators.

At local scales and for specific social systems, there is, of course, the whole of social science research offering a great pool of local and contextualised theories and models. Sen's above-mentioned concept of entitlement, for example, has been widely applied (Bohle et al., 1994; Cutter, 1996; Hewitt, 1997; Kelly and Adger, 2000) and, more recently, the concept of social capital has received attention (Adger, 2003; Pelling and High, 2005). Another example is provided by Hahn et al. (2009), who surveyed 220 households in Mozambique for developing a "Livelihood Vulnerability Index". The indicating variables were selected deductively based on the literature, in particular related to the Sustainable Livelihood Approach (Scoones, 1998). Again, these local-level theories only provide arguments for the selection and not the aggregation of indicating variables.

For the bio-physical sub-system there is also no general theory available, but there are specific dynamical (computer) models for some sectors (mostly agriculture, forestry and water) as well as so-called integrated assessment models that include several sectors in a stylised way. These models can, however, not be used for building vulnerability indicators, because they are complex and thus cannot be collapsed into "simple" indicator functions. If those models are used for assessing vulnerability, they are applied in combination with harm indicators in the sense of the "classical top-down" simulation-model-based approach (Dessai and Hulme, 2004): possible future states of the vulnerable system are first simulated and then evaluated based on harm indicators. No vulnerability indicators are involved in this approach.

One prominent example of this approach is the project ATEAM (Advanced Terrestrial Ecosystem Analysis and Modelling), which applied a suite of ecosystem and hydrology models together with a harm indicator to assess the vulnerability of European regions relying on ecosystem services (Schröter et al., 2005). Another example is Moss et al. (2001), who used an integrated assessment model and scenarios to produce future evolutions of the state variables of the global SES and then applied harm indicators to each evolution. It must be noted that simulation-model-based approaches are sometimes wrongly referred to as vulnerability indicators in the literature. The approach of Moss et al. (2001), for example, is labelled "vulnerability-resilience indicators".

Finally, there is another, albeit weak, form of deductive argument, which is expert judgement. This is actually the only deductive argument applied for the aggregation of indicating variables. Brooks et al. (2005), for example, attempted to aggregate 11 indicating variables by expert judgement via a focus group exercise. A robust result could, however, not be achieved; different experts aggregated differently. The approach of Brooks et al. (2005) will be discussed in more detail in the next section.

In summary, deductive arguments are only available for selecting indicating variables and not for aggregating them. Most approaches in the literature apply this argument as the first step in their methodology. Further examples are given in the next sections.

4.2. Inductive arguments

Using inductive arguments in the development of vulnerability indicators means using data for building statistical models that explain observed harm through some indicating variables. For example, if there is data that shows that heat-wave mortality was highest in low-income neighbourhoods, one can induce that the variable low income may generally indicate vulnerability to heat-waves.

Which statistical models can be developed in the case of climate change vulnerability? Generally, the development of statistical models is only promising if two conditions are met: (i) systems can be narrowly defined in the sense that they can be described using few variables, and (ii) sufficient data is available. Both conditions are, however, rarely met in the case of climate change vulnerability.

The first condition is rarely met because, as pointed out in the last section, the subject matter of climate change vulnerability assessments is usually a complex SES, which cannot, by definition, be described in simple terms with few variables.

With respect to the second condition, it is important to note that data for both the indicating variables as well as for experienced harm is needed. Generally, there is, however, little data on experienced harm. Furthermore, there is only data on harm caused by fast-onset and not slow-onset hazards, since we are just beginning to observe and monitor impacts of slow-onset climate change. The most commonly used harm indicators are economic loss, mortality, people affected, people injured or left homeless (see, e.g., the Emergency Events Database EM-DAT at http://www.emdat.be/). Most data is only available at the national level. Collecting data is of course a possibility but, since it requires a lot of resources, it is only feasible at the local level.

Two largely unsuccessful attempts to inductively build national-level vulnerability indicators are given by Brooks et al. (2005) and Tol and Yohe (2007). Brooks et al. (2005) first used deductive arguments (i.e. expert judgement and literature) as well as the non-substantial argument of data availability (discussed further below) to select a short list of 46 potential indicating variables. The list was then reduced inductively. Using mortality as the harm indicator, 11 "key vulnerability indicators" that correlated with mortality at the 10% significance level were selected. Finally, as mentioned above, it was attempted to aggregate the remaining 11 variables deductively by expert judgement, which, however, did not deliver a robust result.
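As an illustration of the screening logic behind such inductive arguments, the following Python sketch correlates candidate indicating variables with observed harm and retains only those significant at the 10% level. It is a toy example on randomly generated placeholder data, not a reproduction of the Brooks et al. (2005) or Tol and Yohe (2007) analyses, and the variable names are invented.

```python
# Toy illustration of inductive screening: keep only those candidate
# indicating variables whose correlation with observed harm is significant.
# All data below are random placeholders, not real observations.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_units = 120  # e.g. districts for which both harm and indicator data exist

candidates = {
    "low_income_share": rng.random(n_units),
    "elderly_share": rng.random(n_units),
    "hospital_beds_per_1000": rng.random(n_units),
}

# Hypothetical observed harm (e.g. heat-wave mortality per 100,000),
# constructed here so that it partly depends on the first two variables.
harm = (2.0 * candidates["low_income_share"]
        + 1.0 * candidates["elderly_share"]
        + rng.normal(0.0, 0.5, n_units))

ALPHA = 0.10  # the 10% significance level used for screening

selected = []
for name, values in candidates.items():
    r, p_value = stats.pearsonr(values, harm)
    if p_value < ALPHA:
        selected.append((name, round(r, 2)))

print("candidate variables retained as indicating variables:", selected)
```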
Tol and Yohe (2007) follow a similar approach and first deductively selected an initial list of 34 indicating variables based on the above-mentioned eight determinants of adaptive capacity. Six alternative harm indicators, such as the number of people affected by natural disasters, infant mortality and life expectancy, were selected for which data was available in the EM-DAT database. 24 of the 34 indicating variables were found to be statistically not significant. Amongst the statistically significant ones, different ones were found significant for different harm indicators. The authors conclude that the results suggest that there are no universal indicators of vulnerability or adaptive capacity on the national level; mechanisms that cause harm vary from case to case and hazard to hazard.

In summary, induction is feasible when the systems can be narrowly defined by few variables and sufficient data – for both the indicating as well as the harm variables – is either already available or can be collected. These conditions are only fulfilled when considering the vulnerability of a specific system to a specific stimulus in a local context.

4.3. Normative arguments

Using normative arguments in the development of indicators means using (individual or collective) value judgements in the selection and aggregation of indicating variables. The indicating variables of the above-mentioned HDI, for example, are selected based on the normative argument that human development should be characterised by the three dimensions of longevity, knowledge and income. The indicating variables are then aggregated based on the normative argument that each dimension should be equally important in characterising the state of development (UNDP, 1991, 1993).

The latter normative argument of equal weights is frequently used in aggregation. Aggregating the indicating variables arithmetically based on equal weights implies that the indicating variables are perfect substitutes, which means that a low value in one variable can be compensated by a high value in another (Desai, 1991; Sagar and Najam, 1998; Kaly et al., 1999; Cutter et al., 2003; Hahn et al., 2009). Alternatively, the dimensions could be multiplied, suggesting that a low value in one dimension cannot be fully substituted by a high value in another dimension. The above-mentioned "Livelihood Vulnerability Index" of Hahn et al. (2009), for example, aggregates based on the normative argument of equal weights.

In the context of climate change vulnerability, normative arguments are, however, predominantly applied in the development of harm and not vulnerability indicators, because in the latter case it is difficult to separate the forward-looking aspect of vulnerability from the normative one. Harm indicators, by definition, only deal with the normative aspect of defining what constitutes a good state. In vulnerability assessments, harm indicators are thus generally applied in combination with simulation models that take care of the forward-looking aspect by projecting possible future states, which are then evaluated using harm indicators (e.g., Moss et al., 2001; Schröter et al., 2005).

Defining harm is generally not a straightforward exercise because harm has multiple dimensions between which trade-offs are involved. If we consider a community vulnerable to coastal flooding, for example, important dimensions are people affected by floods, wetlands lost, damage cost and adaptation cost. Aggregation is further complicated because different stakeholders usually value the dimensions differently.

One way of dealing with these issues is to directly involve stakeholders in the aggregation. The above-mentioned ATEAM project, for example, involved stakeholders for weighing the different dimensions of the high-dimensional output of ecosystem and hydrology models, leading to one aggregated number that indicates the "goodness" of each ecosystem service (Schröter et al., 2005). Another way of dealing with this issue is, as some authors argue (e.g., Kelly and Adger, 2000; Hinkel and Klein, 2007, 2009), not to aggregate and to use vector-valued indicators.

4.4. Non-substantial arguments

A couple of further arguments are also applied in the development of vulnerability indicators. These arguments will be called non-substantial here, because they are based neither on knowledge about vulnerability (as are the deductive and inductive ones) nor on value judgements (as are the normative ones), but only on the structure of the data of the indicating variables. Note that these arguments are different from the inductive ones described above, since the inductive ones also make use of data for harm in the attempt to build a model of the vulnerable system that explains observed harm. On the contrary, non-substantial arguments do not reveal anything about how the indicating variables combine in the process of causing vulnerability.

The argument of co-variation or multi-variation is frequently used to reduce the number of indicating variables. Principal component analysis (PCA) and similar methods for multivariate data analysis are applied to reduce the number of dimensions (here, the number of indicating variables) needed to describe the state of the system whose vulnerability is to be indicated. It is important to note that multivariate analysis does, however, not reveal anything about the influence of the indicating variables on the theoretical variable (here, vulnerability) to be indicated (OECD, 2008).

A prominent example using this argument is the "Social Vulnerability Index" developed by Cutter et al. (2003) for the 3141 counties of the United States. The authors first selected more than 250 variables deductively based on the literature. This list was then narrowed down to 42 variables by getting rid of redundant variables through multicollinearity analysis. Finally, PCA was applied and 11 components explaining 76.4 percent of the variance were identified.

A further argument used in the aggregation is the one of robustness, which refers to an index being robust against the usage of alternative methods of aggregation. Robustness is tested by applying alternative methods of aggregation to the data of the indicating variables and evaluating the resulting rankings. For the HDI, e.g., it was shown that the index is not very sensitive with respect to aggregating geometrically or arithmetically (UNDP, 1993).

Finally, the argument of data availability plays a major role in the development of indicators in general and vulnerability indicators in particular. Niemeijer (2002) uses this argument for differentiating between deductive approaches for which "data availability is the central criterion" and those for which it is not, calling the former data-driven and the latter theory-driven approaches.
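Both of these non-substantial arguments can be sketched in a few lines of Python. The example below runs a PCA on the correlation matrix of some indicating variables and then compares the rankings obtained from arithmetic and geometric aggregation as a simple robustness check. The data are randomly generated placeholders, and the details (number of units and variables, the small constant guarding the logarithm) are assumptions made purely for illustration.

```python
# Illustrative sketch of two non-substantial arguments, on random data:
# (i) PCA to reduce the number of indicating variables, and
# (ii) a robustness check comparing arithmetic and geometric aggregation.

import numpy as np

rng = np.random.default_rng(1)
X = rng.random((200, 6))  # 200 units described by 6 indicating variables in [0, 1]

# --- (i) PCA on the correlation matrix: how many components explain most variance?
eigvals = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))[::-1]
explained = np.cumsum(eigvals) / eigvals.sum()
print("cumulative variance explained:", np.round(explained, 2))
# Note: this says nothing about how the variables relate to vulnerability.

# --- (ii) Robustness of the ranking under alternative aggregation rules
arithmetic = X.mean(axis=1)                        # equal weights, perfect substitutes
geometric = np.exp(np.log(X + 1e-9).mean(axis=1))  # penalises low values more strongly

rank_a = arithmetic.argsort().argsort()
rank_g = geometric.argsort().argsort()
rank_correlation = np.corrcoef(rank_a, rank_g)[0, 1]  # Spearman-style comparison
print("rank correlation between the two aggregations:", round(rank_correlation, 2))
```

As stressed above, neither computation reveals anything about how these variables contribute to vulnerability; they only summarise the structure of the indicating data.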
5. Fit for purpose

5.1. Purposes of assessing vulnerability

Up to this point, the development of vulnerability indicators was discussed without reference to the problems that these indicators are meant to solve. Indicators serve, just as any other method, the purpose of solving a problem or, more specifically, of addressing a policy or research question. Hence, the potential of indicators can only be discussed adequately in the light of these problems. This section therefore briefly reviews what is said in the literature about the purposes of developing indicators and assessing vulnerability.

On the most general level, the purpose of indicators is to describe the state of affairs of a complex system in simple terms (Hammond et al., 1995; Niemeijer, 2002; Barnett et al., 2008). For example, we use indicators of biodiversity to describe the state of complex ecosystems or indicators of human development to describe the state of complex socio-economic systems. Since indicators reduce complexity, they are, by their very nature, useful for communicating complex issues from science to policy or the general public. Indicators are often particularly designed for policy making or for monitoring the performance of policy measures (Hammond et al., 1995; Gudmundsson, 2003; Niemeijer and de Groot, 2008; Barnett et al., 2008). Many policy documents, in fact, directly suggest the development of indicators. A prominent example is Agenda 21, stating that "indicators of sustainable development need to be developed to provide a solid basis for decision making" (United Nations, 1992a, Chapter 40.4).

The role of indicators in policy making, however, has been subject to a lot of critique. Evaluations of indicators in the domains of sustainability, environment and vulnerability conclude that most methodologies applied are not scientifically sound (Parris and Kates, 2003; Eriksen and Kelly, 2006; Barnett et al., 2008) and may actually mislead policy (Böhringer and Jochem, 2007). Indicators are often only used "symbolically" for legitimising decisions that would have been taken anyway, or the absence of indicators may be used for justifying inaction (Gudmundsson, 2003).

The purpose of vulnerability assessments is, on the most general level, to inform decision making (Schröter et al., 2004; Patt et al., 2008). The term assessment is thereby used in opposition to research to express that problem-solving is driven by the purpose "to inform policy and decision-making, rather than to advance knowledge for its intrinsic value" (Weyant et al., 1996, p. 374).

Turning to policy documents, little can, however, be learnt about the specific purposes for which vulnerability indicators are meant to be used. The current Impact Assessment accompanying the European Commission's White Paper on Adaptation, for example, contains a section labelled "The need for indicators of vulnerability" (European Commission, 2009, pp. 15–19). In this section, however, nothing more is said beyond that vulnerability indicators are needed "to help preparing EU-wide adaptation policy" (p. 18).

A similar lack of specificity is found in the United Nations' Barbados Programme of Action (BPOA; United Nations, 1994), which calls for "the development of vulnerability indices and other indicators that reflect the status of Small Island Developing States" (paragraph 113 on p. 44). The remainder of the document remains, however, silent about the purpose for which these indicators shall be used. The only policy documents that are, to my knowledge, more explicit on purposes are the proposals to use vulnerability indicators as a tool for allocating adaptation funds to "particularly vulnerable countries" in the context of the UNFCCC (Klein, 2009).

In the academic literature, a similar lack of specificity about the purposes of assessing vulnerability can be observed. Vulnerability assessment case studies generally say little or nothing on the policy or research questions that they address beyond calling themselves "vulnerability assessments" in the first place (Hinkel, 2008; Wolf et al., 2010). Only the conceptual literature is more explicit and makes a few general distinctions (Füssel and Klein, 2006; Smit and Wandel, 2006; Patt et al., 2008). Drawing on this literature, the following six purposes for assessing vulnerability can be identified: (i) to identify mitigation targets; (ii) to identify particularly vulnerable people, regions or sectors; (iii) to raise awareness of climate change; (iv) to allocate adaptation funds to particularly vulnerable regions, sectors or groups of people; (v) to monitor the performance of adaptation policy; and (vi) to conduct scientific research. The next section discusses to what extent vulnerability indicators can meet these purposes.

5.2. Are vulnerability indicators fit for purpose?

5.2.1. Are vulnerability indicators the right means to identify mitigation targets?

Clearly, the answer is no. Identifying mitigation targets is a question of global scope and, as shown in Section 4, no general deductive or inductive arguments are available for developing vulnerability indicators at this scale. The identification of mitigation targets is a matter of developing and applying earth system and global integrated assessment models. These models are complex and cannot be collapsed into a simple indicator function. Instead, these models are applied to project possible future impacts of given mitigation targets. Normative trade-offs between the many dimensions of the projected impacts are important, but addressing these is a matter of developing harm and not vulnerability indicators.

5.2.2. Are vulnerability indicators the right means to identify particularly vulnerable people, regions or sectors?

At local scales and when systems can be narrowly defined by few variables, the answer is yes. Only under these conditions can inductive arguments be built based on data of observed harm in order to find out which people or communities are most vulnerable.

At larger scales, the answer is no. Induction has not revealed much so far and is not likely to do so in the future, due to the complexity of the systems involved, the many variables needed to describe them and the little data available. The only deductive arguments available at larger scales are those based on general vulnerability frameworks, which, however, only help to select indicating variables and not to aggregate them into an index. Hence, simulation-model-based approaches are, as stated above, the only available methodologies for assessing vulnerability at larger scales. Collapsing these complex models into simple indicator functions, however, would mean that one disregards the more advanced knowledge available in the form of these models.

5.2.3. Are vulnerability indicators the right means to raise awareness of climate change?

The direct answer is no, because the problem of raising awareness of uncertain climate change is primarily a problem of risk communication. Risk communication is a social and institutional process (Renn, 2008), which (tautologically) must be addressed by risk communication methods. The information used is only one of many factors that play a role in this process, and often other factors, such as the charisma of the communicator or the stakeholders' personal experience with climate-related extreme events, are more important than the information itself (Hinkel et al., 2009).

Furthermore, what kind of information is useful in the risk communication process is subject to ongoing research and there are no general answers (see, e.g., Bostrom et al., 1994; Patt et al., 2005). What can be said at the general level is that information is only considered useful by stakeholders if it is relevant to their needs (Cash et al., 2003). Given the vagueness of the concept of vulnerability and the rather complex idea it expresses (i.e. possible future harm), it seems that simpler concepts such as climate change itself (e.g., an increase of average temperature) or experienced harm (e.g., the number of people that died during a heat wave) are likely to be more relevant in the risk communication process. One interesting insight gained from the final stakeholder workshop of the above-mentioned ATEAM project is that stakeholders were not
interested in the developed adaptive capacity indicator, because they felt that they could better judge for themselves their ability to respond (Schröter et al., 2004).

5.2.4. Are vulnerability indicators the right means to allocate adaptation funds to particularly vulnerable people, regions or sectors?

Again, it must be emphasised that adaptation funds allocation is a much wider problem than the one of designing an allocation algorithm. The primary problem is not one of science developing this algorithm but one of stakeholders creating an appropriate institution for the allocation. What kind of institution is appropriate and whether a vulnerability indicator may serve as allocation algorithm are thereby questions that cannot be answered in general.

If we consider, for example, the global problem of allocating adaptation funds in the context of the UNFCCC to the "most vulnerable" countries, the answer is no, indicators are not the right means of doing so. As stated earlier, there are no inductive arguments available at this scale and the available deductive arguments based on frameworks only help to select indicating variables and not to aggregate them. Given the plurality of existing frameworks and possible interpretations thereof, even the selection of indicating variables would, however, be contestable from a scientific point of view. It is thus more than likely that any indicator developed would be attacked by those Parties that are not happy with their vulnerability score attained under the particular indicator. Hence, the problem of global adaptation funds allocation under the UNFCCC is one of negotiating a normative agreement amongst the Parties of the Convention (see also Klein, 2009). Hiding this problem of negotiation behind the label "developing vulnerability indicators" is misleading because it raises the false hope that research could potentially solve the problem, which in turn may only delay the negotiation.

If we consider the allocation of adaptation funds on a national level, I would also answer no. As argued above, there is already more advanced knowledge available on the climate issues a country is facing (e.g., through simulation-model-based assessments, community-based vulnerability assessments or the National Adaptation Programmes of Action). Vulnerability indicators would not reveal more but rather disguise what is known. Countries should address the known issues by negotiating national priorities and developing issue-specific programmes and projects.

5.2.5. Are vulnerability indicators the right means to monitor adaptation policy?

The preliminary answer is no; but again, more specificity is needed. Adaptation is, similarly to vulnerability, a vague and general concept. In most instances, adaptation remains a matter of social learning involving many decision makers on different scales with differing perceptions (Hinkel et al., 2009). Before indicators can be developed, clarity is needed on the specific purpose of an adaptation policy. Indicators cannot be developed for adaptation policy in general. The relevant policy fields need to be identified and policy goals and targets need to be set.

If goals and targets have been set, the subsequent problem of monitoring adaptation policy performance may be addressed. The indicators used for this purpose would, however, be harm and not vulnerability indicators. The successful outcome of adaptation policy is that less harm is observed in the future, which is why harm indicators are often called outcome indicators in this context. Since the outcomes of adaptation policy can, in most cases, only be observed in the far future, it has been suggested that process indicators be used to monitor the process of adaptation itself. The indicators used for this purpose would also not indicate vulnerability but rather the institutional stages of the adaptation process (e.g., whether a heatwave emergency management plan has been put in place or not).

5.2.6. Are vulnerability indicators the right means to conduct scientific research?

In my opinion, the answer is no. Since research is an enquiry process directed towards answering open questions, its success depends on the clarity of the research questions formulated. Labelling research "assessing vulnerability" or "developing vulnerability indicators" is, given the plurality of scales, systems and questions involved, too unspecific and disguises what is actually done. This lack of specificity is, in my opinion, to a large extent responsible for the Babylonian confusion around the concept of vulnerability that prevails in the scientific community.

This confusion can only be alleviated by the usage of more specific labels. The above-mentioned example of developing a heatwave vulnerability index, for example, could more adequately be labelled by the research question underlying it, that is: Which social factors can explain heatwave mortality? To give another example, the above-mentioned work by Brooks et al. (2005) is titled "The determinants of vulnerability and adaptive capacity at the national level and the implications for adaptation". A more specific title such as "Can mortality due to climate-related disasters be explained by national-level socio-economic variables?" would represent the paper's content more accurately and would thus improve the sharpness of scientific communication, a step necessary for advancing the field of climate change impacts and adaptation beyond the Babylonian confusion.

6. Discussion

In 5 of the 6 problems that vulnerability assessments are meant to address according to the literature, it is found that vulnerability indicators are not appropriate methodologies. A first eye-catching reason why they are not appropriate is that the primary problems faced are not ones of assessing vulnerability. In those cases, the term vulnerability is used as an unspecific proxy for addressing much wider research and policy questions, such as communicating risk or negotiating normative agreements, without, however, calling these by their proper names.

This circumstance is, in my opinion, the legacy of the early IPCC work, which focused on mitigation and global impact assessment. In this context, few, well-formulated or at least implicitly clear research questions were addressed, such as: How dangerous are the impacts of climate change and certain mitigation targets? The usage of the three components of the IPCC definition for assessing vulnerability made sense in that they follow the global, top-down line of thought: first, climate models are run to assess the exposure, then damage functions or impact models are applied to assess the sensitivity of the exposed entities, and finally the potential impacts thus attained are "corrected" by assessing adaptive capacity.

In the meantime, a stronger focus on adaptation has emerged and a much wider array of questions on all scales are being addressed. This shift in focus has, however, been insufficiently reflected in the terminology used. The decomposition of vulnerability into the components exposure, sensitivity and adaptive capacity is not necessarily adequate for the much wider array of questions addressed today and should certainly not be taken as a blueprint for assessing or indicating vulnerability. The differences between these components are increasingly blurred the more one moves away from the global, top-down and model-based assessments. This is illustrated by the difficulties that many indicator-based assessments have in trying to decide to which of these three components indicating variables should be attributed. Brooks et al. (2005), for example, report that experts in a focus group asked to attribute lists of indicating variables to either vulnerability or adaptive capacity were largely undecided.

The first necessary step in clarifying the science–policy interface therefore is to be more explicit and specific about the
problems addressed. The one-size-fits-all label ''assessing vulnerability'' is not sufficient. A terminology must be applied that goes beyond the concepts used in the IPCC definition and that is specific enough to convey the diversity of problems addressed and methods applied. Interestingly enough, a recent review of sustainable development indicators comes to similar conclusions (Parris and Kates, 2003).

A second reason why vulnerability indicators are not appropriate for addressing some of the policy problems named above is the lack of deductive arguments for aggregating indicating variables and the lack of inductive arguments at larger scales. This paper thus confirms findings of previous studies (e.g. Barnett et al., 2008) that indicator-based approaches are only appropriate at local scales and when systems can be narrowly defined. Only then is it possible to describe entities by few variables and thus to develop an inductive argument. Furthermore, local indicators cannot be generalised, because vulnerability is context specific (O'Brien et al., 2007) and the factors that create adaptive capacity differ for different climate stimuli (Tol and Yohe, 2007).

Even at local scales, two dilemmas remain. The first pertains to indicators in general. On the one hand, we use indicators because issues are complex and indicators reduce this complexity by describing complex systems in simple terms, at best in terms of single numbers. On the other hand, the very meaning of complexity is non-reducibility (Waldrop, 1992). For complex systems, theories and models are often scarce, uncertain or non-linear. Indicators, however, are simple, mostly linear models. Indicators can therefore, by their very nature, not capture surprise, which is particularly disconcerting in the domain of climate change, because non-linearity, thresholds or so-called ''tipping points'' are essential features of the climate problem (e.g., Lenton et al., 2008).

A second dilemma pertains to climate change vulnerability in particular. As shown here, indicators are best applicable when systems can be well and narrowly defined. The systems considered in the context of climate change vulnerability are, however, generally not of this kind. System boundaries are often difficult to establish and need to be continuously redefined during the course of an assessment (Eriksen and Kelly, 2006; O'Brien et al., 2007). The problem of assessing vulnerability is a so-called ''wicked problem'' (Rittel and Webber, 1973), because there is ambiguity about what exactly the problem to be solved is and no canonical solution exists.

In the face of these dilemmas, I recommend that if vulnerability indicators are to be developed, they should only serve as high-level entry points to further, more detailed background information. Since indicators reduce complexity, they can be interpreted in a variety of different ways, and background information is necessary to prevent misinterpretation. The different types of arguments used in developing indicators should be made transparent. In particular, normative arguments should be made explicit and be based on the preferences of relevant stakeholders. Finally, due to the inherent ''wickedness'' of the task, any vulnerability indicator would need to be updated regularly, based on new research findings.

7. Conclusion

This paper first developed a rigorous conceptual framework for speaking about vulnerability indicators. A vulnerability indicator indicates possible future harm and is a function which maps observable or indicating variables to theoretical variables denoting vulnerability. Since vulnerability refers to future and not present harm, this function must include a predictive model. In contrast to simulation models, the indicator model is simple (often linear) and not explicit in time. Vulnerability indicators must also be distinguished from harm indicators, which do not include this forward-looking aspect but only aggregate the multiple dimensions of an entity's state based on value judgements.
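To fix ideas, the distinction can be sketched in symbols; the notation below is introduced here purely for illustration and is not the formalisation developed in the paper:

v(e) = f(x_1(e), ..., x_n(e)),    h(e) = g(x_1(e), ..., x_n(e)),

where x_1, ..., x_n are observable (indicating) variables of an entity e, f is a vulnerability indicator whose functional form embeds a simple (often linear) predictive model of future harm, and g is a harm indicator that merely aggregates the present state of e on the basis of value judgements, without any predictive component.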
The framework was then applied to analyse which scientific arguments are available for developing vulnerability and harm indicators. In principle, four kinds of arguments are made: (i) deductive ones based on theory, (ii) inductive ones based on data for both harm and indicating variables, (iii) normative ones based on value judgements, and (iv) non-substantial ones based on data for only the indicating variables. The development of indicators usually combines these four types of arguments.
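A purely hypothetical sketch may make the contrast between these argument types concrete; the data, variable choices and weighting below are invented for illustration and are not taken from any assessment discussed in the paper:

import numpy as np

# Toy data: rows are local entities (e.g. households), columns are two
# indicating variables; h holds harm observed in past events for the
# same entities.
X = np.array([[1.0, 0.2],
              [0.4, 0.9],
              [0.7, 0.5],
              [0.2, 0.8]])
h = np.array([0.1, 0.8, 0.4, 0.9])

# Inductive argument: estimate the mapping from indicating variables to
# harm by least squares on observed harm data (feasible only where such
# data exist, i.e. locally and for narrowly defined systems).
A = np.column_stack([np.ones(len(h)), X])
beta, *_ = np.linalg.lstsq(A, h, rcond=None)
v_inductive = A @ beta

# Non-substantial argument: min-max normalise each variable and take an
# unweighted mean; no information about harm enters the construction.
X_norm = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
v_non_substantial = X_norm.mean(axis=1)

In this reading, deductive arguments would enter in the choice of which variables to include in X, and normative arguments wherever the definition of harm or the weighting rests on value judgements rather than on theory or data.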
It was found that deductive arguments are only available for selecting indicating variables and not for aggregating them. Inductive arguments are only available at local scales, when systems can be well and narrowly defined by few variables. At larger scales, inductive arguments can generally not be built, because the systems considered are complex socio-ecological systems which cannot be described by few variables and for which little data is available. If vulnerability is assessed at larger scales, this is a matter of first applying simulation models to project future states of the vulnerable system and then applying harm indicators to evaluate these states. Confusion arises because these approaches are sometimes wrongly called vulnerability indicators in the literature. Normative arguments are generally only applied in the development of harm indicators and not vulnerability indicators. Non-substantial arguments are frequently applied in the aggregation. These arguments, however, do not reveal anything about the processes that cause vulnerability but only summarise the information contained in the data of the indicating variables.
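In the same illustrative notation as above (again a sketch, with symbols introduced here rather than taken from the paper), the larger-scale case amounts to

h_future(e) = g(S(x_1(e), ..., x_n(e), t)),

where S is a simulation model projecting the state of the system to a future time t and g is a harm indicator applied to the projected state. Labelling this composition a ''vulnerability indicator'' conflates the explicit, time-resolved simulation model with the simple, time-implicit indicator function described above.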
Finally, this availability was opposed with the types of problems vulnerability indicators are meant to address. In the literature, vulnerability assessments are said to be carried out for the following six purposes: (i) to identify mitigation targets; (ii) to identify vulnerable entities; (iii) to raise awareness; (iv) to allocate adaptation funds; (v) to monitor adaptation policy; and (vi) to conduct scientific research. It was found that vulnerability indicators are not the appropriate methodology for five of these six purposes. Vulnerability indicators may be appropriate to identify vulnerable people, regions or sectors at local scales, when systems can be narrowly defined and hence deductive arguments are available for selecting indicating variables and inductive ones for aggregating them. For the other purposes, either vulnerability is not the adequate concept or, in the cases where it is, indicator-based approaches are not the appropriate methodology.

The general conclusion that can be drawn from the work presented is that both the policy and academic communities should collaboratively aim to make the problems addressed and the methodologies applied in the context of climate change vulnerability and adaptation more explicit. To this end, a more specific terminology needs to replace the vague and unspecific terminology used in the context of the IPCC definition of vulnerability. The one-size-fits-all label ''vulnerability'' is not suitable, because it disguises the wealth of different types of problems addressed and methods applied. The conceptual work on vulnerability, in particular the quest for a universal definition of vulnerability and associated concepts, should be given up in favour of using the existing terminology of the social sciences (and extending it where needed) to describe problems and methods as specifically as possible.

More specificity is also needed for speaking about vulnerability indicators in particular. Speaking of ''measuring vulnerability'' should be avoided, as this is impossible and raises false expectations. More caution should be taken not to call impact or harm indicators vulnerability indicators. The different types of arguments involved in assessing vulnerability should be made explicit and cleanly separated from each other. In particular, normative arguments should be separated from inductive and
deductive ones. The framework developed in this paper provides a more differentiated terminology that can be applied towards these ends.

Whether, and what kind of, indicators are useful for climate change adaptation policy remain open questions. Before these questions can be addressed, work is needed to spell out the climate-relevant policy fields and to define goals and targets. Problems should thereby be defined as narrowly as possible, for the reasons named above. Given the novelty of the issues, there is a need to experiment and learn. Policy goals, targets and indicators need to be evaluated and refined frequently.

Acknowledgements

This paper is based on research that has been funded by the Research Directorate-General of the European Commission through its projects NEWATER (contract number 511179-GOCE) and ADAM (contract number 018476-GOCE). I thank Sandy Bisaro, Anthony Patt and Alex Harvey for stimulating discussions on this topic and valuable comments on earlier drafts.
References

Adger, N., 2006. Vulnerability. Global Environmental Change 16 (3), 268–281.
Adger, W.N., 2003. Social capital, collective action, and adaptation to climate change. Economic Geography 387–404.
Adger, W.N., Agrawala, S., Mirza, M.M.Q., Conde, C., O'Brien, K., Pulhin, J., Pulwarty, R., Smit, B., Takahashi, K., 2007. Assessment of adaptation practices, options, constraints and capacity. In: Parry, M.L., Canziani, O.F., Palutikof, J.P., van der Linden, P.J., Hanson, C.E. (Eds.), Climate Change 2007: Impacts, Adaptation and Vulnerability. Contribution of Working Group II to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge University Press, Cambridge, pp. 717–743.
Barnett, J., Lambert, S., Fry, I., 2008. The hazards of indicators: insights from the environmental vulnerability index. Annals of the Association of American Geographers 98 (1), 102–119.
Bernard, H.R., 2000. Social Research Methods: Qualitative and Quantitative Approaches. Sage Publications, Thousand Oaks, London.
Blaikie, P.M., 1994. At Risk: Natural Hazards, People's Vulnerability, and Disasters. Routledge.
Bohle, H., Downing, T., Watts, M., 1994. Climate change and social vulnerability. Toward a sociology and geography of food insecurity. Global Environmental Change 4 (1), 37–48.
Böhringer, C., Jochem, P.E.P., 2007. Measuring the immeasurable. A survey of sustainability indices. Ecological Economics 63 (1), 1–8.
Bostrom, A., Morgan, M.G., Fischhoff, B., Read, D., 1994. What do people know about global climate change? 1. Mental models. Risk Analysis 14 (6), 959–970.
Brooks, N., 2003. Vulnerability, risk and adaptation: A conceptual framework. Tyndall Centre Working Paper 38. Tyndall Centre for Climate Change Research, Norwich, UK.
Brooks, N., Adger, W.N., Kelly, P.M., 2005. The determinants of vulnerability and adaptive capacity at the national level and the implications for adaptation. Global Environmental Change 15, 151–163.
Carnap, R., 1995. Introduction to Philosophy of Science. Dover, New York.
Cash, D.W., Clark, W.C., Alcock, F., Dickson, N.M., Eckley, N., Guston, D.H., Jäger, J., Mitchell, R.B., 2003. Knowledge systems for sustainable development. Proceedings of the National Academy of Sciences 100, 8086–8091.
Copi, I.M., Cohen, C., 2005. Introduction to Logic, twelfth edition. Prentice Hall, Upper Saddle River, NJ.
Cutter, S.L., 1996. Vulnerability to environmental hazards. Progress in Human Geography 20, 529–539.
Cutter, S.L., Boruff, B.J., Shirley, W.L., 2003. Social vulnerability to environmental hazards. Social Science Quarterly 84 (2), 242–261.
Desai, M., 1991. Human development concepts and measurement. European Economic Review 35 (2–3), 350–357.
Dessai, S., Hulme, M., 2004. Does climate adaptation policy need probabilities? Climate Policy 4 (2), 107–128.
Eakin, H., Luers, A.L., 2006. Assessing the vulnerability of social-environmental systems. Annual Review of Environment and Resources 31, 365–394.
Eriksen, S.H., Kelly, P.M., 2006. Developing credible vulnerability indicators for climate adaptation policy assessment. Mitigation and Adaptation Strategies for Global Change 12 (4), 495–524.
European Commission, 2009. Impact Assessment accompanying the White Paper – Adapting to climate change: Towards a European framework for action. Tech. Rep., European Commission.
Füssel, H.M., 2007. Vulnerability: A generally applicable conceptual framework for climate change research. Global Environmental Change 17 (2), 155–167.
Füssel, H.M., Klein, R.J.T., 2006. Climate change vulnerability assessments: an evolution of conceptual thinking. Climatic Change 75 (3), 301–329.
Gallopín, G.C., 1991. Human dimensions of global change: linking the global and the local processes. International Social Science Journal 43 (4), 707–718.
Gallopín, G.C., 1997. Indicators and their use: information for decision-making. SCOPE—Scientific Committee on Problems of the Environment International Council of Scientific Unions 58, 13–27.
Gallopín, G.C., 2006. Linkages between vulnerability, resilience, and adaptive capacity. Global Environmental Change 16 (3), 293–303.
Gudmundsson, H., 2003. The policy use of environmental indicators—learning from evaluation research. The Journal of Transdisciplinary Environmental Studies 2 (2), 1–12.
Hahn, M.B., Riederer, A.M., Foster, S.O., 2009. The livelihood vulnerability index: a pragmatic approach to assessing risks from climate variability and change. A case study in Mozambique. Global Environmental Change 19 (1), 74–88.
Hammond, A., Adriaanse, A., Rodenburg, E., Bryant, D., Woodward, R., 1995. Environmental Indicators: A Systematic Approach to Measuring and Reporting on Environmental Policy Performance in the Context of Sustainable Development. World Resources Institute, Washington, DC.
Hewitt, K., 1997. Regions of Risk: A Geographical Introduction to Disasters. Addison-Wesley Longman.
Hinkel, J., 2008. Transdisciplinary knowledge integration. Cases from integrated assessment and vulnerability assessment. Ph.D. thesis, Wageningen University, Wageningen, The Netherlands.
Hinkel, J., Bisaro, S., Downing, T., Hofmann, M.E., Lonsdale, K., McEvoy, D., Tabara, J.D., 2009. Learning to adapt: narratives of decision makers adapting to climate change. In: Hulme, M., Neufeldt, H. (Eds.), Making Climate Change Work for Us: European Perspectives on Adaptation and Mitigation Strategies. Cambridge University Press, pp. 113–134.
Hinkel, J., Klein, R.J.T., 2007. Integrating knowledge for assessing coastal vulnerability. In: Fadden, L.M., Nicholls, R.J., Penning-Rowsell, E. (Eds.), Managing Coastal Vulnerability. Earthscan, London.
Hinkel, J., Klein, R.J.T., 2009. The DINAS-COAST project: developing a tool for the dynamic and interactive assessment of coastal vulnerability. Global Environmental Change 19 (3), 384–395.
IATF/DR, 2006. On better terms. Working Group on Climate Change and Disaster Risk Reduction of the Inter-Agency Task Force on Disaster Reduction, United Nations.
Ionescu, C., Klein, R.J.T., Hinkel, J., Kumar, K.S.K., Klein, R., 2009. Towards a formal framework of vulnerability to climate change. Environmental Modelling and Assessment 14, 1–16.
IPCC, 2007. Summary for Policymakers. In: Palutikof, J., van der Linden, P., Hanson, C. (Eds.), Climate Change 2007: Impacts, Adaptation and Vulnerability. Contribution of Working Group II to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge University Press, Cambridge, UK, pp. 7–22.
ISDR, 2004. Glossary. International Strategy for Disaster Risk Reduction, World Conference on Disaster Reduction, 18–22 January 2005. Kobe, Hyogo, Japan.
Janssen, M.A., Ostrom, E., 2006. Resilience, vulnerability, and adaptation: a cross-cutting theme of the International Human Dimensions Programme on Global Environmental Change. Global Environmental Change 16 (3), 237–239.
Jones, R.N., 2001. An environmental risk assessment/management framework for climate change impact assessments. Natural Hazards 23 (2–3), 197–230.
Kaly, U., Briguglio, L., McLeod, H., Schmall, S., Pratt, C., Pal, R., 1999. Environmental vulnerability index (EVI) to summarise national environmental vulnerability profiles. SOPAC Tech. Rep. (275), 73.
Kates, R.W., 1985. The interaction of climate and society. In: Kates, R.W., Ausubel, J.H., Berberian, M. (Eds.), Climate Impact Assessment: Studies of the Interaction of Climate and Society, vol. 27 of SCOPE Report. John Wiley and Sons, Chichester, UK, pp. 3–36.
Kelly, P.M., Adger, N., 2000. Theory and practice in assessing vulnerability to climate change and facilitating adaptation. Climatic Change 47, 325–352.
Klein, R.J.T., 2009. Identifying countries that are particularly vulnerable to the adverse effects of climate change: an academic or a political challenge? Carbon & Climate Law Review 3, 284–291.
Lenton, T.M., Held, H., Kriegler, E., Hall, J.W., Lucht, W., Rahmstorf, S., Schellnhuber, H.J., 2008. Tipping elements in the Earth's climate system. Proceedings of the National Academy of Sciences 105 (6), 1786.
Luers, A.L., 2005. The surface of vulnerability: an analytical framework for examining environmental change. Global Environmental Change 15 (3), 214–223.
McCarthy, J.J., Canziani, O.F., Leary, N.A., Dokken, D.J., White, K.S. (Eds.), 2001. Climate Change 2001: Impacts, Adaptation, and Vulnerability: Contribution of Working Group II to the Third Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge University Press, Cambridge.
Moss, R.H., Brenkert, A.L., Malone, E.L., 2001. Vulnerability to climate change: a quantitative approach. Pacific Northwest National Laboratory PNNL-SA-33642. Prepared for the US Department of Energy.
Niemeijer, D., 2002. Developing indicators for environmental policy: data-driven and theory-driven approaches examined by example. Environmental Science and Policy 5 (2), 91–103.
Niemeijer, D., de Groot, R.S., 2008. A conceptual framework for selecting environmental indicator sets. Ecological Indicators 8 (1), 14–25.
O'Brien, K., Eriksen, S., Schjolden, A., Nygaard, L.P., 2007. Why different interpretations of vulnerability matter in climate change discourses. Climate Policy 7 (1), 73–88.
OECD, 2008. Handbook on Constructing Composite Indicators: Methodology and User Guide. OECD Publishing.
Ostrom, E., 2005. Understanding Institutional Diversity. Princeton University Press, Princeton, NJ.
Ostrom, E., 2009. A general framework for analyzing sustainability of social-ecological systems. Science 325 (5939), 419.
Parris, T.M., Kates, R.W., 2003. Characterizing and measuring sustainable development. Annual Review of Environment and Resources 28 (1), 559–586.
Parry, M.L., Canziani, O.F., Palutikof, J.P., van der Linden, P.J., Hanson, C.E. (Eds.), 2007. Climate Change 2007: Impacts, Adaptation and Vulnerability. Contribution of Working Group II to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge University Press, Cambridge, UK.
Patt, A., Suarez, P., Gwata, C., 2005. Effects of seasonal climate forecasts and participatory workshops among subsistence farmers in Zimbabwe. Proceedings of the National Academy of Sciences 102 (35), 12623–12628.
Patt, A.G., Schröter, D., de la Vega-Leinert, A.C., Klein, R.J.T., 2008. Vulnerability research and assessment to support adaptation and mitigation: Common themes from the diversity of approaches. In: Patt, A.G., Schröter, D., de la Vega-Leinert, A.C., Klein, R.J.T. (Eds.), Environmental Vulnerability Assessment. Earthscan, London, UK.
Pelling, M., High, C., 2005. Understanding adaptation: what can social capital offer assessments of adaptive capacity? Global Environmental Change 15 (4), 308–319.
Renn, O., 2008. Risk Governance: Coping with Uncertainty in a Complex World. Earthscan.
Rittel, H.W.J., Webber, M.M., 1973. Dilemmas in a general theory of planning. Policy Sciences 4 (2), 155–169.
Sagar, A.D., Najam, A., 1998. The human development index: a critical review. Ecological Economics 25 (3), 249–264.
Schnell, R., Hill, P.B., Esser, E., 1999. Methoden der empirischen Sozialforschung. Oldenbourg, München.
Schröter, D., Cramer, W., Leemans, R., Prentice, I.C., Araújo, M.B., Arnell, N.W., Bondeau, A., Bugmann, H., Carter, T.R., Gracia, C.A., de la Vega-Leinert, A.C., Erhard, M., Ewert, F., Glendining, M., House, J.I., Kankaanpää, S., Klein, R.J.T., Lavorel, S., Lindner, M., Metzger, M.J., Meyer, J., Mitchell, T.D., Reginster, I., Rounsevell, M., Sabaté, S., Sitch, S., Smith, B., Smith, J., Smith, P., Sykes, M.T., Thonicke, K., Thuiller, W., Tuck, G., Zaehle, S., Zierl, B., 2005. Ecosystem Service Supply and Vulnerability to Global Change in Europe. Science 310 (5752), 1333–1337.
Schröter, D., Polsky, C., Patt, A.G., 2004. Assessing vulnerabilities to the effects of global change: an eight step approach. Mitigation and Adaptation Strategies for Global Change 10 (4), 573–595.
Scoones, I., 1998. Sustainable Rural Livelihoods: A Framework for Analysis. Institute of Development Studies, Brighton.
Sen, A., 1983. Poverty and Famines: An Essay on Entitlement and Deprivation. Oxford University Press.
Smit, B., Pilifosova, O.V., Burton, I., Challenger, B., Huq, S., Klein, R.J.T., Yohe, G., Adger, N., Downing, T., Harvey, E., Kane, S., Parry, M.L., Skinner, M., Smith, J., Wandel, J., 2001. Adaptation to climate change in the context of sustainable development and equity. In: McCarthy, J.J., Canziani, O.F., Leary, N.A., Dokken, D.J., White, K.S. (Eds.), Climate Change 2001. Impacts, Adaptation, and Vulnerability. Cambridge University Press, Cambridge, pp. 877–912.
Smit, B., Wandel, J., 2006. Adaptation, adaptive capacity and vulnerability. Global Environmental Change 16 (3), 282–292.
Stegmüller, W., 1974. Begriffsformen, Wissenschaftsprache, empirische Signifikanz und theoretische Begriffe. Probleme und Resultate der Wissenschaftstheorie und Analytischen Philosophie, Band II, Theorie und Erfahrung, 1. Halbband. Springer, Berlin.
Sullivan, C., 2002. Calculating a water poverty index. World Development 30 (7), 1195–1210. http://www.sciencedirect.com/science/article/B6VC6-45MDRWD-3/2/013a361da21bd0b59689f77ada5fe6bf.
Thywissen, K., 2006. Components of risk. A Comparative Glossary. Studies of the University: Research, Counsel, Education (SOURCE). Publication Series of the United Nations University-Institute for Environment and Human Security (UNU-EHS) No 2.
Timmerman, P., 1981. Vulnerability, resilience and the collapse of society. Environmental Monograph 1.
Tol, R.S.J., Yohe, G.W., 2007. The weakest link hypothesis for adaptive capacity: an empirical test. Global Environmental Change 17 (2), 218–227.
Turner, B.L., Kasperson, R.E., Matson, P., McCarthy, J.J., Corell, R.W., Christensen, L., Eckley, N., Kasperson, J.X., Luers, A., Martello, M.L., Polsky, C., Pulsipher, A., Schiller, A., 2003. A framework for vulnerability analysis in sustainability science. Proceedings of the National Academy of Sciences 100 (14), 8074–8079.
UNDP, 1990. Human Development Report. Oxford University Press, New York.
UNDP, 1991. Human Development Report 1991. Oxford University Press, New York.
UNDP, 1993. Human Development Report 1993. Oxford University Press, New York.
UNEP, 2001. Vulnerability indices: climate change impacts and adaptation, vol. 3 of UNEP Policy Series. United Nations Environment Programme, Nairobi.
United Nations, 1992a. Agenda 21. The United Nations Programme of Action From Rio.
United Nations, 1992b. The United Nations Framework Convention on Climate Change.
United Nations, 1994. Barbados Programme of Action.
Waldrop, M.M., 1992. Complexity: The Emerging Science at the Edge of Order and Chaos. Simon & Schuster.
Weyant, J., Davidson, O., Dowlatabadi, H., Edmonds, J., Grubb, M., Richels, R., Rotmans, J., Shukla, P., Cline, W., Fankhauser, S., Tol, R.S.J., Parson, E.A., 1996. Integrated assessment of climate change: an overview and comparison of approaches and results. In: Bruce, J., Lee, H., Haites, E. (Eds.), Climate Change 1995: Economic and Social Dimensions of Climate Change. Cambridge University Press, Cambridge, pp. 367–396.
Wolf, S., Hinkel, J., Ionescu, C., Hofman, M., Bisaro, S., Linke, D., Klein, R.J.T., 2010. Vulnerability: a meta-analysis of definitions and methodologies. A clarification by formalisation. Global Environmental Change, under review.
Yohe, G., Tol, R.S.J., 2002. Indicators for social and economic coping capacity—moving toward a working definition of adaptive capacity. Global Environmental Change 12 (1), 25–40.