
Evaluation of Existing Community Disaster Resilience Approaches and Tools to Support Resilience Planning Efforts

Summary:
• Vulnerability and resilience metrics are not the same, as they measure different
concepts.
• Top-down resilience metrics are best used as an initial filter or broad assessment
of where more information on resilience and its drivers should be gathered.
• Bottom-up metrics can be employed most effectively after a top-down assessment
narrows down study areas of interest.
• Bottom-up metrics can delve into specific communities to best target resilience
programming and funding based on actionable information.

By Margot Habets and Susan L. Cutter

Report for the South Carolina Office of Resilience


May 19, 2023
Introduction
The term resilience has been around for centuries with multiple, intertwined meanings
stretching from the mechanical and natural sciences to engineering, medicine, and the
humanities and social sciences (Alexander 2013). Resilience has been used for decades as an
approach to examining the ability of a system or an entity such as a building to withstand a shock,
cope with or absorb changes, and adapt to such changes to bounce back and regain prior functions.
In the context of disaster risk reduction, resilience was initially applied to ecosystems (Holling
1973) and performance-based engineered structures including lifelines (Bruneau et al. 2003).
Resilience was not really introduced into hazards or disaster planning until the mid-1990s as an
approach for describing the capacity of communities to resist or recover from a disaster shock
(Emrich and Tobin 2018). In this context, resilience was used to describe the multi-dimensional
scale, time, and place-dependent interactions between preparedness, recovery, and adaptation in
response to shocks to communities. Rather than focusing on reducing the vulnerability of places, the focus shifted to positive actions that communities could take not only to improve their capacities to withstand the impacts of disaster risks but also to bounce forward in the aftermath, not simply returning to what was there before.
In 2012, the National Academies published their seminal report, Disaster Resilience: A
National Imperative, to address the obstacles related to increasing the nation’s resilience, describe
the state of knowledge about hazards resilience including baselines and performance metrics, and
provide guidance on needed approaches to elevate resilience as a common goal. The study began
by defining resilience as “the ability to prepare and plan for, absorb, recover from, or more
successfully adapt to actual or potential adverse events” (NRC 2012: 16). In the intervening
decade, the application of disaster resilience to raise awareness about disaster risk reduction and to stimulate communities to engage in and promote resilience actions has produced significant
advancements in resilience planning in all sectors (Walton et al. 2021). Many communities and
states now have a Chief Resilience Officer or a resilience office, resilience is incorporated into risk
mitigation planning at all levels, and in South Carolina, resilience is now a required element in
comprehensive plans (S.C. Code § 6-29-510).
While there is enormous enthusiasm for the idea and concept of disaster resilience,
resilience measurement science and practice still are not mature enough to determine which
approach works best in theory, or more importantly in practice (NASEM 2019). The purpose of
this white paper is to provide a critical evaluation of the current metrics and approaches used in
disaster/hazard resilience, including a comparison of their relative strengths and weaknesses, to
help inform South Carolina’s strategic statewide resilience and risk reduction planning effort.
Metrics for Disaster Resilience
There is no dominant framework or standard for resilience measurement (Cutter 2016a)
because communities are different in their physical, social, and built environment characteristics,
disaster risk exposures, and capacities. By measurement, we mean the action of assessing a place
(or event) using a standard approach to compare the place over time, after changes in conditions,
or with other places (NASEM 2019). There are a multitude of activities and frameworks for
measuring resilience, which generally focus on the inherent resilience of a community – the pre-
existing resilience a community has at a particular point in time (Asadzadeh et al. 2017). Each
approach requires choices on resilience definitions, input data (quantitative, qualitative), study
area, and the hazards considered (Parker 2020). Individually and collectively such choices
influence the complexity of the metric and the transferability of the approach from one place to another, or from one timeframe to another.
What are resilient communities?
Resilience can be a measurable outcome, a process, or some combination of the two.
Common elements in resilience frameworks focus on assets (the restoration of the physical
infrastructure) to achieve an outcome after an event (static conditions), or on social processes that
improve social and institutional capacities through social learning (dynamic processes). In some
instances, both asset and capacity approaches are used to define community resilience. However,
inherent in that conversation are the questions of resilience to what? And resilience for whom?
(Cutter 2016b; Meerow and Newell 2019). These basic conceptual differences (assets vs.
capacities; to what vs. for whom; and static vs. dynamic processes) influence the various
measurement approaches and resulting outputs.
South Carolina defines resilience in its introduction to the draft Strategic Statewide
Resilience and Risk Reduction Plan as “the ability of communities, economies, and ecosystems
within South Carolina to anticipate, absorb, recover and thrive when presented with environmental
change and natural hazards” (SCOR 2023, 14). In this respect, the state has taken a combination
of the assets and capacities perspective in its definition.
Most measurement schemas take a broad holistic view suggesting that communities contain
many different dimensions of resilience that are interdependent and connected. These inherent
dimensions are often referred to as capitals (environmental, economic/financial, cultural, social,
and infrastructure), and the capitals approach provides the general conceptual model for many of
the measurement approaches (Tierney 2019). Other schemas focus on the disaster cycle (e.g.,
recovery, preparedness) and concentrate on how social and cultural systems recover post-event
(Clarke and Mayer 2017) or measure resilience as the length of time for infrastructure (or lifeline)
restoration (in hours or days) after a major earthquake (Poland 2009). Other approaches may be
more localized in context such as the resilience of cities (Bozza et al. 2017; McPhearson et al.
2015) or rural areas (Cox and Hamlen 2015) or focused on a particular hazard or acute stressor
such as flooding (van de Lindt et al. 2020). Even resilience metrics that approach the concept from
the same framework can make different decisions in the variable selection and methodology,
resulting in different resilience measures and findings (Jones 2018).

How is it measured?
Common elements in community resilience measurement schemas include information on
the physical attributes and assets of an area combined with social and institutional capacities. Such
measurements are normally static snapshots of a particular time or context to be compared to other
indicators such as sustainable development goals (SDGs), vulnerability indices, environmental
justice metrics like EJ-40, or identified response needs within a community (e.g., FEMA lifelines).
Resilience metrics do not measure sustainability, FEMA lifeline performance, vulnerability, or
environmental justice (See Box 1). Resilience assessments determine prevailing conditions or
baselines of existing resilience in communities. These baselines provide the foundation for future
assessments (generally employing the same methodology) that can be compared to monitor
progress over time. However, limitations in input data render many of the current tools or
techniques not directly actionable or changeable. For example, variables that are difficult to
measure, operate over longer time scales before a change occurs, or are outliers are often ignored.
Instead, assessments use more readily available indicators as proxies, arguing they still may have some
importance in community resilience (Cardoni et al. 2021; Carvalhaes et al. 2022).

Box 1: Vulnerability and Resilience


Social vulnerability is a product of social and place inequalities that result in differential harm to, and differential ability to respond among, population groups (Cutter et al. 2003). It is generally a measure of the exposure to and degree of harm that a community may face. Community resilience, on the
other hand, encompasses the everyday qualities of a community that may enhance its ability
to prepare for, respond to, and recover from hazard events (Cutter et al. 2014). Their
relationship can be conceptualized as a Venn diagram, with a level of overlap between the
two, but still distinct differences.
Some indices that purport to measure resilience only use vulnerability indicators, such as the
U.S. Census Bureau’s (USCB) Community Resilience Estimates (CRE), assuming that the
two are opposites (USCB 2022). However, this would mean that all places with high social
vulnerability have low community resilience, which has been refuted (Derakhshan et al.
2022). Measuring resilience is therefore not the same as measuring vulnerability, and it is important to approach the two separately. This is the methodology adopted by the National Risk Index (NRI), which allows for a clear distinction between the two concepts while also recognizing how they overlap (Zuzak et al. 2023).

There are four primary ways that researchers implement resilience measurements: 1)
checklists, 2) scorecards, 3) indices that create a resilience assessment tool, and 4)
mathematical/statistical models. Scorecards and checklists tend to take a qualitative or self-
reported approach. They identify focal points of resilience in planning, and local business, and ask
local areas to determine their presence or absence within the community (a checklist) (Sempier et
al. 2010) or provide some assessment of the attribute’s conditions using a scorecard (Berke et al.
2019; Malecha et al. 2021).
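To illustrate how a checklist of this kind can be rolled up into a simple score, the sketch below tallies hypothetical presence/absence answers overall and by category; the items, categories, and equal weighting are assumptions for illustration and are not drawn from the CCRI or any other published tool.

# Illustrative Python sketch: converting checklist answers into simple scores.
# The items and categories below are hypothetical, not from any specific tool.
from collections import defaultdict

checklist = [
    {"category": "critical_infrastructure", "item": "critical facilities outside flood zone", "present": True},
    {"category": "community_plans", "item": "comprehensive plan addresses hazards", "present": True},
    {"category": "mitigation", "item": "early warning system in place", "present": False},
    {"category": "social_systems", "item": "volunteer network identified", "present": True},
]

def checklist_scores(items):
    """Return the share of items marked present, overall and per category."""
    per_category = defaultdict(list)
    for entry in items:
        per_category[entry["category"]].append(entry["present"])
    by_category = {cat: sum(vals) / len(vals) for cat, vals in per_category.items()}
    overall = sum(e["present"] for e in items) / len(items)
    return overall, by_category

overall, by_category = checklist_scores(checklist)
print(f"Overall share of items present: {overall:.2f}")
print(by_category)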

The most common way to measure resilience is by using multi-variate composite indices
(Cutter 2016a). Resilience indices choose a variety of quantitative variables that theoretically
enhance resilience and combine them to create a comparative value of resilience for a selected
study area. These methods tend to rely on large data aggregations and normally do not elicit
participation from local stakeholders in their construction (Asadzadeh et al. 2017).
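As a rough illustration of how such a composite index can be assembled, the sketch below min-max normalizes a few hypothetical county-level variables, reverses those assumed to reduce resilience, averages variables within their capitals, and sums the capital sub-indices; the variables, capital assignments, and equal weighting are illustrative assumptions, not the construction of any particular index listed here.

# Illustrative Python sketch of a hierarchical composite resilience index.
# Variables, capital assignments, and directions are hypothetical.
import pandas as pd

data = pd.DataFrame(
    {"physicians_per_1000": [2.1, 0.8, 1.5],
     "gini_coefficient": [0.42, 0.51, 0.46],   # higher values assumed to reduce resilience
     "pct_with_insurance": [0.91, 0.78, 0.85]},
    index=["County A", "County B", "County C"],
)

capitals = {"physicians_per_1000": ("community", +1),
            "gini_coefficient": ("economic", -1),
            "pct_with_insurance": ("social", +1)}

def composite_index(df, mapping):
    """Min-max normalize each variable, invert negative ones, average within
    each capital, and sum the capital sub-indices into one comparative score."""
    groups = {}
    for var, (capital, direction) in mapping.items():
        col = df[var]
        norm = (col - col.min()) / (col.max() - col.min())
        groups.setdefault(capital, []).append(norm if direction > 0 else 1 - norm)
    sub_indices = pd.DataFrame({cap: sum(cols) / len(cols) for cap, cols in groups.items()})
    return sub_indices.sum(axis=1)  # higher = greater inherent resilience, relative to the study area

print(composite_index(data, capitals))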
Lastly, mathematical models and more advanced models such as AI try to model the
performance of infrastructure, human decision-making, and complex systems to understand the
dynamic forms and processes of resilience (Yabe et al. 2022). The results or outputs of resilience
measurement are often visualized through different types of mapping, charts, and dashboards, which combine multiple visualizations and offer a more holistic view of community resilience
(Nguyen and Akerkar 2020).
Within each type of metric, researchers have to make decisions on how resilience is defined
and how it will be measured, either subjectively (defined by the subject) or objectively (defined
by theory and literature) (Jones 2018). Comparative reviews of the relative strengths and
weaknesses of various resilience metrics and tools abound with critiques ranging from
conceptualization, to methods, to input data (Bakkensen et al. 2017; Cai et al. 2018; Koliou et al.
2018; Johnson et al. 2020; NASEM 2019; Nguyen and Akerkar 2020; Sharifi 2016). However,
these critiques also highlight room for improvement, especially in translating the science of
resilience metrics to practice. Aligning top-down metrics comparable across multiple areas with
more locally-based bottom-up ones that may not be comparable in other places has been the major
impediment in moving community disaster resilience concepts to action (Cutter 2018).
What scale and units of measurement are used?
The choice of scale often depends on the level of decision-making addressing resilience as
well as the availability of data for analysis. In more qualitative schemas, scale also depends on the
type of tool used. An institution that addresses resilience at a national level, like FEMA, may
only be interested in resilience at a state or county level, while a state or county may be able to
fund specific projects and would find a zip code or census tract-level analysis more helpful. The
scale of data that is available can be a large limitation for resilience measurement. Since individual-
level data are unavailable, aggregated demographic data are used, but this must be done with caution
to avoid issues with interpretation (Chu et al. 2021).
The unit of analysis is another consideration. Units of analysis are the objects of the study—
communities, watersheds, states, and countries. For communities, the unit of analysis is typically an administrative boundary such as counties, municipal boundaries, census
tracts/blocks, zip codes, or metropolitan statistical areas. Many of the data inputs on the resilience
capitals come from Census information, so census tracts/blocks, zip codes, or other census-
designated geographies define community boundaries instead of actual jurisdictional control. For
some localized applications, census-defined enumeration units are problematic where other defined
areas (e.g., watersheds, flood zones, neighborhoods, land use) might have more currency for
measuring community resilience.

Fit for Purpose: Top-Down versus Bottom-Up Tools
While existing tools are useful for their specific design purpose, they are often limited in
their application to the specific and localized needs and investments of communities. In addition,
many resilience metrics are only conceptual or have been developed for one area and have yet to
be widely implemented. Resilience metrics can be described as top-down or bottom-up. While
theoretically and conceptually driven and often using national datasets for consistency, top-down
measurements use a single value to represent all the dimensions of resilience. Top-down schemas
are more policy-oriented at national, regional, and state scales where counties are the unit of
analysis. The top-down schema provides comparative analyses across large geographic areas based
on aggregated data.
In contrast, bottom-up approaches provide a rich narrative on community change and actions at very localized scales (sub-county). The qualitative data or experiential information they use is not generalizable across broader geographies. There is a need to consider “fit for purpose” in the selection of tools based on policy or local action orientation. In either case, there is a need for metrics that reflect local conditions based on actionable data yet can scale up beyond the local community (bottom-up) to reveal broader patterns of resilience at larger scales, or that downscale disaggregated national or state data to reflect local variability and partially capture local assets and capacities.
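One simple, purely illustrative way to downscale in this spirit is to keep tract-level measurements where they exist and carry county-only values down to every tract in that county, as in the sketch below; the variables and values are hypothetical and this is not the method of any index discussed here.

# Illustrative Python sketch: downscaling county-level inputs to census tracts.
# Hypothetical data; tract-level values are kept, county-only values are carried down.
import pandas as pd

tracts = pd.DataFrame(
    {"county": ["X", "X", "Y"],
     "pct_with_insurance": [0.92, 0.81, 0.77]},      # measured at the tract level
    index=["tract 1", "tract 2", "tract 3"],
)
county_only = pd.DataFrame(
    {"physicians_per_1000": [2.1, 0.9]},             # only available at the county level
    index=["X", "Y"],
)

# Join county-only variables onto tracts so every tract inherits its county's value.
downscaled = tracts.join(county_only, on="county")
print(downscaled)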
A summary of commonly used and/or cited resilience indices is provided in Table 1. The
methodology used in the metric as well as the conceptual structuring of resilience (capitals) are
provided. Additional information including the approach, goal, positives, critiques, and a sample
of included variables for each metric is included in Appendix A. General findings for top-down
and bottom-up metrics as well as room for improvement are discussed below.

TABLE 1 – Summary of Indicators/Tools Described, Split by Top-Down or Bottom-Up Approach

TOP-DOWN
Baseline Resilience Indicators for Communities (HVRI BRIC). Author(s): Cutter et al. 2014; Derakhshan et al. 2022. Scale of analysis: County, Tract. Type: Model/indicator tool. Dimensions/capacities: social, economic, community capital, institutional, housing/infrastructure, environmental.
FEMA Community Resilience Index (FEMA CRI). Author(s): FEMA 2022. Scale of analysis: County, Census Tract. Type: Model/indicator tool. Dimensions/capacities: population, household, housing, healthcare, economic, connection to community.
Community Intrinsic Resilience Index (CIRI). Author(s): Gerges et al. 2022. Scale of analysis: County. Type: Model/indicator tool. Dimensions/capacities: transportation, energy, health and socio-economic.
Community Resilience Index. Author(s): Sherrieb et al. 2010. Scale of analysis: County. Type: Model/indicator tool. Dimensions/capacities: social capital, economic development.
Natural Hazard Resilience Screening Index (NaHRSI). Author(s): Summers et al. 2018. Scale of analysis: County. Type: Model/indicator tool. Dimensions/capacities: natural environment, built environment, society, governance, and risk.
PEOPLES Framework. Author(s): Renschler et al. 2010; Cimellaro et al. 2016. Scale of analysis: Community. Type: Model/indicator tool. Dimensions/capacities: population and demographics, environmental and ecosystem, organized governmental services, physical infrastructures, lifestyle and community competence, economic development, social-cultural capital.

BOTTOM-UP
Coastal Communities Resilience Index (CCRI). Author(s): Sempier et al. 2021. Scale of analysis: Community. Type: Scorecard/checklist. Dimensions/capacities: critical infrastructure, transportation, community plans, mitigation measures, business plans, social systems.
Communities Advancing Resilience Toolkit (CART). Author(s): Pfefferbaum et al. 2013; 2015. Scale of analysis: Community. Type: Scorecard/checklist. Dimensions/capacities: connection and caring, resources, transformative potential, disaster management, information and communication.
Composite of Post-Event Well-being (COPEWELL) Model and Rubric. Author(s): Links et al. 2018; Schoch-Spana et al. 2019. Scale of analysis: County; Community. Type: Model/indicator tool and scorecard/checklist. Dimensions/capacities (index): pre-event community functioning; prevention and mitigation; population vulnerability, inequality, and deprivation; social capital and cohesion; event preparedness and response; external resources.
Los Angeles County Community Disaster Resilience Project (LACCDR). Author(s): Eisenman et al. 2014. Scale of analysis: Community. Type: Scorecard/checklist. Dimensions/capacities: education, engagement, self-sufficiency, partnership.
Rural Coastal Community Resilience (RCCR) framework. Author(s): Jurjonas & Seekamp 2017; Jurjonas et al. 2020. Scale of analysis: Community. Type: Scorecard/checklist. Dimensions/capacities: livelihood dependency/diversity, poverty/prosperity, un/sustainable development, community disengagement/cohesion, rigidity/agency.
Rural Resilience Index (RRI). Author(s): Cox and Hamlen 2015. Scale of analysis: Community. Type: Model/indicator tool. Dimensions/capacities: social fabric, community resources, disaster management.

Top-Down Resilience Metrics
Top-down resilience metrics are used to give a snapshot of the inherent resilience of a study
region. They can be used for a comparative understanding of the resilience landscape of a study
region that can lead to improved decision-making at a state or county scale. This scalability lets
stakeholders understand resilience across a large area while targeting specific counties that may
need additional resilience resources. Since county-level analysis aligns with existing resilience
programming from the federal government, top-down indices are a good first step to using those
resources in the places they are most needed.
All of the top-down approaches begin their study with a pre-determined definition and
framework of resilience. According to their theoretical approaches, FEMA CRI, the PEOPLES
framework, and NaHRSI conflate resilience with social vulnerability, whereas BRIC, CIRI, and
the Community Resilience Index either explicitly define their approach to the relationship between
resilience and vulnerability, or only include variables that are generally not used in social
vulnerability measurement. For example, the Community Resilience Index only addresses social
capital and economic development, and CIRI only uses 15 variables whereas BRIC widens the
scope of resilience to include 49 variables within social, economic, environmental, community,
institutional, and infrastructural capitals.
All top-down resilience metrics discussed here are indices created from local or national
datasets except for the PEOPLES Framework which is a largely GIS-based tool. Large, publicly
available datasets, while often only available at the county or census tract scale, are consistently
available over time and can be used to identify broad drivers and temporal patterns of resilience.
Locally sourced datasets, as used in CIRI, can be more accurate, and more data related to resilience may be available. Depending on the scale and datasets used, top-down resilience metrics can be quickly calculated, but some metrics rely on derived datasets that are more time- and labor-intensive, such as NaHRSI and the PEOPLES Framework, which require complex modeling and extensive data collection (117 and 95 variables, respectively). CIRI includes an additional equation
that would model resilience after a hazard, but this can only be related to infrastructural resilience
(Gerges et al. 2022). In addition to choosing variables that represent resilience, some indices strive
to include actionable variables, those that can be directly impacted by governments, to help
identify what changes need to be implemented to improve resilience for the target community. Of
all the indices (and variables within them) listed in Table 1, none is composed entirely of actionable variables, but each includes some, and the actionable variables differ widely among them.
An additional difficulty of top-down resilience indices is their wider application and
validation of outcomes. Variables in existing resilience indices may not always be applicable to
each study area or data may not be available at chosen locations or scales. Resilience is also a
place-based process and variables that may do a good job quantifying inherent resilience in one
area, for example on the coast of South Carolina, may be a poor choice for a variable in a
mountainous land-locked state such as Colorado. Expert and stakeholder input can improve
variable selection to possibly move national resilience indicators to the state and local scale,
making them more operational. Quantitative metrics like indices also require statistical and
external validation to make sure the measurement accurately portrays what it says it will.
Validation varies over the metrics presented, but there is work to be done overall in resilience
metric testing (Koliou et al. 2018).
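One common internal-consistency check used in index construction is Cronbach's alpha, computed for the variables grouped within a sub-index; the short sketch below illustrates the calculation on synthetic data and is not taken from any of the metrics reviewed here.

# Illustrative Python sketch: Cronbach's alpha as an internal-consistency check.
# The variable grouping and data are synthetic.
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (observations x variables) array belonging to one sub-index."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

rng = np.random.default_rng(0)
base = rng.uniform(size=100)                       # shared underlying signal
social = np.column_stack([base + rng.normal(0, 0.1, size=100) for _ in range(3)])
print(f"alpha = {cronbach_alpha(social):.2f}")     # values near 1 suggest the variables hang together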
Bottom-Up Resilience Metrics
Though bottom-up resilience metrics may not always use a resilience framework created
by the study community, they are implemented at a local scale and are measurements of local,
place-based community resilience. Place-based means that the indicators used as well as the people
surveyed are local and account for contexts of resilience that may not be found elsewhere. These
contexts can also be captured by top-down resilience metrics if they are created in partnership with communities, but
bottom-up metrics have this built into their methods from the beginning. The only index in our
assessment that is truly bottom-up is the RRI, which identifies variables through stakeholder
engagement and builds an index from these variables only (Cox and Hamlen 2015). In addition,
since bottom-up metrics involve the community in conceptualizing and measuring resilience, they
can also act as a resilience-building exercise.
Most bottom-up metrics are either community scorecards and checklists or community
assessments. Checklists, such as the CCRI, identify a specific audience for the assessment and direct
them to grade different parts of the study area on resilience qualities that generally cannot be
quantitatively measured. These types of assessments are easier to implement than larger focus
groups or participatory action research, but still function as both a teaching and assessment
mechanism. Generally, scorecards are created through an assessment of resilience literature to
identify what qualities improve the resilience of the community targeted and then administered to
that community. The assessment itself can be created alongside stakeholders, as is done in COPEWELL, or it can be research-based and adapted after it is administered, as the RCCR Framework did with its scorecard.
Community assessments take many different forms. The two assessments discussed here
are chosen to show the different approaches one can take in qualitatively assessing community
resilience in this way. CART adopts a four-stage participatory methodology, which, while time-
consuming, involves the target community through every step of the resilience assessment process
and results in a plan for resilience improvement. LACCDR takes a public health approach to
resilience and trains groups on resilience building that they then take home and implement; this relies heavily on established NGOs, improving their resilience but possibly drawing effort away from other programming priorities. Both community assessments involve the creation of a
resilience toolkit and implementation over multiple meetings and stages with ample opportunity
for stakeholder feedback and revision. They are time-intensive, involving multiple different
qualitative methods (i.e., interviews, surveys, focus groups, community mapping, network
analysis) to determine very localized but detail-rich understandings of community resilience.
Bottom-up methods vary in approach and methodology to address resilience in a multitude
of ways, but their scope normally narrows to one or two capitals of resilience to keep them feasible.
The audience of these metrics can vary from government workers to NGOs to the public. Finding
participants and drawing out actions that can improve resilience takes time and effort, not only on
the researcher’s behalf but also by the community that is being assessed. These limitations can
make it difficult to repeat bottom-up methodologies or apply them across different parts of a larger
study area.
Conclusions
There is no single resilience metric that can tell researchers and stakeholders everything
they need to know about a study area’s resilience. However, resilience metrics are not all created
using the same approach, and they must be critically assessed before being applied to a study area
(Jones 2018). Resilience is theoretically distinct from vulnerability; however, many resilience metrics conflate the two concepts, resulting in measurements that do not explicitly capture resilience.
Top-down resilience indices involve large datasets that distill resilience down to
quantitative variables that are either combined into an index or used in GIS programs that attempt
to portray the systems of systems of resilience. These metrics can be used to aid in decision-making
and track resilience over time but are not always actionable and require local input to accurately
integrate more local measures of resilience. Also, there is still work to do to test and validate
different resilience metrics (Koliou et al. 2018, Parker 2020).
Bottom-up metrics are generally limited to approaching resilience through one or two
capitals due to their time- and resource-intensive nature. The interactive nature of bottom-up
metrics may result in the metric acting as a resilience intervention itself and can result in specific
resilience actions that are community-identified and supported. However, the time intensity of
these metrics means that it is difficult to track resilience over time or to administer multiple bottom-
up metrics over a larger area without substantial dedicated resources. As it stands, a combination
of top-down and bottom-up approaches is necessary to both identify areas with low inherent
resilience (policy-oriented) and actions that will be community-supported and effectively improve
resilience.
Currently, Charleston, Lexington, Florence, and York Counties all have specific resilience
chapters as parts of their comprehensive plans. They all identify hazards that directly impact the
counties and key tasks or actions that must be taken to improve their resiliency to these events, but
there is no evidentiary basis (e.g., direct or indirect measurement of resilience) for such actions or
mechanisms for monitoring their effectiveness. A top-down resilience metric coupled with bottom-
up resilience priorities can effectively target communities at the state and county level that are less
resilient and guide programs and projects that local communities self-determine. This is necessary
to efficiently and effectively utilize limited resilience funding for the largest impact on local
community resilience.

References
Alexander, D. E. 2013. “Resilience and Disaster Risk Reduction: An Etymological Journey.”
Natural Hazards and Earth System Sciences 13 (11): 2707–16.
https://doi.org/10.5194/nhess-13-2707-2013.
Asadzadeh, A., T. Kötter, P. Salehi, and J. Birkmann. 2017. “Operationalizing a Concept: The
Systematic Review of Composite Indicator Building for Measuring Community Disaster
Resilience.” International Journal of Disaster Risk Reduction 25 (October): 147–62.
https://doi.org/10.1016/j.ijdrr.2017.09.015.
Bakkensen, Laura A., Cate Fox-Lent, Laura K. Reid, and Igor Linkov. 2017. “Validating
Resilience and Vulnerability Indices in the Context of Natural Disasters.” Risk Analysis
37(5): 982-1004. https://doi.org/10.1111/risa.12677.
Berke, Philip, Matthew L. Malecha, Siyu Yu, Jaekyung Lee, and Jaimie H. Masterson. 2019. “Plan
Integration for Resilience Scorecard: Evaluating Networks of Plans in Six US Coastal
Cities.” J. Environmental Planning and Management 62(5): 901-920.
https://doi.org/10.1080/09640568.2018.1453354.
Bozza, Anna, Domenico Asprone, and Francesco Fabbrocino. 2017. “Urban Resilience: A Civil
Engineering Perspective.” Sustainability 9:103. https://doi.org/10.3390/su9010103.
Bruneau, Michel, Stephanie Chang, Ronald Eguchi, George Lee, Thomas O’Rourke, Andrei
Reinhorn, Masanobu Shinozuka, Kathleen Tierney, William Wallace, and Detlof von
Winterfeldt. 2003. “A Framework to Quantitatively Assess and Enhance Seismic
Resilience of Communities.” Earthquake Spectra 19(4):733-752.
https://doi.org/10.1193/1.1623497.
Cai, Heng, Nina S.N. Lam, Yi Qian, Lei Zou, Rachel M. Correll, and Volodymyr Mihunov. 2018.
“A Synthesis of Disaster Resilience Measurement Methods and Indices.” Intl. J. Dis. Risk
Reduction 31:844-855. https://doi.org/10.1016/j.ijdrr.2018.07.015.
Cardoni, Alessandro, Ali Zamani Noori, Rita Greco, and Gian Paolo Cimellaro. 2021. “Resilience
Assessment at the Regional Level Using Census Data.” International Journal of Disaster
Risk Reduction 55 (March): 102059. https://doi.org/10.1016/j.ijdrr.2021.102059.
Carvalhaes, Thomaz, Vivaldi Rinaldi, Zhen Goh, Shams Azad, Juanita Uribe, and Masoud
Ghandehari. 2022. “Integrating Spatial and Ethnographic Methods for Resilience
Research: A Thick Mapping Approach for Hurricane Maria in Puerto Rico.” Annals of the American Association of Geographers 112 (8): 2413–35. https://doi.org/10.1080/24694452.2022.2071200.
Chu, Samantha Hao Yiu, Su-Yin Tan, and Linda Mortsch. 2021. “Social Resilience to Flooding in
Vancouver: The Issue of Scale.” Environmental Hazards 20 (4): 400–415.
https://doi.org/10.1080/17477891.2020.1834345.

Cimellaro, Gian Paolo, Chris Renschler, Andrei M. Reinhorn, and Lucy Arendt. 2016. “PEOPLES:
A Framework for Evaluating Resilience.” Journal of Structural Engineering 142 (10):
04016063. https://doi.org/10.1061/(ASCE)ST.1943-541X.0001514.
Clarke, Hannah E., and Brian Mayer. 2017. “Community Recovery Following the Deepwater
Horizon Oil Spill: Toward a Theory of Cultural Resilience.” Society & Natural Resources
30(2):129-144. https://doi.org/10.1080/08941920.2016.1185556.
Cox, Robin S., and Marti Hamlen. 2015. “Community Disaster Resilience and the Rural Resilience
Index.” American Behavioral Scientist 59 (2): 220–37.
https://doi.org/10.1177/0002764214550297.
Cutter, Susan L. 2016a. “The Landscape of Disaster Resilience Indicators in the USA.” Natural
Hazards 80 (2): 741–58. https://doi.org/10.1007/s11069-015-1993-2.
———. 2016b. “Resilience to What? Resilience for Whom?” The Geographical Journal 182 (2):
110–13. https://doi.org/10.1111/geoj.12174.
———. 2018. “Linkages between Vulnerability and Resilience.” Chapter 12 in Sven Fuchs and
Thomas Thaler (eds.), Vulnerability and Resilience to Natural Hazards, Cambridge
University Press, pp. 257-270.
Cutter, Susan L., Kevin D. Ash, and Christopher T. Emrich. 2014. “The Geographies of
Community Disaster Resilience.” Global Environmental Change: Human and Policy
Dimensions 29: 65–77. https://doi.org/10.1016/j.gloenvcha.2014.08.005.
Cutter, Susan L., Bryan J. Boruff, and W. Lynn Shirley. 2003. “Social Vulnerability to
Environmental Hazards.” Social Science Quarterly 84 (2): 242–61.
https://doi.org/10.1111/1540-6237.8402002.
Derakhshan, Sahar, Leah Blackwood, Margot Habets, Julia F. Effgen, and Susan L. Cutter. 2022a.
“Prisoners of Scale: Downscaling Community Resilience Measurements for Enhanced
Use.” Sustainability 14: 6927. https://doi.org/10.3390/su14116927.
Derakhshan, Sahar, Christopher T. Emrich, and Susan L. Cutter. 2022b. “Degree and Direction of
Overlap between Social Vulnerability and Community Resilience Measurements.” PloS
One 17 (10): e0275975. https://doi.org/10.1371/journal.pone.0275975.
Eisenman, David, Anita Chandra, Stella Fogleman, Aizita Magana, Astrid Hendricks, Ken Wells,
Malcolm Williams, Jennifer Tang, and Alonzo Plough. 2014. "The Los Angeles County
Community Disaster Resilience Project - A Community-Level, Public Health Initiative to
Build Community Disaster Resilience." International Journal of Environmental Research
and Public Health 11 (8): 8475-90. https://doi.org/10.3390/ijerph110808475.
Emrich, Christopher T. and Graham A. Tobin. 2018. “Resilience: An Introduction.” Chapter 7 in
Sven Fuchs and Thomas Thaler (eds.), Vulnerability and Resilience to Natural Hazards,
Cambridge University Press, pp. 124-144.

Federal Emergency Management Agency (FEMA). 2022. “Community Resilience Indicator
Analysis: Commonly Used Indicators from Peer-Reviewed Research: Updated for
Research Published 2003-2021.”
https://www.fema.gov/sites/default/files/documents/fema_2022-community-resilience-
indicator-analysis.pdf
Gerges, Firas, Hani Nassif, Xiaolong Geng, Holly A. Michael, and Michel C. Boufadel. 2022.
“GIS-Based Approach for Evaluating a Community Intrinsic Resilience Index.” Natural
Hazards 111 (2): 1271–99. https://doi.org/10.1007/s11069-021-05094-w.
Holling, C.S. 1973. “Resilience and Stability of Ecological Systems.” Annual Review of Ecology
and Systematics 4:1-23.
Johnson, Paul M., Corey E. Brady, Craig Philip, Hiba Baroud, Janey V. Camp, and Mark
Abkowitz, 2020. “A Factor Analysis Approach toward Reconciling Community
Vulnerability and Resilience Indices for Natural Hazards.” Risk Analysis 40 (9):1795-1810.
https://doi.org/10.1111/risa.13508.
Jones, Leslie. 2018. “Resilience isn’t the same for all: Comparing subjective and objective
approaches to resilience measurement.” WIREs Climate Change 10 (1).
https://doi.org/10.1002/wcc.552.
Jurjonas, Matthew, and Erin Seekamp. 2018. “Rural Coastal Community Resilience: Assessing a
Framework in Eastern North Carolina.” Ocean & Coastal Management 162 (August): 137–
50. https://doi.org/10.1016/j.ocecoaman.2017.10.010.
Jurjonas, Matthew, Erin Seekamp, Louie Rivers, and Bethany Cutts. 2020. “Uncovering Climate
(in)justice with an Adaptive Capacity Assessment: A Multiple Case Study in Rural Coastal
North Carolina.” Land Use Policy 94 (February): 104547.
https://doi.org/10.1016/j.landusepol.2020.104547.
Koliou, Maria, John W. van de Lindt, Therese P. McAllister, Bruce R. Ellingwood, Maria Dillard,
and Harvey Cutler. 2018. “State of the Research in Community Resilience: Progress and
Challenges.” Sustainable and Resilient Infrastructure 5 (3): 131–51.
https://doi.org/10.1080/23789689.2017.1418547.
Links, Jonathan M., Brian S. Schwartz, Sen Lin, Norma Kanarek, Judith Mitrani-Reiser, Tara Kirk
Sell, Crystal R. Watson, et al. 2018. “COPEWELL: A Conceptual Framework and System
Dynamics Model for Predicting Community Functioning and Resilience After Disasters.”
Disaster Medicine and Public Health Preparedness 12 (1): 127–37.
https://doi.org/10.1017/dmp.2017.39.
Malecha, Matthew L., Sierra C. Woodruff, and Philip R. Berke, 2021. “Planning to Exacerbate
Flooding: Evaluating a Houston, Texas Network of Plans in Place during Hurricane Harvey
Using a Plan Integration for Resilience Scorecard.” Natural Hazards Review
22(4):04021030-1. https://doi.org/10.1061/(ASCE)NH.1527-6996.0000470.

McPhearson T., E. Andersson, T. Elmqvist, and N. Frantzeskaki. 2015. “Resilience of and
Through Urban Ecosystem Services. Ecosyst Serv 12:152–156.
https://doi.org/10.1016/j.ecoser.2014.07.012.
Meerow, Sara, and Joshua P. Newell. 2019. “Urban Resilience for Whom, What, When, Where,
and Why?” Urban Geography 40 (3): 309–29.
https://doi.org/10.1080/02723638.2016.1206395.
National Academies of Sciences, Engineering, and Medicine (NASEM). 2019. Building and
Measuring Community Resilience: Actions for Communities and the Gulf Research
Program. Washington D.C.: The National Academies Press.
National Research Council. 2012. Disaster Resilience: A National Imperative. Washington D.C.:
The National Academies Press.
Nguyen, Hoang Long, and Rajendra Akerkar. 2020. “Modelling, Measuring, and Visualising
Community Resilience: A Systematic Review.” Sustainability 12 (19): 7896.
https://doi.org/10.3390/SU12197896.
Parker, Dennis J. 2020. “Disaster Resilience – a Challenged Science.” Environmental Hazards 19
(1): 1–9. https://doi.org/10.1080/17477891.2019.1694857.

Pfefferbaum, Rose L., Betty Pfefferbaum, Pascal Nitiéma, J. Brian Houston, and Richard L. Van
Horn. 2015. “Assessing Community Resilience: An Application of the Expanded CART
Survey Instrument With Affiliated Volunteer Responders.” American Behavioral Scientist,
59 (2): 181–199. https://doi.org/10.1177/0002764214550295.

Pfefferbaum, Rose L., Betty Pfefferbaum, Richard L. Van Horn, Richard W. Klomp, Fran H.
Norris, & Dori B. Reissman. 2013. “The Communities Advancing Resilience Toolkit
(CART): An Intervention to Build Community Resilience to Disasters.” Journal of Public
Health Management and Practice: JPHMP 19 (3): 250–58.
https://doi.org/10.1097/PHH.0b013e318268aed8.

Poland, Chris. 2009. “Defining Resilience: What San Francisco Needs from its Seismic Mitigation
Policies.” San Francisco Bay Area Planning and Urban Research Association (SPUR).
http://www.jstor.org/stable/resrep22914.

United States Census Bureau (USCB). 2023. “Community Resilience Estimates.” Last modified
April 6, 2023. https://www.census.gov/programs-surveys/community-resilience-
estimates.html

Renschler, Chris S., Amy E. Fraizer, Lucy A. Arendt, Gian-Paolo Cimellaro, Andrei M. Reinhorn,
and Michel Bruneau. 2010. A framework for defining and measuring resilience at the
community scale: the PEOPLES resilience framework. Technical Report MCEER-10-
0006. NIST, Washington

Schoch-Spana, Monica, Kimberly Gill, Divya Hosangadi, Cathy Slemp, Robert Burhans, Janet
Zeis, Eric G. Carbone, and Jonathan Links. 2019. “The COPEWELL Rubric: A Self-
Assessment Toolkit to Strengthen Community Resilience to Disasters.” International
Journal of Environmental Research and Public Health 16 (13).
https://doi.org/10.3390/ijerph16132372.

Sempier, T.T., D.L. Swann, S.H. Emmer, and M. Schneider, 2010. Coastal Community Resilience
Index: A Community Self-Assessment. Mississippi- Alabama Sea Grant Program (MASGP-
08-014). https://repository.library.noaa.gov/view/noaa/37845.
Sharifi, Ayyoob. 2016. “A Critical Review of Selected Tools for Assessing Community
Resilience.” Ecological Indicators 69:629-647.
https://doi.org/10.1016/j.ecolind.2016.05.023.

Sherrieb, Kathleen, Fran H. Norris, and Sandro Galea. 2010. “Measuring Capacities for
Community Resilience.” Social Indicators Research 99 (2): 227–47.
https://doi.org/10.1007/s11205-010-9576-9.

South Carolina Office of Resilience (SCOR). 2023. Draft Introduction.
https://scor.sc.gov/sites/scor/files/Documents/introduction_3.1.23%20V2.pdf.
Summers, J. Kevin, Linda C. Harwell, Lisa M. Smith, and Kyle D. Buck. 2018. “Measuring
Community Resilience to Natural Hazards: The Natural Hazard Resilience Screening
Index (NaHRSI)—Development and Application to the United States.” GeoHealth 2 (12):
372–94. https://doi.org/10.1029/2018gh000160.
Tierney, Kathleen. 2019. Disasters: A Sociological Approach. Polity Press.
van de Lindt, John W., Walter G. Peacock, Judith Mitrani-Reiser, Nathaniel Rosenheim, Derya
Deniz, Maria Dillard, Tori Tomiczek, et al. 2020. “Community Resilience-Focused
Technical Investigation of the 2016 Lumberton, North Carolina, Flood: An
Interdisciplinary Approach.” Natural Hazards Review 21(3):04020029-1.
https://doi.org/10.1061/(ASCE)NH.1527-6996.0000387.
Walton, Abigail Abrash, Janine Marr, Matthew J. Cahillane, and Kathleen Bush. 2021. "Building
Community Resilience to Disasters: A Review of Interventions to Improve and Measure
Public Health Outcomes in the Northeastern United States" Sustainability 13 (21): 11699.
https://doi.org/10.3390/su132111699.
Yabe, Takahiro, P. Suresh C. Rao, Satish V. Ukkusuri, and Susan L. Cutter. 2022. “Toward Data-
Driven, Dynamical Complex Systems Approaches to Disaster Resilience.” Proceedings of
the National Academy of Sciences of the United States of America 119 (8).
https://doi.org/10.1073/pnas.2111997119.
Zuzak, Casey, Emily Goodenough, Carly Stanton, Matthew Mowrer, Anne Sheehan, Benjamin
Roberts, Patrick McGuire, and Jesse Rozelle. 2023. National Risk Index Technical
Documentation. Federal Emergency Management Agency, Washington, DC.

Appendix A: Summary of Metric Approaches and Critique

BRIC
49 Variables; hierarchical construction with internal validation
Goal: To provide a reference point for examining current inherent resilience and aid in decision-
making with some actionable variables
Plus: theoretically and conceptually driven; ease of use/transparency; use of national datasets for
consistency; monitor drivers and changes over time; address inequality through visualization
Minus: single value not representative of all dimensions; not ground-truthed or validated; does not
measure interdependencies; internal consistency too variable; not all variables actionable; county
scale (and unit of analysis) too coarse; comparative descriptive NOT absolute predictive so
dependent on study area selection
Variable Examples: medical facilities (beds, physicians, psychosocial support facilities per 1,000
people); Business size; Gini coefficient; impervious surface change; mitigation cost share
percentage; Social assistance services per 1,000 people

Coastal Community Resilience Index
Checklist format with two hazard scenarios built in.
Goal: Targeted to local planners, engineers, managers, or administrators for a guided self-
examination of community resilience that can also be converted into an index for comparison.
Plus: structured self-assessment; easy to apply elsewhere; easy to understand; translates into a
resilience index; relatively quick to implement
Minus: not a replacement for detailed study; requires expert knowledge on wide-ranging topics;
relies on hazard event scenarios; subjective language for rating “strong” social systems
Variable examples: location of critical facilities; evacuation route availability; comprehensive plan
contents; early warning system; mitigation measures and activities

CART
Four stage participatory action methodology focused on resilience perception
Goal: Provide a toolkit focused on group participation to define a community profile of resilience
and plan for improved resilience
Plus: evidence-informed and supported theoretically; field-tested; participation is a resilience
intervention itself; involves stakeholders in creating knowledge and solutions; intervention and
implementation driven
Minus: time and labor intensive; application of assessment to new study areas difficult resulting
in limited application; will not result in measurement of resilience, rather interventions
Variable examples: neighborhood infrastructure mapping; community conversations; community
ecological maps; stakeholder analysis; SWOT analysis; capacity and vulnerability assessment

CIRI
15 Variables; Hierarchical construction, with each variable compared to an “ideal” and weighting determined by the user (see the sketch at the end of this entry)
Goal: To provide an inherent resilience index and a post-disaster resilience measure that use an “ideal” goal for each variable, with weighting that is customizable to each place where it is implemented.
Plus: Highlights infrastructural resilience; includes novel penalty system that could model limiting
agents of recovery; compares score to a theoretical “perfect score”
Minus: Limited list of indicators; weighting and penalty schemas difficult to implement;
definitions of variables unjustified; requires proprietary data; justification of “perfect score”
arbitrary; penalty system requires more testing; index not ground-truthed; not all variables
actionable
Variable examples: road area, transit performance, microgrids, hospital beds, education, creative
class
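A minimal sketch of the general idea of scoring each variable against an “ideal” target with user-supplied weights (the targets, weights, and the simple capping rule below are illustrative assumptions, not the published CIRI equations or penalty system):

# Illustrative Python sketch: weighted comparison of variables to "ideal" targets.
# Observed values, ideals, and weights are hypothetical.
observed = {"hospital_beds_per_1000": 2.4, "transit_performance": 0.7, "microgrids": 1}
ideal = {"hospital_beds_per_1000": 3.5, "transit_performance": 1.0, "microgrids": 4}
weights = {"hospital_beds_per_1000": 0.5, "transit_performance": 0.3, "microgrids": 0.2}

# Each variable scores as its ratio to the ideal (capped at 1); the weighted
# ratios sum to 1.0 for a community that meets every ideal.
score = sum(weights[v] * min(observed[v] / ideal[v], 1.0) for v in observed)
print(f"Illustrative resilience score: {score:.2f} (1.0 = ideal)")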

Community Resilience Index
17 Variables; hierarchical construction with internal and external validation
Goal: Measure adaptive capacity through social capital and economic development specifically
Plus: ease of use/transparency; easily visualized; monitor changes over time
Minus: limited variables; single value not representative of all dimensions of community
resilience; not all variables actionable; county scale (and unit of analysis) too coarse; comparative
descriptive NOT absolute predictive so dependent on study area selection; not all variables
replicable
Variable examples: Gini coefficient; net gain/loss rate in business year; occupational diversity;
two-parent families; net migration rate

COPEWELL
49 Variables; System dynamics model for index; Rubric-based self-assessment
Goal: Assessment of pre-event, event, and post-event resilience characteristics through system
dynamics index development and additional community assessment
Plus: Index and self-assessment based on same theoretical framework; assessment co-developed
with community-level users; assessment is implementation-driven; participatory assessment acts
as resilience intervention itself; index relies on national datasets; index is hazard-specific
Minus: Time-intensive; index and assessment not well connected to create holistic view of
resilience; scale of index very coarse (County); application of assessment to new study areas
difficult resulting in limited application; Index includes hazard exposure and vulnerability,
conflating ideas of risk and resilience; not all index variables actionable
Variable examples: homes with internet service; food and water providers; insurance factors for
women, Medicare enrollees, and all adults; housing stock; income inequality; affiliation with
religious groups; hazard impact

FEMA Community Resilience Index
22 Variables; Additive across all variables
Goal: Through accepted variables from the literature, create a universal resilience index
Plus: constructed from variables found across various resilience methodologies; available for
download online from FEMA; incorporated into FEMA Resilience Analysis and Planning Tool
(RAPT)
Minus: Overlap between vulnerability and resilience not well justified (uses social vulnerability
indices in variable selection); uses all publicly available data;
Variable Examples: population without high school diploma; owner-occupied housing; number of
hospitals; population below poverty level; income inequality; population change

LACCDR
Training that includes network analysis, household survey, table-top exercise, and process
evaluation/reflection
Goal: Operationalize and measure factors and strategies to increase community resilience through
community coalition training
Plus: Training initiative; directly actionable; toolkit developed through stakeholder engagement;
interactive method directly improves resilience as its own action; resilience improvement through
established groups; public health led program (can be both plus and minus)
Minus: Time and resource-intensive; requires involved community organizations for
implementation; missing factors of resilience other than social/community
Variable examples: Pre-/Post- incident wellness; preparedness education; self-sufficiency;
partnerships between/within government and NGOs

NaHRSI
117 Variables; Hierarchical construction with no internal validation
Goal: Index of basic resilience that incorporates hazard risk within the index rather than creating
a separate exposure index for comparison
Plus: theoretically/conceptually driven; extensive measurement variables; can integrate hazard
event for post-event resilience and recovery modeling
Minus: Includes hazard exposure and vulnerability, conflating ideas of risk and resilience; intense
data management required; complicated model/equations for index construction; not validated or
ground-truthed; datasets inconsistent
Variable examples: communication continuity; biodiversity; land area type; hazard exposure and
loss; access to social support; structure vulnerability; condition of natural environment; labor-trade
services

PEOPLES Framework
95 Variables; GIS overlay methodology; systems of systems approach
Goal: Create a GIS tool to investigate different interactions across variables
Plus: holistic view of community resilience; geospatial focus;
Minus: requires complex modeling and extensive data collection; conflates vulnerability and
resilience within population demographics dimension;
Variable examples: population demographics (age, gender, race); water, air, soil quality;
executive and administrative emergency functions, cultural facilities, lifelines (internet
connections, postal, healthcare, food supply, utilities, and transportation); collective action and
efficacy (conflict resolution and quality of life); financial services, CPI; employment and business
services; social services

RCCR Framework
Risk and Resilience spectrums created through local perception; Later revised to include climate
justice
Goal: Understand locally perceived resilience needs in coastal rural communities
Plus: Themes designed with practical application in mind; themes assessed by community
members through interviews and focus groups; focused on engaging communities in resilience
conversation; stimulate capacity building dialogue; manageable tool for replication; avenue for
community members to describe place-based issues and perceptions
Minus: Frames resilience and vulnerability as opposing forces influencing adaptive capacity; only
pre- and post- surveys from focus groups; not generalizable to other scales; findings very localized;
small group of people engaged at one time
Variable Examples (Survey): Threat of sea level rise, saltwater intrusion, and flooding;
vulnerability of the same three hazards; level of preparedness to the same three hazards

RRI
Blend of qualitative and quantitative data, with citizen engagement generating locally relevant data through Likert-scale surveys
Goal: While an index, this is a bottom-up tool because the variables included are identified at the community level, creating a place-specific index
Plus: Indicators theoretically bound and iteratively chosen through local knowledge; focus on the
implementation of plan to increase resilience; developed in tandem with hazard risk assessment
tool
Minus: Due to local input, variables may not be widely applicable; measurement of resilience
through survey method - difficult to measure over time; time and resource intensive; final product
not easily interpreted; methods and variable list incomplete
Variable examples: Community wellbeing; housing and public spaces; communication options;
hazard awareness; emergency operations; community engagement
