OpenFAIR Level 1 Certification Guide
This study guide will accompany the FAIR Analysis Fundamentals course and will further
prepare you for the OpenFAIR Certification exam. We hope you will take advantage of this
additional material and the practice exam questions and pursue the coveted OpenFAIR Level 1
Certification.
This document closely mirrors the Conformance Requirements document published by The Open
Group that details the individual learning objectives covered on the exam. In some cases, similar
items from the Conformance Requirements have been combined into a single section of the study
guide, so the bold headings will not precisely match the line items from the Open Group
document.
If you have any questions or feedback about this document, please contact
training@[Link]
Table of Contents
Unit 1: Basic Risk Concepts
Unit 2: Terminology
Risk
Loss Event Frequency (and components)
Loss Magnitude (and components)
Forms of Loss
Components of Threat Assessment
Control Types and Definitions
Unit 3: Results
Interpreting Results
Qualifiers
Quantitative Analysis
Loss (Capacity and Tolerance)
Business Case Development
Complementary Frameworks
Unit 4: Analysis Process
Assumptions
Components of Scoping (Asset, Threat, and Effect)
Threat Community Identification
Abstraction Level
Data Quality
Finding Objective Data
Troubleshooting
Unit 5: Measurement
Calibration Exercise
Monte Carlo Simulations
Accuracy vs. Precision
Subjectivity vs. Objectivity
Diminishing Returns
Unit 1: Basic Risk Concepts
Probability vs. Possibility
When performing a risk analysis, we want to focus on what is probable versus possible. The goal is to make
well-informed decisions based on probable outcomes of future events. Is it possible that a grizzly bear will walk
through your office door and maul you? Sure! Almost anything is possible. But is it probable? No.
We want to ensure we are analyzing probable scenarios and that our estimates for the variables of the
FAIR model represent what is probable, not what is possible. For example, in a data breach scenario, you may
be tempted to have the maximum secondary response cost include credit monitoring for every individual
impacted by the breach. Every last impacted client may sign up. But is it probable? No. In most publicly
reported breaches, the take-up rate for credit monitoring services the breached organization offers does not
exceed 30%. If we estimated the maximum to represent 100% take-up, our results would overestimate the
probable future loss from the breach scenario.
The FAIR analysis does not predict the amount of loss experienced over a given time frame. It does not
predict when a loss event will occur. The FAIR analysis describes the range of probable outcomes related to a
given scenario over a given time frame. When we conduct a FAIR-based analysis, we should not say, “We
predict between 3 and 8 loss events over the next year,” for example, but something more like “based on the
information currently available to us and the estimates of our subject-matter experts, it is probable that between
3 and 8 loss events will occur over the next year.” A statement like this reflects that we are just seeking to
provide information on what is probable, not making a prediction claiming that we know what will unfold.
Unit 2: Terminology
Risk
Risk measures how much money an organization will likely lose from a given scenario over a given time frame,
typically a year. Risk is a measurement of the probable frequency and probable magnitude of future loss. Risk is
derived from estimates of Loss Event Frequency and Loss Magnitude. Risk is expressed in a range of dollar
amounts (or other currency, if applicable.)
Threat events can be malicious or non-malicious. An example of a malicious threat event would be a bank
robber attempting to rob the bank or a cybercriminal launching a DDoS attack. (These would become Loss
Events if they succeed in robbing the bank, or if the DDoS attack succeeds in causing an outage, etc.)
An example of a non-malicious Threat Event would be a team member completing a step in a business process
in which an error would result in loss. Each time they execute the step, there is the potential for loss, just as there
is each time the robber attempts to rob the bank. In this example, Vulnerability would
represent the percentage of times an error is made in completing that step in the process. If the step is executed
1,000 times (Threat Event Frequency) with an error rate of 1% (Vulnerability), we can expect 10 errors resulting
in a loss (Loss Event Frequency.)
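The arithmetic in this example can be sketched in a few lines of Python (a minimal illustration using point estimates; the variable names are ours, not part of the FAIR standard):

```python
# Loss Event Frequency (LEF) = Threat Event Frequency (TEF) x Vulnerability.
# A real FAIR analysis uses ranges; point estimates are used here for clarity.

threat_event_frequency = 1_000   # times the process step is executed per year
vulnerability = 0.01             # 1% of executions result in an error (a loss)

loss_event_frequency = threat_event_frequency * vulnerability
print(loss_event_frequency)      # 10.0 expected loss events per year
```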
Contact Frequency
Contact Frequency represents the number of times a threat agent will contact the asset to the degree needed to
launch a Threat Event. Suppose a malicious external hacker seeks to extract sensitive data from a database. In
that case, they must come into contact with that database to the extent needed to extract the data. Similarly, if a
bank robber seeks to rob a bank, they must establish contact with the asset by physically entering the bank.
It's essential to understand the difference between contact events and Threat Events. A Threat Event requires a
conscious choice on the part of the threat to seek to harm the asset’s confidentiality, integrity, or
availability. If a network engineer comes across sensitive data while troubleshooting an issue, it is a contact
event, not a Threat Event. It only becomes a Threat Event if the engineer uses the opportunity to act
maliciously.
Contact Frequency is expressed as a count of occurrences. Three potential types of contact should be considered
when estimating Contact Frequency: Random Contact, Regular Contact, and Intentional Contact.
Random Contact
Contact events where the threat randomly encounters the asset, such as a tornado strike, are considered random
contacts. Encountering a large bear on a walk in the woods would be another example of Random Contact – the
threat (bear) was not seeking you out or explicitly targeting you; you just happened to cross paths, creating an
opportunity for the bear to launch a threat event against you. To reduce the likelihood of Random Contact, you
could, for instance, move a facility out of the typical path of hurricanes, etc.
Regular Contact
Contact events due to regular threat agent activity are considered Regular Contact. This includes the expected
contact with sensitive data privileged team members have daily. If car thieves walk your block every night
looking for someone to leave their car unlocked, they are coming into Regular Contact with assets and waiting
until the perceived level of effort is low enough that they choose to launch a threatening action. (If the car thief
specifically targets a single car every night because of its perceived value, the contact becomes intentional. The
important distinction is a purposeful targeting of a specific asset.)
Intentional Contact
Intentional Contact includes all contact events where the threat intentionally seeks a particular asset. An art thief
who targets a particular museum to obtain a specific painting, for instance, or a cybercriminal purposefully
scanning your website for exploitable weaknesses would be instances of Intentional Contact.
Probability of Action
Just because a threat agent comes into contact with an asset doesn’t mean they will choose to take a threat
action. Probability of Action represents the percentage of contact events that will become Threat Events
based on the threat agent’s choice. Probability of Action is affected by the perceived value of the asset to the
threat agent, the perceived level of effort the threat agent will have to undertake to cause loss, and the
perceived risk the threat agent faces in taking the threat action. Suppose the asset isn’t deemed valuable enough,
or the threat agent thinks it will be too hard to impact or result in unacceptable losses (for instance, arrest and
prosecution). In that case, the threat agent may choose not to follow through with the planned threat action.
Probability of Action is expressed as a percentage. If threat agents come into contact with an asset 10 times but
only choose to take action against that asset 10% of the time based on the perceived value, level of effort, and
risk, then we will expect to see only 1 threat event.
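As a sketch of that arithmetic (illustrative values from the paragraph above; names are ours):

```python
# Threat Event Frequency (TEF) = Contact Frequency (CF) x Probability of Action (PoA).
contact_frequency = 10        # contact events per year
probability_of_action = 0.10  # 10% of contacts become threat events

threat_event_frequency = contact_frequency * probability_of_action
print(threat_event_frequency)  # 1.0 expected threat event per year
```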
Perceived Value
The perceived value of the asset to the threat agent impacts the threat agent’s Probability of Action. The threat
agent may be willing to expend significant effort and take on substantial risk if the asset's perceived value is high.
But if the asset's value isn’t significant enough in the threat agent’s eyes, they may choose not to move forward
with a threat action. For instance, if sensitive data in a database is appropriately masked such that the threat
agent cannot tell that it is valuable, they may not think the risk and effort involved in breaching the database is
worth the reward and may thus decide not to take a threat action against the asset.
Perceived Level of Effort
The greater the perceived level of effort required to impact the asset, the less likely the threat agent is to launch
a threat action against that asset. Threat agents also have limited
resources, and if a threat action requires too significant an investment of time or other resources, they may
choose not to attack an asset with which they have come into contact.
Perceived Risk
If a threat agent perceives that the risk associated with taking a threat action is too significant, given the asset's
value, they may choose not to pursue a threat action. For instance, a bank robber who enters a bank and sees
multiple armed guards and security cameras may judge that the risk of bodily injury and/or arrest is too high,
given the asset's value and the perceived level of effort required to affect the asset.
Vulnerability/Susceptibility
Vulnerability represents the percentage of threat events that will successfully (from the perspective of the threat
agent) become loss events. Stated more simply, it is the percentage of threat events that will result in loss to the
organization. If 100 threat events are expected to occur, and we estimate that we are vulnerable to 40% of them,
we can expect 40 loss events. This definition differs from the one typically associated with “vulnerability” in
the Information Security context, but it matches other common usages, such as “How vulnerable am I to
getting the flu?” Vulnerability is expressed as a percentage, representing the percentage of threat events that will
become loss events.
Vulnerability is derived from Threat Capability and Resistance Strength. Whether or not a Threat Event becomes
a Loss Event depends on the capability of the threat agent launching the threat action and on our ability to
defend against the threat action.
Threat Capability
Threat Capability and Resistance Strength are measured against the Threat Capability Continuum. Think of this
as a spectrum from the least capable threat agents to the most capable. Each threat community falls in a range of
values along this spectrum. We use a similar concept for tornados, using a standard scale to rate both the strength
of the tornado (“it’s an F3 tornado”) and the strength of the buildings the tornado may impact (“this building
can withstand F4 tornados.”)
Threat Capability represents the comparative place on the Threat Capability Continuum of the threat community
in scope for your analysis. Are they super-skilled nation-state-sponsored hackers in the top 10% of capability
compared to all other hackers? Are they inept beginning hackers with little knowledge or capability and are
down in the bottom 10%? Do they fall somewhere in the middle? Threat Capability is a percentage since it
refers to a group’s percentile range on the Threat Capability Continuum.
When estimates are made at the Threat Capability and Resistance Strength level, thousands of Monte Carlo
“boxing matches” are carried out to see what percentage of iterations resulted in a Threat Capability value that
was higher than the Resistance Strength value, representing a threat event to which the asset would be
vulnerable.
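A minimal sketch of these “boxing matches,” assuming (for illustration only) that Threat Capability and Resistance Strength estimates are modeled as uniform percentile ranges; real FAIR tooling typically uses calibrated distributions such as PERT:

```python
import random

def simulate_vulnerability(tcap_range, rs_range, iterations=10_000, seed=42):
    """Estimate Vulnerability as the share of Monte Carlo iterations in which
    a sampled Threat Capability beats a sampled Resistance Strength."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(iterations):
        tcap = rng.uniform(*tcap_range)  # threat community percentile draw
        rs = rng.uniform(*rs_range)      # resistance strength percentile draw
        if tcap > rs:                    # the threat "wins" this boxing match
            wins += 1
    return wins / iterations

# Example: an average threat community (20th to 70th percentile) against
# controls estimated at the 40th to 70th percentile of the continuum.
vuln = simulate_vulnerability((0.20, 0.70), (0.40, 0.70))
print(f"Estimated Vulnerability: {vuln:.0%}")
```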
Threat Capability is determined by the Skills and Resources at the threat community’s disposal.
Skills
Is the Threat Community in-scope for your analysis made of a bunch of Luddites who know nothing about
technology? Or do they possess advanced computer science degrees and tons of engineering experience? More
skilled and knowledgeable threat agents will be placed higher on the Threat Capability Continuum. A threat
agent’s skills can be counteracted in some cases, for instance, by using an obscure technology that is difficult to
understand or reverse engineer.
Resources
How much time and material (cash, hardware, etc.) does the Threat Community have access to? Do wealthy
nation-state agencies sponsor them, or are they using spare computer parts they picked up at a pawn shop? Can
they afford to purchase exploit kits on the black market (if they aren’t skilled enough to create them
themselves)? Threat agents with more available time and materials are placed higher on the Threat Capability
Continuum, increasing Vulnerability in most malicious scenarios.
Resistance Strength
Resistance Strength is also measured along the Threat Capability Continuum. It represents the comparative place
on the continuum against which the organization believes it could successfully defend. If an organization thinks
its controls are sufficient to defend against all but the most skilled nation-state-sponsored attackers, its resistance
strength would be around 80% to 90% or 95%. If an organization believes it can successfully defend against the
average cybercriminal but not against skilled cybercriminals, its resistance strength estimate may be around 40%
to 70%. Resistance Strength and Threat Capability are percentages since they refer to a percentile position on the
Threat Capability Continuum.
Loss Magnitude
The right side of the FAIR Model helps us understand loss magnitude, or “how much money are we likely to
lose each time the bad thing happens?” Loss Magnitude is expressed in monetary values ($, €, £, etc.). Loss
Magnitude comprises the two phases of loss: Primary and Secondary.
Primary Loss
Primary Losses are losses incurred by the organization directly due to the loss event occurring. They are referred
to as primary because the organization impacted is the primary stakeholder. Launching an internal incident
response team in the event of a breach, for instance, is a primary loss, as the decision to do so was made by the
primary stakeholder directly as a result of the loss event occurring. Some primary losses don’t require a decision
to be made, such as lost productivity in the event of an availability scenario. The key idea is that primary losses
directly impact the primary stakeholder due to the loss event occurring.
Secondary Loss
Secondary Losses are losses incurred by the primary stakeholder due to the reactions of secondary stakeholders.
They are referred to as Secondary Losses because they manifest due to secondary stakeholders' actions and
reactions, such as clients, the media, government/regulatory bodies, contractual third parties, etc. Hiring outside
counsel to respond to lawsuits in the wake of a data breach, for instance, is a secondary response cost, as it was
made necessary by the actions of secondary stakeholders (the clients who are suing your organization.) Fines
and judgments that your organization has to pay due to a regulatory body’s decision to levy them against you or
a contractual third party’s decision to enforce service level agreements, etc., are secondary losses. The key idea
is that Secondary Losses are losses that don’t result directly from the loss event but from the reactions of
secondary stakeholders to the loss event.
Secondary Loss Magnitude
When secondary losses occur, how much might they cost the organization? That is the question that Secondary
Loss Magnitude seeks to answer. You’ll need to consider potential secondary losses across all six forms of loss.
How much might we spend on legal defense costs related to a lawsuit or enforcement action? How much might
we lose from those actions? How much will productivity be impacted if protesters respond to a Primary Loss
Event by picketing our buildings? Might they damage our buildings or other property? If so, what replacement
and repair costs are probable? These elements and more should be considered when estimating Secondary Loss
Magnitude, depending on the scenario you’re analyzing.
Asset
The asset is the thing of value your organization seeks to protect. It will cause loss to your organization if it is
impacted from a confidentiality, availability, or integrity standpoint (depending on the scenario you’re
analyzing.) A database full of sensitive data can be an asset in a confidentiality scenario. A physical facility or a
system/application can be assets in availability scenarios. Important data that must be correct to be valuable to
your organization can be an asset in an integrity scenario. In the operational risk context, successfully
completing a business process can be an asset in an integrity scenario. The key here is remembering that the
asset is the thing that has to be impacted for your organization to experience loss.
Threat
The threat is the person, group of people, force of nature, piece of self-executing malicious code, etc., that takes
action against the asset in a way that causes loss to your organization. The key here is remembering that for
something or someone to be a threat, it has to take action against an asset in a way that seeks to cause loss via
harming the confidentiality, integrity, or availability of the asset. Threats are active entities, not passive control
conditions, etc.
Threat Communities
Threat agents with similar knowledge, skills, resources, motivations, standard methods, etc., are grouped into
threat communities to make communication easier across the cybersecurity industry. In the FAIR model, each
threat community is associated with a corresponding range of Threat Capability based on what is known about
their knowledge, skills, and resources. This information is the basis for a threat profile for each community.
Threat Profiling
Each threat community should have a threat profile that details commonly known attack methods used by that
community, typical motivating factors, information on the resources they have at their disposal, etc. Threat
profiles help an organization determine what threat communities are likely to target them based on their
motivations and should inform estimates of Threat Capability.
Primary Stakeholder
The organization for whom there is a risk from a given scenario. Who has skin in the game? Who will be
monetarily harmed if the asset's confidentiality, integrity, or availability is impacted? Typically, the primary
stakeholder is the asset owner and the organization for which the FAIR-based risk analysis is conducted.
Secondary Stakeholders
Others outside the organization can be secondary stakeholders if primary loss events can impact them. Clients,
regulators, the media, shareholders, etc., can all be impacted by loss events and can have an adverse reaction to
those loss events that cause further loss to the organization. These further losses resulting from secondary
stakeholders' reactions are referred to as secondary losses.
Threat Event
A threat event is an attempt by a threat to act on an asset in a way that would cause loss to the primary
stakeholder. A cybercriminal attempting to breach the confidentiality of your sensitive data, or an earthquake
occurring and “attempting” to knock down your building (even though an act of nature can’t consciously
“attempt” to do anything, the concept is the same.) Some of those attempts will be successful and become loss
events — attempts where the attacker is stronger than the organization’s control environment, or where the
earthquake is stronger than the building can withstand based on its construction methods and materials. Some of
those attempts will not result in loss — attempts where the control environment is stronger than the threat. The
percentage of threat events that will result in loss events is represented by Vulnerability.
Loss Event
Once the attempt (threat event) has successfully caused a loss to your organization, it has become a loss event. A
loss event is when a threat has harmed the asset (from a confidentiality, integrity, or availability standpoint) to
the extent that it causes realized losses to the primary stakeholder.
Loss Flow
The “loss flow” is the conceptual name given to the chain of events related to primary and secondary losses. In
the first section of the loss flow, a threat actor takes some action against an asset. That action affects the asset’s
confidentiality, integrity, or availability, directly resulting in loss to the primary stakeholder. That is the first
portion of the loss flow and refers to primary losses. In the second section, the primary loss event causes a
reaction by secondary stakeholders, and that reaction causes further losses to the primary stakeholder. Thus, the
loss flow represents the complete chain of events from the threat action to the realization of secondary losses by
the primary stakeholder.
Forms of Loss
Six forms of loss must be evaluated to fully understand how much loss a given scenario may present. In our
decades of experience quantitatively analyzing risk using FAIR, we have not found an example of realized
losses that do not fall in one of the six forms of loss outlined below.
Productivity loss and replacement costs are commonly seen in the primary loss phase. Reputation damage,
competitive advantage loss, and fines and judgments are commonly seen in the secondary loss phase. Response
costs are frequently seen in both phases. All forms of loss should be evaluated for each phase each time an
analysis is performed, as there could be novel ways in which loss would materialize from a given scenario.
Productivity Loss
Loss that results from an operational inability to deliver products or services. If your ability to satisfy your
primary value proposition has been impacted, you may be experiencing productivity loss. Common examples
include lost revenue when a retailer’s website is down and customers don’t return later to place orders. Wages
paid to idle workers during an outage are also productivity losses, as the workers are temporarily unable to
deliver products or services and generate revenue. Be careful not to include revenue that is simply delayed in
your revenue loss estimates.
Response Costs
Response costs are any costs you incur in responding to or managing a loss event. In the primary phase, the
mobilization of an internal incident response team or hiring an external forensics firm to investigate a
cyberattack are common examples. When responding to the reactions of secondary stakeholders, like when you
have to hire outside legal counsel in response to lawsuits or regulatory actions brought against you, you’re
experiencing secondary response costs.
Replacement Costs
Replacement costs cover all losses from repairing or replacing damaged, destroyed, or stolen assets. Rebuilding
a data center wiped out by a tornado or replacing a stolen laptop are examples of replacement costs. Secondary
replacement costs are less commonly seen but can happen when a secondary stakeholder’s reaction damages
your assets, like if protestors smash your front windows to retaliate against you for poor handling of a breach of
their sensitive information.
Reputation Damage
Loss resulting from secondary stakeholders’ perception that your organization’s value has decreased, that your
products or services aren’t as valuable, or that your organization’s liability has increased. Customers who no
longer do business with you due to the loss event represent reputation damage, as does reduced stock valuation
or increased cost of capital.
Controls
Controls are processes or technical elements deployed in an environment to either keep bad things from
happening or reduce the loss to the organization when they do. The only value proposition for a control is its
ability to reduce Loss Event Frequency or Loss Magnitude through its effect on a risk scenario. The FAIR
Standard groups controls into four broad categories:
Avoidance Controls
Avoidance controls reduce the potential for threat agents to come into contact with assets. By limiting contact
frequency, these controls limit threat event frequency, and thus loss event frequency, and thus risk. With perfect
avoidance controls, no other controls would be needed because the threat agent would never come into contact
with the asset.
Examples of avoidance controls include: not hiking in parts of the wilderness where grizzly bears are known to
be active, reducing the asset surface area (e.g., reducing the number of assets), introducing layers of physical
security in between threat agents and assets, deploying firewalls to prevent malicious external actors from
coming into contact with your network, etc.
Deterrence Controls
Deterrence controls limit the threat actor’s probability of action once they have come into contact with the asset.
By limiting the probability of action, deterrence controls limit threat event frequency, and thus loss event
frequency, and thus risk. No other controls would be needed with perfect deterrence controls because no contact
events would ever become threat events.
Examples of deterrence controls include: increasing the perceived risk to the attacker with visible security
cameras or armed guards, displaying a message about logging of network activity on a log-in screen, increasing
the perceived level of effort to the attacker by using advanced encryption algorithms to protect sensitive data,
etc.
Resistive Controls
Resistive controls limit an organization’s Vulnerability; they decrease the threat agent’s ability to inflict harm
once the threat agent has launched a threat action. By
limiting vulnerability, these controls limit loss event frequency and thus risk. No other controls would be needed
with perfect resistance controls, as no threat events would turn into loss events.
Examples of resistive controls include controls that help you detect threat events so that you can deploy defenses
before the threat event becomes a loss event, passwords, access management, hardened system configurations,
bulletproof glass, etc.
Responsive Controls
Responsive controls come into play once a loss event has been detected and include any controls that limit
subsequent losses. By limiting loss magnitude, these controls limit risk. This is achieved by either breaking the
threat agent’s contact with the asset or by minimizing losses incurred as a result of the loss event. No other
controls would be needed with perfect responsive controls, as every loss event would be responded to
immediately, preventing all losses.
Examples of responsive controls include: retaliating against an attacker to the extent that you break their
contact with the asset, treating illnesses with medications that lessen the duration and severity of symptoms,
blocking an attacker’s IP address after a network breach, quickly restoring normal operations in an availability
scenario, limiting secondary losses by having agreements in place that discount credit monitoring or legal
defense costs, public relations campaigns, etc.
Unit 3: Results
Interpreting Results
The results of a FAIR analysis indicate the Annualized Loss Exposure from that scenario, or how much loss the
organization can expect to experience from this scenario over the next year. Understanding Annualized Loss
Exposure is straightforward in scenarios where Loss Event Frequency is estimated to be greater than one. But
when the Loss Event Frequency is less than one, it is more difficult. For instance, a loss event forecasted to
happen once every 4 years and have a loss magnitude of $4,000,000 would have an Annualized Loss Exposure
of $1,000,000. In these scenarios, you can think of Annualized Loss Exposure as “how much loss would we
incur from this scenario divided over the number of years it will probably take to have one occurrence of
the loss event.” Of course, these values are all expressed in and calculated from ranges in FAIR, but point
estimates are used here for clarity.
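The point-estimate arithmetic from that example can be written out directly (a sketch; names are ours):

```python
# Annualized Loss Exposure (ALE) = Loss Event Frequency (LEF) x Loss Magnitude (LM).
lef = 0.25                  # one loss event expected every 4 years
loss_magnitude = 4_000_000  # dollars lost per loss event

ale = lef * loss_magnitude
print(f"${ale:,.0f}")       # $1,000,000
```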
Since Annualized Loss Exposure results are reported using a probability distribution, the relative probabilities of
different outcomes can be assessed and reported on. For instance, reporting the range of losses between the 10th
and 90th percentiles allows statements like “80% of our simulations resulted in annualized loss exposure of
between $x and $y.” It is often helpful to break Annualized Loss Exposure down into primary and secondary
losses so your audience understands the relative frequencies and magnitudes of primary and secondary losses.
Translating Loss Event Frequency values from decimals to statements of frequency is advisable. For instance,
“primary losses would occur as little as once in 40 years (.025) and as much as once in 5 years (.20), with a most
likely frequency of roughly once in 17 years (.058).” Providing similar statements for the ranges of primary loss
magnitude, secondary loss event frequency, and secondary loss magnitude gives your audience a complete
understanding of the organization’s risk exposure from the analyzed scenario.
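A small helper can automate this translation from decimals to frequency statements (a sketch; the wording and rounding are our own choices, not prescribed by FAIR):

```python
def lef_to_statement(lef):
    """Translate a decimal Loss Event Frequency into plain language."""
    if lef >= 1:
        return f"roughly {lef:.1f} times per year"
    return f"roughly once in {round(1 / lef)} years"

print(lef_to_statement(0.025))  # roughly once in 40 years
print(lef_to_statement(0.20))   # roughly once in 5 years
```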
Suppose your organization’s risk tolerance related to this scenario has been defined. In that case, it can be
helpful to report the percentage of simulated years in which Annualized Loss Exposure was less than the risk
tolerance. This will help your audience understand how probable it is that losses will be above or below the
organization’s risk tolerance and can help inform decisions on whether or not to mitigate a given scenario.
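The reporting steps above can be sketched against a set of simulated annual loss figures (the numbers, helper function, and tolerance threshold below are all hypothetical):

```python
import statistics

def summarize(simulated_annual_losses, risk_tolerance):
    """Report the 10th-90th percentile range of simulated Annualized Loss
    Exposure and the share of simulated years under the risk tolerance."""
    q = statistics.quantiles(simulated_annual_losses, n=10)
    p10, p90 = q[0], q[-1]
    under = sum(1 for x in simulated_annual_losses if x < risk_tolerance)
    return p10, p90, under / len(simulated_annual_losses)

# Hypothetical simulation output: dollars of loss per simulated year.
losses = [50_000, 120_000, 300_000, 450_000, 700_000,
          900_000, 1_200_000, 1_500_000, 2_000_000, 3_500_000]
p10, p90, pct = summarize(losses, risk_tolerance=1_000_000)
print(f"80% of simulations fell between ${p10:,.0f} and ${p90:,.0f}; "
      f"{pct:.0%} of simulated years were under tolerance")
```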
Qualifiers
Sometimes, just presenting the quantitative results of a FAIR-based risk analysis doesn’t fully communicate the
situation. Sometimes, the risk may be minimal, but you still want to place the scenario on leadership’s radar
because of particular conditions. We call these two conditions “fragile” and “unstable.”
Fragile Qualifier
Suppose we currently have minimal risk exposure to a given scenario because, while we experience many threat
events, few, if any, of them are successful because of the effectiveness of a single control. Our Threat Event
Frequency estimate is high, but our Vulnerability estimate is low because of a single control. What would
happen if that single control were to fail? Those threat events would successfully become loss events, and our
losses from this scenario would be far more significant than we currently forecast. We call this situation “fragile”
because it could all shatter if that one control fails. We should consider responding to these situations by
building additional layers of control that will ensure Vulnerability stays low in the event of a failure of one
related control.
Unstable Qualifier
Suppose we currently have minimal risk exposure to a given scenario because, even though we have little to no
controls against it, we just aren’t experiencing threat events of that type. In this situation, the risk is minimal not
because we’re “good” (well-controlled) but because we’re “lucky” (we just haven’t been targeted yet). What would
happen if we started experiencing threat events of this type? With no controls in place, our Vulnerability would
be near 100%, and every threat event would become a loss event. Our losses from this scenario would be far
more significant than we currently forecast. We call this situation “unstable” because it could change rapidly if
we become a target of threat events of this type. We should consider responding to this situation by
implementing controls if we have reason to believe we could become a target of threat events of this type in the
future.
Qualitative Translation
It is perfectly acceptable to translate quantitative results into qualitative labels IF everyone in your organization
remains aligned on what those labels mean regarding dollars of probable loss. If, for instance, your organization
decides that any scenario with a most likely annualized loss of over $500,000 or a one-time maximum loss of
more than $5 million is a “high,” then you can continue to use that term, as you’ve infused it with quantitative
meaning. The translation table from quantitative results to qualitative labels needs to be included in all risk
reporting so that people are reminded of what the labels mean, and the organization should revisit the ranges and
their mappings periodically as their risk tolerance changes. Absent this translation table, you have no assurance
that what you mean when you say “high risk” is what your audience receives as “high risk.” This is because “high” is a subjective term that can be interpreted in many ways, underscoring the need for quantitative risk analysis.
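As an illustration, a translation table like the one described can be encoded and applied mechanically, so every report uses the same mapping. The “high” thresholds mirror the example in the text; the “medium”/“low” split is an assumption for illustration:

```python
# Sketch of the quantitative-to-qualitative translation table described above.
# The "high" thresholds follow the text's example; the "medium" cutoff is an
# illustrative assumption each organization would set for itself.

def qualitative_label(most_likely_annualized_loss, one_time_maximum_loss):
    if most_likely_annualized_loss > 500_000 or one_time_maximum_loss > 5_000_000:
        return "high"
    if most_likely_annualized_loss > 100_000:  # assumed threshold
        return "medium"
    return "low"

print(qualitative_label(750_000, 2_000_000))  # high
print(qualitative_label(50_000, 1_000_000))   # low
```

Including this table in every report keeps the labels anchored to dollars of probable loss.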
within the parameters of most assessment frameworks.
Unit 4: Analysis Process
Assumptions
Failing to state the assumptions you make as you complete a risk analysis dooms the clear communication of
valuable results. Clearly stating assumptions is also critical to having a well-defined scope.
Let’s say you’ve analyzed nation-state hackers breaching the confidentiality of a set of databases in your organization. If you don’t clearly state the threat community you’re concerned about, the audience for your analysis may think the results you’re presenting represent the amount of risk the organization faces from any threat community that could target the information in those databases. Likewise, absolute
clarity about the in-scope asset(s) and effect(s) is needed so that analysts, subject-matter experts providing
estimates, and decision-makers you’re seeking to inform are all aligned on exactly what loss event or loss events
have been analyzed.
Assumptions can also influence the estimates your subject-matter experts provide. An SME may assume that a
control in place is working effectively and may not seek out information on the control’s operating effectiveness.
It’s your job as an analyst to identify the assumptions made by SMEs and decision-makers and to critically
question them to drive more objectivity into your organization’s risk analysis process.
Without identifying an asset, you cannot make estimates of loss magnitude or vulnerability. How can you
estimate how much loss may result from an asset being damaged if you don’t know what the asset is and, thus,
how valuable it is to the organization? And how can you estimate the resistance strength of the asset if you don’t
know what the asset is and, thus, what controls are in place around it?
Without identifying a threat community, you can’t make estimates of Vulnerability or Threat Event
Frequency. How can you estimate the percentage of threat events that will become loss events if you don’t
know how capable the threat actors are compared to your resistance strength? Given their motivations and
typical targets, is this a threat community that is even likely to target you?
Without identifying an effect, you can’t make estimates of Loss Magnitude. A breach of confidentiality of a
database will likely result in a different loss magnitude than an event that causes the unavailability of the same
database and any business processes it supports.
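One lightweight way to enforce this is to make the three scoping elements required fields in whatever record you keep for each analysis. A sketch in Python (the field names are assumptions, not mandated FAIR terminology):

```python
# Minimal sketch of a scoped-scenario record. The point: asset, threat
# community, and effect must all be explicit before any estimate is made.
from dataclasses import dataclass

@dataclass
class ScenarioScope:
    asset: str             # needed for loss magnitude and resistance strength
    threat_community: str  # needed for TEF and vulnerability
    effect: str            # confidentiality, integrity, or availability

scope = ScenarioScope(
    asset="customer database",
    threat_community="nation-state hackers",
    effect="confidentiality",
)
print(scope.effect)  # confidentiality
```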
Identifying the Asset(s)
The in-scope asset is the value your organization seeks to protect and the focus of your analysis. The asset could
be sensitive data in a database, a business-critical application, a physical facility, team members, the successful
completion of a business process, etc. You must define the asset in question so that you can make estimates of
the asset’s resistance strength, the resulting loss magnitude associated with the asset, etc.
Without differentiating threat vectors, subject-matter experts would be providing estimates that encompass all possible ways the confidentiality of the asset could be harmed, resulting in much greater uncertainty and less valuable results, especially if a decision needs to be made about the implementation of a specific control that affects only one threat vector.
Scenario Parsing
It is mathematically possible to conduct a scenario analysis encompassing multiple threat communities, assets,
or threat vectors, so long as the definition of loss event clearly states all elements included in the analysis.
However, this would result in wide ranges and a loss of precision, reducing the usefulness of your analysis
results. Whenever an analysis incorporates multiple assets of different values, multiple threat communities of
different capabilities, multiple threat vectors against which we have different controls, or multiple effects, it is
advisable to break the analysis into smaller scenarios and estimate each individually. This will improve the
clarity of your communication of results and allow you to identify which scenarios present relatively more or
less risk to your organization. While it may sound like more work, it is often faster and more efficient to conduct
multiple less complex analyses than to try to combine numerous scenarios into one set of estimates.
Documenting Rationale
Documenting the rationale provided to accompany calibrated estimates is critically important, as it allows you to
share the basis for the estimates and defend the conclusions of your analysis. Rationale documentation should
include the name and title of the SME or SMEs consulted, the internal and external data that was considered, any
remaining sources of uncertainty regarding the estimate, and the controls that were considered when making the
estimate.
Choosing Abstraction Level
The decision of where to estimate the Loss Event Frequency side of the model depends on the question the
analysis seeks to answer/decision the analysis seeks to inform and the level of the model for which there is the
most high-quality readily available data. If the question is “How much risk do we have from this scenario,” an
estimate of Loss Event Frequency will suffice if adequate data exists. But estimating at Loss Event Frequency
will not allow us to answer, “Why do we have as much risk as we do? Is it because we face many threat events
or because we are highly vulnerable to a small number of threats? What could we do to reduce risk from this
scenario?” To answer those questions, we need to understand the relative impact Threat Event Frequency and
Vulnerability have on our forecasted losses.
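The decomposition being described is simply Loss Event Frequency = Threat Event Frequency × Vulnerability. A sketch with illustrative numbers:

```python
# Deriving Loss Event Frequency one level down the FAIR model.
# The example figures are illustrative, not data from any real analysis.

def loss_event_frequency(threat_event_frequency, vulnerability):
    # Vulnerability: the fraction of threat events that become loss events.
    return threat_event_frequency * vulnerability

# 200 threat events/year, 2% of which succeed -> 4 loss events/year.
print(loss_event_frequency(200, 0.02))  # 4.0
```

Estimating at this lower level answers not just “how much risk?” but “is it driven by many threat events, or by high vulnerability?”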
Data Quality
Data quality is directly related to the objectivity with which it was gathered. Data from direct observation is more credible and easier to defend than data derived from subjective opinions, which is prone to biases. Data that has been peer-reviewed or otherwise externally validated can be seen as having higher quality. Regardless
of the quality of your data, you should thoroughly document the rationale behind your estimates. Remember, the
purpose of our analysis is not to provide a precise answer but to reduce uncertainty. Even imperfect data can
help achieve that goal, and we shouldn’t feel forced to hide the data’s imperfections but instead should call
attention to them so that better information can be sought.
When soliciting data and estimates from subject-matter experts, it’s essential to frame your questions in a way
that suggests you’re looking for objective data. Instead of starting with “How many times do you think this
might happen,” begin your search for data/estimates with “Do we have any type of logging or event tracking
that could show us how many times this has happened in the recent past?” You want to be sure you aren’t
asking questions in a way that sounds like you’re seeking an opinion when you want objective data and
calibrated estimates based on critical consideration of said data.
Troubleshooting Analyses
When results look strange, you must first revisit your inputs and ensure you didn’t accidentally add a zero to an
estimate somewhere. If that isn’t the case, each estimate should be critically considered to make sure the SME
understood the scope and provided a reasonable estimate that wasn’t unduly subjective or biased. Seeking out
more internal or external data and consulting additional SMEs can help identify inputs that may not be accurate.
Use the variables one level lower in the model to lead a discussion and try to get an understanding of why the
SMEs disagree. For instance, are they disagreeing about a Loss Event Frequency estimate because one has better
data on Threat Event Frequency? Is one SME assuming a control is in place on the in-scope asset because it is in place on similar assets? This discussion is often enough to identify the source of the disagreement and resolve it.
If that doesn’t work, you can run the analysis twice with the two different estimates and see if one of them
returns results that seem nonsensically large or small, considering previous losses from the scenario being
analyzed.
As a last resort, you can combine the two ranges by using the smaller minimums and the larger maximums and
clearly stating in reporting that we have more-than-usual uncertainty around the variable in question. This will
result in less precise results but accurately reflect the organization’s uncertainty about the scenario.
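The combination rule described above (keep the smaller minimum and the larger maximum) is easy to express in code. A sketch with hypothetical SME estimates:

```python
# Last-resort combination of two disagreeing SME ranges, as described above:
# take the smaller minimum and the larger maximum. Figures are hypothetical.

def combine_ranges(range_a, range_b):
    """Each range is a (minimum, maximum) tuple."""
    return (min(range_a[0], range_b[0]), max(range_a[1], range_b[1]))

# SME 1 estimates 2-10 loss events/year; SME 2 estimates 5-25.
print(combine_ranges((2, 10), (5, 25)))  # (2, 25)
```

The wider result trades precision for an honest reflection of the organization’s uncertainty, which the report should state explicitly.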
Unit 5: Measurement
Calibrated Estimation and Monte Carlo Simulation
The estimates we use as inputs in the FAIR model aren’t just any estimates. They’re calibrated estimates,
meaning that the range provided represents a 90% confidence interval within which we think the actual value
we’re estimating will lie. We go through the calibration process so that our estimates, and ultimately our results,
are accurate with a useful degree of precision. It wouldn’t do us any good if our estimate for Loss Event
Frequency were “somewhere between 0 times and infinity times.” This range might be accurate (meaning that
the actual value will lie within it), but it is not precise enough to help us make valuable decisions. Calibrating
your estimate, taking it from being too wide to be helpful to a 90% confidence interval, provides the useful level
of precision we’re aiming for when we conduct FAIR-based analyses.
But how do we arrive at a 90% confidence interval? There’s a set of steps you can take to get you there
whenever you’re asked to provide an estimate for a quantity about which you are uncertain:
1. Start with the absurd — begin the process with a range you know is crazily wide. Fight against
your natural impulses to provide a precise answer or shrug and say, “I don’t know!”
2. Eliminate highly unlikely values — take that absurd range and chop off values you think are too
large or too small.
3. Reference what you know — see what kind of information you already have in your mind that
can be transferred onto this estimate. Use those reference points to narrow your range a bit more.
4. Play a calibration game — use an equivalent bet test like the spinner game to assess how
confident you are in your range.
How does an equivalent bet test like the spinner game work?
First, it has to be set up correctly by dividing the wheel into two sections. 90% of the wheel falls in one section
labeled WIN, and 10% falls in the second section labeled LOSE. Here’s how the game works:
You provide an estimate (in the form of a range) to the question being asked. If the actual value is within your range a year from now, you will win $1,000. Alternatively, you may spin the spinner, with a 90% chance of winning $1,000 and a 10% chance of losing. Which would you rather do: stick with your range or spin the spinner?
If you quickly answer, “I would stick with my range,” it means you think you have a higher than 90% chance of
your range being accurate. This tells me that you could narrow your range slightly to get to a more useful level
of precision.
If you quickly answer, “I would spin the spinner,” it means you think you have a lower than 90% chance of your
range being accurate. This tells me you must widen your range to get us to the 90% confidence interval we seek.
The point we’re hoping to find — the point at which your range is calibrated — is where it becomes too
difficult for you to choose between the range and the spinner. When it’s impossible to choose, it indicates that
you think your odds of winning are the same with either option. That means the range you’ve provided is your
90% confidence interval. That range represents your calibrated estimate.
For that range to be a 90% confidence interval, it implies that you think there’s a 95% chance that the actual
value is higher than the minimum estimate you provided and a 95% chance that the actual value is lower than
the maximum estimate you provided. It can be helpful to think about your estimate in pieces like this so you
can critically and independently evaluate the values you provided.
It should be noted that you don’t have to use a spinner game as your equivalent bet test. Any game you set up
with a 90% chance of winning and a 10% chance of losing will work. For instance, place 10 marbles in a pouch, 9 of one color and 1 of another. If you draw the lone off-color marble, you lose; if you draw a marble of the dominant color, you win. Would you rather stick with your range or draw a marble from the pouch? Any
equivalent bet test will help you arrive at a calibrated estimate.
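Simulating the marble pouch makes it easy to see that it offers the same odds as the spinner. A quick sketch:

```python
# Simulating the marble-pouch equivalent bet: a 90%-win game you weigh
# against your own range to test whether it is a 90% confidence interval.
import random

def draw_marble(rng):
    # 9 marbles of the dominant color (win), 1 of the other color (lose).
    pouch = ["win"] * 9 + ["lose"]
    return rng.choice(pouch)

rng = random.Random(42)
wins = sum(draw_marble(rng) == "win" for _ in range(100_000))
print(wins / 100_000)  # ~0.9 -- the same odds the spinner offers
```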
Sometimes the question you’re being asked is too tricky or nebulous to estimate directly. You can arrive at a
better estimate if you decompose the question into smaller questions that are easier to estimate. For
instance, “How many rooms does New York City’s Waldorf-Astoria Hotel have?” Estimating this directly is
difficult, but we can identify easier-to-estimate questions that will help us out, like “How many floors does it
have?” and “How many rooms are on each floor?” This is precisely the kind of decomposition done in the FAIR model. “How much risk do we have from this scenario?” is too complex a question to estimate directly, but we
can arrive at an estimate by decomposing that question into “How many times over the next year is this bad
thing going to happen?” and “how much is it going to cost us each time it does?” If those questions are too
difficult to estimate, or if we have more/better information in the model, we can take advantage of further
decompositions and estimate at a lower level.
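The hotel example can be sketched numerically: multiply the bounds of the easier estimates to get a reasoned range for the hard question. The floor and room figures below are illustrative guesses, not facts about the Waldorf-Astoria:

```python
# Estimating by decomposition: a hard question broken into easier pieces.
# All figures here are illustrative guesses.

floors_range = (20, 50)           # "How many floors does the hotel have?"
rooms_per_floor_range = (20, 40)  # "How many rooms are on each floor?"

# Multiplying the bounds gives a wide but reasoned range for the hard question.
rooms_min = floors_range[0] * rooms_per_floor_range[0]
rooms_max = floors_range[1] * rooms_per_floor_range[1]
print((rooms_min, rooms_max))  # (400, 2000)
```

FAIR does the same thing when it derives Loss Event Frequency and Loss Magnitude from their sub-variables.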
Estimating with ranges allows us to express our uncertainty about a quantity or value. Using calibrated
estimation allows us to create a range that we believe is accurate with a useful level of precision. But the
minimum and maximum are just two of the four parameters we must provide to make an estimate ready for use
in the FAIR model and its Monte Carlo simulations. We also need to provide a most likely value and an
indication of our confidence that the most likely value truly is the most likely. The most likely value helps shape
the probability distribution by determining where it peaks, and the level of confidence we have in that most
likely value determines how tall the curve is at that most likely value. If we are highly confident, the curve will
be very pointed, and there will be a smaller probability of the Monte Carlo simulation choosing a value
near the minimum or maximum value. If we are not confident that the most likely value we’ve provided truly is
the most likely, then the curve will be flatter. More values will be chosen from across the whole range and not be
so concentrated around the designated most likely value.
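One common way to turn the four parameters into a distribution is a modified PERT, where a shape parameter plays the role of confidence in the most likely value. This is a sketch of that idea, not the specific distribution any particular FAIR tool uses:

```python
# Sketch of how the four estimate parameters shape a distribution, using a
# modified PERT built on the beta distribution. "confidence" plays the role
# of the PERT lambda: higher confidence -> taller, more pointed peak.
import random

def sample_pert(minimum, most_likely, maximum, confidence, rng):
    spread = maximum - minimum
    alpha = 1 + confidence * (most_likely - minimum) / spread
    beta = 1 + confidence * (maximum - most_likely) / spread
    return minimum + rng.betavariate(alpha, beta) * spread

rng = random.Random(0)
samples = [sample_pert(1, 4, 20, confidence=10, rng=rng) for _ in range(50_000)]
# With high confidence, samples cluster near the most likely value of 4.
print(min(samples) >= 1 and max(samples) <= 20)  # True
```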
It is always advisable to challenge subject-matter experts’ assumptions when they provide you with a calibrated
estimate. As an analyst, you want to make sure the SME is thinking critically about all aspects of their estimate
and isn’t exhibiting an over-reliance on historical data, a head-in-the-sand “that bad thing couldn’t happen here”
attitude, or any other bias that decreases the objectivity of the estimate they provide.
Once we’ve provided our four-parameter estimates, Monte Carlo simulation takes over to determine how much
risk we have from a given scenario by performing the calculations of the FAIR model with randomly selected
values from the distributions we’ve defined with our estimates. Monte Carlo simulation is a method for
performing mathematical calculations when you are uncertain about the inputs and need to express them in
ranges. For an introduction to Monte Carlo simulation, see: (Christmas cookie video) The advantages of using
Monte Carlo are that it allows for calculations in the face of uncertainty and that it helps us identify which
possible outcomes are more probable by creating a distribution of thousands of results instead of providing a
single point estimate.
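A minimal FAIR-style Monte Carlo can be sketched in a few lines. The uniform draws and the ranges below are simplifying assumptions; real FAIR tooling uses calibrated four-parameter distributions:

```python
# Minimal sketch of a FAIR-style Monte Carlo: draw loss event frequency and
# loss magnitude from estimated ranges, multiply, repeat. Uniform draws and
# the ranges are simplifying assumptions for illustration.
import random

rng = random.Random(7)
TRIALS = 10_000
results = []
for _ in range(TRIALS):
    lef = rng.uniform(2, 8)                    # loss events per year
    magnitude = rng.uniform(50_000, 250_000)   # dollars per loss event
    results.append(lef * magnitude)

results.sort()
# A distribution of outcomes, not a single point estimate:
print(f"10th percentile: ${results[TRIALS // 10]:,.0f}")
print(f"90th percentile: ${results[9 * TRIALS // 10]:,.0f}")
```

The sorted results let you read off any percentile, which is how the simulation shows which outcomes are more probable.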
Accuracy vs. Precision
It’s human nature to provide precise answers to questions we’re asked, meaning that we want to answer with a
single figure or “point estimate,” referring to a single point on a number line. We want to please the person
asking the question by providing them with information that carries an air of certainty because it is so precise.
When we feel we can’t provide certainty, we default to “I don’t know.” Unfortunately, this need for precision
gets us in trouble, especially when answering questions about the future. How many times have you told a
spouse, “I’ll be home at 6:30,” only to arrive closer to 7 and find them miffed that you were home later than they
expected? The need to provide a precise answer is then your downfall. It would be advisable to seek instead to
provide an accurate answer by providing a range within which you are confident the real value will fall. When
someone needs to decide based on uncertain information you are providing, it is better to give them an accurate
but imprecise answer than to provide an answer that is precise but precisely wrong.
On the other hand, you don’t want to provide an estimate that is accurate, meaning the actual value is somewhere within your range, but too wide to be useful. “I’ll be home between 1 pm and 2 am,” while
accurate, does not provide enough precision that it is useful to someone who needs that information to make a
decision. That’s why we go through the calibration process — to arrive at an estimate that is (hopefully)
accurate but with a useful level of precision.
Diminishing Returns
Analysts should be careful not to spend too much time and effort seeking minor improvements in estimates. At
some point, the increased precision of final results is less valuable than the increased time and effort spent
gathering more and more precise estimates. Obtaining a calibrated estimate from an SME who you’ve made
familiar with all readily available data is almost always sufficient for analysis. If more precision is needed
after running the analysis and presenting results, begin with the estimates about which you are most uncertain
and seek additional data that will help you make a more precise estimate.
The One Essential Risk Management Graphic
Originally published on the blog of the FAIR Institute ([Link])
As a board member or executive leader, you must know the fundamentals of effective risk management. One
graphic contains the knowledge you need.
Heat maps, bow-tie diagrams, tornado charts — the world is filled with charts and infographics about risk
management, some valuable and others ...not so much. But my favorite graphic, which I think teaches people the
most about suitable risk management methods, isn’t built in Excel, R, a GRC tool, or any other fancy software
package. While it demonstrates the necessity of quantitative risk management, it doesn’t involve any numbers!
It’s a simple set of text boxes that, once understood, opens the door to the kind of risk management program we
are all striving to build:
Well-Informed Decisions
To manage future losses — indeed, to manage anything — an organization has to make many well-informed decisions. It has to decide which risk scenarios to mitigate, accept, transfer, or avoid. It has to decide which mitigation strategies will most cost-effectively limit future losses. It has to decide how much risk transference to obtain through insurance policies, etc. If the organization doesn’t get these decisions right, it cannot effectively manage risk.
Effective Comparisons
To get those decisions right, the organization needs to be able to draw effective comparisons between options. It
needs to compare the risk associated with two scenarios to understand which is more important to tackle. It
needs to be able to compare the risk reduction and costs associated with two or more mitigation strategies to
decide which option to implement. If the organization can’t make these comparisons, it can’t make
well-informed decisions and, therefore, can’t effectively manage risk.
Meaningful Measurements
Comparing requires measurement. I don’t know which of two roads is longer unless I measure them in a common unit
like miles or kilometers. Similarly, an organization can’t know which risk scenario is scarier until it measures the
risk associated with each one meaningfully. It’s at this measurement stage that most risk management functions
get it wrong.
Comparing scenarios based on subjective “high, medium, low” ratings isn’t a meaningful measurement — how
can you be sure those terms are applied consistently across analysts? How do you know what “medium” really
means? Are the scenarios in the “very high” section of the heat map equally scary? How red is “red?” Even if
you sign off on definitions for each rating, they are likely filled with squishy terms like “significant” or
“considerable.” We don’t slap “high, medium, low” labels on financial statements like revenue projections, so
why do we continue to accept those labels when managing risk? Instead of deciphering layers of subjective
language open to individual interpretation, let’s try discussing risk the way we experience it: in money lost in the
future.
The meaningful risk measurement from a given scenario is the forecasted range of probable loss the
organization will experience over a given time frame. This forecasted range is expressed in dollars (or other
relevant currency) and is derived from estimates of how many times the bad thing might happen and how much
money it might cost us each time it does. We call these factors loss event frequency and loss magnitude, respectively.
These forecasted ranges are meaningful because they’re in the same unit where we experience loss and don’t fall
prey to the potential misunderstandings and lack of clarity we get when we use subjective qualitative labels like
“medium risk.” We may disagree on what “medium risk” means, but we know exactly what it means when we
hear, “We forecast losses of between $200,000 and $800,000 with a most likely value of $450,000 over the next
year.” Forecasting future losses in dollars provides a meaningful measurement for comparisons, enabling
well-informed decisions and, ultimately, effective risk management.
Yet most risk management practitioners continue to falter when making meaningful risk measurements, ultimately hindering their organization’s ability to manage risk effectively. With the FAIR model, you can leverage accurate modeling of risk scenarios and carry that value up the stack to the top — genuinely effective risk management.
The MIT Technology Review recently published an article about “cyber threats.” While the article identifies
trending attack methods and scenarios to be concerned about, none of the things that made the list are threats.
Keep in mind that a threat is a person, group of people, a force of nature, etc., that can act against an asset in a way that results in loss. Take a look at the list MIT published:
“More huge data breaches” may be the outcome when threat actors act against your assets and overcome your
defenses, but breaches are not the threat actors themselves. The breach is not the person or thing taking the
action; it is the result of the action.
“Ransomware in the cloud” is a method that threat actors may employ, but ransomware itself is not a threat.
“The weaponization of AI” -- Until AI takes over and independently decides to pursue threatening actions
against organizations, AI will not be a threat. Like ransomware, it will continue to be a tool threats use to cause
loss.
“Cyber-physical attacks” -- While certainly concerning, cyber attacks against physical infrastructure are another attack method threat actors may pursue, not a threat in themselves.
“Mining cryptocurrencies” -- Threat actors may harness your computer’s processing power to mine
cryptocurrencies, but the mining itself isn’t a threat.
The MIT Technology Review presented a confusing list of outcomes and attack methods, all grouped under the
misused term “threats.”
Articles like this reinforce the need for risk management professionals, cyber security analysts and
managers, and the cybersecurity press to align on common vocabulary to improve communication and
enhance our collective effectiveness and credibility.
Toward that end, let’s revisit the definitions of the threat-related terms used in the FAIR model and FAIR-based
analysis.
How do you identify a threat? Ask yourself, “Who or what can act against my asset in a way that could cause
loss?” A threat can be a specific person, a group of people like hacktivists or cybercriminals, a force of nature
like a tornado, or a self-propagating virus (whether of the type that infects computers or humans). Whether
sentient or not, all of these entities act on assets in a way that can result in loss. That’s what makes them threats.
None of the items on MIT’s list act against assets; they are ways to act against assets or outcomes when
cybercriminals successfully act against assets.
Threat event frequency refers to the number of times over a given timeframe (typically a year) that a threat
will act against the asset you’re concerned about in a way that could result in loss. How often will a hurricane
threaten a processing facility, possibly taking it offline? How often will nation-state hackers attempt to breach
the confidentiality of your organization’s sensitive information? Note that the threat event is the attempt —
once the threat has impacted the asset in a way that causes realized losses, a loss event has occurred.
Whether a loss event occurs depends on the asset’s vulnerability to the given threat event. In FAIR terms,
vulnerability is the percentage of threat events of the type in scope for your analysis that will result in loss.
Vulnerability is derived from Threat Capability and Resistance Strength. How skilled, well-resourced, and
knowledgeable are the attackers? How much skill, knowledge, and resourcing is required to overcome the
controls defending the asset from threat events? These two variables are measured using percentiles along the
threat capability continuum, ranging from the most inept attackers to the most advanced. While analysts rarely
need to estimate at this level of the model, you must understand what Threat Capability and Resistance Strength represent and
how they work so you’re thinking with the right mindset when you estimate Vulnerability directly.
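The derivation can be sketched by sampling both percentiles and counting how often capability exceeds resistance. The ranges below are illustrative, and uniform sampling is a simplification:

```python
# Sketch of deriving Vulnerability from Threat Capability and Resistance
# Strength, both expressed as percentile ranges on the threat capability
# continuum. Vulnerability ~= probability a sampled attacker's capability
# exceeds the sampled resistance strength. Ranges and uniform draws are
# illustrative assumptions.
import random

rng = random.Random(3)
TRIALS = 100_000
successes = 0
for _ in range(TRIALS):
    tcap = rng.uniform(40, 90)  # attacker capability percentile
    rs = rng.uniform(60, 85)    # resistance strength percentile
    if tcap > rs:
        successes += 1

print(f"Estimated vulnerability: {successes / TRIALS:.0%}")
```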
If we are all diligent in our use of these threat-related terms, we can contribute to the eventual adoption of the FAIR lexicon, a critical step in the advancement of the risk management profession toward better communication and clearer thinking.
OpenFAIR Exam Practice Questions
1. The scenarios one chooses to analyze and the estimates one makes for the variables of the FAIR
model should be based on what is ____________, not what is _____________.
a. certain, possible
b. probable, uncertain
c. uncertain, probable
d. probable, possible
2. “Based on our estimates and modeling, we can expect between $600,000 and $1.9M in losses from this scenario over the next year.” This statement describes what is:
a. Possible
b. Predicted
c. Certain
d. Probable
3. Order the elements of the “risk management stack” from top to bottom:
1) effective comparisons
2) accurate modeling
3) effective risk management
4) meaningful measurements
5) well-informed decisions
a. 2, 4, 1, 5, 3
b. 1, 4, 5, 2, 3
c. 3, 5, 1, 4, 2
d. 3, 1, 5, 4, 2
5. What variables can be used to derive Threat Event Frequency (TEF) in cases where an analyst
chooses not to estimate TEF?
6. In analyzing a scenario around the physical robbery of a bank branch, the number of times a robber
will enter the bank branch is best represented by which FAIR variable?
7. Privileged insiders within your organization access sensitive information all the time as a requirement of fulfilling their job duties. This is an example of
a. Random contact
b. Regular contact
c. Intentional contact
d. Expected contact
8. Which of the following is the correct definition of Probability of Action used in the FAIR
model/methodology?
a. The probability that, once a threat agent has come into contact with an asset, they will decide to pursue a
threat action against the asset
b. The probability that a threat agent will successfully come into contact with an asset
c. The probability that a threat event will successfully result in loss to the organization
d. The probability that the organization will detect the threat agent and take some action to respond to the
threat agent’s presence/actions.
9. Which of the following statements is NOT true?
a. Vulnerability can be derived, beginning by making estimates of Threat Capability and Resistance
strength along the Threat Capability Continuum
b. Secondary Losses are losses that your organization incurs due to the reactions of secondary stakeholders
to the primary loss event
c. In any analysis, primary losses will be greater than secondary losses
d. Loss Event Frequency can be derived by making estimates of Threat Event Frequency and Vulnerability
10. Which of the following is the correct definition for Vulnerability as used in the FAIR model?
a. The probability that, once a threat agent has come into contact with an asset, they will successfully cause
loss to the asset
b. The likelihood that the organization will experience any loss events over a given timeframe
c. An individual weakness associated with a given asset, like a section of bad code that needs patching
d. The probability that a threat event will result in loss to the organization
11. What are the required elements that must be defined to have a properly scoped risk scenario?
a. The asset and the primary and secondary stakeholders who would be affected if anything happened to the
asset
b. The asset involved, the threat community who seeks to harm the asset, and the forms of loss that would be
involved if the threats were to succeed
c. The analysts, decision-makers, and subject-matter experts who will be involved in the analysis effort
d. The asset of concern, the threat community who seeks to harm the asset, and the effect the threat community
seeks to have on the asset
12. How many variables on the Loss Event Frequency side of the FAIR model are represented as counts (as opposed to percentages or dollars)?
a. 2
b. 3
c. 4
d. 5
13. Why are the fragile and unstable risk modifiers/qualifiers important to include in the
communication of analysis results?
14. The results of the analysis show that a single control is currently limiting risk because it effectively
blocks 99% of hundreds of threat events. This is an example of a situation where the ______________
should be included in any discussion of results.
15. Which of the following is NOT a reason why a complete analysis scope must be clearly identified
and documented:
a. So that analysts fully understand the scenario being analyzed and can pinpoint relevant information
b. So that SMEs know the full details of the scenarios for which they are being asked to provide estimates
c. So that the audience for the analysis results fully understands what was in or out of scope for the
analysis conducted
d. So that assumptions can remain unstated and impact the interpretation of the results by various
stakeholders
16. Which of the following forms of loss would be least likely to manifest in the secondary phase of
loss?
17. Which of the following is NOT a required element of documenting a strong rationale for
provided estimates:
18. Which of the following actions should an analyst take when confronted with SMEs who disagree
about an estimate for a given variable of the FAIR model?
a. Ask each SME about the underlying factors that compose the variable being estimated to identify sources of
disagreement
b. Combine the estimates of the two SMEs into one wide range with more uncertainty
c. Run the analysis twice, once with each SME’s estimate, to see if either estimate generates nonsensical
results
d. All of the above are acceptable actions to take in the event of an intractable disagreement between SMEs
20. When playing a calibration game like the spinner, choosing one’s provided range instead of
spinning the spinner indicates
22. When making estimates and generating analysis results, the goal should be
23. Which of the following is NOT one of the four parameters needed to make an estimate ready for use in the FAIR model?
a. A minimum value
b. A most likely value
c. A level of confidence in the range provided
d. A level of confidence in the most likely value selected
a. 5, 3, 1, 4, 2
b. 3, 5, 4, 1, 2
c. 5, 1, 4, 3, 2
d. 5, 1, 3, 4, 2
25. Which of the following might be a manifestation of replacement costs that must be accounted for in a
FAIR analysis?
Answer Key
1. D
2. D
3. C
4. B
5. B
6. C
7. B
8. A
9. C
10. D
11. D
12. B
13. A
14. C
15. D
16. C
17. D
18. D
19. B
20. A
21. B
22. C
23. C
24. A
25. C