
Conference Paper · June 2012
DOI: 10.1109/DSN.2012.6263942



Time-Efficient and Cost-Effective Network Hardening Using Attack Graphs

Massimiliano Albanese∗, Sushil Jajodia∗†, and Steven Noel∗


∗ Center for Secure Information Systems
George Mason University
4400 University Drive, Fairfax, VA 22030
Email: {malbanes,jajodia,snoel}@gmu.edu
† The MITRE Corporation
7515 Colshire Drive, McLean, VA 22102-7539

Abstract—Attack graph analysis has been established as a powerful tool for analyzing network vulnerability. However, previous approaches to network hardening look for exact solutions and thus do not scale. Further, hardening elements have been treated independently, which is inappropriate for real environments. For example, the cost for patching many systems may be nearly the same as for patching a single one. Or patching a vulnerability may have the same effect as blocking traffic with a firewall, while blocking a port may deny legitimate service. By failing to account for such hardening interdependencies, the resulting recommendations can be unrealistic and far from optimal. Instead, we formalize the notion of hardening strategy in terms of allowable actions, and define a cost model that takes into account the impact of interdependent hardening actions. We also introduce a near-optimal approximation algorithm that scales linearly with the size of the graphs, which we validate experimentally.

Keywords—network hardening, vulnerability analysis, attack graphs, intrusion prevention, reliability.

I. INTRODUCTION

Attackers can leverage the complex interdependencies of network configurations and vulnerabilities to penetrate seemingly well-guarded networks. In-depth analysis of network vulnerabilities must consider attacker exploits not merely in isolation, but in combination. Attack graphs reveal such threats by enumerating potential paths that attackers can take to penetrate networks. This helps determine whether a given set of network hardening measures provides safety of given critical resources.

Attack graph analysis can be extended to automatically generate recommendations for hardening networks. One must consider combinations of network conditions to harden, which has corresponding impact on removing paths in the attack graph. Further, one can generate hardening solutions that are optimal with respect to some notion of cost. Such hardening solutions prevent the attack from succeeding, while minimizing the associated costs.

However, as we show, the general solution to optimal network hardening scales exponentially, as the number of hardening options itself scales exponentially with the size of the attack graph. In applying network hardening to realistic network environments, it is crucial that the algorithms are able to scale. Progress has been made in reducing the complexity of attack graph manipulation so that it scales quadratically (linearly within defined security zones) [1]. However, previous approaches for generating hardening recommendations search for exact solutions [2], which is an intractable problem.

Another limitation of previous work is the assumption that network conditions are hardened independently. This assumption does not hold true in real network environments. Realistically, network administrators can take actions that affect vulnerabilities across the network, such as pushing patches out to many systems at once. Further, the same hardening result may be obtained through more than one action. Overall, to provide realistic recommendations, our hardening strategy must take such factors into account.

We remove the assumption of independent hardening actions. Instead, we define a network hardening strategy as a set of allowable atomic actions that involve hardening multiple network conditions. We introduce a formal cost model that accounts for the impact of these hardening actions. This allows the definition of hardening costs that accurately reflect realistic network environments. Because computing the minimum-cost hardening solution is intractable, we introduce an approximation algorithm for optimal hardening. This algorithm finds near-optimal solutions while scaling almost linearly – for certain values of the parameters – with the size of the attack graph, which we validate experimentally. Finally, we determine the theoretical upper bound for the worst-case approximation ratio, and show that, in practice, the approximation ratio is much lower than such bound.

The paper is organized as follows. Section II discusses related work. Section III recalls some preliminary definitions, whereas Section IV provides a motivating example. Then Section V introduces the proposed cost model, and Section VI describes our approach to time-efficient and cost-effective network hardening. Finally, Section VII reports experimental results, and Section VIII gives some concluding remarks and indicates further research directions.

The work presented in this paper is supported in part by the Army Research Office under MURI award number W911NF-09-1-0525, and by the Office of Naval Research under award number N00014-12-1-0461.

978-1-4673-1625-5/12/$31.00 ©2012 IEEE


II. RELATED WORK

A number of tools are available for scanning network vulnerabilities, such as Nessus [3], but these only report isolated vulnerabilities. Attack graphs are constructed by analyzing the inter-dependencies between vulnerabilities and security conditions that have been identified in the target network [4], [5], [6], [7], [8], [9], [10], [11], [12], [13]. Such analysis can be either forward, starting from the initial state [8], [12], or backward from the goal state [9], [11]. Model checking was first used to analyze whether the given goal state is reachable from the initial state [9], [14], but later used to enumerate all possible sequences of attacks between the two states [11], [15].

The explicit attack sequences produced by a model checker face a serious scalability issue, because the number of such sequences is exponential in the number of vulnerabilities multiplied by the number of hosts. To avoid such combinatorial explosion, a more compact representation of attack graphs was proposed in [4]. The monotonicity assumption underlies this representation, i.e., an attacker never relinquishes any obtained capability. This newer representation can thus keep exactly one vertex for each exploit or security condition, leading to an attack graph of polynomial size (in the total number of vulnerabilities and security conditions).

Attack graphs have also been used for correlating intrusion alerts into attack scenarios [16], [17]. Such alert correlation methods are parallel to our work, because they aim to employ the knowledge encoded in attack graphs for detecting and taking actions against actual intrusions, whereas our work aims to harden the network before any intrusion may happen.

Algorithms exist to find the set of exploits from which the goal conditions are reachable [4]. This eliminates some irrelevant exploits from further consideration because they do not contribute to reaching the goal condition. The minimal critical attack set is a minimal set of exploits in an attack graph whose removal prevents attackers from reaching any of the goal states [4], [11], [15]. The minimal critical attack set thus provides solutions to harden the network. However, these methods ignore the critical fact that consequences cannot be removed without removing the causes. The exploits in the solution usually depend on other exploits that also need to be disabled. The solution is thus not directly enforceable. Moreover, after taking into account those implied exploits, the solution is no longer minimum.

A more effective solution to automate the task of hardening a network against multi-step intrusions was proposed by Wang et al. in [2]. Unlike previous approaches, which require removing exploits, this solution focuses on initially satisfied conditions only. Initial conditions can be disabled, leading to a readily deployable solution. However, Wang et al. assumed that initial conditions can be independently disabled. Although this is true in some cases, there may exist dependencies between initial conditions, such that removing certain initial conditions may also disable additional conditions, and this might not be necessary to harden the network. The work presented in this paper differs significantly from previous work in that we (i) drop the assumption that initial conditions can be independently disabled; (ii) introduce a formal cost model; and (iii) present an approximation algorithm that generates suboptimal solutions efficiently.

III. PRELIMINARIES

Attack graphs represent prior knowledge about vulnerabilities, their dependencies, and network connectivity. Two different representations are possible for an attack graph. First, an attack graph can explicitly enumerate all possible sequences of vulnerabilities an attacker can exploit to reach a target, i.e., all possible attack paths. However, such graphs face a combinatorial explosion in the number of attack paths. Second, with a monotonicity assumption stating an attacker never relinquishes obtained capabilities, an attack graph can record the dependencies among vulnerabilities and keep attack paths implicitly without losing any information. The resulting attack graph has no duplicate vertices and hence has a polynomial size in the number of vulnerabilities multiplied by the number of connected pairs of hosts.

In this paper, we adopt the definition of attack graph presented in [17], which assumes the latter notion of attack graphs.

Definition 1 (Attack graph): Given a set of exploits E, a set of security conditions C, a require relation Rr ⊆ C × E, and an imply relation Ri ⊆ E × C, an attack graph G is the directed graph G = (E ∪ C, Rr ∪ Ri), where E ∪ C is the vertex set and Rr ∪ Ri the edge set.

We denote an exploit as a predicate v(hs, hd), indicating an exploitation of vulnerability v on the destination host hd, initiated from the source host hs. Similarly, we write v(h) for exploits involving only local host h.

A security condition is a predicate c(hs, hd) that indicates a satisfied security-related condition c involving the source host hs and the destination host hd (when a condition involves a single host, we simply write c(h)). Examples of security conditions include the existence of a vulnerability on a given host or the connectivity between two hosts. Initial conditions are a special subset of security conditions, as defined below [2].

Definition 2 (Initial conditions): Given an attack graph G = (E ∪ C, Rr ∪ Ri), initial conditions refer to the subset of conditions Ci = {c ∈ C | ∄e ∈ E s.t. (e, c) ∈ Ri}, whereas intermediate conditions refer to the subset C \ Ci.

Intermediate conditions are usually consequences of exploits and hence cannot be disabled without removing the causes. Instead, initial conditions are not created through the execution of exploits, thus they might be removed. Without loss of generality, we will explicitly model only initial conditions that can be disabled, and omit initial
conditions that network administrators cannot control, such as privileges on the attacker's machine.

Figure 1. An example of attack graph, including initial conditions (purple ovals), exploits (green rectangles), and intermediate conditions (blue ovals)

IV. MOTIVATING EXAMPLE

Figure 1 depicts an example of attack graph, with exploits appearing as rectangles and conditions as ovals. Purple ovals represent initial conditions, whereas blue ovals represent intermediate conditions. Some modeling simplifications have been made, such as combining transport-layer ftp connectivity between two hosts hs and hd, physical-layer connectivity, and the existence of the ftp daemon on host hd into a single condition ftp(hs,hd). In this example, we assume that our objective is to harden the network with respect to target condition root(2), i.e., we want to prevent the attacker from gaining root privileges on host 2. The scenario depicted in Figure 1 is relatively simple, with three hosts – denoted host 0, 1, and 2 respectively – and four types of vulnerabilities – ftp_rhosts, rsh, sshd_bof, and local_bof. However, because multiple interleaved attack paths can lead to the goal condition, an optimal solution to harden the network is still not apparent from the attack graph itself, and finding such a solution by hand may not be trivial. As an example of attack path, the attacker can first establish a trust relationship from his machine (host 0) to host 2 (condition trust(2,0)) via the exploit ftp_rhosts(0,2) on host 2, then gain user privileges on host 2 (condition user(2)) with an rsh login (exploit rsh(0,2)), and finally achieve the goal condition root(2) using a local buffer overflow attack on host 2 (exploit local_bof(2)). The following are some of the valid attack paths that can be generated using existing algorithms [4].

• ftp_rhosts(0,2), rsh(0,2), local_bof(2)
• ftp_rhosts(0,1), rsh(0,1), ftp_rhosts(1,2), rsh(1,2), local_bof(2)
• sshd_bof(0,1), ftp_rhosts(1,2), rsh(1,2), local_bof(2)

Intuitively, to prevent the goal condition from being satisfied, a solution to network hardening must break all the attack paths leading to the goal. This intuition was captured by the concept of critical set, that is, a set of exploits (and corresponding conditions) whose removal from the attack graph will invalidate all attack paths. It has also been shown that finding critical sets with the minimum cardinality is NP-hard, whereas finding a minimal critical set (that is, a critical set with no proper subset being a critical set) is polynomial. Based on the above attack paths, there are many minimal critical sets, such as {rsh(0,2), rsh(1,2)}, {ftp_rhosts(0,2), rsh(1,2)}, {ftp_rhosts(1,2), rsh(0,2)}, and so on. If any of those sets of exploits could be completely removed, all the attack paths would become invalid, and hence the target condition would be unreachable. Unfortunately, the above solution ignores the following important fact. Not all exploits are under the direct control of administrators. An exploit can only be removed by disabling its required conditions, but not all conditions can be disabled at will. Intuitively, a consequence cannot be removed without removing its causes. Some conditions are implied by other exploits. Such intermediate conditions cannot be independently disabled without removing the exploits that imply them. Only those initial conditions that are not implied by any exploit can be disabled independently of other exploits. Hence, it is important to distinguish between these two kinds of conditions, as formalized in Definition 2.

For instance, in Figure 1, exploit rsh(1,2) cannot be independently removed, because the two conditions it requires, trust(2,1) and user(1), are both intermediate conditions and cannot be independently disabled. As long as an attacker can satisfy those two conditions through other exploits (for example, ftp_rhosts(1,2) and sshd_bof(2,1)), the realization of the exploit rsh(1,2) is unavoidable.
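To make the distinction of Definitions 1 and 2 concrete, the following Python sketch (our own illustration, not the authors' code; the variable names and the small fragment of the Figure 1 graph encoded here are our assumptions) represents an attack graph as the two relations Rr and Ri, and derives the initial conditions as exactly those conditions that no exploit implies:

```python
# Sketch: a fragment of the example attack graph as require/imply relations.
# require: pairs (condition, exploit) in Rr; imply: pairs (exploit, condition) in Ri.
require = {("ftp(1,2)", "ftp_rhosts(1,2)"), ("user(1)", "ftp_rhosts(1,2)"),
           ("trust(2,1)", "rsh(1,2)"), ("user(1)", "rsh(1,2)"),
           ("sshd(0,1)", "sshd_bof(0,1)")}
imply = {("ftp_rhosts(1,2)", "trust(2,1)"), ("rsh(1,2)", "user(2)"),
         ("sshd_bof(0,1)", "user(1)")}

conditions = {c for c, _ in require} | {c for _, c in imply}
implied = {c for _, c in imply}           # conditions produced by some exploit
initial = conditions - implied            # Definition 2: no exploit implies them
intermediate = conditions & implied       # the remaining, C \ Ci
```

On this fragment, trust(2,1) and user(1) come out as intermediate conditions, matching the discussion of rsh(1,2) above, while ftp(1,2) and sshd(0,1) are initial.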
Hence, any of the above minimal critical sets, such as {rsh(0,2), rsh(1,2)}, is theoretically a sound solution, but practically not enforceable. For this reason, the approach proposed in [2] relies on initial conditions only. However, it has some limitations that we address in this paper.

The approach of [2] has no explicit cost model and assumes that each initial condition can be independently disabled. Thus, even when all possible solutions are enumerated, determining the one with the minimum cost is based either on a qualitative notion of cost or on simply counting the conditions that need to be disabled. For the attack graph of Figure 1, the algorithm in [2] returns two solutions, {ftp(0,2), ftp(1,2)} and {ftp(0,2), ftp(0,1), sshd(0,1)}¹. At this point, there is no clear procedure to decide which solution has the minimum cost, unless we make the assumption that the cost of removing each individual condition is assigned by administrators.

¹ Initial conditions that the administrators cannot control are not considered for the purpose of network hardening. In the example of Figure 1, the condition user(0), corresponding to user privileges on the attacker's machine, is ignored.

Intuitively, one may expect the solution {ftp(0,2), ftp(1,2)} to have a lower cost than {ftp(0,2), ftp(0,1), sshd(0,1)}, as fewer conditions need to be disabled. However, removing both ftp(0,2) and ftp(1,2) may only be possible if the ftp service on host 2 is shut down. This action may have a considerable cost in terms of disruption to legitimate users. In this case, the combined cost of removing the conditions {ftp(0,2), ftp(0,1), sshd(0,1)} may be lower, as it may be achieved by simply blocking all traffic from host 0.

To conclude, note that the attack graph of Figure 1 has the same hardening solutions as the simplified attack graph of Figure 2. This is possible because the algorithm in [2] traverses the graph from target conditions to initial conditions, and, relying on the monotonicity assumption, breaks all the cycles. Intuitively, from the point of view of a target condition, the attack graph can be seen as a tree rooted at the target condition and having initial conditions as the leaf nodes. In fact, each condition is implied by one or more exploits. In turn, each exploit requires one or more preconditions to be satisfied. We leverage this observation in our approach to network hardening.

Figure 2. A tree-style attack graph equivalent to the graph of Figure 1 w.r.t. target condition root(2)

V. COST MODEL

Disabling a set of initial conditions in order to prevent attacks on given targets may result in undesired effects, such as denial of service to legitimate users. These effects are greatly amplified when initial conditions cannot be individually disabled, but rather require actions that disable a larger number of conditions. In the following, we define a network hardening strategy as a set of atomic actions that can be taken to harden a network. For instance, an allowable hardening action may consist in stopping ftp service on a given host. Thus, each action may have additional effects besides disabling a desired condition. Such effects must be taken into account when computing minimum-cost solutions. Previous work simply assumes that initial conditions can be individually disabled. We take a more general approach and therefore drop this assumption. For instance, in the attack graph of Figure 1, disabling ftp(1,2) might not be possible without also disabling ftp(0,2).

Definition 3 (Allowable hardening action): Given an attack graph G = (E ∪ C, Rr ∪ Ri), an allowable hardening action (or simply hardening action) A is any subset of the set Ci of initial conditions such that all the conditions in A can be jointly disabled in a single step, and no other initial condition c ∈ Ci \ A is disabled when conditions in A are disabled.

A hardening action A is said to be minimal if and only if ∄A* ⊂ A s.t. A* is an allowable hardening action. We use A to denote the set of all possible hardening actions.

Figure 3 depicts the same attack graph of Figure 2, but it explicitly shows the allowable hardening actions, represented as rounded rectangles. Dashed edges indicate which conditions are disabled by each action. Intuitively, a network hardening action is an atomic step that network administrators can take to harden the network (e.g., closing an ftp port). When an action A is taken, all and only
the conditions in A are removed². In the example of Figure 3, A = {stop_ftp(2), block_host(0), stop_sshd(1)}, stop_ftp(2) = {ftp(0,2), ftp(1,2)}, block_host(0) = {ftp(0,1), sshd(0,1), ftp(0,2)}, and stop_sshd(1) = {sshd(0,1)}. In this example, the condition ftp(1,2) cannot be individually disabled, and can only be disabled by taking action stop_ftp(2), which also disables ftp(0,2)³.

Figure 3. Possible hardening actions (orange rectangles) for the attack graph of Figure 2

Therefore, when choosing a set of initial conditions to be removed in order to prevent attacks on given targets, we should take into account all the implications of removing those conditions. Removing specific initial conditions may require taking actions that disable additional conditions, including conditions not explicitly modeled in the attack graph, such as conditions that are not part of any attack path. To address this problem, we formalize the notion of hardening strategy in terms of allowable actions, and define a cost model that takes into account the impact of hardening actions. This novel approach improves the state of the art, while preserving the key idea that solutions are truly enforceable only if they operate on initial conditions.

First, we drop the assumption that initial conditions can be individually disabled. In our framework, this simplifying assumption corresponds to the special case where, for each initial condition, there exists an allowable action that disables that condition only, i.e., (∀c ∈ Ci)(∃A ∈ A) A = {c}. We then define the notion of network hardening strategy in terms of allowable actions.

Definition 4 (Network hardening strategy): Given an attack graph G = (E ∪ C, Rr ∪ Ri), a set A of allowable actions, and a set of target conditions Ct = {c1, ..., cn}, a network hardening strategy (or simply hardening strategy) S is a set of network hardening actions {A1, ..., Am} s.t. conditions c1, ..., cn cannot be reached after all the actions in S have been taken. We use S to denote the set of all possible strategies, and C(S) to denote the set of all the conditions disabled under strategy S, i.e., C(S) = ∪_{A∈S} A.

Intuitively, a hardening strategy is a set of allowable actions breaking all attack paths leading to the target conditions.

We now introduce a cost model, enabling a more accurate analysis of available hardening options.

Definition 5 (Hardening cost function): A hardening cost function is any function cost : S → R⁺ that satisfies the following axioms:

cost(∅) = 0    (1)

(∀S1, S2 ∈ S) (C(S1) ⊆ C(S2) ⇒ cost(S1) ≤ cost(S2))    (2)

(∀S1, S2 ∈ S) (cost(S1 ∪ S2) ≤ cost(S1) + cost(S2))    (3)

In other words, the above definition requires that (i) the cost of the empty strategy – the one not removing any condition – is 0; (ii) if the set of conditions disabled under S1 is a subset of the conditions disabled under S2, then the cost of S1 is less than or equal to the cost of S2 (monotonicity); and (iii) the cost of the combined strategy S1 ∪ S2 is less than or equal to the sum of the individual costs of S1 and S2 (triangular inequality).

Combining the three axioms above, we can conclude that (∀S1, S2 ∈ S) (0 ≤ max(cost(S1), cost(S2)) ≤ cost(S1 ∪ S2) ≤ cost(S1) + cost(S2)).

A cost function is said to be additive if and only if the following additional axiom is satisfied:

(∀S1, S2 ∈ S) (S1 ∩ S2 = ∅ ⟺ cost(S1) + cost(S2) = cost(S1 ∪ S2))    (4)

Many different cost functions may be defined. The following is a very simple cost function:

costa(S) = |C(S)|    (5)

The above cost function simply counts the initial conditions that are removed under a network hardening strategy S, and clearly satisfies the three axioms of Definition 5. If actions in A are pairwise disjoint, then costa is also additive.

² In practice, an action may also remove conditions not explicitly modeled in the attack graph, and this should be taken into account when assigning a cost to each action.
³ More precisely, all conditions of the form ftp(x,2), where x is any host, are disabled by action stop_ftp(2).
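To make the cost model concrete, the following Python sketch (our own illustration, not part of the paper) encodes the three allowable actions of the example above and the simple counting cost function costa of Equation 5, on which the axioms of Definition 5 are easy to verify:

```python
# Sketch: the example's allowable actions (sets of initial conditions each
# action disables) and the counting cost function cost_a(S) = |C(S)|.
actions = {
    "stop_ftp(2)":   {"ftp(0,2)", "ftp(1,2)"},
    "block_host(0)": {"ftp(0,1)", "sshd(0,1)", "ftp(0,2)"},
    "stop_sshd(1)":  {"sshd(0,1)"},
}

def C(strategy):
    """Conditions disabled under a strategy: C(S) = union of its actions."""
    return set().union(*(actions[a] for a in strategy))

def cost_a(strategy):
    """Equation 5: count the initial conditions removed under S."""
    return len(C(strategy))
```

For example, cost_a is 0 for the empty strategy, and adding block_host(0) to {stop_ftp(2)} raises the cost from 2 to 4, never by more than the cost of the added action (triangular inequality).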
VI. NETWORK HARDENING

In this section, we first examine in more detail the limitations of the approach proposed in [2], and then introduce our approximation algorithm to find reasonably good hardening strategies in a time-efficient manner.

A. Limitations of Previous Approach

The algorithm presented in [2] starts from a set Ct of target conditions and traverses the attack graph backwards, making logical inferences. At the end of the graph traversal, a logic proposition of the initial conditions is derived as the necessary and sufficient condition for hardening the network with respect to Ct. This proposition then needs to be converted to its disjunctive normal form (DNF), with each disjunction in the DNF representing a particular sufficient option to harden the network. Although the logic proposition can be derived efficiently, converting it to its DNF may incur an exponential explosion.

Algorithm BackwardSearch (Algorithm 1) is functionally equivalent to the one described in [2] – in that it generates all possible hardening solutions⁴ – under the simplifying hypothesis that initial conditions can be individually disabled, i.e., (∀ci ∈ Ci)(∃A ∈ A)(A = {ci}). However, our rewriting of the algorithm has several advantages over its original version. First, it is more general, as it does not assume that initial conditions can be individually disabled, and incorporates the notions of allowable action and hardening strategy defined in Section V. Second, it directly computes a set of possible hardening strategies, rather than a logic proposition that requires additional processing in order to provide actionable intelligence. Last, in a time-constrained or real-time scenario where one may be interested in the first available hardening solution, the rewritten algorithm can be easily modified to terminate as soon as a solution is found. To this aim, it is sufficient to change the condition of the main while loop (Line 3) to (∄S ∈ S)(S ⊆ Ci). Such a variant of the algorithm will generate hardening strategies that disable initial conditions closer to the target conditions. However, when used to find the minimum-cost hardening solution, Algorithm BackwardSearch still faces the combinatorial explosion described below. Instead, the algorithm introduced in Section VI-B provides a balance between the optimality of the solution and the time to compute it.

Under the simplifying hypothesis that initial conditions can be individually disabled – i.e., (∀ci ∈ Ci)(∃A ∈ A)(A = {ci}) – and allowable actions are pairwise disjoint – i.e., (∀Ai, Aj ∈ A)(Ai ∩ Aj = ∅) – it can be proved that, in the worst case, the number of possible hardening strategies is

|S| = |Ct| · n^(∑_{k=1}^{d/2} n^k)    (6)

and the size of each solution is n^(d/2), where d is the maximum distance (number of edges) between initial and target conditions⁵, and n is the maximum in-degree of nodes in the attack graph. Worst-case complexity is then O(n^(n^d)). The proof is omitted for reasons of space.

The authors of [2] rely on the assumption that the attack graph of a small and well-protected network is usually small and sparse (the in-degree of each node is small); thus, even if the complexity is exponential, running time should be acceptable in practice. However, the result above shows that computing an optimal solution may be impractical even for relatively small attack graphs. For instance, consider the attack graph of Figure 4, where n = 2, Ct = {c21}, and d = 4. According to Equation 6, there are 64 possible hardening strategies in the worst case, each of size 4. The strategy that disables the set of initial conditions {c1, c3, c9, c11} is one of such possible strategies. When d = 6, the number of initial conditions is 64, and the number of possible strategies becomes 16,384. For d = 8, |Ci| = 256 and the number of possible strategies is over a billion.

B. Approximation Algorithm

To address the limitations of the previous network hardening algorithm, we now propose an approximation algorithm that computes reasonably good solutions in a time-efficient manner. We will show that, under certain conditions, the solutions computed by the proposed algorithm have a cost that is bound to be within a constant factor of the optimal cost.

Algorithm ForwardSearch (Algorithm 2) traverses the attack graph forward, starting from initial conditions. A first key advantage of traversing the attack graph forward is that intermediate solutions are indeed network hardening strategies with respect to intermediate conditions. In fact, in a single pass, Algorithm ForwardSearch can compute hardening strategies with respect to any condition in C. To limit the exponential explosion of the search space, intermediate solutions can be pruned – based on some pruning strategy – whereas pruning is not possible for the algorithm that traverses the graph backwards. In fact, in this case, intermediate solutions may contain exploits and intermediate conditions, and we cannot say anything about their cost until all the exploits and intermediate conditions have been replaced with sets of initial conditions.

In this section, for ease of presentation, we consider hardening problems with a single target condition. The generalization to the case where multiple target conditions need to be hardened at the same time is straightforward and is discussed below.

Given a set Ct of target conditions, we add a dummy exploit ei for each condition ci ∈ Ct, such that ei has ci as its only precondition, as shown in Figure 5. Then, we add a

⁴ For ease of presentation, the pseudocode of Algorithm 1 does not show how cycles are broken. This is done as in the original algorithm.
⁵ Note that d is always an even number.
Algorithm 1 BackwardSearch(G, Ct )
Input: Attack graph G = (E ∪ C, Rr ∪ Ri ), and set of target conditions Ct .
Output: Optimal hardening strategy.
1: // Initialize the set of all solutions and iterate until solutions contain initial conditions only
2: S ← {Ct }
3: while (∃S ∈ S)(S ⊈ Ci ) do
4: // Replace each non-initial condition with the set of exploits that imply it
5: for all S ∈ S do
6: for all c ∈ S s.t. c ∈ / Ci do
7: S ← S \ {c} ∪ {e ∈ E | (e, c) ∈ Ri }
8: end for
9: end for
10: // Replace exploits with required conditions and generate all possible combinations
11: for all S = {e1 , . . . , em } ∈ S do
12: S ← S \ {S} ∪ {{c1 , . . . , cm } | (∀i ∈ [1, m]) (ci , ei ) ∈ Rr }
13: end for
14: end while
15: // Replace initial conditions with allowable actions and generate all possible combinations
16: for all S = {c1 , . . . , cn } ∈ S do
17: S ← S \ {S} ∪ {{A1 , . . . , An } | (∀i ∈ [1, n]) Ai ∈ A ∧ ci ∈ Ai }
18: end for
19: return argmin_{S ∈ S} cost(S)
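For concreteness, Algorithm 1 can be rendered as a naive Python sketch. This is not the authors' implementation: cycle breaking is omitted (as in the pseudocode), the `implies`/`requires` adjacency maps and the action and cost encodings are hypothetical names introduced here, and the cost function is assumed additive.

```python
from itertools import product

def backward_search(targets, implies, requires, initial, actions, costs):
    """Naive sketch of Algorithm 1 (BackwardSearch) on an acyclic attack graph.
    implies[c]  : exploits e with (e, c) in Ri, i.e., e implies condition c
    requires[e] : conditions c with (c, e) in Rr, i.e., e requires condition c
    actions     : allowable action name -> set of initial conditions it disables
    """
    solutions = {frozenset(targets)}
    # iterate until every candidate solution contains initial conditions only
    while any(not s <= initial for s in solutions):
        expanded = set()
        for s in solutions:
            conds = {c for c in s if c in initial}
            # every exploit implying a non-initial condition must be prevented
            exploits = set().union(*(implies[c] for c in s if c not in initial))
            # an exploit is prevented by disabling any ONE required condition
            for combo in product(*(sorted(requires[e]) for e in exploits)):
                expanded.add(frozenset(conds) | frozenset(combo))
        solutions = expanded
    # replace initial conditions with allowable actions, in all possible ways
    strategies = set()
    for s in solutions:
        choices = [[a for a in actions if c in actions[a]] for c in sorted(s)]
        strategies.update(frozenset(combo) for combo in product(*choices))
    # minimum-cost strategy (Line 19), assuming additive costs
    return min(strategies, key=lambda s: sum(costs[a] for a in s))

# Hypothetical configuration: target c5, exploit e1 (requires c1, c2),
# exploit e2 (requires c3, c4), and three allowable actions
implies = {'c5': {'e1', 'e2'}}
requires = {'e1': {'c1', 'c2'}, 'e2': {'c3', 'c4'}}
initial = {'c1', 'c2', 'c3', 'c4'}
actions = {'A1': {'c1'}, 'A2': {'c2', 'c3'}, 'A3': {'c4'}}
costs = {'A1': 10, 'A2': 18, 'A3': 10}
best = backward_search({'c5'}, implies, requires, initial, actions, costs)
# exhaustive search finds {A2}: disabling c2 and c3 prevents both exploits
```

On this configuration, the minimum-cost strategy is {A2} with cost 18; the search, however, enumerates every combination of preconditions and actions, which is exactly the combinatorial explosion quantified by Equation (6).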

[Figure: complete tree with initial conditions c1–c16 at the leaves, exploits e1–e8, intermediate conditions c17–c20, exploits e9–e10, and target condition c21 at the root]
Figure 4. Example of attack graph with n = 2 and d = 4
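Equation (6) can be sanity-checked against trees like the one in Figure 4. The following sketch is illustrative only: it assumes a complete attack tree of depth d and in-degree n whose levels alternate between conditions and exploits, with each initial condition disabled by exactly one allowable action and actions pairwise disjoint, and compares a direct recursive count with the closed form.

```python
def strategies_for_condition(level: int, d: int, n: int) -> int:
    """Count hardening strategies for a condition at distance `level` from the
    target (level 0) in a complete attack tree of depth d and in-degree n."""
    if level == d:
        return 1  # an initial condition is disabled by exactly one action
    # a non-initial condition requires preventing ALL n exploits implying it
    return strategies_for_exploit(level + 1, d, n) ** n

def strategies_for_exploit(level: int, d: int, n: int) -> int:
    # an exploit is prevented by disabling any ONE of its n preconditions
    return n * strategies_for_condition(level + 1, d, n)

def closed_form(d: int, n: int) -> int:
    # Equation (6) for a single target condition: n ** (n + n^2 + ... + n^(d/2))
    return n ** sum(n ** k for k in range(1, d // 2 + 1))

# Figure 4 has n = 2 and d = 4: both counts give 2 ** (2 + 4) = 64 strategies
figure4_count = strategies_for_condition(0, 4, 2)
```

For the graph of Figure 4 this yields 64 candidate strategies for the single target condition c21, which illustrates why exhaustive backward search quickly becomes infeasible as d and n grow.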

dummy target condition ct, such that all the dummy exploits ei have ct as their only postcondition. It is clear that any strategy that hardens the network with respect to ct implicitly hardens the network with respect to each ci ∈ Ct. In fact, as ct is reachable from any dummy exploit ei, all such exploits need to be prevented, and the only way to achieve this is by disabling the corresponding preconditions, that is, by hardening the network with respect to all target conditions in Ct.

Additionally, we assume that, given a target condition ct, the attack graph is a tree rooted at ct and having initial conditions as leaf nodes. In Section IV, we showed an example of how this can be achieved using the mechanism to break cycles adopted by the algorithm in [2]. If the attack graph is not a tree, it can be converted to this form by using such a mechanism. Looking at the attack graph from the point of view of a given target condition has the additional advantage of ignoring exploits and conditions that do not contribute to reaching that target condition.

On Line 1, the algorithm performs a topological sort of the nodes in the attack graph (exploits and security conditions), and pushes them into a queue, with initial conditions at the front of the queue. While the queue is not empty, an element
Algorithm 2 ForwardSearch(G, k)
Input: Attack graph G = (E ∪ C, Rr ∪ Ri), and optimization parameter k.
Output: Mapping σ : C ∪ E → 2^S, and mapping minCost : C ∪ E → R+.
1: Q ← TopologicalSort(C ∪ E)
2: while Q ≠ ∅ do
3: q ← Q.pop()
4: if q ∈ Ci then
5: σ(q) ← {{A} | A ∈ A ∧ q ∈ A}
6: else if q ∈ E then
7: σ(q) ← ⋃_{c ∈ C | (c,q) ∈ Rr} σ(c)
8: else if q ∈ C \ Ci then
9: {e1 , . . . , em } ← {e ∈ E | (e, q) ∈ Ri }
10: σ(q) ← {S1 ∪ . . . ∪ Sm | Si ∈ σ(ei )}
11: end if
12: σ(q) ← topK(σ(q), k)
13: minCost(q) ← min_{S ∈ σ(q)} cost(S)
14: end while
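The listing above can be sketched in Python roughly as follows. This is an illustrative rendition, not the authors' code: the graph is encoded as a hypothetical `preds` map (preconditions of an exploit, or exploits implying a non-initial condition), the cost function is assumed additive, and every initial condition is assumed to be disabled by at least one action.

```python
from graphlib import TopologicalSorter

def forward_search(exploits, preds, initial, actions, costs, k):
    """Sketch of Algorithm 2 (ForwardSearch) with topK pruning."""
    cost = lambda s: sum(costs[a] for a in s)
    sigma, min_cost = {}, {}
    # nodes appearing only as predecessors (initial conditions) are added implicitly
    for q in TopologicalSorter(preds).static_order():
        if q in initial:
            # one single-action strategy per allowable action disabling q (Line 5)
            sols = {frozenset({a}) for a in actions if q in actions[a]}
        elif q in exploits:
            # disabling ANY precondition prevents the exploit (Line 7)
            sols = set().union(*(sigma[c] for c in preds[q]))
        else:
            # ALL implying exploits must be prevented: merge their strategies (Line 10)
            sols = {frozenset()}
            for e in preds[q]:
                sols = {s | t for s in sols for t in sigma[e]}
        sigma[q] = set(sorted(sols, key=cost)[:k])    # topK pruning (Line 12)
        min_cost[q] = min(cost(s) for s in sigma[q])  # Line 13
    return sigma, min_cost

# Hypothetical configuration in the spirit of Example 2 below
preds = {'e1': {'c1', 'c2'}, 'e2': {'c3', 'c4'}, 'c5': {'e1', 'e2'}}
initial = {'c1', 'c2', 'c3', 'c4'}
actions = {'A1': {'c1'}, 'A2': {'c2', 'c3'}, 'A3': {'c4'}}
costs = {'A1': 10, 'A2': 18, 'A3': 10}
_, mc1 = forward_search({'e1', 'e2'}, preds, initial, actions, costs, k=1)
_, mc2 = forward_search({'e1', 'e2'}, preds, initial, actions, costs, k=2)
# with k = 1 the greedy merge yields cost 20; k = 2 recovers the optimum 18
```

With k = 1 the locally optimal choices {A1} and {A3} are merged into a cost-20 strategy for c5, while k = 2 preserves {A2} long enough to return the optimal cost of 18, mirroring the role the parameter k plays in the discussion that follows.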

[Figure: dummy target construction – target conditions c1, …, cn, dummy exploits e1, …, en, and dummy target condition ct]
Figure 5. Example of multiple target conditions and dummy target condition

q is popped from the queue. If q is an initial condition, then q is associated with a set of strategies σ(q) such that each strategy simply contains one of the allowable actions in A disabling q (Line 5). If q is an exploit, then q is associated with a set of strategies σ(q) that is the union of the sets of strategies for each condition c required by q (Line 7). In fact, an exploit can be prevented by disabling any of its required conditions. Finally, if q is an intermediate condition, then q is associated with a set of strategies σ(q) such that each strategy is the union of a strategy for each of the exploits that imply q (Lines 9-10). In fact, in order to disable an intermediate condition, all the exploits that imply it must be prevented. When suboptimal solutions are acceptable, then only the best k intermediate solutions are maintained at each step of the algorithm (Line 12), and the minimal hardening cost for the current node is computed accordingly (Line 13).

Example 1: Consider the attack graph of Figure 3. The only three allowable actions on the corresponding network are stop_ftp(2) = {ftp(1,2), ftp(0,2)}, block_host(0) = {ftp(0,1), sshd(0,1), ftp(0,2)}, and stop_sshd(1) = {sshd(0,1)}. Assume that cost({stop_ftp(2)}) = 20, cost({block_host(0)}) = 10, and cost({stop_sshd(1)}) = 15. It is clear that the optimal strategy to harden the network with respect to root(2) is S = {block_host(0)}, with a cost of 10. Let us now examine the behavior of the algorithm for k = 1. All nodes are added to the queue in topological order, and initial conditions are examined first. After all the initial conditions have been examined, we obtain σ(ftp(1,2)) = {{stop_ftp(2)}}, σ(ftp(0,1)) = {{block_host(0)}}, σ(sshd(0,1)) = {{block_host(0)}}, and σ(ftp(0,2)) = {{block_host(0)}}. Once the algorithm examines exploit rsh(1,2), on Line 7, before pruning, we obtain σ(rsh(1,2)) = {{stop_ftp(2)}, {block_host(0)}}. After pruning (Line 12), we obtain σ(rsh(1,2)) = {{block_host(0)}}, as {block_host(0)} is the strategy with the lowest cost. Finally, we obtain σ(root(2)) = {{block_host(0)}}, that is, the algorithm, in this case, returns the optimal solution.

From the example above, it is clear that in our approach administrators only have to assign the cost of performing allowable actions (which are meaningful aggregates of initial conditions), whereas in previous approaches they had to assign cost values to each individual initial condition.

Now, let us consider a different example showing how the value of k may have an impact on the optimality of the solution. Intuitively, the higher the value of k, the closer the computed solution is to the optimal one.

Example 2: Consider the attack graph of Figure 6, and assume that cost({A1}) = 10, cost({A2}) = 18, and cost({A3}) = 10. Also assume that cost is additive. It is clear that the optimal strategy to harden the network with respect to c5 is S = {A2}, with a cost of 18. Let us now examine the behavior of the algorithm for k = 1. On Line 1 we obtain Q = ⟨c1, c2, c3, c4, e1, e2, c5⟩. Thus, c1 is the
first node to be examined. After the first 4 elements of the queue have been examined, we obtain σ(c1) = {{A1}}, σ(c2) = {{A2}}, σ(c3) = {{A2}}, and σ(c4) = {{A3}}. Then e1 is considered. The full set of possible strategies for e1 is σ(e1) = {{A1}, {A2}}, but, since k = 1, only the best one is maintained and propagated to following steps. A similar consideration applies to e2. In conclusion, we obtain σ(e1) = {{A1}} and σ(e2) = {{A3}}. Finally, we obtain σ(c5) = {{A1, A3}}, and minCost(c5) = 20, which is slightly above the optimal cost. Similarly, it can be shown that, for k = 2, the algorithm returns minCost(c5) = 18, i.e., the optimal solution. This confirms that larger values of k make solutions closer to the optimal one.

[Figure: actions A1, A2, A3 covering initial conditions c1–c4, exploits e1, e2, and target condition c5]
Figure 6. Example of attack graph with d = 2 and n = 2

We now show that, in the worst case – when k = 1 – the approximation ratio is upper-bounded by n^(d/2). However, experimental results indicate that, in practice, the approximation ratio is much smaller than its theoretical bound. First, let us consider the type of scenario in which solutions may not be optimal. To this aim, consider again the attack graph configuration of Figure 6. When computing solutions for e1 and e2 respectively, we make local decisions without considering the whole graph, i.e., we independently compute the optimal solution for e1 and the optimal solution for e2, given hardening strategies for their preconditions. However, at a later stage, we need to merge solutions for both e1 and e2 in order to obtain solutions for c5. At this point, since there exists an allowable action (i.e., A2) that would have disabled preconditions of both e1 and e2, with a cost lower than the combined cost of their locally optimal solutions, but the strategy including A2 has been discarded for k = 1, the solution is not optimal. This suggests that both k and the maximum in-degree n of nodes in the graph play a role in determining the optimality of the solution. Additionally, as the algorithm traverses the graph towards target conditions, there may be a multiplicative effect in the approximation error. In fact, the depth d of the tree also plays a role in determining the outcome of the approximation, but this effect can be compensated by increasing the value of k. We can prove the following theorem.

Theorem 1: Given an attack graph G with depth d and maximum in-degree n, the upper bound of the approximation ratio of algorithm ForwardSearch for k = 1 is n^(d/2).

Proof: We prove the result by induction, assuming that the cost function is additive. We use the term level l to denote nodes that are at a distance l from the target condition. We need to prove that for each l ∈ [1, d − 2], the cost of hardening conditions at level l is n^((d−l)/2) times the optimal cost. The worst case – depicted in Figure 7 – is the one in which (i) a single allowable action A* (with cost({A*}) = x) disables one precondition for each of the m/2 exploits e_{d−1,i} at level d − 1 (i.e., exploits depending on initial conditions), where m = n^d is the number of initial conditions; (ii) for each exploit e_{d−1,i}, all the preconditions not disabled by A* are disabled by an action Ai such that cost({Ai}) = x − ε, where ε is an arbitrarily small positive real number; and (iii) actions Ai are pairwise disjoint.

Base case. When choosing a strategy for e_{d−1,i}, the algorithm picks the one with the lowest cost, that is, strategy {Ai} with cost x − ε. Then, when choosing a strategy for c_{d−2,i}, the algorithm combines strategies for its n predecessors, which all cost x − ε. Since such strategies are disjoint and cost is additive, the cost to harden any condition at level d − 2 of the attack tree is n · (x − ε).

Inductive step. If hardening strategies for conditions at level d − j of the attack tree cost n^(j/2) · (x − ε), then hardening strategies for exploits at level d − j − 1 of the attack tree also cost n^(j/2) · (x − ε). When choosing a strategy for conditions at level d − j − 2, the algorithm combines strategies for its n predecessors, which all cost n^(j/2) · (x − ε). Since such strategies are disjoint and cost is additive, the cost to harden any condition at level d − j − 2 of the attack tree is n · n^(j/2) · (x − ε) = n^((j+2)/2) · (x − ε).

Although this result indicates that the bound may increase exponentially with the depth of the attack tree, the ratio observed in practice – as confirmed by experimental results – is much lower than the theoretical bound. In fact, the worst case scenario depicted in Figure 7 is quite unrealistic. Additionally, the bound can be reduced by increasing the value of k. For instance, by setting k = n, the bound becomes n^((d−2)/2), that is, the bound for a graph with depth d − 2 and in-degree n.

Example 3: Consider the attack graph configuration of Figure 6 (with n = 2 and d = 2), and assume that cost({A2}) = x, cost({A1}) = x − ε, and cost({A3}) = x − ε. For k = 1, if the cost function is additive, we obtain minCost(c5) = 2 · (x − ε) ≈ 2 · x, which means that in the worst case the cost is twice the optimal cost.

VII. EXPERIMENTAL RESULTS

In this section, we report the experiments we conducted to validate our approach. Specifically, our objective is to evaluate the performance of algorithm ForwardSearch in terms of processing time and approximation ratio for different values of the depth d of the attack graph and the maximum in-degree n of nodes in the graph. In order to obtain graphs
with specific values of d and n, we started from realistic graphs, like the one of Figure 3, and augmented them with additional synthetic conditions and exploits. Although the large attack graphs we generated through this process are mostly synthetic, we made considerable efforts to make such graphs consistent with real attack graphs. Additionally, for each such graph, we randomly generated different groupings of initial conditions into allowable actions, in order to account for variability in what administrators can control. All the results reported in this section are averaged over multiple graphs with the same values of d and n, but different configurations of allowable actions.

[Figure: worst-case tree – actions A1, A*, A2, …, A_{m/2} over initial conditions c_{d,1}, …, c_{d,m}, exploits e_{d−1,1}, …, e_{d−1,m/2}, down through c_{d−2,1} and e_{1,1}, e_{1,2} to target condition ct]
Figure 7. Worst case scenario

First, we show that, as expected, computing the optimal solution is feasible only for very small graphs. Figure 8 shows how processing time increases when n increases and for d = 4, and compares processing times of the exact algorithm with processing times of algorithm ForwardSearch for different values of k. It is clear that the time to compute the exact solution starts to diverge at n = 4, whereas processing time of algorithm ForwardSearch is still well under 0.5 seconds for k = 10 and n = 5. Similarly, Figure 9 shows how processing time increases when d increases and for n = 2, and compares processing times of the exact algorithm with processing times of algorithm ForwardSearch for different values of k. The time to compute the exact solution starts to diverge at d = 5, whereas processing time of algorithm ForwardSearch is still under 20 milliseconds for k = 10 and d = 10.

[Chart: processing time (s) vs. n, for the exact solution and k = 1, 2, 5, 10]
Figure 8. Processing time vs. n for d = 4 and different values of k

[Chart: processing time (ms) vs. d, for the exact solution and k = 1, 2, 5, 10]
Figure 9. Processing time vs. d for n = 2 and different values of k

Figure 10 shows how processing time increases when the parameter k increases and for a fixed value of n (n = 4) and different values of d. From this chart, it is clear that large graphs can be processed in a few seconds for values of k up to 5. As we will show shortly, relatively small values of k provide a good balance between approximation ratio and processing time, therefore this result is extremely valuable. Similarly, Figure 11 shows how processing time increases when k increases and for a fixed value of d (d = 8) and different values of n. This chart confirms that large graphs can be processed in a few seconds for relatively small values of k.

We also observed the relationship between processing time and size of the graphs (in terms of number of nodes). Figure 12 shows a scatter plot of average processing times for given pairs of d and n vs. the corresponding graph size. This chart suggests that, in practice, processing time is linear in the size of the graph for small values of k.

Finally, we evaluated the approximation ratio achieved by the algorithm. Figure 13 shows how the ratio changes when k increases and for a fixed value of n (n = 2) and different values of d. It is clear that the approximation ratio improves when k increases, and, in all cases, the ratio is clearly below the theoretical bound. Additionally, relatively
low values of k (between 2 and 6) are sufficient to achieve a reasonably good approximation ratio in a time-efficient manner. Additionally, as observed earlier, processing time is practically linear in the size of the graph for lower values of k. Similarly, Figure 14 shows how the approximation ratio – for a fixed value of d (d = 4) and different values of n – improves as k increases. Similar conclusions can be drawn from this chart. In particular, the approximation ratio is always below the theoretical bound.

[Chart: processing time (s) vs. k for d = 2, 4, 6, 8, 10; largest graph: d = 10, n = 4, 1,398,101 nodes]
Figure 10. Processing time vs. k for n = 4 and different values of d

[Chart: processing time (s) vs. k for n = 2, 3, 4, 5; largest graph: d = 8, n = 5, 488,281 nodes]
Figure 11. Processing time vs. k for d = 8 and different values of n

[Chart: processing time (s) vs. graph size (# nodes) for k = 1 and k = 2, with linear regression lines, R² = 0.999 and R² = 0.9924]
Figure 12. Processing time vs. graph size for different values of k

[Chart: approximation ratio vs. k for d = 4, 6, 8]
Figure 13. Approximation ratio vs. k for n = 2 and different values of d

[Chart: approximation ratio vs. k for n = 2, 3, 4]
Figure 14. Approximation ratio vs. k for d = 4 and different values of n

VIII. CONCLUSIONS

In this paper, we started by highlighting the limitations of previous work in minimum-cost network hardening using attack graphs. In particular, we showed – both theoretically and experimentally – that finding the exact solution to this problem is feasible only for very small graphs. We proposed an approximation algorithm to find reasonably good solutions in a time-efficient manner. We proved that, under certain reasonable assumptions, the approximation ratio of this algorithm is bounded by n^(d/2), where n is the maximum in-degree of nodes in the graph and d is the depth of the graph. We also showed that, in practice, the approximation ratio is much smaller than its theoretical bound. Finally, we reported experimental results that confirm the validity of our approach, and motivate further research in this direction.
The experiments described in this paper were conducted on mostly synthetic – yet realistic – attack graphs. Our future plans include evaluating the proposed approach on real data as well as deepening our understanding of cost functions. Although additional work is required, the theoretical and experimental results obtained so far are extremely promising, and the proposed algorithm could be easily adopted to augment the hardening capabilities currently offered by available commercial tools such as Cauldron [18], a vulnerability analysis framework originally developed by members of our research group. Cauldron's current approach to optimal network hardening is based on disabling the smallest possible set of edges in the attack graph, in order to prevent the attacker from reaching certain target conditions. However, this approach has the same limitations as selectively removing exploits. As we discussed earlier in the paper, it is not always possible to remove arbitrary exploits (or attack paths) without removing their causes. In practice, removing sets of initial conditions – which administrators are likely to have control over – will help existing tools generate hardening recommendations that can actually be enforced.

REFERENCES

[1] S. Noel and S. Jajodia, "Managing attack graph complexity through visual hierarchical aggregation," in Proceedings of the ACM CCS Workshop on Visualization and Data Mining for Computer Security (VizSEC/DMSEC 2004), Fairfax, VA, USA, October 2004, pp. 109–118.

[2] L. Wang, S. Noel, and S. Jajodia, "Minimum-cost network hardening using attack graphs," Computer Communications, vol. 29, no. 18, pp. 3812–3824, November 2006.

[3] Tenable Network Security, "The Nessus vulnerability scanner," http://www.tenable.com/products/nessus.

[4] P. Ammann, D. Wijesekera, and S. Kaushik, "Scalable, graph-based network vulnerability analysis," in Proceedings of the 9th ACM Conference on Computer and Communications Security (CCS 2002), Washington, DC, USA, November 2002, pp. 217–224.

[5] M. Dacier, "Towards quantitative evaluation of computer security," Ph.D. dissertation, Institut National Polytechnique de Toulouse, 1994.

[6] S. Jajodia, S. Noel, and B. O'Berry, Managing Cyber Threats: Issues, Approaches, and Challenges, ser. Massive Computing. Springer, 2005, vol. 5, ch. Topological Analysis of Network Attack Vulnerability, pp. 247–266.

[7] R. Ortalo, Y. Deswarte, and M. Kaâniche, "Experimenting with quantitative evaluation tools for monitoring operational security," IEEE Transactions on Software Engineering, vol. 25, no. 5, pp. 633–650, September/October 1999.

[8] C. Phillips and L. P. Swiler, "A graph-based system for network-vulnerability analysis," in Proceedings of the New Security Paradigms Workshop (NSPW 1998), Charlottesville, VA, USA, September 1998, pp. 71–79.

[9] R. W. Ritchey and P. Ammann, "Using model checking to analyze network vulnerabilities," in Proceedings of the 2000 IEEE Symposium on Research on Security and Privacy (S&P 2000), Berkeley, CA, USA, May 2000, pp. 156–165.

[10] R. Ritchey, B. O'Berry, and S. Noel, "Representing TCP/IP connectivity for topological analysis of network security," in Proceedings of the 18th Annual Computer Security Applications Conference (ACSAC 2002), Las Vegas, NV, USA, December 2002, pp. 25–34.

[11] O. Sheyner, J. Haines, S. Jha, R. Lippmann, and J. M. Wing, "Automated generation and analysis of attack graphs," in Proceedings of the 2002 IEEE Symposium on Security and Privacy (S&P 2002), Berkeley, CA, USA, May 2002, pp. 273–284.

[12] L. P. Swiler, C. Phillips, D. Ellis, and S. Chakerian, "Computer-attack graph generation tool," in Proceedings of the DARPA Information Survivability Conference & Exposition II (DISCEX 2001), vol. 2, Anaheim, CA, USA, June 2001, pp. 307–321.

[13] D. Zerkle and K. Levitt, "NetKuang – A multi-host configuration vulnerability checker," in Proceedings of the 6th USENIX Security Symposium, San Jose, CA, USA, July 1996.

[14] C. R. Ramakrishnan and R. Sekar, "Model-based analysis of configuration vulnerabilities," Journal of Computer Security, vol. 10, no. 1/2, pp. 189–209, 2002.

[15] S. Jha, O. Sheyner, and J. Wing, "Two formal analyses of attack graphs," in Proceedings of the 15th IEEE Computer Security Foundations Workshop (CSFW 2002), Cape Breton, Canada, June 2002.

[16] S. Noel, E. Robertson, and S. Jajodia, "Correlating intrusion events and building attack scenarios through attack graph distances," in Proceedings of the 20th Annual Computer Security Applications Conference (ACSAC 2004), Tucson, AZ, USA, December 2004, pp. 350–359.

[17] L. Wang, A. Liu, and S. Jajodia, "Using attack graphs for correlating, hypothesizing, and predicting intrusion alerts," Computer Communications, vol. 29, no. 15, pp. 2917–2933, September 2006.

[18] S. Jajodia, S. Noel, P. Kalapa, M. Albanese, and J. Williams, "Cauldron: Mission-centric cyber situational awareness with defense in depth," in Proceedings of the Military Communications Conference (MILCOM 2011), Baltimore, MD, USA, November 2011.
