
Probabilistic Engineering Mechanics 44 (2016) 99–110


The Design Space Root Finding method for efficient risk optimization
by simulation
Wellison J.S. Gomes a, André T. Beck b,*

a Center for Optimization and Reliability in Engineering (CORE), Department of Civil Engineering, Federal University of Santa Catarina, Florianópolis, SC, Brazil
b Department of Structural Engineering, University of São Paulo, São Carlos, SP, Brazil

* Corresponding author. E-mail addresses: [email protected] (W.J.S. Gomes), [email protected] (A.T. Beck).

http://dx.doi.org/10.1016/j.probengmech.2015.09.019
0266-8920/© 2015 Elsevier Ltd. All rights reserved.

Article history:
Received 12 August 2015
Accepted 29 September 2015
Available online 19 November 2015

Keywords:
Structural optimization
Risk optimization
Life-cycle cost optimization
Uncertainties
Monte Carlo simulation

Abstract

Reliability-Based Design Optimization (RBDO) is computationally expensive due to the nested optimization and reliability loops. Several shortcuts have been proposed in the literature to solve RBDO problems. However, these shortcuts only apply when failure probability is a design constraint. When failure probabilities are incorporated in the objective function, such as in total life-cycle cost or risk optimization, no shortcuts were available to this date, to the best of the authors' knowledge. In this paper, a novel method is proposed for the solution of risk optimization problems. Risk optimization allows one to address the apparently conflicting goals of safety and economy in structural design. In the conventional solution of risk optimization by Monte Carlo simulation, information concerning limit state function behavior over the design space is usually disregarded. The method proposed herein consists in finding the roots of the limit state function in the design space, for all Monte Carlo samples of random variables. The proposed method is compared to the usual method in application to one- and n-dimensional optimization problems, considering various degrees of limit state and cost function nonlinearities. Results show that the proposed method is almost twenty times more efficient than the usual method when applied to one-dimensional problems. Efficiency is reduced for higher-dimensional problems, but the proposed method is still at least two times more efficient than the usual method for twenty design variables. As the efficiency of the proposed method for higher-dimensional problems is directly related to derivative evaluations, further investigation is necessary to improve its efficiency in application to multi-dimensional problems.

© 2015 Elsevier Ltd. All rights reserved.

1. Introduction

In a competitive environment, engineering systems have to be designed taking into account not just their functionality, but also their expected construction and operation costs, and their capacity to generate profits. This capacity depends on the risk that construction and operation of a product or facility implies to the user, to employees, to the general public or to the environment. The capacity to generate profits can be adversely affected by the costs of failure. The performance and safety of structural systems is affected by uncertainties, or natural randomness, in the resistance of structural materials, in the loadings and in engineering models of member resistance and load effects. Uncertainty implies risk, or the possibility of undesirable structural responses.

In the context of structural optimization, many different formulations have been employed in the last years in order to find optimal designs. Among them, Deterministic Design Optimization (DDO) allows one to find the shape or configuration of a structure that is optimal in terms of mechanics, but it does not take directly into account parameter uncertainties and their effects on structural safety. A typical formulation of DDO reads:

d* = arg min [cost(d) : d ∈ S, σ(d, λ) ≤ σ_yield]   (1)

where d is a vector containing the design variables; S = [dl, du] is a vector of design constraints, with dl and du the lower and upper bounds on the design variables; σ(d, λ) represents a deterministic design constraint, such as allowable stress; and λ is a vector of safety coefficients, given by some design code but generally not a design variable. In Eq. (1), the cost function only includes cost (or volume) of structural materials, and sometimes manufacturing costs. Since safety is not quantified, the resulting optimal structure may compromise safety, in comparison to the original (non-optimal) structure.
This will generally happen as more failure modes are designed against the limit. Nowadays, it is widely recognized that DDO is not robust with respect to existing and unavoidable uncertainties in structural design.

Reliability-Based Design Optimization (RBDO) has emerged as an alternative to properly model the safety-under-uncertainty part of the problem. With RBDO, one can ensure that a minimum (and measurable) level of safety is achieved by the optimum structure, by applying a constraint on the failure probability, Pf. A typical formulation of RBDO reads:

d* = arg min [cost(d) : d ∈ S, Pf(d, λ) ≤ Pf^admissible]   (2)

where Pf represents a reliability constraint. Generally, the cost term in this formulation is the same as for DDO, that is, it does not include expected costs of failure. Thus, RBDO allows finding a structure which is optimal in a mechanical sense, and which does not compromise safety. However, results are dependent on the failure probabilities used as constraints in the analysis.

Risk Optimization (RO) increases the scope of the problem by addressing the compromising goals of economy and safety [20,5–7]. This is accomplished by quantifying the monetary consequences of failure, as well as the costs associated with construction, operation and maintenance, and by including these costs in the objective function. Thus, (reliability-based) Risk Optimization will also indirectly look for the optimum safety factors and failure probabilities:

d* = arg min [CT(d) : d ∈ S]   (3)

where CT(d) is the total expected cost, including expected costs of failure. Since Eq. (3) has no design or reliability constraint, its solution also leads to the optimum safety factors, λ*, or the optimum reliability constraints, Pf*(d*, λ*). Expected costs of failure, for each possible failure mode of the structure, are evaluated by multiplying costs of failure by probabilities of failure. Hence, we note that in comparison to RBDO (Eq. (2)), in RO failure probability is no longer a constraint but part of the objective function. The term CT(d) is further detailed in Section 2.2.

A review of the literature shows that the nomenclature RBDO is indistinguishably used to describe problems where failure probabilities are design constraints [1,11–13,31,37,40,41], such as in Eq. (2), or included in the objective function [10,14,17,2,21,29,32,34], such as in Eq. (3). It should be clear that these two formulations lead to two fundamentally different problems. Risk optimization (Eq. (3)) yields an unconstrained optimization problem, characterized by the existence of multiple local minima [19,20,6]. Classical RBDO formulations, such as Eq. (2), lead to constrained optimization problems. In classical RBDO articles [1,11–13,31,37,40,41] expected costs of failure are either not considered or dismissed. In this article, we specifically address risk optimization problems.

Both RBDO and RO formulations lead to problems which are very computationally intensive to evaluate. This occurs due to the nested optimization and reliability analysis loops, which occur either with failure probability as constraints or as part of the objective function. The computational burden is particularly large when iterative numerical methods (e.g., non-linear or dynamic finite element analysis) are employed in solution of the mechanical problem.

A number of approaches have been proposed in the literature in order to convert RBDO into DDO problems ([1,11–13,3,25,31,37,39–41]). When the underlying reliability problem (constraint in Eq. (2)) is solved by the First Order Reliability Method (FORM), nested optimization loops are obtained in the classical RBDO formulation. Since FORM is an optimization procedure itself, RBDO becomes a nested, double-loop optimization problem: the inner loop is the reliability analysis and the outer loop is the structural optimization. The coupling of these two loops leads to very high computational costs. To reduce the computational burden, several authors have proposed decoupling the structural optimization and the reliability analysis. De-coupling strategies may be divided in two types: (i) serial single loop methods and (ii) uni-level methods. The basic idea of serial single loop methods is to decouple the two loops and solve them sequentially, until some convergence criterion is achieved. On the other hand, uni-level methods employ different strategies to obtain a single loop of optimization to solve the RBDO problem. State of the art reviews of RBDO including de-coupling strategies are provided in [25,3,39].

Significantly, all de-coupling strategies mentioned above address the classical RBDO formulation (Eq. (2)), where failure probabilities are design constraints. To the best of the authors' knowledge, no similar shortcuts exist for solving risk or life-cycle cost optimization problems. Thus, the computational burden associated with risk optimization remains very large.

Solution of the underlying reliability problem is a key issue in solving risk optimization problems. This is still a widely open research field, as different reliability methods found in the literature present very different computational costs and accuracies. Moreover, many new approaches have been proposed in recent years. For instance, the Stochastic Subset Simulation method proposed by Taflanidis and Beck [42] can be seen as a shortcut to solving the risk optimization problem. However, this method rapidly loses efficiency for increasing number of design variables. In the approach by Taflanidis and Beck [42], the design variables are artificially considered as uncertain and Subset Simulation is employed, in combination with a stochastic search algorithm, to solve the reliability and optimization problems simultaneously. As another example, the approach proposed by Jensen et al. [43] and applied in a risk optimization problem by Valdebenito and Schuëller [38] is very efficient even for problems involving thousands of random variables; however, the method requires many approximations, mainly when the objective function is the total expected cost.

Among the usual methods for reliability analysis, simple Monte Carlo simulation or Latin Hypercube sampling have been employed in many applications, in combination with different optimization algorithms, mainly due to generality and ease of accuracy control. In general, accuracy of simple Monte Carlo simulation increases with larger number of samples. However, in the risk optimization solution, one complete reliability analysis is required for each trial design. Many variants of the Monte Carlo method have been proposed in order to decrease the computational cost by decreasing the number of samples required to achieve convergence: Latin Hypercube sampling [22,33], subset simulation [4], importance sampling [15], asymptotic sampling [27]. Regardless of the sampling strategy, simulation-based methods in general only look at one piece of information about each sample: whether it belongs to the failure domain or not. Hence, only the sign of the limit state function matters. Information about how the limit state function behaves over the design space is not computed. In this paper, it is shown that this commonly employed strategy may not be the most efficient, and a novel method is proposed which can in principle be combined with any of the sampling schemes mentioned above. This method is based on finding, for each sample, the roots of the limit state function in the design space. This dramatically reduces the computational cost, as will be shown herein.

The core of this paper is organized in four sections. Section 2 presents the structural reliability problem and the risk optimization formulation. Section 3 describes the proposed method, focusing on its application in combination with Monte Carlo simulation.
In Section 4, some details about the employed optimization algorithms are presented. The comparison between the proposed method and the usual one is made in Section 5, considering four different examples with different numbers of design variables and varying degrees of limit state and cost function nonlinearities. Concluding remarks are presented in Section 6.

2. Risk optimization problem

2.1. Structural reliability analysis

Let X and d be vectors of structural system parameters. Vector X represents all random or uncertain system parameters, and includes geometric characteristics, resistance properties of materials or structural members, loads and model errors for load effects and member strengths. Some of these parameters are random in nature; others cannot be quantified deterministically due to uncertainty. Typically, resistance parameters can be represented as random variables and loads are modeled as random processes of time. Vector d contains all deterministic design variables, like nominal member dimensions, partial safety factors, design life, parameters of inspection and maintenance programs, etc. Vector d may also include some parameters of random variables in X; for instance, the mean of a random variable may be a design variable.

The existence of uncertainty implies risk, that is, the possibility of undesirable structural responses. The boundary between desirable and undesirable structural responses is given by limit state functions g(X, d), such that:

Ωf(d) = {x | g(x, d) ≤ 0} is the failure domain
Ωs(d) = {x | g(x, d) > 0} is the survival domain   (4)

Each limit state describes one possible failure mode of the structure, either in terms of serviceability or ultimate capacity. The probability of undesirable structural response, or probability of failure, is given by:

Pf(d) = P[X ∈ Ωf(d)] = ∫_{Ωf(d)} fX(x) dx   (5)

where fX(x) is the joint probability density function of vector X. The probabilities of failure for individual limit states and for system failure can be evaluated using traditional structural reliability methods such as the First Order Reliability Method (FORM), the Second Order Reliability Method (SORM) and Monte Carlo simulation [26,28].

In the risk optimization problem, reliability analyses have to be repeated thousands of times. Hence, it is preferable to use a very efficient algorithm, such as FORM, to perform the reliability analyses. However, this method presents some limitations. For example, it may have convergence problems depending on the reliability analysis at hand and, as it involves a linear approximation of the limit state function, it can lead to inaccurate results when high nonlinearity is involved.

Simple Monte Carlo (SMC) simulation is, in general, less efficient than FORM or SORM, but it can be easily applied to any kind of reliability problem. It is based on the idea that the integral presented in Eq. (5) can be interpreted as the mean value of a stochastic experiment where a large number of independent outcomes (samples) of the random variables are generated [44]. Thus, by randomly generating nsamp samples of X, according to its joint distribution fX(x), and by considering a so-called indicator function, I[x, d], which is equal to one if x belongs to the failure domain and zero otherwise, Eq. (5) can be replaced by the following Monte Carlo estimate:

Pf(d) = E[I[x, d]] ≅ (1/nsamp) Σ_{i=1}^{nsamp} I[xi, d]   (6)

where the values of the design variables d for which the probability of failure is being calculated are known a priori. The convergence of the SMC method is achieved by increasing the number of samples and depends on the order of magnitude of the probability of failure being estimated: the smaller the probability of failure, the higher the number of samples necessary to perform the analysis.

To deal with the high computational burden associated with the reliability analysis by simulation, many versions of the Monte Carlo method have been proposed in the literature (e.g. [4,15,23]), trying to achieve convergence with a smaller number of samples. These techniques are in general more difficult to apply than SMC, and many times the increased efficiency involves loss of generality. Thus, SMC remains the first and sometimes only option for some complex problems (mainly involving time-dependent reliability).

The method proposed in this paper focuses on simple Monte Carlo and Latin Hypercube Sampling, but the authors believe it can be readily extended to or combined with other simulation techniques. Also, as shown later, it maintains the generality of simple Monte Carlo simulation and even presents some advantages over the original method.

2.2. Risk optimization problem

The life-cycle cost of a structural system subject to uncertainties can be decomposed in an initial or manufacturing cost, cost of operation, costs of inspections and maintenance, cost of disposal and expected costs of failure (Cexpected). The expected cost of failure, or failure risk, is given by the product of a failure cost (Cfailure) by a failure probability:

Cexpected(d) = Cfailure(d)·Pf(d)   (7)

Failure costs include the costs of repairing or replacing damaged structural members, removing a collapsed structure, rebuilding it, cost of unavailability (downtime), cost of compensation for injury or death of employees or general users, penalties for environmental damage, etc. Failure consequences involving human injury, human death or environmental damage can be accounted for by considering the amount of past compensation payoffs. But since decision making should not be reduced to just money, failure probability constraints related to acceptable risk levels can always be included in Eq. (3).

For each structural component or system failure mode, there is a corresponding failure cost term. The total (life-cycle) expected cost of a structural system becomes:

CT(d) = Cinitial(d) + Coperation(d) + Cinspection and maintenance(d) + Cdisposal(d) + Σ_{failure modes} Cfailure(d)·Pf(d, λ)   (8)

Initial or manufacturing costs increase with the safety coefficients used in design and with the practiced level of quality assurance. More safety in operation involves more safety equipment, more redundancy and more conservatism in structural operation. Inspection cost depends on intervals, quality of equipment and choice of inspection method. Maintenance costs depend on maintenance plan, frequency of preventive maintenance, etc. When the overall level of safety is increased, most cost terms increase, but the expected costs of failure are reduced.
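As a point of reference for Eqs. (6)–(8), the following minimal sketch (in Python) estimates the failure probability of a single limit state by simple Monte Carlo and assembles the corresponding expected total cost at a fixed design. It is an illustration only: the limit state, the cost terms and the sample distribution are placeholders that mirror the first numerical example of Section 5, and only the initial and expected failure cost terms of Eq. (8) are retained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Samples of X = (X1, X2, X3); normal with mean 1 and variance 0.2, as in Section 5.
n_samp = 100_000
X = rng.normal(loc=1.0, scale=np.sqrt(0.2), size=(n_samp, 3))

def g(X, d):
    """Placeholder limit state (failure for g <= 0), mirroring Eq. (11) of Section 5."""
    return 0.5 * X[:, 0] + 0.5 * X[:, 1] + 2.5 * d - X[:, 2]

def pf_smc(d):
    """Eq. (6): Pf(d) estimated as the sample mean of the indicator I[x, d]."""
    return np.mean(g(X, d) <= 0.0)

def total_expected_cost(d, c_failure=10.0):
    """Eqs. (7)-(8), single failure mode, only initial + expected failure costs kept."""
    c_initial = 2.0 * (d + 0.25)          # placeholder initial cost (cf. Eq. (12), n = 1)
    return c_initial + c_failure * pf_smc(d)

print(pf_smc(0.5), total_expected_cost(0.5))
```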
Any change in d that affects cost terms is likely to affect the expected cost of failure. Changes in d which reduce costs may result in increased failure probabilities, hence increased expected costs of failure. Reduction in expected failure costs can be achieved by targeted changes in d, which generally increase costs. This compromise between safety and cost is typical of structural systems.

A proper point of compromise between safety and cost can be found by a so-called risk optimization analysis:

d* = arg min [CT(d) : d ∈ S]   (9)

where S = {dmin ≤ d ≤ dmax} is a set of constraints on the design variables. The formulation above is sometimes called optimization of life-cycle costs [17,34]. However, life-cycle costs may not necessarily be involved, that is: one may solve a (risk) optimization problem involving only the first and last terms on the right side of Eq. (8).

3. The Design Space Root Finding method

In the following, the proposed method is described focusing on its application in combination with Monte Carlo simulation. The bases of the proposed method are explained considering the one-dimensional case, that is, the risk optimization problem with just one design variable. After that, one possible way of extending this method to the n-dimensional case is presented.

3.1. One-dimensional case

In the usual application of the simple Monte Carlo simulation method, for a given value of the design variable, an estimate of the probability of failure is obtained by Eq. (6), which requires one evaluation of the indicator function for each sample. Thus, the limit state function is evaluated once for each sample, but only the information about whether the sample led to failure or survival is taken into account, that is, only the sign of the limit state function matters. During solution of the optimization problem, the limit state function is evaluated many times, for different values of the design variable, but the information about how this function changes over the design space is also disregarded. The proposed method is based on the consideration of such information.

In general, if the determination of the failure domain over the design space (for each sample) requires fewer limit state function evaluations than the solution of the optimization problem requires (per sample), it is preferable to determine that region than to apply the usual procedure. In other words, it can be more efficient to determine the failure domain for each sample once, in terms of the design variables, than to apply the recurrent distinction between failure and survival for each sample during the whole optimization process. This holds because, once the failure domains are determined, they can be used to compute the probabilities of failure at a small expense for any point of the design space.

So, while in the usual MC simulation method an integral over the failure domain, considering all samples and for a given design point, leads to an estimate of the probability of failure for that point, in the proposed method the failure domain is first determined, over the whole design space and for each sample; after that, the probability of failure can be computed for any point of the design space with no need of further limit state function evaluations. Fig. 1 illustrates the difference between the two approaches.

Fig. 1. Usual method versus proposed Design Space Root Finding method.
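To make the contrast of Fig. 1 concrete, the fragment below sketches the usual simulation-based solution of Eq. (9): every trial design visited by the optimizer triggers a complete pass over the samples, and only the sign of the limit state function is retained. The sample set, limit state, costs and design bounds are placeholder assumptions (the same ones used in the previous sketch), and a library bounded scalar minimizer stands in for the optimization algorithms discussed later in Section 4.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
X = rng.normal(1.0, np.sqrt(0.2), size=(100_000, 3))     # placeholder samples of X

def g(X, d):                                              # placeholder limit state
    return 0.5 * X[:, 0] + 0.5 * X[:, 1] + 2.5 * d - X[:, 2]

def total_cost(d, c_failure=10.0):
    # Usual method: one full pass over all samples for *every* trial design d,
    # keeping only the sign of g (the indicator of Eq. (6)); the resulting Pf
    # estimate is a piecewise constant, slightly noisy function of d.
    pf = np.mean(g(X, d) <= 0.0)
    return 2.0 * (d + 0.25) + c_failure * pf

# Eq. (9): unconstrained minimization of CT(d) over the design space S = [d_min, d_max].
res = minimize_scalar(total_cost, bounds=(0.0, 2.0), method='bounded')
print(res.x, res.fun)   # every iteration re-evaluated g for all 100,000 samples
```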
Fig. 2. Usual limit state function behaviors considered herein.

In structural optimization problems, the objective function can easily present high nonlinearity and have more than one local minimum, requiring a considerable number of iterations of the optimization method and, consequently, of limit state function evaluations, in order to determine the global optimum. On the other hand, determination of the failure domain can be done in a much easier way, by simply determining the roots of the limit state function and extracting some information during this process. This is the methodology adopted herein, hence the proposed method is called the Design Space Root Finding method (DSRF) from now on.

In this paper, four different limit state function behaviors are considered, as illustrated in Fig. 2. We believe these four cases represent usual structural optimization problems. Other cases can be included in the proposed method by following a procedure similar to the one shown herein, but each additional case considered can lead to loss of efficiency, since identification of the failure domain becomes more expensive.

In all four cases considered herein, when the limit state function does not present any root within the design space, two results are possible: failure or survival for the entire interval. Thus, the failure domain either equals the design space or is not contained in it. In the first case, Fig. 2a, the limit state function is monotonically increasing with the design variable. Thus, given its root, di, the failure domain is defined by {d ≤ di}. The second case, Fig. 2b, consists of a monotonically decreasing limit state function; the failure domain is then {d ≥ di}. The third and fourth cases, Fig. 2c and d, present two roots each. Failure occurs within the interval {di ≤ d ≤ dj} for the third case, and for {d ≤ di} or {d ≥ dj} for the fourth case. Thus, for all cases the failure domain can be identified by considering the roots of the limit state function on the design space, if these roots exist.

After identification of failure domains for all samples, the information obtained must be gathered in order to compose a description of the probability of failure over the whole design space. In the one-dimensional case, this may be done by employing a vector, dfailure, containing the limits of the design space, dmin and dmax, and all roots di and dj, and a vector of probabilities of failure, pfailure, associated with each point of dfailure. Vector dfailure must always be organized in ascending order, while pfailure must be ordered accordingly. The information about where a failure domain starts or ends, associated with the roots, must be used during construction of the vector pfailure.

As an example, let us consider the first case, where failure occurs for d ≤ di. Before the first simulation, vector dfailure contains only the limits of the design space, and the information about the probability of failure is that it is null for both dmin and dmax, so it is null for the whole design space. When a first sample is simulated (nsamp = 1), assuming that the root of the limit state function, di1, is within the limits of the design space, i.e., dmin < di1 < dmax, it adds the information that the probability of failure is equal to 100% (one failure for one sample) for d ≤ di1. Hence, di1 is incorporated into dfailure, which becomes a vector with three components, and pfailure now also has three components: one null component associated with dmax, and two unitary components, associated with dmin and di1. Fig. 3a illustrates the probability of failure curve after determination of the failure domain for the first sample.

When a second sample is simulated (nsamp = 2), vector pfailure is first updated by multiplying it by a factor (nsamp − 1)/nsamp, since the sample size has increased by one unit. Hence, failure probabilities evaluated as 100% now become 50% (one failure for two samples).
Fig. 3. Construction of the probability of failure curve.

After the updating, assuming that the root related to the second sample, di2, is also within the limits of the design space, di2 is also incorporated into dfailure, and the components of pfailure associated with values of d smaller than di2 are increased by 1/nsamp. Fig. 3b illustrates the probability of failure curve for nsamp = 2.

If the failure domain is equal to the design space, all components of pfailure are updated by the factor (nsamp − 1)/nsamp and increased by 1/nsamp. If the survival domain is equal to the design space, only the probability updating is necessary.

This process is repeated for as many samples as defined by the user. Fig. 3c and d illustrate the probability of failure curve for three and four samples, respectively.

Construction of the probability of failure curve considering the other three cases (Fig. 2b–d) is carried out in a similar way. In fact, during the same analysis, different samples can lead to different cases, but the methodology remains valid. Fig. 4 shows the probability of failure curve for two different cases, as a function of the number of samples. Fig. 4a represents the same failure probability curves as in Fig. 3, but also for a higher number of samples.

Fig. 4. Convergence of the solution for cases one and four ((a) and (d), respectively; see Fig. 2).

Finally, to calculate the probability of failure for a given value of d, using the information represented by dfailure and pfailure, it is possible, for example, to adopt linear interpolation between the two points of dfailure nearest to d, or to consider that the required probability of failure is equal to the probability associated with the first component of dfailure smaller than d. One notes that the first option is possible only because the proposed methodology allows knowing where, in the design space, the failure domain is located (for each sample). As this leads to a smoother representation of the probability of failure, it can be considered an advantage over the usual MC simulation method.
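The sketch below is one possible reading of the procedure of Section 3.1 for the monotonically increasing case of Fig. 2a. It builds the breakpoint information in a single batch (sorting all roots at once) rather than through the incremental (nsamp − 1)/nsamp update described above, which is equivalent once all samples have been processed; the limit state, design space bounds and sample distribution are placeholders. The query function uses the stepwise option mentioned in the text; np.interp over the same breakpoints would give the smoother interpolated variant.

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(2)
d_min, d_max = 0.0, 2.0                                     # placeholder design space
X = rng.normal(1.0, np.sqrt(0.2), size=(10_000, 3))         # placeholder samples of X

def g(x, d):                                                # placeholder limit state,
    return 0.5 * x[0] + 0.5 * x[1] + 2.5 * d - x[2]         # monotonically increasing in d

# Step 1 (done once): locate, for each sample, the root of g(x_i, d) = 0 in the design
# space.  For the increasing case of Fig. 2a the sample fails for all d <= root.
roots = np.empty(len(X))
for i, x in enumerate(X):
    if g(x, d_min) > 0.0:
        roots[i] = d_min - 1.0            # survival over the whole interval
    elif g(x, d_max) <= 0.0:
        roots[i] = d_max + 1.0            # failure over the whole interval
    else:
        roots[i] = brentq(lambda d: g(x, d), d_min, d_max)

# Step 2: the sorted roots play the role of d_failure; the associated failure
# probabilities follow directly from the empirical counts.
d_failure = np.sort(roots)

def pf_dsrf(d):
    """Pf(d): fraction of samples whose failure region {d <= root_i} contains d.
    No further limit state evaluations are needed, whichever d is queried."""
    return 1.0 - np.searchsorted(d_failure, d, side='left') / len(d_failure)

print(pf_dsrf(0.3), pf_dsrf(0.8))
```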
3.2. Extension to the multi-dimensional case

It is well known that the most efficient optimization methods are the so-called nonlinear programming methods for which, under some circumstances, proofs of convergence can be obtained. When nonlinear programming is applied, for a given initial point, d0, the optimization process consists of determining a search direction e, usually related to the derivative of the objective function at that point, and performing a one-dimensional search in that direction, the so-called line search. The result of the one-dimensional search replaces the initial point, and the process is repeated until a stopping criterion is achieved.

During the one-dimensional search, the objective function, f(d), is re-written as a function of only one variable, α, since every design point d over the given line can be written as d = d0 + αe. The optimization process for each search direction e consists of determining α*, where:

α* = arg min [f(d0 + αe) : (d0 + αe) ∈ S]   (10)

Thus, it is possible to use, for each search direction, the same procedure as in the one-dimensional case, and this is probably the most straightforward way of extending the proposed DSRF method to the n-dimensional case. In this case, the vector dfailure can be written in terms of any of the design variables for which the corresponding component of e is different from zero (or higher than a certain tolerance).

However, the derivative must be found by employing the usual MC simulation method, and this can result in loss of efficiency of the proposed method. More efficient extensions to the n-dimensional case are still being investigated.

Fig. 5 illustrates the construction of the probability of failure surface for a case with two design variables. These surfaces can be obtained, for example, by positioning a design point over the x axis and by determining the probability of failure curve for directions perpendicular to that axis.

Fig. 5. Probability of failure surfaces for two design variables and different numbers of samples.

4. Optimization method

In this paper, nonlinear programming is employed to solve the optimization problem. Thus, given an initial point, a search direction is determined by calculating the derivative of the objective function with respect to the design variables at that point and by applying the BFGS method [9,16,18]. Once the search direction is defined, a line search is performed in order to find the minimum value of the objective function, and the respective design point, in that direction. Two different line search methods are employed herein: the first is the Davies, Swann and Campey method (DSC) [8,36], and the second is the so-called Simplex method [30], implemented herein as described in [24]. The new design point replaces the initial point, and the process is repeated until a given stopping criterion is achieved, in this case, until the Euclidean length of the search direction vector or the variation of the objective function value (from the previous to the current iteration) is smaller than 10^-8.

In all cases studied herein, both the limit state functions and the mathematical representations of initial and failure costs have analytical derivatives, hence exact derivatives are available. However, in the context of Monte Carlo simulation, the computation of analytical derivatives is still a challenging issue. Thus, in order to make a fair comparison between the usual and the proposed method, derivatives are computed by finite differences, with a sufficiently large number of samples. By fair comparison, we mean one that holds true in more general cases, when analytical derivatives are not available.
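A minimal sketch of one iteration of the scheme just described, under simplifying assumptions: the objective CT(d) is a smooth placeholder, the search direction is a plain steepest-descent direction obtained from a forward finite-difference gradient (the BFGS update of [9,16,18] would rescale it), and a library bounded scalar minimizer stands in for the DSC and Simplex line searches of Eq. (10). In the actual method, the one-variable function f(α) would be evaluated through the DSRF probability of failure curve built along the search direction.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def total_cost(d):
    """Placeholder objective CT(d); in the paper this embeds the reliability analysis."""
    d = np.asarray(d, dtype=float)
    pf = np.exp(-4.0 * np.sum(d))                  # stand-in for Pf(d), illustrative only
    return np.exp(np.sum(d ** 4)) - 1.0 + 20.0 * pf

def fd_gradient(f, d, h=1e-3):
    """Forward finite differences: n_des + 1 objective evaluations per gradient."""
    f0, grad = f(d), np.zeros_like(d)
    for i in range(len(d)):
        d_h = d.copy()
        d_h[i] += h
        grad[i] = (f(d_h) - f0) / h
    return grad

d0 = np.array([0.5, 0.5])
e = -fd_gradient(total_cost, d0)                   # descent direction
e = e / np.linalg.norm(e)

# Eq. (10): one-dimensional search along e; the DSRF construction of Section 3.1 can be
# applied to the single variable alpha exactly as in the one-dimensional case.
f_alpha = lambda alpha: total_cost(d0 + alpha * e)
res = minimize_scalar(f_alpha, bounds=(0.0, 1.0), method='bounded')
d1 = d0 + res.x * e                                # new design point; iterate to convergence
print(d1, res.fun)
```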
5. Numerical examples

In order to compare the proposed DSRF method and the usual MC simulation method, three different optimization strategies, or configurations, are adopted. The first configuration, simply denominated DSRF, combines the DSRF method with BFGS and with two line searches: DSC and Simplex. The other two configurations involve MC simulation and the BFGS method, but with different line search schemes. The first scheme, SMC-I, uses only the DSC method, and the second, SMC-II, uses both DSC and Simplex line searches.

In the first configuration, DSRF, once the search direction is determined and the probability of failure curve is constructed, the line search can be performed at a small computational expense; thus, both the DSC and Simplex methods are employed, as there is little computational cost penalty. The optimization scheme using two line search methods is advantageous for the DSRF method, because it allows using the gathered information more efficiently than applying just one line search method.

In the case of the usual MC simulation method, the first scheme, SMC-I, where only the DSC line search is employed, has advantages over SMC-II. In fact, inexact line searches are widely adopted in the literature when dealing with structural risk optimization problems. This occurs because the line search itself presents a high computational cost (i.e., during the line search each objective function evaluation requires a limit state function evaluation, which usually involves the solution of a numerical model). Hence, very precise line search methods become expensive and are not recommended. The second line search scheme, SMC-II, using both line search methods, is performed only to confirm this point and to compare the proposed and usual methods also under similar conditions.

These three optimization configurations are employed in the solution of four examples, as an attempt to study the efficiency of the proposed method under different circumstances. The examples involve limit state functions and initial costs which are linear or nonlinear with respect to the design variables, as well as different numbers of design variables. Also, in order to study the efficiency of the DSRF method for different orders of magnitude of the probability of failure over the design space, a multiplying factor is incorporated in the third example. The four examples studied herein are summarized in Table 1.

Table 1
Summary of the examples.

Example | Number of design variables | Type of limit state function | Type of initial cost
1 | 1 | Linear | Linear, nonlinear
2 | 1 | Linear, nonlinear | Nonlinear
3 | 2 | Nonlinear | Nonlinear
4 | 5, 10 and 20 | Nonlinear | Nonlinear

Each limit state function involves three independent random variables, designated by Xi, where i = 1, 2, 3, and represented by a normal distribution with unitary mean and variance 0.2. The samples of each random variable are generated by Latin hypercube sampling [33], although other sampling schemes could also be employed.

In the one-dimensional cases, 100,000 samples are employed, while in the other cases (examples 3 and 4), which are more sensitive to errors in the derivative, more samples (200,000) are adopted. The performances of the three optimization schemes in each example are compared in terms of the number of limit state function evaluations required to determine the optimal design, since these evaluations are the most expensive part of real structural optimization problems.

For the one-dimensional problems, the objective function is plotted considering a grid of 250 points over the design space. In the bi-dimensional case, a grid of 30 by 30 points is considered. An estimate of the lower and upper limits of the probability of failure (over the design space) is obtained in these cases by determining the minimum and maximum values among the probabilities of failure computed along the grid.

5.1. First example: varying the nonlinearity of the initial cost

For the first example, the limit state function is linear and is given by:

g(X, d) = (0.5·X1 + 0.5·X2 + 2.5·d1) − X3   (11)

and the initial cost is written in terms of a parameter n:

Cinitial(d) = 2·(d1 + 0.25)^n   (12)

Three different values of n are adopted: n = 1, 2 and 3, in order to verify the efficiency of the DSRF method for different nonlinearity degrees of the initial cost. The cost of failure is fixed and equal to 10, which corresponds, for example, to ten times the initial cost when n = 1 and d1 = 0.25.

As the limit state function involves only a linear combination of independent normal random variables, for a given value of the design variable the exact probability of failure can be easily calculated, and it is possible to compare the probability of failure curve constructed by the proposed method with the exact one. Thus, this example is also used as a validation of the proposed method.

Fig. 6 presents the exact and the Monte Carlo estimate (obtained by the DSRF method) probability of failure curves for different values of n. It is noted that, even after multiplying the probabilities of failure by 10, which is the cost of failure adopted in this example, the errors are very small and the probability of failure curves are similar. However, it must be taken into account that structural risk optimization problems usually involve very small failure probabilities over the search space. In this context, if the errors are still considerable, a larger number of samples must be adopted.

In this example, as the limit state equation is the same for all cases, the probability of failure limits do not change. Results of the optimization are summarized in Table 2.

It is noted that, since the construction of the probability of failure curve depends only on the limit state function, the number of limit state function evaluations does not change when DSRF is applied for n = 1, 2 or 3. Thus, the relative computational cost for SMC-II depends basically on the performance of the optimization and line search methods. For higher values of n, the optimal design becomes more pronounced, convergence of the optimization is faster and the relative computational cost decreases. The DSRF method was over 165 times faster than SMC-II for all values of n.

In comparison with SMC-I, the proposed method was over 19 times faster for all cases; however, there is no clear tendency in the relative computational cost. This probably occurs because the line search schemes applied are considerably different and present a significant influence on the relative computational cost. More specifically, the DSC line search converged more slowly when employed within SMC-I for n = 2, probably due to the non-smooth description of the probability of failure curve which comes from the usual Monte Carlo simulation. Since the DSC method employs a quadratic interpolation and the objective function becomes closer to a quadratic function when n = 2, the opposite behavior should be expected.
Fig. 6. Analytical and estimated probability of failure curves for different nonlinearity degrees of the initial cost.

Table 2
Summary of results, example 1.

n | LSF evaluations ×10³ (DSRF) | LSF evaluations ×10³ (SMC-I) | LSF evaluations ×10³ (SMC-II) | Relative cost (SMC-I) | Relative cost (SMC-II) | Pf limit (lower) | Pf limit (upper)
1 | 250 | 5900 | 42,400 | 23.59 | 169.56 | 0.00E+00 | 5.00E−01
2 | 250 | 10,000 | 41,800 | 39.99 | 167.16 | 0.00E+00 | 5.00E−01
3 | 250 | 4900 | 41,500 | 19.60 | 165.96 | 0.00E+00 | 5.00E−01

5.2. Second example: varying the nonlinearity of the limit state function

For the second example, the nonlinearity degree of the limit state function is controlled by a parameter m:

g(X, d) = (X1·X2 + 2.5·(d1 + 0.25))^m − X3   (13)

and three values of m are tested: m = 1, 2 and 3.

The cost of failure is equal to ten, and the initial cost is nonlinear, but remains the same for all three cases:

Cinitial(d) = 2·(d1 + 0.25)^3   (14)

Fig. 7 presents the probability of failure curves for all three cases, obtained by using the proposed DSRF method. Results of the optimization are summarized in Table 3.

Fig. 7. Probability of failure curve for different nonlinearity degrees of the limit state function.

For example 2, when the value of m increases, the limit state function becomes more complex and the upper limit of the probability of failure also increases. Thus, the DSRF method requires a larger number of limit state function evaluations in order to construct the probability of failure curve. However, the objective function also becomes more complex: the BFGS-DSC method requires more iterations to converge, and the relative computational cost of SMC-I increases from about 22 times, for m = 1, to about 28 times the cost of the DSRF, for m = 3.

In the case of SMC-II, the Simplex method basically maintains its convergence properties; hence the number of limit state function evaluations is similar for all values of m. Since the computational cost of DSRF increases when m increases, the relative computational cost decreases. Even so, the computational cost of SMC-II is over 86 times the cost of the DSRF, for all cases.

5.3. Third example: varying the order of magnitude of the failure probabilities

In this bi-dimensional nonlinear case, a multiplying factor, γ, is included in order to solve the same problem for different orders of magnitude of the failure probabilities over the design space. The limit state function is written as:

g(X, d) = γ·(X1·X2 + 2.5·(d1·d2 + 0.25)² + (d1 + d2)/2) − X3   (15)

Three values of γ are tested: γ = 1.0, 1.5 and 2.0, and the cost of failure is assumed to be equal to 20, higher than the one used in the one-dimensional cases. This increased value of the cost of failure is adopted in order to maintain the influence of the probabilities of failure over the objective function even for higher values of γ, that is, for smaller values of the probability of failure. The initial cost is also nonlinear, and given by:

Cinitial(d) = exp(Σ_{i=1}^{2} di⁴) − 1   (16)

Fig. 8 presents the probability of failure surfaces for all three cases, obtained by using the proposed method. A summary of results is presented in Table 4.

The main difference between the one-dimensional and these two-dimensional cases is that, in the latter, the DSRF method as proposed herein employs the usual MC simulation method each time a derivative is required. Also, as the derivative has been calculated by finite differences, each derivative computation requires ndes + 1 runs of the MC simulation method. Thus, the efficiency of the proposed method is highly impaired by these circumstances. A fairer comparison could be performed, for example, by adopting analytical expressions to calculate the derivatives, but in the context of the Monte Carlo simulation this is a challenging issue, and it will be addressed in future studies.
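Leaving the derivative issue aside, the problem data of Eqs. (15) and (16) are straightforward to transcribe; the sketch below also estimates the probability of failure limits reported in Table 4 by brute force on the 30 × 30 grid mentioned at the beginning of Section 5. The grid bounds [0, 1] for each design variable are an assumption, and γ = 1.0 is used.

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(1.0, np.sqrt(0.2), size=(200_000, 3))          # placeholder samples of X

def g(X, d1, d2, gamma=1.0):
    """Eq. (15), vectorized over the samples."""
    return gamma * (X[:, 0] * X[:, 1] + 2.5 * (d1 * d2 + 0.25) ** 2
                    + 0.5 * (d1 + d2)) - X[:, 2]

def c_initial(d1, d2):
    """Eq. (16)."""
    return np.exp(d1 ** 4 + d2 ** 4) - 1.0

# Pf limits over the design space, estimated on a 30 x 30 grid (assumed bounds [0, 1]).
grid = np.linspace(0.0, 1.0, 30)
pf = np.array([[np.mean(g(X, d1, d2) <= 0.0) for d2 in grid] for d1 in grid])
print(pf.min(), pf.max(), c_initial(0.5, 0.5))
```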
Table 3
Summary of results, example 2.

m | LSF evaluations ×10³ (DSRF) | LSF evaluations ×10³ (SMC-I) | LSF evaluations ×10³ (SMC-II) | Relative cost (SMC-I) | Relative cost (SMC-II) | Pf limit (lower) | Pf limit (upper)
1 | 222 | 4900 | 39,900 | 22.08 | 179.76 | 0.00E+00 | 2.20E−01
2 | 245 | 6600 | 42,100 | 26.94 | 171.84 | 0.00E+00 | 4.50E−01
3 | 448 | 12,800 | 38,800 | 28.56 | 86.59 | 0.00E+00 | 5.11E−01

Fig. 8. Probability of failure surface for different safety factors γ.

Table 4
Summary of results, example 3.

γ | LSF evaluations ×10³ (DSRF) | LSF evaluations ×10³ (SMC-I) | LSF evaluations ×10³ (SMC-II) | Relative cost (SMC-I) | Relative cost (SMC-II) | Pf limit (lower) | Pf limit (upper)
1.0 | 8622 | 23,600 | 134,400 | 2.74 | 15.59 | 0.00E+00 | 4.50E−01
1.5 | 6270 | 24,400 | 158,600 | 3.89 | 25.30 | 0.00E+00 | 2.61E−01
2.0 | 6949 | 25,800 | 134,000 | 3.71 | 19.28 | 0.00E+00 | 1.66E−01

Thus, the aim of this example (and of the next one) is to show that, even under these circumstances, the proposed DSRF method can be more efficient than the usual one, at least when the computational cost of evaluating the derivative does not dominate the total computational cost.

In fact, Table 4 shows that DSRF was about 2.7 times faster than SMC-I and 15.59 times faster than SMC-II, for γ = 1.0, and that it can be even more efficient when the range of failure probabilities over the design space is reduced; this can also be seen as an advantage of the proposed method. Since in structural optimization problems failure probabilities are usually small, the construction of the probability of failure curve requires only a few limit state function evaluations, hence increasing the efficiency of the DSRF method.

5.4. Fourth example: varying the number of design variables

This example is similar to example three, but by varying the number of design variables it is verified how the proposed method performs for higher dimensions. The limit state function is an extension of Eq. (15) to the ndes-dimensional case, but with a constant factor γ = 1.0:

g(X, d) = X1·X2 + 2.5·(Π_{i=1}^{ndes} di + 0.25)² + (1/ndes)·Σ_{i=1}^{ndes} di − X3   (17)

The cost of failure is assumed to be equal to 20, and the initial cost is an extension of Eq. (16):

Cinitial(d) = exp(Σ_{i=1}^{ndes} di⁴) − 1   (18)
The problem is solved for three different numbers of design variables: ndes = 5, 10 and 20.

A summary of results is presented in Table 5. In this case, the limits of the probability of failure over the design space were not computed, since they were studied in all previous examples, but also due to the very high computational cost associated with the evaluation of failure probabilities for all points of an ndes-dimensional grid.

Table 5
Summary of results, example 4.

ndes | LSF evaluations ×10³ (DSRF) | LSF evaluations ×10³ (SMC-I) | LSF evaluations ×10³ (SMC-II) | Relative cost (SMC-I) | Relative cost (SMC-II)
5 | 7120 | 35,600 | 115,000 | 5.00 | 16.15
10 | 15,715 | 34,200 | 174,000 | 2.18 | 11.07
20 | 18,262 | 36,800 | 158,800 | 2.02 | 8.70

Table 5 shows a tendency of decreasing efficiency of the proposed method when the number of design variables is increased, but this is a direct consequence of the derivative computation. For higher numbers of design variables the computation of derivatives by finite differences dominates the total computational cost, and the comparison between the proposed and the usual methods is impaired. In order to perform a better comparison, another method to calculate derivatives should be employed. This is the subject of ongoing research by the authors.

6. Concluding remarks

In this paper, a novel method was proposed for the solution of risk or life-cycle cost optimization problems. In risk optimization, failure probabilities are incorporated in the objective function, and existing shortcuts for conventional Reliability-Based Design Optimization problems do not apply. Risk optimization allows one to address the apparently conflicting goals of safety and economy in structural design.

In the conventional solution of risk optimization by simulation, information concerning limit state function behavior over the design space is usually disregarded. A novel method was suggested, where for each Monte Carlo sample, the roots of the limit state function in the design space are sought. This was called the Design Space Root Finding method, or DSRF.

The DSRF method was compared with the usual method, by applying both to four examples involving one- and n-dimensional problems, and different degrees of limit state function and cost function nonlinearities. Results have shown that, for the one-dimensional cases, the proposed DSRF method can be several times more efficient than the usual method, with a computational cost at least nineteen times smaller. In the multi-dimensional cases, efficiency was reduced, but the computational cost was at least two times smaller than the usual method. For the bi- and n-dimensional cases, the comparison was impaired, because the cost to compute derivatives by finite differences started to dominate the computational effort. Hence, in spite of the promising performance verified herein, further study is necessary on the extension of the proposed method to the n-dimensional case.

Acknowledgments

Sponsorship of this research project by the São Paulo Research Foundation, Brazil (Grant numbers 2012/19840-6 and 2012/21357-1) is greatly acknowledged.

References

[1] H. Agarwal, C.K. Mozumder, J.E. Renaud, L.T. Watson, An inverse-measure-based unilevel architecture for reliability-based design, Struct. Multidiscip. Optim. 33 (2007) 217–227.
[2] E. Aktas, F. Moses, M. Ghosn, Cost and safety optimization of structural design specifications, Reliab. Eng. Syst. Saf. 73 (2001) 205–212.
[3] Y. Aoues, A. Chateauneuf, Benchmark study of numerical methods for Reliability-Based Design Optimization, Struct. Multidiscip. Optim. 41 (2010) 277–294.
[4] S.-K. Au, J.L. Beck, Estimation of small failure probabilities in high dimensions by subset simulation, Probab. Eng. Mech. 16 (4) (2001) 263–277.
[5] A.T. Beck, C.C. Verzenhassi, Risk optimization of a steel frame communications tower subject to tornado winds, Lat. Am. J. Solids Struct. 5 (2008) 187–203.
[6] A.T. Beck, W.J.S. Gomes, A comparison of deterministic, reliability-based and risk-based structural optimization under uncertainty, Probab. Eng. Mech. 28 (2012) 18–29, http://dx.doi.org/10.1016/j.probengmech.2011.08.007.
[7] A.T. Beck, W.J.S. Gomes, R.H. Lopez, L.F.F. Miguel, A comparison between robust and risk-based optimization under uncertainty, Struct. Multidiscip. Optim. 52 (3) (2015) 479–492, http://dx.doi.org/10.1007/s00158-015-1253-9.
[8] M.J. Box, A comparison of several current optimization methods, and the use of transformations in constrained problems, Comput. J. 9 (1966) 67–77.
[9] C.G. Broyden, The convergence of a class of double-rank minimization algorithms, J. Inst. Math. Appl. 6 (1970) 76–90.
[10] C. Bucher, M. Dan, D.M. Frangopol, Optimization of lifetime maintenance strategies for deteriorating structures considering probabilities of violating safety, condition, and cost thresholds, Probab. Eng. Mech. 21 (1) (2006) 1–8.
[11] G.D. Cheng, L. Xu, L. Jiang, A sequential approximate programming strategy for reliability-based optimization, Comput. Struct. 84 (21) (2006) 1353–1367.
[12] J. Ching, H. Wei-Chih, Transforming reliability limit-state constraints into deterministic limit-state constraints, Struct. Saf. 30 (2008) 11–33.
[13] X. Du, W. Chen, Sequential optimization and reliability assessment method for efficient probabilistic design, ASME J. Mech. Des. 126 (2004) 225–233.
[14] I. Enevoldsen, J.D. Sorensen, Reliability-based optimization in structural engineering, Struct. Saf. 15 (1994) 169–196.
[15] S. Engelund, R. Rackwitz, A benchmark study on importance sampling techniques in structural reliability, Struct. Saf. 12 (1993) 255–276.
[16] R. Fletcher, A new approach to variable metric algorithms, Comput. J. 13 (1970) 317–322.
[17] D.M. Frangopol, K. Maute, Life-cycle reliability-based optimization of civil and aerospace structures, Comput. Struct. 81 (7) (2003) 397–410.
[18] D. Goldfarb, A family of variable metric updates derived by variational means, Math. Comput. 24 (1970) 23–26.
[19] W.J.S. Gomes, A.T. Beck, Global structural optimization considering expected consequences of failure and using ANN surrogates, Comput. Struct. 126 (2013) 56–68, http://dx.doi.org/10.1016/j.compstruc.2012.10.013.
[20] W.J.S. Gomes, A.T. Beck, T. Haukaas, Optimal inspection planning for onshore pipelines subject to external corrosion, Reliab. Eng. Syst. Saf. 118 (2013) 18–27.
[21] T. Haukaas, Unified reliability and design optimization for earthquake engineering, Probab. Eng. Mech. 23 (4) (2008) 471–481.
[22] J.C. Helton, F.J. Davis, Latin hypercube sampling and the propagation of uncertainty in analyses of complex systems, Reliab. Eng. Syst. Saf. 81 (2003) 23–69.
[23] P.S. Koutsourelakis, H.J. Pradlwarter, G.I. Schuëller, Reliability of structures in high dimensions, part I: algorithms and applications, Probab. Eng. Mech. 19 (2004) 409–417.
[24] J.C. Lagarias, J.A. Reeds, M.H. Wright, P.E. Wright, Convergence properties of the Nelder–Mead simplex method in low dimensions, SIAM J. Optim. 9 (1) (1998) 112–147, http://dx.doi.org/10.1137/S1052623496303470.
[25] R.H. Lopez, A.T. Beck, Reliability-based design optimization strategies based on FORM: a review, J. Braz. Soc. Mech. Sci. Eng. 34 (2012) 506–514.
[26] H.O. Madsen, S. Krenk, N.C. Lind, Methods of Structural Safety, Prentice Hall, Englewood Cliffs, 1986.
[27] M.A. Maes, K. Breitung, D.J. Dupuis, Asymptotic importance sampling, Struct. Saf. 12 (1993) 167–186.
[28] R.E. Melchers, Structural Reliability Analysis and Prediction, 2nd ed., John Wiley and Sons, NY, 1999.
[29] R. Mínguez, E. Castillo, Reliability-based optimization in engineering using decomposition techniques and FORMS, Struct. Saf. 31 (2009) 214–223.
[30] J. Nelder, R. Mead, A simplex method for function minimization, Comput. J. 7 (1965) 308–313.
[31] X. Qu, R.T. Haftka, Design under uncertainty using Monte Carlo simulation and probabilistic sufficiency factor, in: Proceedings of ASME DETC'03 Conference, Chicago, IL, 2003.
[32] M. Soltani, R.B. Corotis, Failure cost design of structural systems, Struct. Saf. 5 (4) (1988) 239–252.
[33] M. Stein, Large sample properties of simulations using latin hypercube sampling, Technometrics 29 (2) (1987) 143–151; correction, vol. 32, p. 367.
[34] H. Streicher, R. Rackwitz, Time-variant reliability-oriented structural optimization and a renewal model for life-cycle costing, Probab. Eng. Mech. 19 (1–2) (2004) 171–183.
[36] W.H. Swann, Report on the Development of a New Direct Searching Method of Optimisation, ICI Ltd., London, England, 1964, Central Instrument Laboratory Research Note 64/3.
[37] J. Tu, K.K. Choi, Y.H. Park, A new study on Reliability-Based Design Optimization, J. Mech. Des. 121 (4) (1999) 557–564.
[38] M.A. Valdebenito, G.I. Schuëller, Design of maintenance schedules for fatigue-prone metallic components using reliability-based optimization, Comput. Methods Appl. Mech. Eng. 199 (33–36) (2010) 2305–2318.
[39] M.A. Valdebenito, G.I. Schuëller, A survey on approaches for reliability-based optimization, Struct. Multidiscip. Optim. 42 (2010) 645–663.
[40] P. Yi, G.D. Cheng, L. Jiang, A sequential approximate programming strategy for performance measure based probabilistic structural design optimization, Struct. Saf. 30 (2008) 91–109.
[41] B.D. Youn, K.K. Choi, A new response surface methodology for Reliability-Based Design Optimization, Comput. Struct. 82 (2004) 241–256.
[42] A.A. Taflanidis, J.L. Beck, An efficient framework for optimal robust stochastic system design using stochastic simulation, Comput. Methods Appl. Mech. Eng. 198 (1) (2008) 88–101.
[43] H.A. Jensen, M.A. Valdebenito, G.I. Schuëller, D.S. Kusanovic, Reliability-based optimization of stochastic systems using line search, Comput. Methods Appl. Mech. Eng. 198 (49–52) (2009) 3915–3924.
[44] O. Ditlevsen, H. Madsen, Structural Reliability Methods, John Wiley & Sons, Chichester, 1996.
