The Design Space Root Finding method for efficient risk optimization by simulation

Wellison J.S. Gomes (a), André T. Beck (b)

(a) Center for Optimization and Reliability in Engineering (CORE), Department of Civil Engineering, Federal University of Santa Catarina, Florianópolis, SC, Brazil
(b) Department of Structural Engineering, University of São Paulo, São Carlos, SP, Brazil
Article history: Received 12 August 2015; Accepted 29 September 2015; Available online 19 November 2015

Keywords: Structural optimization; Risk optimization; Life-cycle cost optimization; Uncertainties; Monte Carlo simulation

Abstract

Reliability-Based Design Optimization (RBDO) is computationally expensive due to the nested optimization and reliability loops. Several shortcuts have been proposed in the literature to solve RBDO problems. However, these shortcuts only apply when failure probability is a design constraint. When failure probabilities are incorporated in the objective function, such as in total life-cycle cost or risk optimization, no shortcuts were available to this date, to the best of the authors' knowledge. In this paper, a novel method is proposed for the solution of risk optimization problems. Risk optimization allows one to address the apparently conflicting goals of safety and economy in structural design. In the conventional solution of risk optimization by Monte Carlo simulation, information concerning limit state function behavior over the design space is usually disregarded. The method proposed herein consists in finding the roots of the limit state function in the design space, for all Monte Carlo samples of the random variables. The proposed method is compared to the usual method in application to one- and n-dimensional optimization problems, considering various degrees of limit state and cost function nonlinearity. Results show that the proposed method is almost twenty times more efficient than the usual method when applied to one-dimensional problems. Efficiency is reduced for higher-dimensional problems, but the proposed method is still at least two times more efficient than the usual method for twenty design variables. As the efficiency of the proposed method for higher-dimensional problems is directly related to derivative evaluations, further investigation is necessary to improve its efficiency in application to multi-dimensional problems.
optimal) structure. This will generally happen as more failure modes are designed against the limit. Nowadays, it is widely recognized that DDO is not robust with respect to existing and unavoidable uncertainties in structural design.
Reliability-Based Design Optimization (RBDO) has emerged as an alternative to properly model the safety-under-uncertainty part of the problem. With RBDO, one can ensure that a minimum (and measurable) level of safety is achieved by the optimum structure, by applying a constraint on the failure probability, Pf. A typical formulation of RBDO reads:

d* = arg min [ cost(d) : d ∈ S, Pf(d, λ) ≤ Pf^admissible ]   (2)

where Pf represents a reliability constraint. Generally, the cost term in this formulation is the same as for DDO, that is, it does not include expected costs of failure. Thus, RBDO allows finding a structure which is optimal in a mechanical sense, and which does not compromise safety. However, results are dependent on the failure probabilities used as constraints in the analysis.
Risk Optimization (RO) increases the scope of the problem by addressing the conflicting goals of economy and safety [20,5–7]. This is accomplished by quantifying the monetary consequences of failure, as well as the costs associated with construction, operation and maintenance, and by including these costs in the objective function. Thus, (reliability-based) Risk Optimization will also indirectly look for the optimum safety factors and failure probabilities:

d* = arg min [ CT(d) : d ∈ S ]   (3)

where CT(d) is the total expected cost, including expected costs of failure. Since Eq. (3) has no design or reliability constraint, its solution also leads to the optimum safety factors, λ*, or the optimum reliability constraints, Pf*(d*, λ*). Expected costs of failure, for each possible failure mode of the structure, are evaluated by multiplying costs of failure by probabilities of failure. Hence, we note that in comparison to RBDO (Eq. (2)), in RO the failure probability is no longer a constraint but part of the objective function. The term CT(d) is further detailed in Section 2.2.
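The practical difference between the two formulations can be seen in a small sketch (an illustration only, not from the paper): in RBDO the failure probability appears as a constraint, while in risk optimization it is folded into the objective. The cost model and failure-probability function below are hypothetical placeholders.

```python
# Minimal sketch (illustration only, not from the paper) of the structural
# difference between Eq. (2) and Eq. (3): in RBDO the failure probability is a
# constraint, in risk optimization it enters the objective. The cost model and
# pf estimator are placeholder assumptions standing in for a real analysis.
from scipy.optimize import minimize

def cost(d):          # construction cost only, as used in DDO/RBDO
    return 2.0 * (d[0] + 0.25)

def pf(d):            # placeholder failure-probability estimate for design d
    return 0.01 / (1.0 + 5.0 * d[0])

# RBDO, Eq. (2): minimize cost subject to Pf(d) <= Pf_admissible
pf_adm = 1e-3
rbdo = minimize(cost, x0=[1.0],
                constraints=[{"type": "ineq", "fun": lambda d: pf_adm - pf(d)}],
                bounds=[(0.0, 2.0)])

# Risk optimization, Eq. (3): minimize total expected cost, Pf in the objective
c_failure = 10.0
ro = minimize(lambda d: cost(d) + c_failure * pf(d), x0=[1.0], bounds=[(0.0, 2.0)])
```

In practice, each evaluation of pf(d) is itself a complete reliability analysis, which is what makes both formulations computationally expensive.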
A review of the literature shows that the nomenclature RBDO is indistinguishably used to describe problems where failure probabilities are design constraints [1,11–13,31,37,40,41], such as in Eq. (2), or included in the objective function [10,14,17,2,21,29,32,34], such as in Eq. (3). It should be clear that these two formulations lead to two fundamentally different problems. Risk optimization (Eq. (3)) yields an unconstrained optimization problem, characterized by the existence of multiple local minima [19,20,6]. Classical RBDO formulations, such as Eq. (2), lead to constrained optimization problems. In classical RBDO articles [1,11–13,31,37,40,41] expected costs of failure are either not considered or dismissed. In this article, we specifically address risk optimization problems.
Both RBDO and RO formulations lead to problems which are very computationally intensive to evaluate. This occurs due to the nested optimization and reliability analysis loops, which occur either with failure probability as a constraint or as part of the objective function. The computational burden is particularly large when iterative numerical methods (e.g., non-linear or dynamic finite element analysis) are employed in solution of the mechanical problem.
A number of approaches have been proposed in the literature in order to convert RBDO into DDO problems [1,11–13,3,25,31,37,39–41]. When the underlying reliability problem (constraint in Eq. (2)) is solved by the First Order Reliability Method (FORM), nested optimization loops are obtained in the classical RBDO formulation. Since FORM is an optimization procedure itself, RBDO becomes a nested, double-loop optimization problem: the inner loop is the reliability analysis and the outer loop is the structural optimization. The coupling of these two loops leads to very high computational costs. To reduce the computational burden, several authors have proposed decoupling the structural optimization and the reliability analysis. Decoupling strategies may be divided into two types: (i) serial single-loop methods and (ii) uni-level methods. The basic idea of serial single-loop methods is to decouple the two loops and solve them sequentially, until some convergence criterion is achieved. On the other hand, uni-level methods employ different strategies to obtain a single loop of optimization to solve the RBDO problem. State-of-the-art reviews of RBDO, including decoupling strategies, are provided in [25,3,39].
Significantly, all decoupling strategies mentioned above address the classical RBDO formulation (Eq. (2)), where failure probabilities are design constraints. To the best of the authors' knowledge, no similar shortcuts exist for solving risk or life-cycle cost optimization problems. Thus, the computational burden associated with risk optimization remains very large.
Solution of the underlying reliability problem is a key issue in solving risk optimization problems. This is still a widely open research field, as different reliability methods found in the literature present very different computational costs and accuracies. Moreover, many new approaches have been proposed in recent years. For instance, the Stochastic Subset Simulation method proposed by Taflanidis and Beck [42] can be seen as a shortcut to solving the risk optimization problem. However, this method rapidly loses efficiency for an increasing number of design variables. In the approach by Taflanidis and Beck [42], the design variables are artificially considered as uncertain and Subset Simulation is employed, in combination with a stochastic search algorithm, to solve the reliability and optimization problems simultaneously. As another example, the approach proposed by Jensen et al. [43], and applied to a risk optimization problem by Valdebenito and Schuëller [38], is very efficient even for problems involving thousands of random variables; however, the method requires many approximations, mainly when the objective function is the total expected cost.
Among the usual methods for reliability analysis, simple Monte Carlo simulation and Latin Hypercube sampling have been employed in many applications, in combination with different optimization algorithms, mainly due to their generality and ease of accuracy control. In general, the accuracy of simple Monte Carlo simulation increases with larger numbers of samples. However, in the risk optimization solution, one complete reliability analysis is required for each trial design. Many variants of the Monte Carlo method have been proposed in order to decrease the computational cost by decreasing the number of samples required to achieve convergence: Latin Hypercube sampling [22,33], subset simulation [4], importance sampling [15], asymptotic sampling [27]. Regardless of the sampling strategy, simulation-based methods in general only use one piece of information about each sample: whether it belongs to the failure domain or not. Hence, only the sign of the limit state function matters. Information about how the limit state function behaves over the design space is not computed. In this paper, it is shown that this commonly employed strategy may not be the most efficient. A novel method is therefore proposed, which can in principle be combined with any of the sampling schemes mentioned above. This method is based on finding, for each sample, the roots of the limit state function in the design space. This dramatically reduces the computational cost, as will be shown herein.
The core of this paper is organized in four sections. Section 2 presents the structural reliability problem and the risk optimization formulation. Section 3 describes the proposed method, focusing on its application in combination with Monte Carlo simulation.
increase, but the expected costs of failure are reduced. Any change in d that affects cost terms is likely to affect the expected cost of failure. Changes in d which reduce costs may result in increased failure probabilities, hence increased expected costs of failure. Reduction in expected failure costs can be achieved by targeted changes in d, which generally increase costs. This compromise between safety and cost is typical of structural systems.
A proper point of compromise between safety and cost can be found by a so-called risk optimization analysis:

d* = arg min [ CT(d) : d ∈ S ]   (9)

where S = {dmin ≤ d ≤ dmax} is a set of constraints on the design variables. The formulation above is sometimes called optimization of life-cycle costs [17,34]. However, life-cycle costs may not necessarily be involved, that is, one may solve a (risk) optimization problem involving only the first and last terms on the right side of Eq. (8).
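To make the cost of the conventional solution concrete, the sketch below (an illustration, not the authors' code) evaluates a two-term total expected cost of the form CT(d) = Cinitial(d) + Cf·Pf(d), i.e., only the first and last terms of Eq. (8), with the failure probability obtained by crude Monte Carlo. The limit state and cost expressions are assumptions borrowed from the shapes used later in the first numerical example (Eqs. (11) and (12)).

```python
# Minimal sketch (not the paper's code): risk optimization objective of Eq. (9)
# reduced to initial cost plus expected cost of failure. The limit state, cost
# expressions and sample size below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(loc=1.0, scale=np.sqrt(0.2), size=(100_000, 3))   # samples of X1, X2, X3

def g(X, d):
    """Illustrative limit state (failure when g <= 0), shaped like Eq. (11)."""
    return 0.5 * X[:, 0] + 0.5 * X[:, 1] + 2.5 * d - X[:, 2]

def pf_crude_mc(d):
    """Usual estimator: only the sign of g is used, one evaluation per sample."""
    return np.mean(g(X, d) <= 0.0)

def total_expected_cost(d, c_failure=10.0):
    c_initial = 2.0 * (d + 0.25)      # illustrative initial cost, shaped like Eq. (12)
    return c_initial + c_failure * pf_crude_mc(d)

# Every trial design visited by the optimizer triggers a complete reliability
# analysis (one limit state evaluation per sample), which is the nested-loop
# cost that motivates the method proposed in Section 3.
print(total_expected_cost(0.5))
```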
3. The Design Space Root Finding method

In the following, the proposed method is described focusing on its application in combination with Monte Carlo simulation. The bases of the proposed method are explained considering the one-dimensional case, that is, the risk optimization problem with just one design variable. After that, one possible way of extending the method to the n-dimensional case is presented.

3.1. One-dimensional case

In the usual application of the simple Monte Carlo simulation method, for a given value of the design variable, an estimate of the probability of failure is obtained by Eq. (6), which requires one evaluation of the indicator function for each sample. Thus, the limit state function is evaluated once for each sample, but only the information about whether the sample led to failure or survival is taken into account, that is, only the sign of the limit state function matters. During solution of the optimization problem, the limit state function is evaluated many times, for different values of the design variable, but the information about how this function changes over the design space is also disregarded. The proposed method is based on the consideration of such information.
In general, if determination of the failure domain over the design space (for each sample) requires fewer limit state function evaluations than the solution of the optimization problem requires (per sample), then it is preferable to determine that region than to apply the usual procedure. In other words, it can be more efficient to determine the failure domain for each sample once, in terms of the design variables, than to apply the recurrent distinction between failure and survival for each sample during the whole optimization process. This holds because, once the failure domains are determined, they can be used to compute the probabilities of failure at a small expense for any point of the design space.
So, while in the usual MC simulation method an integral over the failure domain, considering all samples and for a given design point, leads to an estimate of the probability of failure for that point, in the proposed method the failure domain is first determined, over the whole design space and for each sample, and after that the probability of failure can be computed for any point of the design space with no need of further limit state function evaluations. Fig. 1 illustrates the difference between the two approaches.

Fig. 1. Usual method versus proposed design space root finding method.
In structural optimization problems, the objective function can easily present high nonlinearity and have more than one local minimum, requiring a considerable number of iterations of the optimization method and, consequently, of limit state function evaluations in order to determine the global optimum. On the other hand, determination of the failure domain can be done in a much easier way, by simply determining the roots of the limit state function and extracting some information during this process. This is the methodology adopted herein, hence the proposed method is called the Design Space Root Finding (DSRF) method from now on.
In this paper, four different limit state function behaviors are considered, as illustrated in Fig. 2. We believe these four cases represent usual structural optimization problems. Other cases can be included in the proposed method by following a procedure similar to the one shown herein, but each additional case considered can lead to a loss of efficiency, since identification of the failure domain becomes more expensive.
In all four cases considered herein, when the limit state function does not present any root within the design space, two results are possible: failure or survival over the entire interval. Thus, the design space is either entirely contained in the failure domain or entirely outside of it. In the first case, Fig. 2a, the limit state function is monotonically increasing with the design variable. Thus, given its root, di, the failure domain is defined by {d ≤ di}. The second case, Fig. 2b, consists of a monotonically decreasing limit state function; the failure domain is such that {d ≥ di}. The third and fourth cases, Fig. 2c and d, present two roots each. Failure occurs within the interval {di ≤ d ≤ dj} for the third case, and for {d ≤ di} or {d ≥ dj} for the fourth case. Thus, for all cases the failure domain can be identified by considering the roots of the limit state function over the design space, if these roots exist.
After identification of the failure domains for all samples, the information obtained must be gathered in order to compose a description of the probability of failure over the whole design space. In the one-dimensional case, this may be done by employing a vector, dfailure, containing the limits of the design space, dmin and dmax, and all roots di and dj, together with a vector of probabilities of failure, pfailure, associated with each point of dfailure. Vector dfailure must always be organized in ascending order, while pfailure must be ordered accordingly. The information about where a failure domain starts or ends, associated with the roots, must be used during construction of the vector pfailure.
As an example, let us consider the first case, where failure occurs for d ≤ di. Before the first simulation, vector dfailure contains only the limits of the design space, and the available information is that the probability of failure is null for both dmin and dmax, hence null for the whole design space. When a first sample is simulated (nsamp = 1), assuming that the root of the limit state function, di1, is within the limits of the design space, i.e., dmin < di1 < dmax, it adds the information that the probability of failure is equal to 100% (one failure for one sample) for d ≤ di1. Hence, di1 is incorporated into dfailure, which becomes a vector with three components, and pfailure now also has three components: one null component associated with dmax, and two unitary components, associated with dmin and di1. Fig. 3a illustrates the probability of failure curve after determination of the failure domain for the first sample.
When a second sample is simulated (nsamp = 2), vector pfailure is first updated by multiplying it by a factor (nsamp − 1)/nsamp, since the sample size has increased by one unit. Hence, failure probabilities evaluated as 100% now become 50% (one failure for two
samples). After this updating, assuming that the root related to the second sample, di2, is also within the limits of the design space, di2 is also incorporated into dfailure, and the components of pfailure associated with values of d smaller than di2 are increased by 1/nsamp. Fig. 3b illustrates the probability of failure curve for nsamp = 2.
If the failure domain is equal to the design space, all components of pfailure are updated by the factor (nsamp − 1)/nsamp and increased by 1/nsamp. If the survival domain is equal to the design space, only the probability updating is necessary.
This process is repeated for as many samples as defined by the user. Fig. 3c and d illustrate the probability of failure curve for three and four samples, respectively.
Construction of the probability of failure curve considering the other three cases (Fig. 2b–d) is carried out in a similar way. In fact, during the same analysis, different samples can lead to different cases, but the methodology remains valid. Fig. 4 shows the probability of failure curve for two different cases, as a function of the number of samples. Fig. 4a represents the same failure probability curves as in Fig. 3, but also for a higher number of samples.
Finally, to calculate the probability of failure for a given value of d using the information represented by dfailure and pfailure, it is possible, for example, to adopt linear interpolation between the two nearest points of d, or to consider that the required probability of failure is equal to the probability associated with the first component of dfailure that is smaller than d. One notes that the first option is possible only because the proposed methodology allows knowing where, in the design space, the failure domain is located (for each sample). As this leads to a smoother representation of the probability of failure, it can be considered an advantage over the usual MC simulation method.
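As a concrete illustration of this bookkeeping, the sketch below (an assumed implementation, not the authors' code) treats the monotonically increasing case of Fig. 2a: each sample contributes at most one root in the design space, the sorted roots play the role of dfailure, and the failure probability at any design point then follows without further limit state function evaluations. The limit state, design space limits and sample size are illustrative.

```python
# Minimal sketch (assumed implementation, not the authors' code) of the 1D DSRF
# bookkeeping for the monotonically increasing case of Fig. 2a, in which each
# sample fails for d <= d_i, d_i being the root of g(x_i, d) with respect to d.
import numpy as np
from scipy.optimize import brentq

d_min, d_max = 0.0, 2.0                               # assumed design space
rng = np.random.default_rng(1)
X = rng.normal(1.0, np.sqrt(0.2), size=(100_000, 3))  # samples of X1, X2, X3

def g(x, d):
    """Illustrative limit state, increasing in d; failure when g <= 0."""
    return 0.5 * x[0] + 0.5 * x[1] + 2.5 * d - x[2]

roots = []                                            # plays the role of d_failure
for x in X:
    g_lo, g_hi = g(x, d_min), g(x, d_max)
    if g_lo <= 0.0 and g_hi <= 0.0:                   # failure over the whole interval
        roots.append(d_max)
    elif g_lo > 0.0 and g_hi > 0.0:                   # survival over the whole interval
        continue
    else:                                             # exactly one root in [d_min, d_max]
        roots.append(brentq(lambda d: g(x, d), d_min, d_max))

roots = np.sort(np.asarray(roots))
n_samp = len(X)

def pf(d):
    """Pf at any design point, with no further limit state function evaluations:
    the fraction of samples whose failure domain {d <= d_i} contains d."""
    n_fail = roots.size - np.searchsorted(roots, d, side="left")
    return n_fail / n_samp

print(pf(0.25), pf(0.75))
```

The resulting step curve is the same one produced by the incremental dfailure/pfailure updates described above; linear interpolation between the stored points can be used instead of the step representation, as mentioned in the text.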
3.2. Extension to the multi-dimensional case

It is well known that the most efficient optimization methods are the so-called nonlinear programming methods, for which, under some circumstances, proofs of convergence can be obtained. When nonlinear programming is applied, for a given initial point, d0, the optimization process consists of determining a search direction e, usually related to the derivative of the objective function at that point, and performing a one-dimensional search in that direction, the so-called line search.
Fig. 4. Convergence of the solution for cases one and four (a) and (d), respectively (see Fig. 2).
Fig. 5. Probability of failure surfaces for two design variables and different numbers of samples.
The result of the one-dimensional search replaces the initial point, and the process is repeated until a stopping criterion is achieved.
During the one-dimensional search, the objective function, f(d), is re-written as a function of only one variable, α, since every design point d over the given line can be written as d = d0 + α·e. The optimization process for each search direction e consists of determining α*, where:

α* = arg min [ f(d0 + α·e) : (d0 + α·e) ∈ S ]   (10)

Thus, it is possible to use, for each search direction, the same procedure as in the one-dimensional case, and this is probably the most straightforward way of extending the proposed DSRF method to the n-dimensional case. In this case, the vector dfailure can be written in terms of any of the design variables for which the corresponding component of e is different from zero (or larger than a certain tolerance).
However, the derivative must be found by employing the usual MC simulation method, and this can result in loss of efficiency of the proposed method. More efficient extensions to the n-dimensional case are still being investigated.
Fig. 5 illustrates the construction of the probability of failure surface for a case with two design variables. These surfaces can be obtained, for example, by positioning a design point over the x axis and by determining the probability of failure curve for directions perpendicular to that axis.
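One way of coding this extension, again a sketch under the same assumptions as the one-dimensional example above (and not the authors' implementation), is to restrict the limit state to the line d(α) = d0 + α·e and reuse the root-finding bookkeeping in α. Only the monotone case of Fig. 2a is handled; the other cases of Fig. 2 would require storing both roots and the side on which failure occurs.

```python
# Minimal sketch (assumption, not the authors' code): the 1D DSRF procedure of
# Section 3.1 applied along a search direction, as suggested by Eq. (10).
import numpy as np
from scipy.optimize import brentq

def pf_along_direction(g, X, d0, e, a_min, a_max):
    """Return a Pf(alpha) estimator along the line d0 + alpha*e.
    g(x, d) is the limit state (failure for g <= 0); X holds the random samples.
    Only the case 'failure for alpha <= root' (cf. Fig. 2a) is treated."""
    d0, e = np.asarray(d0, float), np.asarray(e, float)
    roots = []
    for x in X:
        h = lambda a: g(x, d0 + a * e)            # 1D restriction of the limit state
        h_lo, h_hi = h(a_min), h(a_max)
        if h_lo <= 0.0 and h_hi <= 0.0:           # failure along the whole segment
            roots.append(a_max)
        elif h_lo > 0.0 and h_hi > 0.0:           # survival along the whole segment
            continue
        else:                                     # single sign change assumed
            roots.append(brentq(h, a_min, a_max))
    roots = np.sort(np.asarray(roots))
    n_samp = len(X)
    return lambda a: (roots.size - np.searchsorted(roots, a, side="left")) / n_samp
```

The line search along e can then query Pf(α), and hence the total expected cost, at negligible cost; the derivative needed to choose e, however, still relies on the usual MC estimates, which is the loss of efficiency mentioned above.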
4. Optimization method

In this paper, nonlinear programming is employed to solve the optimization problem. Thus, given an initial point, a search direction is determined by calculating the derivative of the objective function with respect to the design variables at that point and by applying the BFGS method [9,16,18]. Once the search direction is defined, a line search is performed in order to find the minimum value of the objective function, and the respective design point, in that direction. Two different line search methods are employed herein: the Davies, Swann and Campey method (DSC) [8,36] and the so-called Simplex method [30], implemented herein as described in [24]. The new design point replaces the initial point, and the process is repeated until a given stopping criterion is achieved, in this case, until the Euclidean length of the search direction vector or the variation of the objective function value (from the previous to the current iteration) is smaller than 10^−8.
In all cases studied herein, both the limit state functions and the mathematical representations of initial and failure costs have analytical derivatives, hence exact derivatives are available. However, in the context of Monte Carlo simulation, the computation of analytical derivatives is still a challenging issue. Thus, in order to make a fair comparison between the usual and the proposed methods, derivatives are computed by finite differences, with a sufficiently large number of samples. By fair comparison, we mean one that holds true in more general cases, when analytical derivatives are not available.
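For reference, such a finite-difference gradient could be computed as sketched below; this is an illustration of the idea, not the authors' implementation, and it assumes that the same set of Monte Carlo samples is reused for all perturbed evaluations so that the differences are not dominated by sampling noise.

```python
# Minimal sketch (assumption, not the authors' code): central finite-difference
# gradient of the total expected cost, as needed for the BFGS search direction.
import numpy as np

def fd_gradient(ct, d, h=1e-4):
    """Central finite differences of the objective ct(d) at design point d.
    ct must evaluate the total expected cost with a fixed set of MC samples."""
    d = np.asarray(d, dtype=float)
    grad = np.zeros_like(d)
    for k in range(d.size):
        dp, dm = d.copy(), d.copy()
        dp[k] += h
        dm[k] -= h
        grad[k] = (ct(dp) - ct(dm)) / (2.0 * h)
    return grad
```

Each gradient call costs 2·ndes objective evaluations, and in the DSRF configuration these derivative evaluations still rely on the usual MC simulation (Section 3.2), which is consistent with the reduced relative efficiency observed for larger numbers of design variables in the fourth example.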
5. Numerical examples

In order to compare the proposed DSRF method and the usual MC simulation method, three different optimization strategies, or configurations, are adopted. The first configuration, simply denominated DSRF, combines the DSRF method with BFGS and with two line searches: DSC and Simplex. The other two configurations involve MC simulation and the BFGS method, but with different line search schemes. The first scheme, SMC-I, uses only the DSC method, and the second, SMC-II, uses both DSC and Simplex line searches.
In the first configuration, DSRF, once the search direction is determined and the probability of failure curve is constructed, the line search can be performed at a small computational expense; thus, both the DSC and Simplex methods are employed, as there is little computational cost penalty.
The optimization scheme using two line search methods is advantageous for the DSRF method, because it allows using the gathered information more efficiently than applying just one line search method.
In the case of the usual MC simulation method, the first scheme, SMC-I, where only the DSC line search is employed, has advantages over SMC-II. In fact, inexact line searches are widely adopted in the literature when dealing with structural risk optimization problems. This occurs because the line search itself presents a high computational cost (i.e., during the line search each objective function evaluation requires a limit state function evaluation, which usually involves the solution of a numerical model). Hence, very precise line search methods become expensive and are not recommended. The second line search scheme, SMC-II, using both line search methods, is performed only to confirm this point and to compare the proposed and usual methods under similar conditions as well.
These three optimization configurations are employed in the solution of four examples, as an attempt to study the efficiency of the proposed method under different circumstances. The examples involve limit state functions and initial costs which are linear or nonlinear with respect to the design variables, as well as different numbers of design variables. Also, in order to study the efficiency of the DSRF method for different orders of magnitude of the probability of failure over the design space, a multiplying factor is incorporated in the third example. The four examples studied herein are summarized in Table 1.

Table 1
Summary of the examples.
Example | Number of design variables | Type of limit state function | Type of initial cost
1 | 1 | Linear | Linear, nonlinear
2 | 1 | Linear, nonlinear | Nonlinear
3 | 2 | Nonlinear | Nonlinear
4 | 5, 10 and 20 | Nonlinear | Nonlinear

Each limit state function involves three independent random variables, designated by Xi, where i = 1, 2, 3, each represented by a normal distribution with unitary mean and variance 0.2. The samples of each random variable are generated by Latin Hypercube sampling [33], although other sampling schemes could also be employed.
In the one-dimensional cases, 100,000 samples are employed, while in the other cases (examples 3 and 4), which are more sensitive to errors in the derivative, more samples (200,000) are adopted. The performances of the three optimization schemes in each example are compared in terms of the number of limit state function evaluations required to determine the optimal design, since these evaluations are the most expensive part of real structural optimization problems.
For the one-dimensional problems, the objective function is plotted considering a grid of 250 points over the design space. In the bi-dimensional case, a grid of 30 by 30 points is considered. An estimate of the lower and upper limits of the probability of failure (over the design space) is obtained in these cases by determining the minimum and maximum values among the probabilities of failure computed along the grid.

5.1. First example: varying the nonlinearity of the initial cost

For the first example, the limit state function is linear and is given by:

g(X, d) = (0.5·X1 + 0.5·X2 + 2.5·d1) − X3   (11)

and the initial cost is written in terms of a parameter n:

Cinitial(d) = 2·(d1 + 0.25)^n   (12)

Three different values of n are adopted, n = 1, 2 and 3, in order to verify the efficiency of the DSRF method for different nonlinearity degrees of the initial cost. The cost of failure is fixed and equal to 10, which corresponds, for example, to ten times the initial cost when n = 1 and d1 = 0.25.
As the limit state function involves only a linear combination of independent normal random variables, for a given value of the design variable the exact probability of failure can be easily calculated, and it is possible to compare the probability of failure curve constructed by the proposed method with the exact one. Thus, this example is also used as a validation of the proposed method.
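Because g in Eq. (11) is a linear combination of normal variables, it is itself normal, and the exact curve used for this validation follows directly. The short check below assumes the distribution parameters stated above (unit mean, variance 0.2) and is provided only as a worked illustration, not as part of the original paper.

```python
# Worked check (illustration, not from the paper): exact Pf for the linear
# limit state of Eq. (11) with X1, X2, X3 ~ Normal(mean 1, variance 0.2).
from math import sqrt
from scipy.stats import norm

def pf_exact(d1, mean=1.0, var=0.2):
    mu_g = 0.5 * mean + 0.5 * mean + 2.5 * d1 - mean      # = 2.5*d1 for unit mean
    sigma_g = sqrt((0.5**2 + 0.5**2 + 1.0**2) * var)      # = sqrt(0.3)
    return norm.cdf(-mu_g / sigma_g)                      # P(g <= 0)

print(pf_exact(0.25))   # reference value against which the DSRF curve can be checked
```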
Fig. 6 presents the exact and the Monte Carlo estimate (obtained by the DSRF method) probability of failure curves for different values of n. It is noted that, even after multiplying the probabilities of failure by 10, which is the cost of failure adopted in this example, the errors are very small and the probability of failure curves are similar. However, it must be taken into account that structural risk optimization problems usually involve very small failure probabilities over the search space. In this context, if the errors are still considerable, a larger number of samples must be adopted.
In this example, as the limit state equation is the same for all cases, the probability of failure limits do not change. Results of the optimization are summarized in Table 2.
It is noted that, since the construction of the probability of failure curve depends only on the limit state function, the number of limit state function evaluations does not change when DSRF is applied for n = 1, 2 or 3. Thus, the relative computational cost for SMC-II depends basically on the performance of the optimization and line search methods. For higher values of n, the optimal design becomes more pronounced, convergence of the optimization is faster and the relative computational cost decreases. The DSRF method was over 165 times faster than SMC-II for all values of n.
In comparison with SMC-I, the proposed method was over 19 times faster for all cases; however, there is no clear tendency in the relative computational cost. This probably occurs because the line search schemes applied are considerably different and have a significant influence on the relative computational cost. More specifically, the DSC line search converged more slowly when employed within SMC-I for n = 2, probably due to the non-smooth description of the probability of failure curve which comes from the usual Monte Carlo simulation. Since the DSC method employs a quadratic interpolation and the objective function becomes closer to a quadratic function when n = 2, the opposite behavior should be expected.

5.2. Second example: varying the nonlinearity of the limit state function

For the second example, the nonlinearity degree of the limit state function is controlled by a parameter m:

g(X, d) = ( X1·X2 + 2.5·(d1 + 0.25) )^m − X3   (13)

and three values of m are tested: m = 1, 2 and 3. The cost of failure is equal to ten, and the initial cost is nonlinear, but remains the same for all three cases:

Cinitial(d) = 2·(d1 + 0.25)^3   (14)

Fig. 7 presents the probability of failure curves for all three
Fig. 6. Analytical and estimated probability of failure curves for different nonlinearity degrees of the initial cost.
Table 2
Summary of results, example 1.
n | Number of limit state function evaluations (×10³) | Relative computational cost | Probability of failure limits

Table 3
Summary of results, example 2.
m | Number of limit state function evaluations (×10³) | Relative computational cost | Probability of failure limits

Table 4
Summary of results, example 3.
γ | Number of limit state function evaluations (×10³) | Relative computational cost | Probability of failure limits
and it will be addressed in future studies.
Thus, the aim of this example (and of the next one) is to show that, even under these circumstances, the proposed DSRF method can be more efficient than the usual one, at least when the computational cost of evaluating the derivative does not dominate the total computational cost.
In fact, Table 4 shows that DSRF was about 2.7 times faster than SMC-I and 15.59 times faster than SMC-II for γ = 1.0, and that it can be even more efficient when the range of failure probabilities over the design space is reduced; this can also be seen as an advantage of the proposed method. Since in structural optimization problems failure probabilities are usually small, the construction of the probability of failure curve requires only a few limit state function evaluations, hence increasing the efficiency of the DSRF method.

5.4. Fourth example: varying the number of design variables

This example is similar to example three, but by varying the number of design variables it is verified how the proposed method performs in higher dimensions. The limit state function is an extension of Eq. (15) to the ndes-dimensional case, but with a constant factor γ = 1.0:

g(X, d) = X1·X2 + 2.5·( ∏_{i=1}^{ndes} di + 0.25 )² + (1/ndes)·∑_{i=1}^{ndes} di − X3   (17)

The cost of failure is assumed to be equal to 20, and the initial cost is an extension of Eq. (16).
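Read this way, Eq. (17) can be transcribed directly into code; the sketch below is one such transcription (failure assumed for g ≤ 0), offered as an illustration rather than the authors' implementation.

```python
# Illustrative transcription of Eq. (17) (assumption, not the authors' code):
# the ndes-dimensional limit state of the fourth example, failure for g <= 0.
import numpy as np

def g_example4(x, d):
    """x = (X1, X2, X3); d = design vector with ndes components."""
    d = np.asarray(d, dtype=float)
    n_des = d.size
    return x[0] * x[1] + 2.5 * (np.prod(d) + 0.25) ** 2 + d.sum() / n_des - x[2]
```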
Table 5
Summary of results, example 4.
[33] M. Stein, Large sample properties of simulations using Latin hypercube sampling, Technometrics 29 (2) (1987) 143–151; correction, vol. 32, p. 367.
[34] H. Streicher, R. Rackwitz, Time-variant reliability-oriented structural optimization and a renewal model for life-cycle costing, Probab. Eng. Mech. 19 (1–2) (2004) 171–183.
[36] W.H. Swann, Report on the Development of a New Direct Searching Method of Optimisation, ICI Ltd., London, England, 1964, Central Instrument Laboratory Research Note 64/3.
[37] J. Tu, K.K. Choi, Y.H. Park, A new study on Reliability-Based Design Optimization, J. Mech. Des. 121 (4) (1999) 557–564.
[38] M.A. Valdebenito, G.I. Schuëller, Design of maintenance schedules for fatigue-prone metallic components using reliability-based optimization, Comput. Methods Appl. Mech. Eng. 199 (33–36) (2010) 2305–2318.
[39] M.A. Valdebenito, G.I. Schuëller, A survey on approaches for reliability-based optimization, Struct. Multidiscip. Optim. 42 (2010) 645–663.
[40] P. Yi, G.D. Cheng, L. Jiang, A sequential approximate programming strategy for performance measure based probabilistic structural design optimization, Struct. Saf. 30 (2008) 91–109.
[41] B.D. Youn, K.K. Choi, A new response surface methodology for Reliability-Based Design Optimization, Comput. Struct. 82 (2004) 241–256.
[42] A.A. Taflanidis, J.L. Beck, An efficient framework for optimal robust stochastic system design using stochastic simulation, Comput. Methods Appl. Mech. Eng. 198 (1) (2008) 88–101.
[43] H.A. Jensen, M.A. Valdebenito, G.I. Schuëller, D.S. Kusanovic, Reliability-based optimization of stochastic systems using line search, Comput. Methods Appl. Mech. Eng. 198 (49–52) (2009) 3915–3924.
[44] O. Ditlevsen, H. Madsen, Structural Reliability Methods, John Wiley & Sons, Chichester, 1996.