FREET
Computer Program for Statistical, Sensitivity and Probabilistic Analysis

PROGRAM DOCUMENTATION
Revision 09/2002

Part 1
Theory
(Draft)
FREET Program Documentation
Part 1
Theory
Written by
Drahomír Novák, Břetislav Teplý,
Zbyněk Keršner and Miroslav Vořechovský
PREFACE
The aim of this document is to provide the theoretical basis and a description of the numerical methods and approaches used in FREET, the probabilistic module of the ATENA software. The theoretical background of the relevant reliability techniques is provided together with information on their role in reliability engineering. The text uses some of the jargon of probability theory, but because we are presenting a practical tool for engineers we remain terminologically at the fundamental level. First, the main reasons that stimulated the development of a probabilistic module for nonlinear fracture mechanics software are explained.
The properties of many physical systems and/or the inputs to these systems exhibit complex random fluctuations that cannot be captured and characterized completely by deterministic models. Probabilistic models are needed to quantify the uncertainties of these properties, to develop realistic representations of the output and failure state of these systems, and to obtain rational and safe designs. The ATENA software (Červenka & Pukl 2000) represents an efficient tool for nonlinear analysis of reinforced concrete structures, taking into account recent theoretical achievements of fracture mechanics. It enables realistic modelling of a structure and estimation of the failure load, including the post-peak behaviour, using state-of-the-art numerical methods of fracture mechanics. However, until 2002 the ATENA software was purely deterministic, meaning that all geometrical, material and load parameters of a computational model had to be fixed to deterministic values. Generally, the material, geometrical and load parameters of nonlinear fracture mechanics models are rather uncertain (random), and modelling these uncertainties of the computational model in a probabilistic way is therefore highly desirable.
The achievements of material science and the modelling of concrete would be less important if they did not contribute to everyday design practice and structural reliability. The more complicated the computational model of a structure is, the more difficult the application of reliability analysis at almost any level becomes. Linear elastic analysis enables simple reliability calculations – the last consistent reliability approach in design was the allowable stress method. Recent development has introduced a significant inconsistency: Eurocode 2 (1991) demands a nonlinear analysis using first mean values and then design values of the material parameters. No real guarantee of, or information on, safety can be obtained using the partial safety factor concept as accepted in present design codes. The approach generally fails if the internal forces entering the safety margin (failure criteria) are not proportional to the load level, as is the case for complex nonlinear problems. The more complex (statically indeterminate) the structure is, the less satisfactory the inconsistent approach of partial safety factors becomes. This is a well-known problem, and there is only one straightforward solution: application of safety factors to the results of statistical nonlinear analysis (failure load, stresses, deflections), not to the input parameters. The general trend is toward a consistent reliability assessment as recommended by Eurocode 1 (1993).
An important phenomenon in quasi-brittle materials is the size effect. The history of the description of the size effect can be seen as a history of two fundamentally different approaches – deterministic and statistical explanations. The first explanation was definitely statistical – it dates back to the pioneering work of Weibull (1939) and many others, mainly mathematicians.
The phenomenon that larger specimens usually fracture under a relatively smaller applied load was at that time associated with the statistical theory of extreme values. Most researchers then focused
on the energetic basis of the size effect, and the main achievements were purely deterministic. Let us mention e.g. the book by Bažant & Planas (1998) as an extensive source of information. The size effect phenomenon thus has two basic features: deterministic and statistical. Early works, e.g. Shinozuka (1972), Mihashi & Izumi (1977) and Mazars (1982), considered the uncertainties involved in concrete fracture using different theories. Recently, there have been attempts to combine the last decade's achievements of both fracture mechanics and reliability engineering, e.g. Carmeliet (1994), Carmeliet & Hens (1994), Gutiérez & de Borst (1999), Bažant & Novák (2000a) and others.
The arguments mentioned above represent the basis of the need to combine efficient reliability techniques with present knowledge in the field of nonlinear fracture mechanics. The remarkable development of computer hardware makes Monte Carlo type numerical simulation of complex nonlinear responses possible. The reasons for a complex reliability treatment of nonlinear fracture mechanics problems can be summarised as follows:
• Modelling of uncertainties (material, load and environmental) in the classical statistical sense as random variables or random processes (fields), with the possibility to use statistical information from real measurements.
• The inconsistency of designing for safety using partial safety factors – a fundamental problem.
• Size effect phenomena.
The aim of the basic FREET-ATENA statistical and reliability nonlinear analysis is to obtain an estimation of the structural response statistics (stresses, deflections, failure load etc.) and/or a sensitivity analysis and an estimation of reliability. The probabilistic procedure can basically be itemised as follows:
• Uncertainties are modelled as random variables described by their probability distribution functions (PDF). The optimal case is when all random parameters are measured and real data exist. A statistical assessment of these experimental data (e.g. data on the strength of concrete or on loading) should then be performed, resulting in the selection of the most appropriate PDF (e.g. Gaussian, lognormal, Weibull, etc.). The result of this step is the set of input parameters for the ATENA computational model – random variables described by their mean value, variance and other statistical parameters (generally by the PDF).
• Random input parameters are generated according to their PDF using Monte Carlo type
simulation (Latin Hypercube Sampling).
• Generated realisations of the random parameters are then used as inputs for the ATENA computational model. The complex nonlinear solution is performed and the results (response variables) are saved.
• The previous two steps are repeated N times (N is the number of simulations used). At the end of the whole simulation process the resulting set of structural responses is statistically evaluated. The results are the mean value, variance, coefficient of skewness, histograms and the empirical cumulative distribution function of the structural response. A minimal sketch of this loop is given below.
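To make the procedure above concrete, the following Python sketch reproduces the simulation loop in miniature. It is not FREET/ATENA code; the variable definitions and the model function are purely illustrative placeholders, and LHS is approximated here by stratified sampling with randomly permuted strata.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
N = 100  # number of simulations

# Illustrative input random variables (distributions and moments are placeholders)
variables = {
    "f_c":  stats.lognorm(s=0.15, scale=38.0),    # concrete strength [MPa]
    "E":    stats.norm(loc=30e3, scale=3e3),      # modulus of elasticity [MPa]
    "load": stats.gumbel_r(loc=95.0, scale=8.0),  # load effect [kN]
}

def model(sample):
    """Placeholder for one ATENA-type nonlinear analysis; returns a response value."""
    return 0.002 * sample["f_c"] * sample["E"] / sample["load"]

# Latin Hypercube Sampling: one value per probability stratum, randomly ordered
samples = {}
for name, dist in variables.items():
    u = (rng.permutation(N) + 0.5) / N            # stratified probabilities
    samples[name] = dist.ppf(u)                   # inverse CDF transformation

responses = np.array([model({k: v[j] for k, v in samples.items()})
                      for j in range(N)])

print("mean =", responses.mean(), "std =", responses.std(ddof=1),
      "skewness =", stats.skew(responses))

In FREET the model call stands for a full ATENA nonlinear analysis, which is why the number of simulations N must be kept small.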
The fundamental techniques needed to fulfil the aim mentioned above are implemented in the probabilistic module FREET and described in this text. Section 1 introduces the reader to the elementary concepts of classical reliability theory. Section 2 is devoted to the description of uncertainties using mathematical models. The kernel of the whole probabilistic approach – Monte Carlo type simulation – is described in Section 3, and how the results are processed and interpreted is the subject of Section 4.
1. RELIABILITY THEORY
1.1 Introduction
The aim of statistical and reliability analysis is mainly the estimation of the statistical parameters of the structural response and/or the theoretical failure probability. Pure Monte Carlo simulation cannot be applied to time-consuming problems, as it requires a large number of simulations (repetitive calculations of the structural response). Historically, this obstacle was partially overcome by approximate techniques suggested by many authors, e.g. Grigoriu (1982/1983), Hasofer & Lind (1974), Li & Lumb (1985), Madsen et al. (1986). Generally, the problematic feature of these techniques is their (in)accuracy. Research then focused on the development of advanced simulation techniques which concentrate the simulations in the failure region (Bourgund & Bucher 1986, Bucher 1988, Schuëller & Stix 1987, Schuëller et al. 1989). In spite of the fact that they usually require a smaller number of simulations than pure Monte Carlo (thousands), their application to nonlinear fracture mechanics problems can be critical and still almost impossible. But there are some feasible alternatives: Latin hypercube sampling (McKay et al. 1979, Ayyub & Lai 1989, Novák et al. 1998) and response surface methodologies (Bucher & Bourgund 1987).
The term stochastic or probabilistic finite element method (SFEM or PFEM) is used to refer to a finite element method which accounts for uncertainties in the geometry or material properties of a structure, as well as in the applied loads. Such uncertainties are usually spatially distributed over the region of the structure and should be modelled as random fields. Of the many works on SFEM produced in the last two decades, let us mention e.g. Kleiber & Hien (1992), Vanmarcke et al. (1986) and Yamazaki et al. (1988). The interest in this area has grown from the perception that in some structures the response is strongly sensitive to the material properties, and that even small uncertainties in these characteristics can adversely affect the structural reliability. This is valid especially in the case of highly nonlinear problems of nonlinear fracture mechanics.
• The relative positions of the two curves: as the distance between the two curves increases, the probability of failure decreases. The position of the curves may be represented by the means (µS and µR) of the two variables.
• The dispersion of the two curves: if the two curves are narrow, then the area of overlap and the probability of failure are small. The dispersion may be characterized by the standard deviations (σS and σR) of the two variables.
• The shape of the two curves: the shapes are represented by the probability density functions fS(s) and fR(r) (these three ingredients enter the classical relation given below).
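For orientation, the three ingredients above come together in the classical formulation of the failure probability for a load effect S and a resistance R with densities fS and fR (a standard relation, e.g. Freudenthal 1956; it is restated here and is not part of the numbered equations of this document):

p_f = P(R \le S) = \int_{-\infty}^{\infty} F_R(x)\, f_S(x)\, dx .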
g(\mathbf{X}) = g(X_1, X_2, \ldots, X_n) \ge 0 .    (1.2)

The performance of the system and its components is described considering a number of limit states. A limit state function can be an explicit or implicit function of the basic random variables, and it can have a simple or rather complicated form. Usually, the convention is made that it takes a negative value if a failure event occurs. Therefore the failure event is defined as the space where Z ≤ 0 and the survival event is defined as the space where Z > 0. Two basic classes of failure criteria can be distinguished: structural collapse and loss of serviceability.
p_f = \int_{D_f} f(X_1, X_2, \ldots, X_n)\, dX_1\, dX_2 \ldots dX_n ,    (1.6)

where D_f represents the failure region where g(X) ≤ 0 (the integration is performed over this region) and f(X_1, X_2, ..., X_n) is the joint probability density function of the random variables X = X_1, X_2, ..., X_n.
The equality Z = 0 divides the multidimensional space of the basic random variables X = X_1, X_2, ..., X_n into safe and failure regions. Explicit calculation of the integral (1.6) is generally impossible; therefore the application of a Monte Carlo type simulation technique is a simple and in many cases feasible alternative for estimating the failure probability (e.g. Rubinstein 1967, Schreider 1967, Schuëller & Stix 1987 and others).
The First Order Reliability Method (FORM) was initially proposed by Hasofer & Lind (1974). In FORM, a linear approximation of the limit state surface in the uncorrelated standardized Gaussian space is used to estimate the probability of failure. For this purpose it is necessary to transform the basic variables. The distance from the design point of the transformed limit state function to the origin is called the reliability index β.
Note that the design point – the point on the limit state surface with the minimum distance to the origin in the standard normal space – is considered to be important. It is also the point of maximum likelihood if the basic variables are normally distributed. This point can be obtained by solving the optimization problem expressed in (1.3). The corresponding minimum distance β is known as the reliability index. It can be shown that the probability of failure is approximately given by

p_f = 1 - \Phi(\beta) = \Phi(-\beta) ,    (1.7)

where Φ denotes the standardized Gaussian distribution function. In the case of a linear limit state function and normally distributed basic variables no transformation is necessary and equation (1.7) yields the exact failure probability.
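As a minimal numerical illustration of equation (1.7) (our example, not taken from FREET): for a linear safety margin Z = R − S with independent normal variables, β = µ_Z/σ_Z and p_f = Φ(−β) is exact, which a crude Monte Carlo check confirms. The statistics used below are illustrative.

import numpy as np
from scipy import stats

mu_R, sig_R = 320.0, 32.0     # illustrative resistance statistics
mu_S, sig_S = 220.0, 30.0     # illustrative load-effect statistics

mu_Z = mu_R - mu_S
sig_Z = np.hypot(sig_R, sig_S)
beta = mu_Z / sig_Z
pf_exact = stats.norm.cdf(-beta)          # equation (1.7)

rng = np.random.default_rng(0)
N = 2_000_000
Z = rng.normal(mu_R, sig_R, N) - rng.normal(mu_S, sig_S, N)
pf_mc = np.mean(Z <= 0.0)

print(f"beta = {beta:.3f}, pf (exact) = {pf_exact:.2e}, pf (MC) = {pf_mc:.2e}")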
In spite of the fact that the calculation of the failure probability using the reliability index (according to Cornell or Hasofer & Lind) does not belong to the category of very accurate reliability techniques (e.g. Bourgund & Bucher 1986), it represents a feasible alternative in many practical cases. The relationship between the reliability index and the failure probability is illustrated in Fig. 1.2 for both the original safety margin Z and the standardized safety margin Z_S = (Z - \mu_Z)/\sigma_Z.
Fig. 1.2: Failure probability and reliability index: a) safety margin in the original space, b) in the standardized space.
2. MODELING OF UNCERTAINTIES

Deterministic (the parameter is fixed to a single value; no distribution parameters are needed)

Normal

f_x(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^{2}} , \qquad -\infty < x < \infty

PAR1 = \mu , \qquad PAR2 = \sigma

F_x(x) = \Phi\!\left(\frac{x-\mu}{\sigma}\right)

Uniform

f_x(x) = \frac{1}{b-a} , \qquad a \le x \le b

PAR1 = a = \bar{x} - \frac{\sigma\sqrt{12}}{2} , \qquad PAR2 = b = a + \sigma\sqrt{12} , \qquad \sigma = \frac{2(\bar{x}-a)}{\sqrt{12}}

F_x(x) = \frac{x-a}{b-a}

Shifted Exponential

f_x(x) = \lambda\, e^{-\lambda (x-x_0)}

PAR1 = \lambda = \frac{1}{\sigma} , \qquad PAR2 = x_0 = \bar{x} - \frac{1}{\lambda}

F_x(x) = 1 - e^{-\lambda (x-x_0)} , \qquad x_0 \le x

Shifted Rayleigh

f_x(x) = \frac{x-x_0}{\alpha^{2}}\, e^{-\frac{1}{2}\left(\frac{x-x_0}{\alpha}\right)^{2}}

PAR1 = \alpha = \frac{\sigma_x}{\sqrt{2-\pi/2}} , \qquad PAR2 = x_0 = \bar{x} - \alpha\sqrt{\frac{\pi}{2}}

F_x(x) = 1 - e^{-\frac{1}{2}\left(\frac{x-x_0}{\alpha}\right)^{2}} , \qquad x_0 \le x

Gumbel Max (Type I, largest values)

PAR1 = \alpha = \frac{\pi}{\sigma\sqrt{6}} , \quad \alpha \ge 0 , \qquad PAR2 = u = m - \frac{0.577}{\alpha}

F_x(x) = e^{-e^{-\alpha (x-u)}}

Gumbel Min (Type I, smallest values)

PAR1 = \alpha = \frac{\pi}{\sigma\sqrt{6}} , \quad \alpha \ge 0 , \qquad PAR2 = u = m + \frac{0.577}{\alpha}

F_x(x) = 1 - e^{-e^{\alpha (x-u)}}

Lognormal

f_x(x) = \frac{1}{x\sqrt{2\pi\,\sigma_{\ln x}^{2}}}\, e^{-\frac{1}{2}\left(\frac{\ln (x/m_{\ln})}{\sigma_{\ln x}}\right)^{2}}

PAR2 = \sigma_{\ln x} = \sqrt{\ln\!\left(\frac{\sigma^{2}}{m^{2}} + 1\right)}

F_x(x) = \Phi\!\left(\frac{\ln x - \ln m_{\ln}}{\sigma_{\ln x}}\right)

Gamma

f_x(x) = \frac{\upsilon\,(\upsilon x)^{k-1}\, e^{-\upsilon x}}{\Gamma(k)} , \qquad x \ge 0 ,\; k > 0 ,\; \upsilon > 0

PAR1 = \upsilon = \frac{m}{\sigma^{2}} , \qquad PAR2 = k = \frac{m^{2}}{\sigma^{2}}

F_x(x) = \int_{0}^{x} f_x(x)\,dx = \frac{\Gamma(k, \upsilon x)}{\Gamma(k)}

Type II largest values (Frechet)

m = \upsilon\,\Gamma\!\left(1 - \frac{1}{k}\right) , \qquad \sigma^{2} = \upsilon^{2}\left[\Gamma\!\left(1 - \frac{2}{k}\right) - \Gamma^{2}\!\left(1 - \frac{1}{k}\right)\right]

F_x(x) = e^{-\left(\frac{\upsilon}{x}\right)^{k}}

Type III smallest values (Weibull)

m = \varepsilon + (w - \varepsilon)\,\Gamma\!\left(1 + \frac{1}{k}\right) , \qquad \sigma^{2} = (w - \varepsilon)^{2}\left[\Gamma\!\left(1 + \frac{2}{k}\right) - \Gamma^{2}\!\left(1 + \frac{1}{k}\right)\right]
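The moment-to-parameter relations listed above can be verified numerically. The Python sketch below (ours, not part of FREET) does so for the normal, uniform and lognormal cases using scipy.stats; the lognormal median m_ln = m / sqrt(1 + σ²/m²) is our addition, implied by the relations above but not stated explicitly in the listing.

import numpy as np
from scipy import stats

m, sigma = 30.0, 4.5          # prescribed mean and standard deviation

# Normal: PAR1 = mu, PAR2 = sigma
normal = stats.norm(loc=m, scale=sigma)

# Uniform: a = m - sigma*sqrt(12)/2, b = a + sigma*sqrt(12)
a = m - sigma * np.sqrt(12.0) / 2.0
b = a + sigma * np.sqrt(12.0)
uniform = stats.uniform(loc=a, scale=b - a)

# Lognormal: sigma_ln = sqrt(ln(sigma^2/m^2 + 1)); median m_ln (our addition)
s_ln = np.sqrt(np.log(sigma**2 / m**2 + 1.0))
m_ln = m / np.sqrt(1.0 + sigma**2 / m**2)
lognormal = stats.lognorm(s=s_ln, scale=m_ln)

for name, dist in [("normal", normal), ("uniform", uniform), ("lognormal", lognormal)]:
    print(f"{name:9s} mean = {dist.mean():.3f}  std = {dist.std():.3f}")  # all ~ (30.0, 4.5)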
• The realization of the variable X_i is then obtained using the inverse transformation of the cumulative probability distribution function (CPDF)

x_{i,j} = \Phi_{X_i}^{-1}(u_{i,j}) ,    (3.1)

where \Phi_{X_i}(\cdot) is the cumulative distribution function of X_i – see Fig. 3.1. Note that different variables possess different CPDFs. The previous steps are performed for all input random variables X = X_1, X_2, ..., X_i, ..., X_n.
• In the j-th simulation the function z_j = g(X) is evaluated using the input variable representations related to the j-th simulation. This process is repeated for all N simulations (j = 1, 2, ..., N).
• The final step is either the statistical evaluation of the set Z = z_1, z_2, ..., z_j, ..., z_N, rendering the statistical moments of Z, and/or the assessment of the probability p_f as

p_f \approx \frac{n(g \le 0)}{N} .    (3.2)

A small numerical sketch of these steps follows.
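The following is a minimal Python sketch of steps (3.1)–(3.2) with crude Monte Carlo sampling (the limit state function g and the distributions are illustrative assumptions, not taken from FREET):

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
N = 200_000

# Illustrative basic variables X1 (resistance-like) and X2 (load-like)
dist = [stats.lognorm(s=0.10, scale=60.0), stats.gumbel_r(loc=35.0, scale=4.0)]

def g(x1, x2):                 # illustrative limit state function, failure if g <= 0
    return x1 - x2

# Equation (3.1): inverse transformation of uniform deviates u_{i,j}
u = rng.random((2, N))
x = np.vstack([d.ppf(u_i) for d, u_i in zip(dist, u)])

z = g(x[0], x[1])
pf = np.count_nonzero(z <= 0.0) / N     # equation (3.2)
print(f"estimated pf = {pf:.4e}")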
Using mathematical formalism and considering the function 1[g(X)] having two values only,

1[g] = 1 \;\ldots\; g(\mathbf{X}) \le 0 ,
1[g] = 0 \;\ldots\; g(\mathbf{X}) > 0 ,    (3.3)

we have

p_f = \int_{\Omega_X} 1[g]\, f_X(\mathbf{X})\, d\mathbf{X} ,    (3.4)

E[p_f] = \frac{1}{N} \sum_{i=1}^{N} 1[g] .    (3.5)

Evidently, a great number of simulations has to be performed when a small probability is expected. When, for example, COV_{p_f} = 0.1 is required and p_f is estimated to be around 10^{-6}, then N = 10^{8} simulations are necessary.
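The figure N = 10^8 follows from the coefficient of variation of the crude Monte Carlo estimator (a standard result, restated here for clarity):

COV_{p_f} = \sqrt{\frac{1 - p_f}{N\, p_f}}
\quad\Rightarrow\quad
N = \frac{1 - p_f}{COV_{p_f}^{2}\, p_f} \approx \frac{1}{0.1^{2} \cdot 10^{-6}} = 10^{8} .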
where x_{i,k} is the k-th sample of the i-th variable X_i and \Phi_i^{-1} is the inverse CPDF of the variable X_i.
                         variable i
 simulation k       1    2    3    4    5    6
        1           9    1   10    4    1    1
        2           4    5    3    7   10    2
        3           8    3    9   10    8    5
        4           6    2    8    9    3   10
        5          10    4    4    8    9    6
        6           7   10    5    1    2    4
        7           5    9    6    5    4    7
        8           2    6    7    2    6    3
        9           1    7    1    6    7    8
       10           3    8    2    3    5    9

Table 1: Example of a table of random permutations (N = 10 simulations, n = 6 variables).
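A table of this kind is obtained by drawing an independent random permutation of the integers 1, ..., N for each of the n variables; a minimal Python sketch (ours):

import numpy as np

rng = np.random.default_rng(7)
N, n = 10, 6   # simulations x variables, as in Table 1

# Each column is an independent random permutation of 1..N
perm_table = np.column_stack([rng.permutation(N) + 1 for _ in range(n)])
print(perm_table)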
It should be noted that this approach gives samples whose mean is close to the desired one, while the sample variances may differ significantly. An improvement is to select each sample as the probabilistic mean of its interval:

x_{i,k} = N \int_{y_{i,k-1}}^{y_{i,k}} x\, f_i(x)\, dx ,    (3.8)

where f_i is the probability density function and the limits of integration are given by

y_{i,k} = \Phi_i^{-1}\!\left(\frac{k}{N}\right) .    (3.9)

The integral above may not always be solvable in closed form. However, the extra effort of numerical integration is justified by the statistical accuracy gained, as was shown by Huntington & Lyrintzis (1998).
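A minimal Python sketch of equations (3.8)–(3.9) for a single variable (ours; the standard normal marginal and the comparison with sampling at the interval midpoints of probability are illustrative choices):

import numpy as np
from scipy import stats
from scipy.integrate import quad

N = 10
dist = stats.norm()                      # illustrative marginal, standard normal

# Interval boundaries, equation (3.9); y_0 and y_N extend to -inf / +inf
y = dist.ppf(np.arange(0, N + 1) / N)

# Samples as probabilistic means of each interval, equation (3.8)
x_mean = np.array([N * quad(lambda x: x * dist.pdf(x), y[k - 1], y[k])[0]
                   for k in range(1, N + 1)])

# For comparison: samples at the interval midpoints of probability
x_mid = dist.ppf((np.arange(1, N + 1) - 0.5) / N)

print("mean-based samples: mean =", x_mean.mean(), " std =", x_mean.std(ddof=1))
print("midpoint samples:   mean =", x_mid.mean(),  " std =", x_mid.std(ddof=1))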
There are generally two problems related to statistical correlation. First, during sampling an undesired correlation can be introduced between the random variables (columns in Table 2). For example, instead of a zero correlation coefficient for uncorrelated random variables, an undesired correlation of e.g. 0.6 can be generated. This can happen especially in the case of a very small number of simulations (tens), where the number of possible combinations is rather limited. The second task is to introduce a prescribed statistical correlation between the random variables defined by a correlation matrix. The columns in Table 2 should be rearranged in such a way as to fulfil these two requirements: to diminish the undesired random correlation and to introduce the prescribed correlation. The efficiency of the LHS technique was shown for the first time by McKay et al. (1979), but only for uncorrelated random variables. A first technique for the generation of correlated random variables was proposed by Iman and Conover (1982). The authors also showed an alternative to diminish the undesired random correlation. The technique is based on iterative updating of the sampling matrix; a Cholesky decomposition of the correlation matrix has to be applied. The technique can result in a very low correlation coefficient (in absolute value) when generating uncorrelated random variables. But Huntington and Lyrintzis (1998) found that the approach tends to converge to an ordering which still gives significant correlation errors between some variables. The scheme has more difficulties when simulating correlated variables, as the correlation procedure can be performed only once; there is no way to iterate it and improve the result. These obstacles stimulated the authors to propose the so-called single-switch-optimized ordering scheme. Their approach is based on iterative switching of pairs of samples in Table 2. The authors showed that their technique performs well enough, but it may still converge to a non-optimum ordering. A different method is needed for the simulation of both uncorrelated and correlated random variables. Such a method should be efficient enough: reliable, robust and fast.
Note that the exact best result would be obtained only if all possible combinations of the ranks of each column (variable) in Table 1 were tried. It would be necessary to examine an extremely large number of rank combinations, (N_{Sim}!)^{N_V - 1}. It is clear that this brute-force approach is hardly applicable in spite of the fast development of computer hardware. Note that we leave aside the concept of selecting samples from the N-dimensional PDF (with different marginal components) and a prescribed covariance structure (correlation matrix).
The imposition of a prescribed correlation matrix on the sampling scheme can be understood as an optimization problem: the difference between the prescribed correlation matrix K and the generated correlation matrix S should be as small as possible. A suitable measure of the quality of the overall statistical properties can be introduced, e.g. the maximal difference of the correlation coefficients between the matrices, E_max, or a norm which takes into account the deviations of all correlation coefficients:

E_{max} = \max_{1 \le i < j \le N_V} \left| S_{i,j} - K_{i,j} \right| , \qquad
E_{overall} = \sqrt{ \sum_{i=1}^{N_V - 1} \sum_{j=i+1}^{N_V} \left( S_{i,j} - K_{i,j} \right)^{2} } .    (3.10)
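A direct implementation of the two measures in (3.10) (a sketch with illustrative matrices; not FREET code):

import numpy as np

def correlation_norms(S, K):
    """E_max and E_overall of equation (3.10), taken over the upper triangle."""
    iu = np.triu_indices_from(K, k=1)
    diff = S[iu] - K[iu]
    return np.abs(diff).max(), np.sqrt(np.sum(diff**2))

K = np.array([[1.0, 0.5, 0.3],
              [0.5, 1.0, 0.0],
              [0.3, 0.0, 1.0]])          # prescribed (target) correlation matrix
S = np.array([[1.0, 0.42, 0.35],
              [0.42, 1.0, 0.08],
              [0.35, 0.08, 1.0]])        # correlation actually generated
print(correlation_norms(S, K))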
The norm E has to be minimized; from the point of view of the definition of the optimization problem, the objective function is E and the design variables are related to the ordering in the sampling scheme (Table 2). It is well known that deterministic optimization techniques and simple stochastic optimization approaches can very often fail to find the global minimum. Such a technique gets stuck in some local minimum, and then there is no chance to escape from it and find the global minimum. It can be intuitively expected that in our problem we are definitely facing a problem with multiple local minima. Therefore we need to use a stochastic optimization method which works with some probability of escaping from a local minimum. The simplest form is the two-member evolution strategy, which works in two steps: mutation and selection.
Step 1 (mutation): A new arrangement of the random permutations matrix X is obtained using random changes of the ranks; one change is applied to one random variable and should be generated randomly. The objective function (norm E) can then be calculated using the newly obtained correlation matrix – this value is usually called the “offspring”. The norm E calculated using the former arrangement is called the “parent”.
Step 2 (selection): The selection chooses the better of the two norms, “parent” or “offspring”, to survive: for the new generation (permutation table arrangement) the best individual (table arrangement) has to give a value of the objective function (norm E) smaller than before.
This approach has been intensively tested on a number of examples. It was observed that in most cases the method could not capture the global minimum. It got stuck in a local minimum, and there was no chance to escape from it, as only an improvement of the norm E resulted in acceptance of the “offspring”.
The step “selection” can be improved by the Simulated Annealing (SA) approach, a technique which is very robust with respect to the starting point (the initial arrangement of the permutations table). SA is an optimization algorithm based on randomization techniques which incorporates aspects of iterative improvement algorithms. Basically, it is based on the Boltzmann probability distribution:

\Pr(E) \approx e^{-\frac{\Delta E}{k_b T}} ,    (3.11)

where ΔE is the difference of the norms E before and after the random change. This probability distribution expresses the concept that a system in thermal equilibrium at temperature T has its energy probabilistically distributed among all different energy states ΔE. The Boltzmann constant k_b relates the temperature and the energy of the system. Even at low temperatures there is a chance (although very small) of the system being locally in a high-energy state. Therefore, there is a corresponding possibility for the system to move out of a local energy minimum in favour of finding a better minimum (escape from a local minimum). There are two alternatives in step 2 (selection). First, the new arrangement (offspring) results in a decrease of the norm E; naturally, the offspring is accepted for the new generation. Second, the offspring does not decrease the norm E; such an offspring is accepted with the probability (3.11). This probability changes as the temperature changes. As a result, there is a much higher probability that the global minimum is found, in comparison with deterministic methods and simple evolution strategies.
In our case k_b can be considered to be one. In the classical application of the SA approach to optimization there is one problem: how to set the initial temperature? Usually it has to be chosen heuristically. But our problem is constrained: acceptable elements of the correlation matrix are always from the interval <-1; 1>. Based on this fact, the maximum of the norm (3.10) can be estimated using the prescribed correlation matrix and the hypothetically “most remote” one (unit correlation coefficients, plus or minus). This represents a significant advantage: the heuristic estimation of the initial temperature is avoided; the estimate can be obtained without any guess by the user and without a “trial and error” procedure. The temperature then has to be decreased step by step, e.g. using a reduction factor f_T applied after a constant number of iterations (e.g. a thousand): T_{i+1} = T_i · f_T. The simplest choice is to use e.g. f_T = 0.95; note that more sophisticated cooling schedules are known in SA theory (Otten and van Ginneken 1989, Laarhoven and Aarts 1987).
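The following Python sketch is our compact reading of the ordering scheme described above (mutation by swapping two samples within one column, Boltzmann acceptance per (3.11), initial temperature from the worst-case norm, geometric cooling). It is not the FREET implementation; the use of the Pearson correlation of the sample values and the default parameter values are illustrative choices that may need tuning.

import numpy as np

def anneal_correlation(samples, K, n_iter=50_000, cool_every=200, f_T=0.95, seed=0):
    """Reorder each column of `samples` (N x n) so that its correlation matrix
    approaches the prescribed matrix K: mutation by swapping two samples of one
    variable, Boltzmann acceptance (3.11), geometric cooling T_{i+1} = f_T * T_i."""
    rng = np.random.default_rng(seed)
    X = samples.copy()
    N, n = X.shape
    iu = np.triu_indices(n, k=1)

    def norm_E(A):                       # overall norm (3.10)
        S = np.corrcoef(A, rowvar=False)
        return np.sqrt(np.sum((S[iu] - K[iu]) ** 2))

    E = norm_E(X)
    # Initial temperature from the "most remote" correlation matrix (entries +-1)
    T = np.sqrt(np.sum((np.abs(K[iu]) + 1.0) ** 2))

    for it in range(1, n_iter + 1):
        col = rng.integers(n)                            # mutation: pick one variable
        i, j = rng.choice(N, size=2, replace=False)      # ... and two of its samples
        X[[i, j], col] = X[[j, i], col]                  # swap them
        E_new = norm_E(X)
        if E_new - E <= 0.0 or rng.random() < np.exp(-(E_new - E) / T):
            E = E_new                                    # accept the offspring
        else:
            X[[i, j], col] = X[[j, i], col]              # reject: undo the swap
        if it % cool_every == 0:
            T *= f_T                                     # cooling schedule
    return X, E

# Illustrative use: 64 LHS probabilities of 3 variables (these columns would later
# be mapped through the inverse CDFs), with prescribed correlation matrix K
rng = np.random.default_rng(1)
N = 64
base = np.column_stack([rng.permutation((np.arange(N) + 0.5) / N) for _ in range(3)])
K = np.array([[1.0, 0.6, -0.2], [0.6, 1.0, 0.0], [-0.2, 0.0, 1.0]])
X_opt, E_final = anneal_correlation(base, K)
print("final E_overall =", E_final)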
Fig. 3.3: The norm E (error) vs. the number of random changes (rank switches).
The correlation matrix in this example is obviously not positive definite: a strong positive statistical correlation is required between variables (A, B) and variables (A, C), but a strong negative correlation between variables (B, C). It is clear that only a compromise solution is possible. The method arrived at such a compromise solution, S_1, without any problem (the number of simulations N_Sim was high enough to avoid a limitation in the number of rank combinations). The final values of the norms are included on the right-hand side: the first line corresponds to the norm E_max (3.10), the second line (bold) to the overall norm E_overall (3.10). This feature of the method can be accepted and interpreted as an advantage. Practical reliability problems with non-positive definite correlation matrices do exist (owing to lack of knowledge). Such matrices represent a limitation when some other methods are used (e.g. those based on the Cholesky decomposition of the prescribed correlation matrix).
In real applications there can be greater confidence in one correlation coefficient (good data) and smaller confidence in another one (just an estimate). A solution to this problem is a weighted calculation of both norms. For example, the norm E_overall can be modified in this way:

E_{overall} = \sqrt{ \sum_{i=1}^{N_V - 1} \sum_{j=i+1}^{N_V} w_{i,j} \left( S_{i,j} - K_{i,j} \right)^{2} } ,    (3.12)

where w_{i,j} is the weight of a certain correlation coefficient. Several examples of weight choices and the resulting correlation matrices (with both norms) are shown above. The resulting matrices S_2 and S_3 illustrate the significance of the proportions among the weights. The weights are given in the lower triangle, and the matrix K is the target again. The weights of the accentuated members and the resulting values are underlined.
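The weighted norm (3.12) is a one-line change to the overall measure sketched earlier (a sketch; the weight matrix W is illustrative):

import numpy as np

def weighted_overall_norm(S, K, W):
    """E_overall of equation (3.12): the weights w_ij emphasize well-known coefficients."""
    iu = np.triu_indices_from(K, k=1)
    return np.sqrt(np.sum(W[iu] * (S[iu] - K[iu]) ** 2))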
4. PROBABILISTIC ASSESSMENT
the integers are uniformly distributed. In the case of Latin Hypercube Sampling the representative values of the random variables cannot be identical, therefore there is no need to consider mid-ranking. Nonparametric correlation is more robust than linear correlation, more resistant to defects in the data and also distribution independent. It is therefore particularly suitable for sensitivity analysis based on Latin Hypercube Sampling.
As a measure of nonparametric correlation we use the statistic called Kendall's tau. It utilizes only the relative ordering of the ranks: higher in rank, lower in rank, or the same in rank, i.e. a weak property of the data, and so Kendall's tau can be considered a very robust strategy. For a detailed description of the calculation see Novák et al. (1993); here we present only a symbolic formula. As mentioned above, Kendall's tau is a function of the ranks q_{ji} (the rank of the representative value of the random variable X_i in an ordered sample of the N simulated values used in the j-th simulation, which is equivalent to the integers in the table of random permutations in the LHS method) and p_j (the rank in the ordered sample of the response variable obtained by the j-th run of the simulation process):

\tau_i = \tau(q_{ji}, p_j) , \qquad j = 1, 2, \ldots, N .    (4.1)
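The sensitivity measure (4.1) can be evaluated with any Kendall's tau routine; the Python sketch below (ours, with artificial data in place of real simulation results) uses scipy.stats.kendalltau:

import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
N = 30

# q_ji: ranks (random permutation) used for variable X_i in the LHS table
q_i = rng.permutation(N) + 1
# p_j: ranks of the response from the N simulation runs (illustrative: response
# strongly increasing with X_i, so the sensitivity should come out close to +1)
response = q_i + rng.normal(0.0, 3.0, N)
p = stats.rankdata(response)

tau_i, _ = stats.kendalltau(q_i, p)      # equation (4.1)
print(f"Kendall's tau for X_i: {tau_i:+.2f}")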
REFERENCES
Ayyub, B.M. & Lai, K.L. 1989. Structural Reliability Assessment Using Latin Hypercube
Sampling. In Proc. of ICOSSAR'89, the 5th International Conference on Structural Safety and
Reliability, San Francisco, USA, August 7 - 11, Vol. I, Structural Safety and Reliability: 1177-
1184.
Bažant, Z.P., Bittnar, Z., Jirásek, M. & Mazars, J. 1994. Fracture and damage in quasi-brittle materials. E & FN Spon, an imprint of Chapman & Hall.
Bažant, Z.P. & Novák, D. 2000a. Probabilistic nonlocal theory for quasi-brittle fracture initiation
and size effect. I Theory, and II. Application. Journal of Engineering Mechanics ASCE, 126
(2): 166-185.
Bažant, Z.P. & Novák, D. 2000b. Energetic-statistical size effect in quasi-brittle failure at crack
initiation. ACI Materials Journal, 97 (3): 381-382.
Bažant, Z.P. & Planas, J. 1998. Fracture and size effect in concrete and other quasi-brittle
materials. Boca Raton, Florida, and London: CRC Press.
Bourgund, U. & Bucher, C.G. 1986. A Code for Importance Sampling Procedure Using Design Points - ISPUD - A User's Manual. Inst. Eng. Mech., Innsbruck University, Report No. 8-86.
Brenner, C., E. 1991. Stochastic Finite Element Methods (Literature Review). Institute of
Engineering Mechanics, Rep. No. 35-91 University of Innsbruck.
Bucher, C.G. & Bourgund, U. 1987. Efficient Use of Response Surface Methods. Inst. Eng.
Mech., Innsbruck University, Report No. 9-87.
Bucher, C.G. 1988. Adaptive Sampling - An Iterative Fast Monte - Carlo Procedure. J. Structural
Safety, 5 (2): 119-126.
Carmeliet, J. 1994. On stochastic descriptions for damage evaluation in quasi-brittle materials.
DIANA Computational Mechanics, G.M.A. Kusters and M.A.N. Hendriks (eds.).
Carmeliet, J. & Hens, H. 1994. Probabilistic nonlocal damage model for continua with random
field properties. ASCE Journal of Engineering Mechanics, 120 (10): 2013-2027.
CEB/FIP 1999. Updated model code, FIB, Lausanne, Switzerland – FIB: Manual textbook FIB Bulletin 1, Structural concrete textbook on behavior, design and performance, Vol. 1 (Introduction, design process, materials).
CEN 1993. Eurocode 1. Basis of design and actions on structures. 6th env 1991-1 edition.
CEN 1991. Eurocode 2. Design of concrete structures. Env 1992-1-1 edition.
Cornell 1984. [Reference to be completed.]
Červenka, V. & Pukl, R. 2000. ATENA – Computer Program for Nonlinear Finite Element
Analysis of Reinforced Concrete Structures. Program documentation. Prague, Czech republic:
Červenka Consulting.
Freudenthal, A.M. 1956. Safety and the Probability of Structural Failure. Transactions, ASCE,
121: 1327-1397.
Freudenthal, A.M., Garrelts, J.M. & Shinozuka, M. 1966. The Analysis of Structural Safety.
Journal of the Structural Division, ASCE, 92 (ST1): 267-325.
Grigoriu, M. 1982/1983. Methods for Approximate Reliability Analysis. J. Structural Safety, 1:
155-165.
Gutiérez, M.A. & de Borst, R. 1999. Deterministic and stochastic analysis of size effects and
damage evaluation in quasi-brittle materials. Archive of Applied Mechanics, 69: 655-676.
Hasofer, A.M. & Lind, N.C. 1974. Exact and Invariant Second-Moment Code Format. Journal of
Eng. Mech. Division, ASCE, 100 (EM1): 111-121.
Huntington, D.E. & Lyrintzis, C.S. 1998. Improvements to and limitations of Latin hypercube
sampling. Probabilistic Engineering Mechanics, 13 (4): 245-253.
Iman, R.C. & Conover, W.J. 1982. A Distribution Free Approach to Inducing Rank Correlation Among Input Variables. Communication Statistics, B11: 311-334.
Iman, R.C. & Conover, W.J. 1980. Small Sample Sensitivity Analysis Techniques for Computer
Models with an Application to Risk Assessment. Communications in Statistics: Theory and
Methods, A9 (17): 1749-1842.
Kleiber, M. & Hien, T.D. 1992. The Stochastic Finite Element Method. John Wiley & Sons Ltd.
Laarhoven, P.J. & Aarts, E.H. 1987. Simulated Annealing: Theory and Applications. D. Reidel Publishing Company, Holland.
Li, K.S. & Lumb, P. 1985. Reliability Analysis by Numerical Integration and Curve Fitting. J. Struct. Safety, 3: 29-36.
Madsen, H.O., Krenk, S. & Lind, N.C. 1986. Methods of Structural Safety. Prentice – Hall,
Englewood Cliffs.
Margoldová, J., Červenka, V. & Pukl, R. 1998. Applied brittle analysis. Concrete Engineering
International, 8 (2): 65-69.
Mazars, J. 1982. Probabilistic aspects of mechanical damage in concrete structures. In G. C. Sih
(ed.), Proc., Conf. on Fracture Mech. Technol. Appl. to Mat. Evaluation and Struct. Design,
Melbourne, Australia: 657-666.
McKay, M.D., Conover, W.J. & Beckman, R.J. 1979. A Comparison of Three Methods for
Selecting Values of Input Variables in the Analysis of Output from a Computer Code.
Technometrics, Vol. 21: 239-245.
Mihashi, H. & Izumi, M. 1977. A stochastic theory for concrete fracture. Cement and Concrete
Research, Pergamon Press Vol. 7: 411-422.
Novák, D., Lawanwisut, W. & Bucher, C. 2000. Simulation of random fields based on orthogonal transformation of the covariance matrix and Latin Hypercube Sampling. Proc. of Int. Conf. on Monte Carlo Simulation MC2000, Monte Carlo, Monaco.
Novák, D., Teplý, B. & Keršner, Z. 1998. The role of Latin Hypercube Sampling method in reliability engineering. In Shiraishi, N., Shinozuka, M. & Wen, Y.K. (eds), Proceedings of