
Computer Program
for Statistical, Sensitivity and Probabilistic Analysis

FREET
PROGRAM DOCUMENTATION

Revision 09/2002

Part - 1
Theory

(Draft)


Part 1

Theory

Written by
Drahomír Novák, Břetislav Teplý,
Zbyněk Keršner and Miroslav Vořechovský

Brno, October 2002


CONTENTS

PREFACE

1. RELIABILITY THEORY
   1.1 Introduction
   1.2 Fundamental Concept of Structural Reliability
   1.3 Response and Limit State Function
   1.4 Reliability Index
   1.5 Failure Probability

2. MODELING OF UNCERTAINTIES
   2.1 Random Variables
       2.1.1 Theoretical Models
       2.1.2 User-defined Probability Distribution
   2.2 Statistical Correlation

3. MONTE CARLO SIMULATION
   3.1 Crude Monte Carlo Sampling
   3.2 Latin Hypercube Sampling
       3.2.1 General Remarks
       3.2.2 Idea of Stratification of Probability Distribution Function
       3.2.3 Statistical Correlation

4. PROBABILISTIC ASSESSMENT
   4.1 Statistical Analysis
   4.2 Sensitivity Analysis in Form of Non-parametric Rank-order Correlation
   4.3 Reliability Analysis Based on Curve Fitting

REFERENCES


PREFACE
The aim of this document is to provide the theoretical basis and a description of the numerical methods and approaches used in FREET, the probabilistic module of the ATENA software. The theoretical background of the relevant reliability techniques is provided together with information on their role in reliability engineering. The text uses some of the jargon of probability theory, but because we are presenting a practical tool for engineers we remain terminologically at a fundamental level. First, the main reasons that stimulated the development of a probabilistic module for nonlinear fracture mechanics software are explained.
The properties of many physical systems and/or the input to these systems exhibit complex random fluctuations that cannot be captured and characterized completely by deterministic models. Probabilistic models are needed to quantify the uncertainties of these properties, to develop realistic representations of the output and failure state of these systems and to obtain rational and safe designs. The software ATENA (Červenka & Pukl 2000) represents an efficient tool for nonlinear analysis of reinforced concrete structures taking into account recent theoretical achievements of fracture mechanics. It enables realistic modelling of a structure and estimation of its failure load, including the post-peak behaviour, using state-of-the-art numerical methods of fracture mechanics. However, until 2002 the ATENA software was purely deterministic, meaning that all geometrical, material and load parameters of a computational model had to be fixed to deterministic values. Generally, the material, geometrical and load parameters of nonlinear fracture mechanics models are rather uncertain (random), and modelling these uncertainties of the computational model in a probabilistic way is therefore highly desirable.
The achievements of material science and modelling of concrete would be less important if they did not contribute to everyday design practice and structural reliability. The more complicated the computational model of a structure is, the more difficult the application of reliability analysis at almost any level becomes. Linear elastic analysis enables simple reliability calculations – the last consistent reliability approach in design was the allowable stress method. Recent development has introduced a significant inconsistency: Eurocode 2 (1991) demands a nonlinear analysis using first the mean values and then the design values of the material parameters. No real guarantee and no information on safety can be obtained using the partial safety factor concept as accepted in present design codes. The approach generally fails if the internal forces entering the safety margin (failure criteria) are not proportional to the load level, as is the case in complex nonlinear problems. The more complex (statically indeterminate) the structure is, the less satisfying the inconsistent approach of partial safety factors becomes. This is a quite well-known problem and there is only one straightforward solution: to apply safety factors to the results of statistical nonlinear analysis (failure load, stresses, deflections), not to the input parameters. The general trend is toward a consistent reliability assessment as recommended by Eurocode 1 (1993).
An important phenomenon in quasi-brittle materials is the size effect. The history of the description of the size effect can be seen as a history of two fundamentally different approaches – deterministic and statistical explanations. The first explanation was definitely statistical – it dates back to the pioneering work of Weibull (1939) and many others, mainly mathematicians. The phenomenon that larger specimens usually fracture under a relatively smaller applied load was at that time associated with the statistical theory of extreme values. Later, most researchers focused on the energetic basis of the size effect, and the main achievements were purely deterministic. Let us mention e.g. the book of Bažant & Planas (1998) as an extensive source of information. There are two basic features of size effect phenomena: deterministic and statistical. Researchers used different theories; early works, e.g. Shinozuka (1972), Mihashi & Izumi (1977) and Mazars (1982), considered the uncertainties involved in concrete fracture. Recently there have been attempts to combine the last decade's achievements of both fracture mechanics and reliability engineering, e.g. Carmeliet (1994), Carmeliet & Hens (1994), Gutiérez & de Borst (1999), Bažant & Novák (2000a) and others.
The arguments mentioned above represent the basis for the need to combine efficient reliability techniques with present knowledge in the field of nonlinear fracture mechanics. The remarkable development of computer hardware makes Monte Carlo type numerical simulation of complex nonlinear responses possible. The reasons for a complex reliability treatment of nonlinear fracture mechanics problems can be summarised as follows:
• Modelling of uncertainties (material, load and environmental) in the classical statistical sense as random variables or random processes (fields), with the possibility to use statistical information from real measurements.
• The inconsistency of achieving design safety using partial safety factors – a fundamental problem.
• Size effect phenomena.
The aim of FREET-ATENA basic statistical and reliability nonlinear analysis is to obtain an estimate of the structural response statistics (stresses, deflections, failure load etc.) and/or a sensitivity analysis and an estimate of reliability. The probabilistic procedure can basically be itemised as follows:
• Uncertainties are modelled as random variables described by their probability distribution functions (PDF). The optimal case is when all random parameters are measured and real data exist. Then a statistical assessment of these experimental data (e.g. data on strength of concrete or loading) should be performed, resulting in the selection of the most appropriate PDF (e.g. Gaussian, lognormal, Weibull, etc.). The result of this step is the set of input parameters for the ATENA computational model – random variables described by their mean value, variance and other statistical parameters (generally by their PDF).
• Random input parameters are generated according to their PDF using Monte Carlo type simulation (Latin Hypercube Sampling).
• The generated realisations of the random parameters are then used as inputs for the ATENA computational model. The complex nonlinear solution is performed and the results (response variables) are saved.
• The previous two steps are repeated N times (N is the number of simulations used). At the end of the whole simulation process the resulting set of structural responses is statistically evaluated. The results are the mean value, variance, coefficient of skewness, histograms and the empirical cumulative distribution function of the structural response (a minimal sketch of this loop is given below).
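The following is a minimal sketch of the above loop in Python (NumPy/SciPy). The two input distributions, their parameters and the analytical model function are purely illustrative stand-ins for measured data and for an ATENA run; they are not part of FREET itself:

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
N = 100                                   # number of simulations

# Hypothetical input variables (stand-ins for real, measured data):
f_c  = stats.lognorm(s=0.15, scale=30.0).rvs(N, random_state=rng)    # concrete strength, MPa
load = stats.gumbel_r(loc=12.0, scale=1.5).rvs(N, random_state=rng)  # load, kN

def model(f_c, load):
    # Placeholder for the ATENA nonlinear solution; a toy response function.
    return 0.8 * f_c - load

response = model(f_c, load)               # one response value per simulation

# Statistical evaluation of the resulting set of responses
print("mean     :", response.mean())
print("std. dev.:", response.std(ddof=1))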
The fundamental techniques needed to fulfil the aim mentioned above are implemented in the probabilistic module FREET and described in this text. Section 1 introduces the reader to the elementary concepts of classical reliability theory. Section 2 is devoted to the description of uncertainties using mathematical models. The kernel of the whole probabilistic approach – Monte Carlo type simulation – is described in section 3, and how the results are processed and interpreted is the subject of section 4.


1. RELIABILITY THEORY

1.1 Introduction
The aim of statistical and reliability analysis is mainly the estimation of the statistical parameters of the structural response and/or the theoretical failure probability. Pure Monte Carlo simulation cannot be applied to time-consuming problems, as it requires a large number of simulations (repetitive calculations of the structural response). Historically, this obstacle was partially overcome by approximate techniques suggested by many authors, e.g. Grigoriu (1982/1983), Hasofer & Lind (1974), Li & Lumb (1985), Madsen et al. (1986). Generally, the problematic feature of these techniques is their (in)accuracy. Research then focused on the development of advanced simulation techniques which concentrate the simulation in the failure region (Bourgund & Bucher 1986, Bucher 1988, Schuëller & Stix 1987, Schuëller et al. 1989). In spite of the fact that they usually require a smaller number of simulations compared to pure Monte Carlo (thousands), their application to nonlinear fracture mechanics problems can be critical and still almost impossible. But there are some feasible alternatives: Latin hypercube sampling (McKay et al. 1979, Ayyub & Lai 1989, Novák et al. 1998) and response surface methodologies (Bucher & Bourgund 1987).
The term stochastic or probabilistic finite element method (SFEM or PFEM) is used to refer to a finite element method which accounts for uncertainties in the geometry or material properties of a structure, as well as in the applied loads. Such uncertainties are usually spatially distributed over the region of the structure and should be modelled as random fields. Among the many works on SFEM produced in the last two decades, let us mention e.g. Kleiber & Hien (1992), Vanmarcke et al. (1986) and Yamazaki et al. (1988). The interest in this area has grown from the perception that in some structures the response is strongly sensitive to the material properties, and that even small uncertainties in these characteristics can adversely affect the structural reliability. This is valid especially in the case of highly nonlinear problems of nonlinear fracture mechanics.

1.2 Fundamental Concept of Structural Reliability


In general, structural design consists of proportioning the elements of a structure such that it satisfies various criteria of safety, serviceability and durability under the action of loads. In other words, the structure should be designed such that it has a higher strength or resistance than the effect caused by the loads. A schematic representation of the failure probability evaluation is shown in Fig. 1.1 by considering two variables (one relating to the load S on the structure and the other to the resistance R of the structure). Both S and R are random in nature; their randomness is characterized by the corresponding probability density functions fS(s) and fR(r), respectively. Fig. 1.1 also identifies the deterministic (nominal) values of these parameters, SN and RN, used in the conventional safety factor-based approach. The area of overlap between the two curves (the shaded region) provides a qualitative measure of the probability of failure. This area of overlap depends on three factors:

7
FREET Program Documentation

• The relative positions of the two curves: As the distance between the two curves increases, the probability of failure decreases. The position of the curves may be represented by the means (µS and µR) of the two variables.
• The dispersion of the two curves: If the two curves are narrow, then the area of overlap and
the probability of failure are small. The dispersion may be characterized by the standard
deviations (σS and σR) of the two variables.
• The shape of the two curves: The shapes are represented by the probability density functions
fS(s) and fR(r).

Fig. 1.1: Failure probability evaluation.
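For the special case where both R and S are independent and normally distributed (an assumption made here only for illustration; it is not required by FREET), the failure probability follows in closed form. A short check in Python with assumed numerical values:

from scipy.stats import norm

# Illustrative (assumed) values of resistance R and load effect S, both normal
mu_R, sigma_R = 30.0, 3.0
mu_S, sigma_S = 20.0, 4.0

# Z = R - S is then normal as well, so p_f = P(Z <= 0) has a closed form.
beta = (mu_R - mu_S) / (sigma_R**2 + sigma_S**2) ** 0.5
p_f = norm.cdf(-beta)
print(beta, p_f)        # ~2.0 and ~0.0228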

1.3 Response and Limit State Function


The classical reliability theory introduced the basic concept of structural reliability more formally in the form of a response variable, or a safety margin (in the case that the function expresses a failure condition), as a function of the basic random variables X = X_1, X_2, ..., X_n:

Z = g(X) = g(X_1, X_2, ..., X_n)    (1.1)

where g(X) (the computational model – a nonlinear fracture mechanics model in our case) represents the functional relationship between the elements of vector X (e.g. Freudenthal 1956, Freudenthal et al. 1966, Madsen et al. 1986, Schneider 1996). The elements of vector X are geometrical and material parameters, loads, environmental factors etc. – generally uncertain quantities (random variables or random fields). These quantities can also be statistically correlated.
In the case that Z is a safety margin, g(·) is called the limit state function or performance function, and it can usually be formulated by comparing the actual load with the failure load. The structure is considered to be safe if


g(X) = g(X_1, X_2, ..., X_n) ≥ 0 .    (1.2)
The performance of the system and its components is described by considering a number of limit states. A limit state function can be an explicit or implicit function of the basic random variables, and it can have a simple or a rather complicated form. Usually the convention is made that it takes a negative value if a failure event occurs. Therefore the failure event is defined as the region where Z ≤ 0 and the survival event as the region where Z > 0. Two basic classes of failure criteria can be distinguished: structural collapse and loss of serviceability.

1.4 Reliability index


Reliability analysis methods employing a reliability index or safety index take into account the second moment statistics (means and variances) of the random variables. Cornell (1994) suggested using the distance from the expectation of the safety margin to the limit state itself, measured in standard deviations, as a reliability measure. This yields the reliability index:

β = µ_Z / σ_Z ,    (1.3)
where µZ and σZ are the mean value and the standard deviation of the safety margin Z. In this case β is actually the reciprocal of the coefficient of variation of the variable Z. The reliability index can be interpreted geometrically as the minimum distance from the limit state function g(X) to the origin. Hasofer & Lind (1974) used this idea for a generalised definition of the reliability index. They proposed to use, as the reliability measure, the minimum distance from the limit state function (usually nonlinear) to the origin in the uncorrelated normalized space. Such a generalised reliability index is given by:

β = min √(uᵀu)  subject to  g(X) = 0 ,    (1.4)

where β is the distance in the standard normal space U and g(X) = 0 is the limit state surface. The point x* at which β reaches the minimum is called the design point.
The reliability index is a convenient measure for expressing reliability. The estimation of Cornell's reliability index is rather simple, as it only requires the estimation of the basic statistical characteristics of the safety margin. This task can be solved using Monte Carlo type simulation and will be described in section 4.

1.5 Failure probability


The main aim of reliability analysis is the estimation of reliability using a probability measure called the theoretical failure probability, defined as

p_f = P(Z ≤ 0) .    (1.5)

More formally, the theoretical failure probability, as a measure of unreliability, is defined as

p_f = \int_{D_f} f(X_1, X_2, \ldots, X_n)\, dX_1\, dX_2 \cdots dX_n ,    (1.6)


where D_f represents the failure region where g(X) ≤ 0 (the integration is performed over this region) and f(X_1, X_2, ..., X_n) is the joint probability density function of the random variables X = X_1, X_2, ..., X_n.
The equality Z = 0 divides the multidimensional space of the basic random variables X = X_1, X_2, ..., X_n into a safe region and a failure region. Explicit calculation of the integral (1.6) is generally impossible; therefore the application of a Monte Carlo type simulation technique is a simple and in many cases feasible alternative for estimating the failure probability (e.g. Rubinstein 1967, Schreider 1967, Schuëller & Stix 1987 and others).
The First Order Reliability Method (FORM) was initially proposed by Hasofer & Lind (1974). In FORM, a linear approximation of the limit state surface in the uncorrelated standardized Gaussian space is used to estimate the probability of failure. For this purpose it is necessary to transform the basic variables. The distance from the design point of the transformed limit state function to the origin is called the reliability index β.
Note that the design point – the point on the limit state surface with the minimum distance to the origin in standard normal space – is considered to be important. It is also the point of maximum likelihood if the basic variables are normally distributed. This point can be obtained by solving the optimization problem expressed in (1.4). The minimum distance β is known as the reliability index. It can be shown that the probability of failure is approximately given by:

p_f = 1 − Φ(β) = Φ(−β) ,    (1.7)

where Φ denotes the standardized Gaussian distribution function. In the case of a linear limit state function and normally distributed basic variables no transformations are necessary and equation (1.7) yields the exact failure probability.
In spite of the fact that the calculation of the failure probability using a reliability index (according to Cornell or Hasofer & Lind) does not belong to the category of very accurate reliability techniques (e.g. Bourgund & Bucher 1986), it represents a feasible alternative in many practical cases. The relationship between the reliability index and the failure probability is illustrated in Fig. 1.2 for both the original safety margin Z and the standardized safety margin Z_S = (Z − µ_Z) / σ_Z.


Fig. 1.2: Failure probability and reliability index: a) safety margin in the original space, b) in the standardized space.
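A minimal numerical sketch of the Cornell index (1.3) and the approximation (1.7) in Python follows. The safety margin below is an assumed toy function of two illustrative random inputs, and Φ(−β) is only approximate here because Z is not exactly normal:

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Assumed toy safety margin Z = g(X) = R - S with non-normal inputs
R = rng.lognormal(mean=np.log(30.0), sigma=0.10, size=100_000)
S = rng.gumbel(loc=20.0, scale=1.5, size=100_000)
Z = R - S

mu_Z, sigma_Z = Z.mean(), Z.std(ddof=1)
beta = mu_Z / sigma_Z            # Cornell's reliability index, eq. (1.3)
p_f = norm.cdf(-beta)            # eq. (1.7), exact only for a normal safety margin
print(beta, p_f)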


2. MODELING OF UNCERTAINTIES

2.1 Random Variables

2.1.1 Theoretical Models


As a first step the uncertainties are modelled as random variables described by their probability distribution functions (PDF). The optimal case is when all random parameters are measured and real statistical data exist. Then a statistical assessment of such experimental data (e.g. data on strength of concrete or loading) should be performed, resulting in the selection of the most appropriate PDF (e.g. Gaussian, lognormal, Gumbel, Weibull, etc.). There is also the possibility to work directly with measured histograms (raw data) without a mathematical model (bounded histograms). The assessment of the data can be done using probability papers or standard statistical tests (Kolmogorov-Smirnov or Chi-square test). In the case of input parameters with no experimental evidence, professional judgement should be applied.
The result of this step is the set of input parameters for the ATENA computational model – random variables described by their mean value, variance and other statistical parameters (generally by their PDF).
The random input parameters are generated according to their PDF using some kind of Monte Carlo simulation. The stratified sampling technique called Latin Hypercube Sampling is especially suitable (see 3.2), because it requires quite a small number of simulations to obtain accurate estimates.

Summary of the probability density functions, distribution functions and parameters of the probability distributions used in FREET:

Deterministic

Normal
f_x(x) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left[-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2\right],  -\infty < x < \infty
PAR1 = \mu,  PAR2 = \sigma
F_x(x) = \Phi\left(\frac{x-\mu}{\sigma}\right)

Uniform
f_x(x) = \frac{1}{b-a},  a \le x \le b
PAR1 = a = \bar{x} - \frac{\sigma\sqrt{12}}{2},  PAR2 = b = a + \sigma\sqrt{12},  \sigma = \frac{2(\bar{x}-a)}{\sqrt{12}}
F_x(x) = \frac{x-a}{b-a}

Shifted Exponential
f_x(x) = \lambda e^{-\lambda(x-x_0)},  x_0 \le x
PAR1 = \lambda = \frac{1}{\sigma},  PAR2 = x_0 = \bar{x} - \frac{1}{\lambda}
F_x(x) = 1 - e^{-\lambda(x-x_0)}

Shifted Rayleigh
f_x(x) = \frac{x-x_0}{\alpha^2} \exp\left[-\frac{1}{2}\left(\frac{x-x_0}{\alpha}\right)^2\right],  x_0 \le x
PAR1 = \alpha = \frac{\sigma_x}{\sqrt{2-\pi/2}},  PAR2 = x_0 = \bar{x} - \alpha\sqrt{\frac{\pi}{2}}
F_x(x) = 1 - \exp\left[-\frac{1}{2}\left(\frac{x-x_0}{\alpha}\right)^2\right]

Gumbel (Type I – Largest Values)
f_x(x) = \alpha e^{-\alpha(x-u)} \exp\left[-e^{-\alpha(x-u)}\right]
PAR1 = \alpha = \frac{\pi}{\sigma\sqrt{6}},  \alpha \ge 0,  PAR2 = u = m - \frac{0.577}{\alpha}
F_x(x) = \exp\left[-e^{-\alpha(x-u)}\right]

Gumbel (Type I – Smallest Values)
f_x(x) = \alpha e^{\alpha(x-u)} \exp\left[-e^{\alpha(x-u)}\right]
PAR1 = \alpha = \frac{\pi}{\sigma\sqrt{6}},  \alpha \ge 0,  PAR2 = u = m + \frac{0.577}{\alpha}
F_x(x) = 1 - \exp\left[-e^{\alpha(x-u)}\right]

Lognormal
f_x(x) = \frac{1}{x\,\sigma_{\ln x}\sqrt{2\pi}} \exp\left[-\frac{1}{2}\left(\frac{1}{\sigma_{\ln x}}\ln\frac{x}{m_{\ln}}\right)^2\right]
PAR1 = m_{\ln},  PAR2 = \sigma_{\ln x} = \sqrt{\ln\left(\frac{\sigma^2}{m^2}+1\right)}
F_x(x) = \Phi\left(\frac{\ln x - \ln m_{\ln}}{\sigma_{\ln x}}\right)

Gamma
f_x(x) = \frac{\upsilon\,(\upsilon x)^{k-1} e^{-\upsilon x}}{\Gamma(k)},  x \ge 0,\ k > 0,\ \upsilon > 0
PAR1 = \upsilon = \frac{m}{\sigma^2},  PAR2 = k = \frac{m^2}{\sigma^2}
F_x(x) = \int_0^x f_x(t)\,dt = \frac{\Gamma(k, \upsilon x)}{\Gamma(k)}

Fréchet (Type II – Largest Values)
f_x(x) = \frac{k}{u}\left(\frac{u}{x}\right)^{k+1} \exp\left[-\left(\frac{u}{x}\right)^k\right],  x \ge 0
m = u\,\Gamma\left(1-\frac{1}{k}\right),  \sigma = u\left[\Gamma\left(1-\frac{2}{k}\right)-\Gamma^2\left(1-\frac{1}{k}\right)\right]^{1/2}
F_x(x) = \exp\left[-\left(\frac{u}{x}\right)^k\right]

Weibull (Type III – Smallest Values)
f_x(x) = \frac{k}{w-\varepsilon}\left(\frac{x-\varepsilon}{w-\varepsilon}\right)^{k-1} \exp\left[-\left(\frac{x-\varepsilon}{w-\varepsilon}\right)^k\right],  x > \varepsilon,\ k > 0
F_x(x) = 1 - \exp\left[-\left(\frac{x-\varepsilon}{w-\varepsilon}\right)^k\right]
m = \varepsilon + (w-\varepsilon)\,\Gamma\left(1+\frac{1}{k}\right),  \sigma = (w-\varepsilon)\left[\Gamma\left(1+\frac{2}{k}\right)-\Gamma^2\left(1+\frac{1}{k}\right)\right]^{1/2}
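As a practical note, most of these parametrizations can be mapped onto scipy.stats objects. The conversions below are a sketch based on the PAR1/PAR2 relations above, not part of FREET; in particular, the three-parameter Weibull helper assumes the shape k and lower bound ε are given, whereas in general both would be fitted from the data:

import numpy as np
from scipy import stats
from scipy.special import gamma

def freet_normal(mean, std):
    return stats.norm(loc=mean, scale=std)           # PAR1 = mu, PAR2 = sigma

def freet_lognormal(mean, std):
    s = np.sqrt(np.log(1.0 + (std / mean) ** 2))     # sigma_lnX
    return stats.lognorm(s=s, scale=mean / np.exp(0.5 * s**2))   # scale = median m_ln

def freet_gumbel_max(mean, std):
    alpha = np.pi / (std * np.sqrt(6.0))             # PAR1
    u = mean - 0.5772 / alpha                        # PAR2
    return stats.gumbel_r(loc=u, scale=1.0 / alpha)

def freet_weibull_min(mean, k, eps=0.0):
    # w recovered from m = eps + (w - eps) * Gamma(1 + 1/k); shape k assumed known
    w = eps + (mean - eps) / gamma(1.0 + 1.0 / k)
    return stats.weibull_min(c=k, loc=eps, scale=w - eps)

d = freet_lognormal(30.0, 4.5)
print(d.mean(), d.std())                             # reproduces 30.0 and 4.5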

2.1.2 User-defined Probability Distribution

2.2 Statistical Correlation


3. MONTE CARLO SIMULATION

3.1 Crude Monte Carlo sampling


The Monte Carlo simulation technique is a well-known and widely used tool in structural reliability analysis. Basically it consists of N simulations (or repetitions, runs) of response or limit state function evaluations with different trials of the input vector X. The numerical realizations are obtained as follows:
• In the j-th simulation a pseudo-random value u_{i,j} ∈ (0, 1) is generated for the i-th variable.
• The realization of the variable X_i is then obtained using the inverse transformation of the cumulative probability distribution function (CPDF)

x_{i,j} = Φ_{X_i}^{-1}(u_{i,j}) ,    (3.1)

where Φ_{X_i}(·) is the cumulative distribution function of X_i – see Fig. 3.1. Note that different variables possess different CPDFs. The previous steps are performed for all input random variables X = X_1, X_2, ..., X_i, ..., X_n.
• In the j-th simulation the function z_j = g(X) is evaluated using the input variable realizations related to the j-th simulation. This process is repeated for all N simulations (j = 1, 2, ..., N).
• The final step is either the statistical evaluation of the set Z = z_1, z_2, ..., z_j, ..., z_N, yielding the statistical moments of Z, and/or the assessment of the probability p_f as

p_f ≈ n(g ≤ 0) / N .    (3.2)


Fig. 3.1: Sketch of the inverse transformation for random variable X_i.

Using mathematical formalism, consider the indicator function 1[g(X)] having only two values:

1[g] = 1  if  g(X) ≤ 0 ,
1[g] = 0  if  g(X) > 0 .    (3.3)

Then we have

p_f = \int_{\Omega_X} 1[g]\, f_X(X)\, dX ,    (3.4)

and numerically it can be estimated as

E[p_f] = \frac{1}{N} \sum_{i=1}^{N} 1[g] .    (3.5)

For the coefficient of variation of the failure probability it approximately holds

COV_{p_f} = \frac{1}{\sqrt{N \cdot p_f}} .    (3.6)

Evidently a great number of simulations has to be performed when a small probability is expected. When, for example, COV_{p_f} = 0.1 is required and p_f is estimated to be around 10^{-6}, then N = 10^8 simulations have to be performed.
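A compact crude Monte Carlo sketch of eqs. (3.1), (3.2) and (3.6) in Python; the limit state function and the two input distributions are illustrative assumptions only:

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
N = 1_000_000

# Inverse-transform sampling, eq. (3.1): x = F^{-1}(u) with u ~ U(0, 1).
u1, u2 = rng.random(N), rng.random(N)
R = stats.lognorm(s=0.10, scale=30.0).ppf(u1)     # assumed resistance
S = stats.gumbel_r(loc=20.0, scale=2.0).ppf(u2)   # assumed load effect

def g(R, S):
    # Toy limit state function (safety margin), not an ATENA model.
    return R - S

fails = g(R, S) <= 0
p_f = fails.mean()                    # eqs. (3.2) / (3.5)
cov = 1.0 / np.sqrt(N * p_f)          # eq. (3.6)
print(p_f, cov)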


3.2 Latin Hypercube Sampling

3.2.1 General remarks


The Latin Hypercube Sampling (LHS) simulation technique belongs to the category of advanced simulation methods (Iman & Conover 1980, Novák & Kijawatworawet 1990). It is a special type of Monte Carlo numerical simulation which uses stratification of the theoretical probability distribution functions of the input random variables. LHS is very efficient for the estimation of the first two or three statistical moments of the structural response, e.g. (Bažant et al. 1985, Novák & Teplý 1991). It requires a relatively small number of simulations – repetitive calculations of the structural response resulting from the adopted computational model (tens or hundreds).
The utilization of the LHS strategy in reliability analysis can be rather extensive; it is not restricted to the estimation of the statistical parameters of the structural response. The topics in which LHS can be applied are outlined as follows (for more details see Novák et al. 1998):
• estimation of statistical parameters of a structural response,
• estimation of the theoretical failure probability,
• sensitivity analysis,
• response approximation,
• preliminary „rough“ optimization,
• reliability-based optimization.
There are two stages:
• samples for each variable are strategically chosen to represent the variable's CPDF,
• samples are reordered to match the required statistical correlations among the variables.

3.2.2 Idea of Stratification of Probability Distribution Function

Basic Latin Hypercube Sampling strategy:


The probability distribution functions of all random variables are divided into N equivalent intervals (N is the number of simulations); the centroids of the intervals are then used in the simulation process. This means that the range of the probability distribution function Φ(X_i) of each random variable X_i is divided into N intervals of equal probability 1/N, see Fig. 3.2. The samples are chosen directly from the CPDF, i.e. by using the formula

x_{i,k} = Φ_i^{-1}((k − 0.5) / N) ,    (3.7)

where x_{i,k} is the k-th sample of the i-th variable X_i and Φ_i^{-1} is the inverse CPDF of X_i.

Fig. 3.2: Division of the variable domain into intervals.

The representative values of the variables are selected randomly, based on random permutations of the integers 1, 2, ..., j, ..., N. Every interval of each variable must be used only once during the simulation. Based on this precondition a table of random permutations can be used conveniently; each row of such a table belongs to a specific simulation and each column corresponds to one of the input random variables. Such a table of random permutations has dimension N × n; an example for N = 10 and n = 6 is presented in Table 1.

          variable (n)
 sim.   1    2    3    4    5    6
  1     9    1   10    4    1    1
  2     4    5    3    7   10    2
  3     8    3    9   10    8    5
  4     6    2    8    9    3   10
  5    10    4    4    8    9    6
  6     7   10    5    1    2    4
  7     5    9    6    5    4    7
  8     2    6    7    2    6    3
  9     1    7    1    6    7    8
 10     3    8    2    3    5    9

Table 1: Example of a table of random permutations.


It should be noted that this approach gives samples whose mean is close to the desired one, while the sample variances might differ significantly.

Improved sample generation:


In order to better capture the means and variances of the simulated variables, the probabilistic mean of each interval should be chosen (Huntington & Lyrintzis 1998) as

x_{i,k} = N \int_{y_{i,k-1}}^{y_{i,k}} x\, f_i(x)\, dx ,    (3.8)

where f_i is the probability density function and the limits of integration are given by

y_{i,k} = Φ_i^{-1}(k / N) .    (3.9)

The integral above may not always be solvable in closed form. However, the extra effort of performing the numerical integration is justified by the statistical accuracy gained, as was shown by Huntington & Lyrintzis (1998).
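A short sketch of both sampling rules in Python follows; the marginal distribution and its parameters are illustrative assumptions, the quadrature bounds are clipped in the tails, and the random ordering of the interval representatives corresponds to one column of Table 1:

import numpy as np
from scipy import stats
from scipy.integrate import quad

rng = np.random.default_rng(3)
N = 10                                    # number of simulations

# Assumed marginal distribution, for illustration only.
dist = stats.lognorm(s=0.15, scale=30.0)
k = np.arange(1, N + 1)

# Basic LHS: centroids of the N equal-probability intervals, eq. (3.7).
centroids = dist.ppf((k - 0.5) / N)

# Improved LHS: probabilistic mean of each interval, eqs. (3.8)-(3.9).
edges = dist.ppf(np.clip(np.arange(N + 1) / N, 1e-10, 1.0 - 1e-10))
means = np.array([N * quad(lambda x: x * dist.pdf(x), a, b)[0]
                  for a, b in zip(edges[:-1], edges[1:])])

# Random ordering of the representatives (one column of the permutation table).
sample = means[rng.permutation(N)]

# The interval means reproduce the true standard deviation more closely.
print(centroids.std(ddof=1), means.std(ddof=1), dist.std())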

3.2.3 Statistical Correlation


The N_Sim samples (where N_Sim is the number of simulations planned for each random variable X_i) are chosen from the cumulative distribution function (CDF) as described above. In the following we presume the LHS methodology for sampling. Table 2 represents the sampling scheme, where the simulation numbers are in rows, the columns are related to the random variables, and N_V is the number of input variables.

Simulation     Var. 1       Var. 2       Var. 3       …    Var. NV
    1          x_{1,1}      x_{1,2}      x_{1,3}      …    x_{1,NV}
    2          x_{2,1}      x_{2,2}      x_{2,3}      …    x_{2,NV}
    …          …            …            …            …    …
  NSim         x_{NSim,1}   x_{NSim,2}   x_{NSim,3}   …    x_{NSim,NV}

Table 2: Sampling scheme for NSim deterministic calculations of g(X).


There are generally two problems related to statistical correlation. First, during sampling an undesired correlation can be introduced between the random variables (columns in Table 2). For example, instead of a correlation coefficient of zero for uncorrelated random variables, an undesired correlation of e.g. 0.6 can be generated. This can happen especially in the case of a very small number of simulations (tens), where the number of combinations is rather limited. The second task is to introduce a prescribed statistical correlation between the random variables, defined by a correlation matrix. The columns in Table 2 should be rearranged in such a way as to fulfil these two requirements: to diminish the undesired random correlation and to introduce the prescribed correlation. The efficiency of the LHS technique was first shown by McKay et al. (1979), but only for uncorrelated random variables. A first technique for the generation of correlated random variables was proposed by Iman and Conover (1982). The authors also showed an alternative to diminish the undesired random correlation. The technique is based on iterative updating of the sampling matrix; a Cholesky decomposition of the correlation matrix has to be applied. The technique can result in a very low correlation coefficient (in absolute value) when generating uncorrelated random variables. But Huntington and Lyrintzis (1998) found that the approach tends to converge to an ordering which still gives significant correlation errors between some variables. The scheme has more difficulties when simulating correlated variables. The correlation procedure can be performed only once; there is no way to iterate it and improve the result. These obstacles stimulated the authors to propose a so-called single-switch-optimized ordering scheme. Their approach is based on iteratively switching pairs of samples in Table 2. The authors showed that their technique clearly performs well enough, but it may still converge to a non-optimum ordering. A different method is needed for the simulation of both uncorrelated and correlated random variables. Such a method should be efficient enough: reliable, robust and fast.
Note that the exact best result would be obtained only if all possible combinations of ranks for each column (variable) in Table 1 were treated. It would be necessary to try an extremely large number of rank combinations, (N_Sim!)^{N_V − 1}. It is clear that this brute-force approach is hardly applicable in spite of the fast development of computer hardware. Note that we leave aside the concept of sample selection from the N-dimensional joint PDF (with different marginal components) and a prescribed covariance structure (correlation matrix).
The imposition of the prescribed correlation matrix on the sampling scheme can be understood as an optimization problem: the difference between the prescribed correlation matrix K and the generated correlation matrix S should be as small as possible. A suitable measure of the quality of the overall statistical properties can be introduced, e.g. the maximal difference of the correlation coefficients between the matrices, E_max, or a norm which takes into account the deviations of all correlation coefficients:

E_max = \max_{1 \le i < j \le N_V} |S_{i,j} − K_{i,j}| ,   E_overall = \sqrt{ \sum_{i=1}^{N_V-1} \sum_{j=i+1}^{N_V} (S_{i,j} − K_{i,j})^2 }    (3.10)

The norm E has to be minimized; from the point of view of the definition of the optimization problem, the objective function is E and the design variables are related to the ordering in the sampling scheme (Table 2). It is well known that deterministic optimization techniques and simple stochastic optimization approaches can very often fail to find the global minimum. Such a technique gets stuck in some local minimum and then there is no chance to escape from it and find the global minimum. It can be intuitively predicted that in our problem we are definitely facing a problem with multiple local minima. Therefore we need to use a stochastic optimization method which works with some probability of escaping from a local minimum. The simplest form is the two-member evolution strategy, which works in two steps: mutation and selection.
Step 1 (mutation): A new arrangement of the random permutations matrix X is obtained using random changes of ranks; one change is applied to one random variable, and the change is performed randomly. The objective function (the norm E) can then be calculated using the newly obtained correlation matrix – it is usually called the "offspring". The norm E calculated using the former arrangement is called the "parent".
Step 2 (selection): The selection chooses the better of the two norms, "parent" or "offspring", to survive: for the new generation (permutation table arrangement) the best individual (table arrangement) has to give a value of the objective function (norm E) smaller than before.
This approach has been intensively tested on a number of examples. It was observed that the method in most cases could not capture the global minimum. It got stuck in a local minimum and there was no chance to escape from it, as only an improvement of the norm E resulted in acceptance of the "offspring".
The step "selection" can be improved by the Simulated Annealing (SA) approach, a technique which is very robust with respect to the starting point (the initial arrangement of the permutation table). SA is an optimization algorithm based on randomization techniques and incorporates aspects of iterative improvement algorithms. Basically it is based on the Boltzmann probability distribution:

Pr(E) ≈ \exp\left(\frac{-\Delta E}{k_b \cdot T}\right) ,    (3.11)

where ΔE is the difference of the norms E before and after a random change. This probability distribution expresses the concept that a system in thermal equilibrium at temperature T has its energy probabilistically distributed among all different energy states ΔE. The Boltzmann constant k_b relates the temperature and the energy of the system. Even at low temperatures there is a chance (although a very small one) of the system being locally in a high energy state. Therefore there is a corresponding possibility for the system to move out of a local energy minimum in favor of finding a better minimum (escape from a local minimum). There are two alternatives in step 2 (selection). Firstly, the new arrangement (offspring) results in a decrease of the norm E; naturally the offspring is accepted for the new generation. Secondly, the offspring does not decrease the norm E; such an offspring is accepted with some probability (3.11). This probability changes as the temperature changes. As a result, there is a much higher probability that the global minimum is found, in comparison with deterministic methods and simple evolution strategies.
In our case k_b can be considered to be one. In the classical application of the SA approach to optimization there is one problem: how to set the initial temperature? Usually it has to be chosen heuristically. But our problem is constrained: acceptable elements of the correlation matrix are always from the interval <-1; 1>. Based on this fact the maximum of the norm (3.10) can be estimated using the prescribed matrix and a hypothetically "most remote" matrix (unit correlation coefficients, plus or minus). This represents a significant advantage: the heuristic estimation of the initial temperature is avoided; the estimate can be obtained without a guess by the user and without a "trial and error" procedure. The initial temperature then has to be decreased step by step, e.g. using a reduction factor f_T after a constant number of iterations (e.g. a thousand): T_{i+1} = T_i · f_T. A simple choice is e.g. f_T = 0.95; note that more sophisticated cooling schedules are known in SA theory (Otten and Ginneken 1989, Laarhoven and Aarts 1987).
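A simplified sketch of this annealing step in Python is given below. For brevity it swaps sample values directly and monitors the Pearson estimate of the correlation matrix, whereas FREET operates on rank permutations (Table 1) and rank correlation; the target matrix, sample size and the fixed initial temperature t0 are illustrative assumptions:

import numpy as np

rng = np.random.default_rng(4)

def corr_error(x, K):
    # Overall norm E, eq. (3.10), between the actual and the target correlation.
    S = np.corrcoef(x, rowvar=False)
    iu = np.triu_indices_from(K, k=1)
    return np.sqrt(np.sum((S[iu] - K[iu]) ** 2))

def anneal_ordering(x, K, t0, f_t=0.95, sweeps=50, trials_per_sweep=200):
    x = x.copy()
    n_sim, n_var = x.shape
    e, t = corr_error(x, K), t0
    for _ in range(sweeps):
        for _ in range(trials_per_sweep):
            j = rng.integers(n_var)                     # pick one variable
            a, b = rng.choice(n_sim, 2, replace=False)
            x[[a, b], j] = x[[b, a], j]                 # mutation: swap two samples
            e_new = corr_error(x, K)
            if e_new <= e or rng.random() < np.exp(-(e_new - e) / t):
                e = e_new                               # accept the offspring
            else:
                x[[a, b], j] = x[[b, a], j]             # reject: undo the swap
        t *= f_t                                        # cooling schedule
    return x, e

# Usage sketch: stand-in LHS columns and an assumed 3x3 target matrix K.
K = np.array([[1.0, 0.5, 0.0],
              [0.5, 1.0, -0.3],
              [0.0, -0.3, 1.0]])
lhs = rng.standard_normal((64, 3))
ordered, err = anneal_ordering(lhs, K, t0=1.0)
print(err)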


In order to illustrate the efficiency of the proposed technique, consider an example of a correlation matrix which corresponds to the properties of a concrete. The properties are described by 7 random variables; a parametric study of this example, with emphasis on the influence of the number of simulations, is given in Vořechovský et al. 2002. It can be seen that as the number of simulations increases, the correlation matrix gets closer to the target one. Using a standard PC, correlating with SA took about one second. Fig. 3.3 shows the decrease of the norm E during the SA process. Such a figure is typical and should be monitored.

Fig. 3.3: The norm E (error) vs. the number of random changes (rank switches).

In real applications of a simulation technique in engineering (e.g. LHS), statistical correlation very often represents a weak part of the a priori assumptions. Because of this poor knowledge, a prescribed correlation matrix on input can be non-positive definite. The user may then have difficulties updating the correlation coefficients in order to make the matrix positive definite. The example presented here demonstrates that when a non-positive definite matrix is given on input, SA can work with it, and the resulting correlation matrix is as close as possible to the originally prescribed matrix while the dominant constraint (positive definiteness) is satisfied automatically. Consider a very unrealistic, simple case of statistical correlation for three random variables A, B and C according to the matrix K (columns and rows correspond to the variables A, B, C):

 1 0.9 0.9   1 0.499 0.499 


     0.401
K = 1 −0.9  , S1 =  1 −0.499   0.695 
 symm 1   symm 1   
 
 1 0.311 0.311  1 0.236 0.236 
   0.589     0.644 
S2 =  1 1 -0.806   0.884  , S3 =  1 1 -0.888   0.947 
1 10 1    1 100 1   
  

The correlation matrix K is obviously not positive definite. Strong positive statistical correlation is required between variables (A, B) and variables (A, C), but strong negative correlation between variables (B, C). It is clear that only a compromise solution is possible. The method arrived at such a compromise solution, S1, without any problem (the number of simulations N_Sim was high enough to avoid a limitation in the number of rank combinations). The final values of the norms are given on the right-hand side: the first value corresponds to the norm E_max (3.10) and the second to the overall norm E_overall (3.10); for S1, for example, E_max = |0.499 − 0.9| = 0.401 and E_overall = √(3 · 0.401²) ≈ 0.695. This feature of the method can be accepted and interpreted as an advantage. Practical reliability problems with non-positive definite input do exist (lack of knowledge), and such input represents a limitation when using some other methods (Cholesky decomposition of the prescribed correlation matrix).
In real applications there can be greater confidence in one correlation coefficient (good data) and smaller confidence in another (just an estimate). A solution to this problem is a weighted calculation of both norms. For example, the norm E_overall can be modified in this way:

E_overall = \sqrt{ \sum_{i=1}^{N_V-1} \sum_{j=i+1}^{N_V} w_{i,j} (S_{i,j} − K_{i,j})^2 }    (3.12)

where w_{i,j} is the weight of a certain correlation coefficient. Several examples of weight choices and the resulting correlation matrices (with both norms) are given above. The resulting matrices S2 and S3 illustrate the significance of the proportions among the weights. The weights are given in the lower triangles and the matrix K is targeted again; the accentuated member is the weight of the (B, C) coefficient, 10 in S2 and 100 in S3.


4. PROBABILISTIC ASSESSMENT

4.1 Statistical Analysis


The aim of the statistical analysis is to obtain an estimate of the statistical parameters of the structural response (stresses, deflections, failure load etc.) in the form of statistical moments, thus assessing the scatter of the response variable.
At the end of the whole simulation process the resulting set of structural responses is statistically evaluated. The results are: the mean value, variance, possibly the skewness, the PDF of the structural response (e.g. the deflection of a bridge at a selected point, the failure load etc.), histograms and the empirical cumulative distribution function.
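A minimal sketch of this evaluation step in Python; the response values below are randomly generated stand-ins for the N saved ATENA results:

import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
# Stand-in for the set of N structural responses (e.g. failure loads in kN).
response = rng.lognormal(np.log(110.0), 0.08, 64)

print("mean    :", response.mean())
print("variance:", response.var(ddof=1))
print("skewness:", stats.skew(response))

# Histogram and empirical cumulative distribution function of the response.
counts, bin_edges = np.histogram(response, bins=8)
x = np.sort(response)
ecdf = np.arange(1, x.size + 1) / x.size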

4.2 Sensitivity Analysis in Form of Non-parametric Rank-order Correlation


An important task in structural reliability analysis is to determine the significance of the random variables – how they influence the response function of a specific problem. There are many different approaches to sensitivity analysis; a summary of present methods is given in (Novák et al. 1993).
A sensitivity analysis can answer the question "which variables are the most important?". In this way the dominating and non-dominating random variables can be distinguished. On the basis of LHS there are two kinds of sensitivity analysis: sensitivity in terms of the coefficient of variation and sensitivity in terms of the nonparametric rank-order correlation coefficient.
The first approach is based on the comparison of the partial coefficient of variation of the structural response variable with the coefficients of variation of the basic random variables. The second approach utilizes the nonparametric rank-order statistical correlation between the basic random variables and the structural response variable. Only the latter approach is considered in this text, due to its advantages. LHS simulation can be used efficiently to obtain this kind of information. The sensitivity analysis is obtained as an additional result, and no significant additional computational effort is necessary.
The relative effect of each basic variable on the structural response can be measured using the partial correlation coefficient between each basic input variable and the response variable. The method is based on the assumption that the random variable which influences the response variable most considerably (either in a positive or a negative sense) will have a higher correlation coefficient than the other variables. In the case of a very weak influence the correlation coefficient will be quite close to zero. Using Latin Hypercube Sampling, this kind of sensitivity analysis can be performed almost directly without any particular additional effort. Because the model for the structural response is generally nonlinear, a nonparametric rank-order correlation is utilised. The key concept of nonparametric correlation is this: instead of the actual numerical values we consider their ranks among all other values in the sample, that is 1, 2, ..., N. The resulting list of numbers is then drawn from a perfectly known probability distribution function – the integers are uniformly distributed. In the case of Latin Hypercube Sampling the representative values of the random variables cannot be identical, therefore there is no need to consider mid-ranking. The nonparametric correlation is more robust than the linear correlation, more resistant to defects in the data and also distribution independent. Therefore it is particularly suitable for the sensitivity analysis based on Latin Hypercube Sampling.
As a measure of nonparametric correlation we use the statistic called Kendall's tau. It utilizes only the relative ordering of ranks: higher in rank, lower in rank, or the same in rank, i.e. a weak property of the data, and so Kendall's tau can be considered a very robust strategy. For a detailed description of the calculation see (Novák et al. 1993); here we present only a symbolic formula. As mentioned above, Kendall's tau is a function of the ranks q_{ji} (the rank of the representative value of the random variable X_i in an ordered sample of the N simulated values used in the j-th simulation, which is equivalent to the integers in the table of random permutations in the LHS method) and p_j (the rank in the ordered sample of the response variable obtained by the j-th run of the simulation process):

τ_i = τ(q_{ji}, p_j) ,   j = 1, 2, ..., N .    (4.1)

In this way the correlation coefficients τ_i ∈ <−1; 1> can easily be obtained for an arbitrary random variable, and we can compare them. The greater the absolute value of τ_i for a variable X_i, the greater the influence of this variable on the structural response. An advantage of this approach is the fact that a sensitivity measure for all random variables can be obtained directly within one simulation analysis.
The rank-order statistical correlation is expressed either by the Spearman correlation coefficient

r_s = 1 − \frac{6 \sum_{i=1}^{n} d_i^2}{n(n−1)(n+1)} ,    (4.2)

where d_i is the difference of the ranks of the corresponding components in the two ordered statistical files and n is the size of the statistical file, or by Kendall's tau

τ = \frac{n_c − n_d}{\frac{1}{2}\, n(n−1)} ,    (4.3)

where n_c and n_d are the numbers of concordant and discordant pairs of ranks, respectively. This nonparametric sensitivity can be illustratively shown by a parallel coordinate representation, which can clearly demonstrate the positive or negative influence of a basic random variable (Novák et al. 1998). This is shown in Fig. 4.1.

Fig. 4.1: Parallel co-ordinates representations.
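A small Python sketch of this kind of sensitivity screening with scipy.stats; the three input variables, their distributions and the toy response function are assumptions standing in for LHS samples and ATENA runs:

import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
N = 100

# Assumed toy inputs (columns) and response, stand-ins for real simulations.
X = np.column_stack([
    rng.lognormal(np.log(30.0), 0.15, N),    # concrete strength
    rng.normal(0.20, 0.02, N),               # slab thickness
    rng.gumbel(12.0, 1.5, N),                # load
])
response = 0.8 * X[:, 0] * X[:, 1] - X[:, 2] + rng.normal(0.0, 0.3, N)

for i, name in enumerate(["strength", "thickness", "load"]):
    tau, _ = stats.kendalltau(X[:, i], response)    # eqs. (4.1) / (4.3)
    rho, _ = stats.spearmanr(X[:, i], response)     # eq. (4.2)
    print(f"{name:10s}  Kendall tau = {tau:+.2f}   Spearman = {rho:+.2f}")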


4.3 Reliability Analysis Based on Curve Fitting


REFERENCES
Ayyub, B.M. & Lai, K.L. 1989. Structural Reliability Assessment Using Latin Hypercube
Sampling. In Proc. of ICOSSAR'89, the 5th International Conference on Structural Safety and
Reliability, San Francisco, USA, August 7 - 11, Vol. I, Structural Safety and Reliability: 1177-
1184.
Bažant, Z.P., Bittnar, Z., Jirásek, M. & Mazars, J. 1994. Fracture and damage in quasibrittle materials. E&FN SPON, An Imprint of Chapman & Hall.
Bažant, Z.P. & Novák, D. 2000a. Probabilistic nonlocal theory for quasi-brittle fracture initiation
and size effect. I Theory, and II. Application. Journal of Engineering Mechanics ASCE, 126
(2): 166-185.
Bažant, Z.P. & Novák, D. 2000b. Energetic-statistical size effect in quasi-brittle failure at crack
initiation. ACI Materials Journal, 97 (3): 381-382.
Bažant, Z.P. & Planas, J. 1998. Fracture and size effect in concrete and other quasi-brittle
materials. Boca Raton, Florida, and London: CRC Press.
Bourgund, U. & Bucher, C.G. 1986. A Code for Importance Sampling Procedure Using Design Points - ISPUD - A User's Manual. Inst. Eng. Mech., Innsbruck University, Report No. 8-86.
Brenner, C., E. 1991. Stochastic Finite Element Methods (Literature Review). Institute of
Engineering Mechanics, Rep. No. 35-91 University of Innsbruck.
Bucher, C.G. & Bourgund, U. 1987. Efficient Use of Response Surface Methods. Inst. Eng.
Mech., Innsbruck University, Report No. 9-87.
Bucher, C.G. 1988. Adaptive Sampling - An Iterative Fast Monte - Carlo Procedure. J. Structural
Safety, 5 (2): 119-126.
Carmeliet, J. 1994. On stochastic descriptions for damage evaluation in quasi-brittle materials.
DIANA Computational Mechanics, G.M.A. Kusters and M.A.N. Hendriks (eds.).
Carmeliet, J. & Hens, H. 1994. Probabilistic nonlocal damage model for continua with random
field properties. ASCE Journal of Engineering Mechanics, 120 (10): 2013-2027.
CEB/FIP 1999. Updated model code, FIB, Lausanne, Switzerland – FIB: Manual textbook FIB Bulletin 1, Structural concrete textbook on behavior, design and performance, Vol. 1 (Introduction, design process, materials).
CEN 1993. Eurocode 1. Basis of design and actions on structures. 6th env 1991-1 edition.
CEN 1991. Eurocode 2. Design of concrete structures. Env 1992-1-1 edition.
Cornell 1984. [Reference to be completed.]
Červenka, V. & Pukl, R. 2000. ATENA – Computer Program for Nonlinear Finite Element
Analysis of Reinforced Concrete Structures. Program documentation. Prague, Czech republic:
Červenka Consulting.


Freudenthal, A.M. 1956. Safety and the Probability of Structural Failure. Transactions, ASCE,
121: 1327-1397.
Freudenthal, A.M., Garrelts, J.M. & Shinozuka, M. 1966. The Analysis of Structural Safety.
Journal of the Structural Division, ASCE, 92 (ST1): 267-325.
Grigoriu, M. 1982/1983. Methods for Approximate Reliability Analysis. J. Structural Safety, 1:
155-165.
Gutiérez, M.A. & de Borst, R. 1999. Deterministic and stochastic analysis of size effects and
damage evaluation in quasi-brittle materials. Archive of Applied Mechanics, 69: 655-676.
Hasofer, A.M. & Lind, N.C. 1974. Exact and Invariant Second-Moment Code Format. Journal of
Eng. Mech. Division, ASCE, 100 (EM1): 111-121.
Huntington, D.E. & Lyrintzis, C.S. 1998. Improvements to and limitations of Latin hypercube
sampling. Probabilistic Engineering Mechanics, 13 (4): 245-253.
Iman, R.C. & Conover, W J. 1982. A Distribution Free Approach to Inducing Rank Correlation
Among Input Variables. Communication Statistic, B11: 311-334.
Iman, R.C. & Conover, W.J. 1980. Small Sample Sensitivity Analysis Techniques for Computer
Models with an Application to Risk Assessment. Communications in Statistics: Theory and
Methods, A9 (17): 1749-1842.
Kleiber, M. & Hien, T.D. 1992. The Stochastic Finite Element Method, John Willey & Sons Ltd..
Laarhoven P.J & Aarts E.H. 1987. Simulated Annealing: Theory and Applications. D. Reidel
Publishing Company, Holland.
Li, K.S. & Lumb, P.1985. Reliability Analysis by Numerical Integration and Curve Fitting. J.
Struct. Safety, 3: 29-36.
Madsen, H.O., Krenk, S. & Lind, N.C. 1986. Methods of Structural Safety. Prentice – Hall,
Englewood Cliffs.
Margoldová, J., Červenka, V. & Pukl, R. 1998. Applied brittle analysis. Concrete Engineering
International, 8 (2): 65-69.
Mazars, J. 1982. Probabilistic aspects of mechanical damage in concrete structures. In G. C. Sih
(ed.), Proc., Conf. on Fracture Mech. Technol. Appl. to Mat. Evaluation and Struct. Design,
Melbourne, Australia: 657-666.
McKay, M.D., Conover, W.J. & Beckman, R.J. 1979. A Comparison of Three Methods for
Selecting Values of Input Variables in the Analysis of Output from a Computer Code.
Technometrics, Vol. 21: 239-245.
Mihashi, H. & Izumi, M. 1977. A stochastic theory for concrete fracture. Cement and Concrete
Research, Pergamon Press Vol. 7: 411-422.
Novák, D., Lawanwisut, W. & Bucher C. 2000. Simulation of random field based on orthogonal
transformation of covariance matrix and Latin Hypercube Sampling. Proc. of Int Conf. On
Monte Carlo Simulation Method MC2000, Monaco, Monte Carlo.
Novák, D., Teplý, B. & Keršner, Z. 1998. The role of Latin Hypercube Sampling method in reliability engineering. In Shiraishi, N., Shinozuka, M. & Wen, Y.K. (eds), Proceedings of ICOSSAR'97 - 7th International Conference on Structural Safety and Reliability, Kyoto, Japan: 403-409. Rotterdam: Balkema.
Novák, D., Teplý, B. & Shiraishi, N. 1993. Sensitivity Analysis of Structures: A Review. Int. Conf. CIVIL COMP'93, Edinburgh, Scotland, August: 201-207.
Novák, D., Vořechovský, M., Pukl, R. & Červenka, V. 2001. Statistical nonlinear fracture
analysis: Size effect of concrete beams. Int. Conf. On Fracture Mechanics of Concrete
Structures FraMCos 4, Cachan, France (in print).
Novák, D. & Kijawatworawet, W. 1990. A comparison of accurate advanced simulation methods and Latin Hypercube Sampling method with approximate curve fitting to solve reliability problems. Internal Working Report, No. 34-90.
Otten R. H. J. M. & Ginneken L. P. P. P 1989. The Annealing Algorithm. Kluwer Academic
Publishers, USA.
Rubinstein, R.Y. 1967. Simulation and Monte-Carlo Method. John Wiley & Sons, New Press,
Oxford.
Schneider, J 1996. Introduction to Safety and Reliability of Structures, IABSE, GED, ETH,
Zurich.
Schreider, Y.A. 1967. The Monte Carlo Method - The Method of Statistical Trials. Pergamon
Press, Oxford.
Schueller, G.I. & Stix, R. 1987. A Critical Appraisal of Methods to Determine Failure
Probabilities. J. Struct. Safety, 4 (4): 293-309.
Schueller, G.I., Bucher, C.G., Bourgund, U. & Ouypornprasert, W.1989. On Efficient
Computational Schemes to Calculate Structural Failure Probabilities. Probabilistic
Engineering Mechanics, 4 (1): 10-18.
Shinozuka, M. 1972. Probabilistic modeling of concrete structures. ASCE Journal of Engineering
Mechanics, Vol. 98, EM6: 1433-1451.
Vanmarcke, E., Shinozuka, M., Nakagiri, S., Schueller, G.I. & Grigoriu, M. 1986. Random Fields
and Stochastic Finite Elements, Structural Safety, No. 3: 143-166
Vořechovský M., Novák D., Rusina R. 2002. A New Efficient Technique for Samples
Correlation in Latin Hypercube Sampling, Proc. of 7th International Scientific Conference,
Košice, Slovakia, 102-108.
Weibull, W. 1939. The phenomenon of rupture in solids. Proc., Royal Swedish Institute of
Engineering Research (Ingenioersvetenskaps Akad. Handl.) 153, Stockholm: 1-55.
Yamazaki, F., Shinozuka, M. & Dasgupta, G. 1988. Neumann Expansion for Stochastic Finite
Element Analysis, Journal of Engineering Mechanics, 114 (8): 1335-1354.
