Open Eng. 2019; 9:52–60

Research Article

Henrique Gomes Moura*, Edson Costa Junior, Arcanjo Lenzi, and Vinicius Carvalho Rispoli

On a Stochastic Regularization Technique for Ill-Conditioned Linear Systems

https://doi.org/10.1515/eng-2019-0008
Received Nov 28, 2017; accepted Jan 02, 2019
Abstract: Knowledge about the input–output relations of a system can be very important in many practical situations in engineering. Linear systems theory, from applied mathematics, provides an efficient and simple modeling technique for input–output relations. Many identification problems reduce to a set of linear equations in which only the outputs are known; they are inverse problems, since the system inputs are sought from the outputs alone. This work presents a regularization method, called the random matrix method, which reduces errors in the solution of ill-conditioned inverse problems by introducing modifications into the matrix operator that governs the problem. The main advantage of this approach is the possibility of reducing the condition number of the matrix using the probability density function that models the noise in the measurements, leading to better regularization performance. The method was applied to a force identification problem and the results were compared quantitatively and qualitatively with the classical Tikhonov regularization method. The results show that the proposed technique outperforms the Tikhonov method when dealing with severely ill-conditioned inverse problems.

Keywords: Force Identification Problems, Inverse Problems, Regularization, Random Matrices

*Corresponding Author: Henrique Gomes Moura: Engineering College, Laboratory of Noise, Vibration and Harshness, University of Brasilia at Gama; Email: [email protected]
Edson Costa Junior: Engineering College, Laboratory of Instrumentation, Signal and Image Processing, University of Brasilia at Gama
Arcanjo Lenzi: Laboratory of Vibration and Acoustics, Federal University of Santa Catarina
Vinicius Carvalho Rispoli: Engineering College, University of Brasilia at Gama

1 Introduction

It is very common in applied mechanics to come across identification problems when estimating certain physical parameters of interest. Unfortunately, most of these problems are inverse problems, either because appropriate measurement instruments are lacking or because the measurement locations needed to obtain the desired quantities directly are hard to access [1–4]. Inverse problems are characterized by determining unknown causes based on observations of their effects [5]. A particular type of inverse problem, found frequently in engineering, can be formulated in terms of a linear system [6]

x = A^{+} y,    (1)

where y is the output vector that represents the observed effect, A^{+} = (A^{*} A)^{-1} A^{*} is the pseudo-inverse matrix operator and x is the input vector related to the desired quantity. The matrix A is known as the matrix operator of the linear system y = Ax and A^{*} is the Hermitian transpose of A. In the absence of noise in both vectors x and y, the matrix inversion process is computationally stable if A has linearly independent columns [7]. However, if the data are corrupted by noise, the process may become unstable, and the noise can be largely amplified in the solution, so that the solution might become completely meaningless [8].

When dealing with noisy inverse problems, regularization techniques are necessary to obtain better and more stable solutions. In ill-posed and/or ill-conditioned systems, regularization introduces additional information into the system in order to improve its solution. Many classical regularization methods, such as Tikhonov regularization and mollifier methods, are discussed in detail in books and papers in the literature [8, 9]. In particular, the use of adapted matrices in several kinds of regularization problems has also been studied recently [10, 11]. Moreover, regularization techniques involving matrix learning and large covariance problems are also objects of interest of several researchers [12–14].

Force identification problems are examples of such noisy and unstable inverse problems that call
for regularization methods. These problems have gained a lot of interest in industry, especially in the fields of civil engineering and structural mechanics, since they can be used to forecast the remaining lifetime of a given structure [15]. Force identification aims to estimate forces based on the measured structural response and on a dynamic model of the structural behavior given by frequency response functions (FRF) in matrix form [16]. Several advances and methods related to the solution of force identification inverse problems can be found in the literature, such as the use of mixed penalty functions for regularization [15], Kalman filters [17], the Taylor formula [18] and sparse representations [19].

In this context, this paper presents a novel strategy to regularize ill-conditioned systems of linear equations using random matrices based on Monte Carlo methods. Here we are interested in a regularization method that introduces modifications into the matrix operator A in order to stabilize the desired inversion process. Compared with other well-known and widely used regularization methods, the method proposed in this paper has the advantage of modifying and regularizing the ill-posed linear system using a case-specific model for the noise present in the signal. This is done through the use of specific probability density functions related to the noise present in the sampled signal. The mathematical construction of the Random MAtrix method (RMA) is shown step by step in the following sections. Then, to validate the proposed method and the efficiency of the algorithm, a force identification inverse problem based on noise-corrupted data was used. Finally, the results were compared quantitatively and qualitatively with the classical Tikhonov regularization method, and it is shown that RMA provides better results than Tikhonov regularization in all the cases tested.

2 Theory

2.1 Ill-Posed and Ill-Conditioned Systems

A problem is said to be well-posed if the following requirements are fulfilled: (1) there exists a unique solution to the problem; (2) the solution is smooth, i.e., it depends continuously on the input data. Otherwise the problem is called ill-posed [8].

The first condition requires that the solution is not ambiguous. The second one assures the continuous dependence of the solution on the data. In simple words, a well-posed problem is one that, based on its formulation, has a consistent solution that could describe physical systems continuously.

In practice, the measured data form a finite set, known to approximate the actual phenomenon only to a certain degree, and one wants to find the solution that best fits the problem. Depending on the quality of the measured data, the search for the best solution may run into instabilities and undesirable errors.

If a matrix operator A is singular, then at least one singular value of A is zero. On the other hand, when some singular values are very small, close to zero but not null, A is called ill-conditioned. This situation makes the solution very sensitive to small changes in the data. A way to measure this sensitivity is the condition number of the matrix operator A [20], defined as

\kappa(A) = \|A\|_2 \, \|A^{+}\|_2 = \frac{\sigma_{\max}(A)}{\sigma_{\min}(A)} \geq 1,    (2)

where \sigma_{\max}(A) and \sigma_{\min}(A) are the maximal and minimal singular values of A, respectively, and \|\cdot\|_2 is the norm of the square-summable sequence space \ell^2 [5].

Note that \sigma_{\min}(A) must be nonzero, otherwise the condition number cannot be calculated; this means that A must have full rank. The system is called well-conditioned if the condition number is close to its minimal value (unity), while a problem with a high condition number is said to be ill-conditioned. For the particular case of square matrix operators, ill-conditioned systems can be easily identified by the determinant of the matrix operator A, which is close to zero if the matrix operator is quasi-singular.

2.2 Regularized Linear Systems

Consider a system of linear equations written in the matrix form shown in Eq. (1). If noise is added to the response vector y, then the solution vector x must be compensated by an error x̂ such that

A(x + \hat{x}) = y + \hat{y} = \tilde{y}.    (3)

The error amplification factor \|\hat{x}\| / \|\hat{y}\| will be very large if the matrix operator A is ill-conditioned [22].

Now, consider in Eq. (3) that a before-state error ε_0 is used to reduce the effects of the uncertainties in ỹ, i.e.,

A(x + \hat{x} + \varepsilon_0) = A(\tilde{x} + \varepsilon_0) = y,    (4)

where x̃ = x + x̂.
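As a quick numerical illustration of Eqs. (2)–(3) — a minimal sketch only, using an arbitrary ill-conditioned test matrix rather than the paper's measured data — the NumPy code below computes the condition number and shows how a small perturbation of the response vector is amplified by the naive pseudo-inverse of Eq. (1):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hilbert matrix: a classic ill-conditioned test operator (not from the paper).
n = 8
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])

# Condition number as in Eq. (2): ratio of the extreme singular values.
sigma = np.linalg.svd(A, compute_uv=False)
kappa = sigma.max() / sigma.min()
print(f"condition number kappa(A) = {kappa:.3e}")

# Exact input/output pair, then a slightly noisy response y~ = y + y^.
x_true = rng.standard_normal(n)
y = A @ x_true
y_hat = 1e-6 * rng.standard_normal(n)          # small measurement noise
x_noisy = np.linalg.pinv(A) @ (y + y_hat)      # naive inversion, Eq. (1)

# Error amplification |x^| / |y^| discussed after Eq. (3).
amplification = np.linalg.norm(x_noisy - x_true) / np.linalg.norm(y_hat)
print(f"error amplification |x^|/|y^| = {amplification:.3e}")
```

Even a perturbation of order 10^-6 in y produces a large error in x; this is the instability that the regularization strategies discussed next are meant to control.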

On the other hand, if a regularized matrix operator Ā = A + Â is applied to the exact response vector y, then a compensation error ε_a over the input of the system will arise. It can be written as

\bar{A}(x + \varepsilon_a) = y.    (5)

The term ε_a will be called the regularization error, and Â is the matrix responsible for stabilizing the inversion process. Since the exact response y is not available, Eq. (5) must be modified to consider the noisy response ỹ, i.e.,

\bar{A}(\tilde{x} + \varepsilon_a) = \tilde{y}.    (6)

Now, if in Eq. (6) we use the before-state error to compensate the uncertainties in y, then we should get

\bar{A}(\tilde{x} + \varepsilon_a + \varepsilon_0) = y.    (7)

In this case we write the after-state error vector ε_1 = ε_0 + ε_a as the sum of the uncertainty and regularization effects.

The error vectors ε_0 and ε_1 show how the solution differs from both the original and the regularized systems. The simple rule presented by Eq. (8) can be applied for any x̃ to check whether the regularization result is satisfactory. However, there is no analytical expression available for that, so the errors should be checked numerically for each response of the system using

\|\varepsilon_1\|_2 \leq \|\varepsilon_0\|_2.    (8)

The presented regularization strategy shows how to find a satisfactory regularized linear matrix operator Ā using a Monte Carlo simulation [23]. The regularized operator Ā applied to a noisy output ỹ generates a reasonable noisy input x̃.

2.3 Random Matrix Method (RMA)

The proposed regularization aims to find a stochastic linear matrix operator Â that improves the condition number of the system when it is added to the original matrix A. The new system should then have a reduced error amplification factor, which improves the solution of the inverse problem.

The method consists in generating a finite set of matrices {Â_k}, indexed by an integer k, such that their condition numbers are smaller than the condition number of the actual matrix operator A, i.e., each matrix of the set must satisfy the rule

\kappa(\bar{A}_k) < \kappa(A),    (9)

where Ā_k = A + Â_k. The set of matrices {Â_k} can be determined by a Monte Carlo simulation based on the relation between the before- and after-state errors given by Eq. (8). For that, it is important to derive analytical expressions to estimate the error vectors before (ε_0) and after (ε_1) regularization.

From Eq. (4), the before-state error vector is related to the exact and noisy response vectors by ỹ + Aε_0 = y, so Aε_0 = −(ỹ − y). Moreover, from Eq. (6) we have Aε_a + Âx̃ = 0. Then, since x̃ = A⁺ỹ, the regularization error is given by ε_a = −A⁺Â A⁺ỹ. Finally, the before- and after-state error vectors can be estimated by

\varepsilon_0 = -A^{+}\hat{y},    (10)

\varepsilon_1 = -A^{+}\hat{A}A^{+}\tilde{y} - A^{+}\hat{y},    (11)

where ŷ = ỹ − y, A⁺ is the pseudo-inverse matrix operator and Â is the desired regularization matrix.

As shown in Eqs. (10)–(11), the uncertainties ŷ are needed to estimate the error vectors above. In order to faithfully represent the meaning of Eq. (8) and properly check the regularization efficiency through Eq. (9), the uncertainties ŷ should approximate the uncertainties of the measured data as closely as possible.

The problem of finding the uncertainty ŷ leads to a Monte Carlo simulation using a reasonable probability density function (PDF). Thus, a set of uncertainties {ŷ_i} can be found to test the random matrix modifications along many sensitivity input directions, which represent the variations of the contaminated input data that can reduce the noise amplification factors in the system inversion.

At first, a finite set of before-state error vectors {ε_0i}, indexed by an integer i, is obtained from Eq. (10) through a Monte Carlo simulation. For each random matrix Â_k of the set {Â_k}, it is possible to find an after-state error set {ε_1i} using Eq. (11). The error set {ε_1i} that produces the minimum Euclidian distances to the before-state set {ε_0i} is related to the best Â. The overall computation procedure is described below; a minimal code sketch of the procedure is given right after the list.

1. Generate a set {ŷ_i} such that ‖ŷ_i‖_2 < ε_max;
2. Compute the before-state set {ε_0i} using Eq. (10);
3. Generate a finite set of matrices {Â_k} such that Â_k = W_k N(n), satisfying Eq. (9); W_k is a control matrix and N(n) is a function that returns matrices of order n, the same order as Â_k, whose coefficients are drawn according to the PDF of the noise encountered in the data. Here, in this paper, the function N(n) returns matrices of order n with normally distributed coefficients;
4. Compute a set {ε_1i} for each random matrix Â_k using Eq. (11);
5. Choose the set {ε_1i} that optimizes the Euclidian distances to the set {ε_0i};
6. Adopt the random matrix Â_k related to the optimum set {ε_1i} as the regularization matrix.

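The sketch below is one possible real-valued implementation of steps 1–6, assuming Gaussian uncertainties and a scalar W_k (a constant amplitude, as effectively used in the experiments of Section 3). The function and variable names are illustrative and the scoring of step 5 is interpreted here as the sum of the distances between the two error sets; this is not the authors' code.

```python
import numpy as np

def rma_regularize(A, y_noisy, eps_max, w_k, P=50, Q=100, rng=None):
    """Monte Carlo search for a regularization matrix A_hat following steps 1-6."""
    rng = np.random.default_rng() if rng is None else rng
    n = A.shape[0]
    A_pinv = np.linalg.pinv(A)
    kappa_A = np.linalg.cond(A)

    # Step 1: simulated uncertainties y^_i with ||y^_i|| < eps_max (Gaussian directions).
    y_hats = [eps_max * rng.uniform(0.0, 1.0) * u / np.linalg.norm(u)
              for u in rng.standard_normal((P, n))]

    # Step 2: before-state errors, Eq. (10).
    eps0 = [-A_pinv @ y_hat for y_hat in y_hats]

    best_A_hat, best_score = None, np.inf
    for _ in range(Q):
        # Step 3: candidate modification A_hat_k = W_k * N(n), kept only if Eq. (9) holds.
        A_hat_k = w_k * rng.standard_normal((n, n))
        if np.linalg.cond(A + A_hat_k) >= kappa_A:
            continue

        # Step 4: after-state errors, Eq. (11).
        eps1 = [-A_pinv @ A_hat_k @ A_pinv @ y_noisy - A_pinv @ y_hat
                for y_hat in y_hats]

        # Rule of Eq. (8): the regularization must not increase the error norms.
        if not all(np.linalg.norm(e1) <= np.linalg.norm(e0)
                   for e0, e1 in zip(eps0, eps1)):
            continue

        # Step 5: overall Euclidian distance between the two error sets.
        score = sum(np.linalg.norm(e1 - e0) for e0, e1 in zip(eps0, eps1))
        if score < best_score:
            best_A_hat, best_score = A_hat_k, score

    # Step 6: A + best_A_hat is the regularized operator (None if no candidate passed).
    return best_A_hat
```

If none of the Q candidates satisfies Eqs. (8)–(9), the function returns None, mirroring the remark made later in the paper that in such a case the regularization is not reasonable and must be avoided.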
It is important to emphasize that the method is highly dependent on the uncertainties {ŷ_i} and on the matrix modifications {Â_k}. According to the Central Limit Theorem, for large computations the simulated uncertainties may be modeled using the standard normal distribution [23]. Other probability density functions can be applied whenever specific information is known about the stochastic responses involved. On the other hand, if no information about the noise is available, we suggest that the random matrices be generated preferably from a uniform or Gaussian distribution, so that an unbiased solution can be sought.

As can be seen, RMA has two main regularization parameters: ε_max and W_k. The first controls the amplitude of the numerical uncertainties, which must be modeled according to the expected experimental error. The second controls the amplitude of the matrix modifications, that is, the regularization level. It is known that the regularization error ε_a is acceptable only at small scales, which means that the regularization must be just sufficient to repair the ill-conditioned system.
2.4 Tikhonov Regularization

The Tikhonov regularization method is one of the oldest techniques addressed to the solution of ill-conditioned and ill-posed identification problems [20]. The technique was first developed by the Russian mathematician Andrey Nikolayevich Tikhonov, and it can be simply stated as the constrained least squares problem [21]:

\min_{x} \left\{ \|Ax - y\|_2^2 + \lambda^2 \|Cx\|_2^2 \right\}.    (12)

For the discrete case, the Euclidian norm ‖Cx‖_2^2 controls the amplitude of the input x through the regularization parameter λ applied to the minimization problem. The Tikhonov solution x_{λ,C} is unique and is formally given by

x_{\lambda,C} = A^{\#}_{\lambda} y,    (13)

where A^{\#}_{\lambda} = (A^{T}A + \lambda^2 C^{T}C)^{-1} A^{T} is the Tikhonov regularized inverse. If the matrix C is chosen to be the identity matrix I_n, then we drop the subscript "C" in the Tikhonov solution of Eq. (13), and the norm of the solution will also be minimized. The central issue here is how to optimize the regularization parameter λ. Using the SVD [24, 25], the matrix operator A ∈ C^{n×m}, with rank m, can be written as

A = \sum_{i=1}^{m} \sigma_i u_i v_i^{T},    (14)

where σ_i is a singular value, u_i the corresponding left singular vector and v_i the right singular vector. Then, each term of the minimization problem stated by Eq. (12) can be written as

\varepsilon^2 = \|Ax - y\|_2^2 = \sum_{i=1}^{m} \left| \sigma_i v_i^{T}x - u_i^{T}y \right|^2 + \sum_{i=m+1}^{n} \left| u_i^{T}y \right|^2    (15)

\eta^2 = \|Cx\|_2^2 = \|x\|_2^2 = \sum_{i=1}^{m} \left| v_i^{T}x \right|^2    (16)

for any y ∈ R(A) = { g : lim_{λ→0} ‖A^{\#}_{λ} g − x^{+}‖_2 = 0 }. Combining Eq. (15) and Eq. (16) with Eq. (13), one obtains the solution

x_{\lambda} = \sum_{i=1}^{m} f_i \, \sigma_i^{-1} (u_i^{T}y) \, v_i,    (17)

where f_i = σ_i^2 / (σ_i^2 + λ^2) is the solution filter factor. For the case C = I_n, the regularization parameter acts on the smallest singular values, σ_i^2 ≪ λ^2, which produces f_i ≈ σ_i^2 / λ^2.

Figure 1: On the left side, the generic form of the L-curve [2] (note the log–log scale). On the right side, the generic form of the decreasing singular values curve.

The criterion to choose the regularization parameter λ can be developed by substituting Eq. (17) into Eq. (15) and Eq. (16). The result is

\varepsilon^2 = \sum_{i=1}^{m} \frac{\lambda^4}{(\sigma_i^2 + \lambda^2)^2} \left| u_i^{T}y \right|^2 + \sum_{i=m+1}^{n} \left| u_i^{T}y \right|^2,    (18)

\eta^2 = \sum_{i=1}^{m} \frac{\sigma_i^2}{(\sigma_i^2 + \lambda^2)^2} \left| u_i^{T}y \right|^2.    (19)
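A minimal numerical sketch of Eqs. (14)–(19) is given below, assuming the common case C = I and a real-valued A; it is an illustration, not the authors' implementation.

```python
import numpy as np

def tikhonov_svd(A, y, lam):
    """Tikhonov solution x_lambda and the error terms of Eqs. (18)-(19), with C = I."""
    U, s, Vt = np.linalg.svd(A, full_matrices=True)
    m = s.size                                   # rank assumed equal to the column count
    uty = U.T @ y                                # coefficients u_i^T y
    f = s**2 / (s**2 + lam**2)                   # filter factors of Eq. (17)
    x_lam = Vt.T @ (f / s * uty[:m])             # x_lambda = sum f_i sigma_i^-1 (u_i^T y) v_i
    eps2 = np.sum((lam**2 / (s**2 + lam**2))**2 * np.abs(uty[:m])**2) \
           + np.sum(np.abs(uty[m:])**2)          # Eq. (18)
    eta2 = np.sum((s / (s**2 + lam**2))**2 * np.abs(uty[:m])**2)   # Eq. (19)
    return x_lam, eps2, eta2
```

Sweeping λ and plotting the resulting (ε, η) pairs on a log–log scale reproduces the L-curve discussed next.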
These equations provide a balance between two sources of error: the perturbation error, given by Eq. (19), due to the ill-conditioning, and the regularization error, given by Eq. (18), due to the system modification imposed by λ. Note that the term ε^2 is a strictly increasing function of λ, while the term η^2 is strictly decreasing, and thus an optimum value λ_opt of the regularization parameter exists.

The so-called L-curve [2] is a plot of ‖Cx_{λ,C}‖_2 versus ‖Ax_{λ,C} − y‖_2, and it represents the balance between these two error quantities, which is the main issue of any regularization technique. The generic form of the L-curve is presented in Figure 1, on the left side. The L-curve clearly distinguishes two important regions for the regularization parameter λ. From the central corner to the left, the regularization parameter λ decreases and the perturbation error η^2 increases rapidly. From the central corner to the right, the regularization parameter λ increases, decreasing the perturbation error but increasing the regularization error ε^2. Thus, the optimized regularization parameter λ lies near the central corner of the curve.

An alternative choice of the regularization parameter λ is provided by the decreasing singular values curve, sketched in Figure 1, on the right side. The decreasing singular values curve is a plot of the singular values σ_i versus their index. It is known that the singular value decomposition (SVD) can be used to find a matrix A_r (of rank r) that best approximates a matrix A ∈ C^{n×m} [5]. Therefore, when dealing with noise-corrupted matrices, it is possible to find a reasonable matrix A_r that replaces the original noisy matrix A for the purpose of inverse identification problems.

In this context, the decreasing singular values curve can be used to establish limits for the smallest singular values admitted in the Tikhonov solution. The presented results show that an optimized Tikhonov solution can be obtained by setting the regularization parameter λ to the smallest acceptable singular value, so that the area W, defined from the greatest to the smallest acceptable singular value, corresponds to an empirical percentage of the total area under the curve.

In the experimental identifications presented here, the Tikhonov solution was applied using the decreasing singular values curve, which returned better results than the L-curve for the choice of the regularization parameter λ. A percentage of 99% was used in the choice of the optimized regularization parameter λ_opt.

3 Experiments and Results

A force identification experiment was carried out for the system presented in Figure 2, consisting of a rectangular cantilever structure with one side clamped and the other free. Accelerometers, force transducers and electrodynamic shakers were used in the experiment. White noise signals of the same order of magnitude, but produced by two different noise generators, were supplied to the shakers.

Through the presented experimental setup it was possible to obtain a set of structural frequency response functions (FRF) A_ij(ω), which correlate the exciting forces (inputs) x_i(ω) with the measured accelerations y_j(ω) (outputs) [26, 27]. The experiment was carried out over an inertial mass, in order to better control the reference force values, measured by the force transducers that connect the electrodynamic shakers to the structure. The measured structural FRF were conveniently expressed in matrix form, A ∈ C^{5×5}, according to Eq. (1).

Figure 2: Cantilever structure.

The number of spectral lines for data acquisition was set to 4096, and 6400 Hz was considered as the frequency sample rate. The two electrodynamic shakers excited the structure simultaneously at positions #3 and #5 (considering positions equally distributed, starting from the clamped side on the right, as seen in Figure 2). The identification processes were executed in the range from zero to 4686 Hz (3000 spectral lines). The measured response data, the structural frequency response functions A(ω) and the accelerations y(ω), were then corrupted with 25%, 50% and 75% of additive and multiplicative white noise (see Figure 3 for the 75% noise case), in order to make the force identifications more difficult and to simulate a worst-case scenario.
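A sketch of this corruption step is shown below. The paper does not give the exact recipe for "additive and multiplicative white noise" at a level p, so the scaling used here is an illustrative assumption, applied to a real-valued stand-in for the FRF matrix:

```python
import numpy as np

def corrupt(data, level, rng):
    """Apply multiplicative and additive white noise of relative amplitude `level`."""
    mult = 1.0 + level * rng.standard_normal(data.shape)          # multiplicative part
    add = level * np.std(data) * rng.standard_normal(data.shape)  # additive part
    return data * mult + add

rng = np.random.default_rng(1)
A_ref = rng.standard_normal((5, 5))        # stand-in for the measured FRF matrix A(w)
for level in (0.25, 0.50, 0.75):           # the 25%, 50% and 75% cases of the paper
    A_noisy = corrupt(A_ref, level, rng)
    print(f"{int(level * 100)}% noise: cond(A) = {np.linalg.cond(A_noisy):.1f}")
```

In the experiments, the same kind of corruption is applied to both A(ω) and y(ω) at each frequency line.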
Finally, the force identification systems corresponding to each of the three noise-level cases were inverted using RMA, Tikhonov regularization and least squares. The solution of the non-regularized force identification was obtained by the least squares solution, expressed by Eq. (13) in matrix form with λ = 0. This naive identification procedure is expected to produce very discrepant results, highlighting the importance of regularization in the presence of noisy data.

Table 1: Parameters and settings for both regularization methods.

Method          Parameters and Settings
Tikhonov        λ → obtained using the decreasing singular values curve, with 99% of the total area; C = I_n.
Random Matrix   {ŷ_i} → P = 50 sensitivity directions; {Â_k} → Q = 100 random matrices per direction; ε_max → set approximately to the maximum order of magnitude of the noise in the response data (about 25% of the order of the response signal); W_k → amplitude order kept constant for each level of added noise, with the values set by a normal distribution.

The parameter W_k, described in Section 2.3 as a control matrix in general, was chosen here based on the dispersion of the noise found in the FRF. It was estimated, for each contamination level, from the noise amplitude observed in the contaminated FRF matrices. In this work the parameter was set constant and chosen as W_k = 0.005 for 25% contamination, W_k = 0.01 for 50% contamination and W_k = 0.02 for 75% contamination of the measured data.

Figure 4 shows, on the top, the matrix condition number before and after regularization, obtained with both regularization methods. The results for the random matrix regularization and for the Tikhonov solution are quite similar, with only small differences in the interval close to 2000 Hz. Figure 4 also shows, on the bottom, the error amplification factor for A(ω) before and after regularization, obtained with both regularization methods. The random matrix regularization and the Tikhonov regularized solution return equivalently good results over the whole frequency range. In this case we can say that the RMA regularization keeps both the condition number and the error amplification factor at approximately the same level as the Tikhonov regularization, which is desirable.

Figure 3: Reference frequency response function A[5, 5](ω) and its noisy counterpart corrupted with 75% of additive and multiplicative white noise.

The results of the force identifications are shown in Figure 5. First, it is immediately apparent that the forces identified at nodes #3 and #5 by the least squares method differ significantly from the reference curves, as expected. This fact reinforces the basic necessity of a regularization technique in noisy matrix inversion processes, as stated before.

As far as the other identifications are concerned, Figure 5 also shows that the RMA is noticeably more satisfactory than the Tikhonov regularization method, since the points of the RMA curve are closer to their respective reference values. For a better visualization and understanding of the results, a quantitative analysis is presented below.

The forces identified at nodes #3 and #5 by the different methods were quantitatively compared with the reference force curve by means of the signal-to-error ratio (SER). The SER measures the ratio between the energy of the signal and the energy of the estimation error. It is calculated (in decibels) as

SER = 10 \log_{10} \left( \frac{\sum_i |F_{ref}(\omega_i)|^2}{\sum_i |F_{ref}(\omega_i) - F_{id}(\omega_i)|^2} \right),    (20)

where F_ref is the reference force and F_id is the force identified by one of the three methods considered.
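A short implementation of Eq. (20) is enough to reproduce this metric (sketch only; the array names are illustrative):

```python
import numpy as np

def ser_db(f_ref, f_id):
    """Signal-to-error ratio of Eq. (20), in decibels."""
    return 10 * np.log10(np.sum(np.abs(f_ref) ** 2)
                         / np.sum(np.abs(f_ref - f_id) ** 2))
```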
Figure 4: On the top, the matrix condition number after the Tikhonov regularization, the RMA regularization and the least squares solution. On the bottom, the error amplification factors after the Tikhonov regularization, the RMA regularization and the least squares solution. Both plots refer to the data corrupted with 75% of additive and multiplicative white noise.

Figure 5: Force identification at position #3 (top) and #5 (bottom), obtained with the Tikhonov regularization, the RMA regularization and the least squares solution. In these cases, it is immediately apparent that the RMA results are closer to the reference than the curve obtained using Tikhonov regularization. Both plots refer to the data corrupted with 75% of additive and multiplicative white noise.

Table 2: Signal-to-error ratio (in dB) between the reference force and the forces identified using RMA, Tikhonov regularization (TKH) and least squares (LS), for different levels of additive and multiplicative noise, at nodes #3 and #5.

Noise  |  Force at node #3              |  Force at node #5
       |  RMA       TKH      LS         |  RMA       TKH      LS
25%    |  6.85 (↑)  6.20     −36.53     |  5.14 (↑)  2.80     −20.10
50%    |  5.48 (↑)  4.27     −15.52     |  1.85 (↑)  1.13     −21.46
75%    |  4.03 (↑)  2.94     −13.86     |  1.98 (↑)  −0.07    −17.12

The results presented in Table 2 are consistent with the qualitative results observed in Figure 5 for the signal contaminated with 75% of noise. Note that an increase in SER between two forces identified by different methods indicates that the one with the higher SER is closer to the reference force. It is observed that the RMA has a gain of 1.09 dB for the force identified at node #3 and a gain of 2.05 dB for the force identified at node #5, relative to the Tikhonov regularization. In addition, the other results presented in Table 2 show that, for the lower contamination levels of 25% and 50%, the RMA still exhibits a higher SER than the Tikhonov regularization method. In these cases, RMA offers gains of 0.65 dB and 2.34 dB for 25% of noise contamination at nodes #3 and #5, respectively, and gains of 1.21 dB and 0.72 dB for 50% of noise contamination at nodes #3 and #5, respectively. Thus, considering all six identified forces, the minimum SER gain obtained by RMA in the experiment was 0.65 dB and the maximum was 2.34 dB when compared with the Tikhonov regularization. This indicates that the quadratic error \sum_i |F_{ref}(\omega_i) - F_{id}(\omega_i)|^2 associated with the Tikhonov method is about 16% higher than the RMA error in the minimum case and about 70% higher in the maximum case.
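The percentages above follow directly from the definition of the SER in Eq. (20): a gain of D dB corresponds to an error-energy ratio of 10^(D/10). A two-line check (illustrative):

```python
# SER is 10*log10(signal energy / error energy), so a gain of D dB between two
# methods means the worse method's error energy is 10**(D / 10) times larger.
for gain_db in (0.65, 2.34):
    ratio = 10 ** (gain_db / 10)
    print(f"{gain_db:.2f} dB gain -> error about {100 * (ratio - 1):.0f}% higher")
# prints roughly 16% and 71%, matching the figures quoted in the text.
```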
It is important to point out that low condition numbers or error amplification factors do not guarantee a good solution. These indicators might be used only to provide a reasonable modification of the system, in order to restrain the solution in the presence of noise and ill-conditioning, or to check the identification conditions between different configurations. A good solution depends, first, on the quality of the data and, second, on a successfully regularized identification process, which must act mainly on the ill-conditioning, keeping the solution as close as possible to the one that would be obtained from hypothetically noiseless data.

Focusing now on the internal results of the random matrix regularization: as shown in Table 1, 100 random matrices Â_k were generated for each sensitivity direction ŷ_i. This means that the random matrix method searches among 100 regularization possibilities at each frequency line, or matrix inversion, by checking Eq. (9). The matrix that produces the best overall result over the 50 sensitivity directions, and that of course meets the rule described by Eq. (8), is accepted. If none of the 100 matrices meets Eq. (9) and Eq. (8), then the regularization is not reasonable and must be avoided. It is important to notice that RMA achieved 100% efficiency in the regularizations performed along the frequency axis, in all cases considered in this paper.

Due to its stochastic nature, the RMA method is highly sensitive to the choice of the regularization parameters P and Q (Table 1), which define the order of magnitude of the Monte Carlo simulations to be performed. The appropriate choice of the probability density function responsible for modeling the noise that corrupts the dataset is also of great importance. In the cases presented here, the normal distribution was selected precisely to fit the white noise inserted in the data, as well as the experimental noise, which, by the Central Limit Theorem, is usually approximated by a Gaussian (normal) distribution. It is worth mentioning that datasets corrupted by noise modeled by probability density functions other than the Gaussian must be treated using the same computational procedure described in Section 2.3, but with the function N(n) returning coefficients drawn from the probability density function related to the noise of the problem at hand.

4 Conclusion

The main purpose of the random matrix regularization method (RMA) is to reduce matrix condition numbers and amplification factors in order to provide a better solution in the presence of ill-conditioned and noisy linear systems. The RMA method works by adding information to the original noisy system, rather than truncating it as other regularization techniques do. The method uses the probability distribution function that models the noise present in the data to add information to the ill-posed system, in order to achieve better accuracy in the system inversion compared with other analytical methods.

As seen in the results presented, the random matrix regularization method operates safely and satisfactorily on the uncertainties in the data, because it discards modifications that would violate the rules of Eqs. (8) and (9) and thus increase the errors. Moreover, in all the situations tested the presented method produced results that, according to the signal-to-error ratio, are closer to the reference curve than those of the other methods, showing the efficiency of the proposed approach.

In this paper a force identification problem was used to demonstrate the efficiency and feasibility of the random matrix method. However, the proposed approach can be applied to any kind of noisy inverse problem arising in different fields of study, such as medical imaging, signal processing, remote sensing and oceanography, among others.

Finally, it is important to note that the choice of the regularization parameters is in fact empirical, but it can be aided by any knowledge about the noise in the data. It is expected that a certain set of regularization parameters could optimize the force identification results. The problem of finding the best regularization parameters is currently being studied.

References

[1] Tarantola, A., Inverse Problem Theory and Methods for Model Parameter Estimation, Society for Industrial and Applied Mathematics, 2005.
[2] Tanaka, N. and Dulikravich, G. S., Inverse Problems in Engineering Mechanics, International Symposium on Inverse Problems in Engineering Mechanics 1998, Nagano, Japan, Elsevier, 1998.
[3] Ramm, A. G., Inverse Problems: Mathematical and Analytical Techniques with Applications to Engineering, Springer, 2005.
[4] Isakov, V., Inverse Problems for Partial Differential Equations, 2nd ed., Springer, 2006.

[5] Alifanov, O. M., Inverse Heat Transfer Problems, Springer, 1994.
[6] Oppenheim, A. V. and Willsky, A. S., Signals and Systems, 2nd ed., Prentice-Hall, 1997.
[7] Ben-Israel, A. and Greville, T. N. E., Generalized Inverses: Theory and Applications, 2nd ed., Springer, 2003.
[8] Hansen, C., Rank-Deficient and Discrete Ill-Posed Problems: Numerical Aspects of Linear Inversion, Society for Industrial and Applied Mathematics, 1998.
[9] Sarkar, T. K., Weiner, D., Jain, V., Some Mathematical Considerations in Dealing with the Inverse Problem, IEEE Transactions on Antennas and Propagation, 29(2), 1981.
[10] Noschese, S., Reichel, L., Inverse Problems for Regularization Matrices, Numerical Algorithms, 60(4), 531–544, 2012.
[11] Huang, G., Noschese, S., Reichel, L., Regularization Matrices Determined by Matrix Nearness Problems, Linear Algebra and Its Applications, 502, 41–57, 2016.
[12] Bickel, J., Levina, E., Regularized Estimation of Large Covariance Matrices, The Annals of Statistics, 36(1), 199–227, 2008.
[13] Schneider, P., Bunte, K., Stiekema, H., Hammer, B., Villmann, T., Biehl, M., Regularization in Matrix Relevance Learning, IEEE Transactions on Neural Networks, 21(5), 831–840, 2010.
[14] Kakade, M., Shalev-Shwartz, S., Tewari, A., Regularization Techniques for Learning with Matrices, Journal of Machine Learning Research, 13, 1865–1890, 2012.
[15] Rezayat, A., Nassiri, V., Vanlanduit, S., Guillaume, P., Force Identification Using Mixed Penalty Functions, Proceedings of the 9th International Conference on Structural Dynamics, 3783–3789, 2014.
[16] Lage, Y. E., Maia, N. M. M., Neves, M. M., Force Magnitude Reconstruction Using the Force Transmissibility Concept, Shock and Vibration, Article ID 905912, 9 pages, 2014.
[17] Lourens, E., Reynders, E., De Roeck, G., Degrande, G., Lombaert, G., An Augmented Kalman Filter for Force Identification in Structural Dynamics, Mechanical Systems and Signal Processing, 27, 446–460, 2012.
[18] Li, X., Zhao, H., Chen, Z., Wang, Q., Chen, J., Duan, D., Force Identification Based on a Comprehensive Approach Combining Taylor Formula and Acceleration Transmissibility, Inverse Problems in Science and Engineering, 26(11), 1612–1632, 2018.
[19] Qiao, B. J., Mao, Z., Chen, X. F., Sparse Representation for the Inverse Problem of Force Identification, Proceedings of ISMA 2016 Noise and Vibration Engineering Conference, 1685–1696, 2016.
[20] Annalisa, F., Mathematical Conditioning: An International Course on New Applications and Techniques of Experimental Modal Testing, Updating, Optimization, and Damage Detection, CADIS, 1998.
[21] Tikhonov, A. N., On the Stability of Inverse Problems, C. R. (Doklady) Acad. Sci. URSS (N.S.), 39, 176–179, 1943.
[22] Silva, N. A. J., Inverse Problems: Fundamental Concepts and Applications (in Portuguese), Universidade de São Paulo, 2005.
[23] Papoulis, A., Probability, Random Variables and Stochastic Processes, 4th ed., McGraw-Hill, 2001.
[24] Golub, G. H. and Van Loan, C. F., Matrix Computations, 3rd ed., Johns Hopkins University Press, 1996.
[25] Nascimento, V., Singular Value Decomposition (SVD) Lecture Notes (in Portuguese), Course PEE-5794 – Ferramentas de Análise Matricial para Aplicações em Engenharia Elétrica, Escola Politécnica, Universidade de São Paulo, 2004.
[26] Rao, S., Mechanical Vibrations, 4th ed., Pearson Prentice Hall, 2009.
[27] Bendat, J. S. and Piersol, A. G., Random Data: Analysis and Measurement Procedures, 2nd ed., John Wiley & Sons, 1986.
