Ershi Qi · Jiang Shen · Runliang Dou (Editors)
Industrial Engineering Institution of CME, College of Management and Economics,
Tianjin University, Tianjin, People's Republic of China

Chapter 1
A Bayesian Learning Approach for Making Procurement
Z. Xie and L. Zheng
1.1 Introduction
Making procurement policies is increasingly challenging for manufacturers dealing with commodities subject to violent price fluctuations, such as agricultural products, precious metals, and mineral and energy resources. The timing decision is particularly hard: procuring too early incurs unnecessary inventory holding costs and forgoes the opportunity to purchase at a possibly lower future price, while procuring too late may squander the chance to purchase earlier at a cheaper price.
There are two lines of literature related to this research: procurement models under purchase price uncertainty, and operations management studies involving information updating.
However, these studies all concern uncertain demand whose distribution is periodically updated based on newly obtained demand observations. Notably, Miller and Park (2005) incorporate Bayesian learning within a process design and capacity planning problem and identify a threshold to improve decision-making; we adopt their framework and apply it to our procurement problem.
Consider an oil refinery faced with multiple dynamic but deterministic demands for crude oil, which must be met and will be purchased at spot prices; quantities that arrive before they are demanded incur inventory holding costs. The assumption of a known demand pattern is realistic in this problem for three reasons. First, the various oil products serve several purposes, such as generating power and fueling automobiles and airplanes, so the overall oil demand arising from both daily life and industry does not fluctuate violently, especially compared with the volatility of the crude oil price. Second, since industry regulations in Chinese oil markets ensure a relatively steady supply of oil products, major refineries usually report their production plans yearly to the administration and stick to them once approved. Third, process industries such as oil refining, characterized by continuous production that cannot afford much disturbance, generally operate according to predetermined production plans out of economic considerations.
As researchers studying commodity spot prices have done, we assume that the spot price of each unit (e.g., barrel) of crude oil, S_t, follows a geometric Brownian motion with drift μ and volatility σ:

dS_t / S_t = μ dt + σ dW_t,   (1.1)

where W_t is a standard Wiener process. Note that the point in time that matters is when payment is made, not when the order is placed. The terms on valuation time periods are negotiable in practice, so for simplicity the purchasing price is represented by the spot price at delivery in the following derivations, and by the average of spot prices during the lead time in the empirical studies.
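The price process in Eq. (1.1) can be simulated directly with the exact lognormal step; a minimal sketch (the function name and parameter values here are illustrative, not taken from the paper's data):

```python
import math
import random

def simulate_gbm(s0, mu, sigma, dt, n_steps, rng=random.Random(0)):
    """Simulate one path of geometric Brownian motion
    dS_t / S_t = mu dt + sigma dW_t (Eq. 1.1)."""
    path = [s0]
    for _ in range(n_steps):
        z = rng.gauss(0.0, 1.0)
        # Exact step: S_{t+dt} = S_t * exp((mu - sigma^2/2) dt + sigma sqrt(dt) z)
        path.append(path[-1] * math.exp((mu - 0.5 * sigma**2) * dt
                                        + sigma * math.sqrt(dt) * z))
    return path

# Four half-month periods, as in the empirical setting later in the chapter
path = simulate_gbm(s0=100.0, mu=0.05, sigma=0.35, dt=1 / 24, n_steps=4)
```

The exact discretization is preferred over an Euler step because it keeps every simulated price strictly positive.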
Inventory holding cost consists of the cost of capital, cost of storage, insurance, breakage, and many other items, among which the cost of capital is the most significant component; we therefore assume the inventory holding cost (per unit per time) is charged at a fixed proportion 0 < h < 1 of the corresponding purchasing cost. That is,
h_t = h · S_t.   (1.2)

In our problem, modeling the holding cost this way is more reasonable than the commonly used stationary or independent settings, because the cost of capital is particularly high for the capital-intensive oil industry.
What's more, it is realistic to assume a lead time of, say, one period; i.e., the quantity ordered in the current period is delivered and paid for in the next period. Such a setting also emphasizes the difference between the spot price observed when the crude oil is ordered and the price paid when it is received. Assume the length of one period is Δt.
The refinery's decision is when and how many units to purchase each time so as to minimize the total expected cost over the planning horizon; in other words, the refinery behaves risk-neutrally when making procurement decisions. Risk-averse modeling can be found in Martínez-de-Albéniz and Simchi-Levi (2006) and could be incorporated into the model by adjusting the drift term.

In this procurement problem, the refinery chooses an ordering time for each quantity demanded, and every period in the planning horizon is an option. Say there are n demands in the planning horizon; because supply lead times are fixed, order i will always serve the ith demand, whose arrival time and quantity are denoted by T_i and D_i. We want an optimal series of arrival times for the ordered quantities. Denoting the ith order's delivery time by t_i (so its ordering time is t_i − Δt), the problem is

min_{t_i} E{ Σ_{i=1}^{n} e^{−r·t_i} [ D_i·S_{t_i} + D_i·h_{t_i}·(T_i − t_i) ] }.   (1.3)
Let F(t, S_t) be the minimum expected discounted total cost if the current time is t, the current unit price is S_t, and the firm has not yet made the purchase. The Bellman equation is

F(t, S_t) = min{ e^{−r(t+Δt)} · D · E[S_{t+Δt} | S_t] · [1 + h·(T − t − Δt)],  E[F(t + Δt, S_{t+Δt}) | S_t] },   (1.5)

where the first and second terms in braces on the right side of Eq. (1.5) are the "payoff" functions of termination and continuation, respectively. The objective function is F(0, S_0), where S_0 is the spot price of the material at time 0; the boundary condition is that the order must be placed by the last decision period, so F(T − Δt, S_{T−Δt}) = e^{−rT} · D · E[S_T | S_{T−Δt}].
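Because every term in the recursion scales linearly in the current price under GBM, the time-0 (passive) rule reduces to comparing the expected discounted cost of each candidate delivery time; a sketch under that observation (the function name and the test parameters are ours, not the paper's):

```python
import math

def passive_delivery_time(mu, r, h, T, dt):
    """Passive (time-0) timing rule implied by the Bellman recursion:
    under GBM, E[S_tau] = S0 * exp(mu * tau), so the expected discounted
    cost of taking delivery at time tau is proportional to
        g(tau) = exp((mu - r) * tau) * (1 + h * (T - tau)).
    Pick the candidate delivery period minimizing g."""
    n = round(T / dt)
    candidates = [k * dt for k in range(1, n + 1)]
    return min(candidates,
               key=lambda tau: math.exp((mu - r) * tau) * (1 + h * (T - tau)))
```

With a flat expected price and a positive discount rate, waiting until the demand date T is optimal; with a strongly rising expected price, ordering at the earliest period is optimal.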
Suppose we obtain at time 0 an estimate of the future spot price that follows a lognormal distribution; its logarithm is normally distributed with an unknown mean μ̃ and a known variance σ². We may regard μ̃ as a random variable and learn about it via Bayes' rule. Presume the prior belief is normally distributed as N(μ_00, σ_00). Spot prices between period 0 and period 1 are observed as time evolves from 0 to Δt, denoted by s_1k. There can be more than one price realization (k > 1) between consecutive decision points, because the spot price process evolves continuously and several discrete realizations can be sampled from it; for instance, we may make ordering decisions monthly while obtaining price information daily. Then, as shown in (Fink, "A compendium of conjugate priors", unpublished), the posterior distribution of μ̃ is also normal, with mean and standard deviation

μ_01 = a_1·μ_00 + (1 − a_1)·s̄_1,   σ_01 = √a_1 · σ_00,   a_1 = σ² / (σ² + k·σ_00²),   (1.8)

where s̄_1 is the mean of the k observations. The threshold for the active decision is

s*_1 = [ r − σ²/2 + ln(h·(T − 2Δt) + 1)/(T − 2Δt) − a_1·μ_00 ] / (1 − a_1),   (1.9)

where T − 2Δt > 0. In other words, we should advance the ordering time to now as long as s̄_1 > s*_1, and hold on to the prior decision otherwise.
If the prior decision remains unchanged at period 1, then a similar procedure should be followed whenever new information is obtained at later periods. It is straightforward to extend Property 2 to later periods.
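The conjugate update (1.8) and the threshold rule (1.9) can be sketched as follows (a minimal illustration; the variable names and test values are ours):

```python
import math

def posterior_update(mu00, sigma00, sigma, obs):
    """Conjugate normal update (Eq. 1.8): with k observations of the
    log-price drift signal, the posterior of the unknown mean is normal."""
    k = len(obs)
    s_bar = sum(obs) / k
    a1 = sigma**2 / (sigma**2 + k * sigma00**2)
    mu01 = a1 * mu00 + (1 - a1) * s_bar
    sigma01 = math.sqrt(a1) * sigma00
    return mu01, sigma01, a1

def order_now(s_bar, mu00, a1, r, sigma, h, T, dt):
    """Active rule (Eq. 1.9): advance the order iff the observed signal
    mean exceeds the threshold s1*."""
    mu_star = r - 0.5 * sigma**2 + math.log(h * (T - 2 * dt) + 1) / (T - 2 * dt)
    s_star = (mu_star - a1 * mu00) / (1 - a1)
    return s_bar > s_star
```

As expected of a conjugate update, the posterior mean lands between the prior mean and the sample mean, and the posterior standard deviation shrinks.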
We now apply the proposed models to the procurement problem faced by Chinese refineries, with empirical studies based on real-world data from the crude oil spot market. The benchmark is the current policy of Chinese refineries, under which orders are distributed almost evenly along the planning horizons.
Our data set comes from a public source (the U.S. Energy Information Administration) and consists of daily spot prices (USD per barrel) of WTI from 1986/1/2 to 2011/6/30 and of Brent from 1988/1/4 to 2011/6/30. As for the quantity of crude oil China has imported from the international market, yearly volume data are available from 1994 to 2009, summing to approximately 10.165 billion barrels.
We set half a month as the length of one period, which is also the lead time and the decision-making interval, and two months (four periods) as the length of one planning horizon. Taking the case "demand for WTI at 2008/4/30" as an example, a typical data processing procedure is as follows:
(1) Set 2008/3/1 as time 0 and 2008/4/30 as time T (N = 4); the target demand to be served equals 1; the annualized risk-free rate equals 3%, approximately the one-year Treasury bond rate; the annualized cost of capital, as a percentage of occupied capital, equals 36%, which is relatively high because the oil industry is capital-intensive, as emphasized before.
(2) Use historical data, WTI spot prices from 2008/1/1 to 2008/2/29, to estimate μ_00 and σ at time 0; we estimate from the past two months' data because we are making decisions for the next two months. Make the passive decision according to Property 1.
(3) For periods 1 to N − 1, if the order has not yet been placed, treat the spot prices observed in each period as additional information; calculate the parameters and statistics and make the active decision according to Property 2.
(4) Use real-world price data to statistically assess the effectiveness of the proposed decision models against the benchmark; the costs are calculated by averaging the costs along the planning horizons.
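Step (2), estimating μ_00 and σ from a window of historical prices, can be sketched as follows (illustrative code, not the authors' implementation; the test series is synthetic):

```python
import math
import statistics

def estimate_drift_vol(prices, dt):
    """Estimate GBM parameters from historical spot prices:
    log-returns x_i = ln(S_i / S_{i-1}) are i.i.d.
    N((mu - sigma^2/2) * dt, sigma^2 * dt)."""
    logret = [math.log(b / a) for a, b in zip(prices, prices[1:])]
    sigma = statistics.stdev(logret) / math.sqrt(dt)
    mu = statistics.mean(logret) / dt + 0.5 * sigma**2
    return mu, sigma
```

In the chapter's setting, `prices` would be the two months of daily WTI quotes before time 0 and `dt` the spacing of the observations in years.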
Given spot price data from 1986/1988 to 2011 and volume data from 1994 to 2009, we perform rough yet reasonable calculations of the total costs by multiplying the average unit purchase cost over the 24/26 years by the aggregate quantities of the 16 years. We compare the proposed models with the current policy; the results are summarized in Table 1.1.
Surprisingly, the passive model costs more than the current policy, perhaps because the real-world oil price trend changes so frequently that it does not pay for the refinery to act passively on the decision made at the beginning of the planning horizon. The active model, however, saves the refinery a substantial amount. Note that all the numbers are in billions of US dollars; although they are not precise, the improvements are still considerable.
1.5 Conclusion
Acknowledgments This study is supported in part by the National Natural Science Foundation
of China (Grant No. 70771058/60834004), and the 863 Program of China (2008AA04Z102).
The authors thank H. S. Deng, X. F. Li, A. B. Pang, and G. P. Xiao from the China Petroleum
& Chemical Corporation for the detailed information on refinery procurement operations. The
authors also thank Prof. C. S. Park from Auburn University for his patient instruction on the Bayesian learning approach.
References
Azoury KS (1985) Bayes solution to dynamic inventory models under unknown demand
distribution. Manage Sci 31(9):1150–1160
Berling P, Martínez-de-Albéniz V (2011) Optimal inventory policies when purchase price and
demand are stochastic. Oper Res 59(1):109–124
Fabian T, Fisher JL, Sasieni MW, Yardeni A (1959) Purchasing raw material on a fluctuating
market. Oper Res 7(1):107–122
Fink D (unpublished) A compendium of conjugate priors
Gaur V, Seshadri S, Subrahmanyam MG (unpublished) Optimal timing of inventory decisions with price uncertainty
Golabi K (1985) Optimal inventory policies when ordering prices are random. Oper Res
33(3):575–588
Gurnani H, Tang C (1999) Optimal ordering decisions with uncertain cost and demand forecast
updating. Manage Sci 45(10):1456–1462
Jiang XY (2010) The risk analysis of imported crude oil valuation (Chinese). Petrol Petrochem
Today 18(6):41–44
Kalymon BA (1971) Stochastic prices in a single-item inventory purchasing model. Oper Res
19(6):1434–1458
Karlin S (1960) Dynamic inventory policy with varying stochastic demands. Manage Sci
6(3):231–258
Li C, Kouvelis P (1999) Flexible and risk-sharing supply contracts under price uncertainty.
Manage Sci 45(10):1378–1398
Martínez-de-Albéniz V, Simchi-Levi D (2006) Mean-variance trade-offs in supply contracts. Nav
Res Logist 53(7):603–616
Miller LT, Park CS (2005) A learning real options framework with application to process design
and capacity planning. Prod Oper Manage 14(1):5–20
Scarf H (1959) Bayes solutions of the statistical inventory problem. Ann Math Stat
30(2):490–508
Secomandi N, Kekre S (unpublished) Commodity procurement with demand forecast and forward price updates
Yi J, Scheller-Wolf A (unpublished) Dual sourcing from a regular supplier and a spot market
Yu C, Fang J (2005) Optimization and control over the purchasing costs of imported crude oil (in Chinese). Int Petrol Econ 13(8):44–46
Chapter 2
A Class of Robust Solution for Linear
Bilevel Programming
Abstract Under centralized decision-making, we study linear bilevel programming (BLP) whose coefficients are unknown but bounded within a box disturbance set. Accordingly, a class of robust solutions for linear BLP is defined, the original uncertain BLP is converted into a deterministic triple-level program, and a solving process is proposed for the robust solution. Finally, a numerical example demonstrates the effectiveness and feasibility of the algorithm.

Keywords: Box disturbance · Linear bilevel programming · Robust optimization · Robust solution
2.1 Introduction
B. Liu · B. Li · Y. Li
School of Management, Tianjin University, Tianjin, China

B. Liu (corresponding author)
School of Information Science and Technology, Shihezi University, Xinjiang, China
e-mail: [email protected]

Y. Li
School of Science, Shihezi University, Xinjiang, China
s.t. Ax + By ≥ h,
x, y ≥ 0.
In model (2.1), for l ∈ {1, 2}, i ∈ {1, …, m}, j ∈ {1, …, n}, k ∈ {1, …, r}, the c_li, d_lj, a_ki, b_kj, h_k are the given nominal data, and (u_{c_l})_i, (u_{d_l})_j, (u_A)_ki, (u_B)_kj, (u_h)_k are the given nonnegative disturbance bounds.
Under centralized decision-making, the robust solution of the uncertain BLP (2.1) is defined as follows, where U denotes the box disturbance set:

Definition 1
(1) Constraint region of the linear BLP (2.1):
Ω = {(x, y) | Ax + By ≥ h, x, y ≥ 0, ∀(A, B, h) ∈ U}
(2) Feasible set of the follower for each fixed x:
Ω(x) = {y | Ax + By ≥ h, x, y ≥ 0, ∀(A, B, h) ∈ U}
(3) Follower's rational reaction set for each fixed x:
M(x) = { y | y ∈ arg min { c₂ᵀx + d₂ᵀy : y ∈ Ω(x), (A, B, h) ∈ U } }
Definition 2 Let

F := { (x, y, t) ∈ Rᵐ × Rⁿ × R | c₁ᵀx + d₁ᵀy ≤ t, (x, y) ∈ Ω, ∀(c₁, d₁) ∈ U }.

The programming

min_{x,y,t} { t | (x, y, t) ∈ F }   (2.3)
Under centralized decision-making, and based on the basic idea of robust optimization—that the objective should remain optimal even in the worst uncertain situation—the transformation theorem can be stated as follows:

Theorem The robust linear BLP (2.1), with its coefficients unknown but bounded in the box disturbance set U, is equivalent to model (2.4) with certain coefficients, whose upper-level objective is

min_x F(x, y) = Σ_{i=1}^{m} (c_{1i} + (u_{c1})_i)·x_i + Σ_{j=1}^{n} (d_{1j} + (u_{d1})_j)·y_j   (2.4)

subject to the worst-case constraints with k ∈ {1, …, r} and x, y ≥ 0.
Proof (1) First, the constraint region Ω of the linear BLP (2.1) is transformed into a deterministic region. Consider the robust constraints

Ax + By ≥ h for all (A, B, h) ∈ U,

where each coefficient varies in the box a_ki ∈ [ā_ki − (u_A)_ki, ā_ki + (u_A)_ki], b_kj ∈ [b̄_kj − (u_B)_kj, b̄_kj + (u_B)_kj], h_k ∈ [h̄_k − (u_h)_k, h̄_k + (u_h)_k], for i ∈ {1, …, m}, j ∈ {1, …, n}, k ∈ {1, …, r}. The robust constraints hold

⟺ 0 ≤ min_{(A,B,h)∈U} { Σ_{i=1}^{m} a_ki·x_i + Σ_{j=1}^{n} b_kj·y_j − h_k },  k ∈ {1, …, r}.

Since x, y ≥ 0, the minimum is attained at a_ki = ā_ki − (u_A)_ki, b_kj = b̄_kj − (u_B)_kj, h_k = h̄_k + (u_h)_k, hence

⟺ Σ_{i=1}^{m} (ā_ki − (u_A)_ki)·x_i + Σ_{j=1}^{n} (b̄_kj − (u_B)_kj)·y_j ≥ h̄_k + (u_h)_k,  k ∈ {1, …, r}.   (2.5)
So the linear BLP (2.1) is transformed into model (2.6), whose constraints are the deterministic inequalities (2.5), k ∈ {1, …, r}, together with x, y ≥ 0.
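The worst-case substitution behind (2.5)–(2.6) is mechanical, which makes it easy to sketch numerically (the function names and test data are ours, purely illustrative):

```python
def robust_counterpart(A, B, h, uA, uB, uh):
    """Worst-case (Eq. 2.5) form of A x + B y >= h over the box set:
    since x, y >= 0, the binding case is A - uA, B - uB, and h + uh."""
    Ar = [[a - u for a, u in zip(rowA, rowU)] for rowA, rowU in zip(A, uA)]
    Br = [[b - u for b, u in zip(rowB, rowU)] for rowB, rowU in zip(B, uB)]
    hr = [hk + u for hk, u in zip(h, uh)]
    return Ar, Br, hr

def robust_feasible(x, y, Ar, Br, hr):
    """Check the deterministic robust constraints for a nonnegative point."""
    return all(
        sum(a * xi for a, xi in zip(rowA, x))
        + sum(b * yj for b, yj in zip(rowB, y)) >= hk
        for rowA, rowB, hk in zip(Ar, Br, hr)
    )
```

A point satisfying the shrunken left-hand side against the inflated right-hand side is feasible for every realization of (A, B, h) in the box.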
(2) Next, according to the equivalent epigraph form (Lobo et al. 1998)

min_x f(x) s.t. x ∈ D   ⟺   min_{x,t} t s.t. f(x) ≤ t, x ∈ D,

and the Kuhn–Tucker (K–T) method, model (2.6) can be transformed into model (2.7) (Li and Du 2011):
min_{x,t} F(x, y) = t

subject to the constraints of (2.6), k ∈ {1, …, r}, x, y ≥ 0, which is model (2.7). Equivalently, model (2.8) writes the worst-case upper-level objective explicitly:

min_{x,t} Σ_{i=1}^{m} (c_{1i} + (u_{c1})_i)·x_i + Σ_{j=1}^{n} (d_{1j} + (u_{d1})_j)·y_j,  k ∈ {1, …, r},  x, y ≥ 0.
(4) Next, because the optimal solution of BLP (2.1) is not influenced by the value of c₂, we only consider how to choose the value of d₂. Based on the basic idea of robust optimization, model (2.8) is transformed into model (2.4) above. The deterministic triple-level program (2.4) can then be written as program (2.9) by the K–T method.
min_x max_{d₂,y,u,v} F(x, y) = Σ_{i=1}^{m} (c_{1i} + (u_{c1})_i)·x_i + Σ_{j=1}^{n} (d_{1j} + (u_{d1})_j)·y_j

s.t. d_{2j} = Σ_{k=1}^{r} u_k·(b̄_kj − (u_B)_kj) + v_j,
d̄_{2j} − (u_{d2})_j ≤ d_{2j} ≤ d̄_{2j} + (u_{d2})_j,
u_k·[ Σ_{i=1}^{m} (ā_ki − (u_A)_ki)·x_i + Σ_{j=1}^{n} (b̄_kj − (u_B)_kj)·y_j − (h̄_k + (u_h)_k) ] = 0,   (2.9)
v_j·y_j = 0,
Σ_{i=1}^{m} (ā_ki − (u_A)_ki)·x_i + Σ_{j=1}^{n} (b̄_kj − (u_B)_kj)·y_j ≥ h̄_k + (u_h)_k,
x, y, u, v ≥ 0.
min_{x,d₂,y,u,v,t} t

s.t. Σ_{i=1}^{m} (c_{1i} + (u_{c1})_i)·x_i + Σ_{j=1}^{n} (d_{1j} + (u_{d1})_j)·y_j ≤ t,
d_{2j} = Σ_{k=1}^{r} u_k·(b̄_kj − (u_B)_kj) + v_j,
d̄_{2j} − (u_{d2})_j ≤ d_{2j} ≤ d̄_{2j} + (u_{d2})_j,
u_k·[ Σ_{i=1}^{m} (ā_ki − (u_A)_ki)·x_i + Σ_{j=1}^{n} (b̄_kj − (u_B)_kj)·y_j − (h̄_k + (u_h)_k) ] = 0,   (2.10)
v_j·y_j = 0,
Σ_{i=1}^{m} (ā_ki − (u_A)_ki)·x_i + Σ_{j=1}^{n} (b̄_kj − (u_B)_kj)·y_j ≥ h̄_k + (u_h)_k,
x, y, u, v ≥ 0.
Finally, the complementarity conditions are linearized with a sufficiently large constant M and binary variables g_j, w_k ∈ {0, 1}:

min_{x,d₂,y,u,v,t,g,w} t

s.t. Σ_{i=1}^{m} (c_{1i} + (u_{c1})_i)·x_i + Σ_{j=1}^{n} (d_{1j} + (u_{d1})_j)·y_j ≤ t,
d_{2j} = Σ_{k=1}^{r} u_k·(b̄_kj − (u_B)_kj) + v_j,
d̄_{2j} − (u_{d2})_j ≤ d_{2j} ≤ d̄_{2j} + (u_{d2})_j,
y_j ≤ M·g_j,
v_j ≤ M·(1 − g_j),
u_k ≤ M·w_k,
Σ_{i=1}^{m} (ā_ki − (u_A)_ki)·x_i + Σ_{j=1}^{n} (b̄_kj − (u_B)_kj)·y_j − (h̄_k + (u_h)_k) ≤ M·(1 − w_k),
Σ_{i=1}^{m} (ā_ki − (u_A)_ki)·x_i + Σ_{j=1}^{n} (b̄_kj − (u_B)_kj)·y_j ≥ h̄_k + (u_h)_k,
x, y, u, v ≥ 0.
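The big-M step that replaces the complementarity conditions of (2.10) with linear constraints can be checked numerically; a sketch (M must exceed any attainable value of u_k or of the constraint slack, and the test data are illustrative):

```python
def complementarity_ok(u, s, w, M):
    """Big-M linearization of the complementarity condition u_k * s_k = 0
    (with u_k, s_k >= 0): the linear constraints u_k <= M * w_k and
    s_k <= M * (1 - w_k), with binary w_k, allow exactly the pairs in
    which at least one of u_k, s_k is zero."""
    return all(
        uk >= 0 and sk >= 0 and uk <= M * wk and sk <= M * (1 - wk)
        for uk, sk, wk in zip(u, s, w)
    )
```

Each binary w_k selects which side of the complementary pair is allowed to be positive; a pair with both members positive violates the constraints for either value of w_k.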
(u_{c1})₁ = 0.5, (u_{d1})₁ = 0.5, (u_{d1})₂ = 3, (u_{d2})₁ = 0.5, (u_{d2})₂ = 0.5,
(u_B)₃₁ = 0.5, (u_B)₃₂ = 0.25, (u_h)₃ = 1, (u_h)₄ = 0.25.

According to the theorem and the data above, the transformed robust model is
min_{x,y,u,v,g,z,t} t

s.t. 2x − y₁ + y₂ ≤ t,
2.5 ≤ x ≤ 8,
y₂ ≤ 5.5,
1.5y₁ − 1.3y₂ ≥ 3,
4x + y₁ − 2y₂ ≥ −10,
4y₁ − 1.5y₂ ≥ −22,
1 ≤ 1.5u₁ + u₂ − 4u₃ + v₁ ≤ 2,
−4 ≤ −1.3u₁ − 2u₂ − 1.5u₃ − u₄ + v₂ ≤ −3,
y₁ ≤ M·g₁,  y₂ ≤ M·g₂,
v₁ ≤ M·(1 − g₁),  v₂ ≤ M·(1 − g₂),
u_k ≤ M·z_k,  k = 1, 2, 3, 4,
1.5y₁ − 1.3y₂ − 3 ≤ M·(1 − z₁),
4x + y₁ − 2y₂ + 10 ≤ M·(1 − z₂),
4y₁ − 1.5y₂ + 22 ≤ M·(1 − z₃),
−y₂ + 5.5 ≤ M·(1 − z₄),
x, y, u, v ≥ 0,
g₁, g₂ ∈ {0, 1},  z_k ∈ {0, 1},  k = 1, 2, 3, 4.
Under centralized decision-making, a class of robust solutions for uncertain linear BLP has been defined, which further expands the application of BLP to different circumstances. Based on the basic idea of robust optimization, the uncertain BLP is converted into a deterministic triple-level program, and a solving process is proposed to obtain the robust solution of the uncertain linear BLP. Finally, a numerical example demonstrates the effectiveness and feasibility of the algorithm.
References
Bialas WF, Karwan MH (1982) On two-level optimization. IEEE Trans Autom Control AC-27(1):211–214
Dempe S (2002) Foundations of bilevel programming. Kluwer Academic Publisher, Boston
Fortuny-Amat J, McCarl BA (1981) A representation and economic interpretation of a two-level programming problem. J Oper Res Soc 32(7):783–792
Lai YJ (1996) Hierarchical optimization: a satisfactory solution. Fuzzy Sets and Syst 77:321–335
Li Y, Du G (2011) Robust linear bilevel programming under ellipsoidal uncertainty. Syst Eng
11:96–100
Lobo MS, Vandenberghe L, Boyd S, Lebret H (1998) Applications of second-order cone programming. Linear Algebra Appl 284:193–228
Mathieu R, Pittard L, Anandalingam G (1994) Genetic algorithm based approach to bi-level
linear programming. Oper Res 28:1–21
Soyster AL (1973) Convex programming with set-inclusive constraints and applications to
inexact linear programming. Oper Res 21:1154–1157
Wang J (2010) Research on the methods of interval linear bi-level programming, pp 54–55.
Tianjin University, Tianjin
Chapter 3
A Comparison of the Modified
Likelihood-Ratio-Test-Based Shewhart
and EWMA Control Charts
for Monitoring Binary Profiles
Abstract Profile monitoring checks and evaluates the stability, over time, of the functional relationship (the profile) between a response variable and one or more explanatory variables. Many studies assume that the response variable follows a continuous normal distribution, while in fact it can be discrete, as with binary profiles; at present there is little research in this field. Based on an in-control binary dataset, this paper uses the logistic regression model to estimate the parameters in Phase I. In Phase II, we apply a bisection search to modify the calculation of the UCL of the likelihood-ratio-test-based Shewhart and EWMA control charts. Moreover, using the estimated parameters, the ARL performance of the two modified control charts under different parameter deviations is compared.
Keywords: Binary profile · Logistic regression model · Bisection search · ARL
3.1 Introduction
An iterative algorithm is used here to obtain a more accurate value of β, following Yeh et al. (2009); the steps are listed below.

(1) Initialize the estimate β̂ = β̂₀, where β̂₀ can be acquired by ordinary least squares (OLS) estimation. Set i = 0;
(2) According to β̂ᵢ, calculate ĝᵢ, p̂ᵢ, μ̂ᵢ, Ŵᵢ;
(3) Calculate q̂ᵢ = ĝᵢ + (Ŵᵢ)⁻¹·(y − μ̂ᵢ);
(4) Update the estimate of β by calculating β̂ᵢ₊₁ = (Xᵀ·Ŵᵢ·X)⁻¹·Xᵀ·Ŵᵢ·q̂ᵢ, and set i = i + 1;
(5) Repeat steps (2) through (4) for l iterations, until ‖β̂ₗ − β̂ₗ₋₁‖ / ‖β̂ₗ₋₁‖ ≤ a, where ‖·‖ is the Euclidean norm and a is a sufficiently small constant (a = 10⁻⁵ here). Then β̂ = β̂ₗ is the desired estimator of β.
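For a single explanatory variable, the IRLS update above coincides with Newton–Raphson on the logistic log-likelihood and can be written without matrix libraries; a minimal sketch (the data in the test are illustrative, not the chapter's dataset):

```python
import math

def irls_logistic_2param(x, y, tol=1e-8, max_iter=100):
    """Iteratively reweighted least squares for logit(p) = b0 + b1 * x,
    the Phase-I fitting step of the binary-profile model (two-parameter
    case written out explicitly)."""
    b0, b1 = 0.0, 0.0
    for _ in range(max_iter):
        p = [1.0 / (1.0 + math.exp(-(b0 + b1 * xi))) for xi in x]
        w = [pi * (1 - pi) for pi in p]
        # Score (gradient of the log-likelihood) and 2x2 Fisher information
        g0 = sum(yi - pi for yi, pi in zip(y, p))
        g1 = sum((yi - pi) * xi for yi, pi, xi in zip(y, p, x))
        i00 = sum(w)
        i01 = sum(wi * xi for wi, xi in zip(w, x))
        i11 = sum(wi * xi * xi for wi, xi in zip(w, x))
        det = i00 * i11 - i01 * i01
        # Newton step: solve the 2x2 system I * d = g
        d0 = (i11 * g0 - i01 * g1) / det
        d1 = (i00 * g1 - i01 * g0) / det
        b0, b1 = b0 + d0, b1 + d1
        if math.hypot(d0, d1) < tol:
            break
    return b0, b1
```

At convergence the score is zero, so the fitted probabilities sum exactly to the number of observed successes, a quick sanity check on the fit.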
This shows that the second-order model is not fit for the binary dataset and that the first-order model set here is suitable.
Following the calculation process explained in Zhu (2008), the likelihood ratio test statistic can be defined as

λ = ℓ_H1 − ℓ_H0,   (3.7)

where ℓ_H1 and ℓ_H0 are the log-likelihoods; λ is compared with a critical value determined by a pre-defined type-I error rate α, and if λ is greater than the critical value, at least one parameter has shifted.
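For binary observations the statistic (3.7) is just a difference of Bernoulli log-likelihoods; a sketch (the probability vectors in the test are hypothetical):

```python
import math

def bernoulli_loglik(y, p):
    """Log-likelihood of 0/1 observations y under success probabilities p."""
    eps = 1e-12  # guard against log(0) for probabilities at the boundary
    return sum(yi * math.log(max(pi, eps)) + (1 - yi) * math.log(max(1 - pi, eps))
               for yi, pi in zip(y, p))

def lrt_statistic(y, p_h1, p_h0):
    """Eq. (3.7): lambda = l_H1 - l_H0, to be compared with a critical
    value set by the type-I error rate alpha."""
    return bernoulli_loglik(y, p_h1) - bernoulli_loglik(y, p_h0)
```

When p_h1 comes from the unrestricted fit and p_h0 from the in-control model, λ is nonnegative for nested hypotheses, and large values signal a parameter shift.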
In Phase I, the fitted logistic regression model is log(p/[1 − p]) = b₀ + b₁·log(x), and the obtained parameter estimate is β̂ = (b̂₀, b̂₁)ᵀ = (−42.0537, 5.1704)ᵀ.
In Phase II, we generate deviations in the parameters b₀ and b₁ to evaluate the ARL performance of the modified Shewhart.LRT and EWMA.LRT control charts and to investigate which parameter is more sensitive under given deviations. The pre-defined type-I error rate α is set to 0.005, which yields an in-control ARL of 200; for the EWMA.LRT control chart, the smoothing constant h is set to 0.2. The ARL values of the two modified control charts are both based on 10,000 Monte Carlo simulations; the results are shown in Table 3.1.

In Table 3.1, ARL1 and ARL2 denote the ARL values of the Shewhart.LRT and EWMA.LRT control charts, respectively. As the parameter deviation grows, the ARLs of both modified charts decrease, and the ARLs of the EWMA.LRT chart are smaller than those of the Shewhart.LRT chart at the same level of deviation, showing that it performs better at monitoring this binary process. Meanwhile, between the parameters b₀ and b₁, b₁ is more sensitive than b₀ when a deviation happens, as is evident from the ARL results. Table 3.1 shows only increasing shifts of the deviated parameters; similar results hold for decreasing shifts, which are not shown here.
Table 3.1 ARL performance of the modified control charts under parameter deviations

Δb₀      ARL1      ARL2      Δb₁      ARL1      ARL2
0        201.443   200.632   0        202.206   200.468
0.002    173.541   152.457   0.002    93.632    80.653
0.006    149.675   107.453   0.006    27.184    24.339
0.01     115.058   84.445    0.01     9.547     6.542
0.014    98.246    65.994    0.014    4.429     3.875
0.018    82.882    41.123    0.018    2.460     2.087
0.022    70.253    33.682    0.022    1.650     1.456
0.026    60.620    22.546    0.026    1.278     0.897
0.03     52.553    18.578    0.03     1.095     0.786
0.034    45.207    12.125    0.034    1.043     0.743
0.038    38.162    11.557    0.038    1.001     0.725
0.042    35.511    8.546     0.042    1.002     0.708
0.046    29.484    6.078     0.046    1.000     0.704
0.05     25.953    5.271     0.05     0.998     0.698
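The ARL entries of Table 3.1 come from averaging Monte-Carlo run lengths; the mechanics can be sketched with a memoryless chart whose per-sample false-alarm rate is α = 0.005, so its true in-control ARL is 1/α = 200 (the helper names are ours, and a real Shewhart.LRT/EWMA.LRT run would replace the geometric run-length generator):

```python
import random

def estimate_arl(run_length_once, n_sim=2000, rng=random.Random(1)):
    """Monte-Carlo ARL estimate: average number of samples until the
    chart signals, over n_sim independent simulated runs."""
    return sum(run_length_once(rng) for _ in range(n_sim)) / n_sim

def geometric_run(p_signal):
    """For a memoryless chart with per-sample signal probability p,
    the run length is geometric with mean 1/p."""
    def run(rng):
        n = 1
        while rng.random() >= p_signal:
            n += 1
        return n
    return run

arl = estimate_arl(geometric_run(0.005))  # close to the design value 200
```

An EWMA chart is not memoryless, so its run lengths must be simulated by actually updating the statistic sample by sample, but the averaging step is identical.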
3.7 Conclusion
References
Chong-yi Jing
Abstract There are few quantitative studies of the air ticket price control problem. In this paper we establish a game-theoretic decision model of price control for the government by introducing two new factors: consumer's surplus (the public welfare) and the passenger load rate (LR). We draw some interesting conclusions from the modeling and discussion. The administering authority, CAAC (Civil Aviation Administration of China), is inclined to ignore the public welfare by setting a higher control price, while the airlines are always inclined to disobey CAAC's control price in order to achieve a higher passenger load rate and strengthen their competitive edge. On the whole, the optimal strategy of CAAC is to set an inter-zone (band) control price, and the optimal strategy of the airlines is to self-determine a price within that band. The root of this decision dissonance is that the two players' cost evaluations of ticket pricing differ tremendously.
4.1 Introduction
There are many studies of air ticket control, but they are almost all limited to qualitative analysis. Some use natural monopoly theory (Zhang 2005; Liu 2006) and some use welfare economics (Liu 2002) to analyze the problem, with largely similar conclusions: the government should relax control of air ticket prices and need not intervene in the operation or management of airlines (Liu 2002). Li and Deng (2003) and Min and Yang (2003) conclude that the civil aviation of China has many pertinacious problems such as oversimplified ticket pricing and cut-throat competition.
C. Jing (corresponding author)
Aviation Transportation Management School, Civil Aviation Flight University of China, Guanghan, China
e-mail: [email protected]
4.2 Modeling
Hypothesis1: the game process is repetitive and limited, and the information for
each other is imperfect. (For example, cost, intension and so on).
Hypothesis2: the CAAC and airlines in China have some common interests and
close relations although the former is supervisor of the industry (for example, at
aspects of finance and political achievements).
Hypothesis3: at present in China, the general ticket demand is price inelastic.
As one player of the game, CAAC has two pure strategies: "Consider the public welfare" (hereinafter "Consider"), under which the lower control price is p1, and "Ignore the public welfare" ("Ignore"), under which the higher control price is p2; in mixed strategies CAAC plays Consider with probability a and Ignore with probability 1 − a. The airlines either "Abide" by the control price (probability β) or "Disobey" it (probability 1 − β), in the latter case setting the self-determined prices p01 or p02.
(Fig. 4.1: the game tree of CAAC and the airlines)
Figure 4.2 shows the payoff matrix of the two players. We define their payoff functions as follows:

π_CAAC(CA): the payoff of CAAC when CAAC considers the public welfare and the airlines abide by the price control. Essentially, this payoff equals the total surplus TS, which contains CS and ES. Here CS = ∫₀^{d(p1)^k} f(p) dp and ES = (p1 − c1)·d(p1)^k, where d(p)^k denotes the quantity demanded at price p, so the payoff of CAAC is

π_CAAC(CA) = ∫₀^{d(p1)^k} f(p) dp + (p1 − c1)·d(p1)^k.
π_airlines(CA): the payoff of the airlines when they abide by the price control and CAAC considers the public welfare. Under these circumstances, the payoff of the airlines is

π_airlines(CA) = (p1 − c2)·d(p1)^k.

π_airlines(CD): the payoff of the airlines when they disobey the price control and CAAC considers the public welfare:

π_airlines(CD) = (p01 − c2)·LR(p01) − TCL.

π_CAAC(IA): the payoff of CAAC when CAAC ignores the public welfare and the airlines abide by the price control. The payoff then contains only ES, so

π_CAAC(IA) = (p2 − c1)·d(p2)^k.

π_airlines(ID): the payoff of the airlines when they disobey the price control and CAAC ignores the public welfare:

π_airlines(ID) = (p02 − c2)·LR(p02) − TCL.
According to the model above, the mixed strategy of CAAC is h1 = (a, 1 − a) and the mixed strategy of the airlines is h2 = (β, 1 − β). The payoff function of CAAC can be expressed as

u1(h1, h2) = a·[ β·( ∫₀^{d(p1)^k} f(p) dp + (p1 − c1)·d(p1)^k ) + (1 − β)·( ∫₀^{d(p01)^k} f(p) dp + (p01 − c1)·d(p01)^k ) ]
+ (1 − a)·[ β·(p2 − c1)·d(p2)^k + (1 − β)·(p02 − c1)·d(p02)^k ],   (4.1)

and the payoff function of the airlines as

u2(h1, h2) = β·[ a·(p1 − c2)·d(p1)^k + (1 − a)·(p2 − c2)·d(p2)^k ]
+ (1 − β)·[ a·( (p01 − c2)·LR(p01) − TCL ) + (1 − a)·( (p02 − c2)·LR(p02) − TCL ) ].   (4.3)

Setting the corresponding indifference (first-order) conditions yields Eqs. (4.2) and (4.4), the equilibrium values of a and β.
According to Hypothesis 1, the game is finite with imperfect information: the airlines cannot distinguish whether the control price was set on the basis of public welfare, and CAAC cannot easily learn the airlines' cost information and competitive intentions. In a finite period, CAAC sets a control price; if the airlines intend to disobey, they are inclined to set the self-determined price uniformly as p0, so p01 = p02 = p0, which can be substituted into (4.2) and (4.4) to obtain the new a and β:

β = ∫₀^{d(p0)^k} f(p) dp / [ ∫₀^{d(p0)^k} f(p) dp + (p2 − c1)·d(p2)^k − (p1 − c1)·d(p1)^k − ∫₀^{d(p1)^k} f(p) dp ].   (4.5)
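Equilibrium mixing probabilities such as (4.5) come from the standard indifference conditions of a 2×2 game, which can be solved generically; a numerical sketch (the payoff matrices in the test are illustrative, not the model's):

```python
def mixed_equilibrium(pay1, pay2):
    """Mixed-strategy probabilities for a 2x2 game via indifference
    conditions: pay1[i][j] is the row player's (CAAC's) payoff and
    pay2[i][j] the column player's (airlines'). Returns (alpha, beta) =
    P(row plays its first strategy), P(column plays its first strategy)."""
    # beta makes the row player indifferent between its two rows:
    # beta*pay1[0][0] + (1-beta)*pay1[0][1] = beta*pay1[1][0] + (1-beta)*pay1[1][1]
    beta = (pay1[1][1] - pay1[0][1]) / (
        pay1[0][0] - pay1[0][1] - pay1[1][0] + pay1[1][1])
    # alpha makes the column player indifferent between its two columns
    alpha = (pay2[1][1] - pay2[1][0]) / (
        pay2[0][0] - pay2[0][1] - pay2[1][0] + pay2[1][1])
    return alpha, beta
```

Plugging the payoff expressions of Sect. 4.2 into `pay1` and `pay2` reproduces formulas of the shape of (4.5); the closed form exists only when the denominators are nonzero (no pure-strategy dominance).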
4.3 Discussion
Rewriting (4.5),

β = 1 / ( 1 + [ (p2 − c1)·d(p2)^k − (p1 − c1)·d(p1)^k − ∫₀^{d(p1)^k} f(p) dp ] / ∫₀^{d(p0)^k} f(p) dp ).

Because 0 ≤ β ≤ 1,

[ (p2 − c1)·d(p2)^k − (p1 − c1)·d(p1)^k − ∫₀^{d(p1)^k} f(p) dp ] / ∫₀^{d(p0)^k} f(p) dp ≥ 0,

and therefore

(p2 − c1)·d(p2)^k ≥ (p1 − c1)·d(p1)^k + ∫₀^{d(p1)^k} f(p) dp.   (4.7)
The left side of (4.7) is π_CAAC(IA), while the right side is π_CAAC(CA), so (4.7) says π_CAAC(IA) ≥ π_CAAC(CA): CAAC has no motivation to consider the public welfare; that is to say, CAAC is apt to maintain a higher industry price unilaterally rather than decrease the control price for the public. In effect, CAAC transfers the public welfare to the enterprises and the civil aviation industry by pricing at a higher level (Shaffer 2001). According to Hypothesis 2, CAAC and the airlines in China have many common interests and close relations: not only in finance, but also because the political achievements of CAAC rely on the development and stability of the civil aviation industry. From this point of view, the result corresponds to Hypothesis 2 and also to reality.
Because ticket demand is price-inelastic (Hypothesis 3) and p1 < p2, the parameter a must satisfy the restrictions

a = [ (p0 − c2)·LR(p0) − TCL − (p2 − c2)·d(p2)^k ] / [ (p1 − c2)·d(p1)^k − (p2 − c2)·d(p2)^k ],
(p1 − c2)·d(p1)^k − (p2 − c2)·d(p2)^k < 0,
0 ≤ a ≤ 1.   (4.8)

Referring to (4.8), we get the calculation result:
Based on discussion A, the optimal strategy payoff of CAAC is (p2 − c1)·d(p2)^k, whose first-order condition ∂[(p2 − c1)·d(p2)^k]/∂p2 = 0 gives p2* = (1 + k)·c1/k. Based on discussion B, the optimal strategy payoff of the airlines is (p0 − c2)·LR(p0) − TCL, whose first-order condition ∂[(p0 − c2)·LR(p0) − TCL]/∂p0 = 0 gives p0* = c2 − LR(p0*)/LR′(p0*), where LR′ is the first derivative of LR.
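Both first-order conditions can be evaluated numerically; a sketch in which the load-rate function LR is a hypothetical linear example, since the model itself leaves LR unspecified:

```python
def optimal_control_price(c1, k):
    """CAAC's optimal 'Ignore' price from the first-order condition
    d[(p2 - c1) * d(p2)^k] / dp2 = 0, which (as reconstructed from the
    text) gives p2* = (1 + k) * c1 / k."""
    return (1 + k) * c1 / k

def optimal_self_price(c2, lr, lr_prime, p_init=1.0, n_iter=200):
    """Airlines' self-determined price solving p0 = c2 - LR(p0)/LR'(p0)
    by damped fixed-point iteration; lr and lr_prime are a hypothetical
    load-rate function and its derivative."""
    p = p_init
    for _ in range(n_iter):
        # Damping (averaging with the previous iterate) stabilizes the map
        p = 0.5 * p + 0.5 * (c2 - lr(p) / lr_prime(p))
    return p
```

For a linear load rate LR(p) = 1 − 0.02·p and unit cost c2 = 10, the fixed point is p0* = 30, which indeed maximizes (p − 10)·(1 − 0.02·p).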
For CAAC, the optimal strategy is to set a higher control price without considering the public welfare, because it has many underlying common interests with the airlines and the civil aviation industry, for example in finance and political achievements. The airlines, however, are always inclined to disobey the control price and enter a price war. The reason is that airlines in China face fierce competition after the so-called ''deregulation'', while the property rights of the leading airlines still belong to administrative departments. In a natural monopoly industry such as civil aviation, fixed costs are very high while marginal costs are very low, so airlines always have both the pressure and the room to cut prices. We have proved that when the self-determination price $p_0$ is lower than the control price $p_2$, airlines become more competitive, obtaining a higher passenger load rate, a bigger market share, and so on. However, when $p_0 < c_2$ (which is possible because of the unusual structure of property rights), the air transport market falls into disorder and the welfare of the whole industry is seriously damaged.
CAAC therefore has to set a lower boundary for the control price to prevent this phenomenon from happening, so that the self-determination price of the airlines cannot fall below the lower boundary of the control price under any circumstances.
$p_1$ is a lower control price at which CAAC considers the public welfare: when $p_0 > p_1$ the airlines earn excess profit, otherwise they run in the red. So the optimal strategy for CAAC is to set an interval control price, for example $[p_1, p_2]$, and the optimal strategy for the airlines is to self-price between $p_1$ and $p_2$.
Based on the decisions above, CAAC can strike a balance among industry welfare, healthy development of the market, and the public welfare, while the airlines can balance profits against competitive edge. Lastly, it is necessary to note that the final decisions of the two players are both based on the present special structure of property rights in China's civil aviation, which cannot be reformed or changed in a short period of time.
4.5 Conclusion
Unlike many qualitative analyses of the air ticket price control problem, we build a quantitative price control model based on game theory by introducing two new decision factors: CS (consumer surplus, also called the public welfare) and LR (passenger load rate). Through modeling and discussion we obtain some interesting results and conclusions. CAAC is inclined to ignore the public welfare when setting the control price of air tickets, which may be a higher price $p_2$; the airlines, in contrast, are always inclined to disobey the control price of CAAC in order to achieve a higher passenger load rate and strengthen their competitive edge, which may be a self-determination price $p_0$ with $p_1 < p_0 < p_2$, where $p_1$ is a lower control price at which CAAC considers the public welfare.
At present, the main airlines of China have a very special structure of property rights; that is to say, airlines face the dual pressures of administrative regulation and market competition, which may lead to price dumping for market share and competitive edge, and may even lead to $p_0 < c_2$. CAAC therefore has to define a lower boundary for the control price to prevent this kind of vicious competition. As a result, CAAC sets an interval control price so as to preserve sufficient industry welfare, ensure healthy development of the civil aviation industry, and take moderate account of the public welfare. When $p_0 \in [p_1, p_2]$, the airlines can balance revenues against competitive edge, and CAAC can balance industry welfare against the public welfare.
The reason for the decision dissonance between the two players is the tremendous difference in cost evaluation for air ticket pricing ($c_1 > c_2$, shown in discussion C). $c_1$ is the cost evaluation of CAAC, usually an average cost of the whole industry, while $c_2$ is the cost evaluation of the airlines, usually the marginal cost of the product. In terms of economics and competition, as long as $p_0 > c_2$, airlines are always inclined to disobey and undercut the control price for a higher passenger load rate, i.e. for competitive edge.
Acknowledgments We thank all the sponsors for their financial support. The author, Chongyi Jing, especially thanks his friend Dr. Bin Zhou for his thoughtful comments, suggestions, and helpful data resources, which greatly aided the completion of this paper.
References
Han B (2000) A review on the price control of China's civil airplane ticket. Sci Technol Rev 1:52–54 (in Chinese)
Kang Z, Du W (2006) Control and re-control of civil aviation transport fare in our country.
Northeast Financ Univ Trans 3(5):16–18 (in Chinese)
Li X, Deng J (2003) The reform of price management system of civil aviation based on price
theory. China Civ Aviat Sch Trans 21(2):26–31 (in Chinese)
Liu J (2002) Game analysis and reform suggestion for air ticket price. China Price 11:26–29 (in
Chinese)
Liu J (2006) The control and competition of natural monopoly industry. Search Truth 2:181–182
(in Chinese)
Mei H, Zhu J, Wang X (2006) Study of revenue management pricing based on game analysis.
Forecasting 25(6):45–49 (in Chinese)
Min Z, Yang X (2003) Industry organization analysis on price evolvement of civil aviation
enterprises. ShanXi Financ Univ Trans 3:57–61 (in Chinese)
Qiu Y (2001) A comparison analysis of deregulating price control between China and USA and
the control policy choice of China. Special-zone Econ 4:35–36 (in Chinese)
Shaffer S (2001) Deregulation: theory and practice. J Econ Bus 53(6):107–109
Wang R (2004) Game explanation on price competition of airlines. Technoecon Manag Res
3:81–82 (in Chinese)
Yang S, Zhang X (2002) A game theoretical analysis of marketable air fare. Commer Res
12:10–13 (in Chinese)
Zhang L (2005) Control or deregulation of air transportation price. China Sci Technol Inf
12(21):113–123 (in Chinese)
Chapter 5
A Method for Multiple Attribute Decision
Making without Weight Information
but with Preference Information
on Alternatives
Yun-fei Li
5.1 Introduction
Y. Li (&)
School of Mathematics and Information, China West Normal University, Nanchong, China
e-mail: [email protected]
5.2 Preliminaries
In the following, we will introduce some important concepts and algorithms about
interval numbers (Xu and Da 2003).
Let $\tilde{a} = [a^-, a^+] = \{x \mid a^- \le x \le a^+,\; a^-, a^+ \in \mathbb{R}\}$; $\tilde{a}$ is called an interval number. In particular, if $a^- = a^+$, $\tilde{a}$ is a real number.
The operations on interval numbers are as follows: if $\tilde{a} = [a^-, a^+]$, $\tilde{b} = [b^-, b^+]$ and $\beta \ge 0$, then

(1) $\tilde{a} = \tilde{b}$ if and only if $a^- = b^-$ and $a^+ = b^+$;
(2) $\tilde{a} + \tilde{b} = [a^- + b^-,\ a^+ + b^+]$;
(3) $\beta \tilde{a} = [\beta a^-,\ \beta a^+]$; in particular, if $\beta = 0$, then $\beta \tilde{a} = 0$.
Let $X = \{X_1, X_2, \ldots, X_n\}$ $(n \ge 2)$ be the set of alternatives and $U = \{u_1, u_2, \ldots, u_m\}$ $(m \ge 2)$ be the set of attributes. Suppose the decision maker gives the interval decision matrix $\tilde{A} = (\tilde{a}_{ij})_{n \times m}$, where $\tilde{a}_{ij} = [a^-_{ij}, a^+_{ij}]$ is the attribute value of alternative $X_i$ with respect to attribute $u_j$.
If $\tilde{a}_{ij}$ is a benefit attribute, then

$$
r^-_{ij} = \frac{a^-_{ij}}{\sqrt{\sum_{i=1}^{n} (a^+_{ij})^2}}, \qquad
r^+_{ij} = \frac{a^+_{ij}}{\sqrt{\sum_{i=1}^{n} (a^-_{ij})^2}}
\tag{5.1}
$$

If $\tilde{a}_{ij}$ is a cost attribute, then

$$
r^-_{ij} = \frac{1/a^+_{ij}}{\sqrt{\sum_{i=1}^{n} \big(1/a^-_{ij}\big)^2}}, \qquad
r^+_{ij} = \frac{1/a^-_{ij}}{\sqrt{\sum_{i=1}^{n} \big(1/a^+_{ij}\big)^2}}
\tag{5.2}
$$
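The normalization (5.1)–(5.2) can be sketched as follows (the function name and the interval encoding as pairs are our own):

```python
import math

def normalize_interval_matrix(A, is_benefit):
    """Normalize an interval decision matrix per (5.1)/(5.2).

    A          -- n x m list of (a_lo, a_hi) interval pairs
    is_benefit -- length-m list; True for benefit attributes,
                  False for cost attributes
    Returns the dimensionless matrix R of (r_lo, r_hi) pairs.
    """
    n, m = len(A), len(A[0])
    R = [[None] * m for _ in range(n)]
    for j in range(m):
        if is_benefit[j]:
            den_lo = math.sqrt(sum(A[i][j][1] ** 2 for i in range(n)))  # uses a+ per (5.1)
            den_hi = math.sqrt(sum(A[i][j][0] ** 2 for i in range(n)))  # uses a-
            for i in range(n):
                R[i][j] = (A[i][j][0] / den_lo, A[i][j][1] / den_hi)
        else:
            den_lo = math.sqrt(sum((1 / A[i][j][0]) ** 2 for i in range(n)))
            den_hi = math.sqrt(sum((1 / A[i][j][1]) ** 2 for i in range(n)))
            for i in range(n):
                R[i][j] = ((1 / A[i][j][1]) / den_lo, (1 / A[i][j][0]) / den_hi)
    return R

# For degenerate intervals (a- = a+) both bounds reduce to the usual
# vector normalization, so each normalized column has unit Euclidean norm.
A = [[(3.0, 3.0)], [(4.0, 4.0)]]
R = normalize_interval_matrix(A, [True])
print(R)
```

For genuine intervals the two bounds use different denominators, so the columns are only approximately unit-norm; that is inherent to (5.1)–(5.2).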
In order to compare the similarity of two interval numbers and to rank the alternatives, we introduce the concepts of the deviation degree and the possibility degree of interval numbers.
Definition 1 (Xu and Da 2003) Suppose $\tilde{a} = [a^-, a^+]$ and $\tilde{b} = [b^-, b^+]$ are two interval numbers; let

$$
d(\tilde{a}, \tilde{b}) = \big\lVert \tilde{a} - \tilde{b} \big\rVert = \sqrt{(b^- - a^-)^2 + (b^+ - a^+)^2}
$$

be the deviation degree between $\tilde{a}$ and $\tilde{b}$.
Definition 2 (Xu and Da 2003) Suppose $\tilde{a} = [a^-, a^+]$ and $\tilde{b} = [b^-, b^+]$ are two interval numbers, and let $l_{\tilde{a}} = a^+ - a^-$ and $l_{\tilde{b}} = b^+ - b^-$; then let

$$
p(\tilde{a} \ge \tilde{b}) = \frac{\min\big\{ l_{\tilde{a}} + l_{\tilde{b}},\ \max(a^+ - b^-,\, 0) \big\}}{l_{\tilde{a}} + l_{\tilde{b}}}
\tag{5.3}
$$

be the possibility degree of $\tilde{a} \ge \tilde{b}$, which gives the order relationship between $\tilde{a}$ and $\tilde{b}$.
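A minimal sketch of the possibility degree, following the Xu–Da definition cited above (the function name is ours):

```python
def possibility_degree(a, b):
    """Possibility degree p(a >= b) for intervals a = (a_lo, a_hi),
    b = (b_lo, b_hi), per the Xu-Da definition cited in the text."""
    la = a[1] - a[0]
    lb = b[1] - b[0]
    return min(la + lb, max(a[1] - b[0], 0.0)) / (la + lb)

a, b = (0.3, 0.5), (0.4, 0.6)
p_ab = possibility_degree(a, b)
p_ba = possibility_degree(b, a)
print(p_ab, p_ba)  # -> 0.25 0.75
```

Note the complementarity p(a >= b) + p(b >= a) = 1, which is what makes the matrix of pairwise degrees usable for ranking.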
In this paper, we study how to rank $X_i$ $(i = 1, 2, \ldots, n)$ according to the decision matrix $\tilde{A}$ and the preference information $\tilde{h}_i = [h^-_i, h^+_i]$.
The comprehensive attribute value of alternative $X_i$ is

$$
\tilde{z}_i = \sum_{j=1}^{m} \tilde{r}_{ij} w_j
= \Big[ \sum_{j=1}^{m} r^-_{ij} w_j,\ \sum_{j=1}^{m} r^+_{ij} w_j \Big]
\tag{5.4}
$$

where $w_j$ is the weight of attribute $u_j$ and $\sum_{j=1}^{m} w_j = 1$ $(w_j \ge 0,\ j = 1, 2, \ldots, m)$.
j¼1
The preference information of the decision maker is subjective judgment to the
comprehensive attribute values of alternatives. But because of the limitations of
many real factors, there are deviations between the preference information of the
decision maker and the comprehensive attribute values of alternatives. In view of
the rationality of the decision making, the attributes weight vector w ¼ ðw1 ;
w2 ; ; wm ÞT will minimize the total deviation between the preference informa-
tion of the decision maker and the comprehensive attribute values of alternatives.
So, we will give the following optimal model:
8 X n Xn X m Xm
>
>
> minDðwÞ ¼
> di2 ð~zi ; ~
hi Þ ¼ ½ð rij wj h
i Þ2 þ ð rijþ wj hþ 2
i Þ
< i¼1 i¼1 j¼1 j¼1
>
> X
m
>
> wj ¼ 1 wj 0 j ¼ 1; 2; ; m
: s:t
j¼1
where

$$
d_i(\tilde{z}_i, \tilde{h}_i) = \sqrt{\Big( \sum_{j=1}^{m} r^-_{ij} w_j - h^-_i \Big)^2 + \Big( \sum_{j=1}^{m} r^+_{ij} w_j - h^+_i \Big)^2}
\quad (i = 1, 2, \ldots, n)
\tag{5.5}
$$

is the deviation degree between the subjective preference information for $X_i$ and the comprehensive attribute value of $X_i$.
We construct the Lagrange function:

$$
L(w, \lambda) = \sum_{i=1}^{n} \Big[ \Big( \sum_{j=1}^{m} r^-_{ij} w_j - h^-_i \Big)^2 + \Big( \sum_{j=1}^{m} r^+_{ij} w_j - h^+_i \Big)^2 \Big] + 2\lambda \Big( \sum_{j=1}^{m} w_j - 1 \Big)
\tag{5.6}
$$
Setting $\partial L(w, \lambda)/\partial w_k = 0$ $(k = 1, 2, \ldots, m)$, we have

$$
\sum_{i=1}^{n} \Big[ \Big( \sum_{j=1}^{m} r^-_{ij} w_j - h^-_i \Big) r^-_{ik} + \Big( \sum_{j=1}^{m} r^+_{ij} w_j - h^+_i \Big) r^+_{ik} \Big] + \lambda = 0
\quad (k = 1, 2, \ldots, m)
\tag{5.7}
$$
i.e.

$$
\sum_{j=1}^{m} \Big[ \sum_{i=1}^{n} \big( r^-_{ij} r^-_{ik} + r^+_{ij} r^+_{ik} \big) \Big] w_j
= \sum_{i=1}^{n} \big( h^-_i r^-_{ik} + h^+_i r^+_{ik} \big) - \lambda
\quad (k = 1, 2, \ldots, m)
\tag{5.8}
$$
Let

$$
g_k = \sum_{i=1}^{n} \big( h^-_i r^-_{ik} + h^+_i r^+_{ik} \big) \quad (k = 1, 2, \ldots, m)
\tag{5.9}
$$

$$
q_{kj} = \sum_{i=1}^{n} \big( r^-_{ij} r^-_{ik} + r^+_{ij} r^+_{ik} \big) \quad (k, j = 1, 2, \ldots, m)
\tag{5.10}
$$
Then (5.8) can be written in matrix form as $Q w = g - \lambda e_m$, where $Q = (q_{kj})_{m \times m}$, $g = (g_1, \ldots, g_m)^T$ and $e_m = (1, 1, \ldots, 1)^T$. For any $y = (y_1, \ldots, y_m)^T \ne 0$,

$$
\begin{aligned}
y^T Q y &= \sum_{i=1}^{n} \sum_{k=1}^{m} \big[ (r^-_{ik})^2 + (r^+_{ik})^2 \big] y_k^2
+ \sum_{i=1}^{n} \sum_{j=1}^{m} \sum_{\substack{k=1 \\ k \ne j}}^{m} \big( r^-_{ij} r^-_{ik} + r^+_{ij} r^+_{ik} \big) y_k y_j \\
&= \sum_{i=1}^{n} \Big[ \Big( \sum_{k=1}^{m} r^-_{ik} y_k \Big)^2 + \Big( \sum_{k=1}^{m} r^+_{ik} y_k \Big)^2 \Big] > 0
\end{aligned}
$$

so $Q$ is positive definite and hence invertible. Substituting $w = Q^{-1}(g - \lambda e_m)$ into the constraint $e_m^T w = 1$ gives
$$
\lambda = \frac{e_m^T Q^{-1} g - 1}{e_m^T Q^{-1} e_m}
\tag{5.13}
$$

So,

$$
w = Q^{-1} \Big( g - \frac{e_m^T Q^{-1} g - 1}{e_m^T Q^{-1} e_m}\, e_m \Big)
\tag{5.14}
$$
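The closed-form solution (5.9)–(5.14) can be sketched in pure Python (helper names are ours; the small Gaussian solver stands in for any linear algebra routine):

```python
def solve(M, b):
    """Gauss-Jordan elimination for M x = b (demo-scale solver)."""
    n = len(M)
    A = [row[:] + [b[i]] for i, row in enumerate(M)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(A[r][c]))  # partial pivoting
        A[c], A[p] = A[p], A[c]
        for r in range(n):
            if r != c:
                f = A[r][c] / A[c][c]
                A[r] = [x - f * y for x, y in zip(A[r], A[c])]
    return [A[i][n] / A[i][i] for i in range(n)]

def attribute_weights(R_lo, R_hi, h_lo, h_hi):
    """Optimal weights per (5.14): w = Q^{-1}(g - lambda * e)."""
    n, m = len(R_lo), len(R_lo[0])
    # (5.10): Q_kj = sum_i (r-_ij r-_ik + r+_ij r+_ik)
    Q = [[sum(R_lo[i][j] * R_lo[i][k] + R_hi[i][j] * R_hi[i][k]
              for i in range(n)) for j in range(m)] for k in range(m)]
    # (5.9): g_k = sum_i (h-_i r-_ik + h+_i r+_ik)
    g = [sum(h_lo[i] * R_lo[i][k] + h_hi[i] * R_hi[i][k]
             for i in range(n)) for k in range(m)]
    Qinv_g = solve(Q, g)
    Qinv_e = solve(Q, [1.0] * m)
    lam = (sum(Qinv_g) - 1.0) / sum(Qinv_e)      # (5.13): e^T Q^{-1} x = sum(x)
    return [Qinv_g[k] - lam * Qinv_e[k] for k in range(m)]   # (5.14)

# Illustrative (made-up) normalized bounds and preferences, n = m = 2.
R_lo = [[0.6, 0.5], [0.8, 0.4]]
R_hi = [[0.7, 0.6], [0.9, 0.5]]
w = attribute_weights(R_lo, R_hi, [0.5, 0.6], [0.7, 0.8])
print(w, sum(w))
```

By construction the weights sum to one for any data with invertible Q; the closed form does not enforce non-negativity, which the model states as a separate constraint.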
Furthermore, the best alternative is obtained by ranking all the alternatives according to the components of the priority vector $\omega = (\omega_1, \omega_2, \ldots, \omega_n)^T$.
Based on the above discussion, we develop a new method for the MADM problem in which the weight information is unknown, the attribute values are interval numbers, and the decision maker expresses preference information on the alternatives as interval numbers. The method consists of six steps:
Step 1. Let $X = \{X_1, X_2, \ldots, X_n\}$ $(n \ge 2)$ be the set of alternatives, $U = \{u_1, u_2, \ldots, u_m\}$ $(m \ge 2)$ the set of attributes, and $\tilde{A} = (\tilde{a}_{ij})_{n \times m}$ the decision matrix, where $\tilde{a}_{ij} = [a^-_{ij}, a^+_{ij}]$ is the attribute value of alternative $X_i$ with respect to attribute $u_j$ $(i = 1, \ldots, n;\ j = 1, \ldots, m)$;
Step 2. According to (5.1) and (5.2), normalize the decision matrix $\tilde{A} = (\tilde{a}_{ij})_{n \times m}$ into the dimensionless decision matrix $\tilde{R} = (\tilde{r}_{ij})_{n \times m}$;
Step 3. According to (5.14), obtain the optimal weight vector $w = (w_1, w_2, \ldots, w_m)^T$;
Step 4. Utilize (5.4) to get the comprehensive attribute values $\tilde{z}_i$ $(i = 1, 2, \ldots, n)$;
Step 5. Utilize (5.3) to get the possibility degrees of the $\tilde{z}_i$ $(i = 1, 2, \ldots, n)$ and establish the possibility degree matrix $P = (p_{il})_{n \times n}$;
Step 6. Calculate the priority vector $\omega = (\omega_1, \omega_2, \ldots, \omega_n)^T$ of the possibility degree matrix $P = (p_{il})_{n \times n}$, rank all the alternatives according to its components, and obtain the best alternative.
When a decision maker selects cadres, on the one hand he wants to select capable cadres, and on the other hand he also wants to select the cadres he prefers; hence there is preference information on the alternatives. Now suppose a company faces the problem of selecting cadres. First, the company builds an index system with six attributes: $u_1$ morality, $u_2$ attitude to work, $u_3$ style of work, $u_4$ level of culture and knowledge structure, $u_5$ leadership ability, and $u_6$ ability to innovate. Second, the company determines five candidates $x_i$ $(i = 1, 2, \ldots, 5)$ based on recommendations, mass evaluation, and statistical treatment. Because the evaluation results for the same candidate differ, the attribute values after statistical treatment are expressed as interval numbers, as listed in Table 5.1.
To select the best cadre, the six steps are carried out as follows.
Step 1. We utilize (5.1) and (5.2) to transform the decision matrix $\tilde{A}$ into the dimensionless decision matrix

$$
\tilde{R} = \begin{pmatrix}
[0.378, 0.405] & [0.394, 0.414] & [0.398, 0.423] & [0.407, 0.432] & [0.394, 0.410] & [0.415, 0.437] \\
[0.394, 0.429] & [0.389, 0.410] & [0.394, 0.415] & [0.394, 0.415] & [0.411, 0.438] & [0.394, 0.419] \\
[0.395, 0.420] & [0.377, 0.396] & [0.408, 0.433] & [0.408, 0.433] & [0.386, 0.410] & [0.408, 0.424] \\
[0.385, 0.405] & [0.413, 0.433] & [0.385, 0.410] & [0.390, 0.414] & [0.395, 0.419] & [0.417, 0.433] \\
[0.384, 0.410] & [0.402, 0.414] & [0.402, 0.414] & [0.407, 0.419] & [0.402, 0.414] & [0.380, 0.391]
\end{pmatrix}
$$
Step 2. Suppose the decision maker's subjective preference values (after normalization) for the five candidates $x_i$ $(i = 1, 2, \ldots, 5)$ are as follows:

$$
\tilde{h}_1 = [0.3, 0.5], \quad \tilde{h}_2 = [0.5, 0.6], \quad \tilde{h}_3 = [0.3, 0.4], \quad
\tilde{h}_4 = [0.4, 0.6], \quad \tilde{h}_5 = [0.4, 0.5]
$$
Utilize (5.14) to get the optimal weight vector:
Table 5.1 Decision matrix $\tilde{A}$
u1 u2 u3 u4 u5 u6
x1 [0.85,0.90] [0.90,0.92] [0.91,0.94] [0.93,0.96] [0.90,0.91] [0.95,0.97]
x2 [0.90,0.95] [0.89,0.91] [0.90,0.92] [0.90,0.92] [0.94,0.97] [0.90,0.93]
x3 [0.88,0.91] [0.84,0.86] [0.91,0.94] [0.91,0.94] [0.86,0.89] [0.91,0.92]
x4 [0.93,0.96] [0.91,0.93] [0.85,0.88] [0.86,0.89] [0.87,0.90] [0.92,0.93]
x5 [0.86,0.86] [0.90,0.92] [0.90,0.95] [0.91,0.93] [0.90,0.92] [0.85,0.87]
5.5 Conclusions
In this paper, a new method is proposed for the MADM problem in which the expert's preference information on alternatives is expressed by interval numbers, the attribute weight information is completely unknown, and the attribute values are interval numbers. Since the decision maker's preference information is a subjective judgment of the comprehensive attribute values of the alternatives, the attribute weight vector should minimize the total deviation between the preference information values and the comprehensive attribute values. Following this idea, we established an optimization model based on the deviation degree between the comprehensive attribute values of the alternatives and the preference information values of the decision maker. Solving this model yields the attribute weights, and we then use the priority vector of the possibility degree matrix to compare the comprehensive attribute values of the
alternatives and rank them. The method is practical and effective because it organically combines subjective and objective information. Finally, an illustrative example shows the application of the method.
References
Chen SY, Zhao YQ (1990) Fuzzy optimum theory and model (in Chinese). Fuzzy Syst Math
4(2):87–91
Cheng T (1987) Decision analysis. Science Press, Beijing
Da QL, Xu ZS (2002) Single-objective optimization model in uncertain multi-attribute decision making (in Chinese). J Syst Eng 17(1):50–55
Fan ZP, Ma J, Zhang Q (2008) An approach to multiple attribute decision making based on fuzzy
preference information on alternatives. Fuzzy Sets Syst 131(1):101–106
Gao FJ (2000) Multiple attribute decision making on plans with alternative preference under
incomplete information (in Chinese). Syst Eng Theory Pract 20(4):94–97
Goh CH, Tung YCA, Cheng CH (2003) A revised weighted sum decision model for robot
selection. Comput Ind Eng 30(2):193–199
Hwang CL, Yoon K (1981) Multiple attribute decision making: methods and applications.
Springer, New York
Jiang YP, Fan ZP (2005) Method for multiple attribute decision making with attribute interval
numbers and preference information on alternatives (in Chinese). Syst Eng Electr
27(2):250–252
Kim SH, Ahn BS (1999) Interactive group decision making procedure under incomplete
information. Eur J Oper Res 116(3):498–507
Kim SH, Choi SH, Kim JK (1999) An interactive procedure for multiple attribute group decision
making with incomplete information: range-based approach. Eur J Oper Res 118(1):139–152
Park KS, Kim SH (1997) Tools for interactive multi-attribute decision making with incompletely
identified information. Eur J Oper Res 98(1):111–123
Wei GW, Wei Y (2008) Model of grey relational analysis for interval multiple attribute decision
making with preference information on alternatives (in Chinese). Chin J Manage Sci
16(1):158–162
Xu ZS (2003) A method for multiple attribute decision making without weight information but
with preference information on alternatives (in Chinese). Syst Eng Theory Pract
23(12):100–103
Xu ZS (2004a) Method for multi-attribute decision making with reference information on
alternatives under partial weight information (in Chinese). Control Decis 19(1):85–88
Xu ZS (2004b) Projection method for uncertain multi-attribute decision making with preference
information on alternatives. Int J Inf Technol Decis Mak 3(3):429–434
Xu ZS, Da QL (2003) Possibility degree method for ranking interval numbers and its application
(in Chinese). J Syst Eng 18(1):67–70
Chapter 6
A Prediction of the Container Throughput
of Jiujiang Port Based on Grey System
Theory
Yan Du
Abstract A container handling system can be regarded as a grey system. Since the grey GM (1,1) prediction model is applicable to small data samples, the container throughput of Jiujiang port in the next 5 years can be predicted on the basis of the relevant data from 2006 to 2011. According to the predicted result, feasible suggestions and available measures may be given for the development of Jiujiang port, and scientific data may also be provided for it.
6.1 Introduction
A port is served as the joint and shipping center of the land, sea and air. It plays an
important role in the entire logistics network. Meanwhile, as an integrated transport
hub and the main distribution center of import and export, the port’s economic
growth take an inevitably great position in the urban economy and regional devel-
opment. Jiujiang port is the only port connecting Yangtze River and seas in Jiangxi
Province, which has rather obvious advantage. The international container cargoes
in Jiangxi are mainly transported to the coastal ports through Jiujiang port. As a
feeder port of Shanghai port, the cargoes are first shipped from Jiujiang port to
Shanghai and then exported to the other cities and countries in the world. In 2008, the
two ports established a closer cooperation. The Shanghai Port provided the Jiujiang
Port with the advanced management experience, ideas and other sorts of resources.
A new Jiujiang port is growing bigger and better with the rapid development of
Jiangxi’s economy, its role is increasingly evident in recent years (Du 2011).
Y. Du (&)
Business School, Jiujiang University, Jiujiang, China
e-mail: [email protected]
Grey System Theory was founded by Professor Deng Julong, a famous Chinese scholar, in 1982. It is a new method for studying problems with little data, poor information, and uncertainty. Grey System Theory specializes in uncertain systems with small samples and poor information: by extracting valuable information from the development of the known information, it realizes a correct description and effective monitoring of system behavior and its evolution (Deng 2002). In this theory, ''black'' means the information is unknown, ''white'' indicates the information is completely clear, and ''grey'' describes a state in which some of the information is clear and some is not. Accordingly, a system whose information is unknown is called a black system, a system whose information is completely clear is called a white system, and a system between them is called a grey system (Liu and Xie 2008).
The grey system model has a broad range of applications because it places no special requirements or restrictions on the observed data. Currently, the most widely used grey prediction model is the GM (1,1) model, which contains one variable and a first-order derivative.
The GM (1,1) model starts from a random original time series; the accumulated series formed from it exhibits a regularity that can be approximated by a first-order linear differential equation. In this paper, the grey prediction GM (1,1) model, namely the first-order one-variable linear differential equation model, is applied.
Grey system modeling uses discrete time series data to establish an approximately continuous differential equation model. In this process, the Accumulated Generating Operation (AGO) is the basic means; its generating function is the basis of grey modeling and prediction (Liu and Deng 1999).
Given the original sequence

$$
x^{(0)} = \big\{ x^{(0)}(1),\ x^{(0)}(2),\ \ldots,\ x^{(0)}(n) \big\}
$$

its accumulated (AGO) sequence is $x^{(1)}(k) = \sum_{i=1}^{k} x^{(0)}(i)$, and the whitened differential equation of the GM (1,1) model is

$$
\frac{dx^{(1)}}{dt} + a x^{(1)} = u
\tag{6.1}
$$

where $a$, $u$ are undetermined parameters. Replacing the derivative by differences, the equation is transformed into

$$
\begin{pmatrix} x^{(0)}(2) \\ x^{(0)}(3) \\ \vdots \\ x^{(0)}(n) \end{pmatrix}
= \begin{pmatrix}
-\tfrac{1}{2}\big[ x^{(1)}(1) + x^{(1)}(2) \big] & 1 \\
-\tfrac{1}{2}\big[ x^{(1)}(2) + x^{(1)}(3) \big] & 1 \\
\vdots & \vdots \\
-\tfrac{1}{2}\big[ x^{(1)}(n-1) + x^{(1)}(n) \big] & 1
\end{pmatrix}
\begin{pmatrix} a \\ u \end{pmatrix}
\tag{6.2}
$$

abbreviated as

$$
Y_N = B \hat{a}
\tag{6.3}
$$

Using the least squares method, the solution is

$$
\hat{a} = \begin{pmatrix} a \\ u \end{pmatrix} = \big( B^T B \big)^{-1} B^T Y_N
\tag{6.4}
$$

Substituting into the original differential equation yields the formula

$$
\hat{x}^{(1)}(k+1) = \Big[ x^{(0)}(1) - \frac{u}{a} \Big] e^{-ak} + \frac{u}{a}
\tag{6.5}
$$
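Equations (6.2)–(6.5) can be sketched in pure Python (the helper names `gm11_fit` and `gm11_predict` are ours, not the paper's):

```python
import math

def gm11_fit(x0):
    """Fit GM(1,1) parameters (a, u) per (6.2)-(6.4) for a series x0."""
    n = len(x0)
    # AGO sequence x1(k) = sum of x0(1..k)
    x1 = [sum(x0[:k + 1]) for k in range(n)]
    # Background values z(k) = 1/2 [x1(k-1) + x1(k)], k = 2..n
    z = [0.5 * (x1[k - 1] + x1[k]) for k in range(1, n)]
    y = x0[1:]
    # Least squares for y = -a*z + u (normal equations, two unknowns)
    m = len(z)
    sz, sy = sum(z), sum(y)
    szz = sum(v * v for v in z)
    szy = sum(v * w for v, w in zip(z, y))
    det = m * szz - sz * sz
    slope = (m * szy - sz * sy) / det      # slope of y on z, equals -a
    u = (szz * sy - sz * szy) / det
    return -slope, u

def gm11_predict(x0, a, u, k):
    """Predicted x0(k+1) via (6.5): difference of accumulated estimates."""
    c = x0[0] - u / a
    return (c * math.exp(-a * k) + u / a) - (c * math.exp(-a * (k - 1)) + u / a)

# Sanity check on a nearly exponential series: the fitted model tracks it.
x0 = [2.0 * 1.1 ** k for k in range(6)]
a, u = gm11_fit(x0)
print(a, u, gm11_predict(x0, a, u, 6))
```

Because the background value is a trapezoid average, the recovered growth rate is only approximately the true one, which is why the posterior error check of the next section is needed.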
After fitting, the model is checked by the posterior error test. Let $q(k) = x^{(0)}(k) - \hat{x}^{(0)}(k)$ denote the residuals. The mean residual is

$$
\bar{q} = \frac{1}{n} \sum_{k=1}^{n} q(k)
\tag{6.7}
$$

and the residual variance is

$$
s_2^2 = \frac{1}{n} \sum_{k=1}^{n} \big[ q(k) - \bar{q} \big]^2
\tag{6.9}
$$

The posterior error ratio $c$ and the small error probability $p$ are then calculated as

$$
c = \frac{s_2}{s_1}
\tag{6.10}
$$

$$
p = P\big\{ \, |q(k) - \bar{q}| < 0.6745\, s_1 \, \big\}
\tag{6.11}
$$

where $s_1^2$ is the variance of the original data.
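The check quantities (6.7)–(6.11) can be sketched as follows (a hypothetical helper of ours, taking the actual and fitted series):

```python
def posterior_check(actual, fitted):
    """Posterior error ratio c and small error probability p, per (6.7)-(6.11)."""
    n = len(actual)
    q = [a - f for a, f in zip(actual, fitted)]               # residuals
    q_bar = sum(q) / n                                        # (6.7)
    x_bar = sum(actual) / n
    s1 = (sum((a - x_bar) ** 2 for a in actual) / n) ** 0.5   # std of the data
    s2 = (sum((qk - q_bar) ** 2 for qk in q) / n) ** 0.5      # (6.9)
    c = s2 / s1                                               # (6.10)
    p = sum(1 for qk in q if abs(qk - q_bar) < 0.6745 * s1) / n   # (6.11)
    return c, p

# A near-perfect fit (fitted values are illustrative) should give a small
# ratio c and p = 1.
actual = [7.38, 8.80, 8.09, 10.07, 12.06, 14.00]
fitted = [7.38, 8.75, 8.15, 10.00, 12.10, 13.95]
c, p = posterior_check(actual, fitted)
print(c, p)
```

By the usual grading, c < 0.35 and p > 0.95 indicate a good model.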
6.3.1 Calculation
The container throughput of Jiujiang port in recent years is shown in Table 6.2. From 2006 to 2011 the container throughput of Jiujiang port grew rapidly overall; only in 2008 did it decline compared with 2007, because of the international financial crisis. In 2009 the volume grew again, and it maintained a good momentum of sustained growth in 2010 and 2011 (Shao 2010).
We can see from Table 6.2:
Table 6.2 Container throughput and cumulative data of Jiujiang port over the years (Unit: Million TEU)
Years 2006 2007 2008 2009 2010 2011
Throughput 7.38 8.80 8.09 10.07 12.06 14.00
Cumulative 7.38 16.18 24.27 34.34 46.40 60.40
TEU Twenty-foot Equivalent Unit
Table 6.3 The actual values and predicted values (Unit: Million TEU)
Years 2006 2007 2008 2009 2010 2011
Actual value 7.38 8.80 8.09 10.07 12.06 14.00
Predictive value 7.38 7.80 8.98 10.35 11.92 13.72
Year 2012 2013 2014 2015 2016
Actual value – – – – –
Predictive value 18.22 20.99 24.18 27.85 32.08
$$
\frac{dx^{(1)}}{dt} - 0.1414\, x^{(1)} = 6.2195
$$

and the prediction model follows:

$$
\hat{x}^{(1)}(k+1) = \Big[ x^{(0)}(1) - \frac{u}{a} \Big] e^{-ak} + \frac{u}{a}
= 51.37\, e^{0.1414 k} - 43.99
$$
The predicted values $\hat{x}^{(0)}(k+1) = \hat{x}^{(1)}(k+1) - \hat{x}^{(1)}(k)$ then follow from this formula.
A posterior error inspection is carried out for the model. The mean of the original data is $\bar{x} = 10.067$; the mean of the residuals is $\bar{q} = 0.0416$; the variance of the original data is $s_1^2 = 4.47$, $s_1 = 2.114$; the variance of the residuals is $s_2^2 = 0.337$, $s_2 = 0.58$; the posterior error ratio is $c = s_2 / s_1 = 0.27$; and the small error probability is $p = 1$.
According to Table 6.1, since $p = 1 > 0.95$ and $c = 0.27$, the prediction can be judged comparatively accurate. Therefore this model can predict the container throughput of Jiujiang port in the next few years, as shown in Table 6.3.
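As a cross-check (our own sketch, not part of the original paper), re-fitting the Table 6.2 series with the least squares step (6.4) reproduces the reported parameters and the first fitted value:

```python
import math

x0 = [7.38, 8.80, 8.09, 10.07, 12.06, 14.00]   # Table 6.2, 2006-2011

# AGO sequence and background values
x1 = [sum(x0[:k + 1]) for k in range(len(x0))]
z = [0.5 * (x1[k - 1] + x1[k]) for k in range(1, len(x0))]
y = x0[1:]

# Least squares for y = -a*z + u (normal equations, two unknowns)
m = len(z)
sz, sy = sum(z), sum(y)
szz = sum(v * v for v in z)
szy = sum(v * w for v, w in zip(z, y))
det = m * szz - sz * sz
a = -(m * szy - sz * sy) / det
u = (szz * sy - sz * szy) / det
print(a, u)  # close to the reported a = -0.1414, u = 6.2195

# First fitted value x0_hat(2), which should match the 7.80 reported for 2007
c0 = x0[0] - u / a
pred_2007 = (c0 * math.exp(-a) + u / a) - x1[0]
print(pred_2007)
```

The agreement with the reported 51.37 e^{0.1414k} - 43.99 model confirms the arithmetic of the chapter.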
The calculated results show that Grey System Theory predicts the container throughput of Jiujiang port quite accurately: the calculations and the statistics are very close. From the predicted results it can be seen that the growth of the container throughput of Jiujiang port in the next 5 years is fast and stable, so it can be inferred that the container transport of Jiujiang port will develop greatly. The basic data of the model are the objective statistics for 2006–2011. In 2012 and the coming years there are unknown factors, difficult to determine, that affect the economic environment and the development of the logistics industry. Objectively speaking, the model therefore carries a certain degree of uncertainty, but as an industry development prediction it provides a relatively reliable reference.
Taking into account the development needs of Jiujiang port, the container throughput prediction, and the actual situation of the port, specific measures for the development of port logistics at Jiujiang port are provided as follows.
During the ''Eleventh Five-Year'' period, Jiujiang should make good use of the national riverside development strategy to promote the construction of the Yangtze golden waterway and to speed up port construction and port opening, creating the Jiujiang logistics distribution center, which will become a focus of infrastructure investment in Jiangxi Province in order to raise Jiangxi's level of external openness.
An economic hinterland is an area that an economic center can reach and whose economic development it can promote. Under the concept of modern logistics, the factors affecting the development of a port are growing, such as the port-hinterland relationship and the capacity for value-added services. After the opening of the Beijing-Kowloon Railway, the hinterland economy of Jiujiang port will be even broader. Jiujiang port is not only responsible for loading and unloading for the transit of the pillar industries of Jiangxi Province but also gradually expands to inland provinces such as Hubei, Sichuan, and Henan. In the strategy of opening up along the river, Jiujiang must strengthen cooperation with adjacent regional cities inside and outside the province and promote the establishment of cooperation mechanisms (Yanbing and Liang 2003).
In summary, Jiujiang port should integrate existing resources to strengthen infrastructure construction, accelerate the chain structure of port container logistics services, and improve the operational efficiency of container stations and the level of operational management, so as to provide a powerful guarantee for the rapid development of container transport at Jiujiang port.
References
Chen T, Gao Q (2008) The study on the factors influencing on the container throughput of the
port. Wuhan Univ Technol (Information and Management Engineering) 30(6):991–994+1003
Deng J (2002) The basis of the gray theory. Huazhong University of Science and Technology
Press, Wuhan, pp 100–118, 361–369
Du Y (2011) The study on the issues of the development of port logistics distriparks in Jiujiang. Nanchang University, Nanchang
Jiao N, Huang G (2008) The container throughput forecast of Dalian port. World Shipping 6:7–9
Lee G, Ruan X (2007) Economic forecasting and decision-making and MATLAB. Tsinghua
University Press, Beijing
Liu S, Deng J (1999) GM (1,1) coding for exponential series. J Gray Syst 2:147–152
Liu M, Wang D (2005) Port cargo throughput forecasting method. Waterw Eng 2005 (3):53–56
Liu S, Xie N (2008) Gray system theory and its application. Science Press, Beijing, pp 66–78
Shao Z (2010) A prediction of the container throughput based on grey system theory—take
Wuhan port as an example. Port Econ 12:40–42
Shi B, Teng G (2004) The tutorial of the math example of MATLAB. China Railway Publishing
House, Beijing
Sun Y, Cheng L (2003) Gray prediction model in port container throughput forecast. Port
9:36–38
Wang F (2006) Improvement on unequal interval gray forest model. Fuzzy Inf Eng 6(1):118–123
Xiong H, Xu H (2005) Grey Control. National Defence Industry Press, Beijing, pp 32–35
Zheng Y (2011) The gray prediction of container throughput of Ningbo—Zhoushan port. China
Water Transp 11:2
Zhou Y (2007) The application of the grey forecast model in logistics needs. Modern Logistics 11:59–61
Chapter 7
A Rapid Data Processing and Assessing
Model for ‘‘Scenario-Response’’ Types
Natural Disaster Emergency Alternatives
Keywords AHP/ANP · Data processing · Emergency alternatives assessment · Logarithm mean induced bias matrix · Natural disaster
7.1 Introduction
numerical examples are used to test the proposed model in Sect. 7.3. Section 7.4
concludes the paper.
Proof Since matrix $A$ is perfectly consistent, namely $a_{ik} a_{kj} = a_{ij}$ holds for all $i$, $j$ and $k$, we have

$$
c_{ij} = \frac{1}{n} \log \prod_{k=1}^{n} a_{ik} a_{kj} - \log a_{ij}
= \frac{1}{n} \log a_{ij}^{\,n} - \log a_{ij} = 0
$$

Therefore all entries of matrix $C$ are zero, and $C$ is a zero matrix if matrix $A$ is perfectly consistent. $\square$
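The bias matrix itself can be sketched as follows (the function name is ours); for a perfectly consistent matrix every entry is zero, exactly as Theorem 7.1 states:

```python
import math

def lmibm(A):
    """Logarithm mean induced bias matrix C of a pairwise matrix A:
    c_ij = (1/n) * sum_k log10(a_ik * a_kj) - log10(a_ij)."""
    n = len(A)
    return [[sum(math.log10(A[i][k] * A[k][j]) for k in range(n)) / n
             - math.log10(A[i][j]) for j in range(n)] for i in range(n)]

# A perfectly consistent matrix has a_ij = w_i / w_j for some weight vector,
# so C should vanish (up to floating-point rounding).
w = [1.0, 2.0, 4.0, 8.0]
A = [[wi / wj for wj in w] for wi in w]
C = lmibm(A)
print(max(abs(c) for row in C for c in row))
```

For an inconsistent matrix the nonzero entries of C point to the offending judgments, which is what the identification steps below exploit.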
Obviously, the above model can be transformed into the following models by the properties of the logarithm function:

$$
\begin{cases}
C = \log \sqrt[n]{\bar{A}} + \log A^T = 0 \\[1mm]
c_{ij} = \log \sqrt[n]{\displaystyle\prod_{k=1}^{n} a_{ik} a_{kj}} + \log a_{ji} = 0
\end{cases}
\tag{7.2}
$$

$$
\begin{cases}
C = \log \Big( \sqrt[n]{\bar{A}} \circ A^T \Big) = 0 \\[1mm]
c_{ij} = \log \Big( \sqrt[n]{\displaystyle\prod_{k=1}^{n} a_{ik} a_{kj}}\; a_{ji} \Big) = 0
\end{cases}
\tag{7.3}
$$

where $\bar{A} = \big( \prod_{k=1}^{n} a_{ik} a_{kj} \big)_{n \times n}$ and the logarithm and root are taken elementwise.
k¼1
Theorem 7.2 The logarithm mean induced bias matrix (LMIBM) C is an anti
symmetric matrix if matrix A is inconsistent, that is,
64 D. Ergu et al.
C ¼ C T
ð7:4Þ
cij ¼ cji
Proof

$$
\begin{aligned}
c_{ji} &= \frac{1}{n} \log \prod_{k=1}^{n} a_{jk} a_{ki} - \log a_{ji}
= \frac{1}{n} \log \prod_{k=1}^{n} \frac{1}{a_{kj}} \cdot \frac{1}{a_{ik}} - \log \frac{1}{a_{ij}} \\
&= -\frac{1}{n} \log \prod_{k=1}^{n} a_{ik} a_{kj} + \log a_{ij} = -c_{ij}
\end{aligned}
$$

$\square$
Corollary 7.1 There must be some entries in the logarithm mean induced bias matrix (LMIBM) $C$ deviating from zero if the matrix $A$ is inconsistent. In particular, any row or column of matrix $C$ contains at least one non-zero entry.

Proof by contradiction. Assume all entries in matrix $C$ are zero even though the judgment matrix $A$ is inconsistent, that is, $a_{ik} a_{kj} \ne a_{ij}$ holds for some $i$, $j$ and $k$, but $c_{ij} = 0$ $(i, j = 1, \ldots, n)$, namely

$$
c_{ij} = \frac{1}{n} \log \prod_{k=1}^{n} a_{ik} a_{kj} - \log a_{ij} = 0
$$

Then we get

$$
a_{ij} = \sqrt[n]{\prod_{k=1}^{n} a_{ik} a_{kj}} = \sqrt[n]{\prod_{l=1}^{n} a_{il} a_{lj}}
$$

This result contradicts the assumption that $a_{ij} \ne a_{ik} a_{kj}$ for some $i$, $j$ and $k$ while $c_{ij} = 0$. Therefore any row or column of matrix $C$ contains at least one non-zero entry. $\square$
Based on Corollary 7.1, the most inconsistent entry of matrix $A$ can be identified by observing the largest absolute value in the logarithm mean induced bias matrix $C$. According to Theorem 7.2, there are two equal largest absolute values in matrix $C$ since it is antisymmetric; therefore one can observe the largest absolute value either above or below the main diagonal of $C$, which reduces the number of entries to observe to $n(n-1)/2$ and speeds up the disaster assessment.
Step 1: For the judgment matrix $A = (a_{ij})_{n \times n}$, compute the column matrix $L$ of row products and the row matrix $R$ of column products:

$$
L = \Big( \prod_{k=1}^{n} a_{1k},\ \prod_{k=1}^{n} a_{2k},\ \ldots,\ \prod_{k=1}^{n} a_{nk} \Big)^{T}, \qquad
R = \Big( \prod_{k=1}^{n} a_{k1},\ \prod_{k=1}^{n} a_{k2},\ \ldots,\ \prod_{k=1}^{n} a_{kn} \Big)
\tag{7.5}
$$

Step 2: Compute the product matrix $\bar{A}$ and from it the bias matrix $C$, where

$$
\bar{A} = L \cdot R = \Big( \prod_{k=1}^{n} a_{ik} a_{kj} \Big)_{n \times n}
$$
Step 3: Observe the largest absolute value above or below the main diagonal of matrix $C$, denoted $c_{ij}^{\max}$; the corresponding $a_{ij}$ in matrix $A$ is then identified as the most inconsistent entry. If other entries have values close to $c_{ij}^{\max}$, they can also be identified as highly inconsistent entries.
In the above section the most inconsistent entry is identified. In the following, an estimation formula is derived to adjust it. Assume $a_{ij}$ is the identified inconsistent entry, corresponding to $c_{ij}^{\max}$ in matrix $C$, and write $\tilde{a}_{ij}^{\,n-2} = \prod_{k \ne i,j} a_{ik} a_{kj}$. Then

$$
\begin{aligned}
c_{ij}^{\max} &= \frac{1}{n} \log \prod_{k=1}^{n} a_{ik} a_{kj} - \log a_{ij}
= \frac{1}{n} \log \Big( a_{ij}^2 \prod_{k \ne i,j} a_{ik} a_{kj} \Big) - \log a_{ij} \\
&= \frac{1}{n} \log \big( a_{ij}^2\, \tilde{a}_{ij}^{\,n-2} \big) - \log a_{ij}
\end{aligned}
$$

(the terms $k = i$ and $k = j$ each contribute a factor $a_{ij}$). Hence

$$
n c_{ij}^{\max} = \log \frac{a_{ij}^2\, \tilde{a}_{ij}^{\,n-2}}{a_{ij}^{\,n}} = (n-2) \log \frac{\tilde{a}_{ij}}{a_{ij}}
\;\Longrightarrow\;
\tilde{a}_{ij} = a_{ij} \cdot 10^{\frac{n c_{ij}^{\max}}{n-2}}
\tag{7.7}
$$
Assume a decision maker needs to quickly assess the disaster degree of four places attacked by an earthquake with respect to the indirect economic loss, in order to make a "scenario-response" type relief resource allocation. The 4 × 4 matrix A with CR = 0.173 > 0.1 used in Ergu et al. (2011b) and Liu (1999) is taken as the judgment matrix collected by the emergency expert in this paper, that is,

A = ( 1    1/9   3    1/5
      9    1     5    2
      1/3  1/5   1    1/2
      5    1/2   2    1   )
Apply the LMIBM to this matrix.
Step 1: Compute the column matrix L and the row matrix R by formula (7.5),
L = ( 0.0667  90  0.0333  5 )^T,   R = ( 15  0.0111  30  0.2 )
Applying formula (7.7) to estimate the possible proper value of a31, we get

ã31 = a31 · 10^{4·c31^max/2} = (1/3) · 10^{2 × 0.4019} = 2.1217 ≈ 2
Therefore, the identified inconsistent data and its estimated value are the same as the ones in Ergu et al. (2011b) and Liu (1999), whose CR = 0.0028 < 0.1, but the number of observed entries is reduced to 6.
In the following, an 8 × 8 pair-wise comparison matrix A introduced in Cao et al. (2008), Ergu et al. (2011b) and Xu and Wei (1999) is used to test the proposed model on a matrix of higher order.
A = ( 1    5    3    7  6  6    1/3  1/4
      1/5  1    1/3  5  3  3    1/5  1/7
      1/3  3    1    6  3  4    6    1/5
      1/7  1/5  1/6  1  1/3 1/4 1/7  1/8
      1/6  1/3  1/3  3  1  1/2  1/5  1/6
      1/6  1/3  1/4  4  2  1    1/5  1/6
      3    5    1/6  7  5  5    1    1/2
      4    7    5    8  6  6    2    1   )
Apply the LMIBM to this matrix.
Step 1: The column matrix L and the row matrix R computed by formula (7.5) are

L = ( 315  0.0857  86.4  0.00003  0.0009  0.0037  218.75  80640 )^T
R = ( 0.0032  11.6667  0.0116  35280  1080  270  0.0046  0.0000124 )

(for a reciprocal matrix, each entry of R is the reciprocal of the corresponding entry of L).
Applying formula (7.7) to estimate the possible proper value of a37, we get

ã37 = a37 · 10^{8·c37^max/6} = 6 · 10^{(8/6) × (−0.8286)} = 0.4714 ≈ 1/2
The identified inconsistent data are the same as the ones in Cao et al. (2008), Ergu et al. (2011b) and Xu and Wei (1999), whose CR = 0.0828 < 0.1, but the number of observed entries is reduced to 28 instead of 56.
7.4 Conclusion
Acknowledgments This research has been supported by grants from the talent research plan of
Foxconn Technology Group (11F81210001).
References
Bina O, Edwards B (2009) Social capital and business giving to charity following a natural
disaster: an empirical assessment. J Socio-Econ 38:601–607
Cao D, Leung LC, Law JS (2008) Modifying inconsistent comparison matrix in analytic
hierarchy process: a heuristic approach. Decis Support Syst 44:944–953
Tsai C-H, Chen C-W (2010) An earthquake disaster management mechanism based on risk
assessment information for the tourism industry—a case study from the island of Taiwan.
Tour Manag 31:470–481
Ergu D, Kou G, Shi Y, Shi Y (2011a) Analytic network process in risk assessment and decision
analysis. Comput Oper Res. doi:10.1016/j.cor.2011.03.005 (in press)
Ergu D, Kou G, Peng Y, Shi Y (2011b) A simple method to improve the consistency ratio of the
pair-wise comparison matrix in ANP. Eur J Oper Res 213(1):246–259
Ergu D, Kou G, Peng Y, Shi Y, Li F (2012) Data consistency in emergency management. Int J
Comput Commun Control 7(3):451–459
Xu J, Lu Y (2009) Meta-synthesis pattern of analysis and assessment of earthquake disaster
system. SETP 29(11):1–18
Levy JK, Taji K (2007) Group decision support for hazards planning and emergency
management: a Group Analytic Network Process (GANP) approach. Math Comput Model
46(7–8):906–917
Koczkodaj WW, Szarek SJ (2010) On distance-based inconsistency reduction algorithms for
pairwise comparisons. Logic J IGPL 18(6):859–869
Zhong L, Liu L, Liu Y (2010) Natural disaster risk assessment of grain production in Dongting
Lake area, China. Agric Agric Sci Procedia 1:24–32
Li H, Ma L (2007) Detecting and adjusting ordinal and cardinal inconsistencies through a
graphical and optimal approach in AHP models. Comput Oper Res 34(3):780–798
Saaty TL (1994) How to make a decision: the analytic hierarchy process. Interfaces 24:19–43
Tsai C-H, Chen C-W (2011) The establishment of a rapid natural disaster risk assessment model
for the tourism industry. Tour Manag 32:158–171
Tufekci S, Wallace WA (1998) The emerging area of emergency management and engineering.
IEEE Trans Eng Manag 45(2):103–105
Liu W (1999) A new method of rectifying judgment matrix. Syst Eng: Theory Pract 6:30–34 (in
Chinese)
Wei Y-M, Fana Y, Lua C, Tsai H-T (2004) The assessment of vulnerability to natural disasters in
China by using the DEA method. Environ Impact Assess Rev 24:427–439
Xu Z, Wei C (1999) A consistency improving method in the analytic hierarchy process. Eur J Oper Res 116:443–449
Chapter 8
A Research on Value of Individual Human
Capital of High-Tech Enterprises Based
on the BP Neural Network Algorithm
Abstract Based on a review of the relevant research and the characteristics of individual human capital in high-tech enterprises, this paper constructs a comprehensive evaluation index system comprising four first-level indicators and sixteen secondary indicators. Using the BP neural network algorithm and data on thirty equipment manufacturing enterprises, the paper establishes a model for evaluating the value of the individual human capital of the chief engineer. It holds that although many approaches to evaluating individual human capital value exist, many problems remain unresolved, such as overly broad concept definitions, a lack of understanding of the specificity of the evaluation object, reliance on a single evaluation method, and inadequate practicability or improper model design; among these, the critical problem is that evaluation systems cannot be truly applied in practice.
8.1 Introduction
Discussion on value of managers has been derived from the practice of early
human activities. The famous ancient Greek philosopher Plato has divided people
(according to their social values) in his Utopia into three classes, deeming it was
determined by the talent of man’s character. Ancient Roman slave-owners and
thinkers Gato and Varro focused on the selection criteria of the chamberlain in On
Agriculture, while in the eighteenth century, the father of ‘‘free economy’’ Adam
Smith has already regarded human as a form of capital. According to Neo-classical
X. Li P. Zhang (&)
Department of Management, Northwest University for Nationalities,
Lanzhou, China
e-mail: [email protected]
8.2 Methodology
Based on the latest research results from domestic and foreign scholars (Fu 2007),
combined with the characteristics of individual human capital of high-tech
enterprises, the present paper holds that the value of human capital in this sort of
enterprises includes four parts: the existing value of human capital, the value of the
contribution of human capital, the ongoing costs and opportunity cost of human
capital. Their relationship is as follows:
8 A Research on Value of Individual Human Capital 73
total value of human capital = existing value of human capital
                             + potential value of human capital
                             − replacement costs of human capital
                             − opportunity cost of human capital        (8.1)
Among them, the existing value of human capital comes from factors external to (or in the past of) the enterprise, determined by the staff's physiological condition, abilities, education, work experience and previous contributions to the enterprise. The potential value of human capital is the present value of the expected contribution of the employees to the enterprise. The ongoing costs of human capital are the investments required for human capital to keep contributing, including expenditure on staff recruitment, training, physiological needs, and so on. The opportunity cost of human capital is the maximum possible gain brought by other options at the same cost. The evaluation index system of high-tech enterprise human capital value is shown in Tables 8.1 and 8.2.
At present, the neural network model has developed into many types, of which the most representative is the multi-layer perceptron. In 1985, Rumelhart proposed the error back-propagation learning algorithm, realizing Minsky's multi-layer network assumption, as shown in Cheng et al. (2008) (Fig. 8.1).
The BP neural network has not only input layer nodes and output layer nodes, but also one or more hidden layers. An input signal first propagates forward to the hidden layer nodes; after the activation function is applied, the hidden layer outputs are passed on to the output layer nodes, which produce the final output. The S-shaped (Sigmoid) function is usually chosen as the activation function at each node.
For example:

f(x) = 1 / (1 + e^{−x/Q}),
where Q is the Sigmoid parameter adjusting the shape of the activation function. The learning process of the algorithm consists of forward propagation and back propagation. In the forward propagation process, the input information passes from the input layer through the hidden layer(s), is processed layer by layer, and is transferred to the output layer; the state of each layer of neurons influences the state of the next layer. If the output layer fails to produce the desired output, the algorithm switches to back propagation, returning the error signal along the connection channels and modifying the weights of all layers of neurons so as to minimise the error signal.
Consider an arbitrary network containing n nodes, each of Sigmoid type, and suppose the network has only one output y. Let Oi denote the output of any node i, and let there be N samples (Xk, Yk) (k = 1, 2, 3, …, N). For a particular input Xk, the output of node i is Oik, and the input of node j is

netjk = Σ_i Wij Oik.

The error function is defined as

E = (1/2) Σ_{k=1}^N ( yk − ŷk )²,

where ŷk is the actual network output for sample k.
Thus:

δjk = f′(netjk) Σ_m δmk Wmj
∂Ek/∂Wij = δjk Oik
The above algorithm indicates that the BP model converts the input-output problem of a set of samples into a nonlinear optimization problem, achieving an effective dimensionality reduction.
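As a minimal runnable sketch of these forward and backward passes, the following uses the text's sigmoid f(x) = 1/(1 + e^(−x/Q)) and the δ·O weight-update rule; the layer sizes, learning rate and toy data are assumptions, not taken from the chapter:

```python
import numpy as np

rng = np.random.default_rng(0)
Q, lr = 0.9, 1.0                       # Sigmoid parameter, learning rate

def f(x):                              # f(x) = 1 / (1 + e^(-x/Q))
    return 1.0 / (1.0 + np.exp(-x / Q))

X = rng.random((15, 3))                # 15 toy samples, 3 inputs each
y = X.mean(axis=1, keepdims=True)      # toy target in [0, 1]
W1 = rng.normal(0.0, 0.5, (3, 4))      # input -> hidden weights
W2 = rng.normal(0.0, 0.5, (4, 1))      # hidden -> output weights

def forward(X):
    H = f(X @ W1)                      # hidden layer outputs
    return H, f(H @ W2)                # network output

_, out = forward(X)
mse0 = float(np.mean((out - y) ** 2))  # error before training

for _ in range(5000):
    H, out = forward(X)
    # back propagation: delta = f'(net) * downstream error,
    # with f'(x) = f(x) * (1 - f(x)) / Q for this sigmoid
    d_out = (out - y) * out * (1 - out) / Q
    d_hid = (d_out @ W2.T) * H * (1 - H) / Q
    W2 -= lr * H.T @ d_out / len(X)    # dE/dW = delta * O
    W1 -= lr * X.T @ d_hid / len(X)

_, out = forward(X)
mse = float(np.mean((out - y) ** 2))   # error after training
```

The error decreases as the weights are adjusted iteration by iteration, mirroring the training behaviour described in Sect. 8.3.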
The human capital evaluation index system of high-tech enterprises takes both individual variables and overall enterprise variables, and both relative and absolute indicators, into consideration; in addition, dummy variables are introduced. By adopting the BP artificial neural network approach and standardizing the data, a dimensionless comprehensive evaluation result can be obtained. In the learning process of the network model itself, the network parameters are adjusted continually until the desired output meets the requirements.
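The standardization step mentioned above can be a simple min-max rescaling of each indicator to the interval [0, 1]; a sketch (the sample values and indicator meanings are invented for illustration):

```python
import numpy as np

def min_max_normalize(data):
    """Rescale every column (indicator) of a sample matrix to [0, 1]."""
    data = np.asarray(data, dtype=float)
    lo, hi = data.min(axis=0), data.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)   # guard against constant columns
    return (data - lo) / span

samples = np.array([[25.0, 3.2, 1.0],        # e.g. age, title score, dummy
                    [40.0, 1.1, 0.0],
                    [33.0, 2.0, 1.0]])
norm = min_max_normalize(samples)
```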
8.3 Results
The Original empirical data come from the survey questionnaire of ‘‘the techno-
logical innovation power of Lanzhou city in 2006’’. The survey had 148 valid
questionnaires returned, and the choosing of industry was based on the criteria for
the classification of ‘‘high-tech industry’’ from the classification categories of
high-tech industry released in 2002 by National Bureau of Statistics. To make the
sample data strong in comparability, the present paper selected 30 equipment
manufacturing enterprises, established for the chief engineer an evaluation model
of individual human capital value.
The 30 groups of sample observations are normalized to values between 0 and 1; 15 groups serve as model-building data and the remaining 15 as model-test data. The input parameters of the BP neural network model are: 17 input nodes, 1 intermediate layer with 12 nodes, a minimum training rate of 0.1, a dynamic parameter of 0.6, a Sigmoid parameter of 0.9, a permissible error of 0.001, and a maximum of 1000 iterations. After 280 iterations, the error of the neural network comes close to zero; the fitting residual of the model is about 0.0001677, and thus the training result is desirable. The weights of each node in the model are shown in Table 8.2.
The final fitting state of the model is shown in Table 8.3. The average error of the 15 groups of modeling samples is 0.000979, while that of the 15 groups of testing samples is 0.000887. The learning result of the whole model is desirable, and it fits the testing samples well. Therefore, the above model can be used to evaluate the individual human capital value of the chief engineer in other similar enterprises.
8.4 Conclusion
At present, although there are many approaches to the study of human capital value, many problems remain, mainly in the following three respects:
First, the definition of the concept of human capital value is overly broad. It is commonly believed that human capital value derives from labor and creativity, but in reality the embodiment of these factors is specific and varies. As is often the case, newly recruited talents with high education and high titles have not yet created new value, yet they are still counted as an increase in the stock of the enterprise's human capital. In addition, there is a relatively large discrepancy between the potential (expected) value of human capital and its actual value.
78 X. Li and P. Zhang
Second, the object of the evaluation of human capital value has its own particularity. While some argue that general human capital evaluation criteria can be established, the variety of evaluation objects leads to quite different conclusions. From the point of view of scientific research, a conclusion drawn simply from induction runs into the "problem of induction" (Liang 2007). Since the testing data fail to meet the assumptions of random sampling, the law of large numbers and the central limit theorem, research trying to derive a general human capital value mostly leads to a generalization problem (Wooldridge 2003). Indeed, the systematic analysis of non-stationary random variable regression by Granger and Newbold (1974) shows that, regardless of the causal relationship between the random variables, the more unstable the variables are, the better the regression equation appears to fit.
Third, some evaluation approaches to human capital value are too simple, insufficiently practicable, or unreasonably designed, and so cannot be widely adopted in practice, owing to problems such as a lack of valid indicators, a lack of uniqueness in the overall model setting, and insufficient statistical testing of the correlative relations (Li 2008).
In view of the above problems, this paper holds that the evaluation of human capital value should first define the scope of the study and analyze the characteristics of the evaluation object, in order to establish an evaluation index system and construct an evaluation model.
Acknowledgments This paper is supported by the Humanities and Social Science Fund of the
Ministry of Education (09XJC790014; 11XJC630008).
References
Becker GS (1962) Investment in human capital: a theoretical analysis. J Polit Econ 70(5):9–49
Cheng L, Liu ZW, Ai XY (2008) An empirical analysis on the influencing factors of human
resource value. Acc Res 1:26–32
Dai CL (2006) Equity allocation and measurement of human capital for hi-tech enterprises. J Wuhan Univ Technol (Inf Manag Eng) 28(12):150–154
Flamholtz EG (1986) Human resource accounting. Shanghai Translation Publishing Company,
Shanghai, pp 23–27
Fu X (2007) Management contribution measurement on human capital of entrepreneurs.
Collected Essays Financ Econ 1:82–88
Granger C, Newbold P (1974) Spurious regressions in econometrics. J Econom 2:111–120
Guo XG (2003) The history of management theory. China Economic Publishing House Press,
Beijing, pp 9–10
Li ZN (2008) The specification of population regression models in econometric application
studies. Econ Res J 8:136–144
Liang ZY (2007) Study on the measurement of human capital value. Stat Decis 5:166–168
Marx K (1975) Capital. The Commercial Press, Beijing, p 34
Schultz TW (1961) Investment in human capital. Am Econ Rev 51(1):1–17
Schultz TW (1990) Human capital investment and urban competitiveness. Beijing Economy
Institute Press, Beijing, p 113
Abstract This study is concerned with the evaluation of wind power projects under the Clean Development Mechanism (CDM), not only for the purpose of CDM verification, but also for the financing of the project. A real options model is developed in this paper to evaluate investment decisions on wind power projects. The model obtains the real value of the project and determines the optimal time to invest in a wind power project. Stochastic programming is employed to evaluate the real options model, and a scenario tree, generated by a path-based sampling method and LHS discretization, is constructed to approximate the original stochastic program.
9.1 Introduction
Environmental issues have received global attention and carbon trading is con-
sidered to be a primary solution to reduce greenhouse gas (GHG) emissions. As a
developing country, China is mainly involved in the Clean Development Mech-
anism (CDM), which is one of the instruments created by the Kyoto Protocol to
facilitate carbon trading, and China’s corporations related to environmental
investments are facing the decision problems under the uncertainty of carbon
price.
According to the statistics of international wind energy committee, over 80 %
of carbon emissions come from the energy industry, of which 40 % is due to the
The fundamental hypothesis of the traditional DCF method is that future cash flows are static and certain, and that management does not need to rectify an investment strategy in the face of changing circumstances (Myers 1977). Nevertheless, this is inconsistent with real-life situations. In practice, corporations often face many uncertainties and risks, and management has to address such uncertainties proactively. This means that the issues of operating flexibility and timing strategy have to be considered (Donaldson and Lorsch 1983).
Other than DCF, real options analysis can be used to deal with investment options and managerial flexibility. The real options method has its roots in financial options, and it is the application and development of financial options in the field of real asset investment. It is generally agreed that the pioneering work on real options was due to Myers (1977), which suggested that, although corporate investments do not take contractual forms as financial options do, investments under high uncertainty still have characteristics similar to financial options.
84 H. Qin and L. K. Chu
Therefore, the options pricing method can be used to evaluate such investments. Subsequently, Myers and Turnbull (1977) suggested 'growth options' for corporate investment opportunities. Kester (1984) further developed Myers' research, arguing that even a project with an unfavourable NPV could have investment value if the manager had the flexibility to defer investment until a later date.
After three decades of development, the theory of real options has become an important branch and a popular research topic. By examining the different kinds of managerial flexibility embedded in real options, researchers have divided real options into seven categories: the option to defer, the option to alter operating scale, the option to abandon, the option to switch, and so on. Other types of options have also been proposed and studied (O'Brien et al. 2003; Sing 2002).
The real options theory has been applied to investment problems in many different fields including biotechnology, natural resources, research and development, securities evaluation, corporate strategy, and technology (Miller and Park 2002). It has also been applied to a gas company in Britain, leading to the conclusion that certain projects are not economically feasible unless they are permitted a faster price rise (Sarkis and Tamarkin 2005). Qin and Chu (2011) developed a real options model to determine the optimal time to invest in a Carbon Capture and Storage (CCS) project in China, and analysed the behaviour of China's green technology business. It took the emission credits cost into consideration and verified the real options analysis by evaluating environment-related investments.
A license to invest in a wind power plant is a real option: the investor has the right, but not the obligation, to pay the investment cost and obtain the revenue from the project. Facing an uncertain future and the irreversibility of decisions, the investor values the opportunity to wait and gather more information about future conditions. The objective of this real options model is to obtain the optimal timing for a wind power investment decision under carbon price uncertainty.
Different from the traditional NPV method, the real options analysis considers not only whether to invest, but also when to invest. The opportunity to invest in a wind power plant could be taken as an option, whose value is given by the following equation:

ROV = max(NPVo) − NPVs        (9.1)
where NPVs is the static NPV of the wind power project investment without considering any management flexibility, i.e., investment is made at stage t = 1, and NPVo is the NPV of the investment with the timing option, i.e., the investor may delay the investment to the optimal time. Let T be the number of planning periods; normally it is around 20 years, as the franchise period of a wind power project in China is 25 years and it takes several years to construct the power plant. The decision variable is denoted Xt, a 0–1 variable: if the investor exercises the investment at stage t, Xt = 1; otherwise Xt = 0:

Xt = 1 (to invest) or 0 (not to invest),   t ∈ {1, …, T}
Under the CDM mechanism, the revenues from the investment in wind power come from two sources. The first is the income from the electricity generated and sold (RE); RE is deterministic because the feed-in tariff in China is determined in the bidding. The other is the income from CERs (RC), which is uncertain because the carbon price varies over time. The revenue of investment R (per kWh) in wind power is the sum of these two parts: R = RE + RC. In the proposed real options model, we focus on the income from CERs.
If the wind power plant produces Et (tonnes) of CERs per unit of electricity output every year, and all the CERs are tradable in the carbon trading market, then the income from CER sales is At · Et. Denote by It the initial investment, k the construction period in years, Qt the annual generating capacity (kWh) of the wind power plant, and ct the unit cost of generating output; NPVo can be expressed as follows:
NPVo = Σ_{t=1}^T Xt [ −It / (1 + cf)^{t−1} + Σ_{K=t+k}^T Qt (pt + At·Et − ct) / (1 + c)^{K−1} ]        (9.2)

where
It   initial investment
Qt   annual generating capacity
ct   unit cost of generating output
Et   CER emission reduction per unit of electricity output
pt   feed-in tariff of electricity
k    construction period
c    discount rate
cf   risk-free rate.
To obtain the real options value, we need to maximise NPVo. The decision problem can be described as:

max NPVo        (9.3a)

s.t.
Σ_{t=1}^T Xt = 1        (9.3b)
Xt = 0 or 1, ∀t ∈ {1, …, T}        (9.3c)
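On simulated CER price paths, this timing problem amounts to picking, per path, the stage t that maximises NPVo and comparing the result with the static NPV of investing at t = 1. The sketch below uses illustrative parameter values and a toy price process; all numbers are assumptions, not taken from the chapter:

```python
import numpy as np

T, k = 20, 2                    # planning periods, construction years
I0 = 1.5e8                      # initial investment It (assumed constant)
Qg, pt, ct = 2e8, 0.25, 0.20    # kWh per year, feed-in tariff, unit cost
Et = 0.0008                     # tonnes of CERs per kWh
c, cf = 0.08, 0.04              # discount rate, risk-free rate

def npv_invest_at(t, A):
    """NPVo of Eq. (9.2) with X_t = 1 on a single price path A[0..T-1]."""
    value = -I0 / (1 + cf) ** (t - 1)
    for K in range(t + k, T + 1):
        value += Qg * (pt + A[K - 1] * Et - ct) / (1 + c) ** (K - 1)
    return value

rng = np.random.default_rng(1)
paths = 60 + rng.normal(0, 5, size=(50, T)).cumsum(axis=1)  # toy CER prices

npv_static = np.mean([npv_invest_at(1, A) for A in paths])           # NPVs
npv_timed = np.mean([max(npv_invest_at(t, A) for t in range(1, T + 1))
                     for A in paths])                                # max NPVo
rov = npv_timed - npv_static                                         # Eq. (9.1)
```

Because the stage t = 1 is always in the feasible set, the timed value can never fall below the static value, so the resulting ROV is non-negative.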
The revenue of investment R (per kWh) in wind power is uncertain because of the uncertainty of the revenue from CERs (RC), which can change considerably depending on factors such as environmental policy legislation, the cost of alternative fuels, product market demand and so on. Assume the price of CERs, At, shifts stochastically over time according to a geometric Brownian motion:

dAt = μ At dt + σ At dz        (9.4)

where μ is the drift rate, σ the volatility, and dz the increment of a standard Wiener process.
[Fig. 9.1 Scenario tree of CER prices: each node (t, n) carries the price-probability pair (A_n^t, P_n^t), from the root (1, 1) to the terminal nodes (T, 1), …, (T, NT) with path probabilities P1, P2, …, PS]
Given the assumption that the prices of CERs A_n^t shift stochastically over time according to the geometric Brownian motion expressed in Eq. (9.4), sample paths of A_n^t can be obtained. Figure 9.2 shows 50 sample paths generated with the parameter values in Table 9.1.
In general, continuous distributions cannot be incorporated directly in sto-
chastic programs because of the infinite number of scenarios that need to be
examined. For this reason, discrete distributions are generally used to approximate
the continuous distribution of stochastic parameters. The quality of scenario tree,
which consists of such discretized distributions, is vital to the accuracy of the final
solutions.
[Fig. 9.2 Fifty sample paths of the CER price (yuan) over stages t = 0 to 20]
Various methods have been proposed to generate scenario trees. These can be classified into external and internal sampling methods. External sampling methods include optimal discretization, conditional sampling, moment matching, and path-based sampling. Internal sampling is in effect a stochastic-program solving algorithm in which scenarios are sampled during the solution procedure; commonly used internal sampling methods are stochastic decomposition, stochastic quasi-gradient methods, and EVPI-based sequential importance sampling (Chu et al. 2005).
Equation (9.4) describes the price of CERs A_n^t as a known stochastic process. For the sake of simplicity, path-based sampling is applied to generate the scenario tree, and Latin Hypercube Sampling (LHS) is employed in this study to discretize A_n^t.
9.5 Conclusion
References
Birge JR, Rosa CH (1996) Incorporating investment uncertainty into green-house policy models.
Energy J 17:79–90
Bonifant BC, Arnold MB, Long FJ (1995) Gaining competitive advantage through environmental
investments. Bus Horiz 38:37–47
Cariño DR, Ziemba WT (1998) Formulation of the Russell-Yasuda Kasai financial planning
model. Oper Res 46:433–449
Cariño DR, Myers DH, Ziemba WT (1998) Concepts, technical issues, and uses of the Russell-
Yasuda Kasai financial planning model. Oper Res 46:450–462
Chu LK, Xu YH, Mak KL (2005) Evaluation of interacting real options in RFID technology
investment: a multistage stochastic integer programming model. Presented at the 14th
international conference on management of technology, Austria
de Neufville TWR (2004) Building real options into physical systems with stochastic mixed
integer programming. Presented at the 8th international conference on real options, Montreal
Dentcheva WRD (1998) Optimal power generation under uncertainty via stochastic program-
ming. In: Marti K, Kail P (eds) Stochastic programming methods and technical applications.
Springer, Berlin, pp 22–56
Donaldson G, Lorsch J (1983) Decision making at the top: the shaping of strategic direction.
Basic Books, New York
Kester WC (1984) Today’s options for tomorrow’s growth. Harv Bus Rev 62(2):153–160
Miller LT, Park CS (2002) Decision making under uncertainty-real options to the rescue? Eng
Econ 47:105
Mondschein SV, Schilkrut A (1997) Optimal investment policies for pollution control in the
copper industry. Interfaces 27:69–87
Myers SC (1977) Determinants of corporate borrowing. J Finan Econ 5(2):147–176
Myers SC, Turnbull SM (1977) Capital budgeting, and the capital asset pricing model: good news
and bad news. J Finan 32:321–333
Nehrt C (1996) Timing and intensity effects of environmental investments. Strateg Manag J
17:535–547
O’Brien JP, Folta TB, Douglas RJ, Timothy B (2003) A real options perspective on entrepreneurial
entry in the face of uncertainty. Manag Decis Econ 24:515–544
9 A Stochastic Programming Model 91
Porter ME, van der Linde C (1995) Green and competitive: ending the stalemate. Harv Bus Rev
73:120–134
Presley A, Sarkis J (1994) An activity based strategic justification methodology for ECM technology. J Environ Conscious Des Manuf 3:5–17
Qin H, Chu LK (2011) Real options model for valuating China greentech investments. In: 7th
international conference on digital enterprise technology, Greece, pp 134–141
Sarkis J (1999) A methodological framework for evaluating environmentally conscious
manufacturing programs. Comput Ind Eng 36:793–810
Sarkis J, Tamarkin M (2005) Real options analysis for ‘‘green trading’’: the case of greenhouse
gases. Eng Econ 50:273–294
Sing TF (2002) Time to build options in construction processes. Constr Manag Econ 20:119–131
Zhu L (2011) Energy investment modeling and its applications under the background of energy
security. Doctor, Management Science and Engineering, University of Science and
Technology of China
Chapter 10
Action Recognition Based on Hierarchical
Model
10.1 Introduction
Human action recognition is widely applied in the fields of surveillance, image and
video retrieval and so on. Recently many researches have been done for the
recognition, however, it still is a challenging problem due to diverse variations of
human body motions, for example, occlusion, scale, individual appearance and
individual motion fashion. To achieve good recognition performance, a good
representation with rich appearance and motion information is of vital. Usually if a
feature includes not only still appearance but also dynamic motion information, the
feature dimension is high, and it is more discriminative to classify the actions. But
this leads to high computational costs. To overcome this problem, we propose a
hierarchical model for action recognition using multi-features, and show its per-
formance compared to other exist algorithms.
The rest of this paper is organized as follows. Section 10.2 describes related
work. Section 10.3 elaborates the details of our model. Section 10.4 reports the
experiment results. Conclusion is given in Sect. 10.5.
Many works have been proposed for action recognition, and some reviews (Moeslund et al. 2006; Poppe 2010; Ji and Liu 2010) have provided detailed descriptions of the action recognition framework. Here, we focus only on the action recognition work related to ours.
Features based on shape or appearance are a traditional representation in video analysis. This kind of representation is usually related to global properties of the human body. First, the human is separated from the background, yielding a bounding box; then various descriptors based on silhouettes or edges are computed. Deng et al. (2010) compute the silhouettes of the human body and extract points at equal intervals along the silhouette; by constructing the 3D DAISY descriptor of each point, the space-time shape of an action is obtained. Bobick and Davis (2001) also make use of the silhouette, but they employ motion energy images (MEI) and motion history images (MHI) to describe the actions. Ji and Liu (2009) present a contour shape feature to describe multi-view silhouette images. In Nater et al. (2010) a signed distance transform is applied to a box of fixed size, and in Weinland and Boyer (2008) silhouette templates are used for matching. However, the recognition rate may be affected by the accuracy of the localization of the box and by the quality of the background subtraction.
Another popular feature representation concerns motion information. Messing et al. (2009) use the velocity histories of tracked key points to recognize actions. Laptev et al. (2008) first used the HOF descriptor to represent local motion information. Recently the combination of shape and motion features has received more attention. Lin et al. (2009) construct a shape-motion descriptor and model a shape-motion prototype tree using hierarchical k-means clustering. Klaser et al. (2008) make use of the HOG3D descriptor to combine motion and shape information simultaneously. Although these descriptors carry richer information and improve recognition accuracy, their dimensions are larger than those of descriptors with shape or motion information alone, so a trade-off between computational complexity and accuracy must be considered. Therefore, we propose a hierarchical model: in the first stage a coarse classification is made according to box features, and then different descriptors are designed for each class, which reduces the dimension of the feature vectors.
10 Action Recognition Based on Hierarchical Model 95
There are five main steps in our approach: first, preprocessing the video to obtain the human box; second, extracting global box features; third, preliminarily dividing the actions into two classes based on the box features; fourth, computing class-specific motion features for the actions in each class; and fifth, recognizing actions using a nearest neighbor classifier and a voting algorithm. In the following, we describe each step in turn.
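The five steps above can be sketched as follows. This is a minimal illustrative Python sketch, not the chapter's implementation: the variance threshold, the per-class feature functions, and the prototype sets are all assumptions.

```python
import numpy as np

def coarse_class(box_feats, threshold=1.0):
    # Step 3: Class 1 (leg-driven actions, e.g. running) shows a large
    # variance of the box-center x-coordinate over the sequence; Class 2
    # (e.g. waving) does not. The threshold value is an assumption.
    return 1 if np.var(box_feats[:, 0]) > threshold else 2

def recognize(box_feats, motion_feat_fns, prototypes):
    """Steps 3-5: coarse split -> class-specific motion features ->
    per-frame nearest-neighbour labels -> majority vote."""
    c = coarse_class(box_feats)
    feats = motion_feat_fns[c](box_feats)          # step 4: per-class features
    votes = []
    for f in feats:                                # step 5: NN + voting
        dists = [np.linalg.norm(f - p) for p, _ in prototypes[c]]
        votes.append(prototypes[c][int(np.argmin(dists))][1])
    return max(set(votes), key=votes.count)
```

Steps 1–2 (background subtraction and box-feature extraction) would supply `box_feats`, a k × 3 array of [cx, cy, r] values, one row per frame.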
10.3.1 Preprocessing
We start from a video of k frames described in RGB space. For feature extraction and recognition, we use background subtraction and filtering to segment the foreground object, which gives the box of the person in each frame of an action sequence. In our work, all instances of the same action are aligned to the same motion direction; e.g., for the action 'running', the direction is set from left to right (Fig. 10.1b, c). Furthermore, the resulting silhouettes are converted to a binary representation. Examples of the box are illustrated in Fig. 10.1, and the boxes of a video S are represented as
S = {B1, B2, …, Bk}.    (10.1)
For each box Bi (i = 1, …, k), a three-dimensional box feature vector is computed, Fbi = [cxi, cyi, ri], where cxi and cyi denote the coordinates of the box center and ri denotes the aspect ratio of the box. According to the variance of the center coordinates and of the aspect ratio over an image sequence, the simple actions can easily be divided into two classes. Class 1 contains the actions whose motion is concentrated in the legs, such as jogging, running, walking, and skipping. The actions in which the legs remain almost static, such as bending and waving, belong to Class 2.
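As a concrete sketch of the box feature Fbi and of the variance-based split, the following assumes a raw (x, y, w, h) box format and an illustrative variance threshold; neither is specified in the chapter.

```python
import numpy as np

def box_features(boxes):
    """boxes: one (x, y, w, h) bounding box per frame.
    Returns Fb_i = [cx_i, cy_i, r_i]: box center coordinates and aspect ratio."""
    return np.array([[x + w / 2.0, y + h / 2.0, w / float(h)]
                     for x, y, w, h in boxes])

def split_classes(fb, cx_var_threshold=4.0):
    """Leg-driven actions (Class 1) sweep the box center horizontally, so
    var(cx) is large; in-place actions (Class 2) keep it small.
    The threshold value is an illustrative assumption."""
    return 1 if np.var(fb[:, 0]) > cx_var_threshold else 2
```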
Having classified the actions into two classes, we can construct different weighted-motion features according to their particular characteristics.
96 Y. Wang et al.
(1) Box normalization. Since the size and the position of the box differ from frame to frame, after obtaining the box feature Fbi we normalize the boxes to equal size (80 × 80 in our case) while maintaining the aspect ratio.
(2) Division into subregions. When a person is moving in a video sequence, the body shape varies continuously over time. For the actions in Class 2, the movements of the upper limbs are more important, whereas for the actions in Class 1, e.g. running, the variations in the leg subregions are more important. In order to exactly describe the
M(Vx, Vy) = √(Vx² + Vy²),    (10.2)
O(Vx, Vy) = arctan(Vy / Vx).    (10.3)
O(Vx, Vy) is divided into eight equal sectors in polar coordinates, and M(Vx, Vy) serves as the weight. The x-axis of the weighted-motion histogram indexes the eight orientations, and the y-axis shows the accumulated contribution within each orientation; for each subregion we obtain one histogram. Furthermore, because the subregions with grids are more important than the others, a separate motion histogram is computed for each grid in such a subregion in order to represent actions more precisely. The total number of histograms is therefore 8 + 8 × 4 = 40.
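A sketch of the magnitude-weighted orientation histogram of Eqs. (10.2)–(10.3). It uses `arctan2` so the full circle is covered; the exact quantization convention is an assumption, not taken from the chapter.

```python
import numpy as np

def weighted_motion_histogram(vx, vy, n_bins=8):
    """Per Eqs. (10.2)-(10.3): orientation O is quantized into n_bins equal
    sectors and each pixel votes with weight M = sqrt(vx^2 + vy^2).
    vx, vy: optical-flow components over one subregion (2-D arrays)."""
    m = np.sqrt(vx ** 2 + vy ** 2)                  # magnitude, Eq. (10.2)
    o = np.arctan2(vy, vx)                          # orientation in (-pi, pi]
    bins = ((o + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    hist = np.zeros(n_bins)
    np.add.at(hist, bins.ravel(), m.ravel())        # magnitude-weighted votes
    return hist
```

Calling this once per subregion (and once per grid in the important subregions) yields the 40 histograms described above.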
10.4 Experiments
In addition, we exchange the motion feature structures of the two classes. When the actions in Class 2 use the feature structure of Class 1, as shown in Fig. 10.5, the average accuracy is 93.4 %. When the actions in Class 1 use the feature structure of Class 2, the influence on the recognition rate is very small. These results confirm that our method is effective.
10.5 Conclusion
References
Blank M, Gorelick L, Shechtman E, Irani M, Basri R (2005) Actions as space-time shapes. In: Proceedings of 10th IEEE conference on computer vision, ICCV'05, Beijing, China, pp 1395–1402
Bobick AF, Davis JW (2001) The recognition of human movement using temporal templates.
IEEE Trans Pattern Anal Mach Intell 23(3):257–267
Deng C, Cao X, Liu H, Chen J (2010) A global spatio-temporal representation for action
recognition. In: Proceedings of 20th conference on pattern recognition, ICPR’10, Istanbul,
Turkey, pp 1816–1819
Horn BKP, Schunck BG (1981) Determining optical flow. Artif Intell 17(1–3):185–203
Ji X, Liu H (2009) View-invariant human action recognition using exemplar-based hidden
markov models. In: Proceedings of 2nd conference on intelligent robotics and applications,
ICIRA’09, Singapore, pp 78–89
Ji X, Liu H (2010) Advances in view-invariant human motion analysis: a review. IEEE Trans
Syst Man Cybern Part C Appl Rev 40(1):13–24
Klaser A, Marszalek M, Schmid C (2008) A spatio-temporal descriptor based on 3D-gradients. In:
Proceedings of 19th British machine vision conference, BMVC’08, Leeds, UK, pp 995–1004
Laptev I, Marszalek M, Schmid C, Rozenfeld B (2008) Learning realistic human actions from
movies. In: Proceedings of 21st IEEE conference on computer vision and pattern recognition,
CVPR’08, Anchorage, AK, pp 1–8
Lin Z, Jiang Z, Davis LS (2009) Recognizing actions by shape-motion prototype trees. In:
Proceedings of 12th IEEE conference on computer vision, ICCV’09, Kyoto, Japan, pp 444–451
Messing R, Pal C, Kautz H (2009) Activity recognition using the velocity histories of tracked
keypoints. In: Proceedings of 12th IEEE conference on computer vision, ICCV’09, Kyoto,
Japan, pp 104–111
Moeslund TB, Hilton A, Krüger V (2006) A survey of advances in vision-based human motion
capture and analysis. Comput Vis Image Underst 104(2–3):90–126
Nater F, Grabner H, Van Gool L (2010) Exploiting simple hierarchies for unsupervised human
behavior analysis. In: Proceedings of 23rd IEEE conference on computer vision and pattern
recognition, CVPR’10, San Francisco, CA, pp 2014–2021
Poppe R (2010) A survey on vision-based human action recognition. Image Vis Comput
28(6):976–990
Scovanner P, Ali S, Shah M (2007) A 3-dimensional sift descriptor and its application to action
recognition. In: Proceedings of 15th conference on multimedia, MM’07, Augsburg, Germany,
pp 357–360
Weinland D, Boyer E (2008) Action recognition using exemplar-based embedding. In:
Proceedings of 21st IEEE conference on computer vision and pattern recognition, CVPR’08,
Anchorage, AK, pp 1–7
Chapter 11
Adaptive Ant Colony Algorithm
Researching in Cloud Computing Routing
Resource Scheduling
Zhi-gao Chen
Abstract Cloud computing has been regarded as one of the most important planning projects for the future; the technology will benefit thousands of enterprises in China. The advantages of cloud services depend on efficient, fast network conditions. At present, under China's condition of limited bandwidth, it is necessary to study a fast and efficient routing mechanism that schedules resources according to the maximum capacity of each network node. In this paper, a network capacity parameter is therefore added as a threshold at each node so that routing adapts: the shortest path can still be found quickly, as in the traditional ant algorithm, while the network capacity of the nodes on the path is adjusted accordingly. The experimental results show that this method can effectively avoid data congestion on the critical path.
11.1 Introduction
Z. Chen (&)
Hu Nan vocational institute of science and technology,
Changsha, China
e-mail: [email protected]
engineering and science 2010). At present, China is in the growth stage of cloud computing (Proceedings of 2010 third international symposium on knowledge acquisition and modeling (KAM 2010) 2010; Han Bing 2010); over the next 10 years, many IT enterprises and small and medium-sized enterprises in China will share hundreds of billions of dollars of the ''cloud computing cake'' (Nezamabadi-pour et al. 2006).
There are, however, some disadvantages in China's cloud computing industry. Cloud computing needs to run on a high-speed network to exert its advantages (Satyanarayanan 2001); although broadband technology has been applied widely in China, the speed of the network is still lower than in other countries. A prerequisite for the construction of cloud computing in China is therefore a set of efficient and secure routing mechanisms that achieve efficient resource scheduling and storage on the basis of limited bandwidth. On the other hand, with the expansion of cloud computing networks and the increase of cloud services, higher requirements are placed on the existing network bandwidth and on resource scheduling. Research on and application of efficient routing mechanisms for cloud computing can therefore improve routing efficiency and resource scheduling speed; as a result, cloud services will run more efficiently on the existing network infrastructure, the return on investment in cloud computing infrastructure will improve, and a better reference and basis for China's construction of cloud service platforms will be provided, which is of great significance for social progress and for driving the development of national industry.
management on their physical nodes. Internally, files are usually divided into one or more data blocks, and these blocks are stored on a set of DataNodes (IP Multimedia Subsystem (IMS) 2009). The NameNode executes operations on the file system namespace, such as open, close, and rename, and also determines the mapping of data blocks to DataNodes. The DataNodes are responsible for processing read and write requests from clients, and also create and delete data blocks in accordance with NameNode instructions.
The ant algorithm (ant colony algorithm), proposed by the Italian scholar Marco Dorigo in 1992, is a parallel and efficient evolutionary algorithm (Guo et al. 2010). It is a probabilistic technique that simulates the foraging process of ants in nature to find optimal paths in a graph. Its core idea is: ants deposit a chemical substance called ''pheromone'' along the path while searching for food (Hayden 2009); this pheromone provides heuristic information about route choice for subsequent ants, and as the pheromone is constantly updated, an optimal path from the nest to the food can be found in a relatively short time. The algorithm has the advantages of high parallelism and fast convergence and has produced satisfactory experimental results on the traveling salesman problem and on routing and scheduling problems, but the standard ant algorithm easily falls into local optima. Considering that the cloud computing environment is large in scale and has quality-of-service requirements, an algorithm for efficient resource scheduling should find the shortest path while also meeting the bandwidth requirements that cloud services place on each node along the path. The proposed adaptive ant algorithm is designed for running cloud services under China's current limited bandwidth: by setting an appropriate threshold on the minimum network capacity, it adaptively adjusts the search for the shortest path, so that it can quickly discover resources such as routes while also improving the convergence of the algorithm and the QoS.
the maximum network capacity that can be provided between nodes is uneven. Once all traffic is transmitted over the shortest path, the network nodes with smaller capacity will enter congestion prematurely. If this cannot be adjusted in a timely manner, follow-up traffic will continue to be transferred over the shortest path, leading to congestion and data retransmission, which is very unfavorable for the transmission of cloud services. We therefore establish a multi-path routing table for each pair of source and destination nodes. The search idea is: a network capacity threshold is set so that, when the preferred shortest path becomes congested, routing automatically switches to a new ''sub-optimal'' shortest path, and so on, so that the entire network keeps operating normally in a state of optimal network capacity. The steps for constructing the adaptive ant algorithm are therefore: while solving for the shortest path from the source node to the destination node, consider the capacity constraints and flow changes on each path of the network in real time; that is, the available capacity of each section on the shortest path is its capacity minus the minimum link capacity. When the data transferred on the shortest path approaches the minimum link capacity, the bottleneck section becomes congested, and the available network capacity of the other nodes on that section becomes the original node capacity minus the bottleneck link capacity, rather than the capacity of the shortest path remaining unchanged.
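The rerouting idea above can be sketched as a capacity-aware shortest-path search: links whose residual capacity falls below the threshold are treated as congested and skipped, so the search returns the ''sub-optimal'' path automatically. This is an illustrative sketch, not the chapter's implementation; the adjacency-list format is an assumption.

```python
import heapq

def capacity_aware_shortest_path(adj, src, dst, min_capacity):
    """Dijkstra variant: links with residual capacity below `min_capacity`
    are excluded, so a congested preferred path is bypassed automatically.
    adj: {node: [(neighbor, delay, residual_capacity), ...]}."""
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:                        # reconstruct the chosen path
            path = [u]
            while u in prev:
                u = prev[u]
                path.append(u)
            return d, path[::-1]
        if d > dist.get(u, float('inf')):
            continue
        for v, w, cap in adj.get(u, []):
            if cap < min_capacity:          # congested link: skip it
                continue
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    return float('inf'), []
```

Raising `min_capacity` once the shortest path saturates yields the next ''sub-optimal'' path, mirroring the multi-path routing table described above.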
First, each node in the cloud environment is abstractly treated as a point in a graph. A certain node is set as the start point, and another node, to be found by the ants, is designated as the ''food''; reaching it completes the routing process.
With the establishment and termination of network sessions, the amount of pheromone on each node changes after each completed cycle; the pheromone on each node is adjusted according to (11.1) and (11.2):
τi(t + 1) = ρ τi(t) + Δτi    (11.1)

Δτi = Σ_{k=1}^{mn} Δτi^k    (11.2)
Among them, Δτi^k is the pheromone deposited on node i by the k-th ant during the cycle.
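A minimal sketch of the update rule (11.1)–(11.2); the persistence factor ρ = 0.9 and the data layout are assumptions for illustration.

```python
def update_pheromone(tau, deposits, rho=0.9):
    """tau_i(t+1) = rho * tau_i(t) + sum_k delta_tau_i^k  (Eqs. 11.1-11.2).
    tau: {node: pheromone level}; deposits: {node: list of per-ant deposits
    delta_tau_i^k made during the completed cycle}."""
    return {i: rho * tau[i] + sum(deposits.get(i, [])) for i in tau}
```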
11.3.2 Experiment
11.4 Discussion
In this paper, the adaptive ant algorithm effectively avoids congestion on the shortest paths in the network; by setting a minimum network capacity threshold for each segment of the shortest path on a Hadoop cloud platform, it selects shortest paths automatically and thus provides an adaptive routing scheme for cloud environments.
The experiments on the algorithm were carried out on a Hadoop cloud platform rather than in a simulator, so the practicality and adaptability are better. The method also provides good self-healing for the shortest path when some of its sections fail.
The minimum network capacity is the only factor considered by this method when paths are selected adaptively on each segment; this is certainly not enough in a real cloud environment, so the algorithm needs further improvement before it can be applied there.
References
Xue Jing (2010) A brief survey on the security model of cloud computing. In: Proceedings of the
ninth international symposium on distributed computing and applications to business,
engineering and science
Asterisk (2010) Open source communications [CP/OL]
Fetterly D, Haridasan M, Isard M et al (2011) TidyFS: a simple and small distributed file system. In: Proceedings of the USENIX annual technical conference (USENIX ATC'11)
Guo D, Wu K, Li J et al (2010) Spatial scene similarity assessment on Hadoop. In: Proceedings of
the ACM SIGSPATIAL international workshop on high performance and distributed
geographic information systems, New York, pp 39–42
Hayden C (2009) Announcing the map/reduce toolkit
IP Multimedia Subsystem (IMS) (2009) Stage 2 (Release 9)
108 Z. Chen
Nezamabadi-pour H, Saryazdi S, Rashedi E (2006) Edge detection using ant algorithm. Soft
Comput 10(7):623–628
Han Bing (2010) Research and implementation of future network computer based on cloud computing. In: Proceedings of 2010 third international symposium on knowledge acquisition and modeling (KAM 2010)
Satyanarayanan M (2001) Pervasive computing: vision and challenges. IEEE Pers Commun
8(4):10–17
Chapter 12
An Empirical Analysis on Guangdong
Industrial Energy Intensity Based
on Panel Data
Xin-dong Hao
12.1 Introduction
China is facing severe challenges in energy supply and environmental protection due to its heavy dependence on energy. Guangdong has made great progress in industrialization: its industrial output value has ranked first in China for many years, and some of its manufactured goods play a prominent role on the world scene. Industrialization needs large amounts of energy; Guangdong's total energy consumption was 152.367 million tonnes of standard coal in 2011, and Guangdong aims to reduce its energy consumption per unit of output by 18 % in the 12th Five-Year Plan (2011–2015).
Over the years, Guangdong's industrial growth has relied heavily on high input and high energy consumption, and its industrial energy consumption accounts for about 70 % of total consumption. At present, Guangdong is facing serious challenges
X. Hao (&)
School of Economics and Management, Wuhan University, Wuhan, China
e-mail: [email protected]
consumption while the scale effect played a negative role. The development of steel, nonferrous metals, building materials, chemicals, and other high-energy-consumption industrial sectors was the main reason for the rapid rise of total energy consumption. Zheng (2011) analyzed the general characteristics of energy consumption and energy efficiency based on data from industrial sectors in Guangdong, and the results showed that different industrial sectors differ in many respects.
In this paper, we review the main findings of the previous literature and explore the relationship between industrial energy intensity and industrial added value in Guangdong based on gray relational analysis. In this way, we try to find out which Guangdong industrial sectors should be given development priority from the perspective of energy intensity and industrial added value. Our goal is to identify the key ''win–win'' industrial sectors that increase industrial output while reducing energy intensity.
The gray system theory was first proposed in 1982. A system containing both insufficient and sufficient information, called ''black'' and ''white'' respectively, is a gray system. A system can be called a black box if the mathematical equations or internal characteristics that describe its dynamics are completely unknown. On the other hand, a system can be called a white system if its description is completely known. Accordingly, a system that has both known and unknown information is defined as a gray system. The gray system theory deals with systems containing insufficient information; gray relational analysis can capture the relationship between the main factor and the other factors in a system regardless of whether the system has adequate information. Gray relational analysis can effectively avoid subjective bias and performs better than some traditional methods when the study involves economic, environmental, and technical indices.
In real life, every system can be considered a gray system because it always contains some uncertainties. Even a simple price system always contains some gray characteristics because of the various social and economic factors involved; these factors are generally random and make it difficult to obtain an accurate model. There are many situations in which incomplete or insufficient information must be faced. With the gray relational method, we can calculate the gray relational coefficient and the gray relational grade of a gray system and thus find out the system's internal relationships.
A GRA model is a kind of impact measurement model between two series, named the reference series and the comparative series. During the development of a system, the change trends of the two series should be consistent; a higher degree of synchronized change indicates a greater grade of relation, and otherwise the grade of relation is smaller.
112 X. Hao
Thus, the analysis method, which takes the ranking of the relation into account, is established upon the degree of similarity or difference between the developmental trends of the two series to measure their degree of relation.
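The gray relational coefficient and grade described above can be sketched as follows; the distinguishing coefficient ρ = 0.5 is the customary choice, and the series are assumed to have been normalized to a comparable scale beforehand.

```python
import numpy as np

def grey_relational_grade(reference, comparative, rho=0.5):
    """Grey relational coefficient/grade of one comparative series against
    the reference series (both 1-D). In the multi-series case, the min/max
    deltas would be taken over all comparative series; here over one."""
    ref = np.asarray(reference, dtype=float)
    cmp_ = np.asarray(comparative, dtype=float)
    delta = np.abs(ref - cmp_)                     # pointwise deviations
    d_min, d_max = delta.min(), delta.max()
    coeff = (d_min + rho * d_max) / (delta + rho * d_max)
    return coeff.mean()                            # grade = mean coefficient
```

A grade closer to 1 means the comparative series (e.g., a sector's energy intensity) tracks the reference series (e.g., industrial added value) more closely.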
Table 12.1 The sorts of energy intensity and grey relational degree (2004–2010)
Sectors Energy intensity Sort Degree Sort Sectors Energy intensity Sort Degree Sort
X1 0.061 1 0.893 6 X19 0.858 19 0.896 3
X2 0.127 2 0.910 2 X20 0.923 20 0.850 18
X3 0.283 3 0.830 29 X21 0.982 21 0.849 19
X4 0.379 4 0.727 34 X22 1.063 22 0.878 9
X5 0.445 5 0.841 25 X23 1.258 23 0.894 5
X6 0.454 6 0.840 27 X24 1.370 24 0.842 24
X7 0.623 7 0.895 4 X25 1.386 25 0.833 28
X8 0.664 8 0.845 22 X26 1.403 26 0.827 30
X9 0.667 9 0.843 23 X27 1.529 27 0.823 31
X10 0.678 10 0.854 16 X28 1.658 28 0.864 12
X11 0.681 11 0.917 1 X29 1.780 29 0.820 32
X12 0.687 12 0.856 13 X30 1.946 30 0.889 8
X13 0.724 13 0.876 10 X31 2.106 31 0.841 26
X14 0.769 14 0.850 17 X32 2.448 32 0.855 15
X15 0.771 15 0.713 35 X33 3.308 33 0.685 36
X16 0.776 16 0.856 14 X34 4.306 34 0.846 20
X17 0.776 17 0.890 7 X35 5.797 35 0.845 21
X18 0.847 18 0.867 11 X36 6.013 36 0.813 33
12 An Empirical Analysis on Guangdong Industrial Energy Intensity 113
12.4 Conclusion
References
Cai MR (2010) The research of energy and industrial development in Guangdong province. In:
Proceedings of information management, innovation management and industrial engineering,
ICIII’10, Hong Kong, VA, pp 333–336
Fisher-Vanden K, Jefferson GH, Jingkui M (2006) Technology development and energy
productivity in China. Energy Econ 28(5):690–705
Fu JY, Zhang HL (2011) An empirical analysis of influential factors of Guangdong's energy consumption in the industrial sector. http://www.paper.edu.cn/index.php/default/releasepaper/content/201204-424
Garbaccio RF, Ho MS, Jorgenson DW (1999) Why has the energy output ratio fallen in China?
Energy J 20(3):63–91
He JK, Zhang XL (2005) Analysis on the impact of the structural changes in the manufacturing
industry on the rising of intensity of GDP resources and its trend (in Chinese). Environ Prot
12:37–41
Hu JL, Wang SC (2006) Total-factor energy efficiency of regions in China. Energy Policy
17(5):3206–3217
Karen FV, Gary HJ, Liu HM, Tao Q (2004) What is driving China’s decline in energy intensity?
Resour Energy Econ 26(1):77–97
Lun X, Ouyang YG (2005) A study on energy constraint based on Guangdong sustainable
development (in Chinese). Coal Econ Res 28(2):30–32
Ma CB, David IS (2008) China’s changing energy intensity trend: a decomposition analysis.
Energy Econ 30(3):1037–1053
Shi D, Dong L (2007) A study on energy efficiency and energy saving potential and its
influencing factors in China (in Chinese). Nat Gas Technol 1(2):5–8
Sinton JE, Levine MD (1994) Changing energy intensity in Chinese industry: the relative importance of structural shift and intensity change. Energy Policy 22(5):239–255
Yu PG (2007) A factor analysis of the energy intensity in China (in Chinese). Acad Res 2:74–79
Zha DL, Zhou DQ, Ning D (2009) The contribution degree of sub-sectors to structure effect and
intensity effects on industry energy intensity in China from 1993 to 2003. Renew Sustain
Energy Rev 13(4):895–902
Zheng LZ (2011) Study on the energy efficiency of industrial sector in Guangdong province.
http://210.44.1.99/kns50/detail.aspx?dbname=CMFD2011&filename=1011191476
Zhu SX (2010) An analysis of fictitious conditions in Guangdong low carbon growth (in Chinese). China Opening Herald 148(1):57–61
Chapter 13
An Empirical Research on the Ability
of Sustainable Development for Coal
Resource Exhausted Cities
Bing Zhang
13.1 Introduction
China has rich mineral resources, with nearly one hundred large and medium-sized cities built mainly on mining. Since the beginning of the twenty-first century, coal resources have been gradually shrinking and becoming depleted; the resource cities that arose from coal can hardly recapture their former glory, their economic development lags seriously behind, and the structural contradiction of a single-industry economy has become increasingly acute (Xu and Zhang 2000). With the growing exploitation of resources, mining geological conditions become more complex and resource extraction enterprises gradually age, so these resource-based cities face the important task of economic transformation (Wang 2003). In the process of the continuous improvement of China's market economy, how to promote resource exhausted
B. Zhang (&)
College of Management and Economics, Tianjin University, Tianjin, China
e-mail: [email protected]
In the model, CRCSD is the ability of sustainable development for coal resource exhausted cities; K1 is the economic system, K2 the social system, K3 the resource system, and K4 the environmental system; S is the space variable, referring to different cities; T is the time variable, referring to different development stages.
Each subsystem is further represented by a number of indicators.
Ki = f(Ki1, Ki2, …, Kin),  i = 1, 2, 3, 4
Specifically, the index system should highlight the following features. First, it should reflect the quality and scale of economic development of coal resource exhausted cities. Second, it should reflect the operating condition of the social system; the key is a clear assessment of poverty elimination, improvement in the quality of life, and so on. Third, it should attach great importance to the extent of development and utilization of the main resources and to the richness of the remaining resources. Fourth, it should reflect the environment, especially the capacity of the natural environment and the ability of the region to develop sustainably (Li 2005; Li and Lu 2008; Pang and Wang 2012).
According to the functions to be reflected by the above model and index system, and combined with the characteristics and goals of sustainable development for the particular system of coal resource exhausted cities, we establish an index system for sustainable development and select 28 representative indicators, as shown in Table 13.1 (Li 2005; Li and Lu 2008; Pang and Wang 2012; Akinbami and Fadare 1997; Marsh 1987).
Table 13.1 The index system of sustainable development for coal resource exhausted cities
Economic indicators: GDP per capita; coal resources industry added value as a share of GDP; tertiary industry added value as a share of GDP; financial revenue as a share of GDP; growth rate of investment in fixed assets; investment in environmental protection as a share of GDP; actual utilization of foreign capital; total amount of imports and exports
Social indicators: annual average wage of workers; disposable income per urban resident; employment rate of the urban population; poverty rate; proportion of people with college education and above; average years of schooling of the labor force; social insurance coverage rate; living area per capita
Resource indicators: existing reserves of coal resource; useful life of coal resource; utilization rate of coal resource; per capita share of coal resource; consumption of coal resource; available arable land per capita; available water resource per capita
Environment indicators: wastewater treatment rate; waste gas treatment rate; dustfall concentration; comprehensive utilization of solid waste; green area per capita in urban areas; forest coverage rate
118 B. Zhang
According to the above index system of sustainable development for coal resource exhausted cities, we select some key indicators to measure the capacity for sustainable development. Based on the Statistical Yearbooks of the coal resource cities from 2006 to 2011, we select the variables and data shown in Table 13.2 (Statistical yearbook of the various coal resource-exhausted cities from 2006). We then use principal component analysis to comprehensively measure the level of sustainable development of the coal resource exhausted cities over these years. The data are processed with EXCEL and SPSS (Gao and Dong 2007).
In Table 13.2, X1 is GDP per capita (yuan), X2 is the tertiary industry's share of GDP (percentage), X3 is the share of investment in environmental protection in GDP (percentage), X4 is the annual average wage of workers (yuan), X5 is the employment rate of the urban population (percentage), X6 is the proportion of people with college education and above (percentage), X7 is living area per capita (square meters), X8 is per capita share of coal resource (tons), X9 is the wastewater treatment rate (percentage), and X10 is green area per capita in urban areas (square meters) (Table 13.3).
From the correlation coefficient matrix of the variables it can be seen that the correlation coefficients among the variables are large, i.e., the variables are strongly correlated. Principal component analysis can therefore be used to reflect the level of sustainable development comprehensively through a relatively small number of synthetic variables.
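The extraction step can be sketched as an eigendecomposition of the correlation matrix; this is a generic PCA sketch, not the SPSS procedure itself.

```python
import numpy as np

def pca_contributions(data):
    """PCA on the correlation matrix, as used in the chapter: returns the
    eigenvalues sorted in descending order, their contribution rates, and
    the corresponding unit eigenvectors. data: (n_samples, n_variables)."""
    corr = np.corrcoef(data, rowvar=False)          # standardizes implicitly
    eigvals, eigvecs = np.linalg.eigh(corr)         # symmetric matrix -> eigh
    order = np.argsort(eigvals)[::-1]
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    rates = eigvals / eigvals.sum()                 # contribution rates
    return eigvals, rates, eigvecs
```

Note that the eigenvectors returned by `eigh` are already unit length, which corresponds to dividing the factor loadings by √λ as done below Table 13.5.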
Fig. 13.1 Scree plot of the eigenvalues against the component number
From the results of the principal component analysis in Table 13.4 and the scree plot in Fig. 13.1, it can be seen that the contribution rate of the first principal component is 89.837 %, the contribution rate of the second principal component is 7.401 %, and the cumulative contribution rate of the first two components is 97.238 %. Therefore, the first two principal components can be extracted to measure the ability of sustainable development for coal resource exhausted cities.
From Table 13.4, the eigenvalues of the first two principal components are λ1 = 8.984 and λ2 = 0.740, respectively.
The initial factor loading matrix extracted by the principal component analysis method is shown in Table 13.5.
Dividing each column of values in Table 13.5 by √λ1 and √λ2, respectively, gives the unit eigenvectors e1 and e2 corresponding to each eigenvalue:

e1 = (1/√8.984) (0.996, 0.865, 0.966, 0.996, 0.974, 0.998, 0.990, 0.995, 0.651, 0.989)ᵀ
   = (0.332, 0.289, 0.322, 0.332, 0.325, 0.333, 0.330, 0.332, 0.217, 0.330)ᵀ
So the expression of the first principal component can be drawn as follows.
Table 13.6 The ability of sustainable development for coal resource exhausted cities
Years The first principal component F1 The second principal component F2 Integrated value F
2006 10064.44 444.49 9074.79
2007 12337.07 528.41 11122.73
2008 14979.10 689.14 13508.22
2009 16926.58 790.20 15265.32
2010 19962.27 934.94 18003.29
2011 23140.46 1129.99 20873.01
Explanation: the numerical size of the F value has no practical significance in itself; different indicator systems yield different values, so F serves only as a reference
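The integrated value F in Table 13.6 appears to be the contribution-rate-weighted sum of the two component scores (checking 2006: 0.89837 × 10064.44 + 0.07401 × 444.49 ≈ 9074.5, matching the table up to rounding of the rates); a sketch under that assumption:

```python
def integrated_value(f1, f2, rate1=0.89837, rate2=0.07401):
    """Integrated sustainable-development value: the two principal component
    scores weighted by their contribution rates (taken from Table 13.4).
    The weighting scheme is inferred from the table, not stated explicitly."""
    return rate1 * f1 + rate2 * f2
```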
As can be seen from the values of the ability of sustainable development for coal resource exhausted cities in Table 13.6, the overall sustainable development ability of China's coal resource exhausted cities has been gradually increasing in recent years, but relative to the growth rate of GDP per capita over the same period, the growth rate of this ability is relatively slow. On the one hand, this is owed to the stable and orderly development of the overall macroeconomy in China; on the other hand, it also benefits from the strong support for the development of coal resource exhausted cities in national policy (Wang and Geng 2012; Campbell and Roberts 2003; Randall and Ironside 1996).
According to the index system and the empirical analysis, the following measures can be taken to achieve sustainable development for coal resource exhausted cities: use industrial adjustment assistance policies to broaden financing channels and increase capital investment; select appropriate pillar industries to raise the employment rate; improve the quality of the population; strengthen the building of scientific and technological teams; control pollution and enhance environmental protection; and adapt the industrial structure of coal cities to regional development needs.
References
Akinbami JFK, Fadare SO (1997) Strategies for sustainable urban and transport development in
Nigeria. Transp Policy 4(4):237–245
Campbell GA, Roberts M (2003) Urbanization and mining: a case study of Michigan. Resour
Policy 29(29):1028–1042
Gao XB, Dong HQ (2007) Data analysis and SPSS applications. Tsinghua University Press,
Beijing, pp 342–364
Li J (2005) The study of urban sustainable development index system and evaluation
methods—resource-exhausted cities for examples. Res Fin Econ Issues 259(6):52–56
13 An Empirical Research 123
14.1 Introduction
C. Wu (&)
Department of Industrial Design, National Kaohsiung
Normal University, Kaohsiung, Taiwan
e-mail: [email protected]
J. Chou
Department of Creative Product Design, I-Shou University,
Kaohsiung, Taiwan
C. Wu
Department of Information Management, National Formosa
University, Huwei, Taiwan
to their needs has now become an important and inevitable task for a company's product development team (Liu 2009).
Quality Function Deployment (QFD) is an important quality control theory proposed by the Japanese quality control masters Yoji Akao and Shigeru Mizuno. Akao first realized the value of this approach in 1969 and sought to apply it during the product design stage, so that product design characteristics could be converted into precise quality control points in the manufacturing quality control chart (Hill 1994). Sullivan (1986a, b) indicated that QFD provides a means of translating customer requirements into the appropriate technical requirements for each stage of product development.
In recent years, research on QFD applications has developed rapidly and widely. Methodologically, Eco-QFD, QFD integrated with the Kano model, and QFD integrated with FMEA have been proposed and applied successfully.
Ernzer (2003) indicated that methodical support is necessary to integrate environmental and market issues in a product. He proposed an integrated approach called EI2QFD, which reduces the effort of carrying out an Eco-QFD, and demonstrated an effective and efficient application of Eco-QFD. Kuo (2005) introduced a fuzzy theoretic modeling approach to Eco-QFD, formulating the eco-design product development problem as a fuzzy multi-objective model based on QFD planning. Chen (2009) proposed fuzzy nonlinear programming models based on Kano's concept to determine the fulfillment levels of PCs, with the aim of achieving the determined contribution levels of DRs in phase 1 for customer satisfaction. In addition, failure modes and effects analysis (FMEA) has been incorporated into QFD.
Kılıç (2009) indicated that in real-world applications the values of DRs are often discrete rather than continuous, and suggested a new QFD optimization approach combining a MILP model with the Kano model to obtain an optimized solution from a limited number of alternative DRs whose values can be discrete. Garibay et al. (2010) proposed a tool combining QFD with the Kano model to evaluate service quality; by listening to the voice of the customer (VOC), relevant information can be obtained about the issues that should be improved in order to increase customer satisfaction.
Most of these studies provide valuable and valid methods to enhance the effect of QFD. Although they successfully applied the proposed methods to solve design problems, they took only a local perspective. This article concentrates on the creative way of thinking offered by ''matter-element theory and the extension method'', using it to help translate customer requirements (CRs) into engineering characteristics (ECs) more deeply and widely.
14 An Extensive Innovation Procedure to Quality Function 127
QFD, introduced by Akao (1972), was designed to improve quality in product development. It has been a successful tool for helping product designers systematically incorporate customer requirements (CRs) into product and process development (Akao 1990). Specifically, QFD systematically brings customers' needs down to the level of detailed operations. The American Supplier Institute's (ASI's) Four-Phase approach was selected as the framework of this research. As shown in Fig. 14.1, the Four-Phase approach consists of the product planning, part deployment, process planning, and production planning phases.
QFD is a structured design tool, defined as: ''a consumer-needs-oriented tool which establishes the relationship between customer attributes and design parameters, quantified through the House of Quality (HOQ), and integrates customers' cognition, finding the key design factors in the product to determine the direction of product development and market positioning.'' The HOQ indicates the relationship between customer requirements (what to do) and engineering characteristics (how to do it); it is the engine that drives the entire QFD process. In essence, the product planning phase translates qualitative customer requirements into measurable engineering characteristics and identifies the important ones. The part deployment phase translates the output of product planning into critical part characteristics and explores the relationship between engineering characteristics and part characteristics. The process planning phase establishes the relationship between part characteristics and the manufacturing operations related to a part; critical process parameters are identified and deployed in operation instructions. The production planning phase translates the manufacturing operations into production standards or work instructions, such as the number of parts to be checked, the type of tools to be used, the inspection method to be performed, etc. (Liu 2009).
The first phase of QFD, usually called the house of quality (HOQ), is of fundamental and strategic importance in the QFD system, since it is in this phase that the customer needs for the product are identified and then, incorporating the producing company's competitive priorities, converted into appropriate technical measures to fulfill the needs (Chan and Wu 2005). The goal of the product planning phase is to translate customer requirements (CRs) into engineering characteristics (ECs) and prioritize their importance. The CRs must therefore be acquired from market surveys or customer questionnaires. The acquired information is used to calculate the relative importance of the CRs, calculate their final importance, and identify the ECs, so that the final importance of the ECs can be calculated and the relationship matrix established.
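The computation described here, turning CR weights and the relationship matrix into EC priorities, can be sketched as follows. The CR weights, the matrix entries, and the common 9-3-1 relationship scale are illustrative assumptions, not data from this chapter:

```python
import numpy as np

# Hypothetical relative importance of three CRs from customer surveys.
cr_weights = np.array([0.40, 0.35, 0.25])

# Relationship matrix: rows = CRs, columns = ECs, scored on the common
# 9-3-1 scale (strong / medium / weak relationship; 0 = no relationship).
R = np.array([
    [9, 1, 3],
    [3, 9, 0],
    [1, 3, 9],
])

# Final importance of each EC: sum over CRs of CR weight x relationship score.
ec_importance = cr_weights @ R
# Normalized importance, used to prioritize the ECs in the HOQ.
ec_relative = ec_importance / ec_importance.sum()
print(ec_importance, ec_relative)
```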
The major procedure of QFD is to identify the customers' needs for the product and then convert them into appropriate technical measures based on the company's competitive priorities. The priorities of product characteristics can be obtained by translating the important technical measures. According to these characteristics, the prior engineering parameters are identified and selected as the key requirements for redesign. Throughout the procedure, creativity activities based on matter-element theory and the extension method can be imported or expanded mainly in two parts:
(1) The course of translating customers' needs into product design attributes (technical measures).
(2) The course of identifying the corresponding product defects or the parameters that need to be improved.
The use of the extensibility of matter-elements and the extension method for the translation of product design attributes and engineering properties can provide effective assistance for the designer in conceiving new products comprehensively and deeply. In this research, we discuss the procedure for improving design attributes with the aid of the extensibility of matter-elements and the extension method.
Matter-element and matter-element with multi-characteristics are defined as follows. A matter-element is

$$R = (N(t),\, c,\, v(t)) \qquad (14.2)$$

and a matter-element with multi-characteristics is

$$R(t) = \begin{bmatrix} N(t) & c_1 & v_1(t) \\ & c_2 & v_2(t) \\ & \vdots & \vdots \\ & c_n & v_n(t) \end{bmatrix} = (N(t),\, C,\, V(t)) \qquad (14.3)$$
Based on the divergence of matter-elements, a matter-element R0 = (N0, c0, v0) can be diverged from one or two of N0, c0, v0 to synthesize different matter-elements and build an extending tree. The extending tree is a method for a matter to extend outwards, providing multi-oriented, organizational, and structural considerations, as shown in Fig. 14.2. An event is the interaction of matters and is described as an event-element. The basic elements describing an event-element are a verb (d), the name of a verb characteristic (b), and u, the corresponding measure of b. An event-element is

$$I(t) = (d(t),\, b,\, u(t)) \qquad (14.4)$$

and a multi-dimensional event-element is

$$I(t) = (d(t),\, B,\, U(t)) \qquad (14.5)$$
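As a programmatic reading of definitions (14.2)–(14.3), a matter-element can be held as a matter name plus characteristic-value pairs, and divergence then produces new elements that share the matter. This sketch, whose characteristic names and values are illustrative rather than taken from the chapter, shows one branch of an extending tree:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MatterElement:
    """R = (N, C, V): matter name N with characteristic-value pairs (Eq. 14.3)."""
    matter: str
    pairs: tuple = ()          # ((characteristic, value), ...)

    def diverge(self, *new_pairs):
        """'One matter with multi-characteristics': extend the element with
        further characteristic-value pairs - one branch of the extending tree."""
        return MatterElement(self.matter, self.pairs + new_pairs)

# One-dimensional matter-element (Eq. 14.2), then a diverged multi-characteristic one.
r0 = MatterElement("driving mechanism", (("transmission", "chain"),))
r1 = r0.diverge(("drive method", "rotation"), ("safety degree", "poor"))
print(dict(r1.pairs))
```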
In this research, the design case ''bicycle'', a favorite piece of exercise equipment and transport, is adopted to explain and verify the feasibility of the proposed innovative procedure. The proposed approach, QFD integrated with the extension method, is used to investigate users' needs and the related functional requirements. In the creative course, we use the extension method to improve product design attributes and to determine the associated engineering design parameters; creative solutions based on the customers' voice can thus be achieved. This stage of the research has two parts: first, obtain the needs for the bicycle by interviewing the elderly and translate them into product design attributes; second, use a questionnaire for the elderly to pick out the important requirements.
In this study, the survey candidates are cycling amateurs, and experienced cyclists are also included so that no problems are overlooked. Gathering the respondents' views, the users' requirements are sorted out as in Table 14.1.
The extending tree method is used to assist the establishment of the HOQ. First, we build the transformation model by the extending tree: a matter refers to multiple characteristics, and a characteristic is also mapped to by multiple matters. Thus ''one matter with multi-characteristics'', ''one characteristic maps to multi-matters'', ''one value maps to multi-matters'', ''one matter and one value versus multi-characteristics'', and ''one matter and one characteristic versus multi-values'' are the extending propositions of the matter-element used to resolve contradictions.
$$R_1 = \begin{bmatrix} \text{Overall riding comfort} & \text{comfortable grip} \\ & \text{anti-shock} \\ & \text{comfortable saddle} \\ & \text{ventilatory saddle} \end{bmatrix}$$

$$R_2 = \begin{bmatrix} \text{Portability} & \text{operating experience} \\ & \text{light weight} \\ & \text{braking effort} \\ & \text{operating the derailleur smoothly} \\ & \text{chain-link coming off the ratchet and slipping gears} \end{bmatrix}$$

$$R_3 = \begin{bmatrix} \text{Flexible adjustment} & \text{easy to adjust seat post} \\ & \text{quick detach} \end{bmatrix}$$

…etc.
We can translate product functional requirements into product design attributes
as shown in Table 14.2.
We derive and confirm the ''hypostatic characteristics'' from the ''functional characteristics'', and then the corresponding ''certain characteristics'' are also derived and confirmed. We then apply the four pairs of corresponding concepts, replacement and addition/deletion and expansion/contraction transformations for the iso-matter-element, and addition/deletion and expansion/contraction transformations for the distinct matter-element, to describe the composition of matters and assist the transformation and extension of matter-elements.
Take the design attribute ''driving mechanism'' for example. One of the customers' complaints is that a traditional bicycle uses a chain to transfer the pedal-actuated driving force from the pedal crankshaft to the rear wheel. After a period of use, the chain-link is prone to come off the ratchet and slip gears. In addition, a long skirt also risks being sullied or rolled into the chain link.
The extension of the relationship-element of the driving mechanism is

$$Q = \begin{bmatrix} \text{Transmission} & \text{Front gear set} & \text{Chain} \\ & \text{Ratchet and slip gears} & \text{Chain} \\ & \text{Drive method} & \text{Rotation} \\ & \text{Bad effect} & \text{Come off rear gear} \\ & \text{Structure} & \\ & \text{Safety degree} & \text{Poor} \\ & \text{Disadvantages} & \text{Sully} \\ & \vdots & \vdots \end{bmatrix} = (s,\, A,\, W)$$
We consider making a ''rigid transmission device'' play the major role among the bicycle's devices; the design problem then becomes ''how to transform the flexible device into a rigid device?'' This leads to new design problems based on the new concept. We repeat the matter-element extending program, transform the flexible device into a new general model by the divergence tree, and restart the extending processes by taking a new matter, ''rigid transmission device'', and a new characteristic, ''contact type'', to set up the matter-element, event-element, relationship-element, and extending tree. Many ideas are obtained in this way. A new device, the bevel gear set, can meet the demand, so we consider using a ''bevel gear set'' to replace the ''chain'' and propose changing the transmission type: the bevel gear mechanism is the new design concept we propose.
14.5 Conclusions
Acknowledgments This work is supported by the National Science Council, Taiwan, under grant No. NSC 95-2221-E-017-006-
References
Akao Y (1972) New product development and quality assurance deployment system, (in
Japanese). Stand Qual Control 25(4):243–246
Akao Y (1990) Quality function deployment: integrating customer requirements into product design. Productivity Press, Cambridge
Chan LK, Wu ML (2005) A systematic approach to quality function deployment with a full
illustrative example. Int J Manage Sci Omega 33:119–139
Chen LH, Ko WC (2009) Fuzzy approaches to quality function deployment for new product
design. Fuzzy Sets Syst 160(18):2620–2639
Ernzer M (2003) EI2QFD—an integrated QFD approach or from the results of eco-indicator 99 to
quality function deployment. In: Proceedings on environmentally conscious design and
inverse manufacturing, pp 763–770
Garibay C, Gutiérrez H, Figueroa A (2010) Evaluation of a digital library by means of quality function deployment (QFD) and the Kano model. J Acad Librarianship 36(2):125–132
Hauser JR, Clausing D (1988) The house of quality. Harvard Bus Rev 66(3):63–73
Hill A (1994) Quality function deployment. In: Lock D (ed) Gower handbook of quality management, 2nd edn. Gower, Brookfield, pp 364–386 (Chapter 21)
Kuo TC, Wu HH (2005) Fuzzy eco-design product development by using quality function
deployment. In: Proceedings on environmentally conscious design and inverse manufacturing,
eco design, 4th international symposium, pp 422–429
Lee YC, Sheu LC, Tsou YG (2008) Quality function deployment implementation based on Fuzzy
Kano model: an application in PLM system. Comput Ind Eng 55(1):48–63
Liu HT (2009) The extension of fuzzy QFD: from product planning to part deployment. Expert
Syst Appl 36:11131–11144
Sullivan LP (1986a) The seven stages in company-wide quality control. Quality Prog
19(5):77–83
Sullivan LP (1986b) Quality function deployment. Quality Prog 19(6):39–50
Chapter 15
Analysis on Grey Relation of Labor
Export Mechanism’s Influence Factors
in Poverty-Stricken Areas
Abstract Aiming at how to improve labor export in the new period of China's poverty alleviation work, this paper first analyzes theoretically the influence factors of the labor export mechanism in our country's poverty-stricken areas, then selects the related influence indexes, takes the national poverty alleviation counties as the research object, and uses Grey Correlation Analysis to empirically analyze the size and order of each index's influence, so as to provide a basis for related policies to further improve the poverty alleviation mechanism of labor export.
15.1 Introduction
From the ''Seven-year Priority Poverty Reduction Program'' and the ''Outline for Poverty Reduction and Development of China's Rural Areas (2001–2010)'' to the new round of national poverty alleviation tasks and the implementation of the ''Outline for Development-oriented Poverty Reduction for China's Rural Areas (2011–2020)'', government policy has consistently encouraged poverty-stricken areas to export their labor force, so that the poor can obtain non-agricultural employment, increase their income, and finally overcome poverty.

The research is supported by the New Century Training Program Foundation for the Talents of the Ministry of Education (NET06-0703) and by the National Social Science Foundation (08BJY025).

However, for a poverty policy to be implemented and achieve its expected effects, a basic condition must be met: the poor must have the opportunity and the ability to benefit from it. The factors influencing labor export from poverty-stricken areas are numerous, and poverty alleviation strategy or relief funds alone cannot enable the poor to reach this goal and achieve self-development under such constraints. In this case, the performance of poverty relief investment in this mechanism is very low, while budget constraints and the requirement to improve efficiency do not allow the funds to keep growing. The limitations of current policy highlight the urgency of shifting the focus of poverty strategy. It must be recognized that labor export is not simply a process of transferring the poverty-stricken areas' labor force elsewhere; it requires a comprehensive understanding of each influencing factor and the choice of corresponding policy tools. Only in this way, in the new period of poverty alleviation and development, can labor export become more targeted and its effects more enduring.
On this basis, this paper collects related data for 2002–2010 and uses the Grey Correlation Analysis (GCA) method to study the influence factors of the labor export mechanism in poverty-stricken areas, calculating and ranking the correlation degrees so as to identify the major factors of this mechanism and evaluate them accurately.
reform has eased the restrictions. But because of the slow process of reform, free labor flow is still full of obstacles and will remain a long-standing problem. In this case, expected income and human capital do not sufficiently explain labor transfer. Li (2003) regards institutional factors as having great influence on China's rural labor transfer: improvement of the household registration system will promote the smooth migration of peasant-workers. Li (2007) establishes a rural labor force transfer model that includes institutional factors and comes to similar conclusions.
B. Economic factors' impacts on rural labor force transfer. In Chen and Hu's labor transfer analysis framework, the town's informal employment sector is introduced to explain the causes of and obstacles to labor transfer, observe theory and practice, and study the transfer strategy and path of China's rural surplus labor, forming the widely accepted ''Triangle Economic Theory''. Du (1997) observes Chinese rural laborers' transfer behavior from the perspective of income or resources and finds that the low income of farming and the shortage of agricultural resources are the main reasons for the rural labor force's shift. Bai and Gan's (2005) paper investigates the dynamics of and obstacles to Jiangxi's rural labor force transfer by establishing a dynamic game model, and finds that the primary driver of rural labor transfer is economic income.
C. Physical factors (age, gender, health, etc.), mental factors (education degree and non-agricultural working experience), and other demographic characteristics that affect rural laborers' transfer decisions. Li's (1999) results show that the household head's education is proportional to the possibility of labor flow: the higher the education degree, the more likely the family is to decide to migrate. Zhao considers the influence of formal education on the transfer decision to be very small, although its influence on the shift from agriculture to non-agricultural sectors is significant. Cheng and Shi's research confirms this conclusion.
In addition, labor from poverty-stricken areas is influenced not only by these three common factors, as the national rural labor force is, but also by another important factor: the government's poverty alleviation activities directed at the labor export mechanism. Wang's (2004) study shows that if poverty relief funds for skills training are combined with labor export, the effect on farmers in poverty-stricken areas is more significant and more easily accepted. He also points out that the government should be given more autonomy and flexibility in choosing specific projects, so that the projects can take more of the farmers' needs into consideration. Through analyzing the changing nature of China's rural poverty, Du argues that for poor communities with a higher migration probability, the government should provide the basic resources for migration, ensuring that they can make full use of outside employment opportunities to improve their own welfare. Li (2007) investigates ways of poverty alleviation and concludes that the existing method cannot adapt to
140 S. Wang and Y. Zhou
This paper's research object is the impact of the influence factors on poverty-stricken areas' labor export. Grey Correlation Analysis uses the grey correlation degree to analyze the relationship between the lord (principal) behavior factor and the relevant behavior factors in a grey system, and then judges the main and secondary factors for the development of the system according to the geometric similarity of the curves: the closer the curves, the greater the correlation between the corresponding sequences (Deng 1993). If the grey correlation degree between an influence factor and the labor export index is greater, that factor's impact on the mechanism is greater. This provides a guiding direction for policy to promote smoother labor export in the new period of poverty alleviation work.
The basic steps of grey relation analysis are as follows.

Firstly, confirm the reference sequence that reflects the system's lord behavior characteristics and the comparison sequences that influence the system behavior. Set the system behavior sequence and comparison sequences as

$$Y = \{\, y_i \mid i \in N \,\}, \quad N = \{0, 1, 2, \ldots, m\},\ m \ge 2,$$
$$y_i = (y_i(1), y_i(2), \ldots, y_i(n)), \quad y_i(k) \in y_i,\ k \in K,\ K = \{1, 2, \ldots, n\},\ n \ge 3,$$

where y0(k) (k = 1, 2, …, n) is the reference sequence and yi(k) (i = 1, 2, …, m; k = 1, 2, …, n) are the comparison sequences.
Secondly, the reference sequence y0 and the comparison sequences yi are converted into dimensionless form to unify dimensions and make the factors comparable. This paper uses the equalization (mean-value) method to eliminate the dimension:

$$\bar{y}_0 = \frac{1}{n}\sum_{k=1}^{n} y_0(k), \quad y_0'(k) = \frac{y_0(k)}{\bar{y}_0}; \qquad \bar{y}_i = \frac{1}{n}\sum_{k=1}^{n} y_i(k), \quad y_i'(k) = \frac{y_i(k)}{\bar{y}_i}$$
$$(k = 1, 2, \ldots, n;\ i = 1, 2, \ldots, m)$$
Thirdly, calculate the correlation coefficients $\xi_{0i}(k)$ between the reference sequence and the comparison sequences:

$$\xi_{0i}(k) = \frac{\min_i \min_k \left| y_0'(k) - y_i'(k) \right| + \zeta \max_i \max_k \left| y_0'(k) - y_i'(k) \right|}{\left| y_0'(k) - y_i'(k) \right| + \zeta \max_i \max_k \left| y_0'(k) - y_i'(k) \right|}$$

where $\zeta$ is the distinguishing coefficient with value range (0, 1); this paper takes $\zeta = 0.5$.
Fourthly, calculate the grey correlation degree $\gamma_{0i}$. Because the correlation coefficients measure the correlation at each separate time, they are too numerous to compare directly, so it is necessary to average them. The formula for the grey correlation degree is

$$\gamma_{0i} = \frac{1}{n}\sum_{k=1}^{n} \xi_{0i}(k) \quad (k = 1, 2, \ldots, n;\ i = 1, 2, \ldots, m)$$
Fifthly, sort the grey correlation degrees. The ordering of correlations between factors is described by the grey correlation degree; sorting the $\gamma_{0i}$ reflects each comparison sequence's impact on the reference sequence.
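The five steps can be condensed into a short routine. The toy sequences below are illustrative, not the paper's 2002–2010 statistics:

```python
import numpy as np

def grey_correlation(y0, Y, zeta=0.5):
    """Grey correlation degrees of comparison sequences Y (m x n) vs. reference y0 (n,)."""
    # Step 2: equalization - divide each sequence by its own mean.
    y0n = y0 / y0.mean()
    Yn = Y / Y.mean(axis=1, keepdims=True)
    # Step 3: correlation coefficients xi_0i(k).
    diff = np.abs(y0n - Yn)                  # |y0'(k) - yi'(k)| for all i, k
    dmin, dmax = diff.min(), diff.max()      # min min and max max over i and k
    xi = (dmin + zeta * dmax) / (diff + zeta * dmax)
    # Step 4: grey correlation degree = average of the coefficients over k.
    return xi.mean(axis=1)

y0 = np.array([100.0, 120.0, 150.0, 190.0])          # reference sequence
Y = np.array([[10.0, 12.0, 15.0, 19.0],              # same trend as y0
              [50.0, 40.0, 30.0, 20.0]])             # opposite trend
gamma = grey_correlation(y0, Y)
order = np.argsort(gamma)[::-1]                       # Step 5: rank the degrees
print(gamma, order)
```

The sequence that moves proportionally with the reference receives the higher degree, which is exactly the ranking logic applied to the influence indexes below.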
From the review of the existing literature, it can be seen that labor export is restricted by many factors. Considering the feasibility of the analysis method, this paper, on the basis of existing research, uses the Grey Correlation Analysis method together with the ''Chinese rural poverty monitoring reports'', ''China Statistical Yearbook'', and ''Chinese labor statistics yearbook'', and empirically analyzes the influence factors of poverty-stricken areas' labor export mainly from the aspects of institution, economy, the labor force's own characteristics, and poverty alleviation activities. Since 2002, the State Council has again determined the 592 national poverty alleviation work counties (hereafter ''the counties'') according to the ''631 index method''. Considering the continuity of the statistical data, this paper selects time series data for 2002–2010 for the model analysis.
Y0 is set to represent the counties' number of exported laborers who obtain employment outside the counties for more than a month. According to the analysis above, the influence factors are classified into four kinds:
• Institutional factor (y1). Drawing on the research of Chen (1999) and Jin (2001), this paper selects the following indexes: the private sector employment rate (y11), namely the non-state sector's proportion of total town employment, and the urbanization rate (y12), the proportion of urban population in the total population, to measure respectively the impacts on the counties' labor export of institutional changes in economic composition and in the household registration system. In addition, a consequence of institutional change is that the proportion of labor resources allocated by the market grows; generally, the higher a country's degree of marketization, the lower the proportion of labor resources allocated by the government. This paper therefore adopts the proportion of GDP distributed by the market to approximate the Proportion of Market-Allocated Resources index (y13), measured as (GDP − national finance income)/GDP, where national finance income excludes debt revenue.
• Economic factor (y2). The income gap between urban and rural areas (y21) reflects the attraction for rural labor transfer; the proportion of the secondary and tertiary industries in GDP (y22) reflects the output of the modern industrial and service sectors, which provide huge absorption space for rural surplus labor and an important way for rural labor to obtain non-agricultural employment; and the registered urban unemployment rate (y23) represents the supply and demand situation of the urban labor market. We therefore take these three as the economic factor indexes.
• Self-own factor (y3). The rural human capital stock is a comprehensive reflection of the rural labor force's cultural quality, so the Rural Human Capital Stock (y31) is used to measure the overall cultural quality of the counties' labor force. Its computation formula is

$$y_{31} = n \sum_{i=1}^{4} Q_i h_i$$

where n represents the workforce of the county, Qi is the proportion of rural labor force with each education degree in the total labor force, and hi is the education conversion coefficient.1 Considering data accessibility, we classify the rural labor force's education degree into four kinds: illiterate or semiliterate, elementary school, junior high school, and high school and above.
The First Industry Productivity (y32), namely the ratio of the county's primary industry output value to the number of primary industry employees, and the Rural Industrialization Rate (y33), namely the proportion of the counties' non-agricultural labor force in the total labor force, both represent the labor force's ability to transfer out of the primary industry in the counties, so we choose them as the other self-own factor indexes.
• Poverty alleviation factor (y4). This paper investigates the poverty alleviation factors from three aspects: poverty relief funds (y41), namely the amount of poverty relief funds each year; the coverage of poverty alleviation projects (y42), namely the proportion of poor villages that participate in poverty alleviation projects; and farmers' participation (y43) in poverty alleviation activity, measured by the proportion of farmers who have been funded by poverty alleviation projects.
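As a worked example of two of the index formulas above (y13 and y31): the county figures here are purely illustrative, and the factor n in the y31 formula follows the surrounding definition of n as the county workforce, since the printed formula is partly garbled.

```python
# Hypothetical county figures, used only to illustrate the index formulas.
gdp = 50.0                 # county GDP
finance_income = 4.0       # national finance income (debt revenue excluded)

# y13: proportion of GDP distributed by the market.
y13 = (gdp - finance_income) / gdp

# y31: rural human capital stock = n * sum(Q_i * h_i), with the education
# conversion coefficients from the footnote: illiterate/semiliterate 1,
# primary school 1.1, junior high 1.2, high school and above 1.5.
n = 10000                                   # county labor force (assumed)
Q = [0.10, 0.30, 0.45, 0.15]                # education-degree proportions (sum to 1)
h = [1.0, 1.1, 1.2, 1.5]
y31 = n * sum(q * c for q, c in zip(Q, h))
print(y13, y31)
```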
This paper uses the GTMS3.0 software to process the initial data above and obtains the Grey Correlation Degree (GCD) of each single influence index; the results are shown in Table 15.1.
1
For the education conversion coefficients, this paper refers to Li's method and assumes that illiterate or semiliterate is 1, primary school is 1.1, junior high school is 1.2, and high school and above is 1.5.
Table 15.1 Grey correlation degree of the counties’ labor export influence indexes (2002–2010)
Labor export number
Influence indexes GCD/single GCD/mean
Institutional factor (y1) Y11 0.7129(6) 0.7025(2)
Y12 0.7393(4)
Y13 0.6552(12)
Economic factor (y2) Y21 0.6888(7) 0.6830(4)
Y22 0.6813(9)
Y23 0.6788(10)
Self-own factor (y3) Y31 0.6769(11) 0.6957(3)
Y32 0.6828(8)
Y33 0.7274(5)
Poverty alleviation factor (y4) Y41 0.7805(3) 0.8139(1)
Y42 0.8273(2)
Y43 0.8340(1)
According to the Grey Correlation Analysis results, the impacts of the influence factors on the counties' labor export are all relatively obvious; the order is y43 > y42 > y41 > y12 > y33 > y11 > y21 > y32 > y22 > y23 > y31 > y13. The results show that the grey correlation degree of the poverty alleviation factor is the biggest, an average of 0.8139; the institutional factor takes second place; and the economic factor's is the minimum. As to single indexes, the coverage of poverty alleviation projects and farmers' participation in poverty alleviation activity have great influence, with grey correlation degrees both above 0.8, while the influence of the Proportion of Market-Allocated Resources, the rural human capital stock, and the town unemployment rate is relatively small, at 0.6552, 0.6769, and 0.6788 respectively. Therefore, we can see:
(1) Poverty relief activities (0.8139) are the most important factor for the counties' smooth labor export, especially the laborers' participation in poverty alleviation activity (0.8340) and the projects' coverage (0.8273). This shows that poverty relief projects and the related funds are closely correlated with this mechanism, which can be explained by the existing problems of the present poverty alleviation mode and labor export mechanism.
In poor counties with few outworkers, poor information, and surplus labor, the peasants lack adequate capital to pay the initial cost of transferring, and for the majority of them the transfer is spontaneous in origin. In this case, only with the government's involvement, organizing and guiding the labor force to transfer in a proper way, training them in the corresponding public knowledge and skills, improving their participation in these activities, and then providing appropriate initial funds such as education and training funds or low-interest loans, can the labor force's ability to obtain non-agricultural employment be effectively improved and the resistance in the transfer channel be reduced. In addition, our country's present poverty alleviation mode aims only at the poor county, not the real poor people. After the key counties have
been determined, resources are given to them. But these resources are mainly used in infrastructure projects, so the coverage is not wide and the labor force cannot benefit directly. Therefore, if the targeting accuracy at the individual level increases and projects cover more people, the poverty relief resources will be transmitted to the poor labor force more accurately and in a more timely way, making the labor export mechanism more effective.
(2) The institutional factor (0.7025) is still one of the dominant factors affecting the counties' labor transfer. Its components, including the urbanization rate (0.7393) and the private sector employment rate (0.7129), raise the threshold of labor export. The grey relation analysis result is also consistent with the present situation of our country's rural labor transfer: on the one hand, because of discrimination arising from the census register (household registration) system and other related institutional factors, the cost of labor export increases invisibly, making the labor force pay higher transaction costs to get a non-agricultural job; on the other hand, the discrimination also leaves the labor force with unfixed trading places and no contracts, so they can only seek jobs in the secondary labor market and can hardly obtain long-term stable employment in town, with the result that most of the labor force wanders helplessly between the towns and the poor areas.
(3) Within the self-owned factor (0.6957) and the economic factor (0.683), the
counties' proportion of non-agricultural production value (0.7274) and the income
gap between town and country (0.6888) are also important to the labor export
mechanism. This shows that developing the counties' non-agricultural industries,
adjusting the economic structure, changing the mode of economic growth, and
actively promoting agricultural industrialization and the tertiary industry in the
counties will release labor. The income gap between urban and rural areas can
also promote the transfer to some extent. It is worth noting that rural human
capital, widely recognized as one of the key factors affecting rural labor flow,
obtains a grey relational grade of only 0.6769 in this study, ranking 11th. The
reason may lie in the education pattern of poor counties: it largely copies the
urban mode, attending only to ordinary education while ignoring vocational skills,
so that a person's education does not fit him for a particular job and the
effectiveness of education investment is low. This deficiency and mismatch of
human capital in poor areas make the grey relational grade between the rural
human capital stock and labor export small.
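The grey relational grades quoted above (0.8273, 0.7393, 0.6769, and so on) come from Deng's grey relational analysis. A minimal Python sketch of the computation, using made-up illustrative series rather than the chapter's data (the distinguishing coefficient rho = 0.5 is the conventional choice):

```python
def grey_relational_grade(reference, comparison, rho=0.5):
    """Deng's grey relational grade between a reference series and a
    comparison series, after mean-value normalization (rho is the
    distinguishing coefficient, conventionally 0.5)."""
    # Normalize each series by its own mean so scales are comparable.
    norm = lambda s: [v / (sum(s) / len(s)) for v in s]
    x0, x1 = norm(reference), norm(comparison)
    deltas = [abs(a - b) for a, b in zip(x0, x1)]
    dmin, dmax = min(deltas), max(deltas)
    # Grey relational coefficient at each point, then the average grade.
    coeffs = [(dmin + rho * dmax) / (d + rho * dmax) for d in deltas]
    return sum(coeffs) / len(coeffs)

# Illustrative (not the chapter's) data: labor export vs. one factor.
labor_export = [1.2, 1.5, 1.9, 2.4, 3.0]
factor = [0.8, 1.1, 1.3, 1.9, 2.2]
grade = grey_relational_grade(labor_export, factor)
```

A grade near 1 means the factor's series moves closely with labor export; the chapter's ranking of factors follows from sorting such grades.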
This paper has empirically examined the impact of these influence factors on the
counties' labor export based on the Grey Correlation Analysis method. The results
show that the government's poverty relief activities have positive impacts on the
counties' labor export; these impacts come into effect mainly through improving
labor participation in poverty relief programs, widening program coverage and
increasing investment. This means that increasing poverty alleviation
146 S. Wang and Y. Zhou
Chapter 16
Analysis on Trend of Research
and Development Intensity in China
16.1 Introduction
Research and development (R&D) intensity is defined as the ratio between R&D
expenditures and GDP (OECD 2011a). R&D intensity is used as an indicator of an
economy’s relative degree of investment in generating new knowledge. Several
countries have adopted ‘‘targets’’ for this indicator to help focus policy decisions
and public funding (OECD 2011b). The government intends to have R&D
intensity reach 2.2 % by 2015 according to the Twelfth Five-Year Plan for
National Economic and Social Development of the People’s Republic of China
(English section of the central document translation department 2011).
The Human Development Report shows that R&D intensity has strong positive
correlation with logarithmic per capita GDP, and R&D intensity of high-income
(Figure: R&D intensity (%) plotted by year.)
Over the past thirty-four years, China's economy has moved from being largely
closed to becoming a major global player. Gross expenditure on R&D (GERD)
increased consistently from 0.61 % of GDP in 1987 to 1.76 % in 2010. The
government requires an R&D intensity above 2.5 % by 2020 according to the
Medium and Long-term Science and Technology Strategic Plan (2006–2020) in
China (The Communist Party of China and State Council 2006). China's R&D
intensity was low between 1987 (0.61 %) and 1998 (0.65 %); it fell from a peak of
0.74 % in 1992 to 0.57 % in 1995–1996 and increased only marginally between
1997 and 1998, from 0.64 to 0.65 %. After 1998 it grew rapidly, reaching 1.07 %
of GDP by 2002, and since 2002 China has stepped into the stage of scientific and
technological take-off. China's R&D intensity trend from 1987 to 2010 is shown
in Fig. 16.2.
China's R&D intensity is nevertheless still low. Taken as a whole, China remains
in the primary stage of socialism and is still a developing country. China belongs
to the group of upper-middle-income countries, is in the middle stage of
industrialization, and its innovation ability ranks twenty-ninth in the world
according to the Global Innovation Index (GII) 2011.
Fig. 16.2 China's R&D intensity trend from 1987 to 2010. Source: Statistics of
science and technology of China, http://www.sts.org.cn/index.asp
152 B. Huang and L. Huang
Grey prediction theory, based on the grey system theory established by the
Chinese scholar Professor Deng Julong in 1982 (Di 2002), is a method for solving
problems that lack data and information. The GM (1, 1) model constructs a grey
prediction model from a number of equally spaced observations that reflect the
characteristics of the object to be predicted, such as production, sales, population
or interest rates, in order to predict the characteristic quantity at a given time, or
the time at which a certain characteristic quantity will be reached (Deng 1982; Liu
and Lin 2006; Deng 1989). In theory, the GM (1, 1) model is a continuous
function of time, stretching from the initial value to any future time (Deng 1995).
The basic procedure for grey prediction is as follows (Deng 2005):
Step 1: Determine the raw series $x^{(0)}$ and the class ratio series $r^{(0)}$.
The raw series $x^{(0)}$ for modeling is given as

$x^{(0)} = \left\{ x^{(0)}(1), x^{(0)}(2), \ldots, x^{(0)}(n) \right\}$    (16.1)
$b = \big(DF - CE\big) \big/ \big((n-1)F - C^2\big)$    (16.7)

Step 5: Determine the grey differential equation of GM (1, 1) and the whitened
response of GM (1, 1).
For $k = 0, 1, 2, \ldots$, $x^{(1)}(k)$ is the actual value and $\hat{x}^{(1)}(k)$ is the modeled value.
Step 7: Check the GM (1, 1) model accuracy.
According to the literature (Liu et al. 2004), if the $\delta^{(1)}(k)$ value is at most
0.01 the model is level 1; between 0.01 and 0.05, level 2; between 0.05 and 0.10,
level 3; and between 0.10 and 0.20, level 4.
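The steps above can be condensed into a short program. The following is a generic GM(1,1) sketch in Python (an assumed implementation, not the authors' code), estimating a and b by ordinary least squares on the grey differential equation; the six-value example series uses approximate Chinese R&D intensities for 2005–2010, not necessarily the exact sequence the chapter modeled:

```python
import math

def gm11(x0):
    """Fit a GM(1,1) model to the raw series x0 and return a function
    giving the fitted/predicted value for index k = 1, 2, ..."""
    n = len(x0)
    # 1-AGO: accumulated generating operation.
    x1 = [sum(x0[:i + 1]) for i in range(n)]
    # Background values: consecutive means of the AGO series.
    z1 = [0.5 * (x1[i] + x1[i + 1]) for i in range(n - 1)]
    # Least-squares estimates of a (development coefficient) and b (grey
    # input) from the grey differential equation x0(k) + a*z1(k) = b.
    sz, szz = sum(z1), sum(z * z for z in z1)
    sy, szy = sum(x0[1:]), sum(z * y for z, y in zip(z1, x0[1:]))
    a = (sz * sy - (n - 1) * szy) / ((n - 1) * szz - sz * sz)
    b = (szz * sy - sz * szy) / ((n - 1) * szz - sz * sz)
    def predict(k):  # k = 1 returns the raw initial value itself
        if k == 1:
            return x0[0]
        xh1 = lambda t: (x0[0] - b / a) * math.exp(-a * (t - 1)) + b / a
        return xh1(k) - xh1(k - 1)  # restore the (0) series by 1-IAGO
    return predict

# Example: approximate R&D intensities (%) as the raw series.
series = [1.32, 1.39, 1.40, 1.47, 1.70, 1.76]
model = gm11(series)
fitted = [model(k) for k in range(1, 7)]
forecast_next = model(7)
```

The relative errors between `fitted` and `series` correspond to the accuracy levels in Step 7.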
According to recent R&D intensity data for China, take the data on R&D intensity
over the past 6 years as the original sequence, where n equals six. Then build
models for these subsequences respectively, constantly adjusting the sequences'
interval boundary values and repeatedly modeling, thereby obtaining GM (1, 1)
models with different interval boundary values, i.e.
$\hat{x}^{(0)}(k+1) = \big(\hat{x}^{(0)}(1) - b/a\big)\,e^{-ak} + b/a.$
In the end, make a residual inspection of the GM (1, 1) models with different
interval boundary values, and select the residual-qualified model of the best (or
closest) precision level as the R&D intensity GM (1, 1) model.
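One way to read this selection procedure: fit GM(1,1) models on subsequences with different starting (boundary) points and keep the one with the smallest fitting error. A hypothetical sketch, with mean relative error standing in for the residual inspection described above and an illustrative series rather than the chapter's data:

```python
import numpy as np

def gm11_fit_err(x0):
    """Fit GM(1,1) to x0; return (forecast function, mean relative error)."""
    x0 = np.asarray(x0, dtype=float)
    n = len(x0)
    x1 = np.cumsum(x0)                      # 1-AGO series
    z1 = 0.5 * (x1[:-1] + x1[1:])           # background values
    B = np.column_stack([-z1, np.ones(n - 1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    xh1 = lambda k: (x0[0] - b / a) * np.exp(-a * (k - 1)) + b / a
    fit = np.concatenate([[x0[0]],
                          xh1(np.arange(2, n + 1)) - xh1(np.arange(1, n))])
    err = float(np.mean(np.abs(fit - x0) / x0))
    return (lambda k: xh1(k) - xh1(k - 1)), err

# Try different starting boundaries of a longer series; keep the best fit.
series = [1.23, 1.32, 1.39, 1.40, 1.47, 1.70, 1.76]  # illustrative values
candidates = {start: gm11_fit_err(series[start:]) for start in range(0, 3)}
best_start, (best_model, best_err) = min(candidates.items(),
                                         key=lambda kv: kv[1][1])
```

Each candidate model uses a different interval boundary; the one whose residuals qualify at the best precision level is retained for forecasting.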
Forecast data on China’s R&D intensity in the next 5 years is showed in
Table 16.1.
16.5 Conclusion
The upward trend of R&D intensity in China is generally in agreement with the
general rule of R&D intensity trends. China has entered the second stage, the
science and technology (S&T) take-off, which is a crucial period, although the
level of R&D intensity is still low. This study documents that China has begun
such an S&T take-off and that the growth of R&D intensity is consistent with that
of GDP. R&D intensity in China can therefore realize steady and rapid growth
toward the goal of building an innovation-oriented country, and will reach 2.5 %
before 2016.
References
Deng J (1982) Control problem of grey system. Syst Control Lett 1(5):288–294
Deng J (1989) Introduction to Grey System Theory. J Grey Syst 1(1):1–24
Deng J (1995) Extent Information Cover Grey System Theory. J Grey Syst 7(2):131–138
Deng J (2005) The primary methods of grey system theory, 2nd edn. Huazhong University of
Science and Technology Press, Hubei, pp 8–15
Deng X, Ai Q (2004) Analysis on Chinese R&D investment model and relevant policies—
analysis on government leading R&D investment model. Contemp Finance Econ
232(3):23–24
Di L (2002) Brief discussion on gray model and its application in physical education. Fujian
Sports Sci 21(1):6–8
English section of the central document translation department (2011) The twelfth five-year plan
for national economic and social development of the People’s Republic of China. Central
Compilation and Translation Press, Beijing, p 16
Gao J, Jefferson GH (2007) Science and technology take-off in China?: sources of rising R&D
intensity. Asia Pac Bus Rev 13(3):357–371
Liu SF, Lin Y (2006) Grey information theory and practical applications. Springer-Verlag,
London, pp 7–12
Liu S, Dang Y, Fang Z (2004) Grey system theory and application, 3rd edn. Science Press,
Beijing, pp 163–164
OECD (2011a) Regions at a glance. OECD Press, Paris, p 62
OECD (2011b) OECD science, technology and industry scoreboard 2011. OECD Press, Paris, p 18
The Communist Party of China and State Council (2006) The medium and long-term science and
technology strategic plan (2006–2020) in China. China Legal Press, Beijing, p 68
UNDP (United Nations Development Program) (2001) Human development report. Oxford
University Press, New York, pp 52–54
Zeng G, Tan W (2003) The development path of states’ R&D and basic research and its
inspirations. Stud Sci 21(3):154–156
Zhang W (2001) A comparative study R&D expenditure between China and U.S. China Soft Sci
12(10):79
Chapter 17
Application of IE Method in Modern
Agro-Ecological Park Planning
Li Yue
Abstract The present paper deals with the comprehensive planning of a new
modern agro-ecological park established by a company. Learning from the
practices of the IE method in the manufacturing and service industries and
considering the characteristics of modern agriculture, it offers references for the
planning of fungi configuration, functional zones, the processing zone and the
layout of the park, so as to maximize the overall benefits of constructing a modern
agro-ecological park.
17.1 Introduction
The IE method has been widely used in manufacturing and service industries such
as automobiles, steel, machinery, home appliances, construction materials and
information. Japan was the first country to apply the IE method in agriculture, and
this application is still at its initial stage (Guo 2003; Qi et al. 1999; Yi and Guo
2007). One company (hereinafter referred to as company A) will establish a
modern agro-ecological park covering 90 mu (about 6 ha), incorporating fungi
cultivation, personnel training, eco-tourism, entertainment and dining, and
conference and leisure facilities. Combining the characteristics of modern
agriculture with the practices of the IE method in manufacturing and service
industries, this article considers solutions to the problems in planning a modern
agro-ecological park with a view to maximizing corporate and social benefits (Shi
2002; Shin 2001; Wu et al. 2008).
L. Yue (&)
School of Manufacturing Science
and Engineering, Southwest University of Science and Technology, Mianyang, China
e-mail: [email protected]
The construction of a modern agro-ecological park shares much in common with
that of a modern manufacturing company. This paper employs the IE method in
researching agro-ecological park planning in the following areas: location and
layout of the park, organizational structure planning, supply chain planning,
capacity planning, industrial chain planning, human resource planning, human-
factors environmental planning and standardized production (Lu and Wang 2004;
Wang 2009).
The research guidelines and procedures in this article are shown in Fig. 17.1:
(1) using the fuzzy comprehensive evaluation method to choose the most suitable
fungi species for cultivation; (2) using SWOT analysis and the historical analogy
method to predict the capacity of each functional zone and to plan personnel and
equipment according to market demand; (3) in accordance with the results of
capacity planning, employing operations research to build a model with the
optimal area as its target and using Lindo to find the optimal areas of the
functional zones; and (4) on the above basis, using the SLP method to plan the
layout of the processing zone and the park (Jiang 2001).
Through market research and the analysis of cultivation techniques, six species of
edible fungi were selected as candidates for cultivation: Morchella conica,
Lentinus edodes, Ganoderma lucidum, Coprinus comatus, Pleurotus ostreatus and
Flammulina velutipes, but only three of them can be cultivated given the actual
demand of company A. Therefore, the fuzzy comprehensive evaluation method is
used to evaluate the candidate species. The evaluation indicators and their
associated weights are shown in Table 17.1 (Pan 2003).
According to Table 17.1, the six candidate species are graded and ranked from
high to low, and Morchella conica, Ganoderma lucidum and Lentinus edodes are
the final selections. See Fig. 17.2 (Hang 2003; Zhang 2002).
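The weighted-average form of fuzzy comprehensive evaluation can be sketched as follows. Both the weights and the membership matrix below are hypothetical stand-ins (the real indicators and weights are in Table 17.1, which does not survive in this extract):

```python
import numpy as np

# Hypothetical indicator weights (e.g. market demand, cultivation
# technique, cost, yield) and a membership matrix:
# rows = candidate species, columns = indicators.
weights = np.array([0.30, 0.25, 0.25, 0.20])
species = ["Morchella conica", "Ganoderma lucidum", "Lentinus edodes",
           "Coprinus comatus", "Pleurotus ostreatus", "Flammulina velutipes"]
membership = np.array([
    [0.9, 0.8, 0.7, 0.9],
    [0.8, 0.9, 0.8, 0.7],
    [0.8, 0.8, 0.9, 0.7],
    [0.6, 0.7, 0.6, 0.6],
    [0.5, 0.7, 0.7, 0.5],
    [0.6, 0.6, 0.7, 0.6],
])
# Weighted-average operator M(., +): score = membership . weights.
scores = membership @ weights
ranking = [species[i] for i in np.argsort(-scores)]
top3 = ranking[:3]
```

With these illustrative numbers the three highest-scoring species coincide with the chapter's final selections; in practice the scores follow entirely from the Table 17.1 weights.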
The agro-ecological park can be divided into six zones with relatively distinct
functions: (1) Dining and Training Zone; (2) Morchella conica Cultivation Zone;
(3) Ganoderma lucidum Cultivation Zone; (4) Lentinus edodes Cultivation Zone;
(5) Breeding and Sightseeing Zone; (6) Processing Zone (Ding 2004).
We first analyze the geographic conditions and human environment of the park,
then make capacity predictions for each zone, starting from market analysis and
using the historical analogy method and SWOT analysis according to the features
of each functional zone, and finally make personnel, equipment and construction
plans for each zone in conformity with the capacity predictions.
The dining and training zone is a major functional zone featuring farmer training,
instruction, dining (mainly fresh edible fungi), eco-tourism, specialty shops, a
DIY zone, and leisure and entertainment. It is predicted that the annual number of
visitors will be 27,000–33,000 and that about 1,000 edible fungi cultivators will
receive professional training annually.
The market demand for Morchella conica mainly comes from visitors coming for
relaxation and leisure. In accordance with the capacity planning for the dining and
training zone, it is projected that the annual demand for Morchella conica will
reach 5,900–7,300 kg.
Ganoderma lucidum has been proven to have medicinal properties, and its
market demand has been increasing year after year. There are two sales channels
for Ganoderma lucidum: (1) fresh Ganoderma lucidum consumed in restaurants;
(2) Ganoderma lucidum spore powder, dried Ganoderma lucidum and powdered
Ganoderma lucidum sold to herbal medicine dealers. Based on the market
analysis, competitor analysis and SWOT analysis, it is forecast that the annual
demand for Ganoderma lucidum will stand at 149,000–165,000 kg.
Shiitake (Lentinus edodes) is rich in vitamins, iron and potassium, treats loss of
appetite and relieves fatigue. Through the current market analysis and the sales
data analysis, and using the historical analogy method, it is calculated that the
annual demand for Lentinus edodes will be 455,000–572,000 kg.
The breeding and sightseeing zone takes breeding and soilless cultivation as its
core, mainly breeding for the Morchella conica, Ganoderma lucidum and
Lentinus edodes cultivation zones. The capacity of the breeding zone is therefore
determined to some extent by that of the three cultivation zones. The capacity
planning is as follows: Morchella conica 118–146 bottles, Ganoderma lucidum
2,980–3,300 bottles, Lentinus edodes 9,100–11,440 bottles.
Table 17.2 Annual capacity planning of the processing zone (ten thousand kg/year)
Species of edible fungi Name Total
Lentinus edodes Powdered Lentinus edodes 319
Dried Lentinus edodes 410
Preserved Lentinus edodes 182
Ganoderma lucidum Powdered Ganoderma lucidum 74
Dried Ganoderma lucidum 124
Preserved Ganoderma lucidum 50
The main function of this zone is processing Ganoderma lucidum and Lentinus
edodes (shiitake). Given the development goals of company A, the capacity
planning of the processing zone is shown in Table 17.2.
The processing zone layout has much in common with plant layout, so SLP
method can be applied.
According to the processing techniques of Ganoderma lucidum and Lentinus
edodes, this zone can be divided into 10 operating units: (1) raw material base;
(2) cleaning zone; (3) drying room; (4) sun-drying zone; (5) sterilization room;
(6) cooling room; (7) pulverizing room; (8) low-temperature packaging room;
(9) packaging and quality monitoring workshop; (10) processed products zone.
First, analyze the processing procedures of the edible fungi and the logistics
volumes, and establish the logistics relationships shown in Fig. 17.3.
Next, analyze the non-logistics relationships in the processing zone, shown in
Fig. 17.4.
Then combine the two and establish the integrated relationships shown in
Fig. 17.5.
Finally, according to the integrated relationships between the operating units,
draw their area diagram (Figs. 17.6 and 17.7).
Take the characteristics of edible fungi processing and the relevant regulations
and restraints into account, and then determine the overall layout of the processing
zone shown in Fig. 17.8 (Chen et al. 2008; Dong 2005).
Dividing the overall annual demand of each functional zone by its capacity per
unit area gives the initial area of each zone, shown in Table 17.3.
Table 17.4 The interest coefficient per unit area of land in each functional zone

Name                                 Interest coefficient
Morchella conica cultivation zone    13.4
Ganoderma lucidum cultivation zone   11.1
Lentinus edodes cultivation zone      7.6
Catering and training zone            6.5
0.1X5 + 1 = X1    (17.4)
18 ≤ X2 ≤ 20    (17.6)
35 ≤ X3 ≤ 44    (17.7)
0 ≤ X5 ≤ 2    (17.8)
Use Lindo to find the optimal solution: X1 = 1, X2 = 3, X3 = 40, X5 = 0,
X4 = 0.12X1 + 0.09X2 + 0.2X3 = 10. Thus the optimal area of each functional
zone is shown in Table 17.5.
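The same optimization can be reproduced with any LP solver. A purely hypothetical sketch: constraint (17.5) is not visible in this extract, the mapping of X1–X4 to the Morchella, Ganoderma, Lentinus and catering zones is assumed, and the interest coefficients of Table 17.4 are taken as the objective weights, so this does not reproduce the paper's reported solution. With X1 and X4 eliminated via the equalities, the objective is linear over a box in (X2, X3, X5), so checking the vertices suffices:

```python
import itertools

def objective(x2, x3, x5, c=(13.4, 11.1, 7.6, 6.5)):
    """Total benefit under assumed weights c; X1 and X4 are determined
    by the equality constraints (17.4) and the X4 definition."""
    x1 = 0.1 * x5 + 1.0                       # Eq. (17.4)
    x4 = 0.12 * x1 + 0.09 * x2 + 0.2 * x3     # X4 as given in the text
    return c[0] * x1 + c[1] * x2 + c[2] * x3 + c[3] * x4

# Bounds (17.6)-(17.8): 18<=X2<=20, 35<=X3<=44, 0<=X5<=2.
corners = itertools.product([18, 20], [35, 44], [0, 2])
best = max(corners, key=lambda p: objective(*p))
```

Because every assumed coefficient is positive, the maximum sits at the upper-bound corner; the paper's different answer indicates the missing constraint (17.5) is binding.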
Table 17.6 Comprehensive closeness of the functional zones

Zone                                 Close degree   Ranking
Processing zone                           2            6
Catering and training zone                8            5
Breeding and sightseeing zone             8            4
Morchella conica cultivation zone         9            3
Ganoderma lucidum cultivation zone       14            1
Lentinus edodes cultivation zone         13            2
The higher an operating unit's close degree ranks, the closer to the center of the
general layout it should be, and vice versa.
According to Table 17.6, the rankings are: (1) Ganoderma lucidum cultivation
zone; (2) Lentinus edodes cultivation zone; (3) Morchella conica cultivation zone;
(4) breeding and sightseeing zone; (5) catering and training zone; (6) processing
zone. The location-related map of each functional zone and the general layout of
the park are shown in Figs. 17.9 and 17.10.
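The close degree behind this ranking is obtained by scoring each zone's pairwise SLP relationship grades and summing them. A sketch with a common numeric convention (A=4, E=3, I=2, O=1, U=0, X=-1; an assumption, since the chapter does not state its scoring) and hypothetical grades rather than the chapter's chart:

```python
# SLP closeness: convert each unit's pairwise relationship grades to
# numbers, sum them into a "close degree", and rank units by it.
SCORE = {"A": 4, "E": 3, "I": 2, "O": 1, "U": 0, "X": -1}

relations = {  # hypothetical grades, not the chapter's chart
    "Processing": ["X", "X", "X", "O", "O"],
    "Catering and training": ["X", "A", "I", "E", "O"],
    "Ganoderma cultivation": ["E", "E", "I", "E", "E"],
}
close_degree = {zone: sum(SCORE[g] for g in grades)
                for zone, grades in relations.items()}
ranking = sorted(close_degree, key=close_degree.get, reverse=True)
```

Units at the top of `ranking` are then placed nearest the center of the general layout, exactly as the text prescribes.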
Fig. 17.9 The location and area-related map of each functional zone
17.7 Conclusion
This paper explores a new approach to modern agricultural park planning, both
theoretically and practically. Drawing on the practices of the IE method in the
manufacturing and service industries and making full use of IE knowledge, it
plans the capacity, fungi configuration, park layout and other aspects so as to
maximize the overall benefits of the agro-ecological park.
References
Chen Y, Jia W-q, Huang Y (2008) The design of distributed measuring-control and monitor
system of edible fungi facilities. Hunan Agric Sci 3:149–150
Ding Q (2004) A recycling agriculture model based on edible fungi cultivation. J Anhui Agric Sci
7(5):21–22
Dong H (2005) Facilities planning and logistics analysis. Machinery Industry Press, Beijing
Guo C-q (2003) The status and role of industrial engineering in development of China's
manufacturing industry. Ind Eng Manag 2:1–3
Hang Y (2003) The problem and countermeasure of edible fungus factory’s facilities cultivation.
Edible Fungi 6:2–4
Jiang W (2001) Development of soilless culture in mainland China. Trans CASE 17(1):10–15
Lu L, Wang J (2004) IE application of IE to agriculture production management. Ind Eng J
7(5):21–22
Pan E-s (2003) Planting planning and control. Shanghai Jiao Tong University Press, Shanghai
Qi E-s, Wang Y-l, Lu L (1999) Developing status and tendency of Chinese industrial engineering
subject. Ind Eng J 2(1):1–4
Shi T (2002) Ecological agriculture in China: bridging the gap between rhetoric and practice of
sustainability. Ecol Econ 42(3):359–368
Shin DH (2001) An alternative approach to developing science parks: a case study from Korea.
Pap Reg Sci 80:103–111
Wang J (2009) Analysis of agricultural industry chain construction. North Econ 4:94–95
Wu Y-r, Qiu M-q, Wang C-b (2008) The reflection of modern agriculture demonstration garden’s
construction. Mod Agric Sci Technol 20:329–332
Yi S-P, Guo F (2007) Fundament of industrial engineering. Machinery Industry Press, Beijing
Zhang Y (2002) Probe into regarding the edible mushroom as the central industry in the
development of agricultural circular economy. Sci-Tech Inf Dev Econ 19(20):121–122
Chapter 18
Application of Improved Grey Prediction
Model for Chinese Petroleum
Consumption
18.1 Introduction
Energy is the lifeblood of the economy, closely related to the development of the
national economy and the improvement of the people's living conditions.
Forecasting energy consumption supports stable and rapid economic development,
speeds up the healthy development of the energy industry, and is conducive to the
formulation of sound energy planning; accurate forecasts of energy consumption
are therefore very necessary.
Y. Ma (&)
School of Economics and Management, China Petroleum University,
Dongying 266580 Shandong, China
e-mail: [email protected]
M. Sun
School of Management, Shandong University, Jinan 250014 Shandong, China
18.2 Methodology
Deng proposed grey theory to deal with indeterminate and incomplete systems.
Unlike conventional stochastic forecasting theory, grey theory needs only a few
sample data points to construct a grey model. Because raw data of this kind show
poor regularity, the accumulated generating operation (AGO) technique of grey
forecasting is suitable for efficiently reducing their randomness. Generally, the
procedure for GM (1,1) forecasting is as follows:
Step 1. Denote the original data sequence by

$X^{(0)} = \left\{ X^{(0)}(1), X^{(0)}(2), \ldots, X^{(0)}(n) \right\}$    (18.1)
We obtain $\hat{x}^{(1)}$ from Eq. 18.5. Let $\hat{x}^{(0)}$ be the fitted and predicted series.
Grey prediction has many advantages: it requires little data, is independent of
distribution and trend assumptions, is convenient to operate, gives high-precision
short-term forecasts, and is easy to test. It is therefore widely used and achieves
satisfactory results. But there are some limitations; many scholars have
encountered problems of low prediction accuracy in applying the GM (1,1)
model, and
In 1993, China became a petroleum-importing country. With the rapid
development of the national economy, petroleum demand has grown rapidly, and
the petroleum supply situation will become even more severe in the future.
Whether from the goal of economic development or that of environmental
protection, adjusting and improving the energy structure and promoting the
diversification of energy use are the only way forward for China's sustainable
energy strategy. Table 18.1 shows China's petroleum consumption in recent years.
The raw data $x^{(0)}(k)$ are tested against the quasi-smooth conditions; then
$x^{(1)}$ is tested for the quasi-exponential law, using

$\rho(k) = \dfrac{x^{(0)}(k)}{x^{(1)}(k-1)}$    (18.11)

$\sigma^{(1)}(k) = \dfrac{x^{(1)}(k)}{x^{(1)}(k-1)}$    (18.12)

When $k > 3$ and $\sigma^{(1)}(k) \in [1, 1.5]$, the quasi-exponential law is met.
First the quasi-smooth test is done; the results are as follows:
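These two checks can be coded directly from Eqs. 18.11–18.12. The series below is the actual petroleum column of Table 18.2; the rho < 0.5 bound is the conventional quasi-smooth threshold (an assumption, since the bound is not stated in this extract):

```python
def quasi_tests(x0):
    """Quasi-smooth test rho(k) = x0(k)/x1(k-1) and quasi-exponential test
    sigma(k) = x1(k)/x1(k-1), both checked for k > 3 as in the text
    (k is 1-based, as in Eqs. 18.11-18.12)."""
    x1 = [sum(x0[:i + 1]) for i in range(len(x0))]      # 1-AGO series
    rho = {k: x0[k - 1] / x1[k - 2] for k in range(2, len(x0) + 1)}
    sigma = {k: x1[k - 1] / x1[k - 2] for k in range(2, len(x0) + 1)}
    smooth = all(rho[k] < 0.5 for k in rho if k > 3)    # assumed bound
    quasi_exp = all(1.0 <= sigma[k] <= 1.5 for k in sigma if k > 3)
    return smooth, quasi_exp

# Petroleum consumption 2002-2011 from Table 18.2
# (million tons of standard coal).
data = [35520, 38847, 45319, 47414, 49924, 52735, 53334, 54889, 61738, 63278]
smooth, quasi_exp = quasi_tests(data)
```

On this series both conditions hold for k > 3, so the data qualify for GM(1,1) modeling.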
Table 18.2 Chinese petroleum consumption fitting value (million tons of standard coal)
Years Petroleum Traditional GM(1,1) model Error test
Predicted value Residual Relative error (%)
2002 35520 35520 0 0.00 C = 0.02
2003 38847 39746 -899 -2.31 P=1
2004 45319 42840 2479 5.47
2005 47414 46776 638 1.35
2006 49924 49770 154 0.31
2007 52735 53645 -910 -1.73
2008 53334 54987 -1653 -3.10
2009 54889 56765 -1876 -3.42
2010 61738 62876 -1138 -1.84
2011 63278 63786 -508 -0.80
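The C and P entries in Table 18.2 are the posterior variance ratio and the small-error probability of the standard grey accuracy check. They can be recomputed from the table's actual values and residuals (a sketch; note that C computed from these rounded figures need not match the reported 0.02):

```python
import math

def posterior_variance_test(actual, residuals):
    """Grey model accuracy check: posterior variance ratio C = S2/S1 and
    small-error probability P = frequency of |e - mean(e)| < 0.6745*S1."""
    std = lambda s: math.sqrt(sum((v - sum(s) / len(s)) ** 2 for v in s)
                              / len(s))
    s1, s2 = std(actual), std(residuals)   # data std and residual std
    ebar = sum(residuals) / len(residuals)
    p = sum(abs(e - ebar) < 0.6745 * s1 for e in residuals) / len(residuals)
    return s2 / s1, p

# Actual values and residuals taken from Table 18.2.
actual = [35520, 38847, 45319, 47414, 49924, 52735, 53334, 54889, 61738, 63278]
residuals = [0, -899, 2479, 638, 154, -910, -1653, -1876, -1138, -508]
C, P = posterior_variance_test(actual, residuals)
```

Smaller C and larger P indicate a better model; C below 0.35 and P equal to 1 correspond to the best accuracy grade in the usual grey-model convention.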
Table 18.3 The forecast values of China's petroleum consumption (million tons of standard coal)
Years Petroleum Years Petroleum Years Petroleum
2012 67646 2018 97180 2024 139911
2013 72690 2019 104769 2025 150558
2014 74876 2020 112466 2026 161961
2015 77925 2021 120915 2027 174083
2016 83413 2022 130027 2028 187124
2017 90110 2023 139911 2029 201156
18.4 Conclusion
The forecast shows that China's petroleum consumption in 2029 will have
increased by 13.351 billion tons of standard coal compared with 1998. A big
shortfall in petroleum supply will lead to a tough petroleum security situation. By
2029 China will have basically realized industrialization and agricultural
modernization, and the rapid growth in petroleum demand will restrict China's
economic growth if practical countermeasures for petroleum security are not
taken.
References
Jen-Ying Shih
19.1 Introduction
Securities and Exchange Commission, which made its executives earn an
enormous amount of illegal returns by manipulating false information and inside
information in the securities market. Similar cases have also existed in Taiwan;
for instance, Rebar's executives entrenched assets of the Rebar conglomerate, and
such behavior hurt many of its stockholders and the financial institutions that had
lent a lot of money to Rebar.
With the above examples in mind, stock market investors prefer to invest in
promising companies with strong corporate governance, to minimize the losses
resulting from weak governance; credit analysts at credit rating agencies and
financial institutions consider corporate governance factors in their credit-granting
decisions to avoid the risks associated with bad debt (Standard and Poor's 2002;
Fitch Ratings 2004; Ashbaugh-Skaifea et al. 2006). The authorities of financial
markets, such as the Financial Supervisory Commission, R.O.C. (FSC), thus
require listed companies to file multifaceted corporate governance data, including
ownership structure, information on inside shareholders, institutional
shareholdings, board composition, board size, financial transparency, executive
compensation, etc., to protect the stakeholders of these companies.
However, because corporate governance is represented by a massive amount of
data, an analytical tool is needed to interpret the data and make full use of it. This
study therefore applied an integrated self-organizing map (SOM) model to study
the corporate governance of Taiwan's semiconductor companies by recognizing
implicit patterns in these data. The SOM has been successfully used for
visualizing and clustering high-dimensional data (in this study, each case
company with many corporate governance variables) in document classification,
construction of knowledge maps, market segmentation, financial statement
analysis, etc. (e.g., Back et al. 1998; Eklund et al. 2002; Shih et al. 2008; Shih
2011; Kohonen et al. 2000).
The remainder of this paper is organized as follows. In Sect. 19.2, we review
corporate governance literature to define features (input variables) used in this
study. Then the dataset of this study is introduced in Sect. 19.3. Subsequently,
Sect. 19.4 describes the methodology applied in this research (i.e., the integrated-
SOM model); Section 19.5 presents results and interpretation of such results.
Finally, the conclusions are provided in Sect. 19.6.
19.3 Dataset
This study gathered data from 53 semiconductor companies listed on the Taiwan
Stock Exchange (TSE). The data set contains a total of 30 corporate governance
variables, illustrated as follows.
• Managerial ownership or insiders’ shareholdings
The factor is observed by the following variables:
1. Percentage of shares held by board directors
2. Percentage of shares held by supervisors
3. Percentage of shares held by managers
4. Percentage of shares held by family
5. Percentage of shares owned by large shareholders whose holdings exceed 10 %
of outstanding shares
6. Ratio of shares pledged by directors to shares owned by directors
7. Ratio of shares pledged by supervisors to shares owned by supervisors
• External shareholdings
The factor is observed by the following variables:
8. Percentage of external shareholdings
9. Percentage of shares held by institutional investors
10. Percentage of shares held by foreign institutional investors
11. Percentage of shares held by domestic government
12. Percentage of shares held by domestic financial institutions
13. Percentage of shares held by foreign financial institutions
• Board size
The factor is observed by the following variables:
14. Number of board directors
15. Number of supervisors
• Board composition
The factor is further decomposed into three dimensions, including family control,
independence and division of managerial work, measured by the following
variables.
i. Family control
16. Number of family-controlled board directors
17. Number of family-controlled supervisors
ii. Independence
18. Number of independent board directors
19. Number of independent supervisors
iii. Division of managerial work
20. Number of directors who are also managers in the conglomerate
21. Number of supervisors who are also managers in the conglomerate
19 Applying an Integrated SOM Model 181
19.4 Methodology
This study mainly used the SOM algorithm to analyze the corporate governance
patterns of Taiwan's semiconductor companies. Kohonen (1982) proposed the
SOM algorithm, a two-layered, fully connected artificial neural network, for
clustering tasks. The SOM model is widely used for generating topology-
preserving maps and for data visualization. The most notable characteristic of the
SOM algorithm is that similarities in the input data are mirrored, to a very large
extent, by geographical vicinity within the representation space: similar input data
are assigned to neighboring regions on the map (Shih et al. 2008). The main
advantage of the SOM algorithm is that it can map high-dimensional data into a
lower-dimensional representation space. The SOM algorithm has been
successfully applied in knowledge management (Shih et al. 2008; Schweighofer
et al. 2001) and financial statement analysis (Back et al. 1998; Eklund et al. 2002).
In addition, the appropriate map size of SOM is determined by a growing SOM
(GSOM) algorithm (Shih et al. 2008) in this study. The interpretation of SOM maps
is assisted by LabelSOM (Dittenbach et al. 2002) for analyzing the clustering results.
This integrated SOM model is constructed by the following procedures.
Step 1: Initialize all parameters of the integrated SOM model for model
training, including an initial neighborhood range, an initial learning rate of the
SOM, an accepted error measure, an initial map size for SOM training process, a
growing-stopping criterion (τ1) for the growing SOM, a maximum number of labels
and a label threshold for LabelSOM in selecting labels that represent each unit in
the output layer.
182 J.-Y. Shih
Step 2: Start with a virtual layer 0 (see Fig. 19.1) consisting of only one unit
whose weight vector (m0) is initialized as the average of all the input data. Then
calculate the mean quantization error (mqe) by the Euclidean distance between the
weight vector of the unit and all input vectors. The mqe of the unit 0 is calculated
as the following equation.
mqe_0 = \frac{1}{n} \sum_{j=1}^{n} \| m_0 - x_j \|,
where xj denotes the input vector j, ‖·‖ represents the Euclidean vector norm, and
n is the total number of the input vectors.
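As a minimal sketch of this computation (assuming the input vectors are the rows of a NumPy array; the function name is illustrative, not from the paper):

```python
import numpy as np

def mqe0(X):
    """Mean quantization error of the single virtual unit (Step 2).

    X is an (n, d) array of input vectors; the unit's weight vector m0
    is initialized as the average of all inputs.
    """
    m0 = X.mean(axis=0)
    # Average Euclidean distance between m0 and every input vector.
    return np.linalg.norm(X - m0, axis=1).mean()

X = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0], [2.0, 2.0]])
err = mqe0(X)  # m0 = (1, 1); every input lies sqrt(2) away, so err = sqrt(2)
```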
Step 3: Set the initial map size of the output layer in a topology of, for example,
2 × 2 units. This topology is determined by the parameter provided in Step 1.
Step 4: Initialize weight vectors of the output layer.
Step 5: Use input vectors to train the initial SOM by the following algorithm
(Shih et al. 2008), Fig. 19.2.
Step 6: Evaluate the mapping quality by calculating each unit’s mqe in this
map. The mqe of the unit i is calculated as the following equation.
mqe_i = \frac{1}{n_i} \sum_{x_j \in C_i} \| m_i - x_j \|,
where mi refers to the weight vector of the unit i, xj denotes the input vector
j mapped to the unit i, Ci denotes the set of all input vectors mapped to the unit i,
and ni is the total number of the input vectors that are mapped to unit i.
A large mqe value means that the input vectors are not clustered well by the
current map topology. Hence, some new units need to be added on this map to
improve the mapping quality (i.e., to decrease the mqe). To determine where to
add the new units, we have to find the error unit (e) which has the largest value of
mqe. The e is determined by the following equation.
e = \arg\max_i \{ mqe_i \}.
Then, either a new row or column of units is interpolated between the error unit
and its most dissimilar neighbor (as shown in Fig. 19.3).
The weight vectors of these new units are initialized as the average of their
neighbors. After growing the map, calculate the mean mqe of all units (MQE) in
the current map. A map grows until its MQE is reduced to a predefined fraction
(the growing-stopping criterion, τ1) of the mqe0 in the virtual layer (as shown in
Eq. 19.1). The lower the value of the quantization error, the better the map has
been trained.
MQE < \tau_1 \cdot mqe_0 \quad (19.1)
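The growing step can be sketched as follows. This is a simplified illustration (function and variable names are my own): it finds the error unit, picks its most dissimilar direct neighbor, and interpolates a new row or column of averaged weight vectors between them. A full GSOM would retrain the map after each insertion and repeat until the stopping criterion of Eq. (19.1) is met.

```python
import numpy as np

def grow_step(weights, unit_mqe):
    """One growing decision of GSOM (simplified sketch).

    weights:  (rows, cols, d) grid of unit weight vectors
    unit_mqe: (rows, cols) mean quantization error per unit
    Returns a new weight grid with one interpolated row or column.
    """
    rows, cols, _ = weights.shape
    # Error unit e = argmax_i mqe_i
    r, c = np.unravel_index(np.argmax(unit_mqe), unit_mqe.shape)
    # Most dissimilar direct neighbor (up/down/left/right)
    best, best_dist = None, -1.0
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < rows and 0 <= nc < cols:
            dist = np.linalg.norm(weights[r, c] - weights[nr, nc])
            if dist > best_dist:
                best, best_dist = (nr, nc), dist
    nr, nc = best
    if nr != r:
        # Insert a new row between the error unit and that neighbor,
        # initialized as the average of the two adjacent rows.
        i = max(r, nr)
        new = (weights[i - 1] + weights[i]) / 2.0
        return np.insert(weights, i, new, axis=0)
    # Otherwise insert a new column the same way.
    i = max(c, nc)
    new = (weights[:, i - 1] + weights[:, i]) / 2.0
    return np.insert(weights, i, new, axis=1)

w = np.random.default_rng(0).random((2, 2, 3))
mqe = np.array([[0.1, 0.9], [0.2, 0.3]])
w2 = grow_step(w, mqe)  # 2x2 grid grows to 3x2 or 2x3
```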
Step 7: After training GSOM, LabelSOM (Dittenbach et al. 2002) is used to
select the features that can represent the input vectors mapped to each unit in the
final map. The selection mechanism is based on the quantization error vector (qik ),
which is determined by the distance between the weight vector mi ¼
ðli1 ; li2 ; . . .; lin Þ 2 Rn of the unit i and all the input vectors xj ¼ ðnj1 ; nj2 ; . . .; njn Þ 2
Rn mapped to the unit i (Eq. 19.2). After sorting the all qik of each unit, according
to the parameters set by step 1 (i.e., the maximum number of labels and the label
threshold), several features are selected as the labels for representing the unit.
Fig. 19.3 Growing mechanism of GSOM adapted from (Dittenbach et al. 2002)
q_{ik} = \sqrt{ \frac{1}{|C_i|} \sum_{x_j \in C_i} ( \mu_{ik} - \xi_{jk} )^2 }, \quad k = 1, \ldots, n \quad (19.2)
where Ci is the set of all input vectors mapped to the unit i in the output layer, μ_ik
denotes the kth element of the weight vector of unit i in the output layer, and
ξ_jk represents the kth element of the input vector xj mapped to the unit i in the
output layer.
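A hedged sketch of this labeling rule (names and the threshold value are illustrative; the published LabelSOM additionally considers the weight values themselves when the data are sparse):

```python
import numpy as np

def label_unit(m_i, C_i, max_labels=3, threshold=0.05):
    """LabelSOM sketch: pick the features with the smallest quantization
    error q_ik as labels for unit i (Eq. 19.2).

    m_i: (d,) weight vector of the unit
    C_i: (n_i, d) input vectors mapped to the unit
    """
    q = np.sqrt(((m_i - C_i) ** 2).mean(axis=0))  # q_ik for each feature k
    # Features whose quantization error is below the threshold describe
    # the unit well; keep at most max_labels, smallest q_ik first.
    order = np.argsort(q)
    return [k for k in order if q[k] <= threshold][:max_labels]

m = np.array([1.0, 0.5, 0.0])
C = np.array([[1.0, 0.5, 0.9],
              [1.0, 0.5, 0.1]])
labels = label_unit(m, C)  # features 0 and 1 match perfectly, feature 2 does not
```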
19.5 Results
The training result of the integrated SOM model is shown in Fig. 19.4, which
illustrates that eight types of corporate governance of semiconductor companies in
Taiwan are identified by this study. Based on the LabelSOM, each unit (cluster) is
assigned a set of up to three labels to represent the various types of corporate
governance. The eight types of corporate governance are described as follows.
1. CEO also serves as the chairman of the board (unification of directing). The
companies listed on Unit (1, 1)¹ and Unit (2, 1) belong to this style; in total, 15
semiconductor companies are clustered in this style.

¹ We use the notation (x, y) to refer to the unit in row x and column y, starting with (1, 1) in the
upper left corner.
Fig. 19.4 Corporate governance maps of the semiconductor companies and their relationships
with ROA
19.6 Conclusion
References
Standard & Poor’s (2002) Standard & Poor’s corporate governance scores: criteria, methodology
and definitions. McGraw-Hill Companies Inc, New York
Fitch Ratings (2004) Evaluating corporate governance: the Bondholders’ perspective. In: Credit
policy special report, New York
Ashbaugh-Skaifea H, Collins DW, LaFond R (2006) The effects of corporate governance on
firms’ credit ratings. J Acc Econ 42:203–243
Back B, Sere K, Vanharanta H (1998) Managing complexity in large data bases using self-
organizing maps. Acc Manag Inf Technol 8:191–210
Eklund T, Back B, Vanharanta H, Visa A (2002) Assessing the feasibility of self-organizing maps
for data mining financial information. In: Proceedings of the 10th European conference on
information systems, Gdańsk, Poland, pp 528–537
Shih JY, Chang YJ, Chen WH (2008) Using GHSOM to construct legal maps for Taiwan’s
securities and futures markets. Expert Syst Appl 34(2):850–858
Shih JY (2011) Using self-organizing maps for analyzing credit rating and financial ratio data. In:
Proceedings of 2011 International conference on asia pacific business innovation and
technology management, Dalian, China, pp 109–112
Kohonen T, Kaski S, Lagus K, Salojärvi J, Honkela J, Paatero V, Saarela A (2000) Self-
organization of a massive document collection. IEEE Trans Neural Netw 11(3):574–585
Luo Y (2005) How does globalization affect corporate governance and accountability? A
perspective from MNEs. J Int Manag 11(1):19–41
Kim KA, Kitsabunnarat P, Nofsinger JR (2004) Ownership and operating performance in an
emerging market: evidence from Thai IPO firms. J Corp Finan 10:355–381
Cui H, Mak YT (2002) The relationship between managerial ownership and firm performance in
high R&D firms. J Corp Finan 8:313–336
Short H, Keasey K (1999) Managerial ownership and the performance of firms: evidence from the
UK. J Corp Finan 5:79–101
Sengupta P (1998) Corporate disclosure quality and the cost of debt. Acc Rev 73:459–474
Bhojraj S, Sengupta P (2003) Effect of corporate governance on bond ratings and yields: the role
of institutional investors and the outside directors. J Business 76:455–475
Demsetz H, Lehn K (1985) The structure of corporate ownership: causes and consequences.
J Political Econ 93:1155–1177
Morck R, Shleifer A, Vishny RW (1988) Managerial ownership and market valuation: An
empirical analysis. J Finan Econ 20:292–315
Kapopoulos P, Lazaretou S (2007) Corporate ownership structure and firm performance:
Evidence from Greek firms. Corp Govern Int Rev 15(2):144–158
Jensen MC, Meckling WH (1976) Theory of the firm: Managerial behavior, agency costs and
ownership structure. J Finan Econ 4:305–360
Smith MP (1996) Shareholder activism by institutional investors: evidence from CalPERS.
J Finan 51(1):227–252
Cheng S, Firth M (2005) Ownership, corporate governance and top management pay in Hong
Kong. Corp Govern Int Rev 13(2):291–302
Kohonen T (1982) Self-organized formation of topologically correct feature maps. Biol Cybern
43:59–69
Schweighofer E, Rauber A, Dittenbach M (2001) Automatic text representation, classification
and labeling in European law. In: Proceedings of ICAIL 2001, ACM Press, Amsterdam
Dittenbach M, Rauber A, Merkl D (2002) Uncovering hierarchical structure in data using the
growing hierarchical self-organizing map. Neurocomputing 48:199–216
Chapter 20
Applying Constraint Satisfaction Methods
in Four-Bar Linkage Design
Gang Chen
Keywords Change propagation Constraint satisfaction Graph theory Product
design
20.1 Introduction
G. Chen (&)
School of Mechanical Engineering, Tianjin University of Science and Technology,
Tianjin, People’s Republic of China
e-mail: [email protected]
This paper explains how to apply constraint propagation and search methods to find
a solution for a four-bar linkage mechanism. The purpose is to explore automatic
and efficient computerized methods for mechanical or mechatronic product design.
For example, in Fig. 20.4, if the first queen (variable A, which has its value
domain as 1–8) is put at cell (1, A), i.e. assigning variable A the value 1, then the
shaded cells in Fig. 20.4a are excluded from the domains of the remaining
7 variables. Figure 20.4b shows further constraint propagation after variable B
has been assigned the value 3. The problem becomes easier to solve with
domain reduction.
(2) Search: This might be the most fundamental technique for solving CSPs.
Variables are instantiated one by one. With a partial solution, a new variable is
selected and instantiated. If no feasible value can be found after traversing the
new variable's domain, the search process backtracks to select a new value
for the previous variable. For a CSP with finite domains, the search process
carries on until either a solution is found or all the combinations of possible
values have been tried and have failed (Tsang 1996).
Figure 20.5a shows an inconsistent partial instantiation for the first three
variables in a 4-queen problem since no value is feasible for variable D. After
backtracking, i.e. assigning a new value to variable A, a solution is found as
illustrated in Fig. 20.5b.
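The search technique described above can be sketched for the n-queens CSP as follows (one variable per column, row numbers as values; chronological backtracking with a simple consistency check — names are illustrative):

```python
def solve_queens(n):
    """Backtracking search for the n-queens CSP: variables are columns,
    values are row numbers (0-based)."""
    def consistent(assignment, col, row):
        # A new queen must not share a row or diagonal with earlier ones.
        for c, r in enumerate(assignment):
            if r == row or abs(r - row) == abs(c - col):
                return False
        return True

    def backtrack(assignment):
        col = len(assignment)
        if col == n:
            return assignment          # all variables instantiated
        for row in range(n):           # try every value in the domain
            if consistent(assignment, col, row):
                result = backtrack(assignment + [row])
                if result is not None:
                    return result
        return None                    # dead end: backtrack to previous variable

    return backtrack([])

solution = solve_queens(4)  # -> [1, 3, 0, 2]
```

The first consistent partial assignments for columns A and B dead-end exactly as in Fig. 20.5a, and the search recovers by backtracking, mirroring Fig. 20.5b.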
Many improvements have been made to the simple backtracking algorithm,
such as forward checking of constraints that involve the most variables, or
heuristic-guided backtracking instead of chronological backtracking.
Each constraint may have a weight that indicates its importance. Weight setting
represents the priority of a constraint during decision-making. Hard constraints
(i.e. constraints that must be satisfied) have higher weights than soft constraints,
which are negotiable or can be relaxed if necessary. For practical problems in real
life, a complete solution which satisfies all constraints is usually impossible. It is
more realistic to satisfy as many constraints as possible or the most important
constraints (with higher weights).
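One common way to operationalize this, sketched here with illustrative names, is to score each candidate assignment by the total weight of the constraints it satisfies and keep the best:

```python
def weighted_score(assignment, constraints):
    """Total weight of the satisfied constraints.

    constraints: list of (weight, predicate) pairs; a predicate takes the
    assignment dict and returns True when the constraint is satisfied.
    """
    return sum(w for w, ok in constraints if ok(assignment))

# Toy example: x + y must equal 10 (hard), x should be small (soft).
constraints = [
    (100, lambda a: a["x"] + a["y"] == 10),  # hard: large weight
    (1,   lambda a: a["x"] <= 3),            # soft: negotiable, small weight
]
candidates = [{"x": 2, "y": 8}, {"x": 6, "y": 4}, {"x": 1, "y": 5}]
best = max(candidates, key=lambda a: weighted_score(a, constraints))
# best satisfies the hard constraint and the soft one
```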
The application of CSP modeling and solving techniques in the mechanical design
or manufacturing area is rare except for workshop scheduling problems. This might
be due to the following reasons:
(1) Realistic problems in mechanical engineering are usually very complex. For
example, variables may have different types and abstraction levels; constraints
may involve multiple variables and may be represented as complicated
mathematical functions.
(2) Value domains of variables in mechanical engineering are usually unbounded
and continuous. Traditional search techniques must be modified.
Figure 20.6 shows a prototype constraint solving system developed using
Visual Basic. Interval computing is used to propagate constraints and prune off
value domains. Domain discretization based on the precision requirements can be
used to transform a continuous problem into a discrete problem.
(3) Determining the position of point F as follows: drawing a line C1F which is
perpendicular to line C1C2, and drawing another line C2F with the angle
∠C1C2F = 90° − θ, where θ is the limit position angle calculated in step 1;
(4) Making the circumscribed circle of △C1C2F; the remaining joint A must be
located on this circle;
(5) Using the minimum transmission angle γ as the optimization objective to
finally determine the position of joint A as well as the frame (AD) length L4;
(6) As shown in Fig. 20.9, the crank length L1 and the coupler length L2 can be
calculated as follows:
Fig. 20.9 Calculating the crank length and the coupler length (McCarthy and Soh 2010)
The above-mentioned design procedure can be modeled as a CSP which has the
following variables:
(1) Crank length L1;
(2) Coupler length L2;
(3) Rocker length L3;
(4) Frame length L4;
(5) Travel velocity-ratio coefficient K;
(6) Rocker oscillating angle ψ;
(7) Transmission angle γ.
The lower and upper limits for each variable should be specified. The value
domain for each variable should be discretized based on the precision
requirements. For example, the continuous domain (3–8) for the rocker length can
be transformed into the discrete domain (3.0, 3.1, …, 7.9, 8.0).
Some intermediate variables, such as the limit position angle θ and the positions of
C1, C2, F, and A, should also be presented in the constraint graph (see Fig. 20.10).
The Grashof condition, Eqs. (20.1) and (20.2), etc. are modeled as constraints
among variables.
The constructed CSP model is shown in Fig. 20.10. Constraint propagation and
searching techniques can be used to solve the problem and find a feasible solution.
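As an illustrative sketch only (the domains, the generate-and-test strategy, and the restriction to the Grashof condition are my simplifications; the full model also includes Eqs. (20.1), (20.2) and the transmission-angle constraint), the discretized CSP can be enumerated as follows:

```python
from itertools import product

def grashof_crank_rocker(l1, l2, l3, l4):
    """Grashof condition for a crank-rocker: the crank l1 must be the
    shortest link, and s + l <= p + q must hold (s, l = shortest and
    longest links; p, q = the remaining two)."""
    links = [l1, l2, l3, l4]
    s, l = min(links), max(links)
    return s == l1 and s + l <= sum(links) - s - l

def feasible_linkages(d1, d2, d3, d4):
    # Generate-and-test over the discretized domains; a real solver
    # would interleave constraint propagation with backtracking search.
    return [v for v in product(d1, d2, d3, d4)
            if grashof_crank_rocker(*v)]

# Hypothetical discretized domains (step 0.5) for coupler, rocker, frame.
d = [x / 2 for x in range(6, 17)]   # 3.0, 3.5, ..., 8.0
sols = feasible_linkages([1.0, 1.5, 2.0], d, d, d)
```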
20.4 Discussion
A lot of issues need to be addressed before the general CSP methods can be
realistically applied into solving mechanical engineering problem (Lottaz et al.
2000; Ribeiro et al. 2008; Ouertani and Gzara 2008; Ermolaeva et al. 2004; Chen
et al. 2006; Zhao et al. 2002; Li and Xiao 2004; Jie and Sun 2007; Xu et al. 2002;
Li and Xiong 2002). These issues are listed as follows:
(1) Design variables usually have different abstraction levels; how to represent
this characteristic using a hypergraph;
(2) How to represent complicated mathematical functions as constraints;
(3) How to tackle and manage the complexity of a design problem;
(4) Many disciplines are usually involved in product design; how to represent
them in a constraint network;
(5) Product design usually involves several phases of the product life cycle; how
to model the temporal relations among variables in a constraint graph.
20.5 Conclusion
This paper proposes applying CSP methods to solve mechanical design problems
with a crank-rocker mechanism as an example. The purpose is to explore an
effective and efficient computerized method for product design. This work is still
very primitive. A prototype system is currently under development to illustrate the
feasibility of the proposed ideas.
References
Chapter 21
Appraisal of Estate Enterprise's Social Responsibility

Chun-hua Wu
Abstract The estate industry is one of the main pillars of China's economy, and it
has made a great contribution to economic development as well as to improving
urban and rural appearance. In recent years, however, negative reports about
building quality, false advertising and similar issues have appeared frequently.
These have done great damage to estate enterprises' social image, so it is necessary
to set up a reasonable system and a proper method to assess estate enterprises'
social responsibility. A reasonable appraisal will play an active role in changing
estate enterprises' negative image as well as in promoting the realization of a
harmonious society. After analyzing the existing problems of estate enterprises,
this paper sets up a comprehensive index system and applies ANP and fuzzy theory
to the appraisal of estate enterprises' social responsibility. Finally, a case is
presented to illustrate the appraisal process.
21.1 Introduction
Against the background of large-scale construction and development, the estate
industry of China has developed greatly in the past 10 years, becoming one of
the main pillars of the economy and occupying second place in absorbing labor,
behind only the manufacturing industry (Wu 2011). However, during this process
many problems have appeared, especially the absence of social responsibility,
such as false advertising, obtaining abnormal profit by deliberately raising house
prices, and disregarding ordinary customers' urgent demand by putting too many
resources into developing
C. Wu (&)
School of Architectural and Civil Engineering, An Hui University
of Technology, Ma An Shan, China
e-mail: [email protected]
high-end residences, and so on (Zeng 2008; Zhang and Qin 2007). It is time
to do something to improve estate enterprises' social responsibility, not only for
customers' benefit but also for the estate industry's healthy and sustainable
development. Relevant research in China has only just started, and until now few
papers can be found; therefore, in order to enrich this research and help improve
estate enterprises' social responsibility, this paper sets up a comprehensive index
system and presents a model to appraise estate enterprises' social responsibility,
thus providing useful references to enterprises and relevant government
departments.
Because our country's estate industry was founded recently, only about 30 years
ago, the relevant laws and regulations are not perfect; thus in recent years, in order
to obtain windfall profits, some developers have deliberately sacrificed the interests
of stakeholders (Qu 2007). Negative reports occur frequently, concerning house
quality problems, bidding up prices, peasant workers' pay in arrears and so on
(Ceng 2007). This phenomenon not only damages customers' interests but also
destroys the image of the estate industry, bringing unfavorable influence to its
healthy development. Under such a background, estate enterprises should take
positive action to improve their image, so in the process of obtaining profit, social
responsibility should be given much emphasis. Only by solving the existing
problems properly can the estate industry's social image be improved, and only
then can the estate industry's sustainable and harmonious development be realized.
After entering the WTO, how to accelerate participation in international
competition has become a common problem for domestic enterprises, including
estate enterprises. In recent years, after ISO 9000 and ISO 14000, the West has
been advocating SA 8000, the first worldwide standard addressing enterprises'
social responsibility (Chen and Chen 2006). Its main purpose is to endow the
market economy with humanism; under such a background, domestic estate
enterprises must treat social responsibility as an important component of
development, otherwise the chance of taking part in international cooperation and
competition will be lost.
21 Appraisal of Estate Enterprise’s Social Responsibility 201
Since the 1970s, researchers have studied enterprise social responsibility, and by
now several representative viewpoints have formed. The first is the viewpoint from
economics, whose main advocate is the famous scholar Milton Friedman. This
viewpoint holds that an enterprise's only social responsibility is to use its resources
to obtain or increase profit under certain rules of the game. The second is centered
on social economics, of which Howard R. Bowen is the main representative
scholar. It holds that increasing profit is not the whole of an enterprise's
responsibility; besides this, protecting and upgrading social welfare are also
important components (Yan and Wang 2007). The third points out that enterprises
should take on social responsibility in three aspects: realizing the earnings of
finance, society and environment. This is the famous "three-dimensional model"
put forward by Archie B. Carroll. Since the early 1990s, more and more scholars
and entrepreneurs have used "corporate citizenship" to describe enterprises' social
responsibility (Cui et al. 2011). As a good corporate citizen, the corporation should
be good to all interest-related parties, such as customers, employees, the
community, the environment, partners as well as stakeholders.
Based on the viewpoints above, this paper sets up a comprehensive index
system covering four aspects: operation and product, employee development,
social contribution, and relation with nature; see Fig. 21.1.
The first aspect mainly concerns the enterprise's operation and product. An
estate enterprise is a business organization, so bringing acceptable profit to
stakeholders as well as to employees through smooth and healthy operation is
important; otherwise the estate enterprise will lose its most crucial developing
power. Meanwhile, an enterprise's main task is to provide qualified and acceptable
products; this means the house quality is reliable, and the house type as well as the
price is acceptable to ordinary customers.
The second aspect concerns employee development. Employees are the most
valuable treasure of an enterprise, and employees' benefits should get more
attention than before; because of this, SA 8000 puts employees' benefits in the
most important position. Employee development not only means salary, but also
refers to upgrading employees' skill levels, improving the working environment
and so on.
The third aspect concerns the estate enterprise's social contribution. An estate
enterprise is one component of society, so it should make a proper social
contribution. This means estate enterprises should obey the relevant laws, pay tax
on time, create employment opportunities, and be active in social public service.
The fourth aspect mainly concerns estate enterprises' environmental protection
consciousness. The estate enterprises of China have not got rid of the extensive
growth pattern, so high resource and energy consumption, a low level of
productivity and a low contribution of scientific development are still their main
characteristics. According to estimates, compared with advanced countries, the
estate industry of China consumes 3–5 times the resources while providing merely
1/5–1/6 of the productivity of these countries. At the same time, in the process of
development, some estate enterprises ignore the rules of nature, so lakes are filled,
farmland is exploited and so on, leading to ecological environment deterioration
(Li 2012).
Until now, research on estate enterprises' social responsibility has mainly stayed at
the qualitative analysis level, such as the sense of improving estate enterprises'
social responsibility, index studies, and the relation between improving social
responsibility and healthy development. Although existing research has played an
active role in improving estate enterprises' social image to some degree, it is
necessary to set up a more comprehensive index system and give a quantitative
model to measure the level of social responsibility more objectively, which can
provide more information to enterprises and the government.
The appraisal method put forward by this paper is as follows (Fig. 21.2).
Phase 1: index value fixation. Measuring the value of every index scientifically is
the first key step of the appraisal. For a quantitative index, the value is obtained
directly from financial statements or statistical data, based on the appraised
subject's performance. For a qualitative index, it is difficult to obtain an objective
value, so in order to get a more accurate measurement, this paper applies the
unascertained rational number to this task. The unascertained rational number was
first put forward by Guang-Yuan Wang, academician of the Chinese Academy of
Engineering. This method can identify all the possibilities as well as the experts'
reliability, thus avoiding information distortion or omission. It has been applied
widely in practice, such as production decision, data processing, interval analysis
and so on.
Phase 2: index weight fixation. The indexes are not of equal importance, so we
need to fix every index's weight scientifically. In this paper, this task is
accomplished by the Analytic Network Process (ANP), first put forward by the
American scholar Thomas L. Saaty in 1996. It developed from the AHP, so the two
share nearly the same decision principle, but their decision structures are different:
ANP applies a network structure while AHP uses a hierarchical structure (Sun
et al. 2007). An ANP model can take into account the feedback between indexes at
different levels and the inner loop relations among indexes at the same level, while
traditional AHP only emphasizes the domination of the upper layer over the lower
layer and supposes the indexes at the same level are independent; so compared
with ANP, AHP is less capable of tackling complex problems (Zhang et al. 2012a;
Guo and Bai 2011). In a word, ANP makes the analysis process closer to the
practical situation, thus making the analysis result more efficient and more reliable.
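The core ANP computation can be sketched as follows, with a hypothetical 3-element column-stochastic supermatrix (in practice the Super Decisions software performs this): raising the supermatrix to a high power yields the limit priorities, and the feedback between elements is what distinguishes it from AHP's one-way hierarchy.

```python
import numpy as np

def limit_priorities(supermatrix, power=64):
    """Raise a column-stochastic ANP supermatrix to a high power; for a
    primitive matrix the columns converge to the limit priorities.
    (Cyclic supermatrices would need Cesaro averaging instead.)"""
    W = np.linalg.matrix_power(supermatrix, power)
    return W[:, 0]  # any column of the limit matrix

# Hypothetical supermatrix: each column sums to 1, and nonzero entries
# off the upper triangle encode feedback that AHP cannot express.
W = np.array([[0.2, 0.5, 0.3],
              [0.3, 0.3, 0.4],
              [0.5, 0.2, 0.3]])
w = limit_priorities(W)  # limit priority weights, summing to 1
```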
Phase 3: social responsibility comprehensive appraisal. On the basis of the
previous two phases, fuzzy theory is used in this phase to appraise estate
enterprises' social responsibility. Because the appraisal process is full of
unascertained information, the fuzzy model is adopted, which makes the appraisal
result more objective (Ma et al. 2007). However, the traditional fuzzy model has
some deficiencies that need to be overcome, such as the max–min algorithm on
fuzzy sets and the maximum membership degree recognition criterion for ranked
evaluation grades (Liu et al. 2009; Zhang et al. 2012b). So in the application
process, the fuzzy model is improved in these two aspects.

Fig. 21.2 The appraisal framework: Phase 1 fixes index values (unascertained
rational number for qualitative indexes, objective values for quantitative indexes),
Phase 2 fixes index weights (ANP), and Phase 3 performs the comprehensive
appraisal (improved fuzzy model)
In order to demonstrate the appraisal process of the proposed model as well as to
verify its reasonability, this paper applies it to an estate enterprise in An Hui
Province.
For the quantitative indexes, the values are obtained from this enterprise's
financial statements or the statistical data of the relevant government department.
For the qualitative indexes, we invited 4 experts to grade from 0 to 100 based on
the logic of the unascertained rational number. Take index I13 as an example to
show the grading process. Specifically, the four experts' grade intervals for this
index are (75–80), (70–80), (80–85) and (70–75) respectively, and because the
experts' experience and knowledge structures differ, different experts are allocated
different credibilities, which are 0.3, 0.2, 0.3 and 0.4 respectively. Then, based on
unascertained rational number theory, the credibility function is fixed:
f_{13}(x) = \begin{cases} 0.3, & x \in (70, 75) \\ 0.4, & x \in (75, 80) \\ 0.15, & x \in (80, 85) \\ 0, & \text{otherwise} \end{cases}
Through computation, the grade of I13, 77.5, is obtained. In the same way, the
other qualitative indexes' grades can be obtained.
Finally, we get all the indexes' values, as shown in Table 21.1.
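The credibility function above can be encoded directly; the half-open interval boundaries in this sketch are my assumption, since the paper does not state how boundary points are treated:

```python
def credibility(x):
    """Piecewise credibility function f13 for index I13, built from the
    four experts' grade intervals and their credibilities."""
    if 70 < x <= 75:
        return 0.3
    if 75 < x <= 80:
        return 0.4
    if 80 < x <= 85:
        return 0.15
    return 0.0  # grades outside every expert interval get no credibility

values = [credibility(v) for v in (72, 78, 83, 90)]  # -> [0.3, 0.4, 0.15, 0.0]
```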
In order to overcome the shortcomings of AHP, this paper turns to ANP, which
makes the weights more reasonable and scientific. First, the network hierarchy is
set up as follows (Fig. 21.3).
Then this paper uses the software Super Decisions, which is designed
specifically for ANP, to carry out the computation. The weights obtained from the
software are shown in Table 21.2.
Firstly, based on the experts’ advice and industry standard, this paper sets up
indexes’ rank standard, including quantitative and qualitative indexes, see
Table 21.3. Then the membership function of every index can be set up.
Secondly, based on the logic of fuzzy theory (Qu 2007), substituting index’s
value into corresponding membership function, the membership matrix is got. But
in computation process, some change is done to fuzzy theory’s max–min algorithm
following the way of reference (Ceng 2007).
Finally, the comprehensive appraisal vector V is obtained:

V = ( 0.29 \;\; 0.22 \;\; 0.26 \;\; 0.23 ) \begin{pmatrix} 0.578 & 0.422 & 0 & 0 & 0 \\ 0.478 & 0.465 & 0.057 & 0 & 0 \\ 0 & 0.14 & 0.545 & 0.315 & 0 \\ 0 & 0 & 0.408 & 0.592 & 0 \end{pmatrix} = ( 0.273 \;\; 0.261 \;\; 0.248 \;\; 0.218 \;\; 0 )
Because the index ranks are ordered, the maximum membership degree
recognition rule is not perfect, so this paper turns to the confidence recognition
criterion. Supposing a threshold value λ = 0.7, we find that this enterprise's social
responsibility level belongs to the rank "general". From the analysis process, we
can also see that the enterprise's poor performance on social contribution and
relation with nature is the main reason lowering its social responsibility level.
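The final aggregation and ranking can be reproduced as follows; the weighted-average M(·,+) operator is assumed here as the paper's improvement over max–min composition, since it reproduces the reported vector V:

```python
import numpy as np

w = np.array([0.29, 0.22, 0.26, 0.23])            # aspect weights from ANP
R = np.array([[0.578, 0.422, 0.000, 0.000, 0.0],  # membership matrix
              [0.478, 0.465, 0.057, 0.000, 0.0],
              [0.000, 0.140, 0.545, 0.315, 0.0],
              [0.000, 0.000, 0.408, 0.592, 0.0]])
V = w @ R  # weighted-average aggregation instead of max-min

def confidence_rank(V, lam=0.7):
    """Confidence recognition criterion: return the first grade (0 = best)
    whose cumulative membership reaches the threshold lambda."""
    total = 0.0
    for k, v in enumerate(V):
        total += v
        if total >= lam:
            return k
    return len(V) - 1

rank = confidence_rank(V)  # third grade, i.e. "general"
```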
21.6 Conclusion
Addressing the problem of estate enterprises' social responsibility, this paper sets
up a comprehensive index system and then applies ANP and fuzzy theory to the
responsibility appraisal process. A specific case is given, which verifies that the
model is feasible and reasonable.
The study in this paper provides a tool to assess estate enterprises' social
responsibility quantitatively, thus avoiding the shortcomings of traditional
qualitative methods. In a word, this study will benefit the estate industry's healthy
development as well as the realization of a harmonious society.
References
Sun H-C et al (2007) The appraisal of emergence bridge project design using ANP. Eng Sys
Theor Pract 3:63–70
Ceng R (2007) Study of estate enterprise’ social responsibility under context of harmonious
society. Econ Rev 8:116–118
21 Appraisal of Estate Enterprise’s Social Responsibility 207
Chen C, Chen C (2006) Study of construction estate’s social responsibility (in Chinese). Eng
Constr 20(4):298–399
Cui M, Hua J, Xiao C (2011) Research on evaluation system of construction corporate social
responsibility based on data envelopment analysis method. Sci Technol Eng
11(34):8658–8664
Guo W, Bai D (2011) Selection of manufacturing suppliers with ANP/TOPSIS. J Wuhan Univ
Technol (Information & Management Engineering) 33(1):147–151
Li Y (2012) Core estate cultivation of estate under social responsibility orientation. Knowl Econ
6:109–110
Liu K et al (2009) Fuzzy comprehensive evaluation for conceptual design of mechanic product
based on new membership conversion (in Chinese). J Mech Eng 45(12):162–166
Ma G, Mi W, Liu X (2007) Multi-level fuzzy evaluation method for civil aviation system safety
(in Chinese). J Southwest Jiao Tong Univ 42(1):104–109
Qu Y (2007) Thinking to social responsibility of estate enterprise (in Chinese). Constr Econ
7:94–96
Wu G (2011) The analysis of estate enterprises’ social duty and its realization using game theory.
Constr Econ 8:23–26
Yan W-Z, Wang D (2007) Appraisal of construction estate’s social responsibility based on fuzzy
and AHP. Constr Econ 12:46–48
Zeng R-P (2008) Estate enterprise’s social responsibility study under background of harmony
society. Econ Rev 4:116–118
Zhang L, Qin P (2007) Construction of corporate social responsibility management system in
construction industry based on SA8000 (in Chinese). J Tian Jin Univ (Social Sciences)
9(5):416–419
Zhang S, Hou X, Yang Q (2012a) Performance measurement of contractor based on BSC and
ANP. Stat Decis 1:175–178
Zhang G, Liu J, Wang G (2012b) Fuzzy reliability allocation of CNC machine tools based on
task. Comput Integr Manuf Syst 18(4):765–771
Chapter 22
Bottleneck Detection Method Based
on Production Line Information
for Semiconductor Manufacturing System
22.1 Introduction
X. Yu (&) F. Qiao Y. Ma
The School of Electronics and Information Engineering,
Tongji University, Shanghai, China
e-mail: [email protected]
At present, much research effort has been devoted to bottleneck detection, and it
can be divided into two categories. One is to detect the bottleneck before the
production system starts. Literature (Zhang and Wu 2012) first establishes an
optimization model by relaxing some traditional constraints of a standard Job-shop,
then calculates the bottleneck's characteristic value and selects the best one by a
simulated annealing algorithm. Literature (Zhai et al. 2010) uses an orthogonal
table and different dispatching rules to construct a test program, with the
production system's job target as the measurement index to identify the bottleneck.
The other method is to collect, imitate and analyze by simulation the data from a
production system that has been online for a period of time. As in (Roser et al.
2002, 2003), system bottlenecks are divided into independent bottlenecks and
shifting bottlenecks; they are identified by calculating the maximum active time of
the machines, which is similar to the common equipment utilization method.
Literature (Li et al. 2007, 2009; Wang et al. 2008) is based on production line data,
making full use of blockage and starvation information to detect bottlenecks.
Literature (Kasemset 2009; Kasemset and Kachitvichyanukul 2009, 2010)
classifies the candidate bottleneck equipment according to static and real-time data
from simulation and verifies the correctness of the bottleneck equipment selection
via confidence interval levels.
The semiconductor wafer fabrication system is recognized as one of the most complex manufacturing systems, normally containing as many as three or four hundred processing steps. In addition, wafer manufacturing is re-entrant: the same product passes through some processing centers more than once, which distinguishes it from traditional job-shop and flow-shop systems (Wu et al. 2006). Most of the methods mentioned above apply to flow shops; some, such as Zhai et al. (2010), can also be used for job shops, but although they are faster and more convenient they ignore stochastic disturbances such as equipment failures or seasonal variation in demand. Building on Li et al. (2007), this paper studies the semiconductor wafer fabrication system by changing its constraints and proposing the concept of a relative blocking rate, which records the detailed variation of the number of jobs in each buffer throughout the whole operation period of the production system. The method makes full use of both offline and online information from the production line, and the identification process does not need to consider machine sets and types, product types, processing routes, or random fluctuations. It is therefore a convenient and accurate identification method.
In this approach, the queue length or waiting time of each piece of equipment is measured, and the one with the longest queue length or waiting time is considered the manufacturing bottleneck, as shown in formula (22.1):

Bottleneck = Machine_j = \max_{1 \le j \le m} \max_{0 < t \le T} (W_{tj})    (22.1)

where T is the simulation period, m is the number of processing centers in the system or model, and W_{tj} is the number of jobs in the buffer of equipment j at time t during the simulation period.
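As a minimal illustration (our own sketch, not from the paper), the queue-length rule of formula (22.1) can be written in Python, assuming the buffer counts W[t][j] have already been sampled:

```python
def queue_length_bottleneck(W):
    """Formula (22.1): the machine whose buffer ever holds the most jobs.

    W is a list of snapshots; W[t][j] is the number of jobs waiting in
    front of machine j at sampling time t.
    """
    m = len(W[0])                                       # processing centers
    peak = [max(W[t][j] for t in range(len(W))) for j in range(m)]
    return max(range(m), key=lambda j: peak[j])         # bottleneck index

# Three machines observed over four sampling instants (toy data):
W = [[2, 5, 1],
     [3, 7, 0],
     [1, 6, 2],
     [2, 4, 1]]
print(queue_length_bottleneck(W))   # machine 1, whose queue peaks at 7 jobs
```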
The equipment with the highest utilization rate is the system bottleneck, as shown in formula (22.2) (Zhou and Rose 2009):

Bottleneck = Machine_j = \max_{1 \le j \le m} \frac{WT_j(T) + OT_j(T)}{T}    (22.2)

where T is the time period considered, m is the number of processing centers in the system or model, and WT_j(T) and OT_j(T) are the processing time and off-line time of equipment j during the period T.
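The utilization rule of formula (22.2) can likewise be sketched (the time values below are hypothetical, chosen only for illustration):

```python
def utilization_bottleneck(WT, OT, T):
    """Formula (22.2): machine with the highest (WT_j(T) + OT_j(T)) / T."""
    rates = [(wt + ot) / T for wt, ot in zip(WT, OT)]
    j = max(range(len(rates)), key=lambda k: rates[k])
    return j, rates[j]

# Hypothetical processing and off-line times over a 100 h period:
WT = [60.0, 85.0, 40.0]
OT = [5.0, 10.0, 2.0]
j, u = utilization_bottleneck(WT, OT, 100.0)
print(j, u)   # machine 1, utilization 0.95
```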
On the basis of the orders, the load of every processing center is calculated according to each job's process plan, and the machine with the maximal relative load is the system bottleneck (Ding et al. 2008), as shown in formulas (22.3) and (22.4):

L_B = \max_h (L_h)    (22.3)

L_h = \sum_{i=1}^{x} q_i \sum_{j=1}^{y} \frac{h \, t_{ij}}{l}, \quad h = 1, 2, \ldots, m    (22.4)

where L_B is the load of the system bottleneck (work center B), i is the job type, x is the number of job types, y is the number of steps of a job, h is the equipment coefficient (h = 1 if the job is processed in the center, otherwise h = 0), and t_{ij} is the processing time of step j of job i. This method identifies the system bottleneck through a simple calculation using the relevant technological parameters.
This paper chooses the HP-24 semiconductor production line model as the object of study and eM-Plant as the simulation platform. The HP-24 model comes from a silicon wafer production technology center laboratory, and most of its parameters are collected from real devices. There are 24 equipment groups in the model; most contain a single machine, except lithography (one group contains two machines, the other three). As a simplified model, HP-24 only
Table 22.1 Parameters for machines in the HP-24 model (Murphy and Dedera 1996) and simulation data
ID Name Count Reentrant times Starvation rate (%) Non-blocking rate (%)
1 CLEAN 1 19 14.73 97.72
2 TMGOX 1 5 23.97 99.31
3 TMNOX 1 5 20.89 99.50
4 TMFOX 1 3 58.23 99.90
5 TU11 1 1 83.30 100.00
6 TU43 1 2 56.30 99.98
7 TU72 1 1 81.39 100.00
8 TU73 1 3 59.87 99.93
9 TU74 1 2 71.97 99.98
10 PLMSL 1 3 65.82 99.96
11 PLMSO 1 1 78.44 100.00
12 SPUT 1 2 65.85 99.97
13 PHPPS 2 13 14.22 99.05
14 PHGCA 3 12 5.00 65.33
15 PHHB 1 15 63.23 99.94
16 PHBI 1 11 6.00 64.45
17 PHFI 1 10 53.88 99.93
18 PHJPS 1 4 57.36 99.91
19 PLM6 1 2 22.84 99.80
20 PLM7 1 2 67.12 99.96
21 PLM8 1 4 13.29 99.39
22 PHWET 1 21 37.82 99.44
23 PHPLO 1 23 26.00 99.36
24 IMPP 1 8 9.05 100.00
processes one type of product, which has 172 processing steps (Ding et al. 2008). During the simulation, the buffer information is collected every half hour and jobs are fed in according to a uniform distribution. The simulation period is set to 1 year, and the collected data are shown in Table 22.1. The starvation rate and non-blocking rate of each piece of equipment are calculated by the formulas introduced in the third section.
As shown in Table 22.1 and Fig. 22.1, the starvation rate of machine 14 is 0.05 and its non-blocking rate is 0.64, so their sum, 0.69, is the smallest of all machines. Machine 14 can therefore be considered the bottleneck machine according to the method based on production line information. As machine 14 is a parallel processing group, it can be regarded as the bottleneck processing center.
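This selection rule can be sketched directly from a few rows of Table 22.1 (rates converted from percent; this toy subset is our own illustration): the machine whose starvation rate plus non-blocking rate is smallest is the bottleneck.

```python
# Starvation and non-blocking rates for a few machines from Table 22.1.
rates = {
    "CLEAN": (0.1473, 0.9772),
    "PHGCA": (0.0500, 0.6533),
    "PHBI":  (0.0600, 0.6445),
    "PHWET": (0.3782, 0.9944),
}

def line_info_bottleneck(rates):
    """Machine whose starvation rate + non-blocking rate is smallest."""
    return min(rates, key=lambda name: sum(rates[name]))

print(line_info_bottleneck(rates))   # PHGCA (machine 14), sum ~0.70
```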
The bottleneck machine is the weakest link of the system, so the system output depends on the processing speed of the bottleneck machine. The following compares different bottleneck detection methods.
(1) To verify the effectiveness of machine group 14, we feed jobs into the production line based on the processing speed of machine 14. Through
period. The shorter the processing period, the higher the productivity. Productivity in turn affects the cost of the final product, the processing period, customer satisfaction, and so on.
(3) WIP (work in process) is the number of products on the line each day, and the scheduling goal is to minimize this index.
(4) Utilization rate of the bottleneck machine. The formula is U_B = T_work / T_open, where T_work is the time the machine spends processing and T_open is the uptime of the machine. An overview comparing the performance of each method is shown in Table 22.2.
From Table 22.2 we can see that when the production line information is used to detect bottlenecks, the average processing period is reduced by 6.0 % compared to the relative load method, by 23.7 % compared to equipment utilization, and by 27.9 % compared to queue length. The processing period variance is reduced by 1.9, 53.1, and 66.3 %, respectively; productivity is increased by 1.9, 3.3, and 5.1 %; the average daily WIP is reduced by 3.5, 26.2, and 32.0 %; and the bottleneck machine utilization is increased by 2.2, 3.1, and 9.3 %. The relevant results are displayed in Table 22.3.
Table 22.3 Performance analysis of the other three bottleneck detection methods compared to the proposed method
Method | Average processing period (%) | Processing period variance (%) | Productivity (%) | WIP (%) | Bottleneck machine utilization (%)
Relative load | -6.0 | -1.9 | +1.9 | -3.5 | +2.2
Equipment utilization | -23.7 | -53.1 | +3.3 | -26.2 | +3.1
Queue length | -27.9 | -66.3 | +5.1 | -32.0 | +9.3
The data analysis shows that the proposed method improves on the other methods to varying degrees, which proves its practicality and effectiveness.
22.5 Conclusion
Detecting the bottleneck accurately is the first step in implementing the DBR (drum-buffer-rope) approach. Common bottleneck detection methods have limitations in the complex semiconductor manufacturing system. This paper utilizes production line information, on the basis of data mining, to detect the bottleneck and obtains good results. Historical data carries the process information of a manufacturing system and can be regarded as a knowledge base that should be fully exploited for bottleneck detection. Future work: (1) There are many uncertain factors in a real production line, so a single-bottleneck feeding strategy may not always achieve good results; it should be combined with a bottleneck scheduling strategy. (2) How to further mine the experience, knowledge, and rules underlying the historical and online data to optimize the production line needs further study.
Acknowledgments This research was supported by Chinese National Natural Science Foun-
dation (61034004), Science and Technology Commission of Shanghai (10DZ1120100,
11ZR1440400), Program for New Century Excellent Talents in University (NCET-07-0622) and
Shanghai Leading Academic Discipline Project (B004).
References
Cao Z, Peng Y, Wu Q (2010) DBR-based scheduling for re-entrant manufacturing system (in
Chinese). In: 2010 Proceedings of contemporary integrated manufacturing system, vol 2010,
pp 566–572
Ding X, Qiao F, Li L (2008) Research of DBR scheduling method for semiconductor
manufacturing (in Chinese). Integr Mech Electr 2008:29–31
Kasemset C (2009) TOC-based procedure for job-shop scheduling. Doctoral dissertation,
Industrial Engineering and Management, School of Engineering and Technology, Asian
Institute of Technology, Bangkok, Thailand
Kasemset C, Kachitvichyanukul V (2009) Simulation tool for TOC implementation. In:
Proceedings of ASIMMOD 2009, ASIMMOD 2009, Bangkok, Thailand, pp 86–97
Kasemset C, Kachitvichyanukul V (2010) Effect of confidence interval on bottleneck
identification via simulation. In: Proceedings of the 2010 IEEE IEEM. IEEE, DC, USA,
pp 1592–1595
Li L, Chang Q, Ni J (2007) Bottleneck detection of manufacturing systems using data driven
method. In: Proceedings of IEEE international conference on symposium on assembly and
manufacturing. IEEE, Washington, DC, USA, pp 76–81
Li L, Chang Q, Ni J (2009) Data-driven bottleneck detection of manufacturing systems. Int J Prod
Res 47(18):5019–5036
Murphy RE Jr, Dedera CR (1996) Holistic TOC for maximum profitability. In: 1996 IEEE/SEMI
advanced semiconductor manufacturing conference
Rahman S (1998) Theory of constraints: a review of the philosophy and its applications. Int J
Oper Prod Manag 18(4):336–355
Roser C, Nakano M, Tanaka M (2002) Shifting bottleneck detection. In: Proceedings of the 34th
conference on winter simulation. IEEE, Washington, DC, USA, pp 1079–1086
Roser C, Nakano M, Tanaka M (2003) Comparison of bottleneck detection methods for AGV
systems. In: Proceedings of the 2003 winter simulation conference. IEEE, Washington, DC,
USA, pp 1192–1198
Sengupta S, Das K, VanTil RP (2008) A new method for bottleneck detection. In: Proceedings of
the 2008 winter simulation conference. IEEE, pp 1741–1745
Wang Z, Chen J, Wu Q (2008) A new method of dynamic bottleneck detection for semiconductor
manufacturing line. In: Proceedings of the 17th world congress the international federation of
automatic control, vol 17(1). Elsevier, Seoul, Korea, pp 14840–14845
Wu Q, Qiao F, Li L (2006) Semiconductor manufacturing system scheduling. Publishing House
of Electronics Industry, Beijing
Zhai Y, Sun S, Wang J, Wang M (2010) Bottleneck detection method based on orthogonal
experiment for job shop (in Chinese). Comput Integr Manuf Syst 16(9):1945–1952
Zhang R, Wu C (2012) Bottleneck machine identification method based on constraint
transformation for job shop scheduling with genetic algorithm. Inf Sci 188:236–252
Zhou Z, Rose O (2009) A bottleneck detection and dynamic dispatching strategy for
semiconductor wafer fabrication facilities. In: Proceedings of the 2009 winter simulation
conference. IEEE, Austin, TX, USA,
pp 1646–1656
Chapter 23
Clinical Decision Support Model of Heart
Disease Diagnosis Based on Bayesian
Networks and Case-Based Reasoning
Abstract To improve the accuracy of clinical decision support systems and reduce their misdiagnosis rates, a hybrid model combining Bayesian networks (BN) and case-based reasoning (CBR) is proposed. The BN is constructed from the feature attributes, and the causal relationships among them are learned. The case matching method measures the similarities of the feature attributes together with the knowledge of their dependency relationships. The accuracy of the diagnosis system is thereby improved through a dynamic retrieval method.
23.1 Introduction
M. Xu J. Shen
TEDA College, Nankai University, Tianjin, China
M. Xu J. Shen (&)
College of Management and Economics, Tianjin University, Tianjin, China
e-mail: [email protected]
[Figure: framework of the BN-CBR model. Bayesian networks with information gain select an optimal feature set from the case base; CBR applies a similarity measure to produce a suggestion for clinicians, and confirmed new cases are added to the case base.]
where P(C_i) is the prior probability of samples with class C_i, P(C_i) = S_i / S; m denotes the number of classes, S_i the number of samples labeled C_i, and S the total number of samples.

Definition 23.5 Assume that the feature f has v values, {f_1, f_2, ..., f_v}. The data sample set is divided by f into v subsets {S_1, S_2, ..., S_v}, where S_j contains the cases with feature value f_j. The term (S_{1j} + S_{2j} + ... + S_{mj}) / S is the weight of the jth subset, and S_{ij} is the number of samples with class C_i in S_j. Then

I(S_{1j} + S_{2j} + ... + S_{mj}) = -\sum_{i=1}^{m} P_{ij} \log_2 P_{ij}    (23.4)

where P_{ij} is the probability of samples with class C_i in S_j, P_{ij} = S_{ij} / S_j.
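As an illustrative sketch (our own, not the authors' code), the entropy term of formula (23.4) and the resulting weighted split entropy can be computed as follows; the class counts are invented toy data:

```python
import math

def subset_entropy(counts):
    """I(S_1j, ..., S_mj) = -sum_i P_ij * log2(P_ij), formula (23.4)."""
    total = sum(counts)
    return -sum((c / total) * math.log2(c / total) for c in counts if c > 0)

def split_entropy(subsets, n_samples):
    """Entropy of a feature: each subset weighted by its share of the sample."""
    return sum(sum(s) / n_samples * subset_entropy(s) for s in subsets)

# Two classes; a feature splits 20 samples into two subsets of 10:
subsets = [[9, 1], [5, 5]]                 # [S_1j, S_2j] per subset
print(round(subset_entropy([9, 1]), 3))    # 0.469
print(round(split_entropy(subsets, 20), 3))   # 0.734
```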
First, the conditional probability p(x_i, r_i | D_l, \theta^{(t)}) is calculated for all the parameters in the set D and the attribute x_i. Given the set D, its log-likelihood is

l(\theta | D) = \ln p(D | \theta) = \sum_l \sum_{ijk} h(x_j^k, r_i^j) \ln \theta_{ijk}

\theta_{ijk} = \frac{h(x_j^k, r_i^j)}{\sum_k h(x_j^k, r_i^j)}    (23.6)

Assuming the initial value \theta^{(0)}, the expectation of the current likelihood \theta^{(t)} is

l(\theta | \theta^{(t)}) = \sum_l \sum \ln p(D_l, X_l | \theta) \, p(X_l | D_l, \theta^{(t)})

For any \theta, if L(\theta | \theta^{(t+1)}) \ge L(\theta | \theta^{(t)}), then

L(\theta | \theta^{(t)}) = \sum_{jlk} f(a_j^k, p(a_j^l)) \ln \theta_{jlk}
As mentioned above, case retrieval and matching is the core phase of CBR, in which the similarity between the query and the historical cases is measured. In the literature, CBR retrieval measures the differences among the features of cases, and the Euclidean distance is a popular similarity measure function (Jain and Marling 2001):

Similarity(x, y) = \sqrt{\sum_{i=1}^{m} (x_i - y_i)^2}
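A minimal retrieval sketch (our own illustration; the case names and feature vectors are hypothetical) that returns the stored case closest to a query under the Euclidean distance:

```python
import math

def euclidean(x, y):
    """Distance of the formula above: sqrt(sum_i (x_i - y_i)^2)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def retrieve(query, casebase):
    """Return the historical case most similar to the query."""
    return min(casebase, key=lambda name: euclidean(query, casebase[name]))

# Toy case base: normalized feature vectors of past diagnoses.
casebase = {
    "case_A": [0.9, 0.1, 0.3],
    "case_B": [0.2, 0.8, 0.5],
    "case_C": [0.4, 0.4, 0.4],
}
print(retrieve([0.85, 0.15, 0.25], casebase))   # case_A is nearest
```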
23.3 Conclusion
This paper proposed a CDSS based on the BN-CBR hybrid model. The BN was established from the feature attributes of heart disease, which improved the accuracy of diagnostic reasoning by solving the problem of missing data in the database. The case matching method not only measured the similarities of the feature attributes but also integrated the knowledge of their dependency relationships.
Acknowledgments This work was supported by the National Natural Science Foundation of
China (71171143), by Tianjin Research Program of Application Foundation and Advanced
Technology (10JCYBJC07300), and Science and Technology Program of FOXCONN Group
(120024001156).
References
Gu YS, Hua Q, Zhan Y et al (2003) Case-base maintenance based on representative selection for
1-NN algorithm. In: Presented at the international conference on machine learning and
cybernetics, vol 4, pp 2421–2425
Jain AF, Marling CR (2001) Case-based tool for treatment of behavioral problems.
In: Proceedings of the 33rd southeastern symposium on system theory, pp 337–341
Kong G, Xu DL, Yang JB (2008) Clinical decision support systems: a review on knowledge
representation and inference under uncertainties. Int J Comput Intell Syst 1(2):159–167
Ling HF, Guo JY, Yan J (2006) Similarity algorithm of CBR technology applied to fault
diagnosing field. J PLA Univ Sci Technol (Nat Sci Edn) 7(5):480–484 (in Chinese)
Ma XX, Huang XY, Huang M, Ni L (2002) Analysis of information flow based on entropy of
fault diagnosis. J Chongqing Univ (Nat Sci Edn), 25(5):25–28 (in Chinese)
Shiu SCK, Yeung DS, Sun CH et al (2001) Transferring case knowledge to adaptation
knowledge: an approach for case-base maintenance. Comput Intell 17(2):295–314
Zhang Z, Yang Q (1999) Dynamic refinement of feature weights using quantitative introspective
learning. In: Proceedings of the international joint conference on artificial intelligence. Morgan
Kaufmann, San Francisco, pp 228–233
Zhang Z, Yang Q (2001) Feature weight maintenance in case bases using introspective learning.
J Intell Inf Syst 16(2):95–116
Chapter 24
Construction of Project Management
Classification Frame and Classification
Controlling Model Based on Project
Portfolio
24.1 Introduction
A project is a one-time activity with a definite target, completed under certain constraints on human, financial, and other resources. A project tends to include more than one task, and the realization of each task consumes a certain amount of human and financial resources (Boddy and Macbeth 2000).
Project management is the system of theories and methods by which the project organizer operates: through planning, organizing, coordinating, and controlling the project, it aims to realize the project goal.
Project portfolio management is the project selection process: given an enterprise's limited resources, it decides which projects to choose and how to allocate the appropriate resources so as to better realize the enterprise's strategic targets.
Level-to-level project management means that, after a good project portfolio has been chosen, projects are divided into different ranks according to their scope, content, and complexity and the knowledge and ability of the project management personnel, and the key projects are then administered at the corresponding level.
In a project management company, the daily work is the management and implementation of projects. As the enterprise develops, it will face many projects, and the manager must consider two essential questions: (1) how to guarantee that project goals are coherent with the enterprise's strategic targets; (2) given numerous projects and limited resources, how to manage multiple projects on time, up to quality, and within budget. The first question is a project portfolio management question, and it solves the project priority problem. The second arises once priorities are settled: the enterprise should not apply the common single-project methods but should manage all projects as a whole, because the enterprise itself is a strategic system. In managing all projects as a whole, each project's characteristics, scope, and complexity differ, which makes the scope and content of project management differ as well, and different abilities and knowledge are required of the project personnel; it is therefore important to apply level-to-level administration to the projects. Classification management determines the project implementation management method and the control pattern. First, the project execution grade affects the authorization management of the project. Next, it determines the way project resources are allocated. Finally, it determines the project controlling mode (Cheng et al. 2003).
According to the scope and complexity involved, the scope and content of the project management, and the personnel's knowledge and ability, an enterprise's projects may be divided into enterprise-level projects, department-level projects, and project-level projects. According to the rank of each project, the enterprise determines its starting time, scope, completion time, and so on. The enterprise project grading standard is shown in Table 24.1.
According to its complexity and innovativeness, an enterprise-level project is strategic and profit-oriented in character, and it should be managed individually by the enterprise project office. This centralizes resources and time and avoids wasting effort on excessive administrative documents and conferences.
A department-level project, according to its degree of importance and its strategic significance to the enterprise, may enter the project office and be managed as part of a project group; an individual project may, depending on the situation, be promoted to the company level and managed individually by the project office (Dye and Pennypacker 1999).
A project-level project may be divided into project groups according to its research and development, technological innovation, and management content, with each group carrying out its own management.
Project execution grading is carried out after priorities have been specified, and different projects can be carried out through different programs and sequences. This arrangement has also solved the problem of insufficient strategy execution ability in the enterprise and is advantageous to the enterprise's strategic implementation.
The author believes that a level-to-level administration framework and control pattern are necessary to solve the above questions. The enterprise should therefore adopt different management and control patterns for projects of different ranks, reasonably, according to each project's condition, scope, and complexity, the scope and content of its project management, and the essential project management factors.
References
Archer NP, Ghasemzadeh F (1999) An integrated framework for project portfolio selection. Int J
Proj Manag 17(4):207–216
Boddy D, Macbeth D (2000) Prescriptions for managing change: a survey of their effects in projects to implement collaborative working between organizations. Int J Proj Manag 18:297–306
Chapman CB (1997) Project risk analysis and management-PRAM the generic process. Int J Proj
Manag 15(5):273–281
Cheng M-Y, Su C-W, You H-Y (2003) Optimal project organizational structure for construction
management. J Constr Eng Manag 129(1):70–79
Dye LD, Pennypacker JS (1999) Project portfolio management: selecting and prioritizing projects
for competitive advantage. Center for Business Practices, West Chester
Froese TM (1992) Integrated computer-aided project management through standard object-oriented models. Doctoral dissertation, Department of Civil Engineering, Stanford University, Stanford, CA
Jennings NR, Wooldridge M (1998) Agent technology: foundations, application and markets.
Springer, Berlin
Lampel J (2001) The core competencies of effective project execution: the challenge of diversity.
Int J Proj Manag 19:471–483
Luiten GT, Tolman FP, Fischer MA (1998) Project-modelling in AEC to integrate design and
construction. Comput Ind 35:13–29
Patterson FD, Neaile K (2002) A risk register database system to aid the management of project risk. Int J Proj Manag 20:365–374
Platje A, Seidel H, Wadman S (1994) Project and portfolio planning cycle-project based
management for multi-project challenge. Int J Proj Manag 12(2):100–106
Van Der Merwe AP (2002) Project management and business development: integrating strategy,
structure, processes and projects. Int J Proj Manag 20:401–411
Vanegas JA (1987) A model for design/construction integration during the initial phases of design for building construction projects. Doctoral dissertation, Department of Civil Engineering, Stanford University, Stanford, CA
Ward SC, Chapman CB (1995) Risk-management perspective on the project lifecycle. Int J Proj
Manag 13(3):145–149
Williams TM (1994) Using a risk register to integrate risk management in project definition. Int J
Proj Manag 12(1):17–22
Wooldridge M, Jennings NR (1995) Intelligent agents: theory and practice. Knowl Eng Rev
10(2):115–152
Yan Y, Kuphal T, Bode J (2000) Application of multiagent systems in project management. Int J
Prod Econ 68:185–197
Zhang H, Zhang G (2009) Construction project life cycle management of integrated risk
management. Ind Technol Forum 11(8):235–237
Chapter 25
Product Material Combination Image
Method Based on GEP
Abstract Materials are the stuff of design. From the very beginning of human history, materials have been taken from the natural world and shaped, modified, and adapted for everything from primitive tools to modern electronics. Aiming at the need for product material combination, this study presents a product material combination decision-making method based on Gene Expression Programming (GEP). The semantic differential method is used to extract user images, multidimensional scaling is used to select cognitive samples, and GEP is used to construct the models linking images and materials. Finally, the high stability of the proposed method is demonstrated via a seat case.
25.1 Introduction
In the new century the market is highly competitive and is simultaneously affected by globalization, localization, and individualization. The traditional black-box design model cannot match what consumers require exactly and effectively, to say nothing of the numerous choices available today. Mass customization is the ability to provide customized products or services through flexible processes in high volumes and at reasonably low cost. The concept emerged in the late 1980s and may be viewed as a natural follow-up to processes that have become increasingly flexible and optimized for quality and cost. In addition, mass customization appears as a way for companies to differentiate themselves in a highly competitive and segmented market. The designers are therefore responsible for choosing the appropriate use, suitable size, the favorable
K. Su (&) S. Kong
School of Mechanical and Automotive Engineering, Shandong Polytechnic University,
Jinan, China
e-mail: [email protected]
texture, satisfactory style, and right message transmission in products. Let products fit people and let users guide product design; then products are really what consumers need.
Many scholars have researched product image (Hsiao and Tsai 2004; Tsai et al. 2006; Xu et al. 2007; Liu et al. 2009; Kejun et al. 2008, 2009; Chen 2008; Nagamachi 1995; Su and Li 2005). Hsiao used gray theory and a back-propagation neural network for the color image evaluation of a children's walker; Tsai used a fuzzy neural network and gray theory for the color image evaluation of electronic locks; Xu used genetic algorithms to build an optimization model of image and product form. These studies focused mainly on the relationship between product image and form or color, and rarely involved product material. In this paper, based on GEP, the relationship between image and semantics is constructed by quantification theory type I; GEP is used to construct the image and material models, and finally the high stability of the proposed method is demonstrated.
Gene expression programming (GEP) is, like genetic algorithms (GAs) and genetic programming (GP), an evolutionary algorithm: it uses populations of individuals, selects them according to fitness, and introduces genetic variation using one or more genetic operators. The core of GEP is the complete separation of the evaluation and variation processes (Ferreira 2001). The variation process operates on fixed-length linear symbol strings, while the evaluation process uses expression trees (ETs); this matches the mapping between an overall image and its constituent elements and is thus suitable for this study. The GEP-based material image research framework is shown in Fig. 25.2.
Based on the overall image and the constituent elements, the function set is F = {'+', '-', '*', '/', 'S', 'C', 'Q', 'e', 'ln', 'pow', 'abs'}, where S is the sine function, ln the natural logarithm, pow the power function, and abs the absolute value function. The terminal set T = {constituent elements} can be expressed as T = {x1, x2, x3, ..., xn}, where x1, x2, x3, ..., xn is the parameter set.
In the experiment, with a population of size N and following Ferreira (2001), the fitness function is

f_fitness = 1000 \cdot \frac{1}{E + 1}    (25.1)

where

E = \frac{1}{m} \sum_{j=1}^{m} (F(x_j) - F_j)^2

is the mean square error over the experimental samples, m is the total number of training samples, F(x) is the mathematical expression evolved by GEP, and F_j is the observed output of the jth training sample. The relevant parameters that affect the overall image are gene-encoded to construct an initial population; the fitness of the population is calculated by Eq. (25.1), and the principle of survival of the fittest guides the evolution of the population.
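The fitness evaluation of Eq. (25.1) can be sketched as follows (our own illustration; the candidate expression and training pairs are hypothetical, not the paper's evolved model):

```python
def fitness(F, samples):
    """f = 1000 / (E + 1), where E is the mean square error (Eq. 25.1)."""
    E = sum((F(x) - y) ** 2 for x, y in samples) / len(samples)
    return 1000.0 / (E + 1.0)

# Hypothetical candidate expression and training data:
candidate = lambda x: 0.5 * x + 0.1
samples = [(0.0, 0.1), (1.0, 0.6), (2.0, 1.1)]
print(fitness(candidate, samples))   # perfect fit: E = 0, fitness = 1000.0
```

A candidate that reproduces every training output exactly attains the maximum fitness of 1000, which is why the constant 1000 appears in Eq. (25.1).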
In this study, the authors use the mean square error (S^2) and the correlation coefficient (R^2) to test the validity and predictive power of the algorithm. S^2 is calculated as

S^2 = \frac{1}{n - 1} \sum_{j=1}^{n} (P_{(ij)} - T_j)^2    (25.4)
To rule out the influence of other factors on the image experiment, a product with a simple form should be used as the stimulus. In this paper, we select a high-speed train seat to illustrate the effectiveness of the method.
Following the research of Jian (2002) and Ke (1997), we selected five common materials: wood, leather, metal, plastic, and fabric. The 120 subjects (20 designers and 100 consumers) who took part in the study (100 sets of data for training, 20 sets for testing) viewed the virtual images of 144 material combinations and rated each on a 100 mm measuring scale, whose extreme left represented ‘‘exceedingly disagree’’ and extreme right ‘‘exceedingly accept’’. After the assessment, the scores were transformed into quantified values between 0 and 1. The sample data corresponding to each image word were selected for testing; for example, the data corresponding to ‘‘public’’ were used in this experiment. There are five data collection parameters, whose names are shown in Table 25.1.
In the experiment, the extracted GEP parameters are shown in Table 25.2.
In the experiment we used C++ on a 2.80 GHz Intel Core Duo CPU with 4 GB RAM. The best individual's mathematical expression is

F(X) = sin( sin(9.932985) · x5 / 7.256194 ) + (x2 − x5)/(x5 + 2 x2)
     + cos( x8 + sin(x3 − 21.864838) − x5 + x4 + (x2 − x1)/x3 )
     + x5 − x2 + sin( x1 · cos(7.34278) + x5 )
     − x3 · sin( sin(x1 + x4) − x4 )
     + (x2 − x4) / ( (1/2)(x5 + x3 + x2 + x4 − x1) )
In order to verify the validity of this method, the 20 test data were checked against the
original model. The MSE and correlation coefficient of F(X) are 0.0517 and 0.9431,
respectively, so the mean square error and correlation coefficient of the GEP material
image model are satisfactory. In addition, the experiments also showed that the GEP
algorithm is highly efficient: the total time for running it 100 times was only 3521 s,
and 94 of the runs yielded a sub-optimal solution close to the optimal one.
25.5 Conclusion
Acknowledgments This work is supported by the Fundamental Research Funds for the Central
Universities project (2010QNA5042); the National Natural Science Foundation of China
(Nos. 61070075, 61004116, 61002147, 61003147); Zhejiang major science and technology
projects (2011C14018); and Wenzhou science and technology project (H20100042).
References
Chaoming K (1997) Research of material image for vision and touch texture. National Yunlin
university of science and technology, Taiwan (In Chinese)
Chen S-H (2008) Genetic programming: an emerging engineering tool. Int J Knowl Intell Eng
Syst 12(1):1–2
Ferreira C (2001) Gene expression programming: a new adaptive algorithm for solving problems.
Complex Syst 13(2):87–129
Hsiao SW, Tsai HC (2004) Use of gray system theory in product color planning. Color Res Appl
29(3):222–231
Jian L (2002) The research of product material image of sensation and perception. Dissertation,
Private Tunghai university, Taiwan (in Chinese)
Kejun Z, Shouqian S, Hongzong S (2008) Prediction of retention times for a large set of
pesticides based on improved gene expression programming. In: Proceedings of the 10th
annual conference on genetic and evolutionary computation, Atlanta, GA, USA, Association
for Computing Machinery, pp 1725–1726
Kejun Z, Shouqian S, Y Tang, Hongzong S (2009) Gene expression programming for the
prediction of acute toxicity of aldehydes. Chin J Anal Chem 37(3):425–428
Liu G, Cao H, Song X (2009) Products shape design based on genetic algorithm and fuzzy neural
network. Coal Mine Mach 30(11):205–208 (in Chinese)
Nagamachi M (1995) Kansei engineering: a new ergonomic consumer-oriented technology for
product development. Int J Ind Ergon 15(1):3–11
Su J-N, Li H-Q (2005) Investigation of relationship of form design elements to kansei image by
means of quantification-I theory. J LAN Zhou Univ Technol 31(2):36–38 (in Chinese)
Tsai HC, Hsiao SW, Hung FK (2006) An image evaluation approach for parameter based product
form and color design. Comput Aided Des 38(2):157–271
Xu J, Sun SQ, Zhang KJ (2007) Products shape optimization design based on genetic algorithm.
J Mech Eng 43(4):53–57 (in Chinese)
Chapter 26
D2-Index: A Dynamic Index Method
for Querying XML and Semi-Structured
Data
26.1 Introduction
With the growing importance of XML (eXtensible Markup Language) and semi-structured
data in information storage, exchange and query, much research has been done to
provide query mechanisms that extract information from XML and semi-structured
data. XML is a typical example of semi-structured data, which differ from traditional
structured data.
Existing XML index methods can be divided into two categories: region-based and
prefix-based. The early research on XML index methods was mostly region-based, which
has been unable to solve the problem of dynamic update; it has therefore gradually been
replaced by prefix-based methods (Wu et al. 2003).
Following the tree structure of XML data, a prefix-based index encodes each
node with two parts, a prefix and a layer ID (Amagasa et al. 2003; Wang et al. 2003b).
The prefix is the code of the parent node, and the layer ID is the unique identifier
that distinguishes a node from its siblings. DeweyIDs (Duong and
Zhang 2005) and ORDPATHs (O'Neil et al. 2004) are prefix-based and widely
used, and both reserve code ranges for insertion to avoid re-encoding
other sibling nodes and their children when a new node is inserted. However, when
too many nodes are inserted, the reserved code range cannot meet the demand and
re-encoding is still inevitable. DLN (Bohnle and Rahm 2004) solves this problem
with variable encoding: it defines layers with fixed-length binary strings and
creatively defines a layer separator, which avoids re-encoding effectively. Yet its
encoding is complex and not flexible. BSC (Wang et al. 2008) presents an efficient
numbering scheme for dynamic XML trees.
In D2-Index, the binary code of each complete index consists of three parts: a
unique identifier including several layer IDs, the offset of the node in the XML
data, and the length of the node. Figure 26.1 shows the composition of the code for
each node in D2-Index.
The most important component of the binary encoding in Fig. 26.1 is the pairs of
variables and their lengths. L defines the length in bits of the succeeding variable (V)
bitstring, which is similar to ORDPATH. This encoding has the following
important properties (O'Neil et al. 2004): (1) we know where an L starts and we
can identify where it stops; (2) each L bitstring specifies the length in bits of the
succeeding V bitstring; (3) from the two properties above, we see how to parse the
code bitstrings.
D2-Index uses binary fractions to encode layer IDs, which is different from
ORDPATH. Binary storage of fractions generally follows the IEEE standard,
which makes representation easy but is not flexible and usually encodes
useless bits, because it defines every fraction with a fixed length in bits. Thus we
propose a new encoding in this chapter, which can accurately represent a fraction
in relatively few bits.
In D2-Index, the initial value of each layer ID is 1/2, and each node's layer ID is the
previous sibling's divided by 2; that is why it is called D2-Index.
For instance, Fig. 26.3 shows the structural index of the XML data in Fig. 26.2
under D2-Index. It illustrates the results with a tree structure in order to make them
easier to understand.
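The divide-by-2 rule for sibling layer IDs can be sketched directly (the function name is mine; `fractions.Fraction` keeps the values exact):

```python
from fractions import Fraction

def sibling_layer_ids(count):
    """Layer IDs for successive siblings in D2-Index: the first
    sibling gets 1/2, and each following one is the previous
    divided by 2 (hence the name D2, divide-by-2)."""
    ids = []
    current = Fraction(1, 2)
    for _ in range(count):
        ids.append(current)
        current = current / 2
    return ids
```

For three siblings this yields 1/2, 1/4 and 1/8, matching the layer-ID labels visible in Fig. 26.3.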
In general, the encoding of an index without offset and length (unless described
otherwise, the same holds in the following text) should follow these principles:
• Uniqueness: different nodes have different codes.
• Orderliness: the order in which nodes appear in the XML data should be reflected in
the encoding.
[Fig. 26.3: the D2-Index structural index tree for the XML data of Fig. 26.2. The nodes
(a, b, c, d and text) are labeled with layer-ID paths composed of fractions, such as 1/2,
1/2·1/4, 1/2·1/8·1/2, 1/2·1/16·1/2, 1/2·1/32·1/2 and 1/2·1/64·1/2.]
The most important part of D2-Index is the binary-fraction representation of the
layer ID of each node in the XML data. In order to reduce the code length, we
propose a variable-length encoding based on the IEEE standard for fraction storage,
and it meets all the requirements above. In the IEEE specification, a fraction
is stored in its binary scientific notation and consists of three parts: Sign,
Exponent and Mantissa.
In D2-Index, all fractions are between 0 and 1, so the Sign can be omitted, and the
Exponent can be represented by Variable/Length pairs as in ORDPATH. The most
difficult part is how to store the Mantissa. It cannot be stored in fixed-length bits,
because that makes the code too long and introduces many useless bits.
Moreover, we cannot represent it with Variable/Length pairs because of the
particularity of the Mantissa. For example, consider the two binary fractions 0.11 and
0.101: obviously 0.11 is bigger than 0.101, but the length of 11 is 2 while that of 101
is 3, and since the length field is the prefix of the variable, this representation would
make 0.101 compare greater than 0.11.
Therefore, we propose a new representation that encodes the Mantissa bit by bit in a
scalable way: 10 encodes 1 and 01 encodes 0. The Exponent is represented with Variable/
Length pairs, so we know where the Mantissa starts; the bitstring succeeding the Mantissa
is an L of another Variable/Length pair, and since L starts with 00 or 11, it can be
distinguished from the Mantissa. In addition, the integer part of binary scientific
notation is always 1 and has only 1 bit, so we encode it together with the Mantissa in
order to reduce the complexity of encoding and decoding.
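The 10/01 bit-stuffing can be sketched as a pair of helper functions (names are mine). For example, 3/4 = 1.1 in binary scientific notation has mantissa bits "11" (including the leading integer bit), encoded as "1010", which matches the bitstrings of Table 26.3:

```python
def encode_mantissa(bits):
    """Encode a binary mantissa string (including the leading
    integer bit 1 of scientific notation) two bits per source bit:
    '1' -> '10', '0' -> '01'. A following length field beginning
    with '00' or '11' is therefore unambiguous."""
    return ''.join('10' if b == '1' else '01' for b in bits)

def decode_mantissa(stream):
    """Consume 10/01 pairs from the front of a bitstring until a
    pair starting a new field ('00...' or '11...') is met; return
    the decoded mantissa and the remaining bits."""
    out = []
    i = 0
    while i + 1 < len(stream) and stream[i] != stream[i + 1]:
        out.append('1' if stream[i] == '1' else '0')
        i += 2
    return ''.join(out), stream[i:]
```

Because every mantissa pair has unequal bits, the decoder stops exactly where the next L field (which starts 00 or 11) begins.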
Obviously, L must satisfy the Fano condition that no code is a prefix of another
code. Table 26.1 describes the prefix encoding schemes for L bitstrings in D2-Index.
The L bitstring 110 defines a component Variable/Length
encoding with assigned length L = 3, indicating a 3-bit V bitstring; the following
V bitstrings (000–111) represent the V values of the first eight integers (0–7). Thus,
110110 is the bitstring for D2-Index ''6''. In the next row of Table 26.1, bitstring
10100 identifies an encoding with L = 4, and the 4-bit V bitstrings that follow
represent the range [8, 23]; in particular, V = 8 is represented by bitstring 0000, 9
by bitstring 0001, …, up to 23 by bitstring 1111. Similarly, V in the range [-8, -1]
is associated with the L bitstring 011, with -8 represented by the lowest bitstring
000 and -1 by the highest bitstring 111.
Example 26.1 Using the L values of Table 26.1, we would generate ''1/2 3/4 3/8'' as in
Table 26.2.
We can directly compare D2-Index values without decoding, and it will be
elaborated in the next section.
250 Y. Zhang et al.
The previous section mentioned that the order in which nodes appear in the
XML data should be reflected in the encoding. In other words, we can determine the order
and relative position of nodes in the XML data by comparing the encoding values.
Table 26.3 lists several encoding values and their binary representations;
from top to bottom they follow the order of the nodes' appearance in the XML data, the
document order.
We now examine whether comparing encoding values bit by bit
yields the document order, from the following aspects:
• Comparing ancestor-offspring nodes (such as 1 and 2): the parent's code
is the prefix of the child's, and the child's remaining bitstring starts with 00. Obviously,
we can tell from their bitstrings that the parent comes before its child.
• Comparing sibling nodes (such as 1 and 5), non-sibling nodes of the same depth (such as
2 and 4), and non-ancestor-offspring nodes of different depths (such as 1 and 4):
when comparing sibling nodes, there are two cases. One case is that the lengths
Table 26.3 Sample encoding values and their binary representation in document order
Order   Key code     Bitstring
1       3/4          001 111 1010
2       3/4·1/2      001 111 1010 001 111 10
3       1/2          001 111 10
4       1/2·1/2      001 111 10 001 111 10
5       3/8          001 110 1010
6       5/16         001 110 100110
7       1/8          001 101 10
of their bitstrings are equal, such as nodes 1 and 5. In this case, a simple
bitstring (or byte-by-byte) comparison yields the document order, and the larger one
comes first. The other case is that the lengths of their bitstrings are not equal, such
as nodes 1 and 3. In this case, we compare their bitstrings over the shorter length,
and the larger one comes first, as for nodes 3 and 5. But if the comparison result is
equal, the one with the longer bitstring comes first, as for nodes 1 and 3.
In summary, the comparisons in the latter three cases are the same. Nodes whose
bitstrings have equal length can be compared directly. If the lengths are not
equal, the bitstrings are compared over the shorter length; if the comparison result is not
equal, the larger one comes first. Otherwise, we check the rest of the node with the
longer bitstring: if that rest starts with 00, the other node is its parent (or ancestor)
and comes first; otherwise, the node with the longer bitstring comes first in
document order.
Clearly, in the D2-Index method the index values decrease as the document
order increases, which complies with the orderliness principle.
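The comparison rules above can be sketched as a single function over '0'/'1' strings (the function name and -1/0/1 convention are mine):

```python
def document_order_key_compare(a, b):
    """Compare two D2-Index bitstrings and return -1 if a comes
    first in document order, 1 if b does, 0 if identical,
    following the rules described in the text."""
    n = min(len(a), len(b))
    if a[:n] != b[:n]:
        # unequal over the common length: the larger comes first
        return -1 if a[:n] > b[:n] else 1
    if len(a) == len(b):
        return 0
    rest = a[n:] if len(a) > len(b) else b[n:]
    if rest.startswith('00'):
        # the longer code is a descendant: the shorter (ancestor) is first
        return -1 if len(a) < len(b) else 1
    # otherwise the longer code comes first
    return -1 if len(a) > len(b) else 1
```

Checked against Table 26.3: node 1 (3/4, `0011111010`) precedes its child node 2, and also precedes node 3 (1/2, `00111110`) even though node 3's code is a prefix of node 1's.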
According to the position, D2-Index insertion can be divided into three cases:
leftmost of the siblings, rightmost of the siblings, and between two siblings. The
encoding differs with the position; Algorithm 26.1 describes the specific
encoding process for the inserted layer ID.
Algorithm 26.1 The Insertion Layer ID Encoding Algorithm
Input The layer ID of the node on the left side of inserted position, left, and the
layer ID of the node on the right side of inserted position, right.
Output The layer ID of inserted node, insertion.
1. If left does not exist (it means the inserted position is at leftmost of the
siblings);
Example 26.2 Assume that ''1/2 3/4'' and ''1/2 1/2'' are two existing sibling nodes,
and three nodes need to be inserted: at their leftmost, at their rightmost, and
between them. Table 26.4 shows the results after insertion.
We can see that the indexes after insertion still follow the document order. In
addition, all the codes are unique and their lengths are variable, so the insertion
does not affect the nodes already encoded.
In the previous sections we discussed the primary indexes; in practice we also need to
improve query plans with secondary indexes. Two important secondary
indexes are the following (O'Neil et al. 2004):
• the Element and Attribute TAG index, supporting fast lookup of elements and
attributes by name;
• the Element and Attribute VALUE index.
In this chapter we discuss another secondary index, the PATH index. It is actually
similar to the TAG index; the difference is that the TAG index
is based on a node's tag, while the PATH index is based on the path consisting of
all the nodes' tags from the root to the node itself. In D2-Index, every distinct path has a
separate storage space that stores the primary index keys (not the whole
index entries) of all nodes with that path.
XPath (XML Path Language) is a language for selecting and processing parts of an
XML document, published by the W3C. In D2-Index, we use XPath to
express the path of a query plan and obtain the results from the XML data through the
collection of primary index keys found by the secondary index (He and Yang
2004). The basic steps of a query plan are as follows:
1. Enter a query expression.
2. Search the secondary index and get the collection of primary index keys.
3. From the keys, obtain the offsets and lengths of the nodes.
4. Get the results.
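The four steps can be sketched as follows; `path_index` is an illustrative stand-in for the PATH secondary index, mapping a path string to the (offset, length) parts of the matching primary keys (names and data layout are mine, not the chapter's actual structures):

```python
def query_by_path(path_index, document, xpath):
    """Steps 1-4 of a query plan: look up the PATH secondary index
    with the XPath expression, collect primary-key entries, then
    use each entry's (offset, length) to slice the result nodes
    out of the raw document text."""
    keys = path_index.get(xpath, [])                      # steps 1-2
    results = []
    for offset, length in keys:                           # step 3
        results.append(document[offset:offset + length])  # step 4
    return results
```

With `document = '<a><b>x</b></a>'` and an index entry `{'/a/b': [(3, 8)]}`, the query `/a/b` returns the slice `<b>x</b>`.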
26.5 Experiment
Table 26.5 Detailed information of the main XML data in the experiments
File Name File size Elements Attributes Max-depth Avg-depth
reed 277 K 10546 0 4 3.19979
sigmodRecord 467 K 11526 3737 6 5.14107
nasa 23.8 M 476646 56317 8 5.58314
treebank_e 82 M 2437666 1 36 7.87279
SwissProt 109 M 2977031 2189859 5 3.55671
dblp 127 M 24032673 6102230 6 2.90228
We can see that the encoding length and expansion ratio of D2-Index are not
directly related to the number of nodes in the XML data, but to its depth: as the depth
increases, the encoding length shows a clear upward trend. Moreover, the size of the
index files is affected more by the depth than by the size of the XML data.
Table 26.7 shows the time consumed by query plans with three
XPath expressions over the XML document DBLP using D2-Index; the query
results are consistent with XQuery (Goldberg 2009).
Acknowledgments This work is funded by the Open Foundation of Key Laboratory of Software
Engineering of Yunnan Province under Grant No. 2011SE13, A Study on XML-based Adaptable-
Event-Driven Integrated Software Framework of Yunnan Science Foundation under Grant No.
2009CD009, and the Postgraduates Science Foundation of Yunnan University under Grant No.
YNUY201131.
References
Chen Q, Lim A, Ong KW (2003) D(K)-index: an adaptive structural summary for graph-
structured data. In: Proceedings of the ACM SIGMOD international conference on
management of data, San Diego, CA
Clark J, De Rose S (1999) XML Path Language (XPath) Version 1.0. W3C Recommendation
Cohen E, Kaplan H, Milo T (2002) Labeling dynamic XML trees. In: Proceedings of PODS,
pp 271–281
Duong M, Zhang Y (2005) A new labeling scheme for dynamically updating XML data.
In: Proceedings of ADC, pp 185–193
Goldberg KH (2009) XML, 2nd edn. Peachpit Press, Berkeley
He H, Yang J (2004) Multiresolution indexing of XML for frequent queries. In: Proceedings of
the 20th international conference on data engineering, Boston, pp 683–694
O’Neil PE, O’Neil EJ, Pal S (2004) ORDPATHs: insert friendly XML node labels.
In: Proceedings of SIGMOD, pp 903–908
Wang H, Park S, Fan W, Yu PS (2003a) ViST: a dynamic index method for querying XML data
by tree structures. In: Proceedings of the ACM SIGMOD international conference on
management of data, San Diego, CA
Wang H, Perng CS, Fan W, Park S, Yu PS (2003b) Indexing weighted sequences in large
databases. In: ICDE
Wang C, Yuan X, Wang X (2008) An efficient numbering scheme for dynamic XML trees.
In: Proceedings of the 2008 IEEE international conference on computer science and software
engineering (CSSE), pp 704–707
Wu H, Wang Q, Yu JX, Zhou A, Zhou S (2003) UD(k,l)-index: an efficient approximate index for
xml data. In: Proceedings of the 2003 international conference on web-age information
management, pp 68–79
Chapter 27
Data Preprocessing in Web Usage Mining
Xiang-ying Li
Abstract At present, the study of Web Usage Mining mainly focuses on pattern
discovery (including association rules, sequential patterns, etc.) and pattern analysis.
However, study of the main data source, that is, web-log
preprocessing, is relatively rare. Given that high-quality data helps greatly in improving
pattern-mining precision, this paper studies this aspect and proposes a highly
effective data preprocessing method.
27.1 Introduction
The main data source of Web Usage Mining is the web log, from which we can
learn the browsing behaviors of clients. Based on these browsing behaviors
(Shao 2009), we can (1) modify the corresponding web links; (2) learn
the interests of clients and provide personalized pages for them;
(3) subdivide the clients and carry out different promotion strategies for different
customers, aiming to improve return on investment (ROI); (4) find out clients'
clicks on ads and modify the ad setup accordingly (Cooley 1997a).
Data preprocessing is the first step in Web Usage Mining. Its quality
directly influences the effect of the subsequent steps
(Zhao 2003; Wu 2002), such as association-rule mining, sequential-pattern
discovery and the classification and clustering of clients.
X. Li (&)
College of Information Engineering, Shandong Youth University
of Political Science, Jinan, China
e-mail: [email protected]
[Fig. 27.1: the Web Usage Mining process: Data Cleaning → Client Identification →
Session Identification → Path Completion → Transaction Identification]
In a word, data preprocessing can greatly improve the quality of data mining
and shorten the time needed in practical data mining. Web Usage Mining, whose
object is mainly the web log, is even more affected by data preprocessing, for the web
log, unlike a traditional well-structured database or data from a data
warehouse, is semi-structured. In addition, the incomplete data in web logs caused by
all kinds of reasons, and the purpose of Web Usage Mining (Liu 2007a; Zhang
2006; Liu 2007b), which differs from that of transaction-data mining, require log files
to be preprocessed before mining, converting them into a format easy to mine
and laying the foundation for improving the accuracy and effectiveness of final pattern
mining. Fig. 27.1 shows the complete process of Web Usage Mining.
This paper pays attention to the data preprocessing module, which includes
several steps: Data Cleaning, Client Identification, Session Identification and Path
Completion, as shown in Fig. 27.1.
The task of data cleaning is to delete data unrelated to mining, such as pictures in
GIF, JPEG and jpg formats. These pictures appear in web pages in large numbers;
when clients visit the pages, pictures and animations exist in the log files as independent
records. For most mining tasks they can be ignored (of course, we have to
reconsider if the website is picture-oriented). Though deleting these records
contributes little to the mining result itself (Wang 2000; Liu 2003; Tang 2002;
Xu 2003; Ji 2009), it decreases the data to be processed afterwards, improves the
processing rate and reduces the effect of invalid data on the mining process. The
experimental data adopted are one week's log files (2012\3\1–2012\3\7) from http://
my.sdyu.edu.cn/, 132 M bytes in all. Before data cleaning there were 1,274,051
records; after cleaning by the above method, 336,741 records were left. Thus
we can see that this step greatly reduces the data to be processed later and
improves the processing rate.
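A minimal cleaning filter can be sketched as below. The text names GIF, JPEG and jpg images; the extra suffixes (png, css, js, ico) are my assumption, since scripts and styles are also commonly dropped:

```python
import re

# Suffixes treated as non-content requests (the exact list is an
# assumption; the chapter only names GIF, JPEG and jpg pictures).
IRRELEVANT = re.compile(r'\.(gif|jpe?g|png|css|js|ico)(\?|$)', re.IGNORECASE)

def clean_log(records):
    """Keep only log records whose requested URL is not an embedded
    resource; each record is assumed to be a dict with a 'url'
    field (a simplification of a real web-log line)."""
    return [r for r in records if not IRRELEVANT.search(r['url'])]
```

Applied to the records `/index.html`, `/logo.gif` and `/photo.JPG`, only `/index.html` survives; case-insensitive matching handles upper-case suffixes.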
Client identification is to identify from the log files which clients visited the website
and which web pages each client visited.
Registered clients are easy to identify. However, many unregistered clients
surf the Internet through web proxy servers, several clients may share one computer,
firewalls intervene, and one client may use different browsers; all of these add
difficulty to client identification. Certainly we could confirm the visiting clients by
cookies, but with personal privacy taken into consideration, cookies are forbidden by
many clients' browsers.
How do we distinguish multiple visiting clients using the same computer? One idea
is to use the website's topology map to detect whether the current page can be reached
directly from the page visited last time (Pirolli 1996); if it cannot, it is probable that
several clients are using one computer, and we can then identify clients by the client-session
automatic generation technique based on navigation patterns (Cooley 1997b).
Considering that the two methods above can only partly solve the problem of
client uncertainty, this paper proposes the Client and Session Identification
Algorithm (CSIA). This algorithm takes comprehensive consideration and
combines Client_IP, the website topology diagram, the browser version and
Referer_Page to identify individual clients, with good accuracy and expansibility.
Meanwhile, CSIA can also complete the task of session identification;
the specific content of the algorithm is given in the session identification
part. The paper first proposes the definition of a client as follows:
Definition 1 Clientsi = ⟨Client_ID, Client_IP, Client_URL, Client_Time,
Client_RefPage, Client_Agent⟩, 0 < i < n, where n is the total number of
clients. Client_ID is the client's identifier, and Client_IP,
Client_URL, Client_Time, Client_RefPage and Client_Agent respectively stand for
the client's IP address, the page being visited, the time of the visit, the referring
page, and the operating system and browser version the client used, from which
the unique client is determined.
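As a sketch, the six-tuple of Definition 1 can be written down as a simple record type (the Python type itself is illustrative; the field names follow the definition):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Client:
    """The six-tuple of Definition 1; fields mirror the text."""
    client_id: str       # Client_ID: unique client identifier
    client_ip: str       # Client_IP: IP address
    client_url: str      # Client_URL: page being visited
    client_time: str     # Client_Time: time of the visit
    client_refpage: str  # Client_RefPage: referring page
    client_agent: str    # Client_Agent: OS and browser version
```

Making the record immutable (`frozen=True`) lets client tuples be used as dictionary keys when grouping log records.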
The time span of a log file varies, from one day to one week, one month, etc. In
such a long time a client may visit a website more than once, so how do we
determine whether a client's access records were left by one visit or by many?
This involves the problem of session identification. A session is the time series of
URLs in the process of a client visiting a website. The relative definition is
given as follows:
Definition 2 Sessionsi = ⟨ClientID, Sj, [refpagej1,
refpagej2, …, refpagejk]⟩, 0 < i < n, where n is the total number of sessions and
ClientID is the identifier determined in the process of client identification,
Algorithm: CSIA
Input: log files; TimeSpan
Output: ClientID, Session
(Since the specific algorithm would occupy too much space, only its framework,
in the Java language, is proposed, owing to space constraints.)
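The original Java framework is omitted from the text, so the following is purely an illustrative reconstruction of the stated idea: clients are distinguished by (IP, agent), and a client's records are split into sessions wherever the gap between consecutive requests exceeds the TimeSpan threshold. The record layout and the 30-minute default are my assumptions:

```python
from datetime import datetime, timedelta

def identify_clients_and_sessions(records, timespan=timedelta(minutes=30)):
    """Group log records into clients by (ip, agent), then split
    each client's time-ordered records into sessions whenever the
    gap between consecutive requests exceeds `timespan`. Each
    record is assumed to be a dict with 'ip', 'agent', 'time'
    (a datetime) and 'url' fields."""
    clients = {}
    for rec in sorted(records, key=lambda r: r['time']):
        key = (rec['ip'], rec['agent'])
        clients.setdefault(key, []).append(rec)
    sessions = []
    for key, recs in clients.items():
        current = [recs[0]]
        for prev, rec in zip(recs, recs[1:]):
            if rec['time'] - prev['time'] > timespan:
                sessions.append((key, current))
                current = []
            current.append(rec)
        sessions.append((key, current))
    return sessions
```

The full CSIA additionally consults the site topology and Referer_Page to separate clients sharing one IP; that part is not reproduced here.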
[Figure: a sample user's access records (IP–URL sequences with timestamps
2012-3-1 10:21:36, 10:27:38 and 10:32:22) divided into sessions by time.]
27.3 Results
27.4 Conclusion
This paper has paid attention to the data preprocessing module in Web Usage Mining,
focusing on the specific realization of preprocessing, and has proposed the algorithm
CSIA, which identifies client and session simultaneously with higher accuracy
than common identification algorithms. We hope that this work can help Web
Usage Mining researchers.
At present the study of Web Usage Mining abroad is abundant, and some
universities and research institutes at home have also studied this aspect, but
influential results are still few. Considering the business value, wide application
prospects and large development space of Web Usage Mining and its related techniques,
much more research effort will be put into this area. Future
research will tend more toward the visualization of pattern analysis and
analysis results, and toward human-computer interaction, based on continued
study of pattern discovery.
References
Cooley R (1997a) Web mining: information and pattern discovery on the world wide web. In: 9th
international conference on tools with artificial intelligence (ICTAI'97), Newport Beach,
USA, 1997, pp 558–567
Cooley R (1997b) Grouping web page references into transactions for mining world wide web
browsing patterns. In: Proceedings of the IEEE knowledge and data engineering exchange
workshop (KDEX-97)
Cooley R, Mobasher B, Srivastava J (1999) Data preparation for mining world wide web
browsing patterns. J Knowl Inf Syst 1:5–32
Ji Y (2009) Application cases of data mining technology. China Machine Press, Beijing
Liu Y (2003) Research on content mining technology based on web. Harbin Institute of
Technology, Harbin
266 X. Li
Liu W (2007a) Design for web usage mining model. Appl Res Comput 24(3):184–186
Liu L (2007b) The Preprocessing of web usage mining. Comput Sci 5:200–204
Pirolli P (1996) Silk from a sow's ear: extracting usable structures from the web. In: Proceedings
of the 1996 conference on human factors in computing systems (CHI-96), Vancouver, British
Columbia, Canada
Shao F (2009) Principle and algorithm of data mining. Science Press, Beijing, pp 379–380
Tang Q (2002) The text mining based on web. Comput Eng Appl 21:198–201
Wang J (2000) Research of web text mining. J Comput Res Dev 37(5):513–520
Wu Q (2002) Client identification in the processing of web log mining. Comput Sci 29(4):64–66
Xu M (2003) Study on text mining on web. Basic Autom (5):44–46
Zhang W (2006) Clustering web client based on interest similarity. Shandong Univ (Nat Sci)
41(6):54–57
Zhao W (2003) Research on data processing technology in web log mining. Comput Appl
23(5):62–64
Chapter 28
Design Knowledge Reduction Approach
Based on Rough Sets of HCI
28.1 Introduction
The theory of rough sets was originally proposed by Pawlak as a formal tool for
modeling and processing intelligent systems characterized by insufficient and
incomplete information (Pawlak 1991; Pawlak 1982). Its main advantage is that
through rough sets we can find the connections and characteristics of the data and
extract the implied rules without a priori knowledge or additional information
(Zhou and Wu 2011).
If $\underline{R}(A) \neq \overline{R}(A)$, then the set A is said to be a rough set of the
equivalence relation R in the domain U (Li and Cercrone 2005; Miao and Li 2008).
Because of the large number of targets, complex cooperation relations and
frequent tactical movements in modern wars, it is very difficult to take in an
increasing number and variety of data and process them into information for a weapon
system (Zhang et al. 2003; Pawlak 2002). In order to improve the efficiency of
operations and avoid misoperation, a concise and friendly multimedia interface
becomes more and more important. But in the design process, weapon context
information is complex, changeable and hard to distinguish. In this paper, the
authors define the weapon context as follows: context is the information that decides or
influences the weapon display-control computation and the human-machine interactive
process; it may come from users, equipment and systems, the external
environment, and other entities. If we established all interfaces corresponding to all
the context knowledge respectively, this would be a very big job (Zhang and Liang
1996; Wang et al. 2006). And there is a great deal of information to deal with in the
system, which is a complicated constraint-based reasoning process.
At present, the basic methods of uncertainty measurement in rough set theory
are (Zhang et al. 2001): precision, rough membership functions, attribute dependency,
attribute significance, inclusion degree and conditional information entropy,
etc. Zhang (Cai and Yao 2009) put forward the concept of inclusion degree as
a measurement for rough-set data analysis, indicating that every measurement can
be expressed as an inclusion degree. Beauboude et al. (Dai and Li 2002) introduced a
method of measuring the uncertainty of rough sets based on information entropy,
which reflects and measures the uncertainty more precisely.
A design knowledge reduction approach based on rough sets for the weapon display
interface is put forward in this paper.
By observing the practical operation situation and the use of the weapon's
interface, we obtain a record form, which can be considered a knowledge representation
system (KRS for short). In the KRS table, rows represent research objects and
columns represent attributes (Cheng and Sun 2007; Wei et al. 2006). Such a list can be
regarded as a cluster of equivalence relations defined by the attributes, and this kind
of list is usually called a decision table.
According to part of the provided data, the decision table has been constructed
as shown in Table 28.1:
In Table 28.1, the domain U = {x1, x2, …, xn} means that there are n context and
interface-mode records; the condition attribute set C = {Con1, Con2, …, Conm} means
that there are m different weapon-equipment contexts, such as presenting target
distance and presenting target speed; and the decision attribute set D = {UI1, UI2, …,
UIt} means that there are t different interfaces.
In order to obtain valuable knowledge from the decision table, the original data
must first be pretreated. The common pretreatment mainly includes two aspects:
completing the incomplete data, and discretizing and normalizing the attribute values (Wei 2006).
(1) Deletion method
Delete any row whose record has missing attribute values, and obtain a complete
decision table; in Table 28.2, the object X4 should be deleted. Obviously, this
method is very simple, and we use it when the number of incomplete-information
objects in the decision table is far less than that of complete-information records;
otherwise, we cannot use this method.
(2) Compensation method
For a KRS containing incomplete data, we use the following approaches
to add the missing data and complete the KRS:
(i) According to the actual requirements, treat an incomplete attribute value as a
kind of special value; (ii) using statistical principles, assess the missing attribute
values according to the attribute values of the remaining record objects. If the missing
value is numerical, take the arithmetic mean of this attribute over the other objects as the
supplement; and if the missing value is non-numerical, take the most frequent
value of this attribute among the other objects as the supplement. In Table 28.2, the
arithmetic mean of Con2 over the other objects is (500 + 680 + 400 + 800)/4 = 595,
so we take 595 as the supplement. In addition, there are some other compensation
methods, such as the condition-combination compensation method and the compensation
method based on the indiscernibility relation.
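The statistical compensation rule (mean for numerical attributes, most frequent value otherwise) can be sketched per attribute column; the function name and the use of `None` for a missing value are mine:

```python
from collections import Counter

def compensate(column):
    """Fill None entries in one attribute column: numerical
    attributes get the arithmetic mean of the known values,
    non-numerical ones the most frequent known value, as the
    compensation method describes."""
    known = [v for v in column if v is not None]
    if all(isinstance(v, (int, float)) for v in known):
        fill = sum(known) / len(known)
    else:
        fill = Counter(known).most_common(1)[0][0]
    return [fill if v is None else v for v in column]
```

Applied to the Con2 column of Table 28.2, `[500, 680, None, 400, 800]` becomes `[500, 680, 595.0, 400, 800]`, reproducing the worked value of 595.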
The weapon context information comes in numerous types: some values are numerical while
others are not, some are continuous while others are not, and they have different value
ranges and measurement units. It is hard to create a decision table with a clear
structure if we do not discretize and normalize the attribute values. Discretization
and normalization must meet two needs: on the one hand, the
dimensionality of the attribute values should be reduced; on the other hand, the
information loss should be avoided to the greatest extent. There are two ways of
discretizing and normalizing the attribute values.
28 Design Knowledge Reduction Approach 271
When designing the weapon human-computer interface, there are many data
samples and condition attributes in the decision table. The decision table is
therefore very complex, and it is hard to find the implied knowledge in the data.
The reduction of the decision table consists of deleting the redundant data,
reducing the dimensionality of the condition attributes and reducing the sample
size.
On the actual battlefield, the situation is diverse and uncertain. Sometimes the
same sample information leads to different decisions, which yields an
inconsistent decision table. When processing an inconsistent decision table, it is
usually transformed into a consistent one. The reduction method is as follows:
(1) Firstly, merge the repeated records and reserve the inconsistent part;
(2) Secondly, simplify the condition attributes. Delete each condition attribute
in turn; if the table becomes inconsistent, the attribute cannot be reduced,
otherwise it can be reduced;
(3) Thirdly, simplify the decision rules. On the basis of the above, delete one
attribute for each decision rule; if the table becomes inconsistent, the attribute
cannot be reduced, otherwise it can be reduced. This yields the core value table of
the decision;
(4) Finally, generate the simplest decision table from the core value table. That
is to say, if deleting some core value leaves the decision rule identical to another
rule, the resulting table is the simplest decision table.
The core part of the above method is the specific attribute reduction algorithm,
which is shown as follows (Meng et al. 2008):
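The cited algorithm is not reproduced in the text; as a minimal sketch of steps (1)–(2), the dependency-based reduction below keeps an attribute only when dropping it would change the dependency degree k = |POS(C, D)|/|U| (all function names are hypothetical):

```python
from itertools import groupby

def partition(universe, table, attrs):
    """Group objects that are indiscernible on the given attributes (U/attrs)."""
    key = lambda x: tuple(table[x][a] for a in attrs)
    return [set(g) for _, g in groupby(sorted(universe, key=key), key=key)]

def positive_region(cond_part, dec_part):
    """POS(C, D): objects whose condition class lies inside one decision class."""
    pos = set()
    for block in cond_part:
        if any(block <= d for d in dec_part):
            pos |= block
    return pos

def dependency(universe, table, conds, decs):
    """Dependency degree k = |POS(C, D)| / |U|."""
    pos = positive_region(partition(universe, table, conds),
                          partition(universe, table, decs))
    return len(pos) / len(universe)

def reduce_attributes(universe, table, conds, decs):
    """Drop each condition attribute in turn; keep it only when dropping it
    would change the dependency degree (make the table more inconsistent)."""
    k_full = dependency(universe, table, conds, decs)
    reduct = list(conds)
    for a in conds:
        trial = [c for c in reduct if c != a]
        if trial and dependency(universe, table, trial, decs) == k_full:
            reduct = trial
    return reduct
```

The attributes the loop refuses to drop necessarily include the attribute core.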
272 Q. Xue et al.
Before establishing the final decision table, the original decision table should be
discretized. In this paper, because the attributes have different natures, we use a
local discretization method and adopt expert-designed subjective intervals to
discretize the attributes. The discretization standards, provided by the application
unit, are shown in Table 28.4.
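Local discretization against expert-provided intervals amounts to a cut-point lookup per attribute; a sketch with hypothetical cut points (Table 28.4 itself is not reproduced here):

```python
import bisect

def discretize(value, cuts):
    """Map a continuous value to a discrete level using expert cut points:
    level 0 for value < cuts[0], level 1 for cuts[0] <= value < cuts[1], ..."""
    return bisect.bisect_right(cuts, value)

# Hypothetical expert intervals for one cost-like attribute.
cuts = [450, 600]                                   # -> levels 0, 1, 2
levels = [discretize(v, cuts) for v in (400, 500, 680, 800)]
# 400 -> level 0, 500 -> level 1, 680 and 800 -> level 2
```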
Given that the condition attribute set of the decision table is C = {a1, a2, a3, a4}
and the decision attribute set is D = {d}, let the initial attribute core be
CORE_D(C) = ∅. From the decision table we get U/C = {{X1}, {X2}, {X3},
{X4}, {X5}, {X6}, {X7}, {X8}, {X9, X16}, {X10}, {X11}, {X12}, {X13},
{X14}, {X15}, {X17}, {X18}, {X19}, {X20}, {X21}} and U/D = {{X1, X6,
X9, X13}, {X2, X4, X5, X8, X11, X16, X21}, {X3, X15, X18, X20}, {X7,
X10, X14, X19}, {X12, X17}}. According to the algorithm, POS(C, D) =
{{X1}, {X2}, {X3}, {X4}, {X5}, {X6}, {X7}, {X8}, {X10}, {X11}, {X12},
{X13}, {X14}, {X15}, {X17}, {X18}, {X19}, {X20}, {X21}}, and
k_C(D) = 19/21. So the decision Table 28.5 is inconsistent, and the
inconsistency arises because sample X9 and sample X16 have the same
condition attribute values but different decision attribute values. After removing
a1 we get k_{C-{a1}}(D) = 11/21 ≠ k_C(D), so a1 cannot be removed; that is to
say, a1 ∈ CORE_D(C). Similarly, we can obtain the dependency of the other
condition attributes.
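The dependency degree claimed above can be re-derived mechanically from the two stated partitions alone, without Table 28.5 itself; a small verification sketch (object Xi is represented by the integer i):

```python
# Decision classes exactly as listed for U/D.
d_classes = [
    {1, 6, 9, 13},
    {2, 4, 5, 8, 11, 16, 21},
    {3, 15, 18, 20},
    {7, 10, 14, 19},
    {12, 17},
]

# Condition classes as listed for U/C: all singletons except {X9, X16}.
c_classes = [{i} for i in range(1, 22) if i not in (9, 16)] + [{9, 16}]

# POS(C, D): union of the condition classes contained in one decision class.
pos = set()
for block in c_classes:
    if any(block <= d for d in d_classes):
        pos |= block

k = len(pos) / 21   # dependency degree k_C(D) = 19/21
```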
28.3.3 Example
28.4 Conclusion
References
Cai L, Yao X (2009) Attribute reduction algorithm based on rough set theory. J Sichuan Univ Sci
Eng 22:34–37 (in Chinese)
Cheng S, Sun S (2007) Adaptive human-computer interaction user modeling based on rough sets.
Comp Sci 34:48–52 (in Chinese)
Dai JH, Li YX (2002) Study on discretization based on rough set theory. In: Proceedings of 1st
international conference on machine learning and cybernetics, Piscataway, IEEE Press,
pp 1371–1373
Li J, Cercone N (2005) Empirical analysis on the geriatric care data set using rough sets theory.
Tech report, CS-2005-05
Meng J, Liu Y, Mo H (2008) New method of packing missing data based on rough set theory.
Comp Eng Appl 6:175–177 (in Chinese)
Mi R, Li X (2009) The knowledge reduction in incomplete fuzzy decision information system.
J Wuhan Univ Sci Eng 22(3):24–28 (in Chinese)
Miao D, Li D (2008) Rough sets theory algorithms and application. Tsinghua University Press,
Beijing (in Chinese)
Pawlak Z (1982) Rough sets. Int J Comput Inf Sci 11:341–356
Pawlak Z (1991) Rough sets: theoretical aspects of reasoning about data. Kluwer Academic
Publishers, Dordrecht
Pawlak Z (2002) Rough sets and intelligent data analysis. Inf Sci 147:1–12
Wang X, Cai N, Yang J, Liu X (2006) Methods of rough set uncertainty measure based on
approximation precision and condition information entropy. Acad J Shanghai Jiao Tong Univ
51(7):1130–1134
Wei D (2006) Research on rough set model and knowledge deductions for incomplete and fuzzy
decision information system, Nanjing University, Nanjing (in Chinese)
Wei D, Zhao Y, Zhou X (2006) A rough set approach to incomplete and fuzzy decision
information system. 2006 IEEE 6th world congress on intelligent control and automation,
June 2006
Zhang W, Liang Y (1996) The theory of uncertainty reasoning. Publishing Houses of Xi’an
Jiaotong University, Xi’an
Zhang W, Wu W, Liang J (2001) Rough set theory and method. Science Press, Beijing,
pp 182–185
Zhang W, Liao X, Wu Z (2003) An incomplete data analysis approach based on rough set theory.
Pattern Recognit Artif Intell 16:158–163 (in Chinese)
Zhou L, Wu W (2011) Characterization of rough set approximations in Atanassov intuitionistic
fuzzy set theory. Comput Math Appl 62:282–296
Chapter 29
Diagnostic Testing and Analysis of CAN
Data Bus Based on the Sagitar Power
Transmission System
Wen Fang
Abstract Based on the CAN data bus of the Sagitar power system, this chapter
analyzes the network structure of the power transmission system and elaborates
the working principle and test method of the termination resistors of the power-
system CAN data bus. Using the BOSCH FSA740 standard engine analyzer,
diagnostic tests are carried out and analyzed for an open circuit in the power-
system CAN data bus, a short circuit between the CAN_H and CAN_L data
wires, and a short circuit of the CAN data bus to the positive or negative
electrode.
29.1 Introduction
Vehicle networks can satisfy people's growing demands on modern vehicles for
driving safety, comfort, emission performance and fuel consumption, while
reducing the number of wires and connectors reduces the required space and the
weight of the vehicle. However, automobile network applications place higher
requirements on repair and technical personnel and are currently a difficult
problem in automotive electrical network repair.
The automobile network structure varies among manufacturers. It is generally
divided into a power transmission system, a comfort and infotainment system,
and a combination instrument and diagnostic interface system, and these systems
exchange data through the CAN data bus. The wiper motor, the light and rain
sensor and the anti-theft alarm device assembly use the LIN data bus. Because a
central diagnosis interface or gateway is adopted, CAN data and LIN data can be
exchanged across system boundaries. In this chapter, based on the CAN data bus
of the FAW-Volkswagen power
W. Fang (&)
Department of Automotive Engineering, Sichuan Vocational and Technical College of
Communications, Chengdu, China
e-mail: [email protected]
transmission system, a vehicle network diagnostic test is presented to provide a
good reference for repair and technical staff.
The power CAN data bus sub-system includes the engine management system
control unit, the automatic transmission, the ABS with EDS, the steering column
electronic device, the illumination distance adjusting device and the airbag. The
data transmission rate is 500 kbit/s. Each control unit continuously receives
information from the other control units over the CAN data bus and reacts
immediately to changes in the working condition of the power system, in real
time on the bus. A short circuit or open circuit of the data bus causes the bus to
go off (Fig. 29.1).
For the CAN data bus signal, the end of a bus wire acts like an independent
transmitter, so a reverse-running signal is produced at the wire end; superimposed
on the effective signal, it causes distortion. The signal must therefore be
terminated at the end of the high-frequency bus network, where reflection would
otherwise occur.
29 Diagnostic Testing and Analysis 281
The process is similar to a wave that crashes into a dock embankment, is
reflected and superimposes on the following waves; the terminal resistor acts like
the sand of a beach, which absorbs the energy of an incoming wave so that no
superposition occurs. The data bus therefore has to be terminated with a terminal
resistor that absorbs the energy of the signal running to the end of the data wire.
The CAN physical interface of a high-speed bus typically follows the ISO 11898
standard, which stipulates a transmission medium of two bus lines with two
120 Ω terminal resistors. However, not all manufacturers follow ISO 11898: on
the CAN data bus of the Sagitar power transmission system, the two standard
120 Ω resistors are not installed at the ends of the data wires. Instead, the engine
management system control unit carries a central terminal resistor of 66 Ω, while
the remaining control units on the power transmission bus have high-resistance
terminations of 2.6 kΩ each. Because these resistances are connected in parallel,
the total load resistance is calculated as follows:
1/Rges = 1/R1 + 1/R2 + 1/R3 + 1/R4 = 1/2600 + 1/2600 + 1/2600 + 1/66, so Rges ≈ 61.33 Ω
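The total can be reproduced directly from the resistor values given in the text (three 2.6 kΩ terminations in parallel with the central 66 Ω resistor):

```python
def parallel(*resistances):
    """Total resistance of resistors connected in parallel, in ohms."""
    return 1.0 / sum(1.0 / r for r in resistances)

r_ges = parallel(2600, 2600, 2600, 66)   # ≈ 61.33 ohms
```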
Short-circuit and open-circuit faults in this system can be diagnosed with a
resistance measuring device. Because the CAN data bus of the power
transmission system is switched on via bus terminal 15 and keeps operating for a
short time after a complete shut-down, the control unit plug may be pulled off
only after this interval; the resistance between the CAN_H and CAN_L data
wires can then be checked. If the resistance value is greater than 250 Ω, there is
an open circuit in a data wire to the engine control unit; if the resistance value is
less than 30 Ω, there may be a short circuit between the data wires; and if the
resistance measured between the CAN_L or CAN_H data wire and ground is
less than 300 Ω, there is a short circuit to the negative electrode (Fig. 29.2).
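The three resistance checks can be summarized as a small rule-of-thumb classifier. The thresholds are exactly those given above; the function name and the returned strings are illustrative only:

```python
def diagnose_resistance(r_h_l, r_h_gnd=None, r_l_gnd=None):
    """Classify the bus state from resistance measurements in ohms:
    r_h_l between CAN_H and CAN_L, and optionally each wire to ground."""
    findings = []
    if r_h_l > 250:
        findings.append("open circuit in a data wire")
    elif r_h_l < 30:
        findings.append("possible short circuit between CAN_H and CAN_L")
    for name, r in (("CAN_H", r_h_gnd), ("CAN_L", r_l_gnd)):
        if r is not None and r < 300:
            findings.append(name + " short circuit to ground")
    return findings or ["resistances within expected range"]

diagnose_resistance(1200)             # open circuit in a data wire
diagnose_resistance(61, r_l_gnd=5)    # CAN_L short circuit to ground
```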
When a fault occurs in the information transmission link (or communication
line) of an automobile multiplex system, such as a short circuit or broken circuit
in the communication line or physical attenuation or distortion of the
communication signal, multiple electronic control units may stop working or
electronic control systems may act erroneously.
282 W. Fang
The failures that may occur on the power transmission system CAN data bus
include an open circuit in the bus, a short circuit between the CAN_H and
CAN_L data wires, and a short circuit of the CAN data bus to the positive or
negative electrode. Through diagnostic tests with the BOSCH FSA740 standard
engine analyzer, the different faulty data communication signals are observed
and compared with the standard communication data signal.
In the oscillogram, the CAN_H and CAN_L signals are arranged symmetrically,
and the level switches from the dominant to the recessive state without
interference. The recessive level of both the CAN_H and CAN_L signals is
2.5 V; in the dominant state the CAN_H signal voltage is about 3.5 V and the
CAN_L signal voltage about 1.5 V. Depending on the bus load, these values may
vary within a range of several hundred millivolts. The displayed voltage curve
also depends largely on the oscilloscope used; the general rule is: the higher the
resolution, the more clearly the signal is displayed (Fig. 29.3).
For diagnosis, the oscilloscope should preferably use two channels and measure
the data wire signal voltages to ground, which makes it easier to analyze the
voltage levels and diagnose faults. Measuring one data wire relative to the other
displays only the voltage difference. On the oscilloscope, the CAN_H and
CAN_L recessive levels are both 2.5 V; the dominant-state CAN_H voltage of
about 3.5 V is a normal waveform, whereas a dominant-state CAN_L voltage of
approximately 3 V is an abnormal waveform. From this, an open circuit in the
CAN_L wire can be judged (Fig. 29.4).
On the oscilloscope, the CAN_H and CAN_L recessive levels are both 2.5 V. A
dominant-state CAN_H voltage of about 2.3 V is an abnormal waveform,
whereas the dominant-state CAN_L voltage of approximately 1.5 V is normal.
From this, an open circuit in the CAN_H wire can be judged (Fig. 29.5).
If the CAN_L data wire is short-circuited to ground, the CAN_L line voltage is
0 V and the CAN_H recessive level drops from about 2.5 V to 0.2 V, while the
level still changes from the recessive to the dominant state. If the CAN_L data
wire is short-circuited to the positive electrode while the engine is shut off and
the connection has no contact resistance, the CAN_L signal voltage level rises to
the supply voltage of 12 V (about 11.75 V in Fig. 29.6), and no change from the
recessive to the dominant state can be observed on the data wires (Fig. 29.7).
If the CAN_H data wire is shorted to the negative electrode, no voltage signal
can be observed on the two data wires on the oscilloscope: the CAN_H data wire
voltage level becomes 0 V and the CAN_L wire level becomes 0.2 V. If the
CAN_H data wire is short-circuited to the positive electrode, the CAN_H signal
voltage level rises to the supply voltage of 12 V while the engine is shut off and
the connection has no contact resistance, and at the same time no change from
the recessive to the dominant state can be observed on the two data wires
(Figs. 29.8 and 29.9).
Fig. 29.10 Oscillogram of a short circuit between the CAN_H and CAN_L data wires
If the data wires are short-circuited to each other, the oscillogram shows the two
voltage signals overlapping each other with identical curves; both signal voltages
are approximately 2.5 V (Fig. 29.10).
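The oscilloscope signatures described above can be gathered into one lookup sketch. The nominal levels are those from the text; the matching tolerance of 0.15 V, the function name and the labels are assumptions:

```python
def classify_can_fault(rec_h, rec_l, dom_h, dom_l, tol=0.15):
    """Match measured recessive/dominant CAN_H and CAN_L levels (volts)
    against the waveform patterns described in the text."""
    def near(v, target):
        return abs(v - target) <= tol
    if near(dom_h, 12.0):
        return "CAN_H short to positive electrode"
    if near(dom_l, 12.0):
        return "CAN_L short to positive electrode"
    if near(rec_h, 0.0) and near(rec_l, 0.2):
        return "CAN_H short to ground"
    if near(rec_l, 0.0) and near(rec_h, 0.2):
        return "CAN_L short to ground"
    if all(near(v, 2.5) for v in (rec_h, rec_l, dom_h, dom_l)):
        return "CAN_H and CAN_L shorted together"
    if near(dom_h, 3.5) and near(dom_l, 3.0):
        return "CAN_L wire open"
    if near(dom_h, 2.3) and near(dom_l, 1.5):
        return "CAN_H wire open"
    if near(dom_h, 3.5) and near(dom_l, 1.5):
        return "normal"
    return "unknown pattern"

classify_can_fault(2.5, 2.5, 3.5, 1.5)   # -> "normal"
classify_can_fault(2.5, 2.5, 3.5, 3.0)   # -> "CAN_L wire open"
```

In practice the measured values drift with bus load, so the tolerance would need tuning per vehicle.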
29.5 Conclusion
The CAN bus (multiplex transmission system) is one of the most promising
buses, and its field of application is constantly expanding. The multiplex
transmission system has incomparable advantages over traditional automobile
wiring. Because of its high technology content, there are still some faults that
cannot be accurately located and eliminated by test equipment alone. Automobile
repair and technical personnel must find faults through systematic analysis of the
basic structure of the multiplex network and the principle of signal transmission,
flexible application of the specific diagnostic methods of the multiplex
transmission network, and multiplex network diagnostic tests, so as to find out
the inherent laws of the multiplex network structure.
Chapter 30
Effectiveness to Driving by Interbeds
in Channel Sand of River Delta System
30.1 Introduction
In China, the Mesozoic and Cenozoic basins of continental sediments are
widespread, with well-developed rivers and sands.
The river-deltaic sandstone dominates the continental basins, and it has been
ascertained that the majority of China's oil reserves are located in river-delta
depositional systems. Songliao Basin is China's largest continental depression-
type oil and gas basin, with the typical dual structure of lower rifting and upper
depression. Before the early Cretaceous Denglouku period the basin was in its
rifting evolution stage, and the Quantou–Nenjiang period marks its depression
evolution stage (Liu et al. 2010); the oil- and gas-bearing formation at the middle
and lower part of the Daqing Placanticline formed in a large river-delta
depositional system of depression sediments. The channel sand is the main
carrier of the oil and gas reserves. Studies of the depositional mechanism of
channel sand and experience from the oil field development process show that
channel sands are not homogeneously connected (Wang et al. 2009). Changes in
various factors such as hydrodynamic conditions, provenance supply and climate
lead to extensive heterogeneity within the channel sand (Sun et al. 2008); in
particular, the prevalence of impermeable interbeds affects the laws of fluid flow.
Knowledge of the interbeds of channel sands and their genesis, and analysis of
the driving effects of the various flooding means, are therefore particularly
important for the development of oil fields.
In recent years, research on interbeds has been growing. Jiao Hongbing, in
‘‘Influences of Interbed Distribution of Meandering River on the Development
Effectiveness Driven by Water’’, examined factors such as the interbed dip angle,
the interbed spacing and the ratio of interbed depth to reservoir depth: as the dip
angle (3–15°) increases, the recovery driven by water also increases, with
moderate growth over 7–15°; as the interbed spacing (10–40 m) increases, the
recovery driven by water exhibits a nearly linear rise; and as H interbed/H oil
layer (0–2/3) increases, the recovery driven by water is lowered. After a
comprehensive collection of the distribution features of all kinds of channel
sands in Daqing Oilfield, this chapter presents a study of the types and genesis of
the channel sand interbeds and their influences on the driving effects of various
driving means.
Horizontal interbeds are mainly located in the braided rivers of oilfields such as
Lamadian and Saertu, and in the low-sinuosity and straight distributary channels
of oil fields such as Xingshugang. There are two major genetic types:
(1) interbeds formed among the sandy laminae of fluvial-facies reservoirs due to
changes in the hydrodynamic conditions or provenance supply during deposition,
including muddy intercalations and conglomerate interbeds cemented by mud;
(2) interbeds arising from physical and chemical effects during the diagenesis of
the sediments; such interbeds are mainly calcareous (Yu et al. 2010).
292 L. Zhao et al.
In addition, there is a marked difference in the lithology of the sediments
between the upper and lower parts of the reservoir. Because the vertically
accreted suspended sediments feature fine lithology, small thickness, extensive
distribution and wide scope, they are capable of forming a vertically upward
impermeable shelter (Li et al. 2009) during the flood-crest process of a braided-
river flood event.
Lateral interbeds are mainly developed in the meandering rivers of oil fields such
as Saertu and Xingshugang. Under lateral accretion, each flood forms a lateral
accretion body, and an argillaceous layer can form between two floods during the
dry season and interflood period (Zhang et al. 2008). The oblique crossing of the
lateral accretion bodies with the formation boundary, and of the overlying
argillaceous layers with the formation boundary, forms the argillaceous lateral
interbeds in the channel. Generally, an argillaceous lateral interbed is developed
only at the upper and middle parts of a point bar, thus forming a ‘‘semi-
connection’’ in which the bottom of the meandering-river point bar is connected
while the upper and middle portions are disconnected.
The presence of horizontal interbeds affects the flow of the reservoir fluids and
changes the distribution of the entire infiltrating fluid field (Wang and Zeng
2008). The higher the development frequency of interbeds, the more complex the
distribution of oil and water movement. For fining-upward sands, the
development of a large number of unstable interbeds reduces the sinking speed
and the gravity action of the injected water along its moving direction, which is
favorable for the vertical swept volume and the oil driving efficiency; but for
coarsening-upward sands, it prevents gravity sinkage from enlarging the swept
volume during the water-driving process. Fining-upward sands without interbeds
are flooded more intensely than those with interbeds, and the flooded intervals at
the bottom appear earlier. The unstable interbeds of coarsening-upward sands
reduce the oil scavenging coefficient and the gravity-assisted oil driving
efficiency, with severe implications for the development effects: the longer or
more numerous the interbeds, the worse the development effects. The main
reason is that the oil driving efficiency near the lower part of the interbeds is far
less than in other regions, so a dead oil area forms during the late stage of high
water content.
30 Effectiveness to Driving by Interbeds in Channel Sand of River Delta System 293
polymer and exerts negative impacts on the expanded swept volume of the
polymer profile control.
Through the comparison of Fig. 30.2 (3) with (4), it can be seen that in the
polymer flooding process the remaining oil near the interbeds is not significantly
improved. Through the comparison of Fig. 30.2 (1) with (4), the remaining oil is
diffused in local areas after the polymer flooding: the region with an oil
saturation between 0.4 and 0.6 is much wider than before the polymer flooding,
and only in local areas such as the interbed bottom and top is the oil saturation of
the remaining oil the same as before the polymer flooding. This shows that
polymer flooding can develop the remaining oil in most regions within the
braided channel sands; but due to the presence of the horizontal interbeds, the
remaining oil at the bottom is not significantly improved.
The impacts of the lateral interbeds on the distribution of the remaining oil are
closely related to the combination of the dip direction and the injection direction
(Ling et al. 2008). There are three major combinations of the injection direction
and the interbed inclination: along the interbeds, against the interbeds, and along
the extension direction of the interbeds (the fluvial direction) (see Fig. 30.3). To
study the impact of the lateral interbeds on the distribution of the remaining oil
under the different combinations, three-dimensional conceptual models of the
three combinational directions are set up for numerical simulation.
Fig. 30.3 Schematic view of combinational relationship of lateral interbed and injection
direction
Fig. 30.4 Cross-section of oil saturation with the water content of 0.8, 0.925 and 0.98 with water
driving at the homogeneous sandstone oil reservoir without interbed and the water content of 0.98
through the follow-up water driving
dramatically. The appearance of an oil wall, and how obvious it is, are related to
the reservoir heterogeneity and the well spacing. In general, the more severe the
heterogeneity and the greater the differences within the positive rhythm, the less
likely an oil wall is to appear. Because the braided channel sand has sound
reservoir connectivity and small permeability differences, an obvious oil wall
appears; at the same time, the appearance of the oil wall is controlled by the well
spacing. Due to the upper lateral interbeds and the lower high-permeability belts
of the meandering-river sands, the injected polymer solution breaks through
along the bottom. In addition, when the remaining oil at the interbeds is
displaced, it only flows out from the bottom of the lateral interbeds, so no
V-shaped oil wall is formed.
Polymer flooding is started when the water content is 0.925. Figure 30.4 (4)
and (5) show the oil saturation profiles when polymer injection is stopped and
when the follow-up water driving brings the water content to 0.98; the remaining
oil is still distributed at the top of the reservoir near the oil well. The ultimate
recovery rate for 0.70 PV of injected polymer is 42.43 %. (Table 30.1 shows the
relationship between the injection of different volumes of polymer and the
recovery rate; the ultimate recovery rate after displacement refers to the recovery
reached when the follow-up water driving brings the water content to 0.98 after
the injection of 0.70 PV of polymer.) Compared with water driving, the ultimate
recovery rate grows by only 1.15 %. It can be seen that the effect of water
driving is already significant; under this circumstance, the effect of polymer
driving is insignificant and may be uneconomical.
When injection is along the direction of the interbeds, the impact of the
interbeds on the development of the remaining oil can be seen in Fig. 30.5. After
the polymer driving, the greater accumulation of remaining oil on the side of the
water well toward the lateral interbeds, and at the structural top formed by two
interbeds, is related to the tilting direction of the oil–water interface and the
hydrodynamic direction. Due to the obstruction of the interbeds, the recovery
rate at a water content of 0.925 by water driving
Table 30.1 Relationship between different volumes of injected polymer and recovery rate (%)
Driving direction                      Water content  Water content  0.2 PV  0.4 PV  0.6 PV  0.7 PV
                                       of 0.925       of 0.98
No interbeds                           35.51          41.28          41.24   40.78   41.73   42.43
Injection along interbeds              26.00          30.96          35.05   34.18   36.57   37.26
Injection against interbeds            23.38          28.57          34.87   35.78   36.74   36.78
Injection along the fluvial direction  34.94          41.11          40.99   38.73   38.59   41.94
is 26.00 %, the recovery rate at a water content of 0.98 by water driving alone is
30.96 %, and the recovery rate reaches 37.26 % when polymer driving is
employed. The three values drop by 9.51, 10.32 and 5.17 % in comparison with
those without interbeds. It can be seen that, due to the obstruction of the
interbeds, the recovery rate under water driving decreases by around 10 %
compared with the no-interbed case. After the polymer driving, the obstructed
crude oil is remarkably recovered, and the difference in crude oil recovery is
reduced to about 5 %.
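The comparison can be recomputed directly from Table 30.1 (the no-interbed row minus the injection-along-interbeds row, values in %):

```python
# Recovery rates from Table 30.1 (water content 0.925, water content 0.98,
# and the ultimate value after 0.70 PV of polymer plus follow-up water driving).
no_interbeds = {"wc_0.925": 35.51, "wc_0.98": 41.28, "ultimate": 42.43}
along        = {"wc_0.925": 26.00, "wc_0.98": 30.96, "ultimate": 37.26}

drops = {k: round(no_interbeds[k] - along[k], 2) for k in no_interbeds}
# drops: roughly 9.5-10 % under water driving, about 5.2 % after polymer driving
```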
Fig. 30.5 Cross-section of oil saturation with the water content of 0.8, 0.925 and 0.98 with water
driving and the water content of 0.98 through the follow-up water driving along the interbeds
Fig. 30.6 Cross-section of oil saturation with the water content of 0.8, 0.925 and 0.98 with water
driving and the water content of 0.98 through the follow-up water driving against the interbeds
with those without interbeds. Through the polymer driving, the difference in
crude oil recovery is reduced to just over five percent.
Injection along the extension direction of the interbeds (the fluvial direction)
has, in theory, the least resistance (maximum permeability along the fluvial
direction and no direct obstruction by the lateral interbeds) and excellent water
driving effects. Figure 30.7 shows the oil saturation profiles at all stages of
injection in this direction. The remaining oil is mainly located at the top of the
reservoir near the oil well when the water content reaches 0.98 with the
follow-up water driving; on the profile, the remaining oil between two oil wells
is V-shaped, and only slight obstruction can be seen near some lateral interbeds.
The recovery rate is up to 34.94 % when the water content is 0.925, and the
ultimate recovery rates for the follow-up water driving and the polymer driving
are high, reaching 41.11 and 41.94 % respectively. Water injection along the
fluvial direction thus yields higher recovery rates than injection along or against
the lateral interbeds. But in this situation the improvement from polymer driving
is small, only 0.83 %; within less than 7 years of polymer driving, the recovery
rate can even be lowered.
Fig. 30.7 Cross-section of oil saturation with the water content of 0.8, 0.925 and 0.98 with water
driving and the water content of 0.98 through the follow-up water driving along the fluvial
direction
30.5 Conclusion
The distribution location and scale of the horizontal interbeds, the injection and
recovery means, and the reservoir rhythm have a direct bearing on the
distribution law of the remaining oil. Horizontal interbeds can prevent the
upward movement of oil and gas while obstructing the downward movement of
water across the interbeds, thereby expanding the swept volume; they are an
important cause of the formation of remaining oil. Horizontal interbeds can
serve to confine flooding to intervals and prevent water breakthrough across the
interbeds to a certain extent. During polymer driving, the horizontal interbeds
can change the fluid direction and prevent oil and gas flow, reducing the
displacement effects.
The lateral interbed in the meandering river is a major factor in the formation of
remaining oil. Although it cannot obstruct the rise of bottom water, the upper
part of the lateral interbed, and especially the acute angle formed by the interbed
and the top surface, has the highest concentration of remaining oil; in most
cases, the closer to the oil wells, the more remaining oil in the angle. The lateral
interbeds increase the resistance to oil driving and make polymer injection
difficult; at certain times during polymer injection, the recovery rate of crude oil
at some stages may be lower than that under complete water driving. In sands
with developed lateral interbeds, different water injection directions affect the
recovery rate of crude oil and also change the distribution locations of the
remaining oil at the interbeds. In general, the injection directions ranked from
high to low crude oil recovery are: driving along the extension of the interbeds
(the fluvial direction), along the lateral interbeds, and against the lateral
interbeds.
References
Li M, Zhao Y, Liu X (2009) Distribution of petroleum enriched areas, Changling Sag, southern
Songliao Basin. Pet Explor Dev 36(4):413–418
Ling Z-f, Wang L-j, Hu Y-l (2008) Flood pattern optimization of horizontal well injection. Pet
Explor Dev 35(1):85–91
Liu Z, Sa L, Dong S (2010) Current situation and trend of geophysical technology in CNPC. Pet
Explor Dev 37(1):1–10
Su Q (2004) Distribution mode of remaining oil in the positive rhythm sand and development by
horizontal well. Pet Geol Recovery 11(6):34–38
Sun C, Zhu X-m, Zan G-j (2008) Sand body types and reservoir properties in Cretaceous
Jiufotang Formation in Luxi Sag. Pet Explor Dev 35(5):569–575
Wang X-z, Zeng L-f (2008) Effect of practical techniques in producing remaining oil in Gudong
Oilfield. Pet Explor Dev 35(4):467–475
Wang H, Liu G, Chen G (2009) Development of mm-level mud-films and significance for
reservoirs in Yingtai area, Daqing Oilfield. Pet Explor Dev 36(4):442–447
Yu J, Yang Y, Du J (2010) Sedimentation during the transgression period in Late Triassic
Yanchang Formation, Ordos Basin. Pet Explor Dev 37(2):181–187
Zhang Q-g, Bao Z-d, Song X-m (2008) Hierarchical division and origin of single sand bodies in
Fuyu oil layer, Fuyu Oilfield. Pet Explor Dev 35(2):157–163
Zhang G, Lan Z, Liu P (2009) Fracturing control method for deep volcanic rock gas reservoirs in
Daqing exploration area. Pet Explor Dev 36(4):529–534
Chapter 31
Effectiveness Valuation of Electronic
Countermeasure on Ground Air Defense
and Anti-missile
Keywords Analytic hierarchy process · Delphi method · Electronic
countermeasure system · Effectiveness valuation · Improved ADC method
31.1 Introduction
As electronic technology is extensively used in the fields of air strike and air
defense, electronic countermeasures have become an important part of modern
war. As one of the important forces in ground-to-air defense, ground air-defense
units will certainly face drastic rivalry under electronic countermeasure
conditions. So, how to evaluate electronic countermeasure effectiveness is an
important issue.
31.2 Methodology
Currently, there are many methods to evaluate effectiveness, but among them the
ADC method is more comprehensive and precise, and its indexes are clearer and
better reflect a weapon system's physical advantages. The method also has
limitations: every index must have a specific expression (AD A 109549 1981).
As the ground-to-air defense electronic countermeasure system is complex and
lacks quantitative indexes, it is difficult to construct its C matrix (Meng et al.
2003; Sang 2008; Li and Wang 2008).
This article therefore discusses how to improve the ADC model; its main part
uses the improved ADC method to obtain a rigorous process and an authentic
outcome. On the premise of using analytical methods as much as possible, the
weights of the unquantifiable indexes are calculated by AHP and their scores are
determined with the expert consultation (Delphi) method, which solves the
calculation problem. Combining the qualitative and the quantitative exploits the
advantages of the ADC method while making up for its disadvantages, so that
electronic countermeasure effectiveness can be evaluated effectively.
Combined with the elements of the improved ADC model, the index system is as follows: the evaluation of ground-to-air defense effectiveness under electronic countermeasure conditions is decided by the three matrixes A, D and C (AD A 109549 2010). A and D are decided by maintainability and reliability; C is decided by the anti-jamming matrix C1, the electronic reconnaissance capability matrix C2, the anti-radiation-missile resistance capability matrix C3, the anti-stealth ability matrix C4 and the survival ability matrix C5.
Though ground-to-air defense electronic countermeasure systems differ in theory, function and structure, the typical electronic countermeasure process is as follows. First, reconnaissance equipment such as satellites, radar and photoelectric equipment reconnoiters the radiation source. Second, the data processing center finds its position, and the radiation source recognition system identifies and extracts features such as the working frequency. Third, the ground-to-air defense electronic countermeasure system applies soft kill or hard kill according to the obtained information. Soft kill uses radar jamming equipment and photoelectric jamming equipment; hard kill launches missiles against anti-radiation missiles (ARMs).
According to this typical process, an elementary model yields the system's reliability structure in Fig. 31.1 and its original states in Table 31.1, where:
1 - Radar reconnaissance equipment;
2 - Photoelectric reconnaissance equipment;
3 - Secondary-plane reconnaissance equipment;
4 - Data processing center;
5 - Radiation source identification system;
6 - Radar jamming equipment;
7 - Photoelectric jamming equipment;
8 - Hard-kill equipment.
Explanation: the numbers in the figure are the serial numbers of the parts.
Es = A^T [D][C]   (31.1)

Ai = MTBFi / (MTBFi + MTTRi),  i = 1, 2, ..., 8
where:
i is the equipment number in Fig. 31.1.
Combining the eight states in Table 31.1, the availability A of the electronic countermeasure system can be obtained from the calculation model for combined systems.
A = (a1 a2 a3 a4 a5 a6 a7 a8)

a1 = ∏(i=1..8) Ai                  a2 = (1 - A1) ∏(i=2..8) Ai
a3 = (1 - A2) ∏(i=1..8, i≠2) Ai    a4 = (1 - A3) ∏(i=1..8, i≠3) Ai
a5 = (1 - A6) ∏(i=1..8, i≠6) Ai    a6 = (1 - A7) ∏(i=1..8, i≠7) Ai
a7 = (1 - A8) ∏(i=1..7) Ai         a8 = 1 - Σ(i=1..7) ai
(31.2)
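Equation (31.2) can be sketched directly in code. The following Python fragment is an illustrative sketch (the function name is ours; the enumerated single-failure states follow Table 31.1 as extracted), computing the eight-state availability vector from the per-equipment availabilities Ai:

```python
import math

def availability_vector(A):
    """Eight-state availability vector of Eq. (31.2).

    A: list of 8 per-equipment availabilities, A_i = MTBF_i / (MTBF_i + MTTR_i).
    States: all units up; exactly one of units 1, 2, 3, 6, 7, 8 down; all rest.
    """
    assert len(A) == 8

    def one_down(j):
        # probability that exactly unit j has failed and the others are up
        return (1 - A[j]) * math.prod(a for i, a in enumerate(A) if i != j)

    a = [math.prod(A)]                 # a1: every unit available
    for j in (0, 1, 2, 5, 6, 7):       # units 1, 2, 3, 6, 7, 8 of the paper
        a.append(one_down(j))
    a.append(1 - sum(a))               # a8: all remaining (multi-failure) states
    return a
```

By construction the eight entries sum to one, as a state probability vector must.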
(2) The sub-model of dependability D
The factors of D are decided by the dependability level. The dependability of each part of the electronic countermeasure system is expressed as
Ri = exp(-λi t),  i = 1, 2, ..., 8
where:
λi is the part's failure rate, obtained as λi = 1/MTBFi.
The state transition probabilities d11–d88 can be obtained from the system's original states and each part's dependability. d11 is the probability that the system runs normally from beginning to end; d12 is the probability that the system runs normally at the beginning but the radar reconnaissance equipment has failed by the end. It is given by
d12 = (1 - R1) ∏(i=2..8) Ri
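The dependability sub-model can be sketched as follows, assuming exponential lifetimes as in Ri = exp(-λi t) with λi = 1/MTBFi (the helper names are illustrative, not from the paper):

```python
import math

def reliability(mtbf, t):
    """R_i(t) = exp(-lambda_i * t), with failure rate lambda_i = 1 / MTBF_i."""
    return math.exp(-t / mtbf)

def d12(R):
    """State transition probability d12: the system starts with all eight parts
    up, and only the radar reconnaissance equipment (part 1) has failed by the
    end of the mission: (1 - R_1) * prod over i = 2..8 of R_i."""
    return (1 - R[0]) * math.prod(R[1:])
```

The other off-diagonal entries d1j follow the same pattern with the failed part swapped.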
where:
Pt —radio power;
Gt —antenna gain;
k —wavelength;
r —RCS;
K —Boltzmann constant;
T0 —receiver noise temperature, typically taken as 290 K;
Dfr —receiver bandwidth;
L —the loss factor of system;
Fn —noise coefficient;
(S/N)min —minimum SNR
where:
rj —polarization loss, rj = 0.5;
Pj —jamming power;
Gj —jammer antenna main-lobe gain;
Dfj —jamming signal bandwidth.
where:
N1 = 1 - K1 P1,  K1 = (ht Δf Rt) / (ht0 ΔF Rt0)   (31.7)
we can get:
C2 ¼ x1 N1 þ x2 N2 ð31:9Þ
where:
x1 and x2 are decided by experts: x1 = 0.43, x2 = 0.57.
c. The anti-radiation-missile resistance capability matrix C3 (Liu 2010) and the
anti-stealth ability matrix C4
Because the diverse random events deciding the capability matrixes C3 and C4 are complex and changeable, and some criteria lack a quantified representation in the ADC method, an improvement of the method is proposed in this thesis. Combining qualitative and quantitative processing with the ADC method, the analytic hierarchy process and the Delphi method are used jointly to implement the effectiveness valuation of electronic countermeasure. These capabilities can be broken down into the index system in Table 31.2 (Ti 2005a).
The weight of each index is decided by the analytic hierarchy process, and each index's relative importance is expressed on the 1–9 ratio scale to build the judgment matrix. Taking the anti-stealth ability matrix C4 as an example, assume that the sub-model's tactical measures and technical ability constitute the judgment matrix over the remark collection:
T = (tmn)2×2 = | 1     3.03 |
               | 0.33  1    |    (31.10)
The eigenvector of the judgment matrix can be calculated by the "addition method" according to expression (31.10):
xm = (1/2) Σ(n=1..2) [ tmn / Σ(k=1..2) tkn ],  m = 1, 2
where:
m, n —the sub-index numbers;
xm —the weight of the third-layer index;
xmk —the weight coefficient of the sub-index;
Fmk —the sub-index score given by experts.
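The "addition method" above (normalize each column of the judgment matrix, then average the normalized entries across each row) can be sketched in a few lines; the matrix values are those of expression (31.10), and the function name is illustrative:

```python
def ahp_weights(T):
    """AHP 'addition method': normalize each column of the pairwise judgment
    matrix, then average the normalized entries across each row."""
    n = len(T)
    col_sums = [sum(T[k][j] for k in range(n)) for j in range(n)]
    return [sum(T[m][j] / col_sums[j] for j in range(n)) / n for m in range(n)]

# Judgment matrix of expression (31.10) for the two anti-stealth sub-indexes
T = [[1.0, 3.03],
     [0.33, 1.0]]
weights = ahp_weights(T)   # approximately [0.75, 0.25]
```

The weights sum to one, so they can be used directly as the xm coefficients.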
where:
K —the number of destroyed centers;
P0k —the probability that center k is raided;
P1k, P2k, ..., PQk —the probabilities of the center being destroyed after all effective countermeasures are taken.
C = ∏(k=1..5) Ck   (31.13)
The improved ADC method, which can evaluate the effectiveness of electronic countermeasure on ground air defense and anti-missile, is obtained by joining formulae (31.2)–(31.6), (31.9), (31.11) and (31.12) into formula (31.14).
The effectiveness of two supposed typical ground air defense and anti-missile systems under electronic countermeasure conditions was evaluated by the model. System 2 is partly advanced relative to System 1 through improvement of the reliability level of every part and of the radar's anti-stealth ability. The numerical values of the parameters of the two systems are given in Tables 31.3 and 31.4.
Elucidation: in Table 31.4, the number following every index is its weight; the other numbers in Table 31.4 are scores.
From Table 31.3:
A1 = [0.4279 0.0595 0.0214 0.0713 0.0371 0.0107 0.0951 0.2770]

     | 0.6392 0.0041 0.0720 0.0168 0.0002 0.0008 0.0001 0.2668 |
     | 0      0.6943 0.0028 0.0513 0.0014 0.0003 0.0002 0.2497 |
     | 0      0      0.7101 0.0082 0.0312 0.0051 0.0001 0.2453 |
D1 = | 0      0      0      0.7452 0.0362 0.0014 0.0006 0.2166 |
     | 0      0      0      0      0.7642 0      0.0012 0.2346 |
     | 0      0      0      0      0      0.8013 0.0004 0.1983 |
     | 0      0      0      0      0      0      0.8537 0.1463 |
     | 0      0      0      0      0      0      0      1      |
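As a numerical sanity check (a sketch using numpy, with the matrices exactly as given for System 1), A1 and each row of D1 should sum to one, and the row-vector product A1·D1 gives the state distribution after the mission, which is then combined with the capability matrix C in Es = A^T[D][C]:

```python
import numpy as np

A1 = np.array([0.4279, 0.0595, 0.0214, 0.0713, 0.0371, 0.0107, 0.0951, 0.2770])
D1 = np.array([
    [0.6392, 0.0041, 0.0720, 0.0168, 0.0002, 0.0008, 0.0001, 0.2668],
    [0.0,    0.6943, 0.0028, 0.0513, 0.0014, 0.0003, 0.0002, 0.2497],
    [0.0,    0.0,    0.7101, 0.0082, 0.0312, 0.0051, 0.0001, 0.2453],
    [0.0,    0.0,    0.0,    0.7452, 0.0362, 0.0014, 0.0006, 0.2166],
    [0.0,    0.0,    0.0,    0.0,    0.7642, 0.0,    0.0012, 0.2346],
    [0.0,    0.0,    0.0,    0.0,    0.0,    0.8013, 0.0004, 0.1983],
    [0.0,    0.0,    0.0,    0.0,    0.0,    0.0,    0.8537, 0.1463],
    [0.0,    0.0,    0.0,    0.0,    0.0,    0.0,    0.0,    1.0],
])

p = A1 @ D1   # state distribution after the mission; Es = p @ C once C is formed
```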
radar’s anti-stealth ability and the reliability of system2’s improving. The result is
accordant with practice (Yan et al. 2007; Chin 1998; Packer 2003; Whatmore
2005; Hall and Betts 1994; Rius et al. 1993). And it is fully proved that the
improved ADC method is in validity to evaluate the effectiveness of electronic
countermeasure on ground air defense and anti-missile.
31.4 Conclusion
References
Li Z, Wang S (2008) Several ways of effectiveness valuation to weapon. J Univ Navy Force Eng
90(1):97–99 (in Chinese)
Liu Y (2010) Mathematics model of effectiveness for electronic recce equipment. J UEST Chin
24(3):304–306 (in Chinese)
Meng Q, Zhang J, Song B (2003) Effectiveness valuation of fish torpedo. Publishing company of
national defence industry, Beijing (in Chinese)
Packer RJ (2003) Computer modelling of advanced radar techniques: the advanced radar
simulator. In: IEE colloquium on electronic warfare systems
Ribeiro RA (2006) Fuzzy multiple attribute decision making: a review and new preference
elicitation techniques. Fuzzy Sets Syst 78(2):155–18
Rius JM et al (1993) High-frequency RCS of complex radar target in real time. IEEE Antennas
Propag 41(9):1308E–1319E
Sang W (2008) The study on ways of effectiveness valuation of electronic countermeasure. Tech
Electron Countermeasure 24:34–41 (in Chinese)
Schrick G (2008) Interception of LPI radar signals. In: IEEE international conference on radar
Ti N (2005a) Formulas for measuring radar ECCM capability. IEE Proc, 132(3):198–200, July
2009
Ti N (2005b) Radar ECCM's new area: anti-stealth and anti-ARM. IEEE Trans Aerosp Rev
31(3):1120–1127
Volakis JL (1994) XPATCH: a high-frequency electromagnetic scattering prediction for complex
three dimensional objects. IEEE Antennas Propag Mag 36(1):65–69
Whatmore L (2005) The evaluation of radar effectiveness using the generic ECM ES model. In:
IEE colloquium on computer modeling and simulation of radar systems
Wu X, Hui Zhang, Wang S (2010) The study on technic measure of aerial defence radar to
confront anti-radiation missile. Electron Countermeasure Spacefl 22(1):10–13 (in Chinese)
Yan Y, Zhang Q (2009) The effectiveness evaluation of the C4ISR system based on the improved
ADC method. J Univ Air Force Eng 34(1):32–35 (in Chinese)
Yan Y, Lei Y, Zhang Q (2007) The design of the C4ISR system effect valuation software. Arith
Tech Aviat 37(1):4–7 (in Chinese)
Yan Y, Lei Y, Zhang Q (2008) Research on reliability of the efficiency evaluation of the C4ISR
system based on ADC method. Firepower Command Control 33(1):21–24 (in Chinese)
Zhang Y, Volz R, Sehgal A (2000) Generating Petri net driven graphical simulation tool for
automated systems. In: Proceedings of the American nuclear association 2000 annual
meeting, pp 140–147
Zhou S, Tao B (2007) Mathematical simulation of using decoying and killing missiles to counter
anti-radiation missiles. (1): ADA320857
Chapter 32
Energy Consumption and Economic
Growth: Cointegration and Granger
Causality Test
Abstract In this paper, we use the cointegration technique and a VEC model to study the relationship between energy consumption and economic growth in China from 1953 to 2008, and draw the following conclusions: 1. A long-term cointegration relationship exists between energy consumption and GDP growth. 2. In the long term, energy consumption and economic growth have no Granger causality; in the short term, a bi-directional Granger causality exists between energy consumption and GDP.
32.1 Introduction
J. Liang (&)
School of Management, Tianjin University, Tianjin, China
e-mail: [email protected]
Z. Liu
STS Study Center, Tianjin University, Tianjin, China
Kraft and Kraft (1978) gave the first study of the relationship between energy consumption and economic growth. Using U.S. GNP and energy consumption data from 1947 to 1974, they found a one-way causal relationship from GNP to energy consumption, which means that a conservative energy policy would not greatly affect GNP growth. However, Akarca and Long (1980), using the same method as Kraft with a smaller sample size, could not draw the same conclusion, suggesting that Kraft's conclusions are not robust to the sample size. Since then, much research has used different samples, different modeling methods and different measurement methods to explore the inherent relationship between energy consumption and economic growth. Masih and Masih (1996) studied the income and total energy consumption of eight Asian countries and regions; the results showed inconsistent Granger causality among the eight countries. Cheng and Lai (1997) studied Taiwan from 1955 to 1993; the results supported only a one-way causal relationship from GDP to energy consumption. Ghali and El-Sakka (2004) used the cointegration technique to explain the Canadian situation; they established a neo-classical production function model with four variables (output, capital, labor and energy consumption), and the results showed a long-term cointegration relationship among the four. Lee and Chang (2007) established linear and nonlinear models for Taiwan using 1955–2003 data; they concluded that a U-shaped relationship exists between energy consumption and economic growth in Taiwan, and that the nonlinear model better fits the relationship between the two. Gross (2012) applied the ARDL technique to study energy consumption and growth in the industrial, commercial and transport sectors; the findings showed a one-way causal relationship between growth and energy consumption in the commercial sector, and a two-way causal relationship in the transport sector. Cointegration methods and Granger tests are also widely used in this topic (Apergis and Payne 2009; Jumbe 2004; Yoo 2006; Yang 2000; Yu and Choi 1985; Yu and Jin 1992; Yu et al. 1988; Wolde-Rufael 2005).
According to the characteristics of the data generation process, time series can be divided into two types: stationary and non-stationary. For a non-stationary process, if the process becomes stationary after d differences while d-1 differences still leave it non-stationary, we call it an I(d) process. For non-stationary time series, ordinary least squares suffers from the serious problem known as "spurious regression", and the resulting estimates and parameters are not valid. The cointegration theory proposed by Engle and Granger can handle two non-stationary series for long-run equilibrium analysis: given two I(1) series, if a linear combination of the series is I(0), we call the two series cointegrated.
The vector error correction (VEC) model is a VAR model with constraints. It applies to non-stationary series known to have a cointegration relationship. When there are wide short-term dynamic fluctuations, the VEC expression constrains the long-term behavior of the endogenous variables so that they converge to the long-term cointegration relationship. Because a series of partial short-term adjustments corrects the deviation from the long-run equilibrium, the cointegration term is also known as the error correction term; it reflects the self-correcting dynamic mechanism by which short-term fluctuations that deviate from the long-run equilibrium are pulled back. This method can separate long-term and short-term Granger causes: because the error correction term contains the long-term cointegration relationship, the long-term Granger cause can be detected from the significance of the error correction coefficient.
Before using the error correction model, unit root testing is required. Commonly used unit root tests are the ADF test, the KPSS test and the PP test. We must then test for the existence of a long-term equilibrium relationship between the non-stationary random variables, namely a cointegration relationship between them. Granger pointed out that an error correction model representation must exist among cointegrated I(1) variables. The usual test methods are the E-G two-step method and the JJ trace statistic method.
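A minimal numpy sketch of the E-G two-step method follows (the function name is illustrative; critical values are not computed, only the Dickey-Fuller t-statistic on the residuals, which would then be compared against the Engle-Granger critical values):

```python
import numpy as np

def engle_granger(y, x):
    """E-G two-step sketch: (1) OLS of y on x gives the candidate cointegrating
    relation; (2) a no-constant Dickey-Fuller regression on the residuals,
    du_t = rho * u_{t-1} + e_t, whose t-statistic on rho is the test statistic."""
    X = np.column_stack([np.ones_like(x), x])
    (b0, b1), *_ = np.linalg.lstsq(X, y, rcond=None)
    u = y - b0 - b1 * x                      # step-1 residuals
    du, ulag = np.diff(u), u[:-1]
    rho = (ulag @ du) / (ulag @ ulag)        # DF regression slope
    resid = du - rho * ulag
    se = np.sqrt(resid @ resid / (len(du) - 1)) / np.sqrt(ulag @ ulag)
    return b0, b1, rho / se                  # intercept, slope, DF t-statistic
```

On the paper's data the first step yields ln gdp = 4.34 + 1.19 ln ec; a strongly negative t-statistic then indicates stationary residuals, i.e. cointegration.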
This paper selects annual GDP and energy consumption data from 1953 to 2008. Real GDP is at 1978 constant prices; energy consumption is measured in standard coal equivalent. All data are from the New China 60 Years Statistics Compilation. All variables are transformed into natural logarithms.
First, the ADF test is applied to detect unit roots in the two series. We choose the test equation including an intercept and a time trend; the lag order is determined by the Akaike Information Criterion (AIC). Table 32.1 shows the test results:
As shown above, at the 10 % significance level, the P values of both lngdp and lnec are greater than 10 %; therefore, neither series can reject the unit-root null hypothesis. The first-order differenced data do reject the null hypothesis, so the differenced series are I(0) (Table 32.2).
After the unit root test, we study the cointegration relationship between the two variables, using the E-G two-step method to determine its existence. The model is set as Eq. (32.1):
ln gdp = b0 + b1 ln ec + ut   (32.1)
Using the OLS method, we can get the following equation:

ln gdp = 4.34 + 1.19 ln ec
         (0.534)  (0.049)
Applying the ADF test to the residuals of Eq. (32.1), we conclude that, at the 10 % significance level, the residual is I(0). The residual being an I(0) process means that a long-term cointegration relationship between lngdp and lnec exists; the long-run equilibrium mechanism between economic growth and energy consumption can be portrayed by Eq. (32.1).
The long-term cointegration relationship between energy consumption and GDP describes the long-run equilibrium between the two, while the error correction model describes the short-term fluctuations, which can be described by the following equation:
In this section, we establish a bivariate vector error correction model to explore the
long term Granger causality between energy consumption and economic growth.
The model is as follows:
Δln gdp_t = a2 + d2 ECM_{t-1} + Σ(i=1..m) b_{2i} Δln gdp_{t-i} + Σ(i=1..m) c_{2i} Δln ec_{t-i} + e_{2t}

Δln ec_t = a1 + d1 ECM_{t-1} + Σ(i=1..m) b_{1i} Δln gdp_{t-i} + Σ(i=1..m) c_{1i} Δln ec_{t-i} + e_{1t}
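Each VEC equation can be estimated by OLS. The numpy sketch below uses illustrative names and assumes the lagged regressors are pre-aligned 1-D arrays; a t-test on the coefficient of ECM_{t-1} then detects the long-run Granger cause:

```python
import numpy as np

def ecm_ols(dy, ecm_lag, dy_lags, dx_lags):
    """OLS for one VEC equation, e.g.
    d ln gdp_t = a + d * ECM_{t-1} + sum_i b_i d ln gdp_{t-i}
                 + sum_i c_i d ln ec_{t-i} + e_t.
    dy_lags / dx_lags are lists of 1-D arrays aligned with dy."""
    X = np.column_stack([np.ones_like(dy), ecm_lag] + list(dy_lags) + list(dx_lags))
    beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
    return beta   # [a, d, b_1.., c_1..]
```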
and short-term Granger causality may be caused by the lack of control variables. In the long run, the relationship between energy consumption and GDP could be impacted by other important variables (capital, labor, imports and exports, etc.).
32.5 Conclusions
In this paper, we used the cointegration technique and a VEC model to study the relationship between energy consumption and economic growth in China from 1953 to 2008. We draw the following conclusions: 1. A long-term cointegration relationship exists between energy consumption and GDP growth. 2. In the long term, energy consumption and economic growth have no Granger causality; in the short term, there is a bi-directional Granger causality between energy consumption and GDP.
References
Kraft J, Kraft A (1978) On the relationship between energy and GNP. J Energy Dev 3:401–403
Akarca AT, Long TV (1980) On the relationship between energy and GNP: a reexamination.
J Energy Dev 5:326–331
Masih AMM, Masih R (1996) Energy consumption, real income and temporal causality: results
from a multi-country study based on cointegration and error-correction modeling techniques.
J Energy Econ 18(3):165–183
Cheng SB, Lai TW (1997) An investigation of cointegration and causality between energy
consumption and economic activity in Taiwan province of China. Energy Econ 19:435–444
Ghali KH, El-Sakka MIT (2004) Energy use and output growth in Canada: a multivariate
cointegration analysis. Energy Econ 26(2):225–238
Lee C-C, Chang C-P (2007) The impact of energy consumption on economic growth: evidence
from linear and nonlinear models in Taiwan. Energy 32(12):2282–2294
Gross C (2012) Explaining the (non-) causality between energy and economic growth in the
U.S.—a multivariate sectoral analysis. Energy Econ 34(2):489–499
Apergis N, Payne JE (2009) Energy consumption and economic growth in central America:
evidence from a panel cointegration and error correction model. Energy Econ 31(2):641–647
Jumbe CBL (2004) Cointegration and causality between electricity consumption and GDP:
empirical evidence from Malawi. Energy Econ 26(1):61–68
Yoo SH (2006) The causal relationship between electricity consumption and economic growth in
the ASEAN countries. Energy Policy 34(18):3573–3582
Yang H (2000) A note on the causal relationship between energy and GDP in Taiwan. Energy
Econ 22(3):309–317
Yu E, Choi J (1985) The causal relationship between energy and GNP, an international
comparison. J Energy 10(12):249–272
Yu E, Jin J (1992) Cointegration tests of energy consumption, income and employment. Res
Energy 14(3):259–266
Yu E, Choi P, Choi J (1988) The relationship between energy and employment: a re-examination.
Energy Syst Policy 11:287–295
Wolde-Rufael Y (2005) Energy demand and economic growth: The African experience. J Policy
Model 27(5):891–903
Chapter 33
Estimation of Lead Time in the
RFID-Enabled Real-Time Shopfloor
Production with a Data Mining Model
Abstract Lead time estimation (LTE) is difficult to carry out, especially in the RFID-enabled real-time manufacturing shopfloor environment, since a large number of factors may greatly affect its precision. This paper proposes a data mining approach with four steps, each equipped with suitable mathematical models, to analyze LTE in a real-life case and then to quantitatively examine its key impact factors, such as processing routine, batching strategy, scheduling rules and critical specification parameters. Experiments are carried out for this purpose, and the results imply that batching strategy, scheduling rules and two specification parameters largely influence LTE, while processing routine has less impact in this case.
Keywords Data mining · Lead time · Radio frequency identification (RFID) · Real-time · Shopfloor production
33.1 Introduction
customer orders on time (Chen and Prorok 1983). Shortening lead time is commonly used by manufacturing companies to improve their image and future sales potential. However, shortening alone is not adequate, because customers always require accurate and precise estimation of the lead time so as to ensure their production and delivery due dates.
There are several challenges in carrying out LTE. First, a large number of factors such as the process routine, batching strategy and scheduling rules affect the lead time, and studies of their influence on LTE are limited in both qualitative and quantitative aspects (Jun et al. 2006). Second, it is commonplace that most of the lead time is spent waiting in queues or in transit; the actual processing time is difficult to estimate due to the dynamic manufacturing environment and uncertain disturbances. Third, LTE relies heavily on the statuses of various manufacturing objects such as machines, operators and materials, whose real-time information is hard to capture; such contingent situations deteriorate its accuracy and effectiveness.
To tackle the above challenges, a large number of methods have been proposed to improve the precision, efficiency and effectiveness of LTE (Ozturk et al. 2006; Ruben and Mahmoodi 2000; Sudiarso and Putranto 2010; Ward 1998). However, these methods concentrate only on analytical, experimental and heuristic aspects; real-life cases are rarely reported and studied.
This research is motivated by a real-life automotive manufacturer that has applied RFID technology to support real-time production on its manufacturing shop floors for over 5 years. A great volume of data has been captured, representing typical manufacturing objects like machines, operators and materials. Owing to the application of RFID technology, shopfloor production has been significantly improved (Dai et al. 2012). The senior manager wished to explore the lead time in this massive body of RFID-enabled shopfloor data, so a research team formed by experts from collaborating universities investigated the company and decided to apply data mining for this purpose.
Several research questions are addressed in this paper. The first is which aspects can largely affect LTE and how we can examine their effects on the estimation. The second is how we can work out a practical and precise lead time in the RFID-enabled real-time production environment.
To answer the above questions, this paper proposes a data mining model for estimating the lead time from RFID-enabled shopfloor production data. It includes four steps: data cleansing, data clustering, pattern mining and data interpreting, each equipped with suitable mathematical models. The objectives of this paper are to quantitatively examine the key factors that influence LTE and to figure out the precise lead time for various product series.
Related work can be categorized into three domains: lead time estimation (LTE), data mining, and production planning and scheduling.
LTE is significant in planning shopfloor operations; thus, much research has been carried out. Three types of LTE procedures were studied to figure out the effectiveness of shopfloor information in a bottleneck-constrained production system (Ruben and Mahmoodi 2000); this work indicates that bottleneck shiftiness is reduced based on the excess capacity at non-bottleneck work centers. A data mining approach with regression trees was proposed for LTE in make-to-order manufacturing (Ozturk et al. 2006); this research uses a large set of attributes to work out the prediction and then compares it with other methods from the literature. Another approach, using a heuristic algorithm, was introduced to estimate the lead time of a complex product development process (Jun et al. 2006); the paper adopts computational experiments to show the effectiveness and efficiency of makespan reduction for branch-merge types. Some factors influencing LTE were examined in a make-to-order company by mathematical models (Sudiarso and Putranto 2010); the experimental results imply the usefulness of the fuzzy approach for estimating the lead time without simulations.
The basic concepts and techniques of data mining were introduced by Han et al. (2011). Planning and data mining approaches were integrated to create better planners against the background of unmanned and manned space flight (Frank 2007); that paper reviews current work in the area and integrates several technologies for open research issues. An RFID-enabled data mining model was demonstrated to collect and analyze data for the exhibition industry (Wang et al. 2009); this model integrates the exhibitor and related customers through the data mining model when executing planning. The nature and implications of data mining techniques in manufacturing, and implementations in two fields, product design and manufacturing, were discussed by Wang (2007), illustrating a methodology that enables engineers and managers to understand large amounts of data by extracting useful information. A wide and comprehensive review of data mining in the manufacturing field was carried out to reveal progressive applications and existing gaps (Choudhary et al. 2009).
effective and typical pieces are chosen. Further, statistics and sampling approaches have been adopted to select a suitable amount of data for specific analysis.
RFID shopfloor data have several characteristics. First, these data are generated automatically and instantly, so the volume is very large due to daily production operations. Second, while the accuracy of current RFID applications in shopfloor production is improving, there are still errors such as duplicated, missing and incomplete data. Third, RFID data involve a large amount of information reflecting the practical situations in an RFID-enabled ubiquitous manufacturing environment. Physically, each piece of RFID data keeps the statuses of workers, machines and materials within the entire production cycle. Logically, a set of RFID data implies production disciplines such as logistics trajectory, lead time fluctuation and their key impact factors.
To suit the above characteristics, this section proposes a four-step data mining approach, each step equipped with a suitable mathematical model. The four steps are data cleansing, data clustering, pattern mining and data interpreting.
The cleansed RFID data are clustered using a support vector machine (SVM) for several reasons. First, SVM adopts supervised learning methods to analyze data and recognize patterns (Joachims 1999); different categories are divided by a clear gap, which makes it easy to classify RFID data that may have tiny differences. Second, multiclass SVM enables classification over several elements like product series, machines, etc. Meanwhile, the bound on the classification error can be reduced by maximizing the margin, and over-fitting can be minimized by selecting the maximal-margin hyperplane in the feature space (Wang et al. 2004).
This paper uses a multiclass SVM model to cluster RFID data. The purposes are, first, to generate various data aggregations by different standards and, second, to classify the aggregations by different impact factors. The model is formulated as:
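A minimal, self-contained illustration of the SVM idea used in the clustering step is a Pegasos-style linear SVM trained by stochastic sub-gradient descent on the hinge loss. This is an assumption-laden sketch, not the paper's actual model; multiclass clustering would wrap it one-vs-rest over product series, machines, etc.:

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=100, seed=0):
    """Pegasos-style training of a linear SVM (labels y in {-1, +1}).

    Minimizes lam/2 * ||w||^2 + mean(max(0, 1 - y * (w . x)))
    by stochastic sub-gradient steps with learning rate 1 / (lam * t)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, t = np.zeros(d), 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)
            if y[i] * (X[i] @ w) < 1:          # margin violated: hinge term active
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
            else:                              # only the regularizer contributes
                w = (1 - eta * lam) * w
    return w
```

For k product series, one such separator would be trained per class one-vs-rest, assigning each RFID record to the class with the largest score w_k · x.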
Least squares (LS) is used for establishing the patterns from a set of RFID-enabled data. LS minimizes the sum of the squares of the errors in every single equation (Geladi and Kowalski 1986). Specifically, this paper adopts least-square polynomial fitting (LSPF) to work out the best fit in the least-squares sense, minimizing the sum of squared residuals. The models from the best fit are then selected for predicting the processing time of the different stages; the sum of the processing times of the stages is the lead time for a specific product series. In addition, the eigenfunctions of the key factors are obtained by LSPF with varied polynomial degree to examine their effect on the estimates.
The model from LSPF is
S(x) = Σ(i=0..n) ai φi(x)   (33.3)
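With φi(x) = x^i this is ordinary polynomial least squares, which numpy provides directly. A sketch (function names are illustrative):

```python
import numpy as np

def lspf(x, y, degree):
    """Least-square polynomial fit: coefficients a_0..a_n of
    S(x) = sum_i a_i * x**i, minimizing the sum of squared residuals."""
    return np.polynomial.polynomial.polyfit(x, y, degree)

def predict(coeffs, x):
    """Evaluate the fitted S(x), e.g. to predict a stage processing time."""
    return np.polynomial.polynomial.polyval(x, coeffs)
```

Fitting with several degrees and comparing residuals mirrors the paper's use of diversified polynomial degree to probe each key factor's effect.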
The above extracted models must be further interpreted for several reasons. First, the models must be evaluated within a confidence interval to give assurance that the data they provide are correct; polynomial interpolation and residual analysis are used for this purpose. Second, the raw RFID data may be normalized by different and heterogeneous methods in different steps; data interpreting ensures that the values from the models can be understood in diversified applications. Finally, the predicted values may be used by different applications like ERP and APS in a standardized format.
The data interpretation model is formulated as:
Y = U(S(x))   (33.4)
where U is an interpretation function that contains a set of functions in a vector to
interpret the values in different applications. For example, to identify the lead time
of a specific product series, U can be specifically expressed as:
There are several product series, divided into categories such as diesel, passenger and racing cars. The experiment takes the S-series, used in passenger vehicles, as an example. An S-series product is determined by three key parameters: total length (TL), head diameter (HD) and stem diameter (SD). Production follows batch mode, with 180 pieces per batch. The lead time is defined as the time a batch takes to pass through all the manufacturing stages before shipping to the customer.
Figure 33.1 demonstrates the S-series products with their different processes and total process times. LTE has been based on times obtained from past experience or time study; it is the sum of each process, as shown at the bottom of Fig. 33.1 (S-00: 330 m, S-01: 340 m and S-02: 420 m, where m means minutes). However, these times vary greatly and are affected by some key factors in real cases.
There are three categories in the S-series: S-00, S-01 and S-02, with 10, 9 and 12 processes (stages) respectively. Their specifications are 100*40*8, 100*50*10 and 100*70*12. Figure 33.2 presents a statistical analysis of the three categories. The statistics come from the RFID-enabled real-time shopfloor data of 10 months, the peak season of this company; each point represents the mean value of a month. The mean values of the three categories are 328.7, 345.6 and 423.6, with standard deviations of 17.6, 20.0 and 16.1. From experience, this company allows 60 % of the standard deviation. Therefore, the interval of the lead time can be worked out, with the maximum values as criteria in the peak season and the minimum values in the off-season. The intervals for S-00, S-01 and S-02 in this case are [318.14, 339.26], [333.6, 357.6] and [413.94, 433.26].
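The interval computation described above (monthly mean plus or minus 60 % of the standard deviation) reproduces the paper's numbers exactly; the function and dictionary names are illustrative:

```python
def lead_time_interval(mean, std, allowance=0.6):
    """Company rule from the case: allow 60 % of the standard deviation around
    the monthly mean; the upper bound is the criterion in peak season, the
    lower bound in off-season."""
    return (mean - allowance * std, mean + allowance * std)

# (mean, standard deviation) of the three S-series categories from Fig. 33.2
series = {"S-00": (328.7, 17.6), "S-01": (345.6, 20.0), "S-02": (423.6, 16.1)}
intervals = {k: lead_time_interval(m, s) for k, (m, s) in series.items()}
# S-00: [318.14, 339.26], S-01: [333.6, 357.6], S-02: [413.94, 433.26]
```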
328 R. Y. Zhong et al.
[Fig. 33.1 — S-series (unit: mm), twelve process stages: Magnetic Detecting,
Coarse Grinding, Seal Inspecting, Rough Lathing, Face Welding, Straightening,
Quenching, Tempering, Nitridizing, Washing, Packing, Cutting. Per-stage times
(min): S-00: 40, 50, 70, 35, 30, 50, 25, 10, 10, 10 (total 330); S-01: 50, 60, 70, 35,
45, 50, 10, 10, 10 (total 340); S-02: 60, 70, 70, 50, 30, 60, 15, 20, 15, 10, 10, 10
(total 420).]
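As an arithmetic check of the LTE-by-summation rule, the per-stage times listed in Fig. 33.1 reproduce the quoted totals (times copied from the figure; the stage-to-time pairing is not recoverable from this excerpt):

```python
# Per-stage process times (minutes) as listed in Fig. 33.1 for each category.
stage_minutes = {
    "S-00": [40, 50, 70, 35, 30, 50, 25, 10, 10, 10],          # 10 stages
    "S-01": [50, 60, 70, 35, 45, 50, 10, 10, 10],              # 9 stages
    "S-02": [60, 70, 70, 50, 30, 60, 15, 20, 15, 10, 10, 10],  # 12 stages
}
totals = {name: sum(times) for name, times in stage_minutes.items()}
print(totals)  # {'S-00': 330, 'S-01': 340, 'S-02': 420}
```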
33.6 Conclusion
This paper introduces a data mining model to analyze lead time estimation
(LTE) in an RFID-enabled real-time production environment. A typical product with
three categories has been examined for estimating the lead time, and four
key factors are then scrutinized for their impacts on LTE. It is observed that, in this
case, the processing routine hardly affects LTE, while the batching strategy, the
scheduling rules, and the head and stem diameters influence it strongly. The paper
works out intervals for predicting the lead time and the optimal batch quantity,
together with a quantitative analysis of the above key factors, to guide LTE.
Future research will proceed along two lines. First, case-based reasoning (CBR)
will be applied to solve new problems by using the meaningful RFID shop-floor
data. Second, the data mining model can be extended to explore the practical
processing and setup times for real-time production planning and scheduling in the
RFID-enabled real-time environment.
References
Alexander MS (1980) Manufacturing lead time estimation and the implementation of a material
requirements planning system: a computer simulation study. The University of Western
Ontario, Canada
Chen JS, Prorok PC (1983) Lead time estimation in a controlled screening program. Am J
Epidemiol 118(5):740
Choudhary A, Harding J, Tiwari M (2009) Data mining in manufacturing: a review based on the
kind of knowledge. J Intell Manuf 20(5):501–521
Dai QY, Zhong RY et al (2012) Radio frequency identification-enabled real-time manufacturing
execution system: a case study in an automotive part manufacturer. Int J Comput Integr
Manuf 25(1):51–65
Erdirik-Dogan M, Grossmann IE (2008) Simultaneous planning and scheduling of single-stage
multi-product continuous plants with parallel lines. Comput Chem Eng 32(11):2664–2683
Frank J (2007) Using data mining to enhance automated planning and scheduling. In: Proceedings
of IEEE symposium on computational intelligence and data mining, IEEE, March 1–April 5,
pp 251–260
Geladi P, Kowalski BR (1986) Partial least-squares regression: a tutorial. Anal Chim Acta
185:1–17
Graves SC (1999) Manufacturing planning and control. Handbook of applied optimization,
Massachusetts Institute of Technology, MA, pp 728–746
Han J, Kamber M, Pei J (2011) Data mining: concepts and techniques. Morgan Kaufmann, San
Francisco
Huang GQ, Fang J, Lv HL et al (2009) RFID-enabled real-time mass-customized production
planning and scheduling. In: Proceedings of 19th international conference on flexible
automation and intelligent manufacturing, 6–8 July, Teesside, UK
Jeffery SR, Garofalakis M, Franklin MJ (2006) Adaptive cleaning for RFID data streams. In:
Proceedings of the 32nd international conference on very large databases. VLDB Endowment,
pp 163–174
Joachims T (1999) Making large-scale SVM learning practical. MIT Press, Cambridge
Jun HB, Park JY, Suh HW (2006) Lead time estimation method for complex product
development process. Concurr Eng 14(4):313–328
Maravelias CT, Sung C (2009) Integration of production planning and scheduling: overview,
challenges and opportunities. Comput Chem Eng 33(12):1919–1930
Ozturk A, Kayaligil S, Ozdemirel NE (2006) Manufacturing lead time estimation using data
mining. Eur J Oper Res 173(2):683–700
Rao J, Doraiswamy S, Thakkar H, Colby LS (2006) A deferred cleansing method for RFID data
analytics. In: Proceedings of the 32nd international conference on very large databases,
pp 175–186
Ruben RA, Mahmoodi F (2000) Lead time prediction in unbalanced production systems. Int J
Prod Res 38(7):1711–1729
Sudiarso A, Putranto RA (2010) Lead time estimation of a production system using fuzzy logic
approach for various batch sizes. Proc World Congr Eng 3:1–3
Wang K (2007) Applying data mining to manufacturing: the nature and implications. J Intell
Manuf 18(4):487–495
Wang Y, Wong J, Miner A (2004) Anomaly intrusion detection using one class SVM. In:
Proceedings of the 5th annual IEEE SMC information assurance workshop. IEEE, pp 358–364
Wang W, Chang CP, Huang CT, Wang BS (2009) A RFID-enabled with data mining model for
exhibition industry. In: Proceedings of the 6th international conference on service systems and
service management. IEEE, Xiamen, China, 8–10 June, pp 664–668
Ward MN (1998) Diagnosis and short-lead time prediction of summer rainfall in tropical North
Africa at interannual and multidecadal timescales. J Clim 11(12):3167–3191
Chapter 34
Evaluation of Green Residence Using
Integrated Structural Equation Model
with Fuzzy Theory
34.1 Introduction
M. H. Hu C. K. Liao
Department of Industrial Engineering and Management, Yuan Ze University, Taoyuan,
Taiwan, China
M. Y. Ku (&) P. Y. Ding
Department of Industrial Engineering, National Chin-Yi University of Technology,
Taichung, Taiwan, China
e-mail: [email protected]
Housing is one of the basic needs among the six necessities of people's livelihood:
eating, clothing, housing, transportation, education and entertainment. Traditional
housing simply provides shelter. With social and economic advances, the demand
for quality residential houses has become increasingly exacting. Beyond providing
safe and comfortable shelter, green residential houses are becoming an emphasis
of concern. The definition of green residential houses is discussed in this section.
a. Definition of Green Residence
Green buildings that have recently emerged in Taiwan cover a wide range of
building types, and the green residential house is one part of green building.
Although many scholars have proposed definitions of the ''green residential
house'', the term has not been clearly defined. Based on the literature, the ''green
residential house'' is defined in this research as ''a residential house constructed on
the concept of low carbon, with natural ventilation and lighting to reduce energy
requirements and wastewater discharge''. In some sense, a green residential house
is a house that can breathe by itself.
c. Characteristics of Green Residential Houses
Huang (2006) pointed out that SEM is the most important emerging statistical
method for quantifying information in modern sociology; it has been extensively
applied in various fields such as management science, psychology and economics.
b. Confirmatory Factor Analysis (CFA)
CFA is used for testing validity, i.e. examining the significance and structure of
latent variables.
The above introduction reveals that SEM is a statistical analysis technology for
dealing with complicated multivariate information and data. In this research, the
CFA available in SEM is used to develop factors for evaluating green residential
houses so that the results are reliable and valid.
34 Evaluation of Green Residence 337
The fuzzy theory proposed by Prof. Zadeh in 1965 recognizes that human thinking,
inference and cognition are somewhat fuzzy, and it is used for solving the
uncertain and fuzzy problems encountered in the real world. Chien (2009)
considered that fuzzy theory can support decisions not only in policy making but
also in daily-life matters such as the fuzzy control of variable-frequency
air-conditioning units. In recent years, fuzzy theory has been integrated with many
other methods, including grey relational analysis, TOPSIS and AHP, among many
others.
Values of fuzzy semantic variables are also called semantic values; they can be
expressed by defining the base variable of a fuzzy membership function
(Zimmermann 1987), or the membership function can be considered the quantified
attribute of a semantic value. A triangular fuzzy number A on the real line R
assigns to every x ∈ R a number u_A(x) ∈ [0, 1] such that:

u_A(x) = (x − c)/(a − c), if c ≤ x ≤ a
u_A(x) = (x − b)/(a − b), if a ≤ x ≤ b
u_A(x) = 0, otherwise   (34.1)
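A minimal sketch of this triangular membership function, assuming the parameterization of Eq. (34.1) with lower bound c, peak a and upper bound b:

```python
# Triangular membership function u_A(x), following the convention of
# Eq. (34.1): lower bound c, peak a, upper bound b (c <= a <= b).
def triangular(x, c, a, b):
    if c <= x <= a:
        return (x - c) / (a - c) if a != c else 1.0  # rising edge
    if a < x <= b:
        return (x - b) / (a - b)                     # falling edge
    return 0.0                                       # outside the support

print(triangular(2.0, 1.0, 2.0, 4.0))  # at the peak -> 1.0
print(triangular(3.0, 1.0, 2.0, 4.0))  # halfway down -> 0.5
```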
34.3 Methodology
The objective of this research is to apply the combined structural equation model
(SEM) and fuzzy theory to investigate the possibility of developing green
residential houses in Taiwan. Additionally, questionnaires are distributed to
residents living in the Taichung (Taiwan) region to understand their needs and
degree of
338 M. H. Hu et al.
Based on the viewpoint of consumers, this research investigates the factors of
green residential houses that satisfy the needs of the general public. A review of
the literature leads to the factors that influence the construction of green
residential houses, as included in the following research structure (Fig. 34.2).
Based on this review, the factors used in the model for evaluating green
residential houses are grouped into three major dimensions, ''Indoor
Environment'', ''Energy Saving Facilities'' and ''Community Environment'', as
follows:
1. Indoor environment: the occupants of a green residential house enjoy a
comfortable and leisurely life provided by appropriate residential lighting,
ventilation and insulation.
2. Energy saving facilities: the objective of the green residential house is achieved
by providing energy-saving features such as electricity saving, water saving and
noise insulation.
3. Community environment: the residents of a community can enjoy green
residential houses only if the community environment and facilities are well
planned, including wastewater treatment facilities, green zones, storm water
drainage, and garbage recovery facilities.
The fuzzy theory used for solving uncertain and fuzzy problems encountered in
the real world is implemented in this research using the following procedures.
Procedure 1: A fuzzy preference order matrix is established to evaluate the
semantic variables expressed by K professionals and experts on each criterion.
After the m projects (Ai, i = 1,…, m) are evaluated against the n criteria
(Cj, j = 1,…, n) by the K professionals and experts, the geometric mean is used to
integrate the evaluation results into the fuzzy preference order matrix:
D̃ = [x̃_ij]_{m×n} =
⎡ x̃_11 x̃_12 … x̃_1n ⎤
⎢ x̃_21 x̃_22 … x̃_2n ⎥
⎢  ⋮     ⋮    ⋱   ⋮  ⎥
⎣ x̃_m1 x̃_m2 … x̃_mn ⎦ , ∀ i, j   (34.2)

where x̃_ij = (a_ij, b_ij, c_ij) is a triangular fuzzy number representing the fuzzy
value of the ith prospective project on the jth criterion. The following equations
are used to normalize the fuzzy matrix into R̃ = [r̃_ij]_{m×n}:

r̃_ij = (a_ij/c_j⁺, b_ij/c_j⁺, c_ij/c_j⁺), with c_j⁺ = max_i c_ij, if j ∈ B (benefit criteria)
r̃_ij = (a_j⁻/c_ij, a_j⁻/b_ij, a_j⁻/a_ij), with a_j⁻ = min_i a_ij, if j ∈ C (cost criteria)   (34.3)
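A hedged sketch of the normalization in Eq. (34.3); the column data below are made up for illustration, and the benefit/cost split follows the B and C sets above:

```python
# Normalization of triangular fuzzy numbers (a, b, c) per Eq. (34.3):
# benefit criteria divide by the column maximum c_j+, cost criteria use
# the column minimum a_j-. Sample data are illustrative only.
def normalize(column, benefit=True):
    """column: list of triangular fuzzy numbers (a, b, c) for one criterion."""
    if benefit:
        c_max = max(c for _, _, c in column)
        return [(a / c_max, b / c_max, c / c_max) for a, b, c in column]
    a_min = min(a for a, _, _ in column)
    return [(a_min / c, a_min / b, a_min / a) for a, b, c in column]

col = [(2.0, 3.0, 4.0), (3.0, 4.0, 5.0)]
print(normalize(col, benefit=True))   # each triple divided by c_max = 5
print(normalize(col, benefit=False))  # a_min = 2 over the reversed triple
```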
The five classes of the fuzzy semantic conversion scale proposed by Chen and
Hwang (1992) are used to evaluate the importance attached to green residential
houses, converting the semantic expressions into fuzzy numbers as shown in
Table 34.1.
Procedure 2: Weighted coefficients. The weight of each evaluation criterion can
be calculated; the weight vector of the evaluation attributes is w = (w_1, w_2, …, w_m).
In this research, the simpler gravity method developed by Yager (1980) is used to
perform the fuzzy sorting. Equation derivation shows that the sorting of centroid
fuzzy values for the triangular fuzzy function can be expressed by Eq. (34.4).
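Eq. (34.4) is not reproduced in this excerpt; a common form of Yager's gravity (centroid) method takes the mean of the three parameters of a triangular fuzzy number, and the following sketch assumes that form:

```python
# Hedged sketch: centroid (gravity) defuzzification of a triangular fuzzy
# number, commonly (a + b + c) / 3. The paper's exact Eq. (34.4) is not
# reproduced in this excerpt, so this form is an assumption.
def centroid(tfn):
    a, b, c = tfn
    return (a + b + c) / 3.0

weights = [(0.3, 0.5, 0.7), (0.5, 0.7, 0.9)]   # illustrative fuzzy weights
ranked = sorted(weights, key=centroid, reverse=True)  # fuzzy sorting
print([centroid(w) for w in ranked])
```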
34.4 Analyses
The SPSS 18.0 statistical software is used in this research to carry out the
statistical analyses, including descriptive statistics, reliability analyses and the
various examinations. The overall model analyses are performed using AMOS
17.0, and finally the fuzzy theory is applied to find out the practicality of green
residential houses and the order of importance of the factors considered by the
general public when purchasing them.
The results obtained by performing the two-stage five-factor examinations
confirm the validity of the three assumptions proposed in this research, as
explained in the following paragraphs:
1. Constructing green residential houses is an external latent variable reflected by
three internal latent variables: indoor environment, energy saving facilities and
community environment. All coefficients obtained from the CFA examinations
are greater than 0.7, confirming that green residential houses can be evaluated
using these three dimensions.
2. Indoor environment is an internal latent variable reflected by three observed
variables.
3. Energy saving is an internal latent variable reflected by three observed
variables.
4. Community environment is an internal latent variable reflected by four
observed variables.
The fuzzy theory and the centroid method are used to determine the relevant
weighted values for investigating the factors important to constructing green
residential houses in the Greater Taichung (Taiwan) region. The procedures for
applying the fuzzy theory are as follows:
Procedure 1: Standards for evaluation are established. In this research, the
evaluation factors in ''Dimensions for constructing green residential houses'' are
used to evaluate the factors affecting green residential houses, as shown in
Table 34.2.
Procedure 2: After the original matrix is developed, Eq. (34.5) can be used to
evaluate the weights of the attributes by calculating the fuzzy values of the various
evaluation dimensions, as shown in Table 34.3. The geometric average is then
calculated to obtain the weight value and rank order of the various factors
(Table 34.4).
Procedure 3: Eq. (34.7) is used to defuzzify, calculate and sort. The order of
factors important to the construction of green residential houses is shown in
Table 34.5.
The data in the above table reveal that the general public's need for a green
residential house with energy saving features is relatively high. As far as the
indoor environment is concerned, residential lighting, which is related to energy
savings, is also emphasized, indicating that the general public has already
acknowledged the importance of energy resources. Hence, further planning of
green residential houses needs to emphasize the energy saving function in order to
satisfy consumer needs.
34.5 Conclusions
The results of the investigation and analyses obtained in this research show that
more than 80 % of the general public in Taiwan accepts green residential houses,
so developing them is feasible. The general public also indicates that a green
residential house may cost up to 10 % more than a regular residential house, so a
10 % cost difference is acceptable. Additionally, developing green residential
houses needs the cooperative efforts of government, developers and consumers.
The government has to promote programs for disseminating information on green
residential houses and encourage developers to construct more of them through a
reward system, so that the general public is more willing to purchase them.
Effective promotion will enable developers to select appropriate materials and
methods, based on green-residence specifications, to construct residential houses
that are healthy and comfortable with low pollution and reduced energy
consumption. As the concept and advantages of green residential houses are
disseminated to the general public, consumers will consider them the primary
choice when purchasing residential houses and will be willing to pay a little more
for them. The idea of green residential houses can thus be implemented in Taiwan.
References
Bao-ping Chen
35.1 Introduction
After several years of development, grey system theory has centered on the grey
model as the core of its model system, built on the grey equation, grey matrix,
grey algebra system, etc., and relying on grey relation space analysis (Liu
and Dang 2009). Grey clustering is a branch of grey system theory that can be
employed in multivariate statistical analysis. The method places the samples in a
multi-dimensional space and separates samples with
B. Chen (&)
Department of Computer Information and Management,
NeiMongol University of Finance and Economics, Hohhot, China
e-mail: [email protected]
correlated measurements from each other. With its edge in dealing with small-sample
problems, grey clustering has become one of the most applicable subjects in recent years.
The population problem is one of the key elements of China's social and
economic development. Demographic statistics and real-world cases remind us
that if population growth is out of control, the middle-aged generation makes
up the broad base of a pyramid-like age structure, while under the ''only one
child'' policy the senior generation takes the place of the middle-aged, producing
a mushroom-like age structure. After 1978, the aging population issue became
unavoidable in China. The age structure dilemma is therefore not only a social
problem but also a serious economic problem, and the situation allows no delay in
dealing with it. There is much useful research on how age structures influence the
growth of the economy and the CPI, and on the different age structures among
regions (Dang et al. 2009); examples include Zhang et al.'s adjustment of Leslie's
model of population age structure (Zhang et al. 2011) and Gen's analysis of
population age structure based on composition statistics (Gen 2011). However,
few of these research papers use a grey model to demonstrate the situation.
The author applies the grey model with the center-point triangular whitenization
weight function to assess the composition of population age together with its
dependency ratios in 2010 (all original data come from the China Statistical
Yearbook 2010) and to categorize the age structure into three regions. After the
sorting, the author also analyzes which regions have the more favorable age
structure among the three. The research shows that the grey model can easily
assess the age structure of the population and illustrate the aging degree of each
area more objectively and accurately than other models.
sample objects of the whitenization weight function. According to the Chinese age
data, the author chose the center-point triangular whitenization weight function of
the grey evaluation model to obtain a reasonable classification.
Assume there are n objects, m assessment indexes and s different grey clusters;
for object i, the sample observation value of index j is X_ij, i = 1, 2, …, n,
j = 1, 2, …, m. Each object i is assessed according to its X_ij values. When sorting
the different grey clusters, the maximum grey level determines the ''grey center
point''. The calculation process is as follows:
Step 1: In accordance with the evaluation requirement of s grey classes, define the
grey clusters 1, 2, …, s with center points λ1, λ2, …, λs, each of which is the point
most likely to belong to the corresponding grey class (not necessarily the interval
midpoint). The domain of each index is correspondingly divided into s parts, given
by λ1, λ2, …, λs.
Step 2: Connect the point (λk, 1) with (λk−1, 0) and (λk+1, 0) to obtain the
triangular whitenization weight function of grey class k for index j, f_j^k(X_ij)
(j = 1, 2, …, m; k = 1, 2, …, s). For a measured value x of index j, the grey level
f_j^k(x) of each grey class (k = 1, 2, …, s) can be calculated through the following
function:
f_j^k(x) = 0, if x ∉ [λ_{k−1}, λ_{k+1}]
f_j^k(x) = (x − λ_{k−1})/(λ_k − λ_{k−1}), if x ∈ (λ_{k−1}, λ_k]
f_j^k(x) = (λ_{k+1} − x)/(λ_{k+1} − λ_k), if x ∈ (λ_k, λ_{k+1})   (35.1)
Step 3: Calculate the comprehensive clustering coefficient σ_i^k (i = 1, 2, …, n;
k = 1, 2, …, s) of object i with respect to grey cluster k:

σ_i^k = Σ_{j=1}^{m} f_j^k(X_ij) · η_j   (35.2)

where η_j is the weight of index j.
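Eqs. (35.1)–(35.2) can be sketched as follows; the center points and index weights below are illustrative, not the paper's values:

```python
# Center-point triangular whitenization weight function (Eq. 35.1) and
# comprehensive clustering coefficient (Eq. 35.2). Center points lam_*
# and index weights eta are illustrative only.
def whitenization(x, lam_prev, lam_k, lam_next):
    if not (lam_prev <= x <= lam_next):
        return 0.0
    if x <= lam_k:
        return (x - lam_prev) / (lam_k - lam_prev)   # rising edge
    return (lam_next - x) / (lam_next - lam_k)       # falling edge

def clustering_coefficient(obs, centers, weights, k):
    """sigma_i^k = sum_j f_j^k(x_ij) * eta_j for one object's observations."""
    return sum(
        whitenization(x, *centers[j][k]) * weights[j]
        for j, x in enumerate(obs)
    )

# One object, two indexes; centers[j][k] = (lam_{k-1}, lam_k, lam_{k+1}).
centers = [
    {1: (5, 15, 20), 2: (15, 20, 26)},   # index 0
    {1: (50, 73, 78), 2: (73, 78, 83)},  # index 1
]
eta = [0.5, 0.5]
obs = [18, 76]
sigma = {k: clustering_coefficient(obs, centers, eta, k) for k in (1, 2)}
best_cluster = max(sigma, key=sigma.get)  # Step 5: largest coefficient wins
print(sigma, best_cluster)
```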
Compared with other grey clustering methods, the center-point triangular
whitenization weight function takes the most likely point λk as the center point of
each level, so the function of each grey cluster can easily be calculated through
λ0, λ1, λ2, …, λs, λs+1. People can determine the center point of a grey cluster
more accurately and familiarly than its interval bounds; therefore, research
conclusions based on this method are likely to be more reliable and accessible.
When the UN conducts a population census, it usually takes 65 as the starting age
of the elderly. Generally, there are three categories by age: below 14, 14–65, and
65 and above (65 inclusive) (Zhang and Lei 2011). We cannot ignore the fact that
China is a large country with very different levels of economic development
among its areas, especially between the east and the west (Bao 2012; Kang 2009;
Ding 2012). In each area the age structure is shifting from middle-aged to senior,
and this gap expands gradually. In the following, the author applies the grey model
with the center-point triangular whitenization weight function to assess the
composition of population age together with its dependency ratios in 2010 and to
categorize the age structure into three regions; all the data come from the China
Statistical Yearbook 2010.
Considering the different levels of economic development among the areas of
China, and to make the research result more reliable, the author divides the 31
provinces, municipalities and autonomous regions into three parts. East: Beijing,
Tianjin, Hebei, Liaoning, Shanghai, Jiangsu, Zhejiang, Fujian, Shandong and
Hainan; Middle: Shanxi, NMG, Jilin, Heilongjiang, Anhui, Jiangxi, Henan, Hubei
and Hunan; West: Sichuan, Guizhou, Yunnan, Guangxi, Shaanxi, Gansu, Qinghai,
Ningxia and Xinjiang. The age data collected for every region cover the groups
below 14, 15–64, and 65 and above (65 inclusive), together with the population of
each area. In addition, the author also adds the young dependency ratio and the old
dependency ratio of each part of China. The weight of the age group below 14 is
obtained by dividing its population by the total sample population; the weights of
the 15–64 and 65-and-above groups are obtained
35 Evaluation of Population Age Structure Model 349
in the same way. Table 35.1 shows the weights of the three age groups and the
young and senior dependency ratios of the 31 regions.
The evaluation process is as follows:
Step 1: Form the sample matrix X: the 31 areas are the clustering objects
(i = 1, 2, …, 31), and the age-group weights of each region, together with the
corresponding dependency ratios, are the clustering indexes j (j = 1, 2, 3, 4, 5).
Step 2: Divide each region into three categories, excellent, normal and poor, so
the grey number s is 3, and calculate each grey center point from the mean,
maximum and minimum values. For example, for the age group below 14, the
center point of the excellent level is 20, of the medium level 15, and of the poor
level 7.
f_1^1[20, 26, −, 100]; f_1^2[15, 20, −, 26]; f_1^3[5, 15, −, −]
f_2^1[78, 83, −, 100]; f_2^2[73, 78, −, 83]; f_2^3[50, 73, −, 78]
f_3^1[−, −, 5, 7]; f_3^2[−, −, 7, 9]; f_3^3[−, −, 9, 14]
f_4^1[25, 39, −, 100]; f_4^2[21, 25, −, 39]; f_4^3[6, 21, −, 25]
f_5^1[−, −, 7, 10]; f_5^2[−, −, 10, 13]; f_5^3[−, −, 13, 17]
Step 4: Calculate the comprehensive clustering coefficients σ_i^k of the objects
i (i = 1, 2, …, 31) with respect to the grey clusters k (k = 1, 2, 3). For example,
the coefficients of Beijing are {0.0937, 0.3728, 0.0000, 0.1137, 0.1313}.
Step 5: Determine the grey cluster of each region using σ_i = max_k {σ_i^k}.
For example, for Beijing, max{0.0937, 0.3728, 0.0000, 0.1137, 0.1313} = 0.3728.
Step 6: Based on the σ values of each region, and after further sorting within
each grey type, the conclusions are as follows:
Since the 1970s, China has implemented the one-child policy, but its enforcement
differs regionally. Owing to the less developed economy and a lack of foresight in
the central and western parts of the country, the policy was not strictly carried out,
especially in the western parts of China, resulting in a high birth rate there under a
relatively more tolerant policy environment. This situation results in varying
degrees of aging in different parts of China. During the research, the author found
the assessment result to match the real-world case: the age structures in Sichuan,
Guizhou, Yunnan, Tibet, Shaanxi, Gansu, Qinghai, Ningxia and Xinjiang have
changed slowly, so these provinces have not yet entered the aging stage and have a
relatively more reasonable demographic structure than any other area. In
metropolises such as Beijing, Tianjin and Shanghai, a large amount of talent
introduction under the age of 65 keeps the demographic structure at a rational
level, despite some difference from Sichuan province. But Liaoning, Heilongjiang,
Shandong and Guangdong Provinces, as well as some parts of the western area,
have a larger elderly population base and faster aging, and therefore the most
prominent irrationality in their demography (Table 35.2).
35.4 Discussion
Acknowledgments Project supported by the Science Research Projects of the Colleges and
Universities in Inner Mongolia (NO.NJZY11106) and the Natural Science Foundation Project in
Inner Mongolia (NO.2010 MS1007).
References
Bao Y-x (2012) An analysis of the effect of population ageing on regional economic development
basing on the model of neoclassical economic growth. Popul Econ (1):1–7
Dang Y, Liu S, Wang Z (2009) Model on gray prediction and decision model. Science Press,
Beijing
Ding D (2012) Continuous dependence on the parameters of self-similar measures. J Hubei Univ
(Nat Sci) 34(1):26–30
Gen X-l (2011) The analyses about population age structure based on composition statistics.
J Appl Stat Manag (1):118–126
352 B. Chen
Kang J-y (2009) Impact of change of population age structure on consumption in China. Popul
Econ (2):60–64
Liu S, Dang YG (2009) Grey system on dealing with the theory and practical applications. Social
Sciences Edition, Beijing
Wang J, Huang K, Wang H, Li Y (2011) A hierarchy cluster method for functional data. J Beijing
Univ Aeronaut Astronaut (Soc Sci edn) (1):86–102
Yuan C, Liu S (2007) A grey degree based on grey whitening weight function. J Jiangnan Univ
(Natl Sci Edn) 6(4):494–496
Zhang G-h, Wu Y, Zhang M (2011) Enterprise emergency management capacity comprehensive
evaluation based on gray cluster analysis. J Quant Econ (3):94–99
Zhang L, Lei L-h (2011) An analysis of the relation of the regional population age structure and
the household consumption in China. Popul Econ (1):16–21
Zhang L, Shi S-l, Dou C-Y (2011) Adjusting Leslie’s model of population structure. Coll Math
(8):99–102
Chapter 36
Evaluation of Recycle Level of Qaidam
Salt Lake Circular Economy
with Intuitionistic Fuzzy Entropy
Measures
Fei Feng, Yu Liu, Jian Zhang, Nan Wang and Xiao-hui Xing
Abstract The purpose of this paper is to build the evaluation index system of the
Qaidam salt lake circular economy and to sort the programs according to their
weights using the intuitionistic fuzzy set concept. The intuitionistic fuzzy set
concept combines objective and subjective weights comprehensively to determine
the weight, yielding a more accurate weight. This paper serves as a reference for
assessing the circular level of the economy in the Qaidam region.
36.1 Introduction
The circular economy was first reported in China in 1997 (Min 1997), which
shows China's late start in this field.
Zhong et al. (2006) summarized present research on the recycling economy:
China's research is mainly concentrated in the following areas: first, the
connotation and principles of recycling economics; second, the role of the
recycling economy and its impact on the socio-economy; third, discussion of
patterns of the recycling economy; fourth, the supporting conditions needed to
achieve a recycling economy. Evaluation of circulating levels, however, has not
been studied enough, and some of the existing work is based on the steel industry.
For example, Wenjie and Ma (2007) built a set of index systems for the ecological
steel industry, including positive and negative benefits, and used them to evaluate
the Chinese level of the ecological steel industry in 2003; Cui et al. (2008), based
on the present index system of the circular economy and the evaluation index
system of clean production in the steel and iron industry, built an evaluation index
system for the steel and iron industry based on the circular economy. Ma et al.
(2007) built an index and evaluation system for the steel and iron industry from a
green manufacturing perspective, and used DEA to evaluate the green degree of
eighteen Chinese steel and iron companies. To date, however, few people have
evaluated the level of the salt lake circular economy, and the work mainly remains
at the theoretical research phase. This paper analyzes the present salt lake resource
situation, refers to the index systems of the steel and iron circular economy and
the Chinese index and evaluation system, combines the features of the salt lake to
build an index and evaluation system, and then uses intuitionistic fuzzy (IF) sets
and a weight-empowerment algorithm for the evaluation.
Zhang (2006) pointed out that there are data gaps in the index system of the
recycling economy, so some indexes are fuzzy in statistical investigation. Hong
and Choi (2000) and Li (2005) made useful explorations of decision problems
based on intuitionistic fuzzy sets. An intuitionistic fuzzy set is characterized by
taking into account both the membership and non-membership information of the
elements of a non-empty set, which makes its expressive ability more flexible and
more suitable for dealing with practical real-world problems (Wang and Yang
2010). Intuitionistic fuzzy sets constitute a generalization of the notion of a fuzzy
set and were introduced by Atanassov in 1983 (Atanassov 1986).
Definition Let X be a non-empty classical set. An intuitionistic fuzzy set A in a
universe X is an object of the form A = {⟨x, μA(x), νA(x)⟩ | x ∈ X}, where, for
all x ∈ X, μA(x) ∈ [0, 1] and νA(x) ∈ [0, 1] are called the membership degree and the
36 Evaluation of Recycle Level of Qaidam Salt Lake Circular Economy 355
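The definition above is cut off in this excerpt; a minimal sketch of an intuitionistic fuzzy element follows, assuming the standard Atanassov constraint μ + ν ≤ 1:

```python
# Minimal sketch of an intuitionistic fuzzy element <x, mu_A(x), nu_A(x)>:
# membership mu and non-membership nu in [0, 1] with the standard
# Atanassov constraint mu + nu <= 1 (assumed; the excerpt is truncated).
def ifs_element(mu, nu):
    if not (0.0 <= mu <= 1.0 and 0.0 <= nu <= 1.0 and mu + nu <= 1.0):
        raise ValueError("invalid intuitionistic fuzzy element")
    pi = 1.0 - mu - nu  # hesitation (indeterminacy) degree
    return {"mu": mu, "nu": nu, "pi": pi}

print(ifs_element(0.6, 0.3))  # pi is about 0.1
```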
E_Gj reflects the ambiguity and uncertainty of the decision-making information of
attribute G_j over the program set; higher values indicate a higher degree of
fuzziness and uncertainty, meaning that less reliance should be placed on attribute
G_j. Let D_Gj be the degree of deviation of the decision-making information of
attribute G_j, where D_Gj = 1 − E_Gj, j = 1, 2, …, n. The formula for the
objective weight of attribute G_j is then:

r_j = D_Gj / Σ_{j=1}^{n} D_Gj ,  j = 1, 2, 3, …, n   (36.2)
Moreover, taking into account the decision makers' preferences and experiences,
we correct these values with the subjective weights λ = (λ1, λ2, λ3, …, λn), using
the formula:
356 F. Feng et al.
w_j = λ_j r_j / Σ_{j=1}^{n} λ_j r_j   (36.3)
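The weighting chain of Eqs. (36.2)–(36.3) can be sketched as follows; the entropy values E_Gj and subjective weights λ_j below are illustrative:

```python
# Objective weights from entropy deviations (Eq. 36.2): D_Gj = 1 - E_Gj,
# r_j = D_Gj / sum(D), then correction by subjective expert weights
# lambda_j (Eq. 36.3). Input values are illustrative only.
def combined_weights(entropies, subjective):
    deviations = [1.0 - e for e in entropies]        # D_Gj
    total = sum(deviations)
    objective = [d / total for d in deviations]      # r_j, Eq. (36.2)
    mixed = [l * r for l, r in zip(subjective, objective)]
    s = sum(mixed)
    return [m / s for m in mixed]                    # w_j, Eq. (36.3)

E = [0.2, 0.5, 0.8]    # illustrative attribute entropies
lam = [0.3, 0.4, 0.3]  # illustrative subjective weights
w = combined_weights(E, lam)
print(w)  # sums to 1
```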
There are 33 salt lakes in Qaidam. The salt lake resources are unique, advantaged
resources and play an important role in China's national economic construction;
moreover, the lithium mines, chemical fertilizer and asbestos rank first in China
(Table 36.1).
At present, the main developed salt lakes include Chaka, Keke Lake, Chaerhan
Lake, and the small Qaidam Lake. The main products include salt, potassium salt,
magnesium salt and boron salt (Yu and Tan 2000). However, for the past 50 years
the development of Qinghai's salt lake resources has always centered on
potassium as the main product. The associated or
symbiotic resources, like lithium, boron, magnesium, rubidium, bromine and other
active ingredients do not use effectively, and most of them as the waste emissions,
and formed a one-way linear process of ‘‘resources—products—waste’’. This
unidirectional extensive irrational development, on the one hand lead to tremen-
dous waste of the salt lake resources, and exacerbated destruction of resources; the
other hand, this mode of production resulting in high production costs and
affecting the economic efficiency of enterprises and the competitiveness of prod-
ucts in the market, and weak ability to withstand market risks (2005). Therefore,
the route of development of circular economy is a priority, and effectively evaluate
the circulatory levels of ability is the most important.
The evaluation index system for the salt lake circular economy should take the
''reducing, reusing, recycling'' (3R) principles as its main criteria and consider the
corporate, regional and social levels at the same time. Wang and Chen (2003)
proposed five principles for building an index of salt lake resources for evaluating
sustainable development: the systematic, scientific, operability, regional and
dynamic principles. The salt lake circular economy evaluation index system is still
at an exploratory stage, and no uniform, accepted standard has yet been reached.
Based on the above construction principles, and relying on the instructions of
China's ''circular economy evaluation index system'' and the existing statistical
system of the National Bureau of Statistics, this paper proposes the resource
output indicator, the resource consumption indicator, the comprehensive resource
utilization indicator, and the waste disposal volume indicator. In addition, Wang
and Chen (2003), using the problematic-focus method (Yang and Hong 2001),
determined an index system covering the sustainable utilization of salt lake
resources, the environmental impact of exploiting and using salt lake resources,
and the sustainable development of the salt lake industries. Wang and Feng (2012)
considered the unique characteristics of the evaluation object in circular economy
innovation and, combined with the phase characteristics of economic and social
development, proposed evaluation indexes of circular economy innovation
conditions and effects to assess innovation capability. This paper considers these
evaluation indexes comprehensively and provides an evaluation index system
with six first-grade indexes and twenty-two second-grade indexes, shown in
Table 36.2.
Table 36.2 The evaluation index system of the salt lake circular economy
Target layer: the evaluation index system of the salt lake circular economy.
Criterion layer and index layer:
1. The output indicator of resources: the main mineral resources output rate; the energy output rate; water resources output rate.
2. The consumption indicator of resources: water consumption per unit of GDP; energy consumption per unit of GDP; ''three wastes'' emissions per ten thousand yuan of production value.
3. The comprehensive utilization indicator of resources: industrial waste gas comprehensive utilization rate; industrial solid waste comprehensive utilization rate; industrial water reuse rate; the proportion in GDP of output value from comprehensive utilization of industrial ''three wastes''.
4. The waste disposal volume indicator: waste recycling rate; industrial solid waste disposal quantity; industrial waste water emissions; sulfur dioxide emissions; COD emissions.
5. The indicator of the environmental impact of exploiting and using salt lake resources: the capability of resource consumption; the capability of environmental bearing; the capability of ecological system bearing; the capability of resources environmental protection.
6. The circular economy innovation evaluation indicator: the proportion of research funds in GDP; the proportion of circular economy research funds in scientific and technological activities funds; the number of patents.
Based on the formulas for intuitionistic fuzzy sets, the intuitionistic fuzzy
weighted averaging (IFWA) operator is then used to evaluate the salt lake circular
economy index system. The main steps are as follows:
First, use formula (36.1) to calculate the intuitionistic fuzzy entropy of every
attribute, obtaining the six values EG1, EG2, EG3, EG4, EG5, EG6. Second, use
formula (36.2) to calculate the objective weight of every attribute. Third, use
formula (36.3) to correct the objective weights with the subjective weights, which
may come from expert scoring. Fourth, use formula (36.4) to compute the IFWA
value of each alternative. Finally, use formula (36.5) to obtain the score
S(IFWA(Yi)) and select the alternative with the maximum score value as the
optimal solution.
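The last two steps can be sketched with the standard IFWA operator and the score function S(α) = μ − ν (see Xu 2008). Since formulas (36.4)–(36.5) are not reproduced above, the operator form below is the textbook definition, and the intuitionistic fuzzy attribute values of the two alternatives are illustrative placeholders only:

```python
def ifwa(values, weights):
    """IFWA operator: values is a list of (mu, nu) pairs, weights sum to 1.
    Returns (1 - prod (1-mu_j)^w_j, prod nu_j^w_j)."""
    mu_prod, nu_prod = 1.0, 1.0
    for (m, n), w in zip(values, weights):
        mu_prod *= (1.0 - m) ** w
        nu_prod *= n ** w
    return (1.0 - mu_prod, nu_prod)

def score(alpha):
    """Score function S = mu - nu used to rank the aggregated values."""
    mu, nu = alpha
    return mu - nu

weights = [0.2, 0.15, 0.2, 0.15, 0.15, 0.15]   # hypothetical corrected weights
alternatives = {   # hypothetical (mu, nu) ratings on six attributes
    "Y1": [(0.6, 0.3), (0.5, 0.4), (0.7, 0.2), (0.6, 0.3), (0.4, 0.5), (0.5, 0.3)],
    "Y2": [(0.5, 0.4), (0.6, 0.2), (0.5, 0.4), (0.7, 0.2), (0.5, 0.4), (0.6, 0.3)],
}
ranked = sorted(alternatives,
                key=lambda k: score(ifwa(alternatives[k], weights)),
                reverse=True)
print(ranked[0])  # alternative with the maximum score
```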
36.4 Conclusion
With the continuous development of the circular economy, an effective index
system is conducive to better monitoring and evaluation of circularity levels,
helping to achieve the maximum development benefit at the minimum
consumption. This paper proposes an evaluation index system for the salt lake
circular economy and applies intuitionistic fuzzy sets and the intuitionistic fuzzy
weighted averaging (IFWA) operator, combining objective and subjective weights
to determine more accurate weights, so that the alternatives can ultimately be
evaluated.
Acknowledgments This paper was supported by 'The National Key Technology R&D Program'
(Grant Nos. 2012BAH10F01, 2011BAC04B02), the 'Education Program for New Century
Excellent Talents' (Grant No. NCET-11-0893), the 'Beijing Municipal Talents Supporting Plan
for Enhancing Education' (Grant No. PHR20110514), the 'Science and Technology Innovation
Platform of Universities in Beijing' and the 'Beijing Knowledge Management Research Base
Program'.
References
Huang X (2004) Circular economy: industry pattern and policy system. Nanjing university press,
Nanjing, pp 10–10
Li DF (2005) Multi attribute decision making models and methods and using intuitionistic fuzzy
sets. J Comput Syst Sci 70(1):73–75
Li D, Zhang W (2006) The research of evaluation index system of circular economy (in Chinese).
Stat Res 23(9):23–26
Ma S, Qi E, Huo Y, Pan Y (2007) The research of iron and steel industry green manufacturing
evaluation system (in Chinese). Sci Sci Manage Sci Technol 28(9):194–196
Min Y (1997) The German circulation economic law (in Chinese). Environ Her 3:40–40
Qin H (2012) Darong Intuitionistic fuzzy entropy and weighted decision making algorithm (in
Chinese). Math Pract Theory 42(4):255–260
Wang X, Chen J (2003) Study on sustainable development evaluation index of Qinghai salt lakes
resources (in Chinese). J Qinghai Norm Univ 42(4):68–72
Wang M, Feng Z (2012) Research on evaluation index system of circular economy innovation (in
Chinese). Chin Popul Resour Environ 22(4):163–166
Wang K, Yang H (2010) Multiple attribute decision making method based on intuitionistic fuzzy
sets (in Chinese). Fuzzy Syst Math 24(3):114–118
Wenjie L, Ma C (2007) The research of iron and steel industry ecological analysis and evaluation
(in Chinese). Inquiry Econ Issues 8:97–103
Xiao H (2007) Review on the development mode of district circular economy and its evaluating
system research (in Chinese). Ecol Econ 4:52–55
Xu Z (2008) Intuition fuzzy information integration theory and application. Science Press,
Beijing, pp 1–3
Yang C, Hong S (2001) Mineral resources sustainable development index discusses-focus method
(in Chinese). Resour Ind 1:29–31
Yi K (2005) Recycling-based economic production model of exploiting resources of Qinghai salt
lakes (in Chinese). J Salt Lake Res 13(2):20–24
Yu S, Tan H (2000) Exploitation of Chinese salt lake resources and environmental protection (in
Chinese). J Salt Lake Res 8(1):24–29
Zhong T, Huang X, Li L, Wang C (2006) Assessing regional circular economy development:
approaches and indicator systems: a case study in Jiangsu province (in Chinese). Resour Sci
28(2):154–162
Chapter 37
Evolution Analysis of Standardization
Production Behavior in GI Agricultural
Product Enterprise Cluster
37.1 Introduction
Z. Li (&) T. Chen
College of Management and Economics, Tianjin University, Tianjin, China
e-mail: [email protected]
37.2 Methodology
Evolutionary game theory takes the group behavior as the research object, from the
bounded rational individual as the starting point, thinks that individual decision-
making behavior is not always optimal, it is often not possible to find the optimal
strategy in the beginning. Individual decision-making is through individual mutual
imitation, learning in dynamic process to achieve, stepwise finds the better strategy
(Sun et al. 2003; Wu et al. 2004).
One of the most important dynamic behavior analysis models in evolutionary
game theory is the replicator dynamic model. Replicator dynamics is a dynamic
differential equation that describes the frequency with which a particular strategy
(behavior) is adopted in a group. It describes well the change trend of boundedly
rational individuals in groups and can predict individual behavior in groups. The
replicator dynamic model can be expressed as:
dx(t)/dt = x(t) [ ut(a) − ut(g) ],
where x(t) is the proportion of members in the group who adopt pure strategy a
during period t; ut(a) is the expected utility of the members who use pure strategy
a in period t; and ut(g) is the group's average expected utility (Friedman 1991;
Taylor and Jonker 1978; Friedman 1998). Evolutionary game theory provides
good analysis methods and tools for analyzing the mutual game behavior of
cluster members in GI agricultural product enterprise clusters who adopt a
standardization strategy (Hu 2010).
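The replicator dynamic can be illustrated with a small numerical integration. The payoff expressions below are hypothetical stand-ins built from the benefit function R(e, n, x(t)) = (e·n·x(t))^(1/a) of Sect. 37.3 and an assumed spillover share λ·R for non-adopters; they are not the exact payoffs of Fig. 37.1, and only illustrate the mechanics of the model:

```python
def simulate(x0, e=0.5, n=20, a=2.0, lam=0.3, dt=0.01, steps=5000):
    """Integrate dx/dt = x * (u_adopt - u_avg) by Euler steps and return the final x."""
    x = x0
    for _ in range(steps):
        if x <= 0.0 or x >= 1.0:
            break
        r = (e * n * x) ** (1.0 / a)    # shared benefit R(e, n, x)
        u_adopt = r - e                 # adopters pay the resource cost e
        u_not = lam * r                 # non-adopters free-ride on a spillover share
        u_avg = x * u_adopt + (1.0 - x) * u_not
        x += dt * x * (u_adopt - u_avg)
        x = min(max(x, 0.0), 1.0)
    return x

# bistability: a cluster starting above the adoption threshold converges to
# full adoption, one starting below it collapses to no adoption
print(round(simulate(0.60), 3), round(simulate(0.01), 3))
```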
37.3 Results
GI agricultural product processing enterprises are the core bodies that add value
to agricultural products, and their adoption of standardized production is directly
related to the reputation of the GI agricultural product and the economic interests
of the relevant stakeholders. Taking the adoption of standardized production in a
GI agricultural product processing enterprise cluster as an example, this paper
analyzes the dynamic evolution of this behavior (Zhang 2011).
Hypothesis 1 The GI agricultural product enterprise cluster has n enterprises of
similar scale. At any time t, each enterprise in the cluster faces the same strategy
set Si = {s1, s2} = {adopting the standards, not adopting the standards}. Assume
that in period t the proportion of enterprises in the cluster adopting the standards is
x(t), so the proportion not adopting them is 1 − x(t).
Hypothesis 2 The amount of resources each enterprise has is 1, and the amount of
resources that can be put into standardized production is e.
Hypothesis 3 R is the benefit of standardized production in the GI agricultural
product enterprise cluster, which is shared regionally among all enterprises.
Assume that R(e, n, x(t)) = (e·n·x(t))^(1/a), where 0 ≤ e ≤ 1 and a > 0. In this
formula, the benefit R is determined by the number n·x(t) of enterprises that adopt
the standardization strategy in the cluster and by the amount e of resources put
into standardized production, and R is proportional to both; a is the productivity
parameter of standardized production, assumed in this article to equal 2.
Assume that in a certain period t, uc is the expected profit of the enterprises that
use the standardized strategy, ub is the expected profit of those that do not, and u
is the average expected return of the GI agricultural product enterprise cluster.
According to Fig. 37.1, it can be concluded that
[Fig. 37.1 Payoff matrix of the game between Enterprise 1 and Enterprise 2, each
choosing between adopting and not adopting the standards]
According to discrete dynamic system theory, the following conclusion can be
obtained:
Conclusion 1 When 0 < λ < 1, Formula (37.5) can be changed into (Shen and
Wang 2011)
[Fig. 37.2 Phase diagram of x(t): the interval from 0 to 1 is divided at the
threshold e/(n(1 − λ)²)]
37.4 Discussion
From the above, it can be seen that the initial state of the GI agricultural product
enterprise cluster, the amount of resources of the enterprises using the
standardized strategy, the spillover coefficient of the standardization strategy, the
efficiency of the standardized strategy, the size of the cluster and other factors
play important roles in the formation, evolution and stability of the GI
agricultural product enterprise cluster.
(1) The amount of resources e of the enterprise using the standardized strategy.
Given that the standardized strategy requires a certain total amount of resources,
the fewer the enterprises using it, the more resources each enterprise must invest.
In addition, the more resources required, the higher the threshold proportion at
which the GI agricultural product enterprise cluster adopts the standardized
strategy. Therefore, reducing e through government subsidy mechanisms,
improving the agricultural science and technology extension system and other
measures generally contributes to the evolution and stability of standardized-
strategy adoption in GI agricultural product enterprise clusters.
(2) The spillover coefficient λ. When λ is close to 1, the spillover effect is
obvious, which leads to more ''free-riding'' behaviors, a decrease in the quality of
agricultural products, and may even endanger the collective reputation. Only if
the cluster itself or the local government improves the punishment system will
''free-riding'' be reduced.
(3) The efficiency a. The size of a directly affects the threshold of adopting the
standardized strategy in GI agricultural product enterprise clusters. Thus, in
reality, planning the industrial structure, coordinating organizational settings and
designing management mechanisms can effectively influence the efficiency of
standardization popularization.
(4) The scale n. From Fig. 37.2, when the initial state lies in the region
e/(n(1 − λ)²) ≤ x(t) ≤ 1, the smaller the cluster, the higher the popularization rate
of the standardized strategy and the more stable the outcome. In reality, this
means the number of enterprises should be controlled and effective supervision
and management ensured, combined with the above measures.
37.5 Conclusion
This paper constructs an evolutionary game model of standardized-strategy
adoption in GI agricultural product enterprise clusters and focuses on analyzing
the behavioral evolution of cluster members under different conditions. The
results show that adoption of the standardized strategy is influenced by several
key factors, so local governments, industry associations and other coordinating
organizations that can control these factors should play an important role:
establishing supervision and punishment mechanisms for standardized-strategy
implementation, improving the initial state, and reducing the burden on cluster
members, thereby improving the efficiency of standardization popularization,
reducing the externalities of standardized popularization, carrying out
standardization effectively, and safeguarding the characteristics and market
competitive advantage of GI agricultural products.
Acknowledgments This paper is supported by the Humanities and Social Sciences Program of the
Ministry of Education of China (10YAJ630014).
The authors thank the editor and the anonymous referees for their comments and suggestions.
References
38.1 Introduction
Forecasting methods fall broadly into three categories (Cheng and Su 2005; Deng
and Hu 2010; Zhu 2011; Zhou et al. 2011; Ren et al. 2011; Zhu et al. 2011; Wen
2000; Hu 2003; Yuan 1999; National Bureau of Statistics 2010; Wu et al. 2006;
Guo et al. 2006; Cheng 1997; Zhang and Xue 2008; Ye 2004). The first is
extrapolation, which forecasts the future status from past data; the time series
method, for example, is used frequently. The second is the cause-and-effect
method, which forecasts the future status by finding the relationship between the
forecast variable and correlated variables from past data; regression analysis is an
example. The last is the judgmental analytical method, which forecasts the future
status based on the experience and comprehensive analytical ability of forecasting
experts. Because the correlations among the factors affecting vehicle population
are intricate, the primary and secondary relations are uncertain, and the
quantitative relations are difficult to extract, all the above methods have flaws at
different levels.
Neural networks are an important artificial intelligence technology developed in
recent years (Wen 2000; Hu 2003; Yuan 1999). Because of characteristics such as
parallel distributed processing, self-organization, self-adaptation, self-learning,
associative memory and strong fault tolerance, they have been applied
successfully in many areas and are widely used to forecast multi-factor, uncertain
and nonlinear problems (Zhu et al. 2011). This kind of black-box forecasting
method can overcome the shortcomings of traditional forecasting methods well,
and it can also be combined with traditional forecasting methods (Deng and Hu
2010). In this paper, a BP neural network is used to forecast the vehicle
population, and a forecast model based on the BP neural network is discussed.
The structure of the back-propagation (BP) neural network model is shown in
Fig. 38.1. The main idea of the BP algorithm is to divide the learning (training)
process into two stages. The first stage is the forward-propagation process: the
initial input vector enters the input layer, passes through the hidden layer, and
arrives at the output layer, with each layer processing it and the actual output
value of each unit being calculated. The second stage is the back-propagation
process: when the desired output is not obtained at the output layer, the difference
(error) between the actual and the desired output is calculated recursively layer by
layer, and the weights of the corresponding nodes in each layer are then adjusted
to reduce this error. The two processes are applied repeatedly, and learning ends
when the error signal has dropped enough to meet the required tolerance.
The neural network trained by the learning process extracts the nonlinear
mapping relation contained in the specimens and stores it in the network weights.
During the operational stage, when a new example is input into the
38 Forecasting of Vehicle Capacity Based on BP Neural Network 371
[Fig. 38.1 Structure of the BP neural network model: the input vector x enters the
input layer, passes through the hidden layer to the output layer y, and the error
against the desired output result is counter-propagated]
established neural network, it can realize arbitrary nonlinear mappings from the
n-dimensional input space to the m-dimensional output space, and thus accurately
describe laws of the specimens that are hard to express with mathematical
equations.
In this paper, a BP neural network model with three layers, namely an input
layer, one hidden layer and an output layer, was built and discussed.
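A minimal sketch of such a three-layer BP network, using the 3-5-1 structure reported later in this chapter, can be written with NumPy. The training specimens below are synthetic stand-ins, not the data of Table 38.1:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# synthetic specimens: 3 normalized inputs -> 1 output
X = rng.uniform(0.0, 1.0, size=(200, 3))
y = (0.3 * X[:, 0] + 0.5 * X[:, 1] * X[:, 2]).reshape(-1, 1)

W1 = rng.normal(0.0, 0.5, size=(3, 5)); b1 = np.zeros((1, 5))  # input -> hidden
W2 = rng.normal(0.0, 0.5, size=(5, 1)); b2 = np.zeros((1, 1))  # hidden -> output
lr = 0.5

for epoch in range(2000):
    # stage 1: forward propagation through the hidden layer to the output layer
    h = sigmoid(X @ W1 + b1)
    out = h @ W2 + b2
    err = out - y  # D-value between actual and desired output
    # stage 2: propagate the error back, adjusting weights layer by layer
    grad_W2 = h.T @ err / len(X)
    grad_b2 = err.mean(axis=0, keepdims=True)
    dh = (err @ W2.T) * h * (1.0 - h)
    grad_W1 = X.T @ dh / len(X)
    grad_b1 = dh.mean(axis=0, keepdims=True)
    W2 -= lr * grad_W2; b2 -= lr * grad_b2
    W1 -= lr * grad_W1; b1 -= lr * grad_b1

mse = float(np.mean((sigmoid(X @ W1 + b1) @ W2 + b2 - y) ** 2))
print(f"final training MSE: {mse:.5f}")
```

The two-stage loop above is the forward-propagation/back-propagation cycle described in the text; it stops after a fixed number of epochs rather than on an error tolerance for brevity.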
There are many relevant factors influencing the vehicle population. For example,
gross industrial output value is the foundation of the national economy and
influences the vehicle market remarkably. Traffic volume reflects the objective
demand for transport facilities and can promote the development of the vehicle
market, so passenger and freight volume is an important factor influencing the
vehicle population; furthermore, total highway mileage is also a key determinant.
Thus the number of input-layer nodes is identified as three: gross product of
society, road freight volume, and total highway mileage. For lack of a strict
theoretical basis, choosing the number of hidden-layer nodes is a complex
question, normally settled by experience. With too few hidden units the network
may not be trainable; more hidden units prolong the learning time and may
increase the error. In general, using more hidden units decreases the possibility of
falling into a local minimum, but a plateau phenomenon, in which the learning
error function value descends only slowly, forms easily. The number of hidden-
layer nodes is accordingly set here to five, giving the 3-5-1 structure reported
below.
372 Y. An et al.
Table 38.1 National motortruck population and its main influence factors
Years | Motortruck population (ten thousand) | Gross product of society (hundred
million yuan) | Road freight volume (ten thousand tons) | Highway total mileage
(ten thousand km)
1989 346.37 16909.2 733781 101.43
1990 368.48 18547.9 724040 102.83
1991 398.62 21617.8 733907 104.11
1992 441.45 26638.1 780941 105.67
1993 501.00 34634.4 840256 108.35
1994 560.33 46759.4 894914 111.78
1995 585.43 58478.1 940387 115.70
1996 575.03 67884.6 983860 118.58
1997 601.23 74462.6 976536 122.64
1998 627.89 78345.2 976004 127.85
1999 676.95 82067.5 990444 135.17
2000 716.32 89468.1 1038813 140.27
2001 765.24 97314.8 1056312 169.80
2002 812.22 105172.3 1116324 176.52
2003 853.51 117251.9 1159957 180.98
2004 893.00 136875.9 1244990 187.07
2005 955.50 182320.6 1341778 193.05
Table 38.2 Forecasting results and data error analysis from 2003 to 2005
Years 2003 2004 2005
Actual value 853.51 893.00 955.50
Forecasting value 818.55 849.93 937.92
Relative error 4.10 % 4.82 % 1.84 %
Mean error 3.59 %
Table 38.3 Forecasting results of vehicle populations in 2015, 2020 and 2025
Years | Motortruck population (ten thousand) | Gross product of society (hundred
million yuan) | Road freight volume (ten thousand tons) | Highway total mileage
(ten thousand km)
2015 1968.58 587314.8 3316312 479.80
2020 2601.22 785172.3 4506324 555.52
2025 3523.51 977251.9 5729957 648.98
Cheng and Su (2005) forecasted the data from 1999 to 2001 using the gray
forecasting method, with an average forecast error of 20.94 %. The BP neural
network method thus reduces the mean error from 20.94 to 3.59 %, which means
the forecasting accuracy is greatly improved.
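The error analysis of Table 38.2 can be reproduced directly from the actual and forecast values:

```python
# Relative error |actual - forecast| / actual for the three test years,
# and the mean over the test set, as in Table 38.2.
actual = {2003: 853.51, 2004: 893.00, 2005: 955.50}
forecast = {2003: 818.55, 2004: 849.93, 2005: 937.92}

rel_errors = {y: abs(actual[y] - forecast[y]) / actual[y] for y in actual}
for y, e in sorted(rel_errors.items()):
    print(y, f"{e:.2%}")          # 2003 4.10%, 2004 4.82%, 2005 1.84%
print("mean", f"{sum(rel_errors.values()) / 3:.2%}")  # mean 3.59%
```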
If a forecast value is fed back into the network model as a new input, the vehicle
population in any future year can be forecast. The influence factors of the vehicle
population in 2015, 2020 and 2025 were forecast separately using the time series
method and are shown in Table 38.3.
38.4 Summary
The BP neural network model set up in this paper is of 3-5-1 type; it fits the raw
data accurately and achieves high forecasting precision. This method has clear
advantages over other methods, making it particularly suitable for forecasting the
vehicle population, enacting capacity resource management policies and
establishing capacity expropriation counterplans. However, the BP neural
network still has some flaws, such as a slow training rate and a tendency to get
stuck in local minima, and its forecasting accuracy can be further improved.
References
Cheng QS (1997) Attribute recognition theoretical model with application. Acta Scientiarum
Naturalium Universitatis Pekinensis 33(1):12–20 (in Chinese)
Cheng CS, Su LL (2005) Forecasting of vehicle population in China based on Hereditary BP
Algorithm. J Changsha Commun Univ 21:79–83
Deng W, Hu SJ (2010) Non-linear combined prediction model of medium and long-term civil
vehicle population of China. J Transp Syst Eng Inf Technol 10:103–108
Guo YS, Yuan W, Fu R (2006) Research on characteristics of assessment indexes for road safety.
J Highway Transp Res Dev 23(5):102–103 (in Chinese)
Hu WS (2003) Neural network theory and engineering application. Transportation Institute of
Southeast University, Nanjing, pp 16–23 (in Chinese)
National Bureau of Statistics (2010) National statistical yearbook. China Statistics Press, Beijing,
pp 56–62 (in Chinese)
Ren YL, Cheng R, Shi LF (2011) Prediction of civil vehicles' possession based on combined
logistic model. J Ind Technol Econ 31:90–97
Wen X (2000) MATLAB neural network apply design. Science Press, Beijing, pp 26–29 (in
Chinese)
Wu YH, Liu WJ, Xiao QM (2006) AHP for evaluating highway traffic safety. J Changsha Univ
Sci Technol (Nat Sci) 3(2):7–8 (in Chinese)
Ye QY (2004) The synthetic assessment of occupation craftsmanship appraisal of engineering
surveying. Surv Mapping Sichuan 27(1):43–44 (in Chinese)
Yuan ZR (1999) Manual neural network and apply. Tsinghua University Press, Beijing, pp 35–39
(in Chinese)
Zhang XR, Xue KS (2008) Application of attribute recognition theoretical model to
comprehensive evaluation of eco-industrial park. Environ Sci Manage 33(11):167–171
Zhou JJ, Zhao Y, Long DH (2011) The study of prediction model of taxi quantity of the city.
Technol Econ Areas Commun 85:27–31
Zhu XH (2011) Cars quantities predication based on multivariate linear regression. J Hubei Univ
Technol 26:38–39
Zhu C, Zhou HP, Zhong BQ (2011) Forecasting of vehicle population based on combined model
with mixed neural network. J Changsha Univ Sci Technol (Nat Sci) 8:13–16
Chapter 39
Fuzzy Data Mining with TOPSIS
for Fuzzy Multiple Criteria Decision
Making Problems
39.1 Introduction
exchange, network advertisement, etc. Among them, auction websites have taken
the majority of e-commerce and are growing continuously.
The development of auction websites has changed not only people's lives but also
their consumption habits. Consumers no longer need to go out; they can stay at
home, surf the Net and buy the goods they need. Buying goods through an auction
website has several advantages: flexible prices, goods of various styles, easy
exchange, 24-h marketing, no regional constraints, and so on. Whether an auction
website can keep operating depends entirely on whether users are willing to
continue using the platform and trading on it. In other words, the service
performance of an auction website determines whether it can survive. Therefore,
how to measure the operating and service performance of auction websites
effectively is not only a subject the website operators care about, but also an
important reference indicator for consumers choosing a trading platform.
The first task in setting up a performance assessment method is to propose
effective assessment criteria. There is no standard measurement criterion for
auction websites, so it is quite difficult to establish effective criteria suited to the
object being assessed, especially when the assessors and planners lack the
professional knowledge to set up the criteria; the criteria must then be set up by
gathering the opinions of relevant specialists. However, professional opinions are
difficult to obtain face to face, or require a lot of cost and time. To overcome this
difficulty, this research proposes a method, 'fuzzy time weighting', that uses
literature review in place of expert opinion.
Many studies have proposed different criteria for assessing auction website
performance. In order to choose the truly important assessment criteria
objectively and effectively, while also considering how the applicability of the
criteria progresses with time, this research summarizes the assessment criteria in
the relevant literature, ranks them, and selects the criteria needed for this study by
means of fuzzy time weighting (Hoffman and Novak 1996; Hung et al. 2003).
Assessments of auction website service performance are often influenced by the
appraiser's subjective experience, making the results fuzzy and uncertain;
moreover, such assessments cannot be satisfied by a single criterion and must
consider many assessment criteria at the same time. Therefore, after the
assessment criteria were finalized, this research used them to build a
questionnaire on auction website operating performance. The questionnaire items
use the linguistic variables of the fuzzy method to express the appraisers' fuzzy
judgments on the constructed criteria, and users of the auction websites then rated
the websites' behavior on each criterion. Finally, fuzzy theory is combined with
the multiple criteria decision-making tool TOPSIS to aggregate the appraisers'
assessments, thereby constructing an assessment of auction website service
performance. This method not only lets an auction website understand its own
service performance and competitive ability, but also helps consumers choose the
auction website with the best service performance (Troy and Shaw 1997; Zadeh
1965, 1975).
39.2 Methodology
Fuzzy theory was proposed by Professor Zadeh, who sought a mathematical
expression for the uncertainty of human cognitive processes (mainly thinking and
inference). It expands traditional two-valued ('true'/'false') binary logic into
continuous multi-valued logic with gray areas. Fuzzy theory uses a membership
function to describe a concept: a value between 0 and 1 expresses the intensity
with which an element belongs to the concept, and this value is called the
element's membership grade in the set. Suppose the universe of discourse is
U = {x1, x2, …, xn};
then a fuzzy set Ã of the universe U is presented as
Ã = {(x1, μÃ(x1)), (x2, μÃ(x2)), …, (xn, μÃ(xn))}.
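A discrete fuzzy set on a finite universe of discourse can be sketched as a mapping from each element to its membership grade in [0, 1]; the elements and grades below are illustrative:

```python
# Fuzzy set A on the universe U = {x1, x2, x3}, mapping elements to grades.
A = {"x1": 0.2, "x2": 0.7, "x3": 1.0}

def membership(fuzzy_set, x):
    """Membership grade of x in the fuzzy set (0 for elements outside U)."""
    return fuzzy_set.get(x, 0.0)

print(membership(A, "x2"))  # 0.7
```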
In the real world, most decision problems involve many assessment criteria rather than a single indicator, and the criteria cannot be converted into a common unit for direct comparison; multi-criteria decision-making techniques were developed in response. Common multi-criteria decision methods include the Analytic Hierarchy Process (AHP), which has, for example, been applied to assess supplier performance (Chow et al. 1994), as well as SMART and TOPSIS. TOPSIS is the multi-criteria analysis model proposed by Hwang and Yoon (Van Heck and Ribbers 1997). Its theoretical foundation assumes that every assessment indicator has a monotonically increasing or decreasing characteristic, so that an ideal solution can be defined (Grant and Schlesinger 1995; Hung et al. 2003). The ideal solution is composed of the optimum value of every criterion (the largest attribute value for benefit criteria; the smallest for cost criteria), while the negative-ideal solution is composed of the worst value of every criterion (the smallest attribute value for benefit criteria; the largest for cost criteria). Candidate schemes are evaluated by their Euclidean distance to these two reference points, measuring each scheme's degree of approximation to the ideal solution: the chosen scheme should be closest to the ideal solution and farthest from the negative-ideal solution. The TOPSIS assessment method has been widely accepted in academia and employed to compare and rank alternatives in all kinds of trades. For example, Wang and Xu (The National Library 2011) employed the TOPSIS method to assess the operating performance of listed companies, and Wang and Chen (The National Library 2011) assessed the management performance of listed computer companies with financial indicators.
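The ideal/negative-ideal logic described above can be illustrated with a minimal crisp TOPSIS sketch; the alternatives, weights, and criteria below are invented for illustration and are not data from this study.

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives by relative closeness to the ideal solution.
    matrix: rows = alternatives, columns = criteria (crisp values).
    benefit: True if a criterion is a benefit (larger is better)."""
    n = len(matrix[0])
    # Vector-normalize each column, then apply criterion weights.
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n)]
    v = [[weights[j] * row[j] / norms[j] for j in range(n)] for row in matrix]
    # Ideal solution: best value per criterion; negative ideal: worst.
    ideal = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*v))]
    neg = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*v))]
    scores = []
    for row in v:
        d_pos = math.dist(row, ideal)  # Euclidean distance to the ideal
        d_neg = math.dist(row, neg)    # distance to the negative ideal
        scores.append(d_neg / (d_pos + d_neg))
    return scores

# Two alternatives, two benefit criteria with equal weights.
print(topsis([[3, 4], [1, 2]], [0.5, 0.5], [True, True]))
```

An alternative that dominates on every criterion gets closeness 1.0, a dominated one gets 0.0, which is the ranking behaviour the text describes.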
Following the motive and purpose of this research, we collected and organized the literature and relevant materials: 105 theses and dissertations, together with more than 20 foreign studies quoted in those theses. Using the fuzzy time weight method, and according to each assessment criterion's occurrence frequency, relevance, and time weight, we distilled 5 main dimensions comprising 25 assessment criteria for auction website service performance. These criteria were made into a questionnaire, with users of auction websites as the testees, and a questionnaire investigation was carried out to assess the current service performance of the two big auction websites (Yahoo and Return). The questionnaire is divided into two major parts: first, the importance degree of each service performance assessment criterion; second, the satisfaction with each assessment criterion. After the questionnaires were retrieved, fuzzy theory was used to integrate the assessors' rated values of the two major auction websites and to evaluate the weights of the service criteria, and finally the multi-criteria decision method TOPSIS was applied to obtain the rank order and the relative merits of the two auction websites' service performance (Lambert and Sharma 1990; Parasuraman et al. 1985).
39 Fuzzy Data Mining with TOPSIS 381
Research on auction websites is a fairly young subject, yet the relevant research issues are numerous. This research analyzes the relevant literature to extract the criteria for measuring auction website operating performance that this study needs; 105 theses are involved in total (The National Library 2011).

First, to overcome the difficulties of gathering expert opinion, and at the same time to reduce the cost of the assessment exercise, this research uses a literature-based approach to produce the assessment criteria: each piece of literature is treated as an expert, and the components related to auction website operating performance in each work are analyzed and combined to extract the important assessment criteria.
Network auction is a trade form that has developed only in the recent ten years, and users are still at the stage of breaking into this kind of transaction. So, with the progress of relevant science and technology and the passage of time, the aspects of auction website performance that users care about, and their relative proportions, change constantly. The service performance users expect from auction websites varies greatly over time, and auction turnover and consumer consciousness are both increasing day by day. While organizing the literature, we found that although the time span over which the relevant literature was issued is not large, the service performance aspects that consumers mind change somewhat every year (Bellman and Zadeh 1970; Chen and Hwang 1992; Day 1984).
To account objectively for the changes in assessment criteria produced over time, this research, during the process of organizing the literature, entrusts the assessment criteria put forward in different years with different weights. The gathered assessment criteria are weighted according to their number of occurrences and their relevance, with a fuzzy weight applied to the year of issue, in order to obtain the real weight of each criterion. Because the domestic master's theses quoted by this research were all issued between 1998 and 2010, and most of the foreign literature likewise, we build the membership function of the fuzzy time weight as shown in Fig. 39.1.
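The time-weighted counting of criterion occurrences can be sketched as below. The actual membership function of Fig. 39.1 is not reproduced in this excerpt, so the linearly increasing ramp over 1998-2010 used here is an illustrative assumption, not the paper's exact function.

```python
def time_weight(year, start=1998, end=2010):
    """Linearly increasing time weight: older literature counts less.
    This linear ramp is an assumed stand-in for the membership
    function of Fig. 39.1, which is not shown in the text."""
    return (year - start + 1) / (end - start + 1)

def weighted_frequency(occurrences):
    """occurrences: publication years in which a criterion appears.
    The criterion's score is the sum of the time weights, so recent
    mentions contribute more than old ones."""
    return sum(time_weight(y) for y in occurrences)

# A criterion cited in 2000 and 2010 versus one cited three times in 1998.
print(weighted_frequency([2000, 2010]))
print(weighted_frequency([1998, 1998, 1998]))
```

Under any increasing ramp, two recent mentions outweigh three very old ones, which matches the intent of discounting early literature.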
Setting out from the user's point of view, through the collection and aggregation of the literature, this research applies the fuzzy time weight method, calculates from each assessment criterion's occurrence frequency and time weight, and selects by criterion weight order 5 dimensions comprising 25 main assessment criteria of auction website service performance (The National Library 2011; Angehrn 1997; Athanassopoulos 2000; Beam 1999; Bellman and Zadeh 1970; Cheng 1994; Fornell 1992; O'Connor and O'Keefe 1997; Kim and Stoel 2004; Nielsen and Tahir 2005; Parasuraman et al. 1988; Paul 1996; Simon 2001).
382 C. Low and S. Lin
After obtaining the assessment criteria, we entrust the criteria with weights of different levels, defined through the fuzzy approach, to facilitate the follow-up study and the questionnaire. The assessment proceeds as follows:

Step 1: Determine the linguistic variables and fuzzy numbers for criterion importance

The questionnaire is designed using a five-grade Likert scale; each assessor can use a different linguistic variable to express the importance measurement value of each assessment criterion, as shown in Fig. 39.2.
Step 2: Calculate the fuzzy weight of every assessment criterion

w̃_j = (l_j, m_j, u_j),  j = 1, 2, ..., n   (39.1)

l_j = min_i l_ij,  m_j = (∏_{i=1}^{m} m_ij)^{1/m},  u_j = max_i u_ij,  i = 1, 2, ..., m   (39.2)
Step 3: Defuzzify the fuzzy weight of each assessment criterion

The main purpose is to convert the fuzzy weights of the 25 assessment criteria into clear single values (of_j), from which the importance degree and priority of each assessment criterion can be learned. Common defuzzification methods include the max-min set method, the mean-of-maxima method, and the centre-of-gravity method; among them the centre-of-gravity method is the most generally adopted and also the simplest and easiest to calculate, so this research uses it to convert the fuzzy weights of the n assessment criteria. The conversion is as follows:

of_j = (l_j + m_j + u_j) / 3,  j = 1, 2, ..., n   (39.3)
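The aggregation of Eqs. (39.1)-(39.2) and the centre-of-gravity conversion of Eq. (39.3) can be sketched together. The triangular ratings below are illustrative only; they are not the linguistic scale of Fig. 39.2.

```python
from math import prod

def aggregate_fuzzy_weights(ratings):
    """Merge m assessors' triangular fuzzy ratings (l, m, u) of one
    criterion per Eqs. (39.1)-(39.2): minimum of the lower bounds,
    geometric mean of the modes, maximum of the upper bounds."""
    ls, ms, us = zip(*ratings)
    m = len(ratings)
    return (min(ls), prod(ms) ** (1 / m), max(us))

def centroid_defuzzify(tfn):
    """Centre-of-gravity defuzzification, Eq. (39.3): (l + m + u) / 3."""
    l, mid, u = tfn
    return (l + mid + u) / 3

# Three assessors rate one criterion; the triangular numbers are invented.
ratings = [(0.5, 0.75, 1.0), (0.25, 0.5, 0.75), (0.5, 0.75, 1.0)]
w = aggregate_fuzzy_weights(ratings)
print(w, centroid_defuzzify(w))
```

The defuzzified value of_j is then used to order the criteria by importance.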
m_tj = (∏_{i=1}^{m} m_itj)^{1/m},  i = 1, 2, ..., m   (39.5)

u_tj = max_i {u_itj},  i = 1, 2, ..., m

W̃ = [w̃_j]_{1×n},  j = 1, 2, ..., n   (39.7)

Here x̃_tj and w̃_j are the merged fuzzy service performance value of each assessment criterion under website t and the fuzzy weight of each assessment criterion, calculated in Sects. 39.3.1 and 39.3.2.
Step 2: Normalize the fuzzy assessment matrix

r̃_tj = (l_tj / u_j^+, m_tj / u_j^+, u_tj / u_j^+),  j ∈ B   (39.8)

where u_j^+ = max_t u_tj if j ∈ B
Step 3: Construct the normalized weighted fuzzy decision matrix

Ṽ = [ṽ_tj]_{k×n},  t = 1, 2, ..., k;  j = 1, 2, ..., n   (39.9)

where

ṽ_tj = r̃_tj ⊗ w̃_j   (39.10)
Step 4: Determine the ideal and negative-ideal solutions

ṽ_j^+ = { ṽ_tj | min_t d(ṽ_tj, ṽ^+) },  t = 1, 2, ..., k;  j = 1, 2, ..., n   (39.11)

ṽ_j^- = { ṽ_tj | min_t d(ṽ_tj, ṽ^-) },  t = 1, 2, ..., k;  j = 1, 2, ..., n   (39.12)

where the absolutely positive solution ṽ^+ and the absolutely negative solution ṽ^- are:

ṽ^+ = (1, 1, 1),  ṽ^- = (0, 0, 0)
Step 5: Calculate the distances to the ideal and negative-ideal solutions

The calculation is as follows:

d_t^+ = Σ_{j=1}^{n} d(ṽ_tj, ṽ_j^+),  t = 1, 2, ..., k   (39.13)

d_t^- = Σ_{j=1}^{n} d(ṽ_tj, ṽ_j^-),  t = 1, 2, ..., k   (39.14)

OPI_t = d_t^- / (d_t^+ + d_t^-),  t = 1, 2, ..., k   (39.15)
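Steps 2-5 (Eqs. 39.8-39.15) can be sketched end-to-end for benefit criteria. The vertex distance between triangular fuzzy numbers and the component-wise product in Eq. (39.10) are common fuzzy-TOPSIS conventions assumed here; the ratings and weights are invented for illustration.

```python
import math

def fuzzy_dist(a, b):
    """Vertex distance between two triangular fuzzy numbers (l, m, u)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / 3)

def fuzzy_topsis(ratings, weights):
    """ratings[t][j]: triangular fuzzy score of website t on benefit
    criterion j; weights[j]: fuzzy weight of criterion j. Follows
    Eqs. (39.8)-(39.15) with absolute ideals (1,1,1) and (0,0,0)."""
    k, n = len(ratings), len(ratings[0])
    opi = []
    for t in range(k):
        d_pos = d_neg = 0.0
        for j in range(n):
            u_plus = max(ratings[s][j][2] for s in range(k))      # Eq. (39.8)
            r = tuple(c / u_plus for c in ratings[t][j])          # normalized
            v = tuple(rc * wc for rc, wc in zip(r, weights[j]))   # Eq. (39.10)
            d_pos += fuzzy_dist(v, (1, 1, 1))                     # Eq. (39.13)
            d_neg += fuzzy_dist(v, (0, 0, 0))                     # Eq. (39.14)
        opi.append(d_neg / (d_pos + d_neg))                       # Eq. (39.15)
    return opi

# Two websites, two criteria; all numbers illustrative.
ratings = [[(0.6, 0.8, 1.0), (0.4, 0.6, 0.8)],
           [(0.2, 0.4, 0.6), (0.2, 0.3, 0.5)]]
weights = [(0.5, 0.7, 0.9), (0.3, 0.5, 0.7)]
print(fuzzy_topsis(ratings, weights))
```

The website with the larger OPI_t is ranked better, which is how the two auction websites are compared in Sect. 39.3.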
39.3 Results
39.3.1 Process
Via the collection and arrangement of the literature and relevant materials, this research distilled 5 main dimensions comprising 25 assessment criteria for auction website service performance, and designed these criteria into a questionnaire; the sample consists of website users who often use the auction websites (Yahoo and Return). 60 questionnaires were sent out; 59 were retrieved, of which 57 were effective, an effective questionnaire rate of 95 %. The 57 assessors evaluated the importance degree of the 25 service performance assessment criteria of the two major auction websites, and appraised their satisfaction with each criterion when actually using each website. We then used fuzzy theory and the TOPSIS method to calculate the overall performance indicators of the two major auction websites and determine the relative level of their service performance.
39.3.2 Results
According to the analysis described above, the order of the service performance of the two domestic auction websites is: Yahoo is markedly better than Return. We also found that buyers show an obvious difference in service satisfaction between the two major domestic auction websites, as shown in Tables 39.1 and 39.2.
39.4 Discussion
Auction website performance assessment involves many criteria and many assessors, while the assessment criteria themselves are innately fuzzy and difficult to quantize accurately. Therefore, it is often difficult to amalgamate the assessment results exactly and to verify their objectivity. Especially when the assessed subjects perform well on some dimensions and badly on others, ranking the overall performance of the websites becomes quite difficult. In addition, establishing assessment criteria from expert opinion often requires lengthy time and a large amount of cost, making small-scale assessment plans difficult to implement.
To address the problem characteristics described above, this research proposes a fuzzy time weight approach that extracts the important assessment criteria from the literature, replacing techniques such as the Delphi method that aggregate expert opinion. Then, through a questionnaire, we obtained users' service performance appraisals of the compared websites; the appraisals were fuzzified to match the fuzziness inherent in linguistic expressions and the fuzzy weights. We then ranked the websites' performance with TOPSIS.
Through the appraisal of the 25 assessment criteria, we obtained a performance ranking of the two compared subjects (Yahoo and Return). From the results of this research, it can be found that the service performance of Yahoo is far higher than that of Return.
39.5 Conclusion
Taking the performance assessment of auction websites as the example, this research has combined several fuzzy theory techniques that are widely employed, simple, and feasible at the present stage, aiming to propose a common procedure for fuzzy multi-attribute decision-making. When the assessment target has many attributes, requires many assessors, and the assessed content is fuzzy and difficult to quantize clearly, the approach put forward by this research is suitable for this kind of assessment problem. Combined with the TOPSIS technique, the assessment targets can further be ranked clearly, especially when the targets are numerous and difficult to order. Regardless of the kind of large multi-attribute decision scheme, if the assessment criteria and the assessors can be confirmed, the approach put forward by this research can be used to assess and rank the performance of the cases under consideration.

In addition, this research regards the literature as experts, putting forward a brand-new literature review concept. Screening assessment criteria with the fuzzy time weight can offer a comparatively economic substitute scheme, without losing objectivity, when expert investigation is expensive and difficult. Although the fuzzy time weight concept is quite simple, we think its application feasibility in other fields is worth further study.
References
Angehrn A (1997) Designing mature internet business strategies: the ICDT model. Eur Manag J
15(4):361–364
Athanassopoulos AD (2000) Customer satisfaction cures to support market segmentation and
explain switching behavior. J Bus Res 47:191–207
Beam CM (1999) Auctioning and bidding in electronic commerce: the on-line auction. PhD
thesis, University of California, Berkeley
Bellman RE, Zadeh LA (1970) Decision-making in a fuzzy environment. Manag Sci
17(4):141–164
Chen SJ, Hwang CL (1992) Fuzzy multiple attribute decision making: methods and applications.
Springer, Berlin
Cheng SW (1994) Practical implementation of the process capability indices. Qual Eng
7(2):239–259
Chow G, Heaver TD, Henriksson LE (1994) Logistics performance: definition and measurement.
Int J Phys Distrib Logist Manag 24(1):17–28
Day RL (1984) Modeling choices among alternative responses to dissatisfaction. Adv Consumer
Res 11:244–249
Fornell C (1992) A national customer satisfaction barometer: the Swedish experience. J Marketing
56(1):6–21
Grant WH, Schlesinger LA (1995) Realize your customers’ full profit potential. Harv Bus Rev
73(4):59–72
Hoffman DL, Novak TP (1996) Marketing in hypermedia computer-mediated environments:
conceptual foundations. J Marketing 60(3):50–68
Hsu HM, Chen CT (1996) Aggregation of fuzzy opinions, under group decision making. Fuzzy
Sets Syst 79(3):279–285
Hung YH, Huang ML, Chen KS (2003) Service quality evaluation by service quality performance
matrix. Total Qual Manag 14(1):79–89
Kaufmann A, Gupta MM (1991) Introduction to fuzzy arithmetic: theory and applications. Van
Nostrand, New York
Kim S, Stoel L (2004) Dimensional hierarchy of retail website quality. Inf Manag 41(5):620–632
Klir GJ, Yuan B (1995) Fuzzy sets and fuzzy logic: theory and applications. Prentice-Hall, NJ
Lambert DM, Sharma A (1990) A customer-based competitive analysis for logistics decisions. Int
J Phys Distrib Logist Manag 20(1):17–24
Langari JY, Zadeh LA (1995) Industrial applications of fuzzy logic and intelligent systems. IEEE
press, New York
Mendel JM (1995) Fuzzy logic systems for engineering: a tutorial. Proc IEEE 83:345–377
Nielsen J, Tahir M (2005) Homepage usability: 50 websites deconstructed. New Riders,
Indianapolis, pp 130–135
O’Connor GC, O’Keefe B (1997) View the web as a marketplace: the case of small companies.
Decis Support Syst 21:171–183
Parasuraman A, Zeithaml VA, Berry LL (1985) A conceptual model of service quality and its
implications for future research. J Marketing 49:41–50
40.1 Introduction
Data mining is the process that attempts to discover usable information in large
data sets (Fayyad et al. 1996). It is a popular research topic in many fields.
However, the sample distribution of a data set may be highly imbalanced or skewed: one class might be represented by a large number of examples while the other is represented by relatively few. Since traditional classifiers which seek an overall accuracy over the full data set
often produce high accuracy on the majority class, while poor predictive accuracy
on the minority class, they are not apposite to deal with skewed data decision tasks
(Batista et al. 2004; Chawla et al. 2004).
There are also some methods proposed to cope with this kind of problem, such
as the methods of sampling (Batista et al. 2004; Tong et al. 2011), moving the
decision thresholds (Cristianini 2000; Jo and Japkowicz 2004) and adjusting the
cost-matrices (Cristianini 2000). Sampling methods tend to reduce the data imbalance level by downsampling (removing instances from the majority class), upsampling (duplicating training instances from the minority class), or both. Moving the
decision thresholds method tries to adapt the decision thresholds to impose bias on
the minority class. The third kind of methods tries to adjust the cost (weight) for
each class or change the strength of rules to improve the prediction accuracy
(Batista et al. 2004). However, these three kinds of methods lack a rigorous and
systematic treatment on imbalanced data (Huang et al. 2004). For example,
downsampling the data might lose information, while upsampling the data will
introduce noise (Bargiel and Andrze 2002).
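The downsampling/upsampling trade-off noted above can be sketched with naive random resampling. This is a generic illustration of the sampling family of methods, not the approach this chapter later adopts.

```python
import random

def rebalance(majority, minority, seed=0):
    """Naive rebalancing: downsample the majority class to twice the
    minority size and upsample the minority (sampling with replacement)
    to the same size. Illustrates the trade-off in the text:
    downsampling discards majority information, while upsampling
    only repeats existing minority examples (adding noise/over-fit risk)."""
    rng = random.Random(seed)
    target = 2 * len(minority)
    down = rng.sample(majority, min(len(majority), target))
    up = minority + rng.choices(minority, k=target - len(minority))
    return down, up

maj = list(range(100))        # 100 majority instances
mino = list(range(100, 110))  # 10 minority instances
d, u = rebalance(maj, mino)
print(len(d), len(u))  # both classes now have 20 instances
```

Note that `up` contains duplicated minority items; no genuinely new minority information is created, which is exactly the criticism raised in the text.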
How to build an effective and efficient model for this kind of problem is an important concern of data mining (Tong et al. 2011; Nie et al. 2011; Liu et al. 2011). In this study, 'information granulation' is conducted to build a knowledge discovery model to deal with this problem. Granular computing represents information in the form of aggregates such as clusters, classes, and subsets of a universe, which are called information granules (IGs), and then solves the targeted problem on each information granule (Yao 2001). When a problem involves incomplete, uncertain, or vague information, human beings tend to utilize aggregates rather than numbers to ponder the problem. If distinct elements cannot be differentiated by ordinary methods, one may study information granules, which are collections of entities aggregated together due to their similarity, functional adjacency, indistinguishability, or the like (Huang et al. 2004; Nie et al. 2011; Liu et al. 2011), and then solve the problem over the information granules.
The process of constructing information granules is referred to as information
granulation. It was first pointed out by Zadeh (1996) who proposed the term
‘information granulation’. Zadeh also emphasized that a plethora of details may
not necessarily amount to knowledge. Information granulation serves as an
abstraction mechanism to reduce conceptual redundancy. Granular computing (GrC), which concerns the representation and processing of complex information entities as information granules, is quickly becoming an emerging paradigm of information processing (Bargiel and Andrze 2002). GrC is a superset of the theory of fuzzy information granulation, rough set theory, and interval computations, and is a subset of granular mathematics (Zadeh 1997). GrC arises in the process of data abstraction and the derivation of knowledge from information.
In this study, we propose a granule features mining model for knowledge discovery (GFMM) to improve prediction performance: it should produce good prediction accuracy not only on the majority class but also on the minority class. Firstly, suitable information granules are constructed by ETM-ART2 (Luo and Chen 2008), an improved ART2
40 Knowledge Discovery from Granule Features Mining 393
with good clustering performance; then an information-granule-based key feature analysis method is proposed to discretely represent the IGs and mine compact knowledge rules. According to the knowledge rules, the final class of newly input data can be predicted quickly. Experiments on a glass classification problem show that our method outperforms individual SVM and C4.5 classifiers.
Imbalanced data sets occur in many practical industries, such as business, medicine, and fraud detection. Imbalanced data sets exhibit a skewed class distribution in which most instances belong to the majority class while far fewer belong to a smaller class; usually, the majority class has a much bigger proportion than the smaller class. Take fraud data for example: there are usually very few instances of fraud compared to the large number of honest instances, because transactions are honest rather than fraudulent by nature. Information granulation offers a better insight into the essence of the data sets and can help to comprehend the target problem. In this section, our proposed granule features mining model for knowledge discovery is described in detail. Figure 40.1 shows the process.
IGs are constructed by ETM-ART2 according to the similarity of the numerical data. The total number of IGs may be remarkably smaller than the size of the numerical data set; most importantly, the number of IGs in the majority class is also remarkably reduced compared to the size of the numerical data, which increases the proportion of the minority class and improves the imbalanced data situation. By constructing IGs, we also capture the main characteristics of the data by reducing some redundancy in the data sets. We then propose a key feature analysis method that processes IGs to mine compact knowledge rules.

Denote a data set D = {d_i}_{i=1}^{N}; it includes N sample patterns with M classes, each p-dimensional. Each sample pattern has an attribute expressing its class name.
IGs are usually grouped according to a similar 'size' (that is, granularity) in a single layer. Since IGs can exist at different levels of granularity, choosing the appropriate level of granularity can be helpful in dealing with the target problem, and it should be discovered as knowledge. There are many approaches to constructing IGs, such as rough sets, the Self-Organizing Map (SOM) network, etc. (Bargiel and Andrze 2002).
394 J. Luo et al.
The ETM-ART2 network can automatically group input patterns into a new cluster when they are not sufficiently similar to existing clusters. Furthermore, the ETM-ART2 network shows the same appropriate stability properties as the ART2 network: it is also a self-organizing network capable of dynamic, on-line learning, with higher accuracy of hierarchical clustering. So we use the ETM-ART2 network to construct IGs.
GI(v) = IAR(v) · PIR(v) / H(v)   (40.3)

When IAR(v) and PIR(v) are larger, the IGs are purer; when, at the same time, H(v) is smaller, that granularity is a good solution.

The vigilance parameter of ETM-ART2 can be set to about 1 and then decreased gradually until a satisfying vigilance with maximum GI(v) is found. The vigilance that maximizes GI(v) over granularity v is then used to construct appropriate IGs, and the suitable level of granularity by ETM-ART2 is selected.
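The vigilance-selection loop can be sketched as below. The ETM-ART2 clustering itself is not reproduced here; the candidate (IAR, PIR, H) values are illustrative numbers that would in practice be measured after clustering at each vigilance.

```python
def granularity_index(iar, pir, h):
    """Eq. (40.3): GI(v) = IAR(v) * PIR(v) / H(v).
    Larger IAR/PIR (purer granules) and smaller H favour a granularity."""
    return iar * pir / h

def select_vigilance(candidates):
    """candidates: {vigilance: (IAR, PIR, H)} measured after clustering
    at each vigilance. Returns the vigilance maximizing GI, mirroring
    the decrease-until-maximum procedure described above."""
    return max(candidates, key=lambda v: granularity_index(*candidates[v]))

# Illustrative measurements for three vigilance settings.
cands = {0.9: (0.95, 0.80, 0.40),
         0.8: (0.92, 0.85, 0.30),
         0.7: (0.88, 0.75, 0.35)}
print(select_vigilance(cands))
```

Here 0.8 wins because its H is smallest while IAR and PIR remain high, which is the trade-off GI(v) encodes.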
Some IGs are formed through ETM-ART2 network learning according to the selected granularity. Denote the number of IGs as q; usually q ≪ N. The number of IGs for the ith class is q_i.

It is difficult to mine knowledge rules directly from the attribute values of all IGs, so we propose the key feature analysis method for IGs to obtain key features that differentiate one class from the others significantly.
Firstly, we give some definitions:

For patterns of the ith class, the minimum and maximum values on the jth dimensional attribute compose the class's jth dimensional attribute value range, denoted CF_j^i.

For patterns in the kth granule of the ith class, the minimum and maximum values on the jth dimensional attribute compose the granule's jth dimensional attribute value range, denoted GF_j^(i,k). Any minimum or maximum value point of a granule is called a cut point on CF_j^i, and the interval between any two cut points is abbreviated as ICF.

For all patterns, the minimum and maximum values on the jth dimensional attribute compose the data set's jth dimensional attribute value range, denoted QF_j.
Then key features are analyzed as follows:

Take the jth dimensional attribute of the ith class as an example, and let S_j^i denote an ICF on CF_j^i. If GF_j^(i,k) ∩ S_j^i ≠ ∅, we say the kth IG covers S_j^i. Suppose the number of IGs of the ith class that cover S_j^i is g(i, S_j^i), and the number of IGs of non-ith classes that cover S_j^i is ḡ(i, S_j^i). Then we can compute the cover degree of IGs on S_j^i.

For the jth dimensional attribute of the ith class, choose the ICF with the maximum cover degree as the key feature range of the jth dimensional attribute for the ith class, denoted KF_j^i. If more than one ICF attains the maximum cover degree, choose the ICF with the minimum ḡ(i, S_j^i) value as the key feature. Then the key features of all attributes for all classes can be found.
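The cut-point/ICF scan on a single attribute can be sketched as below. The exact cover-degree formula is not given in this excerpt, so the definition used here (the share of covering granules that belong to the target class) is an assumption chosen to match the stated intent; the granule ranges are invented.

```python
def key_feature(granules, target_class):
    """granules: list of (cls, lo, hi) giving each IG's value range on
    one attribute. Splits the axis at every granule endpoint (cut
    points), then for each interval between consecutive cut points
    (an ICF) computes an assumed cover degree: the fraction of covering
    granules belonging to target_class. Returns the ICF with the
    highest cover degree as the key feature range."""
    cuts = sorted({p for _, lo, hi in granules for p in (lo, hi)})
    best, best_deg = None, -1.0
    for a, b in zip(cuts, cuts[1:]):
        own = sum(1 for c, lo, hi in granules
                  if c == target_class and lo < b and hi > a)
        other = sum(1 for c, lo, hi in granules
                    if c != target_class and lo < b and hi > a)
        if own + other and own / (own + other) > best_deg:
            best, best_deg = (a, b), own / (own + other)
    return best, best_deg

# Two class-A granules overlapping one class-B granule (illustrative).
granules = [("A", 0.0, 0.4), ("A", 0.1, 0.5), ("B", 0.45, 1.0)]
print(key_feature(granules, "A"))
```

The tie-breaking rule from the text (prefer the ICF with minimum ḡ) is omitted for brevity; here ties keep the first maximal ICF.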
For the jth dimensional attribute, the two endpoints of each class's key feature range segment QF_j into several sub-ranges, denoted sub-attributes; thus the jth dimensional attribute is segmented into several sub-attributes. According to the sub-attributes, each IG's attribute GF_j^(i,k) can be discretely represented by the value 1 or 0. Including the class of each IG, a decision table U can be figured out from each IG's sub-attributes.

Though the total number of attributes increases, the sub-attributes partitioned by key features have significant differentiating capability for the classes, are non-strongly correlated, and are represented by the discrete values 1 or 0. So mining from the decision table composed of sub-attributes makes it easier to obtain compact knowledge rules.
According to the decision table U, we use the C4.5 decision tree to mine knowledge rules, since the C4.5 decision tree algorithm is fast and easy to implement and yields useful rules.

A mined rule has the following format: if a_1 = x_1 and ... and a_p = x_p then c = c_t, where a_p = x_p denotes that sub-attribute a_p takes value x_p, and c_t is the decision class.
We use support and confidence to measure the knowledge rules. The set of instances that contain all items in the antecedent is denoted A(R), and the set of instances that contain all items in the consequent is denoted Y(R). Then support and confidence are defined as Sup(R) = |A(R) ∩ Y(R)| / |U| and Cer(R) = |A(R) ∩ Y(R)| / |A(R)|.

We also define the instance match degree for rule R as Mat(R): the ratio of the number of condition attributes the instance meets to the number of all condition attributes in the antecedent.
The steps to predict the test instance’s class by the knowledge rules are as
follows:
(1) Make the instance’s attributes discrete according to the sub-attribute found
above.
(2) Compute the MatðRÞ for each knowledge rule.
(3) Predict the test instance’s class by the rule with maximum Mat (R).
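The rule measures and the three prediction steps above can be sketched together; the decision table and rules below are toy examples, not mined from the glass data.

```python
def rule_measures(rule_cond, rule_class, table):
    """table: list of (attrs: dict, cls) rows of the decision table U.
    Sup(R) = |A(R) ∩ Y(R)| / |U|, Cer(R) = |A(R) ∩ Y(R)| / |A(R)|."""
    a = [row for row in table
         if all(row[0].get(k) == v for k, v in rule_cond.items())]
    ay = [row for row in a if row[1] == rule_class]
    sup = len(ay) / len(table)
    cer = len(ay) / len(a) if a else 0.0
    return sup, cer

def match_degree(rule_cond, instance):
    """Mat(R): fraction of the rule's condition attributes the instance meets."""
    hit = sum(1 for k, v in rule_cond.items() if instance.get(k) == v)
    return hit / len(rule_cond)

def predict(instance, rules):
    """Step (3): the class of the rule with maximum Mat(R)."""
    best = max(rules, key=lambda r: match_degree(r[0], instance))
    return best[1]

table = [({"a1": 1, "a2": 0}, "c1"),
         ({"a1": 1, "a2": 1}, "c1"),
         ({"a1": 0, "a2": 1}, "c2")]
rule = ({"a1": 1}, "c1")
print(rule_measures(*rule, table))        # support and confidence of the rule
print(predict({"a1": 1, "a2": 0}, [rule, ({"a1": 0}, "c2")]))
```

Because Mat(R) is a ratio over condition attributes, a partially matching instance can still be classified by the closest rule rather than being rejected.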
We use the glass data set from the UCI machine learning repository (Lake and Merz 1998), from which we assemble three data sets for the experiments, depicted in Table 40.1. Glass1 includes two classes: building_windows_float_processed and others. Glass2 includes two classes: headlamps and others. Glass3 includes three classes: vehicle_windows_float_processed, vehicle_windows_non_float_processed, and others.

Table 40.1 Data sets

Data set  Number  Attributes no.  % Class
Glass1    214     9               32.71 %:67.29 %
Glass2    214     9               13.55 %:86.45 %
Glass3    214     9               40.65 %:35.51 %:23.83 %
40.3.3 Results
When inputting the samples to ETM-ART2 to construct IGs, we set the vigilance to 0.9 and then decreased it to 0.5 in steps of 0.05. The granularity v with maximum GI(v) is selected. The experimental data about the IGs are listed in Table 40.2.

Compared to the size of the training patterns, the number of IGs is reduced to about 30 %. Aver(IAR) is above 90 % and PIR is above about 80 %, so these IGs are helpful for mining useful knowledge rules. The classes of the test patterns were then predicted by the knowledge rules.
To compare the performance of GFMM, we conducted experiments with two other approaches: (1) a single C4.5 decision tree without information granulation; (2) SVM (Cristianini 2000; Platt 1999), based on the Pearson VII kernel function and trained with sequential minimal optimization. The results are listed in Table 40.3.
It can be seen that GFMM has better classification accuracy, especially for the minority class, with a manifestly higher GM. C4.5 and SVM show good overall classification accuracy but poorer accuracy on the minority class, and their GM is poorer.
40.4 Conclusions
In this paper, a granule features mining model for knowledge discovery (GFMM) is studied. We proposed a granularity index and used ETM-ART2 to construct useful information granules, and then proposed an information-granule-based key feature analysis method to discretely represent the IGs and mine compact knowledge rules. New patterns can be quickly predicted by the knowledge rules.
In many real-world data sets, most of the instances belong to a larger class and far fewer belong to a smaller one. Usually, the smaller class is the one of concern and interest, and the cost is usually high when instances of the smaller (positive) class are misclassified. Systems that ignore the imbalanced class distribution tend to misclassify minority class instances as majority, leading to a high false rate on the minority class. In this paper, we consider IGs instead of numerical data, which can increase the distribution proportion of the minority class and improve the imbalanced data situation. We also utilize ETM-ART2's properties to construct suitable IGs conveniently. Through extensive experiments with datasets of different imbalance ratios, our proposed method is shown to be effective and superior to several other classification methods such as single C4.5 and SVM.
References
Platt JC (1999) Fast training of support vector machines using sequential minimal optimization.
In: Schoelkopf B, Burges C, Smola A (eds) Advances in kernel methods: support vector
learning. MIT Press, Cambridge, pp 185–208
Tong L-I, Chang Y-C, Lin S-H (2011) Determining the optimal re-sampling strategy for a
classification model with imbalanced data using design of experiments and response surface
methodologies. Expert Syst Appl 38(4):4222–4227
Yao YY (2001) On modeling data mining with granular computing. In: Proceedings of the 25th
international computer software and applications conference on invigorating software
development, Washington, DC, USA, pp 638–643
Zadeh LA (1996) Fuzzy sets and information granularity. In: Klir GJ, Yuan B (eds) Fuzzy sets,
fuzzy logic, and fuzzy systems. World Scientific Publishing, River Edge, pp 433–448
Zadeh LA (1997) Announcement of GrC
Chapter 41
Multivariable Forecasting Model Based on Trend Grey Correlation Degree and its Application
Abstract Among current forecasting methods, the trend grey correlation degree forecasting method is limited to single-variable time series data and cannot solve the problem of multivariable forecasting, while multiple regression forecasting can only be used for multivariable linear forecasting and is easily affected by random factors. Therefore, this paper combines trend grey correlation degree forecasting based on an optimization method with multiple regression forecasting, generates a multivariable forecasting model based on trend grey correlation analysis, and uses this model to forecast GDP in Henan Province, not only to overcome the effect of random factors on the time series, but also to comprehensively consider the various factors that affect the development of the object, thus improving accuracy and increasing the reliability of forecasting. The paper also provides a new method for the study of multivariable combination forecasting.
Keywords Grey prediction · Multivariate regression forecasting · Optimization model · Trend grey correlation degree
41.1 Introduction
The grey system theory was first proposed in 1982 by the Chinese scholar Professor Deng Julong. This theory starts from the system perspective to study the relationships among information, that is, to study how to use known
L. Fu L. Wang
Management and Economic Department, Public Resource
Management Research Center, Tianjin University, Tianjin, China
J. Han (&)
Management and Economic Department, Tianjin University, Tianjin, China
e-mail: [email protected]
information to reveal unknown information. Grey system theory requires little computation and does not assume a typical distribution, so it has been widely used in many areas. The earliest and most typical forecasting model in grey system theory is the GM (1, 1) model, but cases with excessive deviation have appeared in specific applications (Zhu 1994). To overcome this shortcoming, various corrections of GM (1, 1) have been proposed, such as GM (1, 1) with residual error correction and GM (1, 1) with K correction (Zhang and Dai 2009). For multivariable grey prediction, the first model was GM (1, N) (Deng 1990); improved models followed, for example GM (n, h) and widened GM (n, h). However, these are all exponential models, which is unrealistic in many practical applications (Xiong et al. 2003). For this reason, Peng and Xiong (2008) proposed a trend grey correlation degree forecasting model based on the time series correlation analysis of Zhao et al. (2004), using an orthogonal design strategy to choose the combinations compared when ranking the absolute differences. But this method selects only some representative combinations rather than exhausting all possible ones, which inevitably affects the accuracy of the forecast. To exhaust all data combinations containing the value to be predicted during the series calculation, Wang (2011) established objective functions according to the goal of the trend-correlation-degree forecasting method, constructed an optimization problem, and then solved for the value to be predicted over the global scope. However, that method is still limited to single-variable time series and cannot handle multivariable forecasting. This paper therefore combines optimization-based trend grey correlation degree forecasting with multiple regression forecasting, which not only overcomes the effect of random factors on the time series but also comprehensively considers the various factors that affect the development of the object, thereby improving the accuracy and reliability of the forecast.
Let:

Δx0(k + 1) = x0(k + 1) − x0(k)
Δx1(k + 1) = x1(k + 1) − x1(k)    (41.1)

If Δx0(k + 1)Δx1(k + 1) ≥ 0 (k = 1, 2, …, n − 1), then the series Δx0 and Δx1 are said to have the same trend.
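Definition 1 amounts to checking sign-consistency of the first differences of the two series; a minimal Python sketch (the function name is ours):

```python
def same_trend(x0, x1):
    """Definition 1: the difference series have the same trend when every
    pair of first differences has a non-negative product."""
    n = len(x0)
    return all((x0[k + 1] - x0[k]) * (x1[k + 1] - x1[k]) >= 0
               for k in range(n - 1))
```

For example, two strictly increasing series always have the same trend, while a series that dips where the other rises does not.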
Definition 2 (Zhao et al. 2004): Let

X0 = (x0(1), x0(2), …, x0(n))
Xi = (xi(1), xi(2), …, xi(n)), (i = 1, 2, …, m)

be time interval series of the same length, with

Σ_{k=1}^{n−1} |xi(k + 1) − xi(k)| ≠ 0 (i = 0, 1, …, m)

Let:

Di = (1/(n − 1)) Σ_{k=1}^{n−1} |xi(k + 1) − xi(k)|    (41.2)

yi(k) = xi(k)/Di    (41.3)

In these formulas, k = 1, 2, …, n and i = 0, 1, 2, …, m.

Δyi(k + 1) = yi(k + 1) − yi(k)    (41.4)

(k = 1, 2, …, n − 1; i = 0, 1, 2, …, m)
fi(k + 1) = 0, if Δy0(k + 1)Δyi(k + 1) = 0

fi(k + 1) = sgn(Δy0(k + 1)Δyi(k + 1)) · (|Δy0(k + 1)| + |Δyi(k + 1)|) / (2 max(|Δy0(k + 1)|, |Δyi(k + 1)|)), if Δy0(k + 1)Δyi(k + 1) ≠ 0    (41.5)

c(X0, Xi) = (1/(n − 1)) Σ_{k=1}^{n−1} fi(k + 1)    (41.6)
For single-factor trend analysis and forecasting, the trend extrapolation method is often used (Xu 1998; Yu and Huang 2003). But when this method is applied to a series affected by random factors, it produces considerable errors and distorts the predictions. Using the principle of multiple regression to establish a multivariable grey prediction model makes it possible to combine the various factors that influence the development of the object into a comprehensive prediction and to improve the forecasting accuracy (Su et al. 2007; Chen and Wang 2012). The multivariate forecasting model based on trend grey correlation degree is built as follows:
ŷ(t) = b0 + b1 x̂1(t) + b2 x̂2(t) + ⋯ + bk x̂k(t)    (41.9)

In this formula, ŷ(t) is the predictive value of the object at time t; x̂i(t) is the predictive value of the variable Xi at time t, obtained by using the optimization model to solve the trend grey correlation degree model; bi (i = 1, 2, …, k) is the parameter to be estimated.
In the model, bi is decided by the least squares method (Li and Pan 2005):
In this formula, RSS is the residual sum of squares, TSS is the total sum of squares, k is the number of factors, and n is the total number of observations. The closer R² is to 1, the better the estimated regression function fits the sample points, that is, the stronger the explanatory power of the independent variables on the dependent variable (Zhang 2009).
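A pure-Python sketch of the least-squares estimation behind (41.9) together with R² = 1 − RSS/TSS (the function name and the toy data below are ours):

```python
def ols_fit(y, xs):
    """Least-squares estimate of b in y = b0 + b1*x1 + ... + bk*xk via the
    normal equations (X'X) b = X'y; returns (b, R_squared)."""
    n, k = len(y), len(xs)
    X = [[1.0] + [xs[j][i] for j in range(k)] for i in range(n)]
    p = k + 1
    # Normal equations: A = X'X, c = X'y
    A = [[sum(X[r][i] * X[r][j] for r in range(n)) for j in range(p)]
         for i in range(p)]
    c = [sum(X[r][i] * y[r] for r in range(n)) for i in range(p)]
    # Gaussian elimination with partial pivoting
    for i in range(p):
        piv = max(range(i, p), key=lambda r: abs(A[r][i]))
        A[i], A[piv] = A[piv], A[i]
        c[i], c[piv] = c[piv], c[i]
        for r in range(i + 1, p):
            m = A[r][i] / A[i][i]
            for j in range(i, p):
                A[r][j] -= m * A[i][j]
            c[r] -= m * c[i]
    b = [0.0] * p
    for i in range(p - 1, -1, -1):
        b[i] = (c[i] - sum(A[i][j] * b[j] for j in range(i + 1, p))) / A[i][i]
    # R^2 = 1 - RSS/TSS
    yhat = [sum(bj * X[r][j] for j, bj in enumerate(b)) for r in range(n)]
    ybar = sum(y) / n
    rss = sum((y[r] - yhat[r]) ** 2 for r in range(n))
    tss = sum((y[r] - ybar) ** 2 for r in range(n))
    return b, 1.0 - rss / tss
```

On exactly linear data the true coefficients are recovered and R² = 1.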
(2) F-Test

F = (ESS/k) / (RSS/(n − k − 1)) = [Σ_{i=1}^{n} (ŷi − ȳ)² / k] / [Σ_{i=1}^{n} (yi − ŷi)² / (n − k − 1)]    (41.12)
t = b̂j / s(b̂j), (j = 1, 2, …, k)    (41.13)

In this formula, t follows the t(n − k − 1) distribution. Given the significance level α, look up the critical value t_{α/2}(n − k − 1) in a t-test table. If |t| > t_{α/2}(n − k − 1), the null hypothesis is rejected, which means the impact of the independent variable on the dependent variable is significant (Li and Liu 2010).
This paper selects the State-owned Units' Investment in Fixed Assets (x1) and the Total Revenue from Leasing of the Use Right of State-owned Land (x2) as factor variables, studies the effect of the two factors on the GDP (y) of Henan Province, and makes forecasts. The data come from the Statistical Yearbook of Henan Province (2005–2011) (Statistics Bureau of Henan Province 2005).
First, use formula (41.7) to build three series, X0, X1 and X2, from the State-owned Units' Investment in Fixed Assets (x1).
X0 = (1127.02, 1367.02, 1608.59, 1715.78, 2127.02, 2586.9)
X1 = (1367.02, 1608.59, 1715.78, 2127.02, 2586.9, 2857.41)
X2 = (1608.59, 1715.78, 2127.02, 2586.9, 2857.41, S1)
Second, use formulas (41.1)–(41.6) to calculate fi(k + 1) and c(X0, X1):

f1(2) = 0.9930, f1(3) = 0.7173, f1(4) = 0.6330, f1(5) = 0.9565, f1(6) = 0.7881

c(X0, X1) = 0.8176
Then, calculate c(X1, X2), which contains the predictive value x̂1(t) for 2011; substitute it into formula (41.8) and solve with Matlab to get:

S1 = 3139.5
Similarly, obtain the fi(k + 1) values and c(X0, X1) for the Total Revenue from Leasing of the Use Right of State-owned Land (x2) and the GDP of Henan Province (y), together with the predictive values S2 and S0. The results are shown in Table 41.1.
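The chapter solves for S1 in Matlab but does not reproduce the objective function of Wang (2011). One plausible reading (our assumption) is to choose the unknown S1 that maximizes the trend grey correlation degree c(X1, X2) of (41.2)–(41.6); a grid-search sketch under that assumption, with the search interval and step ours:

```python
def tgc(x0, xi):
    """Trend grey correlation degree, formulas (41.2)-(41.6)."""
    n = len(x0)
    d0 = sum(abs(x0[k + 1] - x0[k]) for k in range(n - 1)) / (n - 1)
    di = sum(abs(xi[k + 1] - xi[k]) for k in range(n - 1)) / (n - 1)
    total = 0.0
    for k in range(n - 1):
        a = (x0[k + 1] - x0[k]) / d0
        b = (xi[k + 1] - xi[k]) / di
        if a * b == 0:
            f = 0.0
        else:
            f = ((1.0 if a * b > 0 else -1.0)
                 * (abs(a) + abs(b)) / (2 * max(abs(a), abs(b))))
        total += f
    return total / (n - 1)

# Lagged fixed-asset series from the chapter; the last element of X2 is
# the unknown 2011 value S1.
X1 = [1367.02, 1608.59, 1715.78, 2127.02, 2586.9, 2857.41]
X2_head = [1608.59, 1715.78, 2127.02, 2586.9, 2857.41]

# Grid search over candidate S1 values (interval and step are assumptions).
candidates = [2857.41 + 0.5 * i for i in range(1, 4000)]
s1 = max(candidates, key=lambda s: tgc(X1, X2_head + [s]))
```

A finer grid or a scalar optimizer would refine the estimate; this sketch only illustrates solving for the unknown in the global scope.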
According to the data of y, x1 and x2 from 2000 to 2010, use SPSS 18.0 to obtain the numerical values of the estimated parameters bi and the prediction model.
Table 41.1 Grey correlation degree, correlation coefficients and predictive value of each variable

                   GDP (y)    Total revenue from leasing of the use right of state-owned land (x2)
f1(2)              0.8814     0.6006
f1(3)              0.8834     0.5929
f1(4)              0.9957     0.8988
f1(5)              0.7125     0.5271
f1(6)              0.7316     0.5461
c(X0, X1)          0.8409     0.6331
Predictive value   20688      789.28
41.4 Conclusions
The actual GDP of Henan Province in 2011 was 27232.04; thus

27232.04 − S0 = 6544.04
27232.04 − ŷ(t) = 952.74
It is obvious that the deviation of ŷ(t) is smaller, i.e., closer to the actual value. This shows that combining optimization-based trend grey correlation analysis prediction with multiple regression prediction not only overcomes the effect of random factors on the time series but also comprehensively considers the various factors that affect the development of the object, making the prediction more accurate. It also provides a new method for the study of multifactor combination forecasting.
Acknowledgments This work was supported by the Soft Science Research Project of Henan Province: Research on State-owned Assets Supervision Mode in Henan Province (112400430009).
References
Chen X, Wang B (2012) The grey forecasting of multiple factors and its algorithm. Math Pract
Theory 42(1):80–83 (In Chinese)
Henan channel on Xinhuanet. http://www.ha.xinhuanet.com/add/hnnews/2012/02/29/content_24799412.htm
Deng J (1990) The tutorial of grey system theory. Huazhong University of Science and
Technology Press, Wuhan, pp 1–30 (In Chinese)
Li G, Liu D (2010) Experimental course of econometrics. China Economic Press, Beijing, p 37
(In Chinese)
Li Z, Pan W (2005) Econometrics. Higher Education Press, Beijing (In Chinese)
Peng W, Xiong H (2008) The prediction method based on grey correlation analysis. J Changjiang
Univ 5(1):129–131 (In Chinese)
Statistics Bureau of Henan Province (2005–2011) Statistical yearbook of Henan province. China
Statistic Press, Beijing (In Chinese)
Su B, Cao Y, Wang P (2007) Research on grey forecasting model of multi-factor time series.
J Xian Univ Archit Technol 39(2):289–292 (In Chinese)
Wang Y (2011) Application of the optimization method in the forecasting based on the grey
correlation degree of time series data. J Qingdao Univ Sci Technol 32(3):317–319
(In Chinese)
Xiong H, Chen J, Qu T (2003) Grey forecasting method based on analogy relation. J Wuhan
Transp Univ 1–20 (In Chinese)
Xu G (1998) Statistic forecast and decision-making. Shanghai University of Finance &
Economics Press, Shanghai (In Chinese)
Yu G, Huang H (2003) The selection method of time series data model. J GuangXi Norm Univ
21(1):191–194 (In Chinese)
Zhang X (2009) Mathematical economics. China Machine Press, Beijing, p 113 (In Chinese)
Zhang H, Dai W (2009) The improvement of grey GM (1, 1) forecasting model. J Zhejiang Univ
Sci Technol 26(1):142–145 (In Chinese)
Zhao B, Tian J, Zhang H (2004) The grey method on correlation degree analysis of time series
data. J Shandong Norm Univ 19(4):17–19 (In Chinese)
Zhu B (1994) Study and reviews on the essential methods of grey system. Syst Eng Theory Pract
14(4):52–54 (In Chinese)
Chapter 42
Partner Selection About the PPP
Reclaimed Water Project Based
on Extension Evaluation Method
Keywords Extension · Partner selection · Private-Public-Partnership · Reclaimed water project
42.1 Introduction
In recent years, practice and research on the construction and management of public projects at home and abroad have energetically pursued the Private-Public-Partnership (PPP) (Gao 2007). Its aim is to attract non-public sectors into infrastructure projects, alleviate governments' financial pressure and improve the level of public services. PPP has now been widely used worldwide and is becoming a core idea and measure by which governments achieve their economic goals and upgrade public services (Li and Qu 2004). Reclaimed water, after advanced treatment, can satisfy the water-quality demands of industry, agriculture and urban landscaping. As an effective way to alleviate water shortage in water-deficient areas, it has attracted close attention. Meanwhile, many factors restrict the construction and development of reclaimed water projects, and funding is the main one.

As a constituent part of public projects, the PPP project finance mode can effectively solve the funding problem that reclaimed water projects face and push forward their construction. A PPP project is based on cooperation between government and the private sector, so constructing a rational and effective relationship determines the success of the project (Li 2009). Therefore, one of the first questions the public sector faces in PPP project finance is how to choose appropriate private partners (Meng and Meng 2005); this is also a critical link that determines the success of a PPP project. Research on partner selection mostly concentrates on how supply chain enterprises look for partners; research on governments selecting private partners in PPP projects is very scarce. According to the characteristics of reclaimed water PPP projects and experience at home and abroad, this paper analyzes the indexes for selecting partners, and then uses the extension evaluation method to evaluate partners reasonably and effectively and to optimize partner selection.
Reclaimed water projects belong to infrastructure: they occupy large sums of funds and have the characteristics of low return and a long payback period. They also face many risks, such as market, price and water-quality risks. According to the characteristics of reclaimed water PPP projects and experience at home and abroad, this paper builds an evaluation index system for selecting reclaimed water PPP project partners (Hou 2011).
(1) The credit status of the enterprise. Credit status mainly includes two aspects: commercial credit, assessed through credit rating, and social credit, assessed through the fulfillment of social responsibility. Lacking commercial credit increases project risk and transaction costs. Lacking social credit makes it hard to balance enterprise benefits and social benefits, increases public-private conflict, and damages the overall goals of the project.
(2) The scale of the cooperating enterprise. The scale of the cooperating enterprise mainly covers three conditions: registered capital, production capacity and staff number. Usually, the larger the company, the stronger its comprehensive strength and ability to resist risk. A larger cooperating enterprise can absorb part of the risk independently when environmental factors change, which contributes to the project's success.
(3) The cooperating enterprise's financial situation. Financial condition covers total assets, cash flow, the asset-liability ratio, return on assets and other major economic and financial indexes. Analyzing these indexes reveals the enterprise's asset structure, capital condition and operating condition.
(4) The technology level of the cooperating enterprise. The technology level includes three aspects: the number of technical personnel, equipment, and similar project experience. Reviewing the technique of the cooperating enterprise helps to identify its advantages, which largely determines the outcome of the project.
(5) The operating and management level of the cooperating enterprise. The operating and management level includes profitability, management quality, management system, organizational form, and so on. The reclaimed water industry is newer than many others; the public sector lacks experience in project management, construction and operation, and reclaimed water projects have long payback periods, so the project needs a partner with a high management level.
(6) The willingness and ability of the cooperating enterprise to bear risks. This mainly includes risk management ability, risk attitude and the level of risk sharing. Reasonable risk sharing is an important success factor for a PPP project (Smith 2004) and the foothold of a stable cooperative relationship. At the present stage, the reclaimed water industry is still being promoted and developed and faces many complicated risks. If project partners lack the willingness and ability to share risk, this will not only increase the project's construction and operation costs but also reduce the overall efficiency of project implementation (Cai 2000).
Synthesizing the above analysis, the evaluation index system for reclaimed water PPP project partners is built.
Extenics takes the matter-element theory and extension mathematics as its framework. An object's basic unit is described by the ordered triple R = (N, C, V), called a matter-element, composed of the object N, the feature C and the eigenvalue V, where V may be an interval, V = (a, b). These are called the three factors of the matter-element. In the matter-element, V = C(N) reflects the quality-quantity relationship of the object, composed of the feature names (C) and their eigenvalues (V) (Cai 1999). An object with many characteristics can be described by an n-dimensional matter-element:
R = [ N  C1  V1 ;  C2  V2 ;  … ;  Cn  Vn ] = [ R1 ; R2 ; … ; Rn ]
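An n-dimensional matter-element maps naturally onto a simple record type; a sketch (field names and the sample enterprise values are ours, for illustration only):

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class MatterElement:
    """Matter-element R = (N, C, V): the object name N, its features
    C1..Cn and their eigenvalues V1..Vn (here interval values (a, b))."""
    name: str
    features: List[str]
    values: List[Tuple[float, float]]

r1 = MatterElement("enterprise R1",
                   ["credit status", "enterprise scale"],
                   [(70.0, 90.0), (500.0, 800.0)])
```

Each feature is paired with its eigenvalue interval, mirroring the rows Ci, Vi of the matrix above.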
The index weights are determined by the Delphi method. The weight of each sub-goal is ai (i = 1, 2, …, n), with Σ_{i=1}^{n} ai = 1.
The smaller the better:

K(x) = (b − x)/(b − a), x ∈ ⟨a, b⟩
K(x) = 1, x ≤ a    (42.2)
K(x) = 0, x ≥ b
(2) Evaluate each sub-goal Ci for the first-class evaluation.

Use the dependent function to calculate the correlation degree of the evaluated object N with respect to Ci:

Ki(N) = Σ_{k=1}^{n0} aik K(cik)    (42.3)
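Under our reading of (42.2) ("the smaller the better": K = 1 at or below a, falling linearly to 0 at b), the first-class evaluation (42.3) is a weighted sum of dependent-function values. A sketch with hypothetical sub-index values, ranges and equal weights (all ours):

```python
def k_smaller_better(x, a, b):
    """Dependent function of (42.2): 1 for x <= a, 0 for x >= b,
    linear (b - x)/(b - a) in between (our reading of the original)."""
    if x <= a:
        return 1.0
    if x >= b:
        return 0.0
    return (b - x) / (b - a)

def first_class_eval(values, bounds, weights):
    """Formula (42.3): K_i(N) = sum_k a_ik * K(c_ik)."""
    return sum(w * k_smaller_better(v, a, b)
               for v, (a, b), w in zip(values, bounds, weights))

# Hypothetical sub-indexes of one sub-goal
vals = [0.3, 0.5, 0.9]
bounds = [(0.0, 1.0)] * 3
w = [1 / 3] * 3
ki = first_class_eval(vals, bounds, w)
```

The second-class evaluation then weights these Ki(N) by the sub-goal weights ai in the same way.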
Suppose that we now need to evaluate four potential cooperating enterprises R1, R2, R3, R4 and select suitable companies as project partners. The index values of each enterprise are given in Table 42.1.

To simplify the problem, we define each sub-index weight as equal, and use the correlation function and the weights to obtain the qualified degrees of the various indicators of each enterprise. Use formula (42.3) to calculate the correlation degree Ki(N) of the enterprise sub-goals Ci (i = 1, 2, 3, 4, 5, 6), then use the analytic hierarchy process to determine the weight vector of the sub-goals, a = [0.21, 0.14, 0.23, 0.1, 0.16, 0.16], and finally use formula (42.4) to calculate the goodness values C(N1), C(N2), C(N3), C(N4). The goodness of each enterprise is C(Ni) = (0.48, 1.25, 0.36, 0.58); ranked, C(N2) > C(N4) > C(N1) > C(N3). C(N2) is the largest, so the optimal partner is enterprise R2.
42.5 Conclusions
This paper constructs an evaluation index system for partner selection in reclaimed water PPP projects and applies the extension evaluation method to a comprehensive evaluation of potential partners. Considering multiple perspectives and multiple factors, it makes rational use of the collected information, establishes an evaluation model with multi-level indicator parameters, and uses quantitative values to present the assessment results, which can fully reflect the comprehensive level of each proposal. The evaluation is simple, easy to operate and practical, providing a straightforward method for reclaimed water PPP projects to select partners.
References
Cai W (1999) Extension theory and its application. Chin Sci Bull 44(17):1538–1548
Cai W (2000) The extenics overview. Syst Eng Theory Pract 2:92–93
Dai B, Qu X (2011) Research on a model for choosing cooperation innovation partners based on
fuzzy comprehensive evaluation. Sci Technol Prog Policy 28:120–122
Gao X (2007) The recycled water recycle project risk analysis and control. Group Economics
Research, pp 222–223
Hou L (2011) Game analysis of government investment public construction projects. J Eng
Manag 27:323–324
Huang Y (2007) Research on evaluation method and decision making of PPP project. Tongji
University, Shanghai
Li C (2009) Research on the evaluation of urban transportation sustainable development based on
extension method. Beijing Jiaotong University, Beijing
Li C, Qu J (2004) Project finance, 2nd edn. Science Press, Beijing, p 198
Meng F, Meng J (2005) Partners extension synthesis assessment method. Computer Integrated
Manufacturing System, pp 869–871
Smith NJ (2004) Engineering project management. Blackwell Science Ltd, Oxford, pp 260–263
Zhao H, Xu S, He J (1996) Analytical hierarchy process. Science Press, Beijing
Chapter 43
Personnel BDAR Ability Assessment
Model Based on Bayesian Stochastic
Assessment Method
Abstract It is important to evaluate each trainee's ability in order to improve BDAR training efficiency. Typical assessment methods include fuzzy evaluation, grey correlation evaluation, neural networks and so on. None of these methods can make full use of historical information: determining the membership function in the first two is not easy, and an ANN needs a large data sample, which is difficult to obtain in BDAR training. So these methods are unsuitable for modeling the assessment of personnel BDAR ability. We therefore introduce the Bayesian stochastic assessment method, which deals well with nonlinear and random problems. The standard for each index is given according to the characteristics of BDAR training. A modified normal distribution that makes full use of historical information is put forward to determine the prior probability, and the posterior probability is determined by the distance method.
Keywords Bayesian · Stochastic assessment · Bayesian theorem · Personnel BDAR ability · Prior probability
43.1 Introduction
Battlefield damage assessment and repair (BDAR) is a series of activities that recover equipment's basic functions quickly through emergency diagnosis and rush-repair technologies. It consists of battlefield damage assessment and damage repair
Let B1, B2, …, Bn denote an events group with Bi ∩ Bj = ∅ (i ≠ j), where ∅ is the impossible event, P(∪_{i=1}^{n} Bi) = 1 and P(Bi) > 0. Then, by the Bayesian theorem of probability theory (Cheeseman et al. 1988), we get:

P(Bi | A) = P(Bi)P(A | Bi) / Σ_{i=1}^{n} P(Bi)P(A | Bi)    (43.1)

where P(A | Bi) and P(Bi | A) are conditional probabilities and P(Bi) is the prior probability of each event.
Personnel BDAR ability can be classified into four levels B1, B2, B3, B4, meaning BDAR engineer, BDAR assessment, normal technician and equipment operator (Cheeseman et al. 1988; Fried et al. 1997). P(Bi) (i = 1, 2, 3, 4) is an initial estimate of each trainee's ability level without any sample information; it is the prior probability. We can obtain the prior probability from the correlative index information in a general grade evaluation, but that is reasoned only from experience and judgment without enough data. Considering both the BDAR characteristics and the requirement of fully using the trainee's history information, we modify the normal distribution to estimate the prior probability of the grade to which each person's BDAR level belongs. This method is discussed in detail in the next section.
P(A|Bi) is the probability that index A belongs to level Bi. P(Bi|A) is the re-estimate of each ability level's probability after observing index A; it is the posterior probability. Bi is the standard of the ith personnel ability level, i = 1, 2, 3, 4; A is an index for BDAR assessment; Akj is the value of the jth assessment index for the kth BDAR person; j is the assessment index number (j = 1, 2, …, m). From the characteristics of personnel BDAR ability level evaluation, m = 4. Then (43.1) can be applied to this problem as follows:

P(Bi | Akj) = P(Bi)P(Akj | Bi) / Σ_{i=1}^{4} P(Bi)P(Akj | Bi)    (43.2)
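The posterior update (43.2) is a one-line normalization; a sketch with hypothetical priors and likelihoods (all numbers are ours, for illustration):

```python
def posterior(prior, likelihood):
    """Formula (43.2): P(Bi | Akj) proportional to P(Bi) * P(Akj | Bi),
    normalized over the four ability levels."""
    joint = [p * l for p, l in zip(prior, likelihood)]
    z = sum(joint)
    return [j / z for j in joint]

# Hypothetical: uniform prior over B1..B4, and an observed index value
# three times as likely under B2 as under the other levels.
prior = [0.25, 0.25, 0.25, 0.25]
lik = [0.1, 0.3, 0.1, 0.1]
post = posterior(prior, lik)
```

With a uniform prior the posterior simply follows the likelihood; a non-uniform prior from the trainee's history would shift the result accordingly.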
(1) Personnel BDAR ability assessment index and standard: Personnel BDAR ability assessment differs from conventional evaluations, in which a real person operates real equipment and is evaluated by human experts, because the circumstances needed for BDAR training cannot be well reproduced in the real world. So we must train by simulation: a real person operates virtual prototypes and is evaluated automatically by assessment software, and the assessment indexes must suit this characteristic. By analysis (Li et al. 2000, 2003), we use the help number, the operating time, the irrelevant operation number and the number of missed essential operations to assess personnel BDAR ability (Fig. 43.1). The first three indexes are fixed values and the last varies with the particular battle damage. Supposing the maximum allowed time is T, we can derive the standard for each level (Wang et al. 2006), as shown in Table 43.1.
(2) Prior probability (Ronald and Myers 1978): The prior probability is the primary estimate of personnel BDAR ability without any sample information. If the trainee is evaluated for the first time, without any ability history in the database, the prior is the uniform distribution with value 1/n (Li et al. 2000; Gu et al. 2011). If the trainee has been assessed before, we use a modified normal distribution that fully considers the ability history to estimate the prior probability (Cheng et al. 1985). This method is detailed in the following.

First, set an interval of values for each ability level: B = {B1, B2, B3, B4} = {[100, 80), [80, 60), [60, 30), [30, 0]}, and take the median of each interval.
(Figure: prior probability of each level B1–B4 as a function of the trainee's historical assessment value.)
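The chapter's modified normal distribution is not fully reproduced here; one simple reading consistent with the prior-probability curves is a normal density centered at the trainee's historical score, with each level's prior obtained by integrating the density over that level's score interval and normalizing. A sketch under that assumption (the spread σ is ours):

```python
import math

def normal_cdf(x, mu, sigma):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def level_priors(history_score, sigma=15.0):
    """Prior P(Bi) for levels B1..B4 with score intervals [80,100],
    [60,80), [30,60), [0,30): mass of a normal centered at the trainee's
    historical score over each interval, renormalized over [0, 100]."""
    intervals = [(80.0, 100.0), (60.0, 80.0), (30.0, 60.0), (0.0, 30.0)]
    masses = [normal_cdf(b, history_score, sigma)
              - normal_cdf(a, history_score, sigma)
              for a, b in intervals]
    z = sum(masses)
    return [m / z for m in masses]
```

A trainee whose historical score lies inside a level's interval gets the largest prior mass on that level, with the neighboring levels receiving smaller but nonzero mass, matching the bell-shaped curves of the figure.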
where Wj is the weight of each assessment index. There are many methods to calculate index weights, such as the Analytic Hierarchy Process (AHP), binomial coefficient weighted sum, the rough set method, Delphi, the entropy method, main factor analysis, multiple regression analysis and so on (Park and Kim 1999). Each method has its own advantages and disadvantages. Based on the characteristics of BDAR ability assessment, we apply a modified AHP to determine the weight of each index, as shown in Table 43.2 (Jia 1995).
(5) Final decision of BDAR ability level based on maximum probability: According to the maximum likelihood classification rule, the trainee's BDAR ability level (Ph) is obtained by choosing the maximum among the probabilities of the levels:

Ph = max Pi (i = 1, 2, 3, 4)    (43.7)
43.4 Conclusion
The Bayesian stochastic assessment method uses history information effectively and can deal with the complex nonlinear relationships in assessment. Its physical meaning is explicit, its calculation steps are easy and its effectiveness is good. By analysis, we take the help number, the irrelevant operation number, the operating time and the number of missed essential operations as the assessment factors, and build a comprehensive model of personnel BDAR ability based on the Bayesian stochastic assessment method. The model is consistent with the other assessment methods mentioned above, as shown by the application instance. In the future, we will apply this method to develop a BDAR training assessment system to improve the effectiveness of BDAR training simulation.
References
Cheeseman P, Kelly J, Self M, Stutz J, Taylor W, Freeman D (1988) Auto class: A Bayesian
classification system. In: Proceedings of the 15th International conference on machine
learning, vol 140, pp 52–65
Cheng P, Chen XR, Chen GJ, Wu CY (1985) Parameter estimation (in Chinese). Shanghai Science and Technology Press, Shanghai, pp 20–100
Fishman GS (1996) Monte Carlo: concepts, algorithms, and applications. Springer, New York, pp 85–122
Fried N, Geiger D, Goldszmidt M (1997) Bayesian network classifiers. Mach Learn
2–3(29):131–163
Gu XP, Ai JL, Han H (2011) Direction-determination ability evaluation based on interval number
grey relational analysis(Chinese). Aerosp Electron Warf 27(3):26–29
Hyacinth SN (1999) Intelligent tutoring systems: an overview. Artif Intell Rev 40(4):251–277
Jia H (1995) A modified arithmetic determining weight in AHP (in Chinese). WTUSM Bull Sci Technol, pp 25–30
Jin KR, Sun CH (1996) Chaotic recurrent neural networks and their application to speech
recognition. Neuron Computing 13(224):281–294
Li JP, Shi Q, Gan MZ (2000) BDAR theory and application (in Chinese). Army Industry Publications, pp 10–50
Li M, Sun SY, Lv GZ (2003) Design and implementation of evaluation model of simulative
maintenance training (Chinese). Comput Eng 29(9):186–188
Liang C (2011) Evaluating quality model of concrete construction project using Bayes
disciminant analysis method (Chinese) Concr 259:50–53
Park KS, Kim SH (1999) Tools for interactive multi-attribute decision: Range-based approach.
Eur J Oper Res 118:139–152
Patz RJ, Junker BW (1999) A straightforward approach to Markov chain Monte Carlo methods for item response models. J Educ Behav Stat 24(2):146–178
Ronald E, Myers WRH (1978) Probability and statistics for engineers and scientists, 2nd edn. MacMillan, New York
Tan PN, Steinbach M (2011) Introduction to data mining. Posts & Telecom Press, pp 139–155
Wang SM, Sun YC (2008) Evaluation of virtual maintenance training in civil airplane based on
fuzzy comprehensive evaluation (Chinese). Aircr Des 28(6):42–45
Wang RS, Jia XS, Wang RQ (2006) Study of intelligence frame based on case of battlefield
damage assessment (Chinese). Comput Eng 32(7):174–184
Chapter 44
Power Control of Cellular System
in CDMA Technology Integrated with SIC
Abstract Regarding the problems of power control in existing CDMA systems with SIC, one must first consider whether the SIC is perfect, i.e., whether there are errors in channel estimation or decision. Secondly, the multi-cell situation should be considered, with outer-cell interference no longer assumed independent of the users' transmitted (received) power. In this context, the paper first derives expressions for optimal power allocation for a given decoding order; it then puts forward the optimal decoding order that minimizes the outage probability of the system under constraints on users' transmitted (received) power, and finally deduces the optimal decoding order that minimizes the total transmitted power. Through simulation comparison of the total transmitted power under different decoding orders, the paper concludes that the total transmitted power can be minimized while the Eb/I requirements of all users are met.
transmission rate is also a key method to guarantee the Quality of Service (QoS) requirements of different types of users and to improve system capacity when multiple services coexist.
At present, joint optimization (Ivanek 2008) has been conducted for power control and decoding order adjustment in CDMA systems integrated with Serial Interference Cancellation (SIC). Nevertheless, such research only adjusts the decoding order to minimize the total transmitted power of users under the premise of meeting the users' requirements on the bit-energy-to-interference-spectral-density ratio Eb/I, and has the following limitations: SIC is assumed perfect (Paulraj et al. 2004), i.e., previously detected user signals are assumed to be eliminated completely before subsequent user signals are detected; the interference of outer-cell user signals on in-cell user signals (outer-cell interference for short) is assumed to be a constant independent of users' transmitted (received) power, or only the single-cell case is considered (Zahariadis and Doshi 2004); and users' transmitted (received) power is not constrained (Jorguseski 2001).
In practice, however, channel estimation errors and decoding decision errors make it impossible for SIC to be perfect. That is, the reconstructed signal of a user that has already been detected and decoded will never exactly equal that user's received signal; after subtracting the reconstructed signal from the composite signal, a residual component of that user remains in the composite signal and degrades the decoding performance for the users detected afterwards.
Suppose there are K users in the target cell. The received composite signal r0(t) at the base station is composed of the received signals xi(t) (i = 1, 2, …, K) of the users, the outer-cell interference signal i(t) (i.e., interference of outer-cell user signals on in-cell signals) and the background noise n(t). The expression is as follows:

r0(t) = Σ_{i=1}^{K} xi(t) + i(t) + n(t)    (44.1)
data bit and the sequence of spread spectrum code word, and si refers to the
relative time delay.
Figure 44.1 shows the structure (Jantti and Kim 2000) of a CDMA base station receiver integrated with SIC technology. The receiver detects and decodes the signals of the various users in the received composite signal one by one. After user i is detected and decoded, the base station receiver reconstructs that user's received signal through channel estimation; the reconstructed signal is denoted si(t). This reconstructed signal is subtracted from the composite signal before the subsequent users are detected; that is, before the detection and decoding of user i, the composite signal is updated as
ri(t) = ri−1(t) − si−1(t),   i = 1, 2, …, K    (44.3)
where s0(t) = 0. The mth bit of user i is then decoded and decided (Saghaei and Neyestanak 2007) as

b̂i,m = sgn{ (1/Ti) ∫_{(m−1)Ti+τi}^{mTi+τi} ri(t) ai(t − τi) dt }    (44.4)
where Ti indicates the duration of the bit symbol. This process is repeated at the base station until the signals of all K users have been detected and decoded.
Accordingly, the signal-to-interference-plus-noise ratio (SINR) of the signal received from user i is:
SINRi = Pi / ( Σ_{j=1}^{i−1} h·Pj + Σ_{j=i+1}^{K} Pj + f·Σ_{j=1}^{K} Pj + N0W ),   i = 1, 2, …, K    (44.5)
It is assumed that all hi share the same value, denoted h, taken conservatively as the maximum of all hi (Gu et al. 2005). In the above expression, N0W indicates the background noise power, W the bandwidth, and N0 the corresponding power spectral density. Moreover, f·Σ_{j=1}^{K} Pj indicates the outer-cell interference power (Liao et al. 2007), and f is the ratio of the outer-cell interference power to the in-cell received power, which means the outer-cell interference is no longer independent of users' transmitted (received) power. This is because different received power requirements under different decoding orders imply different transmitted power requirements, so the interference imposed by in-cell users on the outer cells varies; likewise, the signal power differs with the decoding order. That is why it is unjustifiable to assume the outer-cell interference to be a constant independent of users' transmitted (received) power in an actual multi-cell system.
The expression for optimal power allocation under a given decoding order has been provided in the literature (Kwok and Lau 2003); however, it applies only when the outer-cell interference is supposed to be a constant independent of users' transmitted (received) power, or to single-cell systems. This article derives the expression of optimal power allocation for the multi-cell case, in which the outer-cell interference is no longer assumed to be independent of users' transmitted (received) power.
Each user in the system has a specific QoS requirement, and normal communication is ensured only when every user's QoS requirement is satisfied:

(W/Ri)·SINRi ≥ γi,   i = 1, 2, …, K    (44.6)

where γi indicates the SINR that user i requests at rate Ri. In practice, if the transmitted (received) powers allocated by the system can satisfy the above expression, then there is a specific power allocation that satisfies the following expression (Lee et al. 2002):
(W/Ri)·SINRi = γi,   i = 1, 2, …, K    (44.7)
This can be proved along the lines of the single-user receiver analysis (Gu et al. 2005), but the proof is omitted here. Accordingly, meeting (44.7) with equality is the best way to allocate power under a given decoding order with minimum total transmitted power, so the users' QoS requirements are expressed by (44.7).
Let Yi = W/(Ri·γi) in order to derive the received power each user requests while meeting its Eb/I requirement. Combining expressions (44.5) and (44.7) yields:
Yi·Pi = Σ_{j=1}^{i−1} h·Pj + Σ_{j=i+1}^{K} Pj + f·Σ_{j=1}^{K} Pj + N0W,   i = 1, 2, …, K    (44.8)
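Since the constraints (44.8) are linear in the received powers Pi, they can be solved directly for any given decoding order. The sketch below (illustrative Python; the values of Yi, h, f, and the noise power are arbitrary assumptions) builds the linear system implied by (44.8), solves it by Gaussian elimination, and verifies that each user's SINR in (44.5) meets its target 1/Yi:

```python
def received_powers(Y, h, f, noise):
    """Solve (44.8): Y[i]*P[i] = sum_{j<i} h*P[j] + sum_{j>i} P[j]
    + f*sum_j P[j] + noise, via Gaussian elimination."""
    K = len(Y)
    # coefficient matrix of A @ P = noise * ones
    A = [[0.0] * K for _ in range(K)]
    for i in range(K):
        for j in range(K):
            if j == i:
                A[i][j] = Y[i] - f
            elif j < i:
                A[i][j] = -(h + f)
            else:
                A[i][j] = -(1.0 + f)
    b = [noise] * K
    # forward elimination with partial pivoting
    for c in range(K):
        p = max(range(c, K), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        b[c], b[p] = b[p], b[c]
        for r in range(c + 1, K):
            m = A[r][c] / A[c][c]
            for k in range(c, K):
                A[r][k] -= m * A[c][k]
            b[r] -= m * b[c]
    # back substitution
    P = [0.0] * K
    for r in range(K - 1, -1, -1):
        s = sum(A[r][k] * P[k] for k in range(r + 1, K))
        P[r] = (b[r] - s) / A[r][r]
    return P

def sinr(P, h, f, noise, i):
    """SINR of the i-th decoded user, cf. (44.5)."""
    intra = h * sum(P[:i]) + sum(P[i + 1:])
    return P[i] / (intra + f * sum(P) + noise)

Y = [4.0, 4.0, 6.0]          # Y_i = W/(R_i*gamma_i), assumed targets
P = received_powers(Y, h=0.1, f=0.1, noise=1.0)
for i, y in enumerate(Y):
    print(round(sinr(P, 0.1, 0.1, 1.0, i) * y, 6))   # each ≈ 1
```

Each printed value is approximately 1, confirming that the allocated powers satisfy (W/Ri)·SINRi = γi as required by (44.7).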
44 Power Control of Cellular System 431
PK = N0W / { (YK − f) − (h + f)·Σ_{i=2}^{K} Π_{j=2}^{i} [(1 + Yj−1)/(h + Yj)] }    (44.12)
The above expressions give a more general allocation of the users' received powers than the existing literature. Expressions (44.11) and (44.12) give the same requested received powers as (Chatterjee et al. 2007) when f = 0, i.e., when the outer-cell interference is taken to be a constant independent of users' transmitted (received) power or only a single cell is considered. They reduce to the single-user receiver analysis (Lee et al. 2002) when the residual power factor h equals 1. They give the same requested received powers as (Jorguseski 2001) when f = 0 and the residual power factors differ somewhat, and the same as (Ivanek 2008) when f = 0 and the residual power factor h is 0, i.e., when SIC is perfect and no residual power remains after each user's interference is eliminated in a single cell.
Definition 1 Under a given decoding order, if the constraint Eqs. (44.7) and (44.11) cannot be satisfied, the system is said to be in outage under that decoding order.
432 P. Zhang and J. Min
If there are K users in the system, there may exist K! decoding orders, and the system's outage performance differs across orders. When the transmitted (received) power is limited, studying the optimal decoding order helps minimize the outage probability and supports the design of systems that better meet practical requirements.
We can draw the following inference from Definition 1.
Inference 1: Among all possible decoding orders, the system attains the minimum outage probability under the ZD (Zs descending) order.
Demonstration: By Definition 1, whenever the system is feasible, the constraint Eqs. (44.7) and (44.11) can be satisfied under decoding order ZD, even when they cannot be satisfied under other decoding orders. Thus, among all possible decoding orders, the system attains the minimum outage probability under decoding order ZD. Q.E.D.
To understand intuitively why decoding order ZD minimizes the system outage probability, consider the following two special cases:
(1) All users have the same rate and corresponding Eb/I requirement, i.e., all values Yi = W/(Ri·γi) are equal. In this case, decoding order ZD coincides with the order determined by sorting Pi^max in descending order. The later a user is decoded, the less it is interfered with by other users, and the lower its required received power according to Eqs. (44.7) and (44.11). So the constraint Eqs. (44.7) and (44.11) are most likely to be satisfied when the K users are decoded in descending order of Pi^max. Furthermore, if all users have the same maximum transmitted power, the order determined by descending Pi^max coincides with the order determined by descending user channel gain.
(2) All users have the same maximum received power Pi^max. In this case, decoding order ZD coincides with the order determined by sorting Yi = W/(Ri·γi) in descending order. By Eqs. (44.7) and (44.11), the larger a user's Yi, the lower the corresponding SINR requirement; meanwhile, the earlier a user is decoded, the more interference it receives from other users. Thus Eqs. (44.7) and (44.11) are most likely to be satisfied when the users are decoded in descending order of Yi. Further, if all users have the same Eb/I requirement, the order determined by descending Yi coincides with the order determined by ascending Ri.
If there are K users in the system, there are K! possible decoding orders, and the total transmitted power of the system differs across them. Low transmitted power helps prolong the service time of mobile terminal batteries, extend their lifetime, and reduce electromagnetic radiation pollution. It is therefore necessary to study the optimal decoding order for minimizing total transmitted power. Although the literature (Saghaei and Neyestanak 2007) has considered minimizing total transmitted power in CDMA systems with SIC, it mainly addresses single cells and assumes the outer-cell interference power is a constant independent of users' transmitted (received) power. In a cellular system, however, cells are not isolated, and changes in one cell's transmitted (received) power influence those of outer cells. And although (Jorguseski 2001) has considered outer-cell interference, it assumes perfect SIC.
This section analyzes how to choose the decoding order to minimize the total transmitted power in the multi-cell case, assuming that the outer-cell interference depends on users' transmitted (received) power and, at the same time, that SIC is imperfect.
Based on Theorem 1 above, we can prove the following theorem, which determines how to choose the decoding order so as to minimize the total transmitted power.
Theorem 2 Identify the decoding order by sorting the users' channel gains in descending order, and distribute the transmitted (received) power under this order according to expressions (44.10), (44.11) and (44.12). Then the total transmitted power is minimized subject to satisfying each user's Eb/I requirement.
Demonstration: Suppose users A and B are the Lth and (L+1)th users to be decoded, and user A's channel gain hA is lower than B's hB, i.e., hA < hB. We investigate the change in total transmitted power before and after exchanging A's and B's decoding positions. Suppose the users' received powers before and after the exchange are Pi (i = 1, 2, …, K) and P̃i (i = 1, 2, …, K) respectively, given by expressions (44.10), (44.11) and (44.12). We consider the following three cases to facilitate the analysis:
(1) L = 1, i.e., A is the first user to be decoded in the original order. The received powers of all users other than A and B are unchanged by the exchange of A's and B's decoding positions, according to
434 P. Zhang and J. Min
(44.13)
That is, the total transmitted power is reduced after the exchange.
(2) L = K−1, i.e., B is the last user to be decoded in the original order. The received powers of all users other than A and B are unchanged by the exchange of A's and B's decoding positions, according to Theorem 1; that is, Pi = P̃i for i ≤ K−2. Correspondingly, the difference in total transmitted power before and after exchanging A's and B's decoding positions is as follows:
Σ_{i=1}^{K} Pi/hi − Σ_{i=1}^{K} P̃i/hi = (PA/hA + PB/hB) − (P̃B/hB + P̃A/hA)
  = PK−2 · [ (h + YK−2)/((1 + YA)·hA) + (h + YK−2)(h + YA)/((1 + YA)(1 + YB)·hB)
      − (h + YK−2)/((1 + YB)·hB) − (h + YK−2)(h + YB)/((1 + YB)(1 + YA)·hA) ]
  = PK−2 · (h + YK−2)(1 − h)/((1 + YA)(h + YB)) · (1/hA − 1/hB) > 0    (44.14)
So the total transmitted power can be lowered after the exchange.
(3) If 1 < L < K−1, A is not the first user to be decoded in the original order and B is not the last. The received powers of all users other than A and B are unchanged by the exchange, according to Theorem 1; that is, Pi = P̃i for i ≤ L−1 or i ≥ L+2. Similarly to cases (1) and (2), the total transmitted power is lowered after the exchange.
In all three cases, the required total transmitted power is lowered by exchanging the decoding positions of adjacent users A and B. Repeat such adjacent exchanges until the decoding order is the one identified by the descending order of users' channel gains; by the three cases above, the required total transmitted power is lowered continuously during this exchange process. Thus Theorem 2 is established. Q.E.D.
In this paper, gains descending (GD) denotes the decoding order identified by the descending order of users' channel gains. The users' powers are distributed under this decoding order according to expressions (44.10), (44.11) and (44.12); consequently, the total transmitted power is minimized while satisfying each user's Eb/I requirement.
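Theorem 2 can also be checked numerically. In the sketch below (illustrative Python; the channel gains, the common target Y, the residual factor h, the outer-cell ratio f, and the iterative solver are all assumptions of the example, not the paper's expressions (44.10)–(44.12)), the received powers for a decoding order are obtained by fixed-point iteration on (44.8), and the total transmitted power Σ Pi/hi under the GD order is compared with the reverse order:

```python
def received_powers(Y, h, f, noise, iters=200):
    """Fixed-point (Jacobi) iteration on (44.8); converges here because
    the system is diagonally dominant for these parameter values."""
    K = len(Y)
    P = [noise / y for y in Y]
    for _ in range(iters):
        P = [(h * sum(P[:i]) + sum(P[i + 1:]) + f * sum(P) + noise) / Y[i]
             for i in range(K)]
    return P

def total_tx_power(gains, Y, h, f, noise, order):
    """Total transmitted power sum_i P_i / h_i when users are decoded
    in the given order (order[k] = index of the user decoded k-th)."""
    P = received_powers([Y[u] for u in order], h, f, noise)
    return sum(p / gains[u] for p, u in zip(P, order))

gains = [0.5, 1.0, 2.0]          # users' channel gains (assumed)
Y = [4.0, 4.0, 4.0]              # equal QoS targets (assumed)
gd = sorted(range(3), key=lambda u: -gains[u])   # gains-descending order
rev = gd[::-1]
print(total_tx_power(gains, Y, 0.1, 0.1, 1.0, gd) <
      total_tx_power(gains, Y, 0.1, 0.1, 1.0, rev))   # → True
```

With equal targets, the earliest-decoded position needs the highest received power, so assigning it to the highest-gain user (the GD order) yields the smallest transmitted-power sum, as Theorem 2 states.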
of Sect. 44.1. In addition, Figs. 44.2 and 44.3 also show that the required total transmitted power grows with the number of users and with the residual power factor h, which accords with the actual situation.
44.5 Conclusion
This paper addresses power control in cellular systems combining CDMA with SIC. The expression for optimal power distribution under a given decoding order is first derived; then the optimal decoding order is proposed to minimize the system outage probability when users' transmitted (received) powers are limited; finally, the optimal decoding order that minimizes the total transmitted power is derived. Simulations compare the total transmitted power of the system under different decoding orders, and the results support the analysis of this paper.
Acknowledgments The research work in this paper is supported by Research on the Technology
of Load Balancing on Server Cluster under the grant 2012KJ05.
References
Chatterjee M, Lin HT, Das SK (2007) Rate allocation and admission control for differentiated
services in CDMA data networks. IEEE Trans Mob Comput 6(2):179–197
Gu et al. (2005) QoS based outer loop power control for enhanced reverse links of CDMA
systems. IEEE Electr Lett 41(11):659–661
Ivanek F (2008) Convergence and competition on the way toward 4G. IEEE Microw Mag
9(4):6–14
44 Power Control of Cellular System 437
Jantti R, Kim SL (2000) Second-order power control with asymptotically fast convergence. IEEE
J Sel Areas Commun 18(3):447–457
Jorguseski L (2001) Radio resource allocation in third generation mobile communication
systems. IEEE Commun Magaz 39(2):117–123
Kwok YK, Lau VKN (2003) System modeling and performance evaluation of rate allocation
schemes for packet data services in wideband CDMA systems. IEEE Trans Comput
52(6):804–814
Lee JW, Mazumdar RR, Shroff NB (2002) Downlink power allocation for multi-class CDMA
wireless networks. Proceedings of IEEE INFOCOM, vol. 3. pp 1480–1489
Liao CY, Wang LC, Chang CJ (2007) A joint power and rate assignment algorithm for multi-rate
soft handoffs in mixed-size WCDMA cellular systems. IEEE Trans Veh Technol
56(3):1388–1398
Paulraj AJ, Gore DA, Nabar RU (2004) An overview of MIMO communications—a key to
gigabit wireless. Proc IEEE 92(2):198–218
Saghaei H, Neyestanak AAL (2007) Variable step closed-loop power control in cellular wireless CDMA systems under multi-path fading. In: IEEE Pacific Rim conference on communications, computers and signal processing (PacRim 2007). IEEE Press, pp 157–160
Zahariadis T, Doshi B (2004) Applications and services for the B3G/4G era. IEEE Wirel
Commun 11(5):3–5
Chapter 45
Research of Embedded Intelligent
Decision Support System
for Natural Disasters
45.1 Introduction
With the advance of urbanization, urban functions become more complex, potential natural disaster problems multiply, human dependence on energy supply and urban infrastructure increases, and system
J. Shen T. Li
College of Management and Economics, Tianjin University,
Tianjin, People’s Republic of China
M. Xu (&)
TEDA College, Nankai University, Tianjin 300457, China
e-mail: [email protected]
security and the reliability of machines operating in cities are required to be much higher. In 2010, severe storms triggered a landslide in Zhouqu County, Gansu. In 2008, China suffered a rare snow disaster and the Wenchuan earthquake. All of these show that China's integrated natural disaster emergency response system has many shortcomings. According to research by Bayes Consulting Co., Ltd., as natural disaster emergency systems are improved, the share of Chinese informatization investment is increasing. For example, some large cities and municipalities such as Beijing invest an average of 100 million RMB in the integrated natural disaster emergency response system, and follow-up system upgrades and engineering investment will reach 100 million RMB within five years. The next five years will therefore be a period of large-scale construction of emergency support platforms. As the natural disaster emergency application market develops further, the proportion of software and services will increase sharply, opening larger market development space. Decision support systems are a new market demand arising in recent years on the basis of comprehensive emergency management (UNDP 2004; Apel et al. 2004; Arnell 1989).
The construction of an integrated emergency response and intelligent decision support system for natural disasters will improve the comprehensive efficiency and benefits of existing emergency systems, enabling the whole system to produce greater social and economic benefits and better information sharing. This study establishes an embedded integrated decision support system for natural disasters and realizes information sharing and emergency command within the natural disaster emergency system, on the basis of research on the embedded information sharing system, the embedded support system, and the intelligent decision support platform. The study will advance natural disaster emergency management research and improve the level of natural disaster management research and emergency response (Blaikie et al. 1994). It may also inform daily management countermeasures, emergency plan systems, and disaster mitigation planning for the relevant administrative departments, and provide scientific evidence and technical support for disaster preparedness, disaster response, and urban development planning.
The following are exemplary foreign decision support systems. Portfolio Management System, which supports investors in managing customers' securities, provides stock analysis and securities processing and classification. Brandaid, a hybrid market model used for product promotion, pricing, and advertising decisions, helps managers analyze strategies and make decisions by connecting goods sales and profits with managers' actions. Projector helps managers construct and explore methods of solving problems, supporting an enterprise's short-term planning. The Geodata Analysis and Display System, developed by IBM Research, has been used for the aided design of police patrol routes, urban design, and the arrangement of school district boundaries. Ford developed a decision support system for flood disaster warning decisions; William et al. constructed a spatial decision support system to
442 J. Shen et al.
decide dangerous chemicals transport routes; Ezio used a real-time decision support system to manage natural disaster risk. Domestic applications and research include: government macro-economic and public management issues; water resources allocation and early warning systems for flood control; industry planning and management and the development and utilization of various resources; decision-making for ecological and environmental control systems; and natural disaster management.
Zhongtuo Wang described decision support systems in terms of functionality; in his analysis, the decision makers' experience penetrates the mathematical model of the decision support system. Sprague et al. proposed a three-part DSS structure: a dialogue component, a data component (database and database management system), and a model component (model base and model base management system). Bonczek et al. proposed a three-system DSS structure: a language system, a problem processing system, and a knowledge system. The aided decision-making capability of decision support systems has since developed from single decision models to more comprehensive decision-making models; by combining decision support systems with expert systems, intelligent decision support systems (Yong et al. 2007) that integrate qualitative and quantitative aid to decision making have been developed.
This research studies the structure of the decision support system for natural disasters, the realization of the information sharing platform, and key embedded-system technologies such as interface issues (microprocessors, scalable RAM and ROM, data latches, and bus coprocessors), building a fuzzy lookup table, connecting the management station with the bus architecture, system software, the integration of natural disaster knowledge and information resource databases, and the system support structure. In this study, embedded technology is used to embed information fusion technology and the information-based decision support system into the entire emergency management system through interface technology, which achieves an optimal combination of hardware and software and the sharing of the information resources of the natural disaster response and emergency command system. Through embedded technology, the data layer, logic layer, and application layer of the decision-making platform are connected with the interfaces of the entire emergency response system to enhance the efficiency and effectiveness of the whole complex system: the design is optimized, system responsiveness is faster, system resource consumption is reduced, and hardware costs are lowered, while the various resources of the entire emergency response system are enabled to share information (Yong et al. 2007; Sun et al. 2007).
The overall framework of the embedded integrated decision support system for natural disasters constructed in this study includes the following five levels.
45 Research of Embedded Intelligent Decision Support System 443
Fig. 45.1 Emergency interactive embedded decision support integration system framework for natural disasters
The CPU uses a Siemens CPU416-2DP. The discrete input (DI) template uses the 421-1BL00 for access to the I/O points of field devices such as switches and operating boxes. The discrete output (DO) template uses the 422-1BL00; both are 24 VDC, 32-channel templates. Intermediate relays are used to control the equipment and to drive indicator outputs directly. The analog input (AI) template uses the 431-1KF10, which provides access to instrumentation monitoring signals such as pressure sensors. The analog output (AO) template uses the 432-1KH10, whose channel types can be configured arbitrarily for controlling regulator valves. An Ethernet card (CP443-1) communicates over twisted pair. There are two racks: a central rack and an expansion rack. The interface template IM460 connects the central and expansion racks, and the CPU's built-in PROFIBUS-DP interface is used to communicate with the inverter. An expansion communication card then connects to the remote station ET200M. The system's communication network uses Siemens high-speed industrial Ethernet for communication between PLCs, with a communication speed of up to 10 Mb/s. PROFIBUS-DP, one of the international fieldbus standards, is used for communication with and control of complex field devices and distributed I/O such as the inverter; it belongs to the device-bus class. Its physical layer is RS-485, with a transmission rate of up to 12 Mb/s. Since the same protocol standards and transmission media are used, and ET200M serves as the remote I/O station, the ET200M should be placed where many I/O points are concentrated, such as the operating floor; in this way cable volume can be reduced effectively. Twisted-pair media and the open TCP/IP protocol are used for communication between the monitoring station and the PLC.
The hardware of the embedded support system takes the embedded microprocessor as its core and integrates storage systems and various input/output devices; the software contains system startup programs, drivers, the embedded operating system, applications, etc.
(1) Embedded chips
These include embedded microprocessors, embedded microcontrollers, embedded DSP processors, and embedded systems-on-chip.
(2) Interface circuits of the embedded system
Hardware interfaces are designed or selected according to the signals to be received and processed, or the control signals to be sent, by the integrated storage systems and the various input/output devices. These interfaces involve the selection, design, and field testing of embedded microprocessors, scalable RAM and ROM, data latches, and bus coprocessors.
(one is FMSPCI and the other is UED) to improve the transmission efficiency of the line. The following table shows the physical-layer data frame format.
45.6 Conclusion
This article uses embedded technology to connect the data, logical, and application levels of the decision-making system with the interfaces of the whole stand-by system, thereby forming an integrated decision-making support system for natural disasters. This improves the efficiency and outcomes of the whole complicated system: it optimizes the design, raises response speed, reduces resource consumption, and lowers hardware costs. In addition, it realizes information sharing among the various resources of the whole stand-by system.
This work will contribute to improving both the response to natural disasters and anti-risk capabilities. It will also provide better services, namely the stand-by system, the embedded supporting system, and the information-sharing platform, to institutions including government offices, legislation, tax, police, administration, flood prevention, and so on.
Acknowledgments This work was supported by the National Natural Science Foundation of
China (71171143), by Tianjin Research Program of Application Foundation and Advanced
Technology (10JCYBJC07300), and Science and Technology Program of FOXCONN Group
(120024001156).
References
Apel H, Thieken AH, Merz B et al (2004) Flood risk assessment and associated uncertainties. Nat
Hazard Earth Syst Sci 4(2):295–308
Arnell NW (1989) Expected annual damage and uncertainties in flood frequency estimation.
Water Res PI Manag 115:34–107
Blaikie P, Cannon T, Davis I et al (1994) At risk: natural hazards, people's vulnerability and disasters. Routledge, London, pp 189–190
Sun XD, Xu XF, Wang G et al (2007) Multi-objective optimization of process based on resource
capability. J Harbin Inst Technol 14(4):450–453
UNDP (2004) Reducing disaster risk: a challenge for development. John S. Swift Co., USA. www.undp.org/bcpr
Yong H, Jia Xing H, Ju Hua C (2007) Software project risk management modeling with neural network and support vector machine approaches. In: Third international conference on natural computation, Hainan, pp 358–362
Chapter 46
Research on Classifying Interim Products
in Ship Manufacturing Based
on Clustering Algorithm
Abstract This paper first reviews methods of classifying interim products in ship manufacturing. Then general cluster analysis is presented, the process of classifying interim products based on cluster analysis is given, and seven features of interim products are selected for the analysis. Finally, a case is calculated to show the effectiveness of the method, which provides a useful way to classify the intermediate products of the shipbuilding process effectively.
46.1 Introduction
vigorously in our shipyards. Assembly-oriented ship production includes segmental construction and pre-outfitting on the basis of group technology. Being interim-product oriented, it combines hull construction, outfitting, and painting work, yielding the modern shipbuilding mode that integrates hull, outfitting, and painting. In this process, the traditional system-oriented decomposition mode must be changed to a dispersed, specialized, interim-product-oriented production mode. This mode can allocate resources rationally, meaning each item of production work can be allocated to the corresponding production resources. Its key issue is the classification of interim products by product characteristics and production characteristics.
This paper first reviews methods of classifying interim products in ship manufacturing. Then general cluster analysis is presented. The process of classifying interim products based on cluster analysis is given, and seven features of interim products are selected for the analysis. Finally, an instance is calculated to show the effectiveness of the method, which provides a useful way to classify the intermediate products of the shipbuilding process effectively.
the intrinsic links and differences between the data, but also provides an important basis for further data analysis and knowledge discovery (Jain et al. 1999).
Let k and l be two samples and xkj, xlj the values of samples k and l on dimension j; the distance between them is Dkl.
Its n-dimensional (Euclidean) expression is:

Dkl = √( Σ_{j=1}^{n} (xkj − xlj)² )    (46.2)
Because the dimensions in a mixed data set differ, the data should be normalized before the distance calculation. Generally, the following normalization methods are used.
46 Research on Classifying Interim Products in Ship 453
For two samples k and l and an index j, the coefficient a^j(k, l) is defined as follows.
If j is a binary variable:
a^j(k, l) = 0 if k and l take the same value, and 1 otherwise.    (46.3)
If j is a nominal index:
a^j(k, l) = 0 if k and l belong to the same class, and 1 if they belong to different classes.    (46.4)
If j is an ordinal index whose values range over 1, 2, …, t:
a^j(k, l) = |xkj − xlj| / t    (46.5)
If j is an interval or numeric index:
a^j(k, l) = | (xkj − min_f xfj)/(max_f xfj − min_f xfj) − (xlj − min_f xfj)/(max_f xfj − min_f xfj) | = |xkj − xlj| / (max_f xfj − min_f xfj)    (46.6)
The distance is then

d(k, l) = √( (1/n)·Σ_{j=1}^{n} [a^j(k, l)]² )    (46.7)
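The rules (46.3)–(46.6) and the distance (46.7) can be sketched in a few lines (illustrative Python; the attribute kinds and the sample segment values are invented for the example, not taken from the paper):

```python
import math

def coeff(xk, xl, kind, scale=None):
    """Per-index dissimilarity a^j(k, l), cf. (46.3)-(46.6).

    kind  : 'binary' | 'nominal' | 'ordinal' | 'numeric'
    scale : t (number of levels) for ordinal, (min, max) for numeric
    """
    if kind in ('binary', 'nominal'):
        return 0.0 if xk == xl else 1.0          # (46.3), (46.4)
    if kind == 'ordinal':
        return abs(xk - xl) / scale              # (46.5), values in 1..t
    lo, hi = scale                               # (46.6), range-normalized
    return abs(xk - xl) / (hi - lo)

def distance(sk, sl, kinds, scales):
    """d(k, l) per (46.7): root mean square of the per-index coefficients."""
    a2 = [coeff(xk, xl, kind, s) ** 2
          for xk, xl, kind, s in zip(sk, sl, kinds, scales)]
    return math.sqrt(sum(a2) / len(a2))

# hypothetical segments: (type, weight level 1..4, welding length in m)
kinds = ('nominal', 'ordinal', 'numeric')
scales = (None, 4, (10.0, 30.0))
k = ('flat', 1, 10.0)
l = ('curved', 3, 20.0)
print(round(distance(k, l, kinds, scales), 4))   # → 0.7071
```

Here the nominal type contributes 1, the ordinal weight level |1 − 3|/4 = 0.5, and the range-normalized welding length |10 − 20|/20 = 0.5, giving d = √((1 + 0.25 + 0.25)/3) ≈ 0.7071.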
(1) Type
Type is to describe the main types of segmentation from the shape, which can
be divided into flat segmentation, special flat segmentation, curved segmentation,
special curved segmentation and superstructure. Process route of different types of
sub-manufacturing are not the same.
(2) Assembly length
Assembly length means the connection length in part assembling,assembly
assembling, and segmentation assembling. The assembly length reflects the size of
the assembly workload to a certain extent.
(3) Welding length
Welding length is the total weld length of plate joining, parts, assemblies, and segmentations. To reflect both the welding workload and its difficulty, the welding length must be converted into an equivalent length by multiplying it by a coefficient. Several factors can enter this coefficient, such as the size and thickness of the steel plates and profiles, weld type, welding edge form, working conditions, steel grade, etc.
(4) Weight
The segmentation weight determines the demands placed on production equipment. Based on the capacity of the lifting and transport equipment, the weight is generally divided into several levels: less than 10 t, 10–30 t, 30–60 t, 60–100 t, etc.
(5) Projected area
The projected area of a segmentation reflects the site area required for its manufacture.
(6) Height
Segmentation height is the height of the subsection above the base surface of the connected construction; it decides whether a framework must be installed when welding the connecting components.
(7) Turn over (Zhong et al. 2003)
Whether a segmentation must be turned over is an important production characteristic. By turning over, some welding work in the original overhead position becomes downhand welding, which is less difficult. However, turning over needs lifting equipment, and some assistive devices must be welded on for the turn-over. A process route that includes a turn-over therefore has a great impact on the manufacturing process.
[Flowchart (figure not reproduced): if the number of groups has not yet reached N, continue merging; otherwise, end.]
The data of the m interim products on the n indices form the matrix

X = \begin{bmatrix}
x_{11} & x_{12} & \cdots & x_{1j} & \cdots & x_{1n} \\
x_{21} & x_{22} & \cdots & x_{2j} & \cdots & x_{2n} \\
\vdots & \vdots &        & \vdots &        & \vdots \\
x_{i1} & x_{i2} & \cdots & x_{ij} & \cdots & x_{in} \\
\vdots & \vdots &        & \vdots &        & \vdots \\
x_{m1} & x_{m2} & \cdots & x_{mj} & \cdots & x_{mn}
\end{bmatrix}
The original distance matrix is obtained by calculating the distances between the interim products:

D(0) = \begin{bmatrix}
0      &        &        &   \\
d_{21} & 0      &        &   \\
\vdots & \vdots & \ddots &   \\
d_{m1} & d_{m2} & \cdots & 0
\end{bmatrix}_{m \times m}
Find the minimum value d_min in D(0); the two corresponding interim products are merged into one class. Then the distances between the new class and the other interim products are calculated to obtain the new distance matrix D(1). The distance between a class u and the new class combining v and w is calculated by the weighted pair-group average method:

d(u, v ∪ w) = [d(u, v) + d(u, w)] / 2 \quad (46.8)
456 L. Gong et al.
Table 46.1 lists some ship segmentations. According to actual production needs, these six segmentations should be grouped into four classes for production. Their original distance matrix D(0) is as follows:
D(0) = \begin{bmatrix}
0    &      &      &      &      &   \\
0.89 & 0    &      &      &      &   \\
0.63 & 0.56 & 0    &      &      &   \\
0.30 & 0.78 & 0.59 & 0    &      &   \\
0.57 & 0.81 & 0.51 & 0.69 & 0    &   \\
0.63 & 0.62 & 0.28 & 0.67 & 0.52 & 0
\end{bmatrix}
The minimum value is d_min = 0.28, so segmentations C and F are merged into one class. The updated distance matrix is:
D(1) = \begin{bmatrix}
0    &      &      &      &   \\
0.89 & 0    &      &      &   \\
0.30 & 0.78 & 0    &      &   \\
0.57 & 0.81 & 0.69 & 0    &   \\
0.63 & 0.59 & 0.63 & 0.52 & 0
\end{bmatrix} \quad (\text{order } A, B, D, E, \{C, F\})
The minimum value is now d_min = 0.30, so segmentations A and D are merged into one class.
At this point, the segmentations are divided into four groups: {C, F}, {A, D}, {B}, {E}.
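The merging procedure of the worked example can be sketched as follows; the group-average rule used here coincides with Eq. (46.8) for these singleton merges, and the matrix entries are taken verbatim from the text:

```python
# Lower triangle of D(0) from the text, rows in the order A..F.
rows = [
    [],
    [0.89],
    [0.63, 0.56],
    [0.30, 0.78, 0.59],
    [0.57, 0.81, 0.51, 0.69],
    [0.63, 0.62, 0.28, 0.67, 0.52],
]
names = ["A", "B", "C", "D", "E", "F"]
d = {frozenset((names[i], names[j])): rows[i][j]
     for i in range(6) for j in range(i)}
groups = [{n} for n in names]

def dist(g1, g2):
    # Group-average distance between two groups of segmentations.
    vals = [d[frozenset((a, b))] for a in g1 for b in g2]
    return sum(vals) / len(vals)

merges = []
while len(groups) > 4:                      # stop at the required four classes
    i, j = min(((i, j) for i in range(len(groups)) for j in range(i)),
               key=lambda ij: dist(groups[ij[0]], groups[ij[1]]))
    merges.append(tuple(sorted(groups[i] | groups[j])))
    groups[j] |= groups[i]
    del groups[i]

print(merges)                               # C+F first (0.28), then A+D (0.30)
print(sorted(sorted(g) for g in groups))
```

Running the sketch reproduces the two merges of the example and the final four groups {C, F}, {A, D}, {B}, {E}.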
46.6 Conclusion
References
Keywords Basic social security fund · Financial and accounting information · Information disclosure · Game analysis
47.1 Introduction
The financial and accounting information disclosure of social security funds takes the social security funds as the accounting entity: it provides the information formed by processing and sorting the entity's financial accounting information to the users of financial statements and discloses it to the public, and it is an important part of social security fund finance. Since the purpose of social security fund operation is to pursue self-balance, ensure a normal balance of payments, and maintain and increase the value of the funds, the primary purpose of financial information disclosure is to account for stewardship, so that the owners of the social security funds (the state, paying units and individuals) are clear about the operational status of the various funds; the secondary purpose is to provide decision-relevant information to the users of financial statements. However, owing to the confusion of the decentralized management system of social security funds and the lack of supervision over them, the funds are often diverted, misappropriated or wasted, and it is difficult to keep special funds for their designated purposes. This reduces the efficiency of social insurance and seriously affects the security of the social security funds and the continuous operation of the system. For example, officials from the Ministry of Labor and Social Security stated that since 1998 the Ministry has carried out five inspections together with other ministries, which found that more than 170 billion Yuan was misappropriated in 2000 and discovered diversion and misappropriation in 16 provinces in 2004. In particular, the Shanghai social security fund case and the Guangzhou social security fund misappropriation case of recent years have triggered widespread concern over the operational and regulatory issues of social security funds. Meanwhile, serious problems have been exposed, such as the loose daily supervision of social insurance funds, the backward regulatory legal system for them, over-dependence on administrative supervision, the absence of social regulation, and the lack of medium- and long-term early-warning regulation.
The publication of the classic paper "Uncertainty and the Welfare Economics of Medical Care" by Kenneth Arrow in 1963 marks the establishment of health economics. The paper discusses risk aversion, moral hazard, asymmetric information, the externalities of charitable actions and many other issues that occupy important positions in later health economics research (Arrow 1963). Cho and Kreps (1987) show that a rational player will choose a strictly dominant strategy in a game. In 1991, Marston and Shrives applied measurement techniques to measure the level of information disclosure in financial reports, and this measurement technique remains in use (Marston and Shrives 1991). Gong and Zhou (2006) believed that disclosed financial information has three basic theoretical models. Zhou (2001) holds that each social security fund is an accounting entity. In 2001, Wang proposed that in-depth research is needed to improve the payoff vectors of investors and listed companies in the game matrix and the Nash equilibrium, and that in the game between government and listed companies the supervision level should be improved so as to curb the false accounting information disclosed by listed companies (Wang 2001). In 2002, Gou argued that the objective of social security accounting is to provide information users with accounting information that is based on financial information and possesses certain qualitative characteristics (Gou 2002). In 2003, Joseph and Terry examined the relationship between the independence of regulatory committee members and the disclosure quality of companies in financial difficulty, and found a positive correlation between them (Joseph and Terry 2003). Wei et al. (2003) used game theory to analyze the interplay among the parties to information disclosure.
47 Research on Financial Accounting Information Disclosure 461
Accounting information, as a public good, has supply and demand like other products. The reason demand for and supply of accounting information arise is that the information learned by the supply and demand
462 S. Dai et al.
Economist Nash’s solution to the classic game case of—the ‘‘prisoner’s dilemma’’
indicates that the premise of the agreement to be complied with is that the benefits
of complying with the agreement outweigh the benefits of breaking the agreement,
or the loss of breaking the agreement is greater than the loss of complying with the
agreement. Otherwise, the parties will not have the interest to comply with the
agreements. It is believed that game theory has great reference significance on
analyzing the selection of the parties between the financial and accounting
information disclosers on the social security funds and the regulators. Meanwhile,
the both game parties will induce the parties to produce the motive of breaking the
agreement for the results of their own interests pursuit. Accounting principles,
accounting standards and other related economic laws and regulations, are a kind
of agreement. How to ensure that this agreement is implemented, and effectively
restrains the concerned parties, especially the behavior of regulators and the
financial and accounting information disclosers? According to the basic idea of
47 Research on Financial Accounting Information Disclosure 463
the ‘‘Nash equilibrium’’, the key premises to achieve the ‘‘Nash equilibrium’’ are
the scientific and reasonable system design, strictly working in accordance with
the system, complying the system, and everyone is equal before in the system
(Zhang 1996).
Assumptions: 1. The participants in the game model are rational and make optimal decisions under the given constraints. 2. The government regulators choose between two actions, inspect or not inspect; the financial and accounting information discloser chooses between two actions, financial accounting fraud or financial accounting credit. (To simplify the model, "fraud" here includes all actions that distort the disclosed accounting information, such as inaccurate profit forecasts, report manipulation, and untimely disclosure of financial information.) 3. Each participant's understanding of the other participant's action choice is not necessarily accurate, so the information of the participants in this model is incomplete. Under these assumptions, a static game model between the financial and accounting information discloser and the government regulators can be built (see Table 47.1). This is a static game between the government regulators and the financial accounting information discloser of the social insurance fund.
In Table 47.1, the first number in each cell shows the payoff of the government regulators and the second the payoff of the financial and accounting information discloser. The strategies available to the participants and the resulting payoffs are common knowledge to both sides, on which the game is based. When the discloser of the social insurance fund reports honestly and the regulators do not inspect, neither side has any extra income or loss, so the "extra income" of both sides can be taken as 0. When the regulators do not inspect and the social security fund managers commit accounting fraud, the extra income obtained is E. When the managers commit fraud and are investigated by the regulators, the extra income E is confiscated and a fine must also be paid, so the discloser's payoff is −F (the fine); in real life, fines are usually several times the illegal extra income. The regulators, however, must pay an inspection cost C, covering labor, material resources, time and so on. The performance reward the regulators obtain for uncovering fraudulent accounting information is B. Here E, R, F, C and B are all positive.
Table 47.1 Static game model between the financial and accounting information discloser and the government regulators

Government regulators       Accounting information discloser
                            Accounting fraud (γ)    Accounting credit (1 − γ)
Inspection (θ)              E + B + F − C, −F       −C, 0
Non-inspection (1 − θ)      0, E                    0, 0
According to this payoff matrix, the expected payoff function of the financial accounting information discloser of the social insurance fund is:

S = γθ(−F) + γ(1 − θ)E \quad (47.1)
The expected payoff function of the government regulators is:

L = γθ(E + F + B − C) + (1 − γ)θ(−C) \quad (47.2)
Here S is the expected payoff of the financial accounting information discloser of the social insurance fund. Differentiating with respect to the fraud probability γ gives its reaction function:

∂S/∂γ = E − θ(E + F) \quad (47.3)
Similarly, differentiating the regulators' expected payoff L with respect to the inspection probability θ gives their reaction function:

∂L/∂θ = γ(E + F + B) − C \quad (47.4)
Setting (47.3) and (47.4) to zero yields the regulators' optimal inspection probability and the discloser's optimal fraud probability:

θ* = E / (E + F), \quad γ* = C / (E + F + B)

Synthesizing the above reasoning, we obtain the mixed-strategy Nash equilibrium of the game.
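As a numerical sanity check (the parameter values below are assumed for illustration, not taken from the chapter), at θ* = E/(E + F) and γ* = C/(E + F + B) each player is indifferent between its two actions:

```python
# Illustrative parameter values (assumptions, not from the paper):
E, F, B, C = 10.0, 30.0, 5.0, 6.0   # extra income, fine, reward, inspection cost

theta_star = E / (E + F)            # regulators' equilibrium inspection probability
gamma_star = C / (E + F + B)        # discloser's equilibrium fraud probability

# Discloser: expected payoff of fraud vs. honesty when regulators inspect at theta*
fraud = theta_star * (-F) + (1.0 - theta_star) * E
honest = 0.0
# Regulators: expected payoff of inspecting vs. not when disclosers cheat at gamma*
inspect = gamma_star * (E + F + B - C) + (1.0 - gamma_star) * (-C)
not_inspect = 0.0
print(theta_star, gamma_star)
```

Both payoff differences come out to zero (up to floating-point error), which is exactly the indifference property that defines the mixed-strategy equilibrium.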
Because the additional income E obtained from fake information disclosure is exogenous to this model, it is treated as a constant in the analysis. It can be seen that in the Nash equilibrium solution of the model, the fraud probability γ* is directly proportional to C and inversely proportional to F + B.

The fraud in the financial and accounting information disclosure of the social security fund thus depends on the government regulators: the smaller the regulators' enforcement cost C is, and the larger the reward B obtained for investigating fraud and the fine F are, the smaller the probability of fraud in the disclosure is.
Given that government regulation alone is not adequate, owing to limited financial and material resources, public supervision institutions and a system for reporting social security fund disclosure fraud should be established, with reports accepted through the internet or by telephone.

As mentioned in Sect. 47.4.1, if the social security fund management institutions found to disclose fake accounting information are punished severely, the Nash equilibrium point can be moved close to the Pareto-optimal Nash equilibrium with the best total social utility. It is not quite as simple as that, though. First, it is uncertain whether the government regulators have enough capacity to verify all the social security fund management institutions. Second, even if verification is conducted, it is not guaranteed to succeed fully; some false accounting information is too well concealed to be detected by existing techniques. Next, the effect depends on the degree of punishment of the agencies that disclose false accounting information: if the punishment is less than the benefit obtained from the disclosure, the phenomenon cannot be governed well. Finally, the regulatory effect depends on the benefits and costs of the regulators. All of this shows that the social security agencies and the government regulators constantly optimize their action strategies in the game.
In Table 47.2, the first figure in each cell indicates the benefit of the regulators and the second the benefit of the accounting information discloser. Suppose P is the probability that the government regulators, without inspecting, receive and act on a report about the social security fund management institutions; the probability of receiving no report without inspection is then 1 − P. The probability that the regulators succeed in verifying the social security fund disclosers is K, while the probability of a failed verification is 1 − K. The reputation loss of the government regulators due to the fraud in the
Table 47.2 The benefits of the regulators and the accounting information disclosers

Accounting                     Government regulators
information discloser          Inspection (θ)                             Non-inspection (1 − θ)
                               Confirmed (K)      Not confirmed (1 − K)   No result (1 − P)   Report and verified (P)
Accounting fraud (γ)           E + B + F − C, −F  −C, E                   0, E                E + F − R, −F
Accounting integrity (1 − γ)   −C, 0              −C, 0                   0, 0                0, 0
financial and accounting information disclosure of the social security fund is denoted R (reputation).
1. Given the probability γ of fraud in the financial and accounting information disclosure of the social security fund, the expected benefits of the regulators from inspection and non-inspection are, respectively:

π₁ = γ[K(E + B + F − C) + (1 − K)(−C)] + (1 − γ)(−C) = γK(E + B + F) − C \quad (47.5)

π₂ = (E + F − R)γP \quad (47.6)

Setting π₁ = π₂ gives

γ* = C / [(E + F)(K − P) + KB + RP]
2. Given the probability θ that the regulators inspect, the expected benefits of the social security fund discloser from fake and from true information disclosure are, respectively:

π₃ = θ[E(1 − K) − FK] + (1 − θ)[E(1 − P) − FP] \quad (47.7)

π₄ = 0 \quad (47.8)

Setting π₃ = π₄ gives

θ* = [(E + F)P − E] / [(E + F)(P − K)]
Therefore, the improved mixed-strategy Nash equilibrium is

γ* = C / [(E + F)(K − P) + KB + RP]

θ* = [(E + F)P − E] / [(E + F)(P − K)]
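The improved equilibrium can be checked numerically in the same way. The parameter values below are assumptions chosen so that K > P and both equilibrium probabilities lie in (0, 1):

```python
# Illustrative parameter values (assumptions, not from the paper):
E, F, B, C, R = 10.0, 30.0, 5.0, 6.0, 8.0  # as before, plus reputation loss R
K, P = 0.9, 0.2                            # verification success and report probabilities

gamma_star = C / ((E + F) * (K - P) + K * B + R * P)
theta_star = ((E + F) * P - E) / ((E + F) * (P - K))

# Regulators: inspection (Eq. 47.5, simplified form) vs. relying on reports (Eq. 47.6)
pi1 = gamma_star * K * (E + B + F) - C
pi2 = (E + F - R) * gamma_star * P
# Discloser: fraud (Eq. 47.7) vs. honesty (Eq. 47.8), evaluated at theta = theta*
pi3 = (theta_star * (E * (1 - K) - F * K)
       + (1 - theta_star) * (E * (1 - P) - F * P))
pi4 = 0.0
print(round(gamma_star, 4), round(theta_star, 4))
```

At γ* the regulators are indifferent between inspecting and waiting for reports, and at θ* the discloser is indifferent between fraud and honesty, confirming the equilibrium formulas.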
The improved mixed-strategy Nash equilibrium indicates the following. If the social security fund managers disclose false accounting information with probability γ > γ*, the best choice for the regulators is to inspect; otherwise, the regulators will not inspect. If γ = γ*, the regulators are indifferent between inspecting and not inspecting. Similarly, if the regulators inspect with probability θ > θ*, the best choice for the social security fund management institutions is to disclose true accounting information; otherwise, the reverse. Because the additional income E obtained from fake information disclosure and the report-and-verification probability P are exogenous to this model, they are treated as constants in the analysis. It can be seen that the probability of fraud in the financial and accounting information disclosure of the social security fund depends on the government regulators:
(1) When K − P is greater than 0: the smaller the regulators' enforcement cost C is, and the larger the reward B obtained for investigating disclosure fraud, the fine F, the reputation loss R and the verification success probability K are, the smaller the probability of fraud in the accounting information disclosure is.
At this time, the regulators' inspection probability mainly depends on the degree of punishment F: the bigger F is, the greater the probability of supervision is.
(2) When K − P is less than 0: the smaller the enforcement cost C is, the larger the reward B, the reputation loss R and the verification success probability K are, and the smaller the fine F is, the smaller the probability of fraud in the accounting information disclosure is.
At this time, the regulators' inspection probability again mainly depends on the degree of punishment F: the smaller F is, the greater the probability of supervision is.
Usually, K − P is greater than zero; if K − P were less than 0, the regulators would have no supervisory effect at all, a situation that should not be allowed to arise. As a result, the probability of fraud in the accounting information disclosure is mainly inversely proportional to B, F, R and K, and directly proportional to C. Measures that act on B, F, R, K and C together will minimize the probability of fraud in the accounting information disclosure.
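These comparative statics can be verified with a quick finite-difference check of the γ* formula for the usual case K − P > 0 (all parameter values below are assumptions for illustration):

```python
def gamma_star(E, F, B, C, R, K, P):
    # Improved-equilibrium fraud probability from the text.
    return C / ((E + F) * (K - P) + K * B + R * P)

base = dict(E=10.0, F=30.0, B=5.0, C=6.0, R=8.0, K=0.9, P=0.2)  # K - P > 0
g0 = gamma_star(**base)
for name in ("B", "F", "R", "K"):
    higher = gamma_star(**dict(base, **{name: base[name] * 1.1}))
    assert higher < g0, name                      # inversely related to B, F, R, K
assert gamma_star(**dict(base, C=base["C"] * 1.1)) > g0  # directly related to C
print("sensitivities match the text")
```

Raising any of B, F, R or K by 10% lowers the fraud probability, while raising C raises it, as stated above.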
47.5 Conclusion
A. The Social Security Fund Accounting System and the Social Security Fund Financial System issued in 1999 should be improved and updated. From the point of view of game theory, a system arrangement is the rule of the game; at the same time, it is also the result of the game. An unreasonable or imperfect system arrangement will be updated and improved in a continuous dynamic game until a new equilibrium is reached; that is, the system designer and the system executor play a dynamic game. A good system arrangement should be perfected continuously. In a certain sense, the "system" is more important than the game itself.
B. The government shall establish not only efficient, honest and relatively independent social security fund supervisory institutions, but also non-governmental supervisory institutions independent of the government regulators, and shall provide public telephone hotlines, websites, mailboxes and other reporting channels. At the same time, this allows the various regulators to supervise each other.
C. On the basis of advanced computer technology, the construction of a social security fund information management platform, advance supervision of the fund's revenues, expenditures and management, and network management of the whole process of fund usage, off-site surveillance and dynamic supervision will be realized gradually.
D. An account-checking system should be established between the social security administration department and the social security fund administrative agencies to prevent deviations between their accounts. At the same time, actuarial reports submitted by enrolled actuaries and audit reports submitted by certified public accountants shall be provided in the financial accounting reports and management reports of the social security fund.
References
Arrow KJ (1963) Uncertainty and the welfare economics of medical care. Am Econ Rev
53:941–973
Cho I-K, Kreps D (1987) Signaling games and stable equilibria. Q J Econ 102:179–221
Marston CL, Shrives PJ (1991) The use of disclosure indices in accounting research: a review article. Br Account Rev 23:195–210
Deng C (2003) A study on the problem of earnings management based on game theory. Acc Res
5:37–42 (in Chinese)
Du X (2004) Corporate governance development and regulation of accounting information
disclosure-game analysis and historical evidences. J Finance Econ 9:74–84 (in Chinese)
Gong L, Zhou H (2006) The key points for the auditing of social insurance fund. J Guizhou Univ
(Social Sciences) 25 (in Chinese)
Gou Y (2002) Current situation and reform idea of china’s social security accounting. Contemp
Finance Econ 2:73–76 (in Chinese)
Joseph VC, Terry LN (2003) Audit committee independence and disclosure: choice for
financially distressed firms. Corp Governance Int Rev 11(4):289–299
Li W, Duan H, Kong X, Ma X (2010) Study on the regulation mode of Chinese social security
fund. Int J Bus Manag 5(9):124–126
Tong G, Liu C (2011) Game analysis of fund supervision. In: International conference on computer engineering and management sciences (ICM), 2011, pp 197–199
Wang X (2001) A game analysis on the accounting information publication of listed companies.
Econ Surv 28(6) (in Chinese)
Wei S, Xue H, Lu T (2003) Analysis on game among parties in information disclosure. Economic
Problems 10:51–53 (in Chinese)
Zhao L (2008) The perfection of China’s social security fund supervision system. Shanghai
Finance 11:55–57 (in Chinese)
Zhang W (1996) Game theory and information economics. Shanghai people’s publishing house,
Shanghai
Zhou Y (2001) Social security fund of accounting research. Dongbei University of Finance & Economics Press, Dalian (in Chinese)
Chapter 48
Research on the Multi-Target Tracking
Algorithm Based on Double FPGA
Xu-dong Liu
Abstract Accuracy and real-time performance of multi-target tracking systems have been the main research problems in the target tracking field. In this research, two FPGA chips are applied as the main processors. The Kalman filter and the particle filter, the two commonly used filtering algorithms, are dynamically integrated to exploit their respective strengths in multi-target tracking. Real-time multi-target tracking is realized through the complementary operation of the two FPGA chips.
48.1 Introduction
With the increasing complexity of target tracking backgrounds and the rapid development of digital technology, the requirements for radar multi-target tracking in interference backgrounds have become much higher than ever. Multi-target tracking technology is widely used in military, communications, satellite navigation, remote sensing and other fields. The core of target tracking research is the filtering algorithm; finding a better-performing filtering algorithm to handle the linear and nonlinear problems of actual systems is therefore a hotspot and a difficulty of the field. In order to conduct more effective real-time multi-target tracking in complex environments, this paper introduces a new filtering scheme that applies two FPGAs as the processor core and combines the Kalman filter algorithm with the particle filter algorithm.
X. Liu
Changchun University of Science and Technology, Changchun, China
X. Liu (&)
Economic Management Cadre College of Jilin Province, Changchun, China
e-mail: [email protected]
The Kalman filter describes the system by the state-space method and consists of state equations and measurement equations. It estimates the current state value from the previous state estimate and the most recent observation data, and expresses the current value in the form of a state-variable estimate (Ding 2003).
Suppose the system state variable at moment k is X_k; the state equation and the measurement equation are as follows.
X_{k+1} = A_k X_k + x_k \quad (48.1)

y_k = C_k X_k + v_k \quad (48.2)
In the equations, k denotes time, referring here to the k-th iteration of the corresponding signal. The input signal x_k is white noise. The observation
48 Research on the Multi-target Tracking Algorithm 473
noise v_k of the output signal is white noise, too. The branch gain from the input signal to the state variable equals 1. A is the gain matrix among the state variables, which can change with time. C is the gain matrix between the state variables and the output signals. The signal model is shown in Fig. 48.1. Replacing k with k − 1 in the state equation gives:

X_k = A_{k-1} X_{k-1} + x_{k-1} \quad (48.3)

y_k = C_k X_k + v_k \quad (48.4)

Here X_k is the state variable, x_{k-1} the white-noise input signal, v_k the observation noise, and y_k the observation data.
The FPGA implementation of the Kalman filter algorithm is shown in Fig. 48.2. One FPGA computes the predicted covariance and the filter gain, while the other computes the posterior state estimate and the posterior error covariance estimate. In this way, the two FPGAs work simultaneously on computation and storage, improving real-time performance.
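The predict/update cycle described above can be sketched for a scalar system; the split into "gain/covariance" and "state/covariance-update" halves mirrors the FPGA partitioning, although here everything runs sequentially in software. All gains and noise variances are illustrative assumptions:

```python
import random

random.seed(0)
A, Cm = 1.0, 1.0        # state gain and measurement gain (time-invariant here)
Q, Rn = 0.01, 0.25      # system and observation noise variances (illustrative)
x_true, x_est, P = 0.0, 0.0, 1.0

for k in range(200):
    # True system: X_{k+1} = A X_k + x_k,  y_k = C X_k + v_k
    x_true = A * x_true + random.gauss(0.0, Q ** 0.5)
    y = Cm * x_true + random.gauss(0.0, Rn ** 0.5)
    # Share of the first FPGA: predicted covariance and filter gain
    P_pred = A * P * A + Q
    Kg = P_pred * Cm / (Cm * P_pred * Cm + Rn)
    # Share of the second FPGA: posterior state and posterior error covariance
    x_pred = A * x_est
    x_est = x_pred + Kg * (y - Cm * x_pred)
    P = (1.0 - Kg * Cm) * P_pred

print(round(P, 4))      # steady-state posterior variance, well below Rn
```

After a few iterations the posterior variance P settles far below the raw measurement variance Rn, which is the usual way of seeing that the filter is combining prediction and measurement rather than just echoing the sensor.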
For a linear Gaussian stochastic system, the Kalman filter yields an exact analytical solution for the posterior probability density function. In some cases, however, an exact posterior density cannot be obtained, and approximate suboptimal estimates are required (Xu 2008; Abdel-Hakim and Farag 2006). Moreover, when the system's observability is poor or the state-space model is only weakly linear, the Kalman filter algorithm cannot meet the requirements of convergence precision and convergence time (Deng 2008). To solve this problem, Gordon introduced a re-sampling step into sequential importance sampling and put forward the sequential importance re-sampling (SIR) algorithm, which made the approach more complete. The SIR algorithm is also the basis of the particle filter. Compared with the conventional Kalman filter and its improved variants (Zawar and Malaney 2006), the particle filter performs better in nonlinear, non-Gaussian environments (Wu et al. 2009). When there are enough particles, its estimate approximates the optimal solution.
474 X. Liu
The particle filter is a filtering method for nonlinear, non-Gaussian systems based on Monte Carlo methods (MCM) and recursive Bayes estimation. Its fundamental idea is as follows. First, according to the empirical conditional distribution of the system state vector (Czyz et al. 2007), a set of random samples, called "particles", is generated by sampling the state space. The particle weights and positions are then adjusted continually according to the observation data (Wang 2009). Finally, the corrected empirical conditional distribution and estimates of the system states and parameters are obtained from the adjusted particle information. The algorithm is shown in Fig. 48.3.
Normally, a nonlinear, non-Gaussian system is described as follows.

X_k = f_k(X_{k-1}, v_k) \quad (48.5)

Y_k = h_k(X_k, w_k) \quad (48.6)

In the equations, X_k ∈ R^n is the system state at time k and Y_k is the measurement. v_k and w_k represent the independent and identically distributed system noise and observation noise sequences, respectively. The state-space model describes the dynamic system; the implied noise is white. In the study of multi-target tracking, the state vector contains target
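A sketch of the SIR particle filter on a standard scalar nonlinear benchmark model (an illustration, not the paper's system; a linear observation y_k = x_k + w_k is used so the posterior stays unimodal):

```python
import math
import random

random.seed(1)
N = 500                                   # number of particles
x = 0.1                                   # true state
particles = [random.gauss(0.0, 1.0) for _ in range(N)]
errs = []

def f(xp, k):
    # Nonlinear state transition of the benchmark (assumed model).
    return 0.5 * xp + 25 * xp / (1 + xp * xp) + 8 * math.cos(1.2 * k)

for k in range(1, 51):
    x = f(x, k) + random.gauss(0.0, 1.0)            # Eq. (48.5) form
    y = x + random.gauss(0.0, 1.0)                  # Eq. (48.6) form, linear h
    # 1) Propagate each particle through the state equation (importance sampling).
    particles = [f(p, k) + random.gauss(0.0, 1.0) for p in particles]
    # 2) Weight by the Gaussian observation likelihood and normalize.
    w = [math.exp(-0.5 * (y - p) ** 2) for p in particles]
    s = sum(w)
    w = [wi / s for wi in w]
    # 3) State estimate = weighted mean of the particles.
    est = sum(wi * p for wi, p in zip(w, particles))
    errs.append(abs(est - x))
    # 4) Re-sample N particles with replacement according to the weights.
    particles = random.choices(particles, weights=w, k=N)

mean_err = sum(errs) / len(errs)
print(round(mean_err, 2))
```

The re-sampling step (4) is what distinguishes SIR from plain sequential importance sampling: it discards low-weight particles and duplicates high-weight ones, keeping the particle cloud concentrated where the posterior mass is.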
The simulation takes only the first 100 frames as reference. For simplicity, both the system noise and the measurement noise are assumed to be zero-mean Gaussian white noise.

The simulation results are shown in Fig. 48.5. The track formed by solid circles is the actual multi-target motion trajectory, and the hollow rings form the trajectory estimated by the algorithm. The graph shows that, for each target, the difference between the estimated trajectory and the actual trajectory is small. Even when a target undergoes high acceleration during its movement, the filtering effect is still achieved, which fully meets the requirements of multi-target tracking.
48.4 Conclusion
This paper designed and realized a multi-target tracking algorithm for complex environments using two FPGAs as the main processing chips, which satisfies the requirements of subsequent multi-target detection and tracking. The simulation results show that the expected objectives are achieved. If the real-time signal-processing speed needs to be improved further, a DSP (TMS320C6416) can be applied for signal preprocessing before the two FPGAs perform the specific signal processing; in this way, better signal characteristics can be distinguished. Besides, adding a USB output interface to a system using this algorithm would have a bright market prospect.
References
Abdel-Hakim AE, Farag AA (2006) A sift descriptor with color invariant characteristics. IEEE
Comput Soc Conf Comput Vis Pattern Recognit 67(10):1978–1983
Czyz J, Ristic B, Macq B (2007) A particle filter for joint detection and tracking of color objects.
Image Vis Comput 21(9):1271–1281
Deng WT (2008) Research of particle filter algorithm based on FPGA implementation. Master's dissertation, Beijing Jiaotong University, Beijing, China
Ding YM (2003) Digital signal processing. Xidian University Press, Xi'an, pp 48–61
Feng Y (2008) The research on data association in multiple target tracking. Master's dissertation, Xidian University, Xi'an, China
Hu P (2010) Research on video object tracking with Kalman filter. Master's dissertation, College of Computer Science of Chongqing University, Chongqing, China
Hu S (2008) Visual target tracking based on Particle filter. Ph.D. dissertation, Nanjing University
of Science and Technology, Nanjing, China
Liu G (2003) The research and realization of multiple target tracking algorithm. Ph.D.
dissertation, Northwestern Polytechnical University, Xi’an, China
Qian Z (2011) A multi-target tracking problem of occlusion. Master's dissertation, Guangdong University of Technology, Guangzhou, China
Vo B, Ma W (2006) The Gaussian mixture probability hypothesis density filter. IEEE Trans Signal Process 54(11):4091–4104
Wang TT (2009) Particle filter algorithm application and research on target tracking. Master's dissertation, North University of China, China
Wang Y (2004) Research on occlusion in object tracking. Master's dissertation, Huazhong University of Science and Technology, Wuhan, China
Wu C, Yang D, Hao ZC (2009) Color image tracking algorithm based on particle filter. Optics Precis Eng 17(10):2542–2546
Xu LZ (2008) Visual tracking based on particle filter and Kalman filter under complex environments. Master's dissertation, Zhejiang University, Hangzhou, China
Zawar S, Malaney RA (2006) Particle filters and position tracking in Wi-Fi networks. In:
Proceedings of the 63rd IEEE vehicular technology conference. Melbourne, Australia, 7–10
May 2006, pp 613–617
Zhang C (2010) Research on object detection algorithm of partially occluded. Mater dissertation,
Xi’an Technological University, Xi’an, China
Chapter 49
A Study of the Effect of Xi’an Subway
on the Rent of Office Building
49.1 Introduction
Whether investors are developing real estate or firms are choosing where to locate their offices, office location is a critical factor. Experience with urban rail transit construction has shown that, through the "value capture" phenomenon, urban rail transit can play a significant role in improving the quality and attractiveness of urban areas (Du and Mulley 2007). It not only improves traffic convenience along the line, but also attracts living, business, education, entertainment and other facilities to aggregate along the metro, enhancing the environment of the offices along the subway.
Rail transit therefore strongly stimulates the development of surrounding property, contributing to high-density development of station sites and to the enhancement of real estate value along the subway. In general, the closer an office building is to a metro station, the greater the impact of transit on its rent. However, some studies find no impact of the subway on the rents of offices along it.
The impact of the subway on office rents reflects not only the strength of the external economic role of subway construction, but also its driving and agglomerating force on the surrounding economy. In essence, it reflects the input-output value of subway construction, the value of rail transport space resources and the economic sustainability of future urban rail transit development. This research can therefore provide valuable suggestions for more rational and scientific urban rail transit planning and for the use-planning of land along the subway, in order to fully realize the optimal allocation of urban space resources, and can also support investors' siting decisions.
Subway Line 2 is currently the only subway line in Xi'an that has been built and put into operation. With a total length of 26.4 km, it runs through the city from north to south: from the Xi'an North Passenger Transport Station southward through the Bell Tower, along the South Hamlet and Xiao-zhai to the Qu-jiang International Convention and Exhibition Centre. It officially went into operation in September 2011. This paper studies office rents along Xi'an Subway Line 2 and conducts an empirical analysis of the impact of subway construction on office rents by employing a BP neural network model.
Since the 1970s, there has been extensive and in-depth research abroad on the development interests affected by urban rail transit projects. Within this field, the impact of rail transit construction on surrounding property values is an important topic. Mejia-Dorantes et al. (2011) studied the Madrid metro and, using spatial statistical techniques, concluded that the subway plays a very important role in improving transport efficiency and urban planning. Pagliara and Papa (2011) studied the value impact on city regions along the subway and showed that, as time goes on, the value of the surrounding land gradually increases.
The subway from the Beijing Railway Station to the Apple Orchards in the western suburbs was China's first metro line and started trial operation in 1971. In recent years, with the rapid development of domestic metros, the relationship between rail transit and real estate values along it has drawn the attention of many scholars. Some studies focus on residential real estate, such as Yang and Shao (2008) and Liu and Shang (2009). Other studies address the office market, such as Jiao et al. (2009), whose empirical results on office rent changes along Beijing Metro Line 10 show no significant effect of urban rail transit on office rents; the main reason was that the center effect of the city center and the agglomeration effect of shopping districts weakened the impact of urban rail transit.
The tools of empirical analysis in these studies are mainly the transportation cost model, the hedonic price model and neural network analysis. The transportation cost model relates transport costs to real estate value without considering other value-related real estate factors. The hedonic price model commonly takes the form of linear equations and mainly addresses static problems. The neural network model, with its capacity for autonomous learning, can accurately simulate non-linear relationships and has higher accuracy and reliability (Yang and Shao 2008).
On the basis of existing research, this paper uses a BP neural network model to analyze the strength of the influence of Xi'an Metro Line 2 on office rents, considering various internal and external factors affecting them.
To obtain qualified samples, the following restrictions were applied: (1) samples were taken from rental offices around major stations, such as Chang'an Road Station and South Gate Station, taking into account the big gap in commercial atmosphere and traffic conditions between the northern and southern sections of Metro Line 2; (2) only rental offices within 800 m of the subway were sampled, since there is no effect beyond a certain range; (3) outlier samples whose rents are significantly low or high were excluded. Under these restrictions, this paper collected 80 samples from 20 leased office buildings along Xi'an Subway Line 2. Based on existing research, the paper selects the following 11 factors influencing office rents: distance from the subway exit, office type, degree of commercial agglomeration, public facilities, traffic conditions, appearance, decoration level, internal support facilities, floor location, property costs and property level. All variables are defined in Table 49.1 (Wang and Chen 2009).
The descriptive variables in the samples are quantified with reference to real estate valuation methods and the expert scoring method. First, a standard sample is selected for each independent variable and scored 100; the other samples are then scored one by one against the standard sample. The quantified samples are normalized with the formula:
Y_i = \frac{X_i - \min_{1 \le i \le n} X_i}{\max_{1 \le i \le n} X_i - \min_{1 \le i \le n} X_i}

where X_i and Y_i are the values before and after the conversion (Fang and Chen 2004).
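The formula above is the standard min-max rescaling onto [0, 1]. A minimal Python sketch, using hypothetical expert scores rather than the paper's sample data:

```python
def min_max_normalize(xs):
    """Map each value x to (x - min) / (max - min), i.e. onto [0, 1]."""
    lo, hi = min(xs), max(xs)
    return [(x - lo) / (hi - lo) for x in xs]

# Hypothetical expert scores, with the standard sample scored 100
scores = [60.0, 100.0, 85.0, 70.0]
normalized = min_max_normalize(scores)
```

Every normalized value lies in [0, 1], with the minimum mapped to 0 and the maximum to 1, which keeps all 11 factors on a comparable scale for network training.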
482 Y. Wang and W. Wei
This article adopts a three-layer BP neural network with an input layer, a hidden layer and an output layer. Neurons in adjacent layers are fully connected, i.e. each lower-layer neuron is connected by a weight to every upper-layer neuron, while there are no connections between neurons within the same layer. The input signal first propagates forward to the hidden-layer nodes; after the activation function is applied, the hidden-layer output is transmitted to the output node, and finally the output is obtained. A sigmoid-type function is selected as the node activation function to introduce the required non-linearity. For network training, the TRAINGDM optimization algorithm (gradient descent with momentum) is selected to improve generalization ability and convergence speed; the maximum number of training epochs is set to 500, the target error to 0.01, and the display frequency to 50. The 11 main factors are the inputs of the BP neural network model and office rent is the output. The paper selects 60 groups as training samples and 20 groups as simulation data. The results of the model training are shown in Figs. 49.1 and 49.2.
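The training scheme described above (three sigmoid layers, gradient descent with momentum as in TRAINGDM, at most 500 epochs, target MSE 0.01) can be sketched as follows. The data, the hidden-layer size (10 rather than the paper's 30) and the learning rate are illustrative assumptions, not the paper's 11-factor sample:

```python
import numpy as np

# Three-layer sigmoid network trained by gradient descent with momentum
# (the idea behind MATLAB's TRAINGDM): at most 500 epochs, target MSE 0.01.
rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative data: 60 training samples with 3 input factors (the paper
# uses 60 samples with 11 factors); target is a smooth function in (0, 1).
X = rng.random((60, 3))
y = sigmoid(X @ np.array([[1.0], [-2.0], [0.5]]))

n_hidden, lr, momentum = 10, 0.5, 0.8         # assumed hyper-parameters
W1 = rng.normal(0.0, 0.5, (3, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0.0, 0.5, (n_hidden, 1)); b2 = np.zeros(1)
vW1 = np.zeros_like(W1); vb1 = np.zeros_like(b1)
vW2 = np.zeros_like(W2); vb2 = np.zeros_like(b2)

initial_mse = mse = None
for epoch in range(500):                      # maximum training epochs
    h = sigmoid(X @ W1 + b1)                  # forward: input -> hidden
    out = sigmoid(h @ W2 + b2)                # forward: hidden -> output
    err = out - y
    mse = float(np.mean(err ** 2))
    if initial_mse is None:
        initial_mse = mse
    if mse < 0.01:                            # target error reached
        break
    # backpropagate through the two sigmoid layers
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # momentum update: v <- momentum*v - lr*grad, then w <- w + v
    vW2 = momentum * vW2 - lr * (h.T @ d_out) / len(X); W2 += vW2
    vb2 = momentum * vb2 - lr * d_out.mean(axis=0);     b2 += vb2
    vW1 = momentum * vW1 - lr * (X.T @ d_h) / len(X);   W1 += vW1
    vb1 = momentum * vb1 - lr * d_h.mean(axis=0);       b1 += vb1
```

The early-stopping check halts training as soon as the mean squared error drops below the 0.01 goal, mirroring the target-error and maximum-epoch settings quoted in the text.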
After repeated experiments, the number of hidden-layer nodes was set to 30. The best result, shown in Fig. 49.1, is that the error between the desired output of the training samples and the calculated output fell below 1 % after 168 training iterations; when the network reaches the predetermined target error, training is considered successful. Figure 49.2 shows the model fit to the samples, R = 0.83912, i.e. the model fits the sample data relatively closely, at about 84 %.
Using the trained model for testing (Table 49.2), the maximum error between the output value and the expected value is 10.52 %, with an average error of 5.2 %; the comprehensive evaluation results are consistent with the expected data.

Table 49.2 The tested samples and the model estimates of the output
Tested samples (Yi) 1 2 3 4 5 6 7
Real value 0.1607 0.2047 0.0689 0.2216 0.063 0.1198 0.1524
Model estimates 0.1022 0.1803 0.1642 0.1682 0.1065 0.028 0.1501
Relative error 0.0585 0.0244 -0.095 0.0533 -0.0435 0.0918 0.0023
Tested samples (Yi) 8 9 10 11 13 14 15
Real value 0.0398 0.2392 0.4625 0.5199 0.1069 0.1606 0.157
Model estimates 0.145 0.2021 0.4589 0.4839 0.1744 0.2084 0.2084
Relative error -0.105 0.0372 0.0036 0.036 -0.0675 -0.048 -0.0515

The results show that shopping malls, hotels, apartments, hospitals and other public facilities can greatly enhance the value of office space. Internal supporting facilities also have a great impact on office rents, namely 11.7 %, because the intelligent management of an office building brings great convenience to the businesses in it.
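The reported error figures can be recomputed directly from Table 49.2: the "relative error" row is simply the real value minus the model estimate of the normalized rent, and rounding in the table explains any small discrepancy with the 10.52 % and 5.2 % quoted in the text:

```python
# Real values and model estimates of the normalized rents, from Table 49.2
real = [0.1607, 0.2047, 0.0689, 0.2216, 0.0630, 0.1198, 0.1524,
        0.0398, 0.2392, 0.4625, 0.5199, 0.1069, 0.1606, 0.1570]
est  = [0.1022, 0.1803, 0.1642, 0.1682, 0.1065, 0.0280, 0.1501,
        0.1450, 0.2021, 0.4589, 0.4839, 0.1744, 0.2084, 0.2084]

errors = [r - e for r, e in zip(real, est)]          # "relative error" row
max_abs_error = max(abs(e) for e in errors)          # worst-case test error
mean_abs_error = sum(abs(e) for e in errors) / len(errors)
```

The maximum absolute error works out to 0.1052 (the 10.52 % in the text) and the mean absolute error to roughly 0.051.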
The impact of distance from the subway exit on office rents is 10.5 %, ranking fourth among the selected factors, which indicates that the impact of Xi'an Subway Line 2 on the rents of office buildings along it is quite significant. In recent years, with Xi'an's rapid economic development, the sharp increase in the non-local population and the rapid growth of private car ownership, urban traffic congestion has become increasingly serious. Xi'an Subway Line 2 lies on the city's north-south traffic artery; its opening has considerably mitigated the traffic load on this trunk road, raised the transport convenience of office properties and lowered the transportation costs of the enterprises settled there, thus increasing the value of office properties.
However, the Xi'an urban rapid rail transit construction plan shows that Xi'an will build a total of six subway lines with a total length of 251.8 km, forming a radial network structure. At present only Metro Line 2 is completed; its length is only about one tenth of the total planned metro length, and it can only solve the traffic problems on one trunk road. Since the metro network has not yet been formed, the synergy between subway lines cannot yet play its part, and the powerful externalities of the subway are not adequately reflected. Competition between subway lines will also have a negative effect on office rents, so a comparative study of the combined effect of these two aspects must wait until the network is completed (Chen and Wu 2005).
49.4 Conclusion
Urban rail transit not only responds to the urban traffic congestion brought by rapid economic development, but also improves the quality and attractiveness of urban areas. Using a BP neural network model, this paper quantitatively analyzes the degree of influence of Xi'an Subway Line 2 on the rents of office buildings. The study shows that Line 2 has a significant impact on the rents of the office buildings along it: its importance weight is 10.5 %, ranking fourth among the 11 influencing factors. The construction of Xi'an Subway Line 2 not only improves transport convenience and the convenience of people's work and life, but also enhances the value of the office buildings along it. However, as Line 2 is the only metro line in Xi'an and the metro network has not yet formed, the property value enhancement brought by synergy among subway lines cannot yet be fully manifested. On the other hand, the line studied here will lose its current monopoly position once the future metro network is formed, which will weaken its role in promoting office rents. Hence, further research after the completion of the metro network will be very significant (Ma and Yang 2010).
Acknowledgments Supported by the key discipline program of Shaanxi Province and by the featured discipline of philosophy and general social sciences of Shaanxi higher education institutions.
References
Chen F, Wu Q-B (2005) Quantitative research on the increase of real estate price alongside the urban rail transit. Urban Rail Transp Res 10:12–17 (Chinese)
Du H, Mulley C (2007) The short-term land value impacts of urban rail transit: quantitative evidence from Sunderland, UK. Land Use Policy 24:223–233
Fang X-Y, Chen Z-N (2004) Review of impacts of urban rail transit on property values in foreign countries. Trop Geogr 09:270–274 (Chinese)
Jiao X, Gu L-P, Hu C-M (2009) Prices analysis of rail transit impact on real estate based BP neural network. J Chongqing Inst Technol Nat Sci 10:135–139 (Chinese)
Liu X-J, Shang P (2009) Analysis of the impact of real estate prices along the Xi'an Subway Line 2. Railway Transp Econ 02:41–43 (Chinese)
Ma C-Q, Yang F-S (2010) Impact of rail transit on price increase of residential real estate. J Traffic Transp Eng 4:91–96 (Chinese)
Mejia-Dorantes L, Paez A, Vassallo JM (2011) Transportation infrastructure impacts on firm location: the effect of a new metro line in the suburbs of Madrid. J Transp Geogr 10:91–97
Pagliara F, Papa E (2011) Urban rail systems investments: an analysis of the impacts on property values and residents' location. J Transp Geogr 19(2):200–211
Wang C-C, Chen Y-H (2009) Urban rail transit impact on the surrounding office rentals: empirical analysis based on the Beijing Subway Line 10. Construction Economy (June suppl):4-6–4-9 (Chinese)
Yang L-Y, Shao C-F (2008) Integrated forecasting model for real estate price along urban rail transit based on BP neural network and Markov chain. J Jilin Univ Eng Technol Ed 3:514–519
Chapter 50
Urban–Rural Income Gap in China:
Evolutionary Trend and Influencing
Factors
Cun-gui Li
Abstract This paper examines the evolutionary trend of the urban–rural residents' income gap in China and identifies its main influencing factors. First, the current situation and the historical evolution of the urban–rural income gap in China are analyzed. The results demonstrate that since the reform and opening up in 1978, the Chinese urban–rural income gap has gone through distinct phases: it narrowed, expanded, narrowed again, expanded again, and then flattened. Then, using data from 1978 to 2010, multiple linear regression models are established to identify the correlation between the urban–rural income gap and its influencing factors. The study shows that the urban–rural dual structure, the employment structure and urbanization are positively correlated with the urban–rural income gap, while the rural financial development level is negatively correlated with it.
Keywords Evolutionary trend · Influencing factors · Multiple regression analysis · Urban–rural income gap
50.1 Introduction
For a long time, a development strategy of "industrial priority and urban bias" was implemented in China: in industrial policy, priority was given to urban industry; in capital flows, the agricultural surplus was transferred from countryside to city through taxation, the price scissors between industrial goods and agricultural products, financial institutions and so on; and in population migration, many obstacles and constraints prevented labor from migrating from countryside to city. These urban–rural dual policies have made the urban–rural income gap increasingly wide. In 2011, the per capita annual disposable income of Chinese urban households was 21 810 Yuan, while the per capita annual net income of rural households was 6 977 Yuan; the ratio of urban to rural income reached 3.13:1. Moreover, many hidden incomes are not included in the income of urban households, such as housing subsidies, medical insurance, unemployment insurance, old-age pensions, minimum income guarantees and other social welfare. If these were taken into account, the urban–rural income gap would be much bigger (Li and Luo 2007).
C. Li (&)
School of Economics, Henan University of Science and Technology, Luoyang, China
e-mail: [email protected]
China has now become one of the countries with the largest urban–rural income gaps in the world. An enlarging urban–rural income gap will inevitably affect the sustainability of China's economic development. Levin (2001) holds that, as in many other developing countries, urban-biased policies are the main reason for the enlarging urban–rural income gap. A study by Cai (2003) has shown that the urban–rural relationship in many developing countries is mandatory, namely urban biased, and that an urban–rural income gap caused by urban-biased policies is a common phenomenon in developing countries. Hertel and Zhai (2006) found that the household registration system, the rural land contract system and restrictions on non-agricultural labor migration have a significant impact on the urban–rural income gap. Lei and Cai (2012) conclude that the distortion of primary income distribution and urban-biased fiscal expenditure policy significantly enlarge urban–rural inequality. Chen and Peng (2012) argue that the opportunity inequality caused by the household registration system is an important reason for the urban–rural income gap. Zuo (2012) concludes that the main measure to narrow the urban–rural income gap is to speed up urbanization.
Based on this literature and using the time series from 1978 to 2010, this paper builds multiple regression models to test the relationship between the urban–rural income gap and its influencing factors: the urban–rural dual economic structure, urbanization, the growth rate of GDP, the employment structure, the rural industrialization level and the rural financial development level. The purpose is to find the key factors influencing the urban–rural income gap and to provide a basis for decisions aimed at dismantling the urban–rural dual economic system and forming a new pattern of integrated urban–rural economic and social development. The rest of the paper is arranged as follows: the second part analyzes the current status and historical evolution of the urban–rural residents' income gap in China; the third part makes an empirical analysis of the influencing factors of the urban–rural income gap; the fourth part presents the conclusions.
According to Fig. 50.1, although both urban and rural incomes have grown in absolute terms since 1978, the ratio of urban residents' income to rural residents' income has become bigger. From 1978 to 2011, the per capita disposable income of urban households and the per capita net income of rural households increased from 343.40 Yuan and 133.60 Yuan to 21 810 Yuan and 6 977 Yuan, annual average growth rates of 13.40 % and 12.73 % respectively. Because urban residents' income grows faster than rural residents' income, the absolute urban–rural income gap keeps widening.

Fig. 50.1 The changing trend of urban–rural income growth rate and gap from 1978 to 2011 in China
As shown in Fig. 50.1, the urban–rural income gap has shown obvious phase characteristics since the reform and opening up in China.
(1) In the first phase (1978–1983), the urban–rural income gap shrank rapidly. The central task of this phase was rural economic system reform: the rural household contract responsibility system was approved and promoted, the enthusiasm of the peasants for production was greatly motivated, the rural productive forces were liberated and agricultural labor productivity rose by a big margin. By contrast, urban economic system reform lagged behind. As a result, the growth rate of rural residents' income was higher than that of urban residents: from 1978 to 1983, the per capita net income of rural households rose by 18.32 % annually, and the per capita disposable income of urban residents by 10.46 %. The relative urban–rural income gap dropped from 2.57 in 1978 to 1.82 in 1983.
(2) In the second phase (1984–1994), the urban–rural income gap generally expanded rapidly. In this phase the focus of economic reform shifted from countryside to city, resulting in rapid growth of the urban economy. Over the same period, the incentive effects of the rural household contract responsibility system on agriculture were diminishing, the agricultural labor force grew gradually, and the prices of agricultural means of production increased substantially, so rural economic growth slowed down. From 1984 to 1994, the average annual growth rates of urban per capita disposable income and rural per capita net income were 18.30 % and 13.14 % respectively. The relative urban–rural income gap increased from 1.83 in 1984 to 2.86 in 1994.
(3) In the third phase (1995–1997), the urban–rural income gap diminished once again. In this phase, the purchasing prices of agricultural and sideline products were raised considerably and rural township enterprises developed rapidly, so the wage and salary income of rural households increased by a large margin. During the same period, the government implemented tight macro-control policies, leading to a slowdown of urban economic development and a soft landing for the economy. From 1995 to 1997, the average annual growth rate of urban per capita disposable income was 13.86 % and that of rural per capita net income was 19.62 %; urban income grew much more slowly than rural income. The relative urban–rural income gap dropped from 2.71 in 1995 to 2.47 in 1997.
(4) In the fourth phase (1998–2003), the urban–rural income gap expanded once again. In this phase it became common that peasants found it difficult to sell their grain and that output increased while income did not. The development of rural township enterprises slowed down, their ability to absorb rural surplus labor weakened gradually, and their contribution to farmers' income growth decreased obviously. In addition, urban-oriented policies and institutional arrangements centered on the household registration system greatly constrained the migration of rural surplus labor to non-agricultural industries in rural or urban areas; the household registration system and the urban employment system became the primary bottleneck restricting the growth of peasants' income. From 1998 to 2003, the average annual growth rate of urban per capita disposable income was 9.32 % while that of rural per capita net income was only 3.94 %, less than half the urban rate. The relative urban–rural income gap increased from 2.51 in 1998 to 3.23 in 2003.
(5) In the fifth phase (2004–present), the urban–rural income gap has tended to flatten. From 2004 to 2012, nine No. 1 central documents were issued by the Central Committee of the CPC to address the "three rural issues" (agriculture, rural areas and farmers). A series of policy measures to support and benefit agriculture, rural areas and farmers were implemented: the agricultural tax, the livestock tax and taxes on special agricultural products were rescinded throughout the countryside; financial support for agriculture was increased; direct subsidies to grain farmers, fine breed allowances and farm machinery purchase subsidies were carried out; and the strategy of balancing urban and rural economic and social development was implemented. Both urban and rural incomes grew relatively fast: from 2004 to 2011, the average annual growth rate of urban per capita disposable income was 12.74 % and that of rural per capita net income was 13.16 %. The relative urban–rural income gap declined from 3.21 in 2004 to 3.13 in 2011.
This paper takes the urban–rural income gap (Y) as the explained (dependent) variable, measured by the ratio of urban residents' income to rural residents' income: urban–rural income gap = per capita disposable income of urban residents/per capita net income of rural residents. The bigger this ratio, the larger the urban–rural income gap.
It is unnecessary to consider every factor affecting the urban–rural income gap. According to the significance of their influence and the availability of data, this paper mainly considers the following independent variables.
(1) The coefficient of urban–rural dual structure (X1). In his essay "Economic development with unlimited supplies of labour", Lewis (1954) assumed that an unlimited supply of labor is available in the traditional agricultural sector and that its marginal labor productivity is negligible, zero, or even negative, generally much lower than urban labor productivity. As labor transfers out of agriculture, the dual economic structure will be eliminated (Lewis 1954). The coefficient of urban–rural dual structure (X1) = (output of the secondary and tertiary industries/labor force of the secondary and tertiary industries)/(output of the primary industry/labor force of the primary industry).
(2) The urbanization level (X2). In theory, as the urbanization level rises, the urban–rural income gap should decrease. On the one hand, the agglomeration effect of cities can absorb a great deal of rural labor, which improves agricultural productivity and raises farmers' income (Lu and Chen 2004). On the other hand, cities have a strong radiating and driving effect on the rural economy: a city can promote rural economic development through technology transfer, industrial transfer, capital output, information dissemination and so on. China's current urbanization, however, is incomplete, so large-scale agricultural management cannot be realized and rural migrant workers cannot settle permanently in cities (Ma 2008). Meanwhile, among rural residents only those with higher human capital or certain skills can easily migrate to cities; those who stay in the countryside are mostly poorly educated, lacking necessary skills, elderly or female, popularly known as the "386199 troops" (38 refers to women, 61 to children and 99 to the elderly). This massive spillover of human capital from rural areas may account for the widening urban–rural income gap.
(3) Growth rate of per capita GDP (X3). Kuznets (1995) proposed the "inverted U" hypothesis to explain how the income distribution gap changes with economic development in industrial countries. The Kuznets inverted-U curve indicates that at the beginning of economic development, especially as national per capita income rises from the lowest to the middle level, the income gap increases along with the growth of per capita GDP; when economic development reaches a certain stage, the income gap gradually narrows with further growth of per capita GDP (Kuznets 1995).
(4) Employment structure (X4). In theory, as labor migrates from the primary industry to the secondary and tertiary industries, the urban–rural income gap should decrease. On the one hand, the transfer of primary industrial labor helps realize large-scale agricultural management and improve agricultural labor productivity; on the other hand, non-agricultural employment raises rural residents' wage income, which should reduce the urban–rural income gap. However, although the restrictions on rural labor mobility have been gradually loosened since the reform and opening up in 1978, urban-biased welfare systems still exist, and rural laborers who find jobs in cities are subject to discriminatory treatment in industry access, wages, the protection of rights and interests, and so on. The employment structure (X4) = persons employed in non-agricultural industries/total employed persons.
(5) Rural industrialization level (X5). The development of rural industrialization not only absorbs a great deal of rural labor but also promotes rural modernization. As more and more rural people leave the land, the remaining farmers possess more resources, so appropriate large-scale agricultural management can be achieved (Hou 2005). Rural industrialization level (X5) = rural persons employed in non-agricultural industries/total rural employed persons.
(6) Rural financial development level (X6). The production theory of economics shows that an increase in capital results in growth of output. The development of rural finance can increase the rural capital stock and thereby raise agricultural output and farmers' income. Rural financial development level (X6) = rural loans/output of the primary industry (Liu and Hu 2010).
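The six explanatory variables defined above are simple ratios of published statistics. A small sketch with hypothetical raw figures, used purely to illustrate how the ratios are constructed (X3, the growth rate of per capita GDP, is read directly from the yearbooks):

```python
# Hypothetical raw statistics (output in 100 million yuan, employment in
# millions of persons) used only to illustrate the ratio definitions.
sec_ter_output, sec_ter_labor = 80.0, 0.6    # secondary + tertiary industries
primary_output, primary_labor = 20.0, 0.4    # primary industry
urban_pop, total_pop = 45.0, 100.0
nonfarm_employed, total_employed = 55.0, 100.0
rural_nonfarm_employed, rural_employed = 30.0, 100.0
rural_loans = 17.0

X1 = (sec_ter_output / sec_ter_labor) / (primary_output / primary_labor)
X2 = urban_pop / total_pop                       # urbanization level
X4 = nonfarm_employed / total_employed           # employment structure
X5 = rural_nonfarm_employed / rural_employed     # rural industrialization
X6 = rural_loans / primary_output                # rural financial development
```

In Table 50.1 the ratio variables X2, X4, X5 and X6 are reported as percentages, so these fractions would be multiplied by 100 to match.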
Multiple linear regression models are often used as empirical models when more than one independent variable is involved; they approximate the true unknown functional relationship between the dependent variable and the independent variables (Pao 2008).
Based on statistical data from 1978 to 2010 (Table 50.1), a multiple regression model is built to test the relationship between the urban–rural income gap and its influencing factors:
Y = \beta_0 + \sum_{j=1}^{6} \beta_j X_j + \varepsilon \quad (50.1)
Table 50.1 The statistical data of the urban–rural income gap and its influencing factors in China (1978–2010)
Years Y X1 X2 (%) X3 (%) X4 (%) X5 (%) X6 (%)
1978 2.57 6.10 17.92 10.19 29.48 7.57 15.17
1979 2.42 5.08 18.96 6.15 30.20 7.71 14.14
1980 2.50 5.09 19.39 6.50 31.25 8.52 18.20
1981 2.20 4.56 20.16 3.90 31.90 8.86 18.02
1982 1.95 4.26 21.13 7.46 31.87 8.88 18.17
1983 1.82 4.10 21.62 9.26 32.92 10.20 19.14
1984 1.83 3.76 23.01 13.67 35.95 14.18 29.85
1985 1.86 4.18 23.71 11.93 37.58 16.01 30.53
1986 2.12 4.19 24.52 7.24 39.05 17.73 39.30
1987 2.17 4.09 25.32 9.81 40.01 18.81 43.91
1988 2.17 4.22 25.81 9.50 40.65 19.51 43.67
1989 2.29 4.48 26.21 2.48 39.95 18.84 45.83
1990 2.20 4.05 26.41 2.33 39.90 18.43 47.66
1991 2.40 4.56 26.94 7.70 40.30 18.59 55.71
1992 2.58 5.06 27.46 12.85 41.50 19.86 65.94
1993 2.80 5.27 27.99 12.66 43.60 22.38 69.49
1994 2.86 4.79 28.51 11.81 45.70 24.95 48.52
1995 2.71 4.38 29.04 9.73 47.80 27.53 24.88
1996 2.51 4.16 30.48 8.86 49.50 28.98 50.82
1997 2.47 4.45 31.91 8.18 50.10 28.95 57.82
1998 2.51 4.66 33.35 6.80 50.20 28.24 67.65
1999 2.65 5.09 34.78 6.69 49.90 26.98 74.16
2000 2.79 5.64 36.22 7.58 50.00 26.34 73.28
2001 2.90 5.95 37.66 7.52 50.00 25.22 76.83
2002 3.11 6.28 39.09 8.35 50.00 23.86 82.83
2003 3.23 6.57 40.53 9.34 50.90 23.79 92.47
2004 3.21 5.71 41.76 9.43 53.10 25.85 83.65
2005 3.22 5.88 42.99 10.66 55.20 27.71 86.67
2006 3.28 5.94 44.34 12.05 57.40 29.57 80.82
2007 3.33 5.71 45.89 13.57 59.20 30.74 78.74
2008 3.31 5.45 46.99 9.07 60.40 31.15 74.43
2009 3.33 5.34 48.34 8.67 61.90 32.03 87.01
2010 3.23 5.16 49.95 9.91 63.30 32.56 85.74
Data resources: China statistical yearbook, almanac of China’s finance and banking, China
township enterprise yearbook and China compendium of statistics 1949–2004
Table 50.2 Regression analysis results using the enter method: coefficients a
Model Unstandardized coefficients Standardized coefficients t Sig. Collinearity statistics
B Std. error Beta Tolerance VIF
(Constant) -2.849 0.910 -3.132 0.004
(X1) 0.413 0.045 0.654 9.212 0.000 0.407 2.457
(X2) -0.102 0.042 -2.007 -2.398 0.024 0.003 341.301
(X3) -0.006 0.008 -0.033 -0.675 0.506 0.872 1.147
(X4) 0.189 0.065 3.832 2.918 0.007 0.001 839.643
(X5) -0.089 0.040 -1.444 -2.225 0.035 0.005 205.142
(X6) 0.001 0.003 0.041 0.244 0.809 0.072 13.893
a Dependent variable: urban–rural income gap (Y)
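Tables 50.2 and 50.3 report tolerance and VIF for each predictor. A sketch of how those collinearity statistics are computed (VIF_j = 1/(1 − R_j²), where R_j² comes from regressing X_j on the other predictors; tolerance is the reciprocal of VIF), assuming NumPy and synthetic data rather than the Table 50.1 series:

```python
import numpy as np

def vif(X):
    """Variance inflation factor of each column of X (no intercept column).

    VIF_j = 1 / (1 - R_j^2), where R_j^2 comes from regressing column j
    on the remaining columns (with an intercept). Tolerance = 1 / VIF.
    """
    n, p = X.shape
    out = np.empty(p)
    for j in range(p):
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(n), others])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ coef
        r2 = 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
        out[j] = 1.0 / (1.0 - r2)
    return out

# Illustrative data: x3 is nearly a linear combination of x1 and x2,
# so its VIF should be large, while the independent x4 stays near 1.
rng = np.random.default_rng(1)
x1 = rng.normal(size=100)
x2 = rng.normal(size=100)
x3 = x1 + x2 + rng.normal(scale=0.05, size=100)
x4 = rng.normal(size=100)
vifs = vif(np.column_stack([x1, x2, x3, x4]))
```

The huge VIFs in Table 50.2 (e.g. 839.6 for X4) are exactly this pattern, which is why the stepwise model of Table 50.3 drops most of the collinear predictors.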
Table 50.3 Regression analysis results using the stepwise method: coefficients a
Model Unstandardized coefficients Standardized coefficients t Sig. Collinearity statistics
B Std. error Beta Tolerance VIF
1 (Constant) 0.194 0.185 1.046 0.304
(X2) 0.030 0.004 0.587 8.181 0.000 0.652 1.533
(X1) 0.300 0.045 0.475 6.618 0.000 0.652 1.533
2 (Constant) -0.756 0.233 -3.252 0.003
(X1) 0.392 0.039 0.620 10.025 0.000 0.588 1.700
(X4) 0.037 0.005 0.759 7.350 0.000 0.211 4.735
(X6) -0.005 0.002 -0.254 -2.183 0.037 0.167 5.989
a Dependent variable: urban–rural income gap (Y)
50.4 Conclusion
Since the reform and opening up in 1978, the urban–rural income gap in China
has shown phased changes: it narrowed, widened, narrowed again, widened again,
and then flattened, and each change is closely related to policy shifts.
According to the empirical analysis, the relationship between the urban–rural
income gap and its key influencing factors is as follows.
(1) There is a positive correlation between the urban–rural dual structure and
the urban–rural income gap. In both model (1) and model (2), the partial
regression coefficients of X1 are positive and the largest, showing that the
urban–rural dual structure is the key factor driving the expansion of the
urban–rural income gap.
496 C. Li
Acknowledgments The work was supported by the tender subject of Henan government
decision-making research (No. 2012B239) and the investigation subject of Henan federation of
humanities and social sciences and Henan federation of economic organizations (No. SKL-2012-
3318).
References
Chen W-T, Peng X-M (2012) Household registration system, employment opportunity and the
income gap between urban and rural residents in China (in Chinese). Econ Surv
29(2):100–104
Cai F (2003) Rural urban income gap and critical point of institutional change (in Chinese). Soc
Sci China 40(5):16–25
Hertel T, Zhai F (2006) Labor market distortions, rural–urban inequality and the opening of
China's economy. Econ Model 23(1):76–109
Hou Y-j (2005) The development of rural industrialization and urban–rural dual industrialization
(in Chinese). J Xingtai Univ 20(2): 33–35
Kuznets S (1955) Economic growth and income inequality. Am Econ Rev 45(1):1–28
Lei G-q, Cai X (2012) The distortion of primary income distribution, urban–biased fiscal
expenditure policy and urban-rural inequality (in Chinese). J Quant Tech Econ 29(3):76–89
Levin J (2001) China’s divisive development. Harv Int Rev Fall 23:40–42
Lewis WA (1954) Economic development with unlimited supplies of labor. Manchester Sch Econ
Soc Stud 22(2):139–191
Li S, Luo C-L (2007) Re-estimating the income gap between urban and rural households in China
(in Chinese). J Peking Univ 44(2):111–119
Liu Y-w, Hu Z-y (2010) Empirical study on rural financial development on the urban–rural
income gap (in Chinese). J Shanxi Finan Econ Univ 32(2):45–52
Lu M, Chen Z (2004) Urbanization, urban-biased economic policies and urban–rural inequality
(in Chinese). Econ Res J 50(6):50–58
Ma X-S (2008) The negative effect and countermeasures of incomplete urbanization (in Chinese).
Jiangxi Soc Sci 35(1):176–185
Murata Y (2002) Rural-urban interdependence and industrialization. J Dev Econ 68(68):1–34
Pao H-T (2008) A comparison of neural network and multiple regression analysis in modeling
capital structure. Expert Syst Appl 35:720–727
Zuo Y-H (2012) The analysis of urban–rural income gap in status and income source contribution
(in Chinese). Econ Probl 36(1):27–31
Chapter 51
Sensitivity Analysis on Optimized
Sampling for Sealing Performance
of GVTP
51.1 Introduction
Gate valve of thermal power (GVTP) is used in nuclear and thermal power
stations, where it works at high pressure and high temperature. The technological
parameters continue to increase in order to improve thermal efficiency. According to the
L. Yu S. Yu W. Yang (&)
Lanzhou University of Technology, Lanzhou, China
e-mail: [email protected]
L. Yu
e-mail: [email protected]
S. Yu
e-mail: [email protected]
The global parameters of the sealing performance of the GVTP are the design
variables, and they are so numerous that optimization by general computation is
hard. Sensitivity analysis is the method that judges how strongly the target
function responds to a disturbance of a design variable. The target function to
optimize can be chosen from the FEM results. To enhance the sealing
performance, the total deformation on the sealing surface of the GVTP is taken
as the target function, and the convergence direction approaches the minimum
where V(Y) is the total variance of the output variable Y, Vi is the main
(first-order) contribution of xi, and the remaining terms measure the response
degree of interactions. The sensitivity indices can be computed using a Monte
Carlo method (Homma and Saltelli 1996). The principle is to randomly generate
samples of the parameters within their permissible ranges and to estimate V(Y),
Vi and V−i as follows (Fesanghary et al. 2009):
V_i = (1/N) Σ_{j=1}^{N} Y^(j)·Y_i′^(j) − f0²    (51.2)

V_−i = (1/N) Σ_{j=1}^{N} Y^(j)·Y_−i′^(j) − f0²    (51.3)
And finally

S_i = V_i/V(Y)    (51.4)

S_Ti = 1 − V_−i/V(Y)    (51.5)
where S_i is the first-order sensitivity of the ith parameter, V_−i is the
variance contributed by all parameters except i, f0 is the mean of Y, and N is
the number of samples. Taking the displacement S as the objective function, the
sensitivity response can be obtained to optimize the global parameters of the
valve and improve the sealing performance.
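The estimators (51.2)–(51.5) can be sketched as follows. This is a generic Homma–Saltelli Monte Carlo implementation applied to a toy function with known indices, not the valve FEM model of this chapter.

```python
import numpy as np

def first_order_sobol(f, d, n, rng):
    """First-order Sobol indices by the Monte Carlo estimator of
    Homma and Saltelli (1996): V_i ~= mean(Y * Y_i') - f0**2 and
    S_i = V_i / V(Y). Inputs are assumed independent U(0, 1)."""
    A = rng.random((n, d))
    B = rng.random((n, d))
    yA = f(A)
    f0 = yA.mean()
    vY = yA.var()
    S = np.empty(d)
    for i in range(d):
        ABi = B.copy()
        ABi[:, i] = A[:, i]          # keep x_i from A, resample the rest from B
        Vi = (yA * f(ABi)).mean() - f0 ** 2
        S[i] = Vi / vY
    return S

# Toy test function with a known answer: Y = 2*x1 + x2 with x_j ~ U(0, 1)
# has V(Y) = 5/12 and first-order indices S = (0.8, 0.2).
f = lambda X: 2.0 * X[:, 0] + X[:, 1]
S = first_order_sobol(f, d=2, n=200_000, rng=np.random.default_rng(42))
```

For the valve problem, `f` would be replaced by the FEM response (total deformation on the sealing surface) evaluated at sampled global parameters.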
(2) Sampling of the Monte Carlo method: the basic idea of the Monte Carlo
method is to relax the iteration burden through a finite number of samplings.
When the probability distributions of the state variables xi (i = 1, 2, …, n)
are known and independent, random values of xi are generated according to their
distributions under the limit state condition Z = g(x1, x2, …, xn). The series
xi is substituted into the state function Z = g(xi), yielding an independent
series Z(i). Counting the number M of samples that indicate failure (Z(i) < 1
when Z is a safety factor, or Z(i) < 0 when Z is a safety margin), when the
number of simulations N is large enough the frequency M/N approaches the
failure probability by the law of large numbers, and the failure probability
can be written as Pf = P{Z = g(X1, X2, …, Xn) < 0}. The Monte Carlo method can
accurately fit the probability distribution function G(Z) of Z, and computes
the mean value μz and the standard deviation σz.
where the integral is taken over the definition region of X, I[G(x)] is the
indicator function, x is the basic random vector X = [X1, X2, …, Xn], fX(x) is
the probability density function, and hX(x) is the importance sampling density
function.
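A hedged sketch of the importance-sampling estimator just described, Pf ≈ (1/N) Σ I[G(x) < 0]·fX(x)/hX(x) with x drawn from hX. The one-dimensional limit state below is an invented example (failure when a standard normal variable exceeds 4), not the valve model.

```python
import math
import random

def phi(x, mu=0.0):
    """Normal pdf with unit variance centred at mu."""
    return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2.0 * math.pi)

def failure_probability(n, seed=0):
    """Importance sampling for the toy limit state Z = g(x) = 4 - x with
    x ~ N(0, 1): failure means x > 4, exact answer 1 - Phi(4) ~ 3.17e-5.
    The importance density hX = N(4, 1) is centred on the failure region."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(4.0, 1.0)      # sample from hX
        if 4.0 - x < 0.0:            # indicator I[G(x) < 0]
            total += phi(x) / phi(x, mu=4.0)   # weight fX(x)/hX(x)
    return total / n

pf = failure_probability(100_000)
exact = 0.5 * math.erfc(4.0 / math.sqrt(2.0))  # 1 - Phi(4)
```

Plain Monte Carlo would need on the order of 10^7 samples to see even a handful of failures here; shifting the sampling density onto the failure region is what makes the estimate cheap, which is the same motivation as in the valve reliability analysis.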
The material of the GVTP must be forgeable and creep resistant; ASTM A335 F92,
per Code Case 2179-6, is adopted. So far there is no fully satisfactory
material for thermal cycle systems above 600 °C, and materials for this
temperature range are still in the study and testing period. ASTM A335 F92,
P112 and E911 are widely used in valves of ultra-supercritical power units. F92
has better forging properties and stability at higher temperature than the
other materials, and its smaller thermal expansion coefficient and larger
thermal conductivity reduce the thermal stress between the inside and outside
walls. These materials have better high-temperature creep strength, matching
the thermal stress requirement caused by thermal expansion, than other alloy
steels; the creep strength at high temperature over 10^5 h can reach
90–100 MPa. The application range of ASTM A335 F92 is wider than that of P112
and E911, and it is the material applied to the GVTP in this paper.
Sensitivity analysis can obtain the solution directly from the FEM (Kahraman
2009; Lin 1999), and the Monte Carlo method is used to reduce the number of
sampling points (Melchers and Ahammed 2004; Ahammed and Melchers 2006). The
physical properties of F92 are supplied to the FEM computation. The operating
condition is 33.1 MPa and 560 °C, and the medium is superheated steam flowing
at 12.3 m/s. These initial conditions are applied to the FEM to obtain the
distribution of total deformation on the sealing surface of the GVTP. The
properties of A335 F92 can be seen in Table 51.1.
Most structural optimization methods control deformation; the process minimizes
deformation while the mass of the structure increases (Wang et al. 2009; Pettit
and Wang 2000). The displacement on the sealing surface of the GVTP is regarded
as the deformation and defined as the objective function. The constraint
condition sets the control domain within which the extreme value of the target
function is solved. An improper constraint condition leads to a lengthy solving
process if the range is oversize, or to larger errors if the range is
undersize. The constraint conditions of the design variables are listed in
Table 51.2.
Ten parameters are distributed as Pi (i = 1, 2, …, 10), with the value range
marked in brackets. The sampling distribution has three patterns: N means
normal distribution, A means average (uniform) distribution, and R means random
distribution. The approximation direction of iteration also has three patterns:
I means increment, D means decrement, and R means random direction. The minimum
total deformation is defined as the target function for optimization. With the
Monte Carlo method, the sampling points chosen among the global parameters
decrease from 240 to 156, and the number of iterations is reduced from 1032 to
697, so the efficiency and accuracy of the computation improve markedly. The
sensitivity relationships of the structural parameters can be shown directly in
a bar chart, where the response degrees are compared by bar length: a longer
bar means a stronger effect on the objective function, and a shorter bar means
a lower sensitivity. A positive sensitivity value means the objective function
increases as the structural parameter increases, and a negative value means it
increases as the parameter decreases. The sensitivity relationships of the
structural parameters can be seen in Fig. 51.1.
In Fig. 51.1, 10 global parameters are chosen for the sensitivity analysis. The
absolute sensitivity values are distributed from 0.73 to 0.04, composed of 3
negative and 7 positive values. The response relationships show that P1 is the
most sensitive parameter for the displacement on the sealing surface, with a
sensitivity of −0.87, which means the total deformation increases as P1
decreases. The sensitivities of P5 and P3 are 0.73 and 0.48, smaller than that
of P1. The sensitivity of P4 is 0.04, the minimum of all, which means P4 can be
neglected in the optimization process. Modifying the more sensitive parameters
during design improves the total deformation markedly, so this approach should
be applied first in design.

[Fig. 51.1 Bar chart of the sensitivity responses of the global parameters
(sensitive response from −1.0 to 0.4)]
In Fig. 51.2, the response surface is a three-dimensional region constituted by
P1, P5 and the maximum total deformation on the sealing surface of the GVTP. It
shows that the maximum total deformation grows as P1 and P5 increase. The
deformation reaches its minimum when P1 is 164.5 mm and P5 is 211.4 mm; its
value of 7.7 × 10⁻² mm is below the allowable sealing deformation of
10 × 10⁻² mm, and the maximum total deformation is reduced by 21 %, so this
series of parameters should be taken into account in the design process.

[Fig. 51.2 Response surface of the maximum total deformation (0.075–0.135 mm)
over P1 (150–174 mm) and P5 (192–224 mm)]

The design parameters can interfere with each other and lead to computation
errors, so the change range of the design parameters should be no more than
15 %. Sometimes no extreme value appears in the optimizing process, and then
the method of choosing a noninferior solution in multiobjective optimization is
important.
51.5 Conclusion
The sealing performance of the GVTP is hard to improve because it involves a
large number of global and operating parameters. By sensitivity analysis, the
number of parameters is decreased to 10, and the displacement on the sealing
surface of the GVTP is directly affected by these major parameters. P1, P3 and
P5 have higher sensitivity than the others, and correcting these parameters can
enhance the sealing performance. The number of samplings is decreased by 35 %
and the maximum total deformation is reduced by 21 %.
Usually, the way to improve the sealing performance of the GVTP is to
strengthen the local region around the leaking point. Using sensitivity
analysis, the optimization process gains higher pertinence and the number of
design variables is reduced markedly. By improving the global parameters of
higher sensitivity that lead to leaking, the sealing performance of the GVTP is
optimized and the safety of the power system is ensured.
References
Ahammed M, Melchers RE (2006) Gradient and parameter sensitivity estimation for systems
evaluated using Monte-Carlo analysis. Reliab Eng Syst Saf 91(10):594–601
Alegre JM, Preciado M, Ferren D (2007) Study of the fatigue failure of an anti-return valve of a
high pressure machine. Eng Fail Anal 14(2):408–413
Chan K, Tarantola S, Saltelli A, Sobol IM (2000) Variance-based methods. In: Saltelli A, Chan
K, Scott EM (eds) Sensitivity analysis. Wiley, New York, pp 167–197
Chen ZB, Li GQ (2007) Low cyclic fatigue life study for the intermediate pressure main steam
valve housing in the power station. Heat Treat Met 32(12):122–127
Fesanghary M, Damangir E, Soleimani I (2009) Design optimization of shell and tube heat
exchangers using global sensitivity analysis and harmony search algorithm. Appl Therm Eng
29(8):1026–1031
Homma T, Saltelli A (1996) Importance measures in global sensitivity analysis of nonlinear
models. Reliab Eng Syst Saf 52:1–17
Kahraman A (2009) Natural model of planetary gear trains. J Sound Vib 173:125–130
Lin J, Parker RG (1999) Analytical characterization of the unique properties of planetary gear free
vibration. J Vib Acoust 228(1):100–128
Lu HC (2007) The development of super-critical and ultra super-critical thermal power plant
valve. Valve 36(3):1–4
Melchers RE, Ahammed MA (2004) Fast approximate method for parameter sensitivity
estimation in Monte-Carlo reliability. Comput Struct 82(6):55–61
51 Sensitivity Analysis on Optimized Sampling 507
Pettit RG, Wang JJ, Toh C (2000) Validated feasibility study of integrally stiffened metallic
fuselage panels for reducing manufacturing costs. Langley Research Center, Hampton
Saltelli A, Tarantola S, Campolongo F (2009) Sensitivity analysis as an ingredient of modeling.
Stat Sci 15(4):377–395
Su C, Li PF, Han DJ (2009) Importance sampling Monte-Carlo method based on Neumann
expansion response surface techniques. Eng Mech 26(12):1–11
Tao ZL, Cai DS, Yan CL (2009) Three dimensional numerical simulation and experimental study
on fluid field of control valve in power station. J Eng Thermophys 24(12):63–69
Wang XM, Liu ZY, Guo DM (2009) Sensitivity analysis of topology optimization micro flexible
mechanical structure by homogenization method. Mech Eng China 10(11):1264–1267
Wang AL, Wu XF et al (2010) Multidisciplinary optimization of a hydraulic slide valve based on
CFD. J Shanghai Jiaotong Univ 44(6):1767–1772
Xiang XW, Mao JR, Sun B (2006) Numerical investigation of flow characteristic of control valve
of steam turbine in the entire range of operating mode. J Xi’an Jiaotong Univ 40(7):290–296
Yu L, Yu SR (2007) Sensitivity analysis on performance of sealing pressure in triple offset
butterfly valves. Fluid Mach 32(3):163–165
Zhang Y, Zhu LS, Zhang YM (2010) Reliability based sensitivity analysis of mechanical strength
via Quasi-Monte Carlo Method. J Northeast Univ 31(11):1594–1598
Zhu QG, Chuan GG (2010) Numerical simulation on unsteady flow field in the main stop and
control valve system of a 1000 MW ultra-supercritical steam turbine. J Chin Soc Power Eng
30(8):743–749
Zhu GY, Lei LJ (2011) Water cone valve pilot valve port CFD flow field analysis and structure
optimization. Sci Technol Eng 11(2):591–598
Chapter 52
Study on Application of Logistic Curve
Fitting and Forecast from Inbound
Tourist Market
Wei-qi Tan
Abstract The rise of the logistic curve basically coincides with the
development and growth of a tourist market, so using the logistic growth curve
to fit and forecast the developing trend of the tourist market has great
advantages in improving forecast precision. A calculation method for the value
of k is proposed in this paper according to the symmetry characteristics of the
curve. Based on this, the parameter values a and r can be calculated and the
logistic curve model of China's inbound market can be fitted using general
curve regression. By the χ² test, the fitted logistic regression curve meets
the requirements, so logistic curve regression can be used as a general method
for forecasting the tourist market.
52.1 Introduction
W. Tan (&)
Anqing Vocational and Technical College, Anqing, Anhui, China
e-mail: [email protected]
and marketing (Huang et al. 2009). Under the influences and restrictions of
growth rate, environment and time, growth along the logistic curve is slow at
the beginning; once it passes the critical point it grows rapidly, but never
indefinitely. As saturation approaches, the growth speed slows down because of
intense competition for resources. The growth curve is a slightly elongated
"S", as shown in Fig. 52.1. To forecast future changes based on time series,
various quantitative methods can be used. According to their mathematical
principles there are two kinds: autoregressive conditional heteroskedasticity
models and stochastic volatility models. But both are naturally limited when
predicting a tourist market with greater volatility. This paper introduces
logistic curve fitting and forecasting of the tourist market in order to
improve forecast quality (Wand et al. 2009).
Logistic curves are derived from the study process of population growth. They
have been widely used in the field of natural science and social science. The curve
equations can be shown as follows:
k
y¼ k [ 0; a 2 R; r [ 0 ð52:1Þ
1 þ aert
Among the letters, k represents the environment or resource limitation, a is an
undetermined constant, r for growth and t for time. It has revealed the law of changes
in growth rate, environmental restrictions and time. The similar relationship
between them widely exists in natural science and social science. The curve has been
widely used in the field of natural science and social science. Nowadays, the logistic
regression is a commonly used analysis and forecast method in social science.
The logistic curve model is applied to fit and forecast the population growth,
economic development and life cycle of tourism areas, which has significant
effects (see Hu 2011; Huang et al. 2011; Li 1993; Yang 2008, 2009).
Based on the study of logistic curve equations, a Canadian scholar Butler
combined them with changes of tourist destinations and put forward the theory of
life cycle of destination in 1980. He divided the tourist destinations into 5 different
growth stages: the exploration stage, the involvement stage, the development
stage, the consolidation stage and the stagnation stage. From then on, many
scholars researched the theory of the destination life cycle. Although there
are different views on it, scholars all agree that the law of the life cycle
exists in developing tourism products and tourist areas. This law is of
important theoretical
and practical significance in forecasting the tourist market. Because China's
tourist market is still growing, the forecast trend rises sharply when an
ordinary regression analysis is used, which cannot reflect the medium-term and
long-term trend of tourist market development. Logistic curve equations are
built on the trend of growth, so they can overcome this defect and are of great
significance for predicting the tourist market (Zhang and Xue 2009).
There are exponential function curves, logarithm function curves, power
function curves, hyperbolic function curves, sigmoid curves, and so on. They
can be used to fit curves, analyze and predict according to the scatter plots.
Scholars have conducted extensive research on logistic curve fitting. Several
methods are used to determine the parameters: the search method, the
Gauss–Newton method, the improved Gauss–Newton–Marquardt method, and so on.
Since the coupling between the parameters cannot be avoided in solving, which
requires alternate iterations, the selection of the initial iteration value is
always the key to success in the solving process (Huang et al. 2011). The
determination of the parameters of the logistic curve equation is closely
connected with the characteristics of the curve. We need to study the
characteristics of logistic curve equations to find more scientific and
reasonable
methods for parameter estimation. Besides the logistic curve shown in Eq. (52.1),
there is another curve equation (52.2):
y = k/(1 + e^(a−rt)),  k > 0, a ∈ R, r > 0    (52.2)

Among these, k represents the saturation level and r the growth factor. The e^a
in formula (52.2) can be regarded as the a in formula (52.1); in this way,
(52.1) and (52.2) coincide in essence. This article uses the curve shown in
formula (52.1).
1. Asymptotic lines

lim_{t→−∞} k/(1 + a·e^(−rt)) = 0,  lim_{t→+∞} k/(1 + a·e^(−rt)) = k

so the logistic curve has two asymptotic lines: y = 0 and y = k (Jin 2010).
2. Monotonicity
The speed function of the growth process is the first-order derivative of the
logistic curve:

y′ = k·a·r·e^(−rt)/(1 + a·e^(−rt))² > 0

so the logistic curve is monotonically increasing.
3. Three key points
The second-order derivative of the logistic curve is

y″ = k·a·r²·e^(−rt)·(a·e^(−rt) − 1)/(1 + a·e^(−rt))³

Setting it equal to 0 gives t = ln a/r. When t < ln a/r, y″ > 0 and the curve
is concave; when t > ln a/r, y″ < 0 and the curve bulges. The third-order
derivative of the logistic curve is

y‴ = k·a·r³·e^(−rt)·(1 − 4a·e^(−rt) + a²·e^(−2rt))/(1 + a·e^(−rt))⁴

Setting it equal to 0 gives 1 − 4a·e^(−rt) + a²·e^(−2rt) = 0. Solving this
equation gives t₁ = (ln a − 1.317)/r and t₃ = (ln a + 1.317)/r. In this way,
there are three key points on the logistic curve:

t₁ = (ln a − 1.317)/r,  t₂ = ln a/r,  t₃ = (ln a + 1.317)/r
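The key points can be checked numerically. The sketch below uses arbitrary illustration parameters (not the fitted values of this paper) and finite differences: at t = ln a/r the curve passes through k/2 and its curvature vanishes.

```python
import math

# Numerical check of the inflection point of y(t) = k / (1 + a*exp(-r*t)):
# at t = ln(a)/r we expect y = k/2 and y'' = 0. Parameter values are
# arbitrary illustration values.
k, a, r = 14.0, 45.0, 0.19

def y(t):
    return k / (1.0 + a * math.exp(-r * t))

def second_deriv(t, h=1e-4):
    """Central finite-difference approximation of y''."""
    return (y(t + h) - 2.0 * y(t) + y(t - h)) / h ** 2

t_mid = math.log(a) / r
half = y(t_mid)              # should equal k/2 exactly
curv = second_deriv(t_mid)   # should be ~0 at the inflection
```

The same check applied at t₁ and t₃ (with a third-order difference) would show y‴ changing sign, confirming the two remaining key points.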
Curve fitting refers to the process of conducting curve regression analysis
with two-variable data to obtain a significant curve equation. Usually there
are three steps: (1) choose an appropriate curve type according to the exact
relationship between variables X and Y; (2) for the selected curve type, fit a
linear regression equation by the least squares method after linearization, and
test its significance; (3) convert the linear regression equation into the
corresponding curvilinear regression equation and make inferences on the
related statistical parameters. The logistic curve has three parameters k, r
and a, so general curve fitting cannot determine the parameters directly
(Yang 2008). Taking three observations y₁, y₂, y₃ at times t₁, t₂, t₃ gives
y₁ = k/(1 + a·e^(−rt₁))
y₂ = k/(1 + a·e^(−rt₂))
y₃ = k/(1 + a·e^(−rt₃))
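Formula (52.3) for k is cited later but does not appear in this excerpt. One standard closed form follows from the three equations above when t₁, t₂, t₃ are equally spaced, because the reciprocals 1/yᵢ − 1/k then form a geometric sequence; treat the reconstruction below as an assumption, not necessarily the paper's exact (52.3).

```python
import math

def k_from_three_points(y1, y2, y3):
    """Saturation level k of y = k/(1 + a*exp(-r*t)) from three observations
    at equally spaced times t1, t2 = t1 + h, t3 = t1 + 2h.

    With u_i = 1/y_i, the quantities u_i - 1/k form a geometric sequence,
    so (u2 - 1/k)^2 = (u1 - 1/k)(u3 - 1/k), which solves to the closed
    form below. This is a standard identity, assumed here in place of the
    paper's unshown Eq. (52.3).
    """
    u1, u2, u3 = 1.0 / y1, 1.0 / y2, 1.0 / y3
    return (u1 + u3 - 2.0 * u2) / (u1 * u3 - u2 ** 2)

# Check on synthetic data generated from known parameters (the values
# match the magnitudes reported later in this chapter, used only as a test):
k, a, r = 14.257, 45.6955, 0.189
ys = [k / (1.0 + a * math.exp(-r * t)) for t in (16, 17, 18)]
k_hat = k_from_three_points(*ys)
```

On exact logistic data the recovery is exact up to floating-point error; on noisy observations the choice of the three points matters, which is why the paper picks them using the symmetry of the curve.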
Then, linearizing with y′ = ln(k/y − 1), the regression relationship becomes

ŷ′ = ln a − r·t
Parameters a and r can be determined by Eq. (52.5), in which

r = −SP_y′t/SS_t,  ln a = ȳ′ + r·t̄,  a = e^(ln a)    (52.5)

where

SS_t = Σ(tᵢ − t̄)² = Σtᵢ² − (1/n)(Σtᵢ)²,
SP_y′t = Σ(tᵢ − t̄)(y′ᵢ − ȳ′) = Σtᵢy′ᵢ − (1/n)(Σtᵢ)(Σy′ᵢ),
t̄ = (1/n)Σtᵢ,  ȳ′ = (1/n)Σy′ᵢ
Since the logistic regression curve still contains the parameter k besides the
regression parameters a and r, the testing method for goodness of fit needs
discussion (Lie and Zeng 2002). The coefficient of determination is

R² = 1 − Σᵢ(y′ᵢ − ŷ′ᵢ)²/Σᵢ(y′ᵢ − ȳ′)² = 1 − SSE/SST = 1/(1 + SSE/SSR)

where SST is the total sum of squares, SSE the residual sum of squares and SSR
the regression sum of squares, with sampling distributions SSR/σ² ~ χ²(1) and
SSE/σ² ~ χ²(n − 2). We use the χ²(n − 1) distribution for the goodness-of-fit
test: looking up the χ² table, if with α = 0.05 we have
χ² < χ²₀.₀₅(n − 1), the equation
Table 52.1 China's inbound tourist numbers from 1978 to 2010 (unit: one hundred million)
Year 1978 1979 1980 1981 1982 1983 1984 1985
t 1 2 3 4 5 6 7 8
Inbound tourist numbers 0.18092 0.42039 0.57025 0.77671 0.79243 0.9477 1.28522 1.78331
Year 1986 1987 1988 1989 1990 1991 1992 1993
t 9 10 11 12 13 14 15 16
Inbound tourist numbers 2.28195 2.69023 3.16948 2.45014 2.74618 3.33498 3.81149 4.15269
Year 1994 1995 1996 1997 1998 1999 2000 2001
t 17 18 19 20 21 22 23 24
Inbound tourist numbers 4.36845 4.63865 5.11275 5.75879 6.34784 7.27956 8.34439 8.90129
Year 2002 2003 2004 2005 2006 2007 2008 2009
t 25 26 27 28 29 30 31 32
Inbound tourist numbers 9.79083 9.16621 10.90382 12.02923 12.49421 13.18733 13.00274 12.64759
Year 2010
t 33
Inbound tourist numbers 13.37622
Source National Bureau of Statistics of China & National Tourism Administration of the People's Republic of China
performs well in fitting, which means the expected values mainly agree with the
actual values. The χ² statistic is

χ² = Σ(yᵢ − ŷᵢ)²/ŷᵢ    (52.6)
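The statistic (52.6) is straightforward to compute; the observed and fitted values below are invented for illustration, not taken from Table 52.1.

```python
# Chi-square goodness-of-fit statistic of Eq. (52.6):
# chi2 = sum((y_i - yhat_i)**2 / yhat_i) over all observations.
def chi_square_stat(y_obs, y_fit):
    return sum((o - f) ** 2 / f for o, f in zip(y_obs, y_fit))

y_obs = [0.18, 0.42, 0.57, 0.78]   # hypothetical observed values
y_fit = [0.20, 0.40, 0.60, 0.75]   # hypothetical fitted values
stat = chi_square_stat(y_obs, y_fit)
```

The computed statistic is then compared against the tabulated critical value χ²₀.₀₅(n − 1); a smaller statistic means the fitted curve is acceptable.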
The data in Table 52.1 were collected from the original data provided online by
the National Bureau of Statistics of China and the National Tourism
Administration of the P.R.C.
Substituting the data at t = 16, 17 and 18 and calculating k according to
formula (52.3) gives k = 14.257.
Using MATLAB's dfittool, we get r = 0.189 and ln a = 3.822. The goodness-of-fit
measures (SSE: 3.183, R-square: 0.9711, adjusted R-square: 0.9701) show that
the line fits well. From ln a = 3.822 we have a = 45.6955.
And we obtain the logistic regression curve equation of China's inbound
tourism:

ŷ = 14.257/(1 + 45.6955·e^(−0.189t))    (52.7)
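The linearized estimation of r and ln a described above (with k known) can be sketched as follows, on synthetic data generated from assumed parameters rather than the Table 52.1 series; on exact logistic data the procedure recovers the parameters.

```python
import math

# With k known, y' = ln(k/y - 1) is linear in t (y' = ln(a) - r*t), so r
# and ln(a) follow from ordinary least squares, as in Eq. (52.5). The
# parameter values here are assumed for the demonstration.
k_true, a_true, r_true = 14.257, 45.6955, 0.189
ts = list(range(1, 34))
ys = [k_true / (1.0 + a_true * math.exp(-r_true * t)) for t in ts]

yp = [math.log(k_true / y - 1.0) for y in ys]   # linearized response y'
n = len(ts)
t_bar = sum(ts) / n
y_bar = sum(yp) / n
SSt = sum((t - t_bar) ** 2 for t in ts)
SPyt = sum((t - t_bar) * (v - y_bar) for t, v in zip(ts, yp))

r_hat = -SPyt / SSt                 # slope of y' on t is -r
ln_a_hat = y_bar + r_hat * t_bar    # intercept gives ln(a)
a_hat = math.exp(ln_a_hat)
```

On the real inbound-tourism data the fit is not exact, which is why the chapter reports SSE and R-square and applies the χ² test of (52.6) before accepting the curve.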
52.6 Conclusion
The rise of the logistic curve basically coincides with the development and
growth of the tourist market, and using the logistic growth curve to fit and
forecast the developing trend of the tourist market has great advantages in
improving forecast precision. The growth pattern of China's inbound tourist
market follows the logistic curve equation, which can therefore be used to fit
and forecast it. There are many kinds of tourist markets in China; logistic
curve equations can be considered for regression forecasting provided the
growth scatter plots are similar to sigmoid curves.
References
Hu X-h (2011) Parameters estimation for logistic curve and application. Math Theory Appl
31(04):32–36 (in Chinese)
Huang Y, He P, Li Z (2009) Empirical study on product life cycle based on logistic model.
Jiangsu Univ Sci Technol (Soc Sci Edn) 9(04):51–55 (in Chinese)
Huang H, Ma F, Ma Y (2011) Logistic curve model for regional economy medium-term and
long-term forecast. J Wuhan Univ Technol (Inf Manage Eng) 13(01):94–97 (in Chinese)
Li Z-w (1993) Foundation and application of gray logic curve model. Math Pract Theory
23(01):49–52 (in Chinese)
Lie Q-j, Zeng Q (2002) Logistic regression model and its research progress. J Prev Med Inf
18(05):417–420 (in Chinese)
Jin Q-s (2010) The foundation of higher vocational mathematics. Jilin University Press, p 79 (in
Chinese)
Wand J-w, Han Y-q, Wand H-z (2009) Application of ecological mathematical models to
population predication—taking Daqing for example. Saf Environ Eng 16(07):30–35 (in
Chinese)
Wu J, Huang Z-f (2004) Study on the application of logistic curve simulating tourism destination
lifecycle. Geogr Geo-Inf Sci 20(05):91–95
Yang L-F (2008) A study on the law of Chinese internet-user growth based on logistic curve.
J Xiamen Univ Technol 17(04):86–89, 108 (in Chinese)
Yang C-F (2009) Construction of tourism destination state forecasting model and case study. Res
Sci 6:1015–1021 (in Chinese)
Zhang X-M, Xue D (2009) A comparative study on two mathematical models of tourism area life
cycle. Tour Sci 29(04):6–13 (in Chinese)
Chapter 53
Study on Decision Mechanism Choosing
by Cost Model for Projectized
Organization
Hua-ming Zhang
Abstract The cost of making decisions in an enterprise differs under different
decision mechanisms, so choosing a decision mechanism is very important. A
model for calculating the cost of a projectized organization is designed in
this article, with the organization's information and the relationships among
projects as parameters. The cost of the projectized organization is analyzed
under different decision mechanisms, and on this basis some tactics are
advanced for the projectized organization to choose its decision mechanism.
Keywords Decision mechanism · Organization structure · Projectized organization
53.1 Introduction
H. Zhang (&)
School of Economics and Management, Nanjing Forestry University, Nanjing, China
e-mail: [email protected]
53.2 Modelling
When building the model, we hypothesize that the enterprise decides its work
capacity so as to minimize expected cost. The objective of building the model
is to analyze how the decision mechanism affects expected cost and which
mechanism suits a projectized organization, so that advice can be drawn for
enterprises choosing a decision mechanism.
For convenience of study, the number of project departments in the enterprise
is represented by the parameter n. The projectized organization is described in
Fig. 53.1 (Lian and Fu 2007). In this kind of organization there may be
functional departments in the enterprise, but they cannot affect projects, and
the executive of the enterprise directly controls the project managers.
y_i^DH = k + t·w_i   (53.3)

Substituting (53.3) into the cost function (53.2) gives the minimum value of the cost function. The decision criterion of the separation mechanism is

y_i^DH = Q^DH·w_i^DH / (B + D) · 1/(n − 1)   (53.4)

The minimum of the expected cost is

C^DH = C̄ − Q^DH·σ_a0² / (B + D) · n / (2(n − 1))   (53.5)

The parameter Q^DH is defined as Q^DH = σ_a0² / (σ_a0² + σ_e²).
The decision rule of the assimilation mechanism is

y^AI = Q^AI·w^AI / (2B) · 1/(n − 1)

The minimum value of the expected cost function is

C = C̄ − Q^AI·σ_a² / (2B) · n / (2(n − 1))   (53.6)

The parameter Q^AI in the equation is defined as Q^AI = σ_a² / (σ_a² + σ_e²).
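As a numerical illustration, the minimum-cost expressions (53.5) and (53.6) can be sketched in Python (this follows our reading of the extracted formulas; the variance and parameter values below are hypothetical, chosen only to show the mechanics, and the function names are ours):

```python
def q_factor(sigma2, sigma_e2):
    """Information-processing accuracy Q = sigma^2 / (sigma^2 + sigma_e^2)."""
    return sigma2 / (sigma2 + sigma_e2)

def min_cost_separation(c_bar, sigma_a0_2, sigma_e2, B, D, n):
    """Minimum expected cost of the separation mechanism, Eq. (53.5):
    C = C_bar - Q * sigma_a0^2 / (B + D) * n / (2*(n - 1))."""
    Q = q_factor(sigma_a0_2, sigma_e2)
    return c_bar - Q * sigma_a0_2 / (B + D) * n / (2 * (n - 1))

def min_cost_assimilation(c_bar, sigma_a2, sigma_e2, B, n):
    """Minimum expected cost of the assimilation mechanism, Eq. (53.6):
    C = C_bar - Q * sigma_a^2 / (2B) * n / (2*(n - 1))."""
    Q = q_factor(sigma_a2, sigma_e2)
    return c_bar - Q * sigma_a2 / (2 * B) * n / (2 * (n - 1))
```

Because n/(2(n − 1)) shrinks as n grows, both costs rise toward C̄ as the number of project departments increases, which matches the discussion later in the chapter.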
In the horizontal mechanism the information considered by the project departments is

w_i^HH = a + e_0 + a_i + e_i

The minimum value of the expected cost function is

C = C̄ − Q^HH·(σ_a² + σ_a0²) / [(B + D) + (B − D)·Q^HH·σ_a²/(σ_a² + σ_a0²)] · n / (2(n − 1))   (53.7)

The decision rule is

y^HH = Q^HH·w_i^HH / [(B + D) + (B − D)·Q^HH·σ_a²/(σ_a² + σ_a0²)] · 1/(n − 1)

where

Q^HH = (σ_a² + σ_a0²) / (σ_a² + σ_a0² + σ_e²)
2
The parameter Q^HH reflects the accuracy of information processing: a bigger value indicates a lower minimum expected enterprise cost. The cost is also related to the sum of the system oscillation and the individual oscillation, and to the parameters B, D and n; it increases with the decrease of σ_a² + σ_a0² and with the increase of n, B + D and B − D. The horizontal mechanism is suitable when both the system oscillation and the individual oscillation are very large. When the individual oscillation is very small, its cost approximates that of the assimilation mechanism.
In the decentralization mechanism the project departments consider different system information and different predicted individual oscillations. The sum of system information and individual oscillation is defined as

w_i^DI = a + a_i + e_i

The minimum value of the expected cost function is

C = C̄ − Q^DI·(σ_a² + σ_a0²) / [(B + D) + (B − D)·Q^DI·σ_a²/(σ_a² + σ_a0²)] · n / (2(n − 1))   (53.8)

The decision rule is

y^DI = Q^DI·w_i^DI / [(B + D) + (B − D)·Q^DI·σ_a²/(σ_a² + σ_a0²)] · 1/(n − 1)

where

Q^DI = (σ_a² + σ_a0²) / (σ_a² + σ_a0² + σ_e²)
The parameter Q^DI reflects the accuracy of information processing: a bigger value indicates a lower minimum expected enterprise cost. The cost is also related to the sum of the system oscillation and the individual oscillation, and to the parameters n, B and D; it increases with the decrease of σ_a² + σ_a0² and with the increase of n, B + D and B − D.
In the dissimilation mechanism the departments consider w_i^D, defined as w_i^D = a + e_i. The minimum value of the expected cost function is

C = C̄ − Q^D·σ_a² / [(B + D) + (B − D)·Q^D] · n / (2(n − 1))   (53.9)

The decision rule is

y^D = Q^D·w_i^D / [(B + D) + (B − D)·Q^D] · 1/(n − 1)

The parameter Q^D in the equation is defined as

Q^D = σ_a² / (σ_a² + σ_e²)
The parameter Q^D reflects the accuracy of information processing: a bigger value indicates a lower minimum expected enterprise cost. The cost is also related to the sum of the system oscillation and the individual oscillation, and to the parameters B and D; it increases with the decrease of σ_a² and with the increase of B + D and B − D. The dissimilation mechanism is therefore suitable when the system oscillation is very large.
From the results derived from the model, the following conclusions can be drawn.
When the number of projects in the enterprise is not large, the appropriate decision mechanism depends on the proportion of individual oscillation or system oscillation in the total information.
In fields with a rapidly changing environment the system oscillation will be large. If the individual oscillation in the project departments is very small and can be neglected, the four decision mechanisms above are equally efficient. But the horizontal and decentralization mechanisms process more information than the assimilation and dissimilation mechanisms, so, considering the cost of processing information, it is proper to choose the assimilation or dissimilation mechanism.
In fields with mature technology and a stable environment, the system oscillation is relatively small and the individual information relatively large. The costs of the assimilation, dissimilation, horizontal and decentralization mechanisms are then equal, while the cost of the separation mechanism is the lowest, so it is more appropriate than the other mechanisms.
If the system oscillation and the individual oscillation are both very large and cannot be neglected, the horizontal and decentralization mechanisms process more information than the other mechanisms, so they are more efficient. They also differ from each other: if the competition factor among projects is larger than the coordination factor, the decentralization mechanism is more efficient; otherwise the horizontal mechanism is more efficient.
As the number of projects in the enterprise increases, the competition among projects increases rapidly while the coordination among them decreases rapidly. If the number grows to a certain degree, the competition factor becomes far larger than the coordination factor and the coordination can be neglected. The cost function of the enterprise then increases with the number of projects; if the number is large, the expected cost is large. In the extreme case, where the number is infinite, the minimum expected cost approaches C̄, and the projectized organization is no longer efficient. The reason is that when the number of projects is small, the competition among them is small and can be resolved by the project managers; as the number grows, the project managers can no longer resolve the competition by themselves, and it becomes necessary to add new departments, namely functional departments, to manage the resources that all projects compete for.
53.5 Conclusion
In this paper a cost model is established whose parameters capture the competition relationship, the coordination relationship and the number of projects in the enterprise. The cost of the projectized organization is analyzed under different decision mechanisms, and a selection strategy for choosing the decision mechanism of a projectized organization is proposed. It is thus feasible to analyze the cost of a projectized organization quantitatively. However, it is not enough to analyze an organization only from the perspective of cost; operating efficiency and performance should also be analyzed. This work remains to be studied further.
References
54.1 Introduction
As the subjects of economic behavior in the securities market, investors make investment decisions in accordance with the information they obtain. What they pursue is the maximum return at a fixed risk, or the minimum risk at a fixed return. However, due to information asymmetry in reality, most investors follow others in making investment decisions, which leads to herd behavior. When most investors buy or sell the same stock at the same time, they greatly aggravate the fluctuation of the stock price, cause the stock price to be overestimated or underestimated, and finally affect corporate investment behavior. This influence can be divided into the following two situations.
54 Study on Impact of Investor’s Herd Behavior 529
On the one hand, in order to maximize the fundamental value and pursue the goal of becoming better and bigger, the corporate operator needs to increase investment and implement projects with positive net present value. The buyer's herd behavior means that most investors buy a large amount of the same stock during the same period, which indicates that most investors are optimistic about the development of the corporation. This results in a rise of the stock price, which deviates from its intrinsic value, and accordingly eases financing constraints and financing costs, providing an important guarantee in terms of financing for the smooth implementation of projects with positive net present value. Therefore, the corporate operator will invest in more projects with positive net present value and expand the scale of the corporation. On the contrary, the seller's herd behavior means that most investors sell the same stock during the same period, which indicates that most investors are not optimistic about the development of the corporation. This causes the stock price to fall, deviating from its intrinsic value, and accordingly increases financing constraints and financing costs, hampering the implementation of projects with positive net present value. Therefore, when making investment decisions the corporate operator will give up some projects with positive net present value to avoid getting into financial difficulty.
On the other hand, the corporate operator will take corresponding measures to maximize the stock price according to its fluctuations. The buyer's herd behavior gives rise to an overestimated stock price, which is consistent with the goal of the corporate operator; in order to maintain the high stock price, the operator will raise the level of investment to cater to the majority of investors' expectations. The seller's herd behavior leads to an underestimated stock price, which goes against the operator's purpose; in order to get through the worse situation, the operator will consider share repurchases at an appropriate time. Share repurchases require a large amount of cash, which imposes restrictions on corporate investment.
Therefore, the paper proposes the following hypotheses based on the above
analysis:
Hypothesis 1: the higher the degree of buyer’s herd behavior is, the higher the
level of corporate investment is.
Hypothesis 2: the higher the degree of seller’s herd behavior is, the lower the
level of corporate investment is.
Considering the availability of data, we use open-end funds as the representative of investors. As the main force among investors, the open-end funds' herd behavior can be used to measure investors' herd behavior for the empirical purposes of this paper. The original data on investors' herd behavior are the open-end funds' shareholding details from annual reports. We compare the number of shares of stock i held by the open-end funds in the period-t annual report with the number of shares of the same stock held in the period t−1 annual report. If the number of shares held increased compared with the last period, we classify the open-end fund as a net buyer of stock i in period t; on the contrary, we classify it as a net seller. Further, by counting we obtain the numbers of investors who are net buyers or net sellers of stock i in period t. To guarantee validity, we eliminate abnormal samples in which the sum of net buyers and net sellers is less than five or the number of net buyers or net sellers is zero. Then we exclude ST companies, PT companies, and companies whose net assets are negative or whose financial data are missing. At last, we obtain 3,049 observations.
All the data comes from the CSMAR database.
Following the LSV model (Lakonishok et al. 1992), the herd behavior degree is measured as

HM_t = |P_{i,t} − E(P_{i,t})| − AF_{i,t},   (54.1)

where HM_t represents the herd behavior degree of investors, P_{i,t} is the proportion of the number of investors who are net buyers of stock i to the number of investors trading stock i, E(P_{i,t}) is the expected value of P_{i,t}, and AF_{i,t} is an adjustment factor.
The LSV model calculation steps are as follows:
Firstly, the calculation formula of Pi,t is as follows:
P_{i,t} = B_{i,t} / (B_{i,t} + S_{i,t}),   (54.2)

where B_{i,t} is the number of investors who are net buyers of stock i during period t, and S_{i,t} is the number of investors who are net sellers of stock i during period t.
Secondly, following Lakonishok et al. (1992), we use the proportion of all stock trades that are purchases by investors during period t. The specific calculation formula is as follows:

P_t = Σ_{i=1}^{N_t} B_{i,t} / ( Σ_{i=1}^{N_t} B_{i,t} + Σ_{i=1}^{N_t} S_{i,t} ),   (54.3)
Thirdly, the adjustment factor AF_{i,t} = E|P_{i,t} − P_t| (54.4) is calculated under the null hypothesis that B_{i,t} follows a binomial distribution with parameter P_t. Thus,

P(B_{i,t} = k) = C_{N_{i,t}}^{k} · P_t^{k} · (1 − P_t)^{N_{i,t} − k},   (54.5)

and we obtain AF_{i,t} by substituting Eq. (54.5) into Eq. (54.4).
Fourthly, we calculate the herd behavior degrees of buyers and sellers. Given that the degree of herd behavior differs between buying and selling stocks, its impact on corporate investment will also differ. Therefore, following the ''buy herding measure'' and ''sell herding measure'' proposed by Wermers (1999), in this paper HM_t is defined as follows: if HM_t > 0, HM_t represents the buyer's herd behavior; if HM_t < 0, HM_t represents the seller's herd behavior. Values of HM_t significantly different from zero are interpreted as evidence of herd behavior; the greater the absolute value of HM_t, the more serious the investors' herd behavior.
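The four calculation steps above can be sketched in Python (a minimal illustration with made-up net-buyer and net-seller counts; the function name and list layout are our own):

```python
from math import comb

def lsv_herding(B, S):
    """LSV herding measures for a cross-section of stocks in one period.

    B[i], S[i] are the numbers of net buyers and net sellers of stock i
    (B_{i,t} and S_{i,t} in the text)."""
    Pt = sum(B) / (sum(B) + sum(S))      # market-wide buy proportion, Eq. (54.3)
    hm = []
    for b, s in zip(B, S):
        n = b + s
        p = b / n                        # P_{i,t}, Eq. (54.2)
        # adjustment factor E|P_{i,t} - P_t| under the null hypothesis
        # B_{i,t} ~ Binomial(n, P_t), Eqs. (54.4)-(54.5)
        af = sum(comb(n, k) * Pt**k * (1 - Pt)**(n - k) * abs(k / n - Pt)
                 for k in range(n + 1))
        hm.append(abs(p - Pt) - af)      # HM_t, Eq. (54.1)
    return hm
```

Positive values indicate herding in stock i, interpreted as buy-side herding when P_{i,t} > P_t and sell-side herding otherwise, in the spirit of Wermers' split.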
On the basis of previous studies, we select several control variables; their specific definitions are given in Table 54.1.
where I_t is the level of corporate investment, HM_t is the herd behavior of investors, and the others are control variables.
The regression results of model (54.6) are shown in Table 54.2. The coefficient of the investors' herd behavior degree is significantly positive at the 95 % confidence level, suggesting that the higher the degree of buyer's herd behavior is, the higher the level of corporate investment is, and the higher the degree of seller's herd behavior is, the lower the level of corporate investment is. Hypotheses 1 and 2 are thus verified.
As for the regression results of the control variables, the larger the corporate scale is, the higher the level of corporate investment is. This may be because domestic listed corporations, especially large ones, tend to invest more in pursuit of the goal of becoming better and bigger. The greater the previous-period cash flow of the corporation is, the higher the current level of corporate investment is. The initial cash
54.5 Conclusion
The research results show that the buyer's herd behavior significantly improves the level of corporate investment, while the seller's herd behavior has a significant inhibiting effect on it. What's more, the impact of the buyer's herd behavior on corporate investment is greater than that of the seller's.
Acknowledgments This work was supported by the National Social Science Foundation of
China (07AJL005), Program for Changjiang Scholars and Innovative Research Team in Uni-
versity (IRT0916), and Science Fund for Innovative Groups of Natural Science Foundation of
Hunan Province of China (09JJ7002).
References
Baker M (2009) Capital market-driven corporate finance. Annu Rev Financ Econ 1(1):181–205
Baker M, Stein JC, Wurgler J (2003) When does the market matter? Stock prices and investment
of equity-dependent firms. Q J Econ 118(3):969–1005
Chang EC, Cheng JW, Khorana A (2000) An examination of herd behavior in equity markets: an international perspective. J Bank Finance 24(10):1651–1679
Chiang TC, Zheng D (2010) An empirical analysis of herd behavior in global stock markets.
J Bank Finance 34(8):1911–1921
Christie WG, Huang RD (1995) Following the pied piper: do individual returns herd around the
market? Financ Anal J 51(4):31–37
Chun WD, Li XY, Li M (2011) Study on herd behavior in Chinese stock market based on
dispersion model. Forecasting 30(5):25–30 (in Chinese)
Keynes JM (1936) The general theory of employment, interest and money. Macmillan, London,
pp 133–134
Lakonishok J, Shleifer A, Vishny RW (1992) The impact of institutional trading on stock prices.
J Financ Econ 32(1):23–43
Lao P, Singh H (2011) Herding behaviour in the Chinese and Indian stock markets. J Asian Econ
22(6):495–506
Li XF, Li JM (2011) Investors’ individual herd behavior: distribution and degree—a matrix
model based on partitioning clustering algorithm. Stud Int Finance 28(4):77–86 (in Chinese)
Li CW, Yu PK, Yang J (2010) The herding behavior difference between institutional investor and
individual investor. J Financ Res 54(11):77–89 (in Chinese)
Maug E, Naik N (2011) Herding and delegated portfolio management: the impact of relative
performance evaluation on asset allocation. Q J Finance 1(2):265–292
Sias RW (2004) Institutional herding. Rev Financ Stud 17(1):165–206 (Spring)
Wermers R (1999) Mutual fund herding and the impact on stock prices. J Finance 54(2):581–622
Wu FL, Zeng Y, Tang XW (2004) Further analysis of Chinese investment funds’ herding
behavior. Chin J Manag Sci 21(4):7–12 (in Chinese)
Chapter 55
Study on the Evaluation Framework
of Enterprise Informationization
Keywords Application stage · Benefit evaluation · Enterprise informationization (EI) · Evaluation model
55.1 Introduction
Q. Yuan (&)
Department of Finance and Economics, Shandong University of Science and Technology,
Jinan, China
e-mail: [email protected]
S. Yu
College of Economics and Management, Shandong University of Science and Technology,
Qingdao, China
e-mail: [email protected]
Y. Huo
College of Economics and Management, Shandong Women’s University, Jinan, China
e-mail: [email protected]
D. Li
Shandong Chenlong Energy Sources Group Co., Ltd, Tengzhou, China
corporate information resources effectively and improves its own operation and
management and the abilities of research and development, meanwhile strength-
ening its core competitiveness (Ke and Li 2007).
In this process, the evaluation of EI is an important part of informationization construction and application: the quality of the evaluation directly influences the effect of the EI application as well as its development in the following step. However, current work, whether on theory or on application, has not yet formed a mature and practical system, which obstructs the healthy development of informationization work. The purpose of this article is to sort out the evaluation of EI and form constructive ideas that provide a reference for companies preparing or implementing information technology.
Evaluation of EI is not only a guide for building enterprise informationization, but also a scale for measuring its level. The evaluation can guide enterprises to accurately understand the connotation of information technology and clarify its purpose, and also to correctly formulate an informationization strategy and safeguard the implementation of informationization projects. EI evaluation can not only improve the overall quality, sustainable development ability and international competitiveness of enterprises, but also promote local economic and social development. It is obvious that, to improve the efficiency and profitability of the EI process, the evaluation work has come into the spotlight.
The phasing of EI is the preliminary work for EI evaluation (Liu et al. 2004); appropriate phasing of informationization can have a multiplier effect on the smooth development of informationization evaluation. In order to carry out the stage division, we need to choose a phasing basis suitable for EI evaluation under the phasing principles and to grasp the main features of each stage.
Under normal circumstances, the phasing of EI should follow three principles: (a) there should be continuity between the levels, reflecting the gradual process and trend of development; (b) each phase should have significant and easily described characteristics; (c) the identified phase characteristics should reflect changes in the enterprise as early as possible.
Under the three principles above, the writer conducts a systematic analysis of the stage research on enterprise informationization application at home and abroad, and determines the main stage division mainly using the following frameworks: information system input and effect (Nolan); the role of information (Synnott); information technology application (Mische); IT application mode (Boar); and the stages of efficiency, benefit, dispersion and comprehensiveness (Edwards). These serve as references for classifying the informationization evaluation stages (Nolan 1975; Churchill et al. 1969; Synnott 1987; Mische 1995; Edwards et al. 1991).
55.6 Conclusion
References
Churchill NC, Kempster JH, Uretsky M (1969) Computer based information systems for management: a survey. National Association of Accountants, New York
Edwards C, Ward J, Bytheway AJ (1991) The essence of information systems. Prentice Hall International (UK) Ltd, Prentice Hall, pp 28–31
Ke J, Li C (2007) The investigation of enterprise informationization performance appraisal. Intelligence Magazine 10:30–35 (in Chinese)
Liu Y, Hao W, Wei L (2004) The development patterns of the enterprise information stage and the stage characteristic analysis. Sci Technol Manag Res 2:101–103 (in Chinese)
Mische MA (1995) Transnational architecture: a reengineering approach. Inf Manag 12(1):98–100
Nolan RL (1975) Thoughts about the fifth stage. Database 7(2):4–10
Synnott WRL (1987) The information weapon: winning customers and market with technology. Wiley, New York, pp 88–93
Wang K, Liu B (2007) Stage evolution and index setting of enterprise informationization benefit evaluation. Intelligence Magazine 11:29–31 (in Chinese)
Wen C (2010) Informationization impact mechanism to enterprise competitive capability and the strategic choice. Doctoral thesis, Jilin University (in Chinese)
Chapter 56
Supplier Selection Model Based
on the Interval Grey Number
Z. Zhang (&) S. Wu
Beijing Modern Logistics Research Center, Beijing Wuzi University,
Beijing, People’s Republic of China
e-mail: [email protected]
S. Wu
e-mail: [email protected]
56.1 Introduction
Four evaluation models for supplier selection feature prominently in the literature:
LW models, the total cost models, the mathematical programming methods and
the grey correlation models. Each of these is introduced below.
(1) Linear-weighting models: LW models evaluate potential suppliers using
several equally weighted factors, and then allow the decision-maker to choose the
supplier with the highest total score (Timmerman 1986). Although this method is
simple, it depends heavily on subjective judgment. In addition, these models
weight the criteria equally, which rarely happens in practice (Min 1994;
Ghodsypour and O’Brien 1998).
In contrast to the equal weighting utilized in LW models, AHP is an effective method for determining the weights of criteria in a structured way, using pairwise comparison to select the best suppliers. Several researchers have used AHP to deal with supplier selection issues, including Nydick and Hill (1992), Barabarosoglu and Yazgac (1997), Tam and Tummala (2001), Bhutta and Huq (2002) and Handfield et al. (2002).
(2) Total cost of ownership models: TCO models attempt to include the
quantifiable costs that are incurred throughout the purchased item life cycle into
the supplier selection model. Monczka and Trecha (1988), Smytka and Clemens
(1993), Roodhooft and Konings (1996), Chen and Yang (2003) attempted to
integrate the total cost into their evaluation models.
(3) Mathematical programming methods: Mathematical programming methods can be used to formulate the supplier selection problem in terms of an objective function to be maximized or minimized by varying the values of the variables in the objective function. Several papers have used single-objective techniques to solve supplier selection issues; these include linear programming (Pan 1989; Ghodsypour and O'Brien 1998), goal programming (Buffa and Jackson 1983; Karpark et al. 1999) and mixed integer programming (Chaudhry et al. 1993; Rosenthal et al. 1995).
(4) Grey correlation models: Grey system theory can be used to solve uncer-
tainty problems in cases with discrete data and incomplete information (Deng
2002). It is, therefore, a theory and methodology that deals with poor, incomplete,
or uncertain systematic problems. Several papers have used Grey correlation
models to select suppliers. These include Tsai et al. (2003), Jadidi et al. (2008) and
Yang and Chen (2006).
All the existing methods are effective when the criteria values are real numbers, but they cannot be used when the criteria values are interval grey numbers. Vendor selection therefore urgently needs a solution for interval grey criteria, and this paper proposes a new model to solve this problem.
There are many patterns of supplier selection. The pattern in this paper is described as follows: the managers of the purchasing department have the criteria values of an ideal referential supplier, called the optimum criteria values; we compare the optimum criteria values with the criteria values of the supplier candidates and choose the closest one. For the reader's convenience, we first give some basic concepts and methods.
In our research the value of each criterion is an interval grey number, so the criteria value sequences are interval grey sequences.
Definition 1 (Liu et al. 2008) If the value range of a number is known but its exact value is unknown, the number is called a grey number, denoted ⊗. If a grey number has both an upper bound b and a lower bound a, it is called an interval grey number, denoted ⊗ ∈ [a, b].
Definition 2 (Zeng et al. 2010) If a sequence consists of interval grey numbers, it is called an interval grey number sequence, denoted

X(⊗) = (⊗(t_1), ⊗(t_2), …, ⊗(t_n))   (56.1)

where

⊗(t_k) ∈ [a_k, b_k] (k = 1, 2, …, n)   (56.2)
Criteria values with different dimensions cannot form comparable criteria value sequences, so the criteria values should be transformed into dimensionless sequences. Standardization is the most commonly used dimensionless method, and the mean method is better than standardization (Guo et al. 2011). But for interval grey criteria sequences there is no existing method to transform the sequence into a dimensionless one.
We now give such a method: transform only the midpoint sequence of the interval grey criteria values, keeping the lengths of the interval grey numbers invariant. The midpoint is calculated according to (56.3) and transformed into a dimensionless sequence according to (56.4):

m_ik = (a_ik + b_ik) / 2   (56.3)

m'_ik = m_ik / m̄_k   (56.4)

where a_ik and b_ik are the lower and upper bounds of criterion k of supplier i, and m̄_k is the mean of the midpoint values of criterion k.
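The midpoint transformation of Eqs. (56.3) and (56.4) can be sketched as follows (a minimal illustration; the nested-list layout `intervals[supplier][criterion] = (a, b)` and the function name are our own assumptions):

```python
def dimensionless_midpoints(intervals):
    """Dimensionless midpoints of interval grey criteria values by the
    mean method: m_ik = (a_ik + b_ik) / 2 (Eq. 56.3), then
    m'_ik = m_ik / mean over suppliers (Eq. 56.4).
    Interval lengths themselves are left unchanged."""
    mids = [[(a + b) / 2 for (a, b) in row] for row in intervals]
    ncrit = len(mids[0])
    # per-criterion mean of midpoints across suppliers
    mbar = [sum(row[k] for row in mids) / len(mids) for k in range(ncrit)]
    return [[row[k] / mbar[k] for k in range(ncrit)] for row in mids]
```

For example, two suppliers with intervals [2, 4] and [4, 8] on one criterion have midpoints 3 and 6, mean 4.5, and dimensionless values 2/3 and 4/3.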
Definition 3 (Liu et al. 2008) If the starting point and endpoint of the function are determined and the graph of the function rises on the left and falls on the right, the function is called a typical whitenization weight function (TWWF for short); see Fig. 56.1, where x_i^k(1), x_i^k(2), x_i^k(3), x_i^k(4) are called the turning points of f_i^k(x).
A TWWF can be used to describe the probability that the grey interval criterion takes values in its range. The TWWF of an interval grey criterion value can be determined from the experience of the purchasing managers.
In order to describe the geometrical characteristics of an interval grey criterion value and its TWWF, we propose the definitions of grey figure and grey point as follows.
Definition 4 The figure formed by the interval grey line and the WWF is called a grey figure, denoted Δ; the grey figure's center of gravity is called the grey center, denoted G, see Fig. 56.2①. The circle with the same area as the grey figure is called the grey circle, denoted O, and its radius is called the grey radius, denoted r, see Fig. 56.2②.
The fundamental idea of this paper is to calculate a grey closeness degree between each compared supplier alternative and the ideal referential supplier alternative, determine the ranking order of all supplier alternatives, and select the ideal supplier based on interval grey numbers.
Firstly, the quantitative criteria are converted into proper dimensionless indexes, and the dimensionless criteria values of a supplier are regarded as an interval grey number sequence. Then the interval number sequence and its WWF are mapped onto the rectangular coordinate system, see Fig. 56.3.
We assume that ℜ_i is the grey figure sequence of CS_i and ℜ_0 is the grey figure sequence of the ideal referential supplier. The closeness degree between ℜ_i and ℜ_0 is judged from the similarity level of the grey figures and the distance between them: the similarity level is represented by the difference between grey radii, and the distance between grey figures by the distance between grey points.
The specific methodology of this model mainly includes the following steps:
Step 1 Calculate the grey points' horizontal ordinate sequence and vertical ordinate sequence according to (56.5) and (56.6), where x_i(t_k), y_i(t_k) are the horizontal and vertical ordinates of the grey points of supplier candidate i, and x_0(t_k), y_0(t_k) are those of the ideal referential supplier.
The longer the distance between grey points, the smaller the similarity between sequences. The distance degree (denoted D_i0) is therefore calculated according to (56.8):

D_0i(t_k) = d_i0(t_k) / [ Σ_{i=1}^{m} Σ_{k=1}^{n} d_i0(t_k) ]   (56.8)
Step 3 Calculate the weight that reflects the difference of the radii of the circles having the same areas as the grey figures, according to (56.9) and (56.10):

ω_0i(k) = [1 − r_0i(k) / Σ_{k=1}^{n} r_0i(k)] / (n − 1)   (56.9)

r_0i(k) = | sqrt( (x_i^k(3) + x_i^k(4) − x_i^k(1) − x_i^k(2)) / (2π) ) − sqrt( (x_0^k(3) + x_0^k(4) − x_0^k(1) − x_0^k(2)) / (2π) ) |   (56.10)
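Step 3 can be sketched as follows, assuming, as Eq. (56.10) suggests, a trapezoidal grey figure of unit height so that its area is (x(3) + x(4) − x(1) − x(2))/2 and the equal-area circle has radius sqrt(area/π); the function and variable names are ours:

```python
import math

def grey_radius(x):
    """Radius of the circle with the same area as the trapezoidal grey
    figure whose turning points are x = (x1, x2, x3, x4)."""
    x1, x2, x3, x4 = x
    return math.sqrt((x3 + x4 - x1 - x2) / (2 * math.pi))

def radius_weights(cand, ref):
    """Weights of Eq. (56.9) from the radius differences of Eq. (56.10).

    cand[k] and ref[k] hold the turning points of criterion k for the
    candidate and the ideal referential supplier."""
    r = [abs(grey_radius(c) - grey_radius(o)) for c, o in zip(cand, ref)]
    total = sum(r)
    n = len(r)
    return [(1 - rk / total) / (n - 1) for rk in r]
```

By construction the weights sum to one, so Step 4's weighted sum of grey-point distances is a proper weighted average.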
Step 4 Compute the weighted sums of the grey points' distances using the weights obtained in Step 3.
In this section we concentrate on how to apply the new interval-grey-number supplier selection method to evaluate vendors. It is not our intention to develop an evaluation model considering all the criteria mentioned above; this paper only provides a feasibility study of the new model for vendor evaluation.
Before using the new method, the quantitative criteria should be converted into proper dimensionless indexes. The processed information of the five candidate suppliers is presented in Table 56.1, and the turning points of the WWFs are presented in Table 56.2.
Step 3 Calculate the weights that reflect the difference between the areas of the optimum grey figures and the compared figures, denoted ω_DCSi(k); see Table 56.9.
Step 4 Compute the weighted sums of the grey points' distances using the weights from Step 3:

S_C1 = 109.834, S_C2 = 96.666, S_C3 = 81.762, S_C4 = 78.022, S_C5 = 73.193

So candidate supplier 1 is selected.
56.5 Conclusion
The ultimate goal of supplier selection is to select appropriate suppliers that can provide faster delivery, lower cost and better quality, and thus to increase corporate competitiveness. But the information on delivery, price, quality and technical capability is usually uncertain, and the criteria values may be interval grey numbers. This problem cannot be solved by the existing methods, so we put forward a new method based on the interval grey number.
The method in this paper can deal with the supplier selection problem in an uncertain environment, and the demonstration showed that the new method is a good means of evaluating and selecting the best supplier. The proposed model has two disadvantages.
(1) The interval grey values of the criteria are not easy to obtain in practice. Because the supplier information needed by existing evaluation methods consists of real numbers, little interval grey information is at hand.
(2) There is no existing scientific method for determining the WWF of an interval grey criterion value; most WWFs can only be determined from the experience of managerial and/or technical staff.
Our suggestions are, firstly, to establish an interval grey information system for the supplier selection of the purchasing department and, secondly, to study appropriate methods for determining the WWFs of interval grey criteria values.
We expect the new model to be widely applied to supplier selection in practice; in comparison with other models, it is more applicable and effective.
56 Supplier Selection Model Based on the Interval Grey Number 553
Chapter 57
The Accuracy Improvement of Numerical
Conformal Mapping Using the Modified
Gram-Schmidt Method
Keywords Generalized eigenvalue problem · Modified Gram-Schmidt method · Numerical conformal mapping · Vandermonde matrix
57.1 Introduction
Conformal mapping theory has been widely applied in fluid dynamics and many other fields with strong vitality. Computing a conformal mapping is a very difficult problem. Amano et al. have achieved many numerical results in studies of the charge simulation method for conformal mappings (Amano 1987, 1988a, b, 1994), in which the charge points are often chosen by experience. On the other hand, Sakurai and Sugiura proposed a numerical method for conformal mappings using Padé approximation (Sakurai and Sugiura 2002), which allows the
Y. Lu (&) S. Zheng
Faculty of Science, Kunming University of Science and Technology
of China, Kunming, People’s Republic of China
e-mail: [email protected]
D. Wu
School of Mathematical Sciences, University of Electronic Science
and Technology of China, Chengdu, People’s Republic of China
Y. Wang
Computing Center, Kunming University
of Science and Technology of China, Kunming, People’s Republic of China
\tilde{H}_n := [\tilde{\mu}_{j+k+1}]_{j,k=0}^{n-1},  (57.2)

\tilde{H}_n^{<} := [\tilde{\mu}_{j+k+2}]_{j,k=0}^{n-1}.  (57.3)

Note that the elements \tilde{\mu} in the formulas above are defined via (57.1). Thus, from (57.2) and (57.3), new charge points can be obtained by solving the generalized eigenvalue problem \tilde{H}_n^{<} x = \lambda \tilde{H}_n x instead of calculating the roots of B(z) (Kravanja et al. 1999, 2003; Sakurai et al. 2003).
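As a small numerical illustration (not the chapter's implementation), the Hankel pencil of (57.2)-(57.3) can be built and solved with NumPy. The moments mu_m below are synthetic stand-ins for the quantities defined via (57.1): for two known points z1, z2 with unit weights, mu_m = z1**m + z2**m, and the eigenvalues of the pencil should reproduce z1 and z2.

```python
# Sketch: recovering "charge points" as generalized eigenvalues of the
# Hankel pencil H_n^< x = lambda H_n x built from assumed moments.
import numpy as np

def hankel_pencil(mu, n):
    """Build H_n = [mu_{j+k+1}] and H_n^< = [mu_{j+k+2}], j,k = 0..n-1,
    as in (57.2)-(57.3); mu[m] holds the moment with subscript m."""
    H  = np.array([[mu[j + k + 1] for k in range(n)] for j in range(n)])
    Hs = np.array([[mu[j + k + 2] for k in range(n)] for j in range(n)])
    return H, Hs

# Synthetic moments of two known points with unit weights.
z1, z2 = 0.5 + 0.2j, -0.3 + 0.7j
mu = [z1**m + z2**m for m in range(6)]   # mu_0 .. mu_5 covers n = 2
H, Hs = hankel_pencil(mu, 2)
# Solve the pencil via a plain eigenvalue problem (H is invertible here).
lam = np.linalg.eigvals(np.linalg.solve(H, Hs))
```

The recovered eigenvalues coincide with z1 and z2, which is the mechanism the text uses to obtain new charge points without computing the roots of B(z).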
\tilde{H}_n = V^T D_N V, \qquad \tilde{H}_n^{<} = V^T D_N Z_N V.  (57.5)

Here, V can be expressed via the QR decomposition as

V = \tilde{Q}\tilde{R} = [Q, Q^0] \begin{bmatrix} R \\ O \end{bmatrix} = QR,  (57.6)

where Q \in C^{N \times n}, Q^0 \in C^{N \times (N-n)}, \tilde{Q} \in C^{N \times N} is a unitary matrix, and \tilde{R} \in C^{N \times n} is expressed as

\tilde{R} = \begin{bmatrix} R \\ O \end{bmatrix},

with det(R^T) \neq 0 and det(R) \neq 0. Therefore, it can be verified that

det(\tilde{H}_n^{<} - \lambda \tilde{H}_n) = 0.
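The QR factorization used above can be computed with the modified Gram-Schmidt process that gives the chapter its title. The implementation below is a generic textbook sketch (in the spirit of Golub and Van Loan 1996), not the authors' code; unlike classical Gram-Schmidt, each remaining column is re-projected against the new q_k immediately, which is numerically more stable for ill-conditioned Vandermonde-type matrices.

```python
# A minimal modified Gram-Schmidt QR factorization (illustrative sketch).
import numpy as np

def mgs_qr(A):
    """Return Q (orthonormal columns) and upper-triangular R with A = Q @ R."""
    A = np.array(A, dtype=complex)
    m, n = A.shape
    Q = np.zeros((m, n), dtype=complex)
    R = np.zeros((n, n), dtype=complex)
    V = A.copy()
    for k in range(n):
        R[k, k] = np.linalg.norm(V[:, k])
        Q[:, k] = V[:, k] / R[k, k]
        # Immediately orthogonalize all remaining columns against q_k.
        for j in range(k + 1, n):
            R[k, j] = np.conj(Q[:, k]) @ V[:, j]
            V[:, j] = V[:, j] - R[k, j] * Q[:, k]
    return Q, R
```

Applied to a matrix V with full column rank, this yields the factors Q and R appearing in (57.6).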
In this section, we provide a numerical example comparing the proposed method with the charge simulation method. The algorithm of the charge simulation method is denoted M0; the proposed algorithm using the modified Gram-Schmidt method is denoted M1. The calculations were performed in Matlab on a Microsoft Windows operating system. The numerical error is defined as the maximal distance, in the radial direction, from the image in the w plane of a point on the boundary C in the z plane to the circumference of the unit circle. The eigenvalues of the pencil Q^T D_N Z_N Q - \lambda Q^T D_N Q were calculated using the eig command in Matlab.
Example Exterior of a trochoid: the boundary is given by

x = 0.9 cos t + 0.1 cos 3t,
y = 0.9 sin t - 0.1 sin 3t.

Collocation points and charge points in the charge simulation method are placed according to (Watanabe 1984).
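The boundary above can be sampled directly; the uniform spacing in t used in this sketch is a simplification and merely stands in for the collocation rule of (Watanabe 1984).

```python
# Sampling N collocation points on the trochoid boundary of the example.
import numpy as np

def trochoid_boundary(N):
    """Return N boundary points, as complex numbers in the z plane."""
    t = 2 * np.pi * np.arange(N) / N
    x = 0.9 * np.cos(t) + 0.1 * np.cos(3 * t)
    y = 0.9 * np.sin(t) - 0.1 * np.sin(3 * t)
    return x + 1j * y

z = trochoid_boundary(200)   # N = 200 as in the experiment below
```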
Figure 57.1 shows the error curves of the conformal mappings computed by the various numerical methods. The results of the charge simulation with N = 200 were used for the calculation of (57.1). The accuracy of M1 is superior to that of M0, and M1 attained its best accuracy at n = 29. Figure 57.2 shows the locations of the charge points for M1 at n = 29. Figure 57.3 shows the exterior of the trochoid, and the conformal mapping of Fig. 57.3 computed by M1 at n = 29 is shown in Fig. 57.4. From the result of M1 in Fig. 57.4, we see that the boundary of the trochoid is well mapped onto the unit circle.
57.5 Conclusions
In this paper, a numerical method using the modified Gram-Schmidt method has
been proposed for improving the accuracy of conformal mapping. The applica-
bility of our method has been demonstrated with numerical results. The accuracy
of conformal mapping by the proposed method is better than achievable by the
charge simulation method. The error analysis for the proposed method will be
investigated in the future.
References
Amano K (1987) Numerical conformal mapping based on the charge simulation method. Trans
Inform Process Soc Japan 28:697–704 (in Japanese)
Amano K (1988a) Numerical conformal mapping of exterior domains based on the charge
simulation method. Trans Inform Process Soc Japan 29:62–72 (in Japanese)
Amano K (1988b) Numerical conformal mapping of doubly-connected domain based on the
charge simulation method. Trans Inform Process Soc Japan 29:914–92 (in Japanese)
Amano K (1994) A charge simulation method for the numerical conformal mapping of interior,
exterior and doubly-connected domains. J Comput Appl Math 53:357–370
Golub GH, Van Loan CF (1996) Matrix computations, 3rd edn. The Johns Hopkins University
Press, Baltimore
Kravanja P, Sakurai T, Van Barel M (1999) On locating clusters of zeros of analytic functions.
BIT 39:646–682
Kravanja P, Sakurai T, Sugiura H, Van Barel M (2003) A perturbation result for generalized
eigenvalue problems and its application to error estimation in a quadrature method for
computing zeros of analytic functions. J Comput Appl Math 161:339–347
Niu X, Sakurai T (2003a) An eigenvalue method for finding the multiple zeros of a polynomial.
Trans Japan Soc Ind Appl Math 13:447–460 (in Japanese)
Niu X, Sakurai T (2003b) A method for finding the zeros of polynomials using a companion
matrix. Japan J Indust Appl Math 20:239–256
Saad Y (2003) Iterative methods for sparse linear systems, 2nd edn. SIAM, Philadelphia
Sakurai T, Sugiura H (2002) A method for numerical conformal mapping by using Padé
approximations. Trans Inform Process Soc Japan 43:2959–2962 (in Japanese)
Sakurai T, Kravanja P, Sugiura H, Van Barel M (2003) An error analysis of two related
quadrature methods for computing zeros of analytic functions. J Comput Appl Math
152:467–480
Stewart GW (1973) Introduction to matrix computations. Academic Press, New York
Tyrtyshnikov EE (1994) How bad are Hankel matrices? Numer Math 67:261–269
Watanabe N (1984) A collection of figures for conformal mappings. Cangye Bookstore, Japan (in
Japanese)
Chapter 58
The Application of AHP in Biotechnology
Industry with ERP KSF Implementation
58.1 Introduction
The application of an ERP system is very complex and depends on background and motivation. Performance management and analysis of the KSFs are required to ensure successful application of the ERP system in the biotechnology industry (Wong and Keng 2008). To cope with the competitive orchid market in Taiwan in the future, it is necessary to reduce capital, increase profit and enhance competitiveness. The objective of this study is to explore the KSFs of enterprise resource planning in the biotechnology industry for effective integration of an ERP system (Li et al. 2007). A clear picture of ERP implementation will promote the biotechnology industry's market.
Based on the above research background and motives, this study is intended to realize the following purposes: (1) to consolidate and summarize the literature on the KSFs of ERP system implementation; (2) to understand the KSFs of ERP system implementation; (3) to understand the difficulties and obstacles encountered at each stage of ERP system implementation and the solutions to overcome them.
Daniel (1961) was one of the first to propose the concept of the Key Success Factor (KSF), or critical success factor. He highlighted that the success of most industries is determined by three to six factors, which are known as the KSFs. The economist Commons (1974) later referred to the concept as the ''limiting factor'' and applied it in economic management and negotiation. Thereafter, Barnard (1976) applied the concept in management decision-making theory; he considered that the analysis required for decision making was essentially a search for ''strategic factors''. In addition, Tillett (1989) applied the concept of strategic factors to dynamic system theory. He viewed the ample resources of an organization as the key factor; policies were established to maintain and ensure maximum utilization of resources, and they were also important in resource forecasting. The KSF is the top priority in industrial analysis. It is important in the management of control variables, as well as being a source of competitive advantage.
58.3 Methodology
To study the KSF and their role in ERP implementation, the questionnaire was
divided into two parts:
(1) First-stage questionnaire: (1) target audience: the junior and middle management of companies involved in the ERP system; (2) content: the company's key success factors for ERP.
(2) Second-stage questionnaire: (1) target audience: the junior and middle management of companies involved in the ERP system; (2) content: based on the first part, this part analyzed the factors that contributed to the success of ERP.
(3) The aim was to determine the relative importance of the KSFs: (1) the first level: the motives behind companies' ERP implementation; (2) the second level as the measurement index, with four dimensions: internal factors, ERP system features, ERP software support, and results following ERP implementation; (3) the third level gauged the KSFs under the second-level indexes.
58.4 Results
This section covers the first stage of the analysis. Its target audience was the junior and middle management of the companies that were involved in successful ERP system integration. Out of a total of 20 questionnaires issued, 14 were received (a 70 % response rate); 13 were valid after omitting one that was incomplete. A five-point Likert scale was used for the data analysis. The reliability of this study was summarized by Cronbach's α coefficients, where the reliability α of the four dimensions in ERP implementation fell within the range 0.50 < α < 0.90. It is thus a good indicator of the research's reliability and value (Wang 2004).
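For reference, the Cronbach's α reliability coefficient reported above can be computed as follows. The Likert responses in this sketch are fabricated for illustration only and are not the study's data.

```python
# Sketch: Cronbach's alpha on made-up five-point Likert responses
# (rows = respondents, columns = questionnaire items).
import numpy as np

def cronbach_alpha(scores):
    scores = np.asarray(scores, dtype=float)
    n_items = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)     # variance of total scores
    return n_items / (n_items - 1) * (1 - item_vars / total_var)

responses = [   # assumed data, 5 respondents x 4 items
    [5, 4, 4, 5],
    [3, 3, 4, 3],
    [4, 4, 5, 4],
    [2, 3, 2, 2],
    [5, 5, 4, 5],
]
alpha = cronbach_alpha(responses)
```

Values of α above roughly 0.7 are conventionally read as acceptable internal consistency, which is the sense in which the 0.50 < α < 0.90 range is cited in the text.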
Internal factors:
1. The determination of executives in implementation
2. A highly effective cross-department ERP implementation team
3. Full authorization granted to the ERP project team
4. ERP implementation progress
5. Communication between the project team and departments
6. Staff training
7. Departmental acceptance of system implementation

ERP software support:
1. Real-time response service
2. Assistance to companies in staff training and technology transfer
3. Expertise demonstrated by the vendor
4. Equipment provided by the vendor
5. Understanding the needs of users
6. Communication with the company
In ''KSF research on the chain of cafés'' (Qin 2002), it was mentioned that the hierarchy weight, also known as the local priority, refers to the relative comparison of weights within each level. The overall weight is known as the global priority: the weight of the level above (second level) multiplied by the weight of the factor in the current level (third level). This displays the impact that a factor in the current level (third level) has on the entire evaluation. Therefore, based on the results from the four dimensions, the compound weights were listed in Table 58.2 and ranked in order of importance. For better clarity, the values were multiplied by 100; for example, compound weight (c) = second-level weight (a) × third-level weight (b) × 100 %.
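The global-priority computation just described can be sketched in a few lines. The dimension and factor weights below are illustrative stand-ins, not the chapter's Table 58.2 values.

```python
# Sketch: AHP global (compound) priorities = second-level dimension weight
# times third-level local weight, scaled by 100 as in the text.

def compound_weights(level2, level3):
    """level2: {dimension: weight}; level3: {dimension: {factor: local weight}}.
    Returns {factor: global weight * 100} across all dimensions."""
    out = {}
    for dim, factors in level3.items():
        for factor, w in factors.items():
            out[factor] = level2[dim] * w * 100
    return out

level2 = {"Internal factors": 0.4, "ERP software support": 0.6}  # assumed
level3 = {  # assumed local priorities, each dimension summing to 1
    "Internal factors": {"Executive determination": 0.5, "Staff training": 0.5},
    "ERP software support": {"Vendor expertise": 0.7, "Real-time service": 0.3},
}
gw = compound_weights(level2, level3)
```

Because each level's weights sum to 1, the compound weights sum to 100, which makes the ranking in Table 58.2 directly comparable across dimensions.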
58.5 Conclusion
The main focus of this study was to identify the key success factors and their level of importance in ERP implementation. This was carried out through AHP analysis of a questionnaire survey designed on the basis of interviews and a literature review. The factors of ERP implementation in the company are consolidated in Table 58.3.
In recent years, most researchers have usually considered 4 to 6 KSFs. Therefore, in this case study, the focus was placed on the first 6 KSFs (Yang et al. 2007).
(1) Staff training and education: As the company in this case study is in a traditional industry, the employees were not highly educated and most did not possess computer skills. Therefore, the company invested a considerable amount of time in conducting training courses. Staff training and education were divided into two main stages: (1) the E-team training; the E-team was mainly responsible for system maintenance, program modifications, and serving as training instructors, and in addition to basic knowledge, it was required to work with the ERP software vendors in training courses; (2) the training courses for which the company itself was responsible; these were planned in conjunction with the ERP implementation. A 30-hour course was planned according to the work system. The E-team members served as lecturers, and all system users were required to undergo training in a classroom setting during non-working hours. The course
572 M.-L. Wang et al.
(3) Under the impetus of the executives, staff were trained to work with the new system; staff who refused to change had to be let go. With full budget support and authorization from top management, the E-team was able to focus on all steps and methods of implementation and resolve complex issues. The support of the executives was therefore the main crucial success factor.
(4) Communication between the ERP software vendor and the company: the company in this case study differs from other manufacturing industries in that it had to rely on customized software. The company spent almost a year working with the software vendor to build a customized ERP system. Meetings were held among the E-team members, representatives from every department, and the ERP software vendor: every one to two weeks before and during the implementation period, then once a month after implementation, and finally only in emergency situations. Regarding the ERP software vendor's assistance in training and technology transfer, the company in this case study placed less priority on this factor among the vendor features. The services provided by the ERP software vendor were as follows: (1) to provide professionals for staff training, to arrange training in system conversion, and to tailor training courses to customers' needs; (2) to carry out an assessment of the old system before determining the means of data transfer (the system requires the Windows 2000 Server operating system, server software Tomcat, the Java programming language, and an Oracle 9i database). The E-team was responsible for data transfer operations, followed by integration of the data into the ERP system by the software vendor.
(5) Accuracy and a real-time system: the progress of the ERP system implementation could be monitored through the network and video systems (cameras). Through real-time monitoring, effective quality control and work progress could be ensured. Officials could also rely on the system database to identify problems should they occur during construction.
(6) Flexible and efficient allocation of resources: one of the biggest gains from the ERP implementation was the increased flexibility in the allocation of resources, and hence more efficient business operations.
58.5.1 Limitations
(1) The company in this case study is a single case. The key factors for implementation mentioned here may not apply in non-construction industries, so there may be limitations on the scope of the findings. A similar research method could be applied to other industries (e.g., the semiconductor industry) to identify common or distinct conclusions.
(2) This study was not able to widen the scope of its survey to a larger pool of employees due to time, manpower, and financial constraints. This might have an impact on the results and analysis.
The analysis of this research was based primarily on the company in this case study. It is recommended that subsequent research be carried out on companies with different portfolios in order to compare results. This will be a good source of reference for companies and organizations considering ERP implementation.
References
Huang ZQ, Huang B, Wang H, Lin R (2008) A study on the critical factors and strategy of
Phalaenopsis industry development. Taiwan Agric Assoc Rep 9(6):50–58
Li M, Liu GM, Ding S, Lin Y (2007) A study of KSF of China’s biotechnology industry.
Soochow J Econ Bus Stud 56(2):27–51
Qin JW (2002) The study of KSF of the coffee chain KSF. Tamkang University’s master of
science in management science research papers
Wang HN (2004) Development of the key performance indicator (KPI) system. http://
www.mamage.org.cn
Wong YF, Keng LB (2008) AHP analysis of the KSF of marine museum outsourcing business
model by KSF. Eng Sci Educ J 5(20):200–222
Yang CC, Yang CC, Peng CF (2007) Analyses of employees’ behavior models by introduction of
ERP—an example of the notebook computer industry. Chin-Yu J 25:39–57
Chapter 59
Application of Gray Correlation Method
in the Employment Analysis of Private
Enterprises
Bao-ping Chen
59.1 Introduction
Since the reform and opening up, China's private enterprises have developed vigorously, from small to large and from weak to strong, and have become an important economic growth point in China's national economic development (Liu 2005). Since the mid-1990s, the employment of the private sector has been
B. Chen (&)
Department of Computer Information and Management,
NeiMongol University of Finance and Economics, Hohhot, China
e-mail: [email protected]
developing substantially in scale and growth speed, and the private sector has gradually become the absolute mainstay in solving the employment issue in our society, which provides an important guarantee for China's social stability. At the same time, it can be seen that the private enterprises in different regions are developing unevenly in employment, and the number of employees varies greatly (Feng et al. 2010). In which industries, then, do private enterprises play a leading role in employment? Many scholars have studied the development of the private sector, such as Feng Tianli's An Empirical Study on Political Capital and the Accession to Loans from State-owned Banks of Chinese Private Enterprises (Song and Dong 2011) and Song Qicheng's On the Relationship between the Development of Private Enterprises and Employment (Zhang 2004). However, there has been relatively little research on the degree of impact of private enterprises in various industries on employment.
Grey correlation analysis is a method to quantitatively describe and compare the development and change trends of a system (Dang et al. 2009). By determining the degree of similarity of the geometrical shapes of the reference data column and several comparative data columns, the closeness of the correlation is estimated, and the correlation degree of the curves is reflected. In the development process of a dynamic system, the major influential factors can be identified by ranking the correlation degrees: a low correlation degree means a factor has little or no influence, while a high correlation degree means the factor is a major one influencing the development of the system.
According to the numbers of employees of private enterprises in 7 industries in 21 regions of China given by the China Statistical Yearbook (2010), this paper applies grey correlation analysis to reveal the major sectors affecting employment and to analyze the causes. The empirical study shows that this method can evaluate the degree of impact of private enterprises in various industries on employment more systematically, objectively and accurately, which has a certain reference value for solving the employment problem of China.
Grey correlation analysis has low requirements on sample size and data regularity. It can be applied to evaluation studies with few statistical data, high greyness, large data fluctuation, or atypical distribution regularity. Based on grey system theory, it is a multi-factor analysis technique which uses the grey correlation degree to describe the strength, degree and order of the correlation between factors.
The specific procedures are as follows.
Step 1: determine the reference sequence and comparative sequence. On the
basis of the qualitative analysis, determine one dependent variable and multiple
independent variables. For m indexes which have n evaluation objects, according
to the historical statistics, the reference sequence X0 which reflects the corre-
sponding condition of the things and the comparative sequence which describes
the corresponding situation of m factors are given. Among them:
Reference sequence:

X_0 = (x_0(1), x_0(2), \ldots, x_0(m))  (59.1)

Comparative sequences:

X_i = (x_i(1), x_i(2), \ldots, x_i(m)), \quad i = 1, 2, \ldots, n  (59.2)
Step 2: calculate the absolute correlation degree. Assume X_i has the same length as X_j, and let X_i^0 and X_j^0 be their respective initial point zero images. Among them:

|s_i| = \left| \sum_{k=2}^{n-1} x_i^0(k) + \frac{1}{2} x_i^0(n) \right|  (59.5)
Step 3: calculate the relative correlation degree. Assume X_i has the same length as X_j and that their initial values are not equal to zero. Let X_i' and X_j' be the initial value images of X_i and X_j, respectively. Take \varepsilon_{ij}', the absolute correlation degree of X_i' and X_j', as the grey relative correlation degree of X_i and X_j, denoted r_{ij}. Among them:
Step 4: calculate the comprehensive correlation degree. Assume X_i has the same length as X_j and that their initial values are not equal to zero. There is no necessary connection between \varepsilon_{ij} (the absolute correlation degree of X_i and X_j) and r_{ij} (the relative correlation degree of X_i and X_j). The comprehensive correlation degree takes both the absolute change and the relative change of the data sequences into consideration and at the same time satisfies the four axioms of the grey correlation degree. Denote by \rho_{ij} the grey comprehensive correlation degree of X_i and X_j. Among them:
The value of θ reflects the relative emphasis placed on the absolute correlation degree \varepsilon_{ij} and the relative correlation degree r_{ij}; generally, θ = 0.5. Once θ is fixed, the grey comprehensive correlation degree is unique. This kind of conditional uniqueness does not affect the analysis of the problem.
Step 5: use the calculated comprehensive correlation degrees \rho_{ij} to determine the correlation ordering.
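The steps above can be sketched compactly in code. Since formulas (59.3)-(59.4) and (59.8)-(59.9) are not reproduced in this excerpt, the functions below follow the standard grey-incidence definitions and the worked example's arithmetic (zero images, the |s| formula of (59.5), and the θ-weighted combination); they are an illustrative sketch, not the chapter's implementation.

```python
# Sketch of Steps 2-4: absolute, relative and comprehensive grey
# correlation degrees for two equal-length sequences.

def zero_image(x):
    """Initial point zero image: subtract the first element everywhere."""
    return [v - x[0] for v in x]

def s_value(x0):
    """|s| = |sum_{k=2}^{n-1} x0(k) + 0.5 * x0(n)|, cf. (59.5)."""
    return abs(sum(x0[1:-1]) + 0.5 * x0[-1])

def absolute_degree(xi, xj):
    si, sj = s_value(zero_image(xi)), s_value(zero_image(xj))
    dij = s_value([a - b for a, b in zip(zero_image(xi), zero_image(xj))])
    return (1 + si + sj) / (1 + si + sj + dij)

def relative_degree(xi, xj):
    # Initial value images: divide by the (nonzero) first element; the
    # relative degree is the absolute degree of the images.
    return absolute_degree([v / xi[0] for v in xi],
                           [v / xj[0] for v in xj])

def comprehensive_degree(xi, xj, theta=0.5):
    return theta * absolute_degree(xi, xj) + (1 - theta) * relative_degree(xi, xj)
```

With identical sequences the absolute degree is exactly 1, and with proportional sequences the relative degree is exactly 1, matching the intuition that these two measures capture absolute and relative similarity respectively.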
Since the reform and opening up, private enterprises have mushroomed at a rapid speed and become a basic force promoting national economic development as well as an important guarantee for realizing the interests of the people. The development of private enterprises plays an important role in optimizing our resources, increasing the employment rate and promoting national economic growth (Wang et al. 2011; Fang et al. 2011; Zhang et al. 2011). However, private enterprises develop unevenly in the different provinces and cities of China, with great differences. This paper tries to explore the main reasons for the difference with the grey correlation method and to find ways to solve the problem, so as to achieve common development and avoid a decline of the overall level caused by individual backward areas.
The provinces, cities and districts of the country are divided into three parts: the eastern area includes Beijing, Hebei, Liaoning, Shanghai, Jiangsu, Zhejiang, Shandong, Guangdong, etc.; the central area includes Shanxi, Heilongjiang, Henan, Hubei, etc.; and the western area includes Guangxi, Xinjiang, etc. According to the numbers of employees of private enterprises in 21 regions in 2010 given by the China Statistical Yearbook (2010), this paper selects the total employment as the reference sequence and the numbers of employees in seven industries, namely manufacturing, construction, transportation, wholesale and retail, accommodation and catering, leasing and business services, and others, as the main assessment indexes. Table 59.1 shows the relevant data for the 21 regions in 2010.
From Table 59.1, the reference sequence can be obtained:
X0 = (323.2, 123.3, 294.1, 170.7, 216.0, 511.4, 247.3, 293.9, 343.1, 1297.3,
758.8, 397.4, 278.4, 247.5, 636.6, 375.2, 451.8, 372.7, 1233.1, 241.5, 79.8, 43.2,
48.6, 132.4)
The comparative sequences are:

X_i = (x_i(1), x_i(2), x_i(3), \ldots, x_i(21)), \quad i = 1, 2, \ldots, 7  (59.10)
Table 59.1 The number of employees of the private enterprises in 7 industries in 2010 (million people)

Cities        Total   Manufacturing  Construction  Transportation  Wholesale   Accommodation  Leasing       Others
                                                                   and retail  and catering   and services
Beijing       323.2   11.5           8.7           6.6             88.7        21.0           46.8          15.4
Tianjin       123.3   33.4           5.4           5.5             39.8        4.6            10.1          4.7
Hebei         294.1   59.4           9.6           10.2            136.2       20.2           9.2           19.6
Shanxi        170.7   16.1           4.0           3.3             94.1        15.6           5.9           17.1
Neimenggu     216.0   24.4           7.6           10.8            94.9        23.1           9.1           19.2
Liaoning      511.4   87.5           27.0          46.5            193.9       31.0           24.2          32.0
Jilin         247.3   31.5           27.3          10.3            102.5       21.4           7.7           20.0
Heilongjiang  293.9   34.2           8.8           11.3            114.0       28.4           12.6          52.6
Shanghai      343.1   53.5           25.9          13.6            121.5       16.4           45.1          12.0
Step 1: Calculate the absolute correlation degree. Taking manufacturing as an example, through the initialization operation (settled as a 1-time-interval sequence of equal length), we obtain:

X_1 = (11.5, 33.4, 59.4, 16.1, 24.4, 87.5, 31.5, 34.2, 53.5, 475.3, 259.7, 57.5, 65.4, 53.3, 144.0, 61.9, 71.5, 39.4, 293.3, 34.7, 4.4, 5.2, 5.2, 16.3)

Through the operation of the initial point zero images on the X_0 and X_1 sequences, we obtain the following sequences:

X_0^0 = (0.00, -199.91, -29.19, -152.58, -107.24, 188.19, -75.98, -29.30, 19.89, 974.05, 435.53, 74.10, -44.88, 5.71, 313.34, 51.96, 128.57, 49.41, 909.88, -81.79, -190.89)

X_1^0 = (0.0, 21.85, 47.91, 4.55, 12.85, 75.98, 19.97, 22.66, 41.97, 463.82, 248.16, 45.99, 53.91, 41.84, 132.47, 50.43, 60.02, 27.84, 281.76, 23.20, 4.75)

Calculating the values of |s_0|, |s_1| and |s_1 - s_0| gives:

|s_0| = 2252.91, |s_1| = 1679.64, |s_1 - s_0| = 573.27
Thus, according to formula (59.4), the absolute correlation degree of manufacturing can be calculated; its value is 0.8728. Similarly, the absolute correlation degrees of all the factors can be calculated, namely:

\varepsilon_{01} = 0.8728, \varepsilon_{02} = 0.5501, \varepsilon_{03} = 0.5310, \varepsilon_{04} = 0.8926, \varepsilon_{05} = 0.5290, \varepsilon_{06} = 0.5917, \varepsilon_{07} = 0.5560
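The value ε01 = 0.8728 can be checked directly from |s0|, |s1| and |s1 - s0|, assuming formula (59.4) has the usual grey absolute-degree form:

```python
# Sanity check: epsilon = (1 + |s0| + |s1|) / (1 + |s0| + |s1| + |s1 - s0|)
s0, s1, d = 2252.91, 1679.64, 573.27
eps = (1 + s0 + s1) / (1 + s0 + s1 + d)
```

The result is approximately 0.8728, matching ε01 above.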
Step 2: Calculate the relative correlation degree, again taking manufacturing as an example. After the initialization operation, the initial value images of the X_0 and X_1 sequences are:

X_0' = (1.0, 0.38, 0.91, 0.52, 0.66, 1.58, 0.76, 0.90, 1.06, 4.01, 2.3, 1.22, 0.86, 0.76, 1.96, 1.16, 1.39, 1.15, 3.81, 0.74, 0.4095)

X_1' = (1.0, 2.89, 5.16, 1.39, 2.11, 7.60, 2.73, 2.96, 4.64, 41.30, 22.56, 4.99, 5.68, 4.63, 12.51, 5.38, 6.21, 3.41, 25.48, 3.01, 1.41)

The initial point zero images of X_0' and X_1' are:

X_0'^0 = (0.0, -0.61, -0.09, -0.47, -0.33, 0.58, -0.23, -0.09, 0.06, 3.01, 1.34, 0.22, -0.13, -0.23, 0.96, 0.16, 0.39, 0.15, 2.81, -0.25, -0.59)

X_1'^0 = (0.0, 1.89, 4.16, 0.39, 1.11, 6.60, 1.73, 1.96, 3.64, 40.30, 21.56, 3.99, 4.68, 3.63, 11.51, 4.38, 5.21, 2.41, 24.48, 2.01, 0.4)

The values of |s_0'|, |s_1'| and |s_1' - s_0'| are:

|s_0'| = 6.96, |s_1'| = 145.96, |s_1' - s_0'| = 138.99
Thus, according to formula (59.8), the relative correlation degree of manufacturing can be calculated; its value is 0.5255. Similarly, the relative correlation degrees of all the factors can be calculated, namely:

r_{01} = 0.5255, r_{02} = 0.6418, r_{03} = 0.6735, r_{04} = 0.6828, r_{05} = 0.9475, r_{06} = 0.9007, r_{07} = 0.7218
Step 3: Calculate the comprehensive correlation degree. Using the above absolute and relative correlation degrees and formula (59.9), and setting θ = 0.5, the comprehensive correlation degrees of all the factors can be calculated.
59.4 Conclusions
the effects of personal factors. All these make the evaluation results more accurate.
Gray correlation analysis uses the correlation degree to quantitatively describe the strength of the influence between things. The calculated correlation degree falls in the interval (0, 1]: the larger the value, the stronger the influence. Geometrically, the correlation degree reflects how similar the shapes of the curves representing different things or factors are (Sun 2011). If the correlation degree of a certain index is high, that index is a major factor affecting things; on the contrary, a low correlation degree means the index has little influence.
Applying the gray correlation method within principal component analysis to identify the major influential factors allows several factors to be considered together, which avoids the subjectivity of single-factor analysis. In this way, the analysis process is more reasonable and objective, and the results accurately reflect the differences between the various factors. The above case shows that the gray correlation analysis method places low requirements on the regularity of the original data and offers definite objectivity and scientific rigor. Besides, it is simple to use, not time-consuming, and easy to understand.
Acknowledgments Project supported by the Science Research Projects of the Colleges and Universities in Inner Mongolia (No. NJZY11106) and the Natural Science Foundation Project of Inner Mongolia (No. 2010MS1007).
References
Dang Y, Liu S, Wang Z (2009) Model on gray prediction and decision model. Science Press,
Beijing, China (in Chinese)
Fang F, Tang W, Cheng G (2011) Performance evaluation of Beijing innovative enterprises based on principal component analysis. J Beijing Inf Sci Technol Univ 2011(8):89–94
Feng T, Jing R, Wang G (2010) An empirical study on political capital and the accession to loan
from state-owned banks of Chinese private enterprises. Forecasting 29(3):26–31
Liu Y (2005) Analysis of the features of the boosting trend by private enterprises. J Hebei Univ
Econ Trade 26(2):25–30
Liu S, Dang Y et al (2009) Grey systems theory and practical applications. Social Sciences Edition, Beijing
Song Q, Dong C (2011) On the relationship between development of private, J Chongqing Univ
Technol (Social Science) 25(7):26–31
Sun L (2011) Comparison between performance of principal component analysis and fuzzy
analysis in water quality evaluation. Environ Sci Manag 8:178–181
Wang H, Li Y, Guan R (2011) A comparison study of two methods for principal component
analysis of interval data. J Beijing Univ Aeronaut Astronaut (Social Sciences Edition)
2011(7):86–89
Zhang F (2004) On the employment potentiality of the private enterprises in Liaoning.
J Shenyang Norm Univ (Social Science Edition) 28(3):19–21
Zhang J, Hu X, Lin X (2011) Research on the financial revenue of Hainan province based on the
principal component analysis. J Hainan Norm Univ (Natural Science) 2011(9):260–264
Chapter 60
The Design of Three-Dimensional Model
for the Economic Evaluation of the Coal
Enterprise Informationization
Abstract According to the characteristics of coal enterprises and the current state of coal enterprise informationization construction, the author designed a three-dimensional model for the economic evaluation of coal enterprise informationization with a low demand for data and great practicability, so as to guide the construction of coal enterprise informationization and to evaluate its economic returns.
60.1 Introduction
Q. Yuan (&)
Department of Finance and Economics, Shandong University of Science
and Technology, Jinan, China
e-mail: [email protected]
S. Yu
College of Economics and Management, Shandong University of Science
and Technology, Qingdao, China
e-mail: [email protected]
The three-dimensional model for the economic evaluation of the coal enterprise
informationization (hereafter referred to as the three-dimensional model) is shown
below as Fig. 60.1. The three-dimensional coordinates are delineated in Fig. 60.1.
The volume of the cube enclosed by the three dimensions of the model can be calculated after quantification and undimensionalization of the coordinates, and it can be used as a reference for the economic evaluation of coal enterprise informationization.
(3) Z-axis: The Internal and External Invisible Earnings of the Coal Enterprise
Informationization
The invisible earnings refer mainly to the improvements of the efficiency of
the coal enterprise, and they are difficult to measure in currency.
Based on economist Galbraith's theory and the opinions of Edvinsson and Malone, coal enterprise informationization is a complex systematic project. It not only affects the internal organization and staff of the coal enterprise, but also has an important influence on upstream and downstream firms (Edvinsson and Malone 1997).
In this paper, the author defines the invisible earnings in the coal enterprise as
intellectual capital, and the intellectual capital can be divided into organiza-
tional capital, market capital and human capital.
Considering that the invisible earnings caused by informationization in the coal enterprise can be regarded as an increase in intangible assets, the author sets the Z-axis origin as the center, defining the left interval as the increase of internal invisible earnings, which consists of organizational capital and human capital, and the right interval as the increase of external invisible earnings, which is formed by market capital (Kaplan and Norton 1996; Stewart 1994; Bharadwaj 2000).
The quantification of the coordinate axes is delineated above, but the coordinates cannot be compared directly because of the differences among dimensions. In this paper, the author uses mathematical methods for the undimensionalization.
Since neither catastrophe points nor sample data are involved in the three-dimensional model, the mathematical undimensionalization method is simple and accurate.
A higher coordinate value in the three-dimensional model means better performance of the informationization. The values of the X-axis and Z-axis are the results of formula (60.4). The value of the Y-axis is already dimensionless and does not need to be transformed.
The undimensionalization formula is shown below:

R(i) = (i − imin) / (imax − imin),  imin ≤ i ≤ imax   (60.4)
The coordinates of the dimensions can be figured out through the definition,
quantification and undimensionalization.
As shown in Fig. 60.1, the value of X1 is the result of the undimensionalization of the enterprise management domain, and the value of X2 is the result of the undimensionalization of the technology domain. The value of Y1 is the ratio of tangible assets to informationization cost. The value of Z1 is the internal invisible earnings of the coal enterprise informationization, and Z2 is the external invisible earnings.
The volume enclosed by the three dimensions gives the economic evaluation of coal enterprise informationization, with the formula shown below:

B = X · Y · Z   (60.5)
X, Y and Z are the undimensionalized coordinate values, and B is the dimensionless measure of the coal enterprise's economic benefits.
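Formulas (60.4) and (60.5) amount to min-max normalization followed by a volume product; a minimal sketch (the axis readings and ranges below are hypothetical, not from the chapter):

```python
def undimensionalize(i, i_min, i_max):
    """Formula (60.4): min-max normalization, mapping i in [i_min, i_max] onto [0, 1]."""
    return (i - i_min) / (i_max - i_min)

def evaluation_volume(x, y, z):
    """Formula (60.5): the volume B = X * Y * Z enclosed by the three dimensions."""
    return x * y * z

# Hypothetical axis readings with their observed ranges (illustrative only)
x = undimensionalize(70, 0, 100)  # supporting-capacity (X-axis) value -> 0.7
z = undimensionalize(40, 0, 80)   # invisible-earnings (Z-axis) value -> 0.5
y = 1.25                          # Y-axis is already a dimensionless ratio
b = evaluation_volume(x, y, z)    # -> about 0.4375
```

A larger B means the three dimensions jointly indicate better informationization performance.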
60.3 Conclusions
The three-dimensional model has great practicability and a low demand for data, and it makes the economic returns easier to evaluate than the index synthetic evaluation does.
The three-dimensional model maintains the supporting capacity of the coal
enterprise informationization, and emphasizes the relativity and conditionality of
the enterprise informationization economic returns.
This model can be used not only for evaluating the economic returns of a developing information system, but also as a reference for choosing an information system, and it plays an important role in coal enterprise informationization construction.
References
Chapter 61
The Effects Decomposition of Carbon Emission Changes
and Countermeasures
Abstract This article calculates the total carbon emission of Shandong Province and the variation in carbon emission of all industries during 1995–2010, term by term. Based on synchronous population and economic data, decomposition analysis is applied to decompose the carbon emission of Shandong Province into a scale effect, a structure effect and a technical effect. The article then analyzes the relations between carbon emission per unit of GDP, the total amount of carbon emission, per capita carbon emission and these effects using de Bruyn's model. Results show that the scale effect and structure effect led to growth of carbon emission, while the technical effect played the opposite role, lowering the amount of carbon emission. The article gives some countermeasures and suggestions according to the results.
61.1 Introduction
In recent years, factor decomposition models have been extensively used in environmental research to measure the relative importance of each effect on environmental pollution changes. De Bruyn (1997) believes that economic scale expansion, industrial structure change and population strength change lead to this
G. Wu (&) J. Hou
Research Center of Resources Economy and Strategy, Shandong University of Finance and
Economics, Jinan 250014, People’s Republic of China
e-mail: [email protected]
L. Wu
China Minsheng Banking Corporation Ltd., Jinan 250002, People’s Republic of China
outcome. He called these the scale effect, structure effect and technical effect (Chen et al. 2004). Decomposition analysis has become increasingly important because it can effectively separate the effects that drive the change in pollution from other possible effects (Stern 2002).
Using data from 1995 to 2010, this article researches the effects of economic scale, industrial structure and technical improvement on the carbon emission of Shandong Province, applying de Bruyn's factor decomposition model and Ang's PDM (parametric Divisia method) (Ang 1994), and then puts forward corresponding measures.
In this model, Ct is the amount of carbon emission in year t, Yt is GDP in year t, and It is the carbon emission intensity in year t. Sit = Yit/Yt is industry i's proportion of GDP added value, and Iit = Cit/Yit is the carbon emission intensity of industry i in year t. Thus the change in the amount of carbon emission can be attributed to the interaction of economic scale, industrial structure and technical improvement. The formula is:
Ct = Yt It = Yt Σi Sit Iit   (61.1)
X
DCtec ¼ ½Yt1 þ bðYt Yt1 Þ½Sit1 þ bðSit Sit1 ÞðIit Iit1 Þ ð61:4Þ
i
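The technical-effect term (61.4) generalizes to the other two effects. The sketch below assumes the scale and structure terms (formulas (61.2)–(61.3), not shown in this excerpt) take the analogous symmetric form, with β = 0.5 weighting the two periods equally; the numbers in the example are illustrative, not Shandong data.

```python
def pdm_effects(y0, y1, s0, s1, i0, i1, beta=0.5):
    """Parametric Divisia decomposition of the change in C = Y * sum_i(S_i * I_i).

    Returns the (scale, structure, technical) effect terms. The technical
    term implements formula (61.4); the scale and structure terms are the
    assumed analogous symmetric forms of (61.2)-(61.3).
    """
    scale = structure = technical = 0.0
    y_mid = y0 + beta * (y1 - y0)                 # interpolated GDP
    for s0i, s1i, i0i, i1i in zip(s0, s1, i0, i1):
        s_mid = s0i + beta * (s1i - s0i)          # interpolated industry share
        i_mid = i0i + beta * (i1i - i0i)          # interpolated intensity
        scale += (y1 - y0) * s_mid * i_mid        # assumed form of (61.2)
        structure += y_mid * (s1i - s0i) * i_mid  # assumed form of (61.3)
        technical += y_mid * s_mid * (i1i - i0i)  # formula (61.4)
    return scale, structure, technical

# One-industry illustration: GDP 100 -> 110, intensity 1.0 -> 0.95, share fixed
sca, stru, tec = pdm_effects(100, 110, [1.0], [1.0], [1.0], [0.95])
# scale = 9.75, structure = 0, technical = -5.25; they sum to the actual
# change 110 * 0.95 - 100 * 1.0 = 4.5 (exact here; in general the three
# terms can leave a small residual)
```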
The article uses equation (61.8) to estimate the amount of carbon emission in Shandong Province.
Ct = p (Σi Eti + Lt) = p (Σi,j Ftij Mj + Σj Ltj Mj)   (61.8)
In equation (61.8), Ct is the amount of carbon emission (tC), and p is the carbon emission coefficient per unit of energy (tC/tce); the Development and Reform Commission of China takes p = 0.67 tC/tce (Wu 2009). Eti is the energy consumption of industry i (tce), Lt is the energy consumption of residential life (tce), Ftij is the consumption of energy j by industry i in year t (t), Ltj is the consumption of energy j in residential life in year t (t), and Mj is the conversion factor between energy j and standard coal.
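Equation (61.8) is straightforward to implement: convert each fuel's physical amount to standard coal with its factor Mj, sum over industries and residential use, and multiply by p. The fuel name and the raw-coal factor 0.7143 tce/t below are illustrative, not taken from the chapter's tables.

```python
P_COEFF = 0.67  # tC per tce, the coefficient the text attributes to the
                # Development and Reform Commission of China (Wu 2009)

def carbon_emission(industry_use, residential_use, conversion):
    """Formula (61.8): Ct = p * (sum_ij F_t^ij * M_j + sum_j L_t^j * M_j).

    industry_use: {industry: {fuel: tonnes}}; residential_use: {fuel: tonnes};
    conversion: {fuel: tce per physical tonne}.
    """
    total_tce = sum(amount * conversion[fuel]
                    for fuels in industry_use.values()
                    for fuel, amount in fuels.items())
    total_tce += sum(amount * conversion[fuel]
                     for fuel, amount in residential_use.items())
    return P_COEFF * total_tce

# Illustrative: 1000 t of raw coal used by one industry, 100 t by residents
total = carbon_emission({"industry": {"raw_coal": 1000.0}},
                        {"raw_coal": 100.0},
                        {"raw_coal": 0.7143})   # -> about 526.4 tC
```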
Based on the energy consumption data of industries and residential life, the article obtains the total amount of carbon emission and the carbon emission of the three industries and residential life during 1995–2010, together with the pertinent data on GDP, the added value of the three industries and the population. The data in Table 61.1 are the basis for the analysis.
The energy data in this article come from the Shandong Province Energy Balance Sheet (physical quantities) in the ‘‘China Energy Statistics Yearbook’’ (2000–2005) and the Integrated Energy Balance Sheet in the ‘‘China Statistical Yearbook’’ (2011). The economic and population data come from the ‘‘Shandong Statistical Yearbook’’ (1996–2011) (China’s National Bureau of Statistics 2002, 2005; Statistics Bureau of Shandong province 1996).
[Fig. 61.1: Effect contribution rates (%), from −30 to 30, plotted for each yearly interval from 1995–1996 through 2009–2010 and for the period average]
During the research period, the contribution rate of the carbon emission scale effect is always above 0 (9–18 %), with an average of 12.3 %. This means that the enlargement of economic scale increases the amount of carbon emission. The change path follows an inverted ‘‘N’’ shape along with economic growth (Fig. 61.1), which is basically consistent with the economic development of Shandong Province. From 1995 to 2000, the speed of economic growth dropped markedly against the background of the Asian financial crisis, falling from 12.02 % in 1996 to 10.28 % in 2000, and the influence of the scale effect became weaker. Economic growth then rose steadily over the following 5 years, during which the scale effect became stronger, reaching its maximum of 15 % in 2005. In the Eleventh Five-Year period, the Chinese government adopted energy-saving and emission-reduction policies, the efficiency of energy consumption in Shandong Province improved clearly, and energy consumption intensity dropped by 22.1 % compared with the previous 5 years. Because of the global financial crisis of 2008, economic growth slowed again, dropping to 12.30 % in 2010, and the scale effect became much weaker. The path of economic growth thus clearly explains the change in the scale effect. The by-product (carbon emission) will grow with the expansion of economic scale, so the pressure from the scale effect on carbon emission will obviously persist for a long time in developing Shandong Province, and indeed in all of China as a developing country.
Like the scale effect, the structure effect also increases the amount of carbon emission over the research period, with contribution rates between 0 and 3 % and an average of 1.2 %, a much weaker influence. This suggests that over the past 15 years the change of economic structure did not reduce the amount of carbon emission but increased it, although its magnitude is only about 10 % of the scale effect. Over those 15 years, the industrial structure of Shandong Province moved from 20.4:47.6:32.0 in 1995 to 9.2:54.2:36.6 in 2010: the share of added value of the primary industry dropped by 11.2 percentage points, while the secondary and tertiary industries gained 6.6 and 4.6 percentage points respectively. Although the industrial structure has been upgraded, the secondary industry, which emits 5–6 times as much carbon as the primary industry and 3–4 times as much as the tertiary industry, became larger within the structure. This directly keeps the structure effect above 0, making it a driving force of carbon emission. This situation means that structural change is important and urgent for carbon emission reduction.
61 The Effects Decomposition of Carbon Emission Changes and Countermeasures 597
During the research period, the fluctuation of the technical effect was strong, with contribution rates between −20 and 26 %. In most years the rates were below 0, except for the period from 2000 to 2005, and the average was −3.1 %. This means the contribution of the technical effect is not very pronounced and has a certain amount of randomness, but on average the technical effect reduced the total amount of carbon emission. In fact, the contribution of the technical effect comes from the technical improvement of industrial units. In the Eleventh Five-Year period, energy-saving and emission-reduction policies were executed forcefully in Shandong Province. Specific policies such as ‘‘eliminating backward production capacity’’ and ‘‘developing energy-saving technology’’ achieved notable results, and the efficiency of energy use improved greatly. These actions brought the technical effect contribution down from 25.3 % in 2005 to −6.7 % in 2008, where it has remained below 0. The technical effect inhibits the growth of carbon emission efficiently, although there is still large room for improvement. So it is an important long-term task to strengthen the technical effect with energy-saving technologies, such as cogeneration of heat and electric power, electricity generation from waste heat and pressure, new types of motors, and so on.
The analysis above shows that, among the three mechanisms affecting the changes in carbon emission, the scale effect and structure effect enlarge the amount of carbon emission, while the technical effect inhibits it. Accordingly, the article gives several countermeasures to promote the development of a low carbon economy.
First, the government should establish and improve finance and taxation policies that encourage energy saving and emission reduction, enhancing financial support for industrializing energy-saving technology, developing new energy and environmental industries, and eliminating or upgrading high energy-consuming equipment. Meanwhile, tax breaks on energy conservation should be implemented to restrain products and consumption with high energy use and high emissions. Second, the government should develop energy-saving service industries, promote market mechanisms such as energy performance contracting and energy-saving product certification, cultivate a cluster of energy-saving equipment industries, strengthen the competitiveness of energy-saving products, and build a long-term mechanism for energy saving and carbon emission reduction.
Third, the structure of energy supply should be changed. Based on the local level of economy, industrial structure, energy structure and energy consumption, local governments ought to determine their targets for energy consumption and control the total amount of energy use. According to the principle that economic growth should keep a balance with society, an energy consumption index system that combines total-amount indexes with intensity indexes of energy
Our government should publicize knowledge about resources, the environment and climate change, such as energy saving and the low carbon economy. Relying on NGOs and grass-roots communities, widely launching actions focused on energy efficiency and low carbon living can help build a low-carbon conception, strengthen low-carbon awareness, popularize low-carbon ways to produce and consume, and launch a universal low carbon campaign.
References
De Bruyn SM (1997) Explaining the environmental Kuznets curve: structural change and
international agreements in reducing sulphur emissions. Environ Dev Econ 2(4):485
Chen L, Wang D, Fang F (2004) Main factors of pollution in China—decomposition model and
empirical analysis. J Beijing Normal Univ (Natural Science Edition) 40(8):561–568
Stern DI (2002) Explaining changes in global sulfur emissions: an econometric decomposition
approach. Ecol Econ 42(2):201
Ang BW (1994) Decomposition of industrial energy consumption: the energy intensity approach.
Energy Econ 18:163–174
Sun JW (1998) Changes in energy consumption and energy intensity: a complete decomposition
model. Energy Econ 20(1):85
Ang BW, Zhang FQ (2000) A survey of index decomposition analysis in energy and
environmental studies. Energy 25(12):1149
Hu C, Huang X, Zhong T, Tan D (2008) Character of carbon emission in china and its dynamic
development analysis. China Popul Resour Environ 3:38–42
Wu G (2009) Research on energy-saving emission reduction strategies in China. Economic and
Science Press, Beijing, pp 15–28
China’s National Bureau of Statistics (2002–2005) Energy statistics yearbook. China Statistical
Publishing House, Beijing
China’s National Bureau of Statistics (2005) China statistical yearbook. China Statistical
Publishing House, Beijing
Statistics Bureau of Shandong province (1996–2010) Shandong statistical yearbook. China
Statistical Publishing House, Beijing
Chapter 62
The Empirical Research of the Causality
Relationship Between CO2 Emissions
Intensity, Energy Consumption Structure,
Energy Intensity and Industrial Structure
in China
Keywords Carbon dioxide emissions intensity · Energy consumption structure · Energy intensity · Industrial structure · Multivariate cointegration · VECM model
62.1 Introduction
Ang used multivariate cointegration and a VECM model to explore the dynamic causality relationships between carbon dioxide emissions, energy consumption, and economic growth in France for the period 1960–2000 (James 2007).
and Cheng (2009) adopted the TY-VAR method to discuss the existence and
direction of the causality relationship between carbon emissions, economic
growth, and energy consumption in China from 1960 to 2007 (Zhang and Cheng
2009). Feng et al. (2009) employed cointegration analysis and Granger causality
test to explore the long-run equilibrium relationships, short term dynamic rela-
tionships and causality relationships between energy intensity, energy consump-
tion structure and economic structure in China over the time span from 1980 to
2006 (Feng et al. 2009). Halicioglu (2009) used bounds test and Granger causality
analysis to explore dynamic causality relationships between economic output,
carbon emissions, energy consumption and foreign trade in Turkey for the period
1960–2005 (Halicioglu 2009). Soytas and Sari adopted the TY approach to explore the long-run Granger causality relationship between carbon dioxide emissions, economic growth and energy consumption in Turkey (Ugur and Ramazan 2009).
Chang detected the causality relationship between carbon dioxide emissions, energy consumption and economic growth in China from 1981 to 2006, applying the VECM model and Granger causality test (Ching-Chih 2010). Lotfalipour used the Toda-Yamamoto method to examine the causality relationships between carbon emission, energy consumption, and economic growth in Iran from 1967 to 2007 (Mohammad et al. 2010). Menyah and Wolde-Rufael adopted the bounds test and
Granger causality test to analyze the causality relationship between carbon dioxide emissions, economic growth, and energy consumption in South Africa over 1965–2006 (Kojo and Yemane 2010). Chang and Carballo used VECM and VAR models to explore the cointegration and causality relationships between carbon dioxide emissions, energy consumption and economic growth for twenty countries belonging to Latin America and the Caribbean region during the period 1971–2005 (Chang and Claudia
model to explore the dynamic causality relationship between GDP, CO2 emissions
and energy intensity for Greece during the period 1977–2007 (Emmanouil et al.
2011). Bloch et al. used Johansen multivariate cointegration and a VECM model to investigate the causality relationships among carbon dioxide emissions, coal consumption and economic output for China (Harry et al. 2012). Jayanthakumaran et al. used bounds cointegration analysis and the ARDL model to test the long-run and short-term relationships among carbon dioxide emissions, economic growth, trade and energy consumption during 1971–2007, comparing China with India (Kankesu et al. 2012). Chen used a multinomial logit model to examine the key elements affecting the causality relationships between energy consumption and economic output for 174 samples (Cheng et al. 2012).
In summary, the conclusions derived from these empirical researches are various and even conflicting, because different data sets are collected, different time spans are selected, different econometric models are applied and different countries are studied. In this paper, we apply multivariate cointegration, a VECM model and the Granger causality test to examine the causality relationships among CO2 emissions intensity, energy consumption structure, energy intensity and industrial structure in China.
We collect and calculate the annual data of CO2 emissions intensity, energy consumption structure, energy intensity and industrial structure in China over the period 1980–2009 as research samples. CO2 emissions intensity denotes CO2 emissions per unit of GDP and is defined as CI. The proportion of coal in primary energy consumption is used to describe the energy consumption structure, called ECS. Energy intensity denotes primary energy consumption per unit of GDP, named EI. The proportion of tertiary industry is used to represent the industrial structure, denoted IS. The annual GDP data are collected from the China Statistical Yearbook 2010, the primary energy consumption data come from the China Energy Statistics Yearbook 2010, and the CO2 emissions data come from the World Development Indicators databank.
62.3 Methodology
Cointegration analysis requires that the level time series of the variables, or their differences of the same order, be stationary, so it is necessary to test the stationarity of the variables with a unit root test. The ADF test, introduced by Dickey and Fuller, is the most broadly applied unit root test method. The model is represented as follows.
ΔCIt = ρ CIt−1 + Σ(i=1..k) λi ΔCIt−i + εt   (62.1)

ΔECSt = ρ ECSt−1 + Σ(i=1..k) λi ΔECSt−i + εt   (62.2)

ΔEIt = ρ EIt−1 + Σ(i=1..k) λi ΔEIt−i + εt   (62.3)

ΔISt = ρ ISt−1 + Σ(i=1..k) λi ΔISt−i + εt   (62.4)
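As an illustrative sketch (not the authors' estimation code), the coefficient ρ in regressions (62.1)–(62.4) can be estimated by ordinary least squares: a stationary series yields a clearly negative ρ, while a unit-root (random walk) series yields ρ near zero. The simulated series below exist only for demonstration.

```python
import numpy as np

def adf_rho(x, k=1):
    """OLS estimate of rho in: diff(x)_t = rho * x_{t-1} + sum_{i<=k} lambda_i * diff(x)_{t-i} + e_t.

    Returns only the coefficient on the lagged level; the ADF test statistic
    and critical values are a separate step, omitted here.
    """
    x = np.asarray(x, dtype=float)
    dx = np.diff(x)
    y = dx[k:]                                       # dependent variable
    cols = [x[k:-1]]                                 # lagged level term
    cols += [dx[k - i:-i] for i in range(1, k + 1)]  # lagged differences
    beta, *_ = np.linalg.lstsq(np.column_stack(cols), y, rcond=None)
    return beta[0]

rng = np.random.default_rng(0)
e = rng.standard_normal(400)
walk = np.cumsum(e)                  # unit root: estimated rho near 0
ar = np.empty(400)
ar[0] = 0.0
for t in range(1, 400):
    ar[t] = 0.3 * ar[t - 1] + e[t]   # stationary AR(1): rho near 0.3 - 1 = -0.7
```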
If the variables pass the unit root test, we can use cointegration tests, introduced by Johansen and Juselius, to explore the long-run equilibrium relationships among the variables. Based on the maximum likelihood procedure, the Johansen cointegration test is used to confirm the existence of a long-run equilibrium relationship among CO2 emission intensity, energy consumption structure, energy intensity, and industrial structure over the period 1980–2009. The trace statistic is used to ascertain the existence of cointegration. The model is as follows.
Δ[CI, ECS, EI, IS]′t = αβ′ [CI, ECS, EI, IS]′t−1 + Σ(i=1..p−1) Γi Δ[CI, ECS, EI, IS]′t−i + εt   (62.5)
As reflected in Table 62.2, over the period from 1980 to 2009, the trace test demonstrates that, at the 5 % significance level, just one cointegration relationship exists among CO2 emission intensity, energy consumption structure, energy intensity, and industrial structure. The cointegration equation is:

CI = 0.201 ECS + 0.017 EI − 0.281 IS   (62.7)
606 T. Zhao and X. Ren
The cointegration equation expresses that, in the long term, both energy consumption structure and energy intensity have a positive effect on CO2 emission intensity, while industrial structure plays an obviously negative part. Supposing the other variables remain unchanged: if the energy consumption structure is improved (the coal share reduced) by 1 %, CO2 emission intensity will be reduced by 0.201 %; if energy intensity drops by 1 %, CO2 emission intensity is cut by 0.017 %; and if the proportion of tertiary industry in the industrial structure goes up by 1 %, CO2 emission intensity decreases by 0.281 %.
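The ceteris paribus responses just described follow directly from (62.7); a one-line check using only the equation's coefficients:

```python
def ci_long_run(ecs, ei, is_share):
    """Cointegration equation (62.7): CI = 0.201*ECS + 0.017*EI - 0.281*IS."""
    return 0.201 * ecs + 0.017 * ei - 0.281 * is_share

# Marginal long-run responses of CI to a unit change in each variable
d_ecs = ci_long_run(1, 0, 0)  # +0.201: a larger coal share raises emission intensity
d_ei = ci_long_run(0, 1, 0)   # +0.017: higher energy intensity raises it slightly
d_is = ci_long_run(0, 0, 1)   # -0.281: a larger tertiary-industry share lowers it
```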
The results of the cointegration test and VECM model indicate that some causal relationships exist among CI, ECS, EI and IS, but they cannot ascertain the direction and number of those relationships, so the Granger causality test is used. The results are shown in Table 62.3.
If the null hypothesis is accepted at the 10 % significance level, there is no causality relationship. Table 62.3 reveals four unidirectional causality relationships, running from CI to ECS, CI to IS, EI to ECS, and IS to ECS; meanwhile, a bidirectional causality relationship appears between CI and EI. A decrease of CO2 intensity will improve the energy consumption structure, leading to a decrease in coal consumption. The decrease of CI can also promote industrial structure improvement and stimulate the progress of the tertiary industry. Meanwhile, a drop in energy intensity and an increase in the tertiary industry's share of the industrial structure will promote improvement of the energy consumption structure.
62.5.1 Conclusion
62.5.2 Suggestions
(1) Speed up the adjustment of China's industrial structure and increase the proportion of the tertiary industry. With its high economic added value and low energy consumption, the proportion of the tertiary industry has become an important symbol of the development of a country's low carbon economy. The industrial structure has the biggest impact on carbon dioxide emissions intensity. At present, the proportion of industries with low added value, high energy consumption and high pollution is too large in China's national economy; this is the main cause of the high carbon dioxide emissions intensity. So the government has to implement a strategic adjustment of the national economic structure: eliminate energy-intensive, low-production-value industries; emphasize the development of high added value, low energy consumption and high technology industries; and strive to develop new service industries and the new energy industry.
(2) Improve the coal-dominated energy consumption structure. The annual average proportion of coal in primary energy consumption was 76 % over the period 1980–2009. This determines that China cannot change its current energy consumption structure in the short term, but it can increase the consumption proportions of oil and gas, and develop hydropower, wind,
nuclear energy, solar energy and other clean energy to gradually improve
China’s energy consumption structure.
(3) Strengthen low carbon technology innovation and improve energy efficiency. Since the reform and opening-up policy, China's industrialization has developed rapidly, accompanied by excessive energy consumption and serious waste. Low carbon technology innovation is an important way to achieve resource-conserving and environment-friendly national development. To support low carbon technology innovation, the government can levy resource, energy and environmental taxes and use the proceeds to subsidize low carbon technology innovation activities.
Chapter 63
The Evaluation of China Construction
Industry Sustainable Development
on DEA Model
Abstract Using the DEA model and panel data on the China construction industry between 1995 and 2008, this paper chooses the number of employees, fixed assets and the total power of mechanical equipment as input indexes, and gross output value and value added as output indexes, to conduct an overall evaluation of the sustainable development of the China construction industry. Based on the input–output indexes, the data show that the China construction industry has been at a low level of development, turning from decline to growth and beginning to develop well since 2002. Furthermore, according to the results of the element adjustment, the sustainable development of the China construction industry can be achieved by optimizing the employee structure and the proportion of investment assets, improving the operational efficiency of mechanical equipment and expanding the market actively.
63.1 Introduction
can understand the level of development of the construction industry through evaluation material, and provide a scientific basis for industrial development policy with the aid of the evaluation results. Reviewing and analyzing the literature above, we find that the evaluation methods in use have problems in determining the index weights (Zhang 2010). Therefore, in this paper, data envelopment analysis (DEA) is applied to evaluate the sustainable development of the China construction industry. DEA does not require predetermined index weights, which avoids subjective factors and makes the evaluation of the construction industry's sustainable development more objective and effective. Furthermore, the results are adjusted (Xiao 1994).
This paper uses the C²R model of DEA for the analysis. The envelopment form of the C²R model for evaluating DMU j₀ is as follows:

$$
\left\{
\begin{aligned}
& \min \; \theta - \varepsilon\Big(\sum_{i=1}^{m} s_i^{-} + \sum_{r=1}^{s} s_r^{+}\Big) \\
& \text{s.t.} \;\; \sum_{j=1}^{n} \lambda_j x_{ij} + s_i^{-} - \theta x_{ij_0} = 0, \quad i = 1, 2, \ldots, m \\
& \qquad \sum_{j=1}^{n} \lambda_j y_{rj} - s_r^{+} = y_{rj_0}, \quad r = 1, 2, \ldots, s \\
& \qquad \lambda_j \ge 0, \; j = 1, 2, \ldots, n; \qquad s_i^{-} \ge 0; \qquad s_r^{+} \ge 0
\end{aligned}
\right.
\tag{63.1}
$$
(3) If and only if the efficiency score θ equals one and all optimal slacks are zero, the DMU is called an efficient unit; it lies on the efficient frontier under constant returns to scale and is both technically and scale efficient. When θ equals one but s_r^+ ≠ 0 or s_i^- ≠ 0, the DMU is called a weakly efficient unit; when θ < 1, the DMU is inefficient, whether from technical inefficiency or scale inefficiency (Sheng 1996).
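As an illustration (ours, not the chapter's), model (63.1) is in general solved as a linear program for each DMU. In the special case of a single input and a single output, however, the CCR efficiency score reduces to comparing output/input ratios, which can be sketched in a few lines of Python (the function name and data are hypothetical):

```python
def ccr_efficiency_single(inputs, outputs):
    """CCR (C2R) efficiency scores for the single-input, single-output case.

    With one input x_j and one output y_j per DMU, the LP in (63.1) reduces to
    theta_j = (y_j / x_j) / max_k (y_k / x_k): each DMU is compared against the
    best observed productivity ratio.
    """
    ratios = [y / x for x, y in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

# Hypothetical example: DMU 2 produces half as much output per unit input
# as the best performers, so its efficiency score is 0.5.
scores = ccr_efficiency_single([2.0, 4.0, 3.0], [4.0, 4.0, 6.0])
```

The multi-input, multi-output case used in this chapter (3 inputs, 2 outputs, 14 DMUs) additionally needs an LP solver to determine the optimal intensities λ_j and slacks.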
Using the C²R model, this paper takes each year of the China construction industry from 1995 to 2008 as a decision-making unit, and selects the corresponding annual panel data of the China construction industry as the input and output indicator data. The data originate from the annual ''Chinese Statistic Almanac'' from 1999 to 2009.
From the input–output point of view, the sustainable development of the construction industry comprises an input subsystem and an output subsystem; the key elements of each subsystem are selected as the input and output indicators.
As human, fund, and material are the three types of input elements in the construction industry, the input indicators selected in this paper are the number of employees of the China construction industry, the fixed assets, and the total power of its own machinery and equipment at the end of the year, representing labor input, capital input, and machinery and equipment input respectively; the output indicators selected are the construction industry's gross output value and value added (Porteous 2002). Relative to revenue, gross profit, and other output indicators, gross output value better reflects the overall production efficiency of the construction industry, and the value added of construction efficiently reflects the industry's value-adding capability and potential (Reinhardt 1999).
Therefore, in the DEA model, the input indexes determined in this paper are: the number of employees, fixed assets, and the total power of its own machinery and equipment at the end of the year; the output indexes are: the construction industry's gross output value and value added. Thus 14 DMUs are defined, representing the years 1995–2008; the number of input indexes is 3 and the number of output indexes is 2. The input and output data are listed in Table 63.1.
Substituting the input and output data of Table 63.1 into the C²R model, we obtain the solutions (Tables 63.2 and 63.3) using MaxDEA 5.0 software.
Table 63.1 The data of the input and output indexes of China construction industry, 1995–2008
Year    x1 (10,000 persons)   x2 (100 million RMB)   x3 (10,000 kWh)   y1 (100 million RMB)   y2 (100 million RMB)
(x1, x2, x3 are input indexes; y1, y2 are output indexes)
1995 1497.9 1850.76 7056.5 5795.73 1668.64
1996 2121.9 2685.89 9804.8 8282.25 2405.62
1997 2101.5 3083.81 8668.5 9126.48 2540.54
1998 2029.99 3380.89 8656.52 10061.99 2783.79
1999 2020.1 3752.66 9077.77 11152.86 3022.26
2000 1994.3 4204.71 9228.11 12497.60 3341.09
2001 2110.7 4951.31 10251.72 15361.56 4023.57
2002 2245.2 6183.80 11022.52 18527.18 3822.42
2003 2414.3 6548.74 11712.38 23083.87 4654.71
2004 2500.28 7148.85 14584.05 29021.45 5615.75
2005 2699.9 7621.45 13765.56 34552.10 6899.71
2006 2878.2 8395.68 14156.29 41557.16 8116.39
2007 3133.7 9175.82 15579.39 51043.71 9944.35
2008 3314.9 10258 18195.37 62036.81 11911.65
Data source: annual ''Chinese Statistic Almanac'', 1999 to 2009
Table 63.3 Results of the trend of China construction industry development capacity, 1995–2008
Year    θ    Σλ_j    K    Technology efficiency    Operating track
1995 0.776 0.140 0.180 Invalid –
1996 0.771 0.202 0.262 Invalid Inferior to track
1997 0.709 0.213 0.300 Invalid Inferior to track
1998 0.709 0.234 0.330 Invalid Inferior to track weakly
1999 0.694 0.254 0.366 Invalid Inferior to track
2000 0.684 0.280 0.409 Invalid Inferior to track
2001 0.7 0.338 0.483 Invalid Benign to track
2002 0.532 0.321 0.603 Invalid Inferior to track
2003 0.612 0.391 0.639 Invalid Benign to track
2004 0.676 0.471 0.697 Invalid Benign to track
2005 0.78 0.579 0.742 Invalid Benign to track
2006 0.876 0.681 0.777 Invalid Benign to track
2007 0.975 0.835 0.856 Invalid Benign to track
2008 1 1 1 Valid Benign to track
Mean 0.750 0.424 0.546 – –
output results achieve the optimal state, and both scale efficiency and technical efficiency are effective. In contrast, the DEA evaluation results are invalid in the years from 1995 to 2007; the relative efficiency values fluctuate considerably and change overall from downward to upward along a V-shaped curve, as shown in Table 63.3. Before 2002, the construction industry developed slowly and the overall trend was declining. The efficiency index is lowest in 2002, with θ = 0.532, which indicates that, under constant returns to scale, 46.8 % of the input resources were wasted. In the 14 years, DEA is effective in only 1 year (7 %); in 5 years the comprehensive efficiency index is less than 0.7, in 6 years it is between 0.7 and 0.8, and in 3 years it is 0.8 or more; the mean efficiency over 1995–2008 is only 0.750. We therefore conclude that the development level of China's construction industry is low (Requate 2005).
According to the results of Table 63.2, in 2008, the year when DEA is valid, the slack variables s_i^- and s_r^+ are both zero, which indicates that the China construction industry system ran without input surplus or output deficit, and the input–output reached its optimal state. Conversely, in the years when DEA is invalid, the inputs and outputs show surplus or deficit, which indicates that the China construction industry system ran in an invalid state.
On the input side, some inputs were oversupplied in certain years: the number of employees in 13 years and the power of its own machinery and equipment at the end of the year in 10 years, which indicates that the technical quality of employees and the mechanization level of the China construction industry are low, and the industry is still labor-intensive and extensive (Tieterberg 1994). On the output side, the gross output value of the construction industry appears deficient, while the slack variables of value added are zero, which indicates that the trend of the value added of China construction industry products is good.
63.4 Conclusion
Table 63.4 DEA projection and adjustment results of China construction industry development
Year    Employee savings (10,000 persons)    Fixed assets savings (100 million RMB)    Savings of its own machinery and equipment power at the end of the year (10,000 kWh)
1995 1033.496 413.771 4507.607
1996 1452.398 614.233 6130.150
1997 1394.491 895.964 4787.756
1998 1255.275 983.563 4404.205
1999 1179.051 1149.969 4461.185
2000 1064.492 1327.451 4124.504
2001 990.921 1486.317 4105.608
2002 1181.431 2892.032 5183.669
2003 1118.888 2540.226 4602.183
2004 937.446 2312.714 6005.839
2005 779.766 1679.601 3226.065
2006 619.411 1406.058 1758.283
2007 366.249 612.007 389.124
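The savings in Table 63.4 follow from the standard DEA projection: an inefficient DMU's input target is θ·x − s⁻, so the potential saving is the gap between the actual input and that target. A minimal sketch (our own illustration; the variable names and numbers are hypothetical, not the chapter's):

```python
def input_saving(x, theta, s_minus=0.0):
    """Potential input saving from the CCR projection.

    The projected (target) input is theta * x - s_minus, so the saving is
    x - (theta * x - s_minus) = (1 - theta) * x + s_minus.
    """
    target = theta * x - s_minus
    return x - target

# Hypothetical numbers: with theta = 0.7 and input slack 5, an input of 100
# could be cut by 35 while keeping the same output.
saving = input_saving(100.0, 0.7, 5.0)
```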
(3) The DEA adjustment results show that the China construction industry's inputs of employees, fixed assets, and machinery and equipment are all in excess while their operating efficiency is low, leading to a deficit in gross output. The China construction industry is still labor-intensive and extensive.
Based on the conclusions of the analysis above, and from the point of view of enhancing the capability for sustainable development of the China construction industry, this article suggests focusing on optimizing the structure and quantity of employees, improving their technical level, allocating the proportion of investment assets rationally, gradually increasing the operating efficiency of machinery and equipment, reducing invalid investment of productive elements and resources, and actively expanding the market space, in particular by broadening the foreign construction market.
References
Deng XH et al (2008) Management information system curriculum design and teaching reform
based on project management technology. Manage Inf 59(9):80–83
Gong ZQ, Zhang ZH (2004) Life cycle assessment and management towards sustainable
construction industry. Qinghai Univ J 47(2):24–29
Koopmans TC (1951) An analysis of production as an efficient combination of activities.
Activity analysis of production and allocation, vol 13. Cowles Commission for Research in
Economics, pp 39–44
Li J (2007) Research on financing barrier of retrofit for energy saving of existing buildings and
corresponding countermeasures. Archit Econ 56(12):15–17
Lu N, Cai AY (2006) Synthetic evaluation of sustainable development of construction industry in
China—data of 31 provinces (autonomous regions and municipalities) from 1999 to 2003.
Chongqing Archit Univ J 46(4):94–97
Porteous C (2002) The new eco-architecture: alternatives from the modern movement. Spon
Press, London, p 121
Reinhardt FL (1999) Bringing the environment down to Earth. Harvard Bus Rev 77(4):149–157
Requate T (2005) Dynamic incentives by environmental policy instruments—a survey. Ecol Econ 54:175–195
Sheng ZH (1996) The theory, method and application of DEA (In Chinese), vol 11. Beijing
Science Press, Beijing, pp 36–47
Tieterberg T (1994) Environmental economics and policy. HarperCollins, New York
World Commission of Environment and Development (1987) Our common future. Oxford
University Press, Oxford
Xiao WP (1994) Target analysis and evaluation models on science-technic progress at the
construction enterprise. Chongqing Constr Univ J 16(1):1–8
Yu G, Wei QL, Brokett PA (1996) Generalized data envelopment analysis model. Ann Oper Res
66:47–89
Zhang YW (2010) The research on China architecture development evaluative method. Econ Res
Guide 48(7):54–56
Chapter 64
The Prediction of Customer Retention
Costs Based on Time Series Technique
Abstract Customer expenditure is one of the vital factors that impact customer asset; the measurement and prediction of customer expenditure mean a lot to the measurement of customer asset (Chen 2006). From the perspective of customer asset, we study the measurement of customer retention costs, the major component of customer expenditure. Firstly, we define the components of customer expenditure and explain the connotation of customer retention costs; secondly, using the time series technique, we build a prediction model of retention costs and then predict future customer costs on the basis of this model. Lastly, the prediction model is applied to a case, and the results show that the model is effective. Moreover, the model has reference value for further study of the measurement of customer asset.
64.1 Introduction
Customer expenditure, a factor impacting customer value, has attracted attention since customer-value orientation became a generally recognized marketing concept. To measure customer value accurately and objectively, we must focus on the measurement of customer expenditure. However, theoretical and empirical research on it is currently lacking, because predicting future costs is not an easy task given the uncertainty of the future. As a part of customer expenditure, the measurement of future costs is vital: it affects not only the measurement of customer expenditure but also the accurate evaluation of customer value, and thus expenditure management and marketing decisions (Ness et al. 2001). Therefore, the purpose of this paper is to study the measurement of future costs. Retention costs are their major expression. We predict customer retention costs using the time series technique and historical data, aiming to provide a suitable way to solve the problems of measuring and predicting customer expenditure.
Customer expenditure means the money enterprises spend on attracting and retaining customers (Liu 2003). There are two ways to classify customer expenditure: according to when the costs arise in the customer life cycle, it can be divided into costs incurred and future costs; according to the overall process of obtaining and retaining customers, it can be divided into customer retention costs and acquisition costs (Zhao and He 2009). This paper integrates these two classifications under the customer life cycle perspective, and holds that acquisition costs and the retention costs already incurred constitute historical and current costs, while retention costs are the major expression of future costs. The sum of historical and current costs and future costs is the lifelong cost (Yang 2011). It is thus easy to see that we must know both acquisition costs and retention costs if we want to measure customer expenditure. Acquisition costs are costs incurred and are available from actual data, while retention costs are future costs, predicted by a suitable method chosen according to their characteristics and patterns. Obviously, retention costs are more difficult to obtain than acquisition costs.
Costs of sales, which arise in the selling process, include policy costs and service costs. Costs caused by differences in sales policy include sales commission costs, push-money costs, cash discount costs, and so on. Service costs in completing a purchase include order processing costs, packaging costs, cargo handling costs, etc.
Customer retention costs are the spending to retain existing customers. They include customer after-sales service costs and customer management costs (Jiang 2006).
Customer after-sales service costs comprise several parts: sales technical support costs, training costs, maintenance service costs and product upgrade costs. After-sales service is one of the key marketing policies for improving customer satisfaction, especially for complex, highly technical products. Whether companies provide after-sales service has become a basic requirement for retaining customers. Favorable after-sales service can build the corporate image and strengthen customers' willingness to repeat their purchasing behavior. But we must admit that it is not a small expense, and it is related directly to the production environment, technology and the quality of personnel.
Customer management costs include customer account management costs and relationship maintenance costs. Customer account management costs are accounts receivable management costs caused by credit business. They include wages for managing accounts receivable, travel expenses for collecting accounts receivable, bad debt losses and bad debt processing fees. These fees have been increasing because of fierce competition and the rise of credit business.
Relationship maintenance costs are emotional investment costs incurred to retain the purchasing relationship with existing customers, including regular visiting costs, gift costs, customer support costs, long-term relationship redemption costs, etc.
From the above we can see that acquisition costs are costs incurred; actual data are available, so measuring them is not a tough task. Retention costs are future costs, characterized by uncertainty, and are predicted by a suitable method chosen according to their characteristics and patterns. Therefore, we must know customer retention costs in order to obtain lifelong costs. That is why we write this paper.
From the above, we know that many components of retention costs, and the factors that impact them, are related to the customers themselves. The variety of these factors makes customer expenditure uncertain. So we must consider their effects on retention costs if we want to establish a prediction model. But there are too many factors, and it is impossible
624 F. Yu et al.
to analyze every factor and build a model for each. Besides, their effects are already reflected in the time series of past and current data. In a word, this paper uses the time series technique to build a prediction model of customer retention costs.
The basic idea of the time series technique is to find, from historical data, the regularity with which a phenomenon changes over time, to assume that this regularity will continue into the future, and to predict future data accordingly. We use a single-element (univariate) time series in this paper. The purpose of building the model is to use historical data and random errors to predict the change of the variable. Generally, we assume that the random errors ε_t at different times are statistically independent, normally distributed random variables. There are three steps in applying the time series technique: pretreatment of the time series, establishment of the model, and short-term forecasting of customer costs (Wang 2008).
The pretreatment of the time series has two parts: judging stationarity and testing for pure randomness. First, we judge the stationarity of the time series by drawing a timing diagram. If the time series is not stationary, we apply zero-mean and difference-stationary processing to the series. If it is stationary, we perform pure randomness testing. Purely random series are also called white noise sequences; they are series whose past behavior does not affect future development. Series can be divided into different types, and every type has its own analysis method (Guo et al. 2006).
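The pure randomness (white noise) test referred to here is typically based on the Ljung-Box Q statistic built from the sample autocorrelations. The chapter uses SAS; the computation itself can be sketched in Python as follows (a simplified illustration of ours, not the authors' code):

```python
def acf(x, k):
    """Sample autocorrelation of series x at lag k."""
    n = len(x)
    mean = sum(x) / n
    denom = sum((v - mean) ** 2 for v in x)
    num = sum((x[t] - mean) * (x[t - k] - mean) for t in range(k, n))
    return num / denom

def ljung_box_q(x, max_lag):
    """Ljung-Box statistic Q = n(n+2) * sum_k rho_k^2 / (n - k).

    Under the white-noise null hypothesis, Q is approximately chi-square
    distributed with max_lag degrees of freedom; a large Q (small P value)
    rejects pure randomness.
    """
    n = len(x)
    return n * (n + 2) * sum(acf(x, k) ** 2 / (n - k) for k in range(1, max_lag + 1))

# A strictly alternating series is strongly (negatively) autocorrelated,
# so its lag-1 ACF is far from zero and Q is large.
x = [1.0, -1.0] * 5          # n = 10, mean 0
r1 = acf(x, 1)               # -(n-1)/n = -0.9
q1 = ljung_box_q(x, 1)       # 10 * 12 * 0.81 / 9 = 10.8
```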
There are three phases of model building: order determination, parameter estimation, and the adaptive test. Order determination can be done in several ways: ACF-based order determination, PACF-based order determination, the residual variance method, and the best criterion function method. Note that with the first two methods the order judged is not definitive; the exact order can be confirmed by the other methods.
When this step is finished, the next is parameter estimation: build an ARMA (p, q) model from the sample data sequence, and determine the order (p, q) and the parameters. There are three ways to estimate the parameters: moment estimation, maximum likelihood estimation, and least squares estimation. Least squares estimation is more accurate because it uses the available information to the utmost (Cheng and Li 2007).
After the model is established, we perform an adaptive test to check whether enough information has been extracted. The null hypothesis is that the residual series is a white noise sequence. If we reject the null hypothesis, the residuals still contain relevant information and the fitted model is not significant; otherwise, the model is significant.
After these two steps, we can apply the model to predict customer retention costs. This paper uses SAS to build the ARMA (p, q) model.
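As a rough Python counterpart to the SAS procedure (an illustrative sketch with hypothetical data, not the chapter's implementation), the least squares estimation of a zero-mean AR(2) model and its recursive forecast look like this:

```python
def fit_ar2(x):
    """Least squares estimate of (phi1, phi2) in the zero-mean AR(2) model
    x_t = phi1*x_{t-1} + phi2*x_{t-2} + e_t, via the 2x2 normal equations."""
    y, l1, l2 = x[2:], x[1:-1], x[:-2]
    s11 = sum(a * a for a in l1)
    s22 = sum(a * a for a in l2)
    s12 = sum(a * b for a, b in zip(l1, l2))
    s1y = sum(a * b for a, b in zip(l1, y))
    s2y = sum(a * b for a, b in zip(l2, y))
    det = s11 * s22 - s12 * s12
    return ((s1y * s22 - s2y * s12) / det, (s2y * s11 - s1y * s12) / det)

def forecast_ar2(x, phi1, phi2, steps):
    """Recursive multi-step forecast: future errors are set to their mean, zero."""
    hist, out = list(x), []
    for _ in range(steps):
        nxt = phi1 * hist[-1] + phi2 * hist[-2]
        out.append(nxt)
        hist.append(nxt)
    return out

# Noise-free data generated from known coefficients are recovered exactly.
phi1, phi2 = 1.16163, -0.66487   # coefficients of the form fitted in this chapter
x = [1.0, 0.8]
for _ in range(30):
    x.append(phi1 * x[-1] + phi2 * x[-2])
est = fit_ar2(x)
```

In practice the observed series is noisy, so the estimates only approximate the true coefficients, and software such as SAS also reports their standard errors and t statistics.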
quarter of 2001 to the second quarter of 2010. We take customer A as an example to show the data processing; the other companies' data are processed in the same way.
In the judgment of stationarity, the results show that this customer's retention costs fluctuate smoothly with a certain periodicity, so we judge the series to be a stationary time series.
Next, we perform pure randomness testing. The results of the white noise test are shown in Table 64.1: the P values of the LB test statistic are very small (<0.0001). Therefore, we are sure (confidence level >99.999 %) that the customer retention cost series is a stationary, non-white-noise sequence.
Thirdly, we carry out order determination, using ACF and PACF. All autocorrelation coefficients decay to within two standard deviations, which shows that the sequence is clearly short-term correlated. However, the decay of the significantly non-zero autocorrelation coefficients toward small fluctuations is quite continuous and slow, so the autocorrelation function cannot be regarded as cut off. In addition, the partial autocorrelation coefficients up to lag two are significantly greater than two standard deviations, while the others fluctuate randomly at small values within two standard deviations; the attenuation from non-zero coefficients to small fluctuations is very prominent, so the partial autocorrelation function can be regarded as censored at order two. Therefore, we consider fitting the observed sequence with an AR(2) model:

$$x_t = \frac{\varepsilon_t}{1 - \phi_1 B - \phi_2 B^2}, \qquad \varepsilon_t \sim N(0, \sigma_\varepsilon^2).$$
Through further model fitting and optimization among ARMA (p, q) models with moving-average order less than or equal to five, the relative minimum of the BIC information criterion is attained at ARMA (2, 0), so we confirm that the model is AR (2). We then use the least squares method to estimate the parameters of the AR (2) model. The results show that the mean MU and the other parameters are all significant (the P values of the t test statistics are less than 0.0001).
The model fitted to the sample data is $x_t = 1.16163\,x_{t-1} - 0.66487\,x_{t-2} + \varepsilon_t$.
The next step is the adaptive test of the model. The results show that the P values of the LB statistic at the delayed orders are all significantly greater than α, so the fitted model is significantly adequate.
Last, we use the model to predict the customer expenditure. The sample data are extended to the second quarter of 2011, giving the retention costs of the third and fourth quarters of 2010 and the first and second quarters of 2011. The retention costs of the four future quarters total 246015. Using the same method, we obtain the annual retention cost results: 203178, 313482, 235837 and 173109.
64.5 Conclusion
References
Chen Y (2006) Analysis of customer value from the perspective of the cost of services. Master
thesis, NanJing Normal University, NanJing
Cheng Z-y, Li X (2007) The research of customer costs in customer profitability analysis.
Coast Enterp Technol (3):25–26
Fei Y (2007) Customer profitability analysis based on ABC—the use of chemical enterprises.
Master thesis, Xi’an Polytechnic University, Xi’an
Guo C-m, Shen Y-a, Wang X-r, Gui L-j (2006) Research on the model of parametric cost
estimation based on ABC. Syst Eng Pract 26(2):55–61
Jiang Y (2006) The measurement and management of customer asset, Master dissertation, Hunan
University, Hunan
Liu X (2003) Prediction model of the value of customer asset and its use in marketing decision.
Quant Tech Econ Res (5):158–161
Ness JA, Schrobeck MJ, Letendre RA, Douglas WJ (2001) The role of ABM in measuring
customer value. Strateg Finance 82(9):32–37; 82(10):44–49
Wang Y (2008) The use of time sequence analysis. Renmin University of China Press, Beijing
Yang J-f (2011) The measurement of customer expenditure based on the customer asset. Master
thesis, Xi’an Polytechnic University, Xi’an
Zhao Q-k, He C-c (2009) The costs of customers. Manag Aspect 6:116–117
Chapter 65
The Research on the Location Selection
of the Bank Outlets Based on Triangular
Fuzzy Analytic Hierarchy Process
Abstract This paper analyzes the influencing factors of the location of bank outlets using the triangular fuzzy analytic hierarchy process, and then calculates the weights of each factor. Furthermore, it shows the feasibility of this model approach in the site-selection process, in order to provide a reference for decision-makers.
Keywords Bank outlets · Fuzzy analytic hierarchy process (FAHP) · Location · Triangular fuzzy numbers
65.1 Introduction
Physical outlets play a key part in bank marketing as the most important places where banks conduct their various business activities. They are the operational platform and the extension of the information antennae of banks. However, commercial bank outlets in China, which are often installed according to administrative level, lack a scientific basis. Duplication exists within the same business circle, and similarity in products and scales leads to low operational efficiency (Guo 2010). Nearly every bank carries out research on the planning and transformation of its outlets as a result of the fierce horizontal competition. In this context, how to select sites scientifically is of great practical significance.
65.2 Analysis
As with the location of other establishments, various factors should be taken into account. This paper states them in the following five aspects.
(1) Geographical site factor: Location belongs to the geographical category. The more convenient the geographical conditions are, the more likely bank outlets are to gather together (Xu 2008). The indexes involved are: road access, parking numbers, bus stop numbers, and the numbers of public places, communities, malls and enterprises in the area. The denser the business circle, the more feasible it is to set up outlets.
(2) Competitor factor: Try to understand the number of competitors and your market share in the area. Service products differ between banks. By comparing your hardware facilities with those of rivals, you can position yourself explicitly and learn your own advantages and disadvantages. To gain more market share and customers, you had better improve your core competitiveness and turn to personalized services.
(3) Marketing factor: Banks with different positioning choose different locations. For example, the Bank of China prioritizes high-end customers, so its outlets are mostly dotted around urban districts; the Agricultural Bank focuses on agricultural support programs, so its outlets tend to be placed in the suburbs. Supporting costs and rents differ between outlets; higher costs reduce efficiency and then influence the banks' performance. Marketing analysis mainly covers: bank positioning, own orientation, supporting costs and rents, floor space, and outdoor visual effect.
(4) Population and economic factor: Potential target customers depend on the total population and the economy, and so does the bank's size. Meanwhile, per capita income levels and the flow of people per unit time also have an impact on the location.
(5) Urban development planning: As a typical factor, urban planning is the overall arrangement by which the municipal government deploys urban land use, spatial distribution and various constructions according to the development goals of a period. It covers the reconstruction of old urban areas and the programming of new districts; urban traffic planning is also included.
A triangular fuzzy number M is generally expressed as (l, m, u) with l ≤ m ≤ u, where l and u are the lower and upper bounds of M, and m is the mid-value at which the membership degree equals 1 (Chen 2004).
630 Y. Han and F. Dai
The operations on two triangular fuzzy numbers M1 and M2 are:

$$M_1 = (l_1, m_1, u_1), \qquad M_2 = (l_2, m_2, u_2)$$
$$M_1 \oplus M_2 = (l_1 + l_2,\; m_1 + m_2,\; u_1 + u_2)$$
$$M_1 \otimes M_2 \approx (l_1 l_2,\; m_1 m_2,\; u_1 u_2)$$
$$M^{-1} \approx \Big(\frac{1}{u}, \frac{1}{m}, \frac{1}{l}\Big)$$
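These operations translate directly into a small Python sketch (our own illustration; the names are hypothetical), representing each triangular fuzzy number as a tuple (l, m, u):

```python
def tfn_add(m1, m2):
    """Exact sum: (l1+l2, m1+m2, u1+u2)."""
    return tuple(a + b for a, b in zip(m1, m2))

def tfn_mul(m1, m2):
    """Approximate product: (l1*l2, m1*m2, u1*u2)."""
    return (m1[0] * m2[0], m1[1] * m2[1], m1[2] * m2[2])

def tfn_inv(m):
    """Approximate reciprocal: 1/(l, m, u) = (1/u, 1/m, 1/l)."""
    return (1.0 / m[2], 1.0 / m[1], 1.0 / m[0])

# Adding two comparison-matrix entries and inverting a hypothetical TFN:
s = tfn_add((1, 1, 1), (0.4, 0.611, 0.917))   # (1.4, 1.611, 1.917)
inv = tfn_inv((2.0, 4.0, 5.0))                # (0.2, 0.25, 0.5)
```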
(4) Calculating the normalized weights of the index. Let M1 = (l1, m1, u1) and M2 = (l2, m2, u2) be triangular fuzzy numbers. The degree of possibility that M1 ≥ M2 is defined as:

$$V(M_1 \ge M_2) = \begin{cases} 1, & m_1 \ge m_2 \\ 0, & l_2 \ge u_1 \\ \dfrac{l_2 - u_1}{(m_1 - u_1) - (m_2 - l_2)}, & \text{otherwise} \end{cases}$$

The degree of possibility that one fuzzy number is greater than or equal to the other k fuzzy numbers is defined as:

$$V(M \ge M_1, M_2, \ldots, M_k) = \min_{i = 1, \ldots, k} V(M \ge M_i)$$
$$\sum_{i=1}^{5}\sum_{j=1}^{5} a_{ij} = (1, 1, 1) + (0.4, 0.611, 0.917) + \cdots + (0.556, 0.722, 1.208)$$

$$\sum_{j=1}^{5} a_{1j} = (1, 1, 1) + (0.4, 0.611, 0.917) + \cdots + (1.056, 1.5, 2)$$

$$D_{B_1} = \sum_{j=1}^{5} a_{1j} \otimes \Big(\sum_{i=1}^{5}\sum_{j=1}^{5} a_{ij}\Big)^{-1} = (0.1010, 0.1847, 0.3161)$$

$$D_{B_3} = \sum_{j=1}^{5} a_{3j} \otimes \Big(\sum_{i=1}^{5}\sum_{j=1}^{5} a_{ij}\Big)^{-1} = (0.0927, 0.1537, 0.2659)$$

$$D_{B_4} = \sum_{j=1}^{5} a_{4j} \otimes \Big(\sum_{i=1}^{5}\sum_{j=1}^{5} a_{ij}\Big)^{-1} = (0.1335, 0.2481, 0.4260)$$

$$D_{B_5} = \sum_{j=1}^{5} a_{5j} \otimes \Big(\sum_{i=1}^{5}\sum_{j=1}^{5} a_{ij}\Big)^{-1} = (0.0721, 0.1121, 0.2081)$$

$$v(D_{B_1} \ge D_{B_4}) = \frac{0.1335 - 0.3161}{(0.1847 - 0.3161) - (0.2481 - 0.1335)} = 0.7423$$

$$v(D_{B_1} \ge D_{B_5}) = 1$$
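The degree-of-possibility comparisons in this section can be checked with a short Python function (our own sketch of the standard extent-analysis formula, using the D_B values computed above):

```python
def possibility(m1, m2):
    """Degree of possibility V(M1 >= M2) for triangular fuzzy numbers (l, m, u)."""
    l1, mid1, u1 = m1
    l2, mid2, u2 = m2
    if mid1 >= mid2:
        return 1.0
    if l2 >= u1:
        return 0.0
    return (l2 - u1) / ((mid1 - u1) - (mid2 - l2))

d_b1 = (0.1010, 0.1847, 0.3161)
d_b4 = (0.1335, 0.2481, 0.4260)
d_b5 = (0.0721, 0.1121, 0.2081)
v14 = possibility(d_b1, d_b4)   # ~0.7423, matching the hand computation
v15 = possibility(d_b1, d_b5)   # 1.0, since 0.1847 >= 0.1121
```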
The same method yields the weights of the first-class indexes relative to the second-class indexes, as Table 65.4 shows; based on this we can obtain the total ranking of the sub-rule layer C relative to the target layer A.
From Table 65.4, we know that the competitor factor plays the most important part in the location, while the influence of urban development planning is relatively minimal. So the number of competitors in the area should be given priority when a new site is set up (Li et al. 2005). Try to learn the differences in services and products between rivals from market research; if you want more market share, innovation is indispensable. At the same time, population and economy should be taken into consideration. It is necessary to expand the service radius and improve the service level.
65.3 Summary
Due to the limitations of scoring, the method is generally applied within a particular area (Fan et al. 2005). It cannot be denied that it has reference value, as it reduces personal subjective judgment, and it also performs well in evaluating a given set of schemes to choose the best.
References
Cao Y (2009) Location decision analysis of chain stores based on the fuzzy analytic hierarchy
process. Bus Perspect 594:48–49
Chen X (2004) Application of fuzzy hierarchy process for optimum selection in decision-making.
Comput Eng Des 10:1847–1849
Fan LF, Jiang HB, Chen KS (2005) The application of fuzzy analytic hierarchy process in the
location selection of distribution centers. Mod Logist 11:15–17
Guo XP (2010) The research on influential factors about layout of commercial bank outlets.
Anhui University, Hefei
Ji D, Song B, Yu TX (2007) The method of decision-making based on fuzzy analytic hierarchy
process and its application. Fire Control Command Control 11:38–41
Jiang Y, Liu D (2010) Evaluation of industrial cluster comprehensive performance based on
fuzzy analytic hierarchy process. Stat Decis 02:31–33
Li B, Huang S (2009) Application of method for evaluating in quality assurance system of MMS.
Tech Econ 28:50–53
Li Y, Hu XH, Qiao J (2005) An improved fuzzy analytic hierarchy process method. J Northwest
Univ 01:11–16
Liu LJ, Fan RG (2005) The application to supplier partner selection of the fuzzy analytic
hierarchy process based on the triangular fuzzy numbers comparative theory. Logist Sci
Technol 127:117–121
Orlovsky SA (1986) Decision-making with a fuzzy preference relation. Fuzzy Sets Syst
18:105–120
Tao C, Zhang H (2012) Risk assessment for the third-party damage to pipeline based on fuzzy
hierarchy process. Gas Storage Transp 31:99–102
Xu S (1988) Principle of analytic hierarchy process-practical decision-making and methods.
Tianjin University Press, Tianjin
Xu F (2008) Empirical research on the location of bank outlets. Zhejiang University of
Technology, Hangzhou
Xuan Z, Hua L (2008) The evaluation of the port site selection based on fuzzy hierarchy process.
Chin Water Transp 12:68–70
Yan T, Zhu R (2009) The research of universities’ financial risk identification based on fuzzy
analytic hierarchy process. Stud Finan Account Educ 03:26–30
Chapter 66
The Study of Sino-Russian Trade
Forecasting Based on the Improved Grey
Prediction Model
Abstract In this paper, we improve the traditional GM (1,1) model by means of the equidimensional filling-vacancies technique, which yields higher accuracy, and use it to predict future Sino-Russian trade. First, we introduce the theory of the GM (1,1) grey model and of the GM (1,1) grey model with equidimensional filling vacancies. Second, we establish the equidimensional filling-vacancies GM (1,1) grey forecasting model using the trade volume between China and Russia from 2000 to 2011. Then we forecast the Sino-Russian trade in 2012. At the end of the paper, we analyze the forecast results and find that Sino-Russian trade still has very large room for development.
66.1 Introduction
Along with the deepening of world economic integration, the economic connections between countries are increasingly close. China is the most populous country in the world; Russia, China's largest neighbor, is the world's largest country. They share a common boundary line of more than 4,300 km and a long trading history. At present, China and Russia are both permanent members of the UN Security Council and members of the WTO, and they play a significant role in international politics and economic affairs.

Fig. 66.1 Volume of trade between China and Russia from 1994 to 2010
From Fig. 66.1 we can see that Sino-Russian trade has undergone a long and tortuous development process. For example, from 1999 to 2008, owing to the reforms of Russia's foreign trade policy, the rapid growth of the Chinese economy and other factors, the volume of trade between China and Russia grew stably and rapidly, while Sino-Russian trade declined sharply under the influence of the 2008 global financial crisis (Du 2011; Ren and Wang 2011; Zhang and Liang 2011; Ma et al. 2006; Li et al. 2012; Chen 2009; Zhao 2010).
As the above shows, foreign trade volume growth presents a certain degree of volatility and uncertainty because of the impact of trade policy, international market demand, emergencies and many other uncertain factors, which makes accurate prediction of trade volume difficult. However, accurate prediction of foreign trade volume is of great significance for promoting stable and sustained growth of the national economy and for developing reasonable and effective foreign trade policy (Deng 2005; Li 2008; Chu 2011; Wang et al. 2008).
Because foreign trade volume is influenced by many uncertain factors, some scholars use the traditional GM (1,1) model for its prediction. Thanks to its small sample-size requirement and relatively high prediction accuracy, grey prediction technology has been widely used. Yet the GM (1,1) model has its limitations, like other forecasting methods. For example, when the data have a greater degree of dispersion, that is, the greater the grey scale of the data, the worse the prediction accuracy; and it is not suitable for long-term forecasts extending several years ahead (Zhou and Jiang 2004; Zhou and Zhou 2011; Wang et al. 2009; Zhang et al. 2009; Niu et al. 2006). In this paper, we improve the traditional GM (1,1) model with the equidimensional filling-vacancies technique and predict future Sino-Russian trade.
The sequence $x^{(1)}(k)$ exhibits a law of approximately exponential growth, so we generally assume that $x^{(1)}(k)$ follows the exponential form of the general solution of a first-order linear differential equation:

$$\frac{dx^{(1)}}{dt} + a x^{(1)} = u \qquad (66.4)$$
Under normal circumstances, the Sino-Russian trade data we obtain are discrete, while the equation above is of continuous exponential type. The general approach at this point is to use $x^{(0)}(k+1)$ to represent the differential term in discrete form, and to take for $x^{(1)}$ the average of its values at $k$ and $k+1$, namely:
640 Z. Zhang et al.
$$x^{(1)} = \frac{1}{2}\left[x^{(1)}(k) + x^{(1)}(k+1)\right] \qquad (66.5)$$

Therefore, the equation is transformed into:

$$x^{(0)}(k+1) + \frac{a}{2}\left[x^{(1)}(k) + x^{(1)}(k+1)\right] = u \qquad (66.6)$$
The results can be written in matrix form as follows:

$$\begin{bmatrix} x^{(0)}(2) \\ x^{(0)}(3) \\ \vdots \\ x^{(0)}(n) \end{bmatrix} = \begin{bmatrix} -\frac{1}{2}\left[x^{(1)}(1)+x^{(1)}(2)\right] & 1 \\ -\frac{1}{2}\left[x^{(1)}(2)+x^{(1)}(3)\right] & 1 \\ \vdots & \vdots \\ -\frac{1}{2}\left[x^{(1)}(n-1)+x^{(1)}(n)\right] & 1 \end{bmatrix} \begin{bmatrix} a \\ u \end{bmatrix} \qquad (66.7)$$

Here we let $Y_n$ denote the vector on the left-hand side, $A = [a,\, u]^T$ the parameter vector, and $B$ the coefficient matrix, so that $Y_n = BA$.

By solving this equation by least squares, we obtain

$$\hat{A} = (B^T B)^{-1} B^T Y_n = [\hat{a},\, \hat{u}]^T,$$

and substituting the parameters back into the original equation gives the time-response function:

$$\hat{x}^{(1)}(k+1) = \left[x^{(1)}(1) - \frac{\hat{u}}{\hat{a}}\right] e^{-\hat{a}k} + \frac{\hat{u}}{\hat{a}} \qquad (k = 0, 1, 2, \ldots) \qquad (66.8)$$

After 1-IAGO, we can get the discrete form:

$$\hat{x}^{(0)}(k+1) = \hat{x}^{(1)}(k+1) - \hat{x}^{(1)}(k) = \left(1 - e^{\hat{a}}\right)\left[x^{(0)}(1) - \frac{\hat{u}}{\hat{a}}\right] e^{-\hat{a}k} \qquad (k = 0, 1, 2, \ldots, n-1)$$
The dynamic process of equidimensional filling vacancies on the original sequence is as follows: remove $x^{(0)}(1)$ and add the forecast $\hat{x}^{(0)}(n+1)$. Thus the original data sequence becomes

$$x^{(0)} = \left\{x^{(0)}(2),\, x^{(0)}(3),\, \ldots,\, x^{(0)}(n),\, \hat{x}^{(0)}(n+1)\right\}$$
Based on this adjusted data sequence, the traditional GM (1,1) model is re-applied to predict the next value. Finally, the above steps are repeated until the final forecast results are obtained.
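The whole procedure of Eqs. (66.4)–(66.8) plus the filling-vacancies loop can be sketched as follows. This is a minimal NumPy sketch we add for illustration (the chapter's own computations were done in Matlab); the function names are ours:

```python
import numpy as np

def gm11_fit(x0):
    """Least-squares estimate of (a, u) in x0(k+1) + a*z1(k) = u."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                       # 1-AGO series
    z1 = 0.5 * (x1[:-1] + x1[1:])            # background (mean) values
    B = np.column_stack([-z1, np.ones_like(z1)])
    Y = x0[1:]
    (a, u), *_ = np.linalg.lstsq(B, Y, rcond=None)
    return a, u

def gm11_predict(x0, steps=1):
    """Time-response forecast followed by 1-IAGO; returns the next `steps` values."""
    x0 = np.asarray(x0, dtype=float)
    a, u = gm11_fit(x0)
    k = np.arange(len(x0) + steps)
    x1_hat = (x0[0] - u / a) * np.exp(-a * k) + u / a
    x0_hat = np.diff(x1_hat, prepend=0.0)    # x0_hat(k+1) = x1_hat(k+1) - x1_hat(k)
    return x0_hat[len(x0):]

def gm11_equidimensional(x0, steps=1):
    """Equidimensional filling vacancies: after each one-step forecast, drop the
    oldest observation and append the forecast, keeping the window size fixed."""
    window = [float(v) for v in x0]
    forecasts = []
    for _ in range(steps):
        nxt = float(gm11_predict(window, 1)[0])
        forecasts.append(nxt)
        window = window[1:] + [nxt]          # remove x0(1), add the forecast
    return forecasts
```

Applied to the Table 66.1 series, `gm11_equidimensional(data, 1)` produces a one-step (2012) forecast in the manner described above.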
Table 66.1 The Sino-Russian trade from 2000 to 2011 (billion dollars)

Year             2000    2001    2002    2003    2004    2005
Volume of trade  80.03   106.71  119.27  157.58  212.26  291.01

Year             2006    2007    2008    2009    2010    2011
Volume of trade  333.87  481.55  569.09  387.52  555.33  835
the standard model. Then, we forecast the Sino-Russian trade in 2012. The Sino-Russian trade volumes from 2000 to 2011 are shown in Table 66.1.

After using Matlab to plot the original series and the accumulated generating (AGO) series, an exponential growth trend can be found; that is to say, GM (1,1) can be used for prediction (Fig. 66.2).
With the help of Matlab, the following result is obtained:

$$\hat{A} = \begin{bmatrix} \hat{a} \\ \hat{u} \end{bmatrix} = \begin{bmatrix} -0.1539 \\ 1296409.5357 \end{bmatrix}$$

Substituting these parameters into the time-response function gives the forecasting formula for $k = 0, 1, 2, \ldots$
With the help of the traditional GM (1,1) model and the improved grey GM (1,1) model, we forecast the Sino-Russian trade in 2012; the results are shown in Table 66.2.

Table 66.2 Comparison of the predictions of the traditional GM (1,1) model and the improved model

                       Traditional grey model       Improved grey model
Year   Actual value    Predictive value  Residual   Predictive value  Residual
2011   835             833.78            1.22       834.50            0.50
Two indicators are used in the accuracy test: the posterior error ratio c and the small-error probability p. The smaller c is, the better the model; the greater p is, the better the model (Tables 66.3, 66.4).
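As an illustration, the two indicators can be computed as in the following sketch we add here (using the conventional 0.6745 factor of the grey posterior-variance test):

```python
import numpy as np

def posterior_variance_test(actual, predicted):
    """Posterior error ratio c = S2/S1 (smaller is better) and small-error
    probability p (greater is better) for a fitted grey model."""
    actual = np.asarray(actual, dtype=float)
    resid = actual - np.asarray(predicted, dtype=float)
    s1 = actual.std()                        # S1: std of the original series
    s2 = resid.std()                         # S2: std of the residuals
    c = s2 / s1
    p = float(np.mean(np.abs(resid - resid.mean()) < 0.6745 * s1))
    return c, p
```

A model whose residuals are small relative to the spread of the raw series scores a low c and a p close to 1.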
It can be seen from the results that the accuracy of the model, whether improved or not, is good. However, the accuracy of the improved model is better than before the improvement; in other words, the improved grey model has higher extrapolation accuracy in Sino-Russian trade forecasting.
66.3 Conclusion
The foreign trade volume between two countries is affected by each country's economic conditions, trade policy, international market demand, unexpected events and many other uncertain factors, and thus belongs to a grey system. Because the grey scale of foreign trade volume is too large, the forecasting precision of the traditional GM (1,1) model is reduced, making it inapplicable. In this paper, we improved the traditional GM (1,1) model and used the new model to predict Sino-Russian trade. By comparing the forecast results obtained with the traditional model and the improved one, we found that the improved model has much higher accuracy. However, the further ahead the forecast extends, the greater the computation required, so the model still needs further improvement.
References
Chen D (2009) On the influences of the international financial crisis on china’s foreign trade and
countermeasure. J Hubei Polytech 5(4):67–72
Chu X (2011) A forecast of foreign trade of Beijing based on the gray system theory. China Bus
Market 2011(5):54–58
Deng J (2005) Basic methods of gray system. Huazhong University of Science and Technology Press, Wuhan, pp 60–70
Du Y (2011) China’s foreign trade development status and countermeasures. China Bus Trade
2011(5):201–202
Li S (2008) A trend forecast of China import and export trade total volume based on the gray
system model. Commer Res 2008(3):113–115
Li L, Dunford M, Yeung G (2012) International trade and industrial dynamics: geographical and
structural dimensions of Chinese and Sino-EU merchandise trade. 32(1):130–142
Ma T, Li B, Fang C, Zhao B, Luo Y, Chen J (2006) Analysis of physical flows in primary
commodity trade: a case study in China. Resour Conserv Recycl 47(1):73–81
Niu D, Zhang B, Chen L, Zhang T (2006) Application of intelligent optimization grey model in
middle-term electricity demand forecasting. East China Electr Power 1(1):8–11
Ren T, Wang Y (2011) Present situation and prospect of Sino-Russian economic and trade
relations. China Econ Trade Herald 3:64–65
Wang Y, Sun L, Xu C (2008) The dynamic analysis of export structure versus china trade
competitiveness index. Oper Res Manag Sci 17(2):115–120
Wang Z, Dang Y, Liu S, Lian Z (2009) Solution of GM (1, 1) power model and its properties.
Syst Eng Electron 10(10):2380–2383
Zhang J, Liang S (2011) China’s foreign trade development trends and policy measures in post-
crisis era. J Yunnan Univ Finance Econ 6:43–48
Zhang L, Ji P, Du A, He Q (2009) Comparison and application of several grey-forecasting models
to mid-long term power load forecasting. J China Three Gorges Univ (Natural Sciences)
3(6):41–45
Zhao M (2010) China-Russia relations enter a new period of historical development. Russian
Cent Asian East Eur Stud 1:62–67, 96
Zhou J, Jiang Z (2004) China’s exports forecast based on gray system model of GM (1.1). J Int
Trade 2004(2):27–29, 39
Zhou Z, Zhou F (2011) The application of grey model of equidimensional filling vacancies in
forecasting GDP. J Huanggang Norm Univ 31(6):26–28
Chapter 67
The Role of Preference and Emotion
in Environmental Risk Perception
Keywords Environmental risk perception · Risk preference · Time delay effect · Emotion · Appraisal theory
67.1 Introduction
Recent years have seen a sharp increase in the frequency and severity of environmental risks, causing casualties and financial losses. The petroleum leakage accident in the Bohai Sea and the cadmium pollution in Guangxi are typical recent examples. Environmental risk perception involves both objective factors and subjective factors. Risk preference is one of the two main subjective factors in environmental risk perception, together with time preference. Although it is generally accepted that people can be risk-averse, risk-neutral, or risk-seeking, it is more practical to assume that all human beings are averse towards environmental risks, since they cannot gain any benefit from them. It is also reasonable to assume that all people require compensation for delayed benefit, i.e. people are averse to time delay.
Risk preference manifests people's attitudes towards risks. It is usually measured by comparing the expected value of a risky option with its subjective certainty equivalent. When the subjective certainty equivalent equals the expected value, the decision maker is risk-neutral; when it is lower than the expected value, he is risk-averse; and when it is greater, he is risk-seeking.
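As a toy illustration of this comparison rule (the function, its names and the tolerance below are ours, not part of the study):

```python
def risk_attitude(certainty_equivalent, expected_value, tol=1e-9):
    """Classify a decision maker from the certainty equivalent (CE) they
    report for a lottery with a given expected value (EV)."""
    if abs(certainty_equivalent - expected_value) <= tol:
        return "risk-neutral"
    if certainty_equivalent < expected_value:
        return "risk-averse"   # accepts less than EV in exchange for certainty
    return "risk-seeking"      # demands more than EV to give up the gamble
```

For a lottery paying 100 with probability 0.5 (EV = 50), reporting a CE of 40 indicates risk aversion.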
Extensive explorations have been conducted in this field. The first and perhaps most influential result is expected utility theory. This model received widespread support and served as the basis for decision-making under uncertainty for several decades, owing to the clarity and simplicity of both its logic and its form. In the financial field, Markowitz proposed his risk–return model to explain the St. Petersburg Paradox. However, the discovery of more and more anomalies, such as the framing effect, challenged the two theories. It was against this background that prospect theory was proposed.
Time preference shows how people discount future gains or losses. Extensive research has been conducted on time preference, bringing forth multiple theories and findings. The Discounted Utility (DU) model was the first widely accepted model, in which future utility is weighted by a utility discounting factor. Later research found many anomalies, greatly undermining the validity of the DU model. To better explain people's real behavior under time delay, researchers proposed models such as hyperbolic discounting (Pender 1996) and hyperboloid discounting (Green et al. 1997).
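The discount functions just mentioned can be written down directly; the parameter names r, k and s below are generic illustrations we supply, not values from any of the cited studies:

```python
import math

def du_discount(delay, r):
    """Discounted Utility (DU) model: exponential discount factor."""
    return math.exp(-r * delay)

def hyperbolic_discount(delay, k):
    """Hyperbolic discounting in the common one-parameter form."""
    return 1.0 / (1.0 + k * delay)

def hyperboloid_discount(delay, k, s):
    """Hyperboloid (two-parameter) form in the spirit of Green et al. (1997)."""
    return (1.0 + k * delay) ** (-s)
```

Unlike the exponential form, the hyperbolic factor implies a per-period discount rate that falls with delay (decreasing impatience), which is one of the anomalies the DU model cannot capture.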
For intertemporal risks, discounted expected utility has long been the dominant model for explaining people's attitudes towards time delay and uncertainty at the same time. However, its underlying assumption that people are risk-neutral towards gains and losses has come under criticism.
Although both risk preference and time preference in intertemporal risks have been researched in detail, few studies have focused on the relationship between them, simply treating them as orthogonal dimensions. Recently, some studies have noticed that time delay can affect subjective probability judgment (Epper et al. 2009). Some researchers treat time delay as an implicit risk, arguing that time delay increases aversion towards risk (Baucells and Heukamp 2010); others use impatience to argue that time delay makes people more risk-averse, while Construal Level theory (Liberman and Trope 2008) suggests that time delay reduces risk aversion. The latest behavioral experiments show that time delay makes people more risk-tolerant. In any case, research in neuroscience (Loewenstein 2001) and biology (Boyer 2008) strongly supports the view that time has an impact on risk preference.
648 C. Xie et al.
classified as agency, coping potential, fairness, certainty, and outcome desirability (Watson and Spence 2007).

Emotion is one of the causes of behavioral tendency (Scherer 2009). Different emotions in risk perception can cause different environmental risk actions, implying the important role of emotion in environmental risk management. However, emotion is far from a sufficient condition for behavior (Gattig and Hendrickx 2007; Pender 1996; Green et al. 1997). Environmental risk types and cultural factors moderate the relationships between emotions and behaviors. In addition, different people express their emotions in different ways. Further research is required to find which emotions are relevant in specific environmental risks, and what the effects of appraisals on emotional reactions are in those risks. In brief, future research can focus on the particular factors of emotion in specific environmental risks, such as nuclear leakage, water pollution, and soil contamination.
different time delays: 0 days, 6 weeks, 12 weeks, and an uncertain time between 0 days and 12 weeks. Participants in the experiment will therefore face 40 different choices.

Unlike Noussair and Wu (2006), we will assume no specific model for expected values, in particular no linear relationship between probability and the probability weighting function, i.e. subjective probability. We will ask subjects to report the certainty equivalent on the exercise date for each lottery rather than its present value. Data will be collected and analyzed, comparing the certainty equivalents for the same lottery under different time delays. By imposing a specific form on probability, we will be able to analyze further the structure of the time delay effect on risk preference.
Given the influence of emotion on risk perception, we will also test how different emotions affect people's risk preference under delay. We will try to elicit different emotions among subjects with different materials and ask them to answer questions designed to test their emotions. By comparing the certainty equivalents of the same lottery with the same time delay under different emotions, we will be able to gain insight into how specific emotions affect risk preference.

The questionnaire survey will feature specific environmental risks, such as nuclear leakage, water pollution, soil contamination and air pollution. With a properly designed questionnaire, we will gain data reflecting both people's risk preference in a specific environmental risk and how their instant emotion affects the time delay effect on risk preference.

The subjects will include students, teachers, and white-collar workers. Those who join the laboratory behavioral experiments will be well briefed beforehand so that they understand the effect of their choices, and they will receive a monetary reward according to their performance.
67.5 Conclusion
The roles of risk preference and time preference in environmental risks are crucial. The finding that risk preference can be affected by time delay is valuable for understanding people's environmental risk perceptions. We expect to find that time delay makes people more risk-tolerant. On this basis, we will gain a new approach to explaining people's overreaction to temporally close environmental risks and their neglect of long-term environmental risks such as global warming. As for emotion, we anticipate finding that overwhelming emotions can greatly change people's risk preference and the time delay effect on risk preference.

This research will be the first study to explore the effect of time delay on environmental risk perception. It is also expected to be the first study to combine risk preference and emotion in environmental risk perception. In addition, it will provide evidence for identifying the difference between people's risk preference under gain situations and under loss situations. We hope to gain insight into the role of risk preference and emotion in environmental risk perception, especially the effect of time delay on risk preference as applied to environmental issues. Our research will provide deeper insights into people's attitudes towards environmental risks, and thus offer better guidance for relevant environmental risk communication and management.
Acknowledgments This work was supported by the National Science and Technology Support
Program (2009BAK53B06), the National Natural Science Foundation of China (71101035) and
the Humanities and Social Sciences of Education Ministry (12YJA880130).
References
Scherer KR (2009) The dynamic architecture of emotion: evidence for the component process
model. Cognit Emot 23(7):1307–1351
Keller C, Bostrom A, Kuttschreuter M, Savadori L, Spence A, White M (2012) Bringing
appraisal theory to environmental risk perception: a review of conceptual approaches of the
past 40 years and suggestions for future research. J Risk Res 15(1):237–256
Gattig A, Hendrickx L (2007) Judgmental discounting and environmental risk perception:
dimensional similarities, domain differences and implications for sustainability. J Soc Issues
63(1):21–39
Pender JL (1996) Discount rates and credit markets: theory and evidence from rural India. J Dev
Econ 50(2):257–296
Green L, Myerson J, McFadden E (1997) Rate of temporal discounting decreases with amount of
reward. Mem Cognit 25(5):715–723
Epper T, Fehr-Duda H, Bruhin A (2009) Uncertainty breeds decreasing impatience: the role of
risk preferences in time discounting. In: Working paper in institute for empirical research in
economics, University of Zurich, Zurich
Baucells M, Heukamp FH (2010) Common ratio using delay. Theory Decis 68(12):149–158
Liberman N, Trope Y (2008) The psychology of transcending the here and now. Science
11(11):1201–1205
Loewenstein GF, Weber EU, Hsee CK, Welch N (2001) Risk as feelings. Psychol Bull
127(2):267–286
Boyer P (2008) Evolutionary economics of mental time travel? Trends Cognit Sci 12(6):219–224
Rundmo T, Moen BE (2006) Risk perception and demand for risk mitigation among experts,
politicians and lay people in Norway. J Risk Res 9(6):623–640
Hongxia D, Rosanne F (2010) A cross-cultural study on environmental risk perception and
educational strategies: implications for environmental education in China. Int Electron J
Environ Educ 1(1):1–19
Vastfjall D, Peters E, Slovic P (2008) Affect, risk perception and future optimism after the
tsunami disaster. Judgm Decis Mak J 3(1):64–72
Peters EM, Burraston B, Mertz CK (2004) An emotion-based model of risk perception and stigma
susceptibility: cognitive appraisals of emotion, affective reactivity, worldviews, and risk
perceptions in the generation of technological stigma. Risk Anal 24(5):1349–1367
Lerner JS, Keltner D (2000) Beyond valence: toward a model of emotion-specific influences on
judgment and choice. Cognit Emot 14(4):473–493
Smith CA, Kirby LD (2009) Putting appraisal in context: toward a relational model of appraisal
and emotion. Cognit Emot 23(7):1352–1372
Watson L, Spence MT (2007) Causes and consequences of emotions on consumer behavior—a
review and integrative cognitive appraisal theory. Eur J Mark 41(3):487–511
Noussair C, Wu P (2006) Risk tolerance in the present and the future: an experimental study.
Manag Decis Econ 27(6):401–412
Chapter 68
The Model Research on Risk Control
Qing-hai Zhang
Abstract With the development of society and the growth of technical complexity, the risks involved in many problems are increasing, creating a pressing need for research on the technology and methods of risk control. Having identified and assessed the possible risks, this paper divides them into four types and designs a risk control model for each type, aiming to minimize the risk probability and harm degree. Furthermore, the paper extends the model based on economic costs to one that considers social benefits and other factors.
68.1 Introduction
Risk is a phenomenon that exists widely in people's work and life. Risk has the following characteristics. First, risk causes disasters and accidents, along with their economic losses and casualties. Second, the occurrence of risk is uncertain. Third, the loss degree of risk is uncertain, and there is a difference between the probable result and the anticipated outcome (Doherty 2000a).
Risk management arose from the existence of risk; it is a management science, originating in the USA in the 1950s, concerned with the laws governing risk occurrence and with risk control technology. The three core stages of risk management are risk identification, risk appraisal and risk control. Risk identification means identifying the present and potential factors that may cause loss in the management process, analyzing whether there is uncertainty in these factors, and determining whether that uncertainty exists objectively. Risk identification is the first step and stage of risk management and the foundation of the whole process. Risk appraisal is also important work
Q. Zhang (&)
Basic Courses Department, Military Economy Academy, Wuhan, China
e-mail: [email protected]
that tests, weighs and estimates the risk, sizing up its probability and predicting the severity of its consequences.
The final aim of risk identification and assessment is to avoid and control risk, achieving optimal political, economic, and social benefits at minimum cost, and reducing the probability of risk accidents and the scope and effect of losses to the greatest extent (Jarrow and Turnbull 1998; Doherty 2000; Williams and Heins 1997). Risk control, the most popular risk management technique, fits the planning and implementation stages, and is vividly described as "admit the risk, and try to reduce its occurrence and effect".
Specifically, risk control includes prior control, in-process control and subsequent control. For prior control, we need to compare the risks of different schemes and choose the best plan, giving consideration to every aspect. For in-process control, there are two conditions: in the first the risk is unacceptable, while in the second it is acceptable. If the risk exceeds the maximum acceptable level, we have to cancel the present scheme and choose an alternative, or rescue the present scheme by lowering the assessment indicators and adjusting the tactical and technical requirements. If the risk is acceptable, we should continue monitoring it. For subsequent control, we need to summarize and popularize advanced experience and take warning from failures.
Through risk identification and assessment, we can find the main risks and fix their probability and harm degree. There are four types of risk.

Risk 1: low probability, low harm. This risk is secondary and acceptable.

Risk 2: high probability, low harm. This risk should be well controlled to reduce its probability. Though the danger level of each individual occurrence is not high, it is necessary to guard against accumulated risk.

Risk 3: low probability, high harm. This risk is rare, but it will be devastating once it happens. So precautionary measures should be taken, and new types of risk should be watched closely.

Risk 4: high probability, high harm. This risk is essential, and precautionary, shifting and diminishing measures should be taken to reduce its influence and prevent devastating outcomes (Liu 2008).
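The four types form a simple two-by-two classification; the following sketch is ours, with hypothetical cut-offs of 0.5 used purely for illustration:

```python
def classify_risk(probability, harm, p_threshold=0.5, h_threshold=0.5):
    """Map a (probability, harm) assessment to one of the four risk types."""
    hi_p = probability >= p_threshold
    hi_h = harm >= h_threshold
    if hi_p and hi_h:
        return 4   # high probability, high harm: double-target control
    if hi_p:
        return 2   # high probability, low harm: reduce the probability
    if hi_h:
        return 3   # low probability, high harm: reduce the loss scope
    return 1       # low probability, low harm: secondary, acceptable
```

In practice the thresholds would come from the risk appraisal stage rather than fixed constants.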
The purpose of risk control is to reduce the probability of risk accidents and the scope of losses to the greatest extent. Apparently, this is an optimization problem, so we need to create risk control models with the methods of mathematical programming.
This model is designed for risk 2, with high probability and low harm. This risk can be divided into two types: in the first, the harm is acceptable when the risk happens; in the second, the harm is small but unacceptable. In the first case, the risk can be treated as an acceptable risk. In the second case, as the probability of risk 2 is high, control measures are needed to reduce the probability. These control measures incur a cost, called the control cost. So we need to jointly consider the probability P and the cost C. Because the model integrates the two, it is called the P–C model (Vincent and Jeryl 1985). The P–C model aims at increasing economic efficiency and is suitable for normal risk control.
In the single-factor model, there are two hypotheses about a measure $s_i$: taking measure $s_i$ and not taking it. $x_i$ is a 0–1 variable:

$$x_i = \begin{cases} 0, & \text{take } s_i \\ 1, & \text{do not take } s_i \end{cases}$$

But in practice, we can choose several measures to control the risk and determine the extent of each measure, so we create a multi-factor model based on mathematical programming (Benink 1995).
Create an n-dimensional function $P = P(x_1, x_2, \ldots, x_n)$, in which $x_i$ is the extent of control measure $s_i$; according to the extent chosen, $x_i$ takes different numerical values. $P$ is the corresponding risk probability when the control measures $s_1, s_2, \ldots, s_n$ are taken to different extents. Suppose each $x_i$ $(i = 1, 2, \ldots, n)$ is a continuous variable, so that $P = P(x_1, x_2, \ldots, x_n)$ is an n-dimensional continuous function of $x_1, x_2, \ldots, x_n$. Our target is to reduce the probability of this risk, so we take this function as the objective function of the programming problem and seek its minimum:

$$\min P(x_1, x_2, \ldots, x_n)$$

Let $C_1, C_2, \ldots, C_n$ stand for the control costs of measures $s_1, s_2, \ldots, s_n$. Apparently, different extents of control measure $s_i$ bring different control costs, which means $C_i$ is a function of $x_i$ (Jiang 2002):

$$C_i = C_i(x_i), \quad i = 1, 2, \ldots, n$$
Then we get the constraint condition

$$C_1(x_1) + C_2(x_2) + \cdots + C_n(x_n) \le C,$$

where $C$ is the total acceptable cost. Then we get the programming problem:

$$\min P(x_1, x_2, \ldots, x_n)$$
$$\text{s.t.} \begin{cases} C_1(x_1) + C_2(x_2) + \cdots + C_n(x_n) \le C \\ C_i(x_i) \ge 0, \quad i = 1, 2, \ldots, n \end{cases}$$
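A toy instance of this programme can be solved numerically. The response and cost functions below are invented for illustration (an exponentially decreasing risk probability and linear control costs), and a coarse grid search stands in for a proper nonlinear-programming solver:

```python
import numpy as np

def risk_probability(x, p0=0.8, effect=(2.0, 1.0)):
    """Hypothetical response: measure intensities x reduce P multiplicatively."""
    return p0 * np.exp(-np.dot(effect, x))

def control_cost(x, unit_cost=(3.0, 1.0)):
    """Hypothetical linear control costs C1(x1) + C2(x2)."""
    return np.dot(unit_cost, x)

def solve_pc_model(budget, grid=101):
    """Grid-search min P(x1, x2) s.t. C1(x1) + C2(x2) <= budget, 0 <= xi <= 1."""
    xs = np.linspace(0.0, 1.0, grid)
    best = (np.inf, None)
    for x1 in xs:
        for x2 in xs:
            x = (x1, x2)
            if control_cost(x) <= budget:
                p = risk_probability(x)
                if p < best[0]:
                    best = (p, x)
    return best
```

With a budget of 1, the search allocates everything to the second measure, because in this toy instance it buys more probability reduction per unit cost than the first.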
The above model focuses on economic efficiency. But in actual life, social efficiency is sometimes more important than economic efficiency. In that case, S, which stands for social efficiency, replaces C, which stands for control cost, and the new model can be named the P–S model; its construction method and steps follow those of the P–C model. In the same way, the model can be extended into other forms according to various needs.
This model aims at risk 3, with low probability and high harm. The risk can be divided into two parts: in the first, the probability is acceptable; in the second, the probability is low but the risk is unacceptable. In the first case, the risk is considered an acceptable risk (Jorion 1997; Arrow 1971; Smith 1998; Delianedis and Geske 1998). In the second case, as the harm is serious when risk 3 happens, it is necessary to take control measures to reduce the loss. r symbolizes the loss, which is the difference between the final result and the intended goal. The cost resulting from the control measures is called the control cost. So we need to jointly consider the loss scope r and the cost C. Because the model integrates the two, it is called the r–C model.

The construction method, steps and extensions of the r–C model are similar to those of the P–C model.
This model aims at risk 4, with high probability and serious harm, which is the key point. Both reducing the probability P and lightening the loss scope r are necessary (Ward 1999). As a result, the model has two objective functions, making use of a double-target programming model:

$$\min P(x_1, x_2, \ldots, x_n)$$
$$\min r(x_1, x_2, \ldots, x_n)$$
$$\text{s.t.} \begin{cases} C_1(x_1) + C_2(x_2) + \cdots + C_n(x_n) \le C \\ C_i(x_i) \ge 0, \quad i = 1, 2, \ldots, n \end{cases}$$

This model considers the different priorities and demands placed on P and r in practice. Operations researchers often convert double-target programming into single-target programming (Editorial Board of Operational Research 2002); that is, a weighted analysis of the probability and the loss scope forms a single target: $\min\, k_1 P + k_2 r$.
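On a toy instance, the weighted scalarization can be illustrated as follows; the probability and loss-scope responses below are invented for illustration, and a grid search again stands in for a solver:

```python
import numpy as np

def solve_weighted(k1, k2, budget, grid=51):
    """Grid-search min k1*P(x) + k2*r(x) s.t. C1(x1) + C2(x2) <= budget."""
    xs = np.linspace(0.0, 1.0, grid)
    best = (np.inf, None)
    for x1 in xs:
        for x2 in xs:
            if 3.0 * x1 + 1.0 * x2 > budget:      # C1(x1) + C2(x2) <= C
                continue
            P = 0.8 * np.exp(-(2.0 * x1 + x2))    # hypothetical probability
            r = 50.0 * (1.0 - 0.5 * x1)           # hypothetical loss scope
            obj = k1 * P + k2 * r
            if obj < best[0]:
                best = (obj, (x1, x2))
    return best
```

Setting k2 = 0 recovers the pure P–C solution, while k1 = 0 concentrates the budget on the only measure that shrinks the loss scope; intermediate weights trade the two objectives off.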
68.6 Conclusion
Above, risk 2, risk 3 and risk 4 have been discussed. Risk 1, with low probability and low harm, is generally considered a secondary and acceptable risk; if its probability or harm becomes unacceptable, we need to reduce them, and the above models can be used.

In particular problems there are many methods of risk control, but the theory and practice of risk management prove that every method has its applicability and limitations. So we need to choose the control method according to the specific problem and the characteristics of the risk. The three models in this paper reduce the probability and lighten the loss scope of the four types of risk, using programming knowledge together with economic cost, and they provide risk control experts with a theoretical basis from the viewpoint of methodology (Doherty 2000b; White 2004). In actual risk control, however, the experts ought to consider all the relevant factors and decide which method to take based on their own experience.
References
Arrow KJ (1971) Essays in the theory of risk bearing. NorthHolland, New York, pp 86–93
Williams CA Jr, Heins RM (1997) Risk management and insurance. McGraw-Hill Higher Education, Boston, pp 17–31, 41–47
Benink HA (1995) Coping with financial fragility and systemic risk. Kluwer Academic Publishers, London, pp 43–47
Delianedis G, Geske R (1998) Credit risk and risk-neutral default probabilities: information about
rating migrations and defaults. Paper presented at the Bank of England conference on credit
risk modeling and regulatory implications, London, 21–22 Sept 1998
Doherty NA (2000a) Integrated risk management: techniques and strategies for managing corporate risk. McGraw-Hill, New York, pp 65–67
Doherty NA (2000b) Integrated risk management: techniques and strategies for managing corporate risk. McGraw-Hill, New York, pp 134–167
Doherty NA (2000c) Integrated risk management. McGraw-Hill Companies, New York, pp 37–40
Editorial Board of Operational Research (2002) A brief introduction to operational research. Tsinghua University Press, Beijing, pp 75–79
Jarrow RA, Turnbull SM (1998) The intersection of market and credit risk. Paper presented at the
Bank of England Conference on Credit Risk Modeling and Regulatory Implications, London,
21–22 Sept 1998
Jiang Q (2002) Mathematic model. Tsinghua University Press, Beijing, pp 79–84
68 The Model Research on Risk Control 659
Jorion P (1997) Value at risk: the new benchmark for controlling market risk. The McGraw-Hill
Companies, Inc., New York, pp 122–126
Liu J (2008) An introduction to risk management. Tsinghua University Press, Beijing
Smith ML (1998) Risk management and insurance. McGraw-Hill Inc, New York, pp 106–118
Vincent TC, Jeryl M (1985) Risk analysis and risk management: an historical perspective. Risk
Anal 5(2):103–120
Ward SC (1999) Assessing and managing important risks. Int J Proj Manag 17:331–336
White L (2004) Management accountants and enterprise risk management. Strateg Financ
43:10–14
Chapter 69
TOPSIS Based Power-Saving Plans
Choice for Manufacturing Enterprise
69.1 Introduction
Since the beginning of the twenty-first century, the low-carbon economy has received increasing attention. For manufacturing enterprises, low-carbon development means that reducing energy consumption, improving utilization efficiency, and curtailing waste discharge are the foremost issues to be resolved.
Improving production efficiency and lowering equipment cost are the basic conditions for optimizing any enterprise's power-saving plans. Available optimization methods include the fuzzy evaluation model, the analytic hierarchy process, grey comprehensive evaluation, TOPSIS, etc. (Yue 2003); among them TOPSIS, a simple statistical method, offers high reliability and low error (Guo and Jin 2010). Taking an electronic manufacturing enterprise as a case, this paper puts forward three power-saving plans, selects among them with TOPSIS, and analyzes the effects of the optimal choice.
69.2 Methodology
Step 6: Obtain the optimal plan from Ci (Bin and Li-jie 2006)
The case enterprise is an electronic manufacturing factory with many high-energy-consumption testing processes; before packaging, products go through four testing steps. Its testing workshops use LCD equipment to display testing information. The enterprise faces the following problems:
LCD equipment occupies a large space. The information displayed by the LCD equipment is very simple, and much equipment is needed for the tests, which leads to large space occupation. The production lines are crowded and their layout is chaotic, forcing operating personnel to walk frequently and over long distances. These factors accelerate personnel fatigue and lower efficiency.
The testing equipment is costly. Statistics show 373 testing stations in the whole workshop; at 1,200 Yuan per station for LCD equipment, the total cost reaches 447,600 Yuan.
Energy consumption is high: the power fee of one workshop amounts to 96,300 Yuan per year.
Through brainstorming, three plans were proposed. The first is to replace the LCD equipment with CMC, a kind of apparatus for displaying scanning and testing information; its advantages are comprehensive information and good display effects, and its weaknesses are narrow viewing angles and high cost. The second plan is to substitute LED monitors for the LCD equipment; the advantages are good visualization, efficient information and low equipment cost, and the weakness is overly simple information display. The third plan is to replace the LCD equipment with LED indicator lamps of different colors; the advantages are direct observation, efficient information and low price, and the weaknesses are little information and visual clutter.
Through analysis and research, lowering equipment cost, diminishing space occupation, saving power, and improving efficiency are chosen as the indexes for plan selection. The steps for choosing the power-saving plan with TOPSIS are as follows:
Step 1: Set up decision indexes set C
664 D. Wang and K. Zheng
$$C = \begin{bmatrix} y_1 = \text{lowering cost} \\ y_2 = \text{improving efficiency} \\ y_3 = \text{occupying space} \\ y_4 = \text{saving power} \end{bmatrix}$$
Step 2: Set up plans set X
$$X = \begin{bmatrix} x_1 = \text{plan 1} \\ x_2 = \text{plan 2} \\ x_3 = \text{plan 3} \end{bmatrix}$$
Step 3: Decide the weight of indexes and set up the weight set W
$$W = \begin{bmatrix} W_1(\text{lowering cost}) = 0.25 \\ W_2(\text{improving efficiency}) = 0.15 \\ W_3(\text{occupying space}) = 0.35 \\ W_4(\text{saving power}) = 0.25 \end{bmatrix}$$
Step 4: Decide the actual value of indexes
Through market survey and field measurement, the price of CMC is 1,500 Yuan per station with a space occupation of 0.00179 m³; the price of an LED monitor is 200 Yuan per station with a space occupation of 0.00145 m³; and the price of an LED indicator is 150 Yuan each with a space occupation of 0.00062 m³. From these data, the efficiency-improving and power-saving scores of the three plans can be obtained, as shown in Table 69.1.
Step 5: Ascertain normalized decision matrix Y
The dimensions of the above values differ, so they must be transformed into normalized values with Eq. 69.1.
Step 7: Calculate the distances to the positive and negative ideal solutions with Eq. 69.3:
$$D_1^+ = 0.240,\; D_1^- = 0.082;\quad D_2^+ = 0.092,\; D_2^- = 0.236;\quad D_3^+ = 0.246,\; D_3^- = 0.099$$
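The plans are then ranked by the standard TOPSIS relative closeness $C_i = D_i^-/(D_i^+ + D_i^-)$; computing it from the distances above reproduces the choice of plan 2:

```python
# Sketch: relative closeness C_i = D_i^- / (D_i^+ + D_i^-) for the three
# plans, using the distances from Step 7 of the chapter.
d_plus  = {1: 0.240, 2: 0.092, 3: 0.246}   # distance to positive ideal
d_minus = {1: 0.082, 2: 0.236, 3: 0.099}   # distance to negative ideal

closeness = {i: d_minus[i] / (d_plus[i] + d_minus[i]) for i in (1, 2, 3)}
best_plan = max(closeness, key=closeness.get)
# best_plan == 2: plan 2 (LED monitors) is the optimal choice,
# matching the plan the enterprise actually adopted.
```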
To date, the case enterprise has adopted plan 2 to display testing information, and the performance is fairly good. Before the improvement, the total equipment cost was 447,600 Yuan and the power fee of one workshop was 96,300 Yuan per year. After implementing plan 2, the total cost is 74,600 Yuan and the power fee is 12,000 Yuan per year: a cost saving of 373,000 Yuan and a reduction of 84,300 Yuan per year in the power fee.
At the same time, substituting LED monitors for the LCD equipment puts the production line in order, improves personnel morale, and allows workers to see the information clearly without much movement. Moreover, through the speech function set on the equipment, code-type information can be identified not only visually but also by sound, which facilitates workers' operation. The replaced LCD equipment can be reused by other departments and production lines. The goals of lowering cost and saving power are thus attained.
69.5 Discussion
Since the early twenty-first century, attention has focused on low carbon, which emphasizes protecting the environment. For the public, it means leading a simple, frugal life. For enterprises, it means eliminating redundant emissions, lowering power use, comprehensively reusing materials formerly treated as waste, and recycling materials such as packages and bottles.
Unfortunately, in the early stage of the low-carbon economy, requirements fall mainly on the public, while those on enterprises seem neglected. Some will say that environmental protection has been taken up by many governments and that the relevant institutions have existed since the 1960s. But an obvious fact must be laid on the table: compared with the public, the amount of power used by enterprises is huge. Thus, enterprises should be the priority of the low-carbon economy.
To meet the requirements of reducing, reusing and recycling, enterprises should strengthen their consciousness and self-discipline beyond merely abiding by external institutions; that is, they should adjust their operation strategies to cover reducing, reusing and recycling, and adopt scientific methods to do so.
This paper takes a specific case to explore how to lower power use with TOPSIS. Faced with the pressure of earning profit and growing, however, quite a few enterprises give little consideration to low carbon. This seems reasonable on the surface, but deeper exploration shows that an enterprise complying with low-carbon demands usually bears high costs, a burden for its development, while often obtaining nothing in return. Although some governments have enforced stimulus measures for low carbon, these measures are usually treated as temporary ones, and from a long-term perspective they can damage enterprises' operation: some enterprises come to rely heavily on government allowances, and some use the allowances to compete with others, causing unfairness and even international trade disputes such as anti-dumping and anti-subsidy cases.
So, for enterprises, the low-carbon economy needs more innovation in technology, operation and management. Technology innovation means inventing and adopting techniques that are efficient in low carbon. Operation innovation means that enterprises must change their visions; from present research and practice, the low-carbon supply chain appears to be an effective strategy, whose application equalizes the costs of reusing, reducing and recycling across the supply chain. This strategy needs supporting means such as low-carbon contracts for all members on the chain and interest collaboration among members. On the basis of operation innovation, management must innovate in aspects such as information management, outsourcing, supplier management, and channel management.
In a word, this paper probes a path to low-carbon development through one enterprise's power saving. To realize the low-carbon economy comprehensively, the field must extend from the public to all industries, with enterprises playing a relatively important role in the process; its realization requires a full range of innovations.
References
Barbarosoglu G (2000) A decision support model for customer value assessment and supply quota allocation. Prod Plan Control 11(6):608–616
Bin S, Li-jie W (2006) Study of method for determining weight based on rough set theory.
Comput Eng Appl 29:216–217
Feng K, Liu H (2005) A new Fuzzy TOPSIS Algorithm for MADM based on decision maker’s
subjective preference. In: Proceedings 24th Chinese control conference, Guangzhou, P.R.
China, pp 1697–1701
Guo X, Jin L (2010) Grey incidence TOPSIS for multiple attribute decision making (in Chinese).
Sci Technol Manage CHN 12(5):49–51
Liao Z, Rittscher J (2007) A multi-objective supplier selection model under stochastic demand
conditions. Int J Prod Econ 105(1):150–159
Lin M-C, Wang C-C, Chen M-S, Chang CA (2008) Using AHP and TOPSIS approaches in
customer-driven product design process. Comput Ind 59(1):17–31
Pawlak Z (1982) Rough sets, theoretical aspects of reasoning about data. Int J Comput Inform Sci
11:314–356
Verma R, Pullman ME (1998) An analysis of the supplier selection process. Omega 26(6):739–750
Xu K (2010) TOPSIS method based on pairwise comparisons (in Chinese). Math Pract Theory
CHN 40(5):110–114
Yue C (2003) Theory and methods for decision (in Chinese). Science Press, Beijing, pp 133–140.
Chapter 70
Research on Central Control DDSS
System for Fund Portfolio Management
Abstract In order to satisfy the demand of fund portfolio management and based
on the feature of balancing in the central control and distribution decision, a
central control DDSS is schemed based on the systemization of decision-making
theory. The scheme provides a balance point of a dynamic management, and
makes the fund investment more controllable and flexible.
Keywords Portfolio management · Systemization of decision-making · Distributed decision support system (DDSS)
70.1 Introduction
C. Hu (&) E. Qi
Management School of Tianjin University, 300072 Tianjin, China
e-mail: [email protected]
70.2 Theory
70.3 Model
In the classic DDSS model, each subsystem has its own database, model base and method base. Data resources and decisions are exchanged among nodes through the network information system, and the combined result of processing at all nodes yields the final decision. This design emphasizes exchanges between nodes, independent decision-making at each node, and the combination of all the independent nodes' decisions.
However, such a systemized decision-making structure lacks control over decision-making in the system. As mentioned earlier, in order to
systemize a DDSS, the DDSS model should also include a common database, a common model base, a common method base, and a common knowledge base. These four bases constrain all subsystems in decision making. Mapping portfolio management practice onto the DDSS: the common database stores the fund manager teams' goals and resource allocations; the method base stores the portfolio managers' investment combinations and risk-control policies; the model base holds asset-valuation models and risk-assessment formulas; and the knowledge base stores the decision results of the various subsystems and adjusts the constraints and algorithms of the other three bases.
A DDSS model with the four libraries as mentioned above is shown in
Fig. 70.2:
In the above DDSS model, the four common bases connect to the DDSS through the network information system; every independent decision-making subsystem draws on all four common bases and produces results under the given constraints. In general, the constraints of the four common bases are input through the man-machine interface; in some special cases, they can come from an intelligent decision support system with specialized control indicators.
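As a rough illustration of this structure (the class names, attributes and numbers below are ours, not from the paper), the common bases can be modeled as shared state that every decision subsystem consults before committing a decision, so that central constraints bound the independent decisions:

```python
# Minimal sketch of a central-control DDSS: common bases constrain
# independent subsystems. All names and figures are illustrative only.

class CommonBases:
    """Stands in for the shared database/model base/method base/knowledge base."""
    def __init__(self, total_fund, per_team_limit):
        self.total_fund = total_fund          # overall resource constraint
        self.per_team_limit = per_team_limit  # operational constraint
        self.knowledge = []                   # decisions fed back by subsystems

class ManagerSubsystem:
    """One fund-manager team's independent decision support subsystem."""
    def __init__(self, name, requested):
        self.name, self.requested = name, requested

    def decide(self, bases, remaining):
        # Each team decides independently, but within the common constraints.
        allocation = min(self.requested, bases.per_team_limit, remaining)
        bases.knowledge.append((self.name, allocation))  # feed back the result
        return allocation

def run_ddss(bases, teams):
    remaining, allocations = bases.total_fund, {}
    for team in teams:
        allocations[team.name] = team.decide(bases, remaining)
        remaining -= allocations[team.name]
    return allocations

bases = CommonBases(total_fund=100.0, per_team_limit=40.0)
teams = [ManagerSubsystem("A", 50.0), ManagerSubsystem("B", 30.0),
         ManagerSubsystem("C", 45.0)]
result = run_ddss(bases, teams)
# Central control guarantees the total allocation never exceeds the fund
# and no team exceeds its operational limit.
```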
70.4 Application
Based on the above DDSS structure, a DDSS model with multi-team management and central control is developed, as shown in Fig. 70.3.
In this framework, the portfolio management committee inputs the investment funds, investment objectives, portfolio principles and other fund-related data into the DDSS. The decision support system decomposes all managers' constraints into the four bases, where they become indicators and constraints in each manager's decision-making process. At the same time, the knowledge base, model base and database provide the supporting environment for the subsystems by supplying target parameters, constraints, and general data, which determine the overall system control of the portfolio. Each fund-manager team has its own independent decision support system as a subsystem of the DDSS; depending on the fund manager's command of intelligent tools, the subsystem can be structured as a single decision maker's DSS or as an intelligent DSS. Fund managers then make decisions under these constraints and environment with their own judgment.
70.5 Conclusion
In this paper, to satisfy the demands of fund portfolio management and based on the need to balance central control against distributed decision, a systemized central-control DDSS scheme for portfolio management is proposed. By introducing the systemization of decision-making theory and model, and adding the overall resource constraints and operational constraints of a central database onto the classic DDSS model, the resulting DDSS is more controllable and flexible for dynamic portfolio management in balancing central control with distributed decision.
Author Biographies
HU Cheng (1954) Male, Chairman of the Board of Hong Kong Licheng Capital Group, is engaged in the investment banking industry. Room 1607, No. 43 Queen's Road East, Hong Kong. Tel (852) 68762311
QI Er-shi (1954) Male, Professor and doctoral supervisor, Tianjin University, is engaged in research in various fields of industrial engineering. Management School, Tianjin University, Tianjin 300072, China. Tel (022) 27405100
Chapter 71
The Evaluation and Application
of Residential Structure System Based
on Matter-Element Model
71.1 Introduction
Current residential structure systems include the cast-in-situ concrete structure, the steel structure, and the assembled (precast) concrete structure, each with advantages and disadvantages (Lei and Chen 2010). For example, the cast-in-situ concrete structure offers safety and durability but involves a complex process and high energy consumption; the steel structure provides larger space (Bi 2008) and a shorter construction period but at higher cost, with poor fire and corrosion resistance; and the assembled concrete structure is characterized by a short construction period,
S. Yu (&)
School of Management,
Xi’an University of Architecture and Technology, Xi’an 710055, China
e-mail: [email protected]
X. Liu
Department of Technology, The Engineering Co. Ltd of China Construction,
Hefei 230000, China
e-mail: [email protected]
energy efficiency, and good quality, but a single structural form and poor seismic performance (Jia et al. 2010). In view of the above, it is extremely important to make an overall technical and economic assessment of residential structure systems (Zhang 2010a).
Faced with these different residential structure systems, how to carry out an effective evaluation becomes a key issue (Zhang 2010b). Until now, the evaluation system has remained at a primary stage, using only construction cost and energy consumption as main indexes, without analyzing environmental and social influence or the differing technical and economic performance perceived by different social groups (Mi et al. 2010). To solve this problem, a comprehensive evaluation system is called for.
This paper applies matter-element theory, together with qualitative and quantitative research methods, to evaluate the current residential structure systems scientifically, synthetically and reasonably, hoping to give guidance to governments and real estate companies, promote the most effective residential structures, and supply a scientific basis for the development of residential industrialization (Huang and Zhu 2009).
Take a six-story residential building as an example: its total length is 65 m, total width 12 m, total height 18 m, floor area 780 m² per story, and residential unit area 97.5 m²; the seismic intensity is 7° and the site category is 3. Three types of residential structures are chosen; the supporting data are recorded in Table 71.1.
Table 71.2 Weights of each index Cij relative to the index system T
First grade indexes Weight Second grade indexes Weight
Applicability C1 0.18 Reconstruction C11 0.08
Flat surface layout C12 0.06
Indoor and outdoor traffic condition C13 0.04
Economy C2 0.26 Construction cost C21 0.16
Use-cost C22 0.10
Safety durability C3 0.36 Durability of construction C31 0.16
Durability of decoration C32 0.12
Building Fire Protection C33 0.06
Control of indoor pollution C34 0.02
Sustainable development C4 0.2 Building energy saving C41 0.11
Green building materials C42 0.04
Rationality of the water resources utilization C43 0.05
1. Determining the matter-element matrix of joint region, classics region and the
matrix to be evaluated.
The matter-element is a unit describing an object, consisting of the name of the object N, a characteristic c, and the value v of that characteristic. N, c and v are called the three key elements of the matter-element (Baldwin and Kim 1997).
The normalized form of a matter-element is R = [N, c, v]. Usually an object has more than one characteristic. Given n characteristics of an object, c1, c2, …, cn, and the corresponding values v1, v2, …, vn, the description is called an n-dimension matter-element, recorded as:
$$R = \begin{bmatrix} N & c_1 & v_1 \\ & c_2 & v_2 \\ & \vdots & \vdots \\ & c_n & v_n \end{bmatrix} = \begin{bmatrix} R_1 \\ R_2 \\ \vdots \\ R_n \end{bmatrix}$$
Here $w_{ij}$ is the weight coefficient of each characteristic $c_{ij}$. The level of the subject matter is evaluated by
$$k_{j_0}(p_0) = \max_j k_j(p_0) \quad (j = 1, 2, \ldots, m).$$
$$R_2 = (N_2, c_i, x_{2i}) = \begin{bmatrix} \text{good} & \text{reconstruction } C_{11} & \langle 0.5, 0.75 \rangle \\ & \text{flat surface layout } C_{12} & \langle 0.5, 0.75 \rangle \\ & \vdots & \vdots \\ & \text{rationality of the water resources utilization } C_{43} & \langle 0.5, 0.75 \rangle \end{bmatrix}$$

$$R_3 = (N_3, c_i, x_{3i}) = \begin{bmatrix} \text{ordinary} & \text{reconstruction } C_{11} & \langle 0.25, 0.5 \rangle \\ & \text{flat surface layout } C_{12} & \langle 0.25, 0.5 \rangle \\ & \vdots & \vdots \\ & \text{rationality of the water resources utilization } C_{43} & \langle 0.25, 0.5 \rangle \end{bmatrix}$$

$$R_4 = (N_4, c_i, x_{4i}) = \begin{bmatrix} \text{bad} & \text{reconstruction } C_{11} & \langle 0, 0.25 \rangle \\ & \text{flat surface layout } C_{12} & \langle 0, 0.25 \rangle \\ & \vdots & \vdots \\ & \text{rationality of the water resources utilization } C_{43} & \langle 0, 0.25 \rangle \end{bmatrix}$$
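Eqs. (71.1)–(71.3) are not reproduced in this excerpt; the sketch below uses the standard matter-element (extension) correlation function, which they conventionally denote, over the level intervals above. The "excellent" region ⟨0.75, 1⟩ and the joint region ⟨0, 1⟩ are our assumptions, since R1 is not shown:

```python
# Sketch of the standard matter-element correlation degree.
# Assumptions: "excellent" classic region <0.75, 1> (R1 is not shown in
# the excerpt) and joint region <0, 1>.

def rho(v, a, b):
    """Distance of point v from interval <a, b> (negative when inside)."""
    return abs(v - (a + b) / 2.0) - (b - a) / 2.0

def correlation(v, classic, joint=(0.0, 1.0)):
    """K(v) for one level's classic region, given the joint region."""
    r0, r1 = rho(v, *classic), rho(v, *joint)
    if r0 <= 0:                       # v lies inside the classic region
        return -r0 / (classic[1] - classic[0])
    return r0 / (r1 - r0)             # v lies outside the classic region

levels = {"excellent": (0.75, 1.0), "good": (0.5, 0.75),
          "ordinary": (0.25, 0.5), "bad": (0.0, 0.25)}

v = 0.8                               # a normalized index value
K = {name: correlation(v, iv) for name, iv in levels.items()}
grade = max(K, key=K.get)             # the level with maximal K(v)
```

For v = 0.8 the correlation degree is positive only for the "excellent" interval, so the maximal-K rule assigns that level, mirroring how the chapter grades each structure.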
(c) Determining the matter-element matrix to be evaluated
The cast-in-situ concrete structure, steel structure and assembled concrete structure are selected as research objects; through investigation and analysis, normalized data for the 12 evaluation indexes are obtained, giving the matter-element matrices as follows:
3. Determining the correlation function and the order of evaluation (Table 71.4).
Taking the cast-in-situ concrete structure as an example, the correlation degrees of indexes C11–C43 at each level can be derived from Eqs. (71.1), (71.2) and (71.3), and Eq. (71.4) gives the comprehensive correlation degree of the subject matter P0 at each level; see Table 71.5.
Repeating the same steps yields the comprehensive correlation degrees of all three structures at each level; see Table 71.6.
For the cast-in-situ concrete structure, $k_2(p) = \max_j k_j(p),\; j \in \{1, 2, 3, 4\}$, meaning it belongs to the level "good". For the steel structure, $k_1(p) = \max_j k_j(p)$, meaning it belongs to the level "excellent". For the assembled concrete structure, $k_1(p) = \max_j k_j(p)$, meaning it also belongs to the level "excellent".
In order to compare the two structures further, the correlation degrees are normalized and evaluated a second time. If $k_{j_0}(p) = \max_j k_j(p),\; j \in \{1, 2, \ldots, m\}$, then $p$ belongs to level $j_0$ (Li et al. 2007), where

$$\bar{K}_j(p) = \frac{K_j(p) - \min_j K_j(p)}{\max_j K_j(p) - \min_j K_j(p)}, \qquad j^* = \sum_{j=1}^{s} j\,\bar{K}_j(p) \Big/ \sum_{j=1}^{s} \bar{K}_j(p) \qquad (71.5)$$
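Eq. (71.5) can be checked numerically; the K_j values below are illustrative, not the actual Table 71.6 values, which this excerpt omits:

```python
# Numerical sketch of the second evaluation, Eq. (71.5): normalize the
# correlation degrees, then form the level value j*.  The K_j inputs are
# illustrative stand-ins for the chapter's Table 71.6 values.
K = [-0.20, 0.20, -0.60, -0.73]        # K_j(p) for levels j = 1..4

k_min, k_max = min(K), max(K)
K_bar = [(k - k_min) / (k_max - k_min) for k in K]   # normalized degrees

# Weighted mean of the level indices, weighted by normalized degrees.
j_star = sum((j + 1) * kb for j, kb in enumerate(K_bar)) / sum(K_bar)
# Here j* lies between 1 and 2: the object sits between the two best levels.
```

A fractional j* is what makes the second evaluation useful: two structures that both grade "excellent" by the maximal-K rule can still be ordered by their level values.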
This paper first calculates the weights of the indexes by AHP-LSDM, then makes a comprehensive assessment of the three structure systems with the matter-element model, and finally finds that the assembled concrete structure is the better choice, given its advantages and its promotion of residential industrialization. The research helps reduce blind investment and promotes the research and development of residential construction systems.
References
Baldwin CY, Kim BC (1997) Managing in an age of modularity. Harvard Bus Rev 75(5):84–93
Baldwin CY, Kim BC (2000) Design rules: the power of modularity. Cambridge MIT Press,
Cambridge
Bi J (2008) Research on residential industrialization in China. Chongqing University, Chongqing
Guo F (2006) Application of the CS theory in real estate industry. Sci Technol Manag
2006:15–17
Huang Y, Zhu J (2009) Based on the matter-element model of the urban transportation evaluation
and empirical research. Syst Eng 27(2):79–84
Jia H, Wu X, Li H (2010) Simple discuss project management information. Proj Manag Technol
2010(8):86–89
Lei Z, Chen W (2010) Applications research in performance evaluation of project management
information based on matter-element. Proj Manag Technol 2010(8):86–89
Li Y (2008) Customer satisfaction strategy in the application of residential. Bus Res 2008(9):70–72
Li P, Zhang X, Zhang J (2007) Empirical study on the driving factors of real estate industry customer satisfaction. J Hunan Univ 21(6):50–54
Liu S (2010) Grey system theory and application. Science Press, Beijing
Mi S, Jia H, Wu X, Li H (2010) Simple discuss project management information. Proj Manag
Technol 2010(8):86–89
Porter ME (1985) Competitive advantages: creating and sustaining superior performance. The
Free Press, New York, pp 33–61
Porter M (1990) The competitive advantage of nations. Harvard Bus Rev 68(2):74
Zhang F (2010a) Project management information construction problems and countermeasures.
Theor Invest 13(5):231–238
Zhang Z (2010b) Project management information to the development trend of the research.
China Build Inf 16(14):48–51
Chapter 72
Logistic Financial Crisis Early-Warning
Model Subjoining Nonfinancial Indexes
for Listed Companies
Abstract The occurrence of financial crisis is related not only to financial factors; many nonfinancial factors also carry important information about its occurrence. If only financial factors are considered, much useful information is lost, the early-warning capacity of the model is reduced, and the deeper causes of financial crisis cannot be learned. It is therefore imperative to bring nonfinancial indexes into financial crisis early-warning research and build a more effective and complete early-warning model. This paper introduces not only financial indexes but also nonfinancial indexes covering enterprise ownership structure, corporate governance, major items, etc., carries out a preliminary identification and screening of the study sample, paired sample and early-warning indicators, and then sets up an enterprise financial crisis early-warning model to complete the warning index system.
S. Ding (&)
Beijing Polytechnic, Beijing, China
e-mail: [email protected]
Y. Hou
Department of Economics and Management, NCUT, Beijing, China
e-mail: [email protected]
P. Hou
Department of Foreign Languages, Xi’an Jiaotong University, Xi’an, China
e-mail: [email protected]
72.1 Introduction
This paper takes as samples A-share listed companies in China's securities markets that received ST (special treatment) for operational reasons. In total, 87 companies on the Shanghai and Shenzhen Stock Exchanges are chosen: 25 that first received ST in 2007, 34 that received ST in 2008, and 28 that received ST in 2009. The financial and nonfinancial index information of these companies in the three years before ST is used to forecast whether they are financial-crisis companies.
To find the early-warning indexes that distinguish ST companies from non-ST companies, this paper also chooses 87 non-ST companies, at a ratio of 1:1, as paired samples. To guarantee consistency and comparability with the original sample data, the paired samples come from the same or similar industries and have similar asset sizes, and the same last three years' information is used as the study object. Sample data come from the Wind, CSMAR and RESSET databases.
This paper classifies early-warning indexes into financial ones and nonfinancial ones. On the basis of previous research, 31 indexes are chosen according to the principles of sensitivity, accuracy, representativeness and comprehensiveness. Among them, 16 are financial indexes, selected to cover debt-paying ability, operating capacity, earning power and development capacity; the other 15 are nonfinancial indexes, selected to cover shareholding structure, corporate governance, significant matters and other factors. See Table 72.1.
Table 72.1 Early-warning indexes (development-capacity excerpt)
- Increasing rate of main business income X13 = (main business income of this year − main business income of last year)/main business income of last year
- Rate of capital accumulation X14 = growth of owner's equities this year/owner's equities at the beginning of the year
- Increasing rate of net assets X15 = (net assets of this period − net assets of last period)/net assets of last period
- Increasing rate of total assets X16 = (total assets of this period − total assets of last period)/total assets of last period
This paper selects the Mann–Whitney test, an effective nonparametric alternative to parametric tests. The U-test equations (Liao et al. 2008) are as follows:
$$U_{xy} = mn + \frac{m(m+1)}{2} - \sum_{i=1}^{m} R_i, \qquad U_{yx} = mn + \frac{n(n+1)}{2} - \sum_{j=1}^{n} R_j$$
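Using made-up sample values (the actual index data are not reproduced here), the rank-sum formulas can be implemented and checked directly; a built-in consistency check is that the two statistics must sum to mn:

```python
# Sketch: Mann-Whitney U statistics from the rank-sum formulas above.
# The two samples are made-up illustrative data.

def ranks(values):
    """Average ranks (1-based), with ties sharing the mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    rank = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0          # mean of ranks i+1 .. j+1
        for k in range(i, j + 1):
            rank[order[k]] = avg
        i = j + 1
    return rank

def mann_whitney(x, y):
    m, n = len(x), len(y)
    r = ranks(list(x) + list(y))           # joint ranking of both samples
    Rx, Ry = sum(r[:m]), sum(r[m:])
    U_xy = m * n + m * (m + 1) / 2.0 - Rx
    U_yx = m * n + n * (n + 1) / 2.0 - Ry
    return U_xy, U_yx

x = [1.2, 3.4, 2.2, 5.0, 4.1]              # e.g. an index for ST companies
y = [2.0, 6.1, 5.5, 7.2]                   # the same index for non-ST pairs
U_xy, U_yx = mann_whitney(x, y)
assert U_xy + U_yx == len(x) * len(y)      # identity implied by the formulas
```

In practice the smaller of the two statistics is compared against the critical value (or its normal approximation) at the chosen significance level.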
The test results are as follows: the p-values of X1, X2, X4, X8, X9, X10, X11, X12, X14, X15, Y4, Y8, Y10, Y11, Y13, Y14 and Y15, 17 in all, fall below the significance level, while the other 7 indexes do not pass the significance test. In total, 21 indexes pass the significance test.
We take a KMO test before factor analysis to determine whether the financial
ratios involved are suitable for it (Table 72.2).
The KMO test coefficient is 0.729, indicating high correlation among the indexes, so they are suitable for factor analysis. The Bartlett chi-square value of 744.202 with a P value of 0.000 < 0.05 shows that the 12 financial indexes are not independent and that certain relationships exist among them.
Twelve financial indexes pass the significance tests above: X1, X2, X3, X4, X8, X9, X10, X11, X12, X14, X15. Factor analysis on these indexes shows that the characteristic values (eigenvalues) of the first 4 common factors exceed 1, with an accumulated contribution rate of 84.819 %; they are recorded as F1, F2, F3, F4. To interpret them reasonably, the correlation coefficients between the 4 common factors and the 12 initial financial indexes are needed, so the paper applies the varimax orthogonal rotation method and obtains the factor loading matrix as follows:
From the rotated factor loading matrix, the 4 factors each load highly on different index variables. According to the factor loading distribution, a further analysis can be made (Table 72.3):
(1) Index factor load capacity of F1 on X8, X9, X10, X11, is far greater than that of
other indexes. It shows the company’s operating profit level and the ability.
(2) Index factor load capacity of F2 on X1, X2, X3, X4, is far greater than that of
other indexes. It shows the company’s solvency.
(3) Index factor load capacity of F3 on X12, X15 is far greater than that of other
indexes. It shows the company’s profitability and growth ability.
(4) Index factor load capacity of F4 on X13, X14, is far greater than that of other
indexes. It shows the company’s ability to grow.
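The extraction-plus-varimax step can be sketched with scikit-learn's FactorAnalysis; the simulated data below stand in for the 12 standardized financial indexes, with a planted 4/4/2/2 loading pattern mirroring the grouping described above:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Simulated stand-in for the 12 standardized financial indexes
rng = np.random.default_rng(1)
f = rng.normal(size=(500, 4))                       # four latent factors
load = rng.normal(scale=0.1, size=(4, 12))          # small cross-loadings
load[0, :4] = load[1, 4:8] = load[2, 8:10] = load[3, 10:] = 1.0
X = f @ load + 0.2 * rng.normal(size=(500, 12))

fa = FactorAnalysis(n_components=4, rotation="varimax").fit(X)
loadings = fa.components_.T                         # 12 x 4 rotated loading matrix
# After rotation each index should load heavily on exactly one factor
dominant = np.abs(loadings).argmax(axis=1)
print(dominant)
```

Reading the dominant factor per index recovers the planted 4/4/2/2 grouping, which is exactly how the chapter assigns F1–F4 their interpretations.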
Regression analysis is conducted with the four common factors F1, F2, F3, F4 obtained by factor analysis and the nine nonfinancial index variables Y1, Y2, Y4, Y8, Y10, Y11, Y13, Y14, Y15, which have passed the parametric T test and the nonparametric U test. Through the forward stepwise variable selection method, the synthetical early-warning model based on both financial and nonfinancial indexes is constructed. The regression results are presented in Table 72.5.
The table above illustrates that the coefficient of every explanatory variable is significant at α = 0.05, which implies that the model fits well. Through the
Table 72.4 The logistic regression results based on financial indexes alone
Variables in equation
B S.E. Wald df Sig. Exp (B)
Step 1a F1 -4.261 1.069 15.884 1 0.000 0.014
F2 -0.748 0.343 4.749 1 0.029 0.473
F3 -0.400 0.369 1.172 1 0.079 0.670
F4 -0.687 0.624 1.212 1 0.071 0.503
Constant -0.730 0.363 4.042 1 0.044 0.482
a Variables entered in step 1: F1, F2, F3, F4
Table 72.5 The regression results of the Logistic synthetical model injecting nonfinancial indexes
Variables in equation
B S.E. Wald df Sig. Exp (B)
Step 3a F1 -3.219 1.094 7.390 1 0.003 0.039
F2 -2.114 1.168 3.908 1 0.023 0.121
F3 -2.103 1.612 6.367 1 0.014 0.122
F4 -1.601 1.701 9.948 1 0.005 0.202
Shareholding Proportion of the Controlling Shareholder Y1 3.437 1.806 6.312 1 0.006 31.094
coefficients of the variables in the chart above, the Logistic financial crisis
synthetical early-warning model injecting nonfinancial indexes is obtained:
P = 1 / (1 + exp{-[-1.130 - 3.219F1 - 2.114F2 - 2.103F3 - 1.601F4 + 3.437Y1 - 2.108Y8 + 3.262Y10 + 3.285Y11 + 3.923Y14 - 2.888Y15]})
From the above synthetical early-warning model, one can see a positive correlation between the nonfinancial index variable Shareholding Proportion of the Controlling Shareholder (Y1) and the probability P of financial crisis: the higher the shareholding proportion of the controlling shareholder, the greater the probability of financial crisis. There is a negative correlation between the nonfinancial CR_5 index (Y4) and P, which indicates that the higher the shareholding proportion of the first five substantial shareholders and the higher the ownership concentration, the lower the probability of financial crisis. Meanwhile, if the company has a violation record, a lawsuit, or an attribution and abbreviation alteration, the probability of financial crisis will be even greater.
Since the ratio between the original samples and the paired samples is 1:1, 1 represents companies with financial crisis and 0 represents companies without financial crisis. P = 0.5 is taken as the discriminating cut point: if P > 0.5, the company is classified as being in financial crisis; if P < 0.5, it is classified as having a normal financial condition.
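Putting the fitted coefficients (signs read from Tables 72.4–72.5 and the correlation discussion) together with the P = 0.5 rule gives a simple scoring routine; the sample company's factor scores and indicator values below are hypothetical:

```python
import math

# Coefficients of the synthetical model above (constant term first)
COEF = {"const": -1.130, "F1": -3.219, "F2": -2.114, "F3": -2.103,
        "F4": -1.601, "Y1": 3.437, "Y8": -2.108, "Y10": 3.262,
        "Y11": 3.285, "Y14": 3.923, "Y15": -2.888}

def crisis_probability(x):
    """P = 1 / (1 + exp(-z)) with z the model's linear predictor."""
    z = COEF["const"] + sum(COEF[k] * v for k, v in x.items())
    return 1.0 / (1.0 + math.exp(-z))

def classify(x, cutoff=0.5):
    """1 = financial crisis (ST), 0 = normal, at the P = 0.5 cut point."""
    return int(crisis_probability(x) > cutoff)

# Hypothetical company: factor scores plus nonfinancial indicators
sample = {"F1": -0.8, "F2": -0.2, "F3": 0.1, "F4": 0.0,
          "Y1": 0.6, "Y8": 0, "Y10": 1, "Y11": 0, "Y14": 1, "Y15": 0}
print(crisis_probability(sample), classify(sample))
```

A poor factor-score profile combined with a high controlling-shareholder stake pushes P toward 1, so the sketch flags this hypothetical company as ST.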
Input the index variable data of the 86 companies in the testing samples,
consisting of 43 ST listed companies and 43 non-ST listed companies, into the
early-warning model based on financial indexes alone to test the model’s veracity.
Testing results are illustrated in Table 72.6.
The table shows that, taking P = 0.5 as the predicted discriminating point and the actual 43 ST listed companies and 43 non-ST ones as testing samples, the constructed early-warning model based only on financial indexes accurately discriminates 32 ST companies and 35 non-ST companies. In other words, its prediction accuracy rates for the ST and non-ST companies are 74.42 and 81.39 % respectively; the average is 77.91 %.
72 Logistic Financial Crisis Early-Warning Model 695
Table 72.6 Testing results of the Logistic model based on financial indexes alone
Classification tablea
Observed value Predicted value
Group Accuracy rate Misjudgment rate
(%) (%)
ST Non-ST
company company
Group ST company 32 11 74.42 25.58
Non-ST 8 35 81.39 18.61
company
Total percentage 77.91 22.09
a Discriminant point: 0.500
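The accuracy rates follow directly from the confusion counts in Table 72.6; a quick arithmetic check (note that 35/43 evaluates to 81.40 % when rounded):

```python
# Confusion counts from Table 72.6 (financial-indexes-only model)
tp, fn = 32, 11   # ST companies: correctly / wrongly classified
tn, fp = 35, 8    # non-ST companies: correctly / wrongly classified

st_accuracy = 100 * tp / (tp + fn)
non_st_accuracy = 100 * tn / (tn + fp)
overall = 100 * (tp + tn) / (tp + fn + tn + fp)
print(f"{st_accuracy:.2f} {non_st_accuracy:.2f} {overall:.2f}")
```

The misjudgment rates in the table are simply the complements of these percentages.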
Input the index variable data of the 86 companies in the testing samples, consisting
of 43 ST listed companies and 43 non-ST listed companies, into the Logistic
synthetical early-warning model based both on financial and nonfinancial indexes
to test the model’s veracity and compare the testing results of the two models.
Testing results are illustrated in Table 72.7.
From the above chart one can see that the constructed Logistic synthetical
early-warning model injecting nonfinancial indexes is able to discriminate accu-
rately 35 ST companies and 37 non-ST companies, taking P = 0.5 as predicted
discriminating point and the actual 43 ST listed companies and 43 non-ST ones as
testing samples. Thus, the accuracy rates of the Logistic synthetical early-warning
model injecting nonfinancial indexes to the prediction for the ST companies and
non-ST ones respectively are 81.39 and 86.05 %. The average predicting
percentage is 83.72 %.
Table 72.7 Testing results of the Logistic synthetical early-warning model injecting nonfinan-
cial indexes
Classification tablea
Observed value Predicted value
Group Accuracy rate Misjudgment rate
(%) (%)
ST Non-ST
company company
Group ST company 35 8 81.39 18.61
Non-ST 6 37 86.05 13.95
company
Total percentage 83.72 16.28
a Discriminant point: 0.500
Comparing the testing results of the two models shows that after the nonfinancial index variables are introduced, the model's accuracy rate increases by 5.81 %, which demonstrates that drawing nonfinancial indexes into the study of financial crisis early-warning effectively enhances the model's predictive accuracy.
References
Chen J (1999) Empirical analysis of listed company financial deterioration prediction. Acc Res
6:31–38
Deng X, Wang Z (2006) Financial distress prediction from the nonfinancial perspective. Manag
Sci (3):71–80
Gui M, Wu S (2007) Financial distress model study of nonfinancial perspective. Financ Econ
(22):132–133
Liao Y, Zhang L, Liu L (2008) Empirical study of financial early warning based on nonfinancial
and financial information. Mod Manag Sci 4:57–59
Lv J (2006) An empirical study of financial distress and symptom analysis based on nonfinancial
indicators—from manufacturing listed companies. J Grad Sch Chin Acad Soc Sci 2:52–58
Tan Y, Zhang L (2005) Research of bankruptcy prediction method subjoining nonfinancial
variables. Sci Technol Ind 5(10):31–34
Wan X, Wang Y (2007) Fuzzy warning model research for financial crisis of enterprise based on
nonfinancial index. J Manag 4(2):195–200
Wang K, Ji M (2006) Company in deficit finance early warning study based on the financial and
nonfinancial index. J Financ Econ 32(7):63–72
Wu S (2001) Financial distress prediction model research of our listed companies. Econ Res
6:46–55
Yang H (2007) Nonfinancial index application research in financial crisis early warning model.
Acc Commun (Compr Ed) 5:31–32
Yang Y (2008) Review and evaluation of the selection of nonfinancial index in early warning
research. Acc Commun 6:100–101
Zhang M, Cheng T (2004) Audit opinion’s information content in early warning. Acc Commun
12:47–48
Chapter 73
Evaluation Research on Logistics
Development of the Yangtze River Port
Based on the Principal Component
Analysis
Gao Fei
Abstract This article analyzes the significance of port logistics as well as factors
influencing the development of Yangtze River port logistics. On this basis, a
scientific evaluation system of the Yangtze River port logistics development and a
principal component analysis model of the port logistics development level
evaluation are established. Taking the port group along the Yangtze River in Anhui province as an example, this article verifies the validity of the river port logistics development level evaluation system.
Keywords Ports along the Yangtze River · Port logistics · Evaluation system · Principal component analysis
At present, research on port logistics evaluation has become one of the focuses of theoretical study. Many scholars have done a lot of work in this field, such as
Cao Weidong, Cao Wave, Wang Ling, Wei Ran etc. Some use a specific object for
the evaluation and analysis of the port logistics system. However, most researches
focus on the application of modeling methods while paying little attention to the
evaluation index system. In addition, an inaccurate understanding of the port logistics' concept leads to a one-sided evaluation index system, which to some extent
affects the evaluation result. Combined with previous research, this paper attempts
to discuss the connotation of the port logistics, build a relatively reasonable river
port logistics evaluation index system on the basis of analyzing influencing factors
of port logistics’ developmental level, and conduct a case study of ports along the
Yangtze river through evaluation model by applying the principal component
method (Xu 2004).
G. Fei (&)
Anqing Vocational and Technical College, Anqing 246003 Anhui, China
e-mail: [email protected]
Port logistics means that central port cities, relying on their own port advantages and an advanced hardware and software environment, strengthen their radiation capability in logistics activities around the port and highlight the port's functions of cargo consolidation, inventory, and distribution. With the harbor industry as the basis and information technology as the support, port logistics aims at integrating port resources and developing a comprehensive port service system that covers all links in the logistics industry chain. Port logistics is a special form of the integrated logistics system and also an irreplaceable, important node, completing the basic logistics services and value-added services that supply the whole-chain logistics system (Play 1995).
Port logistics' developmental ability along the Yangtze River: developmental ability reflects the existing capability of port logistics development, based on the port's own advantages and competitive resources, its outcomes, and the past and present development of the logistics market. Port logistics development ability can be reflected in logistics infrastructure and equipment, harbor scale, informatization level, logistics standardization, and the port's developmental level.
The river port logistics developmental environment: port logistics develop-
mental environment is an extrinsic factor for measuring the port’s logistical
development, and is the guarantee of the present developmental ability and the
cultivation basis of potential development. Port’s overall environment has very
important influence on the development of logistics. For instance, logistics ser-
vices and hinterland economic development level will have direct impact on
logistics service demand and growth potential. Port logistics developmental
environment usually depends on the economic environment, policy environment,
human resource environment and so on (Han and Wang 2001).
The port logistics’ capability of sustainable development: the sustainable
development capacity of port logistics is a measure of port logistics’ subsequent
development ability. Logistics sustainable development must be in accordance
with the carrying capacity of nature. Only by guaranteeing the sustainability of
resource and ecology can we make the sustainable development of logistics pos-
sible. This requires that in the pursuit of logistics development, we must pay
73 Evaluation Research on Logistics Development of the Yangtze River Port 699
Based on the analysis of the port logistics' connotation and its influencing factors above, this paper divides the port logistics evaluation index system into three levels. The first level is the target level (Mao 1996), namely the evaluation of the river port logistics development level. The second level comprises the first-class indicators: based on the analysis of factors influencing port logistics, two first-class indicators are established, namely logistics developmental ability, and the environment and influence of logistics development. The third level comprises the second-class indicators; this is the core, operable part of the index system. This article identifies 14 second-class indexes according to the three factors influencing the port logistics system, while considering theoretical and practical feasibility (Han and Micheline 2001) (Table 73.1):
Table 73.1 The 14 second-class indexes according to the three factors influencing the port logistics system

Target layer: the port logistics development level

First-class indicator: logistics development ability
V1 Waterfront line length (km)
V2 Berth number
V3 Cargo throughput
V4 Number of port employees
V5 Level of the public information platform
V6 Logistics standardization level
V7 Profit ability
V8 Level of logistics services
V9 Investment in fixed assets (million yuan)

First-class indicator: logistics development environment and influence
V10 Hinterland GDP (billion yuan)
V11 Total retail sales of consumer goods in the hinterland (billion yuan)
V12 Hinterland trade (billion yuan)
V13 Policy environment
V14 Number of college students
Four years (2000, 2003, 2007 and 2010) within the 2000–2010 period are chosen. According to the "China City Statistical Yearbook" (2001, 2004, 2008 and 2011), the "Statistical Yearbook of Anhui Province" (2001, 2004, 2008 and 2011) and the statistics reports of the 5 Anhui Yangtze River ports (Chun 2001), a comprehensive strength evaluation index database of the 5 ports' logistics is established, and the data are analyzed and processed with SPSS 13.0. Main factors are extracted according to the standard of factor eigenvalues greater than 1 and the cumulative contribution rate
Table 73.2 Comprehensive strength index of Anhui river port logistics along the Yangtze

             2001        2004        2007        2010
Ma'anshan    6.82711     19.57205    5.32404     12.74210
Wuhu         69.977337   87.99985    67.10115    91.96178
Tongling     -18.49716   -25.48272   -45.71171   -55.06373
Chizhou      -61.99780   -55.49272   -30.70364   -5.56749
Anqing       40.69597    50.40360    18.99016    61.92710
From Table 73.2 we can see that the comprehensive strength indexes of the Wuhu, Ma'anshan and Anqing ports are always positive, indicating that their port logistics development level is above the average (Foster 1992). The Chizhou index has always been negative, which indicates that its port logistics development level has always been below the average. The Tongling index declines markedly, from -18.49716 in 2001 to -55.06373 in 2010, and remains negative throughout, suggesting that its port logistics development has been below the average (Helen 1992). Anqing's index ranked second in 2001 at 40.69597, fell to a low ebb of 18.99016 in 2007, and rose to 61.92710 in 2010, again ranking second.
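A common way to obtain a composite index such as that in Table 73.2, assumed here, is a contribution-rate-weighted sum of scores on the principal components with eigenvalue greater than 1; a NumPy sketch on stand-in data (not the chapter's indicator values):

```python
import numpy as np

def comprehensive_strength(X):
    """Composite score: variance-contribution-weighted sum of the
    scores on principal components with eigenvalue > 1 (Kaiser rule),
    computed on standardized data."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    cov = np.cov(Z, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    order = vals.argsort()[::-1]
    vals, vecs = vals[order], vecs[:, order]
    keep = vals > 1.0                          # Kaiser criterion
    scores = Z @ vecs[:, keep]                 # component scores
    weights = vals[keep] / vals[keep].sum()    # contribution-rate weights
    return scores @ weights

# Illustrative 5 ports x 6 indicators (stand-ins for V1..V14)
rng = np.random.default_rng(2)
X = rng.normal(size=(5, 6)) + np.arange(5)[:, None]
idx = comprehensive_strength(X)
print(np.round(idx, 3))
```

Because the data are standardized, the composite scores sum to zero across ports, which is why a positive index is read as "above average" and a negative one as "below average".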
In evaluating the port logistics development level, we should first pay attention to the research on the evaluation index system (Chames et al. 1978). Only with an index system established on the basis of a scientific, reasonable and in-depth analysis of the factors influencing the port logistics system can we conduct further evaluation. At the same time, we should also take the development level of the hinterland economy as an important evaluation index (Saul and Adam 1999).
References
Cao Play (1995) Preliminary study of the port system along the Yangtze river in Anhui province.
Geogr Sci 15(2):154–162
Chames A, Cooper W, Rhods E (1978) Measuring the efficiency of decision making units.
European J Opt Res (6):429–444
Cloud Chun (2001) The Development of the port and into the transformation of the logistics
center. Port Handl 4:23–25
Taniguchi E, Thompson RG (2002) Modeling city logistics. J Transp Res Board (1):45–51
Foster TA (1992) Logistics benchmarking: searching for the best. Distribution (3):31–36
Han JW, Micheline K (2001) The data mining: concepts and techniques. In: Fan X (ed.) Meng
translation. Mechanical Industry Press, Beijing, pp 76–77
Han ZL, Wang G (2001) Port logistics characteristics and influencing factors. Chinese Ports
(8):38–40
Han ZL, Wang G (2001) Port logistics characteristics and influencing factors. Dalian Port Ocean
Dev Manag (4):39–42
Helen R (1992) Improve quality through benchmarking. Transp Distribut (10):12–20
Lu Avenue (1988) Location theory and regional research methods. Science Press, Beijing
Mao HY (1996) Shandong province sustainable development indicator system. Geography
15(4):16–23
Nevem Working Group (1989) Performance indicators in logistics. IFS Publication, Bedford, pp 36–39
Saul E, Adam R (1999) Enterprise performance and ownership: the case of Ukraine. European
Econ Rev (4–6):1125–1136
Tian Yu (2000) Logistics efficiency evaluation method. Logist Technol 2:34–36
Xiao P, Han ZL (2001) Coming of age of integrated logistics and port function of the evolution.
Trop Geogr (3):41–43
Xu JW (2004) Port logistics development. World Ocean 27(2):31–32
Xu Shubo (1998) Analytic hierarchy principle. Tianjin University Press, Tianjin, China
Chapter 74
A Game Analysis of New Technical
Equipment Procurement
74.1 Introduction
The major problem of new technical equipment procurement lies in the uncertainty of its development and manufacture (Hartley 2007). For military purchasers and new equipment suppliers, this uncertainty is characterized by the difficulty of mastering new technology. Different levels of difficulty lead to different development costs, which cannot be confirmed before the procurement contract is signed, although their probability distributions can be estimated (Aliprantis and Chakrabarti 2000). The concrete costs paid by the new technical equipment developer (Party B) are its private information, so for the good of the company it always claims that high technical difficulty leads to high costs, even when both the technical difficulty and the costs are low. It is therefore difficult for the purchaser (Party A) to judge whether the cost information from Party B is true. But Party A can choose different purchase quantities, which are its own private information, to avoid the moral hazard of Party B.
Suppose both parties reach an agreement in the purchase and sale contract that the price under high technical difficulty is Ph and the price under low technical difficulty is Pl. Under high technical difficulty, the marginal cost of Party B is Ch and the purchase quantity of Party A is Qh; under low technical difficulty, the marginal cost of Party B is Cl and the purchase quantity of Party A is Ql. And,

Ph > Ch, Pl > Cl, Ql > Qh.

Here, Ch and Cl represent the private information of Party B and are fixed values, while Qh and Ql are variables freely chosen by Party A, which have to become fixed
Table 74.1 Gain matrix of the new equipment game under high technical difficulty (I)

                        Party B
                        Ph                       Pl
Party A   Qh   -Ph, Qh(Ph - Ch)        -Pl, Qh(Pl - Ch)
          Ql   -Ph, Ql(Ph - Ch)        -Pl, Ql(Pl - Ch)
Since Ph > Pl,

Qh(Ph - Ch) > Qh(Pl - Ch),
Ql(Ph - Ch) > Ql(Pl - Ch).
So, whatever choice Party A makes, Party B will inevitably choose to quote
high price, and there is nothing party A can do to force Party B into preferring low
quotation. That is to say, game equilibrium will be reached under the condition
that Party A requires low quantity and Party B presents high quotation.
Under the circumstances of low technical difficulty, the gain matrix of the game is shown in Table 74.3.
Table 74.2 Gain matrix of the new equipment game under high technical difficulty (II)

                        Party B
                        Ph                           Pl
Party A   Qh   -Ph + D, Qh(Ph - Ch)        -Pl, Qh(Pl - Ch)
          Ql   -Ph, Ql(Ph - Ch)            -Pl + D, Ql(Pl - Ch)
706 A. Zhang et al.
Table 74.3 Gain matrix of the new equipment game under low technical difficulty

                        Party B
                        Ph                           Pl
Party A   Qh   -Ph + D, Qh(Ph - Cl)        -Pl, Qh(Pl - Cl)
          Ql   -Ph, Ql(Ph - Cl)            -Pl + D, Ql(Pl - Cl)
Like the game based on high technical difficulty, this game also reaches equilibrium under the condition that Party A requires low quantity and Party B presents a high quotation (Table 74.3).
The main reason both games equilibrate at the same condition is that, whatever choice Party A makes, Party B inevitably chooses the high quotation, which is more beneficial to it. That is, high quotation is Party B's dominant strategy and low quotation is its strictly dominated strategy.
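The dominance argument can be checked mechanically from Table 74.1's payoffs; the price, cost and quantity values below are illustrative, not from the chapter:

```python
# Payoffs for Party B from Table 74.1 (high technical difficulty):
# illustrative values satisfying Ph > Pl > Ch and Ql > Qh
Ph, Pl, Ch = 10.0, 7.0, 5.0
Qh, Ql = 3.0, 8.0

payoff_B = {
    ("Qh", "Ph"): Qh * (Ph - Ch), ("Qh", "Pl"): Qh * (Pl - Ch),
    ("Ql", "Ph"): Ql * (Ph - Ch), ("Ql", "Pl"): Ql * (Pl - Ch),
}

# Whatever quantity A picks, B earns more by quoting Ph: Ph is dominant
for qa in ("Qh", "Ql"):
    assert payoff_B[(qa, "Ph")] > payoff_B[(qa, "Pl")]
print("high quotation Ph is Party B's dominant strategy")
```

The comparison holds for any Ph > Pl, which is exactly the pair of inequalities derived from Table 74.1.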
Under the condition of high technical difficulty, if Pl > Ch and Ql(Pl - Ch) > Qh(Ph - Ch), the disequilibrium point at which Party A requires high quantity and Party B presents a low quotation, (-Pl + D, Ql(Pl - Ch)), is strictly superior to the equilibrium point at which Party A requires low quantity and Party B presents a high quotation, (-Ph + D, Qh(Ph - Ch)). The game between the two parties then gets stuck in the "prisoner's dilemma", which stems from the fact that Party B always pursues the profit that is optimal for itself but worst for Party A; Party A has to choose a suboptimal point to improve its unfavorable position, and so does Party B, so both sides are bound to reach a suboptimal equilibrium rather than the optimal equilibrium, which is unstable.
The same analysis is also applicable to the purchase of new technical equipment
under the condition of low technical difficulty.
To avoid the "prisoner's dilemma" and reach the optimal outcome, both parties may reach an agreement beforehand and sign the following two contracts: (1) Party B quotes Ph and Party A purchases Qh; (2) Party B quotes Pl and Party A purchases Ql. Party B is allowed to choose either of the two. At the same time, Party A may promise during the game to choose the low quantity if Party B prefers the high quotation, and vice versa. This promise is made by Party A without risk; it simply conveys to Party B the message that those who reap profits at the expense of others will end up ruining themselves.
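Under the two-contract menu with A's quantity commitment, Party B's incentive can be checked numerically; the figures are illustrative and assume the condition Ql(Pl - Cl) > Qh(Ph - Cl) holds:

```python
# Two-contract menu from above: (Ph, Qh) and (Pl, Ql); Party B picks one.
# Under low technical difficulty Party B's true marginal cost is Cl.
Ph, Pl = 10.0, 7.0
Qh, Ql = 3.0, 8.0
Cl = 4.0   # chosen so that Ql*(Pl - Cl) = 24 > Qh*(Ph - Cl) = 18

profit_high_quote = Qh * (Ph - Cl)   # B claims high difficulty -> A buys only Qh
profit_low_quote = Ql * (Pl - Cl)    # B reports truthfully -> A buys Ql
best = "Pl" if profit_low_quote > profit_high_quote else "Ph"
print(best)
```

With the quantity commitment in place, exaggerating the technical difficulty costs Party B volume, so the truthful low quotation becomes its best choice.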
74.3 Conclusions
Whether the procurement price is accurate and rational is closely related to the improvement of military equipment and to the benefit of military expenditure on equipment purchases (Li et al. 2011). New technical equipment procurement is therefore especially important, and it is urgent to gain an advantage and improve the benefits in the procurement game, even though the optimal equilibrium is not stable. This calls for the specific measures below.
On the one hand, given many related links and departments involved in the
equipment procurement, purchasers should build up the sense of responsibility to
deal well with and strengthen all kinds of relationships (Zhang et al. 2009).
On the other hand, purchasers should do a good job of price review, which requires them to actively follow or participate in scientific research, gaining an adequate understanding of the details of the equipment (quality, performance, design, material, manufacture, etc.) to accumulate related information, and to become acquainted with the critical information of the suppliers (business concepts, pricing strategy, quotation formation, rational price, etc.) to prepare well for the subsequent work (Yuan and Hu 2008; Wang et al. 2007). All the measures mentioned above, if taken together, can not only effectively prevent suppliers from exaggerating equipment costs but also firmly guarantee a rational quotation and an effective procurement contract.
References
Aghion P, Bolton P (1992) An incomplete contracts approach to financial contracting. Rev Econ
Stud 1992(6):473–494
Aliprantis CD, Chakrabarti SK (2000) Games and decision making. Oxford University Press,
New York, Oxford
Hang H, Tan G (2011) Equipment purchase power supervision based on reason. J Liaoning Tech
Univ (Nat Sci Ed) 2011(A01):211–216
708 A. Zhang et al.
Hao S-c, Jiang Y-n (2010) Research on purchasing corruption reasons and countermeasure based
on game theory model. Storage Transp Preserv Commod 2010(3):95–97+75
Hartley K (2007) The Arms industry, procurement and industrial polices. In: Sandler T, Hartley K
(eds) Handbook of defense economics, Vol 2. Elsevier, Amsterdam
Hou D-P, Wang Z-J (2001) Theoretical discuss and applications of nonlinear assessment. China
University of Science and Technology Publishing House, Hefei
International Society of Parametric Analysts (2007) Parametric estimating handbook, Fourth
edn., ISPA/SCEA Joint Office, Vienna, VA, pp 77–78
Li J, Gan M, Wang F (2011) A game approach to collusion in purchasing and pricing of military
reserves. Logistics Technol 2011(7):214–216
Sun Z-b, Jin C-h, Peng L (2011) Game analysis of weapon and military equipment procurement.
Mil Econ Res 2011(6):26–28
Wang H-m, Qu W, Bai H-w (2007) Study on model and strategy based-on asymmetric of
equipment procurement information’s game. J Acad Equip Command Technol 2007(5):31–34
Wang J-k (2011) Gambling analysis on anticorrosion and supervision to the Government
procurement. Value Eng 2011(17):135–136
Weitzman M (1980) The ratchet principle and performance incentives. Bell J Econ 1980:302–308
Xiang F-x, Xin W-f (1997) On enhancing three awarenesses and deepening administrative reform
of equipment procurement expenditure. Mil Econ Res 18:52–56
Xie X-h, Wang J-w, Yang M-j (2011) Some key issues in improvement of competitive system of
Chinese military equipment procurement. J Mil Econ Acad 2011(5):155–157
Yuan Y-q, Hu L (2008) Game analysis on military materials procurement under the lowest bid
price. Logistics Technol, 2008(10):259–261
Zhang H-y, Zhang W-j (2007) Analysis on anti-collusion based on game theory in tendering
procurement. Logistics Technol 2007(4):22–24+39
Zhang T, Cao M-y, Ou Y (2009) Incentive pricing model for equipment acquisition based on
game theory. J Armored Force Eng Inst, 2009(6):20–22+39
Chapter 75
Constructing Performance Measurement
Indicators in the Government's
Information Unit in Taiwan: Using
Balanced Scorecard and Fuzzy Analytic
Hierarchy Process
Yi-Hui Liang
the governments for constructing the strategies and blueprints for self-evaluation. Further, these can also provide important information for effective resource investment in the government's MIS department.
Keywords Government MIS Performance measurement Balanced scorecard
75.1 Introduction
Performance appraisal system is the most effective tool used for government
reengineering. Performance appraisal aims to help people achieve their strategies,
missions, visions and goals.
Wu (2000) held that a good performance appraisal system can enable government departments to allocate resources reasonably, prioritize resource investment, and further improve departmental effectiveness and efficiency; it can also lead organizational members to pursue their goals in a consistent way, boost their morale, and focus them on the organizational vision.
Traditional government departments usually developed their information sys-
tems according to their individual requirements, and hence did not communicate
with each other, leading people to develop bad impressions and stereotypes
regarding government performance owing to inefficient government operations.
Balanced Scorecard (BSC), which was developed by Kaplan and Norton
(1992), is a useful and popular method of identifying business performance using
lagging and leading indicators based on the foundation of visions and strategies.
Balanced Scorecard implies that organizational performance is evaluated not only
utilizing financial indicators, but also simultaneously non-financial indicators.
Balanced Scorecard builds a framework to transform organizational vision and strategies into a series of consistent performance indicators, and thus to execute and control organizational administration, allow organizational members to grasp the vision and strategies of the organization more concretely, and help managers track the outcomes of implemented strategies.
The Executive Yuan of the Republic of China implemented a performance reward and performance management plan in 2003 that followed the BSC spirit. However, considering differences in business properties, organizational culture, and management and control, the Executive Yuan authorized each government department to set up its own performance evaluation process and evaluation indicators (Directorate-General of Personnel Administration 2005). To date, the Executive Yuan has not forced government departments to set up their own performance evaluation processes and evaluation indicators (Chu and Cheng 2007).
The analytic hierarchy process (AHP), a multi-criteria technique, is considered appropriate for solving complex decision problems. The AHP offers information on the relative weights of the BSC performance indicators (Chu and Cheng 2007; Searcy 2004). Zadeh (1965) developed fuzzy theory to handle uncertain problems involving fuzziness and vagueness. Lee et al. posited that the traditional BSC fails to consolidate diverse performance indicators, and suggested the fuzzy AHP method as an answer to this problem (Martinsons et al. 1999).
BSC can help managers of government organizations holistically evaluate information technology (IT) investments, as well as the performance of information system (IS) departments. This study builds a framework for evaluating government MIS departments based on BSC, and summarizes how to combine BSC and fuzzy AHP into a decision tool for government organizations. The tool can be used not only to assess the contribution of a specific government MIS department, but also to analyze the performance and direct the activities of government MIS departments.
75.2 Methodology
This study builds a framework for evaluating government MIS departments based on BSC and fuzzy AHP.
It adopts the dimensions and indicators developed by Martinsons et al. (1999), Liang et al. (2008), and related government MIS experts to form the proposed dimensions and indicators. The research variables are shown in Table 75.1.
5.1 When α = 1, use the α-cut to obtain the median positive reciprocal matrix,
then apply the AHP method to calculate the weight matrix.
5.2 When α = 0, use the α-cut to obtain the minimum and maximum positive
reciprocal matrices, then apply the AHP method to obtain the corresponding
weight matrices.
5.3 Adjust the coefficient to ensure that each calculated weight value is a valid
fuzzy number.
5.4 After obtaining the adjusted coefficient, calculate the minimum and maximum
positive reciprocal weight matrices of every measurement dimension.
5.5 Combine the adjusted minimum, median and maximum values to obtain the
fuzzy weight of the kth evaluation member for each measurement dimension.
5.6 Use the averaging method to integrate the fuzzy weights over evaluation
members and measurement dimensions.
75.3 Results
75.4 Conclusion
constructing the strategies and blueprints for self-evaluation. Further, these can
also help other related departments invest resources effectively in the information
units of governments.
Compared to Miller and Doyle (1987) and Saunders and Jones (1992), the
proposed IS evaluation dimensions and indicators focus more on the
characteristics of non-profit organizations.
References
76.1 Introduction
People have always had to face natural disasters, for example the Kobe
earthquake (1995) in Japan, hurricane Rita (2005) in the United States, and the
Wenchuan earthquake (2008) in China. These natural disasters destroyed many
traffic facilities and made emergency rescue very difficult; key bridges, tunnels
and line hubs of the traffic network may be damaged or destroyed. If
vehicles can pass through the key bridges (tunnels), the transportation mileage will be
shortened. But if these bridges (tunnels) are destroyed, the vehicles may have to
make a detour, or even backtrack (Li and Guo 2001; Gan et al. 1990; Liu and Jiao
2000; Wu and Du 2001; Zhang et al. 2002; Renaud and Boctor 2002), which may
delay the mission. Decision-making for the multi-vehicle path problem in an
emergency environment has therefore become increasingly important.
X. Liu (&) Y. Ma
Department of Military Transportation, University of Military Transportation,
Tianjin, China
e-mail: [email protected]
M. Zhong
Department of Basic Science, University of Military Transportation,
Tianjin, China
e-mail: [email protected]
This article discusses the multi-vehicle path decision problem when some
critical sections (bridges, tunnels, etc.) of the road network may be destroyed. A
mathematical model is constructed and a tabu-search heuristic algorithm is given
to solve the problem.
The vehicle routing problem is recognized as NP-hard. If there are n demand
vertices, there are n! optional routes. The number of optional routes that include
(b1, b2) or (b2, b1) is 2(n−1)!. When k (k ≤ n) demand vertices have been
fulfilled and the route ahead of the vehicle has been destroyed, a route that
includes (b1, b2) has at least (n−k)! alternative continuations. Considering only
k = 1, the number of optional routes may be

n! − 2(n−1)! + 2(n−1)!·(n−1)! ≈ n! + 2(n−1)!·(n−1)!
76 A Decision Model and Its Algorithm for Vehicle Routing Problem 719
If n = 10, the number of optional routes is about 2.6 × 10^11. If the evaluation
of each scheme had to be calculated, it would take too long to finish; if k > 1 is
considered and the whole network is taken into account, the calculation time is
longer still.
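The count above can be checked directly; this small script reproduces the 2.6 × 10^11 figure quoted in the text for n = 10 (the function name is illustrative).

```python
import math

def optional_route_count(n):
    """Number of candidate routes when one critical section may fail (k = 1):
    n! - 2(n-1)! routes avoiding the section, plus 2(n-1)! routes through it,
    each of which may need one of up to (n-1)! re-planned continuations."""
    f, g = math.factorial(n), math.factorial(n - 1)
    return f - 2 * g + 2 * g * g

print(f"{optional_route_count(10):.2e}")  # 2.63e+11, matching the text
```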
r, s ∈ (d − d′) ∪ {0}    (76.1)

min z′ = Σ_{m=1}^{M} Σ_{i=0}^{n} Σ_{j=0}^{n} c_ij · x′_ijm    (76.2)

Σ_{i=1}^{n} Σ_{j=1}^{n} q_j · x_ijm ≤ Q_m,  m = 1, …, M    (76.3)

Σ_{m=1}^{M} Σ_{i=1}^{n} x_ijm = 1,  j ∈ d    (76.4)

Σ_{m=1}^{M} Σ_{j=1}^{n} x_ijm = 1,  i ∈ d    (76.5)

Σ_{m=1}^{M} Σ_{j=1}^{n} x_0jm = Σ_{m=1}^{M} Σ_{i=1}^{n} x_i0m = M    (76.6)

Σ_{m=1}^{M} Σ_{j=1}^{n} x′_{h_i,j,m} = 1,  h_i ∈ H    (76.7)

Σ_{m=1}^{M} Σ_{i=1}^{n} x′_i0m = M    (76.8)

Σ_{m=1}^{M} Σ_{j=1}^{n} x′_ijm = 1,  i ∈ d′    (76.9)

Σ_{m=1}^{M} Σ_{i=1}^{n} x′_ijm = 1,  j ∈ d′    (76.10)
involved in the collaboration will return to the warehouse. Constraints (76.9) and
(76.10) mean that each remaining demand vertex is visited exactly once.
Constraint (76.11) means that a vehicle cannot pass along a destroyed road.
If every value of p_hb in expression (76.1) were taken into account, the algorithm
would take too long to compute, and the law governing the destruction is hard to
identify in an emergency environment. In practice, decision makers are mainly
concerned with the worst-case or best-case situation, so the evaluation of a
scheme can be simplified to its maximum evaluation value in the worst case, or its
minimum evaluation value in the best case.
The minimum mileage of scheme Ri is denoted best(Ri) and the maximum
mileage of scheme Ri is denoted worst(Ri). The utility of scheme Ri can then be
written as

u(Ri) = u(best(Ri), worst(Ri)).
If (b1, b2) ∉ Ri, namely if no road on the scheme is destroyed, then

u(Ri) = f(Ri) = best(Ri) = worst(Ri).

If (b1, b2) ∈ Ri, the scheme must be renewed when the decision-maker learns
that the road has been destroyed.
Therefore, the worst case is that the information about the damaged road
becomes known only when the vehicle has already arrived at a vertex of the
damaged road, so that it may have to make a long detour. If (b1, b2) ∈ Ri, let
sk(Ri) denote the serial number of demand vertex k in the route of vehicle b.
Let ff(Ri(i, j)) denote the mileage of all the vehicles while vehicle b runs from
demand vertex i to demand vertex j, and let ff(R′i(sk)) denote the mileage of all
the vehicles after the scheme has been renewed at sk.
Theorem If (b1, b2) ∈ Ri, the maximum mileage of the transport process is
f1(se, Ri), and f1(se, Ri) ≥ f1(sk, Ri), in which se = min(sb1, sb2) and sk ≤ se.

Proof According to the path of the scheme Ri,

f1(sk, Ri) = ff(Ri(0, sk)) + ff(R′i(sk))    (76.13)

f1(se, Ri) = ff(Ri(0, se)) + ff(R′i(se))    (76.14)

Subtracting (76.13) from (76.14) gives

f1(se, Ri) − f1(sk, Ri) = ff(Ri(sk, se)) + ff(R′i(se)) − ff(R′i(sk))    (76.15)

Because ff(Ri(sk, se)) is the mileage of all the vehicles while vehicle b runs
from demand vertex sk to demand vertex se,

ff(Ri(sk, se)) ≥ 0.

Because ff(R′i(sk)) is the mileage of all the vehicles after the scheme has been
renewed at sk, ff(R′i(se)) is the mileage of all the vehicles after the scheme has
been renewed at se, and sk ≤ se,
(1) Generate a path sequence randomly for the initial solution. Code the n demand
vertices and denote the warehouse as 0. A path solution is a random
arrangement of the numbers 0 to n; its head and tail are 0, and there are M − 1
zeros in the middle, situated randomly among positions 1 to n. The numbers
between two zeros represent the service path of one vehicle. For example, for
6 demand vertices and two vehicles, a path solution is 0-1-2-3-0-4-5-6-0.
(2) Generate the neighborhood of the solution. Exchange the positions of two
demand vertices, or insert a vertex at a new position in the service path, to
obtain new solutions.
(3) Process the vehicle-capacity constraints. If the total demand of the demand
vertices on a route exceeds the cargo capacity of the vehicle, the scheme is
eliminated.
(4) Tabu objects are two adjacent demand vertices. The length of the tabu list
increases with the evolution generation. If the current value is better than the
best value in history, the tabu is lifted.
(5) Evaluate the value of the scheme. Calculate the minimum and maximum
mileage of each scheme. When calculating the maximum mileage of a scheme
that contains destroyed sections of the path, use an inner tabu search that is
similar to the external algorithm. The final evaluation value depends on the
decision-making criterion, such as the optimistic criterion, pessimistic
criterion, compromise criterion or expectation criterion (Qu et al. 2004;
Golden et al. 1998; Li et al. 2005; Renaud et al. 1996; Stenger et al. 2012;
Li and Li 2011; Qian 2011; Liu et al. 2005).
76 A Decision Model and Its Algorithm for Vehicle Routing Problem 723
(6) Consider the direction of the path. Under the optimistic criterion, the change
in value caused by the destruction is not reflected in the evaluation. To
compensate for this deficiency, a number of sub-optimal evaluation values are
picked, and their worst-case results are calculated for decision-making
reference. Schemes that have the same sequence but different directions are
considered different; for example, 0-1-2-3-4-0 and 0-4-3-2-1-0 are different
schemes.
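Steps (1)–(3) above can be sketched as follows. The encoding, the swap neighborhood, and the capacity check follow the description in the text; the function names and sample data are illustrative, and the tabu-list bookkeeping of step (4) is omitted.

```python
def split_routes(path):
    """Split a 0-delimited path solution into per-vehicle routes."""
    routes, cur = [], []
    for v in path[1:]:
        if v == 0:
            routes.append(cur)
            cur = []
        else:
            cur.append(v)
    return routes

def feasible(path, demand, capacity):
    """Step (3): reject schemes whose route load exceeds vehicle capacity."""
    return all(sum(demand[v] for v in r) <= capacity
               for r in split_routes(path))

def swap_neighbors(path):
    """Step (2): exchange the positions of two demand vertices."""
    idx = [i for i, v in enumerate(path) if v != 0]
    for a in range(len(idx)):
        for b in range(a + 1, len(idx)):
            q = path[:]
            q[idx[a]], q[idx[b]] = q[idx[b]], q[idx[a]]
            yield q

path = [0, 1, 2, 3, 0, 4, 5, 6, 0]       # two vehicles, six demand vertices
demand = {v: 5 for v in range(1, 7)}
print(feasible(path, demand, 35))         # True: each route carries 15
```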
76.6 Example
There are 1 warehouse vertex and 10 demand vertices in Table 76.1; the data in
the table are the distances between vertices. The demand of each demand vertex
is 5. Two vehicles are used for transport, each with a maximum cargo capacity
of 35 (Table 76.2).
76.7 Conclusion
References
Gan Y, Tian F et al. (1990) Operations research. Tsinghua University Press, Beijing
Golden BL, Wasil EA, Kelly JP, Chao I-M (1998) The impact of metaheuristics on solving the
vehicle routing problem: algorithms, problem sets, and computational results. In: Crainic T,
Laporte G (eds) Fleet management and logistics. Kluwer, Boston, pp 33–56
Li F, Golden B, Wasil E (2005) Very large-scale vehicle routing: new test problems,
algorithms and results. Comput Oper Res 35:1165–1179
Li J, Guo Y (2001) Vehicles scheduling theory and method. China Supplies Press, Beijing
Liu C, Jiao S (2000) Urban post-earthquake relief system relief decision. J Nat Disasters 3
Li Q, Li Q (2011) Based on spatio-temporal crowding emergency evacuation route optimization
method. J Mapp 55(4):517–523
Liu X, He G, Gao W (2005) The multiple vehicles coordinated stochastic vehicle routing model
and algorithm. Syst Eng 23(4):105–109
Qu Z, Cai L, Li C (2004) The frame for vehicle routing problem of large-scale logistics systems.
J Tsinghua Univ (Nat Sci Ed) 44(5):43–44
Qian W (2011) Tabu in combination with genetic results on the distribution routing optimization
research and application. Comput Appl Softw 2011:53–57
Renaud J, Boctor FF (2002) A sweep-based algorithm for the mix vehicle routing problem. Eur J
Oper Res 140:618–628
Renaud J, Laporte G, Boctor FF (1996) Tabu search heuristic for the multi-depot vehicle routing
problem. Comput Oper Res 21(3):229–235
Stenger A, Enz S, Schwind M (2012) An adaptive variable neighborhood search algorithm for a
vehicle routing problem arising in small package shipping. Transp Sci 47(1):64
Wu Y, Du G (2001) Management science foundation. Tianjin University Press, Tianjin
Zhang F, Wu X, Guo B et al. (2002) Logistics network usability research. Syst Eng Theory
Method 12(1):77–80
Chapter 77
A Solution to Optimize Enterprise
Business and Operation Process
Keywords: Business process · Depth application · Enterprise informatization ·
Operation process · Process optimization · Solution
77.1 Introduction
important topic for discussion at the conference, and enterprise business process
optimization was considered a key method. Chinese manufacturing enterprises
have begun to enter the stage of process optimization, so it is meaningful to
research and propose a solution for process optimization.
In a corporation, enterprise management consists of many business processes,
and a business process is composed of many operation processes executed by
different roles at different places. The aim of enterprise informatization is to
improve enterprise management. That improvement can be realized by optimizing
the business processes, which in turn can be supported by optimizing the
operation processes; the relationship among them is shown in Fig. 77.1. The level
of enterprise informatization can therefore be advanced by continuously
optimizing the operation processes.
The aims of optimizing an enterprise business process should include two points:
(1) to decrease the total time consumed by the business process;
(2) to strengthen the value-added capability of the business process and reduce
the non-value-added links.
There has been much research on business process optimization. Some
researchers use Petri net technology to construct a model of a business process
and then use software to analyze and optimize it (Wang et al. 2008; Wang 2007;
Aalst 1998; Ling and Schmidt 2000; Pan et al. 2005; Li and Fan 2004; Pang et al.
2008). This method can find the key route and figure out the shortest route, but it
does not tell us how to describe, decompose and sort out the business process.
Others presented a process-log method of business process mining using the
users' operation logs in information systems such as business process
management (BPM) and ERP systems (Zhang 2010; Feng 2006; Gaaloul et al.
2005, 2009). The process-log method can represent the invisible business process,
but few information systems can support it. To optimize business processes, still
others used a cost analysis method (Hu et al. 2003; Cooper 1990; Spoede et al.
1994) or a Business Process Reengineering (BPR) method based on the value
chain (Baxendale et al. 2005), but these methods lack maneuverable steps. Prof.
Lan proposed a new method that establishes a general equilibrium relationship of
the enterprise value chain based on the dual theory of linear programming (Lan
et al. 2011), but the operation of calculating value from cost is also somewhat
complex.
(Fig. 77.1 Operation process optimization and business process optimization)
77.2 Methodology
The solution consists of two parts. The first part is a method for decomposing a
business process into a series of continuous operation processes, named the
Timeline-Place-Roles (TPR) method, as shown in Fig. 77.2. Along the timeline,
decompose a business process into many activities according to activity places,
making sure the places of any two adjacent activities differ. Then decompose
every activity into several operation processes according to activity roles, so that
the roles of any two adjacent operation processes are not the same. All the
operation processes together constitute the business process. After decomposition,
record the place, roles, and time consumed of every operation process in a table.
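A minimal sketch of the TPR record and its adjacency rule, under the interpretation that two adjacent operation processes must differ in place or in roles (otherwise they should be merged); the class and field names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class OperationProcess:
    sn: int           # serial number along the timeline
    place: str
    roles: tuple      # roles executing this operation process
    tc: float         # time consumed, minutes

def decompose_ok(ops):
    """TPR check: any two adjacent operation processes must differ
    in place or in roles."""
    return all(not (a.place == b.place and a.roles == b.roles)
               for a, b in zip(ops, ops[1:]))

ops = [OperationProcess(1, "product dept.", ("planner",), 2),
       OperationProcess(2, "workshop", ("work director",), 1)]
print(decompose_ok(ops))  # True
```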
The second part is an "8 steps" value-added method that optimizes every
operation process of the business process; it contains eight steps, from
simplification to appreciation, intended to make each operation process more
valuable, as shown in Fig. 77.3.
The optimization steps are as follows:
(1) Judge whether it could be deleted. If it is meaningless or repeating, then delete it.
(2) Judge whether it could be executed at the same time with other operation.
(3) Judge whether it could be carried out by the computer.
(4) Judge whether it could be simplified.
(5) Judge whether it could be standardized.
(6) Judge whether it could be extended to contain more information.
(7) Judge whether a new operation process could be added to make it more
valuable.
(8) Judge whether the value-added part could be strengthened.
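The eight judgments can be viewed as an ordered checklist applied to a description of one operation process. The predicates below are hypothetical stand-ins for the analyst's decisions, not part of the original method.

```python
# Each judgment pairs an action name with a predicate over an
# operation-process description (a plain dict of analyst flags).
EIGHT_STEPS = [
    ("delete",      lambda op: op.get("meaningless") or op.get("repeating")),
    ("parallelize", lambda op: op.get("independent_of_neighbors", False)),
    ("automate",    lambda op: op.get("computerizable", False)),
    ("simplify",    lambda op: op.get("simplifiable", False)),
    ("standardize", lambda op: op.get("standardizable", False)),
    ("extend",      lambda op: op.get("more_information", False)),
    ("add",         lambda op: op.get("needs_new_step", False)),
    ("strengthen",  lambda op: op.get("value_added_part", False)),
]

def optimize(op):
    """Return the list of optimization actions suggested for one operation."""
    return [name for name, judge in EIGHT_STEPS if judge(op)]

op = {"repeating": True, "computerizable": True}
print(optimize(op))  # ['delete', 'automate']
```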
77.3 Case
This figure shows that the "planner" and "workshop material member" are the
main roles.
According to the second step of the Timeline-Place-Roles method, the activities
were decomposed into 14 operation processes, as shown in Table 77.1. The table
records the Serial Number (SN), place, and roles of every operation process.
Then the average Time Consumed (TC, minutes) of every role in every operation
process was worked out. For instance, it would cost the planner 5 min (and 5 min
back) to send the plan to the workshop and give it to the work director, who
would spend 1 min scanning and checking the plan. The workshop statistician
would then summarize a material requisition according to the plan and the Bill of
Material (BOM) of the related product, which would cost him 30 min.
According to the 14 operation processes in Table 77.1 and the "8 steps"
optimization method, the previous "product-prepared" business process was
optimized into 8 operation processes, in combination with the information base
formed by the HZERP system in company A. The optimized "product-prepared"
business process is shown in Table 77.2. The three audit operations (by the
production director, the technical director and the workshop director) could be
parallelized and simplified by the HZERP system: the three roles could audit the
plan and sign their names in any order, or even at the same time.
Table 77.2 shows that company A could optimize its business process by relying
on the information system. Three operations were deleted or replaced by the
system automatically. In particular, the operation of summarizing a material
requisition by the workshop statistician was carried out by the MRP subsystem of
the HZERP system, which saved time and made the result accurate.
732 X. Chang et al.
Table 77.3 The time consumed (including work and wait) before and after process optimization

Role | TC/Before (min) | TC/After (min) | Proportion of value added (%)
Planner | 2 + 2 + 10 + 5 + 10 = 29 | 0 | 100
Production director | 1 | 1 | 0
Technical director | 1 | 1 | 0
Workshop director | 1 + 1 + 0 + 1 + 0 = 3 | 2 | 0
Production manager | 1 | 0 | 100
Workshop statistician | 30 + 0 = 30 | 0 | 100
Workshop material member | 3 + 20 + 1 + 3 + 10 + 5 = 42 | 3 + 3 + 5 = 11 | 73.8
Warehouse director | 1 | 1 | 0
Warehouse keeper | 20 + 1 + 3 + 10 = 34 | 20 + 1 = 21 | 38.2
Total | 112 | 37 | 70
Total business duration | 2 + 2 + 10 + 5 + 10 + 30 + 1 + 3 + 20 + 1 + 3 + 10 + 5 = 102 | 1 + 1 + 1 + 20 + 3 + 3 + 5 = 34 | 66.7
Finally, the time consumed (including work time and wait time) before and after
the process optimization was analyzed and compared; the result is shown in
Table 77.3. TCB denotes the time consumed before, and TCA the time consumed
after. The proportion of value added is characterized by the proportion of time
saved, i.e. (TCB − TCA)/TCB.
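The value-added proportion can be verified against the figures of Table 77.3 (the function name is illustrative):

```python
def value_added(tcb, tca):
    """Proportion of value added, (TCB - TCA) / TCB, as a percentage."""
    return round((tcb - tca) / tcb * 100, 1)

# Figures taken from Table 77.3
print(value_added(42, 11))   # 73.8  (workshop material member)
print(value_added(34, 21))   # 38.2  (warehouse keeper)
print(value_added(102, 34))  # 66.7  (total business duration)
```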
Table 77.3 shows that the total TCB was 112 min, while the total TCA was
only 37 min, a 70 % value added. The total business duration before optimization
was 102 min, versus 34 min afterwards, a 66.7 % value added. The time consumed
by the three key roles (planner, workshop statistician and workshop material
member) decreased greatly, and the work of the workshop statistician was
eliminated.
The results prove that the solution can optimize an enterprise business process,
improve the effect continuously, and deepen the application of informatization to
a certain extent; the effect is even better when combined with the use of the
information system.
77.5 Conclusion
(1) The relationship between process optimization and enterprise informatization
was discussed.
(2) A Timeline-Place-Role (TPR) method of decomposing business process into
operation processes was proposed.
(3) The ‘‘8 steps’’ value-added method of operation process analysis and contin-
uous optimization based on the value chain was proposed.
(4) The TPR method and ‘‘8 steps’’ method made up a complete set of solution.
(5) Taking a business process of a continuous-manufacturing company as a case,
this paper analyzed and optimized the case using the proposed solution and
compared the time consumed before and after process optimization.
(6) The results proved that the solution can optimize an enterprise business
process, and that the effect is even better when combined with the use of the
information system, which is also a new trend for promoting management
technological upgrading.
Acknowledgments This study was financially supported by HUST self-determined and inno-
vative research funds for national defense (2011), Fundamental Research Funds for the Central
Universities (2011TS039), and Program for New Century Excellent Talents in University (No.
NCET-09-0396).
References
Aalst Wvd (1998) The application of petri nets to workflow management. J Circuits Syst Comput
8(1):21–66
Baxendale S, Gupta M, Raju PS (2005) Profit enhancement using an ABC model. Manag
Account Quart 6(2):11–21
Cooper (1990) Cost classification in unit based and activity-based manufacturing cost systems.
J Cost Manag (3):4–14
Feng C (2006) Workflow mining algorithm on time-based log. Computer software and theory.
Master dissertation, Shandong University, Shandong
Gaaloul W, Alaoui S, Baina K, Godart C (2005) Mining workflow patterns through event-data
analysis. In: Proceedings of 2005 symposium on applications and the internet workshops,
SAINT2005, Trento, pp 226–229
Gaaloul W, Gaaloul K, Bhiri S, Haller A, Hauswirth M (2009) Log-based transactional workflow
mining. Distributed Parallel Databases 25(3):193–240
Hu YG, Wang TM, Qiao LH (2003) BPR method based on enterprise value chain analysis (in
Chinese). Aeronaut Manuf Technol (8):55–59
Lan BX, Wang YM, Wang W (2011) Enterprise resource optimization and value chain analysis
(in Chinese). Chinese J Manag Sci (1):69–76
Li HF, Fan YS (2004) Workflow model analysis based on time constraint petri nets. Ruan Jian
Xue Bao/J Softw 15(1):17–26
Ling S, Schmidt H (2000) Time petri nets for workflow modelling and analysis. In: Proceedings
of the 2000 IEEE international conference on systems, man and cybernetics, Nashville,
pp 3039–3044
Pan Y, Tang Y, Tang N, Yu Y, Dao W (2005) A workflow model based on fuzzy-timing petri
nets. In: Proceedings of the 9th international conference on computer supported cooperative
work in design, Coventry, pp 541–546
Pang H, Fang ZD, Zhao Y (2008) Simplification analysis and schedulability verification of timing
constraint workflow model. Comput Integr Manuf Syst 14(11):2217–2223, 2230
Spoede C, Henke EO, Unmble M (1994) Using activity analysis to locate profitability drivers.
Manag Account (3):43–48
Wang ZZ (2007) Research of multimodal transportation process optimization based on petri net.
Vehicle operation engineering. Ph.D. dissertation, Jilin University, Jilin
Wang YP, Li SX, Wang ZZ, Li SW, Dong SW, Cui LX (2008) Optimization of production
logistics process of automobile manufacture enterprise based on petri net (in Chinese). J Jilin
Univ (Engineering and Technology Edition) 38(2):59–62
Zhang LQ (2010) Research on block-structured process mining technology for business process
modeling. Computer software and theory, Ph.D. dissertation, Shandong University, Shandong
Zhou JX, Liu F, Chen LL, Liu RX (2008) Application status and prospects of ERP system to
China’s foundry enterprises (in Chinese). Foundry 57(9):885–891
Chapter 78
An Approach with Nested Partition
for Resource-Constrained Project
Scheduling Problem
78.1 Introduction
Z. Liu (&) W. Yu
Institute of Systems Engineering, Huazhong University of Science
and Technology, Wuhan, China
e-mail: [email protected]
W. Yu
e-mail: [email protected]
Z. Liu W. Yu
Key Laboratory of Education Ministry for Image Processing
and Intelligent Control, Wuhan, China
library (Kolisch and Sprecher 1997). Many algorithms have been developed to
solve the problem, and their results are compared and analyzed experimentally on
the standard problems in the library.
The essence of resource-constrained project scheduling is to arrange the
execution time of each activity in the network under resource and precedence
constraints. There are three kinds of solution methods: exact optimization (Bianco
and Caramia 2012), heuristics (Kolisch 1996) and intelligent algorithms (Kolisch
and Hartmann 2006).
The Nested Partition method partitions the feasible solution space so that more
search effort can be expended in the subregions most likely to contain the best
solution. One of its important features is its flexibility: many efficient heuristics
can be incorporated into its search procedure to obtain better solutions. Another is
that parallel computing capacity can be exploited, since the subregions can be
searched independently and in parallel with only a little coordination overhead. It
is therefore often used to solve large-scale problems (Shi and Ólafsson 2000).
In this paper, a time-based Nested Partition framework is proposed to solve the
RCPSP; the operations in NP are discussed in turn, and the whole framework is
tested on PSPLIB.
The classic RCPSP can be stated as follows. A single project consists of
j = 1, …, J activities with non-preemptive durations of dj periods. Due to
technological requirements, precedence relations between some of the activities
enforce that an activity j = 2, …, J may not be started before all of its immediate
predecessors i ∈ Pj (Pj is the set of immediate predecessors of activity j) have
been finished. Without loss of generality, we can assume that activity 1 is the only
start activity and activity J is the only finish activity. K types of renewable
resources supplied by the partners are consumed during the project. The project
needs rjk units of resource k to process activity j during every period of its
duration. Let At be the set of activities being executed in period t.
The capacity of resource k is denoted Rk. The due date of the project is D. With
a given D, we can obtain the earliest finish time ej and the latest finish time lj of
activity j by the Critical Path Method (CPM). The time parameters in the problem
are all integer valued.
We use a set of decision variables xj ∈ [ej, lj], j = 1, …, J, for the finish time of
each activity j. The decision vector can be stated as
X = {x1, x2, …, xJ | xj ∈ [ej, lj], j = 1, …, J}.
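The windows [ej, lj] come from a standard CPM forward/backward pass; a compact sketch follows, using a small hypothetical network (the function name and data are illustrative, and activities are assumed topologically ordered with 1 the start and J the finish).

```python
def cpm_windows(d, pred, D):
    """Earliest finish e_j and latest finish l_j for each activity, given a
    0-indexed duration list d, immediate-predecessor dict pred (keys 1..J),
    and due date D."""
    J = len(d)
    e = {}
    for j in range(1, J + 1):                 # forward pass
        e[j] = max((e[i] for i in pred[j]), default=0) + d[j - 1]
    succ = {j: [k for k in pred if j in pred[k]] for j in range(1, J + 1)}
    l = {}
    for j in range(J, 0, -1):                 # backward pass
        l[j] = min((l[k] - d[k - 1] for k in succ[j]), default=D)
    return e, l

# A small hypothetical network: 1 -> 2 -> 4 and 1 -> 3 -> 4
d = [0, 4, 5, 0]
pred = {1: [], 2: [1], 3: [1], 4: [2, 3]}
print(cpm_windows(d, pred, D=12))
# e = {1: 0, 2: 4, 3: 5, 4: 5}, l = {1: 7, 2: 12, 3: 12, 4: 12}
```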
s.t.  xi ≤ xj − dj,  ∀i ∈ Pj, j = 1, …, J    (78.2)

Σ_{j∈At} rjk ≤ Rk,  k = 1, …, K; t = 1, …, D    (78.3)

xj ∈ Z+,  j = 1, …, J    (78.4)
(78.1) is the objective, which minimizes the makespan of the project; (78.2)
expresses the precedence relations among activities; (78.3) expresses the resource
constraints; (78.4) expresses the natural integrality constraints on each activity's
finish time.
The Nested Partition method is a partitioning- and sampling-based strategy. In
each iteration of the algorithm, the entire solution space is viewed as the union of
a promising region and a surrounding region. The four operations of the Nested
Partition method are as follows.
(1) Partitioning: This step is to partition the current most promising region into
several subregions and aggregate the remaining regions into the surrounding
region. With an appropriate partitioning scheme, most of the good solutions would
be clustered together in a few subregions after the partitioning.
(2) Random Sampling: Samples are taken from the subregions and the
surrounding region according to some sampling procedure. The procedure should
guarantee a positive probability for each solution in a given region to be selected.
As we would like to obtain high quality samples, it is often beneficial to utilize
problem structure in the sampling procedure.
(3) Calculation of the Promise Index: For each region, we calculate the promise
index to determine the most promising region. The promise index should reflect
the performance of the objective function.
(4) Moving: The new most promising region is either a child of the current most
promising region or the surrounding region. If more than one region is equally
promising, ties are broken arbitrarily. When the new most promising region is the
surrounding region, backtracking is performed. The algorithm can be devised to
backtrack to either the root node or any other node along the path leading to the
current promising region.
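The four operations can be illustrated on a toy one-dimensional problem: regions are intervals, partitioning is bisection, sampling is uniform, the promise index is the best sampled objective value, and losing to the surrounding region triggers a backtrack to the root. This is a generic NP loop for illustration, not the paper's time-based scheme; all names are hypothetical.

```python
import random

def nested_partition(lo, hi, f, iters=40, samples=20, seed=1):
    """Minimize f on [lo, hi] with a toy Nested Partition loop."""
    rng = random.Random(seed)
    root = (lo, hi)
    region = root
    best_x = None
    for _ in range(iters):
        a, b = region
        mid = (a + b) / 2
        subs = [(a, mid), (mid, b)]                      # (1) partitioning

        def score(r):                                    # (2) random sampling
            xs = [rng.uniform(r[0], r[1]) for _ in range(samples)]
            return min(xs, key=f)                        # (3) promise index

        cands = [score(s) for s in subs]
        # surrounding region: samples of the root falling outside `region`
        out = [x for x in (rng.uniform(*root) for _ in range(samples))
               if not (a <= x <= b)]
        if out:
            cands.append(min(out, key=f))
        x_star = min(cands, key=f)
        if best_x is None or f(x_star) < f(best_x):
            best_x = x_star
        if out and x_star == cands[-1]:                  # (4) moving
            region = root                                # backtrack to root
        else:
            region = subs[cands.index(x_star)]           # descend
    return best_x

best = nested_partition(0.0, 10.0, lambda x: (x - 3.7) ** 2)
print(round(best, 1))  # close to 3.7
```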
By the critical path method (CPM), the earliest finish time ej and latest finish
time lj of each activity j can be calculated, giving the variable range
xj ∈ [ej, lj] under relaxation of the resource constraints.
The whole search space of the problem, without resource constraints and
precedence relations, can be described as:

[e1, l1] × [e2, l2] × … × [eJ, lJ].
Table 78.1 The parameters of activities in the project shown in Fig. 78.1
j 1 2 3 4 5 6 7 8
dj 0 4 5 1 8 1 10 0
rj 0 1 1 3 1 1 1 0
ej 0 4 5 1 12 2 11 12
lj 12 16 24 14 24 24 24 24
By CPM, a change in an activity's earliest finish time (EF) influences the EF of
the activities that have a direct or indirect successor relationship with it; a change
in an activity's latest finish time (LF) influences the LF of the activities that have
a direct or indirect predecessor relationship with it.
This influence is used to narrow the search space as much as possible. A
complete partition operation is demonstrated below. Figure 78.1 shows the
network of a project.
In Table 78.1, j is the activity index, dj the duration of activity j, rj the resource
units consumed per period while activity j is executed, and ej and lj the EF and
LF of activity j computed by CPM with deadline D = 24. The initial search space
of the project is:

[0, 12] × [4, 16] × [5, 24] × [1, 14] × [12, 24] × [2, 24] × [11, 24] × [12, 24]
Suppose that activity 2 is selected as the base point and the finish-time interval
of activity 2 is divided into two parts, [4, 10] and [11, 16]. For the part [4, 10],
the LF is reduced, which forces a modification of the activities that are (transitive)
predecessors of activity 2: the finish-time intervals of activity 1 and activity 6
become [0, 6] and [11, 16] respectively.
For the part [11, 16], the EF is raised, which forces a modification of the
activities that are (transitive) successors of activity 2: the finish-time intervals of
activity 5 and activity 8 become [11, 16] and [19, 24].
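The partition-with-propagation step can be sketched directly: halve the chosen activity's window, then tighten the LF backwards through (transitive) predecessors and the EF forwards through (transitive) successors. The example reproduces the [4, 10] / [11, 16] split of activity 2 and the resulting [0, 6] window of activity 1 on a small stand-in chain; the full network of Fig. 78.1 is not reproduced here, and the function name is illustrative.

```python
def split_and_propagate(windows, j, pred, succ, d):
    """Partition on activity j: halve its finish-time window [e_j, l_j]
    and propagate the tightened bound CPM-style. `windows` maps
    activity -> [e, l]; `d` maps activity -> duration."""
    e, l = windows[j]
    mid = (e + l) // 2
    lower = {k: list(v) for k, v in windows.items()}
    upper = {k: list(v) for k, v in windows.items()}
    lower[j] = [e, mid]           # left part: latest finish of j drops
    upper[j] = [mid + 1, l]       # right part: earliest finish of j rises
    stack = [j]
    while stack:                  # tighten LF of (transitive) predecessors
        k = stack.pop()
        for i in pred.get(k, []):
            new_lf = lower[k][1] - d[k]
            if new_lf < lower[i][1]:
                lower[i][1] = new_lf
                stack.append(i)
    stack = [j]
    while stack:                  # tighten EF of (transitive) successors
        k = stack.pop()
        for s in succ.get(k, []):
            new_ef = upper[k][0] + d[s]
            if new_ef > upper[s][0]:
                upper[s][0] = new_ef
                stack.append(s)
    return lower, upper

# Stand-in chain 1 -> 2 -> 3 with d2 = 4, d3 = 8, splitting on activity 2
low, up = split_and_propagate({1: [0, 12], 2: [4, 16], 3: [12, 24]},
                              2, {2: [1], 3: [2]}, {1: [2], 2: [3]},
                              {1: 0, 2: 4, 3: 8})
print(low[1], up[3])  # [0, 6] [19, 24]
```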
In every stage, an activity is selected to extend the partial schedule, setting its
start time and finish time without violating the resource constraints and the
precedence constraints among activities. The procedure is as follows.
Definition:
pR_kt := Rk − Σ_{j∈At} rjk, ∀k, t
En := {j | j ∉ Cn, Pj ⊆ Cn}

Initialization:
n = 1; Cn = ∅;
define En, pR_kt, t = 1, …, T, k = 1, …, K;

Stage n:
j* = min{j | t(j) = min_{i∈En} t(i)};
e_{j*} = max{FTi | i ∈ P_{j*}} + d_{j*};
FT_{j*} = min{t | e_{j*} ≤ t ≤ l_{j*}; r_{j*k} ≤ pR_{kt′}, t′ = t − d_{j*} + 1, …, t, ∀k};
Cn = Cn ∪ {j*};
n = n + 1;

For the parallel scheme, the eligible sets are:
En := {j | j ∉ Cn ∪ An, Pj ⊆ Cn, rjk ≤ pRk ∀k, tn + dj ∈ [ej, lj]}
E′n := {j | j ∉ Cn ∪ An, Pj ⊆ Cn, rjk ≤ pRk ∀k, tn + dj > lj}
E″n := {j | j ∉ Cn ∪ An, Pj ⊆ Cn, rjk ≤ pRk ∀k, tn + dj < ej}
Initialization:
n = 1; tn = 0;
En = {1}; An = Cn = ∅;
pRk := Rk, ∀k;

while |Cn ∪ An| < J do stage n:
(1) tn = min{FTj | j ∈ A_{n−1}};
An = A_{n−1} \ {j | j ∈ A_{n−1}, FTj = tn};
Cn = C_{n−1} ∪ {j | j ∈ A_{n−1}, FTj = tn};
if En = ∅ and An ≠ ∅, go to (1);
else if E″n ≠ ∅,
tn = min{ej − dj | j ∈ E″n};
redefine En; go to (2);
(2) j* = min{j | t(j) = min_{i∈En} t(i)};
An = An ∪ {j*};
redefine pRk, En;
if En ≠ ∅, go to (2);
else n = n + 1;
Activities must be selected in both the serial and the parallel schedule generation
schemes. Selecting appropriate activities, and different activities in different
schedules, can lead to better results and avoid repeatedly producing the same
optimal solution. Several sampling schemes follow.
(1) Random sampling: activities are selected randomly.
(2) Biased sampling: activities are selected according to a mapping mechanism;
with a priority rule giving the priority value u(j) of activity j, a feasible activity is
selected with probability U(j) = u(j) / Σ_{i∈En} u(i).
(3) Regret-based biased sampling: similar to biased sampling, except that a
regret value q(j) := max_{i∈En} u(i) − u(j) compares the priority value of activity
j with the worst one in the feasible activity set, where a "minimal" priority rule is
employed. The probability for j to be selected is then

U(j) = (q(j) + 1)^a / Σ_{i∈En} (q(i) + 1)^a

In these sampling schemes, the priority rules can be the same as those in
For iteration depths greater than 1, the initial schedule obtained in the last
iteration can be exploited in the current iteration. The promising region can
therefore be estimated from the finish time, in the initial schedule, of the selected
activity, where double search effort is expended.

Pi = {j | STi ≥ FTj + 1}
pR_kt = Rk − Σ_{j∈OLn} rjk, ∀k, t
OLn = ∅

Backward pass:
compute FTj, j ∈ [1, J];
sort the activities in descending order of FTj to create UL0 = (p0, p1, …, p_{J−1});
n = 0; i = 1;
while ULn ≠ ∅ begin
NewLatestFT = min{STj | j ∈ S_{pi}} − 1;
OL_{n+1} = OLn ∪ {pi};
i = i + 1;
end

Forward pass:
compute STj, j ∈ [1, J];
sort the activities in ascending order of STj to create UL0 = (q1, q2, …, qJ);
n = 0; i = 1;
while ULn ≠ ∅ begin
NewEarliestST = max{FTj | j ∈ P_{qi}} + 1;
OL_{n+1} = OLn ∪ {qi};
i = i + 1;
end
P_i is the set of immediate predecessors of activity i, and S_i is the set of immediate successors of activity i. OL_n denotes the set of activities that have been rearranged at step n; UL_n denotes the set of activities that have not yet been rearranged at step n.
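A minimal sketch of the backward (right-justification) pass, considering precedence only and ignoring resource feasibility; the names dur, succ and finish are illustrative, not from the chapter:

```python
def right_justify(dur, succ, finish, horizon):
    """One backward pass of double justification (precedence only; resource
    feasibility is ignored in this sketch).  Integer periods: an activity
    with start ST and duration d occupies ST..ST+d-1, so FT = ST + d - 1.
    """
    # Process activities in decreasing finish time, so every successor's
    # new start time is known before its predecessors are shifted.
    order = sorted(dur, key=lambda j: finish[j], reverse=True)
    new_ft, new_st = {}, {}
    for j in order:
        if succ.get(j):
            latest = min(new_st[s] for s in succ[j]) - 1
        else:
            latest = horizon
        new_ft[j] = latest
        new_st[j] = latest - dur[j] + 1
    return new_st, new_ft
```

The forward (left-justification) pass is symmetric, shifting each activity as early as its predecessors' new finish times allow.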
78.7 Moving
For each region d_j, the promising index is calculated as the best performance value within the region:
I(d_j) = min x_j,  j = 1, 2, ..., M + 1.
The promising region is selected by
d_j* = arg min_j I(d_j).
If the promising region is one of the sub-regions, we set n = n + 1 as well as d_n = d_j*, and d_n is to be partitioned in the next iteration. Otherwise, a backtracking operation is performed: we set n = n − 1 as well as d_n = d_{n−1}, and the promising region of the previous iteration is partitioned again. The iteration ends when the number of generated schedules reaches a specified amount.
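The move/backtrack decision above can be sketched as follows, assuming minimisation and that the last entry of perf is the surrounding region (a simplification of the full nested-partition bookkeeping):

```python
def np_move(perf, depth, history):
    """One move of the nested-partition method (minimisation assumed).
    perf[0..M-1] are the best sampled objective values of the sub-regions
    of the current most-promising region; perf[M] is the value of the
    surrounding region.  history records the chain of chosen sub-regions
    and is assumed non-empty when a backtrack occurs.
    """
    best = min(range(len(perf)), key=lambda j: perf[j])
    if best < len(perf) - 1:       # a sub-region is most promising: go deeper
        history.append(best)
        return depth + 1, history
    history.pop()                  # surrounding region wins: backtrack
    return depth - 1, history
```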
Based on some basic experiments, we paid particular attention to the following configuration of parameters in the NP framework: biased sizing and regret-based biased sampling with the priority rule LFT. In addition, we employ double justification to improve feasible solutions. In each iteration we generate 30 schedules in both the promising region and the surrounding region in the 1,000-schedule experiment, and 120 schedules in the 5,000-schedule experiment.
We employ the J120 test set from PSPLIB and make a comparative study (Table 78.2). We can read from the table that the NP method performs excellently among the non-intelligent algorithms and well among all the algorithms.
78.9 Conclusion
Acknowledgments This work has been supported by the Chinese National Science Fund under
the Grant 71071062, the Hubei Science Fund under the Grant 2009 CDB242 and the Scientific
Research Foundation for the Returned Overseas Chinese Scholars, State Education Ministry of
China and Open Foundation of Key Laboratory of Education Ministry for Image Processing and
Intelligent Control.
748 Z. Liu and W. Yu
References
Alcaraz J, Maroto C, Ruiz R (2004) Improving the performance of genetic algorithms for the
RCPS problem. In: Proceedings of the 9th international workshop on project management and
scheduling, pp 40–43
Bianco L, Caramia M (2012) An exact algorithm to minimize the makespan in project scheduling
with scarce resources and generalized precedence relations. Eur J Oper Res 219(1):73–85
Chen F, Ling W (2012) An effective shuffled frog-leaping algorithm for resource-constrained
project scheduling problem. Comput Oper Res 39(5):890–901
Chen W, Shi YJ, Teng HF, Lan XP, Hu LC (2010) An efficient hybrid algorithm for resource-
constrained project scheduling. Inf Sci 180(6):1031–1039
Hartmann S (1998) A competitive genetic algorithm for resource-constrained project scheduling.
Naval Res Logist 45(7):279–302
Hartmann S (2002) A self-adapting genetic algorithm for project scheduling under resource
constraints. Naval Res Logist 49(5):433–448
Kolisch R (1996) Efficient priority rules for the resource-constrained project scheduling problem.
J Oper Manag 14(3):179–192
Kolisch R, Hartmann S (2006) Experimental investigation of heuristics for resource-constrained
project scheduling: an update. Eur J Oper Res 174(1):23–37
Kolisch R, Sprecher A (1997) PSPLIB-A project scheduling problem library. Eur J Oper Res
96(1):205–216
Mendes JJM (2003) A random key based genetic algorithm for the resource constrained project
scheduling problem workshop on computer science and information technologies. Ufa, Russia
Schirmer A, Riesenberg S (1998) Case-based reasoning and parameterized random sampling for
project scheduling. Technical report, University of Kiel, Germany
Shi L, Ólafsson S (2000) Nested partitions method for global optimization. Oper Res
48(1):390–407
Tseng LY, Chen SC (2006) A hybrid metaheuristic for the resource-constrained project
scheduling problem. Eur J Oper Res 175(2):707–721
Valls V, Ballestín F (2005) Justification and RCPSP: A technique that pays. Eur J Oper Res
165(2):375–386
Ziarati K, Akbari R, Zeighami V (2011) On the performance of bee algorithms for resource-
constrained project scheduling problem. Appl Soft Comput 11(4):3720–3733
Chapter 79
An Approximate Dynamic Programming
Approach for Computing Base Stock
Levels
Hong-zhi He
Abstract This paper studies the classical model for stochastic inventory control,
i.e. a finite horizon periodic review model without setup costs. A base-stock policy
is well known to be optimal for such systems. The author gives a new heuristic
computation procedure for calculating the base-stock levels. The idea is based on
approximate dynamic programming. A numerical example is provided to serve as
an illustration.
79.1 Introduction
H. He (&)
Department of Engineering Management, Luoyang Institute of Science and Technology,
Luoyang, China
e-mail: [email protected]
79.2 Formulation
This paper considers the simplest inventory system where setup cost is negligible.
The inventory policy is implemented on a periodic-review basis. The finite time
horizon is assumed to be T periods.
79 An Approximate Dynamic Programming Approach 751
Let J_t(y) denote the optimal value function at time t, x the order-up-to level, D_t the random customer demand between time t and t + 1, f_t(D_t) the density function of the demand distribution during period t, F_t(D_t) the cumulative distribution function of customer demand during period t, p the penalty cost for one unit of backlogged product during one period, h the unit holding cost, and c the unit purchasing cost. The stochastic inventory control model can then be represented as:
J_t(y) = min_{x ≥ y} { h ∫_0^x (x − D_t) f_t(D_t) dD_t + p ∫_x^∞ (D_t − x) f_t(D_t) dD_t
+ c(x − y) + ∫_0^∞ J_{t+1}(x − D_t) f_t(D_t) dD_t }.   (79.1)
It’s well-known that this value function is convex, and the optimal policy is a
base stock policy. Computing the base stock level s becomes the key question in
this paper. In the end of the last period T, the salvage value function is assumed to
be JTþ1 ðxÞ, which is a convex, decreasing and differentiable function in x.
Taking the first-order condition, one deduces
(h + p) F_T(x) = p − ∫_0^∞ J′_{T+1}(x − D_T) f_T(D_T) dD_T − c.   (79.4)
This equation determines the last-period base stock level s_T, i.e. s_T should satisfy
F_T(s_T) = [ p − ∫_0^∞ J′_{T+1}(s_T − D_T) f_T(D_T) dD_T − c ] / (h + p).   (79.5)
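To make (79.5) concrete, the following sketch solves it by bisection under two illustrative assumptions that are not made in the chapter: demand is exponential with a given mean, and the salvage function is linear, J_{T+1}(x) = −v·x, so J′_{T+1} ≡ −v and the integral in (79.5) collapses:

```python
import math

def last_period_base_stock(p, h, c, v, mean_demand, tol=1e-9):
    """Solve (79.5) for s_T by bisection.  With J_{T+1}(x) = -v*x the
    right-hand side of (79.5) reduces to (p + v - c) / (h + p), so s_T
    is simply that quantile of the (here exponential) demand CDF.
    """
    target = (p + v - c) / (h + p)
    assert 0.0 < target < 1.0, "parameters must give a valid quantile"
    F = lambda x: 1.0 - math.exp(-x / mean_demand)   # exponential CDF
    lo, hi = 0.0, mean_demand
    while F(hi) < target:            # grow the bracket until it contains s_T
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if F(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Any demand CDF can be substituted for F without changing the bisection logic.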
Now let’s interpolate the value function for period T by a quadratic function.
Take three points: y ¼ s2T ; y ¼ 3s2T ; y ¼ 2sT . For y ¼ s2T ; from base stock
policy we have x ¼ sT at minimum. Value function
J_T(s_T/2) = h ∫_0^{s_T} (s_T − D_T) f_T(D_T) dD_T + p ∫_{s_T}^∞ (D_T − s_T) f_T(D_T) dD_T
+ c(s_T − s_T/2) + ∫_0^∞ J_{T+1}(s_T − D_T) f_T(D_T) dD_T.   (79.6)
For y = 3s_T/2, the base stock policy gives x = 3s_T/2 at the minimum. The value function is
J_T(3s_T/2) = h ∫_0^{3s_T/2} (3s_T/2 − D_T) f_T(D_T) dD_T + p ∫_{3s_T/2}^∞ (D_T − 3s_T/2) f_T(D_T) dD_T
+ ∫_0^∞ J_{T+1}(3s_T/2 − D_T) f_T(D_T) dD_T.   (79.7)
For y = 2s_T, the base stock policy gives x = 2s_T at the minimum. The value function is
J_T(2s_T) = h ∫_0^{2s_T} (2s_T − D_T) f_T(D_T) dD_T + p ∫_{2s_T}^∞ (D_T − 2s_T) f_T(D_T) dD_T
+ ∫_0^∞ J_{T+1}(2s_T − D_T) f_T(D_T) dD_T.   (79.8)
i.e.
(h + p) F_t(x) + 2a_{t+1} x = p + 2k_t a_{t+1} − b_{t+1} − c,   (79.11)
where k_t denotes the mean demand of period t.
One solves s_t from the above equation. Then let y = s_t/2, 3s_t/2, 2s_t, and compute J_t(s_t/2), J_t(3s_t/2) and J_t(2s_t). For y = s_t/2, x = s_t,
J_t(s_t/2) = h ∫_0^{s_t} (s_t − D_t) f_t(D_t) dD_t + p ∫_{s_t}^∞ (D_t − s_t) f_t(D_t) dD_t + c(s_t − s_t/2)
+ ∫_0^∞ ~J_{t+1}(s_t − D_t) f_t(D_t) dD_t
= h ∫_0^{s_t} (s_t − D_t) f_t(D_t) dD_t + p ∫_{s_t}^∞ (D_t − s_t) f_t(D_t) dD_t + c(s_t − s_t/2)
+ ∫_0^∞ [ a_{t+1}(s_t − D_t)² + b_{t+1}(s_t − D_t) + c_{t+1} ] f_t(D_t) dD_t.   (79.12)
For y = 3s_t/2, x = 3s_t/2,
J_t(3s_t/2) = h ∫_0^{3s_t/2} (3s_t/2 − D_t) f_t(D_t) dD_t + p ∫_{3s_t/2}^∞ (D_t − 3s_t/2) f_t(D_t) dD_t
+ ∫_0^∞ ~J_{t+1}(3s_t/2 − D_t) f_t(D_t) dD_t
= h ∫_0^{3s_t/2} (3s_t/2 − D_t) f_t(D_t) dD_t + p ∫_{3s_t/2}^∞ (D_t − 3s_t/2) f_t(D_t) dD_t
+ ∫_0^∞ [ a_{t+1}(3s_t/2 − D_t)² + b_{t+1}(3s_t/2 − D_t) + c_{t+1} ] f_t(D_t) dD_t.   (79.13)
For y = 2s_t, x = 2s_t,
J_t(2s_t) = h ∫_0^{2s_t} (2s_t − D_t) f_t(D_t) dD_t + p ∫_{2s_t}^∞ (D_t − 2s_t) f_t(D_t) dD_t
+ ∫_0^∞ ~J_{t+1}(2s_t − D_t) f_t(D_t) dD_t
= h ∫_0^{2s_t} (2s_t − D_t) f_t(D_t) dD_t + p ∫_{2s_t}^∞ (D_t − 2s_t) f_t(D_t) dD_t
+ ∫_0^∞ [ a_{t+1}(2s_t − D_t)² + b_{t+1}(2s_t − D_t) + c_{t+1} ] f_t(D_t) dD_t.   (79.14)
Fitting the above three (y, J_t(y)) pairs to the quadratic function ~J_t(y) = a_t y² + b_t y + c_t, one gets a_t, b_t and c_t.
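The interpolation step itself is ordinary: fit a quadratic exactly through the three sampled points. A minimal sketch (Cramer's rule on the 3×3 Vandermonde system; any polynomial-fit routine would do):

```python
def fit_quadratic(points):
    """Fit a*y^2 + b*y + c exactly through three (y, J) points by solving
    the 3x3 Vandermonde system with Cramer's rule."""
    (y1, j1), (y2, j2), (y3, j3) = points
    det = lambda m: (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                     - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                     + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    V = [[y1 * y1, y1, 1.0], [y2 * y2, y2, 1.0], [y3 * y3, y3, 1.0]]
    d = det(V)
    coeffs = []
    for k in range(3):                     # replace column k with J values
        M = [row[:] for row in V]
        for r, j in zip(M, (j1, j2, j3)):
            r[k] = j
        coeffs.append(det(M) / d)
    a, b, c = coeffs
    return a, b, c
```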
One loop is then finished. Repeating the above procedure, one gets {s_t, J_t} for t = T − 1, T − 2, ..., 1.
~J_11(14) = 1 · Σ_{k=0}^{14} (14 − k) (4^k/k!) e^{−4} + 10 · Σ_{k=14}^{∞} (k − 14) (4^k/k!) e^{−4}
+ Σ_{k=0}^{∞} [ a_12 (14 − k)² + b_12 (14 − k) + c_12 ] (4^k/k!) e^{−4}.
By using Matlab one gets ~J_11(14) = 36.0792. By using the Lagrange interpolation method one gets
~J_11(y) = 0.3677 y² − 6.7756 y + 58.8762.
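The two truncated Poisson sums above follow a standard pattern, h·E[(x − D)^+] + p·E[(D − x)^+]; a small sketch that evaluates this pattern by direct summation (the quadratic continuation term of ~J_11 is omitted here):

```python
import math

def poisson_piecewise_cost(x, lam, h, p, terms=100):
    """h*E[(x - D)^+] + p*E[(D - x)^+] for D ~ Poisson(lam), summed
    directly.  The pmf is advanced iteratively to avoid huge factorials.
    """
    pk = math.exp(-lam)            # P(D = 0)
    hold = short = 0.0
    for k in range(terms):
        if k <= x:
            hold += (x - k) * pk
        else:
            short += (k - x) * pk
        pk *= lam / (k + 1)        # advance to P(D = k + 1)
    return h * hold + p * short
```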
The data for the following periods are listed in Table 79.1.
Finally, the base stock level solved for period 1 is 14, which departs considerably from the expected mean of one-period demand (k = 4). This is because of the cumulative effect of the relatively high penalty cost: the holding cost is low compared with the penalty cost, so holding stock against future shortages pays. The base stock levels are roughly decreasing over the time periods, which validates the intuition that base stock levels grow as the remaining time horizon grows.
79.5 Conclusion
References
Chen F, Song JS (2001) Optimal policies for multiechelon inventory problems with Markov-
modulated demand. Oper Res 49:226–234
Gaver DP (1959) On base-stock level inventory control. Oper Res 7:689–703
Huh WT, Janakiraman G, Muckstadt JA, Rusmevichientong P (2009) An adaptive algorithm for
finding the optimal base-stock policy in lost sales inventory systems with censored demand.
Math Oper Res 34(2):397–416
Iglehart D, Karlin S (1962) Optimal policy for dynamic inventory process with non-stationary
stochastic demands. In: Arrow KJ, Karlin S, Scarf H (eds) Studies in applied probability and
management science. Stanford University Press, Stanford, pp 127–147
Iida T, Zipkin PH (2006) Approximate solutions of a dynamic forecast-inventory model. Manuf
Serv Oper Manag 8:407–425
Karlin S (1960) Dynamic inventory policy with varying stochastic demands. Manag Sci
6:231–258
Levi R, Pál M, Roundy RO, Shmoys DB (2007) Approximation algorithms for stochastic
inventory control models. Math Oper Res 32:284–302
Lovejoy W (1992) Stopped myopic policies in some inventory models with generalized demand
processes. Manag Sci 38:688–707
Morton TE, Pentico DW (1995) The finite horizon nonstationary stochastic inventory problem:
near-myopic bounds, heuristics, testing. Manag Sci 41:334–343
Roundy RO, Muckstadt JA (2000) Heuristic computation of periodic-review base stock inventory
policies. Manag Sci 46:104–109
Song J, Zipkin P (1993) Inventory control in a fluctuating demand environment. Oper Res
41(2):351–370
Srinagesh G, Sridhar T (2001) An efficient procedure for non-stationary inventory control. IIE
Trans 33:83–89
Zipkin P (1989) Critical number policies for inventory models with periodic data. Manag Sci
35:71–80
Zipkin P (2000) Foundations of inventory management. The McGraw-Hill Companies, Boston,
pp 375–378
Chapter 80
An Inventory Model Considering Credit
Cost and Demand Rate Varying
with Credit Value
80.1 Introduction
(Paul et al. 1996; Pando 2011; Zhou and Wang 2009). Assume the idealized condition that, when a supplier provides a certain product to a fixed group of customers, the supplier does not take the initiative to increase its credit (for example by advertising); consequently the system is constantly caught in a vicious circle in which backorder leads to a drop in credit, which subsequently causes a reduction in demand rate. Under this premise, the supplier needs to formulate its optimal inventory policy to achieve maximum operating profit. This paper analyzes how backorder affects credit cost, starting from delivering situations.
The following assumptions are made to simplify the research model: (1) The initial demand rate is a constant (assume it is 1), and the loss of accumulated credit caused by backorder directly affects the demand rate; (2) Replenishment is made instantaneously and replenishing ability is infinite, i.e. any amount of replenishment can be obtained promptly; (3) There is no lead time for ordering, i.e. goods are replenished instantaneously when orders are made; (4) Backorder is allowed; (5) The ordering horizon is finite, and the operational process begins and ends without backorder; (6) Price discount is not considered.
Definitions of each notation included in the model are as follows:
H = time span of ordering;
c1 = inventory holding cost per unit per time span;
c2 = backorder cost per unit per time span;
c3 = cost per order;
ti = time point of backorder;
Ti = time point of ordering;
l_i = time span during which we have inventory in hand in the ith cycle, l_i = t_i − T_{i−1};
L_i = total time span of the ith cycle, L_i = T_i − T_{i−1};
a = credit losing coefficient, i.e. the proportion of people who lose faith in the company when backorder occurs, a ∈ [0, 1];
P_i = the accumulated credit after the ith cycle, and
P_i = 1 − a[ i − ( l_1/L_1 + l_2/L_2 + l_3/L_3 + ... + l_i/L_i ) ];
λ_i = demand rate in the ith cycle, λ_i = P_{i−1} λ, where λ is the initial demand rate, which is a constant;
Q_i = amount of order at T_{i−1};
s = selling price per unit;
k = purchase price per unit.
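The credit recursion P_i can be computed directly from the cycle data; a small sketch (the function name is illustrative):

```python
def accumulated_credit(l, L, a):
    """Credit after each cycle: P_i = 1 - a*(i - sum_j l_j/L_j), where
    l[j] and L[j] are the in-stock span and total span of cycle j+1.
    With no backorder (l == L) the credit stays at 1."""
    credits = []
    ratio_sum = 0.0
    for i, (li, Li) in enumerate(zip(l, L), start=1):
        ratio_sum += li / Li
        credits.append(1.0 - a * (i - ratio_sum))
    return credits
```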
80 An Inventory Model Considering Credit Cost and Demand 761
(Fig. 80.1 shows the amount of inventory over time, with backorder points t_1, t_2, ... and ordering points T_1, T_2, ..., T_n.)
80.3.1 Model A
max TP(n, l_i, L_i)
s.t.  l_i ≤ L_i,
      l_n = L_n,
      L_1 + L_2 + ... + L_n = H,   (80.2)
      L_i ≥ 0,  l_i ≥ 0.
80.3.2 Model B
(Fig. 80.2 shows the same inventory pattern, with the demand rate dropping to (1 − a(t − t_1)/t)λ during a backorder interval.)
(1/2) λ c_1 l_i² [ 1 − (i − 1)a + a( l_1/L_1 + l_2/L_2 + ... + l_{i−1}/L_{i−1} ) ];
the backorder cost is:
λ c_2 ∫_{t_i}^{T_i} (T_i − t) [ 1 − ia + a l_1/L_1 + a l_2/L_2 + ... + a l_i/(t − T_{i−1}) ] dt.
At the end of the ith cycle, the accumulated credit of the supplier is:
P_i = 1 − a[ i − ( l_1/L_1 + l_2/L_2 + l_3/L_3 + ... + l_i/L_i ) ].
Total cost within the time span H is:
TC = C(L_1) + C(L_2) + ... + C(L_n),
and total revenue is TI = (s − k) Σ_{i=1}^{n} Q_i; thus the total profit is TP = TI − TC. As in Model A, this is a nonlinear programming problem.
As a increases from 0 to 1, TP declines, and the amount of its reduction is in fact the cost defrayed by the company owing to reduced credit, i.e., the credit cost. 3. When the value of a is fixed, the credit cost in Model B is generally larger than that in Model A, owing to the loss brought about by the dissemination of information.
Next, let us consider the situation when n ≥ 3, i.e., when there are more than two orders within the fixed time period. LINGO is used to solve the problem with the value of n or a varying, and the results are shown in Table 80.2.
We can see from Table 80.2 that: 1. When a is fixed and n ascends from 2 to 4, total profit rises. 2. For a fixed value of a, the relative credit cost does not increase regularly with the rise of n but depends on the value of a; when a exceeds the critical point, the opposite situation occurs.
80.5 Conclusion
This paper has proposed, for the first time, the concept of credit cost, analyzed how backorder affects credit cost starting from delivering conditions, considered the influence of credit loss on the demand rate, and derived an appropriate model that verifies the existence of credit cost with specific numerical calculations. The results indicate that suppliers will reduce backorder as the credit losing coefficient a increases; that less profit is achieved in models that take credit cost into consideration compared with those that do not, the reduction in profit being precisely the credit cost; and that the rate of change in credit cost descends gradually as a increases. Moreover, for the same a, the credit cost in Model B is generally larger than that in Model A, owing to the fact that concealing backorder information reduces credit cost.
References
Gupta R, Vrat P (1986) Inventory models for stock dependent consumption rate. Oper Res
23(01):19–24
Hariga M (1994) Inventory lot-sizing problem with continuous time-varying demand and
shortages. J Oper Res Soc 45(07):827–837
Henery RJ (1990) Inventory replenishment policy for cyclic demand. J Oper Res Soc
41(07):639–643
Jie M, Zhou Y, Liu Y, Ou J (2011) Supply chain inventory model with two levels of trade credit
and time varying demand. Syst Eng Theory Pract 31(2):262–269
Pando V, Garcia-Laguna J (2011) Maximizing profits in an inventory model with both demand
rate and holding cost per unit time dependent on the stock level. Comput Ind Eng
62(2):599–608
Paul K, Datte TK, Chaudhuri KS, Pal AK (1996) Inventory model with two component demand
rate and shortages. J Oper Res Soc 47(08):1029–1036
Ritchie E (1980) Practical inventory replenishment policies for a linear trend in demand followed
by a period of steady demand. J Oper Res Soc 31(07):605–613
Zhao X, Huang S (2008) Inventory management. Tsinghua University Press, Beijing
Zhou Y, Wang S (2009) Theory and method for inventory control. Science Press, Beijing
Chapter 81
An M/M/1 Queue System with Single
Working Vacation and Impatient
Customers
Xiao-ming Yu, De-an Wu, Lei Wu, Yi-bing Lu and Jiang-yan Peng
Abstract In this paper, an M/M/1 queueing system with a single working vacation and impatient customers is considered. In this system, the server serves at a slower rate during a working vacation, and customers become impatient due to the slow service rate. If the server comes back from a vacation to an empty system, it waits dormant for the first arrival and thereafter opens a busy period; otherwise, the server starts a busy period directly. The customers' impatience times follow independent exponential distributions; if a customer's service has not been completed before the customer becomes impatient, the customer abandons the queue and does not return. The model is analyzed and various performance measures are derived. Finally, several numerical examples are presented.
Keywords Impatient customers · M/M/1 queue · Probability generating functions · Single working vacation
81.1 Introduction
X. Yu D. Wu (&) L. Wu J. Peng
School of Mathematical Sciences, University of Electronic Science
and Technology of China, Chengdu, People’s Republic of China
e-mail: [email protected]
D. Wu Y. Lu
Faculty of Science, Kunming University of Science and Technology,
Kunming, Yunnan, People’s Republic of China
equation for G_0(z) is obtained, where G_0(z) is the generating function of the queue size when the server is on vacation. It is then easy to calculate the fractions of time the server spends in a working vacation and in a busy period. Various performance measures, including the mean system size and the mean sojourn time of a served customer, are obtained. Section 81.3 gives some numerical results.
We study an M/M/1 queue with a single working vacation and impatient customers. We assume that the arrival process is a Poisson process with parameter λ and that customers are served on a first-come first-served (FCFS) basis. If the server returns from a working vacation to find the system empty, it waits dormant for the first arrival and thereafter opens a busy period. The working vacation follows an exponential distribution with parameter γ, and the service times during a service period and during a working vacation follow exponential distributions with parameters μ_b and μ_v, respectively, where μ_b > μ_v. During the working vacation, customers become impatient, and the impatience times follow an exponential distribution with parameter ξ. If a customer's service has not been completed before the customer becomes impatient, the customer abandons the queue and does not return.
Remark 1 If μ_v = 0, the current model becomes the M/M/1 queueing model with a single vacation and impatient customers studied by Altman and Yechiali (2006). If ξ = 0, the current model becomes the M/M/1 queue with a single working vacation studied by Zhao et al. (2008).
The total number of customers in the system and the number of working servers are denoted by L and J, respectively; in other words, when J = 0 the server is in a working vacation, and when J = 1 the server is in a service period. Then the pair {J, L} constitutes a continuous-time Markov process with the transition-rate diagram shown in Fig. 81.1. The steady-state probabilities are defined by P_jn = P{J = j, L = n}, j = 0, 1; n = 0, 1, 2, ....
Then, we can get the set of balance equations as follows:
770 X. Yu et al.
j = 0:
n = 0:  (γ + λ)P_00 = (ξ + μ_v)P_01 + μ_b P_11,
n ≥ 1:  (λ + γ + μ_v + nξ)P_0n = λP_0,n−1 + [μ_v + (n + 1)ξ]P_0,n+1;   (81.1)

j = 1:
n = 0:  λP_10 = γP_00,
n ≥ 1:  (λ + μ_b)P_1n = λP_1,n−1 + γP_0n + μ_b P_1,n+1.   (81.2)
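The balance equations can be checked numerically by truncating the state space and computing the stationary distribution; a rough sketch via uniformization and power iteration (parameter names are illustrative, and the truncation level N is an approximation):

```python
def steady_state(lam, gamma, mu_v, mu_b, xi, N=30, sweeps=20000):
    """Stationary probabilities of the truncated chain behind (81.1)-(81.2).
    State (j, n): j = 0 working vacation, j = 1 service period.  From
    (0, n): arrival lam, end of vacation gamma, departure mu_v + n*xi
    (service or abandonment); from (1, n): arrival lam, completion mu_b,
    where the completion from (1, 1) empties the system into (0, 0).
    """
    rates = {}
    for n in range(N + 1):
        out = [((1, n), gamma)]
        if n < N:
            out.append(((0, n + 1), lam))
        if n >= 1:
            out.append(((0, n - 1), mu_v + n * xi))
        rates[(0, n)] = out
        out = []
        if n < N:
            out.append(((1, n + 1), lam))
        if n >= 2:
            out.append(((1, n - 1), mu_b))
        elif n == 1:
            out.append(((0, 0), mu_b))
        rates[(1, n)] = out
    tot = {s: sum(r for _, r in v) for s, v in rates.items()}
    Lam = max(tot.values())                      # uniformization constant
    pi = {s: 1.0 / len(rates) for s in rates}
    for _ in range(sweeps):                      # power iteration on P = I + Q/Lam
        new = {s: pi[s] * (1.0 - tot[s] / Lam) for s in pi}
        for s, outs in rates.items():
            for t, r in outs:
                new[t] += pi[s] * r / Lam
        pi = new
    return pi
```

At stationarity the flow balance λP_10 = γP_00 of (81.2) must hold, which gives a convenient sanity check on the result.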
The probability generating functions (PGFs) are defined by
G_j(z) = Σ_{n=0}^{∞} P_jn z^n,  j = 0, 1.
(λ + γ)G_0(z) + ξ z G_0′(z) = λ z G_0(z) + ξ G_0′(z) + μ_b P_11.   (81.5)
This agrees with Altman and Yechiali (2006) (see (5.3), p. 274).
(2) Letting ξ = 0 in (81.4), we get the M/M/1 queue with a single working vacation. This agrees with Zhao et al. (2008).
Equation (81.8) expresses G0 ðzÞ in terms of P00 and P11 . Also, from (81.4),
G1 ðzÞ is a function of G0 ðzÞ, P00 , P11 . Thus, once P00 and P11 are calculated, G0 ðzÞ
and G1 ðzÞ are completely determined. We derive the probabilities P00 , P11 and the
mean system sizes in the next subsection.
Since G_0(1) = P_0· = Σ_{n=0}^{∞} P_0n > 0 and lim_{z→1} (1 − z)^{−γ/ξ} = ∞, we must have
μ_b P_11 k_1(1) − μ_v P_00 k_2(1) = 0,
implying that
P_00 = μ_b P_11 k_1(1) / (μ_v k_2(1)) = γ G_0(1) k_1(1) / (μ_v k_2(1)).   (81.11)
And (81.4) can be written as
G_1(z) = [γ z G_0(z) + μ_b P_10 z − μ_b P_11 z − μ_b P_10] / [(λz − μ_b)(1 − z)].   (81.12)
By using L'Hôpital's rule, we derive
G_1(1) = [μ_b P_10 + γ G_0′(1)] / (μ_b − λ),   (81.13)
implying that
E[L_0] = G_0′(1) = [(μ_b − λ)/γ] P_1· − (μ_b/λ) P_00.   (81.14)
On the other hand, from (81.3),
E[L_0] = lim_{z→1} G_0′(z) = [(λ − μ_v)G_0(1) − γ G_0′(1) + μ_v P_00] / ξ,   (81.15)
implying that
G_0′(1) = [(λ − μ_v)G_0(1) + μ_v P_00] / (γ + ξ).   (81.16)
Equating the two expressions (81.14) and (81.16) for E[L_0], and using 1 = P_0· + P_1·, we get
G_0(1) = (λξμ_b + λγμ_b − ξλ² − γλ²) μ_v k_2(1) / (g_1 + g_2),   (81.17)
P_00 = [γ k_1(1) / (μ_v k_2(1))] G_0(1),   (81.18)
where
g_1 = (λξμ_b + λγμ_b − λγμ_v − ξλ²) μ_v k_2(1),
g_2 = (μ_b γ² + γξμ_b + λγμ_v) γ k_1(1).
Denote by S the total sojourn time of a customer in the system, measured from the moment of arrival to departure, whether through completion of service or as a result of abandonment. By Little's law,
E[S] = E[L] / λ = (E[L_0] + E[L_1]) / λ.   (81.22)
Denote by S_served the total sojourn time of a customer who completes his service; we note that S_served is the more important performance measure. Denote by S_jn the conditional sojourn time of a customer who does not leave the system prematurely, where (j, n) is the state on his arrival. Evidently, E[S_1n] = (n + 1)/μ_b, but this is for n = 0, 1, 2, ... rather than for n ≥ 1 as in (81.15).
Now, we derive E[S_0n] by using the method of Altman and Yechiali (2006). When J = 0, for n ≥ 1,
E[S_0n] = (γ/h_{n+1})(1/h_{n+1} + E[S_1n]) + (λ/h_{n+1})(1/h_{n+1} + E[S_0n])
+ ((nξ + μ_v)/h_{n+1})(1/h_{n+1} + E[S_0,n−1]),   (81.23)
where
h_n = γ + λ + μ_v + nξ
for n = 0, 1, 2, .... The second term above is derived from the fact that a newly arriving customer does not influence the sojourn time of a customer already present in the system. The probability n/(n + 1) in the third term comes from the fact that, when one of the n + 1 waiting customers abandons, our customer is not the one to leave.
Then,
[γ + (n + 1)ξ + μ_v] E[S_0n] = h_n/h_{n+1} + γ(n + 1)/μ_b + (nξ + μ_v) E[S_0,n−1].   (81.24)
We also have
E[S_00] = (γ/h_1)(1/h_1 + 1/μ_b) + (λ/h_1)(1/h_1 + E[S_00]) + (μ_v/h_1)(1/h_1),   (81.25)
implying that
E[S_00] = (1/(γ + ξ + μ_v)) (h_0/h_1 + γ/μ_b).   (81.26)
Iterating (81.23) we obtain, for n ≥ 1,
E[S_0n] = (1/(γ + (n + 1)ξ + μ_v)) (h_n/h_{n+1} + (n + 1)γ/μ_b)
+ Σ_{i=1}^{n} { (1/(γ + (i + 1)ξ + μ_v)) (h_i/h_{i+1} + (i + 1)γ/μ_b) · Π_{j=i}^{n} (jξ + μ_v)/(γ + (j + 1)ξ + μ_v) }.   (81.27)
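The recursion (81.24), seeded by (81.26), lends itself to direct computation; a minimal sketch with illustrative parameter names:

```python
def mean_sojourn_vacation(n_max, lam, gamma, mu_v, mu_b, xi):
    """E[S_0n] for n = 0..n_max via the recursion (81.24) with the base
    case (81.26), where h_n = gamma + lam + mu_v + n*xi:
      E[S_00] = (h_0/h_1 + gamma/mu_b) / (gamma + xi + mu_v),
      [gamma + (n+1)*xi + mu_v] E[S_0n]
          = h_n/h_{n+1} + gamma*(n+1)/mu_b + (n*xi + mu_v) E[S_0,n-1].
    """
    h = lambda n: gamma + lam + mu_v + n * xi
    s = [(h(0) / h(1) + gamma / mu_b) / (gamma + xi + mu_v)]
    for n in range(1, n_max + 1):
        rhs = (h(n) / h(n + 1) + gamma * (n + 1) / mu_b
               + (n * xi + mu_v) * s[-1])
        s.append(rhs / (gamma + (n + 1) * xi + mu_v))
    return s
```

As expected, the conditional sojourn time grows with the number of customers found on arrival.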
Finally, we use the expression for E[S_1n] and derive
E[S_served] = Σ_{n=0}^{∞} P_1n E[S_1n] + Σ_{n=0}^{∞} P_0n E[S_0n],   (81.28)
implying that
E[S_served] = (E[L_1] + P_1·)/μ_b + Σ_{n=0}^{∞} P_0n E[S_0n].   (81.29)
Note that the first sum in (81.28) starts from n = 0, which differs from [15].
In this section, we show numerical examples for the results obtained in Sect. 81.2.4. The effects of μ_v and ξ on E[L_0] and E[L] in (81.14) and (81.21) are shown in Fig. 81.2. Evidently, as μ_v increases, namely as the server works faster and faster, the mean system size during the working vacation E[L_0] and the mean system size E[L] decrease when ξ is fixed. We also find that the smaller ξ is, the bigger E[L_0] and E[L] are. The numerical analysis demonstrates the influence of the parameters on the performance measures of the system, and the results are suitable for practical situations.
81.4 Conclusions
In this paper, we have studied an M/M/1 queueing system with a single working vacation and impatient customers. In this system, the server serves at a slower rate during a working vacation, and customers become impatient due to the slow service rate. If the server comes back from a vacation to an empty system, it waits dormant for the first arrival and thereafter opens a busy period; otherwise, the server starts a busy period directly. The customers' impatience times follow independent exponential distributions; if a customer's service has not been completed before the customer becomes impatient, the customer abandons the queue and does not return. The probability generating functions of the number of customers in the system are derived, and the corresponding mean system sizes are obtained both when the server is in a service period and when it is in a working vacation. We have also obtained closed-form expressions for other performance measures, such as the mean sojourn time of a served customer.
References
Altman E, Yechiali U (2006) Analysis of customers impatience in queues with server vacations.
Queueing Theory 52:261–279
Baba Y (2005) Analysis of a GI/M/1 queue with multiple working vacations. Oper Res Lett
33:201–209
Baccelli F, Boyer P, Hebuterne G (1984) Single-server queues with impatient customers. Adv
Appl Probability 16:887–905
Boxma OJ, de Waal PR (1994) Multiserver queues with impatient customers. ITC 14:743–756
Daley DJ (1965) General customer impatience in the queue GI/G/1. J Appl Probab 2:186–205
Li J, Tian N, Zhang Z (2009) Analysis of the M/G/1 queue with exponentially working vacations
a matrix analytic approach. Queueing Syst 61:139–166
Palm C (1953) Methods of judging the annoyance caused by congestion. Tele 4:189–208
Servi LD, Finn SG (2002) M/M/1 queues with working vacations (M/M/1/WV). Perform Eval
50:41–52
Takacs L (1974) A single-server queue with limited virtual waiting time. J Appl Probab
11:612–617
Van Houdt B, Lenin RB, Blondia C (2003) Delay distribution of (im)patient customers in a
discrete time D-MAP/PH/1 queue with age-dependent service times. Queueing Syst 45:59–73
Wu D, Takagi H (2006) M/G/1 queue with multiple working vacations. Perform Eval 63:654–681
Yue D, Yue W (2009) Analysis of M/M/c/N queueing system with balking, reneging and
synchronous vacations. Advances in queueing theory and network applications. Springer,
New York, pp 165–180
Yue D, Yue W (2011) Analysis of a queueing system with impatient customers and working
vacations. QTNA’11 Proceedings of the 6th international conference on queueing theory and
network applications. ACM, NY, USA, pp 208–212
Zhao X, Tian N, Wang K (2008) The M/M/1 Queue with Single Working Vacation. Int J Inf
Manage Sci 19:621–634
Chapter 82
Analysis of Bank Queueing Based
on Operations Research
Abstract This paper considers the bank queueing problem based on the M/M/n model. Through an analysis of the special situation of the input distribution function and of customer loss, it proposes a segmented-λ input model and a lost-customer queueing model. Using actual data collected from a bank, the calculated probability of losing a customer provides a suggestion for the bank's further improvement.
Keywords Changeable customer arrival rate · Changeable input rate · Impatient customer · Queueing model · Stable distribution
82.1 Introduction
With the development of the economy, modern finance has become an essential part of society. Banks, the main body of the financial industry, have also become one of the most important service systems. Queueing is a common phenomenon in our daily life, and queueing problems in banks are a hot issue of wide concern. Queueing contributes much to the loss of customers, and with it the loss of money, so solving the queueing problem efficiently and economically becomes more and more vital (Fan and Yuan 2005).
L. Li (&) J. Wu J. Ding
School of Mechanical Engineering, Beijing Institute of Technology, Beijing, China
e-mail: [email protected]
Some standard notations for arrival distributions are the Poisson distribution, the Erlang distribution with j phases, the degenerate (deterministic) distribution, the general (arbitrary) distribution, and the phase-type distribution. One of the most commonly adopted is the Poisson distribution (Xiang 2012).
A discrete stochastic variable X is said to have a Poisson distribution with parameter λ > 0 if, for k = 0, 1, 2, ..., the probability mass function of X is given by
P(X = k) = e^(−λ) λ^k / k!   (82.1)
where λ is the mean arrival rate.
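A direct transcription of (82.1):

```python
import math

def poisson_pmf(k, lam):
    """P(X = k) for X ~ Poisson(lam), as in (82.1)."""
    return math.exp(-lam) * lam ** k / math.factorial(k)
```

Summing k·P(X = k) over k recovers the mean arrival rate λ, which is a quick consistency check.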
A Poisson input is required to satisfy the following characteristics (Qin 2008):
(1) Stationarity. During any period of length t, the probability of an event happening depends only on the value of t, not on the position of the interval.
(2) Memorylessness (the Markov property). In two mutually disjoint time intervals T1 and T2, the numbers of events are independent.
(3) Orderliness. The probability of more than one event (more than one customer arriving at the bank) happening at the same instant can be ignored.
82 Analysis of Bank Queueing Based on Operations Research 779
Consider first only the input process and the formation of the steady-state condition. Since the opening time of the bank is from 9:00 am to 5:00 pm and some customers arrive at the bank before the branch starts operating, we selected the data from 8:30 am to 5:00 pm. In the meantime, we took 15-min intervals as the statistical unit of the data.
According to the collected data in Fig. 82.1, more than one peak appears, with customer arrivals reaching relatively high values. The first peak happens between 9:00 and 10:00 am every day, because the bank opens at 9:00 am and many customers are in a rush to do business. The second peak is between 1:00 and 2:00 pm, as many office workers go to the bank during the lunch break. The last peak is between 4:00 and 5:00 pm, as customers want to catch the last chance before the bank closes for the day. Outside these three peaks, the number of customers is relatively low.
This paper segments the sampling time, recording the number of customer arrivals in each time period as X and the frequency distribution of those numbers as n (Liu and Zhang 2008).
The null hypothesis of the test is:
H0: X ~ P(λ)   (82.2)
The unknown λ is estimated by maximum likelihood:
λ̂ = x̄ = (1/n) Σ_i n_i x_i   (82.3)
The Pearson’s Chi squared test is used to test the null hypothesis. Karl Pearson
calculated the frequency as pi ¼ ni =n (Yao and Liu 2011), and then the value of
the test-statistic.
X
n
ðni npi Þ2
v2 ¼ ð82:4Þ
i¼1
npi
R. A. Fisher’s exact test proved that if the null hypothesis failed to be rejected
and the value of n is large enough, the test-statistic generally follows a Chi squared
distribution as the degree of freedom being n-r-1 (Wang 2010). The critical
region of a significance level a is.
W = v2 [ v2a ðk r 1Þ ð82:6Þ
where k is the number of groups, r is the number of unknown values used to
calculate the sample data estimation of test-statistics.
Table 82.1 shows the statistics of customer arrivals within 15-min intervals after the data were segmented into groups.
After calculation and testing, the overall customer arrivals follow a Poisson distribution with λ = 12.1. Considering the customer arrival frequency, the difference between days can be neglected based on the analysis of long-term data, but within a single day the change of the parameter across hours cannot be ignored. It is therefore more practical, and closer to the bank's actual arrival pattern, to adopt a specific parameter for each hour of the day.
Since the peaks and troughs generally fall within an hour, the value of λ is treated as constant over each one-hour interval (Yang 2008). To eliminate the influence of the peaks and troughs, we selected one hour as the unit to further analyze the parameter variation of the Poisson flow.
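The λ estimation (82.3) and test statistic (82.4) above can be sketched in Python. The function name and the synthetic counts are illustrative, and a production test would also pool groups with small expected counts before computing the statistic:

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    return lam**k * exp(-lam) / factorial(k)

def chi_square_poisson(counts):
    """counts[i] = number of 15-min intervals in which i customers arrived.
    Returns (lambda_hat, chi2, dof) for a Poisson goodness-of-fit test."""
    n = sum(counts)
    # MLE of lambda is the sample mean, eq. (82.3)
    lam_hat = sum(i * ni for i, ni in enumerate(counts)) / n
    chi2 = 0.0
    for i, ni in enumerate(counts):
        expected = n * poisson_pmf(i, lam_hat)   # n * p_i under H0
        if expected > 0:
            chi2 += (ni - expected)**2 / expected
    dof = len(counts) - 1 - 1                    # k groups, r = 1 estimated parameter
    return lam_hat, chi2, dof
```

The computed statistic is then compared with the critical value χ²_α(k − r − 1) from (82.6).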
When customers arrive at the system, their responses are of three types. The first type of customer leaves the system if all servers are occupied. The second type waits even if the traffic intensity exceeds the number of servers. The third type leaves the system once the number of waiting customers reaches the queue capacity. For the second and third types, the queueing discipline defines the order in which customers are served: first in first out, last in first out, processor sharing, or priority. The queue can also be organized as a single-queue or a multiple-queue system.
The queueing discipline in the bank is a typical example of the second type mentioned above, and the queue capacity can be considered infinite, although customers may still be lost. With the help of the queueing machine, the queue becomes single and organized. Customers with priority are separated from ordinary customers and served with little delay in a VIP room that does not serve ordinary customers. This paper focuses on the ordinary-customer queue under the first-come-first-served discipline of a single queue.
An M/M/c model indicates a system in which arrivals form a Poisson process and service times are exponentially distributed, with c servers.
The main parameters are as follows (Deng 2010):
Ls: the number of customers in the system, including those being served and those in the queue;
Lq: the number of customers waiting;
Tq: the average waiting time of a customer in the system;
λ: the average arrival rate;
μ: the average service rate;
ρ: the service intensity, the ratio of the average arrival rate λ to the average service rate μ, ρ = λ/μ.
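As a sketch of how these quantities interact in a plain M/M/c system (before the balking and reneging refinements discussed later are introduced), the standard Erlang-C steady-state formulas can be coded as follows; the function name and the numeric values are illustrative:

```python
from math import factorial

def mmc_metrics(lam, mu, c):
    """Standard M/M/c steady-state metrics: returns (p0, Lq, Ls, Tq).
    A steady state requires rho = lam / (c * mu) < 1."""
    a = lam / mu                       # offered load
    rho = a / c                        # service intensity per server
    assert rho < 1, "system is unstable"
    p0 = 1.0 / (sum(a**k / factorial(k) for k in range(c))
                + a**c / (factorial(c) * (1 - rho)))
    Lq = p0 * a**c * rho / (factorial(c) * (1 - rho)**2)   # mean queue length
    Ls = Lq + a                        # mean number in system
    Tq = Ls / lam                      # mean time in system, by Little's law
    return p0, Lq, Ls, Tq
```

For example, with λ = 12 arrivals/h, μ = 5 services/h and c = 3 tellers, the per-server intensity is 0.8 and the metrics follow directly.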
782 L. Li et al.
In real life, customers continuously arrive at the system, but not all of them enter it to wait and be served. Some customers arrive at the bank and leave once they see too many waiting customers; others leave after being in the queue too long without receiving service. Waiting times, and expected waiting times, long enough to cause the loss of customers have become a serious problem for banks. The loss of customers in a bank's queueing system has not been fully researched, and that is exactly what this paper studies and analyzes.
The first thing customers do when they arrive at the bank is decide whether to join the system. The decision is made not from previously experienced waiting times, but from the expected waiting time calculated from the number of servers, the average service time, and the number of waiting customers. Generally, as the queue grows longer, the probability of a customer joining the system diminishes; as the number of servers grows, the probability rises.
Let a_k be the probability of a customer joining the system when the queue length is k and the number of servers is n.
Previous papers have discussed customers joining a single-queue single-server system with probability a_k = 1/(k+1) (Lu 2009) and a_k = \sqrt{k+1} - \sqrt{k} (Ren 2010).
This paper introduces a parameter β (β > 0) and generalizes the joining probability to multiple servers, taking the probability of a customer joining the system as
a_k = \frac{n}{\beta k + n}, \quad k \in \mathbb{N} \quad (82.8)
Introducing the server number n and the parameter β makes the relationship between the joining probability and the customer number k more realistic. The effective arrival rate is therefore
\lambda_k = \lambda a_k = \frac{\lambda n}{\beta k + n} \quad (82.9)
The state flow diagram (Thomas 2000) is drawn as Fig. 82.2. According to the state flow diagram, the balance equations (Tai and Gao 2009) can be applied under equilibrium conditions as follows.
Fig. 82.2 The status flow diagram of the queueing model of a changeable input rate
For state 0,
\lambda p_0 = \mu p_1 \;\Rightarrow\; p_1 = \frac{\lambda}{\mu} p_0 = \rho p_0 \quad (82.10)
For state 1,
\frac{\lambda n}{\beta + n} p_1 = 2\mu p_2 \;\Rightarrow\; p_2 = \frac{\lambda n}{2\mu(\beta + n)} p_1 = \frac{\rho^2 n}{2(\beta + n)} p_0 \quad (82.11)
For state 2,
\frac{\lambda n}{2\beta + n} p_2 = 3\mu p_3 \;\Rightarrow\; p_3 = \frac{\lambda n}{3\mu(2\beta + n)} p_2 = \frac{\rho^3 n^2}{3!\,(2\beta + n)(\beta + n)} p_0 \quad (82.12)
For state n − 1,
\frac{\lambda n}{(n-1)\beta + n} p_{n-1} = n\mu p_n \;\Rightarrow\; p_n = \frac{\rho^n n^{n-1}}{n!\,(\beta + n)(2\beta + n)\cdots[(n-1)\beta + n]} p_0 \quad (82.13)
For state n,
\frac{\lambda n}{n\beta + n} p_n = n\mu p_{n+1} \;\Rightarrow\; p_{n+1} = \frac{\rho^{n+1} n^n}{n \cdot n!\,(\beta + n)(2\beta + n)\cdots(n\beta + n)} p_0 \quad (82.14)
For state n + r − 1,
p_{n+r} = \frac{\rho^{n+r} n^{n+r-1}}{n^r\, n!\,(\beta + n)(2\beta + n)\cdots[(n+r-1)\beta + n]} p_0 \quad (82.15)
For all states,
p_k = \begin{cases} \dfrac{\rho^k n^k}{k!\,n\,(\beta + n)(2\beta + n)\cdots[(k-1)\beta + n]}\, p_0, & k = 1, 2, \ldots, n \\[2mm] \dfrac{\rho^k n^k}{n^{k-n+1}\, n!\,(\beta + n)(2\beta + n)\cdots[(k-1)\beta + n]}\, p_0, & k = n+1, n+2, \ldots \end{cases} \quad (82.16)
From the regularity (normalization) condition
\sum_{k=0}^{\infty} p_k = 1 \quad (82.17)
we can get
p_0 = \left\{ 1 + \sum_{k=1}^{n} \frac{\rho^k n^k}{k!\,n\,(\beta+n)(2\beta+n)\cdots[(k-1)\beta+n]} + \sum_{k=n+1}^{\infty} \frac{\rho^k n^k}{n^{k-n+1}\, n!\,(\beta+n)(2\beta+n)\cdots[(k-1)\beta+n]} \right\}^{-1} \quad (82.18)
It is noteworthy that when β = 1 and n = 1,
p_0 = \left(1 + \rho + \rho^2/2! + \rho^3/3! + \cdots\right)^{-1} = e^{-\rho} \quad (82.19)
p_k = \frac{\rho^k}{k!} e^{-\rho}, \quad k = 0, 1, 2, \ldots \quad (82.20)
which means the stationary distribution is a Poisson distribution with parameter ρ.
The value of b can be calculated through the analysis of a sample survey of the
customer arrival.
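A numerical sketch of (82.16) and (82.18), truncating the infinite series; the function names are ours. As a check, for β = 1 and n = 1 the result should reduce to the Poisson special case (82.19):

```python
from math import factorial, exp

def weight(k, rho, n, beta):
    """Unnormalized stationary probability p_k / p_0 from (82.16)."""
    if k == 0:
        return 1.0
    prod = 1.0                          # (beta+n)(2beta+n)...((k-1)beta+n)
    for j in range(1, k):
        prod *= j * beta + n
    if k <= n:
        return rho**k * n**k / (factorial(k) * n * prod)
    return rho**k * n**k / (n**(k - n + 1) * factorial(n) * prod)

def p0(rho, n, beta, K=200):
    """p_0 from (82.18), truncating the series at K terms."""
    return 1.0 / sum(weight(k, rho, n, beta) for k in range(K))
```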
The evaluation measures can be calculated as follows.
(1) The average rate of customer arrival:
\bar{\lambda} = \sum_{k=0}^{\infty} \lambda_k p_k = \sum_{k=0}^{n} \frac{\lambda \rho^k n^{k+1}}{k!\,n\,(\beta+n)(2\beta+n)\cdots(k\beta+n)}\, p_0 + \sum_{k=n+1}^{\infty} \frac{\lambda \rho^k n^{k+1}}{n^{k-n+1}\, n!\,(\beta+n)(2\beta+n)\cdots(k\beta+n)}\, p_0 \quad (82.21)
and hence the effective traffic intensity is
\bar{\rho} = \frac{\bar{\lambda}}{\mu} = \sum_{k=0}^{n} \frac{\rho^{k+1} n^{k+1}}{k!\,n\,(\beta+n)(2\beta+n)\cdots(k\beta+n)}\, p_0 + \sum_{k=n+1}^{\infty} \frac{\rho^{k+1} n^{k+1}}{n^{k-n+1}\, n!\,(\beta+n)(2\beta+n)\cdots(k\beta+n)}\, p_0 \quad (82.22)
(5) When customer k + 1 arrives at the bank and notices k customers in the system, the probability of joining the system is a_k, and the probability of loss is
P_{loss} = \sum_{k=0}^{\infty} P(L_s = k)(1 - a_k) = \sum_{k=0}^{\infty} p_k - \sum_{k=0}^{\infty} a_k p_k = 1 - \frac{\bar{\lambda}}{\lambda} \quad (82.25)
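The loss probability (82.25) can be checked numerically against its defining sum; normalization by p_0 cancels, so only the unnormalized weights from (82.16) are needed. The helper and parameter values below are illustrative:

```python
from math import factorial

def weight(k, rho, n, beta):
    """Unnormalized p_k / p_0 from (82.16)."""
    if k == 0:
        return 1.0
    prod = 1.0                          # (beta+n)(2beta+n)...((k-1)beta+n)
    for j in range(1, k):
        prod *= j * beta + n
    if k <= n:
        return rho**k * n**k / (factorial(k) * n * prod)
    return rho**k * n**k / (n**(k - n + 1) * factorial(n) * prod)

def p_loss(rho, n, beta, K=300):
    """P_loss = sum_k p_k (1 - a_k) with a_k = n / (beta * k + n)."""
    w = [weight(k, rho, n, beta) for k in range(K)]
    total = sum(w)
    joined = sum(wk * n / (beta * k + n) for k, wk in enumerate(w))
    return 1.0 - joined / total
```

By (82.25) this equals 1 − λ̄/λ, since λ̄ = λ Σ a_k p_k; with β = 0 nobody balks and the loss probability vanishes.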
Let there be a system with n servers, infinite capacity, Poisson customer arrivals, and average arrival rate λ. When a customer arrives at the bank and finds all servers occupied, he or she waits in the queue. A customer becomes impatient, and may even leave the system, when the queue is too long, or when the average service time is too long even though the queue is not (He et al. 2009).
Research shows that when the waiting time exceeds 10 min, customers start to feel impatient; when it exceeds 20 min, they become irritable; and when it exceeds 30 min, they may leave out of anger (Sun 2010). This indicates that precautions to reduce the average waiting time are important.
The intensity a_t with which customers leave is related to the waiting time t. Consider the simplest model, a_t = δt (δ > 0). Data are collected as customers leave the bank and the pattern is analyzed statistically. Since a model based directly on time measurement is too vague, note that with average service time 1/μ, the number of customers ahead of a leaving customer when it entered the system may be taken as k = nμt. A customer who waits for time t and leaves with intensity a_t = δt (δ > 0) can thus be treated, before entering the system, as leaving with a_k = δk/(nμ) = θk (θ > 0). The state flow diagram is drawn as Fig. 82.3.
Analyzing the flow diagram with the balance equations, we get
p_k = \begin{cases} \dfrac{(n\rho)^k}{k!}\, p_0, & 0 \le k \le n \\[2mm] \dfrac{n^n \rho^k}{n!\,(1+b)(1+2b)\cdots[1+(k-n)b]}\, p_0, & k > n \end{cases} \quad (82.26)
where \rho = \rho_1 / n and b = \theta/(\mu n). From the normalization condition,
p_0 = \left\{ \sum_{k=0}^{n} \frac{(n\rho)^k}{k!} + \sum_{k=n+1}^{\infty} \frac{n^n \rho^k}{n!\,(1+b)(1+2b)\cdots[1+(k-n)b]} \right\}^{-1} \quad (82.27)
Fig. 82.3 The status flow diagram of the queueing model with impatient customer
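Equation (82.27) can be evaluated by truncating the series. As a consistency check, when b = 0 (no impatience) the model should collapse to the ordinary M/M/n value of p_0; the function names below are ours:

```python
from math import factorial

def p0_impatient(n, rho, b, K=400):
    """p_0 from (82.27): n servers, rho = rho_1 / n, b = theta / (mu * n)."""
    total = sum((n * rho)**k / factorial(k) for k in range(n + 1))
    for k in range(n + 1, K):
        prod = 1.0                        # (1+b)(1+2b)...(1+(k-n)b)
        for j in range(1, k - n + 1):
            prod *= 1 + j * b
        total += n**n * rho**k / (factorial(n) * prod)
    return 1.0 / total

def p0_mmc(n, rho):
    """Ordinary M/M/n p_0 with offered load a = n * rho, for comparison."""
    a = n * rho
    return 1.0 / (sum(a**k / factorial(k) for k in range(n))
                  + a**n / (factorial(n) * (1 - rho)))
```

Impatience (b > 0) thins the queue tail, so p_0 is strictly larger than in the ordinary M/M/n system.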
82.5 Conclusion
The core concept and basic function of a bank is to satisfy the needs of its customers. The bank should therefore devote itself to improving efficiency or adding servers, to avoid customers leaving the system because the expected waiting time is too long. This paper takes variable parameters into consideration and builds a model closer to real life. The customer-loss rate can be used as an evaluation of how well the bank fulfills its basic function. The bank can compare the expense of opening a new server with the loss from losing customers to obtain a break-even customer-loss rate P. When the actual loss rate is larger than this value, the bank should open a new server to reduce the loss.
References
Deng C (2010) M/M/n queuing model of changeable input rates. J Mianyang Norm Univ
29(8):1–3
Fan W, Yuan H (2005) The study about bank customer services system based on queue theory.
Value Eng 24(12):126–128
He J, Li D, Li H (2009) Optimization of the service windows deployed. Mod Electron Tech
32(6):134–136
Jiang C, Yang L (2009) Queue model in personal banking and sensitivity analysis. J Guangdong
Coll Finance Econ 8(3):69–73
Liu R, Zhang Z (2008) Hypothesis test in the judgment on a binomial distribution and a Poisson
distribution. J Qinghai Univ (Nat Sci) 26(1):44–47
Lu C (2009) Queue theory. Beijing University of Posts and Telecommunications Press, Beijing,
pp 31–99
Qin Z (2008) Mathematical model and solution of the problem based on the queue at the bank.
Pop Sci Tech 10(3):24–26
Ren X (2010) Study of an M/M/1 queuing system with a variable input rate. J Neijiang Norm
Univ 25(10):35–36
Sun Z (2010) The application of queueing theory in the bank window and teller staffing. West
China Finan 31(6):20–21
Tai W, Gao S (2009) A M/M/1 queuing model with variable input rate. J Chongqing Norm Univ
(Nat Sci) 26(1):69–77
Thomas G R (2000) Computer networks and systems: queueing theory and performance
evaluation, 3rd edn. Springer, New York, pp 22–30
Wang L (2010) Probability theory and mathematical statistics. Dalian University of Technology
Press, Dalian, pp 245–255
Xiang H (2012) Discussion on poisson distribution and its application. Learn Wkly 6(10):205
Yao X, Liu R (2011) Probability theory and mathematical statistics. Peking University Press,
Beijing, pp 135–142
Yang M (2008) Bank queuing system data analysis and counter setting optimization research.
J Wuhan Univ Technol (Inform Manag Eng) 30(4):624–627
Chapter 83
Application of DEA-Malmquist Index
in Analyzing Chinese Banking’s Efficiency
83.1 Introduction
Commercial banks are the main body of China's financial industry; they play an important role in China's economic development and in improving people's living standards. Since the 1990s, however, regulation of commercial banks has become more relaxed with economic globalization and financial liberalization in China; in particular, China's entry into the WTO, which allows foreign banks to run business in China, has intensified the competition among commercial banks. Since the U.S. subprime mortgage crisis, the financial tsunami has made it clear that efficiency is the key for a commercial bank to hold its place in the competition. How to improve the efficiency of Chinese commercial banks is a vital problem for bank authorities and decision makers. Therefore, evaluating the efficiency of China's commercial banks clearly and correctly, and exploring measures and preferences to improve it, is a priority.
At present, the most widespread bank-efficiency evaluation method in academia is data envelopment analysis (DEA). DEA, proposed by Charnes et al. (1978), is an approach for measuring the relative efficiency of peer decision-making units (DMUs) that have multiple inputs and outputs. The essence of DEA is "frontier analysis": a production frontier is constructed according to a certain standard, and the gap between the evaluated bank and the frontier is its efficiency. The advantages of the DEA method are that weights for the indexes need not be supplied by people, the production-function form of the frontier need not be given in advance, and it can handle projects with multiple outputs and multiple inputs (Chen and Zhu 2004). The method therefore has unique advantages for analyzing efficiency within a single industry. Penny (2004) investigates X-efficiency and productivity change in Australian banking between 1995 and 1999 using DEA
and productivity change in Australian banking between 1995 and 1999 using DEA
and Malmquist productivity indexes, and finds that regional banks are less efficient
than other bank types. Total factor productivity in the banking sector was found to
have increased between 1995 and 1999 due to technological advance shifting out
the frontier. Zhu et al. (2004) measured the efficiency of China’s largest 14
commercial banks over the period 2000–2001 using the super-efficient DEA model
and Tobit regression method. The results show that the overall efficiency of the
four state-owned commercial banks is far less than 10 joint-stock commercial
banks, and the excessive number of employees is a major bottleneck restricting the
state-owned commercial banks efficiency. Chen et al. (2005) examine the cost,
technical and allocative efficiency of 43 Chinese banks over the period 1993–2000.
Results show that technical efficiency consistently dominates the allocative effi-
ciency of Chinese banks, and the large state-owned banks and smaller banks are
more efficient than medium sized Chinese banks.
However, from most of the related literature on evaluating the efficiency of commercial banks, we find that DEA models are mainly adopted to describe the efficiency status, and the description is basically a static comparison; even where a dynamic description is given, it is incomplete (Mette and Joseph 2004; Ariff and Can 2008; Laurenceson and Yong 2008). Therefore, evaluating bank efficiency by DEA based on the Malmquist index has come into wide use. Zhang and Wu (2005) analyzed the efficiency change of China's commercial banks during 1999–2003 using an input-oriented Malmquist index approach. Gao et al. (2009) study the panel data of primary commercial banks over 1997–2006 and calculate total factor productivity and its decomposition indexes based on the DEA-based Malmquist productivity
and its decomposition indexes based on the DEA-based Malmquist productivity
index. Maria et al. (2010) develop an index and an indicator of productivity change
that can be used with negative data and use RDM efficiency measures to arrive at a
83 Application of DEA-Malmquist 791
Malmquist-type index, which can reflect productivity change, and use RDM
inefficiency measures to arrive at a Luenberger productivity indicator.
This paper has introduced the theory of related DEA model and Malmquist
index and analyzed the inputs and outputs selection of 14 China’s listed com-
mercial banks, and measured the efficiency and the dynamic changes of the effi-
ciency of 14 China’s listed commercial banks during the period of 2007–2010.
83.2 Methodology
Consider n DMUs, where each DMU_j (j = 1, 2, \ldots, n) has m inputs and s outputs. Suppose X_j and Y_j are the input and output vectors of DMU_j, with X_j = (x_{1j}, x_{2j}, \ldots, x_{mj}) and Y_j = (y_{1j}, y_{2j}, \ldots, y_{sj}). Then we can define the DEA model as follows:
\min \theta
\text{s.t.} \quad \sum_{j=1}^{n} \lambda_j X_j \le \theta X_{j_0}, \quad \sum_{j=1}^{n} \lambda_j Y_j \ge Y_{j_0}, \quad \lambda_j \ge 0, \; j = 1, 2, \ldots, n \quad (83.1)
where the \lambda_j are the weights of the input/output indexes and \theta is the efficiency score. If \theta < 1, the DMU is inefficient; if \theta = 1, the DMU is efficient.
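Model (83.1) is a linear program and can be solved with an off-the-shelf LP solver. The sketch below assumes SciPy is available; column j of X and Y holds the inputs and outputs of DMU_j, and the toy data are illustrative:

```python
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(X, Y, j0):
    """Input-oriented CCR efficiency of DMU j0 per model (83.1).
    Decision vector: [theta, lambda_1, ..., lambda_n]."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.zeros(n + 1)
    c[0] = 1.0                                  # minimize theta
    A_ub = np.zeros((m + s, n + 1))
    b_ub = np.zeros(m + s)
    A_ub[:m, 0] = -X[:, j0]                     # sum lambda_j x_j <= theta * x_{j0}
    A_ub[:m, 1:] = X
    A_ub[m:, 1:] = -Y                           # sum lambda_j y_j >= y_{j0}
    b_ub[m:] = -Y[:, j0]
    bounds = [(None, None)] + [(0, None)] * n   # theta free, lambda_j >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.fun

# Toy example: DMU 0 uses twice the input of DMU 1 for the same output,
# so DMU 0 should score 0.5 while DMU 1 is efficient.
X = np.array([[2.0, 1.0]])   # one input, two DMUs
Y = np.array([[1.0, 1.0]])   # one output
```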
The Malmquist index model was introduced by Malmquist in 1953 in the course of analyzing consumption. Nishimizu and Page first used this index to measure productivity change; since then the Malmquist index model has been combined with DEA theory and widely used for measuring production efficiency (Nishimizu and Page 1982). The Malmquist index is defined as:
M^{t,t+1} = tfp = \left[ \frac{D^{t+1}(x^{t+1}, y^{t+1})}{D^{t+1}(x^t, y^t)} \cdot \frac{D^t(x^{t+1}, y^{t+1})}{D^t(x^t, y^t)} \right]^{1/2} \quad (83.2)
From model (83.2) we can see that the Malmquist index is an efficiency index representing the efficiency change from period t to t + 1. If the Malmquist index is greater than 1, productivity has improved from t to t + 1.
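Given the four distance-function values, the index (83.2) and the standard effch × tech decomposition (the source of the columns reported in Table 83.2) can be computed directly; the numbers below are illustrative:

```python
def malmquist(d_t_t, d_t_t1, d_t1_t, d_t1_t1):
    """M^{t,t+1} from (83.2), where d_a_b = D^a(x^b, y^b)."""
    return ((d_t_t1 / d_t_t) * (d_t1_t1 / d_t1_t)) ** 0.5

def decompose(d_t_t, d_t_t1, d_t1_t, d_t1_t1):
    """Split M into efficiency change (effch) and technical change (tech)."""
    effch = d_t1_t1 / d_t_t
    tech = ((d_t_t1 / d_t1_t1) * (d_t_t / d_t1_t)) ** 0.5
    return effch, tech
```

By construction M = effch × tech, so total factor productivity growth (M > 1) can be attributed to catching up with the frontier or to the frontier itself shifting.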
792 M. Ding et al.
A reasonable definition of bank inputs and outputs is the key problem in using the DEA model to measure the efficiency of commercial banks. Recently, the generally accepted method for dividing the inputs and outputs of a bank in
Based on DEA and the Malmquist index, we calculate in Matlab the efficiency of China's listed commercial banks from 2007 to 2010, shown in Table 83.1, and the efficiency change of their Malmquist indexes during 2007–2010, shown in Table 83.2.
According to the empirical results in Table 83.1, we make the following analysis.
First, from the aspect of the time window, the average efficiency scores of the 14 listed commercial banks are all less than 1, which shows that each listed commercial bank is DEA inefficient; the four-year average efficiency of China's banks is 0.979, below the 2007 level of 0.982.
Table 83.1 The efficiency score of China’s commercial banks during 2007–2010
Banks 2007 2008 2009 2010 Average
BC 0.970 0.938 0.949 0.980 0.959
CCB 0.971 0.901 0.981 1.000 0.963
ICBC 0.930 0.858 0.943 0.900 0.908
BCM 1.000 0.988 1.000 1.000 0.997
CMB 1.000 0.980 0.936 1.000 0.979
CIB 1.000 1.000 1.000 1.000 1.000
CITIC 1.000 0.990 1.000 1.000 0.998
SPDB 0.993 1.000 1.000 1.000 0.998
CMBC 1.000 1.000 1.000 1.000 1.000
HXB 0.935 0.936 0.923 0.962 0.939
SDB 1.000 1.000 1.000 0.992 0.998
BOB 1.000 1.000 1.000 1.000 1.000
NOB 1.000 1.000 1.000 1.000 1.000
NBB 0.947 0.909 1.000 1.000 0.964
Average score of state-owned banks 0.968 0.921 0.968 0.970 0.957
Average score of joint-stock commercial banks 0.990 0.987 0.980 0.994 0.987
Average score of urban commercial banks 0.982 0.970 1.000 1.000 0.988
Average 0.982 0.964 0.981 0.988 0.979
Table 83.2 The Malmquist Index score of China’s commercial banks during 2007–2010
Banks effch tech pech sech tfp
BC 1.003 1.026 1.000 1.003 1.029
CCB 1.010 1.063 1.000 1.010 1.073
ICBC 0.989 1.039 1.000 0.989 1.028
BCM 1.000 1.027 1.000 1.000 1.027
CMB 1.000 0.995 1.000 1.000 0.995
CIB 1.000 1.034 1.000 1.000 1.034
CITIC 1.000 0.982 1.000 1.000 0.982
SPDB 1.002 0.999 1.000 1.002 1.001
CMBC 1.000 0.832 1.000 1.000 0.832
HXB 1.010 0.991 1.015 0.995 1.001
SDB 0.997 1.042 1.000 0.997 1.048
BOB 1.000 1.015 1.000 1.000 1.015
NOB 1.000 1.023 1.000 1.000 1.023
NBB 1.018 0.989 1.000 1.018 1.007
Average score of state-owned commercial banks 1.001 1.039 1.000 1.001 1.039
Average score of joint-stock commercial banks 1.001 0.982 1.002 0.999 0.985
Average score of urban commercial banks 1.006 1.009 1.000 1.006 1.015
2007–2008 0.981 1.039 1.001 0.981 1.019
2008–2009 1.018 0.936 0.999 1.019 0.953
2009–2010 1.008 1.037 1.004 1.004 1.045
2007–2010 1.002 1.003 1.001 1.001 1.005
At the same time, the efficiency score in 2008 is the lowest, which indicates that the financial crisis had a strong adverse impact on the Chinese banking industry and that its risk-defense ability is insufficient.
Second, in terms of ownership form, the average efficiency score of the urban commercial banks is the highest over the 4 years, reaching 0.988; the joint-stock commercial banks are second; and the state-owned commercial banks are the lowest, at only 0.957. In addition, the urban commercial banks achieve DEA efficiency in 2009 and 2010, which shows that their operating efficiency is good overall.
Third, from the average efficiency score of each commercial bank over the 4 years, all state-owned commercial banks are DEA inefficient. Among the joint-stock commercial banks, CIB and CMBC are DEA efficient in all 4 years, and BOB and NOB are also DEA efficient. The last three places are taken by BC, HXB and ICBC.
We evaluated the efficiency of the 14 listed commercial banks above using the DEA method, but this method calculates efficiency from a static viewpoint, i.e., a horizontal comparison of different commercial banks in the same period, and is not suitable for a longitudinal description of the dynamic change of efficiency over time. We therefore measure the dynamic change of the efficiency of China's commercial banks using the Malmquist index, to make the evaluation more detailed and comprehensive.
According to the empirical results in Table 83.2, we draw the following conclusions.
First, the average Malmquist index of the 14 listed commercial banks from 2007 to 2010 is 1.005, greater than 1, which means the overall efficiency of China's banks is rising. The overall efficiency rose during 2007–2008 and 2009–2010, but the index for 2008–2009 is only 0.953, less than 1; the reason is the negative influence of the financial crisis. In addition, the average Malmquist index of the joint-stock commercial banks is less than 1, i.e., it declined over the four years, while the state-owned commercial banks show the largest increase. The efficiency of CMB, CITIC and CMBC declined over the 4 years, the efficiency of SPDB and HXB is nearly unchanged, and the others are rising.
Second, the overall efficiency improvement of the 14 listed commercial banks is due to increases in both efficiency progress (effch) and technical change (tech). The overall efficiency decline of the joint-stock commercial banks is due to the decrease of technical change (tech), while the increase for the state-owned commercial banks is mainly due to the increase of technical change (tech). Moreover, the increase of efficiency progress (effch) of the urban commercial banks is mainly due to the increase of scale efficiency (sech).
83.4 Conclusion
This paper has introduced the theory of the DEA model and the Malmquist index, analyzed the input and output selection for 14 of China's listed commercial banks, and measured their efficiency and its dynamic change during 2007–2010. The results show that the 14 listed commercial banks are all DEA inefficient on average. The average efficiency score of the urban commercial banks is the highest over the 4 years, the joint-stock commercial banks are second, and the state-owned commercial banks are the lowest. The average Malmquist index of the 14 listed banks is greater than 1, and the overall efficiency improvement is due to the increases of effch and tech.
Acknowledgments This paper is supported by the National Science Fund for Distinguished Young Scholars (70825006), the Funds for Innovation Research of Changjiang Scholars (IRT0916) in China, and the Hunan Provincial Natural Science Foundation of China (09JJ7002).
References
Ariff M, Can L (2008) Cost and profit efficiency of Chinese banks: a non-parametric analysis.
China Econ Rev 21:260–273
Charnes A, Cooper WW, Rhodes E (1978) Measuring the efficiency of decision making units. Eur
J Oper Res 2(6):429–444
Chen XG, Michael S, Brown K (2005) Banking efficiency in China: application of DEA to pre-
and post-deregulation eras: 1993–2000. China Econ Rev 16:229–245
Chen Y, Zhu J (2004) Measuring information technology’s indirect impact on firm performance.
Inf Technol Manage J 5:9–22
Feng G, Serletis A (2010) Efficiency, technical change, and returns to scale in large US banks:
panel data evidence from an output distance function satisfying theoretical regularity. J Bank
Finance 34(1):127–138
Gao M, Yang SY, Xie BC (2009) Research on changing tendency of commercial banks
productivity efficiency in China. J Xidian Univ Soc Sci Ed 19(5):51–55
Giokas DI (2008) Assessing the efficiency in operations of a large Greek bank branch network
adopting different economic behaviors. Econ Model 25:559–574
Laurenceson J, Yong Z (2008) Efficiency amongst China’s banks: a DEA analysis five years after
WTO entry. China Econ Rev 1(3):275–285
Maria CA, Portela S, Thanassoulis E (2010) Malmquist-type indices in the presence of negative
data: an application to bank branches. J Bank Finance 34:1472–1483
Mette A, Joseph CP (2004) Combining DEA window analysis with the Malmquist Index
Approach in a study of the Canadian banking industry. J Prod Anal 21(1):67–89
Nishimizu M, Page JM (1982) Total factor productivity growth, technical efficiency change:
dimensions of productivity change in Yugoslavia in 1965–1978. Econ J 92:929–936
Penny N (2004) X-efficiency and productivity change in Australian banking. Aust Econ Pap
43(2):174–191
Zelenyuk V (2006) Aggregation of Malmquist productivity indexes. Eur J Oper Res
174:1076–1086
Zhu N, Zhuo X, Deng Y (2004) The empirical analysis of the efficiency and reform strategy of the
state-owned commercial Banks in China. Manage World 2:18–26
Zhang J, Wu H (2005) The empirical analysis of efficiency of Commercial Bank of China based
on Malmquist Index Approach. J Hebei Univ Technol 34(5):37–41
Chapter 84
The Improvement on R. G. Bland’s
Method
Yu-bo Liao
Abstract Cycling may occur when we use the simplex method to solve a linear programming problem and meet degeneracy. Such cycling can be avoided by the Bland method. In this paper, we present an improved Bland method with higher iterative efficiency than the original Bland method.
Keywords Bland method · Linear programming · Linear optimization · Simplex method
84.1 Introduction
In plain English one can say that a linear optimization (LO) problem consists of
optimizing, i.e., minimizing or maximizing, a linear function over a certain
domain. The domain is given by a set of linear constraints. The constraints can be
either equalities or inequalities.
The simplex method for linear programming problems was first proposed by Dantzig in 1947 (Dantzig 1948), and can be described as follows.
Suppose the given standard linear programming problem is
\min s = cx
\text{s.t.} \quad Ax = b, \quad x \ge 0
where A = (a_{ij})_{m \times n}, x = (x_1, \ldots, x_n)^T, b = (b_1, \ldots, b_m)^T,
Y. Liao (&)
School of Basic Science, East China Jiaotong University, Nanchang, China
e-mail: [email protected]
and c = (\lambda_1, \ldots, \lambda_n).
The rank of A = (a_{ij})_{m \times n} is m, with n \ge m \ge 1. The steps of the simplex method can be summarized as follows:
• The first step: B = (p_{j_1}, p_{j_2}, \ldots, p_{j_m}) is the known feasible basis; obtain the canonical form and the basic feasible solution x_B^{(0)} = B^{-1}b = (b_{10}, \ldots, b_{m0})^T.
• The second step: Check the testing numbers. If all testing numbers satisfy \lambda_j \le 0 (j = 1, 2, \ldots, n), the corresponding basic feasible solution x^{(0)} is optimal and the process ends; otherwise go to the next step.
• The third step: If some testing number \lambda_r > 0 and B^{-1}p_r = (b_{1r}, b_{2r}, \ldots, b_{mr})^T \le 0, the problem has no optimal solution and the process ends; otherwise go to the next step.
• The fourth step: If some testing number \lambda_r > 0 and there is a positive entry in (b_{1r}, b_{2r}, \ldots, b_{mr})^T, let x_r be the entering-basis variable (if there are several positive testing numbers, choose the largest one to improve iterative efficiency; this is named the largest-testing-number method). The minimum ratio is \min\{b_{i0}/b_{ir} \mid b_{ir} > 0\} = b_{s0}/b_{sr}, so the leaving-basis variable x_{j_s} can be determined (if several ratios attain the minimum, choose the variable with the smallest subscript as the leaving-basis variable). Substitute p_r for p_{j_s} to obtain the new basis \bar{B}, and then go to the next step.
• The fifth step: Obtain the canonical form and the basic feasible solution \bar{x}_B^{(1)} = \bar{B}^{-1}b corresponding to the new basis \bar{B} (which can be realized directly by elementary row transformations of the corresponding simplex tableau in manual calculation). Afterwards, substitute \bar{B} for B and \bar{x}^{(1)} for x^{(0)}, and return to the second step.
For non-degenerate linear programming problems, using the largest-testing-number simplex method, the optimal solution is obtained (or shown not to exist) after finitely many iterations. But for degenerate linear programming problems this method may fail, because basis cycling may appear. In 1951, A. J. Hoffman first designed an example in which cycling appears in the iterations; in 1955, E. M. L. Beale designed a simpler example to show the possible cycling problem (Beale 1955; Tang and Qin 2004; Zhang and Xu 1990).
To avoid infinite cycling, R. G. Bland proposed a new method in 1976 (Bland 1977). With the Bland method, cycling is avoided in calculation by abiding by the following two rules (Andersen et al. 1996; Nelder and Mead 1965; Lagarias et al. 1998; Bixby 1994; Herrera et al. 1993; Wright 1996; Han et al. 1994; Hapke and Slowinski 1996; Zhang 1999; Terlaky 1985; Terlaky 2000; Terlaky and Zhang 1993; Wagner 1958; Ward and Wendell 1990; Wolfe 1963; Wright 1998; Elsner et al. 1991; Han 2000):
• Rule 1: When there are several positive testing numbers, choose the variable with the smallest subscript among them as the entering-basis variable;
84 The Improvement on R. G. Bland’s Method 801
• Rule 2: When several ratios b_{i0}/b_{ir} in different rows reach the minimum at the same time, choose the corresponding basic variable with the smallest subscript as the leaving-basis variable.
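The two rules drop directly into a standard tableau simplex. The sketch below is our own minimal implementation (not the paper's improved method): it solves min c·x subject to Ax ≤ b, x ≥ 0, entering on the smallest index with a positive testing number (Rule 1) and breaking ratio ties by the smallest basic-variable subscript (Rule 2):

```python
import numpy as np

def simplex_bland(c, A, b):
    """Minimize c @ x subject to A @ x <= b (b >= 0), x >= 0, using
    Bland's smallest-index rules, which guarantee termination even
    under degeneracy."""
    m, n = A.shape
    # tableau [A | I | b] with slack variables appended
    T = np.hstack([A, np.eye(m), b.reshape(-1, 1)]).astype(float)
    cost = np.concatenate([c, np.zeros(m)]).astype(float)
    basis = list(range(n, n + m))            # slacks start in the basis
    while True:
        cb = cost[basis]
        lam = cb @ T[:, :-1] - cost          # testing numbers z_j - c_j
        # Rule 1: smallest index j with lam_j > 0 enters
        entering = next((j for j in range(n + m) if lam[j] > 1e-9), None)
        if entering is None:                 # all testing numbers <= 0: optimal
            x = np.zeros(n + m)
            x[basis] = T[:, -1]
            return x[:n], cost[:n] @ x[:n]
        col = T[:, entering]
        if np.all(col <= 1e-9):
            raise ValueError("unbounded problem")
        # Rule 2: minimum ratio; ties broken by smallest basic-variable index
        ratios = [(T[i, -1] / col[i], basis[i], i) for i in range(m) if col[i] > 1e-9]
        _, _, r = min(ratios)
        T[r] /= T[r, entering]               # pivot
        for i in range(m):
            if i != r:
                T[i] -= T[i, entering] * T[r]
        basis[r] = entering
```

For a quick check on a non-degenerate textbook problem, min −3x1 − 5x2 subject to x1 ≤ 4, 2x2 ≤ 12, 3x1 + 2x2 ≤ 18 has optimum x = (2, 6) with objective −36.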
Rule 2 determines the leaving-basis variable and is the same as the fourth step of the simplex method. The entering-basis variable, however, is determined by Rule 1 rather than by the largest-testing-number method. The advantage of the Bland method is its simplicity. However, because it considers only the minimum subscript and not the rate of decrease of the objective function, its iteration count is often much larger than that of the largest-testing-number method. In this paper, we first prove a theorem and then use it to propose an improved Bland method with much higher computational efficiency.
Suppose there is only one zero among the b_{i0} (i = 1, 2, \ldots, m); assume b_{s0} = 0 and b_{i0} > 0 for i \ne s. After an iterative step, according to this hypothesis, since only one basic variable is zero, whenever the row of the leaving-basis variable is not row s, the value of the objective function decreases and x^{(0)} is left behind; moreover, since the objective value never increases during the iterations, x^{(0)} cannot appear again. Therefore, if the conclusion fails, only one case remains: in the subsequent iterations, the row of the leaving-basis variable is always row s,
84.3 Conclusion
In summary, the largest-testing-number method has high iteration efficiency but suffers from the cycling problem, while the Bland method avoids cycling but has low iteration efficiency. To eliminate both disadvantages, we proposed an improved method which provably prevents cycling while achieving higher computational efficiency.
Acknowledgments I would like to thank the support provided by the East China Jiaotong University Research Fund and the Jiangxi Province Research Fund.
References
Andersen ED, Gondzio J, Meszaros Cs, Xu X (1996) Implementation of interior point methods
for large scale linear programming. In: Terlaky T (ed) Interior point methods of mathematical
programming. Kluwer Academic Publishers, Dordrecht, pp 189–252
Beale EM (1955) Cycling in the dual simplex algorithm. Nav Res Logist Quart 2:269–276
Bixby RE (1994) Progress in linear programming. ORSA J Comput 6(1):15–22
Bland RG (1977) New finite pivoting rules for the simplex method. Math Oper Res 2:103–107
Dantzig GB (1948) Programming in a linear structure. Comptroller USAF, Washington, DC
Elsner L, Neumann M, Vemmer B (1991) The effect of the number of processors on the
convergence of the parallel block Jacobi method. Linear Algebra Appl 154–156:311–330
Han L (2000) Algorithms for unconstrained optimization. Ph.D. Thesis, University of Connecticut
Han S, Ishii H, Fuji S (1994) One machine scheduling problem with fuzzy duedates. Eur J Oper
Res 79:1–12
Hapke M, Słowiński R (1996) Fuzzy scheduling under resource constraints. Proceedings on
European workshop on fuzzy decision analysis for management, planning and optimization,
pp 121–126
Herrera F, Verdegay JL, Zimmermann H-J (1993) Boolean programming problems with fuzzy
constraints. Fuzzy Sets Syst 55:285–293
Lagarias JC, Reeds JA, Wright MH, Wright PE (1998) Convergence properties of the
Nelder–Mead simplex algorithm in low dimensions. SIAM J Optim 9:112–147
Nelder JA, Mead R (1965) A simplex method for function minimization. Comput J 7:308–313
Tang HW, Qin XZ (2004) Applied optimal method, Dalian Science and Technology University
Press, Dalian
Terlaky T (1985) A convergent criss-cross method. Math Operationsforschung Stat Ser Optim 16:683–690
Terlaky T (2000) An easy way to teach interior point methods. Eur J Oper Res 130(1):1–9
Terlaky T, Zhang S (1993) Pivot rules for linear programming: A survey on recent theoretical
developments. Ann Oper Res 46:203–233
Wagner HM (1958) The dual simplex algorithm for bounded variables. Nav Res Logist Quart
5:257–261
Ward JE, Wendell RE (1990) Approaches to sensitivity analysis in linear programming. Ann
Oper Res 27:3–38
Wolfe P (1963) A technique for resolving degeneracy in linear programming. J SIAM
11:205–211
Wright MH (1996) Direct search methods: once scorned, now respectable. In: Griffiths DF, Watson GA (eds) Numerical analysis 1995: proceedings of the 1995 Dundee biennial conference in numerical analysis. Addison-Wesley, Harlow, pp 191–208
Wright MH (1998) The interior-point revolution in constrained optimization. Numerical analysis manuscript 98-4-09. AT&T Bell Laboratories, Murray Hill
Zhang S (1999) A new variant of criss-cross pivot algorithm for linear programming. Eur J Oper
Res 116(3):607–614
Zhang JZ, Xu SJ (1990) Linear programming, Science Press, Beijing
Chapter 85
Influence Mechanism of Lean Production
to Manufacturing Enterprises’
Competitiveness
Abstract The success of Toyota and other Japanese enterprises has proved that lean production can greatly improve manufacturing enterprises' competitiveness. However, the application of lean production in other countries has been less than ideal. One reason is that lean production is treated as a tool set rather than as system engineering. Against this background, this paper studies the influence mechanism of lean production on the upgrading of manufacturing enterprises' competitiveness from a systematic perspective. Lean production is not confined to improvement tools here, but is treated as a system comprising improvement tools, lean culture and the staff factor. The direct and indirect effects of these three aspects on manufacturing enterprises' competitiveness are analyzed by SEM using AMOS 17.0. The analysis results demonstrate the influence mechanism of LP on competitiveness clearly. This study has practical significance for lean implementation in China and, meanwhile, enriches lean production theory.
85.1 Introduction
Lean production (LP for short) originated from the Toyota Production System, whose superiority has been proved by the success of Toyota Motor Corporation as well as other Japanese manufacturing corporations. Because it combines the strengths of the Ford
H. Zhang (&)
Management Science and Engineering School, An Hui
University of Technology, Ma An Shan, China
e-mail: [email protected]
Z. Niu
Management and Economy Department, Tianjin University,
Tianjin, China
production mode and the handicraft production mode, achieving low cost together with high quality, and can satisfy the diversified needs of customer-focused marketing, it is regarded as the third production mode. After the 1990s, and especially after the publication of The Machine That Changed the World, more and more enterprises outside Japan began to learn and apply LP. From a theoretical point of view, LP can greatly upgrade manufacturing enterprises' competitiveness, but its application over the past 20 years has not been smooth: quite a few enterprises report that their lean implementation failed or did not achieve the desired outcome. Atkinson, Hines et al., and Sim and Rodgers indicated that less than 10 % of UK organizations have accomplished a successful lean implementation (Bhasin 2012). The famous Chinese IE expert Er-shi Qi also pointed out that, lacking a lean environment, Chinese enterprises have encountered a high failure rate in the lean implementation process. The reasons for this phenomenon may be complicated, but treating LP as merely a tool set may be one of the key factors.
Against this background, this paper regards LP as an engineering system and studies the influence mechanism of lean implementation on the competitiveness of manufacturing enterprises, identifying the direct and indirect effects of LP's different dimensions on competitiveness upgrading.
The viewpoint that improvement tools are one main component of lean implementation is accepted by many researchers and lean practitioners: improvement actions must be carried out through tools, and lean thinking is embodied in them, so many researchers have paid attention to this dimension. Monden (2008) pointed out that LP is a compound of JIT production, including field management, resource management, TQM and information system management (Monden 2008). Shah and Ward (2007) held that LP comprises three sets of tools: supplier management tools, customer management tools and internal operations management tools (Shah and Ward 2007). Fullerton and McWatters (2002) appraised LP using 10 tools, including
85 Influence Mechanism of Lean Production 807
focused factory, group technology, single minute exchange of die, TPM, multi-skilled operators, level operation, on-time purchasing and TQM (Fullerton and McWatters 2002). Kojima and Kaplinsky (2004) held that the LP system mainly covers technology, flexibility, quality and persistence (Kojima and Kaplinsky 2004). Based on the discussion above, this paper proposes the following hypothesis.
H1: the application of improvement tools has a positive influence on manufacturing enterprises' competitiveness.
The famous management expert Peter F. Drucker once said that staff is the only real resource of an enterprise, so a crucial purpose of management is to tap staff's potential. In lean implementation, too, staff plays an irreplaceable role, because staff are the executors of the improvement tools and the carriers of lean culture. Regarding its importance, Fujio Cho once said that before making cars one must first make people. Many researchers also support this viewpoint. In The Toyota Way 2001, the internal training material of Toyota Corporation, respect for people and continuous improvement are treated as the two pillars of TPS (Ohno 2001). Lander (2007) also pointed out that staff is the most valuable resource of Toyota, so training, education and career development are very important to enterprises (Lander 2007). Monden (2008) held that, in order to cope with change, the flexibility of staff is very important (Monden 2008). Besides this direct influence, staff also influences the upgrading of competitiveness indirectly: as the carriers of lean tools, staff develop and adjust lean techniques, making them suitable for the demands of the specific environment. Based on the extant research, the following hypotheses are put forward.
H2: lean staff has a positive effect on manufacturing enterprises' competitiveness.
H3: lean staff has a positive influence on the development of improvement tools.
Lean culture
Liker (2008) held that by building a talent cultivation system and fostering lean culture, enterprises' competitiveness can be improved lastingly (Liker 2008). Besides this, a dense lean culture makes staff take part in improvement more actively and provides a strong driving force that keeps improvement unremitting. On the basis of the above discussion, this paper puts forward the following hypotheses:
H4: lean culture cultivation has a positive direct effect on manufacturing enterprises' competitiveness.
H5: lean culture has a positive influence on lean staff.
Based on the analysis above, the conceptual model of this paper is obtained; see Fig. 85.1.
85.3 Methodology
85.3.1 Method
This paper applies structural equation modeling (SEM for short) to verify the above hypotheses. By seeking the inner structural relations among variables, SEM can verify whether the model assumptions are reasonable and, if the theoretical model is faulty, indicate how to revise it. SEM is a group of equations reflecting the relations between latent and observed variables; by measuring the observed variables it can infer the relations among latent variables and verify the model's correctness (Gong et al. 2011). Observed variables can be measured directly and are represented by boxes in the path diagram, while latent variables, which are difficult to measure directly because of their complexity and abstraction, are represented by ellipses. SEM can substitute for multiple regression, path analysis, factor analysis, covariance analysis and other methods (Zhang and Gao 2012); its application began in the late twentieth century in sociology, psychology, education, management, economics and other fields.
In studying the relation between LP and manufacturing enterprises' competitiveness, traditional quantitative methods are not applicable, because they cannot analyze the relations between multiple latent variables and multiple observed variables, nor the relevance among latent variables, so SEM is used in this paper.
In data collection, three main channels were used. First, MBA students of Tianjin University engaged in production management were surveyed in written form. Second, the questionnaire was emailed to potential respondents located in Tianjin, Hebei, Shandong, Anhui and Jiangsu provinces. Third, a field survey was conducted, in which production management chiefs and employees engaged in lean improvement were invited to fill in the questionnaire. In total, 500 questionnaires were distributed and 245 effective questionnaires were collected, a recovery rate of 49 %.
(Fig. 85.2: path diagram of the estimated model; among the reported coefficients are 0.68** from the staff factor to enterprise's competitiveness, 0.45** from lean culture to the staff factor, and 0.57*** from lean culture to enterprise's competitiveness.)
Figure 85.2 shows that, except for the path coefficient (0.40) of the staff factor to the improvement tool, which only reaches the α = 0.05 significance level, all other coefficients reach the α = 0.01 significance level; in particular, the coefficient of lean culture to enterprise's competitiveness reaches the α = 0.001 significance level. The five hypotheses put forward in this paper are therefore all supported. Figure 85.2 shows both the direct and the indirect influence of each lean dimension on manufacturing enterprise competitiveness. Concretely, the improvement tool has a direct influence of 0.51 and an indirect influence of 0, so its comprehensive influence on competitiveness is 0.51; the staff factor has a direct influence of 0.68 and an indirect influence of 0.40 × 0.51 = 0.204, so its comprehensive influence is 0.884; and lean culture has a direct influence of 0.57 and an indirect influence of 0.45 × 0.68 = 0.306, so its comprehensive influence is 0.876.
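The effect decomposition is simple arithmetic over the estimated path coefficients and can be checked directly; this sketch follows the paper's decomposition and uses 0.40 for the staff-to-tool path:

```python
# Direct path coefficients reported for the model (comp = competitiveness).
paths = {
    ("tool", "comp"): 0.51,
    ("staff", "comp"): 0.68,
    ("staff", "tool"): 0.40,
    ("culture", "comp"): 0.57,
    ("culture", "staff"): 0.45,
}

def total_effect(direct, chains=()):
    """Comprehensive effect = direct path plus the sum over indirect chains,
    each chain's coefficients multiplied together."""
    total = direct
    for chain in chains:
        prod = 1.0
        for coef in chain:
            prod *= coef
        total += prod
    return total

tool = total_effect(paths[("tool", "comp")])
staff = total_effect(paths[("staff", "comp")],
                     [(paths[("staff", "tool")], paths[("tool", "comp")])])
culture = total_effect(paths[("culture", "comp")],
                       [(paths[("culture", "staff")], paths[("staff", "comp")])])
print(tool, round(staff, 3), round(culture, 3))  # → 0.51 0.884 0.876
```

The computation reproduces the comprehensive influences stated in the text.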
85.5 Conclusion
References
Kojima S, Kaplinsky R (2004) The use of a lean production index in explaining the transition to
global competitiveness: the auto components sector in South Africa. Technovation
24(3):199–206
Koole SE (2005) Removing borders: the influence of the Toyota Production System on the American office furniture manufacturer. Ph.D. Dissertation, Grand Valley State University, USA
Lander E (2007) Implementing Toyota-style systems in high variability environments. Ph.D. Dissertation, University of Michigan, USA
Liao I-H (2005) Designing a lean manufacturing system: a case study. Ph.D. Dissertation,
Binghamton University, America
Liker JK (2008) Toyota culture: the heart and soul of the Toyota way. China Machine Press, Beijing
Monden Y (2008) Toyota production system. He Bei University Press, China
Ohno T (2001) The Toyota way 2001. Toyota Motor Corporation, Toyota
Shah R, Ward PT (2007) Defining and developing measures of lean production. J Oper Manag 25(1):785–805
Zhang W, Gao X (2012) Structural equation model analysis of foreign investment, innovation
ability and environmental efficiency. China Soft Sci Mag 1(3):170–180
Chapter 86
Mobile Device User Research in Different
Usage Situation
Abstract In this paper, we report differences in users' cognition and operating efficiency across three typical situations: noisy, dark and walking conditions. The data from a single-factor experiment suggest that noise affects mobile users' cognition significantly and that the walking situation affects users' performance to some extent, but user experience does not vary significantly across situations.
86.1 Introduction
With the rapid development of mobile devices and the mobile Internet, users have entered the "experience economy era" (Luo 2010). The great success of Apple's range of products has proved that user-centered design and close attention to user experience are very important for a company. Compared with products in other areas, mobile devices are used in more complex environments, and users' cognition, operating efficiency and subjective experience differ across situations. Limited screen space is another factor that distinguishes mobile devices from other products. How to improve the user experience is therefore a very challenging task.
In this paper, we study mobile users' cognition, operating efficiency and user experience in three different typical situations, aiming to provide a basis for enhancing the mobile device user experience.
W. Liu J. Li (&)
Beijing University of Posts and Telecommunications, Beijing, China
e-mail: [email protected]
An abstract definition of user experience is: all aspects of users' perception when they interact with products and services (UPA 2006). Garrett holds that user experience includes brand characteristics, information availability, functionality, content and other aspects. Mobile user experience involves a wide range of factors; in addition to hardware, more attention is now paid to the operating system, applications and interface design.
Situated cognition theory holds that the cognitive process is constructed, guided and supported by the situation, and that individual psychological activity is usually situated in context (Du 2007). When people use mobile devices, their cognition consists of attention, comprehension and retention. Attention can be centralized, decentralized or transferred: when there is a clear demand or potential interest, users tend to concentrate, but when interfering information appears and time is uncontrollable they become distracted and transfer their attention to exploring other information. Fitts's law suggests that reducing the distance from the starting position to the target and increasing the size of the target can accelerate the speed at which users acquire the target (Luo 2010). Users' operating habits and interaction expectations form a unique mental model when they use mobile devices.
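The distance/size relation can be made concrete with the Shannon formulation of Fitts's law, MT = a + b·log2(D/W + 1); the regression constants a and b below are hypothetical, not measured values from this study:

```python
import math

def fitts_mt(distance, width, a=0.2, b=0.1):
    """Predicted movement time (s) to acquire a target of size `width` at
    `distance`, under the Shannon formulation of Fitts's law. a and b are
    device-specific regression constants (hypothetical values here);
    distance and width must share the same unit."""
    return a + b * math.log2(distance / width + 1)

# Halving the distance and doubling the target size lowers the index of
# difficulty, so the predicted acquisition time drops:
print(fitts_mt(128, 8))   # small, far target
print(fitts_mt(64, 16))   # large, near target
```

In practice a and b are fitted per device and input method from timing data, which is why touch targets on small screens are sized generously.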
This paper studies the differences in cognition, operating ergonomics and user experience of mobile device users (in this paper, "mobile device" refers to the cell phone) across different situations, aiming to provide a theoretical basis for mobile design. The study falls within the scope of psychology and ergonomics; we analyze quantitative experimental data, supplemented by qualitative methods.
86.2 Methodology
24 participants (11 males and 13 females) aged 22–25 took part. All are familiar with mobile phones and have some touch-screen operating experience, but none had used the HTC Desire HD, phones with the Android 2.3 operating system, or the 365 Curriculum application. Their vision is normal, or corrected visual acuity is above 1.0, and all are right-handed. After brief instruction, all of them could cooperate with the host to complete the test and questionnaires independently. None had participated in similar experiments.
In this paper, we use an HTC Desire HD phone, whose screen size is 4.3 inches, resolution is 480 × 800 px, and operating system is Android 2.3.2. All experiments were held in a lab where participants could walk within a small area. The experimental materials include four icon-list pictures, 365 Curriculum Version 1.1 (Android), a subjective usability evaluation questionnaire and a user experience evaluation questionnaire (Figs. 86.1 and 86.2).
The experimental group variables are the noisy, dark and walking situations (Yamabe 2007). Control group participants were tested in a well-lit, fixed, quiet indoor place. Each user participated in only one group. After the experiment, all participants completed a subjective assessment questionnaire (Fig. 86.3).
Participants then answer some questions about the pictures; those questions are used to measure participants' cognition.
The second part of the experiment tests participants' performance in operating the 365 Curriculum application. Before the experiment, participants have some time to use the application independently. In the formal experiment, there are three tasks to complete. The first is to find the day's timetable, while the host records the time taken; participants then answer two questions about the operation. The second task is to remove one of the day's lessons, with the host again recording the completion time. The third task is to set the personal information, also timed by the host. After completing these tasks, participants fill in a subjective experience form, a 7-point scale with 10 questions.
86.3 Results
After standardizing each group's answers to the objective questions, the results are shown in Fig. 86.4. They suggest that participants' cognition differs to some extent among the four groups: the control group, free of any interference, performs best, while the noisy group's cognition is lowest.
Table 86.1 shows the results of the two-tailed t-test on the experimental data. The two-tailed Sig = 0.049 < 0.05 for the noisy group versus the control group suggests that, at the 95 % confidence level, the noisy group's cognition differs significantly from that of the control group. This means the noisy situation greatly impacts user cognition.
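The paper does not state which t-test variant was used; the sketch below computes Welch's two-sample statistic, a common choice when group variances may differ, on hypothetical standardized cognition scores:

```python
import math
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's two-sample t statistic and its approximate degrees of freedom
    (Welch-Satterthwaite); the two-tailed p-value would then be read from a
    t distribution with df degrees of freedom."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = variance(sample_a), variance(sample_b)  # sample variances
    se2 = va / na + vb / nb
    t = (mean(sample_a) - mean(sample_b)) / math.sqrt(se2)
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Hypothetical standardized scores for a control group and a noisy group:
control = [0.82, 0.91, 0.78, 0.88, 0.85, 0.90]
noisy = [0.70, 0.74, 0.66, 0.79, 0.72, 0.68]
t, df = welch_t(control, noisy)
print(round(t, 2), round(df, 1))  # a large positive t favours the control group
```

With real data one would compare the resulting p-value against α = 0.05 exactly as the table does.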
In this paper, we assess the degree of overall user experience by assigning weights to several aspects. Figure 86.6 suggests that the overall experience evaluations of the four groups do not differ much. Probably because the user experience evaluation is itself subjective, evaluations of the product will differ greatly across user groups.
86.4 Discussion
According to the analysis of the experimental data, the noisy, dark and walking situations each have a certain influence on mobile users' cognition, operating performance and user experience.
86.5 Conclusion
The experiment studies the differences in mobile device (here, cell phone) users' cognition, operating performance and user experience across situations. Control groups and single-factor control were used to analyze the experimental data. The results show that noise has a significant impact on mobile device users' cognition, while the walking environment has a certain impact on users' operating performance; the impact of different situations on user experience is not significant. Actual design should therefore deeply analyze the characteristics of both the product and its users.
References
Du Y (2007) Context-aware learning for intelligent mobile multimodal users interfaces, pp 1–5
Ingwersen P (2000) Cognitive information retrieval annual review of information science and
technology
Li J (2009) Comprehensive evaluation of the cognitive load of human-computer interaction
process
Luo S (2010) Context-based user experience design in the mobile interface
UPA (2006) Usability body of knowledge. http://www.usabilitybok.org/glossary
Yamabe T (2007) Experiments in mobile user interface adaptation for walking users
Chapter 87
Optimal Enterprise Cash Management
Under Uncertainty
87.1 Introduction
Optimization models for cash management can be divided into two main groups according to the objective function. The first deals with demand through cost-benefit or loss-benefit analysis, pioneered by the Baumol–Tobin model (Baumol 1952; Tobin 1956) and extended, among others, by Frenkel and Jovanovic (1980, 1981), Bar-Ilan (1990), Dixit (1991), Ben-Bassat and Gottlieb (1992), Chang (1999) and Perry and Stadje (2000). In this approach the optimal demand for cash is decided by the trade-off between the opportunity cost and the benefits of cash holding. The second category models demand through drift control theory, pioneered by Miller and Orr (1966) and extended by Bar-Ilan et al. (2004) and Bar-Ilan and Lederman (2007).
However, the authors above mainly consider cash only, over either a single period or an infinite horizon. In this paper, we present a model to obtain the optimal allocation ratio between cash and financial assets based on utility maximization over different horizons. The model departs from portfolio choice theory (Barberis 2000; Aizenman and Lee 2007) and the multi-period newsboy model (Matsuyama 2006), and instead emphasizes the importance of cash in providing insurance against bankruptcy.
The rest of the paper is organized as follows. In Sect. 87.2 we introduce the framework of cash management. A numerical example calibrating the model is presented in Sect. 87.3. Section 87.4 offers some concluding remarks.
f(X_1, X_2, X_3, ..., X_t) = \prod_{n=1}^{t} f(X_n)   (87.2)
(1) t = 1
y_1 \le x_1 \Rightarrow W_{1,1} = W_0 [(x_1 - y_1)(1 + r_{f,1}) + (1 - x_1)(1 + r_{s,1}) + y_1],
y_1 > x_1 \Rightarrow W_{1,2} = W_0 {[(1 - x_1) - (y_1 - x_1)(1 + h_1)](1 + r_{s,1}) + y_1},
(2) t = 2
y_1 \le x_1, y_2 > x_{2,1} \Rightarrow
x_{2,1} = W_0 (x_1 - y_1)(1 + r_{f,1}) / W_{1,1},
1 - x_{2,1} = W_0 (1 - x_1)(1 + r_{s,1}) / W_{1,1},
W_{2,2} = W_{1,1} {[(1 - x_{2,1}) - (y_2 - x_{2,1})(1 + h_2)](1 + r_{s,2}) + y_2},
828 X. Wei and L. Han
y_1 > x_1, y_2 \le x_{2,2} \Rightarrow
x_{2,2} = 0,
1 - x_{2,2} = W_0 [(1 - x_1) - (y_1 - x_1)(1 + h_1)](1 + r_{s,1}) / W_{1,2},
W_{2,3} = W_{1,2} [(x_{2,2} - y_2)(1 + r_{f,2}) + (1 - x_{2,2})(1 + r_{s,2}) + y_2],
y_1 > x_1, y_2 > x_{2,2} \Rightarrow
x_{2,2} = 0,
1 - x_{2,2} = W_0 [(1 - x_1) - (y_1 - x_1)(1 + h_1)](1 + r_{s,1}) / W_{1,2},
W_{2,4} = W_{1,2} {[(1 - x_{2,2}) - (y_2 - x_{2,2})(1 + h_2)](1 + r_{s,2}) + y_2},
y_1 \le x_1, y_2 \le x_{2,1} \Rightarrow
x_{2,1} = W_0 (x_1 - y_1)(1 + r_{f,1}) / W_{1,1},
1 - x_{2,1} = W_0 (1 - x_1)(1 + r_{s,1}) / W_{1,1},
W_{2,1} = W_{1,1} [(x_{2,1} - y_2)(1 + r_{f,2}) + (1 - x_{2,1})(1 + r_{s,2}) + y_2],
(3) General t
Let the set R_+ be defined by
R_+ = {a | a \ge 0, a \in R},
where R denotes the set of all real numbers. Moreover, A_t and B_t, t = 1, 2, 3, ..., are defined accordingly, A_t corresponding to the case y_t \le x_t and B_t to y_t > x_t. Then we denote
\Omega_1 = A_1 \times A_2 \times ... \times A_t,
\Omega_2 = A_1 \times A_2 \times ... \times A_{t-1} \times B_t,
...
\Omega_{2^t - 1} = B_1 \times B_2 \times ... \times B_{t-1} \times A_t,
\Omega_{2^t} = B_1 \times B_2 \times ... \times B_t.
Let y = (y_1, y_2, ..., y_t)^T.
y \in \Omega_1 \Rightarrow
x_{t,1} = W_{t-2,1} (x_{t-1,1} - y_{t-1})(1 + r_{f,t-1}) / W_{t-1,1},
1 - x_{t,1} = W_{t-2,1} (1 - x_{t-1,1})(1 + r_{s,t-1}) / W_{t-1,1},
W_{t,1} = W_{t-1,1} [(x_{t,1} - y_t)(1 + r_{f,t}) + (1 - x_{t,1})(1 + r_{s,t}) + y_t],
y \in \Omega_2 \Rightarrow
x_{t,1} = W_{t-2,1} (x_{t-1,1} - y_{t-1})(1 + r_{f,t-1}) / W_{t-1,1},
1 - x_{t,1} = W_{t-2,1} (1 - x_{t-1,1})(1 + r_{s,t-1}) / W_{t-1,1},
W_{t,2} = W_{t-1,1} {[(1 - x_{t,1}) - (y_t - x_{t,1})(1 + h_t)](1 + r_{s,t}) + y_t},
......
y \in \Omega_{2^t - 1} \Rightarrow
x_{t,2^{t-1}} = 0,
1 - x_{t,2^{t-1}} = W_{t-2,2^{t-2}} [(1 - x_{t-1,2^{t-1}}) - (y_{t-1} - x_{t-1,2^{t-1}})(1 + h_{t-1})](1 + r_{s,t-1}) / W_{t-1,2^{t-1}},
W_{t,2^t - 1} = W_{t-1,2^{t-1}} [(x_{t,2^{t-1}} - y_t)(1 + r_{f,t}) + (1 - x_{t,2^{t-1}})(1 + r_{s,t}) + y_t],
y \in \Omega_{2^t} \Rightarrow
x_{t,2^{t-1}} = 0,
1 - x_{t,2^{t-1}} = W_{t-2,2^{t-2}} [(1 - x_{t-1,2^{t-1}}) - (y_{t-1} - x_{t-1,2^{t-1}})(1 + h_{t-1})](1 + r_{s,t-1}) / W_{t-1,2^{t-1}} = 1,
W_{t,2^t} = W_{t-1,2^{t-1}} {[(1 - x_{t,2^{t-1}}) - (y_t - x_{t,2^{t-1}})(1 + h_t)](1 + r_{s,t}) + y_t}.
The manager's preferences over terminal wealth are described by a constant relative risk-aversion utility function of the form

u(W_t) = \frac{W_t^{1-A}}{1-A}   (87.3)
The manager's problem is to solve

V(W_t) = \max_{x_1} E[E_0 u(W_t) | r_{s,1}, r_{s,2}, ..., r_{s,t}]
= \max_{x_1} E\Big[ \int_{\Omega_1} u(W_{t,1}) h(y) dy + \int_{\Omega_2} u(W_{t,2}) h(y) dy + ... + \int_{\Omega_{2^t}} u(W_{t,2^t}) h(y) dy \,\Big|\, r_{s,1}, r_{s,2}, ..., r_{s,t} \Big]   (87.4)

where \max_{x_1} denotes that the problem is to solve for the optimal x_1 and that the manager calculates the expected return from the beginning of period 1 on, E is the expectation operator over r_{s,t}, and h(y) is the joint density function of (y_1, y_2, y_3, ..., y_t).
When the budget constraint binds, that is, when (1 - x_t) - (y_t - x_t)(1 + h_t) < 0, the final wealth in period t is 0: even if the manager liquidates all financial assets, the payments cannot be met and bankruptcy occurs.
P(X_t = i) = 0.25, i = 1, 2, 3, 4;
P(r_{s,t} = 0.03) = P(r_{s,t} = 0.08) = 0.5; L_t = 1, H_t = 4;
Y_t = X_t - 0.005 R_{t-1} - 0.002 S_{t-1} and W_0 = 12;
r_{f,t} = 0.04, h_t = 0.5, A = 5, t = 1, 2, 3, 4, 5, 6.
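As a rough illustration of the trade-off the model captures (not the paper's full inter-temporal recursion), the sketch below evaluates a simplified one-period version by Monte Carlo: a cash fraction x earns r_f, financial assets earn r_s, and any payment shortfall is covered by liquidating assets at proportional cost h. The payment distribution, grid and initial wealth are hypothetical:

```python
import random

def terminal_wealth(x, y, rf=0.04, rs=0.08, h=0.5, w0=1.0):
    """One-period wealth for cash fraction x and payment y (both per unit of
    initial wealth). Simplified from the paper's recursion: shortfalls are
    covered by liquidating financial assets at proportional cost h."""
    if y <= x:
        return w0 * ((x - y) * (1 + rf) + (1 - x) * (1 + rs))
    assets_left = (1 - x) - (y - x) * (1 + h)
    return max(w0 * assets_left * (1 + rs), 0.0)  # 0 means bankruptcy

def crra(w, a=5):
    # u(W) = W^(1-A) / (1-A); tiny wealth is floored to keep utility finite.
    return max(w, 1e-6) ** (1 - a) / (1 - a)

def expected_utility(x, draws):
    return sum(crra(terminal_wealth(x, y)) for y in draws) / len(draws)

random.seed(0)
draws = [random.uniform(0.05, 0.40) for _ in range(2000)]  # hypothetical payments
grid = [k / 100 for k in range(5, 60)]
best_x = max(grid, key=lambda x: expected_utility(x, draws))
print(best_x)  # risk aversion pushes the cash fraction toward the payment range
```

Even in this stripped-down version, the costly-liquidation penalty combined with CRRA risk aversion pulls the optimal cash holding well above the minimum, which is the qualitative effect the full model studies.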
The simulation results are reported in Fig. 87.1. The optimal x1 maximizes the
function (87.4).
The results show that the optimal choice of the inter-temporal model differs from that of the single-period model: the former makes the manager hold more cash. The reason is that long-horizon managers have an intrinsically larger need for cash to meet possible transaction and precautionary demands. We also conclude that higher yield volatility of financial assets, higher liquidation costs of financial assets and a higher coefficient of risk aversion all raise the demand for cash. To save space, the corresponding figure is omitted.
87.4 Conclusion
This paper presents a dynamic model for enterprise cash management under uncertainty. A numerical method was used to obtain the optimal level of cash holdings. The results show that higher yield volatility of financial assets, higher liquidation costs of financial assets and a higher coefficient of risk aversion all raise the demand for cash. They also show that the optimal choice of the inter-temporal model differs from that of the single-period model: the former makes the manager hold more cash, because long-horizon managers have an intrinsically larger need for cash to meet possible transaction and precautionary demands.
References
Aizenman J, Lee J (2007) International reserves: precautionary versus mercantilist views, theory
and evidence. Open Econ Rev 18(2):191–214
Barberis N (2000) Investing for the long run when returns are predictable. J Finance
LV(1):225–264
Bar-Ilan A (1990) Overdraft and the demand for money. Am Econ Rev 80:1201–1216
Bar-Ilan A, Lederman D (2007) International reserves and monetary policy. Econ Lett
97:170–178
Bar-Ilan A, Perry D, Stadje W (2004) A generalized impulse control model of cash management.
J Econ Dyn Control 28:1013–1033
Baumol W (1952) The transaction demand for cash—an inventory theoretic approach. Q J Econ
66:545–556
Ben-Bassat A, Gottlieb D (1992) Optimal international reserves and sovereign risk. J Int Econ
33:345–362
Chang F (1999) Homogeneity and the transactions demand for money. J Money Credit Bank
31:720–730
Dixit A (1991) A simplified exposition of the theory of optimal control of Brownian motion.
J Econ Dyn Control 15:657–673
Frenkel JA, Jovanovic B (1980) On transactions and precautionary demand for money. Q J Econ
94:24–43
Frenkel J, Jovanovic B (1981) Optimal international reserves: a stochastic framework. Econ J
91:507–514
Matsuyama K (2006) The multi-period newsboy problem. Eur J Oper Res 171:170–188
Miller M, Orr D (1966) A model of the demand for money by firms. Q J Econ 81:413–435
Perry D, Stadje W (2000) Martingale analysis of a stochastic cash fund model. Insur Math Econ
26:25–36
Tobin J (1956) The interest elasticity of the transaction demand for cash. Rev Econ Stat
38:241–247
Chapter 88
Problem Analysis and Optimizing
of Setting Service Desks in Supermarket
Based on M/M/C Queuing System
Chun-feng Chai
Abstract Queuing is an important factor affecting the operation level and efficiency of a supermarket, and solving the queuing issue properly through effective measures has become a top priority. This paper analyses supermarket cashier queuing by establishing an M/M/C queuing model on the basis of operations research, providing a reference for decisions on optimizing the number of cashier service desks, improving service efficiency and decreasing operating costs.
88.1 Introduction
In general we do not like to wait. But reduction of the waiting time usually
requires extra investments. To decide whether or not to invest, it is important to
know the effect of the investment on the waiting time. So we need models and
techniques to analyze such situations (Adan and Resing 2001). Shopping in the supermarket during spare time has become a common habit of life. We enjoy shopping, but are also troubled by the supermarket cashier service system. Opening too few cashier desks leaves more customers waiting too long for service, causing dissatisfaction and driving customers away. Opening too many service desks reduces customers' waiting time and queue length, but increases the supermarket's operating costs. The supermarket operator must
C. Chai (&)
School of Economy and Management, Taiyuan University of Science
and Technology, Taiyuan, China
e-mail: [email protected]
consider how to balance these two factors. As the terminal of the deal between supermarket and consumer, the service desk directly shapes the perceived service quality and efficiency, and it affects the operation level and efficiency of the whole supermarket. Therefore, how to dynamically and reasonably arrange the number of service desks according to customer flow and the time required, and how to balance customer satisfaction against operating cost, are problems the enterprise must solve.
88.2 Methodology
(Fig. 88.1: the M/M/C queuing system, in which a Poisson arrival process feeds a single queue served by C parallel service desks, numbered 1 to C; customers leave after service.)
88 Problem Analysis and Optimizing of Setting Service Desks in Supermarket 835
Any queuing system consists of three parts: the input process, the queuing discipline and the service agency. For each of these parts, a hypothesis for the supermarket cashier service system can be made (Yan 2012).
The hypothesis on the input process concerns customers arriving at the supermarket cashier service system (Zheng and Gu 2005). First, the customer source is infinite. Second, customers reach the cashier desks randomly and independently. Moreover, the following features hold: the number of customers arriving in any interval depends only on the length of the interval, not on its starting moment, and the probability of two customers arriving at exactly the same moment is almost zero. From this analysis, it can be assumed that the input process of the supermarket cashier service system is a Poisson process, i.e., that the number of arrivals per unit interval follows a Poisson distribution with parameter λ.
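The Poisson-input assumption can be illustrated by simulation: with exponential interarrival times of rate λ, the number of arrivals in a fixed interval follows a Poisson distribution with mean λt. The rate and interval below are hypothetical:

```python
import random

def arrivals_in(t, lam, rng):
    """Count arrivals in [0, t) when interarrival times are Exponential(lam);
    this renewal construction is exactly what underlies a Poisson input."""
    n, clock = 0, rng.expovariate(lam)
    while clock < t:
        n += 1
        clock += rng.expovariate(lam)
    return n

rng = random.Random(42)
lam = 2.0  # hypothetical arrival rate: 2 customers per minute
counts = [arrivals_in(10, lam, rng) for _ in range(5000)]
avg = sum(counts) / len(counts)
print(round(avg, 1))  # should be close to lam * t = 20
```

The simulated mean count matching λt is one quick empirical check of the memoryless-arrival assumption before fitting an M/M/C model to real checkout data.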
Customers arrive at the cashier service system at random. If a desk is free, the
customer is served immediately; if not, the customer waits in line. If the supermarket
is laid out rationally and the space is sufficient to avoid jams, customers will generally
choose the shortest line to wait for service (Wang and Miao 2012). When
another line becomes shorter while they are waiting, customers will change lines
immediately, so the queuing discipline of the supermarket cashier service system is
first-come-first-served under the waiting rule (Huang and Xiao 2009).
There are C check stands in the supermarket, and they work independently of each
other. Customers are served one at a time in the order of the queue. Because the
kinds and numbers of goods customers bring differ, the service time of a check
stand is random; it can be assumed to follow a negative exponential distribution
with parameter μ (Liu and Liu 2009).
According to the above analysis and assumptions, the system has C working check
stands, customer arrivals follow a Poisson process with rate λ, and the service time
of every customer is independent and negative-exponentially distributed with
parameter μ. The system capacity is infinite; if every service window is busy when
a customer arrives, the customer waits. The supermarket cashier queuing system is
therefore an M/M/C queuing system (Deng 2000; Liu et al. 2011).
Following the Little formula, let ρ denote the service intensity, λ the customer
arrival rate, μ the service rate, and c the number of cashiers open.
When ρ < 1, the system can reach a steady state and has a stationary distribution
(Li et al. 2000; Zhou 2011).
" c #1
c1 k
X 1 k 1 1 k
P0 ¼ þ ð88:1Þ
k¼0
k! l c! 1 q l
ck k
k! qc P0 0 k\c
Pk ¼ c k ð88:2Þ
c! q P0 kc
Through analysis of the system, the following target parameters can be
derived (Sun 2007).
Average waiting queue length:
$$L_q=\frac{(c\rho)^c\rho}{c!\,(1-\rho)^2}P_0\qquad(88.3)$$
Average system queue length (the average number of customers in the system):
$$L=L_q+\frac{\lambda}{\mu}=\frac{(c\rho)^c\rho}{c!\,(1-\rho)^2}P_0+\frac{\lambda}{\mu}\qquad(88.4)$$
Suppose the longest waiting time a customer can endure is T1 and the longest
queue length a customer can endure is L1; from the above analysis, the conclusions are
as follows (Miller 1981; Zhang et al. 1997).
The system must run normally, i.e., the service intensity ρ < 1; the waiting time
must not exceed the longest endurable waiting time, i.e., Wq ≤ T1; and the waiting
queue length must not exceed the longest endurable length, i.e., Lq ≤ L1.
Only the minimum number of service units meeting these three requirements is
optimal. Here c is unknown while λ, μ, T1 and L1 are known, so P0 can be computed
as a function of c. The optimal number of checkout counters is given by the following model:
$$c^*=\min\left\{c\;\middle|\;\rho=\frac{\lambda}{c\mu}<1,\;W_q=\frac{(c\rho)^c\rho}{\lambda\,c!\,(1-\rho)^2}P_0\le T_1,\;L_q=\frac{(c\rho)^c\rho}{c!\,(1-\rho)^2}P_0\le L_1\right\}\qquad(88.7)$$
The constraint conditions are:
$$\begin{cases}\rho=\dfrac{\lambda}{c\mu}<1\\[6pt]W_q=\dfrac{(c\rho)^c\rho}{\lambda\,c!\,(1-\rho)^2}P_0\le T_1\\[6pt]L_q=\dfrac{(c\rho)^c\rho}{c!\,(1-\rho)^2}P_0\le L_1\\[6pt]\lambda,\;c,\;\mu,\;T_1,\;L_1\ge 0\end{cases}\qquad(88.8)$$
Here λ, μ, T1 and L1 are all known and c is unknown; c* is the optimal
number of service units. After optimization, the average waiting time and average
waiting queue length in the system are, respectively:
$$W_q^*=\frac{(c^*\rho)^{c^*}\rho}{\lambda\,c^*!\,(1-\rho)^2}P_0\qquad(88.9)$$
$$L_q^*=\frac{(c^*\rho)^{c^*}\rho}{c^*!\,(1-\rho)^2}P_0\qquad(88.10)$$
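As an aside, formulas (88.1), (88.3), (88.4) and Little's law can be combined into a short routine that evaluates the steady-state metrics of an M/M/c queue. This is an illustrative sketch, not part of the original chapter, and the function and variable names are our own:

```python
import math

def mmc_metrics(lam, mu, c):
    """Steady-state metrics of an M/M/c queue, following (88.1)-(88.4).
    lam: arrival rate; mu: service rate per desk; c: number of open desks."""
    rho = lam / (c * mu)                 # service intensity, must be < 1
    if rho >= 1:
        raise ValueError("unstable system: rho >= 1")
    a = lam / mu                         # offered load lambda/mu
    # Eq. (88.1): probability that the system is empty
    p0 = 1.0 / (sum(a**k / math.factorial(k) for k in range(c))
                + a**c / (math.factorial(c) * (1.0 - rho)))
    # Eq. (88.3): average waiting-queue length
    lq = (c * rho)**c * rho / (math.factorial(c) * (1.0 - rho)**2) * p0
    l_sys = lq + a                       # Eq. (88.4): customers in system
    wq = lq / lam                        # Little's law, matches (88.9)
    return p0, lq, l_sys, wq
```

For instance, `mmc_metrics(2, 1, 3)` gives P0 = 1/9 and Lq = 8/9, the textbook values for an M/M/3 queue with ρ = 2/3.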
According to a survey of customer flow and service time in the supermarket, the
store, whose business hours run from 8:00 to 22:00 (14 h in total), had 50
checkout counters. Observations were taken at various times of day on weekends and
weekdays. In each time slot, we conducted random surveys over 200 unit times (each
Table 88.1 Arrival rate of customers in each time slot and the number of available checkout
counters

Time interval   Arrival rate of customers (people/h)   Number of service desks
                Weekday      Weekend                   Weekday      Weekend
8:00–9:00       756          840                       15           16
9:00–10:00      984          1116                      15           16
10:00–11:00     1332         1536                      22           24
11:00–12:00     1524         1560                      24           24
12:00–13:00     1296         1620                      22           25
13:00–14:00     1380         1656                      22           25
14:00–15:00     1524         1884                      24           27
15:00–16:00     1680         2052                      26           30
16:00–17:00     1716         1668                      26           25
17:00–18:00     1668         1908                      26           28
18:00–19:00     1680         1992                      26           30
19:00–20:00     1776         1800                      27           27
20:00–21:00     1980         1524                      27           25
21:00–22:00     1416         852                       22           16
unit time is 5 min). Table 88.1 presents the statistics of customer arrivals.
According to the survey, the service time at a cashier desk obeys a negative expo-
nential distribution with parameter μ (μ = 58.32 customers/h). Taking the working
time interval from 9:00 to 10:00 as an example, and using the data in Table 88.1
with n = 15 open desks, the average service intensity of the system at that time is:
$$\rho=\frac{\lambda}{n\mu}=\frac{984}{15\times 58.32}=1.125>1\qquad(88.11)$$
This shows that the system is highly congested at this time: customers have to
wait a long time to get service and cannot be satisfied with the system. The actual
situation is the same, and it should be improved. The system reaches equilibrium
only when the service intensity ρ < 1.
When ρ = λ/(nμ) < 1, i.e., 984/(58.32n) = 16.87/n < 1, we get n ≥ 17; this
means that at least 17 service desks should be open to reach the steady state from
9:00 to 10:00.
Taking 9:00–10:00 as an example, the state equations of the M/M/C queuing
system give the probability that the system is idle, the average waiting queue
length, and their relationship to the number of service desks C. Substituting
C = 17, λ = 984, μ = 58.32 and ρ = 0.992 into formulas (88.1)–(88.6) yields the
final solution.
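The stability calculation above can be reproduced in a few lines; this sketch only restates the arithmetic of (88.11) with the figures quoted in the text:

```python
import math

# 9:00-10:00 weekday slot (Table 88.1): lam = 984 customers/h arrive,
# each desk serves mu = 58.32 customers/h, and 15 desks are open.
lam, mu, n_open = 984.0, 58.32, 15

rho_open = lam / (n_open * mu)        # Eq. (88.11): about 1.125 > 1, unstable
n_min = math.floor(lam / mu) + 1      # smallest n with lam / (n * mu) < 1
rho_min = lam / (n_min * mu)          # about 0.992 < 1, steady state reachable

print(n_open, rho_open, n_min, rho_min)
```

This reproduces ρ ≈ 1.125 for 15 desks and the conclusion that at least n = 17 desks (ρ ≈ 0.992) are needed for a steady state.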
After optimization, the number of service desks in each time interval is optimal:
customers no longer need to queue for long, customer satisfaction with the
supermarket increases, the supermarket gains more and more loyal customers, and
its income increases as well.
88.5 Conclusion
This paper utilizes classic queuing theory to solve the queuing problem and
optimize the service strategy. According to the passenger flow volume at different
times, the number of open cashier desks should be set flexibly, so as to shorten
customers' waiting time, improve customer satisfaction, reduce cost and improve
the competitiveness of the enterprise.
References
Adan I, Resing J (2001) Queuing theory. Department of Mathematics and Computing Science
Eindhoven University of Technology, Eindhoven, p 7
Deng X (2000) The optimal product line design based on the queuing theory. Oper Res Manage
Sci 9(3):64–69
Hlynka M (2010) An introduction to queuing theory modeling and analysis in applications.
Technometrics 52:138–139
Huang Z, Xiao YJ (2009) M/M/C/∞ queuing system model and its application example analysis.
Technol Dev Enterp 11:92–93
Li W, Li M, Hu Y (2000) Operational research. Tsinghua University press, Beijing, pp 310–348
Liu W, Liu Z (2009) Applying queuing theory to quantitative analysis on outpatient billing
service in hospital. Chin Med Equip J 10:87–89
Liu L, Xu J, Zhang T (2011) Statistical analysis of bank service window scheduling strategy
model. Stat Res 28:75–79
Mandelbaum M, Hlynka M (2009) History of queuing theory in Canada prior to 1980. INFOR
11:335–353
Miller DR (1981) Computation of steady-state probability of M/M/1 priority queues. Oper Res
29(5):945–948
Sun B (2007) The model of bank queue and its appliance in improving the bank quality of
services. Hefei University of Technology, Hefei
Takacs L (1962) Introduction to the theory of queues. Oxford University Press, New York, p 161
Wang R, Miao W (2012) Based on the M/M/n the queue theory and large travel scenic spot
internal queuing phenomenon. Reform Econ Syst 3:177–178
Yan W (2012) Problem analysis and system optimization strategy research about bank queuing.
Finance Econ 6:63–65
Zhang ZG, Vickson RG, van Eenige MJA (1997) Optimal two-threshold polices in an M/G/1
queue with two vacation types. Perform Eval 29:63–80
Zheng H, Gu F (2005) Optimization of the queuing system in large supermarket. Chin J Manage
2(2):171–173
Zhou W (2011) Application research of queuing theory model in medical service system.
HuaZhong University of Science and Technology, Wuhan
Chapter 89
Proposed a Novel Group Scheduling
Problem in a Cellular Manufacturing
System
Abstract This paper presents a new integrated mathematical model for a cellular
manufacturing system and production planning. The aim of this model is to
minimize machine purchasing, intra-cell material handling, cell reconfiguration
and setup costs. The presented model forms the manufacturing cells and deter-
mines the quantity of machines and movements at each period of time that min-
imizes the aforementioned costs. It is so difficult to find an optimal solution in a
reasonable time. Thus, we design and develop a meta-heuristic algorithm based on
a genetic algorithm (GA). This proposed algorithm is evaluated, and the related
results confirm the efficiency and effectiveness of our proposed GA to provide
good solutions, especially for medium and large-sized problems.
Y. Gholipour-Kanani (&)
Department of Management, Islamic Azad University—Qaemshahr Branch,
Qaemshahr, Iran
e-mail: [email protected]
N. Aghajani
Department of Industrial Engineering, Islamic Azad University—Qazvin
Branch, Qazvin, Iran
R. Tavakkoli-Moghaddam
Department of Industrial Engineering, College of Engineering,
University of Tehran, Tehran, Iran
S. Sadinejad
Department of Industrial Engineering, Islamic Azad
University—Research and Science Branch, Tehran, Iran
89.1 Introduction
This section presents a new integrated pure integer linear programming model of
the CMS and PP under the following assumptions.
89.2.1 Assumptions
1. The processing time for all operations of a part type is known and deterministic.
2. The capabilities and time capacity of each machine type are known and constant
over the planning horizon.
3. Parts are moved in batches within cells. The intra-cell batch handling cost is
known and constant, and it is independent of distance.
4. The number of cells is known and constant over the planning horizon.
5. The upper and lower bounds of cell sizes are known and constant.
6. The relocation cost of each machine type from one cell to another between
periods is known. All machine types can be moved to any cell. The relocation
cost is the sum of uninstalling and installing costs. Note that if a new machine is
added to the system, only the installation cost applies; if a machine is removed
from the system, only the uninstallation cost applies.
7. The setup cost for all parts is known.
8. The batch sizes for all parts in each period are constant.
9. The independent demand of parts differs from one period to another.
89.2.3 Parameters
$$r_{fck}=\begin{cases}1,&\text{if one unit of machine type }f\text{ is placed in cell }c\text{ at period }k\\0,&\text{otherwise}\end{cases}$$
$$z_{ik}=\begin{cases}1,&\text{if part type }i\text{ is processed during period }k\\0,&\text{otherwise}\end{cases}$$
$$X_{jick}=\begin{cases}1,&\text{if operation }j\text{ of part }i\text{ is processed in cell }c\text{ during period }k\\0,&\text{otherwise}\end{cases}$$
$$b_{jick}=\begin{cases}1,&\text{if operation }j\text{ of part }i\text{ is intra-cell handled in cell }c\text{ during period }k\\0,&\text{otherwise}\end{cases}\qquad(89.1)$$
$$\sum_c n_{fck}-\sum_c n_{fc,k-1}\ge 0,\quad\forall(f,k)\qquad(89.2)$$
$$\sum_c X_{jick}=z_{ik},\quad\forall(j,i,k)\qquad(89.3)$$
$$\sum_i\sum_j d_{ik}\,\lambda_{ji}\,X_{jick}\le D_f\,n_{fck},\quad\forall(c,k)\qquad(89.4)$$
$$LB_c\le\sum_f n_{fck}\le UB_c,\quad\forall(c,k)\qquad(89.5)$$
$$n_{fck}=n_{fc,k-1}+y^+_{fck}-y^-_{fck},\quad\forall(f,c,k)\qquad(89.6)$$
$$n_{fck},\;y^+_{fck},\;y^-_{fck}\in\{0,1,2,\ldots\},\quad\forall(f,c,k)\qquad(89.12)$$
number of machines in each cell due to the limit of physical space. In addition,
there should be at least one machine in each cell; otherwise the cell disappears.
Equation (89.5) specifies the lower and upper bounds of cell sizes. Equation (89.6)
states that the number of machines of a given type in a cell in the current period
equals the number in the previous period, plus the machines moved in, minus the
machines moved out of the cell. Equation (89.7) specifies the intra-cell material
handling. Equation (89.8) specifies the corresponding binary variable for system
setup. Equations (89.9) and (89.10) set r_fck equal to 1 if at least one unit of
machine type f is placed in cell c during period k, and to 0 otherwise.
Equations (89.11) and (89.12) are integrality constraints.
This procedure creates the initial population (Pop), which should be a wide set of
diverse, good solutions. Several strategies can be applied to obtain a population
with these properties; for instance, the solutions can be created by a random
procedure to achieve a certain level of diversity. In this study, an initial population
of the desired size is generated randomly. For example, when there are five parts,
the algorithm generates 10 random solutions, the number depending on the
problem size.
89.3.3 Fitness
Each solution has a fitness value related to its objective function value (OFV).
However, the population can contain both feasible and infeasible solutions. One
option for managing infeasibility is to use both cost and feasibility: fitness(s) =
(cost(s), feasibility(s)), where s is the solution and cost(s) is its objective function
value. Feasibility equals 1 if the solution is feasible and 0 otherwise. The fitness is
therefore not a single value but two: the cost and the feasibility of the solution.
The parent selection is important in regulating the bias in the reproduction process.
The parent selection strategy means how to choose chromosomes in the current
population that will create offspring for the next generation. Generally, it is better
that the best solutions in the current generation have more chance to be selected as
parents in order to create offspring. The most common method for the selection
mechanism is the ‘‘roulette wheel’’ sampling, in which each chromosome is
assigned a slice of a circular roulette wheel and the size of the slice is proportional
to the chromosome’s fitness.
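The roulette-wheel sampling just described can be sketched as follows. The function is our own illustrative code, assuming the fitness values have already been transformed so that larger is better (e.g., the reciprocal of cost for this minimization problem):

```python
import random

def roulette_select(population, fitness):
    """Pick one chromosome; the chance of each is proportional to its
    fitness, i.e., to its slice of the roulette wheel."""
    total = sum(fitness)
    pick = random.uniform(0.0, total)      # spin the wheel
    cumulative = 0.0
    for chromosome, fit in zip(population, fitness):
        cumulative += fit
        if pick <= cumulative:
            return chromosome
    return population[-1]                  # guard against float round-off
```

A chromosome with three times the fitness of another is selected roughly three times as often over many spins.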
The main task of the mutation operator is to maintain the diversity of the popu-
lation across generations and to explore the solution space. In this paper, a
mutation operator called Swap Mutation, which swaps two randomly chosen genes
in a chromosome, is used (Torabi et al. 2006). We first define the "mutation
strength", the maximum number of swap moves performed. If the mutation
strength is one, a single swap move is performed with a given probability P(M).
The mutation strength thus sets the number of consecutive swaps applied to an
individual chromosome.
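A minimal sketch of this Swap Mutation operator (the names and default values below are ours, not from the paper):

```python
import random

def swap_mutation(chromosome, strength=1, p_mutate=0.1):
    """With probability p_mutate, apply `strength` consecutive swap moves,
    each exchanging two randomly chosen genes of the chromosome."""
    child = list(chromosome)
    if random.random() < p_mutate:
        for _ in range(strength):
            i, j = random.sample(range(len(child)), 2)
            child[i], child[j] = child[j], child[i]
    return child
```

Each swap only permutes genes, so a permutation-encoded chromosome stays a valid permutation after mutation.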
The genetic algorithm was implemented in Delphi 7. Both the genetic algorithm
and Lingo were run on a computer with a 1.8 GHz processor and 768 MB of
RAM. Computing the optimal value, especially for large instances, is difficult
because the CMS planning model is complicated, so the answers of the Lingo 8
software are near-optimal. The answers are shown in Table 89.1. We compare the
objective values obtained by the genetic algorithm and by Lingo on small
instances, determine the percentage difference from the Lingo answer, and study
memory size and CPU time. The results of the genetic algorithm and of Lingo
coincide on small instances, which shows the efficiency of the algorithm. Lingo
cannot solve the large problems in acceptable time, whereas the genetic algorithm
produces an optimal or near-optimal answer in far less time. The results of some
test problems are shown in Table 89.1, and the growth of solution time for the
genetic algorithm and Lingo is compared in Fig. 89.3.
The model considered in this article minimizes machine purchasing, intra-cell
material handling, cell reconfiguration, and setup costs. According to previous
research, this problem is NP-hard; that is, solving it with optimization software
becomes impossible as the problem dimension increases. Approaches such as
branch and bound and dynamic programming suffer from computational time and
storage limitations, so heuristic algorithms are effective. The results obtained are
as follows:
• As the problems grow, the computational time of Lingo increases sharply, while
the increase is small when the heuristic algorithm is used.
• Product variety is increasing and industry is moving toward cellular
manufacturing to exploit its benefits, so the usual methods for planning cellular
manufacturing systems do not perform well, and attention should be paid to
heuristic methods.
The following are some suggestions for future research:
• Some of the parameters of this problem can be treated as fuzzy, converting it to
a fuzzy cellular manufacturing system.
• Multiple routes are not considered in this problem; considering them would
bring the problem closer to real conditions, so that investigation would be
valuable.
• Inventory cost is not considered in this article; it can be considered in future
work.
(Fig. 89.3: growth of solution time for the genetic algorithm and Lingo; vertical axis: time, horizontal axis: example number 1–10.)
Abstract With the rapid growth of its economy, China has seen a sharp increase in
GDP, but at the same time it faces more and more pressure from industrial wastes in
its environment. To relieve the pressure on the environment while maintaining
sustainable development in China, decision makers have started to focus on mea-
suring the efficiency of the waste treatment process. Because industrial wastes are
divided into three classes (waste water, waste gas and solid wastes), different
treatments must be applied to deal with them. In this paper, we propose a multiple
parallel DEA methodology and apply it to calculate the efficiency of the treatment
of these three kinds of wastes. By formulating the three types of treatment as three
parallel sub-systems in ecological environment optimization, the overall efficiency,
as well as the efficiency of each individual waste treatment, can be calculated.
Statistical data from 30 individual provinces of China in 2010 are used to demonstrate the
effectiveness of our approach. Suggestions for optimizing the ecological environ-
ment in different regions based on our measurements are given at the end of the paper.
Keywords Ecological regions · Overall efficiency · Parallel DEA · Sub-system
efficiency · Treatments of wastes
90.1 Introduction
In the past, China relied mainly on the former Soviet mode to fuel its economic
development, focusing on increasing inputs, especially labor and capital
investment. In that process, the environment and limited resources were sacrificed for
L. Wang (&)
School of Management, Changchun Institute of Technology, Changchun, China
e-mail: [email protected]
N. Li
School of Economics and Management, China University of Petroleum, Qingdao, China
e-mail: [email protected]
economic growth. With the influence of the global greenhouse effect and serious
pollution, decision makers have begun to shift away from the traditional production
mode and lay more emphasis on the treatment of wastes. Generally, wastes can be
classified into three types, i.e., waste gas, waste water and solid wastes. The
treatment of these three types of wastes is a pivotal measure in building
environmentally friendly regions.
Clarke et al. (1991) discussed water quality management issues in Oregon, USA
and proposed constructive measures to enhance the capability of waste water
treatment. The other two types of wastes, viz. waste gas and solid wastes, also play
important roles in the ecological environment. Guan et al. (2011) proposed a
coordinated Energy-Economy-Environment System to express the close
relationship between energy, economy and environment.
The evaluation of waste treatment can be applied to identify the development
level of ecological optimization (Wu et al. 2005). Murtaugh (1996) proposed a
statistical methodology with ecological indicators. The treatment processes of
waste gas, waste water and solid wastes can be modeled as parallel systems with
almost no interaction among them, and together the three processes cover all
aspects of waste treatment. In this paper, we apply a parallel DEA model to
calculate the efficiency of each individual treatment and the overall efficiency of
the whole region.
The rest of this paper is organized as follows. In Sect. 90.2, the parallel DEA
models are introduced. In Sect. 90.3, we identify the indicators for efficiency
calculation and illustrate the collection of the corresponding data. The calculation
results are presented in Sect. 90.4, and conclusions are drawn in Sect. 90.5.
The CCR DEA model was proposed by Charnes et al. (1978); it applies an optimal
linear programming formulation to calculate the efficiency of DMUs. Suppose there
are n DMUs, and the kth DMU (k = 1, 2, …, n) has m inputs, denoted x_ik (i = 1, 2,
…, m), and s outputs, denoted y_rk (r = 1, 2, …, s). The traditional CCR DEA
model can be expressed by the following formula (90.1).
$$E_k=\max\sum_{r=1}^{s}u_r\,y_{rk}$$
$$\text{s.t.}\quad\begin{cases}\displaystyle\sum_{i=1}^{m}v_i\,x_{ik}=1\\[6pt]\displaystyle\sum_{r=1}^{s}u_r\,y_{rj}-\sum_{i=1}^{m}v_i\,x_{ij}\le 0,&j=1,\ldots,n\\[6pt]u_r,\;v_i\ge\varepsilon,&r=1,\ldots,s,\;i=1,\ldots,m\end{cases}\qquad(90.1)$$
90 Regional Eco-Environment Optimization 857
(Fig. 90.1: a DMU composed of q parallel sub-systems: sub-system 1, sub-system 2, …, sub-system q.)
By solving the DEA model, the optimal weights v_i = (v_1j, v_2j, …, v_mj) and
u_r = (u_1j, u_2j, …, u_sj) are allocated for each DMU, guaranteeing the kth DMU
its maximum efficiency value. If the objective of model (90.1) equals 1, the DMU
is DEA-efficient; if it is less than 1, the DMU is DEA-inefficient.
DEA models have obvious advantages in measuring the performance of systems
with multiple inputs and outputs. However, the traditional DEA model takes the
system as a black box and ignores its internal structure.
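The multiplier form (90.1) is a small linear program that can be solved once per DMU with an off-the-shelf LP solver. The sketch below is our own illustration (using SciPy's `linprog`, which is not a tool mentioned in the chapter); it maximizes the weighted outputs of DMU k subject to the normalization and ratio constraints:

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, k, eps=1e-6):
    """CCR efficiency of DMU k via the multiplier LP (Eq. 90.1).
    X: (m, n) inputs, Y: (s, n) outputs; columns index the DMUs."""
    m, n = X.shape
    s = Y.shape[0]
    # decision vector: [u_1..u_s, v_1..v_m]; maximize u . Y[:, k]
    c = np.concatenate([-Y[:, k], np.zeros(m)])
    # ratio constraints: u . Y[:, j] - v . X[:, j] <= 0 for every DMU j
    A_ub = np.hstack([Y.T, -X.T])
    b_ub = np.zeros(n)
    # normalization: v . X[:, k] = 1
    A_eq = np.concatenate([np.zeros(s), X[:, k]]).reshape(1, -1)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(eps, None)] * (s + m), method="highs")
    return -res.fun
```

For a toy data set with one input and one output, X = [[2, 4]] and Y = [[2, 2]], the first DMU is efficient (score 1) and the second scores 0.5.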
In general, the inside of a DMU can be classified into different structures, and the
internal structure can affect the overall efficiency of the whole system; the
efficiency of each sub-system has a strong impact on the system's overall
efficiency. In this paper, we use a DEA model to deal with parallel sub-system
structures.
To overcome the shortcomings of traditional DEA models, Kao (2009) pro-
posed a parallel DEA model to measure the relationship between the sub-systems
and the DMU. Figure 90.1 shows the diagram of the parallel-structure DEA model
for a DMU.
For the kth DMU, there are q sub-systems, each with the same number and
types of inputs and outputs, denoted sub-system 1, sub-system 2, …, sub-system q.
We use X_ik^p and Y_rk^p to express the ith input and rth output, respectively, of
the pth sub-system. The relative inefficiency of a set of n DMUs, each with q
parallel sub-systems, can be calculated by the following formula:
$$\min\sum_{p=1}^{q}s_k^p$$
$$\text{s.t.}\quad\begin{cases}\displaystyle\sum_{i=1}^{m}v_i\,X_{ik}=1\\[6pt]\displaystyle\sum_{r=1}^{s}u_r\,Y_{rk}^p-\sum_{i=1}^{m}v_i\,X_{ik}^p+s_k^p=0\\[6pt]\displaystyle\sum_{r=1}^{s}u_r\,Y_{rj}^p-\sum_{i=1}^{m}v_i\,X_{ij}^p\le 0\\[6pt]u_r,\;v_i\ge\varepsilon,\quad p=1,2,\ldots,q,\;\;j=1,\ldots,n,\;j\ne k,\;\;r=1,\ldots,s,\;i=1,\ldots,m\end{cases}\qquad(90.2)$$
Model (90.2) above is solved n times to obtain the inefficiency slacks of the
systems as well as of their sub-systems. However, an inefficiency slack is not
itself an inefficiency score, because $\sum_{i=1}^m v_i X_{ik}^w$ is not equal to 1
for the wth sub-system of the kth DMU. Therefore, the inefficiency score is
$s_k^w$ divided by $\sum_{i=1}^m v_i X_{ik}^w$, and the final efficiency score is
$1-s_k^w/\sum_{i=1}^m v_i X_{ik}^w$.
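For concreteness, the slack-based model (90.2) and the score 1 − s_k^w / Σ_i v_i X_ik^w can be assembled into one LP per DMU. The following is our own illustrative sketch (names and the SciPy-based implementation are ours), assuming each sub-system's inputs and outputs are supplied as 3-D arrays:

```python
import numpy as np
from scipy.optimize import linprog

def parallel_dea(X, Y, k, eps=1e-6):
    """Overall and sub-system efficiencies of DMU k under the parallel
    model (90.2). X: (q, m, n) inputs and Y: (q, s, n) outputs of the q
    sub-systems; columns index the n DMUs."""
    q, m, n = X.shape
    s = Y.shape[1]
    nvar = s + m + q                            # variables: [u, v, slacks]
    cost = np.concatenate([np.zeros(s + m), np.ones(q)])  # min sum of slacks
    # equalities: weight normalization plus one slack-defining row per p
    A_eq = np.zeros((1 + q, nvar))
    b_eq = np.zeros(1 + q)
    A_eq[0, s:s + m] = X[:, :, k].sum(axis=0)   # sum_i v_i X_ik = 1
    b_eq[0] = 1.0
    for p in range(q):
        A_eq[1 + p, :s] = Y[p, :, k]            # + u . Y_k^p
        A_eq[1 + p, s:s + m] = -X[p, :, k]      # - v . X_k^p
        A_eq[1 + p, s + m + p] = 1.0            # + s_k^p = 0
    # inequalities: u . Y_j^p - v . X_j^p <= 0 for every other DMU j
    rows = [np.concatenate([Y[p, :, j], -X[p, :, j], np.zeros(q)])
            for p in range(q) for j in range(n) if j != k]
    res = linprog(cost, A_ub=np.array(rows), b_ub=np.zeros(len(rows)),
                  A_eq=A_eq, b_eq=b_eq,
                  bounds=[(eps, None)] * (s + m) + [(0, None)] * q,
                  method="highs")
    v = res.x[s:s + m]
    slack = res.x[s + m:]
    sub_scores = [1.0 - slack[p] / (v @ X[p, :, k]) for p in range(q)]
    return 1.0 - slack.sum(), sub_scores        # overall, per sub-system
```

With q = 1 the model collapses to an input-normalized CCR run, which gives a quick sanity check: on X = [[[2, 4]]], Y = [[[2, 2]]] the first DMU is efficient and the second scores 0.5.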
Investment in environmental protection by the Chinese government is increasing
year by year (Guo et al. 2007). Investment in environmental infrastructure
enhances the capability to treat environmental pollution both now and in the
future. It is clear that the Chinese government is putting more and more resources
and effort into optimizing the eco-environment. To optimize the eco-environment
effectively, we need to know the efficiency of pollution treatment in China, and as
we are evaluating the efficiency of eco-environmental treatment, we need to
analyze the structure of waste treatment.
In general, the wastes are divided into three types: waste gas, waste water and
solid wastes. To optimize the eco-environment, corresponding treatment measures
should be applied to each of the three types. In our model, we divide the
optimization of the eco-environment into three parallel processes, i.e., waste gas
treatment, waste water treatment and solid waste treatment. Representing each
waste treatment as a sub-system, multiple indexes can be listed to measure the
efficiency of each treatment process from the viewpoint of multiple inputs and
outputs. The indexes are shown in Table 90.1 (Bao et al. 2006).
For waste gas treatment, we use 1-input-4-output indexes to interpret the sub-
system's efficiency. For waste water treatment, we design 2-input-2-output
indexes. For solid waste treatment, we apply 1-input-3-output indexes to measure
the sub-system's efficiency.
Therefore, the three regions have the characteristics of large inputs and large
outputs.
Jilin, Heilongjiang, Fujian and Hainan are also efficient regions (efficiency
score = 1). They are moderately developed regions. Heilongjiang and Jilin are
located in the northeastern part of China; although these regions were industrial
bases in the 1980s, the center of industrial development has since shifted to the
coastal regions, and this transformation relieved the pressure on the eco-
environment there. Fujian and Hainan are coastal provinces that are not industrial
centers or bases, so pollution in Fujian and Hainan is relatively lower than in other
coastal regions.
The other 7 efficient regions are Jiangxi, Guizhou, Shaanxi, Gansu, Qinghai,
Ningxia and Xinjiang, which are located in the western part of China. These
regions' industrial development lags behind that of the eastern regions.
Figure 90.3 shows the efficient regions on the map of China. The green
provinces in the map are the efficient regions with high performance on eco-
environmental optimization.
In Table 90.2, there are 16 regions whose efficiency scores are less than 1.
To optimize the eco-environment and keep a sustainable development mode in
China, their treatment capability should be strengthened in the next few years. At
the same time, we notice that the average efficiency values are above 0.9, which
means the gaps between different regions in eco-environmental optimization are
small. Therefore, it is quite feasible to optimize the overall eco-environment in
China.
90.5 Conclusions
Over the past 30 years, China has enjoyed an economic boom at the expense of
environmental pollution. Enhancing the capability to deal with wastes is an
important measure for making the environment friendly. The Chinese government
now recognizes the importance of protecting the eco-environment and invests
heavily in improving it. To quantify the results of eco-environmental optimization,
a comprehensive evaluation method should be applied to measure the efficiency
accurately. In this work, we propose a parallel DEA model and apply it to analyze
eco-environmental efficiency for 30 individual provinces of China. Our results
demonstrate that with our model, the government can obtain accurate eco-
environmental optimization levels for those 30 regions and take corresponding
measures to enhance China's capability for eco-environmental optimization.
Acknowledgments The main work of this paper is supported and sponsored by the Humanities
and Social Science Research Youth Foundation of the Ministry of Education (11YJC630100), a
project of the Shandong Economic and Information Technology Committee (No. 2012EI107)
and the Fundamental Research Funds for the Central Universities (11CX04031B).
References
Bao C, Fang C, Chen F (2006) Mutual optimization of water utilization structure and industrial
structure in arid inland river basins of Northwest China. J Geog Sci 16(1):87–98
Clarke SE, White D, Schaedel AL (1991) Oregon, USA, ecological regions and subregions for
water quality management. Environ Manage 15(6):847–856
Charnes A, Cooper WW, Rhodes E (1978) Measuring the efficiency of decision making units. Eur
J Oper Res 2:429–444
Guan H, Zou S, Zhou X, Huang Z (2011) Research on coordinated evaluation of regional energy-
economy-environment system. Commun Comput Inf Sci 225(2):593–599
Guo R, Miao C, Li X, Chen D (2007) Eco-spatial structure of urban agglomeration. Chin Geogr
Sci 17(1):28–33
Kao C (2009) Efficiency measurement for parallel production systems. Eur J Oper Res
196:1107–1112
Liu Y, Li R, Li C (2005) Scenarios simulation of coupling system between urbanization and eco-
environment in Jiangsu province based on system dynamics model. Chin Geogr Sci
15(3):219–226
Murtaugh PA (1996) The statistical evaluation of ecological indicators. Ecol Appl 6(1):132–139
Wu K, Hu S, Sun S (2005) Application of fuzzy optimization model in ecological security pre-
warning. Chin Geogr Sci 15(1):29–33
Xu Y, Tang Q (2005) Land use optimization at the small watershed scale on the Loess Plateau.
J Geog Sci 19(5):577–586
Chapter 91
Research on Brand Strategy to Small
and Medium-Sized Enterprises
Xin-zhu Li
Abstract Based on an analysis of brand attributes and from the perspective of
value chain theory, this paper argues that brand strategy is a significant strategy for
small and medium-sized enterprises (SMEs) to realize higher additional value of
products and gain competitive advantage in the market. Forming a scientific brand
development strategy plan, clearly defining core brand value, cultivating self-
owned brands, occupying competitive advantage through correct brand posi-
tioning, selecting the correct brand appeal, and adopting an innovative brand
operational model are the important approaches and means for SMEs to realize a
brand strategy, escape operational predicaments, and promote the additional value
of products.
Keywords Brand strategy · Small and medium-sized enterprise (SME) · Smiling
curve · Value chain
91.1 Introduction
Brand, as the symbol and identification of an enterprise and its products and
services, delivers specific information to consumers. As an important link in the
value chain, brand plays a decisive role in promoting overall value. A favorable
brand innovation strategy is a powerful force for increasing the additional value of
products and services, enhancing competitive advantage and cultivating core
competence.
X. Li (&)
Economics and Management School, Wuhan University, Hubei, China
e-mail: [email protected]
(Figure: Porter's value chain, with supportive activities (human resource management, research and development, purchase), basic activities and margin, alongside the smiling curve of additional value, which is high at the research, development and design (intellectual property) and marketing (brand/service) ends and low at manufacture.)
The attributes of brand can be divided into material attributes and social attributes.
The material attribute embodies the use value of a commodity; it belongs to the
commodity's essential attributes, exists before purchase, and reflects the relationship
between human beings and the commodity. For example, with respect to their
material attribute, Jetta and Benz both represent vehicles.
The social attribute embodies the symbolic value of a commodity; it is a
socially derived attribute that is not shown until the commodity is purchased and
used, reflecting the relationship between human beings and the commodity. For
example, a Jetta is regarded only as a convenient and fast vehicle, while a Benz is
a symbol of nobility, success and social status.
quality difference (such as river sand used for building), and commodities inde-
pendently priced for monopoly (such as water, power, and coal gas) (Aker 1990).
Brand not only refers to a name or signal; it also represents the many-sided commitment an enterprise makes and a major communication channel between the enterprise and consumers. A brand with a good image reflects consumers' trust in the enterprise.
Enterprises without brands lose an opportunity to earn consumers' trust as well as to demonstrate their strength. Owing to this lack of trust, many SMEs may become scapegoats for dominant large enterprises in the value chain and be confronted with enormous market risks. Without the support of a brand, it is difficult for SMEs to directly display their competitive advantage, and they lose many communication opportunities (Bhat and Reddy 2001). As a result, SMEs can only attach themselves to the lower end of the value chain, with small margins and hardly any independence in the market.
For SMEs that are satisfied with a small margin in the manufacturing value chain, or that even lack a trademark, the original extensive operation model cannot guarantee long-term development as their competitive edge based on low-cost labor and resources shrinks. Therefore, to increase additional value, transform the operation model and pursue long-term development, brand strategy is a practical choice for enterprises to expand the market, get rid of price competition and enhance competitiveness.
In the process of SMEs' transition from the low end to the high end of the value chain, brand building shall be recognized as a systematic strategic project featuring integrity, constancy and total involvement, and shall be regarded as the core component of SME development strategy. All operational activities shall be designed, launched, maintained, managed, guided and coordinated around the brand, to enhance brand equity through long-term and dedicated work.
91 Research on Brand Strategy to Small and Medium-Sized Enterprises 869
Brand embodies the relationship between the enterprise and consumers. By offering unique values demanded by consumers, a brand is used to establish a firm relationship with consumers, rather than to endow a good name or earn popularity in a short time at a high promotion cost. Core brand value shall be defined based on the demands of target consumers, in addition to a correct perception of the brand (Smith 2001).
Many enterprises, lacking a correct cognition of the brand, equate brand building with advertising, and believe that a well-known or even strong brand can be built in a short period through advertisement. Enterprises such as Qin Chi, Sanzhu and Aidor once pursued popularity through blanket advertising; however, these brands, piled up by high advertising costs, have long since vanished from the market. The lessons they leave for later generations are profound.
Industrial equipment products face specific users, for whom they are raw materials, accessories or means of production, so the material attributes are emphasized in brand promotion, with the focus laid on safety, quality, practicality and other use values, and the brand appeal is usually to create value for users.
870 X. Li
For ordinary consumer goods, the social attributes embody the symbolic value, which is more appealing to consumers and reflects the relationships among human beings. For example, brand promotion for food and beverages emphasizes cheerfulness, exercise and vitality; that for high-end automobiles emphasizes dignity and elegance; and that for telecommunications and home appliances emphasizes harmony, family love and convenience (Christensen 2010).
Due to the restrictions of various objective conditions, many enterprises face practical difficulties in maintaining a long-term leading edge in technology and quality, so homogenization of product function is basically inevitable. In order to differentiate their products from competitors', provide differentiated product value and obtain consumers' sustained favor, emotional communication with consumers appears very important (David 1991).
However, many enterprises restrict themselves to attracting consumers through the functional benefits of the brand while ignoring the expression of emotional benefits. A purely functional benefit appeal is likely to leave the brand in the dilemma of homogenized competition. In order to avoid the price competition brought about thereby, consumers' satisfaction with and loyalty to the brand can be promoted through emotional communication.
Professor Don Schultz deems that the investment philosophy of brand building shall shift from a media-oriented model to one focusing on brand connections or brand contact points (Schultz and Schultz 2005). A problem many enterprises encounter during brand building is the confusion of brand images and the lack of a unified brand image in the minds of consumers, which seriously affects consumers' cognition of the brand.
In fact, brand building is a systematic project. Enterprises shall start from research on consumer behavior to find the contact points of the brand, and deliver a consistent brand message and create a unified brand image through effective management of brand contact points. By studying the media contact habits of target customers, enterprises can choose specific approaches for brand communication and improve communication efficiency through precise work. Especially for SMEs lacking funds, effective management of brand contact points to deliver a unified brand image is an important means to reduce brand promotion costs and improve efficiency.
adjust measures to local conditions and actively take innovative operation and sales modes of the brand in the process of brand building to earn their own competitive advantages (Porter 2002).
Many formerly small enterprises stand out and grow by adopting distinctive brand operational models. Some examples can be given here: online shopping, TV shopping and other non-traditional store-free direct selling are innovations in the marketing channel model and are gradually eroding the traditional retail market; Canon replaced Xerox as the leader of the copier market, an innovation of redefining the customer market; and the "straight-through processing" of Dell is an innovation of computer customization (Kreinsen 2008). The rapid development of the social economy and the complexity of consumer demand require innovation in the brand operational model. By adapting to this changing trend, it is possible to create a miracle within an industry through brand management and operational mode innovation.
Profit is the ultimate goal of enterprises in value chain theory. In an increasingly competitive market with growing product homogenization, brand is an important tool to provide differentiated value to consumers and plays an increasingly significant role in market competition. Enterprises may optimize their value chain to achieve long-term development of their brands (Pavitt 1984). The optimization effects of the value chain on multi-brand strategy are reflected in the following aspects:
First, value chain analysis can be applied to enhance brand value. Through analysis of the value chain to identify the elements that can enhance product functions and features and the factors that may affect brand image, production costs can be reduced and optimal resource allocation can be realized (Xu 2009).
Second, systematic management of the brand based on the detailed elements and links of the value chain can improve the value of the enterprise image. Brand value is reflected precisely because of the asymmetry of consumers' understanding of product information. Meanwhile, enterprises shall concentrate on exploring the brand culture, creating product differentiation and forming their own characteristics to meet customers' emotional demands and create a personalized brand image.
Third, the differentiation of products can be employed from the perspective of the value chain to define enterprise strategy. Each link in the value chain is independent and also interacts with the others. Through the analysis of various value chains, the enterprise can recognize whether these chains are separated or coordinated with each other, so as to achieve a synergy effect and realize product and enterprise brand optimization (Chen and Zheng 2009).
91.7 Conclusions
To move up from the low end of the value chain, extend from the bottom of the "smiling curve" to both ends, improve the additional value of products and enhance competitiveness, brand construction is an important link for small and medium-sized enterprises. Brand strategy is a requirement for adaptation to economic restructuring and also a strategic issue for sustainable development.
Brand is the image of the enterprise as well as of its products, just like the image of a person. To promote overall value-creation capability, SMEs shall apply the tool of brand strategy, through formulation of an appropriate brand strategy, establishment of a scientific brand development plan, definition of a clear brand position, choice of the correct brand appeal, and adoption of an innovative brand operation model in accordance with their own operational situations.
References
Cai-feng Li
Keywords Linguistic judgment matrix · Method to select parameter · Parameter · Preference
92.1 Introduction
C. Li (&)
Hechi University, Yizhou, Guangxi,
People's Republic of China
e-mail: [email protected]
positive reciprocal matrix. The second class of methods involves operators such as the induced ordered weighted averaging (IOWA) operator put forward by Yager (2003), the linguistic ordered weighted averaging (LOWA) operator of Herrera et al. (1996), the linguistic weighted arithmetic averaging (LWAA) and extended ordered weighted averaging (EOWA) operators of Xu (1999), and other operators in the literature (Herrera et al. 1995, 1996, 2000; Herrera and Herrera-Viedma 2000; Umano et al. 1998; Wang and Fan 2002, 2003).
Among ranking methods based on the consistency of the linguistic judgment matrix, few works add parameters to the ranking method; although there is a parameter in the shift formula of Chen and Fan (2004), it plays no substantive role. This paper puts forward a ranking method for the linguistic judgment matrix that involves a parameter, called the parameter ranking method based on the linguistic judgment matrix. The method mines the decision maker's information, so that the decision maker obtains better priority weights from the linguistic judgment matrix.
Chen and Fan (2004) and Fan and Jiang (2004) describe the linguistic judgment matrix and its consistency. Assume there is a linguistic phrase set S = {S_a | a = −t, …, −1, 0, 1, …, t}, and the decision-making problem is limited to a finite set A = {a_1, a_2, …, a_n}, where a_i denotes project i. The decision maker uses a matrix P = (p_ij)_{n×n} to describe the information of the project set A, where p_ij evaluates project a_i against project a_j. When p_ij ∈ {S_1, S_2, …, S_t}, project a_i is better than project a_j, and the larger p_ij is, the more project a_i is superior to project a_j; in contrast, when p_ij ∈ {S_−t, …, S_−1}, project a_j is better than project a_i, and the smaller p_ij is, the more project a_i is inferior to project a_j; while when p_ij = S_0, project a_i is as good as project a_j. The matrix P is called a linguistic judgment matrix.
Definition 1 (Herrera et al. 1995) Let S = {S_a | a = −t, …, −1, 0, 1, …, t} denote the natural language set, where S_i is the i-th natural language term. The subscript i and the corresponding natural language term can be obtained from the following functions I and I^(−1):

I: S → N, I(S_i) = i, S_i ∈ S
I^(−1): N → S, I^(−1)(i) = S_i

Definition 2 (Chen and Fan 2004) With respect to P = (p_ij)_{n×n}, ∀ i, j, k ∈ J, if its elements satisfy the following equation:

I(p_ij) + I(p_jk) + I(p_ki) = 0   (92.1)

then the linguistic judgment matrix is consistent.
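The consistency condition (92.1) is easy to check mechanically. The sketch below is our own illustration (the function name and the example matrix are not from the paper); it tests the condition on the matrix of subscripts I(p_ij):

```python
def is_consistent(I):
    """Check Eq. (92.1): I(p_ij) + I(p_jk) + I(p_ki) = 0 for all i, j, k."""
    n = len(I)
    return all(I[i][j] + I[j][k] + I[k][i] == 0
               for i in range(n) for j in range(n) for k in range(n))

# Subscript matrix of a hypothetical 3-project linguistic judgment matrix:
# p_12 = S_1, p_13 = S_2, p_23 = S_1, with p_ji = S_{-a} whenever p_ij = S_a.
I_P = [[0, 1, 2],
       [-1, 0, 1],
       [-2, -1, 0]]
print(is_consistent(I_P))   # -> True
```

Any matrix whose subscripts can be written as differences of per-project scores passes this check, which is exactly what Theorem 1 below formalizes.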
92 Research on Information Mining About Priority Weight 875
The logical relation between the linguistic judgment matrix and the priority weight is put forward in this chapter, called the parameter ranking method based on the linguistic judgment matrix.

Theorem 1 A sufficient and necessary condition for the linguistic judgment matrix P = (p_ij)_{n×n} to be consistent is that there exist a positive normalized vector ω = (ω_1, ω_2, …, ω_n)^T and a parameter θ such that

I(p_ij) = log_θ(ω_i/ω_j), i, j ∈ J, where θ > 1.   (92.2)
Sufficiency. If p_ij of the linguistic judgment matrix satisfies I(p_ij) = log_θ(ω_i/ω_j), i, j ∈ J, where θ > 1, ω_i > 0, ω_j > 0 and Σ_{i=1}^{n} ω_i = 1, it is easy to draw the following conclusion: I(p_ij) + I(p_jk) + I(p_ki) = log_θ(ω_i/ω_j) + log_θ(ω_j/ω_k) + log_θ(ω_k/ω_i) = log_θ((ω_i/ω_j)(ω_j/ω_k)(ω_k/ω_i)) = log_θ 1 = 0. So the linguistic judgment matrix P = (p_ij)_{n×n} is consistent. From formula (92.2) and θ > 1, it is not difficult to draw the following conclusions: p_ij ∈ {S_1, S_2, …, S_t} ⇔ I(p_ij) > 0 ⇔ ω_i/ω_j > 1 ⇔ ω_i > ω_j; the larger p_ij is, the larger ω_i/ω_j is, in other words, the more project a_i is superior to project a_j; p_ij ∈ {S_−t, …, S_−1} ⇔ I(p_ij) < 0 ⇔ ω_i/ω_j < 1 ⇔ ω_i < ω_j; the smaller p_ij is, the smaller ω_i/ω_j is, in other words, the more project a_i is inferior to project a_j; p_ij = S_0 ⇔ I(p_ij) = 0 ⇔ ω_i/ω_j = 1 ⇔ ω_i = ω_j, which demonstrates that project a_i is as good as project a_j.
876 C. Li
The following Example 1 demonstrates that different parameters may induce different ranking results.

Example 1 There are two selectable projects with two attributes u_1, u_2. After a decision maker grades every attribute from 0 to 100, the decision-making matrix B can be obtained, whose normalized matrix is R. The decision maker constructs the linguistic judgment matrix H through pairwise comparison in accordance with the linguistic scale S = {S_a | a = −5, …, −1, 0, 1, …, 5}:

H = | S_0   S_4 |
    | S_−4  S_0 |

It is obvious that the judgment matrix H is consistent.
Next, we consider the following two situations. First, assuming θ = 1.4953, ω = (0.8333, 0.1667) can be obtained from formula (92.3); utilizing the simple weighting method, the evaluation of the two projects is Z = (0.5056, 0.4944), so a_1 ≻ a_2 (Tables 92.1, 92.2).
Secondly, assuming θ = 1.6818, ω = (0.8889, 0.1111) can be obtained from formula (92.3); utilizing the simple weighting method, the evaluation of the two projects is Z = (0.4926, 0.5074), so a_2 ≻ a_1. Example 1 demonstrates that different parameters may induce different ranking results under multiple criteria, so it is necessary to select a reasonable parameter and to introduce a parameter into ranking methods for decisions based on the judgment matrix.
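Example 1 can be checked numerically. The following sketch is our own illustration (function and variable names are ours); it evaluates formula (92.3) for the matrix H under both values of θ:

```python
def priority_weights(I, theta):
    """Formula (92.3): omega_i = theta**((1/n) * sum_k I(p_ik)), normalized over i."""
    n = len(I)
    powers = [theta ** (sum(row) / n) for row in I]
    total = sum(powers)
    return [p / total for p in powers]

# Subscript matrix of H in Example 1: p_12 = S_4, p_21 = S_-4.
I_H = [[0, 4],
       [-4, 0]]

for theta in (1.4953, 1.6818):
    print(theta, [round(w, 4) for w in priority_weights(I_H, theta)])
# theta = 1.4953 gives omega = (0.8333, 0.1667); theta = 1.6818 gives (0.8889, 0.1111)
```

This reproduces the two weight vectors of Example 1 and makes the sensitivity of the ranking to θ directly visible.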
When there are fewer than 5 selectable projects, the first comprehensive weight method can be applied, whose stages are as follows. First, the decision maker selects two projects from the projects A_1, A_2, …, A_n, such as A_1 and A_2. Secondly, the decision maker gives these two projects real-valued weights ω′_1, ω′_2 with ω′_1 + ω′_2 = 1. Thirdly, insert ω′_1, ω′_2 into formula (92.2) to obtain the following formula:

I(p_12) = log_θ(ω′_1/ω′_2)   (92.4)

Fourthly, from formula (92.4) it is not difficult to solve for the parameter θ, which embodies the preference of the decision maker. Finally, insert the value of θ into formula (92.3) to solve for the priority weight, which embodies the preference of the decision maker to a greater extent.
Example 2 Suppose the decision maker gives the following linguistic judgment matrix A, and A′ is the real-valued matrix induced from A:

A = | S_0  S_−1  S_0  S_0 |     A′ = | 1  θ^(−1)  1  1 |
    | S_1  S_0   S_1  S_1 |          | θ  1       θ  θ |
    | S_0  S_−1  S_0  S_0 |          | 1  θ^(−1)  1  1 |
    | S_0  S_−1  S_0  S_0 |          | 1  θ^(−1)  1  1 |

The decision maker gives the projects A_1, A_2 the weights (ω′_1, ω′_2) = (0.4, 0.6). It is obvious that A is consistent, and its principal submatrices are also consistent.
From formula (92.4), I(p_12) = log_θ(ω′_1/ω′_2), i.e., −1 = log_θ(0.4/0.6), so the parameter θ = 1.5 is obtained. Then insert θ = 1.5 into formula (92.3), ω_i = θ^((1/n)Σ_{k=1}^{n} I(p_ik)) / Σ_{j=1}^{n} θ^((1/n)Σ_{k=1}^{n} I(p_jk)), so

ω_1 = θ^(−1/4) / (θ^(−1/4) + θ^(3/4) + θ^(−1/4) + θ^(−1/4)) = 1.5^(−1/4) / (1.5^(−1/4) + 1.5^(3/4) + 1.5^(−1/4) + 1.5^(−1/4))
ω_2 = θ^(3/4) / (θ^(−1/4) + θ^(3/4) + θ^(−1/4) + θ^(−1/4)) = 1.5^(3/4) / (1.5^(−1/4) + 1.5^(3/4) + 1.5^(−1/4) + 1.5^(−1/4))
ω_3 = θ^(−1/4) / (θ^(−1/4) + θ^(3/4) + θ^(−1/4) + θ^(−1/4)) = 1.5^(−1/4) / (1.5^(−1/4) + 1.5^(3/4) + 1.5^(−1/4) + 1.5^(−1/4))
ω_4 = θ^(−1/4) / (θ^(−1/4) + θ^(3/4) + θ^(−1/4) + θ^(−1/4)) = 1.5^(−1/4) / (1.5^(−1/4) + 1.5^(3/4) + 1.5^(−1/4) + 1.5^(−1/4))

The priority vector of the projects is

ω = (0.2222, 0.3333, 0.2222, 0.2222).
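The whole of Example 2 can be reproduced in a few lines. The sketch below is our own code (formula (92.4) is solved in closed form); it recovers θ = 1.5 and the priority vector:

```python
def solve_theta(i_p12, w1, w2):
    """Formula (92.4): I(p_12) = log_theta(w1'/w2')  =>  theta = (w1'/w2') ** (1/I(p_12))."""
    return (w1 / w2) ** (1.0 / i_p12)

def priority_weights(I, theta):
    """Formula (92.3) for an n x n matrix of subscripts I(p_ij)."""
    n = len(I)
    powers = [theta ** (sum(row) / n) for row in I]
    total = sum(powers)
    return [p / total for p in powers]

theta = solve_theta(-1, 0.4, 0.6)   # p_12 = S_-1, subjective weights (0.4, 0.6)
I_A = [[0, -1, 0, 0],
       [1, 0, 1, 1],
       [0, -1, 0, 0],
       [0, -1, 0, 0]]
print(round(theta, 4), [round(w, 4) for w in priority_weights(I_A, theta)])
# -> 1.5 [0.2222, 0.3333, 0.2222, 0.2222]
```

Note that ω_2 = 1.5·ω_1 here, so the exact weights are (2/9, 1/3, 2/9, 2/9), matching the rounded vector in the example.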
If there are more selectable projects, numbering between 5 and 9, then considering the complexity and diversity of decision making and human thinking, deviations are possible. The paper therefore puts forward the second comprehensive weight method, based on the consideration that the deviation between the weight obtained from formula (92.3) and the subjective weight of the decision maker should be as small as possible. The stages of this method are as follows. First, every decision maker gives subjective weights to an arbitrary three projects in order to obtain more preference information. Secondly, solve the following optimization model:

min f(θ) = Σ_{i=1}^{3} [ θ^((1/3)Σ_{k=1}^{3} I(p_ik)) − ω′_i · Σ_{j=1}^{3} θ^((1/3)Σ_{k=1}^{3} I(p_jk)) ]²
s.t. θ ≥ 1

where ω′_1, ω′_2, ω′_3 are the subjective weights and Σ_{i=1}^{3} ω′_i = 1.
Thirdly, the parameter θ can be obtained from the above model and inserted into formula (92.3), so that the priority weight of each project can be found.
Let d_i = (1/3)Σ_{k=1}^{3} I(p_ik), i = 1, 2, 3; then the above model can be simplified:

min f(θ) = Σ_{i=1}^{3} [ θ^(d_i) − ω′_i · Σ_{j=1}^{3} θ^(d_j) ]²
s.t. θ ≥ 1
In Example 3, solving this model gives θ = 1.833; inserting θ = 1.833 into formula (92.3) yields the priority vector

ω = (0.124, 0.3047, 0.124, 0.124, 0.3882).
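The constrained model of the second method is a smooth one-dimensional minimization and can be solved with any scalar optimizer. The sketch below is entirely our illustration (the data d and ω′ are hypothetical, not the paper's Example 3); it uses a simple golden-section search:

```python
def objective(theta, d, w_subj):
    """f(theta) = sum_i (theta**d_i - w'_i * sum_j theta**d_j) ** 2  (the simplified model)."""
    powers = [theta ** di for di in d]
    total = sum(powers)
    return sum((p - w * total) ** 2 for p, w in zip(powers, w_subj))

def fit_theta(d, w_subj, lo=1.0, hi=10.0, iters=200):
    """Golden-section search for the minimizer of f on [lo, hi]."""
    g = (5 ** 0.5 - 1) / 2
    a, b = lo, hi
    for _ in range(iters):
        c, e = b - g * (b - a), a + g * (b - a)
        if objective(c, d, w_subj) < objective(e, d, w_subj):
            b = e
        else:
            a = c
    return (a + b) / 2

# Hypothetical data: d_i = (1/3) * sum_k I(p_ik) and subjective weights w'.
d = [-1.0 / 3, 2.0 / 3, -1.0 / 3]
w_subj = [0.25, 0.5, 0.25]
print(round(fit_theta(d, w_subj), 3))   # -> 2.0 (here theta = 2 fits w' exactly, so f = 0)
```

With real data the subjective weights will generally not be exactly reproducible, and the optimizer returns the θ with the smallest squared deviation.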
If there are more selectable projects, numbering between 5 and 9, then considering possible deviations in the decision maker's understanding of the scale, the paper puts forward the third comprehensive weight method based on the linguistic judgment matrix, whose stages are as follows. First, every decision maker gives subjective weights to an arbitrary three projects. Secondly, insert the weights into formula (92.3) to obtain three equations, and solve for the unknown parameters θ_1, θ_2, θ_3. Thirdly, compute the average θ of θ_1, θ_2, θ_3: θ = (1/3)(θ_1 + θ_2 + θ_3). Finally, after inserting this θ into formula (92.3), the priority weight of each project can be found. If the decision maker gives the subjective weights ω′_1, ω′_2, ω′_3 to the projects A_1, A_2, A_3, the following three equations can be obtained from formula (92.3):
ω′_1 = θ^(d_1) / (θ^(d_1) + θ^(d_2) + θ^(d_3))   (92.5)

ω′_2 = θ^(d_2) / (θ^(d_1) + θ^(d_2) + θ^(d_3))   (92.6)

ω′_3 = θ^(d_3) / (θ^(d_1) + θ^(d_2) + θ^(d_3))   (92.7)

where d_i = (1/3)Σ_{k=1}^{3} I(p_ik).
The third method can likewise be applied to solve Example 3.
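For the third method, each of equations (92.5)–(92.7) is monotone in θ whenever the exponent d_i lies strictly above or below the other exponents, so each θ_i can be found by bisection before averaging. A sketch with hypothetical data (our own function names and numbers, not the paper's example):

```python
def weight_i(theta, d, i):
    """w_i(theta) = theta**d_i / sum_j theta**d_j, i.e., the left side of (92.5)-(92.7)."""
    powers = [theta ** dj for dj in d]
    return powers[i] / sum(powers)

def solve_theta_i(d, i, w_target, lo=1.0, hi=100.0, iters=100):
    """Bisection for theta with w_i(theta) = w'_i, assuming w_i is monotone on [lo, hi]."""
    increasing = weight_i(hi, d, i) > weight_i(lo, d, i)
    for _ in range(iters):
        mid = (lo + hi) / 2
        if (weight_i(mid, d, i) < w_target) == increasing:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

d = [-1.0 / 3, 2.0 / 3, -1.0 / 3]        # hypothetical d_i = (1/3) * sum_k I(p_ik)
w_subj = [0.25, 0.5, 0.25]               # hypothetical subjective weights
thetas = [solve_theta_i(d, i, w_subj[i]) for i in range(3)]
print([round(t, 3) for t in thetas], round(sum(thetas) / 3, 3))   # -> [2.0, 2.0, 2.0] 2.0
```

In this consistent toy case all three equations give the same θ; with noisy subjective weights the three solutions differ and the average θ is used in formula (92.3).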
92.6 Conclusion
The paper discusses the problem of the parameter in the priority ranking of the linguistic judgment matrix, demonstrates the necessity of adding a parameter to ranking methods based on linguistic judgment matrices, shows that the parameter value can be obtained by mining the decision maker's information, and gives several methods for selecting the parameter that make full use of the preference information reflected by the subjective weights.
Acknowledgments The project is supported by the Scientific Research Foundation of the Higher Education Institutions of Guangxi Zhuang Autonomous Region (Grant No. 201204LX394).
References
Chen Y, Fan Z (2004) Study on consistency and the related problems for judgment. Syst Eng
Theory Pract 24:136–141 (in Chinese)
Chen S, Hwang CL (1992) Fuzzy multiple attribute decision-making. Springer-Verlag, Berlin
Fan Z, Jiang Y (2004) A judgment method for the satisfying consistency of linguistic judgment
matrix. Control Decis 19(8):903–906 (in Chinese)
Herrera F, Herrera-Viedma E (2000) Linguistic decision analysis: steps for solving decision
problems under linguistic information. Fuzzy Sets Syst 115(10):67–82
Herrera F, Herrera-Viedma E, Verdegay JL (1995) A sequential selection process in group
decision-making with linguistic assessments. Inf Sci 85(4):223–229
Herrera F, Herrera-Viedma E, Verdegay JL (1996) Direct approach processes in group decision making using linguistic OWA operators. Fuzzy Sets Syst 79:175–190
Herrera F, Herrera-Viedma E, Martinez L (2000) A fusion approach for managing
multi-granularity linguistic term sets in decision making. Fuzzy Sets Syst 114(9):43–58
Umano M, Hatono I, Tamura H (1998) Linguistic labels for expressing fuzzy preference relations
in fuzzy group decision making. IEEE Trans Syst Man Cybern Part B Cybern 28(2):205–218
Wang XR, Fan ZP (2002) A topsis method with linguistic information for group decision making.
Chin J Manag Sci 10(6):84–87 (in Chinese)
Wang XR, Fan ZP (2003) An approach to multiple attribute group decision making with
linguistic assessment information. J Syst Eng 18(2):173–176 (in Chinese)
Xu Z (1999) Uncertain attribute decision making: methods and application. Tsinghua University
press, Beijing (in Chinese)
Yager RR (2003) Induced aggregation operators. Fuzzy Sets Syst 137:59–69
Chapter 93
Research on the Project Hierarchy
Scheduling for Domestic Automobile
Industry
Peng Jia, Qi Gao, Zai-ming Jia, Hui Hou and Yun Wang
Abstract To improve the accuracy and performability of vehicle R&D project scheduling in domestic automobile enterprises, a "4 + 1" hierarchy process system of domestic automobile enterprises is analyzed and summarized. A corresponding four-level scheduling management mode for the vehicle R&D project is presented based on the hierarchy process system, and a planning approach of a three-month rolling schedule for the fourth level is proposed. Schedule minor adjustment and schedule modification are given to handle different extents of change to the rolling schedule.
93.1 Introduction
Q-Gate between two stages. These R&D stages and Q-Gates constitute the flow of the company Q-Gates level.
(2) Cross majors/fields level. On the basis of the division into stages and Q-Gates, all the flow nodes of the company Q-Gates level are subdivided according to the majors or fields involved in automobile products, and the second level subflow across majors or fields is defined.
(3) Cross departments level. The flow nodes of the cross majors/fields level are subdivided to the departments involved in each major or field, and the third level subflow across departments is defined.
(4) Department level. The R&D flow is defined within the departments according to their business scope in the vehicle R&D process. This flow is also the subdivision of the cross departments flow nodes, so it is the fourth level flow in the vehicle R&D process system.
(5) Foundational fixed level. In order to improve R&D efficiency, enterprises establish many fixed flows for certain R&D activities as the basic support of the vehicle R&D process system. These flows can realize automatic transfer among the steps of flow activities, avoid repeated hand labor and reduce manual workload.
Figure 93.1 shows the "4 + 1" process system of Q Automobile Co., Ltd., which is established based on their new product R&D manual (http://doc.mbalib.com/view/d9fd9a8d5538f64af4cfb3; http://www.docin.com/p-220283401.html; http://wenku.baidu.com/view/dd7f9633a32d7375a4178066.html). In the first level, the vehicle R&D process is divided into eleven stages from P0 to P10, and eleven Q-Gates are set up, such as the new project research instruction, project R&D instruction, engineering start instruction, digital prototype and so on. In the second level, the stages are subdivided according to the majors or fields involved. Taking the P2 sculpt design stage as an example, it is divided into many tasks belonging to mechanical design, manufacturing process, marketing and other majors. In the third level, the flow across majors or fields is subdivided to departments. Taking the first-round structural design as an example, the flow is divided among the platform technology department, battery system department, CAX design simulation department and others. In the fourth level, the R&D flow in every department is defined. Taking the first-round assembly design as an example, the flow defines four steps: the definition of system function and performance, the design of system parts and components, the definition of parts and components function and performance, and the summary of system data. Document approval flow is a foundational fixed flow, and its typical procedure is compile, proofread, audit, approve.
886 P. Jia et al.
[Fig. 93.1: the "4 + 1" process system of Q Automobile Co., Ltd., showing the eleven stages P0–P10 (from project pre-research, project approval, sculpt design and detailed engineering design through digital prototype verification, product design verification and validation, small batch trial, product design freeze, zero production, mass production and SOP, to project summary, continuous improvement and acceptance) with Q-Gates between stages, and the subdivision of the flow across departments such as the manufacturing process, platform technology, battery system, electric drive technology, system control technology and CAX design simulation departments and outsourcing design companies.]
Fig. 93.2 Corresponding relationship between the hierarchy R&D process and the hierarchy schedule of automobile:
(1) Company Q-Gates level → the first level flow → big schedule (the first level schedule), planned by the project manager;
(2) Cross majors/fields level → the second level flow → cross majors/fields schedule (the second level schedule), planned by the project manager;
(3) Cross departments level → the third level flow → cross departments schedule (the third level schedule), planned by the head of majors/fields;
(4) Department level → the fourth level flow → department operation schedule (the fourth level schedule), planned by the head of department.
The vehicle R&D project schedule is divided into four corresponding levels: the first level schedule (big schedule), the second level schedule (cross majors/fields schedule), the third level schedule (cross departments schedule) and the fourth level schedule (department operation schedule). The four-level R&D flow and the four-level schedule correspond layer by layer, as shown in Fig. 93.2.
(1) The first level schedule, also known as the big schedule, is planned by the project manager based on the vehicle R&D Q-Gates flow. The vehicle R&D stages correspondingly constitute the stage summary tasks in the first level schedule, and the Q-Gates correspondingly constitute the milestones.
(2) The second level schedule is the cross majors/fields schedule. It is planned by the project manager based on the vehicle R&D cross majors/fields flow. The flow nodes correspondingly constitute the tasks in the second level schedule, and are put under the stage tasks corresponding to their parent nodes in the first level flow, so this level schedule is also the decomposition of the first level stage schedule. In addition, the second level schedule tasks are assigned to the appropriate majors or fields.
(3) The third level schedule is cross departments schedule in the project. These
tasks of this level schedule are the decomposition of the majors or fields tasks
by the head of majors or fields based on the cross departments flow, and the
tasks will be assigned to departments. The third level flow nodes corre-
spondingly constitute the tasks in the third level schedule.
(4) The fourth level schedule is operation schedule within the departments. The
head of departments decomposes the work of the third level schedule tasks
based on the department flow and the functions and responsibilities of the
department. The fourth level flow nodes correspondingly constitute the fourth
level schedule tasks, and the tasks will be assigned to the project members.
There is no direct relationship between the foundational fixed flow and the
decomposition of the project schedule, but the flow can support the implementa-
tion of the schedule tasks.
Figure 93.3 shows the four-level R&D project schedule of Q automobile Co.,
Ltd. which is planned based on the company’s four-level vehicle R&D flow.
The vehicle R&D project has a long cycle, involves a wide range of majors and fields, needs much coordinated interaction among departments, and contains many uncertainty factors. Therefore, in the project approval stage, the schedule cannot be decomposed exhaustively; only the first, second and third level schedules can be initially decomposed based on the standard R&D stages, the involved departments and the overall R&D requirements. The detailed fourth level operation schedule within a department is difficult to plan accurately.
The method of rolling scheduling (http://baike.baidu.com/view/1359753.htm) can effectively solve the above problem, as it regularly revises the future schedule. The schedule is planned on the principle of "detailed recent, coarse forward": a detailed, specific recent schedule and a coarse forward schedule are planned at first, and then the necessary adjustments and revisions are made regularly according to the implementation situation and the technical problems encountered. The method combines recent scheduling and forward scheduling. On the one hand, it can plan the next R&D tasks in advance. On the other hand, it can solve the contradiction between the relative stability of the schedule and the uncertainty of the
93 Research on the Project Hierarchy Scheduling 889
actual situation better, and effectively improve the accuracy and performability of the schedule.
The R&D cycle of domestic automobile products is usually 3–5 years, so a three-month rolling period is reasonable for domestic automobile enterprises and easy to manage and achieve. Therefore, a three-month rolling schedule needs to be planned during the vehicle R&D project: the fourth level schedule for the next 3 months is planned every month.
In the process of planning the three-month rolling schedule, the schedule will be adjusted and revised according to the actual situation, which may change the schedule to different extents. Two ways can be used to deal with the different extents of change.
(1) Schedule minor adjustment. The project managers can adjust the project schedule for small changes that do not affect the milestone tasks or the key tasks on the critical path.
(2) Schedule modification. When a change affects the milestone tasks or the key tasks on the critical path, the project managers must modify the project schedule. The schedule modification is achieved by implementing the change flow of the project schedule, and then the version of the schedule is increased.
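The distinction between the two ways can be expressed as a simple rule: a change touching a milestone or a critical-path task goes through the change flow and bumps the schedule version; anything else is a minor adjustment. The toy model below is purely illustrative (the class and field names are our own, not a data model from the paper):

```python
from dataclasses import dataclass, field

@dataclass
class Schedule:
    """Toy model of a fourth level (department operation) schedule."""
    version: int = 1
    tasks: dict = field(default_factory=dict)    # task name -> planned month
    milestones: set = field(default_factory=set)
    critical: set = field(default_factory=set)   # key tasks on the critical path

    def apply_change(self, task, new_month):
        if task in self.milestones or task in self.critical:
            # Schedule modification: run the change flow, then increase the version.
            self.version += 1
        # Minor adjustments (and approved modifications) update the plan in place.
        self.tasks[task] = new_month

s = Schedule(tasks={"assembly design": 3, "system data summary": 4},
             critical={"system data summary"})
s.apply_change("assembly design", 4)         # minor adjustment: version stays 1
s.apply_change("system data summary", 5)     # critical-path task: version becomes 2
print(s.version)   # -> 2
```

In a real project the "change flow" would involve review and approval steps; here it is collapsed into the version increment to keep the rule visible.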
93.4 Conclusion
Acknowledgments The project is supported by the National High Technology Research and
Development Program of China (through grant No. 2012AA040910) and the Natural Science
Foundation of Shandong Province (through grant No. ZR2012GM015).
Chapter 94
SVM-Based Multi-Sensor Information
Fusion Technology Research in the Diesel
Engine Fault Diagnosis
Keywords Diesel engine · Fault diagnosis · Multi-sensor information fusion · Support vector machine (SVM)
94.1 Introduction
The vigorous development of the automotive market has driven the improvement of diesel engine fault diagnosis technology, and diagnostic techniques based on sensor signals have become the mainstream. Traditional spectrum-based signal analysis algorithms are relatively mature, but they lack local time-domain analysis capability and are therefore unsuitable for non-stationary signals. The diesel engine vibration signal contains a large number of high-frequency and low-frequency components and their harmonics. The support vector machine (SVM) proposed by Vapnik (Vapnik 1995) is a learning machine based on statistical learning theory. Compared with neural networks, which rely on heuristic learning methods and considerable implementation experience, SVM avoids the local-minimum problem and does not depend excessively on the quality and quantity of the samples, which greatly improves its generalization ability. Multi-sensor information fusion technology can improve the redundancy, complementarity, timeliness and accuracy of the information integrated from different sensors. In this paper, SVM is introduced into multi-sensor information fusion and applied to agricultural diesel engine fault diagnosis, with good results.
In the test, five cylinder-head vibration signals were measured in each of two states, normal and oil-pipeline leakage, and the diagnostic indicators were calculated and extracted from them, as shown in Table 94.1.
A diagnostic model is established from the data in Table 94.1. Assume that the indicator state vector $X = [X_1, X_2, X_3]^T$ follows a normal distribution with a common covariance matrix, denoted $X \sim N(\mu(\theta), \Sigma)$. The sample means are used to estimate $\mu(\theta)$:

$$\mu(\theta = 8.365) = \begin{bmatrix} 62.342 \\ 8.7649 \\ 13.1231 \end{bmatrix}, \qquad \mu(\theta = 7.607) = \begin{bmatrix} 51.5256 \\ 7.6918 \\ 11.1961 \end{bmatrix}$$

The pooled sample covariance matrix is used to estimate $\Sigma$:

$$\hat{\Sigma} = \frac{1}{n-k} \sum_{l=1}^{k} \sum_{i=1}^{n_l} (X_{li} - \bar{X}_l)(X_{li} - \bar{X}_l)^T = \begin{bmatrix} 1.8874 & 0.1707 & 0.1595 \\ 0.1707 & 0.3926 & 0.5986 \\ 0.1595 & 0.5986 & 0.9834 \end{bmatrix}$$
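The diagnostic rule implied by this model can be sketched as follows: with a common covariance matrix, a new indicator vector is assigned to the state whose mean is closer in Mahalanobis distance. The numerical values are taken from the text; completing the covariance matrix by symmetry is an assumption.

```python
# A minimal sketch of the diagnostic model above: with a common covariance
# matrix, an indicator vector X is assigned to the state whose mean gives the
# smaller Mahalanobis distance. Means and covariance are taken from the text;
# the covariance matrix is completed by symmetry (an assumption).
import numpy as np

mu = {
    8.365: np.array([62.342, 8.7649, 13.1231]),   # normal state
    7.607: np.array([51.5256, 7.6918, 11.1961]),  # oil-pipeline leakage
}
sigma = np.array([
    [1.8874, 0.1707, 0.1595],
    [0.1707, 0.3926, 0.5986],
    [0.1595, 0.5986, 0.9834],
])
sigma_inv = np.linalg.inv(sigma)

def classify(x):
    """Return the state label theta with the smallest Mahalanobis distance."""
    def d2(theta):
        diff = x - mu[theta]
        return float(diff @ sigma_inv @ diff)
    return min(mu, key=d2)
```

A vector near the first mean is labeled with the first state, and likewise for the second, which is the intended behavior of the model.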
The methods currently common in diesel engine fault diagnosis include wavelet analysis, artificial neural network diagnosis, extended rough set theory, and so on. Each method has its own characteristics with respect to the operating laws of diesel engines and their typical failure modes; comparing the pros and cons of these methods in diesel engine fault diagnosis is the key to promoting the further development of diesel engine fault diagnosis technology.
To compare the fault-data processing capabilities of the above methods, feature data were collected from actual diesel engine tests: 1820 features of the normal signal, 714 features of the imbalance signal, and 1148 features of the collision-friction signal. Of these, 70 % were randomly selected for network training and the remaining 30 % for network testing. Thus the training set contained 1274 normal-signal features, 497 imbalance-signal features and 812 collision-friction features, while the test set contained 546, 217 and 336, respectively. The experimental results on the training set and the test set are shown in Table 94.2.
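The experimental procedure can be sketched as below. The diesel engine feature data are not available here, so synthetic clusters of the same class sizes stand in for them, and the classifier settings are illustrative assumptions rather than the paper's configuration.

```python
# Hedged sketch of the experiment above: train a multi-class SVM on feature
# vectors from three signal classes with a random 70/30 split. Synthetic blobs
# stand in for the real diesel engine data; kernel and C are assumptions.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Three classes sized like the paper's data: normal (1820),
# imbalance (714), collision friction (1148).
X, y = make_blobs(n_samples=[1820, 714, 1148], n_features=6,
                  cluster_std=1.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=0.7, random_state=0)

clf = SVC(kernel="rbf", C=10.0, gamma="scale")  # one-vs-one multi-class SVM
clf.fit(X_train, y_train)
acc = clf.score(X_test, y_test)                 # test-set recognition rate
```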
From the experimental results it can be seen that, for diesel engine fault diagnosis, the artificial neural network method, which requires a large amount of data, is at a disadvantage. The wavelet analysis method achieves a high recognition rate for the normal signal on the training set, but its performance degrades noticeably on the test set. For diesel engine failures, we place more weight on the correct recognition rates of the imbalance signal and the collision-friction signal on the test set. In imbalance-signal recognition, SVM-based multi-sensor information fusion is almost equal to generalized rough set theory, and it has certain advantages in recognizing the collision-friction signal.
94.5 Conclusion
References
Coello CAC, Lechuga MS (2002) MOPSO: a proposal for multiple objective particle swarm
optimization. In: Proceedings of the IEEE congress on evolutionary computation, Honolulu,
Hawaii, USA
Hsu CW, Lin CJ (2002) A comparison of methods for multi-class support vector machines.
IEEE Trans Neural Netw 13(2):415–425
Hu Z-h, Cai Y-z, Li Y-g et al (2005) Data fusion for fault diagnosis using multi-class support
vector machines. J Zhejiang Univ Sci 6A(10):1030–1039
Kennedy J, Eberhart RC (1995) Particle swarm optimization. In: Proceedings of IEEE
international conference on neural network. Perth, Australia: 1942–1948
Platt JC (1999) Fast training of support vector machines using sequential minimal optimization.
In: Proceedings of advances in kernel methods support vector learning. MIT Press,
Cambridge, pp 185–208
Vapnik V (1995) The nature of statistical learning theory. Springer-Verlag, New York
Chapter 95
The Purchase House Choice Research
Based on the Analytic Hierarchy Process
(AHP)
Keywords AHP · Multiobjective decision-making · Consistency test · Indicators of purchasing house · Weight
95.1 Introduction
Buying a satisfactory house is the dream of many people; however, with the fluctuating development of the real estate industry today, realizing this dream is not so simple. In the actual purchasing process, buyers' requirements are no longer limited to simple residential functions; they demand housing that is more humane and more comfortable. The aspects buyers care about are therefore increasingly broad, and their requirements increasingly fine, including real estate lots, product price, design style,
According to Saaty's rule of thumb, when CR < 0.1 the judgment matrix has satisfactory consistency; the normalized eigenvector corresponding to the largest eigenvalue $\lambda_{\max}$ is then taken as the weight vector of the judgment matrix.
A large survey website conducted a large-scale survey of prospective house buyers; from the large amount of data, the buyers' degrees of concern for each aspect of the house-purchasing process were obtained, as shown in Table 95.2.
Based on Table 95.2, we can obtain the degrees of influence of the purchasing indicators when people buy a house. Then, according to Saaty's comparison
[Hierarchy diagram: criteria layer C — real estate lots, product price, design style, landscape supporting, property services, district supporting, quality of house, developers' credibility, traffic conditions; program layer P — Central time zone, China shuiyun stream, Purple pavilion dongjun, Riverfront Maple city]
Table 95.2 The concern degrees to indicators of purchasing house

Indicators of purchasing house   Concern degrees (%)
Real estate lots C1              48.08
Product price C2                 34.35
Design style C3                  49.13
Landscape supporting C4          23.43
Property services C5             32.17
District supporting C6           27.62
Quality of house C7              47.23
Developers' credibility C8       27.43
Traffic conditions C9            45.97
criterion (Hu and Guo 2007), through the pairwise comparisons of the 9 aspects in
the criteria layer, we can establish the judgment matrix A, as follows:
$$A = \begin{bmatrix}
1 & 4 & 1/2 & 6 & 4 & 5 & 2 & 5 & 3 \\
1/4 & 1 & 1/4 & 4 & 2 & 3 & 1/4 & 3 & 1/4 \\
2 & 4 & 1 & 6 & 4 & 5 & 3 & 5 & 3 \\
1/6 & 1/4 & 1/6 & 1 & 1/4 & 1/3 & 1/5 & 1/2 & 1/5 \\
1/4 & 1/2 & 1/4 & 4 & 1 & 3 & 1/4 & 3 & 1/4 \\
1/5 & 1/3 & 1/5 & 3 & 1/2 & 1 & 1/5 & 2 & 1/5 \\
1/2 & 4 & 1/3 & 5 & 4 & 5 & 1 & 5 & 2 \\
1/5 & 1/3 & 1/5 & 2 & 1/3 & 1/2 & 1/5 & 1 & 1/5 \\
1/3 & 4 & 1/3 & 5 & 4 & 5 & 1/2 & 5 & 1
\end{bmatrix}$$
By calculation, the largest eigenvalue of the judgment matrix A is $\lambda_{\max} = 9.7920$, and its normalized eigenvector is taken as $w_1$. The consistency ratio is

$$CR_1 = \frac{CI_1}{RI_9} = \frac{0.0990}{1.45} = 0.0683 < 0.1$$

This result passes the consistency test, so $w_1$ is the weight vector of the criteria layer C with respect to the target layer O.
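The eigenvector method used here can be sketched as follows; the matrix is transcribed from above, and RI = 1.45 is Saaty's random index for n = 9.

```python
# A sketch of the eigenvector method above: compute the principal eigenvalue
# and normalized eigenvector of the judgment matrix A, then the consistency
# ratio CR = CI / RI, with RI = 1.45 for n = 9 (Saaty's random-index table).
import numpy as np

A = np.array([
    [1,   4,   1/2, 6, 4,   5,   2,   5,   3  ],
    [1/4, 1,   1/4, 4, 2,   3,   1/4, 3,   1/4],
    [2,   4,   1,   6, 4,   5,   3,   5,   3  ],
    [1/6, 1/4, 1/6, 1, 1/4, 1/3, 1/5, 1/2, 1/5],
    [1/4, 1/2, 1/4, 4, 1,   3,   1/4, 3,   1/4],
    [1/5, 1/3, 1/5, 3, 1/2, 1,   1/5, 2,   1/5],
    [1/2, 4,   1/3, 5, 4,   5,   1,   5,   2  ],
    [1/5, 1/3, 1/5, 2, 1/3, 1/2, 1/5, 1,   1/5],
    [1/3, 4,   1/3, 5, 4,   5,   1/2, 5,   1  ],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
lam_max = eigvals.real[k]
w1 = np.abs(eigvecs[:, k].real)
w1 /= w1.sum()                 # normalized weight vector of layer C

n, RI = 9, 1.45
CI = (lam_max - n) / (n - 1)
CR = CI / RI                   # the text reports CR = 0.0683 < 0.1
```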
Table 95.3 The judgment matrices of Ck–P and the results of the consistency test

Layer P   C1      C2      C3      C4      C5      C6      C7      C8      C9
          w21     w22     w23     w24     w25     w26     w27     w28     w29
P1        0.4675  0.4554  0.0955  0.0919  0.4675  0.4733  0.4718  0.4675  0.4718
P2        0.2771  0.2628  0.16    0.3016  0.2771  0.2842  0.1643  0.2771  0.1643
P3        0.16    0.1409  0.2771  0.1537  0.16    0.1696  0.2562  0.16    0.2562
P4        0.0955  0.1409  0.4675  0.4528  0.0955  0.0729  0.1078  0.0955  0.1078
λj        4.031   4.0104  4.031   4.1658  4.031   4.0511  4.0458  4.031   4.0458
CI2j      0.0103  0.0035  0.0103  0.0553  0.0103  0.017   0.0153  0.0103  0.0153
CR2j      0.0114  0.0039  0.0114  0.0614  0.0114  0.0189  0.017   0.0114  0.017
From Table 95.3 it can be seen that the consistency ratios CR of all indicators are less than 0.1, that is, all pass the consistency test. The weights of layer P with respect to layer C are then:
According to the C–O weights w1 and the P–C weights w2, we can obtain the P–O weights:
According to the combined weights w, we can finally rank the four candidate regions: Central time zone is better than Purple pavilion dongjun, Purple pavilion dongjun is better than China shuiyun stream, and China shuiyun stream is better than Riverfront Maple city. Therefore, funds permitting, buying a house in the Central time zone region best meets the demands on all aspects.
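The synthesis step can be sketched as below, using the P–C weight columns of Table 95.3 as the matrix W2. Since the eigenvector w1 itself is not reproduced in the text, an equal-weight placeholder is used for illustration.

```python
# Sketch of the weight-synthesis step: combine the P–C weights of Table 95.3
# (matrix W2, one column per criterion C1..C9) with the C–O weight vector w1
# to obtain the P–O weights. The w1 below is an equal-weight placeholder,
# since the actual eigenvector values are not reproduced in the text.
import numpy as np

# Rows: P1 Central time zone, P2 China shuiyun stream,
#       P3 Purple pavilion dongjun, P4 Riverfront Maple city.
W2 = np.array([
    [0.4675, 0.4554, 0.0955, 0.0919, 0.4675, 0.4733, 0.4718, 0.4675, 0.4718],
    [0.2771, 0.2628, 0.16,   0.3016, 0.2771, 0.2842, 0.1643, 0.2771, 0.1643],
    [0.16,   0.1409, 0.2771, 0.1537, 0.16,   0.1696, 0.2562, 0.16,   0.2562],
    [0.0955, 0.1409, 0.4675, 0.4528, 0.0955, 0.0729, 0.1078, 0.0955, 0.1078],
])

w1 = np.full(9, 1 / 9)      # placeholder C–O weights (equal weighting)
w = W2 @ w1                 # combined P–O weights
ranking = np.argsort(-w)    # alternatives, best first
```

With any valid w1 (nonnegative, summing to 1), the combined weights w also sum to 1 because each column of W2 sums to 1.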
95.4 Conclusion
The AHP method is practical, and its calculation is simple and easy to operate. Using AHP to analyze the various factors considered in house-purchasing decisions can guide consumers to buy houses scientifically and rationally. The method can also be applied to purchasing decisions for other consumer goods, such as cars and insurance (Song and Wang 2012; Zhang and Lin 2012; Kang and Zhu 2012). In summary, AHP has definite guiding significance for solving similar multiobjective problems.
References
Chen D, Li D, Wang S (2007) Mathematical modeling (in Chinese). Science Press, Beijing, ch. 8,
pp 195–201
Chen J, Cheng Z, Liu Y (2011) Analytic hierarchy comprehensive evaluation of the power
transmission project (in Chinese). J Electr Power 26(5):408–412
Hu Y, Guo Y (2007) Operational research tutorial (in Chinese). Tsinghua University Press,
Beijing, ch. 13, pp 422–425
Huang D, Wu Z, Cai S, Jiang Z (2006) Emergency adaption of urban emergency shelter: analytic
hierarchy process-based assessment method (in Chinese). J Nat Disasters 15(1):52–58
Jin Q, Wang Y, Jia Z (2011) AHP and its application in recruitment (in Chinese). Comput Mod
27(8):190–192
Kang J, Zhu Q (2012) Application of AHP in project investment decision (in Chinese). Ningxia
Eng Technol 11(1):25–28
Li Y, Liu S, Zhang S (2012) Risk assessment of military logistics outsourcing based on matter-
element analysis (in Chinese). J Mil Jiaotong Univ 14(3):71–75
Mo S (2007) AHP in Decision-making in the public purchase (in Chinese). Econ Forum
21(19):49–51
Pan S, Duan X, Feng Y (2010) The application of AHP in the selection of the investment program
of military engineering (in Chinese). Mil Econ Res 31(8):40–41
Qi Y (2008) Research of performance appraisal system based on AHP (in Chinese). J Xi’an
Polytech Univ 22(3):125–128
Song Y, Wang Z (2012) Research of agricultural product logistics park location basis on AHP (in
Chinese). Storage Transp Preserv Commod 34(3):90–92
Yang R, Zhang Z, Wu Y, Lei L, Liu S (2004) An application of AHP to selection for designs and
lots of lottery tickets (in Chinese). J Chengdu Univ Inf Technol 19(3):451–457
Yuan N (2012) Evaluation of tourism resources based on AHP method in ancient villages—a case
of world heritage site of Xidi and Hongcun (in Chinese). Resour Dev Market 28(2):179–181
Zhang Y, Lin J (2012) Powder distribution scheduling based on AHP (in Chinese). J Jiamusi Univ
(Nat Sci Edn) 30(1):86–90
Zhao S (2007) The application research of AHP on comprehensive evaluation of physical fitness
(in Chinese). J Beijing Univ Phys Educ 30(7):938–940
Chapter 96
The Relations Tracking Method
of Establishing Reachable Matrix in ISM
Keywords Adjacency matrix · Directed graph · Interpretative structural modeling · Reachable matrix
96.1 Introduction
X. He (&)
School of Economics and Management, Southwest University
of Science and Technology, Mianyang, China
e-mail: [email protected]
Y. Jing
Institute of Technology, China Academy of Engineering Physics,
Mianyang, China
e-mail: jywxfh163.com
dialogue method (Zhenkui 1998) and the Warshall algorithm (Lipschutz and Lipson 2002; Wang and Ge 1996), and so on. In this paper, the relations tracking method for establishing the reachable matrix is presented; compared with the other methods, it avoids complex matrix operations.
96.2.1 Definition
As its name implies, in the relations tracking method the relationships among the many factors of a system are tracked first, and the reachable matrix is then established. In this paper we use Fig. 96.1 as an example.
96.2.2 Steps
The first step is to find out the direct reachable set of each node.
The direct reachable set of a node is the set of elements the node can reach directly, not including the node itself, written D<i>. For example, in Fig. 96.1 the direct reachable set of node S2 is D<2> = {3, 4}; similarly, D<4> = Ø. All the direct reachable sets are shown in Table 96.1.
The second step is to find out the tracking reachable set of each node.
The tracking reachable set of a node is the set of elements the node can reach, directly or indirectly, including the node itself, written R<i>. For the reachable matrix, this step is the most important.
The core idea of the relations tracking method is as follows. Each node is viewed as a source node and its direct reachable set is obtained; every node in that set is then viewed as a branch node of the next level. In this way the branch tree of each node is obtained, and all the nodes of the branch tree constitute the tracking reachable set of the node. If a direct reachable set is empty, the tracking reachable set is the node itself; for example, in Table 96.1 the direct reachable set of S4 is empty, so R<4> = {4}.

[Fig. 96.1: the example directed graph (nodes S2, S3, S4 and S5 shown)]

An important principle: in the branching process, a repeated node is omitted, that is, it is not branched further.
For example, the branch tree obtained for the tracking reachable set of S1 is shown in Fig. 96.2. Thus R<1> = {1, 2, 3, 4, 5}; similarly, R<2> = {1, 2, 3, 4, 5}, R<3> = {1, 2, 3, 4, 5}, R<5> = {1, 2, 3, 4, 5} and R<6> = {1, 2, 3, 4, 5, 6}. The branch trees of S2, S3, S5 and S6 are shown in Fig. 96.3.
The third step is to write out the reachable matrix. The reachable matrix M is:

$$M = \begin{bmatrix}
1 & 1 & 1 & 1 & 1 & 0 \\
1 & 1 & 1 & 1 & 1 & 0 \\
1 & 1 & 1 & 1 & 1 & 0 \\
0 & 0 & 0 & 1 & 0 & 0 \\
1 & 1 & 1 & 1 & 1 & 0 \\
1 & 1 & 1 & 1 & 1 & 1
\end{bmatrix}$$
This method fundamentally avoids complex matrix operations; it only requires tracking the relations among the nodes.
The relations tracking method reflects the essence of the reachable matrix: the tracking reachable sets are obtained directly from the directed graph, so the reachable matrix obtained by this method is exactly the one required. From the reachable matrix M it can be seen that the elements of rows 1, 2, 3 and 5 are the same, which suggests that S1, S2, S3 and S5 may form a loop.
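A minimal sketch of the method: starting from the direct reachable sets D<i> (D<1>–D<4> appear in Table 96.1; D<5> and D<6> are read off the adjacency matrix A used later in this section), branch from each node, skipping repeated nodes, to obtain R<i> and hence the reachable matrix M.

```python
# A sketch of the relations tracking method on the six-node example:
# branch from every node via the direct reachable sets D<i>, never branching
# a repeated node, to build the tracking reachable sets R<i> and the matrix M.
D = {1: {2}, 2: {3, 4}, 3: {1, 4, 5}, 4: set(), 5: {3}, 6: {1, 3, 5}}

def tracking_reachable(i):
    """Depth-first branching; a repeated node is not branched again."""
    reached = {i}                      # R<i> includes the node itself
    stack = list(D[i])
    while stack:
        node = stack.pop()
        if node not in reached:
            reached.add(node)
            stack.extend(D[node])
    return reached

R = {i: tracking_reachable(i) for i in D}
M = [[1 if j in R[i] else 0 for j in sorted(D)] for i in sorted(D)]
```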
[Fig. 96.2: the branch tree of S1, built from D<1> = {2}, D<2> = {3, 4}, D<3> = {1, 4, 5}, D<4> = Ø]
[Fig. 96.3: the branch trees of S2, S3, S5 and S6]
The relations tracking method avoids repeated searching because a repeated node is not branched further. Breadth-first search is a method for finding the shortest path between two nodes in a directed graph (Wang et al. 1994; Lu and Feng 2006; Yuan and Wang 2011); although it also avoids repeated searching, it requires a clear hierarchical relation to be established first. In the ISM process, however, the hierarchical relationship is obtained only after the reachable matrix.
$$A = \begin{bmatrix}
0 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 1 & 0 & 0 \\
1 & 0 & 0 & 1 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 & 0 \\
1 & 0 & 1 & 0 & 1 & 0
\end{bmatrix}$$

$$A_1 = A + I = \begin{bmatrix}
1 & 1 & 0 & 0 & 0 & 0 \\
0 & 1 & 1 & 1 & 0 & 0 \\
1 & 0 & 1 & 1 & 1 & 0 \\
0 & 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 & 1 & 0 \\
1 & 0 & 1 & 0 & 1 & 1
\end{bmatrix}$$

$$A_4 = (A + I)^4 = A_3 \cdot A_1 = A_3 = M = \begin{bmatrix}
1 & 1 & 1 & 1 & 1 & 0 \\
1 & 1 & 1 & 1 & 1 & 0 \\
1 & 1 & 1 & 1 & 1 & 0 \\
0 & 0 & 0 & 1 & 0 & 0 \\
1 & 1 & 1 & 1 & 1 & 0 \\
1 & 1 & 1 & 1 & 1 & 1
\end{bmatrix}$$
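For comparison, the formula method can be sketched as repeated Boolean multiplication of (A + I) until the power stabilizes; repeated squaring is used here instead of stepwise powers, which reaches the same fixed point.

```python
# A sketch of the formula method: Boolean powers of (A + I) are computed
# until they stabilize; the stable power is the reachable matrix M. Repeated
# squaring is used here (a shortcut reaching the same fixed point as the
# stepwise powers in the text). A is the adjacency matrix of the example.
import numpy as np

A = np.array([
    [0, 1, 0, 0, 0, 0],
    [0, 0, 1, 1, 0, 0],
    [1, 0, 0, 1, 1, 0],
    [0, 0, 0, 0, 0, 0],
    [0, 0, 1, 0, 0, 0],
    [1, 0, 1, 0, 1, 0],
])

M = ((A + np.eye(6, dtype=int)) > 0).astype(int)   # A1 = A + I (Boolean)
while True:
    nxt = ((M @ M) > 0).astype(int)                # Boolean matrix square
    if np.array_equal(nxt, M):
        break                                      # fixed point: transitive closure
    M = nxt
```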
The formula method is traditional and widely used, but its biggest drawback is the complicated matrix calculation, which is acceptable only when the elements are quite few. In practice a system is often large, with many elements and complicated relationships among them, so the complicated matrix calculation decreases the practicality of this method (Tian and Wang 2003).
Because the calculation process is quite tedious, the intermediate steps (k = 4 and 5) are omitted.
When k = 6, the comparison result is as follows:

$$k = 6: \quad p_{16} = p_{26} = p_{36} = p_{46} = p_{56} = p_{66} = 0 \quad (i = 1, 2, \ldots, 6)$$
96.4 Conclusion
When establishing an ISM, the calculation of the reachable matrix is always crucial and tedious. To solve this problem, the relations tracking method for the reachable matrix is presented; compared with the other methods, it avoids complex matrix operations and consequently enhances the practical operability of ISM.
References
Abstract From the perspective of regional economics, this paper compares regional advantages and selects the regional center city among sixteen provincial capital cities of the Central, Northwest and Southwest Regions of China, using AHP and an index system of twelve secondary indexes over five factors: geographic location, traffic facilities, economics, population and human capital. The following conclusions are drawn. First, in the Central Region, Wuhan, having the highest composite score, should be selected as the regional center city. Second, although Xi'an is not located at the geographical center of the Northwest Region, it has the highest composite score, with every secondary index the highest, and should be selected as the regional center city. Third, in the Southwest Region, Chongqing should be selected as the regional center city.
97.1 Introduction
With the development of the Chinese economy, the Chinese government has been transforming the mode of economic growth and expanding domestic demand. The twelfth five-year guideline for national economic and social development of the People's Republic of China states clearly that the pattern of regional development should be improved and inland development expanded. For example, ''Build a long-term mechanism to expand domestic demand, rely on consumption, investment and exports, and promote economic growth.'' ''Take the expansion of consumer
Many scholars have studied the development of cities and regions. Linneker and Spence (1996) found a positive relationship between transport infrastructure and regional economic development. Lawson (1999) developed a competence theory of the region. Siegfried (2002) showed that economic associations are closer in adjacent regions.
Comparative research between cities is made mainly from the perspective of urban competitiveness. Hao and Ni (1998) empirically studied the competitiveness of seven cities (Beijing, Tianjin, Shanghai, Dalian, Guangzhou, Xiamen and Shenzhen) on 21 subdivision indexes using principal components analysis. Ning and Tang (2001) designed a city competitiveness model based on Michael E. Porter's and the IMD national competitiveness models. Li and Yu (2005) defined city competitiveness as the sustainable capacity of a city to attract, acquire, control and convert resources, and thereby to create value and wealth and improve the living standard of its residents. Wei-zhong Su, Lei Ding, Peng-peng Jiang and Qi-yan Wang made empirical studies of tourism competitiveness between cities (Ding et al. 2006; Su et al. 2003; Jiang and Wang 2008; Wang and Wang 2009). Cheng-lin Qin and colleagues studied polycentric urban-regional structure (Qin and Li 2012; Zhu et al. 2012). However, these studies mostly address a city's current situation; few start from the perspective of regional economics and from basic potential factors such as geographical location, transport and radiation.
capital. In the knowledge economy era, human capital is the main factor reflecting competitive ability. In this paper, considering the above five factors and the availability of data, twelve secondary indexes are selected, as shown in Table 97.1.
As shown in Table 97.1, geographic location is compared by calculating the sum of the distances between a city and the other cities. The city with the smallest distance sum is located relatively centrally and can exert polarization and diffusion effects. The area of a city itself is small, and railroads and highways are distributed in a mesh structure in the surrounding area, so the railway and highway conditions within the city alone cannot reflect its traffic convenience. Moreover, population and human capital are mobile, so the quantities within the city cannot reflect the city's regional competitive advantage.
Therefore, the traffic facility, population and human capital indexes are taken at the province level. To eliminate the influence of provincial area, railway mileage and highway mileage per ten thousand square kilometers are selected as the indexes of traffic convenience. The total population and the number of employed persons of the province are selected as the population indexes. The total number of college students in the province reflects the overall level of human capital, while the number of people with higher education per ten thousand persons and the number of college students per ten thousand persons reflect its average level. The GDP, total retail sales of consumer goods, per capita disposable income of urban residents and per capita net income of rural residents of the city itself are selected as the economic indexes.
Because the indexes have different units and dimensions, they must be made dimensionless before they can be compared and summed. Common standardization methods include range transformation, the linear proportional method, the normalized method, the standard sample transformation method, vector normalization and taking reciprocals. In this paper, the linear proportional method is used to standardize and sum the indexes.

In the decision matrix $X = (x_{ij})_{m \times n}$, for a positive index, given $x_j^* = \max_{1 \le i \le m} x_{ij} \neq 0$,

$$y_{ij} = \frac{x_{ij}}{x_j^*}, \quad (1 \le i \le m,\ 1 \le j \le n) \tag{97.1}$$

and for a reverse index, given $x_j^* = \min_{1 \le i \le m} x_{ij}$,

$$y_{ij} = \frac{x_j^*}{x_{ij}}, \quad (1 \le i \le m,\ 1 \le j \le n) \tag{97.2}$$

$Y = (y_{ij})_{m \times n}$ is called the linear proportional standard matrix.
Among the twelve indexes, the distance sum is a reverse index: the larger the distance sum, the lower the transformed score. So the distance sum is standardized by Eq. (97.2), while the other eleven positive indexes are standardized by Eq. (97.1). It must be noted that the maximum and minimum values are taken within each region (Central, Southwest and Northwest), not over all regions.
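The standardization of Eqs. (97.1) and (97.2) can be sketched as a small routine; the function name and argument layout are illustrative.

```python
# A sketch of the linear proportional standardization above (Eqs. 97.1 and
# 97.2): positive indexes are divided by the column maximum, while reverse
# indexes (here, the distance sum) map the column minimum over the value.
# The function name and argument layout are illustrative assumptions.
import numpy as np

def linear_proportional(X, reverse_cols=()):
    """X: m x n decision matrix; reverse_cols: indices of reverse columns."""
    Y = np.empty(X.shape, dtype=float)
    for j in range(X.shape[1]):
        col = X[:, j].astype(float)
        if j in reverse_cols:
            Y[:, j] = col.min() / col   # Eq. (97.2): smaller is better
        else:
            Y[:, j] = col / col.max()   # Eq. (97.1): larger is better
    return Y
```

For a two-city example with column 0 a distance sum (reverse) and column 1 a positive index, the best city scores 1 in each column.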
For data consistency, all data are from 2010. The data of X1, X2, X7, X8, X9, X10 and X12 are from the 2011 China Statistical Yearbook; the data of X3, X4, X5 and X6 are from the Statistical Communiqués on the 2010 National Economic and Social Development of the 16 cities. The original data are not listed, to save space. The final results are shown in Table 97.6, in which column Y is the composite score.
As shown in Table 97.6, after comprehensive calculation over the twelve secondary indexes, the composite score ranking of the six cities in the Central Region is Wuhan (87.52), Zhengzhou (85.75), Changsha (74.83), Hefei (73), Nanchang (64.03) and Taiyuan (54.38), which means Wuhan, the highest-scoring city, has the greatest advantages to become the regional center city of the Central Region. The gap between Zhengzhou and Wuhan, however, is only 1.77. Analyzing the secondary indexes, both Wuhan and Zhengzhou have five indexes with the highest score (100). Comparing them index by index, Wuhan has the advantages of better geographic location and economy, while Zhengzhou has better traffic and a larger population and labor force; Zhengzhou is predominant in the total amount of human capital.
Table 97.6 The standardized scores of the twelve secondary indexes and the composite score Y (Northwest and Southwest regions shown)

Region     City       X1     X2     X3     X4     X5     X6     X7     X8     X9     X10    X11    X12    Y
Northwest  Lanzhou    100.0  28.9   39.0   33.9   33.8   63.2   59.2   68.5   73.4   41.1   70.7   58.7   66.74
           Xining     95.9   13.0   12.0   19.4   14.4   63.3   71.2   15.1   15.1   4.8    81.0   34.9   50.33
           Yinchuan   89.6   94.9   47.3   23.5   14.0   76.8   79.5   16.9   16.7   8.6    86.1   58.2   62.54
           Urumqi     39.0   12.8   12.8   40.4   35.0   64.7   96.3   58.5   43.7   27.1   100.0  45.7   38.50
Southwest  Chongqing  99.4   100.0  100.0  100.0  100.0  91.1   64.3   35.9   38.3   48.1   100.0  100.0  88.77
           Chengdu    100.0  43.2   38.7   70.3   84.0   100.0  100.0  100.0  100.0  100.0  77.2   74.2   83.11
           Guiyang    95.6   67.1   60.6   14.2   16.8   79.7   72.8   43.2   48.1   29.8   61.2   46.0   66.17
           Kunming    90.5   37.1   37.4   26.9   36.8   90.6   70.8   57.2   56.3   40.4   66.9   57.7   63.18
           Lhasa      48.6   2.6    3.6    2.3    3.1    79.6   61.0   3.7    3.5    2.9    63.7   56.9   27.35
Traffic in the Northwest Region is the worst of the three regions and its regional gap the largest, which means the polarization and diffusion effect of Xi'an on the Northwest Region is weaker. Xi'an should strengthen its connections so as to build the Northwest growth pole on the Asia–Europe continental bridge and drive the development of the Northwest Region. The composite scores of Chongqing and Chengdu are both at the top of the Southwest Region, much higher than those of the other provincial capitals. Furthermore, since the two cities are geographically close, Chongqing and Chengdu should reinforce their cooperation and become a dual regional growth pole driving the development of the Southwest Region.
Acknowledgments Based on empirical data of the Central, Northwest and Southwest Region in
China
References
Ding L, Wu X, Wu Y, Ding J (2006) A system of evaluation indicator for the urban tourism
competitiveness. Econ Geogr 26(5):511–515 (Chinese)
Hao S, Ni P (1998) The study on the China city competitiveness: a case of several cities. Econ Sci
20(3):50–56 (Chinese)
Jiang P, Wang X (2008) Research on competitiveness of coastal tourism cities in China: an
empirical study of Dalian, Qingdao, Xianmen and Sanya. Tour Sci 28(10):12–18 (Chinese)
Lawson C (1999) Towards a competence theory of the region. Camb J Econ 23(2):151–166
Li N, Yu T (2005) On urban competitiveness and the methods, process of evaluation. Hum Geogr
20(3):44–48 (Chinese)
Linneker B, Spence N (1996) Road transport infrastructure and regional economic development:
The regional development effects of the M25 London orbital motorway. J Transp Geogr
4(2):77–92
Luan G (2008) Regional economics. Tsinghua University Press, Beijing, pp 18–39 (Chinese)
Ning Y, Tang L (2001) The concept and indicator system of urban competitive capacity. Urban
Res 16(3):19–22 (Chinese)
Peng G, Li S, Sheng M (2004) AHP in evaluating government performance: determining
indicator weight. China Soft Sci 19(6):136–139
Qin C, Li H (2012) Progress of studies on polycentric in western countries. Hum Geogr
27(1):6–10 (Chinese)
Siegfried JJ (2002) The economics of regional economics associations. Q Rev Econ Finance
42(1):1–17
Su W, Yang Y, Gu C (2003) A study on the evaluation of competitive power of urban tourism.
Tour Tribune 15(3):39–42 (Chinese)
Wang Q, Wang D (2009) Construction and application of Chinese city tourism competence
evaluation system. Stat Res 26(7):49–54 (Chinese)
Zhu J, Zhang M, Son C, Tang J (2012) Polycentric urban-regional structure and its coordinal
symbiosis in Wuhan Urban Circle. Urban Stud 19(3):7–14 (Chinese)
Chapter 98
A Research on Mine Pressure Monitoring
Data Analysis and Forecast Expert System
of Fully Mechanized Coal Face
98.1 Introduction
In the coal industry, roof accidents have long been the major hidden safety danger for coal mine workers (Qian and Shi 2003). According to statistics, roof accidents make up 42 % of all accidents and seriously threaten the lives of coal mine workers. Mine pressure appearance is one of the main causes of all kinds of mine disasters (Cen 1998). It is therefore particularly important to monitor the pressure on the hydraulic supports, which are the roof support equipment. In recent years, with the development of science and technology, the constant improvement of mining technology and the strengthened requirements for safe production in coal mines, mine pressure monitoring has been carried out extensively (Sun et al. 2006). We have designed a full-featured software package for mine pressure monitoring data analysis of fully mechanized coal faces. It plays a positive role in studying the appearance regularity of mine pressure and preventing coal mine roof accidents. Based on the data analysis, we have also designed a mine pressure forecast expert system (Zhang 2004).
The software interface is designed with Configuration King, which features strong adaptability, wide applicability, easy extension, low cost and a short development cycle. It provides a rich graphics library and various communication interfaces; it is compatible with other programming languages and can be extended with Visual Basic 6.0 and Visual C++. It has an alarm window and can conveniently generate various reports and real-time trend curves.
Data processing and analysis are based on Matlab, a commercial mathematics package developed by MathWorks in the United States. Matlab is a high-level computing language for algorithm development, data analysis and numerical calculation; with a friendly working platform and programming environment, it offers strong data analysis and graphics processing functionality.
The main function modules of the data analysis software for mine pressure monitoring are shown in Fig. 98.1 (Gong and Wang 2011).
Different users have different operating authority, which is key to guaranteeing normal operation of the system. The user login module is the mandatory channel through which users enter the main application program; here users complete information authentication. The user information verification process is shown in the flow diagram in Fig. 98.2.
The module functions shown in the figures include quitting the program, record deletion, record insertion, unconditional output, conditional output, and data storage.
The statistical analysis methods include the histogram, frequency analysis, regression analysis, and multiple regression.
According to this process, the percentage in each interval can be calculated once the extreme values and the number of intervals are given. On this basis, the probability distribution can be examined and the characteristic values calculated.
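As a minimal sketch of this interval-percentage calculation (written in Python rather than the Matlab environment the module actually uses; the function and variable names are ours):

```python
import numpy as np

def frequency_analysis(values, n_intervals):
    """Split the range between the extreme values into equal intervals
    and return the percentage of observations falling in each one."""
    lo, hi = min(values), max(values)          # the extreme values
    counts, edges = np.histogram(values, bins=n_intervals, range=(lo, hi))
    percents = 100.0 * counts / len(values)    # percentage per interval
    return edges, percents

# Example: eight illustrative resistance readings grouped into 4 intervals
edges, percents = frequency_analysis([10, 12, 15, 18, 20, 22, 30, 31], 4)
```

The interval percentages always sum to 100 %, and the resulting empirical distribution is the starting point for the probability distribution examination described above.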
Unitary or multivariate, linear or nonlinear regression analysis can be performed on the shared database. The module also includes multivariate stepwise regression: the program automatically screens factors to ensure the validity of the regression equation, and it can analyze the weights of the influencing factors (Mu et al. 2012).
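The automatic factor screening can be illustrated by a forward stepwise selection that keeps a factor only if it improves the adjusted R-squared; this is a simplified sketch, not the module's actual implementation:

```python
import numpy as np

def adj_r2(X, y):
    """Adjusted R^2 of an ordinary least-squares fit with intercept."""
    n = len(y)
    A = np.column_stack([np.ones(n), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    r2 = 1 - (resid @ resid) / ((y - y.mean()) ** 2).sum()
    k = X.shape[1]
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

def forward_stepwise(X, y):
    """Greedily add the factor that most improves adjusted R^2;
    stop when no remaining factor helps."""
    selected, remaining = [], list(range(X.shape[1]))
    best = -np.inf
    while remaining:
        score, j = max((adj_r2(X[:, selected + [j]], y), j) for j in remaining)
        if score <= best:
            break
        best, selected = score, selected + [j]
        remaining.remove(j)
    return selected

# Example: y depends only on columns 0 and 2 of X; columns 1 and 3 are noise
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))
y = 2 * X[:, 0] + 0.5 * X[:, 2]
selected = forward_stepwise(X, y)
```

Only the genuinely influential factors survive the screening, which is what keeps the resulting regression equation valid.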
The calculation and analysis methods of this module are shown in Fig. 98.4.
Any database can be queried in this module. For a single query condition, users can find the maximum, the minimum, and the average, as well as the number and percentage of records meeting the condition. Search results can be displayed and printed according to user requirements. The contents of this module are shown in Fig. 98.5.
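A minimal sketch of such a single-condition query (the record structure and the field name `resistance` are illustrative assumptions, not the system's actual schema):

```python
def query(records, condition):
    """Return max, min, average, count, and percentage of the records
    that satisfy a single query condition."""
    hits = [r for r in records if condition(r)]
    values = [r["resistance"] for r in hits]   # "resistance" is an illustrative field
    return {
        "max": max(values),
        "min": min(values),
        "avg": sum(values) / len(values),
        "count": len(hits),
        "percent": 100.0 * len(hits) / len(records),
    }

# Example: five illustrative support records, queried with one condition
records = [{"support": i, "resistance": r}
           for i, r in enumerate([18, 25, 31, 40, 22])]
result = query(records, lambda r: r["resistance"] > 20)
```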
Data should be processed with the appropriate mathematical statistics method, rather than choosing methods arbitrarily or analyzing with the wrong method. So-called pertinence means summarizing the results practically and realistically according to the concrete contents, purpose, and instruments of the observation, thereby obtaining the mine pressure characteristics, the roof control method, and ways to improve this working face or tunnel (Cao 2011). This module contains several contents as shown in Fig. 98.6.
The database design not only includes all of the observed contents and the relevant information on working face production, but also supports convenient data processing (Li et al. 2002). Four original databases are established (Zhao et al. 2011; Tan 2000).
Records are made for every observation line, with one line set for every ten hydraulic supports. Observation contents include the support working state and roof bolting effects. The record, made every day or every shift, includes the position of the winning machine, the setting load of the stanchions, and the end resistance.
This database, which takes one mining face cycle as a record, supports various calculations of the stanchion resistance.
This section contains the basic parameters, such as the length of the working face, the slope angle, the coal thickness, the coal hardness, and the surroundings of the working face.
Based on the results of data analysis and many years of coal mine safety production experience, this system explores the mine pressure prediction mechanism in depth. The mine pressure prediction expert system is built on mine pressure data analysis, a prediction model, reasoning strategies, monitoring methods, and data processing methods. The core of the system is the knowledge base and the inference engine. The knowledge base is the set of domain knowledge that mine pressure prediction requires, including basic facts, rules, and other relevant information (Huang et al. 2008). Knowledge can be represented in various ways, including frames, rules, and semantic networks. The knowledge, derived from field experts, is the key to the expert system's capability: the quality of the expert system depends on the quality and quantity of its knowledge. Because the knowledge base and the inference engine are independent of each other, users can modify and refine the knowledge to improve the performance of the system. The inference engine, which interprets the knowledge, is the executive core of problem solving. According to the semantics of the knowledge, it interpretively executes the knowledge retrieved through a given strategy and records the results in the appropriate space of the dynamic database.
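The separation of knowledge base and inference engine described above can be sketched as a small forward-chaining loop; the rules and thresholds below are illustrative placeholders, not the system's actual knowledge:

```python
# Knowledge base: (condition on the facts, conclusion added to the dynamic
# database). Rules and thresholds are hypothetical examples.
knowledge_base = [
    (lambda f: f["resistance"] > f["rated_load"], "overpressure"),
    (lambda f: "overpressure" in f["derived"] and f["trend"] == "rising",
     "periodic_weighting_expected"),
    (lambda f: "periodic_weighting_expected" in f["derived"], "raise_alarm"),
]

def infer(facts):
    """Inference engine: repeatedly apply rules until no new conclusion
    can be derived, recording results in the dynamic database."""
    facts = dict(facts, derived=set())
    changed = True
    while changed:
        changed = False
        for condition, conclusion in knowledge_base:
            if conclusion not in facts["derived"] and condition(facts):
                facts["derived"].add(conclusion)
                changed = True
    return facts["derived"]

# Example: monitored resistance exceeds the rated load while rising
derived = infer({"resistance": 4200, "rated_load": 4000, "trend": "rising"})
```

Because the rules live in a data structure separate from the loop that applies them, the knowledge can be changed or extended without touching the inference code, which is exactly the independence property described above.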
Mine pressure forecast expert system model is shown in Fig. 98.7.
The establishment of the system should include the following three factors:
• It should possess the knowledge of experts who are in the mine pressure forecast
field.
• It can simulate an expert's thought process.
• It also possesses expert-level problem-solving ability.
The process of establishing the mine pressure forecast expert system can be called ‘‘knowledge engineering’’: software engineering principles are applied to designing a knowledge-based system. It includes several aspects (Jiang et al. 1995):
• Knowledge acquisition.
• The selection of appropriate knowledge representation.
• Software design.
• The engineering accomplishment with the right computer programming
language.
The system, which builds a bridge between automatic forecasting and the forecast mechanism, can predict mine pressure patterns and display an overpressure alarm. It plays a significant role in safe production, roof management, the reasonable selection of hydraulic supports, and optimized support design.
Fig. 98.7 depicts the model base, knowledge base, and database supporting an effective decision model, which serves to predict the regular pattern of mine pressure, guide roof management, support the selection of hydraulic supports, and guide equipment operation.
98.6 Conclusion
a. Through the mine pressure monitoring data analysis of the fully mechanized mining face, people can grasp the mine pressure distribution, the working resistance of hydraulic supports, the pressure cycle, the caving span, the first pressure span, and so on. This has theoretical significance for safe production.
b. The results of data analysis provide an important basis for the correct selection of hydraulic supports and play a significant role in giving full play to the performance of mining equipment.
934 H. Qiao et al.
c. The mine pressure forecast expert system can predict roof accidents effectively. Workers can advance the supports before the peak value arrives and find hidden dangers in the supports, such as tilt, roof exposure, and poor sealing performance. Applying the mine pressure monitoring system is an important measure to avoid blindness and empiricism in roof management, and it can also provide a reliable basis for working out mining regulations for similar coal seams.
References
Cao J (2011) Mine pressure monitoring and data analysis of roadway in working face with soft
coal seam and great height, (in Chinese). Coal Mine Support 04:19–24
Cen C (1998) Stope roof control and monitoring technology, (in Chinese). Press of China
University of Mining and Technology, Xuzhou
Gong L, Wang Q (2011) Research and development of data analysis system of mine pressure
monitoring, (in Chinese). West-China Explor Eng 05:167–169
Huang Y, Yao Q, Ding X, Zhang L, Wang Y, Li L (2008) Mining pressure prediction of upper
roof in condition of hard roof, (in Chinese) 12:56–57
Jiang F, Song Z, Song Y, Yang Y, Zhao W, Qian M (1995) Basic study of expert system for
predicting weighting in coal face, (in Chinese). J China Coal Soc 20(3):225–228
Li H, Long Y, Zhou D (2002) The application of database in data processing, (in Chinese). Meas
Tech 01:42–44
Mu H, Wang F, Mu Y (2012) Mine pressure monitoring and data analysis of 1401E roadway in
Zhaizhen coal mine, (in Chinese). Shandong Coal Sci Technol 01:189–190
Qian M, Shi P (2003) Mine pressure and stratum control, (in Chinese). Press of China University
of Mining and Technology, Xuzhou
Sun Y, Wen Z, Zhang H, Liu Z (2006) Dynamic forecasting and roof support quality monitoring
of rigid roof located, (in Chinese). Coal Technol 25(9):64–65
Tan H (2000) C language programming, (in Chinese). Tsinghua University Press, Beijing
Zhang K (2004) The application of pressure monitoring system of fully mechanized support in
coal mine roof management, (in Chinese). Mining Industry Institute of Shandong University
of Science and Technology
Zhao L, Zhang B, Xiao K, Lu X (2011) Research of C language programming method with zero
defect, (in Chinese). Softw Eng 01:50–52
Zhou X, Bai C, Lin D, Wang Z (2011) Research on mechanisms of roof pressure prediction, (in
Chinese). J China Coal Soc 36(S2):299–303
Chapter 99
The U-Shaped Relationship Between
Corporate Social Disclosure
and Corporate Performance: Evidence
from Taiwan’s Electronics Industry
Abstract This study investigates the corporate social disclosure (CSD) of the
electronics industry in Taiwan and examines the relationship between corporate
social responsibility disclosure and corporate economic performance. The annual
reports of 600 out of 929 companies on the Taiwan Market Observation Post System and in the Taiwan Economic Journal database in 2009 were hand-collected. The results reveal the practice of corporate social responsibility disclosure in Taiwan's electronics industry. More specifically, this paper finds that the relationship between corporate social responsibility disclosure and corporate economic value-added is best described by a U-shaped curve. On the one hand,
the findings of this study help to build knowledge of corporate social responsibility
in Taiwan’s business companies. On the other hand, the results of this study
somewhat explain the inconsistent findings of the relationship between corporate
social responsibility disclosure and corporate performance in the previous litera-
ture. This study provides important implications for both academics and
practitioners.
99.1 Introduction
In a modern society, business firms have been viewed as open systems that interact
with and are integral parts of their environment. Business firms not only obtain
material, financial, and human resources from the outside environment, but also
gain support and legitimacy from their stakeholders and the whole society (Ott
et al. 2010). Business firms are now facing increasing pressure from stakeholders,
regulators and society as the latter demands more comprehensive and transparent
information regarding the former’s financial soundness, employee policies, envi-
ronmental policies, social responsibility involvement, etc. As a result, an
increasing number of business firms are now disclosing their corporate social
responsibility activities (Bebbington et al. 2007). Although corporate social responsibility disclosure (CSD) has been the subject of substantial academic research for the last two decades, CSD has so far remained mainly a phenomenon of developed countries in Western Europe, the USA, and Australia (Bebbington et al. 2007; Patten 2002). In fact, very few papers have discussed this issue in the context of developing countries, and the few that exist focus on Hong Kong, Korea, Malaysia, Singapore, and some South African countries (Bebbington et al. 2007; Choi 1998; Tsang 1998). In Taiwan, to our knowledge, only a few limited empirical studies have concentrated on this issue. Because of this gap in the empirical literature, it is very difficult to know the practices of CSD in Taiwan. For this reason, our first purpose is to examine the CSD practices of companies in Taiwan.
In addition, the relationship between CSD and firm performance remains inconsistent in the previous literature (Garay and Gonzalez 2010). One view is that increased disclosure raises the cost of equity capital and negatively impacts firm performance (Dhaliwal et al. 2009). A contradictory view holds that by increasing their social and environmental disclosures, firms enhance their reputations (Armitage and Marston 2008), which in turn helps them gain support and legitimacy from stakeholders and society. The undefined relationship between CSD and firm performance in previous research is therefore one of the most important issues for future research (Richardson and Welker 2001).
Recently, a study by Wang et al. (2008) found that the relationship between corporate philanthropy and firm financial performance follows an inverted U-shape. Their findings provide an important reference for the arguments in this study. Additionally, based on private costs theory and agency theory, when firms disclose more social and environmental information to the public, they incur substantial direct and indirect costs (Barnett and Salomon 2006). These costs suggest a negative relationship between CSD and firm performance. However, according to stakeholder theory, increased disclosure reduces information asymmetry, lowering the estimation risk of the distribution of returns, which enables firms to gain the trust of investors and stakeholders. As a result, firms obtain resources controlled by these stakeholders, such as human capital, financial capital, and social capital.
Over the past two decades, CSD has been one of the most commonly discussed
issues in many developed economies (Bebbington et al. 2007). However, previous
findings on the relationship between CSD and firm performance have been
inconsistent (Garay and Gonzalez 2010). According to Gray et al. (1993), it is unnecessary to report social and environmental disclosures in the absence of any demand for such information and of any legal requirement for CSD. If firms disclose their CSR in such circumstances, the costs would outweigh the benefits (Solomon and Lewis 2002). It would also be irrational for firms to disclose any information harmful to themselves. Consequently, the burden of such costs will have a negative impact on firm performance (Dhaliwal et al. 2009).
However, a number of scholars have suggested that by making social and
environmental disclosures in their annual reports, firms enjoy multiple benefits
(Armitage and Marston 2008; Godfrey 2005), such as enhanced firm reputation
(Armitage and Marston 2008), effective response to pressure and prediction of
future environmental regulations (Blair 2000), reduction in information asymmetry
and boost in investor interest (Gray et al. 1995), as well as establishment and
maintenance of good stakeholder relationships, which are conducive to gaining
support and legitimacy from stakeholders and society (Milne and Patten 2002).
Yet still, some other researchers claim that there is no significant relationship between CSD and firm performance (Freedman and Wasley 1990). Their findings suggest that the relationship between CSD and firm performance is more complex, and not simply a direct one as proposed by previous studies (Wang et al. 2008). For this reason, it is necessary to take a further step to clarify the relationship between CSD and firm performance.
To understand the relationship between these two variables, it is essential to
consider simultaneously the costs and benefits of CSD activities. Based on the
private costs theory, CSD is a costly endeavor, leading to direct expenses that have
938 C.-S. Lin et al.
99.3 Methodology
Table 99.2 Results of regression analysis on the relationship between CSD and economic value-added^a

                               CSD_Sentences (Model 1)   CSD_Words (Model 2)
                               b        VIF              b        VIF
Control variables
  b                            -0.13**  1.05             -0.12**  1.07
  N/M_Ratio                     0.05    1.10              0.05    1.11
  Sale_GR                      -0.05    1.04             -0.06    1.04
  Cap_Ex                        0.00    1.09              0.01    1.07
Independent variables
  CSD_Sentences/Words          -0.16*   3.61             -0.14*   2.52
  CSD_Sentences/Words squared   0.26**  3.67              0.21**  2.47
R2                              0.05                      0.05
Adjusted R2                     0.04                      0.04
F                              11.65**                   10.89**

^a n = 600; *** p < 0.001, ** p < 0.01, * p < 0.05
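The quadratic specification underlying Table 99.2 — regressing economic value-added on CSD and its square, with a U-shape indicated by a negative linear coefficient and a positive quadratic coefficient — can be sketched on synthetic data (all numbers and variable names below are illustrative, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(42)
csd = rng.uniform(0, 10, size=600)          # disclosure level (e.g. sentence count)
# Synthetic U-shaped performance: negative linear term, positive quadratic term
eva = 0.5 - 1.2 * csd + 0.15 * csd**2 + rng.normal(scale=0.5, size=600)

# Fit EVA = b0 + b1*CSD + b2*CSD^2 by ordinary least squares
X = np.column_stack([np.ones_like(csd), csd, csd**2])
b0, b1, b2 = np.linalg.lstsq(X, eva, rcond=None)[0]

# A U-shape shows up as b1 < 0 and b2 > 0; the curve bottoms out at -b1/(2*b2)
turning_point = -b1 / (2 * b2)
```

The sign pattern (b1 < 0, b2 > 0) matches the coefficients reported in Table 99.2, and the turning point marks the disclosure level beyond which additional CSD is associated with rising performance.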
Social and environmental disclosures have been widely discussed in recent years; however, the discussion has been largely limited to developed countries in Western Europe. Due to cultural and national differences between developed and developing countries, the results of previous studies on developed countries cannot be generalized to developing countries (Bebbington et al. 2007; Choi 1998; Tsang 1998). This study examines the relationship between CSD and firm performance. To measure CSD, this study uses content analysis of CSD in terms of the number of words and sentences, consistent with the methods employed by previous research. To measure firm performance, this study uses economic value-added to capture each firm's internal financial situation and market dynamics. The finding of a U-shaped relationship between CSD and
References
Armitage S, Marston C (2008) Corporate disclosure, cost of capital and reputation: evidence from
finance directors. British Account Rev 40:314–336
Barnett ML, Salomon RM (2006) Beyond dichotomy: the curvilinear relationship between social
responsibility and financial performance. Strateg Manage J 27:1101–1122
Bebbington J, Larrinaga C, Moneva JM (2007) Corporate social reporting and reputation risk
management. Account Auditing Account J 21(3):337–361
Blair A (2000) Richer and Greener, Prime Minister’s speech to the Confederation of British
Industry/Green Alliance conference on the environment
Brewer C, Chandra G, Hock CA (1999) Economic value added (EVA): its uses and limitations.
SAM Adv Manage J 64(2):4–12
Choi JS (1998) An evaluation of the voluntary corporate environmental disclosures a Korean
evidence. Soc Environ Account 18(1):2–7
Dhaliwal D, Li OZ, Tsang A, Yang YG (2009) Voluntary non-financial disclosure and the cost of
equity capital: the case of corporate social responsibility reporting. Chin Univ Hong Kong,
Hong Kong
Donaldson T, Preston LE (1995) The stakeholder theory of the corporation: concepts, evidence,
and implications. Acad Manage Rev 20(1):65–91
Freedman M, Wasley C (1990) The association between environmental performance and
environmental disclosure in annual reports and 10Ks. Advances in public interest accounting,
pp 183–193
Garay A, Gonzalez M (2010) Internet-based corporate disclosure and market value: evidence
from Latin America. Paper presented at the annual meeting of the BALAS annual conference,
ESADE, Barcelona, Spain
Godfrey C (2005) The relationship between corporate philanthropy and shareholder wealth: a risk
management perspective. Acad Manag Rev 30(4):777–798
99 The U-Shaped Relationship 943
Gray H, Bebbington KJ, Walters D (1993) Accounting for the environment: the greening of
accountancy part 2. Paul Chapman, London
Gray H, Kouhy R, Lavers S (1995) Corporate social and environmental reporting: a review of the
literature and a longitudinal study of UK disclosure. Account Auditing Account J 8(2):47–77
Haley CV (1991) Corporate contributions as managerial masques: reframing corporate
contributions as strategies to influence society. J Manage Stud 28(5):485–509
Haniffa M, Cooke TE (2005) The impact of culture and governance on corporate social reporting.
J Account Public Policy 24(5):391–430
Milne J, Patten DM (2002) Securing organizational legitimacy: an experimental decision case
examining the impact of environmental disclosures. Account Auditing Account J
15(3):372–405
Naser K, Al-Hussaini A, Al-Kwari D, Nuseibeh R (2006) Determinants of corporate social
disclosure in developing countries: the case of Qatar. Adv Int Account 19:1–23
Newson M, Deegan C (2002) Global expectations and their association with corporate social
disclosure practices in Australia, Singapore and South Korea. Int J Account 37:183–213
O'Byrne SF (1996) EVA and market value. J Appl Corp Finance 9(1):116–126
Ott JS, Shafritz JM, Jang YS (2010) Classical readings in organization theory. Wadsworth
Cengage Learn, Canada
Palliam R (2006) Further evidence on the information content of economic value added. Rev
Account Finance 5(3):204–215
Patten M (2002) The relationship between environmental performance and environmental
disclosure: a research note. Acc Organ Soc 27:763–773
Richardson J, Welker M (2001) Social disclosure, financial disclosure and the cost of equity
capital. Account Organ Soc 26:597–616
Shane B, Spicer BF (1983) Market response to environmental information produced outside the
firm. Account Rev 58(3):521–538
Solomon A, Lewis L (2002) Incentives and disincentives for corporate environmental disclosure.
Bus Strategy Environ 11:154–169
Tsang WK (1998) A longitudinal study of corporate social reporting in Singapore: the case of the
banking, food and beverages and hotel industries. Account Auditing Account J 11(5):624–635
Wang H, Choi J, Li J (2008) Too little or too much? Untangling the relationship between
corporate philanthropy and firm financial performance. Organ Sci 19(1):143–159
Chapter 100
Empirical Study on the Five-Dimensional
Influencing Factors of Entrepreneurial
Performance
Xin Lan
Abstract Entrepreneurship is a current hot issue, but entrepreneurial success rates are not high. Entrepreneurial performance relates to the survival or extinction of enterprises, so researching the factors that influence entrepreneurial performance is key to achieving success. In this study, we use theoretical analysis and empirical research to explore five dimensions of factors that affect entrepreneurial performance: the capital dimension, innovation dimension, team dimension, market dimension, and environmental dimension. Through regression analysis, we obtained the stepwise regression model coefficients and test values and established the regression equation. We then analyzed the influence of the five-dimensional factors on entrepreneurial performance. The study helps entrepreneurial activities overcome difficulties and contributes to entrepreneurial success.
Keywords Entrepreneurship · Entrepreneurial performance · Five-dimensional factors · Empirical research
100.1 Introduction
Entrepreneurship can not only cultivate national innovative capability, but also improve national productivity and employment rates and speed up the construction of a knowledge-based economy. It is a favorable way to alleviate the current difficulty of employment. In this context, many business support policies have been introduced in China, in particular to encourage and support college students' entrepreneurship. However, the participation and success rates of college student start-ups remain
X. Lan (&)
Business Institute, China West Normal University, Nanchong, China
e-mail: [email protected]
low. As a result, improving the performance of college start-ups should be a subject worthy of attention in entrepreneurial management research. The starting point of this study is to explore the key dimensions of factors that impact business performance; the ultimate goal is to improve business performance and the rate of business success.
So, what are the key factors currently affecting business performance, and how can businesses effectively get out of the woods? We searched a large body of literature and found that many studies remain at the level of macro education and management, putting forward frameworks and models; few articles analyze the reality of current entrepreneurial difficulties from the entrepreneur's perspective, use quantitative analysis to identify the dimensions that impact entrepreneurial performance, and apply relevant strategies to promote entrepreneurial success (Lan and Yang 2010).
The three elements of the Timmons Model tell us that business opportunity, entrepreneurial team, and entrepreneurial resources promote the development of entrepreneurship as they are continually matched and balanced with each other. These elements drive each other in different development stages of the company while their relations shift from imbalance to balance. The capital factor is the most important dimension among the resource elements of the Timmons Model; the difficulties faced in entrepreneurship are often problems of capital. Capital deficiency makes it very difficult to transform innovation into real productivity or to carry out business operations (Yang and Lan 2011b).
Some young people have very good entrepreneurial plans but no start-up money, so their plans cannot be put into practice; even if they do start entrepreneurial activities, the capital problem impacts operating profit. At present, governments at all levels are introducing various supporting policies, such as patent application grants, innovation funds for small and medium-sized enterprises, industry-specific subsidies, business subsidies for college graduates, and entrepreneurship competition prizes. Therefore, we proposed the following assumptions:
H1: Lack of start-up money is the first difficulty in developing entrepreneurial activities; the more start-up money, the better the entrepreneurship performance.
H2: In the operation of the venture company, good capital chain will promote
the company to develop in a healthy manner; the circulating fund has a significant
impact on entrepreneurship performance.
H3: Smooth financing channel will speed up the development of the venture
company, and promote the company to achieve better entrepreneurship
performance.
H4: Government’s support toward entrepreneurial activities through offering
innovation fund, business subsidies, etc. will help improve entrepreneurship per-
formance, thus government support has a positive impact on entrepreneurship
performance.
H5: Technology innovation can promote the research and development and
production of new products, which will further significantly influence entrepre-
neurship performance.
Cheng (2010) argued that the development difficulties of venture companies in our country were closely related to insufficient innovative motivation and unreasonable internal and external incentive systems. Zhou (2009) pointed out that the key to escaping the difficult transformation from technology imitation to independent innovation was to strengthen the research, development, and supply of generic technology. The abovementioned studies approached the question from the viewpoint of how to improve innovation, and all of them emphasized independent innovation. Nowadays, however, college graduates and young entrepreneurs face the difficulty of transforming technology into real productivity: although they have some technological innovations, they fail to put them into real operation, which leads to a low entrepreneurial success rate. According to the above analysis, we proposed several assumptions for testing:
H6: Successful transformation of innovation into real operation has a positive
impact on entrepreneurship performance.
A business model is a substantial factor, comprising the positioning and channels through which a company provides its products and services to customers, and the operating structure that enables the company to achieve its business goals. The first innovation of a company is the innovation of its business model, which is the foundation of development and profit. Therefore, we put forward the following assumption:
H7: The innovative business model has a significant impact on entrepreneurship
performance.
Factors such as the quality, experience, and expertise of the entrepreneur and the team members influence the success of entrepreneurship. According to our investigation and survey, entrepreneurial team members' experience in business management and their relevant management knowledge greatly impact entrepreneurship performance. Therefore, in this paper we proposed the following assumptions:
H8: Practical experience in business operation has a positive impact on
entrepreneurship performance.
H9: Expertise has a positive impact on entrepreneurship performance.
H10: Cooperation of team members has a positive impact on entrepreneurship
performance.
After retrieving, consulting, and studying the related literature, and in order to make the results of the study more general, we took various entrepreneurial projects into consideration in our questionnaire. The survey covered the Pioneer Park for Chinese college students (Chengdu) and the liaison student entrepreneurial bases of institutions including the Youth (College Student) Pioneer Park of Chengdu Hi-Tech Zone, the Technology Park of the University of Electronic Science and Technology of China, Sichuan Normal University Chengdu College, Chengdu University of Information Technology, Sichuan University Jincheng College, University of Electronic Science and Technology of China Chengdu College, Chengdu Vocational & Technical College, and Sichuan Top Vocational Institute of Information Technology College, as well as the 376 enterprises and project teams in the Chengdu Hi-Tech Zone Innovation Center. The targets of this investigation were business or project leaders engaged in entrepreneurial activities, all of whom were able to answer the questionnaire. We distributed 900 questionnaires in total, of which 756 were returned, a recovery rate of 84 %; 698 of these were valid, a validity rate of 92 %. The questionnaires were distributed reasonably across the companies, and the five industries selected, such as information technology, are representative ones in which college graduates are more willing to start businesses.
In this study, we performed descriptive statistics and reliability and validity tests on the collected sample data. The reliability analysis shows that the Cronbach's α values of the capital dimension, market dimension, innovation dimension, environment dimension, and team cooperation dimension are 0.931, 0.969, 0.943, 0.952, and 0.674, respectively. According to Churchill's suggestion, items are considered reliable when Cronbach's α is greater than 0.7.
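Cronbach's α itself is straightforward to compute from a respondents-by-items matrix; a minimal sketch (the data below are synthetic, not the survey sample):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) array:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)
```

When items measure the same construct consistently, the total-score variance dominates the summed item variances and α approaches 1; independent items push α toward 0, which is why the 0.7 threshold cited above serves as a reliability cut-off.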
In this study, to test the validity of the assumed five influencing factors of
entrepreneurial achievement, we conducted an exploratory factor analysis on the
16 questions in the questionnaire, using principal component extraction with
varimax rotation. We also ran Bartlett's test of sphericity and the KMO measure
in SPSS 17.0: the Bartlett test statistic is 1633.350, with a corresponding
probability close to 0. At a significance level α of 0.05, the probability p is
below α, so the null hypothesis of the Bartlett test of sphericity is rejected,
and the correlation matrix can be considered significantly different from the
unit matrix. Turning to the KMO value: since it is greater than 0.7, the data
pass the KMO criterion commonly used in factor analysis, as proposed by Kaiser.
We can therefore explore the key influencing factors of entrepreneurship
success with the factor analysis method.
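Both adequacy checks can be reproduced with NumPy alone. This is a sketch of the textbook formulas, not the SPSS implementation, and the data in the usage test are synthetic:

```python
import numpy as np

def bartlett_sphericity(data):
    """Bartlett's test: chi2 = -(n - 1 - (2p + 5)/6) * ln|R|, df = p(p-1)/2,
    where R is the correlation matrix of the (n x p) data."""
    data = np.asarray(data, dtype=float)
    n, p = data.shape
    R = np.corrcoef(data, rowvar=False)
    chi2 = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
    return chi2, p * (p - 1) // 2

def kmo(data):
    """Kaiser-Meyer-Olkin measure: sum of squared correlations relative to
    squared correlations plus squared partial correlations."""
    R = np.corrcoef(np.asarray(data, dtype=float), rowvar=False)
    inv = np.linalg.inv(R)
    # partial correlations obtained from the inverse correlation matrix
    A = -inv / np.sqrt(np.outer(np.diag(inv), np.diag(inv)))
    np.fill_diagonal(A, 0.0)
    np.fill_diagonal(R, 0.0)
    return (R ** 2).sum() / ((R ** 2).sum() + (A ** 2).sum())
```

A large chi-square with a near-zero p-value rejects sphericity, and a KMO above 0.7 indicates sampling adequacy, which is the decision rule applied in the study.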
Questions in the questionnaire are systematically subordinate to several factors
at the same time. The factor analysis indicates that the capital dimension
consists of four items, with factor loadings between 0.735 and 0.915; the
innovation dimension consists of three items, with loadings between 0.816 and
0.917; the cooperation team member dimension consists of three items, with
loadings between 0.729 and 0.877; the market dimension consists of four items,
with loadings between 0.661 and 0.891; and the environment dimension consists of
two items, with loadings between 0.722 and 0.746. There is no cross loading, and
the loading coefficients of each factor on its corresponding dimension are all
large, which shows that the factors have good convergence, that the questions
are reasonable, and that the dimensions established from the assumptions are
supported by the study.
952 X. Lan
The study also tested the above assumptions using a regression analysis model,
examining the impact of the capital, innovation, cooperation team member, market
and environment dimensions on entrepreneurship performance; the results form
Models 1 through 5 in Table 100.1.
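As a minimal stand-in for the SPSS regressions behind Models 1 through 5, ordinary least squares with an intercept can be sketched as follows (the function and variable names here are illustrative, not the study's):

```python
import numpy as np

def ols(y, X):
    """OLS regression of y on X with an intercept; returns (coefficients, R^2)."""
    y = np.asarray(y, dtype=float)
    X = np.column_stack([np.ones(len(y)), np.asarray(X, dtype=float)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    ss_tot = ((y - y.mean()) ** 2).sum()
    return beta, 1 - (resid ** 2).sum() / ss_tot
```

Each of Models 1 through 5 corresponds to one such regression of entrepreneurship performance on the items of a single dimension.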
In Model 1, the accessibility of initial capital (b = 0.489, P < 0.01),
financing channel (b = 0.943, P < 0.01) and circulating fund (b = 0.840,
P < 0.05) have a significant impact on entrepreneurship performance, which
confirms assumptions H1–H3. However, assumption H4, whether government funding
was received, shows no significant correlation with entrepreneurship performance
and is not verified, which indicates that direct government funding did not help
improve entrepreneurship performance.
In Model 2, conversion of entrepreneurship achievement (b = 6.099, P < 0.01)
affects entrepreneurship performance, confirming assumption H6; having an
innovative business model (b = -1.518, P < 0.01) also affects the profitability
of the companies, so assumption H7 is verified. But assumption H5, technology
innovation and product research and development, shows no significant
correlation with entrepreneurship performance, which suggests that these
companies face heavy pressure in technology research and development.
In Model 3, having experience in business operation (b = 3.058, P < 0.01) has a
significant impact on entrepreneurship performance, so assumption H8 is
verified. Surprisingly, however, the relation between expertise and
entrepreneurship performance is relatively weak, and cooperation among team
members has no significant correlation with entrepreneurship performance; that
is, assumptions H9 and H10 are not verified.
In Model 4, the clarity of target market positioning (b = -0.096, P < 0.01)
promotes healthy development of venture companies; without a clear idea of the
target market, they may fall into difficulties, so assumption H11 is verified.
The degree to which products/services meet market demand has a significant
impact on entrepreneurship performance, so assumption H12 is verified.
Competitive pricing and efficient sales promotion strategies also have a
significant impact on entrepreneurship performance; therefore, assumptions H13
and H14 are verified (Yang and Lan 2011a).
In Model 5, efficient support from the government and social organizations
(b = -0.341, P < 0.01) significantly improves the entrepreneurship performance
of the company, so assumption H15 is verified. Efficient support from colleges
and families (b = -1.187, P < 0.01) also has a significant impact on
entrepreneurship performance, and assumption H16 is verified.
Table 100.1 Verification results on the five-dimension supporting system model
of entrepreneurship performance

Variables (coefficients across Model 1, Model 2, Model 3, Model 4, Model 5)
H1: Enough capital has been raised: 0.489***, 0.989***, 1.405***
H2: Smooth financing channel: 0.943***, 0.701***, 0.237***
H3: Sufficient circulating fund: 0.840**, 4.475***, 5.543***
H4: Received support from the government funding: -0.420, -3.979***, -4.873
H5: Technology innovation and product research and development: -0.183, -0.653
H6: Conversion of entrepreneurship achievement: 6.099***, 4.476***
H7: Innovative business model: -1.518***, -1.792***
H8: Have experience in business operation: 3.058***
H9: Have expertise: 0.444
H10: Cooperation of team members: 0.102
H11: Degree of clearance of target market position: 0.096***
H12: Degree of the products meeting the market demand: 1.086***
H13: Competitive price: 0.026***
H14: Degree of using sales promotion strategies: 0.014**
H15: Degree of support from the government and social organizations: 1.187***
H16: Degree of support from colleges and families: 0.341***
F: 81.196, 142.009, 119.599, 63379.427, 246.445
R²: 0.391, 0.661, 0.702, 0.998, 0.489
Note: * P < 0.1, ** P < 0.05, *** P < 0.01 (two-tailed)
100.5 Conclusion
This study takes the perspective of entrepreneurs, starting from the influencing
factors of entrepreneurship performance and the real difficulties that appear in
entrepreneurial activities, combined with results from an extensive research
literature. Through building a five-dimension supporting system model,
exploratory factor analysis and regression analysis, it conducted an empirical
study of the structure of the five dimensions that influence entrepreneurship
performance, concluding that the influencing factors consist of the capital,
innovation, cooperation team member, market and environment dimensions. Each of
these five dimensions contains factors that were successfully verified; we named
all the verified factors in each dimension and obtained a supporting system
model of entrepreneurship performance and its key factors, as shown in
Fig. 100.1.
In a word, the five-dimension supporting system model of entrepreneurship
performance from the perspective of entrepreneurs is a mirror: by using it to
analyze the influencing factors of entrepreneurship performance in a certain
period and place, we can infer the structure of the government support system in
that period and place. The key factors, which cost much effort, play an
instructive role in enhancing the rate of entrepreneurial success.
References
Geng Lin
101.1 Introduction
G. Lin (&)
Department of Mathematics, Minjiang University, Fuzhou, China
e-mail: [email protected]
101.2 Methodology
Two neighborhood structures have been considered for the knapsack problem: the
1-flip and 1-flip-exchange neighborhoods. Two solutions are 1-flip neighbors if
they differ in exactly one assignment.
Definition 1 For any x ∈ S, the 1-flip neighborhood N_f(x) of x is defined by
N_f(x) = {y ∈ S : ‖x − y‖₁ ≤ 1}. Each solution in N_f(x) can be reached from x
by adding or removing at most one item. Hence |N_f(x)| = n + 1.
Two solutions are 1-flip-exchange neighbors if one can be obtained from the
other by exchanging two items; this neighborhood is an extension of the 1-flip
neighborhood.
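Both neighborhoods can be enumerated directly. This sketch represents a solution as a 0-1 list; feasibility with respect to the capacity c is checked separately:

```python
def one_flip_neighborhood(x):
    """N_f(x): x itself plus every solution obtained by flipping one bit,
    so |N_f(x)| = n + 1."""
    nbrs = [list(x)]
    for i in range(len(x)):
        y = list(x)
        y[i] = 1 - y[i]          # add or remove item i
        nbrs.append(y)
    return nbrs

def one_flip_exchange_neighborhood(x):
    """Exchange neighbors: remove one selected item and add one unselected item."""
    ins = [i for i, xi in enumerate(x) if xi == 1]
    outs = [j for j, xj in enumerate(x) if xj == 0]
    nbrs = []
    for i in ins:
        for j in outs:
            y = list(x)
            y[i], y[j] = 0, 1
            nbrs.append(y)
    return nbrs
```

For n = 3 the 1-flip neighborhood of any solution has exactly n + 1 = 4 members, matching Definition 1.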
Many algorithms for the knapsack problem use these two neighborhood structures.
They start from an initial solution and iteratively move to the best neighboring
solution, until no neighbor is better than the current solution. Such local
search methods are greedy and easily become trapped in local optima. We
therefore present an iterative local search method for the knapsack problem. The
main idea of the algorithm is to
flip one bit at a time in an attempt to maximize the profit sum without letting
the weight sum exceed c. Define the gain(i, x) of item i as the amount by which
the objective value of problem (NKP) would increase if the i-th bit were
flipped:
gain(i, x) = g(x_1, ..., x_{i-1}, 1 - x_i, x_{i+1}, ..., x_n) - g(x).
Note that an item's gain may be negative. The local search algorithm computes
gain(i, x) for each item i. It starts with a random solution in the solution
space S and changes the solution through a sequence of 1-flip operations, which
are organized into passes. At the beginning of a pass, each item is free,
meaning that it may be flipped; once a bit is flipped, it becomes unfree, i.e.,
it may not be flipped again during that pass. The algorithm iteratively selects
a free item to flip. When an item is flipped, it becomes unfree and the gains of
the remaining free items are updated. After each flip operation, the algorithm
records the objective value of (NKP) achieved at that point. When no free items
remain, the pass stops. The algorithm then checks the recorded objective values,
selects the point at which the maximum objective value was achieved, and flips
back all items that were flipped after that point. Another pass is then executed
using this solution as its starting solution. The local search algorithm
terminates when a pass fails to find a solution with a better objective value of
(NKP).
When the local search algorithm becomes trapped in a local optimum, we restart
it from a new random solution.
Let V be the set of items that are free to flip in a pass. The multistart local
search algorithm can be stated as follows:
Step 0. Choose a positive number max_iter as the tolerance parameter for
terminating the algorithm. Set N = 0, x_global = 0.
Step 1. Generate a solution x = {x_1, ..., x_n} randomly.
Step 2. Set V = {1, ..., n}, t = 1, x^0 = x. Calculate gain(i, x) for i ∈ V.
Step 3. Let gain(j, x) = max{gain(i, x) : i ∈ V}. Set
x^t = (x_1, ..., 1 - x_j, ..., x_n), V = V \ {j}, x = x^t, t = t + 1.
Step 4. If V ≠ ∅, calculate gain(i, x) for i ∈ V and go to Step 3. Otherwise go
to Step 5.
Step 5. Let x_max be the recorded solution with the largest objective value
among x^1, ..., x^n. If g(x_max) > g(x^0), set x = x_max and go to Step 2.
Otherwise, if g(x_max) > g(x_global), let x_global = x_max. Go to Step 6.
Step 6. If N < max_iter, let N = N + 1 and go to Step 1. Otherwise output
x_global.
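The steps above can be sketched in Python. The objective g of (NKP) is defined earlier in the chapter and is not reproduced in this excerpt, so the sketch assumes the common penalized form g(x) = Σ p_i x_i for feasible x and 0 otherwise:

```python
import random

def multistart_local_search(p, w, c, max_iter=30, seed=0):
    """Sketch of the multistart 1-flip local search (Steps 0-6).
    Assumes g(x) = profit sum for feasible x and 0 otherwise; the chapter's
    actual objective g for (NKP) is defined outside this excerpt."""
    rng = random.Random(seed)
    n = len(p)

    def g(x):
        weight = sum(w[i] for i in range(n) if x[i])
        return sum(p[i] for i in range(n) if x[i]) if weight <= c else 0

    def gain(i, x):
        y = list(x)
        y[i] = 1 - y[i]                      # flip the i-th bit
        return g(y) - g(x)

    x_global, g_global = [0] * n, 0          # Step 0
    for _ in range(max_iter):                # Steps 1 and 6: restarts
        x = [rng.randint(0, 1) for _ in range(n)]
        while True:                          # Step 2: one pass
            free = set(range(n))
            history = [(g(x), list(x))]      # record x^0
            while free:                      # Steps 3-4: flip best free item
                j = max(free, key=lambda i: gain(i, x))
                x[j] = 1 - x[j]
                free.discard(j)
                history.append((g(x), list(x)))
            g0 = history[0][0]               # Step 5: roll back to best point
            g_max, x_max = max(history, key=lambda t: t[0])
            x = list(x_max)
            if g_max <= g0:                  # pass brought no improvement
                break
        if g(x) > g_global:
            g_global, x_global = g(x), list(x)
    return x_global, g_global
```

Run on a small instance, the function returns a feasible item selection together with its profit sum; the benchmark problems below can be plugged in as (p, w, c) directly.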
In this section, we test the proposed multistart local search algorithm. The
experiments were performed on a personal computer with a 2.11 GHz processor and
1.0 GB of RAM. We employ the following three benchmark instances, which were
also used to test the genetic algorithms for the knapsack problem in Shan and Wu
(2010).
101 A Multistart Local Search Heuristic for Knapsack Problem 961
Problem 1. (w_1, ..., w_20) = (92, 4, 43, 83, 84, 68, 92, 82, 6, 44, 32, 18, 56,
83, 25, 96, 70, 48, 14, 58), (p_1, ..., p_20) = (44, 46, 90, 72, 91, 40, 75, 35,
8, 54, 78, 40, 77, 15, 61, 17, 75, 29, 75, 63), c = 878.
Problem 2. (w_1, ..., w_50) = (220, 208, 198, 192, 180, 180, 165, 162, 160, 158,
155, 130, 125, 122, 120, 118, 115, 110, 105, 101, 100, 100, 98, 96, 95, 90, 88,
82, 80, 77, 75, 73, 70, 69, 66, 65, 63, 60, 58, 56, 50, 30, 20, 15, 10, 8, 5, 3,
1, 1), (p_1, ..., p_50) = (80, 82, 85, 70, 72, 70, 66, 50, 55, 25, 50, 55, 40,
48, 50, 32, 22, 60, 30, 32, 40, 38, 35, 32, 25, 28, 30, 22, 50, 30, 45, 30, 60,
50, 20, 65, 20, 25, 30, 10, 20, 25, 15, 10, 10, 10, 4, 4, 2, 1), c = 1000.
Problem 3. (w_1, ..., w_100) = (54, 183, 106, 82, 30, 58, 71, 166, 117, 190, 90,
191, 205, 128, 110, 89, 63, 6, 140, 86, 30, 91, 156, 31, 70, 199, 142, 98, 178,
16, 140, 31, 24, 197, 101, 73, 169, 73, 92, 159, 71, 102, 144, 151, 27, 131,
209, 164, 177, 177, 129, 146, 17, 53, 164, 146, 43, 170, 180, 171, 130, 183, 5,
113, 207, 57, 13, 163, 20, 63, 12, 24, 9, 42, 6, 109, 170, 108, 46, 69, 43, 175,
81, 5, 34, 146, 148, 114, 160, 174, 156, 82, 47, 126, 102, 83, 58, 34, 21, 14),
(p_1, ..., p_100) = (597, 596, 593, 586, 581, 568, 567, 560, 549, 548, 547, 529,
529, 527, 520, 491, 482, 478, 475, 475, 466, 462, 459, 458, 454, 451, 449, 443,
442, 421, 410, 409, 395, 394, 390, 377, 375, 366, 361, 347, 334, 322, 315, 313,
311, 309, 296, 295, 294, 289, 285, 279, 277, 276, 272, 248, 246, 245, 238, 237,
232, 231, 230, 225, 192, 184, 183, 176, 174, 171, 169, 165, 165, 154, 153, 150,
149, 147, 143, 140, 138, 134, 132, 127, 124, 123, 114, 111, 104, 89, 74, 63, 62,
58, 55, 48, 27, 22, 12, 6), c = 6718.
The proposed algorithm uses max_iter as a termination parameter; in the
experiments we set max_iter = 30. We ran the proposed algorithm 10 times on each
of the above three benchmarks. The test results are given in Table 101.1. For
comparison with the genetic algorithm proposed in Shan and Wu (2010), the
results of the greedy algorithm, the basic genetic algorithm and the hybrid
genetic algorithm (Shan and Wu 2010) are also listed in Table 101.1, quoted
directly from Shan and Wu (2010). Table 101.1 gives the best solutions found by
each algorithm. P and W denote the sum of the profits and the sum of the
weights, respectively; g denotes the number of generations within which an
algorithm found its best solution.
The following observations can be made from the experimental results in
Table 101.1.
(1) The proposed algorithm found better solutions than the greedy algorithm and
the basic genetic algorithm.
(2) The proposed algorithm and the hybrid genetic algorithm found the same best
objective values.
(3) Our proposed algorithm used only 30 initial solutions, which shows that it
can reduce the chance of the local search process becoming trapped in local
optima.
101.5 Conclusion
Acknowledgments This research is supported by the Science and Technology Project of the
Education Bureau of Fujian, China, under Grant JA11201.
References
Gorman MF, Ahire S (2006) A major appliance manufacturer rethinks its inventory policies for
service vehicles. Interfaces 36:407–419
Hanafi S, Freville A (1998) An efficient tabu search approach for the 0–1 multidimensional
knapsack problem. Eur J Oper Res 106:663–679
Kellerer H, Pferschy U, Pisinger D (2004) Knapsack problems. Springer, Berlin
Li KS, Jia YZ, Zhang WS (2009) Genetic algorithm with schema replaced for solving 0–1
knapsack problem. Appl Res Comput 26:470–471
Liao CX, Li XS, Zhang P, Zhang Y (2011) Improved ant colony algorithm based on
normal distribution for knapsack problem. J Syst Simul 23:1156–1160
Martello S, Pisinger D, Toth P (2000) New trends in exact algorithms for the 0–1
knapsack problem. Eur J Oper Res 123:325–332
Papadimitriou CH (1981) On the complexity of integer programming. J ACM
28:765–768
Pisinger D (1995) An expanding-core algorithm for the exact 0–1 knapsack problem. Eur J Oper
Res 87:175–187
Shan XJ, Wu SP (2010) Solving 0–1 knapsack problems with genetic algorithm based on greedy
strategy. Comput Appl Softw 27:238–239
Tian JL, Chao XP (2011) Novel chaos genetic algorithm for solving 0–1 knapsack problem. Appl
Res Comput 28:2838–2839
Zhao XC, Han Y, Ai WB (2011) Improved genetic algorithm for knapsack problem. Comput Eng
Appl 47:34–36
Chapter 102
Heterogeneity of Institutional Investors
and Investment Effects: Empirical
Evidence from Chinese Securities Market
Ying Jin
Abstract With social security funds and securities investment funds as research
objects, this paper makes an empirical study on cross-sectional data for the
period 2008–2010 of listed companies whose stocks are heavily held by
institutional investors. Using property rights theory and agency theory, it
verifies the following hypothesis: securities investment funds and social
security funds face different political and social pressures and have different
payment mechanisms for their managers; thus the fund owners may have conflicting
or convergent interests with companies' administrations, which may affect the
investment value of the companies in opposite directions. This paper contributes
by demonstrating the influence of the heterogeneity of Chinese institutional
investors on companies' investment effects, which provides new evidence for
judging, in the era of diversified institutional investors, the different roles
that different institutional investors play in corporate governance and
performance, and offers supporting evidence for China in formulating a
development strategy for institutional investors.
102.1 Introduction
Y. Jin (&)
School of Business, Jinling Institute of Technology,
Nanjing, Jiangsu, China
e-mail: [email protected]
At present, a lot of domestic and foreign research has been made on institutional
investors’ participation in corporate governance. On whether institutional investors
are involved in corporate governance, there are different types of opinions among
foreign scholars. Scholars who believe in ‘‘shareholder activism’’ think that
institutional investors have favorable conditions for supervision, for example, they
are professional investors and hold a large amount of stocks, and therefore they
obtain information superiority. In addition, heavily held stocks make them sus-
ceptible to liquidity losses when they withdraw from the market, and they have to
bear a strict fiduciary responsibility. This means investors can benefit from
supervision. The above factors indicate that ‘‘free rider’’ problem can be avoided
(David and Kochhar 1996; Grossman and Hart 1980; Smith 1996). Scholars who
believe in ‘‘shareholder passivism’’ presume that because of reasons like legal
restrictions, difficulty of supervision, high cost and liquidity, institutional investors
are not proactively involved in supervision of the companies; their stock holding
has no significant impact on the value of the companies (Agrawal and Knoeber
1996; Bhide 1994; Coffee 1991). Those who hold the eclectic opinion believe that
due to different funding sources, amount of shareholdings, and whether the
institutional investors have conflict of interest with the companies, investors’ roles
in corporate governance differ (Cornett et al. 2007).
Previously, Chinese researchers believed that, owing to their own conditions and
external environmental constraints, institutional investors (mainly securities
investment funds) played very limited roles in corporate governance. For
instance, Huang (2006) considered that Chinese institutional investors were
highly dependent on the government, which intervened in their operations through
resource control or political control; since a huge disparity existed between
government targets and business goals, institutional investors were not
qualified to supervise the businesses. Over time, a growing number of scholars
have come to believe that Chinese institutional investors do exercise their
oversight capacity in governance. For example, Wang et al. (2008) presumed that
as transformation of our government’s functions and reform of split share structure
went on, the government was being phased out as a manager and a supervisor. As
the types of institutional investors multiplied and their share proportions
increased, it became possible for them to become qualified company oversight
bodies.
The preliminary findings in China mainly come from studies that focused on
securities investment funds or regarded all institutional investors as
homogeneous funds. These studies simply concluded that institutional
shareholders had either no significant impact or a positive effect on firm
value, ignoring that differences in institutional managers' business objectives
might affect firm value adversely.
Many foreign scholars believe that, unlike banks and insurance companies, which
have business ties with the invested companies, public pension funds are
relatively independent institutional investors (Cornett et al. 2007); as
long-term funds, they are suitable overseers of enterprises. Woidtke (2002)
divided pension funds into two types, public and private pension funds. By
comparing the effects on companies' industry-adjusted Tobin's Q when these two
types of funds hold their stocks, Woidtke found that the shareholdings of public
pension funds are negatively related to industry-adjusted Tobin's Q, while those
of private pension funds are positively related to it. She suggested that these
remarkable differences resulted from the fact that public pension funds face
greater political pressure than private pension funds and that their incentives
are decoupled from performance. Domestic scholars, such as Zhang and Sun (2006),
believed that the social security funds have long-term goals and are suitable
institutional investors for participating in corporate governance and
stabilizing the market; however, they ignored the inconsistency between the
objectives of social security fund managers and those of the companies. Wang
(2008) proposed that one particular agency problem in the management of public
pension reserve funds is the intervention of political factors in the funds'
operations. Based on the above analysis, we propose Hypothesis 1.
H1: The shareholding proportions of social security funds are negatively cor-
related with investment value of companies.
Minority shareholders have the right to transfer fund shares, provided either by
the redemption mechanism (for open-end funds) or by transfer in capital markets
(for closed-end funds), and this right poses strong constraints on fund
managers. In addition, an essential part of a fund company's income is the
management fee charged in accordance with the size of the trust fund. The
minority shareholders' right of transfer gives fund managers incentives to
monitor the management of the companies; this maximizes the interests of
minority shareholders and also makes the goals of fund managers less susceptible
to administrative intervention. From the perspective of fund incentives, at the
beginning of each year, fund managers and fund companies sign a performance
contract, in which both parties agree on certain performance targets. The targets
are usually about how high the annual cumulative rate of return of the fund
administered by a fund manager must rank among the same type of funds. The
ranking is directly linked with the performance bonus that fund manager can
obtain. This performance-sensitive payment system urges fund managers to strive
to safeguard the interests of minority shareholders, and strengthen supervision as
the shareholdings of funds expand. Such supervision can ease the company’s
problem of agency, reduce agency costs, and increase the value of the companies
as well as that of the funds. Moreover, Chinese fund companies have strict trust
and agency relationship with minority shareholders. This means fund managers are
under dual supervision of the trustees and the general assembly of fund holders,
and they are responsible, on behalf of the minority shareholders, to oversee the
companies and to protect and increase the interests of minority shareholders.
Therefore, fund managers and the firms have the same goal: maximization of
investment value. Based on the above analysis, we propose Hypothesis 2.
H2: The shareholding proportions of securities investment funds are positively
correlated with the investment value.
Because securities investment funds account for a majority of institutional
investors, this paper proposes Hypothesis 3.
H3: The shareholding proportions of institutional investors as a whole
(including securities investment funds, social security funds and insurance com-
panies) are positively correlated with the investment value.
We chose the period 2008–2010 as the sample interval. Since value indicators lag
by one year, we verified how the shareholding proportions of securities
investment funds, social security funds and institutions as a whole in
2008–2009 affected corporate investment value in 2009–2010. We selected 817
samples from 2008 and 1178 samples from 2009. The data used in this paper come
from the CSMAR database.
As indicators of investment value, in addition to earnings per share and return
on net assets, which represent companies' accounting performance, we also chose
Tobin's Q, which is commonly used to study the relationship between corporate
governance and value.
Tobin's Q, a market indicator, equals the ratio of the company's market value to
the replacement value of its assets, and reflects the company's future
development potential. Moreover, Tobin's Q can reflect not only public
shareholder activism but also the value effects of nonpublic shareholder
activism, for example private negotiations (Woidtke 2002; Sun and Huang 1999).
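As a simple illustration of the ratio just defined (the function and argument names are ours, and debt is taken at book value, as is common in empirical work):

```python
def tobins_q(equity_market_value, debt_value, asset_replacement_value):
    """Tobin's Q = market value of the company / replacement value of assets."""
    return (equity_market_value + debt_value) / asset_replacement_value
```

A Q above 1 suggests the market values the firm above the cost of replacing its assets, i.e., higher future development potential.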
To fully reflect the investment effect, capital expenditure, which is the com-
pany’s most important investment decision, was also considered. Many scholars
believe that capital expenditure is likely to become an important tool for the
controlling shareholders or administrators of the company to secure personal
interests and damage the interests of minority shareholders (Hu et al. 2006).
Under the institutional context in which companies are controlled by their
largest shareholders, investigating the impact of institutional investors'
active shareholder behavior, an emerging governance mechanism, on investor
protection from the perspective of capital expenditure will help us understand
the effect of institutional investors' supervision and provide supporting
evidence for Chinese authorities' decisions to vigorously support institutional
investors.
We selected capital expenditure as the proxy indicator of investor protection,
and used "cash paid for building fixed assets, intangible assets and other
long-term assets" from the cash flow statement as the proxy variable for total
capital expenditure (Zhang 2010). Referring to Hua and Liu (2009), we used the
following control variables. First, GROW represents the company's growth:
capital expenditures differ with growth, since more rapidly developing companies
have more potential investment opportunities and thus spend more capital;
operating revenue growth rate is frequently used as an indicator of growth.
Second, CASH represents net cash flow generated from operations, an important
factor affecting the company's capital expenditure level.
As indicators of institutional shareholding (INS), this paper used the
shareholding proportions of securities investment funds, of social security
funds, and of institutional investors as a whole. As control variables, we used
ownership structure variables to represent the internal mechanism of corporate
governance (Bai and Liu 2005): the shareholding proportion of the largest
shareholder (TOP1) and that of the second to tenth largest shareholders
(TOP2–10). TOP1 reflects the corporate holding structure with Chinese
characteristics; TOP2–10 reflects the role of the second to tenth largest
shareholders in balancing the internal control of the largest shareholder.
Company size and financial leverage (asset-liability ratio) were used as further
control variables affecting corporate investment value.
We took into consideration that institutional investors might expand their
shareholding proportions at the same time as investment value increases. That is
to say, institutional investors may invest in a company because of recent growth
in its investment value, rather than holding company stocks in order to increase
that value; for example, institutional investors may increase their
shareholdings of a company after finding that its performance is good, and only
then supervise its administration. At this point, institutional investors'
social security funds have different goals from the listed companies, and they
exercise a negative impact on the companies' investment value. The positive
correlation between the shareholding proportions of securities investment funds
and of institutional investors as a whole and TBQ, EPS, ROE and CAP is
significant, which validates H2 and H3. As their shareholding proportions
expand, securities investment funds and institutional investors as a whole can
overcome the "free rider" problem of minority shareholders; they are motivated
and capable of overseeing the company's administration and can play an important
role in promoting the company's investment value. In addition, we can conclude
the following from the empirical results: (1) the ownership structure variables
have no significant impact on investment value, indicating that the largest
shareholders used their control advantages to expropriate company assets and
undermine the interests of outside investors; other large shareholders, because
they have different targets, fail to form an effective check on the largest
shareholders. This also demonstrates that institutional investors as a whole can
inhibit large shareholders from infringing the interests of minority
shareholders, protect those interests, and mitigate the agency problem; (2) the
negative correlation between companies' investment value and both company size
and asset-liability ratio is significant. That investment value is negatively
correlated with company size conforms to the fact that the investment value of
larger companies is prone to be underestimated while the
The results of the empirical tests show that although the social security funds
are relatively long-term funds and have the conditions for supervising the
administration of listed companies, under political and social pressure they
have operating objectives different from those of listed companies and pose a
negative impact on their market value. The securities investment funds'
incentives are highly performance-related, making them less vulnerable to
political and social pressure; an increase in the funds' shareholdings urges
them to supervise listed companies more closely, thereby increasing investment
value. Moreover, it is verified that the overall shareholdings of institutional
investors have a positive impact on the investment value of listed companies.
This paper demonstrates that institutional investors are heterogeneous: owing to
differences in incentives and conflicts of interest, different institutions have
different impacts on the corporate governance and value of listed companies,
which provides new evidence for judging the roles of different institutions in
corporate governance in the era of diversified Chinese institutional investors.
Given the ineffective supervision by Boards of Directors and the role of
institutional investors as a whole in promoting company value, supervision by
institutional investors has become a reliable mechanism for overseeing listed
companies, and Chinese authorities should continue to support them vigorously.
But given that social security funds have a negative influence on the investment
value of listed companies, the Chinese government, when supporting diversified
institutional investors, should reduce the political and social pressure on them
and set up payment systems closely linked to performance, in order to enable the
funds to be independent market participants and create a harmonious governance
structure.
Acknowledgments This paper presents preliminary results of the philosophy and
social science project (2010SJD630048) of the Education Department of Jiangsu
Province.
102 Heterogeneity of Institutional Investors and Investment Effects 971
Chapter 103
Integer Linear Programming Model
and Greedy Algorithm for Camping
Along the Big Long River Problem
Abstract In this paper, we investigate the problem of camping along the Big
Long River: how can the X trips in a rafting season of the Big Long River be
scheduled so that the total number of encounters between boats is minimized? By
introducing proper variables, the problem is formulated as an integer linear
programming model. For small instances, this integer linear program can be solved
with the Lingo software; for large instances, we design a greedy algorithm to
arrange the schedule of the given X boats. Finally, we run simulations of the
above model and algorithm and obtain the optimal solution.
Keywords Camping along the river · Integer linear programming model · Greedy
algorithm · Simulation · Optimal solution
103.1 Introduction
Visitors to the Big Long River (225 miles) can enjoy scenic views and exciting
white water rapids. The river is inaccessible to hikers, so the only way to enjoy it is
to take a river trip that requires several days of camping. River trips all start at First
Launch and exit the river at Final Exit, 225 miles downstream. Passengers take
either oar-powered rubber rafts, which travel on average 4 mph, or motorized
boats, which travel on average 8 mph. The trips range from 6 to 18 nights of
camping along the river. Currently, X trips travel down the Big Long River each
year during a six month period (the rest of the year is too cold for river trips).
There are Y camp sites on the Big Long River, distributed fairly uniformly
Z. Li (&)
School of Information, Beijing Wuzi University, Beijing, China
e-mail: [email protected]
X. Huang
Department of Postgraduate, Beijing Wuzi University, Beijing, China
e-mail: [email protected]
throughout the river corridor. In order to make sure the passengers enjoy a
wilderness experience, and also for the sake of their safety (http://wenku.baidu.com/
view/19fab121192e45361066f5e4.html), we should try to avoid encounters between two
groups of boats on the river. Besides, due to capacity constraints, no two sets of
campers can occupy the same site at the same time.
Every year, before the rafting season, the park managers must arrange the
schedule of the X trips that will raft along the Big Long River during the
season. The key problem is how to schedule these X trips so that the total number
of encounters among all boats on the river is minimized. In this paper, we solve
this problem.
This paper is organized as follows: in Sect. 103.2, we make some assumptions (Fu
2008; Xiang and Xu 2011), introduce several variables (Gan et al. 2005), and
formulate the problem as an integer linear model. We design a greedy algorithm in
Sect. 103.3, and Sect. 103.4 presents the simulation results. The conclusion is
given in Sect. 103.5.
103.2.1.1 Assumptions
• Once people choose one type of propulsion (oar-powered rubber rafts or
motorized boats) at the start, they cannot change it during the trip;
• The duration of each trip ranges from 6 to 18 nights on the river;
• There are X trips, each with a given duration;
• There are Y camps distributed fairly uniformly throughout the river corridor;
• Each boat has enough fuel and power that no breakdown occurs during the whole
river trip;
• Each boat, controlled by specialized staff, runs exactly on schedule;
• There are 180 days in the Big Long River's rafting season;
• The river is open for trips 8 h per day during daytime;
• Each rafting boat must stay at one camping site at night.
103.2.1.2 Variables
$$x_{ik} = \begin{cases} 1, & \text{if boat } i \text{ occupies camping site } k \\ 0, & \text{otherwise} \end{cases}$$

$$c^{k}_{ij} = \begin{cases} 1, & \text{if boats } i \text{ and } j \text{ meet on the river between camping sites } k \text{ and } k+1 \\ 0, & \text{otherwise} \end{cases}$$

$$c^{k1}_{ij} = \begin{cases} 0, & \text{if } d_{ik} > d_{jk} \text{ and } r_{i(k+1)} > r_{j(k+1)} \\ 1, & \text{otherwise} \end{cases}$$

$$c^{k2}_{ij} = \begin{cases} 0, & \text{if } d_{ik} < d_{jk} \text{ and } r_{i(k+1)} < r_{j(k+1)} \\ 1, & \text{otherwise} \end{cases}$$
T_i: the total trip duration of boat i (measured in nights on the river);
P_i: the day on which boat i starts off from the First Launch (P_i is an integer);
r_ik: the time at which boat i arrives at camping site k;
d_ik: the time at which boat i leaves camping site k;
v_{i,min}: the minimal speed of boat i;
v_{i,max}: the maximum speed of boat i.
The problem of camping along the Big Long River can be formulated as an
Integer Linear Programming model (Liu et al. 2009; Wang 2010; Li and Wang
2009), from which the schedule of all X given boats (the total number of
available trip boats) can be obtained such that the total number of encounters
among all boats on the river is minimized.
First, we define the open time for river trips. According to the given information,
river trips are allowed only during daytime, from 08:00 to 16:00; at all other
times, passengers must stay at a camping site.
$$\min\; z = \sum_{i=1}^{X}\sum_{j=1}^{X}\sum_{k=1}^{Y} c^{k}_{ij} \quad (103.1)$$

s.t.

$$0 \le r_{ik} \le d_{ik}, \quad i = 1, 2, \ldots, X;\; k = 1, 2, \ldots, Y \quad (103.2)$$

$$\frac{w}{v_{i\,\max}} \le r_{ik} - d_{i(k-1)} \le \frac{w}{v_{i\,\min}}, \quad w = \frac{225}{Y+1} \quad (103.3)$$

$$24P_i + 8 \le d_{i0} \le 24P_i + 16 \quad (103.4)$$

$$24\Big(P_i + \sum_{s=1}^{k} x_{is}\Big) + 8 \le d_{ik} \le 24\Big(P_i + \sum_{s=1}^{k} x_{is}\Big) + 16 \quad (103.5)$$

$$24\Big(P_i + \sum_{s=1}^{k-1} x_{is}\Big) + 8 \le r_{ik} \le 24\Big(P_i + \sum_{s=1}^{k-1} x_{is}\Big) + 16 \quad (103.6)$$

$$\sum_{s=1}^{Y} x_{is} = T_i \quad (103.7)$$

$$6 \le P_i + \sum_{s=1}^{Y} x_{is} \le 180 \quad (103.9)$$

$$\begin{cases} d_{ik} - d_{jk} \ge -M c^{k1}_{ij} \\ r_{i(k+1)} - r_{j(k+1)} \ge -M c^{k1}_{ij} \\ d_{ik} - d_{jk} \le M c^{k2}_{ij} \\ r_{i(k+1)} - r_{j(k+1)} \le M c^{k2}_{ij} \\ c^{k}_{ij} \ge c^{k1}_{ij} + c^{k2}_{ij} - 1 \end{cases} \quad (103.10)$$
Constraint (103.2) means that the time when boat i leaves site k is no earlier
than the time when it arrives at site k.
Constraint (103.3) guarantees that the time for boat i to travel from site k−1 to
site k lies between its lower and upper bounds.
Constraint (103.4) means that boat i begins its trip on day P_i during the open
hours, where P_i = 1, 2, …, 174.
Constraint (103.5) guarantees that boat i leaves camping site k during the open
hours.
Constraint (103.6) guarantees that boat i arrives at camping site k during the
open hours.
Constraint (103.7) means that the duration of boat i is T_i, where T_i is an
integer ranging from 6 to 18.
Constraint (103.8) describes the condition of whether boat i occupies camping
site k.
Constraint (103.9) guarantees that all boats finish their river trips within the
six months.
Constraint (103.10) describes the condition of whether boats i and j meet on
the river.
Constraint (103.11) guarantees that no two sets of campers occupy the same
camping site at the same time.
Constraint (103.12) describes the value ranges of the variables.
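The order-swap logic encoded by the big-M constraints in (103.10) can be illustrated in code. The sketch below (Python is used purely for illustration; the paper itself solves the model with Lingo) takes hypothetical departure and arrival arrays corresponding to d_ik and r_ik and counts a meet exactly in the cases where neither c^{k1}_ij nor c^{k2}_ij can be 0:

```python
def count_meets(dep, arr):
    """Count encounters between boats, mirroring constraint (103.10).

    dep[i][k] is the time boat i leaves site k (d_ik); arr[i][k] is the
    time boat i arrives at site k (r_ik), with site 0 = First Launch.
    Boats i and j meet between sites k and k+1 exactly when their order
    swaps: one leaves site k later but arrives at site k+1 earlier, so
    neither c^{k1}_ij nor c^{k2}_ij can be 0 and c^k_ij is forced to 1.
    """
    n_boats, n_sites = len(dep), len(dep[0])
    meets = 0
    for i in range(n_boats):
        for j in range(i + 1, n_boats):
            for k in range(n_sites - 1):
                leaves_later = dep[i][k] > dep[j][k]
                arrives_later = arr[i][k + 1] > arr[j][k + 1]
                if leaves_later != arrives_later:  # order swap => a meet
                    meets += 1
    return meets

# Boat 0 leaves First Launch first but is overtaken before site 1:
print(count_meets([[8.0, 33.0], [9.0, 34.0]],
                  [[8.0, 15.0], [9.0, 12.0]]))  # 1
```

The solver never evaluates this function, of course; it is only a readable restatement of what the five linear inequalities in (103.10) enforce jointly.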
In fact, we could use the Lingo software to solve this problem; however, the
problem scale is so large that the solution time would be prohibitive, so using
Lingo is unwise in this situation. Instead, we design a greedy algorithm (Chen et
al. 2008; Chen and Xu 2011; Su and Zhang 2011; Liang et al. 2005; Wang and Li
2008) to solve this problem.
according to the passengers' booking. During one cycle, we arrange the
boats by duration: boats of 6 nights first, then 7, 8, … in turn, with boats of
18 nights arranged last.
• The number of boats (denoted Q) arranged every day depends on the trip
duration and can be calculated as Q = ⌊Y/T_dur⌋. For example, for boats whose
duration is 6 nights, the maximum number of such boats we can arrange every day
is ⌊Y/6⌋; for boats whose duration is 7 nights, the maximum is ⌊Y/7⌋. This can
be explained by the following graph (see Fig. 103.1).
Suppose Y = 24; then for boats whose duration is 6 nights, the maximum number of
boats we can arrange every day is 4.
In Fig. 103.1, the bold horizontal line denotes the riverbank, the thin
vertical lines represent the camping sites, and the arrows symbolize the boats. On
the first day we arrange 4 boats. These four boats can then be viewed as a whole,
and their entire trip process is depicted in the graph. This method guarantees
that the camping sites are utilized in the best possible way. Similar graphs can
be drawn for boats of other durations.
Fig. 103.1 Trip process of boats arranged in the first and the second days
Classify all X boats into 13 groups according to their duration. Denote the
number of boats whose duration is i nights by X(i), i = 6, 7, …, 18.
Provided that the maximum river trip time every day is no more than 5 h,
calculate the maximum number of boats of duration i that can be arranged every
day, denoted m(i), where m(i) = ⌊Y/i⌋.
for i = 6 : 18
Arrange the schedule of all boats with duration i:
For all X(i) boats with duration i, arrange them to start off from the First Launch
over X(i)/m(i) consecutive days, with the time gap between two successive boats
being 225/(Y + 1)/v daytime hours.
After all X(i) boats with duration i are arranged, arrange the following
X(i + 1) boats with duration i + 1 in the following X(i + 1)/m(i + 1) days, …, until
all X boats are arranged.
end
END
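The loop above can be sketched in code. The following Python function is an illustrative reimplementation (not the authors' MATLAB program): boats are launched in order of increasing duration, each day is filled up to the m(i) = ⌊Y/i⌋ limit, and the next duration class uses any launch slots left over, which reproduces the mixed launch days seen in the simulation results below. The boat counts in the usage example are hypothetical but chosen to be consistent with days D1–D4 of Sect. 103.4.

```python
def greedy_schedule(counts, Y):
    """Greedy launch schedule for the Big Long River problem.

    counts: dict mapping trip duration i (nights) to the number of
    boats X(i) with that duration; Y: number of camp sites. Shorter
    trips go first; at most m(i) = Y // i boats of duration i start on
    one day, so one day's launches can occupy disjoint sites nightly.
    Returns a dict: day -> list of durations launched that day.
    """
    schedule = {}
    day = 1
    room_today = None  # launch slots still free today
    for i in sorted(counts):
        m_i = Y // i              # daily launch cap for duration i
        remaining = counts[i]
        while remaining > 0:
            if room_today is None:
                room_today = m_i  # a fresh day opens with m(i) slots
            batch = min(remaining, room_today)
            schedule.setdefault(day, []).extend([i] * batch)
            remaining -= batch
            room_today -= batch
            if room_today == 0:   # day is full, move to the next one
                day += 1
                room_today = None
    return schedule

# With X(6)=5, X(7)=6, X(8)=11 and Y = 53 camp sites:
sched = greedy_schedule({6: 5, 7: 6, 8: 11}, 53)
print(sched[1])  # [6, 6, 6, 6, 6, 7, 7, 7] -- five 6-night and three 7-night boats
```

Day 1 of this sketch matches the pattern of D1 in Sect. 103.4 (5 boats of 6 nights plus 3 boats of 7 nights), and day 2 matches D2 (3 boats of 7 nights plus 4 boats of 8 nights).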
In this section, we run some simulations of the model and algorithm described
above. Suppose X_i denotes the number of boats whose duration is i nights
(i = 6, 7, …, 18). Given Y = 53, we run our procedure, coded in MATLAB; the
simulation results are as follows:
D1: 5 boats whose duration is 6 nights and 3 boats whose duration is 7 nights
will start off in the first day;
D2: 3 boats whose duration is 7 nights and 4 boats whose duration is 8 nights
will start off in the second day;
D3: 6 boats whose duration is 8 nights will start off in the third day;
D4: 1 boat whose duration is 8 nights and 5 boats whose duration is 9 nights
will start off in the fourth day;
D5: 5 boats whose duration is 9 nights will start off in the fifth day;
…
D31: 2 boats whose duration is 18 nights will start off in the 31st day;
D32: 2 boats whose duration is 18 nights will start off in the 32nd day;
D33: 1 boat whose duration is 18 nights will start off in the 33rd day.
The detailed rafting schedule of all boats is depicted in Fig. 103.2.
Fig. 103.2 The detail schedule of all boats obtained by greedy algorithm
According to the simulation results, the 124 boats can be arranged in about
50 days. This suggests dividing the rafting season (180 days) into several
periods (for example, 3) and arranging X/3 boats in each period according to the
greedy algorithm, which avoids concentrating all boats of the same duration in a
few consecutive days.
Remarks: The greedy algorithm gives a solution to the problem; however, this
solution might not be the optimal one. Based on this solution, we can take
measures to improve it; by continuous adjustment, a satisfactory solution can
finally be found.
103.5 Conclusion
The problem of camping along the Big Long River is very complex, and its
solution is quite open-ended. In this paper, we formulate the problem as an
integer linear programming model and design a greedy algorithm to arrange the
schedule of boats. By running simulations with this algorithm, we give a solution
to the problem. The results show that this method can approach the optimal
solution through continuous improvement. Furthermore, we can estimate the
capacity of the river with this greedy algorithm.
Although a river trip is interesting and exciting, it is also risky and requires
a spirit of adventure. Accidents may happen during the trip, such as bad weather
or passenger injuries. These potential factors might have a great impact on the
supervisor's decisions and management, and we do not take them into account here.
In the future, we will incorporate these factors into the model and algorithm.
References
Chen H, Xu L (2011) Greedy algorithm computing Minkowski reduced lattice bases with
quadratic bit complexity of input vectors. Chin Ann Math 32B(6):857–862
Chen D, You S, Han B (2008) Algorithm to create and rate sudoku puzzles. MCM Problem B
Fu Y (2008) The study of whitewater drifting tourism product development based on tourists
experience at Hongkou scenic spots of Dujiangyan city. Master dissertation of Southwest
Jiaotong University, pp 33–36
Gan Y, Tian F, Li W, Li M, Chen B, Hu Y (2005) Operations research, 3rd edn, vol 6. Tsinghua
University Press, Beijing, pp 122–126
Li Z, Wang H (2009) A feasible mathematical model for the marshalling and dispatching problem
of railway. Internet Fortune 11:92–93 (in Chinese)
Liang L, Chen Y, Xu M (2005) Schedule arrangement algorithm based on greedy method.
J Yunnan Normal University (Nat Sci Edn) 25(3):9–16 (in Chinese)
Liu D, Zhao J, Han D, Chen Z (2009) Model and algorithm for the marshalling and dispatching
problem of railway freight train. Math Pract Theory 39(16):162–172 (in Chinese)
Su F, Zhang J (2011) Research on greedy algorithm to solve the activity arrangement. Softw
Guide 10(12):43–44 (in Chinese)
Wang P (2010) The study on train operation simulation: real-time scheduling model and
algorithm. Master dissertation of Beijing Jiaotong University, pp 9–11
Wang B, Li Z (2008) Research and implementation of automatic course system based on greedy
algorithm. Comput Eng Design 29(18):4843–4846
Xiang W, Xu C (2011) Analysis of the factors influencing whitewater rafting experience. J Guilin
Inst Tourism 3(6):56–60
Chapter 104
Research on End Distribution Path
Problem of Dairy Cold Chain
Abstract The vehicle routing problem of dairy cold-chain end distribution with
random demand and time windows is investigated in this paper. Considering the
characteristics of dairy cold-chain end distribution, chance-constrained theory
and a penalty function are introduced to establish a mathematical model of the
problem. A scanning-insertion algorithm is proposed to solve the model: first,
according to vehicle capacity and time-window restrictions, the customers are
divided into several groups by a scanning algorithm; then a feasible route is
found for each group of customers; finally, the nearest-insertion idea is used to
adjust the vehicle routes and find the final optimal distribution routes.
104.1 Introduction
The vehicle routing problem with time windows is the general transportation
problem under the premise of customers' time-window requirements. Solomon and
Desrosiers (Solomon 1987; Solomon and Desrosiers 1988) added time-window
constraints to the general vehicle routing problem in 1987. Desorchers et al.
(1988) gave a concise survey of the various methods for solving the vehicle
routing problem with time windows in 1988. Sexton and
Z. Li (&)
School of Information, Beijing Wuzi University, Beijing 101149, China
e-mail: [email protected]
S. Wang
Department of Postgraduate, Beijing Wuzi University, Beijing 101149, China
e-mail: [email protected]
Choi (1986) used the decomposition method proposed by Benders to solve the
single-vehicle pick-up and delivery problem with time-window restrictions.
In essence, the chance-constrained mechanism allows constraint violations during
vehicle service with a preset probability, and the additional cost caused by
service failure is not included in the plan (Chen 2009). Stewart (Stewart and
Golden 1983) and Laporte (Laporte et al. 1989) used chance-constrained
programming to transform the SVRP into an equivalent deterministic VRP under
certain assumptions. Dror (Dror and Trudeau 1986) used a modified Clark-Wright
savings algorithm to solve the vehicle routing optimization problem.
This paper’s main consideration is regular route for distribution mode under the
target of minimizing the cost. It means that the customer or the number of nodes
and their position are fixed in every day visit, but each customer’s demand is
different, and their demands meet Normal Distribution.
The distribution center pays a fixed cost for the use of each vehicle, including
the driver's wages, insurance, and the lease rental of the vehicle:

$$c_1 = \sum_{k=1}^{m} f_k$$
In the cold chain, the main factors causing damage to fresh products are storage
temperature, microbial water activity, pH value, and oxygen content (Wang
2008). Assume the damage rate is λ, the unit value of the products is P, and
the capacity of vehicle k is Q_k.
$$c_3 = P\lambda Q_k$$
The heat load on the vehicle's refrigeration equipment is mainly due to heat
transfer between the inside and outside of the vehicle body. Suppose the
temperature difference between the inside and outside of the vehicle is fixed
over a certain period; then the energy consumption cost can be expressed as:

$$c_4 = A \sum_{k=1}^{m} (e_k - s_k)$$
A soft time window allows the distribution vehicle to arrive outside the time
window, but arrivals outside the appointed time are penalized. Delivery times can
be divided into three categories: service in advance, service within the time
window, and service delay (Zhan 2006; Thangiah et al. 1991), as shown in
Fig. 104.1.
(1) Service in advance means the distribution vehicle arrives in the interval [a, g).
Immediate delivery may cause customer inconvenience and complaints, but it
reduces energy consumption.
(2) Service within the time window means the distribution vehicle arrives in the
interval [g, h]. With immediate delivery, the time-related energy cost is
constant.
(3) Service delay means the distribution vehicle arrives in the interval (h, b].
With immediate delivery, the energy and the relevant penalty costs increase.
Fig. 104.1 Delivery time categories over arrival time t: early service [a, g), service within the window [g, h], delayed service (h, b]
$$y_i^k = \begin{cases} 1, & \text{if vehicle } k \text{ services customer } i \\ 0, & \text{otherwise} \end{cases}$$
The mathematical model can be formulated as follows:
$$\min\; z = \sum_{k=1}^{m}\left[f_k + P\lambda Q_k + A(e_k - s_k)\right] + \sum_{k=1}^{m}\sum_{i=1}^{n}\sum_{j=1}^{n} c_{ij}^{k}\, d_{ij}\, x_{ij}^{k} + \sum_{k=1}^{m}\sum_{i=1}^{n}\varphi(t_i^k)$$

s.t.

$$\sum_{k=1}^{m} y_i^k = \begin{cases} m, & i = 0 \\ 1, & i = 1, 2, \ldots, n \end{cases} \quad (104.1)$$

$$y_j^k = \sum_{i=1}^{n} x_{ij}^k, \quad i \ne j;\; j = 1, 2, \ldots, n \quad (104.2)$$

$$\sum_{i=0}^{n} x_{ip}^k - \sum_{j=0}^{n} x_{pj}^k = 0, \quad p = 1, 2, \ldots, n \quad (104.3)$$

$$t_j^k \ge t_i^k + \frac{d_{ij}}{v} - (1 - x_{ij}^k)M, \quad j = 1, 2, \ldots, n;\; k = 1, 2, \ldots, m \quad (104.4)$$

$$t_i^k \ge s_k + \frac{d_{0i}}{v} - (1 - x_{0i}^k)M, \quad i = 1, 2, \ldots, n;\; k = 1, 2, \ldots, m \quad (104.5)$$

$$e_k \ge t_j^k + \frac{d_{j0}}{v} - (1 - x_{j0}^k)M, \quad j = 1, 2, \ldots, n;\; k = 1, 2, \ldots, m \quad (104.6)$$

$$t_{a_i} \le t_i^k \le t_{b_i}, \quad i = 1, 2, \ldots, n;\; k = 1, 2, \ldots, m \quad (104.7)$$

$$\sum_{i=1}^{n} \mu_i y_i^k + \Phi^{-1}(\beta)\sqrt{\sum_{i=1}^{n} \sigma_i^2 y_i^k} \le Q_k \quad (104.8)$$

$$y_i^k \in \{0, 1\}, \quad i = 1, 2, \ldots, n;\; k = 1, 2, \ldots, m; \qquad x_{ij}^k \in \{0, 1\}, \quad i, j = 1, 2, \ldots, n,\; i \ne j;\; k = 1, 2, \ldots, m$$
Constraints (104.4)–(104.5) give the conditions that the arrival times of vehicle
k at customers i and j must satisfy.
Constraints (104.6)–(104.7) are the time-window restrictions.
Constraint (104.8) requires that, for each vehicle, the probability that its
capacity covers the total demand of the customers it serves is at least β.
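Constraint (104.8) can be checked numerically for a candidate group of customers. The sketch below (illustrative Python, not the authors' code) uses the standard normal quantile Φ⁻¹(β); for β = 95 % this is about 1.645, which is where the factor 1.65 in the Q₁ calculation of Sect. 104.5 comes from.

```python
from statistics import NormalDist

def load_feasible(mu, sigma2, beta, Q):
    """Chance-constraint check corresponding to (104.8).

    mu, sigma2: means and variances of the normally distributed
    demands of the customers assigned to one vehicle. The assignment
    is feasible when sum(mu) + Phi^{-1}(beta) * sqrt(sum(sigma2)) <= Q,
    i.e. capacity Q covers total demand with probability >= beta.
    """
    z = NormalDist().inv_cdf(beta)  # Phi^{-1}(beta), ~1.645 for beta = 0.95
    return sum(mu) + z * sum(sigma2) ** 0.5 <= Q

# First group of Sect. 104.5: demand means 3, 4, 9, 5, 7, 11 and
# variances 1, 2, 3, 3, 4, 5, vehicle capacity 48, beta = 95 %.
print(load_feasible([3, 4, 9, 5, 7, 11], [1, 2, 3, 3, 4, 5], 0.95, 48))  # True
```

Since the normal quantile is applied to the aggregated group, the check must be re-run each time the sweep adds a customer to a group, which is exactly how the grouping step of Sect. 104.5 uses it.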
104.4 Algorithm
There are 30 customers to be serviced. Suppose all vehicles are of the same
type: the capacity of each vehicle is 48, the fixed cost is 100, the vehicle speed
is 30 km/h, the unit energy consumption cost is $0.5 per minute, the unit-distance
transportation cost is $5 per kilometer, the punishment coefficients are h = 0.4
and g = 0.5, β is 95 %, λ is 0.01, and P is 100. The experimental data are
randomly generated by computer under these experimental hypotheses.
A. Set up the polar coordinates system
B. Partition the customers into several groups
(1) Starting from zero angle and rotating counterclockwise, we find that the
first group of customers is 2, 3, 5, 6, 7, and 9. The detailed
information is listed in Table 104.1.
$$Q_1 = 3 + 4 + 9 + 5 + 7 + 11 + 1.65\sqrt{1 + 2 + 3 + 3 + 4 + 5} = 46.0 < 48$$
(2) Find an initial solution sequence 0-2-3-7-5-6-9-0. The initial route is
shown in Fig. 104.2.
(3) Continue to rotate counterclockwise to build new groups. Repeat the
process until all customers are assigned to a group.
C. Optimize the initial route of each group by the recent Insertion method
(1) Select customer 2, whose required time window is the earliest, to form a
sub-route with distribution center 0. Next, insert customer 3 according to its
required time window: customer 3 is inserted between customer 2 and distribution
center 0, forming a new sub-route 0-2-3-0 that satisfies the time windows with
minimal cost increment. See Table 104.2.
(2) Insert customers 7, 5, 6, 9 into the sub-route one by one. We can find the
optimal route 0-2-7-9-3-6-5-0. The objective function value is 796.6. The
optimal route is shown in Fig. 104.3 and Table 104.3.
Similarly, we can use the same method to find the optimal routes in the other
groups. The results are shown in Fig. 104.4.
Fig. 104.2 The initial route 0-2-3-7-5-6-9-0 of the first group (customers 2, 3, 5, 6, 7, 9)
Fig. 104.3 The optimized route 0-2-7-9-3-6-5-0 of the first group
Fig. 104.4 The optimal routes of all customer groups
104.6 Conclusion
The vehicle routing problem of dairy cold-chain end distribution with random
demand and time windows has been investigated in this paper; a mathematical model
was constructed and an algorithm proposed.
The vehicle routing problem with time windows is a real problem that enterprises
face in end-of-city distribution. Obviously, pursuing minimum cost alone may
degrade service quality and eventually lead to the loss of customers. To
establish a suitable mode of long-term sustainable development, an enterprise
should find a balance between service quality and cost; as a result, it can meet
customer requirements with a high level of service at minimum cost. In addition,
this paper did not consider the asymmetry of the road network or handling-time
factors. In the future, we will investigate the problem with these factors.
References
Chen B (2009) Ant colony optimization algorithm for the vehicle routing problem in the research
on the application (in Chinese). Harbin Industrial University, Harbin
Desorchers M, Lenstra J, Savelsbergh M, Soumis F (1988) Vehicle routing with time
windows: optimization and approximation. In: Vehicle routing: methods and studies.
North-Holland, Amsterdam, pp 64–84
Dror M, Trudeau P (1986) Stochastic vehicle routing with modified savings algorithm. Eur J Oper
Res 23:228–235
Laporte G, Louveaux F, Mercure H (1989) Models and exact solutions for a class of stochastic
location-routing problems. Eur J Oper Res 39:71–78
Sexton TR, Choi YM (1986) Pickup and delivery of partial loads with soft time
windows. Am J Math Manage Sci 6(4):369–398
Solomon MM (1987) Algorithm for the vehicle routing and scheduling problems with time-
windows constraints. Oper Res 35(2):254–265
Solomon MM, Desrosiers J (1988) Time windows constrained routing and scheduling problems.
Transp Sci 22(2):1–13
Stewart WR, Golden BL (1983) Stochastic vehicle routing: a comprehensive approach. Eur J
Oper Res 14:371–385
Thangiah S, Nygard K, Juell PG (1991) A genetic algorithms system for vehicle
routing with time windows. In: Proceedings of the seventh conference on artificial
intelligence applications, Miami, Florida, pp 322–325
Wang Y (2008) Cold-chain logistics distribution center mode study (in Chinese). Changsha
University of Science and Technology, Changsha
Zhan S (2006) Parallel algorithm with the soft time Windows in the vehicle routing problem of
the application (in Chinese). Wuhan University of Science and Technology, Wuhan
Chapter 105
Improved Evolutionary Strategy Genetic
Algorithm for Nonlinear Programming
Problems
Keywords Nonlinear programming · Genetic algorithm · Improved evolutionary
strategy · Correction operator method
105.1 Introduction
method, the multiplier method. But these methods had their specific scopes and
limitations: the objective function and constraint conditions were generally
required to be continuous and differentiable. The traditional optimization methods
became difficult to apply as the optimized object grew more complicated. The
genetic algorithm overcame these shortcomings of the traditional algorithms: it
only required that the optimization problem be computable, eliminating the
requirement of continuity and differentiability, which was beyond the traditional
methods. It used an organized form of search with parallel global search
capability, high robustness, and strong adaptability, and it could achieve high
optimization efficiency. The basic idea was first proposed by Professor John
Holland. The genetic algorithm has been widely used in fields such as combinatorial
optimization and the optimization of controllers' structural parameters, and has
become one of the primary methods for solving nonlinear programming problems
(Operations research editorial group 2005; Bazarra and Shetty 1979; Bi et al. 2000;
Liang et al. 2009; Holland 1975; Hansen 2004; Saleh and Chelouah 2004; Uidette
and Youlal 2000; Lyer et al. 2004).
In this paper, the evolution strategy was improved after analyzing the process of
the genetic algorithm, and the improved algorithm took full advantage of the
genetic algorithm to solve unconstrained and constrained nonlinear programming
problems. Numerical examples in the MATLAB environment showed that the proposed
improved genetic algorithm was effective for solving unconstrained and constrained
nonlinear programming, and the experiments proved it to be an algorithm with
stable calculation and good performance.
the objective function f(X) in E^n. Here, h_i(X) = 0 and g_j(X) ≥ 0 were the
constraint conditions.
Since max f(X) = −min[−f(X)], only the minimization of the objective
function needed to be considered, without loss of generality.
If some constraints were "≤" inequalities, both sides were multiplied by "−1",
so only constraints of the form "≥" needed to be considered.
Based on the simple genetic algorithm, the following gave the analysis design and
description of the algorithm which improved the genetic evolution strategy of
genetic algorithm.
We used binary encoding with multi-parameter cascade encoding: each parameter was
encoded in binary, and the encoded parameters were then concatenated in a fixed
order to constitute the final code, an individual representing all parameters. The
bit-string length depended on the required solution precision of the specific
problem: the higher the required precision, the longer the bit string.
If the interval of a parameter was [A, B] and the precision was c digits
after the decimal point, then the bit-string length L satisfied:

$$(B - A) \times 10^c \le 2^L \quad (105.3)$$
Here, L took the smallest integer that made the above inequality valid.
If the interval of a parameter was [A, B] and the corresponding substring in
the individual code was $b_L b_{L-1} b_{L-2} \cdots b_2 b_1$, then its
corresponding decoding formula was:

$$X = A + \Big(\sum_{i=1}^{L} b_i\, 2^{i-1}\Big)\,\frac{B - A}{2^L - 1} \quad (105.4)$$
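Formulas (105.3) and (105.4) translate directly into code. The sketch below (illustrative Python; the paper's implementation is in MATLAB) computes the bit-string length for a given interval and precision, and decodes a bit string b_L … b_1 back into [A, B].

```python
def bit_length(A, B, c):
    """Smallest L with (B - A) * 10**c <= 2**L, per formula (105.3)."""
    L = 1
    while (B - A) * 10 ** c > 2 ** L:
        L += 1
    return L

def decode(bits, A, B):
    """Decoding formula (105.4); bits are b_L ... b_1, most significant first."""
    L = len(bits)
    value = sum(b * 2 ** (L - 1 - i) for i, b in enumerate(bits))
    return A + value * (B - A) / (2 ** L - 1)

# A parameter on [0, 10] with 2 decimal digits of precision needs 10 bits,
# since 10 * 10**2 = 1000 <= 2**10 = 1024.
print(bit_length(0, 10, 2))     # 10
print(decode([1] * 10, 0, 10))  # 10.0 (the all-ones string decodes to B)
```

Note that the mapping is linear: the all-zeros string decodes to A, the all-ones string to B, and the achievable resolution is (B − A)/(2^L − 1).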
There were two cases when producing the initial population: one was solving the
unconstrained problem; the other was solving the constrained problem.
996 H. Zhu et al.
Suppose the number of decision variables was n and the population scale was m;
a_i and b_i were the lower and upper limits of a decision variable, respectively.
For the unconstrained problem, binary encoding was adopted to randomly produce the
initial individuals of the population.
For constrained problems, the initial population could be selected at random
under the constraint conditions. It could also be produced in the following
manner:
First, a known initial feasible individual $X_1^{(0)}$ was given artificially,
meeting the following conditions:

$$g_j(X_1^{(0)}) = g_j(X_{11}^{(0)}, X_{12}^{(0)}, X_{13}^{(0)}, \ldots, X_{1n}^{(0)}) > 0$$
The other individuals were produced in the following way (Wang et al. 2006):

$$X_2^{(0)} = A + r_2 (B - A) \quad (105.5)$$

Here, $A = (a_1, a_2, a_3, \ldots, a_n)^T$, $B = (b_1, b_2, b_3, \ldots, b_n)^T$,
$r_2 = (r_{21}, r_{22}, r_{23}, \ldots, r_{2n})^T$ (the product taken
component-wise), with random numbers $r_{ij} \in U(0, 1)$.
Then it was checked whether $X_2^{(0)}$ satisfied the constraints. If it did, the
next individual was produced in the same way as $X_2^{(0)}$; if not, $X_2^{(0)}$
was corrected by the correction operator.
When the genetic algorithm was applied to constrained nonlinear programming
problems, the core issue was how to treat the constraint conditions. The problem
was first solved as an unconstrained one, checking for constraint violations
during the search: if there were no violations, the candidate was a feasible
solution; otherwise it was not. The traditional methods for dealing with
infeasible solutions were to penalize infeasible chromosomes or to discard them;
in essence, they eliminated infeasible solutions to shrink the search space during
evolution (Gao 2010; Wang et al. 2003; Ge et al. 2008; He et al. 2006; Tang et al.
2000; Wang and Cao 2002). The improved evolution-strategy genetic algorithm used
the correction-operator method, which applied a certain strategy to repair
infeasible solutions. Unlike the penalty-function method, the correction-operator
method used only a transform of the objective function as the fitness measure,
with no additional terms, and it always returned feasible solutions. It broke with
the traditional idea: it avoided the low search efficiency caused by rejecting
infeasible solutions, avoided premature convergence due to the introduction of
penalty factors, and also avoided problems such as the result deviating
considerably from the constraint region after the mutation operation.
If there were r linear equality constraints and the rank of the linear system was
r < n, then all decision variables could be expressed in terms of n − r decision
variables.
105 Improved Evolutionary Strategy Genetic Algorithm 997
Substituting them into the inequality constraints and the objective function turned the original n-variable problem into an (n − r)-variable problem with inequality constraints only, so only problems with inequality constraints needed to be considered. The initial individuals, the offspring produced by the crossover operation, and the individuals after mutation all had to be checked against the constraints and, if necessary, repaired immediately. This design of the genetic operations kept the solution vectors bounded inside the feasible region.
The correction operator was realized concretely as follows. Each individual was tested for constraint satisfaction. If satisfied, the genetic operations continued; if not, the individual was moved toward the previous feasible individual (assume it is X1(0), which should be an inner point). The approach was an iterated process following:

X2(0) = X1(0) + α (X2(0) − X1(0))        (105.6)

where α was the step length factor. If the constraint was still not satisfied, an accelerated contraction of the step length was used, that is, α = (1/2)^n, where n was the number of search attempts.
Too large a step length factor could prevent constraint satisfaction, weaken the repairing effect, and even hurt search efficiency and speed, whereas too small a step length factor could not correct the individual properly. Gradually reducing the step length factor therefore both protected the previous correction result and gave full play to the correction strategy.
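Equation (105.6) with the contracting step length α = (1/2)^n can be sketched as follows. The unit-disc constraint and the anchor point are illustrative, and the code reads the iteration as shrinking the step from the feasible inner point toward the original infeasible point, one plausible reading of the iterated process described above.

```python
def repair(x_infeasible, x_feasible, feasible, max_iter=30):
    """Pull an infeasible point toward a known feasible inner point,
    Eq. (105.6): X2 <- X1 + a*(X2 - X1), with the accelerated
    contraction a = (1/2)**n on the n-th attempt."""
    for n in range(1, max_iter + 1):
        alpha = 0.5 ** n
        x2 = [x1 + alpha * (xi - x1)
              for x1, xi in zip(x_feasible, x_infeasible)]
        if feasible(x2):
            return x2
    return list(x_feasible)  # fall back to the feasible anchor

# illustrative constraint: points must lie inside the unit disc
inside = lambda p: p[0] ** 2 + p[1] ** 2 <= 1.0
repaired = repair([3.0, 4.0], [0.0, 0.0], inside)
```

With these values the first two attempts (α = 1/2, 1/4) still land outside the disc, and α = 1/8 yields the feasible point (0.375, 0.5).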
Thus X2(0) was made feasible after some iterations; then X3(0) was produced in the same way as X2(0) and made feasible, and so on until all the needed feasible individuals were produced. For the binary genetic algorithm, these feasible individuals were its phenotype form: the real-coded individuals were converted into binary strings according to the mapping relationship between genotype and phenotype, giving the feasible individuals of the binary genetic algorithm.
This linear-search way of moving an infeasible individual toward a feasible one had the advantage of improving infeasible individuals, actively guiding them toward extreme points of the population and letting the algorithm optimize in the global space. This paper introduced the correction operator to restore the feasibility of infeasible individuals. The method was simple and practical, and the treatment of infeasible individuals was one novelty of the improved evolution strategy genetic algorithm.
If the objective function was to be minimized, the following transformation was applied (Wang et al. 2007):
998 H. Zhu et al.
Fit(f(x)) = { c_max − f(x),  if f(x) < c_max;  0,  otherwise }        (105.7)

Here, c_max was an estimated value large enough for the problem.
If the objective function was to be maximized, the following transformation was applied:

Fit(f(x)) = { f(x) − c_min,  if f(x) > c_min;  0,  otherwise }        (105.8)

Here, c_min was an estimated value small enough for the problem.
The selection operator used the roulette method. The selection probability of individual i was:

p_s = f_i / Σ_{j=1}^{m} f_j        (105.9)
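The fitness transform (105.7) and roulette selection (105.9) can be sketched together; the population objective values and the cap c_max below are made up for illustration.

```python
import random

def fitness_min(fx, c_max):
    """Eq. (105.7): fitness for minimization; zero at or above the cap."""
    return c_max - fx if fx < c_max else 0.0

def roulette(fitnesses, rng=random.random):
    """Eq. (105.9): select index i with probability f_i / sum(f)."""
    total = sum(fitnesses)
    r = rng() * total
    acc = 0.0
    for i, f in enumerate(fitnesses):
        acc += f
        if r <= acc:
            return i
    return len(fitnesses) - 1

# assumed population objective values and cap c_max = 100
objs = [10.0, 40.0, 90.0, 120.0]
fits = [fitness_min(f, 100.0) for f in objs]
pick = roulette(fits)
```

Note that the worst individual (objective 120 > c_max) gets fitness 0 and therefore can never be selected.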
In the experiment, simulations of two examples were used to validate the correctness of the algorithm and to test its performance. The hardware environment was an Intel Pentium Dual-Core [email protected] GHz with 2 GB RAM; the operating system was Microsoft Windows XP and the compiling environment was MATLAB 7.11.0 (R2010b).
In the table below, the interval lower bound was a, the interval upper bound was b, the precision was c digits after the decimal point, the population size was m, and the maximum evolution generation was T.
Example 1:
min f_1(x) = Σ_{i=1}^{n} x_i^2        (105.11)
105.5 Conclusions
(1) The improved evolutionary strategy genetic algorithm preserves the optimal group, building on the traditional elite preservation method. The advantage is that it reduces the chance of optimal solutions being destroyed by crossover or mutation during evolution, while avoiding the premature convergence that the traditional elite preservation method may suffer when all individuals quickly converge to one or two individuals with high fitness values.
(2) The correction operator breaks with the traditional idea and avoids several problems: the low search efficiency of rejecting infeasible solutions, the early convergence caused by introducing a punishment factor, and the considerable deviation from the constraint region after the mutation operation.
(3) Combining the improved evolutionary strategy with the correction operator method can effectively solve many nonlinear programming problems, greatly improve solution quality and convergence speed, realize the linear search that moves infeasible individuals toward feasible ones, and effectively guide infeasible individuals.
The correction operator's treatment of infeasible individuals is simple and effective; it proves to be a reliable and convenient method.
References
Bazarra MS, Shetty LM (1979) Nonlinear programming theory and algorithms. John Wiley & Sons, New York, pp 124–159, 373–378
Bi Y, Li J, Li G, Liu X (2000) Design and realization of genetic algorithm for solving nonlinear
programming problem (in Chinese). Syst Eng Electron 22(2):82–89
Gao J (2010) Genetic algorithm and its application in nonlinear programming. Master
dissertation, Xi’an University of Architecture and Technology, Xi’an, China
Ge Y, Wang J, Yan S (2008) A differentiable and ‘almost’ exact penalty function method for
nonlinear programming (in Chinese). J Nanjing Normal Univ Nat Sci Ed 31(1):38–41
Hansen JV (2004) Genetic search methods in air traffic control. Comput Oper Res 31(3):445–459
He D, Wang F, Mao Z (2006) Improved genetic algorithm in discrete variable non-linear
programming problems (in Chinese). Control and Decision 21(4):396–399
Holland JH (1975) Adaptation in natural and artificial systems. University of Michigan, USA
Liang X, Zhu C, Yan D (2009) Novel genetic algorithm based on species selection for solving
constrained non-linear programming problems (in Chinese). J Central South Univ Sci Technol
40(1):185–189
Lyer SK, Saxena B et al (2004) Improved genetic algorithm for the permutation flowshop
scheduling problem. Comput Oper Res 31(4):593–606
Operations research editorial group (2005) Operations research (3rd edn) (in Chinese). Tsinghua
University Press, Beijing, pp 133–190
Saleh HA, Chelouah R (2004) The design of the global navigation satellite system surveying
networks using genetic algorithms. Eng Appl Artif Intell 17(1):111–122
Sui Y, Jia Z (2010) A continuous approach to 0–1 linear problem and its solution with genetic
algorithm. Math Pract Theor 40(6):119–127
Tang J, Wang D (1997) Improved genetic algorithm for nonlinear programming problems (in
Chinese). J Northeast Univ Nat Sci 18(5):490–493
Tang J, Wang D, Gao Z, Wang J (2000) Hybrid genetic algorithm for solving non-linear
programming problem (in Chinese). Acta Automat Sin 26(3):401–404
Uidette H, Youlal H (2000) Fuzzy dynamic path planning using genetic algorithms. Electron Lett
36(4):374–376
Wang D, Liu Y, Li S (2003) Hybrid genetic algorithm for solving a class of nonlinear
programming problems (in Chinese). J Shanghai Jiaotong Univ 37(12):1953–1956
Wang D, Wang J, Wang H, Zhang R, Guo Z (2007) Intelligent optimization methods (in
Chinese). Higher Education Press, Beijing, pp 20–80
Wang F, Wang J, Wu C, Wu Q (2006) The improved research on actual number genetic
algorithms (in Chinese). J Biomathematics 21(1):0153–0158
Wang X, Cao L (2002) Genetic algorithm—theories, applications and software realization (in
Chinese). Xi’an Jiaotong University Press, Xi’an, pp 1–210
Chapter 106
Simulation and Optimization of a Kind
of Manufacturing and Packing Processes
Chun-you Li
Abstract Many factors influence each other in production-packaging processes. The resources, objects, processes and their properties and behaviors can be simulated to construct a computer simulation model across the whole production-packing process. Typically, minimized cost, maximized profit or reasonable utilization is set as the decision objective, and the relevant parameters are configured as conditions in the simulation model. With enough repeated runs, the optimization module can find the best equipment combination and the best production schedule.
106.1 Introduction
In some industries, such as the food and tobacco industries, the terminal product generally comes off a production line and is packaged into small boxes or small bags. The basic process is to produce these products on one or more production lines and deliver or transfer them to packaging; finally, the small-packaged products are filled, continuously or in batches, into a larger container by one or more packaging machines and leave the lines. Figure 106.1 is the schematic diagram of such a production and packaging process: two manufacturing lines produce the same kind of products, then transfer them to three
C. Li (&)
College of Transportation & Logistics, CSUFT, Changsha, Hunan, China
e-mail: [email protected]
C. Li
Accounting School, GXUFE, Nanning, Guangxi, China
coordinated packing lines through a series of buffer vessels, which finally package them into finished goods.
Whether designing a new packaging process or managing an existing one, we often face questions such as: how can process failures be reduced? How can the processing cycle time be shortened? What are the reasonable buffer capacity and buffer stock? How should changes in production scale be handled? And is it necessary to add more, higher-capacity production lines, packaging lines and containers?
To simplify the analysis, we can analyze and decide on a single process and a single factor. For example, the expansion of production and logistics capacity can be determined from the production output or the speeds of both stages. But the whole process is complex and the relationships between processes are uncertain. Moreover, various factors influence one another within the processes, and these factors interact dynamically over time and with events. Once a suggestion is put forward or a measure imposed, it can be hard to predict the ultimate effect of the change, so it is very difficult to determine the exact priority order of the measures. For example, to maintain reliability and inventory balance across processes, designers can use larger inventories to compensate for less reliable equipment; conversely, by improving the reliability of the upstream equipment, they can reduce the in-process storage. Both measures can ensure that downstream material needs are met and that production runs smoothly. Even if the relationships among the production, packaging and buffering processes are certain, many factors still interact: it is difficult, for instance, to evaluate how mutually isolated factors affect production scheduling, or the control of operation sequence and rhythm, the product quantity, the production structure, or the characteristics of the products. These factors may affect production speed and lead to failures or production changeovers. In addition, because manufacturing and packing may be arranged in locations far apart, and because differences in scheduling methods and enterprise culture
(Fig. 106.1 Schematic of the production-packaging process: material enters Machines 1 and 2, flows through Buffers 1–4, and is packed by Packages 1–3 into final goods)
require different production rhythms, the complexity of solving the problems increases further.
There is a variety of decision tools and experimental methods for this kind of multifactor manufacturing-and-packing scheduling problem. This paper presents a simulation method that models the process and its interacting factors, analyses and evaluates the problems, and forecasts the effect of decisions that have been designed or improved. Developing a manufacturing-packing simulation model provides a tool to test design ideas and improvement effects. Just as a driving simulator helps a driver learn to drive well and build good habits, a manufacturing and packaging simulation model can be used to test and optimize the manufacturing and packaging process (Wang et al. 2002).
The tool used to simulate and solve the problem is the simulation model, also called a simulator. A specific simulation model is built around the research goal and the problem to be solved. This paper involves a factory building a new manufacturing and packing system. The preliminary design assumed two production lines, each able to produce any of the basic specifications. Three packaging lines packed the products into containers of various sizes and shapes with different labels. Many parallel buffer tanks between the production lines and the packing lines could receive products from any production line and then send them to any packing line. The production lines and buffer tanks had to be cleaned before changing over to new products.
The simulation model was developed to answer the following questions: Can the new equipment match the new production mix and schedule? What scheduling strategy makes the new equipment operate best? How many buffer tanks are needed and what is their reasonable specification? What is the effect of improving manufacturing and packaging reliability? How will the production cycle time be affected? Are more packaging lines or production lines needed?
The model is implemented in ExtendSim, a simulation platform developed and published by Imagine That. It is a software suite containing simulation libraries and tools, and it can simulate discrete events, continuous processes and rate-based discrete processes. Continuous flow stands for large-volume or high-speed flow; the software includes control and scheduling parts for modeling processes, and layered structure templates for representing higher levels (Krahl 2010, 2011).
Figure 106.2 is the general simulation model, showing the two production lines, four buffer tanks and three packing lines. The actual parts of the model are contained in hierarchical modules: double-clicking any region of a higher-level module's image opens the lower-level modules. The timetable, equipment performance, fault characteristics,
1008 C. Li
conversion rates and other data are held in a built-in database table of the model and can be accessed through a logical scheduling structure.
In this case, the model manufactures and packs products by running the manufacturing machines and packing equipment under the control of a logic scheduler. The scheduler drives the simulation from an order table listing products and quantities. The equipment utilization ratio is set by the logical scheduler, which can also instruct equipment to change over when necessary. The reports produced by the running model resemble actual business reports; the researcher can examine them and point out existing problems.
The simulator was used extensively to help the factory's project team determine the number and configuration of the new manufacturing lines, packaging lines and buffer tanks. A few test schedules were developed to represent production requirements in typical situations and in extreme cases. These models were based on the existing factory model and a series of recommended packing line designs.
When the simulation model runs, the simulation clock keeps recording the running time, which advances with the simulation steps. Production and packaging steps are not predefined in a table but are triggered by events. In this model, items are produced as objects by a Create module, and the production rate is determined by the "interval time of two items". The interval can be represented by the type and parameters of a designated random distribution that describes the line's item production. For example, the interval in this case is described by an exponential distribution: the mean value for one production line is 0.2 and
106 Simulation and Optimization 1009
for the other 0.4, with both location values zero. The production characteristics of the two lines are thus depicted by different parameter values (Hu and Xu 2009).
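This event-driven item generation can be sketched without ExtendSim: `random.expovariate` plays the role of the Create module's exponential interval, and the horizon value and seed are arbitrary choices for illustration.

```python
import random

random.seed(42)  # reproducible sketch

def arrival_times(mean_interval, horizon):
    """Item completion times for one production line, with
    exponentially distributed intervals between successive items
    (the model's 'interval time of two items')."""
    t, times = 0.0, []
    while True:
        t += random.expovariate(1.0 / mean_interval)
        if t > horizon:
            return times
        times.append(t)

line1 = arrival_times(0.2, 100.0)  # faster line, mean interval 0.2
line2 = arrival_times(0.4, 100.0)  # slower line, mean interval 0.4
```

Over the same horizon the line with the smaller mean interval produces roughly twice as many items.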
The manufacturing or packaging process may change the unit of measure; for example, five small items form one larger unit after packing. This can be simulated with the merger module named Batch, which allows objects from multiple sources to be merged into one. It is a great help when coordinating different machines that assemble or fuse different parts. In the module dialog box we can set how many objects from each input are needed to produce one output object, and whether objects from other inputs are held back while some inputs have not yet arrived or are short of quantity.
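A minimal stand-in for the Batch module's counting behavior, using the five-items-per-pack figure from the example above; the function name and the leftover handling are assumptions for illustration, not ExtendSim API.

```python
def batch(items, size=5):
    """Mimic the Batch module: emit one packed unit for every `size`
    input items; leftovers stay waiting (not emitted), as the module
    holds items until the required count arrives."""
    packs = [items[i:i + size]
             for i in range(0, len(items) - size + 1, size)]
    leftover = items[len(packs) * size:]
    return packs, leftover

packs, waiting = batch(list(range(12)), size=5)
```

Twelve input items yield two packed units, while the last two items remain waiting for three more.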
A process with time parameters can simply simulate the manufacturing or packaging. The most important activity module of ExtendSim is Activity. Its basic parameter is Delay, the processing time of the activity; it can also process several items at the same time. In the module dialog box, the processing time can be specified as a fixed value, taken from the module's D (Demand) port, taken from an attribute value of another module, or looked up from a table. The last three ways allow richer and more subtle process modeling. In this case, the initial processing times of the two host machines were set as constants; a more detailed time table can be set up later according to the specific situation of the machines.
To coordinate the input with manufacturing or packaging, a Queue module is needed between the input and the activity as a buffer. The Queue module stores items and releases them to the next module according to predetermined rules. In the module dialog box we can select queuing rules such as resource pool queuing, queuing by attribute value, FIFO (first in, first out), LIFO (last in, first out), priority, and so on. With resource pool queuing, resources are drawn from a resource pool module in which the resource number is limited. A queue based on an attribute value sorts items by a chosen attribute. FIFO is the most common queuing discipline; LIFO is a reverse discipline, also known as a stack, in which the latest item in leaves first; under priority queuing, the module uses the Priority attribute to determine the release order of items (Fig. 106.3).
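Three of these release disciplines can be illustrated with standard-library structures; this is not the Queue module itself, only the release orders it implements (with smaller priority values released sooner, an assumed convention).

```python
import heapq
from collections import deque

# FIFO: the first item in is released first
fifo = deque(["a", "b", "c"])
fifo_order = [fifo.popleft() for _ in range(3)]

# LIFO (stack): the latest item in leaves first
lifo = ["a", "b", "c"]
lifo_order = [lifo.pop() for _ in range(3)]

# Priority: release by a priority attribute (smaller = sooner here)
pq = []
for prio, item in [(2, "x"), (1, "y"), (3, "z")]:
    heapq.heappush(pq, (prio, item))
prio_order = [heapq.heappop(pq)[1] for _ in range(3)]
```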
Differences in distance and rate exist between the processing and packing stages. In practice, an in-factory store or production inventory is often used to reconcile this contradiction; its physical form may be a general warehouse, some storage locations, or buffer tanks between the processing and packing lines. In this case we assume buffer tanks. A buffer tank stores only one kind of product, but it can receive that product from different production lines and release it to any packing line for packaging.
We again use the Queue module to simulate a buffer tank. But since there are multiple buffer tanks, the model must choose a suitable tank to store the products coming off the production lines; and since there is more than one packaging line, the model must also make a sensible choice when the tanks release products, so that products go to a free packing line. Two kinds of modules, Select Item In and Select Item Out, realize this product routing. A Select Item In module receives items from several input branches and releases them through its single output port; a Select Item Out module receives items through its single input port and releases them by choosing one of its output branches. The options in the dialog box include selection by priority, random selection, sequential selection, and selection driven by the Select port. In this case, between the production lines and the buffer tanks, we first build a Select Item In with two input branches that outputs randomly to a Select Item Out with four branches, completing the routing of bulk products from the lines to the tanks. Between the buffer tanks and the packing lines, a Select Item In with four input branches outputs randomly to a Select Item Out with three output branches, realizing the route from the tanks to the packaging lines (Fig. 106.4).
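The random-selection routing through the four tanks and three packing lines can be sketched as below; the list-based branches are a simplification of the Select Item In/Out modules, and the item count and seed are arbitrary.

```python
import random

random.seed(7)  # reproducible sketch

def route(item, branches):
    """Random-selection routing, as in the Select Item Out module:
    send the item out through one randomly chosen branch."""
    chosen = random.randrange(len(branches))
    branches[chosen].append(item)
    return chosen

buffers = [[], [], [], []]   # four buffer tanks
packers = [[], [], []]       # three packing lines

for item in range(200):      # bulk product off the production lines
    route(item, buffers)
for tank in buffers:         # tanks release to the packing lines
    for item in tank:
        route(item, packers)
```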
The simulation model is driven to operate and interact by a schedule, just as an opera unfolds step by step following the script written by its author.
The simulation model is used to analyze the manufacturing and packaging system and the problems to be solved. Some models involve only the manufacturing operation or only the packaging operation; others involve both. Normally, the problems to be solved determine the corresponding evaluation indices, such as equipment utilization, processing cost and queue length (Jiang et al. 2009).
Utilization. Utilization is the ratio of the equipment's working time to its total running time. Low utilization means the resource is not fully used, but a high utilization rate is not always good either, because it signals capacity strain: once the equipment fails, production inevitably halts and the production cycle lengthens, which breaks the production schedule (Dessouky et al. 1994).
Ult = (Σ_{i=1}^{n} t_i) / T        (106.1)

where i is the product ID, t_i is the working time spent on product i, and T is the total running time of the equipment.
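Eq. (106.1) amounts to a one-line computation; the working times and running time below are invented for illustration.

```python
def utilization(working_times, total_time):
    """Eq. (106.1): Ult = sum of per-product working times t_i
    over the equipment's total running time T."""
    return sum(working_times) / total_time

# e.g. 12 + 8 + 20 = 40 hours worked out of 80 hours running
ult = utilization([12.0, 8.0, 20.0], 80.0)
```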
Cost. Any manufacturing and packaging process consumes resources, and its cost is a key management tool. An activity module has cost parameters in its dialog box that can simulate the cost of the process; the cost information is set on the Cost page. By its character, cost comes in two kinds: fixed cost and time cost. Fixed cost is incurred for every product handled; its value is a constant, unrelated to how long the product is delayed. Time cost depends on the processing time: it equals the cost per unit time multiplied by the operation time (Harrell 1993). The module automatically accumulates its cost and displays it on the plot.
Total Cost = Σ_{i=1}^{n} (t_i · C_per-time-unit + q_i · C_per-item)        (106.2)
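Eq. (106.2) in code, with made-up times, quantities and unit costs:

```python
def total_cost(times, quantities, c_per_time_unit, c_per_item):
    """Eq. (106.2): each product i contributes a time cost
    t_i * C_per_time_unit plus a fixed cost q_i * C_per_item."""
    return sum(t * c_per_time_unit + q * c_per_item
               for t, q in zip(times, quantities))

# two products: 2 h and 3 h of processing, 10 and 20 items handled
cost = total_cost([2.0, 3.0], [10, 20],
                  c_per_time_unit=4.0, c_per_item=0.5)
```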
106.4.2 Optimization
Simulated optimization, also called goal seeking, automatically looks for the best answer to a question, or the optimal parameter values. Within the parameter ranges given in the model, the model is run repeatedly to search the solution space for the parameter values that satisfy the conditions and best reach the decision target. In an optimization model containing an Optimizer module, the issue is usually expressed as a target function or a cost-profit equation. To minimize cost or maximize profit, ExtendSim simulation models not only find the best solution automatically but also spare researchers the long, tedious process of trying different parameter values by hand (Wang et al. 2009).
The running conditions of the optimization model can be changed. For example, we can set the value ranges, sampling methods and constraint conditions of the parameters by limiting the decision value scope or defining constraint equations. We can also affect the solution precision through the run parameters, such as the total number of sample cases, the number of searches per case, when to check convergence, and the number of member cases required for convergence (Zhang and Liu 2010). The Optimizer is not fault-proof: it may
converge to a second-best solution rather than the best one, especially when its running time is not long enough. So before using the best solution in an actual application, we should run the optimization several times, gather enough results, and make sure the runs converge to the same near-optimal solution.
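A crude random-search sketch of goal seeking over a parameter range; the cost surface, the parameter ranges (a hypothetical buffer size and line speed) and the sample count are invented, and ExtendSim's Optimizer uses its own search method rather than this one.

```python
import random

random.seed(1)  # reproducible sketch

def optimize(objective, bounds, samples=2000):
    """Goal seeking by repeated runs: sample parameter values inside
    the given ranges and keep the best (lowest-cost) setting."""
    best_x, best_y = None, float("inf")
    for _ in range(samples):
        x = [random.uniform(lo, hi) for lo, hi in bounds]
        y = objective(x)
        if y < best_y:
            best_x, best_y = x, y
    return best_x, best_y

# illustrative cost surface: minimized near buffer size 40, speed 3
cost = lambda p: (p[0] - 40.0) ** 2 + 10.0 * (p[1] - 3.0) ** 2
best, val = optimize(cost, [(0.0, 100.0), (1.0, 5.0)])
```

With enough repeated samples, the best setting found lands close to the true minimum, illustrating why more runs give a better (and more trustworthy) convergence.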
106.5 Conclusion
Simulation is a good tool for studying manufacturing and packaging operation problems, because multiple factors interact through a variety of channels. A simulation model can improve decisions in a more realistic way than other research models; in fact, a simulation model is like a virtual factory in which to test new design ideas or assess recommended projects. The simulation model discussed in this paper is built in the ExtendSim environment and can be used to optimize the manufacturing, packaging and logistics processes of the factory.
References
Dessouky Y, Maggioli G, Szeflin D (1994) A simulation approach to capacity expansion for the
Pistachio Hulling Process. In: Proceedings of winter simulation conference, IEEE, New
Jersey, pp 1248–1252
Harrell CR (1993) Modeling beverage processing using discrete event simulation. In:
Proceedings of the winter simulation conference, IEEE, New Jersey, 1993, pp 845–850
Hu S, Xu LW (2009) Simulation and optimization for Noshery Service System. Paper presented
at Information Engineering and Electronic Commerce at the international symposium,
pp 721–723
Jiang LF, Sun GT, Zhang N (2009) Layout research of campus traffic system based on system
simulation. In: Technology and innovation conference 2009 (ITIC 2009), pp 1–5
Krahl D (2010) ExtendSim advanced technology: Integrated simulation database. In: Proceedings
in winter simulation conference, 2010, pp 32–39
Krahl D (2011) ExtendSim technology: scenario management. In: Proceedings in winter
simulation conference, 2011, pp 12–19
Pinedo M (2002) Scheduling: theory, algorithms, and systems, 2nd edn. Prentice Hall,
Englewood Cliffs, pp 124–126
Pinedo M, Chao X (1999) Operations scheduling with applications in manufacturing and services.
McGraw Hill, New York, pp 68–74
Wang Y, Perkins JR, Khurana A (2002) Optimal resource allocation in new product development
projects: a control-theoretic approach. IEEE Trans Autom Control 47(8):1267–1276
Wang R, Li Q-M, Zhu H-B, Peng Y-W (2009) A simulation model for optimizing support
resources in requirement of warship mission reliability. In: International conference computer
technology and development, 2009, pp 144–148
Zhang Z-C, Liu J-H (2010) Extend-based research in positioning and optimization the bottleneck
process of MTO enterprises. In: International conference on computer application and system
modeling, 2010, pp 479–481
Chapter 107
Equilibrium and Optimization
to the Unequal Game of Capital-Labor
Interest
M. Wang (&) Y. Lu
Guangdong University of Technology, Guangzhou 510520, Guangdong, China
e-mail: [email protected]
the second is that deficiencies in China's trade unions and other legal institutions resulted in the imbalance; the third attributes it mainly to government, whose overemphasis on the investment environment damaged workers' interests; the last is that the imbalance stems from the difference between employers' and employees' capacity to safeguard their rights (Qi 2008).
According to game theory, under market economy conditions the distribution of labor-capital benefits depends on the game power of both employers and employees, and the imbalance of labor-capital interests is the inevitable result of an unequal game. In theory, under existing conditions, the decisive factors of game power include the resources each side holds, the credibility of its threats, its risk aversion and its time preference (Jack 2009). "Strong capital and weak labor" is an objective fact, and can be called a worldwide phenomenon, determined by the modern market economy system. As the subject of distribution, the corporate system, dominated by the principle of "absolute ownership", is still centered on shareholders (Wang 2008). At the same time, in terms of the market attributes of employers and employees, the immobility of the labor market alongside worldwide capital flows further exacerbates the global imbalance of labor's game capacity (Zaheer 2003).
Based on the principle of freedom of contract in a market economy, a labor contract is seen as one that employers and employees negotiate freely to settle their respective interests, and the distribution of labor interests is likewise seen as the outcome of a game, which can be played once or repeated several times. Macroscopically, the existence of the market economy depends on at least minimal cooperation between employers and employees; labor relations in the market economy are thus a long-term relationship of competition and cooperation, a bargaining relationship over the distribution of benefits. At the micro level, however, a specific employment relationship may at times be non-cooperative. Reflected in the labor contract, a short-term contract (or one-off contract) corresponds to a one-shot game, a non-cooperative relationship between employer and employee; a long-term contract (or open-ended contract) corresponds to a repeated game of competition and cooperation between the two parties. In long-term contracts both sides care about overall benefits, so they may adjust their game strategies, which inhibits the players' "short-sighted behavior" to some extent. When the benefits of labor-management win-win cooperation exceed the inputs (such as investments in firm-specific human capital), cooperation becomes the norm.
Beyond the factors above, the supply of general industrial labor in China far exceeds market demand, which gives labor a congenital weakness in the game. Meanwhile, low labor skills, the absence of autonomous trade unions, the unavailability of means such as "strike" and "threat", and deficient unemployment insurance all compress labor's strategy space and reduce both labor's tolerance for the time cost of labor-capital consultation and the credibility of its means. Thus, relative to the developed countries, labor's game power is even more unbalanced in China. Determined by the stage of economic development, homogeneous competition among enterprises is intense
107 Equilibrium and Optimization to the Unequal Game of Capital-Labor Interest 1017
(e.g., price competition), so enterprises lack the power and capacity to improve labor rights, and labor rights lack protection. Our workers therefore accept a seriously unjust distribution system not because they endorse it, nor because it is a Pareto improvement, but simply because they have no better choice. To pursue better social justice, optimize the distribution pattern, avoid the deterioration of labor relations and achieve stable development of the market economy, it is inevitable that the pattern of the labor game be adjusted and labor's game capacity strengthened.
It must be clear that emphasizing the balance of labor interests does not mean rebuilding the system from scratch; it is a proper adjustment of the distribution of labor benefits under the socialist market economy. In essence, it is an appropriate optimization of the game framework of "strong capital and weak labor" to achieve sustainable economic development and create a harmonious social environment.
The unfair distribution of labor interests stems from unequal game power, so it is
significant to explore how balance can be realized in an unequal game. Assume that
Player A and Player B represent employers and labor respectively, with the strategy
combinations given in the following table; we can then analyze the strategy
equilibrium of the game.
In this model, if D_A, D_B < x, there will be two equilibrium results, namely
(R, L) and (L, R). The D value indicates the payment a player receives when an
equilibrium outcome fails to be reached, i.e., the failure value. e_A, e_B > 0,
where e denotes the distribution advantage of the behavior.
If D_A = D_B, the failure values are equal, which means a peer-to-peer game. If
D_A > D_B, or D_B > D_A, the conditions are unequal. In such repeated games with
incomplete information, the probability that A (the employer) selects R is
(x + e_B - D_B)/(2x + e_B - 2D_B), and its probability of selecting L is
(x - D_B)/(2x + e_B - 2D_B).
Accordingly, for B (labor): if p = (x + e_B - D_B)/(2x + e_B - 2D_B), then B
has no preference between L and R.
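These mixed-strategy expressions can be checked numerically. A minimal sketch; the payoff values x, e_B and D_B below are illustrative, not taken from the chapter's model data:

```python
# Mixed-strategy probabilities in the unequal bargaining game, from the
# indifference condition for labor (B):
#   p_R = (x + e_B - D_B) / (2x + e_B - 2*D_B)
#   p_L = (x - D_B)       / (2x + e_B - 2*D_B)

def employer_mixed_strategy(x, e_B, D_B):
    """Return (p_R, p_L): the employer's probabilities of playing R and L
    that leave labor indifferent between its own L and R."""
    denom = 2 * x + e_B - 2 * D_B
    return (x + e_B - D_B) / denom, (x - D_B) / denom

# Illustrative values (hypothetical): surplus x = 10, distribution
# advantage e_B = 2, failure value D_B = 4.
p_R, p_L = employer_mixed_strategy(10, 2, 4)
assert abs(p_R + p_L - 1.0) < 1e-12  # the two probabilities sum to one
```

Raising B's failure value D_B shifts the employer's mix, which is how the model links bargaining power to failure payoffs.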
1018 M. Wang and Y. Lu
There is no doubt that labor relations in a market economy suffer from the
congenital defect of ‘‘strong capital, weak labor’’. How to take appropriate
measures to reverse labor's unfavorable situation and coordinate capital-labor
interests is a common problem faced by all market economies. The academic
community has discussed this much and put forward a variety of theoretical models
and policy recommendations, which can be summarized as follows:
The first is the neoclassical school. It abandons the ethical factors of classical
economics, emphasizing the natural order of the market and holding that economic
exchange is the main way to resolve labor conflicts. It advocates mobilizing labor's
enthusiasm for production through wages, bonuses and other incentives to realize a
capital-labor win–win situation. The second is the management school. It recognizes
the limitations of ‘‘pure market’’ regulation and the labor conflicts it triggers,
and emphasizes the common development of employers and employees on the basis of
shared interests. The third is the new institutional school. It believes that
capital-labor conflicts of interest can be solved by constructing common interests,
and advocates establishing a diversified economic and political system to ensure
bargaining rights between employers and employees, with independent trade unions to
protect labor's interests and eliminate labor conflicts through institutionalized
channels. The fourth is the liberal reform school. It advocates the establishment
of strong trade unions and the development of strict legislation to regulate labor
relations. It also believes that the government should implement positive economic
and social policies to restrict and correct the recurrent negative impacts of the
market economy. The fifth is the neo-Marxist school. It maintains that a system
should be established so that labor becomes an owner and manager, participating in
corporate decision-making and profit-sharing (Zhao 2009). All of the above make
certain sense, reflect to some extent the basic requirements of labor relations
adjustment under market economy conditions, and have been adopted in some countries.
Judging from practice around the world, the adjustment models of labor relations
vary widely. Because of the congenital defect of the market economy, ‘‘strong
capital and weak labor’’, all countries share the same objectives: to strengthen
labor's game ability, to let the government and other social actors play an
appropriate role in coordination, and to prevent excessive imbalance. Among these
approaches, an allocation mechanism for labor interests based on collective
bargaining has become the mainstream solution to the imbalance.
The unequal bargaining model shows that the hinges deciding labor's power are game
resources, time preference and the credibility of the ‘‘threat’’. Therefore, the
key to optimizing the labor allocation pattern is to strengthen labor's game
ability and optimize the structure of the labor game. To achieve these goals, the
following steps should be taken. One is to enhance individual workers' game
ability. Theory and Western experience have shown that the level and
specialization of labor skills are closely connected with game capacity: ‘‘asset
specificity’’ decides the comparative advantage of the parties (Oliver 2002).
Therefore, strengthening school education, vocational training and work-skills
training is critical to enhancing labor's game capabilities. The second is the
formation of collective labor game power: reform the existing trade union
structure, strengthen union representativeness, actively build autonomous trade
unions in enterprises, and progressively develop industrial trade unions to
strengthen labor's collective game capacity while improving labor's ‘‘threat’’
power. Draw on Western experience and
Acknowledgments Fund projects: this article is an achievement of the Humanities
and Social Science Planning Project of the Ministry of Education (project number:
10YJAZH079), the Natural Science Fund Project of Guangdong Province (project
number: 10151009001000003), and the Guangzhou Society and ‘‘Eleventh Five-Year’’
Planning Fund Project (project number: 10Y73).
References
Aoki CY (2001) Analysis on comparative institution, Leann. Shanghai Far East Press, Shanghai,
pp 385–392
Jack N (2009) Institutions and social conflict, Zhou Weilin. Shanghai People’s Publishing House,
Shanghai, pp 130–141
Oliver WE (2002) The economic institutions of capitalism, Duan Yicai, ed. The Commercial
Press, Beijing, pp 78–84
Qi X (2008) Research on labor relation imbalance. J Jiangxi Adm Coll 10(4):47–50
Wang M (2008) Legal mechanism to generate corporate social responsibility. Theory Guide
30(4):101–104
Wang M (2011) On the coordination of labor relation of ‘‘three mechanisms’’ to implement the
social foundation and limitation. Theory Guide 33(1):34–37
Zaheer DA (2003) Breaking the deadlock: why and how developing countries should
accept labour standards in the WTO. Stanf J Law Bus Finance 9:69–104
Zhao X (2009) Research on Chinese labor relation adjustment mechanism during the transition
period. Economic Science Press, Beijing, pp 34–35
Chapter 108
Innovative and Entrepreneurship
Education in Underdeveloped Western
Regions of China
Abstract This paper analyzes the major problems of innovative and entrepreneurship
education (IEE) in the underdeveloped western regions of China and outlines a set
of implications for local governments and universities. The authors
suggest that a more practical and flexible cultivation system rooted in regional
contexts should be established for bringing a radical change to the backward IEE
in western China. It is important to implement the ‘‘4C’’ concepts in IEE, namely
cross-culture, cross-region, cross-discipline and cross-specialty through strength-
ening international cooperation and mutual regional support, integrating the IEE
into the university curriculum, and building a four-dimensional nexus via part-
nerships between universities, industries, governments and families. While the
paper is written mainly from the perspective of underdeveloped western regions of
China, the discussion allows for generalization, and thus should be applicable to
the development of IEE in other nations facing similar problems.
108.1 Introduction
Along with the boom in its working population, China is entering a new economic
transformation phase. The two words ‘‘innovation’’ and ‘‘entrepreneurship’’ (IE)
are more closely combined than ever before and have become an important
internal force for China’s economic growth. Social development is in urgent need
of innovative and entrepreneurial (IE) talents. With innovation and entrepre-
neurship education (IEE) in universities as its focus, cultivation of IE talents
forged ahead by Chinese governments at various levels is in full swing across the
country. Provinces throughout China have been making great efforts to develop
IE talent cultivation modes fitted to regional contexts. The number of start-up
businesses sees a continuous increase, yet most of these enterprises are not
established on the basis of innovative concepts, knowledge or skills.
Therefore, it has become a great concern for Chinese local governments, educators
and researchers on how to produce more quality IE talents through integration of
innovation and entrepreneurship.
IE research and practice spread across the world, though to varying degrees. In
developed countries like the United States, where entrepreneurship receives
general recognition and concern, entrepreneurial enterprises contribute 40 % of
the value created by all enterprises and have created 75 % of the new job
opportunities in that country.
108 Innovative and Entrepreneurship 1025
Entrepreneurship in China, which started in the 1970s when China adopted the
opening-up policy, has gone through six stages in its development, marked by
four climax periods. Chinese governments at various levels have been building
entrepreneurial cities since 2009, followed by nationwide popularization of IEE.
Internationally, research on entrepreneurship education (EE) began in the
1940s and has witnessed fruitful results. In the past decade, western scholars
carried out EE studies centered on thirteen hotspots, such as EE adjustment and
cultural interpenetration, business and management education, entrepreneurship
management, business models and EE courses (Li 2007).
In China, there has been remarkable improvement in EE research and practice
since 2006. Some successful EE modes were formed to solve problems such as
ambiguous orientation, unsuccessful localization of western IEE concepts and
modes, and ineffective teaching and practice (Lu 2011). Chinese scholars from
various disciplines began to show interest in IEE at the very beginning of the
21st century and expressed views on it from different perspectives. In respect of
performance evaluation, Professor Xie Zhiyuan employed the analytic hierarchy
process in qualitative research on the performance evaluation system for
China's IEE (Xie and Liu 2010); Vesper's Seven Elevation Factors was introduced
into the comprehensive evaluation of IEE in Chinese universities. These studies
are important for learning about the development of IEE in China. However, most
of them are introductions of western IEE concepts and experiences or generalized
suggestions; there is an obvious lack of studies of IEE in regional contexts,
especially empirical studies of more practical value.
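As a side illustration of the analytic hierarchy process mentioned above, priority weights for a pairwise-comparison matrix can be approximated by the row geometric-mean method; the 3 × 3 matrix and the indicator names below are invented for illustration and are not from Xie and Liu's study:

```python
from math import prod

# AHP priority weights via the row geometric-mean approximation.
# A[i][j] states how much more important criterion i is than criterion j
# (reciprocal matrix: A[j][i] == 1 / A[i][j]).

def ahp_weights(A):
    n = len(A)
    gm = [prod(row) ** (1.0 / n) for row in A]  # geometric mean of each row
    total = sum(gm)
    return [g / total for g in gm]              # normalized priority vector

# Hypothetical 3x3 comparison of IEE indicators: awareness vs. knowledge
# vs. competencies (values invented for illustration).
A = [[1.0, 3.0, 5.0],
     [1 / 3, 1.0, 2.0],
     [1 / 5, 1 / 2, 1.0]]
w = ahp_weights(A)
assert abs(sum(w) - 1.0) < 1e-9  # weights sum to one
```

The geometric-mean method is a common shortcut for the principal-eigenvector computation used in full AHP studies.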
It has long been a serious problem for western China that very little progress
in IEE could be made even though a large amount of manpower, material, money
and time has been devoted to it; IEE research and practice remain at a
superficial level where innovation has not been fully substantialized. To be
specific, IEE in western China is confronted with big impediments in respect of
resources, concepts, and educational and supporting systems.
First, most of the western regions in China are underdeveloped, with relatively
limited resources and educational funds. Local governments, as the policy makers
and allocators of various social resources, still cling to conservative
administrative concepts and beliefs, while subordinate departments, affected by
the work style and attitude of their higher authorities, usually hold a
wait-and-see attitude toward IEE policies. This has led to the current situation
in which policy making far outweighs implementation.
1026 C. Lu et al.
Local governments and universities play a very important role in bringing a radical
change to the backward IEE in western China. Local governments, as the policy
makers and allocators of resources, should be more IEE-supportive and promote
effective utilization of intellectual, manpower, financial and material resources.
Universities, as the main implementers of IEE, are responsible for achieving
substantial progress in IEE by integrating mass education with elite education on
the basis of introducing advanced international IEE concepts. On one hand, they
should popularize basic IE knowledge among all students and guide students to
internalize IE concepts and develop IE competencies. On the other hand, intensive
education should be accessible to students with an entrepreneurship mindset,
aptitude and potential. We hereby outline some preliminary policy and educational
implications for governments and academics on establishing an IEE cultivation
mode rooted in the regional contexts of western China.
Concept determines how we act, so universities must break away from the
conservative mindset that prevents people from risk-taking, which is one of the
main causes of the slow development of IEE in western regions. IEE, in
A research team from the Experimental Zone for the Reform Pilot Project to
Cultivate Interdisciplinary Entrepreneurial Talents in the China-ASEAN Free Trade
Zone conducted a survey on the state of IEE in the underdeveloped western
regions of China in 2010. One focus of the survey was university students'
self-evaluation of IE. According to its empirical analysis, students consider
themselves of medium level on the first-level entrepreneurship indicators, which
consist of awareness, psychological qualities, knowledge and competencies. For
most students, entrepreneurial knowledge scores lower than the other three
indicators and is considered the greatest need for university students with
entrepreneurship intentions. Among the 28 second-level indicators, professional
abilities, innovative abilities, learning abilities and foreign-language
communicative abilities are considered most important. The survey results
confirm the necessity and feasibility of popularizing IEE in universities in
western China. Yet IEE is a continuous, dynamic, life-long process and should be
integrated into the whole education system. While popularizing IEE among
students throughout their university study on the basis of a process-oriented
education concept, which helps students lay a solid foundation for future
business start-ups, universities should also provide opportunities for graduates
and people from all walks of life with entrepreneurship intentions to access IEE
via continuing education or in more flexible ways, such as distance training
programs, lectures, and the like. This during-and-after-university mode of IEE
promises better IE prospects. We therefore maintain that university students do
not need to choose between getting employed and starting up a business upon
graduation; the choice should be made when everything is ready.
References
Li G (2007) Hotspots of international entrepreneurship education. High Educ Dev Eval China
27(4):70–76 (Chinese)
Lu B (2011) Establishment of education mode for cultivating innovative and entrepreneurial
talents. Hei Longjiang High Educ Res China 207(7):140–141 (Chinese)
Wu Q, Zhang H (2008) An empirical study on the influence of environment for innovative and
entrepreneurship on students’ entrepreneurship intensions. Hei Longjiang High Educ Res
China 175(11):129–131 (Chinese)
Xie Z (2009) Localization of entrepreneurship education at undergraduate level. Explor Educ
Dev China 30(4):81–83 (Chinese)
Xie Z, Liu W (2010) Evaluation system for innovative and entrepreneurship education in
universities. Innovative Entrepreneurship Educ China 1(6):3–8 (Chinese)
Chapter 109
Network-Based Optimal Design
for International Shipping System
Abstract Lean concepts and lean thinking are expressions of industrial
engineering as reflected in different countries, enterprises and environments.
Cost management in an international shipping system is an application of system
optimization using lean management theory and methods. After optimization, lean
cost management can be realized.
109.1 Introduction
E. Qi L. Zhu
Management and Economics Department, Tianjin University, Tianjin, China
L. Zhu M. Yu (&)
CCCC International Shipping Corp, Tianjin, China
e-mail: [email protected]
109.2 Methodology
and icebergs; sea conditions such as ocean circulation and swell; and ship
conditions, comprising vessel age, draft, speed, tonnage, stowage and crew.
Voyage cost usually refers to the cost per unit time in navigation (Pa)
multiplied by the transit time (Ta), plus the cost per unit time in port during
loading/discharging (Pp) multiplied by the time in port (Tp).
Computational formula:

P = Σ (Pa × Ta + Pp × Tp)    (109.1)
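Eq. (109.1) can be evaluated directly. A minimal sketch, with per-leg cost rates and times that are hypothetical figures rather than data from the chapter:

```python
# Voyage cost per Eq. (109.1): for each leg, cost per unit time at sea
# (Pa) times transit time (Ta), plus cost per unit time in port during
# loading/discharging (Pp) times port time (Tp), summed over all legs.

def voyage_cost(legs):
    """legs: iterable of (Pa, Ta, Pp, Tp) tuples; return total cost P."""
    return sum(Pa * Ta + Pp * Tp for Pa, Ta, Pp, Tp in legs)

# Two hypothetical legs: (cost/day at sea, days at sea,
#                         cost/day in port, days in port)
legs = [(20_000, 5.0, 8_000, 1.5), (18_000, 7.0, 8_000, 2.0)]
total = voyage_cost(legs)  # 112000 + 142000 = 254000.0
```

Keeping the sea and port terms separate makes it easy to test lean scenarios, e.g., trading slower (cheaper) transit against longer port queues.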
109.3 Results
Based on lean theory, taking the workflow and its continuity as the object,
research on how to ensure workflow continuity and reduce waste under uncertain
conditions synthetically applies the Cycle Operation Network (CYCLONE), Genetic
Algorithms (GA), 4D-CAD, Line of Balance (LOB), the Theory of Constraints (TOC)
and Extensible Markup Language (XML) to establish an integrated workflow
management method and realize lean management of the project workflow. Integrated
workflow management consists of the following three modules: a simulation module,
an optimization module and a visualization module.
There are plenty of optimization models for shipping systems based on different
viewpoints, for example a queuing system aiming to maximize the benefit of
1034 E. Qi et al.
109.4 Conclusion
Chapter 110
A Dynamic Analytic Approach to Study
on the Interaction Between Product
Innovation and Process Innovation
of the Equipment Manufacturing
Enterprises
Keywords Equipment manufacturing · Interactive relationship · Product innovation ·
Process innovation · System dynamics (SD)
110.1 Introduction
From the industrial point of view, technological progress has a deep impact on
the advancement and rationalization of the industrial structure of equipment
manufacturing (Feng 2008). As technological innovation is the main source of
technological progress in equipment manufacturing, the coordinated development of
product innovation and process innovation is an important factor in enterprises'
technological innovation success (Kim and Choi 2009). Guizhou Province is a
traditional manufacturing province with good foundations and development
opportunities. However, the rise of the emerging manufacturing provinces through the
In the 1970s, William J. Abernathy and James M. Utterback put forward the AU
model of innovation type and degree changing with the technology life-cycle
(Utterback and Abernathy 1975), which set a precedent for collaborative research
on product innovation and process innovation (Bi et al. 2007). After the AU model
was built, Hayes and Wheelwright proposed a formalized relation model of product
innovation and process innovation, namely the product-process matrix conceptual
model, which provides a quantitative basis for enterprises' production and market
diversification decisions (Hayes and Wheelwright 1979). Peter M. Milling and
Joachim Stumpfe innovatively used the system dynamics (SD) method, starting from
the complexity of product and process as they change with innovation, making
research on the interaction between them more systematic (Milling and Stumpfe
2000). Some domestic scholars have put forward interactive models of product
innovation and process innovation corresponding to Chinese national conditions on
the basis of overseas studies. Bi Kexin built an SD model of the interaction
between product innovation and process innovation for a simulation study of a
particular manufacturing enterprise (Bi et al. 2008). To date, domestic studies
have rarely used the SD method to study the interaction between product
innovation and process innovation in equipment manufacturing. Therefore, an SD
simulation study of an equipment manufacturing enterprise has important practical
significance.
In the process of product innovation and process innovation, the interaction
between the product innovation subsystem and the process innovation subsystem
occurs mainly in the decision-making process, the R&D process and the
manufacturing process. In the decision-making stage, decision-makers allocate
resources between product innovation and process innovation to determine the
proportions of technology development inputs for product and process innovation
(Labeaga and Ros 2003). In the early R&D stage, the product development and
design department should exchange information frequently with the R&D department
(Guo 1999); the main purpose is to set the framework for the development of
process and product. In the manufacturing stage, there is further exchange of
technical information between the various departments (Eswaran and Gallini 1996).
The overall structure displaying the interaction between the product innovation
and process innovation subsystems is shown in Fig. 110.1.
Fig. 110.1 Overall structure of the interaction between the product innovation
and process innovation subsystems (variables include the numbers of potential and
implemented product/process innovations, the product and process innovation
rates, resources for product and process R&D, maturity of the product and process
life cycles, product attractiveness, complexity and variety of the product line,
flexibility of the manufacturing process, the technology transfer factor, the
yield factor, and the correlation between product and process)
Only the main effect variables in the product innovation and process innovation
subsystems were selected to structure the causal loop diagram (see Fig. 110.2),
according to the actual situation of equipment manufacturing and considerations
of availability and maneuverability.
The variables were then quantified and a simple SD model was structured (see
Fig. 110.3) according to the causal loop diagram.
Fig. 110.3 System dynamics simulation of product innovation and process innovation
110 A Dynamic Analytic Approach 1041
In this model, the number of implemented product innovations and the number of
implemented process innovations are two level variables, the product innovation
rate and process innovation rate are two rate variables, and the other variables
are auxiliary variables or constants.
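The structure just described, two level variables accumulating two rate variables, can be sketched as a simple Euler-integrated simulation. All coefficients and the resource-splitting rule below are illustrative assumptions, not the calibrated JY Kinetics model:

```python
# Minimal system-dynamics sketch of the model's structure: two level
# variables (implemented product/process innovations) accumulate two rate
# variables; rates decline with life-cycle maturity and are coupled by a
# correlation factor.  Euler integration with step dt.

def simulate(years=10, dt=1.0, resources=15.0, proportion=2.0, corr=0.6):
    """Return final levels and the per-step (product_rate, process_rate)."""
    # Illustrative split of R&D resources between product and process.
    prod_res = resources * proportion / (1 + proportion)
    proc_res = resources - prod_res
    prod_level = proc_level = 0.0
    history = []
    for step in range(int(years / dt)):
        maturity = step * dt / years                 # grows from 0 toward 1
        prod_rate = 0.1 * prod_res * (1 - maturity)  # declines with maturity
        proc_rate = 0.1 * proc_res * (1 - maturity) + corr * prod_rate
        prod_level += prod_rate * dt                 # level = integral of rate
        proc_level += proc_rate * dt
        history.append((prod_rate, proc_rate))
    return prod_level, proc_level, history

prod_lvl, proc_lvl, hist = simulate()  # current-strategy run
```

Changing `proportion`, `resources` or `corr` mimics the three scenario adjustments discussed below, though with made-up dynamics rather than the calibrated equations.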
The model was calibrated as far as possible to ensure consistency between the
model's behavior and the real system's behavior. It first passed the unit
consistency test and the model test. Then, using practical data from JY Kinetics
Co., Ltd., the model was repeatedly calibrated until its behavior came very close
to reality; in the end, the relative error is less than 10 %.
The simulation of product innovation and process innovation in Fig. 110.4 shows
the present tendency of JY Kinetics Co., Ltd.'s product-process interaction: the
product innovation rate gradually diminishes and is overtaken by the process
innovation rate in 2012.
Because this paper focuses on policy simulation and effect prediction in order
to aid decision-making, we chose the three variables below, which can be
regulated and controlled by managers.
Change the investment proportion. With other variables unchanged, adjust
investment proportion = 2 (the current strategy) to investment proportion
1 = 0.5 and investment proportion 2 = 8; the results are shown in Fig. 110.5.
Comparing Figs. 110.4 and 110.5, we can see that when the investment proportion
is reduced, the resources for product R&D fall and the product innovation rate
drops considerably. At the same time, the resources for process R&D
increase and the process innovation rate increases to a certain extent; however,
as time passes, the process innovation rate will gradually decrease again. When
the investment proportion increases, the resources for product R&D also increase
and the product innovation rate is higher than before under the basic strategy,
but it still tends to go down as time goes by; the process innovation rate
increases, which causes a small-scope fluctuation.
1042 T. Wang et al.
Fig. 110.4 Product innovation rate and process innovation rate under the current
strategy, 2006–2015 (item/year)
Change resources R&D. With other variables unchanged, adjust resources
R&D = 15 million yuan (the current strategy) to resources R&D1 = 7.5 million yuan
and resources R&D2 = 30 million yuan. From the simulation result shown in
Fig. 110.6, we can see that when resources R&D are reduced, both the product
innovation rate and the process innovation rate fall by a large margin. When
resources R&D are increased, both rates are higher than before, but the product
innovation rate still trends downward while the growth of the process innovation
rate lasts longer as time goes by.
Change the correlation between product and process. With other variables
unchanged, adjust the correlation between product and process from 0.6 (the
current strategy) to correlation 1 = 0.3 and correlation 2 = 0.9. The result of
the SD
Figs. 110.5, 110.6 and 110.7 Interaction between product innovation and process
innovation, 2006–2015 (item/year): product and process innovation rates under
investment proportions 1 and 2, resources R&D1 and R&D2, and correlations
between product and process 1 and 2
trend of product innovation and process innovation, we try to work out the
optimal strategy for making these two factors develop steadily. Fourth,
enterprises should have more people with strong abilities in converting research
achievements; only in this way can the company's capacity to transfer its
research be enhanced. It is also very important to communicate with stakeholders,
which makes it possible to create a positive and active innovation environment
that attracts innovative personnel and promotes technological innovation.
References
Bi K-x, Ai M-y, Li B-z (2007) The classification study on analysis models and approaches of
synergy development between product innovation and process innovation (in Chinese). Chin J
Manage Sci 15(4):138–149
Bi K-x, Sun D-h, Li B-z (2008) Product innovation and process innovation—a system dynamics-
based simulation of the interaction in manufacturing enterprises (in Chinese). Sci Sci Manage
Sci Technol 12:75–80
Chen S, Bi K-x, Gao W (2009) Systematic analysis on associated factors between product
innovation and process innovation based on manufacturing industry (in Chinese). Chin J Mech
Eng 20(6):709–713
Eswaran M, Gallini N (1996) Patent policy and the direction of technological change. RAND J
Econ 27(4):722–746
Feng M (2008) Study on technological progress of equipment manufacturing industry in China:
1996–2006 (in Chinese). World Econ Polit Forum 2:67–69
Guo B (1999) Study on modes and the interaction between product innovation and process
innovation (in Chinese). Sci Manage Res 6:51–55
Hayes RH, Wheelwright SC (1979) The dynamics of process-product life cycles. Harvard Bus Rev
57(2):127–136
Jackson MC (2005) Systems thinking—creative holism for managers, ch 5. China Renmin
University Press, Beijing, pp 35–43
Kim SW, Choi K (2009) A dynamic analysis of technological innovation using system dynamics.
In: Proceedings of the POMS 20th annual conference, Orlando, 1–4 May 2009
Labeaga JM, Ros EM (2003) Persistence and ability in the innovation decision. Business
Economics, Series 1
Lee T-L, von Tunzelmann N (2005) A dynamic analytic approach to national innovation systems:
the IC industry in Taiwan. Res Policy 34:425–440
Milling PM, Stumpfe J (2000) Product and process innovation—a system dynamics-based analysis
of the interdependencies. In: Proceedings of the 18th international conference of the system
dynamics society
Sun D-h (2007) Study on the interaction between product innovation and process innovation in
manufacturing enterprises (in Chinese). Harbin University of Science and Technology, Harbin
Utterback JM, Abernathy WJ (1975) A dynamic model of product and process innovation.
Omega 3(6):639–656
Zhu T-b (2009) Research on the interaction between influencing factors of regional equipment
manufacturing technical innovation (in Chinese). Harbin University of Technology, Harbin
Chapter 111
A Multi-agent Simulation System
Considering Psychological Stress
for Fire Evacuation
111.1 Introduction
Modeling and simulation tools for analyzing fire evacuation are useful in public
place design for enhancing passenger safety (Sharma et al. 2008), and different
tools have been developed to study fire safety (Owen et al. 1996; Galea and
Galparsoro 1994). Among these tools, multi-agent simulation systems are used in a
growing number of areas (Drogoul et al. 2003; Zhang et al. 2009; Gonzalez 2010).
This tool is based on a computational methodology that allows building an arti-
ficial environment populated with autonomous agents. Each agent is equipped with
sensors, decision-making rules and actuators, which make it capable of interacting
with the environment (Pan et al. 2007). In the area of fire evacuation studies, for moral and
legal reasons, we are not permitted to deliberately expose normal experimental
participants to real fire conditions, which would pose a life-threatening degree of risk
(Hancock and Weaver 2005). Multi-agent simulation techniques can potentially
help achieve a better understanding of the fire evacuation process without
threatening the safety of real people.
Unfortunately, there is a lack of multi-agent simulation frameworks to allow
human factors, such as psychological stress, to be taken into account (Sharma et al.
2008). Fires are perceived as very stressful and a person, who has to decide how to
get out of a building and away from an uncontrolled fire, is under extreme
psychological stress (Benthorn and Frantzich 1999). According to theories of
information processing, how people interpret information depends on the degree of
stress they are experiencing (Janis and Mann 1977; Miller 1960). Thus,
psychological stress in a fire will affect people’s perception of various environmental
factors and thereby influence their actions in the process of fire evacuation (Ozel
2001; Nilsson et al. 2009). For example, a person’s interpretation of emergency
information and other people’s actions, as well as his or her subsequent behavior,
i.e., decision to evacuate, choice of exit and pre-movement time, is partly related
to his or her psychological stress level (Nilsson et al. 2009).
Psychological stress is an important factor affecting people’s evacuation
behavior, and must therefore be considered in multi-agent simulation systems. This
paper proposes a new model to describe individual evacuation behavior and develops
a new multi-agent simulation system for fire evacuation, in which people may behave
differently depending on their psychological stress level. The simulation system
accounts for the uncertainty in behavior under psychological stress, which
yields more realistic results about fire evacuation.
Fig. 111.1 Architecture of the simulation system: the Input Module, Crowd Initialization Module and Interventions Generation Module feed the Simulation Environment Module and the agents’ behavior model, whose results pass to the Data Collection Module and Visualization Module for output
In a multi-agent simulation system, the most important part is the agent’s
behavior model, which determines the validity of the simulation results. The behavior model in
this study adopts the process of ‘‘External stimuli-Internal status-Decision
making’’ (see Fig. 111.2), called the EID model in the present study. The basic
idea of the EID model is that external stimuli have direct effects on a person’s internal
status, which in turn influences the process of decision making (Luo et al. 2008).
In the EID model, external stimuli are divided into four categories by people’s
sensory system: visual stimuli, auditory stimuli, thermal stimuli and olfactory
stimuli. Visual stimuli include the burning fire and smoke, which are quantified by
fire size and smoke density, respectively. The scale of the fire and smoke increases
as simulation time goes on. Auditory stimuli include the sound of the fire alarm and
the sound of the burning fire, quantified by their sound intensity. The fire alarm is
activated when the fire breaks out, and its sound intensity remains constant during
the fire. The sound intensity of the burning fire varies with the scale of the fire. Thermal
stimuli represent the heat caused by burning fire, which is quantified by envi-
ronment temperature. The olfactory stimuli represent the smell of smoke, which is
quantified by smoke density. All these quantified parameters take discrete levels from
0 to 5: ‘‘0’’ means there is no stimulus and ‘‘5’’ means the stimulus has reached its
maximum. All these external stimuli collectively impact a person’s psychological
stress level. Psychological stress influences people’s decision making process,
mainly including route choice and travelling speed.
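As a concrete illustration, the mapping from the four quantified stimuli to a single stress level can be sketched as a weighted aggregation on the same 0–5 scale. The weights and the linear rule below are assumptions for demonstration; the chapter does not state the exact formula here.

```python
# Hedged sketch: combine the four stimulus levels (each on the 0-5 scale
# described above) into one psychological stress level. The weights are
# assumed, not taken from the paper; they sum to 1, so the result also
# stays in [0, 5].

def stress_level(visual, auditory, thermal, olfactory,
                 weights=(0.35, 0.2, 0.3, 0.15)):
    """Return a psychological stress level on the same 0-5 scale."""
    stimuli = (visual, auditory, thermal, olfactory)
    if not all(0 <= s <= 5 for s in stimuli):
        raise ValueError("each stimulus level must be in [0, 5]")
    return sum(w * s for w, s in zip(weights, stimuli))

print(round(stress_level(4, 2, 3, 1), 2))  # → 2.85
```

Any monotone aggregation (e.g. taking the maximum stimulus, or a nonlinear response) would fit the qualitative description equally well; the weighted sum is simply the most transparent choice.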
1050 F. Meng et al.
Fig. 111.2 The EID behavior model: the intensities of the external stimuli (burning fire, smoke, fire alarm sound, burning fire sound, heat caused by fire, smell of smoke) jointly determine the stress level, which in turn drives route choice and travelling speed
Emergency signs are set up at each intersection, which direct agents to the exit
by the optimal route. The decision of route choice is a two-stage process. In the
first stage, agents recognize the directions given by emergency signs at intersec-
tions. Previous studies showed that most participants in fire evacuation do not pay
enough attention to emergency signs (Tang et al. 2009); thus whether an agent can
find signs at an intersection is a probabilistic event, whose probability is influenced
by the psychological stress level of that agent. It is believed that agents tend to neglect
emergency signs under high psychological stress, so the probability of finding
emergency signs is negatively correlated with agents’ psychological stress level. In
the second stage, agents evaluate the hazard level of each direction by the scale of the
fire and smoke. If an agent does not find signs in the first stage, it will choose the
direction with the least hazard level from all optional directions; if it recognizes
the directions given by signs, it will continue to judge whether the hazard level of
the given direction exceeds a pre-defined threshold. If not, it will travel in the
direction given by the sign; if it does, it will choose the direction with the least
hazard level from the other directions. It is believed that people increase their walking
speed under stress, and even start to run when suffering extreme psychological stress. In
the behavior model, agents’ travelling speed is positively correlated with their
psychological stress level (Fig. 111.3).
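The two-stage route choice described above can be sketched as follows. The sign-finding probability function, the 0–5 stress scale, and the threshold value are illustrative assumptions; the chapter only states the qualitative correlations.

```python
import random

# Sketch of the two-stage route choice. The probability function and the
# hazard threshold are assumed for demonstration, not from the paper.

SIGN_TRUST_THRESHOLD = 0.6  # hypothetical hazard level above which a sign is distrusted

def find_sign_probability(stress):
    # Negatively correlated with psychological stress (0-5 scale).
    return max(0.0, 1.0 - stress / 5.0)

def choose_direction(stress, sign_direction, hazards, rng=random.random):
    """hazards maps each optional direction to a hazard level in [0, 1]."""
    # Stage 1: does the agent notice the emergency sign at the intersection?
    sees_sign = rng() < find_sign_probability(stress)
    # Stage 2: evaluate hazard levels.
    if sees_sign:
        if hazards[sign_direction] <= SIGN_TRUST_THRESHOLD:
            return sign_direction                      # follow the sign
        # sign direction too hazardous: least hazard among the others
        others = {d: h for d, h in hazards.items() if d != sign_direction}
        return min(others, key=others.get)
    # no sign found: least-hazard direction among all options
    return min(hazards, key=hazards.get)

print(choose_direction(1.0, "north", {"north": 0.2, "south": 0.4}))  # → north
```

Passing a deterministic `rng` (e.g. `lambda: 0.0`) makes the stochastic stage-1 outcome reproducible, which is convenient for testing the decision logic in isolation.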
Fig. 111.3 Flow chart of the route-choice decision: on reaching an intersection, the agent evaluates the hazard level of the chosen route if a sign is recognized, or of all optional routes otherwise
Fig. 111.4 The top views of test environment in normal condition (left) and in fire emergency
(right)
‘‘Data Collection Module’’ is used to collect real-time data of each agent during
simulation. The data include: travel speed, psychological stress level, probability
of recognizing emergency signs and escape time. These data are used to analyze
the efficiency of fire evacuation.
The most important part of the present simulation system is to consider the
influence of psychological stress. To demonstrate the effect of psychological
stress, two different kinds of simulation are conducted, and their results are
compared. In the first kind of simulation, agents’ psychological stress level is not
taken into consideration; that is, agents’ behavior is not affected by external
stimuli. In the other simulation, the influence of psychological stress is added, just
as described above. For each kind of simulation, ten trials are conducted. In
each trial, the number of escaped agents, which represents how many agents have
escaped before the fire spreads all around the environment, and the average escape
time, which indicates how much time an agent spends on average to reach the
exit, are recorded for comparison.
A summary of the simulation results is displayed in Table 111.1. A t-test is
conducted to compare the difference between the two simulations. The number of
escaped agents in the simulation without stress (mean = 94.9, s.d. = 2.5) is sig-
nificantly higher than that in the simulation with stress (mean = 83.8, s.d. = 3.7),
with a p value of less than 0.001. The average escape time is 35.5 s (s.d. = 3.0 s)
and 43.9 s (s.d. = 4.3 s) in the simulation without stress and with stress, respec-
tively, and their difference is significant (p value = 0.002).
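The between-condition comparison above can be reproduced with a two-sample (Welch) t-test. The trial values below are invented for illustration so that their means match the reported 94.9 and 83.8 escaped agents; they are not the paper’s actual trial data.

```python
# Welch two-sample t statistic, computed with the standard library only.
# Sample data are fabricated to illustrate the comparison, not the
# authors' recorded trials.
from statistics import mean, stdev

def welch_t(a, b):
    na, nb = len(a), len(b)
    va, vb = stdev(a) ** 2, stdev(b) ** 2      # sample variances
    return (mean(a) - mean(b)) / ((va / na + vb / nb) ** 0.5)

no_stress = [92, 95, 97, 94, 96, 93, 98, 95, 94, 95]   # escaped agents, 10 trials
with_stress = [84, 81, 88, 83, 85, 80, 86, 84, 82, 85]

t = welch_t(no_stress, with_stress)
print(round(t, 2))  # a large positive t indicates significantly more escapes without stress
```

With ten trials per condition and a mean difference of about eleven agents, the statistic is far into the rejection region, consistent with the reported p value below 0.001.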
Acknowledgments The authors would like to acknowledge the support of research program of
Foxconn Technology Group.
References
Benthorn L, Frantzich H (1999) Fire alarm in a public building: how do people evaluate
information and choose an evacuation exit? Fire Mater 23(6):311–315
Drogoul A, Vanbergue D, Meurisse T (2003) Multi-agent based simulation: where are the agents?
Multi-agent-based simulation II:43–49
Galea E, Galparsoro J (1994) EXODUS an evacuation model for mass transport vehicles. Fire Saf
J 22:341–366
Gonzalez RA (2010) Developing a multi-agent system of a crisis response organization. Bus
Process Manage J 16(5):847–870
Hancock P, Weaver J (2005) On time distortion under stress. Theoret Issues Ergonomics Sci
6(2):193–211
Janis IL, Mann L (1977) Decision making: a psychological analysis of conflict, choice, and
commitment. Free Press, New York
Luo L, Zhou S, Cai W et al (2008) Agent-based human behavior modeling for crowd simulation.
Comput Anim Virtual Worlds 19(3):271–281
Miller JG (1960) Information input overload and psychopathology. Am J Psychiatry
116(2):695–704
Nilsson D, Johansson M, Frantzich H (2009) Evacuation experiment in a road tunnel: A study of
human behaviour and technical installations. Fire Saf J 44(4):458–468
Owen M, Galea ER, Lawrence PJ (1996) The EXODUS evacuation model applied to building
evacuation scenarios. J Fire Prot Eng 8(2):65–84
Ozel F (2001) Time pressure and stress as a factor during emergency egress. Saf Sci
38(2):95–107
Pan X, Han CS, Dauber K et al (2007) A multi-agent based framework for the simulation of
human and social behaviors during emergency evacuations. AI Soc 22(2):113–132
Sharma S, Singh H, Prakash A (2008) Multi-agent modeling and simulation of human behavior in
aircraft evacuations. IEEE Trans Aerosp Electron Syst 44(4):1477–1488
Tang CH, Wu WT, Lin CY (2009) Using virtual reality to determine how emergency signs
facilitate way-finding. Appl Ergonomics 40(4):722–730
Yuan W, Tan KH (2011) A model for simulation of crowd behaviour in the evacuation from a
smoke-filled compartment. Physica A 390:4210–4218
Zhang Q, Zhao G, Liu J (2009) Performance-based design for large crowd venue control using a
multi-agent model. Tsinghua Sci Technol 14(3):352–359
Chapter 112
A Multi-Granularity Model for Energy
Consumption Simulation and Control
of Discrete Manufacturing System
112.1 Introduction
With global climate change and insecure energy supplies, the efficient
use of available energy resources is a key concern for modern society
and industry. Companies today are becoming increasingly interested in
J. Wang (&) S. Li
Department of Industrial and Manufacturing System Engineering, Huazhong University
of Science and Technology, Wuhan, People’s Republic of China
e-mail: [email protected]
J. Liu
School of Mechanical Engineering and Automation, Beihang University,
Beijing, People’s Republic of China
measuring and reducing the environmental footprint of their products and activi-
ties. The manufacturing industry, with its about 75 % of the world’s yearly coal
consumption, 20 % of global oil consumption, 44 % of the world’s natural gas
consumption and 42 % of all electricity produced (IEA 2007), is one of the main
energy consumers and largest emitters of carbon dioxide (CO2). The pressures
coming from energy prices, environmental regulations with their associated costs
for CO2 emissions and the changing purchasing behavior of customers make the
manufacturing industry adopt new methodology and techniques for a sustainable
manufacturing (Bunse et al. 2011).
Energy efficient manufacturing (Rahimifard et al. 2010), which aims to manage
energy efficiency and production performance in an integrated way, can be
beneficial to industrial companies in economic, environmental and societal terms
by reducing energy consumption while maintaining system throughput. Although
energy intensive industries (e.g. steel, cement, pulp and paper, chemicals) remain
in the focus (Solding and Petku 2005), research finds challenges for small and
medium sized enterprises and the non-energy intensive industries (e.g. discrete
mechanical manufacturing industry). These should not be neglected, yet they
lacked research attention in the past (Ramírez et al. 2005). Studies show that
there is a significant potential to improve energy efficiency in discrete manufac-
turing. Even with already available technologies, improvements of 10–30 % are
likely to be achieved (Devoldere et al. 2007; Herrmann et al. 2011).
The introduction of energy consumption as a parameter to support the decision
making process may help to forecast and manage the energy costs associated with
the production plan while maintaining a suitable throughput. As a very effective
approach and tool for problem solving and optimizing in manufacturing systems
design, operation and control, discrete event simulation (DES) provides engineers
with a flexible modeling capability for extensive analysis of a production flow and its
dynamic behavior. Currently, the main parameters measured in DES are throughput,
utilizations, and time-span. A review of commercially available manufacturing
simulation tools (e.g. Plant Simulation, Arena, Quest, etc.) reveals that they do not
support the energy evaluation for production schedules. With the development of
real time electrical signal monitoring technologies, the information-rich energy data
can be collected and analyzed in the ICT systems (Vijayaraghavana and Dornfeld
2010). A holistic energy consumption model is required for simulation applications
in discrete manufacturing systems. In this paper, a multi-granularity energy con-
sumption model is constructed to simulate and control discrete machining
systems for energy management purposes.
The following factors should be considered when defining the energy consumption
profile for simulation applications:
• The partition method of energy consumption process of equipment can be
applied to different facilities used in manufacturing industry.
• Several energy consumption states can be merged into one state, or one state split
into several, for different simulation granularity objects.
• The energy consumption state should accommodate the energy resources of
concern (i.e. electricity, gas, heat, and coal) in the simulation of the production
system.
• Some instantaneous states with higher energy consumption must be included in
order to evaluate the energy efficiency of the control strategy by simulation.
From the literatures (Rahimifard et al. 2010; Gutowski et al. 2006; Dietmair and
Verl 2009; Weinert et al. 2011; Heilala et al. 2008; He et al. 2012; Johansson et al.
2009; Mouzon et al. 2007; Le et al. 2011; Mori et al. 2011), it is known that a
typical electrical consumption profile of a CNC machine has a number of different
operational states, which arise from the activities of its components and determine
the power consumption. In each state, other types of energy resources can be
appended according to the practical requirements of the specific machine. In this
paper, the states are classified into the following types with different
characteristics and simulation intentions.
• Power off: The machine power is off and no energy resources are
consumed.
• Shut down: The machine will consume some energy to be shut off even if the
duration of this state is very short.
• Warm up: The electrical switch is on and some peripheral equipment of the
machine starts up. Although the warm-up time is short, the required power is
high.
• Power on: This is an idle state with no material removal. The whole machine
consumes the basic energy in this state. This state can serve as a lower energy
saving mode when no production activity takes place.
• Start up: This state is the transition between the power on and production modes.
The main components of the machine (e.g. the spindle and coolant system) will
change to the working state. This state is an acceleration process that consumes
power at higher amplitude for a short duration.
• Stand by: This is also an idle state. All drives and pumps of the machine are in
stand-by but with no material removal.
• Production: This is a working state with the material removal process. There are
short intervals with no material removal because of gaps between the
machining paths. In this paper, the duration of this state is defined as the time
from when the product is loaded into the machine until it is unloaded.
• Maintenance: The machine is maintained according to preventive maintenance
(PM) schedule or stochastic failure (SF). The energy type and quantity can be
defined for maintenance activities if they are of concern.
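The state list above maps naturally onto a state-power profile table from which energy is accumulated as power multiplied by duration. The sketch below uses the power values given later in the experiment (warm up 4, production 6, power on 2, shut down 1 power units per second); the remaining values are placeholders, since real figures would come from measurement.

```python
# Sketch of the operational states as a profile table. Power values for
# warm up, power on, production and shut down follow the experiment in
# Sect. 112.4; the others are illustrative placeholders.
from enum import Enum

class State(Enum):
    POWER_OFF = 0
    SHUT_DOWN = 1
    WARM_UP = 2
    POWER_ON = 3
    START_UP = 4
    STAND_BY = 5
    PRODUCTION = 6
    MAINTENANCE = 7

POWER = {  # power demand per state, in power units per second
    State.POWER_OFF: 0.0,
    State.SHUT_DOWN: 1.0,
    State.WARM_UP: 4.0,
    State.POWER_ON: 2.0,
    State.START_UP: 5.0,    # placeholder
    State.STAND_BY: 3.0,    # placeholder
    State.PRODUCTION: 6.0,
    State.MAINTENANCE: 0.5, # placeholder
}

def energy(trace):
    """trace: list of (state, duration_s); energy = sum of power * duration."""
    return sum(POWER[s] * d for s, d in trace)

print(energy([(State.WARM_UP, 5), (State.PRODUCTION, 60), (State.POWER_ON, 10)]))  # → 400.0
```

Merging or splitting states for a different simulation granularity then amounts to editing the enum and the table, without touching the accumulation logic.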
Fig. 112.1 State chart model for energy consumption simulation and control: from power off, the machine warms up (t1) to power on; start up leads to the setup and busy states when a part arrives; when a part is finished the machine idles (t2) in stand by or power on, or shuts down, depending on the conditions; PM or SF sends the machine to the down (maintenance) state until it is repaired
Based on the analysis of the energy consumption profile, a holistic state chart model
for energy consumption simulation and control of discrete manufacturing system is
shown in Fig. 112.1. The model has a multi-granularity form in order to accommodate
the different requirements of simulation and control. The dotted lines in the model
are used to control the machine state in the manufacturing process for energy
saving purposes. The state-changing conditions can be related to the arrival time of
the next part or to the current state having lasted for a predefined duration. Some
states have a constant duration in the production process. For example, the execution
times of the warm-up (t1) and start-up (t2) states are constant for a specific machine.
The model has a nested structure which makes it general enough to be extended and
modified for different application scenarios. This modeling method can also be
applied to other facilities such as conveyors, robots and AGVs.
The characteristics and parameters of each state are summarized in
Table 112.1. The practical duration of state can be constant or stochastic, and the
consumed energy type can be obtained for the specific machine. After the simu-
lation, the required energy and the throughput for a shift of a production plan can
be reported. For the production of a specific quantity of parts, the energy and the
overall makespan can also be obtained for decision making. The model has the
following typical usage scenarios.
• For a coarse granularity simulation aimed at energy audit, only three states need
be retained, i.e. power on (idle), production (busy) and maintenance (down),
which are supported by most current simulation software. By endowing each
state with energy consumption data, the energy quantity can be accumulated by
multiplying each state’s duration by its required power during the simulation
process.
• For both the energy audit and energy saving control simulation, all the states in
Fig. 112.2 can be used. Particularly, after the busy state, if the machine queue is
not empty (i.e. there are parts waiting to be machined), the next part will be
machined at once. Otherwise the machine will choose a suitable state (i.e. stand
by, power on, shut down) in order to reduce energy consumption at the
system level, considering the arrival time of the next part or whether the idle
state has lasted for a predefined duration.
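The after-busy state selection just described can be sketched as a small decision function. The threshold values and state names below are assumptions for demonstration; in practice they would be tuned from each state's power profile and transition cost.

```python
# Sketch of the after-busy state-selection rule. Thresholds are assumed
# for illustration, not taken from the paper.

def next_state(queue_length, expected_idle_s,
               stand_by_max_s=10.0, power_on_max_s=60.0):
    if queue_length > 0:
        return "production"      # parts waiting: machine keeps working
    if expected_idle_s <= stand_by_max_s:
        return "stand_by"        # short gap: stay ready at higher idle power
    if expected_idle_s <= power_on_max_s:
        return "power_on"        # medium gap: drop to the basic idle mode
    return "shut_down"           # long gap: power down to save energy

print(next_state(0, 120.0))      # → shut_down
```

The sensible break-even thresholds depend on how much energy the transitions themselves cost (e.g. shutting down and warming up again), which is exactly why the instantaneous high-consumption states are included in the model.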
112.4 Experimentation
The experiment is partly based on a case provided by Mouzon et al. (2007) and
Le et al. (2011). The data presented herein are for the purpose of demonstrating
our method and do not necessarily reflect actual plant-floor data.
One hour of production is evaluated for a single CNC center. The
inter-arrival time and service time of parts are exponentially distributed with
means of 20 and 6 s, respectively. The initial condition of the machine is assumed
to be power off. Warming up takes 5 s and consumes 4 power units per second.
The production and power on (i.e. idle) states consume 6 and 2 power units per
second, respectively. Shutting down the machine takes 2 s, consuming 1 power unit per
second. The state chart model for energy audit and energy saving control is shown
in Fig. 112.2. When the production of a part is finished, the machine changes
state according to the following rule. If there are parts waiting for machining,
the machine changes to the production state at once. Otherwise,
the machine idles in the power on state. When the idle (power on) state lasts for
a predefined duration (e.g. 5 s), the machine is shut down for energy saving
until a part arrives for machining.
An ARENA simulation model (Fig. 112.3) has been developed for the state
chart model of the CNC machine with energy saving control strategy. Apparently,
Fig. 112.3 The ARENA model for energy consumption simulation and control
if a machine has more states, the simulation flow module and their logic relation
will be more complex. The five states in Fig. 112.2 are all included in the
simulation model using the ARENA StateSet module. By changing the conditions
in some modules, the model in Fig. 112.3 can be used only to collect energy
consumption data, with no energy saving control. That is to say, the machine
stays in the power on state when there is no part to be machined.
Table 112.2 shows the performance of the above two scenarios for 1 h of
production. From Table 112.2, the state control scenario shows a 6.4 %
decrease in throughput with a 26.6 % energy saving.
Figures 112.4 and 112.5 show the energy consumption ratio and the state time
ratios of the five states under the two strategies. Apparently, the power off state in
Fig. 112.5 has a relatively longer time but no energy consumption in Fig. 112.4
under the energy saving control strategy.
112.5 Conclusion
References
113.1 Introduction
System archetype analysis is an effective method for grasping the structure of a
system. Senge (1992), a master of modern management, built nine system
archetypes in The Fifth Discipline: The Art & Practice of the Learning
Organization. He regarded the system archetype as a key tool for analyzing issues
of organization and management and made it a core element of learning-organization
theory, but in the book he did not discuss how to build system archetypes.
Improving the competitiveness of colleges and universities is a systems
engineering project. Based on the system archetype method, the authors
analyzed how to enhance the competitiveness of colleges and universities.
L. Li (&) Y. Yu
Jiangxi Science & Technology Normal University, Nanchang, China
e-mail: [email protected]
Chart 113.1 Hierarchy structure of the influencing factors of college and university competitiveness (top layer: competitiveness; first layer: school reputation)
From Chart 113.1, the top-layer variable is the competitiveness of the college or
university; the first-layer variable, school reputation, affects the top-layer
variable directly (Jia and Ding 2002), and the causal structure is a positive
feedback loop, v1 → v2 → v1. The variables in the second layer are the key
factors affecting school reputation; they are in turn affected by other factors,
some positive and some negative: the former improve competitiveness, while the
latter restrict the development of college and university competitiveness. This
paper builds the feedback system archetypes based on the second-layer variables,
analyzes their complex relationships, and proposes effective countermeasures to
enhance college and university competitiveness.
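The reinforcing character of the v1 → v2 → v1 loop can be illustrated numerically. The coefficients below are arbitrary assumptions chosen only to show the qualitative behavior of a positive feedback loop, not estimates from the paper.

```python
# Minimal numeric illustration (assumed coefficients) of the positive
# feedback loop: competitiveness raises reputation, which feeds back
# into competitiveness.

def simulate(steps, a=0.1, b=0.1):
    competitiveness, reputation = 1.0, 1.0
    history = []
    for _ in range(steps):
        reputation += a * competitiveness    # v1 reinforces v2
        competitiveness += b * reputation    # v2 reinforces v1
        history.append(competitiveness)
    return history

h = simulate(10)
print(all(x < y for x, y in zip(h, h[1:])))  # strictly growing: True
```

As long as both coefficients are positive the trajectory grows without bound, which is exactly why the negative loops discussed next are needed to describe the limits on this growth.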
To a college or university, the school image is an invisible business card and, at
the same time, a valuable educational resource (Sun and Liu 2010); in essence,
the school image is the external expression of the school’s objective reality.
However, to burnish the school image, some colleges pour large sums of money
into campus expansion or new campus construction. In fact, this does not add
luster to the school image, and the school may instead be caught in huge debt,
which brings a series of negative impacts. Therefore, through systematic
analysis, this paper builds a feedback archetype based on the school-image
orientation, shown as Chart 113.2.
From Chart 113.2, the feedback archetype based on the school-image orientation
is composed of one positive feedback loop and four negative feedback loops. In
the chart, the left loop is a positive feedback loop that promotes system
development, while the right loops are negative feedback loops that restrict
system development. The positive feedback loop reveals the mutual reinforcement
of school image, school reputation, and school competitiveness. However, in
order to improve school image or reputation, some colleges and universities
spend large sums of money on expanding to new campuses, regardless of whether
the school can afford the large loans involved. The resulting heavy debt ratio
seriously affects teaching, research and the normal operation of discipline-building
work. All of this directly damages school reputation and school image, and has a
bad effect on the promotion of competitiveness.
Therefore, in order to effectively promote the healthy and orderly development
of colleges and universities, each school must think carefully before investing;
blindly following others’ example is not conducive to the enhancement of
competitiveness.
With the introduction of competition into education, scientific research and
development has become more and more important for improving teaching
quality, school reputation and social influence (Li 2011). Colleges and
universities, which used to be simply places to impart knowledge, have turned
into bases of production and innovation and become an important pillar of
national economic and technological development. Boyun Huang, academician,
President of Central South University and first prize winner of the State
Technological Invention Award, said: ‘‘in addition to teaching the most advanced
knowledge, the more important function of higher education is creating new
knowledge.’’ Therefore, scientific research and development has become the most
important evaluation index for measuring college and university competitiveness.
More and more colleges and universities lay emphasis on scientific research and
development. It directly influences academic standards, and it also represents the
school’s overall strength and competitiveness. However, research and development
achievements have become a huge invisible pressure on college and university
teachers. Moderate research pressure can make teachers concentrate on scientific
research, maintain a strong research ambition, and improve their professional
level quickly, but excessive research pressure does harm to teachers. As shown in
Chart 113.3, the shifting-the-burden feedback archetype based on the
research-pressure orientation reveals that universities have not correctly handled
the question of research pressure.
Chart 113.3 Shifting-the-burden feedback archetype based on the research-pressure orientation (variables include research pressure, academic corruption, cultivation of an internal research team as the fundamental solution, research achievement, school reputation, and competitiveness)
‘‘Teachers are essential to whether a school can train qualified personnel for
socialist construction,’’ said Deng Xiaoping (Li and Xia 2011). For a college or
university, the teaching faculty is the foundation for cultivating talented people,
the root of the school’s characteristics and advantages, and the guarantee of
sustainable development. Some schools, according to their own development
needs, spend large sums of money on attracting talent to keep up with the pace of
their development, but other colleges blindly follow suit, regardless of whether
their own needs make this reasonable. This is a typical vicious competition, as
shown in Chart 113.4.
Chart 113.4 Vicious competition feedback archetype based on the teaching-faculty orientation (school A’s investment in talent responds to the threat it perceives from school B’s development, and vice versa)
From Chart 113.4 we can see clearly that the two universities each see their
welfare as depending on a relative advantage over the other. Whenever one side
gets ahead, the other is more threatened, leading it to act more aggressively to
reestablish its advantage, which threatens the first, increasing its aggressiveness,
and so on. Each side sees its own aggressive behavior as a defensive response to
the other’s aggression, but each side acting ‘‘in defense’’ results in a buildup that
goes far beyond either side’s desires.
The management principle for the vicious competition feedback archetype is to
look for a way for both sides to ‘‘win’’, or to achieve their objectives. In many
instances, one side can unilaterally reverse the vicious spiral by taking overtly
‘‘peaceful’’ actions that cause the other to feel less threatened.
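The escalation dynamic in Chart 113.4 can be made concrete with a tiny symmetric model. The response coefficient and starting level below are assumptions for illustration only.

```python
# Hedged sketch (assumed coefficients) of the escalation archetype: each
# school's talent investment responds to the threat it perceives from the
# other, producing a mutual build-up.

def escalate(steps, response=0.5, base=1.0):
    invest_a, invest_b = base, base
    for _ in range(steps):
        threat_a = invest_b              # A's perceived threat from B
        threat_b = invest_a              # B's perceived threat from A
        invest_a += response * threat_a  # A "defends" by investing more
        invest_b += response * threat_b  # so does B
    return invest_a, invest_b

a, b = escalate(10)
print(a == b, a > 50 * 1.0)  # → True True: symmetric, far beyond the start
```

Symmetric responses multiply both investments by the same factor each round, so the build-up is exponential even though each side believes it is merely keeping pace, which is the point of the archetype.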
Developing social resources is an internal need of colleges and universities, and
the only way to improve educational level and quality. Due to the scarcity of
resources, fierce competition for the limited social resources arises between
different colleges and universities. The more successful one becomes, the more
support it gains, thereby starving the other, as shown in Chart 113.5. Therefore,
the management principle is to look for an overarching goal for balanced
achievement of both choices. In some cases, one can break or weaken the
coupling between the two, so that they do not compete for the same limited
resource.
Students are the lifeblood of a university’s normal operation. Affected by the
popularization of higher education, the competition for students has become
Chart 113.5 Success-to-the-successful feedback archetype based on the social-resources orientation (government support is allocated to school A instead of school B as school A’s competitiveness grows)
more and more intense. However, different universities have different capacities
for students. A reasonable student number can promote healthy and orderly
development, while too many students may harm the sustainable development of
the university, as shown in Chart 113.6. In the left of the chart, the positive
feedback causal relationship reveals that student numbers play an important role
in improving school reputation and competitiveness; on the contrary, in the right
of the chart, the negative feedback reveals that too many students are not
conducive to the improvement of the university.
Chart 113.6 Feedback archetype based on the student-number orientation (variables include teaching task, workload, and research achievement)
First of all, too many students require more dormitories and campus expansion,
forcing many schools to pour in money blindly and take on loan burdens.
Secondly, too many students make management difficult, resulting in more and
more internal crises (Li and Jiang 2011). Finally, too many students bring a
heavier teaching load for teachers. Since everyone’s energy is limited, too heavy a
teaching load makes it difficult for teachers to focus on research. Therefore,
colleges and universities must keep enrollment consistent with their own
reasonable capacity.
113.4 Conclusion
Through on-the-spot research at specific colleges and universities, together with
systematic analysis, this paper proposed the key influencing factors of college and
university competitiveness, combined hierarchical structure analysis with the
system archetype analysis technology of Peter M. Senge, constructed the
key-variable feedback system archetypes based on the influencing factors, and
finally put forward corresponding management countermeasures, which has
theoretical and practical significance for improving college and university
competitiveness.
References
Senge PM (1992) The fifth discipline: the art & practice of the learning organization. Century
Business Publishing House, London
Zhu J, Liu Z (2007) The evaluation index system of high college and university competitiveness.
Prod Forces Res 02:67–68
Jia R, Ding R (2002) System dynamics—complex analysis of feedback dynamics. Higher
Education Press, Beijing
Sun J, Liu Z (2010) The pressure of research influenced the university teachers. Teachers 5:28–29
Li L (2011) Analysis on the evaluation of hospital competitiveness, countermeasures producing
and the effect implementation simulation—take three A-level comprehensive hospital in
Jiangxi province as an example. Economic Science Press, Beijing
Li L, Xia L (2011) Analysis on the influencing factors and countermeasures of high college and
university competitiveness based on feedback archetype generating set. In: 2011 IEEE 18th
international conference on industrial engineering and engineering management
(IE&EM2011)
Li L, Jiang M (2011) Analysis on key variable-oriented typical archetype of high college and
university competitiveness based on system dynamics. In: 2011 IEEE 18th international
conference on industrial engineering and engineering management (IE&EM2011)
Chapter 114
Analysis on the Operation Effects
of Logistics Park Based on BP Neural
Network
Jun Luo
114.1 Introduction
With the development of logistics, the logistics park has become an emerging form of logistics management. Logistics parks have developed rapidly in Japan, Germany and other developed countries. The construction of logistics parks in China began in Shenzhen in the 1990s, and other cities soon followed. According to statistics, by September 2008 there were 475 logistics parks in China: 122 were already in operation, 219 were under construction, and 134 were being planned (China Federation of Logistics & Purchasing).
In the western developed countries, the rate of return on investment of a logistics park is about 6–8 %. The income gained by investors in a logistics park usually comes from rental returns and land appreciation. In China, the vacancy rate of logistics parks is more than 60 %, and some logistics parks are even used for other purposes. With the rapid development of the construction and operation of logistics parks, their operation effects and development level will become a focus of attention. In this paper, a set of evaluation metrics for the operation effects of a logistics park is built, and the BP neural network method is used to analyze those operation effects.
J. Luo (&)
School of Economics and Management, Wuyi University, Jiangmen, China
e-mail: [email protected]
Under the guidance of government planning, a logistics park is a large area in which several kinds of modern logistics facilities are laid out and several logistics organizations are located. By sharing infrastructure and supporting service facilities, a logistics park can give full play to its overall and complementary advantages. The intensification and scale of logistics can promote the sustainable development of the city (Zhang 2004; Richardson Helen 2002; Marian 2006). When planning a logistics park, we should consider the regional economic level, customer industries, the distribution of the retail industry and the park's overall functional orientation.
The factors affecting the operation effects of a logistics park fall into two groups: external factors and internal factors. The external factors include government support and related policies, the economic situation and the market environment. The internal factors mainly refer to the park's own operation ability.
The policy environment mainly reflects the government's support for the development of logistics parks. Local governments have provided some policies for logistics parks, but these policies are not comprehensive or complete. On the whole, policy support for the development of logistics parks in China is not yet sufficient. This situation will soon change as more attention is paid to logistics and related policies are promulgated. The market demand for a logistics park mainly includes the target market's service demand, the adaptability of the park's services and the matching degree between supply and demand of those services. These factors directly affect the operation effects of the logistics park. The park's own service ability mainly includes transportation, warehousing, distribution, packing and sorting, circulation processing, market development and maintenance, informatization level and management ability, etc. The internal factors are the foundation of the operation effects of a logistics park.
114 Analysis on the Operation Effects of Logistics Park 1075
Based on the influencing factors and the metrics of related studies (Mingming 2010; Dai 2010; Zhong 2009), a set of evaluation metrics for the operation effects of a logistics park is built, covering four parts: economic benefits, the condition of the enterprises in the park, park ability and social benefits, as shown in Table 114.1.
The BP neural network model was proposed in 1985 by D. Rumelhart. As the BP neural network can solve nonlinear problems well, it has become one of the most widely applied neural networks. The BP algorithm solves the connection-weight problem of the hidden layers in a multi-layer network model, improves the learning and memory function of the network, and in particular solves the XOR problem. The BP neural network is a feed-forward model consisting of an input layer, an output layer and one or more hidden layers (Yin 2003; Liu and Lu 2011; Hagan et al. 2002).
The BP neural network can deal with both qualitative and quantitative knowledge. Its operation is very fast, and it has strong learning and forecasting ability. Therefore this paper uses a BP neural network model to evaluate the operation effects of a logistics park. The specific procedure is as follows:
(1) The number of neurons: In this paper, the BP neural network uses a three-layer structure, namely an input layer, a hidden layer and an output layer.
a. Input layer nodes: The number of input layer nodes equals the number of evaluation metrics. There are 19 input nodes.
b. Hidden layer nodes: The number of hidden layer nodes is related to the number of input layer nodes, the character of the sample data and the character of the problem to be solved. To determine the number of hidden layer nodes, the empirical formula q = sqrt(n + m) + a is usually used, where n is the number of input layer nodes, m is the number of output layer nodes, and a = 1, 2, …, 10. Through several tests, 10 was found to be the optimal number of hidden layer nodes.
c. Output layer nodes: The results of the evaluation are the output layer nodes.
According to this analysis, the number of input layer nodes is 19, the number of hidden layer nodes is 10, and the number of output layer nodes is 1.
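The empirical rule for the hidden layer size can be sketched as follows; the helper name and the integer rounding of the square root are illustrative choices, not specified in the text:

```python
import math

def hidden_node_candidates(n, m, a_range=range(1, 11)):
    """Candidate hidden-layer sizes from the empirical rule q = sqrt(n + m) + a."""
    return [round(math.sqrt(n + m)) + a for a in a_range]

# 19 input nodes, 1 output node, a = 1..10; 10 appears among the candidates
candidates = hidden_node_candidates(19, 1)
```

Each candidate is then tested by training, and the one with the best error (here 10) is kept.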
(2) The initialization of weight values and threshold values: According to the set of metrics, the indexes are divided into two kinds, qualitative and quantitative. Qualitative indexes are generally scored by experts, while quantitative indexes are normalized. The initial weight values and threshold values are generally random numbers between -1 and 1.
(3) Forward information transmission: In this paper, a sigmoid function is used as the transfer function of the hidden layer, and a purelin (linear) function as the transfer function of the output layer. After confirming the number of nodes in each layer and the transfer functions, the BP network is initialized again.
The output vector of the hidden layer is y_j = f1(Σ_i w_ij x_i + a_j), and the output vector of the output layer is o_k = f2(Σ_j w_jk y_j + a_k).
(4) Reverse error transmission: Calculate the error E of the network. If E is less than the previously set error e, the network training process is over and the output value approximates the expected value. Otherwise the error is propagated back through the output layer and hidden layer nodes.
(5) Confirm the final evaluation results: Calculate the global error function E = Σ_k e_k. If E < e, the training process is over. The greater the final output value, the better the operation effects of the logistics park. From very good to very bad, the output value is divided into six levels: very good (0.9–1), good (0.8–0.9), preferably (0.6–0.8), general (0.4–0.6), bad (0.2–0.4), very bad (0–0.2).
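A minimal numeric sketch of the forward and backward passes described in steps (3)–(5), assuming a 19-10-1 network with random toy data; the paper's actual model was built with the Matlab toolbox, and the learning rate and epoch limit here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_in, n_hid, n_out = 19, 10, 1
# weights and thresholds initialized with random numbers in [-1, 1]
W1 = rng.uniform(-1, 1, (n_hid, n_in)); b1 = rng.uniform(-1, 1, n_hid)
W2 = rng.uniform(-1, 1, (n_out, n_hid)); b2 = rng.uniform(-1, 1, n_out)

X = rng.uniform(0, 1, (5, n_in))   # 5 toy samples of 19 normalized indexes
t = rng.uniform(0, 1, (5, n_out))  # toy target evaluation scores

lr, eps = 0.01, 1e-3
for epoch in range(10000):
    h = sigmoid(X @ W1.T + b1)            # hidden layer: y_j = f1(sum w_ij x_i + a_j)
    y = h @ W2.T + b2                     # output layer: purelin (linear)
    e = y - t
    E = 0.5 * float(np.sum(e ** 2))       # global error
    if E < eps:                           # stop when below the preset error e
        break
    d_out = e                             # gradient at the linear output
    d_hid = (d_out @ W2) * h * (1.0 - h)  # backpropagate through the sigmoid
    W2 -= lr * d_out.T @ h;  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * d_hid.T @ X;  b1 -= lr * d_hid.sum(axis=0)
```

The loop mirrors the forward transmission, reverse error transmission and global-error stopping rule of the procedure above.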
According to the set of evaluation metrics and BP neural network theory, we establish the model in steps. Using the initialization, training and simulation functions of the Matlab 7 neural network toolbox, the network training process can be completed quickly.
(6) Selection of sample data: The 19 indexes of the metric set serve as the input nodes. The simulation data of the first five logistics parks W1–W5 are used as the training sample, and the last three logistics parks W6–W8 as the testing sample. The normalized input data are shown in Table 114.2.
(7) Determination of network structure: The number of input layer nodes is 19, the number of hidden layer nodes is 10, and the number of output layer nodes is 1. The network structure is shown in Fig. 114.1. The transfer function of the hidden layer nodes is sigmoid, and the transfer function of the output layer node is purelin.
(8) Model training: The training time is 265, the target error is 0.001, and the learning rate is 0.01; Matlab is used to run the algorithm. After 800 training iterations, the overall network error falls within the allowable target error. The prediction errors are shown in Table 114.3. The training is over.
(9) Model testing: Using the trained network, we obtain the network output values of the three test samples: 0.488 for W6, 0.752 for W7 and 0.613 for W8. W7 has the best operation effects of the three.
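The six-level grading of the output value can be expressed as a small lookup; the function name is illustrative:

```python
def effect_level(score):
    """Map a network output in [0, 1] to the paper's six-level scale."""
    levels = [(0.9, "very good"), (0.8, "good"), (0.6, "preferably"),
              (0.4, "general"), (0.2, "bad"), (0.0, "very bad")]
    for lower, name in levels:
        if score >= lower:
            return name
    return "very bad"

# the three test parks from step (9)
grades = {park: effect_level(s)
          for park, s in {"W6": 0.488, "W7": 0.752, "W8": 0.613}.items()}
```

Under this mapping W6 falls in the "general" band and W7 and W8 in the "preferably" band, consistent with W7 being the best of the three.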
114.5 Conclusion
In this paper, we have studied the factors influencing the operation effects of logistics parks and established a set of evaluation metrics for those effects. Through the BP neural network model, the operation effects of logistics parks have been analyzed; the manager of a logistics park can thereby find the shortcomings of the operation process and, furthermore, improve the operation of the park.
References
China Federation of Logistics & Purchasing. China society of logistics. The second national
logistics park (base) survey report. http://b2b.toocle.com
Dai H (2010) Study on the operating model of Logistics Park based on the game theory. Wuhan
University of Technology, Wuhan
Hagan MT, Demuth HB, Beale M (2002) Neural network design. China Machine Press, Beijing
Liu H, Lu H (2011) Study on risk evaluation for manufacturers’ lean supply chain collaboration
based on BP neural network. Logist Technol 30(3):103–105
Marian S (2006) Logistics park development in Slovak Republic. Transport 11(3):197–200
Mingming Ni (2010) Research on the operating conditions of Logistics Park based on fuzzy
comprehensive evaluation. Value Eng 10:27–28
Richardson Helen L (2002) 3PL today: a story of changing relationships. Transp Distrib
43(9):38–40
Yin N (2003) The application design of BP neural network. Inform Technol 27(6):18–20
Zhang X (2004) The research on layout planning of logistics park, vol 6. China Supplies Press,
Beijing
Zhong J (2009) Construction of evaluation index system on economic operation of multi-service
Logistics Park. Logist Eng Manag 31:7
Chapter 115
Application of Ant Colony Algorithm
on Secondary Cooling Optimization
of Continuous Slab
Abstract Continuous casting secondary cooling water is one of the key factors affecting slab quality. Reasonable maximum surface cooling rates and surface temperature rise rates in every secondary cooling stage can reduce the factors that cause internal and surface cracks in the slabs. An optimization model of continuous casting secondary cooling was established according to metallurgical criteria (including target surface temperature, straightening point temperature, maximum surface cooling rate, surface temperature rise rate, liquid core length, etc.) and equipment constraints. The secondary cooling water was optimized by an ant colony algorithm to improve slab quality.
115.1 Introduction
J. Li (&)
Department of Information Engineering, Henan Polytechnic,
Zhengzhou, China
e-mail: [email protected]
H. Pei
Physical Engineering College, Zhengzhou University,
Zhengzhou, China
In the continuous casting process, under certain assumptions (Ying et al. 2006) that ignore heat transfer along the slab width direction, the slab can be simplified to one-dimensional heat transfer, governed by the solidification and heat transfer equation (Radovic and Lalovic 2005):
ρC ∂T/∂t = k ∂²T/∂x²
where ρ is the density of each phase of steel, kg/m³; C is the specific heat capacity of each phase, J/(kg·K); and k is the thermal conductivity of each phase, W/(m·K).
Along the casting direction the slab is divided into cross-sections 0 to n. Integrating the heat transfer partial differential equation over each spatial element yields a system of ordinary differential equations for the time derivative of the temperature T. These ordinary differential equations are solved by the chase method, and the slab surface temperature is then obtained from the time derivative of the temperature (Lotov et al. 2005).
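The "chase method" mentioned here is commonly identified with the Thomas algorithm for tridiagonal systems. A sketch of one backward-Euler step of the 1-D heat equation follows, with illustrative material parameters and fixed-temperature boundaries; this is a toy stand-in, not the authors' solidification model:

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system Ax = d (the 'chase' method):
    a = sub-diagonal, b = main diagonal, c = super-diagonal."""
    n = len(b)
    cp = np.zeros(n); dp = np.zeros(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):            # forward sweep
        m = b[i] - a[i] * cp[i - 1]
        if i < n - 1:
            cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):   # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def implicit_heat_step(T, alpha, dt, dx):
    """One backward-Euler step of dT/dt = alpha * d2T/dx2 (alpha = k/(rho*C)),
    holding both end temperatures fixed (Dirichlet boundaries)."""
    n = len(T)
    r = alpha * dt / dx ** 2
    a = np.full(n, -r); b = np.full(n, 1.0 + 2.0 * r); c = np.full(n, -r)
    b[0] = b[-1] = 1.0
    c[0] = 0.0
    a[-1] = 0.0
    return thomas(a, b, c, T.copy())

# a linear profile between fixed end temperatures is a steady state,
# so one step should leave it numerically unchanged
T = np.linspace(900.0, 1500.0, 11)
T_next = implicit_heat_step(T, alpha=5e-6, dt=1.0, dx=0.01)
```

The linear-profile check is a convenient sanity test: with ∂²T/∂x² = 0 the implicit step must reproduce the input.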
The secondary cooling system is determined by the continuous casting metallurgical criteria, the device constraints and the heat transfer model for each steel grade. The aim of integrated optimization of the secondary cooling system is to rationalize the temperature distribution of the slab, which gives the best slab quality and yield. The optimization method is as follows: minimize the value of an objective function constructed from the metallurgical criteria. Assume the water quantities of the secondary cooling sections, convert them into integrated heat transfer coefficients under the constraints of industrial conditions, and substitute these into the heat transfer simulation model as the third-kind boundary condition. This yields a heat transfer coefficient distribution for the secondary cooling zone that meets the various metallurgical criteria and determines the distribution of secondary cooling water quantity.
The optimization model of the system is expressed by M, with control vector k = [k1, k2, …, kn]^T, where n is the number of cooling water sections. The optimization model is determined by metallurgical criteria and equipment constraints (Bergh and Engelbrecht 2006). The optimal control parameters are based on a performance guideline established by comprehensive evaluation of the objective function, and the objective function must be optimized according to certain rules.
The derivation of the optimization model uses the following symbol (the positive-part operator):
⟨f⟩ = f if f > 0, and ⟨f⟩ = 0 if f ≤ 0
115 Application of Ant Colony Algorithm on Secondary Cooling 1083
In production, the casting speed and the actual water quantities of the secondary cooling sections lie within certain ranges.
1084 J. Li and H. Pei
Ants release pheromone as they move. Less pheromone evaporates on a shorter path, and the pheromone acts as a signal for the actions of other ants: the pheromone left by earlier ants is reinforced by latecomers. As this cycle continues, the more ants visit a path, the higher the probability that the path is chosen. Within a certain period of time, a shorter path is visited by more ants and therefore carries more pheromone. More pheromone indicates a shorter path, which means a better answer (Gao and Yang 2006).
The probability that ant k moves from node i to node j at time t is
P_ij^k(t) = [τ_ij(t)]^α [η_ij(t)]^β / Σ_{s∉tabu_k} [τ_is(t)]^α [η_is(t)]^β, for j ∉ tabu_k,
P_ij^k(t) = 0, otherwise,
where tabu_k is the set of nodes that ant k has already visited at node c_k.
4. Update pheromone
After every ant has completed constructing its solution, the pheromone evaporates in accordance with the following formula:
τ_ij(t + 1) = (1 − ρ) · τ_ij(t)
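A minimal sketch of one ant applying the transition rule and the evaporation step on a toy symmetric distance matrix; the parameter values and the simple deposit rule at the end are illustrative assumptions, not the authors' exact algorithm:

```python
import random

random.seed(0)

def ant_tour(dist, tau, alpha=1.0, beta=2.0, rho=0.5, q=1.0, start=0):
    """One ant builds a tour with the tau^alpha * eta^beta selection rule
    (eta = 1/distance), then pheromone evaporates and is deposited."""
    n = len(dist)
    tabu = [start]                                   # nodes this ant has visited
    while len(tabu) < n:
        i = tabu[-1]
        allowed = [j for j in range(n) if j not in tabu]
        w = [tau[i][j] ** alpha * (1.0 / dist[i][j]) ** beta for j in allowed]
        total = sum(w)
        nxt = random.choices(allowed, [wi / total for wi in w])[0]
        tabu.append(nxt)
    length = sum(dist[tabu[k]][tabu[k + 1]] for k in range(n - 1))
    for r in range(n):                               # tau(t+1) = (1 - rho) * tau(t)
        for s in range(n):
            tau[r][s] *= 1.0 - rho
    for k in range(n - 1):                           # reinforce the visited edges
        i, j = tabu[k], tabu[k + 1]
        tau[i][j] += q / length
        tau[j][i] += q / length
    return tabu, length

dist = [[0, 2, 3, 4], [2, 0, 4, 3], [3, 4, 0, 2], [4, 3, 2, 0]]
tau = [[1.0] * 4 for _ in range(4)]
tour, length = ant_tour(dist, tau)
```

Repeating this over many ants and iterations concentrates pheromone on short paths, which is the mechanism the passage above describes.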
The secondary cooling of continuous slabs is optimized according to the actual production equipment, process parameters and physical property parameters of the steel. The constraints are: straightening point temperature greater than 900 °C, surface cooling rate less than 200 °C/m, surface temperature rise rate along the casting direction less than 100 °C/m, and metallurgical length 21.58 m. The non-optimized and optimized surface temperatures of the slab are shown in Fig. 115.1.
After optimization, the maximum cooling rate and the surface temperature rise rate are both lower than before. The maximum cooling rate drops from 152 to 72 °C/m, and the maximum surface temperature rise rate drops from 34 to 12 °C/m. The surface temperature distribution is flatter. These changes reduce the stress factors that induce internal and surface cracking of the slab.
Fig. 115.1 Surface temperature (°C) of the slab versus distance to meniscus (m), 0–25 m, for the non-optimized and optimized cooling schemes
References
Bergh van den F, Engelbrecht AP (2006) A study of particle swarm optimization particle
trajectories. Inf Sci 176(6):937–971
Gao S, Yang J (2006) Swarm intelligence algorithm and its application. China Water Resources
and Electric Press, Beijing
Gutjahr WJ (2002) ACO algorithms with guaranteed convergence to the optimal solution. Inf
Process Lett 82(3):145–153
Laitinen E, Lapinb AV, Piesk J (2003) Asynchronous domain decomposition methods for
continuous casting problem. J Comput Appl Math 154(2):393–413
Lan CW, Liu CC, Hsu CM (2002) An adaptive finite volume method for incompressible heat flow
problems in solidification. J Comput Phys 178:464–497
Lotov AV, Kamenev GK, Berezkin VE (2005) Optimal control of cooling process in continuous
casting of steel using a visualization-based multi-criteria approach. Appl Math Model
29(7):653–672
Natarajan TT, El-Kaddah N (2004) Finite element analysis of electromagnetic and fluid flow
phenomena in rotary electromagnetic stirring of steel. Appl Math Model 28(1):47–61
Radovic Z, Lalovic M (2005) Numerical simulation of steel ingot solidification process. J Mater
Process Technol 160:156–159
Santos CA, Spim JA Jr, Maria CF et al (2006) The use of artificial intelligence technique for the
optimisation of process parameters used in the continuous casting of steel. Appl Math Model
26(11):1077–1092
Wang S, Gao L, Cui X (2008) Study on multi-depots vehicle routing problem and its ant colony
optimization. Syst Eng Theory Prac 2:143–147
Liu Y, Cao T, Xi A (2006) Control model for secondary cooling in continuous slab casting.
J Univ Sci Technol Beijing 28(3):290–293
Chapter 116
Application of the Catastrophe
Progression Method in Employment
Options for Beijing, Shanghai
and Guangzhou
116.1 Introduction
Beijing, Shanghai and Guangzhou are China's three first-tier cities; they are thriving and full of talent. From the economy and culture to the standard of living, the three cities are among the elite of China's cities. Thousands of people arrive there dreaming of creating their own place in the world. So which city is the right one to work in? All the data in this article come from the three cities' ''Statistic Almanac'' (2011).
(1) Firstly, establish the catastrophe assessment index system. According to the purpose of the evaluation system, the total indicators are divided into multi-hierarchical contradictory groups, and the system is arranged in a tree structure by level of purpose to obtain more concrete quantifiable indexes. Some indexes may have to be decomposed further; the decomposition stops once measurable indexes are obtained. However, the number of control variables in a catastrophe system should not exceed 4, so a single index should preferably have no more than 4 sub-indexes (Liang et al. 2008).
(2) Determine the catastrophe system model for each level of the index system.
R. Thom classified the singularities of smooth mappings f: R^n → R: every structurally stable r-parameter family (for any finite n and all r ≤ 4) is locally equivalent to one of seven families of functions, called the seven elementary catastrophes (Zhang et al. 2009). Several catastrophe systems are commonly used: the cusp catastrophe system, the swallowtail catastrophe system and the butterfly catastrophe system. The feature of
116 Application of the Catastrophe Progression Method 1091
result of the advantage-and-disadvantage ordering of every evaluation target by the score of the total evaluation index (Liang et al. 2008).
According to the different levels and categories of the evaluation indicators, and applying the analytic hierarchy process, the assessment indicator system for employment options is established as shown in Table 116.2. The original data are taken from the 2011 ''Statistic Almanac'' of each city to ensure accuracy (Shanghai Municipal Bureau of Statistics 2011; Guangzhou Municipal Bureau of Statistics 2011; Beijing Municipal Bureau of Statistics 2011).
Based on the requirements of catastrophe theory, the primary control variable is written in front and the secondary control variables behind it. According to the division of catastrophe systems in Table 116.2, the third-class indexes from top to bottom form a swallowtail, swallowtail, cusp, butterfly, butterfly, swallowtail, swallowtail, cusp, swallowtail, cusp and swallowtail catastrophe system respectively, consisting of 32 indexes recorded as x1, x2, x3, …, x32. The second-class indexes form a cusp, swallowtail, cusp and butterfly catastrophe system respectively, consisting of 11 indexes recorded as y1, y2, y3, …, y11. The first-class index is a butterfly catastrophe system consisting of 4 indexes recorded as z1, z2, z3, z4 (Gao et al. 2008).
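A sketch of the bottom-up normalization described here, using the standard catastrophe progression formulas (exponents 1/2, 1/3, 1/4, 1/5 for successive control variables), which are assumed rather than reproduced in this excerpt:

```python
def catastrophe_progression(control_vars, complementary=True):
    """Normalize 2-4 control variables (each in [0, 1], ordered by importance)
    with the cusp / swallowtail / butterfly progression formulas."""
    if not 2 <= len(control_vars) <= 4:
        raise ValueError("catastrophe systems here use 2 to 4 control variables")
    # exponents 1/2, 1/3, 1/4, 1/5 for the variables a, b, c, d
    xs = [v ** (1.0 / (k + 2)) for k, v in enumerate(control_vars)]
    # complementary indexes: average; non-complementary: take the minimum
    return sum(xs) / len(xs) if complementary else min(xs)

# a cusp system with two control variables (hypothetical scores)
score = catastrophe_progression([0.81, 0.216])
```

The same function is applied level by level: third-class indexes yield the y values, which yield the z values, which yield the total score used for the city ordering.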
As can be seen in Table 116.3, the result of the optimization is Beijing > Guangzhou > Shanghai. As the capital of our country, Beijing's total score is the
1096 Q. Yuan et al.
highest of the three cities. But its working conditions are not as good as those of Guangzhou, which is one of China's leading cities in opening up and successfully hosted the 16th Asian Games in 2010; that success created many employment opportunities and improved the quality of life. From the working indicators in Table 116.4 we can see that Shanghai's low score is mainly due to its lower average urban salaries and higher unemployment rate compared with the other cities, and its inadequate livelihood score is mainly due to the numbers of doctors and hospital beds per capita (Gao et al. 2005).
116.6 Conclusion
(2) Through the evaluation analysis of the three cities, the employment options index system of the three cities is established. According to the basic principles of catastrophe theory, each influencing factor is sorted by its importance to the objective for the first-class and second-class indexes, the third-class indexes that best represent each second-class index are chosen for calculation and analysis, and the normalized computation is carried gradually upwards to obtain the final evaluation results. From these results the deficiencies of each city's employment situation can be found, providing a reasonable reference for the selection of an employment city.
(3) Because the method can only handle up to four control variables, it is not suitable for decision problems with more than four control variables; the total indicators therefore need to be divided into multi-hierarchical contradictory groups (Zhang 2009). Various problems remain, concerning the scoring of the lowest-level indicators, the decomposition at each level, the order of importance among indexes of the same level, and the complementary or non-complementary relationships between indexes; the catastrophe progression method therefore needs further research and improvement.
References
Beijing Municipal Bureau of Statistics (2011) The national bureau of statistics survey office in
Beijing, Beijing statistics book 2011. China Statistics Press, Beijing
Chen ML (2004) The application of catastrophe model to comprehensive evaluation. J Univ Sci
Technol Suzhou (Nat Sci) 21(4):23–27
Chen JC, Chen ZN (2011) The application of catastrophe to China real estate industry
competitive power evaluation of the city. Decis Making 19(8):254–256 (In Chinese)
Dou XF (1994) The application of catastrophe theory in economic field. University of Electric
Science and Technology of China, Chengdu
Gao MS, Dou LM, Zhang N, Kan JG (2005) Cusp catastrophic model for instability of coal pillar
burst damage and analysis of its application. J China Univ Min Technol 4(34):432–437 (In
Chinese)
Gao K, Li M, Wu C (2008) Application of catastrophe progression method in forecasting
spontaneous combustion of blasted muck pile of sulfide ore. Met Mine 2(2):21–22
Guangzhou Municipal Bureau of Statistics (2011) Guangzhou statistics book 2011. China
Statistics Press, Beijing
He P, Zhao ZD (1985) Catastrophe theory and its application. Dalian University of Technology
Press, Dalian
Huang YL (2001) Application of catastrophe progression method to sustainable usage of water
resource. Arid Environ Monit 15(3):167–170
Li HW (2004) Application of the catastrophe progression method in evaluation index system of
eco-city. Environ Assess 2004(9):44–48
Liang GL, Xu WJ, He YZ, Zhao TX (2008a) Application of catastrophe progression method to
comprehensive judgment of slope stability. Rock Soil Mech 29(7):1895–1899
Liang GL, Xu WJ, He YZ, Zhao TX (2008b) Application of catastrophe progression method to
comprehensive judgment of slope stability. Rock Soil Mech 29(7):1895–1899 (In Chinese)
Shanghai Municipal Bureau of Statistics (2011) Shanghai statistics book 2011. China Statistics
Press, Beijing
Shi YQ, Liu YL, He JP (2003) Further study on some questions of catastrophe evaluation method.
Eng Diurnal Wuhan Univ 36(4):132–136
Tan YJ, Chen WY, Yi JX (1999) Principle of system engineering. National University of Defense
Technology Press, Changsha, pp 341–348
Wan WL, Yang CF, Wang DJ (2006) Application of catastrophe theory evaluation method in
assessment of economic profit and productivity of mine. Min Eng 4(2):5–7
Yao DQ, Guo XC, Tu SW (2008) The application of catastrophe progression method on the
decision-making planning alternatives for through highways. In: 2008 International confer-
ence on intelligent computation technology and automation, Changsha, CA
Yu L (2008) Create Chinese characteristics of urban evaluation system. China Dev 8(4):89–95
Zhang JX (2009) Livable city evaluation and countermeasures about Henan. Bus Econ (6):93–95
(In Chinese)
Zhang TJ, Ren SX, Li SG, Zhang TC, Xu HJ (2009) Application of the catastrophe progression
method in predicting coal and gas outburst. Min Sci Technol 28(4):431–434
Chapter 117
Cooperation Relationship Analysis
of Research Teams Based on Social
Network Analysis and Importance
Measures
117.1 Introduction
117.2 Methodology
Because the traditional adjacency matrix can only represent the connectivity between nodes, we present a new weighted adjacency matrix (WAM) to describe the characteristics of a research team. It can evaluate the cooperation relationships between team members quantitatively.
A WAM also represents which nodes of a graph are adjacent to which other nodes. The diagonal entry a_ii is still 0. However, a non-diagonal entry a_ij, with −1 ≤ a_ij ≤ 1, represents a weighted edge from node i to node j. If a_ij = 1, node i and node j are connected with an absolute positive relationship. If 0 < a_ij < 1, node i and node j are connected with a partly positive
1102 Z. Han and Z. Cai
Fig. 117.2 An example WAM: nodes 1–4 with edges 1–4 (weight 0.5), 2–4 (weight −1) and 3–4 (weight 1), giving
[ 0    0    0    0.5
  0    0    0   −1
  0    0    0    1
  0.5 −1    1    0 ]
Fig. 117.3 An example WAM for a research team: members 1–4 with relationships 1–2 (0.5), 1–4 (−0.3), 2–3 (0.8), 2–4 (−1) and 3–4 (1), giving
[ 0    0.5  0   −0.3
  0.5  0    0.8 −1
  0    0.8  0    1
 −0.3 −1    1    0 ]
relationship. If a_ij = 0, node i and node j are isolated, with no relationship. If a_ij = −1, node i and node j are connected with an absolute negative relationship. Figure 117.2 shows an example of a WAM.
Usually, a research team is represented by a WAM. In the WAM, a node i denotes member i of the team, while the weight a_ij describes the cooperation relationship between member i and member j. The research team can also be represented by the corresponding graph for a more direct understanding. Figure 117.3 shows an example of a WAM for a research team.
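The WAM of Fig. 117.3 can be written down directly; the row-sum statistic at the end is an illustrative summary, not a measure defined in the paper:

```python
import numpy as np

# WAM of the four-member team in Fig. 117.3: symmetric, zero diagonal,
# weights in [-1, 1] encoding cooperation (+) or conflict (-)
wam = np.array([
    [ 0.0,  0.5,  0.0, -0.3],
    [ 0.5,  0.0,  0.8, -1.0],
    [ 0.0,  0.8,  0.0,  1.0],
    [-0.3, -1.0,  1.0,  0.0],
])

assert np.allclose(wam, wam.T)    # undirected relationships
assert np.all(np.diag(wam) == 0)  # no self-relationship
assert np.all(np.abs(wam) <= 1)   # weights bounded by [-1, 1]

# net relationship strength of each member: row sums of the WAM
net = wam.sum(axis=1)
```

The checks encode the three defining properties of a WAM stated above; the row sums suggest, for example, that member 3 has the strongest net cooperative ties.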
Social network analysis (SNA) is the mapping and measuring of relationships and
flows between people, groups, organizations, computers, URLs, and other
117 Cooperation Relationship Analysis of Research Teams 1103
I(BM)^S_Ci = ∂R(S) / ∂R(Ci)    (117.3)
If the failure functions of the system and the components are written as F(·), the Birnbaum importance can also be measured as (117.4):
I(BM)^S_Ci = ∂R(S = 0) / ∂R(Ci = 0) = ∂(1 − F(S)) / ∂(1 − F(Ci)) = ∂F(S) / ∂F(Ci)    (117.4)
From the aspect of probability distributions, (117.3) can be transformed into (117.5), which denotes the decrease in system reliability when component Ci degrades from the functioning state to the failed state:
I(BM)^S_Ci = ∂R(S) / ∂R(Ci) = P(S = 0 | Ci = 0) − P(S = 0 | Ci = 1)    (117.5)
According to (117.5), in the WAM of a research team, the Birnbaum importance of member i is calculated as (117.6) and (117.7):
BP_i = (a | a_i = n − 1) − a    (117.6)
a_ij = (the number of papers published by both node i and node j) / (the number of papers published by node i or node j)    (117.8)
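Equation (117.8) is the intersection-over-union (Jaccard) ratio of the two members' publication sets; a sketch with hypothetical paper identifiers:

```python
def coauthor_weight(papers_i, papers_j):
    """a_ij from (117.8): papers by both i and j over papers by i or j."""
    both = len(papers_i & papers_j)    # published by both members
    either = len(papers_i | papers_j)  # published by either member
    return both / either if either else 0.0

# hypothetical publication sets for two members
p1 = {"P1", "P2", "P3"}
p2 = {"P2", "P3", "P4", "P5"}
w = coauthor_weight(p1, p2)  # 2 shared papers out of 5 distinct papers
```

The result lies in [0, 1], matching the positive part of the WAM weight range.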
117.5 Conclusion
Acknowledgments The authors gratefully acknowledge the financial support for this research
from the National Natural Science Foundation of China (Grant No. 71101116).
References
Adams JD, Black GC, Clemmons JR, Stephan PE (2005) Scientific teams and institutional collaborations: evidence from U.S. universities, 1981–1999. Res Policy 34(3):259–285
Birnbaum ZW (1969) On the importance of different components in a multi-component system. In: Multivariate analysis 2. Academic Press, New York, pp 581–592
Dekker DM, Rutte CG, Van den Berg PT (2008) Cultural differences in the perception of critical interaction behaviors in global virtual teams. Int J Intercult Relat 32(5):441–452
Dodson MV, Guan LL, Fernyhough ME et al (2010) Perspectives on the formation of an interdisciplinary research team. Biochem Biophys Res Commun 391(2):1155–1157
Fussell JB (1975) How to hand-calculate system reliability and safety characteristics. IEEE Trans Reliab 24(3):169–174
Ghobadi S, D'Ambra J (2012) Competitive relationships in cross-functional software development teams: how to model and measure? J Syst Softw 85(5):1096–1104
Krebs V (2012) Social network analysis, a brief introduction. http://www.orgnet.com/sna.html
LePine JA, Buckman BR, Crawford ER, Methot JR (2011) A review of research on personality in teams: accounting for pathways spanning levels of theory and analysis. Hum Resour Manag Rev 21(4):311–330
Pagell M, LePine JA (2002) Multiple case studies of team effectiveness in manufacturing organizations. J Oper Manag 20(5):619–639
Salmi A (2010) International research teams as analysts of industrial business networks. Ind Mark Manag 39(1):40–48
Vesely WE (1970) A time-dependent methodology for fault tree evaluation. Nucl Eng Des 13(2):337–360
Wikipedia (2012a) Adjacency matrix. http://en.wikipedia.org/wiki/Adjacency_matrix
Wikipedia (2012b) Social network analysis. http://en.wikipedia.org/wiki/Social_network_analysis
Young CA (1998) Building a care and research team. J Neurol Sci 160(Suppl 1):S137–S140
Chapter 118
Establishment of Construction Standard
System Based on the Complex System
Theory
Keywords Emergent property · Complex adaptive system · Complex system · Construction standard system (CSS)
118.1 Introduction
The quantity of construction standards has increased sharply along with the rapid development of the construction industry. Most of these standards were formulated under earlier conditions, which, after several years of development, has caused problems among construction standards such as discordance, incomplete support, unreasonable content, mutual repetition, and conflict. The standard system structure has become more and more complicated, and it is hard to guarantee that it operates and functions properly.
The construction standard system has been investigated in a number of research studies, most of which focus on the function and institution of the system. Two future trends of construction standard systems have been identified: the developed countries' standard systems may become global standards as their business expands around the world, while mature standard systems will retain their primary position along with the development of their own construction industries (Bredillet 2003). A reasonable supply of construction standards can satisfy the building quality requirements of enterprises and consumers, and can also bring the desired economic and social benefits (Ofori and Gang 2001). Both ''mandatory articles'' and ''recommended standards'' coexist at the present stage in China; this management mode will be replaced by a ''technical regulation'' and ''technical standard'' mode in order to adapt to the new requirements of the market environment (Mu 2005; Yang 2003). Construction standards in China are divided into 18 professional categories, and each category's structure can be described by four levels: synthesis, base, general, and specialized (Wang 2007). A construction standard system should contain an analysis of the current standard situation, a reasonable standard management system, a flexible operating mechanism, the demand for standards in 5–10 years, and a harmonized relationship between ''technical regulation'' and ''technical standard''; this is the development trend of the construction standard system (Mu 2005).
''So generally, the more complex a project is, the higher the cost, and the longer the time limit for the project'' (Baccarini 1996). Along with economic development, the investors in construction projects have diversified, which makes huge investment projects possible. Increasingly complicated management objects of organization, technology, time, and quality come along with this, and they enhance the demand for construction standards. The standard system gradually reflects the features of a complex system. Despite the significant contributions of the above research studies, no reported research has focused on: (1) the complex features of the standard system; (2) the emergent property of the complex system; (3) an establishment method for the construction standard system based on complex system theory. Accordingly, there is an urgent need for additional studies to address these three critical research gaps, as shown in Fig. 118.1.
Fig. 118.1 The establishment model based on the complex system theory
[Figure: the complex system of the construction standard system, showing elements, subsystems and their relationships, and input/output with the external environment]
standards within a certain range. The system effect enables economic activities to obtain the best benefit. The construction standard system is an organic whole of interdependence, mutual constraint, mutual complementarity, and cohesion, formed by interconnected standards in the construction field.
Figure 118.2 shows that the construction standard system is a complex system constituted by many elements and subsystems. It is a multilevel, multi-target system formed by a number of construction standards. The complexity factors of construction standards can be described as external and internal. External complexity is the condition of system complexity, including openness and dynamics. Internal factors are the cause of complexity, covering the complexity of elements, organizational relationships, and information.
1112 Z. Sun and S. Zhang
Along with the development of the economy and of the construction field, the laws and regulations of the construction field must adapt to the development demands of construction activities. In particular, regulations on the safety of people's lives and property require the elements of the standard system to adjust accordingly.
The development of the Internet changes the way information, energy, and material are exchanged in construction standardization work. It also raises the requirements on the speed of standard updates and on standard system optimization. The platform greatly shortens the standard update cycle and adds to the complexity level of the system.
[Figure: external drivers of CSS complexity, with laws and regulations leading to the increase of standards and the adjustment of standard content]
equipment, new materials, and knowledge. These requirements enhance the complexity of the standard system during standard updates.
The dynamic change of the external factors of the construction standard system is the origin of system adjustment. The dynamics of the system cause its unsteadiness, and the disturbance of external elements is one of the causes of system complexity, as shown in Fig. 118.3.
[Figure: internal complexity of the system, comprising the complexity of each subsystem's objects and the complexity of the feedback information]
The range of construction activities is very wide: it contains not only civil engineering and equipment installation but also materials, cost, infrastructure, and other professional fields. The participants of the system influence each other in space and time, because establishing the system requires their cooperation. Establishing a construction standard system requires crossing multiple professions, and the transverse and longitudinal crossing of professions is very complicated; for example, the housing building subsystem needs the building design, structure, and foundation professions to combine at their respective stages. Each element of a subsystem enters or exits the system in its own life cycle. So the complexity of the standard system lies in how to coordinate different professional standards organically across their different life cycles, thereby giving full play to the function of the system and reducing the conflicts and contradictions between standards.
The feedback information of the construction standard system comes from owners, design institutes, builders, material suppliers, and supervision units, and also from different project stages such as feasibility research, design, bid inviting, and the construction process. The management process also generates a large quantity of feedback information, such as quality control, investment control, progress control, and contract management. Feedback information from different units increases the complexity of information search during the system optimization process. Feedback from related units may be contradictory, while their benefits may sometimes be consistent. The dependency and relevancy of feedback information from different related units, processes, and environments increase the difficulty of information collection and analysis for information demanders.
Research on the construction standard system has been stuck in the early stage of system science, that is, at the concepts of elements, subsystems, and structure. The standards in the system are completely passive: the purpose of their existence is to realize some task or function of the system, and they have no objects or orientations of their own. The system cannot ''grow'' or ''evolve'' through environmental interactions; it can only react in a fixed mode, even though there is some communication with the external environment. Using the theory of complex adaptive systems, the elements of the construction standard system can instead be regarded as active and adaptive agents with their own purpose and initiative.
Holland identified four characteristics of adaptive and evolutionary processes built around the concept of the ''agent'': aggregation, nonlinearity, flows, and diversity. He also gave three mechanisms: tags, internal models, and building blocks (Holland 1995; Jin and Qi 2010).
The constitutors of the construction standard system can analyze the demand for standard functions, forecast the development direction of technology, and take action according to predetermined objects. Constitutors can form an aggregate of organizations in a particular field by ''adhering'', and this aggregate finally becomes a standardization organization. The new aggregate develops in an environment that has huge demand for it; the whole process can be considered the motion of a subsystem. This aggregation relationship does not mean that every organization can adhere together: only organizations that comply with the subsystem's development goals, are helpful to the standardization field, and have professional relativity can form such an aggregation relationship. The common objects, conditions, and rules of this choice are endowed with a cognizable form, which is called a ''tag''.
‘‘Nonlinearity’’ and ‘‘flows’’ are two characteristics of construction standard
system. There is the exchange current of information, function and benefit between
standards, subsystems, levels and standardization activities as previously men-
tioned. Moreover, the unblocked degree and turnover frequency of the ‘‘flows’’ are
in a high level due to the complexity of the system. The elements and their
The elements form the system in a certain way, and this produces specific attributes, characteristics, behaviors, and functions that the whole system has but the parts do not. System theory calls this the emergent property (Miao 2006). The most important thing in describing the construction standard system with system theory is to grasp the whole emergence of the system.
[Figure: tertiary hierarchical structure of the system, with elements x_1, x_2, ..., x_n under relation R_x, a subsystem relation R_12, and sub-elements z_121, z_122, z_123, ...]
aspects, and get the related functional factors (Bo et al. 2002; Farley and Lin 1990;
Berndsen and Daniels 1994).
According to the description of the complexity system, this paper uses a mathematical expression to describe a construction standard system with a tertiary structure qualitatively.

Definition: the system $X$ is a whole formed by $n$ related elements $x_1, x_2, \ldots, x_n$, written $X = \{X_n, R_x\}$ with $X_n = \{x_i \mid i = 1, 2, \ldots, n;\ n \ge 2\}$, where $R_x$ is the relation among these elements, called the soft structure of the system (Miao 2005). It meets the following condition:

(1) there exists a subsystem $Y = \{Y_m, R_y\}$ with $Y \subseteq X_n$, $Y_m = \bigcup_{i=1}^{n} Y_i^m$, and $x_i = \{Y_i^m, R_{iy}\}$.

The above expression means that the construction standard system also has hierarchical, nonlinear, and coupling characteristics.
The emergent property of the construction standard system comes from its elements, structure, and environment. The joint effect of elements, scale, structure, and environment produces the whole emergent property of the system (Bo et al. 2002).
(1) The construction standard system is formed by laws, regulations, standards, and other elements, so the origin of the whole emergent property lies in each element. The emergence of the system is constrained by the characteristics of the elements, which means a random combination of elements cannot form the system. (2) There is a relationship between emergence and the scale of the system. Scale is an essential condition of complexity, and it is hard for the complexity of a system to emerge from simplicity without enough elements. (3) The level and characteristics of each element are the material base of the whole emergent property, but they only provide the objective possibility for emergence. The interaction, mutual inspiration, restraint, and complementation among the different configurations of the elements generate the whole emergence; this is called the structure effect, and it is the core source of the whole emergent property. (4) The external environment provides the necessary resources and constraint conditions for the emergence process. The construction standard system obtains resources from interaction with the external environment; these resources help the system exploit system space, form the system boundary, and establish channels for exchanging material, energy, and information with construction activities. They also make the system adapt to new environments and enhance its anti-jamming capability. These exchanges finally generate the emergent property of the system.
According to the definition of the complexity system and the mechanism of the emergent property, we can abstractly deduce the mechanism of emergence, that is, of the properties the system has but its elements do not.

According to the mathematical definition of the construction standard complexity system, the system is $X = \{X_n, R_x\}$ with $X_n = \{x_i \mid i = 1, 2, \ldots, n;\ n \ge 2\}$; each $x_i = \{X_i, R_i\}$ is a subsystem of $X$, $R_x$ is the correlation set of the elements $x_1, x_2, \ldots, x_n$, and $\bigcup_{i=1}^{n} X_i = X_n$.

The emergence of the system is reflected in $\bigcup_{i=1}^{n} x_i \subset X$: because $\bigcup_{i=1}^{n} X_i = X_n$, whenever $\bigcup_{i=1}^{n} R_i \subset R_x$, it follows that $\bigcup_{i=1}^{n} x_i \subset X$.

Because:

(1) $\bigcup_{i=1}^{n} R_i \neq R_x$

Proof: suppose $\bigcup_{i=1}^{n} R_i = R_x$; this can hold only if $R_x = \emptyset$ and every $R_i = \emptyset$, or if $R_x \neq \emptyset$ while all relations are internal to the subsystems. Either way there would be no soft structure between two subsystems, which is inconsistent with the definition of the construction system $X = \{X_n, R_x\}$; so in general $\bigcup_{i=1}^{n} R_i \neq R_x$.

(2) $\bigcup_{i=1}^{n} R_i \supset R_x$ does not hold

Proof: because each $x_i$ is a subsystem of the system $X$, $x_i \subseteq X$ holds for every $x_i$, so $\bigcup_{i=1}^{n} R_i \subseteq R_x$.

Hence, since $\bigcup_{i=1}^{n} X_i = X_n$ and $\bigcup_{i=1}^{n} R_i \subset R_x$, finally $\bigcup_{i=1}^{n} x_i \subset X$.
Through this argumentation, the system has functions that the parts do not have, and these functions come from the soft structure of the system, that is, from the structural relationships among the parts.
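The emergence argument can be illustrated with a toy model in Python: a system is its element set plus a relation set (the soft structure $R_x$), and the union of the subsystems' internal relations misses the cross-subsystem relations, so the whole has structure that the parts lack. All names and sets here are invented for illustration:

```python
# Toy model of X = {X_n, R_x}: elements plus a "soft structure" of relations.
# Two subsystems x1, x2 each carry only their internal relations R_i;
# the relation bridging them exists only at the whole-system level.
X_n = {"s1", "s2", "s3", "s4"}                    # standards (elements)
R_x = {("s1", "s2"), ("s3", "s4"), ("s2", "s3")}  # soft structure of X

R_1 = {("s1", "s2")}  # internal relations of subsystem x1 = {s1, s2}
R_2 = {("s3", "s4")}  # internal relations of subsystem x2 = {s3, s4}

union_Ri = R_1 | R_2
# The union of subsystem relations is a proper subset of R_x ...
print(union_Ri < R_x)       # True
# ... so the cross-subsystem relation ("s2", "s3"), the source of the
# emergent property in the proof, belongs to the whole but to no part.
print(R_x - union_Ri)       # {('s2', 's3')}
```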
118.4 Conclusion
Complex system theory provides new thinking and a rationale for establishing the construction standard system. The complexity of the standard system is caused by internal and external factors: the external factors include the dynamics of the legal and technological environment and the development of information technology, while the internal factors include the levels of the system, its participants, feedback information, and the complexity of subsystem objects. This paper uses CAS theory to describe the complexity of the standard system. Based on the theory of the emergent property, construction standards form a system according to certain objects, and the system has functions and characteristics that neither the parts nor their simple sum possess.
Acknowledgments Foundation item: Project of the National Twelfth Five-Year Research Program of China (2012BAJ19B03).
References
Baccarini D (1996) The concept of project complexity—a review. Int J Project Manage
14(4):201–204
Berndsen R, Daniels H (1994) Causal reasoning and explanation in dynamic economic system.
J Econ Dyn Control 18:251–271
1120 Z. Sun and S. Zhang
Bo L, Zhang S, Li Y (2002) The qualitative representation and inference of the complex systems.
Syst Eng Theory Pract 12:15–21 (in Chinese)
Bredillet CN (2003) Genesis and role of standards: theoretical foundations and socio-economical
model for the construction and use of standards. Int J Proj Manage 21:463–470
Farley AM, Lin KP (1990) Qualitative reasoning in economics. J Econ Dyn Control 14:435–450
Holland J (1995) Hidden order: how adaptation builds complexity. Addison-Wesley, New York
Jin H, Qi W (2010) Research on the theory and applying of complex system brittleness.
Northwestern Polytechnical University Press, Xi’an, p 120 (in Chinese)
Miao D (2005) On system thoughts (4): careful investigation going deep into system. Chin J Syst
Sci 13(2):1–5 (in Chinese)
Miao D (2006) On systematic thoughts (6): to focus the attention on the emergent properties of the whole. Chin J Syst Sci 14(1):1–6 (in Chinese)
Mu X (2005) Comprehensive description and prospect of national engineering construction
standard. Spec Struct 1:90–92 (in Chinese)
Ofori G, Gang G (2001) ISO 9000 certification of Singapore construction enterprises: its costs
and benefits and its role in the development of the industry. Eng Constr Archit Manage
8(2):145–157
Wang C (2007) The analysis of construction standard system. Architect 5:111–116 (in Chinese)
Wu S (2006) Study on the synergic mechanism and methods of construction project management based on complex system theory. Tianjin University, Tianjin, pp 24–25 (in Chinese)
Yang J (2003) The research on the system of construction regulations and standards. Harbin
Institute of Technology, Harbin, pp 15–18 (in Chinese)
Chapter 119
Location Selection of Coal Bunker Based
on Particle Swarm Optimization
Algorithm
119.1 Introduction
Coal bunkers usually include bottom coal bunkers, district coal bunkers, section coal bunkers, ground bunkers, and tunneling bunkers (Wang 1983). As the main cavern (Zhang 2010), the coal bunker plays an important role in the process of coal
119.2 Methodology
Here, $\omega$ is the inertia weight; its initial value is generally 0.9, and it can be decreased linearly to 0.4 as the iteration number increases (Huang 2011), which lets the search focus on global exploration first and then converge quickly in a certain area. $\varphi_1, \varphi_2$ are learning factors, positive constants usually set to 2; $rnd_1(), rnd_2()$ are random functions with values in (0, 1). $P_{id}$ is the particle's historical best position, and $P_{gd}$ is the position of the particle with the best fitness among all particles.
(3) Calculate the particle's next location from its current velocity, namely

$$x_{id} = x_{id} + v_{id} \qquad (119.2)$$

(4) Return to the second step and repeat the calculation until the set limit value is reached or the number of function evaluations exceeds the set maximum.
1124 Q. Cui and J. Shen
$$\sum_{j=1}^{m} X_{jk} \le M_k \qquad (119.5)$$

$$\sum_{j=1}^{m} X_{jk} \ge D_k \qquad (119.6)$$
Among them, formula (119.4) states that the total coal shipped from resource point j to the alternative points does not exceed its supply capacity; formula (119.5) states that the total supply from all coal resource points to coal bunker k does not exceed its maximum capacity; and formula (119.6) states that the supply from all coal resource points is not less than the demand.
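A candidate assignment can be checked against constraints (119.4), (119.5), and (119.6) mechanically. A minimal sketch in Python, using the quantities of Table 119.5; the supply, capacity, and demand figures are assumptions for illustration only:

```python
def feasible(X, supply, capacity, demand):
    """Check constraints (119.4)-(119.6) for an assignment X[j][k]: the
    quantity shipped from resource point j to bunker candidate k."""
    # (119.4): shipments out of each resource point within its supply
    if any(sum(row) > s for row, s in zip(X, supply)):
        return False
    inflow = [sum(row[k] for row in X) for k in range(len(capacity))]
    # (119.5): inflow to each bunker within its capacity M_k
    if any(q > M for q, M in zip(inflow, capacity)):
        return False
    # (119.6): inflow to each bunker not less than its demand D_k
    return all(q >= D for q, D in zip(inflow, demand))

# Quantities from Table 119.5 (3 resource points x 4 candidate bunkers);
# supply/capacity/demand values are illustrative assumptions:
X = [[30, 0, 0, 0],
     [0, 25, 0, 10],
     [0, 0, 55, 0]]
print(feasible(X, supply=[30, 35, 55], capacity=[40, 30, 60, 20],
               demand=[30, 25, 55, 10]))  # True
```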
The main problem of coal bunker location in a logistics system is to build certain coal bunkers from the candidates, given the supply of a series of resource points; in other words, to optimize which coal bunker each resource point is assigned to. Therefore, the particle position can be structured as follows: for m resource points and q coal bunker candidates, the current position of particle k is $X_k = \{x_{1k}, x_{2k}, \ldots, x_{mk}\}$, where $x_{jk}\ (j = 1, 2, \ldots, m)$ is the bunker to which resource point j transports its coal, so $x_{jk}$ takes values in the interval $[1, q]$. Select the fitness function in the following form:
$$f_k = \sum_{j=1}^{m} C_{jk} l_{jk} X_{jk} + \sum_{j=1}^{m} C_k X_{jk} + F_k W_k \qquad (119.7)$$

$f_k$ is the fitness value of particle k, and the global best fitness is

$$g_{best} = \min\{f_k\} \qquad (119.8)$$

$P_{best}$ is the best position that particle k has experienced, obtained by the following formula:

$$P_{best}(t+1) = \begin{cases} P_{best}(t), & \text{if } f(X_k(t+1)) \ge f(P_{best}(t)) \\ X_k(t+1), & \text{if } f(X_k(t+1)) < f(P_{best}(t)) \end{cases} \qquad (119.9)$$
The implementation procedure of the algorithm is as follows:
(1) Initialization. Set random particle positions $x_k$, velocities $v_k$, and the maximum number of iterations T; the initial particles are generated randomly in the feasible domain of the solution space;
(2) Calculate the fitness value of each particle according to formula (119.7);
(3) For each particle, compare its fitness value with that of the best position $P_{best}$ it has experienced, and update the current best particle position according to (119.9);
(4) For each particle, compare the value at its best position $P_{best}$ with the global best value $g_{best}$, and update the global best particle position;
(5) Update the value of the inertia weight $\omega$;
(6) Update the velocity and position of the particles according to (119.1) and (119.2);
(7) Update $g_{best}$; if the maximum number of iterations T is reached, the cycle is over: output the optimal particle, its best position $g_{best}$, and the optimal fitness value; otherwise return to step (2) and continue.
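The procedure above can be sketched as a minimal discrete PSO in Python. This is a simplification, not the chapter's implementation: each particle holds one bunker index per resource point, velocities are real-valued and positions are rounded back into the discrete range, and the fitness only loosely follows the shape of (119.7) (rate times distance times quantity, plus a fixed cost $F_k$ for each opened bunker). All cost data are placeholders shaped like Tables 119.2 to 119.4:

```python
import random

random.seed(0)

# Placeholder data (shapes follow Tables 119.2-119.4; values illustrative):
rate = [[7, 8, 13, 13], [10, 10, 9, 8], [11, 8, 11, 9]]  # goods rate C_jk
dist = [[3, 5, 5, 4], [4, 3, 4, 3], [5, 4, 3, 5]]        # distance l_jk
supply = [30, 35, 55]                                     # quantity at point j
fixed = [80, 90, 95, 85]                                  # F_k per opened bunker
m, q = 3, 4

def fitness(assign):
    """Cost in the spirit of (119.7): transport cost plus fixed costs
    for every bunker actually used."""
    cost = sum(rate[j][assign[j]] * dist[j][assign[j]] * supply[j]
               for j in range(m))
    cost += sum(fixed[k] for k in set(assign))
    return cost

def pso(iters=100, n_particles=20):
    pos = [[random.randrange(q) for _ in range(m)] for _ in range(n_particles)]
    vel = [[0.0] * m for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [fitness(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for t in range(iters):
        w = 0.9 - 0.5 * t / iters  # inertia weight: 0.9 -> 0.4 linearly
        for i in range(n_particles):
            for d in range(m):
                vel[i][d] = (w * vel[i][d]
                             + 2 * random.random() * (pbest[i][d] - pos[i][d])
                             + 2 * random.random() * (gbest[d] - pos[i][d]))
                # (119.2), then round/clamp back into the discrete domain
                pos[i][d] = min(q - 1, max(0, round(pos[i][d] + vel[i][d])))
            f = fitness(pos[i])
            if f < pbest_f[i]:  # update P_best per (119.9)
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

best, cost = pso()
print(best, cost)
```

With the inertia weight decreasing linearly from 0.9 to 0.4 as described in Sect. 119.2, early iterations explore broadly and later ones refine around the global best position.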
119.3 Results
Table 119.2 Unit goods rate from resource point j to coal bunker’s alternative point k
k
j 1 2 3 4
1 7 8 13 13
2 10 10 9 8
3 11 8 11 9
Table 119.3 Unit distance from resource point j to coal bunker’s alternative point k
k
j 1 2 3 4
1 3 5 5 4
2 4 3 4 3
3 5 4 31 5
Table 119.4 Management fee rate of unit circulation in coal Bunker’s alternative points
Alternative points L1 L2 L3 L4
Management fee rate of unit circulation 80 90 95 85
Table 119.5 The quantity of goods from resource point j to coal bunker’s alternative point k
k
j 1 2 3 4
1 30 0 0 0
2 0 25 0 10
3 0 0 55 0
119.4 Conclusion
This paper solves the coal bunker selection problem in an underground logistics system through a case study based on particle swarm optimization, establishing the corresponding nonlinear model to minimize cost. The case shows that this method can be applied effectively to the single-objective location selection problem for coal bunkers, thus providing a new optimization algorithm for coal bunker selection in coal underground logistics systems.
References
Abstract This paper presents a study of the behavior of firms receiving the government's policy support. It is emphasized that a firm's attitude is an important factor in policy planning. Resource-based theory is also adapted to firms' responses to the innovation policy of an industrial cluster. From the perspective of complex adaptive systems, policy response is influenced by factors such as comprehension, firm size, the entrepreneur, benefit, innovative capability, demand for R&D, and neighboring firms. Based on the echo model, the paper discusses the mechanism of cluster firms' responses to innovation policy. Furthermore, it describes the important role that various factors play in the process of policy response, along with the trigger conditions. Finally, the behavior matching modes and the self-adaptive responding flow are analyzed. Findings show that the fundamental of a firm's response to policy is resource dependence, and that self-organizing evolution makes policy response better and better.
120.1 Introduction
The industrial cluster (cluster) has a tremendous role in promoting the develop-
ment of all sectors of society on the regional economy (de Oliveira Wilk and
Fensterseifer 2003). It is no doubt about the positive role of regional innovation
policy promotes enterprise technology research and innovation. Specially, in the
Y. Zhang (&) C. Li
School of Economics and Management, Beijing University of Technology, Beijing, China
e-mail: [email protected]
C. Li
e-mail: [email protected]
Using the echo model, researchers can address the mutual dissemination and absorption of firms' innovation resources via resource-based theory (Holland 1995). This paper sums up the echo model research framework as shown in Fig. 120.1. The model provides an analysis method for the multi-agent interaction mechanisms between the government and the industrial cluster's firms. The mechanism is divided into two segments, named control and tagging: the control segment includes selective response, resource acquisition, conversion, evolution, and conditions of replication; the tagging segment is composed of offense, defense, and adhesion.
In the policy response process, the government realizes resource allocation through policy planning and implementation. Simultaneously, firms respond to the policy to obtain resources and engage in innovation. These resources are usually divided into information, funds, intellectual results, human resources, and physical tools (Filippetti and Archibugi 2011). In this paper, innovation policy provides market information, funds, project items, and human resources. The information includes guidance, specifications, the market situation, technological advances,
120 Mechanism of Firm’s Response to Innovation Policy 1131
[Fig. 120.1: the echo model research framework, in which the government's innovation policy and the firms of the industrial cluster interact through selective response; causality analyzing links R&D behavior, conditions, R&D performance, factor analysis, and other effects]
and R&D results. The funds are a collection of investment, subsidies, tax relief, etc. The projects provided by the government stimulate cluster firms' R&D passion, in the form of major projects, fund projects, technology projects, and special projects. Human resources, covering general, technology, and R&D staff, supply the skilled persons needed for innovation R&D activities. Among them, the information and projects provided by the government are interactive resources that can also be shared. It should be emphasized that the information is binary, either ''have'' or ''none'', while the remaining resources can be measured.
After creating the echo model, Holland (1995) used it to study the group prisoner's dilemma and successfully applied adaptive functions in multi-agent gaming interactions. This paper defines the agents as the cluster's firms and the government in analyzing the innovation policy response mechanism.
Based on actual responses to innovation policy, scholars have summarized seven key factors (as shown in Table 120.1).
(1) Cognition ability: the ability to learn and understand; a correct cognition of innovation policy is a prerequisite for a cluster firm's response to the policy. Based on questionnaire surveys, scholars believe that carefully analyzing and clearly understanding the range of agents and the limitations is very important.
(2) Firm's size: it limits innovation activities and capabilities, and firms of different sizes have different attitudes toward policy response. Large firms tend to respond actively to innovation policy, while SMEs prefer subsidies and tax policy. Studies usually use variables such as assets, employees, and resources.
(3) Entrepreneur's attitude: it directly affects whether a firm responds to policy. Studies usually use variables such as the firm's diversification and its position in the network of government and firms.
(4) Results and income: a key factor in firms' policy response. Studies usually use variables such as the policy funding amount, project limits, innovation cost, skilled-manpower increment, patents and other result increments, new product profitability, tax relief, and other income.
(5) Innovation capability: whether policy can help enhance a firm's innovation capability is a key influencing factor of its policy response. Studies usually use variables such as the yield, utilization, and conversion rate of resources and knowledge.
(6) R&D requirement: innovation policy focuses on supporting firms' R&D innovation, and meeting R&D requirements is also a factor in firms' responses. Studies usually use variables such as direct policy investment, R&D subsidies, staff welfare, R&D efficiency, and resources.
(7) Inter-enterprise relations: to measure the interaction between cluster firms and the imitation of innovation behaviors, cooperation and communication are important. Studies usually use variables such as willingness to cooperate, distance, and information exchange frequency.
[Figure: matching of the echo model mechanisms, with the tagging segment (offensive, defensive, adhesion) and the control segment (selection, acquisition, conversion, evolution)]
(1) Selection: at moment $t$, a firm holds an amount of resource $\bar{A}^t_{R_i}$; the resource acquired from policy response is $A^t_{R_i}$, that acquired from other firms is $MA^t_{R_i}$, and the resource used for innovation is $\tilde{A}^t_{R_i}$. An innovation policy should trigger two conditions, as in

$$D^t_1 = \frac{A^t_{R_i}}{MA^t_{R_i}} > 1; \qquad D^t_2 = \frac{A^t_{R_i} + \tilde{A}^{t-1}_{R_i}}{\tilde{A}^t_{R_i}} > 1 \qquad (120.1)$$
It is emphasized that resources are not only a cluster firm's innovation guarantee but also the key point of its response to policy. Compared with obtaining income, acquiring resources to meet innovation requirements is the primary purpose of a firm's response to policy, so the government needs to enhance resource supply in policy planning.
(2) Acquisition: Cluster firm’s response to innovation policy is related to gov-
ernment resource supply. The resource which firms select themselves contains
information (R1), funds (R2), projects (R3) and human resource (R4) (Kang
and Park 2012). At the moment t in the process, a firm has resource A t and
~ t t ~ t ~ tþ1
consumes A . When ARi ARi \ARi , it will decide to respond to policy.
Furthermore, because there is competition and cooperation relation between
firms in cluster, so resource could disseminate from one firm to another. The
MAtRi denotes resource acquisition from other firms. If we do not consider the
policy’s effect, a firm maximum requires A t A ~ t þ MAt at the t ? 1
Ri Ri Ri
moment. According to gaming theory, the government only needs to provide
resource more than t ? 1 moment above mentioned, the firm will respond to
innovation policy. The amount of resource is acquired from response as in
120 Mechanism of Firm’s Response to Innovation Policy 1135
~t ; A
AtRi ¼ ð1 kÞ½minðA t A
~ t þ MAt Þ ð120:3Þ
Ri Ri Ri Ri
(3) Conversion:

$$Y = aX - C \qquad (120.4)$$

where $a$ denotes the technology-result conversion coefficient and $C$ denotes the innovation cost. It is classic that the converted income is related to resources, innovative capability, and cost. However, the process in formula (120.4) is not continuous and has no technology effect, so we draw on Gilbert and Katz's standpoint to improve the formula (Gilbert and Katz 2011). We assume the policy response process involves $n$ types of resource, and the conversion time $t$ becomes gradually shorter as experience accumulates, as $1 - e^{-n\theta t}$ with parameter $\theta > 0$. Then every innovative profit $I(T)$ converted from resources through technology $T$ is

$$I(T) = \max_n \left\{ a \int_0^{\infty} n\theta I(T+1)\, e^{-(n\theta + r)t}\, dt - nc \right\} = \max_n \left\{ a\, \frac{n\theta I(T+1)}{n\theta + r} - nc \right\} \qquad (120.5)$$

where $r$ denotes a changing parameter of resources converting to profits through technology. For all firms, if $a \le 0$ there is no response activity because there is no profit; so $I > 0$ and $a > 0$ is the condition for firms to respond to innovation policy.
(4) Evolution: Firms are agents that evolve through the experience accumulated in the learning process of response. There are three evolution patterns, depending on firm size. The first is passive evolution, describing firms that depend intensively on the innovation policy: they need the policy subsidy to operate and therefore have to respond. The response is passive, but after responding their experience accumulates in a database, with different responding patterns for different policies. The second is imitating evolution, describing firms that depend on the innovation policy only in a general way; here the response is decided by entrepreneurs, who usually follow the experience of large-scale firms in the cluster. The third is active evolution, describing firms that can decide for themselves whether to respond; such firms often have a policy-analysis department in charge of statistical analysis, policy evaluation and establishing response indicators, and they also give feedback to the government. The pattern followed is determined by the probability $P_e$ of experience growth:
1136 Y. Zhang and C. Li
$P_e = p, \quad \begin{cases} 0 < p < 0.4 & \text{first pattern} \\ 0.4 < p < 0.7 & \text{second pattern} \\ 0.7 < p < 1 & \text{third pattern} \end{cases}$
The probability of experience growth is random, which reflects that the firms' responding process is complex-adaptive.
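A minimal sketch of this rule follows; the assignment of the boundary values 0.4 and 0.7 is an assumption, since the text leaves the endpoints open:

```python
import random

# Sketch of the piecewise rule for P_e: a random experience-growth draw p
# selects the evolution pattern. Boundary assignment is an assumption.
def evolution_pattern(p):
    if 0 < p < 0.4:
        return "passive"      # first pattern
    elif 0.4 <= p < 0.7:
        return "imitating"    # second pattern
    else:
        return "active"       # third pattern

pattern = evolution_pattern(random.random())
```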
(5) Adhesion: As a firm's experience accumulates, two connection styles appear. One is the inertia connection, describing firms that respond to the innovation policy continuously on the basis of experience. The other is the one-time connection, describing firms that merely attempt to respond to a policy. The advantage of the inertia connection is direct response without extra energy consumption, which reduces cost (Horwitch and Mulloth 2010); but when some reason, such as qualification, keeps the firm from responding, its opportunity cost increases. The selection of the connection style is related to resource acquisition, resource conversion and the selection mechanism. Assume innovation policy j is responded to by firm k, with sustained cost $C''_{jk}$, attempt cost $C'_{jk}$, and $C''_{jk} < C'_{jk}$. The intensities of the inertia and attempt connections are $\mu''_{jk}$ and $\mu'_{jk}$ respectively, with $\mu''_{jk} + \mu'_{jk} = 1$. If the unresponsive time within time slice t of the actual response process is $t_0$ and $C''_{jk}\, t > C'_{jk}\, t_0$, then $\mu''_{jk}$ is reduced while $\mu'_{jk}$ is increased, the process stopping when an intensity reaches 1; on the contrary, $\mu'_{jk}$ is reduced while $\mu''_{jk}$ is increased. If the inertia connection appears continuously, firm and policy are in adhesion.
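The intensity-update rule can be sketched as follows; the step size is an assumption, since the text only gives the direction of the adjustment:

```python
# Sketch of the connection-intensity update: mu_inertia + mu_attempt == 1.
# The step size is an assumption; the text specifies only the direction.
def update_intensity(mu_inertia, c_sustained, c_attempt, t, t0, step=0.1):
    """If the sustained cost over t exceeds the attempt cost over the
    unresponsive time t0 (C''_jk * t > C'_jk * t0), weight shifts from the
    inertia connection toward the one-time connection; otherwise the reverse."""
    if c_sustained * t > c_attempt * t0:
        mu_inertia = max(0.0, mu_inertia - step)
    else:
        mu_inertia = min(1.0, mu_inertia + step)
    return mu_inertia, 1.0 - mu_inertia

mu_ii, mu_at = update_intensity(0.5, c_sustained=2.0, c_attempt=5.0, t=10.0, t0=1.0)
```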
[Figure: flow diagram of the firm's policy-response process — Start; cognition of the policy (the effect of factors such as project demand, subsidy welfare, tax relief, funds amount, R&D efficiency, innovation cost, information from other firms, cooperation, location, liability, profits, risk, staff, assets); game decision-making; judging from experience; matching conditions (the policy's subsidy is adequate, the tax relief is moderate, the firm meets the project's conditions, the responding risk is acceptable, information exchange is extensive, the firm is willing to R&D innovation and to cooperate); response effects (giving impetus to firm R&D, increasing the firm's performance, increasing profit by responding, the policy's effect is obvious, the firm's responding effect is good); Terminate]
120.4 Conclusion
This study attempts to apply the Echo model to research on the response to innovation policy. It puts forward a research frame based on CAS theory and establishes an Echo model by analyzing the response mechanisms. It is emphasized that firms in a cluster pay attention to the resource supplied by policy, and that other firms' experience also influences the policy response. Cluster firms' response is a continuous 'stimulate-react' and gaming process; as experience accumulates, self-organizing evolution gradually increases firms' policy-cognition capability and resource utilization. This mechanism is an important guarantee of benign response. On the one hand, government should focus on firms' innovation requirements when planning cluster development, and it is best to let firms participate in policy-making to ensure the process is impartial, open and fair. Because firms are self-adaptive, the requirement for resource can be controlled within a certain range to achieve balance; ever more resource is not conducive to innovation, so government should seek a reasonable range for allocating policy resource in order to avoid a resource surplus. On the other hand, firms should avoid blindly pursuing the benefits of policies and should encourage their staff to analyze policy; meanwhile, policy response can be incorporated into innovation planning. Firms should respond to policy according to their capability and acknowledge the pros and cons of each policy. The study also has some significance for government administration.
References
Acs Z, Audretsch D, Feldman M (1992) Real effects of academic research: a comment. Am Econ
Rev 82:363–367
Audretsch DB, Lehmann EE, Warning S (2005) University spillovers and new firm location. Res
Policy 34(7):1113–1122
Beneito P (2003) Choosing among alternative technological strategies: an empirical analysis of
formal sources of innovation. Res Policy 32(4):693–713
Bessant JR (1982) Influential factors in manufacturing innovation. Res Policy 11(2):117–132
Breschi S, Malerba F, Orsenigo L (2000) Technological regimes and Schumpeterian patterns of
innovation. Econ J 110:388–410
Claver E, Llopis J, Garcia D, Molina H (1998) Organizational culture for innovation and new
technological behavior. J High Technol Manag Res 9:55–68
Colwell K, Narayanan V K (2010) Foresight in economic development policy: Shaping the
institutional context for entrepreneurial innovation. Futures 42(4):295–303
Coronado D, Acosta M, Fernandez A (2008) Attitudes to innovation in peripheral economic
regions. Res Policy 37:1009–1021
de Oliveira Wilk E, Fensterseifer JE (2003) Use of resource-based view in industrial cluster
strategic analysis. Int J Oper Prod Manag 23:995–1009
Fichman RG, Kemerer CF (1997) The assimilation of software process innovations: an
organizational learning perspective. Manag Sci 43:1345–1363
Filippetti A, Archibugi D (2011) Innovation in times of crisis: national systems of innovation,
structure, and demand. Res Policy 40:179–192
Gilbert RJ, Katz ML (2011) Efficient division of profits from complementary innovations. Int J
Ind Organ 29(4):443–454
Grace TRL, Shen YC, Chou J (2010) National innovation policy and performance: comparing the
small island countries of Taiwan and Ireland. Technol Soc 32(2):161–172
Hadjimanolis A (2000) A resource-based view of innovativeness in small firms. Technol Analy
Strat Manag 12:263–281
Holland JH (1995) Hidden order: how adaptation builds complexity (Helix Books), Addison-
Wesley Publishing Company, New York
Holmen M, Magnusson M, Mckelv M (2007) What are innovative opportunities? Indus Innov
14:27–45
Horwitch M, Mulloth B (2010) The interlinking of entrepreneurs, grassroots movements, public
policy and hubs of innovation: the rise of cleantech in New York city. J High Technol Manag
Res 21(1):23–30
Huang CY, Shyu JZ, Tzeng G H (2007) Reconfiguring the innovation policy portfolios for
Taiwan’s SIP Mall industry. Technovation 27(12):744–765
de Jong JPJ, Freel M (2010) Absorptive capacity and the reach of collaboration in high technology small firms. Res Policy 39(1):47–54
Kang K, Park H (2012) Influence of government R&D support and inter-firm collaborations on
innovation in Korean biotechnology SMEs. Technovation 32:68–78
Kern F (2012) Using the multi-level perspective on socio-technical transitions to assess
innovation policy. Technol Forecast Social Change (Contains Special Section: Emerging
Technologies and Inequalities) 79:298–310
Mowery DC, Nelson RR, Sampat BN, Ziedonis AA (2001) The growth of patenting and licensing
by U.S. universities: an assessment of the effects of the Bayh–Dole act of 1980. Res Policy
30(1):99–119
Nybakk E, Hansen E (2008) Entrepreneurial attitude, innovation and performance among
Norwegian nature-based tourism enterprises. Forest Policy Econ 10:473–479
Palmberg C (2004) The sources of innovations—looking beyond technological opportunities.
Econ Innov New Technol 13:183–197
Paunov C (2012) The global crisis and firms’ investments in innovation. Res Policy 41(1):24–35
Pavitt K, Robson M, Townsend J (1989) Technological accumulation, diversification and
organisation in UK companies, 1945–1983. Manag Sci 35:81–99
Rodríguez-Pose A (1999) Innovation prone and innovation averse societies. Growth Change
30:75–105
Simpson RD, Bradford RL (1996) Taxing variable cost: environmental regulation as industrial policy. J Environ Econ Manag 30(3):282–300
Souitaris V (2002) Technological trajectories as moderators of firm-level determinants of
innovation. Res Policy 31(6):877–898
Waarts E, Van Everdingen YM, Van Hillegersberg J (2002) The dynamics of factors affecting the
adoption of innovations. J Prod Innov Manag 19:412–423
Weber KM, Rohracher H (2012) Legitimizing research, technology and innovation policies for
transformative change: combining insights from innovation systems and multi-level
perspective in a comprehensive ‘failures’ framework. Res Policy (Special Section on
Sustainability Transitions) 41:1037–1047
Chapter 121
Modeling and Simulation
of Troubleshooting Process
for Automobile Based on Petri Net
and Flexsim
Keywords Fault source · M/M/C queuing model · Modeling and simulation · Petri net · Troubleshooting
121.1 Introduction
The methods widely used in automotive fault-diagnosis expert systems are the fault tree and fuzzy set theory (Kong and Dong 2001; Ji 2003; Su 2011). In summary, most of the literature on automotive failure analysis deals simply with after-sales data; the models created have a certain reference value but lag strongly behind. The fault-tree and fuzzy-set analysis methods cannot handle well the fuzziness and concurrency of car-fault feature extraction. In addition, these studies included no simulation, so their reliability still needs to be verified.
To make a breakthrough in the three aspects mentioned above, this paper focuses on the car production process to identify a process-oriented, experience-oriented approach that can find the fault source step by step; it combines Petri Nets and Flexsim to carry out the modeling and simulation, handling the fuzziness, parallelism and concurrency of car faults; and it uses queuing theory to make the quantitative analysis more reliable.
A Petri Net is a system model that uses P-elements to represent states, uses T-elements to represent changes, and associates the flow of resources (material, information). Overall, it contains states (Places), changes (Transitions) and flows, so its mathematical definition (Su 2011) is a triple N = (P, T; F), where
P = {p1, p2, …, pn} is the Place set, n being the number of Places;
T = {t1, t2, …, tm} is the Transition set, m being the number of Transitions;
F is a set of ordered pairs, each consisting of a P-element and a T-element, satisfying $F \subseteq (P \times T) \cup (T \times P)$.
The characteristics of Petri Nets are mainly reflected in two aspects. The first is realizability: a Petri Net system must ensure that each Transition obeys natural laws, so that it can actually be realized. The second, and most prominent, is that Petri Nets are suitable for describing and analyzing asynchronous concurrent systems at various levels of abstraction.
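The triple N = (P, T; F) maps directly onto a data structure. A minimal sketch (the marking is stored as a token count per Place; a Transition is enabled when every input Place holds a token):

```python
# Minimal Petri Net sketch for N = (P, T; F), with the flow relation F
# split into input arcs (P x T) and output arcs (T x P).
class PetriNet:
    def __init__(self, places, transitions, arcs_in, arcs_out):
        self.marking = {p: 0 for p in places}  # token count per Place
        self.arcs_in = arcs_in                 # Transition -> list of input Places
        self.arcs_out = arcs_out               # Transition -> list of output Places

    def enabled(self, t):
        """A Transition is enabled when every input Place holds a token."""
        return all(self.marking[p] > 0 for p in self.arcs_in[t])

    def fire(self, t):
        """Firing consumes one token per input arc and produces one per output arc."""
        if not self.enabled(t):
            raise ValueError(f"transition {t} is not enabled")
        for p in self.arcs_in[t]:
            self.marking[p] -= 1
        for p in self.arcs_out[t]:
            self.marking[p] += 1

# Usage: a single step p1 --t1--> p2 of a troubleshooting sequence.
net = PetriNet({"p1", "p2"}, {"t1"}, {"t1": ["p1"]}, {"t1": ["p2"]})
net.marking["p1"] = 1
net.fire("t1")
```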
In order to solve quality problems, the old and new seven tools of quality management have been widely used in all aspects of business operations. These methods each have their strengths and focus in practical application, but they do not meet the authors' requirement: to identify the sources of failure efficiently, step by step, from easy to difficult (Fig. 121.1 and Table 121.1). Based on the production of cars and the experience of the staff of the Yunnan Y Automotive Company, a new troubleshooting method was developed.
Fig. 121.1 Diagram for the troubleshooting process [flow-diagram labels: Beginning; work standardization (operators understand quality requirements, the correct guidance documents, executive key points, enforcing the order of operations); use of the right tools and devices (all shifts using the same tools, all tools are calibrated, tools are set to the right torque, whether tools already show wear, tools connected to the dark or alternate process); station layout efficiency, visual aids and appropriate error-proofing technology; preventive maintenance checks; troubleshooting for the first, second and fourth steps; Ending]
1144 W. Liao et al.
Fig. 121.4 Problems to be checked in the third step [figure labels: supplier's parts meet the parts' design; the right parts are identified; parts positioned properly; part number and corresponding cards; cumulative tolerances meet requirements; the need for craft changes; execution of the correct craft in documents; the assembly sequence needs to change; the need to add dark lights, error proofing and interlocks]
Figure 121.6 shows the models built with a basic Petri Net (Ren and Hao 2010; Su and Shen 2007; Xue et al. 2006). From Fig. 121.6 we can see that the system has too many nodes and the model is too large (Bourjij et al. 1993). To express the logical relationships better, we therefore draw on the idea of the Colored Petri Net (Wu and Yang 2007): introducing color transitions and substitution reduces the complexity of the system's Petri Net model and makes it intuitive and simple. The model built with CPN-tools (Vinter et al. 2003) is simplified as in Fig. 121.7. From Fig. 121.7, readers can see the entire car troubleshooting process at a glance; taken together with Fig. 121.6, they can obtain detailed reference and grasp the whole troubleshooting process both in overview and in detail.
[Fig. 121.8 (referenced below): four-stage tandem queuing system with n, m, h and k parallel service desks]
In the Petri Net model, if the problems that every step needs to search are regarded as objects (customers) waiting for service, the quality engineers as service desks, and more than one quality engineer can work simultaneously, then the whole process can be simplified into the following multi-server, single-queue tandem queuing system model. Assuming that the arrivals of customers (fault cars) obey a Poisson process, that each quality engineer's checking time obeys a negative exponential distribution, and that the steps are independent, this model is the M/M/C queuing model. Once the cost and the flow of fault cars have basically stabilized, determining the optimal number of service desks can reduce costs and maximize benefits.
The optimal numbers of service desks to be sought are n, m, h and k in Fig. 121.8. To improve the reliability and accuracy of the calculation, the following method from the references (Ai et al. 2007) is used to find the optimal service-desk number C of the M/M/C model.
In the steady-state case, the expected total cost per unit time (service costs plus waiting costs) is

$z = c'_s c + c_w L$  (121.1)

where c is the number of service desks; $c'_s$ is the unit-time cost of each desk; $c_w$ is the unit-time cost of each customer staying in the system; and L is the system's average customer number $L_s$ (or the queue's average $L_q$), on which the number of service desks set up has a deep impact. Because $c'_s$ and $c_w$ can be obtained statistically from the actual situation, (121.1) is a function $z(c)$ of c, and the aim is to find the optimal solution $c^*$ that minimizes $z(c)$. Since c is an integer, the marginal analysis method is used:

$z(c^*) \le z(c^* - 1), \qquad z(c^*) \le z(c^* + 1)$  (121.2)

Substituting z from formula (121.1) into formula (121.2) gives

$c'_s c^* + c_w L(c^*) \le c'_s (c^* - 1) + c_w L(c^* - 1)$
$c'_s c^* + c_w L(c^*) \le c'_s (c^* + 1) + c_w L(c^* + 1)$  (121.3)
According to the statistics, fault cars in the first step arrive following a Poisson distribution with an average arrival rate of 26 per hour, and the service time follows a negative exponential distribution with an average service rate of 15 per hour. Thus $c'_s = 37$ Yuan per quality engineer, $c_w = 8$ Yuan per customer, $\lambda = 26$, $\mu = 15$, $\lambda/\mu = 1.73$. Letting the number of quality engineers c be 1, 2, 3, 4 and 5 in turn, and using the multi-server $W_q \mu$ values (Ai et al. 2007) with linear interpolation, the corresponding values of $W_q \mu$ are as shown in Table 121.2. Substituting $L_s$ into formula (121.4) and then applying formula (121.1) gives the data in Table 121.3. From Table 121.3 the lowest total cost is 128.09 Yuan, corresponding to c = 3, so the lowest-cost arrangement is to set three quality engineers.
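The marginal analysis can be cross-checked without table interpolation by computing L directly from the standard M/M/c (Erlang C) formulas; the closed-form L below stands in for the tabled Wq·μ values. A sketch with the step-one data (λ = 26, μ = 15, c's = 37, cw = 8):

```python
from math import factorial

def mmc_L(lam, mu, c):
    """Average number in system L = Lq + rho for a stable M/M/c queue."""
    rho = lam / mu                      # offered load
    a = rho / c                         # per-server utilization, must be < 1
    p0 = 1.0 / (sum(rho**k / factorial(k) for k in range(c))
                + rho**c / (factorial(c) * (1.0 - a)))
    lq = p0 * rho**c * a / (factorial(c) * (1.0 - a) ** 2)
    return lq + rho

def optimal_servers(lam, mu, cs, cw, c_max=10):
    """Minimize z(c) = cs*c + cw*L(c) (formula 121.1) over all stable c."""
    costs = {c: cs * c + cw * mmc_L(lam, mu, c)
             for c in range(1, c_max + 1) if lam / (c * mu) < 1.0}
    return min(costs, key=costs.get), costs

best_c, costs = optimal_servers(lam=26, mu=15, cs=37, cw=8)
```

The exact computation gives the minimum at c = 3 with z(3) ≈ 128.4 Yuan, in agreement (up to interpolation error) with the tabled 128.09 Yuan.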
It is easy to see that fault cars arriving at the second step still obey a Poisson distribution (Winston 2004): if N is a Poisson random variable, then $E(N) = \mathrm{Var}(N) = \lambda$. The flow of fault cars from the first step into the second step is reduced by 25 %, so $\lambda = 26 \times 75\,\% = 19.5 \approx 20$; the service time again obeys a negative exponential distribution, with $\mu = 12$ per hour. The smallest total cost obtained is 109.30 Yuan, corresponding to the optimal number of service desks c = 2, so this step is best staffed with two quality engineers.
Similarly, fault cars arriving at the last two steps obey Poisson distributions with $\lambda = 26 \times 35\,\% = 9.1$ and $\lambda = 1.3$ respectively; the service times again obey exponential distributions, with service rates $\mu = 12$ and $\mu = 6$ per hour. Because in each case the average arrival rate $\lambda$ is less than the service rate $\mu$, each of the last two steps needs only one quality engineer. According to the above theoretical calculation, the first step should have three quality engineers, the second step two, and each of the last two steps one; this makes the benefit of the system optimal.
According to the previous section, the fault cars' inter-arrival time is about 138 s, and the service times are exponentially distributed with means of 240, 300, 300 and 600 s respectively (Chen et al. 2007). The model built in Flexsim is shown in Fig. 121.9. As shown in Fig. 121.9, a Source serves as the fault-car generator; four Queues buffer the cars; Processors represent the quality engineers; and, because different proportions of cars finish troubleshooting and flow into the Sink, three Flow Nodes provide paths that achieve the shunting of the first three steps.
$\hat{\theta} = \dfrac{1}{R} \sum_{r=1}^{R} \hat{\theta}_r$  (121.5)

$\text{h.l.} = t_{\alpha/2,\,R-1}\,\hat{\sigma}(\hat{\theta}) \le e$  (121.8)

Substituting formula (121.7) into formula (121.8) gives:

$R \ge \left( \dfrac{t_{\alpha/2,\,R-1}\, S_0}{e} \right)^2$  (121.9)

Owing to $t_{\alpha/2,\,R-1} \approx z_{\alpha/2}$ ($z_{\alpha/2}$ being the $\alpha/2$ quantile of the standard normal distribution), R is the smallest integer, with $R \ge R_0$, satisfying:

$R \ge \left( \dfrac{z_{\alpha/2}\, S_0}{e} \right)^2$  (121.10)
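Formula (121.10) can be applied directly: from R₀ pilot replications, take S₀ as the sample standard deviation of the point estimates and solve for the smallest R. A sketch (the pilot values are hypothetical):

```python
from math import ceil
from statistics import stdev

def required_replications(pilot, e, z_half_alpha=1.96):
    """Smallest R with R >= (z_{alpha/2} * S0 / e)^2 (formula 121.10),
    where S0 is the sample standard deviation of the pilot estimates and
    e is the desired half-length of the confidence interval."""
    s0 = stdev(pilot)
    return max(len(pilot), ceil((z_half_alpha * s0 / e) ** 2))

# Hypothetical pilot run of R0 = 5 utilization estimates:
pilot = [0.70, 0.74, 0.71, 0.76, 0.72]
R = required_replications(pilot, e=0.01)
```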
Using formula (121.5), the point estimates of each Processor's utilization over 5 simulations with different random variables are shown in Table 121.4. Clearly, each Processor's utilization remains relatively low, so the three Processors of the first step should be reduced to two and the simulation analyzed again; the resulting data are shown in Table 121.5. For the five simulations of Table 121.5, the data are substituted into formulas (121.6) to (121.10), yielding in turn $\hat{\sigma}^2(\hat{\theta})$, $S_0^2$ and $(z_{0.025} S_0 / e)^2$.
fall in the interval 0.7258 ± 0.0407, and the busy rate of Processor6 will fall in the interval 0.2135 ± 0.0428. Combining the calculation of the third section with the results of the two different models' simulations, two quality engineers are more reasonable in the first step. The efficiency of Processor6 is still low; to further improve the efficiency and benefit of the system, multi-skilled quality engineers should be trained so that the problems of the last two steps can be handled by one quality engineer.
121.6 Conclusion
References
Ai Y et al (2007) Operations research, vol 11. Tsinghua University Press, Beijing, pp 336–337 (in
Chinese)
Bourjij A, Zasadzinski M, Darouach M, Krzakala G, Musset M (1993) On the use of hybrid Petri
Nets for control process safety: application to a steam-boilers network simulator. In: IEEE
international conference on systems, man and cybernetics, no. 2
Chen G, Wu H, Chen Y (2007) Industrial engineering and system simulation, vol 6. Metallurgical
Industry Press, Beijing, pp 79–253 (in Chinese)
Ji C (2003) Development of automotive diagnostic system based on fault tree. Veh Power
Technol 1:52–57 (in Chinese)
Kong F, Dong Y (2001) Failure diagnosis and reasoning based on fault tree knowledge. Automot
Eng 23(3):209–213 (in Chinese)
Lin Z (2003) Theory and application of system simulation. Press by Canghai Bookstore,
Nanchang, Jiangxi, China, p 357 (in Chinese)
Luo Z, Zhu S (2005) A new e-insensitivity function support vector inductive regression algorithm
and after-sales service data model forecast system. Comput Sci 32(8):134–141 (in Chinese)
Ren J, Hao J (2010) Petri network-based modeling analysis. J Xi’an Aerotech College
28(3):50–52 (in Chinese)
Song L, Yao X (2009) Research on probability model of vehicle quality based on the broken-
down number per thousand cars. J Chongqing Technol Business Univ 26(6):543–547 (in
Chinese)
Su C (2011) Modeling and simulation for manufacturing system. Mechanical Industry Press,
Beijing, p 120 (in Chinese)
Su C, Shen G (2007) Development for system reliability modeling and simulation based on
generalized stochastic Petri net (GSPN). Manuf Inf Eng China 36(9):45–48 (in Chinese)
Vinter RA, Liza W, Henry Machael L, et al (2003) CPN tools for editing, simulating, and analysing coloured Petri nets. In: Proceedings of the 24th international conference on applications and theory of Petri nets, Eindhoven, The Netherlands
Winston WL (2004) Operations research introduction to probability models, 4th edn. A Division
of Thomson Learning Asia Pte Ltd, Belmont, pp 333–336
Wu H, Yang D (2007) Hierarchical timed coloured petri-net based approach to analyze and
optimize medical treatment process. J Syst Simul 19(4):1657–1699 (in Chinese)
Xue L, Wei C, Chen Z (2006) Modeling design and simulation of hybrid systems review and
analysis. Comput Simul 23(6):1–5 (in Chinese)
Chapter 122
Modeling and Simulation of Wartime
Casualty Surgical Treatment
Abstract The objective of this paper is to model and simulate the wartime casualty
surgical treatment with a discrete simulation tool (Simio) based on treatment process
analysis and medical data. Firstly, the surgical treatment process is analyzed. Then, a
3D visual simulation model is built with Simio. Seven scenarios with different casualty arrival rates are used to test the surgical capability of a field hospital of the PLA. The results show that two hundred casualties may reach the maximum throughput of a field hospital equipped with one operation table. The modeling and simulation of wartime casualty surgical treatment contributes to obtaining the system performance indicators, and the simulation model developed can support medical resources estimation and allocation optimization.
122.1 Introduction
Warfare has changed significantly in modern times. The range and accuracy of lethal modern weapon systems are far more effective than ever, and the army has transformed into modular units that are smaller, more deployable and more flexible.
required to treat the casualty are available, the casualties are delivered to the OR and then flow to the postoperative ward.
In the OR, the casualties would receive operation disposition based on their
traumatic conditions. The treatment process in the OR could be considered as a
series of treatment tasks connected with each other, which could then be named
operation treatment task sequence (Zhang and Wu 2011a). There are 2 types of
treatment tasks according to the relative order between each other:
(1) Sequential tasks are those performed one after another.
(2) Concurrent tasks are those completed simultaneously.
The operation treatment task sequence is shown in Fig. 122.2. This task
sequence is obtained by literature investigation and expert consultation.
So, a casualty surgical treatment process could be considered as this casualty
flowing through the above operation treatment task sequence and all casualty
surgical treatment processes actually make up this sequence.
[Fig. 122.2 labels: basic consumable preparation; basic OR instruments preparation; infusion of blood; infusion of fluids; patient preparation on the table; …]
The simulation scenarios provided are a typical medical support context in which a
field hospital of the PLA provides emergency treatment to the casualties involved
according to the treatment rules defined by the PLA. In the baseline mode, each
patient arrives randomly with an exponential time between arrivals with a mean of
14 min. They would receive first aid treatment within 10 min after injury and then
be evacuated to the regiment aid station and get to the preoperative room randomly
following a uniform distribution with parameters 0.5–3.5 h. This time interval is defined by the treatment rules of the PLA. There is one operation table equipped with
necessary medical personnel and resources in the OR. In order to test the OR
capability, the casualty arrival rate would successively increase by 25 % in other
scenarios.
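Outside Simio, the same arrival pattern can be sketched generically: exponential interarrival times with a 14-min mean in the baseline, with the arrival rate compounded by 25 % per scenario (so the mean interarrival time is divided by 1.25 each time):

```python
import random

def interarrival_means(base_mean=14.0, scenarios=7, step=0.25):
    """Mean interarrival time (min) per scenario; the rate grows 25 % each scenario."""
    return [base_mean / (1.0 + step) ** i for i in range(scenarios)]

def sample_arrivals(mean_interarrival, horizon, seed=42):
    """Arrival times of a Poisson arrival process over [0, horizon] minutes."""
    rng = random.Random(seed)
    t, times = 0.0, []
    while True:
        t += rng.expovariate(1.0 / mean_interarrival)
        if t > horizon:
            return times
        times.append(t)

means = interarrival_means()
arrivals = sample_arrivals(means[0], horizon=24 * 60)  # one 24-h data-collection day
```

With the baseline mean, a 24-h horizon produces on the order of a hundred arrivals, consistent with the casualty numbers of scenario 1 reported later in Table 122.1.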
The main focus of the modeling and simulation is to evaluate the system's surgical treatment capability. For the system, the average casualty wait length and wait time for operation, the operation time, and the mortality rate must be acceptable under the treatment rules.
rules. Since the model developed is a baseline model, and only the casualty arrival
rate is changed in other scenarios, the same metrics to measure the performance of
system would be used. This allows us to collect similar data in each of the
simulations and compare data obtained from several runs of the simulation. Once
the data are collected, statistical analysis is performed and the results are used in
the analysis of different allocation of the operation room.
The casualty types in this simulation research mostly come from the U.S. army
Deployable Medical System (DEPMEDS) PC Code and are adjusted by the subject
experts of the PLA. These PC codes occur during deployment and combat
operations and range from snake bites, to severe hearing impairment, to more
serious injuries (James et al. 2005; Deployable Medical System (DEPMEDS)
2003). The casualties needing operation treatment involve 87 PC codes. In the simulation, casualties are randomly generated based on an exponential distribution. The casualty cumulative probability distribution, obtained from historical accounts of ground operations and adjusted by factors such as the recency of operations and medical advances, is used by the simulation model to identify a PC Code for each injury event.
The wartime casualty survival probability data are obtained by expert questionnaires. After preliminary analysis, casualties are identified and designated as having a high (H), medium (M) or low (L) risk of mortality, according to the severity of their life-threatening conditions. In addition, the casualty survival probability data are
fitted by the Weibull survival function with MATLAB. Then the survival functions
based on types of medical treatment facility and treatment delays are obtained
(Zhang and Wu 2011; Mitchell et al. 2004).
In a given medical treatment facility, the casualty survival model for a treatment delay can be obtained from the known functions. Suppose a certain type of casualty starts treatment at $c_0$, a time point between $c_1$ and $c_2$ ($c_1 < c_0 < c_2$); then the casualty survival model for treatment delay $c_0$ is:
$S(t)_{c_0} = \Pr[T > t] = \dfrac{c_0 - c_1}{c_2 - c_1}\, \exp\!\left[-(t/a_1)^{b_1}\right] + \dfrac{c_2 - c_0}{c_2 - c_1}\, \exp\!\left[-(t/a_2)^{b_2}\right]$  (122.1)
Using this model and the fitted function parameters, the survival probability of a given type of casualty at any point in time during the treatment process can be obtained.
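A sketch of the interpolated survival model of formula (122.1) follows. The Weibull parameter pairs (a1, b1) and (a2, b2) correspond to the curves fitted at the bracketing delays; the numeric values in the usage line are hypothetical:

```python
from math import exp

def survival(t, c0, c1, c2, a1, b1, a2, b2):
    """Pr[T > t] for treatment delay c0 with c1 < c0 < c2 (formula 122.1):
    a linear interpolation between the two Weibull survival curves whose
    parameters (a1, b1) and (a2, b2) were fitted at the bracketing delays."""
    w = (c0 - c1) / (c2 - c1)                # interpolation weight
    s1 = exp(-((t / a1) ** b1))              # Weibull survival, first parameter pair
    s2 = exp(-((t / a2) ** b2))              # Weibull survival, second parameter pair
    return w * s1 + (1.0 - w) * s2

# Hypothetical parameters: delay bracketed between c1 = 1 h and c2 = 3 h.
p = survival(t=2.0, c0=2.0, c1=1.0, c2=3.0, a1=2.0, b1=1.2, a2=4.0, b2=1.1)
```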
together based on their relative order and compose a casualty treatment sequence.
Actually, each casualty treatment task sequence is a subset of the OR treatment
task sequence. When this casualty arrives at the operating room, he/she would flow
through the OR treatment task sequence. The treatment task sequences are mostly obtained from the U.S. army treatment files; the treatment times, personnel, and necessary equipment and supplies are obtained by consulting experts and researching the literature.
The medical personnel are modeled with the Worker object, medical equipment with the Resource object, and the time consumed by each task with triangular distributions obtained by consulting subject experts. The logic of each treatment task is designed using the graphical process flows. Figure 122.4 depicts typical treatment-task logic. When a casualty flows to this task process, the Search and Decide steps are used to decide whether this task is required by the casualty, from a casualty treatment-task data table described below. The Set Row and the next Decide steps link the task to the required resources in the table. Then the Seize, Delay and Release steps are used together to model the resources being seized, delayed and released (Simio user's manual 2009; Dennis 2009).
There are four types of mortality risk and five intervals of treatment delay. Each type of mortality risk and each interval of treatment delay is distinguished by a Decide step. The survival model is applied after the next Decide step to determine the casualty's survival; if the casualty is still alive, he/she flows to the next treatment process. These logics are shown in Fig. 122.5.
In addition to entering data directly into the modeling objects, a casualty table,
including casualty types, composition of proportions, litter conditions, treatment
chances and priorities, is defined to set all casualties’ basic information, and a
treatment task table, including casualty types, task types, task time and treatment
probabilities, is defined to set all casualties’ treatment information, which is shown
in Fig. 122.6.
1162 K. Zhang et al.
Though the Simio platform has powerful statistical functions and produces most statistical data automatically, the surgical system still needs some special data statistics, so statistic elements are created in Simio to record the treatment data, as shown in Fig. 122.7. In addition, some process logics, together with statistical elements, are created to trace the simulation data.
The 3D casualty, medical personnel, equipment, and operating room objects are
developed by 3D modeling software and imported to create the realistic 3D
casualty treatment model with Simio, which is shown in Fig. 122.8 (Dennis 2009).
Seven scenarios with different casualty arrival rates are built in the experiment window within Simio. The simulation lasts 34 h, and the first 10 h are not
used for collecting data. Scenario 1 is the baseline model, and the arrival rate is increased by 25 % with each successive scenario. We run 100 replications for each scenario, and the results are within the 95 % confidence interval.
All the important performance indicators of the system were obtained; some of the mean values are shown in Table 122.1. Two hundred casualties has long been considered the maximum throughput of the field hospital studied in this paper, and although a long time has passed, the performance indicators still reflect this situation, as Table 122.1 shows.
Table 122.1 Performance indicators of the field hospital with one operation table
Scenario | Casualty arrival number | Mortality rate (%) | Operation number | Wait length for operation | Wait time for operation (h) | Operation time (h)
1 103.46 1.42 8.40 0.25 0.53 1.23
2 128.21 1.48 10.50 0.46 0.74 1.24
3 160.89 1.99 12.94 0.89 1.20 1.20
4 201.07 1.89 14.83 1.37 1.48 1.21
5 249.97 2.60 16.78 2.81 2.25 1.23
6 314.26 3.03 18.26 4.97 3.06 1.25
7 392.91 3.15 18.41 9.48 3.59 1.29
122.4 Conclusion
The objective of this paper is to model and simulate wartime casualty surgical treatment. Firstly, the surgical treatment process is analyzed. Then a 3D visual simulation model is built with the Simio simulation platform, and seven scenarios with different casualty arrival rates are used to test the surgical capability of the medical aid station. The results show that two hundred casualties may reach the maximum throughput of the field hospital equipped with one operation table. The modeling and simulation of wartime casualty surgical treatment contributes to obtaining the system performance indicators, and the simulation model developed can support medical resources estimation and allocation optimization.
References
Dennis Pegden C (2009) Now bring simulation in-house to support good decision making. (2009-
12-19) [2010-02-10]. http://www.simio.com/resources/white-papers/bring-simulation-in-
house/Now-Bring-Simulation-In-house-to-Support-Good-Decision-Making.htm
Dennis Pegden C (2009) Introduction to Simio for beginners. (2009-12-19) [2010-02-10] http://
www.simio.com/resources/white-papers/Introduction-to-Simio/introduction-to-simio-for-
beginners-page-1.htm
Dennis Pegden C (2009) How Simio objects differ from other object-oriented modeling tools.
(2009-12-19) [2010-02-10] http://www.simio.com/resources/white-papers/How-Simio-
Objects-Differ-From-Others/how-simio-objects-differ-from-other-object-oriented-modeling-
tools.htm
Deployable Medical System (DEPMEDS) PC codes (2003), The PCs and their accompanying
treatment briefs are updated on a quarterly basis by the Joint Readiness Clinical Advisory
Board (JRCAB). For the most up-to-date information on PCs and specific treatment briefs,
refer to the JRCAB website at: http://www.armymedicine.army.mil/jrcab/d-prod.htm
Fleet marine force manual (FMFM) (1990) 4–50, Health Service Support. Department of the
Navy, Headquarters United States Marine Corps, Washington
James M, Zouris G, Walker J (2005) Projection of patient condition code distributions based on
mechanism of injury. ADA434281, Naval Health Research Center, San Diego
Jeffrey AJ, Roberts SD (2011) Simulation modeling with SIMIO: a workbook. Smiley Micros,
Pennsylvania, pp 203–222
Mitchell R, Galarneau M, Hancock B, Lowe D (2004) Modeling dynamic casualty mortality
curves in the tactical medical logistics (TML+) planning tool. Naval Health Research Center,
San Diego, p 8
Nuhut O, Sabuncuoglu I (2002) Simulation analysis of army casualty evacuations. Simulation
8(10):612–625
1166 K. Zhang et al.
Pegden D (2008) SIMIO: a new simulation system based on intelligent objects. In: Proceedings of
the 2008 winter simulation conference, Institute of Electrical And Electronics Engineers, New
Jersey, pp 2293–2300
Simio LLC (2009) Simio user's manual. (2009-12-19) [2010-02-10]. http://www.simio.com/
resources.htm
Zhang K, Wu R (2011a) Using Simio for wartime casualty treatment simulation. In: The 3rd
IEEE international symposium on IT in medicine and education, IEEE Press, December,
pp 315–318
Zhang K, Wu R (2011b) Using simio for wartime optimal placement of casualty evacuation
assets. In: The 3rd international symposium on information engineering and electronic
commerce, ASME Press, July, pp 277–280
Zhang K, Wu R (2011c) Research on survival model of casualty in wartime. In: The 3rd IEEE
international symposium on IT in medicine and education, IEEE Press, pp 625–628
Chapter 123
Modeling of Shipboard Aircraft
Operational Support Process Based
on Hierarchical Timed Colored Petri-Net
Keywords Aviation support system · HTCPN · Petri-Net · Operational support ·
Shipboard aircraft
123.1 Introduction
The aircraft carrier is currently the combat platform at sea with the most powerful
combat effectiveness in the world. It plays a significant role due to its unique
characteristics: integrating sea and air routes, combining ships and planes,
controlling the air and the sea, and rapid deployment. The main reason why the
aircraft carrier has become an important force in naval battle and land combat is its unique
The basic elements of the PDM workflow model include the store house, transition,
token, and directional arc (Sun et al. 2011).
(1) Store house: represents a condition, that is, a promoting factor of the process,
and is shown by a circle "s". When the condition is met, the end nodes of the
directional arcs that take this store house as their starting point will be triggered.
Although the two activities are not in a fixed chronological order, the "take off"
activity will be triggered only when both activities are completed. It is shown in
Fig. 123.2.
(c) Condition selecting component
Corresponding to conditional routing, this component defines split activities
that are mutually restraining and exclusive. Such a split activity performs a
"single choice" or "multiple choice" according to the specific execution
situation. The condition selecting component requires two basic workflow
primitives: OR-Split and OR-Join. Condition selection comes in two kinds:
implicit OR-split selection, in which it is not known in advance which branch
will be triggered because the trigger order of the activities determines it; and
explicit OR-split selection, in which the branch to trigger is determined from
the activity properties before the split. The operational support process
mainly adopts explicit OR-split selection logic. In the refueling process,
whether pressure refueling or gravity refueling is performed is determined by
the aircraft model; after refueling is completed, the process moves to the next
step, as shown in Fig. 123.3.
Fig. 123.3 The condition selecting component: prepare refueling, OR-Split into pressure refuel or gravity refuel, OR-Join, refueling complete
(d) Cycle component
The cycle component is used to characterize the repeated execution of a task;
an explicit OR-Split is used in it. For example, in the tractor repair process, if
the repair is successful, the process moves to the next step and the tractor is
used for aircraft transporting; if the repair is unsuccessful, repair continues
until the tractor is available. Tractor maintenance grading and repair
strategies are not considered here; the maintenance activities are discussed as
a single unit. It is shown in Fig. 123.4 (a detection and repair cycle).
Hierarchical timed colored Petri nets (HTCPN) not only extend the basic model
with token colors and activity execution times, but also support hierarchical
modeling of operational support processes, combining data structures with
hierarchical decomposition (Zheng et al. 2011).
The aviation support system is very complicated; a model created with a flat,
traditional Petri net would be huge in scale and contain a large number of nodes.
This would not only make the modeling process complex but also make the
analysis of the model's characteristics difficult. Therefore hierarchical Petri net
models are introduced, that is, each transition in the large model that needs
refinement is replaced by a corresponding subnet. A transition that contains a
subnet is represented by a double box. The design of a hierarchical Petri net
model can be divided into two stages: first, define the tasks at the top level of the
entire workflow structure; second, determine the detailed task descriptions at the
lower level (Zhao et al. 2009).
In the basic Petri net model, a transition is "transient", meaning that its firing
consumes no time. When studying the operational support process, however, time
is a parameter that must be considered, as many quantitative indicators such as
maintainability and supportability are expressed as time values, e.g., the mean
time to repair (MTTR). HTCPN introduces the concept of time and models task
execution times; hence the support HTCPN model can be simulated to obtain the
time performance of shipboard aircraft support, to estimate the support time of
the aviation support system and the utilization rate of the support resources, and
thus to provide a basis for optimizing the support process.
In equipment support work, elements such as support equipment and support
personnel must be considered. HTCPN defines colors for the places and enhances
the expressive power of the arcs, so that different support resources can be
modeled uniformly and large, complex support models can be avoided (Yang et al.
2010; He et al. 2010).
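As a rough sketch of this idea (plain Python, not CPN-Tools or any HTCPN tool; the token colors, transition names, and times are invented for illustration), colored tokens let one net treat aircraft, tractors, and lifts uniformly, while each firing advances a clock by the transition's execution time:

```python
from collections import Counter

class TimedColoredNet:
    """Minimal sketch of a timed colored Petri net firing rule."""

    def __init__(self, marking, durations):
        self.marking = Counter(marking)  # token color -> count
        self.durations = durations       # transition name -> time (min)
        self.clock = 0.0

    def enabled(self, needs):
        return all(self.marking[c] >= n for c, n in needs.items())

    def fire(self, transition, needs, produces):
        """Consume 'needs', emit 'produces', and advance the clock."""
        if not self.enabled(needs):
            return False
        for c, n in needs.items():
            self.marking[c] -= n
        for c, n in produces.items():
            self.marking[c] += n
        self.clock += self.durations[transition]
        return True

# One aircraft, one hangar tractor, one lift (hypothetical marking).
net = TimedColoredNet({"aircraft": 1, "tractor": 1, "lift": 1},
                      {"transport_by_tractor": 8.0, "transport_by_lift": 3.0})
# Transporting by tractor occupies the tractor for 8 min, then releases it.
net.fire("transport_by_tractor",
         needs={"aircraft": 1, "tractor": 1},
         produces={"aircraft_at_lift": 1, "tractor": 1})
net.fire("transport_by_lift",
         needs={"aircraft_at_lift": 1, "lift": 1},
         produces={"aircraft_on_deck": 1, "lift": 1})
print(net.clock)  # total elapsed support time: 11.0 min
```

Because the resource type lives in the token color rather than in a dedicated place per resource, the same firing rule serves every support resource, which is the uniformity the paragraph above refers to.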
1172 T. Wang et al.
Weapons transport
The take-off system mainly includes facilities such as catapults and jet blast
deflectors. Its main task is to provide sufficient power for the shipboard aircraft
to take off smoothly within a short distance.
The task of the aviation support system is triggered by the shipboard aircraft's
mission, and the number of shipboard aircraft can be set flexibly according to the
task. The operational support process from the hangar to the take-off point first
requires transporting the shipboard aircraft from the hangar to the deck support
point, while the ammunition is transported from the ammunition depot to the
same point; deck support is then conducted for the shipboard aircraft, and once
it is complete the aircraft is towed to the take-off point. This process can be
divided into four relatively independent modules, and the operational support
process is sub-divided according to hierarchical Petri net theory (Fig. 123.7).
In the figure, the black token represents a shipboard aircraft, and the red token
represents ammunition. It should be noted that the tokens in the figure are only
meant to express the relationship between support resources and major equipment
vividly, not to show the number of tokens. During the actual modeling process,
the amount of preset resources can be simulated and the allocation of resources
balanced by analyzing the simulation results.
The transportation of shipboard aircraft can be divided into three stages. The first
stage is the transportation of the shipboard aircraft within the hangar, i.e., the
time to move it from the hangar to the lift; the second stage is the time to move it
from the hangar level to the deck, i.e., the transportation by the lift; the third
stage is the transporting time on the deck.
Fig. 123.7 The top-level model of the aviation support system. t1: transport shipboard aircraft
from hangar to deck support point. t2: deck support. t3: weapons transport. t4: tow the
shipboard aircraft to the take-off point
The transporting time here refers to the time from the hangar to the deck support
point; it excludes the time from the support point to the take-off point, as well as
the time from the support point to the landing point and from the landing point
back to the support point. The number of tractors in the hangar, the number of
lifts, the number of deck tractors, and the number of shipboard aircraft to be
transported directly affect the transporting time. The more tractors and lifts there
are, the less time is needed. However, due to space and weight constraints, the
number of tractors and lifts cannot be increased without limit; moreover, excess
equipment would waste resources. Therefore, in order to meet the required
conditions, a reasonable number must be determined.
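This trade-off can be made concrete with a toy calculation (illustrative Python; the aircraft count and per-move time are invented, not data from the paper). With a fixed per-move time and tractors working in parallel, the batch makespan falls as tractors are added, but with diminishing returns:

```python
import math

def transport_makespan(n_aircraft, n_tractors, move_time):
    """Makespan (min) when each tractor moves one aircraft at a time."""
    rounds = math.ceil(n_aircraft / n_tractors)
    return rounds * move_time

# Hypothetical batch: 12 aircraft, 10 min per hangar-to-deck move.
for c in (1, 2, 3, 4, 6):
    print(c, "tractors ->", transport_makespan(12, c, 10.0), "min")
```

Going from one tractor to two halves the makespan, while going from four to six saves only 10 min, which is why a "reasonable number" rather than a maximal one is sought.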
The model of transportation process is shown as Fig. 123.8.
The model also takes failures of the support equipment into account and
incorporates the support equipment maintenance activities into the shipboard
aircraft transporting sub-module as explicit branches. The green tokens in the
figure represent the hangar tractor and deck tractor, while the blue token
represents the lift. It should be noted that, in order to show the utilization of
support resources more clearly, this paper uses only one token per support
device; in the actual situation there would be more standby support equipment.
The ammunition transporting process is similar to the transporting process of
shipboard aircraft and hence is not repeated here. It should be noted that the
quantity of weapons delivered is measured by weight and can be split, whereas a
shipboard aircraft is transported as a whole and cannot be split. This should be
considered when conducting the simulation.
The deck support system includes the jet fuel system, aviation power system, air
supply system, and deck support facilities. Its major functions include pressure
refueling and gravity refueling of shipboard aircraft; support of the preparation
of shipboard aircraft before flight on the deck; and aviation power
Fig. 123.8 The model of transportation process. t5: tie the aircraft to the hangar tractor. t6: untie
the aircraft from the hangar. t7: transport by the tractor. t8: tie to the lift. t9: untie the hangar
tractor. t10: transport by the lift. t11: tie to the deck tractor. t12: untie the lift. t13: transport by the
deck tractor. t14: untie the deck tractor. t15: check the hangar tractor. t16: repair the hangar
tractor. t17: check the lift. t18: repair the lift. t19: check the deck tractor. t20: repair the deck
tractor
supply before the second start; power supply for maintenance of the flight deck
and of shipboard aircraft in the hangar; aviation power supply for ship aviation
maintenance and related cabin maintenance; supplying aviation power on the
flight deck to start the shipboard aircraft; and centralized storage, management,
and charge/discharge maintenance of aviation batteries. In addition, it is
responsible for pre-flight preparation and maintenance of the required gases,
including filling oxygen and nitrogen for the shipboard aircraft, inflating the
shipboard aircraft's wheels, and cooling the electronic equipment of the
shipboard aircraft when the power is on. It also covers routine maintenance of
the deck support equipment and the washing, hydraulic maintenance, safe
grounding, and snow removal of the flight deck.
Supposing that all the processes can be conducted at the same time except oxygen
filling, and that every support site is equipped with the same set of jet fuel
equipment, the deck support model can be established as shown in Fig. 123.9.
Petri nets offer powerful analysis techniques. The behavior, state, and
performance of a workflow can be analyzed through the properties of the Petri
net (such as reachability, safety, liveness, etc.); moreover, Petri net analysis
techniques can be used to calculate various performance indicators of the model,
such as response time, latency, and resource share.
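For small nets the reachability analysis mentioned here can be sketched directly; the following Python breadth-first search over markings (with a toy two-place net, invented for illustration) enumerates the reachability set, from which properties such as boundedness (safety) can be read off:

```python
from collections import deque

def reachable_markings(initial, transitions):
    """Enumerate all markings reachable from `initial`.

    initial: tuple of token counts, one entry per place.
    transitions: list of (consume, produce) tuples of per-place counts.
    """
    seen = {initial}
    queue = deque([initial])
    while queue:
        m = queue.popleft()
        for consume, produce in transitions:
            if all(m[i] >= consume[i] for i in range(len(m))):
                nxt = tuple(m[i] - consume[i] + produce[i]
                            for i in range(len(m)))
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
    return seen

# Toy net: t1 moves the token from p1 to p2, t2 moves it back.
marks = reachable_markings((1, 0), [((1, 0), (0, 1)), ((0, 1), (1, 0))])
print(sorted(marks))                    # [(0, 1), (1, 0)]
print(all(max(m) <= 1 for m in marks))  # the net is safe (1-bounded)
```

For realistic support models the state space grows quickly, which is why tool support such as CPN-Tools, discussed next, is needed in practice.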
CPN-Tools is a Petri net modeling and simulation platform developed by the
Petri net research group at the University of Aarhus, Denmark. It features fast
simulation and a powerful net grammar checker. It supports the Linux and
Windows operating systems, supports hierarchical modeling and analysis of
timed colored Petri nets, and supports secondary development.
After the Petri net model is built, the characteristics of the system can be
analyzed to check those of the actual system. CPN-Tools supports state equation
analysis and timed simulation. By assigning values to the model's transitions,
arcs, and places, the overall situation of the
Fig. 123.9 The model of deck support process. t21: repair the deck support. t22: refuel. t23:
charge. t24: load ammunition. t25: oxygenate. t26: complete deck support. t27: check the
refueling equipment. t28: repair the refueling equipment. t29: check the charging equipment.
t30: repair the charging equipment. t31: check the loading equipment. t32: repair the loading
equipment. t33: check the oxygenating equipment. t34: repair the oxygenating equipment
operational support can be learnt clearly; parameters such as the average support
delay time and the utilization of the support resources can be obtained, and the
operational support time of the shipboard aircraft can be analyzed from the
simulation results so as to optimize the support resources (Song et al. 2007).
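As a rough illustration of how such indicators come out of a simulation trace (plain Python, not CPN-Tools; the busy intervals are invented), a resource's utilization is its total busy time divided by the simulated horizon:

```python
def utilization(busy_intervals, horizon):
    """Fraction of the horizon during which a resource is busy.

    busy_intervals: list of (start, end) times, assumed non-overlapping.
    """
    busy = sum(end - start for start, end in busy_intervals)
    return busy / horizon

# Hypothetical busy intervals (min) of a deck tractor over a 120 min run.
tractor_busy = [(0, 15), (30, 55), (70, 100)]
print(round(utilization(tractor_busy, 120.0), 3))  # 0.583
```

A low value computed this way would point at an over-provisioned resource, a value near one at a candidate bottleneck.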
123.6 Conclusion
Based on the operational support features of shipboard aircraft, this paper uses
hierarchical timed colored Petri nets (HTCPN) to establish a process model of
shipboard aircraft support, achieving simple hierarchical modeling. The approach
makes up for the lack of time performance analysis in basic Petri nets,
distinguishes between support resources, provides a reference for research on the
support process of the aviation support system, and plays a significant role in the
task persistence of the aviation support system.
References
He J-B, Su Q-X, Gu H-Q (2010) Virtual maintenance process model based on extended Petri net
(in Chinese). Comput Simul 27(3):254–257
Song G-M, Wang D-S, Song J-S (2007) The equipment maintenance support resource
management model based on timed colored Petri net (in Chinese). J Syst Simul
19(1):233–236
Song K (2008) Research on modeling and simulation technology of integrated logistics support
system based on Petri net (in Chinese). Graduate School of National University of Defense
Technology, Changsha
Sun B, Wang Y, Guo Y (2011) Process modeling and analysis of maintenance support command
based on Petri net (in Chinese). Command Control Simul 33(1):113–117
Wang K, Feng J-L, Zhang H-X (2005) Modeling based on Petri nets about operation maintenance
and support of military aircraft (in Chinese). J Acad Equip Command Technol 16(6):15–17
Yang C, Yang J, Hu T (2010) Method of maintainability prediction of complex equipment based
on CPN simulation (in Chinese). J Eng Des 17(1):25–29
Yao X-L, Feng L-H, Zhang A-M (2009) Systems vulnerability assessment of aircraft guarantee
system based on improved FPN (in Chinese). Electr Mach Control 13(3):464–470
Zhang W (2010) Research of analysis approach of the mission sustainability on the carrier based
aircraft support system (in Chinese). Beihang University, Beijing
Zhao X-M, Gao X-J, Hang G-D (2009) Modeling for communication equipment maintenance
support system based on HTPN (in Chinese). J Jilin Univ Inf Sci Ed 27(4):412–417
Zheng Z, Xu T, Wang X (2011) A method of determining equipment maintenance resources
based on colored timed Petri net (in Chinese). Ship Sci Technol 33(2):131–133
Chapter 124
Modeling and Simulation
of a Just-in-Time Flexible
Manufacturing System Using Petri Nets
124.1 Introduction
not only embodies the JIT ideology but also enhances the flexibility of production
systems. However, a flexible manufacturing system is an extremely complex
discrete event dynamic system, and it is difficult to describe with traditional
mathematical models.
Petri nets, founded on a rigorous mathematical theory, have strong capabilities
for describing parallel, synchronous, and conflicting relations and play an
important role in system modeling and simulation. They have also been applied
to the modeling of flexible manufacturing systems (Colombo et al. 1997; Mao
and Han 2010). On the other hand, as manufacturing systems become
increasingly complex, especially in today's challenging environments, auxiliary
analysis software becomes a prerequisite for the application of Petri nets.
ExSpect (Voorhoeve 1998), the Executable Specification Tool, is a powerful
modeling and analysis language and software tool based on timed colored Petri
nets. It is widely used in transportation systems, workflow modeling, and
maintenance support systems (Qu et al. 2009; van der Aalst and Waltmans 1991;
Vanit-Anunchai 2010; University of Aarhus 2005).
This paper addresses the modeling and simulation of the flexible manufacturing
system in a Just-in-Time environment based on Petri nets and ExSpect, a
common simulation software platform. A typical flexible manufacturing system
is used as the study case, and its Petri net model with kanbans is built. Since
bottleneck and hunger resources commonly occur in production and often harm
the production process of a JIT flexible manufacturing system (Zhang and Wu
2009), particular attention is paid to bottleneck identification and resolution
using the proposed model and simulation mechanism. Machine utilization, under
the premise of meeting customer needs just in time, is the main measure of the
problem, while the trigger priority and the kanban numbers are the two main
adjustment levers. At the end of the paper, a large number of numerical
simulations are investigated to verify the effectiveness of the proposed model,
and further detailed discussions are presented.
One of the major elements of the JIT philosophy is the kanban system (Al-Tahat et al.
2009). The kanban system is an information system that controls the production
quantities in every process. Figure 124.1 shows a Petri net model of the single-
kanban system (Di Mascolo et al. 1991) and describes the production process of
three adjacent processing units. In a single-kanban system, a production line can
be divided into several stages, with a fixed number of kanbans at every stage. The
production of a part cannot start until a kanban indicates that this part is needed
by the following downstream station (Matzka et al. 2012).
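The pull logic just described can be sketched in a few lines of Python (an illustrative toy, not the ExSpect model; the kanban counts are invented): each stage holds a buffer capped by its kanban count, a withdrawal frees a card, and the freed card authorizes replenishment that in turn pulls on the upstream stage.

```python
class KanbanStage:
    """One stage of a single-kanban line: a buffer capped by its card count."""

    def __init__(self, kanbans):
        self.kanbans = kanbans   # total cards = maximum buffer content
        self.buffer = kanbans    # start full: every card sits on a part

    def withdraw(self):
        """Downstream removes one part, freeing one kanban card."""
        if self.buffer == 0:
            return False         # stockout: the demand must wait
        self.buffer -= 1
        return True

    def produce(self, upstream=None):
        """Produce one part, allowed only if a card is free."""
        if self.buffer >= self.kanbans:
            return False         # no free kanban: production is blocked
        if upstream is not None and not upstream.withdraw():
            return False         # starved by the upstream stage
        self.buffer += 1
        return True

# Three stages with hypothetical kanban counts 2, 3, and 2.
s1, s2, s3 = KanbanStage(2), KanbanStage(3), KanbanStage(2)
s3.withdraw()            # a customer demand arrives at the final stage
s3.produce(upstream=s2)  # the freed card triggers replenishment from s2
s2.produce(upstream=s1)  # which in turn pulls on s1
print(s1.buffer, s2.buffer, s3.buffer)  # 1 3 2
```

The chain of `produce(upstream=...)` calls is exactly the backward propagation of demand that distinguishes a pull system from a push system.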
Fig. 124.1 A Petri net model of the single-kanban system
This paper takes the typical JIT flexible manufacturing system given in (Raju et al.
1997) as a case for addressing the modeling and simulation problem. The JIT
flexible manufacturing system consists of five machining centers (M1 to M5) and
a load/unload station (LUS), connected by an automated guided vehicle (AGV)
network. It caters to a variety of part types; in this paper, three part types are
processed in the JIT flexible manufacturing system.
1182 Y. Cui and Y. Wang
resource places to yield the system net. The interpretations of the places and
transitions are given in (Raju et al. 1997); Fig. 124.2 shows the model.
Here main elements in the system are defined as:
px1, px2, px3: num, //Input requirements
p29, p45, p63: num, //Input of raw materials
p40, p58, p76: num, //Output products
p1, p2, p3, p4, p5: num, //Machines
p6: num, //Fixture
p7: num, //AGV
This Petri net model contains four subsystems, named tx, part1, part2, and part3,
which represent the user demand subsystem and the part1, part2, and part3
processing subsystems, respectively. Among them, tx randomly generates user
demands; a Poisson arrival pattern with a different mean arrival time for each
part variety is considered in the present study. Each part has 10 demands in this
paper. As an example of the system modeling, the tx model is shown in Fig. 124.3.
This paper takes part1 as an example to introduce the processing subsystem; its
model is shown in Fig. 124.4. Being demand-driven, the JIT flexible
manufacturing system starts functioning with the arrival of a demand. When a
demand arrives, the system delivers the part to the user directly from the output
buffer; the system then begins to produce the same number of semi-finished or
finished products to replenish the output buffer.
This paper uses ExSpect as the simulation platform for the JIT flexible
manufacturing system to illustrate the performance of the proposed modeling
mechanism. Since bottleneck and hunger resources commonly occur in
production and often harm the production process of a JIT flexible
manufacturing system, particular attention is paid to bottleneck identification and
resolution using the proposed model and simulation mechanism. Machine
utilization, under the premise of meeting customer needs just in time, is the main
measure of the problem, while the trigger priority and the kanban numbers are
the two main adjustment levers. The simulation runs by concurrent execution of
the system net. A large number of simulations have been done, and the result
data are recorded.
The initial conditions of the system are as follows: the number of tokens in each
resource place is one, the number of tokens in each output buffer is one, the
kanban number is zero, the raw material is infinite, and the average arrival times
of the three part-type demands are 10, 12, and 15 s, respectively.
The simulation data include the processing times of the three parts, the
difference in takt time, the total time, the machine utilizations, and the average
machine utilization.
production, it must eliminate bottleneck and hunger issues. This paper therefore
defines bottleneck resources and hunger resources as follows: bottleneck
resources are machine resources whose utilization exceeds the average machine
utilization by more than 10 %, and hunger resources are machine resources
whose utilization falls more than 10 % below the average machine utilization.
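Read as "more than 10 percentage points away from the average" (one plausible interpretation, assumed here), the definitions translate into a small classification routine; the utilization figures below are hypothetical:

```python
def classify(utilization, band=0.10):
    """Split resources into bottleneck and hunger classes.

    A resource is a bottleneck if its utilization exceeds the average by
    more than `band`, and a hunger resource if it falls more than `band`
    below the average (band read as percentage points, an assumption).
    """
    avg = sum(utilization.values()) / len(utilization)
    bottleneck = sorted(r for r, u in utilization.items() if u > avg + band)
    hunger = sorted(r for r, u in utilization.items() if u < avg - band)
    return avg, bottleneck, hunger

# Hypothetical machine utilizations for five machining centers.
avg, hot, cold = classify({"M1": 0.61, "M2": 0.47, "M3": 0.33,
                           "M4": 0.85, "M5": 0.49})
print(round(avg, 2), hot, cold)  # 0.55 ['M4'] ['M3']
```

Such a routine applied after each simulation run mirrors the bottleneck/hunger screening performed in the experiments below.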
The simulation results are shown in Table 124.1, and the detailed analysis is
given below.
(1) Simulation 1. The simulation is run under the initial conditions. The data
show that the difference in takt time is too large, which means that production
synchronization is too weak. Comparing the machine utilizations with the
average machine utilization identifies P3 as a hunger resource and P4 and P7
as bottleneck resources.
(2) Simulation 2. In the JIT flexible manufacturing system, resources are divided
into two classes: fixed resources (P1–P5) and variable resources (P6, P7).
Because the former are machine resources with high cost, their number cannot
be increased arbitrarily, whereas the latter are cheap and their quantity may be
increased appropriately. This paper adds one token to P7 to eliminate the
influence of bottleneck P7. The simulation results show that the difference in
takt time decreases and bottleneck P7 is eliminated, which means that
production synchronization is improved. However, bottleneck resource P4
and hunger resource P3 remain.
(3) Simulation 3. In the JIT flexible manufacturing system, priority is divided into
two kinds: resource priority and processing subsystem priority. The
utilizations from simulation 1 are used to set the resource priorities (P1–P7):
0.61, 0.47, 0.33, 0.85, 0.49, 0.27, and 0.79. The three processing subsystem
priorities are set by calculating the ratio of each part's production time to the
total time: 0.91, 0.48, and 0.57. Resource priorities are set on the
corresponding transitions, while processing subsystem priorities are set on all
transitions. When both kinds of priority fall on the same transition, they are
added together. The greater the priority, the earlier the transition is fired. The
results show that the differences in takt time decrease and the utilizations
increase, but bottleneck resources P2 and P4 and hunger resource P3 remain.
(4) Simulation 4. One kanban is added to P3. The results show that
synchronization continues to strengthen; analysis of the data shows that the
bottleneck effect has been eliminated, but P3 remains a hunger resource.
(5) Simulation 5. Two kanbans are added to P3. The results show that
synchronization is at its best and hunger resource P3 has been eliminated, but
P2 now appears as a hunger resource.
(6) Simulation 6. With the above conditions unchanged, one kanban is added to
P2. The results show that synchronization is somewhat weakened, but the
bottleneck and hunger problems are basically eliminated.
124.5 Conclusion
The main trend in system simulation is the integration of modeling and
simulation. Petri nets provide a convenient method for modeling and simulating
flexible manufacturing systems. This paper presents a modeling and simulation
mechanism for the flexible manufacturing system in a Just-in-Time environment.
By virtue of the strong modeling capabilities of timed Petri nets, the model of the
JIT flexible manufacturing system describes the complex production process
completely. Supported by the ExSpect environment, simulation and data analysis
can identify bottleneck resources and hunger resources. By setting the system
priorities and the number of kanbans, the bottleneck and hunger facilities are
dealt with, and the performance of the system is improved through better
machine utilization and takt time in the manufacturing process. The
manufacturing process can therefore run in a smooth and orderly mode while
meeting customer needs in a just-in-time manner.
Acknowledgments This research work is partly supported by the Scientific Research Fund
of the Liaoning Education Department (LS2010112).
References
Al-Tahat MD, Dalalah D, Barghash MA (2009) Dynamic programming model for multi-stage
single-product Kanban-controlled serial production line. J Intell Manuf 23:37–48
Araz ÖU, Eski Ö, Araz C (2006) A multi-criteria decision making procedure based on neural
networks for Kanban allocation. Springer, Berlin, pp 898–905
Colombo AW, Carelli R, Kuchen B (1997) A temporised Petri net approach for design, modelling
and analysis of flexible production systems. Adv Manuf Technol 13:214–226
Di Mascolo M, Frein Y, Dallery Y, David R (1991) A unified modeling of Kanban systems using
Petri nets. Int J Flexible Manuf Syst 3:275–307
Du X (2010) Development of flexible manufacturing system (FMS). Sci Technol Assoc Forum
5:35 (In Chinese)
Mao Y, Han W-G (2010) Research and implementation of FMS scheduling based on Petri nets.
J Chin Comput Syst 31(5):1001–1005 (In Chinese)
Matzka J, Di Mascolo M, Furmans K (2012) Buffer sizing of a heijunka Kanban system. J Intell
Manuf 23(1):49–60
Qu C, Zhang L, Yu Y, Liang W (2009) Development of material maintenance organization
modeling and simulation environment based on ExSpect domain library. J Syst Simulat
21(9):2772–2775 (In Chinese)
Raju KR, Reddy KRB, Chetty OVK (1997) Modelling and simulation of just-in-time flexible
systems. Sadhana 22(1):101–120
van der Aalst WMP, Waltmans AW (1991) Modelling logistic systems with ExSpect. Eindhoven
University of Technology, The Netherlands
Vanit-Anunchai S (2010) Modelling railway interlocking tables using coloured Petri nets. Coord
Model Lang, pp 137–151
Voorhoeve M (1998) ExSpect language tutorial. Eindhoven University of Technology,
Eindhoven
University of Aarhus (2005) Sixth workshop and tutorial on practical use of colored Petri nets and
the CPN tools. University of Aarhus, Aarhus
Zhang R, Wu C (2009) Bottleneck identification procedures for the job shop scheduling problem
with applications to genetic algorithms. Adv Manuf Technol 42:1153–1164 (In Chinese)
Zhang X, Li P, Yan C (2012) Shallow discussion just-in-time (JIT) production mode. Guide Bus
3:257 (In Chinese)
Chapter 125
Numerical Simulation of External-
Compression Supersonic Inlet
Flow Fields
Abstract In this paper, the CFD method is used to simulate the 2D flow field of a
certain external-compression supersonic inlet. The paper describes the methods
of mesh generation, the determination of boundary conditions, and the
convergence techniques for the governing equations, and analyzes the simulation
results. The numerical results match the theory well.
125.1 Introduction
Fig. 125.3 Contours of the critical operating mode. a Contours of Mach number. b Contours of
static pressure (Unit of pressure in atm)
When the flight Mach number M is constant, the flow state of the inlet changes with the flow capacity at the inlet exit. The inlet flow fields in the critical, supercritical, and subcritical operating modes have been simulated at the Mach number of the design operating mode. The boundary conditions of the critical operating mode are as follows: flight altitude H = 11,000 m, freestream Mach number M = 2.6, angle of attack a = 0, and exit back-pressure P = 2.78 atm. Figure 125.3 shows the calculation result.
In the critical operating mode, the two oblique shock waves and the normal shock wave intersect exactly at the leading edge of the outer wall. After the first oblique shock the Mach number is reduced to 1.7, and after the second oblique shock it decreases to 1.3, so the flow is still supersonic; after passing through the normal shock, M decreases to 0.93 and the flow becomes subsonic. The static pressure jumps across every shock; inside the duct the pressure increases and the velocity decreases.
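The jump across the terminal normal shock can be checked against the ideal normal-shock relations for a calorically perfect gas (a minimal sketch, assuming γ = 1.4; the simulated downstream Mach number of 0.93 differs from the inviscid prediction because of boundary-layer effects and how the downstream field is sampled):

```python
import math

GAMMA = 1.4  # ratio of specific heats for air

def normal_shock_mach(m1, gamma=GAMMA):
    """Downstream Mach number behind a normal shock (ideal gas relations)."""
    num = 1.0 + 0.5 * (gamma - 1.0) * m1 ** 2
    den = gamma * m1 ** 2 - 0.5 * (gamma - 1.0)
    return math.sqrt(num / den)

def normal_shock_pressure_ratio(m1, gamma=GAMMA):
    """Static pressure ratio p2/p1 across a normal shock."""
    return 1.0 + 2.0 * gamma / (gamma + 1.0) * (m1 ** 2 - 1.0)

# Flow enters the terminal normal shock at M = 1.3 (value from the simulation)
print(f"M2 = {normal_shock_mach(1.3):.3f}")
print(f"p2/p1 = {normal_shock_pressure_ratio(1.3):.3f}")
```

For M1 = 1.3 the ideal relations give M2 ≈ 0.79 and a static pressure ratio of about 1.8, consistent with the pressure jump the text describes across each shock.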
The intake conditions of the supercritical and subcritical modes are the same as those of the critical operating mode; the calculation results of the critical mode are used to extrapolate the other flow fields. Figures 125.4 and 125.5 show the numerical results.
As shown in Fig. 125.4, the normal shock wave moves into the duct in the supercritical operating mode; the flow in the initial segment is still supersonic, and after the normal shock the pressure increases and the velocity decreases. As shown in Fig. 125.5, the oblique shock waves and the normal shock wave intersect ahead of the entrance in the subcritical operating mode. Low-energy flow pours into the duct and increases the total pressure loss; in serious cases this may lead to an unstable operating mode.

Fig. 125.5 Contours of the subcritical operating mode. a Contours of Mach number. b Contours of static pressure (Unit of pressure in atm)

The total pressure recovery coefficients under the three modes are as follows: 0.849 in the critical operating mode, 0.741 in the supercritical operating mode, and 0.827 in the subcritical operating mode. As far as total pressure loss is concerned, the most favorable location of the normal shock is exactly at the leading edge of the entrance.
125.6 Conclusions
The finite volume method is employed to solve the N–S equations. Combined with wall functions, the RNG k–ε turbulence model with an eddy viscosity correction is adopted. Numerical techniques such as FMG initialization, extrapolation, and reasonable
References
Hong-xiu Wang
126.1 Introduction
In collaborative enterprise modeling, the model is completed by more than one person in the project team; each member applies his or her own terms when creating a model instance, which results in semantic conflicts when the partial models are merged into the overall model. The main problems are: first, different terms are used to describe the same entity; second, the same term is used to describe different content; and third, processes and activities are defined at different granularities.
For this type of semantic heterogeneity, related research has focused on building a unified dictionary based on meta-data (Castano et al. 2005; Missikof and Schiappelli 2003), but has paid less attention to the essence of the information
H. Wang (&)
Department of Industrial Engineering, Tianjin Polytechnic University, Tianjin, China
e-mail: [email protected]
semantics and therefore cannot fundamentally solve the problem (Cui et al. 2001; Mike et al. 1998). On the basis of research at home and abroad, and proceeding from the need to resolve the semantic heterogeneity of shared models in collaborative modeling, this paper proposes an enterprise-ontology-based concept constraint to solve the consistency problem in enterprise modeling.
Ontology enables effective semantic understanding and communication between people or between application systems (Horrocks et al. 2003; Pulido et al. 2006). In engineering applications, an ontology can support semantic interoperability: it provides a mechanism for describing and explaining the objective world. Semantic interoperability requires that data be easy to understand and that mappings between known and unknown data be easy to define (Athena 2004; Berre et al. 2007).
Generally, an enterprise model is composed of multiple views, and its structure is complex. To achieve the integration of partial models, we first assume that two conditions have been met: first, the model has been divided into views; second, during the merger it has been determined which parts of the partial models belong to the same view and can be combined into that view's upper-level model.

On the basis of the formal definition of the enterprise model, and in order to calculate the similarity between concepts comprehensively and accurately, the similarity is calculated based on the concept name, the concept attributes, and the sub-concept set, respectively. Finally, the similarities are merged with weights.
(1) Calculation of concept name similarity. Assume two concepts A and B; the similarity of their names is calculated as:

$$\mathrm{sim}_{name}(A_{name}, B_{name}) = \frac{N(\text{longest common substring of } A_{name} \text{ and } B_{name})}{N(A_{name}) + N(B_{name})} \qquad (126.1)$$

If a concept has aliases, the name similarity is computed not only for the concept names but also for the aliases. Applying formula (126.1) to every name/alias pair, the final name similarity is

$$\mathrm{Sim}_{nameZ} = \sum_{j=1}^{m+1}\sum_{i=1}^{n+1} w_{ij}\,\mathrm{Sim}_{name}(A_i, B_j), \quad n, m \ge 0 \qquad (126.2)$$

where $\sum_{j=1}^{m+1}\sum_{i=1}^{n+1} w_{ij} = 1$, n is the number of aliases of concept A, and m is the number of aliases of concept B. When n = 0 and m = 0, $\mathrm{Sim}_{nameZ} = \mathrm{Sim}_{name}$.
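Formulas (126.1) and (126.2) can be sketched as follows; `lcs_len` is an illustrative helper (not from the paper) computing the longest-common-substring length by dynamic programming, and uniform weights are assumed for (126.2). Note that with this normalization two identical names score 0.5, the maximum; the excerpt does not say whether the numerator carries a factor of 2.

```python
def lcs_len(a: str, b: str) -> int:
    """Length of the longest common substring of a and b (dynamic programming)."""
    best = 0
    prev = [0] * (len(b) + 1)  # common-suffix lengths for the previous row of the DP table
    for i in range(1, len(a) + 1):
        cur = [0] * (len(b) + 1)
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                cur[j] = prev[j - 1] + 1
                best = max(best, cur[j])
        prev = cur
    return best

def sim_name(a: str, b: str) -> float:
    """Eq. (126.1): longest-common-substring length over total name length."""
    return lcs_len(a, b) / (len(a) + len(b))

def sim_name_z(names_a, names_b) -> float:
    """Eq. (126.2): weighted sum over all name/alias pairs, here with
    uniform weights w_ij = 1 / ((n + 1)(m + 1))."""
    pairs = [(x, y) for x in names_a for y in names_b]
    return sum(sim_name(x, y) for x, y in pairs) / len(pairs)
```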
(2) Calculation of similarity based on concept attributes

The theoretical basis for calculating concept similarity from attributes is: if the attributes of two concepts are the same, the two concepts are the same; if two concepts have similar attributes, the two concepts are similar. Each concept in the ontology is described and constrained by a set of attributes, defined as follows.

Definition 3 Let A = {A1[V1], A2[V2], …, An[Vn]}; A is a set of attributes, Ai is an attribute name, and Vi is the range of Ai. Under this definition the attributes are examined at two levels, the attribute-set level and the attribute-value level, and the attribute similarity is computed in two corresponding parts. Let C1 and C2 be the attribute sets associated with objects o1 and o2. The similarity of the attribute sets is:

$$\mathrm{Sim}_{attrS} = \frac{1}{|\,dist(o_1,o_2)-1\,|}\cdot\frac{|C_1 \cap C_2|}{|C_1 \cap C_2| + \alpha|C_1 - C_2| + (1-\alpha)|C_2 - C_1|} \qquad (126.3)$$
The instances of two objects may take different values on a common attribute, so the agreement of the values on the common attributes must also be examined. Let $A_i \in C_1 \cap C_2$, let $A_i(o)[v]$ denote that instance $o$ takes value $v$ on attribute $A_i$, and let $Low(A_i)$ and $High(A_i)$ be the lower and upper bounds of the statistical range of the $A_i$ values. The similarity of the attribute values is:

$$\mathrm{Sim}_{attrV} = \prod_{i=1}^{|C_1 \cap C_2|}\left(1 - \frac{|A_i(o_1)[v_1] - A_i(o_2)[v_2]|}{|Low(A_i) - High(A_i) + 1|}\right) \qquad (126.4)$$
The specific meaning of the statistical range depends on the data type of Ai. For a numerical data type, the difference between the maximum and minimum of the actual attribute values can be used. A Boolean data type is processed as 0/1 values. For a string type, if the attribute values of the two instances are the same the similarity is 1, otherwise 0.
Finally, the similarity of two instances with respect to their attributes combines these two aspects:

$$\mathrm{Sim}_{attribute} = \mathrm{Sim}_{attrS} \times \mathrm{Sim}_{attrV} \qquad (126.5)$$
In addition, a concept may have many attributes, and each attribute describes the concept to a different degree. If every attribute were involved, the amount of computation would increase greatly; therefore, when the attribute similarity is calculated, the attributes are classified first and the calculation focuses on the business attributes.
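Eqs. (126.3)–(126.5) can be sketched as below. The distance factor of Eq. (126.3) is left out, since the instance distance dist is not defined in this excerpt; the set part follows the Tversky-style form, and the value similarity is shown for numeric attributes only, with the range guard written as |Low − High| + 1 (the exact placement of the +1 in the excerpt is ambiguous). All function names are illustrative.

```python
def sim_attr_set(c1: set, c2: set, alpha: float = 0.5) -> float:
    """Set part of Eq. (126.3): asymmetric (Tversky-style) overlap of attribute sets."""
    common = len(c1 & c2)
    if common == 0:
        return 0.0
    return common / (common + alpha * len(c1 - c2) + (1 - alpha) * len(c2 - c1))

def sim_attr_val(o1: dict, o2: dict, ranges: dict) -> float:
    """Eq. (126.4): product over common attributes of the value agreement.
    o1, o2 map attribute name -> numeric value; ranges[a] = (low, high)."""
    result = 1.0
    for a in o1.keys() & o2.keys():
        low, high = ranges[a]
        # +1 guards against a zero-width statistical range
        result *= 1.0 - abs(o1[a] - o2[a]) / (abs(low - high) + 1)
    return result

def sim_attribute(c1, c2, o1, o2, ranges, alpha=0.5):
    """Eq. (126.5): combine set-level and value-level similarity."""
    return sim_attr_set(c1, c2, alpha) * sim_attr_val(o1, o2, ranges)
```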
(3) Calculation of similarity based on the sub-concept set

In the ontology, the meaning of a concept can be composed of the meanings of its direct sub-concepts; the combination of all sub-concepts describes the meaning of the concept. Thus, the similarity between two upper-level concepts can be obtained by calculating the similarity between their sub-concepts. This method is flexible and
126 Ontology-Based Multi-Enterprise Heterogeneous Model 1199
extensible. Let A and B be two upper-level concepts in the ontology; the similarity between A and B is computed using the following formula:

$$\mathrm{Sim}_{sub}(A,B) = \frac{\sum_{a_i \in A} \max_{b_j \in B} S(a_i, b_j) + \sum_{b_j \in B} \max_{a_i \in A} S(b_j, a_i)}{N(A) + N(B)} \qquad (126.6)$$
N(A) denotes the number of sub-concepts of A, and N(B) the number of sub-concepts of B. S(a, b) is calculated using an instance-based method, formulated as:

$$\mathrm{Sim}(A,B) = \frac{P(A \cap B)}{P(A \cup B)} = \frac{P(A,B)}{P(A,\bar{B}) + P(\bar{A},B) + P(A,B)} \qquad (126.7)$$

where P(A, B) denotes the probability that a concept selected at random from the ontology is a sub-concept of both A and B.
$$P(A,B) = \frac{N(U_1^{A,B}) + N(U_2^{A,B})}{N(U_1) + N(U_2)} \qquad (126.8)$$

$U_i$ denotes the set of underlying concepts in ontology $i$, and $N(U_i)$ the number of concepts in $U_i$. $N(U_1^{A,B})$ is the number of concepts in ontology 1 that are sub-concepts of both A and B, and $N(U_2^{A,B})$ is the corresponding number in ontology 2. At this point, the similarity of A and B is obtained.
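Eq. (126.6) matches each sub-concept with its best counterpart in the other set; a sketch taking the pairwise score S as a parameter, with Eq. (126.7) reduced to a Jaccard ratio over the instance sets (helper names are illustrative):

```python
def sim_sub(subs_a, subs_b, score) -> float:
    """Eq. (126.6): bidirectional best-match similarity of two sub-concept sets.
    score(a, b) is the pairwise sub-concept similarity, e.g. Eq. (126.7)."""
    if not subs_a or not subs_b:
        return 0.0
    fwd = sum(max(score(a, b) for b in subs_b) for a in subs_a)
    bwd = sum(max(score(b, a) for a in subs_a) for b in subs_b)
    return (fwd + bwd) / (len(subs_a) + len(subs_b))

def jaccard_instances(inst_a: set, inst_b: set) -> float:
    """Eq. (126.7) as a Jaccard ratio over the concepts classified under A and B."""
    union = inst_a | inst_b
    return len(inst_a & inst_b) / len(union) if union else 0.0
```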
(4) Comprehensive computation of similarity

The three kinds of similarity are combined; the formula for the final comprehensive similarity is:

$$\mathrm{Sim}(A,B) = w_{name}\,\mathrm{Sim}_{nameZ}(A,B) + w_{attribute}\,\mathrm{Sim}_{attribute}(A,B) + w_{sub}\,\mathrm{Sim}_{sub}(A,B)$$

where $w_{name} + w_{attribute} + w_{sub} = 1$.
A threshold is set for each of the above four kinds of similarity; the threshold is usually determined by experts or analysts. When a calculated similarity exceeds its threshold, the corresponding relation is called name similarity, attribute similarity, subset similarity, or comprehensive similarity. The model merging rules are defined on these four similarity relations, and the overall model is then generated according to the rules.
Rule 1: if two model instances are comprehensively similar, one is kept and the other is deleted in the model merging.
Rule 2: if two model instances are name-similar with similarity less than 1, but neither their attributes nor their subsets are similar, both models are kept in the model merging.

Rule 3: if the name similarity of two model instances equals 1, but neither their attributes nor their subsets are similar, both models are kept in the model merging, and the name of one model is modified.

Rule 4: if two models are name-similar and attribute-similar, but their subsets are not similar, both models are kept in the model merging.

Rule 5: if two models are name-similar and subset-similar, but their attributes are not similar, both models are kept in the model merging.

Rule 6: if two models are attribute-similar and subset-similar, but their names are not similar, one model is kept in the model merging.

Rule 7: if two models are attribute-similar, but neither their names nor their subsets are similar, both models are kept in the model merging.

Rule 8: if two models are subset-similar, but neither their names nor their attributes are similar, both models are kept in the model merging.

Rule 9: if two models are neither subset-similar, name-similar, nor attribute-similar, both models are kept in the model merging.
the database into the standard ontology expressed in OWL. The concept matching module computes the similarity of the input concepts. Following the similarity calculation method above, when two concepts and the weight of each similarity are input, the multi-layer similarities and the total matching degree are computed. Figure 126.1 shows the matching result between ''Quotation'' and ''Payment application form''.
126.4 Conclusion

In this paper, the method of merging partial models into the whole model is studied. The semantic similarity among model instances is analyzed at several levels, and based on this semantic similarity a series of model merging rules is proposed, by which model integration is completed. Finally, a prototype system for model knowledge matching is developed, and a case is described to validate the modeling method proposed in this paper.
References
Keywords Allocation of outpatient resources · Highly constrained environments · Outpatient appointment model · Outpatient scheduling research methodology · Patient preferences
127.1 Introduction
The uncertainty in service-related variables represents the primary challenge in outpatient scheduling. Highly constrained environments determine the conditions
which should be considered in the selection of outpatient appointment models,
patient preferences, allocation of outpatient resources, and outpatient scheduling
research methodology. With different appointment models, highly variable pref-
erences of treatment time, and various scheduling methodologies, we can derive
different optimal policies for making appointments. Under optimal conditions,
those four highly constrained environments (Fig. 127.1) should be considered and
all service related processes must be quantified. Early studies in the literature
provide significant research about surgical scheduling, with outpatient cost and
revenue as objective functions. However, to the best of the authors’ knowledge,
there is limited literature on the scheduling process under highly constrained
environments. As stated earlier, there are a significant number of constraints in the
outpatient scheduling process, such as patient preferences and allocation of out-
patient resources.
Figure 127.1 shows that highly constrained environments mainly consist of four
elements: outpatient appointment model, patient preferences, allocation of out-
patient resources, and outpatient scheduling research methodology. Patients arrive at the system with the following distributions: uniform, empirical, and lognormal. The traditional model requires patients to accept any scheduled or service time provided to them. The work (Fries and Marathe 1981) considers patients' waiting time, as well as the idle time and overtime of providers and staff, as performance metrics or objective functions, and utilizes dynamic programming and queuing models to maximize system capacity.
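The three performance metrics just mentioned (patient waiting time, provider idle time, overtime) can be computed for a given schedule with a minimal single-server pass; a sketch with hypothetical appointment and service times:

```python
def clinic_metrics(appointments, service_times, session_length):
    """Simulate one provider serving patients in appointment order.
    Returns (total patient wait, provider idle time, overtime)."""
    wait = idle = 0.0
    free_at = 0.0  # time the provider next becomes free
    for appt, service in zip(appointments, service_times):
        start = max(appt, free_at)         # patient waits if the provider is busy
        wait += start - appt
        idle += max(0.0, start - free_at)  # provider idles if the patient is not due yet
        free_at = start + service
    overtime = max(0.0, free_at - session_length)
    return wait, idle, overtime

# Hypothetical 60-minute session with four 15-minute slots
w, i, ot = clinic_metrics([0, 15, 30, 45], [20, 10, 25, 15], 60)
print(w, i, ot)  # patients wait 15 min in total; no idle time; 10 min overtime
```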
For instance, according to different outpatient appointment models, different
patients’ preferences may lead to different impact on the objective function under
the same constraints. The distribution and utilization of outpatient resources can be
analyzed as a function of different modes of capacity allocation considering the
changing preferences of different patient groups. Finally, different results can be
obtained when choosing different outpatient scheduling methodologies, with the
ultimate goal being to find a better research methodology of outpatient scheduling,
which further motivates the research presented in this paper.
There are several ways to determine the appropriate outpatient appointment model. Once an appointment has been made, patients may have to wait a long time, and sometimes their treatment is postponed; the patient may then be unable to see his or her own doctor, which is the mark of an inefficient outpatient appointment scheduling system. This can also result in poor communication between patients and doctors and lead to unnecessary costs. Based on these facts, the paper attempts to select the appropriate outpatient appointment model. As
stated earlier, there are generally three outpatient appointment models: traditional
model, carve-out model, and advanced access model (Murray and Tantau 2000).
In the advanced access model, every doctor has available appointment slots,
which improves the availability of outpatient services. When existing capacity is
unable to meet patient demand, advanced access models become more advanta-
geous. This model can better balance supply and demand. Though patients make
diverse choices when making reservations, this method only takes into consider-
ation reservations made in advance and walk-ins, which will reduce the variability
in patient types. Moreover, the model can increase the effective utilization of
resources, especially bottleneck resources, such as expensive equipment in out-
patient clinics. Table 127.1 illustrates the daily capacity available on physicians’
schedules in the three access models described in the paper, where TPA refers to
the proportion of appointments; WU refers to walk-ins and the urgent; C-R refers
to cost-revenue; and CP refers to capacity.
In the traditional model (TM) (Bowers and Mould 2005; Guo et al. 2004; Gupta
and Wang 2007; Hassin and Mendel 2008; Huang 2008; LaGanga and Lawrence
2007; Murray and Tantau 2000; Muthuraman and Lawley 2008; Ogulata et al.
2009; Turkcan et al. 2010), the schedule is completely booked in advance; same-
day urgent care is either ignored or added on top of existing appointments. In a
carve-out model (COM) (Chakraborty et al. 2010; Chao et al. 2003; Fries and
Marathe 1981; Gallucci et al. 2005; Green and Savin 2007; Kaandorp and Koole
2007; Patrick et al. 2008), appointment slots are either booked in advance or held
for same-day urgent care; same-day non-urgent requests are satisfied in future
time. In advanced access model (AAM) (Murray and Berwick 2003; Murray and
Tantau 1999; Green et al. 2006; Kim and Giachetti 2006; Klassen and Rohleder
1996; Liu et al. 2010; Qu and Shi 2011), where practices focus on doing today’s
work today, there is true capacity. The majority of appointment slots are open for
patients who call that day for routine, urgent, or preventive visits.
Our paper focuses on the overall process of scheduling, which stresses the
proportion of scheduled patients. Table 127.2 emphasizes that every patient is
assigned to a time block, and that the number of patients a doctor can serve in a
certain period of time is fixed. The types of block appointment systems are shown
in Table 127.2.
The work (Fries and Marathe 1981) shows that a multiple-block system is more feasible when the number of patients changes, and found that it is better to expand the size of the reservation model. The literature also gave appropriate weights to the patient's waiting time and the doctor's idle time and overtime, in an effort to compare different booking systems using those performance metrics. The paper (Patrick et al. 2008) applied dynamic programming to schedule multi-priority patients on diagnostic resources; the sizing of patient blocks in the multi-block appointment system was not carried out with the dynamic programming model. The study found that patients with different priorities have a large impact on outpatient costs.
The paper (Hassin and Mendel 2008) studied the patient no-show rate in the single-block system, and investigated the degree to which it influences outpatient costs and revenue. In the multiple-block system, the patient's waiting-time cost, no-show costs, and service costs under the no-show rate are examined. The study found that the patient no-show cost has a smaller impact on outpatient scheduling than the service cost. From the three studies above (Fries and Marathe 1981; Hassin and Mendel 2008; Patrick et al. 2008), it can be clearly seen that the previous literature studied single-block and multiple-block systems with the overall cost as the primary objective function, varying the outpatient factors considered.
Outpatient resources include many elements, such as providers, staff, and equip-
ment. Furthermore, there might be a single department or multiple departments in
outpatient clinics. The paper (Chao et al. 2003) proposed a multi-block appointment and scheduling system based on patients' waiting time, providers' available appointment slots, and other factors. The focus was to determine a reasonable distribution of outpatient resources, which was not found to be proportional to the other factors.
The paper summarizes the relevant literature regarding outpatient resources,
including slack capacity (SC) (Chakraborty et al. 2010; Huang 2008), the penalty
(P) (Chakraborty et al. 2010; Fries and Marathe 1981; Gupta and Wang 2007; Kim
and Giachetti 2006; Liu et al. 2010; Patrick et al. 2008), cost (C) (Fries and
Marathe 1981; Hassin and Mendel 2008; Kim and Giachetti 2006; Klassen and
Rohleder 1996; Liu et al. 2010; Muthuraman and Lawley 2008; Patrick et al.
2008), providers and staff (PS) (Fries and Marathe 1981; Gupta and Denton 2008;
Gupta and Wang 2007; Guo et al. 2004; Hassin and Mendel 2008; Huang 2008;
Klassen and Rohleder 1996), revenue (R) (Bowers and Mould 2005; Chakraborty
et al. 2010; Fries and Marathe 1981; Gupta and Denton 2008; Gupta and Wang
2007; Kim and Giachetti 2006; Liu et al. 2010; Muthuraman and Lawley 2008;
Patrick et al. 2008), equipment resources (ER) (Bowers and Mould 2005; Guo
et al. 2004; Gupta and Denton 2008; Patrick et al. 2008). While some studies
focused on costs and revenue to evaluate the scheduling process, others considered
slack capacity. In those studies (Huang 2008), slack capacity is said to be the idle
time of staff or equipment, wherein idle time varies between departments. The
paper (Chakraborty et al. 2010) studied both the idle time and overtime with
consideration given to no-show rate and service time distribution based on out-
patient services’ revenue.
The survey shows that only four studies focus on outpatient equipment.
The work (Guo et al. 2004) emphasized the efficiency of equipment under certain
scheduling processes. The study (Bowers and Mould 2005) focused on the utili-
zation of equipment shared by outpatient and inpatient clinics. The paper (Gupta
and Denton 2008) mentioned the distribution problem of special facilities in
outpatient clinics, and showed the efficiency when taking into consideration spe-
cial patient conditions. The work (Patrick et al. 2008) mainly focused on the
scheduling process of patients with different priorities. In equipment related
research (Bowers and Mould 2005; Guo et al. 2004; Gupta and Denton 2008;
Patrick et al. 2008), the evaluation should not be focused only on facility effi-
ciency, but should also consider facility planning and department layout.
127 Outpatient Scheduling in Highly Constrained Environments 1211
There are numerous methodologies to solve this scheduling problem, with many
using simulation (Bowers and Mould 2005; Cayirli and Viral 2003; Guo et al.
2004; Hassin and Mendel 2008; Huang 2008; Klassen and Rohleder 1996; Ogulata
et al. 2009) and regression analysis (Gallucci et al. 2005; Hassin and Mendel 2008;
Kim and Giachetti 2006; LaGanga and Lawrence 2007). Heuristic methods (Green
et al. 2006; Gupta and Wang 2007; Liu et al. 2010) and the curve fitting
approaches (Muthuraman and Lawley 2008; Qu and Shi 2011) are relatively rare.
The variables can involve many aspects in the scheduling process. Revenue and
cost are widely used as performance metrics, although other measures such as
resource utilization have also been used. We know that genetic algorithms
(Kaandorp and Koole 2007) and local search algorithms (Turkcan et al. 2010) are
used only in two studies. However, we think that genetic algorithms can be a
promising research tool.
The work (Fries and Marathe 1981) used dynamic programming to determine
the optimal block sizes for the next period given that the number of patients
remaining to be assigned is known. They present an approximate method to apply
the dynamic results to generate a schedule for the static version.
The paper (Ogulata et al. 2009) used simulation to analyze the conditions under which radiology patients must be scheduled, such as the percentage of unaccepted patients, treatment delay, number of patients waiting in queue, normal capacity usage ratio, and slack capacity usage ratio. The simulation showed that in high-frequency systems the percentage of unaccepted patients is determined mostly by the maximum-waiting parameter rather than by slack capacity; the treatment delay is determined entirely by the slack capacity; and the main factor affecting the treatment delay is the maximum waiting time.
A study (Qu and Shi 2011) assessed the impact of patient preferences and provider/staff capacity using mathematical modeling; the patient choice model they included satisfies the independence of irrelevant alternatives (IIA) property. The work (Gupta and Wang 2007) assumed patient choices were governed by IDM, MNL, and RUM models. Different models used linear programming to study outpatient revenue as the objective and to find the community cost according to bound heuristic methods.
The paper (Turkcan et al. 2010) proposed genetic algorithms to plan and schedule the entire chemotherapy cycle. The main factors are the high variation in resource requirements, such as treatment time, nursing time, and pharmacy time; the goal is to fully utilize limited resources. The work (Liu et al. 2010) also used a dynamic heuristic method to study patients' no-show behavior. The simulation analysis shows that the method is more suitable when the number of patients exceeds the outpatient clinic's actual capacity.
1212 X. Wu et al.
References
Bowers J, Mould G (2005) Ambulatory care and orthopedic capacity planning. Health Care
Manag Sci 8:41–47
Cayirli T, Viral E (2003) Outpatient scheduling in health care: a review of literature. Prod Oper
Manag 12(4):519–549
Cayirli T, Viral E, Rosen H (2006) Designing appointment scheduling systems for ambulatory
care services. Health Care Manag Sci 9:47–58
Chakraborty S, Muthuraman K, Lawley M (2010) Sequential clinical scheduling with patient no-
shows and general service time distributions. IIE Trans 42(5):354–366
Chao X, Liu L, Zheng S (2003) Resource allocation in multisite service systems with intersite
customer flows. Manag Sci 49(12):1739–1752
Erdogan SA, Denton BT (2009) Surgery planning and scheduling: a literature review. Med Decis
Making 30:380–387
Abstract This paper points out that reverse engineering is an important technology for realizing product innovation based on a prototype. It puts forward a new method of 3D reconstruction that takes the slicing data of the prototype as the original data, working within a commercial CAD modeling software environment, and briefly introduces the system developed by the authors. The working process of the system is to read in the slicing data of the prototype and, after pretreatment and feature recognition, to output feature data and perform 3D reconstruction in the SolidWorks environment, finally constructing the CAD solid model. This lays a good foundation for modifying the model and thus realizing product innovation. The innovation of the system lies in constructing the CAD solid model directly from the slicing data. The paper also analyzes the key problems of the reconstruction process, and indicates that this technology has obvious advantages for the mechanical manufacturing field.
M. Li (&) Q. Li
Zhengzhou Institute of Aeronautical Industry Management, Zhengzhou,
People’s Republic of China
e-mail: [email protected]
Q. Li
e-mail: [email protected]
128.1 Introduction
As shown in Fig. 128.1, digitizing the part and constructing the CAD model are the two key technologies of reverse engineering (Luan et al. 2003).

Digitizing the part means adopting measuring methods and equipment to acquire the geometric coordinates of the part. The measuring methods currently used in industry are coordinate measuring machines (CMM), laser beam scanning, industrial computed tomography, and layer-by-layer cutting imaging. Using these methods, the slicing data of every layer of the prototype can be obtained.
The method of constructing the CAD model commonly used in reverse engineering at home and abroad is: recognize the borders in the slicing data automatically or manually; group the 3D points according to the single-feature principle; carry out surface modeling for each group of points; and finally perform solid modeling (Chow et al. 2002; Huang et al. 2001), that is, link the surfaces together to form a complete part. The theory and algorithms of surface modeling are basically mature, but research on solid modeling oriented to reverse engineering has still not reached a practical level; therefore a breakthrough in constructing the CAD model is eagerly awaited.
[Fig. 128.1: flowchart of reverse engineering — existing prototype → digitized data → CAD model → check; if OK: drawing, manufacture, finished part; if no: return]
The reverse engineering system based on slicing data, SdRe for short, is reverse engineering software developed by us that constructs a product's 3D model from slicing data. The SdRe system includes three function modules: slicing data processing, feature recognition, and 3D reconstruction.
The data obtained after digitizing the part are the bitmap images of all slicing layers of the part prototype. Processing these slicing images involves two steps: filtering out noise and extracting borders. Among image filtering technologies, global filtering requires the statistical model of the signal or noise to be known in advance, which is almost impossible for slicing images, so the SdRe system uses local filtering, applying local operators to the images in turn.
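A 3×3 median filter is a typical local operator of this kind: each interior pixel is replaced by the median of its neighborhood, which removes isolated speckle noise from a slice bitmap. A pure-Python sketch (an illustration, not the SdRe filter library):

```python
def median_filter_3x3(img):
    """Apply a 3x3 median filter to a 2D image (list of equal-length rows).
    Border pixels are copied unchanged."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = sorted(img[y + dy][x + dx]
                            for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            out[y][x] = window[4]  # median of the 9 neighborhood values
    return out

# A blank slice with one isolated noise pixel: the filter removes it
slice_img = [[0] * 5 for _ in range(5)]
slice_img[2][2] = 1
clean = median_filter_3x3(slice_img)
print(clean[2][2])  # 0 - the speckle is filtered out
```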
The SdRe system provides a filter function library containing many filter functions. The user can select an appropriate function for various image qualities and modify the filtering parameters in order to get the best filtering result. The system then extracts borders from the filtered image, and the image information is organized as ring chains composed of interconnected pixels. Because the slicing data of an object prototype in reverse engineering must consist of closed and mutually disjoint rings, the system has an effective algorithm for extracting the borders of such slicing data. While extracting borders, the system's parameters can be modified interactively to eliminate spurious data such as air holes and chips. Moreover, the system can distinguish between convex and concave features on the extracted borders: a concave feature means material removed from the solid surface, such as a hole, while a convex feature means material possessed by the solid surface, such as a cylinder. After border extraction, the data become closed rings of ordered points, each of which is named a data ring.
The SdRe system uses a feature model to construct the CAD model, so the work of stage 2 is feature recognition. After feature recognition, a data ring is converted into data expressing a feature of the object prototype, which is named a feature ring. This is the core module of the system.
The SdRe system selects commercial CAD modeling software, such as SolidWorks, as the supporting software for 3D modeling. SolidWorks is computer-oriented 3D modeling software: its functions are powerful, it is cost-effective, and it easily interfaces with programming languages and other commonly used CAD software (Wen 2004). Moreover, with the support of commercial CAD modeling software, the reconstructed 3D solid model can be modified; the system can output part drawings and assembly drawings, files readable by other commonly used CAD software, and STL files for rapid prototyping (Liu 2004; Schreve et al. 2006). Using SolidWorks as the supporting software saves the work of implementing basic functions and allows the research to focus on the key technologies of reverse engineering. SolidWorks provides a software interface to programming languages; using this interface, the SdRe system implements its interface module. Running this module in the SolidWorks environment with the feature data produced by the feature recognition module as input, the CAD model of the product can be reconstructed.
The feature recognition module is the core of the SdRe system. It performs the feature recognition function in order to realize 3D solid reconstruction: the SdRe system reads in the data-ring data and outputs the feature-ring data for SolidWorks modeling.
The SdRe system recognizes features at the three levels of line, surface and solid; the feature types are shown in Fig. 128.2.
From the standpoint of feature bodies, the features constructed by the SdRe system are of two types: extruded bodies and layer-change extruded bodies. An extruded body is a feature whose cross-section does not change in shape or size, i.e., an equal cross-section body such as a cylinder or prism. A layer-change extruded body has a variable cross-section whose size and shape change; a feature body containing a free surface is one kind of layer-change extruded body. From the standpoint of feature surfaces, the SdRe system can construct planes, cylindrical surfaces, conical surfaces and free-form surfaces, each of which can be either an outer or an inner surface.
From the standpoint of feature lines, the SdRe system can construct straight lines, arcs, circles and free curves, as well as polygons formed by their combination.
The SdRe system recognizes feature bodies and feature lines in explicit form, and feature surfaces in implicit form, because feature-surface recognition is embedded in the recognition of feature bodies and feature lines.
128 Realization of 3D Reconstruction of CAD Model Based on Slicing Data 1219
Fig. 128.2 Feature types of the SdRe system: feature body, feature surface, feature line
As mentioned above, a data ring is the data by which a certain feature of the object prototype is reflected in a certain cross-section. The data rings that express the same feature surface of the object prototype, distributed over different cross-sections, are congregated to form a new data chain, called a solid ring. A solid ring is thus the collection of data rings expressing one feature surface of the object prototype; after feature recognition, the solid ring becomes a feature ring.
From the standpoint of automation, feature recognition has two manners: interactive recognition and automatic recognition (Li et al. 2003). Automatic mode can recognize two types of feature line, straight lines and free curves, and all feature bodies; interactive mode can recognize all feature lines and feature bodies. From the standpoint of checking the recognition results, interactive recognition offers two manners: relative recognition and absolute recognition. In relative recognition, the system fits the interactively recognized results by least squares to check the recognition correctness; if the error exceeds a specified threshold value (which may be modified by the user), a warning is given, and the user decides whether to recognize again or ignore the warning. In absolute recognition, the manual recognition result is accepted directly, with no further inspection. From the standpoint of working process, there are direct and indirect recognition. Indirect recognition first recognizes the feature lines and then the feature body; it is mainly used for complex feature bodies. Direct recognition recognizes the feature body directly and is mainly used for simple feature bodies. If a complex feature body needs to be recognized, the system guides the user to recognize its feature lines.
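The relative-recognition check can be sketched as a least-squares line fit whose RMS residual is compared against a user-adjustable threshold. The function below is a minimal illustration, not the SdRe implementation:

```python
import math

def check_line_recognition(points, threshold=0.1):
    """Least-squares check of an interactively recognized straight line.

    Fits y = a*x + b to the points and returns (ok, rms_error); ok is False
    when the RMS residual exceeds the threshold, i.e. the 'straight line'
    recognition is suspect and a warning should be raised.
    """
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # least-squares slope
    b = (sy - a * sx) / n                           # least-squares intercept
    rms = math.sqrt(sum((y - (a * x + b)) ** 2 for x, y in points) / n)
    return rms <= threshold, rms

# Nearly collinear points pass; points on a parabola fail the check
ok, err = check_line_recognition([(0, 0.0), (1, 1.01), (2, 1.99), (3, 3.0)])
bad_ok, bad_err = check_line_recognition([(0, 0.0), (1, 1.0), (2, 4.0), (3, 9.0)])
```

An analogous fit (circle, arc, etc.) would be used for the other recognizable feature lines.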
[Figure: structure of the feature recognition module — input/output model, image display model, solid-ring constructing model, solid-ring editing model (delete, separate, combine), data-ring editing model, and feature editing model covering feature lines (polygon, arc) and feature bodies (extrude: cylinder, prism, cone; layer-change extrude: free-surface body)]
(3) Solid ring editing
This model completes the operations of deleting, separating and combining solid rings. Deleting a solid ring modifies the data of the original data rings in order to eliminate useless data such as air holes and chips. Separating splits one solid ring (a compound solid ring) into two rings. Combining merges two solid rings into one. When combining, the SdRe system judges from the shapes, spatial positions and planar positions of the two rings whether they can be combined; if the system considers the combination inadvisable, it gives a warning and asks the operator to confirm or abandon it; if the system considers that they cannot be combined, it gives an error message and refuses to combine them.
(4) Data ring editing
Deletion can be performed on the data of a solid ring, that is, deleting the current data ring (slice) of which the solid ring is composed, in order to eliminate redundant data.
(5) Feature recognition
After the solid ring is constructed, the geometric recognition of the data composing it must be completed in order to construct the feature ring. The system uses the various methods described above to recognize the feature lines and feature bodies.
(6) Feature ring editing
The constructed feature ring is displayed in the form of a tree so that it can be edited when necessary. This editing work mainly serves the requirements of SolidWorks and includes two tasks: adjusting the modeling order of the feature rings, and adjusting the correspondence of data points in surface bodies.
128.4 Conclusion
The 3D reconstruction method put forward in this paper directly constructs the CAD solid model of the prototype from its slicing data, within a commercial CAD modeling software environment; this is a new method. Previously, reverse engineering methods generally constructed local surface models of the prototype first, then matched and joined the surfaces to obtain the whole surface model. In that matching and joining process, handling surface tearing and overlap is very complicated. The method researched in this paper avoids these problems. Because it realizes 3D reconstruction inside a commercial CAD software environment, it saves much of the time needed to develop additional modeling software or modules. In addition, since most parts in the mechanical manufacturing field are composed of regular surfaces, the method has unique advantages in reverse engineering for that field.
References
Abella RJ, Daschbach JM, Mcnichols RJ (1994) Reverse engineering industrial application.
Comput Ind Eng 26(2):381–385
Chen L-C, Lin GCI (1997) An integrated reverse engineering approach to reconstructing free-
form surfaces. Comput Integr Manuf Syst 10(1):49–60
Chow J, Xu T, Lee S-M, Kengskool K (2002) Development of an integrated laser-based reverse
engineering and machining system. Int J Adv Manuf Technol 19:186–191
Daschbach J, Abella R, McNichols R (1995) Reverse engineering: a tool for process planning.
Comput Ind Eng 29(1–4):637–640
Honsni Y, Ferreira L (1994) Laser based system for reverse engineering. Comput Ind Eng
26(2):387–394
Huang X, Du X, Xiong Y (2001) Modelling technique in reverse engineering. China Mech Eng
12(5):539–542 (in Chinese)
Li D, Wang M, Liu Y (2003) Research on interacted-modeling method for reverse engineering.
China Mech Eng 14(19):1677–1680 (in Chinese)
Liu Y (2004) Research on CAD modeling key technology of reverse engineering based slicing
feature. Zhejiang University, Hangzhou (in Chinese)
Liu Z, Huang C (1992) Reverse engineering design. China Machine Press, Beijing, p 116 (in
Chinese)
Liu Y, Hang J, Wan Y (1998) Reverse engineering and modern design. J Mach Des 16(12):1–4
(in Chinese)
Luan Y, Li H, Tang B (2003) Reverse engineering and its technologies. J Shan Dong Univ (Eng
Sci) 33(2):114–118 (in Chinese)
Motavalli S, Bidanda B (1994) Modular software development for digitizing systems data
analysis in reverse engineering application: case of concentric rotational parts. Comput Ind
Eng 26(2):395–410
Puntambekar NV, Jablokow AG, Joseph Sommer III H (1994) Unified review of 3D modal
generation for reverse engineering. Comput Integr Manuf Syst 7(4):259–268
Schreve K, Goussard CL, Basson AH, Dimitrov D (2006) Interactive feature modeling for reverse
engineering. J Comput Inf Sci Eng 6:422–424
Wen X (2004) Reverse engineering technique of complex surface product based prototype. Mech
Electr Inf 4(8):35–37 (in Chinese)
Chapter 129
Recommender System Based “Scenario-Response” Types Post-Disaster Emergency Supplies Planning
Keywords Group decision making · Post-disaster supplies planning · Recommender system · Scenario-response · Social tagging
129.1 Introduction
The rapid development of economic globalization not only deepens national industrialization and urbanization, but also increases the property losses and casualties brought about by large-scale unexpected natural disasters. When such an event happens, all disaster areas are in great demand for emergency supplies (Fiedrich et al. 2000; Kevin and Liu 2004). Generally, areas affected by different natural disasters such as typhoons, flooding, droughts or earthquakes may have quite different needs for different supplies. In the allocation management of emergency supplies for large-scale natural disasters, irrational distribution of resources usually leads to further expansion of personnel and property losses and a deterioration of the threat (Bakuli and Smith 1996; Zheng 2007). Thus, a more effective approach, based on the real-time data of the affected areas, is needed to achieve optimized post-disaster emergency supplies planning, which can ensure fairness and rationality, greatly help post-disaster reconstruction, and speed up the recovery of daily life and production (Mezher et al. 1998).
At present, the allocation strategies for emergency supplies in large-scale unexpected natural disasters have the following problems: (1) they do not take the actual emergency supply needs of the disaster areas into consideration, so unneeded materials are over-supplied while much-needed supplies are under-supplied, resulting in unnecessary waste of precious emergency supplies; (2) they do not lay emphasis on the differing actual needs of different areas for different emergency supplies (Toregas et al. 1971; Yuan and Wang 2009).
In a word, the existing post-disaster supply plans are rough and simple, and their great imbalance of allocation prevents them from responding well to the demands of disaster-affected areas for emergency supplies (Chang et al. 2007; Mailler et al. 2003). Therefore, advanced technologies are gradually being introduced into post-disaster supplies planning.
planning in disaster response; Chiu and Zheng (2007) developed a model formulation and solution for real-time emergency response in no-notice disasters.
It can therefore be concluded that most studies on the planning of post-disaster emergency supplies have paid more attention to combining allocation management with available transportation strategies (Gwo-Hshiung et al. 2007; Fang et al. 2007). However, numerous disaster areas usually have different needs for emergency supplies, both in quantity and in category, and the planning should be combined with the actual losses of the areas.
In this paper, based on the real-time data of the affected areas and the evolution of the scenario, an algorithm that combines a recommender system and social tagging with allocation management is proposed to establish a “scenario-response” type of post-disaster emergency supplies planning, in order to reach higher performance in disaster emergency management and post-disaster recovery.
129.3 Methodology
K-means
(a) First, the average strategy is applied to aggregate the damage-and-loss data of the group members in every cluster, which results in a single vector representing the integral damage degree of the disaster areas. Assume there are n1 areas in the cluster; the first area's vector of damages and losses is R_1 = (a_{11}, a_{12}, a_{13}, ..., a_{1i}), the second area's is R_2 = (a_{21}, a_{22}, a_{23}, ..., a_{2i}), and so on, where i indicates the number of the appointed attributes used to appraise the losses and damages of disaster-affected areas. The single vector representing the integral damage degree of the disaster areas in the same cluster, denoted R_cluster1, is calculated so that each entry is the average damage and loss for attribute i over all areas in the cluster:

$$R_{\mathrm{cluster1}} = \frac{1}{n_1} \sum_{j=1}^{n_1} R_j \tag{129.1}$$
(b) Second, after getting the single vector that represents the integral damage degree of the disaster areas in the same cluster, consider it as a virtual disaster area v_1. To provide a recommendation list showing the needed quantities of emergency supplies for the areas in the cluster, it is necessary to figure out each cluster's ratio in the whole dataset under the same attribute; the ratio is marked r_{m,i}, representing cluster m's ratio in the whole dataset under attribute i. Suppose the total amount t_j of emergency supplies is already known from an emergency rescue site, where j is the index of the category of the emergency resources. The recommendation list drawn from the specific data is obtained by:

$$R_c = r_{m,i} \, t_j \tag{129.2}$$
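Steps (a) and (b) can be sketched as follows; the helper names and the small two-cluster example are illustrative assumptions, not values from the paper:

```python
def cluster_average(vectors):
    """Formula (129.1): element-wise average of the areas' damage vectors,
    giving the single vector R_cluster for the virtual disaster area."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def attribute_ratios(cluster_avgs):
    """Each cluster's share of every attribute over all clusters, r_{m,i}."""
    m = len(cluster_avgs[0])
    totals = [sum(c[i] for c in cluster_avgs) for i in range(m)]
    return [[c[i] / totals[i] for i in range(m)] for c in cluster_avgs]

def recommendation_list(ratios, totals_t):
    """Formula (129.2): recommended quantity R_c = r_{m,i} * t_j, given the
    total available supplies t_j (one supply category per attribute here)."""
    return [[r * t for r, t in zip(row, totals_t)] for row in ratios]

cluster1 = cluster_average([[2.0, 4.0], [4.0, 8.0]])   # two areas
cluster2 = cluster_average([[1.0, 2.0]])               # one area
ratios = attribute_ratios([cluster1, cluster2])        # shares per attribute
rec = recommendation_list(ratios, [1000, 400])         # total supplies t_j
```

Each row of `rec` is the quantity recommended for one cluster, attribute by attribute.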
(a) Classify all the attributes that can represent the damages and losses of the disaster areas into different categories in accordance with their intrinsic correlation. Each category corresponds to a different class of emergency supplies; for example, collapsed and damaged houses belong to the loss of buildings, which indicates the demand for emergency tents in an unexpected event;
(b) According to the real-time data of damages and losses of the disaster areas in each cluster, figure out the tag information of the needed emergency supplies in order, and select the five most frequent tags as the tag set showing the clusters' demands for rescue resources;
(c) Based on the frequency ratios of the tags deduced from the above steps, we can calculate another recommendation list R_t for the quantity and category of emergency supplies needed by the clusters.
What should be noted here is that the damage-and-loss data are helpful in figuring out the similarity of different areas' demands for emergency supplies, while the tag information captures the differences between areas in the same cluster; the combination therefore compensates for the shortage of simple post-disaster supply planning strategies and can fully reflect the needs of the disaster areas.
(d) Calculate the average of the two recommendation lists; the quantity of emergency supplies allocated to each cluster can then be easily obtained.
It is true that even in the same cluster, different areas may have different demands for the quantity and category of emergency supplies, so it is also important to find a good way to handle this. In the following section, the tag information of the disaster areas is used, and the detailed steps are proposed.
In this section, a case study is conducted to explain the detailed steps of the proposed strategy for allocation planning of post-disaster emergency supplies. Because of unit inconsistencies in the original dataset, it is necessary to standardize the data first; the preprocessed dataset is shown in Table 129.1.
Step 1: Divide the fourteen disaster areas into three different clusters according to their actual data of damages and losses. The result is as follows:
Cluster 1: Anhui, Fujian, Jiangxi, Shandong, Henan, Hubei, Hunan, Shanghai, Jiangsu, Hainan, Yunnan
Cluster 2: Guangdong, Guangxi
Cluster 3: Zhejiang
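Step 1's partitioning can be sketched with a minimal k-means on the standardized damage vectors; the six area vectors below are invented placeholders, not the data of Table 129.1:

```python
def kmeans(points, k, iters=20):
    """Minimal k-means: returns a cluster index for every point.
    Initial centroids are the first k points (deterministic for the sketch)."""
    centroids = [list(p) for p in points[:k]]
    labels = [0] * len(points)
    for _ in range(iters):
        # assignment step: nearest centroid by squared Euclidean distance
        for i, p in enumerate(points):
            labels[i] = min(
                range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])),
            )
        # update step: move each centroid to the mean of its members
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centroids[c] = [sum(col) / len(members) for col in zip(*members)]
    return labels

# Invented standardized damage vectors for six areas
areas = [[0.1, 0.2], [0.15, 0.25], [0.9, 0.8], [0.85, 0.9], [0.5, 0.1], [0.12, 0.22]]
labels = kmeans(areas, k=2)
```

A production implementation would use better initialization (e.g. k-means++) and a convergence test rather than a fixed iteration count.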
Step 2: The average strategy is used to calculate the single vector that represents the integral damage degree of the disaster areas in the same cluster.
1230 G. Kou et al.
Table 129.3 The ratio of the data representing the damages and losses of the disaster areas

Clusters   Victim     Death      Shift      Victim crop  Drought    Collapsed houses  Damaged houses  Economic loss
Cluster1   0.065468   0.167429   0.072086   0.095447     0.072865   0.097059          0.068263        0.142717
Cluster2   0.492441   0.658328   0.3366     0.591409     0.659885   0.786335          0.708216        0.568527
Cluster3   0.442091   0.174244   0.591314   0.313144     0.26725    0.116606          0.223521        0.288756
quantity for the different emergency supplies can be obtained. Taking the average score of the three attribute categories for each of the three clusters, the recommendation list R_c is:

$$R_c = \begin{pmatrix} 0.101661 & 0.084156 & 0.10268 \\ 0.49579 & 0.625647 & 0.687692 \\ 0.40255 & 0.290197 & 0.209628 \end{pmatrix} \begin{pmatrix} t_4 \\ t_1 + t_2 \\ t_3 \end{pmatrix}$$
Step 7: For every cluster, figure out the frequency of the tags of the demands for emergency supplies. For simplicity, cluster 2 (Guangdong and Guangxi) is used as an example; from the standardized dataset it is easy to get the tag information corresponding to the urgency of the emergency supplies:
Guangdong: tents, tents, instant food, medicine, medicine, medicine;
Guangxi: instant food, medicine, medicine, instant food.
Thus, the frequency of the tags can be obtained.
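The tag-frequency counting of Step 7 can be sketched with Python's `collections.Counter`, using the cluster 2 tags listed above:

```python
from collections import Counter

# Tag information of cluster 2, as listed above
tags = (["tents", "tents", "instant food", "medicine", "medicine", "medicine"]  # Guangdong
        + ["instant food", "medicine", "medicine", "instant food"])             # Guangxi

freq = Counter(tags)
total = sum(freq.values())
# Ratio of each tag's frequency, from which the list R_t is derived
ratios = {tag: count / total for tag, count in freq.items()}
```

For this example medicine accounts for half of all tags, instant food 0.3 and tents 0.2, which is the frequency ratio used in Step 8.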
Step 8: The method described in Step 6 can be used again to obtain the recommendation list R_t. Taking the average of R_c and R_t, the final recommendation list is obtained (Tables 129.3, 129.4, 129.5).
Step 9: The allocation within a cluster can rely more heavily on the principles and the tag rank.
129.5 Conclusion
References
Toregas C, Swain R, Revelle S, Bergman L (1971) The location of emergency service facilities.
Oper Res 19:1363–1373
Yuan Y, Wang D-W (2009) Path selection model and algorithm for emergency logistics
management. Comput Ind Eng 56:1081–1094
Zheng Y-J (2007) Distributed cooperative planning and scheduling for disaster management. In:
Proceedings of the IEEE international conference on automation and logistics, August,
pp 1172–1176
Zohar L, Albert IG (2008) Resource allocation under uncertainty in a multi-project matrix
environment: is organizational conflict inevitable? Int J Project Manage 26:773–788
Chapter 130
Research on the Simulation Case
of Traffic Accident
130.1 Introduction
Every traffic accident is unique. The variables of people, vehicles, road and environment force each accident reconstruction to consider particularity as well as universality. The traffic accidents studied here are those shelved for many years with controversy.
On January 21, 2007, at 18:10, at 1369 km + 300 m of line 202, it was snowy and the road was covered with ice and snow. The road was a straight two-lane asphalt pavement carrying traffic in both directions. Mr. Wang drove a Santana LX sedan (with a passenger, Mr. Zhang) from north to south. When driving to the 1369.3 km point of line 202, it collided with an agricultural tricycle driven by Mr. Lv (with a passenger) travelling from south to north. After the accident, the Santana sedan fell into the drain beyond the slope, while the agricultural tricycle stopped on the road. The accident caused varying degrees of damage to the Santana sedan and the agricultural tricycle, and Mr. Wang and his passenger Mr. Zhang died on the spot.
The problems to be resolved are the speeds of the Santana sedan and the agricultural tricycle when the accident occurred.
The central part of the right side of the Santana sedan has severe hollow deformation, the right side of the ceiling is deformed inward, and the whole car is bent (see Fig. 130.1). The deformation zone runs from above the right front wheel to above the right rear wheel, with a length of 1170 mm, at 360–1930 mm from the ground (the latter is the height after deformation, higher than the original height). Within the deformation zone the depth is about 870 mm, and the deepest point lies 2570 mm toward the rear. Within the range of 110–310 cm from the front toward the rear, at 15–158 cm from the ground, the front and rear fender panels on the right side and the two doors show an overall impact hollow; the right side bears severe impact marks, and the vehicle skin carries many scratches. Among them, the front corner of the B pillar has a hollow similar in shape to the front wheel of the agricultural tricycle. On the Santana sedan, the engine shroud is deformed and the gearbox handle damaged; the combination lamp at the right front has broken off; the front and rear windshields and the right window are broken; and many paint flakes have come off the front and rear doors and the front and rear fender panels on the right side.
Across the whole width of the vehicle, the front of the agricultural tricycle shows inhomogeneous deformation, more severe on the front right than on the left. The lower edge of the right front door and window is dislocated backward by 70 mm, the lower edge of the right door by 170 mm, and the lower edge of the left front door and window glass by 15 mm. The whole front, especially where the front wheel hit the cover, is hollowed; the left front combination lamp has broken off, as has the right front combination lamp cover; the windshield and right window are broken; both rearview mirrors have come off (see Fig. 130.2). The right front corner of the head is hollowed and deformed, with obvious fold deformation, impact cracks and scratch marks; the lower edge of the left corner is hollowed, with fold deformation and scratch marks; paint on the left and right has flaked off; and the front wheel has broken off from the fork tray.
1238 C. Wei et al.
[Figure labels: Santana sedan; agricultural tricycle]
Because the 12 cm curb and the green belt can hinder the movement of the Santana sedan, a low wall 12 cm high representing the curb and green belt was set up in the simulation.
The deformation positions, features and sizes of the two vehicles show that the right-middle part of the sedan first contacted and collided with the front wheel of the agricultural tricycle, and that the angle between the two vehicles' directions at the collision was near 90°. The smaller deformation at the front of the agricultural tricycle shows that the main colliding part was the front wheel, which means the collision had a larger rebound effect.
The instantaneous speed and the road condition before the collision determine the transverse moving distance of the agricultural tricycle after the collision. The relative position and the direction relative to the road before the collision determine the moving direction after the collision. The moving distance is mainly influenced by the collision speed of the agricultural tricycle, whether it braked, its bending status, the road condition, and conditions beyond the road such as the curb and green belt.
Based on the above analysis, and consulting the deformation sizes and features and the accident scene sketches of the Santana sedan and the agricultural tricycle, the data of road traces, vehicle masses, vehicle technical parameters and road adhesion coefficient were input into the software. The parameters of initial collision speed, initial direction angle and initial position of the Santana sedan and the agricultural tricycle were then set step by step; the moving trajectories, statuses and results were simulated and repeatedly compared with the site conditions, especially the final stationary positions, giving the speeds at the collision:
Because the accident scene has no road traces from before the collision, the vehicles' moving status and trajectories before it cannot be inferred. The initial speeds and directions in the simulation analysis only realize the relative status (collision speed, direction, and the contact position and direction); they do not represent the actual speeds, for the drivers may have braked or steered between the initial time and collision contact.
The tricycle braking traces coincide with the impact point. Braking takes a period of time to build from zero to maximum, i.e., from the start of braking to the appearance of braking traces. In other words, the tricycle's speed at the moments of driver reaction, operation and braking response was larger than at the collision. From the analysis, the agricultural tricycle's speed 13 m (about 1 s) before the collision was about 45 km/h, and 26 m (about 2.1 s) before the collision about 46.5 km/h. Because the Santana sedan was sideslipping before the collision, its pre-accident speed cannot be inferred.
130.5 Conclusion
From the above analysis and the computer simulation results, we can infer that the Santana sedan's speed was greater than 50.7 km/h at the collision, with a component of about 8.5 km/h along the longitudinal axis of the head; the agricultural tricycle's speed at the collision was about 43.4 km/h.
The agricultural tricycle's speed 13 m (about 1 s) before the collision was about 45 km/h, and 26 m (about 2.1 s) before the collision about 46.5 km/h.
Accident simulation should accurately extract the accident parameters. After obtaining the simulation results, they should be used to verify the accident process, so as to achieve a sound combination of logic and evidence.
Lai-bin Wang
131.1 Introduction
The regional brand concept was first proposed by Keller et al. (1998): a location, like products or services, can be branded, and the brand name is usually the actual name of the region. The brand makes people aware of the existence of the region and of related associations. Keller et al. (1998) also believe that a region can be branded like a product or service (Kevin 1999). Rosenfeld (2002) believes that the
L. Wang (&)
Department of Political and Management Science, Chizhou University,
Chizhou, China
e-mail: [email protected]
In this paper, regional brand building still takes regional industrial clusters as its foundation; the stakeholders of the regional industrial cluster naturally become the main body of regional brand building, and these stakeholders are the constituent elements of the cluster, with complex connections among them. Regional brand building is a system that runs from strategy development to execution and then to exchange with, and feedback from, the outside world; the strategy establishment, the follow-up implementation, and the final regulatory assessment that feeds back into amendments of the strategic plan all influence one another. Regional brand building based on industrial clusters is therefore the working of a complex system.
[Figure: regional brand building as a complex system — a conversion process characterized by self-organization and hetero-organization, coordination strategy, co-opetition game, all-win results and management mechanisms, leading to the system output]
trial-and-error behavior. The right strategic choices for regional brand development lead to greater efficiency; wrong ones bring huge economic losses.
In the recession phase of a regional brand, in order to re-establish the brand's market image, the development strategy must be changed: a liberal market access system, excellent products and good public relations can make the regional brand re-emerge to its former presence.
(2) Transformation process
[Figure: regional brand development and its subsystems — the social subsystem, the enterprise subsystem and the intermediary organizations subsystem]
131 Regional Brand Development Model 1247
The government plays an extremely important role in the regional brand building and development process. From the perspective of systems theory, the government subsystem should use local resource advantages, adopt a variety of marketing tools, establish and promote the regional brand, and, in combination with the needs of the regional brand development process, make scientific support policies.
First of all, the government must conduct regional image marketing. In addition, the government should rationalize the ideas and mechanisms of regional economic development at the macroscopic level, strengthen macro-guidance and promotion, improve the relevant mechanisms, and develop appropriate policies and measures. The government needs to encourage enterprises to create famous brands and implement brand strategy, provide corporate incentives, and make preferential policies and rewards for brand-name enterprises so that they play a demonstration effect.
131.6 Conclusion
Acknowledgments I would like to thank the Anhui Education Department Humanities and Social Science Fund (SK2012B338) for its support of this research.
References
Cai L (2008) The application of system dynamics in the research of sustainable development.
China Environmental Science Press, Beijing, pp 28–30 (in Chinese)
Eraydn HA (2010) Environmental governance for sustainable tourism development: collaborative
networks and organization building in the Antalya tourism region. Tour Manag 31:113–124
Keller KL (1998) Strategic brand management. Prentice-Hall, Upper Saddle River
Kevin LK (1999) Effective long-run brand management: brand reinforcement and revitalization
strategies. Calif Manag Rev 41(3):102–124
Malcolm SA (2006) Bangkok: the seventh international conference on urban planning and
environment. In: Place branding, pp 11–14
Rosenfeld SA (2002) Just clusters: economic development strategies that reach more people and
places, regional technology strategies. North Carolina, Carrboro
Simon A (2007) Competitive identity: the new brand management formations, cities and regions.
Palgrave Macmillan, New York, pp 25–41
Sun L (2009) Progress analysis of foreign regional brand theory. Foreign Economics and Management,
pp 40–49 (in Chinese)
Yang G (2005) Targeting model of sustainable development in ecotourism. Hum Geogr 5:74–77
(in Chinese)
Zeng R (2000) System analysis of harmonization development among population, resource,
environment and economy. Syst Eng Theory Pract 20(12):1–6 (in Chinese)
Chapter 132
Research on Bayesian Estimation
of Time-Varying Delay
Abstract Time delay estimation is one of the key techniques in array signal processing, and several mature algorithms already exist. Depending on the scenario, time delay estimation can be transformed into the estimation of the coefficients of an adaptive filter, on the basis of the filter's parameter model. Simulations of Bayesian methods including the Extended Kalman Filter, the Unscented Kalman Filter and the Bootstrap Particle Filter show that in a Gaussian nonlinear system, the EKF and UKF can estimate a time-varying delay effectively; moreover, the UKF performs better than the EKF, though both are restricted to Gaussian systems. In a nonlinear non-Gaussian system, the BSPF is able to estimate the time delay accurately.
Keywords Time delay estimation · Extended Kalman Filter · Unscented Kalman Filter · Bootstrap Particle Filter
132.1 Introduction
widely used in location and tracking in nonlinear dynamic systems. The EKF achieves filtering by first-order linearization (Taylor series expansion), which inevitably introduces extra error and can lead to divergence in strongly nonlinear systems (Crassidis 2005). The UKF applies the unscented transformation to propagate mean and covariance through the nonlinearity, and substitutes simple arithmetic for the EKF's Jacobian matrix (Ma and Yang 2009).
The UKF algorithm is highly precise, but it can only be used when the system noise obeys a Gaussian distribution. As a sub-optimal estimation algorithm, the particle filter is commonly applied to nonlinear, non-Gaussian systems. This paper simulates EKF, UKF and the particle filter and analyzes their performance in different scenarios, producing relatively good estimates.
Assume that s(t) represents the signal from the same mobile transmitter. At time t, the signals received by two independent base stations can be formulated as follows:

r1(t) = s(t) + v1(t)
r2(t) = A·s(t − τ(t)) + v2(t)      (132.1)

where A is an attenuation factor, τ(t) is the time-varying delay to be estimated, and v1(t), v2(t) are noises.
The output of the adaptive filter is

z(k) = Σ_{i=−p}^{p} n_i · r1(k − i),   p → ∞      (132.2)

The minimum quadratic sum of the error e(k) = z(k) − r2(k) can be achieved by adjusting the coefficients n_i. If r1(t) and r2(t) are expressed as in formula (132.1), then according to the sampling theorem the coefficients are:

n_i = A·sinc(i − τ(k)) = A · sin[π(i − τ(k))] / [π(i − τ(k))]      (132.3)
In practice, with tolerance for a certain truncation error, the error can be ignored as long as p is larger than the maximum time delay τ(k)_max, for instance p > τ(k)_max + 5. In this way, the estimation process becomes less complicated and the signal waveform need not be considered.
τ(k) is regarded as the state variable. Provided that the transmitter moves at constant velocity in a straight line under Gaussian white noise disturbance, the state equation and observation equation of the system are:
τ(k) = τ(k−1) + (k−1)/100 + w(k−1)
r2(k) = A Σ_{i=−p}^{p} sinc(i − τ(k)) · r1(k − i) + v(k)      (132.4)

w(k−1) and v(k) represent the system noise and the observation noise respectively, and r1(k − i) is a number sequence with a known waveform. With this, the signal model of time delay estimation is complete.
Assume that the above formulas form a system model obeying a first-order Markov random process, which satisfies:

p(x_k | x_{k−1}, y_{1:k−1}) = p(x_k | x_{k−1})      (132.7)

A new observation y_k becomes available at time k. Based on the Bayesian principle, the prior probability distribution can be updated by means of the measurement model p(y_k | x_k) to obtain the filtering posterior:

p(x_k | y_{1:k}) = p(y_k | x_k) · p(x_k | y_{1:k−1}) / p(y_k | y_{1:k−1})      (132.8)

where

p(y_k | x_k) = p(y_k | x_k, y_{1:k−1})      (132.9)

p(y_k | y_{1:k−1}) = ∫ p(y_k | x_k) · p(x_k | y_{1:k−1}) dx_k      (132.10)
Formulas (132.6) and (132.8) represent the two basic steps of prediction and updating; their recursive computation yields the optimal Bayesian estimate.
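To make the prediction–update recursion concrete, here is a hedged sketch of a brute-force grid implementation of (132.8)–(132.10) for a scalar random-walk state with a linear Gaussian measurement. It is an illustration only, not the algorithm used later in the chapter, and the noise variances and grid are arbitrary choices.

```python
import numpy as np

def gauss(x, mu, var):
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

def grid_bayes_filter(ys, grid, q_var, r_var, prior):
    """Scalar grid filter for x_k = x_{k-1} + w_k, y_k = x_k + v_k.
    Each step is the prediction integral (cf. 132.10) followed by the
    Bayes update (132.8)."""
    dx = grid[1] - grid[0]
    K = gauss(grid[:, None], grid[None, :], q_var)  # K[i, j] = p(grid_i | grid_j)
    p = prior.copy()
    means = []
    for y in ys:
        p = K @ p * dx                 # prediction: integrate transition * posterior
        p = p * gauss(y, grid, r_var)  # multiply by likelihood p(y_k | x_k)
        p /= p.sum() * dx              # normalize by p(y_k | y_{1:k-1})
        means.append((grid * p).sum() * dx)  # posterior mean
    return np.array(means)

grid = np.linspace(-5.0, 5.0, 401)
rng = np.random.default_rng(0)
ys = 2.0 + rng.normal(0.0, 0.5, 25)    # noisy observations of a near-static state
means = grid_bayes_filter(ys, grid, q_var=0.01, r_var=0.25,
                          prior=gauss(grid, 0.0, 4.0))
```

The posterior mean converges toward the true state as observations accumulate, which is the recursive behavior the two steps describe.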
If the noises w_{k−1} and v_k are independently distributed zero-mean white Gaussian noise with known parameters, the state equation f_k(x_{k−1}, w_{k−1}) is a given linear function of x_{k−1} and w_{k−1}, and the observation equation h_k(x_k, v_k) is a given linear function of x_k and v_k, then the optimal solution p(x_k | y_{1:k}) can be achieved by the Kalman Filter for continuous x_k (Kalman 1960).
In many cases, f_k(x_{k−1}, w_{k−1}) and h_k(x_k, v_k) are nonlinear and the noises w_k and v_k are non-Gaussian, under which the Kalman Filter will not work well; extended algorithms of Kalman filtering, such as Extended Kalman Filtering and Unscented Kalman Filtering, are then used. The Kalman Filter achieves the minimum mean squared error in linear system estimation. Through recursion and iteration, its update is computed from the previous estimate and the current input, which benefits real-time processing. EKF is a classic algorithm for nonlinear estimation: it approximates the nonlinear model by the linear terms of a Taylor expansion and then applies Kalman filtering. EKF algorithms are simple and computationally cheap, but they only work under weakly nonlinear Gaussian conditions.
Actually, approximating the probabilistic and statistical characteristics of a random quantity by a limited set of parameters is easier than approximating an arbitrary nonlinear mapping function. Great importance has therefore been attached to sampling-based approximations of nonlinear distributions, such as the Unscented Transformation (UT) (Kastella 2000; Gordon et al. 1993; Julier and Uhlmann 2004). UKF employs the Kalman filtering framework and uses the UT to propagate the mean and covariance instead of linearizing the nonlinear function. UKF needs no derivation of a Jacobian matrix and does not discard higher-order terms, so its nonlinear distribution statistics are of high precision. Though the computational cost of UKF is about the same as that of EKF, its performance is better.
The particle filter approximates the probability density function p(x_k | y_k) by a set of random samples propagated in the state space, and uses the sample mean instead of an integral operation to produce the minimum-variance state estimate. These samples are called particles. The importance density function is one of the key techniques, exerting a direct impact on the effectiveness of the algorithm. Besides, the number of effective particles becomes smaller and smaller with iteration, a phenomenon called particle degeneracy. Two effective remedies are the selection of an optimal importance density function and the adoption of resampling methods. From an application perspective, most implementations adopt p(x_k | x_{k−1}) as the importance density, which is easily achieved by sub-optimal algorithms. Resampling restores the particle set by redrawing particles according to the probability density denoted by the corresponding weights. Common resampling methods include random resampling, stratified resampling and residual resampling, etc. BSPF is built on this importance density function and resampling. The particle filter serves as the main filtering tool for nonlinear non-Gaussian systems.
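The following sketch illustrates a bootstrap particle filter of the kind just described, using the transition prior p(x_k | x_{k−1}) as the importance density and stratified resampling. The random-walk state and the Laplace (non-Gaussian) observation noise are toy choices for illustration, not the chapter's simulation model.

```python
import numpy as np

rng = np.random.default_rng(1)

def stratified_resample(weights):
    """Stratified resampling: one uniform draw inside each of N equal strata."""
    N = len(weights)
    u = (np.arange(N) + rng.random(N)) / N
    c = np.cumsum(weights)
    c[-1] = 1.0                    # guard against floating-point round-off
    return np.searchsorted(c, u)

def bspf(ys, n_particles=500, q_std=0.2, b=0.3):
    """Bootstrap particle filter: random-walk state, Laplace observation noise.
    With the transition prior as importance density, each weight is just the
    likelihood, normalized as in (132.29)."""
    x = rng.normal(0.0, 1.0, n_particles)
    est = []
    for y in ys:
        x = x + rng.normal(0.0, q_std, n_particles)  # sample p(x_k | x_{k-1})
        w = np.exp(-np.abs(y - x) / b)               # Laplace likelihood
        w /= w.sum()
        est.append(np.sum(w * x))                    # weighted posterior mean
        x = x[stratified_resample(w)]                # resample to fight degeneracy
    return np.array(est)

truth = 0.05 * np.arange(100)                        # slowly drifting true state
ys = truth + rng.laplace(0.0, 0.3, 100)
est = bspf(ys)
```

The filter tracks the drifting state despite the non-Gaussian observation noise, which is the regime where BSPF outperforms EKF/UKF.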
1256 M. Wang et al.
w_k and v_k stand for the system noise and the observation noise respectively; R_wk and R_vk are their covariance matrices.
The following are the steps of the time-varying delay estimation.
(1) Initialization: set the time-delay state value to τ_0 at k = 0, with initial variance P_0.
(2) Time prediction:

τ̂(k|k−1) = τ̂(k−1)
P(k|k−1) = P(k−1) + R_w(k−1)      (132.12)
The sigma points and their weights are generated as:

χ(0)_{k−1} = τ_{k−1},   i = 0
χ(i)_{k−1} = τ_{k−1} + (√((N+λ)·P_{k−1}))_i,   i = 1, …, N      (132.14)
χ(i)_{k−1} = τ_{k−1} − (√((N+λ)·P_{k−1}))_i,   i = N+1, …, 2N

ω_0^m = λ/(N+λ)
ω_0^c = λ/(N+λ) + 1 − α² + β      (132.15)
ω_i^m = ω_i^c = 1/[2(N+λ)],   i = 1, …, 2N
Calculate the mean value and variance based on the predicted point set:

τ̂_{k|k−1} = Σ_{i=0}^{2N} ω_i^m · χ(i)_{k|k−1}      (132.17)

P_{k|k−1} = Σ_{i=0}^{2N} ω_i^c · [χ_{i,k|k−1} − τ̂_{k|k−1}] [χ_{i,k|k−1} − τ̂_{k|k−1}]^T      (132.18)

P^{rr}_{k|k−1} = Σ_{i=0}^{2N} ω_i^c · [ψ_{i,k|k−1} − r̂_{k|k−1}] [ψ_{i,k|k−1} − r̂_{k|k−1}]^T      (132.21)

P^{τr}_{k|k−1} = Σ_{i=0}^{2N} ω_i^c · [χ_{i,k|k−1} − τ̂_{k|k−1}] [ψ_{i,k|k−1} − r̂_{k|k−1}]^T      (132.22)
Update the state and the variance, calculating the filter gain:

K_k = P^{τr}_{k|k−1} · (P^{rr}_{k|k−1})^{−1}      (132.23)

τ̂_k = τ̂_{k|k−1} + K_k · (r_k − r̂_{k|k−1})      (132.24)

P_k = P_{k|k−1} − K_k · P^{rr}_{k|k−1} · K_k^T      (132.25)
The normalized particle weights are

w̄_k^i = w_k^i / Σ_{i=1}^{N} w_k^i      (132.29)

(6) Resampling to get a new set of particles {τ_k^i}_{i=1}^{N}, then conducting stratified sampling.
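For a scalar state (N = 1), the UKF cycle of (132.12) and (132.14)–(132.25) can be sketched as below; the α, β, κ values and the observation model are illustrative assumptions, not the chapter's settings. With a linear observation function the step reduces to the ordinary Kalman update, which gives a handy sanity check.

```python
import numpy as np

def ukf_step(tau, P, r_obs, h, Rw, Rv, alpha=0.1, beta=2.0, kappa=0.0):
    """One scalar (N = 1) UKF cycle following (132.12) and (132.14)-(132.25)."""
    N = 1
    lam = alpha**2 * (N + kappa) - N
    tau_p, P_p = tau, P + Rw                      # time prediction (132.12)
    s = np.sqrt((N + lam) * P_p)
    chi = np.array([tau_p, tau_p + s, tau_p - s])  # sigma points (132.14)
    wm = np.array([lam / (N + lam), 0.5 / (N + lam), 0.5 / (N + lam)])
    wc = wm.copy()
    wc[0] += 1.0 - alpha**2 + beta                # weights (132.15)
    tau_hat = wm @ chi                            # (132.17)
    P_hat = wc @ (chi - tau_hat) ** 2             # (132.18)
    psi = h(chi)                                  # propagate through observation model
    r_hat = wm @ psi
    Prr = wc @ (psi - r_hat) ** 2 + Rv            # (132.21) plus observation noise
    Ptr = wc @ ((chi - tau_hat) * (psi - r_hat))  # (132.22)
    K = Ptr / Prr                                 # (132.23)
    tau_new = tau_hat + K * (r_obs - r_hat)       # (132.24)
    P_new = P_hat - K * Prr * K                   # (132.25)
    return tau_new, P_new

# With a linear observation h the step reduces to the ordinary Kalman update
tau_new, P_new = ukf_step(0.0, 1.0, r_obs=3.0, h=lambda x: x, Rw=0.5, Rv=0.5)
```

For these numbers the Kalman gain is 0.75, so the check against the closed-form linear update is straightforward.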
Fig. 132.3 The absolute filtering error under Gaussian noise (real signal traces; estimation errors of EKF and UKF with 3σ intervals)
Fig. 132.5 The absolute filtering error under the uniform distribution (real signal traces; estimation errors of EKF and UKF with 3σ intervals)
The symbol T represents one time step. The MSEs of the algorithms in the figure above are 6.9328 (EKF), 6.6033 (UKF) and 6.4577 (BSPF). According to the filtering results and squared errors, all three filtering methods achieve comparatively good results, for two reasons: first, the hypothetical model is Gaussian; second, the nonlinearity of the model is weak.
Simulation 2: the system noise follows a uniform distribution U[0, 1] (Fig. 132.4).
Figure 132.5 shows that the MSEs are 125.5421 (EKF), 127.0202 (UKF) and 8.5351 (BSPF). From these data, EKF and UKF cannot estimate the true values of the time-varying delay accurately; their MSEs are two orders of magnitude higher than that of BSPF. It follows that EKF and UKF require the system to fit a Gaussian model for good performance, while BSPF has the advantage in estimating nonlinear non-Gaussian systems.
132.6 Conclusion
This paper introduced Bayesian filtering theory and the algorithm steps of EKF, UKF and BSPF, then simulated them in a Gaussian nonlinear system and a non-Gaussian nonlinear system to compare their results. In the Gaussian nonlinear system, EKF, UKF and BSPF all perform well because the nonlinearity is weak. In the non-Gaussian nonlinear system, EKF and UKF can no longer estimate accurately; they incur a much higher MSE than BSPF, which proves more suitable for estimating a non-Gaussian nonlinear system.
References
Ching PC, Chan YT (1988) Adaptive time delay with constraint. IEEE Trans Acoust Speech Sig
Process 36(4):599–602
Crassidis JL (2005) Kalman filtering for integrated GPS and inertial navigation. In: AIAA
guidance, and control conference and exhibit, San Francisco: AIAA, 2005–6052
Fu W, Cui Z (2009) Based on improved extended kalman filter static target tracking.
Optoelectronics 36(7):24–27
Gordon N, Salmond DJ, Smith AFM (1993) Novel approach to nonlinear/non-Gaussian Bayesian state estimation. IEE Proc F 140(2):107–113
Julier SJ, Uhlmann JK (2004) Unscented filtering and nonlinear estimation. Proc IEEE
92(3):401–422
Kalman RE (1960) A new approach to linear filtering and prediction problems. Trans ASME J Basic Eng 82:35–45
Kastella K (2000) Finite difference methods for nonlinear filtering and automatic target recognition. In: Multitarget-multisensor tracking: applications and advances, vol 3, pp 233–258
Knapp CH, Carter GC (1976) The generalized correlation method for estimation of time delay. IEEE Trans Acoust Speech Sig Process 24(4):320–327
Ma Y, Yang S (2009) Based on the combination of UKF navigation error state estimation.
J Huazhong Univ Sci Technol 37:219–222
Xie J, Wu C, Fu S (2008) Study on passive location and time delay estimation method. Ships
Electron 31(6):26–29
Chapter 133
Research on Design and Analysis Platform
for Modular Spacecraft
Keywords Integrated platform · Modular design · Spacecraft · Virtual prototype technology
133.1 Introduction
With the development of space technology, larger and more complex spacecraft systems are needed for space exploration. Therefore, when a modern, advanced spacecraft is designed, some factors must be considered, such as the size
X. Zeng (&)
School of Computer, Hunan Institute of Science and Technology, Yueyang, China
e-mail: [email protected]
Z. He
School of Construction Machinery, Chang’an University, Xi’an, China
e-mail: [email protected]
H. Luo
School of Mechanical Engineering, Northwestern Polytechnical
University, Xi’an, China
and complexity. Since the modular spacecraft structure design concept was put forward at NASA's Goddard Space Flight Center in the 1970s (Bartlett 1978), modular spacecraft design has seen the Modular, Adaptive, Reconfigurable System (MARS) (Jaime 2005), modular spacecraft design concepts for on-orbit deployment based on MARS (Sugawara 2008), and concepts and technology for on-orbit servicing (Rodgers and Miller 2005). It can be seen that more and more attention has been paid to the modular spacecraft design concept. Recently, there have been mainly two kinds of modular spacecraft for on-orbit deployment: Hexpak (Hicks et al. 2005, 2006) and the Panel Extension Satellite (PETSAT) (Higashi et al. 2006). References (Hicks et al. 2005, 2006; Higashi et al. 2006; Larry and Rolland 2013; Edward 2013; Deborah and Grau 2013; Jon et al. 2002; Murata et al. 2002) elaborate much research on spacecraft structures for on-orbit deployment and many kinds of mechanical interfaces. However, related dynamics analyses of such mechanism configurations are rarely reported.
In this paper, based on the idea of modular design, a spacecraft module configuration for on-orbit deployment, assembled with unified hinge mechanisms, is designed, which can deploy different modules according to task demands. With the assistance of virtual prototype technology, the spacecraft attitudes influenced by the deployment sequence of the solar panels and the modular spacecraft mechanism are simulated and studied in the weightless state of space. On this basis, a design and analysis platform for modular spacecraft is developed with J2EE technology in the Eclipse environment. The configuration design process, the model and data association demands in assembly and simulation, and the integration requirements of a variety of design and simulation software are considered in this platform.
Modular design is usually directed at the same series of spacecraft, and each module's geometric dimensions are relatively invariant. To increase the flexibility of the structure design, each module is driven by parameters, improving design and assembly speed to realize rapid response.
The designed spacecraft consists of two parts, the spacecraft body and the solar panels, as shown in Fig. 133.1. The spacecraft body is made up of five similar modules, which can be laid out flexibly according to the functional requirements. Each module can be installed with different equipment and instruments according to the task requirements, as shown in Fig. 133.2. There are positioning pins in the modules, which are connected with hinges and driven by motors. The positioning pins are used to
locate each module in the process of deployment. The solar panels are also connected with hinges and driven by torsion springs. Each solar panel is made up of four rigid boards whose geometric dimensions are 1750 × 1500 × 30 mm.
The solar panels are folded before release, parallel to each other and fixed on the spacecraft body. When released on orbit, the solar panels mounted on both sides deploy simultaneously, driven by the torsion springs. Table 133.1 shows the mass properties of the spacecraft modules and solar panels.
In the analysis, the spacecraft body's coordinate system is fixed to module 1. The X direction is parallel to each module when deployed, the Y direction is parallel to the solar panels, and the Z direction is perpendicular to the spacecraft body and solar panels.
In the simulation, the power output of the motors can be applied in two ways: as a torque or as a rotational speed. In this paper, a constant speed is applied to simulate the driving force of the motors. The solar panels are connected with hinges and driven by torsion springs. The torque of a torsion spring is calculated as (Bai et al. 2009):

T = T0 − K·θ      (133.1)

where T0 is the initial (pre-tightening) torque of the torsion spring, K is the stiffness of the torsion spring, and θ is the deployment angle of the solar panel.
Closed Cable Loops (CCL) are currently the most common synchronous deployment control mechanism (Wang et al. 2000; An et al. 2009), made up of grommets fixed to the hinges, grommet guides and a soft cable. It is a synchronous transmission device that makes the inside and outside solar panels deploy at the same time. The basic principle is shown in Fig. 133.3, where L is the distance between two grommets and r is the radius of a grommet.
When the unfolding angles of adjacent solar panels are equal, the mechanism does not act. But when the solar panels do not move synchronously, the angles of the two grommets differ, which makes the upside edge of the cable tight and the downside edge loose. The tight edge is stretched and produces the force F:

F = K′·r·Δθ      (133.3)

where T′ is the controlling torque provided by the CCL, K′ is the equivalent torsional stiffness of the CCL, and θi and θj are the unfolding angles of the two adjacent solar panels (Δθ = θi − θj).
The deployment process is simulated with the analysis software Adams. The spacecraft body and solar panels are regarded as rigid bodies. The solar panels are connected to the spacecraft body by hinges, and likewise to each other. In the dynamics simulation, the motors drive all the modules to deploy at speeds set to 30 rad/s. Since all the solar panels are driven by torsion springs, when the inside solar panel has deployed to 90°, the deployment angle of the outside solar panel reaches 180°; it therefore follows from formula (133.1) that the torque of the outside solar panel must be twice that of the inside one. The stiffness of the torsion springs is set to 0.1 N·m/(°) and damping is ignored, so the pre-tightening torques of the inside and outside solar panels are 9 and 18 N·m respectively. The solar panels' synchronous deployment is controlled by the connection joint in Adams; the transmission ratio of the connection joint is set to 1:2 and that of the adjacent rotation joint to 1:1.
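The factor-of-two relationship and the spring sizing can be checked directly from (133.1) with the stated stiffness (0.1 N·m/°) and pre-tightening torques (9 and 18 N·m). This small sketch is only a numerical check, not part of the Adams model.

```python
K = 0.1                              # torsion-spring stiffness, N*m per degree
T0_INSIDE, T0_OUTSIDE = 9.0, 18.0    # pre-tightening torques, N*m

def spring_torque(t0, theta_deg):
    """Torsion-spring torque T = T0 - K*theta, formula (133.1)."""
    return t0 - K * theta_deg

# The CCL 1:2 ratio makes the outside panel rotate twice as fast as the inside
# one, so at every instant its spring torque is twice the inside torque, and
# both springs reach zero torque exactly at full deployment (90 deg / 180 deg).
ratios = [spring_torque(T0_OUTSIDE, 2 * a) / spring_torque(T0_INSIDE, a)
          for a in (0, 30, 60)]
end_torques = (spring_torque(T0_INSIDE, 90), spring_torque(T0_OUTSIDE, 180))
```

The zero residual torque at full deployment is consistent with the damping-free sizing described above.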
The virtual prototype of the designed spacecraft is shown in Figs. 133.1 (the initial state) and 133.4 (the final state of deployment). There are three deployment sequences for the spacecraft: spacecraft body and solar panels deploy at the same time (sequence 1); the solar panels deploy before the spacecraft body (sequence 2); the spacecraft body deploys before the solar panels (sequence 3). All three deployment sequences are simulated. The attitude angles affected by the deployment sequences are shown in Figs. 133.5 and 133.6.
As shown in Fig. 133.5, the attitude angles in the X direction are very small; the largest, 2.15°, occurs for sequence 3. Sequence 1 takes 9.8544 s, less than the others. The attitude angles in the Z direction are very similar.
From Fig. 133.6, it can be seen that the attitude angles in the Y direction are very large, no matter what the sequence is. The deployment of the solar panels has almost no influence on the attitude angle change.
Fig. 133.7 Flexible and multi-level design and analysis flow of modular spacecraft
Fig. 133.8 The sketch map of simulation data of spacecraft deployment on-orbit
completed by specific analysis software. At the same time, client programs can also control the states of analysis tasks.
The design and analysis flow is very flexible: its operation process can be changed according to the researchers' judgments, the analysis flow can be rolled back to a specified analysis step, and existing design models and analysis results can be reused.
The design models and analysis data include the initial CAD models, experiment data, simulation models and simulation results, etc. All of the models and result data are associated in the platform, so operators can view and obtain them quickly.
(1) Automatic association of flow tasks’ models and data.
133.4 Conclusion
Acknowledgments This work is supported by the Scientific Research Fund of Hunan Provincial
Education Department under the grant No. 10C0756.
References
Bai Z, Tian H, Zhao Y (2009) Dynamics simulation of deployment of solar panels in different
layouts based on ADAMS. J Syst Simul 21(13):3976–3977
Bartlett RO (1978) NASA standard multi-mission modular spacecraft for future space
exploration. In: American astronautical society and deutsche gesellschaft fuer luft-and
raumfahrt, goddard memorial symposium 16th, Washington, DC, AAS, pp 8–10
Deborah MW, Grau J (2013) Modular spacecraft standards: supporting low-cost, responsive
space. In: AIAA, 2004–6098
Edward F Jr (2013) Multi-mission modular spacecraft (MMS). AIAA-88-3513
Hicks M, Enoch M, Capots L (2005) In: AIAA 3rd responsive space conference
Hicks M, Enoch M, Capots L (2006) In: AIAA 4th responsive space conference, Los Angeles. Paper No: RS4-2006-3006
Higashi K, Nakasuka S, Sugawara Y (2006) In: 25th international symposium on space
technology and science. Paper No: 2006-j-02
Jaime E (2005) AIP Conf Proc 746:1033–1043
Jon M, Jim G, David G (2002) Space frame: modular spacecraft building blocks for plug and play
spacecraft. In: 16th Annual/USU conference on small satellites
Larry M, Rolland S (2013) Options for enhanced multi-mission modular spacecraft (MMS)
maneuver capability. AIAA-80-1292
Rodgers L, Miller D (2005) Synchronized Position Hold, Engage, Reorient, Experimental
Satellites (SPHERES), working paper. Massachusetts Institute of Technology, Cambridge
Murata S, Yoshida E et al (2002) M-TRAN: self-reconfigurable modular robotic system. IEEE/ASME Trans Mechatron
Sugawara Y, Sahara H, Nakasuka S (2008) Acta Astronaut 63:228–237
Wang T, Kong X, Wang B (2000) The research on principle and function of closed loop
configuration of solar arrays. J Astronaut 21(3):29–38
Yuan AN, Song GU, Guang JIN (2009) Analysis and simulation of deployment motion of
satellite solar array. Chinese J Opt Appl Opt 1(2):29–35
Chapter 134
Research on H-Point of Driver Based
on Human Physical Dimensions
of Chinese People
134.1 Introduction
In order to reduce the degree of driving and riding fatigue when arranging the interior, the requirements for comfortable human posture must be met in the design; this is the basis of the layout of the human body and of seat design. Direct forward visibility and traffic-light vision were also taken into account in the design process, so that the H-point design not only meets the comfort requirements but also ensures good vision.
134.2 Methodology
134.2.1 Comfortableness
Studies have shown that comfort is essentially a subjective feeling experienced while maintaining a specific posture, in which the joint angles have a very important influence on the subjective feeling of comfort. There are, of course, other factors that may affect the feeling of comfort, such as the contact pressure distribution between the person and the seat (Bubb and Estermann 2000). The comfort and fatigue degree during driving and riding are related to the posture, which is determined by the joint angles in the design. Therefore, various manikins can be positioned according to comfortable joint angles so that their vision, reach, comfort, etc. can be evaluated (Ren et al. 2006). The adjustment ranges of a driver's major joint angles in a comfortable posture are shown in Fig. 134.1 and Table 134.1.
The range of motion of each joint angle can be divided with the preferred-angle editor in the CATIA manikin posture analysis module, after which the current posture can be assessed globally and locally by the system. After entering the module and clicking the "Edit preferred angle" button, the angles corresponding to a2–a7 in Table 134.1 are located on the manikin. The range of motion of each angle is divided into two regions: one inside the comfort range, marked in green, and one outside it, marked in orange. Studies of the comfortable driver sitting angle focus mainly on the side-view plane, and the comfortable angles in other views have received little study. Therefore, to prevent angle changes in other views caused by manikin posture changes, the freedom of each joint in the other directions was locked using the Lock function. Once locked, a part cannot perform any operation in the locked direction and its angle value does not change with the other joints, ensuring that only the angles in Table 134.1 are affected during adjustment. Because a1 is related to the seat's backrest angle rather than to a joint in the body, it cannot be constrained by the above method; instead, the rotation range of the kinematic pair at this hinge is defined as 20°–30°, so that the backrest angle can only be adjusted within the comfort range.
After defining the comfort ranges of the different angles, open the manikin posture assessment and analysis dialog box in the manikin posture analysis module. The system provides two display modes for posture analysis: a list type, shown in Fig. 134.2, and a chart type, shown in Fig. 134.3. Values in the Angle item of the list indicate the angle values of parts and positions under a certain degree of freedom. The Result item, expressed as a percentage, indicates the comfort degree of the posture: the higher the score, the more comfortable it is. The Score item gives the score when the angle lies in the preferred area. In the chart, the color of each part matches the color set for its preferred angle region; when the angle lies in a different region, the color bar of the part is displayed in a different color. If no color is set for the preferred angle region, the moving parts have no corresponding colors.
H is the hinge point of the torso and thigh, i.e. the hip point. In determining the man-machine interface geometric relationships of the auto body, this point often serves as the positioning basis of the body. The actual H-point is the midpoint of the line connecting the left and right H-point marks on the manikin when the 3D H-point manikin is placed in the car seat according to the specified steps. It indicates the position of the hip joint in the car after the driver or occupant is seated (SAE 1998, 2002).
Fig. 134.2 Dialog box of assessment and analysis for manikin posture (List type)
The CATIA assembly module was used to import the human sitting-posture model and the parameterized dashboard-vision design model into the same environment (CATIA Object Manager 2000; CATIA 2000), as shown in Fig. 134.4.
After clicking the "open horizons window" button in the human modeling module in Ergonomics, the system pops up a vision window in which the images are within the manikin's sight. Then, entering the posture analysis module, open the "manikin posture assessment and analysis" dialog box and select the chart
1278 L. Sun et al.
Fig. 134.3 Dialog box of assessment and analysis for manikin posture (Chart type)
pattern; finally, enter the electronic prototype module and open the motion simulation dialog box, through whose three driving commands the posture of the manikin can be controlled. The size to be adjusted is constrained by using the vision window and the manikin posture assessment window. When the color bars in the manikin posture assessment window all turn green and the images in the vision window meet the requirements of the dashboard vision and the front vision, the value of the slider adjusted by the first two driving commands gives the H-point coordinates. The entire adjusting interface is shown in Fig. 134.5.
The gender, percentile and nationality of the manikin can be modified in its Properties, while the position of the model and the related settings remain unchanged, so manikins of different percentiles can be studied easily and quickly in the same file to obtain the H-point coordinates of each percentile.
For the American manikin, the same method can be applied to find the H-point ranges of each percentile meeting the vision and comfort requirements, as shown in Fig. 134.7. The fitted H-point curve equations (134.2) are:
Z95th,1 = −0.003887X² + 5.167X − 1392
Z95th,2 = −0.01078X² + 16.59X − 1608
Z90th,1 = −0.003547X² + 4.535X − 1120
Z90th,2 = −0.0125X² + 19.27X − 7134
Z50th,1 = 0.0001384X² − 0.7642X + 881.9
Z50th,2 = −0.03041X² + 46.31X − 17330
Z10th,1 = 0.0108X² − 12.53X + 3879 (X < 638);  0.003364X² − 5.297X + 2292 (X > 638)
Z10th,2 = −0.09494X² + 136.9X − 49050
Z5th,1 = 0.05769X² − 71.67X + 22580 (X < 627);  0.05522X² − 72.69X + 24190 (X > 627)
Z5th,2 = −0.02075X² + 24.39X − 6749
(134.2)
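As a hedged sketch, the piecewise 10th- and 5th-percentile curves of (134.2) can be coded and checked for continuity at their breakpoints; note that the coefficient signs here were reconstructed from a garbled source and should be verified against Fig. 134.7.

```python
import numpy as np

# Piecewise 10th- and 5th-percentile boundary curves from (134.2).
# Coefficient signs are an assumption reconstructed from a garbled extraction.
def z10th1(x):
    x = np.asarray(x, dtype=float)
    return np.where(x < 638,
                    0.0108 * x**2 - 12.53 * x + 3879,
                    0.003364 * x**2 - 5.297 * x + 2292)

def z5th1(x):
    x = np.asarray(x, dtype=float)
    return np.where(x < 627,
                    0.05769 * x**2 - 71.67 * x + 22580,
                    0.05522 * x**2 - 72.69 * x + 24190)

# Plausibility check: the two pieces of each curve should nearly agree at
# their breakpoints if the signs are right.
gap10 = abs(float(z10th1(637.99)) - float(z10th1(638.01)))
gap5 = abs(float(z5th1(626.99)) - float(z5th1(627.01)))
```

The small breakpoint gaps (well under the millimetre scale of the fitted data) support the reconstructed signs, but the original figures remain the authority.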
Comparison of Figs. 134.6 and 134.7 shows that the comfortable H-point curves of the Chinese and American populations differ considerably. Thus, if population differences are ignored and only a single standard is used in the R&D process, defects are bound to occur, adversely affecting product quality.
134.3 Conclusion
Combining the requirements of vision and comfort, the H-point range of the driver has been researched, and the boundary curves of the H-point ranges of the 5th, 10th, 50th, 90th and 95th percentile drivers have been obtained. When evaluating comfort, the joint angle was divided only into comfortable and uncomfortable ranges; within the uncomfortable range the same score is assigned regardless of proximity to the comfort zone, which limits the flexibility of the scoring system. The relationship between the joint angle and comfort should therefore be refined further.
References
Bubb H, Estermann S (2000) Influence of forces on comfort feeling in vehicles. SAE Paper, 2000-
01-2171
CATIA V5 (2000) Knowledge advisor user’s guide, Dassault Systems
CATIA Object Manager (2000) Interactive user access reference manual, Dassault Systemes
Hockenberry J (1979) Comfort, the absence of discomfort. CP. News. Human Factors Society,
April 1979
Huang J, Long L, Ge A (2000) Optimization on H point in car body packaging. Automot Eng
22(6):368–372 (in Chinese)
Ren J, Fan Z, Huang J (2006) An overview on digital human model technique and its application
to ergonomic design of vehicles. Automot Eng 28(7):647–651 (in Chinese)
SAE (1988) Recommended Practice J826. Devices for use in defining and measuring vehicle
seating accommodation, May 1988
SAE (1998) SAE J1516-1998, Accommodation Tool Reference Point
SAE (2002) SAE J826-2002, H-point machine and design tool procedures and specifications
Sundin A, Örtengren R, Sjöberg H (1966) Proactive human factors engineering analysis in space
station design using the computer Manikin Jack, SAE
Chapter 135
Research on Modeling Framework
of Product Service System Based
on Model Driven Architecture
Abstract Product Service System (PSS) has attracted much attention in recent years because it provides new methods for combining product manufacturing with service. Model building for PSS has become a basic question of related research. This article proposes a modeling framework able to characterize the elements and structures of PSS. First, current research achievements on PSS modeling are analyzed. Second, a PSS spatial structure and an application model oriented to the whole life cycle are proposed. Third, a 4-layer modeling framework of PSS is put forward and the meta-model of PSS is defined. Finally, PSS single-view modeling is discussed, as is the application of this MDA-based PSS modeling framework.
135.1 Introduction
X. Zhao
College of Management and Economics, Tianjin University, Tianjin, China
X. Cai (&)
Weichai Power Co., Ltd, Weifang, China
e-mail: [email protected]
[Figure: PSS spatial structure, spanning infrastructure, process control, resource
allocation, and information layers, with services such as network, platform,
logistics, consulting, financial, and law services.]
Figure 135.3 shows a 4-layer PSS modeling framework based on MDA, whose layers
are the meta-meta-model, meta-model, model, and data layers. In this framework,
the meta-model layer defines the semantic relationships of the various elements,
and the model layer is an instantiation of PSS. Due to the complexity of the PSS
system structure, it is difficult to build a model from one single dimension.
Therefore, this paper proposes a PSS system model with single-view models, based
on the meta-model, in the model layer from different dimensions such as
organization, process, product/service, communication, control, knowledge, and
quality. All views are unified through the intrinsic correlation of the
meta-model. The data layer mainly contains run-time information and data for PSS
to describe its running status.
Shown in Fig. 135.4 is the meta-model of PSS, which defines the basic elements of
PSS, including: (1) terminals, meaning the "products + services" combinations
delivered to customers; (2) interface, meaning the interaction interface between the
system and customer; (3) connector, which manages connections such as
product-product, service-service, product-service, and product/service-platform
connections; (4) platform, the main part of the PSS, including process,
participants, resources, business model, production organization, service logic
and application, etc. The meta-model also defines the inherent relationships
among the various elements, including two basic relationships: affecting and containing.
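The four meta-model elements and the two basic relationships can be sketched in a few lines of illustrative code; the class and attribute names below are assumptions made for this sketch, not identifiers from the paper:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Element:
    """One element of the PSS meta-model."""
    name: str
    affects: List["Element"] = field(default_factory=list)   # the "affecting" relationship
    contains: List["Element"] = field(default_factory=list)  # the "containing" relationship

# The four basic element kinds defined by the meta-model.
terminal = Element("terminal")    # "products + services" combination delivered to customers
interface = Element("interface")  # interaction interface between system and customer
connector = Element("connector")  # manages product/service/platform connections
platform = Element("platform")    # process, participants, resources, business model, ...

# Example wiring: the platform contains the connector, which affects the terminal.
platform.contains.append(connector)
connector.affects.append(terminal)
```

Single-view models (organization, process, quality, etc.) would then be built over such element instances and kept consistent through the shared meta-model.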
Fig. 135.5 The application of the PSS modeling framework based on MDA
The PSS modeling framework proposed in this paper aims to provide a methodology
for PSS modeling in PSS structure analysis and design.
In the process of PSS analysis and design, as shown in Fig. 135.5, the PSS
modeling framework can be applied to building visual models, acquiring semantic
information of the system structure, analyzing status data of system elements,
and other activities. First, reuse a standard meta-model or customize a new one.
Second, quote or refer to the reference models to proceed with single-view model
design. Finally, after instantiation, a PSS model is obtained.
135.5 Conclusion
model, put forward a 4-layer modeling framework, defined its meta-model, and
discussed the PSS single-view modeling methodology. This framework is able to
describe the complex relationships among various PSS elements and to support PSS
analysis and design. On this basis, the customization and change of the
meta-model should be further discussed to support PSS modeling better.
References
136.1 Introduction
J. Xu C. Bi (&)
School of Economics and Management, Beihang University, Beijing, China
e-mail: [email protected]
J. Xu
Shanghai Aircraft Customer Service Co. Ltd., Shanghai, China
data; in order to better analyze the working principle and the system functions
of this large and complex system, this article adopts a system dynamics (SD)
model to research the problem. The article constructs an SD model of the
customer service system of COMAC and, through simulation of the system, analyzes
the key factors that affect the running of the customer service system, so as to
provide a basis for decision-making and measures.
[Figure: composition of the COMAC customer service system: the external
environment and the support, management, technique-evaluation, and logistics
subsystems linking COMAC and the airlines.]
136 Research on the Civil Aircraft Customer Service System Simulation 1293
This paper uses a stock-and-flow diagram to study the system dynamics simulation
of the COMAC customer service system; the flow diagram is determined on the
basis of the analysis of the causality diagram of the COMAC service system.
In Fig. 136.3, the customer response time stands for the service ability level
of the COMAC customer service system, the information level stands for the
customer service center's investment in logistics and information level
construction, and the supplier delivery time stands for the supplier
availability (Hui and Jha 1999; Jenkins 1999). The SD simulation experiment is
carried out on the basis of these hypotheses. The specific variables and
equation set of this paper are as follows.
In the model there are two state variables: the GDP value and the profit of the
customer service center. The two state equations are as follows.
• GDP value = GDP initial value + GDP increment
• customer service center profit = profit initial value + profit increment
A state variable is connected with its initial value and growth rate; the growth
rate is described by a rate equation.
In the model there are two flow rate variables: the GDP increment and the profit
increment. The two rate equations are as follows.
• GDP increment = GDP value * GDP growth rate
• profit increment = airline service demand * average profit of a single service
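Taken together, the state and rate equations above define a simple simulation loop. The sketch below steps them forward with annual increments; the traffic_share and profit_per_service parameters are placeholder assumptions, since the regression formula and per-service profit figures are not reproduced here:

```python
def simulate(years=5, gdp0=39.0, profit0=0.0,
             gdp_growth=0.10,          # GDP growth ratio (10 %)
             traffic_share=0.005,      # assumed: traffic demand as a share of GDP
             service_per_traffic=0.3,  # service demand per unit of traffic demand
             profit_per_service=0.02): # assumed: average profit of a single service
    """Euler-step the two stocks: GDP value and customer service center profit."""
    gdp, profit = gdp0, profit0
    for _ in range(years):
        traffic_demand = traffic_share * gdp            # placeholder for the regression
        service_demand = service_per_traffic * traffic_demand
        profit += service_demand * profit_per_service   # profit increment
        gdp += gdp * gdp_growth                         # GDP increment
    return gdp, profit

gdp5, profit5 = simulate()
```

With the 2010 initial GDP of 39 trillion RMB and 10 % growth, the GDP stock compounds over the 5-year horizon while the profit stock accumulates from the service demand.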
Auxiliary equations allow the rate equations to be expressed concisely. In this
paper the auxiliary equations are as follows.
• The civil aircraft traffic demand
In this paper the civil aircraft traffic demand is obtained by regression
analysis based on the proportion of annual air transportation in the GDP value.
According to the data of the China Statistical Yearbook from 2000 to 2010, we
obtained the regression formula by simple linear regression.
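The yearbook figures and the resulting regression formula are not reproduced in this excerpt, so the sketch below only illustrates the least-squares computation itself, on made-up (GDP, traffic demand) pairs:

```python
# Illustrative data only: GDP (trillion RMB) vs. air traffic demand, NOT yearbook values.
gdp = [10.0, 15.0, 20.0, 27.0, 39.0]
traffic = [0.06, 0.09, 0.12, 0.16, 0.23]

# Ordinary least squares for a simple linear regression: traffic = slope * gdp + intercept.
n = len(gdp)
mean_g = sum(gdp) / n
mean_t = sum(traffic) / n
slope = (sum((g - mean_g) * (t - mean_t) for g, t in zip(gdp, traffic))
         / sum((g - mean_g) ** 2 for g in gdp))
intercept = mean_t - slope * mean_g

def predict(g):
    """Predicted traffic demand for a given GDP value."""
    return slope * g + intercept
```

Applying the same computation to the 2000-2010 yearbook series yields the regression formula used in the model.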
Suppose the relationship between the plane traffic demand and the operating
profit is linear; the operating profit ratio refers to that of BOEING and is set
to 8 %. The profit of COMAC is obtained by the following formula:
The profit of COMAC = the civil aircraft traffic demand * the operating profit ratio.
• Airline service demand
Suppose the relationship between the airplane traffic demand and the service
demand is linear, and suppose the service demand per unit of traffic demand is 0.3.
• The investment of COMAC for the customer service center
This investment = the investment of COMAC for the customer service center +
profit of the customer service center * the investment ratio for customer
service ability construction. Suppose the ratio is 30 %.
• GDP initial value
This paper chose the 2010 value as the initial value; it is about 39 trillion RMB.
• The initial value of customer service center profit
This paper chose the 2010 value as the initial value. In that year the profit is
-790 and the income is about 95.33 million RMB.
• GDP growth ratio
This paper chose the average growth ratio between 1980 and 2010 as the GDP
growth ratio; by calculation it is determined to be 10 %.
• Average profit ratio of single service
Based on the above assumptions, the simulation of the model is carried out with
the customer response time as the target variable, to examine how the response
time changes under different conditions. Result 1 is obtained under the original
hypotheses. Result 2 is obtained when the ratio of investment for information
construction is raised to 40 % and the other conditions remain unchanged. Result
3 is obtained when the ratio of investment for inventory construction is raised
to 60 % and the other conditions remain unchanged. Result 4 is obtained when the
supplier average unit delivery time is changed to 2.5 and the other conditions
remain unchanged. Result 5 is obtained when the investment ratio of COMAC is
changed to 40 % and the other conditions remain unchanged. Result 6 is obtained
when the ratio of investment for inventory construction is raised to 0.5 and the
other conditions remain unchanged. The time span chosen for the research is
5 years, and the results are as follows.
We can see from Fig. 136.4 that the best results are results 1 and 2, the next
best are results 5 and 6, and the curves of results 3 and 4 both show a
transitory decline at first and then rise again. From this we can draw the
following inferences.
• Result 2 shows that the information construction level contributes greatly to
the customer response time, so the service center should strengthen this
construction. Here the information network means not merely the information
network system but also the logistics network system. If the COMAC customer
service center has its own logistics team, it can shorten the customer response
time and raise customer satisfaction.
• Results 5 and 6 show that at present the construction of basic ability should
be strengthened, but once the basic ability meets the demand, further investment
will not bring correspondingly more contribution.
• Result 3 shows that when the center invests too much in inventory
construction, the return may not grow correspondingly and may even worsen. This
is because excessive investment in inventory produces excessive spare-parts
inventory and high inventory management pressure, which may negatively influence
the operation of the center. This accords with the principle that more inventory
is not always better.
• Result 4 shows that the supplier average unit delivery time negatively
influences the customer service ability, so in the development of the customer
service center more attention should be paid to the choice of suppliers.
136.5 Conclusion
References
Angerhofer BJ, Angelides MC (2000) System dynamics modeling in supply chain management:
research review. In: Proceedings of the 2000 winter simulation conference
China Statistical Yearbook (1980–2010) China statistical yearbook. Beijing (in Chinese)
Gao J, Lee JD, Zhang Y (2006) A dynamic model of interaction between reliance on automation
and cooperation in multi-operator multi-automation situations. Ind Ergon 36:511–526
Hui SC, Jha G (1999) Data mining for customer service support. Inf Manag 38:1–13
Jenkins D (1999) Customer relationship management and the data warehouse. Call Center
Solutions, Norwalk, pp 10–22
Kim SW (2003) An investigation of information technology investments on buyer–supplier
relationship and supply chain dynamics. Michigan State University, Michigan
Lyneis JM (2000) System dynamics for market forecasting and structural analysis. Syst Dyn Rev
2:68–77
Ovalle OR, Marquez AC (2003) The effectiveness of using e-collaboration tools in the supply
chain: an assessment study with system dynamics. J Purchasing Supply Manag 9:151–163
Tan YY, Wang X (2010) An early warning system of water shortage in basins based on SD
model. Proc Environ Sci 2:399–406 (in Chinese)
Wang Q-f (1994) System dynamics. Tsinghua University Press, Beijing, pp 1–25 (in Chinese)
Chapter 137
Research on the Modeling Method
of Wargaming for Equipment Support
on Computer
137.1 Introduction
Computer-based wargaming for equipment support (Peter 1990) is the application
of wargame principles to equipment support: the commander uses a wargame map and
units representing the real battlefield and forces, or uses a computer
simulation model (Yang 2007; Peng et al. 2008), following the rules and the
principles of probability theory, to command equipment support activities in war
in order to verify and improve the equipment support plan.
Equipment support activity is a complex system, and how to build the model for
wargaming is a core problem in realizing the functions of computer-based
wargaming. The scientific soundness of the model determines the quality of the
wargaming flow and results. In this text, a modeling method based on the period
of the chessman's life is first put forward, establishing the model system for
wargaming on the basis of analyzing the states of the chessman's life.
The function of the chessman is to represent different classes of forces and
weapons; through the chessman, the commander using the wargaming system can
query the ability parameters, which are evaluated from the level of the forces'
training or the capability of the unit's equipment (James 1997). The chessman's
parameters in wargaming for equipment support are composed of the values of
movement, defense, support, and attack, together with information such as the
support object and the unit's code, as shown in Fig. 137.1.
The chessman is an information carrier that shows the military situation and an
assignment carrier that implements actions in the computer-based wargaming
system (Liu et al. 2008). The forms and movements of the chessman are the basic
functions through which the system works.
Everything has a process from production to perishing. The chessman's process of
production, change, and perishing illuminates the flow of the wargaming, and the
chessman's movement is the carrier of the wargaming (Ross 2006).
Firstly, the chessman's entity is made by experts in military affairs by
generalizing the information and attributes of the equipment support forces and
combat forces. When the wargaming begins, the chessman works according to the
purposes of the commanders, who operate the command platform on the computer,
and under the trigger conditions. The movements of the chessman include
mobility, deployment, maintenance, regress, and so on. At the same time, the
chessman's status messages change along with its movements and are displayed on
the situation display platform for the commanders who want to know the situation
in real time. In the course of the wargaming, if the chessman is exposed to a
firepower strike from the enemy force, the chessman is damaged or perishes, as
shown in Fig. 137.2.
[Fig. 137.2: the life of a chessman (here a battalion support unit):
maneuverability, deployment, maintenance, return, damage, transformation, and
perishing.]
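The life cycle in Fig. 137.2 can be read as a small state machine. The sketch below encodes it with illustrative state and event names; they paraphrase the figure and are not identifiers from any real wargaming system:

```python
# (state, event) -> next state; unknown events leave the state unchanged.
TRANSITIONS = {
    ("produced", "dispatch"): "maneuvering",
    ("maneuvering", "arrive"): "deployed",
    ("deployed", "support_order"): "maintaining",
    ("maintaining", "task_done"): "returning",
    ("returning", "arrive"): "deployed",
    ("maneuvering", "hit"): "damaged",   # firepower strike from the enemy
    ("deployed", "hit"): "damaged",
    ("damaged", "repair"): "deployed",   # transformation back into service
    ("damaged", "hit"): "perished",
}

def step(state, event):
    """Advance the chessman's life-cycle state by one event."""
    return TRANSITIONS.get((state, event), state)

# One possible life: dispatch, deploy, maintain, return, then two enemy hits.
state = "produced"
for event in ["dispatch", "arrive", "support_order", "task_done", "arrive", "hit", "hit"]:
    state = step(state, event)
```

Each model described in the next section attaches to one of these transitions (maneuverability to dispatch/arrive, maintenance to support orders, the interactive formulae to hits).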
The wargaming activity for equipment support is a complex system that includes
the support entities, the combat entities, the interactive relationships between
entities, and the correlations among entities. So an effective modeling method
is needed to describe the entities, the operating modes of the system, the
complicated battlefield environment, the relationships between the entities, and
the adjudication formulae.
The modeling method based on the period of the chessman's life is put forward on
the basis of analyzing the states of the chessman's life, so as to establish the
model system for wargaming. In the modeling process, the chessman is the main
line. The relevant models are established by analyzing the chessman's life and
the variation of the chessman's state in different phases, and a corresponding
model architecture is developed on this basis to describe the wargaming from the
perspective of the chessman's life. The entity model comes with the production
of the chessman; it describes the static and dynamic attributes of the entity
and models the support forces and the combat forces. The structuring model comes
with the trigger conditions of the chessman's business activities; it describes
the subjection and correlation relationships between the chessmen and ascertains
the support relationships so that the chessman's business activities proceed in
the right order. The behavior model, which is the core of the modeling, comes
with the development of the chessman's business activities; it is composed of
the maneuverability model, the deployment model, and so on.
1304 X. Du et al.
The information model and the interactive formulae model come with the changes
of the chessman's state; they describe how the chessman's state is changed and
how messages and information are transferred while the chessman is working. In
addition, the probability model and the terrain environment model are developed
to describe random events and the environment.
• The entity model describes the structure of the entity, the attributes of the
entity, and the correlations of the entities (Xin et al. 2010). A wargaming
entity model is usually composed of the ability parameters, the structure, and
the state of the military force. For example, the support force entity model
includes the information of the support units, such as the abilities to
maneuver, support, and recover, and the object and range of operation, as the
basis for realizing dispatching orders. The combat force entity model is the
object of the support force; it is not a central model relative to the support
force entity model, so it simply describes the equipment information, the combat
comeback parameters, the real-time state, and so on. In addition, the equipment
entity model is developed to describe the equipment information, such as the
maintenance type and the mean time to repair.
• The structuring model describes the subjection and correlation relationships
between the chessmen and builds up the organizational relations of the chessmen,
the support force correlations, and the rights of the wargaming seats (Peng
et al. 2009). The model's function is to establish the relationship between the
commander's orders and the chessman's movements. The model describes the rights
formulae of the wargaming seats to develop the maneuver relationship between the
wargaming seat and the chessman, and defines the trigger conditions to develop
the production and implementation of orders. The subjection and correlation
relationships are developed by defining the relationships and attributes of the
chessmen so as to implement the dispatching orders and the return orders.
• The maneuverability model describes the process of the chessman's movement to
the destination after receiving the relevant order. It quantifies the maneuver
ability of the chessman and synthetically considers the influencing factors of
the environment and the enemy's situation to estimate the chessman's maneuver
along the road. The model describes the attributes of the force, the type of
maneuver, the geographic information, and the formulae for managing random
events. The attributes of the force comprise the entity type, the force level,
the maneuver ability, the real-time state, etc. The type of maneuver comprises
the maneuver mode, the initial speed, the destination coordinates, and the
real-time waypoint information. The function of the geographic information is to
provide the battlefield environment data for the maneuverability model,
including the weather parameters and the landform influencing factors. The
formulae for managing random events describe how the chessman automatically
handles a random event when it happens; for example, when the chessman is
attacked, the formulae may tell the chessman to remain in concealment first,
retaliate, and then wait for the commander's order, instead of continuing to
move along the road.
• The deployment model describes what happens when the chessman's state accords
with the trigger conditions. It is composed of the support force attributes, the
deployment formulae, the deployment time, the information of the operation site, etc.
• The maintenance model, the most important of the behavior models, describes
the maintenance business process (Xu 2008). When the degree of damage of the
equipment and the level of the maintenance unit are accordant, the chessmen that
represent the force begin a maintenance activity on the damaged equipment.
According to the class and amount of damaged equipment, the hours of the
maintenance task are counted; then the value of the maintenance force's ability
is established by integrating the utilization of time and the grade of the
enemy force. The chessmen take turns maintaining the damaged equipment until the
task list is cleared. If all tasks are achieved the maintenance model ends; if
not, the wargaming goes on while the formulae judge the grade of the enemy force
and the availability of the equipment.
• The information model describes the process of information transmission and
exchange in the wargaming activity and the logical relations between the
chessmen and the data. The function of the model is to manage the data
transmitted between the entity models, the structuring model, the behavior
models, and the interactive formulae model on the computer. In the wargaming the
information includes the command message, the feedback message, the state change
message, etc.
• The interactive formulae model describes how the chessman's state changes when
the state accords with the trigger conditions and an interaction effect happens
(Liu et al. 2011). The interactions in wargaming for equipment support mostly
include the value translation between support and combat and the change of the
support force's value under the enemy force. The model's parameters comprise the
correlation type, the trigger formulae, the support value, the combat value, and
the translation coefficient.
• The probability model describes random events on the battlefield. In
traditional manual wargaming, the designer uses a probability number list and
dice to simulate the effect of random events, but in a modern computer-based
wargaming system, random events are simulated by establishing a probability
model through probability functions.
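As a minimal illustration of such a probability function replacing the dice roll (the 0.3 hit probability below is an arbitrary assumption, not a rule from the text):

```python
import random

def strike_damages(hit_probability=0.3, rng=None):
    """Adjudicate one firepower strike: True if the chessman is damaged."""
    rng = rng or random
    return rng.random() < hit_probability

# Sanity check: over many seeded draws the empirical frequency approaches 0.3.
rng = random.Random(42)
hits = sum(strike_damages(0.3, rng) for _ in range(10000))
frequency = hits / 10000
```

The seeded generator makes the adjudication reproducible across runs, which is useful when replaying a wargame session for analysis.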
[Figure: flow of the wargaming: dispatch to produce a support chessman, maneuver
to the destination, and update the chessman's state, with Y/N branches at each
step.]
137.4 A Case
[Fig. 137.4 maps each step of the wargaming flow (maneuver to the destination,
the trigger conditions of support, the support beginning, the chessman's state
update) to the corresponding models (the maneuverability, deployment,
maintenance, interactive formulae, and information models), with data
transferring between them.]
Fig. 137.4 A corresponding relationship between the flow of wargaming and the framework of
model
137.5 Conclusion
The modeling method based on the period of the chessman's life is put forward in
this text; we establish the model framework for wargaming on computer, and a
case is introduced to explain the application of the method. But the method has
limitations: we should survey other modeling methods at home and abroad in order
to design the models for the computer-based wargaming system more amply under
the model framework.
References
James FD (1997) The complete wargames handbook. William Morrow & Company, New York
Liu JH, Xu XD, Xu XH (2008) Study on human-computer interaction platform for computer
wargame. In: Proceedings of Chinese control and decision conference, Chinese IEEE,
pp 2233–2238
Liu X, Long GJ, Chen C (2011) Modeling of entity’s data regulations of wargame. Ordnance Ind
Autom 30(Suppl. 8):35–39
Peng CG, Liu BH, Huang KD (2008) The study of wargames based on HLA. In: Asia simulation
conference-7th international conference on system simulation and science and scientific
computing, Beijing, China
Peng CG, Ju RS, Yang JC, Huang KD (2009) Analysis of technology of modern wargaming.
J Syst Simul 21(Suppl 2):97–100
Peter PP (1990) The art of wargaming. Naval Institute Press, Annapolis
Ross D (2006) Designing a system on system wargame. U.S. Air Force Research Lab,
pp 149–153
Xin T, Wei W, Zhang MJ (2010) Wargame system modeling and CLIPS-based rule description
method. In: International conference on computer application and system modeling, IEEE
Chinese, pp 572–577
Xu XD (2008) Study and realization of human-computer interaction platform for computer
wargame. Northeastern University, Boston
Yang NZ (2007) Wargame, war game, simulation. The Publishing Company of PLA, Beijing
Chapter 138
Risk Sharing Proportion of Cooperation
Between the Banks and Guarantee
Agencies Based on Elman Neural Network
Abstract Considering problems such as the weak practicality that arises when
mathematical models are applied to calculate the risk sharing proportion between
banks and guarantee agencies, this paper puts forward an Elman neural network
model to study the risk sharing proportion between banks and guarantee agencies.
The computing process is as follows: first, select existing samples to train the
network model; then prove the network's availability through tests; finally,
input the actual data to obtain the evaluation results. The results indicate
that the Elman neural network model performs more effectively than the
traditional mathematical model in estimating the risk sharing proportion in
practice.
138.1 Introduction
study the cooperation of banks and guarantee agencies on the risk sharing
proportion with the application of the Elman neural network model (Jia et al.
2012; Liu et al. 2011).
Neural network models have the abilities of self-organization, adaptability, and
self-learning (Dong et al. 2007). Different from mathematical models, a neural
network model is particularly suitable for issues that require considering a
variety of factors and processing imprecise and vague information. A neural
network model is represented by the network topology, node characteristics, and
learning rules instead of a particular logic function (Chen et al. 2005). The
application of this method greatly relieves the modeling difficulty, reduces
modeling time, significantly decreases the interference of human factors, and
effectively reduces the number of auxiliary and intermediate variables, which
makes the hypotheses more reasonable. What is more, provided with ample study
samples, a reasonable network structure, and well-designed training parameters,
a neural network can automatically extract knowledge rules from historical data
and accurately simulate the complex mapping relationships among variables (Wang
et al. 2010; You et al. 2012), so as to overcome the limitations of traditional
mathematical models.
The neural network model has already been widely applied in the assessment and
prediction of various economic indicators and phenomena (Li 2002). The Elman
neural network is a feedback neural network; compared with feed-forward neural
networks, it has advantages such as fast approximation and high calculation
accuracy (Li et al. 2011). In assessing the risk sharing proportion, a
mathematical model computes the extreme point of the income of banks and
guarantee agencies to determine a reasonable proportion, while the Elman neural
network model calculates the optimal fitted value of the risk sharing proportion
within a reasonable error range by fitting the complex relationships between the
variables in the cooperation of banks and guarantee agencies (Li and Su 2010).
The planned process includes the selection of existing samples to train the
network model, the test that proves the network's availability, and finally the
obtaining of the assessment result by inputting actual data (Zhao et al. 2005;
Hou et al. 1999).
This paper employs the segmentation method to form the risk sharing proportion.
According to the data acquired from the 12th national joint session of guarantee
agencies for small and middle-sized enterprises, most domestic banks at present
are not willing to share risks with the guarantee agencies; even when some banks
consider sharing risk, the sharing proportion is generally less than 10 %. But
the experience and data analysis of the cooperation of banks and guarantee
agencies at home and abroad show that banks are likely to share more risk after
assessing the risks of guarantee agencies in the process of cooperation, with
the upper limit of the bank risk sharing proportion configured as 20 %.
Therefore, this paper adopts this upper limit.
Meanwhile, with reference to the present situation of guarantee agencies in
Jiangsu province, the risk sharing proportion is theoretically segmented into
four levels of bank risk sharing: V1 [15 %, 20 %], V2 [10 %, 15 %), V3 [5 %,
10 %), and V4 [0 %, 5 %). V1 stands for the least risky guarantee agencies, for
which banks tend to share the highest level of risk, [15 %, 20 %]. V2 represents
comparably secure guarantee agencies, for which banks are willing to share
[10 %, 15 %) of the risk. V3 stands for generally risky guarantee agencies, for
which banks can share [5 %, 10 %) of the risk. V4 represents the most risky
guarantee agencies, with which banks are usually unwilling to share risk,
meaning banks will share only [0 %, 5 %) of the risk. The specific data are
shown in Table 138.1.
Based on the rating method calculated by the Elman neural network model and the
evaluation value intervals in Table 138.1, the level of a guarantee agency's
evaluation value can be conjectured, which then decides the risk sharing
proportion that banks are willing to share with the guarantee agency.
The process of evaluating the specific risk sharing proportion by the Elman
neural network model is as follows (Cong and Xiang 2001; FECIT Science and
Technology Product Development Center 2005):
(1) Select parameters. Referring to research and experience at home and abroad,
and considering the analysis of the data of guarantee agencies in Jiangsu
province, the capital, asset ratio, guarantee business profitability,
guarantee compensation rate, compensatory loss rate, margin ratio,
re-guarantee proportion, and willingness of cooperation are selected and
standardized as the input parameters of the Elman neural network.
(2) Determine the target output mode. The risk sharing proportion between banks
and guarantee agencies is divided into four levels, with the following
arrays indicating the target values:
1312 J. Liang and Q. Mei
V1: (1, 0, 0, 0)
V2: (0, 1, 0, 0)
V3: (0, 0, 1, 0)
V4: (0, 0, 0, 1)
(3) Input the sample data to train the Elman neural network, ensuring that the
network meets the evaluation requirements of the cooperation of banks and
guarantee agencies.
(4) Input the test samples and evaluate the error of the trained network
according to the Euclidean norm.
(5) Input the evaluation indicators to be calculated by the Elman neural network.
(6) Acquire the bank risk sharing level according to the output vector values:
the largest dimension of the output vector indicates the risk rank the bank
is willing to share. For example, for the output (0.6, 0.3, 0.1, 0.1), the
risk sharing level is V1, namely [15 %, 20 %].
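Step (6) amounts to an argmax over the output vector. A minimal sketch, with the interval labels copied from the segmentation above:

```python
# Level labels and intervals, in output-dimension order.
LEVELS = [("V1", "[15 %, 20 %]"), ("V2", "[10 %, 15 %)"),
          ("V3", "[5 %, 10 %)"), ("V4", "[0 %, 5 %)")]

def risk_level(output):
    """Return the level whose dimension is largest in the network output."""
    i = max(range(len(output)), key=lambda k: output[k])  # argmax
    return LEVELS[i]

level, interval = risk_level((0.6, 0.3, 0.1, 0.1))
# level == "V1", interval == "[15 %, 20 %]"
```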
Before the empirical analysis, the input data and the collected original data
need to be processed into specific patterns to meet the requirements of the
model, so the first task is to unify and standardize the data format (FECIT
Science and Technology Product Development Center 2005; Song and Bai 2010).
(1) Design of input and objective vectors
The Elman neural network's input parameters consist of seven indexes: the
capital, guarantee scale, guarantee business profitability, margin ratio,
asset-liability ratio, compensatory loss, and cooperation aspiration. The
original data differ in order of magnitude; in order to prevent some neurons
from becoming supersaturated, the data should be standardized before being
input into the neural network and then provided to the system for the
corresponding operations. Here the sample data are standardized to [0, 1].
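The exact normalization formula is not given in this excerpt; a common choice consistent with the description is min-max scaling to [0, 1], sketched here on made-up index values:

```python
def standardize(values):
    """Min-max scale a list of raw index values into [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

scaled = standardize([120.0, 200.0, 160.0, 280.0])  # illustrative raw data
# the minimum maps to 0.0, the maximum to 1.0, everything else in between
```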
(2) Using the Elman model to evaluate the risk sharing proportion between banks
and guarantee agencies
One of the most important inputs of the Elman neural network model is the
learning sample. The same data, learned and trained in different ways, will
produce different outcomes, so the choice of study samples directly affects the
output. Table 138.2 gives the input vectors of the twelve groups of study sample
data (standardized sample data); the corresponding output vector is the bank
risk sharing proportion, with the risk allocation proportion set to the four
intervals [0 %, 5 %), [5 %, 10 %), [10 %, 15 %), and [15 %, 20 %].
138 Risk Sharing Proportion of Cooperation 1313
The formal evaluation with the neural network starts after the learning samples
are determined. Four steps are required: network creation, network training,
error inspection and network output. These steps are illustrated in the
following parts.
The first step: network creation.
A three-layer network, comprising an input layer, a hidden layer and an output
layer, is considered a fairly effective solution for general pattern
recognition problems. In such a network, the number of hidden neurons is set to
twice the number of input neurons plus one.
According to this principle, the network is designed as follows: the input
layer has 8 neurons, so the hidden layer has 17 neurons, and the output layer
has 4 neurons.
For convenience of analysis, the following statements can be used to create the
network model. Standardization ensures the input vectors lie in the range
[0, 1]; the hidden neurons use the tansig tangent transfer function and the
output layer uses the logsig logarithmic function, so that the output satisfies
the network's output requirements.
threshold = [0 1; 0 1; 0 1; 0 1; 0 1; 0 1; 0 1; 0 1]
net = newelm(minmax(P), [17, 4], {'tansig', 'logsig'})
Here, threshold defines the minimum and maximum values of the input vectors.
The network parameters are shown in Table 138.3.
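The created 8-17-4 structure can also be sketched outside MATLAB. The following Python forward pass is a hypothetical stand-in for newelm (weights are random and untrained), showing how the tansig hidden layer, the logsig output layer and the Elman context units fit together:

```python
import math
import random

def tansig(x):   # MATLAB's tansig is the hyperbolic tangent
    return math.tanh(x)

def logsig(x):   # MATLAB's logsig is the logistic sigmoid
    return 1.0 / (1.0 + math.exp(-x))

class ElmanSketch:
    """Minimal forward pass of an 8-17-4 Elman network (no training)."""
    def __init__(self, n_in=8, n_hid=17, n_out=4, seed=0):
        rng = random.Random(seed)
        w = lambda r, c: [[rng.uniform(-0.5, 0.5) for _ in range(c)]
                          for _ in range(r)]
        self.w_in = w(n_hid, n_in)     # input -> hidden weights
        self.w_ctx = w(n_hid, n_hid)   # context -> hidden (recurrent) weights
        self.w_out = w(n_out, n_hid)   # hidden -> output weights
        self.context = [0.0] * n_hid   # context units start at zero

    def forward(self, x):
        hidden = [tansig(sum(wi * xi for wi, xi in zip(row, x)) +
                         sum(wc * c for wc, c in zip(crow, self.context)))
                  for row, crow in zip(self.w_in, self.w_ctx)]
        self.context = hidden          # copy hidden state into context layer
        return [logsig(sum(wo * h for wo, h in zip(row, hidden)))
                for row in self.w_out]

net = ElmanSketch()
y = net.forward([0.3] * 8)             # one standardized 8-dimensional sample
```

The logsig output layer keeps every component in (0, 1), matching the target vectors V1 to V4.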
The second step: network training.
The traingdx function is called through the following code.
net.trainParam.epochs = 500
net.trainParam.goal = 0.01
net = train(net, P, T)
P and T are the input and target vectors, respectively. Fig. 138.1 shows that
convergence speeds up as training proceeds; the network did not meet the
requirements until the 181st epoch.
The third step: the error inspection.
The test checks whether the network can meet the requirements of the
evaluation. Four groups of new data are selected as test data, as shown in
Table 138.4.
Test result:
Y1 = (0.0000, 0.0977, 0.4264, 0.0031)
Y2 = (0.9994, 0.0054, 0.1293, 0.0000)
Y3 = (0.0000, 0.2146, 0.0002, 0.9992)
Y4 = (0.0360, 0.0072, 0.9991, 0.0000)
Since the level is determined by the largest element of the output vector in
the Elman neural network analysis, it is suitable to use the ∞-norm of the
evaluation result vectors to calculate the errors. The errors of the four
assessments were 0.0023, 0.0006, 0.0008 and 0.0009, which are clearly within
the acceptable limits (-0.003, +0.003) on the statistical scale. It can
therefore be concluded that, after training, the network meets the requirements
for evaluating the risk sharing proportion between banks and guarantee
agencies.
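Step (6)'s largest-component rule applied to the four test outputs above can be sketched as follows (Python used for illustration; the errors quoted in the text come from the training process and are not recomputed here):

```python
# The four test outputs reported above; the dimension with the largest
# component gives the risk level V1..V4 per step (6).
outputs = {
    "Y1": (0.0000, 0.0977, 0.4264, 0.0031),
    "Y2": (0.9994, 0.0054, 0.1293, 0.0000),
    "Y3": (0.0000, 0.2146, 0.0002, 0.9992),
    "Y4": (0.0360, 0.0072, 0.9991, 0.0000),
}

def risk_level(vec):
    """Return the level label V1..V4 of the largest component."""
    return "V" + str(max(range(len(vec)), key=lambda i: vec[i]) + 1)

levels = {name: risk_level(vec) for name, vec in outputs.items()}
```

For these outputs the selected levels are V3, V1, V4 and V3, matching a one-hot reading of the four vectors.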
The fourth step: network output.
After the inspection above, the estimated risk sharing proportion is accurate.
The risk sharing proportions for the eight guarantee agencies, calculated by
the same method, are shown in Table 138.5.
138.4 Conclusion
The Elman neural network is used to estimate the risk sharing proportion
between guarantee agencies and banks. The process comprises selecting existing
samples to train the network model, testing to prove the network's
availability, and finally inputting the actual data to calculate the evaluation
results. The artificial neural network is expected to estimate the risk sharing
proportion and guarantee magnification effectively and scientifically, with
good practical generalization.
References
Fu J, Zhao H (2006) Research on risk sharing mechanism of commercial banks with guarantee
institutions on the base of credit assurance. J Syst Manag 6:565–570 (in Chinese)
Hou X, Chen C, Yu H, Wang T, Ji S (1999) Optimum method about weights and thresholds of
neural network. J Northeast Univ 4:447–450 (in Chinese)
Jia W, Zhou R, Zhang Z, Wang Z, Guo J (2012) Research on gear fault diagnosis based on Elman
neural net. Comput Meas Control 5:1173–1175 (in Chinese)
Li X (2002) The establishment of economy forecasting model based on GMDH and artificial
neural network. Forecasting 6:63–66 (in Chinese)
Li X, Su X (2010) A new method for forecasting shield's disc-cutters wearing based on Elman
neural network. J Liaoning Tech Univ (Natural Science) 6:1121–1124 (in Chinese)
Li J, Ren Z, Liu Y (2011) Research on fault diagnosis system of mine ventilator based on Elman
neural network. Coal Mine Mach 8:250–253 (in Chinese)
Liu N, Chen Y, Yu H, Fan G (2011) Traffic flow forecasting method based on Elman neural
network. J East China Univ Sci Technol (Natural Science Edition) 2:204–209 (in Chinese)
Song S, Bai J (2010) The local impact analysis in artificial neural networks. Value Eng 7:144–145
(in Chinese)
Wang Z, Zou G (2011) The risk allocation mechanism in small and medium-sized enterprises
financing. J Tianjin Norm Univ (Social Science) 5:57–60 (in Chinese)
Wang L, Wang T, Chu Y (2010) Application of B-spline interpolation in system dynamics’
model based on BP artificial neural networks. Value Eng 14:153–154 (in Chinese)
You M, Ling J, Hao Y (2012) Prediction method for network security situation based on Elman
neural network. Comput Sci 6:61–76 (in Chinese)
Zhao Q, Liu K, Pang Y (2005) A new training method of Elman and its application investigation
in system identification. Coal Mine Mach 5:73–74 (in Chinese)
Chapter 139
Simulation Analysis on Effect
of the Orifice on Injection Performance
Abstract The injector is one of the precision components of a diesel engine,
and wear faults are inevitable during use. The faults of orifice expansion and
orifice obstruction essentially change the structural parameters. To analyze
the effect of the orifices on injection performance, a simulation model of a
certain type of diesel injector was established in AMESim. A simulation of a
whole injection cycle of this injector was performed, yielding the injection
characteristics and the relevant information about the motion of the needle
valve. The effects on the velocity of the needle valve, the flow rate and the
volume of the injected fuel oil, etc., were analyzed by changing the number or
the diameter of the orifices, and by setting different diameters for each
orifice. The analysis provides references for structure design, optimization,
test data analysis and fault diagnosis.
139.1 Introduction
The injector is one of the precision components of a diesel engine, and wear
faults are inevitable during use. Orifice expansion and orifice obstruction are
two of the most common fault phenomena.
Orifice expansion is due to the constant spray and erosion of high-pressure
fuel oil flowing through the orifices while the injector works. It lowers the
injection pressure and shortens the injection distance, which worsens diesel
atomization and increases carbon deposits in the cylinder.

Y. Li (&) X. An D. Jiang
Automobile Engineering Department, Academy of Military Transportation,
Tianjin, China
e-mail: [email protected]
Orifice obstruction is a half or complete blockage caused by nozzle corrosion
during long-term storage of the diesel engine, by solid impurity particles
mixed into the fuel oil, or by carbon deposits from poor combustion
accumulating around the orifices and leaving them half blocked (Jin 2008).
From the point of view of the physical mechanism, these two kinds of faults
essentially change the structural parameters of the orifice. It is difficult to
record these parameter variations in real time while the injector is working,
so simulation is the common method for analyzing the injector (Lv et al. 2009).
A hole-type injector model is built in AMESim to simulate an injection cycle
and to analyze the effect of the orifices on injection performance, which can
provide references for structure design, optimization, test data analysis and
fault diagnosis.
The combustion process of the traditional diesel engine is mainly diffusion
combustion, and its heat release rules and fuel economy depend on the fuel
injection spray and the spread of the mixture, so the spray quality requirement
is high (Zhou 2011). The fuel injection spray process is very complex. As the
fuel sprays into the cylinder, the processes of fuel bunch rupture, droplet
collision and coalescence, droplets striking the cylinder wall, and droplet
evaporation and spreading all occur on tiny space and time scales (Xie 2005).
The orifice diameter and number are important parameters of the fuel injection
system. The orifice diameter has a great influence on the shape of the fuel
injection column, the spray quality, and the fuel-air mixing state (Ma et al.
2008).
Decreasing the orifice diameter favors mixture formation, but it also prolongs
the injection duration under the same cam lift and the same injector opening
pressure. As the orifice diameter increases, the average diameter and
heterogeneity of the fuel droplets increase, the injection flow rate increases,
and the injection duration shortens (Jia et al. 2003). A smaller orifice
diameter improves low-speed performance but worsens NOX emissions (Zhang et al.
2008). A smaller injector orifice diameter is therefore more advantageous for
forming a rectangular injection profile, while a larger orifice diameter
benefits reducing the diesel engine's noise, vibration and emission levels, in
line with the ideal fuel injection law of continuously accelerating injection
followed by a quick cut-off at the end (Wang et al. 2012).
139 Simulation Analysis on Effect of the Orifice on Injection Performance 1319
Too few orifices tend to make the fuel mist adhere to the cylinder wall and
produce more soot. Too many orifices raise the temperature inside the cylinder
and tend to cause interference and overlap of the fuel bunches, thus producing
more NOX and soot (Zhou et al. 2008; Ding et al. 2008; Wu et al. 2010).
Fig. 139.1 Main model units of the injector. a Model of conical poppet valve; b Model of
mandril; c Model of valve body; d Model of leakage unit
1320 Y. Li et al.
The model simulates under the hypothesis that the fuel oil in the inlet passage
is motionless at the beginning of the injection because of viscous forces; that
is, the pressure throughout the injector and the density of the fuel oil are
uniform. The simulation computes a whole injection cycle, including the needle
valve opening time, the fuel injection duration and the needle valve closing
time.
The basic parameters are four orifices with the same diameter of 0.28 mm. Every
parameter is held at its basic value except the control parameter. To analyze
how the orifices affect injection performance, the model is batch-run, taking a
different parameter as the control parameter each time.
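The batch runs themselves are performed inside AMESim, but the idea of sweeping orifice number and diameter can be illustrated with a simple steady-flow approximation. The formula Q = Cd·A·√(2Δp/ρ) and all numerical values below (discharge coefficient, fuel density, pressure drop) are textbook-style assumptions, not values from the chapter's model:

```python
import math

def orifice_flow(n, d, dp, cd=0.7, rho=830.0):
    """Steady volumetric flow rate [m^3/s] through n identical orifices of
    diameter d [m] at pressure drop dp [Pa], using Q = cd * A * sqrt(2*dp/rho).
    cd (discharge coefficient) and rho (diesel density, kg/m^3) are assumed."""
    area = n * math.pi * d ** 2 / 4.0
    return cd * area * math.sqrt(2.0 * dp / rho)

# Sweep orifice number and diameter around the basic 4 x 0.28 mm configuration
results = {(n, d): orifice_flow(n, d * 1e-3, dp=30e6)   # assumed 30 MPa drop
           for n in (3, 4, 5) for d in (0.24, 0.28, 0.32)}
```

The sweep reproduces the qualitative trend in the text: more orifices or a larger diameter raise the steady flow rate.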
With a suitable orifice number, the fuel flow rate reaches its maximum rapidly
and then drops to zero quickly, which is close to the ideal flow rate curve.
With too few orifices, the flow rate increases slowly after injection begins,
stays at the maximum only briefly, and takes too long to drop to zero, which
does not meet the requirement that fuel supply to the combustion chamber starts
and stops quickly. The results are shown in Fig. 139.4.
Ideally, the needle valve rises slowly at first and then rapidly. With too few
orifices, the needle valve rises rapidly at the start and stays long at the
maximum displacement. The results are shown in Fig. 139.5.
As shown by the curves in Fig. 139.6, the injection starting time is brought
forward and the injection duration becomes longer as the needle valve diameter
increases. This is because the pressure-bearing surface area of the needle
valve increases with
the needle valve diameter, which correspondingly decreases the volume of the
pressure chamber. So the pressure in the chamber rises faster and the needle
valve opens earlier. The pressure falls slowly after the needle valve opens,
which keeps the needle valve at maximum displacement for longer. At the end of
injection, the needle valve seats quickly under the spring preload, and the
injection flow rate falls to zero rapidly. But too large a needle valve
diameter makes the chamber pressure fall slowly and can produce a pressure wave
exceeding the needle valve opening pressure as the valve seats, forcing the
needle valve up again and generating a secondary injection. Conversely, too
small a needle valve diameter intensifies the fluctuation at the beginning of
injection.
The effect of the orifices on injection performance depends on the orifice
number and the orifice diameter. When each orifice has a different diameter,
the injected volume and injection flow rate are computed by converting the set
to an equivalent number of identical orifices in the model.
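One reading of this equivalence, assuming flow capacity scales with total orifice area, is the following sketch (the mixed diameters are invented for illustration):

```python
def equivalent_count(diameters_mm, ref_mm):
    """Number of identical ref_mm orifices with the same total flow area as
    the given mixed-diameter set (area scales with diameter squared,
    so the common factor pi/4 cancels out)."""
    total_area = sum(d ** 2 for d in diameters_mm)
    return total_area / ref_mm ** 2

# Invented example: two 0.28 mm, one 0.30 mm and one 0.26 mm orifice
n_eq = equivalent_count([0.28, 0.28, 0.30, 0.26], ref_mm=0.28)
```

Here the mixed set behaves like slightly more than four identical 0.28 mm orifices.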
As shown by the curves in Figs. 139.7 and 139.8, the curves of the pressure at
the orifices and of the force on the top of the needle valve have exactly the
same form; in other words, normalizing the corresponding data yields the same
values.
139.4 Conclusion
(2) Parameters such as the flow rate, the injected volume, and the velocity and
lift of the needle valve characterize the injection performance. These
parameters are not mutually independent but are inherently related and
consistent with one another. Analyzing the curves of each parameter
separately examines how that parameter satisfies the diesel engine's
requirements in a certain aspect.
(3) The orifice number affects the duration of the injection process,
especially when there are too few orifices to meet the requirement. In that
case the injection flow rate increases slowly at first and takes quite a
long time to drop to zero after reaching its maximum, which does not accord
with the requirement of instantaneous injection; correspondingly, the needle
valve stays too long at maximum displacement, working against the spray
penetration distance and the spray cone angle meeting their required values.
(4) The orifice diameter has little effect at the beginning of needle valve
opening, but has a great effect on cutting off the fuel supply in the later
stage of the injection process.
References
Boudy F, Seers P (2009) Impact of physical properties of biodiesel on the injection process in a
common-rail direct injection system. Energy Convers Manage 50(12):2905–2912
Ding J, Su T, Yang Z (2008) Optimized matching of injector by thermodynamic simulation.
Small Intern Combust Engine Motorcycle 37(1):31–33 (in Chinese)
Jia G, Pang H, Hong D (2003) Effects of fuel injection design parameters on diesel engine
emission characteristics. Diesel Engine, July, pp 35–38 (in Chinese)
Jin J (2008) Reason and elimination for diesel injector breakdown. Farm Mach Maintenance
36:26–27 (in Chinese)
Lv F, Cai Y, Li X, Li X (2009) Effect of injector specifications on combustion process of diesel
engines. Tractor Farm Transp 6:82–85 (in Chinese)
Ma T, Li J, Wang D (2008) Improvement of the structure of a diesel engine’s fuel injector. Diesel
Engine 30(1):32–33 (in Chinese)
Wang L, Zhang Z, Liu P (2012) Simulation and experimental study on injection characteristics of
electronic control injector. Small Intern Combust Engine Motorcycle 41(2):14–16
(in Chinese)
Wu J, Wang M, Ma Z, Xu B, Liu Y, Wu R (2010) The effect of fuel injector parameters on
formation of mixture and combustion characteristics of diesel engine-based on fire numerical
simulation. J Agric Mech Res 18:202–205 (in Chinese)
Wen Y, Zhang Z (2010) Study on simulation of the diesel injector based on AMESim. Auto Mob
Sci Technol 6:38–41 (in Chinese)
Xie M (2005) Calculation of combustion engine, 2nd edn. Dalian University of Technology
Press, Dalian (in Chinese)
Zeng D, Yang J, Huang H, He W (2008) Working process simulation of an injector based on
AMESim. Small Intern Combust Engine Motorcycle 38:5–8 (in Chinese)
Zhang X, Song X, Yao H (2008) Research into the effect of injector structure on the performance
and emission of electric controlled diesel engine. J Yangzhou Polytech Coll 12(4):22–25 (in
Chinese)
Zhou M, Long W, Leng X, Du B (2008) Simulation research on the effect of fuel injector
parameters on diesel’s combustion characteristics. Vehicle Engine 176: 21–26
Zhou B (2011) Internal combustion engine, 3rd edn. China Machine Press, Beijing (in Chinese)
Chapter 140
Simulation and Optimization Analysis
of a Wharf System Based on Flexsim
140.1 Introduction
Huangpu Port, located in the estuarine area of the Pearl River to the southeast
of Guangzhou City, is a branch company of Guangzhou Port Group Co., Ltd. Its
business involves import and export bulk and general cargo handling, storage
and transportation. Responsible for collecting and distributing more than 60 %
of the cargo, the General Cargo Wharf of Huangpu now encounters some problems.
In combination with the research program Guangzhou Port Business Management
System Construction, a simulation model of the practical operation of the
general cargo wharf of Huangpu Port is built with Flexsim. By running the 3D
model, system bottlenecks are recognized; a series of proposals is then
designed and finally validated by system simulation.
The throughput of Huangpu Port exceeds 28 million tons a year. With an
extensive economic hinterland covering the Pan-Pearl River Delta region and
trading relationships with more than 60 countries and regions, it is one of the
important trade ports in South China. The General Cargo Wharf is regarded as
the characteristic wharf of Huangpu Port. In recent years, however, its
handling capacity has been hard to increase. Four problems are summarized as
follows:
(1) Operation dispatching is based on experience. Experience is accumulated by
operation planners or instructors who have worked on the job for a long
time, so decision making is arbitrary and lacks scientific backing.
(2) Wharf services are simple, mainly covering storage and transportation of
domestic cargo, especially handling, storing and transporting cargo within
the Pearl River Delta.
(3) Wharf information technology is at a low level and business processes are
slow. Most business data is collected and handled manually. Because of the
wide variety of cargo, many bills and documents must be delivered.
(4) Traffic jams occur frequently on the wharf roads, which annoys customers;
some complain a lot and some simply stop further cooperation.
Flexsim is one of the most popular simulation software packages in the world,
combining 3-dimensional image processing, simulation, artificial intelligence
and data processing (Shi and Wang 2011), and it is tailored to serve the
140 Simulation and Optimization Analysis of a Wharf System 1327
manufacturing and logistics industry. In this paper, taking the general cargo
wharf of Huangpu Port as an example, a practical and visual wharf simulation
system is developed with Flexsim to support decision making in wharf operation;
it simulates wharf operation from imported real data and outputs effective
indexes that help managers recognize key problems and make good decisions.
General cargo consists of steel, mechanical equipment, and packaged cargo,
among which steel takes up more than 50 %, so steel is chosen to represent the
cargo in the model. The equipment in use mainly includes gantry cranes, jib
cranes, fork lifts and trailers. Most cargo arrives at the wharf by water and
leaves by truck; to simplify the model, this is treated as the only cargo flow
in this paper.
The main part of the wharf operation system can be simplified as a G/G/1
queuing system. All the services the wharf offers can be regarded as service
counters, and trucks or ships are the customers (Gao 2011). Truck service is
the focus of this paper and can be broken down into the truck scale, check-in
and loading services (Fig. 140.1).
Assume that λ is the average arrival rate per minute and μ is the average
service rate per minute. Then:

(1) The average number of minutes a truck spends queuing:

Wq = λ / (μ(μ − λ))   (140.1)

(2) The average number of minutes a truck stays in the system, comprising
running time (Tr), queuing and service:

Ws = Tr + Wq + 1/μ   (140.2)

(3) The probability that an arriving truck cannot be served immediately and has
to wait:

Pw = λ / μ   (140.3)
1328 N. Lin and X. Zhai
A practical and visual wharf system simulation model can be built by applying
the G/G/1 queuing model together with Flexsim simulation technology.
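What Flexsim does for the full wharf can be illustrated in miniature with a hand-rolled single-server queue. The sketch below (Python, with assumed exponential inter-arrival and service times) is not the paper's model, only an illustration of the queuing logic behind one counter such as the truck scale:

```python
import random

def simulate_counter(minutes=7200, mean_interarrival=12.0,
                     mean_service=11.0, seed=1):
    """Toy single-server FIFO counter: exponential inter-arrival and service
    times; returns the number of trucks served and their waiting times.
    All parameter values are assumptions for the sketch."""
    rng = random.Random(seed)
    t, free_at = 0.0, 0.0
    waits, served = [], 0
    while True:
        t += rng.expovariate(1.0 / mean_interarrival)   # next truck arrives
        if t > minutes:
            break
        start = max(t, free_at)                         # wait if counter busy
        waits.append(start - t)
        free_at = start + rng.expovariate(1.0 / mean_service)
        served += 1
    return served, waits

served, waits = simulate_counter()
```

Because the assumed service time is close to the inter-arrival time, long queues build up, mirroring the congestion at the truck scale reported below.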
Running the model for 7200 min (5 days) yields the following data: the berthing
time of a ship is 23.4 h, and, with each truck carrying 60 tons, the wharf
handling capacity is 3444 tons daily (Table 140.1). The truck scale and
check-in counters are too busy, particularly the truck scale, which is busy
98 % of the time (Table 140.2). The vacancy rates of the cranes and fork lifts
are high, and there is large spare space for storage or other operations in the
yard (Table 140.3).
The time a truck spends in the system includes running on the wharf roads,
waiting for service and being served. The truck arrival interval follows a
Poisson process with an average interval of 12 min, and each truck is assumed
to spend 14 min running on the roads. During the 5 days, 287 trucks arrive, are
served and leave, so

λ = 1/12 ≈ 0.08
μ = 287 / (7200 − 287 × 14) ≈ 0.09
Plugging these into formulas (140.1)–(140.3), we get: a truck queues 134.65 min
for service; a truck stays 159.74 min on the wharf; and the probability that an
arriving truck has to wait for service is 0.92.
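The calculation above can be reproduced directly from formulas (140.1)–(140.3); the 14 min running time and the 287 served trucks are taken from the text:

```python
lam = 1.0 / 12.0                       # average arrival rate (trucks/min)
mu = 287.0 / (7200.0 - 287.0 * 14.0)   # average service rate (trucks/min)

w_q = lam / (mu * (mu - lam))          # (140.1) mean queuing time, min
w_s = 14.0 + w_q + 1.0 / mu            # (140.2) mean time in system, min
p_w = lam / mu                         # (140.3) probability of waiting
```

Evaluating these gives the 134.65 min, 159.74 min and 0.92 quoted above.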
Table 140.3 Vacancy rate of equipment of model 0

              Gantry crane  Jib crane  Fork lift
Vacancy rate      0.65         0.95       0.94

By investigation, we find that the truck scale weighs and measures each truck
and the goods on it, and then prints a weight note for the goods loaded on the
truck. The truck license number and weight note number are typed by hand, and
the
computer used for the truck scale is aging and slow. Besides, there is little
information sharing between the truck scale and other departments, including
the scheduling department and the check-in office. An arriving truck must go to
the truck scale for an empty-weight record and then to the check-in office. The
check-in officer checks the truck's pick-up document, searches for the
corresponding release sheet, and then records on the tally sheet when the truck
is loaded. Because all documents are handwritten, it is troublesome to find
historical records, which leads to an average of 10 min for truck check-in. All
of this may result in traffic jams near the truck scale. Based on the analysis
above, we focus on the handling effectiveness of the truck scale and check-in
system. Proposals for optimization are as follows.
Equipment no.    1    2    3    4    5    6    7    8
Vacancy rate   0.35 0.59 0.34 0.59 0.30 0.65 0.27 0.43
We can see that each index is improved. The handling capacity reaches 4884 tons
daily (Table 140.4). Since a truck carries 60 tons and the system runs for
5 days, throughput0 is obtained as follows:

throughput0 = 407 × 60 / 5 = 4884
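The daily-capacity figures here follow from trucks served × load ÷ run length:

```python
def daily_throughput(trucks_served, tons_per_truck=60, days=5):
    """Average daily tonnage: total trucks x load, spread over the run."""
    return trucks_served * tons_per_truck / days

t0 = daily_throughput(287)   # model 0: 3444 tons/day
t1 = daily_throughput(407)   # improved model: 4884 tons/day
```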
The occupancy rates of the gantry cranes, jib cranes and fork lifts increase by
60, 240 and 217 % respectively (Table 140.5). So promoting information
technology arguably weakens the resistance of the truck scale and check-in
service to wharf operation, while on the other hand it demands more equipment.
If the arrival intervals of trucks and ships are constant, trucks can run
without obstacles and reach the shipside; as more and more trucks arrive at the
shipside, traffic jams will come up if the gantry cranes' handling is not
effective.
Therefore, we conclude that model 1 can settle the traffic jams for a period of
time, improving the occupancy rate of the equipment as well as enlarging the
wharf throughput. However, when the handling capacity reaches a certain scale,
the equipment becomes too occupied and model 1 is no longer effective. At that
point, the planner should arrange more handling equipment to meet the demand of
larger throughput.
(1) When applying for the release sheet, the owner of the cargo can apply for
an IC card according to need. The truck arrives at the wharf with the IC
card. At the truck scale, the information on the release sheet, the weight
note and the cargo storage position can be called up by swiping the IC
card. A loading admission notice is printed and the information is also
sent to the scheduling department,
In this process, it’s unnecessary for the truck to check in. IC card is the cer-
tificate for picking up cargos. Business system identifies the validation of IC card,
and recognizes whether there is stock, which would save the time for truck (Cao
et al. 2009). And there is no need for check-in officer to keep record on the paper
tally sheet, and then type in the system. Tally sheet is generated automatically.
Modifying the parameters and running the new model, we get: after truck
business process reengineering, there is no queuing phenomenon, and the
throughput is 6252 tons daily (throughput1). Since a truck carries 60 tons and
the system runs for 5 days, throughput1 is obtained as follows:

throughput1 = 521 × 60 / 5 = 6252
However, the owner of the cargo may weigh the cost of an IC card against the
cargo value, so not all owners will choose to buy one. For reasons of length,
this paper discusses the situation in which all owners purchase IC cards.
Table 140.8 Vacancy rate of equipments on model 3

              Gantry crane  Jib crane  Fork lift
Vacancy rate      0.54         0.84       0.83
The port should make full use of its advantages in water and railway transport,
which could greatly reduce the load on the wharf and yield a cost advantage. It
is worthwhile to strengthen regional cooperation and build a strong network
connecting waterway, highway, railway and airway in order to embrace an
encouraging future (Ding et al. 2010). In addition, Huangpu Port should
reposition its wharf function. The analysis above shows that the yard space and
yard equipment are not fully utilized. It is worth selecting cargo that is
strongly connected with the economic hinterland and offering supply chain
services such as transportation between upstream and downstream and
distribution processing in the wharf yard.
According to model 0, when the throughput is 3444 tons per day, bottlenecks
appear in the wharf logistics system operation; at this point, model 1 is
invoked. Standardizing the business system and reducing manual labor is a good
way to solve both the traffic jams and the handling capacity shortage. When the
throughput reaches 4884 tons per day, model 2 is invoked; model 2 builds on
model 1 and accelerates handling to meet the strong demand. Resources on the
wharf are limited, which requires reasonable distribution by the dispatch
agency; running the models yields operation indexes that guide the allocation
of more resources to the weak parts whose indexes show lower vacancy rates.
When the throughput reaches 6252 tons a day, the system encounters a bottleneck
again; however, the model can still be used for further data analysis.
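The escalation rule in this paragraph can be written as a small dispatch function; the thresholds 3444 and 4884 tons/day come from the text, while the function itself is our sketch:

```python
def model_to_invoke(daily_tons):
    """Pick the simulation model per the escalation rule in the text:
    model 0 below the first bottleneck, model 1 up to 4884 t/day,
    model 2 beyond that (6252 t/day marks the next bottleneck)."""
    if daily_tons < 3444:
        return 0
    if daily_tons < 4884:
        return 1
    return 2

plan = [model_to_invoke(t) for t in (3000, 4000, 5500, 6252)]
```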
In this case, 80 % of the cargo is picked up directly from the shipside by
truck and 20 % at the storage yards, and the utilization rate of the storage
yard is only 0.05 %. The service function of Huangpu Port remains at
traditional handling, loading and unloading, which leaves the storage yard and
its equipment underused. The situation will change if Huangpu Port extends its
wharf service functions (Hu et al. 2006). It is worth making full use of the
equipment and offering potential logistics services to customers; for example,
with the help of its resource advantages and favorable economic environment,
Huangpu Port can provide warehousing, transit shipment, distribution, etc. In
addition, high dependency on road transportation is Huangpu's weak point; if
the port area stays constant, traffic jams will be inevitable. Only by
expanding new profit sources (Chen et al. 2005) can the port solve the
congestion fundamentally. Huangpu Port can draw on its strengths in water and
rail transportation and promote regional cooperation (Huang 2012), forming a
strong network of seaway, road, rail and air in order to make full use of the
wharf's resources.
The simulation model provided in this paper helps the schedule manager make
short-term decisions. In the long run, Huangpu Port should make great efforts
to facilitate the construction of a regional transportation network (Tang 2011)
and carry out more regional operations. In addition, it should try to expand
into new service fields, for example making full use of its advantages to offer
tailored logistics services for customers, selecting proper cargo and carrying
out supply chain services in the wharf area.
References
Cao Q, Huang L, Song Y (2009) Optimization of truck distributing business process and system
design in port. Logist Technol 11:118–120
Chen Z, Cao X, Yan X (2005) Research on correlation relation between the development of
Guangzhou Port and Guangzhou. City Econ Geogr 25(003):373–378
Ding W, Zhang L, Li J (2010) The Construction of hub-and-spoke logistics networks and its
empirical research. China Soft Sci 08:161–168
Gao P (2011) Research on modeling and operation optimization on port logistics network system.
Dalian University of Technology, Dalian
Gao P, Jin C, Deng L (2010) A review on connection issues on port multimodal transport system
and their modeling. Sci Technol Manag Res 23:234–238
Hu F, Li J, Wu Q (2006) On the strategies for the utilization of Pearl river coastland and the
development of Guangzhou harbor. Urban Probl 130(2):31–35
Huang X (2012) Accelerate transition and construct new Guangzhou port. Port Econ 11:11–13
Li J, Chen Y, Zhai J (2010) Research on port supply chain system dynamics simulation model.
Comput Eng Appl 46(35):18–21
Shi Y, Wang J (2011) Optimization of production system by simulation based on UML and
Flexsim. In: International conference on management and service science (MASS), IEEE
China, vol 30, pp 1–4, Aug 2011
Tang S (2011) The interaction and development between Guangzhou port and city. Port Econ
6:48–51
Chapter 141
Simulation Design of Piezoelectric
Cantilever Beam Applied on Railway
Track
141.1 Introduction
In recent years, as wireless sensor networks have become widely used in rail
transport, the power supply of wireless sensor network nodes has become a
concern. At present, nodes are battery powered. Because of the limited life of
commonly used chemical batteries, they need regular replacement, which brings a
heavy workload, high cost and serious waste. For a wireless sensor network
covering a large area, such as roadbed monitoring, battery replacement is
difficult. Therefore, the power supply and management problems of wireless
sensor network nodes urgently need to be solved.
At the same time, the train produces a large number of vibration energy during
operation and radiant this energy by wheel-rail noise resulting in a lot of energy
loss and noise. If we can use this energy for power supply of wireless sensor
network nodes, We not only can solve the power supply of wireless network and
will be able to achieve the purpose of energy saving and environmental protection.
With the rapid development of electronic technology, piezoelectric vibration
generators that harvest ambient vibration energy have emerged. Compared with other
micro-generation devices, a piezoelectric self-generating device has a simple
structure, produces no heat and causes no electromagnetic interference, so
piezoelectric power generation devices are widely used in many areas. Collecting
rail vibration energy with a piezoelectric vibrator offers a new way to power
wireless sensor networks.
Track vibration is caused by wheel-rail collisions while the train is running. As
the vibration propagates, the high-frequency part decays faster than the
low-frequency part, so the frequency content changes with distance, and horizontal
vibration attenuates faster than vertical vibration. Track vibration is a complex
synthesis of transverse waves, longitudinal waves and surface waves. Because it is
affected by many complex factors, its mechanism and propagation pattern fluctuate;
only by statistical analysis of a large number of measured data, accounting for the
combined effects of these factors, can the features of track vibration be obtained.
Gao Guang-yun (Gao et al. 2007) tested track vibration on the
Qinhuangdao-Shenyang passenger railway. The test results show that the rail
vibration frequency is about 100 Hz, and the amplitude is relatively large in the
70–130 Hz band. Based on this, the oscillator in this paper is installed on the
rail base, as shown in Fig. 141.2.
[Figure: cantilever vibration frequency (Hz) versus mass (g)]
When the thickness of the piezoelectric crystal is 0.2 mm, the natural frequency of
the cantilever beam is shown in Figs. 141.6, 141.7 and 141.8.
From the above modal analysis, we can see that as the length of the cantilever
beam and the mass of the proof mass increase, the cantilever's resonant frequency
decreases. Based on this, we design the size of the cantilever.
We now design a cantilever with a resonant frequency of 77.5 Hz. First, we draw
a line at the frequency 62.5 Hz in each of Figs. 141.4, 141.5, 141.6, 141.7 and
141.8; we then obtain one point of proximity and ten points of intersection, shown
in Table 141.2.
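The trend identified above, resonant frequency falling as beam length and proof mass grow, follows from the standard Euler-Bernoulli estimate for a tip-loaded cantilever. The sketch below is illustrative only: the material constants and dimensions are assumed values, not the paper's modal-analysis data.

```python
import math

def cantilever_frequency(E, w, t, L, rho, m_tip):
    """First natural frequency (Hz) of a rectangular cantilever with a
    tip mass, via the Rayleigh (effective-mass) approximation."""
    I = w * t**3 / 12.0                       # area moment of inertia
    k = 3.0 * E * I / L**3                    # static tip stiffness
    m_beam = rho * w * t * L                  # distributed beam mass
    m_eff = m_tip + (33.0 / 140.0) * m_beam   # effective mass lumped at the tip
    return math.sqrt(k / m_eff) / (2.0 * math.pi)

# Illustrative values (NOT the paper's data): phosphor-bronze substrate
E, rho = 110e9, 8800.0        # Young's modulus (Pa), density (kg/m^3)
w, t = 20e-3, 0.5e-3          # width, thickness (m)
for L in (50e-3, 60e-3, 70e-3):
    for m_tip in (1e-3, 3e-3, 5e-3):
        f = cantilever_frequency(E, w, t, L, rho, m_tip)
        print(f"L={L*1e3:.0f} mm, tip mass={m_tip*1e3:.0f} g: f = {f:.1f} Hz")
```

The printed frequencies fall as either the length or the tip mass grows, matching the modal-analysis trend used to size the beam.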
[Figs. 141.6, 141.7 and 141.8: vibration frequency (Hz) versus mass (g) for 0.8 mm-thick cantilevers of various lengths]
According to Shan et al. (2010), the output voltage of the cantilever increases
with the length of the cantilever. Comparing program 3 with program 4, we therefore
choose program 4; comparing program 2 with program 5, we choose program 5;
comparing program 6 with program 7, we choose program 7; comparing program 8 with
program 10, we choose program 10; and comparing program 9 with program 11, we
choose program 11. At the same time, according to Liu et al. (2011), the output
voltage of the cantilever increases as the thickness of the cantilever decreases.
Comparing program 4 with program 5, we therefore choose program 5, and comparing
program 10 with program 11, we choose program 10. The cantilever substrate
material is phosphor bronze, for which the optimum thickness ratio is about 0.5
(Shan et al. 2010). The thickness ratio of program 5 is 0.33, and the thickness ratio of
1342 H. Zheng and F. Zheng
[Figures: vibration frequency (Hz) versus cantilever length (mm)]
141.7 Conclusion
Acknowledgments This work was supported by the Natural Science Foundation of Tianjin,
China, under grant 10JCYBJC06800.
Chapter 142
Simulation of Scenic Spots Dynamic Pricing
Based on Revenue Management
Abstract The reasonable pricing of scenic spot tickets involves many aspects and
is a complicated, changeable process. In this paper, following the idea of revenue
management, a dynamic pricing model for scenic spot tickets is constructed, and
simulation is performed using a particle swarm optimization ant colony algorithm.
The simulation results show that dynamic pricing based on revenue management
brings more profit, and this study provides a scientific means for pricing scenic
spot tickets.
142.1 Introduction
At present, the scenic spot ticket pricing is muddledness and lack of scientific basis
in China. ‘‘Comparison method’’ or ‘‘follow-the-leader method’’ is the common
adopted pricing approaches by scenic operators. In this cases many scenic spots
cannot reach the expected passenger flow volume, and economic benefit of scenic
spots is influenced seriously (Lu et al. 2008). Revenue management is a set of
system management ideas and methods, it makes the scientific forecasting and
optimization techniques combine together with the modern computer technology
organically and faultlessly, its core is based on market segmentation. Revenue
Z. Duan · F. Zhang
College of Economics and Management, Shandong University
of Science and Technology, Qingdao, China
S. Yang (&)
Computer basic courses department, Shandong Institute
of Business and Technology, Yantai, China
e-mail: [email protected]
management means selling the right product to the right customer at the right
time, at the right price and through the right channel, so as to achieve the
maximum profit. In other words, revenue management is the art and science of
predicting real-time customer demand and optimizing the price and availability of
products according to that demand. This paper builds a dynamic pricing model for
scenic spot tickets based on the idea of revenue management. The simulation is
carried out using a particle swarm optimization ant colony algorithm; this work
provides a scientific means for pricing scenic spot tickets.
The revenue management area encompasses all work related to operational pricing
and demand management. This includes traditional problems in the field, such as
capacity allocation, overbooking and dynamic pricing, as well as newer areas, such
as oligopoly models, negotiated pricing and auctions.
An American airline is considered the main pioneer in this field (Duan et al.
2008). Recent years have seen great successes of revenue management, notably in
the airline, hotel, and car rental business. Currently, an increasing number of
industries are exploring the adoption of similar concepts (Nagae and Akamatsu 2006;
Weatherford 1997).
Applying revenue management to set ticket prices is a typical market-oriented
pricing approach: it depends more on the relationship between supply and demand
than on cost. Based on EMSR (Expected Marginal Seat Revenue) theory, the expected
revenue of one ticket equals the product of the ticket price and the probability
of its being sold, if cost is not considered.
The dynamic pricing method takes tourists' willingness to pay ($w_i$) as the basis
for setting prices. Taking a single scenic spot as an example, the ticket price the
consumers can accept is

$$p_i = w_i = \int_0^{\infty} F(p, z)\,dp$$

where $p$ stands for the cost from the starting point to the scenic spot and $z$
stands for the socio-economic characteristics of the population. Because there are
many consumers, the total revenue $TR$ of the scenic spot equals the sum of the
prices the consumers can accept:

$$TR = \sum_{i=1}^{n} p_i$$
142 Simulation of Scenic Spots Dynamic Pricing Based on Revenue Management 1347
The revenue management pricing method is a typical market pricing method, which
reflects the game relation between the interests of the tourists and of the scenic
spot in the market.
First, suppose the capacity of a scenic spot is $M$ and the advance sales cycle is
$T$; the price remains constant within each cycle. If tickets are booked 4 weeks in
advance and each week is a pricing cycle, then the number of cycles is 4. Assume
that the reserve price of visitors obeys a probability distribution $F(p)$ that is
unchanged over the whole sales horizon, and that the sales price in cycle $t$ is
$p_t$. A tourist buys a ticket only when his reserve price is higher than the
current price, so the probability that an arriving tourist buys a ticket is
$1 - F(p_t)$, and $D_t(p_t) = M_t[1 - F(p_t)]$ is the demand function of cycle $t$.
Here $M_t$ is the potential market size of cycle $t$, and the price $p_t$ is the
decision variable. Ticket revenue management for a scenic spot aims to determine
the optimal price for each of the sales cycles $t \in [0, T-1]$ so as to maximize
the total ticket revenue. The dynamic pricing model can be expressed as follows.
$$\max\ TR = \sum_{t=0}^{T-1} p_t D_t(p_t)$$

$$\text{s.t.}\quad \sum_{t=0}^{T-1} D_t(p_t) \le M;\qquad a\,\bar{p} \le p_t \le b\,\bar{p},\quad t = 0, 1, 2, \ldots, T-1;\qquad p_t \ge 0;\qquad a \le b,\ a \ge 0,\ b \ge 0$$
The first constraint states that the total number of tickets cannot exceed the
maximum capacity of the scenic spot; the second states that the price in each cycle
must stay within the guiding price range provided by the state or the responsible
department, where $a$ and $b$ are the lower-limit and upper-limit ratios of the
price change around the guiding price $\bar{p}$.
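The pricing model above can be explored numerically. The sketch below is illustrative only: it assumes a uniform reserve-price distribution $F$ and hypothetical capacity and market sizes, and it enumerates integer price vectors by brute force, standing in for the optimization algorithm of the next section.

```python
import itertools

def demand(p, M_t, p_lo, p_hi):
    """D_t(p) = M_t * (1 - F(p)) with a uniform reserve-price
    distribution F on [p_lo, p_hi] (an assumed, illustrative F)."""
    if p <= p_lo:
        frac = 0.0
    elif p >= p_hi:
        frac = 1.0
    else:
        frac = (p - p_lo) / (p_hi - p_lo)
    return M_t * (1.0 - frac)

def best_prices(M, M_t_list, p_grid, p_lo, p_hi):
    """Enumerate price vectors over the cycles and keep the feasible
    one (total demand <= capacity M) with maximal total revenue."""
    best, best_tr = None, -1.0
    for ps in itertools.product(p_grid, repeat=len(M_t_list)):
        d = [demand(p, m, p_lo, p_hi) for p, m in zip(ps, M_t_list)]
        if sum(d) > M:
            continue                      # capacity constraint violated
        tr = sum(p * dt for p, dt in zip(ps, d))
        if tr > best_tr:
            best_tr, best = tr, ps
    return best, best_tr

# Hypothetical numbers: 4 weekly cycles, capacity 20000 visitors
prices, tr = best_prices(20000, [9000, 8000, 7000, 6000],
                         range(50, 96, 5), 40, 120)
print(prices, round(tr))
```

With these assumed numbers the capacity constraint binds, so the optimal prices sit above the unconstrained revenue-maximizing price, illustrating how the constraint shapes the cycle prices.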
1348 Z. Duan et al.
Step 2: Initialize the ant colony algorithm: place n ants, each carrying its own
parameters, randomly on n nodes, and evaluate the fitness function value according
to each ant's variable values;
Step 3: Reset the external loop counter;
Step 4: Reset the internal loop counter;
Step 5: Run the ant colony algorithm for each ant with its respective parameters
(n, q, q0) and update the pheromone;
Step 6: If the internal loop conditions are not met, return to Step 5; otherwise
go to Step 7;
Step 7: Update the local pheromone, record the result of each ant, and update
n, q, q0 by the ant colony algorithm;
Step 8: If the external loop conditions are not met, return to Step 4; otherwise
go to Step 9;
Step 9: Output the optimal solutions.
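The full hybrid algorithm of the steps above pairs an outer particle swarm with inner ant colony passes; a complete implementation is beyond a short sketch. The minimal particle swarm optimizer below (after Kennedy and Eberhart 1995) illustrates only the outer update: in the hybrid, each particle would carry the ACO parameters (n, q, q0) and the fitness call would run one inner ACO pass. The toy objective is a single-cycle revenue curve, not the paper's model.

```python
import random

def pso(f, bounds, n_particles=20, iters=60, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer (maximization)."""
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = max(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # velocity pulled toward personal best and global best
                vel[i][d] = (w * vel[i][d] + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                lo, hi = bounds[d]
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            v = f(pos[i])
            if v > pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v > gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val

# Toy objective: revenue p * (1 - F(p)) with uniform F on [40, 120]
revenue = lambda x: x[0] * max(0.0, (120.0 - x[0]) / 80.0)
random.seed(0)
p_star, r_star = pso(revenue, [(40.0, 120.0)])
print(round(p_star[0], 1), round(r_star, 1))
```

For this uniform reserve-price curve the swarm converges near the analytic optimum at p = 60, where the revenue per potential visitor peaks at 45.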
This paper takes the Qingdao Laoshan Jufeng scenic spot as an example. The
seasonality of the Laoshan scenic spot is very obvious: winter has fewer tourists,
so the current off-season price differs from the busy-season price. Assume that
April to October is the busy season, priced at 95 yuan/person, and November to the
following March is the off-season, priced at 65 yuan/person. Taking 4 weeks as a
pricing cycle and Matlab as the simulation tool, simulations are performed for the
off-season and the busy season respectively; the off-season starts from November,
the busy season starts from April, and 8 weeks of data are used for each. The
benefits of dynamic pricing and fixed pricing are compared, with simulated prices
in integer form; the results are shown in Tables 142.1 and 142.2.
The simulation results show that the revenue of dynamic pricing based on revenue
management is obviously higher than that of fixed pricing in both the off-season
and the peak season, and that peak-season revenue is obviously higher than
off-season revenue.

Table 142.1 Profit comparison for dynamic pricing and fixed pricing in the off-season
Cycle  Fixed price (yuan)  Dynamic price (yuan)  Fixed-price profit (yuan)  Dynamic-price profit (yuan)  Income increase rate (%)
1      65                  57                    398580                     417353                       4.71
2      65                  55                    384215                     402196                       4.68
3      65                  52                    391365                     408311                       4.33
4      65                  50                    371280                     387691                       4.42
5      65                  51                    380510                     396529                       4.21
6      65                  50                    367575                     382168                       3.97
7      65                  52                    338000                     351114                       3.88
8      65                  50                    295295                     306723                       3.87

Table 142.2 Profit comparison for dynamic pricing and fixed pricing in the peak season
Cycle  Fixed price (yuan)  Dynamic price (yuan)  Fixed-price profit (yuan)  Dynamic-price profit (yuan)  Income increase rate (%)
1      95                  91                    1517340                    1619305                      6.72
2      95                  92                    1552965                    1658101                      6.77
3      95                  95                    1588495                    1690476                      6.42
4      95                  97                    1778780                    1913612                      7.58
5      95                  102                   1879480                    2023260                      7.65
6      95                  100                   1782675                    1916910                      7.53
7      95                  96                    1737835                    1866956                      7.43
8      95                  94                    1756930                    1887294                      7.42

This is mainly because the number of visitors in the off-season is much smaller
than in the peak season, and the dynamic prices for the off-season are lower than
the fixed price, while the reverse holds for the peak season. The highest ticket
price reaches 102 yuan, at the time corresponding to the May 1 holiday. This
indicates that when there are fewer visitors, an obvious way to win more customers
is to reduce the price, while the opposite holds when tourists are plentiful. In
this way, tourist flows are adjusted to a certain extent and the overall income is
always higher than under the fixed pricing mode. The Laoshan scenic spot has up to
six tourist routes in all, so the revenue-management-based dynamic pricing strategy
can bring considerable income for scenic spots.
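The income increase rates in Tables 142.1 and 142.2 follow directly from (dynamic profit - fixed profit) / fixed profit; a quick check against three table rows:

```python
# (fixed-price profit, dynamic-price profit, reported rate %) from
# Table 142.1 (rows 1-2) and Table 142.2 (row 1)
rows = [
    (398580, 417353, 4.71),
    (384215, 402196, 4.68),
    (1517340, 1619305, 6.72),
]
for fixed, dyn, reported in rows:
    rate = 100.0 * (dyn - fixed) / fixed   # income increase rate in percent
    print(f"computed {rate:.2f}% (reported {reported}%)")
```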
142.6 Conclusion
Revenue management theory is used to build a dynamic pricing model for scenic
spots, and simulation is performed using a particle swarm optimization ant colony
algorithm; by dynamically adjusting ticket prices, the income of the scenic spot
tends toward its maximum over the existing level. This provides a scientific means
for pricing scenic spot tickets. However, scenic spot ticket pricing is a
complicated and changeable process: as the ticket price changes, the consumption
behaviour of the tourists changes accordingly. Further research on more accurate
demand forecasting and dynamic pricing is therefore necessary.
Acknowledgments Financial support: National social science fund (11BJY121); The education
ministry humanities and social science research project (09YA790128); Shandong province soft
science research plan (2012RKB01209).
References
Dorigo M (1999) Ant algorithms for discrete optimization. Artif Life 5(3):137–172
Dorigo M, Gambardella LM (1997) Ant colony system: a cooperative learning approach to the
traveling salesman problem. IEEE Trans Evol Comput 1(1):53–56
Dorigo M, Maniezzo V, Colorni A (1996) The ant system: optimization by a colony of
cooperating agents. IEEE Trans Syst Man Cybern PartB 26(1):1–13
Duan Z, Li J, Lv Z (2008) Management in the Chinese scenic area of ticket pricing. Price Theory
Pract 06:35–38 (in Chinese)
Hetmaniok E, Słota D, Zielonka A (2012) Application of the ant colony optimization algorithm
for reconstruction of the thermal conductivity coefficient. Swarm Evol Comput 7269:240–248
Kathiravan R, Ganguli R (2007) Strength design of composite beam using gradient and particle
swarm optimization. Compos Struct 81(4):471–479
Kennedy J, Eberhart RC (1995) Particle swarm optimization. In: International conference on
neural networks, pp 1942–1948
Lu R, Liu X, Song R, Pan L (2008) A study on admission fee fixing model in china’s tourist
attractions. Tourism Tribune 23(11):47–49 (in Chinese)
Nagae T, Akamatsu T (2006) Dynamic revenue management of a toll road project under
transportation demand uncertainty. Netw Spatial Econ 6(3–4):345–357
Rong Y, Zhao L (2011) Localization algorithm for wireless sensor network based on ant colony
optimization-particle swarm optimization (ACOPSO). Comput Meas Control 3(19):732–735
(in Chinese)
Li S-y, Wang Q (2009) Extensive particle swarm ant colony algorithm for continuous space
optimization. J Test Meas Technol 23(4):319–325 (in Chinese)
Tang L, Zhao L, Zhang Y (2010) Research on multi-period dynamic pricing model and algorithm
for fresh foods. J Syst Manag 19(2):140–146 (in Chinese)
Weatherford LR (1997) Using prices more realistically as decision variables in perishable-asset
revenue management problems. J Comb Optim 1(3):277–304
Yu X, Zhang T (2010) Multiple colony ant algorithm based on particle swarm optimization.
J Harbin Inst Technol 42(5):766–769 (in Chinese)
Chapter 143
Study of Adaptive Noise Cancellation
Controller
Keywords Adaptive filter · Noise cancellation controller · Response feedback ·
Spectral line enhancement
143.1 Introduction
In the application of engineering, the most classical method for eliminating the
noise from signals is Wiener filtering (Dai 1994; Shen 2001; Hassoun 1995). But
design this filter must know the information of signals and noise. Begin from
60 years, with the developing of adaptive filtering theory, this problem becomes
not so important (He 2002). Adaptive filter can set apart the signals and noises
without information of them. Then, this technology was applied in many fields
(Wu 2001). However, there are still two concern questions—interfered reference
channel and responses feedback. These problems are both not solved satisfied
(Haykin 1994; Zhang and Feng 2003; Jiang et al. 2001). This paper is mostly work
over the question of responses feedback, and proposed a new improved adaptive
noise cancellation controller.
C. Zhao (&)
Department of Electrical and Information Engineering,
Shijiazhuang University, Shijiazhuang, China
e-mail: [email protected]
S. Sun
Institute of Information, Shijiazhuang Tiedao University, Shijiazhuang, China
[Fig. 143.1: Block diagram of adaptive noise cancellation. Primary input: signal s plus noise n0; reference input: noise n1 fed to the adaptive filter, whose output is y; the system output (error) e = s + n0 − y is fed back to adjust the filter.]
Figure 143.1 shows a normal adaptive noise cancellation controller (Yang and Zhou
1998; Larimore et al. 1978; Evinson 1946). It has two sensors. The signal is
corrupted by an uncorrelated noise n0 on its way to the first sensor, and the
combined signal s + n0 is transmitted to the noise cancellation controller as the
primary input. The second sensor receives the noise signal n1, which is
uncorrelated with the signal but correlated in some way with the noise n0; the
output of the second sensor is called the reference input (Doherty and Porayath
1997).

Suppose s, n0, n1 and y are statistically stationary with zero mean, s is
uncorrelated with n0 and n1, and n1 is correlated with n0. The output error is

$$e = s + n_0 - y \tag{143.1}$$

Squaring (143.1) and taking expectations, and using the fact that s is
uncorrelated with n0 and n1 (hence with y), gives

$$E[e^2] = E[s^2] + E[(n_0 - y)^2] + 2E[s(n_0 - y)] = E[s^2] + E[(n_0 - y)^2] \tag{143.2}$$

The adaptive filter is adjusted to minimize $E[e^2]$. Because the signal power
$E[s^2]$ is unaffected, $E[(n_0 - y)^2]$ is minimized. In the ideal case
$E[(n_0 - y)^2] = 0$, so $y = n_0$ and $e = s$: minimizing the output power leaves
no noise in the output.
In Fig. 143.1, the method of adjusting the parameters of the adaptive filter is
similar to the steepest descent method, a kind of optimization algorithm. In
practice, the instantaneous gradient $\hat{\nabla}(k)$ is used instead of the true
gradient $\nabla(k)$, namely

$$W(k+1) = W(k) - \mu\hat{\nabla}(k) = W(k) + 2\mu e(k)X(k) \tag{143.5}$$

When the adaptive filter converges to the steady state, the system output $e(k)$
equals the signal $s(k)$, so the weights are then driven by $s(k)$; it can be said
that the signal $s(k)$ feeds back through the adaptive filter, and each system
output is affected by the previous one. Formula (143.4) indicates that the
instantaneous gradient $\hat{\nabla}(k)$ contains an error term $-2s(k)X(k)$, which
should be zero in expectation. This term produces surplus mean square error, so the
system output $e(k)$ differs from the useful input signal $s(k)$. This causes
signal distortion, namely the response feedback phenomenon.
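The LMS update (143.5) can be demonstrated end to end. The sketch below is a generic LMS noise canceller, not the paper's improved controller: the sinusoidal signal, the Gaussian reference noise, and the 2-tap path linking n1 to n0 are all assumed for illustration.

```python
import math, random

def lms_cancel(primary, reference, order=8, mu=0.01):
    """Standard LMS adaptive noise canceller: filter the reference
    noise to estimate n0, and output e = primary - y (recovered signal)."""
    w = [0.0] * order
    buf = [0.0] * order
    out = []
    for d, x in zip(primary, reference):
        buf = [x] + buf[:-1]                           # newest sample first
        y = sum(wi * xi for wi, xi in zip(w, buf))     # noise estimate
        err = d - y
        # weight update W(k+1) = W(k) + 2*mu*e(k)*X(k), as in (143.5)
        w = [wi + 2.0 * mu * err * xi for wi, xi in zip(w, buf)]
        out.append(err)
    return out

random.seed(0)
N = 4000
s = [math.sin(2 * math.pi * 0.01 * k) for k in range(N)]        # useful signal
n1 = [random.gauss(0.0, 1.0) for _ in range(N)]                 # reference noise
n0 = [0.8 * n1[k] - 0.3 * (n1[k - 1] if k else 0.0)             # correlated noise
      for k in range(N)]
e = lms_cancel([s[k] + n0[k] for k in range(N)], n1)
mse = sum((a - b) ** 2 for a, b in zip(e[-1000:], s[-1000:])) / 1000
print(f"residual MSE after convergence: {mse:.4f}")
```

After convergence the output e tracks the clean signal s, with only the small surplus mean square error that the response feedback discussion above attributes to gradient noise.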
Earlier references (He et al. 2000; Cichocki and Unbehauen 1996; Miller et al.
1990) commonly adopted variable step-size algorithms to counteract the response
feedback phenomenon. This paper takes another approach and improves the weight
iteration, thereby weakening or eliminating the effect of response feedback.

From (143.3)–(143.5) we can see that the surplus mean square error accrues from
the error term $-2s(k)X(k)$ in the instantaneous gradient. The weight-iteration
formula (143.5) can therefore be rewritten to suppress this term.
Based on the above idea and on adaptive spectral line enhancement, a new adaptive
noise cancellation controller is proposed, shown in Fig. 143.2. It consists of two
parts: high-frequency noise cancellation and low-frequency noise cancellation.

In the first part, high-frequency noise cancellation, the theory of adaptive
spectral line enhancement tells us that when the mixed signal passes through the
delay z^{-4}, the output of adaptive filter 1 contains only the narrowband
component, provided the delay time is longer than the reciprocal of the broadband
bandwidth but shorter than that of the narrowband. The reason is that the
autocorrelation of the narrowband component persists longer than that of the
broadband component.

The reference input can then be regarded as a predictive estimate of the error
e1. The difference e2 is used to adjust adaptive filter 1 so as to cancel the
high-frequency noise.
[Fig. 143.2 (detail): low-frequency noise cancellation stage with adaptive filters 2 and 3 and error signal e3.]
143.5 Simulation

Figure 143.4 shows the noise filter; the noise variance σ² is 3. The mixed signal
is shown in Fig. 143.5.

The result of the common adaptive noise cancellation controller is shown in
Fig. 143.6; its effect is not ideal. Figure 143.7 shows the result of the adaptive
noise cancellation controller proposed in this paper. Comparing Fig. 143.6 with
Fig. 143.7, we can see that the improved controller is obviously better than the
common one: its waveform is much smoother, with fewer burrs of much smaller
amplitude.
The simulation shows that the adaptive noise cancellation controller proposed in
this paper removes noise effectively, and its result is better than that of the
common controller.
References
Cichocki A, Unbehauen R (1996) Robust neural networks with On-line learning for blind
identification and blind separation of sources. IEEE Trans Circuits Syst 43:894–906
Dai Y-s (1994) Weak signal detection methods and instruments. Defense Industry Press, Beijing,
pp 50–51
Doherty JF, Porayath R (1997) A robust echo canceler acoustic environments. IEEE Trans
Circuits Syst 44:389–398
Evinson NL (1946) The Wiener RMS error criterion in filter design and prediction. J Math Phys
25:261–278
Hassoun MH (1995) Fundamentals of artificial neural networks. The MIT Press, Cambridge,
pp 126–150
Haykin S (1994) Neural networks. Macmillan College Publishing Company, New York,
pp 260–264
He Z-y (2002) Adaptive signal processing. Science Press, Beijing, pp 67–68
Jiang M-f, Zheng X-l, Peng C-l (2001) The new variable step-size LMS-type algorithm and its
application during adaptive noise cancellation. Signal Process 17(3):282–286
Larimore MG et al (1978) Adaptive canceling using SHARF. In: Proceedings of the 21st Midwest
symposium on circuits, pp 30–32, Aug 1978
Miller WT, Sutton RS, Werbos P (1990) Neural networks for control. MIT Press, Cambridge,
pp 255–260
Shen F-m (2001) Adaptive signal processing. Xidian University Press, Xi’an, pp 80–81
Wu W (2001) Study of adaptive noise cancellation in enhancement of speed. M.S. thesis, Xidian
University, Xi’an
Yang J-x, Zhou S-y (1998) Simulation of adaptive noise canceller based on neural network.
J Date Acquis Process 13:74–77
Zhang Q, Feng C-q (2003) Variable step-size LMS algorithm and its application in adaptive noise
cancellation. Modern Electron Technol 14:88–90
He Z-y, Liu J, Yang L-x (2000) Blind separation of images using Edgeworth expansion
based ICA algorithm. Chin J Electron 3(8):436–439
Chapter 144
Study of Cost-Time-Quality in Project
Failure Risk Assessment Based
on Monte Carlo Simulation
Abstract In order to analyze project failure risk, a quality factor is first added
to the cost-time joint risk assessment; it is represented by the degree of
deviation and expressed by the 2-norm. Second, considering the cost-time-quality
factors jointly, a joint distribution model of cost and time based on Monte Carlo
simulation is established, and the definition of the project failure risk value is
given. Finally, an example based on the Program Evaluation and Review Technique
(PERT) is used to simulate and analyse project failure risk through Monte Carlo
Simulation (MCS).

Keywords Cost-time-quality · Engineering project failure risk · Monte Carlo
Simulation · Degree of deviation
144.1 Introduction
Management of a project typically covers three aspects: cost, time and quality
(Oisen 1971). The project's goal is to achieve the expected quality performance
requirement within the specified time and the approved budget; cost, time and
quality influence one another.
In 1996, Babu and Suresh adopted a continuous scale from zero to one to specify the
quality attained at each activity; they developed optimization models and presented
an illustrative example (Babu and Suresh 1996). In 2006, Xu, Wu and Wang determined
the conditional percentile ranking of schedule (or cost) values with an integration
method combining Monte Carlo multiple simulation analysis, regression analysis and
statistical analysis (Xu et al. 2006). Gao, Hu and Zhong built a mathematical model
for the joint optimization of time, cost and quality (Gao et al. 2007). In 2009,
Xu, Wu and Jia formed the marginal and conditional probability distribution
functions of cost and schedule and analyzed the simulation outputs (Xu et al.
2009). In 2012, Kim, Kang and Hwang proposed a mixed integer linear programming
model that considers the potential quality loss cost (PQLC) for excessive crashing
activities (Kim et al. 2012). The main idea of Monte Carlo Simulation is to
estimate the quantity of interest by randomly simulating system reliability and
risk behaviors (Dubi 1998, 2000; Yang and Sheng 1990; Marseguerra and Zio 2000).
This paper builds the model with the Arena software and analyses the results with
Excel to calculate the project failure risk value, drawing on probability theory
and MCS.
Quality has an important influence on project risk. In this paper, the author
proposes a method to quantify quality: quality gradually increases as cost and time
increase, and decreases as they decrease. The failure risk reaches its maximum when
cost and time are smallest, which leads to the minimum project quality; on the
contrary, the failure risk reaches its minimum when cost and time are largest,
which leads to the maximum project quality.

Figure 144.1 is a scatter plot of the cost and time data; the horizontal and
vertical axes are the dimensionless cost C and time T. $X(C_X, T_X)$ is a cost-time
point of the project, $O(C_O, T_O)$ is the minimum of all data points, and
$A(C_A, T_A)$ is the maximum, so O and A define the range of cost and time values.
The quality of the project is therefore defined as the degree of deviation of the
project's value point from the minimum of cost and time: the farther the deviation,
the higher the quality; the closer, the lower the quality. The quality value of any
point in Fig. 144.1 is defined as follows:
$$Q_X = \frac{\|OX\|_2}{\|OX\|_2 + \|XA\|_2} \tag{144.1}$$

$\|OX\|_2$ and $\|XA\|_2$ are the 2-norms (distances) of OX and XA, defined
respectively as

$$\|OX\|_2 = \sqrt{(C_X - C_O)^2 + (T_X - T_O)^2} \tag{144.2}$$

$$\|XA\|_2 = \sqrt{(C_A - C_X)^2 + (T_A - T_X)^2} \tag{144.3}$$
As seen above, the quality of the project is quantified by a number between 0 and 1.
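Formulas (144.1)–(144.3) reduce to a few lines of code. The sketch below uses the paper's corner values C_O = 0.90, T_O = 0.59, C_A = 1.07, T_A = 1.19 as defaults; the sample point at the end is an arbitrary illustration.

```python
import math

def quality(C_x, T_x, C_o=0.90, T_o=0.59, C_a=1.07, T_a=1.19):
    """Quality per (144.1)-(144.3): ratio of the distance from the
    minimum point O to the total O->X plus X->A distance.
    Yields 0 at O (worst) and 1 at A (best)."""
    ox = math.hypot(C_x - C_o, T_x - T_o)   # ||OX||_2, eq. (144.2)
    xa = math.hypot(C_a - C_x, T_a - T_x)   # ||XA||_2, eq. (144.3)
    return ox / (ox + xa)                    # eq. (144.1)

print(quality(0.90, 0.59))        # at O: quality 0.0
print(quality(1.07, 1.19))        # at A: quality 1.0
print(round(quality(1.0, 0.9), 3))
```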
Assume $k = 1, 2, \ldots, N$ simulation runs, and let $C^{(1)}, C^{(2)}, \ldots,
C^{(N)}$ and $T^{(1)}, T^{(2)}, \ldots, T^{(N)}$ be the output results for the
total cost C and time T from the N simulations. Let $a_{ij}$ be the frequency of
results that fall into the shaded area in Fig. 144.2. Then the joint probability
distribution of the project over the region $[C_1, C_i] \times [T_1, T_j]$ is

$$F(C_i, T_j) = P(C \le C_i \cap T \le T_j) = \frac{1}{N}\sum_{g=1}^{i-1}\sum_{h=1}^{j-1} a_{gh} \tag{144.4}$$

where $\sum_{g=1}^{i-1}\sum_{h=1}^{j-1} a_{gh}$ is the cumulative frequency of the
shaded area in Fig. 144.2. The joint failure risk probability is then

$$P_{CT} = P(C_i, T_j) = P(C > C_i \cup T > T_j) = 1 - F(C_i, T_j) \tag{144.5}$$
In this paper, considering the three factors of cost, time and quality, the failure
risk value of the project is given by

$$R_i = P_{CT_i} \cdot Q_i \tag{144.6}$$

where $R_i$ is the failure risk value from the $i$th simulation, $P_{CT_i}$ is the
joint failure risk probability of cost and time from the $i$th simulation, and
$Q_i$ is the quality value from the $i$th simulation.
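Equations (144.4)–(144.6) can be sketched on synthetic data. The triangular cost and time distributions, the thresholds, and the fixed quality value below are assumed for illustration; the paper computes these quantities from its Arena simulation outputs.

```python
import random

def joint_cdf(costs, times, c, t):
    """Empirical F(c, t) = P(C <= c and T <= t) over N runs,
    the sample counterpart of (144.4)."""
    n = sum(1 for C, T in zip(costs, times) if C <= c and T <= t)
    return n / len(costs)

random.seed(0)
N = 2500
costs = [random.triangular(80, 110, 95) for _ in range(N)]   # assumed units
times = [random.triangular(15, 25, 19) for _ in range(N)]

F = joint_cdf(costs, times, 95, 20)
P_ct = 1.0 - F        # joint failure risk probability, eq. (144.5)
Q = 0.6               # quality value from (144.1), taken as given here
R = P_ct * Q          # project failure risk value, eq. (144.6)
print(round(F, 3), round(P_ct, 3), round(R, 3))
```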
1364 X. Pan and Z. Xin
[Fig. 144.2: grid over the cost-time plane; cell (i, j) holds the frequency $a_{ij}$, and the shaded region accumulates $\sum_{g=1}^{i-1}\sum_{h=1}^{j-1} a_{gh}$.]
This paper analyzes a project network plan with nine activities (including a
virtual activity); Fig. 144.3 shows the network plan.

Table 144.1 gives the cost and time data for the activities of the network plan.
The cost, duration and quality estimates of each activity are assumed to be random
variables following the triangular probability distribution TRIA(a, m, b), where
a is the most optimistic value, m the most likely value and b the most pessimistic
value (Table 144.1).
(2) Assessment of the Project Failure Risk Value

Cost-time joint failure risk probability. The 2500 groups of output data for
project cost and schedule are obtained through 2500 independent repeated
simulations with the Arena software.
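A minimal version of the Monte Carlo experiment can be sketched in Python in place of Arena. The activity network and triangular parameters below are hypothetical, standing in for Fig. 144.3 and Table 144.1, which are not reproduced here.

```python
import random

# Hypothetical activity data: (t_lo, t_mode, t_hi, c_lo, c_mode, c_hi).
# Illustrative numbers only, NOT the data of Table 144.1.
ACTS = {
    "A": (4, 5, 7, 80, 100, 130),
    "B": (3, 4, 6, 50, 60, 80),
    "C": (6, 8, 11, 120, 150, 200),
    "D": (2, 3, 5, 30, 40, 60),
}
PATHS = [("A", "C"), ("B", "C", "D")]   # assumed network paths

def simulate_once():
    """One replication: draw each activity's duration and cost from its
    triangular distribution; project time = longest path, cost = sum."""
    dur = {a: random.triangular(v[0], v[2], v[1]) for a, v in ACTS.items()}
    cost = {a: random.triangular(v[3], v[5], v[4]) for a, v in ACTS.items()}
    T = max(sum(dur[a] for a in p) for p in PATHS)
    C = sum(cost.values())
    return C, T

random.seed(1)
samples = [simulate_once() for _ in range(2500)]
mean_cost = sum(c for c, _ in samples) / 2500
mean_time = sum(t for _, t in samples) / 2500
print(f"mean cost {mean_cost:.1f}, mean time {mean_time:.1f}")
```

Each replication yields one (cost, time) pair; 2500 such pairs feed the frequency counts $a_{ij}$ and the empirical joint distribution of (144.4).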
Fig. 144.4 The frequency histogram of the results of cost and time
$$T'_j = \frac{T_j}{\mu_t}, \quad j = 1, 2, \ldots, n \tag{144.8}$$

$C_i$ and $T_j$ are the cost and time values of the $i$th and $j$th groups of data;
$C'_i$ and $T'_j$ are the dimensionless data. According to formulas
(144.1)–(144.3), the quality value of each group can then be obtained. In this
paper $C_O = 0.90$, $T_O = 0.59$, $C_A = 1.07$, $T_A = 1.19$.
The data are then ordered by sorting the joint failure risk probability values from
small to large. According to formula (144.9), the position of the failure risk
probability corresponding to the $p$th confidence percentile is

$$k = \mathrm{int}\left[\frac{p}{100} \times 2500\right] \tag{144.9}$$

from which the estimate of the joint failure risk probability at the corresponding
confidence percentile is obtained.
For example, the 95th confidence percentile datum is located at position k = 2375;
the cost is 870,508 yuan with a time of 20.7264 months, and the joint failure risk
probability is 0.1568. Table 144.4 shows the 80th, 85th, 90th and 95th confidence
percentiles of the joint failure risk probability of cost and time, together with
the corresponding cost and time values (Table 144.4).
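Formula (144.9) can be checked directly; for p = 95 and 2500 runs it yields the position k = 2375 quoted above.

```python
def percentile_position(p, n=2500):
    """k = int(p/100 * n), the order position of the p-th confidence
    percentile among n sorted simulation results, eq. (144.9)."""
    return int(p * n / 100)

print(percentile_position(95))                         # -> 2375
print(percentile_position(80), percentile_position(90))
```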
Project failure risk value. According to formula (144.6), the project failure risk
values of the 2500 groups of data can be calculated. Table 144.5 shows the
statistical results of ten randomly selected groups of data (Table 144.5).
It can be seen from the above data that the quality factor, which has a direct
impact on the failure risk of a project, cannot be simply ignored.
Studying ‘‘risk’’ or ‘‘failure’’ rather than ‘‘success’’ is a common approach
today: by estimating the failure risk value, project failure risk in engineering
can be kept as low as possible to minimize losses.
This paper establishes a quantitative method for the quality factor and derives the quality formula. The cost and time joint failure risk probability is estimated by Monte Carlo simulation, statistical analysis and confidence-percentile estimation. Finally, the project failure risk value is calculated and analyzed by adding the quality value.
Directions for further improvement are as follows.
1368 X. Pan and Z. Xin
The quantification of the quality factor. The quality factor in real situations is very complex, and whether it can be adequately estimated by the proposed formula is not yet known; the authors will continue to seek a better quantitative method for quality.
The project failure risk value of cost, time and quality is not a true joint probability value. The next step is therefore to study joint risk assessment methods covering all three factors of a project, so as to better predict the actual situation.
Acknowledgments This work is supported by the National Natural Science Foundation of China
under Grants No. 70901004/71171008 and the Fundamental Research Funds for the Central
Universities.
References
Babu AJG, Suresh N (1996) Project management with time, cost, and quality considerations. Eur J Oper Res 88:320–327
Dubi A (1998) Analytical approach & Monte Carlo methods for realistic systems analysis. Math
Comput Simul 47:243–269
Dubi A (2000) Monte Carlo applications in system engineering. Wiley, Chichester
Gao XF, Hu CS, Zhong DH (2007) Study synthesis optimization of time-cost-quality in project
management. Syst Eng Theory Pract 10:112–117
Kim JY, Kang CW, Hwang IK (2012) A practical approach to project scheduling: considering the potential quality loss cost in the time–cost tradeoff problem. Int J Proj Manag 30:264–272
Marseguerra M, Zio E (2000) System unavailability calculations in biased Monte Carlo
simulation: a possible pitfall. Ann Nucl Energy 27:1577–1588
Oisen RP (1971) Can project management be defined? Proj Manag Q 1:12–14
Xu Z, Wu JJ, Wang YQ (2006) Confidence percentile estimation to cost and schedule integration
based on Monte Carlo multiple simulation analysis technique. J Syst Simul 18:3334–3337
Xu Z, Wu JJ, Jia ZJ (2009) Estimation of risk joint probability to cost and schedule integration
based on joint probability distribution theory. J Syst Eng 24:46–53
Yang WM, Sheng YX (1990) System reliability digital simulation. BeiHang University Press,
Beijing
Chapter 145
Study on Innovation and Practice
of Independent Auto Companies Lean
Digital Factory Building
Abstract Lean Thinking, extracted from the basis of lean production, is a theory suitable for all industries: it prompts managers to rethink business processes, eliminate waste and create value, and has so far entered the fields of design, manufacturing, logistics, procurement, sales and operations management. Digital technology is a key technology for realizing knowledge-based, automated, flexible enterprises and their rapid response to the market, and has achieved good economic benefits in design optimization, fault diagnosis, intelligent detection, system management, scheduling optimization, resource allocation and other aspects across industries. In this paper, a lean digital factory solution is proposed on the basis of analyzing the problems of multiple production lines, multiple plants, multiple brands, short cycles and low cost faced by a domestic independent car manufacturer. A lean digital manufacturing framework model is built on applications of information technology and digital technology; the advantages of lean digital manufacturing in creating value, improving resource utilization and enhancing the competitiveness of enterprises are verified through case studies; and experience and effective measures for developing an enterprise's lean digital manufacturing are presented.
The core of Lean Thinking is to reduce costs by completely ruling out waste. "Lean" means concise and economical: do not invest extra factors of production, and produce just the necessary quantity the market needs (or the products the next process urgently needs) at just the right time, while keeping all business activities beneficial and effective. Automotive manufacturing digitization is a general term for applying digital technology to factory design and construction, product process planning, and actual manufacturing and management processes, so as to increase manufacturing efficiency and product quality, reduce manufacturing costs, and optimize design and improve manufacturing processes through information modeling, simulation, analysis and information processing. It includes digital product definition (Product), digital process planning (Process), digital factory layout planning (Plan), digital management of workshop production (Production) and digital manufacturing resources (Resource, including digital equipment such as CNC machining centers and robots, tools, tooling and operators). A factory that carries out such comprehensive digital activities is called a Digital Factory. Factory lean digitization applies lean thinking to digital factory construction and operation, thereby creating products that meet user requirements: the right product, at the right time, at the right price. Lean digital manufacturing achieves an overall upgrade in T, Q and C: T (Time) refers to continuously adapting to the fierce competition brought by the product development speed of internationally advanced enterprises; Q (Quality) refers to quality improvements over the whole process from drawings to physical vehicles; C (Cost) refers to moving late product design forward into the product development process, thus avoiding the massive cost waste caused by late design changes, repeated rework and production preparation.
Research shows that digital technology has been widely used in many advanced domestic and foreign enterprises. Many research institutes and enterprises have adopted digital factory programs of different scope: the CIM Institute of Shanghai Jiaotong University applied eM-Plant and Deneb virtual factory simulation software in the technological transformation project of the engine factory production line of Shanghai Volkswagen Automotive Co., Ltd.; a digital factory platform is used as the digital base for aviation manufacturing enterprises at the Modern Design and Integrated Manufacturing Technology Laboratory of Northwestern Polytechnical University; in production engineering and manufacturing process management, Tecnomatix, a world leader, applied its eMPower series of software products, including the industry-leading virtual factory simulation software eM-Plant (SIMPLE++), to the modeling, simulation and optimization of production lines and production systems in factories of all sizes (even large multinational corporations); product processes are verified with eM-Assemble software in simulated assembly during early product development; Dassault has designed and built a car production line with a cycle cut in half compared with traditional CAD technology; and GM has applied DENEB software to the assembly optimization design of a luxury car factory. These digitization projects have yielded very good economic benefits (Liu 2002; Li et al. 2008; Zhai et al. 2004; Tecnomatix Corporation website 2013; http://www.longsea-tech.com/eMPower.htm; Liao et al. 2004; Beit-On 2002). However, many domestic enterprises are still at a primary stage in understanding and applying digitization, and realizing digital manufacturing still requires substantial investment of time, personnel and funding, as well as more scientific and comprehensive planning (Shao et al. 2000; Pi 2002).
FAW Car Co., Ltd. (hereinafter referred to as FAW Car) is one of the important independent enterprises. In implementing its "12th Five-Year" development strategy, it faces multiple challenges and pressures: (1) many production lines and factories, including local, remote and overseas factories; (2) development and production of multi-brand (both co-branded and own-brand) vehicle models; (3) short cycles: market competition requires the company to achieve rapid product development and mass production; (4) low cost: the company must fully verify the manufacturability of a product before manufacturing, to avoid design changes at a later stage. With the development of the company, products are gradually being updated, and market competition requires the company to adjust its structure, change its mode of production from manual to automatic, reduce design changes, respond to abnormal situations, and solve problems of resource waste and project tardiness.
In order to survive and develop amid intense global market competition, FAW Car has identified the strategic objective of building a digital manufacturing system and has decided to change its original extensive growth mode: in early production preparation, extensively use virtual manufacturing simulation software to provide a reliable technical basis for late production preparation; and extensively use information network technology to construct multi-functional information systems that provide tools for factory management and office automation, ensuring quality, schedule and cost optimization of production preparation and volume manufacturing. It established a lean digital factory framework model composed of one goal, two mains and two bases (as shown in Fig. 145.1).
1372 Y. Wang et al.
The one goal is to build a lean digital manufacturing system whose overall objective is to completely exclude unreasonable waste. The two mains are the management information system and the virtual design and manufacturing system, which realize and enhance Lean Thinking and industrial engineering, including: (1) digital lean design and manufacturing (comprising, first, virtual product design, whose process is product design, CAE engineering analysis, CAE process analysis and virtual product testing; and second, virtual process evaluation and simulation, including stamping process simulation, welding robot simulation with logistics simulation, painting simulation and offline programming, assembly ergonomics and logistics simulation, tolerance allocation management simulation, and engine machining and manufacturing simulation); (2) informational lean management and manufacturing (the development of management information systems, such as MES production control systems, ERP/DRP enterprise resource management systems, PDM collaborative development and management systems, DMS/CRM dealer and customer relationship management systems, production operations and knowledge management systems, and collaboration and decision support
The main production materials for mass-production stamping are the molds, equipment and sheet plates; the production process is characterized by downtime events, their causes and large data volumes. Since 2010, the FAW Car stamping plant has been building a production and operation knowledge management system model based on information technology (Fig. 145.3), drawing on lean thinking, knowledge management methods and the production operations management experience accumulated over the years. The model is divided into three levels: data management, information management and knowledge management, specifically realized by building a "working platform" and forming "business experts" and a "management consultant". The stamping operation knowledge management information platform includes daily production management, statistical analysis of production data, mold management, problem management,
accumulated more than 1,000 mold problems and downtime analyses, identified the core business of the stamping plant, provided a platform for accumulating and passing on workshop expertise, and provided support for the workshop's lean production management.
This case illustrates that using informational means in the product manufacturing management process can effectively improve work efficiency, reduce waste, accumulate core business knowledge and improve the quality of staff, ensuring high-efficiency, low-cost production operation.
FAW Car then began to apply digital simulation technology to welding process and logistics planning; the software included the ProcessDesigner (process planning), ProcessSimulate (process simulation), Plant (logistics simulation) and RobCAD (simulation and offline programming) modules. The planned No. 2 factory welding shop has a production capacity of 200,000 vehicles per year and must continuously and randomly produce more than four models, so the various welding process plans were simulated digitally. A typical case is the simulation and verification of floor welding logistics.
The process plan improved by the technical personnel is shown in Fig. 145.3: A and B are the front and rear floor cache areas. The simulation showed that if the location appliances are manually dragged to area D, the workers' workload is moderate (balanced production at 50–70 %, with a limit case of 70–85 %), so the plan is reasonable (Fig. 145.4).
This case illustrates that applying digital factory technology to welding process and logistics design can optimize the process and logistics design, improve labor efficiency and reduce the waste of reworking the workshop during production.
FAW Car's rapid development has been, first of all, a process of learning and developing lean thinking: the company established the philosophy of "being correct the first time", pursued "zero defects", took "customers first" as its origin and manufactured first-class products. To make a profit, a company must have a long-term strategic vision and focus on investing in new technology and talent training; it can thereby reduce design and manufacturing defects, thoroughly eliminate wasteful activities, and guarantee mutual benefit among the business, its employees and its partners. FAW Car takes lean thinking and digital factory building as part of its strategy "to create a one-million-vehicle international passenger car business unit", enhancing independent innovation and system capacity and shifting from extensive management to fine management, from "fuzzy management" and "chaos management" to "precision management".
FAW Car previously had no digital plant technology capability and could only rely on its product partner Mazda to complete large-scale production line process planning (such as the welding and assembly planning of the M1 and M2 production lines); this externally controlled technology resulted in very high manufacturing costs. Being "independent" means that FAW Car cannot simply copy the digital factory technology of advanced foreign enterprises; therefore, "digital manufacturing" could only be planned and implemented in different areas, at different stages and to different degrees, ultimately achieving the generalization and integration of information transmission, use and management across all fields of the manufacturing system. In the first stage, the TECNOMATIX/eMPower body planning and logistics planning systems were introduced, and the modeling of "digital manufacturing", the establishment of working ideas, and the architecture design of the technology library and repository and other preliminary work were completed. The second stage consisted of "island-style" applications of "digital manufacturing", gradually improving the system's functions and processes to create the conditions and lay the foundation for the subsequent planned "digital" projects. These two stages have been completed. In the third stage, the data management platform will be unified, proficiency with the digital factory software applications will be raised, the technology islands will be networked across all areas, and interfaces and data sharing will accordingly be realized.
In this paper, the importance of FAW Car's digital manufacturing system is illustrated on the basis of the model and cases, providing useful lessons for other auto manufacturers: (1) lean ideas, information technology and digital technology are important means for auto enterprises to promote product updates, develop production and improve international competitiveness; (2) a lean digital manufacturing system is an effective guarantee of multi-line, multi-plant, multi-brand, short-cycle and low-cost production; (3) if independent auto enterprises comprehensively implement lean digital factories, they will continuously promote the transformation of the enterprise's economic growth mode from an extensive, technology-introduction type to an intensive, innovation type; (4) a lean digital manufacturing system enables enterprises to continuously increase efficiency, reduce cost, improve quality and build capability, enhancing the system's core ability, forming the enterprise's core competitiveness and building its own excellence.
References
Beit-On H (2002) Delmia-tecnomatix—the duel for the digital factory. Promising Market, 5
http://www.longsea-tech.com/eMPower.htm
Li S, Yang T, Chen B et al (2008) Digital factory technology and application in aeronautical
product R&D. Aeronaut Manufact Technol 19:46–49
1378 Y. Wang et al.
Liao F, Zhang L, Xiao T et al (2004) An interactive assembly process planner. Tsinghua Sci
Technol 9(2):219–226
Liu H (2002) Study on equipment layout of multi-production line plant (Chinese). Shanghai Jiao
Tong University, vol 2
Pi X-Z (2002) Research and application of assembly line balancing and simulation technology
(Chinese). Shanghai Jiao Tong University, vol 1
Shao L, Yan J-Q, Ma D-Z, Zhong T-X et al (2000) Virtual integrated design of production line.
Institute of Computer Integrated Manufacturing, Shanghai Jiao Tong University, vol 6
Tecnomatix Corporation website (2013) Available at www.tecnomatix.com
Zhai W-B, Chu X-N, Ma D-Z, Jin Y, Yan J-Q et al (2004) Planning process modeling of a virtual
factory. Institute of Computer Integrated Manufacturing, Shanghai Jiao Tong University, vol
38(6), pp 862–865
Chapter 146
The Application of Kernel Estimation
in Analysis of Crime Hot Spots
Yan-yan Wang, Zhi-hong Sun, Lu Pan, Ting Wang and Da-hu Zhang
Abstract In order to analyze crime hot spots, we use kernel estimation. The choice of kernel function and bandwidth is critical in kernel density estimation, as it decides the accuracy of the estimate. We choose the Gauss kernel and further obtain the optimal bandwidth in the sense of the mean integrated squared error (MISE). Using kernel estimation, not only can we calculate the density of crime in a region, but we can also accurately show the areas of relatively high crime density and obtain the maximum point from the information about previous criminal spots. Last, we use kernel estimation to predict the 11th criminal location of Peter Sutcliffe, "the Yorkshire Ripper", based on the previous criminal locations in his serial murders. We obtain the range of the criminal hot zone: Latitude 53.6875–53.8125 N; Longitude 1.775–1.815 W. In fact, the coordinate of Peter's 11th criminal location is (53.817 N, 1.784 W). From this, it can be seen that our estimate is relatively accurate.
Keywords Bandwidth · Crime hot spots · Kernel estimation · Kernel function · MISE
146.1 Introduction
into the coordinate point. Criminologists observe the spatial distribution of crime, for example whether it is clustered, regular, dispersed or random. Through the analysis of crime hot spots, we can look for the gathering place of the point group and its identifiable range, and further understand its formation causes and possible impact. The commonly used analysis methods for crime hot spots are the grid-counting method, the nearest-neighbor distance method and Moran's I. The advantage of the kernel estimation method is that it regards the crime space as a core site: not only can we calculate the density of crime in the region, but we can also accurately show the areas of relatively high crime density.
146.2 Model
In estimating the density function of X, suppose each sample acts like a small radiating light bulb serving X, whose contribution is related, in a certain sense, to its distance from the samples $X_1$ to $X_n$: the farther the distance, the weaker the light intensity. With this in mind, the kernel function should be chosen according to the distance from X to each sample $X_i$, decreasing as the distance grows. The estimate $\hat{f}_{h_n}(x)$ is insensitive to the particular kernel $K(x)$, so any kernel satisfying the required conditions is suitable. A kernel function is symmetric about the origin and satisfies $\int K(u)\,du = 1$. The Epanechnikov kernel, bisquare kernel and Gauss kernel are in common use (Silverman 1986). We might as well suppose
$$K(x) = \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{x^2}{2}\right) \qquad (146.2)$$
Then $\hat{f}_{h_n}(x) = \dfrac{1}{n h_n} \sum\limits_{i=1}^{n} \dfrac{1}{\sqrt{2\pi}} \exp\left[-\dfrac{(x - x_i)^2}{2 h_n^2}\right]$.
" #
^ 1 Xn
1 ðx xi Þ2
f ð xÞ ¼ 4
pffiffiffiffiffiffi exp 2 ð146:5Þ
hn 1:06 Sn n5 i¼1 2p 1:1236 S2n n5
Using this function, we can obtain the maximum point $X_{n+1}$ from $\{X_i\}_{i=1}^{n}$, the information about the previous criminal spots. The spot represented by the maximum point is the most probable criminal location.
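As an illustration of the estimator (146.5) with the Silverman bandwidth, the maximum point can be located by a simple grid search; the coordinates below are hypothetical, not the case data, and `densest_point` is an illustrative helper rather than the authors' implementation.

```python
import math
import statistics

def silverman_bandwidth(xs):
    # Rule-of-thumb bandwidth h_n = 1.06 * S_n * n^(-1/5) used in (146.5).
    return 1.06 * statistics.stdev(xs) * len(xs) ** (-1 / 5)

def gauss_kde(x, xs, h):
    # Gaussian-kernel estimate:
    # f_hat(x) = (1/(n h)) * sum_i (1/sqrt(2 pi)) * exp(-(x - x_i)^2 / (2 h^2))
    n = len(xs)
    s = sum(math.exp(-((x - xi) ** 2) / (2 * h * h)) for xi in xs)
    return s / (n * h * math.sqrt(2 * math.pi))

def densest_point(lats, lons, steps=50):
    """Grid-search the maximum of the product of per-axis kernel densities;
    the peak approximates the most probable next criminal location."""
    h_lat = silverman_bandwidth(lats)
    h_lon = silverman_bandwidth(lons)

    def grid(xs):
        lo, hi = min(xs), max(xs)
        return [lo + (hi - lo) * i / steps for i in range(steps + 1)]

    return max(
        ((la, lo) for la in grid(lats) for lo in grid(lons)),
        key=lambda p: gauss_kde(p[0], lats, h_lat) * gauss_kde(p[1], lons, h_lon),
    )

# Hypothetical coordinates, not the actual case data.
lats = [53.79, 53.80, 53.81, 53.78, 53.80, 53.82, 53.79, 53.80, 53.81, 53.80]
lons = [1.55, 1.56, 1.54, 1.57, 1.55, 1.53, 1.56, 1.55, 1.54, 1.55]
print(densest_point(lats, lons))
```

A finer grid (larger `steps`) trades computation time for a more precise maximum point.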
Serial murders have a serious social impact and put people in great horror, so predicting the offender's hideout from the offender's crimes is extremely critical in solving such cases.
Peter Sutcliffe was born in Bingley, West Riding of Yorkshire on 2nd June in
1946. He was a ferocious serial killer and committed over 20 crimes just within
6 years, including 13 murders and a series of vicious attacks. He was nicknamed
‘‘the Yorkshire Ripper’’ because of his vicious criminal means.
The victims’ information is listed below (Table 146.1).
We can predict Peter's eleventh criminal location through Eq. (146.5) based on the previous criminal locations, obtaining the coordinate (53.7975 N, 1.5652 W) for the 11th location. We can also obtain the range of the criminal hot zone:
Latitude: 53.7620–53.8125 N
Longitude: 1.5124–1.5876 W
In fact, the coordinate of Peter's 11th criminal location is (53.7997 N, 1.54917 W). From this, it can be seen that our estimate is relatively accurate (Fig. 146.1).
(Fig. 146.1).
Fig. 146.1 Peter’s predicted next possible criminal spot based on Kernel Density Estimation
We take the crimes Peter committed from 1975 to 1977 in Bingley (West Riding of Yorkshire), his former hideout, as an example. Peter is a "marauder" offender: his criminal spots, centered on his stable hideout, were scattered around it. It is also found that his criminal spots were not regularly distributed but concentrated in certain areas.
From Fig. 146.2, we find that more crimes were committed in the elliptic region, up to 5 (2 crimes committed at one point), which coincides with the criminologist David Canter's opinion that a criminal chooses familiar locations to commit his crimes. The offender repeatedly committed crimes in this small area, which reflects his desire to gain control over the criminal spots. Once he succeeded, the offender would become confident in his scheme and repeat his crime at the same spot (Becker et al. 1988). More police force should therefore be deployed in this area.
The police agency is much interested in finding out the next criminal spot of the offender in serial criminal cases.
Given a series of crimes at locations $X_1, X_2, \ldots, X_n$ committed by a single serial offender, we are to estimate the probability $P(X_{next} \mid X_1, X_2, \ldots, X_n)$ of the next criminal spot $X_{next}$, based on the Bayesian model of Mike O'Leary (Levine and Block 2011). Using Bayes' theorem, we obtain the expression
$$P(X_{next} \mid X_1, X_2, \ldots, X_n) \propto \iiint P(X_{next} \mid z, \alpha)\, P(X_1 \mid z, \alpha)\, P(X_2 \mid z, \alpha) \cdots P(X_n \mid z, \alpha)\, H(z)\, \pi(\alpha)\, dz^{(1)}\, dz^{(2)}\, d\alpha.$$
In reality, it is not easy to estimate the values of $H(z)$ and $\pi(\alpha)$. Moreover, even when estimates of $H(z)$ and $\pi(\alpha)$ are given, it is still not easy to obtain the result of the triple integral (Scott 1992).
To solve the problem, we discretize the continuous process and then use numerical methods to obtain the probability $P(X_{next} \mid X_1, X_2, \ldots, X_n)$.
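The discretization can be sketched as follows, under loudly stated assumptions: a hypothetical isotropic Gaussian distance-decay form for P(X | z, α), and uniform H(z) and π(α), none of which are fixed by the paper at this point.

```python
import math

def gauss2(x, z, a):
    # Hypothetical distance-decay model P(X | z, alpha): an isotropic
    # Gaussian centred on the hideout z with spread alpha (an assumption).
    d2 = (x[0] - z[0]) ** 2 + (x[1] - z[1]) ** 2
    return math.exp(-d2 / (2 * a * a)) / (2 * math.pi * a * a)

def next_spot_posterior(crimes, grid, alphas):
    """Discretized version of the triple integral: for each candidate next
    spot x, sum P(x|z,a) * prod_i P(x_i|z,a) over a grid of hideouts z and a
    small set of alpha values. H(z) and pi(a) are taken uniform here, so
    they drop out up to the final normalization."""
    def weight(z, a):
        w = 1.0
        for c in crimes:
            w *= gauss2(c, z, a)
        return w

    post = {}
    for x in grid:
        post[x] = sum(gauss2(x, z, a) * weight(z, a)
                      for z in grid for a in alphas)
    s = sum(post.values())
    return {x: p / s for x, p in post.items()}

crimes = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]  # hypothetical spots
grid = [(i / 4, j / 4) for i in range(5) for j in range(5)]
alphas = [0.5, 1.0, 2.0]
post = next_spot_posterior(crimes, grid, alphas)
print(max(post, key=post.get))  # most probable next spot on the grid
```

Refining the mesh, as the conclusion describes, simply means enlarging `grid` (and the set of `alphas`) at a polynomial cost in grid size.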
146.6 Conclusions
Through grid meshing and refinement of the area, we set up an optimization model and a Bayesian model to construct a "geographic profile" pointing toward the offender's hideout.
References
Becker RA, Chambers JM, Wilks AR (1988) The new S language. Wadsworth & Brooks/Cole
(for S version), Pacific Grove
Chen X (1989) Non-parametric statistics. Science and Technology Press, Shanghai, pp 283–296
Lai Z (1996) Residential burglary crime Map and crime location analysis in Taipei. Resource and
Environment Institute of Geography, University of Taipei, Taipei
Levine N, Block R (2011) Bayesian journey to crime estimation: an improvement in geographic
profiling methodology. Prof Geogr 63(2):9
Rossmo DK (2000) Geographic profiling. CRC, New York
Scott DW (1992) Multivariate density estimation. Theory, practice and visualization. Wiley,
New York
Sheather SJ, Jones MC (1991) A reliable data-based bandwidth selection method for kernel
density estimation. J Roy Stat Soc B 53(3):683–690
Silverman BW (1986) Density estimation. Chapman and Hall, London
Venables WN, Ripley BD (2002) Modern applied statistics with S. Springer, New York
Worton B (1989) Kernel methods for estimating the utilization distribution in home-range
studies. Ecology 70:164–168
Chapter 147
The Research of Full-Bridge and Current
Double Rectifier Switched-Mode Power
Supply for Vehicle
Abstract The switched-mode power supply (SMPS) has many advantages, for instance high conversion efficiency and small volume. High switching frequency, the most direct and effective way to decrease the size of a switching converter, however, increases the switching loss; soft-switching technology has therefore been developed to reduce it. This paper presents a high-frequency full-bridge switching converter using zero-voltage switching technology and current-doubler rectifier technology. The steady-state model and small-signal model are built with the PWM switch model. Peak-current control mode is adopted as the control strategy. The simulation circuit of the full-bridge, current-doubler rectifier switching power supply is designed on the PSIM platform and the simulation is carried out; it proves that the model is correct and the control strategy is effective.
147.1 Introduction
Nowadays the switching power supply development tendency is: high efficiency,
low loss, miniaturization, integration, intellectualization, redundant reliability
(Middlebrook and Cuk 1977; Hua and Lee 1993). In order to reduce switch losses,
noises and improve power density, soft-switching technology under the principle
of zero-current switching (ZCS) and zero-voltage switching (ZVS) is widely used
in many applications (Liu et al. 1987; Theron and Ferreira 1995; Canesin and
Barbi 1997; Dudrik et al. 2006; Liu and Lee 1990).
The SMPS has three basic topological structures: the buck converter, the boost converter and the buck-boost converter. In this paper, the full-bridge, current-doubler rectifier SMPS is designed on the basis of the buck converter and adopts a current-doubler rectifier on the secondary side of the pulse transformer. The structure of the full-bridge, current-doubler rectifier SMPS is shown in Fig. 147.1.
Models of the SMPS are built for steady-state analysis, transient analysis and SMPS design. There are many modeling methodologies for the SMPS, including the state-space averaging method, the PWM switch model and so on (Hua and Lee 1995; Smith and Smedley 1997; Chen et al. 1991).
As in the case of continuous conduction mode (CCM), the model of the PWM
switch in DCM represents the dc and small-signal characteristics of the nonlinear
part of the converter, which consists of the active and passive switch pair (Vorperian 1990). The dc and small-signal characteristics of a PWM converter are then
obtained by replacing the PWM switch with its equivalent circuit model in a
manner similar to obtaining the small-signal characteristics of linear amplifiers
whereby the transistor is replaced by its equivalent circuit model.
Thus this paper adopts the model of PWM switch to build the steady state
model and small signal model of full-bridge and current double rectifier SMPS.
The simplified equivalent circuit structure by the model of PWM switch is shown
in Fig. 147.2.
147 The Research of Full-Bridge and Current Double Rectifier 1389
(Figs. 147.2 and 147.3: the simplified equivalent circuit contains the primary winding N_P, output capacitor C_o, sense resistors R_i1 to R_i4 and a 0.8 V reference; in the peak-current control loop, a PWM comparator compares the sensed ramp v_CT with the error voltage v_e to generate the duty cycle d.)
1390 Y. Yin et al.
Fig. 147.4 The current waveforms of the two filter inductors: a current waveform of L1, b current waveform of L2. (Each inductor current rises with slope s_d during d·T_sw and falls with slope s_{1-d}; the two waveforms are offset by T_sw/2.)
The current waveforms of the two filter inductors in the circuit are shown in Fig. 147.4. The upward slope of the filter inductor current is $s_d$ and the downward slope is $s_{1-d}$. The transfer function of the peak-current control mode can therefore be derived from Fig. 147.3:
$$H_1(S) = \frac{R}{R_i'} \cdot \frac{1}{1 + \dfrac{R\,T_{sw}}{L'} \cdot \dfrac{1}{\pi Q_p}}\; F_p(S)\, F_h(S) \qquad (147.3)$$
where
$$\begin{cases}
F_p(S) = \dfrac{1 + \frac{S}{\omega_{Z1}}}{1 + \frac{S}{\omega_{P1}}}\\[6pt]
F_h(S) = \dfrac{1}{1 + \frac{S}{\omega_n Q_p} + \frac{S^2}{\omega_n^2}}\\[6pt]
R_i' = \dfrac{N_s}{2 N_p N}\, R_i\\[6pt]
S_d' = \dfrac{\frac{N_s}{N_p} V_{in} - V_o}{L/2}\\[6pt]
Q_p = \dfrac{1}{\pi\left[(1 - D)\left(1 + \dfrac{S_a/R_i'}{S_d'}\right) - \dfrac{1}{2}\right]}
\end{cases} \qquad (147.4)$$
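As a numerical illustration of (147.3) and (147.4), the magnitude of H₁(S) can be evaluated as below; every parameter value here is a hypothetical placeholder, not a design value of this paper.

```python
import math

def H1(f, R, Ri_p, L_p, Tsw, D, Sa, Sd_p, wz1, wp1):
    """Magnitude of the peak-current-mode transfer function (147.3):
    H1(S) = (R/Ri') * 1/(1 + (R*Tsw/L') * 1/(pi*Qp)) * Fp(S) * Fh(S)."""
    S = 2j * math.pi * f
    # Qp from (147.4); Sa/Ri' is the compensation ramp expressed as a slope.
    Qp = 1.0 / (math.pi * ((1 - D) * (1 + (Sa / Ri_p) / Sd_p) - 0.5))
    wn = math.pi / Tsw  # half the switching angular frequency, rad/s
    Fp = (1 + S / wz1) / (1 + S / wp1)
    Fh = 1 / (1 + S / (wn * Qp) + (S / wn) ** 2)
    dc = (R / Ri_p) / (1 + (R * Tsw / L_p) * (1 / (math.pi * Qp)))
    return abs(dc * Fp * Fh)

# Hypothetical parameter set for illustration only (not this paper's design).
params = dict(R=2.0, Ri_p=0.1, L_p=15e-6, Tsw=1 / 100e3, D=0.45,
              Sa=0.2e6, Sd_p=1.0e6, wz1=2 * math.pi * 1e3, wp1=2 * math.pi * 200)
for f in (10, 100, 1e3, 1e4):
    print(f, round(H1(f, **params), 3))
```

The low-pass pole, ESR zero and the double pole at half the switching frequency are all visible when the magnitude is swept over frequency.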
The first step in designing an SMPS, including the power circuit and the control circuit, is to choose the right SMPS structure, control method and related SMPS technology according to the technical indicators. The technical indicators of this paper are listed in Table 147.1.
The design of the SMPS power circuit based on the technical indicators mainly includes choosing the output capacitors and filter inductors and designing the soft-switching parameters.
Essentially, the full-bridge, current-doubler rectifier SMPS is a derivation of the buck converter. The minimum filter inductance $L_m$ must be obtained using the steady-state model in (147.2):
$$L_m = R_M T_{sw} \left(1 - \frac{V_o N_p}{V_{inM} N_s}\right) \qquad (147.5)$$
where $V_{inM}$ stands for the maximum input voltage of the SMPS and $R_M$ stands for the load impedance of the SMPS at the boundary of discontinuous current mode (DCM), derived as follows:
$$R_M = \frac{V_o^2}{P_m} \qquad (147.6)$$
where $P_m$ is the minimum output power in CCM. Finally, an inductor with inductance L = 15 μH is selected as the filter inductor.
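The sizing in (147.5) and (147.6) can be sketched as follows; the ratings used are hypothetical illustrations, not the specification of Table 147.1.

```python
def critical_inductance(Vo, VinM, Np, Ns, Tsw, Pm):
    """Minimum filter inductance keeping CCM down to output power Pm,
    per (147.5), with R_M = Vo^2 / Pm from (147.6)."""
    RM = Vo ** 2 / Pm  # (147.6): load resistance at the CCM/DCM boundary
    return RM * Tsw * (1 - Vo * Np / (VinM * Ns))  # (147.5)

# Hypothetical ratings: 14 V output, 400 V max input, 10:1 turns,
# 100 kHz switching, 20 W minimum CCM output power.
Lm = critical_inductance(Vo=14.0, VinM=400.0, Np=10, Ns=1,
                         Tsw=1 / 100e3, Pm=20.0)
print(f"{Lm * 1e6:.1f} uH")  # choose L >= Lm for CCM operation
```

For these placeholder numbers the boundary inductance comes out at 63.7 μH, so a real design would pick the next convenient inductance above it.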
The key parameter in choosing the output capacitor is the equivalent series resistance (ESR) rather than the capacitance, because the ESR of the output capacitor has a much bigger impact on the output voltage ripple than the capacitance. Thus, the ESR of the output capacitor is obtained first from the output voltage ripple and the
The initial values of ESR and $C_o$ can be obtained using expression (147.10) and Table 147.1: ESR $\le$ 40.1 mΩ, $C_o \ge$ 63 μF.
When choosing capacitors, it is necessary to take into account the loss factor tan δ provided in the manufacturer's data sheets, because the ESR of a capacitor decreases as the capacitance increases.
$$\tan\delta = 2\pi f_c\, C \cdot ESR \qquad (147.11)$$
Thus, two capacitors with a withstand voltage of 63 V and a capacitance of 1500 μF are selected as the output capacitor.
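Relation (147.11) can be inverted to estimate the ESR from a datasheet loss factor. The tan δ value and the test frequency below are assumed datasheet figures for illustration, not the actual capacitor's data.

```python
import math

# Sketch of (147.11) inverted: ESR = tan(delta) / (2*pi*f_c*C).
# tan(delta) and the 120 Hz test frequency are assumed datasheet values.

def esr_from_tan_delta(tan_delta, f_c, capacitance):
    return tan_delta / (2 * math.pi * f_c * capacitance)

# Assumed: tan(delta) = 0.12 at 120 Hz for a 1500 uF electrolytic capacitor.
esr_single = esr_from_tan_delta(0.12, 120.0, 1500e-6)
esr_parallel = esr_single / 2        # two identical capacitors in parallel
print(f"single: {esr_single * 1e3:.1f} mohm, "
      f"parallel: {esr_parallel * 1e3:.1f} mohm")
```

Paralleling the two selected capacitors halves the effective ESR, which is one motivation for using two parts rather than one.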
The power circuit of the full-bridge and current double rectifier SMPS using MOSFETs is shown in Fig. 147.5.
When the voltage drop of the diode is ignored, the total time t_CD for charging or discharging the capacitor C_CD in the leading leg can be derived as follows:

t_CD ≈ 2 · C_CD · v_in / i_pk    (147.12)
where i_pk is the peak current through the primary winding of the transformer. To make sure soft switching of the leading leg is achieved, t_CD must be less than the commutation dead time t_deadCD.
t_CD ≤ t_deadCD = (5×10⁻¹¹ · R_DELCD) / (1.5 · (v_CS − v_ADS) + 1)    (147.13)
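The dead-time relation in (147.13) can be sketched numerically. The delay resistor and the CS/ADS voltages below are assumed example values, not the paper's design figures.

```python
# Sketch of the delay-setting relation in (147.13): the commutation dead time
# follows from the delay resistor and the CS/ADS voltage difference.
# Resistor and voltage values are assumed for illustration.

def dead_time(r_del, v_cs, v_ads):
    return 5e-11 * r_del / (1.5 * (v_cs - v_ads) + 1.0)

t_dead_cd = dead_time(r_del=10e3, v_cs=1.0, v_ads=0.0)  # 10 kohm, 1 V difference
print(f"t_deadCD = {t_dead_cd * 1e9:.0f} ns")
```

The designer checks that the leading-leg transition time t_CD from (147.12) stays below this dead time.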
During the switching of the lagging leg, the transformer is not involved; only the resonant inductance L_r takes part. Therefore, to make sure the capacitor C_AB in the lagging leg is fully charged or discharged, the resonant inductance L_r must satisfy the following expression:
L_r ≥ 2 · C_AB · v_in² / i_AB²,  with  i_AB = i_pk − (N_s/N_p) · (0.5 − d) · T_sw · v_o / L    (147.14)
The total time tAB of charging or discharging of the capacitor CAB in the lagging
leg can be derived as follows:
t_AB = √(2 · L_r · C_AB) · arcsin(√(2 · C_AB / L_r) · v_in / i_AB)    (147.15)
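The two lagging-leg conditions can be checked together in a short sketch. All component values below (L_r, C_AB, input voltage, commutation current) are assumed for illustration and are not the paper's design values.

```python
import math

# Sketch of the lagging-leg ZVS check in (147.14)-(147.15).
# Component values are illustrative assumptions only.

def resonant_inductance_ok(l_r, c_ab, v_in, i_ab):
    """ZVS condition on the resonant inductance, per (147.14)."""
    return l_r >= 2 * c_ab * v_in ** 2 / i_ab ** 2

def lagging_leg_transition_time(l_r, c_ab, v_in, i_ab):
    """Charge/discharge time of C_AB during the lagging-leg transition, (147.15)."""
    return math.sqrt(2 * l_r * c_ab) * math.asin(
        math.sqrt(2 * c_ab / l_r) * v_in / i_ab)

# Assumed: Lr = 25 uH, C_AB = 1 nF, 400 V input, 4 A commutation current.
l_r, c_ab, v_in, i_ab = 25e-6, 1e-9, 400.0, 4.0
assert resonant_inductance_ok(l_r, c_ab, v_in, i_ab)
t_ab = lagging_leg_transition_time(l_r, c_ab, v_in, i_ab)
print(f"t_AB = {t_ab * 1e9:.0f} ns")
```

The resulting t_AB is then compared against the dead-time window of (147.16).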
[Fig. 147.5: power circuit — full-bridge switches K_A–K_D across the input v_in/C_in, transformer N_p:N_s with resonant inductor L_r, rectifier diodes D_1 and D_2, current-doubler inductors L_1 and L_2, output capacitor C_o with its ESR, and load R delivering v_o]
To make sure soft switching of the lagging leg is achieved, t_AB and t_deadAB must satisfy the following expression:

t_AB ≤ t_deadAB ≤ t_AB + t_r,  t_deadAB = (5×10⁻¹¹ · R_DELAB) / (1.5 · (v_CS − v_ADS) + 1)    (147.16)
The design of the control circuit mainly includes the type selection of the control chip and the design of its peripheral circuits. The UCC3895 is selected as the control chip, and peak-current control mode is selected as the control mode. The control circuit is shown in Fig. 147.6.
The feedback compensation network can be designed as

1/(R_1 C_2) = 1/(R C_o) + (T_sw/(L_0 C_o)) · [(1 − D)(1 + (S_a/R'_i)/S'_d) − 1/2]
R_4 C_4 = ESR · C_o
ω_cross = CTR · R_4 · (1 + R_5/R_6) · H_1(0) / (R_1 C_3 R_2)
H_1(0) = (R/R'_i) · 1 / {1 + (R T_sw/L_0) · [(1 − D)(1 + (S_a/R'_i)/S'_d) − 1/2]}    (147.17)
The whole simulation circuit of the designed full-bridge and current double rectifier SMPS is shown in Fig. 147.7. The circuit is designed for output powers ranging from 100 to 800 W. The designed SMPS is simulated in PSIM, and the simulated waveforms are illustrated in Figs. 147.8, 147.9, 147.10 and 147.11.
When the output power is 100 W, the crossover frequency f_cross of the open-loop transfer function and the dominant pole of the slope-compensation network are set to about 63 kHz. Thus, when the output power drops below 100 W, the phase margin declines and may even become negative because f_cross moves to a higher frequency, which leads to system instability, as shown in Fig. 147.8. The current sensor is designed for an output power of 800 W. If the output power exceeds 800 W, the current limiter and protection circuit start to work; the output power is then limited by limiting the input peak current and the controller's duty ratio, so the output voltage falls below 14 V, as can be seen in Fig. 147.11.
[Fig. 147.6: control circuit — UCC3895 phase-shift controller (EAP/EAN/EAOUT error amplifier, RAMP, CS and ADS inputs, CT/RT timing network, DELAB/DELCD delay pins with resistors R_DELAB and R_DELCD, outputs OUTA–OUTD) with a TL431/optocoupler voltage-feedback network and a slope-compensation network]
capacitor C_o is appropriate. The ripple of the output voltage is largest under the maximum input voltage, as shown in Fig. 147.12.
147.4 Conclusion
In this paper, a full-bridge and current double rectifier SMPS for vehicles is analyzed and modeled. Based on the models, the circuit of the SMPS is designed, and soft-switching technology is used to reduce the switching losses. The models and the designed circuits are validated by simulation in PSIM. The results show that the models are highly accurate and that the designed circuit is correct.
Chapter 148
The Research of Industrial Optimization
of Beijing CBD Ribbon Based on Fitness
Function Mode
148.1 Introduction
There are numerous nodes in the urban industrial system; the emergence of pillar and leading industries and the evolution of forerunner and sunset industries follow certain rules. Analyzing the urban industrial clustering structure by means of the fitness function model, in order to discover the rules and features of its evolution, can accelerate the development of the urban industrial structure and promote the optimization of urban industry.
A few abnormal nodes exist in many real networks. The growth rate of these nodes' degree depends not on their age but on their competitive capacity. Moreover, they do not acquire new edges according to the principle of degree preferential attachment: they may connect to only a few edges in the early time steps of the evolution (Population Division, Department of Economic and Social Affairs, United Nations 2010). According to the principle of degree preferential attachment, the probability that they obtain new edges should be very small; because of other factors, however, they have a greater probability of gaining new edges. This phenomenon reflects that the fitter become richer, and the fitness function model was put forward on this basis. The following is the evolutionary pattern of the network model (Gomez-Gardenes et al. 2006):
1. Growth: Begin with a small initial network of nodes; at each time step, bring in one new node and connect it to existing nodes. The fitness of each node is drawn from a probability distribution.
2. Preferential connection: The probability of connection between a new node and an existing node i depends on node i's degree k_i and fitness η_i according to the following relationship:
Π_i = η_i k_i / Σ_j η_j k_j    (148.1)
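The growth rule in (148.1) can be sketched as a small simulation. The seed-network size and the uniform fitness distribution are illustrative assumptions; the 16-node scale matches the CBD Ribbon network described below.

```python
import random

# Minimal sketch of the fitness-model growth rule (148.1): each new node
# attaches to an existing node i with probability proportional to eta_i * k_i.
# Seed size and the uniform fitness distribution are assumptions.

def grow_fitness_network(n_nodes, m0=3, seed=7):
    rng = random.Random(seed)
    fitness = [rng.random() for _ in range(n_nodes)]   # eta_i for every node
    degree = [m0 - 1] * m0                             # fully connected seed
    edges = [(i, j) for i in range(m0) for j in range(i + 1, m0)]
    for new in range(m0, n_nodes):
        # Pi_i = eta_i * k_i / sum_j eta_j * k_j       (148.1)
        weights = [fitness[i] * degree[i] for i in range(new)]
        target = rng.choices(range(new), weights=weights)[0]
        edges.append((target, new))
        degree[target] += 1
        degree.append(1)
    return degree, edges

degree, edges = grow_fitness_network(16)  # the CBD Ribbon network has 16 nodes
print(sum(degree), len(edges))
```

A node with high fitness can overtake older nodes in degree even when it starts late, which is the "fitter get richer" behaviour the text describes.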
[Fig. 148.1: leading and pillar industries A and B with emerging industry M and its derivative industries M1, M2, M3]
industry, but it derives new industries rapidly and may even become the new leading and pillar industry (Mo et al. 2008; Wang et al. 2006). This increases its proportion in the national economy and constantly promotes the optimization of the urban industrial structure and the growth of the overall economy, as shown in Fig. 148.1.
A and B represent the leading and pillar industries, and M represents the emerging industry and its derivative industries. In the long term, industrial optimization based on the fitness function model conforms to the trend and principles of industrial development. But under the effect of path dependence, an urban area has already formed several industries, so this process will be limited. There is a constant game between the emerging industry and the original industries (Liu 2009). If they have no mutually promoting relationship, the entry of the emerging industry will squeeze the others' living space and create a competitive relationship, while the original and relatively outdated industries are unwilling to withdraw from the market. Consequently, the government needs to give enough initial support before emerging industries mature; as the emerging industry matures, its related industries gradually enter the market and the industrial structure is optimized constantly.
Certainly, the entry of emerging industries has its limits. Compared with other industries, new entrants have to invest numerous personnel, resources and capital in order to obtain greater competitive advantages (Luo 2005). Otherwise the emerging industry may fail, with adverse effects on long-term development.
148.2 Methodology
1404 Y. Zhang and G. Zong
We can get Table 148.1 by classifying and processing the data on the node relations of the Beijing CBD Ribbon. Using the Ucinet software, we can generate the corresponding network relation figure (viz. Fig. 148.2). The nodes represent the industries' designations and the lines represent the relations among industries. The scale of the network represents the total number of participants; the scale of the industrial network in this Ribbon is therefore 16.
As shown in Fig. 148.2, we can see clearly that this network has a significant scale-free characteristic: the connection situation (number of degrees) among the nodes is asymmetrically distributed, and most nodes have relatively low degrees while only a few have higher degrees. In the CBD Ribbon industrial structure, the retail business and traditional service
148 The Research of Industrial Optimization of Beijing CBD Ribbon 1405
Fig. 148.3 The network node’s correlation matrix of each industry in CBD ribbon
industries still have relatively high degrees, but the proportion of the CBD Ribbon's modern service industry in the national economy grows constantly, its industrial correlation degree has improved significantly, and it shows a trend of gradually becoming the leading industry (Newman 2001; Barabási 2001; Liljeros et al. 2001).
Fig. 148.4 The network node’s cluster analysis tree of each industry in CBD ribbon
relations with other industries. Therefore the modern service industry shows strong adaptive capacity and will continue to optimize the regional industrial structure.
148.3 Conclusions
References
Abstract Based on Davis's TAM, combined with user satisfaction theory in information systems, motivation theory, and the behavioral characteristics of SNS users, this study proposes a user acceptance model for SNS websites. In the model, Perceived Usefulness and Perceived Ease-of-Use are retained, and Perceived Enjoyment and Perceived Connectivity are added. In addition, the external variables affecting these key factors are subdivided. A questionnaire was designed, and structural equation modeling was used to validate the hypotheses empirically. The results show that TAM basically applies to user acceptance of SNS websites, that Perceived Enjoyment and Perceived Connectivity are both positively correlated with Usage Willingness, and that the subdivision of the external variables reflects the importance of user activity.
149.1 Introduction
MicroBlog, a Twitter-like service, has developed rapidly, and its registered users have increased dramatically. SNS website users coincide with MicroBlog users to a great extent. The double pressure of homogeneous competition and MicroBlog's rise confronts SNS websites with a great challenge: they need to absorb new users continuously and retain old ones. SNS websites are thus facing the problem of user acceptance.
Based on rational behavior theory, Davis (1986) put forward TAM. TAM adopts the well-established causal chain of beliefs → attitude → intention → behavior, which has become known as the Theory of Reasoned Action (TRA). Based on certain beliefs, a person forms an attitude about a certain object, on the basis of which he or she forms an intention to behave with respect to that object. The intention to behave is the sole determinant of actual behavior (Fig. 149.1). In TAM applications, two key factors, Perceived Usefulness and Perceived Ease-of-Use, can effectively explain users' behavioral intention.
[Fig. 149.1: the Technology Acceptance Model, with Perceived Usefulness and Perceived Ease of Use driving the intention to use]
149 A Study on the User Acceptance Model 1411
In TAM, the external variables are not subdivided, which hinders further analysis of the factors influencing user acceptance of SNS websites. SNS websites provide an interactive platform for friends, which integrates basic Internet applications such as logs, photos, videos, communities and games, and meets users' social demands through online interaction among friends, information sharing, participation in activities and other means. Webster and Martocchio (1995) regarded entertainment as an intrinsic motivation for using computers in the workplace. Scholars have put forward Perceived Enjoyment in empirical studies of the Internet; it refers to the degree of entertainment derived from using SNS websites. Moreover, Perceived Connectivity refers to being connected with friends without being confined by time or location. While using SNS websites, users may feel satisfaction or happiness, which encourages them to accept SNS websites further (Shin 2008).
Based on TAM, the user acceptance model for SNS websites is shown in Fig. 149.2. Perceived Usefulness and Perceived Ease-of-Use are retained in the model, and Perceived Enjoyment and Perceived Connectivity are added as two further key factors affecting users' acceptance of SNS websites. In addition, the external variables of the above factors are further divided into information quality, system quality, activity level, service quality and related factors.
1412 D. Jin and M. Zhou
[Fig. 149.2: the proposed model — activity level (participation, sharing, sociability, game interaction), information quality (accuracy, timeliness, integrity), system quality (security, interface) and social impact act as external variables on Perceived Usefulness, Perceived Ease-of-Use, Perceived Connectivity and Perceived Enjoyment, which drive Behavior Willingness]
The related assumptions of TAM still hold in this model, and new assumptions about Perceived Enjoyment and Perceived Connectivity are proposed.
(1) Perceived Ease-of-Use and related assumptions.
Based on customer satisfaction theory and supported by previous empirical data, Seddon (1997) also confirmed that system quality has a positive effect on Perceived Ease-of-Use. In addition, system security, which means keeping personal information safe, matters greatly to users. Moreover, both the user interface and the interaction process affect users' acceptance of SNS websites.
H1: System quality has a positive effect on Perceived Ease-of-Use.
H1a: Security has a positive effect on system quality.
H1b: Interface has a positive effect on system quality.
H2: Perceived Ease-of-Use has a positive effect on usage willingness.
(2) Perceived Usefulness and related assumptions.
H3: Information quality has a positive effect on Perceived Usefulness.
H3a: Accuracy has a positive effect on information quality.
H3b: Timeliness has a positive effect on information quality.
H3c: Integrity has a positive effect on information quality.
H4: Activity level has a positive effect on Perceived Usefulness.
H4a: Participation has a positive effect on activity level.
149 A Study on the User Acceptance Model 1413
In order to verify the model hypotheses, a questionnaire was used to collect data. The questionnaire uses a standard 7-point Likert-type scale. The seven points are "completely disagree", "relatively disagree", "somewhat disagree", "not sure", "somewhat agree", "relatively agree" and "completely agree". According to the actual situation,
According to the pre-test results, the questionnaire was revised and finalized in two forms, electronic and paper. Questionnaires were distributed on the principle of simple random sampling: 170 electronic and 68 paper questionnaires were distributed, and altogether 202 questionnaires were returned. After removing responses that obviously failed to meet the requirements, 153 valid questionnaires remained.
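For clarity, the sampling numbers above can be tallied directly:

```python
# The sampling arithmetic above, spelled out.
distributed = 170 + 68          # electronic + paper questionnaires
returned = 202
valid = 153
print(f"distributed {distributed}, "
      f"return rate {returned / distributed:.1%}, "
      f"valid rate {valid / returned:.1%}")
```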
A second-order confirmatory factor analysis (CFA) was carried out because the original first-order factors were highly correlated in the first-order CFA, and the first-order CFA agreed with the sample data. In the model, information quality, system quality, service quality and activity level are measured multidimensionally, so AMOS 17 was used for the second-order factor analysis of each. The analysis results show that, for all first-order factors loading on the second-order factors of information quality, system quality, service quality and activity level, the factor loadings lie between 0.5 and 0.95, the C.R. values are greater than 1.96, and the significance probabilities meet the decision criteria. Meanwhile, compared with the goodness-of-fit standards, the overall goodness of fit reaches the basic standard. We can therefore conclude that the first-order factors measure the second-order factors of information quality, system quality, service quality and activity level well.
According to the validity inspection and the second-order CFA results above, the social impact and sociability factors are removed from the original hypothesis model; the goodness-of-fit indexes of the model are summarized in Table 149.1. The chi-square/degrees-of-freedom ratio of the model is 1.828 < 2 and the RMSEA is 0.074 < 0.080, so these two goodness-of-fit indexes meet the standard. But the other goodness-of-fit indexes, GFI (0.658 < 0.9), AGFI (0.62 < 0.8), CFI (0.828 < 0.9), TLI (0.816 < 0.9) and NFI (0.689 < 0.8), do not reach the standards, so the model needs to be modified and optimized.
AMOS offers two model modification indexes: the modification index MI is used for model expansion, and the critical ratio C.R. is used for model restriction. According to the critical ratios and modification indexes, and following the principle of modifying one parameter at a time, the final revised model is shown in Fig. 149.3. The path regression coefficients of each factor in the revised full model have increased, the C.R. values are larger than 1.96, and the few path significance probabilities larger than 0.05 are sufficiently close to the standard, which shows that the modification works well. Table 149.2 lists the path coefficients of the first revision of the model.
[Fig. 149.3: the revised model (excerpt showing system quality, with security and interface indicators, affecting Perceived Ease-of-Use)]
The goodness of fit of the revised model is shown in Table 149.3. The chi-square value (1468.758) and degrees of freedom (835) improve markedly over the preceding ones. The chi-square/degrees-of-freedom ratio (1.759 < 2) and the RMSEA (0.071 < 0.08) both reach the standard, and several other goodness-of-fit indexes basically meet the standard. Because many latent variables exist in the model, the relationships among factors are relatively complex, and some indexes may be strongly influenced by the sample size, the revised model can be regarded as the final model.
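The fit screening used above can be sketched as a small check. The thresholds (chi-square/df < 2, RMSEA < 0.08) follow the text, and the index values are those reported for the revised model.

```python
# Sketch of the goodness-of-fit screening: thresholds follow the text,
# index values are those reported for the revised model in Table 149.3.

def fit_report(chi_square, df, rmsea):
    ratio = chi_square / df
    return {"chi2/df": round(ratio, 3),
            "chi2/df ok": ratio < 2,
            "RMSEA ok": rmsea < 0.08}

report = fit_report(chi_square=1468.758, df=835, rmsea=0.071)
print(report)
```

Recomputing the ratio from the reported chi-square and degrees of freedom reproduces the 1.759 quoted in the text.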
The model set up in this study originally uses 13 level-1 hypotheses, H1 to H13. As system quality, information quality, service quality and activity level are measured multidimensionally, H1–H6 each has level-2 hypotheses. The data analysis supports all hypotheses except H4c, H5, H7c, H9, H9a, H9b, H9c, H10c and H13.
Table 149.2 The path coefficients of the revised full model

Path                             Estimate   S.E.    C.R.     p       Std. coefficient
connectedness ← activity         0.562      0.121   4.627    ***     0.535
usefulness ← information         0.913      0.232   3.940    ***     0.641
easiness ← system                0.702      0.111   6.340    ***     0.735
entertainment ← activity         0.758      0.120   6.311    ***     0.620
usefulness ← activity            0.343      0.137   2.492    0.013   0.315
entertainment ← connectedness    0.322      0.102   3.147    0.002   0.277
willingness ← usefulness         0.229      0.084   2.731    0.006   0.221
willingness ← entertainment      0.340      0.084   4.070    ***     0.368
willingness ← easiness           0.213      0.084   2.530    0.011   0.177
willingness ← connectedness      0.307      0.096   3.205    0.001   0.285
easiness4 ← easiness             1.342      0.121   11.089   ***     0.919
This study constructs an SNS website user acceptance model based on TAM, and the model hypotheses are verified with a structural equation model. The conclusions can be drawn as follows: (1) TAM is basically suitable for studying user acceptance of SNS websites, but no evidence supports a causal link between Perceived Usefulness and Perceived Ease-of-Use. (2) Both Perceived Enjoyment and Perceived Connectivity are positively correlated with Usage Willingness, and Perceived Connectivity further affects Perceived Enjoyment. (3) The subdivision of the external variables reflects the importance of user activity: the activity level influences Perceived Usefulness, Perceived Connectivity and Perceived Enjoyment simultaneously, while the influence of service quality on Perceived Enjoyment is removed because its path regression coefficient is too small.
Based on the above findings, some constructive suggestions are offered to SNS service providers: perfect the entertainment and e-commerce functions to enhance user stickiness; pay attention to interface operation to optimize the user experience; provide information filtering, sorting and pushing services; and study in depth how to promote users' activity level.
References
Davis FD (1986) A technology acceptance model for empirically testing new end user
information systems. Cambridge, MA
Davis F (1989) Perceived usefulness, perceived ease of use, and user acceptance of information
technology. MIS Q 13(3):319–340
Delone WH, McLean ER (2003) The DeLone and McLean model of information systems success: a ten-year update. J Manag Inf Syst 19(4):9–30
Papacharissi Z, Rubin AM (2000) Prediction of Internet use. J Broadcast Electron Media 44:4–13
Patrick Rau P-L, Gao Q, Ding Y (2008) Relationship between the level of intimacy and lurking in
online social network services. Comput Human Behav 24:2757–2770
Schaefer C (2008) Motivations and usage patterns on social network sites. Institute of
Information Systems and Management, Kaiserstr
Seddon PB (1997) A respecification and extension of the DeLone and McLean Model of IS
Success. Inf Syst Res 8(3):240–253
Shin D (2008) What do people do with digital multimedia broadcasting? Path analysis of
structural equation modeling. J Mobile Commun 6(1):258–275
Webster J, Martocchio JJ (1995) The differential effects of software training previews on training
outcomes. J Manage 21(4):757–787
Wixom BH, Todd PA (2005) A theoretical integration of user satisfaction and technology acceptance. Inf Syst Res 16(1):65–102
Chapter 150
Augmented Reality Based Factory Model
Comparison Method
Abstract Through the factory digital mock-up, Digital Factory (DF) technology can save enormous time and cost in factory planning. One problem in maintaining the digital factory mock-up is checking the digital models against the real factory. This paper introduces a method that uses Augmented Reality (AR) technology to compare 3D models with the real object in real time. Compared with other measures, this method has the benefit of saving cost. An experiment demonstrating the proposed method is given at the end of the paper.
150.1 Introduction
The factory digital mock-up creates a visual simulation platform for product design and process planning, which has become the key to optimizing processes and offering optimal production schemes (Bracht and Masurat 2005). It serves as the foundation of Digital Factory (DF) technology, which is widely used in fields such as aviation, automobile manufacturing, the chemical industry and electronic products. Model calibration is a significant issue in the application of factory
W. Sun (&) J. Lu
CIMS Research Center, Tongji University, Shanghai, China
e-mail: [email protected]
J. Lu
e-mail: [email protected]
D. Li
CDHAW, Tongji University, Shanghai, China
e-mail: [email protected]
The factory digital mock-up is a complex digital archive covering the whole life-cycle of a factory. It includes not only the 3D model of the factory but also all the design documents, construction documents and maintenance information. DF technology, an effective means for companies to meet the challenges of the twenty-first century (Liu 2009), integrates computer, virtualization, emulation and network techniques and plays a significant role in keeping enterprises competitive. It operates in a collaborative way within a 3-D visualization environment and an interactive interface. Based on actual data and models, the planned products and production processes can be improved using virtual models until the processes are fully developed and extensively tested for use in the real factory. DF is a comprehensive approach to factory layout planning, which consists of the 3-D model design of the plant (covering the workshop structure, equipment and facilities, material flow and other production resources) and process optimization (Zhang et al. 2006). The factory digital mock-up acts as the prerequisite for the operative information concerned.
The factory digital mock-up shows its advantages: engineers make assessments by optimizing the plant layout and resolving conflicts between different parts, thereby avoiding losses due to irrational design, and they coordinate the data and information of equipment and process flow optimally with the factory building (Yu et al. 2003). Applied in the automobile industry, it coordinates material resources (components and modules of automobiles), equipment (machine tools and facilities), workshop (area) and process flow (automobile manufacturing processes) into one IT system. In the pharmaceutical and chemical industries, the factory digital mock-up makes it possible to increase product innovation and flexibility.
150.3 Methodology
One of the most basic problems currently limiting factory digital mock-up applications is updating. Regular calibration of the factory models against the real factory is necessary. Several model comparison methods prevail: laser scanning, laser ranging, and photograph-based visual inspection.
150 Augmented Reality Based Factory Model Comparison Method 1423
A laser scanner first captures the complete outline data of the object rapidly by omni-directional scanning, and then generates point-cloud records after precise construction, editing and modification by computer. Accurate as the data are, the method cannot be widely used because of its high cost. Furthermore, the instrument is unable to display data instantly.
Another approach to updating the factory digital mock-up is laser ranging. People can easily obtain the elevation, relative position and other information of the object using a hand-held laser distance meter. However, this kind of method is not appropriate for precise and complicated objects.
Photograph-based visual inspection means comparing the real factory with photographs of the factory. It is feasible but not accurate.
The methods above are commonly used at present, but more effort should be devoted to exploring a new approach that is both inexpensive and accurate. Therefore, an Augmented Reality based factory model comparison method is proposed. An AR system can present a blended scene of the real factory environment and the digital mock-up; the information files can then be easily changed to correct the model without complicated manual operation.
Augmented Reality (AR) technology can enhance the user's perception of the real world by providing information from a computer system. It has the following three characteristics: it combines the real and the virtual, it is interactive in real time, and it is registered in 3-D (Azuma 1997; Azuma et al. 2001). AR systems have been applied in medicine, manufacturing, visualization, path planning, entertainment, the military and many other fields (Quan et al. 2008).
ARToolKit is an AR application development toolkit based on C/C++. It implements indoor registration with a fiducial-marker pattern tracking system and, under controlled environmental conditions, achieves fine tracking results. The kit includes camera calibration and marker-making tools, can composite Direct3D and OpenGL graphics and VRML scenes into the video stream, and supports a variety of display devices.
The Augmented Reality based factory model comparison method is built on the model-loading procedure in ARToolKit to blend virtual objects seamlessly with the real factory environment in 3-D. The main workflow is shown in Fig. 150.1.
1424 W. Sun et al.
[Fig. 150.1: workflow — identify the marker, load the model, blend it with the video, and modify the model in a loop until the comparison ends]
Take the case of a section of cross fire-fighting sprinkler pipe and a pipe support in a classroom (as Fig. 150.2 shows); it is too high to measure directly and thus meets the requirements for selecting an experimental object.
The operating system is Windows XP with the Microsoft Visual Studio 2008 development environment. An ordinary CMOS camera (320 × 240 pixels) with a USB 2.0 interface and a printed fiducial marker are sufficient.
Besides the provided marker patterns, other patterns can also be designed and trained according to the instructions in ARToolKit. There are some limitations to purely computer-vision-based AR systems: the larger the physical pattern, the farther away it can be detected, and simpler patterns work better. Taking the height of the experimental subject into account, a proper coordinate-axis offset should be set to obtain a clear view. The marker was fixed at the bottom right-hand corner of the object, as also shown in Fig. 150.2.
The DAT files contain the name, display size, rotation and other information of the models. New models can be matched by modifying this data, as can model updates. When multiple tracked patterns need to be associated with different 3D objects, the DAT files can also easily be edited to load more than one pattern.
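The pattern-to-model association described above can be sketched as follows. The DAT format here is hypothetical (ARToolKit's actual file layout differs); it only mirrors the idea that each tracked pattern record carries a model name, display size and rotation.

```python
# Minimal sketch of a pattern-to-model mapping file. The key=value format
# and the file/model names below are hypothetical illustrations.

DAT_TEXT = """\
pattern=hiro  model=sprinkler.wrl  scale=1.0  rotation=0,0,90
pattern=kanji model=support.wrl    scale=0.5  rotation=0,0,0
"""

def parse_dat(text):
    records = []
    for line in text.splitlines():
        if not line.strip():
            continue
        fields = dict(item.split("=") for item in line.split())
        fields["scale"] = float(fields["scale"])
        fields["rotation"] = tuple(float(v) for v in fields["rotation"].split(","))
        records.append(fields)
    return records

models = parse_dat(DAT_TEXT)
print(len(models), models[0]["model"])
```

Editing one line of such a file is all that is needed to swap in an updated model or register an additional pattern, which is the low-effort correction loop the method relies on.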
The virtual model of the classroom was created in MicroStation at real proportions and converted into WRL format by 3ds Max. The marker shown in Fig. 150.2 was trained to match the digital model. The initial view of the blended scene was rendered as soon as the camera identified the marker, as shown in Fig. 150.3.
The model and the view of the real object rotate synchronously as the camera moves. While immersed in the real-world view, some subtle changes to the DAT files can bring the model closer to the object; the position can then be determined with acceptable precision and accuracy in real time. The modified model is shown in Fig. 150.4.
150 Augmented Reality Based Factory Model Comparison Method 1425
From the conclusions drawn from the experiment above, the Augmented Reality based factory model comparison method has the following advantages over the existing methods:
(1) Flexibility. The method works in a direct way and avoids laser scanning, laser ranging, or other manual work.
(2) Low cost. It performs the work well without professional instruments and also saves time.
(3) Accuracy. The real object is blended seamlessly with the virtual model, which leads to an accurate comparison result.
(4) Real time. The real world and the digital model are rendered and combined in real time, which makes the comparison work more efficient.
Certainly, there are also some limitations. The method cannot work without a document database of the object, a case in which laser scanning is the better choice. Laser ranging also performs well in presenting data such as elevation and relative position for large facilities. In a real factory it is more efficient to combine these approaches appropriately.
Since the visual interface has improved the performance, more effort is needed to perfect the AR system, including tracking visible natural features without prepared markers (Neumann and You 1999), loading models automatically, and integrating the controller interface with CAD. The authors will study this area further to support factory digital modeling techniques in the future.
151.1 Introduction
In recent years, the credit of high-tech small and medium enterprises (SMEs) has been gaining importance because of their rapid growth in the financial world (Derelioglu and Gürgen 2011). However, the credit guarantee risk of SMEs is very high due to their particular characteristics, which in general leads to low credit scores (Chen et al. 2006, 2010). Evaluating the credit risk of high-tech SMEs thus becomes a very challenging problem for banks. It is therefore essential to develop an accurate credit scoring model for high-tech SMEs to support efficient bank management.
Most well-known evaluation models use probability theory or fuzzy set theory to handle randomness or fuzziness respectively, such as decision trees (Frydman et al. 1985), artificial neural networks (Jensen 1992), and genetic algorithms (Walker et al. 1995). Among all of these methods, only cloud-model-based models consider both aspects of uncertainty. The cloud model is an innovation and development of the membership function in fuzzy theory (Di et al. 1999); it transforms qualitative terms described in a natural language into distribution patterns of quantitative values (Deyi et al. 1995). It has been successfully used in spatial analysis (Haijun and Yu 2007), target recognition (Fang et al. 2007), intelligent algorithm improvement (Yunfang et al. 2005), and so on.
Therefore, in this paper we propose an evaluation method based on the cloud model for the credit of high-tech SMEs. As credit evaluation is a typical multi-attribute evaluation problem, applying this novel approach to credit evaluation is a meaningful way to demonstrate its usefulness.
151.2 Methodology
Suppose that r is a linguistic value on domain U, and the mapping Cr(x): U → [0, 1], ∀x ∈ X (X ⊆ U), x → Cr(x); then the distribution of Cr(x) on U is called the membership cloud of r, or cloud for short, and each projection is called a cloud drop of the distribution. If the distribution of Cr(x) is normal, it is called a normal cloud model.
Expectation Ex is the central value of the concept in the domain. Entropy En measures the fuzziness of the concept: the cloud drops mainly fall in the range [Ex − 3En, Ex + 3En], which reflects the numerical range acceptable to the concept and represents a double-sided margin. Hyper-entropy He is the entropy of the entropy; it reflects the dispersion degree of the concept's entropy (Lv et al. 2009).
A cloud model can be denoted by the vector C(Ex, En, He). The numerical characteristics of the cloud model are shown in Fig. 151.1.
In the evaluation method based on the cloud model, the gravity center of a cloud can be denoted as:

T = a × b    (151.3)

where a is the position of the cloud's gravity center, represented by the expectation Ex (if Ex changes, the position of the gravity center changes correspondingly), and b is the height of the gravity center, represented by the weight value, which often takes the value 0.371.
Therefore, the change of the cloud gravity center can reflect the change of the system's information status. The concrete steps of the evaluation method based on the cloud model are as follows:
Step 1: Determine the index system and the index weights.
Step 2: Denote each index with a cloud model. Accurate-number-type and language-description-type indexes are denoted differently. Draw a set of n samples to constitute the decision matrix; an accurate-number-type index is then denoted as follows:

Ex = (Ex1 + Ex2 + ··· + Exn) / n    (151.4)

En = [max(Ex1, Ex2, …, Exn) − min(Ex1, Ex2, …, Exn)] / 6    (151.5)

And a language-description-type index is denoted as follows:

Ex = (Ex1·En1 + Ex2·En2 + ··· + Exn·Enn) / (En1 + En2 + ··· + Enn)    (151.6)

En = En1 + En2 + ··· + Enn    (151.7)
Step 3: Denote the status of the system. n indexes can be depicted with n cloud models; therefore an evaluation system containing n indexes can be denoted by an n-dimensional comprehensive cloud, T = (T1, T2, …, Tn), Ti = ai × bi (i = 1, 2, …, n). When the status of the evaluation system changes, the gravity center changes to T′ = (T1′, T2′, …, Tn′).
Step 4: Measure the change of the cloud gravity center with the weighted deviation degree. Suppose that each index of the ideal status of the system is given; then the vector of the cloud gravity center can be depicted as T0 = a0 × b = (T10, T20, …, Tn0), with a0 = (Ex10, Ex20, …, Exn0), b = (b1, b2, …, bn), bi = wi × 0.371, and the vector of the n-dimensional comprehensive cloud gravity center in the normal status is denoted as T = (T1, T2, …, Tn).
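Steps 1 and 2 can be sketched directly from Eqs. (151.3)–(151.5). The sample values and the index weight below are hypothetical; only the formulas and the 0.371 constant come from the method itself:

```python
def accurate_index_cloud(samples):
    """Cloud characteristics of an accurate-number-type index,
    Eqs. (151.4)-(151.5): Ex is the sample mean, and En spans the
    sample range over 6 (the 3-En rule on both sides)."""
    n = len(samples)
    ex = sum(samples) / n
    en = (max(samples) - min(samples)) / 6
    return ex, en

def gravity_center(ex, weight):
    """Gravity center T = a * b, Eq. (151.3): a = Ex, b = weight * 0.371."""
    return ex * (weight * 0.371)

# Hypothetical normalized sample values for one index, weight 0.2
ex, en = accurate_index_cloud([0.25, 0.50, 0.50, 0.75])
t = gravity_center(ex, weight=0.2)
```

This is only a scalar illustration of how one index contributes its gravity-center component Ti = ai × bi to the comprehensive cloud of Step 3.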
Generally, the weighted deviation degree can be used to measure the change of the cloud gravity center between the ideal status and the normal status.
The evaluation language set is:

V = (v1, v2, v3, v4, v5) = (bad (C), general (B), good (A), very good (AA), best (AAA))
The index system of credit evaluation of high-tech SMEs includes 25 indexes in total, grouped into five major types as follows: credit quality U1 — registered capital U11, historical credit condition U12, equipment level U13, and guarantee U14; organizational level U2 — business strategy U21, organization system U22, stability of the management team U23, stability of the R&D team U24, business proposal U25; operation level U3 — turnover
ratio of accounts receivable U31, turnover ratio of total assets U32, return on total assets U33, operating profit ratio U34, income growth ratio U35, profit growth ratio U36, liquidity ratio U37, debt-to-asset ratio U38, after-sales service U39; R&D level U4 — R&D input U41, intellectual property rights U42, R&D character U43; network position U5 — market share U51, public relations U52, industry trend U53, geographic position U54.
Ten high-tech SMEs in the second-board market in China are selected as the test sample set S = {Si | i = 1, 2, …, 10}, covering electronic information, medical apparatus, biological pharmacy, etc. The financial data come from the CSMAR Solution database (2012), and the other, qualitative indexes are described in expert evaluation language. Taking company S1 (No. 300002) in the test sample set as an example, the status of each credit evaluation index is shown in Tables 151.1, 151.2, 151.3, 151.4 and 151.5.
Normalize the evaluation language set (bad, general, good, very good, best) to (0, 0.25, 0.50, 0.75, 1); the decision matrices A1–A5 are then constituted, for example:

A1 = [ 37920  33470815  0.25  0.50
       37920  33470815  0.25  0.50
       37920  33470815  0.50  0.50 ]
Through cloud model computation, the credit evaluation value of the high-tech SME is:

P_S1 = Σ_{i=1}^{5} (wi × P_Ui) = 0.487
The credit evaluation value is then input into the five-scale evaluation set based on the cloud model (see Fig. 151.3). It activates two cloud objects, A and B, but the activation degree of A is far larger than that of B, so the credit evaluation of company S1 (No. 300002) is grade A.
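The final grading step can be sketched with five normal clouds centred at the normalized grade values (0, 0.25, 0.50, 0.75, 1). The entropy of the grade clouds below is an assumed value, not one given by the paper:

```python
import math

# Hypothetical five-scale evaluation clouds on [0, 1]; the entropy
# En = 0.25/3 is an assumption chosen so neighbouring clouds overlap slightly.
GRADES = {"bad (C)": 0.0, "general (B)": 0.25, "good (A)": 0.50,
          "very good (AA)": 0.75, "best (AAA)": 1.0}
EN = 0.25 / 3

def activation(x, ex, en=EN):
    """Membership (activation degree) of value x in a normal cloud at ex."""
    return math.exp(-(x - ex) ** 2 / (2 * en ** 2))

def classify(score):
    """Return the grades sorted by activation degree, strongest first."""
    return sorted(GRADES, key=lambda g: -activation(score, GRADES[g]))

ranked = classify(0.487)
```

With the paper's score 0.487, the "good (A)" cloud dominates and "general (B)" is activated only weakly, matching the reported result.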
151 An Evaluation Method Based on Cloud Model 1435
151.4 Conclusion
In this paper, credit evaluation applications for high-tech SMEs are discussed, an evaluation method based on the cloud model is formulated, and the application of the model is illustrated. The research shows that this method is a good way to transform qualitative terms described in a natural language into distribution patterns of quantitative values, especially for high-tech SMEs.
Acknowledgments This work was partially supported by the National Natural Science Foundation of China (Grant No. 71172148) and the Soft Science Research Projects of the Ministry of Housing and Urban-Rural Construction (Grant No. 2011-R3-18). The authors are also grateful to the referees for their helpful comments and valuable suggestions for improving the earlier version of the paper.
References
Chen X, Han W, She J (2006) The credit risk, company quality and growth illusion. Econ Theory
Bus Manage 12:62–67
Chen X, Wang X, Wu DD (2010) Credit risk measurement and early warning of SMEs: an
empirical study of listed SMEs in China. Decis Support Syst 49:301–310
CSMAR Solution (2012) CSMAR database. http://www.gtarsc.com
Derelioglu G, Gürgen F (2011) Knowledge discovery using neural approach for SME’s credit risk
analysis problem in Turkey. Expert Syst Appl 38(8):9313–9318
Deyi L, Yi D (2005) Artificial intelligence with uncertainty. Chapman & Hall, Boca Raton
Deyi L, Haijun M, Xuemei S (1995) Membership clouds and membership cloud generators.
J Comput Res Dev 32(6):15–20
Di K, Deyi L, Deren L (1999) Cloud theory and its applications in spatial data mining knowledge
discovery. J Image Graph 4A(11):930–935
Fang W, Yanpeng L, Xiang L (2007) A new performance evaluation method for automatic target
recognition based on forward cloud. In: Proceedings of the Asia simulation conference,
pp 337–345
1436 G. Zhou et al.
Frydman H, Altman EI, Kao DL (1985) Introducing recursive partitioning for financial
classification: the case of financial distress. J Finance 40(1):269–291
Haijun W, Yu D (2007) Spatial clustering method based on cloud model. In: Proceedings of the
fourth international conference on fuzzy systems and knowledge discovery, no. 7, pp 272–276
Jensen HL (1992) Using neural networks for credit scoring. Manag Finance 18(6):15–26
Li DY, Liu CY, Gan WY (2009) A new cognitive model: cloud model. Int J Intell Syst
24:357–375
Lv P, Yuan L, Zhang J (2009) Cloud theory-based simulated annealing algorithm and application.
Eng Appl Artif Intell 22:742–749
Walker RF, Haasdijk E, Gerrets MC (1995) Credit evaluation, using a genetic algorithm. In:
Intelligent systems for finance and business, Wiley, US, pp 39–59
Yunfang Z, Chaohua D, Weirong C (2005) Adaptive probabilities of crossover and mutation in
genetic algorithms based on cloud generators. J Comput Inf Syst 1(4):671–678
Chapter 152
The Structural Optimum Design
of Erected Circular Medicine-Chest Based
on Non-Intervention Motion
Abstract In this paper the following studies are completed: analysis of the mechanical structure of the erected circular medicine-chest and of its working principle; discussion of the non-intervention motion conditions of the drug containers according to their different motion phases; establishment of the optimization functions of the chain transmission based on those non-intervention conditions; obtention of the minimum circular radius of the drug containers and of the design parameters of the chain transmission with Matlab; building of the simulation model with UG and ADAMS; and completion of the motion simulation, which proves the success of the optimum design based on the non-intervention motion condition of the drug containers.
152.1 Introduction
Currently, limited by technology, the drug storing system in a hospital pharmacy mainly consists of ordinary shelves and cannot realize dense storage. In most hospital pharmacies the facilities are old, the working environment is poor, and the pharmacists' working intensity is high. In addition, there are other problems such as storage complexity, space waste, and low working efficiency. The introduction of an automated pharmacy system may help to make a better overall plan for the pharmacy, reducing the drug storage area and effectively carrying out standardized and automated management, thus improving drug dispensing and reducing patients' waiting time. The automated pharmacy system may be connected to the HIS, putting the whole working process of the staff under supervision (Li et al. 2007; Liu et al. 2009; Zhao 2009).
The typical equipment is the erected circular medicine-chest, which originated from the digitally controlled erected circular inventory used for the management and storage of accessories and tools in large factories. Based on the stereoscopic inventory, with added safety features, humanized design, and a connection to the hospital HIS, it can be used for pharmacy management in the hospital. This is a semi-automatic system in which the drug dispensing work is done manually. This type of automatic system is adaptable to packaged drugs and can serve as effective supplementary equipment to the dispensing machine for box-packed drugs (Zhao et al. 2008).
In operation, after two speed reductions, one by the reducer and the other by the first chain transmission, the motor of the erected circular medicine-chest drives the two driving chain wheels on the synchronizing shaft, to which two driven wheels fixed on the two half axles are connected accordingly. Driven by the chains, the support rods and balance bars fixed on them make all the drug containers move circularly. After a dispensing order is received, the drugs in the movable containers are conveyed along the shortest path and reach the outlets within the shortest time.
As shown in Fig. 152.1, the pitch of chain 084A is 12.7 mm; the span between the two support rods connected to the same container is 18 pitches; the span between the support rods connected to neighbouring containers is 4 pitches; and the number of containers (evenly arranged along the chains) is 12.
The pitch radius of both the driving wheels and the driven wheels is r; the overhang of the support rods is c; the width and height of the containers are w and h, respectively; the space between the pivot centers of two neighbouring drug containers is H; the fixation range for the connection beam is A; and the safety space between the connection beam and the drug container is s.
As shown in Fig. 152.2, the reference coordinate system is built with the center of the driven wheel as the origin. O1, O2, O3, O4 are, respectively, the joint points of the transmission chain with Supporting Rods No. 1, 2, 3 and 4 for Drug Containers A1 and A2, with coordinates (x1, y1), (x2, y2), (x3, y3), (x4, y4) accordingly. A1 is the pivot center of Container 1 and Supporting Rods No. 1 and 2, while A2 is the pivot center of Container 2 and Supporting Rods No. 3 and 4; their coordinates are (X1, Y1) and (X2, Y2), respectively. The wheels rotate counterclockwise.
Suppose Point B is located on the pitch circle, on the same horizontal line as Center O of the chain wheel; O1 coincides with B, and O1, O2, O3, O4 lie on the same vertical line. At the beginning of the container motion, O1 moves counterclockwise along the circle of radius r, while O2, O3, O4 move vertically upward. When O2 reaches the same height as Point O, O1 and O2 move along the same circle. θ0 is the included angle between OO1 and OO2 when O1 and O2 move along the same circle simultaneously; θ1 is the angle between OB and the X axis; θ2 is the included angle between OO2 and OO3 when O2 and O3 move along the same circle simultaneously.
Suppose, during the motion of Containers 1 and 2, ΔX and ΔY are the horizontal and vertical distances between Pivot Centers A1 and A2. The condition of non-intervention motion is (Zheng 1992): if 0 ≤ ΔX ≤ w, then

|ΔY| > h    (152.1)

When ΔX > w, there can be no intervention between the two drug containers.
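Condition (152.1) reads directly as a feasibility check. This is only a sketch; the numeric values in the usage line are illustrative, not design data:

```python
def no_intervention(dx, dy, w, h):
    """Condition (152.1): if the horizontal pivot distance dx is within one
    container width w, the vertical distance dy must exceed the container
    height h; if dx > w the containers cannot interfere at all."""
    if 0 <= dx <= w:
        return abs(dy) > h
    return True

# Illustrative values (w = 0.42 m from the design; h = 0.4 m is assumed)
ok = no_intervention(dx=0.20, dy=0.50, w=0.42, h=0.40)
```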
The floor area of the erected circular medicine-chest refers to the projected area of the outline of the steelwork. Its length is affected by factors such as the length of the drug containers, while its width is affected by factors such as the turning radius and the width of the drug containers. After the design of the drug containers, the following references are given: the width w and height h of the container, and the pitch of the chain. In the design of the chain transmission, the optimum overhang of the support rods c and the pitch radius of the chain wheel r are obtained through optimization, and thus the minimum turning radius of the containers is obtained, for which the outline of the steelwork is smallest in width. If the length remains the same, the floor area is reduced, and the mechanical deformation of the steelwork is also reduced.
The design variables are

X = [x1 x2 x3]^T = [λ r θ1]^T

To obtain the smallest width of the outline of the steelwork, the turning radius L is taken as the objective function (with λ = c/r):

L = c + r = r(1 + λ)    (152.2)

Based on the above, the objective function of the optimum design is expressed as:

f(x) = x2(1 + x1)
152.4.1.3 Constraints
① The vertical non-intervention motion of the containers on the left and those on the right requires:

2L ≥ w + A + 2s

In the practical design, the following data are selected: w = 420 mm, A = 30 mm, s = 87 mm.
∵ L = c + r = r(1 + λ), ∴ r(1 + λ) ≥ 0.312 m
② Since the container exerts force on the support rods, the overhang of the support rod should not be too long, in order to ensure sufficient strength of the chain. In the design:

0.15 m ≤ c ≤ 0.25 m

∴ H = (18 + 4) × 12.7 = 279.4 mm
③ θ0, θ2:

θ0 = n1 · 360°/z = 36 arcsin(6.35/r)    (152.6)

θ2 = n2 · 360°/z = 8 arcsin(6.35/r)    (152.7)

where n1 is the number of pitches whose total length amounts to the distance between O1 and O2, and n2 is the number of pitches whose total length amounts to the distance between O2 and O3.
④ Based on Formulae (152.3) and (152.4), it may be deduced that

1.24 ≤ λ ≤ 3.08

⑤ The range of θ1: 0 ≤ θ1 ≤ π + 2θ0 + θ2
The track equations of the pivot centers and of the guide rail of the balance bars are omitted here.
From the above, variables x1, x2 and x3 may be substituted into the formulas; the optimum mathematical model of the chain transmission is then:

min f(x) = x2(1 + x1)
g1(x) = 0.312 − x2(1 + x1) ≤ 0
g2(x) = 0.15 − x2·x1 ≤ 0
g3(x) = x1·x2 − 0.25 ≤ 0
When 0 ≤ θ1 ≤ θ0,
g4(x) = ΔX1
g7(x) = 1.24 − x1 ≤ 0
g8(x) = x1 − 3.08 ≤ 0
g9(x) = 0.081 − x2 ≤ 0
It may be seen from the mathematical model that this optimum design is a constrained nonlinear optimization problem (Su et al. 2004). The Matlab function for solving such problems is FMINCON. The calculation results from Matlab are:

λ = 1.7609, r = 113 mm, Lmin = 312 mm
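Because the feasible region here is simple, the Matlab result can be cross-checked with a brute-force sketch. The constraint set used below (r(1+λ) ≥ 0.312 m, 0.15 ≤ λr ≤ 0.25, 1.24 ≤ λ ≤ 3.08, r ≥ 0.081 m) follows the model above; the grid resolution is purely illustrative:

```python
def feasible(k, r):
    """Feasibility of a design point (k = lambda = c/r, r in metres)."""
    c = k * r                      # overhang of the support rod
    return (r * (1 + k) >= 0.312   # vertical non-intervention: 2L >= w + A + 2s
            and 0.15 <= c <= 0.25  # strength bound on the overhang
            and 1.24 <= k <= 3.08
            and r >= 0.081)

# Exhaustive grid search for the minimum turning radius L = r(1 + k)
best = None
k = 1.24
while k <= 3.08:
    r = 0.081
    while r <= 0.20:
        if feasible(k, r):
            L = r * (1 + k)
            if best is None or L < best[0]:
                best = (L, k, r)
        r += 0.0005
    k += 0.002
```

The grid minimum lands very close to the FMINCON result L_min = 0.312 m, since the objective equals the left-hand side of the active constraint g1.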
According to Formula (152.8), z = 55.88, rounded to z = 56; then

d = p / sin(180°/z) = 226.5 mm

i.e. r = 113.25 mm. According to Formula (152.2), with c = 199.5 mm the turning radius of the container is L = 312.75 mm.
The results of the optimum design are: chain pitch 12.7 mm; overhang of the support rod 199.5 mm; number of pitches whose total length amounts to the height of the container 22; total number of pitches 264; pitch diameter 226.5 mm; number of chain wheel teeth 56; chain length 3352.8 mm; theoretical center distance 1320.8 mm.
According to the test results of the motion simulation, based on the non-intervention motion conditions of the containers, the results of a container turning radius L = 312.75 mm and a horizontal support rod overhang c = 199.5 mm are feasible.
152.6 Conclusion
The working principle of the erected circular medicine-chest is analyzed, the conditions for non-intervention motion of neighbouring drug containers are established, structural optimization is implemented with Matlab, and the feasibility of the structural optimum design is verified by analysis based on modeling and simulation.
Acknowledgments This work was completed with the support of the National Natural Science Foundation of China (No. 51105041).
References
Li C, Wang W, Yun C (2007) Status quo and new development of automated pharmacy. Robot
Technol Appl China 5:27–32
Lijun XJ, Qin W (2002) ADAMS tutorial example. Beijing Institute of Technology Press, Beijing
Liu W (2005) Design of vertical carousel automation based on solid works. Shandong University,
Jinan
Liu X, Yun C, Zhao X, Wang W, Ma Y (2009) Research and design on automated pharmacy.
Chin J Mach Des 26(7):65–68
Peng G (2007) Study and design of vertical circulation cubic garage based on CAE. Shandong
University, Jinan
Su J, Zhang L, Liu B (2004) MATLAB toolbox application. Electronic Industry Press, Beijing
Wang J (2005) Non-intervention motion conditions for vertical circulation of parking equipment carrier. Machinery 32(3):22–23
1446 Z. Zhang et al.
Zhao T (2009) Hospital pharmacy automation is the inevitable trend of pharmacy development.
Capital Med China 16(24):31
Zhao X, Yun C, Liu X, Wang W, Gao Z (2008) Research on the automated pharmacy system.
Chin J Sci Instrum 29(4):193–195, 200
Zheng Z (1992) Chain transmission design and the application. China Machine Press, Beijing
Chapter 153
Application of Noise Estimator
with Limited Memory Index on Flexure
Compensation of Rapid Transfer
Alignment
153.1 Introduction
W. Zhou Y. Ji (&)
Department of Automation, Harbin Engineering University, Harbin, China
e-mail: [email protected]
where

ω^s_ms = ψ̇_m    (153.12)

Expansion and differentiation of (153.11) results in (153.18), where

ω^s_im = ω̂^s_in + ω^s_nm − ω̃^s_ns    (153.19)

According to (153.9), inserting (153.19) into (153.18) leads to the attitude error model

ψ̇_m = −(ψ_m − ψ_a) × ω̃^s_ns + ω^s_f + ε^s    (153.20)

The error models of the accelerometer and the gyro are composed of a constant drift and white noise, and can be written as

∇ = ∇_c + w_a,  ∇̇_c = 0    (153.21)

ε = ε_c + w_g,  ε̇_c = 0    (153.22)
According to the error model under small misalignments, the state equation and the measurement equation are linear equations with additive noise, whose discrete general form is

x_k = Φ_{k-1} x_{k-1} + w_{k-1},  z_k = H_k x_k + v_k    (153.23)

where δ_kj is the Kronecker delta function. The flexure processes a_f and ω_f are chosen as state variables:

X = [δV  ψ_m  ψ_a  ∇  ε  a_f  ω_f]    (153.25)
When a Markov model is used to describe the flexure process, the dimension of the state increases rapidly. If the east and west channels are considered and a second-order Markov model is adopted, 8 additional state variables are required; if all three channels are described by third-order Markov models, 18 are required. The system model therefore needs to be simplified, and the state equation can be written as

x_k = Φ_{k-1} x_{k-1} + ΔU_{k-1} + w_{k-1}    (153.26)

where ΔU_{k-1} can be written as

ΔU_{k-1} = [C^n_s a^s_f; ω^s_f]    (153.27)
When the error model is built accurately and the system noise can be obtained correctly, the classic Kalman filter can be written as:
(1) Setting of initial values

x̂_0 = E(x_0),  P_0 = E[(x_0 − x̂_0)(x_0 − x̂_0)^T]    (153.29)

(2) Time and measurement update

x̂_{k,k-1} = Φ_{k-1} x̂_{k-1}    (153.30)

P_k = [I − K_k H_k] P_{k,k-1}    (153.34)

x̂_k = x̂_{k,k-1} + K_k (z_k − ẑ_{k,k-1})    (153.35)
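The predict/update cycle of Eqs. (153.30)–(153.35) can be exercised with a minimal scalar sketch (Φ, H, Q, R are scalars here; this is an illustration, not the paper's full state model):

```python
def kf_step(x, P, Phi, Q, H, R, z):
    """One predict/measurement-update cycle of the classic Kalman filter
    in the scalar case."""
    # time update, Eq. (153.30)
    x_pred = Phi * x
    P_pred = Phi * P * Phi + Q
    # measurement update: gain, covariance (153.34), state (153.35)
    K = P_pred * H / (H * P_pred * H + R)
    x_new = x_pred + K * (z - H * x_pred)
    P_new = (1 - K * H) * P_pred
    return x_new, P_new
```

For a constant state observed with unit noise (Φ = H = R = 1, Q = 0), repeated calls shrink P like 1/(n+1), the textbook averaging behaviour.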
β_i = (b^{i-1} − b^i) / (1 − b^m)    (153.38)

Inserting (153.38) into (153.36) leads to

Q̂_k = (1/k) Σ_{i=1}^{k} b^{k+1-i} { K_i (z_i − H_i x̂_{i,i-1})(z_i − H_i x̂_{i,i-1})^T K_i^T − Φ_{i-1} P_{i-1} Φ^T_{i-1} + P_i }    (153.39)
So the process noise estimator based on limited memory length is given by

Q̂_k = b Q̂_{k-1} + [(1 − b)/(1 − b^m)] { K_k (z_k − H_k x̂_{k,k-1})(z_k − H_k x̂_{k,k-1})^T K_k^T − Φ_{k-1} P_{k-1} Φ^T_{k-1} + P_k }
  − [(b^m − b^{m+1})/(1 − b^m)] { K_{k-m} (z_{k-m} − H_{k-m} x̂_{k-m,k-m-1})(z_{k-m} − H_{k-m} x̂_{k-m,k-m-1})^T K^T_{k-m} − Φ_{k-m-1} P_{k-m-1} Φ^T_{k-m-1} + P_{k-m} }    (153.40)

where the memory length m can be adjusted according to the actual environment.
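The windowed, exponentially weighted averaging behind (153.40) can be sketched in scalar form. The per-step term ω_k stands for the bracketed innovation/covariance quantity, and the class interface is an illustrative assumption, not the paper's implementation:

```python
from collections import deque

class LimitedMemoryNoiseEstimator:
    """Scalar sketch of the limited-memory-index process-noise estimator:
    average the last m per-step terms with geometric weights b**(k-i), so
    recent innovations dominate and terms older than m drop out of the sum."""

    def __init__(self, m, b):
        assert m >= 1 and 0.0 < b < 1.0
        self.b = b
        self.window = deque(maxlen=m)  # deque discards terms older than m

    def update(self, omega_k):
        """omega_k: current per-step term, e.g. K*nu**2*K - Phi*P_prev*Phi + P."""
        self.window.append(omega_k)
        n = len(self.window)
        num = sum(self.b ** (n - 1 - i) * w for i, w in enumerate(self.window))
        den = (1.0 - self.b ** n) / (1.0 - self.b)  # normalizing weight sum
        return num / den
```

With a constant input the estimate reproduces that constant exactly, while after a change the newest terms dominate, which is the real-time tracking property exploited in Scheme 3 below.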
The rapid transfer alignment of a shipboard aircraft is simulated, where the swing maneuver of the ship is driven by sea waves. To obtain better observability, velocity/attitude matching is selected, whose measurement equation is given by

z = Hx + v    (153.41)

According to Sect. 153.2, after compensation of the process noise, the state can be given by

X = [δV  ψ_m  ψ_a  ∇  ε]    (153.42)
The swinging amplitudes θ_xm, θ_ym, θ_zm are 5°, 4°, 2°; the frequencies ω_x, ω_y, ω_z are 0.18, 0.13, 0.06 Hz; and the initial angles θ_x0, θ_y0, θ_z0 are all set to 0°. The gyro constant drift of the SINS is set to 0.05°/h and the accelerometer offset to 10⁻³ g. The variances of the white noise of the gyro and the accelerometer are set to (0.001°/h)² and (10⁻⁴ g)², respectively. The misalignments ψ_x, ψ_y, ψ_z are 15′, 30′, 1°. The simulation time is 100 s.
A second-order Markov model is adopted for the true flexural deformation, and the model coefficients of the three channels are set to β_x = 0.1, β_y = 0.2, β_z = 0.4, with white-noise variances of (0.05°/h)² and (10⁻³ g)². Scheme 1 uses the same model but with different coefficients, β_x = 0.2, β_y = 0.3, β_z = 0.5. The noise compensation method is used in Scheme 2, but the statistical characteristics of the noise are fixed, with the compensation coefficient set to 1.5. The process noise estimator with limited memory index is used in Scheme 3, with the memory length m set to 10 and the forgetting factor b set to 0.3. The filter frequency is set to 5 Hz. The estimation results of the three schemes are shown in Figs. 153.1, 153.2, and 153.3.
There are slight deviations in the model coefficients between the true model and Scheme 1. However, it can be seen from the simulation results that in Scheme 1
the standard deviations of the three misalignments are 0.32′, 0.67′, 2.19′. However, limited by the high computation burden, the convergence speed declines. In Scheme 2, because of the reduction of the state, the convergence speed improves; the cost is a decline in filter accuracy, since fixed noise statistical characteristics cannot follow the variation of the actual environment. The standard deviations of the three misalignments are 1.43′, 3.56′, 12.53′. On the basis of Scheme 2, Scheme 3 uses the noise estimator with limited memory index to track the system noise in real time. The standard deviations of the three misalignments are 0.28′, 0.76′, 2.55′, which is more accurate than Scheme 2 and faster than Scheme 1.
153.5 Conclusion
After the modeling of rapid transfer alignment, the error equations are simplified by the noise compensation method. Aiming at the time-variant characteristics of the flexure process in the time domain, the noise compensation problem of flexural
References
Bavdekar VA, Deshpande AP, Patwardhan SC (2011) Identification of process and measurement
noise covariance for state and parameter estimation using extended Kalman filter. J Process
Control 21(4):585–601
Jones D, Roberts C, Tarrant D (1993) Transfer alignment design and evaluation environment. In:
IEEE proceedings of aerospace control systems, pp 753–757
Kain JE, Cloutier JR (1989) Rapid transfer alignment for tactical weapon application. In:
Proceedings of the AIAA guidance, navigation and control conference, Boston, pp 1290–1300
Lim Y-C, Lyou J (2001) An error compensation method for transfer alignment. In: Proceedings
of IEEE conference on electrical and electronic technology. TENCON, vol 2, pp 850–855
Mohamed AH (1999) Adaptive Kalman filtering for INS/GPS. J Geodesy 73(4):193–203
Qi S, Han J-D (2008) An adaptive UKF algorithm for the state and parameter estimation of a
mobile robot. Acta Automatica Sinica 34(1):72–79
Robert MR (1996) Weapon IMU transfer alignment using aircraft position from actual flight tests.
In: Proceedings of IEEE position location and navigation symposium, pp 328–335
Ross CC, Elbert TF (1994) A transfer alignment algorithm study based on actual flight test data
from a tactical air-to-ground weapon launch. In: Proceedings of IEEE position location and
navigation symposium, pp 431–438
Sage AP, Husa GW (1969) Adaptive filtering with unknown prior statistics. In: Joint automatic
control conference, Colombia, pp 760–769
Spalding K (1992) An efficient rapid transfer alignment filter. In: Proceedings of the AIAA
guidance, navigation and control conference, pp 1276–1286
Wendel J, Metzger J., Trommer GF (2004) Rapid transfer alignment in the presence of time
correlated measurement and system noise. In: AIAA guidance, navigation, and control
conference and exhibit, Providence, RI, pp 1–12
Xiao Y, Zhang H (2001) Study on transfer alignment with the wing flexure of aircraft. Aerosp
Control 2:27–35
Xiong K, Zhang HY, Chan CW (2006) Performance evaluation of UKF-based nonlinear filtering.
Automatica 42(2):261–270
Xiong K, Zhang HY, Chan CW (2007) Author’s reply to ‘‘comments on ‘performance evaluation
of UKF-based nonlinear filtering’’’. Automatica 43(3):569–570
Zhao L, Wang X (2009) Design of unscented Kalman filter with noise statistic estimator. Control
Decis 24(10):1483–1488
Chapter 154
A Cumulative SaaS Service Evolution
Model Based on Expanded Pi Calculus
154.1 Introduction
J. He (&) T. Li D. Zhang
School of Software, Yunnan University, Kunming, Yunnan, China
e-mail: [email protected]
J. He T. Li
Yunnan Provincial Key Lab of Software Engineering, Kunming, Yunnan, China
mixed and harder to approach, as their requirements are open to ever more rapid changes and variations. To meet their needs, a new revolutionary software service that permits enhanced customizability and dynamic adaptability is called for, and SaaS software is just the right thing. Compared to traditional software, it can orchestrate much more complex evolution processes, and it can deal simultaneously with the frequent corrections and replacements caused by both the universal and the particular needs of customers. However, whether the post-evolution services remain customer-friendly poses challenges. SaaS therefore has to be evolved at the coarsest-grained layer, so that the evolution process is invisible and transparent to customers, guaranteeing a better online experience on their part. For the moment, the biggest challenges posed by the evolution of SaaS services range from the problem of the coarsest grain and transparency to progressiveness.
A number of studies on the evolution of SaaS software have been conducted. However, they mainly focus on the issue of ''multi-tenant versus one instance'' for the customization of work procedures and on measures for data safety (Luo and Wu 2011; Liang et al. 2010; Bezemer and Zaidman 2009). One study (Liang et al. 2011) brings forward, from the perspective of workflow, an evolution model and method that support a supreme workflow; another addresses an evolution model and data dependence (Liu et al. 2010); yet another, though not yet expressed in formulas, proposes the concept of services in evolution to describe the incessant changes of services and a possible solution for cooperation across services (Ramil and Lehman 2002).
Taking the expanded Pi calculus (EPI) as its descriptive formalism, this paper proposes a cumulative model to support the evolution of SaaS services, so as to put forward a theoretical formalism applicable to the automatic evolution of services in prospect.
The Pi calculus (Milner 1999) is the calculus model brought forward by Milner and others on the basis of the Calculus of Communicating Systems (CCS) to depict and analyze changing top-level structures. It is often used to describe inter-process interactions and concurrent communication operations. The formalism can be put in several ways (Sangiorgi and Walker 2003); this paper adopts its ordinary form, defined as follows (Liao et al. 2005):
154 A Cumulative SaaS Service Evolution Model Based on Expanded Pi Calculus 1459

Definition 154.1 Suppose N is a set of an infinite number of names, and i, j, k,
l, m, n, … are names in N; capital letters A, B, C, … denote different
processes; and capital letters P, Q, R, … stand for process expressions. Then
the basic grammar is as in (154.1).
the process of an upper level; for example, i → Si means the process i belongs
to the process sequence Si, and thus the analysis of the relationship of
processes can be limited to a certain layer.
2. Use the symbol "[θ]" to describe the restrictive conditions of process
transfers, where θ denotes the condition responsible for a transfer; it can be
written as [i = j]P, an expanded but equivalent expression, i.e. [θ]P ≜ [i = j]P.
Symbols ! and ? respectively indicate the output and input of data: i!x
indicates the output of data x through channel i, and j?y indicates the input of
data y through channel j. The expressions [θi]i!x and [φj]j?x are used to
indicate the conditions that trigger the transfer of processes as well as the
emission and reception of data: [θi]i!x indicates that the process i sends the
data x under condition θi, and [φj]j?x indicates that the process j receives the
data x under condition φj.
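Purely as an illustration (this is not the paper's formalism), the guarded
send/receive semantics just described can be mimicked in a few lines of Python;
the function `step` and the tuple layouts are inventions of this sketch:

```python
# Minimal sketch of one reduction step over the expanded Pi calculus
# primitives [theta] i!x (guarded send) and [phi] j?y (guarded receive).

def step(send, recv):
    """Attempt one communication step.

    send: (condition, sender_name, datum)   models [theta] i!x
    recv: (condition, receiver_name)        models [phi]  j?y
    Returns the datum delivered to the receiver, or None when either
    guard is false and the transfer is suppressed.
    """
    theta, _i, x = send
    phi, _j = recv
    if theta and phi:        # both guards hold: i!x | j?y  ->  y := x
        return x
    return None              # a false guard blocks the communication

# The transfer fires only when both conditions are satisfied:
assert step((True, "i", 42), (True, "j")) == 42
assert step((False, "i", 42), (True, "j")) is None
```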
When SaaS software functions are abstracted as services, the evolution of
services can be applied to realize the dynamic change and maintenance of the
software. Since evolution is basically a change in the structure, properties and
operations of SaaS services (Papazoglou 2008), an organizational structure that
encompasses all these factors can not only illustrate the relationship between
an upper and a lower service layer, define the properties of services and the
types of operations, and demonstrate the mapping relationships among them, but
also illuminate the organic integration of the service structures and the
service-invocation mechanism that SaaS software involves. Based on the analysis
of the evolution of services and of the mapping relationship embodied in the
expanded Pi calculus, the following definitions are made:
Definition 154.3 Suppose there are n SaaS service sequences, written Si with
i ∈ {1, 2, …, n}. Si is a set of atom services, so Si = {r | r → Si}. The atom
service r in the expression corresponds to a process in the expanded Pi calculus.
Definition 154.4 A service schema of the SaaS services can be defined as a
quintuple, as in (154.2):

Σs = (S, A, E, C, f)    (154.2)

In it:
1. S is the set of atom services of the SaaS services, S = {S1, S2, …, Sn}.
2. A = ⋃(i=1..n) Ai is the set of the service operations that the atom services
provide; Ai = {ai1, ai2, …, ain} is the set of the service operations that the
atom service Si provides.
3. E is the set of sequential execution processes provided by the atom services
S; an execution process is a partial sequence of atom services, for example
S1 ≺ S2 ≺ S3 ≺ S4 ≺ S5.
4. C is a partial sequence of service operations, for example
(S1, a11) ≺ (S2, a21) ≺ (S3, a31) ≺ (S4, a41) ≺ (S5, a51).
5. The expression f : A → A defines the operational mapping function of the
services from an upper layer to a lower one, and f satisfies f(S, E) = C. Since
f can be defined as a recursive function, a SaaS service can be subdivided into
any number of layers and grains according to actual customer needs. Such a
rendering of the definition greatly improves the applicability of the formalism.
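As a sketch only, the quintuple Σs = (S, A, E, C, f) of Definition 154.4 can be
rendered as a small Python data structure; the class name, the sample services
and the toy mapping f below are all hypothetical:

```python
# Illustrative rendering of the service schema quintuple (S, A, E, C, f).
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class ServiceSchema:
    S: List[str]                  # atom services S1..Sn
    A: Dict[str, List[str]]       # operations Ai provided by each atom service
    E: List[List[str]]            # execution processes (partial sequences)
    C: List[List[Tuple[str, str]]] = field(default_factory=list)

    def f(self, seq: List[str]) -> List[Tuple[str, str]]:
        """Toy mapping f(S, E) = C: pair each service with its first operation."""
        return [(s, self.A[s][0]) for s in seq if self.A.get(s)]

schema = ServiceSchema(
    S=["S1", "S2"],
    A={"S1": ["a11"], "S2": ["a21"]},
    E=[["S1", "S2"]],
)
assert schema.f(["S1", "S2"]) == [("S1", "a11"), ("S2", "a21")]
```

Because `f` only depends on the sequence it is given, it can be applied
recursively to sub-sequences, mirroring the layering remark in item 5.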
While a SaaS application runs, customer needs and business logic keep changing.
Though a change may be relevant to only some of the customers, the evolution
process is supposed to be transparent and cumulative to all users without
distinction. To improve the service customizability and dynamic adaptability of
SaaS software, this paper proposes a cumulative evolution model of SaaS,
discussed in terms of four atom evolutions and their integration. The four atom
evolutions are: the sequential cumulative evolution, the reverse cumulative
evolution, the parallel cumulative evolution, and the corrective cumulative
evolution.
Definition 154.5 The service schema proposed in Definition 154.4 can be used to
express a SaaS service: Σs = (S, A, E, C, f). The capital letters A, B, C, …
each denote one of the service serials represented by the services i, j, r, …;
the symbol f, evolved into an operational mapping function, stands for the
mapping relationship between the service sequences before and after an
evolution.
1462 J. He et al.
The sequential cumulative evolution adds the service r to the sequence S, which
is thereby transformed into a new service sequence. The sequential cumulative
evolution is the atom form underlying the other evolutions.
The service sequence S before evolution can be expressed as:

i → Si [θi]!x [φj]?x j → Sj,  in which: A = i → Si [θi]!x 0, B = [φj]?x j → Sj.

The added service can be expressed as r → Sr; thus, the sequential cumulative
evolution of the service sequence S is:

fSIE : i → Si [θi]!x [φr]?x r → Sr [θk]!y [φj]?y j → Sj    (154.3)

in which:

A = i → Si [θi]!x 0; C = [φr]?x r → Sr [θk]!y; B = [φj]?y j → Sj
As the evolution of the service sequence proceeds, the data sending and
receiving conditions that trigger evolution change with it. This again suggests
that the evolution process of a service sequence is always dynamic and
incessant.
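Purely as a sketch (the paper gives no code), the effect of fSIE on the ordering
of a service sequence can be pictured as a list insertion; the names below are
illustrative:

```python
# Sequential cumulative evolution as an ordering transformation: insert the
# new service r immediately after a given service, so that the former
# sender/receiver pair (i, j) is rewired through r.

def sequential_insert(sequence, new_service, after):
    """Return a new sequence with new_service placed right after `after`."""
    pos = sequence.index(after) + 1
    return sequence[:pos] + [new_service] + sequence[pos:]

before = ["i", "j"]
after_evo = sequential_insert(before, "r", after="i")
assert after_evo == ["i", "r", "j"]
# Each application is itself an atom evolution, so evolutions compose:
assert sequential_insert(after_evo, "s", after="r") == ["i", "r", "s", "j"]
```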
in which:
A = i → Si [θi]!x 0; B = [φj]?x j → Sj [θj]!y 0; C = [φk]?y k → Sk
The parallel cumulative evolution supports the concurrent running of two
services: one service is inserted, and both can then be executed in parallel.
Once a service is inserted in this way, the number of messages sent or received
by the services immediately before and after the pair increases from 1 to 2.
The service sequence S prior to the evolution can be expressed as in (154.6):

i → Si [θi]!x [φj]?x j → Sj [θj]!y [φk]?y k → Sk    (154.6)

In which:
A = i → Si [θi]!x 0; B = [φj]?x j → Sj [θj]!y 0;
C = [φk]?y k → Sk
The inserted service can be expressed as r → Sr, and u, v indicate the inlet and
outlet of the messages the service r is responsible for.
The parallel cumulative evolution of the service serial S can thus be expressed
as in (154.7):

fPIE : i → Si ([θi]!x | [θi′]!u) (([φj]?x j → Sj [θj]!y) | ([φr]?u r → Sr [θr′]!v)) (([φk]?y | [φk′]?v) k → Sk)    (154.7)

In which:
A = i → Si ([θi]!x | [θi′]!u) 0; B = [φj]?x j → Sj [θj]!y 0;
C = ([φk]?y | [φk′]?v) k → Sk
When a service no longer appeals to customers, it must be replaced, either by an
adapted service or by a brand-new one.
The service sequence S before evolution can be expressed as in (154.8):

i → Si [θi]!x [φj]?x j → Sj [θj]!y [φk]?y k → Sk    (154.8)

In which:
A = i → Si [θi]!x 0; B = [φj]?x j → Sj [θj]!y 0; C = [φk]?y k → Sk
In which:
A = i → Si [θi]!u 0; B = [φj′]?u j → Sj′ [θj′]!v 0; C = [φk]?v k → Sk
The sequential, reverse, parallel and corrective cumulative evolutions are the
four atom evolution models that underlie the ultimate cumulative evolution of
the SaaS service in question. Every evolution process can be realized by
configuring the four atom evolutions in different ways. Since the sequential and
the parallel cumulative evolution involve branching, it is necessary to analyze
the interrelationship of the evolution sequences and their outcomes. The
integrated evolution process can be expressed as follows (Figs. 154.1, 154.2):
The service sequence S before integration can be expressed as in (154.10):

i → Si [θi]!x [φj]?x j → Sj [θj]!z [φk]?z k → Sk    (154.10)
In which:
A = i → Si [θi]!x 0; B = [φj]?x j → Sj [θj]!z 0; C = [φk]?z k → Sk

i → Si ([θi]!x | [θi′]!u) (([φm]?x m → Sm [θm]!y [φj]?y j → Sj [θj]!z) | ([φm]?u m → Sm [θm]!v [φn]?v n → Sn [θn]!w)) (([φk]?z | [φk′]?w) k → Sk)    (154.14)
When the parallel integration runs before the sequential integration, the
evolution process is more complex, as it may induce redundant services; the
elimination of redundant services therefore has to be considered, and whether
formulas (154.13) and (154.14) are equal in value has to be verified.
Theorem 154.1 When the sequential cumulative evolution and the parallel
cumulative evolution are executed on the service sequence S, the outcomes of the
two evolution sequences are the same.
Proof By the bisimulation theory (Milner 1999). To distinguish the outcomes of
the two evolutions, the services of integration sequencing 2 are represented by
i′, j′, k′, m′, m″, n′, where m′ and m″ are equal in value.
Suppose (S, T) is a labelled transition system, where T represents the transfers
of services within S:

T = {(i, m), (i′, m′), (m, n), (m, i), (m′, n), (m″, n′)}

And suppose F is a binary relation on S, then
154.4 Conclusion
SaaS is an internet-based model for supplying and delivering software services.
With the development of cloud computing, research on SaaS is gaining momentum
(Wu et al. 2011; Ardagna et al. 2007), but the literature focused on the
evolution of SaaS services remains inadequate. As the evolution of SaaS software
demands a higher level of service customizability and dynamic adaptability than
traditional software requires, the evolution has to be executed in a
coarsest-grained, transparent and gradual way (Weber et al. 2008). This paper
proposes a cumulative evolution model of SaaS services on the basis of an
expanded Pi calculus. By expanding the typical Pi calculus, it becomes possible
to enrich the ownership relationships in the collection of names and the
restrictive conditions that determine the transfers of processes, and thereby to
orchestrate the evolution of SaaS services as a whole. Following the analysis of
the cumulative evolution, four atom cumulative evolutions are discussed; their
possible integrations, and the conditions under which the integrations are equal
in value, are then demonstrated. Further research should address the layering of
services and the testing of the model's functions.
Acknowledgments Foundation item: National Natural Science Fund Project (No. 60963007);
Software College of Yunnan University Construction Fund Project (No. 2010KS01).
References
Ardagna D, Comuzzi M, Mussi E (2007) Paws: a framework for executing adaptive web service
processes. IEEE Softw 24:39–46
Bezemer C-P, Zaidman A (2009) Multi-tenant SaaS applications maintenance dream or
nightmare. Position Paper 4:88–89
Liang S, Shuai L, Zhong L (2010) TLA based customization and verification mechanism of
business process for SaaS. Chin J Comput 33(11):2056–2058 (in Chinese)
Liang G, Jian C, Chen J (2011) Self-evolving for process model of software as a service. Comput
Integr Manuf Syst 17(8):1603–1608 (in Chinese)
Liao J, Tan H, Liu J (2005) Based on Pi calculation of web services composition description and
verification. Chin J Comput 33(4):635–643 (in Chinese)
Liu S, Wang H, Cui L (2010) Application of SaaS based on data dependency of the progressive
pattern evolution method. In: The first national conference on service computing (CCF NCSC
2010) essays, pp 127–129
Luo X-l, Wu Q-l (2011) Research of business logic framework for SaaS software service based
on mass customization. Telecommun Sci 32:26–28 (in Chinese)
Milner R (1999) Communicating and mobile systems: the Pi calculus. Cambridge University
Press, Cambridge
Papazoglou M (2008) The challenges of service evolution. In: Proceedings of the 20th
international conference on advanced information systems engineering, pp 1–15
Ramil JF, Lehman MM (2002) Evolution in software and related areas. In: ACM
Sangiorgi D, Walker D (2003) The Pi calculus: a theory of mobile processes. Cambridge
University Press, New York
Weber B, Reichert M, Rinderle-Ma S (2008) Change patterns and change support features-
enhancing flexibility in process-aware information systems. Data Knowl Eng 64(3):438–466
Wei G (2011) Overview of SaaS theory and application. Agric Netw Inf 26:69–70 (in Chinese)
Wu X, Wang M, Zhang W (2011) Overview of cloud computing development. Sci Technol Vane
209:49–52 (in Chinese)
Zhou J, Ceng G (2007) Based on the CPi calculus grid service behavior research. Comput Sci
34(6):13–18 (in Chinese)
Zhou J, Zeng G (2009) A mechanism for grid service composition behavior specification and
verification. Future Gen Comput Syst 25(3):378–383
Chapter 155
Permanence and Extinction of Periodic
Delay Predator–Prey System with Two
Predators and Stage Structure for Prey
155.1 Introduction
The aim of this paper is to investigate the permanence and extinction of the
following periodic delay three species predator–prey system with Holling IV and
Beddington–DeAngelis functional response and stage-structure for prey
\[
\begin{cases}
x_1'(t) = a(t)x_2(t) - b(t)x_1(t) - a(t-\tau_1)\,e^{-\int_{t-\tau_1}^{t} b(s)\,ds}\,x_2(t-\tau_1) - \dfrac{h_1(t)x_1(t)}{a_1(t)+x_1^2(t)}\,y_1(t),\\[2mm]
x_2'(t) = a(t-\tau_1)\,e^{-\int_{t-\tau_1}^{t} b(s)\,ds}\,x_2(t-\tau_1) - c(t)x_2^2(t) - \dfrac{h_2(t)x_2(t)}{a_2(t)+\beta(t)x_2(t)+\gamma(t)y_2(t)}\,y_2(t),\\[2mm]
y_1'(t) = y_1(t)\Bigl[-q_1(t) + \dfrac{p_1(t)x_1(t-\tau_2)}{a_1(t)+x_1^2(t-\tau_2)} - g_1(t)y_1(t)\Bigr],\\[2mm]
y_2'(t) = y_2(t)\Bigl[-q_2(t) + \dfrac{p_2(t)x_2(t-\tau_3)}{a_2(t)+\beta(t)x_2(t-\tau_3)+\gamma(t)y_2(t-\tau_3)} - g_2(t)y_2(t)\Bigr],
\end{cases}
\tag{155.1}
\]
where x_1(t) and x_2(t) denote the densities of the immature and mature prey
species at time t, respectively; y_1(t) and y_2(t) denote the densities of the
predators that prey on the immature and mature prey at time t, respectively;
a(t), b(t), c(t), g_i(t), h_i(t), p_i(t), q_i(t), a_i(t) (i = 1, 2), β(t), γ(t)
are all continuous positive ω-periodic functions; and τ_i (i = 1, 2, 3) are
positive constants.
We refer to (Liu and Yan 2011; Zhu et al. 2011) for the biological significance
of all parameters and an explanation of the assumptions of (155.1).
The initial conditions for system (155.1) are as follows:
According to the above analysis, we obtain the following system (155.3).
155 Permanence and Extinction of Periodic Delay 1471

\[
\begin{cases}
\dot{x}_1(t) = a(t)x_2(t) - b(t)x_1(t) - B(t)x_2(t-\tau_1) - \dfrac{h_1(t)x_1(t)}{a_1(t)+x_1^2(t)}\,y_1(t),\\[2mm]
\dot{x}_2(t) = B(t)x_2(t-\tau_1) - c(t)x_2^2(t) - \dfrac{h_2(t)x_2(t)}{a_2(t)+\beta(t)x_2(t)+\gamma(t)y_2(t)}\,y_2(t),\\[2mm]
\dot{y}_1(t) = y_1(t)\Bigl[-q_1(t) + \dfrac{p_1(t)x_1(t-\tau_2)}{a_1(t)+x_1^2(t-\tau_2)} - g_1(t)y_1(t)\Bigr],\\[2mm]
\dot{y}_2(t) = y_2(t)\Bigl[-q_2(t) + \dfrac{p_2(t)x_2(t-\tau_3)}{a_2(t)+\beta(t)x_2(t-\tau_3)+\gamma(t)y_2(t-\tau_3)} - g_2(t)y_2(t)\Bigr],
\end{cases}
\tag{155.3}
\]

where B(t) = a(t-\tau_1)\,e^{-\int_{t-\tau_1}^{t} b(s)\,ds}.
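System (155.3) can also be explored numerically. The following is a rough Python
sketch only, using forward Euler with a history buffer to handle the delays; all
coefficient values below are placeholder constants chosen for illustration, not
the paper's parameter choices:

```python
# Forward-Euler sketch of the delayed system (155.3). Coefficients are
# constant placeholders; the delays are resolved through a stored history.
import math

dt = 0.01
tau1 = tau2 = tau3 = 0.6
lag1, lag2, lag3 = int(tau1/dt), int(tau2/dt), int(tau3/dt)

def a(t): return 4.0
def b(t): return 2.0/3.0
def c(t): return 2.0
def B(t): return a(t - tau1) * math.exp(-b(t) * tau1)  # a(t-tau1) e^{-int b}

h1 = h2 = 1.0; p1, p2 = 2.0, 3.0; q1, q2 = 0.25, 0.2
g1 = g2 = 0.1; a1c = a2c = 1.0; beta = gamma = 1.0

hist = [[0.5, 0.5, 0.2, 0.2]]          # constant positive initial history
steps = 2000
for n in range(steps):
    t = n * dt
    x1, x2, y1, y2 = hist[-1]
    x2d1 = hist[max(0, n - lag1)][1]   # x2(t - tau1)
    x1d2 = hist[max(0, n - lag2)][0]   # x1(t - tau2)
    x2d3 = hist[max(0, n - lag3)][1]   # x2(t - tau3)
    y2d3 = hist[max(0, n - lag3)][3]   # y2(t - tau3)
    dx1 = a(t)*x2 - b(t)*x1 - B(t)*x2d1 - h1*x1*y1/(a1c + x1**2)
    dx2 = B(t)*x2d1 - c(t)*x2**2 - h2*x2*y2/(a2c + beta*x2 + gamma*y2)
    dy1 = y1*(-q1 + p1*x1d2/(a1c + x1d2**2) - g1*y1)
    dy2 = y2*(-q2 + p2*x2d3/(a2c + beta*x2d3 + gamma*y2d3) - g2*y2)
    hist.append([x1 + dt*dx1, x2 + dt*dx2, y1 + dt*dy1, y2 + dt*dy2])

# With a positive initial history the scheme stays finite over this horizon.
assert all(math.isfinite(v) for v in hist[-1])
```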
The dynamic behavior of predator–prey systems with delay and stage structure for
the prey has long been discussed; see (Liu and Yan 2011; Zhu et al. 2011; Li and
Qian 2011; Li et al. 2011). Recently, Wang (Xiong and Li 2008) studied a model
of this type and, using the software Maple, obtained the corresponding numerical
results. We refer to (Hao and Jia 2008; Wang and Chen 2010; Liu and Yan 2011;
Naji and Balasim 2007; Cai et al. 2009; Zhao and Lv 2009; Feng et al. 2010;
Zhang et al. 2011) for an in-depth view of further research on these models.
Research on stage-structured, time-delayed periodic predator–prey systems,
however, remains quite rare. Recently, Kar (Ta and Nguyen 2011), Huang (Chen and
You 2008) and Chen (Huang et al. 2010) have treated the permanence and
extinction of such systems. To preserve the biological variety of an ecosystem,
the dynamic behavior of biotic populations is a significant and comprehensive
problem in biomathematics, so it is meaningful to investigate system (155.3).
In the next section we state the main results of this paper; sufficient
conditions for the permanence and extinction of system (155.3) are proved in
Section III. The conclusions we obtain further advance the analysis techniques
of Huang (Chen and You 2008) and Chen (Huang et al. 2010).
Theorem 155.2.2 Assume condition (155.4) holds; then system (155.3) has at least
one positive ω-periodic solution.
Theorem 155.2.3 Suppose that

\[
m\Bigl[-q_1(t) + \frac{p_1(t)\,x_1^*(t-\tau_2)}{a_1(t)+x_1^{*2}(t-\tau_2)}\Bigr] < 0,
\qquad
m\Bigl[-q_2(t) + \frac{p_2(t)\,x_2^*(t-\tau_3)}{a_2(t)+\beta(t)x_2^*(t-\tau_3)}\Bigr] < 0
\tag{155.5}
\]

hold; then any solution of system (155.3) with initial condition (155.2)
satisfies

\[
\lim_{t\to+\infty} y_i(t) = 0, \quad i = 1, 2.
\]
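The theorems above use a mean-value operator m[·] whose definition is not
restated in this excerpt; the convention usually adopted for such periodic
systems, and assumed here, is the average over one period:

```latex
% Assumed convention: m[.] denotes the omega-mean of a continuous
% omega-periodic function f,
m[f] = \frac{1}{\omega}\int_{0}^{\omega} f(t)\,dt .
```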
\[
m\Bigl[-q_i(t) + \frac{p_i(t)\bigl(x_i^*(t)+\varepsilon\bigr)}{a_i(t)}\Bigr] > 0.
\tag{155.8}
\]

Thus, from the global attractivity of x_1^*(t), x_2^*(t), for every given
ε (0 < ε < 1) there exists T_1 > 0 such that
By the same argument as in Lemma 155.3.1, we can easily get Lemma 155.3.2.
Lemma 155.3.2 There exist positive constants g_{ix} < M_x, i = 1, 2, such that

\[
\liminf_{t\to+\infty} x_i(t) > g_{ix}, \quad i = 1, 2.
\]

Lemma 155.3.3 Assume that (155.4) holds; then there exist two positive constants
g_{iy}, i = 1, 2, such that any solution (x_1(t), x_2(t), y_1(t), y_2(t)) of
system (155.3) with initial condition (155.2) satisfies
1474 W. Zheng and E. Han
where

\[
u_{\varepsilon_0}(t) = -q_1(t) + \frac{p_1(t)\bigl(x_1^*(t-\tau_2)-\varepsilon_0\bigr)}{a_1(t)+\bigl(x_1^*(t-\tau_2)-\varepsilon_0\bigr)^2} - g_1(t)\varepsilon_0,
\]
\[
w_{\varepsilon_0}(t) = -q_2(t) + \frac{p_2(t)\bigl(x_2^*(t-\tau_3)-\varepsilon_0\bigr)}{a_2(t)+\beta(t)\bigl(x_2^*(t-\tau_3)-\varepsilon_0\bigr)+\gamma(t)\varepsilon_0} - g_2(t)\varepsilon_0.
\]
Take the following equation with a parameter ε > 0 into account:

\[
\begin{cases}
x_1'(t) = a(t)x_2(t) - \Bigl(b(t) + \dfrac{2\varepsilon h_1(t)}{a_1(t)}\Bigr)x_1(t) - B(t)x_2(t-\tau_1),\\[2mm]
x_2'(t) = B(t)x_2(t-\tau_1) - \Bigl(c(t) + \dfrac{2\varepsilon h_2(t)}{a_2(t)}\Bigr)x_2^2(t).
\end{cases}
\tag{155.14}
\]
By Lemma 155.2.2, system (155.14) has a unique positive ω-periodic solution
(x̃_{1ε}(t), x̃_{2ε}(t)), which is globally attractive. Let (x_{1ε}(t), x_{2ε}(t))
be the solution of (155.14) with initial condition x_{iε}(0) = x_i(0), i = 1, 2.
Then, for the above ε_0, there exists a sufficiently large T_4 > T_3 such that

|x_{iε}(t) − x̃_{iε}(t)| < ε_0/4,  t ≥ T_4.

We have x̃_{iε}(t) → x_i^*(t) in [T_4, T_4 + ω] as ε → 0. Then there exists
ε_0^* > 0 such that

|x̃_{iε}(t) − x_i^*(t)| < ε_0/4,  t ∈ [T_4, T_4 + ω], 0 < ε < ε_0^*.

So we can get

|x_{iε}(t) − x_i^*(t)| ≤ |x_{iε}(t) − x̃_{iε}(t)| + |x̃_{iε}(t) − x_i^*(t)| < ε_0/2.

Since x̃_{iε}(t) and x_i^*(t) are all ω-periodic, hence

|x̃_{iε}(t) − x_i^*(t)| < ε_0/2,  i = 1, 2, t ≥ 0, 0 < ε < ε_0^*.

Choosing a constant ε_1 (0 < ε_1 < ε_0^*, 2ε_1 < ε_0), we have

|x̃_{iε_1}(t) − x_i^*(t)| ≤ ε_0/2,  i = 1, 2, t ≥ 0.   (155.15)
Suppose (155.12) is false; then there exists φ ∈ R_+^4 such that, under the
initial condition (x_1(θ), x_2(θ), y_1(θ), y_2(θ)) = φ, θ ∈ [−τ, 0], we have
lim sup_{t→+∞} y_i(t, φ) < ε_1, i = 1, 2.
So there exists T_5 > T_4 such that

y_i(t, φ) < 2ε_1 < ε_0,  t ≥ T_5.   (155.16)
Using (155.16) in system (155.3), for all t ≥ T_6 ≥ T_5 + τ_1 we can obtain

\[
x_1'(t) \ge a(t)x_2(t,\phi) - \Bigl(b(t)+\frac{2\varepsilon_1 h_1(t)}{a_1(t)}\Bigr)x_1(t,\phi) - B(t)x_2(t-\tau_1,\phi),
\]
\[
x_2'(t) \ge B(t)x_2(t-\tau_1,\phi) - \Bigl(c(t)+\frac{2\varepsilon_1 h_2(t)}{a_2(t)}\Bigr)x_2^2(t,\phi).
\]
Let (u_1(t), u_2(t)) be the solution of (155.14) with ε = ε_1 and initial value
(x_1(T_6, φ), x_2(T_6, φ)); then

x_i(t, φ) ≥ u_i(t),  i = 1, 2, t ≥ T_6.

From the global attractivity of (x̃_{1ε_1}(t), x̃_{2ε_1}(t)), here taking
ε = ε_0/2, there exists T_7 ≥ T_6 such that

|u_i(t) − x̃_{iε_1}(t)| < ε_0/2,  i = 1, 2, t ≥ T_7.

So we have

x_i(t, φ) ≥ u_i(t) > x̃_{iε_1}(t) − ε_0/2,  i = 1, 2, t ≥ T_7.

Hence, by (155.15), we can obtain

x_i(t, φ) ≥ x_i^*(t) − ε_0,  i = 1, 2, t ≥ T_7.   (155.17)
Therefore, by (155.16) and (155.17), for t ≥ T_7 + τ_2,

\[
y_1'(t,\phi) \ge y_1(t,\phi)\Bigl[-q_1(t)+\frac{p_1(t)\bigl(x_1^*(t-\tau_2)-\varepsilon_0\bigr)}{a_1(t)+\bigl(x_1^*(t-\tau_2)-\varepsilon_0\bigr)^2}-g_1(t)\varepsilon_0\Bigr]
= u_{\varepsilon_0}(t)\,y_1(t,\phi),
\]
\[
y_2'(t,\phi) \ge y_2(t,\phi)\Bigl[-q_2(t)+\frac{p_2(t)\bigl(x_2^*(t-\tau_3)-\varepsilon_0\bigr)}{a_2(t)+\beta(t)\bigl(x_2^*(t-\tau_3)-\varepsilon_0\bigr)+\gamma(t)\varepsilon_0}-g_2(t)\varepsilon_0\Bigr]
= w_{\varepsilon_0}(t)\,y_2(t,\phi).
\tag{155.18}
\]
Integrating both sides of (155.18) from T_7 + τ_2 to t and from T_7 + τ_3 to t,
respectively, we get

\[
y_1(t,\phi) \ge y_1(T_7+\tau_2,\phi)\exp\int_{T_7+\tau_2}^{t} u_{\varepsilon_0}(s)\,ds,
\qquad
y_2(t,\phi) \ge y_2(T_7+\tau_3,\phi)\exp\int_{T_7+\tau_3}^{t} w_{\varepsilon_0}(s)\,ds.
\]
Since

\[
x_1'(t) \le a(t)x_2(t) - b(t)x_1(t) - B(t)x_2(t-\tau_1),
\qquad
x_2'(t) \le B(t)x_2(t-\tau_1) - c(t)x_2^2(t).
\]
It follows from (155.20) and (155.21) that for t ≥ max{T^{(1)} + τ_2, T^{(2)} + τ_3},

\[
m\Bigl[-q_1(t)+\frac{p_1(t)x_1(t-\tau_2)}{a_1(t)+x_1^2(t-\tau_2)}-g_1(t)\varepsilon\Bigr] < -\varepsilon_0,
\qquad
m\Bigl[-q_2(t)+\frac{p_2(t)x_2(t-\tau_3)}{a_2(t)+\beta(t)x_2(t-\tau_3)}-g_2(t)\varepsilon\Bigr] < -\varepsilon_0.
\tag{155.22}
\]
First, there exists a T^{(2)} > max{T^{(1)} + τ_2, T^{(2)} + τ_3} such that
y_i(T^{(2)}) < ε (i = 1, 2). Otherwise, by (155.22), we would have

\[
\varepsilon \le y_1(t)
\le y_1(T^{(1)}+\tau_2)\exp\Bigl\{\int_{T^{(1)}+\tau_2}^{t}\Bigl[-q_1(s)+\frac{p_1(s)x_1(s-\tau_2)}{a_1(s)+x_1^2(s-\tau_2)}-g_1(s)\varepsilon\Bigr]ds\Bigr\}
\le y_1(T^{(1)}+\tau_2)\exp\bigl\{-\varepsilon_0\bigl(t-T^{(1)}-\tau_2\bigr)\bigr\} \to 0.
\]
y_i(T^{(3)}) > ε exp{M(ε)ω},  i = 1, 2.

By the continuity of y_i(t), there must exist T^{(4)} ∈ (T^{(2)}, T^{(3)}) such
that y_i(T^{(4)}) = ε and y_i(t) > ε for t ∈ (T^{(4)}, T^{(3)}). Let P_1 be the
nonnegative integer such that T^{(3)} ∈ (T^{(4)} + P_1ω, T^{(4)} + (P_1 + 1)ω].
From (155.22), we have

\[
\begin{aligned}
\varepsilon\,e^{M(\varepsilon)\omega}
&< y_1(T^{(3)}) < y_1(T^{(4)})\exp\Bigl\{\int_{T^{(4)}}^{T^{(3)}}\Bigl[-q_1(t)+\frac{p_1(t)x_1(t-\tau_2)}{a_1(t)+x_1^2(t-\tau_2)}-g_1(t)\varepsilon\Bigr]dt\Bigr\}\\
&= \varepsilon\exp\Bigl\{\Bigl(\int_{T^{(4)}}^{T^{(4)}+P_1\omega}+\int_{T^{(4)}+P_1\omega}^{T^{(3)}}\Bigr)\Bigl[-q_1(t)+\frac{p_1(t)x_1(t-\tau_2)}{a_1(t)+x_1^2(t-\tau_2)}-g_1(t)\varepsilon\Bigr]dt\Bigr\}\\
&< \varepsilon\exp\Bigl\{\int_{T^{(4)}+P_1\omega}^{T^{(3)}}\Bigl[-q_1(t)+\frac{p_1(t)x_1(t-\tau_2)}{a_1(t)+x_1^2(t-\tau_2)}-g_1(t)\varepsilon\Bigr]dt\Bigr\}
\le \varepsilon\,e^{M(\varepsilon)\omega}.
\end{aligned}
\]
This is a contradiction. Similarly, from the second inequality of (155.22), we
have
Example 1 In system (155.3), let a(t) = 4, b(t) = 2/3,
c(t) = 9e^{−0.3}(1 − e^{−0.3}), q_1(t) = 1/4 − cos t, q_2(t) = 1/5 − cos t,
h_1(t) = 5, h_2(t) = 6, p_1(t) = 2 + cos t, p_2(t) = 3 + cos t, a_1(t) = 1,
a_2(t) = (1/4)(1 − e^{−0.3}), β(t) = 7 + 2cos t, γ(t) = 1,
τ_1 = τ_2 = τ_3 = 0.6, and let g_i(t), i = 1, 2, be arbitrary nonnegative
continuous 2π-periodic functions.
The above parameters satisfy Theorem 155.2.1, so system (155.3) is permanent and
admits at least one positive 2π-periodic solution. From Fig. 155.1 we can see
that the density restriction of the predators has a major impact on the
stability of the predator–prey system: without a crowding effect the predator
species stays at high density, while with a crowding effect it stays at low
density.
Example 2 Assume the conditions of Example 1 hold, but let
q_1(t) = 5/4 − cos t and q_2(t) = 1/2 − cos t. These parameters satisfy Theorem
155.2.3, so any positive solution of system (155.3) satisfies

lim_{t→+∞} y_i(t) = 0,  i = 1, 2.

Figure 155.2 shows that the two predators go extinct while the immature and
mature prey remain permanent.
From Theorems 155.2.1–155.2.3 we can conclude that the death rates and the
density restrictions of the two predator populations greatly influence the
dynamic behavior of the system.
Acknowledgment This paper is supported by the Natural Science Fund of Shaanxi Provincial
Education Administration Bureau (Grant No. 11JK0502) and the Doctor’s Research Fund of
Xi’an Polytechnic University.
References
156.1 Introduction
The military logistics operation process can be divided into three phases:
demand applying, material acquisition, and transportation & distribution (Zhang
et al. 2007), as shown in Fig. 156.1.
(1) Demand applying. When combat troops need materials, they submit a demand,
including material names, amounts, quality standards, deadlines and
destinations, to the military logistics command.
(2) Material acquisition. When the military logistics command receives the
combat troops' material demands, it makes a material acquisition decision
promptly after comparing the demand with inventory. If the inventory is
sufficient, it gives an outbound order to the military depot; otherwise it gives
a material procurement mission to the military procurement agency, which
purchases the needed materials from suppliers.
(3) Transportation & distribution. While the material acquisition mission is
carried out, the military logistics command decides how to distribute the
supplies. If the mission is too dangerous for a civil logistics enterprise, the
military logistics command selects the military transportation agency to
transport the supplies; otherwise it selects a civil logistics enterprise. If
the supplies are distributed to only one place, they are transported to the
destination directly; otherwise they are transported to a distribution center
first and distributed to the combat troops by the distribution center later.
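The three-phase decision logic above can be sketched as a few Python functions;
all function names and data fields here are hypothetical, used purely to make
the branching explicit:

```python
# Illustrative sketch of the decision points in the military logistics process.

def acquire(demand, inventory):
    """Phase 2: issue an outbound order if stock suffices, else procure."""
    item, amount = demand["item"], demand["amount"]
    if inventory.get(item, 0) >= amount:
        return ("outbound_order", "military_depot")
    return ("procurement_mission", "procurement_agency")

def choose_carrier(dangerous):
    """Phase 3a: dangerous missions go to the military transportation agency."""
    return "military_transport" if dangerous else "civil_logistics"

def route(destinations):
    """Phase 3b: multiple destinations go through a distribution center."""
    return "direct" if len(destinations) == 1 else "via_distribution_center"

assert acquire({"item": "fuel", "amount": 10}, {"fuel": 50})[0] == "outbound_order"
assert acquire({"item": "fuel", "amount": 99}, {"fuel": 50})[1] == "procurement_agency"
assert choose_carrier(True) == "military_transport"
assert route(["A", "B"]) == "via_distribution_center"
```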
156.4 Conclusion
References
Chen FZ, Jiang DH (2011) Design and implication on the intelligent logistics delivery system
based on internet of things. Microelectron Comput 28(8):19–21
Huang FY (2006) Design and implication of E-purchase system. J Shangqiu Vocat Tech Coll
5(5):34–36 (in Chinese)
Jiang LW (2009) Research on models and strategies of military logistics capability optimization.
Beijing Jiaotong University, pp 8–9 (in Chinese)
Li ZS (2011a) The influence of internet of things to logistics development. Logist Sci-Tech
3:77–78 (in Chinese)
Li HW (2011b) Planning and design of logistics information platform based on internet of things.
Inf Technol 9:13–16 (in Chinese)
Li YB (2011c) The research on optimizing the intelligentized logistics distribution system by
building the internet of things. Logist Eng Manage 33(7):56–57 (in Chinese)
Lin ZX (2011) Unified information system of logistics pallet circulation based on internet of
things. Sci Technol Res 7:190–201 (in Chinese)
O’hanlon ME (2009) The science of war. Princeton University Press, Princeton, pp 141–145 (in
Chinese)
Qu XL (2010) Internet of things technology-based logistics management system for emergency.
Comput CD Softw Appl 13:107–109 (in Chinese)
Ruan ZYL, Lu L (2011) Networking technology based on the logistics of emergency. Jiangsu
Commer Forum 9:60–62 (in Chinese)
Su YH (2011) Design and implementation of logistics and vehicle monitoring system based on
the internet of things. Comput Digit Eng 7:75–78 (in Chinese)
Wu XZ (2011) Analysis on the application of internet of things to the field of storage logistics and
the prospect. Chin Bus Market 6:36–39 (in Chinese)
Yang YQ, Pan H (2011) Discussion on reengineering strategies of logistics management
information system based on internet of things. Comput Mod 12:98–101 (in Chinese)
Zhang FL, Wang C, Huang J (2007) Military logistics operations model based on process. Logist
Sci-Tech 4:97–99 (in Chinese)
Zhu WH (2010) Realization of whole-process intelligent supply chain distribution service based
on the internet of things. Logist Technol 7:172–173 (in Chinese)
Chapter 157
Simulation and Optimization of the Steel
Enterprise Raw Material System
Abstract This paper uses a discrete-system simulation method to simulate the
production scheduling of a raw-material factory, verify the feasibility of the
material factory system design, find the system's weaknesses, optimize the
design and scheduling schemes, save investment, and reduce the operating cost.
With a detailed understanding of production and operation, we use system
simulation to build a material-factory simulation model; the simulation not only
provides powerful data analysis but also supports virtual-reality 3D animation.
For the optimization of belt conveyor routes, we compare the A* algorithm with a
depth-first recursive algorithm and identify the better of the two. For the yard
optimization problem, we use unfixed-type, variable-tonnage stockpiles, and use
a search mechanism to dispatch belt conveyor routes and reclaimers, combined
with an optimization module to optimize the yard. At the same time we obtain the
input of the coal yard to formulate the purchase plan.
157.1 Introduction
With the further adjustment of China’s steel industry structure, steel enterprises
gradually develop to be large-scale; the delivery and storage capacity of the material
factory has become one of the bottlenecks which restrict the production scale. The
major domestic steel enterprises have risen a new turn of energy saving and emission
L. Xu (&)
CISDI Chongqing Iron Steelmaking Plant Integration Co. Ltd., Chongqing 400013, China
e-mail: [email protected]
P. Wang S. Chen X. Zhong
Logistics Department, CISDI Engineering Co., Ltd., Chongqing 400013, China
This paper simulates a typical coal yard storing eight kinds of coking coal, two
kinds of thermal coal, two kinds of blind coal and three kinds of injection
mixed coal. There are four material strip feeders, A, B, C and D. Each strip
feeder has a track bed with two bucket-wheel stacker-reclaimers above it.
The input system mainly includes the pier input system and the rail car-dumper
input system. The main raw material is transported by sea to the pier and then
moved into the material factory by belt conveyor; the other materials are
unloaded by the train car dumper and then likewise transported by belt conveyor.

157 Simulation and Optimization of the Steel Enterprise Raw Material System 1491

The pier input system is designed as one conveyor line. The rail input system
consists of the car dumper and the relevant delivery system and is designed as
two lines.
The coking coal output system is designed as two lines that mainly transport
coking coal from the coal yard to the coking-coal blending bunker, blind coal
from the coal yard to the sintering blending bunker, and injection mixed coal
from the coal yard to the blast-furnace injection blending bunkers. The thermal
coal output system is designed as one line that mainly transports thermal coal
from the coal yard to the power plant. The blast-furnace output system and the
coking coal output system share the same transport lines.
We establish each simulation module; the resulting simulation model is shown in
Fig. 157.1. The following sections introduce the various equipment and
simulation modules.
The belt conveyor, also known as a tape machine, is a material-handling machine
that transfers material continuously along a given line. Belt conveyors can
transport horizontally, on an incline or vertically, and can be composed into
transport lines in space; the lines are generally fixed. Belt conveyors offer
large transport capacity over long distances and allow a number of process
operations during transport, so they are widely applied. The simulation software
has a standard fixed belt-conveyor module, which can be used once its parameters
are set.
A variety of bunkers provide raw material for the production system in the
raw-material system; a bunker is the end of the transport system but the
beginning of production. Each bunker's consumption rate and demand differ. The
bunker module, which contains a large amount of code, acts as a satellite
control centre in the simulation system: it calls the belt-conveyor line and the
stacker-reclaimer scheduling module according to its material level, the central
control module determines the task priority, and finally the bunker module
appoints a belt-conveyor line and a stacker-reclaimer, which then finish the
task.
The coal pile is the most important module in the simulation model and contains
the largest amount of code, because it involves not only the coal but also the
supplier; it is therefore both an active judging entity and a passive entity.
In the active state it sends a message to the port module or the railway-station
module, calls the belt conveyor and stacker-reclaimer, and then calls the
central control module to determine the task priority, finally finishing the
transport task. In the passive state it only receives the message sent by the
central control module telling it to which bunker the coal should be sent. Its
three-dimensional entity is shown in Fig. 157.3.
The role of this module is to determine which task executes first, based on
task-priority parameters; it can be a non-displayed graphical entity. The other
is the interrupt module, whose role is mainly to interrupt non-critical tasks to
free their resources; the freed resources then handle the emergency situation,
based on the number of interrupts within a certain time.
The belt-conveyor network is very complex, with many transfer stations. Some transfer stations are shared by several belt-conveyor routes, so there is more than one route from one operating point to another, and it is necessary to determine which is the shortest. Moreover, even though several routes exist in the simulation, if one or a few belt conveyors on some routes are already occupied, the shortest route must be chosen from the currently available ones. Candidate shortest-path algorithms are the Dijkstra algorithm, the A* (A-star) algorithm, and depth-first search. Dijkstra's algorithm is a special case of the A* algorithm, and also the least efficient case. If we just need to find some path, depth-first search can quickly find a route and jump out of the loop; however, if it must search for all routes and then compare all the paths, its efficiency is very low. In the simulation model each task needs to call the path-search algorithm, so a low-efficiency shortest-path algorithm slows down the whole simulation. The A* algorithm is a heuristic search (Manuel and Johan 2004; Dong et al. 2003; Chow 1990; Scholl 1999; Zhao et al. 2000); it uses best-first search to find a least-cost path from a given initial node to a goal node, and uses a distance-plus-cost heuristic function to determine the order in which the search visits nodes in the tree. A* can be implemented efficiently: roughly speaking, no node needs to be processed more than once (Clymer and Cheng 2001; Solow 2005). If we package A* in a general function, calling this function returns a belt-conveyor route. As shown in Fig. 157.4, through this route table the coal entities can reach their destinations.
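A sketch of such a route search, written as a textbook A* over a small invented graph of transfer stations (the stations, distances and coordinates are toy data; occupied conveyors would simply be removed from the graph before searching):

```python
import heapq

def a_star(graph, coords, start, goal):
    """A*: best-first search ordered by f = g (cost so far) + h (heuristic)."""
    def h(n):  # straight-line distance between station coordinates
        (x1, y1), (x2, y2) = coords[n], coords[goal]
        return ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5

    open_set = [(h(start), 0.0, start, [start])]
    closed = set()
    while open_set:
        f, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        if node in closed:            # each node processed at most once
            continue
        closed.add(node)
        for nxt, w in graph.get(node, []):
            if nxt not in closed:
                heapq.heappush(open_set, (g + w + h(nxt), g + w, nxt, path + [nxt]))
    return None

# Toy transfer-station network: station -> [(neighbour, conveyor length)]
graph = {"yard": [("T1", 2), ("T2", 5)], "T1": [("T2", 2)], "T2": [("bunker", 1)]}
coords = {"yard": (0, 0), "T1": (2, 0), "T2": (4, 0), "bunker": (5, 0)}
print(a_star(graph, coords, "yard", "bunker"))  # ['yard', 'T1', 'T2', 'bunker']
```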
Each strip feeder has a certain number of stock piles. For optimization, each stock pile is given a unique number and a tonnage; a consumer that wants to find the right stock pile by its unique number must search the storage-yard module. In other words, changing the unique number and tonnage of a stock pile in the storage-yard module means changing the stock-pile material. By setting the unique-number and tonnage constraints and the objective function in the optimization module and running the simulation, we obtain the best storage layout, as shown in Fig. 157.5.
We can also obtain the maximum, minimum and average inventory of each bunker and stock pile, and hence the minimum safe stock needed to reduce costs (Table 157.2).
Obtaining the input information of the storage yard by simulation makes it possible to draw up the procurement plan earlier, and management staff can then work out the corresponding purchasing and transport plan. Table 157.3 shows the input information of the storage yard.
Finally, by means of simulation technology we can quantitatively evaluate and analyse the material factory and optimize its design and scheduling. This paper also shows that simulation technology can be used to build models of stochastic, complex, large-scale production scheduling systems. If we change the parameters and strategy, we obtain the simulation results quickly, and by using the optimization function of the simulation software we can obtain the optimal configuration of the system resources.
References
Chow W (1990) Assembly line design methodology and applications. Marcel Dekker, New York
Clymer JR, Cheng DJ (2001) Simulation-based engineering of complex adaptive systems using a
classifier block. In: The 34th Annual Simulation Symposium, Seattle, WA, pp 243–250,
22–26 April 2001
Dong J, Xiao T, Zhao Y (2003) Application of genetic-Tabu Search Algorithm in sequencing
mixed-model assembly lines industrial engineering and management. Ind Eng Manag
8(2):14–17
Hopp WJ, Spearman ML (2002) Factory physics—foundations of manufacturing management.
Tsinghua University Press, Beijing
Manuel L, Johan M (2004) Business processing modeling, simulation and design. Prentice Hall,
New Jersey
Scholl A (1999) Balancing and sequencing of assembly lines. Physica-Verlag, Heidelberg
Solow D (2005) On the challenge of developing a formal mathematical theory for establishing emergence in complex systems. Complexity 6(1):49–52
Sun H, Xu L (2009) Optimization of scheduling problem for auto mixed model assembly line. In:
Proceedings of the first international workshop on intelligent systems and applications,
2009.5, vol 3, pp 2015–2017
Sun H, Xu L (2009) Simulation and optimization for noshery service system. In: Proceedings of
the first international symposium on information engineering and electronic commerce,
2009.5, pp 721–723
Zhao W, Han W, Luo Y (2000) Scheduling mixed-model assembly lines in JIT production
systems. J Manag Sci China 3(4):23–28
Chapter 158
Comprehensive Evaluation
and Optimizing for Boarding Strategies
Abstract In this paper we focus on airlines' need to reduce boarding time. Existing research has been devoted to designing boarding routines and studying the boarding strategies in existence. A model based on cellular automata is developed to calculate the integrated boarding time, and it confirms that the Reverse-Pyramid scheme is one of the most effective strategies. Aiming at an optimal boarding strategy, this paper combines a new evaluating criterion with further analysis of Reverse-Pyramid and concludes that a Reverse-Pyramid strategy that is divided into 5 groups and has more groups with a particular seat proportion is the best. The present paper thus addresses the neglect, in existing research, of passengers' satisfaction and of the time spent organizing the queue before boarding, and finally gives some recommendations to airlines.
Keywords Aircraft boarding · Aisle interference · Cellular automata · Evaluating criterion
158.1 Introduction
How much time is usually required for the necessary tasks once the airplane has landed, namely departure preparation, fuelling, baggage loading and unloading, catering, and passenger boarding? According to reports from Boeing, passenger boarding is the most time-consuming task, taking around 60 % of the total (Capelo et al. 2008).
Because the plane earns money for the airline only when it is in motion, reducing the boarding time helps not only to arrange more scheduled flights but also to
158.2.1 Model
158.2.1.1 Assumptions
(1) Interferences are the major cause of waiting. There are two main kinds: aisle interference and seat interference. Aisle interference means that a passenger stowing baggage delays the passengers who follow him. For the second kind, when a passenger wants to settle into the window seat, he may block other passengers in the same row; we call this 'seat interference'. In the model we ignore seat interference, because much research has made clear that the Outside-In strategy beats the Back-To-Front and random strategies mainly because it avoids seat interference. Building on that work, this paper mainly studies strategies that themselves avoid seat interference, so seat interference does not influence the results.
(2) Assumptions for the passengers: they neither take another passenger's seat nor miss their own, and they obey our boarding arrangement. The paper also does not consider passengers arriving late.
(3) Assumptions for the plane: the boarding gate is at the front of the cabin, and the business-class and first-class seats are far fewer than the economy-class seats, which allows us to consider only the boarding of economy-class passengers.
Name: Outside-In
Advantages: totally avoids seat conflicts and makes full use of the gangway space for placing luggage
Name: Reverse-Pyramid
Advantages: has the advantages of the former two strategies
Because we want to describe individual behaviour, we decompose the cabin into many units, as in Fig. 158.1. The figure follows these assumptions: (1) all seats are treated as the same size, and each seat is one unit, with the units arranged very closely; (2) the gangway has the same width as a seat and only allows passengers to stand in single file; (3) the only entrance is at the front of the economy class. The figure shows a small plane with a capacity of 100 passengers.
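In the spirit of this setup, a minimal one-aisle cellular automaton can be sketched as follows. The 4-row cabin and the 3-step stowing delay are invented for illustration; each time step, passengers advance one aisle cell unless blocked, and a passenger at his own row spends the stowing delay before sitting down.

```python
def board(queue, stow_delay=3):
    """One-aisle cellular automaton: returns time steps until all are seated.
    queue lists each passenger's target row, front of the queue first."""
    aisle = {}                        # aisle cell -> (target row, stow steps left)
    pending = list(queue)
    t = 0
    while pending or aisle:
        t += 1
        # update from the back so a cell vacated this step can be refilled
        for pos in sorted(aisle, reverse=True):
            row, stow = aisle[pos]
            if pos == row:            # at own row: stow luggage, then sit
                if stow > 1:
                    aisle[pos] = (row, stow - 1)
                else:
                    del aisle[pos]    # seated, leaves the aisle
            elif pos + 1 not in aisle:
                del aisle[pos]        # step one cell towards the target row
                aisle[pos + 1] = (row, stow)
        if pending and 1 not in aisle:
            aisle[1] = (pending.pop(0), stow_delay)   # next passenger enters
    return t

# One passenger per row in a 4-row cabin: back-to-front vs front-to-back order
print(board([4, 3, 2, 1]), board([1, 2, 3, 4]))  # 7 16
```

Even this tiny grid shows the aisle interference the model is built around: the front-to-back order forces each passenger to wait behind the one stowing luggage.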
In the formula, T_bag is the time used for placing luggage; n_bin is the number of bags already placed in the luggage bin; N_lug is the number of bags to be put into the bin; c is the capacity of the luggage rack for one row of seats; and k is a correction coefficient. According to Shang et al. (2010), c = 4 and k = 20 (Trivedi 2002; Kiwi 2006).
In this hyperbolic model, let ΔT stand for the unit time of placing luggage and n_total for the occupancy of the luggage rack for one row of seats. Then ΔT = f(n_total) is a hyperbola (see Fig. 158.2): as the spare room for suitcases becomes smaller, the time for placing luggage grows, and when n_total > 4 it approaches infinity (Bohannon 1997).
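The paper does not reproduce the stowing-time formula itself; one simple hyperbolic form consistent with the description (unit time growing as the rack fills and diverging beyond c = 4, with k = 20) would be the following, offered purely as an illustrative assumption:

```python
# Assumed hyperbolic stowing-time model: NOT the paper's exact formula.
C = 4    # luggage-rack capacity per row of seats (Shang et al. 2010)
K = 20   # correction coefficient

def unit_stow_time(n_total):
    """Unit time dT to place one bag when n_total bags already occupy the rack."""
    if n_total >= C:
        return float("inf")      # rack full: time "approaches infinite"
    return K / (C - n_total)     # grows hyperbolically as spare space shrinks

print([round(unit_stow_time(n), 1) for n in range(5)])
# [5.0, 6.7, 10.0, 20.0, inf]
```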
But not every passenger brings luggage, and those who do bring different numbers of bags. The paper refers to the report from 'Data 100 Market Research' and obtains the probability for every passenger as in Table 158.2 (Merry 1998).
158.2.2 Results
For each configuration we run the simulation 200 times and average the results to obtain the final result. We also calculate the root-mean-square deviation of every result and find that all the deviations are smaller than 5 % of the corresponding results, so we can say the results are credible (Fig. 158.3).
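The averaging and credibility check can be sketched as follows; the lambda below stands in for one run of the boarding simulation.

```python
import random

def credible_mean(simulate, runs=200, tol=0.05):
    """Average repeated runs; flag the mean credible if RMS deviation < tol * mean."""
    xs = [simulate() for _ in range(runs)]
    mean = sum(xs) / runs
    rmsd = (sum((x - mean) ** 2 for x in xs) / runs) ** 0.5
    return mean, rmsd < tol * mean

# Stand-in for one boarding run: total steps around 196.5 with some spread
random.seed(0)
mean, ok = credible_mean(lambda: random.gauss(196.5, 4.0))
print(round(mean, 1), ok)
```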
Comparing the results of strategies 1, 2 and 3, we see that strategy 3 is the best because its boarding steps and waiting steps are the fewest. Strategy 1 is similar to Back-To-Front, strategy 2 to Outside-In, and strategy 3 to Reverse-Pyramid, so we can see the advantage of Reverse-Pyramid. Comparing strategies 4, 5, 6 and 7 gives the same conclusion. We therefore conclude that Reverse-Pyramid is the best in terms of total steps and waiting steps, and we concentrate on the Reverse-Pyramid strategy.
(Seat-group layout diagrams around the gangway are omitted here; the recoverable simulation results for strategies 1–7 are:)

No.  Boarding steps  Waiting steps
1    196.5           42.5
2    190             39
3    185             36
4    199             38.5
5    201             45
6    194             42
7    189             37
The larger the number of seat groups in the aircraft, the longer it takes to organize the queue before boarding. Most studies do not explain how they chose the number of groups. The present paper calculates the financial loss caused by the time spent organizing the queue and by the waiting time, and uses this economic indicator to find the optimal number.
First we assume that all 100 passengers sit in the waiting hall in a single line. Different strategies require different queue orders. For example, if a strategy requires everyone to board in a fixed order, that is, the number of groups is 100, the queue must be formed one by one in that order; if a strategy lets passengers board randomly, each passenger simply stands at the position in the waiting hall closest to his seat. We then compute the total average number of steps used to organize the queue for different numbers of groups. Some results are shown in Table 158.3.
We weight the steps used to organize the queue the same as the average waiting steps, because both kinds of steps reflect passenger satisfaction. According to the literature (Li 2010) and the Civil Aviation Act, Air China flew 736,770 hours in 2007 with retained profits of 3,773 million RMB, and an airline pays every passenger 200 RMB for a 4-h delay. From the ratio of these amounts we obtain the ratio of a waiting step to a boarding step, 1/25.605. With these data we can calculate the optimal number of groups: 5 is the optimal number, and some results are given in Table 158.4.
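The 1/25.605 ratio can be checked by simple arithmetic: the retained profit per flying hour is 3,773 million RMB / 736,770 h, about 5,121 RMB per hour, and 200 RMB divided by that figure gives about 1/25.605.

```python
profit_per_hour = 3_773_000_000 / 736_770    # retained profit (RMB) per flying hour
delay_payment = 200                          # RMB paid per passenger for a 4-h delay
ratio = delay_payment / profit_per_hour      # waiting step : boarding step
print(round(profit_per_hour), round(1 / ratio, 3))  # 5121 25.605
```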
All the results in Sect. 158.3 are for the Reverse-Pyramid strategy.
Comparing the four strategies in Fig. 158.4, we find that strategy 13 is the best. All four are Reverse-Pyramid strategies; they differ in the shape of the second, third and fourth groups. The second group contains some window seats and some aisle seats, and changing the proportion of the two kinds of seats changes the results; the same holds for the third and fourth groups. Comparing these four strategies, we infer that it is better to arrange more groups with a proportion of approximately 7/3 (7 window seats to 3 aisle seats).
(The 'how to divide the group' layout diagrams around the gangway are omitted here; the recoverable results for strategies 11–14 are:)

No.  Boarding steps  Waiting steps
11   209             44.9
12   207.5           44.7
13   205.5           44.2
14   209.5           45.4
158.5 Conclusion
The present paper builds a simulation model to calculate the total boarding steps, waiting steps and organizing steps. The model first compares three boarding strategies and finds that the best is Reverse-Pyramid; since this agrees with other studies, it supports the reliability of the model. A new way of evaluating boarding strategies then shows that dividing passengers into 5 groups is the best choice, and finally an optimal Reverse-Pyramid strategy is given. The paper recommends that, for a structure similar to Fig. 158.1, airlines use a Reverse-Pyramid strategy divided into 5 groups that arranges more groups with a proportion of approximately 7/3. For other types of planes, one method is to divide a bigger plane into structures like Fig. 158.1; another is to change the parameters in Fig. 158.1 to fit the particular type. Both methods are easy to achieve.
References
Bohannon RW (1997) Comfortable and maximum walking speed of adults aged 20–79 years: reference values and determinants. Age Ageing 26:15–19
Capelo E, de Castro Silva JL, van den Briel MHL, Villalobos JR (2008) Aircraft boarding fine-
tuning. In: XIV international conference on industrial engineering and operations
management
Ferrari P, Nagel K (2005) Robustness of efficient passenger boarding strategies for airplanes.
Transp Res Board 1915:44–54
Kiwi M (2006) A concentration bound for the longest increasing subsequence of a randomly
chosen involution. Discrete Appl Math 154(13):1816–1823
Li X (2010) Airlines flight delays analysis. Friends Account 2(2):41–43
Menkes HL, Briel VD, Villalobos JR, et al (2005) America West Airlines develops efficient
boarding strategies. Interface 35(3):191–201
Merry MM (1998) The role of computer simulation in reducing airplane turn time. Aero
Magazine 1
Shang HY, Lu HP, Peng Y (2010) Aircraft boarding strategy based on cellular automata. J Tsinghua Univ (Sci & Technol) 50(9)
Trivedi KS (2002) Probability and statistics with reliability, queuing and computer science.
Wiley, New York
Van Landeghem H, Beuselinck A (2002) Reducing passenger boarding time in airplanes: a
simulation based approach. Eur J Oper Res 142:294–308
Chapter 159
Modeling for Crime Busting
The present paper investigates a conspiracy to commit a criminal act. We know that a conspiracy is taking place to embezzle funds from the company and to use internet fraud to steal funds from credit cards of people who do business with the company. All we know is that there are 83 people, 400 messages (sent among the 83 people), 15 topics (3 of which have been deemed suspicious), 7 known conspirators, 8 known non-conspirators, and three managers in the company.
159.2.1 Step1
Our goal is to obtain a table describing the suspicious degree of the different messages, from which we can derive a preliminary priority list.
We consider that every message connects two workers. According to the suspicious degree of the message's topic, we assign a reasonable weight to each worker; the weight depends not only on the suspicious degree of the message's topic but also on the suspicious degrees of the speaker and the listener.
In Table 159.1, 'c.' means conspirator. This table gives the value of Qm used in formula (159.1); the subscript 'm' is the number of a message. As mentioned in the sensitivity-analysis section, '(A#)' stands for the number of a cell in the table. This table is one of the foundations of the model: Qm is decided by three factors, the speaker, the listener and the topic of message m.
$Y_i = \sum_{m=0}^{N} R_{im} Q_m \qquad (159.1)$
'Yi' is an intermediate variable for worker i, used in formula (159.2). 'N' is the total number of messages, here N = 400; 'm' is the number of a message; 'Qm' is the weight contributed by message m, given by Table 159.1. Rim encodes the relation between message m and worker i: Rim = 1 if message m is sent from or to worker i, and Rim = 0 otherwise. At first we use 'Yi' to represent the suspicious degree of every worker.
The primary priority list is the result, obtained by traversing all messages with Matlab.
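Formula (159.1) amounts to adding, for each worker, the weight Qm of every message he sends or receives. A minimal sketch (the message tuples and Q values are invented toy data; the paper takes Qm from Table 159.1):

```python
def suspicion_y(messages, n_workers):
    """Y_i = sum over messages m of R_im * Q_m, where R_im = 1
    iff worker i is the sender or the receiver of message m."""
    y = [0.0] * n_workers
    for sender, receiver, q_m in messages:
        y[sender] += q_m          # the message touches the sender ...
        y[receiver] += q_m        # ... and the receiver
    return y

# (sender, receiver, Q_m): toy data standing in for the 400 messages
msgs = [(0, 1, 2.0), (1, 2, 0.5), (0, 2, 1.0)]
print(suspicion_y(msgs, 3))  # [3.0, 2.5, 1.5]
```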
It is obvious that there are two special points, (20, 12) and (56, 58). Our goal is to distinguish conspirators from non-conspirators, so in theory there should be one special point dividing the figure into two parts. We are therefore not satisfied with this figure; there must be other factors we have not taken into consideration.
159.2.2 Step2
Our goal is to optimize the priority figure obtained in step 1 using the method of weighted averages.
We think 'Yi' (formula (159.1)) is not a reliable measure of the suspicious degree. Through a formula we create, we evaluate the suspicious degree of each worker by a weighted average.
$W_i = \dfrac{\sum_{k=1}^{M} Sd(k)\, A_i(k)}{A_i \; Sd_{\max}} \, Y_i \qquad (159.2)$
'Wi' is the crime-suspicion degree of worker i; 'k' is the number of a topic; M is the total number of topics, here M = 15; 'Yi' is given by formula (159.1). Sd(k) stands for the linguistic influence of topic k. Before step 4, Sd(k) = 1 when topic k is not suspicious and Sd(k) = 2 when it is; in step 4 we optimize the value of Sd(k) and finally give different values for different topic numbers. Sdmax is defined by
$Sd_{\max} = \max\{Sd_1, Sd_2, \ldots, Sd_M\} \qquad (159.3)$
Ai is the total number of text topics worker i is involved in. For example, if David receives only one message, sends none, and that message contains three text topics, then A_David = 3. Ai(k) is the number of occurrences of topic k for worker i. That is to say,
$A_i = \sum_{k=1}^{M} A_i(k) \qquad (159.4)$
We obtain Wi, the crime-suspicion degree of worker i, by searching all messages, and then obtain the priority list by ranking Wi.
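Formula (159.2), together with (159.4), can be sketched directly; the per-worker topic counts below are invented toy data.

```python
def suspicion_w(y, a, sd):
    """W_i = (sum_k Sd(k) * A_i(k)) / (A_i * Sd_max) * Y_i.
    y:  Y_i from formula (159.1)
    a:  a[i][k] = A_i(k), occurrences of topic k for worker i
    sd: sd[k] = Sd(k), suspicious degree of topic k"""
    sd_max = max(sd)
    w = []
    for yi, ai_k in zip(y, a):
        a_i = sum(ai_k)                               # formula (159.4)
        num = sum(s * c for s, c in zip(sd, ai_k))
        w.append(num / (a_i * sd_max) * yi if a_i else 0.0)
    return w

sd = [1, 2, 1]                    # topic 1 is suspicious: Sd = 2
y = [3.0, 2.5]
a = [[1, 1, 0],                   # worker 0: one plain and one suspicious topic
     [2, 0, 0]]                   # worker 1: only plain topics
print(suspicion_w(y, a, sd))      # [2.25, 1.25]
```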
Figure 159.2 is better than Fig. 159.1 because there is only one inflection point, but the slopes on the two sides of the inflection point do not differ greatly. We want to optimize the priority figure further in step 3.
159.2.3 Step3
Our goal is to optimize the improved priority figure from step 2 using the method of iteration.
We find a disadvantage of steps 1 and 2: we did not consider the differences among the unknown workers. In fact, some of the unknown workers are criminals and others are not, so they should be treated differently; different unknown workers have different influences on other workers' 'Wi'. We take these different influences into account in step 3 to optimize our priority list.
In Fig. 159.1, the workers at points (1, 3) and (68, 93) are treated differently in this step: we treat the (68, 93) worker as a known criminal and the (1, 3) worker as an innocent, and apply the method of step 1 again, obtaining another priority list.
But we think it is unconvincing to simply designate two unknown workers as a criminal and an innocent each time, because in Fig. 159.1 the (68, 93) worker differs clearly in crime-suspicion degree from its neighbour, while the (1, 3) worker differs only slightly. So we fit a curve to the 67 points and use the slopes of the two end tangents to decide how many unknown workers should be treated as criminals and as innocents. Repeating this continually, we finally place all unknown workers into two categories, criminals and innocents, and obtain a priority list (Fig. 159.3).
159.2.4 Step4
Our goal is to consult the literature on text-analysis methods and use them to optimize our priority figure.
We define 'Sd(k)', the suspicious degree of topic k, and consider that topics discussed more frequently within the group of conspirators should receive a larger value of 'Sd(k)'. We use formulas (159.5) and (159.6) to describe 'Sd(k)':
$Sd(k)_{i+1} = \big(u(k)_i - u_{ave,i}\big) \cdot 10 \cdot \dfrac{Sd_{\max,i}}{u_{\max,i}} + Sd(k)_i \qquad (159.5)$

$u(k) = \dfrac{p(k)_h / h}{p(k)_j / j} \qquad (159.6)$
'p(k)h' is the number of times topic k is discussed by criminals; 'h' is the number of criminals; 'j' is the total number of people, j = 83. u(k) describes the relative frequency of topic k in the conversations of criminals. u_max,i = max{u(k)_i}, u_ave,i = average{u(k)_i}, Sd_max,i = max{Sd(k)_i}. In step 3 we repeatedly designate unknown workers as criminals or innocents, so we set up a loop in Matlab; the subscript 'i' is the number of the iteration.
We use the text-analysis idea to value Sd(k) and thereby change 'Wi' (the crime-suspicion degree of worker i). Taking into account the influence of the text-analysis method on 'Sd(k)', we obtain the final priority figure with linguistics, and from it our final priority list (Fig. 159.4).
159.2.5 Step5
In step 5, our goal is to locate a discriminating line that distinctly categorizes the unknown workers, using ideas from cluster analysis and the method of hypothesis testing.
We define a variable 'AW1' to describe the degree of conspiring of the conspirator group; 'AW1' is defined by formula (159.7):
$AW1_x = \Big(\sum_{i=x}^{67} W_i + W_{83}\,K\Big) \Big/ (83 - x) \qquad (159.7)$
'AW1' is the average weight of the conspirator group and stands for its degree of conspiring. 'x' is the abscissa of the point at which the discriminating line is located; 'Wi' is defined in formula (159.2); 'K' is the number of known conspirators. We take the suspicion degrees ('Wi') of all known conspirators to be equal, with the value of the rightmost point in Fig. 159.4.
(a) We draw Fig. 159.5, which shows how 'AW1x' changes as 'x' grows.
(b) Similarly, we define 'AW2' to describe the degree of conspiring of the non-conspirator group, using formula (159.8):
$AW2_x = \Big(\sum_{i=1}^{x} W_i + W_L\,L\Big) \Big/ (L + x) \qquad (159.8)$
We finally give different discriminating lines and evaluate the solutions according to the probabilities of type I and type II errors, to fit the different requirements of the police.
The type I error in our model is letting a conspirator get away with the crime. We describe its probability, 'P1%', by formula (159.9):
$P_1\% = \left(1 - \dfrac{\int_{x}^{x_2} f(x)\,\mathrm{d}x}{\int_{x_1}^{x_2} f(x)\,\mathrm{d}x}\right) \times 100\,\% \quad (x_1 \le x \le x_2) \qquad (159.9)$
f(x) is the function of the curve fitted in Fig. 159.5; x1 and x2 are the abscissae of the leftmost and rightmost points.
The type II error in our model is treating a non-conspirator as a conspirator. We describe its probability, 'P2%', by formula (159.10):
$P_2\% = \left(1 - \dfrac{\int_{x_1}^{x} g(x)\,\mathrm{d}x}{\int_{x_1}^{x_2} g(x)\,\mathrm{d}x}\right) \times 100\,\% \quad (x_1 \le x \le x_2) \qquad (159.10)$
g(x) is the function of the curve fitted in Fig. 159.6. We find that P2%, the probability of a type II error, decreases as 'x' increases.
If we change the value of 'x', the probabilities of the two types of error change too. We find that when x = 55 the value of P1% + P2% is the smallest, so we recommend that the police locate the discriminating line at the point whose abscissa is 55 in Fig. 159.4 (Li and Zhu 2008; Guo and Zhu 2005).
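The trade-off in step 5 can be sketched numerically: given fitted curves f and g, compute P1% and P2% by formulas (159.9) and (159.10) with a trapezoidal integral and pick the x minimizing their sum. The Gaussian-shaped f and g below are invented stand-ins for the curves fitted in Figs. 159.5 and 159.6, so the optimum found here differs from the paper's x = 55.

```python
import math

def p_errors(f, g, x1, x2, x, n=2000):
    """P1% and P2% from formulas (159.9) and (159.10), trapezoidal integration."""
    def integral(h, a, b):
        step = (b - a) / n
        return sum((h(a + i * step) + h(a + (i + 1) * step)) / 2 * step
                   for i in range(n))
    p1 = (1 - integral(f, x, x2) / integral(f, x1, x2)) * 100
    p2 = (1 - integral(g, x1, x) / integral(g, x1, x2)) * 100
    return p1, p2

# Invented stand-ins for the fitted curves: conspirator scores cluster high
f = lambda x: math.exp(-((x - 60) ** 2) / 50)   # conspirator-group curve
g = lambda x: math.exp(-((x - 45) ** 2) / 50)   # non-conspirator-group curve
best = min(range(40, 68), key=lambda x: sum(p_errors(f, g, 40, 67, x)))
print(best)
```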
159.2.6 Step6
We try to find the boss of the crime group using the concept of point centrality from social network analysis (Ma and Guo 2007).
We know that Jerome, Dolores and Gretchen are the senior managers of the company. If one, two, or all three of them are in the list of conspirators, we can credibly conclude that the leader or leaders come from this group of three managers (Estevez et al. 2007; Santos et al. 2006; Kiss et al. 2006).
If none of them is in the primary priority list, the situation becomes more complex. Assuming the crime group is isolated, that is, it has little connection to the outside, we can focus only on its members. From the previous work we can obtain the criminal topics and their Sd(k), so we can calculate everyone's point-centrality weight with the same formula as (159.2). If someone's weight is much higher than the others', we can be confident that he is the leader (Klovdahl et al. 1994; Klovdahl 1985; Peiris et al. 2003; Svoboda et al. 2004).
In our models, the weights in Table 159.1 and the values of Sd(k) were defined by ourselves through perceptual knowledge and some experiments on the Investigation EZ example. That is to say, the weights and Sd(k) have no fixed standard, so it is necessary to check how they affect our results.
There are 18 weights in our models, named A1, A2, A3, A4, A5 ... A18, which can be seen in Table 159.1. We choose A3 and A16 at random. With the previous values, we obtain the following priority list (expressed by the codes of the unknown workers):
3, 32, 15, 37, 17, 40, 10, 81, 34, 22, 31, 13 ……
First, changing A3 from 4 to 5, we obtain:
3, 32, 15, 37, 40, 17, 10, 81, 4, 34, 31 ……
Then, changing A16 from 1 to 2, we obtain:
3, 32, 15, 37, 17, 40, 10, 81, 34, 22, 31, 13 ……
The basic value of Sd(k) is defined to be 1 when topic k has nothing to do with crime; otherwise the topic is suspicious. The problem is that we did not define the maximum initial value, which influences the results of formula (159.2); in our models we defined it as 2. Now we analyse whether a small change of this value affects our results.
3, 32, 17, 15, 10, 37, 81, 40, 22, 16, 34, 4, 44 ……
Observing these results carefully, we conclude that the results are not sensitive to changes in the weights or in the maximum initial value of Sd(k). We can say that our models behave well under sensitivity analysis.
Now we evaluate the model, because we do not know whether it is stable and accurate. We suspect that our model depends on the initial conditions; if it depended on them tightly, it would not be trustworthy, because no one can guarantee the initial conditions. So we show how the model's results change when the initial conditions change, considering only the conspirators at the top of the priority list. These are some extreme conditions:
Condition 1. Set the initial conditions as normal; this result is the baseline.
Condition 2. Assume we cannot identify the conspirators.
Condition 3. Assume we cannot identify the non-conspirators.
Condition 4. Assume we cannot identify any of them.
Condition 5. Assume we can identify only some of the conspirators, such as 7, 18, 21, 43.
Condition 6. Assume we can identify only some of the non-conspirators, such as 0, 2, 48, 64 (Marsden 2002; Newman 2003).
From the analysis of Table 159.2 we draw some conclusions: (1) the initial conditions about known and unknown conspirators affect the results, but the effect is tolerable; (2) the more accurate the initial conditions, the faster and more accurate the results; (3) more initial conditions mean more accuracy and less time (especially for a large database), but also more effort; (4) our model is very stable, so it can be used widely and shows strong adaptability (Anderson and May 1992).
References
Anderson RM, May RM (1992) Infectious diseases of humans: dynamics and control. Oxford
University Press, Oxford
Estevez PA, Vera P, Saito K (2007) Selecting the most influential nodes in social networks.
Proceedings of international joint conference on neural networks. [s.n.], Orlando, FL, USA,
pp 12–17
Guo L, Zhu Y (2005) Application of social network analysis on structure and interpersonal
character of sports team. 2005 China Sports Sci Technol 41(5):10–13
Kiss C, Scholz A, Bichler M (2006) Evaluating centrality measures in large call graphs.
Proceedings of the 8th IEEE international conference on e-commerce technology and the 3rd
160.1 Introduction
Analysing existing E-Learning systems, we can easily find a common phenomenon: current systems are often web-based information-technology ''boilerplate'', where text-based teaching material, teaching practice and related methods are simply posted on the Internet. This indifferent way of teaching lacks personalized guidance, which we generally call ''emotional deficit'' (Zhang 2009). On the importance of emotions in E-Learning, Professor Kerry O'Regan of the University of Adelaide surveyed distance-learning students and found that emotion was the key to networked learning and an essential factor in the teaching and learning process (O'Regan 2003). In addition, according to psychological studies, emotional factors have an important impact on learning behaviours (Su and Xu 2009).
J. Wu (&) W. Wang
Information Engineering Institute Capital Normal University, Beijing 100048, China
e-mail: [email protected]
The support vector machine (Cortes 1995) was designed to solve nonlinear small-sample learning and classification problems. On the one hand, it overcomes the fact that the least-squares method is too simple to separate complex nonlinear classes; on the other hand, the support vector machine has good classification ability where neural networks suffer from overfitting and underfitting. The most critical issue in SVM technology is the selection of the kernel function: different kernel functions have a great impact on the classification results. There are several approaches to selecting the kernel function in practical problems. The first uses expert prior knowledge; the second is the cross-validation method, experimenting with different kernel functions and parameters during selection; the third uses the mixed-kernel method (proposed by Smits et al.), which combines different kernel functions to obtain better performance, the basic idea of the mixed kernel. On the whole, the parameter-selection problem is in essence an optimization problem.
In this paper, the main advantage of the SVM algorithm, classifying training data, is used to model facial-expression characteristics, and good experimental results have been obtained. In this research we use the libsvm toolbox (http://www.csie.ntu.edu.tw/~cjlin/libsvm/index.html) in Matlab, developed by Professor Lin Chih-jen of National Taiwan University. Its aim is to provide a simple, easy-to-use support vector machine (SVM) package for pattern recognition and regression. The software provides a compiled version not only for Microsoft Windows but also for other operating systems; the executable is open source, making it easy for others to improve and modify. The software can solve C-support vector classification (C-SVC), nu-support vector classification (nu-SVC), one-class SVM (distribution estimation), epsilon-support vector regression (epsilon-SVR), nu-support vector regression (nu-SVR) and other problems, including multi-class problems solved with the one-against-one algorithm. For SVMs applied to pattern recognition or regression, the international scientific community has not yet formed a unified view on the choice of parameters and kernel function. This means that optimal SVM parameters can only be selected using previous experience, comparative experiments, large-scale search, or the package's cross-validation function, or by other optimization algorithms such as genetic algorithms (Kang et al. 2011), particle swarm optimization (PSO) (Chen and Mei 2011) and cat swarm optimization (CSO) (Wang and Wu 2011). In this paper, owing to the complexity of the experimental constraints, parameters were selected only from expert knowledge and experience.
Owing to the complexity of the human face, this study proposes three main concepts: aversion degree, cheer degree and pleasure degree. Aversion degree is based on locating the face area and the interpupillary distance. Measuring the face and the pupils determines whether learners are interested in the current content during the learning process. Under normal circumstances, a larger detected face area and interpupillary distance mean that the learner is leaning forward and is relatively interested in the learning content, so the aversion degree is larger; conversely, smaller values mean that the learner is leaning back and is not interested in the content, or is even bored, so the aversion degree is smaller. Similarly, cheer degree describes and judges the extent of cheer from the detected variation of the eye spacing, and pleasure degree detects the degree of pleasure during learning through the upturned angle of the mouth.
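As a rough illustration of how the three degrees described above could be computed from detected face measurements, consider the sketch below. The normalization baselines and thresholds are our own assumptions, not values from the paper:

```python
# Illustrative sketch only: the baselines and normalization constants
# are assumptions, not values taken from the paper.

def aversion_degree(face_area, pupil_dist, base_area, base_dist):
    """Larger face area / interpupillary distance (leaning forward)
    gives a larger aversion degree (more interest in the content)."""
    return 0.5 * (face_area / base_area + pupil_dist / base_dist)

def cheer_degree(eye_spacing, base_spacing):
    """Cheer degree from the variation of the detected eye spacing."""
    return eye_spacing / base_spacing

def pleasure_degree(mouth_angle_deg, max_angle_deg=30.0):
    """Pleasure degree from the upturned angle of the mouth corners,
    clipped to [0, 1]."""
    return max(0.0, min(1.0, mouth_angle_deg / max_angle_deg))

if __name__ == "__main__":
    # Leaning forward relative to the baseline -> aversion degree > 1.
    print(aversion_degree(12000, 64, 10000, 60))
    print(pleasure_degree(15))  # 0.5
```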
To verify the stability of the data, learner A and learner B were tested every 60 s over two hours in a normal learning state. Because of space constraints, only the analysis of the face area and interpupillary distance data is cited; the other measures behave similarly.
The detected sample data (face area and interpupillary distance) are indeed concentrated within a certain range. Accordingly, we propose a hypothesis: if enough face area and interpupillary distance data are collected through continuous testing, they can be expected to follow a normal distribution.
160 Personalized Emotion Model Based on Support Vector Machine 1523
If that is true, we can use the range already mentioned to detect whether the current learner is in a normal learning state. Figure 160.1 shows the test results on the data sample, demonstrating that they meet this assumption.
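One simple way to check such a normality hypothesis on a sample (not necessarily the test used in the paper) is the Jarque-Bera statistic, which combines sample skewness and kurtosis; it is near zero for normally distributed data and grows as the sample departs from normality:

```python
import random

def jarque_bera(xs):
    """Jarque-Bera statistic: JB = n/6 * (S^2 + (K - 3)^2 / 4),
    where S is the sample skewness and K the sample kurtosis.
    Values near 0 are consistent with a normal distribution."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    m4 = sum((x - mean) ** 4 for x in xs) / n
    skew = m3 / m2 ** 1.5
    kurt = m4 / m2 ** 2
    return n / 6.0 * (skew ** 2 + (kurt - 3.0) ** 2 / 4.0)

if __name__ == "__main__":
    random.seed(0)
    # Stand-in for 120 face-area readings in a normal learning state.
    normal_like = [random.gauss(100.0, 5.0) for _ in range(120)]
    uniform = [random.uniform(0.0, 1.0) for _ in range(120)]
    print(jarque_bera(normal_like))  # typically small for normal data
    print(jarque_bera(uniform))      # larger for non-normal data
```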
For the aversion degree, statistical analysis of the previously collected data found that both the face area and the interpupillary distance follow a normal distribution, indicating that the learner's mood is relatively stable over a period of time. We preprocessed and labeled 120 sets of face area and interpupillary distance data from two different learners, selecting 100 sets as the training set and the remaining 20 as the test set. We consider a simplified classification of emotions into four categories: very interested, interested, tired and very tired. Label one stands for very tired, two for tired, three for interested and four for very interested; the input is the test data of face area and interpupillary distance. Figure 160.2 shows the relationship between the category labels and the face area and interpupillary distance; the asterisks mark the distribution of the sample points.
The radial basis kernel function is selected as the kernel, and the penalty parameter C is set to 1000.
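The classification step described above can be sketched as follows. The feature values are synthetic stand-ins for the detected face area and interpupillary distance, and scikit-learn's SVC is used in place of the libsvm Matlab toolbox, with the same RBF kernel and C = 1000:

```python
import random
from sklearn.svm import SVC

random.seed(0)

# Synthetic stand-ins for (face area, interpupillary distance) samples.
# Labels: 1 = very tired, 2 = tired, 3 = interested, 4 = very interested;
# larger measurements (leaning forward) correspond to higher labels.
centers = {1: (80.0, 50.0), 2: (95.0, 56.0), 3: (110.0, 62.0), 4: (125.0, 68.0)}
X, y = [], []
for label, (area, dist) in centers.items():
    for _ in range(30):  # 30 samples per class -> 120 sets of data
        X.append([random.gauss(area, 2.0), random.gauss(dist, 1.0)])
        y.append(label)

# 100 sets for training, the remaining 20 for testing, as in the paper.
idx = list(range(120))
random.shuffle(idx)
train, test = idx[:100], idx[100:]

clf = SVC(kernel="rbf", C=1000, gamma="scale")  # RBF kernel, C = 1000
clf.fit([X[i] for i in train], [y[i] for i in train])
acc = clf.score([X[i] for i in test], [y[i] for i in test])
print(f"test accuracy: {acc:.2f}")
```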
160.5 Conclusion
This paper uses the support vector machine's capability for fast learning and classification of small nonlinear samples, together with the OCC emotion model and the aversion, cheer and pleasure degrees, to establish an academic emotion model in E-Learning. The model provides a necessary basis for studying academic emotions; it is also a useful attempt to apply the SVM algorithm in the field of emotion recognition, and it achieved good results.
References
Chen W, Mei Y (2011) Research on forecasting methods for reduction ratio of pore closure in
forging stock based on PSO and SVM. Comput Eng Appl 47(27):243–245
Cortes C, Vapnik V (1995) Support-vector networks. Mach Learn 20:273–297
[EB/OL] http://www.csie.ntu.edu.tw/~cjlin/libsvm/index.html
Kang H, Li M, Zhou P, Zhao Z (2011) Prediction of traffic flow using support vector machine
optimized by chaos higher efficient genetic algorithm. J Wuhan Univ Technol (Transportation
Science and Engineering) 35(4):649–653
O’Regan K (2003) Emotion and e-learning. JALN 7(3):78–92
Su X-p, Xu Y-x (2009) Intelligent E-learning system having emotion interaction function.
Comput Eng Des 30(15):3690–3693
Wang Z-l (2009) Artificial emotion. Mechanical Industry Press, Beijing
Wang W-s, Wu J-b (2011) Emotion recognition based on CSO&SVM in e-learning, ICNC
Wu Q-l (2003) Educational psychology—the book dedicated to the teachers. East China Normal
University Press, Shanghai
Zhang X-y (2009) Framework of an E-Learning system based on affective computing. J Hunan
Inst Sci Technol (Natural Sciences) 22(4):51–54
Chapter 161
Research on Decision-Making Behavior
Test System for Top Management Team
Based on Simulation Environment
Abstract The decisions made by a Top Management Team are vitally important for business operation, so improving the quality and reliability of decision-making is very necessary. Starting from the Prospect Theory of behavioral decision-making, this paper proposes testing the decision-making behaviors of Top Management Teams and analyzes the specific process and methods of decision-making. From the results of the decision-making behavior tests, the characteristics of a Top Management Team can be obtained, providing a reasonable foundation for evaluating and improving decision-making behaviors.
161.1 Introduction
The Nobel Prize Winner Herbert Simon used to say that ‘‘management is making
decision’’, which reveals how important the decision-making is in business
administration. With the global economic integration goes further in China, drastic
market competition and rapid changes of information revolution, diversification
oriented business and close coordination oriented department all present new
challenges to executive leaders (Ancona and Nadler 1989). At the same time, team
decision-making gradually takes place of personal decision-making and is
161.2 Method
Prospect Theory describes the Framing Effect, Reference Point Effect, Deterministic Effect and other effects caused by irrational behaviors under uncertain conditions; moreover, these irrational behaviors make individual decisions deviate from Expected Utility Maximization. The Framing Effect means that different formulations of the same problem can lead to different preferences. The Deterministic Effect means that decision makers have an obvious preference for certainty. In fact, both effects are caused by changes of the reference point, which has a big influence on individual decisions. In different fields, different environments and among different individuals, decision makers' reference points may change, and we define this change as the Reference Point Effect. For example, through experiments Zhan-lei Li validated that the Framing Effect, Reference Point Effect and Deterministic Effect exist in individual decisions in economic, social and cultural environments (Li et al. 2007).
In view of the different influences of these behavioral decision-making effects, this paper mainly uses four effects extended from Prospect Theory (Reference Point Effect, Framing Effect, Fuzzy Avoid Effect and Deterministic Effect) to test the decision behaviors of TMTs under different decision scenarios and analyze their behavioral characteristics.
scenarios based on the four basic effects. At last, we can analyze their decision-making characteristics from the data collected.
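The Prospect Theory value function underlying these effects (Kahneman and Tversky 1979) can be sketched as follows; the parameter values alpha = beta = 0.88 and lambda = 2.25 are estimates from Tversky and Kahneman's later work, used here only for illustration:

```python
def prospect_value(outcome, reference=0.0, alpha=0.88, beta=0.88, lam=2.25):
    """Prospect Theory value function: outcomes are evaluated as gains
    or losses relative to a reference point; losses loom larger than
    gains (loss aversion, lam > 1), and sensitivity diminishes away
    from the reference point (alpha, beta < 1)."""
    x = outcome - reference
    if x >= 0:
        return x ** alpha            # concave for gains
    return -lam * (-x) ** beta       # convex and steeper for losses

# Reference Point Effect: the same outcome feels different under
# different reference points.
print(prospect_value(100, reference=0))    # evaluated as a gain
print(prospect_value(100, reference=200))  # evaluated as a loss

# Loss aversion: a loss hurts more than an equal gain pleases.
print(abs(prospect_value(-50)) > prospect_value(50))  # True
```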
Fig. 161.1 System structure: database, management services, and testing-situation information
Secondly, they should input their team information, and then they can choose testing scenarios. After that, they enter the testing subsystem to finish the whole process under the guidance of the system.
The database stores the test data of the testing subsystem and the system parameters of the simulation programs. In the testing phase, subjects input decision variables and decision values as the simulation system asks. The system then uses the relevant decision parameters, set by managers in advance, to calculate the results of the simulation run. Finally, the results are shown in the interface in the form of reports. In the behavior-analysis phase, managers use statistical analysis software to analyze all the decision data and report the characteristics of the subjects' decision-making behaviors.
the values inputted by subjects; the decision-behavior analysis module analyzes the decision data and the operation results, and finally obtains the test results by classifying subjects' behaviors based on the four effects mentioned before.
This function includes adding test scenarios that have been designed, modifying the relevant marketing parameters in every stage according to the different behavioral decision-making effects, and editing or deleting situations that are not significant in the testing phase. Only managers have the right to modify parameters to cater for these demands.
Online testing mainly provides the functions used to test subjects' decision-making behaviors. First, subjects log in to the main interface; then, guided by the system, they carry out the required tests. The system provides two individual settings, single-period testing and multi-round simulation decision testing, for different testing aims. In single-period testing, the system guides subjects to another decision situation after they finish their first decision and provides the simulation results. Subjects are tested many times under certain testing scenarios, where only some parameters are changed in order to control the certainty of the simulated situation, until enough data has been collected. In multi-round simulation decision testing, subjects are required to manage a company for several periods, and the testing scenarios change along with the subjects' decisions. Unlike single-period testing, in multi-round testing a subject's decisions always affect the next period.
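The two testing modes could be organized as in the sketch below. The scenario contents and the simulation function are placeholders, since the paper does not specify them:

```python
import random

def simulate(decision, params):
    """Placeholder for the simulation run: the real system uses
    decision parameters set by managers in advance."""
    return {"profit": decision * params["margin"]}

def single_period_test(subject_decide, scenarios, rounds=10):
    """Each round is independent: only some scenario parameters
    change, so the simulated situation stays controlled."""
    records = []
    for _ in range(rounds):
        params = random.choice(scenarios)
        decision = subject_decide(params)
        records.append((params, decision, simulate(decision, params)))
    return records

def multi_round_test(subject_decide, params, rounds=10):
    """Rounds are linked: each decision changes the scenario that
    the next period starts from."""
    records = []
    for _ in range(rounds):
        decision = subject_decide(params)
        result = simulate(decision, params)
        records.append((dict(params), decision, result))
        # The scenario for the next period depends on this decision.
        params = {"margin": params["margin"] * (1.0 + 0.01 * decision)}
    return records

if __name__ == "__main__":
    random.seed(1)
    decide = lambda p: 1 if p["margin"] > 0.5 else -1
    print(len(single_period_test(decide, [{"margin": 0.4}, {"margin": 0.6}])))
    print(len(multi_round_test(decide, {"margin": 0.6})))
```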
After all the tests are over, managers use statistical analysis software to analyze the data the subjects submitted, and the final results are obtained.
There are two main aspects: in multi-round simulation decision testing, operational results are provided and can be queried; after the tests are over, the decision-making behavioral characteristics and relevant suggestions are provided. The searching process is shown in Fig. 161.2.
161.5 Summary
X. Hong et al.
Acknowledgments This work was partly supported by the National Science Foundation of
China (Project No. 71171043), the National Basic Scientific Research Expenses—National
Project Breeding Fund (Project No. N090406006) and the National Undergraduates Innovating
Experimentation Project ‘‘Under Team Task Situations Decision-Making Behavior of Business
Executives Test Platform’’ (Project No. 110105).
References
Ancona DG, Nadler DA (1989) Top hats and executive tales: designing the senior team. Sloan
Manage Rev 31(1):19–28
Bang H, Fuglesang SL, Ovesen MR, Eilerten DE (2010) Effectiveness in top management group
meetings: the role of goal clarity, focused communication, and learning behavior. Scand J
Psychol 51:253–261
Barberis N, Huang M, Santos T (2001) Prospect theory and asset prices. Q J Econ 116(1):1–53
Barberis N, Shleifer A, Vishny R (1998) A model of investor sentiment. J Financ Econ
49:307–343
Boone C, Hendriks W (2009) Manage Sci 55(2):165–180
Daniel K, Hirshleifer D, Subrahmanyam A (1998) Investor psychology and security market under
and overreactions. J Financ 53:1839–1885
Hong H, Stein J (1999) A unified theory of under reaction, momentum trading, and overreaction
in asset markets. J Financ 54:2143–2184
Huang C (2006) Discussion on behavioral decision theory and decision-making behavior of
empirical research methods (in Chinese). Economic Survey, No. 5
Iwai C (2007) Development of MBABEST21: a case-based management game. http://
www.MBABEST21.org
Kahneman D, Tversky A (1979) Prospect theory: an analysis of decision under risk.
Econometrica 47:263–295
Kahneman D, Tversky A (1981) The framing of decisions and the psychology of choice. Science
211:453–458
Li Z-L, Li H-M, Li N (2007) Experimental verification of the non-rational behavior of individual
decision-making (in Chinese). J HeBei Univ Eng 24(2):14–18
Shefrin H, Statman M (1994) Behavioral capital asset pricing theory. J Financ Quant Anal
29(3):323–349
Shefrin H, Statman M (2000) Behavioral portfolio theory. J Financ Quant Anal 35(2):127–151
von Neumann J, Morgenstern O (1944) Theory of games and economic behavior. Princeton
University Press, Princeton
Wang Q-W (1999) Economics and management computer essentials (in Chinese). Higher
Education Press, Beijing, pp 245–246
Xiao X-D (2001) Evolution and revolution of the development of modern management
simulation (in Chinese). Mod Manage 5:50–52
Chapter 162
An Exploratory System Dynamics Model
of Business Model Evolution
Abstract The evolution of the business model is a complex dynamic process and shows abundant dynamic characteristics, which are the integrated effect of the surroundings and the internal structure of the system. By developing causal loop diagrams and stock and flow diagrams, we build a scientific system dynamics model to identify the relationships among key variables in the business model and probe its evolutionary dynamics. As a result, we lay the foundation for further research on business model evolution.
Keywords Business model evolution · Causal loop diagram · Stock and flow diagram · System dynamics model
162.1 Introduction
The business model (BM hereafter) concept became prevalent in the mid 1990s
with the advent of the Internet and its massive adoption for e-commerce (Amit and
Zott 2001), rapid growth in emerging markets and interest in ‘‘bottom-of-the-
pyramid’’ issues (Seelos and Mair 2007), as well as expanding organizations
dependent on post-industrial technologies (Perkmann and Spicer 2010), and it has
been gathering momentum since then.
In fact, each firm has its unique BM from its foundation. How to deal with the
evolution of many elements and their interactions in each BM subsystem is one of
X. Shao (&)
School of Management, Zhejiang University, Hangzhou, China
e-mail: [email protected]
J. Shi
College of Mechanical and Transportation Engineering, China
University of Petroleum, Beijing, China
the main problems bothering all firms that want to gain long-term survival and sustained competitive advantage in a changing market. However, academic research on BM seems to lag behind practice and lacks a systematic and dynamic view, which leads to confusion in management. Consequently, in practice, many managers do not know when and how to implement BM change; even if they take action, the results may be contrary to expectations, because a lack of systems thinking, or ignoring delays in the system, triggers poor dynamic behavior in the evolution of the BM.
The initial research on the business model took a static approach and focused on related concepts, structures and elements. Recently, a dynamic approach has emerged, but it only emphasizes the impetus, approach, and implementation of BM innovation.
Despite the overall surge in the literature on BM, no generally accepted definition
of the term ‘‘business model’’ has emerged. At a general level BM has been referred
to as a statement (Stewart and Zhao 2000), a description (Applegate 2000), a
representation (Morris et al. 2005), an architecture (Timmers 1998), a conceptual
tool or model (Teece 2010). Morris et al. (2005) conducted a content analysis of key
words in 30 definitions to identify three general categories of definitions which can
be labeled economic, operational, and strategic, with each comprised of a unique set
of decision variables (Stewart and Zhao 2000).
Though the diversity of available definitions poses substantive challenges for determining what constitutes a good BM, a literature review finds that ''value'' appears most frequently, as shown in Table 162.1.
The relationship between the business model and time is little discussed, and the dynamic perspective has only recently been incorporated into research on this topic. Research in China and abroad explores the innovative impetus of BM from the perspectives of technology, demand, competition, executives and systems (Wang and Wang 2010), which can be seen as the influencing factors of BM evolution.
Through the literature review, we find there is no denying that BM is a complex system of value creation. However, scholars cannot agree on its components, since the concept emerged from business practice only recently and scholars frequently adopt idiosyncratic definitions that fit the purposes of their specific studies; such definitions lack theoretical underpinnings and are, as a result, difficult to reconcile with each other.
Studies taking a dynamic perspective on the BM are relatively rare, partly because of its debatable structure and partly because of research methodology. Existing studies are almost without exception qualitative case analyses, which cannot portray and explain the dynamic characteristics and internal mechanism of BM evolution.
The evolution of the BM is a complex dynamic process and shows rich dynamic characteristics, which are the integrated effect of the surroundings and the internal structure. In order to identify the mechanism and probe the evolutionary dynamics of the BM, we have to build a scientific dynamic system model. Due to limited space, this article focuses on the various components of the BM and their interaction mechanisms, taking external environmental factors as given exogenous variables. We use the System Dynamics (hereafter SD) approach to establish the initial exploratory model.
162.3 Method
162.4 Model
The various variables involved in the process of BM evolution can be divided into two types: state variables and influencing variables. These variables and their mechanisms constitute a complex nonlinear dynamic feedback system referred to as the BMES. While the state variables are all endogenous, the influencing variables can be either endogenous or exogenous.
The BMES is divided into three subsystems, as shown in Fig. 162.1: the resource scale subsystem, the value proposition subsystem, and the management capability subsystem, corresponding respectively to the three state variables. These subsystems connect with each other via variables, and there is only one negative feedback loop, between the resource scale subsystem and the management capability subsystem.
In summary, the main interactions between the three subsystems of the business model can be displayed in one causal loop diagram, as shown in Fig. 162.2, including five reinforcing loops and two balancing loops.
The resource scale is a state variable referring to the resources purchased from the external market or developed internally by the firm, including physical resources (e.g., plant, capital), intangible resources (such as patents and trademarks), and knowledge resources (existing in personnel, files, or other similar media) (Benoît and Xavier 2010).
The complexity of daily operational management, and hence the required management capability, changes in the same direction as the resource scale, as shown in Fig. 162.3. But the growth of the resource scale decreases the remaining management capability available for expanding the resource scale further, so the growth rate slows down. This is the negative feedback loop between resource scale and remaining management capability.
Fig. 162.3 Causal loop diagram of the BMES, linking remaining management capability, management knowledge, resource scale, the number of existing customers, sales, profits, and market promotion costs through reinforcing and balancing loops
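The stock-and-flow logic of this balancing loop can be illustrated with a simple Euler-integration sketch. The functional forms and parameter values below are our own illustrative assumptions, not calibrated values from the model:

```python
def simulate_bmes(steps=40, dt=1.0):
    """Minimal stock-and-flow sketch of the resource-scale /
    management-capability balancing loop (Penrose effect): resource
    growth consumes management capability, which slows further growth."""
    resource = 10.0    # stock: resource scale
    capability = 20.0  # stock: management capability
    history = []
    for _ in range(steps):
        required = 0.5 * resource          # daily operations bind capability
        remaining = max(0.0, capability - required)
        growth = 0.1 * remaining           # expansion uses remaining capability
        capability_gain = 0.05 * resource  # management knowledge accumulates
        resource += growth * dt
        capability += capability_gain * dt
        history.append((resource, remaining))
    return history

if __name__ == "__main__":
    hist = simulate_bmes()
    print(hist[0])
    # The resource stock grows, but at a pace limited by capability.
    print(hist[-1])
```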
162.5 Conclusion
This paper builds an SD model to explore the internal structure of the complex BMES and clarifies the meanings and functions of many vague concepts in Penrose's theory. We hope to help practitioners gain a better understanding of the dynamic relationships between the various promoting and constraining variables and to improve systematic thinking in BM decision-making.
Since this paper is an exploratory study of BM evolution using the SD approach, several limitations should be addressed in future research.
Firstly, although the SD model of the BMES has been established, the model simulation was not carried out due to limited time and space. Therefore, the dynamic behavior patterns of the system have not been fully revealed.
Secondly, this paper focused on the internal structure, which is essential to the system behavior. Consideration of technical, social, political, and other external factors was inadequate; these should be included in future research to gain a more comprehensive understanding.
References
163.1 Introduction
An independent director is a director who is not employed by the company or its affiliates and is not closely related to the company or its management through significant economic, family or other ties. The Investment Company Act, put forward in the United States in 1940, required that the proportion of independent
T. Hui (&) F. Lu
Department of Economics and Management, University of Xidian,
Xi’an, China
e-mail: [email protected]
F. Lu
e-mail: [email protected]
directors in an investment company should not be less than 40 %. Because of the important role of independent directors, they have received wide attention, and many scholars have researched them: the number of independent directors, their background characteristics, and so on. However, owing to differences in research samples, periods and methods, consistent conclusions have still not been reached.
Securities investment funds have become an important investment vehicle and have developed rapidly in recent years. By the end of 2011, there were 914 securities investment funds in China, with a scale of 2646.465 billion, a growth of 9.17 % over the previous year. At the same time, problems such as interest transfer, tunneling and ''rat trading'' (front-running) occur frequently, so improving the governance structure of fund management companies to protect the interests of investors is becoming urgent. Independent directors, as an important governance mechanism in fund management companies, are endowed with great expectations. By analyzing the structure of independent directors and its influence on fund performance, this paper hopes to promote the healthy and rapid development of fund management companies in China.
163.3 Hypothesis
(1) The relationship between the proportion of independent directors and fund performance: director independence is the foundation of the board's decision-making and supervision. For a fund management company, compared with affiliated directors, independent directors, driven by their social reputation and long-term material interests, can usually supervise
Table 163.1 The list of research literature about general companies
Author | Dependent variables | Data sources | Research conclusions
Zhang et al. (2011) | ROA | 14 banks listed on the Shanghai and Shenzhen stock exchanges, 2006–2009 | Independent directors have a significantly positive effect on bank performance
Fan and Li (2009) | ROE | 109 listed companies in Guangxi, 2002–2006 | ROE is negatively correlated with the proportion of independent directors, but not significantly
Meng (2010) | ROA, Tobin's Q | 3740 A-share listed companies | The proportion of independent directors is weakly related to corporate governance performance
Liu et al. (2009) | ROA | 141 civilian-run listed companies, 2004–2007 | The proportion of independent directors is not an important factor affecting company performance
Li et al. (2009) | Tobin's Q | 863 listed manufacturing companies in Japan, cross-section 2001–2006 | The independent board system has a significant positive effect on company performance
163 An Empirical Study on the Influence of Independent Directors
Table 163.2 The research literature about the fund management company
The data cover fund management companies in China and their equity open-end funds and mixed open-end funds from 2005 to 2010, excluding LOF and QDII funds. To ensure the integrity of the fund data, a fund must have been established before the year being studied.
The research variables in this paper consist of dependent variables, independent variables and control variables; their descriptions are shown in Table 163.3.
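The regression reported later in the chapter can be sketched roughly as an ordinary least-squares fit of fund performance on the independent-director variables plus controls. The variable names and data below are synthetic placeholders, since Table 163.3 is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200  # synthetic fund-year observations (placeholder data)

# Hypothetical explanatory variables: proportion of independent directors,
# fund size, and fund age (the paper's actual variable set is in Table 163.3).
indep_ratio = rng.uniform(0.3, 0.6, n)
fund_size = rng.uniform(1.0, 50.0, n)
fund_age = rng.uniform(0.0, 6.0, n)

# Synthetic dependent variable (fund risk) with an assumed negative
# coefficient on the independent-director ratio, as the chapter reports.
risk = (0.5 - 0.4 * indep_ratio + 0.002 * fund_size - 0.01 * fund_age
        + rng.normal(0.0, 0.01, n))

# Ordinary least squares via numpy's least-squares solver.
X = np.column_stack([np.ones(n), indep_ratio, fund_size, fund_age])
coef, *_ = np.linalg.lstsq(X, risk, rcond=None)
print("intercept and slopes:", np.round(coef, 3))
```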
Table 163.4 indicates that from 2005 to 2010 the number of independent directors changed little; the maximum and minimum are 6 and 2, and more than 90 % of fund management companies have three or four independent directors, meeting the CSRC requirement that a fund management company should have at least three independent directors.
Figure 163.1 shows that the dominant degree among independent directors in fund management companies is the doctorate, and its proportion tends to increase, reaching 44.0158 % in 2010. The proportion of master's degrees also goes up, but stays lower than that of doctorates. Only the proportion of bachelor's degrees falls year by year, from a high of 33.2258 % in 2005 to a low of 23.9444 % in 2010, a decrease of almost 10 % in six years. This shows that fund management companies have stricter and stricter demands on independent directors; the pursuit of high education has become one of the qualifications for becoming an independent director.
From Fig. 163.2 we can see that the majority of independent directors in fund management companies have backgrounds in economics and management; the annual average proportion reaches 62.552 % over 2005–2010. The proportion with a law background is decreasing. In recent years, the irrational behavior of some fund management companies in pursuit of high returns has led to many cases in which shareholders suffered losses. This requires fund management companies to add more independent directors with science and technology backgrounds so as to make more reasonable judgments. Accordingly, the proportion of independent directors with a science and technology background shows a rising trend.
Figure 163.3 shows that independent directors from colleges and research institutions make up more than half of the total in fund management companies; the next largest source is financial institutions, whose proportion rises gradually over 2005–2010. The others, from accounting firms, law firms, and industrial and commercial enterprises, account for only about 20 % of all independent directors. Since half of the independent directors come from schools and research institutions and have little company operating experience, some problems arise with the functioning of independent directors in fund management companies.
Regression results are shown in Table 163.5. Among the variables describing the structure of independent directors in fund management companies, the proportion of independent directors has a negative but insignificant impact on fund return; at the 0.05 level it has a significant negative correlation with fund risk; and it is positively related to risk-adjusted fund performance, which is consistent with the existing literature. The introduction of independent directors thus plays a positive role in controlling fund risk under conditions of weak supervision and protection of fund holders' interests, which can alleviate the principal-agent problem between fund holders and the fund management company. Meanwhile, compared with internal directors, independent directors have little knowledge of the company's management and are weak in supporting management decisions, so they have a negative impact on fund return; but the positive influence of their risk control exceeds this negative influence, so they have a positive influence on risk-adjusted fund performance.
The education and major of independent directors have no significant influence on fund performance, whether measured by return, risk, or risk-adjusted performance. As for titles, the proportion of independent directors holding a senior professional title has a negative influence on fund risk; such directors generally have high social status and practical experience, so they can control risk better. Besides, considering their own reputations, these independent directors have the motivation to work harder and are qualified for the role of supervisors of enterprise management.
The proportion of independent directors from industrial and commercial enterprises is significantly positively correlated with fund return and negatively correlated with fund risk. This is mainly because independent directors from industrial and commercial enterprises have no relationship with the fund management company, which better ensures the objectivity of their supervision; moreover, long-term practice enriches their working experience, helping fund management companies make better investment decisions from a viewpoint outside the financial industry. So they have a positive influence on fund performance.
In addition, fund management companies in China have shown no learning effect in recent years: the time since a fund's establishment has a negative influence on both fund return and risk. Perhaps the longer a fund operates, the more conservative its operation becomes, so both its income and its risk are lower. Especially for risk control, a fund with long-term experience has relatively mature management methods and systems, so fund age has a significant negative effect on fund risk at the 0.05 level, but is not significant for risk-adjusted fund performance. Fund scale shows a certain scale effect: the greater the scale, the higher the fund return, but also the more difficult the management and the lower the fund's flexibility, which is more apparent in a bear market. So fund size is positively related to fund risk and not significant for risk-adjusted fund performance.
163.5 Suggestions
Independent directors who have some enterprise or business experience, are
familiar with the relevant laws and regulations, and possess knowledge of capital
market operation theory will be better able to perform the duties of independent
directors.
Chapter 164
Research on Optimal Enterprise
Contribution of Hunan Province Based
on OLG Model
Ni Yang
Abstract Optimizing the enterprise contribution is a key factor in promoting the
reform of the public pension system and ensuring the dynamic balance of the social
security fund. This paper studies the optimal enterprise contribution of Hunan
province based on an OLG model. The empirical results show that life expectancy
growth raises the optimal enterprise contribution, while a declining population
growth rate reduces it, and the latter factor has the larger influence. When both
factors are introduced into the equilibrium equation, the optimal enterprise
contribution falls from 20 to 10.04 % as life expectancy rises from 73.8 to
77.2 years and the population growth rate declines. This research on the optimal
enterprise contribution provides a theoretical basis and policy support for
macroeconomic policy making and the promotion of pension reform.
Keywords OLG model · Optimal enterprise contribution · Pension reforming · Life expectancy growth · Population growth rate
164.1 Introduction
N. Yang (&)
College of Economics and Management, Hunan Normal University, Changsha, China
e-mail: [email protected]
Hunan province has a large population: its aged population (65 years old and
above) had reached 6.35 million by the end of 2009, accounting for 9.22 % of the
total population. Moreover, population aging in Hunan province is characterized by
rapid growth, large scale, and ''growing old before growing rich'', which
significantly affects the dependency ratio, consumption structure and social
security. Therefore, a proper mechanism for setting the optimal enterprise
contribution is an important premise for promoting the long-run stability and
development of social security.
As a frequently used tool for public pension analysis, the overlapping generations
(OLG) model can examine the influence of public pensions on the whole macro
economy by analyzing micro economic agents within a general equilibrium framework.
The theory was advanced by Samuelson and expanded into the classic intertemporal
dynamic model by Diamond (1965). Many scholars have discussed the relation between
social security mechanisms and economic growth based on the model from different
aspects. For example, Barro (1974) and Romer (1986) discussed the influence of
pay-as-you-go pensions on economic growth from the perspectives of bequest
motivation and personal savings, respectively. Casamatta (2000) constructed a
two-period OLG model to analyze the redistributive function of social security.
Fanti and Gori (2007) analyzed the effects of wage regulation in a standard
one-sector OLG model of neoclassical growth extended to account for endogenous
fertility decisions of households and unemployment benefit policies financed at
balanced budget. Rausch and Rutherford (2010) developed a decomposition algorithm
by which a market economy with many households may be solved through the
computation of equilibria in OLG models. Moreover, many scholars have explored the
influence of population change on the economic development of OECD countries, such
as Auerbach and Kotlikoff (1989), Neusser (1993), Hviding and Mérette (1998), and
Fougère and Mérette (1998).
Public pension reform was launched in China in 1997. In recent years, the OLG
model has been used to study the transition cost, implicit debt and dynamic
efficiency of pension reform. For example, Bo (2000) explored the influence of
different institutional arrangements on economic growth and Pareto efficiency.
Yuan and Song (2000) concluded, by constructing an OLG model and simulation, that
personal saving rates in China were efficient while the macro savings rate was
inefficient. Yuan and He (2003) discussed the possibility of dynamic inefficiency
in a freely competitive economy with the help of an OLG model. Yang (2008)
analyzed endowment insurance for enterprise employees based on an OLG model to
obtain the optimal pension replacement rate in a general equilibrium framework.
Wang et al. (2010) analyzed the economic effects of the occupational pension
system in terms of macroeconomic capital and output, microeconomic producers and
microeconomic consumers based on an OLG model. Li and Bai (2006) constructed an
OLG model in GAMS, with population age structure as the input variable, to
illustrate the changes in social output, personal consumption and government
revenue arising from population ageing in China. Huang and Sun (2005) analyzed
differences in informal institutions and consumption modes in an OLG model, and
made a theoretical analysis of household consumption under oriental culture and
belief. Liu and Hao (2011) constructed a discrete-time bilateral generation
transfer model and discussed the optimal investment structure and economic growth
pattern. Lu (2011) constructed a three-period OLG model to explain the influence
of population structure and income growth on personal savings in China.
This paper studies the optimal enterprise contribution in the partially funded
endowment insurance of Hunan province based on an OLG equilibrium framework.
Combining the characteristics of population structure and economic growth in Hunan
province, parameter selection and model improvement are analyzed. The rest of the
paper is arranged as follows. Section 164.2 introduces the basic framework of the
OLG model and derives the optimal enterprise contribution. Section 164.3 presents
an empirical study on the selection of the optimal enterprise contribution in the
urban old-age insurance of Hunan province. Section 164.4 concludes and suggests
future research.
s.t. \quad c_{1t} = (1 - \tau) w_t - s_t, \qquad c_{2,t+1} = (1 + r_{t+1}) s_t + (1 + r_{t+1}) g_t + b_{t+1} \quad (164.3)
Parameter \theta \in (0, 1) denotes the discount factor. The utility function
u(\cdot) is a monotonically increasing, strictly concave function of consumption,
satisfying u'(\cdot) > 0, u''(\cdot) < 0. Enterprises produce homogeneous goods in
a competitive market with a first-order homogeneous production function
y_t = f(k_t), where k_t denotes the capital-labor ratio. The enterprise
contribution rate for endowment insurance is \eta \in (0, 1).
According to the Euler theorem, the interest rate equals the marginal output of
capital, while (1 + \eta) w_t equals the marginal output of labor, satisfying:

r_t = f'(k_t), \qquad w_t = \frac{f(k_t) - k_t f'(k_t)}{1 + \eta} \quad (164.4)
To make the market economy reach the optimal state, policy parameters should be
adjusted to obtain the optimal capital-labor ratio.
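Equation (164.4) can be sanity-checked numerically. The sketch below assumes a hypothetical Cobb-Douglas technology f(k) = k^\alpha with illustrative values of k and \eta; none of these numbers are taken from the chapter:

```python
# Numerical check of Eq. (164.4) for a hypothetical Cobb-Douglas technology
# f(k) = k**alpha; k and eta below are illustrative assumptions.
alpha, eta = 0.36, 0.20
k = 2.0

f = lambda x: x ** alpha
h = 1e-6
f_prime = (f(k + h) - f(k - h)) / (2 * h)    # central-difference estimate of f'(k)

r = f_prime                                   # r_t = f'(k_t)
w = (f(k) - k * f_prime) / (1 + eta)          # w_t = (f(k_t) - k_t f'(k_t)) / (1 + eta)

# For Cobb-Douglas these reduce to r = alpha*k**(alpha-1), w = (1-alpha)*k**alpha/(1+eta)
assert abs(r - alpha * k ** (alpha - 1)) < 1e-6
assert abs(w - (1 - alpha) * k ** alpha / (1 + eta)) < 1e-6
```

The two assertions confirm that the factor-price expressions are consistent with the Euler-theorem statement in the text for this technology.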
From the functions described above, we obtain the equilibrium equation for the
optimal enterprise contribution:

\eta = \frac{\dfrac{p}{\rho}\left[\dfrac{1-\alpha}{\alpha} \cdot \dfrac{1+n}{1+n-\rho}\right] + \dfrac{p(1-p)\rho}{(1+p)(1+n)}}{\dfrac{1}{p}\left[\dfrac{1-\alpha}{\alpha} \cdot \dfrac{1+n}{1+n-\rho}\right] - \dfrac{p(1-p)\rho}{(1+p)(1+n)}} \quad (164.8)
According to the equation above, the optimal enterprise contribution is influenced
by the survival probability in retirement p, the social discount factor \rho, the
capital income share \alpha and the population growth rate n.
The capital income share \alpha usually equals about 0.3 for developed countries.
However, labor income is lower in China, and Hunan province has a large population
density; therefore \alpha is set to 0.36.
The urban population of Hunan province is adopted as the statistical caliber for
population. According to data announced by the Department of Economic and Social
Affairs of the United Nations, life expectancy will increase to 80.3 years in
2055–2060. Therefore, the length of one period is set to 27 years, which satisfies
the condition that the time span of three periods should be equal to or greater
than the life expectancy (3 × 27 = 81 > 80.3).
With improving living quality and medical conditions, life expectancy is
increasing. Because of the limitation of available data, we assume that life
expectancy in Hunan province is the same as that of the whole country.
According to data announced by the Department of Economic and Social Affairs of
the United Nations, we obtain the life expectancy of the Chinese population over
the next 30 years (see Table 164.1).
Substituting the parameter values set above into (164.8), including \alpha = 0.38,
n = 3.0962 and \rho = 0.5458, the optimal enterprise contribution can be obtained
under different life expectancies.
According to Table 164.1, the optimal enterprise contribution increases as life
expectancy rises. Over the next 25 years, life expectancy in China will increase
from 73.8 to 77.2 years; with the population growth rate fixed, the optimal
enterprise contribution would rise from 20 to 22.97 %.
Table 164.1 Estimation of optimal enterprise contribution (fixed population growth rate)

Period       Life expectancy   p        \eta
2015–2020    74.7              0.7667   0.2104
2020–2025    75.6              0.8000   0.2189
2025–2030    76.4              0.8296   0.2251
2030–2035    77.2              0.8592   0.2297
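The survival-probability column of Table 164.1 is consistent with a simple mapping from life expectancy to period lengths: with a period length of T = 27 years and the first two periods lived with certainty, p = (LE − 2T)/T. This mapping is our reading of the table, not something stated explicitly in the chapter; a short check:

```python
# Reproduce the survival-probability column of Table 164.1, assuming the mapping
# p = (LE - 2*T) / T with period length T = 27 years. The mapping is inferred
# from the table values, not stated explicitly in the chapter.
T = 27.0
life_expectancy = [74.7, 75.6, 76.4, 77.2]
p_table = [0.7667, 0.8000, 0.8296, 0.8592]

p_calc = [(le - 2 * T) / T for le in life_expectancy]
for p, p_ref in zip(p_calc, p_table):
    assert abs(p - p_ref) < 5e-4   # matches the table to rounding
```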
China has enforced a strict one-child policy since the 1980s, so fertility
declined rapidly, which is the main reason for population ageing. Low fertility,
low mortality and a low population growth rate have become the main
characteristics of China's population. Changes in the population growth rate
influence the population age composition and the social support ratio, and hence
the optimal enterprise contribution.
Table 164.2 shows the optimal enterprise contribution when both variables, life
expectancy growth and population growth rate decline, are brought into the
equilibrium framework. According to Table 164.2, the optimal enterprise
contribution declines in this case.
Over the next 25 years, life expectancy in China will increase from 73.8 to
77.2 years while the population growth rate declines continuously; the optimal
enterprise contribution then falls from 20 to 10.04 %. Given that life expectancy
growth alone would increase the optimal enterprise contribution, while the joint
change of the two variables makes it decline, we conclude that the population
growth rate plays a more important role than life expectancy growth in determining
the optimal enterprise contribution, because of China's large population base.
This paper has discussed the optimal enterprise contribution of Hunan province
based on the OLG model. First, we introduced the basic framework of the OLG model
and derived the optimal enterprise contribution; then we carried out an empirical
study on the determination of the optimal enterprise contribution for Hunan
province. The empirical results show that life expectancy growth increases the
optimal enterprise contribution while a declining population growth rate lowers
it, and that the population growth rate plays the more important role because of
China's large population base.
However, this paper discussed the adjustment of the optimal enterprise
contribution caused by parameter changes based on static equilibrium equations.
Constructing a general dynamic equilibrium system to describe the real state of
pension operation and the dynamic changes of parameters under social optimization
remains a topic for future research.
References
Auerbach AJ, Kotlikoff LJ (1989) The economic dynamics of an ageing population: the case of
four OECD Countries. OECD Econ Rev 12(1):97–130
Barro RJ (1974) The impact of social security of private saving. American Enterprise Inst,
Washington DC, pp 21–35
Bo J (2000) The influence of endowment insurance system arrangement on economic growth and
Pareto efficiency. Econ Sci 27(1):78–88 (in Chinese)
Casamatta G (2000) The political economy of social security. Scand J Econ 102(3):503–522
Diamond PA (1965) National debt in a neoclassical growth model. Am Econ Rev
55(5):1126–1150
Fanti L, Gori L (2007) Fertility, income and welfare in an OLG model with regulated wages. Int
Rev Econ 54(2):405–427
Fougère M, Mérette M (1998) Population ageing and current account in selected OECD
countries. Working Papers-Department of Finance Canada, vol 4, no 1, pp 1–24
Huang S, Sun T (2005) Informal institutions, consumption modes and assumption of OLG
model—a theoretical analysis on households’ consumption in the oriental culture and belief
(in Chinese). Econ Res J 24(4):57–65
Hviding K, Mérette M (1998) Population effects of pension reform in the context of ageing. OLG
simulations for seven OECD countries, OECD Working Paper, pp 1–23
Li H, Bai X (2006) Life-cycle model and its application to research in aging China. Chin J
Population Sci 28(4):28–35 (in Chinese)
Liu Q, Hao S (2011) Theoretical analysis on uncertainty of aging issue in gift economy based on
OLG model. Stat Res 28(10):84–90 (in Chinese)
Lu D (2011) Population structure, economic growth and China’s household saving: empirical
research based on OLG model and panel data. Shanghai Finance 32(1):10–15 (in Chinese)
Neusser K (1993) Savings, social security, and bequests in an OLG model: a simulation exercise
for Austria. J Econ 7(1):133–155
Rausch S, Rutherford TF (2010) Computation of equilibria in OLG models with many
heterogeneous households. J Econ 36(2):171–189
Romer PM (1986) Increasing returns and long-run growth. J Polit Econ 94(5):1002–1037
Wang X, Zhai Y, Yan H (2010) Economic effects of the occupational pension system: the
research based on general equilibrium model. Nankai Econ Stud 12(5):46–55 (in Chinese)
164 Research on Optimal Enterprise Contribution of Hunan Province 1565
Yang Z (2008) The public pension for enterprise employees, benefit replacement rate and
population growth rate. Stat Res 25(5):38–42 (in Chinese)
Yang Z (2010) OLG model analysis on public pension: principles and applications. Guangming
Daily Press, Beijing, pp 27–45 (in Chinese)
Yuan Z, He Z (2003) Dynamic inefficiency in China’s economy since 1990s. Econ Res J
24(7):18–27 (in Chinese)
Yuan Z, Song Z (2000) The age composition of population, the endowment insurance system and
optimal savings ratio in China. Econ Res J 11(1):24–32 (in Chinese)
Chapter 165
Comprehensive Experiment Design
of Production Logistics Based on CDIO
Keywords CDIO concept · Production logistics · Comprehensive and project-based experiment · Role exchange
Production and Operation Management and Facilities Planning and Logistics are
important courses for students majoring in Logistics Engineering. Their core
contents are closely related to the actual operation of enterprises (especially
manufacturing plants), and they are core curricula for cultivating production
logistics professionals (Zhang 2006). The two courses cover market demand
analysis, facilities planning and layout, logistics systems analysis and design,
organization and design of flow lines, production planning and control,
Y. Li (&) X. Lan
Mechanical Engineering College, Zhejiang University of Technology,
Hangzhou, China
e-mail: [email protected]
quality control, work study and time study, business process reengineering,
advanced manufacturing systems and so on, all of which are strongly practical
(Jiang 2006; Li and Chen 2008).
Among the many interesting facts we know about how experience affects learning,
one relates especially to CDIO (Qi and Wu 2010): engineering students tend to
learn by experiencing the concrete and then applying that experience to the
abstract. Unlike their counterparts of yesteryear, many engineering students these
days do not arrive at college armed with hands-on experiences like tinkering with
cars or building radios (Li et al. 2011; Feng 2009). CDIO has open and accessible
channels for its program materials and for disseminating and exchanging resources.
CDIO collaborators have assembled a unique development team of curriculum,
teaching and learning, assessment, design-and-build, and communications
professionals, and they are helping others to explore adopting CDIO in their
institutions (Cheng 2006).
The current teaching model pays most of its attention to basic theory (Xu 2003;
Chen and Peng 2007): the teacher mainly lectures from notes, and the students
accept the basic knowledge passively. There is no comprehensive, systematic
experimental course, teachers and students lack deep communication, and the
students get no real feel for the knowledge, so it is difficult to achieve
teaching effectiveness. Although teachers use some auxiliary teaching materials
and tools (Xiao and Zheng 2008; Zhang and Bo 2004; Jian and Li 2008), such as case
studies, videos and short practice visits, the teaching model still has the
following problems:
(1) Case studies contain descriptive content but lack actual data; the cases are
    far from practice, which makes it hard to attract the students' interest.
(2) Comprehensive videos lack production logistics content; without material
    devoted specifically to production logistics, it is difficult to integrate
    the curriculum content closely.
(3) Practice visits are too short for students to understand the application of
    professional knowledge deeply, and the existing curriculum design is limited
    to theory with little practical support.
(4) Experimental teaching software pays more attention to solving models of
    production and management activities than to production logistics analysis,
    which plays a larger role in cultivation; it is therefore hard to achieve the
    teaching purpose of cultivating applied and innovative ability.
(5) The knowledge and emotional experience of juniors are far from actual
    business operation, and the course knowledge is abstract and dry, which leads
    to a lack of study interest among students.
In response to these problems, it is necessary to design a comprehensive
experiment on the logistics engineering and industrial engineering laboratory
platform, based on the CDIO engineering education philosophy (Bartholdi and
Hackman 2008). The comprehensive experiment integrates many courses to improve
students' understanding and application of the organization, design, operation
and control of production logistics in a manufacturing factory (Frazelle 2002),
and it can attract interest in professional courses.
The purpose of the experiment is to help students learn to use theoretical
knowledge comprehensively. The experiment provides a production process model for
independent design, analysis and optimization by students, cultivating independent
analysis and problem-solving ability. The detailed objectives are as follows:
(1) To change the single teaching method and improve teaching effectiveness,
    promoting the application of the CDIO concept in higher education;
(2) To help students understand production and operation in manufacturing
    enterprises deeply, improve their grasp of the basic theories and methods of
    the courses, reduce the feeling of abstraction and dullness, and attract
    study interest;
(3) To improve recognition, understanding and application of the internal
    production logistics system of a manufacturing plant, enhance awareness of
    and interest in production logistics jobs, and broaden employment views and
    choices;
(4) To improve the ability to solve practical problems with comprehensive
    knowledge, and to cultivate creativity and teamwork.
Under the CDIO teaching concept, the experiment covers many professional courses
and their relevant principles and theories, including production demand analysis
and forecasting, facility planning and layout, flow line organization and
balancing, throughput analysis, production planning and scheduling, quality
control and statistical analysis, the Just-in-Time system, the Kanban system and
so on. The experiment syllabus includes:
(1) To master market demand analysis and forecasting; to understand the JIT
    production model; to grasp basic production planning and production analysis
    methods.
(2) To become familiar with the general methods of facility planning and layout
    and use them to analyze the production logistics system; to know the
    logistics equipment and the basic process of internal logistics.
(3) To grasp assembly line design and balancing methods and the application of
    line balancing software and tools; to use general standard time methods and
    tools; to understand the important role of assembly line organization and
    management in a manufacturing system.
(4) To master the common tools and statistical software of quality control and
    analysis; to understand the impact of quality fluctuations on production; to
    understand basic quality management knowledge and concepts such as qualified
    rate, sample testing, pass-through rate, rework rate and downgrade
    management.
(5) To understand the organization, design, operation and control systems of a
    manufacturing plant, improving interest in study; to help students grasp the
    core operation processes of manufacturing plants comprehensively and
    systematically.
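Syllabus item (3) can be made concrete with a small worked example: given work-element times and demand, takt time and the theoretical minimum number of stations follow directly. All numbers below are hypothetical classroom values, not taken from the chapter:

```python
import math

# Hypothetical classroom values (not from the chapter)
task_times = [0.75, 1.25, 0.5, 1.0, 0.5]   # work-element times, minutes
available = 400.0                           # productive minutes per shift
demand = 320                                # units required per shift

takt = available / demand                           # takt time = 1.25 min/unit
n_min = math.ceil(sum(task_times) / takt)           # ceil(4.0 / 1.25) = 4 stations
efficiency = sum(task_times) / (n_min * takt)       # line efficiency at this lower bound
```

Any feasible station assignment needs at least `n_min` stations; a balancing heuristic (e.g. largest-candidate rule) then assigns elements to stations without exceeding the takt time.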
The experiment includes background, role design and setting. The experimental
background is a children's toy manufacturing plant with a complete organization,
flexible assembly lines and production facilities; the production mode is the JIT
system. A third-party supplier can provide all the materials in the BOM.
There are 7 roles, played by one teacher and 10 students (with IDs S1 to S10);
the initial role setting is given in Table 165.1.
The comprehensive experiment procedure flow is shown in Fig. 165.1, and the
detailed procedure is given in Table 165.2.
165.6 Conclusion
References
Bartholdi JJ, Hackman ST (2008) Allocating space in a forward picking area of a distribution
center for small parts. IIE Trans 40:1046–1053
Chen Z, Peng Y (2007) Application of FR in production operation and management. China Educ
Guid 14:58–59
Cheng Z (2006) Study on production operation and management course construction and
teaching method for MBA. Educ Mod 9(3):3–8
Feng G (2009) Practical teaching research on the course of production and operation
management. Res Explor Lab 28(1):118–120
Frazelle EH (2002) World-class warehouse and material handling. McGraw Hill, New York
Jian X, Li Z (2008) Storage location assignment in a multi aisle warehouse considering demand
correlations. Comput Integr Manuf Syst 14(12):2447–2451
Jiang Z (2006) Industrial engineering curriculum design guidance. Machinery Industry Press,
Beijing, China
Li C, Chen Y (2008) Teaching reform of the industrial engineering curriculum design. Educ
Innov Guide 1:23–26
Li H, Fang Z, Wang Y (2011) Industrial engineering practice teaching system planning and
construction. China Electr Power Educ 10:57–60
Qi L, Wu S (2010) Industrial engineering theory teaching, laboratory teaching, curriculum design
trinity of design and implementation. China Electr Power Educ 32:112–115
Xiao J, Zheng L (2008) Storage location assignment in a multi aisle warehouse considering
demand correlations. Comput Integr Manuf Syst 14(12):2447–2451
Xu Z (2003) Course design of factory visiting in production operation and management. J Xiamen
Univ (Nat Sci) 42(10):144–147
Zhang X (2006) Industrial engineering experiments and practical tutorial. Machinery Industry
Press, Beijing, China
Zhang YF, Bo L (2004) Application of genetic algorithm in selecting accurate freight site. J Syst
Simul 16(1):168–171
Chapter 166
Improved Grey Forecasting Model
for Taiwan’s Green GDP Accounting
Abstract This paper applies the grey forecasting model to forecast the green GDP
accounting of Taiwan from 2002 to 2010. Green GDP accounting is an effective
economic indicator of environmental and natural resource protection. Generally,
green GDP accounting is defined as traditional GDP minus natural resource
depletion and environmental degradation. This paper modifies the original GM(1,1)
model to improve prediction accuracy in green GDP accounting and also to provide
a valuable reference for government in drafting relevant economic and
environmental policies. The empirical study shows that the mean absolute
percentage error of the RGM(1,1) model is 2.05 %, lower than those of the GM(1,1)
and AGM(1,1) models. The results are very encouraging, as the RGM(1,1) forecasting
model clearly enhances prediction accuracy.
166.1 Introduction
Energy consumption and the threat of global warming have drawn national and
international attention. In 1992, the Commission on Sustainable Development of
the United Nations signed the convention to pursue equilibrium between ecological
preservation and economic development. In 1997, Taiwan's government promulgated
an Article 10 amendment of Taiwan's Constitution to support environmental
protection.
S. Lu
Department of Industrial Management and Enterprise Information,
Aletheia University, Taipei, Taiwan, China
C.-I. Lin S. Tai (&)
Department of Industrial Management, Lunghwa University of
Science and Technology, Taipei, Taiwan, China
e-mail: [email protected]
The degradation from air pollution has increased by 28.51 % since 2002. Through
reduction, recycling and proper disposal of solid waste over the past years, the
degradation from solid waste is much less than that from water and air pollution.
In 2010, NT$ 13.61 trillion of GDP was created, but the high economic growth was
accompanied by negative impacts on the environment: depletion of natural
resources totaled NT$ 18.19 billion, and environmental degradation reached
NT$ 65.47 billion. Consequently, green GDP accounting is NT$ 13.53 trillion, up
by 30.97 % compared with NT$ 10.33 trillion in 2002. The dramatic growth in green
GDP accounting may be attributed to the increasing environmental awareness of the
Taiwanese and policy implementation by the government. Accordingly, one of the
main concerns of this article is to construct a forecasting model for the green
GDP accounting of Taiwan. The proposed model not only reflects the degree of
attention paid to environmental protection but also supports the government in
drafting pertinent policies for Taiwan's environmental issues.
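The 2010 figures quoted above can be checked against the stated definition (green GDP = GDP − depletion − degradation); a quick sketch, with values copied from the text:

```python
# Check the 2010 green GDP figure from the definition given in the text:
# green GDP = GDP - natural resource depletion - environmental degradation
gdp = 13.61e12          # NT$ 13.61 trillion
depletion = 18.19e9     # NT$ 18.19 billion
degradation = 65.47e9   # NT$ 65.47 billion

green_gdp = gdp - depletion - degradation   # approximately NT$ 13.53 trillion
growth = green_gdp / 10.33e12 - 1           # vs. NT$ 10.33 trillion in 2002
```

The computed values agree with the rounded figures in the text (NT$ 13.53 trillion and roughly 31 % growth).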
Time series models are widely used for prediction and for acquiring management
information. A large number of observations is normally required to understand
the pattern and choose a reasonable mathematical model for a time series process.
Unfortunately, often only a small amount of data can be obtained over time, while
we are still interested in forecasting succeeding observations. Neither
statistical methods nor data mining techniques are suitable for such small-sample
problems. Grey system theory, originally developed by Deng (1982), effectively
deals with limited data and uncertain information. Since then, grey system theory
has become popular for problems with incomplete information and has been
successfully applied to various fields such as transportation (Pai et al. 2007),
energy (Hsu and Chen 2003; Akay and Atak 2007), finance (Chang and Tsai 2008;
Huang and Jane 2009; Kayacan et al. 2010), social and economic studies (Shen
et al. 2009), engineering (Li and Yeh 2008) and so on. Following the
above-mentioned articles, grey system theory is utilized in this work to forecast
the green GDP accounting of Taiwan.
166.2 Methodology
The aim of this article is to construct a green GDP accounting forecasting model
based on grey system theory. Unlike statistical methods, this theory mainly
processes the original data by the accumulated generating operation (AGO) and
tries to find its internal regularity. Deng (1986) proved that the original data
must be taken at consecutive time periods and can be as few as four points. The
grey forecasting model (GM) is the core of grey system theory, and GM(1,1) is the
most frequently used grey forecasting model. The GM(1,1) model construction
process is as follows:
1578 S. Lu et al.
$$\frac{dx^{(1)}(t)}{dt} + a\,x^{(1)}(t) = b \qquad (166.4)$$
where a is the developing coefficient and b is the grey input.
Step 5: Solve Eq. (166.4) using the least squares method; the forecast
values are obtained as
$$\begin{cases} \hat{x}^{(1)}(k) = \left(x^{(0)}(1) - \dfrac{b}{a}\right)e^{-a(k-1)} + \dfrac{b}{a} \\[4pt] \hat{x}^{(0)}(k) = \hat{x}^{(1)}(k) - \hat{x}^{(1)}(k-1) \end{cases} \qquad (166.5)$$
where
$$e^{(0)}(k) = x^{(0)}(k) - \hat{x}^{(0)}(k), \quad k = 2, 3, \ldots, n. \qquad (166.10)$$
Executing Steps 1–5, an RGM(1,1) forecasting model can be established, and
the forecast residuals $\hat{e}^{(0)}(k)$ are:
$$\hat{e}^{(0)}(k) = \left(e^{(0)}(2) - \frac{b_e}{a_e}\right)\left(1 - e^{a_e}\right)e^{-a_e(k-2)}, \quad k = 3, 4, \ldots, n \qquad (166.11)$$
Residual modification of the GM(1,1) model can thus improve the
predictive accuracy of the original GM(1,1) model.
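The GM(1,1) steps above (AGO, background values, least-squares estimation of a and b, and the time-response function of Eq. (166.5)) can be sketched in pure Python. This is an illustrative sketch, not code from the chapter; the function name and the closed-form least-squares solution are the author's, er, this editor's assumptions:

```python
import math

def gm11(x0):
    """Fit a GM(1,1) model to a short series x0 (at least four points) and
    return the restored forecasts x_hat(0)(k) for k = 1..n, plus a and b."""
    n = len(x0)
    x1, acc = [], 0.0
    for v in x0:                       # Step 1: AGO sequence x(1)
        acc += v
        x1.append(acc)
    z = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, n)]  # background values (alpha = 0.5)
    y = x0[1:]                         # grey equation: x(0)(k) + a z(1)(k) = b
    mz, my = sum(z) / len(z), sum(y) / len(y)
    slope = (sum((zi - mz) * (yi - my) for zi, yi in zip(z, y))
             / sum((zi - mz) ** 2 for zi in z))
    a, b = -slope, my - slope * mz     # least-squares estimates of a and b
    # time-response function of Eq. (166.5), then inverse AGO
    x1_hat = [(x0[0] - b / a) * math.exp(-a * k) + b / a for k in range(n)]
    x0_hat = [x0[0]] + [x1_hat[k] - x1_hat[k - 1] for k in range(1, n)]
    return x0_hat, a, b
```

Applied to the green GDP series of Table 166.1 (in NT$ 0.1 trillion), this reproduces forecasts close to the GM(1,1) column of Table 166.2.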
Li and Yeh (2008) proposed the trend and potency tracking method (TPTM) to
extract concealed information, and then constructed a triangular trend and potency
(TP) function with an asymmetrical domain range. The TP values of the existing data
are determined by the ratio rule of a triangle and represent each datum's
intensity of closeness to the central location. The detailed procedure for computing TP
values is described in Li and Yeh (2008).
Moreover, the background value is the most important factor affecting the
model's adaptability and precision. Many researchers regard each datum as
having equal importance and set $\alpha = 0.5$ in Eq. (166.3) to compute the background
value. However, Li et al. (2009) discussed the influence of $\alpha$ and rewrote
Eq. (166.3) as
$$z^{(1)}(k) = x^{(1)}(k-1) + \alpha\, x^{(0)}(k), \quad \alpha \in (0, 1), \; k = 2, 3, \ldots, n.$$
Clearly, the influence of $\alpha$ on the background value comes mainly through the
newest datum. Therefore, the adaptive GM(1,1), known as AGM(1,1), was presented
by Li et al. (2009) and is described as follows:
Steps 1–2 are the same as in the original GM(1,1).
Step 3: Calculate the TP values by TPTM:
$$\{TP_i\} = \{TP_1, TP_2, \ldots, TP_n\}, \quad i = 1, 2, \ldots, n \qquad (166.12)$$
Step 4: $\alpha_k$ is computed by
$$\alpha_k = \frac{\sum_{i=1}^{k} 2^{\,i-1}\, TP_i}{\sum_{i=1}^{k} 2^{\,i-1}}, \quad k = 2, 3, \ldots, n \qquad (166.13)$$
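Equation (166.13) weights newer observations with powers of two, so the latest datum dominates the background coefficient. A pure-Python sketch (assuming, as the TPTM construction implies, TP values already normalized into (0, 1)):

```python
def adaptive_alpha(tp):
    """Eq. (166.13): background coefficient alpha_k from TP values TP_1..TP_n,
    giving the most recent datum the largest weight 2**(k-1)."""
    alphas = {}
    for k in range(2, len(tp) + 1):
        weights = [2 ** (i - 1) for i in range(1, k + 1)]
        alphas[k] = sum(w * t for w, t in zip(weights, tp[:k])) / sum(weights)
    return alphas
```

For constant TP values the coefficient stays at that constant; otherwise it drifts toward the newest TP value as k grows.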
To demonstrate the precision and stability of the grey forecasting methods, the relevant
green GDP accounting data provided by the DGBAS are examined in this study. The
historical annual data of original GDP accounting, natural resources depletion,
environmental degradation and green GDP accounting from 2002 to 2010 are
presented in Table 166.1.
Table 166.1 Values of the relevant green GDP accounting from 2002 to 2010 (NT$ Billion)

Years   GDP accounting   Natural resources depletion   Environmental degradation   Green GDP accounting
2002    10411.63         20.70                         60.66                       10330.27
2003    10696.25         20.29                         57.59                       10618.37
2004    11365.29         21.07                         67.14                       11277.08
2005    11740.27         19.55                         66.64                       11654.08
2006    12243.47         18.58                         66.14                       12158.75
2007    12910.51         18.58                         67.23                       12824.70
2008    12620.15         18.07                         65.39                       12536.68
2009    12481.09         17.60                         63.20                       12400.28
2010    13614.22         18.19                         63.40                       13532.62
166 Improved Grey Forecasting Model 1581
The residual data sequence is built by Eq. (166.10). Repeating Eqs. (166.2)–(166.4)
to estimate the parameters a and b of the RGM(1,1) model gives $a = 0.128$,
$b = 5.374$. The RGM(1,1) forecasting model is as follows:
$$\hat{e}^{(0)}(k) = \left(e^{(0)}(2) - \frac{5.374}{0.128}\right)\left(1 - e^{0.128}\right)e^{-0.128(k-2)}, \quad k = 3, 4, \ldots, n$$
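The residual-corrected forecast is then the GM(1,1) forecast plus ê(0)(k). A sketch with the fitted values above; note that the chapter does not state e(0)(2) explicitly, so the value −0.24 used below is inferred from the 2003 row of Table 166.2 (106.18 − 106.42):

```python
import math

def rgm_correct(gm_fv, e2, a_e=0.128, b_e=5.374):
    """Add the residual-model forecast e_hat(0)(k) to the GM(1,1)
    forecasts gm_fv (a dict keyed by year index k >= 3)."""
    corrected = {}
    for k, fv in gm_fv.items():
        e_hat = (e2 - b_e / a_e) * (1 - math.exp(a_e)) * math.exp(-a_e * (k - 2))
        corrected[k] = fv + e_hat
    return corrected
```

With gm_fv = {3: 109.50, 4: 112.67} (the 2004–2005 GM(1,1) values of Table 166.2) this reproduces the RGM(1,1) column (114.57, 117.13) to within rounding.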
166.3.2 Results
The predicted results obtained by the original GM(1,1), residual GM(1,1) and
adaptive GM(1,1) models are presented in Table 166.2 and Fig. 166.1. To measure
forecasting performance, the mean absolute percentage error (MAPE) is used to
evaluate these models. The results indicate that the RGM(1,1) has the smallest
MAPE (2.05 %) compared with the original GM(1,1) and AGM(1,1) (3.25 and
2.32 %, respectively). Therefore, the RGM(1,1) model not only reduces the forecasting
error effectively but also enhances the precision of a grey forecasting model.
However, the absolute percentage errors (APE) of the GM(1,1), RGM(1,1) and
AGM(1,1) models in 2007 are 6.99, 4.29 and 5.14 %, respectively, each higher
than the corresponding MAPE.
Table 166.2 Forecasting values and errors of green GDP accounting (NT$ 0.1 trillion)

Years    AV^a      GM(1,1)             RGM(1,1)            AGM(1,1)
                   FV^b    Error (%)^c  FV      Error (%)   FV      Error (%)
2002     103.30    -       -            -       -           -       -
2003     106.18    106.42  0.23         -       -           108.94  2.60
2004     112.77    109.50  2.90         114.57  1.60        111.99  0.69
2005     116.54    112.67  3.32         117.13  0.51        115.12  1.22
2006     121.58    115.92  4.65         119.86  1.42        118.34  2.66
2007     128.24    119.28  6.99         122.74  4.29        121.65  5.14
2008     125.36    122.72  2.10         125.77  0.33        125.05  0.24
2009     124.00    126.27  1.83         128.95  4.00        128.55  3.67
2010     135.32    129.92  3.99         132.29  2.24        132.15  2.34
MAPE^d             3.25                 2.05                2.32

^a AV actual value
^b FV forecasting value
^c Error = $|FV_k - AV_k| / AV_k$
^d MAPE = $\frac{1}{n}\sum_{k=1}^{n} |FV_k - AV_k| / AV_k$
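As a check, the MAPE figures in the last row can be recomputed directly from the table's actual and forecast values using the formula in footnote d:

```python
actual = {2003: 106.18, 2004: 112.77, 2005: 116.54, 2006: 121.58,
          2007: 128.24, 2008: 125.36, 2009: 124.00, 2010: 135.32}

forecasts = {
    "GM(1,1)":  {2003: 106.42, 2004: 109.50, 2005: 112.67, 2006: 115.92,
                 2007: 119.28, 2008: 122.72, 2009: 126.27, 2010: 129.92},
    "RGM(1,1)": {2004: 114.57, 2005: 117.13, 2006: 119.86, 2007: 122.74,
                 2008: 125.77, 2009: 128.95, 2010: 132.29},
    "AGM(1,1)": {2003: 108.94, 2004: 111.99, 2005: 115.12, 2006: 118.34,
                 2007: 121.65, 2008: 125.05, 2009: 128.55, 2010: 132.15},
}

def mape(fv):
    """Footnote d: mean of |FV_k - AV_k| / AV_k over the forecast years, in %."""
    errs = [abs(v - actual[y]) / actual[y] for y, v in fv.items()]
    return 100 * sum(errs) / len(errs)
```

Evaluating mape on the three forecast columns gives approximately 3.25, 2.05 and 2.32, matching the table (note the RGM(1,1) average is taken over its seven forecast years only).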
[Fig. 166.1 Actual values and forecasting curves of the models, 2002–2010; labeled curves: GM(1,1), RGM(1,1); vertical axis in NT$ 0.1 trillion]
166.4 Conclusions
Numerous forecasting methods have been widely used, including time series
analysis, regression analysis and artificial neural networks. They need a large
amount of data to construct a proper forecasting model. Within a product's life cycle,
however, the data collected are limited, and adopting traditional forecasting
methods to build a model from a few uncertain, insufficient data points is
unsuitable. Therefore, in order to obtain a highly accurate forecasting model with
limited data, Deng (1986) first presented the grey forecasting model, derived from grey theory,
to overcome the small-sample problem. Accordingly, the goal of this paper is
to forecast the green GDP accounting of Taiwan with the original GM(1,1) model and
to compare it with the residual GM(1,1) and adaptive GM(1,1) models.
To measure the performance of the GM(1,1), RGM(1,1) and AGM(1,1) models,
the criterion of MAPE is adopted. Empirical results indicate that the RGM(1,1)
forecasting model has the lowest MAPE, 2.05 %, among the three models. That is,
the RGM(1,1) forecasting model has high predictive validity for forecasting the green
GDP accounting of Taiwan. The findings serve as a basis for government decision
making to make Taiwan green both economically and environmentally.
The results are very encouraging, as they show that green GDP accounting,
which represents human welfare, has been increasing during the last decade. More
importantly, natural resources depletion and environmental degradation are debit entries in
green GDP accounting, representing the negative environmental impacts arising from
the economic development achieved. Therefore, in order to pursue high human
welfare and sustainable development of the ecosystem, the Taiwan government and
the Taiwanese people must cooperate to execute pertinent environmental policies.
References
Akay D, Atak M (2007) Grey prediction with rolling mechanism for electricity demand
forecasting of Turkey. Energy 32:1670–1675
Chang BR, Tsai HF (2008) Forecast approach using neural network adaptation to support vector
regression grey model and generalized auto-regressive conditional heteroscedasticity. Expert
Syst Appl 34:925–934
Deng JL (1982) Grey system fundamental method. Huazhong University of Science and
Technology, Wuhan
Deng JL (1986) Grey prediction and decision. Huazhong University of Science and Technology,
Wuhan
Heal G (2007) Environmental accounting for ecosystems. Ecol Econ 6:693–694
Hsu CC, Chen CY (2003) Applications of improved grey prediction model for power demand
forecasting. Energy Convers Manage 44:2241–2249
Huang KY, Jane CJ (2009) A hybrid model for stock market forecasting and portfolio selection
based on ARX, grey system and RS theories. Expert Syst Appl 36:5387–5392
Kayacan E, Ulutas B, Kaynak O (2010) Grey system theory-based models in time series
prediction. Expert Syst Appl 37:1784–1789
Li DC, Yeh CW (2008) A non-parametric learning algorithm for small manufacturing data sets.
Expert Syst Appl 34:391–398
Li DC, Yeh CW, Chang CJ (2009) An improved grey-based approach for early manufacturing
data forecasting. Comput Ind Eng 57:1161–1167
Pai TY, Hanaki K, Ho HH, Hsieh CM (2007) Using grey system theory to evaluate transportation
effects on air quality trends in Japan. Transp Res Part D 12:158–166
Shen VRL, Chung YF, Chen TS (2009) A novel application of grey system theory to
information security (part I). Comput Stand Interfaces 31:277–281
Xu L, Yu B, Yue W (2010) A method of green GDP accounting based on eco-service and a case
study of Wuyishan, China. Procedia Environ Sci 2:1865–1872
Yue WC, Xu LY (2008) Study on the accounting methods of Green GDP based on ecosystem
services. Ecol Econ 9:50–53
Yue WC, Xu LY, Zhao X (2009) Research of Green GDP accounting in Wuyishan City based on
ecosystem services. Ecol Econ 2:11–12
Chapter 167
Simulation Research of the Fuzzy Torque
Control for Hybrid Electrical Vehicle
Based on ADVISOR
The simulation model of the super-mild hybrid vehicle is established in the
simulation software ADVISOR. This model is shown in Fig. 167.1.
Both backward and forward simulation can be used in ADVISOR.
The backward simulation calculates the required engine and motor output power. The forward
simulation runs after the backward simulation; it takes the engine and
motor power obtained in the backward pass and propagates it in the opposite direction
along the driveline. The actual vehicle speed is calculated by the forward simulation (Wipke et al.
1999). Every module in the vehicle simulation model contains a Simulink simulation
module, and the parameters can be modified in the corresponding M-file for data input (Zeng
et al. 2004).
The parameters in the M-file of the vehicle module are modified according to the
parameters of the entire vehicle, for example:
veh_gravity = 9.81; % m/s^2
The super-mild hybrid electric vehicle has no pure electric operating mode.
The motor is used only for idle start-stop, power compensation and regenerative
braking, so the motor torque control affects the vehicle's performance
(Zhang et al. 2010; Deng et al. 2004; Fan and Wu 2004; Liang and Wang 2001).
The output torque of the engine can be divided into two parts: one part drives
the vehicle, and the other drives the generator to charge the battery.
The input/output torque of the motor balances the engine torque against the vehicle's
required torque, which keeps the engine operating point on
the economic curve (Schouten et al. 2002; Lee and Sul 1998). When the engine
output torque is lower than the vehicle's required torque, the motor makes up the
difference. When the engine output torque is higher than the required
torque, or the vehicle is in deceleration braking, the surplus engine output
torque or the recovered deceleration energy drives the generator to charge the
battery (Schoutena et al. 2002, 2003; Kheir et al. 2004; Poursamad and
Montazeri 2008; Baumann et al. 2000). The fuzzy controller not only improves the
vehicle's fuel economy but also keeps the battery SOC in the high-efficiency range.
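The torque-split logic just described can be caricatured as a crisp rule. The chapter itself uses a fuzzy controller, and the SOC thresholds below are purely illustrative assumptions, not values from the paper:

```python
def motor_torque(t_required, t_engine, soc, soc_low=0.6, soc_high=0.8):
    """Return a motor torque command: positive = assist, negative = charge.
    The gap Tc between the required torque and the engine torque on its
    economic curve decides the motor's role; SOC limits gate the action."""
    tc = t_required - t_engine
    if tc > 0:                       # engine below demand: motor makes up the difference
        return tc if soc > soc_low else 0.0
    # surplus engine torque or braking: drive the generator to charge the battery
    return tc if soc < soc_high else 0.0
```

A fuzzy controller would replace the hard thresholds with membership functions over Tc and SOC, blending assist and charge commands smoothly.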
Take the difference (Tc) between the vehicle required torque and the engine
output torque, together with the battery SOC value, as the input variables. Take the torque
167 Simulation Research of the Fuzzy Torque 1587
167.4 Conclusion
The ADVISOR simulation model and the fuzzy logic torque controller have been
established. The fuzzy logic torque controller is realized through the clutch and
torque control module. The fuzzy torque control strategy distributes the operating
ranges of the motor and the engine more effectively. The fuel economy of the
vehicle has been improved and the emissions have been lowered.
Acknowledgments This work was supported by the Natural Science Foundation
of Tianjin (09JCYBJC04800).
References
Baumann BM, Washington G, Glenn BC et al (2000) Mechatronic design and control of hybrid
electric vehicles. IEEE/ASME Trans Mechatron 5(1):58–72
Deng Y, Wang Z, Gao H (2004) Modeling and simulation of hybrid drive system on the Toyota
PRIUS based on bondgraph. J Wuhan Univ Technol 2004(4):50–55
Fan J, Wu T (2004) Simulating study of the control strategy for Honda insight. J Guangxi Univ
Technol 2:18–20
Kheir NA, Salman MA, Schouten NJ (2004) Emissions and fuel economy trade-off for hybrid
vehicles using fuzzy logic. Math Comput Simul 66:155–172
Lee H-D, Sul S-K (1998) Fuzzy-logic-based torque control strategy for parallel-type hybrid
electric vehicle. IEEE Trans Ind Electron 45(4):625–632
Liang C, Wang Q (2001) Energy management strategy and parametric design for hybrid electric
family Sedan. SAE paper: 2001-01
Poursamad A, Montazeri M (2008) Design of genetic-fuzzy control strategy for parallel hybrid
electric vehicles. Control Eng Pract 16:861–873
Schouten NJ, Salman MA, Kheir NA (2002) Fuzzy logic control for parallel hybrid vehicles.
IEEE Trans Control Syst Technol 10(3):460–468
Schoutena NJ, Salman MA, Kheira NA (2003) Energy management strategies for parallel hybrid
vehicles using fuzzy logic. Control Eng Pract 11:171–177
Wipke KB, Cuddy MR, Burch SD (1999) ADVISOR user-friendly advanced powertrain
simulation using a combined backward/forward approach. IEEE Trans Veh Technol Spec
Issue Hybrid Electr Veh 1999(5):1–10
Zeng X, Wang Q, Li J (2004) The development of HEV control strategy module based on
ADVISOR2002 software. Automot Eng 26:394–396
Zhang Y, Zhou M, Wang X, Lu X, Yuan B (2010) A study on the control system of regenerative
braking for HEV. J Nat Sci Heilongjiang Univ 27:551–556
Chapter 168
Vulnerability Analysis and Assessment
System of Natural Disaster
Abstract With regard to the overall vulnerability of the complex natural disaster
system, the correlations among disaster-inducing factors, the disaster environment and
disaster-bearing objects were analyzed, the formation of natural disasters was simulated,
and the vulnerability mechanism of natural disasters was studied using the disaster-inducing-factor
vulnerability chain and vulnerability curves. An assessment and decision-making
model of natural disaster vulnerability was built. By constructing
three index systems, natural disaster vulnerability was assessed through disaster risk
degree, vulnerability of disaster-bearing objects and risk loss degree.
168.1 Introduction
J. Shen J. Huang T. Li
College of Management and Economics, Tianjin University, Tianjin,
People’s Republic of China
M. Xu (&)
TEDA College, Nankai University, Tianjin, People’s Republic of China
e-mail: [email protected]
Global disaster research programs have had a major impact on disaster risk assessment
index systems; an example is the Disaster Risk Index (DRI) scheme, the world's first global-scale
human vulnerability assessment index system with spatial resolution down to the
country level. Domestic scholars have focused on single disasters with index-based
risk assessment systems.
(1) Natural disaster risk assessment. The Disaster Risk Hotspots Plan by
Columbia University and ProVention Union established three risk assessment
indexes and disaster risk maps of hazard-prone areas (Arnold et al. 2006).
European Spatial Planning Observation Network elaborated multi-risk
assessment index methods for the potential risk of a particular area (Greiving
2006). The U.S. Federal Emergency Management Agency and National
Institute of Building Sciences developed the HAZUS model, a standardized
national multi-hazard loss estimation method.
(2) Vulnerability assessment. Vulnerability analysis methods fall mainly into two
categories: the index system and the vulnerability curve. The index system method,
168 Vulnerability Analysis and Assessment System of Natural Disaster 1595
[Figure: framework of natural disaster vulnerability assessment. Disaster-inducing factor identification (element abnormality type, occurrence time, anomalous amplitude, duration) feeds disaster intensity (Gij) and probability (Pij) assessment, giving the risk degree H; disaster-bearing objects (people, property, ecology) are assessed for exposure (Ve), sensitivity (Vs) and disaster response capacity (Vd1, Vd2) to give vulnerability V; comprehensive judgment via Bayesian assessment, fuzzy set theory and classification statistics yields classified risk loss Rij and comprehensive risk loss R, each mapped to its geographical distribution.]
(1) Disaster intensity (G) assessment. G was determined by the degree of variability
of natural factors (such as earthquake magnitude, wind force, or temperature
or precipitation anomaly) or by attribute indexes of a natural disaster's
influence (such as seismic intensity or flood intensity).
(2) Disaster probability (P) assessment.
P was determined by the number of occurrences of natural disasters of that intensity in
a certain period, represented as a probability or frequency (Fig. 168.2).
Considering previous disasters and future trends, and drawing on the social, economic
and disaster statistics systems, the vulnerability of disaster-bearing objects was
assessed from physical exposure, sensitivity to disasters, and socio-economic and
cultural disaster response capacity, in line with national, regional and community
development strategies and mitigation decision-making principles.
(1) Physical exposure (Ve) assessment. Ve indexes were divided into quantitative
and value types based on the specific types and characteristics of disaster-bearing
objects. The assessment process was:
Step 1: fix the minimum assessment unit.
Step 2: determine the number of disaster-bearing objects in each minimum
assessment unit.
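Steps 1–2 amount to tallying disaster-bearing objects per minimum assessment unit. A minimal sketch; the record layout and unit identifiers are hypothetical, and a value-type index would sum monetary values instead of counting:

```python
from collections import Counter

def exposure_counts(records):
    """Quantitative-type Ve: count disaster-bearing objects in each minimum
    assessment unit. records: iterable of (unit_id, object_type) pairs."""
    return dict(Counter(unit for unit, _ in records))
```

The resulting per-unit counts would then feed the exposure layer of the vulnerability assessment.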
1598 J. Shen et al.
(1) Risk loss assessment methods for a single disaster. Using analogy with
historical scenarios, physical simulation and experimental methods, and expert
scoring, disaster-bearing objects were classified into population, economic
property and ecological systems in order to carry out population risk assessment,
property loss risk assessment and ecosystem loss degree assessment under a
specific disaster.
(2) Risk loss assessment methods for multiple disasters. On the basis of single-disaster
risk loss assessment, and considering regional development and residents'
personal and property security, different natural disasters with different
power sources and characteristics were set within one regional system. The assessment
was divided into two levels. The first level was an independent multi-disaster
risk loss assessment of the three types of disaster-bearing objects,
based on risk loss and grade assessment. The second level was an integrated
assessment across the three types of disaster-bearing objects, based on the
integration of the risk assessments.
168.5 Conclusion
References
Arnold M, Chen RS, Deichmann U (2006) Natural disaster hotspots case studies. Hazard
Management Unit, World Bank, Washington DC, pp 1–181
Blaikie P, Cannon T, Davis I (1994) At risk: natural hazards, people's vulnerability and disasters. Routledge,
London, pp 189–19
Cutter SL (1996) Vulnerability to environmental hazards. Prog Hum Geogr 20:529–539
Gissing A, Blong R (2004) Accounting for variability in commercial flood damage estimation.
Aust Geogr 35(2):209–222
Greiving S (2006) Multi-risk assessment of Europe’s region. In: Birkmann J (ed) Measuring
vulnerability to hazards of national origin. UNU Press, Tokyo
Janssen M (2005) Scholarly network on resilience, vulnerability and adaptation with the human
dimensions of global environmental change. In: Hesse A et al (eds) Conference book for the
open meeting of the human dimensions of Global Environmental Change Research
Community, Bonn, Germany, October 2005. IHDP, pp 75–76
Shi Y, Xu S, Shi C, Sun A, Wang J (2009) A review on development of vulnerability assessment
of floods. Prog Geogr 28(1):41–46
Chapter 169
Application of Actuarial Model
in the Forecast of Maternity Insurance
Fund’s Revenue and Expenditure:
A Case Study of Tianjin
Abstract To explore how to build up China's urban and rural maternity
insurance system, the crucial point is to realize the mutual-aid function of
the maternity insurance fund and ensure its sustainable use. Guided by the
principles and methods of demography and actuarial science, this paper forecasts
the number of people insured by Tianjin employees' maternity insurance
and urban-rural maternity insurance, together with the fund's revenue and expenditure,
and draws conclusions that provide scientific references for the
collection of Tianjin's unified urban-rural maternity insurance fund and the
formulation of related payment standards.
Keywords Balance of urban and rural · Maternity insurance fund · Forecast of
revenue and expenditure · Actuarial science
169.1 Introduction
Basically speaking, China's maternity insurance system covers only employees;
a large number of rural women and urban non-working women fall outside its
coverage, which is contrary to social justice. Therefore, in order to promote the
optimization and development of the social security system, it is necessary to explore
L. Fu (&) J. Fan
Management and Economic Department, Public Resource Management Research Center,
Tianjin University, Tianjin, China
e-mail: [email protected]
J. Liu
Tianjin Health Insurance Research Association, Tianjin, China
X. Chu
Tianjin Municipal Human Resources and Social Security Bureau, Tianjin, China
and build up a unified urban-rural maternity insurance system. The establishment
of such a unified system means the maternity insurance will cover not only
employed women but also unemployed women, and in turn the maternity insurance
fund needs to perform its mutual-aid function. However, the revenue of
the Tianjin employees' maternity insurance fund has been much greater than its expenditure
since 2005. Owing to the large accumulated balance and its
gradual annual increase, the efficiency of the maternity insurance fund is quite low and
its function and effect have not been fully realized. Consequently, the analysis
and forecast of the Tianjin maternity insurance fund's revenue and expenditure will
help digest the excessive surplus, benefiting not only the rational
allocation of maternity security resources between different groups, but also
fairness between groups and the harmonious, stable development
of our society.
$$Int = R(0) + \sum_{t=1}^{T}\Big(CR(t)\sum_{x=15}^{65} E_{x,t}\, WA_{x,t}\, j\Big)V^t \qquad (169.6)$$
In this model, $R(0)$ is the fund surplus in the base period, $CR(t)$ is the collection
rate in year $t$, $E_{x,t}$ is the average number of employees aged $x$ in year $t$,
$WA_{x,t}$ is the average contribution wage of employees aged $x$ in year $t$, $j$ is the
rate of collection and payment, and $V^t$ is the discount factor.
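Equation (169.6) can be sketched directly; the nested-dict layout and the toy numbers below are assumptions for illustration, not data from the paper:

```python
def fund_revenue(R0, CR, E, WA, j, v):
    """Eq. (169.6): base-period surplus plus discounted contributions.
    CR[t]: collection rate in year t; E[t][x], WA[t][x]: average number and
    average contribution wage of employees aged x in year t; j: rate of
    collection and payment; v: one-year discount factor (so V^t = v**t)."""
    total = R0
    for t, cr in CR.items():
        year_sum = sum(E[t][x] * WA[t][x] for x in E[t]) * j
        total += cr * year_sum * v ** t
    return total
```

For a single year and a single age group, fund_revenue(10.0, {1: 1.0}, {1: {30: 100}}, {1: {30: 2.0}}, 0.005, 1.0) adds 100 × 2.0 × 0.005 = 1.0 to the base surplus of 10.0.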
1604 L. Fu et al.
$$Ex = \sum_{t=1}^{T}\Big(\sum_{x=20}^{49} Lf_{t,x}\, f_{t,x}\, l_{0,x} + \sum_{x=20}^{70} Lf_{t,x}\, j_{t,x}\, o_{0,x}\Big)V^t \qquad (169.7)$$
where $l_{0,x} = TPa1_{0,x} / (Lf_{0,x}\, f_{0,x})$ is the average birth medical expense and allowance (quota)
of birthing women aged $x$ in the base year; $Lf_{0,x}$ is the number of insured women aged $x$ in the base year; $f_{0,x}$
is the birth rate of insured women aged $x$ in the base year; $TPa1_{0,x}$ is the expenditure
on insured female employees aged $x$ in the base year; $j_{t,x} = T_{t,x} / Lf_{t,x}$ is the family
planning level of women aged $x$ in year $t$; $T_{t,x}$ is the total number of women
aged $x$ who are under birth control; $Lf_{t,x}$ is the number of insured women aged $x$ in
year $t$; $o_{0,x} = TPa2_{0,x} / (Lf_{0,x}\, j_{0,x})$ is the average family planning expenditure (quota) for
women aged $x$ in the base year; $j_{0,x}$ is the family planning rate of insured women aged
$x$ in the base year; $TPa2_{0,x}$ is the family planning expenditure on insured female
employees aged $x$ in the base year.
In 2010, 873,800 women of childbearing age were covered as birth-insured
employees, and 1.2613 million were medically insured urban and rural residents,
for a total of 2.1351 million; over 446,500 women of
childbearing age were not insured (Tianjin Bureau of Statistics 2006–2011).
(2) The quantity of births
In 2010, 70,300 births complied with the Tianjin family planning
policy, among which 33,300 were urban residents, 37,000 rural residents,
and 16,100 from other places (Tianjin Statistics Information 2010). In
recent years, the family planning compliance rate has been between 98.28 % and 99.21 % across the
whole city.
(3) The quantity of birth insured people
urban and rural residents’ medical insurance, total 48,800. There are over 21,500
childbearing women who are not maternity insured (Tianjin Municipal Birth
Insurance System Documentation 2006–2011).
Based on the basic data in the Tianjin birth insurance database, and using the
Tianjin Statistical Yearbook (Tianjin Bureau of Statistics 2006), the fifth population
census data of Tianjin (Tianjin Statistics Bureau 2001) and the China life
insurance industry experience life table (National Bureau of Statistics of China
Payment and Employment 2009–2011), the medical insurance fund actuarial
analysis model MIFA12 was applied: the five-year values of the population
growth rate (including birth rate, mortality and net migration rate), the
insured-population growth rate, wage growth, the total fertility rate, the family
planning level and other important short-term parameters were forecast, and an
actuarial forecast and analysis of the birth insurance fund's operation was run for
2011 to 2015. The assumptions are intended to anticipate reality and take a
neutral level. The model parameter adjustment factors mainly use a stepwise
recursion method and an advance correction method (Song 2009).
Table 169.1 shows the revenue side of the maternity insurance fund: from
2011 to 2015 total revenue increases by $572 million, or nearly 82.07 %,
with an average annual growth rate of 10.78 %; employee maternity insurance
revenue grows 11.14 % per year and urban-rural maternity insurance revenue
0.62 % per year. In 2015, employee maternity insurance revenue accounts for 97.79 %
of the total and urban-rural maternity insurance income for 2.21 %.
On the expenditure side of the maternity insurance fund, the total outlay increases
on average 9.94 % per year from 2011 to 2015; employee maternity insurance
fund expenditure increases on average 11.06 % per year and urban-rural maternity
insurance fund expenditure 2.37 % per year. The total fund outlay in 2015 is 0.98 billion
dollars, of which employee maternity insurance accounts for 90.73 % and urban-rural maternity
insurance for 9.27 %. The receipts, expenditures, growth rates and shares of
the employee maternity insurance fund exceed those of the urban-rural maternity
insurance fund to a large extent.
Because the calculation, and thus the corresponding output, is on an accrual
basis, the fund income and expenditure do not correspond exactly to the current
cash figures under the accounting and statistical reporting system. The predictions
should be reflected in the current-mode accounting statements after a lag of 1–2 years.
Table 169.3 The childbirth grant expenditure of urban and rural medical insurance in
2011–2015

Years   Subtotal   Prenatal examination   Childbirth medical care   Family planning   Newborn care
2011    0.80       0.12                   0.54                      0.07              0.07
2012    0.82       0.13                   0.54                      0.06              0.08
2013    0.86       0.14                   0.57                      0.07              0.09
2014    0.89       0.14                   0.59                      0.07              0.09
2015    0.92       0.15                   0.61                      0.07              0.09

Unit 0.1 billion Yuan
According to Table 169.3, the expenditure of urban and rural childbirth
insurance increases by an annual average of 2.37 % over 2011–2015. By 2015 this
expenditure will reach 92 million Yuan, of which prenatal examination accounts
for 16.30 % (15 million), childbirth medical care for 66.30 % (61 million), family
planning for 7.62 % (7 million) and newborn care for 9.78 % (9 million).
169.5 Conclusion
(1) The collection rate and the fund balance in this paper are determined based
on the prediction of fund income. From 2011 to 2015, in absolute
terms, the current balance and the accumulated balance increase continuously,
so sustainable use can be realized. In relative terms, maternity
insurance fund income grows 10.78 % per year and total
spending 9.94 % per year; after overall pooling, the maternity
insurance fund basically complies with the principle of "determining collection
by payment, keeping revenue and expenditure in basic balance". Under this
principle, the collection and use of the fund should be scientifically investigated
and measured and a reasonable collection proportion determined; in principle it
should be controlled between 0.6 % and 0.7 %, which can steadily absorb the
fund balance.
(2) The collection, payment and growth rate of the employee maternity insurance
fund, and its share of fund income, are far greater than those of the urban-rural
maternity insurance fund. The employee fund's income growth rate is flat, while
the urban-rural fund's spending grows faster than its income; after overall
pooling, the urban-rural insurance fund's expenditure can be paid from the
pooled fund as agreed, reflecting the fund's mutual-aid function.
(3) During 2011–2015, the workers' childbirth insurance payments grow by an average
of 13.8 % per year, and the proportion of the birth grant is relatively large; the urban
and rural childbirth insurance payments grow by an average annual rate of
References
Chi B, Chen Z-j, Liu X-p (2009) Birth insurance policy review and audit of treatment to pay
concerned. Tianjin Social Insurance, Tianjin, pp 43–47 (Chinese)
Fang J-q, Sun Z-q (2008) The health statistics. People’s Medical Publishing House, Beijing
(Chinese)
National Bureau of Statistics of China Payment and Employment (2009–2011) Labor wage and
employment China monthly economic indicators (Chinese)
Social Insurance Act of China http://www.china.com.cn/policy/txt/2010-10/29/content_
21225907.htm
Song S-b (2009) China’s medical security system debt risk assessment and sustainability.
Economics & Management Publishing House, Beijing (Chinese)
Statistics Bulletin of the National Economic and Social Development of Tianjin Municipal in
2010. Tianjin Statistics Information (Chinese) http://www.stats-tj.gov.cn/Article/ShowClass.
asp?ClassID=44
Tianjin Bureau of Statistics (2006–2011) Tianjin statistical yearbook. China Statistics Publishing
House, Beijing (Chinese)
Tianjin Municipal Birth Insurance System Documentation (2006–2011) Tianjin Municipal
Human Resources and Social Security Bureau (Chinese)
Tianjin Statistics Bureau (2001) The Fifth Population Census Data of Tianjin
Chapter 170
Study on Process Reengineering
of Medium and Small Coal Machine
Manufacture Enterprises
Abstract Based on the theory and methods of process reengineering, this paper
implements process reengineering at JY Company, a coal machine manufacturing
enterprise. On the basis of analysis, diagnosis and optimization of the
existing processes, the process system and organizational structure were reengineered
and the related management systems were established. This shows that BPR is an
important way for medium and small coal machine manufacturing enterprises to
standardize enterprise management, enhance organizational and coordination flexibility,
and promote enterprise competitiveness.
Keywords Medium and small enterprises Coal machine manufacture enterprise
Process reengineering
170.1 Introduction
Since the 1980s, with the rapid development of the world economy and technology,
the uncertainty of the enterprise survival environment has been increasing, and the
competition enterprises face has become increasingly fierce, reflected
mainly in competition over variety, quality, price, time and
service. Only those with advantages in these five respects can survive and
develop. Enterprises have adopted many advanced management methods and manufacturing
technologies, and the comprehensive use of these technologies and methods
has indeed improved and enhanced enterprises' competitiveness (Yue 2005).
Among them, however, process reengineering is the most effective method for
improving enterprise competitiveness from a strategic point of view.
J. An Z. Zhang (&)
School of Management, China University of Mining
and Technology, Beijing, China
e-mail: [email protected]
One approach requires us to fundamentally rethink, on a clean sheet, the way products and services are provided and to design the process accordingly. It starts from the target and works backwards step by step, so that the process is designed to meet the requirements of the organization. This approach is profound, dramatic and highly risky, meets strong resistance, and may bring huge costs if the reform fails (Zeng 2008).
The other approach creates a new process based on the existing one through systematic analysis of the existing process. It is efficient, advances step by step, meets less resistance and interferes less with normal operation. Many big companies at home and abroad regard continuous improvement as an important part of their enterprise culture; through hundreds of thousands of small changes, a huge performance improvement can be accumulated gradually.
JY Company used a strict linear-functional structure under its original workshop management mode, which could no longer meet the needs of greatly increasing production flexibility, speeding up response to the external market and strengthening internal management. The division of departmental responsibilities was unclear, nonstandard and unbalanced, an effective communication and cooperation mechanism between departments was lacking, and organizational operation efficiency was not high. Finally, responsibilities and interests were unequal and management spans were too big, which led to low management efficiency.
The responsibilities of process units were not clear, the degree of cooperation was not high, and the processes lacked flexibility, standardization and systematization. There were problems in both the design and the implementation of the management processes, and the setting of departments and key positions was unreasonable, which left some management processes missing or fuzzy; some processes often appeared as ''short circuits'' during implementation, so disputes over trifles and buck-passing arose.
On the basis of the analysis of the internal and external market environment, and according to the company's management status, strategic objectives and reengineering ideas, the objective of process reengineering was determined.
Drawing on best process management practice and theory, and considering the practical situation, JY company's overall processes were divided into two classes: business processes and management processes. According to the features of specific production and functional management, the first-level process framework of JY company was formed (see Fig. 170.2).
Meanwhile, on the basis of this framework, the second-level and third-level processes were set up; nine first-level processes and fifty-six second-level and third-level processes were established preliminarily (Wanbei Coal and Electricity Group Co. 2008).
Through special conferences and the matrix analysis method, six key processes were determined, including production management, quality management, financial management and so on. On the basis of a full understanding and analysis of the key processes, the defects of the existing processes should be found and studied, and the processes then redesigned (Hui et al. 2000).
Take the optimization of the production management process as an example. The optimized process reduces management levels and management cost and delegates authority; the new decision point is located where the business process is executed.
According to the analysis of the current status, combined with external best practices and the reality of JY company, the organizational structure was redesigned. First, the organizational framework was designed on the basis of the optimized process framework system of JY company, following the principle of ''streamlined organization, optimized personnel''. Secondly, the functional boundaries of the departments were determined and departmental responsibilities were written down. Then the posts and staffing were fixed based on the design of the organizational structure. Finally, descriptions of the key positions were established (Figs. 170.3, 170.4, 170.5 and 170.6).
Any reform and innovation of management must be carried out at the system level, which is an important principle for modern enterprises, especially modern Chinese enterprises, to get rid of rule by men; process reengineering, as an important management innovation, is no exception. At the same time, process reengineering is a systems engineering effort, and every aspect must provide guarantees for it by setting up process management, evaluation, compensation and other enterprise management systems, to effectively ensure the smooth implementation of the process reengineering.
170.5 Effect
From May 2010, when the above process reform plan began to be carried out, to May 2011, through continuous reform and optimization, the number of temporary workers was reduced by 30 %; organizational operation efficiency and production efficiency increased obviously, and the pump production cycle was shortened by seven days; product quality and customer satisfaction improved significantly. Due to the implementation of the new assessment method and salary system, worker enthusiasm improved remarkably: labor productivity increased by 20 %, the per capita wage of workers increased by 16 %, annual output value increased by 21 % and the profit growth rate reached 28 %. These results show that the effect of BPR on JY company is obvious.
170.6 Conclusion
This paper systematically analyzed the management status of JY company and its existing problems, combined theory with practice, and put forward an implementation method for JY company's process reengineering. Through the study, the conclusions are as follows:
(1) Proper process reengineering can hugely increase an enterprise's operating efficiency and economic benefit, product and service quality, and customer satisfaction. Promoting process reengineering in small and medium-sized enterprises similar to JY company is necessary for such enterprises to change their development mode, realize leapfrog development, increase flexibility, and improve economic and social benefits so as to realize their strategic objectives.
[Flowcharts of the original and optimized production management processes (batch purchase of raw materials, production-condition check, mass production notice, production planning, outsourcing arrangement and review, production, product inspection and disposal of unqualified products, product storage and delivery, with the responsible departments marked) — figure content not reproduced]
(2) So far BPR is only a school of thought and cannot yet be called a theory. As a kind of innovation theory, BPR is far from mature: its internal mechanism and a deep understanding of its essential rules are far from established, and advanced thinking and theory alone are not enough to make practice succeed. An imperfect method system and a lack of analysis tools are further obstacles to effective BPR in practice. Therefore, in practical application, enterprises should by all means avoid blind imitation and should combine BPR with IE and other management methods; only then can the success of process reengineering be guaranteed (Hammer and Champy 1993).
[Organizational structure chart of JY company: president; enterprise management, product and production departments; general office, machining and assembly sections, dispatching center, warehouses and tool storage; labor union — figure content not reproduced]
(3) The practice of JY company's business process reengineering proved that the method presented in this paper has a certain guiding significance for enterprise process reengineering, and can reduce mistakes, improve efficiency and ensure the smooth completion of business process reengineering (Mei and Teng 2004; Huang and Mei 2003).
References
Mei S, Teng JTC (2004) Process reengineering—theory, method and technology. Tsinghua
University Press, Beijing
Qi E, Wang H (2005) Business process reengineering based on value chain. Ind Eng J 8(1):77–79
Wang P (2005) Process reengineering. CITIC Publishing House, Beijing
Wanbei Coal and Electricity Group Co., LTD (2008) New mode of coal mine management
Yue J (2005) Reshaping the production process is an effective way to improve the competition of enterprises. J Inner Mongolia Finan Econ Coll 3:77–79
Zeng W (2008) Study on production process management mode optimization in Renhe casting
factory, Lanzhou University, Lanzhou
Chapter 171
The Increase and Decrease Connecting
Potential Analysis on Urban and Rural
Residential Land of Tianjin
Abstract Research purpose: to probe into the technical routes and methods for calculating CUR (connecting the increase of urban construction land with the decrease of rural residential land) potential. Research method: quantitative analysis. Results: according to this calculation method and technical route, an empirical analysis of the CUR potential of Tianjin shows that the CUR potential coefficient is 1.25 and the CUR potential balance is 4,936.60 hm2 in the planning target years. This states that through CUR Tianjin can meet the demand of new town construction land occupying plough land in the planning target years. Research conclusions: the technical route and calculation methods of CUR potential based on the overall land use plan yield results that reflect the actual area and integrate tightly with land use control; they can provide useful data references for other places applying to be CUR experimental units, distributing CUR quotas and developing CUR projects, lay the foundation for working out land reclamation planning, and provide quantitative data and references for land and resources management departments to develop and innovate CUR policy.
171.1 Introduction
In June 2008, the Ministry of Land and Resources issued the ''Administrative measures for connecting the increase in urban construction land with the decrease in rural residential land'', which symbolized that China's CUR (connecting the increase of
G. Lin
School of Management and Economics, Tianjin University, Tianjin, China
S. Hao (&)
The Postgraduate Department, Tianjin Polytechnic University, Tianjin, China
e-mail: [email protected]
urban construction with the decrease of rural residential land) pilot work was formally incorporated into a lawful course. With the further development of CUR, this land use policy with Chinese characteristics has gradually become a research focus in the field of land management, and a theoretical system has preliminarily formed with CUR policy interpretation (Shao and Li 2009; Feng et al. 2011; Li and Wang 2009; Gong 2012), CUR pattern design (Wang and Zhu 2007; Li et al. 2007; Qv et al. 2011; Wang and Wang 2009), CUR benefit evaluation (Mai 2008; Gan and Zhou 2008; Yuan 2011) and CUR potential analysis (Xu and Wang 2009; Yu 2011) as its core content. However, research on CUR potential and spatial layout analysis is scarce. The few existing studies mostly lack systematic thinking about potential calculation, ignore the link between the potential and the general land use plan, and produce calculation results that are out of line with the overall land use planning and therefore of poor practicality. In the future, CUR potential calculation based on the overall land use plan will be one of the most important research directions. It can provide useful data references for other places applying to be CUR experimental units, distributing CUR quotas and developing CUR projects, lay the foundation for working out land reclamation planning, and provide quantitative data and references for land and resources management departments to develop and innovate CUR policy.
China's CUR policy was introduced while urbanization and industrialization were advancing ceaselessly and the construction of the new socialist countryside was just unfolding. The core of this policy includes two aspects: the former is mainly for the expansion of the city, and the latter, at the present stage in China, is mainly achieved through the consolidation of rural residential land. CUR potential analysis is both theoretical and practical work and must be guided by a corresponding theoretical basis to make it more scientific, forward-looking and practical. This paper argues that the related theoretical basis mainly includes land use planning theory, sustainable development theory, location theory and rent theory.
Land use planning is the arrangement, ahead of schedule, of land use in a certain area in the future; it is a comprehensive technical and economic measure for allocating land resources and reasonably organizing land use in time and space on the basis of regional socio-economic development and the natural and historical characteristics of the land (Wang and Han 2002). On the one hand, land use planning, as science and technology, is a productive force that supports reasonable and orderly human use of land; on the other hand, as the land in the sectors of
Sustainable development not only satisfies the demands of the present generation without harming the demands of future generations, but also conforms to both local and global population interests. It mainly includes the following meanings:
(1) Efficient development. This refers not only to efficiency in the economic sense, but also includes the profits and losses of natural resources and the environment.
(2) Sustainable development. Economic and social development cannot exceed the carrying capacity of the environment and resources, and the consumption rate of natural resources must be kept lower than the rate of resource regeneration.
(3) Equitable development. This contains horizontal intragenerational equity and vertical intergenerational equity; people living in the contemporary world cannot damage their offspring's survival and development conditions for their own development needs.
(4) Common development. The Earth is a complex giant system whose subsystems interact; if one subsystem has problems, it will directly or indirectly affect other subsystems and the function of the whole system. Therefore, sustainable development is common development.
Location theory concerns the places of human economic activity and their spatial economic ties. The positions in the spatial distribution of social economic activity include geographical position, economic position and traffic position. These positions, organically connected, act together on regional space, forming the differentiated superiority of land locations. Land is the place of all human activities, and different human activities produce different types of land use. A plot has not only azimuth and distance attributes, but also ties closely to the spatial distribution laws of social economic activities and geographical elements.
Regarding the economic benefit of land, the influence of the location factor is mainly embodied in the following respects:
(1) Accessibility. Locations with good accessibility are easy to enter and therefore attractive.
(2) Distance from the central business district. The closer to the central business district, the better the location and the higher the land use efficiency.
(3) Materialized labor inputs. The more social materialized labor is invested, the greater the value of land use and the higher the economic benefit.
(4) Agglomeration benefits and mutual complementarity. Clustering can bring enterprises comprehensive benefits; when multiple related enterprises gather together, forming a mutually complementary organic whole, they can obtain more profit than under a scattered arrangement.
The siting of CUR projects is an embodiment and application of location theory. On the one hand, rural residential areas in CUR are turned back into cultivated land and cultivated land in the new construction area is turned to urban construction purposes, realizing a change in the spatial position of land use and making land use more reasonable. On the other hand, the new site should be chosen in a region with a good location, where the maximum benefit can be gained.
Differential rent theory provides the theoretical basis for analyzing the operating mechanism of CUR. Differential rent is the part of excess profit, obtained by operating better land, that belongs to the land owner. The difference in the land's natural conditions and the monopoly of land use rights together give rise to differential rent. According to its formation conditions, differential rent can be divided into differential rent I and differential rent II: differential rent I arises from differences in land fertility and geographical location, while differential rent II arises because continuous investment in the same plot leads to higher labor productivity. Differential rent I is the basis and premise of differential rent II.
In CUR work, one of the key jobs is to set up a reasonable corresponding relation between newly built districts (CUR demand areas) and removed old districts (CUR supply areas). Among them, the CUR supply areas mainly develop the consolidation potential of rural construction land and provide town construction land quotas, while the CUR demand areas are key areas of urban construction. In theory, land with lower differential rent is unsuitable for construction, so rural construction land in places with lower differential rent should be reclaimed as plough land, and places with higher differential rent should be arranged with priority for new construction.
r = Sg / Dg                                                (171.1)
Qy = Sg − Dg                                               (171.2)
In (171.1) and (171.2), r is the CUR potential coefficient, Qy is the CUR potential balance, Sg is the CUR supply ability in the target years and Dg is the CUR demand in the target years.
The CUR potential calculation mainly consists of the following three steps:
Step 1: Calculate the urban construction land CUR demand based on urban construction land utilization and planning in the target years.
Step 2: Calculate the CUR supply ability of rural residential land consolidation based on rural residential land utilization and planning in the target years.
Step 3: Calculate the CUR potential.
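Step 3 can be sketched directly from the two indicators. In the sketch below, the supply and demand figures are the Tianjin target-year totals reported later in this chapter; the coefficient formula r = Sg/Dg is inferred from the definitions and the reported values:

```python
# Minimal sketch of the CUR potential indicators.
# r = Sg / Dg is inferred from the reported Tianjin values;
# the balance follows (171.2): Qy = Sg - Dg.

def cur_potential(sg: float, dg: float) -> tuple[float, float]:
    """sg: CUR supply ability (hm2); dg: CUR demand (hm2)."""
    r = sg / dg      # CUR potential coefficient
    qy = sg - dg     # CUR potential balance (hm2)
    return r, qy

# Tianjin target-year totals reported in Sect. 171.4
r, qy = cur_potential(24841.79, 19905.19)
print(round(r, 2), round(qy, 2))  # 1.25 4936.6
```

A coefficient above 1 (positive balance) means the supply ability exceeds the demand, which is exactly the condition the paper checks for Tianjin.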
The CUR demand is the urban construction land demand to be met through the CUR mode, based on the general land use planning schemes at all levels and regional economic and social development. In this paper the CUR demand is calculated as in (171.3):
Dg = Dz − Dx                                               (171.3)
In (171.3), Dg is the CUR demand, Dz is the additional construction land quota occupying farmland in the target years, and Dx is the control index for new construction land occupation of cultivated land determined by the ''Tianjin city land use overall planning (2006–2020)''.
The CUR supply ability of rural residential land consolidation is the capacity of rural residential land consolidation, under various practical constraints, to provide land for CUR work on the basis of the overall land use plan. The CUR supply ability is calculated as in (171.4):
Sg = a · Sq − (Sx − Sh)                                    (171.4)
In (171.4), Sg is the CUR supply ability, Sq is the rural residential land readjustment potential, Sx is the current rural residential area, Sh is the planned rural residential area and a is the newly increased cultivated land coefficient.
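A minimal sketch of the demand and supply formulas, assuming the algebraic forms Dg = Dz − Dx and Sg = a·Sq − (Sx − Sh) as read from (171.3) and (171.4) here; the numeric inputs are hypothetical, not figures from the paper:

```python
# Sketch of the CUR demand (171.3) and supply-ability (171.4) calculations.
# The algebraic forms are assumptions based on the reading of (171.3)-(171.4)
# above, not a confirmed restatement of the authors' formulas.

def cur_demand(dz: float, dx: float) -> float:
    """dz: farmland quota occupied by new construction; dx: planned control index (hm2)."""
    return dz - dx

def cur_supply(a: float, sq: float, sx: float, sh: float) -> float:
    """a: newly increased cultivated land coefficient; sq: readjustment potential;
    sx: current rural residential area; sh: planned rural residential area (hm2)."""
    return a * sq - (sx - sh)

# Hypothetical illustration only:
print(cur_demand(1000.0, 400.0))              # 600.0
print(cur_supply(0.85, 500.0, 300.0, 280.0))  # 405.0
```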
The concrete calculation process is as follows:
Step 1: Calculate the rural residential land readjustment potential based on rural residential land utilization.
Step 2: Calculate the CUR supply ability in the target years based on the planned rural residential land use.
Tianjin is the biggest coastal open city in North China, the bridgehead of the eastern end of the Eurasian Continental Bridge, located in the northeast of the North China Plain at the center of the Bohai economic rim, with good location conditions. At the end of 2008, the city's population was 9,688,700, of which the agricultural population was 3,806,000 and the non-agricultural population was 5,882,700. According to the 2008 Tianjin land-use change survey results, the city's land area was 1,191,731.9 hm2. Among that, the agricultural land area was 692,670.95 hm2, accounting for 58.12 % of the total land, and the plough area within it was 441,089.72 hm2, representing 37.01 % of the city's total land area. The total area of land for construction was 368,188.81 hm2, accounting for
30.90 % of the total land area, of which rural residential land was 88,192.45 hm2, representing 4.70 % of the city's total land area. The unused land was 130,872.15 hm2, accounting for 10.98 % of the total land area, mainly in Baodi District, Ninghe County, Dagang District, Wuqing District, Jinghai County and other places.
The CUR demand of each Tianjin district and county, calculated according to formula (171.3), the ''Tianjin city land use overall planning (2006–2020)'' and the second land survey data of Tianjin, is shown in Table 171.1. It shows that in the planning target years, the cultivated land occupation index for town construction that Tianjin needs to meet through CUR will be 19,905.19 hm2. The largest CUR demand will be in Wuqing District, as high as 3,454.1 hm2, accounting for 17.35 % of the total, followed by Jinghai County with 3,295.13 hm2, accounting for 16.55 %; the smallest will be in Binhai New Area, only 515.28 hm2, accounting for 2.59 % of the total CUR demand.
Tianjin Binhai New Area is expected, through future efforts, to become North China's portal opening to the outside world, a high-level base for modern manufacturing and research conversion, the northern international shipping center and international logistics center, and gradually a prosperous, socially harmonious, environmentally beautiful and ecologically livable area. Therefore, the ''Tianjin city land use overall planning (2006–2020)'' gives it a larger new construction land occupation of farmland index, to provide security for its economic development.
There are two steps to calculate the CUR supply ability of rural residential land consolidation in Tianjin, according to (171.4) and the calculation steps listed above.
(1) Calculation of the Tianjin rural residential land consolidation potential in the planning target years. This study uses the method from Song et al. (2006) to calculate the Tianjin rural residential land readjustment potential, as in (171.5) and (171.6):
Mt = G / Qt                                                (171.6)
In (171.5) and (171.6), Si is the rural residential land consolidation potential, Sx is the current rural residential area, At is the average homestead standard in the target years, Mt is the number of households in the target years, R is the proportion of homestead land within residential land in the target years, G is the rural population in the target years and Qt is the household scale in the target years.
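Equation (171.6) is a simple ratio of rural population to household scale. As a minimal sketch, the population figure below is Tianjin's 2008 agricultural population quoted earlier in this chapter, while the household scale of 3.5 persons per household is a hypothetical illustration, not a figure from the paper:

```python
# Sketch of (171.6): number of households Mt = G / Qt.

def households(g: float, qt: float) -> float:
    """g: rural population in the target years; qt: household scale (persons/household)."""
    return g / qt

# Tianjin's 2008 agricultural population, with a hypothetical
# household scale of 3.5 persons per household (illustrative only):
mt = households(3_806_000, 3.5)
print(round(mt))  # 1087429
```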
After determining the household scale, the average homestead area and the proportion of homestead land for each Tianjin district and county, the Tianjin rural residential land consolidation potential is calculated according to (171.5) and (171.6); the results are in Table 171.2. From Table 171.2, there are obvious regional differences in the rural settlement consolidation potential among Tianjin's districts. In the developed suburban districts and Binhai New Area, the rural residential land readjustment potential is relatively low, because of their more developed economies, higher urbanization level and higher level of economical and intensive use of rural residential land. Conversely, in districts with less developed economies and lower urbanization levels, along with future economic and social development, the rural residential land consolidation potential is higher. According to the ''Tianjin city land use overall planning (2006–2020)'', Dongli district has the minimum rural residential land readjustment potential, 0 hm2. In addition, the consolidation potential of Binhai New Area is low, 3,540.61 hm2, and that of Beichen is 5,695.67 hm2. Baodi has the highest, 13,910.79 hm2, followed by Jixian and Wuqing with 13,228.82 and 11,935.65 hm2 respectively.
Table 171.2 Rural residential land readjustment potential in Tianjin districts and counties (unit: hm2)
District and county name    Readjustment potential
Binhai New Area             3540.61
Dongli                      0
Xiqing                      6970.12
Jinnan                      7837.01
Beichen                     5695.67
Wuqing                      11935.65
Baodi                       13910.79
Ninghe                      5141.57
Jinghai                     9676.31
Jixian                      13228.82
Total                       77936.56
Note: Dongli district has no readjustment potential because its planned rural residential area in the target years is 0 hm2, according to the ''Tianjin city land use overall planning (2006–2020)''
(2) Calculation of the CUR supply ability of Tianjin in the target years. According to CUR cases in Tianjin's districts, the newly cultivated land coefficient of rural residential land reclamation in demolished old districts ranges between 0.80 and 0.92. Considering relevant expert opinions, this study sets the newly cultivated land coefficient of each agricultural district at 0.85, except for Jixian, whose coefficient is set at 0.5: Jixian lies in mountainous terrain, is the northernmost ecological area of Tianjin whose main land use targets are ecological conservation and tourism development, and many of its rural settlements are not suitable for land reclamation. By (171.4), the CUR supply ability of each district and county can be calculated as in Table 171.3. Tianjin's CUR supply capacity is 24,841.79 hm2; among the districts, Baodi provides 6,966.34 hm2 and Jixian 4,506.96 hm2, followed by Ninghe, Wuqing and Jinghai; the CUR supply capacities of Jinnan, Binhai New Area and Beichen are smaller, and Dongli's is 0 hm2. These spatial differences in CUR supply ability reflect the differences among Tianjin's districts and counties in economic development level, industrialization and urbanization: counties with high levels of industrialization, urbanization and economical and intensive land utilization have weak supply capacity, and vice versa.
According to (171.1) and (171.2), the CUR potential of each Tianjin district and county in the planning target years can be calculated, as shown in Table 171.4. From Table 171.4, Tianjin's CUR potential coefficient in the planning target years is 1.25, with a CUR potential balance of 4,936.60 hm2, which shows that through CUR work Tianjin can meet the cultivated land occupation demand of new town construction in the planning target years, with a surplus of potential index. However, the districts' CUR potential coefficients show big spatial differences: Jixian, Ninghe, Baodi, Jinghai and other outlying counties have bigger CUR potential, while the four suburban districts of Dongli, Beichen, Jinnan and Xiqing, together with Binhai New Area, have small CUR potential. In order to effectively regulate the imbalance of CUR potential between districts and counties and realize coordinated development among regions at different levels, the whole city should be divided into different CUR areas and the balance CUR potential index distributed according to the CUR potential coefficient.
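As a quick cross-check, the Table 171.4 entries follow directly from the supply and demand values via (171.1) and (171.2). The supply figures below are from Table 171.3; the demand figures are the three district values quoted in the text (Table 171.1 itself is not reproduced here):

```python
# Recompute CUR potential coefficient and balance for the districts whose
# demand figures are quoted in the text: r = Sg / Dg, balance = Sg - Dg.

supply = {"Binhai New Area": 287.97, "Wuqing": 3715.19, "Jinghai": 3505.07}  # Table 171.3
demand = {"Binhai New Area": 515.28, "Wuqing": 3454.10, "Jinghai": 3295.13}  # quoted in text

for name in supply:
    r = supply[name] / demand[name]
    balance = supply[name] - demand[name]
    print(f"{name}: coefficient {r:.2f}, balance {balance:.2f} hm2")
# Binhai New Area: coefficient 0.56, balance -227.31 hm2
# Wuqing: coefficient 1.08, balance 261.09 hm2
# Jinghai: coefficient 1.06, balance 209.94 hm2
```

The recomputed values match the corresponding rows of Table 171.4.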
Table 171.3 CUR supply ability in Tianjin districts and counties (unit: hm2)
District and county name    CUR supply ability
Binhai New Area             287.97
Dongli                      0
Xiqing                      1064.8
Jinnan                      216.41
Beichen                     524.34
Wuqing                      3715.19
Baodi                       6966.34
Ninghe                      4054.72
Jinghai                     3505.07
Jixian                      4506.96
Total                       24841.79
Table 171.4 CUR potential coefficient and CUR potential balance of each district and county
District and county name    CUR potential coefficient    CUR potential balance (hm2)
Binhai New Area             0.56                         -227.31
Dongli                      0                            -1427.69
Xiqing                      0.69                         -470.79
Jinnan                      0.10                         -1859.02
Beichen                     0.30                         -1244.37
Wuqing                      1.08                         261.09
Baodi                       3.90                         5180.77
Ninghe                      1.99                         2012.83
Jinghai                     1.06                         209.94
Jixian                      2.25                         2501.13
Total                       1.25                         4936.60
171.5 Conclusion
References
Feng JM, Chen LQ, Song X (2011) CUR policy analysis. Anhui Agric Bull 14:12–13 (in
Chinese)
Gan LC, Zhou BT (2008) Based on the CUR of the rural construction land consolidation benefits
analysis. Land Econ 10:42–46 (in Chinese)
Gong MF (2012) Analyzing the advantages and disadvantages of CUR. Rural Econ Sci Technol
01:10–11 (in Chinese)
Li WJ, Wang L (2009) Analyzing the advantages and disadvantages of CUR. Inf Land Resour
4:34–37 (in Chinese)
Li ZJ, Fan ZA, Gao MX (2007) The rural settlement arrangement mode and countermeasures in
CUR policy—TaiAn in Shandong province as an example. J Shandong Agric Univ 1:32–36
(in Chinese)
Mai XS (2008) CUR economic analysis—Shapingba in Chongqing as an example. Master thesis
in Southwest University (in Chinese)
Qv YB, Zhang FR, Jiang GH, Li L, Song W (2011) Rural residential areas of land consolidation
potential and CUR partition research. Resour Sci 33:134–142 (in Chinese)
Shao SJ, Li XS (2009) Rural and urban construction land increase or decrease peg reading. Law
Soc 10:290–291 (in Chinese)
Song W, Zhang FR, Chen XW (2006) Our country rural residential areas potential consolidation
measuring method. Guangdong Land Sci 5:43–47 (in Chinese)
Wang HL, Wang X (2009) Liaoning province CUR mode analysis. Land Resour 6:48–49 (in
Chinese)
Wang J, Zhu YB (2007) Discussion on CUR operation patterns. Rural Econ 8:29–32 (in Chinese)
Wang WM, Han TK (2002) The land use planning learning. China Agriculture Press, Beijing,
p 10 (in Chinese)
Xu WD, Wang ZR (2009) In CUR policy Shandong province of rural construction land
consolidation potential and key area. Shandong Land Resour 1:23–25 (in Chinese)
Yu YQ (2011) The CUR potential analysis—Nasi in Xinjiang as an example. Econ Res 23:83–89
(in Chinese)
Yuan HZ (2011) CUR implementation evaluation index system construction analysis. China
Collect Econ 3:102–103 (in Chinese)
Chapter 172
Study on Re-Evaluation of Technological
Innovation Efficiency Based on the C2R
Improvement Model in Zhongguancun
High-Tech Enterprises
Abstract To begin with, this paper studied the relative technological innovation efficiency of 10 major high-tech industries in Zhongguancun. The study found that 7 of the 10 industries are relatively effective in their innovation efficiency: electronic information, advanced manufacturing, new energy, new materials, modern farming, ocean engineering and nuclear application. This article then introduced a virtual decision making unit into the C2R model and re-evaluated the relative effectiveness of the above-mentioned seven industries. Finally, this paper gives some suggestions for improving the innovation efficiency of these industries.
Keywords Zhongguancun · High-tech industries · Data envelopment analysis (DEA) · Virtual decision making units
172.1 Introduction
Evaluating the innovation efficiency of high-tech industries can help the government better plan new industries, utilize resources, raise efficiency and promote industrial restructuring.
In actual DEA evaluation processes, most of the DMUs are relatively effective and only a few are invalid. This is because there are too many indexes and too few DMUs, which makes the analysis result less practical (Duan 2007). In this case, further analysis of the relatively effective DMUs is needed to evaluate their efficiency. There are many sequencing methods in DEA evaluation; this paper adopts the virtual unit method (Duan 2007; Hua and Tao 2011; Liu and Song 2010).
Within the virtual unit method, a virtual decision making unit DMU_{n+1} is introduced to replace the evaluated DMU_0 in the constraint conditions of the general model, so as to distinguish the different degrees of effectiveness of different DMUs. Suppose the input and output of DMU_{n+1} are (x_{i,n+1}, y_{k,n+1}), where x_{i,n+1} = \min_{1 \le j \le n} x_{ij} (i = 1, \ldots, m) and y_{k,n+1} = \max_{1 \le j \le n} y_{kj} (k = 1, \ldots, s). The virtual decision making unit DMU_{n+1} is thus the best decision making unit among the valid DMUs. The efficiency value of the virtual DMU is compared with the efficiency values of the other DMUs: the closer a DMU's value is to that of the virtual DMU, the higher its efficiency. The evaluation is carried out on the valid DMUs after introducing the virtual DMU_{n+1}, and the result can be calculated through (D_{\varepsilon 1}) (Liu and Song 2010).
\[
(D_{\varepsilon 1})\quad
\begin{cases}
\min\ \theta - \varepsilon(\hat{e}^T s^- + e^T s^+) \\
\text{s.t.}\ \sum_{j=1,\, j\neq j_0}^{n+1} \lambda_j x_j + s^- = \theta x_0 \\
\phantom{\text{s.t.}\ } \sum_{j=1,\, j\neq j_0}^{n+1} \lambda_j y_j - s^+ = y_0 \\
\phantom{\text{s.t.}\ } \lambda_j \ge 0, \quad j = 1, 2, \ldots, n+1 \\
\phantom{\text{s.t.}\ } s^+ = (s_1^+, s_2^+, \ldots, s_s^+) \ge 0, \quad s^- = (s_1^-, s_2^-, \ldots, s_m^-) \ge 0
\end{cases}
\qquad (172.2)
\]
The sequence of efficiency values from (D_{\varepsilon 1}) gives the quality ranking of the decision making units DMU_{j_0}. Each DMU efficiency value is at most 1; the bigger the value, the better the quality of that DMU_{j_0}.
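A model of this form is an ordinary linear program and can be handed to any LP solver. The sketch below is only an illustration of the input-oriented envelopment model, not the authors' code (the paper reports using Matlab); the function name and the toy data are hypothetical.

```python
# Sketch: input-oriented C2R (CCR) envelopment model with non-Archimedean
# infinitesimal eps. Decision variables, in order:
# [theta, lambda_1..lambda_n, s-_1..s-_m, s+_1..s+_s].
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, j0, eps=1e-6):
    """Input-oriented C2R efficiency of DMU j0.

    X is an m-by-n input matrix and Y an s-by-n output matrix
    (columns are DMUs).
    """
    m, n = X.shape
    s = Y.shape[0]
    # Objective: min theta - eps * (sum of input slacks + sum of output slacks).
    c = np.concatenate(([1.0], np.zeros(n), -eps * np.ones(m + s)))
    # Input constraints: sum_j lambda_j * x_ij + s-_i - theta * x_i,j0 = 0.
    A_in = np.hstack([-X[:, [j0]], X, np.eye(m), np.zeros((m, s))])
    # Output constraints: sum_j lambda_j * y_kj - s+_k = y_k,j0.
    A_out = np.hstack([np.zeros((s, 1)), Y, np.zeros((s, m)), -np.eye(s)])
    res = linprog(c,
                  A_eq=np.vstack([A_in, A_out]),
                  b_eq=np.concatenate([np.zeros(m), Y[:, j0]]),
                  bounds=[(None, None)] + [(0, None)] * (n + m + s))
    return res.x[0]

# Toy example: DMU 1 uses twice the input of DMU 0 for the same output,
# so DMU 0 is efficient (theta = 1) and DMU 1 gets theta = 0.5.
X = np.array([[2.0, 4.0]])
Y = np.array([[2.0, 2.0]])
print(round(ccr_efficiency(X, Y, 0), 4))  # approximately 1.0
print(round(ccr_efficiency(X, Y, 1), 4))  # approximately 0.5
```

Excluding the evaluated unit j0 from the reference set, as in (D_{\varepsilon 1}), amounts to zeroing the corresponding column of X and Y before calling the function.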
The DMU efficiency value can also be calculated from (D_{\varepsilon 2}) after the introduction of the virtual DMU_{n+1}.
\[
(D_{\varepsilon 2})\quad
\begin{cases}
\max\ \alpha + \varepsilon(\hat{e}^T s^- + e^T s^+) \\
\text{s.t.}\ \sum_{j=1,\, j\neq j_0}^{n+1} \lambda_j x_j + s^- = x_0 \\
\phantom{\text{s.t.}\ } \sum_{j=1,\, j\neq j_0}^{n+1} \lambda_j y_j - s^+ = \alpha y_0 \\
\phantom{\text{s.t.}\ } \lambda_j \ge 0, \quad j = 1, 2, \ldots, n+1 \\
\phantom{\text{s.t.}\ } s^+ = (s_1^+, s_2^+, \ldots, s_s^+) \ge 0, \quad s^- = (s_1^-, s_2^-, \ldots, s_m^-) \ge 0
\end{cases}
\qquad (172.3)
\]
This paper first evaluates the DMUs through the C2R and BC2 models of the DEA method. It then introduces the virtual decision making unit DMU_{n+1} and calculates the effectiveness of each DMU through the efficiency evaluation of the valid DMUs.
All of the evaluation indicators used in this article for assessing the innovation efficiency of Zhongguancun high-tech industries are objective. The data are mainly from the Yearbook of the Zhongguancun High-tech Industrial Park and the Yearbook of the Zhongguancun National Demonstration Park of Self-innovation. Part of the data is from the statistics published on the website of the Zhongguancun National Demonstration Park of Self-innovation for 2006 to 2010 (http://www.zgc.gov.cn/tjxx/), and part is calculated from these existing data. Therefore, the data are highly objective and credible.
This paper applies the DEA variable-returns-to-scale BC2 model to the data of Table 172.1 and obtains the results through DEAP 2.1. The innovation efficiency values of the 10 key high-tech industries are shown in Table 172.2. Since DEA is a relative evaluation method, the DEA value in Table 172.2 only indicates the degree of relative effectiveness (Wang et al. 2009).
It can be seen from Table 172.2 that the biggest overall DEA value is 1, the smallest is 0.454 and the average is 0.913. In the innovation efficiency evaluation of the Zhongguancun high-tech industries, 7 industries (electronic information, advanced manufacturing, new energy, new material, modern agriculture, ocean engineering and nuclear application) have an innovation efficiency value of 1. This means that 70 % of the DMUs are effective while 30 % (environment protection, biomedicine and aerospace) are not. Generally speaking, most of the high-tech industries are relatively effective in terms of innovation efficiency.
It can be seen from the results of the C2R and BC2 models that industries with relatively effective innovation efficiency account for the larger share. In order to
Table 172.1 Statistical indicators of the 10 high-tech industries of Zhongguancun in the 11th 5-
year plan period
Industries I1 (%) I2 (billion Yuan) I3 (%) O1 (item) O2 (%) O3 (%)
Electronic information 39 36.159 13 2764.00 61 83
Biomedicine 25 1.945 8 378.20 41 45
New material 25 2.493 7 676.00 66 88
Advanced manufacturing 25 4.133 6 1022.00 42 51
Aerospace 40 1.751 37 51.80 53 62
Modern agriculture 24 0.427 6 71.00 70 97
New energy 30 2.643 4 439.80 77 103
Environment protection 34 1.029 15 227.60 63 113
Ocean engineering 27 0.075 14 11.60 20 24
Nuclear application 45 0.275 12 67.20 79 69
1642 J. An et al.
distinguish the efficiency value of these industries, a virtual unit DMU11 was
introduced to re-evaluate the innovation efficiency, as is shown in Table 172.3.
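The construction of the virtual unit DMU11 from Table 172.1 can be sketched as follows (a minimal Python illustration; the dictionary simply restates the seven relatively effective industries' rows from the table, with I1-I3 as inputs and O1-O3 as outputs, and the variable names are hypothetical):

```python
# Virtual DMU: component-wise minimum of all inputs, maximum of all outputs,
# per x_{i,n+1} = min_j x_ij and y_{k,n+1} = max_j y_kj.
dmus = {
    "electronic information": ([0.39, 36.159, 0.13], [2764.0, 0.61, 0.83]),
    "new material":           ([0.25, 2.493, 0.07], [676.0, 0.66, 0.88]),
    "advanced manufacturing": ([0.25, 4.133, 0.06], [1022.0, 0.42, 0.51]),
    "modern agriculture":     ([0.24, 0.427, 0.06], [71.0, 0.70, 0.97]),
    "new energy":             ([0.30, 2.643, 0.04], [439.8, 0.77, 1.03]),
    "ocean engineering":      ([0.27, 0.075, 0.14], [11.6, 0.20, 0.24]),
    "nuclear application":    ([0.45, 0.275, 0.12], [67.2, 0.79, 0.69]),
}
inputs = [x for x, _ in dmus.values()]
outputs = [y for _, y in dmus.values()]
virtual_x = [min(col) for col in zip(*inputs)]    # x_{i,n+1} = min over DMUs
virtual_y = [max(col) for col in zip(*outputs)]   # y_{k,n+1} = max over DMUs
print(virtual_x)  # [0.24, 0.075, 0.04]
print(virtual_y)  # [2764.0, 0.79, 1.03]
```

The resulting virtual unit dominates every real DMU, which is why it serves as the benchmark in the re-evaluation.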
Suppose \varepsilon = 10^{-6}; an input-oriented C2R model with the non-Archimedean infinitesimal \varepsilon is established. The C2R model of DMU1 is as follows:
\[
\begin{cases}
\min\ \theta - \varepsilon(s_1^- + s_2^- + s_3^- + s_1^+ + s_2^+ + s_3^+) \\
\text{s.t.}\ 0.39\lambda_1 + 0.25\lambda_2 + 0.25\lambda_3 + 0.24\lambda_4 + 0.30\lambda_5 + 0.27\lambda_6 + 0.45\lambda_7 + 0.24\lambda_8 + s_1^- = 0.24\,\theta \\
\phantom{\text{s.t.}\ } 361.59\lambda_1 + 24.93\lambda_2 + 41.33\lambda_3 + 4.27\lambda_4 + 26.43\lambda_5 + 0.75\lambda_6 + 2.75\lambda_7 + 0.75\lambda_8 + s_2^- = 0.75\,\theta \\
\phantom{\text{s.t.}\ } 0.13\lambda_1 + 0.07\lambda_2 + 0.06\lambda_3 + 0.06\lambda_4 + 0.04\lambda_5 + 0.14\lambda_6 + 0.12\lambda_7 + 0.04\lambda_8 + s_3^- = 0.04\,\theta \\
\phantom{\text{s.t.}\ } 2764\lambda_1 + 676\lambda_2 + 1022\lambda_3 + 71\lambda_4 + 439.8\lambda_5 + 11.6\lambda_6 + 67.2\lambda_7 + 2764\lambda_8 - s_1^+ = 2764 \\
\phantom{\text{s.t.}\ } 0.61\lambda_1 + 0.66\lambda_2 + 0.42\lambda_3 + 0.70\lambda_4 + 0.77\lambda_5 + 0.20\lambda_6 + 0.79\lambda_7 + 0.79\lambda_8 - s_2^+ = 0.79 \\
\phantom{\text{s.t.}\ } 0.83\lambda_1 + 0.88\lambda_2 + 0.51\lambda_3 + 0.97\lambda_4 + 1.03\lambda_5 + 0.24\lambda_6 + 0.69\lambda_7 + 1.03\lambda_8 - s_3^+ = 1.03 \\
\phantom{\text{s.t.}\ } \lambda_j \ge 0,\ j = 1, 2, \ldots, 8; \quad s^+ = (s_1^+, s_2^+, s_3^+) \ge 0,\ s^- = (s_1^-, s_2^-, s_3^-) \ge 0
\end{cases}
\qquad (172.4)
\]
After calculation in Matlab, the innovation efficiency results for the Zhongguancun high-tech industries are obtained; see Table 172.4.
It is known that the virtual decision making unit is the best decision making unit. Therefore, the innovation efficiency of the Zhongguancun high-tech industries can be ranked in the following sequence: new energy > modern agriculture > new material > electronic information > nuclear application > advanced manufacturing > ocean engineering. In terms of economies of scale, the industries of new energy, modern agriculture and ocean engineering are in the best condition, while all the other industries show increasing returns to scale.
Based on the DEA-C2R model, the input and output slack variable values of the relatively ineffective units among the 7 high-tech industries in the Zhongguancun Demonstration Park can be obtained; see Table 172.5 (entries whose input residual or output shortfall is zero are left blank in the table).
Table 172.4 Evaluation of DEA efficiency C2R when combined with virtual DMUs (columns: DMU; initial \theta; \theta after improvement; \sum_{j=1}^{n} \lambda_j)
According to projection analysis theory, the input residual values and output shortfalls of the relatively ineffective units among the seven high-tech industries of the Zhongguancun Demonstration Park are obtained; see Table 172.6 (entries whose input residual or output shortfall is zero are left blank in the table).
172.4 Conclusion
As a growth point of the industries in the Demonstration Park, the new energy industry has an efficiency value of 0.999 after DEA evaluation, ranking first, with a scale efficiency of 1 and constant returns to scale. In 2010, the revenue of the new energy industry accounted for 10.93 % of the industrial park. Therefore, in accordance with the plan for developing new energy, with output unchanged, the industry should reduce the proportion of technological staff by 60 % and cut the technological budget by 2.568 billion Yuan; while maintaining constant input, it should increase patent authorizations by 2324.2 items and the proportion of new product sales revenue by 20 %.
As a fast growing industry, the new material industry obtains an efficiency value of 0.820 after DEA evaluation, ranking third, with a scale efficiency of 1.042 and an increasing trend. In 2010, the new material industry accounted for 6.73 % of the total revenue of the industrial park. So, while maintaining constant output, it should cut the technological budget by 1.987 billion Yuan and reduce the budget input intensity by 23 %; while maintaining constant input, it should increase patent authorizations by 1685.476 items and the proportion of new product sales revenue by 15 %.
budget by 71.7 million Yuan; reduce the budget input intensity by 25 %. And
while maintaining a constant input, it should increase patent authorization by
2696.8 items and the proportion of new products sales revenue by 34 %.
References
Che W, Zhang L (2010) Shanghai city cooperative efficiency evaluation—based on industry data
DEA analysis. Sci Technol Prog Policy 03:20–25
Cheng H, Chen Y (2009) DEA based in Hubei Province high-tech industrial innovation efficiency
analysis. Sci Technol J 12:115–116
Duan Y (2007) Data envelopment analysis: theory and application. Shanghai Science and
Technology Press, Shanghai
He M, Shane K, Liu Y (2010) Innovation ability evaluation analysis of Zhongguancun science
and technology garden based on DEA. Sci Technol Prog Policy 9:106–109
Hua Z, Tao L (2011) Safety efficiency evaluation of China’s coal enterprises based on DEA
model and its improved model. Coal Econ Res 5:49–53
Li N, Xie Z (2010) China regional innovation system innovation efficiency evaluation—positive
analysis based on DEA. Ind Technol Econ 8:122–126
Liu W, Song M (2010) Coal listed company performance evaluation—based on the DEA-C2R model perspective. Manag Eng 1:1–4
Quan J, Yao L, Shi B (2008) National high-tech zone sustainable development capacity
evaluation based on DEA. Soft Sci 1:75–77
Wang D (2008) Technology innovation performance evaluation of the East Lake High-tech Zone
based on DEA. J Chongqing Acad Arts Sci Nat Sci Ed 2:84–87
Wang P, Wang L, Wang L (2009) The manufacturing industry in Hunan Province based on DEA
innovation efficiency analysis. Sci Technol Manag Res 06:172–175
Wang X, King S (2009) Cooperative innovation efficiency evaluation of DEA. Stat Decis Making
3:54–56
Wei Q (2004) Data envelopment analysis. Science Press, Beijing
Xie Z (2011) Study on the national high-tech zone technology innovation efficiency influence
factors. Manag Sci Res 11:52–58
Abstract The goal of this paper is to analyze the status of and the need for strategic management in SMEs in China, and to analyze the problems of and countermeasures for its implementation. First, we analyze the importance of SMEs for China's economic development and the need for SMEs to implement strategic management. Second, we introduce the steps of implementing strategic management in SMEs, dividing the implementation process into five steps. We then discuss the existing problems in implementing strategic management in Chinese SMEs today, and find that the lack of skills, insufficient consideration of the macro environment, and a speculative mentality are the main obstacles. Finally, we put forward corresponding suggestions and recommendations for the implementation of strategic management in SMEs.
173.1 Introduction
Since the reform and opening policy began in 1978, the number and scale of Small and Medium Enterprises (SMEs) have grown rapidly alongside the development of the private economy in China, and SMEs have increasingly become an important pillar of China's economic development. According to the National Development and Reform Commission, the number of SMEs in China
X. Zhu (&) Y. Li
School of Management, Guangxi University of Technology, Liuzhou, China
e-mail: [email protected]
Y. Li
e-mail: [email protected]
has reached more than 42 million, accounting for over 99.8 % of the total number of enterprises. The number of SMEs registered with the business administration reached 4.6 million, and the number of self-employed households reached more than 38 million by the end of 2009. The value of final goods and services created by SMEs accounts for 60 % of the gross domestic product; the goods produced by SMEs account for 60 % of total sales; the tax revenue turned over by SMEs is more than half of the total; and SMEs supply more than 80 % of all job positions. We can conclude that SMEs play an increasingly important role in China's economic development, and that the development status of SMEs has become an important indicator of economic vitality for a country or region all around the world.
However, it should be noted that the development of SMEs faces unprecedented difficulties due to a variety of factors. There are serious problems in their ideas, technologies and equipment, management structure and other aspects. Particularly in strategic management, the majority of SMEs lack clear strategic positioning, and their strategic management is in chaos. With China's accession to the WTO, economic globalization and the spread of information technology, SMEs are exposed to the same dynamic, hyper-competitive environment as large enterprises (Hennart 2001). This environment is unpredictable and treacherous, and because of their own limited conditions, SMEs are often at a disadvantage in the competition. Only by deeply understanding this environment can SMEs capture opportunities, avoid threats, and develop appropriate strategies that make it possible to grasp their business destiny.
In their growth and development, SMEs face many strategic choices that can affect the fate of the corporation, just the same as large businesses; strategic management issues arise in the course of any business. SME managers, especially the senior management of the enterprise, need far-sighted thinking in the business planning process in order to make appropriate choices. These choices concern the long-term and overall interests of the business, and only the right strategic choices can effectively guide the development of the enterprise.
173 Research on the Strategic Management 1651
With the development of social productivity, market supply and demand have changed from the past shortage to over-supply, and the "small boat U-turn" advantage of SMEs has gradually disappeared in the increasingly fierce market competition. To survive, SMEs must consider not only current pressures but also the long-term impact of the future environment. Therefore, strategic management must be put on the SMEs' agenda: carefully analyzing the external and internal business environment, positioning themselves accurately in the industry, and continuously optimizing the development strategy; otherwise the survival of SMEs will become increasingly difficult.
Large enterprises usually develop from SMEs through the careful design of their own development strategies, accurate self-positioning, the correct direction of investment, and so on. Practice shows that in the growth of SMEs, strategic management is the most important aspect of management; only by seizing upon development strategy on the road of development can SMEs make the right strategic choices.
With the rise of high-tech industries and the knowledge economy, the world economy has entered a new era. In this economic situation, traditional industries are facing integration, traditional modes of operation are facing challenges, and new business areas and ways of doing business are emerging (Rugman and Verbeke 2001). Only SMEs with continuous innovation can remain competitive. In the new economic situation, SMEs should treat strategic management as their guiding ideology and play to their advantages as much as possible, continuing to innovate in areas such as mode of operation, technology, product development, service and production processes.
Taking into account the characteristics of SMEs, the steps of implementing strategic management should include: analysis of the business environment; positioning in the industry and market; identification of strategic objectives; formulation and selection of the business strategy; and implementation and evaluation of the strategy.
The business environment includes both the external environment and the internal environment. The purpose of external environment analysis is to understand the various factors that have a significant impact on the enterprise's survival and development, including the macro environment, the industry environment and the competitive environment outside the enterprise (Williamson 1999). SMEs cannot control the external environment, but they can take corresponding measures against different types of external influences. SMEs have to grasp the status and trends of the macroeconomic and industry environment, which enables their business strategy to be strongly adaptable.
Internal environment analysis examines the enterprise's own environment, conducting in-depth analysis of its existing operations, business performance, product development and marketing, management ability and all kinds of resources, in order to understand how they will affect future business activities. The analysis of the internal environment should yield a clear understanding of the business's advantages and disadvantages: factors that support development are advantages, and factors that impede it are disadvantages. By knowing its advantages and clarifying its disadvantages, an SME can set the right strategic direction for long-term development, building on strengths and avoiding weaknesses.
SMEs should start from their own situation and select the proper industries and markets for their survival and development. In general, SMEs should choose industries with less monopoly or markets near perfect competition. In addition, the business scope should not be too broad; SMEs should concentrate their limited resources and manpower on specialization (Ghemawat 2003). When an SME has developed to a certain scale and wants to expand, it can try diversification, but it must carefully consider its own abilities, or the consequences may be disastrous.
The strategic goal is the expected result to be achieved within the scope of the business during a certain period of time under the enterprise's operating philosophy. The contents of strategic objectives can be divided differently according to different standards: into departmental goals and job goals by target level, or into long-term, medium-term and short-term goals by length of time. In determining strategic objectives, SMEs should act according to their capabilities; the goals should be neither too high nor too low, but achievable through the efforts of the enterprise.
The implementation and evaluation of the strategy is the key link in achieving strategic objectives. SMEs are different from large enterprises, so more attention should be paid to strategy control during implementation, that is, controlling the speed, direction and deviation of the implementation of the strategy. At the same time, the scope of responsibility of all organizations should be clarified, so that the behavior of every department and every employee is coherent with the corporate overall strategy. During implementation, the enterprise should also constantly check progress and correct problems in time, make accurate evaluations of the implementation of the strategic objectives, and adjust strategic objectives and programs as needed. The implementation of the strategy is a continuous improvement process; SMEs should draw lessons from the process and its results in a timely manner, so as to achieve satisfactory outcomes.
Some of our SMEs lack strategic thinking and do not adequately understand what strategy is and what value it has for the enterprise. They treat strategy as an intangible thing, and short-term behavior without long-term goals is very common. More and more enterprises have come to realize the importance of strategy with the deepening of reform and opening up, the increasing frequency of economic activities, and the improvement of their own understanding (Hedlund 2007). But many SME managers still lack sufficient awareness of what kind of strategy suits their development, how to develop strategies, and how to implement them effectively. Some SMEs treat corporate profit as a strategic objective, but lack deep thought on where the money is earned and from whom. A company in the machine industry today may enter health care products, real estate and other industries tomorrow, which significantly increases business risk.
Many SMEs are not good at closely linking the macro political and economic environment with their production and operations. Their grasp of policy and the economy is relatively slow, so they miss the best opportunities for development. Some enterprises even act against the direction of policy guidance, leading to bankruptcy and insolvency risk.
Corporate strategy often reflects the values of a company's top leader. So, first of all, business leaders must be trained to improve their strategy awareness and skills. Business leaders are now increasingly aware of the importance of training. However, most of them think that only their subordinates need training to improve
Staff participation is the key to whether the corporate strategy can be carried out well. When employees do not agree with strategic decisions, there will be resistance and decreased satisfaction, which directly impacts productivity. If a company's employees do not understand how their company differs from others, or what value the business creates, they will find it difficult to understand the choices they face (Brandenburger and Nalebuff 1995). If the sales staff do not know the strategy, they will not know to whom to sell; if engineers do not know the strategy, they will not know what to produce. If employees participate in strategic management and understand how the business strategy was developed, it is easy for them to recognize the strategy and apply strategic understanding in their daily work.
There are more and more factors to consider in an enterprise's business development strategy, and strategies must change with market changes more and more frequently. Enterprises should grasp the changing situation and develop appropriate strategies and countermeasures. It is not enough to make decisions with an individual mind; enterprises should draw on social intelligence as much as possible (Glimstedt et al. 2006). A thinking tank has the following functions:
Innovating business thinking. A company can break its own mindset through the introduction of external brain resources, eliminating blind spots in production, management, sales, service, research and other areas, and providing new ideas, new knowledge, new information, new methods and new strategies.
Enhancing enterprise intelligence. Leadership and employees are fixed constants, but external brain resources are infinite variables. Establishing an external think tank gives the business an enhanced intelligence advantage and creates a unique new intelligence mechanism in the enterprise, which effectively improves the business's ability to identify and solve problems (Mulcaster 2009). In market competition, it commits fewer errors and creates more opportunities.
Developing the enterprise's interface. The introduction of external brains can not only enhance business intelligence with external intelligence, but also integrate the human, material and social-relationship resources owned by those external brains. The enterprise can expand its interface through projects or a joint system on the enterprise management platform. This flexible mode of operation can maximize the company's virtual resources and make up its short board (Markides 1999).
173.6 Conclusion
Some SME managers believe that only large enterprises need strategic management, while SMEs do not. This idea is completely wrong. The survival of SMEs is tough precisely because they cannot compete with larger enterprises in terms of technology, personnel, capital and other aspects. Without clear strategic management thinking, SMEs will easily get lost in the market and be defeated in the increasingly fierce international and domestic competition. Therefore, how to implement strategic management, how to analyze their strengths and weaknesses, and how to correctly select a management strategy have become the keys to the healthy and rapid development of SMEs in the future.
References
Almeida P (1996) Knowledge sourcing by foreign multinationals: patent citation analysis in the
US semiconductor industry. Strateg Manag J 17:155–165
Brandenburger AM, Nalebuff BJ (1995) The right game: use game theory to shape strategy. Harv Bus Rev 73(4):57–71
Chang S, Singh H (2000) Corporate and industry effects on business unit competitive position.
Strateg Manag J 21:739–752
Ghemawat P (2003) The incomplete integration of markets, location-specificity, and international
business strategy. J Int Bus Stud 34:138–152
Glimstedt H, Lazonick W, Xie H (2006) Evolution and allocation of stock options: adapting
US-style compensation to the swedish business model. Eur Manag Rev 3:1–21
Hedlund G (2007) A model of knowledge management and the N-form corporation. Strateg
Manag J 15(3):73–91
Hennart JF (2001) Theories of multinational enterprise. In: Rugman AM, Brewer TL (eds), The
Oxford handbook of international business. Oxford University Press, New York, pp 127–149
Markides C (1999) A dynamic view of strategy. Sloan Manag Rev 40:55–63
Mulcaster WR (2009) Three strategic frameworks. Bus Strategy Ser 10:68–75
Rugman AM, Verbeke A (2001) Subsidiary-specific advantages in multinational enterprises.
Strateg Manag J 22(3):237–250
Scott AJ (2006) Entrepreneurship, innovation and industrial development: geography and the
creative field revisited. Small Bus Econ 26:1–24
Williamson OE (1999) Strategy research: governance and competence perspectives. Strateg
Manag J 20(12):1087–1108