
International Journal of Pure and Applied Mathematics

Volume 119 No. 14 2018, 1719-1727


ISSN: 1314-3395 (on-line version)
url: http://www.ijpam.eu
Special Issue

A HYBRID FORECASTING MODEL FOR PREDICTION OF STOCK VALUE OF TATA STEEL USING
SUPPORT VECTOR REGRESSION AND PARTICLE SWARM OPTIMIZATION

Mohammed Siddique 1, Sabyasachi Mohanty 2, Debdulal Panda 3

1 Dept. of Mathematics, CUTM, Bhubaneswar, India
2 Dept. of CS, CUTM, Bhubaneswar, India
3 Dept. of Mathematics, KIIT University, Bhubaneswar, India

ABSTRACT:

Financial time series forecasting has always drawn a lot of attention from investors and researchers. The trend of the stock market is extremely complex and is influenced by various factors, so identifying the factors most significant to the stock market is really important. But the high noise and complexity residing in financial data make this job very challenging. Many researchers have used support vector regression (SVR) to overcome this challenge. As the latent high noise in the data impairs performance, reducing the noise is worthwhile while constructing the forecasting model. To achieve this, the integration of SVR with particle swarm optimization (PSO) is proposed in this research work. This paper analyzes a series of technical indicators used in usual studies of the stock market and applies support vector regression and the particle swarm optimization algorithm.

The performance of the proposed approach is evaluated with 18 years' daily transactional data of Tata Steel stock prices from the Bombay Stock Exchange (BSE). Empirical results show that the proposed model enhances the performance of the previous prediction model. The approach is compared with existing models on a real data set and gives more accurate results, with a MAPE of approximately 0.7%.

Keywords: Stock market; financial time series forecasting; support vector regression; particle swarm optimization.

INTRODUCTION

Stock market analysis has always been an essential part of the financial sector of any country. Most investors presently depend upon intelligent trading systems for prediction of stock market prices based on various conditions. Precision of these forecast systems is necessary for better investment decisions with minimum risk. Prediction of stock prices has been beneficial for both individual and institutional investors. Predicting stock market prices is a moderately challenging task, and technical analysis is a popular approach to studying the stock market.

Researchers use various machine learning and artificial intelligence approaches to forecast future trends or prices. Artificial neural networks (ANN), support vector machines (SVM) and logistic regression (LR) have been used by many for this kind of forecasting task. Among these, SVM is considered one of the best performing techniques, provided appropriate initialization of its regularization parameters is made.

We used support vector regression and the particle swarm optimization technique for forecasting the stock price of TATA STEEL. Support vector regression requires its hyperparameters (i.e., cost and gamma) to be optimized to perform better, and hence particle swarm optimization (PSO) is used to optimize them. The technical indicators used in this analysis are calculated from the historical trading data. Lagged data in the time series domain have always influenced forecasting accuracy, and the availability of lagged data for our proposed model, PSO-SVR, leads to better performance than standard SVR.

The rest of this paper is organized as follows. The literature review is highlighted in Section 1, and a brief description of SVR and PSO is given in Section 2. Section 3 explains the methodology and the process involved in the hybrid model under study, i.e., PSO-SVR. In Section 4 the experimental analysis is presented, and finally the paper is concluded in Section 5.

1. LITERATURE REVIEW

Vapnik et al., 1999 [1] presented the support vector machine as a learning system grounded in statistical learning theory. Support vector machines have been utilized by Kim KJ, 2003 [2] and Hu, Su, Hao & Tang, 2009 [3] for forecasting financial time series. Kim KJ analyzed the effect of the value of the upper bound C and the kernel parameter δ² in support vector machines and concluded that the prediction performance of SVMs is sensitive to the values of these parameters. Tony Van Gestel, Johan A. K. Suykens, Dirk-Emma Baestaens et al., 2001 [4] proposed a model combining the Bayesian evidence framework with least squares support vector machines for nonlinear regression, validated on forecasts of the weekly US short-term T-bill rate and the daily closing prices of the DAX30 stock index. Wei Huang, Yoshiteru Nakamori & Shou-Yang Wang, 2005 [5] summarized stock trading decision support systems and proposed that the support vector machine is a superior tool for financial stock market prediction. Yuling Lin, Haixiang Guo & Jinglu Hu, 2013 [6] proposed a support vector machine based stock market prediction model. They implemented the piecewise linear principle, with characteristic weights integrated to build the optimal separating hyperplane, which assesses stock indicators and controls overfitting in stock market prediction. They tried this methodology on Taiwan stock market datasets and found that it outperforms conventional stock market prediction systems. Lucas Lai & James Liu, 2014 [7] implemented support vector machine and least squares support vector machine models for stock market prediction. They considered three systems, Generalized Autoregressive Conditional Heteroskedasticity (GARCH), Support Vector Regression (SVR) and Least Squares Support Vector Machine (LSSVM), with the wavelet kernel, for the configuration of three novel algorithms, namely wavelet-based GARCH (WL_GARCH), wavelet-based SVR (WL_SVR) and wavelet-based LSSVM (WL_LSSVM), to address the non-parametric and nonlinear financial time series problem. Shom Prasad Das & Sudarsan Padhy, 2012 [8] incorporated the back propagation (BP) technique and the support vector machine (SVM) technique to forecast futures prices in the Indian stock market, and showed that support vector machines generalize better than conventional methods. Yongsheng Ding, Xinping Song & Yueming Zen, 2008 [9] constructed support vector machines based on fundamental data to forecast stock crises and the financial position of companies in the Chinese market. They applied 10-fold cross-validation and the grid-search technique to obtain the optimal hyperparameters C and γ for different kernel functions, compared the prediction performance of SVMs with four dissimilar kernels, and concluded that the radial basis function (RBF) kernel performs best among the four. They also statistically compared the prediction accuracy with back propagation neural networks (BPNN), multiple discriminant analysis (MDA) and logistic regression (Logit); the empirical results show that the RBF-kernel SVM is superior to the other kernel SVMs and to the BPNN, MDA and Logit models. Shen, Shunrong, Haomiao Jiang & Tongda Zhang, 2012 [10] proposed a forecast algorithm that makes use of the temporal relationships among global stock markets and various financial products to predict the next-day stock value using support vector machines. They used the same algorithm with individual regression algorithms to forecast actual growth in the markets, and finally built a basic trading model and compared its performance with existing algorithms. Puspanjali Mohapatra, Soumya Das, Tapas Kumar Patra & Munnangi Anirudh, 2013 [11] presented a comparative study of a particle swarm optimization (PSO) based hybrid swarmnet and a simple functional link artificial neural network (FLANN) model. Both models were initially trained with the least mean square (LMS) algorithm, and then with the particle swarm optimization (PSO) algorithm. The models predicted the stock price of two different datasets, NIFTY and NASDAQ, on different time horizons (one day, one week and one month ahead). Performance was evaluated on the basis of root mean square error (RMSE) and mean absolute percentage error (MAPE). It was verified that the PSO-based hybrid swarmnet performed better than the PSO-based FLANN model, the simple hybrid model trained with LMS, and the simple FLANN model trained with LMS. Mohammed Siddique, Debdulal Panda, Sumanjit Das et al., 2017 [12] proposed a hybrid model to forecast stock prices using an artificial neural network (ANN) optimized by particle swarm optimization (PSO), consisting of an effective algorithm for predicting the next-day high price of Yahoo stock and Microsoft stock. M. Karazmodeh, S. Nasiri and S. Majid Hashemi, 2013 [13] proposed an improved hybrid system based on particle swarm optimization and support vector machines (IPSOSVM) to predict future stock prices. Rohit Choudhry and Kumkum Garg, 2008 [14] proposed a hybrid GA-SVM system for predicting future stock prices. Cheng-Lung Huang and Jian-Fan Dun, 2008 [15] proposed a new hybrid PSO-SVM system using both continuous-valued and discrete-valued PSO versions, and showed experimentally that it optimizes the model parameters and searches the discriminating feature subset simultaneously.

2. METHODOLOGY USED

1. Support Vector Machine for Regression

The support vector machine is one of the best binary classifiers. SVM creates a decision boundary such that the majority of the points in one category fall on one side of the boundary while most points of the other category fall on the other side. Consider an n-dimensional feature vector X = (x1, x2, ..., xn). We can define a hyperplane

α0 + α1x1 + α2x2 + ... + αnxn = 0

Then elements in one category will be such that the sum is greater than 0, while elements in the other category will have the sum less than 0. We construct a label classifier Y = sign(α0 + Σi αi xi), where Y ϵ {-1, 1}. We can rewrite the hyperplane equation using inner products as Y = sign(α0 + Σi βi Yi (Xi ∗ X)), where ∗ represents the inner product operator; each inner product is weighted by its label.

The margin of the optimal hyperplane is obtained by maximizing the distance from the plane to any point. The maximum margin hyperplane (MMH) splits the data very well. The essential aspect is that only the points nearest to the boundary of the hyperplane participate in its selection; all other points are irrelevant. These points are known as the support vectors, and the hyperplane is known as a support vector classifier (SVC), as it places each support vector in one class or the other. The inner products in the SVC are weighted by their labels, and the SVC maximizes the distance from the hyperplane to the support vectors.

The basic concept of SVM is to maximize the margin of the hyperplane in the feature space. The principle of the standard Support Vector Machine for Regression (SVR) model, a supervised machine learning technique developed by Vapnik et al. [1], is described below.

Given a sample data-set S = {(x1, y1), (x2, y2), ..., (xk, yk)} representing k input-output pairs, where each xi ϵ X, a subset of Rn denoting the n-dimensional input sample space, and the matching target values yi ϵ Y, a subset of R, for i = 1, 2, ..., k. The objective of this regression problem is to find a function f : Rn → R to approximate the value of y for hidden and unlabeled x not present in the training data-set. Through a nonlinear mapping function ϕ, the input data is mapped from Rn to a higher dimensional space Rm, where m > n, and hence the estimating function f is defined as

f(x) = wᵀϕ(x) + b ----(1)

where w ϵ Rm is the regression coefficient vector and b ϵ R is the bias or threshold value. The objective of support vector regression is to find a function f that has at most ϵ-deviation from the targets yi. We want to determine w and b such that the value of f(x) can be determined by minimizing the risk

Rreg(w) = ||w||² + K Σi Lϵ(yi, f(xi)) ----(2)

where K determines the trade-off between the flatness of f(x) and the amount up to which deviations greater than ϵ are tolerated. K is the penalty factor, a user-defined constant that determines the trade-off between the training error and the penalizing term ||w||², and Lϵ(yi, f(xi)) is the ϵ-insensitive loss function, defined as

Lϵ(yi, f(xi)) = |yi - f(xi)| - ϵ, if |yi - f(xi)| ≥ ϵ
             = 0, otherwise ----(3)

The minimization of the risk functional in equation (2) can be reformulated by introducing non-negative slack variables γi and ξi as

Rreg(w, γ, ξ) = Minimize ||w||² + K Σi (γi + ξi) ----(4)

subject to the constraints

yi - (w·xi + b) ≤ ϵ + γi
(w·xi + b) - yi ≤ ϵ + ξi ----(5)
γi, ξi ≥ 0

where ||w||² is the regularization term preventing over-learning, Σi (γi + ξi) is the empirical risk, and K > 0 is the regularization constant, which controls the trade-off between the empirical risk and the regularization term.

By introducing Lagrange multipliers αi, βi, μi and ηi, the quadratic optimization problem (4)-(5) can be formulated as

L = ||w||² + K Σi (γi + ξi) - Σi αi (ϵ + γi - yi + w·xi + b) - Σi βi (ϵ + ξi + yi - w·xi - b) - Σi (μi γi + ηi ξi) ----(6)

The dual of the corresponding optimization problem (4)-(5) is represented as

Maximize  -(1/2) Σi,j (αi - βi)(αj - βj) (xi·xj) - ϵ Σi (αi + βi) + Σi yi (αi - βi)

subject to the constraints

Σi (αi - βi) = 0
αi, βi ϵ [0, K]

Substituting w = Σi (αi - βi) ϕ(xi), the function f(x) can be written as

f(x) = Σi (αi - βi) ϕ(xi)ᵀ ϕ(x) + b ----(7)

Consequently, by applying Lagrangian theory and the Karush-Kuhn-Tucker conditions, the general support vector regression function can be expressed as

f(x) = Σi (αi - βi) K(xi, x) + b ----(8)

where K(xi, xj) is known as the kernel function. The value of the kernel function is equal to the inner product of ϕ(xi) and ϕ(xj) in the feature space, such that

K(xi, xj) = ϕ(xi) · ϕ(xj) ----(9)

2. Particle Swarm Optimization (PSO)

Particle swarm optimization (PSO) is one of the leading meta-heuristic optimization methods, motivated by the coordinated, collective social behavior of birds and fish. It was originally introduced by Kennedy and Eberhart in 1995. In PSO, each particle flies through the multidimensional search space and adjusts its position in every step until it reaches an optimum solution. The fitness value of each particle is given by the objective function at its position. Each particle i maintains a trace of the position of its previous best performance in a vector called pbest. The nbest is another 'best' value tracked by the particle swarm optimizer: the best value achieved so far by any particle in that particle's neighborhood. When a particle takes the total population as its topological neighbors, the best value is the global best, called gbest. Every particle represents a possible solution to the optimization problem and can share information about the search space; each particle moves in the direction of its own best solution and of the global best position discovered by any particle in the swarm. In each iteration every particle calculates its own velocity, updates its position, and updates its pbest value; from the particle bests (pbest), the global best (gbest) value is determined.

Working Process of PSO

Step 1: Initialize the swarm particles in the search space randomly.
Step 2: Calculate the fitness value of each particle using the objective function and consider it as pbest.
Step 3: Update the velocity and the location of each particle.

The velocity of each particle is updated using the equation

Vt = w·Vt-1 + c1·r1·(pbt-1 - Pt-1) + c2·r2·(gbt-1 - Pt-1)

where c1 and c2 are acceleration coefficients and r1, r2 are random numbers in [0, 1]. The location of each particle is updated using the equation

Pt = Pt-1 + Vt

Step 4: Update pbest and gbest.
Step 5: Stop if the maximum number of iterations is reached; otherwise repeat from Step 2.

3. PROPOSED MODEL

The proposed model is built using particle swarm optimization (PSO) and support vector regression (SVR). In this model SVR is at the core of the prediction mechanism and PSO optimizes the free parameters of the SVR. In SVR, proper selection of the kernel type, the regularization parameter and the ϵ-insensitive loss are the most critical choices. In this proposed model we have used the radial basis function (RBF) kernel due to the nonlinear nature of the dataset under study; mathematically, the RBF kernel is defined as K(u, v) = e^(-γ||u-v||²), where γ is the kernel width parameter. The free parameters of SVR that are optimized by PSO are cost and gamma. The detailed flowchart of optimizing the hyperparameters of SVR is shown in Figure-1.

Figure-1: Flowchart of PSO-SVR mechanism

Here the dataset comprises features based on time series. The model is built upon the concept of lagged (past period) values of the last 5 days. For each day, the 7 attributes mentioned in Table-1 are captured and used as key attributes for this time-series forecasting mechanism.

Table-1: Variables and their description

Sl.  Variable        Description
1    Open Price      The price at which the stock first trades upon the opening of an exchange on a trading day.
2    Highest Price   The highest price of a share on a trading day.
3    Lowest Price    The lowest price of a share on a trading day.
4    Close Price     The price at which the stock last trades upon the closing of an exchange on a trading day.
5    No. of Shares   Total quantity of shares traded on a trading day.
6    No. of Trades   Total number of trades that happened on a trading day.
7    Turnover        Total value of stock traded on a trading day.
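The five-step working process above maps directly onto code. The following is a minimal, self-contained sketch of the PSO update rules, not the authors' implementation: a simple quadratic function stands in for the objective the paper actually minimizes (the SVR validation error as a function of cost and gamma), and the swarm size, inertia weight w, and acceleration coefficients c1, c2 are illustrative assumptions.

```python
import random

def pso(objective, dim, n_particles=20, iters=100,
        w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0), seed=42):
    """Minimize `objective` using the five-step PSO process described above."""
    rng = random.Random(seed)
    lo, hi = bounds
    # Step 1: initialize the swarm particles randomly in the search space.
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    # Step 2: the initial fitness of each particle is its first pbest.
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):                      # Step 5: run to max iterations.
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Step 3: Vt = w*Vt-1 + c1*r1*(pbest - P) + c2*r2*(gbest - P)
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]          # Pt = Pt-1 + Vt
            # Step 4: update pbest (and gbest) when the new position is better.
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Stand-in for the SVR cross-validation error over (cost, gamma).
sphere = lambda x: sum(v * v for v in x)
best, best_val = pso(sphere, dim=2)
```

In the PSO-SVR setting, `objective` would train an SVR with the candidate (cost, gamma) pair and return its validation error, so each particle's position is one hyperparameter configuration.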
In order to avoid numerical difficulties during computation and to prevent dominance of features with greater numerical ranges over those with smaller numerical ranges, normalization has been implemented during the pre-processing stage. Here, normalization of the data has been achieved by linearly scaling to [0, 1] using the following equation:

NVi = (Ai - Amin) / (Amax - Amin), for i = 1, 2, 3, ..., l

where Ai is the actual value of the i-th feature, l is the total number of data points available, Amax and Amin are the maximum and minimum values respectively, and NVi is the corresponding normalized value.

After dividing the dataset into training and testing datasets, the model building process starts using the training dataset by initializing the parameters of PSO and SVR. The optimized values of the hyperparameters of SVR are searched using PSO, and the search continues until the termination criteria are reached. Finally, the SVR is built using the optimized values attained in the search process and applied to the testing dataset.

4. EXPERIMENTAL RESULTS AND DISCUSSIONS

4.1 Evaluation Criteria

To evaluate the performance of the proposed regression model, we have used three standard statistical metrics: mean absolute error (MAE), root mean squared error (RMSE) and mean absolute percentage error (MAPE), described below. As MAE, RMSE and MAPE all measure differences between the actual and predicted values, the smaller the error value, the better the performance.

Sl  Metric                                  Definition                   Description
1   Mean Absolute Error (MAE)               (1/l) Σ |yi - di|            Sum of the absolute differences between the actual values and the forecasts, divided by the number of observations.
2   Root Mean Squared Error (RMSE)          sqrt((1/l) Σ (yi - di)²)     Square root of the sum of squared errors divided by the number of observations.
3   Mean Absolute Percentage Error (MAPE)   (100/l) Σ |yi - di| / di     Average of the absolute errors divided by the actual observation values, expressed as a percentage.

where l is the total number of instances or records under evaluation, di is the desired output value, i.e., the actual or true value of interest, and yi is the estimated value obtained using a prediction algorithm.

4.2 Comparison of Results

In this study, the performance of our proposed hybrid model, PSO-SVR, is compared with the standard SVR model. The PSO-SVR model is designed with Support Vector Machine for Regression (SVR) at its core and Particle Swarm Optimization (PSO) for optimizing the hyperparameters of the SVR.

The dataset is divided into training and testing datasets, applied to the models in the training and testing phases respectively, for predicting the next day's opening price. Out of 4143 records of Tata Steel data (from 24-July-2001 to 19-March-2018), three-fourths of the data are used for building the training dataset and the remaining one-fourth for the testing dataset. The errors of PSO-SVR evaluated with MAE, RMSE and MAPE in the training phase are 2.7602, 5.7413 and 0.6899% (approx.) respectively, and the errors in the testing phase are 2.9291, 6.4949 and 0.7085% (approx.) respectively. Table-2 shows the error measures found for both models, i.e., standard SVR and PSO-SVR. This empirical study shows that PSO-SVR outperformed standard SVR on all three evaluation criteria.

Table-2: Comparison of Performance of Standard SVR and PSO-SVR Models on Training and Testing Datasets

                    Standard SVR    PSO-SVR
Training  MAE       4.145735418     2.760213993
          RMSE      7.857402569     5.741340821
          MAPE      1.75813745 %    0.68994578 %
Testing   MAE       12.54774696     2.929112587
          RMSE      22.50747649     6.494903279
          MAPE      3.21849913 %    0.708516926 %

Figures 2 to 5 show the comparison of the actual stock values and the prediction of stock values using PSO-SVR, along with the absolute error.
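The three error metrics defined in Section 4.1 follow directly from their formulas. The sketch below is an illustrative implementation of those definitions, evaluated on hypothetical actual (d) and predicted (y) series rather than the paper's Tata Steel data.

```python
import math

def mae(actual, pred):
    # Mean absolute error: (1/l) * sum of |y_i - d_i|.
    return sum(abs(y - d) for y, d in zip(pred, actual)) / len(actual)

def rmse(actual, pred):
    # Root mean squared error: sqrt((1/l) * sum of (y_i - d_i)^2).
    return math.sqrt(sum((y - d) ** 2 for y, d in zip(pred, actual)) / len(actual))

def mape(actual, pred):
    # Mean absolute percentage error: (100/l) * sum of |y_i - d_i| / d_i.
    return 100.0 * sum(abs(y - d) / d for y, d in zip(pred, actual)) / len(actual)

# Hypothetical next-day opening prices: desired (actual) vs. predicted.
actual = [100.0, 102.0, 105.0, 103.0]
pred = [101.0, 101.5, 104.0, 104.0]
errors = (mae(actual, pred), rmse(actual, pred), mape(actual, pred))
```

Smaller values on all three metrics indicate better performance, which is how Table-2 ranks PSO-SVR against standard SVR.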
Figure-2: Actual versus prediction of PSO-SVR on the complete dataset

Figure-3: Actual versus prediction of PSO-SVR on the training dataset

Figure-4: Actual versus prediction of PSO-SVR on the testing dataset

Figure-5: Absolute error of PSO-SVR

5. CONCLUSION

Our proposed model for addressing the problem of next-day stock price prediction presents results that are quite acceptable not only from the research point of view but also for practical use. The results demonstrated by PSO-SVR attain a mean absolute percentage error (MAPE) of approximately 0.7% on the testing dataset. The proposed model also outperforms standard SVR on all three evaluation measures, i.e., MAE, RMSE and MAPE. These results were achieved through the use of particle swarm optimization (PSO) to optimize the free parameters (cost and gamma) of Support Vector Machine for Regression on the lagged time-series dataset. The dataset was built using 35 attributes, composed of the 7 attributes for each of the last 5 days before the prediction day. This model can also be extended by varying the number of lagged attributes present in the dataset. From the application point of view, we are quite hopeful that our proposed model (PSO-SVR) will be of great help in forecasting not only stock prices but other quantities in the financial domain as well.

REFERENCES:

[1] V.N. Vapnik, M. Jordan, S.L. Lauritzen & J.F. Lawless, The Nature of Statistical Learning Theory, Springer, 1999.
[2] K.J. Kim, 'Financial time series forecasting using support vector machines', Neurocomputing, 2003, vol. 55, pp. 307-319.
[3] Y. Hu, Su Peng, Hao Xuchan & Tang Fei, 'The long-term predictive effect of SVM financial crisis early-warning model', Proceedings of the First IEEE International Conference on Information Science and Engineering, 2009.
[4] Tony Van Gestel, Johan A. K. Suykens, Dirk-Emma Baestaens et al., 'Financial Time Series Prediction Using Least Squares Support Vector Machines Within the Evidence Framework', IEEE Transactions on Neural Networks, 2001, vol. 12, no. 4, pp. 809-821.
[5] Wei Huang, Yoshiteru Nakamori & Shou-Yang Wang, 'Forecasting stock market movement direction with support vector machine', Computers & Operations Research, 2005, vol. 32, no. 10, pp. 2513-2522.
[6] Yuling Lin, Haixiang Guo & Jinglu Hu, 'An SVM-based approach for stock market trend prediction', Neural Networks, The International Joint Conference on, IEEE, 2013, pp. 1-7.
[7] Lucas Lai & James Liu, 'Support Vector Machine and Least Square Support Vector Machine Stock Forecasting Models', Computer Science and Information Technology, 2014, vol. 2, no. 1, pp. 30-39.
[8] Shom Prasad Das & Sudarsan Padhy, 'Support Vector Machines for Prediction of Futures Prices in Indian Stock Market', International Journal of Computer Applications, 2012, vol. 41, no. 3, pp. 22-26.
[9] Yongsheng Ding, Xinping Song & Yueming Zen, 'Forecasting financial condition of Chinese listed companies based on support vector machine', Expert Systems with Applications, 2008, vol. 34, no. 4, pp. 3081-3089.
[10] Shen, Shunrong, Haomiao Jiang & Tongda Zhang, 'Stock Market Forecasting Using Machine Learning Algorithms', 2012.
[11] Puspanjali Mohapatra, Soumya Das, Tapas Kumar Patra et al., 2013, vol. 2, no. 3, pp. 78-85.
[12] Mohammed Siddique, Debdulal Panda, Sumanjit Das et al., International Journal of Pure and Applied Mathematics, 2017, vol. 117, no. 19, pp. 357-362.
[13] M. Karazmodeh, S. Nasiri and S. Majid Hashemi, Journal of Automation and Control Engineering, 2013, vol. 1, no. 2, pp. 173-176.
[14] Rohit Choudhry and Kumkum Garg, World Academy of Science, Engineering and Technology, 2008, pp. 315-318.
[15] Cheng-Lung Huang and Jian-Fan Dun, Applied Soft Computing, 2008, vol. 8, pp. 1381-1391.
