Advances in Applied
Artificial Intelligence

John Fulcher, University of Wollongong, Australia

IDEA GROUP PUBLISHING


Hershey • London • Melbourne • Singapore
Acquisitions Editor: Michelle Potter
Development Editor: Kristin Roth
Senior Managing Editor: Amanda Appicello
Managing Editor: Jennifer Neidig
Copy Editor: Susanna Svidunovich
Typesetter: Sharon Berger
Cover Design: Lisa Tosheff
Printed at: Yurchak Printing Inc.

Published in the United States of America by


Idea Group Publishing (an imprint of Idea Group Inc.)
701 E. Chocolate Avenue, Suite 200
Hershey PA 17033
Tel: 717-533-8845
Fax: 717-533-8661
E-mail: [email protected]
Web site: http://www.idea-group.com

and in the United Kingdom by


Idea Group Publishing (an imprint of Idea Group Inc.)
3 Henrietta Street
Covent Garden
London WC2E 8LU
Tel: 44 20 7240 0856
Fax: 44 20 7379 0609
Web site: http://www.eurospanonline.com

Copyright © 2006 by Idea Group Inc. All rights reserved. No part of this book may be
reproduced, stored or distributed in any form or by any means, electronic or mechanical,
including photocopying, without written permission from the publisher.

Product or company names used in this book are for identification purposes only.
Inclusion of the names of the products or companies does not indicate a claim of
ownership by IGI of the trademark or registered trademark.

Library of Congress Cataloging-in-Publication Data

Advances in applied artificial intelligence / John Fulcher, editor.


p. cm.
Summary: "This book explores artificial intelligence finding it cannot simply display the
high-level behaviours of an expert but must exhibit some of the low level behaviours
common to human existence"--Provided by publisher.
Includes bibliographical references and index.
ISBN 1-59140-827-X (hardcover) -- ISBN 1-59140-828-8 (softcover) -- ISBN 1-59140-
829-6 (ebook)
1. Artificial intelligence. 2. Intelligent control systems. I. Fulcher, John.
Q335.A37 2006
006.3--dc22
2005032066

British Cataloguing in Publication Data


A Cataloguing in Publication record for this book is available from the British Library.

All work contributed to this book is new, previously-unpublished material. The views
expressed in this book are those of the authors, but not necessarily of the publisher.
IGP Forthcoming Titles in the
Computational Intelligence and
Its Applications Series
Biometric Image Discrimination Technologies
(February 2006 release)
David Zhang, Xiaoyuan Jing and Jian Yang
ISBN: 1-59140-830-X
Paperback ISBN: 1-59140-831-8
eISBN: 1-59140-832-6

Computational Economics: A Perspective from Computational Intelligence
(November 2005 release)
Shu-Heng Chen, Lakhmi Jain, and Chung-Ching Tai
ISBN: 1-59140-649-8
Paperback ISBN: 1-59140-650-1
eISBN: 1-59140-651-X

Computational Intelligence for Movement Sciences: Neural Networks, Support Vector Machines and Other Emerging Technologies
(February 2006 release)
Rezaul Begg and Marimuthu Palaniswami
ISBN: 1-59140-836-9
Paperback ISBN: 1-59140-837-7
eISBN: 1-59140-838-5

An Imitation-Based Approach to Modeling Homogenous Agents Societies
(July 2006 release)
Goran Trajkovski
ISBN: 1-59140-839-3
Paperback ISBN: 1-59140-840-7
eISBN: 1-59140-841-5

It’s Easy to Order! Visit www.idea-group.com!


717/533-8845 x10
Mon-Fri 8:30 am-5:00 pm (est) or fax 24 hours a day 717/533-8661

IDEA GROUP PUBLISHING


Hershey • London • Melbourne • Singapore

Excellent additions to your library!


This book is dedicated to
Taliver John Fulcher.
Advances in Applied
Artificial Intelligence
Table of Contents

Preface ........................................................................................................................viii

Chapter I
Soft Computing Paradigms and Regression Trees in Decision Support Systems .......1
Cong Tran, University of South Australia, Australia
Ajith Abraham, Chung-Ang University, Korea
Lakhmi Jain, University of South Australia, Australia

Chapter II
Application of Text Mining Methodologies to Health Insurance Schedules .............. 29
Ah Chung Tsoi, Monash University, Australia
Phuong Kim To, Tedis P/L, Australia
Markus Hagenbuchner, University of Wollongong, Australia

Chapter III
Coordinating Agent Interactions Under Open Environments .................................... 52
Quan Bai, University of Wollongong, Australia
Minjie Zhang, University of Wollongong, Australia
Chapter IV
Literacy by Way of Automatic Speech Recognition ................................................... 68
Russell Gluck, University of Wollongong, Australia
John Fulcher, University of Wollongong, Australia

Chapter V
Smart Cars: The Next Frontier ................................................................................ 120
Lars Petersson, National ICT Australia, Australia
Luke Fletcher, Australian National University, Australia
Nick Barnes, National ICT Australia, Australia
Alexander Zelinsky, CSIRO ICT Centre, Australia

Chapter VI
The Application of Swarm Intelligence to Collective Robots .................................... 157
Amanda J. C. Sharkey, University of Sheffield, UK
Noel Sharkey, University of Sheffield, UK

Chapter VII
Self-Organising Impact Sensing Networks in Robust Aerospace Vehicles ........... 186
Mikhail Prokopenko, CSIRO Information and Communication
Technology Centre and CSIRO Industrial Physics, Australia
Geoff Poulton, CSIRO Information and Communication
Technology Centre and CSIRO Industrial Physics, Australia
Don Price, CSIRO Information and Communication
Technology Centre and CSIRO Industrial Physics, Australia
Peter Wang, CSIRO Information and Communication
Technology Centre and CSIRO Industrial Physics, Australia
Philip Valencia, CSIRO Information and Communication
Technology Centre and CSIRO Industrial Physics, Australia
Nigel Hoschke, CSIRO Information and Communication
Technology Centre and CSIRO Industrial Physics, Australia
Tony Farmer, CSIRO Information and Communication
Technology Centre and CSIRO Industrial Physics, Australia
Mark Hedley, CSIRO Information and Communication
Technology Centre and CSIRO Industrial Physics, Australia
Chris Lewis, CSIRO Information and Communication
Technology Centre and CSIRO Industrial Physics, Australia
Andrew Scott, CSIRO Information and Communication
Technology Centre and CSIRO Industrial Physics, Australia

Chapter VIII
Knowledge Through Evolution .................................................................................. 234
Russell Beale, University of Birmingham, UK
Andy Pryke, University of Birmingham, UK
Chapter IX
Neural Networks for the Classification of Benign and Malignant Patterns in
Digital Mammograms ............................................................................................... 251
Brijesh Verma, Central Queensland University, Australia
Rinku Panchal, Central Queensland University, Australia

Chapter X
Swarm Intelligence and the Taguchi Method for Identification of Fuzzy Models .... 273
Arun Khosla, National Institute of Technology, Jalandhar, India
Shakti Kumar, Haryana Engineering College, Jalandhar, India
K. K. Aggarwal, GGS Indraprastha University, Delhi, India

About the Authors ..................................................................................................... 296

Index ........................................................................................................................ 305



Preface

Discussion on the nature of intelligence long pre-dated the development of the electronic computer, but along with that development came a renewed burst of investi-
gation into what an artificial intelligence would be. There is still no consensus on how
to define artificial intelligence: Early definitions tended to discuss the types of behaviours
which we would class as intelligent, such as proving mathematical theorems or
displaying high-level medical expertise. Certainly such tasks are signals to us that the
person exhibiting such behaviours is an expert and deemed to be engaging in intelli-
gent behaviours; however, 60 years of experience in programming computers has shown
that many behaviours to which we do not ascribe intelligence actually require a great
deal of skill. These behaviours tend to be ones which all normal adult humans find
relatively easy, such as speech, face recognition, and everyday motion in the world.
The fact that we have found it to be extremely difficult to tackle such mundane prob-
lems suggests to many scientists that an artificial intelligence cannot simply display
the high-level behaviours of an expert but must, in some way, exhibit some of the low-
level behaviours common to human existence.
Yet this stance does not answer the question of what constitutes an artificial
intelligence but merely moves the question to what common low-level behaviours are
necessary for an artificial intelligence. It seems unsatisfactory to take the stance which
some do, that states that we would know one if we met one. This book takes a very
pragmatic approach to the problem by tackling individual problems and seeking to use
tools from the artificial intelligence community to solve these problems. The tech-
niques that are used tend to be those which are suggested by human life, such as
artificial neural networks and evolutionary algorithms. The underlying reasoning be-
hind such technologies is that we have not created intelligences through such high-
level techniques as logic programming; therefore, there must be something in the actu-
ality of life itself which begets intelligence. For example, the study of artificial neural
networks is both an engineering study, in that some practitioners wish to build
machines based on artificial neural networks which can solve specific problems, and
a study which gives us some insight into how our own intelligences are generated.
Regardless of the reason given for this study, the common rationale is that there is
something in the bricks and mortar of brains — the actual neurons and synapses —
which is crucial to the display of intelligence. Therefore, to display intelligence, we are
required to create machines which also have artificial neurons and synapses.

Similarly, the rationale behind agent programs is based on a belief that we become
intelligent within our social groups. A single human raised in isolation will never be as
intelligent as one who comes into daily contact with others throughout his or her
developing life. Note that for this to be true, it is also required that the agent be able to
learn in some way to modulate its actions and responses to those of the group. There-
fore, a pre-programmed agent will not be as strong as an agent which is given the ability
to dynamically change its behaviour over time. The evolutionary approach too shares
this view in that the final population is not a pre-programmed solution to a problem, but
rather emerges through the processes of survival-of-the-fittest and reproduction
with inaccuracies.
Whether any one technology will prove to be the central one in creating artificial
intelligence or whether a combination of technologies will be necessary to create an
artificial intelligence is still an open question, so many scientists are experimenting
with mixtures of such techniques.
In this volume, we see such questions implicitly addressed by scientists tackling
specific problems which require intelligence with both individual and combinations of
specific artificial intelligence techniques.

OVERVIEW OF THIS BOOK


In Chapter I, Tran, Abraham, and Jain investigate the use of multiple soft comput-
ing techniques such as neural networks, evolutionary algorithms, and fuzzy inference
methods for creating intelligent decision support systems. Their particular emphasis is
on blending these methods to provide a decision support system which is robust, can
learn from the data, can handle uncertainty, and can give some response even in situa-
tions for which no prior human decisions have been made. They have carried out
extensive comparative work with the various techniques on their chosen application,
which is the field of tactical air combat.
In Chapter II, Tsoi, To, and Hagenbuchner tackle a difficult problem in text mining
— automatic classification of documents using only the words in the documents. They
discuss a number of rival and cooperating techniques and, in particular, give a very
clear discussion on latent semantic kernels. Kernel techniques have risen to promi-
nence recently due to the pioneering work of Vapnik. The application to text mining in
developing kernels specifically for this task is one of the major achievements in this
field. The comparative study on health insurance schedules makes interesting reading.
Bai and Zhang in Chapter III take a very strong position on what constitutes an
agent: “An intelligent agent is a reactive, proactive, autonomous, and social entity”.
Their chapter concentrates very strongly on the last aspect since it deals with multi-
agent systems in which the relations between agents are neither pre-defined nor fixed,
but learned. The problems of inter-agent communication are discussed under two
headings: The first investigates how an agent may have knowledge of its world and
what ontologies can be used to specify the knowledge; the second deals with agent
interaction protocols and how these may be formalised. These are set in the discussion
of a supply-chain formation.
Like many of the chapters in this volume, Chapter IV forms almost a mini-book (at
50+ pages), but Gluck and Fulcher give an extensive review of automatic speech recog-
nition systems covering pre-processing, feature extraction, and pattern matching. The
authors give an excellent review of the main techniques currently used including hid-
den Markov models, linear predictive coding, dynamic time warping, and artificial neu-
ral networks with the authors’ familiarity with the nuts-and-bolts of the techniques
being evident in the detail with which they discuss each technique. For example, the
artificial neural network section discusses not only the standard back propagation
algorithm and self-organizing maps, but also recurrent neural networks and the related
time-delay neural networks. However, the main topic of the chapter is the review of the
draw-talk-write approach to literacy which has been ongoing research for almost a
decade. Most recent work has seen this technique automated using several of the
techniques discussed above. The result is a socially-useful method which is still in
development but shows a great deal of potential.
Petersson, Fletcher, Barnes, and Zelinsky turn our attention to their Smart Cars
project in Chapter V. This deals with the intricacies of Driver Assistance Systems,
enhancing the driver’s ability to drive rather than replacing the driver. Much of their
work is with monitoring systems, but they also have strong reasoning systems which,
since the work involves keeping the driver in the loop, must be intuitive and explana-
tory. The system involves a number of different technologies for different parts of the
system: Naturally, since this is a real-world application, much of the data acquired is
noisy, so statistical methods and probabilistic modelling play a big part in their system,
while support vectors are used for object-classification.
Amanda and Noel Sharkey take a more technique-driven approach in Chapter VI
when they investigate the application of swarm techniques to collective robotics. Many
of the issues such as communication which arise in swarm intelligence mirror those of
multi-agent systems, but one of the defining attributes of swarms is that the individual
components should be extremely simple, a constraint which does not appear in multi-
agent systems. The Sharkeys enumerate the main components of such a system as
being composed of a group of simple agents which are autonomous, can communicate
only locally, and are biologically inspired. Each of these properties is discussed in
some detail in Chapter VI. Sometimes these techniques are combined with artificial
neural networks to control the individual agents, or with genetic algorithms, for example,
for developing control systems. The application to robotics gives a fascinating case-study.
In Chapter VII, the topic of structural health management (SHM) is introduced.
This “is a new approach to monitoring and maintaining the integrity and performance
of structures as they age and/or sustain damage”, and Prokopenko and his co-authors
are particularly interested in applying this to aerospace systems in which there are
inherent difficulties, in that they are operating under extreme conditions. A multi-agent
system is created to handle the various sub-tasks necessary in such a system, which is
created using an interaction between top-down dissection of the tasks to be done with
a bottom-up set of solutions for specific tasks. Interestingly, they consider that most of
the bottom-up development should be based on self-organising principles, which means
that the top-down dissection has to be very precise. Since they have a multi-agent
system, communication between the agents is a priority: They create a system whereby
only neighbours can communicate with one another, believing that this gives robust-
ness to the whole system in that there are then multiple channels of communication.
Their discussion of chaotic regimes and self-repair systems provides a fascinating
insight into the type of system which NASA is currently investigating. This chapter
places self-referentiability as a central factor in evolving multi-agent systems.

In Chapter VIII, Beale and Pryke make an elegant case for using computer algo-
rithms for the tasks for which they are best suited, while retaining human input into any
investigation for the tasks for which the human is best suited. In an exploratory data
investigation, for example, it may one day be interesting to identify clusters in a data
set, another day it may be more interesting to identify outliers, while a third day may see
the item of interest shift to the manifold in which the data lies. These aspects are
specific to an individual’s interests and will change in time; therefore, they develop a
mechanism by which the human user can determine the criterion of interest for a spe-
cific data set so that the algorithm can optimise the view of the data given to the human,
taking into account this criterion. They discuss trading accuracy for understanding in
that, if presenting 80% of a solution makes it more accessible to human understanding
than a possible 100% solution, it may be preferable to take the 80% solution. A combi-
nation of evolutionary algorithms and a type of spring model are used to generate
interesting views.
Chapter IX sees an investigation by Verma and Panchal into the use of neural
networks for digital mammography. The whole process is discussed here from collec-
tion of data, early detection of suspicious areas, area extraction, feature extraction and
selection, and finally the classification of patterns into ‘benign’ or ‘malignant’. An
extensive review of the literature is given, followed by a case study on some benchmark
data sets. Finally the authors make a plea for more use of standard data sets, something
that will meet with heartfelt agreement from other researchers who have tried to com-
pare the different methods one finds in the literature.
In Chapter X, Khosla, Kumar, and Aggarwal report on the application of particle
swarm optimisation and the Taguchi method to the derivation of optimal fuzzy models
from the available data. The authors emphasize the importance of selecting appropriate
PSO strategies and parameters for such tasks, as these impact significantly on perfor-
mance. Their approach is validated by way of data from a rapid Ni-Cd battery charger.
As we see, the chapters in this volume represent a wide spectrum of work, and
each is self-contained. Therefore, the reader can dip into this book in any order he/she
wishes. There are also extensive references within each chapter which an interested
reader may wish to pursue, so this book can be used as a central resource from which
major avenues of research may be approached.

Professor Colin Fyfe


The University of Paisley, Scotland
December, 2005

Chapter I

Soft Computing
Paradigms and
Regression Trees in
Decision Support Systems
Cong Tran, University of South Australia, Australia

Ajith Abraham, Chung-Ang University, Korea

Lakhmi Jain, University of South Australia, Australia

ABSTRACT
Decision-making is a process of choosing among alternative courses of action for
solving complicated problems where multi-criteria objectives are involved. The past
few years have witnessed a growing recognition of soft computing (SC) (Zadeh, 1998)
technologies that underlie the conception, design, and utilization of intelligent
systems. In this chapter, we present different SC paradigms involving an artificial
neural network (Zurada, 1992) trained by using the scaled conjugate gradient
algorithm (Moller, 1993), two different fuzzy inference methods (Abraham, 2001)
optimised by using neural network learning/evolutionary algorithms (Fogel, 1999),
and regression trees (Breiman, Friedman, Olshen, & Stone, 1984) for developing
intelligent decision support systems (Tran, Abraham, & Jain, 2004). We demonstrate
the efficiency of the different algorithms by developing a decision support system for
a tactical air combat environment (TACE) (Tran & Zahid, 2000). Some empirical
comparisons between the different algorithms are also provided.


INTRODUCTION
Several decision support systems have been developed in various fields including
medical diagnosis (Adibi, Ghoreishi, Fahimi, & Maleki, 1993), business management,
control system (Takagi & Sugeno, 1983), command and control of defence and air traffic
control (Chappel, 1992), and so on. Usually previous experience or expert knowledge is
often used to design decision support systems. The task becomes interesting when no
prior knowledge is available. The need for an intelligent mechanism for decision support
comes from the well-known limits of human knowledge processing. It has been noticed
that the need for support for human decision-makers is due to four kinds of limits:
cognitive, economic, time, and competitive demands (Holsapple & Whinston, 1996).
Several artificial intelligence techniques have been explored to construct adaptive
decision support systems. A framework that could capture imprecision, uncertainty,
learn from the data/information, and continuously optimise the solution by providing
interpretable decision rules, would be the ideal technique. Several adaptive learning
frameworks for constructing intelligent decision support systems have been proposed
(Cattral, Oppacher, & Deogo, 1999; Hung, 1993; Jagielska, 1998; Tran, Jain, & Abraham,
2002b). Figure 1 summarizes the basic functional aspects of a decision support system.
A database is created from the available data and human knowledge. The learning
process then builds up the decision rules. The developed rules are further fine-tuned,
depending upon the quality of the solution, using a supervised learning process.
To develop an intelligent decision support system, we need a holistic view on the
various tasks to be carried out including data management and knowledge management
(reasoning techniques). The focus of this chapter is knowledge management (Tran &
Zahid, 2000), which consists of facts and inference rules used for reasoning (Abraham,
2000).
Fuzzy logic (Zadeh, 1973), when applied to decision support systems, provides
formal methodology to capture valid patterns of reasoning about uncertainty. Artificial
neural networks (ANNs) are popularly known as black-box function approximators.
Recent research shows that the capability of rule extraction from a trained network
positions neuro-computing as a good decision support tool (Setiono, 2000; Setiono,
Leow, & Zurada, 2002). Recently, evolutionary computation (EC) (Fogel, 1999) has
emerged as a powerful global optimisation tool, with success in several problem
domains (Abraham, 2002; Cortes, Larrañeta, Onieva, García, & Caraballo, 2001;
Ponnuswamy, Amin, Jha, & Castañon, 1997; Tan & Li, 2001; Tan, Yu, Heng, & Lee, 2003).
EC works by simulating evolution on a computer by iterative generation and alteration
processes, operating on a set of candidate solutions that form a population. Due to the
complementarity of neural networks, fuzzy inference systems, and evolutionary compu-
tation, the recent trend is to fuse various systems to form a more powerful integrated
system, to overcome their individual weakness.
Decision trees (Breiman et al., 1984) have emerged as a powerful machine-learning
technique due to a simple, apparent, and fast reasoning process. Decision trees can be
related to artificial neural networks by mapping them into a class of ANNs or entropy nets
with far fewer connections.
In the next section, we present the complexity of the tactical air combat decision
support system (TACDSS) (Tran, Abraham, & Jain, 2002c), followed by some theoretical
foundation on neural networks, fuzzy inference systems, and decision trees in the


Figure 1. Database learning framework for decision support system

(Figure blocks: human knowledge; master data set; learning process; decision-making rules; environment measurement; solution evaluation; if acceptable, end; if unacceptable, the process repeats.)

following section. We then present different adaptation procedures for optimising fuzzy
inference systems. A Takagi-Sugeno (Takagi & Sugeno, 1983; Sugeno, 1985) and
Mamdani-Assilian (Mamdani & Assilian, 1975) fuzzy inference system learned by using
neural network learning techniques and evolutionary computation is discussed. Experi-
mental results using the different connectionist paradigms follow. Detailed discussions
of these results are presented in the last section, and conclusions are drawn.

TACTICAL AIR COMBAT DECISION SUPPORT SYSTEM
Implementation of a reliable decision support system involves two important
factors: collection and analysis of prior information, and the evaluation of the solution.
The data could be an image or a pattern, real number, binary code, or natural language
text data, depending on the objects of the problem environment. An object of the decision
problem is also known as the decision factor. These objects can be expressed mathemati-
cally in the decision problem domain as a universal set, where the decision factor is a set
and the decision data is an element of this set. The decision factor is a sub-set of the
decision problem. If we call the decision problem (DP) as X and the decision factor (DF)
as “A”, then the decision data (DD) could be labelled as “a”. Suppose the set A has
members a1, a2, ..., an, then it can be denoted by A = {a1, a2, ..., an} or can be written as:

A = {ai | i ∈Rn} (1)

where i is called the set index, the symbol “|” is read as “such that” and Rn is the set of
n real numbers. A sub-set “A” of X, denoted A⊆ X, is a set of elements that is contained
within the universal set X. For optimal decision-making, the system should be able to


adaptively process the information provided by words or any natural language descrip-
tion of the problem environment.
To illustrate the proposed approach, we consider a case study based on a tactical
environment problem. We aim to develop an environment decision support system for
a pilot or mission commander in tactical air combat. We will attempt to present the
complexity of the problem with some typical scenarios. In Figure 2, the Airborne Early
Warning and Control (AEW&C) is performing surveillance in a particular area of
operation. It has two Hornets (F/A-18s) under its control at the ground base shown as
“+” in the left corner of Figure 2. An air-to-air fuel tanker (KB707) “o” is on station —
the location and status of which are known to the AEW&C. One of the Hornets is on patrol
in the area of Combat Air Patrol (CAP). Sometime later, the AEW&C on-board sensors
detect hostile aircraft(s) shown as “O”. When the hostile aircraft enter the surveillance
region (shown as a dashed circle), the mission system software is able to identify the
enemy aircraft and estimate their distance from the Hornets in the ground base or in the
CAP.
The mission operator has few options to make a decision on the allocation of
Hornets to intercept the enemy aircraft:
• Send the Hornet directly to the spotted area and intercept,
• Call the Hornet in the area back to ground base or send another Hornet from the
ground base.
• Call the Hornet in the area for refuel before intercepting the enemy aircraft.

The mission operator will base his/her decisions on a number of factors, such as:
• Fuel reserve and weapon status of the Hornet in the area,
• Interrupt time of Hornets in the ground base or at the CAP to stop the hostile,
• The speed of the enemy fighter aircraft and the type of weapons it possesses.

Figure 2. A typical air combat scenario

(Figure labels: surveillance boundary, hostiles, fighter on CAP, tanker aircraft, fighters at ground base.)


Table 1. Decision factors for the tactical air combat

Fuel reserve    Intercept time    Weapon status    Danger situation    Evaluation plan
Full            Fast              Sufficient       Very dangerous      Good
Half            Normal            Enough           Dangerous           Acceptable
Low             Slow              Insufficient     Endangered          Bad

From the above scenario, it is evident that there are important decision factors of
the tactical environment that might directly affect the air combat decision. For demon-
strating our proposed approach, we will simplify the problem by handling only a few
important decision factors such as “fuel status”, “interrupt time” (Hornets in the ground
base and in the area of CAP), “weapon possession status”, and “situation awareness”
(Table 1). The developed tactical air combat decision rules (Abraham & Jain, 2002c)
should be able to incorporate all the above-mentioned decision factors.

Knowledge of Tactical Air Combat Environment


How can human knowledge be extracted to a database? Very often people express
knowledge as natural (spoken) language or using letters or symbolic terms. The human
knowledge can be analysed and converted into an information table. There are several
methods to extract human knowledge. Some researchers use cognitive work analysis
(CWA) (Sanderson, 1998); others use cognitive task analysis (CTA) (Militallo, 1998).
CWA is a technique used to analyse, design, and evaluate human computer interactive
systems. CTA is a method used to identify cognitive skills and mental demands, and
needs to perform these tasks proficiently. CTA focuses on describing the representation
of the cognitive elements that define goal generation and decision making. It is a reliable
method to extract human knowledge because it is based on observations or an interview.
We have used the CTA technique to set up the expert knowledge base for building the
complete decision support system. For the TACE discussed previously, we have four
decision factors that could affect the final decision options of “Hornet in the CAP” or
“Hornet at the ground base”. These are: “fuel status” being the quantity of fuel available
to perform the intercept, the “weapon possession status” presenting the state of
available weapons inside the Hornet, the “interrupt time” which is required for the Hornet
to fly and interrupt the hostile, and the “danger situation” providing information whether
the aircraft is friendly or hostile.
Each of the above-mentioned factors has a different range of units, these being the
fuel (0 to 1000 litres), interrupt time (0 to 60 minutes), weapon status (0 to 100 %), and the
danger situation (0 to 10 points). The following are two important decision selection
rules, which were formulated using expert knowledge:
• The decision selection will have a small value if the fuel is too low, the interrupt time
is too long, the Hornet has low weapon status, and the Friend-Or-Enemy (FOE)
danger is high.


Table 2. Some prior knowledge of the TACE

Fuel status    Interrupt time    Weapon status    Danger situation    Decision selection
(litres)       (minutes)         (percent)        (points)            (points)
0              60                0                10                  0
100            55                15               8                   1
200            50                25               7                   2
300            40                30               5                   3
400            35                40               4.5                 4
500            30                60               4                   5
600            25                70               3                   6
700            15                85               2                   7
800            10                90               1.5                 8
900            5                 96               1                   9
1000           1                 100              0                   10

• The decision selection will have a high value if the fuel reserve is full, the interrupt
time is fast enough, the Hornet has high weapon status, and the FOE danger is low.

In TACE, decision-making is always based on all states of all the decision factors.
However, sometimes a mission operator/commander can make a decision based on a
single important factor, such as the fuel reserve of the Hornet being too low (due to high
fuel use), the enemy having more powerful weapons, or the quality and quantity of enemy aircraft.
Table 2 shows the decision score at each stage of the TACE.
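
Before any fuzzy machinery is introduced, Table 2 can already be read as a plain lookup table. The short sketch below is not from the chapter: the column values are copied from Table 2, but the per-factor interpolation and the simple averaging of the four factor scores are illustrative assumptions, standing in for the fuzzy aggregation developed later in the chapter.

```python
import numpy as np

# Columns of Table 2: fuel (litres), interrupt time (min), weapon status (%),
# danger situation (points) -> decision selection (points).
fuel      = np.array([0, 100, 200, 300, 400, 500, 600, 700, 800, 900, 1000])
interrupt = np.array([60, 55, 50, 40, 35, 30, 25, 15, 10, 5, 1])
weapon    = np.array([0, 15, 25, 30, 40, 60, 70, 85, 90, 96, 100])
danger    = np.array([10, 8, 7, 5, 4.5, 4, 3, 2, 1.5, 1, 0])
score     = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10])

def factor_score(value, xs, ys):
    """Interpolate one factor against the decision-selection column.
    np.interp needs increasing x values, so sort the pair first."""
    order = np.argsort(xs)
    return np.interp(value, xs[order], ys[order])

def decision_score(f, t, w, d):
    # Crude aggregate: average the four per-factor scores (an assumption,
    # not the chapter's fuzzy inference).
    parts = [factor_score(f, fuel, score), factor_score(t, interrupt, score),
             factor_score(w, weapon, score), factor_score(d, danger, score)]
    return sum(parts) / len(parts)

print(decision_score(f=650, t=20, w=75, d=2.5))  # roughly 6.5: a favourable situation
```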

SOFT COMPUTING AND DECISION TREES


Soft computing paradigms can be used to construct new generation intelligent
hybrid systems consisting of artificial neural networks, fuzzy inference systems, approxi-
mate reasoning, and derivative free optimisation techniques. It is well known that the
intelligent systems which provide human-like expertise such as domain knowledge,
uncertain reasoning, and adaptation to a noisy and time-varying environment, are
important in tackling real-world problems.

Artificial Neural Networks


Artificial neural networks have been developed as generalisations of mathematical
models of biological nervous systems. A neural network is characterised by the network
architecture, the connection strength between pairs of neurons (weights), node proper-
ties, and update rules. The update or learning rules control the weights and/or states of
the processing elements (neurons). Normally, an objective function is defined that
represents the complete status of the network, and its set of minima corresponds to
different stable states (Zurada, 1992). It can learn by adapting its weights to changes in
the surrounding environment, can handle imprecise information, and generalise from
known tasks to unknown ones. The network is initially randomised to avoid imposing any


of our own prejudices about an application of interest. The training patterns can be
thought of as a set of ordered pairs {(x1, y1), (x2, y2), ..., (xp, yp)}, where xi represents an input
pattern and yi represents the output pattern vector associated with the input vector xi.
A valuable property of neural networks is that of generalisation, whereby a trained
neural network is able to provide a correct matching in the form of output data for a set
of previously-unseen input data. Learning typically occurs through training, where the
training algorithm iteratively adjusts the connection weights (synapses). In the conju-
gate gradient algorithm (CGA), a search is performed along conjugate directions, which
produces generally faster convergence than steepest descent directions. A search is
made along the conjugate gradient direction to determine the step size, which will
minimise the performance function along that line. A line search is performed to determine
the optimal distance to move along the current search direction. Then the next search
direction is determined so that it is conjugate to the previous search direction. The
general procedure for determining the new search direction is to combine the new
steepest descent direction with the previous search direction. An important feature of
CGA is that the minimization performed in one step is not partially undone by the next,
as is the case with gradient descent methods. An important drawback of CGA is the
requirement of a line search, which is computationally expensive. The scaled conjugate
gradient algorithm (SCGA) (Moller, 1993) was designed to avoid the time-consuming line
search at each iteration, and incorporates the model-trust region approach used in the
Levenberg-Marquardt algorithm (Abraham, 2002).
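
For readers who want to see the underlying idea in code, the sketch below is a generic Fletcher-Reeves conjugate gradient routine applied to a toy quadratic. It is not the SCGA of Moller (1993) used in this chapter; in particular, the crude backtracking line search stands in for the exact line search the text describes, which is precisely the step SCGA is designed to avoid.

```python
import numpy as np

def fletcher_reeves_cg(f, grad, x0, n_iter=50):
    """Bare-bones nonlinear conjugate gradient (Fletcher-Reeves update)."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                   # first direction: steepest descent
    for _ in range(n_iter):
        alpha = 1.0                          # crude backtracking line search along d
        while f(x + alpha * d) >= f(x) and alpha > 1e-12:
            alpha *= 0.5
        x = x + alpha * d
        g_new = grad(x)
        if g_new @ g_new < 1e-20:            # gradient has vanished: done
            break
        beta = (g_new @ g_new) / (g @ g)     # Fletcher-Reeves coefficient
        d = -g_new + beta * d                # new direction, conjugate to the old one
        g = g_new
    return x

# Toy quadratic f(x) = 0.5 x^T A x - b^T x, whose minimiser is A^{-1} b.
A = np.array([[3.0, 0.5], [0.5, 1.0]])
b = np.array([1.0, -2.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
print(fletcher_reeves_cg(f, lambda x: A @ x - b, np.zeros(2)))  # approx. [0.73, -2.36]
```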

Fuzzy Inference Systems (FIS)


Fuzzy inference systems (Zadeh, 1973) are a popular computing framework based
on the concepts of fuzzy set theory, fuzzy if-then rules, and fuzzy reasoning. The basic
structure of the fuzzy inference system consists of three conceptual components: a rule
base, which contains a selection of fuzzy rules; a database, which defines the membership
functions used in the fuzzy rule; and a reasoning mechanism, which performs the
inference procedure upon the rules and given facts to derive a reasonable output or
conclusion. Figure 3 shows the basic architecture of a FIS with crisp inputs and outputs
implementing a non-linear mapping from its input space to its output (Cattral, Oppacher,
& Deogo, 1992).

Figure 3. Fuzzy inference system block diagram


We now introduce two different fuzzy inference systems that have been widely
employed in various applications. These fuzzy systems feature different consequents in
their rules, and thus their aggregation and defuzzification procedures differ accordingly.
Most fuzzy systems employ the inference method proposed by Mamdani-Assilian
in which the rule consequence is defined by fuzzy sets and has the following structure
(Mamdani & Assilian, 1975):

If x is A1 and y is B1 then z1 = C1 (2)

Takagi and Sugeno (1983) proposed an inference scheme in which the conclusion
of a fuzzy rule is constituted by a weighted linear combination of the crisp inputs rather
than a fuzzy set, and which has the following structure:

If x is A1 and y is B1, then z1 = p1x + q1y + r1 (3)

A Takagi-Sugeno FIS usually needs a smaller number of rules, because its output
is already a linear function of the inputs rather than a constant fuzzy set (Abraham, 2001).
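
A minimal numerical illustration of the Takagi-Sugeno scheme of equation (3) is sketched below. The two rules, the Gaussian membership functions, and the consequent coefficients are invented for illustration; only the structure (firing strengths of fuzzy antecedents weighting linear consequents, followed by a weighted average) follows the text.

```python
import numpy as np

def gauss(x, c, s):
    """Gaussian membership function with centre c and width s."""
    return np.exp(-0.5 * ((x - c) / s) ** 2)

def takagi_sugeno(x, y):
    # Rule 1: if x is A1 and y is B1 then z1 = 0.6x + 0.3y + 1   (made-up parameters)
    w1 = gauss(x, c=2.0, s=1.0) * gauss(y, c=5.0, s=2.0)
    z1 = 0.6 * x + 0.3 * y + 1.0
    # Rule 2: if x is A2 and y is B2 then z2 = -0.2x + 0.8y      (made-up parameters)
    w2 = gauss(x, c=7.0, s=1.5) * gauss(y, c=1.0, s=2.0)
    z2 = -0.2 * x + 0.8 * y
    # Output: firing-strength-weighted average of the linear consequents
    return (w1 * z1 + w2 * z2) / (w1 + w2)

print(takagi_sugeno(3.0, 4.0))
```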

Evolutionary Algorithms
Evolutionary algorithms (EAs) are population-based adaptive methods, which may
be used to solve optimisation problems, based on the genetic processes of biological
organisms (Fogel, 1999; Tan et al., 2003). Over many generations, natural populations
evolve according to the principles of natural selection and “survival-of-the-fittest”, first
clearly stated by Charles Darwin in “On the Origin of Species”. By mimicking this process,
EAs are able to “evolve” solutions to real-world problems, if they have been suitably
encoded. The procedure may be written as the difference equation (Fogel, 1999):

x[t + 1] = s(v(x[t])) (4)

Figure 4. Evolutionary algorithm pseudo code

1. Generate the initial population P(0) at random and set i = 0;
2. Repeat until the number of iterations or time has been reached, or the
   population has converged:
   a. Evaluate the fitness of each individual in P(i)
   b. Select parents from P(i) based on their fitness in P(i)
   c. Apply reproduction operators to the parents and produce offspring;
      the next generation P(i+1) is obtained from the offspring and
      possibly the parents.


where x (t) is the population at time t, v is a random operator, and s is the selection
operator. The algorithm is illustrated in Figure 4.
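
The pseudo code of Figure 4 translates into a few lines of working code. The sketch below uses a real-valued encoding, truncation selection, one-point crossover, and Gaussian mutation; these operator choices are assumptions made for illustration, not prescriptions from the text.

```python
import random

def evolve(fitness, dim, pop_size=30, generations=100, mut_sigma=0.1, elite=2):
    """Bare-bones EA following Figure 4: evaluate, select parents by fitness,
    reproduce with 'inaccuracies' (crossover + Gaussian mutation), repeat."""
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)    # evaluate P(i)
        parents = scored[:pop_size // 2]                   # truncation selection
        pop = scored[:elite]                               # carry over a few elites
        while len(pop) < pop_size:
            mum, dad = random.sample(parents, 2)
            cut = random.randrange(1, dim) if dim > 1 else 0
            child = mum[:cut] + dad[cut:]                  # one-point crossover
            child = [g + random.gauss(0, mut_sigma) for g in child]  # mutation
            pop.append(child)
    return max(pop, key=fitness)

# Toy problem: maximise -(x - 3)^2 - (y + 1)^2, whose optimum is at (3, -1).
best = evolve(lambda v: -(v[0] - 3) ** 2 - (v[1] + 1) ** 2, dim=2)
print(best)
```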
A conventional fuzzy controller makes use of a model of the expert who is in a
position to specify the most important properties of the process. Expert knowledge is
often the main source to design the fuzzy inference systems. According to the perfor-
mance measure of the problem environment, the membership functions and rule bases
are to be adapted. Adaptation of fuzzy inference systems using evolutionary computa-
tion techniques has been widely explored (Abraham & Nath, 2000a, 2000b). In the
following section, we will discuss how fuzzy inference systems could be adapted using
neural network learning techniques.

Neuro-Fuzzy Computing
Neuro-fuzzy (NF) (Abraham, 2001) computing is a popular framework for solving
complex problems. If we have knowledge expressed in linguistic rules, we can build a FIS;
if we have data, or can learn from a simulation (training), we can use ANNs. For building
a FIS, we have to specify the fuzzy sets, fuzzy operators, and the knowledge base.
Similarly, for constructing an ANN for an application, the user needs to specify the
architecture and learning algorithm. An analysis reveals that the drawbacks pertaining
to these approaches seem complementary and, therefore, it is natural to consider building
an integrated system combining the concepts. While the learning capability is an
advantage from the viewpoint of FIS, the formation of a linguistic rule base will be
advantageous from the viewpoint of ANN (Abraham, 2001).
In a fused NF architecture, ANN learning algorithms are used to determine the
parameters of the FIS. Fused NF systems share data structures and knowledge represen-
tations. A common way to apply a learning algorithm to a fuzzy system is to represent
it in a special ANN-like architecture. However, the conventional ANN learning algorithm
(gradient descent) cannot be applied directly to such a system as the functions used in
the inference process are usually non-differentiable. This problem can be tackled by
using differentiable functions in the inference system or by not using the standard neural
learning algorithm. Two neuro-fuzzy learning paradigms are presented later in this
chapter.

Classification and Regression Trees


Tree-based models are useful for both classification and regression problems. In
these problems, there is a set of classification or predictor variables (Xi) and a dependent
variable (Y). The Xi variables may be a mixture of nominal and/or ordinal scales (or code
intervals of equal-interval scale) and Y may be a quantitative or a qualitative (in other
words, nominal or categorical) variable (Breiman et al., 1984; Steinberg & Colla, 1995).
The classification and regression trees (CART) methodology is technically known
as binary recursive partitioning. The process is binary because parent nodes are always
split into exactly two child nodes, and recursive because the process can be repeated by
treating each child node as a parent. The key elements of a CART analysis are a set of
rules for:
• splitting each node in a tree,
• deciding when a tree is complete, and
• assigning each terminal node to a class outcome (or predicted value for regression).


CART is the most advanced decision tree technology for data analysis, pre-
processing, and predictive modelling. CART is a robust data-analysis tool that automati-
cally searches for important patterns and relationships and quickly uncovers hidden
structure even in highly complex data. CART's binary decision trees are more sparing with
data and detect more structure before further splitting is impossible or stopped. Splitting
is impossible if only one case remains in a particular node, or if all the cases in that node
are exact copies of each other (on predictor variables). CART also allows splitting to be
stopped for several other reasons, including that a node has too few cases (Steinberg
& Colla, 1995).
Once a terminal node is found, we must decide how to classify all cases falling within
it. One simple criterion is the plurality rule: The group with the greatest representation
determines the class assignment. CART goes a step further: Because each node has the
potential for being a terminal node, a class assignment is made for every node whether
it is terminal or not. The rules of class assignment can be modified from simple plurality
to account for the costs of making a mistake in classification and to adjust for over- or
under-sampling from certain classes.
A common technique among the first generation of tree classifiers was to continue
splitting nodes (growing the tree) until some goodness-of-split criterion failed to be met.
When the quality of a particular split fell below a certain threshold, the tree was not grown
further along that branch. When all branches from the root reached terminal nodes, the
tree was considered complete. Once a maximal tree is generated, it examines smaller trees
obtained by pruning away branches of the maximal tree. Once the maximal tree is grown
and a set of sub-trees is derived from it, CART determines the best tree by testing for error
rates or costs. With sufficient data, the simplest method is to divide the sample into
learning and test sub-samples. The learning sample is used to grow an overly large tree.
The test sample is then used to estimate the rate at which cases are misclassified (possibly
adjusted by misclassification costs). The misclassification error rate is calculated for the
largest tree and also for every sub-tree.
The best sub-tree is the one with the lowest or near-lowest cost, which may be a
relatively small tree. Cross validation is used if data are insufficient for a separate test
sample. In the search for patterns in databases, it is essential to avoid the trap of over-
fitting, or finding patterns that apply only to the training data. CART's embedded test
disciplines ensure that the patterns found will hold up when applied to new data. Further,
the testing and selection of the optimal tree are an integral part of the CART algorithm.
CART handles missing values in the database by substituting surrogate splitters, which
are back-up rules that closely mimic the action of primary splitting rules. The surrogate
splitter contains information that is typically similar to what would be found in the primary
splitter (Steinberg & Colla, 1995).
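
As a rough illustration of the grow-then-prune procedure described above, the sketch below uses scikit-learn's decision trees (a CART-style implementation) on synthetic regression data: the maximal tree's cost-complexity pruning path supplies candidate sub-trees, and the one with the lowest test-sample error is selected. The data and parameter values are placeholders, not anything from the chapter.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(400, 2))                 # stand-in predictor variables
y = np.sin(X[:, 0]) + 0.1 * X[:, 1] + rng.normal(0, 0.2, 400)

X_learn, X_test, y_learn, y_test = train_test_split(X, y, random_state=0)

# Candidate complexity penalties derived from the maximal tree's pruning path.
path = DecisionTreeRegressor(random_state=0).cost_complexity_pruning_path(X_learn, y_learn)

best_alpha, best_err = None, np.inf
for alpha in path.ccp_alphas:
    tree = DecisionTreeRegressor(random_state=0, ccp_alpha=alpha)
    tree.fit(X_learn, y_learn)                        # grow, then prune back to alpha
    err = np.mean((tree.predict(X_test) - y_test) ** 2)
    if err < best_err:                                # keep the best sub-tree by test error
        best_alpha, best_err = alpha, err

print(f"selected ccp_alpha={best_alpha:.4f}, test MSE={best_err:.3f}")
```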

TACDSS ADAPTATION USING TAKAGI-SUGENO FIS
We used the adaptive network-based fuzzy inference system (ANFIS) framework
(Jang, 1992) to develop the TACDSS based on a Takagi-Sugeno fuzzy inference system.
The six-layered architecture of ANFIS is depicted in Figure 5.


Figure 5. ANFIS architecture

Suppose there are two input linguistic variables (ILV) X and Y and each ILV has three
membership functions (MF) A1, A2 and A3 and B1, B2 and B3 respectively, then a Takagi-
Sugeno-type fuzzy if-then rule could be set up as:

Rulei: If X is Ai and Y is Bi then fi = piX + qiY + ri (5)

where i is an index i = 1,2..n and p, q and r are the linear parameters.


Some layers of ANFIS have the same number of nodes, and nodes in the same layer
have similar functions. The output of a node in layer l is denoted Ol,i, where l is the layer
number and i is the neuron number. The function of each layer is described as follows
(a short numerical sketch of the complete forward pass is given after the layer descriptions):
• Layer 1
The outputs of this layer are the input values of the ANFIS:

O1,x = x

O1,y = y (6)

For TACDSS the four inputs are “fuel status”, “weapons inventory levels”, “time
intercept”, and the “danger situation”.
• Layer 2
The output of nodes in this layer is presented as Ol,ip,m, where ip is the ILV and m
is the degree of membership function of a particular MF.

Copyright © 2006, Idea Group Inc. Copying or distributing in print or electronic forms without written
permission of Idea Group Inc. is prohibited.
12 Tran, Abraham & Jain

O2,x,i = μAi(x) or O2,y,i = μBi(y) for i = 1, 2, and 3 (7)

With three MFs for each input variable, “fuel status” has three membership
functions: full, half, and low, “time intercept” has fast, normal, and slow, “weapon
status” has sufficient, enough, and insufficient, and the “danger situation” has
very dangerous, dangerous, and endangered.
• Layer 3
The output of nodes in this layer is the product of all the incoming signals, denoted
by:

O3,n = wn = μAi(x) × μBi(y) (8)

where i = 1,2, and 3, and n is the number of the fuzzy rule. In general, any T-norm
operator will perform the fuzzy ‘AND’ operation in this layer. With four ILV and
three MFs for each input variable, the TACDSS will have 81 (34 = 81) fuzzy if-then
rules.
• Layer 4
The nodes in this layer calculate the ratio of the i th fuzzy rule firing strength (RFS)
to the sum of all RFS.

O4,n = w̄n = wn / (w1 + w2 + … + w81),   n = 1, 2, ..., 81 (9)

The number of nodes in this layer is the same as the number of nodes in layer-3.
The outputs of this layer are also called normalized firing strengths.
• Layer 5
The nodes in this layer are adaptive, defined as:

O5,n = w̄n fn = w̄n (pn x + qn y + rn) (10)

where pn, qn, rn are the rule consequent parameters. This layer also has the same
number of nodes as layer-4 (81 numbers).
• Layer 6
The single node in this layer is responsible for the defuzzification process, using
the centre-of-gravity technique to compute the overall output as the summation of
all the incoming signals:

O6,1 = Σn w̄n fn = (Σn wn fn) / (Σn wn),   with the sums taken over n = 1, 2, ..., 81 (11)
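
The six layers above can be condensed into a single forward-pass function. The sketch below follows the 4-input, 3-MF TACDSS layout (3^4 = 81 rules); the generalised bell membership function, the input ranges echoed from the earlier description, and the random consequent parameters are assumptions made for illustration, since the actual parameter values are exactly what ANFIS training determines.

```python
import numpy as np

def gbell(x, a, b, c):
    """Generalised bell membership function, a common ANFIS choice."""
    return 1.0 / (1.0 + np.abs((x - c) / a) ** (2 * b))

def anfis_forward(inputs, centres, widths, conseq):
    n_in, n_mf = centres.shape                               # 4 inputs x 3 MFs
    # Layers 1-2: membership grade of each input in each of its MFs
    mu = gbell(inputs[:, None], widths, 2.0, centres)        # shape (4, 3)
    # Layer 3: firing strength of each rule = product of one MF grade per input
    idx = np.stack(np.meshgrid(*[range(n_mf)] * n_in,
                               indexing="ij"), -1).reshape(-1, n_in)   # 81 x 4
    w = np.prod(mu[np.arange(n_in), idx], axis=1)            # 81 firing strengths
    # Layer 4: normalised firing strengths
    w_bar = w / w.sum()
    # Layer 5: rule consequents f_n = p . inputs + r
    f = conseq[:, :n_in] @ inputs + conseq[:, n_in]
    # Layer 6: overall output = weighted sum over the 81 rules
    return np.sum(w_bar * f)

rng = np.random.default_rng(1)
centres = np.array([[0., 500., 1000.],     # fuel status (litres)
                    [1., 30., 60.],        # interrupt time (minutes)
                    [0., 50., 100.],       # weapon status (%)
                    [0., 5., 10.]])        # danger situation (points)
widths  = np.array([[250.] * 3, [15.] * 3, [25.] * 3, [2.5] * 3])
conseq  = rng.normal(0, 0.01, size=(81, 5))                  # p1..p4 and r per rule
print(anfis_forward(np.array([650., 20., 75., 2.5]), centres, widths, conseq))
```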


ANFIS makes use of a mixture of back-propagation to learn the premise parameters
and least mean square estimation to determine the consequent parameters. Each step in
the learning procedure comprises two parts: In the first part, the input patterns are
propagated, and the optimal conclusion parameters are estimated by an iterative least
mean square procedure, while the antecedent parameters (membership functions) are
assumed to be fixed for the current cycle through the training set. In the second part, the
patterns are propagated again, and in this epoch, back-propagation is used to modify the
antecedent parameters, while the conclusion parameters remain fixed. This procedure is
then iterated, as follows (Jang, 1992):

ANFIS output f = O6,1 = (w1 / Σn wn) f1 + (w2 / Σn wn) f2 + … + (wn / Σn wn) fn

= w̄1 (p1x + q1y + r1) + w̄2 (p2x + q2y + r2) + … + w̄n (pnx + qny + rn)

= (w̄1x) p1 + (w̄1y) q1 + w̄1 r1 + (w̄2x) p2 + (w̄2y) q2 + w̄2 r2 + … + (w̄nx) pn + (w̄ny) qn + w̄n rn (12)

where n is the number of nodes in layer 5. From this, the output can be rewritten as

f = F(i,S) (13)

where F is a function, i is the vector of input variables, and S is the set of total parameters
of the fuzzy rules. If there exists a composite function H such that H ∘ F is linear in some
elements of S, then these elements can be identified by the least square method.
If the parameter set is divided into two sets S1 and S2, defined as:

S = S1 ⊕ S2 (14)

where ⊕ represents the direct sum, such that H ∘ F is linear in the elements of S2, then
the function f can be represented as:

H(f) = H ∘ F(i, S) (15)

Given values of S1, the P training data pairs can be substituted into equation 15, and H(f)
can be written as the matrix equation AX = Y, where X is an unknown vector whose elements
are the parameters in S2.
If |S2| = M (M being the number of linear parameters), then the dimensions of
matrices A, X, and Y are P × M, M × 1, and P × 1, respectively. This is a standard linear
least-squares problem, and the best solution for X that minimizes ||AX – Y||² is the least
squares estimate (LSE) X*:

X* = (AᵀA)⁻¹AᵀY (16)

where Aᵀ is the transpose of A, and (AᵀA)⁻¹Aᵀ is the pseudo-inverse of A if AᵀA is non-singular.


Let the ith row vector of matrix A be aiᵀ and the ith element of Y be yi; then X can be
calculated iteratively as:


Xi+1 = Xi + Si+1 ai+1 (yi+1 − ai+1ᵀ Xi) (17)

Si+1 = Si − (Si ai+1 ai+1ᵀ Si) / (1 + ai+1ᵀ Si ai+1),  i = 0, 1, …, P − 1 (18)

The LSE X* is equal to XP. The initial conditions of the recursion are X0 = 0 and S0 = gI,
where g is a large positive number and I is the identity matrix of dimension M × M.
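A compact numerical sketch of equations (16)-(18) follows; the matrices A and Y are synthetic stand-ins rather than ANFIS layer outputs, and the value chosen for g is an assumption. The recursive estimate is checked against the batch pseudo-inverse solution of equation 16.

    import numpy as np

    rng = np.random.default_rng(1)
    P, M = 200, 5                          # P training pairs, M linear parameters
    A = rng.normal(size=(P, M))            # synthetic design matrix
    Y = A @ rng.normal(size=M) + 0.01 * rng.normal(size=P)

    g = 1e6                                # the large positive number g
    X = np.zeros(M)                        # X0 = 0
    S = g * np.eye(M)                      # S0 = gI

    for i in range(P):
        a, y = A[i], Y[i]                  # i-th row of A and element of Y
        S = S - np.outer(S @ a, a @ S) / (1.0 + a @ S @ a)   # equation (18)
        X = X + S @ a * (y - a @ X)                          # equation (17)

    X_batch = np.linalg.pinv(A) @ Y        # equation (16): X* = (A'A)^-1 A'Y
    print(np.allclose(X, X_batch, atol=1e-4))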
When hybrid learning is applied in batch mode, each epoch is composed of a forward
pass and a backward pass. In the forward pass, the node outputs of each layer are
calculated until the corresponding matrices A and Y are obtained. The parameters of S2
are identified by the pseudo-inverse equation mentioned above. After the parameters
of S2 are obtained, the process computes the error measure for each training data pair.
In the backward pass, the error signals (the derivatives of the error measure with respect
to each node output) propagate from the output end to the input end. At the end of the
backward pass, the parameters in S1 are updated by the steepest descent method as follows:

Δα = −η (∂E/∂α) (19)

where α is a generic parameter, η is the learning rate, and E is an error measure.

η = k / √( ∑α (∂E/∂α)² ) (20)

where k is the step size.


For given fixed values of the parameters in S1, the parameters in S2 found in this way are
guaranteed to be the global optimum in the S2 parameter space because of the choice of the
squared error measure. This hybrid learning method decreases the dimension of the search
space handled by the steepest descent method, and thus reduces the time needed to reach
convergence. The step size k influences the speed of convergence. Observation shows that
if k is small, the gradient method closely approximates the gradient path, but convergence
is slow since the gradient is calculated many times. If the step size k is large, convergence
is initially very fast. Based on these observations, the step size k is updated by the
following two heuristics (Jang, 1992):

• If E undergoes four consecutive reductions, then increase k by 10%; and
• If E undergoes consecutive combinations of increase and decrease, then reduce k by 10%.
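A small sketch of these two heuristics is given below; the error history and the initial value of k are invented for illustration, and the five-value window used to detect the reductions and the alternation is an assumption about how the rules are operationalised.

    def update_step_size(k, error_history):
        """Apply the two heuristic rules to the step size k."""
        e = error_history[-5:]             # window of five error values (assumed)
        if len(e) == 5 and all(e[i + 1] < e[i] for i in range(4)):
            return k * 1.10                # four consecutive reductions: +10%
        diffs = [e[i + 1] - e[i] for i in range(len(e) - 1)]
        if len(diffs) >= 2 and all(d1 * d2 < 0 for d1, d2 in zip(diffs, diffs[1:])):
            return k * 0.90                # alternating increase/decrease: -10%
        return k

    k = 0.01
    errors = [0.50, 0.45, 0.41, 0.38, 0.36]        # illustrative error history
    print(round(update_step_size(k, errors), 4))   # -> 0.011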


TACDSS ADAPTATION USING MAMDANI FIS


We have made use of the fuzzy neural network (FuNN) framework (Kasabov, Kim
& Gray, 1996) for learning the Mamdani-Assilian fuzzy inference method. A functional
block diagram of the FuNN model is depicted in Figure 6 (Kasabov, 1996); it consists of
two phases of learning.
The first phase is the structure learning (if-then rules) using the knowledge
acquisition module. The second phase is the parameter learning for tuning membership
functions to achieve a desired level of performance. FuNN uses a gradient descent
learning algorithm to fine-tune the parameters of the fuzzy membership functions. In the
connectionist structure, the input and output nodes represent the input states and
output control-decision signals, respectively, while in the hidden layers, there are nodes
functioning as quantification of membership functions (MFs) and if-then rules. We used
the simple and straightforward method proposed by Wang and Mendel (1992) for
generating fuzzy rules from numerical input-output training data. The task here is to
generate a set of fuzzy rules from the desired input-output pairs and then use these fuzzy
rules to determine the complete structure of the TACDSS.
Suppose we are given the following set of desired input (x1, x2) and output (y) data
pairs (x1, x2, y): (0.6, 0.2; 0.2), (0.4, 0.3; 0.4). In TACDSS, the input variable “fuel reserve”
has a degree of 0.8 in half, a degree of 0.2 in full. Similarly, the input variable “time
intercept” has a degree of 0.6 in empty and 0.3 in normal. Secondly, assign x1i, x2i, and
yi to the region in which each has its maximum degree. Finally, obtain one rule from each
pair of desired input-output data, for example:

(x11, x21, y1) => [x11 (0.8 in half), x21 (0.2 in fast), y1 (0.6 in acceptable)],

R1: if x1 is half and x2 is fast, then y is acceptable (21)

(x12, x22, y2) => [x12 (0.8 in half), x22 (0.6 in normal), y2 (0.8 in acceptable)],

R2: if x1 is half and x2 is normal, then y is acceptable (22)

Figure 6. A general schematic of the hybrid fuzzy neural network (structure learning
through the knowledge acquisition and fuzzy rule modules, with rule insertion and
extraction; parameter learning between input and output using gradient descent)


Assign a degree to each rule. To resolve a possible conflict problem, that is, rules
having the same antecedent but a different consequent, and to reduce the number of
rules, we assign a degree to each rule generated from data pairs and accept only the rule
from a conflict group that has a maximum degree. In other words, this step is performed
to delete redundant rules, and therefore obtain a concise fuzzy rule base. The following
product strategy is used to assign a degree to each rule. The degree of the rule is denoted
by:

Ri : if x1 is A and x2 is B, then y is C(wi) (23)

The rule weight is defined as:

wi = μA(x1) μB(x2) μC(y) (24)

For example in the TACE, R1 has a degree of

w1 = μhalf(x1) μfast(x2) μacceptable(y) = 0.8 × 0.2 × 0.6 = 0.096 (25)

and R2 has a degree of

w2 = μhalf(x1) μnormal(x2) μacceptable(y) = 0.8 × 0.6 × 0.8 = 0.384 (26)

Note that if two or more generated fuzzy rules have the same preconditions and
consequents, then the rule with the maximum degree is used. By assigning a degree to
each rule in this way, the fuzzy rule base can be adapted or updated through a relative
weighting strategy: the more task-related a rule becomes, the greater the degree it gains.
As a result, not only is the conflict problem resolved, but the number of rules is also
reduced significantly. After the structure-learning phase (if-then rules), the whole
network structure is established, and the network enters the second learning phase, in
which the parameters of the membership functions are adjusted using a gradient descent
learning algorithm to minimise the error function:

E = ½ ∑x ∑l=1…q (dl − yl)² (27)

where dl and yl are the target and actual outputs for an input x. This approach is very similar
to the MF parameter tuning in ANFIS.
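The rule-generation and degree-assignment steps described above can be sketched compactly as follows. The fuzzy region labels, the Gaussian membership parameters, and the output regions are assumptions for illustration, so the membership degrees printed will not reproduce the worked values in equations (25) and (26); the structure (one rule per data pair, a product-based degree, and conflict resolution by maximum degree) follows Wang and Mendel (1992).

    import numpy as np

    def gaussmf(x, c, sigma=0.25):
        return np.exp(-0.5 * ((x - c) / sigma) ** 2)

    # Assumed fuzzy regions (label -> centre) for two inputs and the output
    regions_x1 = {"low": 0.0, "half": 0.5, "full": 1.0}
    regions_x2 = {"fast": 0.0, "normal": 0.5, "slow": 1.0}
    regions_y = {"unacceptable": 0.0, "acceptable": 0.5, "good": 1.0}

    def best_label(value, regions):
        """Assign the value to the region in which it has the maximum degree."""
        return max(regions, key=lambda lbl: gaussmf(value, regions[lbl]))

    rule_base = {}                                   # antecedents -> (consequent, degree)
    data_pairs = [(0.6, 0.2, 0.2), (0.4, 0.3, 0.4)]  # (x1, x2, y) pairs from the text

    for x1, x2, y in data_pairs:
        a1 = best_label(x1, regions_x1)
        a2 = best_label(x2, regions_x2)
        c = best_label(y, regions_y)
        degree = (gaussmf(x1, regions_x1[a1]) * gaussmf(x2, regions_x2[a2])
                  * gaussmf(y, regions_y[c]))        # product strategy, equation (24)
        if (a1, a2) not in rule_base or degree > rule_base[(a1, a2)][1]:
            rule_base[(a1, a2)] = (c, degree)        # keep the maximum-degree rule

    for (a1, a2), (c, w) in rule_base.items():
        print(f"if x1 is {a1} and x2 is {a2} then y is {c}  (degree {w:.3f})")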

Membership Function Parameter Optimisation Using EAs
We have investigated the usage of evolutionary algorithms (EAs) to optimise the
number of rules and fine-tune the membership functions (Tran, Jain, & Abraham, 2002a).
Given that the optimisation of fuzzy membership functions may involve many changes
to many different functions, and that a change to one function may affect others, the large
possible solution space for this problem is a natural candidate for an EA-based approach.
This has already been investigated in Mang, Lan, and Zhang (1995), and has been shown


Figure 7. The chromosome of the centres of the input and output MFs (inputs: fuel used,
intercept time, weapon efficiency, danger situation; output: tactical solution; genes such
as CLow, WdLow, CEnough, WdEnough, CHigh, WdHigh)

to be more effective than manual alteration. A similar approach has been taken here to
optimise the membership function parameters. A simple way is to represent only the
parameters defining the centres of the MFs, which speeds up the adaptation process and
reduces the spurious local minima that arise when both centre and width are optimised.
The EA module for adapting FuNN is designed as a stand-alone system for
optimising the MFs if the rules are already available. Both antecedent and consequent
MFs are optimised. Chromosomes are represented as strings of floating-point numbers
rather than strings of bits. In addition, mutation of a gene is implemented as re-initialisation,
rather than an alteration of the existing allele. Figure 7 shows the chromosome structure,
including the input and output MF parameters. One-point crossover is used for chromosome
reproduction.
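A minimal sketch of such an EA module is given below, under stated assumptions: the chromosome is a string of floating-point MF centres, crossover is one-point, mutation re-initialises a gene, and tournament selection with the population size, generation count, and mutation rate reported later in the chapter is used. The fitness function is a placeholder standing in for evaluation through FuNN.

    import random

    N_CENTRES = 15                 # e.g. 3 MF centres for each of 4 inputs + 1 output
    POP, GENS, MUT = 50, 100, 0.01 # settings reported in the experiments section

    def fitness(chrom):
        # Placeholder objective: in the real module this would be the FuNN error.
        targets = [0.0, 0.5, 1.0] * (N_CENTRES // 3)
        return -sum((c - t) ** 2 for c, t in zip(chrom, targets))

    def tournament(pop, k=3):
        return max(random.sample(pop, k), key=fitness)

    def crossover(p1, p2):
        cut = random.randrange(1, N_CENTRES)         # one-point crossover
        return p1[:cut] + p2[cut:]

    def mutate(chrom):
        # Mutation as re-initialisation of a gene rather than a small perturbation.
        return [random.random() if random.random() < MUT else c for c in chrom]

    random.seed(0)
    population = [[random.random() for _ in range(N_CENTRES)] for _ in range(POP)]
    for _ in range(GENS):
        population = [mutate(crossover(tournament(population), tournament(population)))
                      for _ in range(POP)]

    best = max(population, key=fitness)
    print([round(c, 2) for c in best[:6]])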

EXPERIMENTAL RESULTS FOR DEVELOPING THE TACDSS
Our master data set comprised 1,000 records. To avoid any bias in the data, we
randomly created two training sets (Dataset A, 90% and Dataset B, 80%) and corresponding
test sets (10% and 20%) from the master data set. All experiments were repeated three times,
and the average errors are reported here.

Takagi-Sugeno Fuzzy Inference System


In addition to the development of the Takagi-Sugeno FIS, we also investigated the
behaviour of TACDSS for different membership functions (shape and quantity per ILV).
We also explored the importance of different learning methods for fine-tuning the rule
antecedents and consequents. Keeping the consequent parameters constant, we fine-
tuned the membership functions alone using the gradient descent technique (back-
propagation). Further, we used the hybrid learning method wherein the consequent
parameters were also adjusted according to the least squares algorithm. Even though
back-propagation is faster than the hybrid technique, learning error and decision scores
were better for the latter. We used three Gaussian MFs for each ILV. Figure 8 shows the
three MFs for the “fuel reserve” ILV before and after training. The fuzzy rule consequent
parameters were set to zero before training, and the parameters were learned using the
hybrid learning approach.


Figure 8. Membership functions of the “fuel reserve” ILV (a) before and (b) after learning

Comparison of the Shape of Membership Functions of the FIS
In this section, we demonstrate the importance of the shape of membership
functions. We used the hybrid-learning technique and each ILV had three MFs. Table
3 shows the convergence of the training RMSE during 15 epochs of learning using four
different membership functions for 90% and 80% training data. Eighty-one fuzzy if-then
rules were created initially using a grid-partitioning algorithm. We considered Generalised
bell, Gaussian, trapezoidal, and isosceles triangular membership functions. Figure 9
illustrates the training convergence curve for different MFs.
As is evident from Table 3 and Figure 9, the lowest training and test error was
obtained using a Gaussian MF.
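For reference, the four membership function shapes compared in Table 3 can be written as follows; the parameter values are illustrative only and are not those learned in the experiments.

    import numpy as np

    def gaussian(x, c=0.5, s=0.15):
        return np.exp(-0.5 * ((x - c) / s) ** 2)

    def gbell(x, a=0.2, b=2.0, c=0.5):
        return 1.0 / (1.0 + np.abs((x - c) / a) ** (2 * b))

    def trapezoidal(x, a=0.1, b=0.4, c=0.6, d=0.9):
        return np.clip(np.minimum((x - a) / (b - a), (d - x) / (d - c)), 0.0, 1.0)

    def triangular(x, a=0.2, b=0.5, c=0.8):
        return np.clip(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0)

    x = np.linspace(0.0, 1.0, 5)
    for name, mf in [("Gaussian", gaussian), ("G-bell", gbell),
                     ("Trapezoidal", trapezoidal), ("Triangular", triangular)]:
        print(f"{name:12s}", np.round(mf(x), 3))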


Figure 9. Effect on training error for the different membership functions

Table 3. Learning performance showing the effect of the shape of MF

Root Mean Squared Error (E-05)

             Gaussian          G-bell            Trapezoidal       Triangular
Epochs       Data A   Data B   Data A   Data B   Data A   Data B   Data A   Data B

1 1.406 1.305 1.706 1.581 2.459 2.314 0.9370 0.8610
2 1.372 1.274 1.652 1.537 2.457 2.285 1.789 1.695
3 1.347 1.249 1.612 1.505 2.546 2.441 1.789 1.695
4 1.328 1.230 1.586 1.483 2.546 2.441 1.789 1.695
5 1.312 1.214 1.571 1.471 2.546 2.441 1.789 1.695
6 1.300 1.199 1.565 1.466 2.546 2.441 1.789 1.695
7 1.288 1.186 1.564 1.465 2.546 2.441 1.789 1.695
8 1.277 1.173 1.565 1.464 2.546 2.441 1.789 1.695
9 1.265 1.160 1.565 1.459 2.546 2.441 1.789 1.695
10 1.254 1.148 1.565 1.448 2.546 2.441 1.789 1.695
11 1.243 1.138 1.565 1.431 2.546 2.441 1.789 1.695
12 1.236 1.132 1.565 1.409 2.546 2.441 1.789 1.695
13 1.234 1.132 1.565 1.384 2.546 2.441 1.789 1.695
14 1.238 1.138 1.565 1.355 2.546 2.441 1.789 1.695

Test RMSE    1.44     1.22     1.78     1.36     2.661    2.910    1.8583   1.8584


Mamdani Fuzzy Inference System


We used FuzzyCOPE (Watts, Woodford, & Kasabov, 1999) to investigate the
tuning of membership functions using back-propagation and evolutionary algorithms.
The learning rate and momentum were set at 0.5 and 0.3 respectively, for 10 epochs. We
obtained training RMSEs of 0.2865 (Data A) and 0.2894 (Data B). We further improved
the training performance using evolutionary algorithms. The following settings were
used for the evolutionary algorithm parameters:

Population size = 50
Number of generations = 100
Mutation rate = 0.01

We used the tournament selection strategy, and Figure 10 illustrates the learning
convergence during the 100 generations for Datasets A and B. Fifty-four fuzzy if-then
rules were extracted after the learning process. Table 4 summarizes the training and test
performance.

Figure 10. Training convergence using evolutionary algorithms

Table 4. Training and test performance of Mamdani FIS using EA’s

Root Mean Squared Error (RMSE)


Data A Data B
Training Test Training Test
0.0548 0.0746 0.0567 0.0612


Figure 11. Neural network training using SCGA

Table 5. Training and test performance of neural networks versus decision trees

                    Data A                    Data B
RMSE                Training    Testing       Training    Testing
CART                0.00239     0.00319       0.00227     0.00314
Neural network      0.00105     0.00095       0.00041     0.00062

Artificial Neural Networks


We used 30 hidden neurons for Data A and 32 hidden neurons for Data B, with the
architecture of the neural network finalized by a trial-and-error approach. The network
was trained with the scaled conjugate gradient algorithm, and training was terminated
after 1000 epochs. Figure 11 depicts the convergence of training during the 1000 epochs
of learning. Table 5 summarizes the training and test performance.
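As a hedged stand-in for this experiment, the sketch below fits a feed-forward network with 30 hidden neurons using scikit-learn. Scikit-learn does not provide a scaled conjugate gradient trainer, so the 'lbfgs' solver is used purely for illustration, and the data are synthetic placeholders rather than the TACDSS data set.

    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_squared_error

    rng = np.random.default_rng(2)
    X = rng.random((1000, 4))                           # four ILVs (placeholder data)
    y = X.mean(axis=1) + 0.01 * rng.normal(size=1000)   # placeholder decision score

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.9, random_state=0)
    net = MLPRegressor(hidden_layer_sizes=(30,), solver="lbfgs",
                       max_iter=1000, random_state=0).fit(X_tr, y_tr)

    rmse = mean_squared_error(y_te, net.predict(X_te)) ** 0.5
    print(f"test RMSE = {rmse:.5f}")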

Classification and Adaptive Regression Trees


We used a CART simulation environment to develop the decision trees (www.salford-
systems.com/products-cart.html). We selected the minimum cost tree regardless of tree
size. Figures 12 and 13 illustrate the variation of error with reference to the number of
terminal nodes for Datasets A and B. For Data A, the developed tree has 122 terminal


Figure 12. Dataset A - Variation of relative error versus the number of terminal nodes

Figure 13. Dataset B - Variation of relative error versus the number of terminal nodes

Figure 14. Dataset A - Developed decision tree with 122 nodes

Figure 15. Dataset B - Developed decision tree with 128 nodes


Figure 16. Test results illustrating the efficiency of the different intelligent paradigms
used in developing the TACDSS

nodes, as shown in Figure 14, while for Data B the resulting tree had 128 terminal nodes,
as depicted in Figure 15. Training and test performance are summarized in Table 5.
Figure 16 compares the performance of the different intelligent paradigms used in
developing the TACDSS (for clarity, we have chosen only 20% of the test results for
Dataset B).
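A rough equivalent of this experiment can be sketched with scikit-learn's CART-style regression tree in place of the Salford Systems package; cost-complexity pruning is used here as a stand-in for the minimum-cost-tree selection, and the data are synthetic placeholders.

    import numpy as np
    from sklearn.tree import DecisionTreeRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_squared_error

    rng = np.random.default_rng(3)
    X = rng.random((1000, 4))                           # placeholder data set
    y = X.mean(axis=1) + 0.01 * rng.normal(size=1000)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.9, random_state=0)

    # Sweep the cost-complexity pruning path and keep the tree with the lowest
    # test RMSE, mirroring the error-versus-terminal-nodes curves of Figures 12-13.
    path = DecisionTreeRegressor(random_state=0).cost_complexity_pruning_path(X_tr, y_tr)
    best = None
    for alpha in np.clip(path.ccp_alphas[::10], 0.0, None):
        tree = DecisionTreeRegressor(ccp_alpha=alpha, random_state=0).fit(X_tr, y_tr)
        rmse = mean_squared_error(y_te, tree.predict(X_te)) ** 0.5
        if best is None or rmse < best[0]:
            best = (rmse, tree.get_n_leaves())

    print(f"best test RMSE = {best[0]:.5f} with {best[1]} terminal nodes")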

DISCUSSION
The focus of this research is to create accurate and highly interpretable (using rules
or tree structures) decision support systems for a tactical air combat environment
problem.
Experimental results using two different datasets revealed the importance of fuzzy
inference engines to construct accurate decision support systems. As expected, by
providing more training data (90% of the randomly-chosen master data set), the models
were able to learn and generalise more accurately. The Takagi-Sugeno fuzzy inference
system has the lowest RMSE on both test datasets. Since learning involves a complicated


procedure, the training process of the Takagi-Sugeno fuzzy inference system took longer
compared to the Mamdani-Assilian fuzzy inference method; hence, there is a compromise
between performance and computational complexity (training time). Our experiments
using different membership function shapes also reveal that the Gaussian membership
function is the “optimum” shape for constructing accurate decision support systems.
Neural networks can no longer be considered as ‘black boxes’. Recent research
(Setiono, 2000; Setiono, Leow, & Zurada, 2002) has revealed that it is possible to extract
rules from trained neural networks. In our experiments, we used a neural network trained
using the scaled conjugate gradient algorithm. Results depicted in Figure 5 also reveal
that the trained neural network could not learn and generalise as accurately as the
Takagi-Sugeno fuzzy inference system. The proposed neural network nevertheless
outperformed both the Mamdani-Assilian fuzzy inference system and CART.
Two important features of the developed classification and regression tree are its
easy interpretability and low complexity. Due to its one-pass training approach, the
CART algorithm also has the lowest computational load. For Dataset A, the best results
were achieved using 122 terminal nodes (relative error = 0.00014). As shown in Figure 12,
when the number of terminal nodes was reduced to 14, the relative error increased to 0.016.
For Dataset B, the best results could be achieved using 128 terminal nodes (relative error
= 0.00010). As shown in Figure 13, when the terminal nodes were reduced to 14, the relative
error increased to 0.011.

CONCLUSION
In this chapter, we have presented different soft computing and machine learning
paradigms for developing a tactical air combat decision support system. The techniques
explored were a Takagi-Sugeno fuzzy inference system trained by using neural network
learning techniques, a Mamdani-Assilian fuzzy inference system trained by using
evolutionary algorithms and neural network learning, a feed-forward neural network
trained by using the scaled conjugate gradient algorithm, and classification and adaptive
regression trees.
The empirical results clearly demonstrate that all these techniques are reliable and
could be used for constructing more complicated decision support systems. Experiments
on the two independent data sets also reveal that the techniques are not biased on the
data itself. Compared to neural networks and regression trees, the Takagi-Sugeno fuzzy
inference system has the lowest RMSE, and the Mamdani-Assilian fuzzy inference
system has the highest RMSE. In terms of computational complexity, perhaps regression
trees are best since they use a one-pass learning approach when compared to the many
learning iterations required by all the other techniques considered. Important advantages
of the considered models are fast learning, easy interpretability (if-then rules for fuzzy
inference systems, m-of-n rules from a trained neural network (Setiono, 2000), and
decision trees), and efficient storage and retrieval capacities. It may also be concluded
that fusing different intelligent systems, with knowledge of their strengths and weaknesses,
could help to mitigate their limitations and exploit their complementary advantages,
producing more efficient decision support systems than those built as stand-alone
systems.


Our future work will be directed towards optimisation of the different intelligent
paradigms (Abraham, 2002), which we have already used, and also to develop new
adaptive reinforcement learning systems that can update the knowledge from data,
especially when no expert knowledge is available.

ACKNOWLEDGMENTS
The authors would like to thank Professor John Fulcher for the editorial comments
which helped to improve the clarity of this chapter.

REFERENCES
Abraham, A. (2001). Neuro-fuzzy systems: State-of-the-art modeling techniques. In J.
Mira & A. Prieto (Eds.), Connectionist models of neurons, learning processes, and
artificial intelligence (pp. 269-276). Berlin, Germany: Springer-Verlag.
Abraham, A. (2002). Optimization of evolutionary neural networks using hybrid learning
algorithms. Proceedings of the IEEE International Joint Conference on Neural
Networks (IJCNN’02): Vol. 3, Honolulu, Hawaii (pp. 2797-2802). Piscataway, NJ:
IEEE Press.
Abraham, A., & Nath, B. (2000a). Evolutionary design of neuro-fuzzy systems: A generic
framework. In A. Namatame, et al. (Eds.), Proceedings of the 4th Japan-Australia
Joint Workshop on Intelligent and Evolutionary Systems (JA2000 - Japan) (pp.
106-113). National Defence Academy (Japan)/University of New South Wales
(Australia).
Abraham, A., & Nath, B. (2000b, December). Evolutionary design of fuzzy control
systems: A hybrid approach. In J. L. Wang (Ed.), Proceedings of the 6th Interna-
tional Conference on Control, Automation, Robotics, and Vision, (ICARCV
2000), Singapore.
Abraham, A., & Nath, B. (2001). A neuro-fuzzy approach for modelling electricity demand
in Victoria. Applied Soft Computing, 1(2), 127-138.
Adibi, J., Ghoreishi, A., Fahimi, M., & Maleki, Z. (1993, April). Fuzzy logic information
theory hybrid model for medical diagnostic expert system. Proceedings of the 12th
Southern Biomedical Engineering Conference, Tulane University, New Orleans,
LA (pp. 211-213).
Breiman, L., Friedman, J., Olshen, R., & Stone, C. J. (1984). Classification and regression
trees. New York: Chapman and Hall.
Cattral, R., Oppacher, F., & Deugo, D. (1999, July 6-9). Rule acquisition with a genetic
algorithm. Proceedings of the Congress on Evolutionary Computation: Vol. 1,
Washington, DC (pp. 125-129). Piscataway, NJ: IEEE Press.
Chappel, A. R. (1992, October 5-8). Knowledge-based reasoning in the Paladin tactical
decision generation system. Proceedings of the 11th AIAA Digital Avionics
Systems Conference, Seattle, WA (pp. 155-160).
Cortés, P., Larrañeta, J., Onieva, L., García, J. M., & Caraballo, M. S. (2001). Genetic
algorithm for planning cable telecommunication networks. Applied Soft Comput-
ing, 1(1), 21-33.


Fogel, D. (1999). Evolutionary computation: Towards a new philosophy of machine


intelligence (2nd ed.). Piscataway, NJ: IEEE Press.
Gorzalczany, M. B. (1996, June 17-20). An idea of the application of fuzzy neural networks
to medical decision support systems. Proceedings of the IEEE International
Symposium on Industrial Electronics (ISIE ’96): Vol. 1, Warsaw, Poland (pp. 398-
403).
Holland, J. H., Kaufmann, M., & Altos, L. (1986). Escaping brittleness: The possibility
of general-purpose learning algorithms applied to rule-based systems. In R. S.
Michalski, J. G. Carbonell, & T. M. Mitchell (Eds.), Machine learning: An artificial
intelligence approach (pp. 593-623). San Mateo, CA: Morgan Kaufmann.
Holsapple, C. W., & Whinston, A. B. (1996). Decision support systems: A knowledge-
based approach. Minneapolis, MN: West Publishing Company.
Hung, C. C. (1993, November). Building a neuro-fuzzy learning control system. AI Expert,
8(10), 40-49.
Ichimura, T., Takano, T., & Tazaki, E. (1995, October 8-11). Reasoning and learning
method for fuzzy rules using neural networks with adaptive structured genetic
algorithm. Proceedings of the IEEE International Conference on Systems, Man,
and Cybernetics — Intelligent Systems for the 21 st Century: Vol. 4, Vancouver,
Canada (pp. 3269-3274).
Jagielska, I. (1998, April 21-23). Linguistic rule extraction from neural networks for
descriptive data mining. Proceedings of the 2nd Conference on Knowledge-Based
Intelligent Electronic Systems — KES’98: Vol. 2, Adelaide, South Australia (pp.
89-92). Piscataway, NJ: IEEE Press.
Jang, R. (1992, July). Neuro-fuzzy modeling: Architectures, analyses, and applications.
PhD Thesis, University of California, Berkeley.
Kasabov, N. (1996). Learning fuzzy rules and approximate reasoning in fuzzy neural
networks and hybrid systems. Fuzzy Sets and Systems, 82, 135-149.
Kasabov, N. (2001, December). Evolving fuzzy neural networks for supervised/unsuper-
vised on-line knowledge-based learning. IEEE Transaction of Systems, Man, and
Cybernetics, Part B — Cybernetic, 31(6), 902-918.
Kasabov, N., Kim, J. S., & Gray, A. R. (1996). FUNN — A fuzzy neural network architecture
for adaptive learning and knowledge acquisition. Information Sciences, 101(3),
155-175.
Kearney, D. A., & Tran, C. M. (1995, October 23-25). Optimal fuzzy controller design for
minimum rate of change of acceleration in a steel uncoiler. Control95 — Meeting
the Challenge of Asia Pacific Growth: Vol. 2, University of Melbourne, Australia
(pp. 393-397).
Lee, C. C. (1990). Fuzzy logic control systems: Fuzzy logic controller — Part I & II. IEEE
Transactions on Systems, Man, and Cybernetics, 20(2), 404-435.
Lin, T. Y., & Cercone, N. (1997). Rough sets and data mining: Analysis of imprecise data.
New York: Kluwer Academic.
Mamdani, E. H., & Assilian, S. (1975). An experiment in linguistic synthesis with a fuzzy
logic controller. International Journal of Man-Machine Studies, 7(1), 1-13.
Mang, G., Lan, H., & Zhang, L. (1995, October 30-November 3). A genetic-base method
of generating fuzzy rules and membership function by learning from examples.
Proceedings of the International Conference on Neural Information (ICONIP’95):
Vol. 1, Beijing, China (pp. 335-338).


Militello, L. G., & Hutton, R. J. B. (1998). Applied cognitive task analysis (ACTA): A
practitioner’s toolkit for understanding cognitive task demands. Ergonomics, 41(11), 1618-1642.
Møller, M. F. (1993). A scaled conjugate gradient algorithm for fast supervised learning.
Neural Networks, 6, 525-533.
Perneel, C., & Acheroy, M. (1994, December 12-13). Fuzzy reasoning and genetic
algorithm for decision making problems in uncertain environment. Proceedings of
the Industrial Fuzzy Control and Intelligent Systems Conference/NASA joint
Technology Workshop on Neural Networks and Fuzzy Logic — NAFIPS/IFIS/
NASA 94, San Antonio, TX (pp. 115-120).
Ponnuswamy, S., Amin, M. B., Jha, R., & Castañon, D. A. (1997). A C3I parallel benchmark
based on genetic algorithms implementation and performance analysis. Journal of
Parallel and Distributed Computing, 47(1), 23-38.
Sanderson, P. M. (1998, November 29-December 4). Cognitive work analysis and the
analysis, design, and evaluation of human computer interactive systems. Proceed-
ings of the Annual Conference of the Computer-Human Interaction Special
Interest Group (CHISIG) of the Ergonomics Society of Australia (OzCHI98),
Adelaide, South Australia (pp. 40-45).
Setiono, R. (2000). Extracting M-of-N rules from trained neural networks. IEEE Transac-
tions on Neural Networks, 11(2), 512-519.
Setiono, R., Leow, W. K., & Zurada, J. M. (2002). Extraction of rules from artificial neural
networks for nonlinear regression. IEEE Transactions on Neural Networks, 13(3),
564-577.
Steinberg, D., & Colla, P. L. (1995). CART: Tree-structured non-parametric data analy-
sis. San Diego, CA: Salford Systems.
Sugeno, M. (1985). Industrial applications of fuzzy control. Amsterdam: Elsevier
Science Publishing Company.
Takagi, T., & Sugeno, M. (1983, December 15-18). Derivation of fuzzy control rules from
human operator’s control actions. Proceedings of the IFAC Symposium on Fuzzy
Information, Knowledge Representation and Decision Analysis, Marseilles, France
(pp. 55-60).
Tan, K. C., & Li, Y. (2001). Performance-based control system design automation via
evolutionary computing. Engineering Applications of Artificial Intelligence,
14(4), 473-486.
Tan, K. C., Yu, Q., Heng, C. M., & Lee, T. H. (2003). Evolutionary computing for knowledge
discovery in medical diagnosis. Artificial Intelligence in Medicine, 27(2), 129-154.
Tran, C., Abraham, A., & Jain, L. (2004). Modeling decision support systems using hybrid
neurocomputing. Neurocomputing, 61C, 85-97.
Tran, C., Jain, L., & Abraham, A. (2002a, December 2-6). Adaptation of Mamdani fuzzy
inference system using neuro — genetic approach for tactical air combat decision
support system. Proceedings of the 15th Australian Joint Conference on Artificial
Intelligence (AI’02), Canberra, Australia (pp. 402-410). Berlin: Springer Verlag.
Tran C., Jain, L., & Abraham, A. (2002b), Adaptive database learning in decision support
system using evolutionary fuzzy systems: A generic framework, hybrid informa-
tion systems. In A. Abraham & M. Oppen (Eds.), Advances in soft computing (pp.
237-252). Berlin: Physica Verlag.
Tran, C., Jain, L., & Abraham, A. (2002c). TACDSS: Adaptation of a Takagi-Sugeno
hybrid neuro-fuzzy system. Proceedings of the 7th Online World Conference on


Soft Computing in Industrial Applications (WSC7) (pp. 53-62). Berlin: Springer


Verlag.
Tran, C., & Jain, L. (2000, September). Intelligent techniques for decision support in
tactical environment. Proceedings of the Warfighting in Complex Terrain: Land
Warfare Conference 2000, Melbourne, Australia, Australian Defence Science &
Technology Organisation (pp. 403-411).
Wang, L. X., & Mendel J. M. (1992). Generating fuzzy rules by learning from examples.
IEEE Transactions on System, Man and Cybernetics, 22(6), 1414-1427.
Watts, M., Woodford, B., & Kasabov, N. (1999, November 22-24). FuzzyCOPE: A
software environment for building intelligent systems — The past, the present, and
the future. Proceedings of the ICONIP/ANZIIS/ANNES’99 Workshop, Dunedin/
Queenstown, New Zealand (pp. 188-192).
Zadeh, L. A. (1973). Outline of a new approach to the analysis of complex systems and
decision processes. IEEE Transactions on System, Man, & Cybernetics, 3(1), 28-
44.
Zadeh, L. A. (1998, August 21-31). Roles of soft computing and fuzzy logic in the
conception, design, and deployment of information/intelligent systems. In O.
Kaynak, L. A. Zadeh, B. Turksen, & I. J. Rudas (Eds.), Computational intelligence:
Soft computing and fuzzy-neuro integration and applications. Proceedings of
NATO Advanced Study Institute on Soft Computing and its Applications: Vol.
162, Manavgat, Antalya, Turkey (pp. 1-9).
Zurada, J. M. (1992). Introduction to artificial neural systems. St. Paul, MN: West
Publishing Company.


Chapter II

Application of Text Mining Methodologies to Health Insurance Schedules
Ah Chung Tsoi, Monash University, Australia

Phuong Kim To, Tedis P/L, Australia

Markus Hagenbuchner, University of Wollongong, Australia

ABSTRACT
This chapter describes the application of a number of text mining techniques to
discover patterns in the health insurance schedule, with the aim of uncovering any
inconsistency or ambiguity in the schedule. In particular, we first apply a simple
“bag of words” technique to study the text data and to evaluate the hypothesis: is there
any inconsistency in the text descriptions of the medical procedures used? It is found
that the hypothesis is not valid, and hence the investigation continues with how best
to cluster the text. This work is of significance to health insurers, in that it assists
them to differentiate descriptions of the medical procedures. Secondly, it would also assist
the health insurer to describe medical procedures in an unambiguous manner.


AUSTRALIAN HEALTH INSURANCE SYSTEM


In Australia, there is a universal health insurance system for her citizens and
permanent residents. This publicly-funded health insurance scheme is administered by
a federal government department called the Health Insurance Commission (HIC). In
addition, the Australian Department of Health and Ageing (DoHA), after consultation
with the medical fraternity, publishes a manual called Medicare Benefit Schedule (MBS)
in which it details each medical treatment procedure and its associated rebate to the
medical service providers who provide such services. When a patient visits a medical
service provider, the HIC will refund or pay the medical service provider at the rate
published in the MBS1 (the MBS is publicly available online from http://www.health.gov.au/pubs/mbs/mbs/css/index.htm).
Therefore, the descriptions of medical treatment procedures in the MBS should be
clear and unambiguous in their interpretation by a reasonable medical service provider, as
ambiguities would lead to the wrong medical treatment procedure being used to invoice
the patient or the HIC. However, the MBS has developed over the years, and is derived
through extensive consultations with medical service providers over a lengthy period.
Consequently, there may exist inconsistencies or ambiguities within the schedule. In this
chapter, we propose to use text mining methodologies to discover if there are any
ambiguities in the MBS.

Figure 1. An overview of the MBS structure in the year 1999: the MBS branches into seven
categories (1 Professional Attendances; 2 Diagnostic Procedures; 3 Therapeutic Procedures;
4 Oral Services; 5 Diagnostic Imaging; 6 Pathology Services; 7 Cleft Lip and Cleft Palate
Services), each of which is subdivided into groups (e.g., A1-A15, D1-D2, T1-T9, O1-O9,
I1-I5, P1-P11, C1-C3)

The MBS is divided into seven categories, each of which describes a collection of
treatments related to a particular type, such as diagnostic treatments, therapeutic
treatments, oral treatments, and so on. Each category is further divided into groups. For
example, in category 1, there are 15 groups, A1, A2, …, A15. Within each group, there are
a number of medical procedures which are denoted by unique item numbers. In other
words, the MBS is arranged in a hierarchical tree manner, designed so that it is easy for
medical service providers to find appropriate items which represent the medical proce-
dures provided to the patient.2 This underlying MBS structure is outlined in Figure 1.
This chapter evaluates the following:
• Hypothesis — Given the arrangement of the items in the way they are organised in
the MBS (Figure 1), are there any ambiguities within this classification? Here,
ambiguity is measured in terms of a confusion table comparing the classification
given by the application of text mining techniques and the classification given in
the MBS. Ideally, if the items are arranged without any ambiguities at all (as
measured by text mining techniques), the confusion table should be diagonal with
zero off diagonal terms.
• Optimal grouping — Assuming that the classification given in MBS is ambiguous
(as revealed in our subsequent investigation of the hypothesis), what is the
“optimal” arrangement of the item descriptions using text mining techniques (here
“optimal” is measured with respect to text mining techniques)? In other words, we
wish to find an “optimal” grouping of the item descriptions together such that there
will be a minimum of misclassifications.

The benefits of this work are as follows:


• From the DoHA point of view, it will allow the discovery of any existing ambiguities
in the MBS. In order to make procedures described in the MBS as distinct as
possible, the described methodology can be employed in evaluating the hypoth-
esis in designing the MBS such that there would not be any ambiguities from a text
mining point of view. This will lead to a better description of the procedures so that
there will be little misinterpretation by medical service providers.
• From a service provider’s point of view, the removal of ambiguities would allow
efficient computer-assisted searching. This will limit misinterpretation, and allow
the implementation of a semi-automatic process for the generation of claims and
receipts.
• While the “optimal grouping” process is mainly derived from a curiosity point of
view, this may assist the HIC in re-grouping some of their existing descriptions of
items in the MBS, so that there will be less opportunities for misinterpretation.

Obviously, the validity of the described method lies in the validity of text mining
techniques in unambiguously classifying a set of documents. Unfortunately, this may
not be the case, as new text mining techniques are constantly being developed.
However, the value of the work presented in this paper lies in the ability to use
existing text mining techniques and to discover, as far as possible, any ambiguities within
the MBS. This is bound to be a conservative measure, as we can only discover
ambiguities as far as possible given the existing tools. There will be other ambiguities


which remain undetected by current text mining techniques. But at least, using our
approach will clear up some of the existing ambiguities. In other words, the text mining
techniques do not claim to be exhaustive; instead, they will indicate ambiguities as far
as possible, given their limitations.
The structure of this chapter is as follows: In the next section, we describe what text
mining is, and how our proposed techniques fall into the general fabric of text mining
research. In the following section, we will describe the “bag of words” approach to text
mining. This is the simplest method in that it does not take any cognizance of semantics
among the words; each word is treated in isolation. In addition, this will give an answer
to the hypothesis as stated above. If ambiguities are discovered by using such a simple
text mining technique, then there must exist ambiguities in the set of documents
describing the medical procedures. This will give us a repository of results to compare
with those when we use other text mining techniques. In the next section, we describe
briefly the latent semantic kernel (LSK) technique to pre-process the feature vectors
representing the text. In this technique, the intention is that it is possible to manipulate
the original feature vectors representing the documents and to shorten them so that they
can better represent the “hidden” message in the documents. We show results which do
not assume the categories as given in the MBS.

TEXT MINING
In text mining, there are two main issues: retrieval and classification (Berry, 2004).
• Retrieval techniques — used to retrieve the particular document:
○ Keyword-based search — this is the simplest method, in that it retrieves the document or documents which match a particular set of key words provided by the user. Such key word sets are often called “queries”.
○ Vector space-based retrieval method — this is often called a “bag of words” approach. It represents the documents in terms of a set of feature vectors. The vectors can then be manipulated so as to show patterns, for example, by grouping similar vectors into clusters (Nigam, McCallum, Thrun, & Mitchell, 2000; Salton, 1983).
○ Latent semantic analysis — this is to study the latent or hidden structure of the set of documents with respect to “semantics”. Here “semantics” is taken to mean “correlation” within the set of documents; it does not mean that the technique will discover the “semantic” relationships between words in the sense of linguistics (Salton, 1983).
○ Probabilistic latent semantic analysis — this is to consider the correlation within the set of documents within a probabilistic setting (Hofmann, 1999a).
• Classification techniques — used to assign data to classes.
○ Manual classification — a set of documents is classified manually into a set of classes or sub-classes.
○ Rule-based classification — a set of rules as determined by experts is used to classify a set of documents.
○ Naïve Bayes classification — this uses Bayes’ theorem to classify a set of documents, with some additional assumptions (Duda, 2001).

○ Probabilistic latent semantic analysis classification — this uses the probabilistic latent semantic analysis technique to classify the set of documents (Hofmann, 1999b).
○ Support vector machine classification — this is to use support vector machine techniques to classify the set of documents (Scholkopf, Burges, & Smola, 1999).

This chapter explores the “bag of words” technique to classify the set of documents
into clusters and compares them with those given in the MBS. The chapter also employs
the latent semantic kernel technique, a technique from kernel machine methods (based
on support vector machines), to manipulate the features of the set of documents before
subjecting them to clustering techniques.

BAG OF WORDS
If we are given a set of m documents D = [d1, d2, ..., dm], it is quite natural to represent
them in a vector space representation. From this set of documents it is simple to find the
vocabulary used. In order that the vocabulary be meaningful, care is taken to apply a
stemming technique, which regards words with the same stem as one word. For example,
the words “representation” and “represent” are considered as one word, rather than two
distinct words, as they have the same stem. Secondly, in order that the vocabulary be
useful for distinguishing documents, we eliminate common words, like “the”, “a”, and “is”,
from the vocabulary. Thus, after these two steps, it is possible to have a vocabulary
w1, w2, ..., wn which represents the words used in the set of documents D. Each document
di can then be represented as an n-vector whose elements denote the frequency of
occurrence of each word in di, with 0 if the word does not occur in di. Thus, from a
representation point of view, the set of documents D can be equivalently represented by
a set of vectors V = [v1, v2, ..., vm], where vi is an n-vector. Note that this set of vectors V
may be sparse, as not every word in the vocabulary occurs in every document (Nigam et
al., 2000). The set of vectors V can then be grouped into clusters using standard
techniques (Duda, 2001).
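A minimal sketch of this “bag of words” construction is given below, using three abbreviated item descriptions of the kind quoted later in the chapter. The crude suffix-stripping stemmer and the short stop-word list are simplified assumptions standing in for the pre-processing described above.

    from sklearn.feature_extraction.text import CountVectorizer

    documents = [
        "Professional attendance at consulting rooms by a general practitioner",
        "Skin and subcutaneous tissue, repair of recent wound on face or neck",
        "Head, ultrasound scan of, performed by a medical practitioner",
    ]

    stop_words = {"the", "a", "is", "at", "by", "of", "on", "or", "and"}

    def crude_stem(word):
        # Very rough stemmer: strip a few common suffixes so words sharing a stem
        # (e.g. "representation"/"represent") map to one vocabulary entry.
        for suffix in ("ation", "ing", "ed", "s"):
            if word.endswith(suffix) and len(word) > len(suffix) + 3:
                return word[: -len(suffix)]
        return word

    def preprocess(text):
        words = [crude_stem(w.strip(",.")) for w in text.lower().split()]
        return " ".join(w for w in words if w not in stop_words)

    vectoriser = CountVectorizer(preprocessor=preprocess)
    V = vectoriser.fit_transform(documents)        # m x n document-term matrix
    print(V.shape, "->", len(vectoriser.vocabulary_), "distinct words")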

Table 1. An overview over the seven categories in the MBS

Category Number of items


1 158
2 108
3 2734
4 162
5 504
6 302
7 62
Total 4030


Table 2. A confusion table showing the classification of documents (the actual


classifications as indicated in the MBS are given horizontally; classifications as
obtained by the naïve Bayes method are presented vertically)

Category 1 2 3 4 5 6 7 Total % Accuracy


1 79 0 0 0 0 0 0 79 100.00
2 1 25 9 0 12 7 0 54 46.30
3 12 3 1323 15 10 3 1 1367 96.78
4 1 0 62 18 0 0 0 81 22.22
5 0 3 18 0 229 1 1 252 90.87
6 0 2 0 0 1 148 0 151 98.01
7 3 0 1 1 2 0 24 31 77.42

In our case, we consider each description of an MBS item as a document. We have


a total of 4030 documents; each document may be of varying length, dependent on the
description of the particular medical procedure. Table 1 gives a summary of the number
of documents in each category.
After taking out commonly occurring words, words with the same stem count, and
so on, we find that there are a total of 4569 distinct words in the vocabulary.
We will use 50% of the total number of items as the training data set, while the other
50% will be used as a testing data set to evaluate the generalisability of the techniques
used. In other words, we have 2015 documents in the training data set, and 2015 in the
testing data set. The content of the training data set is obtained by randomly choosing
items from a particular group so as to ensure that the training data set is sufficiently rich
and representative of the underlying data set.
Once we represent the set of data in this manner, we can then classify the documents
using a simple technique, such as the naïve Bayes classification method (Duda, 2001).
The results of this classification are shown in Table 2.
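The classification step can be sketched as follows: a multinomial naïve Bayes classifier is trained on term-frequency vectors with a 50/50 split and evaluated with a confusion table of the kind shown in Table 2. The corpus and category labels here are synthetic placeholders, not the MBS items, so the printed numbers are purely illustrative.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import confusion_matrix, accuracy_score

    # Placeholder corpus: (description, category) pairs standing in for MBS items.
    docs = ["professional attendance consulting rooms", "repair wound face neck",
            "ultrasound scan head", "professional attendance hospital",
            "repair wound deeper tissue", "ultrasound scan neck"] * 50
    labels = [1, 3, 5, 1, 3, 5] * 50

    V = CountVectorizer().fit_transform(docs)
    X_tr, X_te, y_tr, y_te = train_test_split(V, labels, train_size=0.5,
                                              stratify=labels, random_state=0)
    clf = MultinomialNB().fit(X_tr, y_tr)
    pred = clf.predict(X_te)

    print(confusion_matrix(y_te, pred))            # rows: actual, columns: predicted
    print(f"accuracy = {accuracy_score(y_te, pred):.2%}")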
The percentage accuracy is, on average, 91.61%, with 1846 documents out of 2015
correctly classified. It is further noted that some of the categories are badly classified,
for example, category-2 and category-4. Indeed, it is found that 62 out of 81 category-4
items are misclassified as category-3. Similarly, 12 out of 54 category-2 items are
misclassified as category-5 items.
This result indicates that the hypothesis is not valid; there are ambiguities in the
description of the items in each category, apart from category-1, which could be confused
with those in other categories. In particular, there is a high risk of confusing those items
in category-4 with those in category-3.
A close examination of the list of the 62 category-4 items which are misclassified as
category-3 items by the naïve Bayes classification method indicates that they are indeed
very similar to those in category-3. For simplicity, when we say items in category-3, we
mean that those items are also correctly classified into category-3 by the classification
method. Tables 3 and 4 give an illustration of the misclassified items. It is noted that
misclassified items 52000, 52003, 52006, and 52009 in Table 3 are very similar to the
category-3 items listed in Table 4.


Table 3. Category-4 Items 52000, 52003, 52006, and 52009 misclassified by the naïve
Bayes method as Category-3 items

Item No Item Description


52000 Skin and subcutaneous tissue or mucous membrane, repair of recent
wound of, on face or neck, small (not more than 7 cm long), superficial
52003 Skin and subcutaneous tissue or mucous membrane, repair of recent
wound of, on face or neck, small (not more than 7 cm long), involving
deeper tissue
52006 Skin and subcutaneous tissue or mucous membrane, repair of recent
wound of, on face or neck, large (more than 7 cm long), superficial
52009 Skin and subcutaneous tissue or mucous membrane, repair of recent
wound of, on face or neck, large (more than 7 cm long), involving deeper
tissue

Table 4. Some items in Category 3 which are similar to items 52000, 52003, 52006, and
52009

Item No Item Description


30026 Skin and subcutaneous tissue or mucous membrane, repair of wound of,
other than wound closure at time of surgery, not on face or neck, small (not
more than 7cm long), superficial, not being a service to which another item
in Group T4 applies
30035 Skin and subcutaneous tissue or mucous membrane, repair of wound of,
other than wound closure at time of surgery, on face or neck, small (not
more than 7cm long), involving deeper tissue
30038 Skin and subcutaneous tissue or mucous membrane, repair of wound of,
other than wound closure at time of surgery, not on face or neck, large
(more than 7cm long), superficial, not being a service to which another item
in Group T4 applies
30041 Skin and subcutaneous tissue or mucous membrane, repair of wound of,
other than wound closure at time of surgery, not on face or neck, large
(more than 7cm long), involving deeper tissue, not being a service to which
another item in Group T4 applies
30045 Skin and subcutaneous tissue or mucous membrane, repair of wound of,
other than wound closure at time of surgery, on face or neck, large (more
than 7cm long), superficial
30048 Skin and subcutaneous tissue or mucous membrane, repair of wound of,
other than wound closure at time of surgery, on face or neck, large (more
than 7cm long), involving deeper tissue

It is observed that the way items 5200X are described is very similar to those
represented in items 300YY. For example, item 52000 describes a medical procedure to
repair small superficial cuts on the face or neck. On the other hand, item 30026 describes
the same medical procedure except that it indicates that the wounds are not on the face


or neck, with the distinguishing feature that this is not a service to which another item
in Group T4 applies. It is noted that the description of item 30026 uses the word “not”
to distinguish this from that of item 52000, as well as appending an extra phrase “not being
a service to which another item in Group T4 applies”. From a vector space point of view,
the vector representing item 52000 is very close3 to item 30026, closer than other items
in category-4, due to the few extra distinguishing words between the two. Hence, item
52000 is classified as “one” in category-3, instead of “one” in category-4. Similar
observations can be made for other items shown in Table 3, when compared to those
shown in Table 4.

Table 5. Some correctly classified Category-1 items

Item No Item description


3 Professional attendance at consulting rooms (not being a service to which any
other item applies) by a general practitioner for an obvious problem
characterised by the straightforward nature of the task that requires a short
patient history and, if required, limited examination and management -- each
attendance
4 Professional attendance, other than a service to which any other item applies,
and not being an attendance at consulting rooms, an institution, a hospital, or a
nursing home by a general practitioner for an obvious problem characterised by
the straightforward nature of the task that requires a short patient history and, if
required, limited examination and management -- an attendance on 1 or more
patients on 1 occasion -- each patient
13 Professional attendance at an institution (not being a service to which any other
item applies) by a general practitioner for an obvious problem characterised by
the straightforward nature of the task that requires a short patient history and, if
required, limited examination and management -- an attendance on 1 or more
patients at 1 institution on 1 occasion -- each patient

19 Professional attendance at a hospital (not being a service to which any other


item applies) by a general practitioner for an obvious problem characterised by
the straightforward nature of the task that requires a short patient history and, if
required, limited examination and management -- an attendance on 1 or more
patients at 1 hospital on 1 occasion -- each patient

20 Professional attendance (not being a service to which any other item applies) at
a nursing home including aged persons' accommodation attached to a nursing
home or aged persons' accommodation situated within a complex that includes
a nursing home (other than a professional attendance at a self contained unit) or
professional attendance at consulting rooms situated within such a complex
where the patient is accommodated in a nursing home or aged persons'
accommodation (not being accommodation in a self contained unit) by a
general practitioner for an obvious problem characterised by the
straightforward nature of the task that requires a short patient history and, if
required, limited examination and management -- an attendance on 1 or more
patients at 1 nursing home on 1 occasion -- each patient


Table 6. Some correctly classified Category-5 items

Item No Item description


55028 Head, ultrasound scan of, performed by, or on behalf of, a medical practitioner
where: (a) the patient is referred by a medical practitioner for ultrasonic
examination not being a service associated with a service to which an item in
Subgroups 2 or 3 of this Group applies; and (b) the referring medical
practitioner is not a member of a group of practitioners of which the first
mentioned practitioner is a member (R)
55029 Head, ultrasound scan of, where the patient is not referred by a medical
practitioner, not being a service associated with a service to which an item in
Subgroups 2 or 3 of this Group applies (NR)
55030 Orbital contents, ultrasound scan of, performed by, or on behalf of, a medical
practitioner where: (a) the patient is referred by a medical practitioner for
ultrasonic examination not being a service associated with a service to which an
item in Subgroups 2 or 3 of this Group applies; and (b) the referring medical
practitioner is not a member of a group of practitioners of which the first
mentioned practitioner is a member (R)
55031 Orbital contents, ultrasound scan of, where the patient is not referred by a
medical practitioner, not being a service associated with a service to which an
item in Subgroups 2 or 3 of this Group applies (NR)
55033 Neck, 1 or more structures of, ultrasound scan of, where the patient is not
referred by a medical practitioner, not being a service associated with a service
to which an item in Subgroups 2 or 3 of this Group applies (NR)

On the other hand, Tables 5 and 6 show items which are correctly classified in
category-1 and category-5 respectively. It is observed that items shown in Table 5 are
distinct from those shown in Table 6 in their descriptions. A careful examination of
correctly-classified category-1 items, together with a comparison of their descriptions
with those correctly-classified category-5 items confirms the observations shown in
Tables 5 and 6. In other words, the vectors representing correctly-classified category-
1 items are closer to other vectors in the same category than other vectors representing
other categories.

SUPPORT VECTOR MACHINE AND KERNEL MACHINE METHODOLOGIES
In this section, we will briefly describe the support vector machine and the kernel
machine techniques.

Support Vector Machine and Kernel Machine Methodology
In recent years, there has been increasing interest in a method called support vector
machines (Cristianini & Shawe-Taylor, 2000; Guermeur, 2002; Joachims, 1999; Vapnik,
1995). In brief, the idea can be explained as follows: Assume a set of (n-

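As a hedged illustration of the support vector machine approach introduced above, the
following Python sketch trains an SVM with a linear kernel on TF-IDF vectors of a few
hypothetical item descriptions. Scikit-learn, the example texts, and the parameter
choices are assumptions made for illustration, not the configuration used in the chapter.

# A minimal sketch, assuming scikit-learn and hypothetical training data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Hypothetical (description, category) pairs standing in for schedule items.
train_texts = [
    "professional attendance by a general practitioner for an obvious problem",
    "attendance at a nursing home requiring a short patient history",
    "head ultrasound scan performed on behalf of a medical practitioner",
    "neck ultrasound scan where the patient is not referred",
]
train_labels = [1, 1, 5, 5]   # category-1 vs. category-5

# Linear kernel: items become sparse TF-IDF vectors, for which a linear
# decision boundary is usually a reasonable first choice.
model = make_pipeline(TfidfVectorizer(stop_words="english"),
                      SVC(kernel="linear", C=1.0))
model.fit(train_texts, train_labels)

print(model.predict(["orbital contents ultrasound scan, patient not referred"]))

In practice the kernel and the penalty parameter C would be chosen by cross-validation
on the labelled schedule items rather than fixed in advance.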