
UC Berkeley

UC Berkeley Previously Published Works

Title
A pattern recognition method for electronic noses based on an olfactory neural network

Permalink
https://escholarship.org/uc/item/9tr82425

Journal
Sensors and Actuators B, 125

Authors
Fu, Jun
Li, Guang
Qin, Yuqi
et al.

Publication Date
2007

Peer reviewed

eScholarship.org Powered by the California Digital Library


University of California
Sensors and Actuators B 125 (2007) 489–497

A pattern recognition method for electronic noses based on an olfactory neural network

Jun Fu a, Guang Li b,∗, Yuqi Qin a, Walter J. Freeman c

a Department of Biomedical Engineering, Zhejiang University, Hangzhou 310027, China
b National Laboratory of Industrial Control Technology, Zhejiang University, Hangzhou 310027, China
c Division of Neurobiology, University of California at Berkeley, LSA 142, Berkeley, CA 94720-3200, USA

Received 7 November 2006; received in revised form 27 February 2007; accepted 27 February 2007
Available online 12 March 2007

Abstract

Artificial neural networks (ANNs) are generally considered the most promising pattern recognition methods for processing the signals from the chemical sensor array of an electronic nose, making the system more biomimetic. This paper presents a chaotic neural network entitled KIII, which models olfactory systems, applied to an electronic nose to discriminate six typical volatile organic compounds (VOCs) in Chinese rice wines. Thirty-two-dimensional feature vectors from a sensor array of eight sensors, with four features extracted from the transient response of each TGS sensor, were input into the KIII network to investigate its generalization capability for concentration influence elimination and sensor drift counteraction. In comparison with a conventional back propagation trained neural network (BP-NN), experimental results show that the KIII network performs well in classifying these VOCs at different concentrations, even for data obtained 1 month later than the training set. Its robust generalization capability makes it suitable for electronic nose applications to reduce the influence of concentration and sensor drift.
© 2007 Published by Elsevier B.V.

Keywords: Artificial neural networks; Electronic nose; Pattern recognition; Transient phase; Olfactory model; Sensor drift

1. Introduction

The molecular basis of odor recognition in the human olfactory system has been successfully investigated [1], while the information processing principles of olfactory neural systems remain not very clear. Nevertheless, a bionic technology termed the electronic nose, inspired by the mechanisms of biological olfactory systems, has been studied by many researchers during the past two decades [2]. An electronic nose is an instrument, generally consisting of an array of cross-sensitive electronic chemical sensors and an appropriate pattern recognition method (PARC), that automatically detects and discriminates simple or complex odors [3]. Generally speaking, electronic noses respond faster, are easier to use and are relatively cheaper than conventional analytical techniques, such as gas chromatography/mass spectroscopy (GC/MS) and flame ionization detection (FID), so they have found wide applications in environmental monitoring [4,5], the food and beverage industry [6–8], medical diagnosis [9], public security [10], etc.

As multidisciplinary research, most studies on electronic noses have focused on the sensitivities of the chemical sensor array and on the pattern recognition methods used to process the signals obtained from the sensor array. With the development of functional material technology, signals can be obtained via various sensors, such as metal oxide semiconductor (MOS), optical, conducting polymer (CP), quartz crystal microbalance (QCM) and surface acoustic wave (SAW) sensors [11,12]. However, how to deal with these signals is still crucial for artificial olfaction to reliably recognize various odors. So far, a considerable number of pattern recognition methods have been introduced into electronic noses [13,14], and ANNs are usually considered among the most promising methods for this complicated problem, because they can cope with nonlinear problems and handle noise or drift better than conventional statistical approaches. Many ANNs for processing signals from sensor arrays have been reported, such as the back propagation trained neural network [15], the radial basis function neural network [16], the probabilistic neural network [17] and the self-organizing network [18].

∗ Corresponding author. Tel.: +86 571 87952233 8228.
E-mail address: [email protected] (G. Li).

0925-4005/$ – see front matter © 2007 Published by Elsevier B.V.


doi:10.1016/j.snb.2007.02.058

Although conventional ANNs simulate the hierarchical structure of cortex, only a few ANNs mimic the architecture of a particular neural system. Multi-scale models entitled K sets were introduced by Freeman in the 1970s to describe increasing complexity of structure and dynamical behavior. K-sets are topological specifications of the hierarchy of connectivity in neuron populations, and the KIII network is a complex dynamical system that imitates vertebrate olfactory systems [19,20]. When its parameters are optimized and additive noise is introduced, the KIII network can not only output the electroencephalograph-like waveforms observed in electrophysiological experiments [21–23], but can also be used in a wide range of applications, including spatiotemporal EEG pattern classification [24,25] and handwritten numeral recognition [26]. Recently, Gutierrez-Osuna and Gutierrez-Galvez have shown the potential use of the KIII network to analyze the output signals of a chemical sensor array [27]. They also proposed a new Hebbian/anti-Hebbian learning rule for this model to increase pattern separability for different concentrations of three VOCs [28].

Focusing on the problems of concentration influence and sensor drift, this paper reports an application of the KIII neural network to an electronic nose to recognize VOCs usually present in the headspace of Chinese rice wine.

Fig. 2. A typical output of the sensor array. Features extracted from the response of one sensor are indicated as Vm, Tm, Vf and S. The curves marked with the same symbol were obtained from the same type of sensors.

2. Experimental

2.1. Experimental setup and data acquisition

The experimental setup consists of an array of eight MOS sensors in a sealed test chamber (3000 mL), a set of acquisition circuits including a 12-bit A/D converter, and an IBM PC compatible computer (as shown in Fig. 1). The communication between the signal acquisition circuits and the computer is via an RS232 cable. The eight sensors (TGS880 (2×), TGS813 (2×), TGS822 (2×), TGS800, TGS823) are all commercially available, purchased from Figaro Engineering Inc.

Six VOCs (ethanol, acetic acid, acetaldehyde, ethyl acetate, lactic acid and isoamyl alcohol) usually present in the headspace of Chinese rice wines [29] were of analytical grade and purchased from Sinopharm Chemical Reagent (Shanghai, China). For each VOC, 10 mL of solution rested on the bottom of a 250 mL vial for at least 20 min, so that saturated VOC could be extracted from the headspace of the vial as an analyte.

To distribute the analyte uniformly in the test chamber, a fan inside the chamber stirred the air for 1 min after the VOC was extracted from the vial headspace and injected into the test chamber using a syringe. With a constant voltage (5 V dc) applied to the heater resistors of all sensors, the outputs of the eight sensors were simultaneously measured via the 12-bit 8-channel A/D converter and recorded on the hard disk of the PC for further processing. The sampling rate for each sensor was 20 samples/s and the duration was 1 min. Fig. 2 shows typical response curves of the sensor array. Sensors of the same type have similar but not identical response characteristics, as shown in Fig. 2, implying that none is redundant. After each measurement, the test chamber was flushed with ambient airflow for 5 min to purge the chamber and let the sensors recover by desorption. All measurements were carried out under open laboratory conditions without special atmospheric, humidity or temperature control.

In order to investigate the sensor drift effect, data acquisition was conducted during different periods. Dataset I was collected in May, containing 66 samples (11 samples for each of six VOCs), and Dataset II was collected in June, containing 120 samples (20 samples for each of six VOCs). All the VOC concentrations for Datasets I and II are 30 mL/3000 mL. Dataset III was acquired in August, containing 90 samples (five samples for each of the six VOCs at 30 mL/3000 mL, 50 mL/3000 mL and 70 mL/3000 mL concentrations).

Fig. 1. Scheme of the experimental setup.

2.2. Feature extraction

A typical output of the sensor array consists of eight time series from the eight individual sensors. Features should be extracted to represent the original signals for further processing. Many feature extraction methods have been considered, based on the steady-state phase, the transient phase or both. However, it is commonly believed that the transient response, which represents the different dynamic behaviors of the sensors exposed to different odors [30], may contain more information than the steady-state one. Besides, utilizing the transient response reduces the time required to collect data. In this work, four features were selected (as shown in Fig. 2) to construct a feature vector representing one response of the sensor array to a certain VOC: (1) the maximum voltage of the sensor output, Vm; (2) the time to reach the maximum voltage, Tm; (3) the voltage at 40 s, Vf; and (4) the area covered by the response curve during the first 40 s, S, which is estimated as the sum of the data values over the first 40 s.

In order to reduce the influence of concentration fluctuation on the classification results, a vector normalization is applied as described in Eq. (1). Each feature vector is individually divided by its Euclidean norm so that it lies on a hyper-sphere of unit radius:

$$P_{\mathrm{new}}(i) = \frac{P(i)}{\left(\sum_{i=1}^{8} P(i)^{2}\right)^{1/2}}, \quad i = 1, 2, \ldots, 8 \qquad (1)$$

where P(i) represents Vm, Tm, Vf and S, respectively. Eq. (1) is applied to each feature across the eight sensors. Therefore, the response of the sensor array to a certain VOC can be represented by a 32-dimensional feature vector.
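As an illustration, the feature pipeline above (four transient features per sensor, then the per-feature normalization of Eq. (1)) can be sketched in a few lines of NumPy. This is a minimal sketch, assuming the 20 samples/s rate stated in Section 2.1; the function names are ours, not the authors'.

```python
import numpy as np

FS = 20        # sampling rate in samples/s (Section 2.1)
T_CUT = 40.0   # seconds of signal used for the Vf and S features

def sensor_features(v, fs=FS):
    """Compute the four transient features of Fig. 2 for one sensor curve v."""
    v = np.asarray(v, dtype=float)
    i_max = int(np.argmax(v))
    vm = v[i_max]                  # (1) maximum output voltage, Vm
    tm = i_max / fs                # (2) time to reach the maximum, Tm (s)
    n = int(T_CUT * fs)
    vf = v[min(n, len(v) - 1)]     # (3) voltage at 40 s, Vf
    s = v[:n].sum()                # (4) area estimate: sum over the first 40 s, S
    return np.array([vm, tm, vf, s])

def feature_vector(curves):
    """Eq. (1): divide each feature, taken across the eight sensors, by its
    Euclidean norm, then flatten to the 32-dimensional input vector."""
    F = np.column_stack([sensor_features(v) for v in curves])  # shape (4, 8)
    F /= np.sqrt((F ** 2).sum(axis=1, keepdims=True))          # per-feature norms
    return F.ravel()
```

Applied to the eight response curves of one measurement, `feature_vector` returns a 32-dimensional vector in which each 8-element feature group lies on the unit hyper-sphere, as required by Eq. (1).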

3. Pattern recognition method

3.1. KIII neural network

The KIII network, which models biological olfactory systems, is a massively parallel architecture with multiple layers coupled by both feedforward and feedback loops through distributed delay lines. Fig. 3 shows the topological diagram of the KIII network. Odorant sensory signals from the receptors (R) propagate in parallel to the periglomerular cells (P) and the olfactory bulb (OB) layer via the primary olfactory nerve (PON). The OB layer consists of a set of mutually coupled neural oscillators, each formed by two mitral cells (M) and two granule cells (G). The summed output of all lateral M1 nodes is then transmitted via the lateral olfactory tract (LOT) to the AON and PC, which provides the final output of the olfactory system to other parts of the brain through the deep pyramidal cells (C), as well as feedback to the OB and AON layers. Details of the KIII network and its neurophysiological foundations are given in Refs. [19,21,23,31,32].

Fig. 3. Topology of the KIII neural network.

In Fig. 3, every node, representing a population of interacting neurons, can be described by a second-order ordinary differential equation (ODE) as follows:

$$\frac{1}{ab}\left[\ddot{x}_i(t) + (a+b)\dot{x}_i(t) + ab\,x_i(t)\right] = \sum_{j \neq i}^{N} W_{ij}\,Q(x_j(t), q_j) + I_i(t) \qquad (2)$$

where $x_i(t)$ represents the state variable of the ith node, $x_j(t)$ represents the state variable of the jth node, which is connected to the ith, and $W_{ij}$ indicates the connection strength from j to i. $I_i(t)$ is an external input signal to the ith node. The parameters a and b reflect two rate constants. $Q(x(t), q)$ is a static sigmoid function derived from the Hodgkin-Huxley equation and evaluated by experiments [33]:

$$Q(x(t), q) = \begin{cases} q\left(1 - \exp\left(-\dfrac{\exp(x(t)) - 1}{q}\right)\right), & x(t) > -x_0 \\[4pt] -1, & x(t) \le -x_0 \end{cases} \qquad x_0 = -\ln\left(1 - q \ln\left(1 + \frac{1}{q}\right)\right) \qquad (3)$$

Therefore, the dynamics of the whole olfactory model can be described mathematically by a set of such ODEs, as detailed in Refs. [19,22]. Here, the fourth-order Runge-Kutta method with a fixed step of 1 was applied for the numerical integration of the ODEs.

The parameters of the model are determined by a set of reliable parameter optimization algorithms [21] so that the KIII model outputs EEG-like waveforms as observed in olfactory systems. All parameters not declared in this paper come from Ref. [22]. Moreover, it appears to be very important to introduce additive noise into the KIII network for its stability and robustness. Therefore, low-level Gaussian noise is injected at two significant points, R and AON, to simulate both the peripheral and central sources of noise in olfactory systems. This ensures a convergence of statistical measures on the KIII output trajectories under perturbations of the initial conditions of variables and of parameter values [22].

3.2. Learning rule and classification algorithm

When a pattern to be learned, expressed by an n-dimensional vector, is input in parallel into an n-channel KIII network, the system, which at first presents an aperiodic oscillation in its basal state, soon settles into a specific local basin of an attractor wing, with a quasi-periodic burst in the gamma range, corresponding to this pattern, as shown in Fig. 4. The system memory is defined as the collection of basins and attractor wings of the KIII network, and a recall is the induction by a state transition of a spatiotemporal gamma oscillation [24]. When used for pattern recognition, the outputs of the KIII network are expressed in the form of a spatial amplitude modulated (AM) pattern of a chaotic oscillation in the multi-channel OB layer. Many mathematical methods to extract information from the outputs of the model have been proposed, such as the standard deviation (SD) [24], singular-value decomposition (SVD) [34], root mean square (RMS), principal components analysis (PCA) and the fast Fourier transform (FFT) [35].

Fig. 4. Example time series from (a) the P2, G2, E1 and A1 nodes and (b) the M1 node of the 32-channel KIII network, with a constant stimulus injected via the receptor from step 50 to step 250. (c) Phase portrait of the attractor, M1 node plotted against G2 node in the OB (starting in red, then black, ending in blue). (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of the article.)

In this work, the SD method is adopted. The burst in each M1 node is partitioned into s equal segments, and the mean value of the individual SDs of these segments is calculated as SD(k), as in Eq. (4):

$$SD(k) = \frac{1}{s}\sum_{r=1}^{s} SD_r, \quad k = 1, 2, \ldots, n \qquad (4)$$

When a new sample is presented to the n-channel KIII network, the activity measure over the whole OB layer during this training can be expressed by a vector:

$$\Phi = [SD(1), SD(2), \ldots, SD(n)] \qquad (5)$$

In the training phase, the modified Hebbian learning rule and the habituation learning rule [24] are employed each time to modify the lateral weights Wmml between all M1 nodes in the OB layer (short for Wij and Wji), as shown in Eq. (6). If the activities of two nodes, M1(i) and M1(j), for each pair of i and j, are larger than the mean activity of the OB layer, they are considered to be co-activated by the external stimulus and their connection weights are strengthened by the modified Hebbian learning rule. Otherwise their connection weights decrease at the habituation rate hhab and eventually diminish asymptotically toward zero after several learning cycles.

IF SD(i) > (1 + K)SDm AND SD(j) > (1 + K)SDm
THEN W′ij = hHeb, W′ji = hHeb
ELSE W′ij = hhab Wij, W′ji = hhab Wji   (6)

where $SD_m = (1/n)\sum_{k=1}^{n} SD(k)$, with i, j, k = 1, 2, ..., n and i ≠ j. W′ stands for the weight after learning, while W is the original weight; hHeb and hhab are the learning constants of the Hebbian reinforcement and of habituation, respectively. The bias coefficient K is defined to avoid saturation of the weight space.

The learning process continues until the weight changes of Wmml converge to a desired level. At the end of learning, the cluster centroid Ci of every pattern is determined and the connection weights are fixed in order to perform classification using the trained network. When an unknown sample t from the test set is input, the Euclidean distances from the corresponding activity vector Φt to the training pattern cluster centroids Ci are estimated, and the minimum distance determines the classification.

All calculations and data processing in this study were implemented in MATLAB (version 7.1, MathWorks, USA) on a

Dell Pentium 4 personal computer (CPU 3.00 GHz, RAM 1.00 GB; Dell Inc., USA) running Windows XP (Microsoft, USA).
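To make the building blocks of Section 3 concrete, the sigmoid of Eq. (3) and the learning and readout rules of Eqs. (4)-(6) can be sketched as below. This is a simplified NumPy sketch, not the full KIII model (which couples over 200 ODEs); the asymptote q = 5 is an assumed typical value from the K-set literature, while hHeb, hhab and K are the values reported in Section 4.1.

```python
import math
import numpy as np

H_HEB, H_HAB, K = 0.0395, 0.8607, 0.4   # learning constants from Section 4.1

def Q(x, q=5.0):
    """Eq. (3): static sigmoid derived from the Hodgkin-Huxley equation.
    The asymptote q = 5 is an assumed typical value, not fixed by this paper."""
    x0 = -math.log(1 - q * math.log(1 + 1 / q))
    return q * (1 - math.exp(-(math.exp(x) - 1) / q)) if x > -x0 else -1.0

def activity_vector(bursts, s=5):
    """Eqs. (4)-(5): mean SD over s equal segments of each M1 burst.
    bursts has shape (n_channels, n_steps); returns the vector Phi."""
    segments = np.array_split(np.asarray(bursts, dtype=float), s, axis=1)
    return np.mean([seg.std(axis=1) for seg in segments], axis=0)

def update_weights(W, sd):
    """Eq. (6): Hebbian reinforcement of co-activated M1 pairs and
    habituation of all other lateral weights."""
    W = W.copy()
    co = sd > (1 + K) * sd.mean()          # nodes above the mean OB activity
    for i in range(len(sd)):
        for j in range(len(sd)):
            if i == j:
                continue
            W[i, j] = H_HEB if co[i] and co[j] else H_HAB * W[i, j]
    return W

def classify(phi, centroids):
    """Assign phi to the nearest training-pattern cluster centroid."""
    return int(np.argmin(np.linalg.norm(centroids - phi, axis=1)))
```

In the paper's procedure, `update_weights` would be applied once per learning cycle until the overall weight change falls below a threshold, and `centroids` would hold the mean activity vector Φ of each trained class.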

4. Results and discussion

4.1. KIII neural network implementation

Each simulation of a trial, for either training or testing, lasts about 400 steps. The first 50 steps are an initial period in which the KIII network settles into its basal state, and the input is on during steps 50–250. In the last 150 steps, the KIII network returns to its initial state. All output information of the KIII network is observed from the M1 nodes in the OB layer, and the burst between steps 50 and 350 in each M1 node is equally partitioned into five segments, as shown in Fig. 4(b). The parameters mentioned in Section 3.2 are hHeb = 0.0395, hhab = 0.8607 and K = 0.4, which were determined empirically.

Firstly, how the weight matrix Wmml converges with the number of learning cycles is studied, in order to determine how many learning cycles are needed. Dataset I is used to train the KIII network, which is trained for 10 cycles alternately with the VOCs' feature vectors. The overall weight change, ΔWmml, indicated by the sum of squares of the individual weight changes, is shown in Fig. 5. It can be seen that ΔWmml descends rapidly as the number of learning cycles increases. Once a threshold on the weight change is fixed, the number of learning cycles needed can be determined easily; an example is shown in Fig. 5. In our experiments, the number of learning cycles is between four and six, which keeps ΔWmml on the order of 10−4.

Fig. 5. Convergence curve of the overall weight change, ΔWmml, with respect to the number of learning cycles in a semi-log plot. The number of training cycles can be easily determined by the threshold (e.g., 6 × 10−4).

Dataset I, containing 66 samples (11 samples for each of six VOCs), was used here to investigate the general performance of the KIII network. With the network trained on one sample of each of the six VOCs, the Euclidean distances of all samples, including the training set, to the cluster centroids of the six classes are shown in Fig. 6. According to the classification criteria described in Section 3.2, the results clearly show that the correction rates of most samples are close to 100%, with only a few misclassifications: two lactic acid samples were misrecognized as acetic acid (as shown in Fig. 6(a)), while another sample of acetic acid was misrecognized as lactic acid (as shown in Fig. 6(c)). In each subplot, the Euclidean distance of the first VOC sample to its own cluster centroid is usually the smallest, since that sample was used to train the KIII.

4.2. Concentration influence elimination by the KIII

A challenge of electronic nose applications is the pattern dispersion caused by concentration differences. Most work to

Fig. 6. Euclidean distance from all samples in Dataset I to different cluster centroids of: (a) lactic acid, (b) ethanol, (c) acetic acid, (d) ethyl acetate, (e) isoamyl
alcohol and (f) acetaldehyde. Symbols: (♦) lactic acid, () ethanol, () acetic acid, () ethyl acetate, (*) isoamyl alcohol and (夽) acetaldehyde.
494 J. Fu et al. / Sensors and Actuators B 125 (2007) 489–497

Table 1
Correction rate of classification (%) of the KIII trained with 30 mL/3000 mL, 50 mL/3000 mL and 70 mL/3000 mL samples, respectively

                    30 mL/3000 mL   50 mL/3000 mL   70 mL/3000 mL
Lactic acid         100             100             100
Ethanol             50.0            100             66.7
Acetic acid         100             91.7            75.0
Ethyl acetate       100             100             100
Isoamyl alcohol     75.0            100             100
Acetaldehyde        100             100             100
Average             87.5            98.6            90.3

eliminate this influence has concentrated on applying different normalization methods to preprocess the data. However, linear normalization methods like Eq. (1) only work over a small range of concentration fluctuation, because most sensor responses depend logarithmically on gas concentration; in other words, for nonlinear sensors this normalization does not cancel the concentration dependence completely. An efficient algorithm for an electronic nose should identify chemicals independently of their concentrations, and ANNs appear to be among the appropriate algorithms. Here, Dataset III was used to test the ability of the KIII network to eliminate the concentration influence.

Three samples with the same concentration were randomly chosen to train the KIII network, while the other samples in Dataset III were used for testing. Classification results of the KIII trained at different concentrations (30 mL/3000 mL, 50 mL/3000 mL and 70 mL/3000 mL) are shown in Table 1. The average classification accuracy of the KIII network trained with the 50 mL/3000 mL samples is 98.6%, higher than with the 30 mL/3000 mL or 70 mL/3000 mL samples. This is reasonable, because the pattern normally varies gradually from one to another as the concentration changes, and 50 mL/3000 mL lies in the middle of the concentration gradient. No matter which concentration is used for training, a correction rate better than 87% is achieved; in other words, the concentration error tolerance over a span of 40 mL/3000 mL is about 87%. Although this may not be good enough in comparison with biological olfaction, such concentration tolerance may meet some application requirements when the data acquisition conditions are strictly controlled.

4.3. Sensor drift counteraction by the KIII

Another key issue for an electronic nose is that chemical sensors tend to show significant variations over long time periods when exposed to identical atmospheres. These so-called sensor drifts are due to the aging of the sensors, poisoning effects, and perhaps fluctuations in the sensor temperature caused by environmental changes [36]. It is very important for an electronic nose to have robust generalization and error tolerance capabilities in order to avoid the need for regular sensor calibration or ANN retraining before each use. The need to deal with the sensor drift of electronic noses has long been recognized, and various strategies [36,37] have been developed to address the problem. The following experiments were designed to investigate the drift counteraction capability of the KIII network.

A dimension reduction technique, PCA, can help to provide a better understanding of the nature of the sensor drifts by giving an appropriate visual representation of the raw data in fewer dimensions. Fig. 7 illustrates the PCA plots, which show the differences between Datasets I, II and III, obtained in May (red), June (black) and August (blue), respectively; different VOCs are represented by different symbols. In these PCA plots, the first three principal components account for 86.1% of the variance of the data. The clustering of the different VOC samples from one dataset (samples collected during the same time period) is obvious. However, significant sensor drift occurred: for example, the acetaldehyde samples in June and August are far away from those in May, and the same holds for the ethyl acetate samples.

Fig. 7. PCA plots of six VOCs collected in May (Dataset I, red), June (Dataset II, black) and August (Dataset III, blue). Symbols: (♦) lactic acid, () ethanol, () acetic acid, () ethyl acetate, (*) isoamyl alcohol and (夽) acetaldehyde.

The KIII network was trained with six samples, corresponding to the six kinds of VOCs, chosen from Dataset I. Then the procedure in which the other samples in Dataset I, all samples in Dataset II and the samples with a concentration of 30 mL/3000 mL in Dataset III were classified by the trained network was considered as one trial. The average correction rates

Table 2
Classification correction rates (%) of Datasets I, II and III using the KIII, NPA and BP-NN

                    Dataset I (May)          Dataset II (June)        Dataset III^a (August)
                    KIII    NPA    BP-NN     KIII    NPA    BP-NN     KIII    NPA    BP-NN
Lactic acid         85.0    100    87.7      98.0    100    78.3      50.0    83.3   48.6
Ethanol             100     100    100       100     100    87.6      100     100    62.0
Acetic acid         100     100    86.3      100     100    74.5      86.7    100    52.6
Ethyl acetate       100     100    100       100     100    99.3      0       0      44.6
Isoamyl alcohol     100     100    97.3      100     100    86.5      86.7    93.3   52.0
Acetaldehyde        100     100    86.0      65.0    0      18.3      46.7    0      28.7
Average             97.5    100    92.9      93.9    83.3   74.1      61.6    62.7   48.1

^a Only the samples with a concentration of 30 mL/3000 mL in Dataset III are used, to make the results comparable with Datasets I and II.

of six trials are shown in Table 2. For the KIII network, the average correction rates for the samples of Datasets I, II and III are 97.5%, 93.9% and 61.6%, respectively. The classification accuracy declined only slightly 1 month later, though it dropped dramatically 3 months later. It can be seen that the KIII network has the capability to counteract sensor drift over 1 month. It is not surprising that the KIII network misrecognized all ethyl acetate as acetaldehyde in Dataset III, since the ethyl acetate samples obtained in August moved into the region of acetaldehyde, as shown in the PCA plots in Fig. 7.

A simple nonparametric algorithm (NPA) based on the Euclidean distance metric was used for comparison. In our experiments, the training set was adopted as the pattern templates, the Euclidean distances between testing samples and the templates were calculated, and each sample was assigned to the class of its nearest template. The same classification criteria were employed as in the KIII application. Comparing the classification accuracies of the KIII and the NPA (as shown in Table 2), we find that the KIII network has better generalization capability.

4.4. Performance comparison with BP-NN

Usually, PARC selection is application-oriented and empirical. Criteria including high classification accuracy, speed, simplicity of training, low memory requirements, robustness to outliers and the ability to produce a measure of uncertainty have been proposed in attempts to determine the optimal classifier [38], and several researchers have compared the different PARCs employed by electronic noses [38,39]. To compare performance, a conventional ANN, the back propagation trained neural network (BP-NN), was applied to the classification alongside the KIII network. The BP-NN algorithm was taken from the neural network toolbox in MATLAB.

Being one of the most popular ANNs in electronic noses, BP-NN has become the de facto standard for pattern recognition of signals from a chemical sensor array. BP is a supervised learning algorithm based on the generalized delta rule, usually using gradient descent to minimize the total squared output error between the desired and the actual net outputs. The performance of BP-NN depends on several factors, e.g., the number of hidden layers, the learning rate, momentum and the training data. More details can be found in Ref. [40].

The BP-NN used in this paper is composed of 32 input nodes, 10 hidden nodes and six output nodes representing the clusters. The tan-sigmoid transfer function is selected for both the hidden and the output layers. Gradient descent with a learning rate of 0.05 is chosen. To make a fair comparison, both the BP-NN and the KIII were trained on the same training set until both mean-squared errors reached the same order of magnitude. The neuron in the BP-NN with the highest score in the output layer indicates which class the input sample belongs to; this tolerant classification criterion is similar to that used with the KIII. Five different runs were conducted for each trial to reduce the effect of the random initial weights in the training phase. The classification results of the BP-NN are presented in Table 2 along with those of the KIII. The performance of the BP-NN is not as good as that of the KIII under similar conditions, although optimization by trial and error may improve it considerably.

5. Conclusions

In this paper, a biologically inspired neural network, based on anatomical and electroencephalographic studies of biological olfactory systems, is applied to pattern recognition in electronic noses. Classifying six VOCs commonly present in the headspace of Chinese rice wine, its performance in eliminating the concentration influence and counteracting sensor drift is examined and compared with a simple nonparametric algorithm and the well-known BP-NN.

The KIII neural network performs well in classifying the six VOCs at different concentrations, even for patterns obtained 1 month later than those used for training. Its flexibility and robust fault tolerance are quite suitable for electronic nose applications, which are subject to the problems of concentration influence and sensor drift. Compared with BP-NN, the application of the KIII neural network is time-consuming and requires a lot of memory to solve the many ODEs constituting the KIII; e.g., a 32-channel KIII network consists of over 200 ODEs. Although one classification run required about 1 min in our experiments, this is fast enough to satisfy application requirements. Efficient numerical computation methods, as well as DSP and VLSI hardware specially designed for parallel implementation, are under research for other real-time applications.
The purpose of this paper is not to prove that the KIII network is superior to other signal-processing techniques in the electronic nose community. Instead, we simply wish to introduce a new method for processing sensor-array signals and to draw more researchers' attention to this biological model of olfactory systems. Future work will address improving the performance of the KIII network in electronic nose applications, especially by fully exploiting the spatio-temporal dynamical properties of the model for time-series signals from chemical sensor arrays. More biologically oriented learning and classification rules are still under investigation. We are confident that the study of electronic noses will help us to understand signal processing in biological olfactory systems, and vice versa.

Acknowledgements

This research is supported by the National Basic Research Program of China (973 Program, project No. 2004CB720302), the National Natural Science Foundation of China (No. 60421002) and the Y.C. Tang Disciplinary Development Fund. The authors thank Mr. Jun Zhou and Ms. Lehan He for their experimental assistance and fruitful discussions.

References

[1] L. Buck, R. Axel, A novel multigene family may encode odorant receptors: a molecular basis for odor recognition, Cell 65 (1991) 175–187.
[2] J.W. Gardner, P.N. Bartlett, A brief history of electronic noses, Sensor Actuators B Chem. 18 (1994) 211–220.
[3] J.W. Gardner, P.N. Bartlett, Electronic Noses: Principles and Applications, Oxford University Press, New York, 1999.
[4] K.C. Persaud, P. Wareham, A.M. Pisanelli, E. Scorsone, Electronic nose—a new monitoring device for environmental applications, Sens. Mater. 17 (2005) 355–364.
[5] R.E. Baby, M. Cabezas, E.N. Walsöe de Reca, Electronic nose: a useful tool for monitoring environmental contamination, Sensor Actuators B Chem. 69 (2000) 214–218.
[6] M.P. Marti, R. Boqué, O. Busto, J. Guasch, Electronic noses in the quality control of alcoholic beverages, Trac-Trends Anal. Chem. 24 (2005) 57–66.
[7] S. Ampuero, J.O. Bosset, The electronic nose applied to dairy products: a review, Sensor Actuators B Chem. 94 (2003) 1–12.
[8] E. Schaller, J.O. Bosset, F. Escher, Electronic noses and their application to food, LWT-Food Sci. Technol. 31 (1998) 305–316.
[9] J.W. Gardner, H.W. Shin, E.L. Hines, An electronic nose system to diagnose illness, Sensor Actuators B Chem. 70 (2000) 19–24.
[10] J. Yinon, Detection of explosives by electronic noses, Anal. Chem. 75 (2003) 98A–105A.
[11] K.J. Albert, N.S. Lewis, C.L. Schauer, G.A. Sotzing, S.E. Stitzel, T.P. Vaid, D.R. Walt, Cross-reactive chemical sensor arrays, Chem. Rev. 100 (2000) 2595–2626.
[12] D. James, S.M. Scott, Z. Ali, W.T. O'Hare, Chemical sensors for electronic nose systems, Microchim. Acta 149 (2005) 1–17.
[13] E.L. Hines, E. Llobet, J.W. Gardner, Electronic noses: a review of signal processing techniques, IEE Proc. Circuit Device Syst. 146 (1999) 297–310.
[14] R. Gutierrez-Osuna, Pattern analysis for machine olfaction: a review, IEEE Sens. J. 2 (2002) 189–202.
[15] A.K. Srivastava, Detection of volatile organic compounds (VOCs) using SnO2 gas-sensor array and artificial neural network, Sensor Actuators B Chem. 96 (2003) 24–37.
[16] M. Sriyudthsak, A. Teeramongkolrasasmee, T. Moriizumi, Radial basis neural networks for identification of volatile organic compounds, Sensor Actuators B Chem. 65 (2000) 358–360.
[17] M. Garcia, M. Aleixandre, J. Gutierrez, M.C. Horrillo, Electronic nose for wine discrimination, Sensor Actuators B Chem. 113 (2006) 911–916.
[18] C. Di Natale, A. Macagnano, A. D'Amico, F. Davide, Electronic-nose modeling and data analysis using a self-organizing map, Meas. Sci. Technol. 8 (1997) 1236–1243.
[19] Y. Yao, W.J. Freeman, Model of biological pattern recognition with spatially chaotic dynamics, Neural Netw. 3 (1990) 153–170.
[20] W.J. Freeman, Neurodynamics: An Exploration of Mesoscopic Brain Dynamics, Springer-Verlag, London, UK, 2000.
[21] H.J. Chang, W.J. Freeman, Parameter optimization in models of the olfactory neural system, Neural Netw. 9 (1996) 1–14.
[22] H.J. Chang, W.J. Freeman, B.C. Burke, Biologically modeled noise stabilizing neurodynamics for pattern recognition, Int. J. Bifurc. Chaos 8 (1998) 321–345.
[23] H.J. Chang, W.J. Freeman, B.C. Burke, Optimization of olfactory model in software to give 1/f power spectra reveals numerical instabilities in solutions governed by aperiodic (chaotic) attractors, Neural Netw. 11 (1998) 449–466.
[24] R. Kozma, W.J. Freeman, Chaotic resonance–methods and applications for robust classification of noise and variable patterns, Int. J. Bifurc. Chaos 11 (2001) 1607–1629.
[25] R. Kozma, W.J. Freeman, Classification of EEG patterns using nonlinear dynamics and identifying chaotic phase transitions, Neurocomputing 44–46 (2002) 1107–1112.
[26] X. Li, G. Li, L. Wang, W.J. Freeman, A study on a bionic pattern classifier based on olfactory neural system, Int. J. Bifurc. Chaos 16 (2006) 2425–2434.
[27] R. Gutierrez-Osuna, A. Gutierrez-Galvez, Habituation in the KIII olfactory model with chemical sensor arrays, IEEE Trans. Neural Netw. 14 (2003) 1565–1568.
[28] A. Gutierrez-Galvez, R. Gutierrez-Osuna, Increasing the separability of chemosensor array patterns with Hebbian/anti-Hebbian learning, Sensor Actuators B Chem. 116 (2006) 29–35.
[29] Z.D. Bao, R.N. Xu, Analysis of the flavor components in Yellow rice wine, Liquor Mak. 5 (1999) 65–67 (in Chinese).
[30] E. Llobet, J. Brezmes, X. Vilanova, J.E. Sueiras, X. Correig, Qualitative and quantitative analysis of volatile organic compounds using transient and steady-state responses of a thick-film tin oxide gas sensor array, Sensor Actuators B Chem. 41 (1997) 13–21.
[31] W.J. Freeman, Mass Action in the Nervous System, Academic Press, New York, 1975.
[32] W.J. Freeman, Simulation of chaotic EEG patterns with a dynamic model of the olfactory system, Biol. Cybern. 56 (1987) 139–150.
[33] W.J. Freeman, Nonlinear gain mediating cortical stimulus-response relations, Biol. Cybern. 33 (1979) 237–247.
[34] S. Quarder, U. Claussnitzer, M. Otto, Using singular-value decompositions to classify spatial patterns generated by a nonlinear dynamic model of the olfactory system, Chemometr. Intell. Lab. Syst. 59 (2001) 45–51.
[35] K. Shimoide, W.J. Freeman, Dynamic neural network derived from the olfactory system with examples of applications, IEICE Trans. Fundam. Electron. Commun. Comput. Sci. E78-A (1995) 869–884.
[36] M. Holmberg, F. Winquist, I. Lundström, F. Davide, C. Di Natale, A. D'Amico, Drift counteraction for an electronic nose, Sensor Actuators B Chem. 36 (1996) 528–535.
[37] T. Artursson, T. Eklöv, I. Lundström, P. Mårtensson, M. Sjöström, M. Holmberg, Drift correction for gas sensors using multivariate methods, J. Chemometr. 14 (2000) 711–723.
[38] R.E. Shaffer, S.L. Rose-Pehrsson, R.A. McGill, A comparison study of chemical sensor array pattern recognition algorithms, Anal. Chim. Acta 384 (1999) 305–317.
[39] M. Bicego, G. Tessari, G. Tecchiolli, M. Bettinelli, A comparative analysis of basic pattern recognition techniques for the development of small size electronic nose, Sensor Actuators B Chem. 85 (2002) 137–144.
[40] S. Haykin, Neural Networks: A Comprehensive Foundation, 2nd ed., Prentice-Hall, Englewood Cliffs, NJ, 1999.
Biographies

Jun Fu entered Zhejiang University, China in 1999, and received his BSc degree in biomedical engineering in 2004. Currently, he is a PhD candidate at Zhejiang University, majoring in biomedical engineering. His research interests include biosensors, pattern recognition and artificial olfaction.

Guang Li is a professor at the Department of Control Science and Engineering, Zhejiang University. He received his BSc and MSc degrees in biomedical engineering at Zhejiang University, China in 1987 and 1991, respectively. He obtained his PhD degree in biomedical engineering at Imperial College of Science, Technology and Medicine, London, UK in 1998. He previously worked at the University of Glasgow and Moor Instruments Ltd., UK (1998–2001). His research interests include biosensors, biomedical instruments and neuroinformatics.

Yuqi Qin is a senior undergraduate student of biomedical engineering at Zhejiang University, China. She is also a member of Chu Kochen Honors College of Zhejiang University. Her research interest is signal processing.

Walter J. Freeman studied physics and mathematics at M.I.T., philosophy at the University of Chicago, medicine at Yale University (M.D. cum laude 1954), internal medicine at Johns Hopkins, and neurophysiology at UCLA. He has taught brain science at the University of California, Berkeley since 1959, where he is Professor of the Graduate School. His research interests include nonlinear neurodynamics and brain science.