Neural Networks with Java: Neural Net Overview
A neural net is an artificial representation of the human brain that tries to simulate its learning
process.
The term "artificial" means that neural nets are implemented as computer programs that are able
to handle the large number of calculations necessary during the learning process.
To show where neural nets have their origin, let's have a look at the biological model: the human
brain.
The human brain consists of a large number (more than a billion) of neural cells that process
information. Each cell works like a simple processor, and only the massive interaction between
all cells and their parallel processing makes the brain's abilities possible.
As the figure indicates, a neuron consists of a core, dendrites for incoming information, and an
axon with branches at its end for outgoing information that is passed to connected neurons.
Information is transported between neurons in the form of electrical stimulation along the dendrites.
Incoming information that reaches the neuron's dendrites is summed up and then delivered along
the neuron's axon to the branches at its end, where it is passed on to other neurons if the
stimulation has exceeded a certain threshold. In this case, the neuron is said to be activated.
If the incoming stimulation is too low, the information will not be transported any further.
In this case, the neuron is said to be inhibited.
The connections between the neurons are adaptive, which means that the connection structure
changes dynamically. It is commonly acknowledged that the learning ability of the human brain
is based on this adaptation.
Generally speaking, there are many different types of neural nets, but they all have nearly the same
components.
If one wants to simulate the human brain using a neural net, it is obvious that some drastic
simplifications have to be made:
First of all, it is impossible to "copy" the true parallel processing of all neural cells. Although
there are computers capable of parallel processing, the large number of processors that would be
necessary to realize it is beyond today's hardware.
Another limitation is that a computer's internal structure can't be changed while it is performing
its tasks.
As you can see, an artificial neuron looks similar to a biological neural cell, and it works in
much the same way.
Information (called the input) is sent to the neuron on its incoming weights. This input is
processed by a propagation function that adds up the values of all weighted inputs.
The resulting value is compared with a certain threshold value by the neuron's activation function.
If the input exceeds the threshold value, the neuron will be activated; otherwise it will be
inhibited.
If activated, the neuron sends an output on its outgoing weights to all connected neurons, and so
on.
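To make this concrete, here is a minimal sketch of such an artificial neuron in Java. It is an
illustration only (not one of the classes presented later in this document): the propagation
function sums the weighted inputs, and a hard-limiter activation function compares the sum to
the threshold.

// Minimal sketch of the artificial neuron described above: a propagation
// function sums the weighted inputs, and a hard-limiter activation function
// compares the sum to a threshold.
public class SimpleNeuron {
    private final double[] weights;   // incoming weights
    private final double threshold;

    public SimpleNeuron(double[] weights, double threshold) {
        this.weights = weights;
        this.threshold = threshold;
    }

    /** Returns 1 (activated) or 0 (inhibited) for the given input pattern. */
    public int activate(double[] inputs) {
        double sum = 0.0;                       // propagation function
        for (int i = 0; i < weights.length; i++) {
            sum += inputs[i] * weights[i];
        }
        return (sum > threshold) ? 1 : 0;       // hard-limiter activation
    }
}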
In a neural net, the neurons are grouped in layers, called neuron layers. Usually each neuron of
one layer is connected to all neurons of the preceding and the following layer (except for the
input layer and the output layer of the net).
The information given to a neural net is propagated layer by layer from the input layer to the
output layer through zero, one, or more hidden layers. Depending on the learning algorithm, it is
also possible for information to be propagated backwards through the net.
The following figure shows a neural net with three neuron layers.
Note that this is not the general structure of a neural net. For example, some neural net types
have no hidden layers, or their neurons are arranged as a matrix.
What is common to all neural net types is the presence of at least one weight matrix, which holds
the connections between two neuron layers.
Neural nets are being constructed to solve problems that can't be solved using conventional
algorithms.
Such problems are usually optimization or classification problems.
The different problem domains where neural nets may be used are:
❍ pattern association
❍ pattern classification
❍ regularity detection
❍ image processing
❍ speech analysis
❍ optimization problems
❍ robot steering
❍ processing of inaccurate or incomplete inputs
❍ quality assurance
❍ stock market forecasting
❍ simulation
❍ ...
There are many different neural net types, each having special properties, so each problem
domain has its own net type (see Types of neural nets for a more detailed description).
Generally it can be said that neural nets are very flexible systems for problem-solving purposes.
One ability should be mentioned explicitly: the error tolerance of neural networks. This means
that if a neural net has been trained for a specific problem, it will be able to recall correct
results even if the problem to be solved is not exactly the same as the one already learned. For
example, suppose a neural net has been trained to recognize human speech. During the learning
process, a certain person pronounces some words, which are learned by the net. Then, if trained
correctly, the neural net should be able to recognize those words spoken by another person, too.
But all that glitters is not gold. Although neural nets are able to find solutions for difficult
problems like those listed above, the results can't be guaranteed to be perfect or even correct.
They are just approximations of a desired solution, and a certain error is always present.
Additionally, there exist problems that can't be solved correctly by neural nets. An example from
pattern recognition should illustrate this:
If you meet a person you have seen earlier in your life, you will usually recognize him or her the
second time, even if he or she doesn't look the same as at your first encounter.
Suppose now that you trained a neural net with a photograph of that person; this image will surely
be recognized by the net. But if you add heavy noise to the picture or rotate it to some degree,
the recognition will probably fail.
Surely, nobody would ever use a neural network in a sorting algorithm, for there exist much better
and faster algorithms; but in problem domains like those mentioned above, neural nets are always a
good alternative to existing algorithms and definitely worth a try.
Perceptron

Perceptron characteristics

(figure: sample structure)

type                 feedforward
neuron layers        1 input layer
                     1 output layer
input value types    binary
activation function  hard limiter
learning method      supervised
learning algorithm   Hebb learning rule
mainly used in       simple logical operations
                     pattern classification
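As a small illustration of the Hebb learning rule named in the table, here is one possible weight
update step in Java. It is a sketch of the common supervised formulation (weight change =
learning rate * input * target output); the details of the book's own Perceptron implementation
may differ.

// Sketch of one Hebb-rule weight update for a two-layer Perceptron.
// Each weight is reinforced in proportion to the product of its input
// and the desired output (variable names are illustrative only).
public class HebbRule {
    public static void hebbStep(double[] weights, double[] inputs,
                                double target, double learningRate) {
        for (int i = 0; i < weights.length; i++) {
            weights[i] += learningRate * inputs[i] * target;
        }
    }
}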
Multi-Layer-Perceptron

Multi-Layer-Perceptron characteristics

(figure: sample structure)

type                 feedforward
neuron layers        1 input layer
                     1 or more hidden layers
                     1 output layer
input value types    binary
activation function  hard limiter / sigmoid
learning method      supervised
learning algorithm   delta learning rule
                     backpropagation (mostly used)
mainly used in       complex logical operations
                     pattern classification
Backpropagation Net
The Backpropagation Net was first introduced by D.E. Rumelhart, G.E. Hinton, and R.J. Williams in
1986
and is one of the most powerful neural net types.
It has the same structure as the Multi-Layer-Perceptron and uses the backpropagation learning
algorithm.
type                 feedforward
neuron layers        1 input layer
                     1 or more hidden layers
                     1 output layer
input value types    binary
activation function  sigmoid
learning method      supervised
learning algorithm   backpropagation
mainly used in       complex logical operations
                     pattern classification
                     speech analysis
Hopfield Net
The Hopfield Net was first introduced by the physicist J.J. Hopfield in 1982 and belongs to the
neural net types called "thermodynamical models".
It consists of a set of neurons, where each neuron is connected to every other neuron. There is no
differentiation between input and output neurons.
The main application of a Hopfield Net is the storage and recognition of patterns, e.g. image files.
type                 feedback
neuron layers        1 matrix
input value types    binary
activation function  signum / hard limiter
learning method      unsupervised
learning algorithm   delta learning rule
                     simulated annealing (mostly used)
mainly used in       pattern association
                     optimization problems
Kohonen Feature Map
The Kohonen Feature Map was first introduced by the Finnish professor Teuvo Kohonen (Helsinki
University of Technology) in 1982.
It is probably the most useful neural net type if the learning process of the human brain is to be
simulated. The "heart" of this type is the feature map, a neuron layer whose neurons organize
themselves according to certain input values.
The type of this neural net is both feedforward (input layer to feature map) and feedback (within
the feature map).
(A Kohonen Feature Map is used in the sample applet.)

(figure: sample structure)
In the human brain, information is passed between the neurons in the form of electrical stimulation along the dendrites. If a certain
amount of stimulation is received by a neuron, it generates an output to all other connected neurons, and so the information takes its
way to its destination, where some reaction will occur. If the incoming stimulation is too low, no output is generated by the neuron
and the further transport of the information is blocked.
Explaining how the human brain learns certain things is quite difficult, and nobody knows exactly how it works.
It is supposed that during the learning process the connection structure among the neurons is changed, so that certain stimulations
are only accepted by certain neurons. This means there exist firm connections between the neural cells that have once learned a
specific fact, enabling the fast recall of this information.
If some related information is acquired later, the same neural cells are stimulated and will adapt their connection structure
according to this new information.
On the other hand, if specific information isn't recalled for a long time, the established connection structure between the
responsible neural cells becomes weaker. This is what has happened when someone has "forgotten" a once-learned fact or can only
remember it vaguely.
As mentioned before, neural nets try to simulate the human brain's ability to learn. That is, the artificial neural net is also made of
neurons and dendrites. Unlike the biological model, a neural net has an unchangeable structure, built of a specified number of
neurons and a specified number of connections between them (called "weights"), which have certain values.
What changes during the learning process are the values of those weights. Compared to the original this means:
Incoming information "stimulates" (exceeds a specified threshold value of) certain neurons, which pass the information on to
connected neurons or prevent further transport along the weighted connections. The value of a weight is increased if information
should be transported and decreased if not.
While different inputs are learned, the weight values change dynamically until they are balanced, so that each input leads to the
desired output.
The training of a neural net results in a matrix that holds the weight values between the neurons. Once a neural net has been trained
correctly, it will probably be able to find the desired output for a given input that has been learned, by using these matrix values.
I said "probably". That is sad but true, for it can't be guaranteed that a neural net will recall the correct results in every case.
Very often there is a certain error left after the learning process, so the generated output is only a good approximation of the
perfect output in most cases.
The following sections introduce several learning algorithms for neural networks.
A neural net is said to learn supervised if the desired output is already known.
Example: pattern association
Suppose a neural net is to learn to associate the following pairs of patterns. The input patterns are decimal numbers, each
represented by a sequence of bits. The target patterns are given in the form of binary values of the decimal numbers:
While learning, one of the input patterns is given to the net's input layer. This pattern is propagated through the net (independent of
its structure) to the net's output layer. The output layer generates an output pattern, which is then compared to the target pattern.
Depending on the difference between output and target, an error value is computed.
This output error indicates the net's learning effort, which can be controlled by the "imaginary supervisor": the greater the
computed error value, the more the weight values are changed.
Forwardpropagation
Forwardpropagation is a supervised learning algorithm and describes the "flow of information" through a neural net from its input
layer to its output layer.
Example:
Suppose you have the following 2-layered Perceptron:
Patterns to be learned:
input   target
0 1     0
1 1     1
First, the weight values are set to random values (0.35 and 0.81).
The learning rate of the net is set to 0.25.
Next, the values of the first input pattern (0 1) are set to the neurons of the input layer (the output of the input layer is the
same as its input).
The neurons in the following layer (only one neuron in the output layer) are activated:
Now that the weights are changed, the second input pattern (1 1) is presented to the input layer's neurons, and the activation of
the output neuron is performed again, now with the new weight values:
That was one learning step. Each input pattern has been propagated through the net and the weight values were changed.
The error of the net can now be calculated by adding up the squared values of the output errors of each pattern:
By performing this procedure repeatedly, this error value gets smaller and smaller.
The algorithm finishes successfully when the net error is zero (perfect) or approximately zero.
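The following Java sketch reproduces this learning procedure for the 2-layered Perceptron above. The initial weights (0.35, 0.81),
the learning rate (0.25), and the patterns are taken from the text; the hard-limiter threshold of 1.0 and the delta-rule update are
assumptions, since the figures with the step-by-step computations are not reproduced here.

// Worked sketch of the forwardpropagation example: a 2-layered Perceptron
// with two input neurons and one output neuron. The threshold of 1.0 is an
// assumption; initial weights and learning rate come from the text.
public class ForwardpropagationDemo {
    public static void main(String[] args) {
        double[] weights = {0.35, 0.81};
        double learningRate = 0.25;
        double[][] inputs = {{0, 1}, {1, 1}};   // patterns to be learned
        double[] targets  = {0, 1};

        for (int cycle = 0; cycle < 100; cycle++) {
            double netError = 0.0;
            for (int p = 0; p < inputs.length; p++) {
                // propagation function: sum of weighted inputs
                double sum = inputs[p][0] * weights[0]
                           + inputs[p][1] * weights[1];
                // hard-limiter activation (assumed threshold 1.0)
                double output = (sum >= 1.0) ? 1 : 0;
                double error = targets[p] - output;
                // delta learning rule: weight change = rate * input * error
                for (int i = 0; i < weights.length; i++) {
                    weights[i] += learningRate * inputs[p][i] * error;
                }
                netError += error * error;  // sum of squared output errors
            }
            if (netError == 0.0) break;     // net has learned both patterns
        }
    }
}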
Backpropagation
Backpropagation is a supervised learning algorithm and is mainly used by Multi-Layer-Perceptrons to change the weights
connected to the net's hidden neuron layer(s).
The backpropagation algorithm uses the computed output error to change the weight values in backward direction.
To get this net error, a forwardpropagation phase must have been performed first. While propagating in forward direction, the
neurons are activated using the sigmoid activation function:
f(x) = 1 / (1 + e^(-x))
Example:
Suppose you have the following 3-layered Multi-Layer-Perceptron:
Patterns to be learned:
input   target
0 1     0
1 1     1
First, the weight values are set to random values: 0.62, 0.42, 0.55, -0.17 for weight matrix 1 and 0.35, 0.81 for weight
matrix 2.
The learning rate of the net is set to 0.25.
Next, the values of the first input pattern (0 1) are set to the neurons of the input layer (the output of the input layer is the
same as its input).
The first input pattern has been propagated through the net.
The same procedure is used for the next input pattern, but with the changed weight values.
After the forward and backward propagation of the second pattern, one learning step is complete, and the net error can be
calculated by adding up the squared output errors of each pattern.
By performing this procedure repeatedly, the error value gets smaller and smaller.
The algorithm finishes successfully when the net error is zero (perfect) or approximately zero.
Note that this algorithm is also applicable for Multi-Layer-Perceptrons with more than one hidden layer.
If all values of an input pattern are zero, the weights in weight matrix 1 would never be changed for this pattern, and the net
could not learn it. For that reason, a "pseudo input" with a constant output value of 1 is created, called the bias.
The additional weights leading from the bias to the neurons of the hidden layer and the output layer have initial random values
and are changed in the same way as the other weights. Because the bias sends a constant output of 1 to the following neurons, it
is guaranteed that the input values of those neurons always differ from zero.
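Here is a compact Java sketch of the whole procedure for the 3-layered Multi-Layer-Perceptron above, including the bias inputs
just described. The initial weights (0.62, 0.42, 0.55, -0.17 and 0.35, 0.81) and the learning rate (0.25) are taken from the text;
the rest is the standard backpropagation rule with sigmoid activation, not the book's BackpropagationNet class.

import java.util.Random;

// Sketch of backpropagation learning for the 3-layered MLP above
// (2 input, 2 hidden, 1 output neuron), with bias weights as described.
public class BackpropagationDemo {
    static double sigmoid(double x) { return 1.0 / (1.0 + Math.exp(-x)); }

    public static void main(String[] args) {
        double[][] w1 = {{0.62, 0.42}, {0.55, -0.17}};  // weight matrix 1
        double[]   w2 = {0.35, 0.81};                   // weight matrix 2
        Random rnd = new Random();
        double[] bias1 = {rnd.nextDouble(), rnd.nextDouble()}; // bias -> hidden
        double   bias2 = rnd.nextDouble();                     // bias -> output
        double rate = 0.25;
        double[][] inputs = {{0, 1}, {1, 1}};
        double[] targets  = {0, 1};

        for (int cycle = 0; cycle < 10000; cycle++) {
            double netError = 0.0;
            for (int p = 0; p < inputs.length; p++) {
                double[] x = inputs[p];
                // forward phase: activate hidden and output neurons
                double[] hidden = new double[2];
                for (int j = 0; j < 2; j++) {
                    hidden[j] = sigmoid(x[0] * w1[0][j] + x[1] * w1[1][j] + bias1[j]);
                }
                double output = sigmoid(hidden[0] * w2[0] + hidden[1] * w2[1] + bias2);

                // backward phase: propagate the output error back
                double deltaOut = output * (1 - output) * (targets[p] - output);
                for (int j = 0; j < 2; j++) {
                    double deltaHidden = hidden[j] * (1 - hidden[j]) * deltaOut * w2[j];
                    w2[j] += rate * hidden[j] * deltaOut;
                    for (int i = 0; i < 2; i++) {
                        w1[i][j] += rate * x[i] * deltaHidden;
                    }
                    bias1[j] += rate * deltaHidden;
                }
                bias2 += rate * deltaOut;
                netError += (targets[p] - output) * (targets[p] - output);
            }
            if (netError < 0.0001) break;  // approximately zero: finished
        }
    }
}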
Selforganization
Selforganization is an unsupervised learning algorithm used by the Kohonen Feature Map neural net.
As mentioned in previous sections, a neural net tries to simulate the biological human brain, and selforganization is probably the
best way to realize this.
It is commonly known that the cortex of the human brain is subdivided into different regions, each responsible for certain functions.
The neural cells organize themselves in groups, according to incoming information.
Incoming information is not only received by a single neural cell; it also influences other cells in its neighbourhood. This
organization results in a kind of map on which neural cells with similar functions are arranged close together.
This selforganization process can also be performed by a neural network. Such neural nets are mostly used for classification
purposes, because similar input values are represented in certain areas of the net's map.
A sample structure of a Kohonen Feature Map that uses the selforganization algorithm is shown below:
Kohonen Feature Map with 2-dimensional input and 2-dimensional map (3x3 neurons)
As you can see, each neuron of the input layer is connected to each neuron on the map. The resulting weight matrix is used to
propagate the net's input values to the map neurons.
Additionally, all neurons on the map are connected among themselves. These connections are used to influence neurons in a
certain area of activation around the neuron with the greatest activation, received from the input layer's output.
The amount of feedback between the map neurons is usually calculated using the Gauss function:
-|xc-xi|2
-------- where xc is the position of the most activated
neuron
2 * sig2 xi are the positions of the other map neurons
feedbackci = e sig is the activation area (radius)
In the beginning, the activation area is large and so is the feedback between the map neurons. This results in an activation of
neurons in a wide area around the most activated neuron.
As the learning progresses, the activation area is constantly decreased and only neurons closer to the activation center are
influenced by the most activated neuron.
Unlike the biological model, the map neurons don't change their positions on the map. The "arranging" is simulated by changing
the values in the weight matrix (in the same way as other neural nets do).
Because selforganization is an unsupervised learning algorithm, no input/target patterns exist. The input values passed to the net's
input layer are taken out of a specified value range and represent the "data" that should be organized.
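The following Java sketch shows one selforganization step for the 3x3 map above, using the Gauss feedback formula. The update
rule (moving each map neuron's weight vector toward the input value, scaled by the feedback) is the standard Kohonen rule;
variable names are illustrative and do not come from the KohonenFeatureMap class.

// Sketch of one selforganization step for a Kohonen Feature Map with
// 2-dimensional input values and a 3x3 map of neurons.
public class SelforganizationDemo {
    public static void step(double[][][] w, double[] input,
                            double rate, double sig) {
        // find the activation center: the map neuron whose weight vector
        // is closest to the input (the most activated neuron)
        int cx = 0, cy = 0;
        double best = Double.MAX_VALUE;
        for (int x = 0; x < 3; x++) {
            for (int y = 0; y < 3; y++) {
                double dx = w[x][y][0] - input[0];
                double dy = w[x][y][1] - input[1];
                double dist = dx * dx + dy * dy;
                if (dist < best) { best = dist; cx = x; cy = y; }
            }
        }
        // move all map neurons toward the input value, weighted by the
        // Gauss feedback around the activation center (cx, cy)
        for (int x = 0; x < 3; x++) {
            for (int y = 0; y < 3; y++) {
                double d2 = (x - cx) * (x - cx) + (y - cy) * (y - cy);
                double feedback = Math.exp(-d2 / (2 * sig * sig));
                for (int k = 0; k < 2; k++) {
                    w[x][y][k] += rate * feedback * (input[k] - w[x][y][k]);
                }
            }
        }
    }

    public static void main(String[] args) {
        java.util.Random rnd = new java.util.Random();
        double[][][] w = new double[3][3][2];
        for (double[][] row : w)
            for (double[] v : row) { v[0] = rnd.nextDouble(); v[1] = rnd.nextDouble(); }
        double sig = 2.0;                      // initial activation area
        for (int cycle = 0; cycle < 1000; cycle++) {
            double[] input = {rnd.nextDouble(), rnd.nextDouble()};
            step(w, input, 0.25, sig);
            sig = Math.max(0.1, sig * 0.995);  // activation area is decreased
        }
    }
}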
In the beginning, when the weights have random values, the feature map is just an unordered mess.
After 200 learning cycles, the map has "unfolded" and a grid can be seen.
As the learning progresses, the map becomes more and more structured.
It can be seen that the map neurons are trying to get closer to their nearest blue input value.
At the end of the learning process, the feature map is spanned over all input values.
The reason why the grid is not very beautiful is that the neurons in the middle of the feature map are also trying to get closer to the
input values. This leads to a distorted look of the grid.
The selforganization is finished at this point.
I recommend doing your own experiments with the sample applet in order to understand its behaviour. (A description of the
applet's controls is given on the corresponding page.)
By changing the net's parameters, it is possible to produce situations, where the feature map is unable to organize itself correctly.
Try, for example, to give the initial activation area a very small value or enter too many input values.
Neural Networks with Java: glossary
a
activation area refers to: Kohonen Feature Map
A value that indicates the area of influence of the
most activated neuron (the center of activation) on
other map neurons. The activation is spread out
around this center (maximum activation) and
decreases the greater the distance to this center is.
activation function refers to: neuron
A mathematical function that a neuron uses to
produce an output referring to its input value.
Usually this input value has to exceed a specified
threshold value that determines whether an output
to other neurons should be generated.
see also: hard limiter, signum activation, sigmoid activation
b
backpropagation refers to: learning algorithm
A learning algorithm used by neural nets with
supervised learning. Special form of the delta
learning rule.
see also: forwardpropagation
d
delta learning rule refers to: learning algorithm
A learning algorithm used by neural nets with
supervised learning. It changes a weight by
multiplying the neuron's input with the difference
between its output and the desired output, and
with the net's learning rate.
e
error refers to: neural net, output
A value that indicates the "quality" of a neural net's
learning process. Used by neural nets with
supervised learning, by comparing the current output
values with the desired output values of the net. The
smaller the net's error is, the better the net has been
trained. Usually a small error greater than zero
remains even after training.
f
feedback type refers to: neural net
A specific connection structure of a neural net,
where neurons of one neuron layer may have
connections to neurons of other layers and also to
neurons of the same layer. An example of such a net
type is the Hopfield Net.
see also: feedforward type
g
GUI refers to: general term, abbreviation
Graphical User Interface.
The graphical environment of a software
application.
h
hard limiter refers to: neuron, activation function
A specific type of a neuron's activation function.
see also: activation function
i
input refers to: neuron, input layer
A set of values, called "pattern", that is passed to a
neural net's input neuron layer. The elements of
those patterns are usually binary values.
see also: input layer, output
j
JDK refers to: general term, abbreviation
Java Developers Kit.
An extensive set of Java classes, suitable for
different purposes. The classes are enclosed in
"packages", each covering a certain topic
(networking, user interface,...).
see also: API
k
Kohonen Feature Map refers to: neural net
A feedforward / feedback type neural net. Built of an
input layer whose neurons are connected with each
neuron of another layer, called the "feature map". The
feature map can be one- or two-dimensional, and
each of its neurons is connected to all other neurons
on the map. Mainly used for classification.
see also: selforganization
l
learning algorithm refers to: neural net
A mathematical algorithm that a neural net uses to
learn specific problems.
see also: backpropagation, delta learning rule,
forwardpropagation, Hebb learning rule, simulated
annealing
m
Multi-Layer-Perceptron refers to: neural net
A feedforward type neural net. Built of an input
layer, at least one hidden layer and one output layer.
Mainly used for pattern association.
see also: backpropagation, hidden layer, Perceptron
n
neuron refers to: neuron layer, neural net
An element of a neural net's neuron layer.
see also: activation function, threshold
o
object orientation (OO) refers to: general term
A method of software engineering. The main goal of
object orientation is to develop reusable software
components.
output refers to: neuron, output layer
A value or a set of values (pattern), generated by the
neurons of a neural net's output layer. Used to
calculate the current error value of the net.
see also: output layer, input, error, supervised
learning
p
Perceptron refers to: neural net
A feedforward type neural net. Built of one input
layer and one output layer. Mainly used for pattern
association.
see also: Multi-Layer-Perceptron
s
selforganization refers to: Kohonen Feature Map, learning algorithm
A learning algorithm used by the Kohonen Feature
Map neural net. During its learning process, the
neurons on the net's feature map are organizing
themselves depending on given input values. This
will result in a clustered neuron structure, where
neurons with similar properties (values) are arranged
in related areas on the map.
see also: Kohonen Feature Map, learning algorithm
t
TCP refers to: general term, abbreviation
Transmission Control Protocol.
A protocol used to transport data across the Internet.
thermodynamical model refers to: neural net
Another expression for feedback type neural nets.
Called "thermodynamical", because the term energy
is used instead of error.
see also: Hopfield Net
w
weight refers to: neural net, weight matrix
An element of a weight matrix. A connection
between two neurons with a value that is
dynamically changed during a neural net's learning
process.
see also: weight matrix, learning algorithm
XOR
x      y      x XOR y
false  false  false
false  true   true
true   false  true
true   true   false
Neural Networks with Java: class structure
These classes represent two neural net types (Backpropagation Net and Kohonen Feature Map).
There is also an abstract class called "NeuralNet", which is the superclass of both types. This
class contains generic methods, e.g. a method to set the initial learning rate of a neural net.
Class NeuralNet
boolean finishedLearning ()
    Indicates whether the net has finished learning: true if the
    learning process is finished, false otherwise.
String getElapsedTime ()
    Returns the elapsed learning time of a neural net.
int getLearningCycle ()
    Returns the current learning cycle of a neural net.
double getLearningRate ()
    Returns the current learning rate of a neural net.
int getMaxLearningCycles ()
    Returns the maximum number of learning cycles of a neural net.
void resetTime ()
    Resets the net's learning time.
Class BackpropagationNet

instantiated by: application

constructors:
public BackpropagationNet ()

methods:
void addNeuronLayer ( int size )
    Adds a neuron layer with size neurons.
    Note that neuron layers are added to the net sequentially.
void connectLayers ()
    Connects all neuron layers with weight matrices.
    Must be called after all neuron layers have been added.
double getAccuracy ()
    Returns the accuracy value.
double getError ()
    Returns the current error of the net.
double getMinimumError ()
    Returns the minimum error of a neural net.
int getNumberOfLayers ()
    Returns the number of neuron layers.
int getNumberOfPatterns ()
    Returns the number of patterns.
int getNumberOfWeights ()
    Returns the number of weights of all weight matrices.
void learn ()
    Performs one learning step.
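Using only the methods listed above, a typical training loop might look like the following sketch. The method names
readConversionFile() and readPatternFile() are hypothetical stand-ins for the file-reading calls (the actual names can be found in
BPN.java); the layer sizes match the "towns.pat" example shown in the section on using the classes.

public class BPNExample {
    public static void main(String[] args) {
        BackpropagationNet bpn = new BackpropagationNet();
        // hypothetical method name: reads the ASCII-to-binary conversion
        // file, which must happen before neuron layers are added
        bpn.readConversionFile("ascii2bin.cnv");
        bpn.addNeuronLayer(10);  // input layer (10 characters per input pattern)
        bpn.addNeuronLayer(10);  // one hidden layer
        bpn.addNeuronLayer(7);   // output layer (7 characters per target pattern)
        bpn.connectLayers();
        // hypothetical method name: reads the patterns, after connectLayers()
        bpn.readPatternFile("towns.pat");
        while (!bpn.finishedLearning()) {
            bpn.learn();  // one learning step
            System.out.println("cycle " + bpn.getLearningCycle()
                    + ", error " + bpn.getError());
        }
    }
}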
Class KohonenFeatureMap

instantiated by: application

constructors:
public KohonenFeatureMap ()

methods:
void connectLayers ( InputMatrix inputMatrix )
    Connects the feature map and the input layer (which is
    generated depending on the size of the inputMatrix)
    with a weight matrix.
double getActivationArea ()
    Returns the current activation area.
double getInitActivationArea ()
    Returns the initial activation area.
double getInitLearningRate ()
    Returns the initial learning rate.
int getMapSizeX ()
    Returns the number of neurons in the map layer's x-dimension.
int getMapSizeY ()
    Returns the number of neurons in the map layer's y-dimension.
int getNumberOfWeights ()
    Returns the number of weights in the weight matrix.
double getStopArea ()
    Returns the final activation area.
float[][] getWeightValues ()
    Returns the weight values of the net's weight matrix.
void learn ()
    Performs a learning step.
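A corresponding sketch for the KohonenFeatureMap class, again using only the methods listed above plus those inherited from
NeuralNet. The InputMatrix constructor arguments are not documented on this page, so the no-argument call below is an
assumption.

public class KFMExample {
    public static void main(String[] args) {
        KohonenFeatureMap kfm = new KohonenFeatureMap();
        // assumed constructor; the real InputMatrix signature is not shown here
        InputMatrix inputMatrix = new InputMatrix();
        kfm.connectLayers(inputMatrix);  // builds input layer and weight matrix
        while (!kfm.finishedLearning()) {
            kfm.learn();  // one learning step
            System.out.println("activation area: " + kfm.getActivationArea());
        }
    }
}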
Neural Networks with Java: using the classes
This section explains how you can use the neural network classes in your own programs.
While designing the classes, I asked myself what would be the easiest way to build a neural net.
I remembered the neural net lessons, where my professor used to say things like:
"Now we want to build a 3-layered Backpropagation Net with 4 neurons in its input layer
and 3 neurons in its output layer. The hidden layer consists of 2 neurons. Each neuron of
one layer is connected to all neurons of the following layer..."
While he said this, he drew a sketch of the net on the blackboard.
Because this procedure was always the same and only the net structure changed, it seemed to be a
possible approach for an implementation.
So I came to the conclusion that building a neural net in a program should be done the same way
you would describe it in your own words.
The whole class structure consists of 11 classes, as can be seen in class structure/The classes and
their relationships, but you actually need to know more about only three of them to get your
neural net running.
The classes you explicitly use in your programs are BackpropagationNet, KohonenFeatureMap,
and InputMatrix.
Here is the source code of the BPN Application to show you an example: BPN.java
The structure of a conversion file
The conversion file of a Backpropagation Net must be used to convert ASCII characters (the net
input) to an internal binary representation.
The number of binary values that represent the ASCII characters can be changed freely, but has
to be the same for each character.
Below you see the conversion file "ascii2bin.cnv" that is used in the BPN application.
It contains 64 conversions (as can be seen in the first line) and each ASCII character is converted
to 6 binary digits.
64
0000000
1000001
2000010
3000011
4000100
5000101
6000110
7000111
8001000
9001001
a001010
b001011
c001100
d001101
e001110
f001111
g010000
h010001
i010010
j010011
k010100
l010101
m010110
n010111
o011000
p011001
q011010
r011011
s011100
t011101
u011110
v011111
w100000
x100001
y100010
z100011
A100100
B100101
C100110
D100111
E101000
F101001
G101010
H101011
I101100
J101101
K101110
L101111
M110000
N110001
O110010
P110011
Q110100
R110101
S110110
T110111
U111000
V111001
W111010
X111011
Y111100
Z111101
?111110
?111111
Note: A conversion file must be read before you add neuron layers to the net, because the
number of binary digits must already be available when a neuron layer is created.
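Given the format described above (first line: the number of conversions; every following line: one ASCII character immediately
followed by its binary digits), a reader for such a file could look like the following sketch. It illustrates the file format only
and is not the book's actual reader.

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

// Sketch of a reader for the conversion file format: maps each ASCII
// character to its string of binary digits.
public class ConversionFileReader {
    public static Map<Character, String> read(String fileName) throws IOException {
        Map<Character, String> table = new HashMap<>();
        try (BufferedReader in = new BufferedReader(new FileReader(fileName))) {
            int count = Integer.parseInt(in.readLine().trim());
            for (int i = 0; i < count; i++) {
                String line = in.readLine();
                // first character is the ASCII symbol, the rest its bits
                table.put(line.charAt(0), line.substring(1));
            }
        }
        return table;
    }
}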
The structure of a pattern file
The pattern file of a Backpropagation Net contains the input and target patterns for the net.
The patterns should contain ASCII characters that are defined in the net's conversion file.
The number of patterns can be changed freely.
The length of an input pattern must be the same as the number of neurons in the net's input
neuron layer.
The length of a target pattern must be the same as the number of neurons in the net's output
neuron layer.
Below you see the pattern file "towns.pat" that is used in the BPN application.
It contains 15 patterns (first line). Each input pattern consists of 10 ASCII characters (second
line) and each target pattern consists of 7 characters (third line).
15
10
7
Bonn000000 Germany
Brasilia00 Brasil0
Brussels00 Belgium
Helsinki00 Finland
London0000 England
Madrid0000 Spain00
Moscow0000 Russia0
New0Delhi0 India00
Oslo000000 Norway0
Paris00000 France0
Rome000000 Italy00
Stockholm0 Sweden0
Tokyo00000 Japan00
Vienna0000 Austria
Washington USA0000
Note: A pattern file must be read after connectLayers() has been called.
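Analogously to the conversion file, a reader for the pattern file format described above (first line: number of patterns; second
line: input pattern length; third line: target pattern length; then one "input target" pair per line) could look like this
sketch. Again, it illustrates the format only and is not the book's actual reader.

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

// Sketch of a reader for the pattern file format: returns an array of
// {input, target} string pairs.
public class PatternFileReader {
    public static String[][] read(String fileName) throws IOException {
        try (BufferedReader in = new BufferedReader(new FileReader(fileName))) {
            int count        = Integer.parseInt(in.readLine().trim());
            int inputLength  = Integer.parseInt(in.readLine().trim());
            int targetLength = Integer.parseInt(in.readLine().trim());
            String[][] patterns = new String[count][2];
            for (int p = 0; p < count; p++) {
                String line = in.readLine();
                patterns[p][0] = line.substring(0, inputLength);               // input
                patterns[p][1] = line.substring(line.length() - targetLength); // target
            }
            return patterns;
        }
    }
}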
Using the KohonenFeatureMap class
Using the InputMatrix class
Copyright 1996-97 Jochen Fröhlich. All rights reserved.