neural-network
➲ 1 Brief Introduction
➲ 2 Backpropagation Algorithm
➲ 3 A Simple Illustration
Chapter 1 Brief Introduction
➲ 1.1 History
➲ 1.2 Review of Decision Trees
The learning process aims to reduce the error, which can
be understood as the difference between the target values
and the output values produced by the learning structure.
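To make this concrete, one standard choice (an assumption here, since the slide gives no formula) is to measure the error of a weight vector over the training set D as the sum of squared differences:

    E(\vec{w}) = \frac{1}{2} \sum_{d \in D} (t_d - o_d)^2

where t_d is the target value and o_d the output value produced for training example d.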
The ID3 algorithm can handle only discrete
values.
Artificial Neural Networks (ANNs), in contrast, can
approximate arbitrary functions.
➲ 1.3 Basic Structure
This example of ANN learning is provided by
Pomerleau's (1993) system ALVINN, which uses a
learned ANN to steer an autonomous vehicle
driving at normal speeds. The input to the ANN is a
30x32 grid of pixel intensities obtained from a
forward-facing camera mounted on the vehicle. The
output is the direction in which the vehicle is
steered.
As can be seen, four units receive inputs directly from
all of the 30x32 pixels from the camera in the vehicle.
These are called "hidden" units because their
outputs are available only to the units that follow them
in the network, not as part of the global network output.
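A minimal sketch of the forward pass of such a network, using NumPy. The 30x32 input grid, the four hidden units, and the sigmoid activations follow the text; the number of output units (30 discretized steering directions here), the random weights, and the random image are illustrative assumptions, not Pomerleau's actual parameters.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

# 30x32 grid of pixel intensities, flattened to a 960-element input vector
pixels = rng.random((30, 32))
x = pixels.reshape(-1)                      # shape (960,)

# 4 hidden units, each connected to every input pixel (plus a bias term)
W_hidden = rng.normal(scale=0.05, size=(4, 960))
b_hidden = np.zeros(4)

# Output layer: 30 units is an assumption (one per discretized steering
# direction); the slide only says the output is the steering direction.
W_out = rng.normal(scale=0.05, size=(30, 4))
b_out = np.zeros(30)

h = sigmoid(W_hidden @ x + b_hidden)        # hidden-unit activations
o = sigmoid(W_out @ h + b_out)              # output-unit activations
steering = int(np.argmax(o))                # index of the chosen direction
print(steering)
```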
➲ 1.4 Ability
Instances are represented by many attribute-value
pairs. The target function to be learned is defined
over instances that can be described by a vector of
predefined features, such as the pixel values in the
ALVINN example.
The training examples may contain errors. As the
following sections will show, ANN learning
methods are quite robust to noise in the training data.
Long training times are acceptable. Compared to
decision tree learning, network training algorithms
typically require longer training times, depending on
factors such as the number of weights in the network.
Chapter 2 Backpropagation Algorithm
➲ 2.1 The Sigmoid Unit
Like the perceptron, the
sigmoid unit first
computes a linear
combination of its inputs;
it then computes its output
by applying the following
function to that combination.
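In the usual notation, where x_1, ..., x_n are the inputs, w_0, ..., w_n the weights (with x_0 = 1 for the bias term), and o the output, the two steps are

    net = \sum_{i=0}^{n} w_i x_i, \qquad
    o = \sigma(net) = \frac{1}{1 + e^{-net}}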
This equation is often referred to as the
squashing function, since it maps a very large
input domain onto a small range of outputs,
between 0 and 1.
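A small Python sketch of a single sigmoid unit, assuming the notation above; the example weights and inputs are purely illustrative, not taken from the slides.

```python
import math

def sigmoid(net):
    # Squashing function: maps any real net input into the range (0, 1)
    return 1.0 / (1.0 + math.exp(-net))

def sigmoid_unit(weights, inputs):
    # Linear combination of the inputs (weights[0] is the bias, x0 = 1),
    # followed by the sigmoid squashing step
    net = weights[0] + sum(w * x for w, x in zip(weights[1:], inputs))
    return sigmoid(net)

# Illustrative weights and inputs: net = 0.1 + 0.4*1.0 - 0.7*2.0 = -0.9
print(sigmoid_unit([0.1, 0.4, -0.7], [1.0, 2.0]))   # ~0.289
```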