Learning Process:
Properties of a neural network:
Ability to learn from its environment.
Ability to improve its performance through learning.
How does it learn from its environment?
By adjusting its synaptic weights
By adjusting its bias levels
Learning of a neural network:
It is the process of adapting the free parameters of a neural network. The number of internal
parameters in a neural network is the total number of weights plus the total number of biases. The
total number of weights is the sum, over each pair of adjacent layers, of the product of their sizes.
The total number of biases equals the number of hidden neurons plus the number of output
neurons (the input layer has no biases).
For example, consider a neural network with 13 input neurons, two hidden layers of 5 and 4
neurons, and an output layer of 3 neurons. The total number of weights is (13 * 5) + (5 * 4) + (4 *
3) = 97. The total number of biases is 5 + 4 + 3 = 12. This makes the total number of parameters
97 + 12 = 109.
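As a quick check of the arithmetic above, the same count can be sketched in Python (the layer sizes are those from the example):

```python
# Parameter count for the 13 -> 5 -> 4 -> 3 network in the example above.
layers = [13, 5, 4, 3]  # input layer, two hidden layers, output layer

# Weights: one matrix for each pair of adjacent layers.
n_weights = sum(a * b for a, b in zip(layers, layers[1:]))

# Biases: one per hidden and output neuron (none for the input layer).
n_biases = sum(layers[1:])

print(n_weights, n_biases, n_weights + n_biases)  # 97 12 109
```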
Events that occur during the learning process:
The NN is stimulated by its environment.
The free parameters change as a result of this stimulation.
The NN responds in a new way because of the changes in its free parameters.
Different types of learning:
1. Error correction learning: Error-Correction Learning, used with supervised learning, is
the technique of comparing the system output to the desired output value, and using that
error to direct the training. Error correction learning algorithms attempt to minimize this
error signal at each training iteration. The most popular learning algorithm for use with
error-correction learning is the back-propagation algorithm.
2. Memory based learning: Memory-based learning, also known as instance-based learning,
compares new problem instances with instances seen in training, which have been stored
in memory. The basic idea is that concepts can be classified by their similarity to
previously seen concepts, and learning consists of storing the training data items. Such a
system classifies a new data item by computing its similarity to the stored training items.
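The idea above can be sketched as a minimal nearest-neighbour classifier (the data points and labels are made up purely for illustration):

```python
import math

def nearest_neighbor(query, training_data):
    """Classify `query` by the label of its closest stored training item.
    Memory-based learning: "learning" is simply storing the data."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    item, label = min(training_data, key=lambda pair: dist(query, pair[0]))
    return label

# The stored "memory" of labelled instances (toy data).
memory = [((0.0, 0.0), "A"), ((1.0, 1.0), "B"), ((0.9, 0.8), "B")]
print(nearest_neighbor((0.1, 0.2), memory))  # A
```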
3. Competitive learning: Competitive learning is a type of unsupervised learning, while
back-propagation is a supervised learning algorithm. In competitive learning, a network of
artificial neurons competes to "fire" or become active in response to a specific input.
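A minimal winner-take-all sketch of this competition, with two neurons whose weight vectors compete for each input (learning rate and toy inputs are assumptions for illustration):

```python
def competitive_step(weights, x, lr=0.1):
    """One step of simple competitive (winner-take-all) learning: the neuron
    whose weight vector is closest to x wins, and only the winner's
    weights move toward x."""
    dists = [sum((wi - xi) ** 2 for wi, xi in zip(w, x)) for w in weights]
    winner = dists.index(min(dists))
    weights[winner] = [wi + lr * (xi - wi) for wi, xi in zip(weights[winner], x)]
    return winner

# Two competing neurons; each drifts toward the cluster of inputs it wins.
w = [[0.2, 0.2], [0.8, 0.8]]
for x in [[0.1, 0.0], [0.9, 1.0], [0.0, 0.1]]:
    competitive_step(w, x)
```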
4. Boltzmann learning: Boltzmann learning is statistical in nature, and is derived from the
field of thermodynamics. It is similar to error-correction learning and is used during
supervised training. In this algorithm, the state of each individual neuron, in addition to the
system output, is taken into account.
5. Hebbian Learning: Hebbian learning is a neurobiological theory that suggests an increase
in the synaptic strength arises from the repeated and persistent stimulation of one neuron
by another.
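The plain Hebb rule, delta w_i = lr * y * x_i, can be sketched as follows; the learning rate and the repeated, persistent co-activation are assumed for illustration:

```python
def hebbian_update(w, x, y, lr=0.1):
    """Plain Hebb rule: delta w_i = lr * y * x_i -- a weight grows when the
    pre-synaptic activity x_i and post-synaptic activity y coincide."""
    return [wi + lr * y * xi for wi, xi in zip(w, x)]

w = [0.0, 0.0]
for _ in range(5):  # repeated, persistent stimulation
    w = hebbian_update(w, x=[1.0, 0.0], y=1.0)
print(w)  # only the co-active synapse strengthens
```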
Error correction learning:
Objective:
Minimize the cost function (index of performance) Ɛ(n) = ½ e_k²(n), where Ɛ(n) is the
instantaneous value of the error energy and e_k(n) is the error signal of neuron k at time step n.
Termination condition:
The step-by-step adjustments to the synaptic weights of neuron k continue until the system
reaches a steady state (the synaptic weights are essentially stabilized).
Weight update:
Δw_kj(n) = η e_k(n) x_j(n) (the delta rule), where η is the learning rate, e_k(n) is the error
signal of neuron k, and x_j(n) is the input applied to synapse j at time step n.
Error gradient:
The error gradient tells you how to adjust the model’s parameters to reduce the
error.
It's used to guide optimization algorithms (like gradient descent) to find the best
model parameters.
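The pieces above (error signal, cost ½e², and a gradient-guided update) can be sketched for a single linear neuron; the learning rate, input, and target are assumptions for illustration:

```python
def delta_rule_step(w, x, d, lr=0.1):
    """One error-correction step for a single linear neuron:
    y = w . x, error e = d - y, and the update dw_j = lr * e * x_j,
    which follows the negative gradient of E = 0.5 * e**2."""
    y = sum(wj * xj for wj, xj in zip(w, x))
    e = d - y
    return [wj + lr * e * xj for wj, xj in zip(w, x)], e

# Train toward target d = 1.0; the error shrinks at every iteration.
w, x, d = [0.0, 0.0], [1.0, 0.5], 1.0
for _ in range(50):
    w, e = delta_rule_step(w, x, d)
```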