Unit 1
Introduction to ANN
• Introduction to ANN
• History of neural networks
• Structure and working of the biological neural network
• Neural net architecture
• Topology of neural network architecture: features, characteristics, types
• Activation functions
• Models of a neuron: the McCulloch & Pitts model
• Perceptron, Adaline model
• Basic learning laws
• Applications of neural networks; comparison of BNN and ANN
History of Neural Network
Structure and working of Biological Neural Network
Neuron- the information-processing unit of a neural network.
Synapses/connecting links- each characterised by a weight or strength of its own.
Adder- a linear combiner that sums the input signals, weighted by the respective synapses.
Activation function- limits the amplitude of the output of a neuron.
Squashing function- so called because it squashes the amplitude range of the output signal to a finite value.
Bias- increases or lowers the net input to the activation function.
X- input signals
W- synaptic weights
U- linear combiner output
B- bias
V- combiner output with bias added; the input to the activation function
Y- output
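In these terms, u = Σⱼ wⱼxⱼ, v = u + b, and y = φ(v), where φ is the activation function. A minimal sketch of this neuron model in Python/NumPy follows; the sigmoid squashing function and the numbers are assumed purely for illustration:

```python
import numpy as np

def sigmoid(v):
    """Squashing function: limits the output amplitude to the range (0, 1)."""
    return 1.0 / (1.0 + np.exp(-v))

x = np.array([0.5, -1.2, 3.0])   # X: input signals
w = np.array([0.4, 0.3, -0.1])   # W: synaptic weights
b = 0.2                          # B: bias

u = np.dot(w, x)   # U: linear combiner output
v = u + b          # V: combiner output with bias (input to the activation function)
y = sigmoid(v)     # Y: neuron output
print(y)
```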
Network Architecture
• Single layer feedforward network
• Multilayer feedforward network
• Recurrent Network
Single layer feedforward network- an input layer of source nodes that projects directly onto an output layer of neurons.
Multilayer feedforward network- Input Layer- Hidden Layer(s)- Output Layer; the hidden neurons enable the network to extract higher-order features from its input (see the sketch below).
Fully connected- every node in one layer connects to every node in the next layer.
Partially connected- some of those connections are missing.
The number of hidden nodes and the number of hidden layers are typically settled by trial and error.
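A minimal sketch of a forward pass through a fully connected multilayer feedforward network; the layer sizes and random weights are illustrative assumptions:

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

rng = np.random.default_rng(0)

# Layer sizes: 4 input nodes, 5 hidden neurons, 2 output neurons
W1, b1 = rng.standard_normal((5, 4)), np.zeros(5)   # input -> hidden
W2, b2 = rng.standard_normal((2, 5)), np.zeros(2)   # hidden -> output

x = rng.standard_normal(4)   # an input pattern
h = sigmoid(W1 @ x + b1)     # hidden layer activations
y = sigmoid(W2 @ h + b2)     # output layer activations
print(y)
```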
Recurrent networks- distinguished from feedforward networks by having at least one feedback loop.
A defining property of a neural network is its ability to learn from its environment and to improve its performance through learning.
A prescribed set of well-defined rules for the solution of a learning problem is called a learning algorithm.
Learning algorithms are diverse: they differ in how the adjustment of the synaptic weights of a neuron is formulated.
• Learning Paradigms
• Learning Rules
• Learning Tasks, Memory, Adaptation
Learning Process
Learning with a teacher-
Supervised learning (e.g., error-correction learning)
Learning without a teacher-
Unsupervised (self-organized) learning
Reinforcement learning
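As a concrete example of error-correction learning, the delta rule adjusts a synaptic weight in proportion to the error signal. With desired response d_k(n), actual output y_k(n), input signal x_j(n), and learning-rate parameter η:

e_k(n) = d_k(n) − y_k(n)
Δw_kj(n) = η · e_k(n) · x_j(n)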
Learning Tasks
Pattern association- autoassociation and heteroassociation.
There are two phases involved in the operation of an associative memory:
• the storage phase, which refers to the training of the network;
• the recall phase, which involves the retrieval of a memorized pattern in response to the presentation of a noisy or distorted version of a key pattern to the network.
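A minimal sketch of these two phases for an autoassociative memory; the Hebbian outer-product storage rule on bipolar patterns is an illustrative assumption, not the only possible design:

```python
import numpy as np

# Two orthogonal bipolar key patterns (8 units)
p1 = np.array([ 1,  1,  1,  1, -1, -1, -1, -1])
p2 = np.array([ 1, -1,  1, -1,  1, -1,  1, -1])

# Storage phase: Hebbian (outer-product) rule, with zero self-connections
W = np.outer(p1, p1) + np.outer(p2, p2)
np.fill_diagonal(W, 0)

# Recall phase: present a noisy version of p1 (first bit flipped)
noisy = p1.copy()
noisy[0] = -noisy[0]
recalled = np.sign(W @ noisy)
print(np.array_equal(recalled, p1))  # True: the stored pattern is retrieved
```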
❖ Pattern recognition is formally defined as the process whereby a received pattern/signal is assigned to one of a prescribed number of classes.
❖ A neural network performs pattern recognition by first undergoing a training session, during which the network is repeatedly presented with a set of input patterns along with the category to which each particular pattern belongs.
❖ Later, the network is presented with a new pattern that has not been seen before, but which belongs to the same population of patterns used to train the network.
❖ The network is able to identify the class of that particular pattern because of the information it has extracted from the training data.
In generic terms, pattern-recognition machines using neural networks may take one of two forms:
• The machine is split into two parts: an unsupervised network for feature extraction and a supervised network for classification. In conceptual terms, a pattern is represented by a set of m observables, which may be viewed as a point x in an m-dimensional observation (data) space. Feature extraction is described by a transformation that maps the point x into an intermediate point y in a q-dimensional feature space with q < m. This transformation may be viewed as one of dimensionality reduction (i.e., data compression), the use of which is justified on the grounds that it simplifies the task of classification.
• The machine is designed as a feedforward network using a supervised learning algorithm. In this second approach, the task of feature extraction is performed by the computational units in the hidden layer(s) of the network.
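A minimal sketch of the first form, with a PCA projection standing in for the unsupervised feature extractor (an assumption for illustration; any unsupervised network could fill this role). It maps m-dimensional observations to q-dimensional feature vectors with q < m, which would then be fed to a supervised classifier:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 10))   # 100 patterns in an m = 10 dimensional data space

# Unsupervised feature extraction: project onto the top q principal axes
q = 3
Xc = X - X.mean(axis=0)              # centre the data
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Y = Xc @ Vt[:q].T                    # points y in the q-dimensional feature space

print(X.shape, "->", Y.shape)        # (100, 10) -> (100, 3)
# Y would then be passed to a supervised network for classification.
```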
Classification and Recognition
❖ Classification is another form of neural computation. Let us assume that a set of input patterns is divided into a number of classes, or categories.
❖ In response to an input pattern from the set, the classifier is
supposed to recall the information regarding class membership
of the input pattern. Typically, classes are expressed by
discrete-valued output vectors, and thus output neurons of
classifiers would employ binary activation functions.
❖ Interestingly, classification can be understood as a special case
of heteroassociation. The association is now between the input
pattern and the second member of the heteroassociative pair,
which is supposed to indicate the input's class number.
❖ If the network's desired response is the class number but the
input pattern does not exactly correspond to any of the patterns
in the set, the processing is called recognition. When a
class membership for one of the patterns in the set is recalled,
recognition becomes identical to classification.
❖ Recognition within a set of three patterns is schematically shown in Figure 2.17(b). This form of processing is of particular significance when some amount of noise is superimposed on the input patterns.
Generalisation
Applications
• Forecasting/market prediction: finance and banking
• Handwritten digit recognition
• Manufacturing: quality control, fault diagnosis
• Face recognition
• Time series prediction
• Medicine: analysis of electrocardiogram data, RNA & DNA sequencing, drug development without animal testing
• Process identification
• Process control
• Optical character recognition
• Control: process, robotics
The Top 10 Applications of Artificial Neural Networks in 2023
• Image Recognition and Computer Vision
• Speech Recognition and Natural Language Processing (NLP)
• Financial Forecasting and Trading
• Medical Diagnosis and Treatment Planning
• Autonomous Vehicles
• Recommender Systems
• Natural Language Generation
• Fraud Detection
• Supply Chain Optimization
• Predictive Maintenance
❖ The perceptron is the simplest form of a neural network used for the classification of patterns said to be linearly separable (i.e., patterns that lie on opposite sides of a hyperplane).
❖ Basically, it consists of a single neuron with adjustable synaptic weights and bias.
❖ The algorithm used to adjust the free parameters of this neural network first appeared in a learning procedure developed by Rosenblatt (1958, 1962) for his perceptron brain model.
❖ Indeed, Rosenblatt proved that if the patterns (vectors) used to train the perceptron are drawn from two
linearly separable classes, then the perceptron algorithm converges and positions the decision surface in the
form of a hyperplane between the two classes.
❖ The proof of convergence of the algorithm is known as the perceptron convergence theorem.
❖ The perceptron built around a single neuron is limited to performing pattern classification with only two
classes (hypotheses).
❖ By expanding the output (computation) layer of the perceptron to include more than one neuron, we may
correspondingly perform classification with more than two classes.
❖ However, the classes have to be linearly separable for the perceptron to work properly.
❖ The important point is that, insofar as the basic theory of the perceptron as a pattern classifier is concerned, we need consider only the case of a single neuron.
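A minimal sketch of the perceptron (error-correction) learning rule for two linearly separable classes; the learning rate, epoch count, and data below are illustrative assumptions:

```python
import numpy as np

def train_perceptron(X, d, eta=0.1, epochs=100):
    """Rosenblatt's rule: adjust w and b only when a pattern is misclassified."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for x, target in zip(X, d):
            y = 1 if np.dot(w, x) + b >= 0 else -1   # signum activation
            if y != target:                          # error-correction update
                w += eta * target * x
                b += eta * target
    return w, b

# Two linearly separable classes (illustrative data, labels in {+1, -1})
X = np.array([[2.0, 1.0], [1.5, 2.0], [-1.0, -1.5], [-2.0, -1.0]])
d = np.array([1, 1, -1, -1])
w, b = train_perceptron(X, d)
print(w, b)   # the learned hyperplane w·x + b = 0 separates the two classes
```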