NN UNIT-1 Notes R18 With 43 Pages
Prepared
by
Dr K Madan Mohan
Asst. Professor
Department of CSE (AI&ML)
Sreyas Institute of Engineering and Technology,
Nagole, Bandlaguda, Hyderabad
Introduction
Neural Networks
Artificial Neural Networks (ANNs) began with Warren McCulloch and Walter Pitts (1943), who created a computational model for neural networks based on algorithms called threshold logic.
An ANN is a computational model that mimics the functioning of the human brain to perform various tasks faster than traditional systems. It is an efficient information-processing system which resembles the characteristics of a biological neural network.
https://studyglance.in/nn/display.php?tno=1&topic=Introduction 1/4
8/20/23, 10:23 PM Introduction - Neural Networks and Deep Learning Tutorial | Study Glance
Biological Neurons
Dendrites: responsible for receiving information from other neurons and bringing it to the cell body (soma).
Soma: responsible for processing the information received from the dendrites.
Axon: acts like a cable through which the neuron sends information from the cell body to other neurons.
Synapses: the connections between the axon and the dendrites of other neurons.
The input signals are received by the dendrites and passed on to the cell body (soma) for processing. Incoming signals can be either excitatory, which means they tend to make the neuron fire (generate an electrical impulse), or inhibitory, which means they tend to keep it from firing.
Most neurons receive many input signals throughout their dendritic trees. A single neuron may have more than one set of dendrites
and may receive many thousands of input signals. To decide whether a neuron is excited to fire an impulse depends on the sum of
all of the excitatory and inhibitory signals it receives. The processing of this information takes place in soma which is neuron cell
body. If the neuron does end up firing, the nerve impulse, or action potential, is conducted down the axon.
Towards its end, the axon splits up into many branches known as axon terminals (or nerve terminals), which make connections with target cells.
The junctions that allow signal transmission between the axon terminals and dendrites are called synapses. Transmission occurs by diffusion of chemicals called neurotransmitters across the synaptic cleft.
The artificial neuron model has n inputs, denoted x1, x2, ..., xn. Each line connecting these inputs to the neuron is assigned a weight, denoted w1, w2, ..., wn respectively. Weights in the artificial model correspond to the synaptic connections in biological neurons. The threshold in an artificial neuron is usually represented by Θ, and the activation corresponding to the graded potential is given by the net input:
yin = x1.w1 + x2.w2 + x3.w3 + ... + xn.wn
Including a bias b, this is written compactly as
yin = ∑(i=1 to n) xi.wi + b
The output can be calculated by applying the activation function over the net input.
Y = F(yin)
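This computation can be sketched in a few lines of Python (an illustrative example; the function names and the choice of a binary step activation with threshold Θ are ours, not from the text):

```python
# Minimal artificial neuron: net input = weighted sum of inputs plus bias,
# output = activation function applied over the net input.
def neuron_output(x, w, b, activation):
    y_in = sum(xi * wi for xi, wi in zip(x, w)) + b   # yin = sum(xi * wi) + b
    return activation(y_in)

# Binary step activation with threshold theta (stands in for Θ)
def step(y_in, theta=0.0):
    return 1 if y_in >= theta else 0

y = neuron_output([1, 0, 1], [0.5, -0.2, 0.3], b=0.1, activation=step)
```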
ABOUT
Study Glance provides tutorials, PowerPoint presentations (PPTs), lecture notes, important & previously asked questions, objective-type questions, laboratory programs, and the syllabus of various subjects.
CATEGORIES
Tutorials, PPTs, Lecture Notes, Questions, Lab Programs, Syllabus
Basic models
The models of ANN are specified by the three basic entities namely:
1. The model's synaptic interconnections.
2. The training rules or learning rules adopted for updating and adjusting the connection weights.
3. Activation functions.
Network Architectures
The manner in which the neurons of a neural network are structured is intimately linked with the learning algorithm used to train
the network.
Single-Layer Feedforward Network
In this type of network, we have an input layer and an output layer, but the input layer does not count because no computation is performed in that layer.
The output layer is formed when different weights are applied on the input nodes and the cumulative effect per node is taken. The neurons of this layer then collectively compute the output signals.
Multilayer Feedforward Network
This network has one or more hidden layers; the term "hidden" refers to the fact that this part of the neural network is not seen directly from either the input or the output of the network. The function of hidden neurons is to intervene between the external input and the network output in some useful manner.
The existence of one or more hidden layers enables the network to be computationally stronger.
Recurrent Networks
A recurrent neural network distinguishes itself from a feedforward neural network in that it has at least one feedback loop.
This network is a single-layer network with a feedback connection in which the processing element's output can be directed back to
itself or to other processing elements or both.
A recurrent neural network is a class of artificial neural network where the connection between nodes forms a directed graph along
a sequence.
This allows it to exhibit dynamic temporal behavior for a time sequence. Unlike feedforward neural networks, RNNs can use their internal state (memory) to process sequences of inputs.
In this type of network, processing element output can be directed to the processing element in the same layer and in the preceding
layer forming a multilayer recurrent network.
They perform the same task for every element of the sequence, with the output being dependent on the previous computations.
Inputs are not needed at each time step.
The main feature of a multilayer recurrent network is its hidden state, which captures information about a sequence.
Learning rules
Just as there are different ways in which we ourselves learn from our surrounding environments, neural network learning processes fall into two broad classes: learning with a teacher (supervised learning) and learning without a teacher (unsupervised learning and reinforcement learning).
Supervised Learning
Learning with a teacher is also referred to as supervised learning. In conceptual terms, we may think of the teacher as having knowledge of the environment, knowledge that is unknown to the neural network. For each training vector, the teacher is able to provide the neural network with a desired
response. Indeed, the desired response represents the "optimum" action to be performed by the neural
network. The network parameters are adjusted under the combined influence of the training vector and the error signal. The error
signal is defined as the difference between the desired response and the actual response of the network. This adjustment is carried
out iteratively in a step-by-step fashion with the aim of eventually making the neural network emulate the teacher; the emulation is
presumed to be optimum in some statistical sense. In this way, knowledge of the environment available to the teacher is transferred
to the neural network through training and stored in the form of "fixed" synaptic weights, representing long-term memory. When this
condition is reached, we may then dispense with the teacher and let the neural network deal with the environment completely by
itself.
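The error-correction idea above can be illustrated with a toy Python sketch (a generic error-driven update with made-up input, target, and learning rate; not a specific algorithm from the text):

```python
# One supervised learning step: the error signal is the difference between
# the teacher's desired response and the network's actual response, and the
# weights are adjusted under the combined influence of the input and the error.
def supervised_update(w, x, desired, lr=0.1):
    actual = sum(wi * xi for wi, xi in zip(w, x))   # actual response
    error = desired - actual                        # error signal e = d - y
    new_w = [wi + lr * error * xi for wi, xi in zip(w, x)]
    return new_w, error

w = [0.0, 0.0]
for _ in range(50):                                 # iterative, step-by-step
    w, e = supervised_update(w, x=[1.0, 2.0], desired=1.0)
# after repeated steps the error shrinks and the network emulates the teacher
```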
Unsupervised Learning
Unsupervised, or self-organized, learning is done without the supervision of a teacher. The goal of unsupervised learning is to find the underlying structure of a dataset, group the data according to similarities, and represent the dataset in a compressed format.
To perform unsupervised learning, we may use a competitive-learning rule. For example, we may use a neural network that consists
of two layers, an input layer and a competitive layer. The input layer receives the available data. The competitive layer consists of
neurons that compete with each other (in accordance with a learning rule) for the "opportunity" to respond to features contained in
the input data. In its simplest form, the network operates in accordance with a "winner-takes-all" strategy. In such a strategy, the
neuron with the greatest total input "wins" the competition and turns on; all the other neurons in the network then switch off.
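A winner-takes-all step might be sketched as follows (an illustration; the weight values are invented for the example):

```python
# Competitive layer: each neuron's total input is its weight row dotted with
# the input vector; the neuron with the greatest total input "wins" and
# turns on (output 1), and all the other neurons switch off (output 0).
def winner_takes_all(weights, x):
    totals = [sum(wi * xi for wi, xi in zip(row, x)) for row in weights]
    winner = max(range(len(totals)), key=lambda j: totals[j])
    return [1 if j == winner else 0 for j in range(len(totals))]

W = [[0.9, 0.1],   # neuron 0 tuned to the first feature
     [0.1, 0.9]]   # neuron 1 tuned to the second feature
out = winner_takes_all(W, [1.0, 0.0])
```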
Reinforcement Learning
Reinforcement learning is a feedback-based learning technique in which an agent learns to behave in an environment by performing actions and seeing the results of those actions. For each good action, the agent gets positive feedback, and for each bad action, the agent gets negative feedback or a penalty.
Since there is no labeled data, the agent is bound to learn from its experience only.
The agent interacts with the environment and explores it by itself. The primary goal of an agent in reinforcement learning is to
improve the performance by getting the maximum positive rewards.
The goal of reinforcement learning is to minimize a cost-to-go function, defined as the expectation of the cumulative cost of actions
taken over a sequence of steps instead of simply the immediate cost.
Activation functions
The activation function is applied over the net input to calculate the output of an ANN. An integration function (say f) is associated
with the input of a processing element. This function serves to combine activation, information or evidence from an external source
or other processing elements into a net input to the processing element.
When a signal is fed through a multilayer network with linear activation functions, the output obtained is the same as could be obtained using a single-layer network. For this reason, nonlinear functions are widely used in multilayer networks in preference to linear functions.
There are many activation functions available. In this part, we'll look at a few:
Identity function:
It is a linear function and can be defined as f (x) = x for all x
The output here remains the same as input. The input layer uses the identity activation function.
Binary step function:
f(x) = 1 if x ≥ Θ; 0 if x < Θ
where Θ represents the threshold value. This function is most widely used in single-layer nets to convert the net input to a binary output (1 or 0).
Bipolar step function:
f(x) = 1 if x ≥ Θ; −1 if x < Θ
where Θ represents the threshold value. This function is also used in single-layer nets to convert the net input to a bipolar output (+1 or −1).
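Both step functions translate directly into code (a small sketch; `theta` stands for the threshold Θ):

```python
def binary_step(x, theta=0.0):
    # converts the net input to a binary output (1 or 0)
    return 1 if x >= theta else 0

def bipolar_step(x, theta=0.0):
    # converts the net input to a bipolar output (+1 or -1)
    return 1 if x >= theta else -1
```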
Sigmoidal functions:
The sigmoidal functions are widely used in back-propagation nets because of the relationship between the value of the functions at
a point and the value of the derivative at that point which reduces the computational burden during training.
Sigmoidal functions are of two types:
1. Binary sigmoid function:
f(x) = 1 / (1 + e^(−λx))
where λ is the steepness parameter.
2. Bipolar sigmoid function:
f(x) = 2 / (1 + e^(−λx)) − 1 = (1 − e^(−λx)) / (1 + e^(−λx))
The derivative of the bipolar sigmoid function satisfies
f′(x) = (λ/2) [1 + f(x)] [1 − f(x)]
The bipolar sigmoidal function is closely related to the hyperbolic tangent function, which is written as
f(x) = (e^x − e^(−x)) / (e^x + e^(−x)) = (1 − e^(−2x)) / (1 + e^(−2x))
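These definitions, and the derivative identity above, can be checked numerically (a sketch; `lam` stands for the steepness parameter λ):

```python
import math

def binary_sigmoid(x, lam=1.0):
    return 1.0 / (1.0 + math.exp(-lam * x))

def bipolar_sigmoid(x, lam=1.0):
    return 2.0 / (1.0 + math.exp(-lam * x)) - 1.0

# derivative of the bipolar sigmoid: f'(x) = (lam/2) * (1 + f(x)) * (1 - f(x))
def bipolar_sigmoid_deriv(x, lam=1.0):
    fx = bipolar_sigmoid(x, lam)
    return (lam / 2.0) * (1.0 + fx) * (1.0 - fx)

# with lam = 1 the bipolar sigmoid equals tanh(x / 2)
assert abs(bipolar_sigmoid(0.7) - math.tanh(0.35)) < 1e-12
```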
Terminologies
Weights
In ANN architecture, every neuron is connected to other neurons by means of a directed communication link and every link is
associated with weights. Weight is a parameter which contains information about the input signal. This information is used by the
net to solve a problem.
Wij is the weight from processing element 'i' (source node) to processing element 'j' (destination node).
Bias (b)
The bias is a constant value included in the network. Its impact is seen in calculating the net input. The bias is included by adding a
component x0 =1 to the input vector X.
The bias can also be explained as follows: Consider an equation of straight line, y = mx + c where x is the input, m is the weight, c
is the bias and y is the output. Thus, bias plays a major role in determining the output of the network.
Bias can be positive or negative. The positive bias helps in increasing the net input of the network. The negative bias helps in
decreasing the net input of the network.
Threshold (Θ)
Threshold is a set value used in the activation function. In ANN, based on the threshold value the activation functions are defined
and the output is calculated.
Perceptron Networks
Single layer − Single-layer perceptrons can learn only linearly separable patterns.
Multilayer − Multilayer perceptrons, or feedforward neural networks with two or more layers, have greater processing power.
The perceptron algorithm learns the weights for the input signals in order to draw a linear decision boundary. This enables it to distinguish between the two linearly separable classes +1 and −1.
Step 4: Calculate the output of the network. To do so, first obtain the net input:
yin = ∑(i=1 to n) xi.wi + b
where n is the number of input neurons in the input layer. Then apply the activation function over the net input to obtain the output:
f(yin) = 1 if yin > θ; 0 if −θ ≤ yin ≤ θ; −1 if yin < −θ
Step 5: Weight and bias adjustment: compare the actual (calculated) output and the desired (target) output.
If y ≠ t, then
wi(new) = wi(old) + αtxi
b(new) = b(old) + αt
else
wi(new) = wi(old)
b(new) = b(old)
Step 6: Train the network until there is no weight change. This is the stopping condition for the network. If this condition is not met, then start again from Step 2.
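Steps 4-6 can be combined into a training loop (a minimal sketch assuming bipolar inputs and targets; the variable names are ours):

```python
# Perceptron training: alpha is the learning rate, theta the activation threshold.
def perceptron_train(samples, alpha=1.0, theta=0.0, max_epochs=100):
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(max_epochs):
        changed = False
        for x, t in samples:
            y_in = sum(xi * wi for xi, wi in zip(x, w)) + b     # Step 4: net input
            y = 1 if y_in > theta else (0 if y_in >= -theta else -1)
            if y != t:                                          # Step 5: adjust
                w = [wi + alpha * t * xi for wi, xi in zip(w, x)]
                b = b + alpha * t
                changed = True
        if not changed:                                         # Step 6: stop
            break
    return w, b

# AND function with bipolar inputs and targets
data = [([1, 1], 1), ([1, -1], -1), ([-1, 1], -1), ([-1, -1], -1)]
w, b = perceptron_train(data)
```

Run on the bipolar AND function, this reproduces the weights derived in the worked example that follows: w1 = 1, w2 = 1, b = -1.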
Step 4: Calculate the output response of each output unit j = 1 to m. First, the net input is calculated as
yinj = bj + ∑(i=1 to n) xi.wij
Then activations are applied over the net input to calculate the output response:
f(yinj) = 1 if yinj > θ; 0 if −θ ≤ yinj ≤ θ; −1 if yinj < −θ
Step 6: Test for the stopping condition, i.e., if there is no change in weights then stop the training process, else start again from
Step 2.
Truth table for AND function with bipolar inputs and targets.
x1 x2 Target
1 1 1
1 -1 -1
-1 1 -1
-1 -1 -1
Row 1:
Initializing w1 = w2 = b = 0, with α = 1 and θ = 0, the net input is yin = x1.w1 + x2.w2 + b.
Passing the first row of the AND table (x1 = 1, x2 = 1, t = 1):
yin = 1*0 + 1*0 + 0 = 0, so y = f(0) = 0.
Since y ≠ t, the weights and bias are updated:
w1(new) = 0 + 1*1*1 = 1
w2(new) = 0 + 1*1*1 = 1
b(new) = 0 + 1*1 = 1
Row 2:
With w1 = w2 = b = 1, α = 1 and θ = 0:
Passing the second row of the AND table (x1 = 1, x2 = -1, t = -1):
yin = 1*1 + (-1)*1 + 1 = 1, so y = f(1) = 1.
Since y = 1 ≠ t = -1, the weights and bias are updated:
w1(new) = 1 + 1*(-1)*1 = 0
w2(new) = 1 + 1*(-1)*(-1) = 2
b(new) = 1 + 1*(-1) = 0
Row 3:
With w1 = 0, w2 = 2, b = 0:
Passing the third row of the AND table (x1 = -1, x2 = 1, t = -1):
yin = (-1)*0 + 1*2 + 0 = 2, so y = f(2) = 1.
Since y = 1 ≠ t = -1, the weights and bias are updated:
w1(new) = 0 + 1*(-1)*(-1) = 1
w2(new) = 2 + 1*(-1)*1 = 1
b(new) = 0 + 1*(-1) = -1
Row 4:
With w1 = 1, w2 = 1, b = -1:
Passing the fourth row of the AND table (x1 = -1, x2 = -1, t = -1):
yin = (-1)*1 + (-1)*1 + (-1) = -3, so y = f(-3) = -1.
Since y = t = -1, no weight change is required. The final weights are w1 = 1, w2 = 1, b = -1, and all four rows are now classified correctly.
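As a quick check, the final weights can be verified against every row of the truth table:

```python
# Verify that w1 = 1, w2 = 1, b = -1 reproduces the bipolar AND function.
w1, w2, b = 1, 1, -1
table = [((1, 1), 1), ((1, -1), -1), ((-1, 1), -1), ((-1, -1), -1)]
for (x1, x2), t in table:
    y_in = x1 * w1 + x2 * w2 + b
    y = 1 if y_in > 0 else (0 if y_in == 0 else -1)   # theta = 0
    assert y == t, (x1, x2)   # every row matches its target
```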
Adaptive Linear Neuron (Adaline)
xi = si (i = 1 to n)
yin = ∑(i=1 to n) xi.wi + b
Here 'b' is the bias and 'n' is the total number of input neurons.
Step 5 − Adjust the weight and bias until the least mean square error (t − yin) is obtained:
wi(new) = wi(old) + α(t − yin)xi
b(new) = b(old) + α(t − yin)
Step 7 − Test for the stopping condition: if the error generated is less than or equal to the specified tolerance, then stop.
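The Adaline updates can be sketched as a loop (an illustration assuming the delta-rule update on the net input; the bipolar AND data, the tolerance, and the smaller learning rate chosen so the iteration converges are our own choices):

```python
# Adaline training with the delta (LMS) rule: weights move in proportion to
# the error (t - y_in) computed on the net input, not on a step output.
def adaline_train(samples, alpha=0.1, tolerance=1e-3, max_epochs=1000):
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(max_epochs):
        max_err = 0.0
        for x, t in samples:
            y_in = sum(xi * wi for xi, wi in zip(x, w)) + b
            err = t - y_in
            w = [wi + alpha * err * xi for wi, xi in zip(w, x)]
            b += alpha * err
            max_err = max(max_err, abs(err))
        if max_err <= tolerance:   # stopping condition (Step 7); for targets
            break                  # not exactly realizable, max_epochs bounds it
    return w, b

data = [([1, 1], 1), ([1, -1], -1), ([-1, 1], -1), ([-1, -1], -1)]
w, b = adaline_train(data)
# the learned separating line classifies all four bipolar AND patterns
```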
Multiple Adaptive Linear Neuron (Madaline)
It is just like a multilayer perceptron, where Adaline acts as a hidden unit between the input and the Madaline layer.
The weights and the bias between the input and Adaline layers, as we saw in the Adaline architecture, are adjustable.
The Adaline and Madaline layers have fixed weights and a bias of 1.
Training can be done with the help of the Delta rule.
It consists of 'n' units in the input layer, 'm' units in the Adaline layer, and '1' unit in the Madaline layer. Each neuron in the Adaline and Madaline layers has a bias of excitation '1'. The Adaline layer is present between the input layer and the Madaline layer; the Adaline layer is considered the hidden layer.
Step 0 − Initialize the weights and the bias (for easy calculation they can be set to zero). Also initialize the learning rate α (0 < α ≤ 1); for simplicity, α is set to 1.
Step 2 − Perform Steps 3-5 for each bipolar training pair s:t.
Step 3 − Activate each input unit, xi = si (i = 1 to n)
Step 4 − Obtain the net input at each hidden (Adaline) unit with the following relation −
Qinj = bj + ∑(i=1 to n) xi.wij (j = 1 to m)
Here 'b' is the bias and 'n' is the total number of input neurons.
Step 5 − Apply the following activation function to obtain the final output at the Adaline and the Madaline layer −
f(x) = 1 if x ≥ 0; −1 if x < 0
Qj = f(Qinj)
Final output of the network:
yin = b0 + ∑(j=1 to m) Qj.vj
y = f(yin)
If t ≠ y and t = +1, update the weights on the unit Qj whose net input is closest to 0 (zero):
wij(new) = wij(old) + α(1 - Qinj)xi
bj(new) = bj(old) + α(1 - Qinj)
else if t ≠ y and t = -1, update the weights on all units Qk whose net input is positive:
wik(new) = wik(old) + α(-1 - Qink)xi
bk(new) = bk(old) + α(-1 - Qink)
else if y = t, then no weight update is required.
Step 7 − Test for the stopping condition, which will happen when there is no change in weight or the highest weight change
occurred during training is smaller than the specified tolerance.
Back Propagation
Backpropagation is an iterative, recursive and efficient method for calculating the updated weights needed to improve the network until it is able to perform the task for which it is being trained.
Step 1 − Continue Steps 2-10 while the stopping condition is not true.
Phase 1
Step 3 − Each input unit receives input signal xi and sends it to the hidden unit for all i = 1 to n
Step 4 − Calculate the net input at each hidden unit using the following relation −
Qinj = bj + ∑(i=1 to n) xi.vij (j = 1 to p)
Qj = f (Qinj )
Step 5 − Calculate the net input at each output layer unit using the following relation −
yink = bk + ∑(j=1 to p) Qj.wjk (k = 1 to m)
yk = f (yink )
Phase 2
Step 6 − Compute the error-correcting term, in correspondence with the target pattern received at each output unit, as follows −
δk = (tk − yk) f′(yink)
Step 7 − The delta input of each hidden unit is the sum of the delta terms propagated back from the output units:
δinj = ∑(k=1 to m) δk.wjk
δj = δinj f′(Qinj)
Phase 3
Step 8 − Each output unit yk (k = 1 to m) updates its weight and bias as follows −
wjk(new) = wjk(old) + Δwjk, where Δwjk = α δk Qj
b0k(new) = b0k(old) + Δb0k, where Δb0k = α δk
Step 9 − Each hidden unit Qj (j = 1 to p) updates its weight and bias as follows −
vij(new) = vij(old) + Δvij, where Δvij = α δj xi
bj(new) = bj(old) + Δbj, where Δbj = α δj
Step 10 − Check for the stopping condition, which may be either the number of epochs reached or the target output matches the
actual output.
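The ten steps can be sketched end-to-end (a minimal illustration with one hidden layer and a single output, using the binary sigmoid, for which f′(x) = f(x)(1 − f(x)); the OR data, learning rate, layer sizes, and random initialization are our own choices):

```python
import math
import random

def f(x):
    # binary sigmoid; its derivative is f'(x) = f(x) * (1 - f(x))
    return 1.0 / (1.0 + math.exp(-x))

def train(samples, n, p, alpha=0.5, epochs=5000, seed=0):
    rng = random.Random(seed)
    v = [[rng.uniform(-0.5, 0.5) for _ in range(p)] for _ in range(n)]  # input -> hidden
    bv = [0.0] * p                                                       # hidden biases
    w = [rng.uniform(-0.5, 0.5) for _ in range(p)]                       # hidden -> output
    bw = 0.0                                                             # output bias
    for _ in range(epochs):
        for x, t in samples:
            # Phase 1: feedforward
            q = [f(bv[j] + sum(x[i] * v[i][j] for i in range(n))) for j in range(p)]
            y = f(bw + sum(q[j] * w[j] for j in range(p)))
            # Phase 2: error terms (delta at the output, then at the hidden units)
            dk = (t - y) * y * (1.0 - y)
            dj = [dk * w[j] * q[j] * (1.0 - q[j]) for j in range(p)]
            # Phase 3: update weights and biases
            for j in range(p):
                w[j] += alpha * dk * q[j]
                bv[j] += alpha * dj[j]
                for i in range(n):
                    v[i][j] += alpha * dj[j] * x[i]
            bw += alpha * dk
    def predict(x):
        q = [f(bv[j] + sum(x[i] * v[i][j] for i in range(n))) for j in range(p)]
        return f(bw + sum(q[j] * w[j] for j in range(p)))
    return predict

# Learn the OR function, which a small net fits reliably
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
predict = train(data, n=2, p=2)
```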
Associate Memory Network
The weights of an associative memory network can be determined using the following two rules:
1. Hebb Rule
2. Outer Products Rule
1. Hebb Rule
The Hebb rule is widely used for finding the weights of an associative memory neural network. The training vector pairs here are denoted as s:t. The weights are updated until there is no weight change.
Step 1: For each training input-target output vector pair s:t, perform Steps 2-4.
Step 2: Activate the input layer units to the current training input, xi = si (for i = 1 to n)
wij(new) = wij(old) + xi.yj (i = 1 to n, j = 1 to m)
2. Outer Products Rule
The weight matrix for storing a pattern pair s:t is obtained as the outer product of the two vectors:
W = s^T.t = [s1 ... si ... sn]^T [t1 ... tj ... tm]
i.e., an n × m matrix whose (i, j)th element is si.tj.
This weight matrix is the same as the weight matrix obtained by the Hebb rule to store the pattern association s:t. For storing a set of associations s(p):t(p), p = 1 to P, wherein
s(p) = (s1(p), ..., si(p), ..., sn(p))
t(p) = (t1(p), ..., tj(p), ..., tm(p))
the weight matrix is
W = ∑(p=1 to P) s^T(p).t(p)
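Storing associations with the outer products rule might look like this (a sketch using plain lists; the example pair is invented):

```python
# Outer products rule: W = sum over p of s(p)^T . t(p),
# an n x m matrix with entries w_ij = sum_p s_i(p) * t_j(p).
def store(pairs):
    n, m = len(pairs[0][0]), len(pairs[0][1])
    W = [[0] * m for _ in range(n)]
    for s, t in pairs:
        for i in range(n):
            for j in range(m):
                W[i][j] += s[i] * t[j]
    return W

# Store one bipolar association s:t
W = store([([1, -1, 1], [1, -1])])
# Recall: y_in_j = sum_i x_i * w_ij; taking the sign recovers t
recall = [1 if sum(x * W[i][j] for i, x in enumerate([1, -1, 1])) > 0 else -1
          for j in range(2)]
```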
Auto Associative Memory Network
In the auto associative memory network, the training input vector and the training output vector are the same.
xi = si (i = 1 to n)
yj = sj (j = 1 to n)
wij(new) = wij(old) + xi.yj
The weights can also be determined from the Hebb rule or the outer products rule:
W = ∑(p=1 to P) s^T(p).s(p)
Testing Algorithm
Step 1 − Set the weights obtained during training for Hebb’s rule.
Step 3 − Set the activation of the input units equal to that of the input vector.
yinj = ∑(i=1 to n) xi.wij
yj = f(yinj) = +1 if yinj > 0; −1 if yinj ≤ 0
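Putting the training and testing steps together, a stored pattern can be recalled even from a noisy probe (an illustrative sketch; the stored pattern and the flipped component are invented):

```python
# Auto-associative memory: W accumulates s^T . s for each stored pattern;
# recall applies y_j = f(y_in_j) with f(x) = +1 if x > 0 else -1.
def train_auto(patterns):
    n = len(patterns[0])
    W = [[0] * n for _ in range(n)]
    for s in patterns:
        for i in range(n):
            for j in range(n):
                W[i][j] += s[i] * s[j]
    return W

def recall(W, x):
    n = len(x)
    return [1 if sum(x[i] * W[i][j] for i in range(n)) > 0 else -1
            for j in range(n)]

s = [1, 1, -1, -1]
W = train_auto([s])
noisy = [1, -1, -1, -1]          # one component flipped
restored = recall(W, noisy)      # recovers the stored pattern s
```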
Hetero Associative Memory Network
xi = si (i = 1 to n)
yj = tj (j = 1 to m)
wij(new) = wij(old) + xi.yj
The weights can also be determined from the Hebb rule or the outer products rule:
W = ∑(p=1 to P) s^T(p).t(p)
Testing Algorithm
Step 1 − Set the weights obtained during training for Hebb’s rule.
Step 3 − Set the activation of the input units equal to that of the input vector.
yinj = ∑(i=1 to n) xi.wij
yj = f(yinj) = +1 if yinj > 0; 0 if yinj = 0; −1 if yinj < 0
Bidirectional Associative Memory (BAM)
Figure shows a BAM network consisting of n units in the X layer and m units in the Y layer. The layers can be connected in both directions (bidirectionally), with the result that the weight matrix for signals sent from the X layer to the Y layer is W, and the weight matrix for signals sent from the Y layer to the X layer is W^T. Thus, the weight matrix is calculated in both directions.
Determination of Weights
Let the input vectors be denoted by s(p) and the target vectors by t(p), p = 1, ..., P. Then the weight matrix W that stores this set of input-target pairs is obtained as follows.

When the input-target vector pairs are binary, the weight matrix W = {wij} is given by

wij = ∑_{p=1}^{P} [2si(p) − 1][2tj(p) − 1]

When the input vectors are bipolar, the weight matrix W = {wij} can be defined as

wij = ∑_{p=1}^{P} si(p) tj(p)
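As a quick illustration of the bipolar rule wij = ∑_{p} si(p) tj(p), the weight matrix for a toy pair can be computed as follows (a minimal sketch; the function name and example vectors are my own, not from the tutorial):

```python
def bam_weights(pairs):
    """Bipolar BAM rule: w[i][j] = sum over p of s_i(p) * t_j(p)."""
    n, m = len(pairs[0][0]), len(pairs[0][1])
    W = [[0] * m for _ in range(n)]
    for s, t in pairs:
        for i in range(n):
            for j in range(m):
                W[i][j] += s[i] * t[j]
    return W

# One stored bipolar pair: s = (1, -1), t = (1, 1, -1)
W = bam_weights([([1, -1], [1, 1, -1])])  # -> [[1, 1, -1], [-1, -1, 1]]
```

Each stored pair simply adds its outer product s(p)ᵀ t(p) into W, which is why several pairs can share one matrix.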
The activation function is based on whether the input-target vector pairs used are binary or bipolar.

For the Y layer with binary vectors:

yj = { 1 if yinj > 0; yj if yinj = 0; 0 if yinj < 0 }

For the Y layer with bipolar vectors:

yj = { 1 if yinj > θj; yj if yinj = θj; −1 if yinj < θj }

For the X layer with binary vectors:

xi = { 1 if xini > 0; xi if xini = 0; 0 if xini < 0 }

For the X layer with bipolar vectors:

xi = { 1 if xini > θi; xi if xini = θi; −1 if xini < θi }
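All four piecewise rules share one shape: fire +1 above the threshold, fire −1 (bipolar) or 0 (binary) below it, and keep the previous output exactly at the threshold. A minimal sketch of that rule as a single helper (the function name, the `prev` argument, and the `bipolar` flag are my own, not from the tutorial):

```python
def bam_activation(net, prev, theta=0.0, bipolar=True):
    """Piecewise BAM activation: +1 above theta, -1 (bipolar) or 0 (binary)
    below theta, and the previous output exactly at theta."""
    if net > theta:
        return 1
    if net < theta:
        return -1 if bipolar else 0
    return prev  # net == theta: activation is unchanged
```

Keeping the previous output at the threshold is what lets a unit "hold" its state while the rest of the network settles.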
Step 2: Set the activations of the X layer to the current input pattern, i.e., present the input pattern x to the X layer and, similarly, present the input pattern y to the Y layer. Even though it is a bidirectional memory, at any one time step signals can be sent from only one layer, so either of the input patterns may be the zero vector.
Step 3: Perform Steps 4-6 while the activations are not converged.
Step 4: Update the activations of units in the Y layer. Calculate the net input:

yinj = ∑_{i=1}^{n} xi wij
Applying the activation function, we obtain

yj = f(yinj)
Step 5: Update the activations of units in the X layer. Calculate the net input:

xini = ∑_{j=1}^{m} yj wij

Applying the activation function, we obtain

xi = f(xini)
Step 6: Test for convergence of the net. Convergence occurs when the activation vectors x and y reach equilibrium. If this occurs, stop; otherwise, continue.
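Steps 2-6 above can be sketched as a small recall loop for the bipolar case (the function names, the toy weight matrix, and the iteration cap are my own assumptions, not part of the tutorial):

```python
def sign_keep(net, prev):
    # Bipolar activation: keep the previous output when the net input is zero.
    if net > 0:
        return 1
    if net < 0:
        return -1
    return prev

def bam_recall(W, x, y, max_steps=10):
    """Alternate Y-layer and X-layer updates until x and y stop changing."""
    n, m = len(W), len(W[0])
    for _ in range(max_steps):
        y_new = [sign_keep(sum(x[i] * W[i][j] for i in range(n)), y[j])
                 for j in range(m)]                      # Step 4: Y update via W
        x_new = [sign_keep(sum(y_new[j] * W[i][j] for j in range(m)), x[i])
                 for i in range(n)]                      # Step 5: X update via W^T
        if x_new == x and y_new == y:                    # Step 6: equilibrium
            break
        x, y = x_new, y_new
    return x, y

# W stores the single bipolar pair s = (1, -1), t = (1, 1, -1)
W = [[1, 1, -1], [-1, -1, 1]]
x, y = bam_recall(W, [1, -1], [0, 0, 0])  # start with the Y layer zeroed
```

Presenting only the x pattern (y zeroed) is enough: the first Y-layer update reconstructs the associated target, and the pair then sits at equilibrium.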
Hopfield Networks
The network takes two-valued inputs: binary (0, 1) or bipolar (+1, −1); the use of bipolar inputs makes the analysis easier. The network has symmetric weights with no self-connections, i.e.,

wij = wji; wij = 0 if i = j
Let the input vectors be denoted by s(p), p = 1, ..., P. The weight matrix W that stores this set of input vectors is obtained as follows.

In case the input vectors are binary, the weight matrix W = {wij} is given by

wij = ∑_{p=1}^{P} [2si(p) − 1][2sj(p) − 1], for i ≠ j

When the input vectors are bipolar, the weight matrix W = {wij} can be defined as

wij = ∑_{p=1}^{P} si(p) sj(p), for i ≠ j

In both cases wii = 0.
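The bipolar rule with a zero diagonal can be illustrated as follows (a minimal sketch; the function name and the example pattern are my own, not from the tutorial):

```python
def hopfield_weights(patterns):
    """Bipolar Hopfield rule: w[i][j] = sum over p of s_i(p) * s_j(p),
    with the diagonal forced to zero (no self-connections)."""
    n = len(patterns[0])
    W = [[0] * n for _ in range(n)]
    for s in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += s[i] * s[j]
    return W

# One stored bipolar pattern
W = hopfield_weights([[1, -1, 1]])  # symmetric, zero diagonal
```

Because si(p)sj(p) = sj(p)si(p), the resulting matrix is automatically symmetric, matching the wij = wji condition above.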
Step 1: When the activations of the net are not converged, perform Steps 2-8.
Step 2: Perform Steps 3-7 for each input vector X.
Step 3: Make the initial activations of the net equal to the external input vector X:

yi = xi for i = 1 to n

Step 4: Perform Steps 5-7 for each unit yi. (Here, the units are updated in random order.)
Step 5: Calculate the net input of unit yi:

yini = xi + ∑_{j} yj wji
Step 6: Apply the activation function over the net input to calculate the output:

yi = { 1 if yini > θi; yi if yini = θi; 0 if yini < θi }
Step 7: Now feed back the obtained output yi to all other units. Thus, the activation vectors are updated.
Step 8: Finally, test the network for convergence.
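The asynchronous recall procedure above can be sketched as a small loop (a minimal illustration; the function name, the sweep cap, and the fixed random seed are my own assumptions, not part of the tutorial):

```python
import random

def hopfield_recall(W, x, theta=0.0, max_sweeps=20, seed=0):
    """Asynchronous Hopfield recall: update one unit at a time, in random
    order, until a whole sweep leaves the activations unchanged."""
    rng = random.Random(seed)
    n = len(x)
    y = list(x)                                  # Step 3: y_i = x_i
    for _ in range(max_sweeps):
        changed = False
        order = list(range(n))
        rng.shuffle(order)                       # Step 4: random update order
        for i in order:
            net = x[i] + sum(y[j] * W[j][i] for j in range(n))        # Step 5
            new = 1 if net > theta else (0 if net < theta else y[i])  # Step 6
            if new != y[i]:
                y[i] = new                       # Step 7: fed back through y
                changed = True
        if not changed:                          # Step 8: converged
            break
    return y

# Weights that store the binary pattern (1, 0, 1) via the bipolar rule
W = [[0, -1, 1], [-1, 0, -1], [1, -1, 0]]
recalled = hopfield_recall(W, [1, 0, 0])  # noisy input settles back to (1, 0, 1)
```

Updating units one at a time (rather than all at once) is what guarantees the energy of the net never increases, so the loop must settle into a stable state.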