Soft Computing Question Answer
(https://www.ques10.com/p/39252/all-types-of-numericals-1/)
Question - Implement the AND function using a McCulloch-Pitts neuron (take binary data).
OR
Question - Implement the XOR function using a McCulloch-Pitts neuron (take binary data).
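A minimal Python sketch of the standard McCulloch-Pitts construction (the helper name mp_neuron and the chosen thresholds are illustrative, not fixed by the question): an MP neuron fires, i.e. outputs 1, when the weighted sum of its binary inputs reaches the threshold theta. For AND, weights (1, 1) with theta = 2 suffice. XOR is not linearly separable, so a single MP neuron cannot realize it; one common textbook construction combines x1 AND NOT x2 with x2 AND NOT x1 through an OR neuron.

def mp_neuron(inputs, weights, theta):
    # Fire (return 1) if the weighted sum of binary inputs reaches theta.
    net = sum(x * w for x, w in zip(inputs, weights))
    return 1 if net >= theta else 0

# AND: both excitatory weights are 1 and theta = 2, so the neuron fires
# only when x1 = x2 = 1.
for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, "->", mp_neuron((x1, x2), (1, 1), theta=2))

# XOR via two hidden MP neurons followed by an OR neuron.
def mp_xor(x1, x2):
    z1 = mp_neuron((x1, x2), (1, -1), theta=1)   # x1 AND NOT x2
    z2 = mp_neuron((x1, x2), (-1, 1), theta=1)   # x2 AND NOT x1
    return mp_neuron((z1, z2), (1, 1), theta=1)  # z1 OR z2

Running mp_xor over all four binary input pairs reproduces the XOR truth table: 0, 1, 1, 0.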
Question - Explain the Self-Organizing Map (SOM) neural network.
This self-organizing neural network consists of a single layer, a linear 2D grid of neurons, rather than a series of layers. All the nodes on this lattice are connected directly to the input vector. The SOM network thus consists of two layers: the input layer and the output layer.
The weights are updated as a function of the input data, and the grid itself re-maps its coordinates at each iteration. When an instance of the input vector is presented to the neural network, all nodes respond to the input, but only a single node, the one whose weights best match the input, is activated at each iteration.
Stages of operation:
The function of a self-organizing neural network is divided into three stages:
Construction: The self-organizing network consists of a few basic elements. The input signals stimulate a matrix of neurons; these signals are grouped and transferred to every neuron.
Learning: This mechanism measures the similarity between every neuron and the input signal and selects the neuron with the shortest distance as the winner. At the start of the process the weights are small random numbers; after learning, the weights are modified and reflect the internal structure of the input data.
Identification: In the final stage, the weights of the winning neuron and its neighbors are adapted, and the network topology is defined by determining the neighbors of every neuron.
Properties:
Some of the important properties are:
Best Matching Unit (BMU): The winning node is chosen by determining the distance between the current input values and all the nodes in the network:
$\text{Distance from input} = \sqrt{\sum_{i=0}^{n} (I_i - W_i)^2}$
where I = current input vector,
W = node's weight vector,
n = number of weights.
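As a quick illustration of the BMU rule above, a minimal NumPy sketch (the map size and input values are toy assumptions for the example):

import numpy as np

rng = np.random.default_rng(0)
weights = rng.random((3, 3, 2))   # weight vectors W of a 3x3 map, 2-D inputs
x = np.array([0.4, 0.9])          # current input vector I

# Squared Euclidean distance of every node's weight vector to the input.
dists = np.sum((weights - x) ** 2, axis=-1)

# The BMU is the node whose weight vector lies closest to the input.
bmu = np.unravel_index(np.argmin(dists), dists.shape)
print("BMU grid position:", bmu)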
Algorithm:
Step 1:
Initialize the weights w_ij with small random values.
Step 2:
Choose a random input vector x(t).
Step 3:
Repeat steps 4 and 5 for all nodes on the map.
Step 4:
Calculate the Euclidean distance between the weight vector w_ij and the input vector x(t), and calculate the square of the distance.
Step 5:
Track the node that generates the smallest distance at iteration t.
Step 6:
Determine the overall Best Matching Unit (BMU), i.e. the node with the smallest distance among all those calculated.
Step 7:
Discover the topological neighborhood of the BMU in the Kohonen map.
Step 8:
Repeat for all nodes in the BMU neighborhood:
Update the weight of each node in the neighborhood of the BMU by adding a fraction of the difference between the input vector x(t) and the weight w(t) of the neuron:
$w(t+1) = w(t) + \alpha(t)\, h(t)\, (x(t) - w(t))$
where $\alpha(t)$ is the learning rate and $h(t)$ is the neighborhood function centered on the BMU.
Step 9:
Repeat the complete iteration until the chosen number of iterations is reached.
Here, step 1 represents the initialization phase, while steps 2 to 9 represent the training phase.
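Putting steps 1 to 9 together, here is a compact NumPy sketch of the training loop, assuming an exponentially decaying learning rate and a Gaussian neighborhood function (standard choices; the exact schedules are not fixed by the algorithm above):

import numpy as np

def train_som(data, rows=10, cols=10, n_iter=1000, lr0=0.5, sigma0=3.0):
    rng = np.random.default_rng(0)
    # Step 1: initialize the weights w_ij with small random values.
    w = rng.random((rows, cols, data.shape[1]))
    # Grid coordinates of every node, used for the neighborhood function.
    grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                indexing="ij"), axis=-1)
    for t in range(n_iter):
        # Step 2: choose a random input vector x(t).
        x = data[rng.integers(len(data))]
        # Steps 3-6: squared Euclidean distance to every node; the BMU
        # is the node with the smallest distance.
        d2 = np.sum((w - x) ** 2, axis=-1)
        bmu = np.array(np.unravel_index(np.argmin(d2), d2.shape))
        # Decaying learning rate and neighborhood radius (assumed schedules).
        lr = lr0 * np.exp(-t / n_iter)
        sigma = sigma0 * np.exp(-t / n_iter)
        # Steps 7-8: Gaussian topological neighborhood around the BMU;
        # every node moves a fraction of the way toward x(t).
        g2 = np.sum((grid - bmu) ** 2, axis=-1)
        h = np.exp(-g2 / (2 * sigma ** 2))
        w += lr * h[..., None] * (x - w)
        # Step 9: loop until the iteration limit n_iter is reached.
    return w

som = train_som(np.random.rand(200, 3))   # e.g. 200 random 3-D points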
Advantages:
It is easily interpreted and understood.
Disadvantages:
It does not build a generative model for the data.
The magnification factors are not well understood (at least to my best knowledge).
It is not so intuitive: neurons that are close on the map (topological proximity) may be far apart in feature space.
It does not behave so gently with categorical data, and even worse with mixed data.
Adaptive Resonance Theory (ART)
Adaptive Resonance Theory is a type of neural network that is self-organizing and competitive. It can be of both types: unsupervised (ART1, ART2, ART3, etc.) or supervised (ARTMAP). Generally, the supervised algorithms are named with the suffix “MAP”.
But the basic ART model is unsupervised in nature and consists of :
F1 layer or the comparison field (where the inputs are processed)
F2 layer or the recognition field (which consists of the clustering units)
The Reset Module (that acts as a control mechanism)
The F1 layer accepts the inputs, performs some processing, and transfers them to the unit in the F2 layer that best matches the classification factor.
There exist two sets of weighted interconnections (bottom-up and top-down) for controlling the degree of similarity between the units in the F1 and F2 layers.
The F2 layer is a competitive layer. The cluster unit with the largest net input becomes the candidate to learn the input pattern first, and the rest of the F2 units are ignored.
The reset unit decides whether or not the cluster unit is allowed to learn the input pattern, depending on how similar its top-down weight vector is to the input vector. This comparison is called the vigilance test.
Thus we can say that the vigilance parameter helps to incorporate new memories or new
information. Higher vigilance produces more detailed memories; lower vigilance produces more general memories.
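To make the vigilance test concrete, here is a small Python sketch in the style of ART1 (binary inputs; the function name and the toy patterns are assumptions for illustration). The candidate cluster passes when the match ratio |x AND w| / |x| reaches the vigilance parameter rho; otherwise the reset module fires and the next candidate F2 unit is tried.

import numpy as np

def vigilance_test(x, top_down_w, rho):
    # Fraction of the input pattern matched by the cluster's
    # top-down weight vector, compared against the vigilance rho.
    match = np.sum(np.logical_and(x, top_down_w)) / np.sum(x)
    return match >= rho

x = np.array([1, 0, 1, 1, 0])          # binary input pattern
w = np.array([1, 0, 1, 0, 0])          # candidate cluster's top-down weights
print(vigilance_test(x, w, rho=0.9))   # False: match = 2/3, reset fires
print(vigilance_test(x, w, rho=0.5))   # True: the cluster may learn x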