ANN Lab Manual
CHANDIGARH UNIVERSITY, GHARUAN (MOHALI)
Table of Contents
UNIT-I
1. Write a MATLAB program to plot a few activation functions that are being used in neural networks.
2. Generate ANDNOT function using McCulloch-Pitts neural net by a MATLAB program.
3. Generate XOR function using McCulloch-Pitts neuron.
4. Write a MATLAB program for perceptron net for an AND function with bipolar inputs and targets.
UNIT-II
UNIT-III
Program-1
Write a MATLAB program to plot a few activation functions that are being used in
neural networks.
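A minimal sketch is given below, assuming the activation functions to be plotted are the identity, binary step, binary sigmoid and bipolar sigmoid.
Program
x=-5:0.1:5; %input range
iden=x; %identity function
binstep=double(x>=0); %binary step function (threshold at 0)
binsig=1./(1+exp(-x)); %binary sigmoid, range (0,1)
bipsig=2./(1+exp(-x))-1; %bipolar sigmoid, range (-1,1)
subplot(2,2,1),plot(x,iden),title('Identity')
subplot(2,2,2),plot(x,binstep),title('Binary Step')
subplot(2,2,3),plot(x,binsig),title('Binary Sigmoid')
subplot(2,2,4),plot(x,bipsig),title('Bipolar Sigmoid')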
OUTPUT:
Program-2
Generate ANDNOT function using McCulloch-Pitts neural net by a MATLAB
program.
Program
%Getting weights and threshold value
disp('Enter weights');
w1=input('Weight w1=');
w2=input('weight w2=');
disp('Enter Threshold Value');
theta=input('theta=');
y=[0 0 0 0];
x1=[0 0 1 1];
x2=[0 1 0 1];
z=[0 0 1 0];
con=1;
while con
zin=x1*w1+x2*w2;
for i=1:4
if zin(i)>=theta
y(i)=1;
else
y(i)=0;
end
end
disp('Output of Net');
disp(y);
if y==z
con=0;
else
disp('Net is not learning enter another set of weights and Threshold value');
w1=input('weight w1=');
w2=input('weight w2=');
theta=input('theta=');
end
end
disp('McCulloch-Pitts Net for ANDNOT function');
disp('Weights of Neuron');
disp(w1);
disp(w2);
disp('Threshold value');
disp(theta);
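For reference, one weight set that the net accepts is w1 = 1, w2 = -1 with theta = 1: zin = x1*1 + x2*(-1) = [0 -1 1 0] over the four input pairs, and applying the threshold gives y = [0 0 1 0], which matches the ANDNOT target z.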
OUTPUT:
Program-3
Generate XOR function using McCulloch-Pitts neuron.
Program
%Getting weights and threshold value
disp('Enter weights');
w11=input('Weight w11=');
w12=input('Weight w12=');
w21=input('Weight w21=');
w22=input('Weight w22=');
v1=input('Weight v1=');
v2=input('Weight v2=');
disp('Enter Threshold Value');
theta=input('theta=');
x1=[0 0 1 1];
x2=[0 1 0 1];
z=[0 1 1 0];
con=1;
while con
%Hidden neurons: z1 receives w11, w21 and z2 receives w12, w22
%(matching the display section at the end of the program)
z1in=x1*w11+x2*w21;
z2in=x1*w12+x2*w22;
for i=1:4
if z1in(i)>=theta
y1(i)=1;
else
y1(i)=0;
end
if z2in(i)>=theta
y2(i)=1;
else
y2(i)=0;
end
end
yin=y1*v1+y2*v2;
for i=1:4
if yin(i)>=theta
y(i)=1;
else
y(i)=0;
end
end
disp('Output of Net');
disp(y);
if y==z
con=0;
else
disp('Net is not learning enter another set of weights and Threshold value');
w11=input('Weight w11=');
w12=input('weight w12=');
w21=input('Weight w21=');
w22=input('weight w22=');
v1=input('weight v1=');
v2=input('weight v2=');
theta=input('theta=');
end
end
disp('McCulloch-Pitts Net for XOR function');
disp('Weights of Neuron Z1');
disp(w11);
disp(w21);
disp('Weights of Neuron Z2');
disp(w12);
disp(w22);
disp('Weights of Neuron Y');
disp(v1);
disp(v2);
disp('Threshold value');
disp(theta);
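For reference, one weight set that works is w11 = 1, w21 = -1 (so z1 fires for x1 AND NOT x2), w12 = -1, w22 = 1 (so z2 fires for x2 AND NOT x1), with v1 = v2 = 1 and theta = 1: the hidden outputs come out as y1 = [0 0 1 0] and y2 = [0 1 0 0], so yin = [0 1 1 0], which matches the XOR target z.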
OUTPUT:
Program-4
Write a MATLAB program for perceptron net for an AND function with bipolar
inputs and targets.
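The listing below begins inside the training loop; the preamble that precedes it is a reconstruction (an assumption), chosen to match the variable names used in the surviving fragment (x, t, w, b, alpha, theta, con, epoch).
Program
%Perceptron for AND function with bipolar inputs and targets
clear;
clc;
x=[1 1 -1 -1;1 -1 1 -1]; %bipolar input pairs (columns)
t=[1 -1 -1 -1]; %bipolar AND targets
w=[0 0]; %initial weights (assumption: zeros)
b=0; %initial bias
alpha=input('Enter Learning rate=');
theta=input('Enter Threshold value=');
con=1;
epoch=0;
while con
con=0;
for i=1:4
yin=b+x(1,i)*w(1)+x(2,i)*w(2);
if yin>theta
y=1;
elseif yin>=-theta
y=0;
else
y=-1;
end
if y~=t(i) %update weights and bias on misclassification
con=1;
for j=1:2
w(j)=w(j)+alpha*t(i)*x(j,i);
end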
b=b+alpha*t(i);
end
end
epoch=epoch+1;
end
disp('Perceptron for AND function');
disp(' Final Weight matrix');
disp(w);
disp('Final Bias');
disp(b);
OUTPUT:
Program-5
With a suitable example simulate the perceptron learning network and separate the
boundaries. Plot the points assumed in the respective quadrants using different
symbols for identification.
Plot the elements as square in the first quadrant, as star in the second quadrant, as diamond in the third
quadrant, as circle in the fourth quadrant. Based on the learning rule draw the decision boundaries.
Program
p1=[1 1]'; p2=[1 2]'; %- class 1, first quadrant when we plot the elements, square
p3=[2 -1]'; p4=[2 -2]'; %- class 2, 4th quadrant when we plot the elements, circle
p5=[-1 2]'; p6=[-2 1]'; %- class 3, 2nd quadrant when we plot the elements,star
p7=[-1 -1]'; p8=[-2 -2]';% - class 4, 3rd quadrant when we plot the elements,diamond
%Now, lets plot the vectors
hold on
plot(p1(1),p1(2),'ks',p2(1),p2(2),'ks',p3(1),p3(2),'ko',p4(1),p4(2),'ko')
plot(p5(1),p5(2),'k*',p6(1),p6(2),'k*',p7(1),p7(2),'kd',p8(1),p8(2),'kd')
grid
axis([-3 3 -3 3])%set nice axis on the figure
t1=[0 0]'; t2=[0 0]'; %- class 1, first quadrant when we plot the elements, square
t3=[0 1]'; t4=[0 1]'; %- class 2, 4th quadrant when we plot the elements, circle
t5=[1 0]'; t6=[1 0]'; %- class 3, 2nd quadrant when we plot the elements,star
t7=[1 1]'; t8=[1 1]';% - class 4, 3rd quadrant when we plot the elements,diamond
%lets simulate perceptron learning
R=[-2 2;-2 2];
netp=newp(R,2); %netp is a perceptron network with 2 neurons and 2 inputs, hardlimit transfer function, perceptron learning rule
%Define the input matrix and target matrix
P=[p1 p2 p3 p4 p5 p6 p7 p8];
T=[t1 t2 t3 t4 t5 t6 t7 t8];
Y=sim(netp,P) %Well, that is obviously not good, Y is not equal to T
%Now, let's train
netp.trainParam.epochs = 20; % let's train for 20 epochs
netp = train(netp,P,T); %train,
%it seems that the training is finished after 3 epochs and the goal is met. Let's check by simulation
Y1=sim(netp,P)
%this is the same as target vector, so our network is trained
%the weights and biases after training
W=netp.IW{1,1} %weights
B=netp.b{1} %bias
%decision boundaries are lines perpendicular to the weight vectors
%neuron k defines the boundary line W(k,1)*x + W(k,2)*y + B(k) = 0
x=-3:0.1:3;
y=-(W(1,1)*x+B(1))/W(1,2); %boundary of neuron 1
y1=-(W(2,1)*x+B(2))/W(2,2); %boundary of neuron 2
hold on
grid
axis([-3 3 -3 3])%set nice axis on the figure
plot(x,y,'r',x,y1,'b')%here we plot boundaries
hold off
OUTPUT:
Program-6
With a suitable example demonstrate the perceptron learning law with its decision
regions using MATLAB. Give the output in graphical form.
The following example demonstrates the perceptron learning law.
Program
p = 5; % dimensionality of the augmented input space
N = 50; % number of training patterns - size of the training epoch
% PART 1: Generation of the training and validation sets.
X = 2*rand(p-1, 2*N)-1;
nn = round((2*N-1)*rand(N,1))+1;
X(:,nn) = sin(X(:,nn));
X = [X; ones(1,2*N)];
wht = 3*rand(1,p)-1; wht = wht/norm(wht);
wht
D = (wht*X >= 0);
Xv = X(:, N+1:2*N) ;
Dv = D(:, N+1:2*N) ;
X = X(:, 1:N) ;
D = D(:, 1:N) ;
% [X; D]
pr = [1, 3];
Xp = X(pr, :);
wp = wht([pr p]); % projection of the weight vector
c0 = find(D==0); c1 = find(D==1);
% c0 and c1 are vectors of pointers to input patterns X
% belonging to the class 0 or 1, respectively.
figure(1), clf reset
plot(Xp(1,c0),Xp(2,c0),'o', Xp(1, c1), Xp(2, c1),'x')
% The input patterns are plotted on the selected projection
% plane. Patterns belonging to the class 0, or 1 are marked
% with 'o' , or 'x' , respectively
axis(axis), hold on
% The axes and the contents of the current plot are frozen
% Superimposition of the projection of the separation plane on the
% plot. The projection is a straight line. Four points lying on this
% line are found from the line equation wp . x = 0
L = [-1 1] ;
S = -diag([1 1]./wp(1:2))*(wp([2,1])'*L +wp(3)) ;
plot([S(1,:) L], [L S(2,:)]), grid, drawnow
% PART 2: Learning
eta = 0.5; % The training gain.
wh = 2*rand(1,p)-1;
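% PART 2 stops here in the source; what follows is a minimal sketch of the
% missing training loop (an assumption), using the standard perceptron law
% wh = wh + eta*(d - y)*x over the N training patterns.
for epoch = 1:50 % arbitrary cap on training epochs (assumption)
converged = 1;
for n = 1:N
y = (wh*X(:,n) >= 0); % unit-step activation, as used to form D
if y ~= D(n) % pattern misclassified
wh = wh + eta*(D(n)-y)*X(:,n)'; % perceptron learning law
converged = 0;
end
end
if converged, break, end % stop after the first error-free epoch
end
wh = wh/norm(wh) % normalized weights; compare against wht
% The validation set Xv, Dv can then be used to check generalization.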
OUTPUT:
Program-7
Write a MATLAB program to show Back Propagation Network for XOR function
with Binary Input and Output
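As in Program-8, the binary sigmoid and its derivative are assumed to be available as separate M-file helper functions:
function y=binsig(x)
y=1/(1+exp(-x));
function y=binsig1(x)
y=binsig(x)*(1-binsig(x));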
Program
%Back Propagation Network for XOR function with Binary Input and Output
%Initialize weights and bias
v=[0.197 0.3191 -0.1448 0.3394;0.3099 0.1904 -0.0347 -0.4861];
v1=zeros(2,4);
b1=[-0.3378 0.2771 0.2859 -0.3329];
b2=-0.1401;
w=[0.4919;-0.2913;-0.3979;0.3581];
w1=zeros(4,1);
x=[1 1 0 0;1 0 1 0];
t=[0 1 1 0];
alpha=0.02;
mf=0.9;
con=1;
epoch=0;
while con
e=0;
for I=1:4
%Feed forward
for j=1:4
zin(j)=b1(j);
for i=1:2
zin(j)=zin(j)+x(i,I)*v(i,j);
end
z(j)=binsig(zin(j));
end
yin=b2+z*w;
y(I)=binsig(yin);
%Backpropagation of Error
delk=(t(I)-y(I))*binsig1(yin);
delw=alpha*delk*z'+mf*(w-w1);
delb2=alpha*delk;
delinj=delk*w;
for j=1:4
delj(j,1)=delinj(j,1)*binsig1(zin(j));
end
for j=1:4
for i=1:2
delv(i,j)=alpha*delj(j,1)*x(i,I)+mf*(v(i,j)-v1(i,j));
end
end
delb1=alpha*delj;
w1=w;
v1=v;
%Weight updation
w=w+delw;
b2=b2+delb2;
v=v+delv;
b1=b1+delb1';
e=e+(t(I)-y(I))^2;
end
if e<0.005
con=0;
end
epoch=epoch+1;
end
disp('BPN for XOR function with Binary Input and Output');
disp('Total Epoch Performed');
disp(epoch);
disp('Error');
disp(e);
disp('Final Weight matrix and bias');
v
b1
w
b2
OUTPUT:
Program-8
Write a MATLAB program to show Back Propagation Network for XOR function
with Bipolar Input and Output.
function y=bipsig(x)
y=2/(1+exp(-x))-1;
function y=bipsig1(x)
y=1/2*(1-bipsig(x))*(1+bipsig(x));
Program
%Back Propagation Network for XOR function with Bipolar Input and Output
%Initialize weights and bias
v=[0.197 0.3191 -0.1448 0.3394;0.3099 0.1904 -0.0347 -0.4861];
v1=zeros(2,4);
b1=[-0.3378 0.2771 0.2859 -0.3329];
b2=-0.1401;
w=[0.4919;-0.2913;-0.3979;0.3581];
w1=zeros(4,1);
x=[1 1 -1 -1;1 -1 1 -1];
t=[-1 1 1 -1];
alpha=0.02;
mf=0.9;
con=1;
epoch=0;
while con
e=0;
for I=1:4
%Feed forward
for j=1:4
zin(j)=b1(j);
for i=1:2
zin(j)=zin(j)+x(i,I)*v(i,j);
end
z(j)=bipsig(zin(j));
end
yin=b2+z*w;
y(I)=bipsig(yin);
%Backpropagation of Error
delk=(t(I)-y(I))*bipsig1(yin);
delw=alpha*delk*z'+mf*(w-w1);
delb2=alpha*delk;
delinj=delk*w;
for j=1:4
delj(j,1)=delinj(j,1)*bipsig1(zin(j));
end
for j=1:4
for i=1:2
delv(i,j)=alpha*delj(j,1)*x(i,I)+mf*(v(i,j)-v1(i,j));
end
end
delb1=alpha*delj;
w1=w;
v1=v;
%Weight updation
w=w+delw;
b2=b2+delb2;
v=v+delv;
b1=b1+delb1';
e=e+(t(I)-y(I))^2;
end
if e<0.005
con=0;
end
epoch=epoch+1;
end
disp('BPN for XOR function with Bipolar Input and Output');
disp('Total Epoch Performed');
disp(epoch);
disp('Error');
disp(e);
disp('Final Weight matrix and bias');
v
b1
w
b2
OUTPUT:
Program-9
Write a MATLAB program to recognize the numbers 0, 1, 2, ..., 9. A 5 * 3 matrix
forms each number. Any valid point is taken as 1 and any invalid point as 0. The
net has to be trained to recognize all the numbers, and when the test data is given,
the network has to recognize the particular numbers.
The numbers are formed from the 5 * 3 matrix, from which the input is determined. Both the training data and the test data are given. When the test data is presented, a recognized pattern is reported as +1 and an unrecognized pattern as -1.
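For example, the first column of the input matrix below, read row-wise three pixels at a time, appears to encode the digit 0 on the 5 * 3 grid:
1 1 1
1 0 1
1 0 1
1 0 1
1 1 1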
Program
input=[1 0 1 1 1 1 1 1 1 1;
1 1 1 1 0 1 1 1 1 1;
1 0 1 1 1 1 1 1 1 1;
1 1 0 0 1 1 1 0 1 1;
0 1 0 0 0 0 0 0 0 0;
1 0 1 1 1 0 0 1 1 1;
1 0 1 1 1 1 1 0 1 1;
0 1 1 1 1 1 1 0 1 1;
1 0 1 1 1 1 1 1 1 1;
1 0 1 0 0 0 1 0 1 0;
0 1 0 0 0 0 0 0 0 0;
1 0 0 1 1 1 1 1 1 1;
1 1 1 1 0 1 1 0 1 1;
1 1 1 1 0 1 1 0 1 1;
1 1 1 1 1 1 1 1 1 1;]
for i=1:10
for j=1:10
if i==j
output(i,j)=1;
else
output(i,j)=0;
end
end
end
for i=1:15
for j=1:2
if j==1
aw(i,j)=0;
else
aw(i,j)=1;
end
end
end
test=[1 0 1 1 1;
1 1 1 1 0;
1 1 1 1 1;
1 1 0 0 1;
0 1 0 0 1;
1 1 1 1 1;
1 0 1 1 1;
0 1 1 1 1;
1 0 1 1 1;
1 1 1 0 0;
0 1 0 1 0;
1 0 0 1 1;
1 1 1 1 1;
1 1 1 1 0;
1 1 1 1 1;]
net=newp(aw,10,'hardlim');
net.trainParam.epochs=1000;
net.trainParam.goal=0;
net=train(net,input,output);
y=sim(net,test);
x=y';
for i=1:5
k=0;
l=0;
for j=1:10
if x(i,j)==1
k=k+1;
l=j;
end
end
if k==1
s=sprintf('Test Pattern %d is Recognised as %d',i,l-1);
disp(s);
else
s=sprintf('Test Pattern %d is Not Recognised',i);
disp(s);
end
end
OUTPUT:
Program-10
OUTPUT: