AI Lab Record
UNIVERSITY COLLEGE OF ENGINEERING
(A Constituent College of Anna University, Chennai)
RAMANATHAPURAM - 623513
Department of Computer Science and Engineering
Name : …………………………………………………………….
Subject Code & Name : ………….……………………………………………
Register No : …………...………………………………………………………
BONAFIDE CERTIFICATE
Register No.
1. Implementation of Uninformed Search Algorithms (BFS and DFS)
Aim:
To implement uninformed search algorithms such as BFS and DFS.
Algorithm:
Step 1: Initialize an empty list called 'visited' to keep track of the nodes visited during the traversal.
Step 2: Initialize an empty queue called 'queue' to keep track of the nodes to be traversed in the future.
Step 3: Add the starting node to the 'visited' list and the 'queue'.
Step 4: While the 'queue' is not empty, do the following:
a. Dequeue the first node from the 'queue' and store it in a variable called 'current'.
b. Print 'current'.
c. For each of the neighbours of 'current' that have not been visited yet, mark the neighbour as visited and add it to the 'queue'.
Step 5: When all the nodes reachable from the starting node have been visited, terminate the algorithm.
Program:
graph = {
'5' : ['3','7'],
'3' : ['2', '4'],
'7' : ['8'],
'2' : [],
'4' : ['8'],
'8' : []
}
visited = []   # nodes already visited
queue = []     # FIFO queue of nodes to explore

def bfs(visited, graph, node):
    visited.append(node)
    queue.append(node)
    while queue:
        m = queue.pop(0)            # dequeue the first node
        print(m, end=" ")
        for neighbour in graph[m]:
            if neighbour not in visited:
                visited.append(neighbour)
                queue.append(neighbour)
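The listing defines bfs() but never invokes it; a one-line driver, assuming the traversal starts from node '5' (the root of the example graph):

print("Breadth-First Search:")
bfs(visited, graph, '5')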
Algorithm:
Step 1: Initialize an empty set called 'visited' to keep track of the nodes visited during the traversal.
Step 2: Define a DFS function that takes the current node, the graph, and the 'visited' set as input.
Step 3: If the current node is not in the 'visited' set, do the following:
a. Print the current node.
b. Add the current node to the 'visited' set.
c. For each of the neighbours of the current node, call the DFS function recursively with the neighbour as the current node.
Step 4: When all the nodes reachable from the starting node have been visited, terminate the algorithm.
Program:
graph = {
'5' : ['3','7'],
'3' : ['2', '4'],
'7' : ['8'],
'2' : [],
'4' : ['8'],
'8' : []
}
visited = set()   # nodes already visited

def dfs(visited, graph, node):
    if node not in visited:
        print(node)
        visited.add(node)
        for neighbour in graph[node]:
            dfs(visited, graph, neighbour)
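Again the function is never called in the listing; a driver, assuming the same start node '5':

print("Depth-First Search:")
dfs(visited, graph, '5')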
Output:
Breadth-First Search:
5 3 7 2 4 8
Depth-First Search:
5
3
2
4
8
7
Result:
Thus the uninformed search algorithms BFS and DFS were executed successfully and the output was verified.
2. Implementation of Informed Search Algorithm (A*)
Aim:
To implement the informed search algorithm A* and find the optimal path from a start node to a destination node in a weighted graph.
Algorithm:
1. Initialize the distances dictionary with float('inf') for all vertices in the graph, except for the start vertex which is set to 0.
2. Initialize the parent dictionary with None for all vertices in the graph.
3. Initialize an empty set for visited vertices.
4. Initialize a priority queue (pq) with a tuple containing the sum of the heuristic value and the distance from start to the current vertex, the distance from start to the current vertex, and the current vertex.
5. While pq is not empty, do the following:
a. Dequeue the vertex with the smallest f-distance (sum of the heuristic value and the distance from start to the current vertex).
b. If the current vertex is the destination vertex, return distances and parent.
c. If the current vertex has not been visited, add it to the visited set.
d. For each neighbor of the current vertex, do the following:
i. Calculate the distance from start to the neighbor (g) as the sum of the distance from start to the current vertex and the edge weight between the current vertex and the neighbor.
ii. Calculate the f-distance (f = g + h) for the neighbor.
iii. If the f-distance for the neighbor is less than its current distance in the distances dictionary, update the distances dictionary with the new distance and the parent dictionary with the current vertex as the parent of the neighbor.
iv. Enqueue the neighbor with its f-distance, distance from start to neighbor, and the neighbor itself into the priority queue.
6. Return distances and parent.
Program:
import heapq

def a_star(graph, start, dest, heuristic):
    distances = {vertex: float('inf') for vertex in graph}   # step 1
    distances[start] = 0
    parent = {vertex: None for vertex in graph}              # step 2
    visited = set()                                          # step 3
    pq = [(0 + heuristic[start], 0, start)]                  # step 4: (f, g, vertex)
    while pq:
        curr_f, curr_dist, curr_vert = heapq.heappop(pq)
        if curr_vert not in visited:
            visited.add(curr_vert)
            for nbor, weight in graph[curr_vert].items():
                distance = curr_dist + weight                # g
                f_distance = distance + heuristic[nbor]      # f = g + h
                if f_distance < distances[nbor]:
                    distances[nbor] = f_distance
                    parent[nbor] = curr_vert
                    if nbor == dest:
                        # we found a path based on the heuristic
                        return distances, parent
                    heapq.heappush(pq, (f_distance, distance, nbor))
    return distances, parent

def generate_path_from_parents(parent, start, dest):
    path = []
    curr = dest
    while curr:
        path.append(curr)
        curr = parent[curr]
    return '->'.join(path[::-1])
graph = {
    'A': {'B':5, 'C':5},
    'B': {'A':5, 'C':4, 'D':3},
    'C': {'A':5, 'B':4, 'D':7, 'E':7, 'H':8},
    'D': {'B':3, 'C':7, 'H':11, 'K':16, 'L':13, 'M':14},
    'E': {'C':7, 'F':4, 'H':5},
    'F': {'E':4, 'G':9},
    'G': {'F':9, 'N':12},
    'H': {'E':5, 'C':8, 'D':11, 'I':3},
    'I': {'H':3, 'J':4},
    'J': {'I':4, 'N':3},
    'K': {'D':16, 'L':5, 'P':4, 'N':7},
    'L': {'D':13, 'M':9, 'O':4, 'K':5},
    'M': {'D':14, 'L':9, 'O':5},
    'N': {'G':12, 'J':3, 'P':7},
    'O': {'M':5, 'L':4},
    'P': {'K':4, 'J':8, 'N':7},
}
heuristic = {
'A': 16,
'B': 17,
'C': 13,
'D': 16,
'E': 16,
'F': 20,
'G': 17,
'H': 11,
'I': 10,
'J': 8,
'K': 4,
'L': 7,
'M': 10,
'N': 7,
'O': 5,
'P': 0
}
start = 'A'
dest = 'P'
distances,parent = a_star(graph, start, dest, heuristic)
print('distances => ', distances)
print('parent => ', parent)
print('optimal path => ', generate_path_from_parents(parent,start,dest))
Output:
distances => {'A': 0, 'B': 22, 'C': 18, 'D': 24, 'E': 28, 'F': 36, 'G': inf, 'H': 24, 'I': 26, 'J': 28, 'K': 28,
'L': 28, 'M': 32, 'N': 30, 'O': 30, 'P': 28}
parent => {'A': None, 'B': 'A', 'C': 'A', 'D': 'B', 'E': 'C', 'F': 'E', 'G': None, 'H': 'C', 'I': 'H', 'J': 'I',
'K':'D', 'L': 'D', 'M': 'D', 'N': 'J', 'O': 'L', 'P': 'K'}
optimal path => A->B->D->K->P
Result:
Thus the program to implement the informed search algorithm (A*) was executed successfully and the output was verified.
3. Implementation of Naïve Bayes Classifier
Aim:
To implement the Naïve Bayes classifier algorithm for a given dataset and verify its predictions.
Algorithm:
Program:
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns

dataset = pd.read_csv('NaiveBayes.csv')   # split
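The split itself did not survive in this copy; a minimal sketch, assuming the features are every column except the last and the label is the last column:

from sklearn.model_selection import train_test_split
X = dataset.iloc[:, :-1].values   # feature columns (assumption)
y = dataset.iloc[:, -1].values    # label column (assumption)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)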
from sklearn.naive_bayes import GaussianNB
classifer1 = GaussianNB()   # training the model
classifer1.fit(X_train, y_train)
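The fragment ends at fitting; a short evaluation sketch using scikit-learn's metrics:

from sklearn.metrics import accuracy_score, confusion_matrix
y_pred = classifer1.predict(X_test)
print('Accuracy:', accuracy_score(y_test, y_pred))
print('Confusion matrix:\n', confusion_matrix(y_test, y_pred))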
Result:
Thus the program with the Naïve Bayes classifier algorithm was executed successfully and the output was verified.
4. Implementation of Bayesian Networks
Aim:
To construct a Bayesian network to demonstrate the diagnosis of heart patients using the standard Heart Disease Data Set.
Algorithm:
Program:
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD

alarm_model = BayesianNetwork(
    [
("Burglary", "Alarm"),
("Earthquake", "Alarm"),
("Alarm", "JohnCalls"),
("Alarm", "MaryCalls"),
]
)
cpd_burglary = TabularCPD(
variable="Burglary", variable_card=2, values=[[0.999], [0.001]]
)
cpd_earthquake = TabularCPD(
variable="Earthquake", variable_card=2, values=[[0.998], [0.002]]
)
cpd_alarm = TabularCPD(
    variable="Alarm",
    variable_card=2,
    values=[[0.999, 0.71, 0.06, 0.05], [0.001, 0.29, 0.94, 0.95]],
    evidence=["Burglary", "Earthquake"],
    evidence_card=[2, 2],
)
Output:
True
NodeView(('Burglary', 'Alarm', 'Earthquake', 'JohnCalls', 'MaryCalls'))
OutEdgeView([('Burglary', 'Alarm'), ('Alarm', 'JohnCalls'), ('Alarm', 'MaryCalls'),
('Earthquake', 'Alarm')])
(Burglary ⟂ Earthquake)
(MaryCalls ⟂ Earthquake, Burglary, JohnCalls | Alarm)
(MaryCalls ⟂ Burglary, JohnCalls | Earthquake, Alarm)
(MaryCalls ⟂ Earthquake, JohnCalls | Burglary, Alarm)
(MaryCalls ⟂ Earthquake, Burglary | JohnCalls, Alarm)
(MaryCalls ⟂ JohnCalls | Earthquake, Burglary, Alarm)
(MaryCalls ⟂ Burglary | Earthquake, JohnCalls, Alarm)
(MaryCalls ⟂ Earthquake | Burglary, JohnCalls, Alarm)
(JohnCalls ⟂ Earthquake, Burglary, MaryCalls | Alarm)
Result:
Thus the program to implement a Bayesian network was executed successfully and the output was verified.
5. Build Regression Models
Aim:
To build regression models such as locally weighted linear regression and plot
the necessary graphs.
Algorithm:
1. Read the given data sample to X and the curve (linear or non-linear) to Y.
2. Set the value of the smoothening (free) parameter τ.
3. Set the bias / point of interest x0, which is a subset of X.
4. Determine the weight matrix using: w(x, x0) = exp(-(x - x0)² / (2τ²)).
5. Determine the value of the model-term parameter β using: β = (XᵀWX)⁻¹XᵀWy.
6. Prediction = x0 · β, as sketched below.
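A minimal sketch of steps 4-6 in code, assuming a Gaussian kernel with bandwidth tau and a one-dimensional X (the function name and parameters are illustrative):

import numpy as np

def local_weighted_prediction(X, y, x0, tau):
    # Step 4: diagonal weight matrix from the Gaussian kernel
    W = np.diag(np.exp(-(X - x0) ** 2 / (2 * tau ** 2)))
    Xmat = np.c_[np.ones(len(X)), X]      # design matrix with bias column
    # Step 5: beta = (X^T W X)^(-1) X^T W y
    beta = np.linalg.pinv(Xmat.T @ W @ Xmat) @ Xmat.T @ W @ y
    # Step 6: prediction at the point of interest x0
    return np.array([1, x0]) @ beta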
Program:
from math import ceil
import numpy as np
from scipy import linalg

def lowess(x, y, f, iterations):
    n = len(x)
    r = int(ceil(f * n))
    h = [np.sort(np.abs(x - x[i]))[r] for i in range(n)]
    w = np.clip(np.abs((x[:, None] - x[None, :]) / h), 0.0, 1.0)
    w = (1 - w ** 3) ** 3            # tricube kernel weights
    yest = np.zeros(n)
    delta = np.ones(n)
    for iteration in range(iterations):
        for i in range(n):
            weights = delta * w[:, i]
            b = np.array([np.sum(weights * y), np.sum(weights * y * x)])
            A = np.array([[np.sum(weights), np.sum(weights * x)],
                          [np.sum(weights * x), np.sum(weights * x * x)]])
            beta = linalg.solve(A, b)
            yest[i] = beta[0] + beta[1] * x[i]
        # robustifying weights from the residuals
        residuals = y - yest
        s = np.median(np.abs(residuals))
        delta = np.clip(residuals / (6.0 * s), -1, 1)
        delta = (1 - delta ** 2) ** 2
    return yest
import math

n = 100
x = np.linspace(0, 2 * math.pi, n)
y = np.sin(x) + 0.3 * np.random.randn(n)
f = 0.25
iterations = 3
yest = lowess(x, y, f, iterations)

import matplotlib.pyplot as plt
plt.plot(x, y, "r.")
plt.plot(x, yest, "b-")
plt.show()
Output:
Result:
Thus the program to build a regression model using locally weighted linear regression was executed successfully and the required graphs were plotted.
6. Build Decision Trees
Aim:
To implement the concept of decision trees with suitable dataset from real world
problems using CART algorithm.
Algorithm:
Program:
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd

data = pd.read_csv('Social_Network_Ads.csv')
data.head()
feature_cols = ['Age', 'EstimatedSalary']
x = data.iloc[:, [2, 3]].values
y = data.iloc[:, 4].values
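The split/scale/fit/meshgrid steps did not survive in this copy; a minimal reconstruction so the plotting code below runs. The split size, the StandardScaler, the gini criterion and the grid step are assumptions:

from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier
from matplotlib.colors import ListedColormap

x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.25, random_state=0)
sc = StandardScaler()
x_train = sc.fit_transform(x_train)
x_test = sc.transform(x_test)

classifier = DecisionTreeClassifier(criterion='gini', random_state=0)
classifier.fit(x_train, y_train)

# decision-boundary grid for the test-set plot
x_set, y_set = x_test, y_test
x1, x2 = np.meshgrid(np.arange(x_set[:, 0].min() - 1, x_set[:, 0].max() + 1, 0.01),
                     np.arange(x_set[:, 1].min() - 1, x_set[:, 1].max() + 1, 0.01))
plt.contourf(x1, x2,
             classifier.predict(np.array([x1.ravel(), x2.ravel()]).T).reshape(x1.shape),
             alpha=0.75, cmap=ListedColormap(("red", "green")))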
plt.xlim(x1.min(), x1.max())
plt.ylim(x2.min(), x2.max())
for i, j in enumerate(np.unique(y_set)):
    plt.scatter(x_set[y_set == j, 0], x_set[y_set == j, 1],
                c=ListedColormap(("red", "green"))(i), label=j)
plt.title("Decision Tree (Test set)")
plt.xlabel("Age")
plt.ylabel("Estimated Salary")
plt.legend()
plt.show()
from sklearn.tree import export_graphviz
from io import StringIO
from IPython.display import Image
import pydotplus

dot_data = StringIO()
export_graphviz(classifier, out_file=dot_data, filled=True, rounded=True,
                special_characters=True, feature_names=feature_cols, class_names=['0', '1'])
graph = pydotplus.graph_from_dot_data(dot_data.getvalue())
Image(graph.write_png('decisiontree.png'))

dot_data = StringIO()
export_graphviz(classifier, out_file=dot_data, filled=True, rounded=True,
                special_characters=True, feature_names=feature_cols, class_names=['0', '1'])
graph = pydotplus.graph_from_dot_data(dot_data.getvalue())
Image(graph.write_png('opt_decisiontree_gini.png'))
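The listing never scores the classifier; a short evaluation sketch using scikit-learn's accuracy_score:

from sklearn.metrics import accuracy_score
y_pred = classifier.predict(x_test)
print('Accuracy:', accuracy_score(y_test, y_pred))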
Result:
Thus the program to implement the concept of decision trees with a suitable dataset from real-world problems using the CART algorithm was executed successfully.
7. Build SVM Models
Aim:
To create a machine learning model using the Support Vector Machine algorithm.
Algorithm:
Step 1: Import the necessary libraries (pandas, numpy, matplotlib and scikit-learn).
Step 2: Load the iris dataset using the datasets.load_iris() function and store the data and
target values in variables X and y respectively.
Step 3: Create a pandas dataframe from the iris data using iris_data.data[:, [2, 3]] and
column names as iris_data.feature_names[2:].
Step 4: Split the data into training and test sets using train_test_split.
Step 5: Print the number of samples in the training and test sets
Step 6: Define the markers, colors, and colormap to be used for plotting the data.
Step 7: Plot the data using a scatter plot by iterating through the unique labels and
plotting the points with the corresponding color and marker.
Step 8: Standardize the features using StandardScaler, fitting on the training data and transforming both the training and test sets.
Step 9: Train the SVM model on the standardized training data using the SVC() function and store the trained model in SVM.
Step 10: Print the accuracy of the SVM model on the training and test data using
SVM.score(X_train_standard, y_train) and SVM.score(X_test_standard, y_test) respectively.
Program:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
from sklearn import datasets
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

iris_data = datasets.load_iris()
X = iris_data.data[:, [2, 3]]
y = iris_data.target
iris_dataframe = pd.DataFrame(iris_data.data[:, [2, 3]],
columns=iris_data.feature_names[2:])
print(iris_dataframe.head())
print('\n' + 'Unique Labels contained in this data are ' + str(np.unique(y)))
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
print('The training set contains {} samples and the test set contains {} samples'.format(X_train.shape[0], X_test.shape[0]))
markers = ('x', 's', 'o')
colors = ('red', 'blue', 'green')
cmap = ListedColormap(colors[:len(np.unique(y_test))])
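The listing stops after defining the colour map; a minimal sketch completing Steps 7-10 (scatter plot, standardization, training and scoring). The RBF kernel is an assumption:

# Step 7: scatter plot of the raw data, one colour/marker per class
for idx, cl in enumerate(np.unique(y)):
    plt.scatter(X[y == cl, 0], X[y == cl, 1],
                color=colors[idx], marker=markers[idx], label=cl)
plt.xlabel('petal length (cm)')
plt.ylabel('petal width (cm)')
plt.legend(loc='upper left')
plt.show()

# Step 8: standardize the features
scaler = StandardScaler()
X_train_standard = scaler.fit_transform(X_train)
X_test_standard = scaler.transform(X_test)

# Steps 9-10: train and score the SVM
SVM = SVC(kernel='rbf', random_state=0)
SVM.fit(X_train_standard, y_train)
print('Training accuracy:', SVM.score(X_train_standard, y_train))
print('Test accuracy:', SVM.score(X_test_standard, y_test))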
Output:
petal length (cm) petal width (cm)
0 1.4 0.2
1 1.4 0.2
2 1.3 0.2
3 1.5 0.2
4 1.4 0.2
Result:
Thus the machine learning model was created using the Support Vector Machine algorithm.
8. Implementation of Ensembling Techniques
Aim:
To implement the stacking ensemble technique using multiple base regression models and a second-level model.
Algorithm:
1. Split the training dataset into train, test and validation datasets.
2. Fit all the base models using the train dataset.
3. Make predictions on the validation and test datasets.
4. These predictions are used as features (meta-features) to build a second-level model.
5. The second-level model is used to make final predictions on the test meta-features.
Program:
import pandas as pd
import xgboost as xgb
from sklearn.metrics import mean_squared_error
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

df = pd.read_csv("train_data.csv")
target = df["target"]
train = df.drop("target", axis=1)   # drop the target column from the features

train_ratio = 0.70
validation_ratio = 0.20
test_ratio = 0.10
x_train, x_test, y_train, y_test = train_test_split(train, target, test_size=1 - train_ratio)
x_val, x_test, y_val, y_test = train_test_split(
    x_test, y_test, test_size=test_ratio / (test_ratio + validation_ratio))
model_1 = LinearRegression()
model_2 = xgb.XGBRegressor()
model_3 = RandomForestRegressor()

model_1.fit(x_train, y_train)
val_pred_1 = model_1.predict(x_val)
test_pred_1 = model_1.predict(x_test)
val_pred_1 = pd.DataFrame(val_pred_1)
test_pred_1 = pd.DataFrame(test_pred_1)

model_2.fit(x_train, y_train)
val_pred_2 = model_2.predict(x_val)
test_pred_2 = model_2.predict(x_test)
val_pred_2 = pd.DataFrame(val_pred_2)
test_pred_2 = pd.DataFrame(test_pred_2)
model_3.fit(x_train, y_train)
val_pred_3 = model_3.predict(x_val)
test_pred_3 = model_3.predict(x_test)
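The listing ends before the second-level model; a minimal sketch of the remaining stacking step, assuming a LinearRegression meta-model trained on the base models' validation predictions:

val_pred_3 = pd.DataFrame(val_pred_3)
test_pred_3 = pd.DataFrame(test_pred_3)
df_val = pd.concat([x_val.reset_index(drop=True), val_pred_1, val_pred_2, val_pred_3], axis=1)
df_test = pd.concat([x_test.reset_index(drop=True), test_pred_1, test_pred_2, test_pred_3], axis=1)
final_model = LinearRegression()
final_model.fit(df_val.values, y_val)           # .values avoids duplicate-column-name issues
final_pred = final_model.predict(df_test.values)
print(mean_squared_error(y_test, final_pred))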
Output:
4790
Result:
Thus the program to implement ensembling techniques was executed successfully and the output was verified.
9. Implementation of Clustering Algorithms
Aim:
To implement K-Means clustering on the Iris dataset and visualize the resulting clusters.
Algorithm:
Program:
import pandas as pd
import numpy as np
from sklearn import datasets
from sklearn.cluster import KMeans
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches
import sklearn.metrics as sm
iris = datasets.load_iris()
x = pd.DataFrame(iris.data, columns=['Sepal Length', 'Sepal Width', 'Petal Length', 'Petal
Width'])
y = pd.DataFrame(iris.target, columns=['Target'])
plt.figure(figsize=(12, 3))
colors = np.array(['red', 'green', 'blue'])
iris_targets_legend = np.array(iris.target_names)
red_patch = mpatches.Patch(color='red', label='Setosa')
green_patch = mpatches.Patch(color='green', label='Versicolor')
blue_patch = mpatches.Patch(color='blue', label='Virginica')
plt.subplot(1, 2, 1)
plt.scatter(x['Sepal Length'], x['Sepal Width'], c=colors[y['Target']])
plt.title('Sepal Length vs Sepal Width')
plt.legend(handles=[red_patch, green_patch, blue_patch])
plt.subplot(1,2,2)
plt.scatter(x['Petal Length'], x['Petal Width'], c=colors[y['Target']])
plt.title('Petal Length vs Petal Width')
plt.legend(handles=[red_patch, green_patch, blue_patch])
iris_k_mean_model = KMeans(n_clusters=3)
iris_k_mean_model.fit(x)
print (iris_k_mean_model.cluster_centers_)
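The listing stops after printing the cluster centres; a short follow-up comparing the K-Means labels with the true classes. Because cluster numbering is arbitrary, the re-mapping in np.choose is an assumption that must be adjusted to the actual fitted labels:

predicted_y = np.choose(iris_k_mean_model.labels_, [1, 0, 2]).astype(np.int64)
plt.figure(figsize=(12, 3))
plt.subplot(1, 2, 1)
plt.scatter(x['Petal Length'], x['Petal Width'], c=colors[y['Target']])
plt.title('True classes')
plt.subplot(1, 2, 2)
plt.scatter(x['Petal Length'], x['Petal Width'], c=colors[predicted_y])
plt.title('K-Means classification')
plt.show()
print('Accuracy:', sm.accuracy_score(y['Target'], predicted_y))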
Result:
Thus the program to implement K-Means clustering was executed successfully and the output was verified.
10. Implementation of the EM Algorithm
Aim:
To implement the EM algorithm for clustering using the given dataset.
Algorithm:
Program:
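The listing begins mid-program; a minimal sketch of the setup it presupposes (imports, the Iris data loaded into X and y, and the colour map), with the column and variable names inferred from the fragment below and the colours an assumption:

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn import datasets, preprocessing
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

iris = datasets.load_iris()
X = pd.DataFrame(iris.data, columns=['Sepal_Length', 'Sepal_Width', 'Petal_Length', 'Petal_Width'])
y = pd.DataFrame(iris.target, columns=['Targets'])
colormap = np.array(['red', 'lime', 'black'])   # one colour per class/cluster
plt.figure(figsize=(14, 7))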
# REAL PLOT
plt.subplot(1,3,1)
plt.scatter(X.Petal_Length,X.Petal_Width,c=colormap[y.Targets],s=40)
plt.title('Real')
# K-PLOT
plt.subplot(1,3,2)
model=KMeans(n_clusters=3)
model.fit(X)
predY=np.choose(model.labels_,[0,1,2]).astype(np.int64)
plt.scatter(X.Petal_Length,X.Petal_Width,c=colormap[predY],s=40)
plt.title('KMeans')
# GMM PLOT
scaler = preprocessing.StandardScaler()
scaler.fit(X)
xsa = scaler.transform(X)
xs=pd.DataFrame(xsa,columns=X.columns)
gmm=GaussianMixture(n_components=3)
gmm.fit(xs)
y_cluster_gmm=gmm.predict(xs)
plt.subplot(1,3,3)
plt.scatter(X.Petal_Length,X.Petal_Width,c=colormap[y_cluster_gmm],s=40)
plt.title('GMM Classification')
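The figure is displayed with plt.show() when run as a script; the Text(...) line below suggests the original was run in a notebook, where the call is optional:

plt.show()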
Output:
Text(0.5, 1.0, 'GMM Classification')
Result:
Thus the program for the Expectation Maximization algorithm was executed successfully and the output was verified.
11. Build Simple NN Models
Aim:
To implement the neural network model for the given numpy array.
Algorithm:
Step 1: Use numpy arrays to store inputs (x) and outputs (y)
Step 2: Define the network model and its arguments.
Step 3: Set the number of neurons/nodes for each
layer.
Step 4: Compile the model and calculate its accuracy
Step 5: Print a summary of the Keras model
Program:
from keras.models import Sequential
from keras.layers import Dense, Activation
import numpy as np

x = np.array([[0,0], [0,1], [1,0], [1,1]])
y = np.array([[0], [1], [1], [0]])

model = Sequential()
model.add(Dense(2, input_shape=(2,)))
model.add(Activation('sigmoid'))
model.add(Dense(1))
model.add(Activation('sigmoid'))
model.compile(loss='mean_squared_error', optimizer='sgd', metrics=['accuracy'])
model.summary()
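The listing compiles the model but never trains it; a short training-and-prediction sketch for the XOR data (the epoch count is an assumption, and this tiny sigmoid network can take many epochs to converge):

model.fit(x, y, epochs=5000, verbose=0)
print(model.predict(x))   # values near 0/1 indicate the learned XOR mapping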
Output:
Result:
Thus the program to implement the neural network model for the given data was executed successfully and the output was verified.
12. Build Deep Learning NN Models
Aim:
To build a deep learning neural network model that classifies handwritten digits from the MNIST dataset.
Algorithm:
Program:
import tensorflow.keras as keras
import tensorflow as tf
print(tf.__version__)

mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()

import matplotlib.pyplot as plt
plt.imshow(x_train[0], cmap=plt.cm.binary)
plt.show()
print(y_train[0])
x_train = tf.keras.utils.normalize(x_train, axis=1)
x_test = tf.keras.utils.normalize(x_test, axis=1)
print(x_train[0])
plt.imshow(x_train[0],cmap=plt.cm.binary)
plt.show()
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(128, activation=tf.nn.relu))
model.add(tf.keras.layers.Dense(128, activation=tf.nn.relu))
model.add(tf.keras.layers.Dense(10, activation=tf.nn.softmax))
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(x_train, y_train, epochs=3)
val_loss, val_acc = model.evaluate(x_test, y_test)
print(val_loss)
print(val_acc)
model.save('epic_num_reader.model')
new_model = tf.keras.models.load_model('epic_num_reader.model')
predictions = new_model.predict(x_test)
import numpy as np
print(np.argmax(predictions[0]))
plt.imshow(x_test[0],cmap=plt.cm.binary)
plt.show()
Output:
5
[28 × 28 array of normalized pixel values for the first training image: mostly 0., with the digit's stroke values between 0 and 1]
Epoch 1/3
1875/1875 [==============================] - 18s 8ms/step - loss: 0.2588 -
accuracy: 0.9252
Epoch 2/3
1875/1875 [==============================] - 16s 9ms/step - loss: 0.1055 -
accuracy: 0.9679
Epoch 3/3
1875/1875 [==============================] - 17s 9ms/step - loss: 0.0723 -
accuracy: 0.9773
313/313 [==============================] - 2s 4ms/step - loss: 0.1149 -
accuracy: 0.9651
0.11487378180027008
0.9650999903678894
Result:
Thus the program to implement and build a convolutional neural network model was executed successfully and the output was verified.