
SSM COLLEGE OF ENGINEERING


KOMARAPALAYAM- 638 183

DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING

CS3491– ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING


LABORATORY

(Regulations 2021)

SEMESTER IV
(ACADEMIC YEAR 2023-24)

RECORD NOTE BOOK

REGISTER NUMBER

NAME OF THE STUDENT



SSM COLLEGE OF ENGINEERING


KOMARAPALAYAM- 638 183

Department of Computer Science and Engineering Laboratory Record


NAME :_____________________________________COURSE : B.E (CSE)
REGISTER NO:____________________________YEAR : II Year IV Sem

Certified that this is a bonafide record of work done by the above student in the
CS3491 – Artificial Intelligence and Machine Learning Laboratory during the year 2023-2024.

Signature of Lab in Charge Signature of Head of the Department

Submitted for the Practical examination held on______________

Internal Examiner External Examiner



INDEX

S.No    Date    Name Of The Experiment                                              Page No    Marks    Signature

1.a)            Implementation of Uninformed search algorithms - BFS                   05
1.b)            Implementation of Uninformed search algorithms - DFS                   08
2.a)            Implementation of Informed search algorithms - A*                      11
2.b)            Implementation of Informed search algorithms - Memory bounded A*       15
3               Implement naïve Bayes models                                           21
4               Implement Bayesian Networks                                            23
5               Build Regression models                                                31
6.a)            Build decision trees                                                   36
6.b)            Build random forests                                                   42
7               Build SVM models                                                       46
8               Implement Ensembling Techniques                                        50
9               Implement Clustering Algorithms                                        55
10              Implement EM for Bayesian networks                                     59
11              Build Simple NN models                                                 65
12              Build deep learning NN models                                          72
Ex. No : 1(a)
Date :
Implementation of Uninformed search algorithms-BFS

AIM:
To implement the uninformed search algorithm BFS (Breadth First Search) using Python.

ALGORITHM:

 Start by putting any one of the graph’s vertices at the back of the queue.
 Now take the front item of the queue and add it to the visited list.
 Create a list of that vertex's adjacent nodes. Add those which are not within the
visited list to the rear of the queue.
 Keep repeating steps two and three until the queue is empty.
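
As a quick illustration of these steps, a minimal sketch of the queue mechanics (the graph
below is a made-up example and collections.deque is used for the queue; the record program
that follows uses a plain list):

from collections import deque

# hypothetical adjacency list used only to trace the steps above
graph = {0: [1, 2], 1: [2], 2: [0, 3], 3: [3]}

def bfs(start):
    visited = {start}              # step 1: put the source at the back of the queue and mark it
    queue = deque([start])
    order = []
    while queue:                   # step 4: repeat until the queue is empty
        node = queue.popleft()     # step 2: take the front item and record it as visited
        order.append(node)
        for nbr in graph[node]:    # step 3: enqueue adjacent nodes that are not yet visited
            if nbr not in visited:
                visited.add(nbr)
                queue.append(nbr)
    return order

print(bfs(2))   # [2, 0, 3, 1]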

PROGRAM:

# Python3 program to print BFS traversal
# from a given source vertex. BFS(s) traverses
# vertices reachable from s.
from collections import defaultdict

# This class represents a directed graph


# using adjacency list representation
class Graph:

# Constructor
def __init__(self):

# default dictionary to store graph


self.graph = defaultdict(list)

# function to add an edge to graph


def addEdge(self,u,v):
self.graph[u].append(v)

# Function to print a BFS of graph


def BFS(self, s):

# Mark all the vertices as not visited


visited = [False] * (len(self.graph))

# Create a queue for BFS


queue = []
# Mark the source node as
# visited and enqueue it
queue.append(s)
visited[s] = True

while queue:

# Dequeue a vertex from


# queue and print it
s = queue.pop(0)
print (s, end = " ")

# Get all adjacent vertices of the
# dequeued vertex s. If an adjacent
# vertex has not been visited, then mark it
# visited and enqueue it
for i in self.graph[s]:
if visited[i] == False:
queue.append(i)
visited[i] = True

# Driver code

# Create a graph given in


# the above diagram
g = Graph()
g.addEdge(0, 1)
g.addEdge(0, 2)
g.addEdge(1, 2)
g.addEdge(2, 0)
g.addEdge(2, 3)
g.addEdge(3, 3)

print ("Following is Breadth First Traversal"


" (starting from vertex 2)")
g.BFS(2)

OUTPUT:

Following is Breadth First Traversal (starting from vertex 2)
2 0 3 1

RESULT

Thus the implementation of the uninformed search algorithm BFS using Python is executed
successfully.

Ex. No : 1(b)
Implementation of Uninformed search algorithms-DFS
Date :

AIM:
To implement the uninformed search algorithm DFS (Depth First Search) using Python.

ALGORITHM:

 We will start by putting any one of the graph's vertex on top of the stack.
 After that, take the top item of the stack and add it to the visited list.
 Next, create a list of that vertex's adjacent nodes. Add the ones which aren't in
the visited list to the top of the stack.
 Lastly, keep repeating steps 2 and 3 until the stack is empty.
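
The program below uses the recursive form of DFS; as a minimal sketch of the explicit-stack
steps listed above (the graph is an illustrative example, not part of the record program):

graph = {0: [1, 2], 1: [2], 2: [0, 3], 3: [3]}

def dfs_stack(start):
    visited = []
    stack = [start]                        # step 1: push the starting vertex
    while stack:                           # step 4: repeat until the stack is empty
        node = stack.pop()                 # step 2: take the top item
        if node in visited:
            continue
        visited.append(node)               # ...and add it to the visited list
        for nbr in reversed(graph[node]):  # step 3: push adjacent nodes not yet visited
            if nbr not in visited:
                stack.append(nbr)
    return visited

print(dfs_stack(0))   # [0, 1, 2, 3]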

PROGRAM:

# Python program to print DFS traversal for complete graph


from collections import defaultdict

# This class represents a directed graph using adjacency


# list representation
class Graph:

# Constructor
def __init__(self):

# default dictionary to store graph


self.graph = defaultdict(list)

# function to add an edge to graph


def addEdge(self,u,v):
self.graph[u].append(v)

# A function used by DFS


def DFSUtil(self, v, visited):

# Mark the current node as visited and print it


visited[v] = True
print(v, end=" ")

# Recur for all the vertices adjacent to


# this vertex
for i in self.graph[v]:

if visited[i] == False:
self.DFSUtil(i, visited)

# The function to do DFS traversal. It uses


# recursive DFSUtil()
def DFS(self):
V = len(self.graph) #total vertices

# Mark all the vertices as not visited


visited =[False]*(V)

# Call the recursive helper function to print


# DFS traversal starting from all vertices
one # by one
for i in range(V):
if visited[i] == False:
self.DFSUtil(i, visited)

# Driver code
# Create a graph given in the above diagram
g = Graph()
g.addEdge(0, 1)
g.addEdge(0, 2)
g.addEdge(1, 2)
g.addEdge(2, 0)
g.addEdge(2, 3)
g.addEdge(3, 3)

print("Following is Depth First Traversal")


g.DFS()

OUTPUT:

Following is Depth First Traversal
0 1 2 3

RESULT

Thus the implementation of the uninformed search algorithm DFS using Python is executed successfully.

Ex. No : 2(a) Implementation of Informed search algorithms -A*


Date :

AIM :

To implement the informed search algorithm A* using Python.


ALGORITHM:


 Firstly, place the starting node into OPEN and find its f(n) value.
 Then remove the node from OPEN having the smallest f(n) value. If it is a goal
node, then stop and return success.
 Else remove the node from OPEN, and find all its successors.
 Find the f(n) value of all the successors, place them into OPEN, and place the
removed node into CLOSE.
 Go to Step 2.
 Exit.
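
The quantity driving these steps is the evaluation function f(n) = g(n) + h(n), where g(n) is
the path cost from the start and h(n) is the heuristic estimate to the goal. A minimal sketch
of how Step 2 picks the node with the smallest f(n) from OPEN (the g and h values here are
made up for illustration, not taken from the graphs below):

# illustrative g- and h-values for three hypothetical nodes in OPEN
g = {'B': 6, 'F': 3, 'C': 9}       # cost from the start node
h = {'B': 6, 'F': 6, 'C': 5}       # heuristic estimate to the goal

open_set = {'B', 'F', 'C'}
best = min(open_set, key=lambda n: g[n] + h[n])   # node with the smallest f(n)
print(best, g[best] + h[best])                    # F 9, so F is expanded next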
PROGRAM:

def aStarAlgo(start_node, stop_node):


open_set = set(start_node)
closed_set = set()
g = {} #store distance from starting node
parents = {}  # parents contains an adjacency map of all nodes
# distance of starting node from itself is zero
g[start_node] = 0
#start_node is root node i.e it has no parent nodes
#so start_node is set to its own parent node
parents[start_node] = start_node
while len(open_set) > 0:
n = None
#node with lowest f() is found
for v in open_set:
if n == None or g[v] + heuristic(v) < g[n] + heuristic(n):
n=v
if n == stop_node or Graph_nodes[n] == None:
pass
else:
for (m, weight) in get_neighbors(n):
#nodes 'm' not in first and last set are added to first
#n is set its parent
if m not in open_set and m not in closed_set:
open_set.add(m)
parents[m] = n
g[m] = g[n] + weight
#for each node m,compare its distance from start i.e g(m) to the
#from start through n node
else:

if g[m] > g[n] + weight:


#update g(m)
g[m] = g[n] + weight
#change parent of m to n
parents[m] = n
#if m in closed set, remove and add to open
if m in closed_set:
closed_set.remove(m)
open_set.add(m)
if n == None:
print('Path does not exist!')
return None

# if the current node is the stop_node


# then we begin reconstructin the path from it to the start_node
if n == stop_node:
path = []
while parents[n] != n:
path.append(n)
n = parents[n]
path.append(start_node)
path.reverse()
print('Path found: {}'.format(path))
return path
# remove n from the open_list, and add it to closed_list
# because all of his neighbors were inspected
open_set.remove(n)
closed_set.add(n)
print('Path does not exist!')
return None

#define fuction to return neighbor and its distance


#from the passed node
def get_neighbors(v):
if v in Graph_nodes:
return Graph_nodes[v]
else:
return None
# A* example 1
#for simplicity we ll consider heuristic distances given
#and this function returns heuristic distance for all nodes
def heuristic(n):
H_dist = {
'A': 11,
'B': 6,
'C': 5,

'D': 7,
'E': 3,
'F': 6,
'G': 5,
'H': 3,
'I': 1,
'J': 0
}
return H_dist[n]

#Describe your graph here


Graph_nodes = {
'A': [('B', 6), ('F', 3)],
'B': [('A', 6), ('C', 3), ('D', 2)],
'C': [('B', 3), ('D', 1), ('E', 5)],
'D': [('B', 2), ('C', 1), ('E', 8)],
'E': [('C', 5), ('D', 8), ('I', 5), ('J', 5)],
'F': [('A', 3), ('G', 1), ('H', 7)],
'G': [('F', 1), ('I', 3)],
'H': [('F', 7), ('I', 2)],
'I': [('E', 5), ('G', 3), ('H', 2), ('J', 3)],
}

aStarAlgo('A', 'J')

#for simplicity we ll consider heuristic distances given


#and this function returns heuristic distance for all nodes
def heuristic(n):
H_dist = {
'A': 11,
'B': 6,
'C': 99,
'D': 1,
'E': 7,
'G': 0,
}
return H_dist[n]

#Describe your graph here


Graph_nodes = {
'A': [('B', 2), ('E', 3)],
'B': [('A', 2), ('C', 1), ('G', 9)],
'C': [('B', 1)],
'D': [('E', 6), ('G', 1)],
'E': [('A', 3), ('D', 6)],
'G': [('B', 9), ('D', 1)]
}

aStarAlgo('A', 'G')

OUTPUT:

Path found: ['A', 'F', 'G', 'I', 'J']

Path found: ['A', 'E', 'D', 'G']

RESULT :

Thus the implementation of the informed search algorithm A* using Python is executed successfully.

Ex. No : 2(b)
Date :
Implementation of Informed search algorithms - memory-bounded A*

AIM :
To implement the informed search algorithm memory-bounded A* (SMA*) using Python.

ALGORITHM:

 Read the maze into a grid of cells; default values for g, h and f are 0, because a fresh cell has no costs yet.
 Mark the walls, starting and goal positions given in the grid; when generating a random maze, insert some walls and convert the grid to a string.
 Repeatedly expand the open cell with the lowest f = g + h, where h is the Manhattan distance to the goal.
 If the number of expanded cells exceeds the memory bound, either return the best path found so far or enlarge the bound slightly and continue.
 On reaching a goal, reconstruct the path through the parent links and print the maze with the solution path.
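
The heuristic used by the program below is the Manhattan distance between grid positions (in
the actual program g grows by 1 for each step along the explored path; the short sketch below
only shows how f = g + h is formed, with made-up coordinates):

current = (2, 3)    # cell being expanded (illustrative values)
goal    = (5, 7)

def manhattan(a, b):
    # |row difference| + |column difference|
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

g = 5                            # assumed path cost accumulated so far
h = manhattan(current, goal)     # heuristic estimate to the goal
f = g + h
print(g, h, f)                   # 5 7 12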

PROGRAM:

import random
from typing import Tuple, List, Generator, Union, Optional

# The below constants are used to represent the different


# types of cells in the maze. (Except space, which is ' ')
START_SIGNS = '$Ss'
GOAL_SIGNS = '*XxEeGg'
WALL_SIGNS = '#&;'

class Cell:
"""Cell class represents a cell in the maze.
Attributes:
value: The character stored in the cell.
position: Vertical and horizontal position of the cell in the maze.
parent: Cell which has been visited before this cell.
g: Cost from start to this cell.
h: Estimated cost from this cell to the goal.
f: Sum of the cost of this cell and the estimated cost to the goal.
"""

def __init__(self, value: str, position: Tuple[int, int]) -> None:
self.value = value
self.position = position
self.parent = None

# Default values for g, h and f are same as 0; because this is just a cell
# and has no brain to calculate the values.

self.g = 0
self.h = 0
self.f = 0

def __eq__(self, other: 'Cell') -> bool:
    return self.value == other.value and self.position == other.position

def __str__(self) -> str:
    return (f'Cell(value={repr(self.value)}, position={self.position}, '
            f'parent={self.parent}, g={self.g}, h={self.h}, f={self.f})')

def __repr__(self) -> str:
    return str(self)

class Maze:
"""The place that represents a 2D grid of cells.
Attributes:
grid: A list of cells with their specific position.
horizontal_limit: The horizontal limit of the maze (x-axis).
vertical_limit: The vertical limit that we can go (y-axis).
start: The starting position of the maze.
goals: The goals positions that we want to
reach. """

def __init__(self, maze: str) -> None:


lines = maze.splitlines()
rows, cols = len(lines), max(map(len, lines))
grid = [[Cell(' ', (i, j)) for j in range(cols)] for i in range(rows)] # Generate a matrix based
on the max length of rows
start = None
goals = []

for i, line in enumerate(lines):


for j, char in enumerate(line):
if char not in f'{START_SIGNS}{GOAL_SIGNS}{WALL_SIGNS} ':
raise ValueError(f'Invalid character {repr(char)} at position ({i}, {j})')
elif char in START_SIGNS:
if start is not None:
raise ValueError('Multiple start positions found!')
start = (i, j)
elif char in GOAL_SIGNS:
goals.append((i, j))
                grid[i][j].value = char

        self.grid = grid

self.horizontal_limit = rows
self.vertical_limit = cols
self.start = start
self.goals = goals

def __eq__(self, other: 'Maze') -> bool:
    return self.grid == other.grid and self.start == other.start and self.goals == other.goals

def __str__(self) -> str:
    return (f'Maze(grid={self.grid}, horizontal_limit={self.horizontal_limit}, '
            f'vertical_limit={self.vertical_limit}, start={self.start}, goals={self.goals})')

def __repr__(self) -> str:
    return str(self)

def neighbors(self, cell: Tuple[int, int]) -> Generator[Tuple[int, int], None, None]:
"""Yields all the neighbors that are not walls."""
current_x, current_y = cell
coords = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (0, -1), (1, -1), (1, 0), (1, 1)]

for next_x, next_y in coords:


x = current_x + next_x
y = current_y + next_y
if 0 <= x < self.horizontal_limit and 0 <= y < self.vertical_limit and \
        self.grid[x][y].value not in WALL_SIGNS:
    yield x, y

def _generate_maze(min_size: Tuple[int, int], max_size: Tuple[int, int],


save_to_file: Optional[bool] = False) -> Maze:
"""Generates a random maze with the given size."""
rows, cols = random.randint(*min_size), random.randint(*max_size)
grid = [[' ' for _ in range(cols)] for _ in range(rows)]
# Inserting some walls, starting and goal positions
for _ in range(random.randint(0, rows*cols//5)):
grid[random.randint(0, rows-1)][random.randint(0, cols-1)] = '#'
grid[random.randint(0, rows-1)][random.randint(0, cols-1)] = '$'
grid[random.randint(0, rows-1)][random.randint(0, cols-1)] = 'X'
grid = '\n'.join([''.join(row) for row in grid]) # Convert the grid to a string

if save_to_file:
with open('genmaze.txt', 'w') as f:
f.write(grid)
return Maze(grid)

def _reconstruct_path(current: Cell) -> List[Tuple[int, int]]:


"""Reconstructs the path from the start to the goal."""
path = [current.position]
while current.parent is not None:
    current = current.parent
    path.insert(0, current.position)
return path

def _manhattan_distance(current: Tuple[int, int], goal: Tuple[int, int]) -> int:


"""Calculates the Manhattan distance between two points."""
return abs(current[0] - goal[0]) + abs(current[1] - goal[1])

def sma_star(maze: Union[Maze, str], bound: Optional[int] = None, forcely: Optional[bool] =


False) -> List[Tuple[int, int]]:
"""SMA* searches for the shortest path from the start to the goal(s)."""
if not isinstance(maze, (Maze, str)):
raise TypeError('The maze must be a Maze object or a string.')
elif isinstance(maze, str):
maze = Maze(maze)

opened = [maze.grid[maze.start[0]][maze.start[1]]] # The list initiallized with the starting cell


closed = [] # No cells have been visited yet

while opened:
# print(' ')
    lowest_f = min(opened, key=lambda cell: cell.f)
    current = opened.pop(opened.index(lowest_f))
    # print(f'Current position is {current.position}')
    closed.append(current)

if current.position in maze.goals:
# print(f'Goal found after {len(closed)} steps!')
# print(f'The maximum required bound in this case is {int(bound) if bound is not None
else "not specified"}')
# print(' ')
return _reconstruct_path(current) # Return the path at the first goal found

for neighbor_x, neighbor_y in maze.neighbors(current.position):


neighbor_cell = maze.grid[neighbor_x][neighbor_y]
if neighbor_cell in closed:
# print(f'Cell {neighbor_cell.position} is already visited')
continue

neighbor_cell.parent = current # We need to trace back to the start



neighbor_cell.g = current.g + 1 # The path cost from the start to the node n increases by
1
neighbor_cell.h = _manhattan_distance(current.position, neighbor_cell.position)
neighbor_cell.f = neighbor_cell.g + neighbor_cell.h
# print(f'Neighbor(position={neighbor_cell.position}, g={neighbor_cell.g},
h={neighbor_cell.h}, f={neighbor_cell.f})')

if neighbor_cell not in opened:


opened.append(neighbor_cell)
# print(f'Cell {neighbor_cell.position} added to the opened list')
closed.append(neighbor_cell) # The cell is now visited

    if bound is not None and len(closed) > bound:
        # print(f'The bound of {bound} is reached')
        if not forcely:
            return _reconstruct_path(current)
        bound *= 1.05  # Increase the bound by just 5%

return None

if __name__ == '__main__':


import sys
import argparse

parser = argparse.ArgumentParser(description='Simplified Memory Bounded A* (SMA*)


path finding algorithm.')
parser.add_argument('-m', '--maze', type=str, help='the path to the maze file')
parser.add_argument('-g', '--generate', action='store_true', help='generate a new maze and
save
it to a file')
parser.add_argument('-b', '--bound', type=int, help='the maximum number of nodes to be
expanded')
parser.add_argument('-f', '--forcely', action='store_true', help='search the maze even if the
bound is reached')
args = parser.parse_args()

if args.generate:
maze = _generate_maze(min_size=(5, 5), max_size=(100, 100), save_to_file=True)
print('Maze generated successfully!\n')
elif args.maze is not None:
with open(args.maze, 'r') as f:
maze = Maze(f.read())
print('Maze loaded from file...\n')
else:
print('No maze specified! Please read the help for more information.')
sys.exit(1)

    solution = sma_star(maze, args.bound, args.forcely)
    if solution is None:
print('No solution found!')
sys.exit(1)
print(f'The solution steps in order is: {" -> ".join(map(lambda pos: f"({pos[0]}, {pos[1]})",
solution))}\n')

# Printing the maze with the solution path


for row in maze.grid:
for cell in row:
if cell.position in solution[1:-1]:
cell.value = '·'
print(cell.value, end='')
print()

Output:

RESULT

Thus the implementation of the informed search algorithm memory-bounded A* using Python is
executed successfully.

Ex. No : 3
Implement naïve Bayes models
Date :

AIM :

To implement the naïve Bayes model using Python.


ALGORITHM:

Step 1: Separate By Class.
Step 2: Summarize Dataset.
Step 3: Summarize Data By Class.
Step 4: Gaussian Probability Density Function.
Step 5: Class Probabilities.
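
Step 4 relies on the Gaussian probability density function
p(x) = exp(-(x - mean)^2 / (2*var)) / sqrt(2*pi*var), which the program below implements in
_pdf(). A small worked sketch with made-up numbers:

import math

def gaussian_pdf(x, mean, var):
    # Gaussian density used for each feature in Step 4
    return math.exp(-((x - mean) ** 2) / (2 * var)) / math.sqrt(2 * math.pi * var)

# illustrative values only: feature value 1.0 under a class with mean 0.0 and variance 1.0
print(gaussian_pdf(1.0, 0.0, 1.0))   # about 0.2420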

PROGRAM:

import numpy as np

class NaiveBayes:

def fit(self, X, y):


n_samples, n_features = X.shape
self._classes = np.unique(y)
n_classes = len(self._classes)

# calculate mean, var, and prior for each class


self._mean = np.zeros((n_classes, n_features), dtype=np.float64)
self._var = np.zeros((n_classes, n_features), dtype=np.float64)
self._priors = np.zeros(n_classes, dtype=np.float64)

for idx, c in enumerate(self._classes):


X_c = X[y == c]
self._mean[idx, :] = X_c.mean(axis=0)
self._var[idx, :] = X_c.var(axis=0)
self._priors[idx] = X_c.shape[0] / float(n_samples)

def predict(self, X):


y_pred = [self._predict(x) for x in X]
return np.array(y_pred)

def _predict(self, x):


posteriors = []

# calculate posterior probability for each class



for idx, c in enumerate(self._classes):


prior = np.log(self._priors[idx])

posterior = np.sum(np.log(self._pdf(idx, x)))


posterior = posterior + prior
posteriors.append(posterior)

# return class with the highest posterior


return self._classes[np.argmax(posteriors)]

def _pdf(self, class_idx, x):


mean = self._mean[class_idx]
var = self._var[class_idx]
numerator = np.exp(-((x - mean) ** 2) / (2 * var))
denominator = np.sqrt(2 * np.pi * var)
return numerator / denominator

# Testing
if __name__ == "__main__":
# Imports
from sklearn.model_selection import train_test_split
from sklearn import datasets

def accuracy(y_true, y_pred):
    accuracy = np.sum(y_true == y_pred) / len(y_true)
    return accuracy

X, y = datasets.make_classification(
n_samples=1000, n_features=10, n_classes=2, random_state=123
)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=123
)

nb = NaiveBayes()
nb.fit(X_train, y_train)
predictions = nb.predict(X_test)

print("Naive Bayes classification accuracy", accuracy(y_test, predictions))

OUTPUT:

RESULT :
Thus the implementation of Naïve Bayes model is executed successfully.

Ex. No : 4
Implement Bayesian Networks
Date :

AIM :

To implement Bayesian networks

ALGORITHM:

1. First, identify the main variables in the problem to be solved.
2. Second, define the structure of the network, that is, the causal relationships between all the
variables (nodes).
3. Third, define the probability rules (conditional probability tables) governing the relationships
between the variables.
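
The program below builds on helper classes from an external Assignment4 module and answers
queries by sampling. As a self-contained illustration of the three steps above, a minimal
sketch of a two-node network (Rain -> WetGrass, with made-up probabilities) queried exactly
by enumeration:

# Step 1: variables Rain and WetGrass.  Step 2: edge Rain -> WetGrass.
# Step 3: illustrative probability tables, not taken from the exercise.
p_rain = 0.2                                  # P(Rain = true)
p_wet_given_rain = {True: 0.9, False: 0.1}    # P(WetGrass = true | Rain)

# query P(Rain = true | WetGrass = true) by enumerating both values of Rain
joint_true = p_rain * p_wet_given_rain[True]            # Rain true,  grass wet
joint_false = (1 - p_rain) * p_wet_given_rain[False]    # Rain false, grass wet

posterior = joint_true / (joint_true + joint_false)
print(round(posterior, 3))   # 0.692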

PROGRAM:

#!/usr/bin/env python
""" generated source for module BayesianNetwork """
from Assignment4 import *
import random
#
# * A bayesian network
# * @author Panqu
#
class BayesianNetwork(object):
""" generated source for class BayesianNetwork """
#
# * Mapping of random variables to nodes in the
network #
varMap = None

#
# * Edges in this network
#
edges = None

#
# * Nodes in the network with no
parents #
rootNodes = None

#
# * Default constructor initializes empty network
#
def __init__(self):
""" generated source for method init """
self.varMap = {}
self.edges = []

self.rootNodes = []

#
# * Add a random variable to this
network # * @param variable Variable
to add
#
def addVariable(self, variable):
""" generated source for method addVariable """
node = Node(variable)
self.varMap[variable]=node
self.rootNodes.append(node)

#
# * Add a new edge between two random variables already in this
network # * @param cause Parent/source node
# * @param effect Child/destination
node #
def addEdge(self, cause, effect):
""" generated source for method addEdge """
source = self.varMap.get(cause)
dest = self.varMap.get(effect)
self.edges.append(Edge(source, dest))
source.addChild(dest)
dest.addParent(source)
if dest in self.rootNodes:
self.rootNodes.remove(dest)

#
# * Sets the CPT variable in the bayesian network (probability
of # * this variable given its parents)
# * @param variable Variable whose CPT we are setting
# * @param probabilities List of probabilities P(V=true|P1,P2...), that must be ordered
as follows.
# Write out the cpt by hand, with each column representing one of the parents
(in alphabetical order).
# Then assign these parent variables true/false based on the following order: ...tt, ...tf, ...ft,
...ff.
# The assignments in the right most column, P(V=true|P1,P2,...), will be the values
you should pass in as probabilities here.
#
def setProbabilities(self, variable, probabilities):
""" generated source for method setProbabilities """
probList = []
for probability in probabilities:
probList.append(probability)
self.varMap.get(variable).setProbabilities(probList)

def normalize(self, toReturn):
    SUM = toReturn[0] + toReturn[1]
    if SUM == 0:
        return 0, 0
    else:
        return float(toReturn[0]) / SUM

#
# * Returns an estimate of P(queryVal=true|givenVars) using rejection
sampling # * @param queryVar Query variable in probability query
# * @param givenVars A list of assignments to variables that represent our given
evidence variables
# * @param numSamples Number of rejection samples to perform
#
def performRejectionSampling(self, queryVar, givenVars, numSamples):
""" generated source for method performRejectionSampling """
# TODO
toReturn = [0, 0]
for j in range(1, numSamples):
# start prior sampling
x = {}
sortVar = sorted(self.varMap)
for variable in sortVar:
ran = random.random()
if ran <= self.varMap[variable].getProbability(x, True):
x[variable.getName()] = True
else:
x[variable.getName()] = False
# end prior sampling

for e in givenVars:
if x[e.getName()] == givenVars[e]:
if x[queryVar.getName()] is
True:
toReturn[0] += 1
else:
toReturn[1] += 1

return self.normalize(toReturn)

#
# * Returns an estimate of P(queryVal=true|givenVars) using weighted
sampling # * @param queryVar Query variable in probability query
# * @param givenVars A list of assignments to variables that represent our given
evidence variables

# * @param numSamples Number of weighted samples to


perform #
def performWeightedSampling(self, queryVar, givenVars, numSamples):
""" generated source for method performWeightedSampling """
# TODO
toReturn = [0, 0]
for j in range(1, numSamples):
# weightedSample
(x, w) = self.weightedSample(self.varMap, givenVars)

if x[queryVar.getName()] is True:
toReturn[0] += w
else:
toReturn[1] += w

return self.normalize(toReturn)

def weightedSample(self, bn, e):


x = Sample()
for event in e:
x.setAssignment(event.getName(), e[event])

sortVar = sorted(bn.keys())
for xi in sortVar:
if x.getValue(xi.getName()) is not None:
w = x.getWeight()
w = w * bn[xi].getProbability(x.assignments, x.assignments.get(xi.getName()))
x.setWeight(w)
else:
ran = random.random()
if ran <= bn[xi].getProbability(x.assignments, True):
x.assignments[xi.getName()] = True
else:
x.assignments[xi.getName()] = False

return x.assignments, x.getWeight()

#
# * Returns an estimate of P(queryVal=true|givenVars) using Gibbs
sampling # * @param queryVar Query variable in probability query
# * @param givenVars A list of assignments to variables that represent our given
evidence variables
# * @param numTrials Number of Gibbs trials to perform, where a single trial consists
of assignments to ALL

# non-evidence variables (ie. not a single state change, but a state change of all
non- evidence variables)
#
def performGibbsSampling(self, queryVar, givenVars, numTrials):
""" generated source for method performGibbsSampling """
# TODO
counter = [0, 0]

nonEviVar = []
givenVarsSort = sorted(givenVars)
newvarMap = {}

# set all needed variable field


for variable in self.varMap.keys():
    if variable in givenVarsSort:
        newvarMap[variable.getName()] = givenVars[variable]
        continue
    else:
        nonEviVar.append(variable)
        randomprob = random.random()
        if randomprob < 0.5:
            newvarMap[variable.getName()] = False
        else:
            newvarMap[variable.getName()] = True

# gibbs sampling
# idea from book page 537
for j in range(1, numTrials):
for z in nonEviVar:
markovList = self.markovBlanket(self.varMap.get(z))
markovMap = {}

for mark in markovList:
    markovMap[mark.getVariable().getName()] = newvarMap[mark.getVariable().getName()]

probCom = self.gibbsProb(markovMap, z)
if probCom[0] == 0:
    alpha = 0
else:
    alpha = 1.0 / probCom[0]
val = alpha * probCom[1]

randomprob2 = random.random()
if val < randomprob2:
    newvarMap[self.varMap[z].getVariable().getName()] = False
else:
    newvarMap[self.varMap[z].getVariable().getName()] = True

if newvarMap[queryVar.getName()] is False:
counter[1] += 1
else:
counter[0] += 1

return self.normalize(counter)

def gibbsProb(self, markMap, Z_i):


query = {}
probC_true = 1.0
probC_false = 1.0

for par in self.varMap[Z_i].getParents():


query[par.getVariable().getName()] = markMap[par.getVariable().getName()]

prob_true = self.varMap[Z_i].getProbability(query, True)


prob_false = self.varMap[Z_i].getProbability(query, False)

for child in self.varMap[Z_i].getChildren():


childP = {}
for childp in child.getParents():
if childp.getVariable().equals(self.varMap[Z_i].getVariable()) is False:
childP[childp.getVariable().getName()] =
markMap[childp.getVariable().getName()]
else:
childP[childp.getVariable().getName()] = True

probC_true = prob_true * child.getProbability(childP, markMap[child.getVariable().getName()])
for child in self.varMap[Z_i].getChildren():
childP = {}
for childp in child.getParents():
if childp.getVariable().equals(self.varMap[Z_i].getVariable()) is False:
childP[childp.getVariable().getName()] =
markMap[childp.getVariable().getName()]
else:
childP[childp.getVariable().getName()] = False

probC_false = prob_false * child.getProbability(childP, markMap[child.getVariable().getName()])
toReturn = prob_true * probC_true + prob_false * probC_false
return toReturn, prob_true * probC_true

# markovBlanket method that provides a list that we need to use for Gibbs Sampling
# idea from slide 19_20 page 18
def markovBlanket(self, node):
markovList = []
for parentN in node.getParents():
markovList.append(parentN)

for childrenN in node.getChildren():


markovList.append(childrenN)

for parentC in childrenN.getParents():


if parentC is node or parentC in markovList:
continue
markovList.append(parentC)

return markovList

Output:

RESULT :

Thus the implementation of Bayesian Networks is executed successfully.



Ex. No : 5
Build Regression models
Date :

AIM :

To build regression models using Python.

ALGORITHM:

1. Initialize the parameters.
2. Predict the value of the dependent variable for a given independent variable.
3. Calculate the error in prediction for all data points.
4. Calculate the partial derivatives w.r.t. a0 and a1.
5. Calculate the cost for each data point and add them.
6. Update the values of a0 and a1.
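
The record programs below fit the line with locally weighted regression and with the
closed-form least-squares coefficients. As a minimal sketch of the gradient-descent steps
listed above (toy data and an assumed learning rate, for illustration only):

import numpy as np

x = np.array([0, 1, 2, 3, 4], dtype=float)    # illustrative data
y = np.array([1, 3, 2, 5, 7], dtype=float)

a0, a1 = 0.0, 0.0      # step 1: initialize the parameters
lr = 0.05              # assumed learning rate

for _ in range(1000):
    y_pred = a0 + a1 * x               # step 2: predict
    error = y_pred - y                 # step 3: prediction error for all points
    d_a0 = 2 * np.mean(error)          # step 4: partial derivatives of the squared-error cost
    d_a1 = 2 * np.mean(error * x)
    a0 -= lr * d_a0                    # step 6: update a0 and a1
    a1 -= lr * d_a1

print(round(a0, 2), round(a1, 2))      # approaches the least-squares fit (about 0.8 and 1.4)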

PROGRAM:

import matplotlib.pyplot as plt


import pandas as pd
import numpy as np

def kernel(point,xmat, k):


m,n = np.shape(xmat)
weights = np.mat(np.eye((m))) # eye - identity matrix

for j in range(m):
diff = point - X[j]
weights[j,j] = np.exp(diff*diff.T/(-2.0*k**2))
return weights

def localWeight(point,xmat,ymat,k):
wei = kernel(point,xmat,k)
W = (X.T*(wei*X)).I*(X.T*(wei*ymat.T))
return W
def localWeightRegression(xmat,ymat,k):
m,n = np.shape(xmat)
ypred = np.zeros(m)
for i in range(m):
ypred[i] = xmat[i]*localWeight(xmat[i],xmat,ymat,k)
return ypred

def graphPlot(X,ypred):
sortindex = X[:,1].argsort(0) #argsort - index of the smallest
xsort = X[sortindex][:,0]
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
ax.scatter(bill,tip, color='green')

ax.plot(xsort[:,1],ypred[sortindex], color = 'red', linewidth=5)


plt.xlabel('Total bill')
plt.ylabel('Tip')
plt.show();
# load data points
data = pd.read_csv('10data_tips.csv')
bill = np.array(data.total_bill)   # We use only Bill amount and Tips data
tip = np.array(data.tip)
mbill = np.mat(bill)   # .mat converts the nd array into a 2D matrix
mtip = np.mat(tip)
m = np.shape(mbill)[1]
one = np.mat(np.ones(m))
X = np.hstack((one.T, mbill.T))   # 244 rows, 2 cols
ypred = localWeightRegression(X, mtip, 8)   # increase k to get smooth curves
graphPlot(X, ypred)

OUTPUT:

Type 2 :

import numpy as np
import matplotlib.pyplot as plt

def estimate_coef(x, y):
    # number of observations/points
    n = np.size(x)

    # mean of x and y vector
    m_x = np.mean(x)
    m_y = np.mean(y)

    # calculating cross-deviation and deviation about x
    SS_xy = np.sum(y*x) - n*m_y*m_x
    SS_xx = np.sum(x*x) - n*m_x*m_x

    # calculating regression coefficients
    b_1 = SS_xy / SS_xx
    b_0 = m_y - b_1*m_x

return (b_0, b_1)

def plot_regression_line(x, y, b):


# plotting the actual points as scatter plot
plt.scatter(x, y, color = "m",
marker = "o", s = 30)

# predicted response vector


y_pred = b[0] + b[1]*x

# plotting the regression line


plt.plot(x, y_pred, color = "g")

# putting labels
plt.xlabel('x')
plt.ylabel('y')

# function to show plot


plt.show()

def main():
# observations / data
x = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
y = np.array([1, 3, 2, 5, 7, 8, 8, 9, 10, 12])

    # estimating coefficients
    b = estimate_coef(x, y)
    print("Estimated coefficients:\nb_0 = {} \
    \nb_1 = {}".format(b[0], b[1]))

    # plotting regression line
    plot_regression_line(x, y, b)

if __name__ == "__main__":
    main()

output

RESULT :

Thus the implementation of regression models using Python is executed successfully.



Ex. No : 6 a)
Date :
Build decision trees

AIM :

To implement decision trees using python

ALGORITHM:

1. Importing Required Libraries: first load the required libraries.
2. Loading Data.
3. Feature Selection.
4. Splitting Data.
5. Building the Decision Tree Model.
6. Evaluating the Model.
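
The splitting criterion used by the program below is information gain, i.e. the parent's
entropy H = -sum(p * log(p)) minus the weighted entropy of the children. A small worked
sketch with made-up labels:

import numpy as np

def entropy(y):
    counts = np.bincount(y)
    ps = counts / len(y)
    return -np.sum([p * np.log(p) for p in ps if p > 0])

# illustrative labels: a parent node and one candidate split
parent = np.array([0, 0, 0, 1, 1, 1])
left, right = np.array([0, 0, 0]), np.array([1, 1, 1])

gain = entropy(parent) - (len(left) / len(parent)) * entropy(left) \
                       - (len(right) / len(parent)) * entropy(right)
print(round(gain, 4))   # 0.6931: a perfect split turns all of the parent's entropy into gain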

PROGRAM:

Main.py

import numpy as np
from collections import Counter

class Node:
    def __init__(self, feature=None, threshold=None, left=None, right=None, *, value=None):
        self.feature = feature
        self.threshold = threshold
        self.left = left
        self.right = right
        self.value = value

def is_leaf_node(self):
return self.value is not None

class DecisionTree:
    def __init__(self, min_samples_split=2, max_depth=100, n_features=None):
        self.min_samples_split = min_samples_split
        self.max_depth = max_depth
        self.n_features = n_features
        self.root = None

def fit(self, X, y):


self.n_features = X.shape[1] if not self.n_features else min(X.shape[1],self.n_features)
self.root = self._grow_tree(X, y)

def _grow_tree(self, X, y, depth=0):


n_samples, n_feats = X.shape

n_labels = len(np.unique(y))

# check the stopping criteria


if (depth>=self.max_depth or n_labels==1 or n_samples<self.min_samples_split):
leaf_value = self._most_common_label(y)
return Node(value=leaf_value)

feat_idxs = np.random.choice(n_feats, self.n_features, replace=False)

# find the best split


best_feature, best_thresh = self._best_split(X, y, feat_idxs)

# create child nodes


        left_idxs, right_idxs = self._split(X[:, best_feature], best_thresh)
        left = self._grow_tree(X[left_idxs, :], y[left_idxs], depth+1)
        right = self._grow_tree(X[right_idxs, :], y[right_idxs], depth+1)
        return Node(best_feature, best_thresh, left, right)

def _best_split(self, X, y, feat_idxs):


best_gain = -1
split_idx, split_threshold = None, None

for feat_idx in feat_idxs:


X_column = X[:, feat_idx]
thresholds = np.unique(X_column)

for thr in thresholds:


# calculate the information gain
gain = self._information_gain(y, X_column, thr)

if gain > best_gain:


best_gain = gain
split_idx = feat_idx
split_threshold = thr

return split_idx, split_threshold

def _information_gain(self, y, X_column, threshold):


# parent entropy
parent_entropy = self._entropy(y)

# create children
left_idxs, right_idxs = self._split(X_column, threshold)

if len(left_idxs) == 0 or len(right_idxs) == 0:
return 0

# calculate the weighted avg. entropy of children


n = len(y)
n_l, n_r = len(left_idxs), len(right_idxs)
e_l, e_r = self._entropy(y[left_idxs]), self._entropy(y[right_idxs])
child_entropy = (n_l/n) * e_l + (n_r/n) * e_r

# calculate the IG
information_gain = parent_entropy - child_entropy
return information_gain

    def _split(self, X_column, split_thresh):
        left_idxs = np.argwhere(X_column <= split_thresh).flatten()
        right_idxs = np.argwhere(X_column > split_thresh).flatten()
        return left_idxs, right_idxs

    def _entropy(self, y):
        hist = np.bincount(y)
        ps = hist / len(y)
        return -np.sum([p * np.log(p) for p in ps if p > 0])

    def _most_common_label(self, y):
        counter = Counter(y)
        value = counter.most_common(1)[0][0]
        return value

def predict(self, X):


return np.array([self._traverse_tree(x, self.root) for x in X])

def _traverse_tree(self, x, node):


if node.is_leaf_node():
return node.value

if x[node.feature] <= node.threshold:


return self._traverse_tree(x, node.left)
return self._traverse_tree(x, node.right)

Train data.py

from sklearn import datasets


from sklearn.model_selection import train_test_split
import numpy as np
from DecisionTree import DecisionTree

data = datasets.load_breast_cancer()
X, y = data.data, data.target

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=1234
)

clf = DecisionTree(max_depth=10)
clf.fit(X_train, y_train)
predictions = clf.predict(X_test)

def accuracy(y_test, y_pred):


return np.sum(y_test == y_pred) / len(y_test)

acc = accuracy(y_test, predictions)


print(acc)

OUTPUT:

RESULT :

Thus the implementation of decision trees using python is executed successfully



Ex. No : 6 b)
Date :
Build random forests

AIM :

To implement random forests using python


ALGORITHM:
 Importing the libraries
 Importing the datasets
 Splitting the dataset into the Training set and Test set
 Feature Scaling
 Fitting the classifier into the Training set
 Predicting the test set results
 Making the Confusion Matrix
 Visualising the Training set results

PROGRAM:

# Random Forest Classifier

# Importing the libraries

import numpy as np
import matplotlib.pyplot as plt
import pandas as pd

# Importing the datasets

datasets = pd.read_csv('Social_Network_Ads.csv')
X = datasets.iloc[:, [2,3]].values
Y = datasets.iloc[:, 4].values

# Splitting the dataset into the Training set and Test set

from sklearn.model_selection import train_test_split


X_Train, X_Test, Y_Train, Y_Test = train_test_split(X, Y, test_size = 0.25, random_state = 0)

# Feature Scaling

from sklearn.preprocessing import StandardScaler


sc_X = StandardScaler()
X_Train = sc_X.fit_transform(X_Train)
X_Test = sc_X.transform(X_Test)

# Fitting the classifier into the Training set

from sklearn.ensemble import RandomForestClassifier


classifier = RandomForestClassifier(n_estimators = 200, criterion = 'entropy', random_state = 0)

classifier.fit(X_Train,Y_Train)

# Predicting the test set results

Y_Pred = classifier.predict(X_Test)

# Making the Confusion Matrix

from sklearn.metrics import confusion_matrix


cm = confusion_matrix(Y_Test, Y_Pred)

# Visualising the Training set results

from matplotlib.colors import ListedColormap


X_Set, Y_Set = X_Train, Y_Train
X1, X2 = np.meshgrid(np.arange(start = X_Set[:, 0].min() - 1, stop = X_Set[:, 0].max() + 1, step
= 0.01),
np.arange(start = X_Set[:, 1].min() - 1, stop = X_Set[:, 1].max() + 1, step = 0.01))
plt.contourf(X1, X2, classifier.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape),
alpha = 0.75, cmap = ListedColormap(('red', 'green')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(Y_Set)):
plt.scatter(X_Set[Y_Set == j, 0], X_Set[Y_Set == j, 1],
c = ListedColormap(('red', 'green'))(i), label = j)
plt.title('Random Forest Classifier (Training set)')
plt.xlabel('Age')
plt.ylabel('Estimated Salary')
plt.legend()
plt.show()

# Visualising the Test set results

from matplotlib.colors import ListedColormap


X_Set, Y_Set = X_Test, Y_Test
X1, X2 = np.meshgrid(np.arange(start = X_Set[:, 0].min() - 1, stop = X_Set[:, 0].max() + 1, step
= 0.01),
np.arange(start = X_Set[:, 1].min() - 1, stop = X_Set[:, 1].max() + 1, step = 0.01))
plt.contourf(X1, X2, classifier.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape),
alpha = 0.75, cmap = ListedColormap(('red', 'green')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(Y_Set)):
plt.scatter(X_Set[Y_Set == j, 0], X_Set[Y_Set == j, 1],
c = ListedColormap(('red', 'green'))(i), label = j)
plt.title('Random Forest Classifier (Test set)')

plt.xlabel('Age')
plt.ylabel('Estimated Salary')
plt.legend()
plt.show()

OUTPUT:

RESULT :

Thus the implementation of random forest using python is executed successfully



Ex. No : 7
Date :
Build SVM models

AIM :

To implement SVM models using python

ALGORITHM:
 Importing the libraries
 Importing the datasets
 Splitting the dataset into the Training set and Test set
 Fitting the classifier into the Training set
 Predicting the test set results
 Making the Confusion Matrix
 Visualising the Training set results

PROGRAM:

# Support Vector Machine


# Importing the libraries

import numpy as np
import matplotlib.pyplot as plt
import pandas as pd

# Importing the datasets

datasets = pd.read_csv('Social_Network_Ads.csv')
X = datasets.iloc[:, [2,3]].values
Y = datasets.iloc[:, 4].values

# Splitting the dataset into the Training set and Test set

from sklearn.model_selection import train_test_split


X_Train, X_Test, Y_Train, Y_Test = train_test_split(X, Y, test_size = 0.25, random_state = 0)

# Feature Scaling

from sklearn.preprocessing import StandardScaler


sc_X = StandardScaler()
X_Train = sc_X.fit_transform(X_Train)
X_Test = sc_X.transform(X_Test)

# Fitting the classifier into the Training set



from sklearn.svm import SVC


classifier = SVC(kernel = 'linear', random_state = 0)
classifier.fit(X_Train, Y_Train)

# Predicting the test set results

Y_Pred = classifier.predict(X_Test)

# Making the Confusion Matrix

from sklearn.metrics import confusion_matrix


cm = confusion_matrix(Y_Test, Y_Pred)

# Visualising the Training set results

from matplotlib.colors import ListedColormap


X_Set, Y_Set = X_Train, Y_Train
X1, X2 = np.meshgrid(np.arange(start = X_Set[:, 0].min() - 1, stop = X_Set[:, 0].max() + 1, step
= 0.01),
np.arange(start = X_Set[:, 1].min() - 1, stop = X_Set[:, 1].max() + 1, step = 0.01))
plt.contourf(X1, X2, classifier.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape),
alpha = 0.75, cmap = ListedColormap(('red', 'green')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(Y_Set)):
plt.scatter(X_Set[Y_Set == j, 0], X_Set[Y_Set == j, 1],
c = ListedColormap(('red', 'green'))(i), label = j)
plt.title('Support Vector Machine (Training set)')
plt.xlabel('Age')
plt.ylabel('Estimated Salary')
plt.legend()
plt.show()

# Visualising the Test set results

from matplotlib.colors import ListedColormap


X_Set, Y_Set = X_Test, Y_Test
X1, X2 = np.meshgrid(np.arange(start = X_Set[:, 0].min() - 1, stop = X_Set[:, 0].max() + 1, step
= 0.01),
np.arange(start = X_Set[:, 1].min() - 1, stop = X_Set[:, 1].max() + 1, step = 0.01))
plt.contourf(X1, X2, classifier.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape),
alpha = 0.75, cmap = ListedColormap(('red', 'green')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(Y_Set)):

plt.scatter(X_Set[Y_Set == j, 0], X_Set[Y_Set == j, 1],


c = ListedColormap(('red', 'green'))(i), label = j)
plt.title('Support Vector Machine (Test set)')
plt.xlabel('Age')
plt.ylabel('Estimated Salary')
plt.legend()
plt.show()

OUTPUT:

RESULT :

Thus the implementation of SVM models using python is executed successfully



Ex. No : 8
Date :
Implement ensembling techniques

AIM :

To implement ensembling techniques using python

ALGORITHM:
 Import the utility modules, the machine learning models used for prediction, and train_test_split.
 Load the training data set into a dataframe from the train_data.csv file, and separate the target
column from the remaining training features.
 Split the data into training, validation and test sets.
 Initialize all the base model objects with default parameters and train each of them on the
training dataset, converting their predictions to dataframes.
 Concatenate the validation dataset with the predicted validation data (meta features) and fit the
final model on these meta features.
 Get the final output on the test set and print the mean squared error; the blending sketch after
this list illustrates the same idea on synthetic data.
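
As a compact illustration of the blending idea described above (synthetic data and simple base
models only; the record program below uses train_data.csv and XGBoost):

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

X = np.random.rand(200, 3)                                 # synthetic features
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * np.random.randn(200)

x_tr, x_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

base_1 = LinearRegression().fit(x_tr, y_tr)
base_2 = DecisionTreeRegressor(random_state=0).fit(x_tr, y_tr)

# meta features = validation features plus each base model's predictions
meta = np.column_stack([x_val, base_1.predict(x_val), base_2.predict(x_val)])
final = LinearRegression().fit(meta, y_val)

print(mean_squared_error(y_val, final.predict(meta)))   # error of the blended model on this split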

PROGRAM:

Type : Blending

# importing utility modules


import pandas as pd
from sklearn.metrics import mean_squared_error

# importing machine learning models for prediction


from sklearn.ensemble import RandomForestRegressor
import xgboost as xgb
from sklearn.linear_model import LinearRegression

# importing train test split


from sklearn.model_selection import train_test_split

# loading train data set in dataframe from train_data.csv file


df = pd.read_csv("train_data.csv")

# getting target data from the dataframe
target = df["target"]

# getting train data from the dataframe
train = df.drop("target", axis=1)

#Splitting between train data into training and validation dataset


X_train, X_test, y_train, y_test = train_test_split(train, target, test_size=0.20)

# performing the train test and validation split


train_ratio = 0.70
validation_ratio = 0.20
test_ratio = 0.10

# performing train test split


x_train, x_test, y_train, y_test = train_test_split(
    train, target, test_size=1 - train_ratio)

# performing test validation split


x_val, x_test, y_val, y_test = train_test_split(
x_test, y_test, test_size=test_ratio/(test_ratio + validation_ratio))

# initializing all the base model objects with default parameters


model_1 = LinearRegression()
model_2 = xgb.XGBRegressor()
model_3 = RandomForestRegressor()

# training all the model on the train dataset

# training first model


model_1.fit(x_train, y_train)
val_pred_1 = model_1.predict(x_val)
test_pred_1 = model_1.predict(x_test)

# converting to dataframe
val_pred_1 = pd.DataFrame(val_pred_1)
test_pred_1 = pd.DataFrame(test_pred_1)
# training second model
model_2.fit(x_train, y_train)
val_pred_2 = model_2.predict(x_val)
test_pred_2 = model_2.predict(x_test)

# converting to dataframe
val_pred_2 = pd.DataFrame(val_pred_2)
test_pred_2 = pd.DataFrame(test_pred_2)

# training third model


model_3.fit(x_train, y_train)
val_pred_3 = model_3.predict(x_val)
test_pred_3 = model_3.predict(x_test)

# converting to dataframe
val_pred_3 = pd.DataFrame(val_pred_3)
test_pred_3 = pd.DataFrame(test_pred_3)

# concatenating validation dataset along with all the predicted validation data (meta features)
df_val = pd.concat([x_val, val_pred_1, val_pred_2, val_pred_3], axis=1)
df_test = pd.concat([x_test, test_pred_1, test_pred_2, test_pred_3], axis=1)

# making the final model using the meta features


final_model = LinearRegression()
final_model.fit(df_val, y_val)

# getting the final output


final_pred = final_model.predict(df_test)

#printing the mean squared error between real value and predicted value
print(mean_squared_error(y_test, final_pred))

OUTPUT :

4790

Type : Stacking

# importing utility modules


import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# importing machine learning models for prediction


from sklearn.ensemble import RandomForestRegressor
import xgboost as xgb
from sklearn.linear_model import LinearRegression

# importing stacking lib


from vecstack import stacking

# loading train data set in dataframe from train_data.csv file


df = pd.read_csv("train_data.csv")

# getting target data from the dataframe
target = df["target"]

# getting train data from the dataframe
train = df.drop("target", axis=1)

# Splitting between train data into training and validation dataset


X_train, X_test, y_train, y_test = train_test_split(
    train, target, test_size=0.20)

# initializing all the base model objects with default parameters


model_1 = LinearRegression()
model_2 = xgb.XGBRegressor()
model_3 = RandomForestRegressor()

# putting all base model objects in one list


all_models = [model_1, model_2, model_3]

# computing the stack features


s_train, s_test = stacking(all_models, X_train, X_test,
y_train, regression=True, n_folds=4)

# initializing the second-level model


final_model = model_1

# fitting the second level model with stack features


final_model = final_model.fit(s_train, y_train)

# predicting the final output using stacking


pred_final = final_model.predict(X_test)

# printing the mean squared error between real value and predicted value
print(mean_squared_error(y_test, pred_final))

OUTPUT :
4510

RESULT :

Thus the implementation of ensembling techniques using Python is executed successfully.

Ex. No : 9
Date :
Implement clustering algorithms

AIM :

To implement clustering algorithms using python

ALGORITHM:

 Randomly select 'c' cluster centers.


 Calculate the distance between each data point and cluster centers.
 Assign the data point to the cluster center whose distance from the cluster center is
minimum of all the cluster centers.
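
A minimal sketch of the assignment step described above (the points and centres are made-up
values; the record programs below use scikit-learn's KMeans on real data):

import numpy as np

points = np.array([[1.0, 1.0], [1.2, 0.8], [5.0, 4.0]])   # illustrative data points
centers = np.array([[1.0, 1.0], [5.0, 5.0]])               # two chosen cluster centres

# distance of every point to every centre, then assign each point to the nearest centre
dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
labels = np.argmin(dists, axis=1)
print(labels)   # [0 0 1]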

PROGRAM:

Type 1:

from sklearn.cluster import KMeans


import numpy as np
X = np.array([[1.713,1.586], [0.180,1.786], [0.353,1.240],
[0.940,1.566], [1.486,0.759], [1.266,1.106],[1.540,0.419],[0.459,1.799],[0.773,0.186]])
y=np.array([0,1,1,0,1,0,1,1,1])
kmeans = KMeans(n_clusters=3, random_state=0).fit(X,y)
print("The input data is ")
print("VAR1 \t VAR2 \t CLASS")
i=0
for val in X:
    print(val[0], "\t", val[1], "\t", y[i])
    i += 1
print("="*20)
# To get test data from the user
print("The Test data to predict ")
test_data = []
VAR1 = float(input("Enter Value for VAR1 :"))
VAR2 = float(input("Enter Value for VAR2 :"))
test_data.append(VAR1)
test_data.append(VAR2)
print("="*20)
print("The predicted Class is : ",kmeans.predict([test_data]))

TYPE 2:

import matplotlib.pyplot as plt
from sklearn import datasets
from sklearn.cluster import KMeans
import pandas as pd
import numpy as np

# import some data to play with
iris = datasets.load_iris()
X = pd.DataFrame(iris.data)
X.columns = ['Sepal_Length','Sepal_Width','Petal_Length','Petal_Width']
y = pd.DataFrame(iris.target)
y.columns = ['Targets']
# Build the K Means Model
model = KMeans(n_clusters=3)
model.fit(X)   # model.labels_ : gives the cluster number each sample belongs to

# Visualise the clustering results
plt.figure(figsize=(14,14))
colormap = np.array(['red', 'lime', 'black'])

# Plot the Original Classifications using Petal features
plt.subplot(2, 2, 1)

plt.scatter(X.Petal_Length, X.Petal_Width, c=colormap[y.Targets], s=40)
plt.title('Real Clusters')
plt.xlabel('Petal Length')
plt.ylabel('Petal Width')

# Plot the Models Classifications
plt.subplot(2, 2, 2)
plt.scatter(X.Petal_Length, X.Petal_Width, c=colormap[model.labels_], s=40)
plt.title('K-Means Clustering')
plt.xlabel('Petal Length')
plt.ylabel('Petal Width')
# General EM for GMM
from sklearn import preprocessing
# transform your data such that its distribution will have a # mean value 0 and standard deviation
of 1.
scaler = preprocessing.StandardScaler()
scaler.fit(X)
xsa = scaler.transform(X)
xs = pd.DataFrame(xsa, columns = X.columns)
from sklearn.mixture import GaussianMixture
gmm = GaussianMixture(n_components=3)
gmm.fit(xs)
gmm_y = gmm.predict(xs)
plt.subplot(2, 2, 3)
plt.scatter(X.Petal_Length, X.Petal_Width, c=colormap[gmm_y], s=40)
plt.title('GMM Clustering')
plt.xlabel('Petal Length')
plt.ylabel('Petal Width')
print('Observation: The GMM using EM algorithm based clustering matched the true labels more
closely than the Kmeans.')

OUTPUT:

RESULT :

Thus the implementation of clustering algorithms using python is executed successfully



Ex. No : 10
Date :
Implement EM for Bayesian networks

AIM :

To implement EM for Bayesian networks using python

ALGORITHM:

 Load and display the image with Matplotlib, read it as a pixel array, summarize its shape and normalize the data.
 Initialize the mixture parameters (means, variances and mixing weights) for K clusters.
 E-step: compute the responsibilities, using the log-sum-exp trick (find the maximum of f_k for each
example and subtract it) to avoid underflow/overflow.
 M-step: update the mixing weights, means and variances from the responsibilities.
 Check that the algorithm is correct (the log-likelihood must not decrease), check if the convergence
criterion is met, and update a 'safety valve' counter so the program does not loop for an eternity.
 Reconstruct the segmented image from the cluster means and calculate the reconstruction error.
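
The log-sum-exp trick mentioned above subtracts the maximum of the log-domain scores before
exponentiating, then adds it back outside the logarithm. A small sketch with made-up scores
showing why this avoids underflow:

import numpy as np

f = np.array([-1000.0, -1001.0, -1002.0])    # illustrative log-domain scores

naive = np.log(np.sum(np.exp(f)))            # -inf, because exp(-1000) underflows to 0

m = np.max(f)                                # subtract the maximum first...
stable = m + np.log(np.sum(np.exp(f - m)))   # ...then add it back outside the log

print(naive, stable)                         # -inf   -999.59...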

PROGRAM:

from PIL import Image


import numpy as np
# load and display an image with Matplotlib
from matplotlib import image
from matplotlib import pyplot
import time

# calculate f(e^f) in order to use the logsumexp trick and avoid underflow/overflow
def f_logsumexp(x, mean, var, pi):
    N = x.shape[0]
    K = mean.shape[0]
    trick = np.zeros((N, K))
    for k in range(K):

subtraction = np.subtract(x, mean[k])


arg1 = -1.0 / (2 * var[k]) * np.power(subtraction, 2)
arg2 = -0.5*np.log(2*np.pi*var[k])
arg3 = np.sum(arg2 + arg1, axis=1) # before sum 1xD -> 1x1
arithmitis = np.log(pi[k]) + arg3
trick[:, k] = arithmitis
# find max of all fk(trick[k]) for each example
m = trick.max(axis=1) # Nx1
m = m.reshape((m.shape[0], 1))
return trick, m

# N -> number of examples
# K -> number of clusters
def update_gamma(f, m):
f = f-m
f = np.exp(f) # NxK
par = np.sum(f, axis=1) # Nx1
par = par.reshape((par.shape[0],1))
result = np.divide(f, par) # NxK
return result

# return matrix with dimensions KxD


def update_mean(gamma, x):
arith = np.dot(np.transpose(gamma), x) # (KxN)*(NxD)-> KxD
paran = np.sum(gamma, axis=0) # Kx1
paran = paran.reshape((paran.shape[0], 1))
result = arith/paran # KxD
return result

# return vector with dimensions 1xK


def update_variance(gamma, x, mean):
D = x.shape[1]
K = mean.shape[0]
arith = np.zeros((K, 1))
for k in range(K):
gamma_k = gamma[:, k]
gamma_k = gamma_k.reshape((gamma_k.shape[0], 1))
subtraction = np.subtract(x, mean[k]) # NxD
# ((Nx1).*(NxD)-> NxD->sum row wise -> 1xN -> sum -> 1x1
sub = np.sum(np.sum(np.multiply(np.power(subtraction, 2), gamma_k), axis=1))
arith[k] = sub
paran = D * np.sum(gamma, axis=0) # Kx1
paran = paran.reshape((K, 1)) # Kx1

return arith/paran

def update_loglikehood(f, m):


f = f - m # NxK
arg1 = np.sum(np.exp(f), axis=1) # Nx1
arg1 = np.log(arg1) # Nx1
arg1 = arg1.reshape((arg1.shape[0], 1))
arg2 = arg1+m
return np.sum(arg2, axis=0) # 1x1

def init_parameters(D, K):
    mean = np.random.rand(K, D)
    var = np.random.uniform(low=0.1, high=1, size=K)  # Kx1
    val = 1 / K
    pi = np.full(K, val)  # Kx1
    return mean, var, pi

# pi here is the mixing weight, not np.pi = 3.14... - it is a different variable
def EM(x, K, tol):
    # counter to count iterations and stop after some, so our program doesn't run for an eternity
    counter = 1
    # num of examples (here pixels)
    N = x.shape[0]
    # num of dimensions of each example (here RGB channels)
    D = x.shape[1]
# init parameters
mean, var, pi = init_parameters(D, K)
# logsumexp trick
f, m = f_logsumexp(x, mean, var, pi)
loglikehood = update_loglikehood(f, m)
while counter <= 400:
print('Iteration: ', counter)
# E-step
gamma = update_gamma(f, m) # NxK
# M-step
# update pi
pi = (np.sum(gamma, axis=0))/N
# update mean
mean = update_mean(gamma, x)
# update variance(var)
var = update_variance(gamma, x, mean)
old_loglikehood = loglikehood

# logsumexp trick
f, m = f_logsumexp(x, mean, var, pi)
loglikehood = update_loglikehood(f, m)
# check if algorithm is correct
if loglikehood-old_loglikehood < 0:
print('Error found in EM algorithm')
print('Number of iterations: ', counter)
exit()
# check if the convergence criterion is met
if abs(loglikehood-old_loglikehood) < tol:
print('Convergence criterion is met')
print('Total iterations: ', counter)
return mean, gamma
# update 'safety valve' in order to not loop for an eternity
counter += 1
return mean, gamma

def error_reconstruction(x, means_of_data):


N = x.shape[0]
x = x*255
x = x.astype(np.uint8)
diff = x-means_of_data
sum1 = np.sqrt(np.sum(np.power(diff, 2)))
error = sum1/N
return error

def reconstruct_image(x, mean, gamma, K):


D = mean.shape[1]
# denormalize values
mean = mean * 255
# set data-type uint8 so every data is in set [0,255]
mean = mean.astype(np.uint8)
    max_likelihood = np.argmax(gamma, axis=1)  # 1xN
    # matrix that has, for each example (pixel), the means of the dimensions (R,G,B) of the
    # cluster k with the highest a posteriori probability gamma. This matrix is our new data (pixels).
    means_of_data = np.array([mean[i] for i in max_likelihood])  # NxD
    # set data-type uint8 so every value is in the range [0, 255]
    means_of_data = means_of_data.astype(np.uint8)
# calculate error
error = error_reconstruction(x, means_of_data)
print('Error of reconstruction:', error)
means_of_data = means_of_data.reshape((height, width, D))
segmented_image = Image.fromarray(means_of_data, mode='RGB')

name = 'Segmented_Images\segmented_image_'+str(K)+'.jpg'
segmented_image.save(name)

def run(x, cluster, tol):
    for K in cluster:
        print('------ Cluster: ' + str(K) + ' ------')
        start_time = time.time()
        mean, gamma = EM(x, K, tol)
        end_time = time.time()
        em_time = end_time - start_time
        print("Time of execution of EM for clusters/k = %s is %s seconds " % (K, em_time))
        reconstruct_image(x, mean, gamma, K)
tolerance = 1e-6
clusters = [1, 2, 4, 8, 16, 32, 64]
path = 'Image\im.jpg'
# load image as pixel array
data = image.imread(path)
data = np.asarray(data)
# summarize shape of the pixel array
print("Dimensions of image: ", data.shape)
(height, width, d) = data.shape
max_value = np.amax(data)
# display the array of pixels as an image
pyplot.imshow(data)
pyplot.show()
# N = number of data set (Here height*width of image)
# D = dimensions of each data (Here R,G,B)
dataset = data.reshape((height*width, d))  # NxD
# normalize data
dataset = dataset / max_value
run(dataset, clusters, tolerance)

Input: Output:

RESULT :

Thus the implementation of EM for Bayesian networks using python is executed successfully

Ex. No : 11
Date :
Build simple NN models

AIM :

To implement NN models using python

ALGORITHM:

 Create an approximation model.


 Configure data set.
 Set network architecture.
 Train neural network.
 Improve generalization performance.
 Test results.
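
As a minimal sketch of one training step of the single sigmoid neuron built in the program
below (the input, weights and target here are made-up numbers, not the program's training
data):

import numpy as np

x = np.array([0.0, 1.0, 1.0])    # one illustrative input example
w = np.array([0.5, -0.2, 0.1])   # current synaptic weights
target = 1.0

out = 1 / (1 + np.exp(-np.dot(x, w)))     # forward pass through the sigmoid
error = target - out                      # error rate
adjust = x * error * out * (1 - out)      # adjustment: error scaled by the sigmoid derivative
w = w + adjust                            # updated weights

print(round(out, 3), np.round(w, 3))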

PROGRAM:

import numpy as np

class NeuralNetwork():

    def __init__(self):
        # seeding for random number generation
        np.random.seed(1)

        # converting weights to a 3 by 1 matrix with values from -1 to 1 and mean of 0
        self.synaptic_weights = 2 * np.random.random((3, 1)) - 1

def sigmoid(self, x):


#applying the sigmoid function
return 1 / (1 + np.exp(-x))

def sigmoid_derivative(self, x):


#computing derivative to the Sigmoid function
return x * (1 - x)

def train(self, training_inputs, training_outputs, training_iterations):

#training the model to make accurate predictions while adjusting weights continually
for iteration in range(training_iterations):
#siphon the training data via the neuron
output = self.think(training_inputs)
68

#computing error rate for back-propagation


error = training_outputs - output
69

#performing weight adjustments


adjustments = np.dot(training_inputs.T, error * self.sigmoid_derivative(output))

self.synaptic_weights += adjustments

def think(self, inputs):


#passing the inputs via the neuron to get output
#converting values to floats

inputs = inputs.astype(float)
output = self.sigmoid(np.dot(inputs, self.synaptic_weights))
return output

if name == " main ":

#initializing the neuron class


neural_network = NeuralNetwork()

print("Beginning Randomly Generated Weights: ")


print(neural_network.synaptic_weights)

#training data consisting of 4 examples--3 input values and 1 output


training_inputs = np.array([[0,0,1],
[1,1,1],
[1,0,1],
[0,1,1]])

training_outputs = np.array([[0,1,1,0]]).T

#training taking place


neural_network.train(training_inputs, training_outputs, 15000)

print("Ending Weights After Training: ")


print(neural_network.synaptic_weights)

user_input_one = str(input("User Input One: "))


user_input_two = str(input("User Input Two: "))
user_input_three = str(input("User Input Three: "))

print("Considering New Situation: ", user_input_one, user_input_two, user_input_three)


print("New Output data: ")
print(neural_network.think(np.array([user_input_one, user_input_two, user_input_three])))
print("Wow, we did it!")

Output:

Type 2:

import numpy as np

X = np.array(([2, 9], [1, 5], [3, 6]), dtype=float)
y = np.array(([92], [86], [89]), dtype=float)
X = X / np.amax(X, axis=0)  # maximum of X array longitudinally
y = y / 100

# Sigmoid Function
def sigmoid(x):
    return (1 / (1 + np.exp(-x)))

# Derivative of Sigmoid Function
def derivatives_sigmoid(x):
    return x * (1 - x)

# Variable initialization
epoch = 7000              # Setting training iterations
lr = 0.1                  # Setting learning rate
inputlayer_neurons = 2    # number of features in data set
hiddenlayer_neurons = 3   # number of hidden layer neurons
output_neurons = 1        # number of neurons at output layer

# weight and bias initialization
# draws a random range of numbers uniformly of dim x*y
wh = np.random.uniform(size=(inputlayer_neurons, hiddenlayer_neurons))
bh = np.random.uniform(size=(1, hiddenlayer_neurons))
wout = np.random.uniform(size=(hiddenlayer_neurons, output_neurons))
bout = np.random.uniform(size=(1, output_neurons))

for i in range(epoch):
    # Forward Propagation
    hinp1 = np.dot(X, wh)
    hinp = hinp1 + bh
    hlayer_act = sigmoid(hinp)
    outinp1 = np.dot(hlayer_act, wout)
    outinp = outinp1 + bout
    output = sigmoid(outinp)

    # Backpropagation
    EO = y - output
    outgrad = derivatives_sigmoid(output)
    d_output = EO * outgrad
    EH = d_output.dot(wout.T)
    hiddengrad = derivatives_sigmoid(hlayer_act)  # how much hidden layer wts contributed to error
    d_hiddenlayer = EH * hiddengrad
    wout += hlayer_act.T.dot(d_output) * lr  # dot product of next-layer error and current-layer output
    bout += np.sum(d_output, axis=0, keepdims=True) * lr
    wh += X.T.dot(d_hiddenlayer) * lr
    # bh += np.sum(d_hiddenlayer, axis=0, keepdims=True) * lr

print("Input: \n" + str(X))
print("Actual Output: \n" + str(y))
print("Predicted Output: \n", output)

Output:

RESULT :

Thus the implementation of simple NN models using python is executed successfully.



Ex. No : 12
Build deep learning NN models
Date :

AIM :

To implement deep learning NN models using python


ALGORITHM:

Step 1: Initialize the weights and biases
Step 2: Forward propagation module
Step 3: Define the cost function
Step 4: Set back propagation
Step 5: Update parameters with gradient descent
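Step 5 applies, for every layer l, the rule W[l] = W[l] - learning_rate * dW[l] and b[l] = b[l] - learning_rate * db[l]. A minimal sketch of that update is given below; the names params, grads and alpha are assumed here for illustration, and the full version appears as update_parameters() in the program that follows:

def gradient_step(params, grads, alpha):
    # params holds entries like 'W1', 'b1', 'W2', ...; grads holds 'dW1', 'db1', 'dW2', ...
    for key in params:
        params[key] = params[key] - alpha * grads['d' + key]
    return params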

PROGRAM:

import time
import numpy as np
import h5py
import matplotlib.pyplot as plt
import scipy
from PIL import Image
from scipy import ndimage


def initialize_parameters_deep(layer_dims):
    # 0th layer is the input layer with number
    # of columns stored in layer_dims.
    parameters = {}

    # number of layers in the network
    L = len(layer_dims)

    for l in range(1, L):
        parameters['W' + str(l)] = np.random.randn(layer_dims[l], layer_dims[l - 1]) * 0.01
        parameters['b' + str(l)] = np.zeros((layer_dims[l], 1))

    return parameters


def linear_forward(A_prev, W, b):
    # cache is stored to be used in backward propagation module
    Z = np.dot(W, A_prev) + b
    cache = (A_prev, W, b)
    return Z, cache


def sigmoid(Z):
    A = 1 / (1 + np.exp(-Z))
    return A, {'Z': Z}


def tanh(Z):
    A = np.tanh(Z)
    return A, {'Z': Z}


def linear_activation_forward(A_prev, W, b, activation):
    # cache is stored to be used in backward propagation module
    if activation == "sigmoid":
        Z, linear_cache = linear_forward(A_prev, W, b)
        A, activation_cache = sigmoid(Z)
    elif activation == "tanh":
        Z, linear_cache = linear_forward(A_prev, W, b)
        A, activation_cache = tanh(Z)
    cache = (linear_cache, activation_cache)

    return A, cache

def L_model_forward(X, parameters):
    """
    Arguments:
    X -- data, numpy array of shape (input size, number of examples)
    parameters -- output of initialize_parameters_deep()

    Returns:
    AL -- last post-activation value
    caches -- list of caches containing:
        every cache of linear_activation_forward()
        (there are L-1 of them, indexed from 0 to L-1)
    """
    caches = []
    A = X

    # number of layers in the neural network
    L = len(parameters) // 2

    # Implement [LINEAR -> TANH]*(L-1). Add "cache" to the "caches" list.
    for l in range(1, L):
        A_prev = A
        A, cache = linear_activation_forward(A_prev,
                                             parameters['W' + str(l)],
                                             parameters['b' + str(l)], 'tanh')
        caches.append(cache)

    # Implement LINEAR -> SIGMOID. Add "cache" to the "caches" list.
    AL, cache = linear_activation_forward(A, parameters['W' + str(L)],
                                          parameters['b' + str(L)], 'sigmoid')
    caches.append(cache)

    return AL, caches


def compute_cost(AL, Y):
    """
    Implement the cost function defined by the equation.
    """
    m = Y.shape[1]
    cost = (-1 / m) * (np.dot(np.log(AL), Y.T) + np.dot(np.log(1 - AL), (1 - Y).T))

    # To make sure the cost's shape is what we expect
    # (e.g. this turns [[20]] into 20).
    cost = np.squeeze(cost)

    return cost


def linear_backward(dZ, cache):
    A_prev, W, b = cache
    m = A_prev.shape[1]
    dW = (1 / m) * np.dot(dZ, A_prev.T)
    db = (1 / m) * np.sum(dZ, axis=1, keepdims=True)
    dA_prev = np.dot(W.T, dZ)

    return dA_prev, dW, db


def sigmoid_backward(dA, activation_cache):
    Z = activation_cache['Z']
    A, _ = sigmoid(Z)
    return dA * (A * (1 - A))  # A*(1 - A) is the derivative of the sigmoid function


def tanh_backward(dA, activation_cache):
    Z = activation_cache['Z']
    A, _ = tanh(Z)
    return dA * (1 - np.power(A, 2))  # 1 - tanh(Z)^2 is the derivative of tanh
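
# Note (added): the program calls linear_activation_backward() below but the listing
# does not define it; this is a minimal sketch of the missing helper, written to be
# consistent with the forward/backward functions already defined above.
def linear_activation_backward(dA, cache, activation):
    linear_cache, activation_cache = cache
    if activation == "sigmoid":
        dZ = sigmoid_backward(dA, activation_cache)
    elif activation == "tanh":
        dZ = tanh_backward(dA, activation_cache)
    return linear_backward(dZ, linear_cache)
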
def L_model_backward(AL, Y, caches):
    """
    AL -- probability vector, output of the forward propagation (L_model_forward())
    Y -- true "label" vector (containing 0 if non-cat, 1 if cat)
    caches -- list of caches containing:
        every cache of linear_activation_forward() with "tanh"
        (it's caches[l], for l in range(L-1) i.e. l = 0...L-2)
        the cache of linear_activation_forward() with "sigmoid" (it's caches[L-1])

    Returns:
    grads -- A dictionary with the gradients
        grads["dA" + str(l)] = ...
        grads["dW" + str(l)] = ...
        grads["db" + str(l)] = ...
    """
    grads = {}
    L = len(caches)  # the number of layers
    m = AL.shape[1]
    Y = Y.reshape(AL.shape)  # after this line, Y is the same shape as AL

    # Initializing the backpropagation:
    # derivative of cost with respect to AL
    dAL = - (np.divide(Y, AL) - np.divide(1 - Y, 1 - AL))

    # Lth layer (SIGMOID -> LINEAR) gradients. Inputs: "dAL, current_cache".
    # Outputs: grads["dAL-1"], grads["dWL"], grads["dbL"]
    current_cache = caches[L - 1]
    grads["dA" + str(L - 1)], grads["dW" + str(L)], grads["db" + str(L)] = \
        linear_activation_backward(dAL, current_cache, 'sigmoid')

    # Loop from l = L-2 to l = 0
    for l in reversed(range(L - 1)):
        current_cache = caches[l]
        dA_prev_temp, dW_temp, db_temp = linear_activation_backward(
            grads['dA' + str(l + 1)], current_cache, 'tanh')
        grads["dA" + str(l)] = dA_prev_temp
        grads["dW" + str(l + 1)] = dW_temp
        grads["db" + str(l + 1)] = db_temp

    return grads

def update_parameters(parameters, grads, learning_rate):
    L = len(parameters) // 2  # number of layers in the neural network

    # Update rule for each parameter. Use a for loop.
    for l in range(L):
        parameters["W" + str(l + 1)] = parameters["W" + str(l + 1)] - learning_rate * grads['dW' + str(l + 1)]
        parameters["b" + str(l + 1)] = parameters['b' + str(l + 1)] - learning_rate * grads['db' + str(l + 1)]

    return parameters

def L_layer_model(X, Y, layers_dims, learning_rate=0.0075,
                  num_iterations=3000, print_cost=False):
    """
    Arguments:
    X -- data, numpy array of shape (num_px * num_px * 3, number of examples)
    Y -- true "label" vector (containing 0 if cat, 1 if non-cat),
         of shape (1, number of examples)
    layers_dims -- list containing the input size and each layer size,
                   of length (number of layers + 1).
    learning_rate -- learning rate of the gradient descent update rule
    num_iterations -- number of iterations of the optimization loop
    print_cost -- if True, it prints the cost every 100 steps

    Returns:
    parameters -- parameters learned by the model. They can then be used to predict.
    """
    np.random.seed(1)
    costs = []  # keep track of cost

    parameters = initialize_parameters_deep(layers_dims)

    # Loop (gradient descent)
    for i in range(0, num_iterations):

        # Forward propagation: [LINEAR -> TANH]*(L-1) -> LINEAR -> SIGMOID.
        AL, caches = L_model_forward(X, parameters)

        # Compute cost.
        cost = compute_cost(AL, Y)

        # Backward propagation.
        grads = L_model_backward(AL, Y, caches)

        # Update parameters.
        parameters = update_parameters(parameters, grads, learning_rate)

        # Print the cost every 100 training examples
        if print_cost and i % 100 == 0:
            print("Cost after iteration % i: % f" % (i, cost))
        if print_cost and i % 100 == 0:
            costs.append(cost)

    # plot the cost
    plt.plot(np.squeeze(costs))
    plt.ylabel('cost')
    plt.xlabel('iterations (per hundreds)')
    plt.title("Learning rate =" + str(learning_rate))
    plt.show()

    return parameters

def predict(parameters, path_image):
    # num_px (the image side length used during training) is assumed to be defined earlier
    my_image = path_image
    image = np.array(ndimage.imread(my_image, flatten=False))
    my_image = scipy.misc.imresize(image, size=(num_px, num_px)).reshape((num_px * num_px * 3, 1))
    my_image = my_image / 255.

    output, cache = L_model_forward(my_image, parameters)
    output = np.squeeze(output)
    prediction = round(output)
    if prediction == 1:
        label = "Cat picture"
    else:
        label = "Non-Cat picture"  # if the model is trained to recognize a cat image
    print("y = " + str(prediction) + ", your L-layer model predicts a \"" + label + "\"")

# Provided layers_dims = [12288, 20, 7, 5, 1], when this model is trained with an appropriate
# amount of training data it is up to 80% accurate on test data.
# The parameters below (abridged) were found after training with an appropriate amount of training data:

{'W1': array([[ 0.01672799, -0.00641608, -0.00338875, ..., -0.00685887, -0.00593783,  0.01060475],
       [ 0.01395808,  0.00407498, -0.0049068 , ...,  0.01317046,  0.00221326,  0.00930175],
       ...,
       [-0.01770891, -0.0067836 ,  0.00756873, ...,  0.01730701,  0.01297081, -0.00322241]]),
 'b1': array([[ 3.85542520e-03], [ 8.18087056e-03], [ 6.52138546e-03], ..., [-2.40912130e-03]]),
 'W2': array([[ 2.02109232e-01, -3.08645240e-01, -3.77620591e-01, ...,  4.20697960e-01,  1.08551174e-01, -2.18735332e-01],
       [ 3.57091131e-01, -1.40997155e-01,  3.70857247e-01, ...,  1.77933527e-01,  1.54736463e-01, -7.26815274e-02],
       ...,
       [ 5.92692802e-02,  8.95374287e-02, ...,  9.28388267e-02, -1.16167106e-01]]),
 'b2': array([[-0.00088887], [ 0.02357712], [ 0.01858614], [-0.00567557], [ 0.00636179], [ 0.02362429], [-0.00173074]]),
 'W3': array([[ 0.20939786,  0.21977478,  0.77135171, -1.07520777, -0.64307173, -0.24097649, -0.15626735],
       [-0.57997618,  0.30851841, -0.03802324, -0.13489975,  0.23488207,  0.76248961, -0.34515092],
       [ 0.15990295,  0.5163969 ,  0.15284381,  0.42790606, -0.05980168,  0.87865156, -0.01031899],
       [ 0.52908282,  0.93882471,  1.23044256, -0.01481286,  0.41024244,  0.18731983, -0.01414658],
       [-0.96753783, -0.30492002,  0.54060558, -0.18776932, -0.39245146,  0.20654634, -0.58863038]]),
 'b3': array([[ 0.8623361 ], [-0.00826002], [-0.01151116], [-0.06844291], [-0.00833715]]),
 'W4': array([[-0.83045967,  0.18418824,  0.85885352,  1.41024115,  0.12713131]]),
 'b4': array([[-1.73123633]])}

my_image = "https://www.pexels.com/photo/adorable-animal-blur-cat-617278/"


predict(parameters, my_image)
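
Before predict() can be called as above, the network has to be trained with L_layer_model. A minimal, self-contained sketch of that call is given below; the random train_x / train_y arrays here are assumed stand-ins for a real, preprocessed cat / non-cat dataset and are not part of the recorded program:

np.random.seed(0)
train_x = np.random.rand(12288, 10)           # 10 flattened 64x64x3 images (stand-in data)
train_y = (np.random.rand(1, 10) > 0.5) * 1   # random 0/1 labels (stand-in data)
layers_dims = [12288, 20, 7, 5, 1]            # 4-layer model described above
parameters = L_layer_model(train_x, train_y, layers_dims,
                           learning_rate=0.0075, num_iterations=200, print_cost=True)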

Output with learnt parameters:

y = 1, your L-layer model predicts a Cat picture.

RESULT :

Thus the implementation of deep learning NN models using python is executed successfully.
