1.A IMPLEMENTATION OF UNINFORMED SEARCH ALGORITHMS (BFS)
AIM:
To implement the uninformed search algorithm BFS.
ALGORITHM:
Step 1. Initialize the graph, visited list, and queue.
Step 2. Mark 'A' as visited and enqueue it.
Step 3. While there are nodes in the queue:
- Step 3.1. Take out the first node from the queue.
- Step 3.2. Print the node.
- Step 3.3. Look at its neighbors:
- Step 3.3.1. If they haven't been visited, mark them as visited and put them in the queue.
Step 4. Repeat step 3 until the queue is empty.
Step 5. When all reachable nodes are visited, stop the algorithm.
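The steps above can also be sketched with `collections.deque`, which removes from the front in O(1) time (the list-based `pop(0)` in the program below is O(n)); the graph and start node here mirror the program:

```python
from collections import deque

graph = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F'],
         'D': [], 'E': ['F'], 'F': []}

def bfs_order(graph, start):
    visited = [start]        # nodes already discovered, in visit order
    queue = deque([start])   # FIFO frontier
    while queue:
        s = queue.popleft()  # O(1) dequeue
        for neighbour in graph[s]:
            if neighbour not in visited:
                visited.append(neighbour)
                queue.append(neighbour)
    return visited

print(bfs_order(graph, 'A'))  # → ['A', 'B', 'C', 'D', 'E', 'F']
```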
PROGRAM:
graph = {
'A' : ['B','C'],
'B' : ['D', 'E'],
'C' : ['F'],
'D' : [],
'E' : ['F'],
'F' : []
}
visited = [] # List to keep track of visited nodes.
queue = [] #Initialize a queue
def bfs(visited, graph, node):
    visited.append(node)
    queue.append(node)
    while queue:
        s = queue.pop(0)  # dequeue the first node
        print(s, end=" ")
        for neighbour in graph[s]:
            if neighbour not in visited:
                visited.append(neighbour)
                queue.append(neighbour)

# Driver Code
bfs(visited, graph, 'A')
OUTPUT:
A B C D E F
RESULT:
Thus the program for the implementation of the uninformed search algorithm BFS has been executed
successfully and verified.
1.B IMPLEMENTATION OF UNINFORMED SEARCH ALGORITHMS (DFS)
AIM:
To implement the uninformed search algorithm DFS.
ALGORITHM:
Step 1. Initialize the graph, create an empty set to track visited nodes, and define the DFS function.
Step 2. Start DFS from the initial node '5'.
Step 3. Visit the current node:
If it hasn't been visited:
Print the node.
Add it to the set of visited nodes.
Step 4. Explore its neighbors recursively:
For each neighbor of the current node:
If the neighbor hasn't been visited:
Recursively call DFS on that neighbor.
Step 5. Repeat steps 3-4 until all reachable nodes are visited.
Step 6. Finish DFS.
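The recursion above can also be written iteratively with an explicit stack; pushing neighbours in reverse order reproduces the recursive visiting order (a sketch using the same graph as the program below):

```python
graph = {'5': ['3', '7'], '3': ['2', '4'], '7': ['8'],
         '2': [], '4': ['8'], '8': []}

def dfs_iterative(graph, start):
    visited = []
    stack = [start]  # explicit stack instead of the call stack
    while stack:
        node = stack.pop()
        if node not in visited:
            visited.append(node)
            # reversed() so the leftmost neighbour is explored first
            stack.extend(reversed(graph[node]))
    return visited

print(dfs_iterative(graph, '5'))  # → ['5', '3', '2', '4', '8', '7']
```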
PROGRAM:
graph = {
'5' : ['3','7'],
'3' : ['2', '4'],
'7' : ['8'],
'2' : [],
'4' : ['8'],
'8' : []
}
visited = set() # Set to keep track of visited nodes of graph.
def dfs(visited, graph, node):  # function for dfs
    if node not in visited:
        print(node)
        visited.add(node)
        for neighbour in graph[node]:
            dfs(visited, graph, neighbour)

# Driver Code
print("Following is the Depth-First Search")
dfs(visited, graph, '5')
OUTPUT:
Following is the Depth-First Search
5
3
2
4
8
7
RESULT:
Thus the program for the implementation of the uninformed search algorithm DFS has been executed
successfully and verified.
2.A INFORMED SEARCH (A*)
AIM:
To write a Python program for the informed search algorithm A*.
ALGORITHM:
Step 1. Define Problem: Identify the initial state, goal state, and necessary parameters.
Step 2. Initialize Data Structures: Set up data structures such as priority queues and sets.
Step 3. Main Loop: While there are nodes to explore:
Pop the node with the lowest total cost.
Check if it's the goal state; if yes, return the path.
Expand the node and add its successors to the priority queue.
Step 4. Extract Path: Once the goal state is reached, extract the path from the initial state to the goal state.
Step 5. Define Heuristic Function: Estimate the cost from a state to the goal.
Step 6. Define Goal Test Function: Determine if a state is the goal state.
Step 7. Define Successor Function: Generate possible actions and resulting states.
Step 8. Perform the Search: Execute the A* algorithm.
Step 9. Output Result: Display the path found by the A* algorithm.
Step 10. Stop
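For the 5x5 grid used in the program below, the Manhattan-distance heuristic of Step 5 can be checked by hand: from the start (0, 0) to the goal (4, 4) it is |4 - 0| + |4 - 0| = 8:

```python
goal_state = (4, 4)

def heuristic(state):
    # Manhattan distance to the goal; admissible for 4-connected grid moves
    x, y = state
    goal_x, goal_y = goal_state
    return abs(goal_x - x) + abs(goal_y - y)

print(heuristic((0, 0)))  # → 8
print(heuristic((2, 3)))  # → 3
```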
PROGRAM :
import heapq
class Node:
    def __init__(self, state, parent=None, action=None, cost=0, heuristic=0):
        self.state = state
        self.parent = parent
        self.action = action
        self.cost = cost
        self.heuristic = heuristic

    def __lt__(self, other):
        return (self.cost + self.heuristic) < (other.cost + other.heuristic)

def a_star(initial_state, goal_test, successors, heuristic):
    initial_node = Node(state=initial_state)
    frontier = []
    heapq.heappush(frontier, initial_node)
    explored = set()
    while frontier:
        current_node = heapq.heappop(frontier)
        if goal_test(current_node.state):
            return extract_path(current_node)
        explored.add(current_node.state)
        for action, state, cost in successors(current_node.state):
            if state not in explored:
                new_cost = current_node.cost + cost
                new_node = Node(state=state, parent=current_node, action=action,
                                cost=new_cost, heuristic=heuristic(state))
                heapq.heappush(frontier, new_node)

def extract_path(node):
    path = []
    while node:
        path.append((node.action, node.state))
        node = node.parent
    return path[::-1]
initial_state = (0, 0)
goal_state = (4, 4)
def successors(state):
    x, y = state
    possible_actions = [(1, 0), (0, 1), (-1, 0), (0, -1)]
    for dx, dy in possible_actions:
        new_x, new_y = x + dx, y + dy
        if 0 <= new_x <= 4 and 0 <= new_y <= 4:
            yield (dx, dy), (new_x, new_y), 1

def heuristic(state):
    x, y = state
    goal_x, goal_y = goal_state
    return abs(goal_x - x) + abs(goal_y - y)

def goal_test(state):
    return state == goal_state

path = a_star(initial_state, goal_test, successors, heuristic)
print("Path found:", path)
OUTPUT:
Path found: [(None, (0, 0)), ((0, 1), (0, 1)), ((0, 1), (0, 2)), ((1, 0), (1, 2)), ((0, 1), (1, 3)), ((1, 0), (2, 3)), ((0, 1), (2, 4)), ((1, 0), (3, 4)), ((1, 0), (4, 4))]
RESULT:
Thus the program for the implementation of the informed search algorithm A* has been executed
successfully and verified.
2.B INFORMED SEARCH (Memory-Bounded A*)
AIM:
To write a Python program for informed search (memory-bounded A*).
ALGORITHM:
Step 1. Define the problem by identifying the initial state, goal state, and necessary parameters.
Step 2. Initialize data structures such as priority queues and sets.
Step 3. In the main loop, while there are nodes to explore:
Pop the node with the lowest total cost.
Check if it's the goal state; if yes, return the path.
Expand the node and add its successors to the priority queue.
Step 4. Extract the path once the goal state is reached from the initial state to the goal state.
Step 5. Define a heuristic function to estimate the cost from a state to the goal.
Step 6. Define a goal test function to determine if a state is the goal state.
Step 7. Define a successor function to generate possible actions and resulting states.
Step 8. Perform the A* search algorithm.
Step 9. Output the result by displaying the path found by the A* algorithm.
Step 10. Stop the algorithm.
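The pruning in the main loop relies on keeping the frontier ordered by f(n) = g(n) + h(n); a minimal sketch of that ordering using the same Manhattan heuristic and goal (4, 4), with invented g-values:

```python
goal = (4, 4)

def h(state):
    # Manhattan distance to the goal
    return abs(goal[0] - state[0]) + abs(goal[1] - state[1])

# frontier entries are (state, g) pairs, as in the program below
frontier = [((0, 0), 0), ((4, 3), 9), ((2, 2), 2)]
frontier.sort(key=lambda x: x[1] + h(x[0]))  # order by f = g + h

print(frontier)  # (2, 2) comes first: f = 2 + 4 = 6
```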
PROGRAM:
import networkx as nx
def memory_bounded_a_star(initial_state, goal_test, successors, heuristic, memory_limit):
    # memory_limit is accepted as a parameter but not enforced in this version
    graph = nx.DiGraph()
    graph.add_node(initial_state, cost=0, heuristic=heuristic(initial_state), parent=None)
    frontier = [(initial_state, 0)]
    best_solution = float('inf')
    while frontier:
        current_state, current_cost = frontier.pop(0)
        if current_cost + heuristic(current_state) >= best_solution:
            continue
        if goal_test(current_state):
            best_solution = min(best_solution, current_cost)
            continue
        for action, state, cost in successors(current_state):
            total_cost = current_cost + cost
            if total_cost < best_solution:
                if state not in graph.nodes or total_cost < graph.nodes[state]['cost']:
                    graph.add_node(state, cost=total_cost, heuristic=heuristic(state), parent=current_state)
                    graph.add_edge(current_state, state, action=action, cost=cost)
                    frontier.append((state, total_cost))
        frontier.sort(key=lambda x: x[1] + heuristic(x[0]))
    return best_solution
initial_state = (0, 0)
goal_state = (4, 4)
def successors(state):
    x, y = state
    possible_actions = [(1, 0), (0, 1), (-1, 0), (0, -1)]
    for dx, dy in possible_actions:
        new_x, new_y = x + dx, y + dy
        if 0 <= new_x <= 4 and 0 <= new_y <= 4:
            yield (dx, dy), (new_x, new_y), 1

def heuristic(state):
    x, y = state
    goal_x, goal_y = goal_state
    return abs(goal_x - x) + abs(goal_y - y)

def goal_test(state):
    return state == goal_state

memory_limit = 10000
best_solution = memory_bounded_a_star(initial_state, goal_test, successors, heuristic, memory_limit)
print("Best solution found:", best_solution)
OUTPUT:
Best solution found: 8
RESULT:
Thus the program for the implementation of informed search (memory-bounded A*) has been executed
successfully and verified.
3. NAÏVE BAYES MODEL
AIM:
To write a program for the naïve Bayes model.
ALGORITHM:
Step 1. Load the dataset from the CSV file.
Step 2. Separate the features (X) and the class labels (y).
Step 3. Split the data into training and testing sets.
Step 4. Initialize a Gaussian naïve Bayes classifier.
Step 5. Train the classifier on the training set.
Step 6. Predict the class labels for the test set.
Step 7. Compute and print the accuracy of the model.
Step 8. Stop.
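A minimal sanity check of the same GaussianNB API on a toy, clearly separable dataset (the feature values and labels here are invented for illustration):

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

# two well-separated 1-D classes
X = np.array([[0.0], [1.0], [2.0], [10.0], [11.0], [12.0]])
y = np.array([0, 0, 0, 1, 1, 1])

model = GaussianNB()
model.fit(X, y)
print(model.predict([[1.5], [10.5]]))  # → [0 1]
```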
PROGRAM:
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn import metrics
fname = 'data.csv'  # placeholder path; point this at the dataset CSV file
data = pd.read_csv(fname, header=None)
X = data.iloc[:, :-1].values
y = data.iloc[:, -1].values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
model = GaussianNB()
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
accuracy = metrics.accuracy_score(y_test, y_pred)
print("Accuracy of your model:", accuracy)
OUTPUT:
Accuracy of your model: 0.7445887445887446
RESULT:
Thus the program for the implementation of naïve Bayes has been executed successfully and verified.
4. BAYESIAN NETWORKS
AIM:
To write a Python program for Bayesian networks.
ALGORITHM:
Step 1. Load the Iris dataset and build a DataFrame of the features and the target.
Step 2. Define the Bayesian network structure with each feature as a parent of the target.
Step 3. Learn the conditional probability distributions (CPDs) using maximum likelihood estimation.
Step 4. Create a VariableElimination object for inference.
Step 5. Query the posterior probability of the target given evidence on a feature.
Step 6. Print the query results.
Step 7. Stop.
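Maximum likelihood estimation of a CPD amounts to normalized counting over the data; a hand-rolled sketch on a tiny invented discrete dataset, independent of pgmpy:

```python
import pandas as pd

# toy discrete data: P(target | feature) is estimated by counting
df = pd.DataFrame({'feature': [0, 0, 0, 1, 1, 1],
                   'target':  [0, 0, 1, 1, 1, 1]})

# conditional counts normalized per feature value = MLE of the CPD
cpd = pd.crosstab(df['feature'], df['target'], normalize='index')
print(cpd)  # row 0: [2/3, 1/3]; row 1: [0, 1]
```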
PROGRAM :
import numpy as np
import pandas as pd
from pgmpy.estimators import MaximumLikelihoodEstimator
from pgmpy.models import BayesianNetwork
from pgmpy.inference import VariableElimination
from sklearn.datasets import load_iris
iris = load_iris()
iris_df = pd.DataFrame(data=np.c_[iris['data'], iris['target']],
                       columns=iris['feature_names'] + ['target'])
print('Sample instances from the Iris dataset are given below:')
print(iris_df.head())
print('\nAttributes and datatypes:')
print(iris_df.dtypes)
model = BayesianNetwork([('sepal length (cm)', 'target'),
                         ('sepal width (cm)', 'target'),
                         ('petal length (cm)', 'target'),
                         ('petal width (cm)', 'target')])
print('\nLearning CPD using Maximum Likelihood Estimators')
model.fit(iris_df, estimator=MaximumLikelihoodEstimator)
print('\nInferencing with Bayesian Network:')
iris_infer = VariableElimination(model)
print('\n1. Probability of Iris class given sepal length = 5.1')
q1 = iris_infer.query(variables=['target'], evidence={'sepal length (cm)': 5.1})
print(q1)
print('\n2. Probability of Iris class given petal width = 1.5')
q2 = iris_infer.query(variables=['target'], evidence={'petal width (cm)': 1.5})
print(q2)
OUTPUT:
Sample instances from the Iris dataset are given below:
sepal length (cm) sepal width (cm) petal length (cm) petal width (cm) \
0 5.1 3.5 1.4 0.2
1 4.9 3.0 1.4 0.2
2 4.7 3.2 1.3 0.2
3 4.6 3.1 1.5 0.2
4 5.0 3.6 1.4 0.2
target
0 0.0
1 0.0
2 0.0
3 0.0
4 0.0
Attributes and datatypes:
sepal length (cm) float64
sepal width (cm) float64
petal length (cm) float64
petal width (cm) float64
target float64
dtype: object
Learning CPD using Maximum Likelihood Estimators
Inferencing with Bayesian Network:
1. Probability of Iris class given sepal length = 5.1
+-------------+---------------+
| target | phi(target) |
+=============+===============+
| target(0.0) | 0.3352 |
+-------------+---------------+
| target(1.0) | 0.3324 |
+-------------+---------------+
| target(2.0) | 0.3324 |
+-------------+---------------+
2. Probability of Iris class given petal width = 1.5
+-------------+---------------+
| target | phi(target) |
+=============+===============+
| target(0.0) | 0.3327 |
+-------------+---------------+
| target(1.0) | 0.3343 |
+-------------+---------------+
| target(2.0) | 0.3330 |
+-------------+---------------+
RESULT:
Thus the program for the implementation of a Bayesian network has been executed successfully and
verified.
5. REGRESSION MODEL
AIM:
To write a Python program for simple linear regression.
ALGORITHM:
Step 1. Define the data points x and y.
Step 2. Estimate the regression coefficients b_0 (intercept) and b_1 (slope) using numpy's polyfit.
Step 3. Print the estimated coefficients.
Step 4. Plot the data points and the fitted regression line using matplotlib.
Step 5. Stop.
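polyfit with degree 1 solves ordinary least squares, so the coefficients can be verified against the closed-form formulas b_1 = S_xy / S_xx and b_0 = mean(y) - b_1 * mean(x) on the same data:

```python
import numpy as np

x = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
y = np.array([1, 3, 2, 5, 7, 8, 8, 9, 10, 12])

# closed-form simple linear regression
b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b0 = y.mean() - b1 * x.mean()

slope, intercept = np.polyfit(x, y, 1)
print(b0, b1)  # matches the polyfit intercept and slope
```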
PROGRAM:
import numpy as np
import matplotlib.pyplot as plt

def main():
    x = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
    y = np.array([1, 3, 2, 5, 7, 8, 8, 9, 10, 12])
    b = np.polyfit(x, y, 1)  # b[0] is the slope, b[1] is the intercept
    print("Estimated coefficients: b_0 = {}, b_1 = {}".format(b[1], b[0]))
    plt.scatter(x, y, color="r", marker="s", s=30)
    plt.plot(x, b[1] + b[0] * x, color="c")
    plt.xlabel('x')
    plt.ylabel('y')
    plt.show()

if __name__ == "__main__":
    main()
OUTPUT:
Estimated coefficients: b_0 = 1.236363636363637, b_1 = 1.1696969696969697
RESULT:
Thus the program for the implementation of a regression model has been executed successfully and
verified.
6. BUILD DECISION TREE AND RANDOM FOREST
AIM:
To write a program to build a decision tree and a random forest.
ALGORITHM:
Step 1. Load the Iris dataset and build a DataFrame of the features and labels.
Step 2. Split the data into training and testing sets.
Step 3. Train a decision tree classifier on the training data.
Step 4. Train a random forest classifier on the training data.
Step 5. Predict the test labels with both classifiers.
Step 6. Print the classification reports and confusion matrices for both classifiers.
Step 7. Stop.
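A minimal check of the DecisionTreeClassifier API on an invented, trivially separable dataset before running it on Iris:

```python
from sklearn.tree import DecisionTreeClassifier

# trivially separable toy data
X = [[0], [1], [2], [10], [11], [12]]
y = [0, 0, 0, 1, 1, 1]

clf = DecisionTreeClassifier(random_state=0)
clf.fit(X, y)
print(clf.predict([[1], [11]]))  # → [0 1]
```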
PROGRAM:
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report, confusion_matrix
from sklearn.datasets import load_iris
iris = load_iris()
X, y = iris.data, iris.target
feature_names = iris.feature_names
target_names = iris.target_names
raw_data = pd.DataFrame(data=X, columns=feature_names)
raw_data['Kyphosis'] = y  # label column
x = raw_data.drop('Kyphosis', axis=1)
y = raw_data['Kyphosis']
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.3)
decision_tree_model = DecisionTreeClassifier()
decision_tree_model.fit(x_train, y_train)
dt_predictions = decision_tree_model.predict(x_test)
random_forest_model = RandomForestClassifier()
random_forest_model.fit(x_train, y_train)
rf_predictions = random_forest_model.predict(x_test)
print("Decision Tree Classifier Results:")
print(classification_report(y_test, dt_predictions))
print(confusion_matrix(y_test, dt_predictions))
print("\nRandom Forest Classifier Results:")
print(classification_report(y_test, rf_predictions))
print(confusion_matrix(y_test, rf_predictions))
OUTPUT:
Decision Tree Classifier Results:
precision recall f1-score support
0 1.00 1.00 1.00 15
1 1.00 0.92 0.96 13
2 0.94 1.00 0.97 17
accuracy 0.98 45
macro avg 0.98 0.97 0.98 45
weighted avg 0.98 0.98 0.98 45
[[15 0 0]
[ 0 12 1]
[ 0 0 17]]
Random Forest Classifier Results:
precision recall f1-score support
0 1.00 1.00 1.00 15
1 1.00 0.92 0.96 13
2 0.94 1.00 0.97 17
accuracy 0.98 45
macro avg 0.98 0.97 0.98 45
weighted avg 0.98 0.98 0.98 45
[[15 0 0]
[ 0 12 1]
[ 0 0 17]]
RESULT:
Thus the program for the implementation of a decision tree and a random forest has been executed
successfully and verified.
7. BUILD SVM MODELS
AIM:
To write a Python program to build an SVM model.
ALGORITHM:
Step 1. Import necessary libraries.
Step 2. Load dataset.
Step 3. Split dataset into training and testing sets.
Step 4. Standardize features.
Step 5. Initialize SVM classifier.
Step 6. Train SVM classifier.
Step 7. Predict labels for the test set.
Step 8. Calculate accuracy of the model.
Step 9. Print the accuracy.
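Step 4's standardization rescales each feature to zero mean and unit variance; a quick check of StandardScaler on a tiny invented matrix:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

X = np.array([[1.0], [2.0], [3.0]])
sc = StandardScaler()
X_std = sc.fit_transform(X)

print(X_std.mean(), X_std.std())  # → 0.0 1.0
```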
PROGRAM:
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score
iris = datasets.load_iris()
X = iris.data
y = iris.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
svm_classifier = SVC(kernel='linear', random_state=42)
svm_classifier.fit(X_train, y_train)
y_pred = svm_classifier.predict(X_test)
accuracy = accuracy_score(y_test, y_pred)
print("Accuracy of SVM model:", accuracy)
OUTPUT:
Accuracy of SVM model: 0.9777777777777777
RESULT:
Thus the program to build an SVM model has been executed successfully and verified.
8. IMPLEMENTATION OF ENSEMBLE TECHNIQUES
AIM:
To implement ensemble techniques.
ALGORITHM:
Step 1. Initialize and train a Random Forest classifier with 100 decision trees.
Step 2. Initialize and train a Gradient Boosting classifier with 100 estimators and a learning rate of 0.1.
Step 3. Evaluate the Random Forest model using accuracy_score on the test set.
Step 4. Evaluate the Gradient Boosting model using accuracy_score on the test set.
Step 5. Print the accuracies of both models.
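Beyond the bagging and boosting models in the program below, ensembling can be as simple as majority voting across classifiers; a hand-rolled sketch on invented predictions:

```python
import numpy as np

# predictions from three hypothetical classifiers for five samples
preds = np.array([[0, 1, 1, 0, 1],
                  [0, 1, 0, 0, 1],
                  [1, 1, 1, 0, 0]])

# majority vote per column (sample): class 1 wins with 2+ votes
vote = (preds.sum(axis=0) >= 2).astype(int)
print(vote)  # → [0 1 1 0 1]
```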
PROGRAM:
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.metrics import accuracy_score
iris = load_iris()
X = iris.data
y = iris.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
# Step 1: Random Forest Classifier
rf_classifier = RandomForestClassifier(n_estimators=100, random_state=42)
rf_classifier.fit(X_train, y_train)
rf_pred = rf_classifier.predict(X_test)
rf_accuracy = accuracy_score(y_test, rf_pred)
print("Random Forest Accuracy:", rf_accuracy)
# Step 2: Gradient Boosting Classifier
gb_classifier = GradientBoostingClassifier(n_estimators=100, learning_rate=0.1, random_state=42)
gb_classifier.fit(X_train, y_train)
gb_pred = gb_classifier.predict(X_test)
gb_accuracy = accuracy_score(y_test, gb_pred)
print("Gradient Boosting Accuracy:", gb_accuracy)
OUTPUT:
Random Forest Accuracy: 1.0
Gradient Boosting Accuracy: 1.0
RESULT:
Thus the program for the implementation of ensemble learning has been executed successfully and
verified.
9. IMPLEMENT CLUSTERING ALGORITHMS
AIM:
To write a program to implement a clustering algorithm.
ALGORITHM:
Step 1: Initialization: Randomly initialize K centroids.
Step 2: Assign data points: Assign each data point to the nearest centroid.
Step 3: Update centroids: Compute the mean of all data points assigned to each centroid and update the
centroid to this mean.
Step 4: Repeat: Repeat steps 2 and 3 until convergence or for a maximum number of iterations. Convergence
is typically determined by checking if the centroids stop changing significantly between iterations.
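Steps 2 and 3 can be traced by hand; one assignment-and-update iteration on four invented points, with centroids initialized at two of them:

```python
import numpy as np

points = np.array([[0.0, 0.0], [0.0, 1.0], [10.0, 10.0], [10.0, 11.0]])
centroids = np.array([[0.0, 0.0], [10.0, 10.0]])

# Step 2: assign each point to the nearest centroid
dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
labels = dists.argmin(axis=1)

# Step 3: move each centroid to the mean of its assigned points
centroids = np.array([points[labels == k].mean(axis=0) for k in range(2)])
print(labels, centroids)  # labels [0 0 1 1]; centroids (0, 0.5) and (10, 10.5)
```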
PROGRAM:
import numpy as np
import cv2
from matplotlib import pyplot as plt
X = np.random.randint(10, 45, (25, 2))
Y = np.random.randint(55, 70, (25, 2))
Z = np.vstack((X, Y))
Z = np.float32(Z)
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
ret, label, center = cv2.kmeans(Z, 2, None, criteria, 10, cv2.KMEANS_RANDOM_CENTERS)
A = Z[label.ravel() == 0]
B = Z[label.ravel() == 1]
plt.scatter(A[:, 0], A[:, 1])
plt.scatter(B[:, 0], B[:, 1], c='r')
plt.scatter(center[:, 0], center[:, 1], s=80, c='y', marker='s')
plt.title('Test Data'), plt.xlabel('Z samples')
plt.show()
Output:
(A scatter plot of the two clusters and their centroids is displayed.)
RESULT:
Thus the program for the clustering algorithm has been executed successfully and verified.
10. IMPLEMENT EM FOR BAYESIAN NETWORK
AIM:
To write a program to implement EM for a Bayesian network.
ALGORITHM:
Step 1: Initialization: Initialize the parameters of the Bayesian Network, such as the transition matrix, with
random values.
Step 2: Expectation Step (E-step): Calculate the expected counts of transitions based on incomplete data.
Step 3: Maximization Step (M-step): Update the parameters of the Bayesian Network using the expected
counts calculated in the E-step.
Step 4: Convergence Check: Check for convergence by comparing the difference between the updated
parameters and the previous parameters against a predefined tolerance.
Step 5: Visualization: Visualize the learned parameters of the Bayesian Network, such as the transition
matrix, to understand the structure of the network and the relationships between different states.
Step 6: Repeat: Repeat steps 2-5 until convergence is achieved or the maximum number of iterations is
reached.
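The M-step in Step 3 is just row-normalization of the expected counts; a quick check on an invented count matrix:

```python
import numpy as np

expected_counts = np.array([[3.0, 1.0],
                            [2.0, 2.0]])

# M-step: convert counts into a row-stochastic transition matrix
transition = expected_counts / expected_counts.sum(axis=1, keepdims=True)
print(transition)  # → [[0.75 0.25] [0.5 0.5]], each row sums to 1
```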
PROGRAM:
import numpy as np
import matplotlib.pyplot as plt

class BayesianNetwork:
    def __init__(self, num_nodes):
        self.num_nodes = num_nodes
        self.transition_matrix = np.zeros((num_nodes, num_nodes))

    def fit(self, data, max_iter=100, tol=1e-3):
        # Initialization: random row-stochastic transition matrix
        self.transition_matrix = np.random.rand(self.num_nodes, self.num_nodes)
        self.transition_matrix /= np.sum(self.transition_matrix, axis=1, keepdims=True)
        for _ in range(max_iter):
            # E-step: expected transition counts from the data
            expected_counts = np.zeros_like(self.transition_matrix)
            for sample in data:
                for i in range(1, len(sample)):
                    prev_state = int(sample[i - 1])
                    current_state = int(sample[i])
                    expected_counts[prev_state][current_state] += 1
            # M-step: re-normalize the counts into probabilities
            new_transition_matrix = expected_counts / np.sum(expected_counts, axis=1, keepdims=True)
            # Convergence check
            if np.linalg.norm(new_transition_matrix - self.transition_matrix) < tol:
                break
            self.transition_matrix = new_transition_matrix
        # Visualization of the learned transition matrix
        plt.imshow(self.transition_matrix, cmap='hot', interpolation='nearest')
        plt.colorbar()
        plt.title('Learned Transition Matrix')
        plt.show()

if __name__ == "__main__":
    data = np.random.randint(2, size=(100, 5))
    bayesian_network = BayesianNetwork(num_nodes=2)
    bayesian_network.fit(data)
Output:
(A heat map of the learned transition matrix is displayed.)
RESULT:
Thus the program to implement EM for a Bayesian network has been executed successfully and
verified.
11. BUILD SIMPLE NN MODEL
AIM:
To write a program to build a simple NN model.
ALGORITHM:
Step 1: Initialization: Initialize the weights and biases of the neural network randomly.
Step 2: Forward Propagation: Compute the output of the neural network by propagating the input data
through the network layers.
Step 3: Activation Function: Apply the sigmoid activation function to the output of each layer to introduce
non-linearity.
Step 4: Output: Return the final output of the neural network after the forward pass.
Step 5: Example Usage: Demonstrate the usage of the neural network by creating an instance of the model
and passing input data to obtain predictions.
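The sigmoid in Step 3 squashes any real input into (0, 1); a couple of hand-checkable values:

```python
import numpy as np

def sigmoid(x):
    # logistic function: 1 / (1 + e^(-x))
    return 1 / (1 + np.exp(-x))

print(sigmoid(0))                      # → 0.5 exactly
print(sigmoid(np.array([-5.0, 5.0])))  # values near 0 and near 1
```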
PROGRAM:
import numpy as np
class NeuralNetwork:
    def __init__(self, input_size, hidden_size, output_size):
        self.weights_input_hidden = np.random.rand(input_size, hidden_size)
        self.weights_hidden_output = np.random.rand(hidden_size, output_size)
        self.bias_hidden = np.zeros((1, hidden_size))
        self.bias_output = np.zeros((1, output_size))

    def forward(self, inputs):
        hidden_layer_input = np.dot(inputs, self.weights_input_hidden) + self.bias_hidden
        hidden_layer_output = self.sigmoid(hidden_layer_input)
        output_layer_input = np.dot(hidden_layer_output, self.weights_hidden_output) + self.bias_output
        output = self.sigmoid(output_layer_input)
        return output

    def sigmoid(self, x):
        return 1 / (1 + np.exp(-x))

if __name__ == "__main__":
    nn = NeuralNetwork(input_size=2, hidden_size=3, output_size=1)
    input_data = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
    print("Input data:", input_data)
    print("Output:", nn.forward(input_data))
Output:
Input data: [[0 0]
[0 1]
[1 0]
[1 1]]
Output: [[0.27551913]
[0.29492153]
[0.42742819]
[0.44352722]]
RESULT:
Thus the program for building a simple NN model has been executed successfully and verified.
12. BUILD DEEP LEARNING NN MODEL
AIM:
To write a program to build a deep learning NN model.
ALGORITHM:
Step 1: Initialization: Initialize the weights and biases of the deep neural network randomly.
Step 2: Forward Propagation: Compute the output of the neural network by propagating the input data
through the network layers.
Step 3: Activation Function: Apply the sigmoid activation function to the output of each layer to introduce
non-linearity.
Step 4: Output: Return the final output of the neural network after the forward pass.
Step 5: Example Usage: Demonstrate the usage of the neural network by creating an instance of the model
and passing input data to obtain predictions.
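For the layer sizes [2, 3, 1] used below, the forward pass reduces to two matrix products whose shapes chain as (4, 2) x (2, 3) -> (4, 3) and (4, 3) x (3, 1) -> (4, 1); a quick shape check:

```python
import numpy as np

inputs = np.zeros((4, 2))  # batch of 4 two-feature samples
w1 = np.zeros((2, 3))      # layer 1 weights
w2 = np.zeros((3, 1))      # layer 2 weights

hidden = inputs @ w1       # shape (4, 3)
output = hidden @ w2       # shape (4, 1)
print(hidden.shape, output.shape)  # → (4, 3) (4, 1)
```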
PROGRAM:
import numpy as np
class DeepNeuralNetwork:
    def __init__(self, layer_sizes):
        self.num_layers = len(layer_sizes)
        self.weights = [np.random.rand(layer_sizes[i], layer_sizes[i + 1])
                        for i in range(self.num_layers - 1)]
        self.biases = [np.zeros((1, layer_sizes[i + 1]))
                       for i in range(self.num_layers - 1)]

    def forward(self, inputs):
        activations = inputs
        for i in range(self.num_layers - 1):
            activations = self.sigmoid(np.dot(activations, self.weights[i]) + self.biases[i])
        return activations

    def sigmoid(self, x):
        return 1 / (1 + np.exp(-x))

if __name__ == "__main__":
    dnn = DeepNeuralNetwork(layer_sizes=[2, 3, 1])
    input_data = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
    print("Input data:", input_data)
    print("Output:", dnn.forward(input_data))
Output:
Input data: [[0 0]
[0 1]
[1 0]
[1 1]]
Output: [[0.53459532]
[0.63937041]
[0.43286944]
[0.52571823]]
RESULT:
Thus the program for building a deep NN model has been executed successfully and verified.