VISVESVARAYA TECHNOLOGICAL UNIVERSITY
JNANA SANGAMA, BELAGAVI-590018
Algorithms & AI Lab – MCSL106
“LABORATORY RECORD BOOK”
Submitted by
NAME: AMARESHA
USN: 4VZ24SCS01
In partial fulfillment of the requirements for the award of the degree of
MASTER OF TECHNOLOGY IN
COMPUTER SCIENCE & ENGINEERING
Under the guidance of
[Link] R [Link], Ph.D.,
Assistant Professor
Department of Computer Science & Engineering
Visvesvaraya Technological University, Post Graduate Centre,
Hanchya, Sathagalli Layout, Outer Ring Road, Mysuru–570019
2024-2025
INDEX PAGE
COURSE NAME: ALGORITHMS & AI LAB
COURSE CODE: MCSL106
SL NO  PROGRAM NAME
1  Implement a simple linear regression algorithm to predict a continuous target variable based on a given dataset.
2  Develop a program to implement a Support Vector Machine for binary classification. Use a sample dataset and visualize the decision boundary.
3  Write a program to demonstrate the ID3 decision tree algorithm using an appropriate dataset for classification.
4  Implement a KNN algorithm for regression tasks instead of classification. Use a small dataset, and predict continuous values based on the average of the nearest neighbors.
5  Create a program that calculates different distance metrics (Euclidean and Manhattan) between two points in a dataset. Allow the user to input two points and display the calculated distances.
6  Implement the k-Nearest Neighbor algorithm to classify the Iris dataset, printing both correct and incorrect predictions.
1) Implement a simple linear regression algorithm to predict a continuous target variable
based on a given dataset.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
# Generate a sample dataset
np.random.seed(42)
X = 2 * np.random.rand(100, 1)  # Feature: random values between 0 and 2
y = 4 + 3 * X + np.random.randn(100, 1)  # Target variable with some noise
# Splitting the dataset into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
# Train a linear regression model
model = LinearRegression()
model.fit(X_train, y_train)
# Make predictions
y_pred = model.predict(X_test)
# Display dataset and regression line
plt.scatter(X, y, color='blue', label='Dataset')
plt.plot(X_test, y_pred, color='red', linewidth=2, label='Regression Line')
plt.xlabel('Feature (X)')
plt.ylabel('Target (y)')
plt.title('Simple Linear Regression')
plt.legend()
plt.show()
# Print model parameters
print(f"Intercept: {model.intercept_[0]:.2f}")
print(f"Slope: {model.coef_[0][0]:.2f}")
# Print mean squared error
mse = mean_squared_error(y_test, y_pred)
print(f"Mean Squared Error: {mse:.2f}")
OUTPUT:
(Scatter plot of the dataset with the fitted regression line, followed by the printed intercept, slope, and mean squared error.)
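Note: as a sanity check (not part of the prescribed program), the same coefficients can be recovered in closed form with the normal equation theta = (X^T X)^(-1) X^T y. A minimal sketch, reusing X_train and y_train from the program above:
import numpy as np
X_b = np.c_[np.ones((len(X_train), 1)), X_train]      # prepend a bias column of ones
theta = np.linalg.inv(X_b.T @ X_b) @ X_b.T @ y_train  # solve for [intercept, slope]
print(f"Normal-equation intercept: {theta[0][0]:.2f}, slope: {theta[1][0]:.2f}")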
2) Develop a program to implement a Support Vector Machine for binary classification. Use
a sample dataset and visualize the decision boundary.
import numpy as np
import matplotlib.pyplot as plt
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from mlxtend.plotting import plot_decision_regions
# Generate a sample dataset
X, y = datasets.make_classification(n_samples=100, n_features=2, n_classes=2,
                                    n_clusters_per_class=1, n_redundant=0, random_state=42)
# Split the dataset into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
# Train an SVM classifier
svm = SVC(kernel='linear', C=1.0)
svm.fit(X_train, y_train)
# Plot decision boundary
plt.figure(figsize=(8, 6))
plot_decision_regions(X, y, clf=svm, legend=2)
[Link]("Feature 1")
[Link]("Feature 2")
[Link]("SVM Decision Boundary")
[Link]()
# Print model accuracy
accuracy = svm.score(X_test, y_test)
print(f"Model Accuracy: {accuracy * 100:.2f}%")
OUTPUT:
(Plot of the two-feature dataset with the SVM decision regions, followed by the printed model accuracy.)
Note: install the "mlxtend" package if it is not already installed (use the command: pip install mlxtend).
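Note: if mlxtend cannot be installed, an equivalent boundary plot can be drawn by hand by classifying a mesh grid of points. A minimal sketch, reusing X, y, and svm from the program above:
import numpy as np
import matplotlib.pyplot as plt
xx, yy = np.meshgrid(np.linspace(X[:, 0].min() - 1, X[:, 0].max() + 1, 200),
                     np.linspace(X[:, 1].min() - 1, X[:, 1].max() + 1, 200))
Z = svm.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)  # class label for every grid point
plt.contourf(xx, yy, Z, alpha=0.3)                  # shaded predicted-class regions
plt.scatter(X[:, 0], X[:, 1], c=y, edgecolors='k')  # original data points
plt.show()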
3) Write a program to demonstrate the ID3 decision tree algorithm using an appropriate
dataset for classification.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.tree import DecisionTreeClassifier, plot_tree
from sklearn.model_selection import train_test_split
from sklearn import datasets
# Load a sample dataset
iris = datasets.load_iris()
X = iris.data  # Use all features for better accuracy
y = iris.target
# Split the dataset
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=75)
# Train the ID3 Decision Tree model with additional parameters to avoid overfitting/underfitting
id3_tree = DecisionTreeClassifier(criterion='entropy', max_depth=3, random_state=75)
id3_tree.fit(X_train, y_train)
# Visualize the Decision Tree
plt.figure(figsize=(12, 8))
plot_tree(id3_tree, feature_names=iris.feature_names, class_names=iris.target_names, filled=True)
plt.title("ID3 Decision Tree Visualization")
plt.show()
# Model Accuracy
accuracy = id3_tree.score(X_test, y_test)
print(f"Model Accuracy: {accuracy * 100:.2f}%")
OUTPUT:
(Visualization of the fitted decision tree, followed by the printed model accuracy.)
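Note: the quantity that criterion='entropy' evaluates at each candidate split is H = -sum(p_i * log2(p_i)) over the class proportions p_i. A minimal sketch (the helper name entropy is illustrative; y is the Iris label array loaded above):
import numpy as np
def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)  # frequency of each class
    p = counts / counts.sum()                          # class proportions
    return -np.sum(p * np.log2(p))
print(f"Entropy of the full Iris label set: {entropy(y):.4f}")  # log2(3) ~ 1.585 for three balanced classes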
4) Implement a KNN algorithm for regression tasks instead of classification. Use a small
dataset, and predict continuous values based on the average of the nearest neighbors.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor
from sklearn.metrics import mean_squared_error
# Generate a small dataset
np.random.seed(42)
X = np.sort(5 * np.random.rand(20, 1), axis=0)  # Feature
y = np.sin(X).ravel() + np.random.normal(0, 0.1, X.shape[0])  # Target with noise
# Split dataset into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
# Implement KNN Regression
k = 3 # Number of neighbors
knn_regressor = KNeighborsRegressor(n_neighbors=k, weights='uniform')
knn_regressor.fit(X_train, y_train)
# Predict on test set
y_pred = knn_regressor.predict(X_test)
# Compute Mean Squared Error
mse = mean_squared_error(y_test, y_pred)
print(f"Mean Squared Error: {mse:.4f}")
# Plot results
plt.scatter(X, y, color='blue', label='Data')
plt.plot(X_test, y_pred, 'ro', label='Predictions')
plt.xlabel("Feature")
plt.ylabel("Target")
plt.title(f"KNN Regression (k={k})")
plt.legend()
plt.show()
OUTPUT:
(Printed mean squared error, followed by a plot of the data with the KNN predictions marked in red.)
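Note: the prediction rule is simple enough to reproduce by hand: for each query point, average the targets of its k nearest training points. A minimal sketch (the helper name knn_predict is illustrative; the arrays come from the program above):
import numpy as np
def knn_predict(X_tr, y_tr, X_q, k=3):
    preds = []
    for q in X_q:
        dists = np.sqrt(((X_tr - q) ** 2).sum(axis=1))  # Euclidean distance to every training point
        nearest = np.argsort(dists)[:k]                 # indices of the k closest points
        preds.append(y_tr[nearest].mean())              # uniform average of their targets
    return np.array(preds)
print(np.allclose(knn_predict(X_train, y_train, X_test, k=3), y_pred))  # should print True, barring distance ties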
5) Create a program that calculates different distance metrics (Euclidean and Manhattan)
between two points in a dataset. Allow the user to input two points and display the
calculated distances.
import numpy as np
def euclidean_distance(point1, point2):
    return np.sqrt(np.sum((np.array(point1) - np.array(point2)) ** 2))

def manhattan_distance(point1, point2):
    return np.sum(np.abs(np.array(point1) - np.array(point2)))

def main():
    try:
        print("Enter the coordinates of two points:")
        point1 = list(map(float, input("Enter coordinates of Point 1 (space-separated): ").split()))
        point2 = list(map(float, input("Enter coordinates of Point 2 (space-separated): ").split()))
        if len(point1) != len(point2):
            print("Error: Points must have the same number of dimensions.")
            return
        euclidean = euclidean_distance(point1, point2)
        manhattan = manhattan_distance(point1, point2)
        print(f"Euclidean Distance: {euclidean:.4f}")
        print(f"Manhattan Distance: {manhattan:.4f}")
    except ValueError:
        print("Error: Please enter valid numeric values separated by spaces.")
    except Exception as e:
        print(f"An unexpected error occurred: {e}")

if __name__ == "__main__":
    try:
        main()
    except Exception as e:
        print(f"An error occurred: {e}")
OUTPUT:
Enter the coordinates of two points:
Enter coordinates of Point 1 (space-separated): 3 4
Enter coordinates of Point 2 (space-separated): 7 1
Euclidean Distance: 5.0000
Manhattan Distance: 7.0000
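Note: both metrics are special cases of the Minkowski distance d_p(a, b) = (sum |a_i - b_i|^p)^(1/p), where p=1 gives Manhattan and p=2 gives Euclidean. A minimal sketch of the general form (the helper name minkowski_distance is illustrative):
import numpy as np
def minkowski_distance(point1, point2, p):
    diff = np.abs(np.array(point1) - np.array(point2))  # coordinate-wise absolute differences
    return np.sum(diff ** p) ** (1 / p)
print(minkowski_distance([3, 4], [7, 1], p=2))  # 5.0 (Euclidean)
print(minkowski_distance([3, 4], [7, 1], p=1))  # 7.0 (Manhattan)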
6) Implement the k-Nearest Neighbor algorithm to classify the Iris dataset, printing both
correct and incorrect predictions.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
# Load dataset
iris = load_iris()
X, y = iris.data, iris.target
# Split into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
# Train k-NN classifier
knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X_train, y_train)
# Make predictions
y_pred = knn.predict(X_test)
# Print correct and incorrect predictions
for i in range(len(y_test)):
    print(f"Predicted: {y_pred[i]}, Actual: {y_test[i]}, {'Correct' if y_pred[i] == y_test[i] else 'Incorrect'}")
OUTPUT:
Predicted: 1, Actual: 1, Correct
Predicted: 0, Actual: 0, Correct
Predicted: 2, Actual: 2, Correct
Predicted: 1, Actual: 1, Correct
Predicted: 1, Actual: 1, Correct
Predicted: 0, Actual: 0, Correct
Predicted: 1, Actual: 1, Correct
Predicted: 2, Actual: 2, Correct
Predicted: 1, Actual: 1, Correct
Predicted: 1, Actual: 1, Correct
Predicted: 2, Actual: 2, Correct
Predicted: 0, Actual: 0, Correct
Predicted: 0, Actual: 0, Correct
Predicted: 0, Actual: 0, Correct
Predicted: 0, Actual: 0, Correct
Predicted: 1, Actual: 1, Correct
Predicted: 2, Actual: 2, Correct
Predicted: 1, Actual: 1, Correct
Predicted: 1, Actual: 1, Correct
Predicted: 2, Actual: 2, Correct
Predicted: 0, Actual: 0, Correct
Predicted: 2, Actual: 2, Correct
Predicted: 0, Actual: 0, Correct
Predicted: 2, Actual: 2, Correct
Predicted: 2, Actual: 2, Correct
Predicted: 2, Actual: 2, Correct
Predicted: 2, Actual: 2, Correct
Predicted: 2, Actual: 2, Correct
Predicted: 0, Actual: 0, Correct
Predicted: 0, Actual: 0, Correct
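Note: the 30 per-sample lines above can be summarized with an overall accuracy and a confusion matrix. A minimal sketch using sklearn.metrics, reusing y_test and y_pred from the program above:
from sklearn.metrics import accuracy_score, confusion_matrix
print(f"Accuracy: {accuracy_score(y_test, y_pred) * 100:.2f}%")  # fraction of correct predictions
print("Confusion matrix:")
print(confusion_matrix(y_test, y_pred))  # rows: actual class, columns: predicted class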