Q1: Write a Python Program to implement Depth First Search.
(4 Marks)
Here's a simple implementation of Depth First Search (DFS) in Python:
```python
class Graph:
    def __init__(self):
        # Adjacency list: maps each vertex to a list of its neighbors
        self.graph = {}

    def add_edge(self, u, v):
        if u not in self.graph:
            self.graph[u] = []
        self.graph[u].append(v)

    def dfs_util(self, v, visited):
        visited.add(v)
        print(v, end=" ")
        for neighbor in self.graph.get(v, []):
            if neighbor not in visited:
                self.dfs_util(neighbor, visited)

    def dfs(self, start):
        visited = set()
        self.dfs_util(start, visited)

# Example usage:
if __name__ == "__main__":
    g = Graph()
    g.add_edge(0, 1)
    g.add_edge(0, 2)
    g.add_edge(1, 2)
    g.add_edge(2, 0)
    g.add_edge(2, 3)
    g.add_edge(3, 3)
    print("Depth First Traversal starting from vertex 2:")
    g.dfs(2)
```
This program defines a `Graph` class with methods for adding edges and performing a DFS traversal. The graph is stored as an adjacency list (a dictionary mapping each vertex to a list of its neighbors), and DFS recurses from a given starting vertex, tracking visited vertices in a set so that cycles do not cause infinite recursion.
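The recursive version can hit Python's recursion limit on very deep graphs. As a point of comparison, here is a minimal iterative sketch using an explicit stack (a hypothetical `dfs_iterative` helper, assuming the `Graph` class above with its `graph` adjacency dictionary):
```python
def dfs_iterative(graph, start):
    # Explicit-stack DFS: pushes neighbors instead of recursing.
    visited = set()
    stack = [start]
    while stack:
        v = stack.pop()
        if v not in visited:
            visited.add(v)
            print(v, end=" ")
            # Push neighbors in reverse so they are popped in
            # the order the adjacency list stores them.
            for neighbor in reversed(graph.graph.get(v, [])):
                if neighbor not in visited:
                    stack.append(neighbor)
```
Called as `dfs_iterative(g, 2)`, it visits the same vertices as the recursive version.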
Q2: Write a Python Program to implement Min-Max Algorithm. (4 Marks)
Below is a simple implementation of the Min-Max (minimax) algorithm in Python, applied to the game of Tic-Tac-Toe:
```python
import math
# Constants for player and empty cell
PLAYER_X = 'X'
PLAYER_O = 'O'
EMPTY = ' '
# Function to print the Tic-Tac-Toe board
def print_board(board):
    for row in board:
        print(" | ".join(row))
        print("-" * 5)

# Function to check if the current state of the board is a terminal state
def is_terminal_state(board):
    # Check rows
    for row in board:
        if row.count(PLAYER_X) == 3 or row.count(PLAYER_O) == 3:
            return True
    # Check columns
    for col in range(3):
        if board[0][col] == board[1][col] == board[2][col] and board[0][col] != EMPTY:
            return True
    # Check diagonals
    if (board[0][0] == board[1][1] == board[2][2] and board[0][0] != EMPTY) or \
       (board[0][2] == board[1][1] == board[2][0] and board[0][2] != EMPTY):
        return True
    # Check if the board is full (draw)
    if all(cell != EMPTY for row in board for cell in row):
        return True
    return False

# Function to evaluate the board (heuristic)
def evaluate(board):
    # Check rows, columns, and diagonals for a win
    for row in board:
        if row.count(PLAYER_X) == 3:
            return 10
        elif row.count(PLAYER_O) == 3:
            return -10
    for col in range(3):
        if board[0][col] == board[1][col] == board[2][col] and board[0][col] == PLAYER_X:
            return 10
        elif board[0][col] == board[1][col] == board[2][col] and board[0][col] == PLAYER_O:
            return -10
    if (board[0][0] == board[1][1] == board[2][2] and board[0][0] == PLAYER_X) or \
       (board[0][2] == board[1][1] == board[2][0] and board[0][2] == PLAYER_X):
        return 10
    elif (board[0][0] == board[1][1] == board[2][2] and board[0][0] == PLAYER_O) or \
         (board[0][2] == board[1][1] == board[2][0] and board[0][2] == PLAYER_O):
        return -10
    # If neither player has won, it's a draw (or the game is still in progress)
    return 0

# Function to implement the Min-Max algorithm
def min_max(board, depth, is_maximizing):
    if is_terminal_state(board):
        return evaluate(board)
    if is_maximizing:
        max_eval = -math.inf
        for i in range(3):
            for j in range(3):
                if board[i][j] == EMPTY:
                    board[i][j] = PLAYER_X
                    score = min_max(board, depth + 1, False)
                    board[i][j] = EMPTY  # Undo the move
                    max_eval = max(max_eval, score)
        return max_eval
    else:
        min_eval = math.inf
        for i in range(3):
            for j in range(3):
                if board[i][j] == EMPTY:
                    board[i][j] = PLAYER_O
                    score = min_max(board, depth + 1, True)
                    board[i][j] = EMPTY  # Undo the move
                    min_eval = min(min_eval, score)
        return min_eval

# Example usage:
if __name__ == "__main__":
    board = [
        [EMPTY, EMPTY, EMPTY],
        [EMPTY, EMPTY, EMPTY],
        [EMPTY, EMPTY, EMPTY]
    ]
    print("Initial Board:")
    print_board(board)
    best_move_value = -math.inf
    best_move = None
    for i in range(3):
        for j in range(3):
            if board[i][j] == EMPTY:
                board[i][j] = PLAYER_X
                move_value = min_max(board, 0, False)
                board[i][j] = EMPTY
                if move_value > best_move_value:
                    best_move_value = move_value
                    best_move = (i, j)
    print("Best Move for X:", best_move)
```
This code calculates the best move for the 'X' player using the Min-Max algorithm in a Tic-Tac-
Toe game.
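For larger game trees, the standard optimization over plain Min-Max is alpha-beta pruning, which skips branches that cannot change the final decision. A minimal sketch of how the search above could be extended (a hypothetical `min_max_ab` variant, reusing the helpers defined earlier):
```python
def min_max_ab(board, depth, is_maximizing, alpha=-math.inf, beta=math.inf):
    # Same search as min_max, but alpha/beta bounds let us stop
    # exploring a branch as soon as it cannot affect the result.
    if is_terminal_state(board):
        return evaluate(board)
    if is_maximizing:
        best = -math.inf
        for i in range(3):
            for j in range(3):
                if board[i][j] == EMPTY:
                    board[i][j] = PLAYER_X
                    best = max(best, min_max_ab(board, depth + 1, False, alpha, beta))
                    board[i][j] = EMPTY  # Undo the move
                    alpha = max(alpha, best)
                    if beta <= alpha:  # Minimizer already has a better option
                        return best
        return best
    else:
        best = math.inf
        for i in range(3):
            for j in range(3):
                if board[i][j] == EMPTY:
                    board[i][j] = PLAYER_O
                    best = min(best, min_max_ab(board, depth + 1, True, alpha, beta))
                    board[i][j] = EMPTY  # Undo the move
                    beta = min(beta, best)
                    if beta <= alpha:  # Maximizer already has a better option
                        return best
        return best
```
It returns the same values as `min_max` but typically examines far fewer positions.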
Q3: Write a Python Program to implement the Backtracking approach to solve the N
Queens problem (5 Marks)
Certainly! Below is a Python program that implements the backtracking approach to solve the N
Queens problem:
```python
def is_safe(board, row, col, n):
    # Check if there's a queen in the same column
    for i in range(row):
        if board[i][col] == 1:
            return False
    # Check upper diagonal on the left side
    for i, j in zip(range(row, -1, -1), range(col, -1, -1)):
        if board[i][j] == 1:
            return False
    # Check upper diagonal on the right side
    for i, j in zip(range(row, -1, -1), range(col, n)):
        if board[i][j] == 1:
            return False
    return True

def solve_n_queens_util(board, row, n):
    if row == n:
        return True
    for col in range(n):
        if is_safe(board, row, col, n):
            board[row][col] = 1
            if solve_n_queens_util(board, row + 1, n):
                return True
            board[row][col] = 0  # Backtrack
    return False

def solve_n_queens(n):
    board = [[0] * n for _ in range(n)]
    if not solve_n_queens_util(board, 0, n):
        print("Solution does not exist")
        return False
    print("Solution for", n, "queens:")
    for i in range(n):
        for j in range(n):
            print(board[i][j], end=" ")
        print()
    return True

# Example usage:
if __name__ == "__main__":
    n = 4
    solve_n_queens(n)
```
This program solves the N Queens problem using backtracking. The `is_safe` function checks
whether it's safe to place a queen at a given position on the board, and the
`solve_n_queens_util` function recursively tries to place queens on the board, backtracking if
necessary. Finally, the `solve_n_queens` function initializes the board and calls the utility
function to find a solution.
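The same skeleton can also enumerate every solution instead of stopping at the first one; a small sketch (a hypothetical `count_n_queens` helper, reusing `is_safe` from above):
```python
def count_n_queens(board, row, n):
    # Count all solutions by continuing the search after each success
    # instead of returning True on the first complete placement.
    if row == n:
        return 1
    count = 0
    for col in range(n):
        if is_safe(board, row, col, n):
            board[row][col] = 1
            count += count_n_queens(board, row + 1, n)
            board[row][col] = 0  # Backtrack
    return count

print(count_n_queens([[0] * 4 for _ in range(4)], 0, 4))  # 2 solutions for n=4
```
For n = 8 this counts the well-known 92 solutions.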
Q4: Implement logistic regression in Python. Take the data set as input of your
choice. (5 Marks)
Sure, here's an example implementation of logistic regression in Python using the scikit-learn library. We'll use the popular Iris dataset for this example:
```python
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
# Load the Iris dataset
iris = datasets.load_iris()
X = iris.data
y = iris.target
# Split the data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
# Feature scaling
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
# Initialize logistic regression model
log_reg = LogisticRegression()
# Train the model
log_reg.fit(X_train, y_train)
# Make predictions
y_pred = log_reg.predict(X_test)
# Calculate accuracy
accuracy = accuracy_score(y_test, y_pred)
print("Accuracy:", accuracy)
```
In this code:
- We load the Iris dataset using scikit-learn's `datasets.load_iris()` function.
- We split the dataset into training and testing sets using `train_test_split`.
- We standardize the features using `StandardScaler`.
- We initialize a logistic regression model using `LogisticRegression`.
- We train the model on the training data using the `fit` method.
- We make predictions on the test data using the `predict` method.
- Finally, we calculate the accuracy of the model using `accuracy_score` from scikit-learn's
`metrics` module.
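Beyond a single accuracy number, scikit-learn can also report per-class metrics; a short follow-up sketch using the same `y_test` and `y_pred` as above:
```python
from sklearn.metrics import classification_report, confusion_matrix

# Per-class precision, recall, and F1 for the three iris species
print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred, target_names=iris.target_names))
```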
Q5: Take a real time example to implement KNN- classification algorithm in
Python. (5 Marks)
Sure, let's implement the K-Nearest Neighbors (KNN) classification algorithm using a real-world
dataset. We'll use the famous Iris dataset, which contains measurements for 150 iris flowers
from three different species: Setosa, Versicolor, and Virginica.
```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score
# Load the Iris dataset
iris = load_iris()
X = iris.data
y = iris.target
# Split the data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
# Feature scaling
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
# Initialize KNN classifier
knn = KNeighborsClassifier(n_neighbors=3)
# Train the model
knn.fit(X_train, y_train)
# Make predictions
y_pred = knn.predict(X_test)
# Calculate accuracy
accuracy = accuracy_score(y_test, y_pred)
print("Accuracy:", accuracy)
```
In this code:
- We load the Iris dataset using scikit-learn's `load_iris()` function.
- We split the dataset into training and testing sets using `train_test_split`.
- We standardize the features using `StandardScaler`.
- We initialize a KNN classifier using `KNeighborsClassifier`.
- We train the model on the training data using the `fit` method.
- We make predictions on the test data using the `predict` method.
- Finally, we calculate the accuracy of the model using `accuracy_score` from scikit-learn's
`metrics` module.
This code demonstrates a real-world use of the KNN classification algorithm: classifying iris flowers based on their measurements.
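The choice of `n_neighbors` matters in practice; one quick way to compare candidate values of k is cross-validation on the training set. A minimal sketch, assuming the scaled `X_train` and `y_train` from above:
```python
from sklearn.model_selection import cross_val_score

# Compare a few values of k with 5-fold cross-validation
for k in [1, 3, 5, 7, 9]:
    scores = cross_val_score(KNeighborsClassifier(n_neighbors=k),
                             X_train, y_train, cv=5)
    print(f"k={k}: mean CV accuracy = {scores.mean():.3f}")
```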
Q6: Write a Python Program to implement Support Vector Machines for data
classification, choose dataset of your own choice (6 Marks)
Certainly! Here's an example implementation of Support Vector Machines (SVM) for data classification using the scikit-learn library. We'll use the Iris dataset again for this example:
```python
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score
# Load the Iris dataset
iris = datasets.load_iris()
X = iris.data
y = iris.target
# Split the data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
# Feature scaling
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
# Initialize SVM classifier
svm_classifier = SVC(kernel='linear', random_state=42)
# Train the model
svm_classifier.fit(X_train, y_train)
# Make predictions
y_pred = svm_classifier.predict(X_test)
# Calculate accuracy
accuracy = accuracy_score(y_test, y_pred)
print("Accuracy:", accuracy)
```
In this code:
- We load the Iris dataset using scikit-learn's `datasets.load_iris()` function.
- We split the dataset into training and testing sets using `train_test_split`.
- We standardize the features using `StandardScaler`.
- We initialize an SVM classifier using `SVC` (Support Vector Classifier) with a linear kernel.
- We train the model on the training data using the `fit` method.
- We make predictions on the test data using the `predict` method.
- Finally, we calculate the accuracy of the model using `accuracy_score` from scikit-learn's
`metrics` module.
This code demonstrates how to implement Support Vector Machines for data classification
using a real-world dataset.
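SVM accuracy often depends on the kernel; an easy experiment is to compare a few kernels on the same split. A minimal sketch, reusing the scaled data from above:
```python
# Compare different SVM kernels on the same train/test split
for kernel in ['linear', 'poly', 'rbf']:
    clf = SVC(kernel=kernel, random_state=42)
    clf.fit(X_train, y_train)
    print(kernel, "accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```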
Q7: Take a Data set as per your choice, implement and execute on different
inputs of K-Means clustering algorithm (5 Marks)
Let's implement the K-Means clustering algorithm using a sample dataset. We'll create a
synthetic dataset for this example.
```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans
# Generate synthetic dataset
X, _ = make_blobs(n_samples=300, centers=4, cluster_std=0.60, random_state=0)
# Visualize the dataset
plt.scatter(X[:, 0], X[:, 1], s=50)
plt.xlabel("Feature 1")
plt.ylabel("Feature 2")
plt.title("Synthetic Dataset")
plt.show()
# Initialize K-Means model
kmeans = KMeans(n_clusters=4)
# Fit the model to the data
kmeans.fit(X)
# Get cluster centers and labels
centers = kmeans.cluster_centers_
labels = kmeans.labels_
# Visualize the clusters and centroids
plt.scatter(X[:, 0], X[:, 1], c=labels, s=50, cmap='viridis')
plt.scatter(centers[:, 0], centers[:, 1], marker='*', c='red', s=200)
plt.xlabel("Feature 1")
plt.ylabel("Feature 2")
plt.title("K-Means Clustering")
plt.show()
```
In this code:
- We generate a synthetic dataset with four clusters using `make_blobs` from scikit-learn.
- We visualize the synthetic dataset.
- We initialize a K-Means model with 4 clusters.
- We fit the K-Means model to the data.
- We obtain the cluster centers and labels.
- We visualize the clusters and centroids.
You can experiment with different numbers of clusters by changing the value of `n_clusters` in
the `KMeans` constructor and observe how the algorithm clusters the data differently.
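A common way to choose the number of clusters is the elbow method: plot the within-cluster sum of squares (the model's `inertia_` attribute) for a range of k and look for the bend. A minimal sketch using the same `X` as above:
```python
# Elbow method: inertia (within-cluster sum of squares) for k = 1..9
inertias = []
k_values = range(1, 10)
for k in k_values:
    km = KMeans(n_clusters=k, random_state=0, n_init=10)
    km.fit(X)
    inertias.append(km.inertia_)

plt.plot(k_values, inertias, marker='o')
plt.xlabel("Number of clusters k")
plt.ylabel("Inertia")
plt.title("Elbow Method")
plt.show()
```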
Q8: Write a Python Program to implement FP tree growth Algorithm on a
dataset of your own choice
Implementing the FP-growth algorithm from scratch is quite complex, but I can provide you
with a simplified version using the `pyfpgrowth` library, which is a Python implementation of
the FP-growth algorithm.
First, ensure you have `pyfpgrowth` installed:
```bash
pip install pyfpgrowth
```
Here's an example of how to use it:
```python
import pyfpgrowth
# Sample dataset
dataset = [
    ['milk', 'bread', 'water'],
    ['milk', 'bread', 'diaper', 'beer'],
    ['milk', 'bread', 'diaper', 'cola'],
    ['milk', 'bread', 'water', 'diaper'],
    ['bread', 'water', 'diaper']
]

# Minimum support threshold (absolute count of transactions)
min_support = 2
# Generate frequent itemsets
patterns = pyfpgrowth.find_frequent_patterns(dataset, min_support)
# Generate association rules
rules = pyfpgrowth.generate_association_rules(patterns, 0.7) # Confidence threshold: 0.7
# Print frequent itemsets
print("Frequent Itemsets:")
for pattern, support in patterns.items():
    print(pattern, ":", support)
# Print association rules
print("\nAssociation Rules:")
for antecedent, (consequent, confidence) in rules.items():
    print(antecedent, "=>", consequent, ":", confidence)
```
In this code:
- We define a sample dataset consisting of transactions (lists of items).
- We set a minimum support threshold.
- We use `find_frequent_patterns` to find frequent itemsets in the dataset.
- We use `generate_association_rules` to generate association rules from the frequent
itemsets.
- Finally, we print the frequent itemsets and association rules.
You can replace the `dataset` variable with your own dataset, formatted as a list of transactions
(lists of items). Adjust the `min_support` parameter and confidence threshold as needed for
your analysis.
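If `pyfpgrowth` is unavailable, the `mlxtend` library provides another FP-growth implementation; a sketch under the assumption that `mlxtend` and `pandas` are installed (note that its `min_support` is a fraction of transactions, not an absolute count):
```python
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import fpgrowth, association_rules

# One-hot encode the same transaction list used above
te = TransactionEncoder()
onehot = te.fit(dataset).transform(dataset)
df = pd.DataFrame(onehot, columns=te.columns_)

# min_support=0.4 corresponds to 2 of the 5 transactions
frequent = fpgrowth(df, min_support=0.4, use_colnames=True)
print(frequent)
print(association_rules(frequent, metric="confidence", min_threshold=0.7))
```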