
Course Name: Computer Vision Lab        Course Code: CSP-422

Experiment: 5

Aim: Write a program to evaluate and compare the efficiency of different classification models (SVM and KNN) for image recognition tasks.

Software Required: Any Python IDE (e.g., PyCharm, Jupyter Notebook, Google Colab)

Description:

Image recognition is a fundamental task in computer vision with many practical applications, such as
object detection, facial recognition, and medical imaging. By comparing different classification models,
researchers and practitioners can determine which models are most effective for a given image recognition
task, enabling the development of more accurate and efficient systems. Several classification models are
commonly used for image recognition; the two compared here are:
Support Vector Machines (SVM): SVMs are supervised learning models that can be used for image
classification. They find an optimal hyperplane that separates the classes in feature space with maximum
margin. SVMs are often combined with handcrafted features or with features extracted by CNNs.
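
As a quick illustration of the hyperplane idea (a standalone sketch, separate from the lab code below), a linear SVM fitted on two Iris classes exposes its separating hyperplane through the coef_ and intercept_ attributes:

# Sketch: read off the separating hyperplane of a linear SVM
from sklearn import datasets
from sklearn.svm import SVC

iris = datasets.load_iris()
mask = iris.target < 2                           # keep two classes so one hyperplane separates them
X, y = iris.data[mask, :2], iris.target[mask]    # sepal length, sepal width

clf = SVC(kernel='linear').fit(X, y)
w, b = clf.coef_[0], clf.intercept_[0]
# Points on the decision boundary satisfy w[0]*x0 + w[1]*x1 + b = 0
print("weights:", w, "bias:", b)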
K-Nearest Neighbours (KNN): KNN is one of the most basic yet essential classification algorithms in machine
learning. It belongs to the supervised learning family and is widely used in pattern recognition,
data mining, and intrusion detection, and it is readily applicable in many real-life scenarios.
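
To make the decision rule concrete, here is a minimal from-scratch sketch (illustrative only; Euclidean distance and majority vote, which is what the KNeighborsClassifier used later implements far more efficiently):

# Minimal sketch of the k-NN rule: Euclidean distance + majority vote
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=3):
    # Distance from the query point x to every training sample
    dists = np.linalg.norm(X_train - x, axis=1)
    # Labels of the k nearest training samples (y_train as a NumPy array)
    nearest = y_train[np.argsort(dists)[:k]]
    # Majority vote among the k neighbours decides the class
    return Counter(nearest).most_common(1)[0][0]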

Pseudo code/Algorithms/Flowchart/Steps:
1. Import the necessary libraries and modules.
2. Load a dataset of labelled images for training and testing.
3. Preprocess the images by resizing, normalizing, and augmenting if necessary.
4. Split the dataset into training and testing sets.
5. Define and initialize different classification models, such as support vector machines (SVM), random
forest, convolutional neural networks (CNN), etc.
6. Train each model using the training set.
7. Evaluate the performance of each model on the testing set using metrics such as accuracy,
precision, recall, and F1 score (a metrics sketch follows this list).
8. Compare the performance of the different models based on the evaluation results.
9. Analyze the strengths and weaknesses of each model in terms of accuracy, computational efficiency,
robustness, etc.
10. Draw conclusions and discuss the implications of the findings.
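
Step 7 maps directly onto sklearn.metrics. A minimal sketch, assuming a fitted classifier model and the test split X_test, Y_test from the implementation below:

# Sketch for step 7: standard classification metrics on the test set
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

Y_pred = model.predict(X_test)
print("Accuracy :", accuracy_score(Y_test, Y_pred))
# 'macro' averages each metric over the three Iris classes
print("Precision:", precision_score(Y_test, Y_pred, average='macro'))
print("Recall   :", recall_score(Y_test, Y_pred, average='macro'))
print("F1 score :", f1_score(Y_test, Y_pred, average='macro'))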

Implementation:
# SVM
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from mlxtend.plotting import plot_decision_regions

# Load the Iris dataset and build a labelled DataFrame
iris = datasets.load_iris()
df = pd.DataFrame(iris.data, columns=iris.feature_names)
df['target'] = iris.target
df['flower_name'] = df.target.apply(lambda x: iris.target_names[x])
df.head()

# Split the frame by class for exploratory scatter plots
df0 = df[df.target == 0]
df1 = df[df.target == 1]
df2 = df[df.target == 2]

# Sepal length vs. sepal width, one colour/marker per class
plt.scatter(df0['sepal length (cm)'], df0['sepal width (cm)'], color='blue', marker='+')
plt.scatter(df1['sepal length (cm)'], df1['sepal width (cm)'], color='green', marker='o')
plt.scatter(df2['sepal length (cm)'], df2['sepal width (cm)'], color='red', marker='*')

# Petal length vs. sepal width, same colour scheme
plt.scatter(df0['petal length (cm)'], df0['sepal width (cm)'], color='blue', marker='+')
plt.scatter(df1['petal length (cm)'], df1['sepal width (cm)'], color='green', marker='o')
plt.scatter(df2['petal length (cm)'], df2['sepal width (cm)'], color='red', marker='*')
plt.show()

# Use two features (sepal length, petal length) so decision regions stay 2-D
sepal_length = iris['data'][:, 0]
petal_length = iris['data'][:, 2]
X = np.column_stack((sepal_length, petal_length))
Y = iris.target

# Hold out 25% of the data for testing
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.25, random_state=54)

# Train a linear-kernel SVM and report the training accuracy
model = SVC(kernel='linear')
model.fit(X_train, Y_train)
print("Training accuracy:", model.score(X_train, Y_train))

# Visualize the decision regions on the test set
plot_decision_regions(X=X_test, y=Y_test, clf=model, legend=1)
plt.show()
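
The score above is computed on the training data; to judge generalization, the same model can also be evaluated on the held-out test split. A small addition (a sketch, reusing model, X_test, Y_test, and iris defined above):

# Evaluate the trained SVM on the unseen test split
from sklearn.metrics import classification_report

print("Test accuracy:", model.score(X_test, Y_test))
print(classification_report(Y_test, model.predict(X_test),
                            target_names=iris.target_names))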
# KNN
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Load the Iris dataset


iris = datasets.load_iris()

# Create a DataFrame from the Iris dataset


df = pd.DataFrame(iris.data, columns=iris.feature_names)
df['target'] = iris.target
df['flower_name'] = df.target.apply(lambda x: iris.target_names[x])

# Prepare the feature matrix (X) and target vector (Y)


X = df[["sepal length (cm)", "sepal width (cm)", "petal length (cm)", "petal width (cm)"]].values
Y = df['target'].values

# Split the data into training and testing sets


X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.25, random_state=54)

# Create a k-Nearest Neighbors (k-NN) classifier


k = 3 # You can adjust the value of k as needed
knn_classifier = KNeighborsClassifier(n_neighbors=k)

# Train the k-NN classifier


knn_classifier.fit(X_train, Y_train)

# Predict on the test set


Y_pred = knn_classifier.predict(X_test)

# Visualize the data
colors = ['blue', 'green', 'red']
plt.figure(figsize=(10, 6))

# Plot the data points based on their true labels
for i in range(3):
    plt.scatter(X_test[Y_test == i][:, 0], X_test[Y_test == i][:, 1],
                label=f'Class {i} (True)', c=colors[i], marker='o', alpha=0.6)

# Plot the data points based on their predicted labels
for i in range(3):
    plt.scatter(X_test[Y_pred == i][:, 0], X_test[Y_pred == i][:, 1],
                label=f'Class {i} (Predicted)', c=colors[i], marker='x', s=80)

plt.title('Iris Dataset - True vs. Predicted Classes (k-NN)')


plt.xlabel('Sepal Length (cm)')
plt.ylabel('Sepal Width (cm)')
plt.legend()
plt.grid(True)
plt.show()
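
Steps 8 and 9 call for comparing the models. A minimal sketch that trains both classifiers on the same four-feature split prepared above (SVC is assumed imported from sklearn.svm; all other names come from the k-NN code):

# Sketch for steps 8-9: compare SVM and k-NN on an identical train/test split
from sklearn.svm import SVC

for name, clf in [("SVM (linear kernel)", SVC(kernel='linear')),
                  ("k-NN (k=3)", KNeighborsClassifier(n_neighbors=3))]:
    clf.fit(X_train, Y_train)
    print(f"{name}: test accuracy = {clf.score(X_test, Y_test):.3f}")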
Output:
SVM: scatter plots of the Iris features and the decision-region plot of the linear-kernel SVM on the test set.

KNN: true vs. predicted classes of the test samples (sepal length vs. sepal width).
