CV 5
Experiment: 5
Aim: Write a program to compare the effectiveness of different classification models for image
recognition.
Description:
Image recognition is a fundamental task in computer vision and has various practical applications, such as
object detection, facial recognition, and medical imaging. By comparing different classification models,
researchers and practitioners can determine which models are most effective for specific image recognition
tasks, enabling the development of more accurate and efficient systems. Several classification models are
commonly used for image recognition; the two compared here are:
Support Vector Machines (SVM): SVMs are supervised learning models that can be used for image
classification. They find an optimal hyperplane that separates the different classes in feature space. SVMs are
often combined with handcrafted features or with features extracted by CNNs.
K-Nearest Neighbours (KNN): KNN is one of the most basic yet essential classification algorithms in machine
learning. It is a supervised learning method that is widely applied in pattern recognition, data mining, and
intrusion detection, and it is readily applicable in real-life scenarios.
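Before the full experiment, a minimal illustrative sketch (with made-up 2-D points and k = 3, not part of the lab procedure) shows the difference between the two decision rules: the SVM learns a separating hyperplane, while KNN classifies a query point by the majority class among its k nearest training points.

# Illustrative sketch only: the same toy 2-D data classified by an SVM and by KNN
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

X_toy = np.array([[1, 1], [1, 2], [2, 1], [6, 5], [7, 7], [6, 6]])  # two well-separated clusters
y_toy = np.array([0, 0, 0, 1, 1, 1])

svm_clf = SVC(kernel='linear').fit(X_toy, y_toy)                 # learns a separating hyperplane
knn_clf = KNeighborsClassifier(n_neighbors=3).fit(X_toy, y_toy)  # votes among the 3 nearest points

query = np.array([[2, 2]])
print(svm_clf.predict(query), knn_clf.predict(query))            # both should predict class 0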
Pseudo code/Algorithms/Flowchart/Steps:
1. Import the necessary libraries and modules.
2. Load a dataset of labelled images for training and testing.
3. Preprocess the images by resizing, normalizing, and augmenting if necessary.
4. Split the dataset into training and testing sets.
5. Define and initialize different classification models, such as support vector machines (SVM), random
forest, convolutional neural networks (CNN), etc.
6. Train each model using the training set.
7. Evaluate the performance of each model on the testing set using metrics such as accuracy, precision,
recall, and F1 score (a sketch of steps 6-8 follows this list).
8. Compare the performance of the different models based on the evaluation results.
9. Analyze the strengths and weaknesses of each model in terms of accuracy, computational efficiency,
robustness, etc.
10. Draw conclusions and discuss the implications of the findings.
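Steps 6-8 can be sketched as follows. This is a minimal illustration, assuming the Iris dataset, all four of its features, a 75/25 split, a linear-kernel SVM, and k = 3 for KNN; these are illustrative choices, not the prescribed implementation.

# Hedged sketch of steps 6-8: train two models and compare them with standard metrics
import numpy as np
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score, classification_report

iris = datasets.load_iris()
X, y = iris.data, iris.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=54)

models = {'SVM': SVC(kernel='linear'), 'KNN': KNeighborsClassifier(n_neighbors=3)}
for name, clf in models.items():
    clf.fit(X_train, y_train)                                  # step 6: train each model
    y_pred = clf.predict(X_test)
    print(name, 'accuracy:', accuracy_score(y_test, y_pred))   # step 7: evaluate on the test set
    print(classification_report(y_test, y_pred))               # precision, recall, F1 per class (step 8: compare)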
Implementation:
#SVM
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn import datasets, svm
from mlxtend.plotting import plot_decision_regions

# Load the Iris dataset and inspect its structure
iris = datasets.load_iris()
dir(iris)
iris.feature_names

# Build a DataFrame with the feature columns, the numeric target, and the flower name
df = pd.DataFrame(iris.data, columns=iris.feature_names)
df.head()
df['target'] = iris.target
df.head()
iris.target_names
df[df.target == 2].head()
df['flower_name'] = df.target.apply(lambda x: iris.target_names[x])
df.head()

# Use sepal length and petal length as the two features
sepal_length = iris['data'][:, 0]
petal_length = iris['data'][:, 2]
X = np.column_stack((sepal_length, petal_length))
X
Y = iris.target
Y

# Split into training and testing sets
from sklearn.model_selection import train_test_split as tts
X_train, X_test, Y_train, Y_test = tts(X, Y, test_size=0.25, random_state=54)
len(X_train)
len(X_test)

# Train a linear-kernel SVM and report its training accuracy
from sklearn.svm import SVC
model = SVC(kernel='linear')
model.fit(X_train, Y_train)
model.score(X_train, Y_train)

# Visualise the decision regions on the test set
plot_decision_regions(X=X_test, y=Y_test, clf=model, legend=1)
#KNN
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Assumed reconstruction of the steps missing from the original listing, mirroring the SVM part (k = 3 is an assumption)
iris = datasets.load_iris()
X = np.column_stack((iris.data[:, 0], iris.data[:, 2]))   # sepal length and petal length
Y = iris.target
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.25, random_state=54)
knn = KNeighborsClassifier(n_neighbors=3).fit(X_train, Y_train)
Y_pred = knn.predict(X_test)
colors = ['red', 'green', 'blue']

# Plot the data points based on their true labels
for i in range(3):
    plt.scatter(X_test[Y_test == i][:, 0], X_test[Y_test == i][:, 1], label=f'Class {i} (True)', c=colors[i], marker='o', alpha=0.6)

# Plot the data points based on their predicted labels
for i in range(3):
    plt.scatter(X_test[Y_pred == i][:, 0], X_test[Y_pred == i][:, 1], label=f'Class {i} (Predicted)', c=colors[i], marker='x', s=80)
plt.legend()
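Assuming both listings above were run in the same session (so `model` holds the trained SVM and `knn` the trained KNN, and both splits use the same two features with random_state=54, giving identical X_test and Y_test), a short hedged comparison of the two models' test accuracy could be:

# Hedged comparison sketch: test accuracy of both classifiers on the shared 25% hold-out split
from sklearn.metrics import accuracy_score

print('SVM test accuracy:', accuracy_score(Y_test, model.predict(X_test)))   # `model` from the SVM listing
print('KNN test accuracy:', accuracy_score(Y_test, knn.predict(X_test)))     # `knn` from the listing above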
SVM output: decision-region plot of the linear-kernel SVM on the test set.
KNN output: scatter plot of true vs. predicted classes on the test set.