Machine Learning Algorithms Overview

The document provides implementations of various machine learning algorithms using Python and the scikit-learn library, including FIND-S, Decision Trees (ID3), Linear Regression, Support Vector Machines (SVM), K-Nearest Neighbors (KNN), KMeans clustering, and PCA for dimensionality reduction. Each section includes code snippets for loading datasets, training models, and visualizing results. Additionally, it demonstrates data preprocessing techniques such as scaling and regression coefficient extraction.

Machine Learning Algorithms - Implementation

1. FIND-S Algorithm:

import numpy as np
from sklearn.datasets import load_iris

data = load_iris()
X, y = data.data, data.target

def find_s(X, y):
    # Start from the first positive example and generalize every
    # attribute that disagrees on a later positive example to '?'.
    target_class = 1
    # Compare values as strings so the hypothesis stays consistent
    # once some attributes have been replaced by '?'.
    examples = X[y == target_class].astype(str)
    hypothesis = examples[0]
    for example in examples[1:]:
        hypothesis = np.where(hypothesis == example, hypothesis, '?')
    return hypothesis

print("FIND-S Hypothesis:", find_s(X, y))
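FIND-S is classically stated over categorical attributes, where matching values are meaningful; on Iris's continuous features nearly every attribute collapses to '?'. A minimal sketch on a small EnjoySport-style toy table (the rows below are hypothetical, chosen for illustration) shows the intended behavior:

```python
import numpy as np

# Hypothetical EnjoySport-style training data (categorical attributes).
X = np.array([
    ['sunny', 'warm', 'normal', 'strong'],
    ['sunny', 'warm', 'high',   'strong'],
    ['rainy', 'cold', 'high',   'strong'],
    ['sunny', 'warm', 'high',   'strong'],
])
y = np.array([1, 1, 0, 1])  # 1 = positive example

def find_s(X, y):
    # Generalize the first positive example against the remaining positives.
    positives = X[y == 1]
    hypothesis = positives[0].copy()
    for example in positives[1:]:
        hypothesis = np.where(hypothesis == example, hypothesis, '?')
    return hypothesis

print(find_s(X, y))  # ['sunny' 'warm' '?' 'strong']
```

Only the humidity attribute varies across the positive rows, so it alone is generalized to '?'.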

2. Data Pre-Processing:

from sklearn.preprocessing import StandardScaler
from sklearn.datasets import load_iris

data = load_iris()
X, y = data.data, data.target

# Standardize each feature to zero mean and unit variance
scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)
print("Scaled Data:", X_scaled[:5])
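A quick way to confirm the scaling did what it claims is to check the per-column statistics after the transform, as in this short sketch:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.preprocessing import StandardScaler

X = load_iris().data
X_scaled = StandardScaler().fit_transform(X)

# After standardization each column should have mean ~0 and unit variance.
print(np.allclose(X_scaled.mean(axis=0), 0.0, atol=1e-9))  # True
print(np.allclose(X_scaled.std(axis=0), 1.0))              # True
```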

3. Decision Tree ID3 Algorithm:

from sklearn.tree import DecisionTreeClassifier, plot_tree
from sklearn.datasets import load_iris
import matplotlib.pyplot as plt

# Load dataset
data = load_iris()
X, y = data.data, data.target

# Train Decision Tree (ID3 uses the 'entropy' criterion)
clf = DecisionTreeClassifier(criterion='entropy')
clf.fit(X, y)

# Plot the decision tree
plt.figure(figsize=(12, 8))
plot_tree(clf, filled=True, feature_names=data.feature_names,
          class_names=data.target_names, rounded=True)
plt.title('Decision Tree (ID3)')
plt.show()

# Accuracy of the model (evaluated on the training data)
print("Decision Tree Accuracy:", clf.score(X, y))
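The accuracy above is measured on the training data, and a fully grown tree memorizes it, so the score says little about generalization. A held-out split gives a fairer estimate; this sketch assumes a 70/30 split with a fixed random_state for reproducibility:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

clf = DecisionTreeClassifier(criterion='entropy', random_state=0)
clf.fit(X_train, y_train)

# Train accuracy is 1.0 for a fully grown tree; test accuracy is lower.
print("Train accuracy:", clf.score(X_train, y_train))
print("Test accuracy:", clf.score(X_test, y_test))
```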
4. Linear Regression:

from sklearn.linear_model import LinearRegression
from sklearn.datasets import make_regression
import matplotlib.pyplot as plt

# Generate a synthetic single-feature regression dataset
X, y = make_regression(n_samples=100, n_features=1, noise=0.1)

model = LinearRegression()
model.fit(X, y)

# Scatter the data and overlay the fitted line
plt.scatter(X, y)
plt.plot(X, model.predict(X), color='red')
plt.show()

5. Support Vector Machines (SVM):

from sklearn.svm import SVC
from sklearn.datasets import load_iris
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

data = load_iris()
X, y = data.data, data.target

# Reduce to two dimensions so the classes can be plotted
pca = PCA(n_components=2)
X_pca = pca.fit_transform(X)

model = SVC(kernel='linear')
model.fit(X_pca, y)

plt.scatter(X_pca[:, 0], X_pca[:, 1], c=y, cmap='coolwarm')
plt.title('SVM on PCA-reduced data')
plt.show()
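The plot above shows the data but not how well the linear SVM actually separates it. A quick check is cross-validated accuracy on the PCA-reduced features; this is a sketch, with the 5-fold split being an assumption:

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
X_pca = PCA(n_components=2).fit_transform(X)

# 5-fold cross-validated accuracy of the linear SVM on 2-D data.
scores = cross_val_score(SVC(kernel='linear'), X_pca, y, cv=5)
print("Mean CV accuracy:", scores.mean())
```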

6. K-Nearest Neighbors (KNN) Classification:

from sklearn.neighbors import KNeighborsClassifier
from sklearn.datasets import load_iris
import matplotlib.pyplot as plt
import numpy as np

# Load the Iris dataset
data = load_iris()
X, y = data.data, data.target

# Fit KNN model using only the first two features
model = KNeighborsClassifier(n_neighbors=3)
model.fit(X[:, :2], y)

# Build a mesh over the two-feature plane for visualization
h = .02  # Step size in mesh
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))

# Predict the class at every mesh point
Z = model.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)

# Plotting the decision boundary
plt.figure(figsize=(8, 6))
plt.contourf(xx, yy, Z, alpha=0.8)

# Scatter plot of the data points
plt.scatter(X[:, 0], X[:, 1], c=y, edgecolors='k', marker='o')
plt.title('KNN Classification with First Two Features')
plt.xlabel('Feature 1')
plt.ylabel('Feature 2')
plt.show()
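The choice n_neighbors=3 is arbitrary: a very small k fits noise, a very large k oversmooths the boundary. A short sketch comparing a few values of k by cross-validated accuracy on the same two features (the particular k values are assumptions):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)

# Cross-validated accuracy for several neighborhood sizes,
# using only the first two features as in the plot above.
results = {}
for k in (1, 3, 7, 15):
    scores = cross_val_score(
        KNeighborsClassifier(n_neighbors=k), X[:, :2], y, cv=5)
    results[k] = scores.mean()
    print(f"k={k}: mean CV accuracy = {results[k]:.3f}")
```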

7. Regression Algorithm (Linear Regression example):

from sklearn.linear_model import LinearRegression
from sklearn.datasets import make_regression

X, y = make_regression(n_samples=100, n_features=1, noise=0.1)
model = LinearRegression()
model.fit(X, y)
print("Regression Coefficient:", model.coef_)
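Since make_regression generates y from a known linear model, passing coef=True returns the ground-truth coefficient, which lets one check that the fit recovers it. A sketch, with random_state fixed as an assumption for reproducibility:

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression

# coef=True also returns the coefficient used to generate y.
X, y, true_coef = make_regression(
    n_samples=100, n_features=1, noise=0.1, coef=True, random_state=0)

model = LinearRegression().fit(X, y)
print("True coefficient:", true_coef.ravel()[0])
print("Estimated coefficient:", model.coef_[0])
```

With noise this small and 100 samples, the estimate lands very close to the true value.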

8. Clustering Algorithm (KMeans):

from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
import matplotlib.pyplot as plt

data = load_iris()
X, y = data.data, data.target

model = KMeans(n_clusters=3)
model.fit(X)

# Color each point by its assigned cluster
plt.scatter(X[:, 0], X[:, 1], c=model.labels_)
plt.title('KMeans Clustering')
plt.show()
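KMeans cluster IDs are an arbitrary permutation, so they cannot be compared to the species labels with raw accuracy. A permutation-invariant score such as the adjusted Rand index handles this; the n_init and random_state values below are assumptions for reproducibility:

```python
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.metrics import adjusted_rand_score

X, y = load_iris(return_X_y=True)

# Compare clusters to species labels with a permutation-invariant score.
model = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
ari = adjusted_rand_score(y, model.labels_)
print("Adjusted Rand index:", ari)
```

A score of 1.0 would mean the clusters match the species exactly; on Iris the three clusters recover the species reasonably well but not perfectly.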

9. Dimensionality Reduction using PCA:

from sklearn.decomposition import PCA
from sklearn.datasets import load_iris
import matplotlib.pyplot as plt

data = load_iris()
X, y = data.data, data.target

# Project the four features onto the two leading principal components
pca = PCA(n_components=2)
X_pca = pca.fit_transform(X)

plt.scatter(X_pca[:, 0], X_pca[:, 1], c=y)
plt.title('PCA Reduction')
plt.show()
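How much information the 2-D projection keeps can be read off the fitted PCA's explained_variance_ratio_ attribute, as in this short sketch:

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

X = load_iris().data
pca = PCA(n_components=2).fit(X)

# Fraction of total variance each retained component captures.
print("Explained variance ratio:", pca.explained_variance_ratio_)
print("Total retained:", pca.explained_variance_ratio_.sum())
```

For Iris the two leading components retain well over 90% of the total variance, which is why the 2-D scatter plots above separate the classes so cleanly.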
