Easy Pract ML

The document provides simplified Python code examples for various machine learning practicals using libraries like sklearn. Each practical demonstrates a different algorithm, including Linear Regression, Logistic Regression, Decision Trees, SVM, Naive Bayes, PCA, KNN, and RBF SVM, with built-in datasets for easy understanding. The code is designed to be beginner-friendly, with clear variable names and straightforward logic.

Great!

Below are short and easy versions of each practical with built-in data so you don't
need to upload anything.

✅ Practical 1: Linear Regression (Predict Salary from Experience)


from sklearn.linear_model import LinearRegression
import matplotlib.pyplot as plt

# Sample data
X = [[1], [2], [3], [4], [5]]
y = [15000, 25000, 35000, 45000, 55000]

model = LinearRegression()
model.fit(X, y)

plt.scatter(X, y, color='green')
plt.plot(X, model.predict(X), color='red')
plt.title("Experience vs Salary")
plt.xlabel("Years")
plt.ylabel("Salary")
plt.show()
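
If you also want a numeric result, the fitted model exposes the learned line directly. This small optional addition (not part of the original practical) prints the slope, the intercept, and a prediction for a new value:

# Optional: inspect the learned line y = slope*x + intercept
print("Slope:", model.coef_[0])
print("Intercept:", model.intercept_)
print("Predicted salary for 6 years:", model.predict([[6]])[0])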

✅ Practical 2: Logistic Regression (Predict Purchase)


from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Sample data: [Age, Salary], target: Buy (0 or 1)


X = [[22, 30000], [25, 32000], [30, 60000], [35, 58000], [40, 90000]]
y = [0, 0, 1, 1, 1]

model = LogisticRegression()
model.fit(X, y)
print("Pred:", model.predict([[28, 55000]]))

✅ Practical 3: Decision Tree


from sklearn.tree import DecisionTreeClassifier
from sklearn import tree
import matplotlib.pyplot as plt

X = [[0, 0], [1, 1], [1, 0], [0, 1]]
y = [0, 1, 1, 0]

model = DecisionTreeClassifier()
model.fit(X, y)
tree.plot_tree(model)
plt.show()
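
If you prefer the learned rules as plain text instead of a plot, sklearn also provides export_text (the feature names "f1" and "f2" here are just illustrative labels):

# Optional: print the tree's decision rules as text
from sklearn.tree import export_text
print(export_text(model, feature_names=["f1", "f2"]))
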
✅ Practical 4: SVM
from sklearn.svm import SVC

X = [[1, 2], [2, 3], [3, 3], [6, 5], [7, 8]]
y = [0, 0, 0, 1, 1]

model = SVC(kernel='rbf')
model.fit(X, y)
print("Pred:", model.predict([[4, 4]]))

✅ Practical 5: Naive Bayes


from sklearn.naive_bayes import GaussianNB

X = [[1, 0], [2, 0], [3, 1], [4, 1], [5, 1]]
y = [0, 0, 1, 1, 1]

model = GaussianNB()
model.fit(X, y)
print("Pred:", model.predict([[2, 1]]))

✅ Practical 6: PCA
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

X = [[2.5, 2.4], [0.5, 0.7], [2.2, 2.9], [1.9, 2.2], [3.1, 3.0]]
sc = StandardScaler()
X_scaled = sc.fit_transform(X)

pca = PCA(n_components=1)
X_pca = pca.fit_transform(X_scaled)
print("PCA Output:\n", X_pca)

✅ Practical 7: KNN
from sklearn.neighbors import KNeighborsClassifier

X = [[1, 2], [2, 3], [3, 4], [6, 5], [7, 7]]
y = [0, 0, 0, 1, 1]

model = KNeighborsClassifier(n_neighbors=3)
model.fit(X, y)
print("Pred:", model.predict([[4, 4]]))

✅ Practical 8: SVM with RBF (Radial Basis Function)


from sklearn.datasets import make_classification
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# n_informative and n_redundant must be set explicitly: with only 2 features,
# the defaults (2 informative + 2 redundant) would raise an error
X, y = make_classification(n_samples=50, n_features=2, n_informative=2,
                           n_redundant=0, n_classes=2, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

model = SVC(kernel='rbf', gamma=0.1)
model.fit(X_train, y_train)
print("Test Accuracy:", model.score(X_test, y_test))

Let me know if you want all these in one file or with explanations too!

Perfectly clear now! Here's what I'll do:

✅ Step-by-step plan:

1. Read and learn the exact code from your PDF (already done).
2. Keep the logic the same.
3. Simplify the code:
o Use shorter, meaningful variable names.
o Remove unnecessary steps or comments.
o Keep the dataset inside the code, not in files.
o Keep it accurate, but beginner-friendly.

✅ Practical 1: Linear Regression – Simplified


from sklearn.linear_model import LinearRegression
import matplotlib.pyplot as plt

# Input: Experience and Salary


exp = [[1.1], [1.3], [1.5], [2.0], [2.2], [2.9], [3.0], [3.2], [3.2], [3.7]]
sal = [39343, 46205, 37731, 43525, 39891, 56642, 60150, 54445, 64445, 57189]

# Train model
model = LinearRegression()
model.fit(exp, sal)

# Predict
pred = model.predict([[4]])
print("Predicted salary for 4 years experience:", pred[0])

# Plot
plt.scatter(exp, sal, color='blue')
plt.plot(exp, model.predict(exp), color='red')
plt.scatter(4, pred[0], color='green')
plt.title('Experience vs Salary')
plt.xlabel('Experience (Years)')
plt.ylabel('Salary')
plt.grid(True)
plt.show()

✅ Practical 2: Logistic Regression – Simplified
from sklearn.linear_model import LogisticRegression
import matplotlib.pyplot as plt

# Input: Age, Salary and Purchased


data = [[19, 19000], [35, 20000], [26, 43000], [27, 57000], [19, 76000],
        [27, 58000], [27, 84000], [32, 150000], [25, 33000], [35, 65000]]
buy = [0, 0, 0, 0, 0, 0, 0, 1, 0, 1]

# Train model
model = LogisticRegression()
model.fit(data, buy)

# Predict
test = [[30, 87000]]
result = model.predict(test)
print("Prediction for age 30 and salary 87000:", result[0])

# Plot: Age vs Purchased


age = [d[0] for d in data]
colors = ['red' if b == 0 else 'green' for b in buy]

plt.scatter(age, buy, c=colors)
plt.axhline(0.5, color='blue', linestyle='--')
plt.scatter(30, result[0], color='black')
plt.title('Age vs Purchase Prediction')
plt.xlabel('Age')
plt.ylabel('Purchased (0=No, 1=Yes)')
plt.grid(True)
plt.show()
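
If you also want the familiar S-curve, you can optionally plot the predicted purchase probability across ages while holding salary fixed (the fixed salary of 60000 below is an illustrative assumption, not from the original code):

# Optional: probability curve over ages at an assumed fixed salary of 60000
ages = list(range(18, 46))
probs = [model.predict_proba([[a, 60000]])[0][1] for a in ages]
plt.plot(ages, probs, color='purple')
plt.title('Purchase Probability vs Age (salary fixed at 60000)')
plt.xlabel('Age')
plt.ylabel('P(Purchase)')
plt.grid(True)
plt.show()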

✅ Both programs now:

- Have short and meaningful variables like exp, sal, buy, etc.
- Use direct dataset values, not external files.
- Keep the same logic as your PDF.
- Include simple and clean graphs for better understanding.

If you share more practicals in a PDF, I’ll repeat this same simplification style for all of them.
Want me to wait for the next one?

Great! Here are simple Python programs using tiny datasets for each of the listed experiments.
Each program is beginner-friendly and uses widely available libraries like sklearn and numpy.
1. Linear Regression
from sklearn.linear_model import LinearRegression

# Dataset: Hours studied vs Marks


X = [[1], [2], [3]]
y = [2, 4, 6]

model = LinearRegression()
model.fit(X, y)

print("Prediction for 4 hours:", model.predict([[4]]))

2. Logistic Regression
from sklearn.linear_model import LogisticRegression

# Dataset: Hours studied vs Pass(1)/Fail(0)


X = [[1], [2], [3], [4], [5]]
y = [0, 0, 1, 1, 1]

model = LogisticRegression()
model.fit(X, y)

print("Prediction for 2.5 hours:", model.predict([[2.5]]))

3. CART Decision Tree


from sklearn.tree import DecisionTreeClassifier

# Dataset: [Temperature, Humidity] vs Play (1: Yes, 0: No)


X = [[30, 70], [25, 80], [20, 90], [35, 60]]
y = [1, 1, 0, 0]

model = DecisionTreeClassifier()
model.fit(X, y)

print("Prediction for [28, 75]:", model.predict([[28, 75]]))

4. Support Vector Machine (SVM)


from sklearn import svm

# Dataset: [Height, Weight] vs Gender (0: Female, 1: Male)


X = [[150, 45], [160, 55], [170, 65], [180, 75]]
y = [0, 0, 1, 1]

model = svm.SVC()
model.fit(X, y)

print("Prediction for [165, 60]:", model.predict([[165, 60]]))


5. Bayesian Classification (GaussianNB)
from sklearn.naive_bayes import GaussianNB

# Dataset: [Age, Salary] vs Buys Product (0/1)


X = [[25, 50000], [30, 60000], [35, 70000], [40, 80000]]
y = [0, 0, 1, 1]

model = GaussianNB()
model.fit(X, y)

print("Prediction for [32, 65000]:", model.predict([[32, 65000]]))

6. PCA (Principal Component Analysis)


from sklearn.decomposition import PCA
import numpy as np

# Dataset: [Height, Weight, Age]


X = np.array([[160, 50, 20], [170, 60, 22], [180, 70, 24]])

pca = PCA(n_components=2)
reduced_X = pca.fit_transform(X)

print("Reduced Data:\n", reduced_X)

7. K-Nearest Neighbour (KNN)


from sklearn.neighbors import KNeighborsClassifier

# Dataset: [Marks in Math, Science] vs Pass/Fail


X = [[30, 40], [35, 45], [80, 85], [85, 90]]
y = [0, 0, 1, 1]

model = KNeighborsClassifier(n_neighbors=1)
model.fit(X, y)

print("Prediction for [33, 42]:", model.predict([[33, 42]]))

8. Radial Basis Function (RBF) Kernel with SVM


from sklearn.svm import SVC

# Dataset: [X, Y] points from two classes


X = [[1, 2], [2, 3], [3, 3], [6, 7], [7, 8], [8, 8]]
y = [0, 0, 0, 1, 1, 1]

model = SVC(kernel='rbf')
model.fit(X, y)

print("Prediction for [4, 5]:", model.predict([[4, 5]]))


Let me know if you'd like output screenshots, graph visualizations (e.g. for regression/decision
trees), or explanations for any of these.
