
Heart Disease Prediction with SVM

The document outlines a practical implementation of a Support Vector Machine (SVM) model using a heart disease dataset. It includes data preprocessing steps such as normalization and splitting into training and testing sets, followed by model training and evaluation. The model achieved an accuracy of approximately 86.9%, with detailed metrics provided in the classification report.

24/02/2025, 12:01 — AI2_Practical 7.ipynb - Colab

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix

df = pd.read_csv("[Link]")
df

     age  sex  cp  trestbps  chol  fbs  restecg  thalach  exang  oldpeak  slope  ca  thal  target
0     63    1   3       145   233    1        0      150      0      2.3      0   0     1       1
1     37    1   2       130   250    0        1      187      0      3.5      0   0     2       1
2     41    0   1       130   204    0        0      172      0      1.4      2   0     2       1
3     56    1   1       120   236    0        1      178      0      0.8      2   0     2       1
4     57    0   0       120   354    0        1      163      1      0.6      2   0     2       1
..   ...  ...  ..       ...   ...  ...      ...      ...    ...      ...    ...  ..   ...     ...
298   57    0   0       140   241    0        1      123      1      0.2      1   0     3       0
299   45    1   3       110   264    0        1      132      0      1.2      1   0     3       0
300   68    1   0       144   193    1        1      141      0      3.4      1   2     3       0
301   57    1   0       130   131    0        1      115      1      1.2      1   1     3       0
302   57    0   1       130   236    0        0      174      0      0.0      1   1     2       0

[303 rows x 14 columns]

df.info()

<class 'pandas.core.frame.DataFrame'>
RangeIndex: 303 entries, 0 to 302
Data columns (total 14 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 age 303 non-null int64
1 sex 303 non-null int64
2 cp 303 non-null int64
3 trestbps 303 non-null int64
4 chol 303 non-null int64
5 fbs 303 non-null int64
6 restecg 303 non-null int64
7 thalach 303 non-null int64
8 exang 303 non-null int64
9 oldpeak 303 non-null float64
10 slope 303 non-null int64
11 ca 303 non-null int64
12 thal 303 non-null int64
13 target 303 non-null int64
dtypes: float64(1), int64(13)
memory usage: 33.3 KB

# Split data into features and target


X = df.drop(columns=['target'])
y = df['target']

# Normalize numerical features


scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)

# Split into training and testing sets


X_train, X_test, y_train, y_test = train_test_split(X_scaled, y, test_size=0.2, random_state=42)
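One caveat about the order of these two steps: the notebook calls `fit_transform` on the full dataset before splitting, which lets the test set's mean and standard deviation leak into the scaling. A leakage-free variant (shown here on synthetic stand-in data, since the real heart dataframe is not reproduced) splits first and fits the scaler on the training fold only:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the feature matrix (the notebook's real X has
# the 13 heart-disease columns): 100 samples, 5 features.
rng = np.random.default_rng(42)
X = rng.normal(size=(100, 5))
y = rng.integers(0, 2, size=100)

# Split FIRST, then fit the scaler on the training fold only, so the
# test set's statistics never influence the scaling parameters.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)  # learn mean/std on train
X_test_scaled = scaler.transform(X_test)        # reuse train statistics
```

With a 303-row dataset the practical difference is usually small, but the split-first order is the one that generalizes to real deployments.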

# Train the SVM model


svm_model = SVC(kernel='linear', C=1.0, random_state=42)
svm_model.fit(X_train, y_train)

# Make predictions
y_pred = svm_model.predict(X_test)
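The same fitted scaler and model can score a previously unseen record. A minimal sketch, using synthetic stand-in data and a hypothetical `new_patient` row (the notebook does not include this step):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in training data; in the notebook this would be the
# 13 unscaled heart-disease feature columns and the target.
rng = np.random.default_rng(0)
X = rng.normal(size=(80, 13))
y = rng.integers(0, 2, size=80)

scaler = StandardScaler().fit(X)
model = SVC(kernel='linear', C=1.0, random_state=42)
model.fit(scaler.transform(X), y)

# A new record must pass through the SAME fitted scaler before predict;
# calling fit_transform here would silently re-learn the statistics.
new_patient = rng.normal(size=(1, 13))
pred = model.predict(scaler.transform(new_patient))
```

`predict` returns one label per input row, so `pred` is a length-1 array of 0 (no disease) or 1 (disease) under the dataset's encoding.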


# Evaluate the model


accuracy = accuracy_score(y_test, y_pred)
conf_matrix = confusion_matrix(y_test, y_pred)
class_report = classification_report(y_test, y_pred)

accuracy, conf_matrix, class_report

(0.8688524590163934,
 array([[25,  4],
        [ 4, 28]]))

              precision    recall  f1-score   support

           0       0.86      0.86      0.86        29
           1       0.88      0.88      0.88        32

    accuracy                           0.87        61
   macro avg       0.87      0.87      0.87        61
weighted avg       0.87      0.87      0.87        61
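A single 80/20 split gives only one estimate of that ~86.9% accuracy. A hedged extension, not in the notebook, is to wrap the scaler and SVM in a pipeline and cross-validate, which both averages over several splits and re-fits the scaler inside each fold (synthetic stand-in data below):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for the heart data; in the notebook this would be
# the unscaled X and y taken from df.
rng = np.random.default_rng(42)
X = rng.normal(size=(120, 13))
y = rng.integers(0, 2, size=120)

# The pipeline re-fits StandardScaler inside each fold, avoiding leakage.
pipe = make_pipeline(StandardScaler(), SVC(kernel='linear', C=1.0))
scores = cross_val_score(pipe, X, y, cv=5)
print(scores.mean(), scores.std())
```

The mean and standard deviation of the five fold scores give a more honest picture of how the linear-kernel SVM would behave on new patients than one held-out test set.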
