dl lab prog 2

The document outlines the steps to implement a Boston Housing Price prediction model using a deep neural network with TensorFlow/Keras. It includes data preparation, preprocessing, model definition, training, evaluation, and visualization of predictions. Key considerations are discussed, such as the choice of activation functions and loss functions for regression tasks.


To implement the Boston Housing price prediction problem with a deep neural network regression model, we can follow these steps:


Steps:
1. Prepare the dataset (Boston Housing dataset).
2. Preprocess the data (such as normalization).
3. Define the Deep Neural Network (DNN) architecture.
4. Compile and train the model.
5. Evaluate the model.
6. Make predictions.
Here is the full implementation using TensorFlow/Keras to create the deep neural network for regression:
import tensorflow as tf
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
# Note: load_boston was removed in scikit-learn 1.2, so this import requires scikit-learn < 1.2
from sklearn.datasets import load_boston
import matplotlib.pyplot as plt

# Load the Boston housing dataset
boston_data = load_boston()

# Convert to a DataFrame for better understanding
df = pd.DataFrame(boston_data.data, columns=boston_data.feature_names)
df['PRICE'] = boston_data.target

# Splitting the dataset into features (X) and target (y)
X = df.drop(columns=['PRICE'])
y = df['PRICE']

# Split the data into training and testing sets (80% train, 20% test)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Standardize the features by normalizing them (important for neural networks)
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)

# Define the model (Deep Neural Network)
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, input_dim=X_train_scaled.shape[1], activation='relu'),  # First hidden layer
    tf.keras.layers.Dense(32, activation='relu'),  # Second hidden layer
    tf.keras.layers.Dense(1)  # Output layer (single unit for regression)
])

# Compile the model
model.compile(optimizer='adam', loss='mean_squared_error')

# Train the model
history = model.fit(X_train_scaled, y_train, epochs=200, batch_size=32,
                    validation_split=0.2, verbose=1)

# Evaluate the model on the test set
test_loss = model.evaluate(X_test_scaled, y_test)
print(f'Test loss (MSE): {test_loss}')

# Predict house prices on the test set
y_pred = model.predict(X_test_scaled)

# Plotting the loss curve to see the training progress
plt.plot(history.history['loss'], label='Train Loss')
plt.plot(history.history['val_loss'], label='Validation Loss')
plt.xlabel('Epochs')
plt.ylabel('Loss (MSE)')
plt.legend()
plt.title('Training and Validation Loss')
plt.show()

# Optionally, visualize the predictions vs actual values
plt.scatter(y_test, y_pred)
plt.xlabel('True Prices')
plt.ylabel('Predicted Prices')
plt.title('True vs Predicted Prices')
plt.show()
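
Note: scikit-learn removed load_boston in version 1.2. If only a newer scikit-learn is installed, one possible workaround (a minimal sketch, not part of the original program) is to load the same dataset from the copy bundled with Keras, which returns a predefined train/test split as plain NumPy arrays:

# Alternative: load the Boston Housing data bundled with Keras instead of
# sklearn's load_boston (removed in scikit-learn 1.2). Keras returns a
# predefined 404/102 train/test split of NumPy arrays with 13 features.
(X_train, y_train), (X_test, y_test) = tf.keras.datasets.boston_housing.load_data()
print(X_train.shape, X_test.shape)  # (404, 13) (102, 13)

From there, the StandardScaler, model definition, training, and evaluation steps above apply unchanged; only the DataFrame construction is skipped, since Keras does not ship the feature names.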
Explanation of the Code:
1. Data Loading: We load the Boston housing dataset using load_boston() from sklearn.datasets (available in scikit-learn versions before 1.2; see the alternative loading note above). The dataset contains various features such as the number of rooms, crime rate, etc., and the target variable is the house price (PRICE).
2. Preprocessing: We standardize the features using StandardScaler, which scales the features to have a mean of 0 and a standard deviation of 1. This is important for training deep learning models.
3. Model Definition: We use a simple neural network architecture with two hidden layers:
o The first hidden layer has 64 neurons and uses ReLU activation.
o The second hidden layer has 32 neurons with ReLU activation.
o The output layer is a single neuron (as we are doing regression).
4. Compilation: The model is compiled using the Adam optimizer and the Mean Squared Error (MSE) loss function since it's a regression problem.
5. Training: The model is trained on the training data for 200 epochs with a batch size of 32. We also use validation_split=0.2 to keep a part of the training data for validation during training.
6. Evaluation: After training, we evaluate the model on the test set and print the loss (MSE). We also visualize the training and validation loss curves to understand how well the model is learning.
7. Prediction and Visualization: We use the trained model to predict house prices for the test set and plot the true vs. predicted values; a short optional sketch for reporting extra metrics on these predictions follows this list.
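
If you want to go beyond the MSE reported in step 6, a small optional extension (a sketch, not part of the original program) is to compute MAE and R² on the test predictions with scikit-learn:

from sklearn.metrics import mean_absolute_error, r2_score

# y_pred from model.predict() has shape (n_samples, 1); flatten it before scoring
mae = mean_absolute_error(y_test, y_pred.flatten())
r2 = r2_score(y_test, y_pred.flatten())
print(f'Test MAE: {mae:.2f}, Test R^2: {r2:.2f}')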
Key Considerations:
• Activation Function: For regression, the output layer doesn't use an activation function (i.e., it is linear), because we're predicting a continuous value.
• Loss Function: We use mean_squared_error as the loss function since we're solving a regression problem.
• Epochs and Training: You might want to experiment with the number of epochs and batch size to optimize the model; an early-stopping sketch follows this list.
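
As a concrete way to avoid hand-tuning the epoch count, one option (a minimal sketch, not part of the original program) is to pass an EarlyStopping callback so training stops once the validation loss stops improving:

# Stop training when the validation loss has not improved for 20 epochs,
# and restore the weights from the best epoch seen so far.
early_stop = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=20,
                                              restore_best_weights=True)
history = model.fit(X_train_scaled, y_train, epochs=500, batch_size=32,
                    validation_split=0.2, verbose=1, callbacks=[early_stop])

With the callback in place, the epoch count acts as an upper bound rather than a value that has to be tuned precisely.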
Let me know if you'd like to further tweak this code or have any specific questions!
