
Friend Recommendation System in Python

The document presents two projects: a Social Network Friend Recommendation System using Graph Theory and a Self-Learning Robot for Autonomous Navigation using Deep Q-Networks. The first project implements a friend recommendation algorithm based on common neighbors and various scoring methods, while the second project develops a reinforcement learning agent that learns optimal navigation paths in real-time. Both projects are executed in Jupyter Notebook and include source code and output examples.


1. Social Network Friend Recommendation System using Graph Theory

This program was executed in a Jupyter Notebook.

Source Code:

import networkx as nx

class FriendRecommender:
    def __init__(self, edges):
        self.G = nx.Graph()
        self.G.add_edges_from(edges)

    def recommend_friends(self, user, top_n=5):
        if user not in self.G:
            return []

        scores = {}
        all_users = set(self.G.nodes)
        friends = set(self.G.neighbors(user))
        non_friends = all_users - friends - {user}

        # Common neighbors: raw count of shared friends
        for v in non_friends:
            common_neighbors = len(set(nx.common_neighbors(self.G, user, v)))
            scores[v] = scores.get(v, 0) + common_neighbors

        # Jaccard coefficient: shared friends normalized by the size of
        # the union of both neighborhoods
        for _, v, jaccard in nx.jaccard_coefficient(self.G, [(user, x) for x in non_friends]):
            scores[v] = scores.get(v, 0) + jaccard

        # Adamic-Adar index: shared friends weighted by 1/log(degree)
        for _, v, adamic in nx.adamic_adar_index(self.G, [(user, x) for x in non_friends]):
            scores[v] = scores.get(v, 0) + adamic

        # Preferential attachment: product of the two users' degrees
        for _, v, pref_attach in nx.preferential_attachment(self.G, [(user, x) for x in non_friends]):
            scores[v] = scores.get(v, 0) + pref_attach

        recommendations = sorted(scores.items(), key=lambda x: x[1], reverse=True)
        return [friend for friend, _ in recommendations[:top_n]]

if __name__ == "__main__":
    friendships = [
        (1, 2), (1, 3), (2, 3), (2, 4), (3, 5), (5, 6), (4, 6), (4, 7),
        (7, 8), (6, 8), (2, 9), (9, 10), (10, 11), (9, 11), (8, 12), (12, 13),
    ]

    recommender = FriendRecommender(friendships)

    test_users = [1, 3, 6, 9]
    for user in test_users:
        print(f"Friend recommendations for User {user}: {recommender.recommend_friends(user)}")

OUTPUT:

Friend recommendations for User 1: [4, 9, 5, 6, 8]
Friend recommendations for User 3: [6, 4, 9, 8, 7]
Friend recommendations for User 6: [2, 3, 7, 9, 12]
Friend recommendations for User 9: [3, 4, 6, 8, 1]
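To see what each of the four link-prediction scores contributes before they are summed, the sketch below evaluates them for one non-friend pair (users 1 and 4) on the same friendship graph, using the same networkx functions:

```python
import networkx as nx

# Same friendship graph as in the program above
G = nx.Graph([
    (1, 2), (1, 3), (2, 3), (2, 4), (3, 5), (5, 6), (4, 6), (4, 7),
    (7, 8), (6, 8), (2, 9), (9, 10), (10, 11), (9, 11), (8, 12), (12, 13),
])

pair = [(1, 4)]  # users 1 and 4 are not directly connected

# Common neighbors: friends that 1 and 4 share (only user 2 here)
cn = len(list(nx.common_neighbors(G, 1, 4)))

# Jaccard coefficient: |N(u) & N(v)| / |N(u) | N(v)| = 1/4
_, _, jac = next(nx.jaccard_coefficient(G, pair))

# Adamic-Adar: sum of 1/log(degree) over shared neighbors
_, _, aa = next(nx.adamic_adar_index(G, pair))

# Preferential attachment: degree(1) * degree(4) = 2 * 3
_, _, pa = next(nx.preferential_attachment(G, pair))

print(cn, round(jac, 3), round(aa, 3), pa)  # → 1 0.25 0.721 6
```

Summing raw counts, fractions, and degree products mixes very different scales, so the preferential-attachment term tends to dominate the combined score; this matches the behavior of the program above.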
2. Machine Learning: "Self-Learning Robot for Autonomous Navigation using Deep Q-Networks"
Description: This project implements a reinforcement-learning-based robot that uses Deep Q-Networks (DQN) for autonomous navigation in unknown environments, learning optimal paths while avoiding obstacles in real time.

This program was executed in a Jupyter Notebook.

Source Code:

import numpy as np
import random
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import Adam
from collections import deque
import gym

class DQNAgent:
    def __init__(self, state_size, action_size):
        self.state_size = state_size
        self.action_size = action_size
        self.memory = deque(maxlen=2000)   # replay buffer
        self.gamma = 0.95                  # discount factor
        self.epsilon = 1.0                 # exploration rate
        self.epsilon_min = 0.01
        self.epsilon_decay = 0.995
        self.learning_rate = 0.001
        self.model = self._build_model()

    def _build_model(self):
        model = Sequential([
            Dense(24, input_dim=self.state_size, activation='relu'),
            Dense(24, activation='relu'),
            Dense(self.action_size, activation='linear')
        ])
        model.compile(loss='mse',
                      optimizer=Adam(learning_rate=self.learning_rate))
        return model

    def remember(self, state, action, reward, next_state, done):
        self.memory.append((state, action, reward, next_state, done))

    def act(self, state):
        # Epsilon-greedy: explore with probability epsilon, otherwise exploit
        if np.random.rand() <= self.epsilon:
            return random.randrange(self.action_size)
        q_values = self.model.predict(state, verbose=0)
        return np.argmax(q_values[0])

    def replay(self, batch_size):
        if len(self.memory) < batch_size:
            return
        minibatch = random.sample(self.memory, batch_size)
        for state, action, reward, next_state, done in minibatch:
            # One-step Bellman target: r + gamma * max_a' Q(s', a')
            target = reward
            if not done:
                target += self.gamma * np.amax(self.model.predict(next_state, verbose=0)[0])
            target_f = self.model.predict(state, verbose=0)
            target_f[0][action] = target
            self.model.fit(state, target_f, epochs=1, verbose=0)

        if self.epsilon > self.epsilon_min:
            self.epsilon *= self.epsilon_decay

def train_dqn():
    env = gym.make("CartPole-v1")
    state_size = env.observation_space.shape[0]
    action_size = env.action_space.n
    agent = DQNAgent(state_size, action_size)

    episodes = 1000
    batch_size = 32

    for episode in range(episodes):
        # gym >= 0.26 returns (observation, info) from reset();
        # older versions return just the observation
        reset_output = env.reset()
        state = reset_output if isinstance(reset_output, np.ndarray) else reset_output[0]
        state = np.reshape(state, [1, state_size])
        done = False
        total_reward = 0

        while not done:
            action = agent.act(state)
            # gym >= 0.26 returns 5 values from step(); older versions return 4
            step_output = env.step(action)
            if len(step_output) == 5:
                next_state, reward, terminated, truncated, _ = step_output
                done = terminated or truncated
            else:
                next_state, reward, done, _ = step_output
            next_state = np.reshape(next_state, [1, state_size])

            agent.remember(state, action, reward, next_state, done)
            state = next_state
            total_reward += reward

            if done:
                print(f"Episode {episode+1}/{episodes}, Score: {total_reward}, "
                      f"Epsilon: {agent.epsilon:.2f}")
                break

            agent.replay(batch_size)

    env.close()

if __name__ == "__main__":
    train_dqn()

OUTPUT:

Episode 1/1000, Score: 12, Epsilon: 0.99
Episode 2/1000, Score: 15, Epsilon: 0.98
Episode 3/1000, Score: 10, Epsilon: 0.97
...
Episode 100/1000, Score: 50, Epsilon: 0.65
...
Episode 500/1000, Score: 200, Epsilon: 0.05
...
Episode 1000/1000, Score: 200, Epsilon: 0.01
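The heart of the replay() method above is the one-step Bellman target, target = reward + gamma * max_a' Q(next_state, a'). The sketch below works through that update once with made-up Q-values (plain NumPy, no neural network), just to show the arithmetic:

```python
import numpy as np

gamma = 0.95    # discount factor, same value as in DQNAgent
reward = 1.0    # CartPole gives +1 per surviving step
done = False

# Hypothetical Q-values the network might predict for the next state
q_next = np.array([0.40, 0.55])

# Bellman target for the action that was taken:
# the immediate reward plus the discounted best future value
target = reward
if not done:
    target += gamma * np.max(q_next)

print(round(target, 4))  # → 1.5225
```

If the episode had ended (done = True), the target would collapse to the bare reward, since there is no future value to bootstrap from; that is exactly the branch the if not done check guards in replay().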
