COMP 488 Neural Network Deep Learning
KATHMANDU UNIVERSITY
Subject: Neural Network and Deep Learning Course Code: COMP 488
Credit: 3 F.M: 100
Type: Elective
Course Description:
The course is designed to provide undergraduate students with the fundamental
concepts of Deep Learning. The course is divided into two parts: the first covers the basic
foundations, with an introduction to Applied Math and Machine Learning Basics, Neural Networks,
and Regularization for Deep Learning; the second deals with Convolutional Neural Networks,
models for sequence modeling (Recurrent and Recursive Nets), and applications. Students are
encouraged to go through several supplementary materials that will expose them to the state of
the art in Deep Learning. The lab is designed so that students will be trained to use
open-source libraries such as TensorFlow, Keras, and PyTorch while implementing Deep Learning
algorithms. As part of the course, students will conduct a mini-project to solve problems from
various application areas, including Natural Language Processing, Image Recognition, Visual
Activities, and Recommendation Systems.
Course Objectives:
The major objective of the course is to prepare students to design Deep Learning models that
solve real-world problems. The course is designed to connect students with recent work being
carried out around the globe. Demonstrations of several examples using Keras, TensorFlow, and
PyTorch will help students understand different aspects of Deep Learning models, become familiar
with libraries used in companies worldwide, and grasp the practicalities of implementing deep
learning models.
Evaluation:
Internal: 50
Final: 50
Contents:
Unit 1 – Applied Math and Machine Learning Basics [6 hrs]
1.1. Scalars, Vectors, Matrices and Tensors
1.2. Probability Distributions
1.3. Conditional Probability
1.4. Useful Properties of Common Functions
1.5. Gradient-Based Optimization
1.6. Capacity, Overfitting and Underfitting
1.7. Hyperparameters and Validation Sets
1.8. Estimators, Bias and Variance
1.9. Stochastic Gradient Descent
1.10. Supervised and Unsupervised Algorithms
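Items 1.5 and 1.9 above (gradient-based optimization and stochastic gradient descent) can be illustrated with a minimal NumPy sketch of mini-batch SGD on a one-variable linear regression; the synthetic data, learning rate, and batch size here are illustrative choices, not part of the syllabus:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data from y = 3x + 2 plus small Gaussian noise (illustrative).
X = rng.uniform(-1, 1, size=(200, 1))
y = 3.0 * X[:, 0] + 2.0 + rng.normal(0, 0.1, size=200)

w, b = 0.0, 0.0   # parameters to learn
lr = 0.1          # learning rate (a hyperparameter, cf. item 1.7)
batch_size = 20

for epoch in range(50):
    idx = rng.permutation(len(X))          # shuffle each epoch
    for start in range(0, len(X), batch_size):
        batch = idx[start:start + batch_size]
        xb, yb = X[batch, 0], y[batch]
        err = (w * xb + b) - yb            # prediction error on the mini-batch
        # Gradients of the mean squared error with respect to w and b
        grad_w = 2 * np.mean(err * xb)
        grad_b = 2 * np.mean(err)
        w -= lr * grad_w                   # gradient-descent update
        b -= lr * grad_b
```

Because each update uses only a random mini-batch rather than the full dataset, this is stochastic (mini-batch) gradient descent; with these settings the parameters settle near the true values w ≈ 3, b ≈ 2.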
Text Book:
Ian Goodfellow, Yoshua Bengio, Aaron Courville, Deep Learning, MIT Press, 2016.
Reference Books:
1. Nikhil Buduma and Nicholas Locascio, Fundamentals of Deep Learning, O'Reilly Media, 2017.
2. Christopher M. Bishop, Pattern Recognition and Machine Learning, Springer, 2006.
3. Thomas M. Mitchell, Machine Learning, McGraw-Hill, 1997.