
DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING

KATHMANDU UNIVERSITY

Subject: Neural Network and Deep Learning
Course Code: COMP 488
Credit: 3
Full Marks: 100
Type: Elective

Course Description:
The course is designed to provide undergraduate students with the fundamental
concepts of Deep Learning. It is divided into two parts: the first builds the
foundation with an introduction to Applied Math and Machine Learning Basics, The
Neural Network, and Regularization for Deep Learning; the second covers
Convolutional Neural Networks, Sequence Modeling with Recurrent and Recursive
Nets, and Applications. Students are encouraged to work through supplementary
materials that expose them to the state of the art in Deep Learning. The lab is
designed to train students to use open-source libraries such as TensorFlow,
Keras, and PyTorch while implementing Deep Learning algorithms. As part of the
course, students will conduct a mini-project solving a problem drawn from
application areas including Natural Language Processing, Image Recognition,
Visual Activities, and Recommender Systems.

Course Objectives:
The major objective of the course is to prepare students to design Deep Learning
models that solve real-world problems. The course is structured to connect
students with recent work being carried out around the globe. Demonstrations
using Keras, TensorFlow, and PyTorch will illustrate different aspects of Deep
Learning models, familiarizing students both with the libraries used across the
industry and with the practicalities of implementing deep learning models.
Evaluation:
Internal: 50
Final: 50

Contents:
Unit 1 – Applied Math and Machine Learning Basics [6 hrs]
1.1. Scalars, Vectors, Matrices and Tensors
1.2. Probability Distributions
1.3. Conditional Probability
1.4. Useful Properties of Common Functions
1.5. Gradient-Based Optimization
1.6. Capacity, Overfitting and Underfitting
1.7. Hyperparameters and Validation Sets
1.8. Estimators, Bias and Variance
1.9. Stochastic Gradient Descent
1.10. Supervised and Unsupervised Algorithms
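
Gradient-based optimization and stochastic gradient descent (items 1.5 and 1.9
above) can be sketched in a few lines of NumPy. The linear-regression setup and
all names below are illustrative only, not part of the syllabus:

```python
import numpy as np

# Stochastic gradient descent on least-squares linear regression:
# minimize mean((w*x_i + b - y_i)^2) one sample at a time.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=200)
y = 3.0 * x + 0.5 + rng.normal(0, 0.1, size=200)  # data from w=3.0, b=0.5

w, b, lr = 0.0, 0.0, 0.1
for epoch in range(50):
    for i in rng.permutation(len(x)):  # visit samples in random order
        err = w * x[i] + b - y[i]      # residual for one sample
        w -= lr * err * x[i]           # dL/dw for this sample
        b -= lr * err                  # dL/db for this sample

print(round(w, 2), round(b, 2))        # should land near w = 3.0, b = 0.5
```

Shrinking the learning rate over epochs, as covered in the unit, reduces the
jitter that the constant step size leaves around the optimum.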

Unit 2 – The Neural Network [8 hrs]


2.1. The Neuron
2.2. Expressing Linear Perceptrons as Neurons
2.3. Example: Learning XOR
2.4. Feed-Forward Neural Networks
2.5. Gradient-Based Learning
2.6. Hidden Units
2.7. Architecture Design
2.8. Back-Propagation and Other Differentiation Algorithms
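
The XOR example (item 2.3) ties most of this unit together: a sketch of a tiny
feed-forward network trained by back-propagation, assuming sigmoid units and
squared loss to keep the gradient derivation short. The layer sizes and seed
here are arbitrary choices, and training a net this small can occasionally
stall in a poor local minimum:

```python
import numpy as np

# A minimal 2-3-1 feed-forward network trained by back-propagation to
# learn XOR. Sigmoid activations in both layers; squared-error loss.
rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 3)); b1 = np.zeros(3)
W2 = rng.normal(0, 1, (3, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def forward():
    h = sigmoid(X @ W1 + b1)            # hidden activations
    return h, sigmoid(h @ W2 + b2)      # network output

_, out = forward()
loss_before = float(((out - y) ** 2).mean())

lr = 1.0
for _ in range(10000):
    h, out = forward()
    d_out = (out - y) * out * (1 - out)   # dL/dz at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)    # gradient back-propagated to hidden
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(0)

loss_after = float(((out - y) ** 2).mean())
print((out > 0.5).astype(int).ravel())    # thresholded predictions for XOR
```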

Unit 3 – Regularization for Deep Learning [8 hrs]


3.1. Parameter Norm Penalties
3.2. Dataset Augmentation
3.3. Early Stopping
3.4. Parameter Tying and Parameter Sharing
3.5. Sparse Representations
3.6. Bagging and Other Ensemble Methods
3.7. Dropout
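
Dropout (item 3.7) is the easiest of these regularizers to sketch. The snippet
below shows the "inverted dropout" variant as a standalone function; the
function name and array shapes are illustrative:

```python
import numpy as np

# Inverted dropout: during training each unit is kept with probability p
# and scaled by 1/p, so activations have the same expectation at train
# and test time and no rescaling is needed at inference.
def dropout(h, p, rng, train=True):
    if not train:
        return h                            # identity at test time
    mask = (rng.random(h.shape) < p) / p    # keep-mask, pre-scaled by 1/p
    return h * mask

rng = np.random.default_rng(0)
h = np.ones((1000, 64))                     # dummy activations
out = dropout(h, p=0.8, rng=rng)
print(round(float(out.mean()), 2))          # close to 1.0 in expectation
```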

Unit 4 – Convolutional Neural Networks [8 hrs]


4.1. History of Convolutional Networks
4.2. The Convolution Operation
4.3. Motivation
4.4. Pooling
4.5. Data Types
4.6. Efficient Convolution Algorithms
4.7. Random or Unsupervised Features
4.8. The Neuroscientific Basis for Convolutional Networks
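
The convolution operation (item 4.2) can be written directly as nested loops.
As in most deep learning libraries, the sketch below is really
cross-correlation in "valid" mode: the kernel slides without being flipped.
The function and variable names are illustrative:

```python
import numpy as np

# Discrete 2-D convolution (cross-correlation), valid mode: the output
# shrinks by kernel_size - 1 along each dimension.
def conv2d(image, kernel):
    ih, iw = image.shape
    kh, kw = kernel.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # dot product of the kernel with one image patch
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

img = np.arange(16, dtype=float).reshape(4, 4)
edge = np.array([[1.0, -1.0]])   # horizontal difference filter
print(conv2d(img, edge))         # constant ramp -> every entry is -1.0
```

The unit's discussion of efficient convolution algorithms (item 4.6) covers
how libraries replace these loops with FFT- or matrix-multiplication-based
implementations.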

Unit 5 – Sequence Modeling: Recurrent and Recursive Nets [10 hrs]


5.1. Recurrent Neural Networks
5.2. Bidirectional RNNs
5.3. Encoder-Decoder Sequence-to-Sequence Architectures
5.4. Deep Recurrent Networks
5.5. Recursive Neural Networks
5.6. The Challenge of Long-Term Dependencies
5.7. The Long Short-Term Memory and Other Gated RNNs
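
The core recurrence of a vanilla RNN (item 5.1) fits in a few lines: the same
weight matrices are shared across every time step, which is also the root of
the long-term dependency problem in item 5.6. The dimensions and seed below
are arbitrary illustrative choices:

```python
import numpy as np

# One forward pass of a vanilla RNN: h_t = tanh(x_t W_xh + h_{t-1} W_hh + b),
# with the same parameters reused at every time step.
rng = np.random.default_rng(0)
W_xh = rng.normal(0, 0.5, (3, 4))   # input -> hidden
W_hh = rng.normal(0, 0.5, (4, 4))   # hidden -> hidden (shared across time)
b = np.zeros(4)

def rnn_forward(xs):
    h = np.zeros(4)                 # initial hidden state
    states = []
    for x in xs:                    # iterate over time steps
        h = np.tanh(x @ W_xh + h @ W_hh + b)
        states.append(h)
    return np.stack(states)

seq = rng.normal(size=(6, 3))       # a sequence of six 3-dim inputs
H = rnn_forward(seq)
print(H.shape)                      # (6, 4): one hidden state per step
```

Repeated multiplication by W_hh during back-propagation through time is what
makes gradients vanish or explode over long sequences, motivating the gated
architectures (LSTM, GRU) in item 5.7.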
Unit 6 – Applications [5 hrs]
6.1. Large-Scale Deep Learning
6.2. Computer Vision
6.3. Speech Recognition
6.4. Natural Language Processing
6.5. Recommender Systems

Text Book:
Ian Goodfellow, Yoshua Bengio, Aaron Courville, Deep Learning, MIT Press, 2016.
Reference Books:
1. Nikhil Buduma with Nicholas Locascio, Fundamentals of Deep Learning, O'Reilly Media, 2017.
2. Christopher M. Bishop, Pattern Recognition and Machine Learning, Springer, 2006.
3. Thomas M. Mitchell, Machine Learning, McGraw-Hill, 1997.
