
Concept note on M.Tech.

Augmented Reality and Virtual Reality (AR & VR)

(CSE, EE, AIDE)

1. Introduction

Recent advancements in computer graphics and sensor technologies have provided a complete
set of interactive tools that give us unprecedented possibilities to completely immerse humans
in virtual environments or to augment real environments. Virtual Reality (VR) and Augmented
Reality (AR) constitute a completely new computing paradigm that is finding its way into
applications in industry, health care, education, entertainment, etc. The prime aim of this
master's programme is to train professionals who can design, implement, and evaluate VR and
AR applications.

To meet the increasing demand for engineers with diverse backgrounds in the field of
augmented and virtual reality, and to support relevant research and development, an M.Tech.
programme in AR and VR has been designed. The proposed M.Tech. programme will provide
interdisciplinary learning opportunities to participate in one of the most challenging advanced
technology areas. It is also envisaged that this programme will serve as a platform to test
innovative ideas in the design, development, and testing of AR and VR systems.

Eligible Branches: Mechanical Engineering, Electrical Engineering, Electronics and
Communication Engineering, Computer Science and Engineering, Instrumentation and Control,
Aerospace Engineering, Automobile Engineering, Aeronautical Engineering, Engineering Physics,
Civil Engineering, Bio-technology

2. Objective of the Program


The objectives of the programme are to:
● produce competent engineers who can design and develop AR and VR applications;
● impart in-depth knowledge and analytical and experimental research skills to solve
problems in AR and VR systems;
● provide knowledge of several tools for the design and modeling of AR and VR applications
and immersive experiences;
● develop the ability to cultivate technological solutions that address the growing demands of
AR and VR systems.

3. Graduate Attributes:
The graduates of this program will have:
1. Strong fundamentals in graphics, AR and VR, sensation and perception, and machine
learning.
2. Ability to design interactive AR/VR experiences.
3. Ability to design AR/VR game frameworks.
4. Ability to design content and animation for AR/VR applications.
5. Understanding of applications of AR/VR technologies in medical, industrial, educational,
and other relevant domains.
6. Understanding of cutting-edge research on AR/VR technologies.
7. Ability to innovate and contribute towards the development of next-generation AR/VR
systems.
8. High-quality technical communication skills.
9. Appreciation and adherence to norms of professional ethics.
10. Ability to plan and manage technical projects.

4. Learning Outcomes:
At the end of the program, a student is expected to have:

1. Understanding of the fundamentals related to AR/VR technologies, content creation, and
hardware design for AR/VR applications.
2. Knowledge of working principles of Game Design, Medical Application, AR/VR based
training simulators, Navigation and tracking in AR/VR.
3. Knowledge of working principles of scanning and 3D reconstruction and sensor data
fusion.
4. Knowledge of Image Synthesis, Rendering, and Animation.
5. Developing awareness about recent trends in AR/VR systems.
6. Ability to apply the acquired knowledge for analysis and design of AR/VR systems.
7. Critical thinking and scientific problem-solving.
8. Skill to communicate scientific ideas and/or application systems.
9. Basic project management skills.

5. Topic Cloud

Core Topic cloud


● Augmented Reality, Virtual Reality, Head Mounted Display, Visual Perception, Geometry
of Camera and Visual World, 3D Modeling, Manipulation and Interaction with objects,
Neural Networks, Light and Optics, haptic interfaces and haptic rendering, Mobile
VR/AR, Convolutional Neural Networks, Transfer Learning, Representation Learning,
Generative Models, Model Compression, Design Thinking Process, Context Modeling,
Ideation and Storyboarding, Prototyping, Evaluation of Interaction and Experience, User
Feedback
● Physiological basis of different senses and psychophysical methods for computing
perceptual thresholds

Advanced Topic Cloud


● Scanning, 3D Reconstruction, Surface Representation, Geometry Processing, Image
Synthesis for AR/VR, Ray Tracing, Image and Physics Based Rendering, Neural Image
Synthesis, Game Design, Hardware Design, Sensor Data Fusion, Tracking and
Navigation, Blender’s interface and Modeling in Blender, Texturing and Shading in
Blender, Viewing scene in VR in Blender, importing to Unity or Unreal Engine,
Introduction to the interface and Scene Setup, Creating a VR App.

Topic Clouds and Mapping of Topic Clouds with Proposed Compulsory Courses

1. Topic Cloud: Augmented Reality, Virtual Reality, Head Mounted Displays, Visual Perception,
   Geometry of Camera and Visual World, Light and Optics, Tracking of Camera and Head
   Course: Introduction to Augmented Reality and Virtual Reality
2. Topic Cloud: Physiology of Perception, Cutaneous Senses, Pain, Olfaction, Gustation,
   Auditory System, Auditory Localization, Speech, Visual System, Object Perception, Motion
   Detection, Depth and Size Perception, psychophysical methods for computing perceptual
   thresholds
   Course: Sensation and Perception
3. Topic Cloud: VR UX with the Unity API, Interaction and Locomotion, Working with Mobile VR
   in Unity
   Course: Mobile VR and AR
4. Topic Cloud: Kinesthetic and Tactile Senses, Haptic Perception, Haptic Modeling and
   Rendering of Virtual Environment, Haptics for AR/VR
   Course: Introduction to Haptics
5. Topic Cloud: Design Thinking Process, Context Modeling, Ideation and Storyboarding,
   Prototyping, Evaluation of Interaction and Experience, User Feedback
   Course: Interaction and Experience Design for VR/AR
6. Topic Cloud: Convolutional Neural Networks, Transfer Learning, Representation Learning,
   Generative Models, Model Compression
   Course: Deep Learning

6. Program’s Structure for M.Tech. (Augmented and Virtual Reality)


S.N. Course Type Credits
1 M.Tech. Compulsory (MC) 19
2 M.Tech. Electives (ME) 15
3 M.Tech. Open (MO)/Seminar/Minor Project 6
4 M.Tech. Project (MP) 16
Total Graded 56
5 Non-Graded (NG) 4
Total 60

List of M.Tech. Compulsory (MC):


Sr. No. Title L-T-P Credits
1 *Introduction to Augmented Reality and Virtual Reality 3-0-0 3
2 *Sensation and Perception 3-0-0 3
3 Mobile VR and AR 2-0-4 4
4 *Introduction to Haptics 3-0-0 3
5 *Deep Learning 3-0-0 3
6 Interaction and Experience Design for VR/AR 2-0-2 3

List of M.Tech. Electives (ME)


Sr. No. Title L-T-P Credits
1 Content creation for VR/AR 1-0-4 3
2 Psychophysics 3-0-0 3
3 Spatial Audio 3-0-0 3
4 Advanced Computer Vision 3-0-0 3
5 Neural Image Synthesis for VR/AR 2-0-2 3
6 VR/AR Application Development 2-0-2 3
7 *Image Synthesis 3-0-0 3
8 *Animation 3-0-0 3
9 *3D Shape Analysis 3-0-0 3
10 Seminar in VR/AR 3
11 Minor Project in VR/AR 3

Preparatory courses: (to be advised based on the background of the students)


Sr. No. Title L-T-P Credits
1 *Linear Algebra 3-1-0 4
2 *Probability and Statistics 3-1-0 4
3 *Data Structures and Practices 0-0-2 1
4 *Programming Techniques 2-0-2 3
5 Python Programming 1-0-4 3
6 *Digital Image Processing 2-0-2 3
7 *Computer Graphics 3-0-0 3
8 *Machine Learning 3-0-0 3
9 *Computer Vision 3-0-0 3

* Courses are already approved at IIT Jodhpur

Semester-Wise Plan

I Semester (Course, L-T-P, Graded Credits)
MC  Introduction to Augmented and Virtual Reality   3-0-0   3
MC  Sensation and Perception                        3-0-0   3
PE1 Program Elective                                        3
NG  Technical Communication                         1-0-0   1
Total (Graded + Non-graded)                                 10

II Semester (Course, L-T-P, Graded Credits)
MC  Introduction to Haptics                          3-0-0   3
PE2 Program Elective                                         3
NG  Innovation and IP Management                     1-0-0   1
MC  Deep Learning                                    3-0-0   3
Total (Graded + Non-graded)                                  10

III Semester (Course, L-T-P, Graded Credits)
MC  Interaction and Experience Design for AR/VR      2-0-2   3
PE3 Program Elective                                         3
PE4 Program Elective                                         3
NG  Systems Engineering and Project Management       1-0-0   1
Total (Graded + Non-graded)                                  10

IV Semester (Course, L-T-P, Graded Credits)
MC  Mobile AR and VR                                 2-0-4   4
OE1 Open Elective/Seminar/Minor Project                      3
NG  Ethics and Professional Life                     1-0-0   1
ME5 Program Elective                                         3
Total (Graded + Non-graded)                                  11

V Semester (Course, L-T-P, Graded Credits)
PP1 Project-I                                        0-0-6   6
OE2 Open Elective/Seminar/Minor Project                      3
Total (Graded + Non-graded)                                  9

VI Semester (Course, L-T-P, Graded Credits)
PP2 Project-II                                       0-0-10  10
Total (Graded + Non-graded)                                  10

Acknowledgement: This program has been developed by the following PanIIT faculty members:
● Prof. Parag Chaudhuri, IIT Bombay
● Prof. Subodh Kumar, IIT Delhi
● Prof. Prem Kalra, IIT Delhi
● Prof. Manivannan, IIT Madras
● Prof. Santanu Chaudhury, IIT Jodhpur
● Prof. Uma Mudenagudi, KLE Technological University, Hubballi
● Dr. Amit Bhardwaj, IIT Jodhpur
● Dr. Manish Narwaria, IIT Jodhpur
● Dr. Himanshu Kumar, IIT Jodhpur
● Dr. Rajendra Nagar, IIT Jodhpur
● Dr. Nimish Vohra, IIT Jodhpur

Detailed M.Tech. Compulsory Course Contents


(Existing)

Title Introduction to Haptics Number EEL7XX0
Department Electrical Engineering L-T-P [C] 3–0–0 [3]
Offered for B.Tech., M.Tech., Ph.D Type Elective
Prerequisite Fourier Analysis, Basics of Linear Algebra
Objectives
The Instructor will:
1. Provide the fundamentals of haptics, and its application to various domains.
2. Provide exposure to practical problems and their solutions in the field of haptics
Learning Outcomes
The students are expected to have the ability to
1. Develop haptic modeling and rendering algorithms for virtual objects using various
haptic interfaces.
2. Design psychophysical experiments for evaluation of the developed rendering
algorithms.
Contents
Human Haptics
Haptic Sensations- Kinesthetic and Tactile, Physiology of Human Touch, Overview of Haptic
Interfaces, Applications of Haptics (4 lectures)

Haptic Perception- Kinesthetic/Tactile, Multisensory Interactions, Psychophysics for Haptic
Perception: Psychometric Function, Perceptual Thresholds, Laws of Perception, Classical and
Modern Psychophysical Methods (10 lectures)
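For reference, the threshold quantities named above can be written compactly; the following is a minimal sketch using the standard textbook definitions (not specific to this syllabus):

```latex
% Weber fraction: the just-noticeable difference (JND) \Delta I is a
% roughly constant fraction c of the reference intensity I.
\frac{\Delta I}{I} = c
% A common sigmoidal psychometric function relating stimulus change x to
% detection probability, with threshold \alpha and slope parameter \beta.
P(x) = \frac{1}{1 + e^{-(x-\alpha)/\beta}}
```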

Haptic Rendering
Basic Concepts and Steps of Haptic Rendering, Rendering Stability, Kinesthetic Rendering: 3
DoF Rendering, Haptic Volume Rendering; Texture/Tactile Rendering, Friction Rendering,
Measurement-based/Data-driven Haptic Rendering
(12 lectures)
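As an illustration of the basic kinesthetic rendering step listed above, the following is a minimal sketch (assuming NumPy; the sphere primitive, stiffness value, and function name are illustrative, not taken from the syllabus) of penalty-based 3-DoF force rendering against a rigid sphere:

```python
import numpy as np

def render_force(probe_pos, center, radius, stiffness=500.0):
    """Penalty-based 3-DoF haptic force for a rigid sphere.

    If the haptic probe penetrates the sphere, push it back along the
    surface normal with a spring force F = stiffness * penetration_depth.
    """
    offset = probe_pos - center
    dist = np.linalg.norm(offset)
    penetration = radius - dist
    if penetration <= 0.0:           # probe is outside the object: no contact force
        return np.zeros(3)
    normal = offset / dist           # outward surface normal at the contact
    return stiffness * penetration * normal

# Example: probe 1 mm inside a 5 cm sphere centered at the origin.
print(render_force(np.array([0.0, 0.0, 0.049]), np.zeros(3), 0.05))
```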

Haptics for Telepresence and Tele-action (TPTA)


Overview of a TPTA system, Issues for Haptic Transmission, Kinesthetic Data Compression,
Tactile Data Compression, Haptics over Shared Network (8 lectures)

Emerging areas in Haptics


Surface Haptics- Electrostatic vs Ultrasonic; Mid-air Haptics, Haptic Interaction in Virtual and
Augmented Reality (VR/AR) (8 lectures)

Textbooks:
1. A. Bhardwaj and S. Chaudhuri, Kinesthetic Perception: A Machine Learning Approach,
Springer Publishers, 2017.
2. MC Lin and MA Otaduy (Eds), Haptic Rendering: Foundations, Algorithms, and
Applications, AK Peters, Ltd; London: 2008.
3. G. A. Gescheider, Psychophysics: The Fundamentals, 3rd ed., Lawrence Erlbaum
Associates, Publishers, 1997.
Preparatory Course Material:
1. E. Steinbach, M. Strese, M. Eid, X. Liu, A. Bhardwaj, Q. Liu, M. Al-Jaa'afrah, T.
Mahmoodi, R. Hassen, A. E. Saddik, and O. Holland, Haptic Codecs for the Tactile
Internet, Proceedings of the IEEE, pp. 124, 2018.
2. S. Choi and K. J. Kuchenbecker, Vibrotactile Display: Perception, Technology, and
Applications, Proceedings of the IEEE, vol. 101, pp. 2093-2104, 2013.
3. E. B. Goldstein, Sensation and Perception, 7th Ed., Thomson Wadsworth, Ch. 14, 2007

(Existing)
Title Deep Learning Number CSL7XX0

Department Computer Science and Engineering L-T-P [C] 3–0–0 [3]
Offered for M.Tech 1st year and PhD 1st year Type Compulsory
Prerequisite Machine Learning

Objectives
The Instructor will:
1. Provide technical details about various recent algorithms and software platforms related to
Machine Learning with specific focus on Deep Learning.
Learning Outcomes
The students are expected to have the ability to:
1. Design and program efficient algorithms related to recent machine learning techniques,
train models, conduct experiments, and develop real-world ML-based applications and
products

Contents

Fractal 1: Foundations of Deep Learning


Deep Networks: CNN, RNN, LSTM, Attention layers, Applications (8 lectures)
Techniques to improve deep networks: DNN Optimization, Regularization, AutoML (6 lectures)
Fractal 2: Representation Learning
Representation Learning: Unsupervised pre-training, transfer learning, and domain adaptation,
distributed representation, discovering underlying causes (8 lectures)
Auto-DL: Neural architecture search, network compression, graph neural networks (6 lectures)
Fractal 3: Generative Models
Probabilistic Generative Models: DBN, RBM (3 lectures)
Deep Generative Models: Encoder-Decoder, Variational Autoencoder, Generative Adversarial
Network (GAN),
Deep Convolutional GAN, Variants and Applications of GANs (11 lectures)
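To make the Fractal 1 building blocks concrete, here is a minimal sketch of a small convolutional network (assuming PyTorch, which the course does not prescribe; the layer sizes are arbitrary):

```python
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    """A tiny CNN of the kind covered in Fractal 1: conv -> pool -> linear."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(start_dim=1))

# One forward pass on a batch of four 32x32 RGB images.
logits = SmallCNN()(torch.randn(4, 3, 32, 32))
print(logits.shape)  # torch.Size([4, 10])
```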

Textbook
1. Goodfellow, I., Bengio, Y., and Courville, A. (2016), Deep Learning, The MIT Press

Reference Books
1. Charniak, E. (2019), Introduction to Deep Learning, The MIT Press.
2. Research Literature
Self Learning Material
1. https://www.deeplearningbook.org/

(Existing)
Title Introduction to Augmented and Virtual Reality Number CSL7XX0
Department Computer Science and Engineering L-T-P [C] 3–0–0 [3]
Offered for M.Tech Type Compulsory
Prerequisite

Objectives
The Instructor will:
1. Discuss the perceptual issues involved in AR and VR, focusing upon the human element
2. Explain hardware- and software-related issues in AR and VR
Learning Outcomes
The students are expected to have the ability to:
1. Explain perceptual concepts governing augmented reality and virtual reality
2. Identify and solve the issues of various augmented reality and virtual reality frameworks
3. Design immersive experience using AR and VR Software

Contents
Introduction: Definition of X-R (AR, VR, MR), modern experiences, historical perspective,
Hardware, sensors, displays, software, virtual world generator, game engines (6 lectures)
Geometry of Visual World: Geometric modeling, transforming rigid bodies, yaw, pitch, roll, axis-
angle representation, quaternions, 3D rotation inverses and conversions, homogeneous
transforms, transforms to displays, look-at, and eye transform, canonical view and perspective
transform, viewport transforms (8 lectures)
Light and Optics: Interpretation of light, reflection, optical systems (4 lectures)
Visual Perception: Photoreceptors, Eye and Vision, Motion, Depth Perception, Frame rates and
displays (6 lectures)
Tracking: Orientation, Tilt, Drift, Yaw, Lighthouse approach (4 lectures)
Head Mounted Display: Optics, Inertial Measurement Units, Orientation Tracking with IMUs,
Panoramic Imaging and Cinematic VR, Audio (8 lectures)
Frontiers: Touch, haptics, taste, smell, robotic interfaces, telepresence, brain-machine interfaces
(6 lectures)
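For the rotation representations listed under "Geometry of Visual World" above, here is a minimal sketch (assuming NumPy; the function names are illustrative) of converting an axis-angle rotation to a unit quaternion and then to a rotation matrix:

```python
import numpy as np

def axis_angle_to_quaternion(axis, angle):
    """Unit quaternion (w, x, y, z) for a rotation of `angle` radians about `axis`."""
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    w = np.cos(angle / 2.0)
    xyz = np.sin(angle / 2.0) * axis
    return np.concatenate(([w], xyz))

def quaternion_to_matrix(q):
    """3x3 rotation matrix from a unit quaternion (w, x, y, z)."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

# 90-degree yaw about the vertical (z) axis.
q = axis_angle_to_quaternion([0, 0, 1], np.pi / 2)
print(np.round(quaternion_to_matrix(q), 3))
```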

Textbook
1. Marschner, S. and Shirley, P., (2016), Fundamentals of Computer Graphics, 4th Edition, CRC Press
2. LaValle, S. M., (2016), Virtual Reality, Cambridge University Press
3. Schmalstieg D, and Hollerer T. (2016). Augmented Reality: Principles & Practice, Pearson
Education India

Reference Books
1. Jerald,J., (2015), The VR Book: Human-Centered Design for Virtual Reality, Morgan &
Claypool
2. Mather,G., (2009), Foundations of Sensation and Perception, 2nd Edition, Psychology
Press
3. Shirley, P., Ashikhmin, M. and Marschner, S., Fundamentals of Computer Graphics, 3rd
Edition, A K Peters/CRC Press
4. Bowman,D.A., Kruijff,E., LaViola,J.J. and Poupyrev,I., (2014), 3D User Interfaces: Theory
and Practice, 2nd Edition, Addison Wesley Professional

Self Learning Material


1. Steven M. LaValle, Video Lectures,
https://www.youtube.com/playlist?list=PLbMVogVj5nJSyt80VRXYC-YrAvQuUb6dh

(New)
Title Interaction and Experience Design for AR/VR Number EEL/CSL/AIL7xx0
Department EE/CSE/AIDE L-T-P [C] 2–0–2 [3]
Offered for UG, M.Tech and PhD Type Compulsory
Prerequisite None

Objectives
Introduce the students to:
1. Think holistically from a user's perspective.
2. Collaboratively come up with ideas.
3. Prototype ideas and seek user feedback.

Learning Outcomes
The students will be able to:
1. Apply the method of design thinking to come up with compelling ideas and prototypes.
2. Apply the known guidelines of usability and interaction (with respect to the medium) to
create a frictionless user experience.

Contents

● Introduction to the Design Thinking process: Empathy, Human- and life-centered


approach, Problem statement, Ideation, Prototyping, Testing; Design critiques. [4 lectures, 2 labs]
● Needfinding: Problem context; User personas; Observation of context; Talking to
users; Constraints; Problem statement. [4 lectures, 2 labs]
● Ideation and storyboarding: Mind maps; Thinking visually; Collaborative ideation–
whiteboards, digital whiteboard tools (Google Jamboard, Mural, Miro, etc.); Synthesis of
ideas; Evaluating ideas from the perspective of the medium; Use cases; Summarizing
the idea into key use cases. [6 lectures, 3 labs]
● Prototyping: Introduction to prototypes–low to high fidelity; Importance of
prototyping; Making prototypes–paper, physical, digital; Prototype key use cases. [6 lectures, 5 labs]
● Evaluation of interaction and experience: Usability; Usefulness; Guidelines for
interaction development; Evaluating usability against the currently known guidelines. [4
lectures]
● User feedback: Taking user feedback on the prototype for key use cases; the Think-
Aloud protocol. [4 lectures, 2 labs]

Textbooks:

1. Jason Jerald, “The VR Book: Human-Centered Design for Virtual Reality,” Illustrated
edition, Morgan and Claypool Publishers, 2015

References:

1. Joseph LaViola et al., "3D User Interfaces: Theory and Practice," 2nd Ed., Addison-
Wesley, 2017
2. Cornel Hillmann, “UX for XR: User Experience Design and Strategies for Immersive
Technologies (Design Thinking),” 1st Ed, Apress, 2021
3. Don Norman, “The Design of Everyday Things,” 2nd Edition, Basic Books, 2013
Preparatory Course Material:
1. M. Nebeling, Developing AR/VR/MR/XR Apps with WebXR, Unity & Unreal, University of
Michigan, https://www.coursera.org/learn/develop-augmented-virtual-mixed-extended-
reality-applications-webxr-unity-unreal

(Existing)
Course Title Sensation and Perception Course No. CGL7xx0
Department/IDRP CBSA-AIDE L-T-P [C] 3-0-0 [3]
Offered for B.Tech Type Compulsory
Prerequisite

Objectives
1. The course will introduce the basics of sensation and perception

Learning Outcomes
1. Students learn about perception processes
2. Enable students to use knowledge of perceptual processes for interface design

Contents

Introduction – history and philosophy (1)


Methods & Techniques: Signal Detection Theory (2)
Physiology of Perception, Neuroimaging, Bottom-Up vs. Top-Down (3)
Cutaneous Senses, Pain (2)
Olfaction (smell) (2)
Gustation (taste); Flavor (2)
Auditory System: Pathways (2)
Fundamental Auditory Functions (1)
Auditory Localization (2)
Auditory Scene Analysis (2)
Speech (2)
Visual System: Pathways (2)
Visual Functions (2)
Object Perception (2)
Visual Attention (2)
Taking Action (2)
Motion Detection (2)
Color Vision (3)
Depth and Size Perception (4)
Constancy & Illusions, and Camouflage (2)

Reference Text

1. Goldstein, E. B. & Brockmole, James R. (2017). Sensation and Perception, 10th Edition.
Cengage Learning.

(New)
Title Mobile AR and VR Number EEL/CSL/AIL7xx0
Department EE/AIDE/CSE L-T-P [C] 2–0–4 [4]
Offered for UG, M.Tech and PhD Type Compulsory
Prerequisite None

Objectives

The Instructor will:

1. Provide exposure to mobile VR/AR application development tools

Learning Outcomes

The students are expected to have the ability to

1. design, develop, troubleshoot, and publish their own mobile VR/AR applications using
different VR engines

Contents

Introduction to Mobile AR/VR [2 Lectures]

Working with Mobile VR in Unity: Unity XR, headset and controller tracking, loss of tracking,
Unity core APIs related to XR functionality. [4 Lectures, 2 Labs]

Interaction and Locomotion: interact with objects in VR, scene exploration, teleportation, and
grab and throw interactions, how to avoid causing nausea and dizziness for your user. [8
Lectures, 2 Labs]

VR UX with the Unity API: Text Issues in VR, Optimizing Text Readability, Attaching Displays
to a Controller, Interacting with Canvas Elements, Moving Constrained Objects With Tracking,
Hitting Objects, UX That Moves With You, Triggering Canvas Events from Raycasts, Creating
Virtual Controls That Mimic Real World Controls. [14 Lectures, 4 Labs]

Textbooks:

1. Greengard, Samuel. Virtual Reality. MIT Press, 2019.

Preparatory Course Material:

1. M. Nebeling, Developing AR/VR/MR/XR Apps with WebXR, Unity & Unreal, University of
Michigan, https://www.coursera.org/learn/develop-augmented-virtual-mixed-extended-
reality-applications-webxr-unity-unreal
2. Peter Patterson, Mobile VR App Development with Unity, Unity Technologies,
https://www.coursera.org/learn/mobile-vr-app-development-unity

Detailed M.Tech. Elective Course Contents


(Existing)

Title Image Synthesis Number EE/CSE 7xx0
Department EE, CSE L-T-P [C] 3–0–0 [3]
Offered for B.Tech, M.Tech, PhD Type Elective
Prerequisite Computer Graphics

Objectives
The Instructor will introduce students to fundamentals of realistic image synthesis.

Learning Outcomes
The students are expected to have the ability to synthesize realistic-looking images for
animations, simulation, gaming, etc.

Contents

The Goals of Rendering, Ray Tracing I: Basic Algorithm, Ray-Surface Intersection, Ray
Tracing II: Acceleration Techniques [8 Lectures]
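A minimal sketch of the ray-surface intersection step at the heart of the basic ray tracing algorithm (assuming NumPy; the sphere and ray values are illustrative):

```python
import numpy as np

def ray_sphere_intersect(origin, direction, center, radius):
    """Return the nearest positive hit distance t along the ray, or None.

    Solves |origin + t*direction - center|^2 = radius^2, a quadratic in t.
    """
    oc = origin - center
    a = np.dot(direction, direction)
    b = 2.0 * np.dot(oc, direction)
    c = np.dot(oc, oc) - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None                      # ray misses the sphere
    sqrt_disc = np.sqrt(disc)
    for t in ((-b - sqrt_disc) / (2 * a), (-b + sqrt_disc) / (2 * a)):
        if t > 1e-6:                     # ignore hits behind the ray origin
            return t
    return None

# A ray from the origin along +z toward a unit sphere centered at z = 5.
print(ray_sphere_intersect(np.zeros(3), np.array([0., 0., 1.]),
                           np.array([0., 0., 5.]), 1.0))  # ~4.0
```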

The Light Field, Lights and Lighting, Illumination, Camera and Film, Sampling and
Reconstruction, Aliasing and Antialiasing, Statistical Sampling [8 Lectures]

Reflection Models: BRDFs, Ideal Specular and Diffuse, Reflection Models II: Glossy,
Texture, The Rendering Equation, Materials and BRDFs, Mapping techniques, Exotic
materials [10 Lectures]

Monte Carlo Methods: Probability, Sampling and Variance Reduction, Sampling Paths,
Irradiance Caching and Photon Maps [8 Lectures]
Radiosity: Form Factors, Solvers, Meshing and Hierarchical Techniques [8 Lectures]

Textbooks:
1. Akenine-Möller, T., Haines, E., & Hoffman, N. (2019). Real-Time Rendering. CRC Press.
2. Hughes, J. F., Van Dam, A., Foley, J. D., McGuire, M., Feiner, S. K., & Sklar, D. F. (2014).
Computer graphics: principles and practice. Pearson Education.

Self Learning Material


1. Prof. Pat Hanrahan, Computer Graphics: Image Synthesis Techniques, Stanford
University: https://graphics.stanford.edu/courses/cs348b-00/
2. Prof. Ravi Ramamoorthi, Computer Graphics, University of California, San Diego,
https://www.youtube.com/user/raviramamoorthi/videos

(New)
Title VR/AR Application Development Number EEL/CSL/AIL7xx0
Department EE/AIDE/CSE L-T-P [C] 2–0–2 [3]
Offered for M.Tech. (AR/VR), 1st Year Type Elective
Prerequisite Computer Programming

Objectives
This course will prepare students to become proficient in design and development of AR/VR
applications using common tools and frameworks.
Learning Outcomes
The students are expected to obtain the ability to:
1. Analyze and describe application specifications.
2. Determine the technical content, hardware, and programming requirements matching the
specifications
3. Assemble and develop the necessary content and modules
4. Evaluate and test the application to verify that it meets the specifications

Contents
● Specifying components of a VR application: Environment, User, and Interaction [1
Lecture]
● Software modules in a VR application, Senses and VR device for those senses,
Coordination across senses [2 Lectures]
● Device APIs, and their integration, 3D world, Gaze, Tracking, Touch, Pseudo-physics for
VR [6 Lectures]
● Manipulation and Interaction with objects, Integrating traditional interfaces in VR,
Techniques for user locomotion, wayfinding, and orientation in 3D [7 Lectures]
● Design principles: discoverability, affordances, perceptibility, constraints, error
tolerance, Recovering from transient failure of tracking, sensing [7 Lectures]
● The notion of cognitive load on human sensing and memory, Multi-user VR and Social
interaction [2 Lectures]
● Measuring Usability and immersion, Gulfs of execution and evaluation, Pitfalls for Virtual
Reality [3 Lectures]

Laboratory Experiments
One small application and one large application development. The large application will
have a client, initial specifications, initial design, module-wise implementation, and a user
study.

Textbook
1. Greengard, Samuel. Virtual Reality. MIT Press, 2019.
Online Course Material:
1. M. Nebeling, Developing AR/VR/MR/XR Apps with WebXR, Unity & Unreal, University of
Michigan, https://www.coursera.org/learn/develop-augmented-virtual-mixed-extended-
reality-applications-webxr-unity-unreal

(New)
Title Spatial Audio Number EEL7xx0

Department Electrical Engineering L-T-P [C] 3–0–0 [3]
Offered for UG, M.Tech, PhD Type Elective
Prerequisite Fundamentals of Signals and Systems

Objectives
The Instructor will:
1. provide knowledge of spatial audio capture and rendering process

Learning Outcomes
The students are expected to have the ability to:
1. analyze and integrate spatial audio into AR/VR systems for enhancing user experience

Contents
Introduction to spatial audio: technology evolution from traditional channel-based audio to
immersive spatial audio, importance of audio in an immersive AR/VR experience (3 lectures)
Audio as physical phenomenon, properties of hearing, cues in spatial hearing, time and level
difference for audio perception (5 lectures)
Auditory events and vector-base panning: loudness, directionality, vector models (5 lectures)
Amplitude panning techniques: Vector-Base Amplitude Panning, Multiple-Direction Amplitude
Panning, Panner plugins (5 lectures)
Introduction to ambisonics: recording, storage formats, basics of spherical harmonic analysis
(5 lectures)
Format conversion, ambisonic rendering engines and use-cases (5 lectures)
Binaural audio: head related transfer function (HRTF) basics, HRTF measurement in practice,
use of HRTF in audio rendering, binaural audio reproduction technology (8 lectures)
Use-cases of spatial audio for immersive AR/VR: Immersive walkthrough, gaming, event
simulation etc. (6 lectures)
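As a simple precursor to the amplitude panning techniques listed above, here is a minimal sketch (assuming NumPy; not taken from the syllabus) of constant-power stereo panning, where the left/right gains keep total power constant:

```python
import numpy as np

def constant_power_pan(mono, pan):
    """Pan a mono signal to stereo.

    pan = -1.0 is hard left, 0.0 is center, +1.0 is hard right.
    The sine/cosine gains satisfy gL^2 + gR^2 = 1 for every position.
    """
    theta = (pan + 1.0) * np.pi / 4.0      # map [-1, 1] -> [0, pi/2]
    g_left, g_right = np.cos(theta), np.sin(theta)
    return np.stack([g_left * mono, g_right * mono], axis=-1)

# Pan a one-second 440 Hz tone slightly to the right.
t = np.linspace(0, 1.0, 48000, endpoint=False)
stereo = constant_power_pan(np.sin(2 * np.pi * 440 * t), pan=0.3)
print(stereo.shape)  # (48000, 2)
```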
Textbook
1. Zotter, F., & Frank, M. (2019). Ambisonics: A practical 3D audio theory for recording, studio
production, sound reinforcement, and virtual reality (p. 210). Springer Nature.
2. Rumsey F., Spatial Audio, Routledge Publisher, 2012.

Self Learning Material


Pan, S.X., Introduction to Audio in VR, Coursera, University of London,
https://www.coursera.org/lecture/3d-models-virtual-reality/introduction-to-audio-in-vr-
CQ1Nb

Preparatory Course Material


Jagannatham, A. K., Principles of Signals and Systems, NPTEL Course Material, Department
of Electrical Engineering, Indian Institute of Technology Kanpur,
https://nptel.ac.in/courses/108104100/

(Existing)
Title Animation Number EEL/CSL7xx0
Department EE, CSE L-T-P [C] 3–0–0 [3]
Offered for B.Tech, M.Tech, PhD Type Elective
Prerequisite Computer Graphics

Objectives
The Instructor will introduce students to fundamentals of animation and simulation.

Learning Outcomes
The students are expected to have the ability to use animation techniques for physics-based
real-time systems.

Contents

Core mathematics and methods for computer animation and motion simulation. Traditional
animation techniques. [6 Lectures]

Physics-based simulation methods for modeling shape and motion: particle systems, constraints,
rigid bodies, deformable models, collisions and contact, fluids, and fracture. Animating natural
phenomena. [16 Lectures]
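A minimal sketch of the particle-system idea from the paragraph above (assuming NumPy; gravity-only forces and explicit Euler integration are chosen purely for brevity):

```python
import numpy as np

def step_particles(pos, vel, dt, gravity=np.array([0.0, -9.81, 0.0])):
    """Advance a particle system one time step with explicit Euler integration."""
    vel = vel + gravity * dt          # accumulate forces (here: gravity only)
    pos = pos + vel * dt              # integrate positions
    # Simple ground-plane collision: clamp to y = 0 and bounce with energy loss.
    below = pos[:, 1] < 0.0
    pos[below, 1] = 0.0
    vel[below, 1] *= -0.6
    return pos, vel

# Simulate 100 particles for one second at 60 steps per second.
pos = np.random.rand(100, 3)
vel = np.zeros((100, 3))
for _ in range(60):
    pos, vel = step_particles(pos, vel, dt=1.0 / 60.0)
print(pos[:2])
```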

Methods for animating virtual characters and crowds. Additional topics selected from data-driven
animation methods, realism and perception, animation systems, motion control, real-time and
interactive methods, and multi-sensory feedback. [18 Lectures]

Textbooks:

1. Hughes, J. F., Van Dam, A., Foley, J. D., McGuire, M., Feiner, S. K., & Sklar, D. F. (2014).
Computer graphics: principles and practice. Pearson Education.
2. Marschner, S., & Shirley, P. (2015). Fundamentals of computer graphics. CRC Press.

Self Learning Material


1. Prof. Doug James, Computer Graphics: Animation and Simulation, Stanford University,
https://graphics.stanford.edu/courses/cs348b-00/

(Existing)
Title 3D Shape Analysis Number EEL7xx0
Department Electrical Engineering L-T-P [C] 3–0–0 [3]
Offered for B.Tech, M.Tech, PhD Type Elective
Prerequisite Fourier Analysis, Linear Algebra

Objectives

The Instructor will:


1. provide students with an understanding of differential geometry tools such as isometry,
geodesics, the Laplace-Beltrami operator, and functional maps.

Learning Outcomes
The students are expected to have the ability to:
1. apply the learned geometry tools to solve practical 3D surface reconstruction problems.
2. generate 3D printable models from unstructured point clouds.

Contents

Introduction to Differential Geometry: Basics of differential geometry of curves and
surfaces and estimating quantities such as normals and curvature. [5 Lectures]

Surface Representation and Reconstruction: 3D Surface Representation, Parametric and
Non-parametric, Implicit, Signed Distance Function, Data structures for surface
representation, 3D surface reconstruction from point clouds. [9 Lectures]

Shape Analysis Tools: Isometry, geodesics on triangle meshes, Scalar functions on shapes,
discrete Laplace-Beltrami operator on triangle meshes, Heat Diffusion on Shapes. [12 Lectures]

Shape Matching: 3D Shape Descriptors, Rigid and Non-rigid registration, Functional Maps,
3D Shape correspondences, Intrinsic Symmetry. [12 Lectures]

Shape Repairing: Shape completion, generating printable 3D models from raw point
clouds. [4 Lectures]
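As a small illustration of the implicit (signed distance function) representation listed above, here is a minimal sketch (assuming NumPy; the shape and grid resolution are illustrative):

```python
import numpy as np

def sphere_sdf(points, center, radius):
    """Signed distance to a sphere: negative inside, zero on the surface, positive outside."""
    return np.linalg.norm(points - center, axis=-1) - radius

# Sample the SDF on a coarse grid; the zero level set is the surface.
grid = np.stack(np.meshgrid(*[np.linspace(-1, 1, 32)] * 3, indexing="ij"), axis=-1)
sdf = sphere_sdf(grid, center=np.zeros(3), radius=0.5)
surface_cells = np.abs(sdf) < (2.0 / 31)   # grid cells close to the zero level set
print(sdf.shape, surface_cells.sum())
```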
Reference Books
1. Botsch, M., Kobbelt, L., Pauly, M., Alliez, P., & Lévy, B. (2010). Polygon mesh processing.
CRC press.
2. Solomon, J. (2015). Numerical algorithms: methods for computer vision, machine
learning, and graphics. CRC press.

Self Learning Material


1. Prof. Justin Solomon, Shape Analysis, Massachusetts Institute of Technology,
http://groups.csail.mit.edu/gdpgroup/6838_spring_2019.html
2. Prof. Siddhartha Chaudhuri, Digital Geometry Processing, Indian Institute of Technology
Bombay: https://www.cse.iitb.ac.in/~cs749/spr2017/
Preparatory Course Material
1. Prof. Gilbert Strang, Linear Algebra and Learning from Data, Massachusetts Institute of
Technology, https://math.mit.edu/~gs/learningfromdata/

(New)
Title Advanced Computer Vision Number EEL/AIL7xx0
Department EE/AIDE L-T-P[C] 3-0-0[3]
Offered for UG, M.Tech and PhD Type Elective
Prerequisite 1. Computer Vision
2. Deep Learning

Objectives
Introduce the students to:
1. advanced research concepts in computer vision, with an emphasis on 3D deep learning,
and 3D neural rendering
2. 3D data capturing devices, human body and motion reconstruction, and human-object
interaction.
Learning Outcomes
The students will be able to:
1. design efficient learning algorithms for 3D data processing
2. design efficient algorithms and systems for markerless 3D human capture and modeling
and putting realistic people in AR/VR based virtual environments.

Content

● Fundamentals for Advanced CV:


3D Deep Learning, Generative Models [6 Lectures]
Point Cloud and Mesh Representation, 3D Feature Learning, Correspondences Estimation
[5 Lectures]
Neural Radiance Fields, Neural Rendering [3 Lectures]
● 3D Capture and Reconstruction:
RGB-D scanning (Kinect, RealSense), camera tracking, sensor calibration [3 Lectures]
Non-Rigid Registration, Implicit Representation Learning, Completion [8 Lectures]
Motion Capture, Body-, Face-, and Hand-Tracking [3 Lectures]
● Modeling and Animating 3D Humans:
Body Models, SCAPE, SMPL [3 Lectures]
Bodies from RGBD, 3D Faces and Expressions [4 Lectures]
Clothing Capture and Modeling, Capturing Contacts, 3D Hands-Object Interaction [5
Lectures]
Putting People into 3D scenes, Inferring Actions, Modeling Human Movements. [5
Lectures]

Textbooks:

1. Szeliski, R. (2022). Computer vision: algorithms and applications. 2nd edition. Springer.
2. Botsch, M., Kobbelt, L., Pauly, M., Alliez, P., & Lévy, B. (2010). Polygon mesh
processing. CRC press

References:

1. Recent research papers: openaccess.thecvf.com

Preparatory Course Material:

1. Advanced Computer Vision by Prof. Yuxiong Wang at University of Illinois,


https://yxw.cs.illinois.edu/course/CS598ACV/S21/

(New)
Title Psychophysics Number EEL/AIL7xx0
Department Electrical Engineering/AIDE L-T-P [C] 3–0–0 [3]
Offered for UG, M.Tech and PhD Type Elective
Prerequisite None

Objective
The Instructor will:
1. Expose the students to the fundamentals of psychophysical methods

Learning Outcome:
The students are expected to have the ability to
1. Design psychophysical experiments for various perceptual studies in AR/VR
2. Compute the just noticeable difference for different stimuli

Contents:

1. Sensation: physiological basis of the visual, auditory and touch senses (6 Lectures)
2. Introduction to perception, basics of psychophysics, Psychophysical laws: Weber's law,
Fechner's law and Stevens' power law; Psychophysical measurement of thresholds:
Absolute Sensitivity, Differential Sensitivity (6 Lectures)
3. Classical Psychophysical Methods: method of limits, method of constant stimuli, method
of adjustment, Adaptive Staircase methods and their different variants (8 Lectures)
4. Theory of Signal Detection (TSD): receiver operating characteristics, sensitivity, response
bias, procedures of TSD, applications of TSD, Measurement of Sensory Attributes and
Discrimination scales (10 Lectures)
5. Psychophysical Ratio Scaling methods: ratio production, ratio estimation, magnitude
production, magnitude estimation; Evaluation of ratio scaling methods (8 Lectures)
6. Statistical Hypothesis testing methods: z-test, t-test, F-test, ANOVA (4 Lectures)
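A minimal sketch of a 1-up/2-down adaptive staircase of the kind listed in item 3 (assuming NumPy; the simulated observer, step size, and stopping rule are illustrative). The 1-up/2-down rule converges to the stimulus level detected on roughly 70.7% of trials:

```python
import numpy as np

rng = np.random.default_rng(0)

def observer_detects(level, threshold=0.2, noise=0.05):
    """Simulated observer: detects the stimulus if its noisy level exceeds the threshold."""
    return level + rng.normal(0.0, noise) > threshold

level, step = 1.0, 0.1
correct_in_a_row, reversals = 0, []
last_direction = None

for _ in range(200):
    if observer_detects(level):
        correct_in_a_row += 1
        if correct_in_a_row == 2:            # two correct in a row -> make it harder
            correct_in_a_row, direction = 0, "down"
            if last_direction == "up":
                reversals.append(level)      # record a reversal of direction
            level, last_direction = max(level - step, 0.0), direction
    else:                                     # one miss -> make it easier
        correct_in_a_row, direction = 0, "up"
        if last_direction == "down":
            reversals.append(level)
        level, last_direction = level + step, direction

# Average the last few reversal levels as the threshold estimate.
print("estimated threshold:", np.mean(reversals[-6:]))
```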

Textbooks:
1. G. A. Gescheider, “Psychophysics: The Fundamentals,” 3rd ed., Lawrence Erlbaum
Associates, Publishers, 1997.
2. E. B. Goldstein, “Sensation and Perception,” 7th Ed., Thomson Wadsworth, 2007

Preparatory Course Material:

1. Detection Theory: A User's Guide, 2nd Edition, N. A. Macmillan & C. D. Creelman,


Cambridge University Press, ISBN No. 0-8058-4230-6.
2. Wolfe, Jeremy M., et al. “Sensation and Perception” Sunderland, MA: Sinauer Associates
Inc., 2005. ISBN: 9780878939381.

(New)

Title Neural Image Synthesis for VR/AR Number CSL/AIL/EEL7xx0
Department CSE/AIDE/EE L-T-P [C] 2–0–2 [3]
Offered for M.Tech, PhD Type Elective
Prerequisite Image Processing or Computer Graphics, Computer Vision, Deep Learning

Objectives
The course will explain the concepts and details that are involved in deep generative methods
for image synthesis.

Learning Outcomes
The students are expected to have the ability to:
1. Understand generative models like GANs and VAEs.
2. Understand differentiable rendering and neural scene representation.

Contents
1. Classical image synthesis - global illumination with path tracing, image-based rendering
(6 lectures, 2 labs)
2. Neural and differentiable rendering - Generative models: GANs and VAEs, Neural scene
representation and rendering, Neural Radiance Fields, Differentiable rendering (10
lectures, 5 labs)
3. Image Synthesis for Virtual and Augmented Reality - Spherical (360) Panoramas, Neural
Antialiasing, Vergence-Accommodation Conflict, deep foveated rendering, Inverse
Rendering, Neural Rendering of Point Clouds (12 lectures, 5 labs)
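A minimal sketch of the positional encoding used by neural radiance fields to lift input coordinates to higher frequencies before they reach an MLP (assuming NumPy; the number of frequency bands is illustrative):

```python
import numpy as np

def positional_encoding(x, num_bands=6):
    """NeRF-style encoding: gamma(x) = (sin(2^k * pi * x), cos(2^k * pi * x)), k = 0..L-1."""
    freqs = 2.0 ** np.arange(num_bands) * np.pi          # (L,)
    angles = x[..., None] * freqs                        # (..., dim, L)
    enc = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return enc.reshape(*x.shape[:-1], -1)                # (..., dim * 2L)

# Encode a batch of 3D sample points.
points = np.random.rand(1024, 3)
print(positional_encoding(points).shape)  # (1024, 36)
```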

Reference book
1. Deep Learning, Ian Goodfellow and Yoshua Bengio and Aaron Courville, MIT Press, 2016

Online Course Material


Since this material is very new, it will have to be taught mostly from relevant research
papers.

(New)
Title Content Creation for VR/AR Number EEL/CSL/AIL7xx0
Department EE/CSE/AIDE L-T-P [C] 1–0–4 [3]
Offered for M.Tech, PhD Type Elective
Prerequisite None

Objectives
The Instructor will:
1. Explain how to create geometry, textures and shaders in Blender.
2. Explain how to view the scene that they created in VR in Blender.
3. Explain how to import the content to Unity or Unreal Engine and create a standalone VR
app in it.
4. Explain how to view the same content as augmentations on a camera feed for AR.

Learning Outcomes
The students are expected to have the ability to:
1. Create geometry, textures and shaders in Blender.
2. Create standalone VR and AR apps with the content created in Blender.

Contents
1. Introduction to Blender’s interface and Modelling in Blender (3 lectures, 3 labs)
2. Texturing and Shading in Blender (5 lectures, 4 labs)
3. Viewing scene in VR in Blender, importing to Unity or Unreal Engine (1 lecture, 1 lab)
4. Introduction to the interface and Scene Setup (Unity/Unreal Engine) (1 lecture, 1 lab)
5. Creating a VR App (Unity/Unreal Engine) (2 lectures, 2 labs)
6. Creating an AR App with the above content (Unity/Unreal) (2 lectures, 2 labs)
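A minimal sketch of scripted content creation inside Blender (assuming Blender's bundled Python API, bpy; object names and values are illustrative, and the same result can of course be achieved interactively in the Blender interface):

```python
import bpy

# Add a cube and a UV sphere to the current scene.
bpy.ops.mesh.primitive_cube_add(size=1.0, location=(0.0, 0.0, 0.0))
cube = bpy.context.active_object

bpy.ops.mesh.primitive_uv_sphere_add(radius=0.5, location=(2.0, 0.0, 0.0))

# Create a simple material and assign it to the cube.
mat = bpy.data.materials.new(name="DemoMaterial")
mat.diffuse_color = (0.8, 0.2, 0.2, 1.0)   # RGBA
cube.data.materials.append(mat)

# The resulting scene can then be exported (e.g. FBX/glTF) and imported
# into Unity or Unreal Engine, as described in items 3-6 above.
```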

Textbook:
1. Blain, J. M. (2019). The complete guide to Blender graphics: computer modeling &
animation. AK Peters/CRC Press.
2. Hess, R. (2013). Blender Foundations: The Essential Guide to Learning Blender 2.5.
Routledge.

Online Course Material


1. Blender Tutorial: https://www.blender.org/support/tutorials/
2. Learning Unity: https://unity.com/learn
3. Learning Unreal Engine: https://www.unrealengine.com/en-US/learn

Detailed M.Tech. Preparatory Course Contents
(Existing)
Title Digital Image Processing Number CSL7xx0
Department Computer Science and Engineering L-T-P [C] 3–0–0 [3]
Offered for B.Tech, M.Tech, PhD Type Elective
Prerequisite Linear Algebra

Objectives
1. To introduce the origin and formation of digital imaging.
2. To develop the understanding of different types of imaging techniques for different purposes.
3. To equip the students with various possible applications of image analysis.

Learning Outcomes
The students are expected to have the ability to:
1. Enhance image in spatial and frequency domain.
2. Implement various aspects of image segmentation and compression.

Contents
Digital Image Fundamentals: Image modeling, Sampling and Quantization, Imaging Geometry,
Digital Geometry, Image Acquisition Systems, Different types of digital images (3 Lectures)
Image Transforms: Basic transforms: Spatial and Frequency Domain Transforms (8 Lectures)
Image Enhancement: Point processing, interpolation, enhancement in spatial domain,
enhancement in frequency domain (7 Lectures)
Color Image Processing: Color Representation, Laws of color matching, chromaticity diagram,
color enhancement, color image segmentation, color edge detection (3 Lectures)
Image compression: Lossy and lossless compression schemes, prediction based compression
schemes, vector quantization, sub-band encoding schemes, JPEG compression standard (4
Lectures)
Morphology: Dilation, erosion, opening, closing, hit and miss transform, thinning, extension to
grayscale morphology, Euler technique (5 Lectures)
Segmentation: Segmentation of grey level images, Watershed algorithm for segmenting gray
level image (6 Lectures)
Feature Detection: Fourier descriptors, shape features, object matching/features (6 Lectures)
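A minimal sketch of one of the spatial-domain enhancement operations listed above, histogram equalization of an 8-bit grayscale image (assuming NumPy; the random test image is illustrative):

```python
import numpy as np

def histogram_equalize(img):
    """Histogram equalization for an 8-bit grayscale image (spatial-domain enhancement)."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum().astype(np.float64)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())    # normalize the CDF to [0, 1]
    lut = np.round(255.0 * cdf).astype(np.uint8)         # map each gray level through the CDF
    return lut[img]

# A low-contrast test image: values confined to [100, 150].
img = np.random.randint(100, 151, size=(64, 64), dtype=np.uint8)
out = histogram_equalize(img)
print(img.min(), img.max(), "->", out.min(), out.max())  # contrast stretched toward [0, 255]
```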

Textbook
1. R. C. GONZALEZ, R. E. WOODS (2018), Digital Image Processing, Prentice Hall, 4th Edition.
2. A.K. JAIN (1989), Fundamentals of Image Processing, Prentice Hall

Reference Books
Research literature

Online Course Material


https://nptel.ac.in/courses/117104020/

(Existing)
Title Computer Graphics Number CSL7xx0
Department Computer Science and Engineering L-T-P [C] 3–0–0 [3]
Offered for B.Tech, M.Tech, PhD Type Elective
Prerequisite None

Objectives
The Instructor will:
Provide a thorough introduction to computer graphics techniques, focusing on 2D and 3D
modelling, image synthesis and rendering

Learning Outcomes
The students are expected to have the ability to:
1. Explain and create interactive graphics application
2. Implement graphics primitives
3. Synthesize and render images for animation and visualization

Contents
CSL7xx1 Introduction to Graphical Primitives 1-0-0 [1]
Introduction: Overview of computer graphics, representing pictures, preparing, presenting
& interacting with pictures for presentations.
Scan conversion: 2D Geometric Primitives; Area Filling algorithms, Clipping algorithms,
Anti-aliasing.
Transformations and viewing: 2D and 3D transformations, Matrix representations &
homogeneous coordinates, Viewing pipeline, Window to viewport co-ordinate
transformation, clipping operations, viewport clipping, 3D viewing.
CSL7xx2 Graphical Object Representation 1-0-0 [1]
Curves and Surfaces: Conics, parametric and non-parametric forms; Bezier (Bernstein
Polynomials) Curves, Cubic Splines, B-Splines; Quadratic surfaces, Bezier surfaces and
NURBS, 3-D modelling.
CSL7xx3 Graphics Rendering 1-0-0 [1]
Hidden surfaces: Depth comparison, Z-buffer algorithm, Back face detection, BSP tree
method, the Painter's algorithm, scan-line algorithm; Hidden line elimination, wire frame
methods, fractal geometry.
Color & shading models: Phong's shading model, Gouraud shading, Shadows and
background, Color models, Photo-realistic rendering.
Animation and OpenGL primitives: Functions, pipeline, sample programs for drawing 2-D,
3-D objects; event handling and view manipulation, Introduction to GPU and animation.
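A minimal sketch of the homogeneous-coordinate transformations covered in CSL7xx1 (assuming NumPy; the point and angle are illustrative):

```python
import numpy as np

def translation(tx, ty):
    """2D translation as a 3x3 homogeneous matrix."""
    return np.array([[1, 0, tx], [0, 1, ty], [0, 0, 1]], dtype=float)

def rotation(theta):
    """2D rotation about the origin as a 3x3 homogeneous matrix."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]], dtype=float)

# Compose: rotate a point 90 degrees about the origin, then translate by (2, 3).
# With homogeneous coordinates, composition is just matrix multiplication.
M = translation(2, 3) @ rotation(np.pi / 2)
p = np.array([1.0, 0.0, 1.0])              # the point (1, 0) in homogeneous form
print(np.round(M @ p, 3))                   # -> [2. 4. 1.], i.e. the point (2, 4)
```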
Textbook
1. J. D. Foley, A. Van Dam, S. K. Feiner and J. F. Hughes, Computer Graphics; Principles
and practice, Addison Wesley, 2nd Edition in C, 1997.
2. D. F. Rogers and J. A. Adams, Mathematical elements for Computer Graphics,
McGraw-Hill, 2nd Edition, 1990.
Self-Learning Material
1. Blender: https://www.blender.org/download/
2. OpenGL:http://www.opengl-tutorial.org/
Preparatory Course Material
1. Department of Computer Science and Engineering, Indian Institute of Technology Madras,
https://nptel.ac.in/courses/106106090/

(Existing)
Title Computer Vision Number CSL7xx0
Department CSE, AI&DE L-T-P [C] 3–0–0 [3]
Offered for B.Tech, M.Tech, PhD Type Elective
Prerequisite Linear Algebra

Objectives
The Instructor will:
1. Provide insights into fundamental concepts and algorithms behind some of the remarkable
success of Computer Vision
2. Impart working expertise by means of programming assignments and a project
Learning Outcomes
The students are expected to have the ability to:
1. Learn and appreciate the usage and implications of various Computer Vision techniques in
real-world scenarios
2. Design and implement basic applications of Computer Vision

Contents
Introduction: The Three R’s - Recognition, Reconstruction, Reorganization (1 Lecture)
Fundamentals: Formation, Filtering, Transformation, Alignment, Color (5 Lectures)
Image Restoration: Spatial Processing and Wavelet-based Processing (5 Lectures)
Geometry: Homography, Warping, Epipolar Geometry, Stereo, Structure from Motion, Optical
flow (9 Lectures)
Segmentation: Key-point Extraction, Region Segmentation (e.g., boosting, graph-cut and
level-set), RANSAC (6 Lectures)
Feature Description and Matching: Key-point Description, handcrafted feature extraction
(SIFT, LBP) (3 Lectures)
Deep Learning based Segmentation and Recognition: DL-based Object detection (e.g.
Mask-RCNN, YOLO), Semantic Segmentation, Convolutional Neural Network (CNN) based
approaches to visual recognition (9 Lectures)
Applications: Multimodal and Multitask Applications (4 Lectures)
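A minimal sketch of applying a planar homography to image points, as covered under Geometry above (assuming NumPy; the matrix and corner points are illustrative):

```python
import numpy as np

def apply_homography(H, points):
    """Map Nx2 image points through a 3x3 homography (projective warp)."""
    pts_h = np.hstack([points, np.ones((len(points), 1))])   # lift to homogeneous coords
    mapped = (H @ pts_h.T).T
    return mapped[:, :2] / mapped[:, 2:3]                    # divide by the third coordinate

# A homography combining scale, shear, translation and a small perspective term.
H = np.array([[1.2, 0.1, 5.0],
              [0.0, 1.1, -3.0],
              [1e-4, 0.0, 1.0]])
corners = np.array([[0.0, 0.0], [640.0, 0.0], [640.0, 480.0], [0.0, 480.0]])
print(np.round(apply_homography(H, corners), 2))
```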

Textbooks
1. R. HARTLEY, A. ZISSERMAN (2004), Multiple View Geometry in Computer Vision, Cambridge
University Press, 2nd Edition.
2. R. SZELISKI, (2010), Computer Vision: Algorithms and Applications, Springer-Verlag
London.

Reference Books
1. Research literature

(Existing)
Title Data Structures and Practices Number CSLXXX
Department CSE L-T-P [C] 0–0–2 [1]
Offered for M.Tech 1st Year Type Compulsory
Prerequisite Computer programming

Objectives
The Instructor will:
Explain various data structures and provide details to implement and use them in different
algorithms
Learning Outcomes
The students are expected to have the ability to:
1. Write, debug and rectify the programs using different data structures
2. Expertise in transforming coding skills into algorithm design and implementation

Contents
Laboratory Experiments
Exercises based on
Abstract Data Types: Arrays, link-list/list, hash tables, dictionaries, structures, stack, queues
(4 labs)
Data Structures: Heap, Sets, Sparse matrix, Binary Search Tree, B-Tree/ B+ Tree, Graph (4
labs)
Algorithm implementation: Quick or Merge sort, Breadth or Depth first search or Dijkstra's
Shortest Path First algorithm, Dynamic programming (6 labs)
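A minimal sketch of one of the listed algorithm exercises, Dijkstra's shortest-path algorithm on an adjacency-list graph (standard library only; the example graph is illustrative):

```python
import heapq

def dijkstra(graph, source):
    """Shortest distances from source in a graph given as {node: [(neighbor, weight), ...]}."""
    dist = {source: 0}
    heap = [(0, source)]                      # min-heap of (distance, node)
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                          # stale entry: a shorter path was already found
        for v, w in graph.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

graph = {"A": [("B", 4), ("C", 1)], "C": [("B", 2), ("D", 5)], "B": [("D", 1)]}
print(dijkstra(graph, "A"))   # {'A': 0, 'B': 3, 'C': 1, 'D': 4}
```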
Textbook
1. Weiss, M. A. (2007), Data Structures and Algorithm Analysis in C++, Addison-Wesley.
2. Lipschutz, S. (2017), Data Structures with C, McGraw Hill Education.
3. Cormen, T. H., Leiserson, C. E., Rivest, R. L. and Stein, C., (2009), Introduction to
Algorithms, MIT Press
Online Course Material
Department of Computer Science and Engineering, IIT Delhi,
http://www.nptelvideos.in/2012/11/data-structures-and-algorithms.html

(Existing)
Title Programming Techniques Number MA
Department Mathematics L-T-P[C] 2-0-2-0 [3]
Offered for M.Sc Type Compulsory
Prerequisite

Objectives
The Instructor will:
1. Introduce the basics of computer programming in C/C++.
2. Teach to develop well-structured programs in C/C++.

Learning Outcomes
The students are expected to have the ability to:
1. Write well-structured programs in C/C++.
2. Implement and solve algebraic and transcendental equations and systems of equations
numerically.

Contents:
Introduction [2 Lectures]: machine language, assembly language, high-level programming
languages, compiler, interpreter, loader, linker, and flowchart.
Basic features of programming (using C/C++) [8 Lectures]: Data types, variables, operators,
expressions, control structures, functions, parameter passing conventions.
Advanced features of programming [8 Lectures]: Pointers, arrays, operations on data (insert,
delete, search, traverse and modify), structures, introduction to classes.
Root finding methods [4 Lectures]: Bisection method, Secant method, the method of false
position, Newton-Raphson method.
Solution of systems of linear equations [6 Lectures]: Gauss elimination, Gauss-Jordan method,
LU decomposition, Crout's method, Doolittle method, and Cholesky decomposition.
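A minimal sketch of the Newton-Raphson root-finding method listed above (shown in Python for brevity, although the course itself is taught in C/C++; the example equation is illustrative):

```python
def newton_raphson(f, df, x0, tol=1e-10, max_iter=50):
    """Find a root of f using Newton-Raphson: x_{n+1} = x_n - f(x_n) / f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        x = x - fx / df(x)
    return x

# Solve x^2 - 2 = 0, i.e. compute sqrt(2).
root = newton_raphson(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=1.0)
print(root)   # ~1.41421356...
```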
Textbooks
1. Balaguruswamy, E., Programming in ANSI C, Tata McGraw-Hill, 2004.
2. Lafore, R., Object-Oriented Programming in C++, Fourth Edition, Pearson, 2002.
3. Burden, R. L., Numerical Analysis, Ninth Edition, Cengage Learning India, 2012.

Reference Books
1. Schildt, H., C: The Complete Reference, Fourth Edition, Tata McGraw Hill, 2000.
2. Datta, B. N., Numerical Linear Algebra and Applications, 2nd Edition, PHI, 2010.

Online Course Material


1. Kumar, S.A., Principles of Programming Languages, NPTEL Course Material, Department of
Computer Science and Engineering, IIT Delhi, https://nptel.ac.in/courses/106102067/.
2. Gupta, D., Introduction to Problem Solving and Programming, NPTEL Course Material,
Department of Computer Science and Engineering, IIT Kanpur,
https://nptel.ac.in/courses/106104074/.
3. Nayak, A. K. and Kumar S., Numerical methods, NPTEL Course Material, Department of
Mathematics, IIT Roorkee, https://nptel.ac.in/courses/111107105/.

(Existing)
Title Linear Algebra Number MA
Department Mathematics L-T-P[C] 3-1-0 [3]
Offered for M.Sc Type Compulsory
Prerequisite

Objectives
1. To give sufficient knowledge of the subject which can be used by students for further
applications in their respective domains of interest.

Learning Outcomes
1. Concept of linear spaces, mapping between spaces, norm and their action on spaces.
2. Eigenvalues, eigenvectors and diagonalization, and Primary decomposition theorem.
3. Normal operators and Spectral theory of real symmetric normal operators

Contents
Vector Spaces, Matrices and Systems of Linear Equations [10 Lectures]: Systems of linear
equations, Matrices and rank, Vector spaces over fields, Subspaces, Bases and dimension,
Direct sum of subspaces.
Linear Transformations and Inner Product Spaces [10 Lectures]: Linear transformations, Rank
and Nullity theorem, Representation of linear transformations by matrices, duality and
transpose, Inner product spaces, Gram-Schmidt orthonormalization, orthogonal projections.
Operators and Spectral Theorem [20 Lectures]: Linear functionals and adjoints, Hermitian,
self-adjoint, unitary and normal operators, Spectral theorem for real operators, Eigenvalues,
Eigenvectors, Characteristic polynomials, minimal polynomials, Cayley-Hamilton Theorem,
triangulation, diagonalization, Jordan canonical forms (without proof).
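A small numerical illustration of the Gram-Schmidt orthonormalization listed in the second unit (assuming NumPy; the input vectors are illustrative):

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize the rows of `vectors` (assumed linearly independent)."""
    basis = []
    for v in vectors:
        # Remove the components of v along the basis vectors found so far.
        w = v - sum(np.dot(v, b) * b for b in basis)
        basis.append(w / np.linalg.norm(w))
    return np.array(basis)

A = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
Q = gram_schmidt(A)
print(np.round(Q @ Q.T, 6))   # identity matrix: the rows are orthonormal
```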
Textbook
1. Hoffman, K., and Kunze R., Linear Algebra, Pearson Education (India) 2003.
2. Herstein, I.N., Topics in Algebra, 2nd Edn., John Wiley and Sons.
3. Lax, P., Linear Algebra and its applications, John Wiley & Sons, Indian Edition 1997
4. Lang, S., Linear Algebra, 3rd Edition, Springer, 2004

Reference Books
1. Artin, M., Algebra, Prentice Hall of India, 1994.
2. Axler, S., (1997), Linear Algebra Done Right.
3. Kumaresan, S., (2000) Linear Algebra – A Geometric Approach, PHI Learning.
4. Sharma, R. K., Shah, S. K. and Shankar, A. G., Algebra I: A Basic Course in Algebra, Pearson
Education, 2011.

Online course Material


1. Rana, I.K., Basic Linear Algebra, NPTEL Course material, Department of Mathematics, IIT
Bombay, https://nptel.ac.in/courses/111101115

(New)
Title Python Programming Number CSP7XX0
Department Computer Science and Engineering L-T-P[C] 0–1–4 [3]
Offered for Type Elective
Prerequisite

Objectives
The Instructor will:
1. Explain programming concepts in python.
2. Explain how python can be used to process and access data.
3. Explain how to debug and profile python code.

Learning Outcomes
The students are expected to have the ability to:
1. Write and debug python programs.
2. Understand and use concepts like list comprehension.

Contents
1. Numbers, strings and lists; loops and iteration; control flow and conditionals; functions
and arguments (4 tutorials, 4 labs)
2. Sets and dictionaries; tuples, anonymous functions; variable scope (3 tutorials, 3 labs)
3. Reading CSV and JSON files and extracting data; map, filter and list comprehension (2
tutorials, 2 labs)
4. Modules, Classes and Inheritance (1 tutorial, 1 lab)
5. Numpy and Scipy; Pandas; (2 tutorials, 3 labs)
6. Virtual Environments, Debugging and Profiling (1 tutorial, 1 lab)
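A minimal sketch tying together two of the items above, reading a CSV file and using list comprehension and filtering (standard library only; the file name and columns are illustrative):

```python
import csv

# Suppose marks.csv has a header row: name,score
with open("marks.csv", newline="") as f:
    rows = list(csv.DictReader(f))

# List comprehension: convert the score column to integers.
scores = [int(row["score"]) for row in rows]

# Filtering with a comprehension: names of students who scored 40 or more.
passed = [row["name"] for row in rows if int(row["score"]) >= 40]

print(sum(scores) / len(scores), passed)
```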

Textbook
1. Learn Python 3 the Hard Way, Zed A. Shaw (Addison-Wesley, 2016)

Online Course Material


1. Python Tutorial: https://docs.python.org/3.9/tutorial/index.html
2. Python Tutorial: https://www.learnpython.org/

(Existing)
Title Machine Learning Number CSL7XX0
Department Computer Science and Engineering L-T-P [C] 3–0–0 [3]
Offered for M.Tech 1st year and PhD 1st year Type Compulsory
Prerequisite None
Objectives
The Instructor will:
1. Provide motivation and understanding of the need and importance of Machine Learning in
today’s world
2. Provide details about various algorithms in Machine Learning
Learning Outcomes: The students are expected to have the ability to:
1. Develop a sense of Machine Learning in the modern context, and independently work on
problems relating to Machine Learning
2. Design and program efficient algorithms related to Machine Learning, train models, conduct
experiments, and deliver ML-based applications
Contents
CSL7XX1: Machine Learning I: Supervised Learning 1-0-0[1]
Introduction: Motivation, Different types of learning, Linear regression, Logistic regression (2
lectures)
Gradient Descent: Introduction, Stochastic Gradient Descent, Subgradients, Stochastic Gradient
Descent for risk minimization (2 lectures)
Support Vector Machines: Hard SVM, Soft SVM, Optimality conditions, Duality, Kernel trick,
Implementing Soft SVM with Kernels (4 lectures)
Decision Trees: Decision Tree algorithms, Random forests (2 lectures)
Neural Networks: Feedforward neural networks, Expressive power of neural networks, SGD and
Backpropagation (3 lectures)
Model selection and validation: Validation for model selection, k-fold cross-validation, Training-
Validation-Testing split, Regularized loss minimization (1 lecture)
CSL7XX2: Machine Learning I: Unsupervised Learning and Generative Models 1-0-0[1]
Nearest Neighbour: k-nearest neighbour, Curse of dimensionality (1 lecture)
Clustering: Linkage-based clustering algorithms, k-means algorithm, Spectral clustering (3
lectures)
Dimensionality reduction: Principal Component Analysis, Random projections, Compressed sensing
(2 lectures)
Generative Models: Maximum likelihood estimator, Naive Bayes, Linear Discriminant Analysis,
Latent variables and Expectation-maximization algorithm, Bayesian learning (5 lectures)
Feature Selection and Generation: Feature selection, Feature transformations, Feature learning
(3 lectures)
CSL7XX3: Machine Learning I: Computational Learning Theory and Deep Neural
Networks 1-0-0[1]
Statistical Learning Framework: PAC learning, Agnostic PAC learning, Bias-complexity tradeoff,
No free lunch theorem, VC dimension, Structural risk minimization, Adaboost (7 lectures)
Foundations of Deep Learning: DNN, CNN, RNN, Autoencoders (7 lectures)
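A minimal sketch of gradient descent for linear regression, corresponding to the opening topics of CSL7XX1 (assuming NumPy; the synthetic data are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=200)        # synthetic regression data

w = np.zeros(3)
lr = 0.1
for _ in range(500):
    grad = 2.0 / len(y) * X.T @ (X @ w - y)        # gradient of the mean squared error
    w -= lr * grad

print(np.round(w, 3))   # close to [2.0, -1.0, 0.5]
```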
Textbook
1. Shalev-Shwartz,S., Ben-David,S., (2014), Understanding Machine Learning: From Theory to
Algorithms, Cambridge University Press
Reference Books
1. Mitchell Tom (1997). Machine Learning, Tata McGraw-Hill
Self Learning Material
1. Department of Computer Science, Stanford University,
https://see.stanford.edu/Course/CS229

(Existing)
Title Probability and Statistics Number MA
Department Mathematics L-T-P[C] 3-1-0 [4]
Offered for M.Sc Type Compulsory
Prerequisite

Objectives
1. Demonstrate an understanding of the basic principles of probability theory.
2. Use of the properties of discrete and continuous random variables with their joint, marginal,
and conditional distributions.
3. Use of the various families of probability distributions to model various types of data.

Learning Outcomes
1. Understanding of probability theory and statistics to solve industrial problems.
2. Understanding of random sampling, theory of estimation and testing of hypotheses.

Contents:
Probability spaces and Random Variables [15 lectures]: Probability measure, conditional
probability, Bayes' theorem, Random variable, cumulative distribution function and its
properties, probability density function, functions of a random variable, standard discrete and
continuous distributions and their applications, Transformation, moments, Chebyshev's
inequality.
Joint Distributions [10 lectures]: Random vectors, joint, marginal and conditional distributions,
conditional expectation, independence, correlation and regression, Bi-variate normal
distribution, functions of random vectors, transformation.
Limit Theorems [3 lectures]: Convergence of sequences of random variables, weak and strong
laws of large numbers, central limit theorems.
Estimation and Tests of Hypotheses [14 lectures]: Sampling distributions, estimation of
parameters, maximum likelihood method and method of moments, interval estimation, testing
of hypotheses
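A small numerical illustration of maximum likelihood estimation from the last unit: for i.i.d. normal data, the MLEs are the sample mean and the (biased) sample variance (assuming NumPy; the simulated sample is illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
sample = rng.normal(loc=5.0, scale=2.0, size=10_000)   # draws from N(mu=5, sigma=2)

mu_hat = sample.mean()                          # MLE of the mean
sigma2_hat = ((sample - mu_hat) ** 2).mean()    # MLE of the variance (divides by n, not n-1)

print(round(mu_hat, 3), round(sigma2_hat, 3))   # close to 5 and 4
```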
Text Books
1. Rohatgi, V. K., and Saleh, A. K. M. E., An Introduction to Probability and Statistics, Second
Edition, Wiley India, 2000.
2. Casella, G. and Berger, R. (2002). Statistical Inference, Cengage Learning.
3. Hoel, P.G., Port, S.C. and Stone, C.J. (1971). Introduction to Probability Theory, Houghton
Mifflin Series in Statistics.
Reference Books
1. Hogg, R. V., McKean J. W., and Craig A., Introduction to Mathematical Statistics, Sixth
Edition, Pearson Education India, 2006.
2. Prakasa Rao, B. L., S., A First Course in Probability and Statistics, World Scientific/Cambridge
University Press India, 2009.
3. Castaneda, L. B., Arunachalam, V., and Dharmaraja, S., Introduction to probability and
stochastic processes with Applications. Wiley, 2012

Online Course Material:


1. Kumar, S., Probability and Statistics, NPTEL Course Material, Department of Mathematics, IIT
Kharagpur, https://nptel.ac.in/courses/111105090/

