CP5191 MACHINE LEARNING TECHNIQUES L T P C
OBJECTIVES:
To introduce students to the basic concepts and techniques of Machine Learning.
To give a thorough understanding of supervised and unsupervised learning techniques.
To study the various probability-based learning techniques.
To understand graphical models of machine learning algorithms.
UNIT I INTRODUCTION 9
Learning – Types of Machine Learning – Supervised Learning – The Brain and the Neuron –
Designing a Learning System – Perspectives and Issues in Machine Learning – Concept Learning
Task – Concept Learning as Search – Finding a Maximally Specific Hypothesis – Version Spaces
and the Candidate Elimination Algorithm – Linear Discriminants – Perceptron – Linear
Separability – Linear Regression.
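For illustration of the perceptron and linear separability topics above, the following is a minimal NumPy sketch of the perceptron learning rule on a toy data set (the logical AND function); the data and parameters are assumptions added for clarity, not prescribed syllabus material.

```python
import numpy as np

# Toy linearly separable data: the logical AND function (illustrative only)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])

w = np.zeros(X.shape[1])   # weight vector
b = 0.0                    # bias
eta = 0.1                  # learning rate

for epoch in range(20):
    for xi, target in zip(X, y):
        pred = 1 if np.dot(w, xi) + b > 0 else 0   # threshold activation
        # Perceptron update rule: adjust weights only on misclassification
        w += eta * (target - pred) * xi
        b += eta * (target - pred)

print(w, b)   # parameters of a separating hyperplane for the AND data
```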
UNIT II LINEAR MODELS 9
Multi-layer Perceptron – Going Forwards – Going Backwards: Back-Propagation of Error – Multi-layer
Perceptron in Practice – Examples of Using the MLP – Overview – Deriving Back-Propagation –
Radial Basis Functions and Splines – Concepts – RBF Network – Curse of Dimensionality –
Interpolation and Basis Functions – Support Vector Machines.
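As an illustrative sketch of the forward and backward passes covered above, the code below trains a one-hidden-layer MLP by back-propagation on the XOR problem, assuming NumPy; the architecture, learning rate, and data are hypothetical examples, not prescribed material.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy non-linearly-separable data: XOR (illustrative only)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 4 sigmoid units
W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)
eta = 0.5

for _ in range(5000):
    # Going forwards: forward pass through both layers
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Going backwards: back-propagation of error (squared-error loss)
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= eta * h.T @ d_out;  b2 -= eta * d_out.sum(axis=0)
    W1 -= eta * X.T @ d_h;    b1 -= eta * d_h.sum(axis=0)

print(out.round(3))   # outputs should move towards [0, 1, 1, 0]
```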
UNIT III TREE AND PROBABILISTIC MODELS 9
Learning with Trees – Decision Trees – Constructing Decision Trees – Classification and
Regression Trees – Ensemble Learning – Boosting – Bagging – Different Ways to Combine
Classifiers – Probability and Learning – Data into Probabilities – Basic Statistics – Gaussian
Mixture Models – Nearest Neighbor Methods – Unsupervised Learning – K-Means Algorithm –
Vector Quantization – Self-Organizing Feature Map (Stephen Marsland, Chapters 12, 13, 14, 7)
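A minimal sketch of the K-means algorithm listed above, assuming NumPy and a synthetic two-cluster data set (a hypothetical example, not prescribed material):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two synthetic Gaussian clusters (illustrative only)
X = np.vstack([rng.normal(0, 0.5, (50, 2)),
               rng.normal(3, 0.5, (50, 2))])

k = 2
centres = X[rng.choice(len(X), k, replace=False)]   # random initialisation

for _ in range(10):
    # Assignment step: each point goes to its nearest centre
    dists = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    # Update step: each centre becomes the mean of its assigned points
    centres = np.array([X[labels == j].mean(axis=0) for j in range(k)])

print(centres)   # should lie near (0, 0) and (3, 3)
```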
UNIT IV DIMENSIONALITY REDUCTION AND EVOLUTIONARY MODELS 9
Dimensionality Reduction – Linear Discriminant Analysis – Principal Component Analysis – Factor
Analysis – Independent Component Analysis – Locally Linear Embedding – Isomap –
Least Squares Optimization – Evolutionary Learning – Genetic Algorithms – Genetic Offspring:
Genetic Operators – Using Genetic Algorithms – Reinforcement Learning – Overview – Getting Lost
Example – Markov Decision Process (Stephen Marsland, Chapters 6, 9, 10, 11)
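A minimal sketch of principal component analysis from the dimensionality reduction topics above, computed via the eigendecomposition of the covariance matrix, assuming NumPy and synthetic data (a hypothetical example, not prescribed material):

```python
import numpy as np

rng = np.random.default_rng(0)

# Correlated 2-D data (illustrative only)
X = rng.normal(size=(200, 2)) @ np.array([[2.0, 0.0], [1.0, 0.5]])

# Centre the data and compute the sample covariance matrix
Xc = X - X.mean(axis=0)
cov = Xc.T @ Xc / (len(Xc) - 1)

# Eigenvectors of the covariance matrix are the principal components
eigvals, eigvecs = np.linalg.eigh(cov)
order = eigvals.argsort()[::-1]          # sort by decreasing variance
components = eigvecs[:, order]

# Project onto the first principal component: 2-D -> 1-D reduction
Z = Xc @ components[:, :1]
print(Z.shape)   # (200, 1)
```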
UNIT V GRAPHICAL MODELS 9
Markov Chain Monte Carlo Methods – Sampling – Proposal Distribution – Markov Chain Monte
Carlo – Graphical Models – Bayesian Networks – Markov Random Fields – Hidden Markov
Models – Tracking Methods (Simon Rogers and Mark Girolami, Chapters 9, 3)
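A minimal sketch of Markov Chain Monte Carlo sampling with a symmetric Gaussian proposal distribution (the Metropolis algorithm), illustrating the sampling topics above; the bimodal target density is a hypothetical example, and NumPy is assumed.

```python
import numpy as np

rng = np.random.default_rng(0)

def target_unnormalised(x):
    # Unnormalised target density: mixture of two Gaussians (illustrative only)
    return np.exp(-0.5 * (x - 2) ** 2) + np.exp(-0.5 * (x + 2) ** 2)

samples = []
x = 0.0
for _ in range(10000):
    # Proposal distribution: symmetric Gaussian random walk around the current state
    x_new = x + rng.normal(scale=1.0)
    # Metropolis acceptance ratio (the symmetric proposal cancels out)
    if rng.uniform() < min(1.0, target_unnormalised(x_new) / target_unnormalised(x)):
        x = x_new
    samples.append(x)

print(np.mean(samples), np.std(samples))   # statistics of the bimodal target
```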
TOTAL: 45 PERIODS
OUTCOMES:
Upon completion of this course, the students will be able to:
Distinguish between supervised, unsupervised and semi-supervised learning
Apply the appropriate machine learning strategy for any given problem
Suggest supervised, unsupervised or semi-supervised learning algorithms for any given
problem
Design systems that use the appropriate graphical models of machine learning
Modify existing machine learning algorithms to improve classification efficiency
REFERENCES:
1. Ethem Alpaydin, "Introduction to Machine Learning" (Adaptive Computation and Machine
Learning Series), Third Edition, MIT Press, 2014.
2. Jason Bell, "Machine Learning: Hands-On for Developers and Technical Professionals", First
Edition, Wiley, 2014.
3. Peter Flach, "Machine Learning: The Art and Science of Algorithms that Make Sense of Data",
First Edition, Cambridge University Press, 2012.
4. Stephen Marsland, "Machine Learning: An Algorithmic Perspective", Second Edition,
Chapman and Hall/CRC Machine Learning and Pattern Recognition Series, 2014.
5. Tom M. Mitchell, "Machine Learning", First Edition, McGraw Hill Education, 201