SYLLABUS
[Academic Session: 2016-17]
MACHINE LEARNING (ETCS-402)
Instructions to Paper Setters:
1. Question No. 1 should be compulsory and cover the entire syllabus. This question should have objective or short answer type questions. It should be of 25 marks.
2. Apart from Question No. 1, the rest of the paper shall consist of four units as per the syllabus. Every unit should have two questions. However, the student may be asked to attempt only one question from each unit. Each question should be of 12.5 marks.
Objective: To introduce students to the basic concepts of machine learning systems and the types of learning.
UNIT-I
Introduction: Basic concepts: Definition of learning systems, Goals and applications of machine learning. Aspects of developing a learning system: training data, concept representation, function approximation.
Types of Learning: Supervised learning and unsupervised learning. Overview of classification: setup, training, test, validation datasets, overfitting.
Classification families: linear discriminative, non-linear discriminative, decision trees, probabilistic (conditional and generative), nearest neighbour. [T1, T2]
UNIT-II
Logistic regression, Perceptron, Exponential family, Generative learning algorithms, Gaussian discriminant analysis, Naive Bayes, Support vector machines: Optimal margin classifier, Kernels. Overview of model selection and feature selection. [T2]
UNIT-III
Unsupervised learning: Clustering, K-means, EM algorithm, Mixture of Gaussians, Factor analysis, PCA (Principal components analysis), ICA (Independent components analysis), Latent semantic indexing, Spectral clustering, Markov models, Hidden Markov models (HMMs). [T4]
UNIT-IV
Reinforcement Learning and Control: MDPs, Bellman equations, Value iteration and policy iteration, Linear quadratic regulation (LQR), LQG, Q-learning, Value function approximation, Policy search, Reinforce, POMDPs. [T1][No. of Hrs.: 11]
MODEL PAPER-I
END TERM EXAMINATION [MAY-2016]
EIGHTH SEMESTER ([Link])
MACHINE LEARNING [ETCS-402]
Note: Q.No. 1 is compulsory. Attempt any four from the rest.
Q.1.(a) Explain using an example the concept of a learning system.
Ans. A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E.
A handwriting recognition learning problem:
• Task T: recognizing and classifying handwritten words within images
• Performance measure P: percent of words correctly classified
• Training experience E: a database of handwritten words with given classifications
A robot driving learning problem:
• Task T: driving on public four-lane highways using vision sensors
• Performance measure P: average distance travelled before an error (as judged by a human overseer)
• Training experience E: a sequence of images and steering commands recorded while observing a human driver
Q.1.(b) What are the goals of machine learning?
Ans. The primary goal of machine learning is to develop general-purpose learning algorithms of practical value. Such algorithms should be efficient, and as general-purpose as possible: we are looking for algorithms that can be easily applied to a broad class of learning problems. We also want the result of learning to be a prediction rule that is as accurate as possible in the predictions that it makes. Occasionally, we may also be interested in prediction rules that are simple enough to be understood, perhaps on the basis of an examination of only a fairly small number of examples.
Q.1.(c) What are generative learning algorithms?
Ans. Consider a classification problem in which we want to learn to distinguish between elephants (y = 1) and dogs (y = 0), based on some features of an animal. Given a training set, an algorithm like logistic regression or the perceptron algorithm tries to find a straight line (that is, a decision boundary) that separates the elephants and the dogs. Then, to classify a new animal as either an elephant or a dog, it checks on which side of the decision boundary the animal falls, and makes its prediction accordingly.
Here is a different approach. First, looking at elephants, we can build a model of what elephants look like. Then, looking at dogs, we can build a separate model of what dogs look like. Finally, to classify a new animal, we can match it against the elephant model and against the dog model, to see whether the new animal looks more like the elephants or more like the dogs we saw in the training set.
Algorithms that try to learn p(y|x) directly (such as logistic regression), or that try to learn mappings directly from the space of inputs X to the labels (such as the perceptron algorithm), are called discriminative learning algorithms. Algorithms that instead try to model p(x|y) (and p(y)) are called generative learning algorithms. For instance, if y indicates whether an example is a dog (0) or an elephant (1), then p(x|y = 0) models the distribution of dogs' features, and p(x|y = 1) models the distribution of elephants' features.
Two common generative learning algorithms are:
• Gaussian discriminant analysis
• Naive Bayes
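As an illustration of the generative approach, here is a minimal sketch of a Bernoulli Naive Bayes classifier in pure Python. The toy features, data and Laplace smoothing constant are assumptions made up for the example, not something from the book:

```python
import math

def train_nb(X, y):
    """Estimate p(y) and per-feature Bernoulli p(x_j = 1 | y) with Laplace smoothing."""
    n, d = len(X), len(X[0])
    model = {}
    for c in (0, 1):
        rows = [x for x, label in zip(X, y) if label == c]
        prior = len(rows) / n
        # Laplace-smoothed probability that feature j equals 1 given class c
        cond = [(sum(r[j] for r in rows) + 1) / (len(rows) + 2) for j in range(d)]
        model[c] = (prior, cond)
    return model

def predict_nb(model, x):
    """Pick the class maximizing log p(y) + sum_j log p(x_j | y)."""
    def score(c):
        prior, cond = model[c]
        s = math.log(prior)
        for xj, pj in zip(x, cond):
            s += math.log(pj if xj else 1 - pj)
        return s
    return max((0, 1), key=score)

# Hypothetical binary features, e.g. [has_trunk, barks], with dog = 0 and elephant = 1.
X = [[1, 0], [1, 0], [0, 1], [0, 1], [1, 1]]
y = [1, 1, 0, 0, 1]
model = train_nb(X, y)
print(predict_nb(model, [1, 0]))  # 1 (looks more like the elephants seen in training)
```

The classifier models p(x|y) and p(y) separately per class, then compares the two class scores via Bayes' rule, which is exactly the generative recipe described above.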
Q.1.(d) What do you understand by Reinforcement Learning?
Ans. Reinforcement Learning (RL) refers to the problem of a goal-directed agent interacting with an uncertain environment. The goal of an RL agent is to maximize a long-term scalar reward by sensing the state of the environment and taking actions which affect the state. At each step, an RL system gets evaluative feedback about the performance of its action, allowing it to improve the performance of subsequent actions. Several RL methods have been developed and successfully applied in machine learning to learn optimal policies for finite-state finite-action discrete-time Markov Decision Processes (MDPs).
I.P. University-(B.Tech)-AB Publisher
An analogous RL control system is shown in Figure, where the controller, based on state feedback and reinforcement feedback about its previous action, calculates the next control which should lead to an improved performance. The reinforcement signal is the output of a performance evaluator function, which is typically a function of the state and the control. An RL system has a similar objective to an optimal controller which aims to optimize a long-term performance criterion while maintaining stability. RL methods typically estimate the value function, which is a measure of goodness of a given action for a given state.
Q.1.(e) What is overfitting?
Ans. Overfitting refers to a model that models the training data too well. Overfitting happens when a model learns the detail and noise in the training data to the extent that it negatively impacts the performance of the model on new data. This means that the noise or random fluctuations in the training data is picked up and learned as concepts by the model. The problem is that these concepts do not apply to new data and negatively impact the model's ability to generalize. Overfitting is more likely with nonparametric and nonlinear models that have more flexibility when learning a target function. As such, many nonparametric machine learning algorithms also include parameters or techniques to limit and constrain how much detail the model learns. For example, decision trees are a nonparametric machine learning algorithm that is very flexible and is subject to overfitting training data. This problem can be addressed by pruning a tree after it has learned, in order to remove some of the detail it has picked up.
Q.2. Explain models of learning in detail. (12.5)
Ans. Learning algorithms are usually designed with a view to solving particular problems, and we need a common framework in which to describe and compare learning approaches. A computational learning model should be explicit about at least the following aspects:
• Learner: Who or what is doing the learning; in this context, an algorithm or a computer program. Learners may be general-purpose, or may be embodied in a physical device such as a robot, and may be components of a larger intelligent system.
• Domain: What is being learned, e.g., a function or a concept. Among the many other possibilities are the operation of a device, a language, a game, or a preference. The set of concepts that are considered for learning is called the concept class.
• Goal: Why the learning is done. The learning can be done to extract a set of rules from experimental data, to obtain a good predictor, to model a physical phenomenon, to gain control of a system, and so on.
• Representation: The way the objects to be learned are represented, e.g., the way the inputs are represented to the program. The hypothesis which the program forms while learning may be represented in the same way, or in a broader or more restricted form.
• Algorithmic technology: The algorithmic framework to be used. Among the main technologies are artificial neural networks, case-based reasoning, decision trees, genetic algorithms, finite state machines, probabilistic networks, rule learning, and support vector machines. One may also specify the learning parameters of the chosen technology.
• Information source: The way the learner obtains information about the objects to be learned, e.g., passively observing labelled examples, actively asking queries of an oracle, or receiving instruction from a teacher.
• Training scenario: The regime in which learning takes place. In a batch scenario the learner is given the whole training set at once; in an online scenario the program receives the examples one at a time and must update its hypothesis after each one. The data may be noise-free or noisy.
• Prior knowledge: What the learner knows about the domain before learning begins, e.g., that the target belongs to a restricted class of functions. Prior knowledge can narrow the search for a hypothesis and make learning faster.
• Success criteria: The criteria by which we judge that learning has succeeded, e.g., exact identification of the target, or approximation of it to within a given accuracy with high probability.
• Performance: The resources consumed by the learner, such as running time and the number of examples or queries needed, and the quality of the learned hypothesis.
Computational learning models may depend on many more criteria and on specifics of the learning process.
Q.3. Explain the ID3 algorithm in detail. (12.5)
Ans. The basic algorithm, ID3, learns decision trees by constructing them top-down, beginning with the question "which attribute should be tested at the root of the tree?" Each instance attribute is evaluated using a statistical test to determine how well it alone classifies the training examples. The best attribute is selected and used as the test at the root node of the tree. A descendant of the root node is then created for each possible value of this attribute, and the training examples are sorted to the appropriate descendant node (i.e., down the branch corresponding to the example's value for this attribute). The entire process is then repeated using the training examples associated with each descendant node to select the best attribute to test at that point in the tree. This forms a greedy search for an acceptable decision tree, in which the algorithm never backtracks to reconsider earlier choices.
The statistical test ID3 uses is based on entropy, which characterizes the (im)purity of an arbitrary collection of examples. Given a collection S containing positive and negative examples of some target concept, the entropy of S relative to this boolean classification is
  Entropy(S) = -(p+) log2 (p+) - (p-) log2 (p-)
where p+ is the proportion of positive examples in S and p- is the proportion of negative examples in S.
Information gain measures how well a given attribute separates the training examples according to their target classification. ID3 uses this information gain measure to select among the candidate attributes at each step while growing the tree. Information gain is simply the expected reduction in entropy caused by partitioning the examples according to the attribute. More precisely, the information gain Gain(S, A) of an attribute A, relative to a collection of examples S, is defined as
  Gain(S, A) = Entropy(S) - Σ_{v ∈ Values(A)} (|Sv| / |S|) Entropy(Sv)
where Values(A) is the set of all possible values for attribute A, and Sv is the subset of S for which attribute A has value v, i.e., Sv = {s ∈ S | A(s) = v}. The value of Gain(S, A) is the number of bits saved when encoding the target value of an arbitrary member of S, by knowing the value of attribute A.
To illustrate, suppose S is a collection of 14 examples of some boolean concept, including 9 positive and 5 negative examples (we adopt the notation [9+, 5-] to summarize such a sample of data). Then the entropy of S relative to this boolean classification is
  Entropy([9+, 5-]) = -(9/14) log2 (9/14) - (5/14) log2 (5/14) = 0.940
Suppose the training examples have an attribute Wind, which can have the values Weak or Strong. Of these 14 examples, suppose 6 of the positive and 2 of the negative examples have Wind = Weak, and the remainder have Wind = Strong. The information gain due to sorting the original 14 examples by the attribute Wind may then be calculated as
  Values(Wind) = {Weak, Strong}
  S = [9+, 5-]
  S_Weak = [6+, 2-]
  S_Strong = [3+, 3-]
  Gain(S, Wind) = Entropy(S) - Σ_{v ∈ {Weak, Strong}} (|Sv| / |S|) Entropy(Sv)
       = Entropy(S) - (8/14) Entropy(S_Weak) - (6/14) Entropy(S_Strong)
       = 0.940 - (8/14)(0.811) - (6/14)(1.00)
       = 0.048
Information gain is precisely the measure used by ID3 to select the best attribute at each step while growing the tree. The use of information gain to evaluate the relevance of attributes is illustrated in the figure, in which the information gain of two different attributes, Humidity and Wind, is computed in order to determine which is the better attribute for classifying the training examples.
Which attribute is the best classifier?
[Figure: Humidity: S = [9+, 5-], E = 0.940; High: [3+, 4-], E = 0.985; Normal: [6+, 1-], E = 0.592; Gain(S, Humidity) = 0.940 - (7/14)(0.985) - (7/14)(0.592) = 0.151. Wind: S = [9+, 5-], E = 0.940; Weak: [6+, 2-], E = 0.811; Strong: [3+, 3-], E = 1.000; Gain(S, Wind) = 0.048. Humidity provides greater information gain than Wind.]
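The entropy and information-gain calculations above can be sketched in a few lines of Python; the helper names are of course my own, but the numbers reproduce the Wind example:

```python
from math import log2

def entropy(pos, neg):
    """Entropy of a boolean-labelled sample with `pos` positive and `neg` negative examples."""
    total = pos + neg
    h = 0.0
    for count in (pos, neg):
        p = count / total
        if p > 0:  # 0 * log2(0) is taken as 0
            h -= p * log2(p)
    return h

def gain(parent, partitions):
    """Information gain: entropy(parent) minus the size-weighted entropy of the partitions.
    `parent` and each partition are (pos, neg) counts."""
    total = sum(p + n for p, n in partitions)
    return entropy(*parent) - sum(((p + n) / total) * entropy(p, n) for p, n in partitions)

# The Wind example from the text: S = [9+, 5-], S_Weak = [6+, 2-], S_Strong = [3+, 3-].
print(round(entropy(9, 5), 3))                   # 0.94
print(round(gain((9, 5), [(6, 2), (3, 3)]), 3))  # 0.048
```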
Q.4.(a) Consider the following set of training examples: (6.5)
Instance | Classification | Attrib1 | Attrib2 | Attrib3
1 et * 7
2 a . t
3 a » Fr
“ a . t
5 3 . F
‘ e > t ®
1 e . F ©
® a > T «
(a) What is the entropy of this collection of training examples with respect to the target function classification?
(b) What is the information gain of Attrib1 relative to these training examples?
(c) Create the decision tree for these training examples using ID3.
(d) Convert the decision tree into rules.
Ans. (a) Entropy = -(5/10) log2 (5/10) - (3/10) log2 (3/10) - (2/10) log2 (2/10) = 1.48
(b) E(Attrib1 = a) = -(3/4) log2 (3/4) - (1/4) log2 (1/4) = 0.81
E(Attrib1 = b) = -(2/4) log2 (2/4) - (1/4) log2 (1/4) - (1/4) log2 (1/4) = 1.5
E(Attrib1 = c) = -(1/2) log2 (1/2) - (1/2) log2 (1/2) = 1
Gain(Attrib1) = Entropy - (4/10) E(Attrib1 = a) - (4/10) E(Attrib1 = b) - (2/10) E(Attrib1 = c) = 0.36
(c) [Figure: the decision tree produced by ID3 for these examples]
(d)
If (Attrib1 = a) ∧ (Attrib2 = T) Then Classification = c1
If (Attrib1 = a) ∧ (Attrib2 = F) Then Classification = c2
If (Attrib1 = b) ∧ (Attrib2 = T) Then Classification = c1
If (Attrib1 = b) ∧ (Attrib2 = F) ∧ (Attrib3 = T) Then Classification = c2
If (Attrib1 = b) ∧ (Attrib2 = F) ∧ (Attrib3 = F) Then Classification = c1
If (Attrib1 = c) ∧ (Attrib3 = T) Then Classification = c3
If (Attrib1 = c) ∧ (Attrib3 = F) Then Classification = c1
Q.4.(b) Give decision trees to represent the following boolean functions:
(i) A ∨ (B ∧ C)
(ii) (A ∧ B) ∨ (C ∧ D)
Ans. (i) A ∨ (B ∧ C)
[Figure: decision tree for A ∨ (B ∧ C)]
(ii) (A ∧ B) ∨ (C ∧ D)
[Figure: decision tree for (A ∧ B) ∨ (C ∧ D)]
Q.5. What is the difference between supervised and unsupervised learning? (12.5)
Ans. Supervised Machine Learning:
The majority of practical machine learning uses supervised learning. Supervised learning is often encountered in classification problems, because the goal is often to get the computer to learn a classification system that we have created. More generally, classification learning is appropriate for any problem where deducing a classification is useful and the classification is easy to determine. It is called supervised learning because the process of an algorithm learning from the training dataset can be thought of as a teacher supervising the learning process. The algorithm iteratively makes predictions on the training data and is corrected by the teacher. Learning stops when the algorithm achieves an acceptable level of performance.
Supervised learning is where you have input variables (X) and an output variable (Y) and you use an algorithm to learn the mapping function from the input to the output:
  Y = f(X)
The goal is to approximate the mapping function so well that when you have new input data (X) you can predict the output variables (Y) for that data.
Supervised learning problems can be further grouped into regression and classification problems.
• Classification: A classification problem is when the output variable is a category, such as "red" and "blue" or "disease" and "no disease".
• Regression: A regression problem is when the output variable is a real value, such as "dollars" or "weight".
Some popular examples of supervised machine learning algorithms are:
• Logistic regression
• Support vector machines (SVM)
• k-Nearest Neighbours
• Naive Bayes
• Support vector machines for regression (SVR)
Unsupervised Learning:
In unsupervised learning, the computer is taught how to do something that we do not tell it how to do. There are actually two approaches to unsupervised learning. The first approach is to teach the agent not by giving explicit categorizations, but by using some sort of reward system to indicate success. Another type of unsupervised learning is called clustering. In this type of learning, the goal is not to maximize a utility function, but simply to find similarities in the training data. The assumption is often that the clusters discovered will match reasonably well with an intuitive classification.
Unsupervised learning is where you only have input data (X) and no corresponding output variables. The goal of unsupervised learning is to model the underlying structure or distribution in the data in order to learn more about the data. It is called unsupervised learning because, unlike supervised learning, there are no correct answers and there is no teacher. Algorithms are left to their own devices to discover and present the interesting structure in the data.
Unsupervised learning problems can be further grouped into clustering and association problems.
• Clustering: A clustering problem is where you want to discover the inherent groupings in the data, such as grouping customers by purchasing behaviour.
• Association: An association rule learning problem is where you want to discover rules that describe large portions of your data, such as "people that buy X also tend to buy Y".
Some clustering algorithms that come under unsupervised learning algorithms are:
• K-means clustering
• Hierarchical clustering
• Hidden Markov models
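To make the clustering idea concrete, here is a minimal k-means sketch on 1-D data. The data, the spread-out initialization and the iteration count are assumptions made for the illustration, not part of the book's text:

```python
def kmeans(points, k, iters=20):
    """Plain k-means on 1-D data: assign each point to its nearest centroid,
    then move each centroid to the mean of its assigned points."""
    pts = sorted(points)
    centroids = [pts[i * len(pts) // k] for i in range(k)]  # spread-out initial centroids
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        # empty clusters keep their old centroid
        centroids = [sum(c) / len(c) if c else centroids[i] for i, c in enumerate(clusters)]
    return sorted(centroids)

# Two obvious groups, around 1 and around 10; no labels are ever used.
data = [0.9, 1.0, 1.1, 9.9, 10.0, 10.1]
print(kmeans(data, 2))  # approximately [1.0, 10.0]
```

Note that nothing here tells the algorithm what the "right" groups are; it simply finds similarities in the inputs, which is exactly the clustering setting described above.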
Q.6. What is the difference between Model selection and Feature selection? (12.5)
Ans. (a) Feature selection: In machine learning, feature selection, also known as variable selection, attribute selection or variable subset selection, is the process of selecting a subset of relevant features (variables, predictors) for use in model construction. Feature selection techniques are used for three reasons:
• Simplification of models to make them easier to interpret by researchers/users,
• Shorter training times,
• Enhanced generalization by reducing overfitting (formally, reduction of variance).
Feature selection techniques should be distinguished from feature extraction. Feature extraction creates new features from functions of the original features, whereas feature selection returns a subset of the features. Feature selection techniques are often used in domains where there are many features and comparatively few samples (or data points). Archetypal cases for the application of feature selection include the analysis of written texts and DNA microarray data, where there are many thousands of features and a few tens to hundreds of samples.
A feature selection algorithm can be seen as the combination of a search technique for proposing new feature subsets, along with an evaluation measure which scores the different feature subsets. The simplest algorithm is to test each possible subset of features and find the one which minimizes the error rate. This is an exhaustive search of the space, and is computationally intractable for all but the smallest of feature sets.
Feature Selection Algorithms
There are three general classes of feature selection algorithms: filter methods, wrapper methods and embedded methods.
1. Filter Methods
Filter feature selection methods apply a statistical measure to assign a score to each feature. The features are ranked by the score and either selected to be kept or removed from the dataset. The methods are often univariate and consider each feature independently, or with regard to the dependent variable. Some examples of filter methods include the Chi-squared test, information gain and correlation coefficient scores.
2. Wrapper Methods
Wrapper methods consider the selection of a set of features as a search problem, where different combinations are prepared, evaluated and compared to other combinations. A predictive model is used to evaluate a combination of features and assign a score based on model accuracy. The search process may be methodical, such as a best-first search; it may be stochastic, such as a random hill-climbing algorithm; or it may use heuristics, like forward and backward passes to add and remove features. An example of a wrapper method is the recursive feature elimination algorithm.
3. Embedded Methods
Embedded methods learn which features best contribute to the accuracy of the model while the model is being created. The most common type of embedded feature selection methods are regularization methods. Regularization methods are also called penalization methods; they introduce additional constraints into the optimization of a predictive algorithm (such as a regression algorithm) that bias the model toward lower complexity (fewer coefficients). Examples of regularization algorithms are the LASSO, Elastic Net and Ridge Regression.
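A filter method, as described above, simply scores each feature and ranks it. A minimal sketch using the correlation-coefficient score on made-up data (the helper names and the toy dataset are assumptions for the illustration):

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def rank_features(X, y):
    """Filter-style ranking: score each feature column by |correlation| with the target."""
    d = len(X[0])
    scores = [abs(pearson([row[j] for row in X], y)) for j in range(d)]
    return sorted(range(d), key=lambda j: -scores[j])

# Made-up data: feature 0 tracks the target exactly, feature 1 is noise.
X = [[1, 5], [2, 3], [3, 8], [4, 1], [5, 6]]
y = [2, 4, 6, 8, 10]
print(rank_features(X, y))  # feature 0 ranked first: [0, 1]
```

Each feature is scored independently of the others, which is what makes filter methods cheap but also blind to feature interactions; wrapper methods address that at higher cost.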
(b) Model selection: Model selection is the task of selecting a statistical model from a set of candidate models, given data. In the simplest cases, a pre-existing set of data is considered. In its most basic form, model selection is one of the fundamental tasks of scientific inquiry.
A standard example is curve fitting: given a set of points and background knowledge (e.g., that the points are the result of sampling with noise), we must select a curve that describes the function that generated the points. Candidate models of different complexity, such as polynomials of different degree, are often considered. A model that is too simple underfits the data, while a model with too many parameters fits the noise and fails to generalize. A good model selection technique therefore balances goodness of fit with simplicity.
Ans. (a) Markov Decision Process (MDP): An MDP is a tuple (S, A, {Psa}, γ, R), where:
• S is a set of states.
• A is a set of actions. (For example, the set of all possible directions in which you can push a helicopter's control sticks.)
• Psa are the state transition probabilities. For each state s ∈ S and action a ∈ A, Psa is a distribution over the state space. Briefly, Psa gives the distribution over what states we will transition to if we take action a in state s.
• γ ∈ [0, 1) is called the discount factor.
• R : S × A → R is the reward function. (Rewards are sometimes also written as a function of a state only, in which case we would have R : S → R.)
The dynamics of an MDP proceed as follows: We start in some state s0, and get to choose some action a0 ∈ A to take in the MDP. As a result of our choice, the state of the MDP randomly transitions to some successor state s1, drawn according to s1 ~ Ps0a0. Then, we get to pick another action a1. As a result of this action, the state transitions again, now to some s2 ~ Ps1a1. We then pick a2, and so on. Pictorially, we can represent this process as follows:
  s0 --a0--> s1 --a1--> s2 --a2--> s3 --a3--> ...
Upon visiting the sequence of states s0, s1, ... with actions a0, a1, ..., our total payoff is given by
  R(s0, a0) + γ R(s1, a1) + γ² R(s2, a2) + ...
Or, when we are writing rewards as a function of the states only, this becomes
  R(s0) + γ R(s1) + γ² R(s2) + ...
Ans. (b) Bellman Equation:
A Bellman equation, also known as a dynamic programming equation, is a necessary condition for optimality associated with the mathematical optimization method known as dynamic programming. It writes the value of a decision problem at a certain point in time in terms of the payoff from some initial choices and the value of the remaining decision problem that results from those initial choices. This breaks a dynamic optimization problem into simpler subproblems, as Bellman's "Principle of Optimality" prescribes.
Almost any problem which can be solved using optimal control theory can also be solved by analyzing the appropriate Bellman equation. However, the term 'Bellman equation' usually refers to the dynamic programming equation associated with discrete-time optimization problems.
This is the Bellman equation. It can be simplified even further if we drop time subscripts and plug in the value of the next state:
  V(x) = max_{a ∈ Γ(x)} { F(x, a) + β V(T(x, a)) }
The Bellman equation is classified as a functional equation, because solving it means finding the unknown function V, which is the value function. Recall that the value function describes the best possible value of the objective, as a function of the state x. By calculating the value function, we will also find the function a(x) that describes the optimal action as a function of the state; this is called the policy function.
For a general stochastic sequential optimization problem with Markovian shocks z, the Bellman equation takes a very similar form:
  V(x, z) = max_{a ∈ Γ(x, z)} { F(x, a, z) + β E[ V(T(x, a, z), z') | z ] }
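Solving the Bellman equation by repeated application of the right-hand side is value iteration. A minimal sketch on a made-up two-state MDP (the states, transitions and rewards are invented for the example):

```python
def value_iteration(states, actions, P, R, gamma=0.9, tol=1e-8):
    """Iterate V(s) <- max_a [ R(s,a) + gamma * sum_s' P[s][a][s'] * V(s') ]
    until no state's value changes by more than tol."""
    V = {s: 0.0 for s in states}
    while True:
        newV = {s: max(R[s][a] + gamma * sum(p * V[s2] for s2, p in P[s][a].items())
                       for a in actions)
                for s in states}
        if max(abs(newV[s] - V[s]) for s in states) < tol:
            return newV
        V = newV

# Made-up MDP: from state A, action 'go' usually reaches state B, and B pays reward 1.
states, actions = ['A', 'B'], ['stay', 'go']
P = {'A': {'stay': {'A': 1.0}, 'go': {'B': 0.8, 'A': 0.2}},
     'B': {'stay': {'B': 1.0}, 'go': {'A': 1.0}}}
R = {'A': {'stay': 0.0, 'go': 0.0}, 'B': {'stay': 1.0, 'go': 0.0}}
V = value_iteration(states, actions, P, R)
print(V['B'] > V['A'] > 0)  # True: B is the more valuable state
```

Because the update is a contraction for gamma < 1, the iteration converges to the unique fixed point of the Bellman equation; here V(B) approaches 1/(1 - 0.9) = 10.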
Ans. (c) Value function approximation:
Value function approximation has been successfully applied to many reinforcement learning problems. It is an alternative method for finding policies in continuous-state MDPs, in which we approximate V* directly, without resorting to discretization.
To develop a value function approximation algorithm, we will assume that we have a model, or simulator, for the MDP. Informally, a simulator is a black box that takes as input any (continuous-valued) state s_t and action a_t, and outputs a next state s_{t+1} sampled according to the state transition probabilities P_{s_t a_t}:
  (s_t, a_t) -> [simulator] -> s_{t+1}
There are several ways that one can get such a model. One is to use physics simulation. For example, a simulator for the inverted pendulum could compute what the position and orientation of the cart will be at time t + 1, given the current state at time t and the action a taken, by simulating the laws of physics. Alternatively, one can use an off-the-shelf physics simulation software package which, given a complete physical description of a mechanical system, the current state s_t and the action a_t, computes the state s_{t+1} of the system a small fraction of a second into the future.
An alternative way to get a model is to learn one from data collected in the MDP. For example, suppose we execute m trials in which we repeatedly take actions in the MDP, each trial for T timesteps. This can be done by picking actions at random, executing some specific policy, or via some other way of choosing actions. We would then observe m state sequences like the following:
  s0(1) --a0(1)--> s1(1) --a1(1)--> s2(1) --> ... --> sT(1)
  s0(2) --a0(2)--> s1(2) --a1(2)--> s2(2) --> ... --> sT(2)
  ...
  s0(m) --a0(m)--> s1(m) --a1(m)--> s2(m) --> ... --> sT(m)
We can then apply a learning algorithm to predict s_{t+1} as a function of s_t and a_t. For example, one may choose to learn a linear model of the form
  s_{t+1} = A s_t + B a_t
using an algorithm similar to linear regression. Here, the parameters of the model are the matrices A and B, and we can estimate them using the data collected from our m trials, by picking
  argmin_{A,B} Σ_{i=1}^{m} Σ_{t=0}^{T-1} || s_{t+1}(i) - (A s_t(i) + B a_t(i)) ||²
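In the scalar case the matrices A and B reduce to numbers and the least-squares problem above can be solved with 2x2 normal equations by hand. A sketch with a hypothetical simulator whose true dynamics are known, so the fit can be checked (the dynamics and action sequence are made up):

```python
def fit_linear_dynamics(transitions):
    """Least-squares fit of s' = A*s + B*a from (s, a, s') triples,
    by solving the 2x2 normal equations for [A, B]."""
    Sss = sum(s * s for s, a, s2 in transitions)
    Saa = sum(a * a for s, a, s2 in transitions)
    Ssa = sum(s * a for s, a, s2 in transitions)
    Sss2 = sum(s * s2 for s, a, s2 in transitions)
    Sas2 = sum(a * s2 for s, a, s2 in transitions)
    det = Sss * Saa - Ssa * Ssa
    A = (Saa * Sss2 - Ssa * Sas2) / det
    B = (Sss * Sas2 - Ssa * Sss2) / det
    return A, B

# Hypothetical simulator with true dynamics s' = 0.9*s + 0.2*a (noise-free).
def simulator(s, a):
    return 0.9 * s + 0.2 * a

data, s = [], 1.0
for a in [1.0, -0.5, 2.0, 0.3, -1.0]:  # one trial of T = 5 timesteps
    s2 = simulator(s, a)
    data.append((s, a, s2))
    s = s2

print(fit_linear_dynamics(data))  # recovers approximately (0.9, 0.2)
```

With noise-free data the minimizer recovers the true coefficients exactly; with a stochastic simulator the same estimator returns the best linear fit in the least-squares sense.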
MODEL PAPER-II
END TERM EXAMINATION [MAY-2016]
EIGHTH SEMESTER [[Link]]
MACHINE LEARNING [ETCS-402]
Time: 3 hrs.
Note: Q.1. is compulsory. Attempt any four from the rest.
Q.1.(a) What do you mean by machine learning? Give some examples of machine learning problems.
Ans. Machine learning studies computer algorithms for learning to do stuff. The learning that is being done is always based on some sort of observations or data, such as examples, direct experience, or instruction. So in general, machine learning is about learning to do better in the future based on what was experienced in the past. The emphasis is on automatic methods: the goal is to devise learning algorithms that do the learning automatically, without human intervention. Often, where a task (such as spam filtering) is hard for a human expert to program directly, in machine learning we instead seek methods by which the computer comes up with its own program based on examples that we provide.
Some examples of machine learning problems are:
• Optical character recognition: categorize images of handwritten characters by the letters represented.
• Face detection: find faces in images (or indicate if a face is present).
• Spam filtering: identify email messages as spam or non-spam.
• Topic spotting: categorize news articles (say) as to whether they are about politics, sports, entertainment, etc.
• Spoken language understanding: within the context of a limited domain, determine the meaning of something uttered by a speaker, to the extent that it can be classified into one of a fixed set of categories.
• Medical diagnosis: diagnose a patient as a sufferer or non-sufferer of some disease.
• Customer segmentation: predict, for instance, which customers will respond to a particular promotion.
• Fraud detection: identify credit card transactions (for instance) which may be fraudulent.
• Weather prediction: predict, for instance, whether or not it will rain tomorrow.
Q.1.(b) Explain in brief supervised machine learning. Give some examples of supervised machine learning.
Ans. Supervised Machine Learning:
The majority of practical machine learning uses supervised learning. Supervised learning is often encountered in classification problems, because the goal is often to get the computer to learn a classification system that we have created. More generally, classification learning is appropriate for any problem where deducing a classification is useful and the classification is easy to determine. It is called supervised learning because the process of an algorithm learning from the training dataset can be thought of as a teacher supervising the learning process. The algorithm iteratively makes predictions on the training data and is corrected by the teacher. Learning stops when the algorithm achieves an acceptable level of performance.
Supervised learning is where you have input variables (X) and an output variable (Y) and you use an algorithm to learn the mapping function from the input to the output:
  Y = f(X)
The goal is to approximate the mapping function so well that when you have new input data (X) you can predict the output variables (Y) for that data.
Supervised learning problems can be further grouped into regression and classification problems.
• Classification: A classification problem is when the output variable is a category, such as "red" and "blue" or "disease" and "no disease".
• Regression: A regression problem is when the output variable is a real value, such as "dollars" or "weight".
Some popular examples of supervised machine learning algorithms are logistic regression, support vector machines (SVM), k-nearest neighbours and Naive Bayes.
Q.1.(c) Explain the perceptron.
Ans. A perceptron takes a vector of real-valued inputs, calculates a linear combination of these inputs, and then outputs 1 if the result is greater than some threshold and -1 otherwise. More precisely, given inputs x1 through xn, the output o(x1, ..., xn) computed by the perceptron is
  o(x1, ..., xn) = 1 if w0 + w1x1 + w2x2 + ... + wnxn > 0, and -1 otherwise,
where each wi is a real-valued constant, or weight, that determines the contribution of input xi to the perceptron output. Notice that the quantity (-w0) is a threshold that the weighted combination of inputs w1x1 + ... + wnxn must surpass in order for the perceptron to output a 1.
To simplify notation, we imagine an additional constant input x0 = 1, allowing us to write the above inequality as Σ_{i=0}^{n} wi xi > 0, or in vector form as w·x > 0. For brevity, we will sometimes write the perceptron function as
  o(x) = sgn(w·x)
where
  sgn(y) = 1 if y > 0, and -1 otherwise.
The perceptron can be viewed as representing a hyperplane decision surface in the n-dimensional space of instances (i.e., points). The perceptron outputs a 1 for instances lying on one side of the hyperplane and a -1 for instances lying on the other side. The equation for this decision hyperplane is w·x = 0.
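The perceptron function, together with the standard perceptron training rule wi <- wi + eta(t - o)xi, can be sketched as follows. The toy dataset (the AND function with +/-1 labels), learning rate and epoch count are assumptions for the example:

```python
def sgn(y):
    return 1 if y > 0 else -1

def predict(w, x):
    """o(x) = sgn(w . x), with x[0] taken to be the constant input 1."""
    return sgn(sum(wi * xi for wi, xi in zip(w, x)))

def train_perceptron(samples, eta=0.1, epochs=20):
    """Perceptron training rule: w_i <- w_i + eta * (t - o) * x_i for each example."""
    w = [0.0] * len(samples[0][0])
    for _ in range(epochs):
        for x, t in samples:
            o = predict(w, x)
            w = [wi + eta * (t - o) * xi for wi, xi in zip(w, x)]
    return w

# Linearly separable toy data: the AND function, inputs (x0=1, x1, x2), labels +/-1.
samples = [((1, 0, 0), -1), ((1, 0, 1), -1), ((1, 1, 0), -1), ((1, 1, 1), 1)]
w = train_perceptron(samples)
print([predict(w, x) for x, _ in samples])  # [-1, -1, -1, 1]
```

Because the data are linearly separable, the perceptron convergence theorem guarantees the rule finds a separating weight vector in a finite number of updates.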
Q.1.(d) How can overfitting be limited in machine learning?
Ans. Both overfitting and underfitting can lead to poor model performance, but by far the most common problem in applied machine learning is overfitting. Overfitting is such a problem because the evaluation of machine learning algorithms on training data is different from the evaluation we actually care about most, namely how well the algorithm performs on unseen data. There are two important techniques that you can use when evaluating machine learning algorithms to limit overfitting:
1. Use a resampling technique to estimate model accuracy.
2. Hold back a validation dataset.
The most popular resampling technique is k-fold cross validation. It allows you to train and test your model k times on different subsets of the training data, and to build up an estimate of the performance of the model on unseen data.
A validation dataset is simply a subset of your training data that you hold back from your machine learning algorithms until the very end of your project. After you have selected and tuned your machine learning algorithms on your training dataset, you can evaluate the learned models on the validation dataset to get a final objective idea of how the models might perform on unseen data. Using cross validation is a gold standard in applied machine learning for estimating model accuracy on unseen data, and using a validation dataset is also an excellent practice.
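The k-fold splitting described above can be sketched as an index generator; the fold layout (consecutive folds, no shuffling) is a simplifying assumption of this sketch:

```python
def kfold_indices(n, k):
    """Split indices 0..n-1 into k consecutive folds; yield (train, test) index lists.
    Each example lands in exactly one test fold."""
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        test = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n))
        yield train, test
        start += size

# 6 examples, 3 folds: train on 4, test on the held-out 2, three times over.
for train, test in kfold_indices(6, 3):
    print(test, train)
# [0, 1] [2, 3, 4, 5]
# [2, 3] [0, 1, 4, 5]
# [4, 5] [0, 1, 2, 3]
```

Averaging the k test scores gives the cross-validation estimate of performance on unseen data; a real pipeline would usually shuffle the indices first.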
Q.1.(e) What do you understand by POMDP?
Ans. A Partially Observable Markov Decision Process (POMDP) is a generalization of a Markov decision process, used to model agent decision problems such as maintenance and planning under uncertainty in general. An exact solution to a POMDP yields the optimal action for each possible belief over the world states. The optimal action maximizes (or minimizes) the expected reward (or cost) of the agent over a possibly infinite horizon. The sequence of optimal actions is known as the optimal policy of the agent for interacting with its environment.
A discrete-time POMDP models the relationship between an agent and its environment. Formally, a POMDP is a tuple (S, A, T, R, Ω, O, γ) where:
• S is a set of states,
• A is a set of actions,
• T is a set of conditional transition probabilities between states,
• R : S × A → R is the reward function,
• Ω is a set of observations,
• O is a set of conditional observation probabilities, and
• γ ∈ [0, 1] is the discount factor.
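The "belief over the world states" mentioned above is updated after each action a and observation o by b'(s') proportional to O(o | s', a) * sum_s T(s' | s, a) * b(s). A minimal sketch on a made-up two-state example (the door domain, sensor accuracy and probabilities are all invented for illustration):

```python
def belief_update(b, a, o, T, O, states):
    """POMDP belief update: b'(s') proportional to O[s'][a][o] * sum_s T[s][a][s'] * b[s]."""
    new_b = {s2: O[s2][a][o] * sum(T[s][a][s2] * b[s] for s in states) for s2 in states}
    norm = sum(new_b.values())
    return {s: v / norm for s, v in new_b.items()}

# Made-up domain: a door is 'open' or 'closed'; the action 'look' does not change
# the state, and the sensor reports the true state 80% of the time.
states = ['open', 'closed']
T = {s: {'look': {s2: 1.0 if s2 == s else 0.0 for s2 in states}} for s in states}
O = {s: {'look': {'see_open': 0.8 if s == 'open' else 0.2,
                  'see_closed': 0.2 if s == 'open' else 0.8}} for s in states}

b = {'open': 0.5, 'closed': 0.5}
b = belief_update(b, 'look', 'see_open', T, O, states)
print(round(b['open'], 2))  # 0.8
```

Starting from complete uncertainty, one 'see_open' observation shifts the belief to 0.8 in favour of the door being open; a POMDP policy maps such beliefs, not raw states, to actions.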
Q.2. Give the different aspects of designing a learning system. (12.5)
Ans. In order to illustrate some of the basic design issues and approaches to machine learning, let us consider designing a program to learn to play checkers, with the goal of entering it in the world checkers tournament.
1. Choosing the Training Experience
The first design choice we face is to choose the type of training experience from which our system will learn. The type of training experience available can have a significant impact on the success or failure of the learner. One key attribute is whether the training experience provides direct or indirect feedback regarding the choices made by the performance system. For example, in learning to play checkers, the system might learn from direct training examples consisting of individual checkers board states and the correct move for each.
A second important attribute of the training experience is the degree to which the learner controls the sequence of training examples. For example, the learner might rely on the teacher to select informative board states and to provide the correct move for each. Alternatively, the learner might itself propose board states that it finds particularly confusing and ask the teacher for the correct move. Or the learner may have complete control over both the board states and the (indirect) training classifications, as it does when it learns by playing against itself with no teacher present.
A third important attribute of the training experience is how well it represents the distribution of examples over which the final system performance P must be measured. In general, learning is most reliable when the training examples follow a distribution similar to that of future test examples.
A checkers learning problem:
• Task T: playing checkers
• Performance measure P: percent of games won against opponents
• Training experience E: playing practice games against itself
2. Choosing the Target Function
The next design choice is to determine exactly what type of knowledge will be learned and how this will be used by the performance program. Let us assume that our program can generate the legal moves from any board state. The program needs only to learn how to choose the best move from among these legal moves. One obvious choice for the type of information to be learned is a function ChooseMove : B → M that accepts as input any board state b from the set of legal board states B and produces as output some move m from the set of legal moves M. An alternative target function, which turns out to be easier to learn, is an evaluation function V : B → R that assigns a numerical score to any given board state, with better board states receiving higher scores. The performance system can then select the best move from any position by generating each successor board state and choosing the move that leads to the state with the highest value of V.
3. Choosing a Representation for the Target Function
Next we must choose a representation that the learning program will use to describe the function V̂ that it will learn. We could, for example, represent V̂ using a large table, a collection of rules, a polynomial function of predefined board features, or an artificial neural network. To keep the discussion simple, let us choose a simple representation: a linear combination of the following board features:
• x1: the number of black pieces on the board
• x2: the number of red pieces on the board
• x3: the number of black kings on the board
• x4: the number of red kings on the board
• x5: the number of black pieces threatened by red
• x6: the number of red pieces threatened by black
Thus, our learning program will represent V̂(b) as a linear function of the form
  V̂(b) = w0 + w1x1 + w2x2 + w3x3 + w4x4 + w5x5 + w6x6
where w0 through w6 are numerical coefficients, or weights, to be chosen by the learning algorithm.
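A weighted linear evaluation of checkers board features, as used in this classic design exercise, can be sketched in a few lines; the feature values, learning rate and the LMS-style weight update shown are illustrative assumptions, not the book's exact procedure:

```python
def v_hat(w, features):
    """Linear board evaluation: V(b) = w0 + w1*x1 + ... + w6*x6."""
    return w[0] + sum(wi * xi for wi, xi in zip(w[1:], features))

def lms_update(w, features, v_train, eta=0.001):
    """One LMS step: nudge each weight to reduce (v_train - v_hat)."""
    error = v_train - v_hat(w, features)
    x = [1.0] + list(features)  # constant input for w0
    return [wi + eta * error * xi for wi, xi in zip(w, x)]

# Illustrative board: 12 black pieces, 11 red, no kings,
# 1 black piece threatened, 2 red pieces threatened.
features = [12, 11, 0, 0, 1, 2]
w = [0.0] * 7
# Repeated LMS steps pull the estimate toward a training value of, say, +5.
for _ in range(200):
    w = lms_update(w, features, 5.0)
print(round(v_hat(w, features), 2))  # close to 5.0
```

During self-play, the training value for a board can be taken from the evaluation of its successor positions, so the same update rule gradually tunes the weights from indirect feedback.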
Q.3. Give decision trees to represent the following boolean functions:
(i) A ∧ ¬B
(ii) A XOR B
Ans. (i) A ∧ ¬B
[Figure: decision tree for A ∧ ¬B]
(ii) A XOR B
[Figure: decision tree for A XOR B; in the figure, A1 means A and A2 means ¬A]
Q.4. Explain in detail Support Vector Machines. How is the optimal hyperplane computed? (12.5)
Ans. Support Vector Machines: In machine learning, support vector machines (SVMs) are supervised learning models with associated learning algorithms that analyze data used for classification and regression analysis. Given a set of training examples, each marked as belonging to one or the other of two categories, an SVM training algorithm builds a model that assigns new examples to one category or the other, making it a non-probabilistic binary linear classifier. An SVM model is a representation of the examples as points in space, mapped so that the examples of the separate categories are divided by a clear gap that is as wide as possible. New examples are then mapped into that same space and predicted to belong to a category based on which side of the gap they fall. In addition to performing linear classification, SVMs can efficiently perform a non-linear classification using what is called the kernel trick, implicitly mapping their inputs into high-dimensional feature spaces.
A Support Vector Machine (SVM) is a discriminative classifier formally defined by a separating hyperplane. In other words, given labeled training data (supervised learning), the algorithm outputs an optimal hyperplane which categorizes new examples.
Consider the following simple problem: for a linearly separable set of 2-D points, find a separating straight line.
[Figure: a set of 2-D points of two classes with several candidate separating lines]
In the figure above you can see that there exist multiple lines that offer a solution to the problem. Is any of them better than the others? We can intuitively define a criterion to estimate the worth of the lines: a line is bad if it passes too close to the points, because it will be noise-sensitive and it will not generalize correctly. Therefore, our goal should be to find the line passing as far as possible from all points.
The operation of the SVM algorithm is based on finding the hyperplane that gives the largest minimum distance to the training examples.
How is the optimal hyperplane computed?
Twice the minimum distance from the hyperplane to the training examples is called the margin in SVM theory. Intuitively, a good separation is achieved by the hyperplane that has the largest margin, since in general the larger the margin, the lower the generalization error of the classifier. The optimal separating hyperplane therefore maximizes the margin of the training data.
The gamma parameter lies between 0 and 1 and ensures the convergence of the discounted sum of rewards. If gamma is closer to zero, the agent will tend to consider only immediate rewards; if gamma is closer to one, the agent will consider future rewards with greater weight, and will be willing to delay the reward.
The Q-learning algorithm goes as follows:
1. Set the gamma parameter, and the environment rewards in matrix R.
2. Initialize matrix Q to zero.
3. For each episode: select a random initial state. Do while the goal state has not been reached:
• Select one among all possible actions for the current state.
• Using this possible action, consider going to the next state.
• Get the maximum Q value for this next state, based on all possible actions.
• Compute: Q(state, action) = R(state, action) + gamma * Max[Q(next state, all actions)]
• Set the next state as the current state.
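The algorithm steps above can be sketched directly in Python. The tiny three-state chain, the reward of 100 for reaching the goal and the convention that an action means "move to that state" are assumptions modelled on the classic room-navigation illustration of Q-learning:

```python
import random

def q_learning(R, goal, gamma=0.8, episodes=500, seed=1):
    """Tabular Q-learning with the update Q[s][a] = R[s][a] + gamma * max(Q[next]);
    an action a here means 'move to state a', so the next state equals the action."""
    rng = random.Random(seed)
    n = len(R)
    Q = [[0.0] * n for _ in range(n)]
    for _ in range(episodes):
        s = rng.randrange(n)
        while s != goal:
            moves = [a for a in range(n) if R[s][a] is not None]  # legal moves from s
            a = rng.choice(moves)
            Q[s][a] = R[s][a] + gamma * max(Q[a])
            s = a
    return Q

# Tiny 3-state chain 0 - 1 - 2, where state 2 is the goal (reward 100 on entering it).
# R[s][a] is None when the move is illegal.
R = [[None, 0, None],
     [0, None, 100],
     [None, 0, None]]
Q = q_learning(R, goal=2)
# Greedy policy from state 0: move to state 1 (and from there to the goal).
print(max(range(3), key=lambda a: Q[0][a] if R[0][a] is not None else -1))  # 1
```

After enough episodes the Q matrix encodes the discounted distance to the goal, e.g. Q[1][2] = 100 and Q[0][1] = gamma * 100 = 80, and the greedy policy follows the highest Q value.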
Hidden Markov models (HMMs): A hidden Markov model is a statistical Markov model in which the system being modeled is assumed to be a Markov process with unobserved (hidden) states. In simpler Markov models (like a Markov chain), the state is directly visible to the observer, and therefore the state transition probabilities are the only parameters. In a hidden Markov model, the state is not directly visible, but the output, which depends on the state, is visible. Each state has a probability distribution over the possible output tokens. Therefore, the sequence of tokens generated by an HMM gives some information about the sequence of states.
Hidden Markov models are especially known for their application in temporal pattern recognition such as speech, handwriting and gesture recognition, part-of-speech tagging, musical score following, partial discharges and bioinformatics.
In its discrete form, a hidden Markov process can be visualized as a generalization of the urn problem with replacement. Consider this example: in a room that is not visible to
an observer there is a genie. The room contains urns x1, x2, x3, ..., each of which contains
a known mix of balls, each ball labeled y1, y2, y3, .... The genie chooses an urn in that
room and randomly draws a ball from that urn. It then puts the ball onto a conveyor
belt, where the observer can observe the sequence of the balls but not the sequence of
urns from which they were drawn. The genie has some procedure to choose urns; the
choice of the urn for the n-th ball depends only upon a random number and the choice of
the urn for the (n - 1)-th ball. The choice of urn does not directly depend on the urns
chosen before this single previous urn; therefore, this is called a Markov process. It can
be described by the upper part of Figure.
‘The Markov process itself cannot be observed, only the sequence of labeled balls,
thus this arrangement is called a “hidden Markov process”. This is illustrated by the
lower part of the diagram shown in Figure, where one can see that balls y1, y2, y3, y4 can
be drawn at each state. Even if the observer knows the composition of the urns and has
just observed a sequence of three balls, e.g. y1, y2 and y3 on the conveyor belt, the
observer still cannot be sure which urn (i.e., at which state) the genie has drawn the
third ball from. However, the observer can work out other information, such as the
likelihood that the third ball came from each of the urns.
Fig. 1. Probabilistic parameters of a hidden Markov model (example)
x = states
y = possible observations
a = state transition probabilities
b = output probabilities
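The likelihood the observer computes in the urn example, i.e. the probability of a ball sequence under the model, is given by the forward algorithm. A sketch with a made-up two-urn HMM (all the probabilities below are invented for the illustration):

```python
def forward(obs, states, start, trans, emit):
    """Forward algorithm: alpha[s] = P(observations so far, current state = s);
    the final answer is the sum of alpha over states."""
    alpha = {s: start[s] * emit[s][obs[0]] for s in states}
    for o in obs[1:]:
        alpha = {s2: emit[s2][o] * sum(alpha[s] * trans[s][s2] for s in states)
                 for s2 in states}
    return sum(alpha.values())  # P(observation sequence)

# Made-up two-urn HMM: urn x1 mostly holds ball y1, urn x2 mostly holds ball y2.
states = ['x1', 'x2']
start = {'x1': 0.5, 'x2': 0.5}
trans = {'x1': {'x1': 0.7, 'x2': 0.3}, 'x2': {'x1': 0.4, 'x2': 0.6}}
emit = {'x1': {'y1': 0.9, 'y2': 0.1}, 'x2': {'y1': 0.2, 'y2': 0.8}}

p = forward(['y1', 'y1', 'y2'], states, start, trans, emit)
print(round(p, 4))  # 0.1193
```

The same alpha table, normalized at the final step, also gives the posterior probability that the last ball came from each urn, which is exactly the inference described in the urn example.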
FIRST TERM EXAMINATION [FEB. 2017]
EIGHTH SEMESTER [[Link].]
MACHINE LEARNING [ETCS-402]
Time: 1½ hrs. M.M.: 30
Note: Q.1. is compulsory. Attempt any two more questions from the rest.
Q.1. (a) What do you understand by Reinforcement Learning? 2.5x4=10
Ans. Reinforcement Learning refers to the problem of a goal-directed agent interacting with an uncertain environment. The goal of an RL agent is to maximize a long-term scalar reward by sensing the state of the environment and taking actions which affect the state. At each step, an RL system gets evaluative feedback about the performance of its action, allowing it to improve the performance of subsequent actions.
An analogous RL control system is shown in Figure, where the controller, based on state feedback and reinforcement feedback about its previous action, calculates the next control, which should lead to an improved performance. The reinforcement signal is the output of a performance evaluator function, which is typically a function of the state and the control. An RL system has a similar objective to an optimal controller, which aims to optimize a long-term performance criterion while maintaining stability. RL methods typically estimate the value function, which is a measure of the goodness of a given action for a given state.
Q.1. (b) What is overfitting?
Ans. Overfitting refers to a model that models the training data too well. Overfitting happens when a model learns the detail and noise in the training data to the extent that it negatively impacts the performance of the model on new data. This means that the noise or random fluctuations in the training data are picked up and learned as concepts by the model. The problem is that these concepts do not apply to new data and negatively impact the model's ability to generalize. Overfitting is more likely with nonparametric and nonlinear models that have more flexibility when learning a target function. As such, many nonparametric machine learning algorithms also include parameters or techniques to limit and constrain how much detail the model learns. For example, decision trees are a nonparametric machine learning algorithm that is very flexible and is subject to overfitting the training data. This problem can be addressed by pruning a tree after it has learned, in order to remove some of the detail it has picked up.
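Overfitting can be demonstrated in a few lines by fitting polynomials of increasing degree to noisy samples. The data and degrees below are illustrative; the very flexible high-degree model drives the training error toward zero while the test error stays large:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data: noisy samples of a smooth underlying function.
x_train = np.linspace(0, 1, 15)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.3, x_train.size)
x_test = np.linspace(0.02, 0.98, 50)
y_test = np.sin(2 * np.pi * x_test)

def mse(deg):
    """Train/test mean squared error of a degree-`deg` polynomial fit."""
    p = np.polynomial.Polynomial.fit(x_train, y_train, deg)
    return (np.mean((p(x_train) - y_train) ** 2),
            np.mean((p(x_test) - y_test) ** 2))

# A degree-14 polynomial can pass through all 15 training points
# (near-zero training error) while generalizing much worse.
for deg in (3, 14):
    tr, te = mse(deg)
    print(f"degree {deg}: train MSE {tr:.4f}, test MSE {te:.4f}")
```

The degree-14 model has memorized the noise as if it were a concept, which is exactly the failure mode described above.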
Q.1. (c) What do you understand by POMDP?
Ans. A partially observable Markov decision process (POMDP) is a generalization of a Markov decision process (MDP). A POMDP models an agent decision process in which it is assumed that the system dynamics are determined by an MDP, but the agent cannot directly observe the underlying state. Instead, it must maintain a probability distribution over the set of possible states, based on a set of observations and observation probabilities, and the underlying MDP.
The POMDP framework is general enough to model a variety of real-world sequential decision processes. Applications include robot navigation problems, machine maintenance, and planning under uncertainty in general. An exact solution to a POMDP yields the optimal action for each possible belief over the world states. The optimal action maximizes (or minimizes) the expected reward (or cost).
A discrete-time POMDP models the relationship between an agent and its environment. Formally, a POMDP is a tuple (S, A, T, R, Ω, O, γ), where:
• S is a set of states,
• A is a set of actions,
• T is a set of conditional transition probabilities between states,
• R : S × A → ℝ is the reward function,
• Ω is a set of observations,
• O is a set of conditional observation probabilities, and
• γ ∈ [0, 1) is the discount factor.
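A sketch of how the tuple is used: since the agent cannot observe the state, it maintains a belief b over S and updates it from T and O after each action and observation, via b'(s') ∝ O(o | s', a) · Σ_s T(s' | s, a) b(s). The numbers below are illustrative:

```python
import numpy as np

# Toy 2-state, 1-action POMDP (illustrative numbers). T[a, s, s'] are
# the conditional transition probabilities and O[a, s', o] the
# conditional observation probabilities from the tuple above.
T = np.array([[[0.9, 0.1],
               [0.2, 0.8]]])
O = np.array([[[0.75, 0.25],
               [0.30, 0.70]]])

def belief_update(b, a, o):
    """b'(s') ∝ O(o | s', a) * sum_s T(s' | s, a) * b(s)."""
    predicted = b @ T[a]              # marginalize over current states
    new_b = O[a][:, o] * predicted    # weight by observation likelihood
    return new_b / new_b.sum()        # normalize to a distribution

b = np.array([0.5, 0.5])              # uniform initial belief
b = belief_update(b, a=0, o=1)
print(b)  # belief shifts toward the state that emits o=1 more often
```

The solution of a POMDP is then a policy over beliefs like b, rather than over the hidden states directly.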
Q.1. (d) Explain the concept of Hidden Markov model.
Ans. A hidden Markov model (HMM) is a statistical Markov model in which the system being modeled is assumed to be a Markov process with unobserved (hidden) states. An HMM can be presented as the simplest dynamic Bayesian network. In a simple Markov model (like a Markov chain), the state is directly visible to the observer, and therefore the state transition probabilities are the only parameters. In a hidden Markov model, the state is not directly visible, but the output, dependent on the state, is visible. Each state has a probability distribution over the possible output tokens. Therefore, the sequence of tokens generated by an HMM gives some information about the sequence of states; the model is called a "hidden" Markov model even if these parameters are known exactly.
Hidden Markov models are especially known for their application in pattern recognition such as speech, handwriting, gesture recognition, part-of-speech tagging, musical score following, partial discharges and bioinformatics.
In its discrete form, a hidden Markov process can be visualized as a generalization of the urn problem with replacement (where each item from the urn is returned to the original urn before the next step). Consider this example: in a room that is not visible to an observer there is a genie. The room contains urns X1, X2, X3, ..., each of which contains a known mix of balls, each ball labeled y1, y2, y3, .... The genie chooses an urn in that room and randomly draws a ball from that urn. It then puts the ball onto a conveyor belt, where the observer can observe the sequence of the balls but not the sequence of urns from which they were drawn. The genie has some procedure to choose urns; the choice of the urn for the n-th ball depends only upon a random number and the choice of the urn for the (n-1)-th ball. The choice of urn does not directly depend on the urns
chosen before this single previous urn; therefore, this is called a Markov process. It can be described by the upper part of the figure.
The Markov process itself cannot be observed, only the sequence of labeled balls; thus this arrangement is called a "hidden Markov process". This is illustrated by the lower part of the diagram shown in the figure, where one can see that balls y1, y2, y3, y4 can be drawn at each state. Even if the observer knows the composition of the urns and has just observed a sequence of three balls, e.g. y1, y2 and y3 on the conveyor belt, the observer still cannot be sure which urn (i.e., at which state) the genie has drawn the third ball from. However, the observer can work out other information, such as the likelihood that the third ball came from each of the urns.
Fig. 1. Probabilistic parameters of a hidden Markov model (example)
x = states
y = possible observations
a = state transition probabilities
b = output probabilities
Q.2. Explain any two: (5x2=10)
Q.2. (a) Markov Decision Process (MDP)
Ans. A Markov decision process is a tuple (S, A, {Psa}, γ, R), where:
• S is a set of states. (For example, in autonomous helicopter flight, S might be the set of all possible positions and orientations of the helicopter.)
• A is a set of actions. (For example, the set of all possible directions in which you can push the helicopter's control sticks.)
• Psa are the state transition probabilities. For each state s ∈ S and action a ∈ A, Psa is a distribution over the state space. Briefly, Psa gives the distribution over what states we will transition to if we take action a in state s.
• γ ∈ [0, 1) is called the discount factor.
• R : S × A → ℝ is the reward function. (Rewards are sometimes also written as a function of a state S only, in which case we would have R : S → ℝ.)
The dynamics of an MDP proceed as follows: We start in some state s0, and get to choose some action a0 ∈ A to take in the MDP. As a result of our choice, the state of the MDP randomly transitions to some successor state s1, drawn according to s1 ~ Ps0a0. Then, we get to pick another action a1. As a result of this action, the state transitions again, now to some s2 ~ Ps1a1. We then pick a2, and so on. Pictorially, we can represent this process as follows:

s0 --a0--> s1 --a1--> s2 --a2--> s3 --a3--> ...

Upon visiting the sequence of states s0, s1, ... with actions a0, a1, ..., our total payoff is given by

R(s0, a0) + γR(s1, a1) + γ²R(s2, a2) + ...

Or, when we are writing rewards as a function of the states only, this becomes

R(s0) + γR(s1) + γ²R(s2) + ...
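The total payoff above can be computed directly from a sampled reward sequence; a one-function sketch (the rewards and γ are illustrative):

```python
# Total payoff of a trajectory: R(s0,a0) + g*R(s1,a1) + g^2*R(s2,a2) + ...
def discounted_return(rewards, gamma):
    return sum((gamma ** t) * r for t, r in enumerate(rewards))

# Illustrative 4-step reward sequence with gamma = 0.9:
print(discounted_return([1.0, 0.0, 0.0, 10.0], 0.9))  # 1 + 0.9**3 * 10 = 8.29
```

The discount factor makes later rewards count for less, which is what keeps the infinite-horizon sum finite.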
Q.2. (b) Bellman's Equation
Ans. Refer to Q.7. (a) of End Term Examination 2017.
Q.2. (c) Value function approximation algorithm
Ans. Refer to Q.9. (d) of End Term Examination 2017.
Q.3. (a) Explain in detail Q-Learning.
Ans. Q-learning can be used to find an optimal policy for any given (finite) Markov decision process (MDP). It works by learning an action-value function that ultimately gives the expected utility of taking a given action in a given state and following the optimal policy thereafter. A policy is a rule that the agent follows in selecting actions, given the state it is in. When such an action-value function is learned, the optimal policy can be constructed by simply selecting the action with the highest value in each state. One of the strengths of Q-learning is that it is able to compare the expected utility of the available actions without requiring a model of the environment. Additionally, Q-learning can handle problems with stochastic transitions and rewards, without requiring any adaptations. It has been proven that for any finite MDP, Q-learning eventually finds an optimal policy, in the sense that the expected value of the total reward return over all successive steps, starting from the current state, is the maximum achievable.
The Q-learning algorithm was proposed as a way to optimize solutions in Markov decision process problems. The distinctive feature of Q-learning is its ability to choose between immediate rewards and delayed rewards. At each time step, the agent observes the state vector, then chooses and applies an action; as the process moves to the next state, the agent receives a reinforcement. The goal of training is to find the sequence of actions which maximizes the sum of the future reinforcements, thus leading to the shortest path from start to finish.
The transition rule of Q-learning is a very simple formula:

Q(state, action) = R(state, action) + γ · Max[Q(next state, all actions)]

The parameter γ has a range of 0 to 1 (0 ≤ γ < 1) and ensures the convergence of the sum. If γ is closer to zero, the agent will tend to consider only immediate rewards. If γ is closer to one, the agent will consider future rewards with greater weight, willing to delay the reward.
The Q-Learning algorithm goes as follows:
1. Set the gamma parameter, and environment rewards in matrix R.
2. Initialize matrix Q to zero.
3. For each episode: select a random initial state.
   Do while the goal state hasn't been reached:
   • Select one among all possible actions for the current state.
   • Using this possible action, consider going to the next state.
   • Get maximum Q value for this next state based on all possible actions.
   • Compute: Q(state, action) = R(state, action) + gamma * Max[Q(next state, all actions)]
   • Set the next state as the current state.
   End Do
   End For
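The steps above can be sketched as a small tabular Q-learning run. The 6-state reward matrix R below is an illustrative example (state 5 is the goal, -1 marks unavailable actions, and taking an action moves the agent to the state of the same index); it is not part of the original answer:

```python
import numpy as np

rng = np.random.default_rng(1)
gamma = 0.8

# Illustrative reward matrix R for a 6-state world (goal = state 5);
# -1 marks actions that are not available from a state.
R = np.array([
    [-1, -1, -1, -1,  0,  -1],
    [-1, -1, -1,  0, -1, 100],
    [-1, -1, -1,  0, -1,  -1],
    [-1,  0,  0, -1,  0,  -1],
    [ 0, -1, -1,  0, -1, 100],
    [-1,  0, -1, -1,  0, 100],
])
Q = np.zeros_like(R, dtype=float)
goal = 5

for episode in range(500):
    s = rng.integers(0, 6)                   # random initial state
    while s != goal:
        actions = np.flatnonzero(R[s] >= 0)  # possible actions here
        a = rng.choice(actions)              # pick one at random
        # Q(state, action) = R(state, action) + gamma * max Q(next state, ·)
        Q[s, a] = R[s, a] + gamma * Q[a].max()
        s = a                                # action a leads to state a

# Greedy (highest-Q) path from state 2 to the goal:
s, path = 2, [2]
while s != goal:
    s = int(np.argmax(Q[s]))
    path.append(s)
print(path)  # greedy route to the goal, e.g. [2, 3, 1, 5]
```

After enough episodes the Q matrix converges, and reading off the arg-max action per state reproduces the optimal policy, exactly as the algorithm steps describe.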
Q.3. (b) What do you mean by Linear Quadratic Regulation (LQR)?
Ans. The theory of optimal control is concerned with operating a dynamic system at minimum cost. The case where the system dynamics are described by a set of linear differential equations and the cost is described by a quadratic function is called the LQ problem. The LQR is an important part of the solution to the LQG (linear-quadratic-Gaussian) problem.
The cost V of a particular control input u is defined by the following integral:

V = ∫[t0, T] c(x, u, t) dt + m(x(T))

where T is the final time of the control problem,
c(x, u, t) is a scalar-valued function of x, u and t, and
m(x(T)) is a function of x(T), called the terminal penalty function.
We assume that c and m are known, real-valued functions. Our goal is to choose the control u(t), t0 ≤ t ≤ T, to minimize V. A case of particular interest in applications is where c and m are quadratic functions of their arguments. For an LTV model, the system description and cost are then given by

ẋ = A(t)x + B(t)u,  x(t0) = x0,
V = ∫[t0, T] (xᵀQ(t)x + uᵀR(t)u) dt + x(T)ᵀ M x(T)

where M, Q and R are positive semidefinite matrix-valued functions of time. These matrices can be chosen by the designer to obtain a desirable closed-loop response. The minimization of the quadratic cost V for a linear system is known as the linear quadratic regulator (LQR) problem.
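The continuous-time problem above is typically solved through a Riccati equation. A minimal sketch of the discrete-time analogue, using the backward Riccati recursion; the matrices A, B, Q, R and M below are illustrative (a sampled double integrator), not from the text:

```python
import numpy as np

# Discrete-time analogue: minimize sum of x'Qx + u'Ru over a finite
# horizon for x_{t+1} = A x_t + B u_t, via backward Riccati recursion.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)             # state cost weight
Rm = np.array([[0.01]])   # control cost weight
M = np.eye(2)             # terminal penalty
horizon = 50

P = M
gains = []
for _ in range(horizon):
    # K_t = (R + B'PB)^{-1} B'PA ;  P <- Q + A'P(A - BK)
    K = np.linalg.solve(Rm + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)
    gains.append(K)
gains.reverse()           # gains[t] is the feedback gain at time t

# Simulate the closed loop u_t = -K_t x_t from x0:
x = np.array([[1.0], [0.0]])
for K in gains:
    x = A @ x - B @ (K @ x)
print(np.linalg.norm(x))  # state is driven toward the origin
```

The recursion runs backward from the terminal penalty M, which mirrors how the continuous-time Riccati differential equation is integrated backward from the final time T.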
Q.4. Explain in detail Spectral Clustering. (10)
Ans. Spectral clustering techniques make use of the spectrum (eigenvalues) of the similarity matrix of the data to perform dimensionality reduction before clustering in fewer dimensions. The similarity matrix is provided as an input and consists of a quantitative assessment of the relative similarity of each pair of points in the dataset.
Spectral clustering methods are attractive, easy to implement, and reasonably fast, especially for sparse data sets of up to several thousand points. Spectral clustering treats the data clustering as a graph partitioning problem without making any assumption on the form of the data clusters.
Spectral clustering stages:
• Pre-processing: construct the graph and the similarity matrix representing the dataset.
• Spectral representation:
  - Form the associated Laplacian matrix.
  - Compute eigenvalues and eigenvectors of the Laplacian matrix.
  - Map each point to a lower-dimensional representation based on one or more eigenvectors.
• Clustering: assign points to two or more clusters, based on the new representation.
Spectral Clustering algorithm:
Input: data matrix X ∈ ℝ^(n×p) (n = number of data points, p = dimension), k = number of clusters.
• Construct a pairwise similarity (affinity) matrix A: Aij = exp(-||xi - xj||² / 2σ²).
• Construct the degree matrix D = diag(di), with di = Σj Aij.
• Compute the Laplacian L = D - A (unnormalized).
• Compute the first k eigenvectors u1, ..., uk of L.
• Let U ∈ ℝ^(n×k) contain the vectors u1, ..., uk as columns.
• Let yi ∈ ℝ^k be the vector corresponding to the i-th row of U.
• Cluster the points (yi), i = 1, ..., n, with the k-means algorithm into clusters C1, ..., Ck.
Output: clusters A1, ..., Ak with Ai = {j | yj ∈ Ci}.
Variants of spectral clustering algorithms:
In general, spectral clustering methods can be divided according to how they handle the following stages:
• Pre-processing: spectral clustering methods are very sensitive to the construction of the similarity matrix, so this stage greatly affects the results.
  - Calculation of the similarity matrix and its adjustment.
  - Choosing the similarity function also greatly affects the results; in most cases the Gaussian (heat) kernel is used, but other similarity measures may be needed for special purposes.
• Graph and similarity matrix construction: Laplacian matrices are generally used to represent the data. The most common Laplacian matrices are:
  Unnormalized: L = D - W
  Symmetric: Lsym = D^(-1/2) L D^(-1/2) = I - D^(-1/2) W D^(-1/2)
  Asymmetric (random walk): Lrw = D^(-1) L = I - D^(-1) W
• Clustering: algorithms other than k-means can also be used in the last stage, such as simple linkage, k-lines, elongated k-means, mixture models, etc.
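The stages above can be sketched in a few lines of NumPy. The two-blob dataset and σ are illustrative; for k = 2 the sign of the second eigenvector (the Fiedler vector) already gives the partition, while for k > 2 one would run k-means on the rows of the first k eigenvectors as described above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data: two well-separated blobs of 20 points each.
X = np.vstack([rng.normal(0, 0.3, (20, 2)),
               rng.normal(3, 0.3, (20, 2))])
sigma = 1.0

# Pre-processing: affinity matrix A_ij = exp(-||xi - xj||^2 / (2 sigma^2))
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
A = np.exp(-d2 / (2 * sigma ** 2))
np.fill_diagonal(A, 0)

# Spectral representation: unnormalized Laplacian L = D - A
D = np.diag(A.sum(axis=1))
L = D - A
eigvals, eigvecs = np.linalg.eigh(L)

# Clustering: for k = 2 the sign of the second-smallest eigenvector
# (the Fiedler vector) partitions the graph.
labels = (eigvecs[:, 1] > 0).astype(int)
print(labels)
```

The smallest eigenvalue of L is zero (its eigenvector is constant), which is why the second eigenvector carries the first useful split.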
END TERM EXAMINATION [MAY-JUNE 2017]
EIGHTH SEMESTER [B.Tech.]
MACHINE LEARNING [ETCS-402]
Time: 3 hrs. M.M.: 75
Note: Attempt any five questions including Q.No. 1 which is compulsory.
Q.1. (a) What are the key tasks of Machine Learning?
Ans. Machine learning studies computer algorithms for learning. The learning that is being done is always based on some sort of observations or data, such as examples, direct experience, or instruction. So in general, machine learning is about learning to do better in the future based on what was experienced in the past. The emphasis of machine learning is on automatic methods; in other words, the goal is to devise learning algorithms that do the learning automatically without human intervention or assistance.
The key tasks of machine learning are to develop general-purpose algorithms of practical value. Such algorithms should be efficient. Learning algorithms should also be as general-purpose as possible.
A primary task of machine learning is that the result of learning should be a prediction rule that is as accurate as possible in the predictions it makes. There is also major interest in the interpretability of the prediction rules produced by learning; in other words, in some contexts (such as medical diagnosis), we want the computer to find prediction rules that are easily understandable by human experts.
Machine learning can be thought of as "programming by example". The results of using machine learning are often more accurate than what can be created through direct programming. The reason is that machine learning algorithms are data-driven, and are able to examine large amounts of data. On the other hand, a human expert is likely to be guided by imprecise impressions or perhaps an examination of only a relatively small number of examples.
Q.1. (b) Define Bayes' Theorem.
Ans. Naive Bayes methods are a set of supervised learning algorithms based on applying Bayes' theorem with the "naive" assumption of independence between every pair of features. A naive Bayes classifier assumes that the value of a particular feature is independent of the value of any other feature, given the class variable. For example, a fruit may be considered to be an apple if it is red, round, and about 10 cm in diameter.
Given a class variable y and a dependent feature vector x1 through xn, Bayes' theorem states the following relationship:

P(y | x1, ..., xn) = P(y) P(x1, ..., xn | y) / P(x1, ..., xn)

Using the naive independence assumption that

P(xi | y, x1, ..., xi-1, xi+1, ..., xn) = P(xi | y)

for all i, this relationship is simplified to

P(y | x1, ..., xn) = P(y) Πi P(xi | y) / P(x1, ..., xn)

Since P(x1, ..., xn) is constant given the input, we can use the following classification rule:

P(y | x1, ..., xn) ∝ P(y) Πi P(xi | y)
ŷ = arg max_y P(y) Πi P(xi | y)

and we can use Maximum A Posteriori (MAP) estimation to estimate P(y) and P(xi | y); the former is then the relative frequency of class y in the training set. The different naive Bayes classifiers differ mainly by the assumptions they make regarding the distribution of P(xi | y).
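The MAP classification rule above can be sketched with a tiny binary-feature naive Bayes; the 0/1 data below is illustrative, and Laplace smoothing is added (an assumption not in the derivation above) to avoid zero probabilities:

```python
import numpy as np

# Tiny naive Bayes (illustrative data): each row of X is a vector of
# 0/1 features, y the class label.
X = np.array([[1, 1], [1, 0], [0, 1], [0, 0], [1, 1], [0, 0]])
y = np.array([1, 1, 0, 0, 1, 0])

classes = np.unique(y)
prior = {c: np.mean(y == c) for c in classes}  # P(y): relative frequency
# P(x_i = 1 | y), with Laplace smoothing (+1 / +2):
cond = {c: (X[y == c].sum(0) + 1) / ((y == c).sum() + 2) for c in classes}

def predict(x):
    """y_hat = argmax_y P(y) * prod_i P(x_i | y)  (MAP rule)."""
    def score(c):
        p = cond[c]
        return prior[c] * np.prod(np.where(x == 1, p, 1 - p))
    return max(classes, key=score)

print(predict(np.array([1, 1])))  # class whose examples look like [1, 1]
```

Everything here follows the derivation: the prior is a class frequency, the per-feature likelihoods factor thanks to the independence assumption, and prediction is the arg max of their product.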
The majority of practical machine learning uses supervised learning. Supervised learning is very common in classification problems because the goal is often to get the computer to learn a classification system that we have created. More generally, classification learning is appropriate for any problem where deducing a classification is useful and the classification is easy to determine. Supervised learning is the process of an algorithm learning from a training dataset, which can be thought of as a teacher supervising the learning process. The algorithm iteratively makes predictions on the training data and is corrected by the teacher. Learning stops when the algorithm achieves an acceptable level of performance.
Supervised learning is where you have input variables (x) and an output variable (y), and you use an algorithm to learn the mapping function from the input to the output. The goal is to approximate the mapping function so well that when you have new input data (x) you can predict the output variables (y) for that data.
Supervised learning problems can be further grouped into regression and classification problems:
• Classification: a classification problem is when the output variable is a category, such as "red" and "blue" or "disease" and "no disease".
• Regression: a regression problem is when the output variable is a real value, such as "dollars" or "weight".
Some popular examples of supervised machine learning algorithms are:
• Logistic regression
• Decision trees
• Support vector machine (SVM)
• k-Nearest Neighbors
• Naive Bayes
• Random forest
• Linear regression
Unsupervised learning seems much harder: the goal is to have the computer learn how to do something that we do not tell it how to do. The examples given to the learner are unlabeled, so there is no simple error or reward signal with which to evaluate a candidate solution; the algorithm must find structure in the data on its own.
Unsupervised learning is where you only have input data (X) and no corresponding output variables. The goal of unsupervised learning is to model the underlying structure or distribution in the data in order to learn more about the data. It is called unsupervised learning because, unlike supervised learning above, there are no correct answers and there is no teacher. Algorithms are left to their own devices to discover and present the interesting structure in the data.
Unsupervised learning problems can be further grouped into clustering and association problems:
• Clustering: a clustering problem is where you want to discover the inherent groupings in the data, such as grouping customers by purchasing behavior.
• Association: an association rule learning problem is where you want to discover rules that describe large portions of your data, such as people that buy X also tend to buy Y.
The following algorithms come under unsupervised learning algorithms:
1. k-means, for clustering problems
2. Apriori algorithm, for association rule learning problems
Q.1. (d) Explain in detail Logistic Regression.
Ans. Logistic regression is used for a different class of problems, known as classification problems. Here the aim is to predict the group to which the object under observation belongs, rather than a continuous value. Logistic regression uses an equation as the representation, very much like linear regression: input values (x) are combined linearly using weights or coefficient values to predict an output value (y). A key difference from linear regression is that the output value being modeled is a binary value (0 or 1) rather than a numeric value.
An example logistic regression equation:

y = e^(b0 + b1*x) / (1 + e^(b0 + b1*x))

where y is the predicted output, b0 is the bias or intercept term and b1 is the coefficient for the single input value (x). Each column in your input data has an associated b coefficient (a constant real value) that must be learned from your training data. The actual representation of the model that you would store in memory or in a file is the coefficients in the equation (the beta values or b's).
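A minimal sketch of the equation above, with b0 and b1 learned by simple gradient ascent on the log-likelihood; the one-dimensional training data and learning rate are illustrative:

```python
import math

# Illustrative data: x < 2.5 -> class 0, x > 2.5 -> class 1.
data = [(0.5, 0), (1.0, 0), (1.5, 0), (2.0, 0),
        (3.0, 1), (3.5, 1), (4.0, 1), (4.5, 1)]

def predict(x, b0, b1):
    """y = e^(b0 + b1*x) / (1 + e^(b0 + b1*x)), written as a sigmoid."""
    z = b0 + b1 * x
    return 1.0 / (1.0 + math.exp(-z))

b0 = b1 = 0.0
lr = 0.3
for _ in range(2000):
    for x, target in data:
        p = predict(x, b0, b1)
        b0 += lr * (target - p)          # gradient of the log-likelihood
        b1 += lr * (target - p) * x

print(round(predict(1.0, b0, b1)), round(predict(4.0, b0, b1)))  # 0 1
```

The learned b0 and b1 are exactly the stored representation of the model described in the answer; prediction only re-evaluates the equation.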
Q.1. (e) What is Independent Component Analysis? Discuss.
Ans. Independent component analysis (ICA) is a statistical and computational technique for revealing hidden factors that underlie sets of random variables, measurements, or signals.
ICA defines a generative model for the observed multivariate data, which is typically given as a large database of samples. In the model, the data variables are assumed to be linear mixtures of some unknown latent variables, and the mixing system is also unknown. The latent variables are assumed non-Gaussian and mutually independent, and they are called the independent components of the observed data. These independent components, also called sources or factors, can be found by ICA.
ICA is superficially related to principal component analysis and factor analysis. ICA is a much more powerful technique, however, capable of finding the underlying factors or sources when these classic methods fail completely. The data analyzed by ICA could originate from many different kinds of application fields, including digital images, document databases, economic indicators and psychometric measurements. In many cases, the measurements are given as a set of parallel signals or time series; the term blind source separation is used to characterize this problem. Typical examples are mixtures of simultaneous speech signals that have been picked up by several microphones, brain waves recorded by multiple sensors, interfering radio signals arriving at a mobile phone, or parallel time series obtained from some industrial process.
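A rough sketch of the ICA setup for two sources: mix two non-Gaussian signals, whiten the mixtures, then search for the rotation that maximizes non-Gaussianity (here, absolute excess kurtosis). The sources, mixing matrix and grid search below are illustrative simplifications of practical ICA algorithms such as FastICA:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Two non-Gaussian, independent sources (illustrative): a uniform
# signal and a Laplacian one, mixed by an unknown matrix A.
S = np.vstack([rng.uniform(-1, 1, n), rng.laplace(0, 1, n)])
A = np.array([[0.8, 0.6], [0.4, 0.9]])
X = A @ S                                 # observed mixtures

# Whitening: transform X so its covariance is the identity.
X = X - X.mean(axis=1, keepdims=True)
cov = X @ X.T / n
d, E = np.linalg.eigh(cov)
Z = E @ np.diag(d ** -0.5) @ E.T @ X

# After whitening, the remaining mixing is a rotation; search the
# angle whose rotated components are most non-Gaussian.
def kurt(u):
    return np.mean(u ** 4) / np.mean(u ** 2) ** 2 - 3

best = max(np.linspace(0, np.pi / 2, 200),
           key=lambda t: sum(abs(kurt(u)) for u in
                             np.array([[np.cos(t), -np.sin(t)],
                                       [np.sin(t), np.cos(t)]]) @ Z))
print(f"estimated un-mixing rotation: {best:.2f} rad")
```

This mirrors the generative model in the text: observed data = unknown mixing of unknown non-Gaussian sources, with non-Gaussianity as the signal that PCA (which only decorrelates) cannot exploit.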
Q.2. (a) What is machine learning? Discuss the issues in machine learning and the steps required for selecting the right machine learning algorithm.
Ans. Machine learning studies computer algorithms for learning. The learning that is being done is always based on some sort of observations or data, such as examples, direct experience, or instruction. So in general, machine learning is about learning to do better in the future based on what was experienced in the past. The emphasis of machine learning is on automatic methods; in other words, the goal is to devise learning algorithms that do the learning automatically without human intervention or assistance. The machine learning paradigm can be viewed as "programming by example". Often we have a specific task in mind, such as spam filtering. But rather than program the computer to solve the task directly, in machine learning we seek methods by which the computer will come up with its own program based on examples that we provide.
Machine learning is a core subarea of artificial intelligence. There are many examples of machine learning problems. Here are several examples:
• optical character recognition: categorize images of handwritten characters by the letters represented
• face detection: find faces in images (or indicate if a face is present)
• spoken language understanding: within the context of a limited domain, determine the meaning of something uttered by a speaker, to the extent that it can be classified into one of a fixed set of categories
• medical diagnosis: diagnose a patient as a sufferer or non-sufferer of some disease
• fraud detection: identify credit card transactions (for instance) which may be fraudulent in nature
• weather prediction: predict, for instance, whether or not it will rain tomorrow
Issues in Machine Learning: The following issues arise in machine learning:
• What algorithms exist for learning general target functions from specific training examples?
• In what settings will particular algorithms converge to the desired function, given sufficient training data?
• Which algorithms perform best for which types of problems and representations?
• How much training data is sufficient?
• When and how can prior knowledge held by the learner guide the process of generalizing from examples?
• What is the best strategy for choosing a useful next training experience, and how does the choice of this strategy alter the complexity of the learning problem?
• What is the best way to reduce the learning task to one or more function approximation problems?
Steps for selecting the right machine learning algorithm:
1. Categorize the problem. This is a two-step process:
a. Categorize by input. If you have labelled data, it's a supervised learning problem. If you have unlabelled data and want to find structure, it's an unsupervised learning problem. If you want to optimize an objective function by interacting with an environment, it's a reinforcement learning problem.
b. Categorize by output. If the output of your model is a number, it's a regression problem. If the output of your model is a class, it's a classification problem. If the output of your model is a set of input groups, it's a clustering problem.
2. Find the available algorithms. Now that you have categorized the problem, you can identify the algorithms that are applicable and practical to implement using the tools at your disposal.
3. Implement all of them. Set up a machine learning pipeline that compares the performance of each algorithm on the dataset using a set of carefully selected evaluation criteria. The best one is automatically selected. You can either do this once, or have a service running that does this at intervals when new data is added.
4. Optimize hyperparameters (optional). Using cross-validation, you can tune each algorithm to optimize performance, if time permits; if not, manually selected hyperparameters will work well enough for the most part.
Q.2. (b) What is learning? Discuss any four Learning Techniques.
Ans. A program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E. For example, a checkers learning problem:
• Task T: playing checkers
• Performance measure P: percent of games won against opponents
• Training experience E: playing practice games against itself
A handwriting recognition learning problem:
• Task T: recognizing and classifying handwritten words within images
• Performance measure P: percent of words correctly classified
• Training experience E: a database of handwritten words with given classifications
Learning Techniques:
1. Regression Learning: Regression is concerned with modelling the relationship between variables, which is iteratively refined using a measure of error in the predictions made by the model. Regression methods are a workhorse of statistics and have been co-opted into statistical machine learning. The term can refer both to the class of problem and to the class of algorithms; really, regression is a process. The most popular regression algorithms are:
• Ordinary Least Squares Regression (OLSR)
• Linear Regression
• Logistic Regression
• Stepwise Regression
2. Instance-based Learning: Instance-based learning models a decision problem with instances or examples of training data that are deemed important or required by the model. Such methods typically build up a database of example data and compare new data to the database using a similarity measure, in order to find the best match and make a prediction. For this reason, instance-based methods are also called winner-take-all methods and memory-based learning. Focus is put on the representation of the stored instances and the similarity measures used between instances. The most popular instance-based algorithms are:
• k-Nearest Neighbor (kNN)
• Self-Organizing Map (SOM)
3. Decision Tree Learning: Decision tree methods construct a model of decisions made based on actual values of attributes in the data. Decisions fork in tree structures until a prediction decision is made for a given record. Decision trees are trained on data for classification and regression problems. Decision trees are often fast and accurate and a big favorite in machine learning. The most popular decision tree algorithms are:
• Classification and Regression Tree (CART)
• Iterative Dichotomiser 3 (ID3)
• C4.5 and C5.0
4. Artificial Neural Network Learning: Artificial Neural Networks are models that are inspired by the structure and/or function of biological neural networks. They are a class of pattern-matching methods that are commonly used for regression and classification problems. The most popular artificial neural network algorithms are:
• Perceptron
• Back-Propagation
• Hopfield Network
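Instance-based learning (technique 2 above) can be sketched with k-Nearest Neighbors: store the training data, compare a query to it with a similarity measure (here, Euclidean distance), and predict by majority vote. The data is illustrative:

```python
import numpy as np

# Stored instances: two small groups of 2-D points (illustrative data).
X_train = np.array([[1.0, 1.0], [1.2, 0.8], [0.9, 1.1],
                    [5.0, 5.0], [5.2, 4.8], [4.9, 5.1]])
y_train = np.array([0, 0, 0, 1, 1, 1])

def knn_predict(x, k=3):
    dists = np.linalg.norm(X_train - x, axis=1)  # similarity measure
    nearest = np.argsort(dists)[:k]              # best matches
    votes = y_train[nearest]
    return np.bincount(votes).argmax()           # majority vote

print(knn_predict(np.array([1.1, 0.9])))  # 0
print(knn_predict(np.array([4.8, 5.2])))  # 1
```

There is no training step at all: the "model" is the stored database of instances, which is exactly why these methods are called memory-based learning.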
Q.3. (a) Explain Generative Learning Algorithms in detail.
Ans. Consider a classification problem in which we want to learn to distinguish between elephants (y = 1) and dogs (y = 0), based on some features of an animal. Given a training set, an algorithm like logistic regression or the perceptron tries to find a straight line, that is, a decision boundary, that separates the elephants and dogs. Then, to classify a new animal as either an elephant or a dog, it checks on which side of the decision boundary the animal falls, and makes its prediction accordingly.
Here's a different approach. First, looking at elephants, we can build a model of what elephants look like. Then, looking at dogs, we can build a separate model of what dogs look like. Finally, to classify a new animal, we can match the new animal against the elephant model, and match it against the dog model, to see whether the new animal looks more like the elephants or more like the dogs we had seen in the training set.
Algorithms that try to learn p(y|x) directly (such as logistic regression), or algorithms that try to learn mappings directly from the space of inputs X to the labels {0, 1} (such as the perceptron algorithm), are called discriminative learning algorithms. The algorithms that instead try to model p(x|y) (and p(y)) are called generative learning algorithms. For instance, if y indicates whether an example is a dog (0) or an elephant (1), then p(x|y = 0) models the distribution of dogs' features, and p(x|y = 1) models the distribution of elephants' features.
The different generative learning algorithms are:
• Gaussian discriminant analysis
• Naive Bayes
Q.3. (b) Describe the limitations of the perceptron model.
Ans. One type of ANN system is based on a unit called a perceptron. The perceptron takes a vector of real-valued inputs, calculates a linear combination of these inputs, and then outputs 1 if the result is greater than some threshold and -1 otherwise:

o(x1, ..., xn) = 1 if w0 + w1x1 + w2x2 + ... + wnxn > 0, and -1 otherwise

where each wi is a real-valued constant, or weight, that determines the contribution of input xi to the perceptron output. Notice the quantity (-w0) is a threshold that the weighted combination of inputs w1x1 + ... + wnxn must surpass in order for the perceptron to output a 1.
The perceptron model has several limitations:
1. First, the output value of a perceptron can take on only one of two values, due to the hard-limit transfer function.
2. Second, perceptrons can only classify linearly separable sets of vectors. If a straight line or a plane can be drawn to separate the input vectors into their correct categories, the input vectors are linearly separable. If the vectors are not linearly separable, learning will never reach a point where all vectors are classified properly.
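A sketch of the unit described above, trained with the perceptron rule wi ← wi + η(t - o)xi on a linearly separable toy problem (logical AND with -1/1 targets; the data and learning rate are illustrative):

```python
# Perceptron: output 1 if w0 + w1*x1 + w2*x2 > 0, else -1.
data = [((0, 0), -1), ((0, 1), -1), ((1, 0), -1), ((1, 1), 1)]
w = [0.0, 0.0, 0.0]   # w[0] is the threshold weight w0
eta = 0.1

def output(x):
    s = w[0] + w[1] * x[0] + w[2] * x[1]
    return 1 if s > 0 else -1

# Perceptron training rule: w_i <- w_i + eta * (t - o) * x_i
for _ in range(20):
    for x, t in data:
        o = output(x)
        w[0] += eta * (t - o)
        w[1] += eta * (t - o) * x[0]
        w[2] += eta * (t - o) * x[1]

print([output(x) for x, _ in data])  # [-1, -1, -1, 1]
```

On AND the rule converges because the classes are linearly separable; running the same loop on XOR would cycle forever, which is limitation 2 above.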
Q.4. (a) Explain decision tree algorithm with example.
Ans. The basic algorithm for decision tree learning corresponds approximately to the ID3 algorithm. The ID3 algorithm learns decision trees by constructing them top-down, beginning with the question "which attribute should be tested at the root of the tree?" The best attribute is selected and used as the test at the root node of the tree. A descendant of the root node is then created for each possible value of this attribute, and the training examples are sorted to the appropriate descendant node (i.e., down the branch corresponding to the example's value for this attribute). The entire process is then repeated using the training examples associated with each descendant node to select the best attribute to test at that point in the tree. This forms a greedy search for an acceptable decision tree, in which the algorithm never backtracks to reconsider earlier choices.
• Entropy characterizes the (im)purity of an arbitrary collection of examples. Given a collection S, containing positive and negative examples of some target concept, the entropy of S relative to this boolean classification is

Entropy(S) = -p⊕ log₂ p⊕ - p⊖ log₂ p⊖

where p⊕ is the proportion of positive examples in S and p⊖ is the proportion of negative examples in S.
• Information gain measures the expected reduction in entropy. Given entropy as a measure of the impurity in a collection of training examples, we can define a measure of the effectiveness of an attribute in classifying the training data. The measure, called information gain, is simply the expected reduction in entropy caused by partitioning the examples according to this attribute. More precisely, the information gain, Gain(S, A), of an attribute A relative to a collection of examples S, is defined as

Gain(S, A) = Entropy(S) - Σ_{v ∈ Values(A)} (|Sv| / |S|) Entropy(Sv)

where Values(A) is the set of all possible values for attribute A, and Sv is the subset of S for which attribute A has value v (that is, Sv = {s ∈ S | A(s) = v}). Gain(S, A) is the number of bits saved when encoding the target value of an arbitrary member of S, by knowing the value of attribute A.
[Figure: a decision tree sketch, with branches for attribute values (e.g. Humidity = High / Normal, Wind = Strong / Weak) ending in Yes / No leaves.]
To illustrate, suppose S is a collection of 14 examples of some boolean concept, including 9 positive and 5 negative examples (we adopt the notation [9+, 5-] to summarize such a sample of data). Then the entropy of S relative to this boolean classification is

Entropy([9+, 5-]) = -(9/14) log₂(9/14) - (5/14) log₂(5/14) = 0.940

Suppose S is a collection of training-example days described by attributes including Wind, which can have the values Weak or Strong. Of these 14 examples, suppose 6 of the positive and 2 of the negative examples have Wind = Weak, and the remainder have Wind = Strong. The information gain due to sorting the original 14 examples by the attribute Wind may then be calculated as

Values(Wind) = {Weak, Strong}
S = [9+, 5-]
S_Weak ← [6+, 2-]
S_Strong ← [3+, 3-]

Gain(S, Wind) = Entropy(S) - Σ_{v ∈ {Weak, Strong}} (|Sv| / |S|) Entropy(Sv)
             = Entropy(S) - (8/14) Entropy(S_Weak) - (6/14) Entropy(S_Strong)
             = 0.940 - (8/14)(0.811) - (6/14)(1.00)
             = 0.048

Information gain is precisely the measure used by ID3 to select the best attribute at each step in growing the tree. In this fashion the information gain of each candidate attribute, for example Humidity and Wind, is computed in order to determine which is the better attribute for classifying the training examples.
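The worked numbers above can be checked directly:

```python
import math

def entropy(pos, neg):
    """Entropy(S) = -p+ log2(p+) - p- log2(p-)."""
    total = pos + neg
    e = 0.0
    for count in (pos, neg):
        if count:                      # 0 * log2(0) is taken as 0
            p = count / total
            e -= p * math.log2(p)
    return e

# The worked example: S = [9+, 5-]; Wind splits it into
# S_Weak = [6+, 2-] and S_Strong = [3+, 3-].
H = entropy(9, 5)
gain = H - (8 / 14) * entropy(6, 2) - (6 / 14) * entropy(3, 3)
print(round(H, 3), round(gain, 3))  # 0.94 0.048
```

The small gain (0.048) shows that Wind is only weakly informative here, which is why ID3 compares it against other attributes such as Humidity before choosing.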