INDUCTIVE-ANALYTICAL
APPROACHES TO LEARNING
Swapna.C
2.1 The Learning Problem
• Given:
• A set of training examples D, possibly
containing errors.
• A domain theory B, possibly containing errors.
• A space of candidate hypotheses H.
• Determine:
• A hypothesis that best fits the training
examples and domain theory.
• There are two approaches:
• 1. Minimize a combined measure of error. errorD(h) is defined to be the proportion of examples from D that are misclassified by h. errorB(h) of h with respect to a domain theory B is defined to be the probability that h will disagree with B on the classification of a randomly drawn instance. We could then require the hypothesis that minimizes some combined measure of these errors, such as
argmin h∈H [ kD · errorD(h) + kB · errorB(h) ]
• It is not clear what values to assign to kB and kD
to specify the relative importance of fitting the
data versus fitting the theory. If we have a very
poor theory and a great deal of reliable data, it
will be best to weight errorD(h) more heavily.
• Given a strong theory and a small sample of very
noisy data, the best results would be obtained by
weighting errorB(h) more heavily. Of course, if the
learner does not know in advance the quality of
the domain theory or training data, it will be
unclear how it should weight these two error
components (a code sketch of the combined measure follows below).
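A minimal sketch of this combined measure, assuming hypotheses and the domain theory are classifiers given as Python callables and D is a list of (instance, label) pairs (all names here are illustrative):

```python
# error_D: fraction of training examples misclassified by h.
# error_B: estimated probability that h disagrees with the domain
# theory B, measured on a sample of unlabeled instances.

def error_D(h, D):
    return sum(1 for x, y in D if h(x) != y) / len(D)

def error_B(h, B, instances):
    return sum(1 for x in instances if h(x) != B(x)) / len(instances)

def combined_error(h, D, B, instances, k_D=1.0, k_B=1.0):
    # The measure to minimize over H: kD * errorD(h) + kB * errorB(h).
    return k_D * error_D(h, D) + k_B * error_B(h, B, instances)

# best_h = min(H, key=lambda h: combined_error(h, D, B, instances))
```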
• 2. The Bayes theorem perspective.
• Bayes theorem computes the posterior probability P(h|D) based on the observed data D, together with prior knowledge in the form of P(h), P(D), and P(D|h): P(h|D) = P(D|h) P(h) / P(D).
• The Bayesian view is that one should simply choose the hypothesis whose posterior probability is greatest, and that Bayes theorem provides the proper method for weighting the contribution of this prior knowledge and observed data.
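A minimal sketch of this view, assuming H is enumerable and that the prior P(h) and likelihood P(D|h) are supplied as functions (names are illustrative). Since P(D) is the same for every h, maximizing the posterior is equivalent to maximizing P(D|h) · P(h):

```python
# Choose the maximum a posteriori (MAP) hypothesis. P(D) is constant
# across hypotheses, so it can be dropped from the maximization.
# `prior` and `likelihood` are assumed, user-supplied functions.

def map_hypothesis(H, prior, likelihood, D):
    """Return the h in H maximizing P(D|h) * P(h)."""
    return max(H, key=lambda h: likelihood(D, h) * prior(h))
```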
• Unfortunately, Bayes theorem implicitly assumes
perfect knowledge about the probability distributions
P(h), P(D), and P(D|h). When these quantities are
only imperfectly known, Bayes theorem alone does
not prescribe how to combine them with the
observed data.
• One possible approach in such cases is to assume
prior probability distributions over P(h), P(D), and
P(D|h) themselves, then calculate the expected
value of the posterior P(h|D).
• We will simply say that the learning problem is to
minimize some combined measure of the error of
the hypothesis over the data and the domain theory.
2.2 Hypothesis Space Search
• We can characterize most learning methods as
search algorithms by describing the hypothesis
space H they search, the initial hypothesis h0 at
which they begin their search, the set of search
operators O that define individual search steps,
and the goal criterion G that specifies the
search objective (a generic sketch follows below).
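A minimal sketch of this characterization, assuming a greedy hill-climbing strategy (only one of many possible search strategies) and operators given as functions from a hypothesis to its successors:

```python
# Generic learner-as-search skeleton: start at h0, repeatedly apply the
# operators in O, and greedily keep the successor that best improves the
# goal criterion G (a score function over hypotheses).

def hypothesis_search(h0, O, G):
    h = h0
    while True:
        successors = [h2 for op in O for h2 in op(h)]
        if not successors:
            return h
        best = max(successors, key=G)
        if G(best) <= G(h):  # no search step improves the criterion
            return h
        h = best
```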
Three different methods for using prior
knowledge to alter the search performed
by purely inductive methods.
• Use prior knowledge to derive an initial hypothesis from
which to begin the search:
• In this approach the domain theory B is used to construct
an initial hypothesis h0 that is consistent with B. A
standard inductive method is then applied, starting with
this initial hypothesis h0.
• For example, the KBANN system learns artificial neural
networks in this way. It uses prior knowledge to design the
interconnections and weights for an initial network, so that
this initial network is perfectly consistent with the given
domain theory. This initial network hypothesis is then
refined inductively using the BACKPROPAGATION algorithm
and available data. Beginning the search at a hypothesis
consistent with the domain theory makes it more likely that
the final output hypothesis will better fit this theory (a
simplified sketch of the initialization follows).
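A simplified sketch of the initialization idea, assuming a propositional domain theory and sigmoid units; the constant W and the clause encoding are illustrative assumptions, not KBANN's exact construction:

```python
# One sigmoid unit per Horn clause: large positive weights for the
# clause's antecedents, large negative weights for negated antecedents,
# and a bias that makes the unit fire only when all antecedents hold,
# so the unit approximates a hard AND. Backpropagation then refines
# these weights against the training data.

W = 4.0  # large weight so the sigmoid behaves almost like a step function

def unit_from_clause(antecedents, all_features):
    weights, n_pos = [], 0
    for f in all_features:
        if f in antecedents:               # unnegated antecedent
            weights.append(W)
            n_pos += 1
        elif ("not " + f) in antecedents:  # negated antecedent
            weights.append(-W)
        else:                              # feature not mentioned by the clause
            weights.append(0.0)
    bias = -W * (n_pos - 0.5)  # net input is +W/2 iff every antecedent holds
    return weights, bias
```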
• Use prior knowledge to alter the objective of the
hypothesis space search:
• The goal criterion G is modified to require that
the output hypothesis fits the domain theory as
well as the training examples.
• For example, the EBNN system learns neural
networks. Whereas inductive learning of neural
networks performs gradient descent search to
minimize the squared error of the network over
the training data, EBNN performs gradient
descent to optimize a different criterion.
• This modified criterion includes an additional
term that measures the error of the learned
network relative to the domain theory (see the sketch below).
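A simplified sketch of such a modified criterion. EBNN's actual criterion fits derivatives extracted from theory-based explanations; the value-based theory term and the weight mu below are illustrative stand-ins:

```python
# Modified training criterion: ordinary squared error over the training
# data plus a term penalizing disagreement with the domain theory's
# predictions on unlabeled instances. Gradient descent would minimize
# this combined loss instead of the data error alone.

def modified_loss(net, D, theory, instances, mu=0.1):
    data_error = sum((net(x) - y) ** 2 for x, y in D)
    theory_error = sum((net(x) - theory(x)) ** 2 for x in instances)
    return data_error + mu * theory_error
```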
• Use prior knowledge to alter the available search
steps:
• In this approach, the set of search operators O is altered by
the domain theory.
• For example, the FOCL system learns sets of Horn clauses.
It is based on the inductive system FOIL, which conducts a
greedy search through the space of possible Horn clauses, at
each step revising its current hypothesis by adding a single
new literal.
• FOCL uses the domain theory to expand the set of
alternatives available when revising the hypothesis, allowing
the addition of multiple literals in a single search step when
warranted by the domain theory (see the sketch after this list).
• FOCL allows single-step moves through the hypothesis space
that would correspond to many steps using the original
inductive algorithm. These "macro-moves" can dramatically
alter the course of the search, so that the final hypothesis
found consistent with the data is different from the one that
would be found using only the inductive search steps.
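A minimal sketch of this operator expansion, assuming clauses are represented as lists of literals and each domain-theory clause exposes its body of literals (representation details here are illustrative):

```python
# Candidate revisions of the current clause body: FOIL's usual
# one-literal specializations, plus FOCL-style "macro-moves" that add
# the whole body of a domain-theory clause in a single step.

def candidate_revisions(clause, candidate_literals, theory_clauses):
    revisions = [clause + [lit] for lit in candidate_literals]  # FOIL steps
    for tc in theory_clauses:
        revisions.append(clause + list(tc.body))  # theory macro-move
    return revisions
```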
