Unit-1 Knowledge Representation and Reasoning-1
Knowledge representation makes complex software easier to define and maintain than
procedural code and can be used in expert systems. For example, talking to experts in terms of
business rules rather than code lessens the semantic gap between users and developers and
makes development of complex systems more practical.
Knowledge representation and reasoning (KRR) is the study of how to represent information
about the world in a form that can be used by a computer system to solve and reason about
complex problems. It is an important field of artificial intelligence (AI) research.
We are all familiar with the word “knowledge,” but have you heard of “knowledge
representation” in AI? Think of it like this: you’re trying to make a perfect basketball shot.
Think about all the things your mind and body do to make it happen.
Now, imagine trying to teach the same thing to a machine. It’s a big challenge, since you’d need
to convey a vast amount of knowledge. Even a simple scenario like lifting a pen off a desk
requires a large set of rules and descriptions.
That’s where ‘Knowledge Representation in AI’ comes in – it’s the key to making all of this
work. Here, knowledge representation plays a vital role in setting up the environment and gives
all the details necessary to the system. It’s like a guiding light that unlocks the machine’s
potential.
Let’s explore how AI uses “knowledge” to change how businesses operate, and I’ll make sure to
keep you engaged till the end.
‘Knowledge representation in AI’ is like giving computers a smart brain. It’s the magic that
allows them to understand and use real-world information to solve tricky problems.
In simple terms, it’s about teaching AI to think and reason using symbols and automation. For
instance, if we want AI to diagnose illnesses, we must provide it with the right knowledge at the
right time.
So, in a nutshell, it’s all about making computers brilliant problem solvers by communicating in
their unique “language” of information or a way that a computer system can understand and
apply it to tackle real-world problems or manage everyday tasks.
1. Knowledge
Knowledge is like the wisdom a computer gathers from its experiences and learning. Imagine it
as the “know-how” that makes an AI (like a chatbot) savvy. In Artificial Intelligence, a machine
takes specific actions based on what it has learned in the past. For example, think of an AI
winning a chess game—it can only do that if it knows how to play and win.
2. Representation
Representation is how computers translate their knowledge into something useful. It’s like
turning knowledge into a language computers understand. This includes things like:
Objects: Info about the objects in our world, like knowing buses need drivers or that guitars have
strings.
Events: Everything happening in our world, from natural disasters to great achievements.
Performance: Understanding how people behave in different situations. This helps AI grasp the
human side of knowledge.
Facts: This is the factual stuff about our world, like knowing the Earth isn’t flat but not a perfect
sphere either.
Meta Knowledge: Think of it as what we already know, which helps AI make sense of things.
Knowledge Base: It’s like a big library of information, like a treasure trove of facts about a
specific topic, such as road construction.
In simple terms, knowledge is what we know from our experiences, facts, data, and situations. In
artificial intelligence, there are various types of knowledge that need to be represented.
1. Declarative Knowledge (The “What” Knowledge)
o It’s all about facts and concepts, helping describe things in simple terms.
2. Structural Knowledge (How Things Relate)
o This knowledge helps AI understand relationships between concepts and objects, aiding
problem-solving.
3. Procedural Knowledge (The “How” Knowledge)
o This is like a manual for tasks, with specific rules and strategies to follow.
4. Meta Knowledge (Knowledge About Knowledge)
o It’s knowledge about knowledge, including categories, plans, and past learning.
5. Heuristic Knowledge (Learning from Experience)
o This type helps AI make decisions based on past experiences, like using old techniques to solve
new problems.
These types of knowledge equip AI to understand and solve problems, follow instructions, make
informed decisions, and adapt to different situations.
Techniques of Knowledge Representation in AI
1. Logical Representation
In AI, we communicate using formal logic, much like following a rulebook. Imagine AI as a
student following a strict set of rules in a school. These rules ensure that information is shared
with minimal mistakes and that AI’s conclusions are either true or false. Though it can be tricky,
logical representation is like the foundation of many programming languages, helping AI think
logically.
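As a minimal sketch of this idea, formulas can be encoded as nested structures that evaluate to exactly true or false under a set of known facts. The atom names below ("rain", "wet_ground") and the tuple encoding are illustrative assumptions, not a standard library:

```python
# A minimal sketch of logical representation: formulas built from atoms and
# connectives, each evaluating to exactly True or False under a model.
# The atom names are invented for illustration.

def holds(formula, model):
    """Evaluate a formula (nested tuples) against a model (dict of truth values)."""
    op = formula[0]
    if op == "atom":
        return model[formula[1]]
    if op == "not":
        return not holds(formula[1], model)
    if op == "and":
        return holds(formula[1], model) and holds(formula[2], model)
    if op == "or":
        return holds(formula[1], model) or holds(formula[2], model)
    if op == "implies":
        return (not holds(formula[1], model)) or holds(formula[2], model)
    raise ValueError(f"unknown connective: {op}")

model = {"rain": True, "wet_ground": True}
rule = ("implies", ("atom", "rain"), ("atom", "wet_ground"))

print(holds(rule, model))                       # True
print(holds(("not", ("atom", "rain")), model))  # False
```

Because every formula reduces to a boolean, conclusions drawn this way are unambiguously true or false, which is the property the paragraph above highlights.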
2. Semantic Network
Think of a semantic network as a giant web with connected nodes and links. Nodes stand for
objects or ideas, while links show how they connect. This method simplifies how AI stores and
arranges information, much like a mind map. It’s more natural and expressive compared to
logical representation, allowing AI to grasp complex relationships.
3. Frame Representation
Frames act like information ID cards for real-world things. They contain details and values
describing these things. Imagine each frame as a file containing important information. Frames
can be flexible and, when connected, create a robust knowledge system. This method is versatile
and commonly used in AI.
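A frame can be sketched as a set of named slots, with an "is_a" slot that lets a frame inherit values from a more general frame. The frame names and slots below are assumptions made up for illustration:

```python
# Sketch of frames: each frame is a dict of slots, and an "is_a" slot lets a
# frame inherit slot values from a more general frame. Names are invented.

frames = {
    "vehicle": {"wheels": 4, "self_propelled": True},
    "car":     {"is_a": "vehicle", "doors": 4},
    "my_car":  {"is_a": "car", "color": "red"},
}

def slot_value(frames, frame_name, slot):
    """Look up a slot, walking up the is_a chain for inherited defaults."""
    while frame_name is not None:
        frame = frames[frame_name]
        if slot in frame:
            return frame[slot]
        frame_name = frame.get("is_a")
    return None

print(slot_value(frames, "my_car", "color"))   # 'red'  (local slot)
print(slot_value(frames, "my_car", "wheels"))  # 4      (inherited from vehicle)
```

Connecting frames through the is_a slot is what makes the collection a knowledge system rather than a pile of isolated records.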
4. Production Rules
Imagine AI using “if-then” statements to decide what to do. If a specific situation arises, AI
knows exactly what action to take. This method is like having a playbook. Production rules are
modular, making it easy to update and add new rules. While they may not always be the fastest,
they let AI make smart choices and adapt to different scenarios.
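The "if-then" playbook can be sketched as condition/action pairs fired against a working memory until nothing new can be added. The rules and facts below are invented for illustration:

```python
# Sketch of production rules: "IF conditions THEN assertion" pairs fired
# against a working memory until no rule can add anything new.

rules = [
    ({"smoke"}, "fire"),            # IF smoke THEN fire
    ({"fire"}, "sound_alarm"),      # IF fire THEN sound_alarm
    ({"fire", "night"}, "lights"),  # IF fire AND night THEN lights
]

memory = {"smoke"}
fired = True
while fired:                        # keep cycling until no rule fires
    fired = False
    for conditions, action in rules:
        if conditions <= memory and action not in memory:
            memory.add(action)
            fired = True

print(sorted(memory))  # ['fire', 'smoke', 'sound_alarm']
```

The modularity mentioned above is visible here: adding or changing a rule is just editing one entry in the list, without touching the firing loop.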
These techniques give AI the tools it needs to organize and use knowledge effectively, making it
smarter and more capable.
The AI Knowledge Cycle
To make AI intelligent, we need a way to gather vital information. That’s where the AI
knowledge cycle and its essential components come into play. These components help AI
understand the world better and make intelligent choices. It’s like giving AI the tools to learn,
adapt, and act wisely.
1. Perception: AI takes in information from its surroundings, like listening, seeing, or reading.
This helps it understand the world. For example, it listens to spoken words, sees images, and
reads text to gather knowledge about its environment.
2. Learning: AI uses deep learning algorithms to study and remember what it perceives. It’s like
taking notes to get better at something. Through learning, AI becomes skilled at recognizing
patterns and making predictions based on its experiences.
3. Knowledge and Reasoning: These parts are like AI’s brain. They help it understand and
think smartly. They find important information for AI to learn. AI’s knowledge and reasoning
components sift through its data to identify valuable insights, allowing it to make informed
decisions.
4. Planning and Doing: AI uses what it learned to make plans and take action. It’s like using
knowledge to make good decisions. With its plans in place, AI carries out tasks efficiently and
adapts to changes in its environment, demonstrating intelligent behavior.
Approaches to Knowledge Representation in AI
1. Simple Relational Knowledge: This is like organizing facts neatly in columns, often used in
databases. It’s straightforward but not great for drawing conclusions. For instance, in a
database, you can use this to list relationships between people and their addresses.
2. Inheritable Knowledge: Here, data is stored in a hierarchy, like a family tree. For example,
you can use this to show how animals relate to different species or how products belong to
various categories. It helps us understand relationships between things, and it’s better than the
simple relational method.
3. Inferential Knowledge: This is the precise way of using formal logic to guarantee accurate
facts and decisions. For instance, you can use this to deduce that if “All men are mortal” and
“Socrates is a man,” then “Socrates is mortal.”
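The Socrates syllogism can be sketched as a universally quantified rule applied to ground facts. The single-variable rule format below is a deliberate simplification for illustration, not full first-order inference:

```python
# Sketch of inferential knowledge: a universal rule ("All men are mortal")
# applied to ground facts to deduce new facts. The one-variable rule
# format is a simplification for illustration.

facts = {("man", "socrates"), ("man", "plato"), ("god", "zeus")}
rules = [("man", "mortal")]   # for all X: man(X) implies mortal(X)

derived = set(facts)
for premise, conclusion in rules:
    for predicate, individual in facts:
        if predicate == premise:
            derived.add((conclusion, individual))

print(("mortal", "socrates") in derived)  # True
print(("mortal", "zeus") in derived)      # False
```

The deduced fact mortal(socrates) was never stored; it is guaranteed by the logic, which is what distinguishes inferential knowledge from a plain fact table.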
4. Procedural Knowledge: AI uses small programs or rules (like recipes) to perform tasks. For
example, it can follow rules to play chess or diagnose diseases. Despite its limitations, it is
useful for specialized tasks.
Properties of a Good Knowledge Representation System
1. Representational Adequacy: The system should be able to represent all the kinds of
knowledge required in its domain.
2. Inferential Adequacy: The system should be flexible, allowing it to adjust old knowledge to
fit new information.
3. Inferential Efficiency: It should guide AI to make smart decisions quickly by pointing it in
the right direction.
4. Acquisitional Efficiency: The system should easily learn new information, add it to its
knowledge, and use it to work better.
Knowledge representation gives AI the power to handle complex tasks based on what it has
learned from human experiences, rules, and responses. It’s like the AI’s “instruction
manual” that it can read and follow.
Moreover, AI relies on this knowledge to solve problems, complete tasks, and make decisions. It
helps AI understand, communicate in human language, plan, and tackle challenging areas.
Therefore, it’s the backbone of AI technology all around us.
Frequently Asked Questions
What is knowledge representation in AI?
Knowledge representation in AI is like the way our brain stores and organizes information,
helping AI systems think and make decisions more like humans do.
What are the main approaches to knowledge representation?
There are four main approaches to knowledge representation in AI: relational, inheritable,
inferential, and procedural.
What is the goal of knowledge representation?
Knowledge representation’s goal is to show relationships between ideas and objects so we can
draw conclusions and make inferences easily.
Knowledge representation and reasoning
Knowledge representation and reasoning (KRR, KR&R, or KR²) is a field of artificial
intelligence (AI) dedicated to representing information about the world in a form that a
computer system can use to solve complex tasks, such as diagnosing a medical
condition or having a natural-language dialog. Knowledge representation incorporates
findings from psychology[1] about how humans solve problems and represent knowledge,
in order to design formalisms that make complex systems easier to design and build.
Knowledge representation and reasoning also incorporates findings from logic to
automate various kinds of reasoning.
Meanwhile, John McCarthy and Pat Hayes developed the situation calculus as a
logical representation of common sense knowledge about the laws of cause and
effect. Cordell Green, in turn, showed how to do robot plan-formation by applying
resolution to the situation calculus. He also showed how to use resolution for question-
answering and automatic programming.[3]
These efforts led to the cognitive revolution in psychology and to the phase of AI
focused on knowledge representation that resulted in expert systems in the 1970s and
80s, production systems, frame languages, etc. Rather than general problem solvers, AI
changed its focus to expert systems that could match human competence on a specific
task, such as medical diagnosis.[6]
Expert systems gave us the terminology still in use today where AI systems are divided
into a knowledge base, which includes facts and rules about a problem domain, and
an inference engine, which applies the knowledge in the knowledge base to answer
questions and solve problems in the domain. In these early systems the facts in the
knowledge base tended to be a fairly flat structure, essentially assertions about the
values of variables used by the rules.[7]
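The knowledge-base / inference-engine split described above can be sketched as follows: the knowledge base holds flat assertions about the values of variables plus rules, and the engine applies the rules to answer questions. All names and thresholds below are invented for illustration:

```python
# Sketch of the knowledge-base / inference-engine division: the KB holds
# flat variable assignments and rules; the engine applies rules to the
# facts. The medical names and thresholds are illustrative only.

knowledge_base = {
    "facts": {"temperature": 39.5, "has_rash": True},   # flat variable values
    "rules": [
        (lambda f: f["temperature"] > 38.0 and f["has_rash"], "suspect_measles"),
        (lambda f: f["temperature"] <= 37.5, "likely_healthy"),
    ],
}

def inference_engine(kb):
    """Return the conclusion of every rule whose condition holds over the facts."""
    return [conclusion for condition, conclusion in kb["rules"]
            if condition(kb["facts"])]

print(inference_engine(knowledge_base))  # ['suspect_measles']
```

Keeping facts and rules in the KB, separate from the engine, means the domain knowledge can be edited without touching the reasoning machinery, which is exactly the division of labor these early systems established.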
Meanwhile, Marvin Minsky developed the concept of the frame in the mid-1970s.[8] A frame
is similar to an object class: It is an abstract description of a category describing things
in the world, problems, and potential solutions. Frames were originally used on systems
geared toward human interaction, e.g. understanding natural language and the social
settings in which various default expectations such as ordering food in a restaurant
narrow the search space and allow the system to choose appropriate responses to
dynamic situations.
It was not long before the frame communities and the rule-based researchers realized
that there was a synergy between their approaches. Frames were good for representing
the real world, described as classes, subclasses, slots (data values) with various
constraints on possible values. Rules were good for representing and utilizing complex
logic such as the process to make a medical diagnosis. Integrated systems were
developed that combined frames and rules. One of the most powerful and well known
was the 1983 Knowledge Engineering Environment (KEE) from Intellicorp. KEE had a
complete rule engine with forward and backward chaining. It also had a complete frame-
based knowledge base with triggers, slots (data values), inheritance, and message
passing. Although message passing originated in the object-oriented community rather
than AI, it was quickly embraced by AI researchers as well in environments such as KEE
and in the operating systems for Lisp machines from Symbolics, Xerox, and Texas
Instruments.[9]
Overview
Knowledge-representation is a field of artificial intelligence that focuses on designing
computer representations that capture information about the world that can be used for
solving complex problems.
Knowledge representation goes hand in hand with automated reasoning because one of
the main purposes of explicitly representing knowledge is to be able to reason about
that knowledge, to make inferences, assert new knowledge, etc. Virtually all knowledge
representation languages have a reasoning or inference engine as part of the system.[17]
Arguably, FOL has two drawbacks as a knowledge representation formalism in its own
right, namely ease of use and efficiency of implementation. Firstly, because of its high
expressive power, FOL allows many ways of expressing the same information, and this
can make it hard for users to formalise or even to understand knowledge expressed in
complex, mathematically-oriented ways. Secondly, because of its complex proof
procedures, it can be difficult for users to understand complex proofs and explanations,
and it can be hard for implementations to be efficient. As a consequence, unrestricted
FOL can be intimidating for many software developers.
One of the key discoveries of AI research in the 1970s was that languages that do not
have the full expressive power of FOL can still provide close to the same expressive
power of FOL, but can be easier for both the average developer and for the computer to
understand. Many of the early AI knowledge representation formalisms, from databases
to semantic nets to production systems, can be viewed as making various design
decisions about how to balance expressive power with naturalness of expression and
efficiency.[19] In particular, this balancing act was a driving motivation for the
development of IF-THEN rules in rule-based expert systems.
A similar balancing act was also a motivation for the development of logic
programming (LP) and the logic programming language Prolog. Logic programs have a
rule-based syntax, which is easily confused with the IF-THEN syntax of production rules.
But logic programs have a well-defined logical semantics, whereas production systems
do not.
The earliest form of logic programming was based on the Horn clause subset of FOL.
But later extensions of LP included the negation as failure inference rule, which turns LP
into a non-monotonic logic for default reasoning. The resulting extended semantics of
LP is a variation of the standard semantics of Horn clauses and FOL, and is a form of
database semantics,[20] which includes the unique name assumption and a form
of closed world assumption. These assumptions are much harder to state and reason
with explicitly using the standard semantics of FOL.
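Negation as failure under the closed world assumption can be sketched in a few lines: anything not provable from the knowledge base is treated as false, which is what enables default reasoning. The birds below are an illustrative assumption, not from any source:

```python
# Sketch of negation as failure under the closed world assumption: a goal
# that cannot be proved from the facts is treated as false, supporting
# defaults. The individuals are invented for illustration.

facts = {("bird", "tweety"), ("bird", "pete"), ("penguin", "pete")}

# Rule: flies(X) if bird(X) and NOT penguin(X), where "not" means
# "penguin(X) cannot be proved", rather than "penguin(X) is known false".
flies = {x for pred, x in facts
         if pred == "bird" and ("penguin", x) not in facts}

print(sorted(flies))  # ['tweety']: pete is a penguin, so the default fails
```

Note the non-monotonicity: adding the fact penguin(tweety) would retract the conclusion flies(tweety), something classical FOL never does when facts are added.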
In a key 1993 paper on the topic, Randall Davis of MIT outlined five distinct roles to
analyze a knowledge representation framework:[21]
"A knowledge representation (KR) is most fundamentally a surrogate, a substitute for the
thing itself, used to enable an entity to determine consequences by thinking rather than
acting," [21] i.e., "by reasoning about the world rather than taking action in it."[21]
"It is a set of ontological commitments",[21] i.e., "an answer to the question: In what terms
should I think about the world?" [21]
"It is a fragmentary theory of intelligent reasoning, expressed in terms of three components:
(i) the representation's fundamental conception of intelligent reasoning; (ii) the set of
inferences the representation sanctions; and (iii) the set of inferences it recommends."[21]
"It is a medium for pragmatically efficient computation",[21] i.e., "the computational
environment in which thinking is accomplished. One contribution to this pragmatic efficiency
is supplied by the guidance a representation provides for organizing information" [21] so as
"to facilitate making the recommended inferences."[21]
"It is a medium of human expression",[21] i.e., "a language in which we say things about the
world."[21]
Knowledge representation and reasoning are a key enabling technology for
the Semantic Web. Languages based on the Frame model with automatic classification
provide a layer of semantics on top of the existing Internet. Rather than searching via
text strings as is typical today, it will be possible to define logical queries and find pages
that map to those queries.[15] The automated reasoning component in these systems is
an engine known as the classifier. Classifiers focus on the subsumption relations in a
knowledge base rather than rules. A classifier can infer new classes and dynamically
change the ontology as new information becomes available. This capability is ideal for
the ever-changing and evolving information space of the Internet.[22]
The Semantic Web integrates concepts from knowledge representation and reasoning
with markup languages based on XML. The Resource Description Framework (RDF)
provides the basic capabilities to define knowledge-based objects on the Internet with
basic features such as Is-A relations and object properties. The Web Ontology
Language (OWL) adds additional semantics and integrates with automatic classification
reasoners.[16]
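The RDF triple model can be sketched without any library: knowledge is a set of (subject, predicate, object) statements, and Is-A queries follow type and subclass links transitively, as a classifier's subsumption reasoning does. The names below are shortened stand-ins for real URIs:

```python
# Sketch of RDF-style triples with Is-A reasoning: rdf:type-like and
# rdfs:subClassOf-like links queried transitively. Bare names stand in
# for full URIs purely for illustration.

triples = {
    ("fido", "type", "Dog"),
    ("Dog", "subClassOf", "Mammal"),
    ("Mammal", "subClassOf", "Animal"),
    ("fido", "ownedBy", "alice"),
}

def instance_of(triples, individual, cls):
    """True if individual has type cls, directly or via subClassOf chains."""
    classes = {o for s, p, o in triples if s == individual and p == "type"}
    frontier = set(classes)
    while frontier:
        if cls in classes:
            return True
        frontier = {o for s, p, o in triples
                    if p == "subClassOf" and s in frontier}
        classes |= frontier
    return cls in classes

print(instance_of(triples, "fido", "Animal"))  # True, via Dog -> Mammal -> Animal
print(instance_of(triples, "fido", "Robot"))   # False
```

Real Semantic Web stacks express the same triples in RDF/XML or Turtle and delegate the transitive reasoning to an OWL reasoner, but the underlying subsumption query is the one sketched here.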
Characteristics
In 1985, Ron Brachman categorized the core issues for knowledge representation.[23]
In the early years of knowledge-based systems the knowledge-bases were fairly small.
The knowledge-bases that were meant to actually solve real problems rather than do
proof of concept demonstrations needed to focus on well defined problems. So for
example, not just medical diagnosis as a whole topic, but medical diagnosis of certain
kinds of diseases.
As knowledge-based technology scaled up, the need for larger knowledge bases and
for modular knowledge bases that could communicate and integrate with each other
became apparent. This gave rise to the discipline of ontology engineering, designing
and building large knowledge bases that could be used by multiple projects. One of the
leading research projects in this area was the Cyc project. Cyc was an attempt to build
a huge encyclopedic knowledge base that would contain not just expert knowledge but
common-sense knowledge. In designing an artificial intelligence agent, it was soon
realized that representing common-sense knowledge, knowledge that humans simply
take for granted, was essential to make an AI that could interact with humans using
natural language. Cyc was meant to address this problem. The language they defined
was known as CycL.
After CycL, a number of ontology languages have been developed. Most are declarative
languages, and are either frame languages, or are based on first-order logic.
Modularity—the ability to define boundaries around specific domains and problem
spaces—is essential for these languages because as stated by Tom Gruber, "Every
ontology is a treaty–a social agreement among people with common motive in sharing."
There are always many competing and differing views that make any general-purpose
ontology impossible. A general-purpose ontology would have to be applicable in any
domain and different areas of knowledge need to be unified.[27]
There is a long history of work attempting to build ontologies for a variety of task
domains, e.g., an ontology for liquids,[28] the lumped element model widely used in
representing electronic circuits (e.g.[29]), as well as ontologies for time, belief, and even
programming itself. Each of these offers a way to see some part of the world.
The lumped element model, for instance, suggests that we think of circuits in terms of
components with connections between them, with signals flowing instantaneously along
the connections. This is a useful view, but not the only possible one. A different ontology
arises if we need to attend to the electrodynamics in the device: Here signals propagate
at finite speed and an object (like a resistor) that was previously viewed as a single
component with an I/O behavior may now have to be thought of as an extended
medium through which an electromagnetic wave flows.
Ontologies can of course be written down in a wide variety of languages and notations
(e.g., logic, LISP, etc.); the essential information is not the form of that language but the
content, i.e., the set of concepts offered as a way of thinking about the world. Simply put,
the important part is notions like connections and components, not the choice between
writing them as predicates or LISP constructs.
The commitment made selecting one or another ontology can produce a sharply
different view of the task at hand. Consider the difference that arises in selecting the
lumped element view of a circuit rather than the electrodynamic view of the same device.
As a second example, medical diagnosis viewed in terms of rules (e.g., MYCIN) looks
substantially different from the same task viewed in terms of frames (e.g., INTERNIST).
Where MYCIN sees the medical world as made up of empirical associations connecting
symptom to disease, INTERNIST sees a set of prototypes, in particular prototypical
diseases, to be matched against the case at hand.
References
1. ^ Schank, Roger; Abelson, Robert (1977). Scripts, Plans, Goals, and Understanding: An Inquiry
Into Human Knowledge Structures. Lawrence Erlbaum Associates, Inc.
2. ^ Doran, J. E.; Michie, D. (1966-09-20). "Experiments with the Graph Traverser program". Proc.
R. Soc. Lond. A. 294 (1437): 235–
259. Bibcode:1966RSPSA.294..235D. doi:10.1098/rspa.1966.0205. S2CID 21698093.
3. ^ Green, Cordell. Application of Theorem Proving to Problem Solving (PDF). IJCAI 1969.
4. ^ Hewitt, C., 2009. Inconsistency robustness in logic programs. arXiv preprint
arXiv:0904.3036.
5. ^ Kowalski, Robert (1986). "The limitation of logic". Proceedings of the 1986 ACM fourteenth
annual conference on Computer science - CSC '86. pp. 7–
13. doi:10.1145/324634.325168. ISBN 0-89791-177-6. S2CID 17211581.
6. ^ Nilsson, Nils (1995). "Eye on the Prize". AI Magazine. 16: 2.
7. ^ Hayes-Roth, Frederick; Waterman, Donald; Lenat, Douglas (1983). Building Expert Systems.
Addison-Wesley. ISBN 978-0-201-10686-2.
8. ^ Marvin Minsky, A Framework for Representing Knowledge, MIT-AI Laboratory Memo
306, June, 1974
9. ^ Mettrey, William (1987). "An Assessment of Tools for Building Large Knowledge-Based
Systems". AI Magazine. 8 (4). Archived from the original on 2013-11-10. Retrieved 2013-12-
24.
10. ^ Brachman, Ron (1978). "A Structural Paradigm for Representing Knowledge" (PDF). Bolt,
Beranek, and Neumann Technical Report (3605). Archived (PDF) from the original on April 30,
2020.
11. ^ MacGregor, Robert (June 1991). "Using a description classifier to enhance knowledge
representation". IEEE Expert. 6 (3): 41–46. doi:10.1109/64.87683. S2CID 29575443.
12. ^ McCarthy, J., and Hayes, P. J. 1969. Some philosophical problems from the
standpoint of artificial intelligence at the Wayback Machine (archived August 25, 2013).
In Meltzer, B., and Michie, D., eds., Machine Intelligence 4. Edinburgh: Edinburgh
University Press. 463–502.
13. ^ Lenat, Doug; R. V. Guha (January 1990). Building Large Knowledge-Based Systems:
Representation and Inference in the Cyc Project. Addison-Wesley. ISBN 978-0201517521.
14. ^ Smith, Brian C. (1985). "Prologue to Reflections and Semantics in a Procedural
Language". In Ronald Brachman and Hector J. Levesque (ed.). Readings in Knowledge
Representation. Morgan Kaufmann. pp. 31–40. ISBN 978-0-934613-01-9.
15. ^ Berners-Lee, Tim; Hendler, James; Lassila, Ora (May 17, 2001). "The
Semantic Web – A new form of Web content that is meaningful to computers will
unleash a revolution of new possibilities". Scientific American. 284 (5): 34–
43. doi:10.1038/scientificamerican0501-34. Archived from the original on April 24, 2013.
16. ^ Knublauch, Holger; Oberle, Daniel; Tetlow, Phil; Wallace, Evan (2006-03-
09). "A Semantic Web Primer for Object-Oriented Software
Developers". W3C. Archived from the original on 2018-01-06. Retrieved 2008-07-30.
17. ^ Hayes-Roth, Frederick; Waterman, Donald; Lenat, Douglas (1983). Building Expert Systems.
Addison-Wesley. pp. 6–7. ISBN 978-0-201-10686-2.
18. ^ Levesque, H.J. and Brachman, R.J., 1987. Expressiveness and tractability in
knowledge representation and reasoning 1. Computational intelligence, 3(1), pp.78-93.
19. ^ Levesque, Hector; Brachman, Ronald (1985). "A Fundamental Tradeoff in Knowledge
Representation and Reasoning". In Ronald Brachman and Hector J. Levesque (ed.). Readings
in Knowledge Representation. Morgan Kaufmann. p. 49. ISBN 978-0-934613-01-9. The good
news in reducing KR service to theorem proving is that we now have a very clear, very specific
notion of what the KR system should do; the bad news is that it is also clear that the services
cannot be provided... deciding whether or not a sentence in FOL is a theorem... is unsolvable.
20. ^ Russell, Stuart J.; Norvig, Peter. (2021). Artificial Intelligence: A Modern
Approach (4th ed.). Hoboken: Pearson. p. 282. ISBN 978-0134610993. LCCN 20190474.
21. ^ Davis, Randall; Shrobe, Howard; Szolovits, Peter (Spring
1993). "What Is a Knowledge Representation?". AI Magazine. 14 (1): 17–33. Archived from
the original on 2012-04-06. Retrieved 2011-03-23.
22. ^ Macgregor, Robert (August 13, 1999). "Retrospective on Loom". [Link]. Information
Sciences Institute. Archived from the original on 25 October 2013. Retrieved 10
December 2013.
23. ^ Brachman, Ron (1985). "Introduction". In Ronald Brachman and Hector J. Levesque
(ed.). Readings in Knowledge Representation. Morgan Kaufmann. pp. XVI–XVII. ISBN 978-
0-934613-01-9.
24. ^ Bih, Joseph (2006). "Paradigm Shift: An Introduction to Fuzzy Logic" (PDF). IEEE
Potentials. 25: 6–
21. doi:10.1109/MP.2006.1635021. S2CID 15451765. Archived (PDF) from the original on
12 June 2014. Retrieved 24 December 2013.
25. ^ Zlatarva, Nellie (1992). "Truth Maintenance Systems and their Application for Verifying Expert
System Knowledge Bases". Artificial Intelligence Review. 6: 67–
110. doi:10.1007/bf00155580. S2CID 24696160.
26. ^ Levesque, Hector; Brachman, Ronald (1985). "A Fundamental Tradeoff in Knowledge
Representation and Reasoning". In Ronald Brachman and Hector J. Levesque (ed.). Readings
in Knowledge Representation. Morgan Kaufmann. pp. 41–70. ISBN 978-0-934613-01-9.
27. ^ Russell, Stuart J.; Norvig, Peter (2010), Artificial Intelligence: A Modern Approach (3rd
ed.), Upper Saddle River, New Jersey: Prentice Hall, ISBN 0-13-604259-7, p. 437-439
28. ^ Hayes P, Naive physics I: Ontology for liquids. University of Essex report, 1978, Essex,
UK.
29. ^ Davis R, Shrobe H E, Representing Structure and Behavior of Digital Hardware, IEEE
Computer, Special Issue on Knowledge Representation, 16(10):75-82.
Further reading
Ronald J. Brachman; What IS-A is and isn't. An Analysis of Taxonomic Links in Semantic
Networks; IEEE Computer, 16 (10); October 1983
Ronald J. Brachman, Hector J. Levesque Knowledge Representation and Reasoning,
Morgan Kaufmann, 2004 ISBN 978-1-55860-932-7
Ronald J. Brachman, Hector J. Levesque (eds) Readings in Knowledge Representation,
Morgan Kaufmann, 1985, ISBN 0-934613-01-X
Chein, M., Mugnier, M.-L. (2009),Graph-based Knowledge Representation: Computational
Foundations of Conceptual Graphs, Springer, 2009,ISBN 978-1-84800-285-2.
Randall Davis, Howard Shrobe, and Peter Szolovits; What Is a Knowledge
Representation? AI Magazine, 14(1):17-33,1993
Ronald Fagin, Joseph Y. Halpern, Yoram Moses, Moshe Y. Vardi Reasoning About
Knowledge, MIT Press, 1995, ISBN 0-262-06162-7
Jean-Luc Hainaut, Jean-Marc Hick, Vincent Englebert, Jean Henrard, Didier
Roland: Understanding Implementations of IS-A Relations. ER 1996: 42-57
Hermann Helbig: Knowledge Representation and the Semantics of Natural Language,
Springer, Berlin, Heidelberg, New York 2006
Frank van Harmelen, Vladimir Lifschitz and Bruce Porter: Handbook of Knowledge
Representation 2007.
Arthur B. Markman: Knowledge Representation Lawrence Erlbaum Associates, 1998
John F. Sowa: Knowledge Representation: Logical, Philosophical, and Computational
Foundations. Brooks/Cole: New York, 2000
Adrian Walker, Michael McCord, John F. Sowa, and Walter G. Wilson: Knowledge Systems
and Prolog, Second Edition, Addison-Wesley, 1990
Mary-Anne Williams and Hans Rott: "Frontiers in Belief Revision, Kluwer", 2001.
We consider three modern roles for logic in artificial intelligence, which are based on the
theory of tractable Boolean circuits: (1) logic as a basis for computation, (2) logic for
learning from a combination of data and knowledge, and (3) logic for reasoning about
the behavior of machine learning systems.
What is the main purpose of logic?
However, all academic disciplines employ logic: to evaluate evidence, to analyze
arguments, to explain ideas, and to connect evidence to arguments. One of the most
important uses of logic is in composing and evaluating arguments. The study of logic
divides into two main categories: formal and informal.
What is the main function of logic?
At the heart of digital circuits and computing, you'll find logic functions and gates. These
are the building blocks that enable devices to perform complex calculations and make
decisions. Understanding these basics is crucial for delving into more advanced topics
in computer science and electrical engineering.
When you think of studying logic, what comes to mind? Often, logic is one of those subjects —
perhaps along with Latin and philosophy — that many associate with an outdated model of
education or, if studied today, maybe even with a hint of pretension. As a homeschooling family,
is it really necessary for your child to learn logic?
We believe the answer is an emphatic, “Yes!” In this post, we’ll cover the many benefits of
learning logic — from developing critical thinking and decision-making skills to building good
character — as well as several pieces of advice for teaching your student logic in your home
school.
Before diving into three learning outcomes that answer why we should study logic, it’s important
to make a quick note on the term logic used in this post. Generally, the study of logic is
categorized into informal and formal logic. The type of logic we encourage families to study is
formal logic — often referred to as traditional logic — which deals with forms of reasoning. As
Classical Conversations founder Leigh Bortins describes in her book The Question, “Formal
logic has been called ‘math with words.’”
Studying logic is something everybody should do. This includes both homeschool
students and parents. In short, there are three reasons why we should learn logic: it encourages
clear thinking, empowers us to be truly in the image of God, and builds good character.
These same critical thinking skills practiced in logic can also be applied to sound decision-
making, a skill every parent wants their child to develop. Finally, it’s important to study logic to
become an effective communicator. After all, logic is also the backbone necessary for crafting
compelling arguments in speech and writing that point others toward truth.
As Christians, the God we worship is a God of form. Just look in Genesis, Chapter 1. The
universe God created is the ultimate example of order, structure, and form.
Similarly, we too create forms, from math and science formulas to sentence forms to logical
arguments. By using forms to indicate order from disorder and truth from uncertainty, we
establish ourselves as made in the image of God.
For many parents and students, studying logic isn’t easy. Often, along with learning logic come
times of frustration and befuddlement. Still, the goal of learning logic is to become better
thinkers, which is a worthwhile end to strive toward no matter how strenuous the journey may
become. Following through with your study of logic will empower you and your student
with confidence in your abilities to learn something challenging and use critical thinking skills
to make sound judgments and arrive at the truth in other areas of life.
Convinced why your student should learn logic? Although the learning outcomes of studying
logic are noble and inspiring, many parents struggle when it comes to actually teaching the
subject. With its forms, structure, and objectivity, logic can appear intimidating. Hopefully, these
three pieces of advice will help and encourage you to take on the worthy task of homeschooling
your child in logic!
1. Stay Persistent!
Although this may not be what you want to hear, all difficult subjects — logic included —
require persistence and hard work. Constantly remind yourself that the end goal of your student
learning logic is to equip them with the skills to think critically. So, be persistent in teaching
your student logic. In time, your student will learn to apply critical thinking skills to make good
decisions and to detect truth from falsehood in everyday situations and encounters. It’s worth
every difficult moment to see these fruits of your labor!
Times when homeschooling is hard are a natural part of this journey. Still, that doesn’t mean you
have to go at it alone. Find other homeschool parents whom you can rely on for support,
guidance, and advice in teaching your child logic, whether in your Classical Conversations local
community, a homeschool co-op, or elsewhere. Homeschooling in isolation is never a good idea!
2. Build a Firm Foundation!
The road to becoming a skilled logician begins with an understanding of the grammar — or
foundational knowledge — of the subject. Make sure to spend time with your student repeating
the basics of logic over and over before moving on to complex problems and concepts.
What are these foundations of logic? Well, there are logic vocabulary terms and definitions to
commit to memory, like argument, syllogism, conclusion, major premise, minor premise,
and fallacy. In addition, you and your student should understand the principles of logic, or “how
logic works.” That is, spend time studying the basic rules and procedures associated with clear
thinking and reasoning.
Moving on to advanced exercises and ideas before establishing a firm foundation will only lead
to discouragement with this subject. For instance, don’t feel guilty if you have to spend several
more weeks studying the basics of logic. In the end, this actually might end up saving you time,
not to mention a good deal of frustration!
3. Connect Logic to Other Subjects!
One of the tenets of classical education is the idea that all subjects are interconnected. Thus,
subjects shouldn’t be studied as if they are islands, unrelated to each other.
A great benefit of learning logic is that it trains students to think clearly in all subjects by helping
them organize, make connections, and draw conclusions about all types of information. So,
encourage your student to utilize what they are learning in their study of logic to understand why
Hester Prynne made the decisions she did in The Scarlet Letter or what events motivated
American colonists to wage war against England in the American Revolution.
The truth is that the skills of logic are applicable to all areas of life, and not just if your student
goes on to study math or computer science in college. From literature and art to history and
science, logic can be used everywhere. Encouraging your student to use logical reasoning in their
other subjects will show them that logic is useful and an important skill to master.
The Beauty of Learning Formal Logic
Sure, learning and teaching formal logic can be intimidating. But still, there’s something equally
attractive about the study of logic. Arriving at objective truth, knowing that which can be known,
making good decisions — these are beautiful goals that make the study of logic well worth the
effort.
Knowledge Representation using Logic
The aim of this section is to show how logic can be used to form representations of
the world and how a process of inference can be used to derive new representations
about the world and how these can be used by an intelligent agent to deduce what to
do.
We require:
a formal language in which knowledge can be represented
a means of carrying out reasoning in such a language
Why logic?
The challenge is to design a language which allows one to represent all the necessary
knowledge. We need to be able to make statements about the world such as describing
things - people, houses, theories etc; relations between things and properties of things.
Logic makes statements about the world which are true (or false) if the state of affairs
it represents is the case (or not the case). Compared to natural languages (expressive
but context sensitive) and programming languages (good for concrete data structures
but not expressive) logic combines the advantages of natural languages and formal
languages. Logic is:
concise
unambiguous
context insensitive
expressive
effective for inferences
Propositional Logic
Propositional symbols are used to represent facts. Each symbol can mean what we want it to be.
Each fact can be either true or false.
Propositions are combined with logical connectives to generate sentences with more complex
meaning. The connectives are:
∧   AND (conjunction)
∨   OR (disjunction)
¬   NOT (negation)
⇒   implies
⇔   mutual implication (if and only if)
The meaning (semantics) of the connectives is represented by truth tables. The following are the
truth tables for the connectives:

Conjunction (E ∧ F):
             F = True    F = False
  E = True   True        False
  E = False  False       False

Negation (¬E):
  E = True   False
  E = False  True

Disjunction (E ∨ F):
             F = True    F = False
  E = True   True        True
  E = False  True        False

Implication (E ⇒ F):
             F = True    F = False
  E = True   True        False
  E = False  True        True
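The truth tables above can be generated mechanically. A minimal Python sketch (the helper names `implies` and `iff` are our own, not standard library functions):

```python
# Truth values of the connectives for every assignment of E and F.
# E => F is material implication: equivalent to (not E) or F.
# E <=> F is mutual implication: E and F have the same truth value.

def implies(e, f):
    return (not e) or f

def iff(e, f):
    return e == f

print("E      F      AND    OR     =>     <=>")
for e in (True, False):
    for f in (True, False):
        row = (e, f, e and f, e or f, implies(e, f), iff(e, f))
        print("  ".join(f"{str(v):<5}" for v in row))
```

Note that `implies(False, f)` is always `True`, which matches the remark below that "False ⇒ True" is a true assertion.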
NOTE(1): (¬E ∨ F) ⇔ (E ⇒ F).
NOTE(2): "False ⇒ True" is a True sentence/assertion.
Implication
Its truth table does not quite fit our intuitive understanding of ``E implies F'', since propositional
logic does not require any relation of causation or relevance between E and F. It is better to think
of ``E ⇒ F'' as saying: ``if E is true then I am claiming that F is true (otherwise I am making no
claim).''
Commutative:   E ∧ F ⇔ F ∧ E
               E ∨ F ⇔ F ∨ E
Distributive:  E ∧ (F ∨ G) ⇔ (E ∧ F) ∨ (E ∧ G)
               E ∨ (F ∧ G) ⇔ (E ∨ F) ∧ (E ∨ G)
Associative:   E ∧ (F ∧ G) ⇔ (E ∧ F) ∧ G
               E ∨ (F ∨ G) ⇔ (E ∨ F) ∨ G
De Morgan's:   ¬(E ∧ F) ⇔ ¬E ∨ ¬F
               ¬(E ∨ F) ⇔ ¬E ∧ ¬F
Negation:      ¬(¬E) ⇔ E
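Because each law involves at most three propositional symbols, it can be verified by brute-force enumeration of all truth assignments. A minimal Python sketch (the helper name `equivalent` is our own):

```python
from itertools import product

def equivalent(f, g, n):
    """True when boolean functions f and g agree on every assignment of n symbols."""
    return all(f(*v) == g(*v) for v in product((True, False), repeat=n))

# De Morgan's laws
assert equivalent(lambda e, f: not (e and f),
                  lambda e, f: (not e) or (not f), 2)
assert equivalent(lambda e, f: not (e or f),
                  lambda e, f: (not e) and (not f), 2)
# Distributivity of AND over OR
assert equivalent(lambda e, f, g: e and (f or g),
                  lambda e, f, g: (e and f) or (e and g), 3)
# Double negation
assert equivalent(lambda e: not (not e), lambda e: e, 1)
print("all equivalences verified")
```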
Logical Proofs
A sentence is valid or necessarily true if and only if it is true under all possible
interpretations in all possible worlds. If a sentence is valid, it is so by virtue of its
logical structure independent of what possible interpretations are like.
A sentence is satisfiable if and only if there is some interpretation in some world for
which it is true. A sentence that is not satisfiable is unsatisfiable.
Making Inferences
Reasoning and inference are generally used to describe any process by which
conclusions are reached. Inference is often of three types: deduction, abduction and
induction.
1. Deduction
These are inferences which are made by the sound rules of inference. This
means that the conclusion is true in all cases where the premises are true; the
rules preserve truth.
2. Abduction
If there is an axiom E ⇒ F and an axiom F, then E does NOT logically follow.
Concluding E anyway is called abduction, and it is not a sound rule of inference.
3. Induction
These are inferences that generalize from specific observed cases to a general
rule; like abduction, they are not guaranteed to preserve truth.
Looking at the truth tables you will easily see that the sound rules of deduction are:
1. Modus ponens
From an implication and the premise of the implication you can infer the
conclusion.
2. Modus tollens
From an implication and the negation of its conclusion you can infer the
negation of its premise.
3. Resolution.
In fact, resolution can subsume both modus ponens and modus tollens. It can
also be generalized so that there can be any number of disjuncts in either of the
two resolving expressions, including just one. (Note: disjuncts are expressions
connected by ∨, conjuncts are those connected by ∧.) The only requirement is that
one expression contains the negation of one disjunct from the other.
To verify soundness we can construct a truth-table with one line for each
possible model of the proposition symbols in the premise, and show that in all
models where the premise is true, the conclusion is also true.
A  B  C | A ∨ B | ¬B ∨ C | A ∨ C
F  F  F |   F   |   T    |   F
F  F  T |   F   |   T    |   T
F  T  F |   T   |   F    |   F
F  T  T |   T   |   T    |   T
T  F  F |   T   |   T    |   T
T  F  T |   T   |   T    |   T
T  T  F |   T   |   F    |   T
T  T  T |   T   |   T    |   T
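The same truth-table argument can be written as a short enumeration in Python: resolution is sound if the conclusion A ∨ C holds in every model that satisfies both premises A ∨ B and ¬B ∨ C.

```python
from itertools import product

# Resolution: from (A or B) and ((not B) or C), infer (A or C).
# Sound iff the conclusion holds in every model where both premises hold.
sound = all(
    (a or c)                          # conclusion
    for a, b, c in product((True, False), repeat=3)
    if (a or b) and ((not b) or c)    # keep only models satisfying both premises
)
print("resolution is sound:", sound)
```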
Types of Logic in AI
In the intricate world of Artificial Intelligence (AI), logic serves as the foundational fabric
that weaves together the intricacies of decision-making and problem-solving. As we move
through the diverse landscape of artificial intelligence, exploring the various types of logic in
AI becomes crucial. From classical propositional logic to advanced Bayesian reasoning,
each type of logic contributes uniquely to the capabilities of AI systems. In this
article, we navigate the nuanced concepts of logic and explain their significance in AI
development.
In other words, logic serves as "the compass steering the ship of AI development". It
provides a robust framework for developers to define rules, constraints, and
relationships within a system. This structured approach is paramount, especially in
scenarios where clear decision paths are vital. Consider an autonomous vehicle relying
on logical reasoning to navigate through traffic, follow traffic rules, and make split-
second decisions ensuring passenger safety.
Just as the human brain reasons in different ways, there are several different types of logic in
AI. Let's take a tour through a handful of these logical frameworks.
Propositional Logic
Propositional logic, often called 'sentential' or 'zeroth-order' logic, constitutes a
fundamental aspect of logical reasoning within the realm of AI. In its essence, this logic
type deals with propositions: statements that can be either true or false, offering a
simplistic binary framework for decision-making.
First-Order Logic
First-order logic, an extension of propositional logic, introduces the notion of variables
and quantifiers, bringing a more nuanced layer of complexity to logical reasoning. In this
logical system, statements are not merely true or false; they involve objects, properties,
and relationships between them.
Consider an AI tasked with managing a complex network of renewable energy assets.
First-order logic enables the AI to express intricate relationships, such as "For every
solar panel, there exists a corresponding battery" or "If the wind speed exceeds a
certain threshold, decrease the rotational speed of all turbines." This level of
expressiveness allows the AI to model and comprehend the interconnected aspects of a
renewable energy system, facilitating more sophisticated decision-making and
optimisation strategies.
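As an illustration only (the panel and battery identifiers below are invented), a statement with universal and existential quantifiers can be evaluated over a finite model by exhaustive iteration:

```python
# A tiny finite model for first-order reasoning.
# "For every solar panel, there exists a corresponding battery."
panels = {"p1", "p2"}
batteries = {"b1", "b2"}
paired_with = {("p1", "b1"), ("p2", "b2")}  # hypothetical pairing relation

forall_exists = all(
    any((p, b) in paired_with for b in batteries)  # ∃ battery for this panel
    for p in panels                                # ∀ panels
)
print(forall_exists)  # True for this model
```

Real first-order reasoners (theorem provers, Prolog-style engines) avoid this brute-force check, but the semantics of ∀ and ∃ over a finite domain is exactly this nesting of `all` and `any`.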
Fuzzy Logic
Fuzzy logic, departing from the binary nature of classical logic, acknowledges the
shades of grey in between true and false, allowing for a more nuanced representation of
uncertainty. In AI applications for renewable energy, fuzzy logic proves invaluable in
handling imprecise and fluctuating inputs, such as weather conditions.
Fuzzy logic's ability to capture and process uncertainties enhances the adaptability of AI
systems in the dynamic context of renewable energy operations.
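A common way to realise this is with membership functions mapping a crisp input to a degree of truth in [0, 1], combined with Zadeh's min/max operators. The thresholds below are illustrative assumptions, not engineering values:

```python
def high_wind(speed_ms):
    """Fuzzy membership for 'high wind': 0 below 5 m/s, 1 above 15 m/s,
    linearly interpolated in between (illustrative thresholds)."""
    if speed_ms <= 5:
        return 0.0
    if speed_ms >= 15:
        return 1.0
    return (speed_ms - 5) / 10

# Fuzzy AND / OR are commonly modelled with min / max (Zadeh operators).
def fuzzy_and(a, b):
    return min(a, b)

def fuzzy_or(a, b):
    return max(a, b)

print(high_wind(10))  # 0.5: the wind is 'half high' rather than simply true/false
```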
Modal Logic
Modal logic introduces modalities, or qualifiers, to classical logic to account for different
states of affairs or possibilities. Modalities are expressions that signify necessity,
possibility, belief, or time. In the realm of AI and renewable energy, modal logic finds
relevance in scenario planning and decision-making under different conditions.
Consider an AI system managing a wind farm. Modal logic allows the AI to reason about
various possibilities, such as different wind speeds, turbine states, and maintenance
scenarios. This nuanced approach, leveraging modalities, enables the AI to make
decisions that consider not only the current state but also potential future states,
contributing to improved forecasting and adaptive strategies.
By incorporating modal logic, AI systems in renewable energy enhance their capacity to
anticipate and respond to a spectrum of operational scenarios.
Bayesian Logic
Bayesian logic, deeply rooted in probability theory, is a cornerstone in AI for decision-
making under uncertainty. It applies Bayes' theorem to update probabilities as new
information emerges. In the context of renewable energy systems, Bayesian logic
proves invaluable for predictive maintenance.
Imagine an AI-driven solar farm management system. Bayesian logic allows the system
to continually refine its predictions about potential faults or performance issues based
on incoming data. For instance, if a solar panel shows a slight degradation in
performance, the system, leveraging Bayesian principles, can intelligently update the
probability of an impending failure. This dynamic adjustment enhances the precision of
maintenance schedules, ensuring timely interventions and optimising the overall
performance of the solar array.
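The update rule behind this is Bayes' theorem, P(H|E) = P(E|H) · P(H) / P(E). A minimal sketch with hypothetical numbers (the probabilities below are invented for illustration):

```python
def bayes_update(prior, likelihood, evidence_prob):
    """Posterior P(H|E) = P(E|H) * P(H) / P(E)."""
    return likelihood * prior / evidence_prob

# Hypothetical figures: 2% of panels are faulty, 90% of faulty panels show
# degradation, and 10% of all panels show degradation for any reason.
prior_fault = 0.02
p_deg_given_fault = 0.9
p_deg = 0.1

posterior = bayes_update(prior_fault, p_deg_given_fault, p_deg)
print(posterior)  # roughly 0.18: observing degradation raises P(fault) ninefold
```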
In this article, we are going to learn about logic, which we mentioned
earlier in the knowledge representation of an Artificial Intelligence-based agent. We
will discuss what logic means in terms of AI and what the types of logic are.
We will also study why these are important when dealing with Artificial
Intelligence.
Submitted by Monika Sharma, on June 06, 2019
While taking any decision, the agent must provide specific reasons on the basis of
which the decision was taken. This reasoning can be done by the agent only
if it is capable of understanding and applying logic. In AI, logic is of two types:
1. Deductive logic
2. Inductive logic
1) Deductive logic
In deductive logic, the complete evidence is provided about the truth of the
conclusion made. Here, the agent uses specific and accurate premises that lead
to a specific conclusion. An example of this logic can be seen in an expert system
designed to suggest medicines to the patient. The agent gives the complete
proof about the medicines suggested by it, like the particular medicines are
suggested to a person because the person has so and so symptoms.
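The expert-system behaviour described above can be sketched as simple forward chaining over if-then rules. The symptom names and rules below are invented for illustration, not medical knowledge:

```python
# Each rule: (set of required facts, fact to conclude).
rules = [
    ({"fever", "cough"}, "suspect_flu"),
    ({"suspect_flu"}, "recommend_antiviral"),
]

def forward_chain(facts, rules):
    """Repeatedly fire any rule whose premises are all known facts,
    until no new fact can be derived (deduction by modus ponens)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = forward_chain({"fever", "cough"}, rules)
print(derived)
```

Because every derived fact can be traced back through the rules that produced it, the system can justify its suggestion, which is exactly the "complete proof" property described above.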
2) Inductive logic
In Inductive logic, the reasoning is done through a ‘bottom-up’ approach. What
this means is that the agent here takes specific information and then generalizes
it for the sake of complete understanding. An example of this can be seen in the
natural language processing by an agent, in which it groups words according
to their category, i.e. verb, noun, article, etc., and then infers the meaning of that
sentence.