Did u know? Recent developments in KR include the concept of the Semantic Web and the development of XML-based knowledge representation languages and standards, including the Resource Description Framework (RDF), RDF Schema, Topic Maps, DARPA Agent Markup Language (DAML), Ontology Inference Layer (OIL), and the Web Ontology Language (OWL).
There are several KR techniques, such as frames, rules, tagging, and semantic networks, which originated in cognitive science. Since knowledge is used to achieve intelligent behavior, the fundamental goal of knowledge representation is to facilitate reasoning, inferencing, or drawing conclusions. A good KR should capture both declarative and procedural knowledge. Knowledge representation is best understood in terms of five distinct roles it plays, each crucial to the task at hand (a short frame-style sketch follows this list):
1. A knowledge representation (KR) is most fundamentally a surrogate, a substitute for the thing itself, used to enable an entity to determine consequences by thinking rather than acting, i.e., by reasoning about the world rather than taking action in it.
2. It is a set of ontological commitments, i.e., an answer to the question: In what terms should I think about the world?
3. It is a fragmentary theory of intelligent reasoning, expressed in terms of three components: (i) the representation's fundamental conception of intelligent reasoning; (ii) the set of inferences the representation sanctions; and (iii) the set of inferences it recommends.
4. It is a medium for pragmatically efficient computation, i.e., the computational environment in which thinking is accomplished. One contribution to this pragmatic efficiency is supplied by the guidance a representation provides for organizing information so as to facilitate making the recommended inferences.
5. It is a medium of human expression, i.e., a language in which we say things about the world.
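As a small illustration of the frame technique mentioned above, the sketch below encodes a few frames as Python dictionaries and adds a slot-lookup procedure that inherits default values along "isa" links. The frame contents and the get_slot helper are invented for this example; they are not taken from any particular library.

    # Frame-style representation: each frame is a set of slot/value pairs.
    # Defaults are inherited along the "isa" link unless overridden locally.
    frames = {
        "animal":  {},
        "bird":    {"isa": "animal", "locomotion": "flies", "covering": "feathers"},
        "penguin": {"isa": "bird", "locomotion": "walks"},   # overrides the default
    }

    def get_slot(frame_name, slot):
        """Look up a slot, climbing the 'isa' hierarchy for inherited defaults."""
        frame = frames.get(frame_name)
        while frame is not None:
            if slot in frame:
                return frame[slot]
            frame = frames.get(frame.get("isa"))
        return None

    print(get_slot("penguin", "covering"))    # -> 'feathers' (inherited)
    print(get_slot("penguin", "locomotion"))  # -> 'walks' (local override)

Here the frames hold the declarative facts, while the lookup procedure supplies the procedural side: defaults are inherited and can be overridden locally, which is one simple way a representation guides the inferences it recommends.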
7.2.2 Common Realms of Discourse
The Dempster–Shafer theory (DST) is a mathematical theory of evidence. It allows one to combine evidence from different sources and arrive at a degree of belief (represented by a belief function) that takes into account all the available evidence. The theory was first developed by Arthur P. Dempster and Glenn Shafer.
In a narrow sense, the term "Dempster–Shafer theory" refers to the original conception of the theory by Dempster and Shafer. However, it is more common to use the term in the wider sense of the same general approach, as adapted to specific kinds of situations. In particular, many authors have proposed different rules for combining evidence, often with a view to handling conflicts in evidence better.
Dempster–Shafer theory is a generalization of the Bayesian theory of subjective probability; whereas the latter requires probabilities for each question of interest, belief functions base degrees of belief (or confidence, or trust) for one question on the probabilities for a related question. These degrees of belief may or may not have the mathematical properties of probabilities; how much they differ depends on how closely the two questions are related. Put another way, DST is a way of representing epistemic plausibilities, but it can yield answers that contradict those arrived at using probability theory.
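In the standard DST notation (the symbols \Theta, m, Bel, and Pl below are the usual textbook ones and do not appear elsewhere in this section), a basic belief assignment m distributes unit mass over subsets of a frame of discernment \Theta, and degrees of belief and plausibility are read off it as:

    m : 2^{\Theta} \to [0, 1], \qquad m(\emptyset) = 0, \qquad \sum_{A \subseteq \Theta} m(A) = 1

    \mathrm{Bel}(A) = \sum_{B \subseteq A} m(B), \qquad \mathrm{Pl}(A) = \sum_{B \cap A \neq \emptyset} m(B)

The interval [Bel(A), Pl(A)] brackets the probability that could be assigned to A; when Bel(A) = Pl(A) for every A, the belief function reduces to an ordinary probability distribution, which is the Bayesian special case mentioned above.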
Often used as a method of sensor fusion, Dempster–Shafer theory is based on two ideas: obtaining degrees of belief for one question from subjective probabilities for a related question, and Dempster's rule for combining such degrees of belief when they are based on independent items of evidence.
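As an illustration only (the function and variable names below are invented for this sketch), Dempster's rule of combination for two independent mass functions can be implemented in a few lines of Python. The rule multiplies the masses of every pair of focal sets, assigns each product to the intersection of the pair, and renormalizes by 1 - K, where K is the total mass falling on the empty set (the conflict):

    from itertools import product

    def combine(m1, m2):
        """Dempster's rule of combination for two mass functions.
        m1, m2: dicts mapping frozenset hypotheses (subsets of the frame
        of discernment) to masses that each sum to 1."""
        combined = {}
        conflict = 0.0
        for (b, w1), (c, w2) in product(m1.items(), m2.items()):
            common = b & c
            if common:
                combined[common] = combined.get(common, 0.0) + w1 * w2
            else:
                conflict += w1 * w2          # mass assigned to the empty set
        if conflict >= 1.0:
            raise ValueError("total conflict: the sources cannot be combined")
        return {h: v / (1.0 - conflict) for h, v in combined.items()}

    # Two sensors reporting on the frame {'red', 'green'}
    m_sensor1 = {frozenset({'red'}): 0.6, frozenset({'red', 'green'}): 0.4}
    m_sensor2 = {frozenset({'red'}): 0.7, frozenset({'green'}): 0.1,
                 frozenset({'red', 'green'}): 0.2}
    print(combine(m_sensor1, m_sensor2))

The division by 1 - K is exactly the step that the alternative combination rules mentioned above modify when the sources conflict heavily.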