Introduction to Artificial Intelligence & Expert Systems
The second view is the probabilistic view: the random variable F = f(G) depends upon the
random variable G = g(H), which depends upon H = h(X), which in turn depends upon the
random variable X. This view is most commonly encountered in the context of graphical models.
The two views are largely equivalent. In either case, for this particular network architecture, the
components of individual layers are independent of each other (e.g., the components of g are
independent of each other given their input h). This naturally enables a degree of parallelism in
the implementation.
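The layered, compositional view above can be sketched in a few lines of Python. The layer functions here are illustrative placeholders, not taken from the text; the point is that each component of a layer depends only on that layer's input, which is what makes within-layer parallelism possible.

```python
def h(x):
    # First layer: each component is computed independently from x.
    return [xi * 2 for xi in x]

def g(hidden):
    # Second layer: each component depends only on the vector h(x),
    # not on the other components of g -- they could run in parallel.
    return [hi + 1 for hi in hidden]

def f(hidden):
    # Output layer: collapses the vector g(h(x)) to a single value.
    return sum(hidden)

def F(x):
    # The whole network is just the composition F = f(g(h(x))).
    return f(g(h(x)))

print(F([1, 2, 3]))  # 15
```

Because the components inside each list comprehension are independent given the layer's input, the same code could be parallelised with a thread pool without changing the result.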
Dealing with Uncertainty and Change
Fancy Logics
Dealing with uncertainty and change is important in AI because:

- The world changes, sometimes unpredictably, as a result of external actions (i.e., actions
  that we do not perform ourselves).
- Our beliefs about the world change (whether or not the world itself changes). If we get
  new evidence about something, a whole web of related beliefs may change.
- Our beliefs about the world may be uncertain. We may be unsure whether we have
  observed something correctly, or we may draw plausible but not certain inferences from
  our existing, variably certain beliefs.
Dealing with such things in an ad hoc way isn't too hard. We can write rules that delete facts
from working memory when they change, or attach numerical certainty factors to rules and
facts. What is hard is dealing with uncertainty in a principled way. First-order predicate logic is
inadequate, as it is designed to work with information that is complete, consistent and monotonic
(meaning that facts are only ever added to, never deleted from, the set of things known to be
true). There is no straightforward way of using it to deal with incomplete, variably certain,
inconsistent and non-monotonic inferences. (Nor can we easily use it to explicitly represent
beliefs about the world, for reasons that should become clear later.) We would like a formal,
principled, and preferably simple basis for dealing with belief, uncertainty and change.
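As an illustration of the ad hoc style just mentioned, here is a sketch of combining two numerical certainty factors attached to rules supporting the same conclusion. The combination formula is the well-known MYCIN-style convention for two positive factors; it is one common choice, not something prescribed by this text.

```python
def combine_cf(cf1, cf2):
    # MYCIN-style combination for two positive certainty factors
    # (each in [0, 1]): evidence accumulates, but the combined
    # certainty can never exceed 1.0.
    return cf1 + cf2 * (1.0 - cf1)

# Hypothetical example: two rules support "engine_fault",
# one with certainty 0.6 and one with certainty 0.5.
print(combine_cf(0.6, 0.5))  # 0.8
```

Schemes like this are easy to implement but hard to justify, which is exactly why a principled foundation in logic or probability theory is attractive.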
There are two main approaches to dealing with all this. The first is to use a fancier logic. It is
important to remember that first order predicate logic is but one logic among many. New logics
pop up every day, and most come with a clear, well defined semantics and proof theory. Some
of these fancy logics appear to be just what we need to deal with belief, uncertainty and change
(though in practice we tend to find that no logic solves all our problems, so people end up
picking a suitable logic off the shelf depending on their application).
The second approach is to base things not on a logic but on probability theory. This too is a
well-understood theory, with clear results concerning uncertainty. Unfortunately, basic
probability theory tends to make too many assumptions about the availability and reliability of
evidence, so we have to be careful how we use it. It still provides a good starting point, however,
and a way of assessing more ad hoc approaches to uncertainty: are they consistent with classical
probability theory, given certain assumptions?
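As a concrete instance of what probability theory offers, a single application of Bayes' rule shows how a belief is revised when new evidence arrives. The numbers below are hypothetical, chosen only to make the calculation visible.

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    # Posterior P(H|E) via Bayes' rule:
    #   P(H|E) = P(E|H) P(H) / P(E)
    # where P(E) = P(E|H) P(H) + P(E|not H) P(not H).
    p_e = p_e_given_h * prior + p_e_given_not_h * (1.0 - prior)
    return p_e_given_h * prior / p_e

# Hypothetical numbers: P(H) = 0.01, P(E|H) = 0.9, P(E|not H) = 0.1.
posterior = bayes_update(0.01, 0.9, 0.1)
print(round(posterior, 3))  # 0.083
```

Even strong evidence (here nine times likelier under H than under its negation) only raises a very unlikely hypothesis to about 8%, which illustrates why beliefs should be updated rather than replaced.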
The first type of logic is default logic, which allows us to perform non-monotonic inferences,
where new information may mean that certain facts are no longer believed true.
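This non-monotonic behaviour can be illustrated with a toy default rule ("birds fly unless shown otherwise"). The fact and predicate names are invented for the example; real default logics are far more general.

```python
def flies(facts):
    # Default rule: conclude that something flies if it is a bird
    # and we cannot show an exception.
    if "penguin" in facts:
        return False          # known exception overrides the default
    if "bird" in facts:
        return True           # default conclusion
    return None               # nothing can be concluded

print(flies({"bird"}))             # True
print(flies({"bird", "penguin"}))  # False -- adding a fact withdrew a conclusion
```

Adding the fact "penguin" retracts a conclusion that was previously believed, which is exactly the behaviour first-order predicate logic cannot express: in a monotonic logic, adding premises can never remove theorems.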
Default Logics
Default logics allow us to reason about things where there is a general case, true of almost
everything, and some exceptions. We met this idea when looking at frame systems and
246 LOVELY PROFESSIONAL UNIVERSITY