Page 273 - DCAP310_INTRODUCTION_TO_ARTIFICIAL_INTELLIGENCE_AND_EXPERT_SYSTEMS

Unit 14: Types of Learning




          more likely to transfer the common strategy into a subsequent negotiation task than were
          students who analyzed the same two cases separately.

          14.8.3 Retrieval of Analogy: The Inert Knowledge Problem

          Inert knowledge is information that one can express but not use. The learner's understanding
          does not reach the point where the knowledge can be applied to effective problem-solving in
          realistic situations. The phenomenon of inert knowledge was first described in 1929 by
          Alfred North Whitehead:
          “Theoretical ideas should always find important applications within the pupil’s curriculum. This is not an
          easy doctrine to apply, but a very hard one. It contains within itself the problem of keeping knowledge alive,
          of preventing it from becoming inert, which is the central problem of all education.”
          An example of inert knowledge is the vocabulary of a foreign language that is available
          during an exam but not in a real situation of communication.
          One explanation for the problem of inert knowledge is that people often encode knowledge
          with respect to a specific situation, so that later remindings occur only in highly similar
          situations.

          In contrast, so-called conditionalized knowledge is knowledge about something that also
          includes knowledge of the contexts in which that knowledge will be useful.

          Self Assessment

          State whether the following statements are true or false:

          15.  Analogy plays an important role in learning and instruction.
          16.  Learning from cases is often more difficult than learning principles directly.

          14.9 Explanation-based Learning (EBL)

          Explanation-based learning (EBL) is a form of machine learning that exploits a very strong, or
          even perfect, domain theory to make generalizations or form concepts from training examples.
          An example of EBL using a perfect domain theory is a program that learns to play chess by being
          shown examples. A specific chess position that contains an important feature, say, “Forced loss
          of black queen in two moves,” includes many irrelevant features, such as the specific scattering
          of pawns on the board. EBL can take a single training example and determine what are the
          relevant features in order to form a generalization. A domain theory is perfect or complete if it
          contains, in principle, all information needed to decide any question about the domain. For
          example, the domain theory for chess is simply the rules of chess. Knowing the rules, in principle
          it is possible to deduce the best move in any situation. However, actually making such a deduction
          is impossible in practice due to combinatorial explosion. EBL uses training examples to make
          searching for deductive consequences of a domain theory efficient in practice. An EBL system
          works by finding a way to deduce each training example from the system’s existing database of
          domain theory. Having a short proof of the training example extends the domain-theory database,
          enabling the EBL system to find and classify future examples that are similar to the training
          example very quickly. The main drawback of the method is the cost of applying the learned
          proof macros as these become numerous.
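The process described above can be sketched in a few lines of Python. This is a minimal illustration, not a full EBL system: the toy Horn-clause domain theory (the well-known "cup" example from the EBL literature), the rule and fact names, and the helper `prove` are all assumptions chosen for the sketch. The point is that explaining one training example picks out only the facts the proof actually uses, so an irrelevant feature (here, `made_of_china`) is dropped from the learned macro.

```python
# Minimal sketch of explanation-based generalization over a toy
# Horn-clause domain theory. All names here are illustrative.

# Domain theory: head <- body (conjunction of subgoals), with X a variable.
RULES = {
    "cup(X)":          ["liftable(X)", "holds_liquid(X)"],
    "liftable(X)":     ["light(X)", "has_handle(X)"],
    "holds_liquid(X)": ["has_concavity(X)", "concavity_up(X)"],
}

# One training example: ground facts observed about a specific object.
# The last fact is irrelevant to cup-hood and should not appear in the macro.
FACTS = {"light(obj1)", "has_handle(obj1)", "has_concavity(obj1)",
         "concavity_up(obj1)", "made_of_china(obj1)"}

def prove(goal, obj):
    """Deduce `goal` for `obj`; return the leaf facts the proof uses,
    or None if no proof exists."""
    if goal.replace("X", obj) in FACTS:
        return [goal]                      # operational leaf of the proof tree
    body = RULES.get(goal)
    if body is None:
        return None
    leaves = []
    for subgoal in body:
        sub_leaves = prove(subgoal, obj)
        if sub_leaves is None:
            return None                    # one failed subgoal fails the rule
        leaves.extend(sub_leaves)
    return leaves

# Explain the single example, then keep only the relevant (variablized)
# leaves as a learned macro rule: cup(X) <- light(X), has_handle(X), ...
leaves = prove("cup(X)", "obj1")
macro = ("cup(X)", leaves)
print(macro)
```

Applying the macro later is a single conjunction test against the leaf facts, which is why learned macros speed up classification of similar examples, and also why their cost grows as they accumulate.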



             Did u know? Humans appear to learn quite a lot from one example.





                                           LOVELY PROFESSIONAL UNIVERSITY                                   267