Page 249 - DCAP310_INTRODUCTION_TO_ARTIFICIAL_INTELLIGENCE_AND_EXPERT_SYSTEMS

Unit 13: Expert System Architecture




          13.4 Semantic Memory

          Semantic memory refers to the memory of meanings, understandings, and other concept-based
          knowledge, and underlies the conscious recollection of factual information and general
          knowledge about the world. Semantic and episodic memory together make up the category of
          declarative memory, which is one of the two major divisions in memory. With the use of our
          semantic memory we can give meaning to otherwise meaningless words and sentences. We can
          learn about new concepts by applying our knowledge learned from things in the past. The
          counterpart to declarative, or explicit memory, is procedural memory, or implicit memory.
          TLC (the Teachable Language Comprehender) is an instance of a more general class of models known as semantic networks. In a semantic
          network, each node is to be interpreted as representing a specific concept, word, or feature. That
          is, each node is a symbol. Semantic networks generally do not employ distributed representations
          for concepts, as may be found in a neural network. The defining feature of a semantic network
          is that its links are almost always directed (that is, they only point in one direction, from a base
          to a target) and the links come in many different types, each one standing for a particular
          relationship that can hold between any two nodes. Processing in a semantic network often takes
          the form of spreading activation.
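The properties described above can be sketched in code. The following is a minimal, illustrative semantic network, not a model from the text: nodes are concept symbols, links are directed and typed, and retrieval proceeds by spreading activation that attenuates with distance from the source node. The class name, decay rate, and example concepts are all assumptions chosen for the sketch.

```python
from collections import defaultdict

class SemanticNetwork:
    """Toy semantic network: each node is a symbol, and each link is
    directed and typed (e.g. "isa", "has"), pointing from a base to
    a target, as described in the text."""

    def __init__(self):
        # links[base] -> list of (relation, target) pairs
        self.links = defaultdict(list)

    def add_link(self, base, relation, target):
        self.links[base].append((relation, target))

    def spread_activation(self, source, decay=0.5, threshold=0.1):
        """Propagate activation outward from `source` along directed
        links, attenuating by `decay` at each hop and stopping once
        activation falls below `threshold`."""
        activation = {source: 1.0}
        frontier = [source]
        while frontier:
            node = frontier.pop()
            spread = activation[node] * decay
            if spread < threshold:
                continue
            for _, target in self.links[node]:
                if spread > activation.get(target, 0.0):
                    activation[target] = spread
                    frontier.append(target)
        return activation

net = SemanticNetwork()
net.add_link("canary", "isa", "bird")
net.add_link("bird", "isa", "animal")
net.add_link("bird", "has", "wings")

act = net.spread_activation("canary")
# "bird", one hop away, receives more activation than "animal", two hops away
```

Because links are directed, activation flows from "canary" toward "animal" but not back; concepts closer to the source end up more strongly activated, which is the behaviour spreading-activation accounts use to explain faster retrieval of closely related facts.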




             Notes: Semantic networks see the most use in models of discourse and logical
            comprehension, as well as in Artificial Intelligence. In these models, the nodes correspond
            to words or word stems and the links represent syntactic relations between them.

          13.4.1 Feature Models

          Feature models view semantic categories as being composed of relatively unstructured sets of
          features. The semantic feature-comparison model, proposed by Smith, Shoben, and Rips (1974),
          describes memory as being composed of feature lists for different concepts. According to this
          view, the relations between categories would not be directly retrieved but indirectly
          computed. For example, subjects might verify a sentence by comparing the feature sets that
          represent its subject and predicate concepts. Such computational feature-comparison models
          include the ones proposed by Meyer (1970), Rips (1975), and Smith et al. (1974).
          Early work in perceptual and conceptual categorization assumed that categories had critical
          features and that category membership could be determined by logical rules for the combination
          of features. More recent theories have accepted that categories may have an ill-defined or “fuzzy”
          structure and have proposed probabilistic or global similarity models for the verification of
          category membership.
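A rough sketch of feature-based sentence verification, in the spirit of Smith, Shoben, and Rips (1974), might look like the following. Note that the feature sets, thresholds, and the simple overlap measure are illustrative assumptions, not the model's actual parameters; the real model distinguishes characteristic from defining features in its second stage, which is not implemented here.

```python
def feature_overlap(subject_feats, predicate_feats):
    """Proportion of the predicate's features shared by the subject:
    a crude stand-in for the model's first, global comparison stage."""
    shared = subject_feats & predicate_feats
    return len(shared) / len(predicate_feats)

def verify(subject_feats, predicate_feats, hi=0.75, lo=0.25):
    """Two-stage verification sketch: high overall overlap yields a
    fast "true", low overlap a fast "false"; intermediate overlap
    would trigger a slower second stage comparing defining features
    only (not modelled here)."""
    s = feature_overlap(subject_feats, predicate_feats)
    if s >= hi:
        return "true (fast)"
    if s <= lo:
        return "false (fast)"
    return "second stage needed"

# Hypothetical feature lists for three concepts
robin = {"animate", "feathered", "flies", "small"}
bird = {"animate", "feathered", "flies"}
chair = {"inanimate", "legs", "furniture"}

verify(robin, bird)   # high overlap -> fast "true"
verify(robin, chair)  # low overlap  -> fast "false"
```

The key point the sketch illustrates is the one in the text: the relation between "robin" and "bird" is never stored or retrieved directly; it is computed on demand from the two feature lists.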

          13.4.2 Associative Models

          The “association”—a relationship between two pieces of information—is a fundamental concept
          in psychology, and associations at various levels of mental representation are essential to models
          of memory and cognition in general. The set of associations among a collection of items in
          memory is equivalent to the links between nodes in a network, where each node corresponds to
          a unique item in memory. Indeed, neural networks and semantic networks may be characterized
          as associative models of cognition. However, associations are often more clearly represented as
          an N×N matrix, where N is the number of items in memory. Thus, each cell of the matrix
          corresponds to the strength of the association between the row item and the column item.
          Learning of associations is generally believed to be a Hebbian process; that is, whenever two
          items in memory are simultaneously active, the association between them grows stronger,
          making each item more likely to activate the other.
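The N×N associative matrix and the Hebbian update just described can be sketched directly. The item list, learning rate, and co-occurrence counts below are invented for illustration; the only claim the sketch makes is the structural one from the text: cell (i, j) holds the strength of the association between item i and item j, and simultaneous activity strengthens it.

```python
import numpy as np

# N items in memory and an N x N matrix of association strengths
items = ["doctor", "nurse", "bread", "butter"]
N = len(items)
idx = {w: i for i, w in enumerate(items)}
A = np.zeros((N, N))  # A[i, j] = strength of association i -> j

def hebbian_update(active, rate=0.1):
    """Hebbian rule: whenever two items are simultaneously active,
    strengthen the association between them (in both directions)."""
    for a in active:
        for b in active:
            if a != b:
                A[idx[a], idx[b]] += rate

# Suppose "doctor" and "nurse" co-occur more often than
# "doctor" and "bread"
for _ in range(5):
    hebbian_update(["doctor", "nurse"])
hebbian_update(["doctor", "bread"])

A[idx["doctor"], idx["nurse"]]  # stronger association (0.5)
A[idx["doctor"], idx["bread"]]  # weaker association (0.1)
```

After learning, frequently co-active pairs hold larger matrix entries, so activating "doctor" is more likely to activate "nurse" than "bread", which is the behaviour the Hebbian account predicts.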



                                           LOVELY PROFESSIONAL UNIVERSITY                                   243