Page 166 - DCAP405_SOFTWARE_ENGINEERING

Unit 9: System Engineering




          particular we need a positive theory of the environment, that is, some kind of principled
          characterization of those structures or dynamics or other attributes of the environment in virtue
          of which adaptive behavior is adaptive.
          Herbert Simon discussed the issue in his pre-AI work. His book Administrative Behavior, for
          example, presents the influential theory that later became known as bounded rationality. In
          contrast to the assumption of rational choice in classical economics, Simon describes a range of
          cognitive limitations that make fully rational decision-making in organizations impracticable.
          Yet organizations thrive anyway, he argues, because they provide each individual with a
          structured environment that ensures that their decisions are good enough. The division of labor,
          for example, compensates for the individual’s limited ability to master a range of tasks. Structured
          flows of information, likewise, compensate for the individual’s limited ability to seek this
          information out and judge its relevance. Hierarchy compensates for the individual’s limited
          capacity to choose goals. And fixed procedures compensate for individuals’ limited capacity to
          construct procedures for themselves.
          In comparison to Simon’s early theory in Administrative Behavior, AI has downplayed the
          distinction between agent and environment. In Newell and Simon’s early work on problem
          solving, the environment is reduced to the discrete series of choices that it presents in the course
          of solving a given problem. The phrase “task environment” came to refer to the formal structure
          of the search space of choices and outcomes. This is clearly a good way of modeling tasks such as
          logical theorem-proving and chess, in which the objects being manipulated are purely formal.
          For tasks that involve activities in the physical world, however, the picture is more complex. In
          such cases, the problem-solving model analyzes the world in a distinctive way. The theory
          does not treat the world and the agent as separate constructs. Instead, the world shows up, so to
          speak, phenomenologically: in terms of the differences that make a difference for this agent, given
          its particular representations, actions, and goals. Agents with different perceptual capabilities
          and action repertoires, for example, will inhabit different task environments, even though their
          physical surroundings and goals might be identical.
          Newell and Simon’s theory of the task environment, then, tends to blur the difference between
          agent and environment. As a framework for analysis, we find the phenomenological approach
          valuable, and we wish to adapt it to our own purposes. Unfortunately, Newell and Simon carry
          this blurring into their theory of cognitive architecture. They are often unclear whether problem
          solving is an activity that takes place wholly within the mind, or whether it unfolds through the
          agent’s potentially complicated interactions with the physical world. This distinction does not
          arise in cases such as theorem-proving and chess, or in any other domain whose workings are
          easily simulated through mental reasoning. But it is crucial in any domain whose actions have
          uncertain outcomes. Even though we wish to retain Newell and Simon’s phenomenological
          approach to task analysis, therefore, we do not wish to presuppose that our agents reason by
          conducting searches in problem spaces. Instead, we wish to develop an analytical framework
          that can guide the design of a wide range of agent architectures. In particular, we want an
          analytical framework that will help us design the simplest possible architecture for any given
          task.

          Continuous and Discrete Systems

          In models for discrete event dynamic systems (i.e., DEDS models), state changes occur at particular
          points in time whose values are not known a priori. As a direct consequence, (simulated) time
          advances in discrete ‘jumps’ of unequal length.
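          The jumping clock of a DEDS model can be illustrated with a minimal event-queue sketch. This is not a reference to any particular simulation package; the function name `simulate` and the sample event labels are hypothetical, chosen only to show how the clock leaps directly from one event time to the next rather than advancing in fixed increments.

          ```python
          import heapq

          def simulate(events):
              """Minimal discrete-event simulation sketch.

              `events` is a list of (time, label) pairs. The simulation clock
              jumps directly to each event's timestamp, so successive jumps
              have unequal lengths -- the defining trait of a DEDS model.
              """
              queue = list(events)
              heapq.heapify(queue)          # process events in time order
              trace = []
              clock = 0.0
              while queue:
                  event_time, label = heapq.heappop(queue)
                  jump = event_time - clock  # length of this jump, not known a priori
                  clock = event_time         # clock leaps to the event, no intermediate steps
                  trace.append((clock, jump, label))
              return trace

          # Event times are irregular, so the jumps differ in length.
          for clock, jump, label in simulate(
              [(2.5, "arrival"), (3.1, "departure"), (7.2, "arrival")]
          ):
              print(f"t={clock:.1f}  (jump {jump:.1f})  {label}")
          ```

          A CTDS model, by contrast, would step the state forward over every instant of the interval (typically by numerically integrating differential equations) rather than skipping the quiet stretches between events.
          
          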
          In contrast, with models that emerge from the domain of continuous time dynamic systems (i.e.,
          CTDS models), state changes occur continuously (at least in principle) as time advances in a





                                           LOVELY PROFESSIONAL UNIVERSITY                                   159