Page 223 - DCAP601_SIMULATION_AND_MODELING

Unit 12: Design and Evaluation of Simulation Experiments (II)



            observations need not be IID. Indeed, they need not be either independent or identically
            distributed. Unlike the case of independent replications, we now face the problems of bias and
            dependence among the random variables.
            Fortunately, there are generalizations of the classical IID framework that enable us to  estimate
            the bias and the mean squared error as a function  of the  sample size  in terms  of only  two
            fundamental parameters: the asymptotic bias and the asymptotic variance; see Whitt (1992) and
            references therein. That theory tells us that, under regularity conditions, both the  bias and the
            MSE are of order $1/n$.
            Within a single run, the stochastic processes tend to become stationary as time evolves. Indeed,
            now we assume that $X_n \Rightarrow X(\infty)$ as $n \to \infty$ (in the discrete-time case) and
            $X(t) \Rightarrow X(\infty)$ as $t \to \infty$ (in the continuous-time case). The stochastic
            processes fail to be stationary throughout all time
            primarily because it is necessary (or at least more convenient) to start the simulation in a special
            initial state. We thus can reduce the bias by choosing a good initial state or by deleting (not
            collecting statistics over) an initial portion of the simulation run. Choosing an appropriate initial
            state can be difficult if the stochastic process of interest is not nearly Markov. For example, even for
            the relatively simple $M/G/s/\infty$ queueing model, with $s$ servers and non-exponential service times,
            it is necessary to specify the remaining service time of all customers initially in service.
            The asymptotic bias helps us to determine if it is necessary to choose a special initial state  or
            delete an initial portion of the run. The asymptotic bias also helps us estimate the final  bias,
            whether or not we choose a special initial state or delete an initial portion of the run. It also helps
            us determine what proportion of the full run should be deleted if we follow that procedure.
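            As a sketch of initial-portion deletion in practice, the following Python fragment simulates
            a hypothetical M/M/1 queue (an illustration chosen here, not the text's model) via the Lindley
            waiting-time recursion, started in the empty state, and estimates the steady-state mean waiting
            time with and without discarding a warm-up fraction of the run:

```python
import random

def waiting_times(n, lam=0.9, mu=1.0, seed=1):
    """Lindley recursion for an M/M/1 queue started empty (W_0 = 0).

    The empty initial state biases the early observations downward,
    which is exactly the initialization bias discussed above.
    """
    rng = random.Random(seed)
    w, out = 0.0, []
    for _ in range(n):
        # next wait = previous wait + service time - interarrival time, floored at 0
        w = max(0.0, w + rng.expovariate(mu) - rng.expovariate(lam))
        out.append(w)
    return out

def deleted_mean(xs, delete_fraction=0.1):
    """Sample mean after discarding an initial warm-up portion of the run."""
    k = int(len(xs) * delete_fraction)
    tail = xs[k:]
    return sum(tail) / len(tail)

if __name__ == "__main__":
    data = waiting_times(200_000)
    # for lam=0.9, mu=1.0 the steady-state mean wait is lam / (mu * (mu - lam)) = 9.0
    print("raw mean:    ", sum(data) / len(data))
    print("deleted mean:", deleted_mean(data))
```

            The deletion fraction (10% here) is exactly the "proportion of the full run" that the
            asymptotic bias can help choose.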

            Under regularity conditions, there is a parameter $\bar{\beta}$ called the asymptotic bias such that

            $$\bar{\beta} = \lim_{n \to \infty} n \beta_n; \qquad (2)$$

            see Whitt (1992) and references therein. Given the definition of the bias $\beta_n$, we see that the
            asymptotic bias must be

            $$\bar{\beta} = \sum_{i=1}^{\infty} \bigl( E[X_i] - E[X(\infty)] \bigr);$$
            the regularity conditions ensure that the sum is absolutely convergent. We thus approximate
            the bias of $\bar{X}_n$ for any sufficiently large $n$ by

            $$\beta_n \approx \frac{\bar{\beta}}{n}.$$

            This approximation reduces the unknowns to be estimated from the function $\{\beta_n : n \ge 1\}$
            to the single parameter $\bar{\beta}$. Corresponding formulas hold in continuous time.
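            As a concrete check of this approximation, consider a hypothetical mean-reverting AR(1)
            sequence $X_i = \mu + \rho(X_{i-1} - \mu) + \epsilon_i$ started at a fixed $x_0$ (an example
            chosen here, not from the text), for which $E[X_i] - \mu = \rho^i (x_0 - \mu)$. Both $\beta_n$
            and $\bar{\beta}$ are then geometric sums that can be evaluated exactly:

```python
def bias_n(n, rho, x0, mu=0.0):
    """Exact bias of the sample mean for the AR(1) example:
    beta_n = (1/n) * sum_{i=1..n} rho**i * (x0 - mu)."""
    return sum(rho**i * (x0 - mu) for i in range(1, n + 1)) / n

def asymptotic_bias(rho, x0, mu=0.0):
    """Asymptotic bias: the full geometric series
    sum_{i>=1} rho**i * (x0 - mu) = rho * (x0 - mu) / (1 - rho)."""
    return rho * (x0 - mu) / (1.0 - rho)

if __name__ == "__main__":
    rho, x0 = 0.8, 5.0
    bbar = asymptotic_bias(rho, x0)   # = 0.8 * 5 / 0.2 = 20.0
    for n in (10, 100, 1000):
        # exact bias vs. the bbar / n approximation
        print(n, bias_n(n, rho, x0), bbar / n)
```

            For $\rho = 0.8$ and $x_0 = 5$ the exact bias is $\beta_n = 20(1 - 0.8^n)/n$, so already at
            $n = 100$ it agrees with $\bar{\beta}/n = 0.2$ to high accuracy.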

            Given that we can ignore the bias, either because it is negligible or because it has been largely
            removed by choosing a good initial state or by deleting an initial portion of the run, we can use
            the asymptotic variance to estimate the width of confidence intervals and thus the required run
            length to yield desired statistical precision. Under regularity conditions, there is a parameter   2
            called the asymptotic variance such that

                                    2
                                            2
                                      limn ,                                       (3)
                                       n  n
            where (under the assumption that $\{X_n : n \ge 1\}$ is a stationary process)

            $$\bar{\sigma}^2 = \mathrm{Var}(X_1) + 2 \sum_{i=1}^{\infty} \mathrm{Cov}(X_1, X_{1+i}),$$
                                             LOVELY PROFESSIONAL UNIVERSITY                                  217