Page 225 - DCAP601_SIMULATION_AND_MODELING

Unit 12: Design and Evaluation of Simulation Experiments (II)




            variance    is substituted for the ordinary variance   ; e.g., the required simulation run length  Notes
                                                       2
                     2
            with a relative-width criterion is
                               4 2 2                2
                                  z
                                             k
                                            
                       n r ( , )     /2  andn r (10 ,0.05)  16 (10)  2k            (4)
                                                        
                          
                           
                                 2 2
                                                    2
            From (1) and (4), we immediately see that the required  run length is approximately   2  /  2
            times greater when sampling from one run than with independent sampling (assuming that we
            could directly observe independent samples from the steady-state distribution, which of course
            is typically not possible).
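As a quick numeric sanity check on formula (4) (this sketch is not from the text; the values of \(\mu\) and \(\bar{\sigma}^2\) below are hypothetical placeholders), a few lines of Python evaluate the required run length and show why the constant is approximately 16 when \(\alpha = 0.05\), since \(4 z_{0.025}^2 = 4 \times 1.96^2 \approx 15.4\):

```python
from statistics import NormalDist

def required_run_length(eps, alpha, mu, sigma_bar_sq):
    """Required run length n_r(eps, alpha) from formula (4):
    4 * sigma_bar^2 * z_{alpha/2}^2 / (mu^2 * eps^2)."""
    z = NormalDist().inv_cdf(1 - alpha / 2)  # z_{alpha/2}; about 1.96 for alpha = 0.05
    return 4.0 * sigma_bar_sq * z**2 / (mu**2 * eps**2)

# With hypothetical parameters mu = 1 and sigma_bar^2 = 1, the run length for
# eps = 10^-k, alpha = 0.05 is roughly 16 * 10^(2k), matching (4).
for k in (1, 2):
    print(k, required_run_length(10.0**-k, 0.05, mu=1.0, sigma_bar_sq=1.0))
```

Note how the run length scales linearly in \(\bar{\sigma}^2\) and quadratically in \(1/\epsilon\), which is what makes high relative precision expensive.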
            As with independent replications, established simulation methodology and statistical theory
            tell how to estimate the unknown quantities \(\mu\), \(\sigma^2\) and \(\bar{\sigma}^2\) from data; e.g., see Bratley et al. (1987)
            and Fishman (2001). Instead, we apply additional information about the model to obtain rough
            preliminary estimates for these parameters without data. For \(\bar{\sigma}^2\), the representation of the
            asymptotic variance in terms of the autocovariances is usually too complicated to be of much
            help, but fortunately there is another approach, which we will describe in Section 12.3.2.

            12.3.2 The Asymptotic Parameters for a Function of a Markov Chain

            From the previous section, it should be apparent that we  can do  the intended  preliminary
            planning if we can estimate the asymptotic bias and the asymptotic variance. We now start  to
            describe how we can calculate these important parameters. We first consider functions of  a
            Markov chain. That illustrates available general results. However, fast  back-of-the-envelope
            calculations usually depend on diffusion approximations, based on stochastic-process limits,
            after doing appropriate scaling. Indeed, the scaling is usually the key part, and that is so simple
            that back-of-the-envelope calculations are actually possible.
            In this section, drawing on Whitt (1992), which itself is primarily a survey of known  results
            (including Glynn (1984) and Grassman (1987a,b) among others), we observe that (again  under
            regularity conditions) we can calculate the asymptotic bias and the asymptotic variance whenever
            the stochastic process of interest is a function of a (positive-recurrent irreducible) Markov chain,
            i.e., when \(X_n = f(Y_n)\) for \(n \ge 1\), where \(f\) is a real-valued function and \(\{Y_n : n \ge 1\}\) is a Markov chain,
            or when \(X(t) = f(Y(t))\) for \(t \ge 0\), where again \(f\) is a real-valued function and \(\{Y(t) : t \ge 0\}\) is a
            Markov chain. As noted before, we usually obtain the required Markov structure by
            approximating the given stochastic process by a related one with the Markov property.
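To make the setup \(X_n = f(Y_n)\) concrete, here is a minimal sketch (not from the text; the three-state transition matrix `P` and function `f` are made up for illustration) that simulates the chain and forms the sample mean of \(f(Y_n)\):

```python
import random

# Hypothetical 3-state Markov chain (transition matrix P) and real-valued f.
P = [[0.5, 0.5, 0.0],
     [0.25, 0.5, 0.25],
     [0.0, 0.5, 0.5]]
f = [0.0, 1.0, 4.0]

def sample_mean(n, seed=0, start=0):
    """Simulate Y_1, ..., Y_n and return the sample mean of X_k = f(Y_k)."""
    rng = random.Random(seed)
    y, total = start, 0.0
    for _ in range(n):
        y = rng.choices(range(3), weights=P[y])[0]  # one Markov transition
        total += f[y]
    return total / n
```

For this particular chain, detailed balance gives stationary distribution \(\pi = (1/4, 1/2, 1/4)\), so the steady-state mean is \(0.25 \cdot 0 + 0.5 \cdot 1 + 0.25 \cdot 4 = 1.5\); the sample mean approaches this as \(n\) grows, at a rate governed by the asymptotic variance discussed in this section.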
            In fact, as in Whitt (1992), we only discuss the case in which the underlying Markov chain  has a
            finite state space (by which we mean a finite set \(\{0, 1, \ldots, m\}\), not an interval \([c, d]\)), but the theory
            extends to more general state spaces under regularity conditions. For illustrations, see Glynn
            (1994) and Glynn and Meyn (1996). But the finite-state-space condition is very useful. Under the
            finite-state-space condition, we can compute the asymptotic parameters numerically,  without
            relying on special model structure. However, when  we do  have special  structure, we  can
            sometimes go further to obtain relatively simple closed-form formulas. We also obtain relatively
            simple closed-form formulas when we establish diffusion-process approximations via stochastic-
            process limits.
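To illustrate the numerical route in the finite-state-space case, here is a minimal sketch (the 3-state generator `Q` and function `f` are made-up examples, and the formula assumed is the standard Poisson-equation representation for CTMCs): solve \(\pi Q = 0\) for the stationary vector, solve the Poisson equation \(Q x = -(f - \mu \mathbf{1})\) with \(\pi \cdot x = 0\), and obtain the asymptotic variance as \(\bar{\sigma}^2 = 2 \sum_i \pi_i (f_i - \mu) x_i\):

```python
import numpy as np

# Hypothetical generator of an irreducible 3-state CTMC (each row sums to 0).
Q = np.array([[-1.0, 1.0, 0.0],
              [2.0, -3.0, 1.0],
              [0.0, 2.0, -2.0]])
f = np.array([0.0, 1.0, 2.0])

# Stationary distribution: pi Q = 0 together with sum(pi) = 1.
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi = np.linalg.lstsq(A, b, rcond=None)[0]

mu = pi @ f          # steady-state mean
fc = f - mu          # centered function

# Poisson equation Q x = -fc; the extra row pi @ x = 0 pins down the solution.
B = np.vstack([Q, pi])
c = np.append(-fc, 0.0)
x = np.linalg.lstsq(B, c, rcond=None)[0]

sigma_bar_sq = 2.0 * pi @ (fc * x)   # asymptotic variance
print(pi, mu, sigma_bar_sq)
```

The two least-squares solves are exact here because both stacked systems are consistent with full column rank; for this birth-death example \(\pi = (4/7, 2/7, 1/7)\), so \(\mu = 4/7\).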

            Continuous-time Markov Chains

            We will discuss the case of a Continuous-time Markov Chain (CTMC); similar results hold for
            discrete-time Markov chains. Suppose that the CTMC \(\{Y(t) : t \ge 0\}\) is irreducible with finite state
            space \(\{0, 1, \ldots, m\}\) (which implies that it is positive recurrent). Our sample-mean estimator is




                                             LOVELY PROFESSIONAL UNIVERSITY                                  219