Page 97 - DCAP601_SIMULATION_AND_MODELING

Unit 6: Discrete System Simulation (III)



            Other Methods Employing Monte Carlo


            1.   Assorted random models, e.g. self-organised criticality
            2.   Direct simulation Monte Carlo
            3.   Dynamic Monte Carlo method
            4.   Kinetic Monte Carlo
            5.   Quantum Monte Carlo

            6.   Quasi-Monte Carlo method using low-discrepancy sequences and self-avoiding walks
            7.   Semiconductor charge transport and the like
            8.   Electron microscopy beam-sample interactions

            9.   Stochastic optimization
            10.  Cellular Potts model
            11.  Markov chain Monte Carlo
            12.  Cross-entropy method
            13.  Applied information economics

            14.  Monte Carlo localization

            Monte Carlo as a Procedure

            Monte Carlo is an estimation procedure. The basic idea is as follows. You want to know the
            average value of some random variable. You can’t work out its distribution exactly, or you
            don’t want to do the integrals numerically, but you can take samples from that distribution.
            (The random variable may, for instance, be some complicated function of variables with simple
            distributions, or its distribution may have a hard-to-compute normalizing factor [“partition
            function” in statistical mechanics].) To estimate the average, you simply take samples
            independently and average them. If you take enough samples, the law of large numbers says
            your average must be close to the true value. The central limit theorem says that your average
            has an approximately Gaussian distribution around the true value.
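A minimal sketch of this procedure in Python (the helper name and the example random variable are illustrative, not from the text): we average independent samples of X = exp(sin U), with U uniform on [0, 2π], whose mean has no elementary closed form but which is trivial to sample.

```python
import math
import random

def monte_carlo_mean(sample_fn, n):
    """Estimate E[X] by averaging n independent samples of X."""
    total = 0.0
    for _ in range(n):
        total += sample_fn()
    return total / n

# Illustrative random variable: X = exp(sin(U)), U uniform on [0, 2*pi].
# Its distribution is awkward to write down, but sampling it is easy.
rng = random.Random(0)
estimate = monte_carlo_mean(
    lambda: math.exp(math.sin(rng.uniform(0.0, 2.0 * math.pi))),
    100_000,
)
```

By the law of large numbers the estimate converges on the true mean (here about 1.266), and by the central limit theorem its error shrinks roughly like one over the square root of the number of samples.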
            Here’s one of the canonical examples. Say you want to measure the area of a shape with a
            complicated, irregular outline. The Monte Carlo approach is to draw a square around the shape
            and measure the area of the square. Now you throw darts into the square, as uniformly as
            possible. The fraction of darts falling on the shape gives the ratio of the area of the shape to
            the area of the square. In fact, you can cast almost any integral problem, or any averaging
            problem, into this form. So you need a good way to tell whether you’re inside the outline, and
            you need a good way to figure out how many darts you should throw. Last but not least, you
            need a good way to throw darts uniformly, i.e., a good random number generator. That’s a
            whole separate art I shan’t attempt to describe.
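The dart-throwing example can be sketched directly (the function name and the choice of shape are assumptions for illustration): here the "shape" is the unit disc inside a 2-by-2 square, so the true area is π and the estimate should land close to 3.14159.

```python
import random

def estimate_area(inside, square_side, n, rng):
    """Throw n uniform 'darts' into a square_side x square_side square
    centred at the origin; the fraction landing on the shape, times the
    square's area, estimates the shape's area."""
    hits = 0
    half = square_side / 2.0
    for _ in range(n):
        x = rng.uniform(-half, half)
        y = rng.uniform(-half, half)
        if inside(x, y):
            hits += 1
    return (hits / n) * square_side ** 2

rng = random.Random(42)
# Shape: the unit disc x^2 + y^2 <= 1, whose true area is pi.
area = estimate_area(lambda x, y: x * x + y * y <= 1.0, 2.0, 200_000, rng)
```

The `inside` test is exactly the "good way to tell if you're inside the outline" the text asks for, and the quality of `rng` is the "separate art" of random number generation.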
            Now, in fact, you don’t strictly need to sample independently. You can have dependence, so long
            as, in the long run, you visit each point just as often as you would with independent samples.
            This is useful, since it gives you a way to exploit the properties of Markov chains in designing
            your sampling strategy, and even to speed up the convergence of your estimates to the true
            averages. (The classic instance of this is the Metropolis-Hastings algorithm, which gives you a
            way of sampling from a distribution where all you need to know is the ratio of the probability
            densities at any two points. This is extremely useful when, as in many problems in statistics and
            statistical mechanics, the density itself contains a complicated normalizing factor; it drops out of
            the ratio.)
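A minimal random-walk Metropolis sketch (function names, step size, and the target density are illustrative assumptions): the target is the standard normal written only up to its normalizing constant, as log p(x) = -x²/2, so the sampler never needs the partition function — only log-density differences, i.e., ratios of densities.

```python
import math
import random

def metropolis(log_density, x0, step, n, rng):
    """Random-walk Metropolis sampler. Draws a dependent chain whose
    long-run visit frequencies match the target density, which need
    only be known up to a normalizing constant."""
    x = x0
    samples = []
    for _ in range(n):
        proposal = x + rng.gauss(0.0, step)
        # Accept with probability min(1, p(proposal) / p(x)); the
        # normalizing constant cancels in this ratio.
        delta = log_density(proposal) - log_density(x)
        if delta >= 0 or rng.random() < math.exp(delta):
            x = proposal
        samples.append(x)
    return samples

rng = random.Random(1)
# Unnormalized standard normal: log p(x) = -x^2 / 2, normalizer omitted.
chain = metropolis(lambda x: -0.5 * x * x, 0.0, 1.0, 50_000, rng)
mean = sum(chain) / len(chain)
var = sum((x - mean) ** 2 for x in chain) / len(chain)
```

The samples are dependent (each is a perturbation of the last), yet the chain's empirical mean and variance converge on those of the target, just as the text describes.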


                                             LOVELY PROFESSIONAL UNIVERSITY                                   91