


Optimization


Another powerful and very popular application of random numbers in numerical simulation is numerical optimization. These problems involve a function of a vector, often of high dimension, that is to be minimized (or maximized). Many problems can be phrased in this way: for example, a computer chess program could be seen as trying to find the optimal set of, say, 10 moves that yields the best value of the evaluation function at the end. The traveling salesman problem is another classic optimization problem. There are also applications to engineering design, such as multidisciplinary design optimization.

Most Monte Carlo optimization methods are based on random walks. Essentially, the program moves a marker around in multi-dimensional space, tending to step in directions that lower the function value, but sometimes moving against the gradient so that the search does not get stuck in a local minimum.

                                    Optimization Methods

                                    1.   Evolution strategy
                                    2.   Genetic algorithms
                                    3.   Parallel  tempering
                                    4.   Simulated annealing

                                    5.   Stochastic optimization
                                    6.   Stochastic tunneling

                                    Inverse Problems

The probabilistic formulation of inverse problems leads to the definition of a probability distribution in the model space. This probability distribution combines a priori information with new information obtained by measuring some observable parameters (data). Because, in the general case, the theory linking data with model parameters is nonlinear, the a posteriori probability in the model space may not be easy to describe (it may be multimodal, some moments may not be defined, etc.).
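
In the common formulation (the notation below is an assumption, not taken from this text), the a posteriori probability density is written as the product of the a priori density and a likelihood function that measures the fit to the observed data:

\sigma_M(\mathbf{m}) \;=\; k \, \rho_M(\mathbf{m}) \, L(\mathbf{m})

where \rho_M(\mathbf{m}) is the a priori probability density over models \mathbf{m}, L(\mathbf{m}) is the likelihood of the observed data given model \mathbf{m}, and k is a normalization constant.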

When analyzing an inverse problem, obtaining a maximum likelihood model is usually not sufficient, as we normally also wish to have information on the resolution power of the data. In the general case we may have a large number of model parameters, and an inspection of the marginal probability densities of interest may be impractical, or even useless. But it is possible to pseudorandomly generate a large collection of models according to the posterior probability distribution and to analyze and display the models in such a way that information on the relative likelihoods of model properties is conveyed to the viewer. This can be accomplished by means of an efficient Monte Carlo method, even in cases where no explicit formula for the a priori distribution is available.

The best-known importance sampling method, the Metropolis algorithm, can be generalized, and this gives a method that allows analysis of (possibly highly nonlinear) inverse problems with complex a priori information and data with an arbitrary noise distribution.
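
A minimal sketch of such sampling is given below, assuming a toy one-parameter inverse problem with a Gaussian prior and Gaussian data noise. The forward model, the observed datum and all numerical values are hypothetical and chosen only for illustration; they are not taken from the text.

import math
import random

def metropolis_sample(log_posterior, m0, step=0.1, n_samples=10000):
    # Draw a collection of models from a (possibly unnormalized) posterior:
    # propose a random perturbation of the current model and accept it with
    # probability min(1, posterior(proposal) / posterior(current)).
    m = list(m0)
    lp = log_posterior(m)
    samples = []
    for _ in range(n_samples):
        proposal = [mi + random.gauss(0.0, step) for mi in m]
        lp_new = log_posterior(proposal)
        delta = lp_new - lp
        if delta >= 0 or random.random() < math.exp(delta):
            m, lp = proposal, lp_new
        samples.append(list(m))
    return samples

# Hypothetical toy problem: one model parameter m, forward model g(m) = m**2,
# a single observed datum with Gaussian noise, and a Gaussian prior on m.
d_obs, sigma_d = 3.0, 0.5
prior_mean, prior_sigma = 0.0, 2.0

def log_posterior(m):
    misfit = -0.5 * ((m[0] ** 2 - d_obs) / sigma_d) ** 2     # data fit
    prior = -0.5 * ((m[0] - prior_mean) / prior_sigma) ** 2  # a priori information
    return misfit + prior

samples = metropolis_sample(log_posterior, m0=[1.0])
# Summarize the sampled collection, e.g. by the posterior mean of the parameter.
print("posterior mean of m:", sum(s[0] for s in samples) / len(samples))
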

                                    Computational Mathematics

                                    Monte Carlo methods are useful in many areas of computational mathematics, where a lucky
                                    choice can find the correct result.




