Again, since $x_1, x_2, \ldots, x_n$ is a random sample and $\mu$ is the population mean, we have $E(x_i - \mu)^2 = \sigma^2$.

Therefore,

$$E(\sigma_0^2) = \sum_{i=1}^{n} \frac{E(x_i - \mu)^2}{n} = \frac{\sum \sigma^2}{n} = \sigma^2$$

Thus, $\sigma_0^2$ is an unbiased estimator of $\sigma^2$.
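This unbiasedness is easy to check by simulation. The sketch below is an added illustration, not part of the text; it assumes Python with NumPy and arbitrarily chosen values of $\mu$, $\sigma^2$ and $n$, and averages $\sigma_0^2 = \sum (x_i - \mu)^2 / n$ over many samples drawn with $\mu$ known.

```python
# Simulation check (illustrative sketch): with mu known,
# sigma0^2 = sum((x_i - mu)^2) / n averages out to sigma^2.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma2, n, reps = 5.0, 4.0, 10, 100_000   # assumed values, not from the text

samples = rng.normal(mu, np.sqrt(sigma2), size=(reps, n))
sigma0_sq = ((samples - mu) ** 2).mean(axis=1)   # divide by n, using the true mu

print(sigma0_sq.mean())   # close to sigma^2 = 4, as the argument above predicts
```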
Example 3: Find the m.l.e. of the parameters $\mu$ and $\sigma^2$ in random samples from a $N(\mu, \sigma^2)$ population, when both the parameters are unknown.
                                  Solution: As in the preceding example,

$$\log L = -\frac{n}{2}\log 2\pi - \frac{n}{2}\log \sigma^2 - \frac{\sum (x_i - \mu)^2}{2\sigma^2}$$

$$\therefore \qquad \left[\frac{\partial \log L}{\partial \mu}\right]_{\mu = \mu_0} = \frac{-1}{2\sigma^2}\sum 2(x_i - \mu_0)(-1) = 0$$

This gives $\sum (x_i - \mu_0) = 0$; i.e., $\mu_0 = \bar{x}$, the sample mean. The m.l.e. of the parameter $\mu$ is the sample mean $\bar{x}$. (Note that this estimator is unbiased).

Proceeding as in Example 2 we have $\sigma_0^2 = \dfrac{\sum (x_i - \mu)^2}{n}$. But since the parameter $\mu$ is not known, it is replaced by the m.l.e. $\mu_0 = \bar{x}$. The m.l.e. of $\sigma^2$ is now

$$\sigma_0^2 = \frac{\sum (x_i - \bar{x})^2}{n} = S^2$$

                                  which is the sample variance. (Note that this estimator is biased).
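As a numerical check on Example 3, the sketch below (an added illustration, not part of the text, assuming Python with NumPy and SciPy and a hypothetical simulated sample) maximises the normal log-likelihood numerically and compares the result with the closed forms $\bar{x}$ and $S^2 = \sum (x_i - \bar{x})^2 / n$ derived above.

```python
# Sketch: maximise the normal log-likelihood numerically and compare
# with the closed-form m.l.e.'s x-bar and S^2 (the biased sample variance).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
x = rng.normal(10.0, 2.0, size=50)         # hypothetical sample, not from the text
n = x.size

def neg_log_lik(theta):
    mu, log_sigma2 = theta                 # optimise log(sigma^2) so sigma^2 stays positive
    sigma2 = np.exp(log_sigma2)
    return 0.5 * n * np.log(2 * np.pi * sigma2) + ((x - mu) ** 2).sum() / (2 * sigma2)

res = minimize(neg_log_lik, x0=[0.0, 0.0])
mu_hat, sigma2_hat = res.x[0], np.exp(res.x[1])

print(mu_hat, x.mean())                           # numerical m.l.e. of mu  ~  sample mean
print(sigma2_hat, ((x - x.mean()) ** 2).mean())   # numerical m.l.e. of sigma^2  ~  S^2
```

Repeating this over many simulated samples, the average of $S^2$ settles near $\frac{n-1}{n}\sigma^2$ rather than $\sigma^2$, which is the bias noted above.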
Example 4: A tossed a biased coin 50 times and got heads 20 times, while B tossed it 90 times and got 40 heads. Find the maximum likelihood estimate of the probability of getting a head when the coin is tossed.
Solution: Let P be the unknown probability of obtaining a head. Using the binomial distribution,
Probability of 20 heads in 50 tosses $= {}^{50}C_{20}\, P^{20} (1 - P)^{30}$


Probability of 40 heads in 90 tosses $= {}^{90}C_{40}\, P^{40} (1 - P)^{50}$
                                  The likelihood function is given by the product of these probabilities:

$$L = {}^{50}C_{20} \cdot {}^{90}C_{40}\, P^{60} (1 - P)^{80}$$

$$\therefore \qquad \log L = \log\left({}^{50}C_{20} \cdot {}^{90}C_{40}\right) + 60 \log P + 80 \log (1 - P)$$

Hence,

$$\frac{\partial \log L}{\partial P} = \frac{60}{P} - \frac{80}{1 - P}$$
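Since the pooled data contain 60 heads and 80 tails, $\log L$ is, apart from a constant, $60 \log P + 80 \log(1 - P)$. A short numerical sketch (an added illustration, assuming Python with NumPy and SciPy) locates its maximum, which agrees with the value obtained by setting the derivative above equal to zero.

```python
# Sketch: maximise 60 log P + 80 log(1 - P) over 0 < P < 1 numerically.
import numpy as np
from scipy.optimize import minimize_scalar

def neg_log_lik(p):
    return -(60 * np.log(p) + 80 * np.log(1 - p))

res = minimize_scalar(neg_log_lik, bounds=(1e-6, 1 - 1e-6), method="bounded")
print(res.x, 60 / 140)   # both approximately 0.4286, i.e. 3/7
```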
The maximum likelihood estimate $P_0$ is therefore obtained by solving


