Page 392 - DMTH404_STATISTICS




29.1 Theory of Estimation


Let X be a random variable with probability density function (or probability mass function) f(X; θ₁, θ₂, ..., θₖ), where θ₁, θ₂, ..., θₖ are k parameters of the population. Given a random sample X₁, X₂, ..., Xₙ from this population, we may be interested in estimating one or more of the k parameters θ₁, θ₂, ..., θₖ. To be specific, let X be a normal variate, so that its probability density function can be written as N(X : μ, σ). We may be interested in estimating μ or σ, or both, on the basis of a random sample obtained from this population.
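The setup above can be sketched numerically. In the following sketch (the population parameters μ = 10 and σ = 2 are assumptions chosen for the demonstration), we draw a random sample from a normal population and estimate both parameters from the sample alone, using only Python's standard library:

```python
import random
import statistics

# Assumed "true" population parameters, known only to the simulation.
random.seed(42)
mu, sigma = 10.0, 2.0

# A random sample X1, X2, ..., Xn from the N(mu, sigma) population.
sample = [random.gauss(mu, sigma) for _ in range(1000)]

x_bar = statistics.fmean(sample)   # estimate of mu (sample mean)
s = statistics.stdev(sample)       # estimate of sigma (n - 1 divisor)

print(f"estimate of mu:    {x_bar:.3f}")
print(f"estimate of sigma: {s:.3f}")
```

With a sample of 1,000 observations, both estimates come out close to the true values, though neither equals them exactly; that gap is what the theory of estimation studies.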
It should be noted here that there can be several estimators of a parameter; e.g., we can use any of the sample mean, median, mode, geometric mean, harmonic mean, etc., as an estimator of the population mean μ. Similarly, we can use either S = √[(1/n) Σ(Xᵢ − X̄)²] or s = √[(1/(n − 1)) Σ(Xᵢ − X̄)²] as an estimator of the population standard deviation σ. This method of estimation, where a single statistic like the mean, median, standard deviation, etc. is used as an estimator of a population parameter, is known as Point Estimation. Contrary to this, it is possible to estimate an interval in which the value of the parameter is expected to lie. Such a procedure is known as Interval Estimation, and the estimated interval is often termed a Confidence Interval.
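The contrast between the two approaches can be sketched in a few lines. In this illustration (the simulated population with mean 50 and standard deviation 5 is an assumption for the demo), the sample mean serves as the point estimate, and a large-sample 95% confidence interval is built around it using the normal critical value 1.96:

```python
import math
import random
import statistics

# Simulated sample from an assumed population with mean 50, sd 5.
random.seed(1)
sample = [random.gauss(50, 5) for _ in range(100)]
n = len(sample)

# Point estimation: a single statistic is the estimate.
x_bar = statistics.fmean(sample)

# Interval estimation: a 95% confidence interval for the mean,
# using the large-sample normal approximation x_bar +/- 1.96 * SE.
se = statistics.stdev(sample) / math.sqrt(n)
lower, upper = x_bar - 1.96 * se, x_bar + 1.96 * se

print(f"point estimate:          {x_bar:.2f}")
print(f"95% confidence interval: ({lower:.2f}, {upper:.2f})")
```

The point estimate is a single number; the confidence interval conveys, in addition, how much sampling uncertainty surrounds that number.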

                                    29.2 Point Estimation


As mentioned above, there can be more than one estimator of a population parameter. Therefore, it becomes necessary to determine a good estimator out of the number of available estimators. We may recall that an estimator, being a function of the random variables X₁, X₂, ..., Xₙ, is itself a random variable. Therefore, we can say that a good estimator is one whose distribution is more concentrated around the population parameter. R. A. Fisher has given the following properties of a good estimator:
(i) Unbiasedness  (ii) Consistency  (iii) Efficiency  (iv) Sufficiency.

                                    29.2.1 Unbiasedness


An estimator t(X₁, X₂, ..., Xₙ) is said to be an unbiased estimator of a parameter θ if E(t) = θ. If E(t) ≠ θ, then t is said to be a biased estimator of θ. The magnitude of the bias = E(t) − θ.

We have seen in § 20.2 that E(X̄) = μ; therefore, X̄ is said to be an unbiased estimator of the population mean μ. Further, referring to § 20.4.1, we note that E(S²) = ((n − 1)/n)·σ², where S² = (1/n) Σ(Xᵢ − X̄)². Therefore, S² is a biased estimator of σ². The magnitude of the bias = ((n − 1)/n − 1)·σ² = −σ²/n.

Contrary to this, if we define s² = (1/(n − 1)) Σ(Xᵢ − X̄)², we have seen in § 20.4.1 that E(s²) = σ². Thus, s² is an unbiased estimator of σ². Also, from § 20.3.1 we note that E(p) = π; therefore, p is an unbiased estimator of the population proportion π.
29.2.2 Consistency

It is desirable to have an estimator whose probability distribution comes closer and closer to the population parameter as the sample size is increased. An estimator possessing this property


            384                              LOVELY PROFESSIONAL UNIVERSITY