Bias, Mean Squared Error and Variance
By assuming that the limits exist, we are assuming that we would obtain the exact answer if we devoted unlimited computational effort to the simulation experiment. In statistical language, e.g., see Lehmann and Casella (1998), we are assuming that the estimators $\bar{X}_n$ and $\bar{X}_t$ are consistent estimators of the quantity to be estimated, $\mu$. For finite sample size, we can describe the statistical precision by looking at the bias and the mean squared error. The bias, which we denote by $\beta_n$ in the discrete-time case and $\beta_t$ in the continuous-time case, indicates how much the expected value of the estimator differs from the quantity being estimated, and in what direction. For example, in the discrete-time case, the bias of $\bar{X}_n$ is

$$\beta_n \equiv E[\bar{X}_n] - \mu.$$
The mean-squared error ($MSE_n$ or $MSE_t$) is the expected squared error, e.g.,

$$MSE_n \equiv E\big[(\bar{X}_n - \mu)^2\big].$$

If there is no bias, then the MSE coincides with the variance of $\bar{X}_n$, which we denote by $\sigma_n^2$, i.e.,

$$\sigma_n^2 \equiv Var(\bar{X}_n) = E\big[(\bar{X}_n - E[\bar{X}_n])^2\big].$$
Then we can write

$$Var(\bar{X}_n) = \frac{1}{n^2}\sum_{i=1}^{n}\sum_{j=1}^{n} Cov(X_i, X_j),$$
where $Cov(X_i, X_j)$ is the covariance, i.e.,

$$Cov(X_i, X_j) \equiv E[X_i X_j] - E[X_i]\,E[X_j].$$
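To make the double-sum formula concrete, the following Python sketch (not part of the original text) assumes the observations come from a stationary AR(1) process with $Cov(X_i, X_j) = \sigma^2 \rho^{|i-j|}$; the process and all parameter values are illustrative choices only. It evaluates the double sum directly and compares the result with an estimate obtained from simulated independent replications.

# A small sketch (not from the text) illustrating the double-sum formula for
# Var(X_bar_n) with correlated observations.  We assume an AR(1) process with
# Cov(X_i, X_j) = sigma**2 * rho**abs(i - j); process and parameters are
# illustrative choices, not the book's.
import numpy as np

rho, sigma, n = 0.7, 1.0, 50

# Exact variance of the sample mean from the double covariance sum:
# Var(X_bar_n) = (1/n^2) * sum_i sum_j Cov(X_i, X_j)
idx = np.arange(n)
cov_matrix = sigma**2 * rho**np.abs(idx[:, None] - idx[None, :])
var_exact = cov_matrix.sum() / n**2

# Empirical check: simulate many independent replications of the AR(1) path
# and estimate the variance of the sample mean across replications.
rng = np.random.default_rng(0)
reps = 20_000
x = np.empty((reps, n))
x[:, 0] = rng.normal(0.0, sigma, reps)
for k in range(1, n):
    x[:, k] = rho * x[:, k - 1] + rng.normal(0.0, sigma * np.sqrt(1 - rho**2), reps)
var_empirical = x.mean(axis=1).var()

print(f"double-sum formula : {var_exact:.5f}")
print(f"simulation estimate: {var_empirical:.5f}")

The two numbers should agree closely, and both are noticeably larger than $\sigma^2/n$, which is the point of the formula: positive autocorrelation inflates the variance of the sample mean.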
Analogous formulas hold in continuous time. For example, the variance of the sample mean $\bar{X}_t$ is

$$Var(\bar{X}_t) = \frac{1}{t^2}\int_0^t\!\!\int_0^t Cov\big(X(u), X(v)\big)\,du\,dv.$$
Unfortunately, these general formulas usually are too complicated to be of much help when
doing preliminary planning.
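That said, when the covariance function happens to be known in closed form, the continuous-time formula can at least be evaluated numerically. The sketch below is an illustration, not from the text: it assumes an exponential covariance $Cov(X(u), X(v)) = \sigma^2 e^{-\lambda|u-v|}$ and approximates the double integral with a Riemann sum, checking the result against the closed-form value for that particular covariance.

# A sketch (not from the text) of evaluating the continuous-time formula
# Var(X_bar_t) = (1/t^2) * integral_0^t integral_0^t Cov(X(u), X(v)) du dv.
# We assume an exponential covariance sigma**2 * exp(-lam*|u - v|), an
# Ornstein-Uhlenbeck-type choice made purely for illustration.
import numpy as np

sigma, lam, t = 1.0, 0.5, 100.0

# Midpoint Riemann-sum approximation of the double integral on an m x m grid.
m = 2000
u = (np.arange(m) + 0.5) * (t / m)          # midpoints of the grid cells
cov = sigma**2 * np.exp(-lam * np.abs(u[:, None] - u[None, :]))
var_numeric = cov.sum() * (t / m) ** 2 / t**2

# Closed form for this covariance:
# integral integral e^{-lam|u-v|} du dv = (2/lam) * (t - (1 - e^{-lam t}) / lam)
var_exact = sigma**2 * (2 / lam) * (t - (1 - np.exp(-lam * t)) / lam) / t**2

print(f"numerical double integral: {var_numeric:.6f}")
print(f"closed-form value        : {var_exact:.6f}")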
The Classical Case: Independent Replications
In statistics, the classical case arises when we have a discrete-time stochastic process $\{X_n : n \ge 1\}$, where the random variables $X_n$ are mutually Independent and Identically Distributed (IID) with mean $\mu$ and finite variance $\sigma^2$, and we use the sample mean $\bar{X}_n$ to estimate the mean $\mu$. Clearly, the classical case arises whenever we use independent replications to do estimation, which of course is the great appeal of independent replications.
In the classical case, the sample mean $\bar{X}_n$ is a consistent estimator of the mean $\mu$ by the Law of Large Numbers (LLN). Then there is no bias and the MSE coincides with the variance of the sample mean, $\sigma_n^2$, which is a simple function of the variance of a single observation $X_n$:

$$\sigma_n^2 \equiv Var(\bar{X}_n) = \frac{\sigma^2}{n}.$$
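A minimal sketch of this classical case follows (the exponential distribution and the parameter values are assumed only for the example): with independent replications, the estimated bias of the sample mean is near zero and its estimated MSE is near $\sigma^2/n$.

# Minimal sketch (assumption: exponential observations with mean mu = 2,
# chosen only for illustration) checking the classical IID case: the sample
# mean is unbiased and its variance equals sigma**2 / n.
import numpy as np

rng = np.random.default_rng(1)
mu, n, reps = 2.0, 40, 100_000

# Each row is one independent replication of n IID observations.
samples = rng.exponential(mu, size=(reps, n))   # variance of one observation is mu**2
means = samples.mean(axis=1)

bias = means.mean() - mu                 # should be close to 0
mse = np.mean((means - mu) ** 2)         # should be close to sigma**2 / n
print(f"estimated bias        : {bias:+.4f}")
print(f"estimated MSE         : {mse:.4f}")
print(f"theoretical sigma^2/n : {mu**2 / n:.4f}")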