4.3.4 Bias

The bias of an estimator H is the expected value of the estimator less the value θ being estimated:

[4.6]    $\text{bias}(H) = E(H) - \theta$

If an estimator has zero bias, we say it is unbiased; otherwise, it is biased. Let’s calculate the bias of the sample mean estimator [4.4]:

[4.7]    $\text{bias}(H) = E\!\left(\frac{1}{m}\sum_{i=1}^{m} X_i\right) - \mu$

[4.8]    $= \frac{1}{m}\sum_{i=1}^{m} E(X_i) - \mu$

[4.9]    $= \frac{1}{m}(m\mu) - \mu$

[4.10]   $= \mu - \mu$

[4.11]   $= 0$

where μ is the mean E(X) being estimated. The sample mean estimator is unbiased.
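The derivation can also be checked empirically. Below is a minimal sketch, assuming NumPy, that takes the estimator H to be the sample mean of m independent draws; the values of μ, σ, m, and the trial count are illustrative. Averaging the estimator across many simulated samples approximates E(H), so E(H) − μ approximates the bias:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, m, trials = 5.0, 2.0, 50, 100_000  # illustrative parameters

# Draw `trials` independent samples of size m; compute each sample mean H.
sample_means = rng.normal(mu, sigma, size=(trials, m)).mean(axis=1)

# Approximate bias(H) = E(H) - mu; it should be very close to zero.
estimated_bias = sample_means.mean() - mu
print(f"estimated bias: {estimated_bias:+.4f}")
```

The small residual reflects only simulation noise, which shrinks as the number of trials grows.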

4.3.5 Standard error

The standard error of an estimator is its standard deviation:

[4.12]    $\text{se}(H) = \text{std}(H)$

Let’s calculate the standard error of the sample mean estimator [4.4]. Because the sample values $X_i$ are independent, the variance of their sum equals the sum of their variances:

[4.13]    $\text{se}(H) = \text{std}\!\left(\frac{1}{m}\sum_{i=1}^{m} X_i\right)$

[4.14]    $= \sqrt{\text{var}\!\left(\frac{1}{m}\sum_{i=1}^{m} X_i\right)}$

[4.15]    $= \sqrt{\frac{1}{m^2}\sum_{i=1}^{m} \text{var}(X_i)}$

[4.16]    $= \sqrt{\frac{1}{m^2}\, m\sigma^2}$

[4.17]    $= \sqrt{\frac{\sigma^2}{m}}$

[4.18]    $= \frac{\sigma}{\sqrt{m}}$

where σ is the standard deviation std(X) being estimated. We don’t know the standard deviation σ of X, but we can approximate the standard error based upon some estimated value s for σ. Irrespective of the value of σ, the standard error decreases with the square root of the sample size m. Quadrupling the sample size halves the standard error.
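The $1/\sqrt{m}$ scaling is easy to see by simulation. A minimal sketch, assuming NumPy; the sample sizes 25 and 100 and the value of σ are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma, trials = 3.0, 50_000  # illustrative parameters

def empirical_se(m):
    # Standard deviation of the sample mean across many simulated samples.
    means = rng.normal(0.0, sigma, size=(trials, m)).mean(axis=1)
    return means.std()

se_25 = empirical_se(25)    # theory: sigma / sqrt(25)  = 0.6
se_100 = empirical_se(100)  # theory: sigma / sqrt(100) = 0.3
print(f"se(m=25)  = {se_25:.4f}")
print(f"se(m=100) = {se_100:.4f}")
print(f"ratio     = {se_25 / se_100:.2f}")  # quadrupling m halves the se
```

The ratio of the two empirical standard errors is close to 2, matching the claim that quadrupling the sample size halves the standard error.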

4.3.6 Mean squared error

We seek estimators that are unbiased and have minimal standard error. Sometimes these goals are incompatible. Consider Exhibit 4.2, which indicates PDFs for two estimators of a parameter θ. One is unbiased. The other is biased but has a lower standard error. Which estimator should we use?

Exhibit 4.2: PDFs are indicated for two estimators of a parameter θ. One is unbiased. The other is biased but has lower standard error.

Mean squared error (MSE) combines the notions of bias and standard error. It is defined as

[4.19]    $\text{MSE}(H) = E\!\left[(H - \theta)^2\right] = \text{bias}(H)^2 + \text{se}(H)^2$

Since we have already determined the bias and standard error of estimator [4.4], calculating its mean squared error is easy:

[4.20]    $\text{MSE}(H) = \text{bias}(H)^2 + \text{se}(H)^2$

[4.21]    $= 0^2 + \left(\frac{\sigma}{\sqrt{m}}\right)^2$

[4.22]    $= \frac{\sigma^2}{m}$

Faced with alternative estimators for a given parameter, it is generally reasonable to use the one with the smallest MSE.
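A classic illustration of this trade-off, sketched below assuming NumPy, compares two estimators of a normal variance: the unbiased sample variance (dividing by m − 1) and the biased maximum-likelihood estimator (dividing by m). The parameter values are illustrative; for small samples the biased estimator attains the smaller MSE:

```python
import numpy as np

rng = np.random.default_rng(2)
mu, sigma, m, trials = 0.0, 2.0, 10, 100_000  # illustrative parameters
true_var = sigma**2

samples = rng.normal(mu, sigma, size=(trials, m))

# Unbiased sample variance (divides by m - 1) vs. biased MLE (divides by m).
var_unbiased = samples.var(axis=1, ddof=1)
var_biased = samples.var(axis=1, ddof=0)

# Empirical MSE: average squared deviation from the true variance.
mse_unbiased = np.mean((var_unbiased - true_var) ** 2)
mse_biased = np.mean((var_biased - true_var) ** 2)
print(f"MSE unbiased: {mse_unbiased:.4f}")
print(f"MSE biased:   {mse_biased:.4f}")  # smaller despite nonzero bias
```

The biased estimator’s lower variance more than compensates for its bias, so by the MSE criterion it is the better estimator here.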