###### 4.8.6 Properties

As generally implemented, the models of this section are stationary and conditionally homoskedastic, but nothing in our definitions requires this. A white noise can be conditionally heteroskedastic. Any nontrivial MA, AR, or ARMA model based upon such a white noise will also be conditionally heteroskedastic.

MA processes are necessarily stationary, although some choices of coefficient matrices **β**_{k} will cause components to oscillate. AR and ARMA processes need not be stationary. Depending upon the coefficient matrices **b**_{k}, components can increase or decrease without bound. They may also oscillate wildly. If an AR or ARMA process does not exhibit such behaviors, it is generally stationary. See Hamilton (1994) for necessary and sufficient conditions for covariance stationarity.
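For the univariate AR(*p*) case, such conditions can be checked numerically: the process is covariance stationary exactly when every eigenvalue of its companion matrix lies strictly inside the unit circle. A minimal sketch (the function name and interface are illustrative, not from the text):

```python
import numpy as np

def is_covariance_stationary(b):
    """Check covariance stationarity of a univariate AR(p) process
    x_t = a + b[0]*x_{t-1} + ... + b[p-1]*x_{t-p} + w_t
    via the companion-matrix condition: the process is covariance
    stationary iff all eigenvalues lie strictly inside the unit circle."""
    b = np.asarray(b, dtype=float)
    p = len(b)
    companion = np.zeros((p, p))
    companion[0, :] = b            # first row holds the AR coefficients
    if p > 1:
        companion[1:, :-1] = np.eye(p - 1)   # identity shifts lags down
    return bool(np.all(np.abs(np.linalg.eigvals(companion)) < 1.0))

print(is_covariance_stationary([0.5]))        # AR(1) with |b1| < 1: True
print(is_covariance_stationary([1.1]))        # explosive AR(1): False
print(is_covariance_stationary([0.5, 0.4]))   # AR(2), checked via eigenvalues
```

For the AR(1) case this reduces to the familiar condition |*b*_{1}| < 1.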

###### 4.8.7 Estimation

Stochastic processes are estimated from time series using techniques of statistical estimation as appropriate. In the case of a Gaussian white noise **W**, terms are IID. We may treat a segment {^{–α}**W**, … , ^{–1}**W**, ^{0}**W**} of the stochastic process as a sample, and employ sample estimators to estimate the constant covariance matrix of terms ^{t}**W**. Estimation for other processes may be more difficult. ML estimators are widely used for this purpose. Consider an example.

Exhibit 4.15 indicates a time series, which is graphed in Exhibit 4.16.

We treat the time series as a realization of a segment of a univariate stationary AR(1) process:

[4.62]  ^{t}*X* = *a* + *b*_{1} ^{t–1}*X* + ^{t}*W*
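A short simulation helps build intuition for this process. The sketch below assumes the univariate AR(1) form above; the parameter values (0.82, 0.31, 1.50), which are the estimates obtained at the end of this section, are used purely for illustration:

```python
import numpy as np

def simulate_ar1(a, b1, sigma, n, burn=500, seed=0):
    """Simulate n terms of the AR(1) process x_t = a + b1*x_{t-1} + w_t,
    with w_t ~ N(0, sigma^2); a burn-in period removes dependence on
    the arbitrary starting value."""
    rng = np.random.default_rng(seed)
    x = 0.0
    out = np.empty(n)
    for t in range(burn + n):
        x = a + b1 * x + sigma * rng.standard_normal()
        if t >= burn:
            out[t - burn] = x
    return out

x = simulate_ar1(a=0.82, b1=0.31, sigma=1.50, n=2000)
print(x.mean())   # near the unconditional mean a/(1 - b1) ≈ 1.19
```

The lag-1 sample autocorrelation of such a path should sit near *b*_{1}, since for a stationary AR(1) the autocorrelation at lag 1 equals *b*_{1}.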

where ^{t}*W* is Gaussian white noise, ^{t}*W* ~ *N*(0, σ^{2}). Parameters that need to be estimated are *a*, *b*_{1}, and σ. Let θ = (*a*, *b*_{1}, σ). Let ϕ(^{t}*x* | θ) be the PDF of ^{t}*X* conditional only on θ, and let ^{t | t–1}ϕ(^{t}*x* | θ, ^{t–1}*x*) be the PDF of ^{t}*X* conditional on both θ and the previous value ^{t–1}*x*. With ^{t}*W* normal, it can be shown that ^{t}*X* is both conditionally and unconditionally normal. It has unconditional mean

[4.63]  *E*(^{t}*X*) = *a* / (1 – *b*_{1})

By [3.32], it has unconditional variance

[4.64]  var(^{t}*X*) = var(*a* + *b*_{1} ^{t–1}*X* + ^{t}*W*)

[4.65]  = *b*_{1}^{2} var(^{t–1}*X*) + σ^{2}

Because the process is stationary, var(^{t}*X*) = var(^{t–1}*X*), so

[4.66]  var(^{t}*X*) = σ^{2} / (1 – *b*_{1}^{2})

Accordingly

[4.67]  ^{t}*X* ~ *N*(*a* / (1 – *b*_{1}), σ^{2} / (1 – *b*_{1}^{2}))
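These unconditional moments, *a*/(1 – *b*_{1}) for the mean and σ^{2}/(1 – *b*_{1}^{2}) for the variance, can be checked against a long simulated path. The sketch below is illustrative; the parameter values are arbitrary:

```python
import numpy as np

# Check the unconditional moments of a stationary AR(1)
# x_t = a + b1*x_{t-1} + w_t against a long simulated path.
a, b1, sigma = 0.82, 0.31, 1.50
rng = np.random.default_rng(42)

n, burn = 200_000, 1_000
x = np.empty(n + burn)
x[0] = a / (1 - b1)               # start at the unconditional mean
for t in range(1, n + burn):
    x[t] = a + b1 * x[t - 1] + sigma * rng.standard_normal()
x = x[burn:]

mean_theory = a / (1 - b1)             # unconditional mean
var_theory = sigma**2 / (1 - b1**2)    # unconditional variance
print(x.mean(), mean_theory)
print(x.var(), var_theory)
```

The sample moments converge to the theoretical values as the path lengthens.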

Similarly, we conclude

[4.68]  ^{t}*X* | ^{t–1}*x* ~ *N*(*a* + *b*_{1} ^{t–1}*x*, σ^{2})

Since terms ^{t}*X* are dependent, the likelihood function employs conditional probability densities ^{t | t–1}ϕ(^{t}*x* | θ, ^{t–1}*x*) for all but the first term:

[4.69]  *L*(θ | ^{–α}*x*, … , ^{0}*x*) = ϕ(^{–α}*x* | θ) ∏_{t = –α+1}^{0} ^{t | t–1}ϕ(^{t}*x* | θ, ^{t–1}*x*)

The log-likelihood function is

[4.70]  log *L*(θ | ^{–α}*x*, … , ^{0}*x*) = log ϕ(^{–α}*x* | θ) + ∑_{t = –α+1}^{0} log ^{t | t–1}ϕ(^{t}*x* | θ, ^{t–1}*x*)

Substituting in time series values from Exhibit 4.15, this becomes

[4.71]

We take the gradient of the log-likelihood function and set it equal to **0**. Applying Newton’s method, we estimate θ = (*a*, *b*_{1}, σ) as (0.82, 0.31, 1.50).
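The maximization can also be sketched numerically. The code below is illustrative rather than a reproduction of the text's calculation: simulated data stand in for Exhibit 4.15 (which is not reproduced here), and the derivative-free Nelder-Mead method stands in for the gradient/Newton approach. The likelihood evaluates the first term under the unconditional normal distribution and the remaining terms under the conditional one:

```python
import numpy as np
from scipy.optimize import minimize

def ar1_negloglik(theta, x):
    """Negative log-likelihood of a stationary Gaussian AR(1): exact
    unconditional density for the first observation, conditional
    densities for the rest."""
    a, b1, sigma = theta
    if sigma <= 0 or abs(b1) >= 1:
        return np.inf                      # outside the stationary region
    m0 = a / (1 - b1)                      # unconditional mean
    v0 = sigma**2 / (1 - b1**2)            # unconditional variance
    ll = -0.5 * (np.log(2 * np.pi * v0) + (x[0] - m0) ** 2 / v0)
    resid = x[1:] - a - b1 * x[:-1]        # deviations from conditional means
    ll += np.sum(-0.5 * (np.log(2 * np.pi * sigma**2) + resid**2 / sigma**2))
    return -ll

# Simulated data in place of Exhibit 4.15; true parameters are illustrative.
rng = np.random.default_rng(7)
a, b1, sigma = 0.82, 0.31, 1.50
x = np.empty(3000)
x[0] = a / (1 - b1)
for t in range(1, len(x)):
    x[t] = a + b1 * x[t - 1] + sigma * rng.standard_normal()

# Nelder-Mead in place of Newton's method; start from simple sample statistics.
fit = minimize(ar1_negloglik, x0=[x.mean(), 0.0, x.std()],
               args=(x,), method="Nelder-Mead")
print(fit.x)   # estimates of (a, b1, sigma)
```

With a reasonably long sample, the estimates land close to the parameters that generated the data.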