3.6.1  Singular Random Vectors

Suppose a random vector X is singular with covariance matrix Σ. Then there exists a row vector b ≠ 0 such that bΣb′ = 0. Consider the random variable bX. By [3.28],

[3.35]    var(bX) = bΣb′ = 0

Since our random variable bX has 0 variance, it must equal some constant a. This argument is reversible, so we conclude that a random vector X is singular if and only if there exists a row vector b ≠ 0 and a constant a such that

[3.36]    bX = a

Dispensing with matrix notation, this becomes

[3.37]    b1X1 + b2X2 + ⋯ + bnXn = a

Since b ≠ 0, at least one component bi is nonzero. Without loss of generality, assume b1 ≠ 0. Rearranging [3.37], we obtain

[3.38]    X1 = (a − b2X2 − b3X3 − ⋯ − bnXn)/b1

which expresses component X1 as a linear polynomial of the other components Xi. We conclude that a random vector X is singular if and only if one of its components is a linear polynomial of the other components. In this sense, a singular covariance matrix indicates that at least one component of a random vector is extraneous.
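To make the characterization concrete, here is a minimal sketch in Python. The covariance matrix is a hypothetical singular example, not one taken from the text; an eigenvector b associated with a zero eigenvalue satisfies bΣb′ = 0, so the corresponding linear combination bX has zero variance and the components of X are linearly dependent.

import numpy as np

# Hypothetical singular covariance matrix (rank 2); not an example from the text.
sigma = np.array([[4.0, 2.0, 2.0],
                  [2.0, 2.0, 0.0],
                  [2.0, 0.0, 2.0]])

# A zero eigenvalue signals singularity; its eigenvector supplies a suitable b.
eigenvalues, eigenvectors = np.linalg.eigh(sigma)
b = eigenvectors[:, 0]        # eigenvector for the smallest eigenvalue

print(eigenvalues.round(6))   # smallest eigenvalue is (numerically) 0
print(b @ sigma @ b)          # bΣb′ is (numerically) 0, so bX has zero variance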

If one component of X is a linear polynomial of the rest, then all realizations of X must fall in a plane within ℝⁿ. The random vector X can be thought of as an m-dimensional random vector sitting in a plane within ℝⁿ, where m < n. This is illustrated with realizations of a singular two-dimensional random vector X in Exhibit 3.6.

Exhibit 3.6: Realizations of a singular two-dimensional random vector X. Component X2 is a linear polynomial of component X1.
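The situation of Exhibit 3.6 is easy to reproduce numerically. The following sketch uses made-up coefficients (they are not taken from the exhibit): component X2 is a linear polynomial of X1, every realization falls on a line, and the sample covariance matrix is singular.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: X2 = 3·X1 + 1, so X is a singular two-dimensional random vector.
x1 = rng.normal(loc=2.0, scale=1.0, size=10_000)
x2 = 3.0 * x1 + 1.0
x = np.column_stack([x1, x2])

# Every realization satisfies the same linear relationship ...
print(np.allclose(x[:, 1], 3.0 * x[:, 0] + 1.0))   # True

# ... so the sample covariance matrix is (numerically) singular.
sample_cov = np.cov(x, rowvar=False)
print(np.linalg.det(sample_cov))                   # essentially 0, up to rounding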

If a random vector X is singular, but the plane it sits in is not aligned with the coordinate system of ℝⁿ, we may not immediately realize that it is singular from its covariance matrix Σ. A simple test for singularity is to calculate the determinant |Σ| of the covariance matrix. If this equals 0, X is singular. Once we know that X is singular, we can apply a change of variables to eliminate extraneous components Xi and transform X into an equivalent m-dimensional random vector Y, m < n. The change of variables will do this by transforming (rotating, shifting, etc.) the plane that realizations of X sit in so that it aligns with the coordinate system of ℝⁿ. Such a change of variables is obtained with a linear polynomial of the form

[3.39]
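As a quick numerical sketch of the determinant test (again with a hypothetical covariance matrix rather than one from the text), NumPy's determinant and matrix rank make both the singularity and the dimension m of an equivalent random vector easy to read off.

import numpy as np

# Hypothetical covariance matrix whose singularity is not obvious by inspection.
sigma = np.array([[4.0, 2.0, 2.0],
                  [2.0, 2.0, 0.0],
                  [2.0, 0.0, 2.0]])

print(np.linalg.det(sigma))          # essentially 0, so X is singular
print(np.linalg.matrix_rank(sigma))  # 2: an equivalent random vector Y has dimension m = 2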

Consider a three-dimensional random vector X with mean vector and covariance matrix

[3.40]

We note that Σ has determinant |Σ| = 0, so it is singular. We propose to transform X into an equivalent two-dimensional random vector Y using a linear polynomial of the form [3.39]. For convenience, let’s find a transformation such that Y will have mean vector 0 and covariance matrix I:

[3.41]

We first solve for k. By [3.31],

[3.42]

so we seek a factorization ΣX = kk′. Applying the Cholesky factorization and discarding an extraneous column of 0’s, as described in Section 2.7, we obtain

[3.43]
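The modified Cholesky algorithm of Section 2.7 is not reproduced here, but a rough Python sketch of the idea follows. When a pivot is zero, the corresponding column of the factor is left as 0's; discarding such columns yields an n×m matrix k with ΣX = kk′. The covariance matrix shown is hypothetical, not the one in [3.40].

import numpy as np

def cholesky_psd(sigma, tol=1e-12):
    # Lower-triangular factor of a positive semidefinite matrix. Columns whose
    # pivot is (numerically) zero are left as zeros, which is what happens when
    # the matrix is singular.
    n = sigma.shape[0]
    factor = np.zeros((n, n))
    for j in range(n):
        pivot = sigma[j, j] - factor[j, :j] @ factor[j, :j]
        if pivot > tol:
            factor[j, j] = np.sqrt(pivot)
            factor[j + 1:, j] = (sigma[j + 1:, j]
                                 - factor[j + 1:, :j] @ factor[j, :j]) / factor[j, j]
    return factor

# Hypothetical singular covariance matrix (rank 2).
sigma_x = np.array([[4.0, 2.0, 2.0],
                    [2.0, 2.0, 0.0],
                    [2.0, 0.0, 2.0]])

full_factor = cholesky_psd(sigma_x)
k = full_factor[:, np.abs(full_factor).sum(axis=0) > 1e-12]   # drop the column of 0's

print(k)                               # a 3×2 matrix
print(np.allclose(k @ k.T, sigma_x))   # True: ΣX = kk′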

Solving next for d, by [3.30],

[3.44]

[3.45]

[3.46]

[3.47]
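Whatever the specific numbers, the reasoning behind d is short. Assuming the change of variables takes the form X = kY + d, which is consistent with the factorization ΣX = kk′ sought above, [3.30] gives μX = kμY + d. Since μY = 0, this reduces to d = μX, so d is simply the mean vector of X.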

Accordingly, our transformation is

[3.48]
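The whole construction can be sketched end to end in Python. The mean vector and covariance matrix below are hypothetical stand-ins for the values in [3.40], the factorization uses an eigendecomposition rather than the Cholesky algorithm of Section 2.7, and the change of variables is assumed to take the form X = kY + d.

import numpy as np

# Hypothetical inputs standing in for [3.40]; not the values from the text.
mu_x = np.array([1.0, 0.5, -2.0])
sigma_x = np.array([[4.0, 2.0, 2.0],
                    [2.0, 2.0, 0.0],
                    [2.0, 0.0, 2.0]])

# Factor sigma_x = kk′ with k of full column rank, here via an eigendecomposition.
eigenvalues, eigenvectors = np.linalg.eigh(sigma_x)
keep = eigenvalues > 1e-12
k = eigenvectors[:, keep] * np.sqrt(eigenvalues[keep])   # a 3×2 factor

# With μY = 0 and ΣY = I, the assumed form X = kY + d requires d = μX.
d = mu_x

# Check: simulate Y with mean 0 and covariance I, and confirm that the implied X
# has (approximately) the intended mean vector and covariance matrix.
rng = np.random.default_rng(1)
y = rng.standard_normal((100_000, 2))
x = y @ k.T + d

print(np.allclose(k @ k.T, sigma_x))       # True
print(x.mean(axis=0).round(2))             # approximately mu_x
print(np.cov(x, rowvar=False).round(2))    # approximately sigma_x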

Exhibit 3.7 illustrates how this change of variables transforms the plane in which X sits so that it aligns with the coordinate system of ℝ².

Exhibit 3.7: Our change of variables transforms the plane that realizations of X sit in so that it aligns with the coordinate system of ℝ². The third extraneous component of X “drops out.”
Exercises
3.17

Below are described four three-dimensional random vectors: V, W, X, and Y. Assuming their second moments exist, which of the random vectors have singular covariance matrices?

  1. Components V1 and V2 are independent, with V3 = 2V1 – 5V2 + 1.
  2. Components W1 and W2 are independent, with W3 = W1log(W2).
  3. Components X1, X2, and X3 represent next year’s total returns for three different companies’ common stocks.
  4. Components Y1 and Y2 represent tomorrow’s prices for the nearby 3-month Treasury bill and 3-month Eurodollar futures. Component Y3 represents tomorrow’s price difference between those two futures.

Solution

3.18

Consider a singular random vector X with mean vector and covariance matrix

[3.49]

Transform X into an equivalent two-dimensional random vector Y with mean vector 0 and covariance matrix I:

[3.50]

Solution