2.7  Cholesky Factorization

If we think of matrices as multi-dimensional generalizations of numbers, we may draw useful analogies between numbers and matrices. Not least of these is an analogy between positive numbers and positive definite matrices. Just as we can take square roots of positive numbers, so can we take “square roots” of positive definite matrices.

2.7.1  Positive Definite Matrices

A real symmetric matrix x is said to be:

• positive definite if bxb′ > 0 for all row vectors b ≠ 0;
• positive semidefinite if bxb′ ≥ 0 for all row vectors b;
• negative definite if bxb′ < 0 for all row vectors b ≠ 0;
• negative semidefinite if bxb′ ≤ 0 for all row vectors b;
• indefinite if none of the above hold.

These definitions may seem abstruse, but they lead to an intuitively appealing result. A symmetric matrix x is:

• positive definite if all its eigenvalues are real and positive;
• positive semidefinite if all its eigenvalues are real and nonnegative;
• negative definite if all its eigenvalues are real and negative;
• negative semidefinite if all its eigenvalues are real and nonpositive;
• indefinite if none of the above hold.

It is useful to think of positive definite matrices as analogous to positive numbers and positive semidefinite matrices as analogous to nonnegative numbers. The essential difference between semidefinite matrices and their definite analogues is that the former can be singular whereas the latter cannot. This follows because a matrix is singular if and only if it has a 0 eigenvalue.
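The eigenvalue characterization above suggests a direct computational test. The following sketch (our own illustration, not from the text; the function name `classify` and the tolerance are assumptions) classifies a symmetric matrix by the signs of its eigenvalues:

```python
import numpy as np

def classify(h, tol=1e-12):
    """Classify a symmetric matrix by the signs of its eigenvalues."""
    eig = np.linalg.eigvalsh(h)  # eigenvalues of a real symmetric matrix are real
    if np.all(eig > tol):
        return "positive definite"
    if np.all(eig >= -tol):
        return "positive semidefinite"
    if np.all(eig < -tol):
        return "negative definite"
    if np.all(eig <= tol):
        return "negative semidefinite"
    return "indefinite"

print(classify(np.array([[2.0, 0.0], [0.0, 3.0]])))   # positive definite
print(classify(np.array([[1.0, 0.0], [0.0, -1.0]])))  # indefinite
```

The small tolerance guards against roundoff: an eigenvalue that is exactly 0 in exact arithmetic may be computed as a tiny nonzero number.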

2.7.2  Matrix “Square Roots”

Nonnegative numbers have real square roots. Negative numbers do not. An analogous result holds for matrices. Any positive semidefinite matrix h can be factored in the form h = kk′ for some real square matrix k, which we may think of as a matrix square root of h. The matrix k is not unique, so multiple factorizations of a given matrix h are possible. This is analogous to the fact that square roots of positive numbers are not unique either. If h is nonsingular (positive definite), k will be nonsingular. If h is singular, k will be singular.
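One way to construct such a k (a sketch of our own, not the book's method) uses the eigendecomposition h = vΛv′: taking k = v√Λ gives kk′ = vΛv′ = h. The function name `matrix_sqrt` is an assumption:

```python
import numpy as np

def matrix_sqrt(h):
    """One matrix square root k of a positive semidefinite h, with k @ k.T == h."""
    eigval, eigvec = np.linalg.eigh(h)
    eigval = np.clip(eigval, 0.0, None)  # clip tiny negative roundoff to 0
    return eigvec @ np.diag(np.sqrt(eigval))

h = np.array([[4.0, -2.0], [-2.0, 2.0]])
k = matrix_sqrt(h)
print(np.allclose(k @ k.T, h))  # True
```

Non-uniqueness is easy to see from this construction: for any orthogonal matrix q, the product kq is another valid square root, since (kq)(kq)′ = kqq′k′ = kk′ = h.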

2.7.3  Cholesky Factorization

A particularly easy factorization h = kk′ to perform is one known as the Cholesky factorization. Any positive semidefinite matrix has a factorization of the form h = gg′ where g is a lower triangular matrix. Solving for g is straightforward. Suppose we wish to factor the positive definite matrix

[2.75]

A Cholesky factorization takes the form

[2.76]

By inspection, g1,1² = 4, so we set g1,1 = 2. Also by inspection, g1,1g2,1 = −2. Since we already have g1,1 = 2, we conclude g2,1 = −1. Proceeding in this manner, we obtain a matrix g in six steps.
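For a generic 3 × 3 positive definite matrix h, the six steps solve, in order:

```latex
\begin{aligned}
\text{1: } g_{1,1} &= \sqrt{h_{1,1}}, &
\text{2: } g_{2,1} &= h_{2,1}/g_{1,1}, &
\text{3: } g_{3,1} &= h_{3,1}/g_{1,1},\\
\text{4: } g_{2,2} &= \sqrt{h_{2,2}-g_{2,1}^{2}}, &
\text{5: } g_{3,2} &= \bigl(h_{3,2}-g_{3,1}\,g_{2,1}\bigr)/g_{2,2}, &
\text{6: } g_{3,3} &= \sqrt{h_{3,3}-g_{3,1}^{2}-g_{3,2}^{2}}.
\end{aligned}
```

Each equation comes from matching one element of h with the corresponding element of gg′, working down the first column, then the second, then the third.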

Our Cholesky matrix is

[2.77]

The above example illustrates a Cholesky algorithm that generalizes to higher-dimensional matrices. The algorithm entails two types of calculations:

1. Calculating diagonal elements gi,i (steps 1, 4 and 6) entails taking a square root.
2. Calculating off-diagonal elements gi,j, i > j (steps 2, 3 and 5) entails dividing some number by the last-calculated diagonal element.

For a positive definite matrix h, all diagonal elements gi,i will be nonzero. Solving for each entails taking the square root of a positive number. We may take either the positive or negative root. Standard practice is to take only positive roots. Defined in this manner, the Cholesky matrix of a positive definite matrix is unique.
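As a sketch (our own implementation, not code from the text; the list-of-lists convention and the name `cholesky` are assumptions), the algorithm for a positive definite matrix might look like this:

```python
import math

def cholesky(h):
    """Cholesky factor of a positive definite matrix h (list of lists).
    Returns lower-triangular g with g g' = h; positive diagonal makes g unique."""
    n = len(h)
    g = [[0.0] * n for _ in range(n)]
    for j in range(n):
        # Diagonal element (steps like 1, 4, 6 above): take a square root.
        d = h[j][j] - sum(g[j][k] ** 2 for k in range(j))
        if d <= 0.0:
            raise ValueError("matrix is not positive definite")
        g[j][j] = math.sqrt(d)  # standard practice: take the positive root
        # Off-diagonal elements in column j (steps like 2, 3, 5):
        # divide by the last-calculated diagonal element.
        for i in range(j + 1, n):
            g[i][j] = (h[i][j] - sum(g[i][k] * g[j][k] for k in range(j))) / g[j][j]
    return g

print(cholesky([[4.0, -2.0], [-2.0, 2.0]]))  # [[2.0, 0.0], [-1.0, 1.0]]
```

The column-by-column order guarantees that every quantity appearing on the right-hand side of a step has already been computed.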

The same algorithm applies to singular positive semidefinite matrices h, but the result is not generally called a Cholesky matrix. This is just an issue of terminology. When the algorithm is applied to a singular h, at least one diagonal element gi,i equals 0. If only the last diagonal element gn,n equals 0, we can obtain g as we did in our example. If some other diagonal element gi,i equals 0, the off-diagonal elements below it, gj,i with j > i, will be indeterminate. We can set each such indeterminate value equal to any value within an interval [–a, a], for some a ≥ 0.
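A sketch of the semidefinite variant (our own, with the standard choice of 0 for every indeterminate value; the name `semidefinite_factor` and the tolerance are assumptions):

```python
import math

def semidefinite_factor(h, tol=1e-12):
    """Cholesky-style factor of a positive semidefinite h (list of lists),
    with every indeterminate entry below a zero diagonal element set to 0."""
    n = len(h)
    g = [[0.0] * n for _ in range(n)]
    for j in range(n):
        d = h[j][j] - sum(g[j][k] ** 2 for k in range(j))
        if d < -tol:
            raise ValueError("matrix is not positive semidefinite")
        g[j][j] = math.sqrt(max(d, 0.0))
        for i in range(j + 1, n):
            s = h[i][j] - sum(g[i][k] * g[j][k] for k in range(j))
            # A zero pivot leaves g[i][j] indeterminate; standard practice is 0.
            g[i][j] = s / g[j][j] if g[j][j] > tol else 0.0
    return g

print(semidefinite_factor([[1.0, 1.0], [1.0, 1.0]]))  # [[1.0, 0.0], [1.0, 0.0]]
```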

Consider the matrix

[2.78]

Performing the first four steps of our algorithm above, we obtain

[2.79]

In the fifth step, we multiply the second row of g by the third column of g′ to obtain

[2.80]

We already know g2,1 = 1, g3,1 = −1, and g2,2 = 0, so we have

[2.81]

[2.82]

which provides us with no means of determining g3,2. It is indeterminate, so we set it equal to a variable x and proceed with the algorithm. We obtain

[2.83]

For the element g3,3 to be real, we can set x equal to any value in the interval [–3, 3]. The interval of acceptable values for indeterminate components will vary, but it will always include 0. For this reason, it is standard practice to set all indeterminate values equal to 0. With this selection, we obtain

[2.84]

We can leave g in this form, or we can delete the second column, which contains only 0’s. The resulting 3 × 2 matrix provides a valid factorization of h since

[2.85]
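With numpy, dropping an all-zero column can be sketched as follows (the factor g below is illustrative, with hypothetical values and the indeterminate entries already set to 0):

```python
import numpy as np

# Illustrative singular factor whose second column contains only 0's.
g = np.array([[ 2.0, 0.0, 0.0],
              [ 1.0, 0.0, 0.0],
              [-1.0, 0.0, 3.0]])
g_reduced = g[:, np.any(g != 0.0, axis=0)]  # keep only columns with a nonzero entry
print(g_reduced.shape)                                 # (3, 2)
print(np.allclose(g_reduced @ g_reduced.T, g @ g.T))   # True
```

Deleting a zero column leaves the product gg′ unchanged, since that column contributes nothing to any element of the product.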

If a symmetric matrix h is not positive semidefinite, our Cholesky algorithm will at some point attempt to take a square root of a negative number and fail. Accordingly, the Cholesky algorithm is a means of testing whether a matrix is positive semidefinite.
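Such a test can be sketched by running the recursion and watching for a negative quantity under the square root (our own sketch; the zero-pivot residual check is an added safeguard not spelled out in the text):

```python
import math

def is_positive_semidefinite(h, tol=1e-12):
    """True if the Cholesky recursion runs to completion on h."""
    n = len(h)
    g = [[0.0] * n for _ in range(n)]
    for j in range(n):
        d = h[j][j] - sum(g[j][k] ** 2 for k in range(j))
        if d < -tol:
            return False  # would require the square root of a negative number
        g[j][j] = math.sqrt(max(d, 0.0))
        for i in range(j + 1, n):
            s = h[i][j] - sum(g[i][k] * g[j][k] for k in range(j))
            if g[j][j] > tol:
                g[i][j] = s / g[j][j]
            elif abs(s) > tol:
                return False  # zero pivot with a nonzero residual below it
    return True

print(is_positive_semidefinite([[4.0, -2.0], [-2.0, 2.0]]))  # True
print(is_positive_semidefinite([[1.0, 2.0], [2.0, 1.0]]))    # False
```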

2.7.4  Computational Issues

In exact arithmetic, our Cholesky algorithm will run to completion with all diagonal elements gi,i > 0 if and only if the matrix h is positive definite. It will run to completion with all diagonal elements gi,i ≥ 0 and at least one diagonal element gi,i = 0 if and only if the matrix h is singular positive semidefinite.

Things are more complicated if arithmetic is performed with rounding, as is done on a computer. Off-diagonal elements are obtained by dividing by diagonal elements. If a diagonal element is close to 0, any roundoff error may be magnified in such a division. For example, if a diagonal element should be .00000001, but roundoff error causes it to be calculated as .00000002, division by this number will yield an off-diagonal element that is half of what it should be.

An algorithm is said to be unstable if roundoff error can be magnified in this way or if it can cause the algorithm to fail. The Cholesky algorithm is unstable for singular positive semidefinite matrices h. It is also unstable for positive definite matrices h that have one or more eigenvalues close to 0.
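A small numerical illustration of the failure mode (our own, not from the text): for a positive definite h with an eigenvalue very close to 0, roundoff in storing h can make a pivot compute as exactly 0, so the algorithm wrongly concludes that h is singular:

```python
import math

eps = 1e-16  # smaller than machine epsilon relative to 1.0
h = [[1.0, 1.0], [1.0, 1.0 + eps]]  # positive definite in exact arithmetic
g11 = math.sqrt(h[0][0])
g21 = h[1][0] / g11
d = h[1][1] - g21 ** 2  # exact value is eps, but 1.0 + 1e-16 rounds to 1.0
print(d)  # 0.0 -- the square-root step finds a zero pivot and the algorithm fails
```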

Exercises
2.10

Identify all factorizations of the following matrices that are obtainable with our Cholesky algorithm. Take only positive square roots when selecting nonzero diagonal elements gi,i. In each case, is the original matrix positive definite, singular positive semidefinite, or neither of these?

a.

[2.86]

b.

[2.87]

c.

[2.88]