3.7 Principal Component Analysis

With principal component analysis, we transform a random vector Z with correlated components Zi into a random vector D with uncorrelated components Di. This is called an orthogonalization of Z.

Principal component analysis can be performed on any random vector Z whose second moments exist, but it is most useful with multicollinear random vectors. Principal component analysis takes the plane in which realizations of a multicollinear random vector “almost” sit and realigns it with the coordinate system of ℝn. The components of D that are perpendicular to the transformed plane have small, almost trivial standard deviations. Discarding these components provides a lower-dimensional approximate representation for Z. This is illustrated with realizations of a multicollinear two-dimensional random vector Z in Exhibit 3.9.

Exhibit 3.9: Principal component analysis can be used to reduce the dimensionality of a multicollinear random vector. Realizations for a multicollinear two-dimensional random vector Z are illustrated in the left graph. Principal component analysis transforms Z into an equivalent multicollinear random vector D that is aligned with the coordinate system of ℝ2. Realizations of D are shown in the middle graph. Discarding the second component D2 of D transforms D into a one-dimensional approximate representation of the two-dimensional Z. Realizations of this representation are shown in the right graph.
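The geometry of Exhibit 3.9 is easy to reproduce numerically. Below is a minimal Python sketch using made-up parameters rather than any data from this chapter: it draws correlated two-dimensional realizations, rotates them into uncorrelated principal components with the eigenvectors of their sample covariance matrix, and then discards the low-variance second component to obtain a one-dimensional approximate representation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical multicollinear two-dimensional random vector: the two
# components are very highly correlated, so realizations almost sit on a line.
mu = np.zeros(2)
cov = np.array([[1.00, 0.98],
                [0.98, 1.00]])
z = rng.multivariate_normal(mu, cov, size=500)        # realizations of Z

# Orthogonalize: eigenvectors of the sample covariance define the rotation.
eigenvalues, v = np.linalg.eigh(np.cov(z, rowvar=False))
order = np.argsort(eigenvalues)[::-1]                 # largest variance first
eigenvalues, v = eigenvalues[order], v[:, order]

d = (z - z.mean(axis=0)) @ v                          # realizations of D, uncorrelated

# Dimension reduction: zero out the low-variance second component.
d_trunc = d.copy()
d_trunc[:, 1] = 0.0
z_approx = d_trunc @ v.T + z.mean(axis=0)             # one-dimensional representation of Z

print(np.cov(d, rowvar=False).round(4))               # nearly diagonal
```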
3.7.1 Example: European Currencies

Suppose today is June 30, 2000. We consider a random vector Z whose components represent the simple price returns that specific European currencies will realize versus the US dollar (USD) over the upcoming trading day:

[3.56]

Exhibit 3.10 graphs 18 months of daily exchange-rate data drawn from the period immediately following the launch of the new EUR currency. In our data, the EUR weakens following its launch, and the remaining European currencies—those that did not join the EUR on January 1, 1999—weaken in sympathy. All the currencies track the EUR, but the GBP does so the least. It is less correlated with the EUR and loses value more slowly.

Exhibit 3.10: Historical exchange rates versus the USD for the period January 1, 1999, through June 30, 2000. Exchange rates are presented as USD/unit of currency, so a rising curve indicates a strengthening currency. Exchange rates are individually scaled so they all fit on the graph.
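A simple price return over one trading day is just the proportional change in the exchange rate, pt/pt−1 − 1. The following sketch computes such returns from a hypothetical array of USD-per-GBP closing rates; the numbers are placeholders, not the data behind Exhibit 3.10.

```python
import numpy as np

# Hypothetical USD-per-GBP closing rates for five consecutive trading days.
usd_per_gbp = np.array([1.610, 1.604, 1.598, 1.601, 1.593])

# Simple price return for day t:  r_t = p_t / p_(t-1) - 1
returns = usd_per_gbp[1:] / usd_per_gbp[:-1] - 1.0
print(returns)   # four daily returns, each a fraction of a percent
```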

We assume μZ = 0. Based upon a time series analysis of the historical price data, we construct a covariance matrix for Z:

[3.57]
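The entries of [3.57] are not reproduced here, and the text does not commit to a particular estimator, but one simple possibility consistent with the assumption μZ = 0 is the sample second-moment matrix of the daily returns. A sketch with placeholder data:

```python
import numpy as np

# Placeholder matrix of daily simple returns: one row per trading day,
# one column per currency. Real data would come from series such as
# those in Exhibit 3.10.
rng = np.random.default_rng(1)
returns = rng.normal(scale=0.005, size=(390, 7))

# With the assumed mean vector 0, a simple estimator of the covariance
# matrix is the average of the outer products of the daily return vectors.
sigma_z = returns.T @ returns / len(returns)
print(sigma_z.shape)   # (7, 7)
```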

The corresponding correlation matrix is

[3.58]

The correlations are all positive. Several exceed 0.90. The one between DKK and EUR exceeds 0.99. The smallest is a respectable 0.45 between GBP and SEK. With such pronounced interdependencies between its components, we expect Z to be multicollinear, and it is. The correlation matrix has determinant |ρ| = .0000045.
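The multicollinearity diagnosis is easy to reproduce: rescale the covariance matrix into a correlation matrix and take its determinant, which falls toward 0 as the components approach linear dependence. A sketch, with a small placeholder covariance matrix standing in for [3.57]:

```python
import numpy as np

def correlation_from_covariance(sigma):
    """Rescale a covariance matrix by its standard deviations."""
    std = np.sqrt(np.diag(sigma))
    return sigma / np.outer(std, std)

# Small placeholder covariance matrix with strongly dependent components
# (a stand-in for [3.57], whose entries are not reproduced here).
sigma_z = np.array([[4.0, 3.9, 2.0],
                    [3.9, 4.0, 2.1],
                    [2.0, 2.1, 1.5]])

rho = correlation_from_covariance(sigma_z)
print(np.linalg.det(rho))   # a determinant near 0 signals multicollinearity
```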

To define principal components of Z, we calculate orthonormal eigenvectors vi of the covariance matrix ΣZ of Z. We arrange these as the columns of a matrix:

[3.59]

The eigenvectors are graphed in Exhibit 3.11. Corresponding eigenvalues λi are also indicated:

Exhibit 3.11: Eigenvectors vi of covariance matrix [3.57]. Corresponding eigenvalues λi are also indicated.
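The decomposition in [3.59] can be computed with any routine for symmetric eigenvalue problems. A sketch using NumPy's eigh, again with a small placeholder covariance matrix rather than [3.57]; note that eigh returns eigenvalues in ascending order, so the columns must be re-sorted to put the largest-variance mode first.

```python
import numpy as np

# Small placeholder covariance matrix standing in for [3.57].
sigma_z = np.array([[4.0, 3.9, 2.0],
                    [3.9, 4.0, 2.1],
                    [2.0, 2.1, 1.5]])

# eigh handles symmetric matrices and returns eigenvalues in ascending
# order, so re-sort to put the largest-variance mode of fluctuation first.
eigenvalues, v = np.linalg.eigh(sigma_z)
order = np.argsort(eigenvalues)[::-1]
eigenvalues, v = eigenvalues[order], v[:, order]

print(np.allclose(v.T @ v, np.eye(len(sigma_z))))   # True: columns are orthonormal
print(eigenvalues)                                  # variances of the principal components
```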

The eigenvectors may be thought of as “modes of fluctuation” of random vector Z. We observed in our historical data a tendency for the European currencies to move together. This is reflected in the first eigenvector. It describes a broad move in all the currencies, with the GBP participating about half as much as the other currencies. The second eigenvector has the GBP moving in opposition to the NOK and SEK, with the CHF moving modestly with the GBP. The third eigenvector describes the GBP, NOK, and SEK moving together in opposition to the other currencies. The remaining eigenvectors describe other such “modes of fluctuation.”

If the eigenvectors vi are modes of fluctuation of Z, then Z is a random combination of those modes of fluctuation:

[3.60]

The Di are the principal components of Z. They are random variables that define each mode of fluctuation’s random contribution to Z. The Di are uncorrelated with variances equal to the eigenvalues of their corresponding eigenvectors. The vector D of principal components has mean μD = 0 and covariance matrix

[3.61]

We have ordered our principal components according to their variances. From our covariance matrix ΣD, we see that the first three principal components are more significant than the rest. The last two principal components, D6 and D7, have variances that are less than 1% of the variance of D1. Their contribution to random vector Z is trivial.
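Because the matrix of eigenvectors in [3.59] is orthogonal, [3.60] can be inverted to obtain the principal components directly: writing that matrix as v, D equals the transpose of v applied to Z − μZ, which reduces to the transpose of v applied to Z here since μZ = 0. The sketch below, continuing with the placeholder covariance matrix from the earlier sketches, simulates realizations of Z and confirms that the components of D are uncorrelated with variances close to the eigenvalues.

```python
import numpy as np

# Placeholder covariance matrix and its eigendecomposition (as in the earlier sketch).
sigma_z = np.array([[4.0, 3.9, 2.0],
                    [3.9, 4.0, 2.1],
                    [2.0, 2.1, 1.5]])
eigenvalues, v = np.linalg.eigh(sigma_z)
order = np.argsort(eigenvalues)[::-1]
eigenvalues, v = eigenvalues[order], v[:, order]

# Simulate realizations of Z (mean 0) and rotate them into principal components.
rng = np.random.default_rng(2)
z = rng.multivariate_normal(np.zeros(3), sigma_z, size=100_000)
d = z @ v                                   # each row is a realization of D

# The components of D are uncorrelated, with variances close to the eigenvalues.
print(np.cov(d, rowvar=False).round(3))
print(eigenvalues.round(3))
```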

We can approximate Z by discarding the less significant principal components from [3.60]. The more we discard, the simpler—and cruder!—our approximation will be. For this example, we shall be aggressive and discard the contributions of the last four principal components, approximating Z with just the first three. We define

[3.62]

and approximate Z with the resulting lower-dimensional representation. Like Z, this approximation has mean vector 0. Its covariance matrix is obtained from [3.61] and [3.62] using [3.31]:

[3.63]

Comparing this covariance matrix with [3.57], you can judge for yourself the quality of our approximation.
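For completeness, the construction behind [3.63] can be sketched in code: if v_k denotes the retained eigenvectors (the first k columns of [3.59]) and Λ_k the diagonal matrix of their eigenvalues, the covariance matrix of the approximation is v_k Λ_k times the transpose of v_k. The placeholder matrix below is only three-dimensional, so the sketch retains a single principal component rather than three of seven.

```python
import numpy as np

# Placeholder covariance matrix standing in for [3.57], with its eigendecomposition.
sigma_z = np.array([[4.0, 3.9, 2.0],
                    [3.9, 4.0, 2.1],
                    [2.0, 2.1, 1.5]])
eigenvalues, v = np.linalg.eigh(sigma_z)
order = np.argsort(eigenvalues)[::-1]
eigenvalues, v = eigenvalues[order], v[:, order]

k = 1                                       # number of principal components retained
v_k = v[:, :k]                              # retained eigenvectors
lambda_k = np.diag(eigenvalues[:k])         # retained eigenvalues

# Covariance matrix of the lower-dimensional approximation (cf. [3.63]).
sigma_approx = v_k @ lambda_k @ v_k.T
print(sigma_approx.round(3))
print(sigma_z.round(3))                     # compare with the original covariance matrix
```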