3.10.4 Joint-normal Distributions

Let X be an n-dimensional random vector with mean vector μ and covariance matrix Σ. Suppose the marginal distribution of each component Xi is normal. Let Y be a random variable defined as a linear polynomial

[3.121]   $Y = b_1 X_1 + b_2 X_2 + \cdots + b_n X_n + a$

of X. Using [3.27] and [3.28], we can calculate the mean μY and standard deviation σY of Y. Knowing only that the marginal distributions of the Xi are normal, there is little more we can say about the distribution of Y. However, there is an additional condition we can impose upon X that will cause Y to be normally distributed. That condition is joint-normality.
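
To make this concrete, here is a minimal sketch in Python of the calculation via [3.27] and [3.28]. All of the numbers below (μ, Σ, a, and the coefficients b) are made-up values for illustration, not anything taken from the text:

```python
import numpy as np

# Made-up parameters for a 3-dimensional random vector X.
mu = np.array([1.0, 2.0, 0.5])                 # mean vector of X
sigma = np.array([[4.0, 1.2, 0.4],
                  [1.2, 9.0, 0.9],
                  [0.4, 0.9, 1.0]])            # covariance matrix of X

# Coefficients of the linear polynomial Y of [3.121].
b = np.array([0.5, -1.0, 2.0])
a = 2.0

mu_Y = b @ mu + a                              # mean of Y, per [3.27]
sigma_Y = np.sqrt(b @ sigma @ b)               # standard deviation of Y, per [3.28]
print(mu_Y, sigma_Y)
```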

The definition of joint-normality is almost trivial. A random vector X is said to be joint-normal if every nontrivial linear polynomial Y of X is normal. Joint-normal distributions are sometimes called multivariate normal or multinormal distributions.

We denote the n-dimensional joint-normal distribution with mean vector μ and covariance matrix Σ as Nn(μ,Σ). If Σ is positive definite, it has PDF

[3.122]   $\phi(x) = \dfrac{1}{\sqrt{(2\pi)^n \, |\Sigma|}} \exp\!\left[ -\tfrac{1}{2}\,(x - \mu)' \, \Sigma^{-1} \, (x - \mu) \right]$

where |Σ| is the determinant of Σ. Exhibit 3.20 illustrates a joint-normal distribution in two random variables X1 and X2. If we define Y = X1 + X2, then Y is normal.

Exhibit 3.20: The PDF of a joint-normal distribution.
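
As a sanity check on [3.122], the density can be evaluated directly and compared against a library implementation. The two-dimensional parameters below are arbitrary; scipy.stats.multivariate_normal serves only as a cross-check:

```python
import numpy as np
from scipy.stats import multivariate_normal

def joint_normal_pdf(x, mu, sigma):
    """Evaluate the Nn(mu, sigma) density of [3.122] at the point x."""
    n = len(mu)
    dev = x - mu
    exponent = -0.5 * dev @ np.linalg.solve(sigma, dev)
    norm_const = np.sqrt((2 * np.pi) ** n * np.linalg.det(sigma))
    return np.exp(exponent) / norm_const

mu = np.array([0.0, 0.0])
sigma = np.array([[1.0, 0.5],
                  [0.5, 2.0]])
x = np.array([0.3, -0.7])

print(joint_normal_pdf(x, mu, sigma))                  # direct evaluation of [3.122]
print(multivariate_normal(mean=mu, cov=sigma).pdf(x))  # library cross-check; should agree
```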

Now let’s illustrate how a random vector may fail to be joint-normal despite each of its components being marginally normal. Let X be a two-dimensional random vector with components X1 and X2. Let X1 and Z be independent N(0,1) random variables, and set X2 equal to –|Z| or |Z|, depending on whether X1 is negative or non-negative. By construction, both X1 and X2 are N(0,1), but their realizations are always either both negative or both non-negative. The vector X, whose PDF is illustrated in Exhibit 3.21, is not joint-normal. In this case, the random variable Y = X1 + X2 is not normal. Instead, it has the PDF illustrated in Exhibit 3.22.

Exhibit 3.21: This PDF illustrates how a random vector X can have two components that are both marginally normal but not be joint-normal.
Exhibit 3.22: The PDF of Y = X1 + X2 is illustrated where X1 and X2 are components of random vector X, whose PDF is illustrated in Exhibit 3.21. This example illustrates that a linear polynomial of normal random variables need not be normal.
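
The construction is easy to simulate. The sketch below (seed and sample size are arbitrary) builds X2 from the magnitude of Z and the sign of X1, confirms that X2's sample moments match N(0,1), and shows that Y = X1 + X2 places far less mass near zero than a normal distribution with the same standard deviation would, which is the dip visible in Exhibit 3.22:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(seed=42)
n = 1_000_000

x1 = rng.standard_normal(n)
z = rng.standard_normal(n)
# X2 takes the magnitude of Z and the sign of X1, so it is N(0,1)
# marginally but always shares a sign with X1.
x2 = np.where(x1 >= 0, np.abs(z), -np.abs(z))

print(x2.mean(), x2.std())            # approximately 0 and 1

# Y = X1 + X2 is not normal: its density vanishes at zero.
y = x1 + x2
observed = np.mean(np.abs(y) < 0.1)
normal_benchmark = norm.cdf(0.1, scale=y.std()) - norm.cdf(-0.1, scale=y.std())
print(observed, normal_benchmark)     # observed mass near 0 is a small
                                      # fraction of what a normal would give
```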

A random vector is joint-normal with uncorrelated components if and only if the components are independent normal random variables.
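
One direction of this equivalence can be checked numerically: for a joint-normal vector with a diagonal covariance matrix, the density [3.122] factors into the product of the marginal normal densities, which is exactly independence. A minimal sketch with made-up parameters:

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

# Joint-normal with uncorrelated components: a diagonal covariance matrix.
mu = np.array([1.0, -2.0])
sigma = np.diag([4.0, 0.25])          # variances 4 and 0.25

x = np.array([0.7, -1.5])
joint = multivariate_normal(mean=mu, cov=sigma).pdf(x)
product = norm.pdf(x[0], loc=1.0, scale=2.0) * norm.pdf(x[1], loc=-2.0, scale=0.5)
print(joint, product)   # equal: the joint density factors into the marginals
```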

A useful property of joint-normal distributions is that their marginal and conditional distributions are all either normal (if they are univariate) or joint-normal (if they are multivariate). Specifically, let X ~ Nn(μ,Σ). Select k components. Without loss of generality, suppose these are the first k components X1, X2, …, Xk. Let X1 be the k-dimensional vector comprising these components, and let X2 be the (n – k)-dimensional vector of the remaining components. These partition X, μ, and Σ into sub-vectors and sub-matrices as follows:

[3.123]   $X = \begin{bmatrix} X_1 \\ X_2 \end{bmatrix}, \quad \mu = \begin{bmatrix} \mu_1 \\ \mu_2 \end{bmatrix}, \quad \Sigma = \begin{bmatrix} \Sigma_{1,1} & \Sigma_{1,2} \\ \Sigma_{2,1} & \Sigma_{2,2} \end{bmatrix}$

The marginal distribution of X1 is Nk(μ1, Σ1,1) and that of X2 is Nn–k(μ2, Σ2,2). If Σ2,2 is positive definite, the conditional distribution of X1 given that X2 = x2 is

[3.124]   $N_k\!\left(\mu_1 + \Sigma_{1,2}\,\Sigma_{2,2}^{-1}(x_2 - \mu_2),\ \ \Sigma_{1,1} - \Sigma_{1,2}\,\Sigma_{2,2}^{-1}\,\Sigma_{2,1}\right)$
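
Computing the conditional parameters in [3.124] is a short exercise in linear algebra. The sketch below uses made-up N3(μ,Σ) parameters and the partition of [3.123] with k = 1, conditioning the first component on the other two:

```python
import numpy as np

# Made-up parameters; partition with k = 1.
mu = np.array([1.0, 2.0, 0.5])
sigma = np.array([[4.0, 1.2, 0.4],
                  [1.2, 9.0, 0.9],
                  [0.4, 0.9, 1.0]])
k = 1
mu1, mu2 = mu[:k], mu[k:]
s11, s12 = sigma[:k, :k], sigma[:k, k:]
s21, s22 = sigma[k:, :k], sigma[k:, k:]

# Conditional mean and covariance of X1 given X2 = x2, per [3.124].
x2 = np.array([1.5, 0.0])
cond_mean = mu1 + s12 @ np.linalg.solve(s22, x2 - mu2)
cond_cov = s11 - s12 @ np.linalg.solve(s22, s21)
print(cond_mean, cond_cov)
```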

If X ~ Nn(μ,Σ), b is a constant m × n matrix, and a is an m-dimensional constant vector, then

[3.125]   $bX + a \sim N_m\!\left(b\mu + a,\ \ b\,\Sigma\,b'\right)$

This generalizes property [3.94] of one-dimensional normal distributions.
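
Property [3.125] is easy to verify by Monte Carlo. The matrix b and vector a below are arbitrary illustrative values; the sample mean and covariance of bX + a should match the parameters the formula predicts:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

mu = np.array([1.0, 2.0, 0.5])
sigma = np.array([[4.0, 1.2, 0.4],
                  [1.2, 9.0, 0.9],
                  [0.4, 0.9, 1.0]])

# An arbitrary constant m x n matrix b (m = 2, n = 3) and m-vector a.
b = np.array([[1.0, 0.0, 2.0],
              [0.5, -1.0, 0.0]])
a = np.array([10.0, -3.0])

# Parameters predicted by [3.125] for bX + a.
print(b @ mu + a)         # mean vector
print(b @ sigma @ b.T)    # covariance matrix

# Monte Carlo check against the prediction.
x = rng.multivariate_normal(mu, sigma, size=200_000)
y = x @ b.T + a
print(y.mean(axis=0))
print(np.cov(y, rowvar=False))
```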