###### 9.3.8 Selecting Realizations for Interpolation or Least Squares

Realizations ^{1}*r*^{[k]} for interpolation or least squares should be selected with care. A poor choice may distort results. The simple approach of generating pseudorandom realizations based upon the distribution of ^{1}**R** is inadvisable, as it will cause most realizations to cluster near ^{1|0}**μ** = ^{0}*E*(^{1}**R**). If a quadratic remapping is to be a good approximation over a large region of values for ^{1}**R**, we need to apply interpolation or least squares to a dispersed set of realizations.

A general approach to specifying realizations is to let the point ^{1|0}**μ** be one realization ^{1}*r*^{[l]}, and to select the other realizations so that they are dispersed in some manner on the ellipsoid4

[9.30]  (^{1}*r* – ^{1|0}**μ**)′ ^{1|0}**Σ**^{–1} (^{1}*r* – ^{1|0}**μ**) = *q*^{2}

where ^{1|0}**Σ** is the conditional covariance matrix of ^{1}**R** and *q* is a constant. If ^{1}**R** is joint-normal, this ellipsoid defines a level curve (surface) of its distribution, that is, a surface on which the probability density of ^{1}**R** is constant. In practice, *q* is set equal to 1, 2, or some value in between. In a sense, it reflects the number of standard deviations from ^{1|0}**μ** at which realizations are placed, but this interpretation is precise only in one dimension.

We may disperse *l* – 1 realizations on ellipsoid [9.30] by selecting points on the unit sphere centered at **0** and projecting these onto the ellipsoid. There are various ways this might be done. The following approach provides a good dispersion of realizations as long as ^{1|0}**Σ** is not multicollinear.

Let ^{1|0}**σ** be a diagonal matrix with diagonal elements equal to the conditional standard deviations of ^{1}**R**. Let ^{1|0}**ρ** be the conditional correlation matrix of ^{1}**R**, and let **z** be a matrix, such as a Cholesky factor, satisfying **z z**′ = ^{1|0}**ρ**. Given points *p*_{k} on the unit sphere centered at the origin **0**, define corresponding realizations ^{1}*r*^{[k]} as

[9.31]  ^{1}*r*^{[k]} = ^{1|0}**μ** + *q* ^{1|0}**σ** **z** *p*_{k}
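As a sketch of this construction in code (assuming [9.31] takes the form ^{1}*r*^{[k]} = ^{1|0}**μ** + *q* ^{1|0}**σ z** *p*_{k} with **z z**′ = ^{1|0}**ρ**; the function name is illustrative):

```python
import numpy as np

def realizations_on_ellipsoid(mu, cov, q, points):
    """Map unit-sphere points (rows of `points`) onto ellipsoid [9.30].

    mu     : conditional mean vector, shape (n,)
    cov    : conditional covariance matrix, shape (n, n)
    q      : ellipsoid scale (e.g. 1 or 2)
    points : unit-sphere points, shape (k, n)
    """
    sd = np.sqrt(np.diag(cov))         # conditional standard deviations
    rho = cov / np.outer(sd, sd)       # conditional correlation matrix
    z = np.linalg.cholesky(rho)        # z @ z.T == rho
    a = np.diag(sd) @ z                # combined transform sigma @ z
    return mu + q * points @ a.T       # one realization per row

# Example: two risk factors, three points chosen on the unit circle.
mu = np.array([10.0, 5.0])
cov = np.array([[4.0, 1.0],
                [1.0, 9.0]])
p = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [2**-0.5, 2**-0.5]])
r = realizations_on_ellipsoid(mu, cov, q=2.0, points=p)

# Each realization satisfies (r - mu)' cov^{-1} (r - mu) = q^2.
d = np.array([(x - mu) @ np.linalg.inv(cov) @ (x - mu) for x in r])
```

Because **z z**′ = ^{1|0}**ρ**, every point with unit norm lands exactly on the ellipsoid, whatever square root of the correlation matrix is used.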

This leaves the question of how to select the *l* – 1 points *p*_{k} on the unit sphere centered at **0**. We might randomly disperse them. Generate *l* – 1 pseudorandom vectors *v*_{k} ~ *U*((–1,1)^{n}). Discard and generate replacements for any vectors that equal the zero vector **0** or have norm greater than 1. Then set

[9.32]  *p*_{k} = *v*_{k} / ||*v*_{k}||

for all *k*. This approach has little to recommend it other than the fact that it is easy. Distortions may result from points randomly clustering in certain regions and not in others.
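The rejection-and-normalize recipe of [9.32] can be sketched as follows (a minimal illustration; the function name is not from the text):

```python
import numpy as np

def random_sphere_points(count, n, rng=None):
    """Draw `count` pseudorandom points on the unit sphere in R^n.

    Vectors are drawn uniformly from (-1, 1)^n; any with norm 0 or
    norm greater than 1 are discarded and replaced, and the survivors
    are normalized as in [9.32].  Rejecting points outside the unit
    ball keeps the resulting directions uniformly distributed.
    """
    rng = np.random.default_rng(rng)
    points = []
    while len(points) < count:
        v = rng.uniform(-1.0, 1.0, size=n)
        norm = np.linalg.norm(v)
        if 0.0 < norm <= 1.0:          # reject 0 and points outside the ball
            points.append(v / norm)
        # otherwise discard and draw a replacement
    return np.vstack(points)

pts = random_sphere_points(count=5, n=3, rng=42)
```

Note that the acceptance rate of the rejection step falls rapidly as *n* grows, since the unit ball occupies a shrinking fraction of the cube (–1,1)^{n}.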

If a quadratic remapping is to be constructed with interpolation, we may directly select points *p*_{k} based upon the coefficients *c*_{i,j}, *b*_{i}, and *a* to be determined. For each coefficient *c*_{i,i}, select a point *p*_{k} whose components are all 0’s except the *i*^{th} component, which is 1. For each coefficient *c*_{i,j} for which *i* ≠ *j*, select a point *p*_{k} whose components are all 0’s except the *i*^{th} and *j*^{th} components, which are 1/√2. For each coefficient *b*_{i}, select a point *p*_{k} whose components are all 0’s except the *i*^{th} component, which is –1. This procedure will yield precisely *l* – 1 points unless your quadratic form has *a* = 0. In that case, the approach will yield *l* points. To reduce this to *l* – 1 points, discard two points, *p*_{k} and *p*_{j}, and replace them with the single point

[9.33]  (*p*_{k} + *p*_{j}) / ||*p*_{k} + *p*_{j}||
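This selection rule might be sketched as follows (the function name is hypothetical; the off-diagonal components are taken as 1/√2 so that each point lies on the unit sphere, and the merge step is assumed to normalize the sum of the two discarded points as in [9.33]):

```python
import numpy as np

def interpolation_points(n, a_is_zero=False):
    """Unit-sphere points for interpolating a quadratic in n variables.

    One point e_i per diagonal coefficient c_ii, one point
    (e_i + e_j)/sqrt(2) per off-diagonal coefficient c_ij (i < j),
    and one point -e_i per linear coefficient b_i.  If the constant
    a is fixed at zero, two points are merged as in [9.33]; here the
    first and last points are merged, since their normalized sum
    (e_0 - e_{n-1})/sqrt(2) does not duplicate any other point.
    """
    eye = np.eye(n)
    pts = [eye[i] for i in range(n)]                       # for c_ii
    pts += [(eye[i] + eye[j]) / np.sqrt(2.0)
            for i in range(n) for j in range(i + 1, n)]    # for c_ij, i != j
    pts += [-eye[i] for i in range(n)]                     # for b_i
    if a_is_zero:
        merged = pts[0] + pts[-1]
        merged /= np.linalg.norm(merged)                   # [9.33]
        pts = [merged] + pts[1:-1]
    return np.vstack(pts)

pts = interpolation_points(3)
pts0 = interpolation_points(3, a_is_zero=True)
```

For *n* = 3 this yields 3 + 3 + 3 = 9 points, one fewer than the 10 coefficients of a full quadratic, with ^{1|0}**μ** supplying the remaining realization.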

Obviously, the discarded points *p*_{k} and *p*_{j} should be selected so that their replacement point is not the same as one of the other points already selected.

To construct a quadratic remapping of form

[9.34]  ^{1}*P* = ^{1}**R**′ **c** ^{1}**R** + **b** ^{1}**R** + *a*

we would select points *p*_{k} as indicated in Exhibit 9.18.

Exhibit 9.18: Points *p*_{k} on the unit sphere suitable for use in interpolation of a quadratic remapping of form [9.34].

Considering this configuration of points, we may wonder if better results might be obtained with a symmetrical configuration, such as that in Exhibit 9.19. Such symmetrical configurations tend to work best if a remapping is to be constructed using least squares and the number of points *l* exceeds the number *m* of coefficients to be selected by a reasonable margin.

Ignoring trivialities,5 a perfectly symmetrical configuration of *l* – 1 points on a unit sphere in *n* dimensions is not always well defined. In three dimensions, such an arrangement is possible with 4, 6, 8, 12, or 20 points. These symmetrical configurations are achieved by inscribing one of the five Platonic solids within the sphere and placing a point at each of the solid’s vertices.

In three dimensions, five points cannot be distributed with such symmetry. Perhaps the most symmetrical configuration is the one illustrated in Exhibit 9.21. Here, points at the north and south poles of the sphere are symmetrical to each other, but are configured differently from those on the equator.

In higher dimensions, the situation is similar. Certain numbers of points allow for perfectly symmetrical configurations, but for most numbers of points, any nontrivial configuration affords less-than-perfect symmetry.

Accordingly, we don’t seek perfect symmetry, but only an arrangement of points that is as uniform as possible. A convenient solution is to distribute the points in the same manner in which *l* – 1 electrons would distribute themselves on the surface of a sphere based upon the mutual repulsive forces between them. This concept is defined6 in three dimensions, and the mathematics generalizes to higher dimensions. Treated as electrons, *l* – 1 points *p*_{k} distribute themselves to minimize the sum

[9.35]  ∑_{j<k} 1 / ||*p*_{j} – *p*_{k}||

The configuration of Exhibit 9.21 is such a “minimum energy” configuration. Minimum energy configurations for other values of *n* and *l* – 1 can be obtained by computer simulation. First generate a set {*p*_{k}^{(1)}} of *l* – 1 pseudorandom points on the surface of the sphere. A procedure for doing so was described earlier in this section. Next, shift the points around iteratively based upon applicable electrostatic forces to obtain subsequent point sets {*p*_{k}^{(2)}}, {*p*_{k}^{(3)}}, {*p*_{k}^{(4)}}, … Continue until the points stop moving discernibly, perhaps until *max*_{k}(||*p*_{k}^{(i+1)} – *p*_{k}^{(i)}||) < α_{1} for some suitable value α_{1}.

At each iteration, the new point set {*p*_{k}^{(i+1)}} is obtained from the current one {*p*_{k}^{(i)}} as follows. For each point *p*_{k}^{(i)}, calculate a vector-valued “force” *f*_{k}^{(i)},

[9.36]  *f*_{k}^{(i)} = α_{2} ∑_{j≠k} (*p*_{k}^{(i)} – *p*_{j}^{(i)}) / ||*p*_{k}^{(i)} – *p*_{j}^{(i)}||^{3}

where α_{2} is a suitable scaling factor. Shift each point and renormalize it back onto the sphere to obtain the subsequent point:

[9.37]  *p*_{k}^{(i+1)} = (*p*_{k}^{(i)} + *f*_{k}^{(i)}) / ||*p*_{k}^{(i)} + *f*_{k}^{(i)}||

The algorithm may converge slowly, especially for large *n* and *l*. Values α_{1} = .001 and α_{2} = 1/(20*l*) work well for most value-at-risk applications.7
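A sketch of this simulation (assuming an inverse-square repulsion for the force [9.36] and renormalization onto the sphere for the update [9.37]; the function names are illustrative, with α_{2} defaulting to 1/(20*l*) as quoted above):

```python
import numpy as np

def coulomb_energy(p):
    """The sum in [9.35]: reciprocal distances over all point pairs."""
    d = np.linalg.norm(p[:, None, :] - p[None, :, :], axis=-1)
    iu = np.triu_indices(len(p), 1)
    return (1.0 / d[iu]).sum()

def minimum_energy_points(pts, alpha1=1e-3, alpha2=None, max_iter=10_000):
    """Relax points on the unit sphere toward a minimum-energy layout.

    pts : (k, n) array of starting points on the unit sphere, k = l - 1.
    Iterates [9.36]-[9.37] until no point moves by more than alpha1.
    """
    pts = np.asarray(pts, dtype=float).copy()
    k = len(pts)
    if alpha2 is None:
        alpha2 = 1.0 / (20.0 * (k + 1))            # alpha2 = 1/(20 l)
    for _ in range(max_iter):
        diff = pts[:, None, :] - pts[None, :, :]   # p_k - p_j
        dist = np.linalg.norm(diff, axis=-1)
        np.fill_diagonal(dist, np.inf)             # no self-force
        force = alpha2 * (diff / dist[:, :, None] ** 3).sum(axis=1)  # [9.36]
        new = pts + force
        new /= np.linalg.norm(new, axis=1, keepdims=True)            # [9.37]
        moved = np.max(np.linalg.norm(new - pts, axis=1))
        pts = new
        if moved < alpha1:                         # stopped moving discernibly
            break
    return pts

# Four points in three dimensions relax toward a regular tetrahedron.
rng = np.random.default_rng(0)
start = rng.normal(size=(4, 3))
start /= np.linalg.norm(start, axis=1, keepdims=True)
final = minimum_energy_points(start)
```

Because each iteration is a small step against the gradient of [9.35] followed by projection back onto the sphere, the energy of the configuration declines as the points spread out.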

If *n* and *l* remain the same each time a value-at-risk measure is used, the above algorithm only needs to be run once. The resulting point set {*p*_{k}} can be stored for reuse each time the value-at-risk measure is applied. Of course, the points will have to be projected onto different realizations ^{1}*r*^{[k]} on ellipsoid [9.30] each time, because ^{1|0}**Σ** will change.

With least squares, it may be advantageous to weight the realization ^{1}*r*^{[l]} = ^{1|0}**μ** more heavily than the others. Our discussion of least squares in Section 2.9 does not mention nonuniform weightings of points. However, such a weighting is easily accomplished by including additional realizations ^{1}*r*^{[k]} and setting them all equal to ^{1|0}**μ**.
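The equivalence between repetition and weighting can be checked directly: repeating an observation *w* times in an ordinary least-squares fit multiplies its contribution to the normal equations by *w*, which is exactly a weight of *w*. A generic numerical sketch (not from the text):

```python
import numpy as np

# A small synthetic least-squares problem: fit y ~ X beta.
rng = np.random.default_rng(1)
X = rng.normal(size=(6, 2))
y = rng.normal(size=6)

# Weight observation 0 by w = 4 via repetition: include it 4 times total.
w = 4
X_rep = np.vstack([np.repeat(X[:1], w - 1, axis=0), X])
y_rep = np.concatenate([np.repeat(y[:1], w - 1), y])
beta_rep, *_ = np.linalg.lstsq(X_rep, y_rep, rcond=None)

# The same fit via explicit weighted least squares: scale each row
# by the square root of its weight before an ordinary fit.
weights = np.ones(6)
weights[0] = w
sw = np.sqrt(weights)
beta_wls, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
```

The two coefficient vectors coincide, so duplicating the realization ^{1|0}**μ** in the regression is a simple way to emphasize the center of the ellipsoid without modifying the least-squares machinery.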