1.9.2 Proprietary Value-at-Risk Measures
Tracing the historical development of proprietary value-at-risk measures is difficult because they were used by firms for internal purposes. They were not published and were rarely mentioned in the literature. One interesting document is a letter from Stephen C. Francis (1985) of Fischer, Francis, Trees & Watts to the Federal Reserve Bank of New York. Describing what appears to be an early proprietary value-at-risk measure, he indicates that their measure was similar to the SEC’s UNCR but employed more asset categories, including 27 categories for cash market US Treasuries alone. He notes:
We find no difficulty utilizing on an essentially manual basis the larger number of categories, and indeed believe it necessary to capturing accurately our gross and net risk exposures.
The full letter is presented as Exhibit 13.
Working at Bankers Trust, Garbade (1986) describes sophisticated value-at-risk measures for fixed income markets that employed linear and principal component remappings to simplify computations. These may have been influenced by, but were different from, an internal value-at-risk measure Bankers Trust implemented around 1983 for use with its risk-adjusted return on capital (RAROC) system of internal capital allocation.
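The principal-component remappings Garbade describes can be illustrated with a short sketch: extract the dominant factors from the covariance of key-rate changes and project a portfolio's exposures onto them, so that risk computations involve a few factors rather than many correlated rates. All numbers below are invented for illustration and are not drawn from Garbade's paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 10 key rates whose daily changes are correlated,
# with correlation decaying as maturities grow farther apart.
n_obs, n_rates = 500, 10
maturities = np.arange(1, n_rates + 1)
corr = np.exp(-0.3 * np.abs(maturities[:, None] - maturities[None, :]))
vol = 0.001  # assumed 10 bp daily volatility at each key rate
cov = vol**2 * corr
changes = rng.multivariate_normal(np.zeros(n_rates), cov, size=n_obs)

# Principal components via eigendecomposition of the sample covariance,
# sorted so the largest-variance component comes first.
sample_cov = np.cov(changes.T)
eigvals, eigvecs = np.linalg.eigh(sample_cov)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Fraction of total variance explained by the first three components,
# conventionally interpreted as level, slope, and curvature of the curve.
explained = eigvals[:3].sum() / eigvals.sum()

# The remapping: project a hypothetical vector of key-rate exposures
# onto the top three factors, reducing a 10-dimensional problem to 3.
exposures = rng.normal(size=n_rates)
factor_exposures = eigvecs[:, :3].T @ exposures
```

With correlation structures like this one, a handful of components typically captures most of the variance, which is what made such remappings attractive for simplifying computations.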
Garbade recollects9 efforts within Bankers Trust to improve its value-at-risk measures following the stock market crash of 1987. During the crash, Treasury interest rates fell sharply while stock prices plummeted. Such correlated market moves are often observed during periods of market turmoil, and they have motivated suggestions that correlations become more extreme during periods of elevated market volatility. Within Bankers Trust, there were several efforts to model this phenomenon with mixed normal distributions. Each mixture comprised two joint-normal distributions: one, which was more likely to be drawn from, had modest standard deviations and correlations; the other, which was less likely to be drawn from, had more extreme standard deviations and correlations. Using the time-series methods available at the time, the researchers were unable to fit a reasonable model to market data. They concluded that this inability indicated a significant shortcoming of the value-at-risk measures then in use.
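The mixture idea can be sketched with hypothetical parameters (none of these numbers come from Bankers Trust's model): each observation is drawn from a "calm" bivariate normal with high probability, and from a "turbulent" one, with larger volatilities and a more extreme correlation, with low probability. Even a small turbulent weight pulls the unconditional correlation well above the calm regime's.

```python
import numpy as np

rng = np.random.default_rng(42)

def cov_matrix(sigma, rho):
    """Bivariate covariance matrix with equal volatilities and correlation rho."""
    return np.array([[sigma**2, rho * sigma**2],
                     [rho * sigma**2, sigma**2]])

# Hypothetical two-regime mixture of joint-normal distributions.
p_calm = 0.95                         # probability of the calm regime
mu = np.zeros(2)                      # zero mean returns in both regimes
cov_calm = cov_matrix(0.01, 0.2)      # modest volatility and correlation
cov_turb = cov_matrix(0.04, 0.8)      # extreme volatility and correlation

def sample_mixture(n):
    """Draw n observations: each comes from the calm or turbulent regime."""
    in_calm = rng.random(n) < p_calm
    calm = rng.multivariate_normal(mu, cov_calm, size=n)
    turb = rng.multivariate_normal(mu, cov_turb, size=n)
    return np.where(in_calm[:, None], calm, turb)

returns = sample_mixture(100_000)
corr = np.corrcoef(returns.T)[0, 1]   # unconditional correlation
```

Because the turbulent regime contributes disproportionately to variance and covariance, the unconditional correlation here lands near 0.47 despite the calm regime's correlation of only 0.2, mimicking the pattern of correlations appearing more extreme in turbulent markets.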
At about the same time, Chase Manhattan Bank was developing a Monte Carlo-based value-at-risk measure for use with its return on risk-adjusted capital (RORAC) internal capital allocation system. Citibank was implementing another value-at-risk measure, also for capital allocation, which measured what the bank referred to as "potential loss amount" or PLA.10
A 1993 survey conducted by Price Waterhouse for the Group of 3011 found that, at that time, among 80 responding derivatives dealers, 30% were using value-at-risk to support market risk limits. Another 10% planned to do so.