11.4 Origins of Historical Simulation
Finger (2006) observes:
When [bank] risk managers are asked why they opt for historical simulation, they usually respond with one or more of the following:
- It is easy to explain.
- It is conservative.
- It is “assumption-free”.
- It captures fat tails.
- It gives me insight into what could go wrong.
As a result of the first two of these, and perhaps the third as well, there is another reason: “my regulator likes it.”
Noticeably absent from the list of reasons is the statement
- It produces good risk forecasts.
The methodology of historical simulation was already widely familiar when J.P. Morgan publicly launched RiskMetrics in November 1994.1 Bank regulators had already developed a preference for the methodology.2 To understand why, some historical perspective may be helpful.
Within banks, there is a stark distinction between “profit centers” and “cost centers”. Profit centers earn money for a bank, so they command resources and the best employees. Cost centers don’t. During the 1990s, physics Ph.D.s flocked to Wall Street. But they were not put to work developing value-at-risk measures. The over-the-counter derivatives market was growing explosively, and the physicists’ math skills were needed to devise pricing and hedging strategies. Derivatives trading was a profit center. Financial risk management was not. Work on value-at-risk was—not always, but often—assigned to junior analysts or managers whose careers had been sidetracked. Mostly these people lacked quantitative skills. They struggled with concepts such as random vectors, statistical estimators, standard error, and variance reduction. But historical simulation was different. It used no sophisticated mathematics. Anyone, so it seemed, could understand and implement the methodology.
A broad literature developed around value-at-risk. This included some outstanding articles—see references cited in this book—but these were the exception. Top academics mostly avoided value-at-risk as a subject for research, so articles and books tended to be written by less capable finance professors or practitioners with limited theoretical grounding. The website gloria-mundi.com is a bibliography for value-at-risk. It lists a staggering volume of items. Among its earlier entries, few are worth reading. A substantial number endorsed historical simulation.
In this context, bank regulators had to approve analytics for banks calculating value-at-risk under the Basel Accords. The regulators tended to have legal or accounting backgrounds, so they too lacked the quantitative skills to understand most value-at-risk measures or to sift through the turgid literature. For them, the “transparency” of historical simulation was appealing. They didn’t rule out other methodologies, but by the mid-1990s, they had wholeheartedly embraced historical simulation.
At the same time, banks were forming risk advisory groups to offer corporate clients free or inexpensive risk management consulting services. The business model was to sell over-the-counter derivatives or other financial services through consultative selling. Several groups offered value-at-risk analytics to complement their consulting. J.P. Morgan had RiskMetrics. CS First Boston offered a package called PrimeRisk. Bankers Trust had RAROC 2020 (risk-adjusted return on capital 2020). Chase Manhattan Bank had CHARISMA (Chase Risk Management Analyzer).3
Chase’s CHARISMA calculated value-at-risk with a crude historical simulation employing just 100 days of historical data. Aggressive marketing of the system closely associated Chase with historical simulation and served to promote the methodology further.
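To make concrete what such a calculation entails, the following is a minimal sketch of historical simulation value-at-risk in Python, assuming a window of 100 daily profit-and-loss observations. The function and variable names are hypothetical; this illustrates the basic technique, not CHARISMA’s actual implementation.

```python
import numpy as np

def historical_simulation_var(pnl_history, confidence=0.95):
    """One-day value-at-risk from a window of historical P&L observations.

    The basic technique: take the portfolio's realized daily P&Ls over
    the window, convert them to losses, and read off the loss quantile
    at the desired confidence level. Names here are illustrative only.
    """
    losses = -np.asarray(pnl_history)       # convert P&L to losses
    return np.quantile(losses, confidence)  # e.g. 95th-percentile loss

# Example: a 100-day window, as in the crude approach described above.
rng = np.random.default_rng(0)
daily_pnl = rng.normal(0.0, 1.0e6, size=100)  # hypothetical daily P&Ls (USD)
print(f"95% one-day VaR: {historical_simulation_var(daily_pnl):,.0f} USD")
```

Note that with only 100 observations, the 0.95 quantile falls between the fifth- and sixth-worst losses, so the estimate rests on a handful of extreme data points. This is one sense in which so short a window yields a crude measure.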
These developments helped spread adoption of historical simulation. More importantly, they contributed to the ongoing acceptance of the methodology by bank regulators. More than anything else, that regulatory acceptance explains the widespread adoption of historical simulation by banks for regulatory purposes.