When’s the Last Time You Checked Controls Assuring VaR Validity?

April 13, 2013

By Joseph Neu

When JP Morgan’s losses stemming from its Chief Investment Office (CIO) surfaced last summer, one of the concerns raised involved the bank’s VaR model and alleged changes made to it. As noted here then, this was surprising given the level of scrutiny that bank supervisors and accompanying regulation were giving risk models.

Though we have not yet seen such scrutiny trickle down to corporates, all treasurers should still take the time to read the VaR modeling appendix to the JP Morgan Management Task Force report on its CIO Losses and heed its lessons on risk model development, validation and implementation.

Lessons in Model Governance

Surprisingly, the narrative presented in the Task Force Report does not show a problem of complex risk analytics, but a failure of basic controls and the familiar hazards of relying on spreadsheets. Therefore, well before corporates start mimicking the layer-upon-layer of risk governance recommended in the report, they should strengthen the controls they have surrounding their VaR models and spreadsheets. For example:

  • Document and subject to independent review any claims that a model is overestimating VaR. JPM’s CIO changed the VaR model used for the Synthetic Credit Portfolio responsible for its losses in response to the pending implementation of Basel II.5, which would have flagged the model then in use for its limited ability to estimate correlation risk (a common VaR flaw).

Yet in constructing a new model, the trader responsible for the portfolio and the direct report charged with building it also sought to address a belief that the model in use produced a VaR higher than the portfolio’s loss experience warranted.

Ultimately, the implication that the old model was overstating VaR helped dampen warning signs when VaR limits were exceeded and accelerated the implementation of a new model without proper review.

  • Ensure the “modeler” gets adequate support resources not tied to the chief trader. Having the model developer report to the chief trader while also supporting other traders, as the report documents, can hardly be considered best practice; it also meant the modeler was stretched thin. When he asked for additional resources to work on the model, he received none.
  • Don’t implement models that have not received full back-testing. Apparently, JP Morgan’s Model Review Group reviewed and approved the new model proposal in part because it contemplated a “full revaluation” of the risk in the indices and index tranches traded as part of CIO activities. Yet in its final review, the new model was not back-tested against the 264 prior trading days that policy required; the modeler said the CIO lacked the historical data necessary for such back-testing (a back-test sketch appears at the end of this piece).
  • Give careful scrutiny to operational risks in spreadsheets. As is too often the case, the CIO model relied on Excel spreadsheets, “which had to be completed manually, by a process of copying and pasting data from one spreadsheet to another.” Further, because the CIO was trading in illiquid tranches, the natural tendency was to copy and paste the same price for such tranches over multiple consecutive days, understating volatility (see the pricing sketch at the end of this piece). Remediation steps were put in place to fix both issues, but no follow-up occurred to ensure their completion.
  • Flag models that take inputs from unapproved sources. JPM’s Model Review Group discovered that the new VaR model was using inputs from an unapproved second model. It allowed the exception because both the approved and unapproved input models (both, in fact, deployed in spreadsheets) were created by the same modeler (the conflicted one building the new VaR model itself) and showed similar results in limited back-testing.

The unapproved spreadsheet model, it turns out, too easily defaulted to calculating the hazard rate and correlation inputs based on the Uniform Rate method rather than the approved Gaussian Copula method.

Fortunately, this flaw led to the discovery of a more fundamental spreadsheet error: relative changes in hazard-rate and correlation estimates were being calculated by dividing their difference by the sum of the old and new values rather than by their average. Because the sum is twice the average, every computed change came out at half its true size, muting volatility by a factor of two and lowering VaR significantly.
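
To make that arithmetic concrete, here is a minimal sketch in Python. The hazard-rate figures are hypothetical; the report describes the error, not this code:

```python
def relative_change(new, old):
    """Intended calculation: the change divided by the average of the
    old and new values."""
    return (new - old) / ((new + old) / 2.0)

def relative_change_buggy(new, old):
    """The error described in the report: dividing by the sum rather
    than the average. Since the sum is twice the average, every
    computed change comes out at half its true size."""
    return (new - old) / (new + old)

# Hypothetical hazard-rate estimates on consecutive days.
old_rate, new_rate = 0.040, 0.050
print(relative_change(new_rate, old_rate))        # 0.2222...
print(relative_change_buggy(new_rate, old_rate))  # 0.1111... (half)
```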

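On the back-testing point raised above: a back-test simply counts the days on which realized losses exceeded the model’s VaR and compares that count with what the confidence level implies. Below is a minimal sketch under stated assumptions; the P&L and VaR figures are invented for illustration, and only the 264-day window comes from the policy cited in the report:

```python
import numpy as np

def backtest_var(daily_pnl, daily_var, confidence=0.95):
    """Count breach days (realized loss worse than the model's 1-day
    VaR) and compare with the breaches the confidence level implies."""
    daily_pnl = np.asarray(daily_pnl, dtype=float)
    daily_var = np.asarray(daily_var, dtype=float)
    breaches = int(np.sum(daily_pnl < -daily_var))  # loss exceeded VaR
    expected = (1.0 - confidence) * len(daily_pnl)
    return breaches, expected

# Hypothetical inputs spanning 264 prior trading days, per policy.
rng = np.random.default_rng(0)
pnl = rng.normal(0.0, 1_000_000, 264)   # hypothetical daily P&L
var = np.full(264, 1_645_000.0)         # hypothetical 95% 1-day VaR
breaches, expected = backtest_var(pnl, var)
print(f"breaches: {breaches} vs. expected: {expected:.1f}")
# A 95% model breached on far fewer than ~13 of 264 days suggests VaR
# is overstated; far more suggests it is understated.
```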
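
And on the spreadsheet point: copying the same price forward records zero daily returns, so any volatility estimate built on those marks collapses. A minimal sketch, with hypothetical marks for an illiquid tranche:

```python
import numpy as np

def daily_vol(prices):
    """Sample standard deviation of simple daily returns."""
    p = np.asarray(prices, dtype=float)
    returns = p[1:] / p[:-1] - 1.0
    return returns.std(ddof=1)

# Hypothetical marks. Fair value swings day to day, but the spreadsheet
# carries the same pasted quote until a fresh price finally prints.
fresh_marks  = [100.0, 102.5, 98.0, 101.0, 97.5, 100.5, 103.0, 99.0]
pasted_marks = [100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 99.0]

print(f"vol from daily marks:  {daily_vol(fresh_marks):.2%}")   # ~3.6%
print(f"vol from pasted marks: {daily_vol(pasted_marks):.2%}")  # ~0.4%
# Runs of identical prices record zero returns, so measured volatility
# (and any VaR built on it) is sharply understated.
```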