Guest Opinion: Moving Forward With IRR Back Testing
A stronger regulatory focus on managing interest rate risk (IRR) on the balance sheet has pushed to the forefront the requirement that institutions back test their IRR models. Back testing checks the sufficiency of the data, the setup and the assumptions used to produce an analytical report: it compares the projections of a past report against the actual figures produced over that same time horizon.
Comparing actual data to a past projection helps identify and measure discrepancies so they can be properly explained or corrected. By identifying variances between projections and actual figures, an institution can pinpoint holes in the core data, the modeling structure, the assumptions and even the report presentation. These items can then be fixed, or their limitations noted so management can plan and budget accordingly.
From an IRR perspective, back testing is possible only in some cases. Moreover, the assumptions in an asset-liability management (ALM) model exist to isolate interest rate risk and therefore can never be perfectly replicated in actual results.
An ALM model typically stresses the balance sheet with parallel rate shocks. In reality, yield curves never move in such a fashion, so an institution could not use shocked IRR results as a perfect basis for measuring future volatility.
Back testing base-case income projections is possible: the institution can compare the projected income from a previous base-case report against its actual earnings over that period. Even this is far from perfect, though, as the model uses unrealistic rate, prepayment and reinvestment assumptions specifically for IRR purposes (e.g., holding the balance sheet constant). Note, too, that the purpose of an ALM report is not to project earnings but to shock interest rates and determine the effects on the institution's earnings.
Measuring the accuracy of an ALM model depends on observable inputs. One approach is monitoring base-case income projections: the institution should compare the projected income from a recent report (such as the previous quarter) to its actual income earned. A more detailed analysis will highlight specific accounts or categories that varied notably from the projection; those accounts can then be checked in the model to ensure their attributes are set correctly.
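The account-level comparison described above can be sketched in a few lines. This is a minimal illustration, not any particular vendor's method; the account categories, dollar figures and 5% tolerance are all made-up assumptions.

```python
# Hypothetical base-case back test: compare last quarter's projected
# income to actual income by account category and flag variances that
# exceed a tolerance. All names and figures are illustrative.

projected = {"auto loans": 412_000, "mortgages": 655_000, "investments": 198_000}
actual    = {"auto loans": 431_500, "mortgages": 648_200, "investments": 171_300}

TOLERANCE = 0.05  # assumed threshold: flag categories off by more than 5%

def variance_report(projected, actual, tolerance=TOLERANCE):
    """Return (category, projected, actual, pct_variance) for outliers."""
    flagged = []
    for category, proj in projected.items():
        act = actual[category]
        pct = (act - proj) / proj
        if abs(pct) > tolerance:
            flagged.append((category, proj, act, pct))
    return flagged

for category, proj, act, pct in variance_report(projected, actual):
    print(f"{category}: projected {proj:,}, actual {act:,} ({pct:+.1%})")
```

With these sample figures only the investments category breaches the tolerance, which is exactly the kind of outlier that would then be traced back to the model's setup for that account.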
Other historical projections can also be measured against actual results. Prepayment speeds are critical for any cash flow analysis, and the institution can track internal prepayments by account type; comparing that history against the assumptions in previous reports may highlight outliers. Similarly, member behavior around core deposits is a major part of ALM analytics, and historical data is essential for gauging the accuracy of current projections.
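One common way to check realized prepayments against a report assumption is to convert one month of unscheduled runoff into an annualized conditional prepayment rate (CPR) via the single-month mortality (SMM). The balances, the assumed CPR and the 3-percentage-point outlier threshold below are illustrative assumptions, not figures from any actual report.

```python
# Illustrative check of a realized prepayment speed against a model
# assumption: one month of unscheduled principal runoff is converted to
# an SMM and annualized into a CPR. All numbers are made up.

def actual_cpr(scheduled_balance, actual_balance):
    """Annualized CPR implied by one month of unscheduled runoff."""
    smm = (scheduled_balance - actual_balance) / scheduled_balance
    return 1 - (1 - smm) ** 12

assumed_cpr = 0.12            # CPR assumed in the previous ALM report
scheduled   = 10_000_000      # balance expected after scheduled amortization
observed    = 9_850_000       # balance actually observed a month later

cpr = actual_cpr(scheduled, observed)
print(f"assumed CPR {assumed_cpr:.1%}, realized CPR {cpr:.1%}")
if abs(cpr - assumed_cpr) > 0.03:   # assumed outlier threshold
    print("realized speed is an outlier; revisit the assumption")
```

Here the realized CPR of roughly 16.6% versus a 12% assumption would flag that account type for a second look at its prepayment setup.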
Employing previous analytics is another smart approach. First, the institution should have a method for comparing the results of the current report with those of the previous report. Doing so quickly isolates big movements over that period, which can then be reviewed to ensure an appropriate setup and defensible assumptions. A comparison across several reports on a macro scale is also beneficial, as it tracks the institution's key metrics over time.
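The report-over-report comparison amounts to a simple diff of key metrics. The metric names (an NEV ratio, a shocked NEV ratio, projected 12-month NII) and values below are hypothetical placeholders chosen only to show the mechanics.

```python
# Sketch of period-over-period report comparison: key metrics from the
# prior and current ALM reports (values are illustrative) with the
# change computed so large movements stand out for review.

prior   = {"NEV ratio": 0.112, "NEV -300bp": 0.089, "NII (12mo)": 4_100_000}
current = {"NEV ratio": 0.104, "NEV -300bp": 0.071, "NII (12mo)": 4_150_000}

def metric_drift(prior, current):
    """Absolute change in each metric between two reports."""
    return {k: current[k] - prior[k] for k in prior}

for metric, change in metric_drift(prior, current).items():
    print(f"{metric}: {prior[metric]} -> {current[metric]} "
          f"(change {round(change, 6):+})")
```

A large drop in a shocked metric, such as the 1.8-point slide in the -300bp NEV ratio here, is the kind of movement that warrants verifying the setup and assumptions behind both reports.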
Alternative scenarios in previous reports can sometimes serve as a proxy for current results. If rates move notably in a short period (say, 100 basis points over six months), the institution can go back half a year and review the +100 basis point shock projections from that report. While the member data will have changed and the market's movement will never be a perfect parallel shock, measuring price and income volatilities against those projections is still a good check on the model.
Understanding the limitations of modeling is also important. By recognizing a model's shortfalls, the institution can run alternative what-if scenarios that stress-test varying assumptions. Reviewing the results of those what-if reports helps management understand the effect each assumption has and how much variance can exist between different models.
Finally, the institution can utilize compliance documentation to complement a back-testing analysis. For instance, the institution should acquire software validation from a third-party provider that measures the model’s analytical capabilities. The institution should also have a periodic validation of its model performed by another vendor.
Benedict Voit is an asset-liability management adviser at ALM First Financial Advisors LLC.
800-752-4628 or www.almfirst.com