Response to consultation paper on Guidelines on criteria for the use of data inputs for the ES risk measure under the IMA


Q1. To which extent do you intend to apply paragraph 16 of the present GL? Please provide concrete examples that could fall under the scope of paragraph 16 and explain why the coefficients cannot be calibrated to the historical data only.

Response:

Paragraph 16 sets out reasonable conditions under which coefficients may incorporate a level of judgement. This paragraph is applicable where there is not enough historical data to make a fully empirical estimate, or where estimates can be stabilised and their accuracy improved by incorporating a level of judgement.

Proposed Action

Include Paragraph 16

Q2. To which extent do you intend to apply paragraph 17 of the present GL? Please provide concrete examples that could fall under the scope of paragraph 17.

Response:
Paragraph 17 sets out the conditions under which institutions do not rely solely on data from the 1-year period of financial stress but also use more recent historical data.
One example of where one would not want to rely solely on data from the period of financial stress is where the instrument or risk factor did not exist during the stress period used.

A further example is an issuer whose nature has changed dramatically over time, e.g. a company that has completely changed its business model and now belongs to a different sector. The data (stock price, credit spread) of that issuer in the stress period may no longer be relevant to the issuer's current risk profile. In this instance it may be preferable to use more recent data to reflect the risk of that issuer.

Proposed Action

Include Paragraph 17

Q3. Do you agree with the inclusion of paragraph 31 in the GL? Do you envisage any issues that could be associated with paragraph 31?

Response:
The market data team is required to remediate data issues such as missing data points and inconsistent data. Remediation is applied through a replacement methodology, which involves using data corresponding to other risk factors, and is generally considered a one-off exercise. There is a concern that paragraph 31 could be interpreted as a requirement to regularly monitor modellability across all data points that have been used for data remediation, and to trigger revision where the modellability criteria are not met. This would potentially introduce instability into the risk estimate, i.e. it would require periodic changes to shocks for historical time series that should in theory remain static. Furthermore, a situation could arise where a bank replaced a non-modellable data point with an alternative that was an acceptable, modellable substitute at the time of remediation, while the original data point has since become modellable. Given these concerns, it is advisable for paragraph 31 to be excluded.
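For illustration, a minimal sketch of the one-off replacement step described above, assuming each risk factor history is held as a pandas Series indexed by date; the function name, the example values, and the choice of donor series are hypothetical, not a prescribed methodology.

```python
import pandas as pd

def remediate_one_off(target: pd.Series, donor: pd.Series) -> pd.Series:
    """One-off remediation: fill missing observations in the target risk
    factor's history with observations from a related (donor) risk factor.
    Once filled, the remediated series is treated as static."""
    return target.fillna(donor)

# Hypothetical usage: gaps in one series filled from a related series.
idx = pd.date_range("2020-01-01", periods=5, freq="D")
target = pd.Series([1.20, None, 1.30, None, 1.40], index=idx)
donor = pd.Series([1.10, 1.15, 1.25, 1.32, 1.38], index=idx)
print(remediate_one_off(target, donor))
```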

Proposed Action

Remove paragraph 31

Q4. Do you agree with the inclusion of paragraph 34 in the GL? Do you envisage any issues that could be associated with paragraph 34?

Response:

The CP provides guidance on the concept of "extrapolation up to a reasonable distance." If a risk factor has been deemed modellable and there are missing or inconsistent values in its historical time series, extrapolation may be needed to rectify the issues in the time series. However, the CP is too prescriptive on the use of extrapolation, restricting it as a means of substituting risk factors when data inputs are not available, and may therefore inadvertently result in the application of proxies of inferior quality compared to extrapolation. In the most extreme cases, no "reasonable" proxy (other than extrapolation) may be available. Consequently, shocks beyond the last modelled pillar would effectively be forced to zero, with significant impact on the risk factor eligibility tests for the IMA, in particular on PLAT.
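To illustrate the concern, a minimal sketch of a shock term structure where, beyond the last modelled pillar, one either extrapolates flat or, under a restrictive reading of paragraph 34, effectively forces the shock to zero; all pillar points and shock values are hypothetical.

```python
import numpy as np

# Hypothetical calibrated shocks at the modellable pillars (in years).
pillars = np.array([0.25, 0.5, 1.0, 2.0, 3.0])   # last modelled pillar: 3Y
shocks = np.array([0.021, 0.024, 0.028, 0.031, 0.033])

def shock_at(tenor: float, allow_extrapolation: bool) -> float:
    """Interpolate a shock within the pillar range; beyond the last pillar
    either carry the last calibrated shock (flat extrapolation) or force
    the shock to zero, mimicking the restrictive outcome described above."""
    if tenor <= pillars[-1]:
        return float(np.interp(tenor, pillars, shocks))
    return float(shocks[-1]) if allow_extrapolation else 0.0

print(shock_at(5.0, allow_extrapolation=True))   # 0.033 (flat extrapolation)
print(shock_at(5.0, allow_extrapolation=False))  # 0.0 (shock forced to zero)
```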

Portfolio back-testing is already recognised by Basel (par. 99.22(4)) as a valid tool for identifying whether risk factors simulated in the Risk Management Model ('RMM') adequately reflect market volatility and correlations. In addition, the FRTB framework introduces another mechanism to ensure that risk factor dynamics within the RMM are consistent with the dynamics of the market data sets used in the Front Office: the Profit and Loss Attribution Test ('PLAT').

In cases where extrapolation/interpolation is of insufficient quality, a PLAT failure would offer a backstop, forcing the desk onto the standardised approach (SA) if it fails to meet the prescribed thresholds. This mechanism is already strict and does not require additional constraints on risk factor extrapolation mechanisms.

In general, we consider interpolation/extrapolation modelling techniques a distinctive aspect of the internal model that should not be subject to prescriptive regulation. Indeed, such limitations on the use of sound proxy methodologies may greatly impact the results of the back-testing and PLAT tests.

We would like to illustrate the above point by means of an example using FX Volatilities.

When considering potential proxies, several alternatives can be contemplated. For example, one can consider an extrapolation technique such as tenor substitution for a best fit, where the implied volatility at a given tenor point is used to approximate adjacent tenor points. Alternatively, currency substitution may be used, in which the missing data is replaced by data from a comparable currency pair, possibly selected via regression analysis. Tests performed using market data for FX volatility show that proxying via extrapolation across adjacent data points performs better than currency pair substitution: the extrapolation method leads to higher correlation and comparable volatilities between the underlying and the proxy.
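As an illustration of the two approaches, the sketch below assumes implied-volatility histories held as pandas Series and DataFrames; the function names and the simple R^2-based donor selection are our own simplifications for exposition, not a prescribed methodology.

```python
import pandas as pd

def tenor_substitution(vols: pd.DataFrame, target: str, donor: str) -> pd.Series:
    """Proxy the target tenor's implied-volatility series with an adjacent
    tenor of the same currency pair (e.g. donor='1Y' for target='3Y')."""
    return vols[donor].rename(target)

def currency_substitution(target: pd.Series, candidates: dict) -> pd.Series:
    """Select the candidate currency pair whose implied-volatility series
    best explains the target's available history (highest R^2 of a simple
    univariate regression), then use that series as the proxy."""
    best_name, best_r2 = None, -1.0
    for name, series in candidates.items():
        joined = pd.concat([target, series], axis=1).dropna()
        r2 = joined.corr().iloc[0, 1] ** 2  # R^2 of a univariate linear fit
        if r2 > best_r2:
            best_name, best_r2 = name, r2
    return candidates[best_name]
```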

To support the above statement with empirical evidence, we compared a flat-extrapolation "Tenor Substitution" method to a "Currency Substitution" method based on regression analysis (a sketch of the comparison appears after the list below).

• In the Tenor Substitution case, we proxied the Delta Neutral implied volatility at the 3Y tenor with the implied volatility at the 1Y tenor of the same currency pair.
• In the Currency Substitution case, we proxied the Delta Neutral implied volatility of currency pair X at the 3Y tenor with the Delta Neutral implied volatility of currency pair Y at the 3Y tenor. Currency pair Y was selected as a proxy for currency pair X via regression analysis.
• We repeated the exercise across 73 currency pairs.
• The analysis was carried out across the SVaR period.
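The comparison statistics quoted in the conclusions below could be reproduced along the following lines; this is a sketch assuming daily implied-volatility histories for the underlying and each proxy are available as pandas Series, with the loading of the 73 currency pairs omitted.

```python
import pandas as pd

def compare_proxy(underlying: pd.Series, proxy: pd.Series) -> dict:
    """Comparison metrics: correlation of daily changes between underlying
    and proxy, and the ratio of the proxy's volatility to the underlying's
    (a ratio near 1 indicates comparable volatilities)."""
    changes = pd.concat([underlying, proxy], axis=1).dropna().diff().dropna()
    corr = changes.corr().iloc[0, 1]
    vol_ratio = changes.iloc[:, 1].std() / changes.iloc[:, 0].std()
    return {"correlation": corr, "vol_ratio": vol_ratio}

# Hypothetical aggregation over the 73 currency pairs (loading assumed):
# results = [compare_proxy(u, p) for u, p in pairs]
# avg_corr = sum(r["correlation"] for r in results) / len(results)
```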

The table below shows the detailed comparison between the "Tenor Substitution" and "Currency Substitution" methods.

Please see attachment for table

Conclusions:
• Tenor Substitution results in higher correlations, at 96%, compared to Currency Substitution, which scores between 83% and 86%.
• The Tenor Substitution approach tracks the statistical properties of the underlying risk factor, as the volatility of the proxy is comparable to that of the underlying risk factor.
• The average correlation results for Tenor Substitution are quite stable, varying only by +/- 4% across currencies, while the corresponding variation under the Currency Substitution method is +/- 13%.

We therefore recommend that banks be allowed the flexibility to choose the most appropriate extrapolation methodology, provided they can demonstrate the appropriateness of their choice, and accordingly recommend that paragraph 34 be removed.


Proposed action

The Industry proposes to not include paragraph 34 in the Guidelines.


Name of the organization

ISDA