Response to consultation on ITS package for benchmarking of credit risk, market risk and IFRS9 models for the 2025 exercise
MR Q1: Do you see any issues or lack of information required in the new templates suggested for the IMA FRTB benchmarking exercise (i.e. Annex 6 & 7)?
In general, the industry notes that there is currently considerable uncertainty surrounding the IMA FRTB 2025 benchmarking exercise. By the time this uncertainty is resolved, the next benchmarking cycle will be imminent, leaving almost no time to implement additional capabilities. Moreover, such implementation would compete for resources with other projects, such as readiness for the first own funds requirements (‘OFR’) reporting at the end of Q1 2025. The industry therefore recommends that the EBA use figures which are already part of the COREP templates and request no additional data beyond that, for example the ES estimator (1d, 99%).
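For illustration, a one-day 99% expected shortfall of the kind referred to above can, under a simple historical-simulation assumption, be computed from a daily P&L series as the average loss beyond the 99% loss quantile. The sketch below is purely illustrative and is not the calculation prescribed by COREP or the ITS; the function name and the synthetic P&L figures are hypothetical.

```python
import numpy as np

def expected_shortfall(pnl, alpha=0.99):
    """Historical-simulation expected shortfall (illustrative only).

    Average loss in the worst (1 - alpha) tail of the 1-day P&L
    distribution, reported as a positive number.
    """
    losses = -np.asarray(pnl, dtype=float)   # losses as positive numbers
    cutoff = np.quantile(losses, alpha)      # alpha loss quantile (VaR-style cutoff)
    tail = losses[losses >= cutoff]          # worst (1 - alpha) share of days
    return tail.mean()

# Hypothetical 1-day P&L history; for illustration only.
rng = np.random.default_rng(0)
pnl = rng.normal(0.0, 1.0e6, size=500)
es_99 = expected_shortfall(pnl, alpha=0.99)
```

The point of the sketch is that this figure is derived from data firms already hold for COREP reporting, so no additional data request is needed to benchmark it.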
The industry would like to ask the EBA to review the required risk measures per portfolio. VaR, stressed VaR and IRC would no longer be deemed relevant by firms after FRTB go-live, yet the EBA's proposed templates include these measures. If the EBA takes a different view, this should be communicated, with the necessary explanation, as early as possible. The expectation is that firms will switch off legacy calculations and systems by the end of 2025 (subject to FRTB go-live on 1 January 2025).
MR Q2: Do you think it is appropriate to restrict the data collection to only two asset classes (interest and credit spread risk) to begin the exercise? Please motivate your answer.
A quantitative benchmarking exercise requires a reasonable number of participating firms to support a horizontal comparison of results. The industry therefore suggests introducing a minimum threshold of participating firms, proposed at 5, below which the exercise should not be conducted in its proposed form. In the absence of a quantitative benchmarking exercise, an alternative could be to conduct benchmarking based on qualitative criteria that would give regulators insight into certain aspects, e.g. an overview of modellable risk factors.
MR Q3: Do you think it is appropriate to ask to report also a PES with the same stressed risk scenario? Would you extend this possibility also to the SSRM?
The use of prescribed stress periods could present implementation challenges for firms, potentially requiring infrastructural changes, along with the operational challenge of sourcing and cleaning data for historical periods unrelated to firms' actual portfolio stress periods.
Without detailed instructions defined upfront in the ITS to ensure a uniform approach to resolving data gaps, there would be significant variability in the results associated with this approach. Some variability from aligning stress periods would be expected regardless, owing to differences across firms in the methodology applied to determine the extreme-shock scenario.
MR Q4: Do you think it is appropriate/feasible to impose to report an instrument/portfolio as if all the risk factors in the instruments/portfolio would be eligible to pass the risk factor eligibility test?
In general, artificially forcing trades to pass or fail the risk factor eligibility test (‘RFET’) outside of a firm's actual implementation of FRTB could present significant data challenges. If a risk factor is deemed by a bank to be non-eligible, the bank is unlikely to have the required time series / market data, or at least not enough to support eligibility. As a result, it would be complicated to include the risk factor in the computation of expected shortfall (ES) and other relevant measures.
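To illustrate the data dependence described above, the sketch below implements a simplified RFET-style observation-count check: a minimum number of real-price observations over the preceding 12 months, and a minimum count in every 90-day sub-window. The thresholds and logic are illustrative assumptions loosely modelled on the Basel criteria, not the CRR/ITS specification; the function and variable names are hypothetical. A firm lacking the underlying time series could not even evaluate such a test, let alone feed the risk factor into an ES computation.

```python
from datetime import date, timedelta

def passes_rfet(obs_dates, asof, min_obs=24, window_days=90, min_in_window=4):
    """Simplified risk-factor eligibility check (illustrative only).

    Tests two RFET-style criteria over the 12 months up to `asof`:
    a minimum total number of real-price observations, and a minimum
    number within every 90-day sub-window. Thresholds are assumptions.
    """
    start = asof - timedelta(days=365)
    obs = sorted(d for d in obs_dates if start <= d <= asof)
    if len(obs) < min_obs:
        return False
    # Slide a 90-day window, day by day, across the 12-month period.
    day = start
    while day + timedelta(days=window_days) <= asof:
        end = day + timedelta(days=window_days)
        if sum(1 for d in obs if day <= d < end) < min_in_window:
            return False
        day += timedelta(days=1)
    return True

asof = date(2025, 1, 1)
weekly = [asof - timedelta(days=7 * k) for k in range(52)]    # one obs per week
sparse = [asof - timedelta(days=30 * k) for k in range(12)]   # one obs per month
```

With weekly observations the check passes; with monthly observations it fails the total-count criterion, which is exactly the situation in which a bank would also lack the data to model the risk factor as if it were eligible.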
MR Q5: As a follow-up to Q4, do you think it is appropriate/feasible to impose to report an instrument/portfolio as if all the risk factors in the instruments/portfolio would fail to pass the risk factor eligibility test (i.e. report all the RF as if they were NMRF)?
The operational difficulty in complying with this request would be significant, as it would require firms to deviate from the risk factors they use in their production computations. This deviation would involve artificially creating dummy risk factors for some or all of the hypothetical portfolio instruments that would ordinarily pass the RFET in production. Any results would not be representative of a firm's actual implementation unless firms also provided a representative view of what they would have submitted with their own set of risk factors, which would represent a huge workload. It is unclear whether the benefit derived from this would outweigh the significant effort required to support it.
MR Q6: Do you see any issues with the changes introduced in the Annex 5?
It is our understanding that this question should refer to Annex II and the changes contained within the Portfolio definition with the additional portfolios containing each single instrument.
Although the industry sees no technical issues with the proposed changes, it would like to point out that the exercise has undergone substantial changes in the last four years. The 2021 benchmarking exercise made use of 81 instruments, 66 portfolios and 14 templates. In the latest proposed ITS, this has expanded to 105 instruments, 105 individual portfolios (single instruments), 56 individual portfolios (multi-instrument), 7 aggregated portfolios, 537 instruments for SBM validation purposes, 388 SBM validation portfolios and 23 templates. For a participating firm, every new or changed feature and/or set-up in scope of the exercise means a change request or a new implementation, placing additional constraints on IT, business and risk resources, both in the initial set-up and on a recurring basis.
MR Q7: In order to reduce the submission burden on the banks, would it be feasible for submitters to have just one submission for A-SA SBM and DRC RM (aligned to IMV submission and relating to the same reference date)?
Although a single submission is feasible, the industry would prefer not to align all A-SA RM figures to the current timeline associated with the IMV submission, as proposed. This would not give banks sufficient time to maximise the data quality of their submissions. The current timeline allows firms to participate in the ISDA Dry Run, which makes a significant difference in improving data quality. The proposed timeline would make submissions more volatile, driven by implementation errors, e.g. in trade set-up, portfolio set-up, etc. This would reduce the added value for all parties involved, as some outliers would most likely be linked to operational errors.
When surveyed, 79% of firms said they would prefer to stick to the current timelines, with 50% preferring to remove the submission of initial sensitivities as part of IMV to help reduce the operational burden.
MR Q8: Do you agree with the proposal to extend the set of ASA instruments validation to all asset classes?
The industry understands the benefit of running the validation portfolios to inform the data quality of the benchmarking submission results. However, it would like to point out the operational burden for firms of using synthetic sensitivity inputs in their systems, and the diminishing returns observed from recurring year-on-year submissions. We suggest offering the option to exclude submissions for any or all validation instruments and portfolios that have previously been provided to national competent authorities.
With respect to the specific set of test cases, the HRK test case is no longer relevant and can be removed.
CR Q1: Do you agree with the proposed changes to the instructions in Annex IV?
NA