Response to consultation on amending ITS on benchmarking exercises
MR Q1: Do you see any issues or any missing information that should be required in the new templates suggested for the AIMA FRTB benchmarking exercise (i.e. Annex 6 & 7)?
The RASB acknowledges that the proposed templates for the AIMA FRTB benchmarking exercise are comprehensive and reflect the complexity of market risk measurement under the Fundamental Review of the Trading Book (FRTB) standards. We support the EBA’s initiative to enhance comparability and reliability through detailed reporting requirements.
From a risk aggregation and reporting integrity perspective, we respectfully suggest that the templates could be further strengthened by incorporating standardized metrics that capture the operational and non-financial risk components influencing trading book exposures. Although the primary focus of benchmarking is on market risk models, experience has shown that operational failures, data processing errors, and governance lapses can materially distort trading risk profiles and, consequently, the reliability of model outputs.
Integrating supplementary fields or annotations regarding significant operational events, system limitations, or data processing issues encountered during the reporting period could offer valuable context for interpreting benchmarking results. Over time, adopting standardized aggregation of non-financial risks, such as through Risk Units (RUs), could enhance the EBA’s ability to detect discrepancies not solely attributable to market risk modeling differences but also to variations in operational resilience and data governance practices.
We offer this perspective recognizing that it complements, rather than complicates, the EBA’s existing goals of fostering robust benchmarking and supervisory convergence.
MR Q2: Do you see any issues with the reduced subset of instruments proposed for the AIMA exercise? Please elaborate.
The RASB acknowledges that the reduced subset of instruments proposed for the AIMA exercise is a pragmatic and reasonable step toward focusing benchmarking efforts on representative, material exposures. We understand that this streamlining aims to enhance data quality, reporting efficiency, and the interpretability of benchmarking results.
From a risk aggregation standpoint, we respectfully suggest that when selecting a subset of instruments, attention should be given to ensuring that the remaining portfolio is still reflective of the full spectrum of risk factors managed within institutions’ trading books. It is important to recognize that underlying operational and data-related risks can disproportionately affect certain instruments or asset classes. Reducing the instrument set should not inadvertently obscure significant risk concentrations or data aggregation challenges.
While Risk Accounting does not prescribe instrument selection criteria, we highlight that standardized aggregation of underlying non-financial risks associated with instrument portfolios — such as system limitations, data feed dependencies, or operational processing vulnerabilities — could enhance transparency around any residual risks that persist after subset selection. This would ensure that benchmarking exercises continue to provide a holistic and reliable view of model performance, even with a narrowed focus on fewer instruments.
We encourage the EBA to consider periodic reviews of the instrument subset’s composition to ensure continued risk representativeness and aggregation integrity over time.
MR Q3: Do you see any issues with the new template 106.02? Please elaborate.
The RASB recognizes that Template 106.02 represents an important addition aimed at capturing sensitivities under the Internal Models Approach (IMA) for market risk. We commend the EBA for designing a structure that seeks to enhance consistency in reporting sensitivities and risk factor exposures across institutions.
From a risk aggregation perspective, we respectfully observe that while Template 106.02 focuses on capturing modeled sensitivities, the underlying quality and integrity of the source data — including operational risk factors and data governance practices — are critical determinants of the reliability of reported sensitivities.
To enhance the robustness of benchmarking outcomes, we propose that institutions be encouraged to provide, alongside the quantitative sensitivity data, a brief qualitative disclosure or annotation highlighting any known data quality limitations, operational events, or system constraints that could materially affect sensitivity measures during the reporting period. Over time, incorporating a standardized, supplementary view of operational exposures expressed through Risk Units (RUs) could provide regulators with additional assurance regarding the provenance and consistency of reported sensitivity data.
We offer this suggestion in the spirit of complementing the EBA’s efforts to promote transparency, comparability, and supervisory confidence in market risk model benchmarking exercises.
MR Q4: Do you see any issues with the specified timeline in Annex 5 or with the reference date for new ASA institutions in the exercise as defined in the suggested draft of Article 4.1.(b)?
The RASB recognizes that the clarity of timelines and reference dates is crucial to ensuring consistent and comparable submissions across institutions participating in the AIMA benchmarking exercise. We support the EBA’s intention to define these elements explicitly in Annex 5 and in the suggested draft of Article 4.1.(b).
From a risk aggregation and operational execution standpoint, we respectfully suggest that alongside the specified timelines, the EBA could encourage institutions to document any significant operational challenges or data reconciliation issues encountered during their adherence to the timeline. Capturing such operational disclosures — even at a high level — would improve the transparency of benchmarking results and allow supervisors to differentiate between model performance variances and execution-related discrepancies.
In the longer term, adopting standardized, supplementary aggregation metrics such as Risk Units (RUs) could provide further granularity regarding operational risk impacts associated with adherence to specified timelines. This would enhance the EBA’s supervisory toolkit in assessing not only model validity but also the underlying operational resilience that supports effective model execution and reporting.
We commend the EBA’s efforts to introduce precise reference dates and timelines and view these refinements as important steps toward strengthening the reliability and comparability of benchmarking exercises across the EU banking sector.
MR Q5: Do you see any issues with the changes introduced in the Annex 5?
The RASB acknowledges that the changes introduced in Annex 5 — particularly those aimed at streamlining the benchmarking process, enhancing the reporting of sensitivities, and improving comparability across institutions — are positive developments. We commend the EBA for its continuous refinement of benchmarking practices in alignment with evolving supervisory objectives.
From a risk aggregation perspective, we respectfully observe that the success of these changes will heavily depend on the underlying consistency, quality, and governance of the data institutions submit. Operational factors such as data feed management, trade capture processes, and systems reliability can significantly affect the accuracy and comparability of reported sensitivities and exposures, even when templates and instructions are precisely defined.
To further enhance the robustness of the benchmarking exercise, we encourage the EBA to consider promoting the inclusion of brief operational risk disclosures or annotations alongside technical submissions. For example, institutions could provide a short qualitative statement identifying any material operational events or system issues that may have affected the reliability of reported sensitivities, combined — optionally — with quantitative indicators such as a Risk Mitigation Index (RMI) score or a high-level Residual Risk Unit (RU) figure associated with the submission’s data sources. This would provide supervisors with valuable context to differentiate model discrepancies from operational anomalies, enhancing the transparency and interpretability of benchmarking results.
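As a purely illustrative sketch of how such an optional quantitative annotation could be assembled: the code below assumes a convention in which residual RUs are gross RUs scaled by (1 - RMI), with the RMI expressed on a 0-to-1 scale. This scaling rule, and all field and source names, are illustrative assumptions rather than a prescribed Risk Accounting or EBA specification.

```python
# Illustrative sketch: assembling an optional operational-risk annotation
# for a benchmarking submission. The scaling rule (residual RUs =
# gross RUs * (1 - RMI)) and all field names are assumptions made for
# illustration, not a prescribed EBA or Risk Accounting specification.

def residual_rus(gross_rus: float, rmi: float) -> float:
    """Scale gross Risk Units by the unmitigated share implied by the RMI.

    rmi is a Risk Mitigation Index on a 0..1 scale, where 1 means the
    exposure is fully mitigated.
    """
    if not 0.0 <= rmi <= 1.0:
        raise ValueError("RMI must lie in [0, 1]")
    return gross_rus * (1.0 - rmi)

def submission_annotation(data_sources: list[dict]) -> dict:
    """Aggregate a high-level residual RU figure across the data sources
    feeding one benchmarking submission."""
    total_gross = sum(s["gross_rus"] for s in data_sources)
    total_residual = sum(residual_rus(s["gross_rus"], s["rmi"]) for s in data_sources)
    return {
        "gross_rus": total_gross,
        "residual_rus": total_residual,
        # Weighted-average RMI implied by the aggregate figures.
        "effective_rmi": 1.0 - total_residual / total_gross if total_gross else None,
    }

# Hypothetical data sources behind one submission.
sources = [
    {"name": "trade_capture", "gross_rus": 1200.0, "rmi": 0.8},
    {"name": "market_data_feed", "gross_rus": 400.0, "rmi": 0.5},
]
print(submission_annotation(sources))
```

A single aggregate figure of this kind, attached to a submission, would let supervisors see at a glance how much unmitigated operational exposure sits behind the reported sensitivities without expanding the templates themselves.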
We welcome the changes proposed in Annex 5 and fully support the EBA’s direction in fostering a benchmarking environment that prioritizes transparency, comparability, and systemic risk visibility.
MR Q6: Would you consider it useful to clarify the type of SOFR rate (term, compound) to be used when booking related interest rate instruments? If so, please suggest a clarification.
The RASB agrees that providing explicit clarification on the type of SOFR rate to be used — whether term SOFR, compounded SOFR, or otherwise — would be highly beneficial for ensuring consistency across institutions in the AIMA benchmarking exercise. Precise specification of benchmark rates is particularly important when comparing sensitivities and risk factor exposures under internal models.
From a risk aggregation and supervisory perspective, ambiguities in the definition or application of reference rates can introduce avoidable inconsistencies in reported sensitivities and model outputs. Even small variations in rate interpretation can create material benchmarking differences that are unrelated to underlying model quality.
We therefore support the EBA in providing a clear, standardized specification for SOFR usage within the benchmarking templates and reporting instructions. Ideally, this would be accompanied by a brief technical clarification in Annex 5 indicating:
- Whether compounded-in-arrears SOFR, daily simple SOFR, or a forward-looking term SOFR rate should be used.
- The preferred day-count, reset date, and compounding conventions, where applicable.
 
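To illustrate why this specification matters, the short sketch below compares a daily compounded (in arrears) rate with a daily simple (weighted-average) rate over one interest period, using the standard ACT/360 day-count. The fixings are made-up sample values and the helper names are our own; this is a sketch of the conventions in question, not a reference implementation.

```python
# Illustrative comparison of daily compounded (in arrears) vs. daily
# simple SOFR over an interest period, using the ACT/360 day-count
# convention. The daily fixings below are made-up sample values, not
# actual published SOFR rates.

def compounded_rate(fixings: list[tuple[float, int]], day_count: float = 360.0) -> float:
    """Annualised compounded-in-arrears rate.

    fixings: list of (annualised daily rate, number of calendar days the
    fixing applies for, e.g. 3 over a weekend).
    """
    growth = 1.0
    total_days = 0
    for rate, days in fixings:
        growth *= 1.0 + rate * days / day_count
        total_days += days
    return (growth - 1.0) * day_count / total_days

def simple_rate(fixings: list[tuple[float, int]], day_count: float = 360.0) -> float:
    """Annualised daily simple (weighted-average) rate."""
    total_days = sum(days for _, days in fixings)
    accrued = sum(rate * days / day_count for rate, days in fixings)
    return accrued * day_count / total_days

# One sample week: Monday-Thursday fixings apply for 1 day each,
# Friday's fixing applies over the weekend (3 days).
week = [(0.0530, 1), (0.0531, 1), (0.0529, 1), (0.0532, 1), (0.0530, 3)]
print(f"compounded: {compounded_rate(week):.6%}")
print(f"simple:     {simple_rate(week):.6%}")
```

Even over a single week the two conventions diverge slightly, and over a full reporting period such differences compound into benchmarking variances that have nothing to do with model quality; hence the value of an explicit specification in Annex 5.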
While Risk Accounting does not directly model interest rate instruments, it strongly promotes the principle that standardized definitions and data aggregation rules improve the transparency, comparability, and interpretability of risk data — key objectives of the EBA’s benchmarking initiatives.
CR Q1: Do you think that the proposed approach aimed at including the breakdown B.6.3 is correct and that it avoids any double counting of the exposures?
The RASB supports the EBA’s initiative to refine credit risk exposure reporting through the introduction of breakdown B.6.3. We agree that a clearer and more granular separation of exposures is essential to avoid double-counting and to ensure consistency in benchmarking internal credit risk models across institutions.
From a risk aggregation and data governance standpoint, we respectfully observe that even with well-designed templates, exposure overlaps or duplications can still arise due to operational issues such as inconsistent data mapping, variations in internal classification standards, or reconciliation challenges between front-office systems and regulatory reporting systems.
To further strengthen the EBA’s approach and minimize residual risks of double-counting, we propose that institutions be encouraged to:
- Apply robust operational reconciliation procedures between credit portfolios and reported templates.
- Disclose any known or suspected residual exposure overlaps when submitting benchmarking templates.
- Optionally provide an aggregated Residual Risk Unit (RU) figure reflecting the operational risk associated with credit exposure classification and reporting, thereby giving supervisors additional insight into the operational risk dimension of reported data.
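As a minimal sketch of the kind of reconciliation procedure suggested in the first bullet above, the check below flags exposures reported under more than one breakdown, a pattern that would double-count them in aggregate figures. The record layout and breakdown labels are hypothetical and are not taken from the EBA templates.

```python
# Minimal sketch of a reconciliation check that flags exposures reported
# under more than one breakdown. The record layout and breakdown labels
# are hypothetical, not taken from the EBA templates; the check assumes
# each exposure carries a unique internal identifier.
from collections import defaultdict

def find_duplicated_exposures(records: list[dict]) -> dict[str, list[str]]:
    """Return exposure IDs mapped to the breakdowns they appear in,
    restricted to IDs that appear in more than one breakdown."""
    seen: dict[str, list[str]] = defaultdict(list)
    for rec in records:
        seen[rec["exposure_id"]].append(rec["breakdown"])
    return {eid: bks for eid, bks in seen.items() if len(set(bks)) > 1}

# Hypothetical submission extract.
submission = [
    {"exposure_id": "EXP-001", "breakdown": "B.6.1"},
    {"exposure_id": "EXP-002", "breakdown": "B.6.3"},
    {"exposure_id": "EXP-001", "breakdown": "B.6.3"},  # overlap to disclose
]
overlaps = find_duplicated_exposures(submission)
print(overlaps)  # EXP-001 appears under both B.6.1 and B.6.3
```

Running such a check before submission, and disclosing whatever it surfaces, would give supervisors direct evidence that the residual double-counting risk has been actively managed rather than assumed away.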
 
Embedding operational risk awareness into credit risk benchmarking processes will improve the transparency and interpretability of benchmarking results and enhance the resilience of supervisory assessments.
We commend the EBA’s focus on addressing potential double-counting issues and encourage further steps toward integrating operational and data governance considerations into credit risk reporting standards over time.