Response to consultation on Regulatory Technical Standards that specify material changes and extensions to the Internal Ratings Based approach
Question 1. Do you have any comments on the clarification of the scope of the revised draft regulatory technical standards to specify the conditions for assessing the materiality of the use of an existing rating system for other additional exposures not already covered by that rating system and changes to rating systems under the IRB Approach?
The European Savings and Retail Banking Group (ESBG) welcomes the opportunity given by the European Banking Authority (EBA) through this consultation to share comments on its draft technical standards that specify material changes and extensions to the Internal Ratings Based approach.
Generally speaking, ESBG kindly invites the EBA to consider simplifying the process around IRB models as far as reasonable. Changes triggered by banks' ongoing reviews, aimed at improving model performance and adapting to new information, are typically viewed as part of the model life cycle. To avoid unnecessary strain on both regulators and institutions, it would be beneficial if regulation were simplified to allow more flexibility in treating these types of changes as requiring notification rather than prior permission. Waiting for prior permission may lead to the extended use of models that banks view as suboptimal, negatively impacting user acceptance of the IRB system. Alternatively, regulation could provide additional guidance to regulators and institutions on a simplified application process for material changes of this type, to enable faster implementation.
Par. 9, page 7 – “Updates to the data used in the ongoing application of the rating systems in order to calculate the risk weight exposure amount for the application portfolio should not be covered by this Regulation”. We welcome the EBA’s effort to clarify these aspects; nevertheless, it would be helpful to further clarify, with examples, what is meant by “updates to the data used in the ongoing application of the rating systems in order to calculate the risk weight exposure amount for the application portfolio”.
We would see it as particularly important to clarify that updates of the data in the context of the review of estimates and other ongoing approved processes, such as validation and monitoring, fall under the “update of the data used in the ongoing application of the rating systems” and thus do not fall within the scope of this RTS. In this context we suggest the following wording: “Extensions to the data time series based on approved methods, processes, controls, data collection and IT systems for the specific use in approved processes such as review of estimates, validation and model monitoring should not be covered by this Regulation”.
Additionally, we kindly request a clarification of the delineation and interplay between the so-called “return to compliance” topics (Art. 146 CRR) and the classification of model changes. As an example, we consider that data and IT corrections made to align with the approved status (e.g. in the context of ongoing data quality enhancements) should not be considered model changes.
Par. 10 – It would be helpful to provide examples for the new recital 4 of the draft RTS. For example, this new rule could be stated to apply to all types of annuity loans in the retail business, even if a new annuity loan product (with different features) is introduced.
Par. 12, page 7 does not exclude that changes stemming from CRR3 might be considered material. Here we would like to comment that changes stemming solely from changed regulatory requirements (especially where the changes are highly operationalized and leave no room for interpretation) should never be considered material unless backstop requirements apply (i.e. a breach of quantitative thresholds). We would highly welcome it if the EBA could specify any changes that would be considered material, to ensure a level playing field in the context of the CRR3 introduction.
The recitals of the proposed amendment to the Delegated Regulation will not be part of the final consolidated version published in the OJEU, which makes it difficult for institutions even to track them. Furthermore, it is not entirely clear how the new recitals relate to the existing recitals of the original Delegated Regulation 529/2014 and the amending Delegated Regulation 2015/924. For example, recital 2 of the proposed amending Delegated Regulation is confusing when read alongside existing recital 7, which refers to the “ongoing alignment of the models with the calculation dataset used”. It would be very helpful if the EBA could explain what “calculation dataset” means and how it differs from the “reference dataset”. In this context, it is also unclear what the “data for the application portfolio” mentioned in paragraph 8 of the consultation paper means. In our view, the latter can hardly be the input for a rating: it goes without saying that this input changes over time and that ratings are always based on the most up-to-date information, so it cannot possibly constitute a change to the rating system. To clarify this, it would be helpful if the EBA could provide some examples.
Question 2. Do you have any comments on the clarifications and revisions made to the qualitative criteria for assessing the materiality of changes as described in the Annex I, part II, Section 1 and Annex I, part II, Section 2?
Page 27, point iii (2f) – A classification of “changes to the (fundamental) methodology to derive appropriate adjustments” as a material model change (MMC) could trigger an excessive number of MMCs, because such adjustments (as well as their underlying methodology) are almost always derived ad hoc based on the specific situation at hand. We propose that only changes to the methodology by which appropriate adjustments are applied constitute an MMC, while the concrete operationalization (i.e. the determination of an appropriate adjustment) for a specific situation only qualifies as a non-material model change with ex-ante notification.
Par. 13, page 8 – “the rank ordering should be derived from the final estimates associated with the grades or pools”. We would be grateful if the EBA could clarify whether “final estimates” include MoCs and/or downturn. Further, we consider it necessary for the EBA to provide guidance and examples on how to interpret the “significant manner, the measure and level of which will have been defined by the institution” defined in Annex I, Part II, Section 1, point 2 (d), lit. (i) and (ii), page 27.
Par. 15, page 9 – As regards the definition of the significance level of a change in the Definition of Default (DoD) for a material change, it would be beneficial to have a clear specification of the metrics and thresholds by the EBA. The DoD is at the core of IRB models, and a unified understanding of the metrics and thresholds is needed for all banks. Clear guidance on significance thresholds would ensure a level playing field across jurisdictions and promote harmonised implementation among competent authorities and institutions.
We suggest using the term 'case-by-case assessment' from par. 58 of the Guidelines on the application of the definition of default (EBA/GL/2016/07) instead of 'manual reclassification default' in the context of the proposed additional disclosures on the 'unlikeliness to pay' in par. 15.
Par. 16, page 9 – This is a very important point, as institutions are currently not in a position to improve their validation frameworks in an efficient way. However, changes to a validation framework are typically neither strictly more conservative nor strictly more lenient: if a test is replaced with a new and better one, the effect will typically go in both directions. In this context, we would be grateful if the EBA could clarify, and provide metrics and thresholds for, what is considered a “more lenient judgment” of the accuracy and consistency of the estimation of the relevant risk parameters, the rating processes or the performance of the rating systems in case of changes in the validation methodology and/or validation processes. One possible solution would be to classify a change as material only if the validation assessment would be “systematically” (i.e. biased to be) more lenient. In addition, we would propose to trigger a material change only in case of changes in one of the main validation categories of back-testing and discriminatory ability, to avoid triggering material changes where only less significant validation tests are affected.
Question 3. Do you have any comments on the clarifications and revisions made to the qualitative criteria for assessing the materiality of extensions and reductions as described in the Annex I, Part I, Section 1 and Annex I, Part I, Section 2?
Par. 19, page 9/10 – Please confirm our understanding that extending the scope of application of an existing IRB rating system to additional exposures previously treated under the Standardized Approach (e.g. via the purchase of exposures from a third party or via an additional business unit) falls under the scope of the RTS as a change to the range of application of rating systems, i.e. that the classification as MMC or ex-ante notification is relevant and no new application under Art. 148(1) CRR is required.
Par. 22, page 10 – We welcome that this paragraph states that “the change in RWEA stemming from a change in the methodology for assigning exposures to different exposure classes would stem from applying a different prescribed RWEA formula, etc. rather than changes to the rating systems themselves”. This suggests that this “accepted” effect can be excluded from the impact calculations and the comparisons with the thresholds. At the same time, changes to the methodology for assigning an obligor to a rating system can also imply changes in the exposure class assignments. In this context, please clarify that, for changes in the assignment of exposures to exposure classes, the impact calculations shall indeed be computed so as to disregard impacts that stem directly from CRR3 requirements (i.e. the RWEA formula, input floor or other regulatory prescribed input). An explicit example of how the impact calculations and the “implied changes to the affected rating systems” mentioned in the paragraph are to be interpreted would be highly appreciated.
Question 4. Do you have any comments on the introduced clarification on the implementation of the quantitative threshold described in Article 4(1)(c)(i) and 4(1)(d)(i)?
Par. 24, page 11 – It would be helpful to provide an indication regarding the acceptable timeframe and the meaning of “same nature” (does it mean the same paragraph of the RTS?) in the case of “modifications of the same nature and to the same rating system that are implemented sequentially over time”. Furthermore, please clarify the timing of the application in case of a material model change triggered by sequential implementation: can it be assumed that it is based on the last implementation date? Lastly, it would be helpful to clarify how to proceed where model changes result from the closing of several regulatory obligations split over time.
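To illustrate the ambiguity we describe, the following minimal sketch (all figures hypothetical, and the aggregation rule merely one possible reading of “modifications of the same nature … implemented sequentially over time”) tracks the cumulative RWEA impact of a sequence of small changes against the baseline before the first change:

```python
# Hypothetical sketch: cumulative RWEA impact of sequential modifications of
# the same nature, measured against the RWEA baseline before the first change.
# Whether this is the intended aggregation (and over which timeframe) is
# exactly the clarification requested above.
from dataclasses import dataclass, field


@dataclass
class SequentialChangeLog:
    baseline_rwea: float                        # RWEA before the first change
    steps: list = field(default_factory=list)   # RWEA after each implementation

    def record(self, rwea_after_step: float) -> None:
        self.steps.append(rwea_after_step)

    def cumulative_impact_pct(self) -> float:
        """Combined impact of the whole sequence versus the original baseline."""
        if not self.steps:
            return 0.0
        return abs(self.steps[-1] - self.baseline_rwea) / self.baseline_rwea * 100


log = SequentialChangeLog(baseline_rwea=400.0)       # EUR m, hypothetical
for rwea in (392.0, 385.0, 378.0):                   # three small sequential changes
    log.record(rwea)
print(f"cumulative impact: {log.cumulative_impact_pct():.1f}%")
```

Under this reading, each individual step may stay below the materiality thresholds while the sequence as a whole crosses them, which is why the timing of the materiality assessment (e.g. at the last implementation date) matters.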
Par. 25, page 11 – Please confirm the understanding that, in case of a change affecting multiple rating systems (e.g. a change in the DoD or PD methodology), only the thresholds referred to in Art. 4(1)(c)(i) need to be calculated and considered for the materiality classification, and no check at the level of the single rating system, i.e. under Art. 4(1)(c)(ii), is necessary. Similarly, in case of an extension of the range of application affecting multiple rating systems (e.g. the purchase of corporate, SME and private individual portfolios), only the thresholds referred to in Art. 4(1)(d)(i) need to be calculated and considered for the materiality classification, and no check at the level of the single rating system, i.e. under Art. 4(1)(d)(ii), is necessary.
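The understanding requested above can be sketched numerically (all portfolio names and figures hypothetical; this is an illustration of the proposed reading, not a statement of the RTS mechanics): for a change affecting several rating systems at once, only the aggregate impact would be compared against the consolidated threshold.

```python
# Hypothetical sketch of the reading proposed above: a change affecting
# multiple rating systems is assessed only against the aggregate threshold
# of Art. 4(1)(c)(i), with no additional per-rating-system check.
def aggregate_impact_pct(rwea_before: dict, rwea_after: dict) -> float:
    """Percentage change in overall RWEA across all affected rating systems."""
    total_before = sum(rwea_before.values())
    total_after = sum(rwea_after.values())
    return abs(total_after - total_before) / total_before * 100


# EUR m, hypothetical: one methodology change touching three rating systems.
rwea_before = {"corporate": 900.0, "sme": 600.0, "retail": 500.0}
rwea_after = {"corporate": 885.0, "sme": 590.0, "retail": 501.0}

impact = aggregate_impact_pct(rwea_before, rwea_after)
print(f"aggregate impact: {impact:.2f}%")
```

In this example the aggregate figure alone would drive the classification; per-system impacts (which may individually be larger or smaller) would not be checked separately.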
Question 5. Do you have any comments on the revised 15% threshold described in Article 4(1)(d)(ii) related to the materiality of extensions of the range of application of rating systems?
Par. 28, page 12 – We would strongly prefer keeping the current approach, where the numerator is calculated as the difference between the RWEA assigned by the extended rating system and the RWEA assigned to the set of exposures before the extension. The reason is that the EBA’s arguments assume that a risk of weak model performance is to be covered, while regulation should not presume violations of legal requirements: a model must perform adequately on all relevant subportfolios, particularly on the extended scope. This would need to be ensured via relevant adjustments to the models, with any remaining deficiencies captured in the way the regulation envisages, i.e. by a margin of conservatism. For this reason, and because we encourage a clear and uniform approach in regulation, we consider it inconsistent to deviate from the way of measurement otherwise taken in the RTS, i.e. to measure RWA reductions.
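The numerator definition we prefer can be sketched as follows (figures hypothetical; the function is an illustration of our reading of the current approach, not of the revised Article 4(1)(d)(ii) wording): the impact of an extension is the change in RWEA on the extended set of exposures, relative to the rating system's RWEA.

```python
# Hypothetical sketch of the numerator definition preferred above for the
# Art. 4(1)(d)(ii) threshold: RWEA on the extended exposures after vs. before
# the extension, relative to the rating system's RWEA.
def extension_impact_pct(rwea_extended: float, rwea_before: float,
                         rwea_rating_system: float) -> float:
    """Impact of an extension as a percentage of the rating system's RWEA."""
    return abs(rwea_extended - rwea_before) / rwea_rating_system * 100


# EUR m, hypothetical: exposures previously under the Standardized Approach
# are brought into an existing IRB rating system.
impact = extension_impact_pct(
    rwea_extended=120.0,      # RWEA under the extended rating system
    rwea_before=150.0,        # RWEA of the same exposures before the extension
    rwea_rating_system=500.0, # RWEA of the rating system's range of application
)
print(f"extension impact: {impact:.1f}%")  # 6.0%, below a 15% threshold
```

Measured this way, the comparison captures the RWEA movement actually caused by the extension, consistent with how RWA reductions are measured elsewhere in the RTS.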
Question 6. Do you have any comments on the documentation requirement for extensions that require prior notification?
Requiring the entire documentation catalog (validation report and technical documentation) for extensions requiring prior notification would be disproportionate. It would not only unnecessarily delay meaningful model changes, but also significantly impede the efficiency of models developed jointly and operated by a central servicer (pool models).
For the updated Article 8, it is unclear what the difference is between point (e), “Reports of the institution’s assessment of the model performance of the rating system after the change”, and the validation reports mentioned in point (f), “Reports of the institutions’ independent review or validation”. If, for example, a full validation is provided, would this be considered sufficient to cover both points (e) and (f), or are separate reports still expected?
Par. 32, page 14 only refers to the assessment report of the validation function, while Article 8 refers more generally to the “reports of the institutions’ independent review or validation”, which also opens the possibility for internal audit to perform the assessment. Should this requirement remain, please align Par. 32 accordingly. More generally, however, the effort for credit institutions to produce and submit the documentation for an ex-ante notification of an extension of the range of application of a rating system would be the same as for a material model change, which places a considerable burden on them. While it is true that competent authorities might be more efficient in challenging the materiality classification of a given change, the same would be true for any other change classified as ex-ante notification. As extensions occur frequently, the burden on institutions will be massive and will outweigh the actual benefit on the supervisory side, as long as there is no significant track record of banks having submitted wrongly classified extensions. Even in that case, the new clarifications on the representativeness analysis pointing to CDR (EU) 2022/439, as required by the RTS, will help to raise the quality of notifications. We strongly propose setting additional requirements only if there is evidence that these new clarifications do not yield the desired quality improvements in the notifications.