Response to consultation on Regulatory Technical Standards that specify material changes and extensions to the Internal Ratings Based approach


Question 1. Do you have any comments on the clarification of the scope of the revised draft regulatory technical standards to specify the conditions for assessing the materiality of the use of an existing rating system for other additional exposures not already covered by that rating system and changes to rating systems under the IRB Approach?

We consider the clarification of the scope to be generally useful. However, it is also important to avoid any ambiguity about what is meant by “changes to development and calibration processes (including the respective reference datasets) to align with the approved methods, processes, controls, data collection and IT systems”. In particular, a significantly higher number of ex-ante notifications caused by a misinterpretation of the scope of the RTS should be avoided. More precise wording or practical examples would be helpful to fully grasp the intended scope. 

For instance, it should be clarified that merely extending the quantification and calibration samples with additional years of data during the review of estimates is not a change made to align with the approved methods, processes, controls, data collection and IT systems; it is done to ensure that the most recent data are considered. If, however, after adding the additional data it turns out that actual changes to the scorecard/model are needed (e.g. a MoC has to be added or removed), then such a change would indeed represent a change in the development and calibration processes to align with the approved methods, processes, controls, data collection and IT systems and would therefore be in scope of the RTS. 

We also suggest fine-tuning the recital which mentions the “updates to the data used in the development and calibration of the rating systems” to clarify what is meant by those updates.  

It is also not entirely clear what is meant by “updates to the data used in the ongoing application of the rating systems”.  

If no further clarifications are provided, there is a risk of misunderstanding the targeted scope, which might result in an unnecessarily high number of ex-ante notifications and thereby a high burden for both the institutions and the competent authorities. 

When it comes to “New origination of facilities that are of a type of exposure already rated under the IRB approach”, the EBA should also clarify whether the acquisition of a portfolio and the subsequent use of the IRB approach (already approved for the existing portfolio) for the newly acquired portfolio would fall into this category. 

Question 2. Do you have any comments on the clarifications and revisions made to the qualitative criteria for assessing the materiality of changes as described in the Annex I, part II, Section 1 and Annex I, part II, Section 2?

We appreciate the revision of the materiality criteria for changes related to the definition of default and to the validation methodology and processes. With respect to the changes in the definition of default, we would like to highlight two important aspects.  

Firstly, it would be relevant to clarify what is meant by “changes whether an indication of Unlikeliness to Pay results in an automatic or in a manual default reclassification”.  

  • In our understanding, this criterion is meant to cover UtP indicators that before the change led to an automatic default recognition (without any case-by-case assessment or human judgement), whereas after the change the same UtP indicator would lead to an assessment of whether a default is indeed to be recognised, with the default then being recognised manually if deemed justified. In other words, such a UtP indicator would stop being a mandatory (“hard”) default indicator in all cases and would instead trigger a case-by-case assessment and a manual default recognition where the bank concludes that the obligor/facility is unlikely to repay the obligation in full. 
  • If the EBA also intended to cover the other direction of the change, i.e. moving from an individual case-by-case manual recognition of default due to a certain UtP indicator to an automatic default recognition in all cases due to the same UtP indicator, with no human judgement involved anymore, this should also be clarified. 

In any case, we strongly oppose classifying such changes as material in all cases. The classification should depend on the significance of the quantitative impact and on the assessment of the newly introduced qualitative backstop measure. 

Secondly, we would ask the EBA to consider cases where local regulators introduce mandatory changes to the default (de-)recognition criteria which the institution is obliged to implement. It would be beneficial to analyse such changes, i.e. mandatory changes to the DoD stemming from changes in local legislation, based on the quantitative criteria and the qualitative backstop. In other words, irrespective of the nature of the change (whether it concerns the 90+ DPD indicator or any other aspect of the DoD), a change caused by local legal requirements should be considered material if either the RWA impact is significant at the level of the rating system or at consolidated level in line with Article 4, paragraph 1(c), or if it affects the default classification of the exposures in the range of application of a rating system in a significant manner based on the metrics and thresholds defined by the institution. This is of special importance for institutions/groups where the same DoD/internal model is used at both local and consolidated level.

With respect to changes in the validation methodology and/or validation processes, the overall approach of treating as material only those changes that lead to a more lenient judgement is understood and supported. However, it should be noted that there are many situations in which changes in the validation methodology can clearly be expected to lead to a more tolerant or a stricter judgement. For example, when the representativeness methodology is criticised by the competent authority, it is explicitly expected that the change in this methodology will lead to a different assessment. If such a change results in a more lenient assessment, in our view it would not be an appropriate use of resources for the institution and the competent authority to process it as a material change. Therefore, not only the direction of a change but also its magnitude should be accounted for when assessing its materiality.

Question 3. Do you have any comments on the clarifications and revisions made to the qualitative criteria for assessing the materiality of extensions and reductions as described in the Annex I, Part I, Section 1 and Annex I, Part I, Section 2?

We seek clarification on the remark in paragraph 19 stating that “in accordance with Article 148(1) of Regulation (EU) 2024/1623, additional exposures that were not risk weighted by another rating system (i.e. under the Standardized Approach or by F-IRB if the scope of an LGD model is extended) require in any case an approval by the competent authority and are not within scope of this RTS”. It should be clarified whether such a requirement applies to acquired portfolios. The current version of the RTS in place allows that, if an institution can prove the representativeness and comparability of the portfolio, the already approved IRB rating system can be extended in the form of an ex-ante notification, without considering whether the acquired portfolio is treated under the standardised approach.  

Acquired portfolios are calculated under the standardised approach until the institution is able to incorporate the relevant data into its IRB models, owing to historical data availability issues. In many cases, the already approved IRB models must also be adjusted to cover the acquired portfolio. Such changes to the originally approved model are captured by other paragraphs of the RTS. Under the current version of the RTS, an ex-ante notification is sufficient to adjust the already approved IRB model and extend its use to the acquired portfolio. If the acquired portfolio consists of the same type of exposure as one for which the institution already has IRB approval, then the use of the IRB approach should not require a new IRB approval even if the exposures were calculated under the standardised approach following the acquisition (see also paragraph 10 of the Consultation Paper). 

Question 4. Do you have any comments on the introduced clarification on the implementation of the quantitative threshold described in Article 4(1)(c)(i) and 4(1)(d)(i)?

The appropriate approach to the materiality assessment is to compare the change against the threshold in Article 4(1)(c)(i) for each rating system individually. It can often happen that, even if one change affects several rating systems, the magnitude of its effect differs significantly across the affected rating systems, i.e. the impact on one rating system might be very insignificant, whereas for another rating system the impact might indeed be material. In this situation, it would not be substantiated to assess the change as material for all rating systems on the basis of an overall impact calculated for all rating systems together when it is known that for some rating systems the impact is immaterial, as shown in the illustration below.
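
Purely as an illustration (the RWA figures and the 15% threshold below are hypothetical placeholders, not the actual values or wording of Article 4(1)(c)(i)), the following minimal sketch shows how an impact aggregated over all affected rating systems could flag a change as material for every rating system, even though one of them is barely affected:

```python
# Hypothetical illustration of per-rating-system vs. aggregated materiality assessment.
# The 15% threshold is a placeholder and the RWA figures are invented for this example.
THRESHOLD = 0.15

rwa_before = {"rating_system_A": 1_000, "rating_system_B": 500}
rwa_after = {"rating_system_A": 1_010, "rating_system_B": 900}

# Per-rating-system assessment: only rating_system_B exceeds the threshold.
for rs in rwa_before:
    impact = abs(rwa_after[rs] - rwa_before[rs]) / rwa_before[rs]
    print(f"{rs}: impact {impact:.1%}, material: {impact >= THRESHOLD}")

# Aggregated assessment: the combined impact exceeds the threshold, which would
# classify the change as material for both rating systems, although the impact
# on rating_system_A is only about 1%.
total_impact = abs(sum(rwa_after.values()) - sum(rwa_before.values())) / sum(rwa_before.values())
print(f"aggregated: impact {total_impact:.1%}, material: {total_impact >= THRESHOLD}")
```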

In addition, we would ask the EBA to reconsider the wording of Article 3, paragraph 3, point (b), which refers to “modifications of the same nature and to the same rating system that are implemented sequentially over time”, as “sequentially over time” is a rather vague formulation and may result in practical limitations.  

For instance, “sequentially over time” can be understood in very broad terms and time horizons: it can mean months, years or any longer period of time. In practice, for example, one rating system can be affected by various supervisory obligations that lead to changes or extensions to that rating system spread over longer periods (e.g. half a year, a year or more). Model developers schedule the resolution of obligations within the model life cycle and bundle together those that can be addressed in regular recalibrations or parameter re-estimations, or that require full redevelopments. It is not clear whether “same nature” can be understood to accommodate such a practical separation fitting the model life cycles. A similar difficulty might arise in other situations, e.g. in the case of changes in legal requirements or internal audit findings that ultimately result in the need to perform several changes to a rating system. 

In addition, a rating system can consist of various rating models. Model life cycles (development and validation activities) and supervisory obligations focus on rating models, and the various rating models within a rating system can have different timings for development and maintenance. This is also in line with the expectations of the EBA Guidelines on PD estimation, LGD estimation and the treatment of defaulted exposures, which specify the life cycle of a model and not of a rating system. When a change has been developed, validated and assessed for one rating model, why should the institution have to wait for changes to other rating models in the same rating system? Or can “the same nature” be understood to mean that changes to one rating model are of a different nature than changes to another rating model? 

Moreover, it is not always possible to schedule and predict all changes needed to a rating system far in advance, as many other factors might change and require a corresponding change to the rating system. Apart from more predictable events such as periodic validation, there might be changes in the business strategy of the institution, changes in portfolios, various local changes mandatory for legal entities of a group (especially relevant for central models), etc. There is therefore room for misinterpretation and uncertainty about what is meant by “modifications of the same nature and to the same rating system that are implemented sequentially over time” and about which period of time is appropriate for assessing compliance with this criterion. 

We consider that the existing requirement in Article 3, paragraph 3, stating that “One material extension or change shall not be split into several changes or extensions of lower materiality”, already sufficiently covers the intended requirement and has been diligently implemented and practised by institutions.

Question 5. Do you have any comments on the revised 15% threshold described in Article 4(1)(d)(ii) related to the materiality of extensions of the range of application of rating systems?

In our opinion, an RWA increase of 15% would not indicate that a rating system might not perform adequately for the additional exposures to which its range of application is extended.  

The performance of the rating system should be assessed by the validation function during its regular yearly review. However, setting a threshold for the assessment of the materiality of a change in an attempt to capture this type of risk does not seem appropriate. The quantitative threshold defined in Article 4(1)(d)(i) should be sufficient to provide a backstop in terms of RWA impact. Instead of the revised 15% threshold described in Article 4(1)(d)(ii), we suggest combining it with qualitative criteria, i.e. classifying an extension as material only where the 15% threshold has been reached AND a significant drop has been observed in the performance of the rating system to which the range of application has been extended (see the sketch below). 
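
As a purely illustrative sketch of the combined criterion we have in mind (the 15% figure comes from the text above; how a “significant drop” in performance is measured would be defined by the institution and is represented here only as a flag):

```python
# Illustrative sketch of the proposed combined criterion for extensions:
# material only if BOTH the quantitative threshold is reached AND a significant
# performance deterioration is observed on the extended range of application.
def extension_is_material(rwa_increase: float,
                          significant_performance_drop: bool,
                          rwa_threshold: float = 0.15) -> bool:
    return rwa_increase >= rwa_threshold and significant_performance_drop

# Example: a 20% RWA increase without any observed performance deterioration
# would not, under this proposal, be classified as material.
print(extension_is_material(rwa_increase=0.20, significant_performance_drop=False))  # False
print(extension_is_material(rwa_increase=0.20, significant_performance_drop=True))   # True
```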

The envisaged quantitative threshold at the rating system level would also disproportionately penalise smaller rating systems.

Question 6. Do you have any comments on the documentation requirement for extensions that require prior notification?

We would like to highlight that requiring a validation report for all ex-ante notifications related to extensions of the range of application of a rating system, irrespective of the size of the portfolio to which the rating system is extended, is a significant additional burden for the institutions and specifically for the validation function.  

In both scenarios, i.e. whether the institution has to wait for the periodic validation before submitting the notification or the validation function has to perform an ad-hoc assessment before the notification, this requirement creates additional obstacles for an ex-ante notification, operational difficulties and an increased workload for validation.  

  • In the first scenario, the flexibility to perform a change that requires a prior notification at the time it is needed by the institution is lost, as institutions would always have to align the timing of ex-ante notifications with the periodic validations. This would significantly slow down the implementation of the extension and prevent it from being carried out within the optimal and most reasonable timeframe. 
  • In the second scenario, the validation function would require additional time, cost and effort to adjust its planned validation activities to accommodate the need to perform an ad-hoc validation. 

In addition, it is not clear whether such validation reports should follow the full requirements for an initial validation or a regular validation, or whether a dedicated validation framework should be defined for such ad-hoc validations. The EBA supervisory handbook on validation (EBA/REP/2023/29) currently expects that, for non-material changes and extensions, the review of the change could be performed during the regular (yearly) validation activities, but by performing the applicable tests from the initial validation. 

We do not see that the additional burden and costs are justified in view of the questionable added value for the purpose of this RTS. The validation function should continue to assess the rating systems’ performance as required, and whenever the validation identifies any concern about a rating system’s performance or any other issue, it must be addressed in an adequate and timely manner, which is already ensured by various requirements stemming from the CRR and from EBA and ECB products. However, the timing of validation assessments should not be tied to the timing of extensions requiring prior notification, or vice versa, and validation should also not be mixed with the purpose of this RTS, which is to define criteria for the materiality classification of extensions and changes to the IRB Approach.

We would also suggest that no additional validation assessment should be required if the extension concerns a portfolio representing less than 15% of the original portfolio. 


Name of the organization

European Association of Co-operative Banks (EACB)