Response to consultation on Regulatory Technical Standards that specify material changes and extensions to the Internal Ratings Based approach
Question 1. Do you have any comments on the clarification of the scope of the revised draft regulatory technical standards to specify the conditions for assessing the materiality of the use of an existing rating system for other additional exposures not already covered by that rating system and changes to rating systems under the IRB Approach?
In general, we appreciate the effort to clarify the scope of the revised draft regulatory technical standards (RTS). The elaboration on data constitutes an improvement in the interpretation of the Level 1 text. However, we would like the correction of errors and minor adjustments necessary for the day-to-day maintenance of the models, which occur strictly within the limits of the already approved methods, processes, controls, data collection and IT systems, to be added to the items out of scope, as suggested in the draft RTS on the materiality of extensions under the FRTB (EBA/CP/2023/36) as well as in Recital 7 of Regulation (EU) No 529/2014.
Regarding the scope of the RTS, it would be helpful if further clarification and examples were provided on the types of data updates that fall within the scope of the RTS (updates to data used in development and calibration) and outside it (updates to data used in the ongoing application). For example, where along this spectrum would the following change sit: sourcing additional data on collateral values and financial information, on top of the data obtained from the already approved sources and methods?
In relation to the changes to rating systems, it is mentioned that changes not affecting the performance of a rating system are not within the scope of the RTS, irrespective of whether they are mandatory and even if they have a potential impact on the RWA calculation. However, changes to the methodology for assigning exposures to exposure classes are always considered within the scope of the RTS due to their potential impact on the risk parameters. Nevertheless, not all changes to the definition of exposure classes affect the internal risk estimates, as the range of application of the rating systems may not vary. The range of application of each rating system is defined based on management practices and the business model, so that the range is consistent over time, in accordance with paragraph 13 of EBA/GL/2017/16, although this could imply a potential misalignment with the exposure classes defined for the RWA calculation. This interpretation is also supported by the Basel Framework, standard CRE 30.5:
“The classification of exposures in this way is broadly consistent with established bank practice. However, some banks may use different definitions in their internal risk management and measurement systems. While it is not the intention of the Committee to require banks to change the way in which they manage their business and risks, banks are required to apply the appropriate treatment to each exposure for the purposes of deriving their minimum capital requirement. Banks must demonstrate to supervisors that their methodology for assigning exposures to different classes is appropriate and consistent over time.”
Regarding the amendment made in paragraphs 8 and 9 of Section 3.2, we understand it to mean that the historical data used to estimate models approved by the JST cannot be changed without assessing whether the change is material or non-material under this RTS. A model is built and approved on the basis of historical data; however, these data must be updated continuously for the model to remain relevant, not only to adequately capture the latest business developments of banks (such as enhancing the granularity of data), but also to better align the economic capital allocation with prudential regulation and sector practices. Since such an update of data and recalibration of indicators does not change the model's methodology and should therefore not require a full approval, it should be accommodated through a notification procedure.
Furthermore, we would also encourage the EBA to confirm that updates to the data used in the ongoing application of the rating systems, in order to calculate the risk-weighted exposure amounts for the application portfolio, were never covered by this Regulation.
Under paragraph 11 of Section 3.2, the EBA states that changes in the parameters Maturity (M) and Total Annual Sales (S) and in the SA-CCF assignment to off-balance-sheet items which solely affect the formula used for the RWEA calculation should not be within the scope of the RTS on model changes, as they do not directly affect aspects within the scope of a rating system. With this differentiation for changes which solely affect the formula used for the RWEA calculation, we would like the EBA to clarify whether changes in the assignment of F-IRB LGDs to exposures would also be considered part of these items, in the same way as the SA-CCF assignment to off-balance-sheet items. Moreover, we would appreciate clarification on additional topics related to changes in the capital engine that do not affect the regulatory models and do not relate solely to the formula used for the RWEA calculation. We therefore consider that the following examples should be considered out of the scope of these RTS:
• Populate the amortisation tables of loans in order to apply the M of Article 162(2)(a).
• Identification of trade finance products in order to apply the M of Article 162(3).
• Identification of covered bonds in order to apply the LGD of 11.25%.
• Classification or reclassification of products/portfolios into the buckets of Article 111(2) and Annex I in order to assign SA-CCFs where an institution has not received permission to use A-IRB CCFs (Articles 166(8), 166(8)bis and 166(8)ter).
• Identification of new products that could fall under the CCF of Article 166(8)(b) in order to apply SA-CCFs.
• Identification of revenues in the application of the SME factor.
• Starting to apply an option that is directly stated in the CRR (e.g. Article 161(7) states that an institution shall be permitted to apply Article 230 even for funded credit protection that cannot be included in an A-IRB LGD because of a lack of data).
• Identification of new products that could fall under the CCF of Article 166(8) or 166(10) (F-IRB CCF).
• Changes in the application of the SME factor.
• Application of the F-IRB approach to the corporate and institutions exposure classes (e.g. application of CRMT, application of new real guarantees to apply the LGD of Article 230 (F-IRB), recognition of additional credit protection in the regulatory LGD under F-IRB).
• Implementation of the regulatory floors (e.g. related to the PD and LGD parameters, or to the 1-day floor).
• Actions to be conducted to implement regulatory add-ons as per Final Decision letters from the supervisor.
• Actions to be conducted to implement the remediation actions committed to with the supervisor which are duly and timely notified in accordance with the remediation plan update instructions requested by the supervisor.
• Alignment between parameters used for internal business purposes and those used for regulatory capital purposes regarding Article 179(1) CRR and paragraphs 208-210 of the EBA GLs on PD and LGD estimation.
• Changes in the internal methodological guidelines and standards.
• Consideration of the year as 365.25 days in the calculation of the maturity of the total assets of the customer.
• The adaptation of institutions' internal policies on IRB change management to implement these RTS.
• Changes to exposure classes that do not affect the models, which should be excluded from the scope of the RTS.
In addition, apart from the validation process, it is our understanding that the other rating system automation processes that have no impact on the model do not fall within the scope, irrespective of the quantitative impact on capital.
Regarding the new Recital (2), the current wording may have some unintended consequences. In particular, we think that a recalibration after back-testing, solely as a mechanical effect of adding one additional year of defaults, should be considered out of scope in order to avoid a burden for both banks and supervisors. Our recommendation is that such recalibrations be treated as part of the "ongoing application of rating systems", which should be clarified. Moreover, should Recital (2) be left unchanged, in the case of machine learning models, which may be updated frequently, such a recital would not be fit for purpose and could deter the use of such techniques. We therefore propose that the distinction between in scope and out of scope be based on human intervention: where a human intervenes in the decision to make a change, the change should be in scope; where no human intervention is needed, the change should be out of scope.
In conclusion, we propose the following modification of Recital (2):
Text proposed by the EBA CP
Amendment proposed
Changes to rating systems as defined in Regulation (EU) No 575/2013 may have a potential impact on the internal risk estimates used for risk weighted exposure amount calculation, and as such include changes affecting the range of application of a rating system, the rating methodology for IRB systems, the definition of default and the validation framework as well as changes to relevant processes, data and the use of the models. Updates to the data used in the development and calibration of the rating systems should therefore be covered by this Regulation. However, updates to the data used in the ongoing application of the rating systems in order to calculate the risk weight exposure amount for the application portfolio should not be covered by this Regulation.’
Changes to rating systems as defined in Regulation (EU) No 575/2013 may have a potential impact on the internal risk estimates used for risk weighted exposure amount calculation, and as such include changes affecting the range of application of a rating system, the rating methodology for IRB systems, the definition of default and the validation framework as well as changes to relevant processes, data and the use of the models. Updates to the data used in the development and calibration of the rating systems with the need for mechanical recalibration should therefore be covered by this Regulation. However, updates to the data used in the ongoing application of the rating systems in order to calculate the risk weight exposure amount for the application portfolio, or updates to the data used in the development and calibration of the rating systems without mechanical recalibration, should not be covered by this Regulation. In the case of mechanical recalibrations following annual updates of data, such changes could be subject to ex-post notifications. In addition, changes to remediate data quality issues (e.g. amending missing/incorrect LTV input data) in order to improve the modelling framework are not covered by this Regulation.
Moreover, we understand that the EBA is building on its statement made in the context of CRR3, which segments the classification of changes based on whether they affect model performance. In this context, we understand that the need for a model change submission depends on whether the changes affect the performance of the models. We would therefore advocate that the RTS state permanently that changes imposed by regulation which do not affect the performance of the rating systems are out of scope.
In paragraph 12 of the Consultation Paper, the EBA aims to clarify that changes due to regulatory requirements without institution-specific room for manoeuvre, which are mandatory under CRR3 and do not affect the performance of a rating system, should not fall within the scope of the RTS nor be subject to an application. They should therefore not be reported as changes; this clarification is very welcome.
However, we would like to point out that most of the requirements of CRR3 have already been implemented by institutions, so the proposed regulation has only a minor impact in this respect. We would therefore advocate that this principle also be applied to future regulatory projects of comparable scope, i.e. after CRR3.
Furthermore, we are of the view that the alignment of the documentation or implementation of the ECB-approved model (pre-notification or material change) resulting from the proper functioning of the control environment should also be excluded from the scope of these RTS. Corrections of errors in the technical implementation of the model, which are not due to a recalibration of the model, should be out of scope (for instance, allocation spreadsheets).
Question 2. Do you have any comments on the clarifications and revisions made to the qualitative criteria for assessing the materiality of changes as described in the Annex I, part II, Section 1 and Annex I, part II, Section 2?
Firstly, it is very important to ensure consistency in the implementation of these RTS, since, as currently written, they leave too much room for interpretation among different supervisors in different jurisdictions. We believe that different criteria or considerations should not be applied by supervisors depending on either the jurisdiction or the institution.
In the new RTS, changes in the methodology used for assigning exposures to different exposure classes (according to Article 147 of CRR3) are reclassified as ex-ante notifications. However, the EBA is mandated under Article 143(5) of CRR3 to draft "standards to specify the conditions for assessing the materiality of the use of an existing rating system for other additional exposures not already covered by that rating system and changes to rating systems under the IRB Approach". The EBA therefore presumes that rating systems and exposure classes are interlinked, meaning that any change to exposure classes will impact rating systems. However, such a link does not always exist in practice. We therefore think that such changes should be excluded from the scope of the RTS where they do not affect the models. This would be consistent with the EBA's stance of excluding aspects falling outside the rating systems which may only affect the RWA formula.
We would like to propose the amendment of the RTS (Annex I, Part I, Section 1) to clarify changes to the definition of default in accordance with Article 178 of Regulation (EU) No 575/2013, as follows:
EBA proposal
New proposal
3. Changes in the definition of default according to Article 178 of Regulation (EU) No 575/2013, if any of the following conditions are met:
(a) they change the method to identify if the obligor is more than 90 days past due on any material credit obligation according to Article 178(1)(b) of Regulation (EU) No 575/2013;
(b) they change the level of application of the definition of default for retail exposures according to Article 178(1), second subparagraph of Regulation (EU) No 575/2013;
(c) they change the use of external data according to Article 178(4) of Regulation (EU) No 575/2013;
(d) they change whether an indication of unlikeliness to pay according to Article 178(3) of Regulation (EU) No 575/2013 results in an automatic or in a manual default reclassification;
(e) they change the default classification in the reference dataset or scope of application of a rating system in a significant manner, the measure and level of which will have been defined by the institution.
3. Changes in the methodologies and rules relating to the definition of default according to Article 178 of Regulation (EU) No 575/2013, if any of the following conditions are met:
[deleted] (a) they change the method to identify if the obligor is more than 90 days past due on any material credit obligation according to Article 178(1)(b) of Regulation (EU) No 575/2013;
(a) [formerly (b)] they change the level of application of the definition of default for retail exposures according to Article 178(1), second subparagraph of Regulation (EU) No 575/2013;
[deleted] (c) they change the use of external data according to Article 178(4) of Regulation (EU) No 575/2013;
[deleted] (d) they change whether an indication of unlikeliness to pay according to Article 178(3) of Regulation (EU) No 575/2013 results in an automatic or in a manual default reclassification;
(b) [formerly (e)] they change the default classification in the reference dataset or scope of application of a rating system in a significant manner, the measure and level of which will have been defined by the institution. Institutions could define such metrics, for instance, based on the relative volume of defaults affected.
Regarding the clarifications on the qualitative criteria, in particular the framework to assess the significance of (i) changes in the rank ordering and (ii) changes in the distribution of obligors, facilities and exposures across grades or pools, and the fact that the analysis is made at the level of the final risk parameters, we provide the following comments:
- Given the requirement to have a MoC at grade level (as per the ECB Guide to internal models), a change in the MoC could influence the rank ordering or the distribution even where the main structure of the model is unchanged. As such, the analysis should be done on the final parameter both before and after the MoC. The same holds where a supervisory limitation operates at the level of a single parameter (PD, LGD or CCF) and a model change is expected to address the related obligation: whereas its consideration in the quantitative RWA criteria is understandable, for the qualitative assessment the interference of such a limitation in the assessment of the change and in the analysis of rank ordering and distribution should be avoided, since it is ultimately a purely additional supervisory conservative measure applied on top of the model.
- More generally, the framework of analysis on the final parameter should differentiate between changes affecting risk differentiation (expected to be more intrusive in the structure of the model) and changes affecting risk quantification (resulting, for example, from a pure extension of the time series or from changes in pure risk quantification components such as MoCs or the LGD downturn). Indeed, the inclusion of additional years in a pure recalibration (rather than a basic review of a MoC) could influence the rank ordering and rating distribution (since the parameter could increase or decrease) without generating changes in the rating criteria or the risk differentiation features of the model. This is also clearly reflected in Annex I, Part II, Section 1, letters (d) and (f), which differentiate between changes affecting the risk differentiation part (letter (d)) and the risk quantification part (letter (f)). From the textual formulation of these two letters (even in the newly drafted amended version), it appears clear that the checks on rank ordering and rating distribution are particularly relevant for changes in the rating criteria as referred to in Article 170(1)(c) and (e) and Article 170(4) (i.e. letter (d)), rather than for changes to the methods for estimating PDs, LGDs (including best estimate of expected loss) and conversion factors according to Articles 180, 181 and 182, which pertain to risk quantification (i.e. letter (f)). Therefore, and without prejudice to the check on the quantitative RWA impact, banks need a framework that allows for an appropriate differentiation according to the nature of the change (i.e. whether it relates to risk differentiation under Article 170/letter (d) or to risk quantification under Articles 180-182/letter (f)) when assessing the outcomes of the rank ordering and grade/pool distribution checks.
- Finally, we disagree with the new approach defined for the rank ordering assessment under the Slotting Approach. The Slotting Approach is a purely regulatory approach with just four possible performing grades; as such, it is disproportionate to include the SSCA under the ordinary framework foreseen for the other IRB models. Specialised lending portfolios are, by definition, characterised by a limited number of observations and, consequently, by an inherent volatility that is not necessarily due to the model change itself but to the features of the portfolio snapshot considered at the specific point in time of the assessment. We deem the previous formulation of CDR 529/2014 for this perimeter appropriate to manage the specificities of the SSCA.
Finally, while we acknowledge that focusing on final grades can be a valid alternative, we are deeply concerned that excluding other statistically sound options imposes undue burdens on modelling efforts. In practical terms, demonstrating such tests would necessitate the development of a shadow estimation of the risk parameter. Therefore, we recommend retaining this as an open option rather than prescribing this option as the sole valid alternative.
As mentioned in criterion 2(f) of Annex I, Part II, Section 1 of the proposed regulation, a change in the fundamental methodology for estimating PD/LGD now encompasses the methodology for deriving appropriate adjustments and should be considered a material change (otherwise an ex-ante notification). In these cases, it is within the remit of the bank to define what constitutes a "change in the fundamental methodology". Difficulties may arise in this exercise because, from a supervisory perspective, this fundamental character cannot be based solely on the RWA impact. Depending on the size of the exposures/models for the bank, the boundary between material and non-material changes is captured more objectively through the RWA impact than through more subjective criteria determined by the bank. This is why the boundary between material and non-material changes to the methodologies for estimating PD/LGD should be based on quantitative criteria (RWA outcomes), the metrics for which are defined by the bank. This possibility should be explicitly stipulated in the regulation.
The clarifications and revisions to the qualitative criteria for assessing the materiality of changes are well-received. We recognise that changes in the definition of default (DoD) may have material implications for the rating system.
However, we fail to understand why changing an indication of unlikeliness to pay from a manual process to an automatic reclassification should be deemed a material change if it is merely an automation of the approved manual process. In particular, we consider that the following cases should be specified and placed out of the scope of these draft RTS:
- Changes in the marking criteria (e.g. thresholds, calculation formulas, etc.) of UTPs.
- Changes due to obligations or recommendations by IMIs or OSIs that do not result in a change to the model, being handled instead through the remediation plan agreed with the supervisor.
- Changes to rules that do not result in a change to the model; that is, any change imposed by new regulatory requirements should not be considered material.
In addition, the requirement regarding what is meant by “changes whether an indication of unlikeliness to pay results in an automatic or in a manual default reclassification” remains quite vague. While it appears clear from the EBA position that modifying a UTP default event trigger from automatic to manual generates a material change, we understand that changes from manual to automatic, rather than changes to UTPs in general, shall be notified at minimum as ex-ante changes. In practice, UTP detection starts from the elementary early warning signals underlying the portfolio credit monitoring process. The system of early warning signals (EWS) strictly pertains to the credit operations of the banks and is subject to ongoing updates and fine-tuning, thus requiring timely execution. The boundaries of this requirement in the draft RTS are unclear, and there is a risk of introducing severe slowdowns to credit operations if an ex-ante notification is also foreseen for changes introduced at the level of elementary indicators in the EWS (which may, even if indirectly, have an impact on default detection). As such, this point deserves further attention and clarification in defining the boundaries of the perimeter of application of the RTS: in our opinion, process changes pertaining purely to credit operations at the level of the elementary indicators of the early warning system should not be in the scope of this RTS; only changes in the rules and methodologies of default detection should be triggered.
In the same vein, we would appreciate more clarity on two points: what is to be considered a “method to identify if the obligor is more than 90 days past due (dpd)”, and whether all cases of roll-out (sequential implementation of the IRB approach) are out of the scope of the RTS, with reference to Section 3.4, paragraphs 19 and 20.
Finally, regarding changes to the validation methodology and process, we believe room for flexibility should be introduced in the sentence “For example, changes to traffic light thresholds of test metrics leading to a more positive validation result are deemed a material change; however, where such changes lead to an equally strict or more conservative validation result, an ex-ante notification is deemed appropriate. For this purpose, institutions should carefully consider the impact of the change on aggregated test outcomes where thresholds are set at a level higher than an individual test metric.” Our understanding is that simple changes to traffic light thresholds of test metrics which lead to a more positive validation result are deemed a material change. On the other hand, there could be changes in the aggregation workflow of a test executed at different levels, which would have to be simulated to determine the direction and the classification of the change. Since the validation framework may apply to a large number of models across a banking group, a full simulation exercise would be very burdensome; for this reason, we deem that the institution should have the possibility to sample the models to simulate, using appropriate materiality criteria, and to classify the change based on the outcome for the sample. Moreover, small fluctuations may occur, e.g. most of the final test outcomes are the same except for a couple that are less severe; in this case, we deem that a classification as material would be incorrect. Furthermore, there are cases in which the institution should have the possibility to perform a qualitative classification assessment to complement the “mechanical” outcome of the simulation. For instance, if the institution introduces or reviews a materiality concept in the aggregation workflow which penalises the immaterial component less, such an intervention is per se more lenient, but the final simulated outcome on the sampled models could be very marginal. Changes leading to equally strict or even more conservative results (e.g. stricter thresholds, additional tests or control steps) should require an ex-post notification instead of an ex-ante one; otherwise, sensible and conservative changes could be unnecessarily delayed.
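Purely as an illustration of the sampling-based classification described above, the following is a minimal Python sketch; the traffic-light ordering, the worst-colour aggregation rule, the sample size and the tolerance for isolated less severe outcomes are all hypothetical assumptions of ours, not elements of the draft RTS or of any supervisory framework:

```python
import random

# Hypothetical traffic-light ordering: a higher value means a more severe outcome.
SEVERITY = {"green": 0, "yellow": 1, "red": 2}

def worst_colour(test_results):
    """One hypothetical aggregation workflow: the model-level outcome is the worst test colour."""
    return max(test_results, key=lambda colour: SEVERITY[colour])

def classify_change(models, old_workflow, new_workflow, sample_size=20, tolerance=0.1):
    """Simulate the old and new aggregation workflows on a sample of models and classify
    the framework change: 'material' only if the new workflow is more lenient on more than
    the tolerated share of sampled models; otherwise a notification would suffice."""
    sample = random.sample(models, min(sample_size, len(models)))
    more_lenient = sum(
        SEVERITY[new_workflow(m)] < SEVERITY[old_workflow(m)] for m in sample
    )
    return "material" if more_lenient / len(sample) > tolerance else "notification"

# Usage example with dummy per-model test outcomes.
models = [["green", "yellow", "red"], ["green", "green", "yellow"], ["yellow", "yellow"]]
print(classify_change(models, worst_colour, worst_colour))  # identical workflows -> "notification"
```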
Question 3. Do you have any comments on the clarifications and revisions made to the qualitative criteria for assessing the materiality of extensions and reductions as described in the Annex I, Part I, Section 1 and Annex I, Part I, Section 2?
We very much believe that a review of the new point (3) in the recital is imperative. The current text stipulates that extending a rating system to exposures previously under the SA or F-IRB approach is not covered by this regulation. We believe such an extension should not always require permission from the supervisor, especially not where the additional exposures are of the same exposure type, i.e. where the scope of application of an existing rating system is merely extended to additional exposures that are largely of the same exposure type. Examples include mergers, the purchase of a portfolio and expansion into additional geographical locations. Given that this is a change to an existing rating system rather than an initial model approval (i.e. IRB roll-out), it should be covered by this regulation. Including such changes in the scope of this regulation still safeguards supervisory control over the change, given that, as a minimum, an ex-ante notification applies; and if representativeness cannot be demonstrated or the extension is material according to the new threshold, supervisory approval is still required.
Leaving the proposed wording unchanged would result in an increase in initial model applications when banks buy portfolios. This holds even in cases where the data are immaterial in size and representative (e.g. buying mortgage exposures in the same country for which a mortgage rating system is already in place).
Also, we would deem it essential to have clarity on the terminology "new origination of facilities": does it encompass only the origination of identical facilities, or does it also include new product types, expanded lending criteria and updates to origination rules that do not change the original/existing exposure segmentation? For example, when a bank originates a new facility categorised as the same exposure type but in a different geographical location, how should this be treated under the regulation?
We find the clarifications and revisions to the qualitative criteria for assessing the materiality of extensions and reductions to be an improvement, under which the materiality threshold would need to be tested in most cases. However, we overall recommend that changes to exposure classes that do not affect models be excluded from the scope of the RTS.
In particular, regarding Article 3(3), several cases should be assessed as a single change. We would like to raise the following points of attention:
• Modifications of the same nature to the same rating system implemented sequentially over time should be bundled into a single model change. In this case, we understand that the supervisor is notified upfront of a plan of changes (those identified so far), which may in practice result in a multi-year plan. In this regard, the RTS could clarify that they leave open the possibility of introducing a reasonable, limited timeframe within which changes may be bundled.
• One change affecting multiple rating systems (a single change to rating systems under the IRB Approach) is considered a single change, and we understand that this leads to an aggregation of the RWA impact of the change across the rating systems affected. In this case, the RTS should clarify that banks are expected to report only the aggregated metric (no calculation at the level of an individual rating system); an illustrative sketch of such an aggregation is provided after this list. Moreover, it should be specified that such bundling of model changes also applies to changes of model perimeter affecting several rating systems. Furthermore, the RTS should clarify that, where the quantitative thresholds triggering a material model change are breached due to the aggregation of effects across all rating systems impacted by the change, the initial application request should focus on the specific model undergoing the change/review for which the adjustments were initially intended. The other affected rating systems should be presented to the competent authorities in accordance with the pre-established roll-out plan (i.e. the application is deferred and follows the process agreed with supervisors). Any modifications or anticipations to the above should be discussed and agreed between the institutions and the competent authorities.
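To illustrate our reading of the reporting of a single aggregated metric for a change affecting several rating systems, the following is a minimal Python sketch; the RWEA figures, the data structure and the 5% threshold are purely hypothetical assumptions for illustration and are not taken from the draft RTS:

```python
# Hypothetical RWEA figures (before and after the bundled change) per affected rating system.
rwea = {
    "mortgages_country_A": {"before": 1_000.0, "after": 930.0},
    "mortgages_country_B": {"before": 600.0, "after": 560.0},
    "sme_retail": {"before": 400.0, "after": 395.0},
}

# Assumed materiality threshold on the aggregated relative RWEA decrease (illustrative only).
THRESHOLD = 0.05  # 5%

total_before = sum(v["before"] for v in rwea.values())
total_after = sum(v["after"] for v in rwea.values())

# A single aggregated metric is reported for the bundled change across all affected rating systems.
aggregated_impact = (total_before - total_after) / total_before

print(f"Aggregated RWEA impact: {aggregated_impact:.1%}")
print("Material change" if aggregated_impact > THRESHOLD else "Notification")
```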
In accordance with paragraph 19 of the explanatory memorandum, the EBA mentions that "As such, in accordance with Article 148(1) of Regulation (EU) 2024/1623, additional exposures that were not risk-weighted by another rating system (i.e. under the standardized approach or by IRB-F if the scope of an LGD model is extended) require in all cases approval by the competent authority and are not within the scope of this RTS".
However, Article 148(1) only concerns the approval of roll-out plans and does not concern the application for approval of a new rating system.
- Recital (3) could be misunderstood to exclude IRB extensions to SA/F-IRB exposures from the RTS, contradicting the prior understanding.
- We propose that roll-out plans (Article 148(1)) may include extending IRB systems to SA/F-IRB exposures. Such extensions should remain within the scope of Delegated Regulation (EU) No 529/2014, allowing an assessment of representativeness for a potential ex-ante notification.
- Automatically classifying such extensions as material changes would hinder the implementation of roll-out plans.
Therefore, the RTS should clarify that extending existing IRB systems to SA/F-IRB exposures can qualify for ex-ante notification, contingent on a materiality assessment and a representativeness analysis, balancing rigour with an efficient IRB roll-out.
Question 4. Do you have any comments on the introduced clarification on the implementation of the quantitative threshold described in Article 4(1)(c)(i) and 4(1)(d)(i)?
We consider positive the proposal to aggregate changes to different rating systems and changes implemented sequentially over time. However, more clarity is needed in this regard.
The introduced clarification on the implementation of the quantitative threshold is welcome. If our reading is correct, namely that where several different changes are applied they should be assessed on an individual basis, we believe this should be accompanied by a reasonable timeframe for bundling sequential changes.
However, we would appreciate it if the EBA could clarify the aggregation of RWA impacts for changes affecting multiple rating systems and different asset classes over a given period.
Where a change affects different rating systems, can institutions understand that these changes may be implemented sequentially? For instance, if the subsidiary of a bank receives a finding (e.g. on a mortgage model) that could impact its standards or methodologies, the adjustment to address the subsidiary's finding would need to be implemented in all mortgage models within the group sequentially, to the extent that mortgage models are reviewed according to the group's model calendar. Moreover, how should this impact be calculated?
Particularly, we believe that changes in segmentation (especially for legal persons) should be assessed as a whole due to the inherent dependencies across different rating systems. In our view, assessing a change in segmentation separately as reductions and extensions artificially inflates the materiality. Instead, we believe a more effective way to streamline the supervisory decision process would be to assess segmentation changes as a single and standalone type of change.
In this case, we understand that the institution notifies the supervisory authority in advance of a plan of changes (those identified so far) and that this can, in practice, result in a multi-year plan. In this regard, the RTS could specify that they leave open the possibility of introducing a reasonable, limited period for the grouping of changes.
Also, on Article 4(2), we would welcome an explicit specification that the calculation shall refer to the same point in time for changes under Article 4(1)(c)(i) and Article 4(1)(d)(i), similar to that for Article 4(1)(c)(ii) and (d)(ii).
Additionally, we would welcome clarification on Article 3(3), the quantitative threshold for changes to the rating system. Would the concept of splitting changes be relevant solely from a quantitative impact perspective? That is, please clarify whether it is permitted to split changes if neither part has any impact on risk-weighted assets (RWA), or where the direction of the change is towards increased RWA.
Furthermore, what time span should be considered under Article 3(3)? Especially due to regulatory obligations, changes could be implemented at different moments in time, which would make it more complex to gauge the impact.
Finally, and most importantly, on Article 3(3) we stress the need to assess the impact on each rating system individually, in order to avoid a high number of application packages and a significant increase in workload, including for supervisors.
Question 5. Do you have any comments on the revised 15% threshold described in Article 4(1)(d)(ii) related to the materiality of extensions of the range of application of rating systems?
As per our general comments, we support combining both qualitative and quantitative triggers for a classification as a material change, together with a supervisory flexibility to avoid an overly mechanical approach.
This would be all the more relevant for cases of extensions of the range of application of rating systems, considering the perceived flaws of the new proposed ratio as illustrated below.
To mitigate the flaws of this ratio or of alternative ratios, as a fallback to the favoured general approach recalled above, we believe that at least a qualitative backstop to the classification as a material change should be introduced, relating to the adequate performance of the model on the extended scope.
This would be consistent with EBA concerns on the performance of the model following the addition of significant exposures which the new ratio aims at capturing, while avoiding an excessive classification as material model change where justified.
The new ratio introduced by the EBA may lead to counterintuitive results, especially with simultaneous reduction and extension.
Let us assume we have an extension on perimeter B of the rating system initially applied to A. We understand from the new EBA requirement that the new ratio will be calculated in the following way:
New ratio = RWEA_B^after / RWEA_A^before
We can derive two cases in the calculation:
Example 1       EAD    RWEA (before)    RWEA (after)    New ratio
Perimeter A     100    50               50
Perimeter B     100    50               5                10%
Perimeter A+B   200    100              55

Example 2       EAD    RWEA (before)    RWEA (after)    New ratio
Perimeter A     100    50               50
Perimeter B     100    50               100              200%
Perimeter A+B   200    100              150
In Example 1, the model extended to perimeter B leads to a large RWA reduction on the additional exposures (the RWEA is divided by 10, with RWEA_B^after = 5) and the new calculation results in a 10% ratio. In Example 2, the model extended to perimeter B doubles the RWEA on the additional exposures (100 after compared with 50 before) and the new calculation results in a 200% ratio. The new calculation therefore implies that the scrutiny should be on Example 2. However, the large reduction of RWA is observed in Example 1, in which the model initially applied to A significantly reduces the RWA when applied to perimeter B.
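As a purely illustrative check of the two examples above (using the example figures only; the variable names are ours), the following short Python sketch computes the proposed ratio alongside the RWEA change on the extended perimeter, showing why the ratio points at Example 2 while the large RWEA reduction actually occurs in Example 1:

```python
def new_ratio(rwea_b_after, rwea_a_before):
    """Proposed metric: RWEA of the extension perimeter B after the change,
    relative to the RWEA of the original perimeter A before the change."""
    return rwea_b_after / rwea_a_before

examples = {
    # name: (RWEA_A before, RWEA_B before, RWEA_B after)
    "Example 1": (50.0, 50.0, 5.0),
    "Example 2": (50.0, 50.0, 100.0),
}

for name, (rwea_a_before, rwea_b_before, rwea_b_after) in examples.items():
    ratio = new_ratio(rwea_b_after, rwea_a_before)
    change_on_b = (rwea_b_after - rwea_b_before) / rwea_b_before
    print(f"{name}: new ratio = {ratio:.0%}, RWEA change on perimeter B = {change_on_b:+.0%}")

# Example 1: new ratio = 10%, RWEA change on perimeter B = -90%
# Example 2: new ratio = 200%, RWEA change on perimeter B = +100%
```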
In addition, the new ratio will not be relevant in the case of both reduction and extension happening at the same time.
Alternative metrics to RWA that are not prone to the above shortcomings should be investigated (such as, for instance, EAD). The choice of alternative solutions should, however, be made in light of the results of an impact assessment for banks.
Given these challenges, more flexibility is recommended in assessing materiality, particularly when RWA increases significantly but actual performance (e.g., ranking ability) remains strong.
Note that, if no mutually agreed alternative solution can be found to adjust the EBA proposal, maintaining the current process (status quo) would be our preferred option.
Please consider removing the threshold, as it would only be breached by events that already trigger an MCA and approval. Given that the EBA considers scope extensions only from IRB to IRB, we believe that applying a threshold is conceptually feasible only in a limited number of scenarios, such as mergers and acquisitions (M&A) or larger changes in the credit model landscape, which would already trigger MCAs. Metrics that assess the distribution of relevant risk characteristics (i.e. representativeness) during the qualitative assessment sufficiently capture the risk associated with scope extensions; hence the limited added value of such a threshold.
Question 6. Do you have any comments on the documentation requirement for extensions that require prior notification?
Regarding the documentation listed in Article 8(1) of these RTS, we understand the assessment report as a review of the model change classification (representativeness), not as a full review report of the model by the independent review team. Moreover, we understand from the EBA that in this context "model performance" is not to be understood as an anticipated back-testing exercise; therefore, institutions are not required to submit the results of a first back-testing exercise when filing for extensions (the first back-testing exercise takes place after the implementation of the models).
We think this stance would be better understood by supervisors if it were clearly stated in the RTS.
In addition, Consultation Box 6 states: “It was considered that validation processes of institutions may be hampered if they are required to provide, for extensions that require prior notification, also the technical documentation and the assessment report of the validation function. In particular, this implies that an institution either has to wait for the periodical validation process before submitting the extension notification, or perform an ad-hoc assessment by the validation function in order to submit the extension for prior notification.” We confirm that the validation process would be hampered by the proposed request on non-material extensions:
- The periodic validation process is executed according to the rules defined in the ECB Supplementary Validation Reporting, which require that the annual validation process assesses a model version in production (not one proposed and under assessment by the JST) and, in the case of PD, that the tests be performed on the model version in production at the beginning of the observation period (e.g. for the 2025 ongoing validation with an observation period from 31-12-2023 to 31-12-2024, the model version in production at 31-12-2023 shall be considered). This means that the first option, to "wait for the periodical validation process before submitting the extension notification", is not applicable.
- An ad hoc assessment may be the only option. Nonetheless, we would like to point out that, starting from 2024, a validation assessment is also to be included for a non-material change when it is aimed at addressing Regulatory Findings ("a Supervised Entity may not consider that a remediation action has been fulfilled for a Regulatory Finding unless the Internal Validation Function or Internal Audit Function has confirmed that fulfilment"). For the same reasons as above regarding the impossibility of leveraging the periodic validation process, this request is a further ad hoc activity that will hamper the validation process.
For this reason, it is worth identifying a set of tests that would meet the EBA expectation on this topic; e.g. the validation deliverable in the case of non-material extensions could cover representativeness, rank ordering and stability (where appropriate, leveraging on and verifying what has been executed by the modelling function for classification purposes).
Regarding Article 8, point 2, we would highly appreciate it if the EBA could clarify whether the term "before" used in the phrase "changes classified as requiring notification either before or after implementation" differs from "prior notification" as referred to in Article 8, point 1. Additionally, under point (h) of Article 8, we would appreciate the EBA's confirmation of our understanding that the term "risk numbers" refers to the quantitative thresholds for the qualitative criteria as per CDR 529/2014.
The new RTS specify that a phased change should be treated as a single change for the purposes of assessing its impact. However, if readiness to implement is required for the entirety of the change, the change could only be requested once everything has been completed, effectively eliminating any opportunity for phasing. In cases where changes need to be phased, the required implementation date will be covered by an implementation plan, and readiness to implement should be assessed according to the implementation plan submitted as part of the documentation.
Regarding point 31 of Section 3.7, we want to point out that it is challenging for banks, from a timing and resource perspective, to manage the implementation of a new or changed model because the waiting time for supervisory approval can be long and is hardly predictable. In practice, this requires banks to run non-production environments, increasing operational risk by maintaining multiple "production-like" environments. It can also lead to increased overheads in maintaining code integrity while awaiting approval. Supervisory authorities should make a stronger commitment to ensuring timely decisions and providing more planning certainty.
Last but not least, we concur with the view described in consultation box 6. Requiring the entire documentation catalogue (validation report and technical documentation) for extensions that only require prior notification would be disproportionate. It would unnecessarily delay or slow down sensible model changes. This applies particularly to models that have been developed jointly and are operated by a central servicer (pool models). If still deemed useful from a methodological perspective in individual cases, institutions may add a validation report voluntarily.
Moreover, it should be clarified that changes to the validation process requiring prior notification do not require a written assessment by internal audit.