We expect that most IRB systems would require material changes to fully address the requirements, with different degrees of priority and with different parameters or model components affected.
No, we don't see any limitation in calculating one-year default rates at least quarterly.
We disagree with the proposed treatment (point 116) of interest and fees capitalised after the moment of default, since it is not consistent with the "economic loss" meaning of LGD.
In our view, only cash flows related to effective recoveries or costs should be taken into account. Interest and fees capitalised after the moment of default are economically irrelevant prior to their capitalisation, as no effective cash flow is associated with them and recoveries are already properly discounted. Since interest and fees have a similar economic meaning, both should be excluded, as in the alternative proposal included in the explanatory box, which we consider fully appropriate.
From a more formal point of view, we consider that the statement in art. 181(1)(i) of the CRR that "to the extent that unpaid late fees have been capitalised in the institution's income statement, they shall be added to the institution's measure of exposure and loss" has an unequivocal interpretation: unpaid late fees count only after they have been capitalised and are thus part of the EAD, and their being part of the EAD straightforwardly means considering them in both exposure and loss.
The possibility of a negative realised LGD as an outcome of the alternative approach, a risk highlighted in the explanatory box, is already accounted for by a general zero floor. Potentially, LGD might find a more prudent floor in the LGD resulting from material costs that, never being capitalised, are not subject to recovery. Compensating the discounting effect in this way is fully economically grounded.
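The economic-loss logic described above can be sketched numerically. The following is an illustrative fragment only (function and parameter names are our own assumptions, not taken from the CRR or the Guidelines): realised LGD is computed from discounted effective recovery cash flows and costs, capitalised post-default interest and fees are excluded by construction simply by never entering the cash-flow lists, and a zero floor is applied.

```python
from datetime import date

def realised_lgd(ead, recoveries, costs, default_date, annual_rate):
    """Realised LGD as economic loss (illustrative sketch).

    Only effective recovery cash flows and costs are considered; interest
    and fees capitalised after default are excluded by construction.
    `recoveries` and `costs` are lists of (amount, date) pairs; all flows
    are discounted back to the default date, and the result is floored at 0.
    """
    def discount(amount, when):
        years = (when - default_date).days / 365.25
        return amount / (1 + annual_rate) ** years

    pv_recoveries = sum(discount(a, d) for a, d in recoveries)
    pv_costs = sum(discount(a, d) for a, d in costs)
    loss = ead - pv_recoveries + pv_costs
    return max(loss / ead, 0.0)  # general zero floor on realised LGD

# Illustrative facility: EAD 100, two recoveries and one cost after default
lgd = realised_lgd(
    ead=100.0,
    recoveries=[(60.0, date(2021, 6, 30)), (30.0, date(2022, 6, 30))],
    costs=[(5.0, date(2021, 12, 31))],
    default_date=date(2020, 6, 30),
    annual_rate=0.05,
)
```

As the sketch shows, a facility whose discounted recoveries exceed EAD would produce a negative loss, which the zero floor absorbs without any treatment of post-default capitalised interest or fees.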
Finally, as far as art. 115 is concerned, we believe that the outstanding amount at the date of return to non-defaulted status also includes interest and fees charged during the default period, and that such amount should be discounted back to the default date like any other recovery.
As far as the usage of the undrawn amount is concerned, we deem the Guidelines' treatment to be fully in line with CRR provisions for CCF estimates. A treatment within LGD, as allowed for retail, would indeed be equally sensible, but it would require a CRR amendment.
As the rating scale structure might be relevant both for PD calibration and for backtesting purposes, and as the criteria used by different institutions in rating scale design differ considerably, some guidelines are appropriate. A rating scale should generally be designed so that undue concentrations are avoided, but more importantly it should ensure that counterparties with the same risk are assigned the same PD and counterparties with different risk are assigned different PDs. For this reason, a rating scale should minimise risk variability within each class and maximise it between classes.
As classes are used to calibrate PDs, the statistical robustness of risk differentiation should be explicitly tested.
Such an optimisation implies that the optimal number of classes is not always the same, as it is strongly related to the distribution of the underlying available risk drivers and thus of the final scores or individual PDs (where estimated). For this reason, a benchmark on the number of classes is not found to be beneficial and might in some cases increase variability. Equally, setting a maximum PD threshold would be inappropriate, as the granularity of the scale at higher PD levels is strictly related to the discriminatory power of different models, even though the proposed approach would most likely help reduce RWA variability for the upper classes of the rating scale (i.e. grades close to default).
For the above-mentioned reasons, scales should generally be designed specifically for each portfolio, and recourse to institution-wide masterscales should be limited to reporting purposes where an aggregate view is required. Even with portfolio-specific scales, significant concentrations cannot always be avoided (especially for retail, e.g. regularly amortising mortgages).
In other cases, the differentiation of PDs among lower-risk classes is not statistically grounded, but a granular scale is required for a reasonable business process. In such cases, for instance, a joint PD calibration for regulatory purposes covering more than one class can be most appropriate, and calibration should then be assessed at this aggregate level.
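The within-class/between-class criterion described above can be illustrated with a standard variance decomposition (an illustrative sketch under our own naming assumptions; this is not a prescribed metric): given individual PDs and a grade assignment, total PD variance splits into a within-grade and a between-grade component, and a well-designed scale keeps the within-grade share low.

```python
from statistics import mean, pvariance

def scale_quality(pds, grades):
    """Decompose total PD variance into within-grade and between-grade parts.

    `pds` are individual PD estimates, `grades` the assigned rating classes.
    Returns (within, between); by the law of total variance these sum to the
    overall population variance of `pds`. Illustrative sketch only.
    """
    by_grade = {}
    for p, g in zip(pds, grades):
        by_grade.setdefault(g, []).append(p)
    overall = mean(pds)
    n = len(pds)
    # Within: size-weighted average of each grade's internal PD variance
    within = sum(len(v) * pvariance(v) for v in by_grade.values()) / n
    # Between: size-weighted spread of grade means around the overall mean
    between = sum(len(v) * (mean(v) - overall) ** 2 for v in by_grade.values()) / n
    return within, between

# Two tight grades with clearly separated mean PDs: within << between
within, between = scale_quality([0.010, 0.011, 0.050, 0.055], [1, 1, 2, 2])
```

Comparing candidate groupings on this decomposition (rather than on a fixed benchmark number of classes) is one way to make the statistical robustness of risk differentiation explicitly testable, as argued above.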
However, a benchmark on the number of grades would not help reduce unjustified variability, as the appropriate number is strongly related to the distribution of the underlying risk drivers and thus of the final scores or individual PDs (where estimated). Equally, setting a maximum PD threshold would be inappropriate, as scale granularity at higher PD levels is strictly related to the discriminatory power of different models.
Most rating systems do not take economic conditions into account directly, but include variables correlated with economic conditions (behavioural data, financial information, etc.), so that they are hybrid in nature.
We generally agree with the proposed policy.
As far as short-term contracts are concerned, we acknowledge that some business models or portfolios might be more heavily affected by seasonality effects, which need to be addressed.
From a more general standpoint, however, we do not believe that the short-term contract phenomenon should be addressed by adjusting one-year default figures for positions that could not be followed up, as this is part of the one-year default experience of the institution. The use of overlapping default observation windows, for instance, would still capture all defaults in the one-year default figures even when seasonality effects are relevant.
Provided that all defaults are considered, the exclusion of specific corrections seems to us more in line with the CRR definition of default and more consistent with the overall IRB framework, as maturity has a one-year floor.
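The overlapping-window mechanics referred to above can be sketched as follows (an illustrative fragment; the data structure and function name are our own assumptions): each window start defines a cohort of facilities performing at that date, and a default is counted if it occurs within the following year, so a default on a short-term contract is captured by every window covering its default date.

```python
from datetime import date

def one_year_default_rates(obs, window_starts):
    """One-year default rates on overlapping observation windows (sketch).

    `obs` maps facility id -> (origination_date, default_date or None).
    For each window start, the cohort is every facility already originated
    and not yet defaulted at that date; defaults within the following year
    are counted. Quarterly starts give overlapping one-year windows.
    """
    rates = {}
    for start in window_starts:
        end = date(start.year + 1, start.month, start.day)
        cohort = [
            (orig, dflt) for orig, dflt in obs.values()
            if orig <= start and (dflt is None or dflt > start)
        ]
        defaults = sum(1 for _, dflt in cohort if dflt and start < dflt <= end)
        rates[start] = defaults / len(cohort) if cohort else 0.0
    return rates

# Illustrative data: facility id -> (origination date, default date or None)
obs = {
    1: (date(2019, 1, 1), date(2019, 8, 1)),   # defaults mid-year
    2: (date(2019, 1, 1), None),
    3: (date(2019, 3, 1), date(2020, 2, 1)),
    4: (date(2019, 6, 1), None),
}
rates = one_year_default_rates(
    obs, [date(2019, 1, 1), date(2019, 4, 1), date(2019, 7, 1)]
)
```

In this sketch, every default falls inside at least one window, illustrating why no separate correction for missed follow-up positions is needed once overlapping windows are used.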
A process to address rating philosophy over time would be beneficial to correctly identify PIT/TTC characteristics and thus define targeted backtesting metrics focused on unexpected miscalibration. The same can be extended to the dynamic properties of risk drivers, to properly manage representativeness assessments.
The ex-ante definition of a rating philosophy is a non-standard practice. In practice, however, models are hybrid to different degrees across portfolios, depending on risk drivers (retail models tend to be more PIT, as behavioural information generally has a higher weight than in other portfolios) and on modelling techniques (for instance, shadow rating models of CRA ratings tend to be more TTC).
From a general standpoint, we believe that the MoC estimation process is too pervasive and might itself generate homogeneity issues across institutions.
As the Guidelines do not promote quantification standards, they might not fulfil the final objective of enhancing comparability.
As assessing every MoC area with quantitative analysis would be unduly burdensome, impact quantifications should be limited to the most significant deficiencies only, and an overall estimation across different areas should be allowed: sources of model risk might be correlated, and in those cases MoCs should not be summed up but jointly assessed. Moreover, application at the level of each risk parameter ignores the interconnectedness of risk parameters; MoCs applied to PD may logically have the opposite effect on LGD.
As MoCs are required both in the model estimation and in the model application phase, it should be made clear that this must not result in double counting of MoCs.
In addition, further clarification should be provided regarding the role of MoCs in the use test area.
We agree with the requirement to assess data representativeness, but some concern relates to article 103(a): it should be acknowledged that a population of defaulted facilities (as used for LGD estimates) does not necessarily share the same characteristics as the population of performing facilities it is applied to. Relevant physiological differences exist that need to be taken into account.
More importantly, we believe that data exclusion should not be neglected as a means of appropriately addressing data representativeness issues. Requiring representativeness of the development sample with respect to the application sample, while at the same time asking for the inclusion of all defaults and specifying that "it is not possible to remove the observations that are not fully representative from the estimation sample", is not fully reasonable.
This is especially true as the use of non-representative data requires adjustments and implies MoCs. The use of older available data that are not representative of current recovery practices biases LGD estimates, as they cannot be adequately forward looking.
Not all cases can be addressed by adjustments plus MoC; in some cases exclusion is the most sensible option. Of course, supervisory judgement is required to verify whether this is actually appropriate on a case-by-case basis.