Response to joint Consultation on draft Implementing Technical Standards on the mapping of ECAIs’ credit assessments


Q2. Do you agree with the proposed definition of sufficient for the number of credit ratings and the rest of the requirements imposed for the calculation of the short-run default rate when a sufficient number of credit ratings is available?

See our answer to Q1. Again, we think that the implicit assumption that ECAIs will be analysed in isolation is inappropriate. Instead, more focus should be given to the extent to which a range of ECAIs rates the same instrument.

Comparing the proposed sufficiency criterion with the simple example of rolling a die, we would not regard six throws as sufficient for a sound and prudent estimate of the probability of throwing a “6”. The sufficient number of observations depends on the probability distribution of default; a better and still practical definition of sufficiency would be to require the number of rated items to be greater than or equal to twice the inverse of the expected default rate.
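As an illustration of this proposal (our own worked example): an expected long-run default rate of 0.1% would then require at least 2 / 0.001 = 2,000 rated items, whereas an expected default rate of 5% would require only 2 / 0.05 = 40.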

Furthermore, we would like to see clarified what exactly is meant by “expected long-run default rate”; does this expression refer to the long-run benchmark presented in Part 1 of Annex 1?

Moreover, we would recommend imposing a minimum number of observations as well, as for lower credit quality groups the number of observations deemed sufficient could otherwise be unreasonably low.

It is also not quite clear to us how exactly this criterion fits into the process. Is it the idea that above 1/frequency no qualitative assessments are needed, and that below 1/frequency only qualitative assessments are made?

Q4. Do you agree with the proposed options to calculate the quantitative factors when a sufficient number of credit ratings is not available?

Yes, but we think that in general greater emphasis should be placed on targeting comparability between CQSs deriving from different ECAIs than is currently implied by the draft text.

Regarding reliance on the credit ratings of other ECAIs, we think that some consistency requirements should be imposed on the meanings of those ECAIs’ ratings and on their internal default definitions in order for them to be considered. Moreover, a high percentage of the credit assessments provided by the ECAI itself should ideally match those provided by the other ECAI at similar dates for the same items, in order to provide evidence of the similarity of the credit assessment processes of the two ECAIs.
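A minimal sketch of how such an agreement check could be quantified (the data and item names below are purely hypothetical, and in practice the comparison would presumably be made after both ECAIs’ ratings have been mapped to a common scale):

# Hypothetical example: share of commonly rated items on which the applicant
# ECAI and another ECAI assign the same (mapped) rating category at
# comparable dates.
own_ratings = {"item_1": "A", "item_2": "B", "item_3": "B", "item_4": "C"}
other_ratings = {"item_1": "A", "item_2": "B", "item_3": "C", "item_4": "C"}

common_items = own_ratings.keys() & other_ratings.keys()
matches = sum(own_ratings[i] == other_ratings[i] for i in common_items)
agreement_rate = matches / len(common_items)
print(f"Agreement on commonly rated items: {agreement_rate:.0%}")  # prints 75%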

It should be clear that, from an economic perspective, the use of credit ratings of other ECAIs creates redundancies and increases the overall systemic risk related to credit assessments.

Q5. Do you agree with the proposed use of the default definition used by the ECAI as a relevant factor for the mapping? Do you agree with the proposed assessment of the comparability of the default definition of an ECAI? If not, what alternatives would you propose? Do you think that the adjustment factor depends on certain characteristics of the rated firms such as size and credit quality and if so, how can this be reflected?

We agree that default definition is relevant to credit risk assessment.

Concerning the adjustment factor of 100%, we could not find the rationale for assuming that the number of non-bankruptcy defaults is equal to the number of bankruptcy defaults. As such, we would suggest that the calculation of the true number of default events, according to the four default situations mentioned in point 6 of Article 3, be provided by supervisory authorities, so that the use of the proposed adjustment factor can be avoided and the accuracy of the information used improved.
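As we understand it, a 100% adjustment factor means that the total number of defaults is taken to be the number of bankruptcy defaults multiplied by (1 + 100%), i.e. twice that number, which is equivalent to assuming that the other default situations listed in point 6 of Article 3 occur exactly as often as bankruptcy.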

There is a misprint in Article 8: “The qualitative factors referred to in point (a) of Article 136(2) of Regulation (EU) 575/2013” should be “The qualitative factors referred to in point (b) of Article 136(2) of Regulation (EU) 575/2013”.

The regulation also prescribes looking at the pool of issuers that the ECAI covers. It would be worth explaining how that will be implemented.

Q6. Do you agree with the proposed use of the time horizon of the rating category as a relevant factor for the mapping? Do you agree with the proposed use of transition probabilities to identify the expected level of risk during the three-year horizon?

We think that it is largely not relevant to the mapping per se but is instead more relevant to how a given CQS should be interpreted for regulatory purposes.

The question arises of how to calibrate the transition probabilities. Presumably the transition matrix for short-term ratings is ‘much faster’ than the transition matrix for long-term ratings.
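A minimal sketch of the point being made (the annual transition matrix below is purely hypothetical, and a time-homogeneous Markov assumption is made for simplicity): under such an assumption the three-year behaviour follows from the third power of the annual matrix, so a ‘faster’ short-term matrix compounds into materially higher three-year default probabilities.

import numpy as np

# Hypothetical annual transition matrix for three rating grades plus an
# absorbing default state (rows sum to 1); actual matrices would have to
# come from the ECAI's own transition data.
P_annual = np.array([
    [0.90, 0.07, 0.02, 0.01],  # grade A
    [0.05, 0.85, 0.07, 0.03],  # grade B
    [0.01, 0.10, 0.79, 0.10],  # grade C
    [0.00, 0.00, 0.00, 1.00],  # default (absorbing)
])

# Under the time-homogeneous Markov assumption, the three-year transition
# matrix is the third power of the annual matrix.
P_3y = np.linalg.matrix_power(P_annual, 3)

# Cumulative three-year default probability for each starting grade
# (last column of the three-year matrix).
print(P_3y[:, -1])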

Q7. Do you agree with the proposed use of the range and meaning of credit assessments as a relevant factor for the mapping? Do you agree with the proposed restriction of this factor to adjacent rating categories?

We agree that the meaning an ECAI ascribes to its rating is relevant to credit risk assessment but do not see any obvious reason why this insight should be limited merely to adjacent rating categories.

Q8. Do you agree with the proposed use of the risk profile of a credit assessment as a relevant factor for the mapping?

We agree in principle with the use of the risk profile but would like to stress that the consideration of factors such as size, sector or geographical diversification of the items under analysis could bring excessive subjectivity to the mapping process.

Q9. Do you agree with the proposed use of the estimate provided by the ECAI for the long-run default rate associated with all items assigned the same rating category as a relevant factor for the mapping? Do you agree with the proposed role played by this factor depending on the availability of default data for the rating category?

Yes, but again we think that this is less relevant for the mapping per se and more relevant to how a given CQS should be interpreted for regulatory purposes, except in instances where the ECAI is the sole ECAI to rate an instrument.

Q10. Do you agree with the proposed use of the internal mapping of a rating category established by the ECAI?

We think that this proposal is potentially quite intrusive, and it is likely to be more appropriate to focus only on an ECAI’s published outputs. An ECAI’s internal mappings may be less robust than its published output and may exist merely as informal internal guidance rather than as more specific, formal internal mappings. It may be very difficult for ECAIs and the mapping authority to work out whether a given internal element of the ECAI’s rating process is or is not captured by this proposal.

Reliance on ECAIs’ full internal mappings does not contribute to the ITS’s goal of harmonization of the mapping of credit assessments.

Moreover, the interaction of Article 14 and the explanatory text provided for Article 11 is unclear, given that the latter seems to refer to subsets of the ECAI’s full internal mappings.

Generally we think that the possible use of the internal mappings strongly depends on the quality of those concrete mappings, so we find it hard to agree on the proposal universally.

Q11. Do you agree with the proposed specification of the long-run and short-run benchmarks? Do you agree with the proposed mechanism to identify a weakening of assessment standards?

We agree that it is likely to be desirable to have benchmarks that aim to identify ‘unexpected’ weakening (or potentially ‘unexpected’ strengthening) of credit assessment standards, particularly ones that apply to the ECAI industry as a whole (see our answer to Q1). However, we think that it would be desirable to provide better justification for the actual approach proposed in draft Article 15, as it is not obvious from the paper alone why the proposed formulae would be a helpful way of doing this.

Also, if an aim is to provide an early warning of the sort of weakening of standards perceived to have occurred ahead of the 2007-2009 Credit Crisis then it is worth noting that this weakening seemed to apply disproportionately to certain types of instrument (e.g. US mortgage-backed instruments). It might therefore be desirable to have benchmarks that differentiate between instrument types.

Furthermore, regarding the proposed mechanism to identify a weakening of assessment standards, we think that the consequences of the three conditions mentioned in Article 15 being met should be specified.

Q12. Do you agree with the analysis of the impact of the proposals in this CP? If not, can you provide any evidence or data that would explain why you disagree or which might further inform our analysis of the likely impacts of the proposals?

We agree with most of the analysis of the impacts of the proposals in the consultation paper. Nonetheless, we consider that the degree of harmonization of the mapping of credit assessments depends on the existence of data allowing default rates to be calculated solely from quantitative factors, since most of the proposed qualitative factors depend on ECAIs’ own assessments.

It seems useful to also discuss possible reactions of rating agencies to this regulation:
• One reaction could be to increase the volatility of ratings, to adjust them more quickly to changing circumstances.
• Short-term ratings and ratings to maturity will both become less useful, as both will be geared to a three-year horizon.
• A three-year horizon may slow down reassessment when clients are more interested in a one-year horizon.
• On a three-year horizon, less differentiation is possible. Many investors are looking for a shorter investment horizon.

Further comments:
Page 6: the sentence “credit assessments of covered bonds and shares in CIUs have been considered” is not quite clear. For such issues, the recovery rate is also a very important criterion.

Page 8: Two comments on the sentence “where the credit rating is based on a shorter horizon, the expected level of risk of the rating category beyond its time horizon (for example, second and third years if the time horizon of the credit rating is 12 months) should be considered to assess the level of risk of the rating category that is relevant for the mapping.”:
• There are ratings like A1/P1 - a three-year horizon would not be appropriate to validate these ratings.
• The practical use of ratings relies primarily on a one-year horizon.

Page 9: “The benchmarks proposed in these draft RTS have been chosen to maintain the overall level of capital required for externally rated exposures under the Standardised Approach.” Strictly speaking, ratings are only about expected losses. Although these are very important for valuation purposes, they are not directly related to risk (assuming a diversified credit portfolio). Risk is about uncertainty, about the volatility of average annual losses. It is important to continue to validate that expected losses are a meaningful indication of the volatility of average annual losses.
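As a simple illustration of this distinction (our own example, not taken from the consultation paper): two rating categories could both show an expected annual loss of 1%, yet realised annual losses might range between 0.8% and 1.2% in one category and between 0% and 5% in the other; their relevance for capital purposes would differ considerably even though the expected loss is identical.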

Page 15: the sentence “it should be measured over a 3 year time horizon in order to allow the observation of a significant number of defaults when risk is very low” seems to be confusing. We would have thought that the idea is to align with a ‘through-the-cycle’ investment horizon. Measuring over a period of three years does not create more data.

• Why would a three-year perspective be relevant to an investment bank with a trading exposure? Money-market ratings have their own importance. Important events beyond a three-month horizon may cause default but be irrelevant for short-term claims.
• Similarly, buy-and-hold investors may expect ratings to be valid through to maturity.

Page 15: We do not understand the statement “Also, it should not include public sector ratings given the scarcity of defaults for this type of rating”. We would assume that the number of defaults is determined by the type of rating rather than by the type of entity rated. We therefore do not see any reason to exclude public sector ratings.


Name of organisation: Actuarial Association of Europe