Response to discussion on the EU implementation of MKR and CCR revised standards
Question 5. What are your views about the qualitative approach used as a starting point under step 2?
A qualitative approach to identifying the primary risk drivers and assigning them to risk categories should work in practice. As stated above, banks already use similar approaches to classify products for CEM and IMM. When needed, banks can use the quantitative approach.
Question 6. Which would be the most appropriate option for the quantitative approach? Would you recommend another option?
Option 3 would be the most appropriate, as it uses volatility × sensitivity. For the volatility input, banks could use their internal volatilities, the FRTB risk weights, or the SA-CCR supervisory factors/volatilities. This analysis would be performed annually on a sample of trades of a product type (covering multiple risk factors) and would deliver the primary asset class for every trade; if more than Z% of trades belong to a particular asset class, that product type is assigned to that risk class (or classes).
Question 7. What values would be reasonable for the threshold(s) (X, Y, and their equivalents for Options 3 and 4) that determine the number of material risk drivers? Please provide rationales for proposed levels.
For Option 3, Z% could be 40%. Anything below 40% would indicate that the asset class is not a primary driver. Normally this would lead to a single asset class being above 40% and being the primary driver. If there are two primary drivers (each driving more than 40% of trades), then those capture most of the risk of the product type; 50% of the notional could be assigned to each of the two risk classes in this case to avoid double-counting. In some cases no asset class may be above 40%, and in that case the product could be assigned to the “other” category, which already has a punitive supervisory factor.
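A minimal sketch of the assignment logic described above (illustrative only; the function names, threshold constant and sample data are ours, and the volatility × sensitivity measures are assumed to be available per trade and risk class):

    from collections import Counter

    THRESHOLD = 0.40  # illustrative Z% threshold discussed above

    def primary_class(trade_measures):
        """Return the dominant risk class for one trade.
        trade_measures maps risk class -> abs(volatility x sensitivity)."""
        if not trade_measures or sum(trade_measures.values()) == 0:
            return "other"
        return max(trade_measures, key=trade_measures.get)

    def classify_product(trades):
        """Assign a product type to risk class(es) from a sample of its trades."""
        counts = Counter(primary_class(t) for t in trades)
        n = len(trades)
        drivers = [c for c, k in counts.items() if k / n > THRESHOLD]
        if len(drivers) == 0 or len(drivers) > 2:
            return ["other"]  # fallback category with punitive supervisory factor
        return drivers        # one or two primary risk drivers

    # Example: 10 hypothetical trades of one product type
    sample = [{"IR": 8.0, "FX": 2.0}] * 6 + [{"FX": 5.0, "IR": 1.0}] * 4
    print(classify_product(sample))  # -> ['IR'] (only IR exceeds 40% of the trades)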
Question 8. Do you have any views on the appropriateness of devising a fallback approach? Can you identify any cases where reverting to the fallback approach is necessary?
As stated above, anything with more than two primary drivers (or with no primary driver, i.e. all risk classes below 40%) should go into the “other” category, which already has a punitive supervisory factor.
Question 9. Do you have any views on the appropriateness of a cap on the number of risk categories to which a single derivative transaction can be allocated? If yes, what value would you recommend for that cap (three or four)?
Following on from our previous answers, the number of risk categories should be capped at two. We do not believe there is added value in having a category for products with three primary risk drivers. If there are any products with more than two drivers, they may be better suited to the “other” category, which already has a punitive supervisory factor.
Question 10. Do you have any further comment or consideration on the mandate under discussion?
It is our understanding that the quantitative analysis should be done once per product type to classify it, with an annual review. Performing such analysis at trade level, and classifying different trades of the same product type into different risk categories, would add unnecessary complexity (leading to issues with netting, etc.). Also, doing this analysis more frequently may lead to a product type switching risk categories back and forth, which would require investigation and justification and would introduce instability in the calculation and volatility in capital requirements.
Question 11. Do you have any views on the most appropriate approach to compute supervisory delta in a negative interest rates environment? Please elaborate.
The suggested approach should work in a negative interest rate environment. However, there are other products, such as binary options, digital options and target profit forwards, for which such a modified Black-Scholes formula will not generate an appropriate delta. Banks should therefore be allowed to continue to use internal models for delta calculation. Such models are already approved and used in end-of-day market risk VaR calculations, and recently in the ISDA-SIMM initial margin (IM) calculation for uncleared derivatives. The daily IM reconciliations show that even though models may differ between banks, the overall effect on IM (or, in the case of SA-CCR, on PFE) is not material. For banks not using internal models, the suggested approach seems the most appropriate of the available options.
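For reference, and as we understand the proposal (an illustrative sketch, not a statement of the final rule), the lambda-shifted supervisory delta for a purchased call option would take the form

    \delta = \Phi\left( \frac{\ln\big((P+\lambda)/(K+\lambda)\big) + 0.5\,\sigma^{2} T}{\sigma\sqrt{T}} \right), \qquad \lambda \text{ chosen so that } P+\lambda > 0 \text{ and } K+\lambda > 0,

where P is the underlying price, K the strike, \sigma the supervisory volatility, T the option maturity and \lambda the per-currency shift discussed in our answer to Question 12.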
Question 12. Which one of the two options do you think is more appropriate from an EU perspective (i.e. maximum harmonisation)? Are you aware of any issue these two options could raise?
Banks should be allowed to select a lambda per currency, based on market rates, such that the formula works for all trades in that currency. We would suggest refraining from introducing a requirement for a formal industry-wide benchmark for lambda, as each bank may have different approaches, which would make aggregation challenging. Similar to other market data calibrations, for example the existing stressed EPE window calibration, regulatory guidelines could be provided that allow each bank to implement its own solution.
Question 13. Do you agree that the definition of a long position in the primary risk driver and a short position in the primary risk driver in Article 279a(2) of the CRR2 proposal is sufficiently clear for banks to determine whether they hold a long or a short position?
We agree that the definition of long and short positions in the CRR is sufficiently clear.
Question 14. Do you agree that changes in instruments’ circumstances that imply a shift between the presumptive lists should be accepted as ‘exceptional circumstances’? Please provide examples.
We agree. However, we also note that, since a change in an instrument's circumstances would require a move in order to remain compliant with the regulations, it could be argued that there should be an implicit approval process that does not require the full reclassification approval from the Competent Authority (CA) and the associated capital surcharge. For example, banks could notify the CA of the move related to a change in the instrument's circumstances and provide the associated documentation; if nothing further is requested within a certain timeframe, banks could assume the move to be approved.
Question 15. Do you agree that CTP positions that become illiquid must remain in the TB?
Yes, as illiquidity is not a reason for reclassification.
Question 16. Please provide examples of cases where exceptional circumstances might warrant the approval of reclassification.
The following are examples of exceptional circumstances requiring approval for reclassification:
- Bank restructuring that would result in the permanent closure of a trading desk.
- Market conditions that result in serious long-term disruptive impacts on the ability to trade.
- Changing nature of an instrument as described in this discussion paper.
Question 17. Do institutions have any particular issue in identifying non-trading book FX and commodity positions subject to market risk? What kinds of transactions do those positions correspond to and how material are they with respect to current RWAs for market risks?
It is possible that in certain situations there may be residual FX positions at a legal entity level that are not captured in market risk. This could be due to unsold currency P&L for the reporting period across multiple currencies and legal entities that is individually below the sell-off requirements for the entity. It is therefore likely that any exposure from this scenario would not be material at a consolidated level.
Question 18. What issues would institutions face to value those positions in order to calculate the own funds requirement for market risks using the FRTB standards? Currently, do you revalue all components for the purposes of computing the own funds requirement for market risks? If not, which ones? Currently, how frequently are those positions valued?
No comment.
Question 19. For the non-trading book positions subject to the market risk charge that are not accounted for at fair value (or in the case of FX, are non-monetary), do stakeholders have the capacity to mark these positions to market and how frequently can this be done? Do stakeholders have the capacity to “mark to market” the FX component of the non-monetary item subject to FX risk on a frequent basis (for example daily)?
There is no capacity or benefit for banks to create a daily P&L process for non-trading books in order to revalue FX positions daily. For many accrual banking book positions it is difficult to obtain intra-month valuations. Revaluing only the FX positions on a daily basis is therefore not representative of the actual risk and would still show significant month-end jumps. An artificial risk construction tested against an artificial P&L therefore does not represent a reasonable test of the model.
Question 20. Does IFRS 13, i.e. Fair Value Measurement, have an impact on the frequency of non-trading book revaluations? If yes, please explain how.
It does not have an impact on the frequency, as IFRS 13 does not prescribe the frequency of accounting valuation.
Question 21. Are there other factors (for example impairments or write-downs) that can affect the valuation of non-trading book FX positions?
In general, any factor that leads to adjusting the carrying value can affect the valuation, e.g. loan loss provisions.
Question 22. Do stakeholders have a view on what minimum number of notional trading desks should be allowed? What would be the negative consequences of applying some restrictions to the number of notional trading desks allowed (for example only one notional desk for FX positions and only one for commodities)?
Restricting the number of notional trading desks to only one per risk type (FX, commodities) may prevent any application for the IMA if one part of a bank's business is not able to fulfil all IMA requirements.
Question 23. Do you consider that trading book positions should not be included in notional trading desks? Would you agree that, for trading desks that include trading and non-trading book instruments, all the trading desk requirements should apply? Do you consider that for notional trading desks all the trading desk requirements should apply? If this is not the case, which qualitative requirements of Article 104b(2) of the CRR2 proposal could not practically apply to notional trading desks?
In our view, notional trading desks should not be required to meet any of the qualitative requirements. Consider, for example, a single notional desk for banking book FX; it would either not be possible or not meaningful to meet the requirements:
- The business strategy and risk management structure would be defined for the entire bank – there is no meaning in having to meet this requirement at the desk level
- Many “dealers” would be involved in this notional trading desk
- Limits would not be set at this level as it is an artificial construct with no business owner
- Reporting in isolation has no added value for the notional desk
- Lastly, the notional desk would not have a defined business plan
As such, to prevent regulatory arbitrage, notional desks should be limited to BB positions, while Trading book positions should not be allowed to sit on notional desks.
Question 24. Do you see a reason why backtesting requirements should not apply to notional trading desks?
Backtesting & P&L Attribution of positions which are not Fair Value is of limited value. Requiring a separate “artificial” accounting approach for the purposes of own funds calculation increases overheads and lacks any use test to control its accuracy.If this artificial approach is not abandoned then two elements need to be taken into account:
- Non-fair-value positions from IMA notional trading desks need to be excluded – this would mean that there would need to be at least separate FV and non-FV notional desks.
- The quantitative requirements for IMA for the notional desks also need to be removed
Question 25. Do you see a reason why P and L attribution requirements should not apply to notional trading desks?
Similar to our response to Question 19, we do not see a benefit in a daily P&L attribution process for non-trading book positions, as they are not revalued daily.
Question 26. Do you agree with the proposed general definitions of instruments referencing an exotic underlying and instruments bearing other residual risks? Do you think that these definitions are clear? If not, how would you specify what is an ‘exotic underlying’ and what are ‘instruments that reference exotic underlyings’? Please provide your views, including rationale and examples.
The definition provided is sufficiently clear. Nevertheless, we are of the opinion that products such as variance swaps (future volatility exposure) should not be included in the exotic risk category, since an SBM vega charge can be computed and any residual risk would be capitalised via a 0.1% charge.
Question 27. Do you agree with complementing, for the sake of clarity, those definitions with a non-exhaustive list of instruments bearing other residual risk? Similarly, do you agree with retaining the possibility of excluding some instruments from the RRAO?
We do not see the necessity of complementing those definitions with a non-exhaustive list. Nevertheless, we do see that certain products, by virtue of the way they are hedged in the market, should not attract an RRAO charge and can therefore be excluded. This could be the case for standard CMS-related hedging mechanisms for CMS spread structures, which should not themselves attract additional RRAO charges. To provide an example, consider a multi-look CMS spread trade (assuming a 20-year maturity); this can be hedged with 20 one-look CMS spreads. Under the current rules all of these trades will attract RRAO, i.e. on 21 times the individual notional, while in reality such a package will show an overall flat risk.
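As a purely illustrative calculation, assuming the 0.1% risk weight for instruments bearing other residual risks and that each of the 20 one-look hedges carries the same notional N as the multi-look trade:

    \text{RRAO charge} \approx 0.1\% \times (N + 20N) = 2.1\% \times N,

even though the net market risk of the hedged package is broadly flat.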
Question 28. More specifically, do you consider that there are particular instruments (or underlyings) which, while meeting the definitions above (in line with point (d) of paragraph 58 of the FRTB), should be excluded from the RRAO? Alternatively, on the contrary, do you consider that there are instruments (or underlyings) that are not captured by the definitions above and that should be subject to the RRAO? Please provide your views, including rationale and examples.
As mentioned above, some instruments, such as variance swaps or CMS spread options, should be excluded or should use a more accurate risk weight, to avoid penalising market-making activities.
Question 29. Although the proposed list of options does not aim at being exhaustive, since there is a general definition, do you find that any important option type meeting the criteria in point (i) of point (e) of paragraph 58 of the FRTB is missing? Conversely, do you think that any of the options in the list does not meet general criteria?
No comment.
Question 30. Do you think there are any instruments, not meeting the general definitions above, whose risk would however be poorly captured within the standardised approach and should therefore be included in the list of instruments subject to the RRAO?
Distressed products which trade far from par, and products that have already defaulted, are generally price-based and are not appropriately captured within the standardised approach. While for subordinated exposures the DRC SA already provides adequate capitalisation (which would cover any residual risk and losses), for senior exposures it could be argued that some residual risk exists due to recovery risk. For this reason, banks would expect such exposures to be charged a residual risk add-on of 0.1% to cover such residual recovery risk.
Question 31. What are your views on the proposed treatment for behavioural risks? Do you have any proposal for a more objective/prescriptive approach to identifying instruments with behavioural risks?
For securitised products, it would be preferable to have further clarity on the current RRAO FRTB classifications. Embedded prepayment risk (embedded optionality) is generally present in many securitised products (except CMBS), which are generally booked as linear instruments. The bank suggests that such linear instruments with prepayment risk (including non-retail instruments), where an uneconomic exercise of the embedded optionality leads to a loss, should be in scope for the RRAO, as those positions will not be charged vega/curvature but do exhibit convexity from prepayment behaviour. Therefore, where positions have prepayment risk (i.e. their duration factors in a non-zero prepayment rate / expected call date), it would be preferable to use a behavioural add-on in addition to the standardised Default Risk Charge (DRC) and Credit Spread Risk (CSR) charge.
Question 32. What are your views on the role that the list in point (h) of paragraph 58 of the FRTB should play?
No comment.
Question 33. Are there any cases in which instruments could meet the definitions of both ‘instrument referencing an exotic underlying’ and ‘instrument bearing other residual risks’?
Variance swaps may fall into this category, being tagged as exotic risk while rather bearing residual risk. This is an issue that should be fixed by moving future realised volatility outside the category of exotic risks.
Question 34. What is your view on the outlined approach? Please provide background and reasoning for your position.
We are of the opinion that the liquidity horizon (LH) framework does not need further granularity. Some discretion would be required by banks in mapping more complex products to liquidity horizon buckets. For example, in the case of multi-underlying trades the approach listed in paragraph 148 seems sensible, but there are other situations where some discretion should be applied based on expert judgement for more complex exposures, e.g. when dealing with VIX indices. For such corner cases the bank should be allowed to apply internal methodologies that identify the most relevant liquidity horizon, once approved by regulators.
Question 35. Do you have in mind risk factors for which additional guidance is needed? If yes, which ones?
No additional risk factors come to mind.
Question 36. Do you have in mind any risk factor categories or subcategories to add to those listed in Table 2 of Article 325be of the CRR2 proposal?
We advise amending Table 2 of Article 325be to include the additional risk factor categories and liquidity horizons listed in paragraph 2.2 of the Basel FAQ published in January 2017.
Question 37. Would you think that QAs could be sufficient to provide additional guidance (instead of RTS)?
Q&As would be sufficient to clarify open uncertainties without compromising flexibility. The following elements would, however, benefit from additional guidance via RTS’s:- Liquidity horizon recalibration, especially in order to account for de-risking profile and mean reversion effect for liquidity horizon above 40days.
- Reducing the cliff effect between LH buckets, as per the point raised in our answer to Question 45 (equity small cap context).
Question 38. What is your view on the definition and level of the threshold used for assigning currencies to the most liquid category?
Quantifying the concept of liquidity via a single attribute should be done using a broad market definition. Setting a liquidity level based only on OTC market data, e.g. by using the BIS OTC derivative statistics, would be limiting and misleading. Both cash and derivative products should be considered, as well as OTC and exchange-traded markets. Furthermore, we would like to highlight that the use of different liquidity horizons for specified and non-specified currencies could have an unintended impact on liquidity by penalising emerging market jurisdictions and introducing an uneven playing field. We would therefore suggest the use of a single liquidity horizon set at 10 days.
Question 39. If you agree with the threshold outlined, would you agree that the list of selected currencies should be updated on a triennial basis following the publication of the BIS OTC derivative statistics?
We do not consider that the BCBS calculation captures enough of the market to be considered a true measure of liquidity, and a three-year revision cycle is not appropriate given the dynamic nature of the market.
Question 40. If you do not agree with the threshold outlined, please provide reasoning for establishing another selection criterion.
It is worth noting that, in relation to FRTB NMRF, many data providers, such as Markit or Bloomberg, are currently developing initiatives to provide an overview of liquidity by product. Regulators could leverage such information to better calibrate the FRTB liquidity horizons.
Question 41. What is your view on the definition and level of the threshold used for currency pairs to be considered most liquid?
Although we support the idea of using the triennial central bank survey on FX as a source for volumes, we are of the opinion that other sources such as Bloomberg and Reuters should also be utilised to achieve a more holistic view of FX market liquidity. Furthermore, if banks were to estimate the required liquidity horizon, they would also take into consideration elements such as the bank’s market share, the risk sensitivity to each FX risk factor and the internal limits which reflect the bank’s risk appetite.
Question 42. If you agree with the threshold outlined, would you agree that the list of selected currencies should be updated on a triennial basis following the publication of the BIS OTC derivative statistics?
No comment.
Question 43. If you do not agree with the threshold outlined, please provide reasoning for establishing other selection criteria.
As mentioned above, in order to define liquidity horizons, banks have far more information available than a defined turnover level from the BIS report. From internal analysis, our view is that for the FX spot market there should be no distinction between currencies, since all of them (currently classified in the FRTB text as liquid or illiquid) would qualify for a liquidity horizon well below 10 days. Recent analysis of a material bank's risk shows that 2 days is a sensible indication for the liquidity horizon. Although a 2-day LH might not always be applicable, an FRTB 10-day LH would be a conservative assumption. A similar consideration can be made for FX volatility risk factors, for which the same analysis shows a maximum 5-day liquidity horizon (even for very concentrated exposures), leading to the suggestion of using a 10-day FRTB liquidity horizon as opposed to the 40-day LH currently presented in the FRTB standards.
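For illustration only, under a simple square-root-of-time scaling (used here purely to size the conservatism, not as a statement of the FRTB aggregation formula):

    \sqrt{10/2} \approx 2.2 \quad \text{(FX spot: proposed 10-day LH vs the 2 days observed)}, \qquad \sqrt{10/5} \approx 1.4 \quad \text{(FX volatility: proposed 10-day LH vs the 5 days observed)},

compared with \sqrt{40/5} \approx 2.8 under the current 40-day LH for FX volatility; i.e. a 10-day LH already embeds a sizeable buffer over the liquidation periods indicated by the analysis above.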
Question 44. Do you consider that triangulation of currency pairs should be allowed? Is triangulation used in practice to hedge less liquid FX positions?
As mentioned above, only one LH should be used across all currency pairs. Nevertheless, the concept of triangulation would also be an option if it is effectively adopted in regular risk management practice.
Question 45. What is your view on the definition and level of the threshold for defining small and large capitalisations for equity price and volatility?
In principle, we support the EBA’s suggestion on how to assign small and large capitalisation liquidity horizons to equity prices and volatilities. Nevertheless, we are of the opinion that banks should have discretion in the methodology for assigning liquidity horizons, subject to internal validation. Such methodologies may consider the relative size of the exposure and should aim to avoid cliff effects and fluctuations in capital requirements. Against this background, we note that:
- where there are trades on names smaller than EUR 2bn, the position is to be considered small in size and manageable relative to the liquidity of that market. This refers to the size of a bank’s position relative to the market, which is slightly different from paragraph 166-2, which refers to choosing instruments which are liquid relative to the markets the bank is operating in.
- In addition, if the market cap of a particular stock keeps fluctuating around the EUR 2bn mark, the capitalisation process might pick up significant noise, since the liquidity horizon for the equity small cap risk factor would fluctuate between 120 days and 20 days on a frequent basis, as illustrated below.
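For illustration only, again using a simple square-root-of-time view rather than the full FRTB liquidity horizon aggregation, a reclassification of a name from the 20-day to the 120-day bucket scales the implied shock by roughly

    \sqrt{120/20} = \sqrt{6} \approx 2.4,

so a stock oscillating around the EUR 2bn threshold could repeatedly move the associated capital by a factor of this order with no change in the underlying risk.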
Question 46. Do you see any problems in using the ITS published by ESMA to specify the equities that can be considered as large capitalisations?
From a risk management point of view, we are concerned that following ESMA’s main indices and using their components, as suggested in paragraph 166-2, might include highly illiquid stocks and count them as large cap names. This is not sensible, as they genuinely should be classed as small cap. For this reason, banks should have the discretion to classify such exposures as small cap.
Question 47. Do you agree with the list of criteria for systematic exclusions from hypothetical P and L?
Yes, we agree. The proposed exclusion criteria provide enough information to identify and exclude the valuation adjustments which, in line with industry feedback, should be excluded from the HPL.
Question 48. Do you have numerous valuation adjustments not computed at desk levels? For those VAs, would it be possible to calculate them at desk level? If not, explain why.
Yes. Valuation adjustments are taken at a portfolio level and applied consistently across the parameters appropriate to it, with the understanding that individual adjustments may be required at a more granular level (e.g. product/model or trade specific). When considering the appropriate netting level for the close-out cost calculation, it is important to use a level which reflects how business units would economically unwind the risk in practice. In most cases this is considered to be the region/business unit level rather than the trading desk level. It would be possible to calculate such valuation adjustments at the desk level, but that approach is not deemed suitable, as it would not reflect how business units would economically unwind the risk in practice.
Question 49. Do you agree with the criteria defined for the inclusion of a valuation adjustment in the hypothetical P and L? If not, please give arguments. Do you agree with the proposal to provide only criteria for inclusion in or exclusion from the hypothetical P and L, in order to allow some flexibility, or do you think that we should have non-exhaustive lists supplemented by criteria?
Yes, we agree. This is because the inclusion criteria revolve around the following two arguments:
- Only VAs which are updated daily and which are not on the exclusion list should be included in the HPL
- If an adjustment is considered in the daily VaR, the same should also be included in the HPL
If an adjustment can be performed on a daily basis, it is likely that such an adjustment is part of correct marking practice and contributes to the daily volatility, which the VaR is meant to capture.
Question 50. Do you agree with developing additional guidance on specific valuation adjustments: related to market risk versus not related to market list, possible daily frequency update in the P and L versus not daily, ‘top of the house’ versus desk-level computation?
No, we are of the opinion that the exclusion and inclusion criteria proposed in the EBA DP are sufficient. A definition of specific valuation adjustments related to market risk would be restrictive and difficult to apply in practice, considering that each institution has a different way of naming and defining valuation adjustments. In principle, all valuation adjustments can be somehow linked to the market risk concept. However, only if they are updated on a daily basis can they be effectively captured by the bank's risk model, since such daily adjustments will contribute to the daily volatility captured in the risk model. Therefore the key criterion which we propose for inclusion is the daily update frequency.
Question 51. Did you have overshootings that are mainly caused by valuation adjustments included in the hypothetical P and L? If yes, which valuation adjustments were mainly causing overshootings? Did you identify types of desks which were more frequently affected by such overshootings? Are these desks likely to breach the backtesting thresholds because of these overshootings (how frequently do the overshootings occur)?
Valuation adjustments are not a large enough driver of overshootings to impact desk eligibility.
Question 52. Do you agree with the list of criteria for systematic exclusions from the actual P and L?
Yes, we agree. The EBA DP proposes to exclude the VAs which are charged under a separate capital treatment and excluded from CET1, while proposing to include the adjustments which are not captured daily. This is sensible, since the actual P&L will only be used in backtesting; hence a P&L event due to remarking, e.g. an IPV valuation adjustment (usually taken monthly), will not have any adverse effect on the P&L attribution process but rightly will affect only backtesting.
Question 53. Do you agree with the criteria defined for the inclusion of a valuation adjustment in the actual P and L? If not, please provide arguments.
Yes, we agree as per the answer above.
Question 54. Did you have overshootings that are mainly caused by valuation adjustments included in the actual P and L? If yes, which valuation adjustments were mainly causing overshootings? Did you identify types of desks which were more frequently impacted by such overshootings? Are these desks likely to breach the backtesting thresholds because of these overshootings (how frequently do the overshootings occur)?
See answer to Question 51.
Question 55. According to you, is the net interest income part of the time effect?
Yes, it is part of the time effect.
Question 56. Do you agree with the proposed definition for net interest income? If not, what would be your proposal?
We do not see the need to define NII as “the cash flow related component of the passage of time on the value of the portfolio. It measures the paid or received interest cash flows and the interest cash flow related effect on the fair value”, or to take this definition up in the RTS as per the EBA proposal. We support the more generic definition of ‘P&L due to passage of time’.
Similarly, we do not see the need to define the “time effect” or to capture it in regulation as per the EBA proposal. We support the more generic definition of ‘P&L due to passage of time’.
Question 58. Regarding the different proposals, do you agree with EBA that Proposal 2 would achieve the best outcome? If not, what would be your suggestion?
No comment.
Question 59. Do you agree with the principle of including in or excluding from the risk-theoretical P and L the same valuation adjustments as for the hypothetical P and L?
Yes, we agree, as per our answer above.
Question 60. What are your preferred options for points 1-8 above? How would you justify these preferences?
1) Definition of the observation period
Approach c) is operationally efficient and should be sufficiently conservative. In case no data from the stress period is available, the current period could be scaled up based on the factors used for ES. Options 1b) and 1d) are computationally complex, especially when done at P&L level, and are therefore not our preferred options.
2) Types of data acceptable for the observations
2b) should be the default case, with 2c) used where necessary; 2a) would contradict the BCBS principle of “best data”.
3) Additional conditions on the data observed for the NMRF
It is sensible to only allow one risk factor level per day. Genuinely stale risk factor observations should not have to be filtered out; this would be an additional requirement beyond the IMA requirements and would lead to significant operational hurdles. Rather than falling back to the fallback shocks, the “gauge data” mentioned under 2) could provide sufficient information to derive shocks.
4) Definition of the liquidity horizon LH(j) for an NMRF
Effective liquidity horizons would be a significant operational burden and very complex to apply, e.g. due to monitoring of broken hedges.
5) Calibration of parameter CLsigma
No comment.
6) Calibration of parameters CES equiv
Paragraph 262b) provides a reasonable approach. A floor of 3 seems rather high given the large number of NMRF and the conservative aggregation. A regular calibration (e.g. monthly) of a single value should be sufficient.
7) Calibration of κ (kappa)
Option E: the main driver of conservativeness for NMRFs is the conservative aggregation scheme, given the large number of risk factors. Setting a single value of kappa for all risk factors would be too simplistic, while defining an individual kappa for every risk factor is too complex.
8) Calibration of C_ES,equiv and κ to achieve the target calibration ‘at least as high as an expected shortfall’
No comment.
Question 61. Do you have any observations or concerns about the overall methodology proposed for point (a) of the mandate?
In general, we have some concerns about the level of conservativeness. Various layers of conservativeness are stacked on top of each other (volatility calculation, C_ES factor, kappa, a correction factor to avoid underestimation for small samples), which will lead to overly conservative stand-alone numbers. The conservativeness of the NMRF charge is largely driven by the conservative aggregation scheme. Ensuring that every NMRF is at least as conservative as a stand-alone ES calculation, and then adding these up, will lead to even more conservative NMRF impacts.
Kappa calculation: properly calculating kappa is computationally expensive (every NMRF will have a bespoke kappa) and the correct value of kappa would constantly change over time. Selecting an unbiased value would be necessary to avoid overly conservative calculations, given the large number of NMRFs and the conservative aggregation scheme.
Return calculation: non-equidistant returns are scaled to long liquidity horizons. This will lead to significant complexity, as every return could come from a different time period. A more pragmatic approach would be to calculate returns over 10 days, in line with ES, and scale them to longer liquidity horizons. Scaling short returns to very long holding periods using the square-root-of-time rule will easily lead to excessive shocks, in particular if the shock is calibrated for a basis risk factor (in cases where banks decompose NMRFs into a modellable proxy and a non-modellable basis, footnote 40 of the BCBS text).
Paragraph 247: For all non-linear risk factors, an optimization over the range of possible risk factor values is necessary. While this is easily achievable for sensitivity-based calculations, a grid-based approach is required for solutions based on full revaluation. This will significantly increase the overall model complexity and likely lead to RWA variability.
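To illustrate the operational point above, a minimal sketch of a grid-based search for the worst loss under full revaluation (the function and parameter names are ours and purely illustrative; in practice every NMRF, pricing model and shock range would need its own configuration):

    import numpy as np

    def worst_loss_on_grid(revalue, base_value, shock_down, shock_up, n_points=21):
        """Search a grid of candidate shocks for the largest loss under full revaluation."""
        grid = np.linspace(shock_down, shock_up, n_points)
        losses = np.array([base_value - revalue(s) for s in grid])  # one full revaluation per point
        worst_idx = losses.argmax()
        return losses[worst_idx], grid[worst_idx]

    # Illustrative non-linear exposure: the worst loss sits strictly inside the shock
    # range, so evaluating only the two calibrated endpoint shocks would miss it.
    payoff = lambda s: 100.0 - 40.0 * s + 400.0 * s ** 2
    loss, at_shock = worst_loss_on_grid(payoff, base_value=payoff(0.0),
                                        shock_down=-0.10, shock_up=0.10)
    print(loss, at_shock)  # -> worst loss of approximately 1.0 at a shock of approximately +0.05

Repeating this optimisation for every non-modellable risk factor under full revaluation is what drives the model complexity referred to above.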
Question 62. Do you have an alternative proposal for the calculation of an extreme scenario of future shock or stress scenario risk measure?
As an alternative, we propose to allow the use of stale data to calculate the standard deviation; it is very complex to cleanly differentiate between genuinely stale and non-stale data. Every shock will have a bespoke liquidity horizon, making the overall calculation significantly more complex. In addition, return calculations over longer time horizons should also be allowed, to mitigate the impact of square-root-of-time scaling to long horizons, which is a particular issue for basis risk factors scaled to long horizons. Rather than selecting the kappa, C_ES and sample size correction factor parameters to ensure that every NMRF is at least as high as a stand-alone calculation, the focus should be on the overall level of capital, which will mostly be driven by the conservative aggregation approach.
Question 63. Do you have any comment on the ‘risk factor based approach’ versus the ‘direct loss based approach’? Is computational effort a concern?
If the goal is to ensure that the stress scenarios lead to an ES-equivalent number in all cases, a direct loss-based approach seems more logical than the risk factor based stress approach described here. The P&L approach is very similar to traditional risk metrics like ES and VaR, which naturally leads to the question of why those risk factors should not be included in the ES model in the first place.
Question 64. Is there a case for allowing institutions to calculate a standalone expected shortfall directly?
For solutions based on full revaluation this approach would quickly become computationally complex, due to the multitude of NMRFs that would require a stand-alone ES calculation.
Question 65. Do you have any views on points (a)-(g) above?
We do not have any additional views on these points.
Question 66. What are the most relevant NMRFs for your institution in broad terms?
Given the status of discussions at the Basel level and the uncertainty on the final eligibility criteria, it is at this stage not possible to provide detailed feedback.
Question 67. What are the most relevant statistical distributions for NMRFs?
Given the status of discussions at the Basel level and the uncertainty on the final eligibility criteria, it is at this stage not possible to provide detailed feedback.
Question 68. What are the most relevant non-linear tail loss profiles that need to be considered?
No detailed comments at this stage.
Question 69. What is the materiality of non-linear tail losses in practice?
No detailed comments at this stage.
Question 70. Do you deem Option 1 (the ‘maximum possible loss’) or Option 2 (the prescribed risk weights) more suitable as a fallback approach? What is the reason for your preference?
Option 2 is more suitable. Option 1 is generally not workable, as a maximum possible loss cannot be determined for the vast majority of risk factors.
Question 71. Do you deem the risk factor categories and respective shocks presented in the tables in Annex 2 appropriate for the (types of) NMRFs you expect? If not, what is your proposal to remedy the issues you see?
No comments at this stage.
Question 72. Do you agree that, to the extent possible, new FRTB models in the EU should be approved according to updated, harmonised RTS on assessment methodology? Do you agree that, in the absence of such revised standards, relevant parts of the published RTS on assessment methodology, provided they are in line with the new requirements, should apply?
In the event that the RTS on assessment methodology is adopted, we agree that it makes sense for the EBA to propose a revised set of rules for application to the FRTB. Many of the articles, however, will not be applicable under the new FRTB regulations. Given the non-final status of both the RTS on assessment methodology and CRR2, it is difficult to agree that some of the articles from one should apply to the other. In principle, it is useful to establish that they can be used as guidance for banks and Competent Authorities; however, we would not recommend that any formal requirement or standard be established ahead of a revised version.
With respect to the RTS on Model Changes, this should only become relevant after go-live of the FRTB framework. As such, we believe that there is sufficient time for a revised version to be published.
There are also a number of elements of the current RTS, which will need to be explicitly addressed ahead of the implementation of FRTB, such as:
- Definition of extensions, in particular to new desks.
- Ensuring that changes in market risk factors are appropriately considered by the quantitative tests at the desk level.
Question 73. Do you agree that a recalibrated version of the current standardised approach – for banks below the EUR 300 million threshold (as currently proposed in the CRR2 proposal) – is preferable in the EU to the implementation of the BCBS reduced SBM? Do you agree that the recalibration should be carried out simply at the risk class level by applying a scalar, such that the recalibrated approach is generally more conservative – but not systematically more conservative – than the FRTB SA?
No comment, as this is related to banks with smaller trading books.
Question 74. Do you have any comment on the items mentioned in this section or wish to raise additional implementation issues?
No additional comments.