Response to the Joint Committee Discussion Paper on automation in financial advice

1. Do you agree with the assessment of the characteristics of automated financial advice tools presented in this Discussion Paper? If not, please explain why.

The Discussion Paper looks at all automated tools that enable consumers to obtain a recommendation for a banking, investment or insurance service based on criteria and preferences they themselves have inputted, without any further human intervention. At the same time, it focuses only on tools whose output can reasonably be perceived, from the consumer’s perspective, to be automated financial advice.

While the consumer’s perspective is a perfectly suitable criterion for defining the range of tools to be discussed, we believe that a further differentiation based on the quality of a tool’s output should be made. Such a differentiation is required so as to avoid any conflict with rules on the provision of advice already in place, e.g. in the area of investment advice on financial instruments.

At the same time, it should be borne in mind that not every automated tool that names financial products on the basis of criteria supplied by the consumer constitutes advice. A general distinction thus has to be made between automated recommendations and the mere results of – likewise automated – searches. As explained, a crucial distinguishing criterion is whether consumers perceive the output of a tool to be advice, i.e. a recommendation tailored to their personal circumstances.

Bearing this in mind, we take an extremely critical view of section 24 of the Discussion Paper – for the securities sector, at any rate. Section 24 implies that advice is involved. If so, all supervisory requirements relating to advice must also be met when advice is provided on an automated basis. This would not be the case in the securities sector if a recommendation for a particular financial instrument were merely based – as mentioned by way of example in section 24 of the Discussion Paper – on a risk profile and investment horizon. We consequently take an equally critical view of the third example in section 26 of the Discussion Paper. It creates the incorrect impression that, in the case of automated advice, there are no binding supervisory requirements under securities law governing the online questionnaire completed by the consumer (see the wording "this might include"). Furthermore, the information listed is not the same as that called for under the supervisory requirements applying in the securities sector.

Automated advice should be governed by the same supervisory requirements as advice provided by a human adviser. Human advice should not be discriminated against compared with automated advice. The same activities should instead be subject to the same regulatory standards.

It should thus also be ensured that, in online business as well, the term “advice” may only cover recommendations which – in regard to securities, at any rate – are based on a consumer questionnaire in line with the supervisory requirements applying under securities law and on a suitability assessment. Conversely, it should be made clear to consumers if merely “selection aids” are involved that are designed simply to help them make an investment decision on their own responsibility.

The following cases, in particular, cannot be regarded as constituting advice:

• Search engines that select products from a universe of products solely on the basis of criteria inputted by the consumer. Such automated selection tools are currently in use at national level. They allow consumers to preselect products suitable for them from a provider’s universe of products, using simple selection criteria. This is also made sufficiently clear to consumers by the tools’ design and operating process. These selection tools do not constitute advice and they are thus, for good reason, also not subject to the more stringent requirements governing the provision of advice. With regard to section 24, there are tools on the market in the securities sector that mostly categorise consumers on the basis of their risk appetite/risk expectations and then recommend model portfolios. This process is not treated as advice in standard market practice.

• Tools which – based on information inputted by consumers – recommend spreading an investment across different asset classes in a certain way. Since consumers are merely recommended a portfolio structure at asset class level based on their input, without any specific financial instruments being named, the conditions for investment advice are not fulfilled in this case.

Further discussion should therefore focus on “real” automated advice tools.

In this context, the principle of equivalence of distribution channels must apply. Consumers should be able to decide for themselves whether they prefer to obtain human advice or automated advice. There should be no discrimination against human advice (based on personal trust) compared with automated (anonymous) advice.

3. Are you aware of examples of automated financial advice tools being used in the banking, insurance, and/or securities sectors? Please provide examples, giving details of their operating process.

Current practice is adequately described by the examples given in the Discussion Paper.

4. Do you offer/are you considering offering automated financial advice tools as part of your business model? If so, please briefly describe: i) what type of entity you are, e.g., long established, start-up, a product provider, an intermediary; ii) the service you provide (e.g. to what extent do you integrate human interaction in the tool you provide?); iii) the nature of your clients; iv) your business model; v) who developed the automated tool (i.e. an external company or developed internally?); and vi) the size of your activity and/or forecast activity?

Several member banks of the German Banking Industry Committee associations offer a number of tools designed to help clients find services suited to their needs in different areas.

6. Do you consider the potential benefits to consumers to be accurately described? If not, please explain why.

Section 31 says that a particular benefit of automated advice is that it is cheaper than face-to-face advice. This may be true for fee-based advice, where consumers have to pay a fee for a recommendation irrespective of whether or not they then choose a product, making it particularly expensive for less wealthy consumers. By contrast, in the case of commission-based advice, which is a widely used service in Germany, consumers are not charged directly for the advice. They are instead billed indirectly and only once they have actually chosen a product, i.e. once they have made a decision based on a recommendation. Consumers are billed indirectly regardless of the distribution channel through which they contact their bank. They therefore determine where, how and when they wish to obtain advice or help in making a decision. Consumers must remain free to do so in the future as well. The broad group of less wealthy consumers should never be forced into switching to automated advice. Precisely these consumers rely particularly heavily on personal support when it comes to financial advice and financial provision for the future: experience shows that the less money consumers have, the less financial knowledge and experience they possess and the less willing or motivated they are to take care of financial provision arrangements themselves.

Whether, as claimed in sections 32-33 of the Discussion Paper, automated tools enable a wider range of consumers to access advice remains to be seen. Obtaining help and advice online is, at any rate, an additional route by which consumers can contact their bank, depending on their current preference.

We cannot agree with section 35 of the Discussion Paper when it says that automated tools enable consumers to obtain financial advice “in a faster, easier and non-time-consuming way”. This argument is based on the idea that answering a few questions and specifying a few criteria can produce a result on a par with advice obtained personally in a branch. The supervisory requirements under MiFID I, and certainly also under MiFID II, obligate advisers to do much more than just collect a few details about clients. They are instead required to make a comprehensive analysis of clients’ personal and financial circumstances and to identify clients’ investment preferences together with them, taking due account of these circumstances. Clients sometimes pursue conflicting financial objectives, and these have to be discussed jointly by client and adviser and reconciled. Clients sometimes also overlook important needs, particularly when it comes to financial protection against risks; where insurance is concerned, such gaps only come to light after an adviser takes full stock of all existing or, as the case may be, missing insurance policies. Only if a tool offered online does the same should it be “classifiable” as advice (see in this context our reply to Q1, fifth paragraph). Whether the data input required is actually less time-consuming for consumers is doubtful. If such a questionnaire were to be compiled, for example, by means of extensive profiling through evaluation of contract data, usage data and internet footprint data, it should be pointed out that substantial data protection obstacles would first have to be overcome.

We also beg to differ as regards the claim made in sections 36/37 of the Discussion Paper that the advice consumers receive when they use automated tools is more consistent and that well-developed algorithms could ensure that consumers with similar characteristics receive similar advice. What is evidently meant is that automated advice is free of any subjective and personal assessments on the part of the adviser. This may be perceived as an advantage in individual cases. Nevertheless, it should not be underestimated that precisely the adviser’s personal experience can also be extremely beneficial to a recommendation. Moreover, this again overlooks the fact that current automated product selection tools do not conduct any check similar to a suitability assessment, so that automated advice and face-to-face advice cannot in fact be compared. In particular, automated advice does not normally allow any processing of individual descriptions of complex personal situations. It is also questionable whether the term “personalised feedback” can be used in connection with automated advice at all (see section 38 of the Discussion Paper and our above comments on section 36).

Another benefit mentioned is that consumers receive advice based on the latest market information. That applies equally to the tools used by human advisers.

Finally, a further purported advantage is that automated tools allow consumers to more easily document the advisory process, e.g. in the form of a printout (see section 39 of the Discussion Paper). Such a generalisation is incorrect. When obtaining advice in a branch, clients also receive not only the written record of advice prescribed by law but also, particularly where integrated advice is concerned, printouts containing full details both of the information on which the advice was based and of the recommendations made.

7. Are you aware of any additional benefits to consumers? If so, please describe them.

The main benefit for consumers is that they themselves can decide how they wish to obtain advisory services. It should therefore be ensured in the future as well that consumers receive equivalent services no matter whether they use the internet or a branch.

9. Have you observed any of these potential benefits to consumers? If so, please provide examples and describe the kind of benefit that has accrued.

As regards the potential benefits, see our reply to Q6.

10. Do you consider the potential benefits to financial institutions to be accurately described? If not, please explain why.

In the long term, it may be cheaper for financial institutions to provide advice through automated tools than through branches. However, the initial cost of developing tools that actually select products for the customer in the same way as a suitability assessment would is likely to be very high. It is questionable whether any tool currently available on the market performs such complex selection for the user. The tools that appear to be successful in the marketplace at present are rather those enabling consumers to copy third-party investment strategies. This process, too, is geared less to making recommendations to consumers and more to helping them make decisions on their own.

A benefit mentioned in section 41 of the Discussion Paper is that automated tools enable financial institutions to broaden their client base. We agree. Today, it is undoubtedly market standard for institutions to offer their clients a sufficiently attractive range of online products and services allowing them to conduct financial business outside opening hours as well. This also applies to decisions on investment, financial provision for the future and protection against risks. By offering an attractive range of online products and services, institutions can improve their market standing. Nevertheless, it should be remembered that, while the internet is a highly important distribution channel, it is not the only one.

A further benefit referred to in section 42 of the Discussion Paper is that institutions can use automated financial advice tools to deliver a more standardised “consumer experience” free of human interpretation. The fourth paragraph of our reply to Q6 above applies in this respect. To allow proper comparison, comparable standards would have to be defined for both face-to-face and online advice. For example, a tool that matches available products solely against the clients’ risk appetite can hardly be called “advice”.

Finally, section 44 mentions as a benefit the fact that automated financial advice tools can be more easily monitored in regard to compliance, etc., since their decisions, unlike decisions made by human beings, are highly consistent. Our above comments again apply in this respect. Easy monitoring is due at present to the fact that the complexity and depth of the underlying decision matrix is still quite limited. Whether this will remain the case in the future is questionable.

14. Do you agree with the description of the potential risks to consumers identified? If not, explain why.

We should like to comment below on specific risks identified in the Discussion Paper:

Sections 51-52 of the Discussion Paper refer to the danger that consumers may not understand certain key information on how the tool works and express concern that disclaimers may only be provided as “legal small print”. Disclaimers are a legal feature of Anglo-American jurisdictions; any solution required should therefore be left to the Member States concerned. Beyond that, we see no such danger in the marketplace. It is, after all, in financial institutions’ very own interest to make the tool understandable for consumers, or else it will not generate the required usage figures. Where more complex tools are involved, additional information is provided that consumers can view via pop-ups. In addition, usability tests ensure consumer-friendly operability. Nevertheless, it may happen that a consumer cannot handle an automated tool. With this in mind, it makes good sense in such cases – unlike in cases where mere search engines are used – for consumers to be able to obtain advice by telephone, email, online chat or in a branch.

As part of a consumer-friendly approach, care is usually taken to ensure that the information provided is comprehensible. This applies particularly where an important legal notice is required in individual cases. Such notices have nothing to do with the disclaimers common in Anglo-American jurisdictions, however. They are designed to ensure that the selection result is not misinterpreted by the consumer; they can also be required by law in individual cases. May we refer, by way of example, to the requirement under Article 4(2) of Directive 2008/48/EC to provide standard information setting out how the cost of a consumer loan is calculated.

Section 52 sees the danger that consumers may not understand how the information they input is used by the underlying algorithm. In current practice, the design of the tool, along with the accompanying information, ensures that consumers can acquire a basic understanding of how their input may affect the final output. Ultimately, however, the risk of misinterpretation cannot be completely ruled out. As correctly stated in section 54 of the Discussion Paper, it is also not yet possible at present for a tool to take into account all possible financial objectives a consumer may have. Consumers therefore need to prioritise their financial objectives themselves, using the right tool.

Sections 57-59 refer to the purported danger that consumers may receive, under the guise of “free” and unbiased advice, only recommendations relating to a predefined set of “own products/services” of a provider. Concern is expressed particularly about the possibility that consumers may assume that the advice is free of charge but later pay indirectly for it via higher transaction costs. This danger undoubtedly exists under the fee-based model, although not, in our view, under the commission-based model. Under the latter, consumers are aware that they will not be charged for the advice. Only when they actually choose a product does the provider receive remuneration as prescribed by law. This is also made clear to consumers.

Sections 60-62 express concern that consumers may wrongly perceive a tool’s final output to be advice although it is not actually tailored to their specific situation. This makes it all the more important to make clear to consumers whether the tool in question is “merely” designed to offer them support in making a decision on their own or whether a recommendation based on an automated suitability assessment is actually being delivered. We believe this is necessary so that the same market behaviour is subjected to the same regulatory requirements (for details, see our reply to Q1 above).

Sections 66-67 see the potential danger that consumers may not be sufficiently clear about how the personal data they input is going to be used or may accept data processing terms and conditions too quickly. We wish to point out that the EU General Data Protection Regulation was adopted with, among other things, the increased use of new media in mind. The statutory requirements it sets provide adequate protection. There is therefore no need for a data protection regime specifically for automated financial advice. The provisions of the EU General Data Protection Regulation also address, in particular, the concern voiced in section 67 that financial institutions may use the data they receive from consumers in ways not originally envisioned by consumers.

With regard to section 69, we wish to stress that automated advice naturally has to fully comply with all supervisory requirements. Otherwise, it may be advisable, particularly where complex tools are involved, to make clear to consumers that these are “decision-making aids” designed merely to help them make financial decisions on their own responsibility (for details, see our reply to Q1 above). A notice to this effect may, for example, be appropriate if the fact that no financial advice is provided is not evident from the tool itself.

Sections 70-71 refer to the danger of faulty data processing by a tool. The underlying algorithm should be designed in such a way that the tool delivers justifiable output based on the customer’s input. Tool malfunction can never be completely ruled out in isolated cases, however. Where technical problems arise, experience shows that these are mostly to do with website access or mean that a tool cannot be used at all.

Sections 72-73 deal with the danger of manipulation. May we point out in this context that IT legislation sets adequate security requirements for IT infrastructure. In addition, it is very much in providers’ own interest to prevent any such manipulation.

Section 77 refers to the danger that widespread use of similar automated advice tools producing similar advice may trigger a “herding” effect and that this herding behaviour could, ultimately, result in a “shock”. While such an abstract risk scenario may exist under certain conditions, it is highly unrealistic at present in our view. Nevertheless, this thinking shows that it is important not to rely on just one distribution channel.

Finally, section 78 correctly draws attention to the danger that consumers may no longer be given the opportunity to access any human financial advice. In this context, we do in fact see the need to cut back regulation on face-to-face advice to a reasonable level. In addition, automated financial advice and face-to-face advice should be subject to the same regulatory requirements and the same supervision.

18. Do you agree with the description of the potential risks to financial institutions identified? If not, explain why.

We wish to comment as follows on the risks outlined in the Discussion Paper:

Sections 79-81 refer to the danger of financial institutions being held liable for providing flawed automated advice. This liability risk applies not only to automated advice, but also to flawed face-to-face advice. It is thus all the more important to define, particularly in legal terms, from which point onwards an automated proposal in relation to a given product is deemed to be a recommendation and thus also advice in the legal sense. This is important particularly also in the context of the regulatory requirements so that financial institutions have legal certainty.

Section 82 says there is a danger for financial institutions that consumers may use a human adviser to supplement automated advice. We do not see this as a danger, but as an option that consumers should be free to choose in the future as well. Should consumers wish to combine elements of digital and human advice in the advisory process, it should nevertheless be ensured that the result is a complete, all-in-one advisory process.

Sections 83-85 outline a scenario in which financial institutions may face risks due to the fact that the party responsible for providing an automated tool cannot be clearly identified. Such a situation is not legally permissible in our view, since under Article 5 of Directive 2000/31/EC (E-Commerce Directive) the service provider must be named on the respective website.

19. Do you consider there to be any risks to financial institutions missing? If so, please explain.

No.

22. Would you agree with the assessment of the potential evolution of automated advice? Please provide your reasoning.

All service providers will make increasing use of the internet as a distribution channel in the future. The growth potential in this area has not yet been exhausted by any means. Efforts will be focused on offering consumers round-the-clock access to banking services. The main aim will be to further develop distribution processes so that, where possible, no further processing by banks is required, i.e. a “one and done” approach to banking services. A key aspect of this development will be supporting or even advising clients in their decisions to purchase a banking, investment or insurance service. Only interlocking distribution channels will deliver an optimal consumer experience, however. It is thus conceivable that consumers will start by using a decision-making aid and then switch to video advice, or perhaps arrange an appointment at a branch in order to ultimately have their financial situation thoroughly checked after all.

We disagree with the implication in section 89 of the Discussion Paper that face-to-face advice in the securities sector may be reserved only for wealthy consumers in the future. While this trend is in fact being encouraged particularly by fee-based advice, commission-based advisory services will continue to allow consumers to make use of face-to-face advice where needed.

Name of organisation

German Banking Industry Committee (Id: 52646912360-95)