AI Law
AI in financial services is not new. In fact, financial services was one of the first sectors to deploy artificial intelligence at scale. The trading activities of many financial institutions are now predominantly algorithmic, using technology to decide on pricing and when to place orders.
AI in Financial Services – Some Developments
With increased data and reporting volumes and advanced algorithms, the potential for AI in financial services to be further harnessed and developed is endless. For example:
- Anti-Money Laundering (AML). The Financial Conduct Authority (FCA), in a 2018 speech, identified the potential use of AI to combat money laundering and financial crime.
- Asset management. The use of AI is a growth area in the asset management industry, including in risk management, compliance, investment decisions, securities trading and monitoring, and client relationship management. An FCA speech on the subject suggests that investment managers may well have to increase their technology spend to keep up with AI developments.
Bank of England speech
The pace at which firms are adopting AI in financial services varies. In November 2018, the Bank of England (BoE) published a speech on the application of advanced analytics, reporting that adoption of advanced analytics across the industry is relatively slow. The speech identified the short-term cost to firms of increasing levels of automation, machine learning and AI, as well as the likely impact of such innovation on execution and operational risks, which may make businesses more complex and difficult to manage. This leaves plenty of room for business opportunity and innovation.
Financial Services Artificial Intelligence Public-Private Forum
The FCA and BoE have established the Financial Services Artificial Intelligence Public-Private Forum (AIPPF) to further constructive dialogue between the public and private sectors and to better understand the use and impact of AI and machine learning (see the AIPPF terms of reference published on 23 January 2020). The forum builds on the work of the FCA and BoE, which published a joint report on machine learning (ML) in UK financial services in October 2019, based on 106 responses. Key findings include:
- Two-thirds of respondents already use ML in some form.
- In many cases, ML development has passed the initial development phase and is entering more advanced stages of deployment. Deployment is most advanced in the banking and insurance sectors.
- ML is most commonly used in AML and fraud detection, as well as in customer-facing applications (for example, customer services and marketing). Some firms use ML in areas such as credit risk management, trade pricing and execution, as well as general insurance pricing and underwriting.
- Regulation is not seen as a barrier to ML deployment. However, some firms stress the need for additional guidance on how to interpret existing regulations. The biggest reported constraints are internal to firms, such as legacy IT systems and data limitations.
AI in Financial Services – FCA expectations
There has been little from the FCA in terms of guidance on AI compliance with its rules. Like other forms of technology, the use of AI must not conflict with a firm's regulatory obligations, such as its obligation to treat customers fairly. The FCA has expressed concern, for example, that the use of AI in financial services might make it harder for vulnerable customers to obtain insurance cover, if algorithms take into account characteristics that make it appear unviable to offer products and services to less affluent customers. Firms may therefore wish to ensure that they have systems and processes in place to monitor the impact of AI on their target customers. The use of AI also raises issues around accountability, particularly where firms rely on outsourcing arrangements.
Case-by-case basis
The FCA has said that it would approach potential harm caused by AI in financial services on a case-by-case basis. However, firms that deploy AI and machine learning must ensure they have a solid understanding of the technology and the governance around it, especially when considering ethical questions around data. The FCA wants boards to ask themselves what is the worst thing that can go wrong and to mitigate those risks. Indeed, an FCA Insight article on AI in the boardroom suggests that AI is principally a business rather than a technology issue. Boards therefore need to consider a range of factors: the need for new governance skills, ethical decision-making, explainability (do they understand how the AI operates?), transparency (customer consent for use of data), and the potentially changing nature of liability.
Some existing law and regulation applicable to AI in financial services
Misuse of data
Under the GDPR, individuals have the right to know how their personal data is being used by AI. Financial institutions should be aware that the GDPR (and section 168 of the DPA 2018) gives individuals the right to bring civil claims for compensation, including for distress, arising from personal data breaches.
Fairness, discrimination and bias
Principle 6 of the FCA's Principles for Businesses requires a firm 'to pay due regard to the interests of its customers and treat them fairly'. AI processes only the data presented to it, often on a one-size-fits-all basis, so discriminatory outcomes are a real risk.
Anti-competitive behaviour
The UK Competition and Markets Authority (CMA) has already used its powers to restrain technology with an anti-competitive objective. In August 2016, it fined Trod, an online seller of posters and frames, for using software to implement an agreement with a competitor not to undercut each other's prices.
Systems and control
Firms should be aware that the FCA can require them to produce a description of their algo-trading strategies within just 14 days, and that it recommends that firms have a detailed “algorithm inventory” setting out coding protocols, usages, responsibilities and risk controls.
Liability in contract and tort
AI usage (whether by a firm's suppliers or by the firm with its customers) may give rise to unintended consequences, may expose institutions to claims for breach of contract or in tort, and may test the boundaries of existing exclusion clauses. Firms need to assess whether their existing terms and conditions remain fit for purpose where AI is concerned.
AI in Financial Services – Case Law
The courts are due to consider in mid-2020 the question of where liability lies when an investor suffers substantial losses at the hands of an AI-powered trading or investment system in Tyndaris v VWM. While the outcome of the dispute will principally depend on the facts, the judgment may include wider comments on the use of AI systems by funds or investment managers.
Industry reports on AI
In an October 2019 report, TheCityUK concluded that AI-specific regulation was not currently appropriate. The report highlights best practices relating to fairness, transparency and consumer protection, data privacy and security, governance, and ecosystem resilience. It also sets out a suggested AI policy approach for the UK government and regulators.
UK Finance has prepared a report in conjunction with Microsoft on AI in financial services. Key takeaways from the report include the need to recognise AI as more than a tool and to consider the wider cultural and organisational changes necessary to become a mature AI business. Also, as they start to embed AI into core systems, firms need to consider the implications of AI beyond the technical, including the wider impact on culture, behaviour and governance. Part Two of the report is intended to help firms determine where AI is the right solution and how to identify high-value use cases, looking more deeply at analysing the business case. The report states that firms must consider how to supplement existing governance frameworks, or create new ones, to ensure that the ethics, appropriateness and risk of AI are in balance with the benefits it promises and the firm's corporate standpoint.
The future is here
AI is becoming increasingly embedded in everyday business practice. Where AI in financial services is concerned, a key takeaway from current regulation is that a strong understanding of how AI is used within your business, and for what purposes, can make compliance less of a headache.
EM law specialises in technology law. Get in touch if you have any questions on the above.