On 25 June 2020, the International Organization of Securities Commissions (IOSCO) published a consultation document (CR02/2020) on the use of artificial intelligence (AI) and machine learning (ML) by market intermediaries and asset managers, which it has identified as a key priority.
IOSCO consultation paper on AI
IOSCO, the global standard setter for the securities sector, is consulting on proposed guidance on the use of AI and machine learning by market intermediaries and asset managers. Once finalised, the guidance would be non-binding, but IOSCO would encourage its members to take it into account when overseeing the use of AI by regulated firms.
IOSCO’s membership comprises securities regulators from around the world. It aims to promote consistent standards of regulation for securities markets.
Why market intermediaries and asset managers?
IOSCO believes that the increasing use of AI and ML by market intermediaries and asset managers may be altering their business models. For example, firms may use AI and ML to support their advisory services, risk management, client identification and monitoring, selection of trading algorithms and portfolio management, all of which may also alter their risk profiles.
One fear is that this use of AI and ML may create or exacerbate certain risks, which could reduce the efficiency of financial markets and result in consumer harm.
AI industry discussions
As well as setting out its proposed guidance, the report shares some of IOSCO's findings from industry discussions:
Firms implementing AI and ML mostly rely on existing governance and oversight arrangements to sign off and oversee the development and use of the technology. In most instances, the existing review and senior leadership-level approval processes were followed to determine how risks were managed and how compliance with existing regulatory requirements was achieved. AI and ML algorithms were generally not regarded as fundamentally different from more traditional algorithms, and few firms identified a need to introduce new procedural controls, or modify existing ones, to manage specific AI and ML risks.
Some firms indicated that the decision to involve senior leadership in governance and oversight remains a departmental or business line consideration, often in association with the risk and IT or data science groups. There were also varying views on whether technical expertise is necessary from senior management in control functions such as risk management. Despite this, most firms expressed the view that the ultimate responsibility and accountability for the use of AI and ML would lie with the senior leadership of the firm.
Some firms noted that the involvement of risk and compliance tends to focus primarily on the development and testing of AI and ML rather than on the full lifecycle of the model (i.e., implementation and ongoing monitoring). Generally, once a model is implemented, some firms rely on the business line to oversee and monitor its use. Respondents also noted that risk, compliance and audit functions should be involved throughout all stages of the development of AI and ML.
Many firms did not employ specific compliance personnel with the appropriate programming background to appropriately challenge and oversee the development of ML algorithms. With much of the technology still at an experimental stage, the techniques and toolkits at the disposal of compliance and oversight (risk and internal audit) currently seem limited. In some cases, this is compounded by poor record keeping, resulting in limited compliance visibility as to which specific business functions are reliant on AI and ML at any given point in time.
Areas of concern
IOSCO has identified the following areas of potential risk and harm relating to the development, testing and deployment of AI and ML: governance and oversight; algorithm development, testing and ongoing monitoring; data quality and bias; transparency; outsourcing; and ethical concerns.
Its proposed guidance consists of measures to assist IOSCO members in providing appropriate regulatory frameworks for supervising market intermediaries and asset managers that use AI and ML. These measures cover:
- Appropriate governance, controls and oversight frameworks over the development, use and performance monitoring of AI and ML.
- Ensuring staff have adequate knowledge, skills and experience to implement, oversee and challenge the outcomes of AI and ML.
- Robust, consistent and clearly defined development and testing processes, enabling firms to identify potential issues before AI and ML is fully deployed.
- Appropriate transparency and disclosures to investors, regulators and other relevant stakeholders.
How the FCA regulates AI in the UK
For an idea of how AI is currently regulated in UK financial services, read on:
The Financial Conduct Authority (FCA) deems it good practice to review how trading algorithms are used; develop appropriate definitions; ensure all activities are captured; identify any changes to algorithms; and apply a consistent methodology across the testing and deployment of AI and ML. The Markets in Financial Instruments Directive (MiFID II) requires firms to develop processes to identify algorithmic trading across the business. These can be either investment-decision or execution algorithms, which can be combined into a single strategy. Firms are also required to have a clear methodology and audit trail across the business. Approval and sign-off processes should ensure a separation of validation and development, a culture of collaboration and challenge, and consistency with the firm's risk appetite. While algorithms are deployed in the field, firms are required to maintain pre-trade and post-trade risk controls and real-time monitoring of deployed algorithms, with the ability to stop an algorithm, or a suite of algorithms, centrally (a functionality commonly known as the kill-switch).
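To illustrate the control pattern described above (pre-trade checks gated by a central kill-switch covering individual algorithms and the whole suite), here is a minimal sketch. All class and method names are hypothetical and purely illustrative; they are not drawn from MiFID II, FCA guidance or any real trading system.

```python
import threading

class AlgoControl:
    """Hypothetical sketch: central kill-switch over a suite of trading algorithms."""

    def __init__(self):
        self._suite_killed = threading.Event()  # suite-wide kill-switch state
        self._algos = {}                        # algo_id -> per-algorithm kill flag

    def register(self, algo_id):
        """Register an algorithm so it can be individually stopped."""
        self._algos[algo_id] = threading.Event()

    def kill(self, algo_id=None):
        """Stop one algorithm, or the entire suite centrally when algo_id is None."""
        if algo_id is None:
            self._suite_killed.set()
        else:
            self._algos[algo_id].set()

    def may_trade(self, algo_id):
        """Pre-trade check: permitted only if neither the suite nor this algo is killed."""
        return not self._suite_killed.is_set() and not self._algos[algo_id].is_set()

# Usage: real-time monitoring would call kill() when risk limits are breached.
control = AlgoControl()
control.register("momentum-1")
control.register("arb-2")
control.kill("momentum-1")   # stop a single algorithm
assert not control.may_trade("momentum-1")
assert control.may_trade("arb-2")
control.kill()               # central kill-switch: stop the whole suite
assert not control.may_trade("arb-2")
```

The sketch uses a shared flag consulted on every pre-trade check, so a single central action immediately blocks all further orders; real systems would also cancel resting orders and alert monitoring staff.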
It is best practice, but not a requirement, to have an independent committee verify the completion of checks. However, under the Senior Managers and Certification Regime (SM&CR), a firm’s governing body would be expected explicitly to approve the governance framework for algorithmic trading, and its management body should identify the relevant Senior Management Function(s) with responsibility for algorithmic trading.
How to submit comments
Comments may be submitted by one of the three following methods on or before 26 October 2020. To help IOSCO process and review your comments more efficiently, please use only one method.
Important: All comments will be made available publicly, unless anonymity is specifically requested. Comments will be converted to PDF format and posted on the IOSCO website. Personal identifying information will not be edited from submissions.
- Send comments to [email protected].
- The subject line of your message must indicate ‘The use of artificial intelligence and machine learning by market intermediaries and asset managers’.
- If you attach a document, indicate the software used (e.g., WordPerfect, Microsoft Word, ASCII text, etc.) to create the attachment.
- Do not submit attachments as HTML, PDF, GIF, TIFF, PIF, ZIP or EXE files.
- Send by facsimile transmission using the following fax number: + 34 (91) 555 93 68.
- Send 3 copies of your paper comment letter to:
International Organization of Securities Commissions (IOSCO)
Calle Oquendo 12
Your comment letter should indicate prominently that it is a ‘Public Comment on The use of artificial intelligence and machine learning by market intermediaries and asset managers’.
For more information read our blog ‘AI in Financial Services.’
What happens next?
The consultation on the draft guidance closes on 26 October 2020. In the UK, the FCA is currently working with the Alan Turing Institute to look at the implications of the financial services industry deploying AI. Meanwhile, the European Commission has released its own guidelines for trustworthy AI and is expected to propose legislation in this area later in 2020.