February 15, 2019
Software & Technology

When someone says “Artificial Intelligence”, what do you think? Robots taking over the world? Machines replacing you in your job? Or an opportunity to put your feet up and let someone else, or rather something else, do the chores? Although AI may be daunting for some, there is no doubt that it will transform the way we live. This blog looks at some of the legal aspects of AI and what you should consider when developing such a system.

What is Artificial Intelligence?

AI is essentially machine-learning technology used to complete tasks that previously required the skill or intellect of human beings. AI systems will typically demonstrate problem-solving skills, knowledge and perception and, to a lesser extent, social intelligence and creativity. Take Ai-Da as an example: a robot named after the computer pioneer Ada Lovelace and designed to draw people from sight with a pencil in its bionic hand. With cameras in each of its eyes, Ai-Da will be able to recognise human features and mimic your expression to create a lifelike portrait. Self-driving cars, mobile banking apps and Apple’s personal assistant Siri are further examples of AI technology that we may come across in our everyday lives.
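For readers who build software, a toy sketch may help pin down what “machine-learning technology” means in practice: instead of being programmed with explicit rules, the system is shown labelled examples and infers a rule for itself. The example below uses the scikit-learn library with invented weather data, purely for illustration:

```python
# Toy illustration of machine learning: the model is given labelled examples
# and learns the mapping itself, rather than being programmed with rules.
# The training data is invented for this example.
from sklearn.tree import DecisionTreeClassifier

# Each example: [hours of cloud cover, humidity %] -> did it rain that day?
features = [[8, 90], [7, 85], [1, 30], [2, 40], [9, 95], [0, 20]]
labels = ["rain", "rain", "dry", "dry", "rain", "dry"]

model = DecisionTreeClassifier().fit(features, labels)
print(model.predict([[6, 80]]))  # e.g. ['rain'] -- inferred, not hard-coded
```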

Artificial Intelligence – some of the legal issues

AI regulation

The UK, like the rest of the world, recognises the potential of AI, but it does not currently regulate AI in any significant way. In April 2018 the House of Lords published a report titled “AI in the UK: ready, willing and able?”, in which it suggested that blanket AI-specific regulation would, at this stage, be inappropriate; instead, existing sector-specific regulators were best placed to consider the impact of AI on their sectors. The Automated and Electric Vehicles Act 2018, passed last July, is an example of this approach. The Act does not apply to AI generally but specifically to automated vehicles, placing liability for accidents caused by an automated vehicle onto the insurer.

In December 2018 the European Commission published draft ethics guidelines for the development and use of AI. The Commission has opened the guidelines for comment and states that discussions are also taking place through the European AI Alliance, the EU’s multi-stakeholder platform on AI.

The guidelines emphasise “trustworthy AI” as their guiding principle. Trustworthy AI has two components:

  • it should respect fundamental rights, applicable regulation and core principles and values, ensuring an “ethical purpose”; and
  • it should be technically robust and reliable since, even with good intentions, a lack of technological mastery can cause unintentional harm.

The guidelines set out a framework for trustworthy AI that includes:

  • the fundamental rights, principles and values that it should comply with;
  • the requirements for trustworthy AI, together with an overview of the technical and non-technical methods that can be used to implement them; and
  • a concrete but non-exhaustive assessment list for trustworthy AI.

AI development contracts

Whilst some AI developers will attempt to build bespoke hardware, the majority of today’s AI systems are implemented as software. Contracts to develop AI systems will therefore need to take account of the usual issues associated with software development. These include the ownership and licensing of any pre-existing IP rights, as well as of any rights created as part of the project. The software development agreement should also provide for indemnities covering any infringement of third-party IP rights by the developers of the AI system.

AI-generated intellectual property

Traditional copyright law in the UK protects the original creations of authors. “Author” is defined in the Copyright, Designs and Patents Act 1988 as the person who creates the work; an author must therefore be a human. This poses a potential issue for purely autonomous AI systems, where computers make decisions and carry out functions without any human involvement at all. UK copyright law does, however, acknowledge the possibility that works could be “computer-generated”. The author of a “computer-generated” work is deemed to be the person “by whom the arrangements necessary for the creation of the work are undertaken”. Under UK copyright law the software programmer would therefore most likely be the author and first owner of a copyright work generated by an AI. This seems simple enough, but the situation becomes complicated for AI that involves human collaboration and input at various stages of development. Who in this scenario will be the owner? Will there be multiple joint owners? As the position is currently unclear, you should ensure that any new agreement for the development and use of AI clearly states which parties will own any protectable IP resulting from the AI.

AI and data protection

Data processing lies at the heart of AI, with AI projects often involving the processing of large amounts of personal data. The ICO published a paper in September 2018 setting out a number of recommendations for organisations to follow. Organisations should, for example, carefully consider whether their AI system actually requires the processing of personal data or whether they could anonymise the data before analysis: as anonymised data does not relate to an identified or identifiable person, it is not personal data for the purposes of the GDPR. Organisations should also carry out a data protection impact assessment, which will help them identify and minimise the data protection risks of a project. A recent example of AI in the data protection context is the ICO’s investigation into the Royal Free Hospital’s arrangement with DeepMind, an AI company and subsidiary of Google, under which the hospital handed over the personal data of 1.6 million patients. The ICO decided that the hospital had failed to comply with a number of data protection principles, and the investigation also raised the question of whether DeepMind was in fact a data controller rather than, as previously thought, a data processor.
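By way of illustration only, the sketch below shows the kind of pre-processing the ICO’s recommendation points towards. The column names and banding choices are invented for the example, and it is worth stressing that merely hashing an identifier is pseudonymisation rather than anonymisation, so the data would remain personal data under the GDPR:

```python
# A minimal sketch of reducing identifiability in a dataset before analysis.
# All column names (name, nhs_number, age, postcode, condition) are invented
# for illustration. Hashing identifiers would only pseudonymise the data, so
# direct identifiers are dropped outright instead.
import pandas as pd

def prepare_for_analysis(records: pd.DataFrame) -> pd.DataFrame:
    df = records.copy()
    # Remove direct identifiers entirely.
    df = df.drop(columns=["name", "nhs_number"])
    # Coarsen quasi-identifiers that could re-identify someone in combination.
    df["age_band"] = pd.cut(df["age"], bins=[0, 18, 40, 65, 120],
                            labels=["0-17", "18-39", "40-64", "65+"])
    df["postcode_district"] = df["postcode"].str.split().str[0]
    return df.drop(columns=["age", "postcode"])

records = pd.DataFrame({
    "name": ["A. Patient"], "nhs_number": ["000 000 0000"],
    "age": [52], "postcode": ["EC1A 1BB"], "condition": ["asthma"],
})
print(prepare_for_analysis(records))  # only condition, age_band and postcode_district remain
```

Whether the result is truly anonymised will always depend on the dataset and on what other data it could be combined with, which is exactly the kind of question a data protection impact assessment should probe.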

AI and product liability

A smart car hits a person. Who is at fault? The programmer in the office with the source code? The owner on the road in the smart car? The manufacturer in the lab with the testing protocols? The general principles of tort law (negligence) are likely to apply to the widespread use of AI, but under existing law an AI is personal property, not a legal person. AI machines cannot therefore be held liable for negligent acts or omissions that cause damage to third parties. So who will be held liable? As there are many parties involved in an AI system, this may be difficult to establish, and a number of factors will need to be taken into consideration: whether the AI system was following instructions, whether the damage can be traced back to the design or production of the AI system, and whether the system’s maker stated any general or specific limitations on its use. Contributory negligence on the part of the injured person may also be a factor.

It has also been suggested that liability for AI systems could be established under a framework similar to the Animals Act 1971. Under that Act, when an animal strays onto another person’s property and causes damage, the animal’s owner is liable for that damage. As this is strict liability, there is no need to prove negligence or intent. It may be that for some forms of physical AI, robots for example, a similar legal framework will be put in place.

Conclusion

Artificial Intelligence is evolving rapidly but the law around it is not. This does not mean that no legal frameworks around artificial intelligence exist; they do, but they are rooted in regulation and legal doctrine that do not answer all the questions AI raises. For AI developers this creates challenges, such as understanding the kinds of liability the products they are developing could expose them to, but it also creates opportunities to inform and shape regulation as it tries to keep up with the paths developers choose to go down. If you have any questions about AI and what you should be considering when developing such a system, please contact Neil Williamson.