Introduction

We’ve been working with a leading PR company for a number of years, having helped it with its standard contracts, data protection compliance and the acquisition of a subsidiary company.

The client has grown rapidly, opening multiple offices in the US and expanding its central London headquarters. Like most businesses, it has begun to leverage AI tools.

Challenge

Our client came to us with the concern that it did not have a comprehensive approach to the use of AI within its business. For example, it knew that many of its staff used a variety of AI tools, but it was not sure exactly which ones, nor whether staff were using the company’s subscriptions or their own personal accounts.

The client was right to raise this. The use of AI tools presents a number of novel legal risks, or, at the very least, an evolution of existing legal risks that needs to be managed internally.

The risks are wide-ranging, but those most commonly associated with AI technology are:

1. Data security. If AI companies are not contractually restricted, they are likely to use customer data to train or otherwise improve their AI models. The free version of OpenAI’s ChatGPT permits training on user data by default. Any data used for AI training can persist within the model indefinitely, and models can sometimes be induced to reproduce their training data – raising confidentiality and data protection concerns.

2. AI oversight. AI models, especially large language models like ChatGPT, are prone to making mistakes, including confidently presenting incorrect information. Keeping a human in the loop to review an AI’s output can reduce errors.

3. Regulatory issues. The EU’s AI Act imposes a significant level of regulation on AI tools used within the EU. Some uses, such as in recruitment, are heavily regulated. The AI Act is just one example – in the UK, some form of AI regulation may also be on the way.

4. Appropriate use. AI tools promote creativity, and that creativity can be misused – it is easy enough to upload an image of a colleague and generate a new image that may be offensive. Employment policies may be out of date and ill-equipped to deal with misuse of this kind.

Process and insight

On a call with the client’s team, we explored the issues above and how best to address them.

The starting point is typically an information-gathering exercise. Staff surveys and a contractual audit will reveal exactly which tools are being used and for what purposes.

From there, the primary focus from a legal perspective is to define the client’s appetite for legal risk and to ensure it is not using AI tools that fall outside that boundary.

Once the approved tools were settled, we discussed the need to update relevant policies (e.g. the Employment Handbook) and to implement staff training on the use of AI tools.

Updating policies and delivering training can reduce risk, and it can give employers legal tools when things do go wrong. It also supports compliance: the EU AI Act, for example, requires organisations to ensure that their staff are AI literate, and staff training is a straightforward way to meet that requirement.

Solution

We are in the process of addressing the client’s risk profile and updating its policies and procedures to reflect the use of AI tools within its business.

If you are concerned about your business’s use of AI tools, feel free to contact one of our AI lawyers – Neil Williamson or Colin Lambertus – directly, or contact us here.