October 17, 2025
AI Law

Artificial intelligence is increasingly finding its way into everyday workplace tasks for businesses of all sizes. 

Many small and mid-sized UK companies are already using AI as a practical business tool. Recent research by VistaPrint found that more than half (57%) of UK small business owners use AI in their business operations, and 84% report that AI has had a positive impact on their business. 

The use of AI is not without legal risk, however. Whilst businesses may like the idea of having automated tools carry out tasks that human employees would otherwise have done, this can have a negative effect on how the business and its employees operate – which can, in turn, give rise to legal liability. 

In this blog, we will explore how AI is being used in the workplace, the legal risks and what you can do to mitigate them. 

What does ‘AI in the workplace’ mean? 

At its core, AI software refers to computer programs that can mimic the way humans think and act, such as by recognising patterns, understanding language, making predictions or generating creative content. More advanced forms of AI can interact with other systems in the same way a human would and make decisions without direct human oversight. 

In the workplace, this can take many forms. For example: 

  • Software that drafts routine emails, reports or social media posts based on simple inputs.
  • AI chatbots that answer customer questions 24/7, providing instant support outside of working hours. 
  • Programs that analyse sales data or operational metrics to forecast trends and highlight insights that would take a human analyst much longer to find. 
  • Apps that help HR teams manage staff rotas, coordinate performance reviews or onboard new employees through automated workflows. 
  • Systems that translate documents in real time or summarise meeting notes and action points automatically. 

As noted above, many UK SMEs are already using AI for these kinds of tasks. 

Risks and challenges of AI in the workplace

While the opportunities are exciting, AI also comes with risks and challenges that employers must manage. It is important not to adopt AI blindly – using it without proper care can lead to legal problems. 

Set out below are some of the key areas where we find that our clients and their employees can run into trouble – or already have – when using AI in the workplace.

Contractual risk

AI systems can and do make mistakes – and those mistakes can have contractual consequences, particularly for service providers. According to an Acas-commissioned survey, around 17% of workers are concerned about AI making errors at work. While human error is an accepted risk in business, AI-generated mistakes can be more fundamental or harder to detect, especially if there is no adequate human oversight in place. 

This becomes particularly relevant in the context of contractual obligations. Service providers are typically required to perform their services with reasonable care and skill. If a business delegates tasks to an AI system, for instance, generating client deliverables, and those tasks are performed inaccurately or without adequate supervision, the business may be found to have breached this duty.

IP risk

Using AI tools can raise complex IP issues, particularly when client materials are involved. For example, if employees input client-owned content into generative AI platforms, there is a risk that the resulting output could infringe the client’s IP rights – or that the client may claim unauthorised use of their materials.

Additionally, ownership of AI-generated content is not always straightforward. If an AI tool contributes significantly to the creation of materials, it may be unclear whether the employer, employee, or AI provider owns the result, especially where the tool’s terms of service assert ownership or licence over outputs.

Data protection

Data privacy and security are additional concerns. AI systems often require large amounts of data, some of which may be sensitive personal information. Handling personal data through AI must comply with data protection laws – which in the UK means the UK GDPR and Data Protection Act 2018.

Confidentiality

Separately from personal data, AI can also pose risks to business confidentiality. If employees enter confidential company or client information into public AI tools (such as free online chatbots), there is a real risk that the data could be accessed, stored, or reused by the tool provider.

A notable case in 2023 involved Samsung, where employees reportedly uploaded sensitive source code and internal meeting notes into a public AI chatbot (ChatGPT) without realising those inputs could be seen by others. The result was a data leak of confidential information and a swift response – Samsung banned employees from using public generative AI tools and started developing an in-house alternative. This incident underscores the importance of clear guidelines: staff must understand that when they input data into AI services, that data might be stored or used to train the AI, potentially exposing company secrets.

Recruitment and HR – the risk of bias and discrimination

One of the most pressing risks is the potential for bias and unfairness. AI systems are trained on historical data, and if the data contains underlying biases, the AI may inadvertently perpetuate or even amplify those biases. This is particularly concerning in recruitment and HR, where automated systems are increasingly used to screen CVs or rank candidates.

A well-known example involved Amazon, which tested an AI recruitment tool that ended up systematically disadvantaging CVs that included the word ‘women’ (the AI had been trained on past hiring data from a male-dominated industry, and it ‘learned’ to prefer male candidates). 

Such outcomes are not only reputationally damaging or unethical – they may also be unlawful. In the UK, employers are subject to the Equality Act 2010, which prohibits discrimination (direct or indirect) based on protected characteristics such as sex, race, age, disability and religion. Employers remain responsible for discrimination even if it arises from a third-party algorithm’s decision, and there is no cap on compensation for unlawful discrimination claims. 

In other words, if your AI tool ends up treating people unfairly, your company can be liable just as if a human manager acted improperly. Beyond obvious bias, AI can also introduce opacity – if a decision is made by a ‘black box’ algorithm, it may be difficult to explain how a result was reached. 

For more on how AI is reshaping hiring and HR, see our recent blog AI in recruitment – balancing innovation with legal risk.

Using AI responsibly in the workplace

Employers can manage the risks of AI with a few practical measures. The overarching principle is to maintain human control and good governance over AI tools. Below, we set out six key steps for responsible AI use at work, along with how each helps address the challenges mentioned.  

1. Create an AI “Acceptable Use Policy”

One of the first things your company should consider is establishing an AI Acceptable Use Policy (or incorporating AI-specific guidelines into your existing policies). This policy sets out how employees may (and may not) use AI in their work. Essentially, it lays down ground rules and expectations to ensure everyone in the organisation is aligned on responsible AI use.

Your AI policy should address critical issues like confidentiality, data protection, and intellectual property. For example, it must instruct staff not to input sensitive business data or personal information into public AI tools that have not been approved by the company (as the Samsung case showed, even well-intentioned use of AI can cause leaks if guidelines are not clear). The policy can list which AI apps or platforms are approved for use and ban any unvetted tools for company data. It should also remind employees that the company’s existing confidentiality agreements and data privacy policies apply to AI usage: any information put into an AI tool could potentially become public or be stored outside the company’s control. 

The policy might also set quality standards – for instance, requiring that AI-generated text or translations be reviewed by a person before use, to catch errors or inappropriate content. It should also spell out whether and when employees must disclose that AI was used. 

The level of human oversight required for different AI tasks can be detailed here too. For example, you might permit AI to draft an email but require a human to approve its content before sending, or allow AI-driven data analysis but require that the AI’s conclusions are reviewed before they are implemented. By defining these boundaries, the policy draws a line between encouraged uses of AI and prohibited areas.

Importantly, you may wish to incorporate the policy into individual employment contracts. Doing so would give the policy contractual force – meaning that breaches may potentially constitute a breach of contract. If you take this approach, it is crucial that the policy is clearly drafted, communicated to employees in advance, and introduced following appropriate consultation.

A well-drafted, consistently applied policy not only guides employee behaviour but can also serve as valuable evidence of due diligence if your organisation’s use of AI is ever scrutinised by regulators or affected individuals.

2. Maintain human oversight and judgment 

Where organisations choose to deploy AI systems, it is important to maintain an appropriate level of human oversight. While AI can assist with processing data and generating outputs efficiently, it is not infallible, and without human review there is a greater risk of errors going unnoticed or important context being missed.

Human oversight also supports accountability. If decisions are questioned – by regulators, staff, or customers – being able to demonstrate that people were involved in the review process can offer reassurance and help reduce legal risk. Ultimately, striking the right balance between automation and human input is key to using AI responsibly in the workplace.

3. Train your team

Implementing AI responsibly is not just about rules and oversight – it is also about education. Make sure your employees receive training on what AI tools can and cannot do, as well as how to use them effectively and safely. Even a simple orientation can help prevent mistakes like an employee pasting confidential text into a free AI tool or trusting an AI’s flawed output without question.

Training should convey the company’s overall approach to AI. Reiterate that the goal of using AI is to assist and augment employees’ work, not to surveil them or cut jobs. Encourage employees to provide feedback on the AI tools – often the people using an AI system daily will be the first to notice if something is going wrong or could be improved. A culture of open dialogue about AI will help catch issues early and make everyone feel invested in using these tools responsibly. In essence, well-trained employees are your best defence against AI-related risks – they are at the front line, and with proper awareness, they can act as an additional safeguard beyond what is written in policy.

4. Monitor and review

Introducing AI into your operations is not a ‘set it and forget it’ situation. You need to actively monitor and periodically review how the AI is performing, especially if it is involved in business-critical tasks or decisions. Regular audits and check-ins will help ensure the AI remains a help rather than a liability. 

5. Protect personal data

AI systems often thrive on data – including personal data about customers, employees, or other individuals – so it is essential to protect privacy and comply with data protection laws at every step. In the UK, this means adhering to the requirements of the UK GDPR and the Data Protection Act 2018 whenever personal data is involved in your AI projects. Responsible AI use and data protection go hand-in-hand, especially if the AI is analysing people’s information or making decisions that affect them.

A critical tool for this is the Data Protection Impact Assessment (DPIA). In fact, in the vast majority of cases, using AI will involve types of personal data processing that are ‘likely to result in a high risk’ to individuals’ rights – which legally triggers the requirement to undertake a DPIA.

A DPIA forces you to think through the privacy risks and how to mitigate them. For example, imagine you plan to use an AI tool to evaluate employee performance by analysing emails and work chats – a DPIA would help identify issues (such as intrusion into privacy or bias) and consider safeguards (like getting employee consent, minimising data collected, or using anonymisation). The ICO expects organisations to use DPIAs as a way to build accountability and ‘data protection by design’ into AI initiatives. Failing to carry out a required DPIA can itself be a breach of the UK GDPR.

6. Stay informed on laws and guidance

The legal and regulatory landscape for AI is evolving rapidly. To use AI responsibly, businesses must stay informed about the latest developments in law, regulations, and official guidance. While there is not yet a single, comprehensive AI law in the UK, this does not mean that AI use exists in a legal vacuum. A patchwork of existing legislation already governs many aspects of AI – including data protection and employment law – and new rules are emerging.

As of 2025, the UK does not have a standalone ‘AI Act’ (in contrast to the EU, which passed the landmark EU AI Act in 2024). Instead, the UK government has adopted a ‘pro-innovation’ regulatory strategy, focusing on broad principles and empowering existing regulators to address AI-related risks within their sectors. Regulatory bodies are interpreting their mandates to cover AI issues: the ICO monitors data protection in AI, the Equality and Human Rights Commission keeps an eye on AI-driven discrimination, the Financial Conduct Authority looks at AI in financial services, and so forth.

Conclusion

AI is transforming how UK businesses work – helping teams save time, reduce costs, and make better decisions. But while the opportunities are huge, so are the responsibilities. Adopting AI without proper care can quickly lead to legal or ethical problems, from data breaches and privacy violations to discrimination claims or damage to your reputation. The key is balance – embrace what AI can do for efficiency and innovation but manage the risks diligently.

How EM Law can help

Our team advises on AI governance, data protection compliance, and workplace policies – including drafting tailored AI Acceptable Use Policies and reviewing contracts with AI technology providers. We stay abreast of the fast-changing AI legal landscape (from UK regulatory updates to international developments) so that you do not have to navigate it alone.

If your company is using or planning to use AI and you want to make sure you are doing it safely and legally, get in touch with our team. We can assist with conducting DPIAs, addressing intellectual property questions around AI-generated content, and establishing oversight frameworks for AI tools.

Feel free to contact us here or get in touch directly with our AI experts, Neil Williamson or Colin Lambertus. We are here to help you harness the benefits of AI while protecting your business and its people. 

Further Reading