October 17, 2025
AI Law

Artificial intelligence is increasingly finding its way into everyday workplace tasks for businesses of all sizes. 

Many small and mid-sized UK companies are already using AI as a practical business tool. In fact, recent research by VistaPrint found that more than half (57%) of UK small business owners are already using AI in their business operations, and 84% report that AI has had a positive impact on their business. 

The use of AI is not without legal risk. While businesses may like the idea of automated tools handling tasks that human employees would otherwise perform, this can change how those businesses and their employees operate, which can in turn give rise to legal liability.

In this blog, we will explore how AI is being used in the workplace, the legal risks it creates, and what you can do to mitigate them. 

What does ‘AI in the workplace’ mean? 

At its core, AI software refers to computer programs that can mimic the way humans think and act, such as by recognising patterns, understanding language, learning new skills, making predictions or generating creative content. More advanced forms of AI can interact with other systems in the same way a human would and make decisions without direct human oversight. 

In the workplace, this can take many forms. For example: 

  • Software that drafts routine emails, reports or social media posts based on simple inputs.
  • AI chatbots that answer customer questions 24/7, providing instant support and real time feedback outside of working hours. 
  • AI tools that analyse data from sales activity or performance data to forecast trends, surface actionable insights, and support better decision making — tasks that would typically take knowledge workers or analysts significantly longer to complete manually.
  • AI systems used within human resources to automate routine tasks such as managing staff rotas, coordinating performance reviews, and supporting onboarding through AI-driven automation, helping improve operational efficiency while reducing human error.
  • AI solutions that use natural language processing to translate documents in real time, summarise meeting notes, and capture action points automatically, enabling teams to focus on higher-value work that requires critical thinking and human judgment.

As noted above, many UK SMEs are already using AI for these kinds of tasks. 

Risks and challenges of AI in the workplace

While the opportunities are exciting, AI also comes with risks and challenges that employers must manage. It is important not to adopt AI blindly – using it without proper care can lead to legal problems. 

Set out below are some of the key areas where our clients and their employees have run into trouble – or could run into trouble – when using AI in the workplace.

Contractual risk

AI systems can and do make mistakes – and those mistakes can have contractual consequences, particularly for service providers. According to an Acas-commissioned survey, around 17% of workers are concerned about AI making errors at work. While human error is an accepted risk in business, AI-generated mistakes can be more fundamental or harder to detect, especially if there is no adequate human oversight in place. 

This becomes particularly relevant in the context of contractual obligations. Service providers are typically required to perform their services with reasonable care and skill. If a business delegates tasks to an AI system (for instance, generating client deliverables) and those tasks are performed inaccurately or without adequate supervision, the business may be found to have breached this duty.

IP risk

Using AI tools can raise complex IP issues, particularly when client materials are involved. For example, if employees input client-owned content into generative AI platforms, there is a risk that the resulting output could infringe the client’s IP rights — or that the client may claim unauthorised use of their materials.

Additionally, ownership of AI-generated content is not always straightforward. If an AI tool contributes significantly to the creation of materials, it may be unclear whether the employer, employee, or AI provider owns the result, especially where the tool’s terms of service assert ownership or licence over outputs.

Data protection

Data privacy and security are additional concerns. AI systems often require large amounts of data, some of which may be sensitive personal information. Handling personal data through AI must comply with data protection laws – which in the UK means the UK GDPR and Data Protection Act 2018.

Confidentiality

Separately from personal data, AI can also pose risks to business confidentiality. If employees enter confidential company or client information into public AI tools (such as free online chatbots), there is a real risk that the data could be accessed, stored, or reused by the tool provider.

A notable case in 2023 involved Samsung, where employees reportedly uploaded sensitive source code and internal meeting notes into a public AI chatbot (ChatGPT) without realising those inputs could be seen by others. The result was a data leak of confidential information and a swift response – Samsung banned employees from using public generative AI tools and started developing an in-house alternative. This incident underscores the importance of clear guidelines: staff must understand that when they input data into AI services, that data might be stored or used to train the AI, potentially exposing company secrets.

Recruitment and HR – the risk of bias and discrimination

One of the most pressing risks of AI in the workplace is the potential for bias and unfairness. Many AI systems rely on historical data analysis and machine learning to recognise patterns and support decision making. If that underlying data reflects existing inequalities, artificial intelligence may unintentionally replicate or even amplify them. This risk is particularly acute in human resources, where AI tools and AI-driven automation are increasingly used to screen CVs, rank candidates, or support recruitment decisions affecting the human workforce.

A well-known example involved Amazon, which tested an AI recruitment system designed to improve operational efficiency. The tool learned from historical hiring data in a male-dominated sector and began disadvantaging CVs that included the word “women”. Rather than delivering better business outcomes, the system reinforced bias, demonstrating how AI technologies can undermine fairness when deployed without sufficient human intervention or oversight.

These outcomes are not only reputationally damaging or unethical; they may also be unlawful. In the UK, employers are subject to the Equality Act 2010, which prohibits both direct and indirect discrimination based on protected characteristics such as sex, race, age, disability, or religion. Crucially, responsibility does not fall away simply because a decision was influenced by workplace AI or a third-party AI solution. Employers remain accountable, and there is no cap on compensation for unlawful discrimination claims.

In practical terms, if an AI system treats individuals unfairly, the employer may be liable in the same way as if a manager exercised poor human judgment. Beyond obvious bias, AI adoption can also introduce a lack of transparency. Where a large language model or automated scoring system operates as a “black box”, it may be difficult for business leaders to explain how a particular outcome was reached, increasing both legal and compliance risk.

For more on how AI use is reshaping hiring practices, see our recent blog AI in recruitment – balancing innovation with legal risk.

Using AI responsibly in the workplace

Employers can manage the risks of AI with a few practical measures. The overarching principle is to maintain human control and good governance over AI tools. Below, we set out six key steps for responsible AI use at work, along with how each helps address the challenges mentioned.  

1. Create an AI “Acceptable Use Policy”

One of the first steps organisations should take when introducing AI in the workplace is to establish a clear AI Acceptable Use Policy, or to incorporate AI-specific rules into existing policies. As AI adoption accelerates across sectors, this policy provides a practical framework for how employees may (and may not) use AI tools, AI systems, and generative AI as part of their day-to-day work. For business leaders, it sets consistent expectations around responsible AI use, supports risk management, and helps ensure that new AI technologies are embedded in a way that aligns with wider business practices and company culture.

At a minimum, the policy should address confidentiality, data protection, and intellectual property. For example, employees should be clearly instructed not to input sensitive business data, personal data, or client information into public or unapproved AI solutions. This is particularly important where workplace AI relies on external platforms or large language model tools, which may retain or reuse inputs outside the organisation’s control. Approved tools should be clearly listed, with unvetted platforms expressly prohibited. The policy should also reinforce that existing confidentiality obligations and data protection rules apply equally to artificial intelligence, regardless of whether AI is used to automate routine tasks, support data entry, or analyse data for insights.

Quality and accountability are equally important. A well-drafted policy should set expectations around human intervention, requiring appropriate review of AI-generated outputs before they are relied upon. This might include mandating human approval of AI-drafted emails, reports, or responses to customer enquiries, or requiring oversight where AI-driven automation is used to process performance data or support decision making. By doing so, organisations reduce the risk of error, protect workplace performance, and preserve human judgment, critical thinking, and emotional intelligence in areas where context and nuance still matter.

The policy should also define clear boundaries around acceptable and prohibited uses of AI. For example, while AI capabilities may be used to support operational efficiency, worker productivity, or repetitive administrative tasks such as invoice processing, they should not be used to make unsupervised decisions that could affect employment outcomes, contribute to job displacement, or negatively impact mental health across the workforce. Clear rules help employees understand not just what AI can do, but how AI should be used responsibly in practice.

Finally, organisations may wish to incorporate the AI policy into employment contracts, giving it contractual force. This can be a powerful tool, particularly as AI integration expands across teams, functions, and even a global workforce. If this approach is taken, it is essential that the policy is clearly drafted, properly communicated, and introduced following appropriate consultation. When applied consistently, an AI Acceptable Use Policy does more than guide behaviour; it demonstrates AI readiness, supports long-term business outcomes, and provides evidence of due diligence should an organisation’s use of AI ever be questioned by regulators, employees, or customers.

2. Maintain human oversight and judgment 

Where organisations choose to deploy AI systems, it is important to maintain an appropriate level of human oversight. While AI can assist with processing data and generating outputs efficiently, it is not infallible, and without human review there is a greater risk of errors going unnoticed or important context being missed.

Human oversight also supports accountability. If decisions are questioned – by regulators, staff, or customers – being able to demonstrate that people were involved in the review process can offer reassurance and help reduce legal risk. Ultimately, striking the right balance between automation and human input is key to using AI responsibly in the workplace.

3. Train your team for business readiness

Implementing AI responsibly is not just about rules and oversight – it is also about education. Make sure your employees receive training on what AI tools can and cannot do, as well as how to use them effectively and safely. Even a simple orientation can help prevent mistakes like an employee pasting confidential text into a free AI tool or trusting an AI’s flawed output without question.

Training should convey the company’s overall approach to AI. Reiterate that the goal of using AI is to assist and augment their work, not to surveil them or cut jobs. Encourage employees to provide feedback on the AI tools – often the people using an AI system daily will be the first to notice if something is going wrong or could be improved. A culture of open dialogue about AI will help catch issues early and make everyone feel invested in using these tools responsibly. In essence, well-trained employees are your best defence against AI-related risks – they are the ones at the front line, and with proper awareness, they can act as an additional safeguard beyond what is written in policy.

4. Monitor and review

Introducing AI into your operations is not a ‘set it and forget it’ situation. You need to actively monitor and periodically review how the AI is performing, especially if it is involved in business-critical tasks or decisions. Regular audits and check-ins will help ensure the AI remains a help rather than a liability. 

5. Protect personal data

AI systems often thrive on data – including personal data about customers, employees, or other individuals – so it is essential to protect privacy and comply with data protection laws at every step. In the UK, this means adhering to the requirements of the UK GDPR and the Data Protection Act 2018 whenever personal data is involved in your AI projects. Responsible AI use and data protection go hand-in-hand, especially if the AI is analysing people’s information or making decisions that affect them.

A critical tool for this is the Data Protection Impact Assessment (DPIA). In fact, in the vast majority of cases, using AI will involve types of personal data processing that are ‘likely to result in a high risk’ to individuals’ rights – which legally triggers the requirement to undertake a DPIA.

A DPIA forces you to think through the privacy risks and how to mitigate them. For example, imagine you plan to use an AI tool to evaluate employee performance by analysing emails and work chats – a DPIA would help identify issues (such as intrusion into privacy or bias) and consider safeguards (like getting employee consent, minimising data collected, or using anonymisation). The ICO expects organisations to use DPIAs as a way to build accountability and ‘data protection by design’ into AI initiatives. Failing to do a required DPIA can itself be a breach of UK GDPR.

6. Stay informed on laws and guidance
The legal and regulatory landscape for AI is evolving rapidly. To use AI responsibly, businesses must stay informed about the latest developments in law, regulations, and official guidance. While there is not yet a single, comprehensive AI law in the UK, this does not mean that AI use exists in a legal vacuum. A patchwork of existing legislation already governs many aspects of AI – including data protection and employment law – and new rules are emerging.

As of 2025, the UK does not have a standalone ‘AI Act’ (in contrast to the EU, which passed the landmark EU AI Act in 2024). Instead, the UK government has adopted a ‘pro-innovation’ regulatory strategy, focusing on broad principles and empowering existing regulators to address AI-related risks within their sectors. Regulatory bodies are interpreting their mandates to cover AI issues: the ICO monitors data protection in AI, the Equality and Human Rights Commission keeps an eye on AI-driven discrimination, the Financial Conduct Authority looks at AI in financial services, and so forth.

Conclusion

AI is transforming how UK businesses work – helping teams save time, reduce costs, and make better decisions. But while the opportunities are huge, so are the responsibilities. Adopting AI without proper care can quickly lead to legal or ethical problems, from data breaches and privacy violations to discrimination claims or damage to your reputation. The key is balance – embrace what AI can do for efficiency and innovation but manage the risks diligently.

How EM Law can help

Our team advises on AI governance, data protection compliance, and workplace policies – including drafting tailored AI Acceptable Use Policies and reviewing contracts with AI technology providers. We stay abreast of the fast-changing AI legal landscape (from UK regulatory updates to international developments) so that you do not have to navigate it alone.

If your company is using or planning to use AI and you want to make sure you are doing it safely and legally, get in touch with our team. We can assist with conducting DPIAs, addressing intellectual property questions around AI-generated content, and establishing oversight frameworks for AI tools.

Feel free to contact us here or get in touch directly with our AI experts, Neil Williamson or Colin Lambertus. We are here to help you harness the benefits of AI while protecting your business and its people. 

Further Reading