Whether you are an SME looking to streamline hiring with new tools or an HR manager aiming to reduce bias, AI is reshaping how organisations find and hire talent. From CV screening to video-interview analysis, AI already helps businesses handle applications faster and more consistently.
But the benefits come with legal and ethical questions. Can an algorithm discriminate? What if a ‘black box’ tool rejects someone unfairly? And which UK laws apply when you use AI in hiring?
This blog outlines the advantages and risks of AI in recruitment and what SMEs should watch for from a legal perspective.
The allure of AI in recruitment
AI is rapidly gaining ground in recruitment due to the benefits it offers, especially to smaller businesses with limited HR resources. According to a recent report, 60.5% of UK employers are already using AI in hiring, and 92% of them say it has improved recruitment outcomes.
Why the uptake? AI promises faster and cheaper recruitment while improving the quality of hires. Intelligent tools can scan CVs, assess tests, or even conduct initial video interviews in a fraction of the time a human could. Studies show that AI can cut cost-per-hire by up to 30% – a big draw for SMEs managing tight budgets.
When used well, AI can also make recruitment more objective by focusing on job-related criteria rather than human assumptions. For example, it can highlight candidates’ skills or test results without regard to gender, age or other protected characteristics.
AI recruitment tools are also becoming more versatile – from chatbots that answer candidate queries around the clock to software that screens large volumes of applications or analyses psychometric data. For many SMEs, this technology is no longer a luxury but an increasingly practical way to manage recruitment more efficiently and consistently.
Hidden risks: bias, discrimination and unintended consequences
Despite its promise, AI in recruitment carries significant risks. The biggest is bias. Algorithms learn from data, and if historical recruitment data reflects human prejudice or systemic inequality, the AI may reproduce or even amplify those patterns. A well-known example is Amazon’s abandoned recruiting tool, which ended up penalising CVs that included the word ‘women’s’. Trained on historical data from a male-dominated industry, the AI effectively “learned” to prefer men.
For UK employers, bias is not just an ethical concern – it is a legal one. The Equality Act 2010 prohibits discrimination (direct or indirect) based on protected characteristics such as sex, race, age, disability and religion or belief. Employers remain responsible even if a third-party tool produces the outcome. There is no cap on compensation for unlawful discrimination, so the stakes are high.
Risks do not stop at direct discrimination. Seemingly neutral criteria (for example, filtering by postcode or gaps in employment) may disproportionately disadvantage people from certain backgrounds unless the criteria are genuinely job-relevant and objectively justified.
Other challenges include:
- Digital exclusion and accessibility: candidates without strong digital skills or access to technology – such as older applicants or those with disabilities – may be unfairly disadvantaged. The UK Government has noted the ‘risk of digital exclusion for applicants who may not be proficient in, or have access to, technology due to age, disability, socio-economic status or religion.’ Employers must also remember the duty to make reasonable adjustments for disabled candidates.
- Transparency: the decisions generated by many AI tools are often unexplainable, even by their developers, which can make it difficult for candidates to understand or challenge them. Lack of explainability not only undermines trust but also makes it harder for employers to defend themselves in a tribunal claim.
The lesson for SMEs is clear: do not assume that outsourcing recruitment to AI reduces your responsibility. You remain legally accountable for ensuring fair and non-discriminatory hiring.
Data protection and privacy: handling candidate data and automated decisions
When deploying AI in hiring, it’s not just employment law you need to consider – data protection is central. AI in recruitment usually involves processing personal data (CVs, application forms, test results, and sometimes interview audio/video). In the UK, the UK GDPR and the Data Protection Act 2018 regulate how you collect, use, store and secure this personal data.
Key requirements include:
- Lawful basis, transparency and fairness: be clear with candidates about what data you collect, why, how it is used, who sees it, and how long you keep it. Ensure you have a valid lawful basis (often ‘legitimate interests’, but check this fits your use case).
- Security and minimisation: keep data secure and limit inputs to what is necessary and relevant for the role.
- Automated decision-making: Article 22 of the UK GDPR restricts decisions made solely by automated means that produce legal or similarly significant effects – rejecting a job applicant solely using an AI tool can meet that threshold. Where you rely on automated tools, build in meaningful human review so candidates are not subject to purely automated decisions without safeguards (one way of structuring this is sketched below).
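As an illustration of what ‘meaningful human review’ might look like in practice, here is a minimal sketch. The 0.75 threshold, field names and scoring function are hypothetical assumptions for illustration, not a real product’s API: the AI may fast-track strong candidates, but it never rejects anyone on its own.

```python
# Minimal human-in-the-loop sketch. The threshold and field names are
# illustrative assumptions, not taken from any real recruitment tool.
from dataclasses import dataclass

@dataclass
class ScreeningResult:
    candidate_id: str
    ai_score: float   # hypothetical 0.0-1.0 match score from the AI tool
    decision: str     # "advance" or "human_review" – never an AI-only reject

def route_candidate(candidate_id: str, ai_score: float) -> ScreeningResult:
    """Fast-track strong matches; send everyone else to a human reviewer,
    so no candidate is rejected solely by automated means."""
    decision = "advance" if ai_score >= 0.75 else "human_review"
    return ScreeningResult(candidate_id, ai_score, decision)

print(route_candidate("C-042", 0.41))
# ScreeningResult(candidate_id='C-042', ai_score=0.41, decision='human_review')
```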
For most SMEs, conducting a Data Protection Impact Assessment (DPIA) before using AI recruitment tools is highly advisable and often legally required. DPIAs are mandatory for processing that is ‘likely to result in high risk’ to individuals, and automated, large-scale evaluation of job candidates with AI ticks several high-risk boxes (profiling, use of new technology, decisions with significant effects on individuals). A DPIA helps identify risks (such as bias or security breaches) and plan mitigation measures – for example anonymising data, limiting what information the AI can use and building in bias audits. Completing a DPIA not only aids compliance but also forces you, as the employer, to think through the ethical dimensions – a useful exercise in making sure you are using AI responsibly.
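To make ‘limiting what information the AI can use’ concrete, here is a minimal data-minimisation sketch. The field names are invented for illustration; the idea is that an allow-list ensures protected characteristics, and common proxies for them, never reach the screening tool.

```python
# Data-minimisation sketch: only allow-listed, job-relevant fields are
# passed to the AI screening tool. All field names are hypothetical.
JOB_RELEVANT_FIELDS = {"skills", "qualifications", "test_scores", "work_history"}

def minimise_for_screening(application: dict) -> dict:
    """Return a copy of the application containing only allow-listed fields."""
    return {k: v for k, v in application.items() if k in JOB_RELEVANT_FIELDS}

application = {
    "name": "A. Candidate",        # dropped: can proxy for sex or ethnicity
    "date_of_birth": "1990-01-01", # dropped: proxies for age
    "postcode": "AB1 2CD",         # dropped: can proxy for background
    "skills": ["bookkeeping", "Excel"],
    "test_scores": {"numeracy": 82},
}
print(minimise_for_screening(application))
# {'skills': ['bookkeeping', 'Excel'], 'test_scores': {'numeracy': 82}}
```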
In short, AI can boost efficiency, but it also raises the bar on data protection. Getting it wrong risks reputational damage and enforcement by the ICO.
What to watch: UK regulation and best practice
As of 2025, the UK has no AI-specific employment law in force. Instead, the UK Government has taken what it calls a “pro-innovation” approach, setting out high-level principles rather than strict rules.
To support businesses, the UK Government has issued sector-specific guidance, including the Responsible AI in Recruitment guide. This guidance encourages employers to build fairness, transparency, accountability, and human oversight into recruitment processes that use AI. It doesn’t impose binding duties but signals clear expectations around how AI should be deployed responsibly.
Regulators such as the ICO (for data protection) and the Equality and Human Rights Commission (for discrimination law) are also watching closely. Together, they make it clear that employers remain responsible for ensuring compliance with existing laws, even if they use third-party AI tools.
It is also worth noting that EU AI Act obligations, while not directly binding on UK employers, may affect UK businesses operating in the EU or supplying to EU customers. Preparing now for higher standards around transparency and risk management could give SMEs a competitive edge.
Practical tips for SMEs
Until clear laws arrive, SMEs should adopt a cautious and responsible approach to AI in recruitment.
Practical steps include:
- Vet AI tools carefully: don’t just take a supplier’s word that their recruitment AI is bias-free and legally compliant. Request documentation on training data, bias testing and compliance measures.
- Train your HR team: implement a clear policy on AI use in hiring. Define what tools are used and for which tasks, and outline measures to prevent discrimination or privacy breaches. Train anyone involved in recruitment on how to interpret and question AI outputs. The technology should assist, not replace, human judgment.
- Keep humans in the loop: always ensure meaningful human oversight of AI-driven decisions. Staff should be empowered to question or override AI outputs.
- Monitor outcomes: keep records of how your AI tool is affecting hiring decisions, and monitor metrics such as demographic patterns in selections versus rejections to spot potential bias (see the sketch after this list). If, for example, almost no female candidates are making it past a certain AI screening stage, investigate immediately – that could indicate a bias that needs fixing. Documentation will also be your friend if you ever need to demonstrate, in a tribunal or to a regulator, that you took reasonable steps to ensure fair and lawful use of AI.
- Communicate with candidates: be upfront with applicants that you use AI. Explain in simple terms what the AI does (e.g. ‘we use software to automatically screen answers to ensure a fair and consistent process’) and, if that is the case, reassure them that no final decisions are made by AI alone. Provide a contact point for candidates to ask questions or request reconsideration if they feel the AI got it wrong.
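As promised above, here is a minimal sketch of what outcome monitoring could look like. The data is invented, and the 0.8 (‘four-fifths’) threshold is a widely used rule of thumb that originates in US guidance rather than UK law – treat any flag as a prompt to investigate, not a legal test.

```python
# Outcome-monitoring sketch: compare pass rates across groups at one
# AI screening stage and flag large disparities. All data is invented.
from collections import defaultdict

outcomes = [
    # (group label from your monitoring data, passed AI screening?)
    ("female", True), ("female", False), ("female", False), ("female", False),
    ("male", True), ("male", True), ("male", True), ("male", False),
]

totals, passes = defaultdict(int), defaultdict(int)
for group, passed in outcomes:
    totals[group] += 1
    passes[group] += passed

rates = {group: passes[group] / totals[group] for group in totals}
best = max(rates.values())
for group, rate in rates.items():
    flag = "  <-- investigate" if rate < 0.8 * best else ""
    print(f"{group}: {rate:.0%} pass rate{flag}")
# female: 25% pass rate  <-- investigate
# male: 75% pass rate
```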
Conclusion
AI can make hiring faster, cheaper and more consistent, but it also brings legal risks around discrimination and data protection. For UK SMEs, the way forward is responsible adoption: keep humans in the loop, document your approach, scrutinise suppliers, and build fairness and transparency into every stage.
If you’re considering using AI in recruitment and want to understand the legal implications, our team at EM Law can help. We advise SMEs on technology, employment and data protection law so you can innovate with confidence. Please feel free to reach out to Neil Williamson or Colin Lambertus directly, or contact us here.