EM Law | Commercial Lawyers in Central London

AI Lawyers

Neil Williamson

Commercial Contracts, AI, Technology and Data Protection

Colin Lambertus

Commercial Contracts, AI, Technology and Data Protection

Howard Ricklow

Commercial Contracts, AI, Technology, Data Protection and Media Law

We are experts in AI law 

Artificial intelligence (AI) continues to dominate the headlines and attract new legislation around the world. For businesses, the AI revolution is both a massive opportunity and (for most) a new legal frontier.

Our artificial intelligence lawyers combine a broad range of experience across multiple legal disciplines with an in-depth technical understanding of how AI systems work in practice. This equips us with the skills to be at the cutting edge of AI law – assisting across the spectrum of our client base from startups to established SMEs.

Our experience covers AI developers (using their own or adapted models), organisations using or looking to procure third-party AI solutions, and clients with highly valuable datasets that their customers want to use for their own models.

Our AI lawyers can come at the legal issues around AI adoption from all angles to navigate the regulatory, contractual, intellectual property and data protection concerns that our clients have.

Services provided by our AI lawyers 

EM Law supports its clients using or engaging with AI tools across all areas of their business. We most frequently provide the following AI-related legal services:

  • AI software development agreements. Contracts that govern the terms under which AI software is designed, developed and delivered. 
  • EU AI Act advice. Providing clients with advice on the implications of the EU AI Act for UK-based businesses operating within the EU. 
  • Data licensing and access rights advice. Guidance on how developers of AI tools or their customers can lawfully use, share, or access data.
  • Data ownership advice. Advising clients on who owns what in an AI tool and the output it can generate. 
  • Acceptable use guidelines and policies. Advising on and helping clients to create the rules and standards (both internal and external) that govern how AI tools may be used. 
  • Content moderation advice. Providing guidance on implementing policies and processes for how an AI tool handles, filters or removes inappropriate or harmful content.
  • AI literacy and training. Helping clients understand their training obligations around the use of AI and, where appropriate, delivering additional training. 

Our AI law experience

Recent examples of our work include:

  • Generative AI startup. Advising a venture capital backed generative AI startup (using a highly bespoke model) in respect of its global launch. Drafting a wide range of contracts (both B2C and B2B) and related documentation across different product phases. Assisting on private dataset access issues. Providing comprehensive data protection advice and documentation. 
  • Medical affairs consultancy. Advising a medical affairs/life sciences consultancy creating various predictive and generative AI tools. Drafting launch documentation and a highly bespoke AIaaS contract. Negotiating with multinational customers procuring the AI tools. Providing advice around public dataset access. Assisting with the client’s contracts with its software developers. 
  • Data analytics consultancy. Working with a data analytics consultancy to ensure that its data is suitably ringfenced and protected from its customers that wish to use AI tools on the datasets they purchase. 
  • Website chatbot developer. Representing a software developer selling, for the first time, an AI chatbot it had developed using third-party AI tools. 
  • Dedicated in-tenant AI tool. Drafting a contract for a well-established Microsoft partner in relation to the deployment of a fine-tuned LLM for use within its customers’ environments. 
  • Public relations firm. Advising a multinational public relations firm in respect of a new master services agreement for use with its clients, incorporating appropriate language about the firm’s usage of AI tools. 

Client reviews

Hear from our clients about how our legal expertise has been instrumental in their success: https://emlaw.co.uk/reviews/.

AI law FAQs

Why choose EM Law’s AI lawyers? 

Clients should choose EM Law’s AI lawyers for their deep expertise at the intersection of cutting-edge technology and commercial law. With a strong track record advising startups, scaleups, and established SMEs, EM Law offers pragmatic, forward-thinking legal guidance tailored to the evolving risks and opportunities of AI. We can provide the technical understanding and commercial insight to help you innovate with confidence. 

Who owns the intellectual property generated by an AI tool?

This depends on a number of factors, including how the AI tool was used, who used it, and the terms and conditions applicable to that use. Alternatively, the AI tool may not be generating any intellectual property at all. 

Can I use third-party data to train my AI model?

Only if you have the proper legal rights or licences to do so, and the data use complies with intellectual property, confidentiality, and data protection laws. In some circumstances, using data without an express licence will be unlawful. In other situations, it may be possible to use such data. 

What role do AI lawyers play in mitigating risks in AI-driven automated decision-making? 

AI systems using machine learning can pose risks in automated decision-making, especially around transparency and accountability. Law firms with AI expertise help businesses implement safeguards to align these systems with the GDPR, the Data Use and Access Act and other AI-related regulations. AI lawyers, whether they’re within in-house legal teams or external law firms, provide guidance to prevent liabilities and ensure ethical use.

What are the risks of deploying AI tools in regulated sectors (e.g. finance or healthcare)?

Regulated sectors may impose specific requirements on transparency, accountability, auditability, and risk management that must be factored into the design and deployment of AI systems.

Can AI systems be held legally liable for decisions or actions?

AI systems themselves cannot be held liable under current law; responsibility typically falls on the developers, deployers, or users depending on the context and contractual arrangements. Responsibility can also take different forms: a party may be obliged to pay out even where it was not at fault, and may then seek to recover that loss from another party. 

Do I need specific terms and conditions for users of my AI product?

Yes. Your terms should cover permitted uses, limitations of liability, IP rights, disclaimers on outputs, and data usage policies to mitigate legal and reputational risk.

How should I approach liability in contracts involving AI tools?

Contracts should clearly allocate liability for AI performance, errors, misuse, and compliance failures, often through indemnities, warranties, and limitations of liability. In our experience, appropriate disclaimers are key for contracts that deal with AI tools.

Am I liable to my customers if my third-party AI provider goes down and I can’t fully provide my software?

That will usually be the case, unless your contracts with your customers contain appropriate clauses and carve-outs for third-party downtime. Properly drafted Service Level Agreements or uptime guarantees, for example, should cater for this. 

What are the risks of using open-source AI tools?

Open-source AI tools often come with licence terms that can impose obligations around attribution, redistribution, or derivative use. Misuse may lead to IP infringement or security vulnerabilities.

Call our solicitors in London on

+44 (0) 203 637 6374

or make an online enquiry to see how we can help you today

Contact Us