The European Commission recently published its 10 October 2023 warning to Elon Musk, owner of X (formerly Twitter), about concerns over online safety. In this warning, X is reminded of its legal obligations in respect of ‘disinformation’ and ‘illegal’ content posted on X about the reignited conflict in the Gaza Strip. The next day, Mark Zuckerberg and Meta received a similar warning.
The debate around and increasing scrutiny of social media platforms and their obligations to enhance online safety is not new. However, the horrific stories, images, and video content coming out of the conflict (and the resulting commentary) have clearly spurred the European Commission to remind both platform owners and, indirectly, their users, that there are laws in place that govern the (still existing) wild west of social media, and the internet more broadly.
The UK is no different, and there has been a recent push towards internet regulation in the form of the Online Safety Bill (OSB). The Commission’s warnings do prompt the question of what the UK’s position might be, if any.
Therefore, in this blog, we will explore the current state of internet regulation in the UK as it pertains to online safety.
Criminal law and online safety
It is worth setting out the basic position from a criminal law perspective. Content which is illegal offline (hate speech, endorsements of terrorism, and so on) is equally illegal to post online. However, it is the responsibility of the police to investigate such matters and take action (whether by notifying the relevant social media provider or otherwise). Given the immense scale of social media and other platforms, and the resulting volume of content online, the Government has attempted to shift the burden of ensuring online safety onto private companies.
Data protection and online safety
The OSB has passed both the Commons and the Lords, but has not yet received Royal Assent. This should be given in the next month or so. There is then a lead time before it becomes fully enforceable, as secondary legislation is required to put elements of it into place.
Presently, therefore, the most relevant regulatory regime is the Information Commissioner’s Office’s (ICO) Age Appropriate Design Code (the Code), published pursuant to s. 123 Data Protection Act 2018 (DPA). The Code sets out 15 ‘standards’ that certain service providers must comply with in order to promote online safety for children. The Code is not, in and of itself, law; however, non-compliance with the Code may be demonstrative of an organisation failing to comply with the UK General Data Protection Regulation (UK GDPR). Recital 38 of the UK GDPR states that children’s personal data, and their data protection rights, require special protection and attention. In other words, at a broad level, complying with the ‘data protection principles’ set out in Article 5 UK GDPR is more difficult when the personal data of children is concerned. Non-compliance with the principles is a breach of the UK GDPR.
The Code applies to all ‘information society services’ as defined in Reg. 2 of The Electronic Commerce (EC Directive) Regulations 2002 – put simply, any ‘for profit’ service provider on the internet – whose services are ‘likely to be accessed by children’ or are aimed at children directly.
The ICO has published a range of factors that service providers must take into account to assess whether a child would be likely to access a website or an app. These are so broad that the ICO’s intention is clearly to catch as many websites as possible. The only real exemption is to put sufficiently robust age-gating measures in place to ensure that children cannot access your website or app. Many gambling providers in the UK have this in place; however, most adult-oriented websites rely on a ‘self-declaration’ by the user that they are at least 18. A self-declaration, the ICO has said, is unlikely to be sufficient.
The Code primarily contains two different pathways to promote and, in some cases, enforce, online safety. The first pathway is more UK GDPR focused – it requires that children’s privacy settings are ‘high’ by default (e.g. ensuring that unknown users of a social network cannot send them a ‘friend request’), that privacy settings are clear to children, that children are not prompted to share more data (‘nudging’), that location settings are not turned on, and that, where required (if a child is under 13), parental consent is obtained.
This first pathway obviously applies to websites/apps that are accessible by children because the provider has determined that the content is suitable for those under 18 (although the Code requires a sliding scale – what might be appropriate for a 16-year-old on one website may not be appropriate for a 5-year-old).
Where websites are solely targeted at adults, there is an obligation on the service provider to keep children from accessing the website/app via age verification. This is risk dependent: the more inappropriate the content, the stronger the requirement to keep children out. It is often forgotten that an older piece of legislation is relevant here – the Digital Economy Act 2017 required internet service providers to block adult websites that did not provide sufficient levels of age verification – although the OSB is set to change how this works.
The Code evidently aims at making the internet safer for children – but what about adults? Interestingly, the Code’s sixth standard provides that where service providers (such as social media companies) require all users to behave in a certain way, there must be procedures in place to enforce this. If a forum does not permit ‘bullying’ or ‘illegal’ content, then the social media company has an obligation to remove that content. To do otherwise is a breach of the UK GDPR.
This requirement of the Code is an elaboration of the Article 5(1)(a) UK GDPR principle of processing personal data ‘fairly’ and ‘transparently’. If internet users are told one thing (about what they should expect when using a website that processes their personal data) and another occurs, that is a breach. This applies to adults too. But, nonetheless, the Code has its clear limitations.
The maximum penalty for breaches of the UK GDPR is the higher of £17.5 million or 4% of annual worldwide turnover.
Online Safety Bill
The OSB has had a tortuous journey through Parliament since its inception in 2017. As was recognised by Parliament, there is significant overlap between the OSB and the Code – the intention (it remains to be seen if it will be the result) is that the OSB and the Code will complement each other to regulate a significant portion of the internet accessed by users in the UK. OFCOM will regulate the service providers covered by the OSB, and the ICO will regulate those caught by the Code.
Whilst the Code’s primary focus is on children, the OSB intends to ensure online safety for both children and adults.
It applies firstly to user-to-user services (typically, but not exclusively, social media companies that permit interactions between users). The requirement is only that user-generated content ‘may’ be encountered by another user; whether particular content is in fact encountered does not matter. ‘Content’ is any information that may be transmitted over the internet.
Secondly, the OSB applies to search engines (with certain exceptions). This is broad, catching any functionality that pulls results from other websites.
There are stronger obligations on ‘Category 1’ providers (the major companies such as Facebook and Google) than on ‘Category 2’ providers. The thresholds for Category 2 are yet to be fully determined, but up to 25,000 tech companies in the UK are estimated to be affected.
The OSB, broadly, places an obligation on these providers to remove and/or restrict access to illegal content and ‘harmful’ content (both content harmful to children and content harmful to adults). Illegal content is, in essence, any content that relates to a criminal offence that can be committed against an individual.
All in-scope service providers must (amongst other duties) have systems in place to prevent the uploading of illegal content, and systems for monitoring and removing it.
What will constitute content that is harmful to adults will be set out in secondary legislation and guidance. It is likely that this point will be hotly debated. Likewise, the definition of what is harmful to children is to follow, but public debate appears to have a clearer sense of what may constitute harm to children, such as content promoting self-harm, bullying, and so on.
Whilst we await OFCOM’s guidance around harmful content, we do know now that there is a general catch-all: service providers must also restrict harmful content where there is a material risk of harm to an appreciable number of children and/or adults – this is likely to be an objective test. Harm is defined as covering both psychological and physical harm. There are different designations for different types of harmful content, depending on severity, and the obligations and expectations are higher for certain kinds of harmful content. This is a departure from the Code, under which ‘inappropriate’ content was to be determined subjectively. The Government is attempting to prescribe what is and is not acceptable to both adults and children. Adults can take a risk-based approach to legal but harmful content; children cannot.
Akin to the Code, however, service providers must take measures to restrict children’s access to harmful material and/or mitigate its impact (again on a sliding age scale).
Category 1 service providers must take proportionate measures to reduce the risk that adults are exposed to harmful content that their terms of service permit. Adults are still able to view harmful (but not illegal) content, but if they do not want to, in-scope service providers have a duty to put adequate measures in place.
There are additional duties: risk assessments, reporting, verification of users, regulatory compliance, and so on. Discussing these aspects of the OSB would take up another blog. There are also new criminal offences around online communications – again aimed at promoting online safety.
OFCOM will be empowered to issue fines of the higher of £18 million or 10% of global annual turnover. OFCOM may also issue notices requiring service providers to comply with the OSB, and has other related enforcement powers. There are potential criminal offences for directors of companies providing user-to-user services that either consent to a breach of the OSB or fail to comply due to ‘neglect’.
Conclusion
The Government is attempting to make the UK the ‘safest place to be online.’ Aligning with EU legislation, online service providers are to face a heavy regulatory burden to prevent and remove illegal content (and, in the UK, harmful content) as soon as possible, or face serious consequences.
Whilst the OSB complements the Code, in some respects it represents a significant push-back against the state of the internet in Western countries, which, at its origins, was an incredibly free space. Given the internet’s pervasiveness in everyday life, this was to be expected, but it remains to be seen whether it will truly have an impact on the internet, or whether technology companies will simply block UK users because of the OSB.
If you have concerns around your compliance with the Code or, when it becomes law, the OSB, please do not hesitate to contact EM Law here.