Online Safety Act 2023 - Explainer

Introduction

The Online Safety Act 2023 introduces laws to protect both children and adults online by imposing new responsibilities on social media and search service providers. These companies must implement systems to prevent illegal activities and remove illegal content.

Children receive the strongest protections: platforms must block harmful and age-inappropriate content and offer parents and children clear, accessible ways to report problems. Adults are also protected, with platforms required to be transparent about the risks of harmful content and to give users more control over what they see.

Ofcom, as the independent regulator, will oversee compliance, issuing guidelines and enforcing rules. The Act’s safety requirements are proportionate, meaning that smaller platforms won’t face the same obligations as larger ones, and both providers and regulators must consider users' rights when fulfilling safety duties.

Who does the Act apply to?

The Online Safety Act applies to search services and platforms that allow users to post or interact online. This includes a wide range of services, such as social media, cloud storage, video sharing, online forums, dating, and messaging apps.

The Act also applies to companies based outside the UK if they have links to the UK, such as having a significant number of UK users, targeting the UK market, or providing services that pose a material risk of significant harm to people in the UK.

How is the Act being implemented?

The Online Safety Act became law on 26 October 2023, and efforts are underway to implement its protections. Ofcom is leading the process with a phased approach to enforce the Act's duties. The government will also introduce secondary legislation to enable certain parts of the framework.

New offences introduced by the Act

The new criminal offences under the Online Safety Act, in force since 31 January 2024, target individuals responsible for acts such as encouraging or assisting serious self-harm, cyberflashing, sending false communications intended to cause harm, threatening communications, intimate image abuse, and epilepsy trolling. Convictions have already been secured for cyberflashing and threatening communications under these new offences.

What content does the Act tackle?

Illegal Content - The Online Safety Act requires companies to take robust action against illegal content and activity on their platforms. This includes reducing the risk of their services being used for illegal activity, removing illegal content when it is flagged or otherwise discovered, and designing systems that prevent such content from appearing in the first place. Platforms must act proactively against priority offences, such as child sexual abuse, extreme violence, fraud, and terrorism, among others. Search services have equivalent new duties to limit users' exposure to illegal content in search results.

Content harmful to children - The Online Safety Act prioritises protecting children from harmful or age-inappropriate content, even if it's not illegal. Platforms accessed by children must take steps to shield them from harmful content and behaviour. The Act outlines two categories: Primary Priority Content, from which children must be fully protected (e.g., pornography, content promoting self-harm, eating disorders, or suicide), and Priority Content, which requires age-appropriate restrictions (e.g., bullying, abusive content, depictions of serious violence, dangerous stunts, and exposure to harmful substances).

Age-appropriate experiences for children

The Online Safety Act requires social media companies to enforce their age limits consistently and to protect child users. Platforms must assess the risks their services pose to children, implement appropriate age restrictions, and provide age-appropriate experiences. Services that set a minimum age must state it clearly and apply measures consistently to prevent underage access. Companies are required to use age assurance technologies and to disclose which technologies they are using. The Act stops companies from claiming age restrictions in their terms of service without actively enforcing them.
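
As a purely illustrative sketch, not a method prescribed by the Act or Ofcom, the check below shows the basic shape of an age assurance gate: a provider obtains an age from whatever assurance method it has chosen and only grants access if its stated minimum age is met. The class, the minimum age, and the method labels are all assumptions made up for the example.

```python
from dataclasses import dataclass

# Purely illustrative: the Act does not prescribe a particular age assurance
# technology; providers choose measures proportionate to the risk.

@dataclass
class AgeCheckResult:
    method: str          # e.g. "id_document" or "facial_age_estimation" (assumed labels)
    estimated_age: int   # the age returned by the chosen assurance method

MINIMUM_AGE = 13  # assumed platform age limit, as stated in its terms of service

def may_access(result: AgeCheckResult) -> bool:
    """Grant access only when the assurance result satisfies the stated age limit."""
    return result.estimated_age >= MINIMUM_AGE

# A user assessed as 11 by facial age estimation is refused access.
print(may_access(AgeCheckResult("facial_age_estimation", 11)))  # False
```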

Adults can control what they see

Under the Online Safety Act, major online platforms (Category 1) must provide adult users with tools to control the content they see and who can interact with them. This includes options to filter out unverified users, helping to block anonymous trolls. Adults can verify their identity and use tools to limit content from non-verified users.

In line with Ofcom's guidance, platforms must offer these tools to help users avoid content that, while not illegal, promotes suicide, self-harm, or eating disorders, or that is abusive or hateful, such as racist, antisemitic, homophobic, or misogynistic material. The tools must be effective and easy to access. Children already receive similar protections under the Act's child safety duties.
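
Here is a minimal sketch of the kind of user empowerment tool described above: a feed filter that honours a user's choice to hide content from non-verified users and to opt out of certain legal-but-harmful categories. The field names and category labels are invented for illustration and are not drawn from the Act or Ofcom's codes.

```python
# Illustrative sketch only; field names and categories are invented, not taken
# from the Act or Ofcom's codes of practice.

OPT_OUT_CATEGORIES = {"self_harm", "eating_disorder", "hate"}

def visible_posts(posts, hide_non_verified, opted_out):
    """Apply a user's content and interaction preferences to a feed."""
    shown = []
    for post in posts:
        if hide_non_verified and not post["author_verified"]:
            continue  # the user chose not to see content from non-verified users
        if opted_out & set(post.get("categories", [])):
            continue  # the user opted out of this legal-but-harmful category
        shown.append(post)
    return shown

feed = [
    {"id": 1, "author_verified": True, "categories": []},
    {"id": 2, "author_verified": False, "categories": []},
    {"id": 3, "author_verified": True, "categories": ["self_harm"]},
]
print([p["id"] for p in visible_posts(feed, True, OPT_OUT_CATEGORIES)])  # [1]
```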

Tackling suicide and self-harm content

The Online Safety Act applies to any site that allows users to share content or interact, requiring rapid removal of illegal suicide and self-harm content and proactive protection against content illegal under the Suicide Act 1961. It introduces a new offence for encouraging or assisting serious self-harm. Services accessible to children must prevent them from encountering legal content that promotes suicide or self-harm. Major platforms (Category 1) must also consistently enforce their terms of service regarding the removal or restriction of such content and provide effective reporting and redress mechanisms for users to address enforcement concerns. 

How the Act will be enforced

Ofcom is the regulator for online safety, responsible for ensuring that platforms protect their users. Once new duties are in effect, platforms must demonstrate compliance with the Act's requirements. Ofcom will monitor the effectiveness of these processes and can act against non-compliant companies, imposing fines of up to £18 million or 10% of their global revenue, whichever is higher.
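
For illustration only, the arithmetic behind the maximum penalty ("£18 million or 10% of global revenue, whichever is higher") works out as in this minimal sketch; the revenue figures are invented.

```python
# Minimal sketch of the maximum-penalty calculation described above:
# the greater of a fixed GBP 18 million or 10% of global revenue.
def max_fine(global_revenue_gbp: float) -> float:
    return max(18_000_000, 0.10 * global_revenue_gbp)

# Invented revenue figures, for illustration only.
print(f"{max_fine(50_000_000):,.0f}")     # 18,000,000 - the fixed floor applies
print(f"{max_fine(2_000_000_000):,.0f}")  # 200,000,000 - 10% of revenue is higher
```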

Senior managers may face criminal charges for failing to comply with information requests or enforcement notices related to child safety. In severe cases, Ofcom can seek court approval to stop payment providers, advertisers, and internet service providers from working with non-compliant sites, hindering their ability to generate revenue or be accessed in the UK.

How the Act will affect companies not based in the UK

The Online Safety Act empowers Ofcom to act against all companies within its scope, regardless of their location, as long as they have relevant links to the UK. This includes services with a significant UK user base, those targeting UK users, and any services with content that poses a significant risk of harm to individuals in the UK.

Tackling harmful algorithms

The Online Safety Act mandates that providers assess how their algorithms may affect users' exposure to illegal content and harmful material for children during risk assessments. They must take steps to mitigate identified risks, considering the design, functionalities, and features of their platforms. The law highlights that harm can occur from how content is disseminated, particularly if algorithms repeatedly promote large volumes of harmful content to children. Additionally, some platforms are required to publish annual transparency reports detailing online safety-related information, including the algorithms used and their impact on users' experiences.
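
To make the idea of mitigating algorithmic risk concrete, here is a hedged sketch of one way a recommender could filter and down-rank flagged material for accounts identified as belonging to children. The harm labels, weighting, and data shapes are assumptions made up for the example; the Act sets outcomes, not a particular algorithm.

```python
# Illustrative only: the labels, weighting, and data shapes are invented;
# the Act requires outcomes, not a specific ranking algorithm.

def rank_for_user(items, is_child):
    """Order items by score, applying child-safety mitigations before serving."""
    ranked = []
    for item in items:
        score = item["score"]
        if is_child and item.get("primary_priority_harm"):
            continue                 # children must not encounter this content at all
        if is_child and item.get("priority_harm"):
            score *= 0.1             # heavily down-rank age-inappropriate content
        ranked.append((score, item["id"]))
    return [item_id for _, item_id in sorted(ranked, reverse=True)]

catalogue = [
    {"id": "a", "score": 0.9, "primary_priority_harm": True},
    {"id": "b", "score": 0.8, "priority_harm": True},
    {"id": "c", "score": 0.5},
]
print(rank_for_user(catalogue, is_child=True))   # ['c', 'b']
print(rank_for_user(catalogue, is_child=False))  # ['a', 'b', 'c']
```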

Protecting women and girls

The Online Safety Act mandates that platforms proactively address illegal online content that disproportionately affects women and girls, such as harassment, stalking, controlling or coercive behaviour, extreme pornography, and revenge pornography. All user-to-user and search services must implement systems to remove flagged illegal content, with specific removal measures set out in Ofcom's codes of practice. Ofcom is required to consult the Victims' Commissioner and the Domestic Abuse Commissioner while developing these codes, so that the perspectives of women, girls, and victims are reflected. Additionally, Ofcom must produce guidance summarising effective measures to combat online abuse faced by women and girls, making it easier for platforms to implement comprehensive protections.

Review of pornography legislation

Over the past two decades there has been a significant shift in how media is consumed and how content is interacted with online, which has made it necessary to update pornography regulation and legislation. Alongside the Online Safety Act, the Independent Pornography Review has been commissioned to evaluate the regulation, legislation, and enforcement of both online and offline pornographic content. The review will examine how exploitation and abuse are tackled within the industry and assess the potential harms of pornography. It aims to ensure that the laws and regulations governing the evolving pornography landscape remain effective and relevant. The review is expected to publish a report with recommendations for the government by the end of summer 2024.

Resources

Online Safety Act - https://www.legislation.gov.uk/ukpga/2023/50/contents

Online Safety Act Explainer - https://www.gov.uk/government/publications/online-safety-act-explainer/online-safety-act-explainer#new-offences-introduced-by-the-act

 
