
https://homeofficemedia.blog.gov.uk/2019/04/08/online-harms-white-paper-factsheet/

Online Harms White Paper Factsheet


The Online Harms White Paper sets out the Government’s plans for world-leading laws to make the UK the safest place in the world to be online. These will make companies more responsible for their users’ safety online, especially that of children and other vulnerable groups. A new statutory duty of care will be introduced to make companies take responsibility for the safety of their users and to tackle harm caused by content or activity on their services. Compliance with this duty of care will be overseen and enforced by an independent regulator.

Companies will be held to account for tackling a comprehensive set of online harms. These range from illegal activity and content, such as terrorism, child sexual exploitation and abuse, and inciting or assisting suicide, to behaviours that may not be illegal but are nonetheless highly damaging to individuals or threaten our way of life in the UK. The regulator will take a proportionate and risk-based approach, prioritising action to tackle activity or content where there is the greatest evidence or threat of harm, or where children or other vulnerable users are at risk.

The Government is consulting on a wide range of potential enforcement options to ensure the regulator has sufficient powers to uphold public confidence while remaining fair and proportionate. Technology itself should be part of the solution. The Government is proposing measures to promote the tech-safety sector in the UK, as well as measures to help users manage their safety online. Over the coming months we will engage widely with companies, civil society and other governments to develop our proposals.

Regulatory framework

  • To fulfil the new duty of care, companies will have to take reasonable and proportionate action to tackle online harms on their services.
  • The regulator will set clear safety standards, backed up by mandatory reporting requirements and effective enforcement powers.
  • New regulation will be risk-based and proportionate, taking into account the broad range of businesses and organisations in scope, designed to support innovation and a thriving digital economy.

Regulatory body

  • An independent regulator will implement, oversee and enforce the new regulatory framework.
  • The Government is consulting on whether the regulator should be a new or existing body. The regulator will be funded by industry in the medium term, and the Government is exploring options such as an industry levy to put it on a sustainable footing.

Codes of practice

  • The regulator will issue codes of practice that set out what companies should do to fulfil their new duty of care. For all codes of practice relating to illegal harms, including incitement of violence and the sale of illegal goods and weapons, there will be a clear expectation that the regulator will work with law enforcement to ensure the codes adequately keep pace with the threat.

CSEA and terrorist content

  • Reflecting the threat to national security or the physical safety of children, the regulator will require companies to take particularly robust action to tackle terrorist content or child sexual exploitation and abuse (CSEA) content.
  • The Government will have the power to direct the regulator in relation to codes of practice on terrorist activity or CSEA online, and these codes must be signed off by the Home Secretary.
  • The Government will publish interim codes of practice providing guidance about tackling terrorist activity and online CSEA later this year.

Enforcement

  • The regulator will have a range of enforcement powers in order to ensure that all companies in scope of the regulatory framework fulfil their duty of care.
  • We are consulting on which enforcement powers the regulator should have at its disposal. We envisage that the regulator’s core enforcement powers will include:
    • issuing civil fines for proven failures in clearly defined circumstances;
    • serving notices to companies found to have breached standards;
    • publishing public notices about a company’s proven failure to comply with standards.
  • Because of the serious nature of the harms in scope and the global nature of online services, it is likely that the regulator will need additional powers. We are consulting on some additional enforcement powers:
    • disrupting the business activities of a non-compliant company, for example by preventing search results, app stores or links in social media posts from facilitating access to it;
    • creating new liability for individual senior managers, whether civil fines or, if extended further, criminal liability;
    • Internet Service Provider (ISP) blocking of non-compliant websites or apps. This would only be considered as an option of last resort, and deploying it would be a decision for the independent regulator alone.

Non-legislative measures

The White Paper contains measures to promote the tech-safety sector in the UK, as well as measures to help users manage their safety online:

  • The Government and the new regulator will work with leading industry bodies and other regulators to support innovation and growth in the tech-safety sector and encourage the adoption of safety technologies.
  • The Government will also work with the industry and civil society to develop a safety by design framework to make it easier for start-ups and small businesses to embed safety principles from the outset.
  • The Government will develop a new online media and digital literacy strategy. This will help empower users to manage their online safety and that of their children. The new regulator will also have the power to require companies to report on their education and awareness-raising activity.

User redress

  • Under the duty of care, companies will be expected to have an effective and easy-to-access complaints function. Users should receive timely, clear and transparent responses to their complaints, and there must be an internal appeals function.
  • We are consulting on making provision in legislation for designated bodies to bring 'super complaints' to the regulator to defend the rights of users, and we welcome views during the consultation on additional options for redress.

Activities and harms in scope

  • The regulatory framework will apply to companies that allow users to share or discover user-generated content or interact with each other online.
  • These services are offered by a very wide range of companies of all sizes, including social media platforms, file hosting sites, public discussion forums, messaging services and search engines.
  • The regulatory approach will reflect the diversity of the organisations in scope and minimise excessive burdens, particularly on small businesses and civil society organisations.
  • The regulator will take a risk-based approach. Its initial focus will be on those companies that pose the biggest and clearest risk of harm to users, whether because of the scale of their platforms or because of known issues with serious harms.

Online harms in scope

Harms with a clear legal definition

  • Child sexual abuse and exploitation
  • Terrorist content and activity
  • Organised immigration crime
  • Modern slavery
  • Extreme pornography
  • Revenge pornography
  • Harassment and cyberstalking
  • Hate crime
  • Encouraging or assisting suicide
  • Incitement of violence
  • Sale of illegal goods/services, such as drugs and weapons (on the open internet)
  • Contempt of court and interference with legal proceedings
  • Sexting of indecent images by under-18s

Harms with a less clear legal definition

  • Cyberbullying and trolling
  • Extremist content and activity
  • Coercive behaviour
  • Disinformation
  • Intimidation
  • Violent content
  • Advocacy of self-harm
  • Promotion of Female Genital Mutilation

Underage exposure to legal content

  • Children accessing pornography
  • Children accessing inappropriate material (including under-13s using social media, under-18s using dating apps, and excessive screen time)

Scale of harms relating to terrorism online

  • All five of the terrorist attacks in the UK during 2017 had an online element to them.
  • Our aim is to ensure there is no safe space for terrorists to operate online, and to prevent the dissemination of all forms of terrorist content online.
  • The speed of dissemination is increasingly important. Our research shows that approximately a third of all links to Daesh propaganda are disseminated within one hour of release.
  • Daesh continue to diversify their approach. Our analysis shows that in 2018 Daesh used over 100 platforms to host propaganda, including a range of smaller platforms.
  • Facebook reports that it acted on approximately 14 million pieces of Daesh, al-Qaeda and affiliate group terrorist propaganda in the first three quarters of 2018, 99% of which they say was found proactively by Facebook before it was reported by users.
  • Twitter announced in December 2018 that, between January and June 2018, 205,156 accounts were suspended for violations related to promotion of terrorism, and of those suspensions 91% were flagged using internal, proprietary tools.
  • Terrorist use of the internet continues to evolve, as shown by the livestreaming of the recent attack in New Zealand. We are working closely with tech companies to understand the challenges around livestreaming and the opportunities to develop technology to automatically detect, flag and remove livestreamed terrorist content.

Scale of harms relating to Child Sexual Exploitation and Abuse online

  • In the UK, the National Crime Agency estimates that 80,000 individuals present a sexual threat to children. Each month around 400 people are arrested for offences related to online child sexual exploitation and abuse, and more than 500 children are safeguarded.
  • Project Arachnid, a web crawler that trawls the web to identify webpages with suspected abuse content, has analysed around 51 billion images and 1.3 billion URLs for suspected child sexual abuse material and issued more than 800,000 takedown notices.
  • In 2017, the Internet Watch Foundation assessed 80,319 confirmed reports of websites hosting images of child sexual abuse. Of the children in the images, 43% were aged 11 to 15 and 57% were aged ten or younger, including 2% aged two or younger.
  • In the most horrific cases, child sex offenders in developing countries are abusing children to order at the instigation of offenders in the UK who commission the abuse online and watch it over livestream for a fee of as little as £12.
  • Less than 1% of child sexual abuse imagery is hosted in the UK.
  • In 2018 there were more than 18.4 million referrals of child sexual abuse content by US tech companies to the National Center for Missing & Exploited Children (NCMEC), with some reports containing hundreds of images.
  • In the third quarter of 2018, Facebook reported removing 8.7 million pieces of content globally for breaching policies on child nudity and sexual exploitation.
  • FOI statistics obtained by the NSPCC show that, between April and September 2018, police recorded 5,161 grooming offences. Instagram was used in 32% of those instances, Facebook in 23% and Snapchat in 14%.

Measures in the Online Harms White Paper to tackle terrorist and CSEA content

  • For the most serious online offending, such as terrorism and CSEA, we will expect companies to go much further than for other harms and to demonstrate the steps they have taken to combat the dissemination of associated content and illegal behaviours.
  • The Government will have the power to issue directions to the regulator regarding the content of the codes of practice relating to these harms, and the Home Secretary will approve the draft codes before they are brought into effect. In addition, the regulator will not normally agree to companies adopting proposals that diverge from these two codes of practice, and will require a high standard of proof that alternative proposals will be effective.
  • While it will be for the new regulator to produce codes of practice when it begins work, the Government expects companies to take action now to tackle harmful content or activity on their services. For those harms where there is a risk to national security or to the physical safety of children, the Government will work with law enforcement and other relevant bodies to produce interim codes of practice for online terrorist content and CSEA activity. These codes will be published later this year.
  • Some of the areas that are likely to be included in the interim codes of practice, and that we expect the regulator to cover in its own codes of practice, are:
    • steps companies should take to prevent new and known terrorist or CSEA content, and links to content, being made available to users. This should include guidance on proactive use of technological tools, where appropriate, to identify, flag, or block this content;
    • clarification as to what constitutes an expedient timeframe for the removal of terrorist content;
    • the reasonable steps companies should take to proactively identify and act upon CSEA content and activity, such as grooming;
    • the reasonable steps companies should take to proactively identify and act upon terrorist or CSEA activity or content, including within live streams;
    • guidance on the content and activity companies should proactively prevent from being made available to users, which will help inform the design of technological tools;
    • steps companies should take to proactively identify suspicious accounts showing indicators of CSEA activity and ensure children are protected from them;
    • guidance about the requirements for how companies should inform and support law enforcement and other relevant Government agencies’ investigations and prosecution of criminal offences in the UK; this will include specific guidance about the content companies should preserve following removal and for how long, and when companies should proactively alert law enforcement to this content;
    • steps companies should take to provide effective systems for child users to report, remove and prevent further circulation of images of themselves which may fall below the illegal threshold, but which leave them vulnerable to abuse;
    • steps companies should undertake when dealing with accounts that have uploaded, engaged with or disseminated terrorist content, including disabling accounts;
    • steps to ensure that users who are affected by CSEA content and activity are directed to, and can access, adequate support;
    • steps companies should take to implement measures to identify which users are children, and adopt enhanced safety measures for these users;
    • steps companies should take to promptly inform law enforcement where there is information about a CSEA offence, including provision of sufficient identifying information about victims and perpetrators;
    • steps companies should take to continually review their efforts in tackling CSEA and terrorist content, to adapt their internal processes and technology, and to keep sufficiently up to date with the threat landscape, ensuring that their identification and response continually improve;
    • guidance on the CSEA content and activity companies should proactively prevent, identify and act upon, which will help inform the design and implementation of technological tools;
    • thresholds for the types of content companies should preserve following removal, for how long they should keep them, and when and with whom such information should proactively be shared;
    • the steps services are expected to take to prevent searches from leading users to terrorist activity and/or content, including ensuring that automatic suggestions for terrorist content are not made and that users are directed towards alternative sources of information or support.
