2021-06-18 10:08
Lexology: Online Safety Bill - balancing online safety and freedom of expression
The Government has signalled its commitment to improving online safety with the Online Safety Bill, announced in the Queen’s Speech in May. It aims to ensure greater protection of young people and to clamp down on racist abuse online, whilst also safeguarding freedom of expression. It will impose a number of obligations on digital service providers, with Ofcom receiving new powers to levy significant penalties for non-compliance.
The government describes the Bill as a ‘milestone’ in its fight to make the internet safer. Research shows that over three quarters of UK adults have concerns about going online. Parents are also increasingly concerned: the proportion who feel that the benefits of their children accessing the internet outweigh the risks has fallen to 55%.
Scope of the Online Safety Bill
The Bill follows the publication of the Online Harms White Paper in April 2019 and aims to create a “democratic digital age”.
It will apply to digital service providers whose services are available to UK users and either allow users to share user-generated content or enable users to search multiple websites and databases. This will include social media platforms, messaging services, online marketplaces and other websites, apps and services that host user-generated content or allow people to talk to others online. The Bill includes provisions to tackle online scams and to require digital service providers to apply measures to remove, and limit the spread of, illegal and harmful content such as child sexual abuse material, terrorist material and content promoting suicide.
Digital service providers will fall into one of two categories, each subject to different obligations for moderating and mitigating harmful or illegal content appearing on or via their platforms.
The defining criteria for each category are still to be determined by Ofcom; however, Category 1 providers are likely to be those with the largest online presence and high-risk features, such as Facebook, TikTok, YouTube, Instagram and Twitter. These companies will be subject to more onerous obligations: they will need to take proactive steps to address both illegal and harmful (but legal) content, provide extra protection for children and submit transparency reports to Ofcom. Category 2 providers – likely to include search services and lower-risk user-to-user services – will need to take proportionate steps in relation to illegal content and to protect children (but not adults) from harmful content.
Under the proposals, Ofcom will receive a new power to fine companies that fail in this new duty of care up to £18 million or 10% of annual turnover (whichever is higher), and to order blocking of access to non-compliant sites. The Bill also provides for a criminal liability regime for senior managers as a deferred power, which could be introduced at a later date if providers do not respond appropriately to the new safety requirements.
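For illustration only, the penalty cap amounts to a simple ‘greater of’ test. The Python sketch below uses a hypothetical max_penalty helper (not anything defined in the Bill) and assumes turnover is expressed in pounds; the precise definition of qualifying turnover is still to be settled:

    # Illustrative only: the proposed cap on Ofcom fines is the greater of
    # £18 million or 10% of annual turnover. 'max_penalty' is a hypothetical
    # helper for this sketch, not a term used in the Bill.
    def max_penalty(annual_turnover_gbp: float) -> float:
        return max(18_000_000.0, 0.10 * annual_turnover_gbp)

    # A provider with £1bn turnover faces a cap of £100m; a provider with
    # £50m turnover falls back to the £18m floor.
    print(f"£{max_penalty(1_000_000_000):,.0f}")  # £100,000,000
    print(f"£{max_penalty(50_000_000):,.0f}")     # £18,000,000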
Trend for greater digital regulation
The Bill reflects a wider trend towards greater regulation of the digital space and an increased emphasis on privacy and the protection of individuals. As with the recent EU proposals on the regulation of artificial intelligence, we see a balancing act at play between freedom of expression and the rights and freedoms (or, in this case, protection) of the individual.
In addition to the requirements on digital service providers in relation to illegal and harmful content, the Bill will also require certain digital service providers to safeguard democratically important content. Category 1 providers will have to conduct and publish ongoing assessments of their impact on freedom of expression and demonstrate the steps taken to mitigate any adverse impacts of their platforms. This is not surprising in the wake of two central themes of the online space in recent years: the protection of users, and complaints that ‘echo chamber’ algorithms create the potential for platforms such as Facebook to distort public perception of political events and current affairs.
Impact and legislative roadmap
In terms of impact, the larger players (TikTok, YouTube etc.) will quite likely have the resources to adjust to the new requirements, which are aimed at making the online world safer for users. We know that there are considerable PR implications to handling users’ information well, or badly, and these operators will be alive to this.
Arguably there may even be a positive PR opportunity off the back of the requirements, if the resultant changes are presented as ‘improving your experience’ and ‘making our platform safer’. For any service provider innovative enough to offer services that meet the demands of the legislation and tap into the public appetite for child-friendly online environments, there seems to be a ready market.
Smaller enterprises offering peer-to-peer internet services may find the changes more demanding; however, it remains to be seen whether they will fall into Category 2 and be spared the more onerous requirements. Nevertheless, providers in both categories will need to register with Ofcom and pay a fee based on their worldwide revenue and other factors Ofcom considers appropriate.
The draft legislation will now be subject to pre-legislative scrutiny by a joint committee of MPs and peers. While there may be changes to the draft as it passes through Parliament, the direction of travel is clear: a renewed focus on online harms and how to prevent them.