In The News

Online Safety Bill brought to UK Parliament – some key points

After four years of debate and delays, the UK’s Online Safety Bill is finally being brought before Parliament on Thursday, and is touted as some of the strictest web legislation in the world should it complete its final steps. If passed, the bill is set to become law later in the year.

The bill is pitched as a comprehensive cyber-security and privacy solution aimed at keeping both children and adults safe online, and at curbing hateful content and anonymous trolling.

We take a look at some of the notable features of this sweeping piece of legislation, which covers many different online concerns ranging from pornography to scam adverts, and offer our own summary.

Crackdown on paid-for scam adverts

The Online Safety Bill will target paid-for scam adverts on social media, specifically including adverts where “criminals impersonate celebrities or companies to steal people’s personal data, peddle dodgy financial investments or break into bank accounts”. Yes, Facebook, everyone’s looking at you.

This has clearly been shaped by consultation with the UK’s “Money Saving Expert” Martin Lewis, who has been at loggerheads with Facebook for some time over paid-for adverts using his name and face to lure victims to unregulated trading platforms under the guise of “get-rich-quick” schemes.

The bill requires social media companies to have robust measures in place to scan adverts and prevent scams from appearing on their sponsored ads platforms (what these measures will look like will be set out by Ofcom later). Failure to implement such measures can result in large fines (as much as 10% of a company’s annual turnover) or even the platform being blocked altogether.

Of course, Facebook has been threatened with various legal avenues regarding these scam adverts and has managed to get away relatively unscathed thus far, so there is no guarantee that this bill will result in further consequences.






Quicker response to illegal content

The bill will also require the larger social media companies to act more quickly in taking down a whole host of illegal content, including romance scams and finance scams. This will likely mean that many social media platforms will need to revamp their reporting features so illegal content is seen faster by either AI or human moderation teams.

But given the high number of scams that are wrongly deemed acceptable by AI moderation, it is likely that certain social media platforms will need to increase the amount of human moderation if they are to remain on the safe side of this requirement.

Social media to self-police “legal but harmful” content

One of the most abstract yet extensive aspects of the bill is a requirement for social networks to regulate and – if needed – remove “legal but harmful” content such as bullying. It’s a currently vague aspect of the bill that is likely to prove controversial, given the often subjective nature of what could be considered harmful and what should be protected by the blanket of free speech.

However, secondary legislation introduced later is expected to clarify this aspect of the bill.






Added protection against “anonymous trolls” and “hateful content”

The bill requires social media platforms to protect their users with more controls over who can communicate with them. While most platforms offer a way to block all unsolicited direct messages, the bill will seek more comprehensive controls – specifically the ability for users to block “anonymous” users who have not verified their identities on social media. This in turn suggests that social media platforms will need an easy catch-all method for their users to verify themselves so as not to be considered “anonymous”.

The bill will also require social media companies to offer their users more control over whether they are exposed to harmful content, such as bullying and disinformation, if that content is allowed to stay on the platform.

Cyber-flashing

The bill will criminalise cyber-flashing, the practice of sending unsolicited sexually explicit images over the Internet. And it may not just be fines: offenders face a maximum custodial sentence of two years.






Adult websites to ensure viewers are over 18

The Online Safety Bill will aim to ensure “robust checks” are in place so that adult content websites are not available to under-18s. The onus is on the adult websites themselves to implement such a solution, which will likely involve an age verification procedure when visiting such sites. That in turn means the most popular adult websites will probably soon be asking for credit card numbers for age verification purposes.

It is, on the other hand, going to be welcome news for VPN providers and proxy services.

Jail time for senior tech executives

Perhaps one of the most ambitious aspects of the bill is the ability to criminally prosecute senior tech executives who fail to cooperate with Ofcom requests, or who are found to have destroyed or falsified information or evidence related to Ofcom requests. Such criminal prosecutions could potentially lead to custodial sentences, and will be possible two months after the bill is signed into law.

Summary

There are some welcome and much needed features to this safety bill, including the criminalisation of cyber-flashing and the attempt to force companies like Facebook to improve their methods for preventing paid-for scam adverts from proliferating across their platforms.

But there are some confused and vague aspects of the bill too. The requirement for social media companies to remove (or at least regulate) “legal but harmful” content merely revisits the messy and unresolved quagmire of the free speech vs. harmful content debate.

Social media has long been caught between warring factions: one side arguing that social media already goes too far in limiting free speech, and the other claiming it should remove more content that could be interpreted as harmful.

The bill doesn’t take any steps to clarify or help resolve this perpetual and contentious debate; rather it just injects itself into the middle with threats of punitive repercussions for getting it wrong, with no real clarification [as yet] on what ‘wrong’ really is. And if any clarification is forthcoming, is this not just the government installing itself as the arbiter of what is acceptable and unacceptable on social media?

Additionally, the requirement for social media to allow users to block specifically “anonymous” users from contacting them is also likely to be a headache for platforms. Many users will have legitimate reasons for not associating their real identities with their social media accounts, and many more will likely be unwilling to “verify” their accounts if it means – for example – having to cough up copies of their government issued ID.

The Online Safety Bill goes before Parliament on Thursday. If passed, it will likely become law later in the year.

Published by
Craig Haley