Twitter has moved to stamp on racist abuse directed at black England players after the Euro 2020 final but, in the face of widespread demand for social media platforms to act, is its approach enough?
The abuse was also posted on Facebook and comes after players and clubs boycotted social media entirely in April in protest at a growing wave of discrimination aimed at people in football.
Here are the steps being taken by social media platforms to tackle the problem and issues preventing further progress.
What is being asked for?
There are two major requests of the social media platforms.
The first is: “Messages and posts should be filtered and blocked before being sent or posted if they contain racist or discriminatory material.”
The second is that “all users should be subject to an improved verification process that (only if required by law enforcement) allows for accurate identification of the person behind the account”.
What are the issues with filtering?
The challenge with the first request – filtering content before it has been sent or posted – is that it requires technology to automatically identify whether the content of a message contains racist or discriminatory material, and this technology simply doesn’t exist.
The filtering can’t be based on a list of words: people can invent new epithets or substitute characters to dodge the list, and existing racist terms can appear in a context that doesn’t spread hate, for instance when a victim looking for support quotes an abusive message that was sent to them.
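To make those failure modes concrete, here is a minimal sketch of a blocklist filter in Python; the blocklist, the placeholder word “badword” and the sample messages are all invented for illustration.

```python
# Minimal sketch of a naive blocklist filter; "badword" stands in for a
# real slur, and the blocklist and sample messages are invented.

BLOCKLIST = {"badword"}

def is_blocked(message: str) -> bool:
    """Flag a message if any token matches the blocklist exactly."""
    tokens = message.lower().split()
    return any(token.strip('.,!?"\'') in BLOCKLIST for token in tokens)

# A one-character substitution slips straight past the filter...
print(is_blocked("you are a b4dword"))  # False - the abuse gets through

# ...while a victim quoting the abuse they received gets blocked.
print(is_blocked('someone sent me "you are a badword", please help'))  # True
```

A real filter would be more sophisticated than this, but the underlying problem remains the same: matching words is not the same as understanding intent.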
How do they filter other material?
The social media platforms have had successes in filtering and blocking terrorist material or images of child sexual exploitation, but these are a different kind of problem from a technological perspective.
The pool of known abuse images in circulation is, tragically, still growing, but it is finite: because the vast majority of this material has been uploaded before, it has already been fingerprinted, making it far easier to detect when it reappears and to take it down automatically.
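By way of contrast with filtering text, here is a minimal sketch of what fingerprint matching looks like. Real systems use perceptual hashes (such as Microsoft’s PhotoDNA) that survive resizing and re-encoding; a plain SHA-256 stands in here purely to illustrate the lookup, and the fingerprint database is an invented placeholder.

```python
# Minimal sketch of matching an upload against fingerprints of known images.
# A plain SHA-256 stands in for the perceptual hashes real systems use;
# KNOWN_FINGERPRINTS is an invented placeholder.

import hashlib

KNOWN_FINGERPRINTS: set[str] = set()  # fingerprints of previously identified images

def fingerprint(image_bytes: bytes) -> str:
    """Reduce an image to a short, fixed-size fingerprint."""
    return hashlib.sha256(image_bytes).hexdigest()

def should_block(image_bytes: bytes) -> bool:
    """Block an upload if its fingerprint matches a known image."""
    return fingerprint(image_bytes) in KNOWN_FINGERPRINTS
```

The crucial property is that the check is a simple lookup against material humans have already judged – no machine has to interpret anything new.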
Fingerprinting an image and understanding the meaning of a message in the English language are very different technological challenges.
Even the most advanced natural language processing (NLP) software can struggle to consider the context that a human will innately comprehend, although many companies claim that their software manages this successfully.
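One way to see the difficulty: an attack and a victim quoting that attack can be built from almost exactly the same words, so any system that relies mainly on which words are present has little to go on. The toy comparison below, with an invented placeholder in place of a real slur, makes the point.

```python
# Toy comparison showing that an attack and a message quoting that attack
# share their vocabulary almost entirely; examples and "badword" invented.

attack  = "you are a badword and you should quit"
quoting = 'he sent me "you are a badword and you should quit" - so hurtful'

tokens_attack  = set(attack.lower().replace('"', " ").split())
tokens_quoting = set(quoting.lower().replace('"', " ").split())

# Every token of the attack also appears in the victim's message.
print(tokens_attack <= tokens_quoting)  # True
```

Telling the two apart requires working out who is speaking and why – precisely the context that humans grasp innately and software does not.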
What do the companies say?
Instead, both Twitter and Facebook say that they quickly removed abusive messages after they were posted.
Twitter said that “through a combination of machine learning based automation and human review, we have swiftly removed over 1,000 Tweets and permanently suspended a number of accounts for violating our rules”.
A spokesperson for Facebook said: “We quickly removed comments and accounts directing abuse at England’s footballers last night and we’ll continue to take action against those that break our rules.”
They added: “In addition to our work to remove this content, we encourage all players to turn on Hidden Words, a tool which means no one has to see abuse in their comments or DMs.”
Hidden Words is Facebook’s filter for “offensive words, phrases and emojis” in DM requests, but the shortcomings of this approach are described above.
What are the issues with requiring verified IDs?
The call for social media users to identify themselves to the platforms – if not necessarily to the public – has also been echoed by professional body BCS, The Chartered Institute for IT.
Anonymity online is valuable, as Culture Secretary Oliver Dowden recognised in a parliamentary debate, noting “it is very important for some people – for example, victims fleeing domestic violence and children who have questions about their sexuality that they do not want their families to know they are exploring. There are many reasons to protect that anonymity”.
Suggestions of an ID escrow – in which the platform knows the identity of the user, but other social media users do not – provoke questions about whether “groups such as human rights advocates [and] whistleblowers”, which the government has identified as deserving anonymity online, can trust the staff inside the platforms.
And if the companies were holding the real identities of these users in escrow, those identities could be exposed to law enforcement, with a number of undemocratic states known to target dissidents who speak freely against their government on social media.
It is also not clear what processes the social media platforms could have in place to verify these identities.
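For concreteness, this is roughly the shape an escrow scheme could take – a purely hypothetical sketch, not any platform’s actual design; every name and field below is invented.

```python
# Purely hypothetical sketch of an ID escrow: the platform stores a verified
# identity keyed by the public handle, and linking the two requires a gated,
# logged disclosure step. No platform is known to work this way; all names
# and fields are invented for illustration.

from dataclasses import dataclass, field

@dataclass
class EscrowedIdentity:
    handle: str                # the only thing other users ever see
    legal_name: str            # verified identity, never shown publicly
    document_reference: str    # e.g. a reference to a passport check

@dataclass
class EscrowStore:
    _records: dict[str, EscrowedIdentity] = field(default_factory=dict)
    _access_log: list[str] = field(default_factory=list)

    def deposit(self, identity: EscrowedIdentity) -> None:
        """Store a verified identity at sign-up."""
        self._records[identity.handle] = identity

    def disclose(self, handle: str, warrant_id: str) -> EscrowedIdentity:
        """Release an identity only against a recorded legal instrument."""
        self._access_log.append(f"{handle} disclosed under {warrant_id}")
        return self._records[handle]
```

Even in this toy form the trust problem is visible: whoever operates the store can read every record, and whatever gates the disclosure step can be subject to legal compulsion.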
“Online abuse is not anonymous,” according to Heather Burns, policy manager at the Open Rights Group.
“Virtually all of the current wave of abuse is immediately traceable to the individuals who shared it, and social media platforms can hand details to law enforcement.”
“Government cannot pretend that this problem is not their responsibility. Calls for social media platforms to take material down miss the point, and let criminals off the hook,” Ms Burns added.
What is the government going to do?
Oliver Dowden said: “I share the anger at appalling racist abuse of our heroic players. Social media companies need to up their game in addressing it and, if they fail to, our new Online Safety Bill will hold them to account with fines of up to 10% of global revenue.”
The Online Safety Bill – a draft of which was published this May – introduces a statutory duty on social media platforms to address harm, but it doesn’t define what that harm is.
Instead, judgement about that will be left to the regulator Ofcom, which has the power to penalise a company that doesn’t comply with these duties with a fine of up to 10% of its worldwide revenue.
Notably, a similar power is available to the Information Commissioner’s Office to deal with data protection breaches, yet the maximum fine has never been issued against any major platform.
In cases such as racist abuse, the content will be obviously illegal, but the language about the duty itself is vague.
As drafted, the platforms will be required to “minimise the presence” of racist abuse and the length of time it remains online. It is possible that Ofcom, as the regulator, will judge that they are already doing this.
What do others say?
At its core, the issue is about who is responsible for tackling this content.
Imran Ahmed, the head of the Center for Countering Digital Hate (CCDH), said: “The disgusting racist abuse of England players is a direct result of Big Tech’s collective failure to tackle hate speech over a number of years.
“This culture of impunity exists because these firms refuse to take decisive action and impose any real consequences on those who spew hatred on their platforms.
“Most immediately, racists who abuse public figures should be immediately removed from social media platforms. Nothing will change until Big Tech decides to drastically change its approach to this issue.
“So far, political leaders have only offered words, without action. But if social media companies refuse to wake up to the problem, the government will need to step in to protect people.”
Ms Burns countered: “Illegal racial abuse sent to England’s footballers must be prosecuted under existing laws.
“Government needs to ensure that police and the justice system enforce existing criminal law, rather than abdicating their responsibility by making this the social media platforms’ problem. Social media sites do not operate courts and prisons,” she added.
What else can be done?
Graham Smith, a respected cyberlaw expert at Bird & Bird, told Sky News he believed the government and police could make use of existing “online ASBO” powers to target the most egregious antisocial online behaviour.
In an interview with the Information Law and Policy Centre, he said the potential for using ASBOs (anti-social behaviour orders – now known as IPNAs, or injunctions to prevent nuisance or annoyance) “has been largely ignored”.
IPNAs “have controversial aspects, but at least have the merit of being targeted against perpetrators and subject to prior due process in court”, Mr Smith added, noting that “thought could be given to extending their availability to some voluntary organisations concerned with victims of online misbehaviour”.