14/07/2021

BIRD'S EYE VIEW

It’s time social giants stepped up to the tackling-racial-hate plate

Social media is a powerful tool to connect and keep in touch, grow businesses, discover talent, share news and content, and communicate with brands or high-profile people in ways that would not otherwise be possible. Its potential is phenomenal at every end of the scale – used well, it is brilliant.

But social media also ignites hate and can be a truly terrifying and unsafe place. It is, unfortunately, a tool that at times works against us as well as for us – it has no clear sides, and its functions remain the same for everyone regardless of their intentions. This, sadly, gives racism an online home – something we at The Kite Factory are extremely mindful of as an agency with several international aid charity clients, whose campaigns are occasionally the target of such hateful comments.

Following England’s defeat to Italy in the Euro 2020 final on Sunday, we instantly saw a disturbing amount of hate directed at some of the England team based solely on their race, predominantly across Twitter and Instagram – an issue these players have been dealing with for far too long. This constant abuse previously led many football clubs, players, athletes and sporting bodies to stage a four-day social media boycott in April this year under the #StopOnlineAbuse campaign, in a show of solidarity against the abuse received via these platforms. The campaign called for social media companies to do more by:

  1. Putting stronger preventative and takedown measures in place
  2. Protecting users by implementing effective verification
  3. Ensuring real-life consequences

Understandably, the events since the Euro 2020 final have put the focus back on the platforms themselves, with many demanding to know why they are still not doing more to stop this. For instance, London Mayor Sadiq Khan posted on LinkedIn another direct call that “social media companies must take immediate action to remove and prevent this hate”, along with a letter he wrote directly to these networks this week. The Professional Footballers’ Association has also again called on the networks to “do better”, as “the intervention from social media companies is insufficient, and it is allowing racist abuse to thrive on the platforms”.

Repeatedly we have seen these social media giants under fire for failing to police their platforms effectively and for not doing enough to hold users accountable when they come online to abuse others. The UK government unveiled a new law earlier this year that will levy large fines on social media companies that fail to stamp out online abuse, as the Online Safety Bill seeks to “place a duty of care directly on the social firms to ensure they take swift action to remove such content”. Facebook, Instagram and Twitter in particular have been called upon to use their artificial intelligence expertise to act more quickly – spotting racist and abusive messages not only once they are posted but while they are still being written, urging users to ‘think again’ before posting. But an “are you sure you want to post this?” prompt is simply not enough, and it cannot keep falling on the shoulders of other users to report these comments before the platforms even pick them up and review them.
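
To make that limitation concrete, here is a minimal sketch of how such a pre-post ‘think again’ check might sit in a composing flow. Everything here is an assumption for illustration – the classifier stub, function names and prompt wording are placeholders, not any platform’s real system:

```python
import re

# Toy stand-in for a platform's abuse classifier. Real systems use trained
# models; this placeholder pattern list exists only to make the flow runnable.
FLAGGED_PATTERNS = [re.compile(r"\bexample_slur\b", re.IGNORECASE)]

def looks_abusive(draft: str) -> bool:
    """Return True if the draft message matches any flagged pattern."""
    return any(p.search(draft) for p in FLAGGED_PATTERNS)

def compose_flow(draft: str, confirm) -> bool:
    """Run the 'think again' check before a post is accepted.

    `confirm` is a callback that shows the prompt and returns the user's
    choice. Posting proceeds whenever the user insists - which is exactly
    the weakness discussed above.
    """
    if looks_abusive(draft) and not confirm("Are you sure you want to post this?"):
        return False  # user reconsidered; nothing is published
    return True  # post goes live; at best it joins a review queue later
```

As the final branch shows, a determined abuser simply clicks through the prompt – which is why the campaign’s first demand is for preventative and takedown measures with real teeth, not friction alone.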

The first problem, therefore – highlighted by the #StopOnlineAbuse campaign and at the centre of this discussion – is that the punishments from the platforms are not severe or instant enough. For example, Twitter do have a clear Hateful Conduct Policy whereby they “will review and take action against reports of accounts targeting an individual or group of people” with any of the following behaviour: violent threats, references to specific means of violence, incitement against protected categories, racist or sexist content and hateful imagery. But enforcement follows a three-step escalation (sketched in code after the list below), not an instant removal of the content and account:

  1. The person is asked to remove the violating content and serves a period of time in read-only mode
  2. Subsequent violations will lead to longer read-only periods
  3. Possible permanent account suspension following review (take Donald Trump’s suspension as an example – but look at how much inappropriate and hateful content he was able to post before that finally happened)
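
Modelled as code, the ladder above is little more than a strike counter. This is a sketch only – the read-only durations are illustrative placeholders, since exact periods vary and are not published in detail; the point is how many free strikes precede any lasting consequence:

```python
from dataclasses import dataclass

# Hypothetical read-only durations per strike; illustrative placeholders only.
READ_ONLY_HOURS = [12, 7 * 24]

@dataclass
class Account:
    strikes: int = 0
    suspended: bool = False

def enforce(account: Account) -> str:
    """Apply the next rung of the ladder after a confirmed violation."""
    account.strikes += 1
    if account.strikes <= len(READ_ONLY_HOURS):
        hours = READ_ONLY_HOURS[account.strikes - 1]
        return f"remove content; read-only mode for {hours} hours"
    # Only repeat offenders reach review, and review *may* suspend.
    account.suspended = True
    return "escalate to review; possible permanent suspension"
```

Nothing in this ladder stops the account posting again the moment a timer lapses – the problem the next paragraph picks up.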

So, you can see just how easy it is for a banned user to jump straight back online after their 15-minute read-only punishment. Instagram demonstrated this again this week: many users who reported abusive comments said their reports were rejected, receiving a response from Instagram stating “Our technology has found that this comment likely doesn’t go against our Community Guidelines. Our technology isn’t perfect, and we’re constantly working to make it better.” And even in the occasional instances where accounts are blocked on these networks, there is nothing to prevent the same user creating a new account with no link to them personally, professionally or legally.

This creates the second identified issue: it is far too easy to set up an account – all you have to do is create an email address to verify yourself, which takes a matter of seconds. It has been suggested that every single account should have to be verified and linked to a social ID in order to be allowed to post (with one ID covering multiple personal or business accounts should someone require this). The rationale is that hate speech shared online would likely drop instantly if it could no longer come from burner accounts, which it most often does.

This has been met with a number of concerns. Personal data is one: many are reluctant to upload official IDs to platform giants, following scandals such as Facebook and Cambridge Analytica and the increased focus on reducing PII. Others want to take part in social media without being personally discoverable – teachers not wanting to be found by their pupils, for instance. But the proposal remains viable, because its purpose is only to give the social networks a means of identifying users should they break the law, not to require that users post under their legal name or identity. We already see this type of requirement from other apps, in particular dating and betting ones, where users are often asked to verify themselves before setting up a profile.

One interesting solution is a ‘middleman’ social media ID system: a third party working between local governments, the social networks and the users. This middleman would issue one social media ID per user, traceable to that individual, but with no personal information passed to the social networks themselves – new users would only have to enter the social ID generated by this middle system (a sketch of the idea follows below). This is of course a huge challenge and a long-term solution to set up, and, as mentioned, there are many factors to consider. Since Sunday’s final, an official petition has already been created calling for verified ID to be made a requirement, and new arguments for and against are arising each day. It is a complex discussion, but one that needs to take place.
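
To show what the middleman idea amounts to in practice, here is a minimal sketch. Every name here is hypothetical – no such broker exists today – and it assumes the broker, not the platform, holds the identity mapping:

```python
import hmac
import hashlib
import secrets

# Sketch of the hypothetical 'middleman' ID scheme described above.
class SocialIDBroker:
    """Holds the only link between a verified person and their social ID."""

    def __init__(self):
        self._secret = secrets.token_bytes(32)  # broker's private key
        self._registry = {}                     # social_id -> verified identity

    def issue_id(self, verified_identity: str) -> str:
        """Issue one opaque, stable ID per verified person.

        The platform receives only this token, never the identity;
        only the broker (e.g. under a court order) can reverse it.
        """
        social_id = hmac.new(self._secret, verified_identity.encode(),
                             hashlib.sha256).hexdigest()[:16]
        self._registry[social_id] = verified_identity
        return social_id

    def identify(self, social_id: str) -> str:
        """Lawful-access lookup: map a social ID back to a person."""
        return self._registry[social_id]
```

The design point is that the platform can enforce one person, one ID, and hand that ID to the authorities when the law is broken, while the underlying identity document never leaves the broker.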

Despite these two calls for action, the challenge remains that social networks are far more difficult to regulate than other media platforms, due to the sheer scale of content posted every hour. Without a regulator this content simply will not disappear, and there is a huge lack of confidence in the networks doing this efficiently themselves when it is – to put it bluntly – a resource and cost drain for these giants. However, the argument stands that current technology should be able to spot a racist slur or message, or a combination of words and/or emojis indicating hate speech, as part of the platforms’ standard algorithms. The immediate focus should therefore be on more advanced detection systems, which the social networks must prioritise investing in, alongside harsher punishments for suspended users, to remove the digital home that racism and online abuse currently enjoy.
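
Even a crude screening pass illustrates the kind of combination signal meant here. The patterns below are illustrative placeholders, not a real moderation lexicon, and a production system would use trained models rather than rules:

```python
import re

# Emojis widely reported as vehicles for racist abuse in this context,
# plus hostile phrases. Both lists are illustrative placeholders only.
TARGETED_EMOJIS = {"\U0001F34C", "\U0001F412"}  # banana, monkey
HOSTILE_PHRASE = re.compile(r"\b(go home|not english)\b", re.IGNORECASE)

def hate_signal(message: str) -> bool:
    """Surface messages pairing hostile phrases with targeted emojis,
    or spamming targeted emojis at a user."""
    has_emoji = any(ch in TARGETED_EMOJIS for ch in message)
    has_phrase = bool(HOSTILE_PHRASE.search(message))
    emoji_run = sum(message.count(e) for e in TARGETED_EMOJIS)
    # Either signal alone may be innocent; the combination, or a run of
    # targeted emojis, is what a screening pass would flag for review.
    return (has_emoji and has_phrase) or emoji_run >= 3
```

Rules this simple are not the answer, but they show why “the technology can’t catch it” is a weak defence from companies with world-class machine learning teams.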

I am doubtful there will ever be a complete solution, because no platform policy will eradicate racism from society. But the social networks themselves are still simply not recognising their responsibility in this issue, not reducing online hate in effective ways, and not protecting their users from this abuse. We will never eradicate online hate completely, but we can and should take far more drastic measures to limit its distribution within the digital world.

By Simi Gill, Senior Digital Account Manager

References:

https://www.bbc.co.uk/news/technology-55888066

https://www.bbc.co.uk/sport/56936797

https://www.cnbc.com/2021/05/12/uk-to-fine-social-media-firms-which-fail-to-remove-online-abuse.html

https://thatsnovel.co.uk/2021/02/26/social-media-racism-hate-speech/

https://www.standard.co.uk/sport/football/pfa-social-media-racism-england-euro-2021-b945329.html