Platforms fuelling far-right violence: the crux of the matter

Platforms play a leading role in strengthening and emboldening online extremism.

This week, YouTube placed restrictions on the channel of English Defence League founder Tommy Robinson, which has around 357,000 subscribers, blocking him from streaming live events via the site and removing his uploads from search results.

Whilst this is a positive move, researchers have been warning tech companies for years that online extremism and radicalisation result in real-world violence. Technology analysts have even published reports specifically detailing how YouTube influencers and far-right extremists gamed YouTube’s algorithm to push radicalising messages and to turn a profit.

Tommy Robinson is now the ‘poster boy’ for YouTube to show that it is actively tackling online far-right propaganda. But what about the hundreds of other online communities drip-fed far-right propaganda and targeted for recruitment by violent extremist fringes? These dangerous and often anonymous online streams give white nationalist alt-right groups access to the political consciousness of young and vulnerable individuals, and in effect play a part in nurturing radicalisation.

In the wake of the Christchurch attack, in which the attacker broadcast his horrific crimes on Facebook, Australia’s parliament has voted in favour of legislation that could imprison social media chiefs if their platforms stream acts of violence such as the New Zealand mosque shooting. Under the new legislation, it is now a crime for social media platforms not to remove “abhorrent violent material” swiftly.

Media companies Facebook, The Mirror, Twitter, The Sun, YouTube and The Daily Mail all share moral responsibility for hosting, publicising and amplifying the atrocities of the attack and the views of violent racist extremists. Yet none of these media companies is currently independently regulated in the UK. Media publishers have IPSO, an industry association that handles complaints but is starved of independence and regulatory powers. Without independent regulation of platforms and news websites, what happened after the New Zealand attack could happen again.

Under the new Australian law, footage of acts of terror, murder, attempted murder, torture, rape and kidnapping is now defined as abhorrent violent material. The footage must have been recorded by the perpetrator or an accomplice for the law to apply. However, critics claim the new law could have unintended consequences, leading to media censorship and reduced investment in Australia.

The Digital Industry Group — an association representing the digital industry in Australia including Facebook, Google and Twitter — said taking down abhorrent content was a “highly complex problem” that required consultation with a range of experts, which the Australian government had not done. Arthur Moses, president of the Australian Law Council, said the legislation could lead to media censorship and prevent whistleblowers from using social media to shine a light on atrocities because of media companies’ fear of prosecution.


Technology companies can and should do more to combat violent and extremist content on their sites. Just this week, Facebook’s chief Mark Zuckerberg said he believes there should be more government regulation of the internet. The social media CEO argued that the responsibility for monitoring harmful content is too great for firms alone, and identified harmful content, election integrity, privacy and data portability as areas for new laws. He went on to say that Facebook was “creating an independent body so people can appeal our decisions” about what is posted and what is taken down.

Despite these comments, Facebook has repeatedly lobbied against regulation in the past: it lobbied to water down a data privacy bill in the state of California, allegedly helped eliminate one proposed in the US under Barack Obama, and criticised a German law on harmful content similar to the measures Mr Zuckerberg has just described.

It seems technology companies – and Facebook in particular – want more regulation, but don’t like what is then offered to them. As long as Mark Zuckerberg and other tech giants remain committed to profit-driven business models that rely on our clicks and data to generate revenue, we, the consumers and private citizens, will continue to lose out, and so will democracy as an institution. With all the resources and knowledge technology companies have, they need to be transparent about what best regulatory practice would look like, or else accept sub-par laws. Shouting that they want regulation while rejecting whatever is put forward only amplifies and delays the problem.

This is why Hacked Off is actively engaging in the debate around regulation of digital platforms. It should be government, civil society and individuals who set the rules of our society, as we are the ones affected. Facebook and others can then either meet our standards, or go.
