Fantasies of Humanity: The Christchurch Pledge and a Regulated Internet
It had to come. A massacre, broadcast in real time and then shared with viral automatism; the inevitable shock, and the counter from the authorities. The Christchurch shootings, inflicting fifty-one deaths upon worshippers at two mosques in quiet New Zealand on March 15 this year, have spurred Prime Minister Jacinda Ardern. Laws have been passed regulating guns in her country. Interest has increased in monitoring white nationalist groups. But Ardern was never keen on keeping the matter local.
In Paris, the NZ Prime Minister, meeting French President Emmanuel Macron, brought together other leaders and US tech giants to make a global pledge to “eliminate terrorist and violent extremist content online.” The cheer squad feel behind the “Christchurch Call to Action” was unmistakable. Canada’s Prime Minister Justin Trudeau highlighted the “deadly consequences” of “hateful content online” and his enthusiasm for the project. “Together, we can create a world where all people – no matter their faith, where they live, or where they are from – are safe and secure both on and offline.” Stirring stuff.
The pledge opens with a description: “On 15 March 2019, people looked on in horror as, for 17 minutes, a terrorist attack against two mosques in Christchurch, New Zealand, was live streamed.” The emphasis is significant here: not merely the atrocity itself but the means of its dissemination. Stress falls upon the fact that “the live stream was viewed some 4,000 times before being removed.”
The premise of the call is exaggerated and forced: that the events were caused by online content the way a child’s violence can be caused by gormless hours of glued-to-screen viewing. Ignore the tingling motivating factors of the shooter in question, a view that was nurtured in the atmosphere of acceptable intolerance. Ignore, as well, the contested, troubled literature on the “contagion” thesis behind mass shootings and killings. The shooter becomes less significant than the act of streaming his exploits, or sharing unsavoury matter with chatty dolts on certain chat forums. “The attack was livestreamed, went viral and remains available on the web despite the measures taken to remove it.”
The call is framed as a clunky exercise pillowed by the language of openness, only to then flatten it. It articulates “the conviction that a free, open and secure internet offers extraordinary benefits to society. Respect for freedom of expression is fundamental.” But there is an unqualified injunction: “no one has the right to create and share terrorist and violent extremist content online.”
It seems fluffy, the stuff of head-in-the-cloud enthusiasm, but lodged in such calls is a desperate, confused message with sinister implications. Commitments, outlined by Trudeau’s office, include “building more inclusive, resilient communities to counter violent radicalisation” and “enforcing laws that stop the production and dissemination of terrorist and extremist content online.” Media outlets would also be told “to apply rules when reporting on terrorist events” to avoid amplification of the content. This is ignorance as antidote, not reason as solution.
Online providers, in turn, are urged to, “Take transparent, specific measures seeking to prevent the upload of terrorist and violent extremist content and to prevent its dissemination on social media and similar content-sharing services”. The qualifying point is that such measures are “consistent with human rights and fundamental freedoms.” Transparent processes would include “publishing the consequences of sharing terrorist and violent extremist content”.
Livestreaming is the true bugbear here, with the need to implement “immediate, effective measures to mitigate the specific risk that terrorist and violent extremist content is disseminated”. Algorithms that might magnify the spread of material should also be reviewed.
A more “humane” internet is central to Ardern’s vision which, read another way, is one whose content and uses are more regulated and policed. This lies more in the realm of social engineering than it does in free self-correction: a call for presbyters of cyberspace to cull and remove what states, or the tech enforcers, deem inappropriate. Given that “extremism” and “terrorism” remain very much in the eye of the censoring beholder, the dangers of this should be apparent. Dissidents, contrarians and commentators are bound to fall foul of the project.
The regulatory attitude outlined in the pledge has been twinned with a business objective. Silicon Valley, to remain in clover, has been convinced to make overtures and moves dealing with the sharing of “terrorist” and “extremist” content. Having become a punching bag for anxious regulators, Facebook announced that Facebook Live would be barred to those who, in the words of company official Guy Rosen, “have broken certain rules… including our Dangerous Organizations and Individuals policy”. A “one strike” policy would be introduced. Technical advances to combat “adversarial media manipulation” and improved “image and video analysis technology” were needed.
With such high-minded calls for regulation and control from government voices, a seminal warning is necessary. John Perry Barlow, in A Declaration of the Independence of Cyberspace, began his call quite differently. Traditional states were the problem. “Governments of the Industrial World, you weary giants of flesh and steel, I come from Cyberspace, the new home of Mind. On behalf of the future, I ask you of the past to leave us alone. You are not welcome among us. You have no sovereignty where we gather.”
Such governments, with efforts to bring in the behemoths of Silicon Valley, have stated their clear purpose: to intrude upon Barlow’s world of the cyber mind and clip any sovereign pretext that might have ever existed. The internet, for them, remains a vigilante playground, difficult to police with its bursts of anarchic sentiment and primeval insensibilities. While Ardern’s sentiments are probably genuine enough, their authenticity hardly matters before the dangers such initiatives will create. Symptoms have been confused, if not totally muddled, with causes; technology has been marked as the great threat.
Dr. Binoy Kampmark was a Commonwealth Scholar at Selwyn College, Cambridge. He lectures at RMIT University, Melbourne. Email: email@example.com