As New Zealand fights online hate, the internet’s darkest corners resist

A video game that uses footage of the Christchurch massacre to put Muslims in a gunman’s crosshairs. Memes featuring the face and weapons of the man charged in that New Zealand attack. Messages on online forums that glorify him as St Tarrant — patron saint of the far right.

Jamie Tarabay, The New York Times
Published: 6 July 2019, 02:10 PM
Updated: 6 July 2019, 02:10 PM

New Zealand has worked hard to keep the name of Brenton Tarrant, the man charged with killing 51 Muslims in Christchurch, out of the news and to restrict the spread online of the hateful ideology he is accused of promoting. But the footage, games, memes and messages that still populate the dark corners of the global internet underline the immensity of the task, especially for a small country like New Zealand.

“The internet is a very complex and rough environment, and governments, especially small governments, don’t have as many cards as they would like to play,” said Ben Buchanan, a cybersecurity expert who teaches at Georgetown University.

Shortly after the March 15 attack, Prime Minister Jacinda Ardern declared that she would never utter Tarrant’s name and that she would do whatever she could to deny him a platform for his views.

A few days later, the New Zealand government banned the sharing or viewing of a 74-page manifesto that Tarrant is believed to have written. The country also declared it a crime to spread the video purporting to show the massacre; more than a dozen people have been officially warned or charged.

Ardern followed those actions with an effort, which she branded the Christchurch Call, to enlist tech companies like Facebook, Google, Twitter and YouTube to do more to curb violent and extremist content. In an op-ed, Ardern noted that her government could change gun laws and tackle racism and intelligence failures but that “we can’t fix the proliferation of violent content online by ourselves.”

Seventeen countries and the European Commission, as well as eight large tech companies, have signed on to her call. And late last week, leaders at the Group of 20 summit in Osaka, Japan, issued their own appeal to tech companies, declaring in a statement that “the rule of law applies online as it does offline.”

But, if anything, the appetite for material connected to the Christchurch attack continues to grow, said Ben Decker, the chief executive of Memetica, a digital investigations consultancy.

Facebook said that an apparent livestream of the Christchurch attack was viewed by fewer than 200 users, but that videos of the attack posted later were watched by 4,000 others, and that the platform blocked more than 1 million uploads in the days after the assault. It is unclear how many uploads have been attempted in the months since.

The video game adapting the purported Christchurch footage is still being shared online. Modeled on other so-called first-person-shooter games, it tracks a gunman who enters a mosque, drawing a gun and killing anyone in his path.

In the days leading up to a court appearance by Tarrant last month, during which he pleaded not guilty to charges that included murder and terrorism, memes featuring him spiked across the message boards 4Chan and 8Chan, Decker said. Scores of boards on 8Chan are devoted to Tarrant, including forums lionising him as St Tarrant.

After the Christchurch attack, local internet service providers suspended access to websites that hosted videos of the shooting and apologised for the censorship, even as they acknowledged that they could not completely prevent users from viewing the material.

“We appreciate this is a global issue; however, the discussion must start somewhere,” the companies said in a statement addressed to the heads of Facebook, Google and Twitter. “We must find the right balance between internet freedom and the need to protect New Zealanders, especially the young and vulnerable, from harmful content.”

The press in New Zealand has also imposed restrictions on itself. As news outlets have prepared to cover Tarrant’s trial, which is scheduled for May, they have voluntarily agreed to limit coverage of anything that could amplify white supremacist ideology, including the manifesto.

That manifesto has already had an impact beyond New Zealand’s shores. In April, a gunman entered a synagogue 25 miles from San Diego, killing one person and injuring three others. The suspect claimed to have been inspired by the Christchurch shootings, had reportedly posted his own manifesto online and may have tried to livestream the shooting.

Sen. Josh Hawley, R-Mo., has introduced a bill to amend legislation that protects tech companies from liability for content posted by their users.

8Chan, which cooperated with law enforcement after the Christchurch attack, has criticised the bill, saying that any erosion of the legislation is “an affront to liberty and freedom of speech online.”

Ardern has said she hopes that less mainstream platforms like 4Chan and 8Chan will become more open to stamping out extremist content if the major platforms can reach a consensus on the issue.

Given the free speech considerations, and the gargantuan task that tech companies face in monitoring online speech, there has been a focus on the role that artificial intelligence could play in blocking hateful content, including at a House hearing late last month.

[Photo: The courthouse in Christchurch, New Zealand, where the suspect in the mosque attacks entered a not-guilty plea via video link, June 14, 2019. (Matthew Abbott/The New York Times)]

But Buchanan, the Georgetown expert, who attended the hearing, told the committee that automated systems alone would not be able to solve the problem.

Alex Stamos, a former chief security officer at Facebook and now the director of the Stanford Internet Observatory, said at the hearing that there were several steps that tech companies could take to address extreme content online, including being more transparent.

“While there is no single answer that will keep all parties happy, the platforms must do a much better job of elucidating their thinking processes and developing public criteria that bind them to the precedents they create with every decision,” Stamos said.

“There remain many kinds of speech that are objectionable to some in society but not to the point where huge, democratically unaccountable corporations should completely prohibit such speech,” he added. “The decisions made in these gray areas create precedents that aim to serve public safety and democratic freedoms but can also imperil both.”

© 2019 New York Times News Service