A horrific video of the New Zealand mosque massacre was blocked by Facebook during its livestream but circulated on other social media, highlighting the challenges faced by internet platforms in curbing the spread of violent content.
Facebook said it “quickly” removed a live video from the suspected gunman in twin mosque shootings in Christchurch that killed at least 49 people.
But the livestream, which lasted some 17 minutes according to some reports, was shared repeatedly on YouTube and Twitter, with some footage still being viewed early Friday.
The major internet platforms have pledged to crack down on the sharing of violent images and other inappropriate content through automated systems and human monitoring, but critics say these efforts are not working.
“There's no excuse for the content from that livestream to be still circulating on social media now,” said Lucinda Creighton, a former government minister in Ireland and an advisor to the Counter Extremism Project, which campaigns to remove violent internet content.
The online platforms “say they have their own technologies but we don't know what that is, there is no transparency, and it's obviously not working”, she added.
The organisation has developed technology to flag certain kinds of violent content and has offered it to internet firms, but has been rebuffed.
YouTube, Twitter scramble
New Zealand police, in a Twitter message early Friday, urged people not to share the “extremely distressing” footage from the Christchurch killings, which was reportedly seen on platforms such as 4chan and Reddit and some media websites.
“We would strongly urge that the link not be shared. We are working to have any footage removed,” the country's police department tweeted.
Both Twitter and YouTube said they were working to remove the videos and related content.
A Twitter spokesperson said issues such as the Christchurch video were handled rigorously by a dedicated team, adding that the platform cooperates with law enforcement.
“Our hearts are broken over today's terrible tragedy in New Zealand. Please know we are working vigilantly to remove any violent footage,” YouTube said in a tweet.
Facebook did not immediately respond to a query on the timing of its action but said it had removed the video following a police request and blocked the alleged shooter's Facebook and Instagram accounts.
“We're also removing any praise or support for the crime and the shooter or shooters as soon as we're aware,” Facebook said.
But Jennifer Grygiel, a Syracuse University communications professor who follows social media, said the companies were doing far too little to prevent the spread of violent content.
“Facebook is an unmoderated platform where you can stream anything you want,” she said, arguing that no meaningful measures have been taken since a 2017 Facebook livestream of a murder in Cleveland, Ohio.
Grygiel said it has become commonplace for perpetrators to use social media to stream acts of violence, and that these are often shared on YouTube and other platforms.
She said platforms like YouTube have the ability to find and remove violent videos with keyword searches, but more people are needed to monitor the platforms.
“They have the tools with social listening to go in with keyword terms and have moderators view and remove all videos linked to this type of incident,” she said.
Grygiel noted that artificial intelligence may help but added that “there's no algorithm that can be designed for this, because you can't predict the future.”