Social media giants, including YouTube, Facebook and Twitter, are facing renewed criticism after struggling to block livestreamed footage of a gunman's attack on New Zealand mosques.
The episode saw users uploading and sharing clips from the disturbing 17-minute livestream faster than social media companies could remove them.
The companies were already under scrutiny over the rise of online extremist content, but Friday's disturbing incident also underscored the difficulties major tech firms face in eradicating violent content as crises unfold in real time.
In a live point-of-view video uploaded to Facebook on Friday, the gunman shot dozens of mosque-goers, at one point returning to his car to retrieve more weapons.
The New Zealand police said the footage, which captured only part of the attack on two separate mosques that left 49 people dead and more than 40 wounded, was "extremely disturbing," and asked people to refrain from sharing it.
Critics pounced on the tech companies, accusing them of failing to get ahead of the violent video's spread.
"Tech companies have a responsibility to do what is morally right," Sen. Cory Booker (D-N.J.), a 2020 presidential candidate, told reporters Friday. "I don't care about your profits."
"This is a case where you're giving a platform to hate," Booker continued. "That's unacceptable. It should have never happened, and it should have been taken down a lot faster. The mechanisms should be in place to allow these companies to do that."
"The rapid and wide-scale dissemination of this hateful content – live-streamed on Facebook, uploaded on YouTube and amplified on Reddit – shows how easily the largest platforms can still be misused," Sen. Mark Warner (D-Va.) said.
Facebook said it took the video down as soon as it was flagged by New Zealand police. But that response suggested its artificial intelligence (AI) tools and human moderators had failed to catch the live stream, which ran for 17 minutes.
By the time Facebook suspended the account behind the video, an hour and a half after it was posted, the footage had already spread across the internet, with thousands of uploads on Twitter, Instagram, YouTube, Reddit and other platforms.
Critics said the companies had failed to prepare for these problems.
"The reality is that Facebook and others have grown to their present outrageous scale without putting guard rails in place to deal with what was predictable harm," Hany Farid, a computer science professor at Dartmouth College, said in a statement to The Hill. "Now they have the unbearably difficult problem of going back and trying to retrofit a system that was not designed to have guard rails to deal with what is a spectacular array of troubling content."
More than 10 hours after the attack, the video could still be found through searches on YouTube, Twitter and Facebook, even as those companies said they were working to prevent the footage from spreading.
By Friday night, YouTube had removed thousands of videos related to the incident. Facebook and Twitter did not share numbers, but both said they were working around the clock to remove the content as it appeared.
"Shocking, violent and graphic content has no place on our platforms, and we are employing our technology and human resources to quickly review and remove any and all such violent content on YouTube," a YouTube spokesperson said. "As with any major tragedy, we will work cooperatively with the authorities."
Tech companies have used AI tools to identify and remove subsequent uploads of the video, but the process has been complicated by users posting slightly modified versions. When users cut or manipulate the footage, it becomes harder for AI tools to track.
"Since the attack has happened, teams from all over Facebook have been working around the clock to respond to reports and block content, proactively identify content that violates our standards and support first responders and law enforcement," said Mia Garlick, Facebook's Director of Policy for Australia and New Zealand, in a statement.
Friday's attack is not the first time that violence has been broadcast live.
In February 2017, two people, including a 2-year-old boy, were shot and killed during a livestream on Facebook. Later that same year, Facebook Live captured the fatal shooting of a 74-year-old man in Cleveland.
In May 2017, Facebook announced that it was hiring 3,000 more content moderators to address the issue of graphic video content, a move that Mary Anne Franks, a law professor at the University of Miami, said amounted to "kicking the can down the road."
She said that Facebook now has to answer questions about whether its Facebook Live product should have been rolled out so widely in the first place, given the opportunities it presented to violent extremists.
"We need to talk about whether Facebook Live should exist at all," Franks told The Hill. "If they can't get it to a place where it's safe, then they shouldn't have the product."
Although Facebook has policies against violent videos, tracking and removing content uploaded in real time is complex. The company relies on a mix of AI and human content moderators to flag and remove live footage, and both methods come with serious challenges.
AI technology at this point is not sophisticated enough to flag every live video depicting violence and death, Farid said, while the thousands of human moderators have to sort through billions of Facebook uploads a day, a task that frequently exposes them to disturbing content.
"Human moderators are being exploited … monetarily and psychologically," Franks said. "They're not an answer to solving this problem."
The issue extends beyond live videos and raises questions about whether the platforms do enough to eliminate extremist content as it emerges.
"We see neo-Nazis consciously using these platforms to spread messages of hate, to encourage others to violence, and they do it with language deliberately designed to evade content filters," extremism researcher and data scientist Emily Gorcenski told The Hill.
"We see the large social media platforms and tech companies being very slow to moderate content, and they are far too permissive of the kind of speech that leads to the kind of radicalization we're seeing."
Hours before the shooting, the suspect apparently posted a white nationalist manifesto on Twitter and announced that he intended to livestream the mass shooting on 8chan, a fringe message board he frequented.
The New Zealand police confirmed that the suspected gunman had written the nearly 80-page white nationalist, anti-immigrant screed.
Twitter deleted that account hours after the shooting took place, and it has been working to remove re-uploads of the video from its service.
Facebook, Twitter, YouTube and other leading social media platforms have been struggling to deal with extremist and white nationalist content for years, particularly as anti-immigrant sentiment has spiked in the U.S. and Europe. The companies have struggled to draw the line between freedom of expression and incendiary propaganda with the potential to radicalize users.
In the United States, platforms are not legally responsible for what users post, thanks to Section 230 of the Communications Decency Act. Tech advocates credit the law with empowering the internet, but some lawmakers have questioned whether it should be changed.
"There's no doubt that we're seeing a prevalence of hate and extremism online, and what we're also seeing is social media companies twisting themselves into pretzels trying to figure out how to deal with some of this content as it emerges," said Robert McKenzie, a director and senior fellow at New America.
Gorcenski, the data scientist, said the pressure on tech companies will only grow.
"It is a very easy decision to censor a video showing 50 people murdered," she said.
"What is a much harder and bolder decision is to actively and aggressively deplatform the people who share these messages and promote this level of violence and hatred."