In an apparent effort to ensure their heinous actions would "go viral," a shooter who murdered at least 49 people in attacks on two mosques in Christchurch, New Zealand, on Friday live-streamed footage of the assault online, leaving Facebook, YouTube and other social media companies scrambling to block and delete the footage even as other copies continued to spread like a virus.

The original Facebook Live broadcast was eventually taken down, but not before its 17-minute runtime had been viewed, replayed and downloaded by users. Copies of that footage quickly proliferated to other platforms, like YouTube, Twitter, Instagram and Reddit, and back to Facebook itself. Even as the platforms worked to take some copies down, other versions were re-uploaded elsewhere.

The episode underscored social media companies' Sisyphean struggle to police violent content posted on their platforms. "It becomes essentially like a game of whack-a-mole," says Tony Lemieux, professor of global studies and communication at Georgia State University.

Facebook, YouTube and other social media companies have two main ways of checking content uploaded to their platforms. First, there's content-recognition technology, which uses artificial intelligence to compare newly uploaded footage to known illicit material. "Once you know something is prohibited content, that's where the technology kicks in," says Lemieux. Social media companies augment their AI technology with thousands of human moderators who manually check videos and other content.

Still, social media companies often fail to recognize violent content before it spreads virally, letting users take advantage of the unprecedented and instantaneous reach offered by the very same platforms trying to police them.

Neither YouTube, Facebook nor Twitter answered questions from TIME about how many copies of the Christchurch video they had taken down.

New Zealand police said they were aware the video was circulating on social media, and urged people not to share it. "There is extremely distressing footage relating to the incident in Christchurch circulating online," police said on Twitter. "We would strongly urge that the link not be shared." Mass shooters often crave notoriety, and each horrific event brings calls to deny assailants the infamy they so desire.

Facebook said that the original video of the attack was only taken down after the company was alerted to its existence by New Zealand police, indicating that an algorithm had not noticed the video. (Four arrests were made after the Christchurch shooting, and it remains unclear whether the shooter who live-streamed the attack acted alone.)

"We quickly removed both the shooter's Facebook and Instagram accounts and the video," a Facebook spokesperson said. "We're also removing any praise or support for the crime and the shooter or shooters as soon as we're aware."

Experts say the Christchurch video highlights a fatal flaw in social media companies' approach to content moderation. "It's very hard to prevent a newly-recorded violent video from being uploaded for the very first time," Peng Dong, the co-founder of content-recognition company ACRCloud, tells TIME. The way most content-recognition technology works, he explains, is based on a "fingerprinting" model: social media companies looking to prevent a video from being uploaded at all must first add a copy of that video to a database, allowing new uploads to be compared against that footage. Even when platforms have a reference point - the original offending video - users can manipulate their version of the footage to circumvent upload filters, for example by altering the image or audio quality.
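The fingerprinting model described above can be illustrated with a minimal sketch. This is not any platform's actual algorithm - production systems use far more robust perceptual features - but it shows the idea: each frame is reduced to a compact hash, known-bad hashes live in a database, and an upload is flagged if its frames are close (not necessarily identical) to known ones, which is also why aggressive re-encoding can slip past the filter. The 8x8 frame size, threshold, and helper names here are illustrative assumptions.

```python
# Toy sketch of video fingerprint matching via a simple "average hash".
# Assumed, illustrative design: frames are 8x8 grids of 0-255 grayscale
# values; clips match when enough frame hashes fall within a small
# Hamming-distance threshold of hashes already in the database.

def average_hash(frame):
    """Reduce an 8x8 grayscale frame to a 64-bit fingerprint."""
    pixels = [p for row in frame for p in row]
    avg = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        # One bit per pixel: is it brighter than the frame average?
        bits = (bits << 1) | (1 if p >= avg else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

def matches(upload_hashes, db_hashes, threshold=10):
    """Flag an upload if most of its frames are near a known-bad frame."""
    hits = sum(
        1 for h in upload_hashes
        if any(hamming(h, d) <= threshold for d in db_hashes)
    )
    return hits / len(upload_hashes) > 0.5

# A known-bad "frame" and a slightly brightened re-encode of it:
original = [[(r * 8 + c) * 4 for c in range(8)] for r in range(8)]
degraded = [[min(255, p + 5) for p in row] for row in original]

db = [average_hash(original)]           # reference copy in the database
print(matches([average_hash(degraded)], db))  # small edits still match
```

The design choice that matters is the distance threshold: a hash that only matched bit-for-bit would be defeated by the slightest re-encode, while too loose a threshold produces false positives, which is one reason platforms pair this automation with human moderators.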