The online spread of video from a shooting at two mosques in Christchurch on Friday shows the limits of social media moderation.
At least 49 people were murdered Friday (15/03/2019) at two mosques in Christchurch, New Zealand, in an attack that followed a grim playbook for terrorism in the social media era. The shooter apparently seeded warnings on Twitter and 8chan before livestreaming the rampage on Facebook for 17 gut-wrenching minutes. Almost immediately, people copied and reposted versions of the video across the internet, including on Reddit, Twitter, and YouTube. News organizations also aired some of the footage as they reported on the attack.
By the time Silicon Valley executives woke up Friday morning, tech giants’ algorithms and armies of international content moderators were already scrambling to contain the damage, with limited success. Many hours after the shooting began, versions of the video were still readily searchable on YouTube using basic keywords, like the shooter’s name.
This isn’t the first time we’ve seen this pattern play out: It’s been nearly four years since two news reporters were shot and killed on camera in Virginia, with the killer’s first-person video spreading on Facebook and Twitter. It’s also been almost three years since footage of a mass shooting in Dallas went viral.
The Christchurch massacre has people wondering why, after all this time, tech companies still haven’t figured out a way to stop these videos from spreading. The answer may be a disappointingly simple one: It’s a lot harder than it sounds.
For years now, both Facebook and Google have been developing and implementing automated tools that can detect and remove photos, videos, and text that violate their policies. Facebook uses PhotoDNA, a tool developed by Microsoft, to spot known child pornography images and video. Google has developed its own open source version of that tool. These companies also have invested in technology to spot extremist posts, banding together under a group called the Global Internet Forum to Counter Terrorism to share their repositories of known terrorist content. These programs generate digital signatures known as hashes for images and videos known to be problematic to prevent them from being uploaded again. What's more, Facebook and others have machine learning technology that has been trained to spot new troubling content, such as a beheading or a video with an ISIS flag. All of that is in addition to AI tools that detect more prosaic issues, like copyright infringement.
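To make the hashing idea concrete, here is a minimal, illustrative sketch of how a platform might match uploads against a shared blocklist of known-bad content. It uses a toy "average hash" over still frames rather than PhotoDNA's proprietary algorithm, and the file names and distance threshold are invented for the example.

```python
# Illustrative only: a toy average hash, not PhotoDNA or any platform's real system.
from PIL import Image


def average_hash(path, hash_size=8):
    """Downscale to grayscale, then set one bit per pixel brighter than the mean."""
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for i, p in enumerate(pixels):
        if p > mean:
            bits |= 1 << i
    return bits


def hamming_distance(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")


# Hypothetical blocklist, standing in for hashes shared through a
# GIFCT-style industry database of known terrorist content.
KNOWN_BAD_HASHES = {average_hash("known_extremist_frame.png")}


def should_block(upload_path, threshold=5):
    """Flag an upload whose hash is close to any known-bad hash."""
    h = average_hash(upload_path)
    return any(hamming_distance(h, bad) <= threshold for bad in KNOWN_BAD_HASHES)
```

Because the comparison allows a small Hamming distance rather than requiring an exact match, lightly altered re-uploads (recompressed, resized, slightly cropped) can still be caught, which is the point of perceptual hashing over ordinary cryptographic checksums.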
Automated moderation systems are imperfect, but they can be effective. At YouTube, for example, the vast majority of the videos that get removed are flagged by automated systems, and 73 percent of those automatically flagged videos are removed before a single person sees them.
But things get substantially trickier when it comes to live videos and videos that are broadcast in the news. The footage of the Christchurch shooting checks both of those boxes.
“They haven’t gotten to the point of having effective AI to suppress this kind of content on a proactive basis, even though it’s the most cash-rich [...] industry in the world,” says Dipayan Ghosh, a fellow at Harvard’s Kennedy School and a former member of Facebook’s privacy and policy team. That’s one reason why both Facebook and YouTube have teams of human moderators reviewing content around the world.
Motherboard has an illuminating piece on how Facebook’s content moderators review Live videos that have been flagged by users. According to internal documents obtained by Motherboard, once a video has been flagged, moderators can ignore it, delete it, check back on it in five minutes, or escalate it to specialized review teams. These documents say that moderators are also told to look for warning signs in Live videos, like “crying, pleading, begging” and the “display or sound of guns or other weapons (knives, swords) in any context.”
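The decision options described in those documents can be pictured with a short sketch. This is not Facebook's actual tooling; the data structures, signal names, and routing logic below are invented solely to illustrate the ignore / delete / check back / escalate choices Motherboard describes.

```python
# Toy model of the moderator choices reported by Motherboard for flagged
# Live videos; everything here is illustrative, not Facebook's real system.
from dataclasses import dataclass, field
from enum import Enum, auto


class Action(Enum):
    IGNORE = auto()                        # take no action on the broadcast
    DELETE = auto()                        # remove the broadcast
    CHECK_BACK_IN_FIVE_MINUTES = auto()    # re-review shortly
    ESCALATE = auto()                      # hand off to a specialized review team


# Warning signs the internal documents reportedly list.
WARNING_SIGNS = {"crying", "pleading", "begging", "weapon shown", "weapon heard"}


@dataclass
class FlaggedLiveVideo:
    video_id: str
    observed_signals: set = field(default_factory=set)
    previously_checked: bool = False


def review(video: FlaggedLiveVideo, violates_policy: bool) -> Action:
    """Hypothetical routing; the order and conditions are invented for illustration."""
    if violates_policy:
        return Action.DELETE
    if video.observed_signals & WARNING_SIGNS:
        return Action.ESCALATE
    if video.previously_checked:
        return Action.IGNORE
    return Action.CHECK_BACK_IN_FIVE_MINUTES
```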