In recent years, governments have repeatedly called upon Facebook and other social media platforms to do a better job of removing extremist content — specifically, anything promoting terrorism.
Many have turned to artificial intelligence to help them answer that call, but an investigation by The Atlantic has revealed that these systems may inadvertently be helping terrorists escape justice by deleting valuable evidence of their crimes.
The Atlantic piece cites a 2017 Facebook video in which a terrorist oversees the execution of 18 people. Facebook removed the video, but not before it could spread across the internet.
People all across the globe analyzed the video, which led to the discovery that the execution took place in Libya and that the man ordering it was Mahmoud Mustafa Busayf al-Werfalli, an Al-Saiqa commander.
The subsequent warrant for Werfalli’s arrest included several references to the Facebook video and others like it.
Since then, the content-filtering algorithms used by Facebook, YouTube, and similar platforms have grown far more advanced: they now automatically remove huge swaths of extremist content, sometimes before a single user ever sees it.
That is a major win in many respects, but the trade-off may be the loss of evidence that prosecutors could use to hold warlords, dictators, and terrorists accountable for their crimes.
READ MORE: Tech Companies Are Deleting Evidence of War Crimes [The Atlantic]
More on terrorism: How Facebook Flags Terrorist Content With Machine Learning