From fake accounts impersonating journalists to war-themed video games fueling false narratives, tech platforms are struggling to contain a tsunami of misinformation around Palestinian-Israeli hostilities after rolling back content moderation policies.
While major world events typically trigger a deluge of falsehoods, researchers say the scale and speed with which misinformation proliferated online following the weekend's deadly assault on Israel by the Palestinian militant group Hamas were unlike anything seen before.
The conflict, experts say, offers a grim case study of the diminished ability of prominent platforms such as Meta-owned Facebook and X, formerly known as Twitter, to combat false information in a climate of layoffs and cost-cutting that have gutted trust and safety teams.
Aggravating the problem on the Elon Musk-owned X, in particular, is a slew of contentious measures, including the restoration of accounts pushing bogus conspiracies and an ad revenue sharing program with content creators that researchers say incentivizes engagement over accuracy.
Experts fear these moves have increased the risk that misinformation will provoke real-world harm, amplifying hate and violence, especially in a fast-evolving crisis such as the one unfolding in Israel and Gaza.
“Social media platforms are struggling to keep up with the constant churn of misinformation and incitements to violence,” Andy Carvin, from the Atlantic Council’s Digital Forensic Research Lab (DFRLab), told AFP.
“It’s a trend that’s been building for some time now, and it’s only gotten worse with layoffs impacting trust and safety teams, hampering their ability to keep up with the chaos.”
And in the case of X, changes to the platform have utterly shattered what was previously one of its greatest strengths — monitoring breaking news and helping users separate fact from fiction.
- "Flood of grifters" -
Social media users are being bombarded with fake combat photos, old videos from Syria repurposed to look as though they were filmed in Gaza, and conflict-themed video game footage passed off as scenes from the Hamas attack, misinformation researchers say.
An image circulating online purported to show Israeli soldiers captured by Hamas, but AFP factcheckers found the picture was taken in 2022 during a military exercise in Gaza.
AFP factcheckers also found that several posts on X, Facebook, and TikTok promoted a fake White House document allocating $8 billion in military assistance to Israel.
“The sheer amount of doctored, fake, old videos and images of attacks circulating (online) is making it harder to understand what is going on” in Israel and Gaza, said Alessandro Accorsi, a senior analyst at the Crisis Group think tank.
Accorsi voiced “huge concern” that the misinformation, especially fake images of hostages including children, could stoke violence.
“In crises like terrorist atrocities, wars, and natural disasters, people tend to descend on social media platforms for quickly accessible information,” Imran Ahmed, chief executive of the Center for Countering Digital Hate, told AFP.
“(But) the flood of grifters spreading lies and hate to garner engagement and followers, combined with algorithms that promote this extreme and disturbing content, is why social media is, in fact, such a bad place to access reliable information.”
- "Fundamentally broken" -
Making matters worse, tech platforms appear to be abandoning efforts to elevate quality information.
Social media traffic to top news websites from platforms such as Facebook and X has fallen off a cliff over the past year, according to data from the research firm Similarweb cited by US media.
Last week, X stripped headlines from news articles shared by users, with links now appearing only as pictures, a move that experts say could further reduce traffic to news sites.
Musk himself drew harsh criticism when he encouraged his nearly 160 million followers on X to follow two "good" accounts for updates on the war. Both accounts are known purveyors of misinformation.
Musk later deleted his post but not before it racked up millions of views. X did not respond to AFP’s request for comment.
“Even though there are still countless talented journalists and researchers continuing to use X to help the public better understand what’s going on, the signal-to-noise ratio has become intolerable,” said DFRLab’s Carvin.
"Its utility as a reliable research and reporting tool is fundamentally broken and may never recover."