I’ve been deeply engrossed (for better and worse) in monitoring the events unfolding in the Middle East. On a personal level, it is both sad and shocking. On a professional level, I have been exercising muscle built over many years as a prolific user of social media and as someone who has worked in communications. I know how rampant disinformation is online and have a good idea of how to spot it (a mix of drilling down to the source and looking for credible references that either verify or deny the media). That said, it’s a complex process and not an exact science.
One of my bigger observations from weeks of intensive media monitoring is that many, if not most, disinformation tactics are not all that high-tech. Much of the disinformation spread across multiple social networks, and noticeably on X, relies on arguably the least technically sophisticated way of manipulating media: recycled footage. From Rolling Stone:
Viral clips purporting to depict the current crisis have included missile strike footage of the Syrian War in 2020, footage of Egyptian troops paragliding over Cairo (there are confirmed reports that Hamas militants entered Israel with paragliders), and even footage of Bruno Mars fans running towards the stage at one of his concerts, which was falsely presented as video of Israelis fleeing a massacre by Hamas forces at a music festival outside of Gaza. Videos of casualties being pulled from the rubble, as well as troop mobilizations from past military conflicts, have also been circulated online.
Still, instinctively, I feel that AI, combined with supercharged and super-viral platforms like TikTok and a less regulated Twitter (X), will likely pour gasoline on the fire that is media disinformation today. It’s not just about lowering the friction of creating media such as images or videos, which text-to-media prompting has done and which makes generative AI the game changer that it is. Media manipulation in the AI era is already adding to the confusion because we may be looking at a legitimate piece of media and assume it is AI-generated when it is actually real.
This is precisely what happened in the first week of the Israel-Hamas war (and the media war alongside it). Conservative personality Ben Shapiro shared an image of a charred infant with his millions of followers, and soon after, disinformation spread that the image was AI-generated when, in fact, it was authentic:
“Ben Shapiro used an AI generated image to try and whip people into pro-war frenzy,” one user wrote. “Falsifying evidence and outright lies like this is exactly how we got into the Iraq war. Thankfully this time we have the technology to call out their lies.”
It turns out, however, that the image wasn’t fake and that users on X had relied on a shoddy and unproven tool for detecting AI-generated media.
To highlight the tool’s issues, others began showing how legitimate images, including the profile pictures of those who pushed the AI claim, were falsely flagged as computer-generated.
The above underscores where AI detection tools are falling short and what the consequences are when they fail. Things have not been much more reliable over on the LLM front, either:
And the confusion between truth and falsehood is not limited to laypeople. Google's Bard and Microsoft's Bing chatbots, for example, claimed last Wednesday that Israel and Hamas had reached a ceasefire. Although Google and Microsoft warn that their tools are experimental and not yet accurate, they are already incorporating AI content into their search results, so the effort to know what is really real becomes even more elusive.
A precedent has now been set that what you are looking at might be AI generated—because an AI tool said it was. The unfolding war in the Middle East underscores just how high the stakes are when it comes to validating or invalidating whether media is authentic and credible or has been generated or manipulated.
Humans In The Loop: Truth Seekers
So, we have to assume that the combination of the lowest tech and the highest, AI, is now part of the media landscape we consume and interact with. In AI circles especially, it’s common to use the phrase “human in the loop” to underscore that somewhere in the middle of all that data input, machine learning, and neural network development are human beings checking, double-checking, and validating what the machines are doing. In the case of media manipulation and AI's role in generating it, identifying it, or misidentifying it, humans have never been more essential, just as the truth has never been more obscure. Humans working with machines will be the truth seekers we need: not one or the other, but both.
Totally agree. The mix of badly built software like AI detection tools and malicious human behavior will increase the volume of misinformation and misinterpreted content.
Reading Sam Altman's interview about AGI and misinformation leaves me a bit worried. Hallucinating LLMs, corrupted AI-generated code, and human behavior will be the challenges we have to cope with.
"I'm particularly worried that these models could be used for large-scale disinformation," Altman said. "Now that they're getting better at writing computer code, [they] could be used for offensive cyberattacks." (https://abcnews.go.com/Technology/openai-ceo-sam-altman-ai-reshape-society-acknowledges/story?id=97897122)
Love this topic. I’ve wanted to write about the good systems of human verification out there in the world today, and how they could potentially be amplified in a post-AI media world.
Getty has built a strong business on it. One of Twitter’s best innovations, Community Notes, has somehow withstood Elon’s housecleaning. Wikipedia’s talk pages and public edit history. BellingCat’s citizen journalism. ProPublica’s investigative journalism for public good. Even Reddit’s broad use as a search engine.