AI Disinformation Surge During Escalating Israel-Iran Tensions

Since Israel's airstrikes on Iran began, a wave of AI-generated disinformation has flooded social media, misleading the public and obscuring the truth about military actions on both sides.
The ongoing conflict between Israel and Iran has catalyzed a sharp rise in disinformation circulating across social media platforms, much of it produced with artificial intelligence (AI) tools. Following Israel's strikes on Iran last week, an analysis by BBC Verify uncovered numerous deceptive posts and videos designed to distort perceptions of military capabilities and the overall situation.
Key among the findings were several viral videos, some claiming to showcase Iran's military capabilities and others misrepresenting the aftermath of Israeli strikes. Collectively, these deceptive clips garnered over 100 million views, indicating widespread reach. Pro-Israeli accounts have engaged in similar tactics, sharing outdated footage to falsely suggest discontent among Iranians towards their government amid the unrest.
The hostilities escalated when Israel launched airstrikes on 13 June, prompting Iranian missile and drone retaliation. An online verification organization described the flood of misleading content as "astonishing", attributing its proliferation to individuals seeking traction and profit through sensationalized material.
Some accounts, termed "super-spreaders", have seen remarkable increases in followers. For instance, the pro-Iranian account Daily Iran Military jumped from approximately 700,000 followers to 1.4 million in just one week, underscoring the speed at which disinformation can propagate. Despite their dubious origins, many accounts maintain convincing pseudonyms that mislead users into believing they are credible.
Experts say this marks the first extensive deployment of generative AI in a conflict setting. AI-generated images have exaggerated Iran's military successes, with one image purporting to show missiles en route to Tel Aviv alone racking up 27 million views. Numerous other misleading videos have circulated, featuring unrelated clips or simulated footage intended to convince viewers of a significantly altered reality.
Observers believe Russian influence operations may be leveraging these narratives to undermine confidence in Western weaponry, particularly the high-profile US F-35 fighter jet. As disinformation continues to cloud social media platforms, high-profile accounts have been observed mixing genuine engagement with misinformation.
The dissemination of this misleading content serves various motives, including attempts by some to monetize the conflict on platforms that reward engagement. Concurrently, pro-Israel narratives have included erroneous claims of growing public dissent in Iran, with one widely shared AI-generated video falsely alleging support for Israel in Tehran.
Several posts featuring AI-generated imagery of B-2 bombers over Tehran emerged as tensions escalated further, reflecting the rapid proliferation of disinformation. Official channels in both Iran and Israel have themselves shared fake visuals of military activity, further distorting public dialogue.
When X users turned to the platform's AI chatbot Grok for clarification and verification, it misclassified many instances of AI-generated misinformation as authentic. Despite TikTok's and Meta's attempts to regulate misleading content, the persistence of disinformation underscores the challenges platforms face in counteracting false narratives.
Researcher Matthew Facciani emphasizes that sensational content spreads quickly because conflicts push people toward binary, emotionally charged judgments, highlighting the conditions that fuel misinformation in politically charged environments.