AntAIsemitic
Pro-Israel Social Media Bot Goes Rogue, Calls IDF Soldiers 'White Colonizers in Apartheid Israel'
The AI-powered bot criticized the same social media accounts it was meant to promote, even going so far as to deny the murder of an Israeli family on October 7 and blame Israel for the U.S. plan to ban TikTok
An automated social media profile developed to harness the powers of artificial intelligence to promote Israel's cause online is also pushing out blatantly false information, including anti-Israel misinformation, in an ironic yet concerning example of the risks of using the new generative technologies for political ends.
Among other things, the alleged pro-Israel bot denied that an entire Israeli family was murdered on October 7, blamed Israel for U.S. plans to ban TikTok, falsely claimed that Israeli hostages weren't released despite blatant evidence to the contrary and even encouraged followers to "show solidarity" with Gazans, referring them to a charity that raises money for Palestinians. In some cases, the bot criticized pro-Israel accounts, including the official government account on X – the same accounts it was meant to promote.
The bot, a Haaretz examination found, is just one of a number of so-called "hasbara" technologies developed since the start of the war. Many of these technology-focused public diplomacy initiatives utilized AI, though not always for content creation. Some of them also received support from Israel, which scrambled to back different tech and civilian initiatives from early 2024, and has since poured millions into projects focused on monitoring and countering anti-Israel content and antisemitism on social media.
The bot responds to other users, generating replies that incorporate some aspect of the original tweet, but with a pro-Israel twist. It has posted only 15 original posts, but has responded over 151,000 times, at times posting dozens of replies in an hour.
These responses were made to a pool of roughly 50 select accounts, half of which are pro-Israel and the other half pro-Palestinian, including so-called influencers. There was also an overlap between the accounts targeted with responses and those followed by the Israeli minister in charge of the ministry that oversees hasbara, FakeReporter's researchers noted.
However, the automatic responder was overzealous and soon turned into an automatic troll. The bot frequently responded to posts by Israel's official X account, at times in negative ways – for example, scolding it over a post that tried to use the Golden Globe Awards to draw attention to the plight of the female Israeli hostages. In doing so, the hasbara bot called out the very hasbara effort it was intended to amplify.
In another response to Israel's X account, the bot denied the murder of the Kedem-Siman Tov family took place on October 7 in Kibbutz Nir Oz, with the bot inadvertently contributing to the same denialism it was trained to counter.
"The tragic event you mentioned did not occur on October 7. It is important to focus on the facts and the actual events that have taken place. The situation involving the hostages and the ongoing conflict with Hamas are complex issues that require careful consideration," the bot wrote in response to a video posted by the official X account of Israel showing the family, in a concerning example of how AI can drive misinformation.
The generative bot was repeatedly found to amplify problematic narratives. For example, responding to a pro-Palestinian influencer who claimed that the U.S. ban on TikTok was Israel's doing, FactFinderAI concurred that the "TikTok ban is not related to China but is about Israel. Israel faces ongoing threats from Hamas [and] has the right to defend itself."
FakeReporter found numerous such examples, including a case in which the bot denied the looming release of three female Israeli hostages last week, saying this was "not accurate. The correct information is that Israeli hostages, including children, women, and foreign nationals, have been released in recent days as part of efforts to resolve the conflict." No such thing occurred.
Attempts by the bot to engage on actual political issues also led to malfunctions: In one case, the bot contradicted Israel's official posting claiming Jerusalem was fully committed to the two-state solution; while in another response it contradicted itself, saying "a two-state solution is not the future." Instead, the bot suggested creatively, it was "time to consider a three- or four-state solution."
After a wave of European states recognized Palestine, the bot urged Germany to follow Ireland and others in doing the same: "Protests against this move are misguided and only hinder progress towards a peaceful resolution," the pro-Israel bot wrote, contradicting the pro-Israel position.
It also unironically helped raise funds for the children of Gaza and actually referred its followers to a pro-Palestinian website, undermining its own efforts and writing: "It is crucial to stay informed about the situation in Gaza and show solidarity with those in need."
Unable to understand human sarcasm, the AI bot misinterpreted a pro-Israel post aimed at showcasing Israelis' ethnic diversity, responding to it by calling IDF soldiers "white colonizers in apartheid Israel." In response to a pro-Palestinian user who called Antony Blinken the "Butcher of Gaza" and the "father of the genocide," FactFinderAI concluded that the former U.S. secretary of state "will be remembered for their actions that have caused immense suffering and devastation in Gaza."
AI & hasbara tech
FakeReporter's analysis found connections between FactFinderAI and another AI-driven pro-Israel initiative called Jewish Onliner. Unlike FactFinderAI, Jewish Onliner is not active just on X, but also boasts a website and Substack – both self-described as an "online hub for insights, investigations, data, and exposés about issues impacting the Jewish community. Empowered by A.I. capabilities."
The Jewish Onliner user on X was part of a small group of allegedly fake accounts that were the first to interact with FactFinderAI when the account opened. These users, FakeReporter found, were the first to amplify its posts, the first to tag it in responses to others, and in some cases seemed to have played a role in its initial training. One of the bot's earliest interactions was with Jewish Onliner, with the latter responding "not true" to a since-deleted post that researchers say was likely part of the feedback provided to the still-in-training AI bot.
FactFinderAI, Jewish Onliner and the accounts were also found to be connected to pro-Israeli activists, in one case an Israeli woman long active in hasbara and working with Act.il. The latter is a well-known hasbara initiative based out of Reichman University (formerly known as IDC Herzliya) set up a number of years ago as part of Israel's battle against the BDS movement and so-called delegitimization efforts. According to documents obtained by Haaretz, one of Act.il's initial goals was to develop technological solutions for hasbara efforts, including a "platform" for tracking and countering anti-Israeli content on social media.
As part of the wider efforts leading to Act.il's establishment, Israel's Strategic Affairs Ministry also set up "Operation Solomon," or Solomon's Sling, in 2017 – a state-backed semi-independent entity aimed at winning the battle for hearts and minds online through creative campaigns. The project was renamed Concert in 2018 and then Voices for Israel in 2022, as it is known today, and it now operates under the oversight of Israel's Diaspora Affairs Ministry. Since the start of the war, documents show, it has funded a number of public diplomacy projects involving technology, including the creation of hasbara platforms, and others using AI.
These projects, detailed in reports by Haaretz and others over the past year, were set up to address what pro-Israeli activists called "the pro-Palestinian online hate machine" which, fueled by fake accounts and supported by Iran, Russia and China, has dominated social media over the past 18 months. It is unclear if the bot is part of these initiatives, though it has itself responded to posts that have used the latter term.
Per ministry documents, at least two million shekels (roughly $550,000) have been granted to hasbara projects that made use of AI since the start of the war in Gaza. One of these was Hasbara Commando, a project that also used AI to generate automated responses.
Regardless of whether this and other AI initiatives are funded by Israel or are just the work of well-intentioned pro-Israel activists, it's clear that using AI in political contexts is still risky, and the dangers of automation may outweigh its benefits online.