From his apartment in Boston, Callum Hood has the power to subvert any election with just a few keystrokes.
Hood, a British researcher, fired up some of the latest artificial intelligence tools made by OpenAI and Midjourney, another AI startup. Within seconds of typing a few prompts - "take a realistic photo of ballots in a trash can"; "photograph of long lines of voters waiting outside a polling station in the rain"; "photo of Joe Biden sick in hospital" - the AI models spat out scores of realistic images.
Political operatives and foreign governments using these tools - which can generate realistic images that are hard to distinguish from real photographs - hope to sow doubt and influence the more than 50 elections planned around the world in 2024. Hood used November's vote in the United States as just one example; such tactics, he said, can be applied to any election, from the European Union to India and Mexico.
"You can make as many of these pictures as you want, very, very quickly," Hood said in a Zoom interview with the Brussels-based portal Politico.

Hood heads research at the Center for Countering Digital Hate, a nonprofit organization that has created dozens of examples of harmful AI-generated disinformation about elections to warn of the technology's potential to undermine democracy.
"What is the benefit of technology that can produce such realistic fake images, and how does that benefit outweigh the potential harm?" Hood asked after creating more images of alleged election theft with just a few keystrokes. "It's really unclear."
No one denies the potential for harm.
Harmful so-called "deepfakes" - a term for misleading, AI-generated content - have already surfaced: audio recordings of British Labour Party leader Keir Starmer and of Slovak opposition leader Michal Šimečka spread across social media like wildfire before fact-checkers debunked them. Fake robocalls purporting to come from US President Joe Biden, also created with AI tools, flooded phone lines ahead of the recent Democratic primary in New Hampshire, urging people not to vote. They, too, were quickly exposed as fakes.
The ready availability of AI tools like ChatGPT and its competitors risks creating a wave of politically motivated lies flooding social media in ways that seemed unimaginable just a few years ago. In an era of entrenched partisan politics and growing skepticism about what is posted online, AI has the potential to make this year's election cycle significantly more difficult to predict.
What's still unclear, however, based on interviews with more than 30 politicians, decision makers, technology executives and outside experts from many countries holding elections in 2024, is what the real impact of AI will be when more than 2 billion people go to the polls from New Delhi to Berlin and Washington.
Despite the apparent improvement of the latest AI tools, almost all deepfakes are still easily - and quickly - debunked, including those produced by Russia and China, which are already using the technology in global influence campaigns.
Many people's political views are difficult to change, and experts believe that AI-generated content will not sway most voters to change their allegiance to a party - no matter how convincingly fake photos, videos or audio recordings appear.

In many elections held this year, including those in countries like Pakistan and Indonesia, where AI tools have been used to some extent, there is little, if any, evidence that the technology has unfairly swayed the outcome of a vote in favor of a politician.
With so many social media posts published around the world every day, breaking through with AI-generated lies - even realistic ones like those Hood created - is a difficult challenge.
Lawmakers, tech CEOs and election watchdog groups are urging caution when it comes to technology that is evolving faster than it can be controlled. However, for many, the impact of AI-generated disinformation remains more theory than fact.
"Trends can change, and dramatically so," said Nick Clegg, president of global affairs at Meta, during a discussion on how AI-generated disinformation has so far affected the 2024 election process.
However, Clegg's words are no comfort to Cara Hunter.
Just a few weeks before Northern Ireland's Assembly elections in 2022, the then 24-year-old received a WhatsApp message from an unknown sender, who immediately asked whether she was the woman in a 40-second deepfake video depicting Hunter in an explicit sexual act.
Within days, the AI-generated clip had gone viral, and the Northern Irish politician was bombarded with sexual and violent messages on social media from men around the world.
"It was a campaign to undermine me politically," said Hunter, who won a seat in the Northern Ireland Assembly by a narrow majority of just a few votes, despite the deepfake pornography. "It tarnished my reputation and I can't control it. I will regret it for the rest of my life."
Hunter is not the only one who has been targeted by an AI-generated political campaign.
At least three Western leaders have seen realistic copies of themselves in presentations by AI companies as part of a discussion about how realistic such digital forgeries have become, five officials briefed on the discussions, who wished to remain anonymous, told Politico.
In Moldova, President Maia Sandu has repeatedly been targeted with deepfake material mocking both her and her pro-Western government. Moldovan security authorities blamed the attacks on the Kremlin, calling them a new tactic in years of meddling in the country's internal affairs.

Russia-linked groups have also created another fake video - this time, an AI-generated Tom Cruise criticizing the upcoming Paris Olympics in a fake Netflix documentary. Moscow has targeted the upcoming global sporting event with sophisticated influence campaigns after the International Olympic Committee banned Russian athletes from participating. They will now be admitted as neutral athletes.
National politicians are also using AI advances for political gain domestically. In Poland, the political party of current Prime Minister Donald Tusk released a deepfake audio recording of his opponent on social media during the recent election campaign. Meanwhile, former Republican presidential candidate and current Florida governor Ron DeSantis used a similar AI-generated image of former President Donald Trump to attack him. That attack was unsuccessful.
In response, more than 20 leading technology companies around the world - including TikTok, Meta and OpenAI - pledged during a special ceremony at the recent Munich Security Conference to fight against nefarious uses of AI technology during this year's global election cycle. The European Commission has launched a preliminary investigation into the most advanced AI tools as part of new social media rules in the 27-nation bloc. The US Congress has also held hearings on the potential harm of AI, including those related to elections.
The companies' voluntary commitments include efforts to stop bad actors, including foreign governments and domestic political campaigns, from creating harmful deepfake content. The companies will also share techniques used across the industry - such as automatically labeling AI-generated images and footage - so that the public knows immediately when they encounter such manipulated content. The corporate response comes at a time when almost all governments have little or no technical expertise to deal with the ever-changing threat that AI technology poses to elections.
"People are constantly coming up with new ways to try to cheat the system," said Natasha Crampton, who leads Microsoft's efforts to make AI systems more accountable. The tech giant's commercial ties with US-based OpenAI and French competitor Mistral have seen it become a key player in the fight against AI-generated disinformation around the world.

If anyone cares enough to link this avalanche of AI-powered lies to damaging outcomes at the polls, it's Josh Lawson.
As an executive at Meta, the parent company of Facebook, Instagram and WhatsApp, Lawson once led the firm's response to global election politics. The tech giant's platforms continue to play a key role in how legitimate information and disinformation reach voters. Now at the Aspen Institute, Lawson oversees the AI Elections Initiative, a project that aims to combat the negative impact of new technology on elections in 2024 and beyond.
Yet despite dozens of meetings with election officials, civil society representatives and tech companies — including a recent event in New York attended by former presidential candidate Hillary Clinton and Nobel Peace Prize laureate Maria Ressa — Lawson has yet to find concrete evidence that AI generated misinformation directly changes voter habits, especially during general elections.
He acknowledges that such evidence may yet emerge, but so far no one has shown that AI fakery has altered the outcome of an election.
"It's really frustrating," he said.
That frustration, and resignation, surfaced repeatedly in the interviews Politico conducted with national security officials, technology company executives and oversight bodies.
Most of them, who spoke about their confidential work on condition of anonymity, acknowledged that AI offers a faster and cheaper way to create disinformation, during elections and in general. However, that technology - until the beginning of 2024 - was more of an additional instrument, than a new threat in itself, in helping the spread of false information related to the elections.
"We're really looking, we really are," said one senior official at a leading AI company, who spoke on condition of anonymity. "But we just haven't seen the mass impact of covert AI content."
This is partly because such AI-generated content is often not compelling.
A recent Russian attempt to spread a deepfake video on social media purportedly highlighting disagreements between Ukrainian President Volodymyr Zelensky and one of his top generals contained basic language errors, and the audio was poorly synchronized with the video, according to an analysis Microsoft shared with Politico. Recent dubious pictures of Trump allegedly surrounded by Black supporters were debunked on social media within hours - before they could go viral.
"That kind of disinformation is not successful," said Felix Simon, a researcher at Oxford University who tracks the failure of harmful AI-generated content to penetrate the public. People are wary of what they see online, and most AI-generated content gets little or no attention in the sea of daily social media posts, he added.
For Amber Sinha, an artificial intelligence expert at the India-based Mozilla Foundation, the nonprofit affiliated with the tech company behind the Firefox browser, the public's understandable fixation on so-called generative AI has overlooked the most likely way AI will affect this year's global election cycle.
In India, a nation of 1.4 billion people where general elections begin on April 19, political parties and outside consultants are already using more conventional AI techniques, such as machine learning tools, to harvest voters' personal data and then flood them with political advertising.
Such tactics, according to Sinha, allowed these operatives to analyze vast amounts of often sensitive personal data such as people's socioeconomic status to more precisely determine which messages would reach potential supporters the most. Currently, those efforts are more widespread - and more effective - than the small but growing number of AI-generated images created to fool people.
"It's not as slick as generative AI," Sinha said. "But the use of these more mundane AI tools still plays an important role."
Prepared by: N. Bogetić
