He drives along, spreading fear and terror: a video that the terrorist group "Islamic State" (IS) published on the Internet shows Peter Griffin, the main character of "Family Guy", one of the best-known animated series in the world.
With a gun in hand, the animated character drives a truck carrying a bomb across a bridge. The footage is authentic and matches the original series, but the sound has been changed.
"Our weapons are heavy, our ranks are many, but God's warriors are more than ready," sings Griffin, in a tune meant to attract new supporters to the extremist organization.
Use of advanced technology
Animation is just one example of how extremist groups skillfully use advanced computer technology or artificial intelligence (AI) to create content tailored to their supporters.
AI encompasses a wide range of digital technologies, from the rapid processing of large amounts of data to so-called "generative AI", which creates new texts or images based on large volumes of data. That is how the song that IS strategists put into Peter Griffin's mouth was created, writes DW.
"The rapid democratization of generative technology in the past few years has had a profound effect on the way extremist organizations use Internet media to spread their influence," writes American researcher Daniel Siegel in an article for the Global Network on Extremism and Technology. The video featuring Peter Griffin's character is an example of that approach.
AI increasingly in use
Last year, various monitoring organizations reported that IS and other extremist groups were encouraging their supporters to use the new digital tools. According to the Washington Post, a group linked to al-Qaeda announced online workshops on artificial intelligence in February. The same group later published a guide to using chatbots.
After an IS branch killed over 135 people in the terrorist attack on the Crocus City Hall concert venue in Moscow in March, an IS supporter created a fake news report about the event and published it four days after the attack.
Another example
In early July, Spanish authorities arrested nine people for spreading propaganda in favor of the "Islamic State". Among those arrested was a man who specialized in producing extremist multimedia content using AI techniques.
"Artificial intelligence complements the official propaganda of al-Qaeda and IS," says Moustafa Ayad of the Institute for Strategic Dialogue (ISD). "It allows supporters to create emotionally charged content that serves to win people over to their underlying ideology."
The specific formatting of the content can mean that moderators of popular social media platforms fail to recognize it. "IS supporters have low standards," says Ayad. "They share even the most ridiculous and unrealistic IS content."
High technical quality
This comes as no surprise to long-time IS observers. When the group first gained attention in 2014, it did so with the help of high-tech propaganda videos. The goal was to intimidate the enemy and recruit supporters. "Terrorist groups and their supporters continue to use the latest technology to advance their interests," says Ayad.
In addition to propaganda, extremist groups use chatbots based on large language models such as ChatGPT to communicate with potential new recruits. Once the chatbot has sparked interest, a human recruiter specialized in winning over new members can take over the conversation.
Fake and real bombs
In a paper published in 2019 in the journal "Perspectives on Terrorism", researchers examined the connection between IS propaganda and actual attacks. They concluded that there is no "strong and predictable correlation" between propaganda and attacks.
"It can be compared to the debate about cyber weapons versus real bombs that we had a decade ago," says Lily Pijnenburg Muller, research fellow and cyber security expert at the Department of War Studies at King's College London.
"Even rumors and old videos can have a destabilizing effect and lead to a flood of misinformation on social media," Muller told Deutsche Welle (DW).
"I don't know if the use of artificial intelligence by foreign terrorist organizations and their supporters is more dangerous at the moment than their very real and vivid propaganda, such as filming pre-planned killings of civilians and attacks on security forces," says extremism researcher Ajad.
"At the moment, the bigger threat comes from groups that actually carry out attacks, inspire individual attackers or successfully recruit new members. They also achieve this by reacting to geopolitical events, especially the war between Israel and the militant Hamas - in response to October 7," he says. Ayad.
"They use civilian casualties and Israeli actions as a rhetorical tool to recruit supporters and build campaigns."