OpenAI said it had removed accounts linked to an Iranian group that used ChatGPT to create content intended to influence the US presidential election.
In Operation Storm-2035, the Iranian group used OpenAI's artificial intelligence software to generate content about US election candidates, the Gaza conflict and Israel's presence at the Olympics, and then shared it through social media accounts and websites.
An investigation by the Microsoft-backed artificial intelligence company found that ChatGPT had been used to generate both long-form articles and shorter social media comments, The Guardian reports.
Most of the identified social media posts received few or no likes, shares or comments, and the company saw no indication that the web articles were being shared via social media, OpenAI said in a statement.
The banned accounts are barred from using the company's artificial intelligence services, and OpenAI said it is monitoring for any further attempts to influence politics.
Earlier in August, Microsoft published threat intelligence on Iran's Storm-2035 network, which operates four websites masquerading as news outlets.
According to the report, the network actively targets American voter groups at opposite ends of the political spectrum, building engagement with "polarizing messages on issues such as US presidential candidates, LGBT rights and the Israel-Hamas conflict."
OpenAI said in May that it had discovered and shut down covert operations that tried to use its models for "deceptive activity" across the internet.