Questionable quotes: Journalist in Wyoming used artificial intelligence to fabricate statements and articles

CJ Baker, who has been a reporter for more than 15 years, then met with Aaron Pelczar, a 40-year-old newcomer to journalism, who, according to Baker, admitted that he had used artificial intelligence and said he had already resigned from the Enterprise.

Illustration, Photo: Shutterstock
Disclaimer: The translations are mostly done through AI translator and might not be 100% accurate.
Updated: 30 Aug 2024, 12:49

Quotes attributed to the governor of the American state of Wyoming and a local prosecutor in a competing newspaper struck Powell Tribune reporter CJ Baker as strange, and some of the sentences in those articles read as if they had been written by a robot.

But his suspicion that a reporter from a rival news outlet was using generative artificial intelligence (AI) to write articles was not confirmed until a June 26 story in the competing paper about a local comedian chosen to lead a local horse-riding parade, which, it said, promises to be "an unforgettable celebration of American independence, led by one of the most beloved personalities".

It became clear that Aaron Pelczar, a journalist at the rival Cody Enterprise, had used AI in his reporting: the article carried a chatbot-style explanation that this structure "ensures that the most critical information is presented first in the text, which makes it easier for readers to quickly understand what it is about."

Baker, who has been a reporter for more than 15 years, then met with Aaron Pelczar, a 40-year-old newcomer to journalism, who, according to Baker, admitted that he had used artificial intelligence and said he had already resigned from the Enterprise.

The publisher and editor-in-chief of the Cody Enterprise, the newspaper founded in 1899 by the famous Buffalo Bill Cody, later apologized and promised to take steps to ensure it never happens again. In an editorial published on Monday, editor-in-chief Chris Bacon said he had failed to notice the AI-generated copy and the false quotes, and that he nevertheless bears full responsibility for their publication.

He apologized for "allowing AI to put words that were never spoken into articles".

Journalists were ruining their careers by fabricating quotes and facts long before artificial intelligence appeared. But this latest scandal illustrates the potential pitfalls and dangers that artificial intelligence poses to many industries, including journalism. Given just a few prompts about what kind of article to "write", chatbots can produce completely fake yet somewhat convincing newspaper articles.

AI has found a role in journalism, including the automation of certain tasks. Some newsrooms, including The Associated Press, use artificial intelligence to free reporters from routine technical work, but most AP staff are not allowed to use generative AI to create content for publication.

The AP has been using the technology to help with financial reporting articles since 2014, and more recently for some sports coverage. The AP is also experimenting with an AI tool to translate some articles from English to Spanish. At the end of each such story there is a note explaining that this technology was used in its creation.

Being open about how and when AI is used has proven important. Sports Illustrated magazine was criticized last year for publishing AI-generated online product reviews that were presented as the work of journalists who do not actually exist. After the story came to light, the magazine said it was firing the company that produced the articles for its website, but the case tarnished the once-powerful publication's reputation.

In his story for the Powell Tribune that broke the news about Pelczar's use of artificial intelligence, Baker wrote that he had an uncomfortable but honest conversation with him, and that Pelczar promised to apologize but insisted that his work should not reflect on his editors at the Cody Enterprise.

The Enterprise reviewed all the articles Pelczar wrote for the newspaper during the two months he worked there. They found seven articles containing AI-generated "quotes" attributed to six people, and the review is ongoing.

"Those are very compelling quotes," said editor-in-chief Chris Bacon, noting that the people to whom the quotes were attributed said they sounded like something they would say, but that they had never spoken to Pelcar.

Journalist CJ Baker, who uncovered the fabrications and regularly reads the Enterprise because it is a competitor, told the AP that the recurring combination of formulaic phrases and quotes in Pelczar's articles raised his suspicions.

Thus, Pelczar's article about an armed attack in Yellowstone National Park contains the sentence: "This case serves as a stark reminder of the unpredictable nature of human behavior, even in the most peaceful environments."

Baker said that sentence sounds like the kind of summary a chatbot often tacks on at the end of a story, a sort of "life lesson".

Another article, about a poaching conviction, contained quotes from officials and a prosecutor that appeared to come from a press release, Baker said. But there was no press release, and the agencies involved did not know where the quotes had come from, he said.

Two of the articles under review contained fabricated quotes attributed to Wyoming Gov. Mark Gordon, which his staff learned about only when Baker called them to ask about it.

In one case, Pelczar fabricated a quote from the governor entirely, the governor's spokesman Michael Pearlman said in an email. In the other, Pelczar made up part of a quote and then combined it with part of a genuine official statement.

The most obvious sign of AI generation came in the article about the local comedian, which ended with a technical explanation of the "inverted pyramid", the basic approach to writing a breaking news story.

Creating AI articles is not hard: a user can tell an AI chatbot that a crime has occurred and ask it to write an article about it, complete with quotes from official statements, said Alex Mahadevan, director of the digital media literacy project at the Poynter Institute, a prominent journalism think tank.

"These generative AI chatbots are programmed to give you an answer, whether that answer is complete garbage or not," Mahadevan said.

Megan Barton, publisher of the Cody Enterprise, wrote an editorial calling AI "a new, advanced form of plagiarism... It's an ugly part of the business. But a company that is willing to correct those mistakes is a respectable one."

Barton wrote that the newspaper had learned its lesson: it now has a system to detect AI-generated stories, and it will "have longer conversations with reporters about the fact that AI-generated stories are not acceptable".

The Enterprise had no policy on the use of artificial intelligence, in part because it seemed obvious that journalists should not use it to write articles, editor-in-chief Bacon said.

The Poynter Institute has a template that newspapers can use to draw up their own guidelines on the use of artificial intelligence, and Bacon plans to adapt one for his paper and publish it as early as this week.

"That will be a topic of conversation before hiring anyone," he said.
