The use of artificial intelligence in the media raises questions about journalistic ethics


With the development of advanced artificial intelligence and its ability to help spread false news and misinformation, some media experts believe the journalism industry should adopt uniform standards for this new technology.

Among the issues newsroom leaders are discussing is how artificial intelligence — a technology with a record of errors and so-called hallucinations, and one already being used to create deepfakes — can be ethically relied upon by an industry whose credibility depends on trust.

"I think it will take some time for news organizations to develop best practices," said Jared Schroeder, an associate professor specializing in media law and technology at the University of Missouri School of Journalism.

"There are still no established best practices and we have two problems: this is new and it's changing. We're not done. Today's artificial intelligence will be different next year and in five years," he added.

Generative AI presents an interesting dilemma for an industry suffering from an economic downturn. The technology can assist in producing transcripts, copy editing, narrating audio or TV packages, and creating images. And investigative news outlets have long relied on it to sift through large databases.

But it also poses a risk of copyright infringement and plagiarism, along with errors.

The New York Times sued OpenAI and Microsoft last month for copyright infringement. An August report by NewsGuard found that AI chatbots were plagiarizing thousands of news articles.

As news organizations begin to adopt standards and practices, many experts agree that AI is a useful tool for journalism, but that humans are still needed to oversee its application.

"As a journalist I'm not allowed to use artificial intelligence to write my stories or anything like that," said Ryan Heath, global technology correspondent for news website Axios.

"It's good to use it to do a little research and ask for inspiration, but you can't use it to actually report or compose your articles," he told VOA.

News outlets that have experimented with using artificial intelligence to replace journalists have done so with only limited success.

In November, the American media company Sports Illustrated was accused of publishing AI-generated content under fake author bylines. Sports Illustrated denied the allegations, saying a third party had provided the content. The company fired senior executives the following month but denied that the decision was related to the AI allegations.

Sports Illustrated made headlines again on Friday, announcing mass layoffs after losing its licensing deal over missed payments.

Separately, an experiment with artificial intelligence at tech website CNET early last year resulted in dozens of articles containing errors. After issuing at least 41 corrections, the site announced that it would no longer publish articles generated entirely by artificial intelligence.

Axios' Ryan Heath said his media company had adopted a more cautious approach.

"They recognize that it's definitely a big transformation, so they've hired people like me to write full-time about AI," he said. "But they want to stop and think about it first."

Axios is not alone in hiring people to focus on artificial intelligence. The New York Times announced last month that it will appoint Zach Seward as editorial director of its AI initiatives.

In a press release, the publisher stated that Seward's job will be to establish principles for the use of artificial intelligence in the organization.

Voice of America requested an interview from The Times, but the paper declined.

Others, such as the Associated Press (AP), have signed licensing agreements with OpenAI, the maker of ChatGPT.

The AP also declined VOA's interview request. But in a press release, the news agency said: "Accuracy, fairness and speed are guiding values for AP reporting, and we believe careful use of artificial intelligence can serve these values and improve our work over time."

The implications of artificial intelligence for newsrooms are a key trend for 2024, especially in a year when significant elections will be held in more than 40 countries.

"Embracing the best of artificial intelligence while managing risks will be the underlying narrative for the year ahead," wrote Nic Newman, senior fellow at the Reuters Institute for the Study of Journalism, in the organization's annual report on media trends.

Noting that issues of trust and intellectual property are key, Newman added, "Publishers can also see benefits in making their business more efficient and relevant to audiences."

Leaders of newsrooms and media watchdog organizations have also joined the debate.

Nobel laureate Maria Ressa joined Reporters Without Borders and other groups in publishing the Paris Charter on Artificial Intelligence and Journalism in November. The charter's creators say they want it to serve as an ethical blueprint for the use of artificial intelligence in journalism, and they want news organizations to adopt its 10 principles.

But so far, few news organizations have adopted the charter, while the number of journalists using artificial intelligence continues to grow.

Pandora's box has been opened, Schroeder said, adding: "It would be dangerous for journalism not to think about how artificial intelligence should be used. That doesn't mean every news organization should use it in the same way."

Some governments seem to share that view.

During a January 10 hearing before a subcommittee of the US Senate Judiciary Committee, lawmakers also discussed the potential harms the technology could pose to journalism.

"It's really a perfect storm of falling revenues and exploding misinformation, and a big part of the cause of this perfect storm is actually technologies like artificial intelligence," said Sen. Richard Blumenthal, D-Connecticut, who chaired the hearing.

In December, the European Union agreed on the Artificial Intelligence Act, intended to ensure the safe and transparent use of the technology. It includes requirements for tech companies to disclose when content is generated by artificial intelligence.

Media experts also emphasize transparency and human oversight in their discussions of how and when artificial intelligence is used in journalism.
