The media is looking to artificial intelligence (AI) tools to cut costs and supplement content, but despite technological advances, AI is unlikely to replace humans in nuanced jobs, writes the Financial Times (FT).
The paper says that rapidly developing AI is unlikely to completely replace traditional media and journalists, but is expected to greatly affect the responsibilities and work of journalists, broadcasters, creatives and advertisers, bringing much-desired speed and efficiency.
AI is becoming available at a time when media companies, particularly news outlets, are being forced to tighten their belts and make mass layoffs, as the growth of digital advertising giants such as Meta and Google has driven a global decline in newspaper group revenues.
Many seem to see technology as the solution: media groups are increasing investment in artificial intelligence even as they are forced to cut costs. In 2022, the global market for artificial intelligence in media and entertainment was valued at nearly $15 billion and is projected to grow at a compound annual growth rate of 18.4 percent between 2023 and 2030, according to Grand View Research.
Michael Selley, partner at UK law firm TLT, says media companies are using AI tools to stand out and stay relevant in an ever-changing market.
In media, the main use cases for AI include text and image generation, as well as AI-assisted editing and research, according to the FT.
Experts suggest that it will most often be used to optimize production processes and take on laborious tasks – for example, identifying the main points of an article to write headlines, or using speech-to-text technology to save time on transcription, subtitling and translation.
"Jobs that are more likely to be handled by AI include editing and copywriting, as generative AI is already relatively strong in these domains," suggests Ravit Dotan, an ethics consultant and AI researcher.
Meanwhile, the paper adds, developments in generative artificial intelligence and increasingly powerful large language models (LLMs) – which, trained on huge data sets, can among other things generate text – mean some media outlets have explored AI-assisted content generation and distribution. Such "automated journalism" has the potential to disrupt the traditional role of journalists.
"AI is already being widely used when it comes to writing articles," says Oliver Lock of London law firm Farrer & Co. He points to articles published by Sports Illustrated. In cases where structured data is available – for example, with sports statistics or financial results – AI can easily transform this into a news narrative, says Daniel Chazen of Verbit, an AI-based video and audio transcriber.
However, where on-the-ground newsgathering or more complex and nuanced storytelling is required, AI is an imperfect solution – it can assist rather than replace existing roles. Rajvinder Jagdev, a partner at specialist intellectual property litigation firm Powell Gilbert LLP, cites a recent Sky News segment where a reporter tried to use a generative AI tool to plan, script and create TV news: the quality was poor and it took a team of reporters to get the job done.
Jagdev believes that in the "short to medium term" AI tools are likely to be used to supplement existing workflows rather than work independently as "robo-journalists", but that may be the future.
In filmmaking, games and advertising, generative AI is increasingly present in creative processes. It is also increasingly used to improve the user experience, make personalized news recommendations or, in advertising, help show ads at the right time to the right user.
Meta and Google already offer tools that can help marketers generate and better target ads to social media users in real time. There is concern that this could reduce or eliminate the need for advertising creatives, as well as agency staff to provide advice on how to place ads effectively.
"Imagine a level of personalization similar to TikTok, but expanded across a wider range of verticals and industries," says Joel Hellermark, founder and CEO of AI copilot and learning platform Sana.
But the implementation of AI in media still faces many challenges. Generative AI technology remains prone to “hallucinations” – generating inaccurate or false information. Using outdated or biased data banks for LLM training can deepen the potential for misinformation. Artificial intelligence tools can also be used intentionally in a malicious way to create fake videos of people or to manipulate opinion.
"If the problem of inaccuracy persists, there may be an increased demand for fact checkers, and their work may be more challenging and important than ever, as the Internet is flooded with more and more false information generated by artificial intelligence," says Dotan. "Companies and governments should require media outlets to label their content in a way that readers can confirm what really came from them," she adds.
Some argue that ties to the news organizations themselves could help solve the problem. "From a societal interest perspective, it might make sense for LLMs to reach some kind of agreement with major media publishers that would allow software companies to use trusted content to train their systems," Lock says.
He notes that Le Monde and Prisa Media have struck such deals with OpenAI, while The New York Times is suing the AI group to stop it from training its LLMs on the newspaper's content.
Meanwhile, the evolution of generative artificial intelligence raises questions about authorship, prompting concerns about copyright protection and intellectual property ownership of AI-generated content. "When AI tools become more sophisticated and are able to generate content without any request, then what happens?" says Jagdev. "Is the author the person who first initialized the AI tool, or perhaps the creator of the AI tool – for example OpenAI, Microsoft, Google – or maybe even the AI tool itself?" he adds.
Such gray areas could lead to new occupations, such as ethics managers, responsible for ensuring that AI-generated content adheres to ethical norms, Hellermark says.
Rafi Azim-Khan, an expert in European digital law at law firm Crowell & Moring, says AI poses an "existential threat" to the media sector.
However, he adds, AI will be an opportunity for those who adapt well and use it as a positive tool, and a threat to those who fail to adapt, who perpetrate or fall victim to its abuse, or who ignore the powerful new laws and sanctions being introduced.