AI both a risk and opportunity for journalism – study

This illustration picture shows icons of Google’s AI (Artificial Intelligence) app BardAI (or ChatBot) (center-left), OpenAI’s app ChatGPT (center-right), and other AI apps on a smartphone screen in Oslo, on July 12, 2023. (Photo by OLIVIER MORIN / Agence France-Presse)

LONDON — Artificial intelligence (AI) is both a threat and an opportunity for journalism, with more than half of those surveyed for a new report saying they had concerns about its ethical implications on their work.

While 85 percent of respondents had experimented with generative AI, such as ChatGPT or Google Bard, for tasks including writing summaries and generating headlines, 60 percent said they also had reservations.

The study, carried out by the London School of Economics’ JournalismAI initiative, surveyed over 100 news organizations from 46 countries about their use of AI and associated technologies between April and July.

“More than 60 percent of respondents noted their concern about the ethical implications of AI on journalistic values, including accuracy, fairness and transparency and other aspects of journalism,” the researchers said in a statement.

‘Exciting, scary’

“Journalism around the world is going through another period of exciting and scary technological change,” added report coauthor and project director Charlie Beckett.

He said the study showed that the new generative AI tools were both a “potential threat to the integrity of information and the news media” but also an “incredible opportunity to make journalism more efficient, effective, and trustworthy.”

Journalists recognized the time-saving benefits of AI for tasks such as interview transcription.

But they also noted the need for AI-generated content to be checked by a human “to mitigate potential harms like bias and inaccuracy,” the authors said. Challenges surrounding AI integration were “more pronounced for newsrooms in the global south,” they added.

“AI technologies developed have been predominantly available in English, but not in many Asian languages. We have to catch up doubly to create AI systems, and AI systems that work with our local languages,” the report quoted one respondent in the Philippines as saying.

Co-author Mira Yaseen said the economic and social benefits of AI were concentrated in the global north, while its harms were disproportionately affecting the global south.