How generative AI can navigate misinformation and disinformation

New research into generative AI images shows that just over a third of the media organisations surveyed had an image-specific AI policy in place at the time of the research.

The study, led by RMIT University in collaboration with Washington State University and the QUT Digital Media Research Centre, interviewed 20 photo editors, or those in related roles, from 16 leading public and commercial media organisations across Europe, Australia, and the US about their perceptions of generative AI technologies in visual journalism.

Lead researcher and RMIT Senior Lecturer, Dr TJ Thomson, said most staff interviewed were concerned about the impact of generative AI on misinformation and disinformation.

However, factors that compound the issue, such as algorithmic bias and the scale and speed at which content is shared on social media, were beyond their control.

He explained: “Photo editors want to be transparent with their audiences when these technologies are being used, but media organisations can’t control human behaviour or how other platforms display information.”

Thomson said having policies and processes in place that detail how generative AI can be used across different communication forms could prevent incidents of mis- and disinformation.

“More media organisations need to be transparent with their policies so their audiences can also trust that the content was made or edited in the ways the organisation says it is,” he said.

Banning AI technologies is not the answer

The study found five of the surveyed outlets barred staff from using AI to generate images, though three of these barred only photorealistic images.

Others allowed AI-generated images if the story was about AI.


“Many of the policies I’ve seen from media organisations about generative AI are general and abstract,” Thomson stated.

“Banning generative AI outright would likely be a competitive disadvantage and almost impossible to enforce.”

Australia was found to be lagging behind on AI regulation, while the US and EU led the way.

Algorithmic bias affects trust

The study found journalists were concerned about how algorithmic bias could perpetuate stereotypes around gender, race, sexuality and ability, leading to reputational risk and distrust of media.

“We had a photo editor in our study type a detailed prompt into a text-to-image generator to show a South Asian woman wearing a top and pants,” Thomson said.

“Despite detailing the woman’s clothing, the generator persisted with creating an image of a South Asian woman wearing a sari.”

Copyright was also a concern for photo editors as many text-to-image generators were not transparent about where their source materials came from.

While generative AI copyright cases, such as The New York Times’ lawsuit against OpenAI, are making their way through the courts, the legal landscape is still evolving.

Thomson added: “Being more conservative and only using third-party AI generators that are trained on proprietary data or only using them for brainstorming or research rather than publication can lessen the legal risk while the courts settle the copyright question.”

Generative AI is not all bad

Despite concerns about mis- and disinformation, the study found most photo editors saw many opportunities for using generative AI, such as brainstorming and generating ideas.

Many were happy to use AI to generate illustrations that were not photorealistic, while others were happy to use AI to generate images when they didn’t have good existing stock images.

While there was concern about losing photojournalism jobs to generative AI, one editor interviewed said they could imagine using AI for simple photography tasks.

“Photographers who are employed will get to do more creative projects and fewer tasks like photographing something on a white background,” said the interviewed editor.

“We could argue that those things are also very easy and simple and take less time for a photographer, but sometimes they’re a headache too.”
