Why journalists should not take AI at face value

Kofoworola Belo-Osagie, a researcher and consultant on nutrition and wellness, writes on the need for journalists to contribute to ensuring more objective and trusted AI output
In the past two years of researching nutrition and wellness, I have come to appreciate how much researchers and journalists have done to boost public knowledge of virtually everything.
Researchers and journalists crawled so that Artificial Intelligence (AI) could fly.
Journalists deserve credit for AI
In a world obsessed with quick fixes, AI is regarded as a ‘saviour’ with superheroic abilities because it can get results as soon as you call for it. We often do not acknowledge that it runs on centuries of work done by researchers, journalists, experts, and others who have documented human endeavour. Thanks to technology, AI can harness vast resources available online and deliver “readymade” information within an impressively short time. AI can make predictions about future occurrences based on available research. AI can help create videos, write news stories, design posters, edit CVs, conduct research, suggest destinations to tourists — just name it.
The situation, of course, has increased concerns about the survival of journalism. The profession has faced immense pressure in recent times as a result of social media expansion, which has seriously eroded its gatekeeper role.
Now, the same researchers and journalists who contributed to sharing knowledge before AI have to use AI in their work. They have to retrain to use AI to write, edit, research, and fact-check to remain on top of their game.
Regardless, journalists (and, of course, researchers) deserve their flowers. We deserve to be celebrated for the rigorous work we have done to keep the world informed. We should claim credit for what AI is now able to achieve because, without our news gathering, interpretation, and investigative work, the world would not enjoy the benefits of AI today. Yes, I acknowledge the computer programmers, engineers, scientists, and others who, over the years, wrote the code and invented the hardware and software that AI runs on. We thank them for their work. But journalists and researchers should not be overlooked.
Who is AI quoting?
Now that AI is our reality, is our work done? Should we just send Grok, ChatGPT, Perplexity, Gemini and others on errands, and lean back in our chairs? No. I have realised that we cannot use AI like the average man on the social media streets.
Here is why: the popularity of social media platforms has increased content on the internet. This is good, especially in the global south, where authentic information about our endeavours was previously poorly documented online. However, it has also popularised a new category of creators contributing to the global knowledge pool we all drink from. This group of creators comes from all walks of life, sometimes lacking commensurate education. They also come with varying motives for their efforts (entertainment or glory seeking). AI does not totally discriminate: it also draws on their content when returning results. If, when researchers, journalists and other experts dominated the content space, we needed fact checking and peer review, we need them even more now that we have creators who do not necessarily apply the rigours of scientific inquiry or the ethics and objectivity of journalism to their content.
In using AI, journalists must interrogate their results. This simply means that the reading does not stop for us, because we cannot interrogate the results we get if we do not know the questions to ask. My work researching nutrition and wellness has helped me tremendously when reviewing results presented by AI. I am able to question contradictory results or results that do not speak for all conditions.
Generally, people tend to believe what AI tells them. Journalists cannot fall into this category. I have seen countless people on X, for example, ask Grok to confirm a piece of information and take the response hook, line and sinker, even when Grok was wrong. In one instance, a user on X asked Grok about the language used in a video and it responded that it was Pidgin English, when it was Yoruba. I had to point out to Grok that it was wrong. People who understand neither language would have believed Grok.
In another instance, I would have believed Grok myself when it responded that a soundtrack used in a video belonged to an American musician, but for other users who interjected that it was Adekunle Gold’s. This simply tells us that AI can be wrong. Besides, how you frame your question to AI determines the feedback you get.
In using AI, journalists need to ask whom AI is quoting. We have the responsibility of digging to find the exact sources and determine whether the information is credible.
Beyond retraining to use AI, let’s shape it
Artificial Intelligence may have been designed without our input, but going forward, we may need to insert ourselves into its operation. Our gatekeeper role has evolved, not disappeared. Even more than setting records straight, journalists should become interested in shaping AI in ways that embody the ethics of the profession, so that its output can be more objective and trusted.