The media, often referred to as the fourth estate of the realm, has long served as a check on power, a mirror to society, and a platform for public discourse. However, in this era of rapid technological transformation, it must also serve as the lens through which we understand the impact of artificial intelligence (AI).
As this powerful technology reshapes everything from education to elections, from warfare to welfare, it is not only advisable but urgent for the media to be deliberate and intentional about reporting and investigating AI.
AI is not just another tech buzzword. It is another technological disruption, perhaps even more pervasive and consequential than the mobile phone or the internet. It is a force that is already redefining our present and will most certainly shape our future. From the way we work to the way we communicate, organise, heal, protest, learn, or even dream, AI is becoming woven into the very fabric of daily life. And if the media—the watchdog of society—is not awake and actively interrogating its evolution and implications, then who will?
AI Is Already Changing Journalism—and Not Always for the Better
Let’s start from within: the media industry itself. Journalists and media workers are not immune to the changes AI is bringing. Journalism is among the professions already witnessing significant transformation, thanks to generative AI tools like ChatGPT, Sora, and others. Newsrooms are experimenting with AI to write headlines, generate drafts, summarise interviews, produce video scripts, and even create photorealistic images.
While this might boost productivity or reduce production costs, there are ethical, professional, and even legal consequences that must be examined. What happens to editorial integrity when algorithms prioritise engagement over truth? How do we ensure that AI-generated content adheres to journalistic standards? Who is responsible when an AI system spreads misinformation, plagiarises a copyrighted story, or generates a deepfake? These are not hypothetical questions. They are real, urgent issues that the media must be brave enough to investigate—beginning with itself.
Reporting on AI Is a Matter of Public Interest
AI is already affecting public services in ways that are mostly hidden from the public. Automated decision-making systems are being used in policing, immigration control, social welfare distribution, and loan approvals. Predictive policing tools are being deployed in cities without any community consultation. AI-powered surveillance systems are being installed in classrooms, markets, and public spaces. And facial recognition software is being used by governments and corporations with little to no transparency or oversight.
These developments raise critical human rights and social justice questions. Who designs these systems? Who gets excluded? How are biases encoded into algorithms? How are people impacted when an opaque system denies them healthcare, education, or a job interview?
Journalists should be asking these questions. The media must investigate these systems the same way it investigates corruption, police brutality, or electoral fraud. AI must not be treated as a sacred cow or an intimidating subject only “tech bros” or scientists can talk about. The public deserves to understand how these systems work, who benefits from them, and who gets harmed.
AI and the Future of Work: Who Gets Left Behind?
The future of work is one of the most hotly debated aspects of AI development. Some experts predict that AI will eliminate millions of jobs, especially routine or repetitive jobs across sectors like banking, customer service, and transport. Others argue it will create new roles in areas like data science, AI ethics, and machine learning engineering.
But here’s the reality in Nigeria, across Africa, and much of the Global South: the digital divide is already wide, and AI is about to widen it even more. While some young people in Lagos, Nairobi, or Accra are learning to build AI models or design automation tools, millions more in rural and underserved communities don’t even have stable internet access. They are not part of the data economy. They are not getting trained for the “AI jobs of the future.” They are being left behind—again.
The media must be the voice calling attention to this deepening divide. It must report on how AI is reinforcing inequalities, both economic and educational. It must spotlight the policy gaps, the lack of digital literacy in our public schools, and the absence of AI strategies from our national development plans. If the media is silent, then the gap will grow, and once again, Africa will be on the receiving end of a global revolution it did not shape.
AI and Election Integrity
Let’s not forget the political dimension. AI is now being used in political campaigns to micro-target voters, spread propaganda, and manipulate public opinion. Deepfakes—AI-generated fake videos or audio recordings—can make it look like a candidate said or did something they never actually did. Large language models can be used to mass-produce fake news articles that look legitimate. These tools can be deployed to suppress votes, incite ethnic hatred, or spread disinformation at scale.
We saw this during elections in Kenya, Nigeria, Brazil, and even the United States. The stakes are high. If journalists do not investigate the use of AI in political campaigns, if we don’t report on algorithmic manipulation or expose the tech mercenaries who get paid to distort reality, democracy itself is at risk.
This is why AI reporting should not be left to the “tech desk” alone. It should be everybody’s beat—political reporters, business analysts, investigative journalists, and human rights correspondents. AI is not just a technology story. It is a society story. A powerful story. A human story.
What Is to Be Done?
So, how can the media rise to the challenge of reporting and investigating AI? Here are a few starting points:
1. Build Capacity in Newsrooms
Journalists need training—not just on how to use AI tools but on how to report on them critically. Media organisations should invest in capacity building around AI literacy, data journalism, and tech accountability. Editors must prioritise AI reporting, not as a nice-to-have but as a strategic imperative.
2. Collaborate with Experts and Civil Society
Journalists don’t have to know everything about machine learning or neural networks. But they must be able to ask the right questions and consult the right experts. Partnerships with academic researchers, civil society groups, and digital rights organisations can strengthen reporting and expand access to information.
3. Investigate the Supply Chain of AI
Who builds the datasets that train AI models? Where is the data coming from? Are the workers labelling that data being paid fairly? Are African languages being included in voice recognition systems, or are we once again being erased? These are investigative angles waiting to be explored.
4. Centre the Marginalised
AI is not neutral. It is shaped by human decisions, and those decisions often reflect the same structural inequalities present in the wider society. Reporting must centre the voices of those most affected by AI—women, rural dwellers, the disabled, and the poor. Their stories must not be an afterthought.
5. Hold Governments and Companies Accountable
Governments are procuring AI tools with public funds. Private tech companies are mining our data and profiting off our behaviours. Yet most operate in secrecy. The media must demand transparency. We must ask for regulatory frameworks. We must expose harmful practices and highlight good ones.
Conclusion: The Media Cannot Afford to Look Away
This is not a call for panic. It is a call for responsibility.
Artificial intelligence will not destroy the world overnight. But it can destroy livelihoods, undermine rights, and deepen injustice if left unchecked. As journalists, our job is not to cheerlead innovation blindly, nor to scaremonger. Our job is to tell the truth, ask hard questions, and amplify the voices of those at the margins.
In this AI moment, the media must rise—not just to adapt, but to interrogate. Not just to use AI tools, but to investigate the power behind them. We must report on AI not because it is trendy, but because it is transformative. And because our silence would be complicit.
Let us not look away. Let us investigate.
Adegboyega is Executive Director of Human Rights Journalists Network, Nigeria.

