RECENTLY, I was invited to speak at a conference on artificial intelligence, where a familiar and increasingly urgent question resurfaced: as AI becomes extraordinarily efficient at sourcing, processing, and delivering information, will journalists eventually become irrelevant?
At first glance, the idea feels compelling.
If machines can scrape information across multiple feeds in real time, summarise parliamentary proceedings, monitor political developments, crunch data, and respond to user queries in conversational language, why would end users still need traditional journalism outlets, or even digital platforms as intermediaries for news? The argument implies a future where individuals bypass these intermediaries entirely and receive information directly from intelligent systems in a conversational, real-time format.
Those who make this argument often point to changing audience behaviour, especially among younger users. Consumers are steadily turning away from conventional news platforms and gravitating toward faster, more personal, and more direct sources of information.
And to be fair, the trend is clear.
The latest Reuters Institute Digital News Report shows a consistent decline in trust in, viewership of, and dependence on mainstream news outlets. Users, particularly young people, increasingly prefer to consume information from influencers, content creators, and niche commentators rather than from professional journalists. Their relationship with news is personal, interactive, emotional, and algorithmically curated.
Thus, many in the industry look at this shift and assume that AI-mediated information access will make journalism redundant faster than we imagine.
But this prediction misunderstands both journalism and technological disruption.
Journalism has never merely been about transmitting information from point A to point B. If journalism were equivalent to distribution, then journalists would already have become irrelevant when blogs, Twitter, and user-generated content exploded over a decade ago. Yet even as more voices entered the digital information space, the public’s need for verification, investigation, context, scrutiny, and accountability only grew stronger.
Every time a major national or geopolitical event unfolds, the world still turns to journalists to understand what actually happened, who benefits, who is responsible, and what it means.
AI does not eliminate the need for this work. On the contrary, AI multiplies it.
We are moving into a period where the volume of available information will exceed human comprehension by several orders of magnitude. Generative systems will not just surface facts; they will hallucinate, remix, misattribute, and invent things that never happened at all.
Political actors, commercial interests, and even state authorities now possess the ability to manufacture narratives, emotional atmospheres, and public sentiments on an industrial scale, with minimal cost and high precision. The idea that ordinary users will be able to navigate that universe alone, without a professional class dedicated to verification, investigation, and ethical interpretation, is, to say the least, misplaced.
If AI is going to rewrite the information environment, then journalism is not becoming less necessary; it is becoming structurally essential. In five years, the hardest problems in public communication will not be about finding information. They will be about establishing what is real, what is synthetic, what is manipulated, and what is politically engineered for attention and outrage.
For me, the question was never whether journalists would become irrelevant. They won’t.
The real question is twofold: how the profession will evolve to meet the challenges ushered in by AI, and whether journalists will be able to meaningfully integrate these technologies into their workflows, their verification practices, and their investigative muscle.
To begin with, journalists will need an expanded skillset that goes far beyond traditional verification or deepfake detection. As AI becomes embedded in governance, bureaucracy, law enforcement, welfare delivery, and public administration, journalistic storytelling will have to evolve as well.
In the past, governance reporting focused on the effectiveness of systems, corruption, procurement flaws, or regulatory gaps. In the future, many of those stories will be about data bias, algorithmic discrimination, opaque model assumptions, automated decision-making, and the ways poorly designed AI systems can harm citizens at scale. And journalists will only be able to report on these issues meaningfully if they understand how these systems work.
For instance, they will have to learn algorithmic auditing and network analysis, and develop the ability to interrogate opaque systems. Similarly, they will need to be able to trace financial and political incentives behind influence operations, scrutinise the training data that powers decision-making models, and investigate how platforms, corporations, and governments deploy automated systems to shape public sentiment, marginalise voices, or suppress inconvenient truths.
In an ‘automated’ information environment, human accountability must become more rigorous, not less. If, for example, a government uses predictive policing, automated welfare systems, or algorithmic surveillance, someone must investigate who designed the system, what assumptions it encodes, whose rights it affects, and whether its outputs reinforce bias or injustice. No machine can perform that function independently. It requires reporters with access, independence, courage, and analytical skill.
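To make ‘algorithmic auditing’ concrete, consider a minimal, hypothetical sketch of one of its simplest forms: a disparity check on a log of automated decisions. The dataset, the column names, and the 0.8 threshold below are illustrative assumptions for the sketch, not a description of any real system or newsroom tool.

import pandas as pd

# Hypothetical log of automated decisions obtained by a reporter:
# one row per applicant, with a group attribute and the system's outcome.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   1,   0,   1,   0,   0,   0],
})

# Approval rate for each group.
rates = decisions.groupby("group")["approved"].mean()

# Disparate-impact ratio: the lowest approval rate divided by the highest.
# A common rule of thumb (the 'four-fifths' rule) treats ratios below 0.8
# as a signal that the system deserves closer scrutiny, not as proof of bias.
ratio = rates.min() / rates.max()

print(rates)
print(f"Disparate-impact ratio: {ratio:.2f}")

In this toy example the ratio comes out well below 0.8, which would be a starting point for questions to the system's designers and operators, not a conclusion in itself.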
Thus, I believe one of the biggest democratic risks in the near future is not that AI will make journalists irrelevant, but that there will be too few trained journalists capable of interrogating AI-driven systems.
Which brings me to the second part of the issue: are we, the journalists, integrating AI into our workflows the right way? The answer is complicated. AI offers extraordinary efficiencies that can support research, transcription, analysis, visualisation, and verification at a scale that traditional newsrooms have never experienced. But integration without standards is risky.
If journalists outsource too much judgement to opaque systems, we risk importing algorithmic bias into editorial choices, eroding verification practices, and unintentionally weakening our core responsibility: the independent assessment of facts.
My take on the question of whether journalism as we know it will ‘survive’ the AI boom is simple: journalism will be just fine; it is the journalists who refuse to evolve who should be worried, not the craft itself.
The writer is the founder of Media Matters for Democracy.
Published in Dawn, December 14th, 2025
