Stop Blaming AI for Problems It Didn’t Create
In recent weeks, two opinion pieces in Canada’s National Post newspaper — Alister Adams’ “AI threatens the lifeblood of democracy” and Paul W. Bennett’s “AI isn’t revolutionising learning. It’s mimicking original thought” — have painted a bleak picture of generative AI as a corrosive force in journalism and education. Both writers warn that large language models (LLMs) are eroding trust, destroying livelihoods, and hollowing out human capacity for deep thinking.
They are right about one thing: journalism and education are in trouble. But on the central point — that AI is the root cause of this trouble — they are profoundly wrong.
The Journalism Argument: A Decade-Old Crisis with a New Scapegoat
Adams’ piece points to Google’s AI Overviews as the latest existential threat to newsrooms. It’s true that “zero-click” results divert traffic away from publishers’ sites and that ad-based revenue models are collapsing. But to suggest that AI has single-handedly triggered this crisis is to ignore twenty years of media history.
The decline in journalism began long before AI. Newspapers gutted their investigative teams when print advertising revenue evaporated in the mid-2000s. News outlets walled off content behind paywalls, alienating younger readers. Trust in the press eroded as partisan narratives became indistinguishable from straight reporting.
AI is not stealing journalism’s lifeblood — it is revealing that the patient has been on life support for years. If publishers cannot adapt to a world where summaries, aggregations, and secondary commentary are the norm — something that began with Google News and Wikipedia, not AI Overviews — the problem is strategic, not technological.
The solution is not to demonise AI, but to integrate it. Imagine AI-powered explainers that embed original reporting, with clear, clickable attribution that rewards the source. Imagine newsrooms using AI to fact-check, surface archival context, or personalise investigative series for different communities. These tools exist. The obstacle is not AI; it’s the refusal to rethink the model.
The Education Argument: Surface-Level Thinking Didn’t Start with ChatGPT
Bennett warns that AI encourages “pseudo-proficiency” — students producing articulate answers without deep comprehension. It’s a fair concern. But again, this is not new.
For over a decade, many students have been actively discouraged from questioning “narratives” in public school curricula, and at times penalised for doing so. Critical thinking is applauded in theory, but in practice it is bounded by ideological guardrails. A student who challenges the framing of a historical event, or questions the accuracy of a politically sensitive claim, risks a lower grade or social reprimand.
This is the same “surface-level thinking” Bennett worries about, only without AI in the mix. Standardised tests, rigid rubrics, and politically curated lesson plans have long rewarded memorisation and compliance over curiosity and intellectual courage.
LLMs did not create this problem. At worst, they fit neatly into an existing educational culture that prefers consensus answers over critical exploration. At best, they can challenge it — giving students the means to explore multiple viewpoints, test their reasoning, and engage with material at a depth that classroom time constraints often don’t allow.
The danger isn’t that AI makes students shallow thinkers. The danger is that schools will use AI to make it easier to keep students that way.
The Common Enemy — If There Is One
Both Adams and Bennett write as if AI were an autonomous actor with its own agenda. But LLMs are not independent forces plotting the downfall of democracy or education. They are mirrors, reflecting the biases, limitations, and creativity of the people who use them.
If journalism is failing, it’s because the business model is broken, trust has eroded, and adaptation has been slow. If education is failing, it’s because we have built systems that reward obedience over inquiry.
Blaming AI for these failings is not only inaccurate — it’s convenient. It allows institutions to avoid examining their own decisions. It shifts the focus away from the uncomfortable truth: the crisis in both fields is human-made, and so is the solution.
A Better Way Forward
Instead of vilifying AI, we should be asking harder questions about how it can help us rebuild. How can newsrooms use LLMs to expand investigative reach while preserving credit and revenue? How can teachers design assignments that require students to interrogate AI outputs rather than passively accept them?
Generative AI is not a saviour. It is not a villain. It is a tool — one that will reflect our intentions, whether they are shallow or profound. If we want democracy and education to thrive, we must stop outsourcing responsibility for their health to the latest technological disruptor.
Because in the end, the lifeblood of democracy — and the foundation of learning — is not the absence of new tools. It is the presence of people willing to use them wisely.