My first Sunday morning coffee: Beyond the AI prophecy
They promise us salvation. In two years, perhaps in five. Artificial general intelligence, AGI for short, will cure all diseases, save the planet, and lead humanity into a new era. Or it will destroy us. For good.
Both are supposedly equally possible. The truth, however, is more banal: AGI doesn't exist. It has never existed. Nor is it certain that it will exist in the near future, if ever. And yet this fiction rules an entire industry and, with it, much of our economic existence.
Will Douglas Heaven formulated a thesis in MIT Technology Review that I cannot ignore: AGI functions like a conspiracy theory. Flexible timelines that constantly shift. Promises of salvation and doomsday scenarios rolled into one. And a remarkable resistance to reality. Having emerged in 2007 on the fringes of research, the idea has become the dominant narrative – even though no one can define what AGI is actually supposed to be.
And while billions of dollars chase the mythical narrative that everything is possible - for better or worse - the concrete challenges AI actually poses remain poorly understood, and the real problems of the present remain unsolved.
Need for a pragmatic shift?
Let's set the Silicon Valley prophets aside for a moment. One of my friends had ChatGPT create her diet plan and lost ten pounds with it. AI delivered the template. She still had to buy the groceries herself, prepare the meals, and deal with her actual appetite at 9 PM.
Entrepreneurs and influencers I follow on LinkedIn increasingly generate their posts with AI. The posts go out in volume and at high frequency. Engagement depends on algorithms that no one really understands. But why anyone should be interested in this content is, in my view, still a human question.
My husband asks ChatGPT for recipes, and it reliably delivers suggestions for whatever ingredients happen to be in the fridge. He chops the vegetables, adjusts the seasoning, and reconciles the instructions with what we actually enjoy. We still don't mistake him for a top chef, though.
Our son, too - who has AI explain the math material he doesn't quite remember from class - notices, not without some frustration, that an explanation doesn't mean he can calculate correctly. And having to solve the problems himself in the exam: that was already true before any AI agent.
And what about me? I use AI for quick information and for source research, and I experience this as a time-saving blessing. Nevertheless, I don't want to lose my own voice. I don't want to simply parrot what AI chatbots present, based on probabilities, as answers to my questions. I want to feel out what I think makes sense, not just read what a language model conjures onto my fancy device in a few seconds.
We learn because we feel it when we're wrong. AI cannot do that.
I do think that real intelligence is physically embodied and coupled with emotion and learning. And this form of intelligence still belongs to us. Because the diet plan is worthless without body awareness. The LinkedIn post is empty without an insight that genuinely moves someone. The recipe, on its own, cannot flatter spoiled palates.
A study by researchers at Harvard, MIT, and Cambridge, published in October 2025, recently confirmed this empirically. In it, 517 people competed against three leading AI models on thinking and problem-solving tasks. The result: humans consistently outperformed the AI. Not because they processed information faster, but because they learned.
The human participants systematically tested hypotheses, experimented, and revised their assumptions - in other words, they ran plausibility checks. The AI models, by contrast, ground bluntly through the tasks with their internal logic and stuck to their original approach, even when it failed against human knowledge of the world.
The researchers came to a clear conclusion: current AI models confuse information with knowledge. They accumulate data, but they lack awareness of their own thought processes. They cannot recognize when they don't know something. They cannot (yet) really learn. What AI lacks is what W. Edwards Deming described as continuous improvement through the PDCA cycle: the ability to plan, test, check results against reality, and adapt - a process that requires metacognition and embodied experience.
In humans, we call this lack of metacognition the Dunning-Kruger effect - and it is considered a cognitive bias, not intelligence. Still, AI remains a phenomenon that processes information at lightning speed, recognizes patterns, and reliably analyzes existing data. And that is awesome for people with tight schedules like me.
The actual threat: Error 45
Here Heaven's analysis becomes politically explosive: whoever swallows the AGI pill unquestioningly - whether as a promise of salvation or as a doomsday prophecy - accepts side effects that are not in their own interest. It makes no sense either to wait for salvation or to prepare for the apocalypse when what actually needs to be figured out is whether the prescribed medicine fits the diagnosis at all.
I think AI is problematic for people who have stopped asking their own questions. Who think in binary logic - utopia or apocalypse. Who cannot bear the complexity of life. Who wait for prophets instead of engaging with reality themselves. So I don't see the threat in AI potentially replacing human intelligence. The danger seems rather to be a human intelligence paralyzed by dogma, one that gives up thinking and learning as too laborious and too time-consuming.
People who are willing to engage with inconvenient facts even when they contradict their own convictions, who can bear the uncertainty that technological progress brings because they are capable of learning, and who use tools without worshiping or fearing them need not be afraid of being replaced by AI. Inhuman dogmatism and ignorant self-overestimation are probably the greater risk.
Technology moves fast. Do we have to as well?
I would say it depends on which narrative guides us. Whoever follows the AGI prophets is trapped in their race. Either you keep up or fall behind. Either you help build their future or are forgotten in the past. But what if we choose a different narrative?
One in which the question is not "How fast can we be?" but "Where do we actually want to go?" Where slowness doesn't mean backwardness, but is a prerequisite for recognizing the destination in the first place. What problems need to be solved today? What challenges require human judgment? What does progress mean when we're constantly chasing the next big thing instead of working on what's right in front of us? Answers to those questions take time. And trust in one's own abilities.
Whether AGI arrives in five years or never hardly matters: AI is already changing how we work, learn, and decide. That is a reality that cannot be evaded without losing one's own agency. What's decisive is not whether one believes the grand narratives, but whether one is able to ask the right questions about what is actually happening.
Perhaps that's the real challenge: not waiting for technology to save or damn us, but examining the narratives sold to us as undeniable truth. Finding our own answers and accepting that they will always be incomplete, instead of outsourcing our thinking to the loudest voices because they supposedly possess secret knowledge. For the moment, I think the intelligence that matters most here still belongs to us.
This post draws on Will Douglas Heaven, "How AGI became the most consequential conspiracy theory of our time," MIT Technology Review, October 30, 2025, as well as Christopher R. Chapman, "AI and Deming's Theory of Knowledge," The Digestible Deming, November 5, 2025.