The Effortless Illusion

In late 2024, economists Anders Humlum and Emilie Vestergaard published a study that caught my attention. They had surveyed 25,000 Danish workers across eleven occupations, from software developers to HR professionals to teachers. The result surprised even the researchers: despite rapid AI adoption, there were no significant effects on income or working hours. The average time saved was 2.8 percent, roughly a hundred seconds per working hour.
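
To get a feel for what a 2.8 percent saving amounts to per hour, here is a quick back-of-the-envelope check; the snippet is my own illustration in Python, not a calculation taken from the paper:

    # Rough check: what 2.8 percent of a working hour amounts to.
    savings_rate = 0.028            # average share of working time saved, per the study
    seconds_per_hour = 60 * 60

    saved_seconds = savings_rate * seconds_per_hour
    print(f"{saved_seconds:.0f} seconds (~{saved_seconds / 60:.1f} minutes) saved per working hour")
    # -> 101 seconds (~1.7 minutes) saved per working hour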

The researchers had expected more. Earlier studies had shown that AI can make individual tasks up to 50 percent more efficient. In the real working world, however, those gains all but disappeared. Even more striking was a second finding: 8.4 percent of workers reported that AI had created new tasks for them.

Taken together, these two findings point beyond mere implementation problems. Economists like to explain missing productivity gains as implementation lag: the delay that arises because foundational technologies require massive investment and reorganization before they pay off. The steam engine took a century to become productive; electrification took decades. This explanation is reassuring: it suggests we just need to wait.

But what if the problem runs deeper? What if the way we use AI itself prevents us from realizing its potential? The Humlum-Vestergaard data suggest a different reading: We systematically confuse producing outputs with developing capabilities. And this confusion has consequences that extend far beyond productivity metrics.

A categorical difference

To understand why AI works differently from previous technologies, we need to distinguish between two types of tools. Traditional software structures and accelerates what we do. A spreadsheet calculates faster than we can, but we must know which numbers are relevant, which formulas make sense, and how to interpret the results. The cognitive work of understanding, judging, and deciding remains with us.

Generative AI promises something fundamentally different. It takes over cognitive operations itself: researching, formulating, analyzing, even deciding. It doesn't provide tools for our thinking processes; it substitutes for them. And this is where a problem arises that manifests differently across three types of tasks.

For routine tasks such as calendar coordination, standard replies, and formatting, automation is unproblematic. The effort of manually formatting a calendar entry has no intrinsic value. Here, efficiency is a legitimate goal.

For learning tasks such as acquiring new skills or working through complex subjects, the effort itself is the mechanism of competence acquisition. Learning research shows that methods that feel more difficult often lead to better learning outcomes; researchers call these "desirable difficulties." When students must summarize a text in their own words rather than just reading it, the work is more laborious but leads to deeper understanding and better transfer. The brain forms stronger connections when it must work to create meaning.

For work tasks such as the daily completion of projects, reports, and analyses, the difficulty is that both dimensions are present at once. Writing an email is rarely primarily a learning task. But every email sharpens our ability for concise formulation, audience-appropriate communication, and clear thought structure. These side effects are not trivial: over time, they accumulate into expertise.

So the crucial question is not whether AI is efficient. It is: for which tasks is the effort itself valuable because it builds or maintains competence? And we systematically fail to make this distinction when the to-do list is long and AI promises to take work off our hands.

The hidden trade-off

Take someone who regularly writes reports. The new tool offers "AI-powered summaries." You feed in the data; the AI structures, formulates, analyzes. Time savings? Definitely. Better reports? Maybe, depending on existing skills.

But what doesn't happen? The person no longer practices distinguishing relevant from irrelevant information. She doesn't develop a sense for which structure works for which audience. She doesn't train the ability to present complex matters concisely. Those skills don't develop through knowledge about report writing but through repeated doing, through mistakes, through revision.

In the short term, AI use means efficiency. In the long term, it undermines the process through which professional expertise emerges. The catch? This loss initially goes unnoticed. The reports get finished, deadlines are met. Only when unusual contexts arise, when the AI's output doesn't fit or the system fails, does the gap become visible. By then, a competence that was never fully developed is simply missing.

Research on automation documents related phenomena. When people rely on automated systems, they develop what is known as automation bias: the tendency to trust the system even when it's wrong. Pilots who depend on the autopilot lose the ability to intervene manually in crisis situations, not because they forget the theory but because they lack continuous practice.

With AI, another element comes into play. The systems sound convincing. They present answers with an authority that brooks no doubt. And when we haven't trained the competence to judge quality, we can no longer distinguish the plausible from the correct. The problem isn't that AI makes mistakes; we make them too. The problem is that, lacking practice, we no longer recognize them.

The organizational dilemma

Now one might object: Good organizations would notice such developments and counteract them. But this is precisely where the systemic problem lies. The metrics by which work performance is measured capture outputs, not competence development. Reports written, projects completed, deadlines met.

When a person uses AI to become more efficient, she gets rewarded. The saved time can be used for more projects; the organization gets more output per working hour. Only when this person moves into a new role where deeper understanding is required, or when the quality of work subtly declines, do gaps become visible. And even then it's hard to say whether the cause is competence eroded by reliance on AI, individual deficits, or a structural problem.

This creates a perverse incentive structure. Individually, it's rational to use AI because you're rated as more productive. Collectively, however, the organization's depth of competence may be eroding. And no one is directly responsible, because each individual decision seems reasonable.

Moreover, the way AI creates new tasks obscures the problem further. When 8.4 percent of workers report that AI has created new tasks for them, that could mean the technology enables things that were previously impossible. But it could also mean that, because the barrier to entry is lowered, people take on tasks for which they lack the necessary basic competence. Someone creates data analyses with AI without statistical understanding. Someone produces texts in foreign languages without mastering the language. The outputs exist, but their quality is hard to assess when expertise is missing.

Protecting what matters

The solution cannot be to avoid AI. The technology is here, it's getting better, and in many contexts it's legitimately useful. The solution lies in conscious differentiation.

Organizations would need to explicitly identify which skills are essential for their core competence. Not abstractly but concretely. Which cognitive operations must people master to handle the work in unpredictable situations? For a consulting firm, that might be analytical thinking and problem structuring. For an editorial office, concise formulation and critical judgment. These skills must be actively trained, even when AI offers shortcuts.

New practices of quality assessment are needed. When AI produces outputs, people must be able to validate them, and that presupposes they master the underlying skills themselves. Peer review, mentoring, conscious practice phases without AI: such mechanisms sound old-fashioned, but they are necessary to maintain competence.

The metrics need to be expanded. Measuring productivity only by outputs will always make AI use attractive. But if we also measured whether people can handle tasks without AI, whether they recognize errors, and whether they can adapt to new contexts, different incentives would emerge.

This is not a plea against efficiency. It's a plea for distinguishing between efficiency that creates space and efficiency that undermines foundations. Not every effort is valuable. But we must not eliminate the effort that builds expertise just because it's laborious.

The productive core

At first glance, the Humlum-Vestergaard study shows a productivity paradox. At second glance, it shows something else: we have not yet learned to distinguish between different types of work. Work that produces output and work that develops competence are not the same. Sometimes they overlap; often they don't.

AI makes the first type of work more efficient. But if we don't consciously protect the second type, we become more productive at producing things we understand less and less. That's more than a productivity problem. It's a competence problem.

The 2.8 percent time savings may not be a failure of the technology at all. It could be an indication that we intuitively sense that some work should remain laborious. The question is whether we're smart enough to translate this intuition into conscious practice before the pressure for efficiency displaces it entirely.

This post draws on: Humlum & Vestergaard: The Impact of ChatGPT on the Labor Market (2024); Brynjolfsson et al.: Artificial Intelligence and the Modern Productivity Paradox: A Clash of Expectations and Statistics (2019); Bjork: Memory and Metamemory Considerations in the Training of Human Beings. In: Metacognition: Knowing about Knowing (1994); Roediger & Karpicke: Test-Enhanced Learning: Taking Memory Tests Improves Long-Term Retention. Psychological Science, 17(3) (2006); Parasuraman & Manzey: Complacency and Bias in Human Use of Automation: An Attentional Integration. Human Factors, 52(3) (2010); Mollick & Mollick: Using AI to Implement Effective Teaching Strategies in Classrooms: Five Strategies, Including Prompts (2024).
