Efficiency gains are obscuring a deeper workforce risk
The workplace case for artificial intelligence is usually framed around productivity. AI systems can absorb routine tasks, speed up workflows and give professionals more time to focus on higher-value problems. But the more consequential question is not how much time these tools save today. It is whether long-term reliance on AI quietly erodes the human capacities that organisations still need in order to function well. That is the concern raised by economist Piotr Gaczek of the Poznań University of Economics and Business, who studies the impact of AI on decision-making.
His argument reflects a growing unease in the labour market. While AI is commonly presented as a support system rather than a full substitute for human work, that support may come with cognitive side effects. Research already suggests that frequent use of AI tools can reduce mental engagement and weaken neural activity, while performance may deteriorate when users who have become accustomed to automated assistance suddenly lose access to it. The danger is not only dependence on a tool, but dependence that becomes visible only when the tool is gone.
Hybrid intelligence still depends on active human judgement
For now, the dominant model remains one of collaboration rather than replacement. In this so-called hybrid intelligence framework, the machine handles repetitive or lower-level tasks while the human supervises, directs and retains responsibility for the final result. In theory, this arrangement should elevate human work by freeing specialists to concentrate on complexity, judgement and interpretation.
Yet that ideal depends on a level of attention that may be difficult to sustain in practice. Gaczek argues that people using AI tend to become somewhat more passive, in part because they process information less carefully once some of the task and responsibility has been delegated to the system. That passivity matters because AI tools do not merely automate effort; they can also reshape how people think through problems. A workflow that appears more efficient on the surface may, over time, produce weaker habits of scrutiny, verification and independent reasoning.
The greater danger may be skill erosion rather than automation itself
The most serious risk, in Gaczek’s view, is not that AI will immediately replace workers across the board, but that essential competencies will gradually disappear. Language models are prone to hallucinations and can generate convincing but false outputs, while users are also vulnerable to automation bias, the tendency to trust machine-generated recommendations too readily. The combination is particularly dangerous in environments where speed is rewarded and friction is unwelcome, because workers may accept plausible answers without properly testing them.
But the longer-term threat runs deeper than error. When a task is repeatedly handed over to a machine, people can lose fluency in performing it themselves and, crucially, lose the ability to pass that knowledge on to others. Gaczek’s example is striking: forgetting how to write a scientific article is one problem, but being unable to teach a doctoral student how to do it is far more serious. At that point, AI is no longer just changing individual performance; it is interrupting the transmission of professional knowledge from one generation of workers to the next.
Companies may preserve knowledge while making people more replaceable
That concern becomes even sharper when firms begin treating employee expertise as something that can be extracted and stored independently of the employee. Gaczek points to a German company launching a research project aimed at using AI to map the knowledge of its managers so that their know-how remains inside the business even if they leave or are dismissed. The example is revealing because it captures a central contradiction of enterprise AI adoption: companies want to preserve institutional knowledge, but the process of codifying that knowledge can also reduce the strategic value of the people who hold it.
In that sense, AI does not just alter tasks; it can alter the employment relationship itself. Once a machine has absorbed enough of a worker’s experience, the company may begin to see that worker as less indispensable. That is why this debate is ultimately about power as much as productivity. The issue is not whether AI can support better work, but whether organisations will use it in ways that hollow out expertise, weaken judgement and turn hard-earned human knowledge into a corporate asset detached from the humans who built it.
Responsible adoption requires more than optimism
Gaczek’s proposed safeguards are telling. He argues for explainable AI systems that can show why they reached a recommendation, for mechanisms that periodically prompt users to remain attentive rather than trusting outputs automatically, and for dedicated supervisors inside companies who can oversee whether AI is being used ethically and in line with internal policy. These are not technical add-ons so much as attempts to preserve human oversight in environments increasingly shaped by automation.
His conclusion is measured rather than alarmist. Businesses should not fear the technology, but neither should they treat it as frictionless progress. The real challenge is to keep enthusiasm and caution in balance. If companies fail to do that, the cost of AI adoption may not appear first in employment statistics, but in the slow erosion of the skills, judgement and teaching capacity that make organisations resilient in the first place.
Author:
Jan Bielik
CEO & Founder of Webiano Digital & Marketing Agency

Source: Zaskakujące skutki używania AI. “To alarmujący przykład” (“Surprising effects of using AI. ‘This is an alarming example’”)
