
By Zavian Swan
In my last article, I left readers with a thought experiment about automation. It began with the introduction of a cart into a village where you were the local strongman. At first, the cart merely reduced the strength needed, which allowed you to keep your job: although people could now do some jobs themselves that used to require your services, there was still demand, and the market eventually adjusted. Then, however, a robot with super strength was introduced and automated the human labour altogether, putting you out of a job[1]. Let’s call this phenomenon “physical offloading”, which I define as the act of reducing the physical workload required to complete a task by “giving” it to an external tool. In the thought experiment, it was first the cart that allowed much more weight to be moved much more easily, and then the robot that removed the need for human labour altogether. The thought experiment was meant to help readers connect the dots: similar offloading is both possible and already occurring in cognitive tasks as well.
This ‘cognitive offloading’ is quantified in the paper “The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects From a Survey of Knowledge Workers”, published by Microsoft, in which researchers examine how the rise of Generative AI (GenAI) in knowledge workflows influences the way knowledge workers engage in critical thinking[2].
Study Design and Methodology:
The researchers surveyed 319 knowledge workers who provided 936 first-hand examples of using GenAI in various work tasks. The study was designed to address two primary questions:
- When and how do workers perceive the engagement of critical thinking when using GenAI?
- When and why does GenAI affect the effort required for critical thinking?
Participants rated their experiences using scales derived from Bloom’s taxonomy, which frames critical thinking across six cognitive activities: knowledge, comprehension, application, analysis, synthesis, and evaluation[3]. The knowledge workers reported not only whether they engaged in critical thinking but also how GenAI changed the effort needed for each cognitive activity compared to traditional methods.
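For readers who like to see the data model, here is a minimal sketch in Python of how one such survey response might be represented: one record per reported task example, with a flag for whether critical thinking was perceived and an effort-change rating for each of Bloom’s six cognitive activities. The field names and rating scale below are my own illustration, not the paper’s actual instrument.

```python
from dataclasses import dataclass, field
from enum import Enum


class EffortChange(Enum):
    """Perceived change in cognitive effort compared to working without GenAI."""
    MUCH_LESS = -2
    LESS = -1
    SAME = 0
    MORE = 1
    MUCH_MORE = 2


# The six cognitive activities from Bloom's taxonomy used in the survey.
BLOOM_ACTIVITIES = [
    "knowledge", "comprehension", "application",
    "analysis", "synthesis", "evaluation",
]


@dataclass
class TaskExample:
    """One first-hand example of GenAI use reported by a knowledge worker."""
    worker_id: int
    task_description: str
    engaged_critical_thinking: bool  # did the worker perceive any critical thinking?
    effort_change: dict[str, EffortChange] = field(default_factory=dict)  # per Bloom activity


# Example record (fabricated for illustration): one worker's report on drafting an email.
example = TaskExample(
    worker_id=1,
    task_description="Drafting a client email with GenAI assistance",
    engaged_critical_thinking=True,
    effort_change={"comprehension": EffortChange.LESS, "evaluation": EffortChange.MORE},
)
```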
Five Big Findings:
- While GenAI can improve worker efficiency, it can inhibit critical engagement with work. This can potentially lead to long-term overreliance on GenAI and diminished skills for independent problem-solving.
- When using GenAI tools, the effort invested in critical thinking shifts from information gathering to information verification; from problem-solving to AI response integration and verification; and from task execution to task stewardship.
- Reliance on GenAI can lead to less diverse outcomes, suggesting a risk of diminished personal judgment. This is known as mechanized convergence. Rather than generating entirely original ideas, GenAI tends to regurgitate or merge existing ones to form the concepts it presents. While this is a form of creativity and shouldn’t be dismissed, it is simply not a substitute for human-level ingenuity as of now.
- The study finds that the higher a worker’s self-confidence in their ability to perform a task, the more they apply their critical thinking skills when using GenAI. Conversely, higher confidence in GenAI’s ability to handle a task is associated with a reduction in critical thinking. The consensus has long been that one needs education in order to have confidence in one’s own output, and research seems to back this up. In the study “AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking,” the author states, “Advanced educational attainment correlated positively with critical thinking skills, suggesting that education mitigates some cognitive impacts of AI reliance.”[4] There are indeed students today who check their math homework with AI the moment they get an answer. What does this say about how confident today’s students are in their own intelligence, education, and abilities compared to AI’s?
- “Our work suggests that GenAI tools need to be designed to support knowledge workers’ critical thinking by addressing their awareness, motivation, and ability barriers.”[5]
Graph Analysis:
An analysis of the graph gives us a glimpse into how GenAI affects each category of cognitive activity. What we find is that GenAI is not “just being used”: in every single category (with the exception of evaluation), at least roughly 75% of reports indicate reduced effort. Particularly notable are comprehension and synthesis, which makes sense given that these are the strong suits of large language models.
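To make those percentages concrete, here is a toy calculation showing how a per-category “share reporting less effort” figure like the one in the graph could be tallied. The response values below are invented for illustration; they are not the paper’s data.

```python
from collections import Counter

BLOOM_ACTIVITIES = [
    "knowledge", "comprehension", "application",
    "analysis", "synthesis", "evaluation",
]

# Hypothetical responses: for each reported task example, the perceived effort
# change per cognitive activity, coded as "less", "same", or "more".
responses = [
    {"comprehension": "less", "synthesis": "less", "evaluation": "more"},
    {"knowledge": "less", "comprehension": "less", "evaluation": "same"},
    {"analysis": "less", "synthesis": "less", "application": "less"},
]

# Share of responses reporting reduced effort, per cognitive activity --
# the kind of per-category percentage the paper's figure plots.
for activity in BLOOM_ACTIVITIES:
    ratings = [r[activity] for r in responses if activity in r]
    if ratings:
        counts = Counter(ratings)
        pct_less = 100 * counts["less"] / len(ratings)
        print(f"{activity:>13}: {pct_less:.0f}% report less effort (n={len(ratings)})")
```

In the actual study, of course, each of the 936 reported examples contributes ratings, and the roughly-75% figures come from those real responses.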
This data points to the changing nature of critical thinking in the workplace.
Instead of generating content from scratch, knowledge workers increasingly invest effort in verifying information, integrating AI-generated outputs into their work, and ensuring that the final outputs meet quality standards. What is motivating this behavior?
Some explanations for these trends include the desire to enhance work quality, to build professional AI skills, plain laziness, and the wish to avoid negative outcomes such as errors. For example, someone who is not very proficient in English could use GenAI to make their emails sound much more natural and avoid potential misunderstandings.
On the flip side, the study also identifies factors that reduce critical engagement when using GenAI: overreliance on it for routine or lower-stakes tasks, time pressure, limited awareness of potential AI pitfalls, and difficulty in improving AI responses. For instance, while a lawyer noted increased effort in verifying legal information due to AI’s tendency to “make up” details, another participant found that AI drastically reduced the effort of reviewing code by quickly providing correct answers.
Implications for Future AI Design:
The findings suggest that GenAI tools can reduce the perceived cognitive load for certain tasks. However, they also suggest that GenAI poses risks to workers’ critical thinking skills by shifting their role from active problem-solver to overseer of AI output who must verify and integrate responses into their workflow. Once again (and this cannot be emphasized enough), the study underscores the need for designing GenAI systems that actively support critical thinking, so that efficiency gains do not come at the expense of developing essential critical thinking skills.
Conclusion:
To come full circle: the village folk who “physically offloaded” their work onto the cart for efficiency (cost cutting, time saving, etc.) are representative of what capitalism aims for. The ‘cart’ represents an optimization that in turn allows and promotes further optimization, and therefore more profit. This is how capitalism thrives. Thus, as the common folk continue to embrace the new and more efficient ‘cart’ of AI, they will in theory make more money. On the flip side, however, they may be ignoring the negative consequences of this offloading: a loss of critical thinking skills and the gradual disempowerment of human values within these new AI systems.
[1] Luckily, with current robotics this physical automation doesn’t go further than bringing in the groceries for you, or putting them away: https://www.youtube.com/watch?v=uVcBa6NXAbk, https://www.youtube.com/watch?v=Z3yQHYNXPws
[2] https://www.microsoft.com/en-us/research/publication/the-impact-of-generative-ai-on-critical-thinking-self-reported-reductions-in-cognitive-effort-and-confidence-effects-from-a-survey-of-knowledge-workers/
[3] https://www.d41.org/cms/lib/IL01904672/Centricity/Domain/422/BloomsTaxonomy.pdf
[4] https://phys.org/news/2025-01-ai-linked-eroding-critical-skills.html
[5] https://www.microsoft.com/en-us/research/uploads/prod/2025/01/lee_2025_ai_critical_thinking_survey.pdf?mc_cid=f15de3dd6f