AI use makes us overestimate our cognitive performance, study reveals
TL;DR: Research from Aalto University demonstrates that using AI tools like ChatGPT causes users to systematically overestimate their cognitive performance. Contrary to expectations, those with higher AI literacy showed the greatest overconfidence, with minimal engagement and lack of verification identified as root causes.
A groundbreaking study from Aalto University has revealed a troubling pattern in how people perceive their performance when using large language models: we consistently overestimate our abilities, and those who consider themselves AI-savvy are the worst offenders.
Context and Background
The research, which involved approximately 500 participants, challenged a well-established finding in cognitive psychology: the Dunning-Kruger effect, whereby less competent individuals overestimate their abilities whilst experts tend to underestimate theirs. When AI enters the equation, however, this pattern disappears entirely.
Participants were asked to complete logical reasoning tasks from the Law School Admission Test (LSAT), with half using ChatGPT and half working independently. After each task, they estimated how well they had performed, with financial incentives offered for accurate self-assessments.
The results were striking. “When it comes to AI, the DKE vanishes,” the researchers report: users at every level of actual ability significantly overestimated their performance. Professor Robin Welsch noted that “higher AI literacy brings more overconfidence”, with those who considered themselves AI-savvy overestimating the most.
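To make that calibration measure concrete, here is a minimal sketch of how such overconfidence might be computed. The data and the `mean_overconfidence` helper are invented for illustration; this is not the study's analysis code. The idea is simply a participant's self-estimated score minus their actual score, averaged per group.

```python
import statistics

# Invented records for illustration, not the study's data:
# (group, actual_correct, self_estimated_correct) on a set of LSAT-style tasks.
results = [
    ("chatgpt", 14, 18),
    ("chatgpt", 12, 17),
    ("chatgpt", 16, 19),
    ("control", 13, 14),
    ("control", 10, 9),
    ("control", 15, 16),
]

def mean_overconfidence(records, group):
    """Average calibration error (estimated - actual) for one group.
    Positive values mean participants overestimated their performance."""
    return statistics.mean(est - act for g, act, est in records if g == group)

for group in ("chatgpt", "control"):
    print(f"{group}: mean overconfidence = {mean_overconfidence(results, group):+.2f}")
```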
The Root Cause: Minimal Engagement
The study identified a critical problem: users typically submitted questions to ChatGPT only once, accepting results without verification or deeper exploration. This “cognitive offloading” eliminated the feedback mechanisms necessary for accurate self-assessment.
Researchers observed that users “just thought the AI would solve things for them,” leading to a dangerous disconnect between perceived and actual performance. This minimal engagement pattern prevents users from developing the critical evaluation skills needed to work effectively with AI tools.
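Purely as an illustration (none of this comes from the study's materials, and `ask_llm` is a hypothetical stand-in for any chat-completion call), the observed one-shot pattern and a minimally more engaged alternative might look like this:

```python
def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in for a chat-completion API call."""
    return "(model output would appear here)"

question = "Which answer choice most weakens the argument? ..."

# The pattern the study observed: a single prompt, result accepted as-is.
answer = ask_llm(question)

# A minimally more engaged pattern: a second pass that asks for a critique,
# restoring a feedback signal before the answer is accepted.
answer = ask_llm(question)
critique = ask_llm(
    f"Question: {question}\nProposed answer: {answer}\n"
    "Point out any flaws in this answer's reasoning before I rely on it."
)
```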
Looking Forward
Doctoral researcher Daniela da Silva Fernandes proposes a practical solution: AI platforms should “ask the users if they can explain their reasoning further.” This approach would force deeper engagement and promote critical thinking, helping users develop more accurate self-assessment capabilities.
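In software terms, that suggestion could be as simple as a gate that withholds the model's answer until the user has written out their own reasoning. The sketch below is an assumption of mine, not a design from the paper; `ask_llm` and the ten-word threshold are invented for illustration.

```python
def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in for a chat-completion API call."""
    return "(model's answer would appear here)"

def answer_with_reflection(question: str) -> str:
    """Withhold the model's answer until the user explains their own
    reasoning, prompting once more if the explanation looks too thin."""
    model_answer = ask_llm(question)
    reasoning = input("Before seeing the answer, explain your reasoning: ")
    if len(reasoning.split()) < 10:  # arbitrary engagement threshold
        reasoning = input("Can you explain your reasoning further? ")
    return model_answer
```

Even a crude gate like this reinstates the self-explanation step whose absence the researchers linked to miscalibrated confidence.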
The implications are significant for organisations implementing AI tools. Training programmes must address not just technical skills, but also the metacognitive abilities needed to evaluate AI-assisted work critically. Those who consider themselves most proficient with AI may require the most attention in developing verification and validation habits.
The study appears in Computers in Human Behavior (2026), offering valuable insights for anyone deploying AI tools in professional settings.
Source Attribution:
- Source: TechXplore / Aalto University
- Original: https://techxplore.com/news/2025-10-ai-overestimate-cognitive-reveals.html
- Published: 28 October 2025