Abstract: "Optimizing human–AI interaction requires users to reflect on their performance critically, yet little is known about generative AI systems’ effect on users’ metacognitive judgments. In two large-scale studies, we investigate how AI usage is associated with users’ metacognitive monitoring and performance in logical reasoning tasks. Specifically, our paper examines whether people using AI to complete tasks can accurately monitor how well they perform. In Study 1, participants (N = 246) used AI to solve 20 logical reasoning problems from the Law School Admission Test. While their task performance improved by three points compared to a norm population, participants overestimated their task performance by four points. Interestingly, higher AI literacy correlated with lower metacognitive accuracy, suggesting that those with more technical knowledge of AI were more confident but less precise in judging their own performance. Using a computational model, we explored individual differences in metacognitive accuracy and found that the Dunning–Kruger effect, usually observed in this task, ceased to exist with AI use. Study 2 (N = 452) replicates these findings. We discuss how AI levels cognitive and metacognitive performance in human–AI interaction and consider the consequences of performance overestimation for designing interactive AI systems that foster accurate self-monitoring, avoid overreliance, and enhance cognitive performance."
Study:
AI makes you smarter but none the wiser: The disconnect between performance and metacognition - https://www.sciencedirect.com/science/article/abs/pii/S07475... | https://doi.org/10.1016/j.chb.2025.108779
Original article: "AI makes you smarter but none the wiser: The disconnect between performance and metacognition" - https://www.sciencedirect.com/science/article/pii/S074756322...
Abstract: "Optimizing human–AI interaction requires users to reflect on their performance critically, yet little is known about generative AI systems’ effect on users’ metacognitive judgments. In two large-scale studies, we investigate how AI usage is associated with users’ metacognitive monitoring and performance in logical reasoning tasks. Specifically, our paper examines whether people using AI to complete tasks can accurately monitor how well they perform. In Study 1, participants (N = 246) used AI to solve 20 logical reasoning problems from the Law School Admission Test. While their task performance improved by three points compared to a norm population, participants overestimated their task performance by four points. Interestingly, higher AI literacy correlated with lower metacognitive accuracy, suggesting that those with more technical knowledge of AI were more confident but less precise in judging their own performance. Using a computational model, we explored individual differences in metacognitive accuracy and found that the Dunning–Kruger effect, usually observed in this task, ceased to exist with AI use. Study 2 (N = 452) replicates these findings. We discuss how AI levels cognitive and metacognitive performance in human–AI interaction and consider the consequences of performance overestimation for designing interactive AI systems that foster accurate self-monitoring, avoid overreliance, and enhance cognitive performance."
[dead]