13 comments
  • I'll assess them: they are incompetent and talentless.

    That'll be $20.

  • The results of the second study mirrored the first. The monetary incentive did not correct the overestimation bias. The group using AI continued to perform better than the unaided group but persisted in overestimating their scores. The unaided group showed the classic Dunning-Kruger pattern, where the least skilled participants showed the most bias. The AI group again showed a uniform bias, confirming that the technology fundamentally shifts how users perceive their competence.

    So it's only high performers that are affected, then, no? I also wish the article would mention the average bias for the control group. I know the curve looks different, but it sounds like they're probably only talking about a single answer's worth of difference between the groups, and with only ~600 participants that doesn't seem that significant.
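    Whether a one-answer gap is statistically detectable at n≈600 really depends on the spread of the bias scores, which the article doesn't give. A quick Welch's t-statistic sketch with purely made-up numbers (groups of 300, SD of 3 points) shows it can go either way:

    ```python
    import math

    def welch_t(mean1, sd1, n1, mean2, sd2, n2):
        # Welch's two-sample t statistic (does not assume equal variances)
        se = math.sqrt(sd1**2 / n1 + sd2**2 / n2)
        return (mean1 - mean2) / se

    # Illustrative numbers only, NOT from the paper: AI group overestimates
    # by 4.0 points on average, control by 3.0, SD 3.0 in both, 300 per group.
    t = welch_t(4.0, 3.0, 300, 3.0, 3.0, 300)
    print(round(t, 2))  # ~4.08, well past the usual 1.96 threshold
    ```

    With those assumed SDs a single-point difference would actually be quite significant at that sample size; it only washes out if the per-person bias is much noisier.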

    The researchers noted that most participants acted as passive recipients of information. They frequently copied and pasted questions into the chat and accepted the AI’s output without significant challenge or verification. Only a small fraction of users treated the AI as a collaborative partner or a tool for double-checking their own logic.

    So then it's possible that they correctly assessed that they're worse at the test than the AI, as established earlier in the article. That seems pretty important. I'm sure it's covered in the actual paper, but I can only access the article.
