Anthropic's AI Fluency Index: polished AI output makes users stop checking for errors
Anthropic analyzed nearly 10,000 anonymized Claude conversations and found that the more polished AI output appears, the less users verify its accuracy. When Claude produced artifacts such as apps or documents, fact-checking dropped by 3.7 percentage points and questioning of the output's arguments fell by 3.1 points.
Users who iterated on their prompts questioned the AI's reasoning 5.6 times more often and spotted missing context 4 times more often. The finding suggests that AI fluency isn't just a matter of prompting skill but of maintaining critical engagement as outputs become more convincing.
View full digest for February 24, 2026