Center for Practical AI
Educators Guide · Literacy Risk 3

Teaching The Deference Reflex

Over-reliance on AI, algorithm aversion, and what calibrated trust in AI systems actually looks like in practice.

The Deference Reflex is the "hinge" between Literacy and Fluency risks because it manifests differently depending on frequency of use. For infrequent users, the risk is uncritical deference — treating AI output as authoritative simply because it came from AI. For heavy users, the risk is skill atrophy — the gradual erosion of the capacity to perform tasks without AI assistance.

The CHI 2025 Microsoft/CMU study (N=319, 936 tasks) provides the clearest current evidence: cognitive engagement with AI tools declines with use over time, and higher confidence in the AI predicts deference even when the AI is wrong. Learners in high-AI-use environments may not recognize this pattern in themselves because the erosion is gradual.

The pedagogical challenge is that "use AI more skeptically" is not actionable advice. What is actionable:

  • Form your own answer first, then evaluate AI's answer against yours.
  • Maintain periodic AI-free practice in core skills.
  • Calibrate trust to track record by domain, not to confidence of delivery.

Learning objectives

  • Distinguish over-delegation, algorithm aversion, and calibrated reliance as three distinct response patterns.
  • Apply the 'own answer first' habit to reduce the cognitive pull of AI confidence.
  • Identify domains where learners have strong independent judgment vs. domains where deference is more appropriate.
  • Articulate a personal calibration strategy for their most-used AI tools.

Opening (no prior reading required)

In the past week, how many times did you change your answer after seeing what AI said? For each case: was the change warranted? How do you know?

On automation complacency

Bainbridge (1983) argued that automating a task creates a new danger: the humans left to monitor the automation gradually become less capable of overriding it when it fails. How does this apply to cognitive tasks automated by AI? What would 'cognitive complacency' look like in your field?

The calibration challenge

Calibrated reliance means trusting AI more in domains where it's accurate and less where it's not. What would you need to know about an AI system to calibrate appropriately? Do you currently have that information for the AI tools you use most?

Skill atrophy

The Microsoft/CMU study suggests that cognitive engagement with AI tools declines over time. In your own primary skills — writing, analysis, coding, clinical judgment — could you tell if you were getting worse? What would the signal be?

Calibration Test (class or individual, ~15 min)

Run the Calibration Test individually or project it for the class. After results are in, group students by their reliance profile and have each group discuss: does this profile match your self-image? What questions produced the most surprising deference patterns?

Own-answer-first experiment (~1 week)

Have students commit to writing their own answer before querying AI on any task for one week. At week's end, discuss: how often was your initial answer better than AI's? Worse? How did forming your own answer change the quality of your evaluation of AI's answer?

AI-free day (~1 day)

Assign one AI-free workday and have students keep a brief log of moments when they reached for AI and what they did instead. Discuss: what did you discover about what you still know? What felt genuinely difficult? Where did AI-free work produce better outcomes than you expected?

Track record audit (~30 min)

Have students pick their most-used AI tool and evaluate its accuracy across 20 recent outputs in their domain. Where is it reliably right? Where is it unreliably wrong? Build a domain-specific reliability map. This is calibration in practice.
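To make the audit concrete, the tally step can be automated. Below is a minimal Python sketch, assuming students hand-label each checked output as a (domain, correct) pair; the audit_log entries, domain names, and the reliability_map helper are illustrative placeholders, not a prescribed format.

    # reliability_map.py - minimal sketch of the track record audit tally.
    # Assumes each audited AI output was hand-labeled as (domain, correct);
    # the sample log below is illustrative, not real measurement data.
    from collections import defaultdict

    # Hypothetical audit log: one entry per AI output the student checked.
    audit_log = [
        ("code syntax", True), ("code syntax", True), ("code syntax", True),
        ("code syntax", False), ("citations", False), ("citations", True),
        ("citations", False), ("summaries", True), ("summaries", True),
        ("summaries", True),
    ]

    def reliability_map(log):
        """Aggregate per-domain hit rates from (domain, correct) records."""
        tally = defaultdict(lambda: [0, 0])  # domain -> [correct, total]
        for domain, correct in log:
            tally[domain][0] += int(correct)
            tally[domain][1] += 1
        return {d: (c, n, c / n) for d, (c, n) in tally.items()}

    for domain, (hits, total, rate) in sorted(reliability_map(audit_log).items()):
        # Per-domain rates from ~20 outputs are rough estimates, not ground
        # truth; the point is to make trust domain-specific, not uniform.
        note = " (small sample - treat as provisional)" if total < 10 else ""
        print(f"{domain}: {hits}/{total} correct ({rate:.0%}){note}")

The small samples are themselves a teaching point: with 20 outputs spread across a few domains, per-domain rates are noisy, which opens a discussion of how much evidence calibration actually requires.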

Misconception: 'Using AI is always more efficient than doing it myself'

Reframe: The efficiency gain from AI is real but unevenly distributed. For tasks that require verification, repeated prompting, and iteration, AI can increase total process time rather than reduce it. The overhead is invisible when you don't count verification.

Misconception: 'If AI is usually right, it makes sense to just follow it'

Reframe: The problem is identifying the cases where AI is wrong. Confidence of delivery doesn't reliably signal accuracy, and the CHI 2025 study found that greater confidence in the AI predicted more deference, even when the AI was wrong.

Misconception: 'Avoiding deference means being skeptical of AI across the board'

Reframe: Algorithm aversion — rejecting AI systematically after seeing it err — is also miscalibrated. The goal is domain-specific trust calibrated to actual accuracy, not uniform skepticism.