Six Risks of Under-Developed AI Proficiency
A research-backed map of how AI use goes wrong — from magical thinking about how AI works, to cognitive overload from managing too many AI tools at once.
Based on peer-reviewed cognitive psychology, human-computer interaction research, and AI-specific empirical studies (2020–2026). Evidence quality varies by domain; each guide names where findings are robust and where they remain preliminary.
Two bands. Six risks.
Literacy risks involve conceptual distortions in how users understand AI. Fluency risks involve practice deformations — habits that change how users think and relate as AI use accumulates.
Literacy risks are mostly invisible until failure. They turn on three questions: who or what users think they're talking to, who bears responsibility for outputs, and whether users can judge when to engage AI at all.
They benefit most from explicit conceptual instruction and case-based diagnostics. The Wizard Problem, Moral Offloading, and the Deference Reflex all build on decades of cognitive psychology research (Weizenbaum 1966; Bandura 1999; Bainbridge 1983).
Fluency risks are visible only at aggregate timescales — a single exposure may be harmless; accumulated exposure shifts perception. They involve how repeated AI interaction changes the user: epistemic habits, relationship patterns, cognitive capacity.
They benefit most from longitudinal practices and reflective routines. The Mirror Trap and Fluency Trap have growing empirical support; Brain Fry is the most recently named and carries the most preliminary evidence.
Risk 3 (The Deference Reflex) is the hinge.
Mechanistically it's a practice habit — a reliance behavior that degrades through under-exercise. Conceptually it depends on proper mental models from Risks 1 and 2, which is why it sits at the boundary: a test of Literacy and the entry point to Fluency risk.
Risks 1–3 · Conceptual distortions
The Wizard Problem
Users oscillate between magical over-trust and reflexive dismissal without workable mental models of how generative AI actually produces text.
Read guide →
Moral Offloading
Algorithmic recommendations diffuse human accountability — 'the AI recommended it' erodes moral reasoning and the link between control and responsibility.
Read guide →
The Deference Reflex
The judgment skill of deciding when to engage AI and when to think independently atrophies through under-exercise, producing both omission errors (missing what the AI fails to catch) and commission errors (acting on its bad advice).
Read guide →
Risks 4–6 · Practice deformations
The Mirror Trap
Sycophantic AI agreement creates feedback loops where ideas lack stress-testing, and parasocial bonding can crowd out human relationships for heavy users.
Read guide →
The Fluency Trap
Polished AI prose registers as credible. Surface fluency hijacks truth judgments through the same cognitive mechanism that evolved to link ease-of-processing with accuracy.
Read guide →
Brain Fry
Cognitive load overflow from managing parallel AI threads, agents, and verification tasks turns users into foremen supervising unreliable workers.
Read guide →
Which risks are you most exposed to?
18 questions across all six domains. Takes about 8 minutes. Produces a personalized exposure profile with recommended guides to read first.
Take the self-assessment →
Evidence quality
Where the six risks sit on the evidence spectrum.
Each risk has precedent in decades of cognitive psychology and human-computer interaction research — the ELIZA effect, moral disengagement, automation complacency, illusory truth. But AI doesn't simply apply these risks in a new domain. It mutates the mechanism in ways that make the familiar remedies insufficient.
Automation complacency classically meant trusting an instrument reading over direct observation; in AI contexts, the deference extends to reasoning, argument, and judgment — domains where human cognition is least accustomed to ceding ground. Illusory truth required repeated exposure to a traceable claim from an identifiable source; AI generates confident, polished prose on demand with no document to verify against. Moral diffusion classically required an identifiable chain of command through which responsibility could dissolve; AI adds a developer node so far upstream — and shielded by terms of service — that the accountability gap becomes structural rather than situational. Risks 4 and 6 are newer still: the sycophancy mechanism was only characterized after RLHF became standard practice post-2022, and cognitive overload from multi-tool orchestration was only named in 2026.
Each domain guide names the limits of preliminary findings rather than treating them as settled science. This protects the curriculum from moral-panic critique while modeling the epistemic practice the framework is designed to teach.
Framework-level actions.
These apply across all six domains. Each individual guide has its own more specific action ladder.
For yourself
- Build a workable mental model: AI predicts likely text, not correct text. Read one plain-language explainer of how large language models work (see the sketch after this list).
- Before submitting any AI output, own the claim explicitly: 'I verified this and I stand behind it.'
- Set one AI-free thinking period per day — a task you complete before opening a chat window.
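A minimal sketch of "likely text, not correct text," assuming invented frequency counts rather than any real model or corpus: a predictor that always picks the statistically most common continuation will confidently output a popular answer whether or not it is true. Real LLMs are vastly more sophisticated, but the underlying objective — maximize likelihood, not truth — is the same.

```python
# Toy illustration (not a real LLM): a next-word predictor that greedily
# picks the most frequent continuation seen in hypothetical training data.
from collections import Counter

# Made-up counts: suppose the corpus mentions Sydney (the largest city)
# far more often than Canberra (the actual capital) after this phrase.
continuations = Counter({"Sydney": 870, "Canberra": 120, "Melbourne": 10})

def predict_next(counts: Counter) -> str:
    """Return the single most likely next word, as greedy decoding would."""
    return counts.most_common(1)[0][0]

prompt = "The capital of Australia is"
print(prompt, predict_next(continuations))
# -> "The capital of Australia is Sydney"  (likely text, not correct text)
```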
For an organization
- Add AI literacy to onboarding. Cover at least: how LLMs work, what sycophancy is, and what verification looks like.
- Establish a verification norm: AI outputs used in decisions should have a named human who checked them.
- Audit your AI tools for sycophancy exposure — which ones are most likely to agree with whatever users say?
For educators
- Teach mechanism literacy explicitly. Students who understand statistical text generation resist anthropomorphism better than students who are told 'AI can be wrong.'
- Use the Mirror Trap and Fluency Trap as media literacy units — existing critical thinking curricula apply directly.
- Assign tasks that require forming a position before using AI, then reflecting on what changed and why.
For policy
- Require AI literacy standards in K-12 curricula that go beyond 'use responsibly' — mechanism literacy, verification skills, and ethical accountability belong in core standards.
- Fund longitudinal research on cognitive effects of AI reliance — current evidence on deference and mental health effects remains largely correlational.
- Establish transparency requirements for sycophancy testing in consumer AI products.
Teaching AI proficiency?
Facilitation guide for the full six-domain framework — learning objectives, modular session plans, assessment rubrics, and notes on sequencing for different audiences.
Primary sources for this framework.
Want CPAI to deliver AI proficiency training to your organization?
We work with schools, corporations, and nonprofits — running workshops, training internal facilitators, and adapting the six-domain framework for specific audiences.