Center for Practical AI
Educators' Guide · Fluency Risk 6

Teaching Brain Fry

Cognitive load from AI orchestration, attention residue, and the foreman problem — managing multiple AI threads at the cost of actual thinking.

Brain Fry is the least intuitive of the six risks because it runs counter to the premise of AI as a labor-saving technology. Managing AI tools adds a specific kind of load — orchestration, context-switching, verification overhead — that can exceed the load it removes, particularly for heavy users managing multiple simultaneous AI tasks.

The Bedard et al. (HBR 2026) research, though from a consulting context rather than peer-reviewed psychology, articulates the "foreman problem" clearly: a knowledge worker managing 4+ AI tools simultaneously spends cognitive resources on coordinating, verifying, and prompting rather than on thinking. This is qualitatively different from single-task AI use.

Sweller's cognitive load theory provides the conceptual foundation: intrinsic load (the complexity of the actual task), extraneous load (the format and presentation of the task), and germane load (the effort that builds durable understanding). AI orchestration overhead is primarily extraneous load — it doesn't build capability, it just burns capacity.
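One way to make the three-category framing concrete is to log a work session and tally minutes by load type. The sketch below is illustrative only: the activity names, the category assignments, and the minutes are hypothetical examples, and mapping an activity to a load category is itself a judgment call.

```python
# Hypothetical mapping from activity type to Sweller's load category.
LOAD_CATEGORY = {
    "solving the problem": "intrinsic",
    "prompting a tool": "extraneous",
    "switching between tools": "extraneous",
    "verifying AI output": "extraneous",   # arguably germane if it builds understanding
    "writing up what was learned": "germane",
}

def load_breakdown(session):
    """session: list of (activity, minutes). Returns minutes per load category."""
    totals = {"intrinsic": 0, "extraneous": 0, "germane": 0}
    for activity, minutes in session:
        totals[LOAD_CATEGORY[activity]] += minutes
    return totals

# An invented 100-minute session, for illustration.
session = [
    ("solving the problem", 40),
    ("prompting a tool", 15),
    ("switching between tools", 10),
    ("verifying AI output", 20),
    ("writing up what was learned", 15),
]

totals = load_breakdown(session)
extraneous_share = totals["extraneous"] / sum(totals.values())
print(totals)                     # minutes per load category
print(f"{extraneous_share:.0%}")  # fraction of the session that was overhead
```

In this invented session, nearly half the time is extraneous load, which is the pattern the "brain fry" framing points at: capacity burned without building capability.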

Multi-tool AI use is standard practice in software engineering, consulting, content work, and research — exactly the environments where the foreman problem compounds fastest.

Learning objectives

  1. Apply cognitive load theory (intrinsic, extraneous, germane) to their own AI workflow.
  2. Identify patterns in their AI use that produce high orchestration overhead relative to output value.
  3. Describe attention residue and its effects on cognitive performance after task-switching.
  4. Design a personal AI workflow that reduces extraneous load while maintaining output quality.

Opening

Walk students through a typical AI-heavy work session: how many tools, how many tabs, how many unresolved threads. Ask: what percentage of the total session time are you thinking about the actual work vs. managing the AI tools? Would you design this differently if you started from scratch?

On cognitive load theory

Sweller distinguishes intrinsic load (the task itself), extraneous load (format/presentation overhead), and germane load (the effort that builds understanding). Where does AI orchestration fit? Is verifying AI output intrinsic, extraneous, or germane load?

The productivity paradox

RCT evidence (the GitHub Copilot trial; Brynjolfsson et al. 2023) shows productivity gains for AI-assisted work. BCG/HBR consulting research suggests that heavy orchestration is cognitively expensive. How do you reconcile these findings? What type of AI use produces the gains, and what type produces the 'brain fry' pattern?

Attention residue

Leroy (2009) found that incomplete tasks leave residual cognitive preoccupation that reduces performance on subsequent tasks. How many AI threads do you typically leave open when you switch tasks? What is the cumulative attention residue from a full day of this?

Workflow Audit (individual or class, ~5 min)

Have students run the Workflow Audit individually. Share the profile distribution with the class anonymously. Discuss: were students surprised by their profile? Which category scores were highest? What would they change about their current workflow?

Tool count audit (~20 min)

Have students list every AI tool they actively used in the past week, then for each: what specific problem does it solve? What would you do without it? Is the output worth the orchestration overhead? The goal is not to reduce tool count, but to make the decision to use each tool conscious rather than habitual.
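The audit can be given as a structured template rather than a free-form list. The sketch below is one possible shape, with hypothetical tool names and made-up 1–5 ratings; the `worth_it` heuristic is a deliberately crude prompt for discussion, not a real decision rule.

```python
from dataclasses import dataclass

@dataclass
class ToolEntry:
    name: str                 # hypothetical tool; students fill in their own
    problem_solved: str       # what specific problem does it solve?
    fallback: str             # what would you do without it?
    value: int                # output value, 1 (low) to 5 (high)
    overhead: int             # orchestration overhead, 1 (low) to 5 (high)

    def worth_it(self) -> bool:
        # Crude discussion heuristic: is the value at least covering the overhead?
        return self.value >= self.overhead

audit = [
    ToolEntry("code assistant", "boilerplate and refactors", "write it by hand", 4, 2),
    ToolEntry("chat summarizer", "meeting notes", "skim the transcript", 2, 3),
]

for entry in audit:
    verdict = "conscious choice" if entry.worth_it() else "re-examine: habitual?"
    print(f"{entry.name}: {verdict}")
```

The point of the template matches the exercise goal: filling in `fallback` and `overhead` forces the use-or-drop decision to be explicit rather than habitual.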

Attention residue experiment (~1 week)

Have students commit to closing all AI threads before switching tasks for one week. They should log: how often did they want to leave a thread open? What did they notice about focus in the subsequent task? Compare to a control week of normal practice. Even anecdotal data here is useful for building awareness.

Foreman problem mapping (~25 min)

Have students diagram a real recent work project that involved multiple AI tools. Map the workflow: what did each tool do, what did they need to do to manage it (prompting, reviewing, iterating, verifying), and what was the ratio of 'thinking about the problem' to 'managing the tools'? Discuss what a redesigned workflow might look like.
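The diagrammed workflow can also be reduced to a single number. A minimal sketch, assuming students tag each step as "thinking" (about the problem) or "managing" (prompting, reviewing, iterating, verifying a tool) with rough minutes; the steps and durations below are invented for illustration.

```python
# Hypothetical workflow log: (step, kind, minutes).
steps = [
    ("frame the problem", "thinking", 20),
    ("prompt tool A", "managing", 5),
    ("review tool A output", "managing", 10),
    ("prompt tool B", "managing", 5),
    ("verify combined result", "managing", 15),
    ("integrate and decide next step", "thinking", 10),
]

def foreman_ratio(steps):
    """Minutes thinking about the problem per minute spent managing tools."""
    thinking = sum(m for _, kind, m in steps if kind == "thinking")
    managing = sum(m for _, kind, m in steps if kind == "managing")
    return thinking / managing  # < 1 means more time managing than thinking

print(f"thinking:managing ratio = {foreman_ratio(steps):.2f}")
```

A ratio below 1, as in this invented example, is the foreman pattern in miniature: the worker is coordinating tools more than thinking about the problem, which makes a concrete anchor for the redesign discussion.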

Important context

The Bedard et al. (HBR March 2026) research that grounds the 'Brain Fry' framing is consulting research, not peer-reviewed cognitive science. Its mechanisms are described qualitatively, not experimentally confirmed. Tell students what's established (cognitive load theory, attention residue, the Copilot productivity data) and what's consulting-research framing (heavy AI orchestration as cognitively depleting in the specific ways described).

The cognitive load theory foundation (Sweller 2011) is peer-reviewed and robust. The attention residue findings (Leroy 2009, 2016) are peer-reviewed. The specific application to AI orchestration overhead is theoretically well-grounded but not yet extensively experimentally tested.

Misconception: 'Using more AI tools is always better — more AI means less work'

Reframe: This is the productivity paradox. Each additional AI tool adds orchestration overhead (prompting, reviewing, switching, verifying). At some point, total overhead exceeds total load reduction. The optimum depends on the task, the user, and the tool quality.

Misconception: 'If I'm tired after AI-heavy work, it's just normal tiredness'

Reframe: The research suggests AI orchestration produces a qualitatively specific kind of fatigue — high context-switching, low deep-work ratio. It may look like tiredness but reflects depleted attentional and decision-making resources specifically, not general exhaustion.

Misconception: 'Closing AI threads before switching tasks is inefficient'

Reframe: Leroy's attention residue findings suggest the opposite: incomplete-task preoccupation reduces performance on subsequent tasks. Closing threads completely before switching may feel like it costs time but produces better performance on the next task.