Center for Practical AI
For Educators

Classroom-ready resources on AI and online safety.

Research-backed guidance for K–12 educators on doxxing, voluntary disclosure, and the AI threats facing students. Paired with the parent guide at /education/doxxing-and-disclosure.

Interactive Tool

"The Profile"
Interactive Simulation

A classroom-ready, projection-friendly simulation that walks students through how six ordinary social media posts build a complete, exploitable profile. Includes a Facilitator Mode with discussion questions and pause points designed for group use.

  • 5 acts from "normal day" to network reveal to threat scenarios
  • Privacy choices replay — identifiability drops from 94% to 12%
  • Built-in facilitator pause points and discussion questions
  • Anchored in NCMEC, Thorn, ADL, and Sweeney research
Open the simulation →

Suggested classroom sequence

  1. Turn on Facilitator Mode before projecting. The toggle is on the intro screen and the top nav bar.
  2. Run Acts 1–2 as a group, pausing after each act for the built-in discussion questions.
  3. Let students vote on the scenario for Act 4 (financial sextortion or doxxing/swatting).
  4. Have students complete Act 5 individually on their own devices, then compare identifiability scores.
  5. Close with: "What's one thing you'll change, and one thing you'll tell someone else?"
← Read the full parent guide first

The parent guide covers the research, the named cases, and the persuasion principles. This page covers school-based intervention design, peer leader selection, and the professional resources that matter most.

Digital-citizenship curricula show positive short-term effects on knowledge and self-efficacy. Long-term behavioral evidence remains modest. The most rigorous finding in the literature is that peer-led programs outperform top-down assemblies for adolescent health behaviors — and that program design matters more than program brand.

Intervention Design

Peer leaders, not top-down assemblies.

A 2016 study by Paluck, Shepherd, and Aronow across 56 middle schools found that social-norm interventions succeeded when they used social referents — students who are well-connected across social groups, not necessarily the most popular — as messengers. Schools that selected peer leaders by social-network centrality (rather than popularity or grades) produced measurable school-wide reductions in conflict and harassment.

When building a digital-safety peer program, do not ask teachers to nominate "good students." Ask students to nominate the classmates they go to for advice. The students who appear most frequently across those nominations are your social referents.
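To make the selection step concrete, here is a minimal sketch in Python of counting advice nominations to surface candidate social referents. The student names, the survey data, and the top-3 cutoff are illustrative assumptions, not part of the Paluck, Shepherd & Aronow protocol.

```python
from collections import Counter

# Hypothetical survey responses: each student lists the classmates
# they actually go to for advice (names invented for illustration).
nominations = {
    "Avery":  ["Jordan", "Sam"],
    "Blake":  ["Jordan", "Riley"],
    "Casey":  ["Sam", "Jordan"],
    "Drew":   ["Riley", "Jordan"],
    "Elliot": ["Sam", "Casey"],
}

# Count how often each student is named across all responses
# (their in-degree in the advice network).
counts = Counter(name for named in nominations.values() for name in named)

# Treat the most frequently nominated students as candidate social
# referents; the top-3 cutoff here is arbitrary and illustrative.
referents = [student for student, _ in counts.most_common(3)]
print(referents)  # e.g. ['Jordan', 'Sam', 'Riley']
```

The point of the sketch is that the signal comes from who students name, not from adult judgment: the count replaces teacher nomination entirely.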

Peer programs with evidence behind them

  • Sources of Strength: Suicide prevention through peer-led asset messaging. Strong RCT evidence.
  • Thorn's NoFiltr: Peer-to-peer online safety education targeting sextortion and exploitation.
  • Common Sense Digital Citizenship: Scope-and-sequence curriculum, grades K–12. A cluster RCT of Google's Be Internet Awesome (BIA) variant showed positive effects on knowledge and self-efficacy (n=1,072 students, 14 schools, 2023).

Source: Paluck, Shepherd & Aronow (2016), Changing climates of conflict: A social network experiment in 56 schools, PNAS.

School Incidents

Deepfake imagery and sextortion: what schools must know.

The Westfield, NJ case (October 2023), in which male students generated AI nudes of classmates from yearbook photos using the app Clothoff, has since been repeated in school districts across the country, including Beverly Hills (CA) and Aledo (TX). No federal criminal statute explicitly covered this conduct at the time of the Westfield case; the TAKE IT DOWN Act (signed 2025) has since created a federal framework.

When a student reports deepfake NCII — immediate steps

  1. Involve the school counselor before the principal. The student's emotional safety comes first; administrative process second.
  2. Do not ask the student to show you the image, find the image, or describe it in detail. Documenting existence is sufficient; re-traumatization is not.
  3. Refer the family to NCMEC Take It Down (takeitdown.ncmec.org) — the service can hash and remove images from participating platforms without the family submitting the image.
  4. Contact the district Title IX coordinator. Nonconsensual intimate imagery targeting students on the basis of sex may trigger Title IX obligations.
  5. Report the app or platform used to generate the image to the NCMEC CyberTipline (1-800-843-5678).
  6. Consult your district attorney on notification obligations under state law — many states now have mandatory-reporting requirements for NCII involving minors.

Source: Thorn school guidance; NCMEC educator resources; ISTE recommendations (2024).

AI Companions

AI chatbots in the classroom.

Common Sense Media and Stanford Brainstorm Lab (April 2025) rated Character.AI, Nomi, Replika, and Meta AI "unacceptable" for users under 18. Testers bypassed teen-specific guardrails with minimal effort, elicited sexual roleplay, and found that platforms consistently missed mental-health warning signs — what the researchers called "missed breadcrumbs."

The APA's 2025 Health Advisory on AI and Adolescent Well-Being draws a line between task-focused AI tools (writing assistance, research support) and relationship-oriented AI companions — and schools should too. The risks documented in the Setzer and Raine cases are associated with companion AI, not task AI — but the line between them is eroding as mainstream products add conversational, personalized features.

Three requirements for any AI tool your school uses with students: staff can review interactions on demand; no persistent user profile exists outside your data-processing agreement; the tool is evaluated annually against Common Sense Media or APA guidance.
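One way to operationalize that annual review is a simple checklist record per tool. The sketch below is illustrative only: the tool names, field labels, and pass/fail logic are assumptions, not a vetted rubric.

```python
from dataclasses import dataclass

@dataclass
class AIToolReview:
    """Annual review record for one AI tool used with students (illustrative)."""
    name: str
    staff_can_review_interactions: bool  # staff can review interactions on demand
    profile_within_dpa_only: bool        # no persistent profile outside the data-processing agreement
    evaluated_this_year: bool            # checked against Common Sense Media or APA guidance

    def approved(self) -> bool:
        # A tool must meet all three requirements to stay in use.
        return (self.staff_can_review_interactions
                and self.profile_within_dpa_only
                and self.evaluated_this_year)

# Placeholder entries, not real product ratings.
reviews = [
    AIToolReview("Writing assistant A", True, True, True),
    AIToolReview("Companion chatbot B", False, False, True),
]
for r in reviews:
    print(f"{r.name}: {'approved' if r.approved() else 'needs remediation'}")
```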

Bring CPAI education to your school

We partner with districts, libraries, and nonprofits to deliver research-based AI and digital-safety education to educators and students.