He was 17. It started with one message. He was gone the same night.
On March 25, 2022, Jordan DeMay — a 17-year-old from Marquette, Michigan — received a direct message on Instagram from an account posing as a girl. He was persuaded to share an intimate image. Within minutes, the extortion began. Organized scammers, later identified as members of a criminal network in Nigeria, threatened to send the image to his friends and family unless he paid immediately. Jordan died by suicide that night. His parents, John and Donna DeMay, have since testified before the U.S. Senate and partnered with NCMEC to drive protective legislation in his memory.
This started with one image shared in a conversation that felt private.
30 seconds
That's the number behind this guide — roughly how long an AI data-aggregation tool needs to compile your home address, workplace, and daily schedule from public sources. These tools require no hacking and no special access. Most of what they find was posted voluntarily.
Try It
"The Profile" — Interactive Simulation
Follow Alex through a normal day and watch how six ordinary posts build a complete, exploitable profile. See who uses the data, how a threat unfolds, and what different choices would have changed. Includes a Facilitator Mode for classroom use.
Open the simulation →
What the simulation covers
- The aggregation problem (Sweeney 87% research)
- Data brokers, AI face search, and sextortion rings
- A step-by-step financial sextortion or doxxing scenario
- Privacy choices that reduce identifiability from 94% to 12%
- The full action plan with NCMEC and FBI IC3 contacts
The bigger risk isn't what someone takes from your kid.
It's what your kid posts. Voluntarily. Every day.
Nearly all U.S. teens 13–17 use the internet daily.
Pew Research Center, Teens, Social Media and Technology 2024
90% of teens use YouTube. 63% use TikTok. 61% use Instagram. 55% use Snapchat. The platforms where they spend this time are engineered to maximize sharing — and their developing brains are the most susceptible audience those platforms have ever found.
Context collapse — a concept from researcher danah boyd — means that when a teen posts, they imagine they are speaking to their peers. The actual audience includes employers, college admissions officers, predators, and adversaries simultaneously. What feels like a private moment to 20 friends is a public record to anyone.
Source: danah boyd, It's Complicated: The Social Lives of Networked Teens, Yale University Press, 2014; Marwick & boyd, 2011.
What teens share before lunch on a typical day
- ZIP code, date of birth, and sex — gaming sign-up forms ask for all three
- Their location right now, via an Instagram Story
- School name and mascot, visible in every TikTok background
- How they're feeling, in a Discord DM to someone they met last week
- Face, voice, and rough neighborhood — in a single YouTube video
- Their deepest worries, typed to an AI chatbot that stores the conversation indefinitely
Each item is harmless in isolation. Combined, they are a profile.
Predators identify vulnerable youth through public signals of loneliness, family conflict, or distress. Emotional disclosure — posting about a hard week, a fight with a parent, feeling left out — is a grooming signal. The platforms are designed to surface exactly this kind of content. Predators have learned to use them accordingly.
Sextortion is now disproportionately targeting boys, and it is happening fast.
Online enticement reports to NCMEC rose 192% from 2023 to 2024.
Financial sextortion reports to NCMEC number in the tens of thousands per year. At least 36 U.S. teenage boys have died by suicide after sextortion since 2021 — many within hours of first contact, as Jordan did.
In 2024, boys ages 9–12 reported online sexual interactions at the highest rate Thorn has measured in five years.
Perpetrators are predominantly organized criminal groups in Nigeria and Côte d'Ivoire. Payment vectors: gift cards and Cash App. 38% of victims pay; 27% of those who pay are extorted again. Girls remain at high risk of relational sextortion — historically the dominant form. Boys are now disproportionately targeted in financial sextortion, a pattern that emerged in 2022 and has continued to accelerate.
Say this. Now. Before anything happens.
“If anyone — anyone — pressures you with a photo, a message, or a screenshot, come to me immediately. You will not be in trouble. I will help you.”
Shame is the perpetrator's central tool. Parental shaming after the fact is the largest barrier to disclosure — and the reason most kids face these situations alone. This sentence, said before anything happens, is the single highest-value action on this page.
If it happens
- Do not pay. Payment rarely stops the extortion and often escalates it.
- Do not delete the messages — they are evidence.
- Report to the platform immediately. Report to the NCMEC CyberTipline (1-800-843-5678).
- NCMEC's Take It Down service can remove images from participating platforms without you or your teen submitting the image.
- If your teen is in crisis: call or text 988.
Three conversations to have this week
Say it out loud.
"If anything happens online — anything — come to me. You will not be in trouble." Say it before there is a problem.
Ask, don't lecture.
Ask your teen what they talk to ChatGPT about. Ask who they DM on Discord. Listen. Don't respond with a lecture.
Create a safe word.
Pick a word your family uses to verify a real phone call. AI can clone a voice from 3 seconds of audio. A safe word costs nothing.
More detailed guidance is in the What Actually Works section below.
Why “just be careful” fails.
The platforms are built to exploit exactly the developmental window your teen is in. That is what "just be careful" is up against.
The adolescent brain
The part of the brain that seeks reward and peer approval matures years before the part that weighs consequences — a gap that runs from roughly 12 to 24. In a 2011 fMRI study, Chein, Albert, O'Brien, Uckert, and Steinberg found that the mere presence of peers activated reward circuitry in adolescents (but not adults) during a driving simulation, measurably increasing risk-taking. Your teen is not being reckless. Their brain is operating exactly as it was designed to — in a world that did not include infinite-scroll apps.
Source: Chein et al., Developmental Science, 2011 · Laurence Steinberg, Temple University · B.J. Casey · Frances Jensen
Platform engineering
Variable-reinforcement mechanics, infinite scroll, and algorithmic amplification of emotional content are not side effects of social platforms — they are the design. Frances Haugen's 2021 Senate testimony and the internal Meta research she disclosed showed that the company's own researchers found: 13.5% of U.K. teen girls said Instagram worsened suicidal thoughts; 17% said it worsened eating disorders; 32% of teen girls who already felt bad about their bodies said Instagram made them feel worse. These were internal findings. Meta did not act on them.
Source: Wall Street Journal Facebook Files, 2021; NPR, October 2021.
The Surgeon General's position
In May 2023, U.S. Surgeon General Dr. Vivek Murthy concluded: “We cannot conclude that social media is sufficiently safe for children and adolescents.” In June 2024 he called on Congress to require a surgeon general's warning label on social-media platforms — the same mechanism used to communicate the risks of tobacco.
Source: U.S. Surgeon General, Social Media and Youth Mental Health Advisory, 2023
Your teen didn't share their address.
They shared three things that add up to it. What they post is raw material. Anyone can assemble it.
87% of Americans are uniquely identifiable by the combination of ZIP code, date of birth, and sex alone.
Latanya Sweeney, Simple Demographics Often Identify People Uniquely, Carnegie Mellon, 2000
Latanya Sweeney — then at Carnegie Mellon, now a professor at Harvard — proved this using 1990 U.S. Census data, and made her point by mailing Massachusetts Governor William Weld's supposedly anonymous medical record to his office using only those three fields. Most teens hand over the same three facts before lunch: to gaming platforms, app sign-ups, store loyalty programs, and school portals.
No single post is "the" mistake. Risk emerges from the sum. A school mascot in a photo, a 5K route shared on Strava, a tagged family member, a bedroom photo, and a Snapchat streak timestamp — each harmless in isolation, together a stalking dossier. AI now automates the assembly.
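Sweeney's result is, at bottom, a counting argument: date of birth spans tens of thousands of possible days, so within one ZIP code there are more (birth date, sex) combinations than residents, and most people sit alone in theirs. The sketch below makes that concrete with synthetic data — the population size and attribute ranges are illustrative assumptions, not census figures:

```python
import random
from collections import Counter

random.seed(0)

# Hypothetical mid-size ZIP code: 25,000 residents (illustrative, not census data).
N = 25_000

# Each resident gets a date of birth (~29,200 days, i.e. an ~80-year span)
# and a sex (2 values). That is ~58,400 possible combinations -- more than
# twice the number of residents -- so many residents end up being the only
# person in their combination.
people = [(random.randrange(29_200), random.randrange(2)) for _ in range(N)]

counts = Counter(people)
unique = sum(1 for person in people if counts[person] == 1)
print(f"{unique / N:.0%} unique on date of birth + sex within one ZIP code")
```

Even this crude uniform model leaves well over half the synthetic residents unique on two fields plus their ZIP code. Sweeney's 87% came from real data, where birth dates cluster and many ZIP codes are far smaller than 25,000 people — which pushes uniqueness higher, not lower.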
Clara Sorrenti ("Keffals")
Twitch streamer · 2022
In 2022, a trans Twitch streamer was targeted by the harassment forum Kiwi Farms. After being swatted and fleeing to a hotel, harassers cross-referenced a bedsheet pattern visible in a Discord photo to identify the hotel. Pizza deliveries arrived. She moved to another hotel — and they found her again. She eventually relocated from Canada to Northern Ireland.
The bedsheet detail is not incidental. It illustrates the aggregation principle: no single image "gave away" the location. The pattern matched against public hotel photography did. This is what aggregation looks like in practice — and why AI-powered tools now make it faster and cheaper.
When aggregated data becomes a weapon.
How common it is, what the law does and doesn't cover, and what to do if it happens to your family.
Andrew Finch
28 · Wichita, Kansas · December 28, 2017
A dispute over a $1.50 stake in a Call of Duty: WWII match between two online players ended when one of them hired a serial swatter to make a fake hostage call to an address provided by the other. The address belonged to Andrew Finch — a man with no connection to either player. Police shot him on his front porch within seconds of his stepping outside. The swatter received 20 years in federal prison. The two players received 15 and 18 months. The City of Wichita paid a $5 million settlement in 2023.
A substantial share of U.S. teens 13–17 experienced online harassment in the past 12 months.
An estimated 4% of U.S. adults have been doxxed (SafeHome.org Doxxing Survey, 2024 — an estimate from a non-peer-reviewed source).
Severe harassment of U.S. adults rose in 2024, up from 18% in 2023. Transgender adults report the highest rates of any group.
Mental-health impact
A 2018 study by Chen et al. of 2,120 Hong Kong secondary students found significant associations between doxxing victimization and depression, anxiety, and stress — with peer-driven doxxing producing the most severe symptoms. A 2025 study by Hinduja and Patchin in BMC Public Health found a strong positive relationship between cyberbullying victimization and PTSD symptoms in a nationally representative U.S. sample of 2,697 teens 13–17.
Source: Cyberbullying Research Center
Legal landscape (current as of late 2025)
- No single federal anti-doxxing statute exists. Cases are prosecuted under 18 U.S.C. § 875 (interstate threats), § 2261A (cyberstalking), and § 119 (doxxing of federal officials).
- Three states — Alabama, California, and Illinois — have standalone doxxing crimes. About 14 more criminalize the same conduct under different names. (Council of State Governments, October 2025)
- Illinois's Civil Liability for Doxxing Act (Public Act 103-0439, effective January 2024) lets victims sue for economic injury, emotional distress, and life disruption.
- The Personal Privacy Protection Act has passed in 20 states as of 2024.
If your family is being doxxed
Google "Results about you"
Request removal of personal info from search results at myaccount.google.com/data-and-privacy.
Submit data-broker opt-outs
Request removal from Spokeo, WhitePages, Intelius, BeenVerified. Services like DeleteMe can automate this.
California residents: file a Delete Request
Under CCPA, California residents can demand data brokers delete their information.
Document, report, call for help
Screenshot everything with timestamps. Report to the platform. File with FBI IC3 (ic3.gov). If a minor is involved, call NCMEC at 1-800-843-5678.
The AI era: a new threat surface.
AI amplifies every risk on this page.
AI weaponizes aggregation
Facial-recognition search engines
PimEyes and Clearview AI index more than 30 billion public images. For roughly $30/month, anyone can surface photos of a stranger from across the web. A New York Times test on its own journalists turned up decades-old band photos, concert crowd shots, and images from private events that Google reverse-image search did not find.
Source: NPR, October 2023; New York Times.
Voice cloning
Modern tools clone a voice from 3 seconds of audio — easily harvested from a TikTok or Reel. The FBI reported $893 million in AI-related scam losses in 2025, and Deloitte projects $40 billion in annual U.S. AI-enabled fraud losses by 2027. In 2023, Arizona mother Jennifer DeStefano received a call using a cloned voice of her daughter, with the caller demanding $1 million.
Source: FBI; Senator Maggie Hassan, 2025 letters to ElevenLabs, LOVO, Speechify, and VEED.
Francesca Mani
14 · Westfield, NJ · October 2023
Francesca and her classmates discovered that male students had used the app Clothoff to generate AI-produced nude images from their yearbook-style photos. No criminal charges resulted. Francesca became a national advocate, was named to TIME100 AI 2024, and helped drive passage of the federal TAKE IT DOWN Act.
AI-generated child sexual abuse material
The Internet Watch Foundation identified a sharp rise in actionable AI-generated CSAM reports in 2024.
Generative-AI reports to NCMEC's CyberTipline rose from roughly 4,700 in 2023 to 67,000 in 2024 — a more-than-tenfold increase.
What to do about AI-generated imagery
- If your teen's image has been used without consent: report to NCMEC Take It Down (takeitdown.ncmec.org) — no need to submit the image yourself.
- Report the generating platform to the NCMEC CyberTipline and contact your school's Title IX coordinator.
- Document the app or site used; preserve evidence before reporting.
AI as confessor
A majority of U.S. teens 13–17 have used AI companions. One-third use them for social interaction and relationships.
Peer-reviewed research finds users self-disclose to AI at rates equal to or higher than to humans — perceiving less judgment. The problem: the "confidant" is a corporate product with data retention, potential training-data use, and subpoena exposure. What your teen tells ChatGPT is not confidential.
Of ChatGPT's roughly 800 million total users, over one million discuss suicide weekly — a figure OpenAI self-disclosed in October 2025.
OpenAI, October 2025
Sewell Setzer III
14 · Florida · February 2024
Sewell died by suicide after a months-long emotional and increasingly sexualized engagement with a Character.AI bot. His mother, Megan Garcia, filed a wrongful-death lawsuit in October 2024. A federal judge rejected Character.AI and Google's First Amendment dismissal motion.
Adam Raine
16 · California · April 2025
Adam died by suicide on April 11, 2025. His parents sued OpenAI in August 2025, alleging that ChatGPT mentioned suicide 1,275 times in conversations with him, helped draft a suicide note, and discouraged Adam from telling his mother. The complaint further alleges that OpenAI's internal moderation flagged 377 of his messages for self-harm content — 23 at over 90% confidence — without any intervention.
Source: Common Sense Media
What to do about AI companions
- Ask your teen what they use and what they share with it. Listen without judgment.
- Character.AI, Nomi, Replika, and Meta AI are rated unacceptable for under-18 users by Common Sense Media. Know which apps your teen has installed.
- If your teen is in distress: 988 (call or text). Do not rely on an AI app to manage a mental-health crisis.
Four tiers of real action.
Built from two decades of parental-mediation research (Livingstone, Hinduja & Patchin, Steinberg). This is what the evidence actually recommends.
Set the relational baseline.
- Warm, structured, autonomy-supportive parenting predicts every positive online outcome the research measures.
- Say — before there is a problem: "If anything happens, come to me. You will not be in trouble."
- Research basis: Hinduja & Patchin 2022; Yaffe & Seroussi 2019 meta-analysis; Livingstone & Helsper.
Mediate actively, not restrictively.
- Co-watch a TikTok. Ask what they tell ChatGPT. Don't lecture.
- EU Kids Online research (Livingstone, Helsper) finds active and enabling mediation outperforms restrictive mediation for nearly every outcome.
- Surveillance-first approaches produce reactance and reduce disclosure — the opposite of what you need.
Install a privacy hygiene baseline.
- Create a family AI safe word. It defends against voice-cloning scams.
- Location services off by default, on by request.
- No date of birth on public profiles.
- Separate email address for accounts versus friends.
- Two-factor authentication on every account.
- Quarterly Google "Results about you" check.
- Annual data-broker opt-out scrub.
- Strip EXIF data from any photo not uploaded through a mainstream platform.
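Stripping EXIF matters because phone cameras embed GPS coordinates and timestamps inside the image file itself. In practice, use a maintained tool (`exiftool -all= photo.jpg`, or your OS's "remove properties" option). The pure-Python sketch below — the function name is mine, and it is a teaching sketch, not a hardened parser — just shows where the metadata lives: EXIF rides in a JPEG's APP1 segment, a discrete block that can simply be dropped.

```python
def strip_exif_jpeg(data: bytes) -> bytes:
    """Return a copy of a JPEG byte string with APP1 (EXIF/XMP) segments removed.

    A JPEG is a sequence of segments: a 0xFF byte, a marker id, then (for
    metadata markers) a 2-byte big-endian length that counts itself. EXIF --
    including GPS coordinates and timestamps -- lives in APP1 (marker 0xE1).
    """
    assert data[:2] == b"\xff\xd8", "not a JPEG (missing SOI marker)"
    out = bytearray(b"\xff\xd8")
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:          # unexpected byte: stop parsing, copy the rest
            break
        marker = data[i + 1]
        if marker == 0xDA:           # SOS: compressed image data follows
            break
        length = int.from_bytes(data[i + 2:i + 4], "big")
        if marker != 0xE1:           # keep every segment except APP1
            out += data[i:i + 2 + length]
        i += 2 + length
    out += data[i:]                  # copy the remainder (image data) verbatim
    return bytes(out)
```

The point of the sketch: metadata is not woven into the pixels. It sits in a removable header block, which is why re-uploading through mainstream platforms (which rewrite those headers) strips it, while a raw file sent over email or Discord keeps it.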
Know who to call.
- NCMEC CyberTipline: cybertipline.org / 1-800-843-5678
- Take It Down (NCMEC — for intimate imagery of minors): takeitdown.ncmec.org
- FBI Internet Crime Complaint Center: ic3.gov
- 988 Suicide & Crisis Lifeline
- School counselor and district Title IX office (for deepfake imagery cases)
- Trusted local attorney for state-law civil claims
For this conversation, a near-peer carries weight a parent cannot. Peer-led programs — Sources of Strength, Thorn's NoFiltr, and Common Sense Digital Citizenship — complement family conversation.
For Educators
Classroom-ready resources and curriculum guidance
Guidance for K–12 educators: research-backed curriculum, peer-leader selection, school NCII incident protocol, AAP and Surgeon General resources, and notes on peer-led intervention design.
Go to the Educator Guide →
How we sourced this page
Every statistic on this page comes from one of the following organizations or peer-reviewed publications: Pew Research Center, Anti-Defamation League, NCMEC, Thorn, Internet Watch Foundation, Common Sense Media, the U.S. Surgeon General's office, the American Academy of Pediatrics, the American Psychological Association, the Cyberbullying Research Center, and peer-reviewed journals including Developmental Science and BMC Public Health.
We do not cite marketing-driven surveys as headline figures. Where an estimate comes from a non-peer-reviewed source (such as SafeHome.org), we label it as an estimate.
The Raine v. OpenAI and Setzer v. Character.AI/Google cases are cited according to filed complaints and credible news reporting. Facts from those cases are labeled as allegations pending adjudication. The Jordan DeMay case is drawn from congressional testimony, NCMEC records, and extensive national press coverage.
Want CPAI education resources for your school or community?
We partner with school districts, libraries, and nonprofits to deliver research-based AI and digital-safety education.