Center for Practical AI
Interactive Tool · Deepfakes & NCII

Synthetic Media Lab

Most people think they can spot a fake. Research shows humans correctly identify high-quality AI-generated images only 24.5% of the time. Explore the 7 visual artifacts trained researchers use to detect synthetic media — and understand why even these heuristics have limits.

Includes facilitator mode for classroom or workshop use. Toggle it in the tool below.

The goal of this tool is calibrated caution — not a checklist. The same heuristics that work on 2022 deepfakes are becoming unreliable against 2026 generators. What we can teach is skepticism, verification habits, and the specific contexts where deepfake harm most commonly occurs — not a detection skill that will scale with the technology.


7 visual tells researchers use to spot AI-generated images

Important: The newest AI generators (Flux, DALL-E 3, Midjourney v6+) can defeat many of these heuristics. The goal is calibrated caution, not a checklist that guarantees detection. Even trained researchers are wrong a significant percentage of the time.

Eye Reflections

AI generators often fail to render physically consistent light reflections in the eyes.

What to look for

Catch-lights appearing on the wrong side of the pupil; duplicate reflections; reflections that don't match the scene's light source; blank or flat pupils with no depth.

Left eye vs. right eye

Real eye

Single catch-light at consistent angle. Pupil has depth and subtle limbal ring. Cornea reflects environment.

AI-generated

Duplicate catch-lights. Reflections at different angles in each eye. Pupil appears flat or plastic-like.

Quick heuristic

Compare the left and right eye reflections. In a real photo, they are mirror-consistent. In AI images, they often don't match.

Reliability note: Modern models (Flux, Midjourney v6+) have improved significantly. Eye artifacts are less reliable than they were in 2022-23.
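The mirror-consistency heuristic can be made concrete with a toy sketch: find the brightest pixel (the catch-light) in a crop of each eye and compare its relative position. Everything here is illustrative — the function names, the 0.15 tolerance, and the tiny synthetic "eye" arrays are assumptions for demonstration, and a real pipeline would need face and eye localization plus far more robust reflection analysis.

```python
import numpy as np

def catchlight_offset(eye_patch):
    """Return the brightest pixel's position in a grayscale eye crop,
    normalized to [0, 1] on each axis."""
    r, c = np.unravel_index(np.argmax(eye_patch), eye_patch.shape)
    h, w = eye_patch.shape
    return r / (h - 1), c / (w - 1)

def mirror_consistent(left_eye, right_eye, tol=0.15):
    """Crude check: a single light source should put the catch-light at
    roughly the same relative spot in both eyes. `tol` is an arbitrary
    illustrative threshold."""
    lr, lc = catchlight_offset(left_eye)
    rr, rc = catchlight_offset(right_eye)
    return abs(lr - rr) <= tol and abs(lc - rc) <= tol

# Toy 5x5 "eyes" with one bright catch-light pixel each.
left = np.zeros((5, 5));  left[1, 3] = 255   # catch-light upper right
right = np.zeros((5, 5)); right[1, 3] = 255  # same relative spot -> consistent
fake = np.zeros((5, 5));  fake[3, 0] = 255   # lower left -> inconsistent

print(mirror_consistent(left, right))  # True
print(mirror_consistent(left, fake))   # False
```

Note that this sketch demonstrates only the geometry of the check; as the reliability note says, newer generators frequently produce catch-lights that pass it.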
Detection heuristics synthesized from: Oxford Internet Institute (2025); IWF Harm Without Limits (2026); Kellogg/MIT Detect Fakes project; Rössler et al. FaceForensics++ (2019). Detection accuracy data: 24.5% human baseline for high-quality deepfakes (multiple meta-analyses, 2024).

When skepticism alone isn't enough

Detection heuristics matter for media literacy, but they are not the primary defense against deepfake harm in school and family contexts. The primary defenses are knowing what to do if an image of you or your child appears, and building enough trust that someone will tell you when something happens.

If images of a minor appear

NCMEC Take It Down (takeitdown.ncmec.org) — free, anonymous, hash-based blocking across platforms.

Take It Down →

If images of an adult appear

StopNCII.org — same hash-based mechanism for adults, across Facebook, Instagram, TikTok, Snapchat, Reddit.

StopNCII.org →
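Both services work by perceptual hashing: the image itself never leaves the victim's device — only a compact fingerprint is shared, and participating platforms compare fingerprints of uploaded images against the blocklist, so near-duplicates (re-compressed or lightly edited copies) still match. The sketch below uses a simple average hash to illustrate the idea; deployed systems use sturdier algorithms (such as Meta's open-source PDQ), and the 64x64 test images here are synthetic stand-ins.

```python
import numpy as np

def average_hash(image, hash_size=8):
    """Toy perceptual hash: block-average a grayscale image down to
    hash_size x hash_size, then threshold at the mean to get a bit grid.
    Assumes the image dimensions are multiples of hash_size."""
    h, w = image.shape
    small = image.reshape(hash_size, h // hash_size,
                          hash_size, w // hash_size).mean(axis=(1, 3))
    return small > small.mean()

def hamming(h1, h2):
    """Number of differing bits; small distance = 'same image'."""
    return int(np.count_nonzero(h1 != h2))

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64)).astype(float)
noisy = img + rng.normal(0, 5, img.shape)            # re-compression-style change
other = rng.integers(0, 256, (64, 64)).astype(float)  # unrelated image

h_img, h_noisy, h_other = map(average_hash, (img, noisy, other))
print(hamming(h_img, h_noisy))  # small: the altered copy still matches
print(hamming(h_img, h_other))  # large: the unrelated image does not
```

The design point this illustrates is privacy: because only the bit grid is transmitted, the original image cannot be reconstructed from what the platforms receive.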

Legal aid and crisis support

Cyber Civil Rights Initiative offers a 24/7 helpline, attorney referrals, and a step-by-step victim guide.

CCRI Safety Center →

Want CPAI education resources for your school or community?

We partner with districts, libraries, and nonprofits to deliver research-based AI education.