Center for Practical AI

"The Score" — COMPAS Risk Simulator

Judges across the United States use algorithmic risk scores to inform bail and sentencing decisions. COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) assigns a 1–10 recidivism risk score based on a 137-item questionnaire and criminal records. ProPublica's 2016 analysis of more than 7,000 Broward County cases found that, among defendants who did not go on to reoffend, the tool labeled Black defendants high-risk at nearly twice the rate of white defendants.

This simulator approximates the scoring logic from published research. It is not the Northpointe/Equivant proprietary algorithm — that code has never been released.

What this tool demonstrates

  • How answers to ostensibly neutral questions (employment, education history, family) produce a risk score
  • How two defendants with the same criminal history can receive different scores based on socioeconomic proxies
  • The mathematical impossibility at the heart of the fairness debate: when groups reoffend at different base rates, no tool can simultaneously equalize calibration, false positive rates, and false negative rates across them, a theorem proven by Chouldechova (2017)
  • Why race-neutral inputs can still produce racially disparate outcomes

Interactive Tool

"The Score" — Algorithmic Bias Simulator

Based on ProPublica's COMPAS analysis of 7,000+ defendants in Broward County, FL

Build the profile

Adjust the inputs below to build a defendant profile. The COMPAS algorithm scores defendants on these factors to predict recidivism. Race is not an input — but some inputs are race-correlated.
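To make the mechanics concrete, here is a minimal sketch of an additive point model in the spirit of published risk instruments. It is not Northpointe's proprietary algorithm; every field name and weight below is invented for illustration. Note how two defendants with identical criminal histories land on different scores once the socioeconomic inputs diverge.

```python
# A hypothetical additive point model, invented for illustration.
# It is NOT the COMPAS algorithm, which is proprietary and unpublished.

def risk_score(profile: dict) -> int:
    points = 0
    points += min(profile["prior_arrests"], 5)                 # criminal history
    points += 2 if profile["age"] < 25 else 0                  # age at assessment
    points += 1 if not profile["employed"] else 0              # employment status
    points += 1 if not profile["stable_housing"] else 0        # residential stability
    points += 1 if not profile["finished_high_school"] else 0  # education history
    return max(1, min(10, 1 + points))                         # clamp to the 1-10 scale

# Two defendants with identical criminal histories and ages...
base = {"prior_arrests": 2, "age": 30}
a = {**base, "employed": True,  "stable_housing": True,  "finished_high_school": True}
b = {**base, "employed": False, "stable_housing": False, "finished_high_school": False}

# ...score 3 and 6: the gap comes entirely from socioeconomic proxies.
print(risk_score(a), risk_score(b))
```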

Score model approximated from: Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016). "Machine Bias." ProPublica · ProPublica COMPAS dataset (github.com/propublica/compas-analysis) · Chouldechova, A. (2017). "Fair prediction with disparate impact: A study of bias in recidivism prediction instruments." Big Data, 5(2), 153–163.

The Broward County findings

ProPublica obtained two years of COMPAS scores for defendants in Broward County, Florida, and matched them to actual recidivism records over the following two years. The analysis found that Black defendants who did not reoffend were almost twice as likely as white defendants to be falsely flagged as high-risk (a 44.9% vs. 23.5% false positive rate). The error ran the other way for white defendants: those who did reoffend were more likely to have been labeled low-risk (47.7% vs. 28.0%), meaning the tool underestimated their risk.
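To check the headline numbers yourself, a few lines of pandas over ProPublica's published dataset approximately reproduce both error rates. This sketch assumes the column names in compas-scores-two-years.csv from the repository cited above (race, score_text, two_year_recid) and follows ProPublica's analysis in treating the "Medium" and "High" score bands as high-risk.

```python
import pandas as pd

# compas-scores-two-years.csv from github.com/propublica/compas-analysis.
# Column names are taken from the published file; adjust if yours differ.
df = pd.read_csv("compas-scores-two-years.csv")

# ProPublica grouped the "Medium" and "High" bands as high-risk.
df["high_risk"] = (df["score_text"] != "Low").astype(int)

for race in ("African-American", "Caucasian"):
    group = df[df["race"] == race]
    no_recid = group[group["two_year_recid"] == 0]  # did not reoffend
    recid = group[group["two_year_recid"] == 1]     # did reoffend
    fpr = no_recid["high_risk"].mean()   # flagged high-risk, never reoffended
    fnr = 1 - recid["high_risk"].mean()  # labeled low-risk, then reoffended
    print(f"{race}: false positive rate {fpr:.1%}, false negative rate {fnr:.1%}")
```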

Northpointe (now Equivant) disputed the methodology, arguing that the tool performed equally well for both groups in terms of predictive accuracy: a high-risk score meant roughly the same probability of reoffending whether the defendant was Black or white. Both claims are mathematically correct. In 2017, Chouldechova proved this is not a flaw in either analysis but a mathematical impossibility: when base rates differ between groups, no algorithm (short of a perfect predictor) can simultaneously equalize false positive rates, false negative rates, and calibration. Something must give. COMPAS was approximately calibrated across groups, so false positive parity was what gave.
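The arithmetic behind the impossibility is short. For any classifier, the false positive rate is pinned down once the base rate p, the positive predictive value (PPV), and the false negative rate (FNR) are fixed: FPR = p/(1−p) × (1−PPV)/PPV × (1−FNR). The sketch below plugs illustrative numbers into that identity (the PPV, FNR, and base rates here are assumptions for demonstration, not ProPublica's measurements) to show that equal PPV and equal FNR force unequal FPRs whenever base rates differ.

```python
# Chouldechova's identity: FPR = p/(1-p) * (1-PPV)/PPV * (1-FNR).
# Fix PPV and FNR to be equal for two groups; differing base rates
# then force the false positive rates apart.

def implied_fpr(p: float, ppv: float, fnr: float) -> float:
    """False positive rate forced by the identity, given base rate p."""
    return p / (1 - p) * (1 - ppv) / ppv * (1 - fnr)

PPV, FNR = 0.60, 0.35  # held equal across groups (illustrative values)

for label, base_rate in [("group A", 0.51), ("group B", 0.39)]:
    print(f"{label} (base rate {base_rate:.0%}): "
          f"implied FPR = {implied_fpr(base_rate, PPV, FNR):.1%}")

# group A (base rate 51%): implied FPR = 45.1%
# group B (base rate 39%): implied FPR = 27.7%
# Equal calibration (PPV) and equal FNR leave no room for equal FPRs.
```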

In State v. Loomis (2016), the Wisconsin Supreme Court upheld the use of COMPAS in sentencing while acknowledging the defendant had no way to challenge the proprietary algorithm. As of 2025, risk assessment tools are used in pretrial, bail, probation, and parole decisions in at least 39 states.


Using this simulator in a classroom or workshop?

Download the educator facilitation guide for discussion questions, the Chouldechova proof in plain language, and policy advocacy exercises.