Episode Description: "You could be arrested by AI for a crime you didn't commit"

Summary
Right now, 117 million Americans are sitting in facial recognition databases—that's one in three people. Your face, scraped from social media and government records, is waiting in a server room to be matched against grainy security footage by an AI that doesn't need to be right, just confident.
In January 2020, Robert Williams was arrested in front of his five-year-old daughter for stealing watches he'd never seen. The only evidence? A facial recognition system that matched his old driver's license photo to a blurry figure on security camera footage. Side by side, the photos clearly showed two different people.
This isn't a dystopian future—it's happening now. We dive deep into how facial recognition technology went from sci-fi concept to everyday policing tool, the devastating human cost of algorithmic misidentification, and the shadowy network of private companies building surveillance infrastructure with zero accountability.
We explore the chilling reality of predictive policing algorithms that don't just identify suspects—they predict who will commit crimes before they happen. And we uncover why there are virtually no legal safeguards protecting you from being falsely flagged, arrested, or worse by systems that see patterns where none exist.
But this story isn't just about the technology—it's about power, profit, and the complete absence of oversight in a system that could destroy your life with a confidence score. We'll show you what's really at stake, who's making billions from your biometric data, and the grassroots movements fighting back against algorithmic injustice.
Because in a world where AI can arrest you for crimes you didn't commit, the question isn't whether you're guilty or innocent—it's whether the algorithm thinks you look like someone who is.