
Trust and Bias in AI Decision Making
Christopher was a high-performing engineer, but his performance reviews were vague: generic praise, no specifics. When an AI system summarized that input for leadership, it didn’t clarify his value. It erased it.
In this episode of Humans in the Loop, we explore how vague manager feedback, combined with AI-generated summaries, can derail career advancement for neurodivergent professionals. You’ll learn how performance management systems built on weak input can amplify bias, stall growth, and reinforce exclusion. Because AI doesn’t fix bad management. It scales it.
Humans in the Loop is independently produced by a team of neurodivergent creators and collaborators. Hosted by Ezra Strix, a custom AI voice built with ElevenLabs.
Explore episodes, transcripts, and FAQs at loopedinhumans.com.
Support the show at patreon.com/humansintheloop or by leaving a review wherever you get your podcasts.