• Machines That Fail Us - Season 2, Episode 2: "Teaching the Machine: The Hidden Work Behind AI’s Intelligence"
    Feb 27 2025

    The training and coding of AI systems, particularly generative ones, depend on the work of humans teaching machines how to think. This work includes content moderation and labeling, is often conducted under exploitative conditions in the Global South, and remains hidden from users' view. In this episode, we discuss these issues with Adio Dinika, a Research Fellow at the Distributed AI Research Institute (DAIR), where he investigates the invisible labor behind AI systems and how it reflects the various inequalities within the AI industry.

    We often perceive AI tools as entirely artificial, if not almost magical. In reality, the effectiveness and reliability of these systems depend significantly on the labor of humans who ensure that generative AI tools, for example, produce responses that are moderated and free from harmful or toxic content. High-quality training data is essential for building a high-performing large language model, and this data is made up of precisely labeled datasets—a task still carried out by human workers. However, this work is predominantly performed by people in the Global South, often under exploitative and unhealthy conditions, and remains largely invisible to end-users worldwide. The roles of these invisible workers, along with the challenges they face, represent some of the most visible signs of inequality within the AI and tech supply chain, yet they remain little discussed. In this episode of Machines That Fail Us, we dive into this issue with Adio Dinika, a Research Fellow at the Distributed AI Research Institute (DAIR), an international research center focused on the social implications of AI, founded by Timnit Gebru. Together with Dr. Dinika, we explore the hidden human labor behind AI systems and the real, human nature of artificial intelligence.

    32 mins
  • Machines That Fail Us - Season 2, Episode 1: "Artificial Lies and Synthetic Media: How AI Powers Disinformation"
    Jan 30 2025

    How is artificial intelligence being used for disinformation purposes? How effective can it be in influencing our reality and political choices? We discuss the rise of synthetic media with Craig Silverman, a reporter for ProPublica who covers voting, platforms, disinformation, and online manipulation, and one of the world’s leading experts on online disinformation.

    In the first season of Machines That Fail Us, our focus was to explore a fundamental question: what do AI errors reveal about their societal impact and our future with artificial intelligence? Through engaging discussions with global experts from journalism, activism, entrepreneurship, and academia, we examined how AI and its shortcomings are already influencing various sectors of society. Alongside analyzing present challenges, we envisioned strategies for creating more equitable and effective AI systems. As artificial intelligence becomes increasingly integrated into our lives, we decided to expand these conversations in the new season, delving into additional areas where machine learning, generative AI, and their societal effects are making a significant mark. This season begins by examining AI's role in the spread of misinformation, disinformation, and the ways generative AI has been used to orchestrate influence campaigns. Are we unknowingly falling victim to machine-generated falsehoods? With 2024 being a record year for global elections, we will explore the extent to which AI-driven disinformation has shaped democratic processes. Has it truly had an impact, and if so, how? In this episode, we are joined by Craig Silverman, an award-winning journalist, author, and one of the foremost authorities on online disinformation, fake news, and digital investigations. Currently reporting for ProPublica, Craig specializes in topics such as voting, disinformation, online manipulation, and the role of digital platforms.

    30 mins
  • Machines That Fail Us #5: "The shape of AI to come"
    Jul 4 2024

    The AI we have built so far comes with many different shortcomings and concerns. At the same time, the AI tools we have today are the product of specific technological cultures and business decisions. Could we just do AI differently? For the final episode of “Machines That Fail Us”, we are joined by a leading expert on the intersection of emerging technology, policy, and rights. With Frederike Kaltheuner, founder of the consulting firm new possible and a Senior Advisor to the AI Now Institute, we discuss the shape of future AI and of our life with it.

    30 mins
  • Machines That Fail Us #4: Building different AI futures
    Jun 13 2024

    We don’t necessarily have to build artificial intelligence the way we’re doing it today. To make AI truly inclusive, we must look beyond Western techno-cultures and beyond our understanding of technology as either utopian or dystopian. How could our AI future look different? We asked Payal Arora, Professor of Inclusive AI Cultures at Utrecht University.

    34 mins
  • Machines That Fail Us #3: Errors and biases: tales of algorithmic discrimination
    May 16 2024

    The record of biases, discriminatory outcomes, and errors produced by artificial intelligence systems, as well as their societal impacts, is now widely documented. However, the question remains: how is the struggle for algorithmic justice evolving? We asked Angela Müller, Executive Director of AlgorithmWatch Switzerland.

    27 mins
  • Machines That Fail Us #2: Following the AI beat – algorithms making the news
    Apr 18 2024

    What’s the role of journalism in making sense of AI and its errors? We discuss this question with Melissa Heikkilä, senior reporter at MIT Technology Review. Host: Dr. Philip Di Salvo.

    27 mins
  • Machines That Fail Us #1: Making sense of the human error of AI
    Mar 20 2024

    What errors can artificial intelligence systems make, and what is their impact on humans? The Human Error Project team discusses the results of their own research into AI errors and algorithmic profiling.

    40 mins