Machines That Fail Us

By: Philip Di Salvo, University of St. Gallen
  • Summary

  • From educational institutions to healthcare providers, from employers to governing bodies, artificial intelligence technologies and algorithms are increasingly used to assess people and make decisions about their lives. But are these systems truly impartial and just when they read humans and their behaviour? Our answer is that they are not. Despite their purported aim of enhancing objectivity and efficiency, these technologies harbor systemic biases and inaccuracies, particularly when profiling humans. "Machines That Fail Us" investigates how AI and its errors affect different areas of society, and how various societal actors are negotiating and coexisting with the human rights implications of AI. The podcast series hosts the voices of some of the most engaged individuals in the fight for a better future with artificial intelligence. The first season of "Machines That Fail Us" was made possible by a grant from the Swiss National Science Foundation (SNSF)'s "Agora" scheme, while the second season is supported by the University of St. Gallen's Communications Department. The podcast is produced by the Media and Culture Research Group at the Institute for Media and Communications Management. Dr. Philip Di Salvo, the main host, works as a researcher and lecturer at the University of St. Gallen.
    Copyright 2025 University of St. Gallen, Philip Di Salvo
Episodes
  • Machines That Fail Us - Season 2, Episode 2: "Teaching the Machine: The Hidden Work Behind AI’s Intelligence"
    Feb 27 2025

    The training of AI systems, particularly generative ones, depends on the work of humans teaching machines how to think. This work, which includes content moderation and data labeling, is often conducted under exploitative conditions in the Global South and remains hidden from users' view. In this episode, we discuss these issues with Adio Dinika, a Research Fellow at the Distributed AI Research Institute (DAIR), where he investigates the invisible labor behind AI systems and how it reflects the inequalities within the AI industry.

    We often perceive AI tools as entirely artificial, if not almost magical. In reality, the effectiveness and reliability of these systems depend significantly on the labor of humans who ensure that generative AI tools, for example, produce responses that are moderated and free from harmful or toxic content. High-quality training data is essential for building a high-performing large language model, and this data consists of precisely labeled datasets, a task still carried out by human workers. This work is predominantly performed by people in the Global South, often under exploitative and unhealthy conditions, and remains largely invisible to end users worldwide. The roles of these invisible workers, along with the challenges they face, are among the most visible signs of inequality within the AI and tech supply chain, yet they remain little discussed. In this episode of Machines That Fail Us, we dive into this issue with Adio Dinika, a Research Fellow at the Distributed AI Research Institute (DAIR), an international research center focused on the social implications of AI, founded by Timnit Gebru. Together with Dr. Dinika, we explore the hidden human labor behind AI systems and the real, human nature of artificial intelligence.

    32 mins
  • Machines That Fail Us - Season 2, Episode 1: "Artificial Lies and Synthetic Media: How AI Powers Disinformation"
    Jan 30 2025

    How is artificial intelligence being used for disinformation purposes? How effective can it be in influencing our reality and political choices? We discuss the rise of synthetic media with Craig Silverman, a reporter for ProPublica who covers voting, platforms, disinformation, and online manipulation, and one of the world’s leading experts on online disinformation.

    In the first season of Machines That Fail Us, our focus was to explore a fundamental question: what do AI errors reveal about their societal impact and our future with artificial intelligence? Through engaging discussions with global experts from journalism, activism, entrepreneurship, and academia, we examined how AI and its shortcomings are already influencing various sectors of society. Alongside analyzing present challenges, we envisioned strategies for creating more equitable and effective AI systems. As artificial intelligence becomes increasingly integrated into our lives, we decided to expand these conversations in the new season, delving into additional areas where machine learning, generative AI, and their societal effects are making a significant mark. This season begins by examining AI's role in the spread of misinformation, disinformation, and the ways generative AI has been used to orchestrate influence campaigns. Are we unknowingly falling victim to machine-generated falsehoods? With 2024 being a record year for global elections, we will explore the extent to which AI-driven disinformation has shaped democratic processes. Has it truly had an impact, and if so, how? In this episode, we are joined by Craig Silverman, an award-winning journalist, author, and one of the foremost authorities on online disinformation, fake news, and digital investigations. Currently reporting for ProPublica, Craig specializes in topics such as voting, disinformation, online manipulation, and the role of digital platforms.

    30 mins
  • Machines That Fail Us #5: "The shape of AI to come"
    Jul 4 2024

    The AI we have built so far comes with many shortcomings and concerns. At the same time, the AI tools we have today are the product of specific technological cultures and business decisions. Could we simply do AI differently? For the final episode of "Machines That Fail Us", we are joined by a leading expert on the intersection of emerging technology, policy, and rights. With Frederike Kaltheuner, founder of the consulting firm new possible and a Senior Advisor to the AI Now Institute, we discuss the shape of future AI and of our life with it.

    30 mins
