Philosophy Didn’t Just Eat AI. It Wrote Its Code — and It’s Hungry for Meaning. - The Deeper Thinking Podcast



About this listen

An epistemic meditation on artificial intelligence as a philosophical actor — and on the urgency of restoring meaning, not just function, to systems that now decide for us.

What does your AI system believe?

In this episode, we expand on Michael Schrage and David Kiron’s MIT Sloan thesis, Philosophy Eats AI. We trace how systems built on machine logic inevitably encode assumptions about purpose, knowledge, and reality. This episode reframes AI not as infrastructure but as worldview: a tool that doesn’t just compute, but commits.

This is a quiet engagement with how leadership itself must evolve. With reflections drawn from Gregory Bateson, Karen Barad, Michel Foucault, and Heinz von Foerster, we introduce the idea of synthetic judgment: the emerging ability to interpret, audit, and question what our systems silently believe on our behalf.

Reflections

- Every AI model has a philosophy. Most organizations don’t know what it is.
- Leadership now requires ontological fluency: what your systems can and can’t see defines your future.
- AI doesn’t just support judgment. It simulates it, often without your permission.
- The most dangerous AI systems aren’t wrong. They’re coherent in ways you never intended.
- To govern AI well, you need to understand what kind of knowing it performs.
- Synthetic judgment isn’t human vs. machine. It’s the ability to remain critical inside coordination.

Why Listen?

- Learn how AI systems enact hidden worldviews about purpose and value
- Explore teleology, epistemology, and ontology as business infrastructure
- Understand how synthetic judgment can be cultivated as a leadership skill
- Engage with thinkers who saw long ago what AI now makes urgent

Listen On: YouTube, Spotify, Apple Podcasts

Support This Work

Support future episodes by visiting buymeacoffee.com/thedeeperthinkingpodcast or leaving a review on Apple Podcasts. Thank you.

Bibliography

- Barad, Karen. Meeting the Universe Halfway. Duke University Press, 2007.
- Bateson, Gregory. Steps to an Ecology of Mind. University of Chicago Press, 2000.
- Bostrom, Nick. Superintelligence. Oxford University Press, 2014.
- Crawford, Kate. Atlas of AI. Yale University Press, 2021.
- Eubanks, Virginia. Automating Inequality. St. Martin’s Press, 2018.
- Floridi, Luciano. The Logic of Information. Oxford University Press, 2019.
- Foucault, Michel. The Order of Things. Vintage, 1994.
- Harari, Yuval Noah. Homo Deus. Harvill Secker, 2016.
- Kelleher, John D., and Brendan Tierney. Data Science. MIT Press, 2018.
- Marcus, Gary, and Ernest Davis. Rebooting AI. Pantheon, 2019.
- Mitchell, Melanie. Artificial Intelligence. Farrar, Straus and Giroux, 2019.
- Morozov, Evgeny. To Save Everything, Click Here. PublicAffairs, 2013.
- Noble, Safiya Umoja. Algorithms of Oppression. NYU Press, 2018.
- Schrage, Michael, and David Kiron. Philosophy Eats AI. MIT Sloan Management Review, 2025.
- von Foerster, Heinz. Understanding Understanding. Springer, 2003.
- Wolfram, Stephen. “How to Think Computationally About AI.” 2023.
- Zuboff, Shoshana. The Age of Surveillance Capitalism. PublicAffairs, 2019.

To design AI is to author a worldview. To lead with it is to be answerable for what it sees — and what it cannot.

#PhilosophyEatsAI #SyntheticJudgment #Ontology #GregoryBateson #MichaelSchrage #DavidKiron #KarenBarad #Foucault #vonFoerster #AIethics #MITSMR #Leadership #AIphilosophy #DeeperThinkingPodcast