• Joe Edelman: Co-Founder of Meaning Alignment Institute

  • Dec 6 2024
  • Duration: 1 h 22 m
  • Podcast

  • Summary

  • What happens when artificial intelligence starts weighing in on our moral decisions? Matt Prewitt is joined by Meaning Alignment Institute co-founder Joe Edelman to explore this thought-provoking territory, examining how AI is already shaping our daily experiences and values through social media algorithms. They explore the tools developed to help individuals negotiate their values and the implications of AI in moral reasoning – venturing into compelling questions about human-AI symbiosis, the nature of meaningful experiences, and whether machines can truly understand what matters to us. For anyone intrigued by the future of human consciousness and decision-making in an AI-integrated world, this discussion opens up fascinating possibilities – and potential pitfalls – we may not have considered.

Links & References:

  • CouchSurfing - Wikipedia | CouchSurfing.org | Website
  • Tristan Harris: How a handful of tech companies control billions of minds every day | TED Talk
  • Center for Humane Technology | Website
  • Meaning Alignment Institute | Website
  • Replika - AI Girlfriend/Boyfriend
  • Will AI Improve Exponentially At Value Judgments? - by Matt Prewitt | RadicalxChange
  • Moral Realism (Stanford Encyclopedia of Philosophy)
  • Summa Theologica - Wikipedia
  • When Generative AI Refuses To Answer Questions, AI Ethics And AI Law Get Deeply Worried | AI Refusals
  • Amanda Askell: The 100 Most Influential People in AI 2024 | TIME | Amanda Askell's work at Anthropic
  • Overcoming Epistemology by Charles Taylor
  • God, Beauty, and Symmetry in Science - Catholic Stand | Thomas Aquinas on symmetry
  • Friedrich Hayek - Wikipedia | "Hayekian"
  • Eliezer Yudkowsky - Wikipedia | "AI policy people, especially in this kind Yudkowskyian scene"
  • Resource-rational analysis: Understanding human cognition as the optimal use of limited computational resources | Resource-rational (cognitive science term)

Papers & posts mentioned:

  • [2404.10636] What are human values, and how do we align AI to them? | Paper by Oliver Klingefjord, Ryan Lowe, Joe Edelman
  • Model Integrity - by Joe Edelman and Oliver Klingefjord | Meaning Alignment Institute Substack

Bios:

Joe Edelman is a philosopher, sociologist, and entrepreneur whose work spans from theoretical philosophy to practical applications in technology and governance. He invented the meaning-based metrics used at CouchSurfing, Facebook, and Apple, and co-founded the Center for Humane Technology and the Meaning Alignment Institute. His biggest contribution is a definition of "human values" that's precise enough to create product metrics, aligned ML models, and values-based democratic structures.

Joe's Social Links:

  • Meaning Alignment Institute | Website
  • Meaning Alignment Institute (@meaningaligned) / X
  • Joe Edelman (@edelwax) / X

Matt Prewitt (he/him) is a lawyer, technologist, and writer. He is the President of the RadicalxChange Foundation.

Matt's Social Links:

  • ᴍᴀᴛᴛ ᴘʀᴇᴡɪᴛᴛ (@m_t_prewitt) / X

Connect with RadicalxChange Foundation:

  • RadicalxChange Website
  • @RadxChange | Twitter
  • RxC | YouTube
  • RxC | Instagram
  • RxC | LinkedIn
  • Join the conversation on Discord.