#4: AI timelines, AGI risk, and existential risk from climate change

  • Aug 7 2022
  • Length: 31 mins
  • Podcast

  • Summary

  • Future Matters is a newsletter about longtermism brought to you by Matthew van der Merwe and Pablo Stafforini. Each month we collect and summarize longtermism-relevant research, share news from the longtermism community, and feature a conversation with a prominent researcher. You can also subscribe on Substack, read it on the EA Forum, and follow us on Twitter.

    00:00 Welcome to Future Matters
    01:11 Steinhardt — AI forecasting: one year in
    01:52 Davidson — Social returns to productivity growth
    02:26 Brundage — Why AGI timeline research/discourse might be overrated
    03:03 Cotra — Two-year update on my personal AI timelines
    03:50 Grace — What do ML researchers think about AI in 2022?
    04:43 Leike — On the windfall clause
    05:35 Cotra — Without specific countermeasures, the easiest path to transformative AI likely leads to AI takeover
    06:32 Maas — Introduction to strategic perspectives on long-term AI governance
    06:52 Hadshar — How moral progress happens: the decline of footbinding as a case study
    07:35 Trötzmüller — Why EAs are skeptical about AI safety
    08:08 Schubert — Moral circle expansion isn’t the key value change we need
    08:52 Šimčikas — Wild animal welfare in the far future
    09:51 Heikkinen — Strong longtermism and the challenge from anti-aggregative moral views
    10:28 Rational Animations — Video on Karnofsky's Most important century
    11:23 Other research
    12:47 News
    15:00 Conversation with John Halstead
    15:33 What level of emissions should we reasonably expect over the coming decades?
    18:11 What do those emissions imply for warming?
    20:52 How worried should we be about the risk of climate change from a longtermist perspective?
    26:53 What is the probability of an existential catastrophe due to climate change?
    27:06 Do you think EAs should fund modelling work of tail risks from climate change?
    28:45 What would be the best use of funds?
