• “‘It’s a 10% chance which I did 10 times, so it should be 100%’” by egor.timatkov
    Nov 20 2024
    Audio note: this article contains 33 uses of LaTeX notation, so the narration may be difficult to follow. There's a link to the original text in the episode description.

    Many readers may instinctively know that this is wrong. If you flip a coin (50% chance) twice, you are not guaranteed to get heads. The odds of getting at least one heads are 75%. However, you may be surprised to learn that there is some truth to this statement; modifying it just slightly yields not just a true statement, but a useful one.

    It's a spoiler, though. If you want to work this out yourself as you read the article, skip ahead and come back. Ok, ready? Here it is:

    It's a _1/n_ chance and I did it _n_ times, so the odds should be... _63%_. Almost always.
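    The claim can be checked numerically. A minimal sketch (the helper name is mine, not from the post): the probability of at least one success in _n_ independent tries at probability _p_ is _1 - (1 - p)^n_, which for _p = 1/n_ approaches _1 - 1/e ≈ 63.2%_ as _n_ grows.

    ```python
    import math

    def at_least_one_success(p: float, trials: int) -> float:
        """Probability of at least one success across independent trials."""
        return 1 - (1 - p) ** trials

    # Two coin flips: at least one heads is 75%, not 100%.
    print(at_least_one_success(0.5, 2))  # 0.75

    # A 1/n chance tried n times converges toward 1 - 1/e.
    for n in (10, 100, 1000):
        print(n, round(at_least_one_success(1 / n, n), 4))

    print(round(1 - 1 / math.e, 4))  # 0.6321
    ```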



    The math:

    Suppose you're [...]

    ---

    Outline:

    (01:04) The math:

    (02:12) Hold on a sec, that formula looks familiar...

    (02:58) So, if something is a _1/n_ chance, and I did it _n_ times, the odds should be... _63%_.

    (03:12) What I'm NOT saying:

    ---

    First published:
    November 18th, 2024

    Source:
    https://www.lesswrong.com/posts/pNkjHuQGDetRZypmA/it-s-a-10-chance-which-i-did-10-times-so-it-should-be-100

    ---

    Narrated by TYPE III AUDIO.

    5 mins
  • “OpenAI Email Archives” by habryka
    Nov 19 2024
    As part of the court case between Elon Musk and Sam Altman, a substantial number of emails between Elon, Sam Altman, Ilya Sutskever, and Greg Brockman have been released.

    I have found reading through these really valuable, and I haven't found an online source that compiles all of them in an easy to read format. So I made one.

    I used AI assistance to generate this, which might have introduced errors. Check the original source to make sure it's accurate before you quote it: https://www.courtlistener.com/docket/69013420/musk-v-altman/ [1]

    Sam Altman to Elon Musk - May 25, 2015

    Been thinking a lot about whether it's possible to stop humanity from developing AI.

    I think the answer is almost definitely not.

    If it's going to happen anyway, it seems like it would be good for someone other than Google to do it first.

    Any thoughts on [...]

    ---

    Outline:

    (00:36) Sam Altman to Elon Musk - May 25, 2015

    (01:19) Elon Musk to Sam Altman - May 25, 2015

    (01:28) Sam Altman to Elon Musk - Jun 24, 2015

    (03:31) Elon Musk to Sam Altman - Jun 24, 2015

    (03:39) Greg Brockman to Elon Musk, (cc: Sam Altman) - Nov 22, 2015

    (06:06) Elon Musk to Sam Altman - Dec 8, 2015

    (07:07) Sam Altman to Elon Musk - Dec 8, 2015

    (07:59) Sam Altman to Elon Musk - Dec 11, 2015

    (08:32) Elon Musk to Sam Altman - Dec 11, 2015

    (08:50) Sam Altman to Elon Musk - Dec 11, 2015

    (09:01) Elon Musk to Sam Altman - Dec 11, 2015

    (09:08) Sam Altman to Elon Musk - Dec 11, 2015

    (09:26) Elon Musk to: Ilya Sutskever, Pamela Vagata, Vicki Cheung, Diederik Kingma, Andrej Karpathy, John D. Schulman, Trevor Blackwell, Greg Brockman, (cc:Sam Altman) - Dec 11, 2015

    (10:35) Greg Brockman to Elon Musk, (cc: Sam Altman) - Feb 21, 2016

    (15:11) Elon Musk to Greg Brockman, (cc: Sam Altman) - Feb 22, 2016

    (15:54) Greg Brockman to Elon Musk, (cc: Sam Altman) - Feb 22, 2016

    (16:14) Greg Brockman to Elon Musk, (cc: Sam Teller) - Mar 21, 2016

    (17:58) Elon Musk to Greg Brockman, (cc: Sam Teller) - Mar 21, 2016

    (18:08) Sam Teller to Elon Musk - April 27, 2016

    (19:28) Elon Musk to Sam Teller - Apr 27, 2016

    (20:05) Sam Altman to Elon Musk, (cc: Sam Teller) - Sep 16, 2016

    (25:31) Elon Musk to Sam Altman, (cc: Sam Teller) - Sep 16, 2016

    (26:36) Sam Altman to Elon Musk, (cc: Sam Teller) - Sep 16, 2016

    (27:01) Elon Musk to Sam Altman, (cc: Sam Teller) - Sep 16, 2016

    (27:17) Sam Altman to Elon Musk, (cc: Sam Teller) - Sep 16, 2016

    (27:29) Sam Teller to Elon Musk - Sep 20, 2016

    (27:55) Elon Musk to Sam Teller - Sep 21, 2016

    (28:11) Ilya Sutskever to Elon Musk, Greg Brockman - Jul 20, 2017

    (29:41) Shivon Zilis to Elon Musk, (cc: Sam Teller) - Aug 28, 2017

    (33:15) Elon Musk to Shivon Zilis, (cc: Sam Teller) - Aug 28, 2017

    (33:30) Ilya Sutskever to Elon Musk, Sam Altman, (cc: Greg Brockman, Sam Teller, Shivon Zilis) - Sep 20, 2017

    (39:05) Elon Musk to Ilya Sutskever, Sam Altman (cc: Greg Brockman, Sam Teller, Shivon Zilis) - Sep 20, 2017

    (39:24) Sam Altman to Elon Musk, Ilya Sutskever (cc: Greg Brockman, Sam Teller, Shivon Zilis) - Sep 21, 2017

    (39:40) Shivon Zilis to Elon Musk, (cc: Sam Teller) - Sep 22, 2017

    (40:10) Elon Musk to Shivon Zilis (cc: Sam Teller) - Sep 22, 2017

    (40:20) Shivon Zilis to Elon Musk, (cc: Sam Teller) - Sep 22, 2017

    (41:54) Sam Altman to Elon Musk (cc: Greg Brockman, Ilya Sutskever, Sam Teller, Shivon Zilis) - Jan 21, 2018

    (42:28) Elon Musk to Sam Altman (cc: Greg Brockman, Ilya Sutskever, Sam Teller, Shivon Zilis) - Jan 21, 2018

    (42:42) Andrej Karpathy to Elon Musk, (cc: Shivon Zilis) - Jan 31, 2018
    1 hr and 3 mins
  • “Ayn Rand’s model of ‘living money’; and an upside of burnout” by AnnaSalamon
    Nov 18 2024
    Epistemic status: Toy model. Oversimplified, but has been anecdotally useful to at least a couple people, and I like it as a metaphor.

    Introduction

    I’d like to share a toy model of willpower: your psyche's conscious verbal planner “earns” willpower (earns a certain amount of trust with the rest of your psyche) by choosing actions that nourish your fundamental, bottom-up processes in the long run. For example, your verbal planner might expend willpower dragging you to disappointing first dates, then regain that willpower, and more, upon finding a date that leads to good long-term romance. Wise verbal planners can acquire large willpower budgets by making plans that, on average, nourish your fundamental processes. Delusional or uncaring verbal planners, on the other hand, usually become “burned out” – their willpower budget goes broke-ish, leaving them little to no access to willpower.

    I’ll spend the next section trying to stick this [...]

    ---

    Outline:

    (00:17) Introduction

    (01:10) On processes that lose their relationship to the unknown

    (02:58) Ayn Rand's model of “living money”

    (06:44) An analogous model of “living willpower” and burnout.

    The original text contained 2 footnotes which were omitted from this narration.

    ---

    First published:
    November 16th, 2024

    Source:
    https://www.lesswrong.com/posts/xtuk9wkuSP6H7CcE2/ayn-rand-s-model-of-living-money-and-an-upside-of-burnout

    ---

    Narrated by TYPE III AUDIO.

    9 mins
  • “Neutrality” by sarahconstantin
    Nov 17 2024
    Midjourney, “infinite library”

    I’ve had post-election thoughts percolating, and the sense that I wanted to synthesize something about this moment, but politics per se is not really my beat. This is about as close as I want to come to the topic, and it's a sidelong thing, but I think the time is right.

    It's time to start thinking again about neutrality.

    Neutral institutions, neutral information sources. Things that both seem and are impartial, balanced, incorruptible, universal, legitimate, trustworthy, canonical, foundational.

    We don’t have them. Clearly.

    We live in a pluralistic and divided world. Everybody's got different “reality-tunnels.” Attempts to impose one worldview on everyone fail.

    To some extent this is healthy and inevitable; we are all different, we do disagree, and it's vain to hope that “everyone can get on the same page” like some kind of hive-mind.

    On the other hand, lots of things aren’t great [...]

    ---

    Outline:

    (02:14) Not “Normality”

    (04:36) What is Neutrality Anyway?

    (07:43) “Neutrality is Impossible” is Technically True But Misses The Point

    (10:50) Systems of the World

    (15:05) Let's Talk About Online

    ---

    First published:
    November 13th, 2024

    Source:
    https://www.lesswrong.com/posts/WxnuLJEtRzqvpbQ7g/neutrality

    ---

    Narrated by TYPE III AUDIO.


    24 mins
  • “Making a conservative case for alignment” by Cameron Berg, Judd Rosenblatt, phgubbins, AE Studio
    Nov 16 2024
    Trump and the Republican party will wield broad governmental control during what will almost certainly be a critical period for AGI development. In this post, we want to briefly share various frames and ideas we’ve been thinking through and actively pitching to Republican lawmakers over the past months in preparation for this possibility.

    Why are we sharing this here? Given that >98% of the EAs and alignment researchers we surveyed earlier this year identified as everything-other-than-conservative, we consider thinking through these questions to be another strategically worthwhile neglected direction.

    (Along these lines, we also want to proactively emphasize that politics is the mind-killer, and that, regardless of one's ideological convictions, those who earnestly care about alignment must take seriously the possibility that Trump will be the US president who presides over the emergence of AGI—and update accordingly in light of this possibility.)

    Political orientation: combined sample of (non-alignment) [...]

    ---

    Outline:

    (01:20) AI-not-disempowering-humanity is conservative in the most fundamental sense

    (03:36) We've been laying the groundwork for alignment policy in a Republican-controlled government

    (08:06) Trump and some of his closest allies have signaled that they are genuinely concerned about AI risk

    (09:11) Avoiding an AI-induced catastrophe is obviously not a partisan goal

    (10:48) Winning the AI race with China requires leading on both capabilities and safety

    (13:22) Concluding thought

    The original text contained 4 footnotes which were omitted from this narration.

    The original text contained 1 image which was described by AI.

    ---

    First published:
    November 15th, 2024

    Source:
    https://www.lesswrong.com/posts/rfCEWuid7fXxz4Hpa/making-a-conservative-case-for-alignment

    ---

    Narrated by TYPE III AUDIO.


    14 mins
  • “OpenAI Email Archives (from Musk v. Altman)” by habryka
    Nov 16 2024
    As part of the court case between Elon Musk and Sam Altman, a substantial number of emails between Elon, Sam Altman, Ilya Sutskever, and Greg Brockman have been released.

    I have found reading through these really valuable, and I haven't found an online source that compiles all of them in an easy to read format. So I made one.

    I used AI assistance to generate this, which might have introduced errors. Check the original source to make sure it's accurate before you quote it: https://www.courtlistener.com/docket/69013420/musk-v-altman/ [1]

    Sam Altman to Elon Musk - May 25, 2015

    Been thinking a lot about whether it's possible to stop humanity from developing AI.

    I think the answer is almost definitely not.

    If it's going to happen anyway, it seems like it would be good for someone other than Google to do it first.

    Any thoughts on [...]

    ---

    Outline:

    (00:37) Sam Altman to Elon Musk - May 25, 2015

    (01:20) Elon Musk to Sam Altman - May 25, 2015

    (01:29) Sam Altman to Elon Musk - Jun 24, 2015

    (03:33) Elon Musk to Sam Altman - Jun 24, 2015

    (03:41) Greg Brockman to Elon Musk, (cc: Sam Altman) - Nov 22, 2015

    (06:07) Elon Musk to Sam Altman - Dec 8, 2015

    (07:09) Sam Altman to Elon Musk - Dec 8, 2015

    (08:01) Sam Altman to Elon Musk - Dec 11, 2015

    (08:34) Elon Musk to Sam Altman - Dec 11, 2015

    (08:52) Sam Altman to Elon Musk - Dec 11, 2015

    (09:02) Elon Musk to Sam Altman - Dec 11, 2015

    (09:10) Sam Altman to Elon Musk - Dec 11, 2015

    (09:28) Elon Musk to: Ilya Sutskever, Pamela Vagata, Vicki Cheung, Diederik Kingma, Andrej Karpathy, John D. Schulman, Trevor Blackwell, Greg Brockman, (cc:Sam Altman) - Dec 11, 2015

    (10:37) Greg Brockman to Elon Musk, (cc: Sam Altman) - Feb 21, 2016

    (15:13) Elon Musk to Greg Brockman, (cc: Sam Altman) - Feb 22, 2016

    (15:55) Greg Brockman to Elon Musk, (cc: Sam Altman) - Feb 22, 2016

    (16:16) Greg Brockman to Elon Musk, (cc: Sam Teller) - Mar 21, 2016

    (17:59) Elon Musk to Greg Brockman, (cc: Sam Teller) - Mar 21, 2016

    (18:09) Sam Teller to Elon Musk - April 27, 2016

    (19:30) Elon Musk to Sam Teller - Apr 27, 2016

    (20:06) Sam Altman to Elon Musk, (cc: Sam Teller) - Sep 16, 2016

    (25:32) Elon Musk to Sam Altman, (cc: Sam Teller) - Sep 16, 2016

    (26:38) Sam Altman to Elon Musk, (cc: Sam Teller) - Sep 16, 2016

    (27:03) Elon Musk to Sam Altman, (cc: Sam Teller) - Sep 16, 2016

    (27:18) Sam Altman to Elon Musk, (cc: Sam Teller) - Sep 16, 2016

    (27:31) Sam Teller to Elon Musk - Sep 20, 2016

    (27:57) Elon Musk to Sam Teller - Sep 21, 2016

    (28:13) Ilya Sutskever to Elon Musk, Greg Brockman - Jul 20, 2017

    (29:42) Shivon Zilis to Elon Musk, (cc: Sam Teller) - Aug 28, 2017

    (33:16) Elon Musk to Shivon Zilis, (cc: Sam Teller) - Aug 28, 2017

    (33:32) Ilya Sutskever to Elon Musk, Sam Altman, (cc: Greg Brockman, Sam Teller, Shivon Zilis) - Sep 20, 2017

    (39:07) Elon Musk to Ilya Sutskever (cc: Sam Altman; Greg Brockman; Sam Teller; Shivon Zilis) - Sep 20, 2017 (2:17PM)

    (39:42) Elon Musk to Ilya Sutskever, Sam Altman (cc: Greg Brockman, Sam Teller, Shivon Zilis) - Sep 20, 2017 (3:08PM)

    (40:03) Sam Altman to Elon Musk, Ilya Sutskever (cc: Greg Brockman, Sam Teller, Shivon Zilis) - Sep 21, 2017

    (40:18) Shivon Zilis to Elon Musk, (cc: Sam Teller) - Sep 22, 2017

    (40:49) Elon Musk to Shivon Zilis (cc: Sam Teller) - Sep 22, 2017

    (40:59) Shivon Zilis to Elon Musk, (cc: Sam Teller) - Sep 22, 2017

    (42:33) Sam Altman to Elon Musk (cc: Greg Brockman, Ilya Sutskever, Sam Teller, Shivon Zilis) - Jan 21, 2018

    (43:07) Elon Musk to Sam Altman (cc: Greg Brockman, Ilya S
    1 hr and 4 mins
  • “Catastrophic sabotage as a major threat model for human-level AI systems” by evhub
    Nov 15 2024
    Thanks to Holden Karnofsky, David Duvenaud, and Kate Woolverton for useful discussions and feedback.

    Following up on our recent “Sabotage Evaluations for Frontier Models” paper, I wanted to share more of my personal thoughts on why I think catastrophic sabotage is important and why I care about it as a threat model. Note that this isn’t in any way intended to be a reflection of Anthropic's views or for that matter anyone's views but my own—it's just a collection of some of my personal thoughts.

    First, some high-level thoughts on what I want to talk about here:

    • I want to focus on a level of future capabilities substantially beyond current models, but below superintelligence: specifically something approximately human-level and substantially transformative, but not yet superintelligent.
      • While I don’t think that most of the proximate cause of AI existential risk comes from such models—I think most of the direct takeover [...]
    ---

    Outline:

    (02:31) Why is catastrophic sabotage a big deal?

    (02:45) Scenario 1: Sabotage alignment research

    (05:01) Necessary capabilities

    (06:37) Scenario 2: Sabotage a critical actor

    (09:12) Necessary capabilities

    (10:51) How do you evaluate a model's capability to do catastrophic sabotage?

    (21:46) What can you do to mitigate the risk of catastrophic sabotage?

    (23:12) Internal usage restrictions

    (25:33) Affirmative safety cases

    ---

    First published:
    October 22nd, 2024

    Source:
    https://www.lesswrong.com/posts/Loxiuqdj6u8muCe54/catastrophic-sabotage-as-a-major-threat-model-for-human

    ---

    Narrated by TYPE III AUDIO.

    27 mins
  • “The Online Sports Gambling Experiment Has Failed” by Zvi
    Nov 12 2024
    Related: Book Review: On the Edge: The Gamblers

    I have previously been heavily involved in sports betting. That world was very good to me. The times were good, as were the profits. It was a skill game, and a form of positive-sum entertainment, and I was happy to participate and help ensure the sophisticated customer got a high quality product. I knew it wasn’t the most socially valuable enterprise, but I certainly thought it was net positive.

    When sports gambling was legalized in America, I was hopeful it too could prove a net positive force, far superior to the previous obnoxious wave of daily fantasy sports. It brings me no pleasure to conclude that this was not the case. The results are in. Legalized mobile gambling on sports, let alone casino games, has proven to be a huge mistake. The societal impacts are far worse than I expected. Table [...]

    ---

    Outline:

    (01:02) The Short Answer

    (02:01) Paper One: Bankruptcies

    (07:03) Paper Two: Reduced Household Savings

    (08:37) Paper Three: Increased Domestic Violence

    (10:04) The Product as Currently Offered is Terrible

    (12:02) Things Sharp Players Do

    (14:07) People Cannot Handle Gambling on Smartphones

    (15:46) Yay and Also Beware Trivial Inconveniences (a future full post)

    (17:03) How Does This Relate to Elite Hypocrisy?

    (18:32) The Standard Libertarian Counterargument

    (19:42) What About Other Prediction Markets?

    (20:07) What Should Be Done

    The original text contained 3 images which were described by AI.

    ---

    First published:
    November 11th, 2024

    Source:
    https://www.lesswrong.com/posts/tHiB8jLocbPLagYDZ/the-online-sports-gambling-experiment-has-failed

    ---

    Narrated by TYPE III AUDIO.


    22 mins