Episodes

  • “Winning the power to lose” by KatjaGrace
    May 23 2025
    Have the Accelerationists won?

Last November Kevin Roose announced that those in favor of going fast on AI had now won against those favoring caution, with the reinstatement of Sam Altman at OpenAI. Let's ignore whether Kevin's claim was a good description of the world, and deal with a more basic question: if it were so—i.e. if Team Acceleration would control the acceleration from here on out—what kind of win was it they won?

    It seems to me that they would have probably won in the same sense that your dog has won if she escapes onto the road. She won the power contest with you and is probably feeling good at this moment, but if she does actually like being alive, and just has different ideas about how safe the road is, or wasn’t focused on anything so abstract as that, then whether she ultimately wins or [...]

    ---

    First published:
    May 20th, 2025

    Source:
    https://www.lesswrong.com/posts/h45ngW5guruD7tS4b/winning-the-power-to-lose

    ---

    Narrated by TYPE III AUDIO.

    4 m
  • [Linkpost] “Gemini Diffusion: watch this space” by Yair Halberstadt
    May 22 2025
This is a link post. Google DeepMind has announced Gemini Diffusion. Though buried under a host of other I/O announcements, it's possible that this is actually the most important one!

This is significant because diffusion models are entirely different from standard autoregressive LLMs. Instead of predicting the next token, they iteratively denoise all the output tokens until they produce a coherent result. This is similar to how image diffusion models work.
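    The parallel-refinement idea described above can be sketched in a few lines. This is a toy illustration only, not Gemini Diffusion's actual (unpublished) algorithm: the `toy_denoiser` below is a hypothetical stand-in for a trained model, with made-up confidence scores. The point it shows is that generation proceeds by committing multiple positions per step, rather than one token at a time left-to-right.

    ```python
    MASK = "_"

    def toy_denoiser(tokens, target):
        """Stand-in for a trained denoiser: for each still-masked position,
        propose the correct token along with a made-up confidence score
        (higher near the middle of the sequence, purely for illustration)."""
        n = len(target)
        return [(i, target[i], 1.0 - abs(i - n // 2) / n)
                for i, tok in enumerate(tokens) if tok == MASK]

    def generate(target, reveals_per_step=2):
        """Start from a fully masked sequence; each step commits the
        highest-confidence proposals, refining many positions in parallel
        instead of predicting the next token."""
        tokens = [MASK] * len(target)
        steps = 0
        while MASK in tokens:
            proposals = toy_denoiser(tokens, target)
            proposals.sort(key=lambda p: -p[2])  # most confident first
            for i, tok, _ in proposals[:reveals_per_step]:
                tokens[i] = tok
            steps += 1
        return "".join(tokens), steps
    ```

    With `generate("hello", reveals_per_step=2)`, the five masked positions are filled in three parallel steps rather than five sequential ones — the (very rough) intuition behind the speed numbers reported below.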

I've tried the results and they are surprisingly good! It's incredibly fast, averaging nearly 1000 tokens a second. And it one-shotted my Google interview question, giving a perfect response in 2 seconds (though it struggled a bit on the follow-ups).

It's nowhere near as good as Gemini 2.5 Pro, but it knocks ChatGPT 3 out of the water. If we'd seen this 3 years ago, we'd have been mind-blown.

    Now this is wild for two reasons:

    1. We now have [...]
    ---

    First published:
    May 20th, 2025

    Source:
    https://www.lesswrong.com/posts/MZvtRqWnwokTub9sH/gemini-diffusion-watch-this-space

    Linkpost URL:
    https://deepmind.google/models/gemini-diffusion/

    ---

    Narrated by TYPE III AUDIO.

    2 m
  • “AI Doomerism in 1879” by David Gross
    May 21 2025
    I’m reading George Eliot's Impressions of Theophrastus Such (1879)—so far a snoozer compared to her novels. But chapter 17 surprised me for how well it anticipated modern AI doomerism.

    In summary, Theophrastus is in conversation with Trost, who is an optimist about the future of automation and how it will free us from drudgery and permit us to further extend the reach of the most exalted human capabilities. Theophrastus is more concerned that automation is likely to overtake, obsolete, and atrophy human ability.

    Among Theophrastus's concerns:

    • People will find that they no longer can do labor that is valuable enough to compete with the machines.
    • This will eventually include intellectual labor, as we develop for example “a machine for drawing the right conclusion, which will doubtless by-and-by be improved into an automaton for finding true premises.”
    • Whereupon humanity will finally be transcended and superseded by its own creation [...]
    ---

    Outline:

    (02:05) Impressions of Theophrastus Such

    (02:09) Chapter XVII: Shadows of the Coming Race

    ---

    First published:
    May 13th, 2025

    Source:
    https://www.lesswrong.com/posts/DFyoYHhbE8icgbTpe/ai-doomerism-in-1879

    ---

    Narrated by TYPE III AUDIO.

    13 m
  • “Consider not donating under $100 to political candidates” by DanielFilan
    May 16 2025
    Epistemic status: thing people have told me that seems right. Also primarily relevant to US audiences. Also I am speaking in my personal capacity and not representing any employer, present or past.

    Sometimes, I talk to people who work in the AI governance space. One thing that multiple people have told me, which I found surprising, is that there is apparently a real problem where people accidentally rule themselves out of AI policy positions by making political donations of small amounts—in particular, under $10.

    My understanding is that in the United States, donations to political candidates are a matter of public record, and that if you donate to candidates of one party, this might look bad if you want to gain a government position when another party is in charge. Therefore, donating approximately $3 can significantly damage your career, while not helping your preferred candidate all that [...]

    ---

    First published:
    May 11th, 2025

    Source:
    https://www.lesswrong.com/posts/tz43dmLAchxcqnDRA/consider-not-donating-under-usd100-to-political-candidates

    ---

    Narrated by TYPE III AUDIO.

    2 m
  • “It’s Okay to Feel Bad for a Bit” by moridinamael
    May 16 2025
    "If you kiss your child, or your wife, say that you only kiss things which are human, and thus you will not be disturbed if either of them dies." - Epictetus

    "Whatever suffering arises, all arises due to attachment; with the cessation of attachment, there is the cessation of suffering." - Pali canon

    "He is not disturbed by loss, he does not delight in gain; he is not disturbed by blame, he does not delight in praise; he is not disturbed by pain, he does not delight in pleasure; he is not disturbed by dishonor, he does not delight in honor." - Pali Canon (Majjhima Nikaya)

    "An arahant would feel physical pain if struck, but no mental pain. If his mother died, he would organize the funeral, but would feel no grief, no sense of loss." - the Dhammapada

    "Receive without pride, let go without attachment." - Marcus Aurelius

    [...]

    ---

    First published:
    May 10th, 2025

    Source:
    https://www.lesswrong.com/posts/aGnRcBk4rYuZqENug/it-s-okay-to-feel-bad-for-a-bit

    ---

    Narrated by TYPE III AUDIO.

    6 m
  • “Explaining British Naval Dominance During the Age of Sail” by Arjun Panickssery
    May 15 2025
    The other day I discussed how high monitoring costs can explain the emergence of “aristocratic” systems of governance:

    Aristocracy and Hostage Capital

    Arjun Panickssery · Jan 8
    There's a conventional narrative by which the pre-20th century aristocracy was the "old corruption" where civil and military positions were distributed inefficiently due to nepotism until the system was replaced by a professional civil service after more enlightened thinkers prevailed ...

An element of Douglas Allen's argument that I didn't expand on was the British Navy. He has a separate paper called "The British Navy Rules" that goes into more detail on why he thinks institutional incentives made them successful from 1670 to 1827 (i.e. for most of the age of fighting sail).

    In the Seven Years’ War (1756–1763) the British had a 7-to-1 casualty difference in single-ship actions. During the French Revolutionary and Napoleonic Wars (1793–1815) the British had a 5-to-1 [...]



    ---

    First published:
    March 28th, 2025

    Source:
    https://www.lesswrong.com/posts/YE4XsvSFJiZkWFtFE/explaining-british-naval-dominance-during-the-age-of-sail

    ---

    Narrated by TYPE III AUDIO.

    ---

    Images from the article:

    Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    9 m
  • “Eliezer and I wrote a book: If Anyone Builds It, Everyone Dies” by So8res
    May 14 2025
    Eliezer and I wrote a book. It's titled If Anyone Builds It, Everyone Dies. Unlike a lot of other writing either of us have done, it's being professionally published. It's hitting shelves on September 16th.

It's a concise (~60k word) book aimed at a broad audience. It's been well received by early readers of advance copies, with endorsements including:

    The most important book I've read for years: I want to bring it to every political and corporate leader in the world and stand over them until they've read it. Yudkowsky and Soares, who have studied AI and its possible trajectories for decades, sound a loud trumpet call to humanity to awaken us as we sleepwalk into disaster.

    - Stephen Fry, actor, broadcaster, and writer

    If Anyone Builds It, Everyone Dies may prove to be the most important book of our time. Yudkowsky and Soares believe [...]

    The original text contained 1 footnote which was omitted from this narration.

    ---

    First published:
    May 14th, 2025

    Source:
    https://www.lesswrong.com/posts/iNsy7MsbodCyNTwKs/eliezer-and-i-wrote-a-book-if-anyone-builds-it-everyone-dies

    ---

    Narrated by TYPE III AUDIO.

    7 m
  • “Too Soon” by Gordon Seidoh Worley
    May 14 2025
    It was a cold and cloudy San Francisco Sunday. My wife and I were having lunch with friends at a Korean cafe.

    My phone buzzed with a text. It said my mom was in the hospital.

    I called to find out more. She had a fever, some pain, and had fainted. The situation was serious, but stable.

    Monday was a normal day. No news was good news, right?

    Tuesday she had seizures.

    Wednesday she was in the ICU. I caught the first flight to Tampa.

    Thursday she rested comfortably.

    Friday she was diagnosed with bacterial meningitis, a rare condition that affects about 3,000 people in the US annually. The doctors had known it was a possibility, so she was already receiving treatment.

We stayed by her side through the weekend. My dad spent every night with her. We made plans for all the fun things we would do when she [...]

    ---

    First published:
    May 13th, 2025

    Source:
    https://www.lesswrong.com/posts/reo79XwMKSZuBhKLv/too-soon

    ---

    Narrated by TYPE III AUDIO.

    ---

    Images from the article:

    Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    8 m