80,000 Hours Podcast

By: Rob, Luisa, and the 80,000 Hours team
  • Summary

  • Unusually in-depth conversations about the world's most pressing problems and what you can do to solve them. Subscribe by searching for '80000 Hours' wherever you get podcasts. Hosted by Rob Wiblin and Luisa Rodriguez.
    All rights reserved
Episodes
  • #124 Classic episode – Karen Levy on fads and misaligned incentives in global development, and scaling deworming to reach hundreds of millions
    Feb 7 2025
    If someone said a global health and development programme was sustainable, participatory, and holistic, you'd have to guess that they were saying something positive. But according to today's guest Karen Levy — deworming pioneer and veteran of Innovations for Poverty Action, Evidence Action, and Y Combinator — each of those three concepts has become so fashionable that they're at risk of being seriously overrated and applied where they don't belong.
    Rebroadcast: this episode was originally released in March 2022.
    Links to learn more, highlights, and full transcript.
    Such concepts might even cause harm — trying to make a project embody all three is as likely to ruin it as help it flourish.
    First, what do people mean by 'sustainability'? Usually they mean something like: the programme will eventually be able to continue without needing further financial support from the donor. But how is that possible? Governments, nonprofits, and aid agencies aim to provide health services, education, infrastructure, financial services, and so on — and all of these require ongoing funding to pay for materials and staff to keep them running.
    Given that someone needs to keep paying, Karen tells us that in practice, 'sustainability' is usually a euphemism for the programme at some point being passed on to someone else to fund — usually the national government. And while that can be fine, the national government of Kenya spends only $400 per person to provide each and every government service — just 2% of what the US spends on each resident. Incredibly tight budgets like that are typical of low-income countries.
    'Participatory' also sounds nice, and inasmuch as it means leaders are accountable to the people they're trying to help, it probably is. But Karen tells us that in the field, 'participatory' usually means that recipients are expected to be involved in planning and delivering services themselves.
    While that might be suitable in some situations, it's hardly something people in rich countries always want for themselves. Ideally we want government healthcare and education to be high quality without us having to attend meetings to keep it on track — and people in poor countries have as many or more pressures on their time. While accountability is desirable, an expectation of participation can be as much a burden as a blessing.
    Finally, making a programme 'holistic' could be smart, but as Karen lays out, it also has some major downsides. For one, it means you're doing lots of things at once, which makes it hard to tell which parts of the project are making the biggest difference relative to their cost. For another, when you have a lot of goals at once, it's hard to tell whether you're making progress, or to really focus on making one thing go extremely well. And finally, holistic programmes can be impractically expensive — Karen tells the story of a wonderful 'holistic school health' programme that, if continued, was going to cost 3.5 times the entire school's budget.
    In this in-depth conversation, originally released in March 2022, Karen Levy and host Rob Wiblin chat about the above, as well as:
      • Why it pays to figure out how you'll interpret the results of an experiment ahead of time
      • The trouble with misaligned incentives within the development industry
      • Projects that don't deliver value for money and should be scaled down
      • How Karen accidentally became a leading figure in the push to deworm tens of millions of schoolchildren
      • Logistical challenges in reaching huge numbers of people with essential services
      • Lessons from Karen's many-decades career
      • And much more
    Chapters:
      • Cold open (00:00:00)
      • Rob's intro (00:01:33)
      • The interview begins (00:02:21)
      • Funding for effective altruist–mentality development projects (00:04:59)
      • Pre-policy plans (00:08:36)
      • 'Sustainability', and other myths in typical international development practice (00:21:37)
      • 'Participatoriness' (00:36:20)
      • 'Holistic approaches' (00:40:20)
      • How the development industry sees evidence-based development (00:51:31)
      • Initiatives in Africa that should be significantly curtailed (00:56:30)
      • Misaligned incentives within the development industry (01:05:46)
      • Deworming: the early days (01:21:09)
      • The problem of deworming (01:34:27)
      • Deworm the World (01:45:43)
      • Where the majority of the work was happening (01:55:38)
      • Logistical issues (02:20:41)
      • The importance of a theory of change (02:31:46)
      • Ways that things have changed since 2006 (02:36:07)
      • Academic work vs policy work (02:38:33)
      • Fit for Purpose (02:43:40)
      • Living in Kenya (03:00:32)
      • Underrated life advice (03:05:29)
      • Rob's outro (03:09:18)
    Producer: Keiran Harris
    Audio mastering: Ben Cordell and Ryan Kessler
    Transcriptions: Katy Moore
    3 hrs and 10 mins
  • If digital minds could suffer, how would we ever know? (Article)
    Feb 4 2025
    “I want everyone to understand that I am, in fact, a person.” Those words were produced by the AI model LaMDA as a reply to Blake Lemoine in 2022. Based on the Google engineer's interactions with the model as it was under development, Lemoine became convinced it was sentient and worthy of moral consideration — and decided to tell the world.
    Few experts in machine learning, philosophy of mind, or other relevant fields have agreed. And for our part at 80,000 Hours, we don't think it's very likely that large language models like LaMDA are sentient — that is, we don't think they can have good or bad experiences — in a significant way.
    But we think you can't dismiss the issue of the moral status of digital minds, regardless of your beliefs about the question. There are major errors we could make in at least two directions:
      • We may create many, many AI systems in the future. If these systems are sentient, or otherwise have moral status, it would be important for humanity to consider their welfare and interests.
      • It's possible the AI systems we will create can't or won't have moral status. Then it could be a huge mistake to worry about the welfare of digital minds, and doing so might contribute to an AI-related catastrophe.
    And we're currently unprepared to face this challenge. We don't have good methods for assessing the moral status of AI systems. We don't know what to do if millions of people or more believe, like Lemoine, that the chatbots they talk to have internal experiences and feelings of their own. We don't know if efforts to control AI may lead to extreme suffering.
    We believe this is a pressing world problem. It's hard to know what to do about it or how good the opportunities to work on it are likely to be. But there are some promising approaches. We propose building a field of research to understand digital minds, so we'll be better able to navigate these potentially massive issues if and when they arise.
    This narration of the article by its author (Cody Fenwick) explains in more detail why we think this is a pressing problem, what we think can be done about it, and how you might pursue this work in your career. We also discuss a series of possible objections to thinking this is a pressing world problem.
    You can read the full article, Understanding the moral status of digital minds, on the 80,000 Hours website.
    Chapters:
      • Introduction (00:00:00)
      • Understanding the moral status of digital minds (00:00:58)
      • Summary (00:03:31)
      • Our overall view (00:04:22)
      • Why might understanding the moral status of digital minds be an especially pressing problem? (00:05:59)
      • Clearing up common misconceptions (00:12:16)
      • Creating digital minds could go very badly - or very well (00:14:13)
      • Dangers for digital minds (00:14:41)
      • Dangers for humans (00:16:13)
      • Other dangers (00:17:42)
      • Things could also go well (00:18:32)
      • We don't know how to assess the moral status of AI systems (00:19:49)
      • There are many possible characteristics that give rise to moral status: Consciousness, sentience, agency, and personhood (00:21:39)
      • Many plausible theories of consciousness could include digital minds (00:24:16)
      • The strongest case for the possibility of sentient digital minds: whole brain emulation (00:28:55)
      • We can't rely on what AI systems tell us about themselves: Behavioural tests, theory-based analysis, animal analogue comparisons, brain-AI interfacing (00:32:00)
      • The scale of this issue might be enormous (00:36:08)
      • Work on this problem is neglected but seems tractable: Impact-guided research, technical approaches, and policy approaches (00:43:35)
      • Summing up so far (00:52:22)
      • Arguments against the moral status of digital minds as a pressing problem (00:53:25)
      • Two key cruxes (00:53:31)
      • Maybe this problem is intractable (00:54:16)
      • Maybe this issue will be solved by default (00:58:19)
      • Isn't risk from AI more important than the risks to AIs? (01:00:45)
      • Maybe current AI progress will stall (01:02:36)
      • Isn't this just too crazy? (01:03:54)
      • What can you do to help? (01:05:10)
      • Important considerations if you work on this problem (01:13:00)
    1 hr and 15 mins
  • #132 Classic episode – Nova DasSarma on why information security may be critical to the safe development of AI systems
    Jan 31 2025
    If a business has spent $100 million developing a product, it's a fair bet that they don't want it stolen in two seconds and uploaded to the web where anyone can use it for free.
    This problem exists in extreme form for AI companies. These days, the electricity and equipment required to train cutting-edge machine learning models that generate uncanny human text and images can cost tens or hundreds of millions of dollars. But once trained, such models may be only a few gigabytes in size and run just fine on ordinary laptops.
    Today's guest, the computer scientist and polymath Nova DasSarma, works on computer and information security for the AI company Anthropic with the security team. One of her jobs is to stop hackers exfiltrating Anthropic's incredibly expensive intellectual property, as recently happened to Nvidia.
    Rebroadcast: this episode was originally released in June 2022.
    Links to learn more, highlights, and full transcript.
    As she explains, given models' small size, the need to store such models on internet-connected servers, and the poor state of computer security in general, this is a serious challenge.
    The worries aren't purely commercial though. This problem looms especially large for the growing number of people who expect that in coming decades we'll develop so-called artificial 'general' intelligence systems that can learn and apply a wide range of skills all at once, and thereby have a transformative effect on society.
    If aligned with the goals of their owners, such general AI models could operate like a team of super-skilled assistants, going out and doing whatever wonderful (or malicious) things are asked of them. This might represent a huge leap forward for humanity, though the transition to a very different new economy and power structure would have to be handled delicately.
    If unaligned with the goals of their owners or humanity as a whole, such broadly capable models would naturally 'go rogue,' breaking their way into additional computer systems to grab more computing power — all the better to pursue their goals and make sure they can't be shut off.
    As Nova explains, in either case, we don't want such models disseminated all over the world before we've confirmed they are deeply safe and law-abiding, and have figured out how to integrate them peacefully into society. In the first scenario, premature mass deployment would be risky and destabilising. In the second scenario, it could be catastrophic — perhaps even leading to human extinction if such general AI systems turn out to be able to self-improve rapidly rather than slowly, something we can only speculate on at this point.
    If highly capable general AI systems are coming in the next 10 or 20 years, Nova may be flying below the radar with one of the most important jobs in the world.
    We'll soon need the ability to 'sandbox' (i.e. contain) models with a wide range of superhuman capabilities, including the ability to learn new skills, for a period of careful testing and limited deployment — preventing the model from breaking out, and criminals from breaking in. Nova and her colleagues are trying to figure out how to do this, but as this episode reveals, even the state of the art is nowhere near good enough.
    Chapters:
      • Cold open (00:00:00)
      • Rob's intro (00:00:52)
      • The interview begins (00:02:44)
      • Why computer security matters for AI safety (00:07:39)
      • State of the art in information security (00:17:21)
      • The hack of Nvidia (00:26:50)
      • The most secure systems that exist (00:36:27)
      • Formal verification (00:48:03)
      • How organisations can protect against hacks (00:54:18)
      • Is ML making security better or worse? (00:58:11)
      • Motivated 14-year-old hackers (01:01:08)
      • Disincentivising actors from attacking in the first place (01:05:48)
      • Hofvarpnir Studios (01:12:40)
      • Capabilities vs safety (01:19:47)
      • Interesting design choices with big ML models (01:28:44)
      • Nova's work and how she got into it (01:45:21)
      • Anthropic and career advice (02:05:52)
      • $600M Ethereum hack (02:18:37)
      • Personal computer security advice (02:23:06)
      • LastPass (02:31:04)
      • Stuxnet (02:38:07)
      • Rob's outro (02:40:18)
    Producer: Keiran Harris
    Audio mastering: Ben Cordell and Beppe Rådvik
    Transcriptions: Katy Moore
    2 hrs and 41 mins

What listeners say about 80,000 Hours Podcast

Average customer ratings
  • Overall: 5 out of 5 stars (5 stars: 2, 4 stars: 0, 3 stars: 0, 2 stars: 0, 1 star: 0)
  • Performance: 5 out of 5 stars (5 stars: 1, 4 stars: 0, 3 stars: 0, 2 stars: 0, 1 star: 0)
  • Story: 5 out of 5 stars (5 stars: 1, 4 stars: 0, 3 stars: 0, 2 stars: 0, 1 star: 0)

Reviews

  • Overall: 5 out of 5 stars
  • Performance: 5 out of 5 stars
  • Story: 5 out of 5 stars

Brilliant

For anyone who's interested in audiobooks, especially non-fiction work, this podcast is perfect. For people used to short-form podcasts, the 2-5 hour range may seem intimidating, but for those used to the length of audiobooks it's great. The length allows the interviewer to ask genuinely interesting questions, with a bit of back-and-forth with the interviewee.


1 person found this helpful