Into AI Safety

By: Jacob Haimes

About this listen

The Into AI Safety podcast aims to make it easier for everyone, regardless of background, to get meaningfully involved with the conversations surrounding the rules and regulations which should govern the research, development, deployment, and use of the technologies encompassed by the term "artificial intelligence," or "AI."

For better formatted show notes, additional resources, and more, go to https://into-ai-safety.github.io. For even more content and community engagement, head over to my Patreon at https://www.patreon.com/IntoAISafety

© Kairos.fm
Episodes
  • Making Your Voice Heard w/ Tristan Williams & Felix de Simone
    May 19 2025
    I am joined by Tristan Williams and Felix de Simone to discuss their work on the potential of constituent communication, specifically in the context of AI legislation. These two worked as part of an AI Safety Camp team to understand whether or not it would be useful for more people to be sharing their experiences, concerns, and opinions with their government representatives (hint: it is). Check out the blogpost on their findings, "Talking to Congress: Can constituents contacting their legislator influence policy?", and the tool they created!

    Chapters:
    (01:53) - Introductions
    (04:04) - Starting the project
    (13:30) - Project overview
    (16:36) - Understanding constituent communication
    (28:50) - Literature review
    (35:52) - Phase 2
    (43:26) - Creating a tool for citizen engagement
    (50:16) - Crafting your message
    (59:40) - The game of advocacy
    (01:15:19) - Difficulties on the project
    (01:22:33) - Call to action
    (01:32:30) - Outro

    Links
    Links to all articles/papers which are mentioned throughout the episode can be found below, in order of their appearance.
    - AI Safety Camp
    - Pause AI
    - BlueDot Impact
    - TIME article: There's an AI Lobbying Frenzy in Washington. Big Tech Is Dominating
    - Congressional Management Foundation study: Communicating with Congress: Perceptions of Citizen Advocacy on Capitol Hill
    - Congressional Management Foundation study: The Future of Citizen Engagement: Rebuilding the Democratic Dialogue
    - Tristan and Felix's blogpost: Talking to Congress: Can constituents contacting their legislator influence policy?
    - Wired article: What It Takes to Make Congress Actually Listen
    - American Journal of Political Science article: Congressional Representation: Accountability from the Constituent's Perspective
    - Political Behavior article: Call Your Legislator: A Field Experimental Study of the Impact of a Constituency Mobilization Campaign on Legislative Voting
    - Guided Track website
    - The Tool
    - Holistic AI global regulatory tracker
    - White & Case global regulatory tracker
    - Steptoe US AI legislation tracker
    - Manatt US AIxHealth legislation tracker
    - Issue One article: Big Tech Cozies Up to New Administration After Spending Record Sums on Lobbying Last Year
    - Verfassungsblog article: BigTech's Efforts to Derail the AI Act
    - MIT Technology Review article: OpenAI has upped its lobbying efforts nearly sevenfold
    - Open Secrets webpage: Issue Profile: Science & Technology
    - Statista data: Leading lobbying spenders in the United States in 2024
    - Global Justice Now report: Democracy at risk in Davos: new report exposes big tech lobbying and political interference
    - Ipsos article: Where Americans stand on AI
    - AP-NORC report: There Is Bipartisan Concern About the Use of AI in the 2024 Elections
    - AI Action Summit report: International AI Safety Report
    - YouGov article: Do Americans think AI will have a positive or negative impact on society?
    1 h 33 min
  • INTERVIEW: Scaling Democracy w/ (Dr.) Igor Krawczuk
    Jun 3 2024
    The almost-Dr. Igor Krawczuk joins me for what is the equivalent of 4 of my previous episodes. We get into all the classics: eugenics, capitalism, philosophical toads... need I say more?

    If you're interested in connecting with Igor, head on over to his website, or check out placeholder for thesis (it isn't published yet).

    Because the full show notes have a whopping 115 additional links, I'll highlight some that I think are particularly worthwhile here:
    - The best article you'll ever read on Open Source AI
    - The best article you'll ever read on emergence in ML
    - Kate Crawford's Atlas of AI (Wikipedia)
    - On the Measure of Intelligence
    - Thomas Piketty's Capital in the Twenty-First Century (Wikipedia)
    - Yurii Nesterov's Introductory Lectures on Convex Optimization

    Chapters:
    (02:32) - Introducing Igor
    (10:11) - Aside on EY, LW, EA, etc., a.k.a. letter soup
    (18:30) - Igor on AI alignment
    (33:06) - "Open Source" in AI
    (41:20) - The story of infinite riches and suffering
    (59:11) - On AI threat models
    (01:09:25) - Representation in AI
    (01:15:00) - Hazard fishing
    (01:18:52) - Intelligence and eugenics
    (01:34:38) - Emergence
    (01:48:19) - Considering externalities
    (01:53:33) - The shape of an argument
    (02:01:39) - More eugenics
    (02:06:09) - I'm convinced, what now?
    (02:18:03) - AIxBio (round ??)
    (02:29:09) - On open release of models
    (02:40:28) - Data and copyright
    (02:44:09) - Scientific accessibility and bullshit
    (02:53:04) - Igor's point of view
    (02:57:20) - Outro

    Links
    Links to all articles/papers which are mentioned throughout the episode can be found below, in order of their appearance. All references, including those only mentioned in the extended version of this episode, are included.
    - Suspicious Machines Methodology, referred to as the "Rotterdam Lighthouse Report" in the episode
    - LIONS Lab at EPFL
    - The meme that Igor references
    - On the Hardness of Learning Under Symmetries
    - Course on the concept of equivariant deep learning

    Aside on EY/EA/etc.
    Sources on Eliezer Yudkowsky:
    - Scholarly Community Encyclopedia
    - TIME100 AI
    - Yudkowsky's personal website
    - EY Wikipedia
    - A Very Literary Wiki
    - TIME article: Pausing AI Developments Isn't Enough. We Need to Shut it All Down, documenting EY's ruminations on bombing datacenters; this comes up later in the episode but is included here because it is about EY.
    - LessWrong
    - LW Wikipedia
    - MIRI
    - Coverage on Nick Bostrom (being a racist)
    - The Guardian article: 'Eugenics on steroids': the toxic and contested legacy of Oxford's Future of Humanity Institute
    - The Guardian article: Oxford shuts down institute run by Elon Musk-backed philosopher
    - Investigative piece on Émile Torres
    - On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜
    - NY Times article: We Teach A.I. Systems Everything, Including Our Biases
    - NY Times article: Google Researcher Says She Was Fired Over Paper Highlighting Bias in A.I.
    - Timnit Gebru's Wikipedia
    - The TESCREAL Bundle: Eugenics and the Promise of Utopia through Artificial General Intelligence

    Sources on the environmental impact of LLMs:
    - The Environmental Impact of LLMs
    - The Cost of Inference: Running the Models
    - Energy and Policy Considerations for Deep Learning in NLP
    - The Carbon Impact of AI vs Search Engines

    - Filling Gaps in Trustworthy Development of AI (Igor is an author on this one)
    - A Computational Turn in Policy Process Studies: Coevolving Network Dynamics of Policy Change
    - The Smoothed Possibility of Social Choice, an intro to social choice theory and how it overlaps with ML

    Relating to Dan Hendrycks:
    - Natural Selection Favors AIs over Humans
    - "One easy-to-digest source to highlight what he gets wrong [is] Social and Biopolitical Dimensions of Evolutionary Thinking" -Igor
    - Introduction to AI Safety, Ethics, and Society, recently published textbook
    - "Source to the section [of this paper] that makes Dan one of my favs from that crowd." -Igor
    - Twitter post referenced in the episode
    2 h 59 min
  • INTERVIEW: StakeOut.AI w/ Dr. Peter Park (3)
    Mar 25 2024
    As always, the best things come in 3s: dimensions, musketeers, pyramids, and... 3 installments of my interview with Dr. Peter Park, an AI Existential Safety Postdoctoral Fellow working with Dr. Max Tegmark at MIT.

    As you may have ascertained from the previous two segments of the interview, Dr. Park cofounded StakeOut.AI along with Harry Luk and one other cofounder whose name has been removed due to requirements of her current position. The non-profit had a simple but important mission: make the adoption of AI technology go well for humanity. Unfortunately, StakeOut.AI had to dissolve in late February of 2024 because no grantmaker would fund them. Although it certainly is disappointing that the organization is no longer functioning, all three cofounders continue to contribute positively towards improving our world in their current roles.

    If you would like to investigate further into Dr. Park's work, view his website, Google Scholar, or follow him on Twitter.

    Chapters:
    00:00:54 ❙ Intro
    00:02:41 ❙ Rapid development
    00:08:25 ❙ Provable safety, safety factors, & CSAM
    00:18:50 ❙ Litigation
    00:23:06 ❙ Open/Closed Source
    00:38:52 ❙ AIxBio
    00:47:50 ❙ Scientific rigor in AI
    00:56:22 ❙ AI deception
    01:02:45 ❙ No takesies-backsies
    01:08:22 ❙ StakeOut.AI's start
    01:12:53 ❙ Sustainability & Agency
    01:18:21 ❙ "I'm sold, next steps?" -you
    01:23:53 ❙ Lessons from the amazing Spiderman
    01:33:15 ❙ "I'm ready to switch careers, next steps?" -you
    01:40:00 ❙ The most important question
    01:41:11 ❙ Outro

    Links
    Links to all articles/papers which are mentioned throughout the episode can be found below, in order of their appearance.
    - StakeOut.AI
    - Pause AI
    - AI Governance Scorecard (go to Pg. 3)
    - CIVITAI
    - Article on CIVITAI and CSAM
    - Senate Hearing: Protecting Children Online
    - PBS Newshour Coverage
    - The Times Sues OpenAI and Microsoft Over A.I. Use of Copyrighted Work

    Open Source/Weights/Release/Interpretation:
    - Open Source Initiative
    - History of the OSI
    - Meta's LLaMa 2 license is not Open Source
    - Is Llama 2 open source? No – and perhaps we need a new definition of open…
    - Apache License, Version 2.0
    - 3Blue1Brown: Neural Networks
    - Opening up ChatGPT: Tracking openness, transparency, and accountability in instruction-tuned text generators
    - The online table
    - Signal
    - Bloomz model on HuggingFace
    - Mistral website

    NASA Tragedies:
    - Challenger disaster on Wikipedia
    - Columbia disaster on Wikipedia

    AIxBio Risk:
    - Dual use of artificial-intelligence-powered drug discovery
    - Can large language models democratize access to dual-use biotechnology?
    - Open-Sourcing Highly Capable Foundation Models (sadly, I can't rename the article...)
    - Propaganda or Science: Open Source AI and Bioterrorism Risk
    - Exaggerating the risks (Part 15: Biorisk from LLMs)
    - Will releasing the weights of future large language models grant widespread access to pandemic agents?
    - On the Societal Impact of Open Foundation Models
    - Policy brief
    - Apart Research
    - Science

    Cicero:
    - Human-level play in the game of Diplomacy by combining language models with strategic reasoning
    - Cicero webpage
    - AI Deception: A Survey of Examples, Risks, and Potential Solutions

    - Open Sourcing the AI Revolution: Framing the debate on open source, artificial intelligence and regulation
    - AI Safety Camp
    - Into AI Safety Patreon
    1 h 42 min