INTERVIEW: StakeOut.AI w/ Dr. Peter Park (1)


About this episode

Dr. Peter Park is an AI Existential Safety Postdoctoral Fellow working with Dr. Max Tegmark at MIT. Together with Harry Luk and one other co-founder, he founded StakeOut.AI, a non-profit focused on making AI go well for humans.

00:54 - Intro
03:15 - Dr. Park, x-risk, and AGI
08:55 - StakeOut.AI
12:05 - Governance scorecard
19:34 - Hollywood webinar
22:02 - Regulations.gov comments
23:48 - Open letters
26:15 - EU AI Act
35:07 - Effective accelerationism
40:50 - Divide and conquer dynamics
45:40 - AI "art"
53:09 - Outro

Links to all articles and papers mentioned throughout the episode can be found below, in order of their appearance.

  • StakeOut.AI
  • AI Governance Scorecard (go to Pg. 3)
  • Pause AI
  • Regulations.gov
    • USCO StakeOut.AI Comment
    • OMB StakeOut.AI Comment
  • AI Treaty open letter
  • TAISC
  • Alpaca: A Strong, Replicable Instruction-Following Model
  • References on EU AI Act and Cedric O
    • Tweet from Cedric O
    • EU policymakers enter the last mile for Artificial Intelligence rulebook
    • AI Act: EU Parliament’s legal office gives damning opinion on high-risk classification ‘filters’
    • EU’s AI Act negotiations hit the brakes over foundation models
    • The EU AI Act needs Foundation Model Regulation
    • BigTech’s Efforts to Derail the AI Act
  • Open Sourcing the AI Revolution: Framing the debate on open source, artificial intelligence and regulation
  • Divide-and-Conquer Dynamics in AI-Driven Disempowerment