AI-Curious with Jeff Wilser

By: Jeff Wilser
Listen for free
About this listen

A podcast that explores the good, the bad, and the creepy of artificial intelligence. Weekly longform conversations with key players in the space, ranging from CEOs to artists to philosophers. Exploring the role of AI in film, health care, business, law, therapy, politics, and everything from religion to war.

Featured by Inc. Magazine as one of "4 Ways to Get AI Savvy in 2024": "Host Jeff Wilser [gives] you a more holistic understanding of AI--such as the moral implications of using it--and his conversations might even spark novel ideas for how you can best use AI in your business."

© 2025 AI-Curious with Jeff Wilser
Episodes
  • Introducing "AUI": Artificial Useful Intelligence, w/ IBM's Chief Scientist Dr. Ruchir Puri
    Jun 12 2025

    What if we’re all chasing the wrong kind of AI? Dr. Ruchir Puri, Chief Scientist of IBM, argues that Artificial General Intelligence (AGI) is overrated—and that we should be focusing instead on AUI: Artificial Useful Intelligence. This is a pragmatic, business-focused approach to AI that emphasizes real-world value, measurable outcomes, and implementable solutions.

    In this episode of AI-Curious, we explore what AUI actually looks like in practice. We discuss how to bring AI into your organization (even if you’re just getting started), why IBM is betting big on small language models (SLMs), and how companies can move beyond hype toward real, trustworthy AI agents that do actual work.

    You’ll also hear:

    • Why AI usefulness is a function of both quality and cost [00:11:00]
    • The “crawl, walk, run” strategy IBM recommends for business adoption [00:14:00]
    • Internal IBM examples: HR systems and coding assistants [00:16:00]
    • Why SLMs may be a smarter bet than LLMs for many enterprises [00:37:00]
    • A breakdown of how agentic systems are evolving to reflect, act, and self-correct [00:41:00]

    Whether you’re leading a startup or an enterprise, this conversation will help you reframe how you think about deploying AI—starting not with hype, but with value.

    🎧 Subscribe to AI-Curious:

    • Apple Podcasts
    https://podcasts.apple.com/us/podcast/ai-curious-with-jeff-wilser/id1703130308

    • Spotify
    https://open.spotify.com/show/70a9Xbhu5XQ47YOgVTE44Q?si=c31e2c02d8b64f1b

    • YouTube
    https://www.youtube.com/@jeffwilser

    47 m
  • A Conversation with the AI Pioneer Who Coined ‘AGI’ — Dr. Ben Goertzel
    Jun 6 2025

    What exactly is AGI—Artificial General Intelligence—and how close are we to achieving it? Will it transform the world for better or worse? And how can we even tell when true AGI has arrived?

    In this episode of AI-Curious, we sit down with Dr. Ben Goertzel, the iconic computer scientist who coined the term AGI more than 20 years ago. As the founder of SingularityNET and the Artificial Superintelligence Alliance, Ben has spent decades thinking about the architecture, risks, and potential of general intelligence.

    We explore why today’s large language models (LLMs), while powerful, still fall short of true AGI—and what will be needed to bridge that gap. We dive into Ben’s prediction that AGI could arrive within just 1 to 3 years, and why he believes it will likely be decentralized. Along the way, we unpack some of the key ideas from his recent “10 Reckonings of AGI”—a candid look at the social, economic, and existential questions we must face as AGI reshapes human life.

    Topics include:

    • [00:04:00] What AGI really means vs. current LLMs
    • [00:10:00] Are we reaching the limits of current AI architectures?
    • [00:13:00] How will we know when AGI has truly arrived?
    • [00:17:00] The “PhD test” for human-level AGI
    • [00:19:00] AGI timeline predictions (1–3 years? 2029?)
    • [00:29:00] The 10 Reckonings of AGI: key societal impacts
    • [00:36:00] The gap between AGI and superintelligence
    • [00:44:00] Why a decentralized AGI might be safer
    • [00:51:00] Surprising upsides of a post-AGI world

    If you’re curious about the future of artificial intelligence, this conversation offers a rare and unfiltered perspective from one of the field’s most original thinkers.

    SingularityNET

    https://singularitynet.io/

    Ben Goertzel on X

    https://x.com/bengoertzel

    🎧 Subscribe to AI-Curious:

    • Apple Podcasts
    https://podcasts.apple.com/us/podcast/ai-curious-with-jeff-wilser/id1703130308

    • Spotify
    https://open.spotify.com/show/70a9Xbhu5XQ47YOgVTE44Q?si=c31e2c02d8b64f1b

    • YouTube
    https://www.youtube.com/@jeffwilser

    57 m
  • Should AI Agents Be Trusted? The Problem and Solution, w/ Billions.Network CEO Evin McMullen
    May 23 2025

    What happens when an AI agent says something harmful, or makes a costly mistake? Who’s responsible—and how can we even know who the agent belongs to in the first place?

    In this episode of AI-Curious, we talk with Evin McMullen, CEO and co-founder of Billions.Network, a startup building cryptographic trust infrastructure to verify the identity and accountability of AI agents and digital content.

    We explore the unsettling rise of synthetic media and deepfakes, why identity verification is foundational to AI safety, and how platforms--not users--should be responsible for determining what's real. Evin explains how Billions uses zero-knowledge proofs to establish trust without compromising privacy, and offers a vision for a future where billions of AI agents operate transparently, under clear reputational and legal frameworks.

    Along the way, we cover:

    • The problem with unverified AI agents (2:00)
    • Why 50% of online traffic is now bots—and why that matters (2:45)
    • The Air Canada chatbot legal fiasco (15:00)
    • The difference between chatbots and agentic AI (13:00)
    • What “identity” means in an AI-first internet (10:00)
    • Deepfakes, misinformation, and the limits of user responsibility (22:00)
    • Billions’ “deep trust” framework, explained (29:00)
    • How platforms can earn trust by verifying content authenticity (34:00)
    • Breaking news: Billions’ work with the European Commission (38:20)

    This one dives deep into the infrastructure of digital trust—and why the future of AI may depend on getting this right.

    Learn more: https://billions.network

    🎧 Subscribe to AI-Curious:

    • Apple Podcasts
    https://podcasts.apple.com/us/podcast/ai-curious-with-jeff-wilser/id1703130308

    • Spotify
    https://open.spotify.com/show/70a9Xbhu5XQ47YOgVTE44Q?si=c31e2c02d8b64f1b

    • YouTube
    https://www.youtube.com/@jeffwilser

    46 m
No reviews yet