Episodes

  • Building Gemini's Coding Capabilities
    Jun 16 2025

    Connie Fan, Product Lead for Gemini's coding capabilities, and Danny Tarlow, Research Lead for Gemini's coding capabilities, join host Logan Kilpatrick for an in-depth discussion on how the team built one of the world's leading AI coding models. Learn more about the early goals that shaped Gemini's approach to code, the rise of 'vibe coding' and its impact on development, strategies for tackling large codebases with long context and agents, and the future of programming languages in the age of AI.

    Watch on YouTube: https://www.youtube.com/watch?v=jwbG_m-X-gE

    Chapters:

    0:00 - Intro
    1:10 - Defining Early Coding Goals
    6:23 - Ingredients of a Great Coding Model
    9:28 - Adapting to Developer Workflows
    11:40 - The Rise of Vibe Coding
    14:43 - Code as a Reasoning Tool
    17:20 - Code as a Universal Solver
    20:47 - Evaluating Coding Models
    24:30 - Leveraging Internal Googler Feedback
    26:52 - Winning Over AI Skeptics
    28:04 - Performance Across Programming Languages
    33:05 - The Future of Programming Languages
    36:16 - Strategies for Large Codebases
    41:06 - Hill Climbing New Benchmarks
    42:46 - Short-Term Improvements
    44:42 - Model Style and Taste
    47:43 - 2.5 Pro’s Breakthrough
    51:06 - Early AI Coding Experiences
    56:19 - Specialist vs. Generalist Models

    1 h
  • Sergey Brin on the Future of AI & Gemini
    Jun 16 2025

    A conversation with Sergey Brin, co-founder of Google and computer scientist working on Gemini, in reaction to a year of progress with Gemini.

    Watch on YouTube: https://www.youtube.com/watch?v=o7U4DV9Fkc0

    Chapters:

    0:20 - Initial reactions to I/O
    2:00 - Focus on Gemini’s core text model
    4:29 - Native audio in Gemini and Veo 3
    8:34 - Insights from model training runs
    10:07 - Surprises in current AI developments vs. past expectations
    14:20 - Evolution of model training
    16:40 - The future of reasoning and Deep Think
    20:19 - Google’s startup culture and accelerating AI innovation
    24:51 - Closing

    27 m
  • Google I/O 2025 Recap with Josh Woodward and Tulsee Doshi
    May 22 2025

    Learn more

    • AI Studio: https://aistudio.google.com/
    • Gemini Canvas: https://gemini.google.com/canvas
    • Mariner: https://labs.google.com/mariner/
    • Gemini Ultra: https://one.google.com/about/google-a...
    • Jules: https://jules.google/
    • Gemini Diffusion: https://deepmind.google/models/gemini...
    • Flow: https://labs.google/flow/about
    • Notebook LM: https://notebooklm.google.com/
    • Stitch: https://stitch.withgoogle.com/

    Chapters:

    • 0:59 - I/O Day 1 Recap
    • 2:48 - Envisioning I/O 2030
    • 08:11 - AI for Scientific Breakthroughs
    • 09:20 - Veo 3 & Flow
    • 17:35 - Gemini Live & the Future of Proactive Assistants
    • 20:30 - Gemini in Chrome & Future Apps
    • 22:28 - New Gemini Models: DeepThink, Diffusion & 2.5 Flash/Pro Updates
    • 27:19 - Developer Momentum & Feedback Loop
    • 31:50 - New Developer Products: Jules, Stitch & CodeGen in AI Studio
    • 37:44 - Evolving Product Development Process with AI
    • 39:23 - Closing

    40 m
  • Deep Dive into Long Context
    May 2 2025

    Explore the synergy between long-context models and Retrieval Augmented Generation (RAG) in this episode of Release Notes. Join Google DeepMind's Nikolay Savinov as he discusses the importance of large context windows, how they enable AI agents, and what's next in the field.

    Chapters:
    0:52 Introduction & defining tokens
    5:27 Context window importance
    9:53 RAG vs. Long Context
    14:19 Scaling beyond 2 million tokens
    18:41 Long context improvements since 1.5 Pro release
    23:26 Difficulty of attending to the whole context
    28:37 Evaluating long context: beyond needle-in-a-haystack
    33:41 Integrating long context research
    34:57 Reasoning and long outputs
    40:54 Tips for using long context
    48:51 The future of long context: near-perfect recall and cost reduction
    54:42 The role of infrastructure
    56:15 Long-context and agents

    1 h
  • Launching Gemini 2.5
    Mar 28 2025

    Tulsee Doshi, Head of Product for Gemini Models, joins host Logan Kilpatrick for an in-depth discussion on the latest Gemini 2.5 Pro experimental launch. Gemini 2.5 is a well-rounded, multimodal thinking model designed to tackle increasingly complex problems. From enhanced reasoning to advanced coding, Gemini 2.5 can create impressive web applications and agentic code applications. Learn about the process of building Gemini 2.5 Pro experimental, the improvements made across the stack, and what’s next for Gemini 2.5.

    Chapters:

    0:00 - Introduction
    1:05 - Gemini 2.5 launch overview
    3:19 - Academic evals vs. vibe checks
    6:19 - The jump to 2.5
    7:51 - Coordinating cross-stack improvements
    11:48 - Role of pre/post-training vs. test-time compute
    13:21 - Shipping Gemini 2.5
    15:29 - Embedded safety process
    17:28 - Multimodal reasoning with Gemini 2.5
    18:55 - Benchmark deep dive
    22:07 - What’s next for Gemini
    24:49 - Dynamic thinking in Gemini 2.5
    25:37 - The team effort behind the launch

    Resources:

    • Gemini → https://goo.gle/41Yf72b
    • Gemini 2.5 blog post → https://goo.gle/441SHiV
    • Example of Gemini’s 2.5 Pro’s game design skills → https://goo.gle/43vxkq1
    • Demo: Gemini 2.5 Pro Experimental in Google AI Studio → https://goo.gle/4c5RbhE
    28 m
  • Gemini app: Canvas, Deep Research and Personalization
    Mar 20 2025

    Dave Citron, Senior Director of Product Management, joins host Logan Kilpatrick for an in-depth discussion on the latest Gemini updates and demos. Learn more about Canvas for collaborative content creation, enhanced Deep Research with Thinking Models and Audio Overviews, and a new personalization feature.

    Chapters:

    0:00 - Introduction
    0:59 - Recent Gemini app launches
    2:00 - Introducing Canvas
    5:12 - Canvas in action
    8:46 - More Canvas examples
    12:02 - Enhanced capabilities with Thinking Models
    15:12 - Deep Research in action
    20:27 - The future of agentic experiences
    22:12 - Deep Research and Audio Overviews
    24:11 - Personalization in Gemini app
    27:50 - Personalization in action
    29:58 - How personalization works: user data and privacy
    32:30 - The future of personalization

    37 m
  • Developing Google DeepMind's Thinking Models
    Feb 24 2025

    Jack Rae, Principal Scientist at Google DeepMind, joins host Logan Kilpatrick for an in-depth discussion on the development of Google’s thinking models. Learn more about practical applications of thinking models, the impact of increased 'thinking time' on model performance and the key role of long context.

    01:14 - Defining Thinking Models
    03:40 - Use Cases for Thinking Models
    07:52 - Thinking Time Improves Answers
    09:57 - Rapid Thinking Progress
    20:11 - Long Context Is Key
    27:41 - Tools for Thinking Models
    29:44 - Incorporating Developer Feedback
    35:11 - The Strawberry Counting Problem
    39:15 - Thinking Model Development Timeline
    42:30 - Towards a GA Thinking Model
    49:24 - Thinking Models Powering AI Agents
    54:14 - The Future of AI Model Evals

    1 h 4 m
  • Behind the Scenes of Gemini 2.0
    Dec 11 2024

    Tulsee Doshi, Gemini model product lead, joins host Logan Kilpatrick to go behind the scenes of Gemini 2.0, taking a deep dive into the model's multimodal capabilities and native tool use, and Google's approach to shipping experimental models.

    Watch on YouTube: https://www.youtube.com/watch?v=L7dw799vu5o

    Chapters:

    Meet Tulsee Doshi
    Gemini's Progress Over the Past Year
    Introducing Gemini 2.0
    Shipping Experimental Models
    Gemini 2.0’s Native Tool Use
    Function Calling
    Multimodal Agents
    Rapid Fire Questions
    35 m