
Apple’s AI Bombshell: Are LLMs Really That Dumb?!
About this listen
In this episode, we dive deep into the viral Apple paper claiming that LLMs (large language models) can’t really reason: they just remix memorized patterns. Is this a shocking revelation, or are we missing the bigger picture? Join us as we break down the science behind neural networks, debunk the myths, and explain why serious AI researchers aren’t surprised by these findings.
We’ll lay out the real story: LLMs aren’t just standalone chatbots. They become far more capable when paired with external tools, and that’s where the true AI innovation happens. Discover how tool integration boosts LLM accuracy, why token output limits matter, and how the media often gets AI’s capabilities wrong. If you’re curious about the future of artificial intelligence, machine learning, and the truth behind the headlines, this episode is a must-listen.
Don’t fall for the hype—get the facts, get inspired, and join the conversation! Hit play, share with your fellow tech enthusiasts, and subscribe for more myth-busting AI insights.
Become a supporter of this podcast: https://www.spreaker.com/podcast/tech-threads-sci-tech-future-tech-ai--5976276/support.