Episodes

  • (LLM Optimization-MSFT) COLLABLLM: From Passive Responders to Active Collaborators
    Jul 23 2025

    Tune in to our podcast to explore COLLABLLM, a groundbreaking framework redefining human-LLM interactions! Traditional Large Language Models often fall short in complex, open-ended tasks by passively responding and failing to grasp long-term user intent.

    Developed by researchers from Stanford University, Microsoft, and Georgia Tech, COLLABLLM addresses this by incorporating Multiturn-aware Rewards (MR). This innovative approach uses collaborative simulation to estimate the long-term impact of responses, moving beyond immediate rewards to foster active collaboration.
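
    For intuition, here is a minimal sketch of how a multiturn-aware reward might be estimated via Monte-Carlo rollouts; the callables `model`, `user_sim`, and `score` are hypothetical stand-ins for the paper's policy, user simulator, and conversation-level scorer.

    ```python
    # Sketch only: estimate a Multiturn-aware Reward (MR) for one candidate
    # response by simulating future turns and scoring whole conversations.
    import statistics

    def multiturn_aware_reward(history, candidate, model, user_sim, score,
                               num_rollouts=3, horizon=2):
        """`model`, `user_sim`, and `score` are hypothetical callables."""
        returns = []
        for _ in range(num_rollouts):
            convo = history + [("assistant", candidate)]
            for _ in range(horizon):                  # roll the dialogue forward
                convo = convo + [("user", user_sim(convo))]
                convo = convo + [("assistant", model(convo))]
            returns.append(score(convo))              # e.g. task success - effort
        return statistics.mean(returns)               # Monte-Carlo estimate
    ```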

    COLLABLLM excels in various applications, including:

    • Document creation
    • Code generation
    • Multiturn mathematics problem-solving

    It significantly improves task performance, conversational efficiency, and interactivity, leading to higher user satisfaction and reduced time spent on tasks. Some users noted, however, that COLLABLLM can occasionally feel bland, lack up-to-date information, and require more effort to personalise.

    Discover how COLLABLLM transforms LLMs from passive responders into active collaborators, paving the way for more human-centred AI.

    Read the full paper here: http://arxiv.org/pdf/2502.00640

    16 min
  • (RAG-GOOGLE) MUVERA: Multi-Vector Retrieval via Fixed Dimensional Encodings
    Jul 20 2025

    Welcome to our podcast! Today, we're diving into MUVERA (Multi-Vector Retrieval Algorithm), a groundbreaking development from researchers at Google Research, UMD, and Google DeepMind. While neural embedding models are fundamental to modern information retrieval (IR), multi-vector models, though superior, are computationally expensive. MUVERA addresses this by ingeniously reducing complex multi-vector similarity search to efficient single-vector search, allowing the use of highly optimised MIPS (Maximum Inner Product Search) solvers.

    The core innovation is Fixed Dimensional Encodings (FDEs), single-vector proxies for multi-vector similarity that offer the first theoretical guarantees (ε-approximations). Empirically, MUVERA significantly outperforms prior state-of-the-art implementations like PLAID, achieving an average of 10% higher recall with 90% lower latency across diverse BEIR retrieval datasets. It also incorporates product quantization for 32x memory compression of FDEs with minimal quality loss.
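
    As a rough illustration of the FDE idea, the sketch below partitions embedding space with random hyperplanes (SimHash-style) and aggregates each side's token vectors per bucket into one fixed-dimensional vector whose inner product approximates the multi-vector similarity; the paper's actual construction adds repetitions, projections, and other refinements, so treat this as a simplified sketch.

    ```python
    # Sketch only: a simplified Fixed Dimensional Encoding (FDE).
    import numpy as np

    def fde(vectors, planes, is_query):
        """Map a set of token vectors (n, d) to one fixed-dim vector.
        `planes` is a (k, d) matrix of random hyperplanes; each token falls
        into one of 2**k buckets via the sign pattern of its projections."""
        k, d = planes.shape
        buckets = np.zeros((2 ** k, d))
        counts = np.zeros(2 ** k)
        for v in vectors:
            bits = "".join("1" if p @ v > 0 else "0" for p in planes)
            idx = int(bits, 2)
            buckets[idx] += v
            counts[idx] += 1
        if not is_query:                    # documents average per bucket,
            nz = counts > 0                 # queries just sum
            buckets[nz] /= counts[nz][:, None]
        return buckets.ravel()              # usable with any MIPS index

    rng = np.random.default_rng(0)
    planes = rng.normal(size=(4, 128))      # 2**4 = 16 buckets, illustrative
    q = fde(rng.normal(size=(32, 128)), planes, is_query=True)
    doc = fde(rng.normal(size=(80, 128)), planes, is_query=False)
    score = q @ doc   # single inner product approximating multi-vector similarity
    ```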

    A current limitation is that MUVERA did not outperform PLAID on the MS MARCO dataset, possibly due to PLAID's extensive tuning for that specific benchmark. Additionally, the effect of the average number of embeddings per document on FDE retrieval quality remains an area for future study. MUVERA's applications primarily lie in enhancing modern IR pipelines, potentially improving the efficiency of components within LLMs.

    Learn more: https://arxiv.org/pdf/2405.19504

    14 min
  • (LLM Code-Salesforce) CodeTree: Agent-guided Tree Search for Code Generation with Large Language Models
    Jul 5 2025

    Welcome to our podcast! Today, we're exploring CodeTree, a groundbreaking framework developed by researchers at The University of Texas at Austin and Salesforce Research. CodeTree revolutionises code generation by enabling Large Language Models (LLMs) to efficiently navigate the vast coding search space through an agent-guided tree search. This innovative approach employs a unified tree structure for explicitly exploring coding strategies, generating solutions, and refining them.

    At its core, CodeTree leverages dedicated LLM agents: the Thinker for strategy generation, the Solver for initial code implementation, and the Debugger for solution improvement. Crucially, a Critic agent dynamically guides the exploration by evaluating nodes, verifying solutions, and deciding whether to refine, abort, or accept a solution. This multi-agent collaboration, combined with environmental and AI-generated feedback, has led to significant performance gains across diverse coding benchmarks, including HumanEval, MBPP, CodeContests, and SWEBench.
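
    The sketch below shows how such an agent-guided search loop could be wired together; all five callables are hypothetical stand-ins for the paper's agents and execution environment, not the framework's actual interfaces.

    ```python
    # Sketch only: best-first tree search over candidate solutions,
    # with expansion, testing, and refinement driven by agent roles.
    import heapq

    def code_tree_search(task, thinker, solver, debugger, critic, run_tests,
                         budget=20):
        """`run_tests` is assumed to return (passed, feedback) from the
        execution environment; `critic` returns a score in [0, 1]."""
        frontier = []                              # max-heap via negated scores
        for strategy in thinker(task):             # Thinker: propose strategies
            code = solver(task, strategy)          # Solver: draft a solution
            heapq.heappush(frontier, (-critic(task, code), code))
        for _ in range(budget):
            if not frontier:
                return None                        # search space exhausted
            neg_score, code = heapq.heappop(frontier)
            passed, feedback = run_tests(code)     # environmental feedback
            if passed:
                return code                        # Critic accepts the solution
            if -neg_score > 0.1:                   # worth refining, else abort
                better = debugger(task, code, feedback)   # Debugger: refine
                heapq.heappush(frontier, (-critic(task, better), better))
        return None
    ```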

    However, CodeTree's effectiveness hinges on LLMs with strong reasoning abilities; smaller models may struggle with its complex instruction-following roles, potentially leading to misleading feedback. The framework currently prioritises functional correctness, leaving aspects like code readability or efficiency for future enhancements. Despite these limitations, CodeTree offers a powerful paradigm for automated code generation, demonstrating remarkable search efficiency, even with limited generation budgets.

    Paper link: https://arxiv.org/pdf/2411.04329

    19 min
  • (FM-NVIDIA) Fugatto: Foundational Generative Audio Transformer Opus 1
    Jul 3 2025

    This episode covers Fugatto, a new generalist audio synthesis and transformation model developed by NVIDIA, and ComposableART, an inference-time technique designed to enhance its capabilities. Fugatto distinguishes itself by its ability to follow free-form text instructions, often with optional audio inputs, addressing the challenge that audio data, unlike text, typically lacks inherent instructional information.

    The paper details a comprehensive data and instruction generation strategy that leverages large language models (LLMs) and audio understanding models to create diverse and rich datasets, enabling Fugatto to handle a wide array of tasks including text-to-speech, text-to-audio, and audio transformations. ComposableART adds compositional abilities, such as combining, interpolating, or negating instructions, providing fine-grained control over audio outputs beyond the training distribution.

    Experimental evaluations demonstrate Fugatto's competitive performance against specialised models and highlight its emergent capabilities, such as synthesising novel sounds or performing tasks it was not explicitly trained for.
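
    ComposableART builds on guidance-style composition of conditioned predictions; a minimal sketch of that general idea follows, assuming a hypothetical instruction-conditioned predictor `predict`, with the weighting scheme purely illustrative.

    ```python
    # Sketch only: guidance-style composition of instruction-conditioned
    # predictions, in the spirit of combining/interpolating/negating.
    def composed_step(predict, x, t, instructions, weights):
        """Positive weights blend instructions; a negative weight steers the
        output away from that instruction (negation). `predict(x, t, instr)`
        is a hypothetical denoiser-style predictor."""
        base = predict(x, t, None)                 # unconditional prediction
        return base + sum(w * (predict(x, t, instr) - base)
                          for instr, w in zip(instructions, weights))

    # Hypothetical usage: blend two sound prompts and negate a third, e.g.
    # composed_step(predict, x, t,
    #               ["rain on a tin roof", "soft choir", "dog barking"],
    #               weights=[0.6, 0.4, -0.5])
    ```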

    Link: https://d1qx31qr3h6wln.cloudfront.net/publications/FUGATTO.pdf

    18 min
  • (LLM Application-NVIDIA) Small Language Models: The Future of Agentic AI
    Jul 3 2025

    The provided text argues that small language models (SLMs) are the future of agentic AI, positioning them as more economical and operationally suitable than large language models (LLMs) for the majority of tasks within AI agents. While LLMs excel at general conversations, agentic systems frequently involve repetitive, specialised tasks where SLMs offer advantages like lower latency, reduced computational requirements, and significant cost savings. The authors propose a shift to heterogeneous systems, where SLMs handle routine functions and LLMs are used sparingly for complex reasoning. The document also addresses common barriers to SLM adoption, such as existing infrastructure investments and popular misconceptions, and outlines a conversion algorithm for migrating agentic applications from LLMs to SLMs.
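
    A toy sketch of the heterogeneous routing idea follows; the intent set and length cut-off are illustrative placeholders, not the paper's data-driven conversion algorithm.

    ```python
    # Sketch only: route routine agent calls to an SLM, reserve the LLM
    # for open-ended reasoning.
    ROUTINE = {"extract_fields", "format_output", "classify_intent"}

    def route(call_type, prompt, slm, llm):
        """`slm` and `llm` are hypothetical callables for a small and a
        large model; thresholds here are purely illustrative."""
        if call_type in ROUTINE and len(prompt) < 4000:
            return slm(prompt)    # cheap, low-latency path for routine work
        return llm(prompt)        # fall back to the LLM for complex reasoning
    ```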

    Link: https://arxiv.org/pdf/2506.02153

    22 min
  • (LLM Explainability-METR) Measuring AI Long Task Completion
    Jun 28 2025

    Welcome to PodXiv! In this episode, we dive into groundbreaking research from METR that introduces a novel metric for understanding AI capabilities: the 50%-task-completion time horizon. This unique measure quantifies how long humans typically take to complete tasks that AI models can achieve with a 50% success rate, offering intuitive insight into real-world performance.

    The study reveals a staggering trend: frontier AI's time horizon has been doubling approximately every seven months since 2019, driven by improvements in reliability, mistake adaptation, logical reasoning, and tool use. This rapid progress has profound implications, with extrapolations suggesting AI could automate many month-long software tasks within five years, a critical insight for responsible AI governance and safety guardrails.
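
    A back-of-the-envelope extrapolation makes the trend concrete, assuming a clean seven-month doubling time (the paper's fitted rate, not a guaranteed forecast):

    ```python
    # Sketch only: extrapolate the 50%-task-completion time horizon.
    def horizon_minutes(start_minutes, months_elapsed, doubling_months=7):
        """Horizon after `months_elapsed` months of exponential growth."""
        return start_minutes * 2 ** (months_elapsed / doubling_months)

    # A 1-hour horizon today implies roughly a 16-day horizon in 5 years:
    # horizon_minutes(60, 60) ≈ 22,800 minutes ≈ 16 days of human task time
    ```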

    However, the research acknowledges crucial limitations. Current AI systems perform less effectively on "messier," less structured tasks and those requiring complex human-like context or interaction. These factors highlight that while impressive, the generalisation of these trends to all real-world intellectual labour requires further investigation. Tune in to explore the future of AI autonomy and its societal impact!

    Paper: https://arxiv.org/pdf/2503.14499

    15 min
  • (FM) MiniMax-M1: Scaling Test-Time Compute Efficiently with Lightning Attention
    Jun 22 2025

    Join us to explore MiniMax-M1, a revolutionary development from MiniMax, hailed as the world's first open-weight, large-scale hybrid-attention reasoning model. At its core, MiniMax-M1 leverages a sophisticated hybrid Mixture-of-Experts (MoE) architecture paired with a novel lightning attention mechanism, which together facilitate the efficient scaling of test-time compute. A significant advancement is its native support for an impressive 1 million token context length, an eightfold expansion compared to competitors like DeepSeek R1, making it exceptionally well-suited for complex tasks demanding the processing of extensive inputs and prolonged reasoning.

    Further enhancing its capabilities, MiniMax-M1 was trained using CISPO, a pioneering reinforcement learning algorithm. This method, which clips importance sampling weights rather than token updates, notably boosts RL efficiency, demonstrated by the model’s full RL training completing in just three weeks on 512 H800 GPUs for a cost of only $534,700. The model exhibits particular strengths in practical applications such as complex software engineering, effective tool utilization, and various long-context tasks, having been rigorously trained in diverse real-world software engineering environments. While its innovative design and performance are thoroughly detailed, the provided sources do not explicitly outline any limitations of the MiniMax-M1 model.
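
    In PyTorch-style pseudocode, the core of CISPO might look like the sketch below; tensor shapes and epsilon values are illustrative, and the detached, clipped importance weight is what distinguishes it from PPO-style clipping of the token update itself.

    ```python
    # Sketch only: a CISPO-style loss that clips the importance-sampling
    # weight rather than the per-token policy update.
    import torch

    def cispo_loss(logp_new, logp_old, advantages, eps_low=0.2, eps_high=5.0):
        """Per-token log-probs and advantages; epsilons are illustrative."""
        ratio = torch.exp(logp_new - logp_old)           # IS weight per token
        clipped = torch.clamp(ratio, 1 - eps_low, 1 + eps_high)
        # Detach the clipped weight: every token still contributes a gradient
        # through logp_new, unlike PPO, which zeroes clipped tokens out.
        return -(clipped.detach() * advantages * logp_new).mean()
    ```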

    To learn more, explore the full technical report: https://arxiv.org/abs/2506.13585.

    12 min
  • (FM-GOOGLE) Gemini 2.5: Technical Report
    Jun 20 2025

    Tune in to explore Google DeepMind's groundbreaking Gemini 2.X model family, featuring the highly capable Gemini 2.5 Pro and the efficient Gemini 2.5 Flash. These models represent a new frontier in AI, offering natively multimodal understanding, the ability to process over one million tokens of long context, and advanced reasoning through "Thinking" capabilities across diverse domains.

    Gemini 2.5 Pro stands out for its State-of-the-Art performance in coding and reasoning, alongside remarkable multimodal understanding, capable of analysing up to three hours of video content. This enables exciting applications such as building interactive web applications, comprehensive codebase understanding, and powering next-generation agentic workflows, famously demonstrated by "Gemini Plays Pokémon".

    However, the sources also highlight ongoing areas for development. While excelling, the models sometimes struggle with raw pixel vision input and exhibit a tendency for agents to repeat actions with very long contexts exceeding 100k tokens. Challenges like hallucinations and "context poisoning" can also occur. Despite notable increases in some critical capabilities (e.g., cyber uplift), Gemini 2.5 Pro has not reached Critical Capability Levels that would pose a significant risk of severe harm, with Google DeepMind actively accelerating mitigations in these areas.

    Paper link: https://storage.googleapis.com/deepmind-media/gemini/gemini_v2_5_report.pdf

    16 min