Episodes

  • Key Principles For Scaling AI In Enterprise: Leadership Lessons With Walid Mehanna
    Dec 10 2024

    In this episode, we had the privilege of speaking with Walid Mehanna, Chief Data and AI Officer at Merck Group. Walid shares deep insights into how large, complex organizations can scale data and AI and create lasting impact through thoughtful leadership.

    As Chief Data & AI Officer of Merck Group, Walid led the Merck Data & AI Organization, delivering strategy, value, architecture, governance, engineering, and operations across the company globally. Hand in hand with Merck’s business sectors and their data offices, his organization harnessed the power of Data & AI. Walid is glad to be part of Merck as another curious mind dedicated to human progress.

    1 hr and 4 mins
  • Maximising the Impact of Your Data & AI Consulting Projects
    Nov 25 2024

    In our latest episode of the Data Science Conversations Podcast, we spoke with Christoph Sporleder, Managing Partner at Rewire, about the evolving role of consulting in the data and AI space.

    This conversation is a must listen for anyone dealing with the challenges of integrating AI into business processes or considering an AI project with an external consulting firm. Christoph draws from decades of experience, offering practical advice and actionable insights for organizations and practitioners alike.

    Key Topics Discussed

    1. Evolution of Data and Cloud Computing

    The shift from local computing to cloud technologies, enabling broader data integration and advanced analytics, with the rise of IoT and machine data.

    2. Data Management Challenges

    Discussion on the evolution from data warehouses to data lakes and the emerging concept of data mesh for better governance and scalability.

    3. Importance of Strategy in AI

    Why a clear strategy is crucial for AI adoption, including aligning organizational leadership and identifying impactful use cases.

    4. Sectoral Adoption of Data and AI

    Differences in adoption across sectors, with early adopters in finance and insurance versus later adoption in manufacturing and infrastructure.

    5. Consulting Models and Engagement

    Insights into consulting engagement types, including strategy consulting, system integration, and body leasing, and their respective challenges and benefits.

    6. Challenges in AI Implementation

    Common pitfalls in AI projects, such as misalignment with business goals, inadequate infrastructure planning, and siloed lighthouse initiatives.

    7. Leadership’s Role in AI Success

    The critical need for senior leadership commitment to drive AI adoption, ensure process integration, and manage organizational change.

    8. Effective Collaboration with Consultants

    Best practices for successful partnerships with consultants, including aligning on objectives, managing personnel transitions, and setting clear engagement expectations.

    9. Future Trends in Data and AI

    Emerging trends like componentized AI architectures, Gen AI integration, and the growing focus on embedding AI within business processes.

    10. Tips for Managing Long-Term Projects

    Strategies for handling staff rotations and maintaining project continuity in consulting engagements, emphasizing planning and communication.

    47 mins
  • KP Reddy: How AI is Reshaping Startup Dynamics and VC Strategies
    Sep 24 2024

    KP Reddy, founder and managing partner of Shadow Ventures, explains how AI is set to redefine the startup landscape and the venture capital model. KP shares his unique perspective on the rapidly evolving role of AI in entrepreneurship, offering insights into:

    • Why GenAI adoption in large companies is still limited
    • How AI is empowering leaner, more efficient startups
    • The potential for AI to disrupt traditional venture capital strategies
    • The emergence of new business models driven by AI capabilities
    • Real-world applications of AI in industries like construction, life sciences, and professional services

    1 hr and 2 mins
  • The Evolution of GenAI: From GANs to Multi-Agent Systems
    Aug 29 2024

    Early Interest in Generative AI

    • Martin's initial exposure to Generative AI in 2016 through a conference talk in Milano, Italy, and his early work with Generative Adversarial Networks (GANs).

    Development of GANs and Early Language Models since 2016

    • The evolution of Generative AI from visual content generation to text generation with models like Google's Bard and the increasing popularity of GANs in 2018.


    Launch of GenerativeAI.net and Online Course

    • Martin's creation of GenerativeAI.net and an online course, which gained traction after being promoted on platforms like Reddit and Hacker News.


    Defining Generative AI

    • Martin’s explanation of Generative AI as a technology focused on generating content, contrasting it with Discriminative AI, which focuses on classification and selection.


    Evolution of GenAI Technologies

    • The shift from LSTM models to Transformer models, highlighting key developments like the "Attention Is All You Need" paper and the impact of Transformer architecture on language models.


    Impact of Computing Power on GenAI

    • The role of increasing computing power and larger datasets in improving the capabilities of Generative AI.


    Generative AI in Business Applications

    • Martin’s insights into the real-world applications of GenAI, including customer service automation, marketing, and software development.


    Retrieval Augmented Generation (RAG) Architecture

    • The use of RAG architecture in enterprise AI applications, where documents are chunked and queried to provide accurate and relevant responses using large language models.
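    The chunk-query-respond flow described above can be sketched in a few lines. This is a minimal, stdlib-only illustration, not the architecture discussed in the episode: the fixed-size chunking and word-overlap scoring are toy stand-ins for the embedding models and vector stores a production RAG system would use, and all function names are illustrative.

    ```python
    # Minimal RAG sketch: split documents into chunks, score chunks
    # against a query, and assemble a grounded prompt for an LLM.

    def chunk(text: str, size: int = 40) -> list[str]:
        """Split a document into fixed-size word windows."""
        words = text.split()
        return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

    def score(query: str, passage: str) -> float:
        """Word-overlap score as a stand-in for vector similarity."""
        q, p = set(query.lower().split()), set(passage.lower().split())
        return len(q & p) / (len(q) or 1)

    def build_prompt(query: str, docs: list[str], top_k: int = 2) -> str:
        """Retrieve the top-k chunks and embed them in the prompt."""
        chunks = [c for d in docs for c in chunk(d)]
        best = sorted(chunks, key=lambda c: score(query, c), reverse=True)[:top_k]
        context = "\n---\n".join(best)
        # This prompt is what would be sent to the large language model.
        return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    ```

    The key design point is that retrieval quality, not the LLM, decides whether the response is "accurate and relevant": only the chunks that score well against the query ever reach the model.
    
    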


    Technological Drivers of GenAI

    • The advancements in chip design, including Nvidia’s focus on GPU improvements and the emergence of new processing unit architectures like the LPU.


    Small vs. Large Language Models

    • A comparison between small and large language models, discussing their relative efficiency, cost, and performance, especially in specific use cases.


    Challenges in Implementing GenAI Systems

    • Common challenges faced in deploying GenAI systems, including the costs associated with training and fine-tuning large language models and the importance of clean data.


    Measuring GenAI Performance

    • Martin’s explanation of the complexities in measuring the performance of GenAI systems, including the use of the Hallucination Leaderboard for evaluating language models.


    Emerging Trends in GenAI

    • Discussion of future trends such as the rise of multi-agent frameworks, the potential for AI-driven humanoid robots, and the path towards Artificial General Intelligence (AGI).


    43 mins
  • Future AI Trends: Strategy, Hardware and AI Security at Intel
    Jul 24 2024

    In this episode, we sit down with Steve Orrin, Federal Chief Technology Officer at Intel Corporation. Steve shares his extensive experience and insights on the transformative power of AI and its parallels with past technological revolutions. He discusses Intel’s pioneering role in enabling these shifts through innovations in microprocessors, wireless connectivity, and more.

    Steve highlights the pervasive role of AI in various industries and everyday technology, emphasizing the importance of a heterogeneous computing architecture to support diverse AI environments. He talks about the challenges of operationalizing AI, ensuring real-world reliability, and the critical need for robust AI security. Confidential computing emerges as a key solution for protecting AI workloads across different platforms.

    The episode also explores Intel’s strategic tools like oneAPI and OpenVINO, which streamline AI development and deployment. This episode is a must-listen for anyone interested in the evolving landscape of AI and its real-world applications.

    Intel's Legacy and Technological Revolutions

    • Historical parallels between past tech revolutions (PC era, internet era) and current AI era.
    • Intel's contributions to major technological shifts, including the development of wireless technology, USB, and cloud computing.

    AI's Current and Future Landscape

    • AI's pervasive role in everyday technology and various industries.
    • Importance of computing hardware in facilitating AI advancements.
    • AI's integration across different environments: cloud, network, edge, and personal devices.

    Intel's Approach to AI

    • Focus on heterogeneous computing architectures for diverse AI needs.
    • Development of software tools like oneAPI and OpenVINO to enable cross-platform AI development.

    Challenges and Solutions in AI Deployment

    • Scaling AI from lab experiments to real-world applications.
    • Ensuring AI security and trustworthiness through transparency and lifecycle management.
    • Addressing biases in AI datasets and continuous monitoring for maintaining AI integrity.

    AI Security Concerns

    • Protection of AI models and data through hardware security measures like confidential computing.
    • Importance of data privacy and regulatory compliance in AI deployments.
    • Emerging threats such as AI model poisoning, prompt injection attacks, and adversarial attacks.

    Innovations in AI Hardware and Software

    • Confidential computing as a critical technology for securing AI.
    • Research into using AI for chip layout optimization and process improvements in various industries.
    • Future trends in AI applications, including generative AI for fault detection and process optimization.

    Collaboration and Standards in AI Security

    • Intel's involvement in developing industry standards and collaborating with competitors and other stakeholders.
    • The role of industry forums and standards bodies like NIST in advancing AI security.

    Advice for Aspiring AI Security Professionals

    • Importance of hands-on experience with AI technologies.
    • Networking and collaboration with peers and industry experts.
    • Staying informed through industry news, conferences, and educational resources.

    Exciting Developments in AI

    • Fusion of multiple AI applications for complex problem-solving.
    • Advancements in AI hardware, such as AI PCs and edge devices.
    • Potential transformative impacts of AI on everyday life and business operations.


    1 hr and 3 mins
  • Enhancing GenAI with Knowledge Graphs: A Deep Dive with Kirk Marple
    Jun 6 2024

    In this episode we talk to Kirk Marple about the power of Knowledge Graphs when combined with GenAI models. Kirk explained the growing relevance of knowledge graphs in the AI era, the practical applications, their integration with LLMs, and the future potential of Graph RAG.

    A veteran of Microsoft and General Motors, Kirk has spent the last 30 years in software development and data leadership roles. He also successfully exited the first startup he founded, RadiantGrid, which was acquired by Wohler Technologies.

    Now, as the technical founder and CEO of Graphlit, Kirk and his team are streamlining the development of vertical AI apps with their end-to-end, cloud-based offering, which ingests unstructured data and leverages retrieval augmented generation to improve accuracy, domain specificity, adaptability, and context understanding – all while expediting development.

    Episode Summary -


    Introduction to Knowledge Graphs

    • Knowledge graphs extract relationships between entities like people, places, and things, facilitating efficient information retrieval.
    • They represent intricate interactions and interrelationships, enabling users to "walk the graph" and uncover deeper insights.

    Importance in the AI Era

    • Knowledge graphs enhance data retrieval and filtering, crucial for feeding accurate data into large language models (LLMs) and multimodal models.
    • They provide an additional axis for information retrieval, complementing vector search.

    Industry Use Cases

    • Commonly used in customer data platforms and CRM models to map relationships within and between companies.
    • Knowledge graphs can convert complex datasets into structured, easily queryable formats.

    Challenges and Limitations

    • Familiarity with graph databases and the ETL process for graph data integration is still developing.
    • Graph structures are less common and more complex than traditional relational models.

    Integrating Knowledge Graphs with LLMs

    • Knowledge graphs enrich data integration and semantic understanding, adding context to text retrieved by LLMs.
    • They can help reduce hallucinations in LLMs by grounding responses in more accurate and comprehensive context.

    Graph RAG (Retrieval Augmented Generation)

    • Combines knowledge graphs with RAG to provide additional context for LLM-generated responses.
    • Allows retrieval of data not directly cited in the text, enhancing the breadth of information available for queries.

    Scalability and Efficiency

    • Effective graph database architectures can handle large-scale graph data efficiently.
    • Graph RAG requires a robust ingestion pipeline and careful management of data freshness and retrieval processes.

    Future Developments

    • Growing interest in and implementation of knowledge graphs and Graph RAG across various industries.
    • Potential for new tools and standardization efforts to make these technologies more accessible and effective.

    Graphlit: Simplifying Knowledge Graphs

    • The platform focuses on simplifying the creation and use of knowledge graphs for developers.
    • Provides APIs for easy integration, supporting domain-specific vertical AI applications.
    • Offers a unified pipeline for data ingestion, extraction, and knowledge graph construction.

    Open Source and Community Contributions
    • Recommendations for...
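    The "walk the graph" idea from this episode can be made concrete with a toy example. This sketch is purely illustrative: facts are stored as (subject, relation, object) triples, entities mentioned in a query are expanded one hop at a time, and the collected facts become extra context for an LLM prompt. The entities and relations are invented for the example; a real Graph RAG system would use a graph database and proper entity linking.

    ```python
    # Toy knowledge graph as a list of (subject, relation, object) triples.
    TRIPLES = [
        ("Acme Corp", "acquired", "WidgetCo"),
        ("WidgetCo", "headquartered_in", "Berlin"),
        ("Acme Corp", "ceo", "Dana Lee"),
    ]

    def neighbors(entity: str) -> list[tuple[str, str, str]]:
        """All triples that mention the entity as subject or object."""
        return [t for t in TRIPLES if entity in (t[0], t[2])]

    def walk(seeds: set[str], hops: int = 2) -> list[tuple[str, str, str]]:
        """Collect every triple reachable within `hops` of the seed entities."""
        seen, frontier, facts = set(seeds), set(seeds), []
        for _ in range(hops):
            next_frontier = set()
            for entity in frontier:
                for s, r, o in neighbors(entity):
                    if (s, r, o) not in facts:
                        facts.append((s, r, o))
                    for n in (s, o):
                        if n not in seen:
                            seen.add(n)
                            next_frontier.add(n)
            frontier = next_frontier
        return facts

    # WidgetCo's location surfaces even though the query seed is only
    # "Acme Corp" -- retrieval of data not directly cited in the text.
    facts = walk({"Acme Corp"})
    context = "\n".join(f"{s} {r} {o}" for s, r, o in facts)
    ```

    The second hop is what distinguishes this from plain vector search: it follows relationships outward from the entities the query mentions, supplying context the query text never contained.
    
    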
    45 mins
  • Using Open Source LLMs for Grammatical Error Correction (GEC)
    Mar 4 2024

    At LanguageTool, Bartmoss St Clair (Head of AI) is pioneering the use of Large Language Models (LLMs) for grammatical error correction (GEC), moving away from the tool's initial non-AI approach to create a system capable of catching and correcting errors across multiple languages.

    LanguageTool supports over 30 languages, has several million users, and over 4 million installations of its browser add-on, benefiting from a diverse team of employees from around the world.

    Episode Summary -

    1. LanguageTool decided against using existing LLMs like GPT-3 or GPT-4 due to cost, speed, and accuracy benefits of developing their own models, focusing on creating a balance between performance, speed, and cost.
    2. The tool is designed to work with low latency for real-time applications, catering to a wide range of users including academics and businesses, with the aim to balance accurate grammar correction without being intrusive.
    3. Bartmoss discussed the nuanced approach to grammar correction, acknowledging that language evolves and user preferences may vary, necessitating a balance between strict grammatical rules and user acceptability.
    4. The company employs a mix of decoder and encoder-decoder models depending on the task, with a focus on contextual understanding and the challenges of maintaining the original meaning of text while correcting grammar.
    5. A hybrid system that combines rule-based algorithms with machine learning is used to provide nuanced grammar corrections and explanations for the corrections, enhancing user understanding and trust.
    6. LanguageTool is developing a generalized GEC system, incorporating legacy rules and machine learning for comprehensive error correction across various types of text.
    7. Training models involve a mix of user data, expert-annotated data, and synthetic data, aiming to reflect real user error patterns for effective correction.
    8. The company has built tools to benchmark GEC tasks, focusing on precision, recall, and user feedback to guide quality improvements.
    9. Introduction of LLMs has expanded LanguageTool's capabilities, including rewriting and rephrasing, and improved error detection beyond simple grammatical rules.
    10. Despite the higher costs associated with LLMs and hosting infrastructure, the investment is seen as worthwhile for improving user experience and conversion rates for premium products.
    11. Bartmoss speculates on the future impact of LLMs on language evolution, noting their current influence and the importance of adapting to changes in language use over time.
    12. LanguageTool prioritizes privacy and data security, avoiding external APIs for grammatical error correction and developing their systems in-house with open-source models.
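    The precision/recall benchmarking mentioned in point 8 can be sketched simply. This is not LanguageTool's actual tooling: edits are modeled here as bare (position, original, replacement) tuples, and real GEC evaluation aligns and classifies edits far more carefully.

    ```python
    # Score a GEC system's proposed corrections against expert-annotated
    # gold edits. Precision: how many proposed edits were right.
    # Recall: how many needed edits were found.

    def precision_recall(system_edits: set, gold_edits: set) -> tuple[float, float]:
        tp = len(system_edits & gold_edits)  # true positives: edits both agree on
        precision = tp / len(system_edits) if system_edits else 1.0
        recall = tp / len(gold_edits) if gold_edits else 1.0
        return precision, recall

    # One correct edit, one false positive, one missed gold edit.
    gold = {(3, "their", "there"), (7, "is", "are")}
    system = {(3, "their", "there"), (9, "a", "an")}
    p, r = precision_recall(system, gold)  # p = 0.5, r = 0.5
    ```

    For a correction tool the precision side usually matters most, which matches the episode's point about accurate correction "without being intrusive": a false correction annoys users more than a missed one.
    
    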



    50 mins
  • The Path to Responsible AI with Julia Stoyanovich of NYU
    Jan 29 2024

    In this enlightening episode, Dr. Julia Stoyanovich delves into the world of responsible AI, exploring the ethical, societal, and technological implications of AI systems. She underscores the importance of global regulations, human-centric decision-making, and the proactive management of biases and risks associated with AI deployment. Through her expert lens, Dr. Stoyanovich advocates for a future where AI is not only innovative but also equitable, transparent, and aligned with human values.

    Julia is an Institute Associate Professor at NYU in both the Tandon School of Engineering and the Center for Data Science. In addition, she is Director of the Center for Responsible AI, also at NYU. Her research focuses on responsible data management, fairness, diversity, transparency, and data protection in all stages of the data science lifecycle.

    Episode Summary -

    1. The Definition of Responsible AI
    2. Example of ethical AI in the medical world - Fast MRI technology
    3. Fairness and Diversity in AI
    4. The role of regulation - What it can and can’t do
    5. Transparency, Bias in AI models and Data Protection
    6. The dangers of Gen AI Hype and problematic AI narratives from the tech industry
    7. The importance of humans in ensuring ethical development
    8. Why “Responsible AI” is actually a bit of a misleading term
    9. What Data & AI leaders can do to practise Responsible AI

    48 mins