Practical DevSecOps Podcast by Varun Kumar

Practical DevSecOps

By: Varun Kumar
Listen for free

About this listen

Practical DevSecOps (a Hysn Technologies Inc. company) offers vendor-neutral and hands-on DevSecOps and Product Security training and certification programs for IT Professionals. Our online training and certifications are focused on modern areas of information security, including DevOps Security, AI Security, Cloud-Native Security, API Security, Container Security, Threat Modeling, and more.



© 2025 Practical DevSecOps
Education
Episodes
  • Best AI Security Books in 2025
    Jun 20 2025

    Are you ready to face the escalating threat of AI attacks? AI system attacks are hitting companies every single day. Hackers use AI tools to break into major banks and steal millions. It's a critical time for anyone in tech or cybersecurity to understand how to fight back.

    In this episode, we delve into why AI security is more crucial than ever in 2025. We reveal that 74% of IT security professionals say AI-powered threats are seriously hurting their companies, and a staggering 93% of businesses expect to face AI attacks daily this year.

    These aren't just minor incidents; last year, 73% of organizations were hit by AI-related security breaches, costing an average of $4.8 million each time, with attacks taking an alarming 290 days to even detect.

    The good news? Companies are desperately seeking individuals with AI security expertise, offering excellent opportunities for those who are prepared. We discuss how AI security books serve as your secret weapon, providing proven strategies directly from real security experts who have battled actual AI attacks.

    We'll touch upon some top resources available, covering everything from:

    • Understanding and protecting against Large Language Model (LLM) security threats.
    • Practical applications of LLMs for building smart systems.
    • Developing your own LLMs from scratch.
    • Defending against sophisticated adversarial AI attacks, including prompt injection and model poisoning.
    • Navigating AI data privacy, ethics, and regulatory compliance.
    • Advanced techniques like AI red teaming to systematically assess and enhance security.

    Whether you're a beginner looking to understand the basics or an expert aiming for cutting-edge strategies, finding the right learning path in AI cybersecurity is essential. Don't wait – AI threats are growing stronger every day. Tune in to discover how to upskill and become an AI security expert, building solid skills step by step for career development success.

    Ready to go further? Our Certified AI Security Professional course offers an in-depth exploration of AI risks. It combines the best book knowledge with hands-on practice, letting you work through real AI system attacks and learn directly from industry experts.

    Enroll today and upskill with the Certified AI Security Professional certification. Plus, for a limited time, you can save 15% on this course; buy it now and start whenever you're ready!

    13 m
  • Threat Modeling for Medtech Industry
    Jun 18 2025

    Join us for an insightful episode as we delve into the critical realm of product security within the Medtech industry. The digital revolution is transforming patient care, but it also introduces significant security risks to medical devices.

    We'll explore the complex security environment where devices like pacemakers and diagnostic systems are increasingly connected, making them targets for unauthorized access, data theft, and operational manipulation.

    Discover how breaches can lead to dire consequences, from endangering patient health and damaging manufacturers' reputations, to incurring financial losses and navigating stricter regulatory hurdles.

    Learn about the types of medical devices most susceptible to cyber threats, including those with connectivity, remote access features, legacy systems, sensitive data storage (PHI), and life-sustaining equipment.

    Our focus shifts to threat modeling – a crucial, proactive process for enhancing medical device security.

    We'll uncover its immense benefits, such as identifying and addressing risks, boosting device resilience against cyberattacks, and ensuring regulatory adherence.

    We'll also touch upon the FDA's recent policy update, transitioning from the Quality System Regulation (QSR) to the Quality Management System Regulation (QMSR), which now incorporates ISO 13485:2016 standards, highlighting a greater emphasis on risk management throughout the device lifecycle.

    Dive deep into various threat modeling techniques that help manufacturers fortify their products:

    Agile Threat Modeling: Integrating security with rapid development cycles, ensuring continuous assessments aligned with ongoing development.

    Goal-Centric Threat Modeling: Prioritizing protection for critical assets and business objectives based on impact on functionalities and compliance requirements.

    Library-Centric Threat Modeling: Utilizing pre-compiled lists of known threats and vulnerabilities pertinent to medical devices for standardized risk assessment, enhancing scalability and efficiency.

    Finally, we'll discuss how specialized training, such as the Practical DevSecOps Certified Threat Modeling Professional (CTMP) course, equips Medtech manufacturers with the essential skills to proactively identify and address security vulnerabilities.

    This training focuses on real-world applications and scenarios, ensuring continuous security assessment and compliance with stringent regulatory standards from design to deployment.

    Tune in to understand why threat modeling is not just a best practice, but an essential component for safeguarding patient well-being and maintaining integrity in the digital healthcare landscape.

    5 m
  • AI Security Frameworks for Enterprises
    Jun 12 2025

    Welcome to "Securing the Future," the podcast dedicated to navigating the complex world of AI security. In this episode, we unpack the vital role of AI security frameworks—acting as instruction manuals—in safeguarding AI systems for multinational corporations.

    These frameworks provide uniform guidelines for implementing security measures across diverse nations with varying legal requirements, from Asia-Pacific to Europe and North America.


    We explore how these blueprints help organizations find weak spots before bad actors do, establish consistent rules, meet laws and regulations, and ultimately build trust with AI users. Crucially, they enable compliance and reduce implementation costs through standardization.

    This episode delves into four leading frameworks:

    NIST AI Risk Management Framework (AI RMF): We break down its comprehensive, lifecycle-wide approach, structured around four core functions: Govern, Map, Measure, and Manage.

    This widely recognized framework is often recommended for beginners due to its clear steps and available resources. Its risk-based approach is adaptable for specific sectors like healthcare and banking, forming the backbone of their tailored safety frameworks.

    Microsoft’s AI Security Framework: This framework focuses on operationalizing AI security best practices. It addresses five main parts: Security, Privacy, Fairness, Transparency, and Accountability. While integrating with Microsoft tools, its principles are broadly applicable for ensuring AI is used correctly and protected.

    MITRE ATLAS Framework for AI Security: Discover this specialized framework that catalogues real-world AI threats and attack techniques. We discuss attack types like data poisoning, evasion attacks, model stealing, and privacy attacks, which represent “novel attacks” on AI systems. ATLAS is invaluable for threat modeling and red teaming, providing insights into adversarial machine learning techniques.

    Databricks AI Security Framework (DASF) 2.0: Learn about this framework, which identifies 62 risks and 64 real use-case controls. Based on standards like NIST and MITRE, DASF is platform-agnostic, allowing its controls to be mapped across various cloud or data platform providers.

    It critically differentiates between traditional cybersecurity risks and novel AI-specific attacks like adversarial machine learning, and bridges business, data, and security teams with practical tools.

    We discuss how organizations can use parts from different frameworks to build comprehensive protection, complementing each other across strategic risks, governance, and technical controls.

    Case studies from healthcare and banking illustrate how these conceptual frameworks are tailored to meet strict government rules and sector-specific challenges, ensuring robust risk management and governance.


    Ultimately, AI security is an ongoing journey, not a one-off project. The key takeaway is to start small and build up your security over time.


    For more information, read our “Best AI Security Frameworks for Enterprises” blog post.

    6 m
No reviews yet