Episode 35 — Threat Modeling for AI

This episode covers threat modeling as a structured method for identifying and prioritizing risks in AI systems. Learners must understand the role of frameworks such as MITRE ATLAS, which catalogs adversarial techniques against machine learning systems, and STRIDE, which provides threat categories such as spoofing, tampering, and information disclosure. For certification purposes, it is essential to define the steps of threat modeling—identifying assets, enumerating threats, assessing risks, and planning mitigations—and to adapt them to the AI lifecycle. The exam relevance lies in showing how threat modeling supports proactive defense and aligns with governance obligations.
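The four steps above can be sketched as a small data model. This is a minimal, illustrative sketch, not a standard tool: the class names, the 1–5 scoring scale, and the likelihood-times-impact formula are assumptions chosen for clarity; real programs use richer risk models.

```python
from dataclasses import dataclass

# STRIDE threat categories as named in the episode.
STRIDE = {
    "S": "Spoofing",
    "T": "Tampering",
    "R": "Repudiation",
    "I": "Information disclosure",
    "D": "Denial of service",
    "E": "Elevation of privilege",
}

@dataclass
class Threat:
    asset: str            # step 1: the asset at risk
    stride: str           # step 2: STRIDE category key
    likelihood: int       # step 3: 1 (rare) .. 5 (frequent) -- assumed scale
    impact: int           # step 3: 1 (minor) .. 5 (severe) -- assumed scale
    mitigation: str = ""  # step 4: planned control

    @property
    def risk(self) -> int:
        # Simple likelihood x impact score; an assumption for illustration.
        return self.likelihood * self.impact

def prioritize(threats: list[Threat]) -> list[Threat]:
    # Highest-risk threats first, so mitigation planning starts at the top.
    return sorted(threats, key=lambda t: t.risk, reverse=True)

threats = [
    Threat("training data", "T", likelihood=3, impact=5,
           mitigation="provenance checks and data validation"),
    Threat("model API", "I", likelihood=4, impact=4,
           mitigation="rate limiting and query monitoring"),
]

for t in prioritize(threats):
    print(f"{STRIDE[t.stride]:<25} {t.asset:<15} risk={t.risk}")
```

Even a toy scoring like this makes the exam point concrete: prioritization falls out of the assessment step, and each entry carries its mitigation forward into planning.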
In practice, threat modeling involves mapping risks across training, inference, retrieval, and agentic workflows. Examples include identifying poisoning risks in training data, extraction threats in APIs, or prompt injection risks in deployed chat interfaces. Best practices involve embedding threat modeling into design reviews, continuously updating threat models as systems evolve, and integrating red team findings to refine assumptions. Troubleshooting considerations highlight challenges such as incomplete asset inventories and underestimating the sophistication of adversaries. Learners preparing for exams should be able to describe both the theoretical frameworks and the practical steps for performing effective threat modeling in AI environments.

Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your certification path.
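The lifecycle mapping described in this episode can be sketched as a simple lookup table. The stage keys and threat phrasing are illustrative assumptions, not a standard taxonomy; only the training, inference, and chat-interface examples come directly from the episode.

```python
# Hypothetical mapping of AI lifecycle stages to example threats.
# The "retrieval" and "agentic" entries are assumed examples beyond
# those named in the episode.
LIFECYCLE_THREATS: dict[str, list[str]] = {
    "training":  ["data poisoning"],
    "inference": ["model extraction via API"],
    "retrieval": ["poisoned retrieval corpus"],
    "agentic":   ["prompt injection in chat interfaces"],
}

def threats_for(stage: str) -> list[str]:
    # Return enumerated threats for a lifecycle stage; empty if unknown,
    # which also surfaces the "incomplete asset inventory" pitfall.
    return LIFECYCLE_THREATS.get(stage, [])
```

A table like this is the artifact a design review would iterate on: red team findings add rows, and an empty result for a stage signals a gap in the asset inventory rather than an absence of risk.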