All Episodes


Episode 1 — Course Overview & How to Use This Prepcast

This opening episode provides a structured orientation to the AI Security and Threats Audio course series, helping listeners understand what the program covers and how...

Episode 2 — The AI Security Landscape

This episode defines the AI security landscape by mapping the assets, attack surfaces, and emerging threats that distinguish AI from classical application security. It...

Episode 3 — System Architecture & Trust Boundaries

This episode explains the architecture of AI systems, breaking down their stages and components to show how trust boundaries shift across the lifecycle. Training, infe...

Episode 4 — Data Lifecycle Security

This episode examines data lifecycle security, covering the journey of data from collection and labeling through storage, retention, deletion, and provenance managemen...

Episode 5 — Prompt Security I: Injection & Jailbreaks

This episode introduces prompt injection and jailbreaks as fundamental AI-specific security risks. It defines prompt injection as malicious manipulation of model input...

Episode 6 — Prompt Security II: Indirect & Cross-Domain Injections

This episode examines indirect and cross-domain prompt injections, which expand the attack surface by embedding malicious instructions in external sources such as docu...

Episode 7 — Content Safety vs. Security

This episode explains the distinction and overlap between content safety and security in AI systems, a concept often emphasized in both professional practice and certi...

Episode 8 — Data Poisoning Attacks

This episode introduces data poisoning as a high-priority threat in AI security, where adversaries deliberately insert malicious samples into training or fine-tuning d...

Episode 9 — Training-Time Integrity

This episode covers training-time integrity, focusing on the assurance that data, processes, and infrastructure used in model development remain uncompromised. Learner...

Episode 10 — Privacy Attacks

This episode introduces privacy attacks in AI systems, focusing on techniques that reveal sensitive or personal information from training data or model behavior. Learn...

Episode 11 — Privacy-Preserving Techniques

This episode explores privacy-preserving techniques designed to reduce the risk of sensitive information exposure in AI systems while maintaining utility of the models...

Episode 12 — Model Theft & Extraction

This episode addresses model theft and extraction, highlighting how adversaries can replicate or steal valuable AI models. Model theft occurs when proprietary weights ...

Episode 13 — Adversarial Evasion

This episode introduces adversarial evasion, a class of attacks in which maliciously crafted inputs cause AI systems to misclassify or behave incorrectly. For exam pur...

Episode 14 — RAG Security I: Retrieval & Index Hardening

This episode explores retrieval-augmented generation (RAG) security, focusing on retrieval and index hardening as foundational defenses. RAG combines language models w...

Episode 15 — RAG Security II: Context Filtering & Grounding

This episode continues exploration of RAG security by examining context filtering and grounding as defenses for reliable outputs. Learners must understand context filt...

Episode 16 — Agents as an Attack Surface

This episode introduces AI agents as a new and growing attack surface, highlighting how their autonomy and tool integration create unique risks. Agents differ from sin...

Episode 17 — Secrets & Credential Hygiene

This episode addresses secrets and credential hygiene, emphasizing their critical role in preventing leaks and privilege misuse in AI systems. Secrets include API keys...

Episode 18 — AuthN/Z for LLM Apps

This episode explores authentication (AuthN) and authorization (AuthZ) for large language model (LLM) applications, highlighting their importance in managing identitie...

Episode 19 — Output Validation & Policy Enforcement

This episode examines output validation and policy enforcement as mechanisms for controlling what AI systems produce before results are delivered to users or downstrea...

Episode 20 — Red Teaming Strategy for GenAI

This episode introduces red teaming as a structured method for probing generative AI systems for vulnerabilities, emphasizing its importance for both exam preparation ...
