Who is this course for

Threat Modeling AI Systems is an in-depth technical course that focuses on how to effectively identify threats and design controls in LLMs and other non-deterministic software.   This course is designed for:

  • organizations that deploy machine learning models, generative AI, RAG pipelines, and AI agents
  • security professionals already familiar with threat modeling


Course Overview

This intensive course teaches security professionals how to apply threat modeling to AI systems using a principles-based approach. Rather than focusing on specific AI models or tools that quickly become outdated, the course focuses on the fundamental concepts that drive AI behavior, risk, and security controls. Participants will learn how AI systems work, how attackers exploit them, and why threats such as data poisoning, prompt injection, and model theft require a different security mindset. 

What to expect

Using real-world scenarios and collaborative discussions, participants will practice identifying high-impact AI threats, evaluating security controls, and facilitating effective threat modeling conversations across multidisciplinary teams of security engineers, architects, and data scientists. 

Prerequisites

Participants should already be familiar with:

  • The Four Question Framework
  • Software Development Lifecycle


For individuals unfamiliar with the Four Question Framework, The World’s Shortest Threat Modeling Course is free, self-paced, and can be completed in under an hour. 

Learning outcomes

By the end of the training, participants will be able to:

  • Confidently threat model AI architectures, identify AI-specific attack vectors, and apply durable security principles that remain effective as AI technologies evolve.
  • Identify similarities and differences between threat modeling traditional applications and AI-based systems.
  • Understand how to address the unique threats and controls associated with AI systems.
  • Facilitate effective security discussions with data scientists to develop meaningful AI threat models.


Timing

This course is currently offered in-person only, over 2 consecutive days (16 hours total instruction time)

Course curriculum

  1. Preparation

  2. What Are We Working On? (AI Fundamentals)

  3. What Can Go Wrong? (Threat landscape for AI systems)

  4. What Are We Going to Do About It? (Security controls for AI systems)

  5. Did We Do a Good Job? (Organizational Determination)

Pricing options

Early Bird pricing ends April 15th!

Instructors


Michael Novack

Shoshana Cox

Michael Novack is an AI Security Architect specializing in securing enterprise AI systems, AI agents, and explainability-driven controls. His background spans software engineering and application security architecture in the financial services and insurance sectors, experience he brings to designing principles-first security programs that integrate practices such as threat modeling, security champions programs, secrets management, and AI strategy.

Michael's current focus is on advancing secure enterprise AI adoption, helping organizations implement monitoring, explainability, and governance capabilities that allow teams to understand, manage, and mitigate AI risk with confidence.

In parallel, Michael designs interactive learning tools that make complex AI and security topics more approachable, including an AI strategy board game and a cybersecurity awareness card game. These tools help teams communicate technical topics more effectively and bridge the gap between product, security, and business stakeholders.

LinkedIn: https://www.linkedin.com/in/michael-novack/

Shoshana Cox is a prominent AI security architect, researcher, and strategist with over a decade of experience in mission-critical AI. Her work focuses on bridging the gap between probabilistic AI systems and deterministic security requirements and providing high-level strategy and training for C-suite executives and technical teams globally. Currently serving as the CEO and Head of Research at Bermuda Hundred Strategies, she specializes in the intersection of AI, national security, and threat modeling. She is a member of the core author team for the OWASP AI Exchange, where she contributes to international engineering standards and technical requirements for the EU AI Act. Her professional background spans roles as a mathematician, red team lead, data scientist, and Chief Data Officer.

Shoshana is widely recognized for her work on MLSecOps and AI defensive architectures, authoring technical papers and holding a patent in AI security (US 12,093,400 B1). She is an active voice in the industry through her Substack newsletter, Angles of Attack, where she provides in-depth analysis on topics like agentic memory, "vibe coding," and the security risks of generative AI.

LinkedIn: https://www.linkedin.com/in/disesdi/

Countdown timer

Use this section to create urgency for your offer.

  • 00 Days
  • 00 Hours
  • 00 Minutes
  • 00 Seconds