Threat Modeling AI Systems
When: May 19-20, 2026 | Where: Washington D.C.
Threat Modeling AI Systems is an in-depth technical course on identifying threats and designing controls for LLMs and other non-deterministic software.
This intensive course teaches security professionals how to apply threat modeling to AI systems using a principles-based approach. Rather than covering specific AI models or tools that quickly become outdated, the course concentrates on the fundamental concepts that drive AI behavior, risk, and security controls. Participants will learn how AI systems work, how attackers exploit them, and why threats such as data poisoning, prompt injection, and model theft require a different security mindset.
Using real-world scenarios and collaborative discussions, participants will practice identifying high-impact AI threats, evaluating security controls, and facilitating effective threat modeling conversations across multidisciplinary teams of security engineers, architects, and data scientists.
Participants should already be familiar with the Four Question Framework of threat modeling.
For individuals unfamiliar with the Four Question Framework, The World’s Shortest Threat Modeling Course is free, self-paced, and can be completed in under an hour.
By the end of the training, participants will be able to:
This course is currently offered in person only, over two consecutive days (16 hours of total instruction time).
Instructors: Michael Novack and Shoshana Cox
Michael Novack is an AI Security Architect specializing in securing enterprise AI systems, AI agents, and explainability-driven controls. His background spans software engineering and application security architecture in the financial services and insurance sectors, experience he brings to designing principles-first security programs that integrate practices such as threat modeling, security champions programs, secrets management, and AI strategy. Michael's current focus is on advancing secure enterprise AI adoption, helping organizations implement monitoring, explainability, and governance capabilities that allow teams to understand, manage, and mitigate AI risk with confidence. In parallel, Michael designs interactive learning tools that make complex AI and security topics more approachable, including an AI strategy board game and a cybersecurity awareness card game. These tools help teams communicate technical topics more effectively and bridge the gap between product, security, and business stakeholders.
Shoshana Cox is a prominent AI security architect, researcher, and strategist with over a decade of experience in mission-critical AI. Her work focuses on bridging the gap between probabilistic AI systems and deterministic security requirements and on providing high-level strategy and training for C-suite executives and technical teams globally. Currently serving as the CEO and Head of Research at Bermuda Hundred Strategies, she specializes in the intersection of AI, national security, and threat modeling. She is a member of the core author team for the OWASP AI Exchange, where she contributes to international engineering standards and technical requirements for the EU AI Act. Her professional background spans roles as a mathematician, red team lead, data scientist, and Chief Data Officer. Shoshana is widely recognized for her work on MLSecOps and AI defensive architectures, authoring technical papers and holding a patent in AI security (US 12,093,400 B1). She is an active voice in the industry through her Substack newsletter, Angles of Attack, where she provides in-depth analysis on topics like agentic memory, "vibe coding," and the security risks of generative AI. LinkedIn: https://www.linkedin.com/in/disesdi/