PyData Seattle 2025

Red Teaming AI: Getting Started with PyRIT for Safer Generative AI Systems
2025-11-07 , Talk Track 3

As generative AI systems become more powerful and widely deployed, ensuring safety and security is critical. This talk introduces AI red teaming—systematically probing AI systems to uncover potential risks—and demonstrates how to get started using PyRIT (Python Risk Identification Toolkit), an open-source framework for automated and semi-automated red teaming of generative AI systems. Attendees will leave with a practical understanding of how to identify and mitigate risks in AI applications, and how PyRIT can help along the way.


Background & Motivation:

AI safety and security are increasingly important as generative models are integrated into real-world applications. Red teaming—intentionally probing models to uncover vulnerabilities—is a key practice in identifying and mitigating risks. However, many teams lack the tools or guidance to get started.

Talk Overview:

This session introduces the concept of AI red teaming and walks through how to use PyRIT (Python Risk Identification Toolkit), an open-source framework developed to support automated and semi-automated red teaming of LLMs.

Outline & Time Breakdown:

AI Safety & Security Overview (5 min): Why safety matters in generative AI applications.
AI Red Teaming Overview (5 min): Human-led vs. automated approaches, benefits and limitations.
PyRIT Deep Dive (20 min): How PyRIT supports red teaming workflows, including example use cases.
Q&A (10 min)

Key Takeaways:

Understand what AI red teaming is and why it matters.
Learn the differences between human-led and automated red teaming.
Gain practical knowledge of how to use PyRIT to probe LLMs for safety and security risks.

Audience:

The talk is designed to be accessible to a broad audience, with no prior experience required, though familiarity with any one of Python, LLMs, AI safety, AI security are helpful.


Prior Knowledge Expected:

No previous knowledge expected

Roman Lutz is a Responsible AI Engineer on Microsoft's AI Red Team, specializing in the safety and security of generative AI and open source software. He is a maintainer of PyRIT, Microsoft’s open-source AI red teaming toolkit, and has helped shape projects like Fairlearn and the Responsible AI Dashboard. Roman’s work bridges technical rigor with a commitment to transparency and accountability, empowering practitioners to build more robust and ethical AI systems. He shares his projects and insights at romanlutz.github.io.