PyData Global 2025

Open-Source Models' Security: Adversarial Attacks, Poisoning & Sponge Attacks
2025-12-09, General Track

The use of open-source models is growing rapidly. According to Gartner, their adoption during the Magnetic Era is expected to triple relative to that of foundation models. This rise in usage, however, also brings heightened cybersecurity risks. In this talk, we will explore the vulnerabilities unique to open-source models, the algorithmic techniques used to exploit them, and how our startup addresses these challenges.


In this talk, I will discuss various methods for attacking machine learning models, including model poisoning, sponge (DDoS-style) attacks, and the generation of adversarial examples such as Projected Gradient Descent (PGD) and Carlini-Wagner. I will then present defense strategies that are data-agnostic and focus on model-driven approaches to protecting AI systems, particularly those built on open-source models. Finally, I will discuss how protecting open-source models differs from securing typical LLM deployments (and why our scope is not that of the OWASP Top 10 for LLM Applications).
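As a flavor of the optimization-based attacks mentioned above, here is a minimal sketch of a PGD attack in PyTorch. The toy model, epsilon, step size, and iteration count are illustrative placeholders, not the configuration used in the talk.

import torch
import torch.nn as nn


def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    # Craft adversarial examples by iterated gradient ascent on the loss,
    # projecting back into an L-infinity ball of radius eps around x.
    loss_fn = nn.CrossEntropyLoss()
    x_adv = x.clone().detach()
    # Random start inside the epsilon ball, as in the standard PGD formulation.
    x_adv = x_adv + torch.empty_like(x_adv).uniform_(-eps, eps)
    x_adv = torch.clamp(x_adv, 0.0, 1.0)

    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Step along the gradient sign, then project onto the epsilon ball.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)
        x_adv = torch.clamp(x_adv, 0.0, 1.0)

    return x_adv.detach()


if __name__ == "__main__":
    # Toy demonstration on random data with a tiny untrained classifier.
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
    x = torch.rand(4, 3, 32, 32)       # images scaled to [0, 1]
    y = torch.randint(0, 10, (4,))     # ground-truth labels
    x_adv = pgd_attack(model, x, y)
    print("max perturbation:", (x_adv - x).abs().max().item())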


Prior Knowledge Expected: No

Natan Katz is the co-founder of LuminAI, a startup pioneering statistical red teaming — a method for testing and securing white-box AI models through statistical and geometric analysis of model activations. At LuminAI, he develops techniques to detect and defend against optimization-based adversarial attacks such as PGD, DeepFool, and Carlini–Wagner, helping organizations build safer and more trustworthy AI systems.

Before founding LuminAI, Natan worked across diverse applied domains — from quantitative modeling and speech analysis to customer journey optimization and biometrics — bridging theory and practice across industries. He has also published work on AI for Ethereum ecosystems and AI ethics. Natan holds an M.Sc. in Nonlinear Dynamics from the Weizmann Institute of Science, where he studied dynamic models for malignant tissues.