2025-12-11 – General Track
The use of open-source models is growing rapidly. According to Gartner, their adoption during the Magnetic Era is expected to triple compared with foundation models. However, this rise in usage also brings heightened cybersecurity risks. In this lecture, we will explore the vulnerabilities unique to open-source models, the algorithmic techniques used to exploit them, and how our startup is addressing these challenges.
In my lecture, I will discuss various methods for attacking machine learning models, including model poisoning, DDoS-style attacks, and the generation of adversarial examples, such as those produced by Projected Gradient Descent (PGD) and the Carlini-Wagner attack. I will also present defense strategies that are data-agnostic and focus on model-driven approaches to protecting AI systems, particularly those that use open-source models. Finally, I will discuss how protecting open-source models differs from securing regular LLMs, and where our scope diverges from the OWASP Top 10 for LLMs.
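To make the attack side of the lecture concrete, here is a minimal sketch of a PGD adversarial-example attack. It uses a toy logistic-regression "model" rather than any real deployed system, and all names and hyperparameters (`w`, `b`, `epsilon`, `alpha`, `steps`) are illustrative assumptions, not material from the lecture itself:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_grad_wrt_input(x, y, w, b):
    # Gradient of binary cross-entropy loss w.r.t. the INPUT x
    # (not the weights): this is what an evasion attack climbs.
    p = sigmoid(np.dot(w, x) + b)
    return (p - y) * w

def pgd_attack(x, y, w, b, epsilon=0.3, alpha=0.05, steps=20):
    """L-infinity PGD: repeat signed gradient ascent steps on the
    loss, projecting back into the epsilon-ball around x."""
    x_adv = x.copy()
    for _ in range(steps):
        g = loss_grad_wrt_input(x_adv, y, w, b)
        x_adv = x_adv + alpha * np.sign(g)                 # ascend the loss
        x_adv = np.clip(x_adv, x - epsilon, x + epsilon)   # project to the ball
    return x_adv

# Usage: perturb a point the toy model classifies confidently as class 1.
w, b = np.array([2.0, -1.0]), 0.0
x, y = np.array([1.0, 0.5]), 1.0
x_adv = pgd_attack(x, y, w, b)
```

The perturbation stays within the epsilon-ball around the original input, yet the model's confidence in the true class drops; the same loop structure carries over to deep models, with the gradient supplied by backpropagation.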
No
I have a broad background as an algorithm researcher, quantitative analyst, and data scientist, working at the intersection of machine learning, security, and algorithmic robustness. My research spans adversarial machine learning, model behavior analysis, and BNNs. I am a co-founder of a startup that develops tools for detecting malicious behavior and risks in open-source models. In the lecture, I will discuss the theory behind these attacks and ML-driven methods for protection.