Jay Alammar
Jay Alammar is co-author of Hands-On Large Language Models, published by O'Reilly Media, and Director and Engineering Fellow at Cohere (a pioneering creator of large language models).
Through his popular AI/ML blog, Jay has helped millions of researchers and engineers visually understand machine learning tools and concepts (e.g., The Illustrated Transformer, BERT, DeepSeek-R1, and others).
Session
Large Language Models (LLMs) have risen to prominence as some of the most popular technological artifacts of the day. This talk provides a highly accessible, visual overview of LLM concepts relevant to today's data professionals, including present-day Transformer architectures, tokenizers, reward models, reasoning LLMs, agentic trajectories, and the various training stages of a large language model: next-word prediction, instruction tuning, preference tuning, and reinforcement learning.