Dylan Bouchard
Dylan Bouchard is a Principal Applied Scientist focusing on AI Research & Open Source at CVS Health. He leads the company's Responsible AI Research program, where he developed two impactful open-source libraries: UQLM, a toolkit for detecting hallucinations in large language models, and LangFair, a framework for evaluating bias and fairness in LLMs. His work bridges academic research with practical tools that help make AI systems more reliable and equitable.
Session
As LLMs become increasingly embedded in critical applications across healthcare, legal, and financial domains, their tendency to generate plausible-sounding but false information poses significant risks. This talk introduces UQLM, an open-source Python package for uncertainty-aware generation that flags likely hallucinations without requiring ground truth data. UQLM computes response-level confidence scores from token probabilities, consistency across sampled responses, LLM judges, and tunable ensembles. Attendees will learn practical strategies for implementing hallucination detection in production systems and leave with code examples they can immediately apply to improve the reliability of their LLM-powered applications. No prior uncertainty quantification background required.
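To give a flavor of the consistency-based scoring the abstract mentions, here is a minimal sketch of the underlying idea: sample several responses to the same prompt and treat agreement among them as a confidence signal. This is an illustration only, not UQLM's API; token-level Jaccard similarity stands in for the more sophisticated similarity scorers a real implementation would use, and all function names here are hypothetical.

```python
from itertools import combinations

def jaccard(a: str, b: str) -> float:
    """Token-level Jaccard similarity between two responses (illustrative stand-in
    for a semantic similarity scorer)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    if not ta and not tb:
        return 1.0
    return len(ta & tb) / len(ta | tb)

def consistency_score(responses: list[str]) -> float:
    """Mean pairwise similarity across sampled responses.

    High agreement across samples suggests the model is confident; divergent
    samples suggest a likely hallucination."""
    if len(responses) < 2:
        raise ValueError("Need at least two sampled responses")
    pairs = list(combinations(responses, 2))
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)

# Samples that agree score near 1.0; contradictory samples score much lower.
consistent = [
    "paris is the capital of france",
    "the capital of france is paris",
    "paris is the capital of france",
]
divergent = [
    "the capital is lyon",
    "france's capital is paris",
    "marseille is the capital",
]
```

In practice the score would be thresholded or combined with other signals (token probabilities, LLM judges) before deciding whether to surface, retry, or abstain from a response.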