2025-12-09, Machine Learning & AI
AI systems are increasingly being integrated into real-world products - from chatbots and search engines to summarisation tools and coding assistants. Yet, despite their fluency, these models can produce confident but false or misleading information, a phenomenon known as hallucination. In production settings, such errors can erode user trust, misinform decisions, and introduce serious risks. This talk unpacks the root causes of hallucinations, explores their impact on various applications, and highlights emerging techniques to detect and mitigate them. With a focus on practical strategies, the session offers guidance for building more trustworthy AI systems fit for deployment.
This session will unpack the problem of AI hallucination - not just what it is, but how it surfaces in everyday use. We'll look at the common causes, ranging from incomplete context to over-generalisation, and walk through detection and prevention techniques such as grounding, prompt design and retrieval-augmented generation (RAG). Whether you're building AI products or evaluating outputs, this talk will give you the tools to recognise hallucinations and reduce their risk.
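To make "grounding" a little more concrete before the session, here is a minimal Python sketch of retrieval-grounded prompting. It is illustrative only: `retrieve` is a naive keyword-overlap stand-in for a real embedding-based vector search, and `call_llm` is a hypothetical placeholder for whichever model API you use; neither is code from the talk itself.

```python
from typing import List


def retrieve(query: str, corpus: List[str], k: int = 3) -> List[str]:
    """Hypothetical retriever: rank passages by naive keyword overlap.
    A production system would use embeddings and a vector index instead."""
    query_terms = set(query.lower().split())
    return sorted(
        corpus,
        key=lambda passage: -len(query_terms & set(passage.lower().split())),
    )[:k]


def build_grounded_prompt(question: str, passages: List[str]) -> str:
    """Ask the model to answer only from the retrieved passages and to
    admit uncertainty rather than guess: the core idea behind grounding."""
    context = "\n\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        "Answer the question using ONLY the context below. "
        "If the context does not contain the answer, reply 'I don't know.'\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )


# Usage (call_llm is a placeholder for your model or API client):
# passages = retrieve(question, knowledge_base)
# answer = call_llm(build_grounded_prompt(question, passages))
```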
Outline:
- Introduction to hallucinations in LLMs
- Common causes behind hallucinated outputs
- Impact on production applications
- Techniques for detecting and evaluating hallucinations (a minimal detection sketch follows this outline)
- Strategies to reduce hallucinations
- Best practices for building trustworthy AI products
- Key takeaways
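As a small taste of the detection topic above, one lightweight heuristic is self-consistency checking: sample the model several times and treat low agreement across samples as a warning sign. The sketch below assumes a hypothetical `call_llm(prompt, temperature)` callable; it is one illustrative approach, not the specific evaluation method covered in the talk.

```python
from collections import Counter
from typing import Callable, List


def self_consistency_flag(
    prompt: str,
    call_llm: Callable[[str, float], str],  # hypothetical: (prompt, temperature) -> answer
    n_samples: int = 5,
    agreement_threshold: float = 0.6,
) -> bool:
    """Sample the model several times at non-zero temperature and flag the
    answer as a possible hallucination when the samples disagree too often."""
    answers: List[str] = [
        call_llm(prompt, 0.8).strip().lower() for _ in range(n_samples)
    ]
    top_count = Counter(answers).most_common(1)[0][1]
    agreement = top_count / n_samples
    return agreement < agreement_threshold  # True means: treat this output with suspicion
```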
Background Knowledge Required:
Beginner-friendly - no prior knowledge needed. Familiarity with LLMs is a plus but not necessary.
Aarti Jha is a Senior Data Scientist at Red Hat, where she develops AI-driven solutions to streamline internal processes and reduce operational costs. She brings over 6.5 years of experience building and deploying data science and machine learning solutions across industry domains. She is an active member of the PyData community and has presented at PyData NYC 2024 and PyData Amsterdam 2025.