06-08, 10:15–11:00 (Europe/London), Grand Hall
Artificial Intelligence (AI) and Machine Learning (ML) are transforming industries such as healthcare, finance, education, and entertainment. However, these advancements are not benefiting everyone equally. Biases in datasets, algorithms, and design processes often lead to AI systems that unintentionally exclude or misrepresent underrepresented communities, reinforcing societal inequalities.
This talk, "AI for Everyone: Building Inclusive Machine Learning Models," explores the critical importance of developing AI systems that are ethical, fair, and accessible to all. We will examine real-world examples of AI bias, discuss techniques for identifying and mitigating bias in data and models, and explore frameworks for responsible AI development. Attendees will leave with actionable insights to design AI solutions that promote fairness, inclusivity, and social impact.
Artificial Intelligence (AI) and Machine Learning (ML) have become central to decision-making processes across industries, from automating hiring decisions to medical diagnostics and financial services. While AI has the potential to drive efficiency and innovation, its benefits are not always equitably distributed. Biases embedded in training datasets, model design, and algorithmic decision-making can lead to discriminatory outcomes that disproportionately affect marginalized communities.
This talk, "AI for Everyone: Building Inclusive Machine Learning Models," will explore the impact of AI bias and discuss strategies for creating more inclusive AI systems. We will analyze real-world examples where AI has failed underrepresented groups, from facial recognition technologies that misidentify people of color to automated systems that reinforce gender and socioeconomic disparities.
Key topics covered in this session include:
Bias in AI – Understanding how biases arise in datasets and machine learning models.
Dataset Design and Fair Representation – Best practices for creating diverse and representative training data.
Algorithmic Fairness – Techniques for detecting and mitigating bias in machine learning models.
Ethical AI Development – Principles and frameworks to ensure accountability, transparency, and inclusivity in AI.
The Societal Impact of Inclusive AI – How equitable AI can drive positive social change and empower underrepresented communities.
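To make the "Algorithmic Fairness" topic concrete: one of the simplest bias-detection metrics discussed in the fairness literature is demographic parity difference, the gap in positive-prediction rates between demographic groups. The sketch below is an illustrative example, not taken from the talk itself; the function name and sample data are hypothetical.

```python
# Hypothetical sketch of one common bias-detection metric:
# demographic parity difference. A value near 0 means the model
# assigns positive outcomes at similar rates across groups.

def demographic_parity_difference(predictions, groups):
    """Gap between the highest and lowest positive-prediction
    rates across groups.

    predictions: list of 0/1 model outputs
    groups: parallel list of group labels (e.g. "a", "b")
    """
    rates = {}
    for g in set(groups):
        member_preds = [p for p, gr in zip(predictions, groups) if gr == g]
        rates[g] = sum(member_preds) / len(member_preds)
    return max(rates.values()) - min(rates.values())

# Toy example: group "a" receives positive predictions 75% of the
# time, group "b" only 25% -- a parity gap of 0.5.
preds = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(preds, groups))  # 0.5
```

Libraries such as Fairlearn and AIF360 provide production-grade versions of metrics like this; the point here is only to show how little code a first bias audit requires.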
This session is designed for developers, data scientists, AI practitioners, and decision-makers who want to ensure fairness and inclusivity in their AI projects. Attendees will leave with a clear understanding of AI bias challenges and practical steps to design ethical, inclusive AI systems that benefit everyone.
No previous knowledge expected
Elizabeth Osanyinro is a data analyst passionate about AI ethics, fairness, and inclusive technology. Currently a Business Analyst at Carbonnote AI, Elizabeth is completing an MSc in Applied Artificial Intelligence and Data Analytics at the University of Bradford. With experience as a digital marketing and business analyst, she has worked on diverse projects, including retail analytics, credit card fraud detection, and blockchain-based digital verification.
Elizabeth is proficient in tools such as Microsoft Excel, SAS, Python, R, Power BI, and Looker. As the founder of PyData Bradford, she actively fosters community-driven learning in AI and data science.