PyData Seattle 2025

Explainable AI for Biomedical Image Processing
2025-11-08, Talk Track 3

Advancements in deep learning for biomedical image processing have led to promising algorithms across multiple clinical domains, including radiology, digital pathology, ophthalmology, cardiology, and dermatology. Although robust AI models have demonstrated commendable results, their limited interpretability can impede the clinical translation of deep learning algorithms. The inference mechanism of these black-box models is not entirely understood by clinicians, patients, regulatory authorities, or even algorithm developers, exacerbating safety concerns. In this interactive talk, we will explore novel explainability techniques designed to interpret the decision-making process of robust deep learning algorithms for biomedical image processing. We will also discuss the impact and limitations of these techniques and analyze their potential to provide medically meaningful algorithmic explanations. Open-source resources for implementing these interpretability techniques in Python will be covered to provide a holistic understanding of explaining deep learning models for biomedical image processing.

This talk is distilled from a course that Ojas Ramwala designed, which received the best seminar award for the highest graduate student enrollment at the Department of Biomedical Informatics and Medical Education at the University of Washington, Seattle.


The adoption of deep learning algorithms in real-world healthcare settings depends not only on their generalizability but also on their explainability. While deep learning research papers thoroughly document model architectures and algorithm design, the underlying decision-making process of these models remains poorly understood.

Research in post-hoc interpretability techniques, which explain the reasoning behind individual predictions of deep learning models for biomedical image processing, has largely relied on heat maps that highlight the influence of different regions in medical images (such as X-rays, CT scans, or dermoscopy images) on model predictions. However, these traditional saliency-map-based methods have several limitations that impede their clinical translation.
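
As a minimal sketch of the traditional saliency-map workflow described above (not the specific methods presented in the talk), the open-source Captum library can produce a gradient-based heat map in a few lines. The ResNet-18 backbone and the random input tensor below are placeholders standing in for a trained medical-image classifier and a preprocessed scan:

    import torch
    import torchvision.models as models
    from captum.attr import Saliency

    # Placeholder model: a ResNet-18 standing in for a trained
    # chest X-ray / CT / dermoscopy classifier.
    model = models.resnet18(weights=None)
    model.eval()

    # Placeholder input: one 3-channel image tensor; in practice this
    # would be a preprocessed medical image.
    image = torch.randn(1, 3, 224, 224, requires_grad=True)

    # Vanilla gradient saliency: |d(class score)/d(pixel)| serves as a
    # per-pixel importance estimate for the chosen target class.
    saliency = Saliency(model)
    attribution = saliency.attribute(image, target=0)  # target = class index

    # Collapse the channel dimension into a single 2D heat map that can
    # be overlaid on the original image.
    heat_map = attribution.max(dim=1).values.squeeze(0)
    print(heat_map.shape)  # torch.Size([224, 224])

Captum's Saliency is the simplest gradient-based attribution; the same attribute() interface extends to Integrated Gradients and layer-wise methods. Heat maps like this one are exactly the kind of explanation whose clinical limitations the talk examines.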

Advancing research in Explainable AI for Biomedical Image Processing requires exploring novel solutions and their implementation using Python packages and frameworks. This talk will address this challenging problem by examining some of the most recent approaches to explaining deep learning models. It will also offer insights into how multimodal deep learning techniques affect the explainability of generalizable models, and it will conclude with a discussion of developing interpretability techniques through the lens of user-centered design, essentially by incorporating feedback from clinicians.

This presentation will not only provide a comprehensive understanding of promising explainability approaches but also highlight open-source Python resources for implementing robust interpretability experiments.

This interactive talk would be an incredible learning experience for college and graduate students with a fundamental understanding of deep learning models, as well as for software developers and data scientists looking to transition into AI for the healthcare domain. It would also be beneficial for investors and business leaders aiming to contribute to and benefit from the burgeoning research space in AI for healthcare.


Prior Knowledge Expected: Previous knowledge expected

Ojas A. Ramwala is a final-year Ph.D. candidate in the Department of Biomedical Informatics and Medical Education, School of Medicine, at the University of Washington, Seattle. His research focuses on enhancing the clinical translation of mammography-based deep learning algorithms for breast cancer screening. His work aims to validate the generalizability of AI models in large and diverse cohorts, establish explainability methods faithful to the AI model architecture for interpreting algorithm predictions, and develop robust deep learning algorithms to predict challenging clinical outcomes.

As an inquisitive research enthusiast, his interests include developing and applying Artificial Intelligence and Deep Learning techniques for Biomedical Signal and Image Processing, Bioinformatics, and Genomics. He spent a year studying Bioinformatics at New York University, where he pursued research at the NYU Center for Genomics and Systems Biology.

Previously, Ojas studied in the Electronics Engineering Department at the National Institute of Technology, Surat, India. He has been fortunate to work as a Research Intern at the Council of Scientific and Industrial Research (CSIR-CSIO), the Indian Space Research Organization (ISRO-IIRS), and the Indian Institute of Science (IISc).