PyData Seattle 2025

Ojas Ankurbhai Ramwala

Ojas A. Ramwala is a final-year Ph.D. candidate at the University of Washington, Seattle, in the Department of Biomedical Informatics and Medical Education, School of Medicine. His research focuses on enhancing the clinical translation of mammography-based deep learning algorithms for breast cancer screening. His work validates the generalizability of AI models in large and diverse cohorts, establishes explainability methods faithful to the AI model architecture for interpreting algorithm predictions, and develops robust deep learning algorithms to predict challenging clinical outcomes.

As an inquisitive research enthusiast, his interests include developing and applying Artificial Intelligence and Deep Learning techniques for Biomedical Signal and Image Processing, Bioinformatics, and Genomics. He spent a year at New York University studying Bioinformatics, where he pursued research at the NYU Center for Genomics and Systems Biology.

Previously, Ojas studied in the Electronics Engineering Department at the National Institute of Technology, Surat, India. He has been fortunate to work as a Research Intern at the Council of Scientific and Industrial Research (CSIR-CSIO), the Indian Space Research Organization (ISRO-IIRS), and the Indian Institute of Science (IISc).


Session

11-08
11:40
45min
Explainable AI for Biomedical Image Processing
Ojas Ankurbhai Ramwala

Advancements in deep learning for biomedical image processing have led to promising algorithms across multiple clinical domains, including radiology, digital pathology, ophthalmology, cardiology, and dermatology. Yet even as robust AI models demonstrate commendable results, their limited interpretability can impede the clinical translation of deep learning algorithms. The inference mechanism of these black-box models is not entirely understood by clinicians, patients, regulatory authorities, or even algorithm developers, which exacerbates safety concerns. In this interactive talk, we will explore novel explainability techniques designed to interpret the decision-making process of robust deep learning algorithms for biomedical image processing. We will also discuss the impact and limitations of these techniques and analyze their potential to provide medically meaningful algorithmic explanations. Open-source resources for implementing these interpretability techniques in Python will be covered to provide a holistic understanding of explaining deep learning models for biomedical image processing.
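To give a flavor of the kind of interpretability method discussed above, here is a minimal sketch of occlusion sensitivity, one widely used technique for explaining image classifiers: a patch is slid over the image, occluded, and the drop in the model's score is recorded as a heatmap. The toy stand-in model and random image below are illustrative assumptions for demonstration, not material from the talk; in practice the same loop would wrap a trained network's predicted probability.

```python
import numpy as np

def toy_model(img):
    # Stand-in "classifier": responds only to intensity in the
    # top-left 8x8 quadrant (e.g., a hypothetical lesion region).
    return img[:8, :8].sum()

def occlusion_map(img, model, patch=4, baseline=0.0):
    """Occlusion sensitivity: occlude each patch with a baseline value
    and record how much the model's score drops."""
    base_score = model(img)
    h, w = img.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = img.copy()
            occluded[i:i + patch, j:j + patch] = baseline
            # Large drop => this patch mattered to the prediction.
            heat[i // patch, j // patch] = base_score - model(occluded)
    return heat

rng = np.random.default_rng(0)
img = rng.random((16, 16))
heat = occlusion_map(img, toy_model)
# The heatmap peaks over the top-left quadrant, the only region
# this toy model actually uses.
```

Gradient-based methods (saliency maps, Grad-CAM) follow the same spirit but attribute the score through the network's gradients instead of perturbing the input.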

This talk is distilled from a course that Ojas Ramwala designed, which received the best seminar award for the highest graduate student enrollment at the Department of Biomedical Informatics and Medical Education at the University of Washington, Seattle.

Talk Track 3