PyData Seattle 2025

Pedro Albuquerque

Dear Program Committee,

I am currently a Principal Data Scientist at AppOrchid, where I lead projects at the intersection of machine learning, econometrics, and applied research, with a strong focus on interpretable and trustworthy AI. Over the past 15+ years, I have built a career bridging industry and academia, delivering data-driven solutions at organizations such as FleetOps, Convoy, and ServiceNow (ElementAI). My academic contributions include 2,000+ citations and multiple peer-reviewed publications (see my Google Scholar profile).

As an Associate Professor, I taught in the Mathematics, Computer Science, and Business departments, designing and delivering courses in econometrics, statistical inference, and operational research. I also founded the Laboratory of Machine Learning in Finance and Organizations, mentoring more than 30 students and researchers on projects applying ML to finance, business, and social impact.

Beyond research and teaching, I am an experienced speaker and educator, known for communicating complex ideas in clear and engaging ways. Across conferences, lectures, and industry events, I have consistently emphasized explainability, transparency, and practical impact—principles that directly align with the growing demand for trustworthy AI.

With the rise of regulatory frameworks such as the U.S. AI Bill of Rights (2022) and the NIST AI Risk Management Framework, the need for interpretable models like Generalized Additive Models (GAMs) has never been greater. My session will demonstrate how GAMs provide a rare balance of performance, interpretability, and compliance, supported by real-world case studies and hands-on examples in Python.

I believe my background uniquely positions me to deliver a session that is both technically rigorous and directly relevant to today’s regulatory, business, and academic landscapes.

Sincerely,
Pedro Henrique Melo Albuquerque


Session

11-07
15:20
45min
Generalized Additive Models: Explainability Strikes Back
Pedro Albuquerque

Generalized Additive Models (GAMs)

Generalized Additive Models (GAMs) strike a rare balance: they combine the flexibility of complex models with the clarity of simple ones.

They often achieve performance comparable to black-box models, yet remain:
- Easy to interpret
- Computationally efficient
- Aligned with the growing demand for transparency in AI

With recent U.S. AI regulations (White House, 2022) and increasing pressure from decision-makers for explainable models, GAMs are emerging as a natural choice across industries.
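The interpretability claim above comes from the additive structure itself: a GAM models the (link-transformed) expected response as a sum of smooth one-dimensional shape functions, g(E[y]) = β0 + f1(x1) + … + fp(xp), so each feature's effect can be inspected on its own. As a minimal, self-contained sketch of that idea (not the session's actual material), the classic backfitting algorithm can be illustrated with NumPy and a crude bin-average smoother on synthetic data; the data-generating functions, bin count, and iteration count here are all illustrative assumptions:

```python
import numpy as np

# Synthetic data: y = sin(x1) + 0.5*x2^2 + noise (illustrative choice).
rng = np.random.default_rng(0)
n = 2000
X = rng.uniform(-3, 3, size=(n, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(0, 0.2, n)

def bin_smooth(x, r, n_bins=30):
    """Crude bin-average smoother: estimate E[r | x] by binning x."""
    edges = np.linspace(x.min(), x.max(), n_bins + 1)
    idx = np.clip(np.digitize(x, edges) - 1, 0, n_bins - 1)
    means = np.array([r[idx == b].mean() if np.any(idx == b) else 0.0
                      for b in range(n_bins)])
    return means[idx]

# Backfitting: repeatedly fit each shape function f_j to the partial
# residual (y minus the intercept and all other components).
alpha = y.mean()
f = np.zeros_like(X)
for _ in range(20):
    for j in range(X.shape[1]):
        others = [k for k in range(X.shape[1]) if k != j]
        partial = y - alpha - f[:, others].sum(axis=1)
        f[:, j] = bin_smooth(X[:, j], partial)
        f[:, j] -= f[:, j].mean()  # center each f_j for identifiability

pred = alpha + f.sum(axis=1)
r2 = 1 - np.var(y - pred) / np.var(y)
```

After fitting, plotting `X[:, j]` against `f[:, j]` recovers each feature's shape function directly, which is exactly the "glass-box" property the session develops; production work would use a penalized spline smoother rather than bin averages.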


Audience

This session is for attendees with some background in Python and statistics, including:
- Data scientists
- Machine learning engineers
- Researchers


Takeaway

By the end, you’ll understand:
- The intuition behind GAMs
- How to build and apply them in practice
- How to interpret and explain GAM predictions and results in Python


Prerequisites

You should be comfortable with:
- Basic regression concepts
- Model regularization
- The bias–variance trade-off
- Python programming

Talk Track 3