<?xml version='1.0' encoding='utf-8' ?>
<iCalendar xmlns:pentabarf='http://pentabarf.org' xmlns:xCal='urn:ietf:params:xml:ns:xcal'>
    <vcalendar>
        <version>2.0</version>
        <prodid>-//Pentabarf//Schedule//EN</prodid>
        <x-wr-caldesc></x-wr-caldesc>
        <x-wr-calname></x-wr-calname>
        
        <vevent>
            <method>PUBLISH</method>
            <uid>YF3MVA@@cfp.pydata.org</uid>
            <pentabarf:event-id></pentabarf:event-id>
            <pentabarf:event-slug>-YF3MVA</pentabarf:event-slug>
            <pentabarf:title>Opening Session</pentabarf:title>
            <pentabarf:subtitle></pentabarf:subtitle>
            <pentabarf:language>en</pentabarf:language>
            <pentabarf:language-code>en</pentabarf:language-code>
            <dtstart>20250901T090000</dtstart>
            <dtend>20250901T092000</dtend>
            <duration>002000</duration>
            <summary>Opening Session</summary>
            <description>Opening Session for PyData Berlin 2025</description>
            <class>PUBLIC</class>
            <status>CONFIRMED</status>
            <category>Plenary Session [Organizers]</category>
            <url>https://cfp.pydata.org/berlin2025/talk/YF3MVA/</url>
            <location>Kuppelsaal</location>
            
        </vevent>
        
        <vevent>
            <method>PUBLISH</method>
            <uid>HYGHBG@@cfp.pydata.org</uid>
            <pentabarf:event-id></pentabarf:event-id>
            <pentabarf:event-slug>-HYGHBG</pentabarf:event-slug>
            <pentabarf:title>PyData 2077: a data science future retrospective</pentabarf:title>
            <pentabarf:subtitle></pentabarf:subtitle>
            <pentabarf:language>en</pentabarf:language>
            <pentabarf:language-code>en</pentabarf:language-code>
            <dtstart>20250901T092000</dtstart>
            <dtend>20250901T101000</dtend>
            <duration>005000</duration>
            <summary>PyData 2077: a data science future retrospective</summary>
            <description>VIOLATION DETAILS:
- Unauthorized temporal incursion detected
- Speakers identified as: Kitchen, A. &amp; Summers, L. (baseline timeline)
- Anomalous data signatures suggest retrospective analysis from non-contemporaneous source
- Evidence of information leakage: late 21st-century technological practices and standards
- Risk assessment: Moderate timeline contamination potential

REGULATORY COMPLIANCE REQUIRED:
Per Temporal Code Section 2077.3, you are hereby notified that failure to contain this spacetime disturbance will result in fines of up to 50,000 temporal credits. You must ensure adequate attendance at the specified coordinates to properly observe and contain the anomaly as it unfolds.
WARNING: Preliminary scans indicate the transmission contains advanced analytical frameworks and critical commentary on primitive early-21st-century data science practices. Attendees may experience paradigm shifts, changes to mental models, or sudden clarity regarding field trajectories.

Sincerely,
Compliance Officer Z-7749
Chrono-Regulatory Commission
&quot;Keeping Yesterday Safe for Tomorrow&quot;</description>
            <class>PUBLIC</class>
            <status>CONFIRMED</status>
            <category>Keynote</category>
            <url>https://cfp.pydata.org/berlin2025/talk/HYGHBG/</url>
            <location>Kuppelsaal</location>
            
            <attendee>Laura Summers</attendee>
            
            <attendee>Andy Kitchen</attendee>
            
        </vevent>
        
        <vevent>
            <method>PUBLISH</method>
            <uid>GRZ3RG@@cfp.pydata.org</uid>
            <pentabarf:event-id></pentabarf:event-id>
            <pentabarf:event-slug>-GRZ3RG</pentabarf:event-slug>
            <pentabarf:title>A Beginner&#x27;s Guide to State Space Modeling</pentabarf:title>
            <pentabarf:subtitle></pentabarf:subtitle>
            <pentabarf:language>en</pentabarf:language>
            <pentabarf:language-code>en</pentabarf:language-code>
            <dtstart>20250901T104000</dtstart>
            <dtend>20250901T121000</dtend>
            <duration>013000</duration>
            <summary>A Beginner&#x27;s Guide to State Space Modeling</summary>
            <description>State Space Models offer **a structured yet flexible framework for time series analysis**. They elegantly handle latent processes like trends, seasonality, and noisy observations, making them particularly valuable in real-world applications.

We&#x27;ll start with a brief overview of the theory behind SSMs, followed by practical examples where participants will:

- **Understand the components of SSMs**, including observation and state equations.
- **Learn how to specify and fit SSMs** using PyMC&#x27;s state space module.
- Implement a **modeling workflow using a survey data example**, showing how to use SSMs to model the data and generate predictions.
- **Explore advanced topics** such as incorporating external regressors, generating forecasts or building custom models.

### Target Audience
This tutorial is aimed at data scientists, statisticians, and data analysts with a basic understanding of statistics and Python, who are interested in expanding their toolkit with Bayesian time series methods. Prior experience with PyMC is not required but will be beneficial.

### Takeaways

By the end of this tutorial, attendees will:

- Understand the **theoretical foundations** of State Space Models.
- Be able to **implement common SSMs** (local level, trend, and seasonal models) in PyMC.
- **Evaluate and interpret** Bayesian state space models using PyMC.
- **Appreciate practical scenarios** where SSMs outperform traditional time series approaches. 

### Background Knowledge Required
Basic understanding of probability and statistics, and familiarity with Python. Prior experience with PyMC is not required but will be beneficial.

### Materials Distribution
All tutorial materials, including notebooks and datasets, will be made available via a GitHub repository.

## Outline

**0 - 10 min: Introduction to State Space Models**

- What are SSMs, and why use them?

**10 - 25 min: State Space Model Fundamentals**

- Observation and state equations.
- Latent states, Kalman filters, and smoothing in Bayesian frameworks.
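
The local-level model from this block can be summarized by its two equations and a scalar Kalman filter. The following standalone NumPy sketch (an illustration, not taken from the tutorial materials, which use PyMC) shows the predict/update cycle:

```python
import numpy as np

def local_level_kalman(y, q=0.01, r=1.0, m0=0.0, p0=1.0):
    """Scalar Kalman filter for the local level model:
    state:       x_t = x_{t-1} + w_t,  w_t ~ N(0, q)
    observation: y_t = x_t + v_t,      v_t ~ N(0, r)
    Returns filtered state means and variances."""
    m, p = m0, p0
    means, variances = [], []
    for obs in y:
        # Predict: the random-walk state keeps the mean, inflates the variance.
        p_pred = p + q
        # Update: blend prediction and observation using the Kalman gain.
        k = p_pred / (p_pred + r)
        m = m + k * (obs - m)
        p = (1 - k) * p_pred
        means.append(m)
        variances.append(p)
    return np.array(means), np.array(variances)

rng = np.random.default_rng(42)
true_level = np.cumsum(rng.normal(0, 0.1, 200))   # latent random walk
y = true_level + rng.normal(0, 1.0, 200)          # noisy observations
means, variances = local_level_kalman(y, q=0.01, r=1.0)
```

The filtered means track the latent level far more closely than the raw observations do, which is exactly the smoothing behavior the Bayesian treatment in PyMC builds on.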

**25 - 55 min: Implementing SSMs with PyMC (Hands-On)**

- Setting up a local-level model in PyMC.
- Extending models to incorporate trends and seasonality.
- Posterior inference: interpreting results and uncertainty.

**55 - 75 min: Advanced State Space Modeling (Hands-On)**

- Dealing with missing data and irregular intervals.
- Adding external covariates (regression components).
- Model diagnostics and posterior predictive checks.

**75 - 85 min: Real-world Application Case Study**

- Demonstrating an end-to-end modeling example with real data.
- Discussing best practices for practical time series modeling.

**85 - 90 min: Wrap-up and Interactive Q&amp;A**

- Open floor for questions and further resources.

---

## Additional Resources

- [Introduction to PyMC state space module](https://www.youtube.com/watch?v=G9VWXZdbtKQ)
- [Podcast episode on PyMC&#x27;s state space module](https://learnbayesstats.com/episode/124-state-space-models-structural-time-series-jesse-grabowski)
- [PyMC State Space Module GitHub Repository](https://github.com/pymc-devs/pymc-extras/tree/main/pymc_extras/statespace)

We believe this tutorial will empower participants with practical knowledge of state space modeling in PyMC, enabling them to effectively analyze complex time series data using Bayesian approaches.</description>
            <class>PUBLIC</class>
            <status>CONFIRMED</status>
            <category>Tutorial</category>
            <url>https://cfp.pydata.org/berlin2025/talk/GRZ3RG/</url>
            <location>B09</location>
            
            <attendee>Jesse Grabowski</attendee>
            
            <attendee>Alexandre Andorra</attendee>
            
        </vevent>
        
        <vevent>
            <method>PUBLISH</method>
            <uid>MQS99P@@cfp.pydata.org</uid>
            <pentabarf:event-id></pentabarf:event-id>
            <pentabarf:event-slug>-MQS99P</pentabarf:event-slug>
            <pentabarf:title>PyLadies &amp; Empowered in Tech Lunch</pentabarf:title>
            <pentabarf:subtitle></pentabarf:subtitle>
            <pentabarf:language>en</pentabarf:language>
            <pentabarf:language-code>en</pentabarf:language-code>
            <dtstart>20250901T123000</dtstart>
            <dtend>20250901T133000</dtend>
            <duration>010000</duration>
            <summary>PyLadies &amp; Empowered in Tech Lunch</summary>
            <description>**PyLadies** is an international mentorship group with a focus on helping more women and gender non-conforming people become active participants and leaders in the Python open-source community. Its mission is to promote, educate and advance a diverse Python community through outreach, education, conferences, events and social gatherings.

---

**Empowered in Tech** is a community in Berlin dedicated to empowering FLINTA (women, lesbians, intersex, non-binary, trans and agender) people to excel in their tech journey. We welcome engineers, software developers, data scientists, designers, product managers, career changers and other professionals in the tech industry. We are open to all tech stacks, programming languages and experience levels. Our goal is to support our members in growing their careers, connecting with like-minded people and feeling welcome in tech.</description>
            <class>PUBLIC</class>
            <status>CONFIRMED</status>
            <category>Social Event</category>
            <url>https://cfp.pydata.org/berlin2025/talk/MQS99P/</url>
            <location>B09</location>
            
        </vevent>
        
        <vevent>
            <method>PUBLISH</method>
            <uid>WXPVCS@@cfp.pydata.org</uid>
            <pentabarf:event-id></pentabarf:event-id>
            <pentabarf:event-slug>-WXPVCS</pentabarf:event-slug>
            <pentabarf:title>More than DataFrames: Data Pipelines with the Swiss Army Knife DuckDB</pentabarf:title>
            <pentabarf:subtitle></pentabarf:subtitle>
            <pentabarf:language>en</pentabarf:language>
            <pentabarf:language-code>en</pentabarf:language-code>
            <dtstart>20250901T134000</dtstart>
            <dtend>20250901T151000</dtend>
            <duration>013000</duration>
            <summary>More than DataFrames: Data Pipelines with the Swiss Army Knife DuckDB</summary>
            <description>The goal of this tutorial is to help Python users understand and use DuckDB not just as a DataFrame interface, but as a fully featured analytics database embedded in their Python workflows. We&#x27;ll highlight real-world patterns where DuckDB shines compared to traditional libraries, especially for medium-scale datasets that don’t justify a full data warehouse.
You’ll learn:
- When and why to reach for DuckDB instead of Pandas/Polars
- How DuckDB reads local files and external databases (CSV, Parquet, JSON, Postgres, and more)
- Using DuckDB to build lightweight, SQL-based data pipelines
- Techniques for caching intermediate data in-process
- How to analyze data from remote sources via HTTP or S3
- Tips for using DuckDB with Jupyter, dbt, or your favorite Python tools</description>
            <class>PUBLIC</class>
            <status>CONFIRMED</status>
            <category>Tutorial</category>
            <url>https://cfp.pydata.org/berlin2025/talk/WXPVCS/</url>
            <location>B09</location>
            
            <attendee>Mehdi Ouazza</attendee>
            
        </vevent>
        
        <vevent>
            <method>PUBLISH</method>
            <uid>XFPTWN@@cfp.pydata.org</uid>
            <pentabarf:event-id></pentabarf:event-id>
            <pentabarf:event-slug>-XFPTWN</pentabarf:event-slug>
            <pentabarf:title>AI-Ready Data in Action: Powering Smarter Agents</pentabarf:title>
            <pentabarf:subtitle></pentabarf:subtitle>
            <pentabarf:language>en</pentabarf:language>
            <pentabarf:language-code>en</pentabarf:language-code>
            <dtstart>20250901T154000</dtstart>
            <dtend>20250901T171000</dtend>
            <duration>013000</duration>
            <summary>AI-Ready Data in Action: Powering Smarter Agents</summary>
<description>Modern AI applications are only as powerful as the data that fuels them. Yet, much of the real-world data AI engineers encounter is messy, incomplete, or unoptimized. In this hands-on tutorial, AI-Ready Data in Action: Powering Smarter Agents, participants will walk through the full lifecycle of preparing unstructured data, embedding it into LanceDB, and leveraging it for search and agentic applications. Using a real-world dataset, attendees will incrementally ingest, clean, and vectorize text data, tune hybrid search strategies, and build a lightweight chat agent to surface relevant results. The tutorial concludes by showing how to take a working demo into production. By the end, participants will gain practical experience in bridging the gap between messy raw data and production-ready pipelines for AI applications.

**Prior knowledge**

- Basic Python programming.
- Awareness of embeddings, vectors, and AI search concepts (we’ll explain where needed).

The tutorial is designed to be accessible: engineers familiar with Python should be able to follow along step by step.

**Key Takeaways**

By the end of the tutorial, participants will:

1. Understand the end-to-end workflow of taking raw, real-world data and preparing it for AI applications.
2. Build and run an incremental dlt pipeline to ingest real data into LanceDB.
3. Apply text preprocessing and generate embeddings for semantic search.
4. Optimize retrieval with vector and hybrid search strategies.
5. Implement a lightweight AI agent capable of surfacing relevant issues from a natural language description.
6. Learn how to transition from a demo project to a production setup using LanceDB Cloud.
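
To make the retrieval idea concrete before we touch real tooling, here is a conceptual sketch with toy vectors. This is not the LanceDB API; it is just the cosine-similarity ranking that vector search builds on, with hand-made stand-ins for real embeddings:

```python
import numpy as np

# Toy "embeddings": in the tutorial these come from a real embedding model;
# here we use small hand-made vectors purely to illustrate ranking.
docs = ["login page crashes on submit",
        "dark mode toggle has no effect",
        "password reset email never arrives"]
emb = np.array([[0.9, 0.1, 0.0],
                [0.0, 0.8, 0.2],
                [0.7, 0.0, 0.6]])

def cosine_top_k(query_vec, matrix, k=2):
    """Rank rows of `matrix` by cosine similarity to `query_vec`."""
    q = query_vec / np.linalg.norm(query_vec)
    m = matrix / np.linalg.norm(matrix, axis=1, keepdims=True)
    scores = m @ q
    top = np.argsort(scores)[::-1][:k]
    return [(docs[i], float(scores[i])) for i in top]

# A query about login problems, embedded near the first document:
results = cosine_top_k(np.array([0.9, 0.1, 0.1]), emb)
```

A vector database performs this ranking with indexes instead of a brute-force matrix product, and hybrid search additionally mixes in keyword scores.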

**Outline**

- Introduce dlt (data load tool) and how it enables schema evolution, incremental loading, and normalization in pipelines.
- Introduce LanceDB and explain embeddings, vector search, hybrid retrieval and multi-modal data for AI applications.
- Ingest and preprocess a real dataset with dlt, generate embeddings, and load it into LanceDB following best data engineering practices.
- Optimize search in LanceDB by tuning parameters, selecting distance metrics, and adding hybrid retrieval.
- Build a lightweight AI agent that queries LanceDB and returns the most relevant issues from natural-language prompts.
- Demonstrate the path to production using automation, monitoring, and LanceDB Cloud for scaling and reliability.
- Conclude with key takeaways and an open Q&amp;A.</description>
            <class>PUBLIC</class>
            <status>CONFIRMED</status>
            <category>Tutorial</category>
            <url>https://cfp.pydata.org/berlin2025/talk/XFPTWN/</url>
            <location>B09</location>
            
            <attendee>Violetta Mishechkina</attendee>
            
            <attendee>Chang She</attendee>
            
        </vevent>
        
        <vevent>
            <method>PUBLISH</method>
            <uid>VBCU9H@@cfp.pydata.org</uid>
            <pentabarf:event-id></pentabarf:event-id>
            <pentabarf:event-slug>-VBCU9H</pentabarf:event-slug>
            <pentabarf:title>Beyond Linear Funnels: Visualizing Conditional User Journeys with Python</pentabarf:title>
            <pentabarf:subtitle></pentabarf:subtitle>
            <pentabarf:language>en</pentabarf:language>
            <pentabarf:language-code>en</pentabarf:language-code>
            <dtstart>20250901T104000</dtstart>
            <dtend>20250901T111000</dtend>
            <duration>003000</duration>
            <summary>Beyond Linear Funnels: Visualizing Conditional User Journeys with Python</summary>
            <description>When we talk about funnels in analytics, most people think of linear funnels, where users move step-by-step through a fixed sequence of actions. But in real-world applications like dynamic forms, on-boarding flows, or diagnostic tools, funnels are often conditional and non-linear. The next step in the journey depends on user input at earlier stages, leading to different paths and variable funnel lengths for every user.

An example is a vehicle pricing tool: while all users answer general questions (e.g., type, mileage), follow-up questions may differ based on previous answers. For instance, only users with electric cars are asked about battery capacity. This branching logic creates challenges for traditional funnel visualization techniques, which mostly assume funnels are linear.

Alternative quick solutions are not perfect:
- Sankey diagrams are too general and often collapse visually under real-world data messiness (users going back and forth, drop-offs, missing events).
- Milestone-based funnels (where you set a few milestones to mimic a linear funnel) simplify things too much, hiding key details and masking where things actually break down.

As a data analyst, I needed a way to understand and visualize such nonlinear flows in a more straightforward and consumable way. Finding no library that met this need out of the box, I created funnelius, a Python library that processes raw event logs into ready-to-consume funnel graphs.

The library accepts a pandas DataFrame with user_id, action and action_timestamp columns. It uses pandas to transform the DataFrame into a format suitable for graphviz, adding the columns needed to filter and declutter the graph, and then renders the funnel with the dot engine. The output includes:
- Key metrics for every step: number of users, conversion rate, time spent, percentage of total users, and drop-offs.
- Conditional formatting based on different metrics to highlight bottlenecks.
- Comparison with another DataFrame, showing changes.
- The answers users gave at each step, with the percentage of each answer.
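
As a hypothetical illustration of the underlying transformation (not the actual funnelius code), consecutive actions per user can be turned into weighted graph edges with pandas:

```python
import pandas as pd

# Event log in the shape funnelius expects: one row per user action.
events = pd.DataFrame({
    "user_id": [1, 1, 1, 2, 2],
    "action": ["start", "type", "price", "start", "type"],
    "action_timestamp": pd.to_datetime([
        "2025-09-01 10:00", "2025-09-01 10:01", "2025-09-01 10:03",
        "2025-09-01 11:00", "2025-09-01 11:02"]),
})

# Derive step-to-step transitions per user: each (action -> next action)
# pair becomes an edge a graph renderer such as graphviz can draw.
events = events.sort_values(["user_id", "action_timestamp"])
events["next_action"] = events.groupby("user_id")["action"].shift(-1)
edges = (events.dropna(subset=["next_action"])
               .groupby(["action", "next_action"])
               .size()
               .reset_index(name="users"))
```
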

The graph can be fine-tuned with options such as:
- Show only the top-N routes to declutter the graph
- Show/hide dropped-user data
- Include only users who started from specific steps (when users must start at a known step, this helps remove possible data issues)
- Define which metrics should be calculated

There is also a Streamlit-based UI to interactively adjust parameters and export the funnel analysis as a PDF instead of doing it programmatically.



This tool can be helpful for data analysts and data scientists with Python knowledge who need to analyse conditional funnels.

Github Repository:
https://github.com/yaseenesmaeelpour/funnelius</description>
            <class>PUBLIC</class>
            <status>CONFIRMED</status>
            <category>Talk</category>
            <url>https://cfp.pydata.org/berlin2025/talk/VBCU9H/</url>
            <location>B07-B08</location>
            
            <attendee>Yaseen Esmaeelpour</attendee>
            
        </vevent>
        
        <vevent>
            <method>PUBLISH</method>
            <uid>QMPX9V@@cfp.pydata.org</uid>
            <pentabarf:event-id></pentabarf:event-id>
            <pentabarf:event-slug>-QMPX9V</pentabarf:event-slug>
            <pentabarf:title>Democratizing Digital Maps: How Protomaps Changes the Game</pentabarf:title>
            <pentabarf:subtitle></pentabarf:subtitle>
            <pentabarf:language>en</pentabarf:language>
            <pentabarf:language-code>en</pentabarf:language-code>
            <dtstart>20250901T112000</dtstart>
            <dtend>20250901T115000</dtend>
            <duration>003000</duration>
            <summary>Democratizing Digital Maps: How Protomaps Changes the Game</summary>
            <description>In today’s digital landscape, maps have become essential components of countless applications and services, from navigation and logistics to social platforms and data visualization. But for too long, the field has been dominated by a few companies whose services, while powerful, come with significant drawbacks: Usage quotas, tracking requirements, styling limitations, and recurring costs that can quickly skyrocket as applications grow.

This talk will introduce Protomaps, an innovative open source mapping technology that is fundamentally reshaping the way digital maps are created, distributed and used. At its core, Protomaps utilizes the groundbreaking PMTiles format – a single-file approach to vector tiles that eliminates the need for complex tile server infrastructure while increasing performance and reducing bandwidth consumption.

#### Technical innovation

We start with the technical basics of Protomaps and explain how the PMTiles format works and why it represents such a significant advance over conventional tile map approaches. Unlike traditional solutions that rely on thousands of individual tile files served by a complex infrastructure, PMTiles bundles vector map data into a single, efficiently indexed file that can be hosted anywhere.
The presentation will demonstrate how this approach enables progressive loading, allowing maps to render quickly at variable zoom levels while preserving the rich detail and interactive capabilities users expect from modern mapping solutions. We’ll examine the efficiency gains in terms of bandwidth usage, server requirements, and client-side rendering performance.

#### Democratization in Practice

This talk will focus on how Protomaps democratizes digital mapping in a tangible way:

##### Economic Accessibility

By eliminating recurring API costs and usage-based pricing models, Protomaps opens up mapping opportunities for projects of all sizes, from hobby developers to non-profit organizations and educational institutions with limited budgets.

##### Technical Accessibility

We demonstrate practical implementations with Leaflet and MapLibre GL and show how developers can integrate Protomaps with just a few lines of code and minimal configuration.

##### Customization Freedom

Without the styling restrictions imposed by commercial vendors, Protomaps allows complete creative control over the appearance of the map. We show examples of customized maps that would be difficult or impossible to achieve with traditional services.

##### Privacy by Design

As Protomaps enables fully self-hosted mapping solutions, there is no need to share user location data or mapping activity with third parties – a crucial aspect for privacy-conscious applications and those operating under strict regulatory frameworks.

#### Takeaways for Attendees

Participants will leave this session with the following knowledge:

* Understand how PMTiles and Protomaps work
* Know how to use Protomaps in their own projects
* Be able to customize maps to meet specific design and data needs
* Gain a new perspective on the possibilities of democratized digital mapping

Whether you are a developer seeking cost-effective mapping solutions, an organization concerned about data privacy, or simply interested in the evolution of open source geospatial technology, this talk will give you valuable insight into how Protomaps is reshaping the landscape of digital cartography by putting powerful mapping capabilities back into the hands of developers and communities.</description>
            <class>PUBLIC</class>
            <status>CONFIRMED</status>
            <category>Talk</category>
            <url>https://cfp.pydata.org/berlin2025/talk/QMPX9V/</url>
            <location>B07-B08</location>
            
            <attendee>Veit Schiele</attendee>
            
        </vevent>
        
        <vevent>
            <method>PUBLISH</method>
            <uid>KBEEHS@@cfp.pydata.org</uid>
            <pentabarf:event-id></pentabarf:event-id>
            <pentabarf:event-slug>-KBEEHS</pentabarf:event-slug>
            <pentabarf:title>Accessible Data Visualizations</pentabarf:title>
            <pentabarf:subtitle></pentabarf:subtitle>
            <pentabarf:language>en</pentabarf:language>
            <pentabarf:language-code>en</pentabarf:language-code>
            <dtstart>20250901T120000</dtstart>
            <dtend>20250901T123000</dtend>
            <duration>003000</duration>
            <summary>Accessible Data Visualizations</summary>
            <description>## Introduction

Accessible data visualizations extend beyond aesthetics to meet established standards and accommodate diverse visual abilities. This presentation demonstrates how to create visualizations that comply with Web Content Accessibility Guidelines (WCAG) contrast requirements, support users with color vision deficiencies, and convey information through multiple encoding channels. The presentation explores practical techniques using color, patterns, SVG accessibility features, and alternative data formats.

This presentation is designed for data scientists, visualization specialists, dashboard designers, and accessibility auditors who need to communicate findings effectively to diverse audiences. Attendees will benefit by:

- Learning practical techniques to make visualizations accessible without sacrificing analytical depth
- Gaining implementation strategies for common data visualization libraries
- Acquiring skills to expand their reach to users with visual impairments
- Taking away ready-to-use color palettes and pattern sets for immediate implementation

## Topics

## Color Accessibility

Data visualizations must meet WCAG contrast ratios (≥3:1) for distinguishable elements. Our optimized palette features:

- Eight distinct colors plus neutral gray for invalid data
- CIEDE2000 perceptual differences &gt;20 between colors
- Verified compatibility with various color vision deficiencies
- Print-friendly CMYK values (ISO Coated V2 300% or Pantone C)
- Contrast ratios &gt;3.0 (WCAG AA-level) against white and black backgrounds
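
These checks are easy to automate; a small helper implementing the standard WCAG 2.x formula:

```python
def relative_luminance(rgb):
    """WCAG 2.x relative luminance of an sRGB color given as 0-255 integers."""
    def linearize(c):
        c = c / 255
        if c > 0.03928:
            return ((c + 0.055) / 1.055) ** 2.4
        return c / 12.92
    r, g, b = (linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(color_a, color_b):
    """WCAG contrast ratio, from 1.0 (identical colors) to 21.0 (black on white)."""
    l1, l2 = sorted((relative_luminance(color_a),
                     relative_luminance(color_b)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)
```

A palette color passes the AA graphics requirement against a background when the ratio is at least 3.0.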

## Pattern Implementation

Patterns provide critical secondary encoding when color alone is insufficient. We&#x27;ll present:

- Unique pattern paired with each color
- Area fills that maintain distinction at various scales
- Sequential pattern densities for quantitative data
- Pattern elements adaptable as point markers
- Implementation via SVG `&lt;pattern&gt;` tags

## Technical Implementation

Practical examples will demonstrate:

- Using color contrast checkers for validation
- Implementing SVG `&lt;pattern&gt;` elements
- Creating accessible SVG with proper ARIA attributes
- Providing alternative data formats (e.g. HTML tables with semantic descriptions)
- Testing with screen readers and accessibility tools

## Conclusion

Implementing these practices creates data visualizations that are not only compliant with accessibility regulations but also more effective for all users. The cusy Design System offers open-source resources to implement these techniques across various visualization libraries.</description>
            <class>PUBLIC</class>
            <status>CONFIRMED</status>
            <category>Talk</category>
            <url>https://cfp.pydata.org/berlin2025/talk/KBEEHS/</url>
            <location>B07-B08</location>
            
            <attendee>Maris Nieuwenhuis</attendee>
            
        </vevent>
        
        <vevent>
            <method>PUBLISH</method>
            <uid>AU8F9U@@cfp.pydata.org</uid>
            <pentabarf:event-id></pentabarf:event-id>
            <pentabarf:event-slug>-AU8F9U</pentabarf:event-slug>
            <pentabarf:title>Automating Content Creation with LLMs: A Journey from Manual to AI-Driven Excellence</pentabarf:title>
            <pentabarf:subtitle></pentabarf:subtitle>
            <pentabarf:language>en</pentabarf:language>
            <pentabarf:language-code>en</pentabarf:language-code>
            <dtstart>20250901T134000</dtstart>
            <dtend>20250901T141000</dtend>
            <duration>003000</duration>
            <summary>Automating Content Creation with LLMs: A Journey from Manual to AI-Driven Excellence</summary>
            <description>GetYourGuide, a global marketplace for travel experiences, needs to provide structured and inspiring content for every activity in its marketplace. 
Before the release of our AI models, suppliers would create their content fully manually. The manual approach led to several issues in production, such as content inconsistencies, incorrect grammar, non-English language, and poor adherence to our content guidelines.
These content defects negatively impact the conversion rate of activities.
At the same time, with the large scale of new activity generation, our internal teams could only review a very small fraction of the submitted content.  

With our LLM solution, suppliers can now automatically generate optimal content for their activities. Our feature allows users to simply copy-paste any existing raw text of their activity, and our models would then prefill most of the content sections. Suppliers then have the opportunity to review and edit the content.
We chose two different methods to generate free text content and structured information.

For free text, we used the OpenAI fine-tuning API to create two models that generate the relevant sections of our travel activities, i.e. the title, the highlights, and the short and full descriptions.
For structured information, we used the OpenAI function-calling API to prefill the various activity tags and categories that have fixed-value constraints in our database, such as the transport used or the type of guide.
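
As an illustration (the field names and allowed values here are hypothetical, not our production schema), a function-calling tool definition that steers the model toward a fixed vocabulary might look like:

```python
# Hypothetical tag schema for illustration; the real fields and allowed
# values live in GetYourGuide's internal database.
prefill_tags_tool = {
    "type": "function",
    "function": {
        "name": "prefill_activity_tags",
        "description": "Extract fixed-vocabulary tags from raw supplier text.",
        "parameters": {
            "type": "object",
            "properties": {
                "transport_used": {
                    "type": "string",
                    "enum": ["walking", "bus", "boat", "bike"],
                },
                "guide_type": {
                    "type": "string",
                    "enum": ["live_guide", "audio_guide", "self_guided"],
                },
            },
            "required": ["transport_used", "guide_type"],
        },
    },
}
```

Passed via the tools parameter of a chat completion request, the model returns a tool call whose JSON arguments follow this schema, which can then be written to the constrained database columns.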

In order to validate our models, as well as for production monitoring, we developed a dedicated LLM evaluator that identifies hallucinations for our specific case: our models generating information that is not supported by the input supplier text. With this hallucination evaluator, we were able to score the performance of different models and unlock key learnings and iterations. The evaluator also enables our internal team to detect and correct hallucinations in production.

After several AB experiments, the new automated content creation feature is fully released to all our suppliers. The activities with content generated via AI showed significantly fewer content defects and a significant increase in bookings, with only a small fraction of hallucinations that can be reviewed and corrected manually.

In this talk, we will share our long journey consisting of several training data iterations to build our fine-tuned models, the prompt engineering challenges in building our evaluator and our function call model. We will also cover the different experiments and the operational challenges in training the models and deploying the service in production.
The talk will provide some concrete ideas and tools to automate the generation of optimal content with LLMs, which is a common use case in many industries.</description>
            <class>PUBLIC</class>
            <status>CONFIRMED</status>
            <category>Talk [Sponsored]</category>
            <url>https://cfp.pydata.org/berlin2025/talk/AU8F9U/</url>
            <location>B07-B08</location>
            
            <attendee>Marco Vene</attendee>
            
        </vevent>
        
        <vevent>
            <method>PUBLISH</method>
            <uid>ZLJRNN@@cfp.pydata.org</uid>
            <pentabarf:event-id></pentabarf:event-id>
            <pentabarf:event-slug>-ZLJRNN</pentabarf:event-slug>
            <pentabarf:title>Benchmarking 2000+ Cloud Servers for GBM Model Training and LLM Inference Speed</pentabarf:title>
            <pentabarf:subtitle></pentabarf:subtitle>
            <pentabarf:language>en</pentabarf:language>
            <pentabarf:language-code>en</pentabarf:language-code>
            <dtstart>20250901T142000</dtstart>
            <dtend>20250901T145000</dtend>
            <duration>003000</duration>
            <summary>Benchmarking 2000+ Cloud Servers for GBM Model Training and LLM Inference Speed</summary>
            <description>Spare Cores is a vendor-independent, open-source, Python-based ecosystem offering a comprehensive inventory and performance evaluation of servers across cloud providers. We automate the discovery and provisioning of thousands of server types in public clouds, using GitHub Actions to run hardware inspection tools and benchmarks for different workloads, including:
- General performance (GeekBench, PassMark)
- Memory bandwidth and compression algorithms
- OpenSSL, Redis, and web serving speed
- DS/ML-specific benchmarks like GBM training and LLM inference on CPUs and GPUs

All results and open-source tools (such as database dumps, APIs, and SDKs) are openly published to help users identify and launch the most cost-efficient instance type for their specific use case in their own cloud environment.

This talk introduces the open-source ecosystem, then highlights our latest benchmarking efforts, including the performance evaluation of ~2,000 server types to determine the largest LLM (from 135M to 70B parameters) that can be loaded on each machine and the inference speeds achievable with various token lengths for prompt processing and text generation.
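
As a rough intuition for why model size matters here, the memory needed just for the weights can be estimated from the parameter count (a back-of-envelope sketch that assumes fp16/bf16 weights and ignores activation and KV-cache overhead, so real requirements are higher):

```python
# Back-of-envelope check of whether an LLM's weights fit in memory.
# Assumes 2 bytes per parameter (fp16/bf16) and ignores activation and
# KV-cache overhead, so real requirements are higher.
def weights_gib(n_params: float, bytes_per_param: int = 2) -> float:
    return n_params * bytes_per_param / 1024**3

for n in (135e6, 7e9, 70e9):
    print(f"{n / 1e9:.3f}B params -> {weights_gib(n):.1f} GiB")
```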

Slides: https://sparecores.com/assets/slides/pydata-berlin-2025.html#/cover-slide</description>
            <class>PUBLIC</class>
            <status>CONFIRMED</status>
            <category>Talk</category>
            <url>https://cfp.pydata.org/berlin2025/talk/ZLJRNN/</url>
            <location>B07-B08</location>
            
            <attendee>Gergely Daroczi</attendee>
            
        </vevent>
        
        <vevent>
            <method>PUBLISH</method>
            <uid>FPDP3E@@cfp.pydata.org</uid>
            <pentabarf:event-id></pentabarf:event-id>
            <pentabarf:event-slug>-FPDP3E</pentabarf:event-slug>
            <pentabarf:title>Scaling Python: An End-to-End ML Pipeline for ISS Anomaly Detection with Kubeflow and MLFlow</pentabarf:title>
            <pentabarf:subtitle></pentabarf:subtitle>
            <pentabarf:language>en</pentabarf:language>
            <pentabarf:language-code>en</pentabarf:language-code>
            <dtstart>20250901T154000</dtstart>
            <dtend>20250901T161000</dtend>
            <duration>003000</duration>
            <summary>Scaling Python: An End-to-End ML Pipeline for ISS Anomaly Detection with Kubeflow and MLFlow</summary>
            <description>Among popular open-source MLOps tools, **Kubeflow** stands out as a Kubernetes-native platform designed to support the entire ML lifecycle, from data preprocessing to model training, deployment, and retraining. Its modular structure enables the integration of a wide range of tools, making it a highly versatile framework for building scalable and reproducible ML workflows. Despite this, most existing resources focus on individual components rather than demonstrating how these can be orchestrated into a seamless, end-to-end pipeline.

In this talk, we present a practical case study that highlights the potential of Kubeflow in a real-world application. Specifically, we showcase how an automated ML pipeline for anomaly detection in International Space Station (ISS) telemetry data can be built and deployed using Kubeflow and other open-source MLOps tools. The dataset, originating from the Columbus module of the ISS, introduces unique challenges due to its complexity and high-dimensional nature, providing an excellent testbed for MLOps workflows.

### **What makes this approach unique?**

Our workflow is built entirely in Python, leveraging Kubeflow’s Python SDK to orchestrate every stage of the pipeline. This eliminates the need for manual interaction with Kubernetes or container configurations, making the process accessible to ML engineers and data scientists without extensive DevOps expertise.

### **Key takeaways for attendees:**

*   **Tool integration:** Learn how to combine Dask for distributed preprocessing, Katib for hyperparameter optimization, PyTorch Operator for distributed training, MLFlow for experiment tracking and monitoring, and KServe for scalable model serving. These tools are orchestrated into a unified pipeline using Kubeflow Pipelines.
*   **Overcoming challenges:** Gain insights into the technical hurdles faced during the implementation of this pipeline and discover the strategies and best practices that made it possible.
*   **Real-world impact:** Understand how to apply MLOps principles to complex, real-world datasets and how these principles translate into scalable, maintainable, and reproducible workflows.

To ensure reproducibility and accessibility, the entire pipeline, including configurations and code, is publicly available in our GitHub repository [here](https://github.com/hsteude/code-ml4cps-paper). Attendees will be able to replicate the workflow, adapt it to their own use cases, or extend it with additional features.

### **Who should attend?**

This session is designed for data scientists, ML engineers, and Python enthusiasts who want to simplify the development of scalable ML pipelines. Whether you&#x27;re new to Kubernetes or looking to streamline your MLOps workflows, this talk will provide actionable insights and tools to help you succeed.</description>
            <class>PUBLIC</class>
            <status>CONFIRMED</status>
            <category>Talk</category>
            <url>https://cfp.pydata.org/berlin2025/talk/FPDP3E/</url>
            <location>B07-B08</location>
            
            <attendee>Christian Geier</attendee>
            
        </vevent>
        
        <vevent>
            <method>PUBLISH</method>
            <uid>SB88M7@@cfp.pydata.org</uid>
            <pentabarf:event-id></pentabarf:event-id>
            <pentabarf:event-slug>-SB88M7</pentabarf:event-slug>
            <pentabarf:title>Beyond the Black Box: Interpreting ML models with SHAP</pentabarf:title>
            <pentabarf:subtitle></pentabarf:subtitle>
            <pentabarf:language>en</pentabarf:language>
            <pentabarf:language-code>en</pentabarf:language-code>
            <dtstart>20250901T162000</dtstart>
            <dtend>20250901T165000</dtend>
            <duration>003000</duration>
            <summary>Beyond the Black Box: Interpreting ML models with SHAP</summary>
            <description>## Audience
This talk is for Data Scientists and Machine Learning Engineers at any level. Basic knowledge of machine learning is useful but not necessary.

## Objective
Attendees will learn why explainable machine learning is important and how to use and interpret SHAP values for their model.

## Details

ML models behave as black boxes in most scenarios. The model predicts or provides a certain output, but it is very difficult to generate any actionable insights directly. This is mostly because we generally have no idea which features are contributing the most to the model&#x27;s behavior internally. SHAP provides a way to explain model predictions and can be an important tool in a data scientist&#x27;s toolbox.

In this talk, we will begin by explaining to the audience the need for explainability and why it is essential to understand beyond what the model outputs. We will then briefly review the mathematical intuition behind Shapley values and their origins in game theory. After that, we will walk through a couple of case studies of tree-based and neural network-based models. We will be focusing on the interpretation of SHAP through various plots. Finally, we will discuss the best practices for interpreting SHAP visualizations, handling large datasets, and common pitfalls to avoid.
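
As a taste of the underlying idea, Shapley values can be computed exactly for a tiny toy "game" in plain Python (the payoff function below is a made-up additive example; libraries like shap approximate this kind of attribution for real models):

```python
from itertools import permutations
from math import factorial

# Toy exact Shapley computation: v(S) is the payoff of a coalition S of
# features. Each feature's Shapley value is its average marginal
# contribution over all orderings in which features are added.
def shapley_values(players, v):
    phi = {p: 0.0 for p in players}
    for order in permutations(players):
        seen = set()
        for p in order:
            phi[p] += v(seen | {p}) - v(seen)
            seen.add(p)
    return {p: total / factorial(len(players)) for p, total in phi.items()}

# Hypothetical payoff: feature "a" contributes 2, "b" contributes 1, "c" nothing.
payoff = lambda s: 2 * ("a" in s) + 1 * ("b" in s)
print(shapley_values(["a", "b", "c"], payoff))  # {'a': 2.0, 'b': 1.0, 'c': 0.0}
```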

## Outline

- Introduction and motivation [1 min]
- Why explainability matters [5 min]
   - Problem with black box models
   - Actionable insights
- SHAP theory and intuition [5 min]
    - Shapley values
    - Game theory origins
    - SHAP
- Case study 1: Tree-based model [4 min]
    - Problem definition
    - Model output
    - SHAP visualization
      - Global plots
      - Local plots
    - Interpretation
- Case study 2: Neural Network model [8 min]
    - Problem definition
    - Model output
    - SHAP visualization
       - Global plots
       - Local plots
    - Interpretation
- Best practices and common pitfalls [4 min]
    - Interpret SHAP correctly
    - Avoid misleading explanations
    - Performance challenges for large datasets
    - Other techniques for explainability
- Q/A [3 min]</description>
            <class>PUBLIC</class>
            <status>CONFIRMED</status>
            <category>Talk</category>
            <url>https://cfp.pydata.org/berlin2025/talk/SB88M7/</url>
            <location>B07-B08</location>
            
            <attendee>Avik Basu</attendee>
            
        </vevent>
        
        <vevent>
            <method>PUBLISH</method>
            <uid>VURY38@@cfp.pydata.org</uid>
            <pentabarf:event-id></pentabarf:event-id>
            <pentabarf:event-slug>-VURY38</pentabarf:event-slug>
            <pentabarf:title>Building an A/B Testing Framework with NiceGUI</pentabarf:title>
            <pentabarf:subtitle></pentabarf:subtitle>
            <pentabarf:language>en</pentabarf:language>
            <pentabarf:language-code>en</pentabarf:language-code>
            <dtstart>20250901T170000</dtstart>
            <dtend>20250901T173000</dtend>
            <duration>003000</duration>
            <summary>Building an A/B Testing Framework with NiceGUI</summary>
            <description>NiceGUI is a Python-based web UI framework that enables developers to create full-featured, interactive web applications without needing to write JavaScript. 

 In this talk, I’ll share how my team and I used NiceGUI to build an internal A/B testing platform entirely in Python. A/B testing is essential for validating new features and improving user experience, and by creating a custom platform, we were able to streamline experiment management and simplify data visualization.

This talk is ideal for Python developers, data scientists, or anyone interested in creating web-based internal tools quickly. If you&#x27;re looking for a solution that minimizes frontend complexity while providing a powerful framework for building interactive applications, this talk will provide valuable insights. No prior knowledge of JavaScript or frontend frameworks is necessary; familiarity with Python and basic web concepts will suffice.

After a brief introduction, I’ll first explain what A/B testing is and why it’s so crucial for making data-driven decisions. I’ll also discuss why having a custom-built platform can help improve experiment efficiency and results interpretation.
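
As a flavor of the kind of logic such a platform encapsulates, variant assignment is often done by deterministically hashing a stable user id, so each user always sees the same variant (an illustrative sketch, not necessarily our implementation):

```python
import hashlib

# Deterministic A/B variant assignment: hash a stable user id together with
# the experiment name into a bucket, so assignment is stable across sessions
# without storing any state. (Illustrative only.)
def assign_variant(user_id: str, experiment: str,
                   variants=("control", "treatment")) -> str:
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

print(assign_variant("user-42", "new-checkout"))  # stable across calls
```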

Next, I’ll dive into the key requirements we had for the platform, such as flexibility, ease of use, and seamless integration with our existing backend systems. I’ll also explain why we chose NiceGUI over other Python-based frameworks, emphasizing its ability to help us build a robust web application without the complexities of traditional frontend development.

Throughout the talk, I’ll walk through how we used NiceGUI to design the user interface, display results, and integrate with the backend. I’ll focus on the development experience, highlighting the challenges we faced and how NiceGUI’s features allowed us to make rapid progress while keeping things simple and Pythonic.

The takeaway for the audience will be understanding how NiceGUI simplifies the development of interactive web applications, focusing on internal tools like dashboards or experiment management platforms. I’ll also share the benefits we’ve experienced with the platform so far and discuss the lessons we’ve learned. Finally, I’ll explain how NiceGUI helped us create an interactive, production-ready tool with minimal frontend complexity.

This session will demonstrate, through a specific use case, how NiceGUI can be an ideal solution for Python developers looking to quickly build internal tools, reduce frontend complexity, and speed up development cycles.

Agenda:
1. Introduction &amp; Background (5 minutes)
2. Requirements for an A/B Testing Platform (2 minutes)
3. Why We Chose NiceGUI (2 minutes)
4. How We Built It – Patterns &amp; Architecture (10 minutes)
5. Benefits and Outcomes (3 minutes)
6. Challenges and Lessons Learned (3 minutes)</description>
            <class>PUBLIC</class>
            <status>CONFIRMED</status>
            <category>Talk</category>
            <url>https://cfp.pydata.org/berlin2025/talk/VURY38/</url>
            <location>B07-B08</location>
            
            <attendee>Wessel van de Goor</attendee>
            
        </vevent>
        
        <vevent>
            <method>PUBLISH</method>
            <uid>KCPVYN@@cfp.pydata.org</uid>
            <pentabarf:event-id></pentabarf:event-id>
            <pentabarf:event-slug>-KCPVYN</pentabarf:event-slug>
            <pentabarf:title>🛰️➡️🧑‍💻: Streamlining Satellite Data for Analysis-Ready Outputs</pentabarf:title>
            <pentabarf:subtitle></pentabarf:subtitle>
            <pentabarf:language>en</pentabarf:language>
            <pentabarf:language-code>en</pentabarf:language-code>
            <dtstart>20250901T104000</dtstart>
            <dtend>20250901T111000</dtend>
            <duration>003000</duration>
            <summary>🛰️➡️🧑‍💻: Streamlining Satellite Data for Analysis-Ready Outputs</summary>
            <description>Satellite imagery offers powerful insights for vegetation monitoring, deforestation detection, and identifying unauthorized activity, but raw data isn’t analysis-ready. In this talk, I will share how our team built a scalable, cloud-native pipeline that automates satellite data acquisition, storage, and preprocessing into consistent, analysis-ready datasets (ARDs). Designed for flexibility and growth, the system handles various sensors and formats while ensuring high data quality.

We use Prefect for workflow orchestration and Anyscale Ray for distributed processing, cutting processing times from days to near real-time. The open SpatioTemporal Asset Catalog (STAC) standard enables robust metadata indexing, supporting fast querying and long-term interoperability. This adaptable architecture empowers fast, reliable geospatial analytics across domains.</description>
            <class>PUBLIC</class>
            <status>CONFIRMED</status>
            <category>Talk</category>
            <url>https://cfp.pydata.org/berlin2025/talk/KCPVYN/</url>
            <location>B05-B06</location>
            
            <attendee>Vinayak Nair</attendee>
            
        </vevent>
        
        <vevent>
            <method>PUBLISH</method>
            <uid>8UJA37@@cfp.pydata.org</uid>
            <pentabarf:event-id></pentabarf:event-id>
            <pentabarf:event-slug>-8UJA37</pentabarf:event-slug>
            <pentabarf:title>Exploring Millions of High-dimensional Datapoints in the Browser for Early Drug Discovery</pentabarf:title>
            <pentabarf:subtitle></pentabarf:subtitle>
            <pentabarf:language>en</pentabarf:language>
            <pentabarf:language-code>en</pentabarf:language-code>
            <dtstart>20250901T112000</dtstart>
            <dtend>20250901T115000</dtend>
            <duration>003000</duration>
            <summary>Exploring Millions of High-dimensional Datapoints in the Browser for Early Drug Discovery</summary>
            <description>From initial screening to regulatory approval, developing new drugs can take over a decade. A major bottleneck is the early-stage identification of promising compounds, a process that increasingly relies on high-throughput image-based profiling and requires researchers to sift through vast oceans of potential molecular candidates. Analyzing these large-scale, high-dimensional datasets introduces challenges in data ingestion, transformation, and visualization. Overcoming those challenges has the potential to significantly accelerate the journey from discovery to delivery, thus providing life-saving treatments to patients faster.

In this talk, we share how our team at Bayer engineered a system to navigate millions of cell-level data points in the browser. Starting with raw microscopy images, we use computer vision and deep learning models to extract morphological features. These features are aggregated into “consensus profiles” that enable robust comparisons across treatment conditions and experimental batches.
We’ll present how we automated and optimized what was previously a four-week manual workflow using a tech stack including:

- Apache Airflow for orchestrating parallel processing and ensuring reproducibility
- GraphQL combined with REST for a balance of flexibility and speed in serving data
- React and Next.js for building user interfaces that support real-time interaction with millions of records
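
The "consensus profile" aggregation mentioned above can be sketched as a per-treatment median over feature vectors (the treatment names and values below are made up; medians keep single-well outliers from dominating):

```python
from collections import defaultdict
from statistics import median

# Sketch of consensus-profile aggregation: group morphological feature
# vectors by treatment and take the per-feature median, a robust summary
# across wells and batches. (Illustrative data, not real measurements.)
def consensus_profiles(rows):
    grouped = defaultdict(list)
    for treatment, features in rows:
        grouped[treatment].append(features)
    return {
        t: [median(vals) for vals in zip(*vecs)]
        for t, vecs in grouped.items()
    }

rows = [
    ("compound_A", [0.9, 1.2]),
    ("compound_A", [1.1, 1.0]),
    ("compound_A", [1.0, 5.0]),   # outlier in the second feature
    ("DMSO",       [0.1, 0.2]),
]
print(consensus_profiles(rows))  # {'compound_A': [1.0, 1.2], 'DMSO': [0.1, 0.2]}
```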

We’ll also showcase techniques for creating accessible and performant visualizations: scatter plots, dose-response curves, dendrograms, and similarity heatmaps. These visualizations were designed for scientists who are not software developers, so particular attention was paid to usability, accessibility, and performance.

By presenting practical challenges and solutions, we will enable attendees to improve their approaches to data visualization and interaction in their own domains. We aim to convey how these technologies can transform the way we interact with complex datasets in engineering applications on a broad spectrum, empowering us with more efficient methodologies to locate the needle in a multidimensional haystack.</description>
            <class>PUBLIC</class>
            <status>CONFIRMED</status>
            <category>Talk</category>
            <url>https://cfp.pydata.org/berlin2025/talk/8UJA37/</url>
            <location>B05-B06</location>
            
            <attendee>Tim Tenckhoff</attendee>
            
            <attendee>Matthias Orlowski</attendee>
            
        </vevent>
        
        <vevent>
            <method>PUBLISH</method>
            <uid>RQCNQV@@cfp.pydata.org</uid>
            <pentabarf:event-id></pentabarf:event-id>
            <pentabarf:event-slug>-RQCNQV</pentabarf:event-slug>
            <pentabarf:title>Democratizing Experimentation: How GetYourGuide Built a Flexible and Scalable A/B Testing Platform</pentabarf:title>
            <pentabarf:subtitle></pentabarf:subtitle>
            <pentabarf:language>en</pentabarf:language>
            <pentabarf:language-code>en</pentabarf:language-code>
            <dtstart>20250901T120000</dtstart>
            <dtend>20250901T123000</dtend>
            <duration>003000</duration>
            <summary>Democratizing Experimentation: How GetYourGuide Built a Flexible and Scalable A/B Testing Platform</summary>
            <description>Experimentation is essential for data-driven product development, but centralized experimentation systems often become bottlenecks, limiting innovation and velocity. At GetYourGuide, we faced this challenge and decided to democratize experimentation, enabling analysts and product teams across the company to define, run, and analyze experiments independently. In this session, we&#x27;ll share practical insights from our journey toward democratization, focusing on technical implementation details and lessons learned.  

**From Centralized to Decentralized Experimentation**  
Initially, experimentation at GetYourGuide was centralized, limiting flexibility and slowing down decision-making. We recognized the need to empower individual contributors (ICs) by creating a self-service experimentation platform. We&#x27;ll discuss the practical challenges we encountered, including managing complexity, maintaining consistency, and ensuring data quality across decentralized teams.  

**Enabling Flexible Metric and Dimension Definitions**  
To democratize experimentation effectively, we needed to empower analysts to define their own metrics and dimensions without heavy engineering involvement. We&#x27;ll share how we designed a modular SQL-template approach, allowing analysts to quickly create, test, and deploy new definitions. We&#x27;ll illustrate this approach with real-world examples, such as conversion rate, revenue per visitor, channel splits, and platform segmentation, demonstrating how this flexibility significantly accelerated experimentation velocity.  
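
As an illustration of the SQL-template idea, a vetted query skeleton with named slots that analysts fill in (table and column names below are hypothetical, not our actual warehouse schema):

```python
from string import Template

# Sketch of a modular SQL template: analysts substitute a metric
# aggregation and date range into a vetted skeleton instead of writing
# full pipelines. (All identifiers are hypothetical.)
METRIC_TEMPLATE = Template("""
SELECT
    experiment_id,
    variant,
    $aggregation AS metric_value
FROM $source_table
WHERE event_date BETWEEN '$start' AND '$end'
GROUP BY experiment_id, variant
""")

sql = METRIC_TEMPLATE.substitute(
    aggregation="SUM(booking_revenue) / COUNT(DISTINCT visitor_id)",
    source_table="analytics.experiment_events",
    start="2025-08-01",
    end="2025-08-31",
)
print(sql)
```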

**Standardizing Statistical Calculations with the Analyzer Toolkit**
Our initial experimentation infrastructure relied heavily on Looker data models, which proved insufficient for complex statistical methods like sequential testing. To address this, we built a Python-based analysis package, the Analyzer, that standardized statistical calculations and provided reusable components. We&#x27;ll explain how analysts leverage this toolkit to ensure consistency, accuracy, and extensibility of statistical methods. We&#x27;ll also share how the Analyzer became a valuable resource beyond experimentation, supporting broader analytical use-cases across the organization.  
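
As a flavor of the reusable statistical components such a toolkit standardizes, here is a plain two-proportion z-test for conversion rates (a fixed-horizon sketch; the sequential methods mentioned above are more involved):

```python
from math import erf, sqrt

# Two-proportion z-test for conversion rates: pooled standard error,
# z statistic, and a two-sided p-value via the normal CDF.
def two_proportion_z(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# 10.0% vs 13.0% conversion on 1,000 visitors each
z, p = two_proportion_z(100, 1000, 130, 1000)
print(f"z={z:.2f}, p={p:.3f}")
```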

**Batch Processing and API-Driven Experiment Results**  
To ensure timely access to experiment results, we implemented a robust batch processing pipeline that pre-calculates daily experiment impressions, metrics, and dimensions. Additionally, we developed a flexible API layer to enable analysts to retrieve specific experiment results dynamically, without waiting for scheduled batch jobs. We&#x27;ll discuss the technical architecture behind this dual approach, highlighting how it balances efficiency, reliability, and flexibility.  

**Key Lessons and Takeaways**  
Attendees will leave this session with practical insights into:
* Democratizing experimentation to accelerate innovation and velocity.
* Best practices for designing flexible, scalable, and maintainable experimentation infrastructure.
* Technical strategies for enabling self-service metric/dimension definitions, standardized statistical calculations, and extensible analytical capabilities.  
  
We&#x27;ll conclude by briefly outlining our future plans, including additional discriminators, advanced statistical methods, and further UI enhancements aimed at continuous democratization.</description>
            <class>PUBLIC</class>
            <status>CONFIRMED</status>
            <category>Talk</category>
            <url>https://cfp.pydata.org/berlin2025/talk/RQCNQV/</url>
            <location>B05-B06</location>
            
            <attendee>Konrad Richter</attendee>
            
        </vevent>
        
        <vevent>
            <method>PUBLISH</method>
            <uid>LZYBVH@@cfp.pydata.org</uid>
            <pentabarf:event-id></pentabarf:event-id>
            <pentabarf:event-slug>-LZYBVH</pentabarf:event-slug>
            <pentabarf:title>The EU AI Act: Unveiling Lesser-Known Aspects, Implementation Entities, and Exemptions</pentabarf:title>
            <pentabarf:subtitle></pentabarf:subtitle>
            <pentabarf:language>en</pentabarf:language>
            <pentabarf:language-code>en</pentabarf:language-code>
            <dtstart>20250901T134000</dtstart>
            <dtend>20250901T141000</dtend>
            <duration>003000</duration>
            <summary>The EU AI Act: Unveiling Lesser-Known Aspects, Implementation Entities, and Exemptions</summary>
            <description>The EU AI Act is a groundbreaking regulatory framework, partly in effect, designed to govern AI systems based on their perceived risk. This talk provides an overview of the basics and explores lesser-discussed aspects of the Act, such as the entities involved in its implementation, the role of the private sector, and notable exemptions for high-risk government and law enforcement use cases.

The AI Act categorizes AI systems into groups based on their potential harm; the two most notable are the unacceptable-risk and high-risk groups. Social scoring systems, systems that subconsciously manipulate behavior, and mass CCTV facial recognition systems are among the prohibited, unacceptable-risk group.

On the other hand, high-risk systems, including biometric identification systems, AI systems used in education and vocational training, and employment and worker management systems, must meet stringent obligations before entering the market.

Surprisingly, the AI Act excludes many high-risk government and law enforcement use cases. AI systems used for national security, defense, and law enforcement tasks like border control, crime prevention, and criminal investigations are largely exempt. These exemptions aim to preserve public security and Member States&#x27; sovereignty but raise concerns about potential AI misuse in these sensitive areas. For instance, predictive policing tools, though controversial, fall outside the AI Act&#x27;s scope.

Additionally, the AI Act will not apply to AI systems used as research or development tools or to systems developed or used exclusively for military purposes. This leaves a substantial gap in the regulation of high-risk AI systems, emphasizing the need for complementary safeguards.

One of the less talked about aspects is the complex ecosystem of entities involved in the AI Act&#x27;s implementation. The European Artificial Intelligence Board is the Act&#x27;s central hub, comprising representatives from each national supervisory authority, the European Data Protection Supervisor, and the Commission. The board will issue opinions and recommendations to ensure the AI Act&#x27;s consistent application. National supervisory authorities, such as data protection agencies, will oversee the Act&#x27;s enforcement, exchanging information through the board. The European Commission will facilitate cooperation among national authorities and with international organizations.

When it comes to verifying submitted documents and claimed [lack] of high-risk systems, there will be entities called notifying bodies, established by each Member State, to assess and certify notified bodies. Notified bodies are conformity assessment bodies accredited to evaluate high-risk AI systems, and they are a space where the private sector and startups can grow and engage with the regulatory bodies. They will play a crucial role in ensuring high-risk AI systems conform to the AI Act&#x27;s requirements.

Moreover, the AI Act introduces AI regulatory sandboxes, temporary experimental spaces allowing developers to test innovative AI systems under regulatory supervision. National competent authorities will establish and monitor these sandboxes, fostering innovation while minimizing risks. The private sector can engage with these sandboxes, creating opportunities for startups and established companies to develop and test their new systems.

In conclusion, the EU AI Act is a comprehensive regulatory framework that establishes a complex ecosystem of implementation entities and offers opportunities for private sector engagement. However, it also presents notable exemptions for high-risk government and law enforcement use cases, sparking debates about its scope and effectiveness. Understanding these lesser-known aspects is crucial for navigating the AI Act&#x27;s regulatory landscape and fostering responsible AI innovation.</description>
            <class>PUBLIC</class>
            <status>CONFIRMED</status>
            <category>Talk</category>
            <url>https://cfp.pydata.org/berlin2025/talk/LZYBVH/</url>
            <location>B05-B06</location>
            
            <attendee>Adrin Jalali</attendee>
            
        </vevent>
        
        <vevent>
            <method>PUBLISH</method>
            <uid>JE8YJT@@cfp.pydata.org</uid>
            <pentabarf:event-id></pentabarf:event-id>
            <pentabarf:event-slug>-JE8YJT</pentabarf:event-slug>
            <pentabarf:title>What’s Really Going On in Your Model? A Python Guide to Explainable AI</pentabarf:title>
            <pentabarf:subtitle></pentabarf:subtitle>
            <pentabarf:language>en</pentabarf:language>
            <pentabarf:language-code>en</pentabarf:language-code>
            <dtstart>20250901T142000</dtstart>
            <dtend>20250901T145000</dtend>
            <duration>003000</duration>
            <summary>What’s Really Going On in Your Model? A Python Guide to Explainable AI</summary>
            <description>We’ve all been there: your machine learning model performs well in testing, but when it comes time to explain why it made a specific prediction, things get murky. In many real-world applications, especially in domains like healthcare, finance, or operations, being able to explain your model isn’t just helpful, it’s critical.

This talk is a practical walkthrough of explainable AI (XAI) tools in Python, aimed at data scientists and engineers who want to make their models more transparent and trustworthy. We’ll cover libraries like SHAP, LIME, and Captum, and show how to use them to generate both local and global explanations for models ranging from random forests to deep neural nets.

You’ll see hands-on examples, common pitfalls to avoid, and ideas for integrating interpretability into your workflow, whether you’re trying to debug your model or justify its predictions to a non-technical stakeholder. If you’ve ever wanted to better understand your own models, or help others trust them, this session is for you.</description>
            <class>PUBLIC</class>
            <status>CONFIRMED</status>
            <category>Talk</category>
            <url>https://cfp.pydata.org/berlin2025/talk/JE8YJT/</url>
            <location>B05-B06</location>
            
            <attendee>Yashasvi Misra</attendee>
            
        </vevent>
        
        <vevent>
            <method>PUBLISH</method>
            <uid>GW9EXL@@cfp.pydata.org</uid>
            <pentabarf:event-id></pentabarf:event-id>
            <pentabarf:event-slug>-GW9EXL</pentabarf:event-slug>
            <pentabarf:title>Consumer Choice Models with PyMC Marketing</pentabarf:title>
            <pentabarf:subtitle></pentabarf:subtitle>
            <pentabarf:language>en</pentabarf:language>
            <pentabarf:language-code>en</pentabarf:language-code>
            <dtstart>20250901T162000</dtstart>
            <dtend>20250901T165000</dtend>
            <duration>003000</duration>
            <summary>Consumer Choice Models with PyMC Marketing</summary>
            <description>The market sets the price, but what drives market demand? Classical implementations of discrete choice models discovered that market structure needed to be explicitly encoded in the model to avoid the problem of implausible predictions about the substitution value of distinct products. We demonstrate this issue and how to resolve it by adding more explicit structure to the models of market demand while giving insight into what drives the utility of products for consumers. These consumer choice models find a natural expression in the Bayesian paradigm and we show how to fit them to real data with PyMC Marketing&#x27;s Consumer Choice module.</description>
            <class>PUBLIC</class>
            <status>CONFIRMED</status>
            <category>Talk</category>
            <url>https://cfp.pydata.org/berlin2025/talk/GW9EXL/</url>
            <location>B05-B06</location>
            
            <attendee>Nathaniel Forde</attendee>
            
        </vevent>
        
        <vevent>
            <method>PUBLISH</method>
            <uid>3XMJM3@@cfp.pydata.org</uid>
            <pentabarf:event-id></pentabarf:event-id>
            <pentabarf:event-slug>-3XMJM3</pentabarf:event-slug>
            <pentabarf:title>Risk Budget Optimization for Causal Mix Models</pentabarf:title>
            <pentabarf:subtitle></pentabarf:subtitle>
            <pentabarf:language>en</pentabarf:language>
            <pentabarf:language-code>en</pentabarf:language-code>
            <dtstart>20250901T170000</dtstart>
            <dtend>20250901T173000</dtend>
            <duration>003000</duration>
            <summary>Risk Budget Optimization for Causal Mix Models</summary>
            <description>Budget planning often treats forecasts as fixed targets, leaving decision‑makers blind to the volatility hiding beneath the averages. This talk shows how Bayesian modelling turns every unknown—channel response, cost elasticity, future demand—into an explicit probability distribution. By simulating thousands of plausible futures, we can measure upside and downside simultaneously and translate a company’s risk appetite into clear optimisation objectives such as Value‑at‑Risk, Conditional VaR, entropic risk, or custom utility functions that respect budget caps and pacing rules.

Using reproducible PyMC code, we will walk through converting posterior samples into risk‑aware spend recommendations, and visualising trade‑offs so non‑technical stakeholders grasp both opportunity and exposure.

Attendees will leave with a notebook and code for adapting Bayesian models built with PyMC and PyMC-Marketing to marketing budget planning, capital allocation, or any scenario where uncertainty and risk tolerance must shape financial decisions.</description>
            <class>PUBLIC</class>
            <status>CONFIRMED</status>
            <category>Talk</category>
            <url>https://cfp.pydata.org/berlin2025/talk/3XMJM3/</url>
            <location>B05-B06</location>
            
            <attendee>Carlos Trujillo</attendee>
            
        </vevent>
        
        <vevent>
            <method>PUBLISH</method>
            <uid>JKEHMH@@cfp.pydata.org</uid>
            <pentabarf:event-id></pentabarf:event-id>
            <pentabarf:event-slug>-JKEHMH</pentabarf:event-slug>
            <pentabarf:title>Narwhals: enabling universal dataframe support</pentabarf:title>
            <pentabarf:subtitle></pentabarf:subtitle>
            <pentabarf:language>en</pentabarf:language>
            <pentabarf:language-code>en</pentabarf:language-code>
            <dtstart>20250902T091000</dtstart>
            <dtend>20250902T101000</dtend>
            <duration>010000</duration>
            <summary>Narwhals: enabling universal dataframe support</summary>
            <description>Narwhals is a lightweight and extensible compatibility layer between dataframe libraries. It is already used by several major open source libraries including Altair, Bokeh, Marimo, Plotly, and more. You will learn how to use Narwhals to build dataframe-agnostic tools, how Narwhals gained traction in a short amount of time, and what the future of dataframes looks like.

This is a technical talk, and basic familiarity with Python and dataframes will be assumed. We will cover:

* What the data science landscape looked like in 2024 before Narwhals came onto the scene.
* What problems Narwhals solves, why you can&#x27;t &quot;just convert to pandas&quot; or &quot;just use PyArrow&quot;.
* How to use Narwhals, with an emphasis on lazy-only computation.
* Static typing.
* Narwhals and SQL.
* Extending Narwhals with your own backend.
* The Narwhals community, and how you can get involved.
* What we think the future of dataframes looks like, and how you can help make it happen.

Tool builders will learn how to build tools for modern dataframe libraries without sacrificing support for foundational classic libraries such as pandas. Data scientists will learn about what goes on under the hood when their favourite tools support their favourite dataframe libraries. Finally, everyone will learn from insights on community building and management.</description>
            <class>PUBLIC</class>
            <status>CONFIRMED</status>
            <category>Keynote</category>
            <url>https://cfp.pydata.org/berlin2025/talk/JKEHMH/</url>
            <location>Kuppelsaal</location>
            
            <attendee>Marco Gorelli</attendee>
            
        </vevent>
        
        <vevent>
            <method>PUBLISH</method>
            <uid>3BVEKT@@cfp.pydata.org</uid>
            <pentabarf:event-id></pentabarf:event-id>
            <pentabarf:event-slug>-3BVEKT</pentabarf:event-slug>
            <pentabarf:title>Lightning Talks</pentabarf:title>
            <pentabarf:subtitle></pentabarf:subtitle>
            <pentabarf:language>en</pentabarf:language>
            <pentabarf:language-code>en</pentabarf:language-code>
            <dtstart>20250902T164500</dtstart>
            <dtend>20250902T173000</dtend>
            <duration>004500</duration>
            <summary>Lightning Talks</summary>
            <description>⚡ Lightning Talk Rules

- No promotion for products or companies.
- No call for &#x27;we are hiring&#x27; (but you may name your employer).
- One LT per person per conference (conference policy).

Community Event Announcements

- ⏱ You want to announce a community event? You have ONE minute.
- All event announcements will be collected in a single slide deck; see instructions at the Lightning Talk desk in the Community Space in the Lounge on Level 1.

All other LTs:

- ⏱ You have exactly 5 minutes. The clock starts when you start — and ends when time’s up. That’s the thrill of Lightning Talks ⚡
- 🎯 Be sharp, clear, and fun. Introduce your idea, make your point, give the audience something to remember. No pressure. (Okay, maybe a little.)
- 🐍 Keep it relevant to Python, PyData and the community. You can go broad — tools, workflows, stories, experiments — as long as there’s some connection to Python, PyData or the community.
- 👏 Keep it respectful. Keep it awesome. Humor is welcome, but please be kind, inclusive, and professional.
- 🎤 Be ready when your name is called. We’re running a tight session — speakers go on stage rapid-fire. Stay close and stay hyped.</description>
            <class>PUBLIC</class>
            <status>CONFIRMED</status>
            <category>Plenary Session [Organizers]</category>
            <url>https://cfp.pydata.org/berlin2025/talk/3BVEKT/</url>
            <location>Kuppelsaal</location>
            
        </vevent>
        
        <vevent>
            <method>PUBLISH</method>
            <uid>URGKYN@@cfp.pydata.org</uid>
            <pentabarf:event-id></pentabarf:event-id>
            <pentabarf:event-slug>-URGKYN</pentabarf:event-slug>
            <pentabarf:title>PyLadies &amp; Empowered in Tech Social Event @Hofbräu Wirtshaus</pentabarf:title>
            <pentabarf:subtitle></pentabarf:subtitle>
            <pentabarf:language>en</pentabarf:language>
            <pentabarf:language-code>en</pentabarf:language-code>
            <dtstart>20250902T180000</dtstart>
            <dtend>20250902T190000</dtend>
            <duration>010000</duration>
            <summary>PyLadies &amp; Empowered in Tech Social Event @Hofbräu Wirtshaus</summary>
            <description>**PyLadies** is an international mentorship group with a focus on helping more women and gender non-conforming people become active participants and leaders in the Python open-source community. Its mission is to promote, educate and advance a diverse Python community through outreach, education, conferences, events and social gatherings.

---

**Empowered in Tech** is a community in Berlin dedicated to empowering FLINTA (women, lesbians, intersex, non-binary, trans and agender) people to excel in their tech journey. We welcome engineers, software developers, data scientists, designers, product managers, career changers and other professionals in the tech industry. We are open to all tech stacks, programming languages and experience levels. Our goal is to support our members in growing their careers, connecting with like-minded people and feeling welcome in tech.</description>
            <class>PUBLIC</class>
            <status>CONFIRMED</status>
            <category>Social Event</category>
            <url>https://cfp.pydata.org/berlin2025/talk/URGKYN/</url>
            <location>Kuppelsaal</location>
            
        </vevent>
        
        <vevent>
            <method>PUBLISH</method>
            <uid>GBVFJ8@@cfp.pydata.org</uid>
            <pentabarf:event-id></pentabarf:event-id>
            <pentabarf:event-slug>-GBVFJ8</pentabarf:event-slug>
            <pentabarf:title>Probably Fun: Games to teach Machine Learning</pentabarf:title>
            <pentabarf:subtitle></pentabarf:subtitle>
            <pentabarf:language>en</pentabarf:language>
            <pentabarf:language-code>en</pentabarf:language-code>
            <dtstart>20250902T104000</dtstart>
            <dtend>20250902T121000</dtend>
            <duration>013000</duration>
            <summary>Probably Fun: Games to teach Machine Learning</summary>
            <description>Board gaming has recently been declared part of the intangible cultural heritage in Germany by UNESCO. Games encourage people to use their brains in a focused, constructive and peaceful way. This makes games a fantastic tool in the classroom. While many games contain algorithms and statistical models right under the surface, finding an actual model of Machine Learning is a bit harder. We have put some thought into creating or finding games that have a clear connection to Machine Learning.

We conducted a tutorial featuring board games at PyConDE 2025. This time, we have raised the ante and moved the focus from statistics to Machine Learning. At PyData Berlin we also expect a particular challenge: we do not expect a room with tables for 80+ people. We therefore chose game mechanics that work with minimal material and scale up to big groups. As a consequence, the games are easier to adapt to larger classes, such as university courses and seminars. We also take care to limit the time each game requires. In a classroom situation, this allows the game to be used as a priming exercise that can be followed up with theory and/or practical exercises using computers and programming.

The tutorial will be executed according to the following pseudocode (or lesson plan):

1. Game #1 is played in a plenary (5 min)
2. The presenters give a short introduction on why games matter (5 min)
3. The audience is randomly sampled into teams of 6 (2 min)
4. Game #2 is played in the teams in a cooperative manner (15 min)
5. Game #3 is played in the teams in a cooperative manner (15 min)
6. Game #4 is played with the teams competing against each other (20 min)
7. Winners are determined and applauded (5 min)
8. Game #5 is played in the plenary again (10 min)
9. Q &amp; A and wrap-up (10 min)

One of the presenters is certified as a board game educator &quot;Fachkraft für Gesellschaftsspiele&quot; by the Brettspielakademie (https://brettspielakademie.de/).

The games and lessons have been field-tested with university courses and are made available under a Creative Commons license. You are free to reuse or modify them for your own teaching. Several games (mostly on statistics) and sample lesson plans are available at https://www.academis.eu/probably_fun/.</description>
            <class>PUBLIC</class>
            <status>CONFIRMED</status>
            <category>Tutorial</category>
            <url>https://cfp.pydata.org/berlin2025/talk/GBVFJ8/</url>
            <location>B09</location>
            
            <attendee>Dr. Kristian Rother</attendee>
            
            <attendee>Shreyaasri Prakash</attendee>
            
        </vevent>
        
        <vevent>
            <method>PUBLISH</method>
            <uid>W9Q7JY@@cfp.pydata.org</uid>
            <pentabarf:event-id></pentabarf:event-id>
            <pentabarf:event-slug>-W9Q7JY</pentabarf:event-slug>
            <pentabarf:title>Deep Dive into the Synthetic Data SDK</pentabarf:title>
            <pentabarf:subtitle></pentabarf:subtitle>
            <pentabarf:language>en</pentabarf:language>
            <pentabarf:language-code>en</pentabarf:language-code>
            <dtstart>20250902T134000</dtstart>
            <dtend>20250902T151000</dtend>
            <duration>013000</duration>
            <summary>Deep Dive into the Synthetic Data SDK</summary>
            <description>This hands-on tutorial will take participants beyond the basics of the Synthetic Data SDK, the emerging open-source standard for creating privacy-preserving synthetic data.

After a brief recap of the SDK’s core capabilities, the session will dive into advanced functionality, beginning with an in-depth exploration of differential privacy. Attendees will learn how the SDK integrates formal privacy guarantees, configure key parameters (i.e., epsilon and delta), and observe the trade-offs between privacy and utility through live examples.

The session will then focus on conditional generation, demonstrating how users can guide synthetic data output based on specific constraints or target values - an essential feature for scenario testing and AI model validation.

A dedicated section will cover multi-table synthesis, where participants will learn how to model and generate relational datasets with primary-foreign key dependencies, preserving structural and statistical integrity across multiple linked tables.

Finally, the tutorial will introduce the concept of fair synthetic data, showing how the SDK supports data generation aligned with the principle of statistical parity to help reduce representational bias in downstream use cases.

Each segment includes interactive coding exercises and real-world datasets to ensure practical understanding. Participants should have a working knowledge of Python and prior experience with the SDK or similar tools.</description>
            <class>PUBLIC</class>
            <status>CONFIRMED</status>
            <category>Tutorial</category>
            <url>https://cfp.pydata.org/berlin2025/talk/W9Q7JY/</url>
            <location>B09</location>
            
            <attendee>Tobias Hann</attendee>
            
        </vevent>
        
        <vevent>
            <method>PUBLISH</method>
            <uid>ZXTLEW@@cfp.pydata.org</uid>
            <pentabarf:event-id></pentabarf:event-id>
            <pentabarf:event-slug>-ZXTLEW</pentabarf:event-slug>
            <pentabarf:title>Forget the Cloud: Building Lean Batch Pipelines from TCP Streams with Python and DuckDB</pentabarf:title>
            <pentabarf:subtitle></pentabarf:subtitle>
            <pentabarf:language>en</pentabarf:language>
            <pentabarf:language-code>en</pentabarf:language-code>
            <dtstart>20250902T155000</dtstart>
            <dtend>20250902T163500</dtend>
            <duration>004500</duration>
            <summary>Forget the Cloud: Building Lean Batch Pipelines from TCP Streams with Python and DuckDB</summary>
            <description>Cloud-native tools are everywhere. But not every system can or should move to the cloud.

In many industries like manufacturing, logistics, or energy, TCP streams remain the backbone of real-time data exchange. These systems are often on-premise, resource-constrained, and mission-critical.

This talk shows how you can build lean, powerful batch pipelines with source data coming from TCP streams using Python and DuckDB. All without the complexity of cloud services.

We&#x27;ll cover:

- Why TCP streams still matter
- Stream vs. Batch: Choosing the right model for industrial data
- Pipeline architecture: From streams to batch
- DuckDB + Python: The perfect combo for lightweight analytics
- Key pitfalls along the way
- Limitations of this approach


You&#x27;ll walk away with:

- Ready-to-use patterns for TCP-based data pipelines
- Insights on when to avoid unnecessary cloud complexity
- Tips for building fast, reliable batch jobs on local infrastructure

Whether you process factory sensor data, machine logs, or legacy telemetry, this talk will give you modern tools to make your data streams actionable and efficient.</description>
            <class>PUBLIC</class>
            <status>CONFIRMED</status>
            <category>Talk (long)</category>
            <url>https://cfp.pydata.org/berlin2025/talk/ZXTLEW/</url>
            <location>B09</location>
            
            <attendee>Orell Garten</attendee>
            
        </vevent>
        
        <vevent>
            <method>PUBLISH</method>
            <uid>KPHH7H@@cfp.pydata.org</uid>
            <pentabarf:event-id></pentabarf:event-id>
            <pentabarf:event-slug>-KPHH7H</pentabarf:event-slug>
            <pentabarf:title>Training Specialized Language Models with Less Data: An End-to-End Practical Guide</pentabarf:title>
            <pentabarf:subtitle></pentabarf:subtitle>
            <pentabarf:language>en</pentabarf:language>
            <pentabarf:language-code>en</pentabarf:language-code>
            <dtstart>20250902T104000</dtstart>
            <dtend>20250902T111000</dtend>
            <duration>003000</duration>
            <summary>Training Specialized Language Models with Less Data: An End-to-End Practical Guide</summary>
            <description>Training effective language models typically involves two major bottlenecks: the need for vast amounts of labeled data and the engineering complexity of fine-tuning. This talk introduces a practical framework for addressing both, enabling teams to build small, domain-specialized language models (SLMs) that are deployable, secure, and cost-efficient—without needing massive labeled datasets.

SLMs are especially well-suited for focused tasks such as classification, function calling, or question answering, where full-scale LLMs are overkill. They are smaller, faster, and easier to deploy on local or mobile infrastructure—making them ideal for latency-sensitive, privacy-conscious, or resource-limited applications. However, fine-tuning them traditionally still requires tens of thousands of manually labeled examples.

Our approach uses synthetic data generation and validation techniques to drastically reduce the labeling burden. Leveraging large language models (LLMs) as “teacher models,” we generate and curate synthetic training data tailored to specific tasks. This data, combined with a handful of manually labeled examples and a clear task description, is then used to fine-tune SLMs (“student models”) that match or exceed the performance of larger models on the same narrow tasks.

We&#x27;ll walk through a detailed example focused on a real-life use case covering:
- Task scoping: How to define your model’s purpose and output space clearly.
- Synthetic data generation: Prompting LLMs to generate meaningful and diverse examples.
- Data validation: Techniques for filtering out poor-quality, duplicate, or malformed synthetic data.
- Model fine-tuning: How the student model is trained to emulate the teacher’s domain knowledge.
- Deployment: Delivering the model as binaries for use on internal infrastructure or edge devices.

We’ll also discuss key challenges teams face in adopting this approach—such as validation bottlenecks, overfitting on synthetic data, and the need for interpretable task definitions—and how we’ve addressed them in production environments.

This talk is targeted at data scientists, ML engineers, and tech leads who are looking for pragmatic strategies to bring specialized AI features into production without relying on API-based LLMs or manual annotation at scale. No prior knowledge of model distillation is required, though basic familiarity with supervised learning and model training will be helpful.

Attendees will leave with:
- A concrete workflow for training SLMs using synthetic data
- Insights into trade-offs between SLMs and LLMs
- Techniques for validating and curating LLM-generated data
- A better understanding of when and how to deploy small models effectively in production

This is not a theoretical talk. It is a field-tested approach grounded in real use cases, designed to empower small teams to build efficient, private, and reliable NLP systems.</description>
            <class>PUBLIC</class>
            <status>CONFIRMED</status>
            <category>Talk [Sponsored]</category>
            <url>https://cfp.pydata.org/berlin2025/talk/KPHH7H/</url>
            <location>B07-B08</location>
            
            <attendee>Jacek Golebiowski</attendee>
            
        </vevent>
        
        <vevent>
            <method>PUBLISH</method>
            <uid>CAUAZY@@cfp.pydata.org</uid>
            <pentabarf:event-id></pentabarf:event-id>
            <pentabarf:event-slug>-CAUAZY</pentabarf:event-slug>
            <pentabarf:title>Most AI Agents Are Useless. Let’s Fix That</pentabarf:title>
            <pentabarf:subtitle></pentabarf:subtitle>
            <pentabarf:language>en</pentabarf:language>
            <pentabarf:language-code>en</pentabarf:language-code>
            <dtstart>20250902T112000</dtstart>
            <dtend>20250902T115000</dtend>
            <duration>003000</duration>
            <summary>Most AI Agents Are Useless. Let’s Fix That</summary>
            <description>Let’s face it: most AI agents are glorified demos. They look flashy, but they’re brittle, hard to debug, and rarely make it into real products. Why? Because wiring an LLM to a few tools is easy. Engineering a robust, testable, and scalable system is hard.

This talk is for practitioners, data scientists, AI engineers, and developers who want to stop tinkering and start shipping. We’ll take a candid look at the common reasons agent systems fail and introduce practical patterns to fix them using Haystack, an open-source Python framework to build custom AI applications.

You’ll learn how to design agents that are:

- **Modular**, so they’re easy to extend and evolve
- **Observable**, so you can trace failures and understand the behavior
- **Maintainable**, so they don’t become one-off science projects

We’ll also cover advanced topics like multimodal inputs and Model Context Protocol (MCP) to push your agents into more capable territory.

Whether you’re just starting to explore agents or trying to tame an unruly prototype, you’ll leave with a clear, actionable blueprint to build something that’s not just smart, but also reliable.</description>
            <class>PUBLIC</class>
            <status>CONFIRMED</status>
            <category>Talk</category>
            <url>https://cfp.pydata.org/berlin2025/talk/CAUAZY/</url>
            <location>B07-B08</location>
            
            <attendee>Bilge Yücel</attendee>
            
        </vevent>
        
        <vevent>
            <method>PUBLISH</method>
            <uid>NUNXEV@@cfp.pydata.org</uid>
            <pentabarf:event-id></pentabarf:event-id>
            <pentabarf:event-slug>-NUNXEV</pentabarf:event-slug>
            <pentabarf:title>One API to Rule Them All? LiteLLM in Production</pentabarf:title>
            <pentabarf:subtitle></pentabarf:subtitle>
            <pentabarf:language>en</pentabarf:language>
            <pentabarf:language-code>en</pentabarf:language-code>
            <dtstart>20250902T120000</dtstart>
            <dtend>20250902T123000</dtend>
            <duration>003000</duration>
            <summary>One API to Rule Them All? LiteLLM in Production</summary>
            <description>Building a real-world LLM system often means juggling different providers, endpoints, and API quirks. LiteLLM promises a unified interface across model backends—but how well does it hold up in production?

In this talk, I’ll share how we integrated LiteLLM into a real-world system that includes budget usage tracking and other production concerns. From provider switching to budget handling, I’ll walk through the benefits we saw—and the challenges we hit. I’ll also touch on the limits of abstraction. You’ll get a practical look at where LiteLLM helped us and where it fell short.

**Key Takeaways**
- Understand how LiteLLM can be used to unify access to multiple LLM providers
- Learn how it fits into a real production pipeline (especially budget management and model load balancing)


**Target Audience**
- Developers and engineers working with LLMs in production
- Anyone curious about LiteLLM’s strengths and limitations in a real system</description>
            <class>PUBLIC</class>
            <status>CONFIRMED</status>
            <category>Talk</category>
            <url>https://cfp.pydata.org/berlin2025/talk/NUNXEV/</url>
            <location>B07-B08</location>
            
            <attendee>Alina Dallmann</attendee>
            
        </vevent>
        
        <vevent>
            <method>PUBLISH</method>
            <uid>BCGJQB@@cfp.pydata.org</uid>
            <pentabarf:event-id></pentabarf:event-id>
            <pentabarf:event-slug>-BCGJQB</pentabarf:event-slug>
            <pentabarf:title>Scaling Probabilistic Models with Variational Inference</pentabarf:title>
            <pentabarf:subtitle></pentabarf:subtitle>
            <pentabarf:language>en</pentabarf:language>
            <pentabarf:language-code>en</pentabarf:language-code>
            <dtstart>20250902T134000</dtstart>
            <dtend>20250902T141000</dtend>
            <duration>003000</duration>
            <summary>Scaling Probabilistic Models with Variational Inference</summary>
            <description>Probabilistic models have proven to be a great tool for solving business-critical problems in fields such as marketing, demand forecasting, and risk-based optimization. One of the biggest challenges is scaling these models to large data sets and efficiently utilizing modern computing power. 

This talk addresses the challenges of scaling probabilistic models using variational inference and similar methods. We will explain the core concepts of variational inference in an accessible way, avoiding heavy mathematics, and use practical examples with NumPyro and PyMC to demonstrate how to apply variational inference effectively, starting with simple models and then moving to custom forecasting models and neural network components. Additionally, we will cover diagnostics such as simulation-based calibration and coverage to ensure model reliability. Our discussion will also include strategies for scaling, including mini-batch optimization and distributed computing.</description>
            <class>PUBLIC</class>
            <status>CONFIRMED</status>
            <category>Talk</category>
            <url>https://cfp.pydata.org/berlin2025/talk/BCGJQB/</url>
            <location>B07-B08</location>
            
            <attendee>Dr. Juan Orduz</attendee>
            
        </vevent>
        
        <vevent>
            <method>PUBLISH</method>
            <uid>WGJJQN@@cfp.pydata.org</uid>
            <pentabarf:event-id></pentabarf:event-id>
            <pentabarf:event-slug>-WGJJQN</pentabarf:event-slug>
            <pentabarf:title>How We Automate Chaos: Agentic AI and Community Ops at PyCon DE &amp; PyData</pentabarf:title>
            <pentabarf:subtitle></pentabarf:subtitle>
            <pentabarf:language>en</pentabarf:language>
            <pentabarf:language-code>en</pentabarf:language-code>
            <dtstart>20250902T142000</dtstart>
            <dtend>20250902T145000</dtend>
            <duration>003000</duration>
            <summary>How We Automate Chaos: Agentic AI and Community Ops at PyCon DE &amp; PyData</summary>
            <description>Every year, PyCon DE &amp; PyData is run by a rotating crew of volunteers who build a full conference from scratch — in their spare time, with limited tools, shifting knowledge, and lots of coffee. It’s like launching a startup, dismantling it, and repeating from memory.

To survive (and stay sane), we’ve turned conference ops into a sandbox for automation — leaning on scripts, structured documentation, and increasingly, agentic AI systems. Think YAML files, GitHub Actions, custom bots, and LLM-powered assistants doing the boring stuff, so humans can focus on creativity and connection.

This talk is a no-fluff case study in what it actually takes to make automation — and especially AI agents — work in the wild:
 * How we went from chaotic Notion boards to reproducible workflows
 * How we use LLMs + APIs (GitHub, Google Drive, Pretalx, Pretix, …) to support speaker logistics, FAQs, the video app, video cuts, certificates of participation and schedule drafts
 * Why Pydantic, structure, and even simple scripts matter more than hype
 * And most importantly: why agents are useless without clear structure, validation, and context

We’ll show live examples, share the open tools we’ve built (and broken), and make the case that good community infrastructure is open-source-worthy. If you’re building tools for humans, this talk is for you.

Want to help? We’re actively looking for contributors, testers, and curious minds to build better community tech together — come chat after the talk or find us online.</description>
            <class>PUBLIC</class>
            <status>CONFIRMED</status>
            <category>Talk</category>
            <url>https://cfp.pydata.org/berlin2025/talk/WGJJQN/</url>
            <location>B07-B08</location>
            
            <attendee>Alexander CS Hendorf</attendee>
            
        </vevent>
        
        <vevent>
            <method>PUBLISH</method>
            <uid>KEJJSP@@cfp.pydata.org</uid>
            <pentabarf:event-id></pentabarf:event-id>
            <pentabarf:event-slug>-KEJJSP</pentabarf:event-slug>
            <pentabarf:title>Template-based web app and deployment pipeline at an enterprise-ready level on Azure</pentabarf:title>
            <pentabarf:subtitle></pentabarf:subtitle>
            <pentabarf:language>en</pentabarf:language>
            <pentabarf:language-code>en</pentabarf:language-code>
            <dtstart>20250902T155000</dtstart>
            <dtend>20250902T163500</dtend>
            <duration>004500</duration>
            <summary>Template-based web app and deployment pipeline at an enterprise-ready level on Azure</summary>
            <description>In many enterprise environments, deploying a proof-of-concept data app to the cloud remains frustratingly slow and manual. Early user feedback often depends on clunky screen shares or static screenshots. This talk shows how we transformed that process - automating everything from infrastructure provisioning to web app deployment - using a system of pipeline, Bicep, and Python templates. The result? Stakeholders can interact with a working Streamlit app within minutes of a commit, with no further manual setup required.

We take you with us on our journey from awkward beginnings to an elegant template-based setup in which every step of the configuration and deployment process is automated. All Azure resources are created without manual steps, and it takes only one bio-break from submitting your work to the repository until the business user can test it live. Along the way we share best practices and pitfalls we discovered, as well as how we structure our templates and repositories, both for the web app and for the deployment pipeline. At the end, we will deploy a new web app together and explore the workings of the system live.

While the concept will need adapting to other providers, you don&#x27;t need to use Azure to benefit from this talk - all cloud platforms share similar tools and challenges.

Detailed Outline:

1. Motivation (5 min)

- Why it&#x27;s hard to get user feedback early and why that is problematic
- Why it&#x27;s hard to get a real application running early
- What if we could automate app deployment and configuration, or how the NKD data science teams went from awkward to awesome

2. The app creation, deployment, and configuration process (12 min)

- Struggles and best practices
- Tools that help with consistency and automation
- Handling virtual environments across dev systems and the cloud
- Web app and pipeline repositories and templates

3. The pipeline (18 min)

- Structure of the stages
- Minimizing manual configuration with file parsing and Bicep
- Matching branch and target server
- Automated Azure resource creation using Azure CLI
- App authorization and authentication configuration with more Azure CLI
- Finally, the deployment

4. Showcase (5 min)

- What the system looks like when fully set up
- Deploying an app live

Key Takeaways:

- How to reduce app deployment time from days to minutes using automated templates
- Collaboration setup for small and medium-sized data teams
- Best practices for structuring pipelines and web apps for consistency, security, and scalability
- What not to do: key pitfalls we encountered and how we fixed them

Target audience:

Data or machine learning scientists or engineers in small or medium-sized teams who want to deploy web apps faster and more consistently. Attendees should be comfortable with Python and have basic familiarity with web apps or DevOps principles. While Azure users benefit most, no in-depth knowledge is required - concepts will be explained as we go.</description>
            <class>PUBLIC</class>
            <status>CONFIRMED</status>
            <category>Talk (long)</category>
            <url>https://cfp.pydata.org/berlin2025/talk/KEJJSP/</url>
            <location>B07-B08</location>
            
            <attendee>Johannes Schöck</attendee>
            
        </vevent>
        
        <vevent>
            <method>PUBLISH</method>
            <uid>DBL9PQ@@cfp.pydata.org</uid>
            <pentabarf:event-id></pentabarf:event-id>
            <pentabarf:event-slug>-DBL9PQ</pentabarf:event-slug>
            <pentabarf:title>The Importance and Elegance of Polars Expressions</pentabarf:title>
            <pentabarf:subtitle></pentabarf:subtitle>
            <pentabarf:language>en</pentabarf:language>
            <pentabarf:language-code>en</pentabarf:language-code>
            <dtstart>20250902T104000</dtstart>
            <dtend>20250902T111000</dtend>
            <duration>003000</duration>
            <summary>The Importance and Elegance of Polars Expressions</summary>
            <description>Polars has gained popularity for its speed, but what truly makes it stand out is its syntax, especially the use of expressions. The book Python Polars: The Definitive Guide defines an expression as &quot;a tree of operations that describe how to construct one or more Series.&quot; In this talk, we’ll demystify this concept, explaining how expressions make Polars an elegant tool for data manipulation.

We will cover:

- Why expressions are crucial in Polars
- A formal definition of an expression and what it means in practice
- Creating expressions from existing columns or other values
- Using expressions to select, filter, sort, and aggregate data
- Applying expressions for aggregate statistics, mathematical transformations, and handling missing values
- Combining expressions with operators, comparisons, and Boolean logic
- A comparison of idiomatic vs. non-idiomatic Polars code

By the end of this talk, you’ll understand how to leverage Polars expressions to write cleaner and more efficient data manipulation code.</description>
            <class>PUBLIC</class>
            <status>CONFIRMED</status>
            <category>Talk</category>
            <url>https://cfp.pydata.org/berlin2025/talk/DBL9PQ/</url>
            <location>B05-B06</location>
            
            <attendee>Jeroen Janssens</attendee>
            
        </vevent>
        
        <vevent>
            <method>PUBLISH</method>
            <uid>HUNUEB@@cfp.pydata.org</uid>
            <pentabarf:event-id></pentabarf:event-id>
            <pentabarf:event-slug>-HUNUEB</pentabarf:event-slug>
            <pentabarf:title>Causal Inference in Network Structures: Lessons learned From Financial Services</pentabarf:title>
            <pentabarf:subtitle></pentabarf:subtitle>
            <pentabarf:language>en</pentabarf:language>
            <pentabarf:language-code>en</pentabarf:language-code>
            <dtstart>20250902T112000</dtstart>
            <dtend>20250902T115000</dtend>
            <duration>003000</duration>
            <summary>Causal Inference in Network Structures: Lessons learned From Financial Services</summary>
            <description>Wealth planning is a service offered by financial institutions. The advice helps clients grow their wealth through investing. This talk focuses on measuring the true impact of these services on a network of individuals’ investments and securities. However, measuring this impact presents several practical challenges, which will be tackled in this talk:

   1) Controlled experiments are often impossible in practice, leaving only observational data available.
   
   2) Defining robust control groups is challenging when treatments are administered to individuals in relationship graphs at different times.

   3) Analysis must account for multiple outcomes with different modalities—securities (time-series) and investing (binary).

   4) The parallel-trend assumption doesn&#x27;t immediately hold.

   5) The confounding effect of market trends on outcomes needs to be corrected.</description>
            <class>PUBLIC</class>
            <status>CONFIRMED</status>
            <category>Talk</category>
            <url>https://cfp.pydata.org/berlin2025/talk/HUNUEB/</url>
            <location>B05-B06</location>
            
            <attendee>Danial Senejohnny</attendee>
            
        </vevent>
        
        <vevent>
            <method>PUBLISH</method>
            <uid>GPZPFP@@cfp.pydata.org</uid>
            <pentabarf:event-id></pentabarf:event-id>
            <pentabarf:event-slug>-GPZPFP</pentabarf:event-slug>
            <pentabarf:title>Building Reactive Data Apps with Shinylive and WebAssembly</pentabarf:title>
            <pentabarf:subtitle></pentabarf:subtitle>
            <pentabarf:language>en</pentabarf:language>
            <pentabarf:language-code>en</pentabarf:language-code>
            <dtstart>20250902T120000</dtstart>
            <dtend>20250902T123000</dtend>
            <duration>003000</duration>
            <summary>Building Reactive Data Apps with Shinylive and WebAssembly</summary>
            <description>In recent years, WebAssembly (Wasm) has opened new frontiers for delivering Python applications - enabling fully interactive, browser-native experiences without requiring a traditional server backend. This paradigm shift is particularly exciting for data scientists and developers looking to build lightweight, highly responsive data apps that can be deployed as static websites, reducing infrastructure complexity while improving user experience.

In this talk, I will walk through how to use Shinylive for Python, an emerging framework that combines reactive programming principles with the power of WebAssembly, to create rich data applications that run entirely in the browser. We’ll cover how Shinylive translates reactive code into client-side interactions, eliminating the need for round-trips to a Python server. I’ll also introduce techniques for integrating efficient local storage (via Apache Parquet) and show how optional FastAPI services can be layered on for hybrid architectures when needed.

This talk is intended for data scientists, machine learning engineers, and Python developers who are interested in building modern web applications without becoming full-time JavaScript engineers. Attendees will leave with practical techniques for building and deploying reactive data apps that run entirely in the browser.</description>
            <class>PUBLIC</class>
            <status>CONFIRMED</status>
            <category>Talk</category>
            <url>https://cfp.pydata.org/berlin2025/talk/GPZPFP/</url>
            <location>B05-B06</location>
            
            <attendee>Christoph Scheuch</attendee>
            
        </vevent>
        
        <vevent>
            <method>PUBLISH</method>
            <uid>YZ9BY7@@cfp.pydata.org</uid>
            <pentabarf:event-id></pentabarf:event-id>
            <pentabarf:event-slug>-YZ9BY7</pentabarf:event-slug>
            <pentabarf:title>Data science in containers: the good, the bad, and the ugly</pentabarf:title>
            <pentabarf:subtitle></pentabarf:subtitle>
            <pentabarf:language>en</pentabarf:language>
            <pentabarf:language-code>en</pentabarf:language-code>
            <dtstart>20250902T134000</dtstart>
            <dtend>20250902T141000</dtend>
            <duration>003000</duration>
            <summary>Data science in containers: the good, the bad, and the ugly</summary>
            <description>We&#x27;ll demonstrate how to switch versions DRY-style (without maintaining multiple Dockerfiles!), how to leverage newer techniques like BuildKit cache mounts, and discuss other important considerations like the use of Alpine with Python, progressive image loading, and model loading strategies.
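
The DRY version switch and the cache mount can be sketched in a single Dockerfile; this is a generic illustration under assumed file names, not the speaker&#x27;s material:

```dockerfile
# syntax=docker/dockerfile:1
# One Dockerfile, many Python versions: override at build time with
#   docker build --build-arg PYTHON_VERSION=3.11 .
ARG PYTHON_VERSION=3.12
FROM python:${PYTHON_VERSION}-slim

WORKDIR /app
COPY requirements.txt .

# BuildKit cache mount: the pip cache persists across builds
# without ending up inside the image layers.
RUN --mount=type=cache,target=/root/.cache/pip pip install -r requirements.txt

COPY . .
CMD ["python", "app.py"]
```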

Attendees will learn practical containerization techniques specifically tailored for data science workflows, with concrete examples using the Whisper model as our case study.</description>
            <class>PUBLIC</class>
            <status>CONFIRMED</status>
            <category>Talk</category>
            <url>https://cfp.pydata.org/berlin2025/talk/YZ9BY7/</url>
            <location>B05-B06</location>
            
            <attendee>Jérôme Petazzoni</attendee>
            
        </vevent>
        
        <vevent>
            <method>PUBLISH</method>
            <uid>YKFWKQ@@cfp.pydata.org</uid>
            <pentabarf:event-id></pentabarf:event-id>
            <pentabarf:event-slug>-YKFWKQ</pentabarf:event-slug>
            <pentabarf:title>Beyond Benchmarks: Practical Evaluation Strategies for Compound AI Systems</pentabarf:title>
            <pentabarf:subtitle></pentabarf:subtitle>
            <pentabarf:language>en</pentabarf:language>
            <pentabarf:language-code>en</pentabarf:language-code>
            <dtstart>20250902T142000</dtstart>
            <dtend>20250902T145000</dtend>
            <duration>003000</duration>
            <summary>Beyond Benchmarks: Practical Evaluation Strategies for Compound AI Systems</summary>
            <description>As large language models become integral to real-world applications, evaluating and improving their performance is a growing challenge. Generic benchmarks and simple metrics fail to adequately assess the domain-specific, multi-step reasoning required by compound AI pipelines like retrieval-augmented generation (RAG), multi-tool agents, or knowledge assistants. Moreover, manual evaluation of every step is infeasible at scale, while fully automated LLM-as-a-judge approaches lack critical domain insights.

In this talk, we will present a practical evaluation approach to enable continuous improvement of LLM-powered systems. It incorporates the following stages: 
- Automatic tracing: capturing input/output pairs across the pipeline to build an evaluation dataset.
- Expert feedback collection: working with subject matter experts and user interactions to assess correctness and identify failure points.
- Iterative improvement cycle: tuning the components and/or optimizing prompts.
- Degradation tests: turning feedback into automated evaluation tests - ranging from exact match checks to LLM-as-a-judge assertions - to guard against regressions.
- Continuous monitoring: using the growing evaluation dataset to validate the system as models, tools, or data sources evolve.

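The degradation-test stage can be sketched in plain Python. Everything below (the case format, the stubbed pipeline) is illustrative, not the presenters&#x27; actual tooling:

```python
# Expert-approved answers, collected via tracing, become exact-match
# regression checks that run on every pipeline change.
APPROVED_CASES = [
    {"question": "What is the refund window?", "expected": "30 days"},
    {"question": "Which plan includes SSO?", "expected": "Enterprise"},
]

def fake_pipeline(question):
    # Stand-in for the real LLM pipeline under test.
    answers = {
        "What is the refund window?": "30 days",
        "Which plan includes SSO?": "Enterprise",
    }
    return answers.get(question, "")

def run_degradation_tests(pipeline, cases):
    """Return the cases whose output no longer matches the approved answer."""
    failures = []
    for case in cases:
        got = pipeline(case["question"])
        if got.strip() != case["expected"]:
            failures.append((case["question"], got))
    return failures
```

An LLM-as-a-judge assertion would replace the exact-match comparison with a scoring call, but the harness shape stays the same.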
This framework ensures that LLM applications remain reliable and aligned with specific business needs over time.

Target audience: AI practitioners developing and maintaining LLM-based applications.

Attendees will learn strategies to:
- Build a human-in-the-loop evaluation process combining expert feedback and automated methods.
- Turn expert knowledge into automatic tests to guard against regressions.
- Use iterative improvement cycles to refine LLM pipelines over time.

Attendees are assumed to be familiar with LLMs and machine learning workflows; deep NLP expertise is not required.</description>
            <class>PUBLIC</class>
            <status>CONFIRMED</status>
            <category>Talk</category>
            <url>https://cfp.pydata.org/berlin2025/talk/YKFWKQ/</url>
            <location>B05-B06</location>
            
            <attendee>Iryna Kondrashchenko</attendee>
            
            <attendee>Oleh Kostromin</attendee>
            
        </vevent>
        
        <vevent>
            <method>PUBLISH</method>
            <uid>JEKYLT@@cfp.pydata.org</uid>
            <pentabarf:event-id></pentabarf:event-id>
            <pentabarf:event-slug>-JEKYLT</pentabarf:event-slug>
            <pentabarf:title>Navigating healthcare scientific knowledge: building AI agents for accurate biomedical data retrieval</pentabarf:title>
            <pentabarf:subtitle></pentabarf:subtitle>
            <pentabarf:language>en</pentabarf:language>
            <pentabarf:language-code>en</pentabarf:language-code>
            <dtstart>20250902T150000</dtstart>
            <dtend>20250902T153000</dtend>
            <duration>003000</duration>
            <summary>Navigating healthcare scientific knowledge: building AI agents for accurate biomedical data retrieval</summary>
            <description>The growth of scientific healthcare literature and publicly available biomedical databases has created many opportunities but also great challenges for researchers. While large amounts of biological data are now freely available, finding and connecting relevant information across disparate sources remains time-consuming and complex. LLM-powered tools offer promising solutions to this challenge, but implementing them in healthcare, where accuracy can impact patient outcomes, requires specialised approaches and careful design considerations.
This talk will share practical lessons and technical strategies to address hallucinations, complex domain-specific terminology, and source citations.

The presentation will be structured into three main sections:

1. The challenge of scientific data retrieval (5 mins)
    1. Overview of the current landscape of biological databases and scientific literature
    2. Common challenges researchers face when searching for information across multiple sources
    3. Specificities of healthcare domain where accuracy is critical

2. Technical architecture for LLM-powered scientific search (15 mins)
    1. Reliable approaches to querying structured databases using natural language
    2. Vector database implementation for semantic search across scientific literature
    3. Strategies to ensure retrieved information is properly attributed to sources
    4. Real-world performance considerations: balancing accuracy, latency, and cost

3. Lessons learned and future directions (5 mins)
    1. Performance metrics and user feedback from academic researchers
    2. Challenges and limitations of current approaches
    3. Future directions for AI-assisted scientific discovery

Throughout the talk, I&#x27;ll provide concrete examples of how these technologies can be applied to real research questions in a production environment, demonstrating the practical value of AI agents in accelerating scientific discovery.

Intended audience: This talk is designed for data scientists, ML / Software engineers, bioinformaticians, and researchers interested in leveraging AI for scientific data retrieval and analysis. 
While examples will focus on biological data, the principles and techniques discussed are applicable across scientific domains. Basic familiarity with Python and AI concepts will be helpful but is not required.</description>
            <class>PUBLIC</class>
            <status>CONFIRMED</status>
            <category>Talk</category>
            <url>https://cfp.pydata.org/berlin2025/talk/JEKYLT/</url>
            <location>B05-B06</location>
            
            <attendee>Laura Dumont</attendee>
            
        </vevent>
        
        <vevent>
            <method>PUBLISH</method>
            <uid>3LDDAB@@cfp.pydata.org</uid>
            <pentabarf:event-id></pentabarf:event-id>
            <pentabarf:event-slug>-3LDDAB</pentabarf:event-slug>
            <pentabarf:title>From Manual to LLMs: Scaling Product Categorization</pentabarf:title>
            <pentabarf:subtitle></pentabarf:subtitle>
            <pentabarf:language>en</pentabarf:language>
            <pentabarf:language-code>en</pentabarf:language-code>
            <dtstart>20250902T155000</dtstart>
            <dtend>20250902T163500</dtend>
            <duration>004500</duration>
            <summary>From Manual to LLMs: Scaling Product Categorization</summary>
            <description>**Target Audience:** Data scientists, AI/ML engineers, and practitioners interested in applying large language models (LLMs) / generative AI to solve real-world classification problems at scale. Attendees should have a foundational understanding of machine learning concepts and familiarity with the Python data science stack. Exposure to vector embeddings or LLM APIs is helpful but not mandatory.

**Takeaway:** Attendees will gain practical insights and learn best practices for building, debugging, scaling, and productionizing a complex, multi-step generative AI system for large-scale product categorization. They will understand the evolution from traditional methods to LLMs, learn specific techniques for prompt engineering, batch processing, cost optimization with models like OpenAI&#x27;s, and see the tangible business impact of such a system.

**Detailed Outline:**

This talk chronicles our journey tackling a challenging problem: accurately categorizing hundreds of thousands of diverse products into a fine-grained taxonomy of over 1,000 categories. We&#x27;ll share our evolution from initial manual and rule-based systems to a sophisticated, production-ready Generative AI pipeline.



* **Part 1: The Challenge &amp; Initial Approaches (10 minutes)**
    * Introduction to the business need for accurate product categorization at scale.
    * Overview of the limitations encountered with traditional methods:
        * Manual Curation: Slow, expensive, inconsistent, and impossible to scale.
        * Rule-Based Systems: Brittle, hard to maintain, and unable to handle nuances or new product types.
        * Fine-tuned Semantic Models: An improvement, but struggled with zero-shot generalization to new categories and required significant labeled data and retraining.
* **Part 2: Entering the GenAI Era - Iterations &amp; Lessons Learned (10 minutes)**
    * Our initial exploration using LLMs for categorization, what worked, and what failed.
    * **Developing the Prompt:** We&#x27;ll dive deep into the iterative process of prompt engineering for this complex multi-label, hierarchical classification task. We&#x27;ll show examples of early prompts, their failure modes (e.g., inconsistent output format, hallucinated categories, difficulty handling multiple classification signals), and the refinements that led to more reliable results. We will discuss techniques for achieving structured output (e.g., JSON) from the LLM.
    * **Early Scaling Issues:** Discussing the pitfalls of naive API usage, latency problems, and prohibitive costs when dealing with large volumes.
* **Part 3: Building a Robust, Scalable GenAI Pipeline (10 minutes)**
    * **The Hybrid Approach:** Detailing our successful multi-step architecture that combines the strengths of semantic embeddings for efficient candidate retrieval/filtering and LLMs (specifically leveraging OpenAI models) for nuanced final categorization.
    * **Productionization Strategies:**
        * *Batching:* Implementing efficient batch processing using asynchronous requests and the OpenAI Batch API to drastically reduce latency and cost.
        * *Cost vs. Accuracy:* Strategies for selecting the right model based on complexity and cost constraints.
        * *Semantic Similarity &amp; Early Stopping:* Using vector similarity to intelligently prune the search space, avoiding the need to evaluate every product against all 1,000+ categories with the LLM, thus significantly optimizing cost and throughput.
        * *Automation &amp; Monitoring*: How we process updates of categories and products automatically in PySpark and monitor that the live system works as expected.
* **Part 4: Measuring Impact &amp; Looking Ahead (10 minutes)**
    * Presenting the results: Showcasing the significant improvements in categorization accuracy and coverage compared to previous methods, with illustrative examples of challenging products correctly categorized by the GenAI system.
    * Discussing the tangible business value derived, as measured in A/B tests.
    * Briefly touching upon ongoing work and future directions.

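As a rough illustration of the similarity-based pruning idea, the following stdlib-only sketch shortlists candidate categories by cosine similarity before any LLM call. The toy 2-d vectors stand in for real model embeddings, and all names are invented:

```python
import math

def cosine(u, v):
    # Cosine similarity between two equal-length vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def shortlist(product_vec, category_vecs, top_k=3):
    """Keep only the top_k most similar categories; only these reach the LLM."""
    scored = sorted(
        category_vecs.items(),
        key=lambda kv: cosine(product_vec, kv[1]),
        reverse=True,
    )
    return [name for name, _ in scored[:top_k]]

# Toy embeddings standing in for real model output.
categories = {"shoes": [1.0, 0.0], "laptops": [0.0, 1.0], "sandals": [0.9, 0.1]}
candidates = shortlist([1.0, 0.05], categories, top_k=2)
```

With 1,000+ categories, pruning to a handful of candidates is what keeps per-product LLM cost bounded.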
This presentation will focus on the practical application and engineering challenges, sharing reusable techniques and hard-won lessons applicable to anyone looking to leverage the power of generative AI for large-scale, real-world problems. We aim to provide a transparent account of not just the successes, but also the crucial learnings from failures encountered along the way.</description>
            <class>PUBLIC</class>
            <status>CONFIRMED</status>
            <category>Talk (long)</category>
            <url>https://cfp.pydata.org/berlin2025/talk/3LDDAB/</url>
            <location>B05-B06</location>
            
            <attendee>Giampaolo Casolla</attendee>
            
            <attendee>Ansgar Grüne</attendee>
            
        </vevent>
        
        <vevent>
            <method>PUBLISH</method>
            <uid>C3MGDN@@cfp.pydata.org</uid>
            <pentabarf:event-id></pentabarf:event-id>
            <pentabarf:event-slug>-C3MGDN</pentabarf:event-slug>
            <pentabarf:title>Maintainers of the Future: Code, Culture, and Everything After</pentabarf:title>
            <pentabarf:subtitle></pentabarf:subtitle>
            <pentabarf:language>en</pentabarf:language>
            <pentabarf:language-code>en</pentabarf:language-code>
            <dtstart>20250903T091000</dtstart>
            <dtend>20250903T101000</dtend>
            <duration>010000</duration>
            <summary>Maintainers of the Future: Code, Culture, and Everything After</summary>
            <description>This talk invites the audience into a collective reflection on the state of tech today — and a reimagining of the futures we want to build. We’ll explore how small, mission-driven teams can use AI and automation to scale impact while centering their values, and why the work of maintenance — often invisible and undervalued — is foundational to responsible innovation.

Drawing from my experience as a software engineer at a mission-driven company, and as an open-source community leader, I’ll unpack the challenges of long-term technical work: invisible labor, ethical drift, burnout, and the quiet leadership of those who stay. In a world obsessed with velocity and dominance, this is a talk about resilience — and why the future belongs to those willing to maintain it as a radical act of shaping what comes next.</description>
            <class>PUBLIC</class>
            <status>CONFIRMED</status>
            <category>Keynote</category>
            <url>https://cfp.pydata.org/berlin2025/talk/C3MGDN/</url>
            <location>Kuppelsaal</location>
            
            <attendee>Jessica Greene</attendee>
            
        </vevent>
        
        <vevent>
            <method>PUBLISH</method>
            <uid>M3RVNA@@cfp.pydata.org</uid>
            <pentabarf:event-id></pentabarf:event-id>
            <pentabarf:event-slug>-M3RVNA</pentabarf:event-slug>
            <pentabarf:title>Closing Session</pentabarf:title>
            <pentabarf:subtitle></pentabarf:subtitle>
            <pentabarf:language>en</pentabarf:language>
            <pentabarf:language-code>en</pentabarf:language-code>
            <dtstart>20250903T151000</dtstart>
            <dtend>20250903T152500</dtend>
            <duration>001500</duration>
            <summary>Closing Session</summary>
            <description>Closing Session</description>
            <class>PUBLIC</class>
            <status>CONFIRMED</status>
            <category>Plenary Session [Organizers]</category>
            <url>https://cfp.pydata.org/berlin2025/talk/M3RVNA/</url>
            <location>Kuppelsaal</location>
            
        </vevent>
        
        <vevent>
            <method>PUBLISH</method>
            <uid>GZUXGZ@@cfp.pydata.org</uid>
            <pentabarf:event-id></pentabarf:event-id>
            <pentabarf:event-slug>-GZUXGZ</pentabarf:event-slug>
            <pentabarf:title>Building an AI Agent for Natural Language to SQL Query Execution on Live Databases</pentabarf:title>
            <pentabarf:subtitle></pentabarf:subtitle>
            <pentabarf:language>en</pentabarf:language>
            <pentabarf:language-code>en</pentabarf:language-code>
            <dtstart>20250903T104000</dtstart>
            <dtend>20250903T121000</dtend>
            <duration>013000</duration>
            <summary>Building an AI Agent for Natural Language to SQL Query Execution on Live Databases</summary>
            <description>### Overview

Natural‑language interfaces unlock database insights for non‑technical users. This tutorial provides a practical implementation for building these systems reliably and effectively. 

Participants will build an AI agent system that can:

1. Route intelligently between SQL generation and ReAct chat agent workflows
2. Ingest and understand database schemas with domain knowledge
3. Retrieve relevant context and similar query examples using RAG with vector similarity
4. Generate accurate SQL with validation and safety guardrails
5. Execute queries safely with human-in-the-loop approval
6. Present results in an understandable format
7. Track costs and monitor performance using LangSmith
8. Manage session-based memory and conversation context

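As a flavor of what the validation and safety guardrails (steps 4 and 5) can look like, here is a minimal read-only check in plain Python. It is an assumption for this sketch, not code from the tutorial:

```python
import re

# Illustrative guardrail: reject generated SQL that could mutate state.
FORBIDDEN = ("insert", "update", "delete", "drop", "alter", "truncate", "grant")

def is_safe_select(sql):
    """Allow a single read-only statement; reject anything that mutates state."""
    statements = [s for s in sql.strip().rstrip(";").split(";") if s.strip()]
    if len(statements) != 1:
        return False  # no stacked statements
    first_word = statements[0].strip().split()[0].lower()
    if first_word not in ("select", "with"):
        return False
    lowered = statements[0].lower()
    return not any(re.search(r"\b" + kw + r"\b", lowered) for kw in FORBIDDEN)
```

A production system would parse the SQL properly and pair this with human-in-the-loop approval, but the shape of the check is the same.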
We&#x27;ll use the Kaggle dataset &quot;[Brazilian E-Commerce dataset by Olist](https://www.kaggle.com/datasets/olistbr/brazilian-ecommerce)&quot; as our working example, demonstrating how to handle multiple tables across two schemas with complex relationships. This dataset will be hosted on an EC2 AWS instance for live interaction during the tutorial.

This tutorial addresses real-world database complexity with production-grade considerations. Participants will start from a repository with backbone code and implement the key components during the session. By the end, attendees will have a working system they can adapt to their own datasets.

### Tools and Frameworks
This tutorial will leverage modern tools and frameworks for efficient development:

**AI and Agent Frameworks:**
- LangChain for agent components and LLM interactions
- LangGraph for agent orchestration and workflow management
- LangSmith for comprehensive cost tracking and monitoring
- OpenAI models with examples of alternatives

**Database and Vector Store:**
- SQLAlchemy for database interactions and schema retrieval
- PostgreSQL as the database engine for the live dataset
- PGVector for similarity-based query retrieval

**Development:**
- YAML for configuration management
- `pyproject.toml` for standardized project configuration
- uv for reliable package management and Ruff for code formatting/linting</description>
            <class>PUBLIC</class>
            <status>CONFIRMED</status>
            <category>Tutorial</category>
            <url>https://cfp.pydata.org/berlin2025/talk/GZUXGZ/</url>
            <location>B09</location>
            
            <attendee>Cainã Max Couto da Silva</attendee>
            
        </vevent>
        
        <vevent>
            <method>PUBLISH</method>
            <uid>B3STGX@@cfp.pydata.org</uid>
            <pentabarf:event-id></pentabarf:event-id>
            <pentabarf:event-slug>-B3STGX</pentabarf:event-slug>
            <pentabarf:title>See only what you are allowed to see: Fine-Grained Authorization</pentabarf:title>
            <pentabarf:subtitle></pentabarf:subtitle>
            <pentabarf:language>en</pentabarf:language>
            <pentabarf:language-code>en</pentabarf:language-code>
            <dtstart>20250903T134000</dtstart>
            <dtend>20250903T151000</dtend>
            <duration>013000</duration>
            <summary>See only what you are allowed to see: Fine-Grained Authorization</summary>
            <description>This tutorial provides a practical, hands-on introduction to implementing Fine-Grained Authorization (FGA) for data-intensive applications using the open-source tool OpenFGA. As data platforms evolve and regulatory requirements become stricter, controlling access at a granular level – perhaps even row-level in a database context – becomes essential. Role-Based Access Control (RBAC), while common, often struggles to meet these complex needs, leading to insufficient flexibility or administrative overhead.
We will introduce the concept of Relationship-Based Access Control (ReBAC), the authorization paradigm powering systems like Google&#x27;s Zanzibar and OpenFGA. You&#x27;ll learn how ReBAC defines permissions based on the relationships between users and objects (e.g., &quot;Alice is a viewer of Document &#x27;report_Q3&#x27;&quot;), enabling highly flexible and scalable access control logic.
The core of the tutorial will be dedicated to practical implementation. We will guide attendees through:
1. Setting up a local OpenFGA instance (e.g., using Docker).
2. Defining an authorization model using OpenFGA&#x27;s Domain Specific Language (DSL) to represent resources, users, and the relationships between them. We will use a simplified data access scenario as our example, potentially inspired by challenges faced in research or data collaboration platforms.
3. Writing and managing relationship tuples in OpenFGA.
4. Using the OpenFGA Python SDK to connect your application logic to the authorization engine.
5. Exploring strategies for integrating this with application backend code and potentially addressing concepts like enforcing row-level permissions.

Attendees will follow along with live coding examples and complete exercises designed to solidify their understanding and build confidence in applying FGA principles with OpenFGA. By the end of the 90 minutes, you will have a foundational understanding of FGA/ReBAC and the practical skills to start integrating OpenFGA into your own projects. The tutorial materials, including code examples and setup instructions, will be provided via a GitHub repository.</description>
            <class>PUBLIC</class>
            <status>CONFIRMED</status>
            <category>Tutorial</category>
            <url>https://cfp.pydata.org/berlin2025/talk/B3STGX/</url>
            <location>B09</location>
            
            <attendee>Maria Knorps</attendee>
            
        </vevent>
        
        <vevent>
            <method>PUBLISH</method>
            <uid>HKMYHY@@cfp.pydata.org</uid>
            <pentabarf:event-id></pentabarf:event-id>
            <pentabarf:event-slug>-HKMYHY</pentabarf:event-slug>
            <pentabarf:title>Bye-Bye Query Spaghetti: Write Queries You&#x27;ll Actually Understand Using Pipelined SQL Syntax</pentabarf:title>
            <pentabarf:subtitle></pentabarf:subtitle>
            <pentabarf:language>en</pentabarf:language>
            <pentabarf:language-code>en</pentabarf:language-code>
            <dtstart>20250903T104000</dtstart>
            <dtend>20250903T111000</dtend>
            <duration>003000</duration>
            <summary>Bye-Bye Query Spaghetti: Write Queries You&#x27;ll Actually Understand Using Pipelined SQL Syntax</summary>
            <description>This session introduces Pipelined SQL, an alternative syntax for writing complex data queries as a clear, sequential flow of manageable transformations within a single query.

Traditional SQL combines filtering (WHERE), aggregation (GROUP BY), and projection (SELECT expressions) within a single, monolithic block. This can make it challenging to discern individual data transformations or modify one aspect without impacting others. Pipelined SQL, in contrast, encourages building queries like an assembly line. You&#x27;ll learn to structure your query logic so that each step performs a specific transformation and cleanly passes its result to the next. This pipelined approach, moving away from deeply nested subqueries or sprawling Common Table Expressions (CTEs), produces queries that are easier to read, because the logic can be followed from start to finish. As an added benefit, the resulting code is simpler to debug and easy to extend with additional transformation steps.


The talk will explain the core concepts of Pipelined SQL, how it differs from traditional SQL, and what its main advantages are. Native support for pipelined syntax is steadily growing across many modern databases, query engines and cloud data warehouses. We will explore the landscape of emerging dialects and identify which platforms currently offer native support or extensions for this powerful syntax. The session also covers a range of open-source tools that can compile such pipelined query code into any traditional SQL dialect, making this approach suitable for almost any platform.

Through practical, real-world examples using BigQuery&#x27;s pipe syntax, you&#x27;ll see side-by-side comparisons demonstrating how Pipelined SQL can drastically reduce complexity and improve clarity for common data manipulation tasks. Prepare for genuine &#x27;a-ha!&#x27; moments as you discover how Pipelined SQL offers refreshingly simple approaches to tasks that usually require convoluted traditional SQL.

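To make the comparison concrete, here is a hedged sketch (table and column names are invented for illustration) of the same aggregation written in traditional SQL and in BigQuery's pipe syntax:

```sql
-- Traditional SQL: filter, aggregation, and ordering interleaved in one block
SELECT customer_id, SUM(amount) AS total
FROM orders
WHERE order_date >= '2025-01-01'
GROUP BY customer_id
ORDER BY total DESC;

-- Pipe syntax: one transformation per step, read top to bottom
FROM orders
|> WHERE order_date >= '2025-01-01'
|> AGGREGATE SUM(amount) AS total GROUP BY customer_id
|> ORDER BY total DESC;
```

Each `|>` step consumes the previous step's result table, so a new transformation can be appended without restructuring what came before.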
This session is ideal for data analysts, scientists, engineers, and anyone with basic SQL knowledge who wants to write cleaner, more robust, and more maintainable queries. You&#x27;ll leave with a solid understanding of Pipelined SQL&#x27;s benefits and practical knowledge to start simplifying your own SQL workflows.</description>
            <class>PUBLIC</class>
            <status>CONFIRMED</status>
            <category>Talk</category>
            <url>https://cfp.pydata.org/berlin2025/talk/HKMYHY/</url>
            <location>B07-B08</location>
            
            <attendee>Tobias Lampert</attendee>
            
        </vevent>
        
        <vevent>
            <method>PUBLISH</method>
            <uid>GQBX3J@@cfp.pydata.org</uid>
            <pentabarf:event-id></pentabarf:event-id>
            <pentabarf:event-slug>-GQBX3J</pentabarf:event-slug>
            <pentabarf:title>Docling: Get your documents ready for gen AI</pentabarf:title>
            <pentabarf:subtitle></pentabarf:subtitle>
            <pentabarf:language>en</pentabarf:language>
            <pentabarf:language-code>en</pentabarf:language-code>
            <dtstart>20250903T112000</dtstart>
            <dtend>20250903T115000</dtend>
            <duration>003000</duration>
            <summary>Docling: Get your documents ready for gen AI</summary>
            <description>Docling, an open source package, is rapidly becoming the de facto standard for document parsing and export in the Python community. Having earned close to 30,000 GitHub stars in less than one year, it is now part of the LF AI &amp; Data Foundation and is redefining document AI with its ease and speed of use. In this session, we’ll introduce Docling and its features, including:

- Support for a wide array of formats—such as PDFs, DOCX, PPTX, HTML, images, and Markdown—and easy conversion to structured Markdown or JSON.
- Advanced document understanding that captures intricate page layouts, reading order, and table structures—ideal for complex analysis.
- Integration of the DoclingDocument format with popular AI frameworks—such as LlamaIndex, LangChain, and LlamaStack—for retrieval-augmented generation (RAG) and QA applications.
- Optical character recognition (OCR) support for scanned documents.
- Support for visual language models like SmolDocling, created in collaboration with Hugging Face.
- A user-friendly command line interface (CLI) and MCP connectors for developers.
- Using Docling as a service and at scale by deploying your own docling-serve instance.</description>
            <class>PUBLIC</class>
            <status>CONFIRMED</status>
            <category>Talk</category>
            <url>https://cfp.pydata.org/berlin2025/talk/GQBX3J/</url>
            <location>B07-B08</location>
            
            <attendee>Michele Dolfi</attendee>
            
            <attendee>Christoph Auer</attendee>
            
        </vevent>
        
        <vevent>
            <method>PUBLISH</method>
            <uid>PPAYDV@@cfp.pydata.org</uid>
            <pentabarf:event-id></pentabarf:event-id>
            <pentabarf:event-slug>-PPAYDV</pentabarf:event-slug>
            <pentabarf:title>Better docs, happier users: What we learned applying Diataxis to HoloViz libraries</pentabarf:title>
            <pentabarf:subtitle></pentabarf:subtitle>
            <pentabarf:language>en</pentabarf:language>
            <pentabarf:language-code>en</pentabarf:language-code>
            <dtstart>20250903T120000</dtstart>
            <dtend>20250903T123000</dtend>
            <duration>003000</duration>
            <summary>Better docs, happier users: What we learned applying Diataxis to HoloViz libraries</summary>
            <description>Good documentation turns users into contributors — but achieving it requires more than good intentions. This talk shares the journey of applying the Diataxis framework to improve two open-source Python libraries from the HoloViz ecosystem: Panel and hvPlot. We’ll start with a short introduction to Diataxis (its four documentation types: tutorials, how-to guides, explanations, and references), then briefly present the libraries we worked on and their documentation challenges.

The heart of the talk focuses on practical lessons learned: how we mapped existing content into the Diataxis structure, handled content gaps and redundancies, engaged with the user community, and evolved our approach over time. We’ll also discuss what we would do differently if we started again.

The goal is to give attendees a realistic, hands-on perspective on adopting Diataxis — including both its benefits and its challenges.</description>
            <class>PUBLIC</class>
            <status>CONFIRMED</status>
            <category>Talk</category>
            <url>https://cfp.pydata.org/berlin2025/talk/PPAYDV/</url>
            <location>B07-B08</location>
            
            <attendee>Maxime Liquet</attendee>
            
        </vevent>
        
        <vevent>
            <method>PUBLISH</method>
            <uid>SCQE8H@@cfp.pydata.org</uid>
            <pentabarf:event-id></pentabarf:event-id>
            <pentabarf:event-slug>-SCQE8H</pentabarf:event-slug>
            <pentabarf:title>Spot the difference: 🕵️ using foundation models to monitor for change with satellite imagery 🛰️</pentabarf:title>
            <pentabarf:subtitle></pentabarf:subtitle>
            <pentabarf:language>en</pentabarf:language>
            <pentabarf:language-code>en</pentabarf:language-code>
            <dtstart>20250903T134000</dtstart>
            <dtend>20250903T141000</dtend>
            <duration>003000</duration>
            <summary>Spot the difference: 🕵️ using foundation models to monitor for change with satellite imagery 🛰️</summary>
            <description>Oil and gas pipelines are usually buried around 1.5 meters underground, making them vulnerable to human activity or natural processes like erosion. Pipeline operators need to perform regular checks to ensure the integrity of their infrastructure. Very High Resolution (VHR) satellite images, with ground sampling distances of less than 1 meter, provide an interesting solution to this problem, allowing for large-scale monitoring and regular revisit rates.

Spotting changes is far from simple, as one needs to distinguish between relevant changes (such as construction activity) and irrelevant ones, such as shadows, seasonal variation, or changes due to viewing angles.

Geospatial foundation models, trained on vast collections of satellite imagery from across the globe, offer enhanced generalisation capabilities while requiring relatively few labels to achieve powerful performance. This global-scale pretraining enables these models to develop robust feature representations that transfer effectively to new geographic regions and tasks.</description>
            <class>PUBLIC</class>
            <status>CONFIRMED</status>
            <category>Talk</category>
            <url>https://cfp.pydata.org/berlin2025/talk/SCQE8H/</url>
            <location>B07-B08</location>
            
            <attendee>Ferdinand Schenck</attendee>
            
        </vevent>
        
        <vevent>
            <method>PUBLISH</method>
            <uid>RM8CNV@@cfp.pydata.org</uid>
            <pentabarf:event-id></pentabarf:event-id>
            <pentabarf:event-slug>-RM8CNV</pentabarf:event-slug>
            <pentabarf:title>Kubeflow pipelines meet uv</pentabarf:title>
            <pentabarf:subtitle></pentabarf:subtitle>
            <pentabarf:language>en</pentabarf:language>
            <pentabarf:language-code>en</pentabarf:language-code>
            <dtstart>20250903T142000</dtstart>
            <dtend>20250903T145000</dtend>
            <duration>003000</duration>
            <summary>Kubeflow pipelines meet uv</summary>
            <description>In this demo, you will learn how to set up and locally run a Kubeflow pipeline that:

- adheres to the standard pyproject.toml format
- keeps a consistent Python version and dependencies across components
- manages the dependencies of all components at once, including a lockfile

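As a hedged sketch of what such a setup might start from (the project name and version pins are assumptions, not the demo's actual configuration), a single pyproject.toml can pin the Python version and the dependencies shared by all components:

```toml
[project]
name = "my-kfp-pipeline"        # hypothetical project name
requires-python = ">=3.11"      # one Python version for every component
dependencies = [
    "kfp>=2.7",                 # Kubeflow Pipelines SDK
]
```

Running `uv lock` against a file like this produces the single lockfile that keeps every component's dependencies consistent.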
We will discuss how and why this enhanced setup can improve pipeline and dependency maintainability for systems running in production, while still taking advantage of the Kubeflow API flexibility and features.</description>
            <class>PUBLIC</class>
            <status>CONFIRMED</status>
            <category>Talk</category>
            <url>https://cfp.pydata.org/berlin2025/talk/RM8CNV/</url>
            <location>B07-B08</location>
            
            <attendee>Fabrizio Damicelli</attendee>
            
        </vevent>
        
        <vevent>
            <method>PUBLISH</method>
            <uid>KKWBKK@@cfp.pydata.org</uid>
            <pentabarf:event-id></pentabarf:event-id>
            <pentabarf:event-slug>-KKWBKK</pentabarf:event-slug>
            <pentabarf:title>Edge of Intelligence: The State of AI in Browsers</pentabarf:title>
            <pentabarf:subtitle></pentabarf:subtitle>
            <pentabarf:language>en</pentabarf:language>
            <pentabarf:language-code>en</pentabarf:language-code>
            <dtstart>20250903T104000</dtstart>
            <dtend>20250903T111000</dtend>
            <duration>003000</duration>
            <summary>Edge of Intelligence: The State of AI in Browsers</summary>
            <description>The current AI hype runs on API calls, GPU clusters, and costly infrastructure. But what if we could break free from these constraints and run our models directly in the consumer&#x27;s browser?

Imagine a world where AI development is more reliable, cheaper, and more secure. In this talk, we&#x27;ll explore the current state of WebAI, including the latest developments, challenges, and opportunities. We&#x27;ll dive into the libraries, tools, and technologies that make it possible to run AI models in the browser, such as WebAssembly, WebGPU, and ONNX. We&#x27;ll discuss how these technologies enable fast and efficient execution of AI models, and how they relate to Python.

After the talk, you&#x27;ll have a clear understanding of how to bring AI to the browser and unlock new possibilities for your applications. Join us to learn how to harness the power of AI and make it more accessible for everyone.</description>
            <class>PUBLIC</class>
            <status>CONFIRMED</status>
            <category>Talk</category>
            <url>https://cfp.pydata.org/berlin2025/talk/KKWBKK/</url>
            <location>B05-B06</location>
            
            <attendee>Johannes Kolbe</attendee>
            
        </vevent>
        
        <vevent>
            <method>PUBLISH</method>
            <uid>GKFB3J@@cfp.pydata.org</uid>
            <pentabarf:event-id></pentabarf:event-id>
            <pentabarf:event-slug>-GKFB3J</pentabarf:event-slug>
            <pentabarf:title>How Digital David Wins Against Data Goliaths</pentabarf:title>
            <pentabarf:subtitle></pentabarf:subtitle>
            <pentabarf:language>en</pentabarf:language>
            <pentabarf:language-code>en</pentabarf:language-code>
            <dtstart>20250903T112000</dtstart>
            <dtend>20250903T115000</dtend>
            <duration>003000</duration>
            <summary>How Digital David Wins Against Data Goliaths</summary>
            <description>After the era of Big Oil, we now live in the age of Big Data. Whoever controls the data, controls the world. Large tech companies lure users with free digital services - email, messaging, and social platforms - but the hidden cost is steep: loss of privacy, autonomy, and data sovereignty. While open-source solutions offer alternatives, their adoption often remains limited to IT specialists. In a world where convenience has become a subtle form of control, the pressing question emerges: Why hasn’t a broader movement for digital freedom taken hold, and is there a viable path forward?

This talk introduces an innovative business model, supported by a network of digital activists who form a collective force for protecting humanity, enabling digitally aware users to reclaim control over their data. By combining innovative tools, thoughtful practices, and forward-looking approaches, we’ll show how digital gurus can become “Digital Davids,” standing up to the Data Goliaths and shaping a more sovereign digital future for their communities.
            <class>PUBLIC</class>
            <status>CONFIRMED</status>
            <category>Talk</category>
            <url>https://cfp.pydata.org/berlin2025/talk/GKFB3J/</url>
            <location>B05-B06</location>
            
            <attendee>Pawel Herman</attendee>
            
        </vevent>
        
        <vevent>
            <method>PUBLISH</method>
            <uid>XE9F7X@@cfp.pydata.org</uid>
            <pentabarf:event-id></pentabarf:event-id>
            <pentabarf:event-slug>-XE9F7X</pentabarf:event-slug>
            <pentabarf:title>Flying Beyond Keywords: Our Aviation Semantic Search Journey</pentabarf:title>
            <pentabarf:subtitle></pentabarf:subtitle>
            <pentabarf:language>en</pentabarf:language>
            <pentabarf:language-code>en</pentabarf:language-code>
            <dtstart>20250903T120000</dtstart>
            <dtend>20250903T123000</dtend>
            <duration>003000</duration>
            <summary>Flying Beyond Keywords: Our Aviation Semantic Search Journey</summary>
            <description>In aviation, search is anything but straightforward. Reports are written by humans—pilots, cabin crew, engineers—each using their own mix of abbreviations, technical jargon, and everyday language. Standard keyword search often falls short. You might miss critical safety signals because a pilot wrote “navigation didn’t work” instead of “gps jamming,” or used a shorthand unknown to engineers on the ground. What we needed was semantic search—something that understands meaning, not just matches strings.

But we started simple with a plain Postgres setup. Our goal: build something that works. We began with pgvector and basic sentence embeddings to enable semantic search inside Postgres. It was scrappy, but it gave us just enough traction to prove the value of semantic search in this domain.
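A minimal sketch of that first pgvector setup (the schema and embedding dimension are assumptions; 384 is typical for small sentence-embedding models):

```sql
CREATE EXTENSION IF NOT EXISTS vector;

CREATE TABLE reports (
    id        bigserial PRIMARY KEY,
    body      text,
    embedding vector(384)   -- one sentence embedding per report
);

-- Nearest reports to a query embedding, by cosine distance;
-- :query_embedding is the embedded search string, supplied by the application
SELECT id, body
FROM reports
ORDER BY cosine_distance(embedding, :query_embedding)
LIMIT 5;
```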

Then things took off. As complexity grew, so did the need for better retrieval and smarter ranking. We restructured the system: upgraded to better sentence embeddings and, most importantly, added reranking using cross-encoders. This turned our search results from “kinda relevant” to “spot on.” We moved to OpenVINO to make reranking faster on the CPU, especially important since we deploy on AWS Lambda.

But the technical challenges didn’t stop there. We experimented with different pgvector index types—IVFFlat vs HNSW—and discovered surprising trade-offs in index build times and performance, especially under constrained RDS instances. Embedding updates became their own problem, so we built a parallel processing system using SQS and a tool we call “Cockpit” to manage recomputation.

On top of that, search in our world isn&#x27;t a single step. We layer semantic retrieval with full-text filtering, structured filters (e.g., airport, aircraft type), and real-time inputs. This creates a multi-layered AI search pipeline that needs to feel snappy and reliable to end-users.

In this talk, we’ll walk through how we made this work with minimal ML infrastructure, how we evolved from an MVP to a robust system, and what tools made the biggest difference—from tokenization strategies and inference optimizations to batching tricks and search composition patterns. You’ll also hear the gritty details: bottlenecks between tokenization and inference, indexing challenges, and lessons from building this in production for a safety-critical industry.

This talk is for folks who want to leverage Postgres for hybrid search as well. It’s for anyone who has ever duct-taped search with SQL and wondered how to take the next step. We’ll keep it real, share what we did, and reflect on what we’d do differently next time.</description>
            <class>PUBLIC</class>
            <status>CONFIRMED</status>
            <category>Talk</category>
            <url>https://cfp.pydata.org/berlin2025/talk/XE9F7X/</url>
            <location>B05-B06</location>
            
            <attendee>Dat Tran</attendee>
            
            <attendee>Dennis Schmidt</attendee>
            
        </vevent>
        
        <vevent>
            <method>PUBLISH</method>
            <uid>FDBZSR@@cfp.pydata.org</uid>
            <pentabarf:event-id></pentabarf:event-id>
            <pentabarf:event-slug>-FDBZSR</pentabarf:event-slug>
            <pentabarf:title>When Postgres is enough: solving document storage, pub/sub and distributed queues without more tools</pentabarf:title>
            <pentabarf:subtitle></pentabarf:subtitle>
            <pentabarf:language>en</pentabarf:language>
            <pentabarf:language-code>en</pentabarf:language-code>
            <dtstart>20250903T134000</dtstart>
            <dtend>20250903T141000</dtend>
            <duration>003000</duration>
            <summary>When Postgres is enough: solving document storage, pub/sub and distributed queues without more tools</summary>
            <description>When building modern systems, it&#x27;s easy to reach for specialized tools as new requirements pop up: a document store like MongoDB for flexible schemas, Kafka for pub/sub, Redis for distributed queuing, or Weaviate for storing vectors.

But what if you could meet many of these needs by simply extending the Postgres database you likely already have?

In this talk, we’ll explore how Postgres&#x27; powerful native features such as:
- JSONB for document storage
- LISTEN/NOTIFY for pub/sub messaging 
- SELECT FOR UPDATE SKIP LOCKED for queueing
- an extension for vectors 

can be used to solve real-world problems without introducing new infrastructure.

Throughout the talk, we’ll walk through practical code examples in Python and SQL to show exactly how these patterns can be implemented in real projects.
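As one hedged example of the queueing pattern (the jobs table here is invented for illustration), a worker can claim a single job without blocking other workers:

```sql
-- Claim the oldest queued job; FOR UPDATE SKIP LOCKED makes
-- concurrent workers skip rows another worker has locked.
UPDATE jobs
SET status = 'running'
WHERE id = (
    SELECT id
    FROM jobs
    WHERE status = 'queued'
    ORDER BY created_at
    FOR UPDATE SKIP LOCKED
    LIMIT 1
)
RETURNING id, payload;
```

Because the inner SELECT skips locked rows instead of waiting on them, many workers can poll the same table without serializing on a single hot row.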

The goal isn’t to suggest that Postgres replaces purpose-built tools like Kafka, Redis, or MongoDB forever. Specialized systems still have their place, especially at larger scales. However, by reusing Postgres intelligently, you can delay these decisions until they are truly necessary, keeping your system simpler, easier to operate, and more maintainable in the meantime.

Especially for freelancers, startups, and small teams, reducing system complexity early on means faster iteration, fewer operational headaches, and lower costs. And since Postgres is already present in most modern tech stacks, these capabilities are often just a few SQL queries away.

## Outline

1. **Introduction**: re-using existing infrastructure instead of introducing new systems to focus on solving problems
2. **Pub/Sub with Postgres**: messaging between services using LISTEN/NOTIFY
3. **Queuing with Postgres**: building distributed queues with SELECT FOR UPDATE SKIP LOCKED
4. **Document Storage with Postgres**: handling flexible schemas and semi-structured data using JSONB
5. **Conclusion**: when re-using Postgres makes sense - and when specialized systems are needed

**Bonus: storing vectors with Postgres for your AI workloads**: adding efficient vector functionality by installing an extension</description>
            <class>PUBLIC</class>
            <status>CONFIRMED</status>
            <category>Talk</category>
            <url>https://cfp.pydata.org/berlin2025/talk/FDBZSR/</url>
            <location>B05-B06</location>
            
            <attendee>Eugen Geist</attendee>
            
        </vevent>
        
        <vevent>
            <method>PUBLISH</method>
            <uid>WWSZKY@@cfp.pydata.org</uid>
            <pentabarf:event-id></pentabarf:event-id>
            <pentabarf:event-slug>-WWSZKY</pentabarf:event-slug>
            <pentabarf:title>Scraping urban mobility: analysis of Berlin carsharing</pentabarf:title>
            <pentabarf:subtitle></pentabarf:subtitle>
            <pentabarf:language>en</pentabarf:language>
            <pentabarf:language-code>en</pentabarf:language-code>
            <dtstart>20250903T142000</dtstart>
            <dtend>20250903T145000</dtend>
            <duration>003000</duration>
            <summary>Scraping urban mobility: analysis of Berlin carsharing</summary>
            <description>You&#x27;ll see the hidden patterns that carsharing data reveals when contextualized with urban information. Comprehensive data visualizations demonstrate the impact of area use, traffic, and weather conditions across the city.
Building on existing research, the talk will present opportunities that arise from including user data in the equation and offer starting points for additional predictors. 

This session is ideal for data scientists interested in urban analytics, transportation modeling, or real-world applications of predictive modeling in mobility systems.</description>
            <class>PUBLIC</class>
            <status>CONFIRMED</status>
            <category>Talk</category>
            <url>https://cfp.pydata.org/berlin2025/talk/WWSZKY/</url>
            <location>B05-B06</location>
            
            <attendee>Florian König</attendee>
            
        </vevent>
        
    </vcalendar>
</iCalendar>
