PyData Amsterdam 2025

Bridging the Gap: Building Robust, Tool-Integrated LLM Applications with the Model Context Protocol
09-24, 09:00–10:30 (Europe/Amsterdam), Katherine Johnson @ TNW City

Large Language Models (LLMs) are unlocking transformative capabilities — but integrating them into complex, real-world applications remains a major challenge. Simple prompting isn’t enough when dynamic interaction with tools, structured data, and live context is required. This workshop introduces the Model Context Protocol (MCP), an emerging open standard designed to simplify and standardise this integration. Aimed at forward-thinking developers and technologists, this hands-on session will equip participants with practical skills to build intelligent, modular, and extensible LLM-native applications using MCP.


Dive into the next evolution of LLM application development with the Model Context Protocol. As LLMs become more capable, the need for a consistent way to connect them with external tools and data is increasingly urgent. This interactive workshop explores MCP as a breakthrough approach to building tool-aware, context-rich AI systems.

We’ll cover:
- Core Concepts of MCP: Understand MCP's client-server architecture and how it enables structured, contextual reasoning — think of it as “USB-C for AI.” Learn how tools and resources are exposed, discovered, and invoked via the protocol.
- Development Environment Setup: Get hands-on with a modern Python development stack, including FastMCP, Typer, and Streamlit. Configure access to an LLM backend using either the OpenAI API or a local Ollama instance.
- Building MCP Servers & Tools: Learn to expose functions and resources through an MCP server. We'll walk through a real example, a Wikipedia search-and-summarisation tool, to demonstrate how to define capabilities and serve them in a standards-compliant way (a minimal server sketch follows this list).
- Client Development (CLI & Web): Build robust clients using Typer for command-line interfaces and Streamlit for web apps. Learn how to discover server tools, invoke them, and manage stateful interactions (see the client sketch below).
- LLM-Orchestrated Interactions: Go beyond prompt engineering. See how LLMs can autonomously select and chain tools via MCP to solve complex tasks. We'll demonstrate orchestration logic that turns LLMs into intelligent agents rather than mere text predictors (see the orchestration sketch below).
- Build an AI Research Assistant: Apply your skills by building a working prototype — an AI Research Assistant. This tool will take natural language queries, use an LLM to plan a response, and call MCP tools to search and summarise data in context.
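
To make the server side concrete, here is a minimal sketch of the kind of MCP server the workshop builds, written with FastMCP. The tool name `search_wikipedia`, the use of httpx, and the specific Wikipedia endpoints are illustrative assumptions, not the workshop's actual code.

```python
# server.py -- minimal MCP server sketch (pip install fastmcp httpx)
import httpx
from fastmcp import FastMCP

mcp = FastMCP("wikipedia-demo")

@mcp.tool()
def search_wikipedia(query: str) -> str:
    """Return the lead summary of the best-matching Wikipedia article."""
    # Resolve the query to an article title via the MediaWiki search API.
    resp = httpx.get(
        "https://en.wikipedia.org/w/api.php",
        params={"action": "opensearch", "search": query, "limit": 1, "format": "json"},
        timeout=10,
    )
    resp.raise_for_status()
    titles = resp.json()[1]
    if not titles:
        return f"No Wikipedia article found for {query!r}."
    # Fetch the plain-text summary of that article.
    summary = httpx.get(
        "https://en.wikipedia.org/api/rest_v1/page/summary/" + titles[0].replace(" ", "_"),
        timeout=10,
    )
    summary.raise_for_status()
    return summary.json().get("extract", "No summary available.")

if __name__ == "__main__":
    mcp.run()  # stdio transport by default
```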
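A client can then launch that server, discover its tools, and invoke them. The sketch below uses the official MCP Python SDK for the session and Typer for the CLI; the file names and the single `ask` command are assumptions for illustration.

```python
# cli.py -- minimal MCP client sketch (pip install mcp typer)
import asyncio

import typer
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

app = typer.Typer()

async def _ask(query: str) -> None:
    # Launch the server as a subprocess and talk to it over stdio.
    params = StdioServerParameters(command="python", args=["server.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Discover what the server exposes before invoking anything.
            tools = await session.list_tools()
            typer.echo("Tools: " + ", ".join(t.name for t in tools.tools))
            result = await session.call_tool("search_wikipedia", {"query": query})
            for item in result.content:
                typer.echo(getattr(item, "text", str(item)))

@app.command()
def ask(query: str) -> None:
    """Look a topic up through the MCP server."""
    asyncio.run(_ask(query))

if __name__ == "__main__":
    app()
```

With a single command registered, Typer runs it directly, so `python cli.py "Model Context Protocol"` prints the discovered tools and the summary.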
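The orchestration step can be as small as a loop that hands the server's tool schemas to the model and executes whatever the model asks for. Here is a hedged sketch using OpenAI-style tool calling; the model name, the Ollama `base_url` option, and the `run_agent` helper are assumptions, and `session` is an initialised `ClientSession` as in the client sketch.

```python
# orchestrate.py -- sketch of an LLM-driven tool loop over MCP
import json

from openai import OpenAI

llm = OpenAI()  # or OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

async def run_agent(session, user_query: str) -> str:
    # Advertise the MCP tools to the model in function-calling format.
    listed = await session.list_tools()
    tools = [
        {
            "type": "function",
            "function": {
                "name": t.name,
                "description": t.description or "",
                "parameters": t.inputSchema,
            },
        }
        for t in listed.tools
    ]
    messages = [{"role": "user", "content": user_query}]
    while True:
        response = llm.chat.completions.create(
            model="gpt-4o-mini", messages=messages, tools=tools
        )
        msg = response.choices[0].message
        if not msg.tool_calls:
            return msg.content  # the model has produced a final answer
        messages.append(msg)
        # Execute each requested tool via MCP and feed the result back.
        for call in msg.tool_calls:
            result = await session.call_tool(
                call.function.name, json.loads(call.function.arguments)
            )
            text = "\n".join(getattr(c, "text", "") for c in result.content)
            messages.append(
                {"role": "tool", "tool_call_id": call.id, "content": text}
            )
```

The AI Research Assistant exercise builds on this same pattern: a natural-language query goes in, the LLM plans the response, and MCP tools do the searching and summarising.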

By the end of this workshop, participants will have a working understanding of MCP, a functioning toolchain, and a clear roadmap to building next-generation AI applications.

Who Should Attend:
Developers, data scientists, and technical practitioners ready to move beyond basic prompting and explore advanced, structured LLM integration.

Requirements:
To get the most from this workshop, attendees should have:
- Python 3.11+ installed
- Docker (recommended for running the MCP server)
- Access to an LLM backend (OpenAI API or local Ollama)
- Basic familiarity with Python and the command line