2025-11-09, Tutorial Track 4
AI/ML workloads depend heavily on complex software stacks, including numerical computing libraries (SciPy, NumPy), deep learning frameworks (PyTorch, TensorFlow), and specialized toolchains (CUDA, cuDNN). Integrating these dependencies into Bazel-based workflows remains challenging, however, because of version compatibility constraints, transitive dependency resolution, and the need for performance tuning. This session walks through creating and maintaining Bazel packages for key AI/ML libraries so that builds stay reproducible, performant, and easy to use for researchers and engineers.
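For attendees who want a concrete starting point before the session, the following is a minimal sketch of declaring PyPI dependencies with rules_python under bzlmod; the rules_python version, Python version, hub name `pypi`, and lock-file path are illustrative assumptions rather than the tutorial's actual configuration.

```starlark
# MODULE.bazel -- minimal sketch: fetch NumPy/SciPy/PyTorch from PyPI via
# rules_python's pip extension (bzlmod). The version numbers, hub name, and
# lock-file path below are placeholders, not the tutorial's actual setup.
bazel_dep(name = "rules_python", version = "0.31.0")

python = use_extension("@rules_python//python/extensions:python.bzl", "python")
python.toolchain(python_version = "3.11")

pip = use_extension("@rules_python//python/extensions:pip.bzl", "pip")
pip.parse(
    hub_name = "pypi",
    python_version = "3.11",
    requirements_lock = "//:requirements_lock.txt",
)
use_repo(pip, "pypi")
```

```starlark
# BUILD.bazel -- a py_binary that consumes the pinned PyPI packages through
# the requirement() helper generated by the "pypi" hub declared above.
load("@pypi//:requirements.bzl", "requirement")
load("@rules_python//python:defs.bzl", "py_binary")

py_binary(
    name = "train",
    srcs = ["train.py"],
    deps = [
        requirement("numpy"),
        requirement("scipy"),
        requirement("torch"),
    ],
)
```

With a layout like this, `bazel run //:train` resolves the wheels pinned in the lock file rather than whatever happens to be installed on the host.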
Introduction to Bazel for AI/ML (20 min): Why Bazel matters for AI/ML workloads and how it helps with dependency management.
Challenges in AI/ML Bazel Packaging (30 min): Transitive dependencies, build system differences, and GPU acceleration when building Bazel packages for AI/ML libraries.
Strategies for Packaging (30 min): Approaches to packaging libraries such as SciPy, PyTorch, TensorFlow, and other dependencies while preserving compatibility and performance (a CPU/GPU selection sketch follows this agenda).
Best Practices for Distribution and Maintenance (20 min): Maintaining and distributing Bazel packages for AI/ML projects.
Hands-on Demo (144 min): A guided demo of building and using AI/ML libraries with Bazel.
Q&A and Open Discussion (36 min): Open questions and discussion.
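As referenced in the packaging-strategies item above, here is an illustrative sketch of switching between CPU-only and CUDA-enabled PyTorch wheel sets with config_setting and select(); the two pip hubs (@pypi_cpu, @pypi_gpu), the use_cuda define, and the target names are hypothetical placeholders, not necessarily the approach the tutorial will recommend.

```starlark
# BUILD.bazel -- hypothetical sketch: pick a CPU or CUDA PyTorch wheel set at
# build time. Assumes two pip.parse hubs ("pypi_cpu", "pypi_gpu") were declared
# in MODULE.bazel from separate requirements lock files.
load("@pypi_cpu//:requirements.bzl", cpu_requirement = "requirement")
load("@pypi_gpu//:requirements.bzl", gpu_requirement = "requirement")
load("@rules_python//python:defs.bzl", "py_binary")

config_setting(
    name = "cuda_enabled",
    values = {"define": "use_cuda=true"},  # enabled with --define=use_cuda=true
)

py_binary(
    name = "train",
    srcs = ["train.py"],
    deps = select({
        ":cuda_enabled": [gpu_requirement("torch")],          # CUDA-enabled wheels
        "//conditions:default": [cpu_requirement("torch")],   # CPU-only wheels
    }),
)
```

Here `bazel build //:train` uses the CPU wheel set by default, while `bazel build //:train --define=use_cuda=true` switches to the CUDA-enabled one; the session may present a different strategy.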
No previous knowledge expected
Ramesh Oswal is a Senior Motion Planning Engineer at Aurora, with prior experience at Luminar and Noble.AI. His expertise spans AI/ML for autonomous systems and education. He has also served as a review committee member for NeurIPS 2024, CNCF 2024, and CNCF 2023.