Bharath Jatoth
Senior AI Engineer with over eight years of experience architecting scalable machine learning, generative AI, and LLM solutions. Holding a B.Tech from IIT Guwahati, he specializes in RAG, LangChain, PyTorch, and AWS, delivering innovations such as a fact-checking system for Cyara and leading risk quantification and hallmarking software projects that boosted exports by 8–9% CAGR. Recognized at CGI’s Global Meet 2018, Bharath drives transformative AI solutions with Docker, Kubernetes, and cloud pipelines, blending technical expertise with impactful leadership.
Session
Python users working on real-time analytics—from payment processing and fraud detection to AI-driven support—rely on message queues to keep data moving reliably and efficiently. Traditional message queues, however, can struggle with large-scale, concurrent workloads, especially when you need durability and replayability.
In this session, we’ll show how Kafka 4.0 introduces robust queue semantics to distributed streaming, empowering Python applications to handle fair, concurrent, and isolated message processing at scale—using familiar Kafka Python clients and frameworks.
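To make the queue model concrete, here’s a minimal consumer sketch using the confluent-kafka Python client. Kafka 4.0’s queue semantics come from share groups (KIP-932); Python client support for them is still maturing, so the share-group setting below is an assumption that may differ in your client version, while the rest is a standard consume loop.

```python
# Minimal consume loop with confluent-kafka. The share-group setting is an
# ASSUMPTION: share groups (KIP-932) are new in Kafka 4.0 and Python clients
# may expose them differently (or not yet) in your version.
from confluent_kafka import Consumer, KafkaError


def process(payload: bytes) -> None:
    # Placeholder handler; replace with your analytics logic.
    print(payload.decode("utf-8", errors="replace"))


consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "payments-workers",
    # Hypothetical knob for queue-style (share group) consumption; check
    # your client's docs for the actual KIP-932 configuration key.
    "group.protocol": "share",
    "auto.offset.reset": "earliest",
    "enable.auto.commit": False,
})
consumer.subscribe(["payments"])

try:
    while True:
        msg = consumer.poll(timeout=1.0)
        if msg is None:
            continue
        if msg.error():
            if msg.error().code() == KafkaError._PARTITION_EOF:
                continue
            raise RuntimeError(msg.error())
        process(msg.value())
        consumer.commit(msg)  # acknowledge only after successful processing
finally:
    consumer.close()
```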
But the power lies in what you can build next. We’ll demonstrate how Apache Flink can connect Kafka event streams to real-time Large Language Model (LLM) inference for tasks like sentiment analysis and summarization, all orchestrated via Python APIs and remote model endpoints for flexible AI inference.
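A sketch of that pipeline in PyFlink is below. The Kafka source and map operator are standard PyFlink APIs; the model endpoint URL, topic names, and response shape are illustrative assumptions.

```python
# PyFlink job: read support tickets from Kafka, call a remote LLM endpoint
# for sentiment, and print the enriched records. Endpoint URL, topic name,
# and response schema are assumptions for illustration.
# Requires the Flink Kafka connector jar on the job's classpath.
import json
import requests

from pyflink.common import WatermarkStrategy
from pyflink.common.serialization import SimpleStringSchema
from pyflink.datastream import StreamExecutionEnvironment
from pyflink.datastream.connectors.kafka import KafkaSource, KafkaOffsetsInitializer

LLM_ENDPOINT = "http://model-server:8000/v1/sentiment"  # hypothetical endpoint


def enrich(raw: str) -> str:
    # Blocking per-event call for brevity; in production you would batch
    # or parallelize these requests.
    resp = requests.post(LLM_ENDPOINT, json={"text": raw}, timeout=10)
    resp.raise_for_status()
    return json.dumps({"text": raw, "sentiment": resp.json().get("label")})


env = StreamExecutionEnvironment.get_execution_environment()

source = (
    KafkaSource.builder()
    .set_bootstrap_servers("localhost:9092")
    .set_topics("support-tickets")
    .set_group_id("llm-enrichment")
    .set_starting_offsets(KafkaOffsetsInitializer.latest())
    .set_value_only_deserializer(SimpleStringSchema())
    .build()
)

stream = env.from_source(source, WatermarkStrategy.no_watermarks(), "tickets")
stream.map(enrich).print()

env.execute("kafka-llm-enrichment")
```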
To complete the picture, we’ll cover how enriched results can be stored in popular data lake solutions—such as Apache Iceberg—enabling long-term analytics, time travel, and integration with downstream data science workflows. Support for Iceberg and other lakehouse formats is optional, giving you flexibility to choose the right data backend for your needs.
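As one illustration of that last step, here is a sketch that appends enriched records to an Iceberg table with pyiceberg. The catalog settings, table identifier, and schema are assumptions; in practice you might instead write directly from Flink using its Iceberg sink.

```python
# Append enriched results to an Iceberg table via pyiceberg. Catalog config,
# table name, and column schema here are illustrative assumptions.
import pyarrow as pa
from pyiceberg.catalog import load_catalog

# Assumes a REST catalog; swap in your own catalog type and settings.
catalog = load_catalog(
    "demo",
    **{
        "type": "rest",
        "uri": "http://localhost:8181",
        "warehouse": "s3://my-warehouse/",  # hypothetical location
    },
)

table = catalog.load_table("analytics.enriched_tickets")

# A small batch of enriched records as an Arrow table; column names must
# match the Iceberg table's schema.
batch = pa.table({
    "text": ["payment failed twice", "great support, thanks!"],
    "sentiment": ["negative", "positive"],
})

table.append(batch)  # commits a new snapshot, queryable via time travel
```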