Wednesday, May 22 • 15:55 - 16:30
Large Scale Distributed Deep Learning with Kubernetes Operators - Yuan Tang, Ant Financial & Yong Tang, MobileIron

The focus of this talk is the use of Kubernetes operators to manage and automate the training process for machine learning tasks. Two open source Kubernetes operators, tf-operator and mpi-operator, will be discussed. Both operators manage training jobs for TensorFlow, but they support different distribution strategies. The tf-operator fits the parameter server strategy, which relies on centralized parameter servers for coordination. The mpi-operator, on the other hand, uses the MPI allreduce primitive. While the parameter server strategy requires the right ratio of CPUs (for parameter servers) to GPUs (for workers) to be network-optimal, the allreduce strategy can be easier to optimize for network cost. We will share performance numbers in our talk to compare the two operators.
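
To make the two distribution strategies concrete, below is a minimal sketch (not part of the original session description) of how the corresponding custom resources might be submitted with the official Kubernetes Python client. The API versions, image names, namespaces, and replica counts are illustrative assumptions and may differ across operator releases.

# Sketch only: submit a TFJob (parameter server strategy) and an MPIJob
# (allreduce strategy) as custom resources. Group/version strings, images,
# and replica counts are assumptions for illustration.
from kubernetes import client, config

config.load_kube_config()          # use the current kubeconfig context
api = client.CustomObjectsApi()

# Parameter server strategy (tf-operator): centralized "PS" replicas
# coordinate gradient updates for the "Worker" replicas.
tf_job = {
    "apiVersion": "kubeflow.org/v1",   # version may differ per tf-operator release
    "kind": "TFJob",
    "metadata": {"name": "mnist-ps-example", "namespace": "default"},
    "spec": {
        "tfReplicaSpecs": {
            "PS": {
                "replicas": 1,
                "restartPolicy": "Never",
                "template": {"spec": {"containers": [{
                    "name": "tensorflow",
                    "image": "example/tf-mnist:latest",   # hypothetical image
                }]}},
            },
            "Worker": {
                "replicas": 4,
                "restartPolicy": "Never",
                "template": {"spec": {"containers": [{
                    "name": "tensorflow",
                    "image": "example/tf-mnist:latest",
                    "resources": {"limits": {"nvidia.com/gpu": 1}},
                }]}},
            },
        }
    },
}
api.create_namespaced_custom_object(
    group="kubeflow.org", version="v1", namespace="default",
    plural="tfjobs", body=tf_job,
)

# Allreduce strategy (mpi-operator): a "Launcher" runs mpirun against the
# "Worker" replicas; there is no central parameter server.
mpi_job = {
    "apiVersion": "kubeflow.org/v1",   # version may differ per mpi-operator release
    "kind": "MPIJob",
    "metadata": {"name": "mnist-allreduce-example", "namespace": "default"},
    "spec": {
        "mpiReplicaSpecs": {
            "Launcher": {
                "replicas": 1,
                "template": {"spec": {"containers": [{
                    "name": "mpi-launcher",
                    "image": "example/horovod-mnist:latest",   # hypothetical image
                }]}},
            },
            "Worker": {
                "replicas": 4,
                "template": {"spec": {"containers": [{
                    "name": "mpi-worker",
                    "image": "example/horovod-mnist:latest",
                    "resources": {"limits": {"nvidia.com/gpu": 1}},
                }]}},
            },
        }
    },
}
api.create_namespaced_custom_object(
    group="kubeflow.org", version="v1", namespace="default",
    plural="mpijobs", body=mpi_job,
)

In both cases the operator watches the custom resource and creates the underlying pods and services, so the user describes only the desired job shape rather than managing the training processes directly.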

Speakers

Yong Tang

Senior Director of Engineering, Ivanti
Yong Tang is Senior Director of Engineering at Ivanti. He is a core maintainer of CoreDNS and contributes to many container, cloud-native, and machine learning projects for the open source community. In addition to CoreDNS, he is a maintainer of Docker/Moby. He is also a maintainer...

Yuan Tang

Principal Software Engineer, Red Hat
Yuan is a principal software engineer at Red Hat, working on OpenShift AI. Previously, he led teams building AI infrastructure and platforms at various companies, including Alibaba and Akuity. He's a project lead of Argo and Kubeflow, a maintainer of TensorFlow and XGBoost, and...
Wednesday May 22, 2019 15:55 - 16:30 CEST
Hall 8.0 D2