Distributed Training using Horovod
Horovod is an open source framework for distributed deep learning. It is available for use with TensorFlow and several other deep learning frameworks
Horovod’s cluster architecture differs from the parameter server architecture. Horovod uses Ring-AllReduce, in which the amount of data each worker sends is nearly independent of the number of cluster nodes, which can be more efficient when training on a cluster where each node has multiple GPUs (and thus multiple worker processes).
Additionally, whereas the parameter server update process described above is asynchronous, updates in Horovod are synchronous. After all processes have completed their calculations for the current batch, the gradients calculated by each process circulate around the ring until every process has a complete set of gradients for the batch.
At that time, each process updates its local model weights, so every process has the same model weights before starting work on the next batch. The following diagram shows how Ring-AllReduce works (from Meet Horovod: Uber’s Open Source Distributed Deep Learning Framework for TensorFlow):
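Alongside the diagram, the ring logic can be illustrated with a short, single-process simulation. This is a toy sketch of the two Ring-AllReduce phases (scatter-reduce, then all-gather), not Horovod's actual implementation; the function name and the NumPy arrays standing in for per-worker gradients are assumptions made for illustration.

```python
import numpy as np

def ring_allreduce(grads_per_worker):
    """Toy single-process simulation of Ring-AllReduce.

    grads_per_worker: one 1-D gradient array per worker, all the same
    length. Returns the fully summed gradients each worker holds at
    the end.
    """
    n = len(grads_per_worker)
    # Each worker splits its gradient vector into n chunks.
    chunks = [np.array_split(np.array(g, dtype=float), n)
              for g in grads_per_worker]

    # Scatter-reduce: in each of n-1 steps, every worker passes one
    # chunk to its right-hand neighbour, which adds it to its own.
    # Afterwards, worker w holds the fully summed chunk (w + 1) % n.
    for step in range(n - 1):
        for w in range(n):
            idx = (w - step) % n
            chunks[(w + 1) % n][idx] += chunks[w][idx]

    # All-gather: circulate the completed chunks around the ring so
    # every worker ends up with all n summed chunks.
    for step in range(n - 1):
        for w in range(n):
            idx = (w + 1 - step) % n
            chunks[(w + 1) % n][idx] = chunks[w][idx].copy()

    return [np.concatenate(c) for c in chunks]

# Sanity check: every worker ends up with the element-wise sum.
grads = [np.random.randn(10) for _ in range(4)]
assert all(np.allclose(r, np.sum(grads, axis=0))
           for r in ring_allreduce(grads))
```

Note that each worker only ever sends one chunk per step to its neighbour, which is why per-worker traffic stays roughly constant as nodes are added.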
Horovod employs Message Passing Interface (MPI), a popular standard for managing communication between nodes in a high-performance cluster, and uses NVIDIA’s NCCL library for GPU-level communication.
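If you are unsure whether a particular Horovod installation was compiled with these libraries, Horovod exposes build-time checks. A minimal sketch, assuming a TensorFlow build of Horovod:

```python
import horovod.tensorflow as hvd

# These report compile-time support; no hvd.init() is required.
print("MPI support built in: ", hvd.mpi_built())
print("NCCL support built in:", hvd.nccl_built())
```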
The Horovod framework eliminates many of the difficulties of Ring-AllReduce cluster setup and works with several popular deep learning frameworks and APIs. For example, if you are using the popular Keras API, you can use either the reference Keras implementation or tf.keras directly with Horovod without converting to an intermediate API such as tf.Estimator.
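As an illustration of that point, below is a minimal sketch of a tf.keras training script adapted for Horovod, following the pattern used in Horovod's Keras examples; the random data, the small Dense model, and the hyperparameters are placeholders, not part of this page.

```python
import numpy as np
import tensorflow as tf
import horovod.tensorflow.keras as hvd

# Initialize Horovod and pin each worker process to a single GPU.
hvd.init()
gpus = tf.config.experimental.list_physical_devices("GPU")
if gpus:
    tf.config.experimental.set_visible_devices(gpus[hvd.local_rank()], "GPU")

# Placeholder data and model; substitute your own.
x_train = np.random.rand(1024, 32).astype("float32")
y_train = np.random.randint(0, 10, size=(1024,))

model = tf.keras.Sequential([
    tf.keras.Input(shape=(32,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Scale the learning rate by the number of workers and wrap the
# optimizer so gradients are averaged via Ring-AllReduce.
opt = tf.keras.optimizers.Adam(0.001 * hvd.size())
opt = hvd.DistributedOptimizer(opt)

model.compile(loss="sparse_categorical_crossentropy",
              optimizer=opt,
              metrics=["accuracy"])

callbacks = [
    # Start every worker from the same weights by broadcasting
    # rank 0's initial state to the other ranks.
    hvd.callbacks.BroadcastGlobalVariablesCallback(0),
]

# Only rank 0 prints progress, to avoid duplicated output.
model.fit(x_train, y_train, batch_size=128, epochs=3,
          callbacks=callbacks, verbose=1 if hvd.rank() == 0 else 0)
```

Launched under mpirun (or a multi-node Gradient experiment), each copy of this script becomes one worker in the ring.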
With Gradient you have full control over the mpirun command.
Example mpirun
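For example, Horovod's documentation launches a training script on a single node with 4 GPUs roughly as follows; train.py stands in for your own script:

```bash
mpirun -np 4 \
    -bind-to none -map-by slot \
    -x NCCL_DEBUG=INFO -x LD_LIBRARY_PATH -x PATH \
    -mca pml ob1 -mca btl ^openib \
    python train.py
```

Here -bind-to none stops MPI from binding each training process to a single core, -map-by slot spreads processes across the available slots, the -x flags forward environment variables to the worker processes, and the -mca options force TCP for MPI communication.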
Multi-node Distributed:
The only additional requirement for running the workload distributed across multiple nodes (multi-node scenario only) is to pass:
The following command creates and starts a multi-node MPI-based experiment within the Gradient Project specified with the --projectId option.
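As a hypothetical sketch of what such a command can look like: only the --projectId option comes from this page; the multinode subcommand and every other flag name are assumptions about the Gradient CLI and should be checked against the output of gradient experiments run multinode --help. The container image, machine types, counts, and commands are placeholders.

```bash
# Hypothetical sketch; verify every flag against your Gradient CLI version.
gradient experiments run multinode \
  --name horovod-example \
  --projectId <your-project-id> \
  --experimentType MPI \
  --workerContainer horovod/horovod:latest \
  --workerMachineType P4000 \
  --workerCount 2 \
  --workerCommand "sleep infinity" \
  --masterContainer horovod/horovod:latest \
  --masterMachineType P4000 \
  --masterCount 1 \
  --masterCommand "mpirun -np 2 -H worker-0:1,worker-1:1 -bind-to none -map-by slot -x NCCL_DEBUG=INFO -x LD_LIBRARY_PATH -x PATH -mca pml ob1 -mca btl ^openib python train.py"
```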
Horovod's GitHub repo has a good explanation of the -mca parameters, along with a good set of examples.