RaySGD: Distributed Deep Learning

RaySGD is a lightweight library for distributed deep learning, providing thin wrappers around framework-native modules for data parallel training.

The main features are:

  • Ease of use: Scale PyTorch’s native DistributedDataParallel and TensorFlow’s tf.distribute.MirroredStrategy without needing to monitor individual nodes.
  • Composability: RaySGD is built on top of the Ray Actor API, enabling seamless integration with existing Ray applications such as RLlib, Tune, and Ray Serve.
  • Scale up and down: Start on a single CPU, then scale up to multi-node, multi-GPU training by changing two lines of code, as illustrated in the sketch below.
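
To make the "two lines" point concrete, here is a minimal sketch of data-parallel PyTorch training, assuming the TorchTrainer and TrainingOperator.from_creators API from ray.util.sgd as it existed in Ray 1.x; exact signatures may differ across Ray versions, and the model, data, and hyperparameters are toy placeholders.

    import torch
    import torch.nn as nn
    import ray
    from ray.util.sgd import TorchTrainer
    from ray.util.sgd.torch import TrainingOperator


    def model_creator(config):
        # Toy model for illustration; any nn.Module works here.
        return nn.Linear(1, 1)


    def optimizer_creator(model, config):
        return torch.optim.SGD(model.parameters(), lr=config.get("lr", 1e-2))


    def data_creator(config):
        # Returns (train_loader, validation_loader); random data for illustration.
        dataset = torch.utils.data.TensorDataset(
            torch.randn(1024, 1), torch.randn(1024, 1))
        loader = torch.utils.data.DataLoader(dataset, batch_size=32)
        return loader, loader


    ray.init()

    trainer = TorchTrainer(
        training_operator_cls=TrainingOperator.from_creators(
            model_creator=model_creator,
            optimizer_creator=optimizer_creator,
            data_creator=data_creator,
            loss_creator=nn.MSELoss,
        ),
        num_workers=2,    # scale out: raise this to use more processes/nodes
        use_gpu=False,    # scale up: flip to True to train on GPUs
    )

    stats = trainer.train()
    print(stats)
    trainer.shutdown()

In this sketch, only the num_workers and use_gpu arguments change when moving from a laptop to a multi-node, multi-GPU cluster; RaySGD handles process placement and gradient synchronization behind the scenes.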