Tune: Scalable Hyperparameter Tuning


Tune is a Python library for experiment execution and hyperparameter tuning at any scale.

Important

Join our community Slack to discuss Ray!

For more information, check out:

  • Code: GitHub repository for Tune.

  • User Guide: A comprehensive overview of how to use Tune’s features.

  • Tutorial Notebooks: Our tutorial notebooks on using Tune with Keras or PyTorch.

Try out a tutorial notebook on Colab:

Tune Tutorial

Quick Start

To run this example, install the following dependencies:

pip install 'ray[tune]' torch torchvision

This example runs a small grid search to train a convolutional neural network using PyTorch and Tune.

import torch.optim as optim
from ray import tune
from ray.tune.examples.mnist_pytorch import get_data_loaders, ConvNet, train, test


def train_mnist(config):
    # Load the MNIST data loaders and build the model.
    train_loader, test_loader = get_data_loaders()
    model = ConvNet()
    optimizer = optim.SGD(model.parameters(), lr=config["lr"])
    for _ in range(10):
        train(model, optimizer, train_loader)
        acc = test(model, test_loader)
        # Report the test-set accuracy back to Tune after each epoch.
        tune.track.log(mean_accuracy=acc)


# Run one trial per learning rate in the grid search.
analysis = tune.run(
    train_mnist, config={"lr": tune.grid_search([0.001, 0.01, 0.1])})

print("Best config: ", analysis.get_best_config(metric="mean_accuracy"))

# Get a dataframe for analyzing trial results.
df = analysis.dataframe()
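
Since analysis.dataframe() returns a pandas DataFrame, standard pandas operations apply. As a minimal sketch of inspecting results (assuming the metric logged above appears as a mean_accuracy column and the learning rate as a config/lr column, Tune's usual flattened naming):

# Rank trials by their reported accuracy; hyperparameter values
# appear in the "config/..." columns of the dataframe.
print(df.sort_values("mean_accuracy", ascending=False)[
    ["mean_accuracy", "config/lr"]])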

If TensorBoard is installed, you can automatically visualize all trial results:

tensorboard --logdir ~/ray_results
[Image: TensorBoard view of Tune trial results]

If you are using TensorFlow 2 with TensorBoard, Tune also automatically generates TensorBoard HParams output:

[Image: TensorBoard HParams parallel coordinates view]

Take a look at the Distributed Experiments documentation for:

  1. Setting up distributed experiments on your local cluster

  2. Using AWS and GCP

  3. Using spot instances and preemptible instances, and more (see the sketch after this list).
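
The same training function scales to a cluster without code changes. Here is a minimal sketch, assuming a Ray cluster is already running and reachable from the node executing this script:

import ray
from ray import tune

# Connect to the existing Ray cluster instead of starting Ray locally.
ray.init(address="auto")

# The same call as in the Quick Start; trials are now scheduled
# across all nodes in the cluster.
analysis = tune.run(
    train_mnist, config={"lr": tune.grid_search([0.001, 0.01, 0.1])})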

Open Source Projects using Tune

Here are some of the popular open source repositories and research projects that leverage Tune. Feel free to submit a pull request to add a project (or to request the removal of a listed one!).

  • Softlearning: Softlearning is a reinforcement learning framework for training maximum entropy policies in continuous domains. Includes the official implementation of the Soft Actor-Critic algorithm.

  • Flambe: An ML framework to accelerate research and its path to production. See flambe.ai.

  • Population Based Augmentation: Population Based Augmentation (PBA) is an algorithm that quickly and efficiently learns data augmentation functions for neural network training. PBA matches state-of-the-art results on CIFAR with one thousand times less compute.

  • Fast AutoAugment by Kakao: Fast AutoAugment (Accepted at NeurIPS 2019) learns augmentation policies using a more efficient search strategy based on density matching.

  • Allentune: Hyperparameter Search for AllenNLP from AllenAI.

  • machinable: A modular configuration system for machine learning research. See machinable.org.

Citing Tune

If Tune helps you in your academic research, you are encouraged to cite our paper. Here is an example BibTeX entry:

@article{liaw2018tune,
    title={Tune: A Research Platform for Distributed Model Selection and Training},
    author={Liaw, Richard and Liang, Eric and Nishihara, Robert
            and Moritz, Philipp and Gonzalez, Joseph E and Stoica, Ion},
    journal={arXiv preprint arXiv:1807.05118},
    year={2018}
}