Ray Projects (Experimental)

Ray projects make it easy to package a Ray application so it can be rerun later in the same environment. They allow for the sharing and reliable reuse of existing code.

Quick start (CLI)

# Creates a project in the current directory. It will create a
# project.yaml defining the code and environment and a cluster.yaml
# describing the cluster configuration. Both will be created in the
# .rayproject subdirectory of the current directory.
$ ray project create <project-name>

# Create a new session from the given project.  Launch a cluster and run
# the command, which must be specified in the project.yaml file. If no
# command is specified, the "default" command in .rayproject/project.yaml
# will be used. Alternatively, use --shell to run a raw shell command.
$ ray session start <command-name> [arguments] [--shell]

# Open a console for the given session.
$ ray session attach

# Stop the given session and terminate all of its worker nodes.
$ ray session stop


See the readme for instructions on how to run these examples:

  • Open Tacotron: A TensorFlow implementation of Google’s Tacotron speech synthesis with pre-trained model (unofficial)
  • PyTorch Transformers: A library of state-of-the-art pretrained models for Natural Language Processing (NLP)

Project file format (project.yaml)

A project file contains everything required to run a project. This includes a cluster configuration, the environment and dependencies for the application, and the specific inputs used to run the project.

Here is an example of a minimal project file:

name: test-project
description: "This is a simple test project"
repo: https://github.com/ray-project/ray

# Cluster to be instantiated by default when starting the project.
cluster: .rayproject/cluster.yaml

# Commands/information to build the environment, once the cluster is
# instantiated. This can include the versions of Python libraries etc.
# It can be specified as a Python requirements.txt, a conda environment,
# a Dockerfile, or a shell script to run to set up the libraries.
environment:
  requirements: requirements.txt

# List of commands that can be executed once the cluster is instantiated
# and the environment is set up.
# A command can also specify a cluster that overwrites the default cluster.
commands:
  - name: default
    command: python default.py
    help: "The command that will be executed if no command name is specified"
  - name: test
    command: python test.py --param1={{param1}} --param2={{param2}}
    help: "A test command"
    params:
      - name: "param1"
        help: "The first parameter"
        # The following line indicates possible values this parameter can take.
        choices: ["1", "2"]
      - name: "param2"
        help: "The second parameter"
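The {{param}} placeholders in a command string are filled in with the arguments passed on the command line. As a rough illustration (this is a hypothetical helper, not Ray's actual implementation), the substitution could be sketched like this:

```python
import re

def substitute_params(command: str, params: dict) -> str:
    # Replace each {{name}} placeholder with the corresponding argument value.
    def replace(match):
        name = match.group(1)
        if name not in params:
            raise KeyError(f"missing value for parameter '{name}'")
        return str(params[name])
    return re.sub(r"\{\{(\w+)\}\}", replace, command)

cmd = substitute_params(
    "python test.py --param1={{param1}} --param2={{param2}}",
    {"param1": "1", "param2": "foo"},
)
print(cmd)  # python test.py --param1=1 --param2=foo
```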

Project files have to adhere to the following schema (a YAML object with these fields):

  • name (string): The name of the project
  • description (string): A short description of the project
  • repo (string): The URL of the repo this project is part of
  • cluster (string): Path to a .yaml cluster configuration file (relative to the project root)
  • environment (object): The environment that needs to be set up to run the project
      • dockerimage (string): URL to a docker image that can be pulled to run the project in
      • dockerfile (string): Path to a Dockerfile to set up an image the project can run in (relative to the project root)
      • requirements (string): Path to a Python requirements.txt file to set up project dependencies (relative to the project root)
      • shell (array of strings): A sequence of shell commands to run to set up the project environment
  • commands (array of objects): Possible commands to run to start a session
      • name (string): Name of the command
      • command (string): Shell command to run on the cluster
      • params (array of objects): Possible parameters in the command
          • name (string): Name of the parameter
          • help (string): Help string for the parameter
          • choices (array): Possible values the parameter can take
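A few of the schema rules above can be checked with plain Python. The following is a simplified sketch of such a validator (not Ray's actual validation logic; it only covers the string fields and the `choices` arrays):

```python
def validate_project(project):
    """Collect type errors for a parsed project.yaml dict.

    A simplified sketch of the schema above; not Ray's actual validator.
    """
    errors = []
    # Top-level string fields.
    for field in ("name", "description", "repo", "cluster"):
        value = project.get(field)
        if value is not None and not isinstance(value, str):
            errors.append(f"{field} must be a string")
    # Commands and their parameters.
    for cmd in project.get("commands", []):
        if not isinstance(cmd, dict):
            errors.append("each command must be an object")
            continue
        for param in cmd.get("params", []):
            choices = param.get("choices", [])
            if not isinstance(choices, list):
                errors.append(f"choices for {param.get('name')!r} must be an array")
    return errors

project = {
    "name": "test-project",
    "cluster": ".rayproject/cluster.yaml",
    "commands": [
        {"name": "test", "params": [{"name": "param1", "choices": ["1", "2"]}]},
    ],
}
print(validate_project(project))  # → []
```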

Cluster file format (cluster.yaml)

This is the same format as for the autoscaler; see the Cluster Launch page.
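For reference, a minimal sketch of such a cluster configuration (the field values here are illustrative, and the AWS provider is an assumption; consult the Cluster Launch page for the full set of supported fields):

```yaml
# Illustrative minimal autoscaler config; values are placeholders.
cluster_name: test-project-cluster

min_workers: 0
max_workers: 2

provider:
    type: aws
    region: us-west-2

auth:
    ssh_user: ubuntu
```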