This page is an index of examples for the various use cases and features of RLlib.
If any example is broken, or if you’d like to add an example to this page, feel free to raise an issue on our GitHub repository.
Training Workflows
- Custom training workflows:
- Example of how to use Tune’s support for custom training functions to implement custom training workflows.
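A condensed sketch of this pattern (not the linked script itself), assuming Ray 1.x APIs; the function name `my_train_fn` and the config values are illustrative:

```python
import ray
from ray import tune
from ray.rllib.agents.ppo import PPOTrainer

def my_train_fn(config):
    # Build a trainer by hand and drive the training loop yourself.
    trainer = PPOTrainer(env="CartPole-v0", config=config)
    for _ in range(10):
        result = trainer.train()
        tune.report(**result)  # hand each result back to Tune
    trainer.stop()

ray.init()
tune.run(my_train_fn, config={"num_workers": 0, "framework": "tf"})
```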
- Curriculum learning:
- Example of how to adjust the configuration of an environment over time.
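One way to do this (a hedged sketch using the Ray 1.x callbacks API) is an `on_train_result` callback that pushes a new difficulty setting into every env copy; the `set_phase` method and the reward thresholds are things you would define on your own env:

```python
from ray.rllib.agents.callbacks import DefaultCallbacks

class CurriculumCallbacks(DefaultCallbacks):
    def on_train_result(self, *, trainer, result, **kwargs):
        # Promote every env copy to a harder phase as mean reward improves.
        if result["episode_reward_mean"] > 200:
            phase = 2
        elif result["episode_reward_mean"] > 100:
            phase = 1
        else:
            phase = 0
        # Broadcast the new phase to all env copies on all workers.
        trainer.workers.foreach_worker(
            lambda ev: ev.foreach_env(lambda env: env.set_phase(phase)))
```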
- Custom metrics:
- Example of how to output custom training metrics to TensorBoard.
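The gist, assuming the Ray 1.x `DefaultCallbacks` API: anything written to `episode.custom_metrics` is aggregated across episodes and shows up in TensorBoard automatically. The pole-angle metric below is just an example:

```python
from ray import tune
from ray.rllib.agents.callbacks import DefaultCallbacks

class MyCallbacks(DefaultCallbacks):
    def on_episode_end(self, *, worker, base_env, policies, episode, **kwargs):
        # CartPole obs[2] is the pole angle; record its final magnitude.
        episode.custom_metrics["pole_angle"] = abs(
            episode.last_observation_for()[2])

tune.run("PPO", config={"env": "CartPole-v0", "callbacks": MyCallbacks})
```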
- Using rollout workers directly for control over the whole training workflow:
- Example of how to use RLlib’s lower-level building blocks to implement a fully customized training workflow.
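At its simplest, the building-block approach looks roughly like this (a sketch, not the linked workflow; note the constructor argument is named `policy` in older Ray releases and `policy_spec` in newer ones):

```python
import gym
from ray.rllib.agents.pg.pg_tf_policy import PGTFPolicy
from ray.rllib.evaluation import RolloutWorker

# A worker that collects experience with a given policy class.
worker = RolloutWorker(
    env_creator=lambda _: gym.make("CartPole-v0"),
    policy_spec=PGTFPolicy)

for _ in range(5):
    batch = worker.sample()                # collect a SampleBatch
    policy = worker.policy_map["default_policy"]
    print(policy.learn_on_batch(batch))    # apply one gradient update
```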
Custom Envs and Models
- Registering a custom env and model:
- Example of defining and registering a gym env and model for use with RLlib.
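In outline, registering a custom env comes down to the following (a compressed variant of the docs’ corridor env, not the script itself); the custom-model half of the registration is sketched under the next entry:

```python
import gym
import numpy as np
from gym.spaces import Box, Discrete
from ray import tune

class SimpleCorridor(gym.Env):
    """Walk right along a corridor to reach the goal."""

    def __init__(self, env_config):
        self.end_pos = env_config.get("corridor_length", 5)
        self.cur_pos = 0
        self.action_space = Discrete(2)  # 0 = left, 1 = right
        self.observation_space = Box(0.0, self.end_pos, shape=(1,))

    def reset(self):
        self.cur_pos = 0
        return np.array([self.cur_pos], dtype=np.float32)

    def step(self, action):
        if action == 0 and self.cur_pos > 0:
            self.cur_pos -= 1
        elif action == 1:
            self.cur_pos += 1
        done = self.cur_pos >= self.end_pos
        return (np.array([self.cur_pos], dtype=np.float32),
                1.0 if done else -0.1, done, {})

tune.register_env("corridor", lambda cfg: SimpleCorridor(cfg))
tune.run("PPO", stop={"training_iteration": 5},
         config={"env": "corridor",
                 "env_config": {"corridor_length": 5}})
```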
- Custom Keras model:
- Example of using a custom Keras model.
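A minimal `TFModelV2` wrapper around a Keras functional model (Ray 1.x; older releases additionally require a `self.register_variables(...)` call after building the model):

```python
import tensorflow as tf
from ray.rllib.models import ModelCatalog
from ray.rllib.models.tf.tf_modelv2 import TFModelV2

class MyKerasModel(TFModelV2):
    """Dense policy-and-value network built with the Keras functional API."""

    def __init__(self, obs_space, action_space, num_outputs,
                 model_config, name):
        super().__init__(obs_space, action_space, num_outputs,
                         model_config, name)
        inputs = tf.keras.layers.Input(shape=obs_space.shape)
        hidden = tf.keras.layers.Dense(64, activation="relu")(inputs)
        logits = tf.keras.layers.Dense(num_outputs)(hidden)
        value = tf.keras.layers.Dense(1)(hidden)
        self.base_model = tf.keras.Model(inputs, [logits, value])

    def forward(self, input_dict, state, seq_lens):
        logits, self._value_out = self.base_model(input_dict["obs"])
        return logits, state

    def value_function(self):
        return tf.reshape(self._value_out, [-1])

ModelCatalog.register_custom_model("my_keras_model", MyKerasModel)
# Select it via: config={"model": {"custom_model": "my_keras_model"}}
```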
- Custom Keras RNN model:
- Example of using a custom Keras RNN model.
- Registering a custom model with supervised loss:
- Example of defining and registering a custom model with a supervised loss.
- Subprocess environment:
- Example of how to ensure subprocesses spawned by envs are killed when RLlib exits.
- Batch normalization:
- Example of adding batch norm layers to a custom model.
- Parametric actions:
- Example of how to handle variable-length or parametric action spaces.
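The standard masking trick looks like this (a sketch; the obs keys "real_obs" and "action_mask" are assumptions about how your env structures its Dict observations):

```python
import tensorflow as tf
from ray.rllib.models.tf.tf_modelv2 import TFModelV2

class ActionMaskModel(TFModelV2):
    """Removes invalid actions by pushing their logits toward -inf."""

    def __init__(self, obs_space, action_space, num_outputs,
                 model_config, name):
        super().__init__(obs_space, action_space, num_outputs,
                         model_config, name)
        # original_space holds the unflattened Dict observation space.
        real_obs_space = obs_space.original_space.spaces["real_obs"]
        inputs = tf.keras.layers.Input(shape=real_obs_space.shape)
        hidden = tf.keras.layers.Dense(64, activation="relu")(inputs)
        logits = tf.keras.layers.Dense(num_outputs)(hidden)
        value = tf.keras.layers.Dense(1)(hidden)
        self.base_model = tf.keras.Model(inputs, [logits, value])

    def forward(self, input_dict, state, seq_lens):
        logits, self._value_out = self.base_model(
            input_dict["obs"]["real_obs"])
        mask = input_dict["obs"]["action_mask"]  # 1 = valid, 0 = invalid
        # log(1) = 0 leaves valid actions untouched; log(0) = -inf
        # (clamped to float32 min) removes invalid ones from the softmax.
        inf_mask = tf.maximum(tf.math.log(mask), tf.float32.min)
        return logits + inf_mask, state

    def value_function(self):
        return tf.reshape(self._value_out, [-1])
```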
- Eager execution:
- Example of how to leverage TensorFlow eager to simplify debugging and design of custom models and policies.
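Enabling it is a config switch; recent Ray versions select eager TF2 via the `framework` key (older releases used an `eager: True` flag instead):

```python
from ray import tune

tune.run("PPO", config={
    "env": "CartPole-v0",
    "framework": "tf2",      # run TensorFlow in eager mode
    "eager_tracing": False,  # True traces to tf.functions for speed
})
```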
Serving and Offline
Multi-Agent and Hierarchical
- Rock-paper-scissors:
- Example of different heuristic and learned policies competing against each other in rock-paper-scissors.
- PPO with centralized critic on two-step game:
- Example of customizing PPO to leverage a centralized value function.
- Centralized critic in the env:
- A simpler method of implementing a centralized critic by augmenting agent observations with global information.
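The idea reduces to an env that hands every agent the other agents’ state as part of its own observation. Below is a toy two-agent sketch; the class name, dynamics, and spaces are all illustrative, not the two-step game from the linked example:

```python
import numpy as np
from gym.spaces import Box
from ray.rllib.env.multi_agent_env import MultiAgentEnv

class GlobalObsEnv(MultiAgentEnv):
    """Each agent observes its own state plus the other agent's, so an
    ordinary per-policy value function effectively sees global state."""

    def __init__(self, env_config=None):
        self.observation_space = Box(-1.0, 1.0, shape=(4,))  # own + other
        self.action_space = Box(-1.0, 1.0, shape=(2,))
        self.state = {}

    def _obs(self):
        return {
            "agent_1": np.concatenate(
                [self.state["agent_1"], self.state["agent_2"]]),
            "agent_2": np.concatenate(
                [self.state["agent_2"], self.state["agent_1"]]),
        }

    def reset(self):
        self.state = {aid: np.zeros(2, dtype=np.float32)
                      for aid in ("agent_1", "agent_2")}
        return self._obs()

    def step(self, action_dict):
        for aid, action in action_dict.items():
            self.state[aid] = np.clip(
                self.state[aid] + 0.1 * action, -1.0, 1.0).astype(np.float32)
        rewards = {aid: -float(np.abs(s).sum())
                   for aid, s in self.state.items()}
        # Bound episode length via the `horizon` config option.
        return self._obs(), rewards, {"__all__": False}, {}
```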
- Hand-coded policy:
- Example of running a custom hand-coded policy alongside trainable policies.
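A hand-coded policy just subclasses `Policy` and implements `compute_actions` directly (a sketch; the random heuristic stands in for whatever fixed rule you want):

```python
from ray.rllib.policy.policy import Policy

class RandomHeuristic(Policy):
    """Plays a uniformly random action; never learns."""

    def compute_actions(self, obs_batch, state_batches=None,
                        prev_action_batch=None, prev_reward_batch=None,
                        info_batch=None, episodes=None, **kwargs):
        actions = [self.action_space.sample() for _ in obs_batch]
        return actions, [], {}

    def learn_on_batch(self, samples):
        return {}  # nothing to learn

    def get_weights(self):
        return {}

    def set_weights(self, weights):
        pass
```

Plugged into the `multiagent.policies` dict (and left out of `policies_to_train`), it runs alongside learning policies unchanged.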
- Weight sharing between policies:
- Example of how to define weight-sharing layers between two different policies.
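In TF2/Keras terms, one way to share weights is a layer created once at module level and reused by every model instance (this mirrors the variable-scope reuse trick the TF1 example uses; treat it as a sketch):

```python
import tensorflow as tf
from ray.rllib.models.tf.tf_modelv2 import TFModelV2

# Created once per process, so every model instance below shares it.
SHARED_HIDDEN = tf.keras.layers.Dense(64, activation="relu", name="shared")

class SharedLayerModel(TFModelV2):
    def __init__(self, obs_space, action_space, num_outputs,
                 model_config, name):
        super().__init__(obs_space, action_space, num_outputs,
                         model_config, name)
        inputs = tf.keras.layers.Input(shape=obs_space.shape)
        hidden = SHARED_HIDDEN(inputs)                        # shared
        logits = tf.keras.layers.Dense(num_outputs)(hidden)   # per-policy
        value = tf.keras.layers.Dense(1)(hidden)              # per-policy
        self.base_model = tf.keras.Model(inputs, [logits, value])

    def forward(self, input_dict, state, seq_lens):
        logits, self._value_out = self.base_model(input_dict["obs"])
        return logits, state

    def value_function(self):
        return tf.reshape(self._value_out, [-1])
```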
- Multiple trainers:
- Example of alternating training between a DQN trainer and a PPO trainer.
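Condensed, the alternating pattern looks like this (Ray 1.x import paths assumed; both trainers know both policies so they can step the shared env, but each only trains its own):

```python
import gym
import ray
from ray import tune
from ray.rllib.agents.dqn import DQNTrainer
from ray.rllib.agents.dqn.dqn_tf_policy import DQNTFPolicy
from ray.rllib.agents.ppo import PPOTrainer
from ray.rllib.agents.ppo.ppo_tf_policy import PPOTFPolicy
# Example-env import path as of Ray 1.x.
from ray.rllib.examples.env.multi_agent import MultiAgentCartPole

ray.init()
tune.register_env("multi_cartpole",
                  lambda _: MultiAgentCartPole({"num_agents": 4}))

single_env = gym.make("CartPole-v0")
obs_space, act_space = single_env.observation_space, single_env.action_space
policies = {
    "ppo_policy": (PPOTFPolicy, obs_space, act_space, {}),
    "dqn_policy": (DQNTFPolicy, obs_space, act_space, {}),
}

def policy_mapping_fn(agent_id, *args, **kwargs):
    # Even agents play the PPO policy, odd agents the DQN policy.
    return "ppo_policy" if agent_id % 2 == 0 else "dqn_policy"

def make_config(to_train):
    return {
        "env": "multi_cartpole",
        "multiagent": {
            "policies": policies,
            "policy_mapping_fn": policy_mapping_fn,
            "policies_to_train": to_train,
        },
    }

ppo_trainer = PPOTrainer(config=make_config(["ppo_policy"]))
dqn_trainer = DQNTrainer(config=make_config(["dqn_policy"]))

for _ in range(10):
    # Alternate one training step of each algorithm per round.
    print(dqn_trainer.train()["episode_reward_mean"])
    print(ppo_trainer.train()["episode_reward_mean"])
```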
- Hierarchical training:
- Example of hierarchical training using the multi-agent API.
Community Examples
- NeuroCuts:
- Example of building packet classification trees using RLlib / multi-agent in a bandit-like setting.
- NeuroVectorizer:
- Example of learning optimal LLVM vectorization compiler pragmas for loops in C and C++ code using RLlib.
- Roboschool / SageMaker:
- Example of training robotic control policies in SageMaker with RLlib.
- StarCraft2:
- Example of training in StarCraft2 maps with RLlib / multi-agent.
- Traffic Flow:
- Example of optimizing mixed-autonomy traffic simulations with RLlib / multi-agent.