This page is an index of examples for the various use cases and features of RLlib.
If any example is broken, or if you’d like to add an example to this page, feel free to raise an issue on our GitHub repository.
Training Workflows
- Custom training workflows:
- Example of how to use Tune’s support for custom training functions to implement custom training workflows (first sketch after this list).
- Curriculum learning:
- Example of how to adjust the configuration of an environment over time (second sketch below).
- Custom metrics:
- Example of how to output custom training metrics to TensorBoard (third sketch below).
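A minimal sketch of the custom-training-function pattern, assuming the classic Trainer API (`ray.rllib.agents.ppo.PPOTrainer`) and Tune’s `(config, reporter)` function-trainable signature; `my_train_fn` is an illustrative name, not part of RLlib:

```python
import ray
from ray import tune
from ray.rllib.agents.ppo import PPOTrainer

def my_train_fn(config, reporter):
    # Build and drive the trainer by hand instead of passing a
    # trainer class to tune.run().
    trainer = PPOTrainer(env="CartPole-v0", config=config)
    for _ in range(10):
        result = trainer.train()
        reporter(**result)  # report metrics back to Tune
    trainer.stop()

if __name__ == "__main__":
    ray.init()
    tune.run(my_train_fn, config={"num_workers": 0})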
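A sketch of the curriculum pattern, driving task changes from a legacy `on_train_result` callback; `set_task()` is a method your own env is assumed to define, and the reward thresholds are placeholders:

```python
def on_train_result(info):
    result = info["result"]
    # Placeholder thresholds: promote the env to a harder task as the
    # mean reward improves.
    if result["episode_reward_mean"] > 200:
        task = 2
    elif result["episode_reward_mean"] > 100:
        task = 1
    else:
        task = 0
    trainer = info["trainer"]
    trainer.workers.foreach_worker(
        lambda ev: ev.foreach_env(lambda env: env.set_task(task)))

config = {
    # set_task() above is a method your own env class must define.
    "callbacks": {"on_train_result": on_train_result},
}
```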
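A sketch of recording a custom metric, assuming the dict-of-callbacks config style; anything written to `episode.custom_metrics` is aggregated per training iteration and shows up in TensorBoard:

```python
import numpy as np

def on_episode_end(info):
    episode = info["episode"]
    # Record the norm of the final observation; appears in TensorBoard
    # under custom_metrics/ as mean/min/max per iteration.
    obs = episode.last_observation_for()
    episode.custom_metrics["final_obs_norm"] = float(np.linalg.norm(obs))

config = {"callbacks": {"on_episode_end": on_episode_end}}
```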
Custom Envs and Models
- Registering a custom env:
- Example of defining and registering a gym env for use with RLlib (first sketch after this list).
- Subprocess environment:
- Example of how to ensure subprocesses spawned by envs are killed when RLlib exits (second sketch below).
- Batch normalization:
- Example of adding batch norm layers to a custom model (third sketch below).
- Parametric actions:
- Example of how to handle variable-length or parametric action spaces (fourth sketch below).
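A sketch of defining and registering a custom gym env, modeled on RLlib’s SimpleCorridor example; the registered name `"corridor"` is arbitrary:

```python
import gym
from gym.spaces import Box, Discrete
from ray.tune.registry import register_env

class SimpleCorridor(gym.Env):
    """Walk right along a corridor to reach the goal position."""

    def __init__(self, env_config):
        self.end_pos = env_config.get("corridor_length", 10)
        self.cur_pos = 0
        self.action_space = Discrete(2)  # 0 = left, 1 = right
        self.observation_space = Box(0.0, float(self.end_pos), shape=(1,))

    def reset(self):
        self.cur_pos = 0
        return [self.cur_pos]

    def step(self, action):
        if action == 0 and self.cur_pos > 0:
            self.cur_pos -= 1
        elif action == 1:
            self.cur_pos += 1
        done = self.cur_pos >= self.end_pos
        return [self.cur_pos], 1.0 if done else -0.1, done, {}

# Register under a short name, then set "corridor" as the `env` config.
register_env("corridor", lambda env_config: SimpleCorridor(env_config))
```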
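A sketch of the subprocess-cleanup idea: register an `atexit` hook so the helper process dies with the worker. The `sleep` command below merely stands in for a real simulator child process:

```python
import atexit
import subprocess
import gym
from gym.spaces import Discrete

class EnvWithSubprocess(gym.Env):
    """Env whose helper subprocess is cleaned up on worker exit."""

    def __init__(self, env_config):
        self.action_space = Discrete(2)
        self.observation_space = Discrete(2)
        # "sleep" stands in for a real simulator child process.
        self.proc = subprocess.Popen(["sleep", "3600"])
        # Kill the child even if the worker process exits abruptly.
        atexit.register(self.proc.kill)

    def reset(self):
        return 0

    def step(self, action):
        return 0, 0.0, True, {}
```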
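A sketch of a custom model with a batch norm layer, assuming the TFModelV2 API; batch norm needs the `is_training` flag RLlib passes in `input_dict`, and older RLlib versions may additionally require registering the model’s variables:

```python
import tensorflow as tf
from ray.rllib.models import ModelCatalog
from ray.rllib.models.tf.tf_modelv2 import TFModelV2

class BatchNormModel(TFModelV2):
    def __init__(self, obs_space, action_space, num_outputs,
                 model_config, name):
        super(BatchNormModel, self).__init__(
            obs_space, action_space, num_outputs, model_config, name)
        self.hidden = tf.keras.layers.Dense(64, activation="relu")
        self.bn = tf.keras.layers.BatchNormalization()
        self.logits = tf.keras.layers.Dense(num_outputs)
        self.value = tf.keras.layers.Dense(1)

    def forward(self, input_dict, state, seq_lens):
        x = self.hidden(input_dict["obs"])
        # Batch norm behaves differently in training vs. inference,
        # so it needs the is_training flag RLlib provides.
        x = self.bn(x, training=input_dict["is_training"])
        self._value_out = self.value(x)
        return self.logits(x), state

    def value_function(self):
        return tf.reshape(self._value_out, [-1])

ModelCatalog.register_custom_model("bn_model", BatchNormModel)
# Then select it via: config = {"model": {"custom_model": "bn_model"}}
```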
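A sketch of action masking for parametric action spaces, assuming the env emits a Dict observation with the (hypothetical) keys `"real_obs"` and `"action_mask"`; invalid actions get a logit of effectively negative infinity:

```python
import tensorflow as tf
from ray.rllib.models import ModelCatalog
from ray.rllib.models.tf.tf_modelv2 import TFModelV2

class ParametricActionsModel(TFModelV2):
    """Masks out invalid actions by pushing their logits toward -inf."""

    def __init__(self, obs_space, action_space, num_outputs,
                 model_config, name):
        super(ParametricActionsModel, self).__init__(
            obs_space, action_space, num_outputs, model_config, name)
        self.hidden = tf.keras.layers.Dense(64, activation="relu")
        self.logits = tf.keras.layers.Dense(num_outputs)
        self.value = tf.keras.layers.Dense(1)

    def forward(self, input_dict, state, seq_lens):
        # The env's Dict observation carries the true observation plus
        # a 0/1 mask over the fixed-size action set.
        obs = input_dict["obs"]["real_obs"]
        mask = input_dict["obs"]["action_mask"]
        x = self.hidden(obs)
        self._value_out = self.value(x)
        # log(0) = -inf for masked actions; clamp to a finite minimum.
        inf_mask = tf.maximum(tf.math.log(mask), tf.float32.min)
        return self.logits(x) + inf_mask, state

    def value_function(self):
        return tf.reshape(self._value_out, [-1])

ModelCatalog.register_custom_model("pa_model", ParametricActionsModel)
```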
Serving and Offline
Multi-Agent and Hierarchical
- Weight sharing between policies:
- Example of how to define weight-sharing layers between two different policies (first sketch after this list).
- Multiple trainers:
- Example of alternating training between DQN and PPO trainers (second sketch below).
- Hierarchical training:
- Example of hierarchical training using the multi-agent API (third sketch below).
- Traffic Flow:
- Example of optimizing mixed-autonomy traffic simulations with RLlib / multi-agent.
- Roboschool / SageMaker:
- Example of training robotic control policies in SageMaker with RLlib.
- StarCraft2:
- Example of training on StarCraft2 maps with RLlib / multi-agent.
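One simple way to share weights is to have both policies’ custom models call the same module-level Keras layer, so every instance reuses one set of variables (the classic TF1 example achieves this with variable scopes and `reuse` instead). A sketch:

```python
import tensorflow as tf
from ray.rllib.models.tf.tf_modelv2 import TFModelV2

# A single module-level layer object: every model instance that calls
# it reuses the same weights, so policies built with this model share
# their hidden layer.
SHARED_HIDDEN = tf.keras.layers.Dense(64, activation="relu", name="shared")

class SharedLayerModel(TFModelV2):
    def __init__(self, obs_space, action_space, num_outputs,
                 model_config, name):
        super(SharedLayerModel, self).__init__(
            obs_space, action_space, num_outputs, model_config, name)
        self.logits = tf.keras.layers.Dense(num_outputs)
        self.value = tf.keras.layers.Dense(1)

    def forward(self, input_dict, state, seq_lens):
        x = SHARED_HIDDEN(input_dict["obs"])  # shared across policies
        self._value_out = self.value(x)
        return self.logits(x), state

    def value_function(self):
        return tf.reshape(self._value_out, [-1])
```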
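A sketch of the alternating pattern, assuming the classic Trainer API (`ray.rllib.agents.*`); the full example additionally assigns different policies of one shared multi-agent env to each trainer and syncs weights between them:

```python
import ray
from ray.rllib.agents.dqn import DQNTrainer
from ray.rllib.agents.ppo import PPOTrainer

ray.init()
ppo = PPOTrainer(env="CartPole-v0")
dqn = DQNTrainer(env="CartPole-v0")

for i in range(10):
    # One training step of each algorithm per round. The full example
    # also syncs weights between the two via get_weights()/set_weights().
    print("== round", i, "==")
    print("PPO reward:", ppo.train()["episode_reward_mean"])
    print("DQN reward:", dqn.train()["episode_reward_mean"])
```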
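A toy sketch of a hierarchical MultiAgentEnv: a high-level agent sets a goal every few steps and a low-level agent is rewarded for matching it. The agent IDs, spaces, and reward scheme are illustrative; you would map `high_level` and `low_level` to separate policies via the multiagent config:

```python
from ray.rllib.env.multi_agent_env import MultiAgentEnv

class HierarchicalEnv(MultiAgentEnv):
    """Toy hierarchy: every `period` steps the high-level agent picks a
    goal; in between, the low-level agent is rewarded for matching it."""

    def __init__(self, env_config):
        self.period = 5
        self.horizon = 20
        self.t = 0
        self.goal = 0

    def reset(self):
        self.t = 0
        return {"high_level": 0}  # the high-level agent acts first

    def step(self, action_dict):
        self.t += 1
        done = {"__all__": self.t >= self.horizon}
        if "high_level" in action_dict:
            # The high-level action becomes the low level's current goal.
            self.goal = action_dict["high_level"]
            return {"low_level": self.goal}, {"low_level": 0.0}, done, {}
        rew = 1.0 if action_dict["low_level"] == self.goal else 0.0
        if self.t % self.period == 0:
            # Hand control back to the high level; credit it as well.
            return ({"high_level": 0},
                    {"low_level": rew, "high_level": rew}, done, {})
        return {"low_level": self.goal}, {"low_level": rew}, done, {}
```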