If you encounter errors like
blas_thread_init: pthread_create: Resource temporarily unavailable when using many workers,
try setting OMP_NUM_THREADS=1. Similarly, for other resource limit errors, check the
configured system limits with ulimit -a.
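The workaround above can be sketched as a short shell snippet, run before launching the training script:

```shell
# Limit BLAS/OpenMP to one thread per worker process, so many workers
# do not exhaust the per-user thread limit.
export OMP_NUM_THREADS=1

# Inspect the configured system limits (max user processes, open files, ...)
# when chasing other "Resource temporarily unavailable" errors.
ulimit -a
```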
If you encounter out-of-memory errors, consider passing memory limits such as
object_store_memory to ray.init() to reduce memory usage.
For debugging unexpected hangs or performance problems, you can run
ray stack to dump
the stack traces of all Ray workers on the current node, and
ray timeline to dump
a timeline visualization of tasks to a file.
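In a shell session the two commands read as follows (assuming the Ray CLI is on the PATH and a Ray instance is running on the node):

```shell
# Dump the stack traces of all Ray workers on the current node;
# useful for diagnosing unexpected hangs.
ray stack

# Dump a timeline of recent tasks to a JSON file, which can be
# opened in chrome://tracing for visualization.
ray timeline
```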
RLlib currently runs in
tf.compat.v1 mode. This means eager execution is disabled by default, and RLlib imports TF with
import tensorflow.compat.v1 as tf; tf.disable_v2_behavior(). Eager execution can be enabled manually by calling
tf.enable_eager_execution() or setting the
"eager": True trainer config.
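The two options above can be sketched as follows; the config dict shows only the relevant key, and TensorFlow is imported only if it is available:

```python
# Option 1: request eager mode through the trainer config.
eager_config = {"eager": True}

# Option 2: enable eager execution globally in TF1-compat mode.
try:
    import tensorflow.compat.v1 as tf
    tf.enable_eager_execution()  # must run before any TF ops are created
    assert tf.executing_eagerly()
except ImportError:
    pass  # TensorFlow not installed in this environment.
```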