To learn more about these concepts, see also the ICML paper.
Policy graph classes encapsulate the core numerical components of RL algorithms. This typically includes the policy model that determines actions to take, a trajectory postprocessor for experiences, and a loss function to improve the policy given postprocessed experiences. For a simple example, see the policy gradients graph definition.
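The three components above can be illustrated with a minimal sketch. This is a hypothetical, framework-free stand-in, not RLlib's actual PolicyGraph API (real signatures differ and the policy model would be a neural network):

```python
import random

class PolicyGraphSketch:
    """Hypothetical sketch of a policy graph's three core pieces:
    a policy model, a trajectory postprocessor, and a loss/update step."""

    def __init__(self, num_actions, lr=0.01):
        self.num_actions = num_actions
        self.weights = [0.0] * num_actions  # stand-in for a policy model
        self.lr = lr

    def compute_actions(self, obs_batch):
        # Policy model: map a batch of observations to actions
        # (random here, purely for brevity).
        return [random.randrange(self.num_actions) for _ in obs_batch]

    def postprocess_trajectory(self, batch):
        # Trajectory postprocessor: e.g. compute discounted returns.
        gamma, ret, returns = 0.99, 0.0, []
        for r in reversed(batch["rewards"]):
            ret = r + gamma * ret
            returns.append(ret)
        batch["returns"] = list(reversed(returns))
        return batch

    def learn_on_batch(self, batch):
        # Loss/update: nudge weights of taken actions toward higher returns.
        for a, g in zip(batch["actions"], batch["returns"]):
            self.weights[a] += self.lr * g
        return {"mean_return": sum(batch["returns"]) / len(batch["returns"])}
```

Separating these pieces is what lets the surrounding machinery (evaluation, optimization) stay algorithm-agnostic.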
Most interaction with deep learning frameworks is isolated to the PolicyGraph interface, which allows RLlib to support multiple frameworks. To simplify the definition of policy graphs, RLlib includes TensorFlow- and PyTorch-specific templates.
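The template idea can be sketched framework-agnostically. The builder function and its arguments below are hypothetical names for illustration, not RLlib's real template API (which additionally wires up framework-specific sessions, placeholders, and optimizers):

```python
def build_policy_graph(name, action_fn, loss_fn):
    """Hypothetical template: assemble a policy graph class from an
    action-sampling function and a loss function, so algorithm authors
    only write the numerical pieces."""

    class BuiltPolicyGraph:
        def compute_actions(self, obs_batch):
            return [action_fn(obs) for obs in obs_batch]

        def loss(self, batch):
            return loss_fn(batch)

    BuiltPolicyGraph.__name__ = name
    return BuiltPolicyGraph

# Example: a policy-gradient-style graph assembled from two functions.
PGGraph = build_policy_graph(
    "PGGraph",
    action_fn=lambda obs: 0,                       # trivial policy
    loss_fn=lambda batch: -sum(batch["returns"]),  # maximize returns
)
```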
Given an environment and policy graph, policy evaluation produces batches of experiences. This is your classic “environment interaction loop”. Efficient policy evaluation can be burdensome to get right, especially when leveraging vectorization, RNNs, or when operating in a multi-agent environment. RLlib provides a PolicyEvaluator class that manages all of this, and this class is used in most RLlib algorithms.
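The core of that interaction loop can be sketched in a few lines; this is a simplified stand-in for what a PolicyEvaluator manages, with vectorization, RNN state, and multi-agent handling omitted. The environment here follows a minimal, hypothetical reset()/step() protocol:

```python
def sample_batch(env, policy, batch_size):
    """Sketch of the classic environment interaction loop: roll the policy
    out in the environment and collect a fixed-size batch of experiences."""
    batch = {"obs": [], "actions": [], "rewards": [], "dones": []}
    obs = env.reset()
    for _ in range(batch_size):
        action = policy(obs)
        next_obs, reward, done = env.step(action)
        batch["obs"].append(obs)
        batch["actions"].append(action)
        batch["rewards"].append(reward)
        batch["dones"].append(done)
        # Restart the episode when it terminates.
        obs = env.reset() if done else next_obs
    return batch
```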
You can also use policy evaluation standalone to produce batches of experiences: call ev.sample() on an evaluator instance, or ev.sample.remote() in parallel on evaluator instances created as Ray actors.
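The two usage modes can be sketched as follows. To keep the sketch self-contained, threads stand in for Ray actors and a toy class stands in for PolicyEvaluator; with Ray, the instances would be actors and sampling would go through ev.sample.remote() plus ray.get:

```python
from concurrent.futures import ThreadPoolExecutor

class EvaluatorSketch:
    """Toy stand-in for a policy evaluator; sample() returns a batch
    of experiences."""

    def __init__(self, worker_id):
        self.worker_id = worker_id

    def sample(self):
        # A real evaluator runs the environment interaction loop here.
        return {"worker": self.worker_id, "rewards": [1.0] * 4}

# Standalone use: ev.sample() on a single instance.
local_batch = EvaluatorSketch(0).sample()

# Parallel use: threads stand in for Ray actors
# (ev.sample.remote() + ray.get in real RLlib).
evaluators = [EvaluatorSketch(i) for i in range(1, 4)]
with ThreadPoolExecutor() as pool:
    batches = list(pool.map(lambda ev: ev.sample(), evaluators))
```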
Policy optimizers abstract distributed optimization strategies into reusable modules. For example, in A3C you'd want to compute gradients asynchronously on different workers and apply them to a central policy graph replica; this strategy is implemented by the AsyncGradientsOptimizer. Alternatively, experiences can be gathered synchronously in parallel and the model optimized centrally, as in the SyncSamplesOptimizer.
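A single step of the synchronous strategy can be sketched as below. This is a simplified, hypothetical stand-in for SyncSamplesOptimizer, assuming each evaluator exposes a sample() method returning a dict batch, and the gather happens sequentially here rather than in parallel:

```python
def sync_samples_step(evaluators, learn_on_batch):
    """Sketch of a SyncSamplesOptimizer-style step: gather experiences
    from all evaluators, concatenate them into one batch, and optimize
    the model centrally."""
    samples = [ev.sample() for ev in evaluators]
    # Concatenate per-evaluator batches field by field.
    combined = {
        key: [x for batch in samples for x in batch[key]]
        for key in samples[0]
    }
    # Central update on the combined batch (e.g. a policy graph's
    # learn_on_batch); returns whatever stats the update produces.
    return learn_on_batch(combined)
```

The asynchronous A3C-style strategy differs in that workers ship gradients, not samples, and the central replica applies them as they arrive.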