
A Brief Introduction to Proximal Policy Optimization

Proximal Policy Optimization (PPO) is an advancement in the field of Reinforcement Learning that improves on Trust Region Policy Optimization (TRPO). The algorithm was proposed by researchers at OpenAI in 2017 and showed remarkable performance in their implementation. To understand and appreciate the algorithm, we first need to understand what a policy is.

Note: This post is aimed at readers who already have a basic understanding of Reinforcement Learning.

A policy, in Reinforcement Learning terminology, is a mapping from the state space to the action space. It can be thought of as a set of instructions for the RL agent: which action to take depending on the state of the environment it is currently in. When we talk about evaluating an agent, we generally mean evaluating the policy to find out how well the agent performs when following it. This is where Policy Gradient methods play a vital role. When an agent is “learning” and does not yet know which actions yield the best results in which states, it improves by computing policy gradients. The policy is typically represented by a neural network: the gradient of the output, i.e., the log-probabilities of the actions in a particular state, is taken with respect to the parameters of the policy network, and the policy is updated in the direction indicated by those gradients.
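As a concrete (if simplified) illustration, a single vanilla policy-gradient update might look like the following PyTorch sketch. The network architecture, the stand-in batch of transitions and the hyperparameters here are illustrative assumptions for the sketch, not anything prescribed by this article.

```python
import torch
import torch.nn as nn

# Illustrative policy network: maps a state to a distribution over actions.
# Sizes, data and hyperparameters below are assumptions made for this sketch.
class PolicyNet(nn.Module):
    def __init__(self, state_dim=4, n_actions=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.Tanh(), nn.Linear(64, n_actions)
        )

    def forward(self, state):
        return torch.distributions.Categorical(logits=self.net(state))

policy = PolicyNet()
optimizer = torch.optim.Adam(policy.parameters(), lr=3e-4)

# Stand-in batch of transitions (in practice these come from rolling out the policy).
states = torch.randn(32, 4)            # states visited by the agent
actions = torch.randint(0, 2, (32,))   # actions the agent actually took
returns = torch.randn(32)              # returns (or advantages) observed for those actions

log_probs = policy(states).log_prob(actions)   # log pi_theta(a_t | s_t)
loss = -(log_probs * returns).mean()           # negate: optimizers minimize, we want ascent

optimizer.zero_grad()
loss.backward()    # gradients of the log-probabilities w.r.t. the policy parameters
optimizer.step()   # nudge the policy toward actions that earned higher returns
```

The key idea is visible in the last three lines: actions that led to higher returns have their log-probabilities pushed up, and actions that led to lower returns are made less likely.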



While this tried and tested approach works well, the major disadvantage of these methods is their sensitivity to hyperparameter choices such as step size and learning rate, along with their poor sample efficiency. Unlike supervised learning, which has a fairly reliable route to convergence with relatively little hyperparameter tuning, reinforcement learning is far more complex, with many moving parts that need to be considered.

PPO aims to strike a balance between ease of implementation, ease of tuning and sample complexity (sample efficiency), while computing an update at each step that minimizes the cost function and keeps the deviation from the previous policy relatively small. PPO is, in fact, a policy gradient method itself, and it too learns from data collected online. It simply ensures that the updated policy does not differ too much from the old policy, which keeps the variance of training low. The most common implementation of PPO is the Actor-Critic model, which uses two deep neural networks: one selects the actions (the actor) and the other estimates the value of the visited states from the rewards received (the critic). The mathematical equation of PPO, its clipped surrogate objective, is shown below:
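$$
L^{CLIP}(\theta) \;=\; \hat{\mathbb{E}}_t\left[\min\Big(r_t(\theta)\,\hat{A}_t,\;\operatorname{clip}\big(r_t(\theta),\,1-\epsilon,\,1+\epsilon\big)\,\hat{A}_t\Big)\right],
\qquad
r_t(\theta) \;=\; \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\text{old}}}(a_t \mid s_t)}
$$

Here $r_t(\theta)$ is the probability ratio between the new and the old policy, $\hat{A}_t$ is the estimated advantage at timestep $t$, and $\epsilon$ is a small clipping hyperparameter (the paper uses values around 0.2). PPO maximizes this clipped objective, which is equivalent to minimizing its negative as a cost.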



The following important inferences can be drawn from the PPO equation:

- The ratio r_t(θ) measures how much more (or less) probable the taken action has become under the new policy compared to the old one.
- The clip term restricts this ratio to the interval [1 − ε, 1 + ε], so a single update cannot push the new policy too far away from the old one.
- Taking the minimum of the clipped and unclipped terms gives a pessimistic (lower-bound) estimate of the objective, so the policy is only credited for improvements that stay within this trusted region.

When implemented in Actor-Critic style, the working PPO algorithm, in its entirety, alternates between two phases: the current policy is run in the environment to collect a batch of observations and advantage estimates, and the clipped surrogate objective is then optimized for a few epochs on minibatches drawn from that batch.
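As a rough illustration, one such clipped update in Actor-Critic style might look like the following PyTorch sketch. The network architectures, the hyperparameters, the 0.5 weight on the critic loss and the stand-in minibatch are assumptions made for this sketch, not values prescribed by the PPO paper.

```python
import torch
import torch.nn as nn

# Illustrative actor and critic networks; sizes and hyperparameters are
# assumptions for the sketch.
STATE_DIM, N_ACTIONS, EPS_CLIP = 4, 2, 0.2

actor = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.Tanh(), nn.Linear(64, N_ACTIONS))
critic = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.Tanh(), nn.Linear(64, 1))
optimizer = torch.optim.Adam(list(actor.parameters()) + list(critic.parameters()), lr=3e-4)

def ppo_update(states, actions, old_log_probs, returns, advantages, epochs=4):
    """One PPO-Clip update on a collected minibatch of transitions."""
    for _ in range(epochs):
        dist = torch.distributions.Categorical(logits=actor(states))
        new_log_probs = dist.log_prob(actions)

        # Importance-sampling ratio r_t(theta) = pi_new / pi_old.
        ratio = torch.exp(new_log_probs - old_log_probs)

        # Clipped surrogate objective: take the pessimistic (min) of the
        # unclipped and clipped terms, then negate it for gradient descent.
        unclipped = ratio * advantages
        clipped = torch.clamp(ratio, 1 - EPS_CLIP, 1 + EPS_CLIP) * advantages
        actor_loss = -torch.min(unclipped, clipped).mean()

        # Critic regresses the state value toward the observed returns.
        critic_loss = (critic(states).squeeze(-1) - returns).pow(2).mean()

        optimizer.zero_grad()
        (actor_loss + 0.5 * critic_loss).backward()
        optimizer.step()

# Stand-in minibatch (in practice this comes from rolling out the current policy).
states = torch.randn(64, STATE_DIM)
actions = torch.randint(0, N_ACTIONS, (64,))
with torch.no_grad():
    old_log_probs = torch.distributions.Categorical(logits=actor(states)).log_prob(actions)
returns = torch.randn(64)
advantages = torch.randn(64)

ppo_update(states, actions, old_log_probs, returns, advantages)
```

In practice, the returns and advantages would be computed from the collected trajectories and the critic's value estimates (for example with Generalized Advantage Estimation) rather than sampled at random as in this stand-in.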

What we can observe is that small batches of observations, aka “minibatches”, are used for the update and then thrown away so that a new batch of observations can be incorporated. The policy update is clipped by ε to a small region, so that no single update is large enough to be irrecoverably harmful. In short, PPO behaves like other policy gradient methods in the sense that it also computes action probabilities in the forward pass and calculates gradients in the backward pass to improve those decisions. Like its predecessor TRPO, it uses an importance-sampling ratio. However, it also ensures that the new policy stays within a certain proximity of the old policy (controlled by ε), so that very large updates are not allowed. It has become one of the most widely used policy optimization algorithms in the field of reinforcement learning.

