
Fundamental Iterative Methods of Reinforcement Learning

source link: https://mc.ai/fundamental-iterative-methods-of-reinforcement-learning/

Leading towards reinforcement learning

Value Iteration

Learn the values for all states, then act greedily with respect to those values. Value Iteration learns the values of the states directly from the Bellman Update, which is guaranteed to converge to the optimal values under some non-restrictive conditions.
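
In standard notation, with T(s, a, s') the transition probabilities, R(s, a, s') the rewards, and \gamma the discount factor, the Bellman Update for the state values reads:

V_{k+1}(s) = \max_a \sum_{s'} T(s, a, s') \left[ R(s, a, s') + \gamma V_k(s') \right]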

Learning a policy may be more direct than learning a value. Learning a value exactly can take an unbounded amount of time to converge to the numerical precision of a 64-bit float (think of a running average that folds in a constant at every iteration: starting from an estimate of 0, it keeps adding smaller and smaller nonzero corrections forever).
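
To make the update and its ever-shrinking corrections concrete, here is a minimal Value Iteration sketch on a made-up two-state, two-action MDP; the arrays T and R and the discount gamma below are purely illustrative assumptions, not numbers from the article.

import numpy as np

# Hypothetical 2-state, 2-action MDP: T[s, a, s'] are transition
# probabilities and R[s, a, s'] are rewards. All numbers are made up
# purely to illustrate the update.
T = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.0, 1.0], [0.5, 0.5]]])
R = np.array([[[1.0, 0.0], [0.0, 2.0]],
              [[0.0, 0.0], [1.0, 1.0]]])
gamma = 0.9

V = np.zeros(2)  # start from an all-zero value estimate
for k in range(200):
    # Bellman Update: expected return of each action, then max over actions.
    Q = (T * (R + gamma * V)).sum(axis=2)   # shape (states, actions)
    V_new = Q.max(axis=1)
    delta = np.abs(V_new - V).max()
    V = V_new
    if k % 50 == 0:
        print(f"iter {k:3d}  max change {delta:.2e}")
# The change shrinks geometrically but never hits exactly zero in finite time.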

Policy Iteration

Learn a policy in tandem with the values. Policy learning incrementally looks at the current values and extracts a policy. Because the action space is finite, the hope is that it can converge faster than Value Iteration: conceptually, the last change to the actions happens well before the small rolling-average updates die out. There are two steps to Policy Iteration.

The first is called Policy Extraction, which is how you go from a value function to a policy: take, at each state, the action that maximizes the expected value.
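
In the same notation, with V the current value estimate, the extraction step is the greedy choice:

\pi(s) = \arg\max_a \sum_{s'} T(s, a, s') \left[ R(s, a, s') + \gamma V(s') \right]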

Policy extraction step.

The second step is Policy Evaluation. Policy evaluation takes a policy and runs value iteration conditioned on that policy. The value estimates are tied to the chosen policy, but the iterative algorithm has to be run for far fewer steps to extract the relevant action information.
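
With the action at each state pinned to \pi(s), the fixed-policy Bellman Update becomes:

V^{\pi}_{k+1}(s) = \sum_{s'} T(s, \pi(s), s') \left[ R(s, \pi(s), s') + \gamma V^{\pi}_k(s') \right]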

Policy evaluation step.

Like value iteration, policy iteration is guaranteed to converge for most reasonable MDPs because of the underlying Bellman Update.
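
Putting the two steps together, a minimal Policy Iteration sketch on the same kind of made-up two-state MDP (again, every number here is an illustrative assumption) could look like this:

import numpy as np

# Illustrative 2-state, 2-action MDP: T[s, a, s'] are transition
# probabilities, R[s, a, s'] are rewards; all values are made up.
T = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.0, 1.0], [0.5, 0.5]]])
R = np.array([[[1.0, 0.0], [0.0, 2.0]],
              [[0.0, 0.0], [1.0, 1.0]]])
gamma = 0.9
n_states = T.shape[0]

policy = np.zeros(n_states, dtype=int)          # arbitrary starting policy
V = np.zeros(n_states)

for _ in range(100):                            # outer policy iteration loop
    # Policy Evaluation: Bellman updates with the action fixed by the policy.
    for _sweep in range(50):
        V = np.array([(T[s, policy[s]] * (R[s, policy[s]] + gamma * V)).sum()
                      for s in range(n_states)])
    # Policy Extraction: greedy action under the current value estimate.
    Q = (T * (R + gamma * V)).sum(axis=2)       # shape (states, actions)
    new_policy = Q.argmax(axis=1)
    if np.array_equal(new_policy, policy):      # actions stopped changing
        break
    policy = new_policy

print("greedy policy:", policy, "values:", V)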

Q-value Iteration

The problem with knowing optimal values is that it can be hard to distill a policy from them. The argmax operator is distinctly nonlinear and difficult to optimize over, so Q-value Iteration takes a step towards direct policy extraction: the optimal action at each state is simply the action with the maximum Q-value at that state.

Q-learning of an MDP.
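
In the same notation as above, the Q-value iteration update, and the policy read directly off the resulting table, are:

Q_{k+1}(s, a) = \sum_{s'} T(s, a, s') \left[ R(s, a, s') + \gamma \max_{a'} Q_k(s', a') \right]

\pi(s) = \arg\max_a Q(s, a)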

The reason most instruction starts with Value Iteration is that it slots into the Bellman updates a little more naturally. Q-value Iteration requires substituting two of the key MDP value relations into each other. Having done so, it is one step removed from Q-learning, which we will get to know.
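
For comparison with the Value Iteration sketch earlier, here is the same made-up MDP run through Q-value Iteration; the only substantive change is that the table is indexed by (state, action) pairs and the max over actions moves inside the expectation over next states.

import numpy as np

# Same illustrative 2-state, 2-action MDP shape: T[s, a, s'], R[s, a, s'].
# All numbers are made up for the sketch.
T = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.0, 1.0], [0.5, 0.5]]])
R = np.array([[[1.0, 0.0], [0.0, 2.0]],
              [[0.0, 0.0], [1.0, 1.0]]])
gamma = 0.9

Q = np.zeros((2, 2))                     # Q[s, a], initialised to zero
for _ in range(200):
    # Expectation over s' of reward plus the discounted best next Q-value.
    Q = (T * (R + gamma * Q.max(axis=1))).sum(axis=2)

# The policy now falls straight out of the table, no extraction pass needed.
print("greedy policy:", Q.argmax(axis=1))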

