Model-free (reinforcement learning)

In reinforcement learning (RL), a model-free algorithm (as opposed to a model-based one) is an algorithm that does not use the transition probability distribution (and the reward function) associated with the Markov decision process (MDP),[1] which, in RL, represents the problem to be solved. The transition probability distribution (or transition model) and the reward function are often collectively called the "model" of the environment (or MDP), hence the name "model-free". A model-free RL algorithm can be thought of as an "explicit" trial-and-error algorithm.[1] An example of a model-free algorithm is Q-learning.
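
To make the trial-and-error idea concrete, the following is a minimal sketch of tabular Q-learning in Python. The toy chain environment, its dynamics, and the hyperparameters are illustrative assumptions, not part of the article; the point is that the agent improves its Q-table purely from sampled transitions, never consulting the transition probabilities or the reward function of the MDP.

    import random

    # Minimal tabular Q-learning on a toy 5-state chain (states 0..4).
    # The environment and hyperparameters below are illustrative assumptions.
    N_STATES = 5
    ACTIONS = [0, 1]                      # 0 = left, 1 = right
    ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

    def step(state, action):
        # Toy dynamics: walk along the chain; state 4 is terminal with reward 1.
        next_state = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        return next_state, reward, next_state == N_STATES - 1

    Q = [[0.0 for _ in ACTIONS] for _ in range(N_STATES)]  # the learned Q-table

    for episode in range(500):
        state, done = 0, False
        while not done:
            if random.random() < EPSILON:                   # explore
                action = random.choice(ACTIONS)
            else:                                           # exploit (random tie-break)
                best = max(Q[state])
                action = random.choice([a for a in ACTIONS if Q[state][a] == best])
            next_state, reward, done = step(state, action)
            # Model-free update: bootstrap from the sampled (s, a, r, s') tuple only.
            target = reward + (0.0 if done else GAMMA * max(Q[next_state]))
            Q[state][action] += ALPHA * (target - Q[state][action])
            state = next_state

    print(Q)  # action 1 (right) should dominate in every state

After training, the greedy policy with respect to Q moves right in every state, even though the agent was never given the transition model.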

Key model-free reinforcement learning algorithms

Algorithm | Description                                     | Model      | Policy     | Action space | State space | Operator
DQN       | Deep Q-Network                                  | Model-free | Off-policy | Discrete     | Continuous  | Q-value
DDPG      | Deep Deterministic Policy Gradient              | Model-free | Off-policy | Continuous   | Continuous  | Q-value
A3C       | Asynchronous Advantage Actor-Critic             | Model-free | On-policy  | Continuous   | Continuous  | Advantage
TRPO      | Trust Region Policy Optimization                | Model-free | On-policy  | Continuous   | Continuous  | Advantage
PPO       | Proximal Policy Optimization                    | Model-free | On-policy  | Continuous   | Continuous  | Advantage
TD3       | Twin Delayed Deep Deterministic Policy Gradient | Model-free | Off-policy | Continuous   | Continuous  | Q-value
SAC       | Soft Actor-Critic                               | Model-free | Off-policy | Continuous   | Continuous  | Advantage
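
The "Operator" column distinguishes methods that learn action values (a Q-function) from actor-critic methods that rely on an advantage estimate. As an illustration, a common one-step advantage estimate of the kind used by methods in the A3C/PPO family can be sketched as follows; the function name and value inputs are hypothetical, for illustration only.

    GAMMA = 0.99

    def one_step_advantage(reward, v_s, v_next, done):
        # A(s, a) ~ r + gamma * V(s') - V(s): the TD error of a learned critic.
        bootstrap = 0.0 if done else GAMMA * v_next
        return reward + bootstrap - v_s

    # Hypothetical transition: reward 1.0, V(s) = 0.5, V(s') = 0.8, non-terminal.
    print(one_step_advantage(1.0, 0.5, 0.8, False))  # 1.0 + 0.99 * 0.8 - 0.5 = 1.292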

References

  1. Sutton, Richard S.; Barto, Andrew G. (November 13, 2018). Reinforcement Learning: An Introduction (PDF) (2nd ed.). A Bradford Book. p. 552. ISBN 0262039249. Retrieved 18 February 2019.