Reinforcement learning


In machine learning and optimal control, reinforcement learning is concerned with how an intelligent agent should take actions in a dynamic environment in order to maximize a reward signal. Reinforcement learning is one of the three basic machine learning paradigms, alongside supervised learning and unsupervised learning.
While supervised learning and unsupervised learning algorithms respectively attempt to discover patterns in labeled and unlabeled data, reinforcement learning involves training an agent through interactions with its environment. To learn to maximize rewards from these interactions, the agent must choose between trying new actions to learn more about the environment and using its current knowledge of the environment to take the best-known action. The search for the optimal balance between these two strategies is known as the exploration–exploitation dilemma.
The environment is typically stated in the form of a Markov decision process, as many reinforcement learning algorithms use dynamic programming techniques. The main difference between classical dynamic programming methods and reinforcement learning algorithms is that the latter do not assume knowledge of an exact mathematical model of the Markov decision process, and they target large Markov decision processes where exact methods become infeasible.

Principles

Due to its generality, reinforcement learning is studied in many disciplines, such as game theory, control theory, operations research, information theory, simulation-based optimization, multi-agent systems, swarm intelligence, and statistics. In the operations research and control literature, RL is called approximate dynamic programming, or neuro-dynamic programming. The problems of interest in RL have also been studied in the theory of optimal control, which is concerned mostly with the existence and characterization of optimal solutions, and algorithms for their exact computation, and less with learning or approximation.
Basic reinforcement learning is modeled as a Markov decision process (a minimal illustration of these elements follows the list):
  • A set of environment and agent states (the state space), $\mathcal{S}$;
  • A set of actions (the action space), $\mathcal{A}$, of the agent;
  • $P_a(s, s') = \Pr(S_{t+1} = s' \mid S_t = s, A_t = a)$, the transition probability from state $s$ to state $s'$ under action $a$;
  • $R_a(s, s')$, the immediate reward after transitioning from $s$ to $s'$ under action $a$.
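As an illustration only, a small Markov decision process can be written down as explicit transition and reward tables; the state names, action names, and numbers below are invented for this sketch and are not part of any standard benchmark.

    # A toy MDP with two states and two actions, written as explicit tables.
    # All names and numbers here are illustrative only.
    STATES = ["low", "high"]
    ACTIONS = ["wait", "work"]

    # P[(s, a)] is a list of (next_state, probability) pairs: the transition function P_a(s, s').
    P = {
        ("low", "wait"):  [("low", 1.0)],
        ("low", "work"):  [("high", 0.7), ("low", 0.3)],
        ("high", "wait"): [("high", 0.9), ("low", 0.1)],
        ("high", "work"): [("high", 1.0)],
    }

    # R[(s, a, s_next)] is the immediate reward R_a(s, s') for that transition.
    R = {
        ("low", "wait", "low"): 0.0,
        ("low", "work", "high"): 1.0,
        ("low", "work", "low"): 0.0,
        ("high", "wait", "high"): 2.0,
        ("high", "wait", "low"): 0.0,
        ("high", "work", "high"): 2.0,
    }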
The purpose of reinforcement learning is for the agent to learn an optimal policy that maximizes the reward function or other user-provided reinforcement signal that accumulates from immediate rewards. This is similar to processes that appear to occur in animal psychology. For example, biological brains are hardwired to interpret signals such as pain and hunger as negative reinforcements, and interpret pleasure and food intake as positive reinforcements. In some circumstances, animals learn to adopt behaviors that optimize these rewards. This suggests that animals are capable of reinforcement learning.
A basic reinforcement learning agent interacts with its environment in discrete time steps. At each time step $t$, the agent receives the current state $S_t$ and reward $R_t$. It then chooses an action $A_t$ from the set of available actions, which is subsequently sent to the environment. The environment moves to a new state $S_{t+1}$ and the reward $R_{t+1}$ associated with the transition $(S_t, A_t, S_{t+1})$ is determined. The goal of a reinforcement learning agent is to learn a policy
$\pi: \mathcal{S} \times \mathcal{A} \to [0, 1]$, $\quad \pi(s, a) = \Pr(A_t = a \mid S_t = s)$
that maximizes the expected cumulative reward.
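To make this interaction loop concrete, here is a minimal, library-agnostic sketch in Python; the `env.reset`/`env.step` interface and the `choose_action` function are assumed in the style of common RL toolkits (e.g. Gymnasium), not something specified by the text above.

    # Minimal agent-environment interaction loop (assumed Gymnasium-style interface).
    def run_episode(env, choose_action, max_steps=1000):
        state, _ = env.reset()                  # agent observes the initial state
        total_reward = 0.0
        for t in range(max_steps):
            action = choose_action(state)       # the policy selects an action
            state, reward, terminated, truncated, _ = env.step(action)
            total_reward += reward              # accumulate the reward signal
            if terminated or truncated:
                break
        return total_reward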
Formulating the problem as a Markov decision process assumes the agent directly observes the current environmental state; in this case, the problem is said to have full observability. If the agent only has access to a subset of states, or if the observed states are corrupted by noise, the agent is said to have partial observability, and formally the problem must be formulated as a partially observable Markov decision process. In both cases, the set of actions available to the agent can be restricted. For example, the state of an account balance could be restricted to be positive; if the current value of the state is 3 and the state transition attempts to reduce the value by 4, the transition will not be allowed.
When the agent's performance is compared to that of an agent that acts optimally, the difference in performance yields the notion of regret. In order to act near optimally, the agent must reason about long-term consequences of its actions, although the immediate reward associated with this might be negative.
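As one common formalization (conventions vary and the text above does not fix one), the cumulative regret after $T$ steps can be written as the gap between what an optimal agent would have earned and what the agent actually earned:
$\text{Regret}(T) = T\rho^{*} - \sum_{t=1}^{T} r_t,$
where $\rho^{*}$ denotes the optimal average reward per step and $r_t$ the reward actually received at step $t$, so low regret means the agent's cumulative reward tracks that of an optimal agent.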
Thus, reinforcement learning is particularly well-suited to problems that include a long-term versus short-term reward trade-off. It has been applied successfully to various problems, including energy storage, robot control, photovoltaic generators, backgammon, checkers, Go, and autonomous driving systems.
Two elements make reinforcement learning powerful: the use of samples to optimize performance, and the use of function approximation to deal with large environments. Thanks to these two key components, RL can be used in large environments in the following situations:
  • A model of the environment is known, but an analytic solution is not available;
  • Only a simulation model of the environment is given;
  • The only way to collect information about the environment is to interact with it.
The first two of these problems could be considered planning problems, while the last one could be considered to be a genuine learning problem. However, reinforcement learning converts both planning problems to machine learning problems.

Exploration

The trade-off between exploration and exploitation has been most thoroughly studied through the multi-armed bandit problem and for finite state space Markov decision processes in Burnetas and Katehakis.
Reinforcement learning requires clever exploration mechanisms; randomly selecting actions, without reference to an estimated probability distribution, shows poor performance. The case of finite Markov decision processes is relatively well understood. However, due to the lack of algorithms that scale well with the number of states, simple exploration methods are the most practical.
One such method is $\varepsilon$-greedy, where $0 < \varepsilon < 1$ is a parameter controlling the amount of exploration vs. exploitation. With probability $1 - \varepsilon$, exploitation is chosen, and the agent chooses the action that it believes has the best long-term effect. Alternatively, with probability $\varepsilon$, exploration is chosen, and the action is chosen uniformly at random. $\varepsilon$ is usually a fixed parameter but can be adjusted either according to a schedule, or adaptively based on heuristics.
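A sketch of the idea, assuming tabular action-value estimates `Q[state][action]` (an assumption of this example, not defined in the text above):

    import random

    def epsilon_greedy(Q, state, actions, epsilon=0.1):
        """Pick an action for `state` using epsilon-greedy exploration.

        Q is assumed to be a dict of dicts of estimated action values."""
        if random.random() < epsilon:
            return random.choice(actions)                # explore: uniform random action
        return max(actions, key=lambda a: Q[state][a])   # exploit: current best estimate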

Algorithms for control learning

Even if the issue of exploration is disregarded and even if the state is observable, the problem remains of using past experience to find out which actions lead to higher cumulative rewards.

Criterion of optimality

Policy

The agent's action selection is modeled as a map called the policy:
$\pi: \mathcal{S} \times \mathcal{A} \to [0, 1]$, $\quad \pi(s, a) = \Pr(A_t = a \mid S_t = s)$
The policy map gives the probability of taking action $a$ when in state $s$. There are also deterministic policies $\pi$, for which $\pi(s)$ denotes the action that should be played at state $s$.
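For illustration, a stochastic policy over the toy state and action names sketched earlier (all probabilities are made up) can be stored as a table, with a deterministic policy as the special case that puts all probability on one action per state:

    import random

    # Stochastic policy: pi[s][a] = probability of taking action a in state s (illustrative numbers).
    pi = {
        "low":  {"wait": 0.2, "work": 0.8},
        "high": {"wait": 0.9, "work": 0.1},
    }

    def sample_action(pi, state):
        """Draw an action from the policy's distribution for `state`."""
        actions, probs = zip(*pi[state].items())
        return random.choices(actions, weights=probs, k=1)[0]

    # A deterministic policy simply maps each state to a single action.
    pi_det = {"low": "work", "high": "wait"}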

State-value function

The state-value function $V^{\pi}(s)$ is defined as the expected discounted return starting with state $s$, i.e. $S_0 = s$, and successively following policy $\pi$. Hence, roughly speaking, the value function estimates "how good" it is to be in a given state:
$V^{\pi}(s) = \operatorname{E}[G \mid S_0 = s] = \operatorname{E}\Big[\sum_{t=0}^{\infty} \gamma^t R_{t+1} \,\Big|\, S_0 = s\Big],$
where the random variable $G$ denotes the discounted return, and is defined as the sum of future discounted rewards:
$G = \sum_{t=0}^{\infty} \gamma^t R_{t+1} = R_1 + \gamma R_2 + \gamma^2 R_3 + \cdots,$
where $R_{t+1}$ is the reward for transitioning from state $S_t$ to $S_{t+1}$, and $0 \le \gamma < 1$ is the discount rate. Since $\gamma$ is less than 1, rewards in the distant future are weighted less than rewards in the immediate future.
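A short sketch of how the discounted return of one sampled episode could be computed, and how averaging such returns gives a Monte Carlo estimate of $V^{\pi}(s)$; the `sample_rewards_from` rollout helper is an assumption of this example, not something prescribed by the text.

    def discounted_return(rewards, gamma=0.99):
        """Sum of future discounted rewards: G = R_1 + gamma*R_2 + gamma^2*R_3 + ..."""
        g = 0.0
        for t, r in enumerate(rewards):
            g += (gamma ** t) * r
        return g

    def mc_state_value(sample_rewards_from, state, gamma=0.99, n_episodes=1000):
        """Monte Carlo estimate of V_pi(s): average discounted return over sampled episodes.

        `sample_rewards_from(state)` is an assumed helper that rolls out the policy
        from `state` and returns the list of rewards observed along the way."""
        returns = [discounted_return(sample_rewards_from(state), gamma) for _ in range(n_episodes)]
        return sum(returns) / n_episodes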
The algorithm must find a policy with maximum expected discounted return. From the theory of Markov decision processes it is known that, without loss of generality, the search can be restricted to the set of so-called stationary policies. A policy is stationary if the action-distribution returned by it depends only on the last state visited. The search can be further restricted to deterministic stationary policies. A deterministic stationary policy deterministically selects actions based on the current state. Since any such policy can be identified with a mapping from the set of states to the set of actions, these policies can be identified with such mappings with no loss of generality.

Brute force

The brute force approach entails two steps:
  • For each possible policy, sample returns while following it
  • Choose the policy with the largest expected discounted return
One problem with this is that the number of policies can be large, or even infinite. Another is that the variance of the returns may be large, which requires many samples to accurately estimate the discounted return of each policy.
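Under the assumptions of a small, finite set of deterministic policies and a way to sample returns (both are assumptions of this sketch, not given in the text), the brute-force approach might look like the following:

    import itertools

    def brute_force_policy_search(states, actions, estimate_return, n_samples=100):
        """Enumerate every deterministic policy, estimate its expected discounted return
        by averaging sampled returns, and keep the best one.

        `estimate_return(policy)` is an assumed helper that runs one episode under
        `policy` and returns its discounted return."""
        best_policy, best_value = None, float("-inf")
        # Every deterministic policy is one assignment of an action to each state.
        for assignment in itertools.product(actions, repeat=len(states)):
            policy = dict(zip(states, assignment))
            value = sum(estimate_return(policy) for _ in range(n_samples)) / n_samples
            if value > best_value:
                best_policy, best_value = policy, value
        return best_policy, best_value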
These problems can be ameliorated if we assume some structure and allow samples generated from one policy to influence the estimates made for others. The two main approaches for achieving this are value function estimation and direct policy search.

Value function

Value function approaches attempt to find a policy that maximizes the discounted return by maintaining a set of estimates of expected discounted returns for some policy.
These methods rely on the theory of Markov decision processes, where optimality is defined in a sense stronger than the one above: A policy is optimal if it achieves the best-expected discounted return from any initial state. Again, an optimal policy can always be found among stationary policies.
To define optimality in a formal manner, define the state-value of a policy $\pi$ by
$V^{\pi}(s) = \operatorname{E}[G \mid s, \pi],$
where $G$ stands for the discounted return associated with following $\pi$ from the initial state $s$. Defining $V^{*}(s)$ as the maximum possible state-value of $V^{\pi}(s)$, where $\pi$ is allowed to change,
$V^{*}(s) = \max_{\pi} V^{\pi}(s).$
A policy that achieves these optimal state-values in each state is called optimal. Clearly, a policy that is optimal in this sense is also optimal in the sense that it maximizes the expected discounted return, since $V^{*}(s) = \max_{\pi} \operatorname{E}[G \mid s, \pi]$, where $s$ is a state randomly sampled from the distribution of initial states. Given a state $s$ and an action $a$, the action-value $Q^{\pi}(s, a)$ is the expected discounted return of first taking action $a$ in state $s$ and following $\pi$ thereafter; if $\pi^{*}$ is an optimal policy, we act optimally by choosing, at each state $s$, the action from $Q^{\pi^{*}}(s, \cdot)$ with the highest action-value. The action-value function of such an optimal policy ($Q^{\pi^{*}}$) is called the optimal action-value function and is commonly denoted by $Q^{*}$. In summary, the knowledge of the optimal action-value function alone suffices to know how to act optimally.
Assuming full knowledge of the Markov decision process, the two basic approaches to compute the optimal action-value function are value iteration and policy iteration. Both algorithms compute a sequence of functions $Q_k$ ($k = 0, 1, 2, \ldots$) that converge to $Q^{*}$. Computing these functions involves computing expectations over the whole state-space, which is impractical for all but the smallest Markov decision processes. In reinforcement learning methods, expectations are approximated by averaging over samples and using function approximation techniques to cope with the need to represent value functions over large state-action spaces.
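A hedged sketch of value iteration on the table-based MDP representation from the earlier example (the `P` and `R` tables, the discount factor, and the convergence threshold are assumptions of this illustration, not prescriptions of the article):

    def value_iteration(states, actions, P, R, gamma=0.9, tol=1e-8):
        """Compute the optimal action-value function Q*(s, a) by repeatedly applying
        the Bellman optimality update until the values stop changing.

        P[(s, a)] is a list of (next_state, probability) pairs and
        R[(s, a, s_next)] is the immediate reward, as in the tables sketched earlier."""
        Q = {(s, a): 0.0 for s in states for a in actions}
        while True:
            delta = 0.0
            for s in states:
                for a in actions:
                    # Expected immediate reward plus discounted value of the best next action.
                    new_q = sum(
                        p * (R.get((s, a, s2), 0.0) + gamma * max(Q[(s2, a2)] for a2 in actions))
                        for s2, p in P[(s, a)]
                    )
                    delta = max(delta, abs(new_q - Q[(s, a)]))
                    Q[(s, a)] = new_q
            if delta < tol:
                return Q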