Richard S. Sutton
Richard Stuart Sutton is a Canadian computer scientist. He is a professor of computing science at the University of Alberta, a fellow and Chief Scientific Advisor at the Alberta Machine Intelligence Institute, and a research scientist at Keen Technologies. Sutton is considered one of the founders of modern computational reinforcement learning; in particular, he contributed to temporal difference learning and policy gradient methods. He received the 2024 Turing Award together with Andrew Barto.
Education and early life
Richard Sutton was born in either 1957 or 1958 in Ohio, and grew up in Oak Brook, Illinois, a suburb of Chicago, United States.

Sutton received his Bachelor of Arts degree in psychology from Stanford University in 1978 before earning a Master of Science and a PhD in computer science from the University of Massachusetts Amherst, supervised by Andrew Barto. His doctoral dissertation introduced actor-critic architectures and temporal credit assignment.
He was influenced by Harry Klopf's work in the 1970s, which proposed that supervised learning alone is insufficient for building AI or explaining intelligent behavior, and that trial-and-error learning, driven by the "hedonic aspects of behavior", is necessary. This focused his interest on reinforcement learning.
[Image: Sutton interviewed by Steve Jurvetson on AlphaGo in 2017]
Career and research
Sutton held a postdoctoral research position at the University of Massachusetts Amherst in 1984. He worked at GTE Laboratories in Waltham, Massachusetts, as a principal member of technical staff from 1985 to 1994, then returned to the University of Massachusetts Amherst as a senior research scientist. He joined AT&T Labs Shannon Laboratory in Florham Park, New Jersey, as a principal technical staff member from 1998 to 2002. He has been a professor of computing science at the University of Alberta since 2003, where he helped establish the Reinforcement Learning and Artificial Intelligence Laboratory. In 2017 he became a distinguished research scientist with Google DeepMind and helped launch DeepMind Alberta in Edmonton, a research office operated in close collaboration with the University of Alberta.

- 1984: Postdoctoral researcher, University of Massachusetts Amherst
- 1985–1994: Principal member of technical staff, Computer and Intelligent Systems Laboratory, GTE Laboratories
- 1995–1998: Senior research scientist, University of Massachusetts Amherst
- 1998–2002: Principal technical staff member, Artificial Intelligence Department, AT&T Labs Shannon Laboratory
- 2003–present: Professor of computing science, University of Alberta
- 2017–2023: Distinguished research scientist, DeepMind Alberta, Google DeepMind
- 2024–present: Research scientist, Keen Technologies
Reinforcement learning
Barto and Sutton used Markov decision processes (MDPs) as the mathematical foundation for explaining how agents make decisions in a stochastic environment, receiving a reward after each action. Traditional MDP theory assumed that agents know everything about the MDP while attempting to maximize their cumulative reward. Barto and Sutton's reinforcement learning techniques allowed both the environment dynamics and the rewards to be unknown, which let this category of algorithms be applied to a wide array of problems.
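Sutton's temporal-difference methods show how such learning can work in practice: value estimates are adjusted toward bootstrapped targets using sampled transitions alone, with no model of the environment. Below is a minimal, illustrative sketch of TD(0) prediction on a small random-walk task; the environment, step size, and episode count are assumptions chosen for this example rather than taken from any particular publication.

```python
import random

# Illustrative TD(0) prediction on a 7-state random walk: learn state
# values under a fixed random policy from sampled transitions only.
# The task, step size, and episode count are hypothetical choices.

N_STATES = 7                 # states 0..6; 0 and 6 are terminal
V = [0.0] * N_STATES         # value estimates, initialized to zero
ALPHA, GAMMA = 0.1, 1.0      # step size; undiscounted episodic task

for episode in range(1000):
    state = 3                # every episode starts in the middle
    while state not in (0, N_STATES - 1):
        next_state = state + random.choice((-1, 1))   # random policy
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # TD(0): move V(s) toward the bootstrapped target r + gamma * V(s')
        V[state] += ALPHA * (reward + GAMMA * V[next_state] - V[state])
        state = next_state

print([round(v, 2) for v in V])   # nonterminal values approach 1/6 .. 5/6
```

Each update moves V(s) a small step toward r + γV(s′), so value information propagates backward from the rewarding terminal state without the agent ever knowing the transition probabilities.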
Sutton returned to Canada in the 2000s and continued working on reinforcement learning, which continued to develop in academic circles until one of its first major real-world applications, when Google's AlphaGo program, built on these ideas, defeated the reigning human champion of Go. Barto and Sutton have been widely credited as pioneers of modern reinforcement learning, and the technique itself is foundational to the ongoing AI boom.
In a 2019 essay, Sutton proposed the "bitter lesson", criticizing the field of AI research for failing to learn that "building in how we think we think does not work in the long run", and arguing that the biggest lesson from "70 years of AI research" is "that general methods that leverage computation are ultimately the most effective, and by a large margin", beating efforts built on human knowledge of specific domains such as computer vision, speech recognition, chess, or Go.
Sutton argues that large language models are not capable of learning on the job, so new model architectures are required to enable continual learning. He further argues that a separate training phase will then be unnecessary, as the agent will learn on the fly, rendering large language models obsolete.
In 2023, Sutton and John Carmack announced a partnership for the development of artificial general intelligence.
Awards and honors
Sutton has been a Fellow of the Association for the Advancement of Artificial Intelligence since 2001; his nomination read: "For significant contributions to many topics in machine learning, including reinforcement learning, temporal difference techniques, and neural networks." In 2003, he received the President's Award from the International Neural Network Society and, in 2013, the Outstanding Achievement in Research award from the University of Massachusetts Amherst. He received the 2024 Turing Award from the Association for Computing Machinery together with Andrew Barto; the award citation read: "For developing the conceptual and algorithmic foundations of reinforcement learning."

In 2016, Sutton was elected Fellow of the Royal Society of Canada. In 2021, he was elected Fellow of the Royal Society of London.
Research
Sutton introduced temporal-difference methods for prediction and control, establishing convergence properties and practical algorithms. He proposed integrating learning and planning through the Dyna architecture, and co-developed the options framework for temporal abstraction in reinforcement learning. He also co-authored the first modern policy gradient formulation with function approximation. Sutton's essay The Bitter Lesson argued that general methods that scale with computation dominate domain-specific approaches in the long run.
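As a rough illustration of the Dyna idea, the sketch below combines direct reinforcement learning from real experience with planning updates replayed from a learned model. The toy chain environment, the Q-learning-style update, and all constants are assumptions made for illustration only, not details from Sutton's publications.

```python
import random

# Illustrative Dyna-Q sketch: direct RL from real experience plus
# planning over a learned model, in the spirit of Sutton's Dyna
# architecture. The chain environment and constants are hypothetical.

N_STATES = 6                  # states 0..5; state 5 is terminal
ACTIONS = [0, 1]              # 0 = left, 1 = right
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1
PLANNING_STEPS = 10           # simulated updates per real step

def step(s, a):
    """Hypothetical environment: reward 1.0 only on reaching the goal."""
    s2 = min(s + 1, N_STATES - 1) if a == 1 else max(s - 1, 0)
    return s2, (1.0 if s2 == N_STATES - 1 else 0.0), s2 == N_STATES - 1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
model = {}                    # (s, a) -> (reward, s2, done), learned online

def q_update(s, a, r, s2, done):
    # One-step backup toward the bootstrapped target.
    target = r if done else r + GAMMA * max(Q[(s2, b)] for b in ACTIONS)
    Q[(s, a)] += ALPHA * (target - Q[(s, a)])

for episode in range(200):
    s, done = 0, False
    while not done:
        if random.random() < EPSILON:          # epsilon-greedy exploration
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda b: Q[(s, b)])
        s2, r, done = step(s, a)
        q_update(s, a, r, s2, done)            # learn from real experience
        model[(s, a)] = (r, s2, done)          # remember the transition
        for _ in range(PLANNING_STEPS):        # plan with simulated experience
            ps, pa = random.choice(list(model))
            q_update(ps, pa, *model[(ps, pa)])
        s = s2
```

The design point is that each real transition is stored and then reused many times in simulated planning updates, so the agent extracts far more value from limited experience than learning from real steps alone.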
His former doctoral students include David Silver and Doina Precup.
Selected publications
His publications include:

| Year | Title | Venue or publisher | Notes |
| --- | --- | --- | --- |
| 1988 | Learning to predict by the methods of temporal differences | Machine Learning 3, 9–44 | TD learning foundations |
| 1990 | Neural Networks for Control | MIT Press | Co-editor with W. T. Miller III and P. J. Werbos |
| 1991 | Dyna, an integrated architecture for learning, planning, and reacting | ACM SIGART Bulletin | Early Dyna results |
| 1998 | Reinforcement Learning: An Introduction | MIT Press | With Andrew G. Barto; first edition |
| 1999 | Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning | Artificial Intelligence 112, 181–211 | Options framework, with Doina Precup and Satinder Singh |
| 2000 | Policy Gradient Methods for Reinforcement Learning with Function Approximation | NeurIPS 12 | Policy gradient theorem with function approximation |
| 2010 | GQ(λ): A general gradient algorithm for temporal-difference prediction learning with eligibility traces | Technical report, University of Alberta | Off-policy TD with gradients, with H. R. Maei |
| 2018 | Reinforcement Learning: An Introduction | MIT Press | With Andrew G. Barto; second edition |