Ronald J. Williams
Ronald James Williams was an American mathematician and computer scientist who spent the majority of his career at Northeastern University. He is considered one of the pioneers of neural networks. In 1986, he co-authored, with David Rumelhart and Geoffrey Hinton, the seminal Nature paper on the backpropagation algorithm, which triggered a boom in neural network research.
Education and career
Williams was born in Southern California. He studied mathematics at the California Institute of Technology as an undergraduate, receiving a B.S. in 1966, and earned his M.A. and Ph.D. in mathematics from the University of California, San Diego, in 1972 and 1975, respectively. His Ph.D. thesis was supervised by Donald Werner Anderson. After graduation he worked for a defense contractor for some time. From 1983 to 1986, Williams was a member of the Parallel Distributed Processing research group headed by David Rumelhart at the Institute for Cognitive Science at UCSD. In 1986, Williams accepted a professorship in computer science at Northeastern University in Boston, where he remained afterwards.

In addition to the backpropagation paper, Williams made fundamental contributions to recurrent neural networks: together with David Zipser, he invented the teacher forcing algorithm and made important contributions to backpropagation through time. In reinforcement learning, Williams introduced the REINFORCE algorithm in 1992, which became the first policy gradient method.
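The core idea of REINFORCE is to nudge a policy's parameters in the direction of the gradient of the log-probability of each sampled action, scaled by the reward received. A minimal sketch on a hypothetical two-armed bandit with a softmax policy (the setup and all names here are illustrative, not from Williams's 1992 paper):

```python
import numpy as np

# Illustrative REINFORCE sketch: a two-armed bandit where, by assumption,
# arm 1 always pays reward 1 and arm 0 always pays reward 0.
rng = np.random.default_rng(0)
theta = np.zeros(2)              # action preferences (policy parameters)
rewards = np.array([0.0, 1.0])   # hypothetical reward for each arm

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

alpha = 0.1                      # learning rate
for _ in range(2000):
    probs = softmax(theta)
    a = rng.choice(2, p=probs)   # sample an action from the current policy
    r = rewards[a]               # observe its reward
    grad = -probs                # grad of log pi(a|theta) for a softmax policy
    grad[a] += 1.0
    theta += alpha * r * grad    # REINFORCE update: step along r * grad log pi

print(softmax(theta))            # the policy comes to strongly prefer arm 1
```

Because the update is weighted by the reward, only pulls of the rewarding arm change the parameters here, so the probability of choosing arm 1 rises toward 1 over training.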
Besides his work on neural networks, Williams, together with Wenxu Tong and Mary Jo Ondrechen, developed Partial Order Optimum Likelihood (POOL), a machine learning method used to predict the active amino acids in protein structures. POOL is a maximum likelihood method with a monotonicity constraint and serves as a general predictor of properties that depend monotonically on the input features.