CHAOS (chess)


CHAOS is a chess-playing program that was developed by programmers at the RCA Systems Programming division in the late 1960s. It played competitively in computer chess competitions in the 1970s and 1980s. It differed from other programs of its era in its look-ahead philosophy, using chess knowledge to evaluate fewer positions and continuations deeply, rather than relying on simple evaluations and deep look-ahead to avoid bad moves.

Introduction

CHAOS was originally developed by Ira Ruben, Fred Swartz, Victor Berman, Joe Winograd and William Toikka while working at RCA in Cinnaminson, NJ. Its name is an acronym for 'Chess Heuristics and Other Stuff.'
Program development moved to the Computing Center of the University of Michigan when Swartz changed jobs, and Mike Alexander joined the development group. Swartz, Alexander and Berman remained members of the group from that point onward in CHAOS' evolution, while other original authors left and new members contributed episodically. Chess Senior Master Jack O'Keefe contributed to CHAOS' development from about 1980 onward.
CHAOS was written in Fortran, except for low-level board-representation manipulations written in assembly language or C. Because of Fortran's portability, it ran on RCA, Univac and IBM-compatible mainframes over its lifetime.
CHAOS hails from the mainframe computing era, when only machines of that capacity were able to play at a high level. Consequently, development and testing could take place only at times that were off-peak for the machine's production use. In a competition, CHAOS had to run on a dedicated mainframe with a telephone link to the match venue. In its later years, CHAOS ran under MTS (the Michigan Terminal System) on computers on the machine assembly floor of Amdahl Corporation.

Background

Chess and artificial intelligence

Mathematicians Claude Shannon and Alan Turing, working separately, were the first to view playing chess as a challenge for machines. Working at AT&T's Bell Labs, with its access to telephone switching equipment, Shannon built a relay-based machine in 1949 that learned to work its way through a two-dimensional, 5x5-cell maze.
Shannon viewed this as an analogue of the way that organisms learn things about
their natural environment.
There is a random element to searching it, a memory element to benefit from
the search outcome, and a reward element that reinforces learning when the
global outcome is favorable to the organism.
Soon afterward, Shannon wrote a mathematical analysis of the game of chess, published in 1950. As with the maze, he broke game play down into the elements necessary for reinforcement learning.
Each board configuration from which a move will be made has an associated numerical score. To decide which move to make, a player wants to maximize their own position's score after the move and to minimize their opponent's.
Since there are about 32 possible moves at each of the early stages of the game, and about 40 moves and responses in each game, there are about 32^80, or about 10^120, possible games - an impossibly large set to evaluate completely.
Therefore, there must be a way to limit the number of moves examined in the look-ahead to find the best one.
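A quick back-of-the-envelope check of Shannon's figures (the arithmetic below is purely illustrative):

```python
import math

# Shannon's rough figures: about 32 moves available per position, and
# about 40 moves with responses per game, i.e. 80 half-moves (plies).
games = 32 ** 80
print(f"32^80 is about 10^{round(math.log10(games))}")  # -> 10^120
```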
Reducing the game to these few key elements provided a way to think about
human intelligence in general.
Shannon became part of a wider group using computing machines to mimic aspects of human intelligence, an effort that grew into the general idea of artificial intelligence.
The paradigm that evolved comprised a quantification of the position on the board into a score, an evaluation method to find favorable outcomes, and a strategy to manage the combinatorial explosion of the look-ahead possibilities.
By the early 1960s, there were computer programs that played chess at a
rudimentary level.
They used very simple evaluation functions for each position and tried to
search as far forward as was practical given the time constraints and
available compute power.
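To illustrate how simple such evaluation functions could be, here is a minimal material-counting sketch; the board encoding and piece values are assumptions made for the example, not any historical program's scheme:

```python
# Conventional material values in units of a pawn; illustrative only.
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def evaluate(pieces):
    """Score a position from White's point of view by counting material.

    `pieces` is assumed to be a list of codes such as "wP" (white pawn)
    or "bQ" (black queen); this encoding is an assumption for the sketch.
    """
    score = 0
    for piece in pieces:
        color, kind = piece[0], piece[1]
        value = PIECE_VALUES.get(kind, 0)  # kings are not scored
        score += value if color == "w" else -value
    return score

print(evaluate(["wP", "wP", "wN", "bP", "bP"]))  # -> 3 (White up a knight)
```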
Naturally, programmers optimized their code to use the available computing
resources.
This led to a major philosophical divide among chess programs:
those that tried to evaluate as many positions as possible, and those that
tried to evaluate the most promising move sequences as deeply as possible.
CHAOS was firmly in the camp believing only the most promising moves should
be evaluated in depth.
Said Swartz, "The 'brute force people'... look at every no
matter what garbage it is.
Most moves are just terrible, terrible moves, and most computing time is
being spent on pure garbage."
The program spent more time evaluating each board position in the expectation
that it would find the most promising lines of play to explore in depth.
In 1983, the then-fastest chess program evaluated 110,000 positions per second, and typical programs 1,000–50,000 per second, whereas CHAOS evaluated about 50–100 per second.

Machine learning and strategies to manage search

Beginning around 1949, Arthur Samuel worked for IBM on machine learning, culminating in a checkers-playing program in 1952 and publications on the topic.
Concurrently, Christopher Strachey created Checkers, a program to play the board game of checkers, in 1951, but it had no capacity to learn from its play.
Checkers was chosen by both authors because it was simpler than chess yet
contained the basic characteristics of an intellectual activity, and, in
Samuel's view, was a test-bed in which heuristic procedures and learning processes could be
evaluated quickly.
Checker playing programs introduced the notion of the game tree and evaluating
play to various depths to choose the best move.
The complexity of chess, however, promoted it to the status of an analogue for human intelligence, and it attracted the attention of computer scientists, who referred to their work on it as research into artificial intelligence.
Like checkers, it required a numerical assessment of each arrangement of chess
pieces on a board.
It also required looking ahead to future moves to decide how to play the
present position.
Due to the enormous number of possible moves, there had to be a way to confine
the look-ahead search to the most promising lines of play.
From these factors developed the notion of minimax score evaluation and, later, alpha-beta tree pruning, which abandons lines of play that provably cannot improve on any already examined.
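A minimal sketch of minimax search with alpha-beta pruning follows. The position interface (`is_terminal()`, `score()`, `moves()`) is an assumption made for the sketch; CHAOS itself was written in Fortran, and this is not its code.

```python
def alphabeta(position, depth, alpha, beta, maximizing):
    """Minimax with alpha-beta pruning (illustrative sketch).

    `position` is assumed to provide is_terminal(), score() (from the
    maximizing player's point of view) and moves() yielding successor
    positions; these names are assumptions for the sketch.
    """
    if depth == 0 or position.is_terminal():
        return position.score()
    if maximizing:
        best = float("-inf")
        for child in position.moves():
            best = max(best, alphabeta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, best)
            if alpha >= beta:  # opponent already has a better option: prune
                break
        return best
    else:
        best = float("inf")
        for child in position.moves():
            best = min(best, alphabeta(child, depth - 1, alpha, beta, True))
            beta = min(beta, best)
            if alpha >= beta:  # we already have a better option: prune
                break
        return best
```

A root search starts with alpha = -infinity and beta = +infinity; branches whose scores cannot affect the final choice are cut off without being examined.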

Chess search strategies

The AI community viewed artificial intelligence as comprising two parts:
a way to symbolically quantify the knowledge in hand, and a set of heuristics to limit look-ahead to the consequences of
a move.
The early chess-playing programs attempted to look forward as far as possible, perhaps to 3 moves ahead by each player, and to choose the best outcome.
This led to the horizon effect, whereby a key move 4 or more moves ahead
would be unexamined and therefore missed.
Consequently, the programs were quite weak and heuristics to manage the search
became important in their development.
CHAOS used a selective search strategy with iterative widening.
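One plausible reading of such a strategy is sketched below; the helper functions, candidate widths and satisfaction threshold are all illustrative assumptions, not a description of CHAOS' internals:

```python
def widening_search(position, depth, static_eval, deep_search,
                    widths=(3, 8, None), good_enough=1.0):
    """Selective search with iterative widening (illustrative sketch).

    Each pass searches only the `width` most promising successor
    positions, ranked by a cheap static evaluation (None = all of them).
    `static_eval` and `deep_search` are assumed helpers: both score a
    position from the root player's point of view, the latter by
    searching `depth - 1` plies further.
    """
    best_child, best_score = None, float("-inf")
    for width in widths:
        candidates = sorted(position.moves(), key=static_eval, reverse=True)
        if width is not None:
            candidates = candidates[:width]
        for child in candidates:
            score = deep_search(child, depth - 1)  # expensive deep evaluation
            if score > best_score:
                best_child, best_score = child, score
        if best_score >= good_enough:
            break  # a narrow pass found a satisfactory line; stop early
        # otherwise widen the candidate set and search again
    return best_child, best_score
```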
As chess programs evolved, they incorporated books of opening lines of play from historic sources. Nowadays, book moves are catalogued in machine-readable form, but originally programmers had to type them in. CHAOS had an extensive book for its time, around 10,000 moves, which O'Keefe helped to develop.
A problem with play from an opening book is the behavior of the program when the
play leaves the book: the positional advantage may be so subtle that the
evaluation scheme may be unable to understand it, leading to very wide and shallow searches to establish a line of play.
The horizon effect again plagues move selection after leaving the book. CHAOS mitigated these problems by only using book lines that it could understand, and by relying on cached analyses of continuations out of the book made while the opponent's clock was running.
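In implementation terms, an opening book amounts to a table mapping the moves played so far to a recommended reply. A minimal sketch follows; the entries are common opening lines chosen purely for illustration, not lines from CHAOS' book:

```python
# Book keyed by the sequence of moves played so far, in long algebraic
# coordinates; the entries below are illustrative examples.
OPENING_BOOK = {
    (): "e2e4",
    ("e2e4", "e7e5"): "g1f3",
    ("e2e4", "c7c5"): "g1f3",
    ("e2e4", "e7e5", "g1f3", "b8c6"): "f1b5",  # Ruy Lopez
}

def book_move(history):
    """Return the book reply for the game so far, or None once play
    has left the book."""
    return OPENING_BOOK.get(tuple(history))

print(book_move(["e2e4", "e7e5"]))  # -> g1f3
print(book_move(["d2d4", "d7d5"]))  # -> None (out of book)
```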

Game play history

CHAOS played in twelve ACM computer chess tournaments and four World Computer Chess Championships (WCCC). It made its debut at the ACM computer chess tournament in 1973, taking 2nd place. In 1974, it again took 2nd place, at the WCCC, defeating the tournament favorite Chess 4.0 but losing to Kaissa. CHAOS came close to winning the 1980 WCCC, but lost to Belle in a playoff. The 1985 ACM computer chess tournament was CHAOS' last competition.
One of CHAOS' notable victories was over Chess 4.0 at the 1974 WCCC. Chess 4.0 had been unbeaten by any other program until then. Playing as White, CHAOS made a knight sacrifice that traded material for open lines of attack and eventually won the game. CHAOS' authors thought the move was due to an unintended programming side effect.
Belle v CHAOS was the deciding match for the 1980 WCCC title. The game, which Belle won, was characterized by many missed opportunities by both programs. In the position after 8. Qd1-a4+, CHAOS played 8... Nb4-c6, whereas 8... Nb8-c6 would have been much stronger.

Legacy

CHAOS left the competitive arena around 1985, when it became clear that the use of special-purpose hardware to play chess was going to overwhelm intelligent play and, eventually, human players. It had made a strong debut, winning against established programs and showing that superior chess knowledge could defeat brute-force approaches. Eventually, Moore's Law grew hardware's capabilities faster than chess knowledge could be distilled algorithmically. Decades later, however, the chess-playing program AlphaZero beat the reigning brute-force program Stockfish by evaluating fewer positions, using neural networks that learned from previous play, showing that selective search remains an effective strategy.