Brain–computer interface


A brain–computer interface (BCI), sometimes called a brain–machine interface (BMI), is a direct communication link between the brain's electrical activity and an external device, most commonly a computer or robotic limb. BCIs are often directed at researching, mapping, assisting, augmenting, or repairing human cognitive or sensory-motor functions. They are often conceptualized as a human–machine interface that skips the intermediary of moving body parts. BCI implementations range from non-invasive and partially invasive to invasive, based on how physically close the electrodes are to brain tissue.
Research on BCIs began in the 1970s with Jacques Vidal at the University of California, Los Angeles, under a grant from the National Science Foundation, followed by a contract from the Defense Advanced Research Projects Agency. Vidal's 1973 paper introduced the expression brain–computer interface into the scientific literature.
Due to the cortical plasticity of the brain, signals from implanted prostheses can, after adaptation, be handled by the brain like natural sensor or effector channels. Following years of animal experimentation, the first neuroprosthetic devices were implanted in humans in the mid-1990s.

History

The history of brain-computer interfaces starts with Hans Berger's discovery of the brain's electrical activity and the development of electroencephalography. In 1924 Berger was the first to record human brain activity utilizing EEG. Berger was able to identify oscillatory activity, such as the alpha wave, by analyzing EEG traces.
Berger's first recording device was rudimentary. He inserted silver wires under the scalps of his patients. These were later replaced by silver foils attached to the patient's head by rubber bandages. Berger connected these sensors to a Lippmann capillary electrometer, with disappointing results. However, more sophisticated measuring devices, such as the Siemens double-coil recording galvanometer, which displayed voltages as small as 10⁻⁴ volt, led to success.
Berger analyzed the interrelation of alterations in his EEG wave diagrams with brain diseases. EEGs opened up completely new possibilities for brain research.
Although the term had not yet been coined, one of the earliest examples of a working brain-machine interface was the piece Music for Solo Performer by American composer Alvin Lucier. The piece makes use of EEG and analog signal processing hardware to stimulate acoustic percussion instruments. Performing the piece requires producing alpha waves and thereby "playing" the various instruments via loudspeakers that are placed near or directly on the instruments.
Jacques Vidal coined the term "BCI" and produced the first peer-reviewed publications on the topic; he is widely recognized as the inventor of BCIs. A review pointed out that Vidal's 1973 paper stated the "BCI challenge" of controlling external objects using EEG signals, and especially named the Contingent Negative Variation (CNV) potential as a challenge for BCI control. Vidal's 1977 experiment was the first application of BCI after his 1973 challenge: noninvasive EEG control of a cursor-like graphical object on a computer screen, demonstrated as movement through a maze.
In 1988, the first demonstration of noninvasive EEG control of a physical object, a robot, was reported. The experiment demonstrated EEG control of multiple start-stop-restart cycles of movement along an arbitrary trajectory defined by a line drawn on the floor. Line-following was the robot's default behavior, exploiting its autonomous intelligence and autonomous energy source.
In 1990, a report was published on a closed-loop, bidirectional, adaptive BCI that controlled a computer buzzer using an anticipatory brain potential, the Contingent Negative Variation. The experiment described how an expectation state of the brain, manifested by the CNV, used a feedback loop to control the S2 buzzer in the S1-S2-CNV paradigm. The resulting cognitive wave, representing the expectation learning in the brain, was termed the Electroexpectogram. The CNV brain potential was part of Vidal's 1973 challenge.
Studies in the 2010s suggested neural stimulation's potential to restore functional connectivity and associated behaviors through modulation of molecular mechanisms. This opened the door to the idea that BCI technologies may be able to restore function.
Beginning in 2013, DARPA funded BCI technology through the BRAIN Initiative, which has supported work by teams including the University of Pittsburgh Medical Center, Paradromics, Brown, and Synchron.

Neuroprosthetics

Neuroprosthetics is an area of neuroscience concerned with neural prostheses, that is, artificial devices that replace the function of impaired parts of the nervous system or of sensory and other organs. As of December 2010, cochlear implants had been implanted as neuroprosthetic devices in some 736,900 people worldwide. Other neuroprosthetic devices aim to restore vision, including retinal implants. The first neuroprosthetic device, however, was the pacemaker.
Neuroprosthetics and BCIs seek to achieve the same aims, such as restoring sight, hearing, movement, the ability to communicate, and even cognitive function, and both use similar experimental methods and surgical techniques; the terms are sometimes used interchangeably.

Animal research

Several laboratories have managed to read signals from monkey and rat cerebral cortices to operate BCIs that produce movement. Monkeys have moved computer cursors and commanded robotic arms to perform simple tasks simply by thinking about the task and seeing the results, without motor output. In May 2008, photographs showing a monkey at the University of Pittsburgh Medical Center operating a robotic arm by thinking were published in a number of science journals and magazines. Sheep have also been used to evaluate BCI technology, including Synchron's Stentrode and Paradromics' Connexus BCI.
In 2020, Elon Musk's company Neuralink successfully implanted one of its devices in a pig. In 2021, Musk announced that the company had successfully enabled a monkey to play video games using its device.

Early work

In 1969, operant conditioning studies by Fetz and colleagues at the Regional Primate Research Center and Department of Physiology and Biophysics, University of Washington School of Medicine, showed that monkeys could learn to control the deflection of a biofeedback meter arm with neural activity. Similar work in the 1970s established that monkeys could learn to control the firing rates of individual and multiple neurons in the primary motor cortex if they were rewarded accordingly.
Algorithms to reconstruct movements from the activity of motor cortex neurons, which control movement, date back to the 1970s. In the 1980s, Apostolos Georgopoulos at Johns Hopkins University found a mathematical relationship, based on a cosine function, between the electrical responses of single motor cortex neurons in rhesus macaque monkeys and the direction in which they moved their arms. He also found that dispersed groups of neurons in different areas of the monkeys' brains collectively controlled motor commands, but was able to record the firings of neurons in only one area at a time because of equipment limitations.
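This cosine-tuning relationship underlies the population-vector approach later used to decode intended movement direction from many neurons at once. The sketch below in Python illustrates the idea only; the neuron count, tuning parameters, and noise level are invented for demonstration and are not values from Georgopoulos's recordings.

    import numpy as np

    # Illustrative sketch of cosine tuning and population-vector decoding.
    # All parameters below are made up for demonstration.
    rng = np.random.default_rng(0)
    n_neurons = 50
    preferred = rng.uniform(0, 2 * np.pi, n_neurons)  # each neuron's preferred direction
    baseline = 20.0                                    # mean rate (spikes/s)
    modulation = 10.0                                  # depth of cosine tuning

    def firing_rates(movement_angle):
        """Cosine tuning: a neuron fires most when movement matches its preferred direction."""
        return baseline + modulation * np.cos(movement_angle - preferred)

    def population_vector(rates):
        """Sum each neuron's preferred-direction vector, weighted by rate above baseline."""
        w = rates - baseline
        x = np.sum(w * np.cos(preferred))
        y = np.sum(w * np.sin(preferred))
        return np.arctan2(y, x)  # decoded movement direction

    true_angle = np.deg2rad(135)
    observed = firing_rates(true_angle) + rng.normal(0, 1.0, n_neurons)  # noisy rates
    print(np.rad2deg(population_vector(observed)))  # close to 135 degrees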
Several groups have been able to capture complex motor cortex signals by recording from neural ensembles (groups of neurons) and using these signals to control external devices.

Research

Kennedy and Yang Dan

Phillip Kennedy and colleagues built the first intracortical brain–computer interface by implanting neurotrophic-cone electrodes into monkeys.
In 1999, Yang Dan and colleagues at the University of California, Berkeley decoded neuronal firings to reproduce images seen by cats. The team used an array of electrodes embedded in the thalamus, targeting 177 brain cells in the lateral geniculate nucleus, which decodes signals from the retina. The neurons' firings were recorded while the cats watched eight short movies. Using mathematical filters, the researchers decoded the signals to reconstruct recognizable scenes and moving objects.
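A minimal sketch of linear stimulus reconstruction of this general kind, assuming paired recordings of spike counts and pixel intensities and using ridge-regularized least squares, is given below; the data, dimensions, and regularization strength are synthetic placeholders, not the filters used in the cat experiments.

    import numpy as np

    # Schematic of linear stimulus reconstruction: learn a linear map from
    # the spike counts of many cells to pixel intensities, then apply it to
    # responses. All data here are synthetic placeholders.
    rng = np.random.default_rng(1)
    n_cells, n_pixels, n_frames = 177, 64, 2000

    true_filter = rng.normal(size=(n_cells, n_pixels))   # unknown encoding (synthetic)
    stimulus = rng.normal(size=(n_frames, n_pixels))     # "movie" frames
    responses = stimulus @ true_filter.T + rng.normal(0, 0.5, size=(n_frames, n_cells))

    # Fit the decoding matrix with ridge-regularized least squares.
    lam = 1.0
    A = responses.T @ responses + lam * np.eye(n_cells)
    decoder = np.linalg.solve(A, responses.T @ stimulus)  # shape (n_cells, n_pixels)

    reconstruction = responses @ decoder                   # decoded frames
    print(np.mean((reconstruction - stimulus) ** 2))       # reconstruction error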

Nicolelis

Duke University professor Miguel Nicolelis advocates using multiple electrodes spread over a greater area of the brain to obtain neuronal signals.
After initial studies in rats during the 1990s, Nicolelis and colleagues developed BCIs that decoded brain activity in owl monkeys and used the devices to reproduce monkey movements in robotic arms. Monkeys' advanced reaching and grasping abilities and hand manipulation skills made them good test subjects.
By 2000, the group had succeeded in building a BCI that reproduced owl monkey movements while the monkey operated a joystick or reached for food. The BCI operated in real time and could remotely control a separate robot, but the monkeys received no feedback.
Later experiments on rhesus monkeys, whose deeply cleft and furrowed brains made them better models for human neurophysiology than owl monkeys, included feedback and reproduced monkey reaching and grasping movements in a robot arm. The monkeys were trained to reach for and grasp objects on a computer screen by manipulating a joystick, while corresponding movements by a robot arm were hidden. The monkeys were later shown the robot and learned to control it by viewing its movements. The BCI used velocity predictions to control reaching movements and simultaneously predicted gripping force.
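Decoders of this kind commonly map a sliding window of binned ensemble firing rates onto continuous kinematic variables with a linear model. The sketch below shows that generic structure on synthetic data; it is not the decoder actually used in these experiments, and the bin counts, lag length, and variable names are assumptions for illustration.

    import numpy as np

    # Generic sketch of ensemble velocity decoding: stack the current and
    # several past bins of firing rates per neuron and fit a linear map to
    # hand velocity. Data below are synthetic placeholders.
    rng = np.random.default_rng(2)
    n_neurons, n_bins, lags = 100, 5000, 10

    rates = rng.poisson(5, size=(n_bins, n_neurons)).astype(float)  # binned spike counts
    velocity = rng.normal(size=(n_bins, 2))                          # placeholder vx, vy

    def lagged(X, lags):
        """Concatenate the current bin with the previous lags - 1 bins."""
        rows = [X[t - lags + 1 : t + 1].ravel() for t in range(lags - 1, len(X))]
        return np.asarray(rows)

    X = lagged(rates, lags)                     # (n_bins - lags + 1, n_neurons * lags)
    y = velocity[lags - 1:]

    # Least-squares fit of decoding weights, with an intercept column.
    X1 = np.hstack([X, np.ones((len(X), 1))])
    W, *_ = np.linalg.lstsq(X1, y, rcond=None)
    predicted = X1 @ W                          # decoded velocity for each bin
    print(predicted.shape)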
In 2011, O'Doherty and colleagues demonstrated a BCI with sensory feedback in rhesus monkeys. The monkey controlled the position of an avatar arm while receiving sensory feedback through direct intracortical stimulation in the arm representation area of the sensory cortex.

Donoghue, Schwartz, and Andersen

Other laboratories that have developed BCIs and algorithms that decode neuron signals include those of John Donoghue at the Carney Institute for Brain Science at Brown University, Andrew Schwartz at the University of Pittsburgh, and Richard Andersen at Caltech. These researchers produced working BCIs using recorded signals from far fewer neurons than Nicolelis did.
The Carney Institute reported training rhesus monkeys to use a BCI to track visual targets on a computer screen with or without a joystick. The group created a BCI for three-dimensional tracking in virtual reality and reproduced BCI control in a robotic arm. The same group demonstrated that a monkey could feed itself pieces of fruit and marshmallows using a robotic arm controlled by the animal's brain signals.
Andersen's group used recordings of premovement activity from the posterior parietal cortex, including signals created when experimental animals anticipated receiving a reward.