Functional magnetic resonance imaging
Functional magnetic resonance imaging or functional MRI measures brain activity by detecting changes associated with blood flow. This technique relies on the fact that cerebral blood flow and neuronal activation are coupled. When an area of the brain is in use, blood flow to that region also increases.
The primary form of fMRI uses the blood-oxygen-level-dependent (BOLD) contrast, discovered by Seiji Ogawa in 1990. This is a specialized type of brain and body scan used to map neural activity in the brain or spinal cord of humans or other animals by imaging the change in blood flow related to energy use by brain cells. Since the early 1990s, fMRI has come to dominate brain mapping research because it does not require people to undergo injections or surgery, to ingest substances, or to be exposed to ionizing radiation. This measure is frequently corrupted by noise from various sources; hence, statistical procedures are used to extract the underlying signal. The resulting brain activation can be graphically represented by color-coding the strength of activation across the brain or the specific region studied. The technique can localize activity to within millimeters but, using standard techniques, no better than within a window of a few seconds. Other methods of obtaining contrast are arterial spin labeling and diffusion MRI. The latter is similar to BOLD fMRI but provides contrast based on the magnitude of diffusion of water molecules in the brain.
In addition to detecting BOLD responses from activity due to tasks/stimuli, fMRI can measure resting state fMRI, or taskless fMRI, which shows the subjects' baseline BOLD variance. Since about 1998 studies have shown the existence and properties of the default mode network, aka 'Resting State Network', a functionally connected neural network of apparent 'brain states'.
fMRI is used in research, and to a lesser extent, in clinical work. It can complement other measures of brain physiology such as EEG and NIRS. Newer methods which improve both spatial and time resolution are being researched, and these largely use biomarkers other than the BOLD signal. Some companies have developed commercial products such as lie detectors based on fMRI techniques, but the research is not believed to be developed enough for widespread commercialization.
Overview
The fMRI concept builds on the earlier MRI scanning technology and the discovery of the magnetic properties of oxygen-rich blood. MRI brain scans use a strong, permanent, static magnetic field to align nuclei in the brain region being studied. Another magnetic field, the gradient field, is then applied to spatially locate different nuclei. Finally, a radiofrequency pulse is applied to kick the nuclei to higher magnetization levels, with the effect now depending on where they are located. When the RF field is removed, the nuclei return to their original states, and the energy they emit is measured with a coil to recreate the positions of the nuclei. MRI thus provides a static structural view of brain matter. The central thrust behind fMRI was to extend MRI to capture functional changes in the brain caused by neuronal activity. Differences in magnetic properties between arterial and venous blood provided this link.
Since the 1890s it has been known that changes in blood flow and blood oxygenation in the brain are closely linked to neural activity. When neurons become active, local blood flow to those brain regions increases, and oxygen-rich blood displaces oxygen-depleted blood around 2 seconds later. This rises to a peak over 4–6 seconds, before falling back to the original level. Oxygen is carried by the hemoglobin molecule in red blood cells. Deoxygenated hemoglobin is paramagnetic, and hence more magnetic than oxygenated hemoglobin, which is diamagnetic and virtually non-magnetic. This difference leads to an improved MR signal, since the diamagnetic blood interferes less with the magnetic MR signal. This improvement can be mapped to show which brain areas are active at a given time.
History
During the late 19th century, Angelo Mosso invented the 'human circulation balance', which could non-invasively measure the redistribution of blood during emotional and intellectual activity. Although briefly mentioned by William James in 1890, the details and precise workings of this balance and the experiments Mosso performed with it remained largely unknown until the recent discovery of the original instrument, as well as Mosso's reports, by Stefano Sandrone and colleagues. Mosso investigated several critical variables that are still relevant in modern neuroimaging, such as the signal-to-noise ratio, the appropriate choice of experimental paradigm, and the need for simultaneous recording of differing physiological parameters. Mosso's manuscripts do not provide direct evidence that the balance was really able to measure changes in cerebral blood flow due to cognition; however, a modern replication by David T. Field, using signal-processing techniques unavailable to Mosso, has demonstrated that a balance apparatus of this type is able to detect changes in cerebral blood volume related to cognition.
In 1890, Charles Roy and Charles Sherrington, at Cambridge University, first experimentally linked brain function to its blood flow. The next step toward measuring blood flow to the brain was Linus Pauling's and Charles Coryell's discovery in 1936 that oxygen-rich blood with Hb was weakly repelled by magnetic fields, while oxygen-depleted blood with dHb was attracted to a magnetic field, though less so than ferromagnetic elements such as iron. Seiji Ogawa at AT&T Bell Labs recognized that this could be used to augment MRI, which could study just the static structure of the brain, since the differing magnetic properties of dHb and Hb caused by blood flow to activated brain regions would cause measurable changes in the MRI signal. BOLD is the MRI contrast of dHb, discovered in 1990 by Ogawa. In a seminal 1990 study based on earlier work by Thulborn et al., Ogawa and colleagues scanned rodents in a strong-magnetic-field MRI. To manipulate blood oxygen level, they changed the proportion of oxygen the animals breathed. As this proportion fell, a map of blood flow in the brain was seen in the MRI. They verified this by placing test tubes with oxygenated or deoxygenated blood and creating separate images. They also showed that gradient-echo images, which depend on a form of loss of magnetization called T2* decay, produced the best images. To show these blood-flow changes were related to functional brain activity, they changed the composition of the air breathed by rats, and scanned them while monitoring brain activity with EEG. The first attempt to detect regional brain activity using MRI was performed by Belliveau and colleagues at Harvard University using the contrast agent Magnevist, a paramagnetic substance remaining in the bloodstream after intravenous injection. However, this method is not popular in human fMRI, because of the inconvenience of the contrast-agent injection, and because the agent stays in the blood only for a short time.
Three studies in 1992 were the first to explore using the BOLD contrast in humans. Kenneth Kwong and colleagues, using both gradient-echo and inversion-recovery echo planar imaging (EPI) sequences at a magnetic field strength of 1.5 T, published studies showing clear activation of the human visual cortex. The Harvard team thereby showed that both blood flow and blood volume increased locally in active neural tissue. Ogawa and others conducted a similar study using a higher field and showed that the BOLD signal depended on T2* loss of magnetization. T2* decay is caused by magnetized nuclei in a volume of space losing magnetic coherence from both bumping into one another and from intentional differences in applied magnetic field strength across locations. Bandettini and colleagues used EPI at 1.5 T to show activation in the primary motor cortex, a brain area at the last stage of the circuitry controlling voluntary movements. The magnetic fields, pulse sequences, procedures, and techniques used by these early studies are still used in current-day fMRI studies. But today researchers typically collect data from more slices, and preprocess and analyze data using statistical techniques.
Physiology
The brain does not store glucose, its primary source of energy. When neurons become active, getting them back to their original state of polarization requires actively pumping ions across the neuronal cell membranes, in both directions. The energy for those ion pumps is mainly produced from glucose. More blood flows in to transport more glucose, also bringing in more oxygen in the form of oxygenated hemoglobin molecules in red blood cells. This is from both a higher rate of blood flow and an expansion of blood vessels. The blood-flow change is localized to within 2 or 3 mm of where the neural activity is. Usually the brought-in oxygen is more than the oxygen consumed in burning glucose, and this causes a net decrease in deoxygenated hemoglobin in that brain area's blood vessels. This changes the magnetic property of the blood, making it interfere less with the magnetization and its eventual decay induced by the MRI process.
The cerebral blood flow corresponds to the consumed glucose differently in different brain regions. Initial results show there is more inflow than consumption of glucose in regions such as the amygdala, basal ganglia, thalamus and cingulate cortex, all of which are recruited for fast responses. In regions that are more deliberative, such as the lateral frontal and lateral parietal lobes, it seems that incoming flow is less than consumption. This affects BOLD sensitivity.
Hemoglobin differs in how it responds to magnetic fields, depending on whether it has a bound oxygen molecule. The dHb molecule is more attracted to magnetic fields. Hence, it distorts the surrounding magnetic field induced by an MRI scanner, causing the nuclei there to lose magnetization faster via the T2* decay. Thus MR pulse sequences sensitive to T2* show more MR signal where blood is highly oxygenated and less where it is not. This effect increases with the square of the strength of the magnetic field. The fMRI signal hence needs both a strong magnetic field and a pulse sequence such as EPI, which is sensitive to T2* contrast.
The physiological blood-flow response largely decides the temporal sensitivity, that is, how accurately we can measure when neurons are active, in BOLD fMRI. The basic time resolution parameter is the repetition time, designated TR; the TR dictates how often a particular brain slice is excited and allowed to lose its magnetization. TRs can vary from the very short (hundreds of milliseconds) to the very long (several seconds). For fMRI specifically, the hemodynamic response lasts over 10 seconds, rising multiplicatively, peaking at 4 to 6 seconds, and then falling multiplicatively. Changes in the blood-flow system, the vascular system, integrate responses to neuronal activity over time. Because this response is a smooth continuous function, sampling with ever-faster TRs does not help; it just gives more points on the response curve obtainable by simple linear interpolation anyway. Experimental paradigms such as staggering when a stimulus is presented at various trials can improve temporal resolution, but reduce the number of effective data points obtained.
BOLD hemodynamic response
The change in the MR signal from neuronal activity is called the hemodynamic response. It lags the neuronal events triggering it by a couple of seconds, since it takes a while for the vascular system to respond to the brain's need for glucose. From this point it typically rises to a peak at about 5 seconds after the stimulus. If the neurons keep firing, say from a continuous stimulus, the peak spreads to a flat plateau while the neurons stay active. After activity stops, the BOLD signal falls below the original level, the baseline, a phenomenon called the undershoot. Over time the signal recovers to the baseline. There is some evidence that continuous metabolic requirements in a brain region contribute to the undershoot.
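The response shape described above, a rise to a peak near 5 seconds followed by an undershoot, is commonly approximated as the difference of two gamma functions. A minimal sketch in Python; the parameter values below are conventional defaults of the double-gamma model, not quantities taken from this text:

```python
import numpy as np
from scipy.stats import gamma

def hrf(t):
    """Double-gamma hemodynamic response: one gamma peaking near 5 s
    minus a smaller, slower gamma modeling the post-stimulus undershoot.
    Parameter values are common defaults (an assumption here)."""
    peak = gamma.pdf(t, 6)           # mode at t = 5 s
    undershoot = gamma.pdf(t, 16)    # slower component, mode at t = 15 s
    return peak - undershoot / 6.0

t = np.arange(0, 30, 0.1)   # 30 s of response, sampled every 0.1 s
h = hrf(t)                  # rises to a peak near 5 s, dips below zero, recovers
```

Plotting `h` against `t` reproduces the canonical shape: peak around 5 s, undershoot around 15 s, and a return to baseline by the end of the window.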
The mechanism by which the neural system provides feedback to the vascular system of its need for more glucose is partly the release of glutamate as part of neuron firing. This glutamate affects nearby supporting cells, astrocytes, causing a change in calcium ion concentration. This, in turn, releases nitric oxide at the contact point of astrocytes and intermediate-sized blood vessels, the arterioles. Nitric oxide is a vasodilator causing arterioles to expand and draw in more blood.
A single voxel's response signal over time is called its timecourse. Typically, the unwanted signal (the noise) from the scanner, random brain activity, and similar sources is as big as the signal itself. To reduce this noise, fMRI studies repeat a stimulus presentation multiple times.
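The benefit of repetition can be shown numerically: averaging N trials with independent noise shrinks the noise by roughly the square root of N. A small sketch, in which the signal shape and noise level are arbitrary stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 2 * np.pi, 50))   # stand-in for the true response

# 40 repeated presentations, each with noise as large as the signal itself
trials = signal + rng.normal(0, 1.0, size=(40, 50))

average = trials.mean(axis=0)

# Averaging N independent-noise trials shrinks the noise by about sqrt(N)
err_single = np.abs(trials[0] - signal).mean()   # error of one trial
err_avg = np.abs(average - signal).mean()        # error after averaging
```

With 40 trials, the averaged timecourse tracks the underlying signal several times more closely than any single trial does.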
Spatial resolution
Spatial resolution of an fMRI study refers to how well it discriminates between nearby locations. It is measured by the size of voxels, as in MRI. A voxel is a three-dimensional rectangular cuboid, whose dimensions are set by the slice thickness, the area of a slice, and the grid imposed on the slice by the scanning process. Full-brain studies use larger voxels, while those that focus on specific regions of interest typically use smaller sizes. Sizes range from 4 to 5 mm down to submillimeter with laminar-resolution fMRI. Smaller voxels contain fewer neurons on average, incorporate less blood flow, and hence have less signal than larger voxels. Smaller voxels also imply longer scanning times, since scanning time rises directly with the number of voxels per slice and the number of slices. This can lead both to discomfort for the subject inside the scanner and to loss of the magnetization signal. A voxel typically contains a few million neurons and tens of billions of synapses, with the actual number depending on voxel size and the area of the brain being imaged.
The vascular arterial system supplying fresh blood branches into smaller and smaller vessels as it enters the brain surface and within-brain regions, culminating in a connected capillary bed within the brain. The drainage system, similarly, merges into larger and larger veins as it carries away oxygen-depleted blood. The dHb contribution to the fMRI signal is from both the capillaries near the area of activity and larger draining veins that may be farther away. For good spatial resolution, the signal from the large veins needs to be suppressed, since it does not correspond to the area where the neural activity is. This can be achieved either by using strong static magnetic fields or by using spin-echo pulse sequences. With these, fMRI can examine a spatial range from millimeters to centimeters, and can hence identify Brodmann areas, subcortical nuclei such as the caudate, putamen and thalamus, and hippocampal subfields such as the combined dentate gyrus/CA3, CA1, and subiculum.
Temporal resolution
Temporal resolution is the smallest time period of neural activity reliably separated out by fMRI. One element deciding this is the sampling time, the TR. Below a TR of 1 or 2 seconds, however, faster scanning just samples the HDR curve more finely, without adding much additional information. Temporal resolution can be improved by staggering stimulus presentation across trials. If one-third of data trials are sampled normally, one-third at 1 s, 4 s, 7 s and so on, and the last third at 2 s, 5 s and 8 s, the combined data provide a resolution of 1 s, though with only one-third as many total events.
The time resolution needed depends on brain processing time for various events. An example of the broad range here is given by the visual processing system. What the eye sees is registered on the photoreceptors of the retina within a millisecond or so. These signals get to the primary visual cortex via the thalamus in tens of milliseconds. Neuronal activity related to the act of seeing lasts for more than 100 ms. A fast reaction, such as swerving to avoid a car crash, takes around 200 ms. By about half-a-second, awareness and reflection of the incident sets in. Remembering a similar event may take a few seconds, and emotional or physiological changes such as fear arousal may last minutes or hours. Learned changes, such as recognizing faces or scenes, may last days, months, or years. Most fMRI experiments study brain processes lasting a few seconds, with the study conducted over some tens of minutes. Subjects may move their heads during that time, and this head motion needs to be corrected for. So does drift in the baseline signal over time. Boredom and learning may modify both subject behavior and cognitive processes.
Linear addition from multiple activation
When a person performs two tasks simultaneously or in overlapping fashion, the BOLD response is expected to add linearly. This is a fundamental assumption of many fMRI studies that is based on the principle that continuously differentiable systems can be expected to behave linearly when perturbations are small; they are linear to first order. Linear addition means the only operation allowed on the individual responses before they are combined is a separate scaling of each. Since scaling is just multiplication by a constant number, this means an event that evokes, say, twice the neural response as another, can be modeled as the first event presented twice simultaneously. The HDR for the doubled-event is then just double that of the single event.
To the extent that the behavior is linear, the time course of the BOLD response to an arbitrary stimulus can be modeled by convolution of that stimulus with the impulse BOLD response. Accurate time course modeling is important in estimating the BOLD response magnitude.
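This convolution model can be sketched in a few lines of Python. The impulse-response kernel below is a simple gamma-shaped stand-in, not a measured impulse BOLD response, and the timings are arbitrary illustrative choices:

```python
import numpy as np

TR = 1.0                          # sampling interval in seconds (assumed)
t = np.arange(0, 32, TR)
irf = (t ** 5) * np.exp(-t)       # assumed impulse response, peaking near 5 s
irf /= irf.sum()                  # normalize to unit area

stimulus = np.zeros(60)
stimulus[5:15] = 1.0              # a 10 s block of stimulation starting at 5 s

# Under the linearity assumption, the predicted BOLD timecourse is the
# stimulus train convolved with the impulse response.
predicted = np.convolve(stimulus, irf)[:len(stimulus)]
```

The predicted timecourse is zero before stimulus onset and peaks several seconds after it, reproducing the lag and spread of the hemodynamic response.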
This strong assumption was first studied in 1996 by Boynton and colleagues, who checked the effects on the primary visual cortex of patterns flickering 8 times a second and presented for 3 to 24 seconds. Their result showed that when visual contrast of the image was increased, the HDR shape stayed the same but its amplitude increased proportionally. With some exceptions, responses to longer stimuli could also be inferred by adding together the responses for multiple shorter stimuli summing to the same longer duration. In 1997, Dale and Buckner tested whether individual events, rather than blocks of some duration, also summed the same way, and found they did. But they also found deviations from the linear model at time intervals less than 2 seconds.
A source of nonlinearity in the fMRI response is from the refractory period, where brain activity from a presented stimulus suppresses further activity on a subsequent, similar, stimulus. As stimuli become shorter, the refractory period becomes more noticeable. The refractory period does not change with age, nor do the amplitudes of HDRs. The period differs across brain regions. In both the primary motor cortex and the visual cortex, the HDR amplitude scales linearly with duration of a stimulus or response. In the corresponding secondary regions, the supplementary motor cortex, which is involved in planning motor behavior, and the motion-sensitive V5 region, a strong refractory period is seen and the HDR amplitude stays steady across a range of stimulus or response durations. The refractory effect can be used in a way similar to habituation to see what features of a stimulus a person discriminates as new. Further limits to linearity exist because of saturation: with large stimulation levels a maximum BOLD response is reached.
Matching neural activity to the BOLD signal
Researchers have checked the BOLD signal against both signals from implanted electrodes and signals of field potentials from EEG and MEG. The local field potential, which includes both post-synaptic activity and internal neuron processing, better predicts the BOLD signal. So the BOLD contrast reflects mainly the inputs to a neuron and the neuron's integrative processing within its body, and less the output firing of neurons. In humans, electrodes can be implanted only in patients who need surgery as treatment, but evidence suggests a similar relationship at least for the auditory cortex and the primary visual cortex. Activation locations detected by BOLD fMRI in cortical areas are known to tally with CBF-based functional maps from PET scans. Some regions just a few millimeters in size, such as the lateral geniculate nucleus of the thalamus, which relays visual inputs from the retina to the visual cortex, have been shown to generate the BOLD signal correctly when presented with visual input. Nearby regions such as the pulvinar nucleus were not stimulated for this task, indicating millimeter resolution for the spatial extent of the BOLD response, at least in thalamic nuclei. In the rat brain, single-whisker touch has been shown to elicit BOLD signals from the somatosensory cortex.
However, the BOLD signal cannot separate feedback and feedforward active networks in a region; the slowness of the vascular response means the final signal is the summed version of the whole region's network; blood flow is not discontinuous as the processing proceeds. Also, both inhibitory and excitatory input to a neuron from other neurons sum and contribute to the BOLD signal. Within a neuron these two inputs might cancel out. The BOLD response can also be affected by a variety of factors, including disease, sedation, anxiety, medications that dilate blood vessels, and attention.
The amplitude of the BOLD signal does not necessarily affect its shape. A higher-amplitude signal may be seen for stronger neural activity, but peaking at the same place as a weaker signal. Also, the amplitude does not necessarily reflect behavioral performance. A complex cognitive task may initially trigger high-amplitude signals associated with good performance, but as the subject gets better at it, the amplitude may decrease with performance staying the same. This is expected to be due to increased efficiency in performing the task. The BOLD response across brain regions cannot be compared directly even for the same task, since the density of neurons and the blood-supply characteristics are not constant across the brain. However, the BOLD response can often be compared across subjects for the same brain region and the same task.
More recent characterization of the BOLD signal has used optogenetic techniques in rodents to precisely control neuronal firing while simultaneously monitoring the BOLD response using high field magnets. These techniques suggest that neuronal firing is well correlated with the measured BOLD signal including approximately linear summation of the BOLD signal over closely spaced bursts of neuronal firing. Linear summation is an assumption of commonly used event-related fMRI designs.
Medical use
Physicians use fMRI to assess how risky brain surgery or similar invasive treatment is for a patient and to learn how a normal, diseased or injured brain is functioning. They map the brain with fMRI to identify regions linked to critical functions such as speaking, moving, sensing, or planning. This is useful in planning surgery and radiation therapy of the brain. Clinicians also use fMRI to anatomically map the brain and detect the effects of tumors, stroke, head and brain injury, diseases such as Alzheimer's, and developmental disabilities such as autism.
Clinical use of fMRI still lags behind research use. Patients with brain pathologies are more difficult to scan with fMRI than are young healthy volunteers, the typical research-subject population. Tumors and lesions can change the blood flow in ways not related to neural activity, masking the neural HDR. Drugs such as antihistamines and even caffeine can affect HDR. Some patients may be suffering from disorders such as compulsive lying, which makes certain studies impossible. It is harder for those with clinical problems to stay still for long. Using head restraints or bite bars may injure epileptics who have a seizure inside the scanner; bite bars may also discomfort those with dental prostheses.
Despite these difficulties, fMRI has been used clinically to map functional areas, check left-right hemispherical asymmetry in language and memory regions, check the neural correlates of a seizure, study how the brain recovers partially from a stroke, test how well a drug or behavioral therapy works, detect the onset of Alzheimer's, and note the presence of disorders like depression. Mapping of functional areas and understanding lateralization of language and memory help surgeons avoid removing critical brain regions when they have to operate and remove brain tissue. This is of particular importance in removing tumors and in patients who have intractable temporal lobe epilepsy. Lesioning tumors requires pre-surgical planning to ensure no functionally useful tissue is removed needlessly. Recovered depressed patients have shown altered fMRI activity in the cerebellum, and this may indicate a tendency to relapse. Pharmacological fMRI, assaying brain activity after drugs are administered, can be used to check how much a drug penetrates the blood–brain barrier and to obtain dose-versus-effect information for the medication.
Animal research
Research is primarily performed in non-human primates such as the rhesus macaque. These studies can be used both to check or predict human results and to validate the fMRI technique itself. But the studies are difficult because it is hard to motivate an animal to stay still and typical inducements such as juice trigger head movement while the animal swallows it. It is also expensive to maintain a colony of larger animals such as the macaque.
Analyzing the data
The goal of fMRI data analysis is to detect correlations between brain activation and a task the subject performs during the scan. It also aims to discover correlations with the specific cognitive states, such as memory and recognition, induced in the subject. The BOLD signature of activation is relatively weak, however, so other sources of noise in the acquired data must be carefully controlled. This means that a series of processing steps must be performed on the acquired images before the actual statistical search for task-related activation can begin. Nevertheless, it is possible to predict, for example, the emotions a person is experiencing solely from their fMRI, with a high degree of accuracy.
Preprocessing
The scanner platform generates a 3D volume of the subject's head every TR. This consists of an array of voxel intensity values, one value per voxel in the scan. The voxels are arranged one after the other, unfolding the three-dimensional structure into a single line. Several such volumes from a session are joined together to form a 4D volume corresponding to a run, the time period for which the subject stayed in the scanner without adjusting head position. This 4D volume is the starting point for analysis. The first part of that analysis is preprocessing.
The first step in preprocessing is conventionally slice timing correction. The MR scanner acquires different slices within a single brain volume at different times, and hence the slices represent brain activity at different timepoints. Since this complicates later analysis, a timing correction is applied to bring all slices to the same timepoint reference. This is done by assuming that a voxel's timecourse is smooth, so that its intensity at times between the sampled frames can be estimated by interpolating a continuous curve through the sampled points.
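As a concrete sketch of such interpolation, consider a slice acquired 0.5 s later than the reference slice; its samples can be shifted onto the reference timing grid by interpolating between them. The smooth timecourse and the 0.5 s offset here are arbitrary illustrative choices:

```python
import numpy as np

TR = 2.0
shift = 0.5                            # this slice is acquired 0.5 s late
acq_times = np.arange(0, 20, TR) + shift
course = np.sin(acq_times / 3.0)       # stand-in for a smooth voxel timecourse

# Bring the late slice onto the reference timing grid by interpolating
# between its sampled points (linear interpolation, for illustration;
# real packages often use sinc or spline interpolation).
ref_times = np.arange(0, 20, TR)
corrected = np.interp(ref_times, acq_times, course)
```

After correction, the estimated values closely match what the slice would have measured at the reference times, up to the error of the interpolation scheme.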
Head motion correction is another common preprocessing step. When the head moves, the neurons under a voxel move, and hence its timecourse now largely represents that of some other voxel in the past. In effect, the timecourse curve is cut and pasted from one voxel to another. Motion correction tries different ways of undoing this to see which undoing produces the smoothest timecourse for all voxels. The undoing is done by applying a rigid-body transform to the volume, shifting and rotating the whole volume to account for motion. The transformed volume is compared statistically to the volume at the first timepoint to see how well they match, using a cost function such as correlation or mutual information. The transformation that gives the minimal cost function is chosen as the model for head motion. Since the head can move in a vast number of ways, it is not possible to search all possible candidates; nor is there currently an algorithm that provides a globally optimal solution independent of the first transformations tried in the chain.
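The search over candidate transforms and cost functions can be illustrated in one dimension, with a simple translation standing in for the full six-parameter rigid-body transform and a sum-of-squares cost; the data and the true shift are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)
reference = rng.normal(size=100)     # "volume" at the first timepoint
moved = np.roll(reference, 3)        # later volume, displaced by head motion

def cost(shift):
    """Sum-of-squares mismatch after undoing a candidate shift.
    (np.roll wraps around; a stand-in for a true rigid-body transform.)"""
    return np.sum((np.roll(moved, -shift) - reference) ** 2)

# Try candidate shifts and keep the one that minimizes the cost function
best = min(range(-5, 6), key=cost)   # recovers the true displacement of 3
```

A real motion-correction routine does the same thing in three dimensions, optimizing three translations and three rotations with a continuous optimizer rather than a grid search.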
Distortion corrections account for field nonuniformities of the scanner. One method, as described before, is to use shimming coils. Another is to recreate a field map of the main field by acquiring two images with differing echo times. If the field were uniform, the differences between the two images also would be uniform. Note these are not true preprocessing techniques since they are independent of the study itself. Bias field estimation is a real preprocessing technique using mathematical models of the noise from distortion, such as Markov random fields and expectation maximization algorithms, to correct for distortion.
In general, fMRI studies acquire both many functional images with fMRI and a structural image with MRI. The structural image is usually of a higher resolution and depends on a different signal, the T1 magnetic field decay after excitation. To demarcate regions of interest in the functional image, one needs to align it with the structural one. Even when whole-brain analysis is done, to interpret the final results, that is to figure out which regions the active voxels fall in, one has to align the functional image to the structural one. This is done with a coregistration algorithm that works similar to the motion-correction one, except that here the resolutions are different, and the intensity values cannot be directly compared since the generating signal is different.
Typical fMRI studies scan a few different subjects. To integrate the results across subjects, one possibility is to use a common brain atlas, adjust all the brains to align to the atlas, and then analyze them as a single group. The atlases commonly used are the Talairach atlas, created by Jean Talairach from a single brain of an elderly woman, and the Montreal Neurological Institute (MNI) atlas. The second is a probabilistic map created by combining scans from over a hundred individuals. This normalization to a standard template is done by mathematically checking which combination of stretching, squeezing, and warping reduces the differences between the target and the reference. While this is conceptually similar to motion correction, the changes required are more complex than just translation and rotation, and hence optimization is even more likely to depend on the first transformations in the chain that is checked.
Temporal filtering is the removal of frequencies of no interest from the signal. A voxel's intensity change over time can be represented as the sum of a number of different repeating waves with differing periods and heights. A plot with these periods on the x-axis and the heights on the y-axis is called a power spectrum, and this plot is created with the Fourier transform technique. Temporal filtering amounts to removing the periodic waves not of interest to us from the power spectrum, and then summing the waves back again, using the inverse Fourier transform to create a new timecourse for the voxel. A high-pass filter removes the lower frequencies, and the lowest frequency that can be identified with this technique is the reciprocal of twice the TR. A low-pass filter removes the higher frequencies, while a band-pass filter removes all frequencies except the particular range of interest.
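The Fourier-based high-pass filtering described here can be sketched with NumPy's FFT routines. The drift and task frequencies below are illustrative choices, placed exactly on frequency bins so the filter separates them cleanly:

```python
import numpy as np

TR = 2.0
n = 200                                         # 200 volumes: a 400 s run
t = np.arange(n) * TR
drift = 2.0 * np.sin(2 * np.pi * 0.0025 * t)    # slow nuisance drift
task = np.sin(2 * np.pi * 0.05 * t)             # 0.05 Hz signal of interest
course = drift + task                           # the voxel's raw timecourse

# High-pass filter: transform to the power spectrum, zero the
# low-frequency components, then invert the transform.
freqs = np.fft.rfftfreq(n, d=TR)
spectrum = np.fft.rfft(course)
spectrum[freqs < 0.01] = 0                      # cutoff at 0.01 Hz
filtered = np.fft.irfft(spectrum, n)
```

The filtered timecourse retains the 0.05 Hz task signal while the slow drift is removed; a low-pass or band-pass filter would zero a different portion of `spectrum` in the same way.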
Smoothing, or spatial filtering, is the averaging of the intensities of nearby voxels to produce a smooth spatial map of intensity change across the brain or region of interest. The averaging is often done by convolution with a Gaussian filter, which at every spatial point weights neighboring voxels by their distance, with the weights falling off according to the bell curve. If the true spatial extent of activation, that is, the spread of the cluster of voxels simultaneously active, matches the width of the filter used, this process improves the signal-to-noise ratio. It also makes the total noise for each voxel follow a bell-curve distribution, since adding together a large number of independent, identical distributions of any kind produces the bell curve as the limiting case. But if the presumed spatial extent of activation does not match the filter, signal is reduced.
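A minimal sketch of matched-filter smoothing, with invented image dimensions, activation extent, and noise level, shows why averaging helps when the kernel width matches the true activation spread:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

# Hypothetical 32x32x32 activation map: a smooth blob of true signal
# plus independent per-voxel noise (all values invented).
shape = (32, 32, 32)
signal = np.zeros(shape)
signal[12:20, 12:20, 12:20] = 1.0           # cluster of active voxels
signal = gaussian_filter(signal, sigma=2)   # true activation is smooth
observed = signal + rng.normal(0, 0.5, shape)

# Smoothing with a Gaussian kernel of matching width averages away the
# independent noise much faster than it blurs the already-smooth signal.
smoothed = gaussian_filter(observed, sigma=2)
```

Because the noise is independent across voxels while the activation is spatially coherent, the smoothed map lies closer to the true signal than the raw map; a kernel much wider than the activation would instead dilute the signal, as the paragraph above notes.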
Statistical analysis
One common approach to analyzing fMRI data is to consider each voxel separately within the framework of the general linear model (GLM). The model assumes that, at every time point, the HDR is equal to the scaled and summed versions of the events active at that point. A researcher creates a design matrix specifying which events are active at any timepoint. One common way is to create a matrix with one column per overlapping event and one row per time point, marking an entry if a particular event, say a stimulus, is active at that time point. One then assumes a specific shape for the HDR, leaving only its amplitude changeable in active voxels. The design matrix and this shape are used to generate a prediction of the exact HDR response of the voxel at every timepoint, using the mathematical procedure of convolution. This prediction does not include the scaling required for every event before summing them.
The basic model assumes that the observed HDR is the predicted HDR scaled by the weights for each event and then summed, with noise mixed in. This generates a set of linear equations with more equations than unknowns. A system of linear equations has an exact solution, under most conditions, when the number of equations matches the number of unknowns. Hence one could choose any subset of the equations, with the number equal to the number of unknowns, and solve them. But when these solutions are plugged into the left-out equations, there will be a mismatch between the right and left sides, the error. The GLM attempts to find the scaling weights that minimize the sum of the squares of the error. This method is provably optimal if the error is distributed as a bell curve and the scaling-and-summing model is accurate. For a more mathematical description, see the general linear model.
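The scale-and-sum fit can be sketched in a few lines of Python. Everything here is invented for illustration: the event timings, the gamma-like HDR shape, the true amplitudes, and the noise level; real analyses use a canonical HRF and add drift regressors.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical design: 100 timepoints, two event types.
n, TR = 100, 2.0
design = np.zeros((n, 2))
design[10::40, 0] = 1            # onsets of event type A
design[30::40, 1] = 1            # onsets of event type B

# Assumed fixed HDR shape: a simple gamma-like bump; only its
# amplitude per event type is left free to estimate.
t = np.arange(0, 16, TR)
hdr = (t ** 2) * np.exp(-t)
hdr /= hdr.max()

# Convolve each event column with the HDR to predict the response shape.
X = np.column_stack([np.convolve(design[:, i], hdr)[:n] for i in range(2)])

# Simulated voxel: true amplitudes 2.0 and 0.5, plus Gaussian noise.
beta_true = np.array([2.0, 0.5])
y = X @ beta_true + rng.normal(0, 0.1, n)

# GLM fit: least-squares weights minimizing the squared error.
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
```

With non-overlapping events as here, the recovered weights land close to the true amplitudes; overlapping events are handled by the same least-squares machinery, at the cost of correlated columns in the design matrix.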
The GLM does not take into account the contribution of relationships between multiple voxels. Whereas GLM analysis assesses whether a voxel's or region's signal amplitude is higher or lower for one condition than another, newer statistical models such as multi-voxel pattern analysis (MVPA) utilize the unique contributions of multiple voxels within a voxel population. In a typical implementation, a classifier or more basic algorithm is trained to distinguish trials for different conditions within a subset of the data. The trained model is then tested by predicting the conditions of the remaining data. This approach is most typically achieved by training and testing on different scanner sessions or runs. If the classifier is linear, the trained model is a set of weights used to scale the value in each voxel before summing them into a single number that determines the condition for each test-set trial. More information on training and testing classifiers is at statistical classification.
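A toy version of this train-on-one-run, test-on-another scheme can be written directly, with plain least squares standing in for the classifier. The voxel count, run sizes, pattern strength, and noise level are all invented.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical setup: 20 voxels, two conditions that differ only in a
# distributed multi-voxel pattern, two scanner runs of 60 trials each.
n_vox, n_trials = 20, 60
pattern = rng.normal(0, 1, n_vox)   # condition-specific pattern

def simulate_run():
    labels = np.repeat([1.0, -1.0], n_trials // 2)
    data = labels[:, None] * (0.5 * pattern) + rng.normal(0, 1, (n_trials, n_vox))
    return data, labels

train_X, train_y = simulate_run()   # run 1: training data
test_X, test_y = simulate_run()     # run 2: held-out test data

# Linear classifier: least-squares weights over voxels. The decision for
# a test trial is the sign of the weighted sum of its voxel values.
weights, *_ = np.linalg.lstsq(train_X, train_y, rcond=None)
accuracy = np.mean(np.sign(test_X @ weights) == test_y)
```

Note that no single voxel is reliable on its own here; it is the weighted combination across voxels, evaluated on a run the classifier never saw, that separates the conditions.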
Combining with other methods
It is common to combine fMRI signal acquisition with tracking of participants' responses and reaction times. Physiological measures such as heart rate, breathing, skin conductance, and eye movements are sometimes captured simultaneously with fMRI. The method can also be combined with other brain-imaging techniques such as transcranial stimulation, direct cortical stimulation and, especially, EEG. The fMRI procedure can also be combined with near-infrared spectroscopy to obtain supplementary information about both oxyhemoglobin and deoxyhemoglobin.
The fMRI technique can complement or supplement other techniques because of its unique strengths and gaps. It can noninvasively record brain signals without the risks of ionizing radiation inherent in other scanning methods, such as CT or PET scans. It can also record signal from all regions of the brain, unlike EEG/MEG, which are biased toward the cortical surface. But fMRI temporal resolution is poorer than that of EEG, since the HDR takes seconds to climb to its peak. Combining EEG with fMRI is hence potentially powerful, because the two have complementary strengths: EEG has high temporal resolution, and fMRI high spatial resolution. But simultaneous acquisition needs to account for the artifact induced in the EEG signal by the switching fMRI gradient fields, and for the artifact arising from pulsatile blood flow in the static field. For details, see EEG vs fMRI.
While fMRI stands out for its potential to capture neural processes associated with health and disease, brain stimulation techniques such as transcranial magnetic stimulation (TMS) have the power to alter these neural processes. A combination of the two is therefore needed to investigate the mechanisms of action of TMS treatment and, conversely, to introduce causality into otherwise purely correlational observations. The current state-of-the-art setup for these concurrent TMS/fMRI experiments comprises a large-volume head coil, usually a birdcage coil, with the MR-compatible TMS coil mounted inside that birdcage coil. It has been applied in a multitude of experiments studying local and network interactions. However, classic setups with the TMS coil placed inside an MR birdcage-type head coil are characterised by poor signal-to-noise ratios compared with the multi-channel receive arrays used in clinical neuroimaging today. Moreover, the presence of the TMS coil inside the MR birdcage coil causes artefacts beneath the TMS coil, i.e. at the stimulation target. For these reasons, coil setups dedicated to concurrent TMS/fMRI experiments have been developed.
Issues in fMRI
Design
If the baseline condition is too close to maximum activation, certain processes may not be represented appropriately. Another limitation on experimental design is head motion, which can lead to artificial intensity changes of the fMRI signal.
Block versus event-related design
In a block design, two or more conditions are alternated by blocks. Each block will have a duration of a certain number of fMRI scans and within each block only one condition is presented. By making the conditions differ in only the cognitive process of interest, the fMRI signal that differentiates the conditions should represent this cognitive process of interest. This is known as the subtraction paradigm.
The increase in fMRI signal in response to a stimulus is additive: the amplitude of the hemodynamic response increases when multiple stimuli are presented in rapid succession. When each block is alternated with a rest condition in which the HDR has enough time to return to baseline, a maximum amount of variability is introduced into the signal. Block designs therefore offer considerable statistical power. There are, however, severe drawbacks: the signal is very sensitive to signal drift, such as that caused by head motion, especially when only a few blocks are used; a poor choice of baseline may prevent meaningful conclusions from being drawn; and many tasks cannot be repeated. Since only one condition is presented within each block, randomization of stimulus types within a block is not possible. This makes the type of stimulus within each block very predictable, and participants may become aware of the order of the events.
Event-related designs allow testing that is closer to the real world; however, their statistical power is inherently low, because the change in the BOLD fMRI signal following a single stimulus presentation is small.
Both block and event-related designs are based on the subtraction paradigm, which assumes that specific cognitive processes can be added selectively in different conditions. Any difference in blood flow between these two conditions is then assumed to reflect the differing cognitive process. In addition, this model assumes that a cognitive process can be selectively added to a set of active cognitive processes without affecting them.
Baseline versus activity conditions
The brain is never completely at rest. It never stops functioning and firing neuronal signals, and it uses oxygen for as long as the person is alive. In Stark and Squire's 2001 study, When zero is not zero: The problem of ambiguous baseline conditions in fMRI, activity in the medial temporal lobe was substantially higher during rest than during several alternative baseline conditions. The effect of this elevated activity during rest was to reduce, eliminate, or even reverse the sign of the activity during task conditions relevant to memory functions. These results demonstrate that periods of rest are associated with significant cognitive activity and are therefore not an optimal baseline for cognition tasks. Discerning baseline and activation conditions requires interpreting much information, including something as simple as breathing. If a person breathes at a regular rate of one breath every 5 seconds and the blocks occur every 10 seconds, the breathing cycle can become indistinguishable from task-related variance in the data, impairing the results.
Reverse inference
Neuroimaging methods such as fMRI and MRI offer a measure of the activation of certain brain areas in response to cognitive tasks engaged in during the scanning process. Data obtained during this time allow cognitive neuroscientists to gain information regarding the role of particular brain regions in cognitive function. However, an issue arises when researchers claim that activation of certain brain regions identifies previously labeled cognitive processes. Poldrack describes this issue clearly.
Reverse inference illustrates the logical fallacy of affirming the consequent, although this logic could be supported by instances where a certain outcome is generated solely by a specific occurrence. With regard to the brain and brain function, it is seldom the case that a particular brain region is activated solely by one cognitive process. Some suggestions to improve the legitimacy of reverse inference include increasing the selectivity of response in the brain region of interest and increasing the prior probability of the cognitive process in question. However, Poldrack suggests that reverse inference should be used merely as a guide to direct further inquiry rather than as a direct means to interpret results.
Forward inference
Forward inference is a data-driven method that uses patterns of brain activation to distinguish between competing cognitive theories. It shares characteristics with cognitive psychology's dissociation logic and philosophy's forward chaining. For example, Henson discusses forward inference's contribution to the "single process theory vs. dual process theory" debate with regard to recognition memory. Forward inference supports the dual process theory by demonstrating that there are two qualitatively different brain activation patterns when distinguishing between "remember vs. know" judgments. The main issue with forward inference is that it is a correlational method. Therefore, one cannot be completely confident that brain regions activated during a cognitive process are necessary for the execution of that process. In fact, there are many known cases that demonstrate just that. For example, the hippocampus has been shown to be activated during classical conditioning, yet lesion studies have demonstrated that classical conditioning can occur without the hippocampus.
Risks
The most common risk to participants in an fMRI study is claustrophobia, and there are reported risks for pregnant women going through the scanning process. Scanning sessions also subject participants to loud, high-pitched noises from Lorentz forces induced in the gradient coils by the rapidly switching current in the powerful static field. The gradient switching can also induce currents in the body, causing nerve tingling. Implanted medical devices such as pacemakers could malfunction because of these currents. The radio-frequency field of the excitation coil may heat the body, and this has to be monitored more carefully in those running a fever, those with diabetes, and those with circulatory problems. Local burning from metal necklaces and other jewellery is also a risk.
The strong static magnetic field can cause injury by pulling in nearby metal objects and turning them into projectiles.
There is no proven risk of biological harm from even very powerful static magnetic fields. However, genotoxic effects of MRI scanning have been demonstrated in vivo and in vitro, leading a recent review to recommend "a need for further studies and prudent use in order to avoid unnecessary examinations, according to the precautionary principle". In a comparison of the genotoxic effects of MRI with those of CT scans, Knuuti et al. reported that even though the DNA damage detected after MRI was at a level comparable to that produced by scans using ionizing radiation, differences in the mechanism by which this damage takes place suggest that the cancer risk of MRI, if any, is unknown.
Advanced methods
The first fMRI studies validated the technique against brain activity known, from other techniques, to be correlated with tasks. By the early 2000s, fMRI studies began to discover novel correlations. Still, the technique's disadvantages have spurred researchers to try more advanced ways to increase the power of both clinical and research studies.
Better spatial resolution
MRI, in general, has better spatial resolution than EEG and MEG, but not as good a resolution as invasive procedures such as single-unit electrodes. While typical resolutions are in the millimeter range, ultra-high-resolution MRI or MR spectroscopy works at a resolution of tens of micrometers. This uses 7 T fields, small-bore scanners that can fit small animals such as rats, and external contrast agents such as fine iron oxide. Fitting a human requires larger-bore scanners, which make higher field strengths harder to achieve, especially if the field has to be uniform; it also requires either internal contrast such as BOLD or a non-toxic external contrast agent, unlike iron oxide.
Parallel imaging is another technique to improve spatial resolution. This uses multiple coils for excitation and reception. Spatial resolution improves as the square root of the number of coils used. This can be done either with a phased array where the coils are combined in parallel and often sample overlapping areas with gaps in the sampling or with massive coil arrays, which are a much denser set of receivers separate from the excitation coils. These, however, pick up signals better from the brain surface, and less well from deeper structures such as the hippocampus.
Better temporal resolution
The temporal resolution of fMRI is limited by: the slow operation of the feedback mechanism that raises blood flow; the need to wait until net magnetization recovers before sampling a slice again; and the need to acquire multiple slices to cover the whole brain or region of interest. Advanced techniques to improve temporal resolution address these issues. Using multiple coils speeds up acquisition time in exact proportion to the number of coils used. Another technique is to decide which parts of the signal matter less and drop those: either sections of the image that repeat often in the spatial map, or sections that repeat infrequently. The first, a high-pass filter in k-space, was proposed by Gary H. Glover and colleagues at Stanford. These mechanisms assume the researcher has an idea of the expected shape of the activation image.
Typical gradient-echo EPI uses two gradient coils within a slice, turning on first one coil and then the other, tracing a set of lines in k-space. Turning on both gradient coils together can generate angled lines, which cover the same grid space faster. Both gradient coils can also be driven in a specific sequence to trace a spiral shape in k-space. This spiral imaging sequence acquires images faster than gradient-echo sequences, but needs more mathematical transformation afterward, since converting back to voxel space requires the data to be in grid form.
New contrast mechanisms
BOLD contrast depends on blood flow, which is both slowly changing and subject to noisy influences. Other biomarkers now being examined to provide better contrast include temperature, acidity/alkalinity (pH), calcium-sensitive agents, the neuronal magnetic field, and the Lorentz effect. Temperature contrast depends on changes in brain temperature from its activity: the initial burning of glucose raises the temperature, and the subsequent inflow of fresh, cold blood lowers it. These changes alter the magnetic properties of tissue. Since the internal contrast is too difficult to measure, external agents such as thulium compounds are used to enhance the effect. Contrast based on pH depends on changes in the acid/alkaline balance of brain cells when they become active; this too often uses an external agent. Calcium-sensitive agents make MRI more sensitive to calcium concentrations, calcium ions often being the messengers for cellular signalling pathways in active neurons. Neuronal magnetic field contrast measures the magnetic and electric changes from neuronal firing directly. Lorentz-effect imaging tries to measure the physical displacement of active neurons carrying an electric current within the strong static field.
Commercial use
Some experiments have shown the neural correlates of people's brand preferences. Samuel M. McClure used fMRI to show that the dorsolateral prefrontal cortex, hippocampus, and midbrain were more active when people knowingly drank Coca-Cola than when they drank unlabeled Coke. Other studies have shown the brain activity that characterizes men's preference for sports cars, and even differences between Democrats and Republicans in their reaction to campaign commercials with images of the 9/11 attacks. Neuromarketing companies have seized on these studies as a better tool to poll user preferences than the conventional survey technique. One such company was BrightHouse, now shut down. Another is Oxford, UK-based Neurosense, which advises clients how they could potentially use fMRI as part of their marketing business activity. A third is Sales Brain in California.
At least two companies have been set up to use fMRI in lie detection: No Lie MRI and the Cephos Corporation. No Lie MRI charges close to $5000 for its services. These companies depend on evidence such as that from a study by Joshua Greene at Harvard University suggesting the prefrontal cortex is more active in those contemplating lying.
However, there is still a fair amount of controversy over whether these techniques are reliable enough to be used in a legal setting. Some studies indicate that while there is an overall positive correlation, there is a great deal of variation between findings and, in some cases, considerable difficulty in replicating the findings. A federal magistrate judge in Tennessee barred fMRI evidence offered to back up a defendant's claim of telling the truth, on the grounds that such scans do not measure up to the legal standard of scientific evidence. Most researchers agree that the ability of fMRI to detect deception in a real-life setting has not been established.
fMRI has been kept out of legal proceedings throughout its history, owing to holes in the evidence supporting its use. First, most evidence for fMRI's accuracy was gathered in laboratories under controlled circumstances with unambiguous facts. This type of testing does not reflect real life: real-life scenarios can be much more complicated, with many other factors affecting the BOLD signal beyond a typical lie. For example, tests have shown that drug use alters blood flow in the brain, which drastically affects the outcome of BOLD testing, and individuals with disorders such as schizophrenia, or who lie compulsively, can likewise produce abnormal results. Lastly, there is an ethical question relating to fMRI scanning: whether BOLD-based lie detection is an invasion of privacy. Being able to scan and interpret what people are thinking may be thought of as immoral, and the controversy continues.
Because of these factors and more, fMRI evidence has been excluded from legal proceedings. The testing is too uncontrolled and unpredictable, and it has been argued that fMRI requires much more testing before it can be considered viable in the eyes of the legal system.
Criticism
Some scholars have criticized fMRI studies for problematic statistical analyses, often based on low-power, small-sample studies. Other fMRI researchers have defended their work as valid. In 2018, Turner and colleagues suggested that small sample sizes affect the replicability of task-based fMRI studies, claiming that even datasets with at least 100 participants may not replicate well, although this remains debated.
In one real but satirical fMRI study, a dead salmon was shown pictures of humans in different emotional states. The authors provided evidence, according to two different commonly used statistical tests, of areas in the salmon's brain suggesting meaningful activity. The study was used to highlight the need for more careful statistical analyses in fMRI research, given the large number of voxels in a typical fMRI scan and the multiple comparisons problem.
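The salmon result is an instance of the multiple comparisons problem: test enough voxels and some will pass an uncorrected threshold purely by chance. A minimal Python sketch, with invented dimensions and pure-noise data, shows the effect and a Bonferroni-style correction:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Hypothetical null experiment: 10,000 "voxels" of pure noise with 20
# measurements each; there is no real signal anywhere.
n_vox, n_obs = 10_000, 20
data = rng.normal(0, 1, (n_vox, n_obs))

# One-sample t-test per voxel against the true mean of zero.
t_stat, p_val = stats.ttest_1samp(data, 0.0, axis=1)

# Uncorrected at p < 0.05, roughly 5% of null voxels look "active".
uncorrected_hits = int(np.sum(p_val < 0.05))

# Bonferroni correction divides the threshold by the number of tests,
# controlling the chance of even one false positive.
bonferroni_hits = int(np.sum(p_val < 0.05 / n_vox))
```

With ten thousand tests, the uncorrected map contains hundreds of spurious "activations" while the corrected one contains essentially none; this is exactly the trap the salmon study dramatized.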
Before the controversies were publicized in 2010, between 25% and 40% of published fMRI studies were not using corrected comparisons. By 2012, that number had dropped to 10%. Dr. Sally Satel, writing in Time, cautioned that while brain scans have scientific value, individual brain areas often serve multiple purposes, and "reverse inferences" as commonly used in press reports carry a significant chance of drawing invalid conclusions.
In 2015, a statistical bug was discovered in the fMRI computations that likely invalidated at least 40,000 fMRI studies preceding 2015, and researchers suggested that results obtained prior to the bug fix cannot be relied upon. It was later shown that how one sets the parameters in the software determines the false-positive rate; in other words, study outcomes can be determined by changing software parameters.
In 2020, professor Ahmad Hariri, one of the first researchers to use fMRI, performed a large-scale experiment that sought to test the reliability of fMRI on individual people. In the study, he copied protocols from 56 published papers in psychology that used fMRI. The results suggest that fMRI has poor reliability for individual people but good reliability for general human thought patterns.
Further reading
- EMRF/TRTF, Magnetic Resonance: A peer-reviewed, critical introduction
- Joseph P. Hornak, The basics of MRI
- Richard B. Buxton, Introduction to functional magnetic resonance imaging: Principles and techniques, Cambridge University Press, 2002
- Roberto Cabeza and Alan Kingstone, Editors, Handbook of Functional Neuroimaging of Cognition, Second Edition, MIT Press, 2006
- Huettel, S. A.; Song, A. W.; McCarthy, G., Functional Magnetic Resonance Imaging, Second Edition, Massachusetts: Sinauer, 2009