templates and spatial normalization
Brains of different individuals have different shapes and sizes. Still, up to a nonlinear deformation, many aspects of their spatial organization are common, and brain mapping often aims to delineate regions that play similar roles in all individuals. In order to compare brain activity across individuals and experiments, and to find commonalities and differences, it is necessary to align brain images. Once images are aligned, positions in different images correspond to the same anatomical structures and can be compared. This alignment is called coregistration. Images of different brains are translated, rotated, scaled, and deformed non-linearly until inter-individual variability in shape, size, and position is sufficiently small (about 1 cm). Only then does pointwise comparison of brain images become meaningful. Even when we perform individual-level analysis and process each individual’s recorded brain activity separately, it can be beneficial to align the results obtained for several brains in order to assess similarities. This is typically done by registering all brain images to a standard brain template, obtained by coregistering and averaging anatomical images from many individuals.
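As an illustration, the following minimal sketch uses nilearn to put an anatomical image onto the grid of the standard MNI152 template. It assumes the image (the file name is hypothetical) has already been nonlinearly normalized with a dedicated tool such as SPM, FSL, or ANTs; only the final resampling onto the template grid is shown.

```python
# Sketch (assumption: "subject_anat.nii.gz" is a hypothetical anatomical
# image that has already been nonlinearly normalized). This illustrates
# only the last step: putting the image on the grid of the MNI152
# template, so that voxel (i, j, k) refers to the same anatomical
# location in every image.
from nilearn import datasets, image

# Standard template, obtained by coregistering and averaging many
# individual anatomical images.
mni_template = datasets.load_mni152_template()

# Resample the (already normalized) subject image onto the template grid.
subject_img = image.load_img("subject_anat.nii.gz")
in_template_space = image.resample_to_img(subject_img, mni_template)

print(in_template_space.shape)  # same grid shape as the MNI152 template
```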
statistical parametric mapping
Brain mapping estimates statistical links between measurements made in the brain – for example by an MRI machine – and phenotypic or behavioral quantities of interest, such as a particular mental condition. The Statistical Parametric Mapping (SPM) approach (Penny et al., 2011) consists in obtaining many brain scans and using this sample to compute a statistic for each voxel. For example, in Voxel-Based Morphometry (VBM), we compute at each voxel a measure of the local gray matter density. We can obtain such images for a group of Alzheimer’s disease patients and for a group of healthy controls. Then, for each voxel, we can compute the (rescaled) difference between the group means. We thus obtain one statistic per voxel; together, these statistics form a statistical map.
A statistical test can then be applied to identify voxels in which the observed statistic is significant – for example to reject, at some positions in the brain, the hypothesis that the local gray matter density is the same in patients as in healthy controls.
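As a concrete illustration of the VBM example above, the following minimal sketch computes a voxel-wise two-sample t statistic (a rescaled difference between group means) and a naive threshold. The group sizes and gray matter density values are simulated for the example; a real analysis would use spatially normalized maps and correct for the large number of voxels tested.

```python
# Minimal sketch of a voxel-wise group comparison (assumption: the gray
# matter density maps are already spatially normalized and flattened to
# 1D arrays; here they are simulated with random numbers).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_voxels = 1000
patients = rng.normal(size=(20, n_voxels))   # 20 simulated patient maps
controls = rng.normal(size=(25, n_voxels))   # 25 simulated control maps

# One t statistic per voxel: the difference between the group means,
# rescaled by its estimated standard error. Together they form a
# statistical map.
t_map, p_values = stats.ttest_ind(patients, controls, axis=0)

# Voxels where the hypothesis of equal gray matter density is rejected
# (uncorrected threshold, for illustration only).
significant = p_values < 0.05
print(significant.sum(), "voxels flagged out of", n_voxels)
```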
Functional neuroimaging techniques such as fMRI measure brain activity. During a session, the brain activity of an individual is scanned repeatedly. For each voxel, we thus obtain a time series of measurements. For each time point, i.e. for each measurement, we also have variables that describe the mental condition of the subject – for example a binary variable indicating whether the subject is hearing a sound or not. The voxel time series can then be regressed on these behavioral descriptors. This yields one brain map of regression coefficients for each behavioral descriptor. Statistical tests can be applied, for example to identify regions in which hearing a sound has a significant effect on brain activity – most likely the auditory cortex.
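The following minimal sketch shows the core of this voxel-wise regression on simulated data. It is only an illustration: a real fMRI analysis would also convolve the stimulus variable with a hemodynamic response function and include confound regressors.

```python
# Minimal sketch of voxel-wise regression of fMRI time series on a
# behavioral descriptor (assumption: the design and data are simulated).
import numpy as np

rng = np.random.default_rng(0)
n_scans, n_voxels = 200, 500

# Behavioral descriptor: 1 when a sound is played, 0 otherwise,
# plus an intercept column.
sound = rng.integers(0, 2, size=n_scans)
design = np.column_stack([sound, np.ones(n_scans)])   # (n_scans, 2)

# Simulated signal: the first 10 "auditory" voxels respond to the sound.
signal = rng.normal(size=(n_scans, n_voxels))
signal[:, :10] += 2.0 * sound[:, None]

# One regression per voxel, all solved jointly by least squares.
coefs, *_ = np.linalg.lstsq(design, signal, rcond=None)
sound_effect_map = coefs[0]   # one "sound" coefficient per voxel

print(sound_effect_map[:10].round(1))  # large in the responsive voxels
```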
The nature of the measurements, the data processing steps, and the topics of study vary between experiments and imaging modalities. Still, the SPM framework remains similar, and this type of analysis always results in statistical maps of the brain. In the next two sections, we describe in more detail the statistical analysis of fMRI data, which constitutes a canonical application of the SPM framework.
functional mri
Because of its high spatial resolution, fMRI is particularly well suited to brain mapping. The tools developed in this thesis rely mostly on statistical analysis of aggregated results from fMRI studies. Therefore, in this section and the next, we summarize how fMRI data are obtained and analyzed. Many aspects of the analysis of these data are shared with other modalities. This chapter only covers the basic notions needed to understand the rest of this thesis. For an extensive reference on fMRI analysis, see for example Poldrack, Mumford, and Nichols (2011).
Table of contents:
1 overview
1.1 Large-scale meta-analysis
1.2 Mapping text to brain locations
1.3 Mapping arbitrary queries
1.4 Decoding brain images
1.5 Conclusion
I large-scale meta-analysis
2 brain mapping primer
2.1 Templates and spatial normalization
2.2 Statistical Parametric Mapping
2.3 Functional MRI
2.4 Analysis of task fMRI data
2.5 Conclusion
3 meta-analysis
3.1 A complex and noisy literature
3.2 Meta-analysis: testing consensus
3.3 Going beyond univariate meta-analysis
4 building a dataset
4.1 The need for a new dataset
4.2 Gathering articles
4.3 Transforming articles
4.4 Extracting tables and coordinates
4.5 Building the vocabulary
4.6 Summary of collected data
II mapping text to brain locations
5 text to spatial densities
5.1 Problem setting
5.2 Regression model
5.3 Fitting the model
5.4 Solving the estimation problems
5.5 Validation metric
6 text-to-brain experiments
6.1 Experimental setting
6.2 Prediction performance
6.3 Model inspection
6.4 Meta-analysis of a text-only corpus
6.5 Conclusion
III mapping arbitrary queries
7 the neuroquery model
7.1 Extending the scope of meta-analysis
7.2 Overview of the NeuroQuery model
7.3 Revisiting the text-only meta-analysis
7.4 Building the smoothing matrix
7.5 Multi-output feature selection
7.6 Smoothing, projection, mapping
8 neuroquery experiments
8.1 Illustration: interpreting fMRI maps
8.2 Baseline methods
8.3 Mapping challenging queries
8.4 Measuring sample complexity
8.5 Prediction performance
8.6 Example failure
8.7 Using NeuroQuery
8.8 Conclusion: usable meta-analysis tools
IV decoding brain images
9 decoding
9.1 Problem setting
9.2 Representing statistical brain maps
9.3 Classification model
9.4 Evaluation
10 decoding experiments
10.1 Dataset
10.2 Dictionary selection
10.3 Inspecting trained models
10.4 Prediction performance
10.5 Conclusion
V conclusion
11 conclusion
11.1 Practical challenges
11.2 Future direction
11.3 What did not work
11.4 Resources resulting from this thesis
11.5 Final note
references