
Posters

Just How Complex Are Purkinje Cell Complex Spikes?

Amelia Burroughs (University of Bristol)

Despite comprising only 10% of brain volume, the cerebellum contains approximately 80% of all neurons. Cerebellar activity is critical for motor control and coordination and is necessary for learning movements. The Purkinje cell is the only neuronal type to project out from the cerebellar cortex and influence downstream motor processing. Purkinje cell spike trains must therefore represent all computations performed within the cerebellar cortex. Purkinje cells fire two distinct types of action potential: simple spikes (SS) and complex spikes (CS). These are thought to mediate different aspects of cerebellar operation. SSs are stereotypical, sodium-mediated action potentials that are elicited intrinsically (~30 Hz). CSs are infrequent (~1 Hz) and are composed of an initial sodium-mediated action potential followed by a number of high-frequency (~500 Hz) secondary components (spikelets). The quantitative analysis of CS components remains a novel area of study, but it is likely that changes in CS waveform are functionally significant. I aim to use a combination of mathematical techniques to quantitatively describe the CS waveform. By implementing an annealing algorithm, I show that CSs are not unitary events but form a number of distinct clusters based on waveform dynamics. Discrete CS waveforms may differentially modulate SS firing within the same cell and also differentially affect downstream target nuclei activity. In this way, specific CS waveforms may underlie different aspects of cerebellar operation and, ultimately, motor behaviours.
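The annealing algorithm itself is not detailed here; purely as an illustration of the clustering step it performs, the sketch below groups aligned complex-spike waveforms by shape with off-the-shelf hierarchical clustering (the `waveforms` array, the normalisation and the Ward linkage are all assumptions for illustration, not the poster's method):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# waveforms: (n_spikes, n_samples) aligned, baseline-corrected CS waveforms;
# random data stands in for real recordings here.
rng = np.random.default_rng(0)
waveforms = rng.standard_normal((200, 120))

# Normalise each waveform so that clustering reflects shape, not amplitude.
norm = waveforms / np.linalg.norm(waveforms, axis=1, keepdims=True)

# Agglomerative clustering with Ward linkage; ask for, e.g., four clusters.
Z = linkage(norm, method="ward")
labels = fcluster(Z, t=4, criterion="maxclust")
print(np.bincount(labels)[1:])  # cluster sizes
```

In practice the number of clusters would be chosen from the data (e.g. from the dendrogram or a stability criterion) rather than fixed in advance.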

APACE: Accelerated Permutation Inference for the ACE Model

Xu Chen (University of Warwick)

Heritability studies of imaging phenotypes are becoming more commonplace. Heritability, the proportion of phenotypic variance attributable to genetic sources, is typically estimated with variance components (e.g. in SOLAR) or structural equation models (e.g. in OpenMx), but these approaches are computationally expensive and cannot exploit the sensitivity of spatial statistics, like cluster-wise tests. Thus, we developed a non-iterative estimation method for the ACE model; this method is accurate and is so fast that it allows the use of permutation, which provides sensitive family-wise error corrected voxel- and cluster-wise inferences. Specifically, we fit the ACE model to twin data at each voxel and make inference on summary and aggregate measures of heritability. We call our Matlab-based tool using these inference approaches "Accelerated Permutation Inference for the ACE Model (APACE)", and are distributing it freely at http://warwick.ac.uk/tenichols/apace.
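APACE itself is a Matlab tool; as a minimal sketch of the max-statistic permutation logic that underpins such family-wise error corrected voxel-wise inference, assuming a user-supplied routine has already recomputed the statistic image under each permutation (for twin data the permutation scheme must respect family structure, which this sketch glosses over):

```python
import numpy as np

def fwe_corrected_p(stat_map, perm_stats):
    """Voxel-wise FWE-corrected p-values via the max-statistic method.

    stat_map   : (V,) observed test statistics (e.g. a heritability LRT per voxel)
    perm_stats : (P, V) the same statistics recomputed under P permutations
    """
    max_null = perm_stats.max(axis=1)  # distribution of the image-wise maximum
    # corrected p-value: fraction of permutations whose maximum beats the voxel
    return (max_null[None, :] >= stat_map[:, None]).mean(axis=1)
```

Cluster-wise inference follows the same pattern, recording the largest supra-threshold cluster size in each permutation instead of the maximum voxel statistic.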

A Method for Fast Whole-brain Aggregate Heritability Estimation

Xu Chen (University of Warwick)

Heritability, the proportion of variability attributable to genetic sources, is a vital quantitative genetic measure and, in particular, non-zero heritability is needed to certify a trait as a "phenotype". However, heritability can also be used as a general measure of biological validity, e.g. for ranking different pre-processing techniques by the heritability of the resulting phenotype. While such comparisons can be done element-wise over the phenotype (e.g. by voxels or surface elements), a whole-brain summary of heritability can simplify the comparisons. In this work we propose a simple measure of aggregate heritability that is easy to compute and involves no ACE model fitting. We derive analytical results showing that this aggregate measure is closely related to the average of the element-wise heritability. Using real data, we found that this extremely fast aggregate heritability is highly similar to the traditional (more computationally intensive) mean heritability summaries obtained by fitting the ACE model.
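The abstract does not spell the aggregate measure out, so the sketch below only illustrates the contrast being drawn, with Falconer's formula h² = 2(r_MZ − r_DZ) as a simple stand-in for the element-wise heritability estimate; pooling all elements before correlating is one plausible, assumed shortcut, not necessarily the proposed measure:

```python
import numpy as np

def falconer_h2(pairs_mz, pairs_dz):
    """Falconer's estimate h2 = 2 * (r_MZ - r_DZ) for one phenotype element.
    pairs_* : (n_pairs, 2) phenotype values for twin 1 and twin 2."""
    r_mz = np.corrcoef(pairs_mz[:, 0], pairs_mz[:, 1])[0, 1]
    r_dz = np.corrcoef(pairs_dz[:, 0], pairs_dz[:, 1])[0, 1]
    return 2.0 * (r_mz - r_dz)

def mean_elementwise_h2(mz, dz):
    """Slow-but-standard summary: average h2 over V elements.
    mz, dz : (n_pairs, 2, V) twin-pair phenotypes."""
    return np.mean([falconer_h2(mz[:, :, v], dz[:, :, v])
                    for v in range(mz.shape[2])])

def aggregate_h2(mz, dz):
    """Fast aggregate: pool all elements into one long pair list first."""
    return falconer_h2(mz.transpose(0, 2, 1).reshape(-1, 2),
                       dz.transpose(0, 2, 1).reshape(-1, 2))
```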

Searching Multiregression Dynamic Models of fMRI Networks using Integer Programming

Lilia Carolina Carneiro da Costa (University of Warwick)

In this work we estimate the effective connectivity for resting-state and steady-state task-based functional Magnetic Resonance Imaging (fMRI) data, using a class of Dynamic Bayesian Network (DBN) models called the Multiregression Dynamic Model (MDM). Several models have been developed in order to define and detect a causal flow from one variable to another, especially in the area of machine learning (e.g. Spirtes et al., 2000 and Pearl, 2000). The MDM embodies a particular pattern of causal relationships which, unlike the Bayesian Network (BN), expresses the dynamic message passing as well as the potential connectivity between different areas in the brain. One of the advantages of this class is that, in contrast to many other DBNs, the hypothesized relationships accommodate conditional conjugate inference. We demonstrate how straightforward it is to search over all possible connectivity networks with dynamically changing intensity of transmission to find the MAP model within this class. This search is made feasible by a novel application of an Integer Programming algorithm. We show the efficacy of applying this particular class of dynamic models to this domain, and more specifically the computational efficiency of the corresponding search over an 11-node DAG model space. Moreover, because the marginal likelihood factorizes over nodes, the search over all possible directed (acyclic or cyclic) graphical structures is even faster, as we also demonstrate.
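To see why the factorization makes the unconstrained search fast: when the marginal likelihood is a sum of per-node terms, each node's parent set can be optimised independently, with no joint search over graphs. A sketch under that assumption (`log_ml` is a hypothetical placeholder for the MDM's closed-form node score; enforcing acyclicity is where the Integer Programming step comes in and is not shown):

```python
from itertools import combinations

def map_graph(nodes, log_ml, max_parents=3):
    """MAP directed graph when the score factorises over nodes.

    log_ml(child, parents) -> that node's log marginal likelihood term.
    Cycles are allowed in this unconstrained search, matching the
    "directed (acyclic or cyclic)" case described in the abstract.
    """
    graph = {}
    for child in nodes:
        candidates = [p for p in nodes if p != child]
        graph[child] = max(
            (pa for k in range(max_parents + 1)
             for pa in combinations(candidates, k)),
            key=lambda pa: log_ml(child, pa),
        )
    return graph
```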

Robust and explorative analysis of EEG data

Benedikt Ehinger (University of Osnabrueck)

Exploratory, data-driven analysis of EEG is burdened by the multiple comparison problem. In a standard EEG experiment, testing a large number of electrodes and time points for potential effects leads to a highly inflated family-wise error rate. Furthermore, correction procedures that do not take into account the highly correlated structure of the data result in increased type II errors. In addition, high inter-subject variability leads to outliers that can skew the data and produce unreliable effects. In order to overcome these problems, we fitted mass-univariate GLMs to single-subject trials and, in a second step, estimated effects based on the beta weights across subjects for all time points and electrodes (Pernet, 2011). In our experiment, we study the role of prediction of visual information during eye-movement behavior. We displayed peripheral stimuli and, after a delay, instructed the subjects to perform saccades onto the stimulus. During the saccade, we exchanged the stimulus with a modified version in some of the trials. We analyzed pre- and post-saccadic ERPs in a 4x2 unbalanced factorial design. The EEG data (64 channels, 500 Hz) were corrected for multiple comparisons by the threshold-free cluster enhancement method, which corrects each individual sample statistic instead of an extended cluster-based statistic. To account for outliers, we used Yuen's t-test and bootstrap-percentile methods on trimmed means as robust statistics for the group-level comparisons. We found a significant main effect of changing the stimuli that closely resembles a P300 and, importantly, highly significant interactions modulating this main effect. These approaches allow analyses with considerably improved sensitivity and robustness. In particular, interactions can be analyzed readily within the GLM framework in a flexible and exploratory way.
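For reference, a minimal implementation of the robust group-level test mentioned above, Yuen's t-test on trimmed means for two independent groups (recent SciPy exposes the same test as `scipy.stats.ttest_ind(x, y, trim=0.2)`; the 20% trimming here is a conventional default, not necessarily the poster's choice):

```python
import numpy as np
from scipy import stats

def yuen_t(x, y, trim=0.2):
    """Yuen's t-test on trimmed means; returns (t, two-sided p)."""
    def trimmed(a):
        a = np.sort(np.asarray(a, float))
        n = len(a)
        g = int(np.floor(trim * n))            # trimmed from each tail
        w = a.copy()                            # winsorize: clamp the tails
        w[:g], w[n - g:] = a[g], a[n - g - 1]
        h = n - 2 * g                           # effective sample size
        d = (n - 1) * np.var(w, ddof=1) / (h * (h - 1))
        return stats.trim_mean(a, trim), d, h
    tmx, dx, hx = trimmed(x)
    tmy, dy, hy = trimmed(y)
    t = (tmx - tmy) / np.sqrt(dx + dy)
    df = (dx + dy) ** 2 / (dx ** 2 / (hx - 1) + dy ** 2 / (hy - 1))
    return t, 2 * stats.t.sf(abs(t), df)
```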

The Fast and Powerful Method for Multiple Testing Inference in Family-Based Heritability Studies with Imaging Data

Habib Ganjgahi (University of Warwick)

Estimation of heritability for neuroimaging phenotypes like cortical thickness, fractional anisotropy and BOLD activation is essential in imaging genetics studies. Voxel-wise heritability measurements were made possible by genetic analysis tools optimized for imaging research, such as SOLAR/SOLAR-Eclipse. The mass-univariate nature of voxel-wise analyses presents a challenge, however: the elevation in false positive findings due to multiple testing must be accounted for, and many standard corrections, including cluster-wise inference, cannot be readily used in imaging genetics studies because of the non-independence of the sample. Here we present a fast and powerful permutation test for general pedigree studies that provides traditional spatial inferences for images. Heritability estimation is performed using variance component models, in which the phenotype covariance matrix is decomposed into two components, one for the additive genetic effect and one for the combination of individual-specific environmental effects and measurement error ($\Sigma = 2\sigma_A^2 \Phi + \sigma_E^2 I$). These parameters are estimated by maximizing the likelihood function under a multivariate normal assumption, and an orthogonal transformation (based on the eigenvectors of the kinship matrix) is used to accelerate computation. Two permutation tests are proposed to correct for multiple testing in imaging heritability studies. The first permutes the kinship covariance matrix, refits the model and applies the likelihood ratio test (LRT) statistic to derive uncorrected p-values; because the nonlinear likelihood function must be optimized numerically in each permutation, this method is computationally intensive. The second constructs an auxiliary regression of the squared residuals on the kinship covariance matrix eigenvalues. After orthogonal transformation of the data, the second moment of the residuals is linear in the additive genetic effect and the kinship matrix eigenvalues, $E(\epsilon_i^2) = h^2(\lambda_{g_i} - 1) + 1$ (without loss of generality we assume the data are scaled to unit variance), where $\epsilon$ and $\lambda_g$ are the transformed residuals and the kinship matrix eigenvalues, respectively. We propose half of the explained sum of squares as the test statistic; uncorrected p-values are obtained by permuting the eigenvalues and recomputing the statistic in each permutation. Finally, in each permutation the maximum statistic and maximum cluster size are recorded to derive the empirical distributions of these statistics and their critical values.

Computer simulation was used to validate the permutation tests and random field theory (RFT) cluster-size inference for multiple testing correction in imaging heritability studies. We simulated smooth 64-by-64 images containing a circular region of true heritability for families of 52 and 138 subjects; heritability was varied over $h^2$ = 0.2, 0.4, 0.6, 0.6. We used 500 permutations, and the entire simulation was repeated with 100 realized datasets. Voxel-wise, we found that permutation and parametric likelihood-based inferences gave nearly identical control of false positives and comparable power. Cluster-wise inference revealed that RFT results are generally conservative and break down for low cluster-forming thresholds (fig. 1, right panel). Cluster-wise inference based on the LRT statistic image worked better than RFT, although we found that permuting the eigenvalues with the LRT as the test statistic is affected by effect size (fig. 2). Finally, the permutation method based on the auxiliary regression was faster, controlled the false positive rate at the desired level (fig. 1, left panel) and had the greatest power.
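A schematic of the auxiliary-regression permutation test for a single voxel, under the stated scaling to unit variance (variable names are ours; for image-wise FWE control, the maximum statistic across voxels would be recorded in each permutation as described above):

```python
import numpy as np

def aux_stat(eps2, lam):
    """Half the explained sum of squares from regressing the squared
    transformed residuals on the kinship eigenvalues, per
    E[eps_i^2] = h^2 (lambda_i - 1) + 1."""
    X = np.column_stack([np.ones_like(lam), lam - 1.0])
    beta, *_ = np.linalg.lstsq(X, eps2, rcond=None)
    return 0.5 * np.sum((X @ beta - eps2.mean()) ** 2)

def perm_p_value(eps2, lam, n_perm=500, seed=0):
    """Uncorrected p-value: permute the eigenvalues, recompute the statistic."""
    rng = np.random.default_rng(seed)
    obs = aux_stat(eps2, lam)
    null = np.array([aux_stat(eps2, rng.permutation(lam))
                     for _ in range(n_perm)])
    return (1 + np.sum(null >= obs)) / (1 + n_perm)
```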

Is Z enough? Impact of Meta-Analysis using only Z/T images in lieu of estimates and standard errors

Camille Maumet (University of Warwick)

Introduction: While most neuroimaging meta-analyses are based on peak coordinate data, the best-practice method is an Intensity-Based Meta-Analysis (IBMA) that combines the effect estimates and their standard errors (E+SE's) [5]. There are various efforts underway to facilitate the sharing of neuroimaging data and make such IBMAs possible (see, e.g. [2]), but the emphasis is usually on sharing T-statistics. However, guidelines for (non-imaging) meta-analysis are clear that T-statistic-based meta-analysis is suboptimal and is to be discouraged [1]. But even if E+SE's are shared, the units must be equivalent, and different software, models or contrasts can lead to incompatible units. Using 21 studies of pain in control subjects, we compare IBMA using only T-statistics to IBMA using E+SE's.

Methods: Our reference approach is an IBMA based on a 3-level hierarchical model: level 1, subject FFX; level 2, study MFX; level 3, meta-analysis MFX (FLAME MFX) or FFX (FLAME FFX), using FSL's FLAME method [6]. In the absence of E+SE's, there are a number of methods to combine Z-scores [3]. We focused on four of them: Stouffer's method [7], Weighted-Z [8,4], Z MFX [5] and Z Permutation. We also investigated two alternative approaches using only the E's: Random-Effects GLM (RFX GLM) and Contrast Permutation.

Conclusions: We have compared seven meta-analytic approaches in the context of a one-sample test. When only contrast estimates are available, RFX GLM was valid and closest to the FLAME MFX reference. When only standardised estimates (i.e. Z/T's) are available, permutation is the preferred option, as it provides the most faithful results. Further investigations are needed to assess the behaviour of these estimators in other configurations, including meta-analyses focusing on between-study differences.
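The two simplest Z-combination rules above have closed forms. A sketch for study-by-voxel Z maps (weighting by the square root of the sample size follows the weighted-Z literature and is an assumption about the exact implementation):

```python
import numpy as np
from scipy import stats

def stouffer(z):
    """Stouffer's combined Z; z is (k studies, V voxels)."""
    return z.sum(axis=0) / np.sqrt(z.shape[0])

def weighted_z(z, n):
    """Weighted-Z: weight each study's Z map by sqrt(sample size n_i)."""
    w = np.sqrt(np.asarray(n, float))[:, None]
    return (w * z).sum(axis=0) / np.sqrt((w ** 2).sum())

# one-sided p-value map for either combined statistic:
# p = stats.norm.sf(z_combined)
```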

Finding the right spot - Background correction in ratiometric calcium imaging using independent background measurements

Marlene Pacharra (TU Dortmund), Vanessa Hausherr, Ramona Lehmann, Julia Sisnaiske, and Christoph van Thriel

In neuroscience, imaging of transient calcium responses to a range of stimuli (e.g. neurotransmitters, chemicals) in neuronal cells is an important tool to assess cell functionality. After loading fluorescent Ca2+ indicators into living cells, ratiometric measurements are used to quantitatively determine the intracellular calcium concentration during stimulation. Without a valid estimate of background fluorescence, the amplitudes of the measured calcium transients cannot be compared across experiments. In order to control for such sample-based differences, a standard approach is to use an independent background measurement from a cell-free region, which is subsequently subtracted from the fluorescence intensity in all Regions of Interest (ROIs, cells). We investigated, across three different cell types from mice (neuronal progenitor cells, primary cortical neurons, trigeminal ganglion neurons) and three biological replicates, whether the location (< 1 μm vs. > 10 μm from a cell) and size (46 μm² vs. 200 μm²) of the cell-free control area in which the background measurement is made affect background-corrected calcium amplitudes. We hypothesized that if (a) the background measurement area is close to a cell and (b) the background measurement area is large, the background-corrected amplitudes should be larger, since the noisy background can be better eliminated. Depending on location, cell type and replicate, the size of the background measurement area influenced background-corrected amplitudes differently, sometimes resulting in substantially smaller or larger amplitudes. Across all cell types, background-corrected amplitudes were larger if the background measurement was made close to a cell as opposed to further away from it. Background correction based on a single independent background measurement is implemented in many software applications for calcium imaging; results obtained using this approach should be treated with care. If amplitudes are to be compared across experiments, our results suggest that experimenters should control the size and location of the background measurement area.
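The correction under study is simple to state in code; a sketch for a Fura-2-style dual-wavelength measurement (the 340/380 nm wavelengths and the array layout are illustrative assumptions):

```python
import numpy as np

def corrected_ratio(f340, f380, bg340, bg380):
    """Background-corrected ratiometric calcium signal.

    f340, f380   : (n_frames, n_rois) raw ROI fluorescence at each wavelength
    bg340, bg380 : (n_frames,) fluorescence of the single cell-free
                   background region, measured independently
    """
    # subtract the same background trace from every ROI, per wavelength
    return (f340 - bg340[:, None]) / (f380 - bg380[:, None])
```

The study's point is precisely that the choice of the cell-free region feeding `bg340`/`bg380` changes the amplitudes this ratio produces.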

Global tractography within a Bayesian framework

Lisa Mott (University of Nottingham)

Diffusion-weighted magnetic resonance imaging quantifies the diffusion of water in the brain to probe the underlying tissue, and enables the non-invasive, in-vivo reconstruction of white matter tracts by tractography, which is essential to understanding the brain's structure and function. Currently, the two commonly used tractography methods do not allow statistical testing for the existence of a connection between two brain regions of interest (ROIs). However, global tractography (Jbabdi et al. 2007) parametrises the connections between two brain regions at a global level, and hence known connections can be acknowledged in the algorithm. Within such a framework the intensity within each voxel is modelled using a partial volume model (Behrens et al. 2003), and we can perform model selection to choose between a model in which a connection exists between two ROIs and a model in which there is no such connection.

In this talk we first discuss how one can efficiently estimate the parameters of the partial volume model when fitted to data from a single voxel. The partial volume model allows a number of fibre orientations to be modelled within one voxel. Although regularisation methods have been used in this setting, we instead introduce thermodynamic integration methods, which enable formal model comparison through accurate estimation of Bayes factors. Furthermore, we introduce a new method for tractography, termed fully probabilistic tractography, that allows model uncertainty (i.e. the number of different fibre orientations) to be taken into account. Finally, we discuss how these single-voxel methods can be used to construct an MCMC algorithm for efficient inference in global tractography.
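A minimal sketch of the thermodynamic integration estimator mentioned above: given MCMC samples from a ladder of power posteriors p(θ | y, t) ∝ p(y | θ)^t p(θ), the log evidence is the integral over the inverse temperature of the expected log-likelihood, approximated here by the trapezoid rule (the temperature ladder and sampler are left unspecified):

```python
import numpy as np

def log_evidence_ti(temps, loglik_per_temp):
    """Thermodynamic integration: log p(y) = integral_0^1 E_t[log p(y|theta)] dt.

    temps           : increasing inverse temperatures, 0 = t_0 < ... < t_K = 1
    loglik_per_temp : loglik_per_temp[k] holds the log-likelihood values of
                      MCMC samples drawn from the power posterior at temps[k]
    """
    means = np.array([np.mean(ll) for ll in loglik_per_temp])
    return np.trapz(means, temps)

# A log Bayes factor between two fibre-configuration models is then the
# difference of their two log-evidence estimates.
```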

References

1. Behrens, T.E.J., Woolrich, M.W., Jenkinson, M., Johansen-Berg, H., Nunes, R.G., Clare, S., Matthews, P.M., Brady, J.M. and Smith, S.M. 2003. Characterization and Propagation of Uncertainty in Diffusion-Weighted MR Imaging. Magnetic Resonance in Medicine 50, pp. 1077-1088.
2. Jbabdi, S., Woolrich, M.W., Andersson, J.L.R. and Behrens, T.E.J. 2007. A Bayesian framework for global tractography. NeuroImage 37, pp. 116-129.

Estimating non-stationary brain connectivity networks

Ricardo Pio Monti (Imperial College)

Understanding the functional architecture of the human brain is at the forefront of neuroimaging. In many applications functional networks are assumed to be stationary, resulting in a single network being estimated for the entire time course. However, recent results suggest that the connectivity between brain regions is highly non-stationary even at rest. As a result, new methodologies are needed to comprehensively account for the dynamic nature of functional networks; such approaches must be capable of accurately estimating networks while remaining highly adaptive to rapid changes that may occur. In this poster we describe the Smooth Incremental Graphical Lasso Estimation (SINGLE) algorithm, which can be used to estimate dynamic brain networks from fMRI data. The proposed method builds on the strengths of penalised regression methods and is strongly related to previous work such as the graphical lasso and fused lasso. Consequently, the resulting objective function is convex and can be solved efficiently via an Alternating Directions Method of Multipliers (ADMM) algorithm. This allows the proposed algorithm to solve large-scale problems; we further discuss approximations which can be used to improve running times in practical applications. We provide a simulation study to demonstrate the capabilities of the proposed method and apply it to task-based fMRI data. The results highlight the Right Inferior Frontal Gyrus and the Right Inferior Parietal Lobe as regions whose connectivity changes dynamically with the task.
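SINGLE's full objective couples a graphical-lasso sparsity penalty with a fused penalty across time and is solved by ADMM. The sketch below keeps only the first ingredient, fitting a sparse precision matrix to a kernel-smoothed covariance at each time point (bandwidth and penalty values are arbitrary, and the fused coupling between neighbouring time points is deliberately omitted):

```python
import numpy as np
from sklearn.covariance import graphical_lasso

def kernel_cov(X, t0, h):
    """Gaussian-kernel-weighted covariance of X (time x regions), centred at t0."""
    w = np.exp(-0.5 * ((np.arange(len(X)) - t0) / h) ** 2)
    w /= w.sum()
    Xc = X - w @ X                       # weighted demeaning
    return (w[:, None] * Xc).T @ Xc

def dynamic_precisions(X, h=10.0, alpha=0.1):
    """One sparse precision matrix (network estimate) per time point."""
    return [graphical_lasso(kernel_cov(X, t, h), alpha=alpha)[1]
            for t in range(len(X))]
```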

Bayesian model selection and estimation: Simultaneous mixed effects for models and parameters

Daniel J. Schad (Charite Hospital Berlin)

Bayesian model selection and estimation (BMSE) are powerful methods for determining the most likely among a set of competing hypotheses about the mechanisms and parameters that generated observed data. In group studies, full inference is provided by mixed-effects or empirical/hierarchical Bayes models, which capture individual differences (random effects) as well as mechanisms/parameters common to all individuals (fixed effects). Previous models have assumed mixed effects either for model parameters (e.g., Pinheiro & Bates, 2000) or for the model identity (Stephan et al., 2009). Here, we present a novel Variational Bayes (VB) model which considers mixed effects for models and parameters simultaneously. As a first step, we evaluate a method that estimates mixed effects for parameters via expectation maximization (EM) while treating models as a fixed effect (cf. Huys et al., 2011). Based on Monte Carlo simulations of (generalized non-linear) reinforcement learning models of decision-making, we show that the EM method efficiently recovers true effects from the data, and that it can be used to estimate GLMs at the level of individual-specific parameters. We derive model evidences and error bars for fixed effects via importance sampling and demonstrate via simulations that these can be used to test hypotheses on the data. Second, we evaluate our new VB method, which simultaneously considers mixed effects for models and parameters, and compare it to a sufficient-statistics approach in which mixed effects for parameters (Huys et al., 2011) and models (Stephan et al., 2009) are computed separately and combined for inference. Monte Carlo simulations show that both approaches estimate model probabilities successfully when uncertainty is low but, as theoretically expected, the new VB method assigns more probability mass to the correct model under conditions of uncertainty. Compared to previous approaches (Huys et al., 2011; Stephan et al., 2009), the new VB method thus provides more precise inference in Bayesian model selection under uncertainty and allows biases in parameter estimation to be reduced. Our new method suggests that we can and should understand the heterogeneity and homogeneity observed in group studies by investigating contributions of both the underlying mechanisms and their parameters. We expect that this new mixed-effects method will prove useful for a wide range of group studies in computational modeling in neuroscience.
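For context, the random-effects model-selection scheme of Stephan et al. (2009), which the new VB method extends, can be written compactly; a sketch of its variational Dirichlet updates given a subjects-by-models matrix of log model evidences:

```python
import numpy as np
from scipy.special import digamma

def random_effects_bms(log_evidence, n_iter=100):
    """Variational Bayes for random-effects Bayesian model selection
    (the Dirichlet scheme of Stephan et al., 2009).

    log_evidence : (N subjects, K models) log model evidences
    Returns the Dirichlet parameters alpha and expected model frequencies.
    """
    N, K = log_evidence.shape
    alpha0 = np.ones(K)                  # uniform prior over models
    alpha = alpha0.copy()
    for _ in range(n_iter):
        # per-subject posterior model assignment probabilities
        log_u = log_evidence + digamma(alpha) - digamma(alpha.sum())
        log_u -= log_u.max(axis=1, keepdims=True)     # numerical stability
        g = np.exp(log_u)
        g /= g.sum(axis=1, keepdims=True)
        alpha = alpha0 + g.sum(axis=0)   # update Dirichlet counts
    return alpha, alpha / alpha.sum()
```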


Pain-free Bayesian inference for psychometric functions

Heiko Schuett (University of Tuebingen)

To estimate psychophysical performance, psychometric functions are usually modeled as sigmoidal functions whose parameters are estimated by likelihood maximization. While this approach gives a point estimate, it ignores the reliability of that estimate (its variance). This is in contrast to Bayesian methods, which in principle can determine the posterior of the parameters and thus the reliability of the estimates. However, using Bayesian methods in practice usually requires extensive expert knowledge, user interaction and computation time. Also, many methods---including Bayesian ones---are vulnerable to non-stationary observers (whose performance is not constant). Our work provides an efficient Bayesian analysis which runs within seconds on a common office computer, requires little user interaction and improves robustness against non-stationarity. A Matlab implementation of our method, called PSIGNIFIT 4, is freely available online. We additionally provide methods to combine posteriors to test the difference between psychometric functions (such as between conditions), to obtain posterior distributions for the average of a group, and for other comparisons of practical interest. Our method uses numerical integration, allowing robust estimation of a beta-binomial model that is stable against non-stationarities. Comprehensive simulations to test the numerical and statistical correctness and robustness of our method are in progress, and initial results look very promising.
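The core of such a numerical-integration approach is straightforward to sketch: evaluate the posterior of the psychometric function's parameters on a grid. The fragment below fixes the guess and lapse rates and uses a plain binomial likelihood, whereas PSIGNIFIT 4 also integrates over those parameters and uses a beta-binomial likelihood for robustness to non-stationarity:

```python
import numpy as np
from scipy import stats

def psychometric_posterior(x, n_correct, n_total, m_grid, w_grid):
    """Grid posterior for a cumulative-Gaussian psychometric function.

    x, n_correct, n_total : stimulus levels and binomial outcomes per level
    m_grid, w_grid        : candidate threshold and width values
    """
    guess, lapse = 0.5, 0.01                 # assumed 2AFC, small fixed lapse
    logpost = np.zeros((len(m_grid), len(w_grid)))
    for i, m in enumerate(m_grid):
        for j, w in enumerate(w_grid):
            p = guess + (1 - guess - lapse) * stats.norm.cdf(x, loc=m, scale=w)
            logpost[i, j] = stats.binom.logpmf(n_correct, n_total, p).sum()
    logpost -= logpost.max()                 # avoid underflow
    post = np.exp(logpost)
    return post / post.sum()                 # flat prior over the grid
```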


Spatial Modelling of Multiple Sclerosis for Disease Subtype Prediction

Bernd Taschler (University of Warwick)

Magnetic resonance imaging (MRI) has become an essential tool in the diagnosis and management of Multiple Sclerosis (MS). Currently, the assessment of MS is based on a combination of clinical scores and subjective rating of lesion images by clinicians. We present an objective 5-way classification of MS disease subtype, as well as a comparison of three different approaches. First, we propose two spatially informed models, a Bayesian Spatial Generalized Linear Mixed Model (BSGLMM) and a Log Gaussian Cox Process (LGCP). The BSGLMM relies on a regularised probit regression model and accounts for the binary nature of lesion maps and the spatial dependence between neighbouring voxels. The LGCP, on the other hand, accounts for the random spatial variation in lesion location, where the centre of mass of each lesion is considered a realisation of a Poisson process driven by an underlying, non-negative intensity function. Both models improve upon mass-univariate analyses, which ignore spatial dependence and rely on some level of arbitrarily defined smoothing of the data. As a comparison, we consider a machine learning approach based on a multi-class support vector machine (SVM). For the SVM classification scheme we use a large number of quantitative features derived from three MRI sequences, in addition to traditional demographic and clinical measures. We show that the spatial models outperform standard approaches, with average prediction accuracies of up to 85%.
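The SVM comparison arm is standard; a minimal sketch with scikit-learn, in which random arrays stand in for the quantitative MRI features and the five subtype labels (kernel and cross-validation choices are illustrative):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
features = rng.standard_normal((100, 50))   # imaging + clinical features
subtype = rng.integers(0, 5, size=100)      # five MS subtype labels

# scale the features, then fit a multi-class RBF SVM; report CV accuracy
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
print(cross_val_score(clf, features, subtype, cv=5).mean())
```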