Sengupta, B.; Friston, K.J.; Penny, W.D. (2014) Efficient Gradient Computation for Dynamical Models. NeuroImage, 98, pp. 521-527. doi:10.1016/j.neuroimage.2014.04.040
Abstract
Data assimilation is a fundamental issue that arises across many scales in neuroscience, ranging from the study of single neurons using single-electrode recordings to the interaction of thousands of neurons using fMRI. Data assimilation involves inverting a generative model that can not only explain observed data but also generate predictions. Typically, the model is inverted or fitted using conventional tools of (convex) optimisation that invariably extremise some functional: norms, minimum description length, variational free energy, etc. Generally, optimisation rests on evaluating the local gradients of the functional to be optimised. In this paper, we compare three gradient estimation techniques that could be used for extremising any functional in time: (i) finite differences, (ii) forward sensitivities, and (iii) a method based on the adjoint of the dynamical system. We demonstrate that the first-order gradients of a dynamical system, linear or non-linear, can be computed most efficiently using the adjoint method. This is particularly true for systems in which the number of parameters exceeds the number of states. For such systems, integrating several sensitivity equations, as required with forward sensitivities, proves the most expensive, while finite-difference approximations have intermediate efficiency. In the context of neuroimaging, adjoint-based inversion of dynamic causal models (DCMs) can, in principle, enable the study of models with large numbers of nodes and parameters.
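The trade-off summarised in the abstract, that finite differences require extra model integrations per parameter while the adjoint method needs only one forward and one backward pass regardless of parameter count, can be illustrated with a toy example. This sketch is not taken from the paper: it assumes a scalar linear model dx/dt = -theta * x with terminal cost J(theta) = x(T)^2, integrated by forward Euler, and compares a central finite-difference gradient with the adjoint gradient.

```python
import numpy as np

def simulate(theta, x0=1.0, T=1.0, n=10000):
    """Forward Euler integration of the toy model dx/dt = -theta * x."""
    dt = T / n
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        x[k + 1] = x[k] + dt * (-theta * x[k])
    return x, dt

def cost(theta):
    """Terminal cost J(theta) = x(T)^2."""
    x, _ = simulate(theta)
    return x[-1] ** 2

def grad_finite_difference(theta, eps=1e-6):
    """Central difference: two extra model integrations per parameter."""
    return (cost(theta + eps) - cost(theta - eps)) / (2 * eps)

def grad_adjoint(theta):
    """Adjoint method: one forward pass for x, one backward pass for the
    adjoint lambda, independent of the number of parameters."""
    x, dt = simulate(theta)
    n = len(x) - 1
    lam = np.empty(n + 1)
    lam[-1] = 2 * x[-1]                 # lambda(T) = dJ/dx(T)
    for k in range(n, 0, -1):
        # backward in time: dlambda/dt = -lambda * df/dx = theta * lambda
        lam[k - 1] = lam[k] - dt * theta * lam[k]
    # dJ/dtheta = integral over [0, T] of lambda(t) * df/dtheta = lambda * (-x)
    integrand = lam * (-x)
    return dt * (integrand.sum() - 0.5 * (integrand[0] + integrand[-1]))
```

For this model the gradient is analytic, dJ/dtheta = -2 T x0^2 exp(-2 theta T), so both estimates can be checked directly; the point of the comparison is that `grad_finite_difference` re-integrates the system twice per parameter, whereas `grad_adjoint` would still need only one backward integration if the model had many parameters.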