
Abstracts


Agapiou Sergios
 

Unbiased Estimation and Exact Simulation for Bayesian Inverse Problems
We will consider PDE inverse problems formulated in function space.
In this approach, we put a prior distribution on the possibly infinite-dimensional unknown parameter, and the solution takes the form of the posterior distribution on the unknown parameter given the observed data. The aim is to extract information from the posterior, for example by calculating posterior expectations. The challenge is that typically we cannot sample the posterior, which is only available as a change of measure from the prior, where this change of measure involves the solution of the forward problem. The standard approach is then to construct a Markov chain whose limiting distribution is the posterior measure and to use samples from this Markov chain to estimate posterior expectations, for example using the ergodic average. In this approach there are two forms of approximation, due to a) the discretization of the possibly infinite-dimensional parameter and/or the discretization of the forward PDE in order to solve it numerically; and b) the use of finite-time samples from the Markov chain, which have not necessarily converged to the limiting posterior distribution. We will first discuss techniques for estimating posterior expectations unbiasedly, that is, we will construct estimators whose mean is exactly the correct posterior expectation. Then, we will discuss techniques for drawing samples from the posterior without any error due to the discretization of the forward PDE solver.
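The abstract does not spell out the estimator; one standard construction of such unbiased estimators is randomized truncation of the telescoping sum over discretization levels (Rhee-Glynn / McLeish type estimators). The sketch below is only a minimal illustration of that idea: `coupled_difference` is a hypothetical placeholder for a coupled estimator of the level increments, which in practice requires coupled MCMC chains or coupled forward solvers.

```python
import numpy as np

rng = np.random.default_rng(0)

def coupled_difference(level):
    """Hypothetical placeholder: an estimator D_level with
    E[D_0] = mu_0 and E[D_l] = mu_l - mu_{l-1} for l >= 1,
    where mu_l is the posterior expectation under discretization level l."""
    raise NotImplementedError

def single_term_unbiased_estimator(p=0.5):
    """Single-term randomized estimator: draw a random level N with
    P(N = l) = p (1 - p)^l and return D_N / P(N).
    Its expectation is sum_l E[D_l] = lim_l mu_l, i.e. the
    discretization-free posterior expectation, provided the sum converges."""
    level = rng.geometric(p) - 1            # N in {0, 1, 2, ...}
    prob = p * (1.0 - p) ** level           # P(N = level)
    return coupled_difference(level) / prob

# Averaging i.i.d. replicates of single_term_unbiased_estimator() gives an
# unbiased estimate whose expected cost is finite when the variance of D_l
# decays fast enough relative to the growth of the cost per level.
```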
The first part on unbiased estimation is work done in collaboration with Gareth Roberts (Warwick) and Sebastian Vollmer (Oxford) (arXiv:1411.7713), while the second part on exact simulation is ongoing work in collaboration with Gareth Roberts and Andrew Stuart (Warwick).
 


Albers Dave

Data assimilation in biomedicine with a focus on glucose forecasting for type 2 diabetics
Most directly, this talk will focus on applying data assimilation in the context of biomedicine in general, and in the context of type 2 diabetes in particular. Biomedical research can be cleaved into two broad categories, direct clinical application and basic biology research, and these two contexts imply constraints on the data available as well as on what forecasts are useful. The talk will begin with an introduction to the data and the constraints that these contexts impose, and discuss how these constraints both guide our computational choices and formulate new problems. Under this conceptual framework an application of data assimilation to glucose forecasting for patients with type 2 diabetes will be shown. In this application, states and parameters are estimated using a dual unscented Kalman filter, personalizing two mechanistic models to individuals. The use of two models is motivated by the fact that no first-principles models of the endocrine system exist and the key components within a given model required to forecast an individual's glucose remain unknown. Model selection methodology is then used to make a personalized model choice for the individual and their measurement characteristics. The data assimilation forecasts are empirically evaluated against future glucose measurements at specific times and against forecasts by trained type 2 diabetes nutrition counselors. The data assimilation forecasts compare well with specific glucose measurements and match or exceed the accuracy of the expert forecasts. The talk will conclude with a discussion of how the data assimilation application is currently being used in a clinical setting.
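The abstract gives no implementation details; purely as a rough illustration of unscented filtering with simultaneous state and parameter estimation, the sketch below implements a generic unscented predict/update step in NumPy. For brevity it uses the joint (state-augmented) formulation rather than the dual two-filter formulation described in the talk; the model functions f and h, and the noise covariances Q and R, are placeholders, and alpha, beta, kappa are the usual unscented-transform tuning parameters.

```python
import numpy as np

def sigma_points(m, P, alpha=0.1, beta=2.0, kappa=0.0):
    """Scaled sigma points and weights for mean m and covariance P."""
    n = len(m)
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * P)
    pts = np.vstack([m, m + S.T, m - S.T])                 # (2n+1, n)
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1 - alpha**2 + beta)
    return pts, wm, wc

def ukf_step(m, P, y, f, h, Q, R):
    """One unscented predict/update for x' = f(x) + noise, y = h(x) + noise.
    To estimate parameters jointly, augment the state x with the parameters
    and let f propagate them unchanged (plus a small random walk via Q)."""
    pts, wm, wc = sigma_points(m, P)
    fx = np.array([f(p) for p in pts])                     # predict
    m_pred = wm @ fx
    P_pred = Q + sum(w * np.outer(p - m_pred, p - m_pred) for w, p in zip(wc, fx))
    pts, wm, wc = sigma_points(m_pred, P_pred)             # update
    hx = np.array([h(p) for p in pts])
    y_pred = wm @ hx
    S = R + sum(w * np.outer(q - y_pred, q - y_pred) for w, q in zip(wc, hx))
    C = sum(w * np.outer(p - m_pred, q - y_pred) for w, p, q in zip(wc, pts, hx))
    K = C @ np.linalg.inv(S)
    return m_pred + K @ (y - y_pred), P_pred - K @ S @ K.T
```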


Arridge Simon

Dynamic PhotoAcoustic Tomography
Photoacoustic tomography (PAT) is an emerging biomedical imaging modality with both pre-clinical and clinical applications that is an example of “Imaging from coupled Physics”. This class of imaging techniques seeks to overcome a limitation of conventional imaging methods: the trade-off between modalities that give high-resolution structural images but do not discriminate different physiological states well in terms of contrast, and those with good physiological contrast but poor resolution. In the case of PAT, the method measures contrast in the optical part of the spectrum, which has high spectral sensitivity for different tissues, but uses sound to give high resolution. Resolution and acquisition time for PAT involve a trade-off between speed and signal-to-noise ratio. In a current project we are applying compressed sensing techniques to PAT in a spatio-temporal setting, in order to develop a potentially real-time, high-resolution 4D imaging modality. In this talk I will present our approach and initial results supporting its potential for success. Joint work with: Paul Beard, Marta Betcke, Ben Cox, Nam Trung Huynh, Felix Lucka.
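The abstract does not state the reconstruction model; a generic compressed-sensing formulation of the kind alluded to, in which subsampled acoustic measurements are inverted under a sparsity prior in a spatio-temporal transform domain, would read (purely as an illustration, not necessarily the formulation used in the project)

$$ \min_{u} \; \tfrac12 \| A u - y \|_2^2 + \lambda \| \Psi u \|_1 , $$

where $A$ is the subsampled photoacoustic forward operator, $y$ the measured time series, $\Psi$ a sparsifying transform acting jointly in space and time, and $\lambda$ a regularisation parameter.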


Chan Raymond

Point-spread function reconstruction in ground-based astronomy
Because of atmospheric turbulence, images of objects in outer space acquired via ground-based telescopes are usually blurry. One way to estimate the blurring kernel or point spread function (PSF) is to make use of the aberration of wavefronts received at the telescope, i.e., the phase. However, only low-resolution wavefront gradients can be collected by wavefront sensors. In this talk, I will discuss how to use regularization methods to reconstruct high-resolution phase gradients and then use them to recover the phase and the PSF with high accuracy.
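As a hedged illustration of the regularisation step (the specific functional used in the talk is not given in the abstract), reconstructing high-resolution phase gradients $g$ from low-resolution sensor readings $g_{\mathrm{lr}}$ can be posed as a Tikhonov-type problem,

$$ \hat g = \arg\min_{g} \; \| S g - g_{\mathrm{lr}} \|_2^2 + \lambda \| L g \|_2^2 , \qquad \hat g = (S^{\mathsf T} S + \lambda L^{\mathsf T} L)^{-1} S^{\mathsf T} g_{\mathrm{lr}} , $$

where $S$ is a downsampling/averaging operator mapping the high-resolution gradients to the wavefront-sensor measurements and $L$ is a smoothing penalty; the phase and then the PSF are subsequently recovered from $\hat g$.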


Christie Mike

Multilevel Hamiltonian Monte Carlo for Quantifying Uncertainty in Reservoir Simulation
This talk will discuss the application of multilevel methods in conjunction with Hamiltonian Monte Carlo to quantify uncertainty in reservoir simulation. Multilevel techniques exploit a telescoping sum and the linearity of the expectation operator to dramatically reduce the cost of Monte Carlo techniques. The algorithm works by representing the quantity of interest on a coarse grid, with a series of corrections accounting for the differences between realizations on finer grids of higher fidelity. To calibrate models to data, the multilevel algorithm is combined with Hamiltonian Monte Carlo to ensure efficient sampling from the posterior. The approach is demonstrated on a reservoir in the Gulf of Mexico, showing that MLHMC is significantly faster than standard HMC and substantially reduces the computational cost.
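The telescoping identity underlying the multilevel estimator is the standard MLMC decomposition

$$ \mathbb{E}[Q_L] \;=\; \mathbb{E}[Q_0] \;+\; \sum_{\ell=1}^{L} \mathbb{E}[Q_\ell - Q_{\ell-1}] , $$

where $Q_\ell$ denotes the quantity of interest computed on grid level $\ell$; because realizations on neighbouring levels are strongly correlated, the correction terms have small variance and can be estimated with few fine-grid samples. In MLHMC these expectations are taken over posterior samples generated by Hamiltonian Monte Carlo (the precise coupling between levels used in the talk is not specified in the abstract).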


Ghattas Omar

Scalable algorithms for optimal experimental design for infinite dimensional nonlinear Bayesian inverse problems
The ability to infer an unknown field within a PDE (e.g., medium, state, or source) from observational data with quantified uncertainty opens up the possibility of designing the observational system to minimize the uncertainty in the inferred field---the so-called optimal experimental design (OED) problem. The major challenge for OED problems is that the Bayesian inverse problem, difficult as it is, is just the inner problem within the outer experimental design optimization problem. When the uncertain field is discretized by a large number of parameters, and the PDE is expensive to solve, the OED problem becomes intractable. We address the problem of optimal experimental design for infinite-dimensional nonlinear Bayesian inverse problems by invoking a local approximation. We seek an A-optimal design, i.e., we aim to minimize the average variance of a Gaussian approximation to the inversion parameters at the MAP point. The OED problem includes as constraints the optimality condition PDEs defining the MAP point as well as the PDEs describing the action of the posterior covariance. We consider the A-optimal design of well locations for measuring pressure to optimally infer the permeability field in a porous medium flow problem. The numerical results demonstrate that the cost of solving the OED problem---measured by the number of forward/adjoint PDE solves---is independent of the parameter and data dimensions, and depends only on the information content of the data.
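In the Gaussian (Laplace) approximation referred to above, the A-optimal criterion takes a standard form; schematically, with a design vector $w$ weighting candidate observation locations,

$$ \min_{w} \; \operatorname{tr}\!\big( \mathcal{H}(w)^{-1} \big) , \qquad \mathcal{H}(w) \;=\; \mathcal{H}_{\mathrm{misfit}}(w; m_{\mathrm{MAP}}) + \mathcal{C}_{\mathrm{prior}}^{-1} , $$

i.e. one minimizes the trace of the approximate posterior covariance, the inverse of the data-misfit Hessian plus prior precision evaluated at the MAP point, with sparsity of $w$ typically enforced separately. This is a generic statement of A-optimality rather than the exact formulation used by the authors.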
This work is joint with Alen Alexanderian (NC State), Noemi Petra (UC Merced), and Georg Stadler (NYU).


Papaspiliopoulos Omiros

The intrinsic dimension of importance sampling
We will discuss the performance of importance sampling with a view towards inverse problem and filtering applications. I will first present general non-asymptotic results which provide a broad range of conditions under which importance sampling is successful. We will then focus on linear inverse problems and will discuss two notions of effective dimension and their relation to the collapse of importance sampling in high-dimensional and/or small observational noise regimes. Finally, we will briefly discuss the relative performance of the standard and optimal proposals in the filtering setting. This is joint work with Omiros Papaspiliopoulos, Daniel Sanz-Alonso and Andrew Stuart.
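A minimal numerical illustration of the collapse phenomenon (not taken from the talk): with the prior as proposal and a Gaussian likelihood observing every coordinate with small noise, the effective sample size of the normalized importance weights, ESS = 1/∑ w̄_i², degrades rapidly as the dimension grows.

```python
import numpy as np

rng = np.random.default_rng(1)

def ess_for_dimension(d, n_samples=10_000, noise_std=0.3):
    """Importance sampling with proposal N(0, I_d) (the prior) and a Gaussian
    likelihood y = x + noise observing every coordinate; returns the
    effective sample size 1 / sum(normalized_weights**2)."""
    x = rng.standard_normal((n_samples, d))        # proposal = prior samples
    y = np.zeros(d)                                # synthetic observation
    log_w = -0.5 * np.sum((y - x) ** 2, axis=1) / noise_std**2
    w = np.exp(log_w - log_w.max())                # stabilized weights
    w /= w.sum()
    return 1.0 / np.sum(w ** 2)

for d in (1, 5, 10, 20, 50):
    print(d, round(ess_for_dimension(d), 1))       # ESS collapses toward 1 as d grows
```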


Helin Tapio

Large noise in variational regularization
We consider variational regularization methods for inverse problems with large noise, which is unbounded in the image space of the forward operator. We introduce a general Banach space setting that allows us to define a reasonable notion of solutions for more general noise in a larger space, provided one has sufficient mapping properties of the forward operator. As a natural further step we study stochastic noise models, and in particular white noise, for which we derive error estimates in terms of the expectation of the Bregman distance. The finiteness of certain expectations leads to a novel class of abstract smoothness conditions on the forward operator, which can be easily interpreted in the Hilbert space case.
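For reference, the error estimates mentioned above are stated in terms of the (generalized) Bregman distance of the regularization functional $R$, defined for a subgradient $p \in \partial R(u^{\dagger})$ by

$$ D_R^{p}(u, u^{\dagger}) \;=\; R(u) - R(u^{\dagger}) - \langle p,\, u - u^{\dagger} \rangle , $$

with the variational estimators of the usual form $\hat u \in \arg\min_u \tfrac12\|Au - f\|^2 + \alpha R(u)$, suitably reinterpreted when the noise $f - Au^{\dagger}$ does not belong to the image space; the precise Banach space setting is developed in the talk.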


Iglesias Marco

A Bayesian level-set approach for geometric inverse problems
We discuss a computational Bayesian level-set framework for the solution of PDE-constrained inverse problems where the unknown is a geometric feature or interface of the underlying PDE forward model. Our approach uses a level-set function that parameterises unknown geometric features within an infinite dimensional Bayesian framework. Key theoretical aspects are discussed and numerical experiments displayed to show the capabilities of the level-set approach for (i) the identification of geologic structures in groundwater flow and (ii) the estimation of electric conductivity in the complete electrode model for EIT. In these applications, the unknown parameters have sharp interfaces characterised by the level-set function which is inferred, within the Bayesian framework, from observational data that arises from the PDE (forward) model under consideration. By means of state-of-the-art MCMC methods we explore the posterior distribution that arises from the Bayesian level-set formulation. Our numerical experiments indicate that the efficacy of the level-set approach relies on the proper prior specification of the intrinsic correlation lengths of the interface parameterised by the level-set function. This motivates our recent hierarchical multiscale approach in which the aforementioned characteristic length scale is learned from the data. The latter approach together with some numerical experiments will be also discussed in this talk.
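As a concrete two-phase instance of the parameterisation described above, the unknown coefficient $\kappa$ is written in terms of a level-set function $u$ as

$$ \kappa(x) \;=\; \kappa^{+}\,\mathbf{1}_{\{u(x) \ge 0\}}(x) \;+\; \kappa^{-}\,\mathbf{1}_{\{u(x) < 0\}}(x) , $$

with a Gaussian prior placed on $u$, so that the zero level set $\{u = 0\}$ carries the geometric unknown; multi-phase versions use several thresholds. This is the standard form of the Bayesian level-set map, and the applications in the talk may use variants of it.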
Joint work with Matt Dunlop (Warwick), Yulong Lu (Warwick), and Andrew Stuart (Warwick).


Mandel Jan

Data assimilation on random smooth functions with applications to ensemble Kalman filter and satellite fire detection
Spatial PDE-based models lead to Bayesian estimation with the state as a random smooth function and a data likelihood that is well defined but spatially rough and does not arise from a probability measure. We first show a simple example in which the state and the data error distributions are Gaussian measures on a separable Hilbert space, the entire state is observed, and the posterior is undefined. When the data error distribution is instead white noise (which is not a probability measure), the posterior is well defined and is again a probability measure. We then prove large-sample convergence of the ensemble Kalman filter with infinite-dimensional data and white noise randomization. Next, we show a method for the assimilation of active fire detections from satellites into a wildfire spread model. The state of the fire model is encoded as the fire arrival time on a spatial domain, with the covariance a negative fractional power of the Laplacian. The data likelihood is based on the heat output from a fuel burn model. A preconditioned steepest descent method can find a practically useful approximation of a maximum a posteriori probability estimate in a single iteration, avoiding local maxima.
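A schematic form of the fire arrival time problem described above (a paraphrase of the construction, not the exact functional from the talk): with prior mean $u_b$ from the spread model and covariance proportional to a negative fractional power of the Laplacian, the MAP estimate solves

$$ \hat u \;=\; \arg\min_{u}\; \tfrac12 \big\| C^{-1/2} (u - u_b) \big\|^2 \;-\; \log p(\mathrm{data} \mid u) , \qquad C \;\propto\; (-\Delta)^{-\alpha} , $$

where $u$ is the fire arrival time field, the data term is the spatially rough likelihood built from the satellite fire detections and the fuel-burn heat output, and the exponent $\alpha$ and scaling are modelling choices.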


Marzouk Youssef


Oliver Dean

Metropolized Randomized Maximum Likelihood for sampling from multimodal distributions
In this talk, I describe a method for using optimization to derive efficient independent transition functions for Markov chain Monte Carlo simulations. My interest is in sampling from a posterior density $\pi(x)$ for problems in which the dimension of the model space is large, $\pi(x)$ is multimodal with regions of low probability separating the modes, and evaluation of the likelihood is expensive. The method requires the solution of a large-scale minimization problem for each proposal, and the evaluation of a high-dimensional Jacobian determinant, but the acceptance rate for independent proposals can be very high for multimodal posterior distributions. Attention is restricted to the special case for which the target density is the product of a multivariate Gaussian prior and a likelihood function for which the errors in observations are additive and Gaussian.
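For orientation, the randomized maximum likelihood (RML) proposal in this Gaussian-prior, additive-Gaussian-noise setting is typically of the following form (a standard statement, not a verbatim quote of the talk): draw $x' \sim N(x_{\mathrm{pr}}, C_X)$ and $d' \sim N(d_{\mathrm{obs}}, C_D)$, then propose

$$ x^{\star} \;=\; \arg\min_{x}\; \tfrac12 (x - x')^{\mathsf T} C_X^{-1} (x - x') \;+\; \tfrac12 \big(g(x) - d'\big)^{\mathsf T} C_D^{-1} \big(g(x) - d'\big) , $$

and accept or reject $x^{\star}$ with a Metropolis-Hastings ratio that involves the Jacobian determinant of the map from the perturbations to the minimizer, which is what makes these minimization-based proposals valid as independent MH proposals.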


Reich Sebastian

Non-Gaussian data assimilation via a localized hybrid ensemble transform filter
Most current data assimilation (DA) algorithms for numerical weather prediction (NWP) are based on hybrid variational and ensemble-based approaches, which rely on a Gaussian approximation to forecast uncertainties. Such Gaussian representations are less likely to be appropriate for characterizing uncertainties arising from fully three-dimensional, multi-phase, convection-driven atmospheric circulation patterns. Thus high-resolution NWP has triggered the exploration of non-Gaussian DA methods. This talk will contribute to this development by presenting a hybrid ensemble transform filter which bridges the ensemble Kalman filter with sequential Monte Carlo methods. The proposed hybrid filter allows for localization and inflation as necessary for filtering spatio-temporal processes under model errors.
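One common way to realize such a bridge (stated here only as background, not necessarily the exact construction of the talk) is to split the likelihood with a bridging parameter $\alpha \in [0, 1]$,

$$ \pi(y \mid x) \;=\; \pi(y \mid x)^{\alpha}\,\pi(y \mid x)^{1-\alpha} , $$

assimilating one factor with an ensemble Kalman update and the other with a particle/ensemble-transform update, so that the two endpoints of $\alpha$ recover the pure EnKF and the pure particle filter respectively; localization and inflation are then applied to the ensemble statistics as usual.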



Sapsis Themis

Uncertainty quantification of complex systems through adaptive reduced order subspaces
We will discuss uncertainty quantification algorithms using adaptive reduced-order subspaces. A wide range of complex systems, including unstable laminar flows, turbulent flows, and nonlinear waves with extreme responses, is characterized by strongly transient dynamics. Such systems exhibit intense energy transfers, which are inherently connected with non-Gaussian statistics. The challenge for such systems is two-fold: i) to identify the appropriate subspace that captures the strongly transient and unstable dynamics, and ii) to effectively model the dynamics within this subspace, despite the dynamical interactions with the orthogonal complement. We will review recent methods for these two tasks and present applications on the quantification of low-order statistics for turbulent systems as well as the modeling of extreme events for systems characterized by intermittent instabilities.
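The adaptive subspaces referred to above are typically of time-evolving, dynamically orthogonal form; schematically, the stochastic field is represented as

$$ u(x, t; \omega) \;=\; \bar{u}(x, t) \;+\; \sum_{i=1}^{s} Y_i(t; \omega)\, v_i(x, t) , $$

where the modes $v_i(x, t)$ and the stochastic coefficients $Y_i$ evolve in time so that the low-dimensional subspace tracks the transient instabilities, and the truncation to $s$ modes must be corrected for interactions with the orthogonal complement. This is a generic representation of such methods, not a formula quoted from the talk.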



Schoenlieb Carola

Bilevel optimisation for variational regularisation models
When assigned the task of reconstructing an image from imperfect data, the first challenge one faces is the derivation of a truthful image and data model. In the context of regularised reconstructions, some of this task amounts to selecting an appropriate regularisation term for the image, as well as an appropriate distance function for the data fit. This can be determined by a-priori knowledge about the image, the data, and their relation to each other. The source of this knowledge is either our understanding of the type of images we want to reconstruct and of the physics behind the acquisition of the data, or we can strive to learn parametric models from the data itself. The common question arises: how can we optimise our model choice? In this talk we discuss a bilevel optimisation method for computing optimal parameters in variational regularisation models. In particular, we will consider optimal parameter derivation for total variation denoising with multiple noise distributions, optimising total generalised variation regularisation for its application in photography, and learning good spatio-temporal regularisation for dynamic image reconstruction. This is joint work with M. Benning, L. Calatroni, C. Chung, J. C. De Los Reyes, T. Valkonen, and V. Vlacic.
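Schematically, the bilevel problems considered are of the form (illustrative notation, e.g. for total variation denoising with a training pair $(f, u_{\mathrm{true}})$)

$$ \min_{\lambda \ge 0} \; \tfrac12 \big\| u_{\lambda} - u_{\mathrm{true}} \big\|_2^2 \quad \text{subject to} \quad u_{\lambda} \in \arg\min_{u} \; d(u, f) + \lambda\, \mathrm{TV}(u) , $$

where the lower-level problem is the variational reconstruction, the upper level measures reconstruction quality against ground truth, and $d$ is the data fidelity, whose form (e.g. for different noise distributions) can itself be optimised; generalisations replace TV by TGV or spatio-temporal regularisers.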


Sprungk Bjorn

Generalized pCN Metropolis algorithms for Bayesian inference in Hilbert spaces
We consider Markov chain Monte Carlo methods in general Hilbert spaces for sampling from conditional probability measures. Such measures appear, e.g., in Bayesian inference for coefficients of partial differential equations given noisy observations of their solution. We focus on Metropolis-Hastings algorithms and propose a generalization of the existing pCN-Metropolis algorithm. The new method allows us to adapt to the geometry of the conditional measure, which yields a higher statistical efficiency. Numerical experiments indicate that the proposed algorithm performs independently of the state space dimension and the observational noise variance. We also prove geometric ergodicity of the new method by a comparison argument for spectral gaps. Moreover, our convergence analysis provides as by-products the tools to define Metropolis-Hastings algorithms in Hilbert spaces which employ state-dependent covariance operators in the proposal.
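For reference, the pCN proposal that the new method generalizes is, for a centred Gaussian prior $N(0, C)$ and step size $s \in (0, 1]$,

$$ x' \;=\; \sqrt{1 - s^2}\, x \;+\; s\, \xi , \qquad \xi \sim N(0, C) , \qquad \alpha(x, x') \;=\; \min\big\{1,\ \exp\big(\Phi(x) - \Phi(x')\big)\big\} , $$

where $\Phi$ is the negative log-likelihood; this proposal is reversible with respect to the prior, which is what keeps the acceptance probability well defined in infinite dimensions. The generalized variants discussed in the talk replace $C$ by covariance operators adapted to, and possibly dependent on, the geometry of the conditional measure.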


Stuart Andrew

Analysis of the ensemble Kalman filter for inverse problems
The ensemble Kalman filter (EnKF) is a widely used methodology for state estimation in partially and noisily observed dynamical systems, and for parameter estimation in inverse problems. Despite its widespread use in the geophysical sciences, and its gradual adoption in many other areas of application, analysis of the method is in its infancy. Furthermore, much of the existing analysis deals with the large ensemble limit, far from the regime in which the method is typically used. The goal of this work is to analyze the method when applied to inverse problems with fixed ensemble size. A continuous-time limit is derived and the long-time behavior of the resulting dynamical system is studied. Most of the rigorous analysis is confined to the linear inverse problem, where we demonstrate that the continuous-time limit of the EnKF corresponds to a set of gradient flows for the data misfit in each ensemble member, coupled through a common preconditioner which is the empirical covariance matrix of the ensemble. Numerical results demonstrate that the conclusions of the analysis extend beyond the linear inverse problem setting. Numerical experiments are also given which demonstrate the benefits of various extensions of the basic methodology.
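In the linear case $G(u) = Au$ described above, the continuous-time limit takes the form of preconditioned gradient flows (written here schematically, following the abstract's description):

$$ \dot{u}_j \;=\; -\,C(u)\, \nabla_{u_j} \Phi(u_j; y) , \qquad \Phi(u; y) \;=\; \tfrac12 \big\| \Gamma^{-1/2} (y - A u) \big\|^2 , \qquad j = 1, \dots, J , $$

where $C(u)$ is the empirical covariance of the ensemble $\{u_j\}_{j=1}^J$, so each particle follows a gradient flow for its own data misfit and is coupled to the others only through the common preconditioner $C(u)$.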
(Joint work with Claudia Schillings)


Sullivan Tim

Probabilistic Numerics for Differential Equations
Beginning with a seminal paper of Diaconis (1988), the aim of so-called "probabilistic numerics" is to compute probabilistic solutions to deterministic problems arising in numerical analysis by casting them as statistical inference problems. For example, numerical integration of a deterministic function can be seen as the integration of an unknown/random function, with evaluations of the integrand at the integration nodes providing partial information about the integrand. Advantages offered by this viewpoint include: access to the Bayesian representation of prior and posterior uncertainties; better propagation of uncertainty through hierarchical systems than simple worst-case error bounds; and appropriate accounting for numerical truncation and round-off error in inverse problems, so that the replicability of deterministic simulations is not confused with their accuracy, which would otherwise yield an inappropriately concentrated Bayesian posterior. This talk will describe recent work on probabilistic numerical solvers for ordinary and partial differential equations, including their theoretical construction, convergence rates, and applications to forward and inverse problems. Joint work with Jon Cockayne, Mark Girolami, and Andrew Stuart (Warwick), and Chris Oates (Sydney).
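One simple way to obtain a probabilistic ODE solver, in the spirit of the solvers discussed in the talk (the sketch below is an illustration under my own assumptions, not the authors' construction): perturb each step of a classical integrator with a small random draw whose size matches the local truncation error, and run an ensemble of such trajectories to represent the numerical uncertainty.

```python
import numpy as np

rng = np.random.default_rng(2)

def randomized_euler(f, y0, t0, t1, h, sigma=1.0, p=1.0):
    """Forward Euler with an additive Gaussian perturbation of size
    O(h**(p+1)) per step, so the injected noise mimics the local
    truncation error of the underlying method (p = 1 for Euler)."""
    ts = np.arange(t0, t1 + 1e-12, h)
    ys = np.empty((len(ts), len(np.atleast_1d(y0))))
    ys[0] = y0
    for k in range(len(ts) - 1):
        drift = ys[k] + h * f(ts[k], ys[k])
        noise = sigma * h ** (p + 1) * rng.standard_normal(ys[k].shape)
        ys[k + 1] = drift + noise
    return ts, ys

# Ensemble of solver runs for dy/dt = -y, y(0) = 1: the spread of the
# trajectories gives a (crude) measure of the discretization uncertainty.
ens = [randomized_euler(lambda t, y: -y, np.array([1.0]), 0.0, 5.0, 0.1)[1]
       for _ in range(20)]
```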


Tempone Raul

Multi-index Monte Carlo and Multi-index Stochastic Collocation
We describe and analyze the Multi-Index Monte Carlo (MIMC) and Multi-Index Stochastic Collocation (MISC) methods for computing statistics of the solution of a PDE with random data. MIMC is both a stochastic version of the combination technique introduced by Zenger, Griebel and collaborators and an extension of the Multilevel Monte Carlo (MLMC) method first described by Heinrich and Giles. Instead of using first-order differences as in MLMC, MIMC uses mixed differences to reduce the variance of the hierarchical differences dramatically. This in turn yields new and improved complexity results, which are natural generalizations of Giles's MLMC analysis, and which increase the domain of problem parameters for which we achieve the optimal convergence, $\mathcal{O}(\mathrm{TOL}^{-2})$. In the same vein, MISC is a deterministic combination technique based on mixed differences of spatial approximations and quadratures over the space of random data. Provided there is enough mixed regularity, MISC can achieve better complexity than MIMC. Moreover, we show that in the optimal case the convergence rate of MISC is dictated only by the convergence of the deterministic solver applied to a one-dimensional spatial problem. We propose optimization procedures to select the most effective mixed differences to include in MIMC and MISC. Such optimization is a crucial step that allows us to make MIMC and MISC computationally efficient. We finally show the effectiveness of MIMC and MISC in some computational tests, including PDEs with random coefficients and stochastic particle systems.
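The mixed differences referred to above generalize the first-order MLMC differences; schematically, with first-order difference operators $\Delta_i$ acting on the $i$-th discretization parameter of a multi-index $\boldsymbol{\alpha}$,

$$ \Delta_i Q_{\boldsymbol{\alpha}} \;=\; \begin{cases} Q_{\boldsymbol{\alpha}} - Q_{\boldsymbol{\alpha} - e_i}, & \alpha_i > 0,\\[2pt] Q_{\boldsymbol{\alpha}}, & \alpha_i = 0,\end{cases} \qquad \mathbb{E}[Q] \;\approx\; \sum_{\boldsymbol{\alpha} \in \mathcal{I}} \mathbb{E}\Big[\Big(\textstyle\prod_{i} \Delta_i\Big) Q_{\boldsymbol{\alpha}}\Big] , $$

where the index set $\mathcal{I}$ is chosen, by the optimization procedures mentioned above, to balance the work and the variance of each mixed-difference term; the strong decay of these mixed differences under mixed regularity is what drives the improved complexity.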
This is joint work with Abdul-Lateef Haji Ali (KAUST), Fabio Nobile (EPFL), and Lorenzo Tamellini (UNIPV).