On the Stability of Sequential Monte Carlo Methods in High Dimensions
Sequential Monte Carlo (SMC) methods are nowadays routinely applied in a variety of complex applications: hidden Markov models, dynamical systems, target tracking, and control problems, to name a few.
Whereas SMC methods have been dramatically improved and refined over the last decades, they are still known to suffer from the curse of dimensionality: algorithms can sometimes break down exponentially fast with the dimension of the state space. The talk will concentrate on a particular version of SMC and will look at methods that can reduce the asymptotic cost of the algorithms from exponential to quadratic in the dimension of the state space. Explicit asymptotic results will clarify the effect of the dimension on the properties of the algorithm and could in future work provide a platform for algorithmic optimisation.
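The exponential breakdown mentioned in the abstract can be seen already in plain importance sampling. The following sketch (a toy Gaussian example of my own, not a model from the talk) measures the effective sample size (ESS) when a d-dimensional standard normal proposal targets the same normal shifted by 0.5 in each coordinate; the log-weight variance grows linearly in d, so the ESS collapses roughly exponentially:

```python
import numpy as np

rng = np.random.default_rng(0)

def ess(log_w):
    """Effective sample size from unnormalised log-weights."""
    w = np.exp(log_w - log_w.max())
    w = w / w.sum()
    return 1.0 / np.sum(w ** 2)

N = 10_000  # number of importance samples
for d in (1, 5, 25, 100):
    # Proposal: standard normal in d dimensions.
    # Target: the same normal shifted by 0.5 in every coordinate.
    x = rng.standard_normal((N, d))
    log_w = -0.5 * np.sum((x - 0.5) ** 2 - x ** 2, axis=1)
    print(f"d = {d:3d}   ESS = {ess(log_w):9.1f}")
```

For d = 1 nearly all of the N samples are effective; by d = 100 the weight mass concentrates on a handful of particles, which is exactly the degeneracy that dimension-robust SMC variants aim to avoid.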
Sequential Monte Carlo for an inverse problem related to the Navier-Stokes equations
This talk is about an inverse problem from the area of numerical weather forecasting and data assimilation. In particular, we consider the estimation of the initial condition of the two-dimensional Navier-Stokes equations defined on a torus, given noisy Eulerian measurements of the time- and space-evolving vector field. We will adopt a Bayesian formulation resulting from a particular regularisation that ensures the problem is well posed. We will then discuss the computational challenges posed by these high-dimensional problems and present a Markov chain Monte Carlo method for computing the posterior numerically, which is currently considered the gold standard for evaluating various commonly used data assimilation algorithms. Finally, we will show how one can design a Sequential Monte Carlo sampler for this high-dimensional application, which in some numerical examples achieves the same accuracy at much lower computational cost. This is based on joint work with A. Beskos, A. Jasra and D. Crisan.
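A common design for SMC samplers of the kind mentioned above is tempering: bridge from the prior to the posterior through intermediate distributions pi_t ∝ prior × likelihood^t, alternating reweighting, resampling, and MCMC moves. The sketch below applies this recipe to a deliberately tiny stand-in problem (a scalar Gaussian prior and one Gaussian observation, nothing to do with Navier-Stokes); the model, step sizes, and temperature schedule are all illustrative choices of mine:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical conjugate toy model: theta ~ N(0, 1), observation
# Y = theta + N(0, SIGMA^2). The exact posterior is then Gaussian.
Y, SIGMA = 1.5, 0.5

def log_prior(theta):
    return -0.5 * theta ** 2

def log_lik(theta):
    return -0.5 * ((Y - theta) / SIGMA) ** 2

def smc_sampler(n=2000, n_temps=21):
    """Tempered SMC: move particles from the prior to the posterior
    via pi_t ∝ prior * likelihood**t, with multinomial resampling and
    one random-walk Metropolis move at each temperature."""
    temps = np.linspace(0.0, 1.0, n_temps)
    theta = rng.standard_normal(n)                  # sample the prior
    for t_prev, t in zip(temps[:-1], temps[1:]):
        log_w = (t - t_prev) * log_lik(theta)       # incremental weights
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        theta = theta[rng.choice(n, size=n, p=w)]   # resample
        prop = theta + 0.5 * rng.standard_normal(n) # MH proposal
        log_acc = (log_prior(prop) + t * log_lik(prop)
                   - log_prior(theta) - t * log_lik(theta))
        theta = np.where(np.log(rng.random(n)) < log_acc, prop, theta)
    return theta

samples = smc_sampler()
# For this conjugate model the exact posterior is N(1.2, 1/5),
# so the particle mean should land near 1.2.
print(samples.mean(), samples.std())
```

For the genuinely high-dimensional Navier-Stokes posterior, the Metropolis step would be replaced by a dimension-robust MCMC move on the discretised initial condition, but the reweight/resample/move skeleton is the same.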
Lecture 1: Introduction and overview
Lecture 2: General theory for nondegenerate observations
Lecture 3: Models and phenomena in high dimension
Lecture 4: Applications to particle filtering
A problem that arises in many applications is to compute the conditional distribution of a stochastic model given observed data. Mathematically, such problems motivate the investigation of probabilistic phenomena that arise from conditioning. Although work in this area has a long history, dating back at least to D. Blackwell (1957), these problems have received comparatively little attention in the literature, particularly for the infinite-dimensional systems that arise in many areas of interest. The topic has connections to several areas of probability, ergodic theory, measure theory, and statistical mechanics, as well as direct practical implications for the design and analysis of algorithms for nonlinear filtering and data assimilation in high-dimensional systems. In these lectures, I aim to give an overview of some problems in this area, with emphasis on the behavior of conditional distributions over large time intervals (ergodicity) and over large spatial scales (decay of correlations). I will outline the existing general theory and its limitations, as well as curious phenomena that arise in high dimension and remain poorly understood. I will also discuss our initial attempts to leverage conditional ergodicity in space and time for the design of particle filtering algorithms that can avoid the curse of dimensionality.
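The nonlinear filtering problem the lectures refer to is often attacked with the bootstrap particle filter: propagate particles through the model dynamics, weight them by the observation likelihood, and resample. As a minimal sketch, assuming a hypothetical scalar linear-Gaussian model of my own choosing (so the exact answer is also available from the Kalman filter):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy state-space model (illustrative, not from the lectures):
#   X_t = a * X_{t-1} + N(0, q),   Y_t = X_t + N(0, r).
a, q, r, T, N = 0.9, 1.0, 1.0, 50, 1000

# Simulate a latent trajectory and its noisy observations.
xs, ys = [], []
x = 0.0
for _ in range(T):
    x = a * x + np.sqrt(q) * rng.standard_normal()
    xs.append(x)
    ys.append(x + np.sqrt(r) * rng.standard_normal())

# Bootstrap particle filter: propagate, weight, resample.
particles = rng.standard_normal(N)
filter_means = []
for y in ys:
    particles = a * particles + np.sqrt(q) * rng.standard_normal(N)
    log_w = -0.5 * (y - particles) ** 2 / r      # observation likelihood
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    filter_means.append(np.sum(w * particles))   # filtering mean estimate
    particles = particles[rng.choice(N, size=N, p=w)]
```

In one dimension this works well; the phenomenon studied in the lectures is that the same weighting step degenerates as the state dimension grows, which is what motivates local particle filters that exploit decay of correlations in space.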
RvH, "The stability of conditional Markov processes and Markov chains in random environments", Ann. Probab. 37, 1876-1925 (2009).
RvH, "On the exchange of intersection and supremum of sigma-fields in filtering theory", Israel J. Math. 192, 763-784 (2012).
X. T. Tong and RvH, "Conditional Ergodicity in Infinite Dimension",
P. Rebeschini and RvH, "Can local particle filters beat the curse of dimensionality?", arXiv:1301.6585.