
2014-2015

2014/15 Term 1:

  • Week 3 - 17th October - Organisation
  • Week 4 - 24th October - Giacomo Zanella - "Some things we've learned about MCMC"
  • Week 5 - 31st October - Jere Koskela and Patrick Conrad - "Coupling from the Past" by Propp and Wilson
  • Week 6 - 7th November - Cyril Chimisov - "Coupling and ergodicity of adaptive Markov chain Monte Carlo algorithms" by Roberts and Rosenthal
  • Week 7 - 14th November - Nick Tawn - "Reversible Jump MCMC" (TBC), likely based on "Reversible jump Markov chain Monte Carlo computation and Bayesian model determination" by Green or "Reversible Jump MCMC" by Hastie and Green
  • Week 8 - 21st November - Dan Simpson - "INLA"
  • Week 9 - 28th November - Gareth Roberts - "ScaLE Algorithm I"
  • Week 10 - 5th December - Murray Pollock - "ScaLE Algorithm II"

2014/15 Term 2:

  • Week 1 - 9th January - Chris Oates - "Control Functionals"
    • Abstract: We all know that Monte Carlo averages converge at O(N^(-1/2)), and that quasi-Monte Carlo averages can converge even faster, at O(N^(-k)), where k>1/2 depends on certain regularity assumptions. In this talk I will show how both of these convergence rates can be exceeded, using the new approach of 'control functionals'. The method is fully compatible with MCMC methodology and offers a new way to assess convergence of ergodic averages along Markov chains. (A reminder of the classical control-variate idea that control functionals generalise is sketched after this term's listing.)
  • Week 2 - 16th January - Sergios Agapiou - "Unbiased Monte Carlo: posterior estimation for intractable/infinite-dimensional models"
    • Paper: http://arxiv.org/abs/1411.7713
  • Week 3 - 23rd January - Michael Betancourt - "The Geometric Foundations of Hamiltonian Monte Carlo"
  • Week 4 - 30th January - Sebastian Vollmer - "Unbiased Monte Carlo: posterior estimation for intractable/infinite-dimensional models"
    • Paper: http://arxiv.org/abs/1411.7713
  • Week 5 - 6th February - Flávio Gonçalves (Universidade Federal de Minas Gerais) - "Exact Bayesian inference in spatiotemporal Cox processes driven by multivariate Gaussian processes"
  • Week 6 - 13th February - Joris Bierkens - "Non-Reversible Metropolis-Hastings in continuous spaces"
  • Week 7 - 20th February - Grigorios Mingas (Imperial) - "Hardware acceleration for large-scale stochastic inference"
    • Abstract: Due to the increasing complexity of Bayesian models and the massive amount of data they need to process, runtimes of stochastic inference methods like MCMC and SMC can become impractically long. To face this challenge, it is often insufficient to rely solely on better models or more efficient MCMC methods. It is equally important to leverage the power of modern hardware accelerators, such as Graphics Processing Units (GPUs) and Field Programmable Gate Arrays (FPGAs). These devices offer massive parallel resources, which can be exploited when working with MCMC methods and models amenable to parallelization. In this talk, we summarize the work done jointly by the Department of Electrical and Electronic Engineering and the Department of Mathematics of Imperial College London towards accelerating these methods and applying them to large-scale inference problems. We show that it is possible to achieve speedups of up to 300x compared to conventional CPUs by designing custom hardware architectures for FPGAs and writing optimized code for GPUs. Moreover, we introduce a holistic approach to the problem of accelerating MCMC: We fuse the two stages of MCMC design (algorithmic stage and architectural/hardware stage) into a unified problem. This allows us to combine knowledge about the targeted MCMC method and the underlying hardware platform and make smart algorithmic and architectural design choices to further enhance performance. We apply our samplers to two Bayesian problems in genetics: 1) Variable selection for linear regression with large numbers of observations and predictors, 2) State space models with a large number of states and unknown parameters. We compare the samplers' speed with state-of-the-art methods (the GUESS algorithm for variable selection, the LibBi library for state space models, GPU-based MCMC samplers). Finally, we highlight the advantages and disadvantages of GPUs and FPGAs, giving useful guidelines for practitioners.
  • Week 8 - 27th February - Hongsheng Dai (Essex) - "A new exact Monte Carlo simulation method"
    • Abstract: I will talk about a new rejection sampling algorithm. The algorithm depends on a decomposition of the target distribution into a product of two simple distributions. We simulate one realisation from each of the two simple distributions. One of them can be accepted as a sample from the target distribution, under certain acceptance/rejection schemes. The algorithm is different from all existing methods. I will briefly talk about the ideas and possibilities of improving the efficiency of the new algorithm. (A reminder of the classical rejection sampler that such schemes refine is sketched after this term's listing.)
  • Week 9 - 6th March - Felipe Medina Aguayo - "Stability of Noisy Metropolis-Hastings"
    • Abstract: Pseudo-marginal Markov chain Monte Carlo methods for sampling from intractable distributions have gained recent interest and have been theoretically studied in considerable depth. Their main appeal is that they are exact, in the sense that they target marginally the correct invariant distribution. However, the pseudo-marginal Markov chain can exhibit poor mixing and slow convergence towards its target. As an alternative, a subtly different Markov chain can be simulated, where better mixing is possible but the exactness property is sacrificed. This is the noisy algorithm, initially conceptualised as Monte Carlo within Metropolis (MCWM), which has also been studied but to a lesser extent. The present article provides a further characterisation of the noisy algorithm, with a focus on fundamental stability properties like positive recurrence and geometric ergodicity. Sufficient conditions for inheriting geometric ergodicity from a standard Metropolis-Hastings chain are given, as well as convergence of the invariant distribution towards the true target distribution. (A toy comparison of the pseudo-marginal and noisy chains is sketched after this term's listing.)
  • Week 10 - 13th March - Andreas Hetland - "On Particle Methods for Parameter Estimation in State-Space Models" by Kantas, Doucet, Singh, Maciejowski, and Chopin
    • Abstract: Nonlinear non-Gaussian state-space models are ubiquitous in statistics, econometrics, information engineering and signal processing. Particle methods, also known as Sequential Monte Carlo (SMC) methods, provide reliable numerical approximations to the associated state inference problems. However, in most applications, the state-space model of interest also depends on unknown static parameters that need to be estimated from the data. In this context, standard particle methods fail and it is necessary to rely on more sophisticated algorithms. The aim of this paper is to present a comprehensive review of particle methods that have been proposed to perform static parameter estimation in state-space models. We discuss the advantages and limitations of these methods and illustrate their performance on simple models. (A minimal bootstrap particle filter, the basic building block of these methods, is sketched after this term's listing.)
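
Sketch for Week 1 (Chris Oates, control functionals). Control functionals go well beyond the classical control-variate device, but the NumPy snippet below illustrates the basic variance-reduction mechanism they generalise. The integrand exp(X) and the control variate g(X) = X (known mean zero under the standard normal) are purely illustrative choices, not taken from the talk.

    import numpy as np

    rng = np.random.default_rng(0)
    N = 100_000
    x = rng.standard_normal(N)

    f = np.exp(x)                      # integrand: E[exp(X)] = exp(1/2) for X ~ N(0, 1)
    g = x                              # control variate with known mean 0 under N(0, 1)

    b = np.cov(f, g)[0, 1] / g.var()   # estimated (near-)optimal coefficient
    plain = f.mean()
    controlled = (f - b * g).mean()

    print("plain Monte Carlo estimate:", plain)
    print("control-variate estimate:  ", controlled)
    print("true value:                ", np.exp(0.5))

On this toy problem the controlled estimator typically has a noticeably smaller error than the plain average; control functionals replace the fixed control variate by a flexible function estimated from the samples themselves.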
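
Sketch for Week 8 (Hongsheng Dai, exact Monte Carlo). The decomposition-based scheme described in the abstract is the speaker's own; the snippet below is only a reminder of the classical rejection sampler that such schemes refine. The Beta(2,5) target, the uniform proposal and the bound M are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(1)

    def target_pdf(x):
        return 30.0 * x * (1.0 - x) ** 4          # Beta(2, 5) density on [0, 1] (illustrative target)

    M = 2.46                                      # upper bound on target_pdf (its mode is at x = 0.2)

    def rejection_sample(n):
        out = []
        while len(out) < n:
            x = rng.uniform()                     # proposal: Uniform(0, 1)
            if rng.uniform() * M <= target_pdf(x):   # accept with probability target_pdf(x) / M
                out.append(x)
        return np.array(out)

    samples = rejection_sample(10_000)
    print("sample mean:", samples.mean())         # Beta(2, 5) has mean 2/7, roughly 0.286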
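
Sketch for Week 9 (Felipe Medina Aguayo, noisy Metropolis-Hastings). A toy illustration of the difference between the pseudo-marginal chain, which stores and reuses the likelihood estimate at the current state, and the noisy (MCWM) chain, which refreshes it at every iteration. The model, the prior and the log-normal weight standing in for an intractable unbiased likelihood estimator are all illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(2)
    y, noise_sd = 1.0, 1.0             # toy observation and noise level of the likelihood estimator

    def log_prior(theta):
        return -0.5 * theta ** 2 / 100.0          # N(0, 10^2) prior, up to a constant

    def noisy_loglik(theta):
        # log of an unbiased likelihood estimator: the exact N(y; theta, 1) likelihood times
        # a log-normal weight with mean one (a stand-in for an intractable estimator)
        return -0.5 * (y - theta) ** 2 + noise_sd * rng.standard_normal() - 0.5 * noise_sd ** 2

    def run(kind, n_iter=50_000, step=1.0):
        theta = 0.0
        log_post_hat = log_prior(theta) + noisy_loglik(theta)
        chain = np.empty(n_iter)
        for i in range(n_iter):
            if kind == "noisy":                    # MCWM: refresh the estimate at the current state too
                log_post_hat = log_prior(theta) + noisy_loglik(theta)
            prop = theta + step * rng.standard_normal()
            log_post_prop = log_prior(prop) + noisy_loglik(prop)
            if np.log(rng.uniform()) < log_post_prop - log_post_hat:
                theta, log_post_hat = prop, log_post_prop   # pseudo-marginal: keep this estimate
            chain[i] = theta
        return chain

    print("pseudo-marginal mean:", run("pseudo-marginal").mean())   # targets the exact posterior
    print("noisy (MCWM) mean:   ", run("noisy").mean())             # approximate target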
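
Sketch for Week 10 (Andreas Hetland, particle methods). The paper reviews parameter-estimation methods built on top of the bootstrap particle filter; a minimal version of that filter, for an assumed linear-Gaussian toy model, is sketched below. It returns the log of the usual unbiased likelihood estimate, the basic ingredient of particle-based parameter estimation.

    import numpy as np

    rng = np.random.default_rng(3)

    # assumed toy model:  x_t = phi * x_{t-1} + sigma_x * v_t,   y_t = x_t + sigma_y * w_t
    phi, sigma_x, sigma_y, T = 0.9, 1.0, 1.0, 100
    x = np.zeros(T)
    for t in range(1, T):
        x[t] = phi * x[t - 1] + sigma_x * rng.standard_normal()
    y = x + sigma_y * rng.standard_normal(T)

    def bootstrap_filter(y, n_particles=500):
        """Bootstrap particle filter; returns the log of the unbiased likelihood estimate."""
        particles = rng.standard_normal(n_particles)
        loglik = 0.0
        for t in range(len(y)):
            # propagate through the state equation (the 'bootstrap' proposal)
            particles = phi * particles + sigma_x * rng.standard_normal(n_particles)
            # weight by the observation density N(y_t; x_t, sigma_y^2)
            logw = -0.5 * ((y[t] - particles) / sigma_y) ** 2 - 0.5 * np.log(2 * np.pi * sigma_y ** 2)
            m = logw.max()
            w = np.exp(logw - m)
            loglik += m + np.log(w.mean())
            # multinomial resampling
            particles = particles[rng.choice(n_particles, size=n_particles, p=w / w.sum())]
        return loglik

    print("log-likelihood estimate:", bootstrap_filter(y))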


2014/15 Term 3:

  • Week 1 - 24th April - Patrick Conrad - "Transport map accelerated Markov chain Monte Carlo" by Parno and Marzouk
    • Abstract: We introduce a new framework for efficient sampling from complex probability distributions, using a combination of optimal transport maps and the Metropolis-Hastings rule. The core idea is to use continuous transportation to transform typical Metropolis proposal mechanisms (e.g., random walks, Langevin methods) into non-Gaussian proposal distributions that can more effectively explore the target density. Our approach adaptively constructs a lower triangular transport map, an approximation of the Knothe-Rosenblatt rearrangement, using information from previous MCMC states, via the solution of an optimization problem. This optimization problem is convex regardless of the form of the target distribution. It is solved efficiently using a Newton method that requires no gradient information from the target probability distribution; the target distribution is instead represented via samples. Sequential updates enable efficient and parallelizable adaptation of the map even for large numbers of samples. We show that this approach uses inexact or truncated maps to produce an adaptive MCMC algorithm that is ergodic for the exact target distribution. Numerical demonstrations on a range of parameter inference problems show order-of-magnitude speedups over standard MCMC techniques, measured by the number of effectively independent samples produced per target density evaluation and per unit of wallclock time. (A one-dimensional fixed-map illustration appears after this term's listing.)
  • Week 2 - 1st May - Adam Griffin - "Simulation of Quasi-Stationary Distributions on Reducible State Spaces"
    • Abstract: Most stochastic epidemic models only have degenerate stationary distributions which represent "no infection". We study quasi-stationary distributions (QSDs) linked to these processes by considering events conditional on the epidemic not having died out. Furthermore, in multi-type processes we encounter reducible state spaces which cause problems with simulation. To tackle these, this talk will outline some SMC sampler methods and resampling techniques, Combine-Split Resampling and Regional Resampling, developed to derive approximations to QSDs through simulation. These will be demonstrated through application to stochastic epidemic models. (A simple particle approximation of a QSD is sketched after this term's listing.)
  • Week 3 - 8th May - Kasia Taylor - "Exact sampling of one-dimensional diffusions with discontinuous drift."
    • Abstract: We discuss new methods for sampling diffusions with discontinuous drift. The suggested algorithms extend the class of Exact Algorithms for simulation of diffusions. These methods use retrospective rejection sampling, which allows for using only finite information about paths in order to accept or reject them. The candidate paths are realisations of stochastic processes with known transition densities and we can sample them on an arbitrarily fine time grid. There are no approximation methods involved in the rejection-acceptance step. It results in 'exact' sampling of diffusions, i.e. using correct distributions of finite-dimensional projections of diffusions. An additional advantage is that the realisation of the diffusion on a refined time grid can be easily obtained after the path has already been accepted. The focus of this talk will be on one-dimensional diffusions with discontinuous drift. To address the problem for this class of diffusions we use specially tailored candidate probability measures. We provide methodology for sampling a range of functionals of Brownian motion and its local time, which allows us to sample from the candidate measures and perform rejection sampling. It is joint work with Omiros Papaspiliopoulos and Gareth O. Roberts. (The basic retrospective thinning step is sketched after this term's listing.)
  • Week 4 - 15th May - Konstantinos Zygalakis (Southampton) - "On long time approximation of ergodic stochastic differential equations: Applications to big data (and molecular dynamics)"
    • Abstract: Applying standard Markov chain Monte Carlo (MCMC) algorithms to large data sets is computationally expensive. Both the calculation of the acceptance probability and the creation of informed proposals usually require an iteration through the whole data set. The recently proposed stochastic gradient Langevin dynamics (SGLD) method circumvents this problem in three ways: it generates proposals which are only based on a subset of the data, it skips the accept-reject step and it uses sequences of decreasing step-sizes. In this talk, using some recent developments in backward error analysis for SDEs, we will investigate the properties of the SGLD algorithm (and propose new variants of it) when the time-step remains fixed. Our findings will be illustrated by a variety of different examples. (If there is time, we will use the same techniques to investigate and construct new efficient integrators for molecular dynamics simulations.) (A minimal SGLD sketch on a toy conjugate model appears after this term's listing.)
  • Week 5 - 22nd May - Seminar Cancelled (coincides with the i-like workshop)
  • Week 6 - 29th May - Jakub Kominiarczuk (Bristol) - "Gibbs sampling with large, but incomplete datasets"
    • Abstract: Prompted by the challenges posed by the desire to analyse ever-growing and complex datasets, there has recently been renewed interest in reducing the computational complexity of Markov chain Monte Carlo methods in the statistical/machine learning literature. Here we contribute to the area a novel complexity reduction technique for the Gibbs sampler, when used in the context of a broad class of statistical models involving latent (or missing) data, whose size grows with the number of observations. Our approach exploits the fact that some of the costly conditional distributions involved in the implementation of the Gibbs sampler can be accurately approximated with computationally cheaper alternatives, whose parameters can be learnt on the fly. (The kind of latent-data Gibbs sampler being accelerated is sketched after this term's listing.)
  • Week 7 - 5th June - Nikolas Kantas (Imperial) - "Particle smoothing for eigenfunctions with applications"
    • Abstract: The talk will be mostly based on the paper written together with Nick Whiteley (Bristol), "A particle method for approximating principal eigen-functions and related quantities" (arXiv:1202.6678). The plan is to discuss the methodology and some motivating applications in rare event estimation and discrete-time stochastic control. If time permits, I will try to make some links with related work on Diffusion Monte Carlo methods and talk about possible extensions.
  • Week 8 - 12th June - Divakar Kumar (OxWaSP) - "The Bernoulli Factory algorithm for linear functions" by Mark Huber
  • Week 9 - 19th June - Sam Livingstone (UCL) - "Some new ergodicity results for Random Walk and Hamiltonian based Metropolis-Hastings algorithms"
    • Abstract: If an MCMC method generates a geometrically ergodic Markov chain, then a Central Limit Theorem exists for many estimators based on averaging the chain, which in practice often gives reasonable guarantees. We consider not the standard Random Walk Metropolis but a more recent variant in which the proposal variance V(x) is allowed to change with position. Intuitively this means the size and shape of V(x) can adapt to local features of the target density f(x). In this talk I'll review where this algorithm has appeared in the literature, before turning to geometric ergodicity. In one dimension we have established some general growth conditions on V(x) and tail conditions on f(x) that are required for the method to produce a geometrically ergodic Markov chain. I'll also discuss the multi-dimensional case, with an illustrative example. If there is time, I will also mention some ongoing work on geometric ergodicity of the Hamiltonian Monte Carlo algorithm, which is a joint project with Simon Byrne (UCL), Michael Betancourt (Warwick) and Mark Girolami (Warwick). (A minimal position-dependent random walk sampler is sketched after this term's listing.)
  • Week 10 - 26th June - Pieralberto Guarniero - "Look-Ahead Sequential Monte Carlo"
  • Week 11 - 3rd July - Cyril Chimisov & Nick Tawn (Warwick)
    • Cyril Chimisov - "Geometric Bounds for Eigenvalues of Markov Chains" by Diaconis and Stroock
    • Nick Tawn - "Markov Chain Decomposition for Convergence Rate Analysis" by Madras and Randall
    • Joint Abstract: Both papers concern bounds on the spectral gap of reversible Markov chains, which give a way of understanding the geometric rate of convergence of the Markov chain to stationarity. In particular, Cyril will focus on the bounds for chains on discrete state spaces given in the paper by Diaconis and Stroock, while Nick will discuss the bounds for more general state spaces given in the paper by Madras and Randall (2002). Examples of the application of the results will be given. (The flavour of these bounds is recalled after this term's listing.)
  • Week 12 - 10th July - Mark Huber - Bernoulli Factories
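
Sketch for Week 1 (Patrick Conrad, transport map accelerated MCMC). Parno and Marzouk adaptively build multivariate lower-triangular maps; the snippet below only illustrates the underlying mechanism in one dimension with a fixed, hand-picked monotone map T (here exp, an assumption): a random walk in the reference space is pushed through T, and the map's Jacobian enters the acceptance ratio. The Gamma(2,1) target is also illustrative.

    import numpy as np

    rng = np.random.default_rng(4)

    def log_target(x):                 # unnormalised Gamma(2, 1) log-density on (0, inf)
        return np.log(x) - x

    T, Tinv = np.exp, np.log           # fixed monotone map from reference space to target space

    def transport_rw_mh(n_iter=50_000, step=0.5):
        x = 1.0
        chain = np.empty(n_iter)
        for i in range(n_iter):
            r = Tinv(x)                                  # current state in reference coordinates
            r_prop = r + step * rng.standard_normal()    # random walk in the reference space
            x_prop = T(r_prop)
            # acceptance ratio: target ratio times the Jacobian correction T'(r_prop) / T'(r),
            # which for T = exp is simply x_prop / x
            log_alpha = log_target(x_prop) - log_target(x) + np.log(x_prop) - np.log(x)
            if np.log(rng.uniform()) < log_alpha:
                x = x_prop
            chain[i] = x
        return chain

    print("sample mean:", transport_rw_mh().mean())      # Gamma(2, 1) has mean 2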
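
Sketch for Week 2 (Adam Griffin, quasi-stationary distributions). The Combine-Split and Regional resampling schemes are the talk's own; the snippet below shows only the simplest particle idea for approximating a QSD, in the spirit of a Fleming-Viot system, for an assumed subcritical birth-death toy chain with absorption at 0: whenever a particle is absorbed, it restarts from the position of a randomly chosen surviving particle.

    import numpy as np

    rng = np.random.default_rng(5)
    K = 20                          # states {0, ..., K}; 0 is absorbing ("no infection")
    p_up, p_down = 0.35, 0.45       # subcritical birth-death probabilities (illustrative)

    def step(i):
        u = rng.uniform()
        if u < p_up and i < K:
            return i + 1
        if u < p_up + p_down:
            return i - 1
        return i

    def particle_qsd(n_particles=1_000, n_steps=1_000):
        particles = np.full(n_particles, 5)
        for _ in range(n_steps):
            for j in range(n_particles):
                particles[j] = step(particles[j])
                if particles[j] == 0:                       # absorbed: restart from a survivor
                    particles[j] = rng.choice(particles[particles > 0])
        # the empirical distribution of the particles approximates the QSD
        return np.bincount(particles, minlength=K + 1) / n_particles

    qsd = particle_qsd()
    print("approximate QSD mass on states 1-5:", qsd[1:6])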
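
Sketch for Week 3 (Kasia Taylor, exact sampling of diffusions). The talk's algorithm for discontinuous drift involves Brownian local time and tailored candidate measures not reproduced here; the snippet below only illustrates the core retrospective ingredient of exact algorithms: deciding an event of probability exp(-∫_0^T phi(W_s) ds) by evaluating the proposed path at finitely many Poisson points. The particular bounded phi and the bound lam are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(6)
    T, lam = 1.0, 2.0                   # time horizon and a bound lam >= phi everywhere

    def phi(w):
        # an illustrative bounded functional of the path value; in exact algorithms phi is
        # built from the drift of the target diffusion
        return 1.0 + np.sin(w)          # values lie in [0, 2] <= lam

    def retrospective_accept():
        """Return True with probability exp(-integral_0^T phi(W_s) ds) for a Brownian path W,
        while evaluating W at only finitely many (Poisson) time points."""
        n = rng.poisson(lam * T)
        if n == 0:
            return True
        t = np.sort(rng.uniform(0.0, T, size=n))                 # Poisson point times
        v = rng.uniform(0.0, lam, size=n)                        # independent uniform heights
        dt = np.diff(np.concatenate(([0.0], t)))
        w = np.cumsum(np.sqrt(dt) * rng.standard_normal(n))      # Brownian path at the times t
        return bool(np.all(v > phi(w)))                          # no point under the graph of phi

    acc = np.mean([retrospective_accept() for _ in range(20_000)])
    print("acceptance rate:", acc)      # estimates E[ exp(-integral_0^T phi(W_s) ds) ]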
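
Sketch for Week 4 (Konstantinos Zygalakis, SGLD). A minimal stochastic gradient Langevin dynamics update on an assumed conjugate toy model (Gaussian data, Gaussian prior): the gradient of the log posterior is estimated from a mini-batch, there is no accept/reject step, and the step size is kept fixed, which is exactly the regime the talk analyses. All model and tuning choices are illustrative.

    import numpy as np

    rng = np.random.default_rng(7)

    # assumed conjugate toy model: data_i ~ N(theta, 1), prior theta ~ N(0, 10^2)
    theta_true, N_data = 2.0, 10_000
    data = theta_true + rng.standard_normal(N_data)

    def stoch_grad(theta, batch):
        # unbiased estimate of the gradient of the log posterior from a mini-batch
        return -theta / 100.0 + (N_data / len(batch)) * np.sum(batch - theta)

    def sgld(n_iter=20_000, batch_size=100, step=2e-6):
        theta, chain = 0.0, np.empty(n_iter)
        for i in range(n_iter):
            batch = data[rng.integers(0, N_data, size=batch_size)]
            # Langevin move driven by a subsampled gradient; note there is no accept/reject step
            theta += 0.5 * step * stoch_grad(theta, batch) + np.sqrt(step) * rng.standard_normal()
            chain[i] = theta
        return chain

    chain = sgld()
    # with a fixed step size the chain targets only an approximation of the posterior,
    # which is the regime analysed in the talk
    print("SGLD posterior mean estimate:", chain[5_000:].mean())    # posterior mean is close to 2.0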
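
Sketch for Week 6 (Jakub Kominiarczuk, Gibbs sampling with large, incomplete datasets). The talk's contribution, replacing costly full conditionals with cheap learnt approximations, is not reproduced here; the snippet below only sets up the kind of latent-data Gibbs sampler being accelerated, for an assumed toy model in which the latent block grows with the number of observations.

    import numpy as np

    rng = np.random.default_rng(8)

    # assumed toy latent-data model: x_i | mu ~ N(mu, 1) latent, y_i | x_i ~ N(x_i, 1) observed,
    # with prior mu ~ N(0, 10^2); the latent block x grows with the number of observations n
    mu_true, n = 1.5, 10_000
    y = mu_true + rng.standard_normal(n) + rng.standard_normal(n)

    def gibbs(n_iter=2_000):
        mu = 0.0
        trace = np.empty(n_iter)
        for i in range(n_iter):
            # full conditional of the latent data: this is the O(n) block that the talk
            # proposes to replace with a computationally cheaper approximation
            x = 0.5 * (mu + y) + np.sqrt(0.5) * rng.standard_normal(n)
            # full conditional of mu given the latent data
            prec = n + 1.0 / 100.0
            mu = x.sum() / prec + rng.standard_normal() / np.sqrt(prec)
            trace[i] = mu
        return trace

    print("posterior mean of mu:", gibbs()[500:].mean())    # close to mu_true = 1.5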
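
Sketch for Week 9 (Sam Livingstone, ergodicity of Metropolis-Hastings variants). A minimal version of the Random Walk Metropolis variant described in the abstract, with a position-dependent proposal variance V(x); because the proposal is no longer symmetric, the Hastings correction q(x|y)/q(y|x) must be included. The Cauchy target and the choice V(x) = 1 + x^2 are illustrative assumptions, not necessarily those of the talk.

    import numpy as np

    rng = np.random.default_rng(9)

    def log_target(x):                 # heavy-tailed illustrative target: standard Cauchy
        return -np.log(1.0 + x ** 2)

    def V(x):                          # illustrative position-dependent proposal variance
        return 1.0 + x ** 2            # proposals widen out in the tails

    def log_q(y, x):                   # log density of the proposal N(y; x, V(x)), up to a constant
        return -0.5 * (y - x) ** 2 / V(x) - 0.5 * np.log(V(x))

    def position_dependent_rwm(n_iter=50_000):
        x, chain = 0.0, np.empty(n_iter)
        for i in range(n_iter):
            y = x + np.sqrt(V(x)) * rng.standard_normal()
            # the proposal is not symmetric, so the Hastings ratio q(x|y)/q(y|x) is required
            log_alpha = log_target(y) - log_target(x) + log_q(x, y) - log_q(y, x)
            if np.log(rng.uniform()) < log_alpha:
                x = y
            chain[i] = x
        return chain

    chain = position_dependent_rwm()
    print("sample median (the Cauchy median is 0):", np.median(chain))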
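
For Week 11 (Chimisov and Tawn, spectral gap bounds): to recall the flavour of the results being discussed, for a reversible, irreducible, aperiodic chain on a finite state space with transition matrix P and stationary distribution pi, Diaconis and Stroock give bounds of roughly the following form (quoted here from memory as an indication only; see the papers for the precise statements and constants):

    \[
      \| P^n(x,\cdot) - \pi \|_{\mathrm{TV}}
        \;\le\; \tfrac{1}{2} \sqrt{\frac{1-\pi(x)}{\pi(x)}} \; \beta_*^{\,n},
      \qquad \beta_* = \max\{\beta_1, |\beta_{\min}|\},
    \]
    \[
      \beta_1 \;\le\; 1 - \frac{1}{\kappa},
      \qquad
      \kappa \;=\; \max_{e} \frac{1}{Q(e)} \sum_{\gamma_{xy} \ni e} |\gamma_{xy}| \, \pi(x) \, \pi(y),
    \]

where beta_1 is the second-largest eigenvalue of P, beta_min the smallest, Q(e) = pi(u)P(u,v) for an edge e = (u,v) of the transition graph, and gamma_xy is a chosen path from x to y of length |gamma_xy|. Madras and Randall's decomposition results, roughly speaking, bound the spectral gap of the full chain in terms of the gaps of restricted chains on overlapping pieces of the state space together with the gap of an induced chain that moves between pieces.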