Algorithms & Computationally Intensive Inference seminars

Terms 1-3, Location C1.06, Fridays 12:15-14:00 (12:15-12:45 is an informal sandwich lunch).

Reminder emails are not sent to participants unless there is a change to the scheduled programme at short notice. If you would like to speak, or you want to be included in any emails, please contact one of the organisers.

Current Organisers: Murray Pollock, Dootika Vats

  • If you would like to talk, or have ideas for possible speakers, then please email one of the organisers above.

Website URL: www.warwick.ac.uk/compstat

Mailing List Sign-Up: http://mailman1.csv.warwick.ac.uk/mailman/listinfo/algorithmseminar

Mailing List: algorithmseminar@listserv.csv.warwick.ac.uk (NB - only approved members can post)

2017/18 Term 1:

  • Week 1 - 6th October - Axel Finke (UCL) - "On embedded hidden Markov models and particle Markov chain Monte Carlo methods"
    • Abstract: The embedded hidden Markov model (EHMM) sampling method is an MCMC technique for state inference in non-linear non-Gaussian state-space models which was proposed in Neal (2003); Neal et al. (2004) and extended in Shestopaloff & Neal (2016). An extension to Bayesian parameter inference was presented in Shestopaloff & Neal (2013). An alternative class of MCMC schemes addressing similar inference problems is provided by particle MCMC (PMCMC) methods (Andrieu et al. 2009; 2010). All these methods rely on the introduction of artificial extended target distributions for multiple state sequences which, by construction, are such that one randomly indexed sequence is distributed according to the posterior of interest. By adapting the framework of PMCMC methods to the EHMM framework, we obtain novel particle filter (PF)-type algorithms for state inference, related to the class of "sequential MCMC" algorithms (e.g. Septier & Peters (2016)), and parameter inference schemes. In addition, we show that most of these algorithms can be viewed as particular cases of a general PF and PMCMC framework. We demonstrate that a properly tuned conditional PF with "local" MCMC moves proposed in Shestopaloff & Neal (2016) can outperform the standard conditional PF significantly when applied to high-dimensional state-space models. We also derive theoretical guarantees for the novel (unconditional) PF-type algorithm and discuss why it could serve as an interesting alternative to standard PFs for likelihood estimation. This is joint work with Arnaud Doucet and Adam M. Johansen.
  • Week 2 - 13th October - Wilfrid Kendall (Warwick) - "Dirichlet forms and MCMC"
    • Abstract: In this talk I will discuss the use of Dirichlet forms to deliver proofs of optimal scaling results for Markov chain Monte Carlo algorithms (specifically, Metropolis-Hastings random walk samplers) under regularity conditions which are substantially weaker than those required by the original approach (based on the use of infinitesimal generators). The Dirichlet form method has the added advantage of providing an explicit construction of the underlying infinite-dimensional context. In particular, this enables us directly to establish weak convergence to the relevant infinite-dimensional diffusion.
    • Reference: Zanella, G., Bédard, M., & Kendall, W. S. (2016). A Dirichlet Form approach to MCMC Optimal Scaling. To appear in Stochastic Processes and Their Applications. URL: arxiv.org/abs/1606.01528.
  • Week 3 - 20th October - Jure Vogrinc (Imperial) - "Asymptotic variance for Random walk Metropolis chains in high dimensions: logarithmic growth via the Poisson equation"
    • Abstract: There are two ways of speeding up MCMC algorithms: (1) construct more complex samplers that use gradient and higher order information about the target and (2) design a control variate to reduce the asymptotic variance. The efficiency of (1) as a function of dimension has been studied extensively. The talk will focus on analogous results for (2): rigorous results linking the growth of the asymptotic variance with dimension. Specifically, for a d-dimensional Random walk Metropolis chain with an IID target I will present a control variate for which the asymptotic variance of the corresponding estimator is bounded by a multiple of (log d)/d over the spectral gap of the chain. The control variate is constructed using the solution of the Poisson equation for the scaling limit in the seminal paper "Weak convergence and optimal scaling of random walk Metropolis algorithms" of Gelman, Gilks and Roberts. I will present the ideas behind the proof and discuss potential extensions and applications of the result.
  • Week 4 - 27th October - Andrew Duncan (Sussex) - Talk Title / Abstract TBC
  • Week 5 - 3rd November - Mini Talks
  • Week 6 - 10th November - Thibaut Lienart (Oxford) - Talk Title / Abstract TBC
  • Week 7 - 17th November - Short Talks
    • Talk 1 - Arne Gouwy (OxWaSP) - Talk Title / Abstract TBC
    • Talk 2 - Marcin Mider (OxWaSP) - Talk Title / Abstract TBC
  • Week 8 - 24th November - Ashley Ford (Bristol) - Talk Title / Abstract TBC
  • Week 9 - 1st December - Richard Wilkinson (Sheffield) - Talk Title / Abstract TBC
  • Week 10 - 8th December - Yvo Pokern (UCL) - "Gibbs Flow for Approximate Transport with Applications to Bayesian Computation"
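For readers new to the particle-filter material in the Week 1 abstract: the "standard PF for likelihood estimation" that the EHMM-based algorithms are compared against is the bootstrap particle filter, whose log-likelihood estimate is unbiased on the likelihood scale. The sketch below is purely illustrative (a linear-Gaussian state-space model with made-up parameters phi, sigma, tau), not the method from the talk:

```python
import numpy as np

def bootstrap_pf_loglik(y, n_particles=500, phi=0.95, sigma=1.0, tau=0.5, rng=None):
    """Bootstrap particle filter log-likelihood estimate for the toy model
    x_t = phi * x_{t-1} + sigma * eps_t,   y_t = x_t + tau * nu_t,
    with eps_t, nu_t standard normal. Model and parameters are illustrative."""
    rng = np.random.default_rng() if rng is None else rng
    # initialise from the stationary distribution of the AR(1) state process
    x = rng.normal(0.0, sigma / np.sqrt(1 - phi**2), n_particles)
    loglik = 0.0
    for t in range(len(y)):
        x = phi * x + sigma * rng.standard_normal(n_particles)   # propagate
        # Gaussian observation log-weights
        logw = -0.5 * ((y[t] - x) / tau) ** 2 - np.log(tau * np.sqrt(2 * np.pi))
        m = logw.max()
        w = np.exp(logw - m)
        loglik += m + np.log(w.mean())                           # incremental likelihood
        # multinomial resampling
        x = x[rng.choice(n_particles, size=n_particles, p=w / w.sum())]
    return loglik

# Example on data simulated from the same toy model
rng = np.random.default_rng(0)
xs = np.zeros(50)
for t in range(1, 50):
    xs[t] = 0.95 * xs[t - 1] + rng.standard_normal()
y = xs + 0.5 * rng.standard_normal(50)
ll = bootstrap_pf_loglik(y, n_particles=1000, rng=np.random.default_rng(1))
```

Replacing the unconditional run by a conditional PF (keeping one reference trajectory fixed through propagation and resampling) gives the conditional PF that the abstract's "local MCMC move" variants build on.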
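The Week 2 and Week 3 abstracts both concern optimal scaling of the random walk Metropolis (RWM) sampler: for an i.i.d. product target in dimension d, the scaling limit of Gelman, Gilks and Roberts suggests a proposal standard deviation of about 2.38/sqrt(d), giving an acceptance rate near 0.234. A minimal sketch of the sampler itself (generic code, not from either talk):

```python
import numpy as np

def rwm(logpi, x0, n_steps, step, rng=None):
    """Random walk Metropolis with isotropic Gaussian proposals.
    Returns the chain and the empirical acceptance rate."""
    rng = np.random.default_rng(0) if rng is None else rng
    x = np.asarray(x0, dtype=float)
    lp = logpi(x)
    accepted = 0
    chain = np.empty((n_steps, x.size))
    for i in range(n_steps):
        prop = x + step * rng.standard_normal(x.size)   # symmetric proposal
        lp_prop = logpi(prop)
        # Metropolis accept/reject on the log scale
        if np.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
            accepted += 1
        chain[i] = x
    return chain, accepted / n_steps

# Standard normal product target in d = 50, scaled per the theory above
d = 50
chain, acc = rwm(lambda x: -0.5 * x @ x, np.zeros(d), 20_000, 2.38 / np.sqrt(d))
```

With this step size the observed acceptance rate should sit in the rough vicinity of 0.234; the Dirichlet-form approach of the Week 2 talk re-derives this kind of limit under weaker regularity conditions.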

2017/18 Term 2:

  • Week 1 - 12th January - Daniel Sanz-Alonso (Brown) - Talk Title / Abstract TBC
  • Week 2 - 19th January - Available
  • Week 3 - 26th January - Available
  • Week 4 - 2nd February - Hongsheng Dai (Essex) - Talk Title / Abstract TBC
  • Week 5 - 9th February - Available
  • Week 6 - 16th February - Available
  • Week 7 - 23rd February - Andi Wang (Oxford) - Talk Title / Abstract TBC
  • Week 8 - 2nd March - Available
  • Week 9 - 9th March - Available
  • Week 10 - 16th March - Available

2017/18 Term 3:

Previous Years:

2016/2017

2015/2016

2014/2015

2013/2014

2012/2013

2011/2012 

2010/2011

Some key phrases:

- Sampling and inference for diffusions
- Exact algorithms
- Intractable likelihood
- Pseudo-marginal algorithms
- Particle filters
- Importance sampling
- MCMC
- Adaptive MCMC
- Perfect simulation
- Markov chains...
- Random structures...
- Randomised algorithms...