28 September 2012
Paul Constantine
Reduced Order Models for Parameterized Hyperbolic Conservation Laws with Shock Reconstruction
Continued advances in high performance computing are enabling researchers in computational science to simulate more complex physical models. Such simulations can occupy massive supercomputers for extended periods of time. Unfortunately, the cost of these complex simulations renders infeasible parameter studies such as design optimization or uncertainty quantification, in which multiple simulations must be run to explore the space of design parameters or uncertain inputs.
A common fix is to construct a cheaper reduced order model -- trained on the outputs of a few carefully selected simulation runs -- for use in the parameter study.
Model reduction for large-scale simulations is an active research field. Techniques such as reduced basis methods and various interpolation schemes have been used successfully to approximate the simulation output at new parameter values at a fraction of the computational cost of a full simulation. These methods perform best when the solution is smooth with respect to the model parameters. However, solutions of nonlinear conservation laws are known to develop discontinuities in space even for smooth initial data. These spatial discontinuities typically imply discontinuities in the parameter space, which severely diminish the performance of standard model reduction methods.
We present a method for constructing an accurate reduced order model of the solution to a parameterized, nonlinear conservation law. We use a standard method for an initial guess and propose a metric for determining regions in space/time where the standard method yields a poor approximation. We then return to the conservation law and correct the regions of low accuracy. We will describe the method in general and present results on the inviscid Euler equations with parameterized initial conditions.
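As background, the "standard method" family the abstract refers to can be illustrated with a minimal proper orthogonal decomposition (POD) sketch. The smooth tanh profile, parameter range and basis size below are illustrative assumptions; the shock-reconstruction correction described in the talk is not shown.

```python
import numpy as np

# Minimal POD sketch: snapshots of a parameterized solution u(x; mu) are
# compressed via the SVD into a low-rank basis. With smooth parameter
# dependence this works well; a parameter-dependent shock location would
# make the singular values decay slowly and the projection error large.
x = np.linspace(0.0, 1.0, 200)
params = np.linspace(0.5, 2.0, 10)      # training parameter values mu
snapshots = np.column_stack([np.tanh(mu * (x - 0.5)) for mu in params])

U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
r = 3                                    # reduced basis size
basis = U[:, :r]

# Approximate the solution at an unseen parameter by projecting it onto
# the reduced basis and measuring the relative error.
u_new = np.tanh(1.3 * (x - 0.5))
u_rom = basis @ (basis.T @ u_new)
err = np.linalg.norm(u_new - u_rom) / np.linalg.norm(u_new)
```

Because this family of profiles varies smoothly with mu, three basis vectors already capture the unseen solution accurately, which is exactly the smoothness assumption that breaks down for shocked solutions.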

5 October 2012
Farhan Feroz (Cambridge)
Nets and Nests: Accelerated Bayesian Inference in Astrophysics and Cosmology

Astrophysics and cosmology have increasingly become data driven with the availability of large amounts of high-quality data from missions and experiments such as WMAP, Planck and the LHC. This has resulted in the development of many innovative methods for performing robust statistical analyses. MultiNest is a Bayesian inference algorithm, based on nested sampling, which has been applied successfully to numerous challenging problems in cosmology and astroparticle physics owing to its ability to explore multi-modal parameter spaces efficiently. MultiNest can also calculate the Bayesian evidence and therefore provides a means to carry out Bayesian model selection. I will give a brief description of this algorithm and review its applications in astrophysics and cosmology. I will also describe some recent work on developing new methods for greatly accelerating statistical analyses, in particular by combining neural networks with nested sampling methods. These approaches are generic in nature and may therefore be applied beyond astrophysics.
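For readers unfamiliar with nested sampling, here is a minimal sketch of the core evidence calculation on a toy one-dimensional problem. The Gaussian likelihood, uniform prior and rejection-sampling replacement step are illustrative simplifications: MultiNest's contribution is precisely to replace that naive rejection step with efficient draws from ellipsoidal decompositions of the live points.

```python
import numpy as np

# Minimal nested sampling sketch (the idea underlying MultiNest) with a
# uniform prior on [0, 1]. Live points are repeatedly replaced by prior
# draws above a rising likelihood contour while the prior volume shrinks.
rng = np.random.default_rng(0)

def loglike(x):
    # Gaussian likelihood with sigma = 0.1, so the evidence is
    # Z = integral of L over [0, 1] ~= 0.1 * sqrt(2*pi) ~= 0.2507.
    return -0.5 * ((x - 0.5) / 0.1) ** 2

n_live, n_iter = 200, 1200
live = rng.random(n_live)
logL = loglike(live)
logZ = -np.inf
for i in range(n_iter):
    worst = np.argmin(logL)
    # Prior volume shrinks geometrically, X_i ~ exp(-i / n_live), so the
    # shell between successive contours has the log-width below.
    log_width = -i / n_live + np.log(1.0 - np.exp(-1.0 / n_live))
    logZ = np.logaddexp(logZ, log_width + logL[worst])
    # Replace the worst point with a prior draw above its likelihood
    # (naive rejection sampling; MultiNest does this step efficiently).
    Lmin = logL[worst]
    while True:
        x = rng.random()
        if loglike(x) > Lmin:
            break
    live[worst], logL[worst] = x, loglike(x)

# Add the contribution of the remaining live points at the final volume.
log_mean_L = logL.max() + np.log(np.mean(np.exp(logL - logL.max())))
logZ = np.logaddexp(logZ, -n_iter / n_live + log_mean_L)
Z = np.exp(logZ)
```

The estimated evidence Z converges on the true value of about 0.2507, and the same accumulated weights can be reused to form posterior samples, which is why the method yields model selection and parameter estimation in one run.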

16 November 2012
Donna Calhoun (Boise State University)
A logically Cartesian, adaptively refined two-patch sphere grid for modeling transport in the atmosphere

Recently, we developed a novel, logically Cartesian sphere grid particularly well suited for explicit finite volume schemes. We demonstrated that schemes such as the wave propagation algorithm (R. J. LeVeque, Univ. of Washington) for solving hyperbolic PDEs and a diamond-cell type method (Calhoun, C. Helzel, Univ. of Bochum) for parabolic equations lead to numerically accurate results on our sphere grid.

In our current work, we are developing a new adaptive mesh refinement (AMR) code which uses wave propagation algorithms on non-overlapping AMR grids stored as leaves in a forest of quad- or oct-trees. The underlying tree-based code, p4est (Carsten Burstedde, Univ. of Bonn), manages the multi-block connectivity in a multi-processor environment and has been shown to be highly scalable in realistic applications. This hybrid AMR code, which we call ForestClaw, will easily handle the adaptivity for our two-patch sphere grid, as well as the cubed sphere and, more generally, any multi-block geometry.

I will discuss features of our finite volume schemes that make them work well on general surface meshes, as well as present a few of their drawbacks. Currently, our target application is volcanic ash dispersion in the atmosphere. We will present work in progress on our joint efforts with researchers at the Cascades Volcano Observatory (Vancouver, Washington, USA).
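The leaves-of-a-quadtree idea behind such AMR codes can be sketched in a few lines. This toy refines a square domain around a circular "front" and is purely illustrative: p4est is a C library that manages forests of such trees across many processors with its own API, and the refinement criterion and maximum level here are assumptions for the example.

```python
import math

# Toy quadtree AMR sketch: patches refine where a model "front" (a circle
# of radius 0.5 about the origin) crosses them; all other patches become
# leaves at their current level.
def crosses_front(x, y, size, r=0.5):
    """True if the circle of radius r about the origin crosses this patch."""
    cx = min(max(0.0, x), x + size)          # closest point of the square
    cy = min(max(0.0, y), y + size)          # patch to the origin (clamped)
    dmin = math.hypot(cx, cy)
    dmax = max(math.hypot(x + dx, y + dy)
               for dx in (0.0, size) for dy in (0.0, size))
    return dmin <= r <= dmax

def refine(x, y, size, level, max_level, leaves):
    """Recursively split a square patch into four children where flagged."""
    if level < max_level and crosses_front(x, y, size):
        half = size / 2.0
        for dx in (0.0, half):
            for dy in (0.0, half):
                refine(x + dx, y + dy, half, level + 1, max_level, leaves)
    else:
        leaves.append((x, y, size, level))

leaves = []
refine(-1.0, -1.0, 2.0, 0, 5, leaves)
# Leaves near the front reach the maximum level, patches far away stay
# coarse, and together the leaves still tile the whole [-1, 1] x [-1, 1]
# domain without overlap.
```

The non-overlapping leaves are exactly the grids on which a solver like ForestClaw runs its single-patch update; the hard part the talk addresses is managing their connectivity and load balance in parallel.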

23 November 2012
Vahid Shahrezaei (Imperial)
Input-output relations in biochemical networks

Biological cells process information and make reliable decisions via biochemical signalling networks. The input-output relations in these networks have evolved to produce reliable function in spite of the stochasticity present in their dynamics. Here, I present some results on mathematical modelling of a common class of biochemical signalling motifs, the phosphorylation-dephosphorylation cycle. I discuss the role of enzyme saturation, multiple binding sites, diffusion-limited reactions and enzymatic complex formation. I illustrate how these networks can produce a linear, ultrasensitive or non-monotonic response. I will also discuss the connection to yeast mating and T cell receptor signalling.
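The graded versus ultrasensitive responses mentioned here can be illustrated with the classic Goldbeter-Koshland steady-state calculation for a Michaelis-Menten phosphorylation-dephosphorylation cycle. The rate constants below are illustrative, and this is a generic textbook sketch rather than the speaker's specific models.

```python
# Steady-state input-output relation of a phosphorylation-dephosphorylation
# cycle with Michaelis-Menten kinetics. w is the phosphorylated fraction,
# u = V1/V2 is the kinase-to-phosphatase activity ratio, and J1, J2 are
# Michaelis constants scaled by the total substrate concentration.
def steady_state(u, J1, J2):
    # Solve u*(1 - w)/(J1 + 1 - w) = w/(J2 + w) on (0, 1) by bisection;
    # the left side decreases in w and the right side increases, so the
    # difference g(w) has exactly one root.
    def g(w):
        return u * (1.0 - w) / (J1 + 1.0 - w) - w / (J2 + w)
    lo, hi = 1e-9, 1.0 - 1e-9
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if g(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Unsaturated enzymes (large J) respond gradually to the same input change
# for which saturated enzymes (small J) switch almost all-or-none near u = 1.
graded = [steady_state(u, 10.0, 10.0) for u in (0.8, 1.25)]
switch = [steady_state(u, 0.01, 0.01) for u in (0.8, 1.25)]
```

With saturated enzymes the phosphorylated fraction jumps from near 0 to near 1 as the input crosses u = 1 (zero-order ultrasensitivity), while the unsaturated cycle barely moves, which is one concrete instance of the enzyme-saturation effects the abstract lists.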

27 November 2012
Desmond J Higham (Strathclyde)

Algorithms and Models for Evolving Networks

The digital revolution is generating novel large scale examples of connectivity patterns that change over time.

This scenario may be formalized as a graph with a fixed set of nodes whose edges switch on and off. For example, we may have networks of interacting mobile phone users, emailers, Facebookers or Tweeters. To understand and quantify the key properties of such evolving networks, we can extend classical graph theoretical notions like degree, path length and centrality. In this talk I will focus on linear algebra-based algorithms and show that appropriate matrix products can capture various aspects of information flow around an evolving network. I will show how these algorithms performed in a recent case study on Twitter data, where independent influence rankings were available from social media experts.

I will also show how classical random graph models can be extended to the time-dependent setting.

In particular, a model for triadic closure (friends-of-friends tend to become friends) will be seen to produce a bistability effect.
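One concrete example of a matrix-product measure for time-ordered information flow is the Katz-style dynamic communicability associated with work by Grindrod, Higham and coauthors. The downweighting parameter and the toy four-node network below are illustrative assumptions, not a reproduction of the Twitter case study.

```python
import numpy as np

# Dynamic communicability sketch: Q = (I - a*A_1)^{-1} (I - a*A_2)^{-1} ...
# counts walks that respect the time ordering of the network snapshots,
# downweighting each extra edge by the parameter a (which must be smaller
# than the reciprocal of the largest spectral radius of the snapshots).
def dynamic_communicability(adjacency_seq, a=0.1):
    n = adjacency_seq[0].shape[0]
    Q = np.eye(n)
    for A in adjacency_seq:          # snapshots in chronological order
        Q = Q @ np.linalg.inv(np.eye(n) - a * A)
    return Q

# Three snapshots of a 4-node network whose single edge moves over time:
# 0-1 at time 1, 1-2 at time 2, 2-3 at time 3.
A1 = np.zeros((4, 4)); A1[0, 1] = A1[1, 0] = 1.0
A2 = np.zeros((4, 4)); A2[1, 2] = A2[2, 1] = 1.0
A3 = np.zeros((4, 4)); A3[2, 3] = A3[3, 2] = 1.0
Q = dynamic_communicability([A1, A2, A3])

broadcast = Q.sum(axis=1)   # how well each node can send information forward
receive = Q.sum(axis=0)     # how well each node can collect information
# Node 0 reaches node 3 through the time-ordered walk 0-1, 1-2, 2-3, but
# node 3 cannot reach node 0 because the edges appear in the wrong order.
```

The asymmetry between Q[0, 3] and Q[3, 0] is what static centrality measures on the aggregated graph would miss, since the time-aggregated network is a symmetric path.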

30 November 2012
Ian Murray (Edinburgh)

Sampling hierarchical latent Gaussian models

Latent Gaussian models are a standard workhorse for statistical modelling.

Applications are found in diverse areas such as astrophysics and the modelling of sports outcomes. They are also a useful test-bed for probabilistic inference methods, as several difficulties manifest themselves that are common to many inference problems. I'll outline some of my work on Markov chain Monte Carlo methods for simulating the posteriors of hierarchical latent Gaussian models.
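One widely used method from this line of work is elliptical slice sampling (Murray, Adams and MacKay), which updates a latent vector with a zero-mean Gaussian prior without any tuning parameters. The two-dimensional toy model and sample sizes below are illustrative assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

def elliptical_slice(f, chol_sigma, log_lik):
    """One elliptical slice sampling update for f with a N(0, Sigma) prior."""
    nu = chol_sigma @ rng.standard_normal(f.shape)   # auxiliary prior draw
    log_y = log_lik(f) + np.log(rng.random())        # slice height
    theta = rng.uniform(0.0, 2.0 * np.pi)
    lo, hi = theta - 2.0 * np.pi, theta
    while True:
        # Proposals move around the ellipse through f and nu; shrinking the
        # bracket towards theta = 0 guarantees termination at the current f.
        f_new = f * np.cos(theta) + nu * np.sin(theta)
        if log_lik(f_new) > log_y:
            return f_new
        if theta < 0.0:
            lo = theta
        else:
            hi = theta
        theta = rng.uniform(lo, hi)

# Toy model: f ~ N(0, I_2) prior with a Gaussian likelihood centred at
# (1, 1); the posterior is then N((0.5, 0.5), 0.5 * I).
def log_lik(f):
    return -0.5 * np.sum((f - 1.0) ** 2)

chol = np.eye(2)            # Cholesky factor of the prior covariance
f = np.zeros(2)
samples = []
for _ in range(6000):
    f = elliptical_slice(f, chol, log_lik)
    samples.append(f.copy())
mean = np.mean(samples[1000:], axis=0)   # should approach (0.5, 0.5)
```

Every proposal stays exactly on an ellipse of prior-plausible states, which is why the update needs no step-size tuning and remains efficient even when the prior covariance is strongly correlated.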

11 January 2013
John Thuburn (Exeter)

A primal-dual mixed finite-element method for atmospheric modelling on GungHo grids

The Met Office weather and climate prediction model currently uses a latitude-longitude grid. However, data communication issues around the poles imply that it is unlikely to scale well on future massively parallel computers. Under a project called GungHo, NERC and the Met Office are collaborating to develop a new atmospheric model dynamical core on a quasi-uniform grid.

The problem is particularly challenging because atmospheric dynamics is strongly multi-scale in space and time; an accurate and robust solution requires the algorithm to have many desirable properties, including conservation, accurate representation of wave dispersion and balance, and accurate potential vorticity dynamics.

In this talk I will present a new primal-dual mixed finite-element method for the rotating shallow-water equations that gives those desirable properties on arbitrary polygonal grids. (The shallow-water equations are often used to develop prototype schemes before extending to three dimensions.) Some preliminary results of standard test cases will be presented, suggesting that the new scheme gives comparable accuracy to the current latitude-longitude grid scheme.

25 January 2013
Henry Abarbanel (UC San Diego)

Building Nervous Systems from the Bottom Up

The problem of estimating parameters and states in a model of physical processes from sparse observed time series is a statistical physics question which we will develop and illustrate. We will discuss how we used this framework to create and explore models for individual neurons in the nucleus HVC of the avian song system. The method is quite general, and the lessons learned in this application have promising implications for other questions of this form.

8 February 2013
Djoko Wirosoetisno (Durham)

Navier-Stokes equations on the beta-plane and a rotating sphere

We prove that the solution of the Navier-Stokes equations on the beta-plane and on a rotating sphere tends towards zonal flows as the rotation rate tends to infinity. The implications for the dimension of the global attractor are also discussed.

15 February 2013
Mark Peletier (Eindhoven)

Energy-driven pattern formation via competing long- and short-range interactions

I will discuss patterns in block copolymer melts. This is a model system that is mathematically tractable, physically meaningful (and experimentally accessible) and representative of a large class of energy-driven pattern-forming systems. Such systems show a remarkable variety of different patterns, of which only a small fraction is well understood.

In this talk I will focus on a variational model for this system, in a parameter regime in which the system forms regular patterns of small spheroid blobs, called particles. The energy for these structures is dominated by a single-particle term, which penalizes each particle independently. This term drives the system towards particles of a well-defined size. At the next level the interaction between the particles is given by a Coulomb interaction potential, giving rise to approximately periodic arrangements.

15 March 2013
Lucia Scardia (Glasgow)

Multiscale problems in dislocation theory

Dislocations are defects in the crystal lattice of metals, and their collective motion gives rise to macroscopic permanent or plastic deformations.
Since the typical number of dislocations is very large, keeping track of each dislocation is too costly and therefore upscaled models must be derived.
The derivation of such models is one of the hard open problems in mechanical engineering.
This talk will address the rigorous derivation of mesoscopic dislocation models from discrete models using Gamma-convergence.
This is based on work in collaboration with Marc Geers, Stefan Mueller, Ron Peerlings, Mark Peletier and Caterina Zeppieri.

14 June 2013
Mark Wilkinson (Oxford)

Eigenvalue Constraints and Regularity of Q-tensor Navier-Stokes Dynamics

The physicist Pierre-Gilles de Gennes originally put forward the idea that a traceless and symmetric matrix could be a reasonable object to model the small-scale structure of nematic liquid crystals within continuum mechanics. Indeed, if such a matrix is interpreted as a collection of second moments of some probability measure on the unit sphere, its eigenvalues are bounded below by -1/3 and above by 2/3. This constraint raises questions regarding both the rigorous analysis and physical predictions of nematic theories which employ this matrix order parameter, more commonly known as the 'Q-tensor'.

John Ball and Apala Majumdar recently constructed a singular map on traceless, symmetric matrices that penalises Q-tensors which violate the aforementioned bounds by giving them an infinite energy cost. In this talk, I shall discuss some rigorous results for a modified Beris-Edwards model of nematic dynamics into which this map is built, including the existence, regularity and so-called 'strict physicality' of its weak solutions.