
Abstracts


Mel Ades
(Reading)
Title: Estimating the full posterior pdf with particle filters
Abstract: Recent work has shown that the approximations made in the commonly used data assimilation algorithms lead to errors when trying to understand the uncertainty in the posterior probability distribution (Law and Stuart, 2012). Particle filters are a data assimilation method without any approximations and so can, in theory, provide an accurate representation of the full, potentially multi-modal, posterior pdf. In practice, however, the standard form of the particle filter suffers from filter degeneracy in high-dimensional systems. Filter degeneracy means that all the information on the posterior pdf collapses onto one, possibly unrepresentative, sample and results in particle filters not being considered appropriate for the majority of realistic applications. Particle filters can be formulated with the inclusion of proposal densities. Current research in particle filters as data assimilation methods has therefore focussed on how these proposal densities can be chosen to make particle filters applicable to high dimensional systems. However, it is hard to assess their effectiveness in representing the true posterior pdf, as this is generally unknown in these large models. In this presentation I discuss the different choices of proposal density and use a true posterior pdf generated through MCMC as a 'gold standard' with which to evaluate the different posterior representations using the Lorenz 96 model.
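For reference, the degeneracy described above is usually diagnosed through the importance weights of the standard (bootstrap) particle filter; in common notation (not taken from the abstract), with N particles x_i and observation y at step k,

    w_i^{(k)} \propto w_i^{(k-1)}\, p\big(y^{(k)} \mid x_i^{(k)}\big), \qquad N_\mathrm{eff} = \Big( \sum_{i=1}^{N} \big(w_i^{(k)}\big)^2 \Big)^{-1},

with normalised weights, and degeneracy corresponds to N_eff collapsing towards 1, i.e. a single particle carrying essentially all the weight.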


Sergios Agapiou
(Warwick)
Title: High dimensional analysis of the Gibbs sampler for hierarchical inverse problems
Abstract: We will study properties of the Gibbs sampler used for sampling the posterior in certain hierarchical Bayesian formulations of linear inverse problems. Emphasis will be placed on the insight obtained from formulating the problem in function space and this insight will be used to understand the mixing behavior of the Gibbs sampler as the discretization level increases.


Javier Amezcua
(Reading)
Title: Improved proposal densities for particle filters
Abstract: A common problem when using particle filters is degeneracy: most of the particles in the ensemble end up having negligible weights. One can target regions of high likelihood in state space using proposal densities which include, for example, information from future observations. We discuss ways to include this information, from simple nudging to more complicated methods such as 4DVar. Joint work with Peter Jan van Leeuwen.
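In standard importance-sampling notation (a generic formula, not quoted from the abstract), drawing particle i from a proposal density q that may depend on the future observation y gives the weight

    w_i \propto \frac{p(y \mid x_i)\, p(x_i \mid x_i^{n-1})}{q(x_i \mid x_i^{n-1}, y)},

so nudging or 4DVar-like proposals aim to place x_i in regions of high likelihood while the ratio p/q corrects for the altered sampling.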


Ross Bannister
(Reading)
Title: How is balance of a forecast ensemble affected by adaptive and non-adaptive localization schemes?
Abstract: Ensemble data assimilation would help to bypass many of the problems encountered in variational data assimilation (namely the need to model background error covariances) were it not for the sampling errors that inevitably arise from the use of a finite number of ensemble members. Sampling errors lead to rank deficiency (and hence poor conditioning) of the data assimilation problem and appear as artifacts in the background error structure functions derived from the finite ensemble. Methods such as covariance localization have been used for some time to suppress these artifacts. Although they have been reasonably effective at this, they have the negative side effect of degrading the 'balance' properties (hydrostatic, geostrophic, anelastic) of the ensemble. These properties are valuable for minimizing initialization problems in post-assimilation forecasts, and so localization methods are sought that preserve (as far as possible) the degree of balance.
This work reports on a range of balance diagnostics found from an ensemble of Met Office high-resolution forecasts with and without the application of the Schur-product style localization. Of special interest is the investigation of new flow-adaptive localization methods introduced by Bishop and Hodyss in 2007 and 2009 whose effect on balance is not well studied.
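A minimal sketch of the Schur-product style localization referred to above (illustrative only; the Gaussian taper, its length scale and the random toy ensemble are assumptions, not Met Office settings):

    import numpy as np

    def schur_localize(Pb, L):
        # Element-wise (Schur/Hadamard) product of the sample covariance
        # with a correlation matrix L that tapers to zero with distance.
        return Pb * L

    n, m = 10, 5                                    # state size, ensemble size
    X = np.random.randn(n, m)                       # toy ensemble perturbations
    Pb = np.cov(X)                                  # sample covariance, rank <= m-1
    dist = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    L = np.exp(-(dist / 3.0) ** 2)                  # Gaussian-shaped taper (arbitrary scale)
    Pb_loc = schur_localize(Pb, L)                  # positive definite, spurious long-range
                                                    # covariances damped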

Bertrand Bonan (Reading)
Title: Data assimilation and moving point models in ice sheet modelling
Abstract: One of the most important issues is accurately tracking the evolution of physical boundaries such as the grounding line for marine ice sheets or the ice margin in palaeoglaciology. Here we present a moving point method that is well suited to tracking moving phenomena accurately. This method is based on the preservation of mass fractions. In order to initialise our model, we apply advanced inverse data assimilation techniques to the system. In particular, we develop an Ensemble Kalman Filter approach in this context. The data assimilation procedure treats both the mesh point positions and the ice sheet thickness as unknown state variables and updates both of these at each assimilation step. The advantage of the ensemble approach is that it enables the sensitivity of the system to be understood and, more importantly, provides information on the correlations between the variables, in particular between the grid and the ice thickness. We demonstrate the success of the technique for noisy, infrequent, partial measurements of ice thickness, both with and without noisy measurements of the terminus position.
(Joint work with Mike Baines, Nancy Nichols and Dale Partridge)
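In standard EnKF notation (a sketch of the augmented-state idea, not necessarily the authors' exact formulation), each ensemble member carries the augmented state z = (x, h), with x the mesh point positions and h the ice thickness, and is updated as

    z_i^a = z_i^f + K\,\big(y_i - H z_i^f\big), \qquad K = P^f H^T \big(H P^f H^T + R\big)^{-1},

where P^f is the ensemble covariance of the augmented state, so that cross-covariances between the grid positions and the thickness enter the gain K and allow thickness observations to move the mesh.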


Jochen Broecker
(Reading)
Title: Realistic performance estimates in data assimilation experiments without the possibility of replication
Abstract: For the purpose of this talk, data assimilation means to find an (approximate) trajectory of a dynamical model that (approximately) matches a set of observations. When it comes to evaluating the solution, we are often faced with the problem of having to use the same observations again. As the observations were already used to find the solution, such an evaluation is likely too optimistic and not representative of the performance of that solution with respect to 'out of sample' observations from the same underlying flow pattern but with independent observational errors. In atmospheric contexts, though, such 'out of sample' observations are hardly, if ever, available; in other words, there is no possibility of replication. This talk will present some ideas on estimating the optimism when evaluating in-sample, thereby giving a more realistic picture of the 'out of sample' performance.


Phil Browne (Reading)
Title: Numerical issues when applying a particle filter to a coupled climate model

Abstract: The equivalent weights particle filter has been shown not to suffer from filter degeneracy in high-dimensional systems. However, many numerical problems arise when implementing the filter within HadCM3, a coupled global climate model with state dimension 2.3 x 10^6.
In this talk we will discuss these problems, which include numerical linear algebra, sparse matrix-vector multiplication and simply coupling to the model. Many of the issues stem from the nontrivial nature of the ocean bathymetry and the number of different prognostic variables, not just the sheer size of the state dimension.
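As a toy illustration of the sparse matrix-vector products involved (a random sparse operator standing in for the structured matrices of the filter; the size and density are arbitrary assumptions):

    import numpy as np
    from scipy import sparse

    n = 100_000                                           # far smaller than the 2.3e6 HadCM3 state
    A = sparse.random(n, n, density=1e-5, format="csr")   # compressed sparse row storage
    x = np.random.randn(n)
    y = A @ x                                             # costs O(nnz) rather than O(n^2); dense
                                                          # storage of an n-by-n matrix is impossible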


Alexey Chernov
(Reading)
Title: Estimation of central statistical moments with the Multilevel Monte Carlo Method
Abstract: The Multilevel Monte Carlo Method (MLMC) is a recently established sampling approach for forward uncertainty propagation for problems with random parameters. In this talk we present new convergence theorems for the multilevel sample variance estimators and an extension to arbitrary order central moments. In particular, we prove that under certain assumptions, the variance can be estimated at essentially the same cost as the mean, and consequently as the cost required for solution of one forward problem for a fixed deterministic set of parameters. We comment on fast and stable evaluation of the estimators suitable for parallel large scale computations. The suggested approach is applied to a class of scalar random obstacle problems, a prototype of contact between deformable bodies. In particular, we are interested in rough random obstacles modelling contact between car tires and variable road surfaces. Numerical experiments support and complete the theoretical analysis.
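For context, the multilevel idea rests on the telescoping identity (standard MLMC notation, not quoted from the talk)

    \mathbb{E}[Q_L] = \mathbb{E}[Q_0] + \sum_{\ell=1}^{L} \mathbb{E}[Q_\ell - Q_{\ell-1}],

where Q_\ell is the quantity of interest computed on discretization level \ell; each correction term is estimated from independent samples, with most samples taken on the cheap coarse levels. The talk concerns the analogous construction for the variance and higher-order central moments rather than the mean.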


Daan Crommelin
(CWI, Amsterdam)
Title: Stochastic parameterization through statistical inference
Abstract: The abundance of data from observations and model simulations provides an opportunity for data-driven approaches to formulating stochastic parameterizations. Detailed simulations with high-resolution models that are used for process studies (e.g. convection) can also be used as a basis for extracting parameterizations through statistical inference. From high-resolution simulations on limited spatial domains, one can estimate stochastic processes that model the local feedback from small-scale processes on the large-scale state. To take into account the reverse feedback, from large to small scales, these stochastic processes must be conditioned on the large-scale state. In this manner, they mimic, in a statistical sense, the effect of the small-scale processes as simulated by a high-resolution model. With a network of independently evolving copies of the inferred stochastic process, a large spatial domain can be covered. I will discuss the implementation of this approach using finite Markov chains, as well as its application to the parameterization of atmospheric moist convection.
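A minimal sketch of how such a conditional finite Markov chain could be estimated from discretised high-resolution output (hypothetical variable names; the actual inference procedure used in the talk may differ):

    import numpy as np

    def conditional_transition_matrices(small, large, n_small, n_large):
        # small[t]: discretised small-scale feedback at time t (values 0..n_small-1)
        # large[t]: discretised large-scale state at time t    (values 0..n_large-1)
        # Returns P[k, i, j] = estimated Prob(small[t+1] = j | small[t] = i, large[t] = k).
        counts = np.zeros((n_large, n_small, n_small))
        for t in range(len(small) - 1):
            counts[large[t], small[t], small[t + 1]] += 1
        totals = counts.sum(axis=2, keepdims=True)
        return np.divide(counts, totals, out=np.zeros_like(counts), where=totals > 0)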


Matthew Dunlop
(Warwick)
Title: MAP estimators and discontinuous permeabilities in groundwater flow
Abstract: Given the permeability of a porous medium, the Darcy model for groundwater flow describes the pressure of a fluid in this medium. If there are multiple media present, the permeability will be discontinuous at the interfaces between them. We adopt a Bayesian approach to the inverse problem of determining the permeabilities of the different media and the shapes of the interfaces, given noisy measurements of the pressure.


Adam El Said
(Reading)
Title: Optimisation and Conditioning in Variational Data Assimilation
Abstract: Data assimilation merges observations with a dynamical model to find the optimal state estimate of a system given a set of observations. It is cyclic in that it is applied at fixed time intervals, and the beginning of each cycle incorporates a forecast from the previous cycle known as the background or a priori estimate. Variational data assimilation aims to minimise a non-linear least-squares objective function that is constrained by the flow of the (perfect) dynamical model (4DVAR). Relaxing the perfect model assumption gives rise to weak-constraint 4DVAR, which has two formulations of interest.
Gradient-based iterative solvers are used to solve the problem. We gain insight into accuracy and convergence by studying the condition number of the Hessian of both formulations. Theoretical bounds on the condition number are demonstrated using the linear advection equation. The theoretical bounds give us insight into the sensitivities of both formulations to changes in the assimilation parameters on linear models. We also highlight some interesting differences between the formulations obtained from analysing the bounds using the nonlinear chaotic Lorenz-95 model. Joint work with N.K. Nichols, A.S. Lawless.
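For orientation, the strong-constraint 4DVAR objective function referred to above can be written in standard notation (the weak-constraint formulations add model-error terms and are not reproduced here) as

    J(x_0) = \tfrac{1}{2}\,(x_0 - x_b)^T B^{-1} (x_0 - x_b)
           + \tfrac{1}{2} \sum_{k=0}^{N} \big(y_k - H_k \mathcal{M}_{0\to k}(x_0)\big)^T R_k^{-1} \big(y_k - H_k \mathcal{M}_{0\to k}(x_0)\big),

and the conditioning analysis concerns \kappa(S) = \lambda_{\max}(S)/\lambda_{\min}(S), the condition number of the Hessian S of the (linearised) objective.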


Patrick Farrell
(Oxford)
Title: Adjoints of finite element models
Abstract: The derivatives of PDE models are key ingredients in many important algorithms of computational mathematics. They find applications in diverse areas such as sensitivity analysis, PDE-constrained optimisation, continuation and bifurcation analysis, error estimation, and generalised stability theory. In the context of data assimilation, they are key for four-dimensional variational data assimilation (4DVAR); in the context of computational statistics, higher derivatives are essential for the exploitation of geometric structure in Monte Carlo methods. These derivatives, computed using the so-called tangent linear and adjoint models, have made an enormous impact in certain scientific fields (such as aeronautics, meteorology, and oceanography). However, their use in other areas has been hampered by the great practical difficulty of the derivation and implementation of tangent linear and adjoint models. In his recent book, Naumann (2011) describes the problem of the robust automated derivation of parallel tangent linear and adjoint models as ''one of the great open problems in the field of high-performance scientific computing''. In this talk, we present an elegant solution to this problem for the common case where the forward model may be written in variational form. The derivatives automatically derived enjoy approximately optimal efficiency, employ optimal checkpointing schemes to minimise recomputation, and scale naturally in parallel. We will also present some applications in optimisation and computational statistics.
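The underlying calculation, in generic notation: if the forward model is F(u, m) = 0 with solution u depending on parameters m, and J(u, m) is a functional of interest, then

    \frac{dJ}{dm} = \frac{\partial J}{\partial m} - \lambda^{*} \frac{\partial F}{\partial m},
    \qquad \text{where} \qquad
    \Big(\frac{\partial F}{\partial u}\Big)^{*} \lambda = \Big(\frac{\partial J}{\partial u}\Big)^{*}.

The point made in the talk is that when F is expressed in variational (finite element) form, the tangent linear and adjoint operators appearing here can be derived and assembled automatically.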


Igor Gejadze
(IRSTEA)
Title: On verifiability of optimal solutions in variational data assimilation problems with nonlinear dynamics
Abstract: The problem of variational data assimilation for a nonlinear evolution model is formulated as an optimal control problem to find the initial condition. The optimal solution (analysis) error arises due to the errors of the input data (background and observation errors). The confidence regions for the optimal solution can be constructed on the basis of the analysis error covariance. However, nonlinearity of the model equations may distort the Gaussian properties (normality) of the nonlinear least squares estimator to the extent that the covariance is no longer useful for defining the confidence regions. The normality of the estimator can be assessed, for example, by processing an ensemble of optimal solutions generated on artificially perturbed data. Then, any invariant test statistic for multivariate normality is a function of the Mahalanobis distances and angles based on the inverse of the sample covariance matrix. However, for the high-dimensional problems arising in geophysical applications only a very small (as compared to the state vector dimension) ensemble can be computed. This means that the inverse of the sample covariance matrix may not be available. We suggest a new normality measure which does not require this inverse matrix and, therefore, is feasible to compute in high dimensions. Moreover, this measure is represented as a sum of partial contributions associated with the elements of the state vector. This allows us to reveal the subsets of the state vector (or the areas of the spatially distributed variable) for which the normality of the estimator is violated most significantly. For example, numerical experiments conducted for the 1D Burgers equation have shown that there may exist quite narrow areas of strong non-Gaussianity for which the confidence regions estimated on the basis of the analysis error covariance may be totally deceptive, whereas they remain reasonably good for the rest of the domain. Joint work with V. Shutyaev.
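For reference, the Mahalanobis distances mentioned above take the standard form

    d_i^2 = (x_i - \bar{x})^T S^{-1} (x_i - \bar{x}),

with \bar{x} the ensemble mean and S the sample covariance; when the ensemble size is much smaller than the state dimension, S is singular and S^{-1} does not exist, which is precisely why a normality measure avoiding this inverse is needed.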

Katherine Howes (Reading)
Title: Developing coupled data assimilation methods in the presence of model error
Abstract: Coupled atmospheric and oceanic models are currently used by operational centres to produce seasonal-decadal forecasts. Operational centres such as ECMWF and the Met Office are moving towards coupled estimation of the initial conditions for both the atmospheric and oceanic variables. Although improvements to the forecast skill are expected when coupled estimation is performed, the increase in accuracy is hindered by the fact that these operational coupled models contain error. In this work we use coupled strong constraint four-dimensional variational data assimilation (4DVAR) with simple coupled models to investigate the effect that different types of model error have on the estimation of the initial conditions (analysis). We present a new method to improve the analysis through estimation of a coupling parameter.


Marco Iglesias Hernandez
(Nottingham)
Title: Regularising Ensemble Kalman Methods for Inverse Problems
Abstract: We present a novel regularizing ensemble Kalman method for solving PDE-constrained inverse problems. The proposed work combines ideas from iterative regularisation and ensemble Kalman methods to generate a derivative-free solver for inverse problems. We provide numerical results to illustrate the efficacy of the proposed method for solving inverse problems in subsurface flow applications.
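One common way to write an iteration of this regularizing ensemble Kalman type (a sketch in generic notation, not necessarily the exact scheme of the talk): for ensemble members u_n^{(j)}, data y with noise covariance \Gamma and forward map G,

    u_{n+1}^{(j)} = u_n^{(j)} + C_n^{uG}\big(C_n^{GG} + \alpha_n \Gamma\big)^{-1}\big(y - G(u_n^{(j)})\big),

where C_n^{uG} and C_n^{GG} are ensemble (cross-)covariances of the parameters and the model outputs, and the regularization parameter \alpha_n is chosen by a discrepancy-principle-type rule.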


Chris Jones
(North Carolina)
Title: Joint state-parameter data assimilation by a two-stage filtering technique
Abstract: This presentation is about an approach to joint state-parameter estimation in a sequential data assimilation framework. The state augmentation technique, in which the state vector is augmented by the model parameters, has had some success in the case where model parameters are additive. However, many geophysical or climate models contain non-additive parameters, such as those arising from the physical parametrization of sub-grid scale processes. In this case, the state augmentation technique may become ineffective, since its inference about parameters from partially observed states, based on the cross-covariance between states and parameters, is inadequate if states and parameters are not linearly correlated. In this talk, we study a two-stage filtering technique that runs particle filtering (PF) to estimate parameters while updating the state estimate using an ensemble Kalman filter (EnKF). The applicability of the proposed method is demonstrated using the Lorenz-96 system, where the forcing is parameterized. The proposed method is shown to be capable of estimating the relevant parameters with high accuracy as well as reducing uncertainty, whereas the state augmentation technique fails to achieve good results.
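A highly simplified sketch of one assimilation step of such a two-stage scheme (hypothetical code, illustrative of the idea only, not the authors' implementation): each ensemble member carries a parameter sample; the parameter samples are weighted as in a particle filter, while the state ensemble receives a stochastic EnKF update.

    import numpy as np

    def two_stage_step(X, theta, w, y, H, R, forward):
        # X: (n, N) state ensemble, theta: (p, N) parameter particles, w: (N,) weights,
        # y: (m,) observation, H: (m, n) observation operator, R: (m, m) obs error covariance,
        # forward: function propagating each member with its own parameters.
        N = X.shape[1]
        Xf = forward(X, theta)                                 # forecast ensemble
        innov = y[:, None] - H @ Xf
        # Stage 1: particle-filter weight update for the parameter particles.
        loglik = -0.5 * np.einsum('ij,ij->j', innov, np.linalg.solve(R, innov))
        w = w * np.exp(loglik - loglik.max())
        w = w / w.sum()                                        # (resampling of theta according to w omitted)
        # Stage 2: stochastic EnKF update of the state ensemble.
        A = Xf - Xf.mean(axis=1, keepdims=True)
        Pf = A @ A.T / (N - 1)
        K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)
        Yp = y[:, None] + np.linalg.cholesky(R) @ np.random.randn(R.shape[0], N)
        Xa = Xf + K @ (Yp - H @ Xf)
        return Xa, theta, w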


Tom Kent
(Leeds)
Title: A modified shallow water model for investigating convective-scale data assimilation
Abstract: It is often unfeasible, and indeed undesirable, to investigate the potential of new convective-scale data assimilation schemes on operational forecasting systems. Instead, idealised models are employed that capture the fundamental features of convective-scale dynamics while remaining computationally inexpensive, thus allowing an extensive investigation of the proposed scheme. Here, I outline a modified rotating shallow water model to represent an idealised atmosphere with moist convection. By combining the non-linearity due to advection in the shallow water equations with the onset of precipitation, the proposed model captures two important dynamical processes of convecting and precipitating weather systems. The model is a valid non-conservative hyperbolic system of partial differential equations and is solved numerically using a shock-capturing finite volume/element framework which deals robustly with the high non-linearity and so-called non-conservative products. The model will be used for investigating data assimilation schemes (ensemble and variational) at the convective scale and is currently being integrated into the Met Office's Data Assimilation Modelling framework, a test bed for data assimilation research using idealised models.


Kody Law
(King Abdullah University of Science and Technology, Saudi Arabia)
Title: A deterministic approach to filtering and EnKF for continuous stochastic processes observed at discrete times
Abstract: Filtering of a continuous-time stochastic process which is observed at discrete observation times is considered. An approach is proposed in which an accurate numerical approximation of the Fokker-Planck equation is used to obtain the predicting density. This density is then used either to approximate (a) the true filtering density arising from Bayes rule, (b) the mean-field EnKF density, or (c) a Gaussian approximation. The local error of the EnKF is given as the sum of two components: (i) the error between the finite approximation and the mean-field limit (sample or discretization error), and (ii) the error between the mean-field limit and the true filtering distribution (linear update error). In its simplest form presented here we prove that the error (i) is asymptotically smaller using this new approach as compared to standard EnKF for model dimensions d ≤ 3, and therefore either of (a,b,c) outperform standard EnKF as long as (i) is the dominant source of error. Once error (ii) exceeds error (i) this improvement is irrelevant and standard EnKF with any sufficiently large ensemble size is comparable to (b). To confirm the analytical results relating to error (i) and to further investigate the effects of error (ii) we perform numerical experiments for both a linear and a nonlinear Langevin SDE. In the nonlinear case, we examine the effect of imposing a Gaussian approximation on the distribution and find that the approximate Gaussian filter may perform better than the non-Gaussian mean-field EnKF, depending on when the Gaussian approximation is imposed. It may be possible to use these results to develop more effective filters.


Amos Lawless
(Reading)
Title: Exploring coupled data assimilation using an idealised model
Abstract: The successful application of data assimilation techniques to operational numerical weather prediction and ocean forecasting systems has led to an increased interest in their use for the initialisation of coupled atmosphere-ocean models in prediction on seasonal to decadal timescales. Coupled data assimilation presents a significant challenge but offers a long list of potential benefits, including improved use of near-surface observations, reduction of initialisation shocks in coupled forecasts, and generation of a consistent system state for the initialisation of coupled forecasts across all timescales. In this work we explore some of the fundamental questions in the design of coupled data assimilation systems within the context of an idealised one-dimensional coupled atmosphere-ocean model. We describe the development of a simplified single-column coupled atmosphere-ocean 4D-Var assimilation system and present preliminary results from a series of identical twin experiments devised to investigate and compare the behaviour and sensitivities of different coupled data assimilation methodologies. (Joint work with Polly Smith, Alison Fowler, Keith Haines)

Peter Jan van Leeuwen (Reading)
Title: What numerical weather prediction centres are doing and what they should be doing
Abstract: Numerical weather prediction is developing fast as the operational centres realise that existing methodologies are failing with increasing model resolution. The main research direction is into so-called hybrid methods, in which ideas from 4D space-time variational data assimilation and Ensemble Kalman Filters are combined. In one variant, ensemble members are used to provide the 3D space or 4D space-time covariances, which are then used in the variational framework. This allows for covariances that incorporate past observations, so observation information can be transferred between assimilation windows, and it avoids the use of an adjoint.
Other variants use an ensemble in which each member results from a variational data-assimilation problem with perturbed observations. Here the emphasis is on increasing the window size to use as many observations as possible. We will discuss what these different methods are trying to achieve, what we would really like to do, and how we should modify present attempts into something more useful.

Sean Lim (Oxford)
Title: A Model Problem for Seismic Migration
Abstract: Seismic migration is an important process in seismic imaging which estimates the scatterers of seismic waves in the subsurface. Many different migration techniques have been developed to obtain better results; the simplest is known as Kirchhoff summation migration. Here, we consider a simple case of migration, where we have a flat surface and a homogeneous subsurface. The problem is solved using a fully Bayesian approach with the goal of uncertainty quantification.

Noeleene Mallia (Reading)
Title: Assessing the Performance of Data Assimilation Algorithms with Linear Error Feedback
Abstract: A problem with many real-world data assimilation experiments is that they cannot be replicated. An evaluation of model performance against the available observations will therefore yield optimistic results, since the observations have already been used to find the solution. A possible remedy is to estimate the optimism using the in-sample error. Further to Jochen Broecker's presentation, this talk will consider estimating the optimism for data assimilation algorithms that employ linear error feedback. This type of feedback is found frequently in data assimilation; examples include 3D-Var and the Kalman Filter. Numerical experiments implementing this approach are presented. Specifically, this talk considers an algorithm where we employ linear error feedback using a constant gain matrix. We use results from control theory, along with our estimate of the out-of-sample error, to select a gain matrix which yields minimum out-of-sample error.
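In generic notation (not quoted from the talk), the class of algorithms considered is of the form

    x_{k+1} = f(x_k) + K\,\big(y_k - H x_k\big),

with a constant gain matrix K; 3D-Var with a fixed background covariance and the steady-state Kalman filter both fit this template, and it is over such constant-gain schemes that the estimated out-of-sample error is minimised.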


Daniel Rey
(San Diego)
Title: Accurate Data Assimilation with Sparse Data
Abstract: Transferring information from observations to models of complex systems may meet impediments when the number of observations at any observation time is not sufficient. This is especially so when chaotic behavior is expressed. We show how to use time-delay embedding, familiar from nonlinear dynamics, to provide the information required to obtain accurate state and parameter estimates. Good estimates of parameters and unobserved states are necessary for good predictions of the future state of a model system. This method may be critical in allowing the understanding of prediction in complex systems as varied as nervous systems and weather prediction where insufficient measurements are typical. Co-authors: Michael Eldridge, Mark Kostuk, Henry D.I. Abarbanel, Jan Schumann-Bischoff, and Ulrich Parlitz
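A minimal sketch of the time-delay embedding construction (illustrative only; the series, dimension and lag are arbitrary assumptions):

    import numpy as np

    def delay_embed(s, dim, tau):
        # Stack delayed copies of a scalar series s into vectors
        # [s(t), s(t + tau), ..., s(t + (dim - 1) * tau)].
        n = len(s) - (dim - 1) * tau
        return np.column_stack([s[i * tau: i * tau + n] for i in range(dim)])

    s = np.sin(0.1 * np.arange(200)) + 0.01 * np.random.randn(200)  # toy observed series
    E = delay_embed(s, dim=5, tau=3)                                # shape (188, 5)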


Daniel Sanz Alonso (Warwick)
Title: Accuracy of the Optimal Filter for Partially Observed Deterministic Dynamical Systems
Abstract: The aim of filtering is to estimate, in an on-line fashion, the value of a stochastic process, the signal, as noisy observations become available. In this talk we study discrete-time, randomly initialized signals that evolve according to a deterministic map Ψ and assume that only a low-dimensional projection of the signal, given by an observation operator P, can be observed. Our focus is on the situation where the noise in the observations is small, and we determine conditions on P, and its relation to the map Ψ, which ensure that the signal can be accurately tracked in the long-time asymptotic regime. We thus address the question of what observations are sufficient to reconstruct the signal accurately if the initial state of the dynamical system is uncertain. The conditional distribution of the signal at the current time, given all previous and present observations, is known as the filtering distribution. It is well known that its mean constitutes the optimal filter in a mean-square sense. We may therefore employ suboptimal filters to provide upper bounds on the error made by the optimal filter. Our main findings come as a by-product of results, of independent interest, on computable bounds for suboptimal filters based on variants of the 3DVar filtering algorithm, which was initially introduced in the context of high-dimensional meteorological filtering problems. We demonstrate the method in the first instance by studying linear signal dynamics. The general theory is then applied to chaotic signals defined via the solution, at discrete times, of a dissipative differential equation with quadratic energy-conserving nonlinearity. The two-dimensional Navier-Stokes equation on a torus, the Lorenz 63 model and the Lorenz 96 model, all observed partially and noisily, are contained within the scope of our analysis.
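For reference, a 3DVar-type filter of the kind referred to above has the fixed-gain form (standard notation, not quoted from the abstract)

    \hat{m}_{j+1} = (I - K P)\,\Psi(\hat{m}_j) + K\, y_{j+1},
    \qquad K = \hat{C} P^{*}\big(P \hat{C} P^{*} + \Gamma\big)^{-1},

with Ψ the signal map, P the (here linear) observation operator, Γ the observation noise covariance and \hat{C} a fixed model covariance; since the mean of the filtering distribution minimises the mean-square error, any bound on the error of such a suboptimal filter is also an upper bound for the optimal filter.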

Abhishek Shukla (Warwick)
Title: Controlling unpredictability with observations in the partially observed Lorenz '96 model
Abstract: In this talk we discuss the accuracy of Kalman filter based data assimilation schemes when the underlying model is non-linear and the observed data are partial and noisy. The Lorenz '96 system in a chaotic regime is considered as the underlying model. We also show the construction of an adaptive observation operator based on the local linear dynamics of the model. We then compare the numerical results of the adaptive observation scheme with those of a fixed projection observation operator.


Aretha Teckentrup
(Florida State)
Title: Multilevel Markov chain Monte Carlo algorithms for uncertainty quantification
Abstract: The parameters in mathematical models for many physical processes are often impossible to determine fully or accurately, and are hence subject to uncertainty. By modelling the input parameters as stochastic processes, it is possible to quantify the uncertainty in the model outputs. Based on the information available, a prior distribution is assigned to the input parameters. If, in addition, some dynamic data (or observations) related to the model outputs are available, a better representation of the parameters can be obtained by conditioning the prior distribution on these data, leading to the posterior distribution. In most situations, the posterior distribution is intractable in the sense that exact sampling from it is unavailable, and Markov chain Monte Carlo (MCMC) methods are hence frequently used. However, in large-scale applications, where the number of input parameters is typically very high and the computation of the likelihood very expensive, conventional MCMC methods quickly become infeasible. In this talk, we therefore develop a new multilevel version of a Metropolis-Hastings algorithm, based on a hierarchy of model discretisations. The new multilevel algorithm is generally applicable and independent of the underlying mathematical model. For a typical problem in subsurface flow, we will demonstrate the gains with respect to conventional MCMC that are possible with this new approach, and provide a full convergence analysis of the new algorithm.
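The multilevel construction rests on a telescoping sum of posterior expectations over the hierarchy of discretisations (a sketch in generic notation; the details of the level coupling are in the talk):

    \mathbb{E}^{\pi_L}[Q_L] = \mathbb{E}^{\pi_0}[Q_0]
        + \sum_{\ell=1}^{L} \Big( \mathbb{E}^{\pi_\ell}[Q_\ell] - \mathbb{E}^{\pi_{\ell-1}}[Q_{\ell-1}] \Big),

where \pi_\ell is the posterior and Q_\ell the quantity of interest on level \ell; each difference is estimated with coupled Metropolis-Hastings chains on consecutive levels, so that most of the sampling effort is spent on the coarse, cheap levels.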


Jochen Voss
(Leeds)
Title: MAP estimators and 4DVAR
Abstract: In a recent article (Dashti et al., 2013) we show how the maximum a posteriori (MAP) estimator can be applied to the problem of estimating an unknown function u from noisy measurements of a known, possibly nonlinear, map G applied to u. Our result shows that the MAP estimator can be characterised as the minimiser of an Onsager-Machlup functional defined on the Cameron-Martin space of the prior, thus leading to a variational problem. In this talk we relate our results about MAP estimators to the four-dimensional variational assimilation (4D-Var) method for data assimilation.
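In this setting, with a Gaussian prior whose Cameron-Martin space is E and negative log-likelihood Φ(u), the Onsager-Machlup functional takes the form

    I(u) = \Phi(u) + \tfrac{1}{2}\,\|u\|_E^2,

and MAP estimators are its minimisers; when u is an initial condition and Φ a sum of observation misfits along the model trajectory, this is formally the structure of a 4D-Var cost function, which is the connection explored in the talk.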


Joanne Waller
(Reading)
Title: Diagnosing observation error statistics for Doppler radar radial wind
Abstract: Data assimilation techniques combine observations with a model prediction of the state, known as the background, to provide a best estimate of the state, known as the analysis. For a data assimilation scheme to provide an optimal analysis, the errors associated with the observations must be correctly specified. These errors can be attributed to four main sources:

  • Instrument error
  • Error introduced in the observation operator, including modelling errors and errors due to the approximation of a continuous function as a discrete function.
  • Errors of representativity - these are errors that arise where the observations can resolve spatial scales that the model cannot.
  • Pre-processing errors

The instrument noise is typically independent and uncorrelated, whereas pre-processing, observation operator and representativity errors are correlated. Often these correlations are ignored in operational data assimilation, and the techniques of superobbing and thinning are used to improve the validity of the assumption of uncorrelated errors.
Information about the observation error is contained within the innovations, the differences between the observations and the background or analysis. Taking the expectation of the background and analysis innovation statistics can provide estimates of the observation errors. Here we calculate the observation error statistics for Doppler radar radial wind using the operational observations along with the analysis and background fields generated by the Met Office 1.5 km model. We find that error variances grow with height and that the correlation length-scale depends on both the height and the range of the observation. Previous work suggests that including these estimated errors in the assimilation may provide significant benefit.
Joint work with: S. L. Dance, N. K. Nichols, D. Simonin, S. P. Ballard
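An innovation-based estimate of this type is commonly written (in the notation of Desroziers et al., 2005) as

    \mathbb{E}\big[ d^o_a (d^o_b)^T \big] \approx R,
    \qquad d^o_b = y - H(x_b), \quad d^o_a = y - H(x_a),

where R is the observation error covariance, x_b and x_a the background and analysis, and H the observation operator; the diagnosed covariances for Doppler radial winds are then examined as functions of observation height and range.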