Abstract. We will discuss nonstationary turbulence at the stage prior to (and leading to) the formation of the Kolmogorov cascade. We will consider several basic
models of turbulence in which the energy spectrum obeys a nonlinear PDE or an integral equation. We will show that the spectrum has a power-law asymptotic
with an anomalous exponent that is less than the Kolmogorov value -5/3. By comparison with turbulence in the Burgers equation and with numerical simulations
of the Navier-Stokes equations, we will speculate that the anomalous scaling is related to the formation of a singularity, and that this scenario may be generic for turbulent systems.
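For orientation, the two spectra being contrasted can be written as follows (my notation, not necessarily that of the talk):

```latex
% Stationary Kolmogorov spectrum, with energy flux \epsilon:
E(k) \;=\; C\,\epsilon^{2/3}\,k^{-5/3}
% Transient (pre-cascade) anomalous power law, steeper than Kolmogorov,
% i.e. the exponent -x^* lies below -5/3:
E(k) \;\sim\; k^{-x^*}, \qquad x^* > \tfrac{5}{3}
```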
Topological chaos is a type of chaotic behavior that is forced by the motion of obstacles in some domain. 'Taffy pullers' are a classic example. I will review two approaches to topological chaos, with applications in particular to stirring and mixing in fluid dynamics.
The first approach involves constructing devices where the fluid motion is topologically complex, usually by imposing a specific motion of stirring rods. I will then discuss optimization strategies that can be implemented. The second approach is diagnostic, where flow characteristics are deduced from observations of periodic or random orbits and their topological properties.
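As a concrete, standard illustration of the diagnostic viewpoint (my example, not taken from the talk): for three rods stirred according to the braid word sigma_1 sigma_2^{-1}, the reduced Burau representation evaluated at t = -1 gives a lower bound on the topological entropy that is known to be sharp for this braid, namely twice the log of the golden ratio.

```python
import numpy as np

# Reduced Burau matrices for the 3-strand braid group, evaluated at t = -1.
# sigma1 and sigma2 are the standard generators; the matrices record how the
# braid acts on curves wrapped around the stirring rods.
S1 = np.array([[1.0, 1.0],
               [0.0, 1.0]])   # sigma_1 at t = -1
S2 = np.array([[1.0, 0.0],
               [-1.0, 1.0]])  # sigma_2 at t = -1

# The figure-eight stirring protocol sigma_1 * sigma_2^{-1}.
M = S1 @ np.linalg.inv(S2)

# Material lines grow like (spectral radius)^n; entropy is its logarithm.
dilatation = max(abs(np.linalg.eigvals(M)))
entropy = np.log(dilatation)

golden = (1 + np.sqrt(5)) / 2
print(entropy, 2 * np.log(golden))  # both ~0.9624
```

The spectral radius here is (3 + sqrt(5))/2, the golden ratio squared, so any stirring device realizing this braid stretches material lines at least that fast per period.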
Self-assembly, a process in which a disordered system of preexisting components forms an organized structure or pattern, is both ubiquitous in nature and important for the synthesis of many designer materials. In this talk, we will address three variational models for self-assembly from the point of view of mathematical analysis and computation.
The first is a nonlocal, Coulombic-type perturbation of the well-known Ginzburg-Landau/Cahn-Hilliard free energy. The functional has a rich and complex energy landscape with many metastable states. I will present a simple method for assessing whether or not a particular computed metastable state is a global minimizer. The method is based upon finding a "suitable" global quadratic lower bound to the free energy.
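Such a nonlocal Coulombic perturbation is typically of Ohta-Kawasaki type; as a hedged illustration (the standard form from the literature, not necessarily the exact functional of the talk):

```latex
E(u) \;=\; \int_\Omega \frac{\epsilon}{2}\,|\nabla u|^2
  \;+\; \frac{1}{4\epsilon}\,(1-u^2)^2 \, dx
  \;+\; \gamma \int_\Omega\!\int_\Omega G(x,y)\,
        \bigl(u(x)-m\bigr)\bigl(u(y)-m\bigr)\, dx\, dy
```

where G is the Green's function of the Laplacian on the domain, m is the fixed average of u, and gamma sets the strength of the Coulombic term.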
The second model is a purely geometric and finite-dimensional paradigm for self-assembly which generalizes the notion of centroidal Voronoi tessellations from points to rigid bodies. Using a level set formulation, we a priori fix the geometry for the structures and consider self-assembly entirely dictated by distance functions. I will introduce a novel fast algorithm for simulations in two and three space dimensions.
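For point generators, a centroidal Voronoi tessellation is classically computed with Lloyd's algorithm; the following Monte Carlo sketch (my minimal version in the unit square, not the rigid-body/level-set algorithm of the talk) alternates nearest-generator assignment with centroid updates:

```python
import numpy as np

def lloyd_cvt(generators, n_samples=20000, n_iters=50, seed=0):
    """Approximate a centroidal Voronoi tessellation of the unit square.

    Each iteration assigns random sample points to their nearest generator
    and then moves each generator to the centroid of its Voronoi cell.
    """
    rng = np.random.default_rng(seed)
    g = np.array(generators, dtype=float)
    for _ in range(n_iters):
        pts = rng.random((n_samples, 2))
        # Index of the nearest generator for every sample point.
        d = np.linalg.norm(pts[:, None, :] - g[None, :, :], axis=2)
        nearest = d.argmin(axis=1)
        for i in range(len(g)):
            cell = pts[nearest == i]
            if len(cell):
                g[i] = cell.mean(axis=0)
    return g

centers = lloyd_cvt(np.random.default_rng(1).random((4, 2)))
print(centers)  # generators spread toward an even partition of the square
```

At a fixed point, each generator coincides with the centroid of its own cell, which is the defining property being generalized from points to rigid bodies in the talk.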
Since the realization of the first Bose-Einstein condensates in 1995, experimentalists have been able to routinely create these degenerate quantum gases in the laboratory, where they provide an unparalleled level of control over various system parameters. In particular, the strength of the interatomic interactions can be tuned as desired. In addition, using various optical and magnetic trapping techniques, quasi-one- or two-dimensional Bose-Einstein condensates can be created by effectively freezing out the motion of the atoms along different coordinate directions. The ability to create such effectively low-dimensional systems provides exciting opportunities to test theories associated with phase transitions in low dimensions by studying the relaxation of systems driven out of equilibrium. A problem of current interest is the relaxation of a two-dimensional Bose gas from a nonequilibrium initial state consisting of quantized vortices formed by stirring the system.
In this work, we will consider how quantized vortices can form clusters of like-signed vortices in a two-dimensional Bose-Einstein condensate. Such clustering can be understood in terms of negative-temperature states of a vortex gas. We show that, due to the long-range nature of the Coulomb-like interactions in point vortex flows, these negative-temperature states depend strongly on the shape of the geometry in which the clustering phenomenon is considered. We analyse the problem of clustering of quantized vortices in a number of different regions. We present a mean-field theory to describe the different regimes in which clustering of like-signed vortices can occur and compare our predictions with numerical simulations of a point vortex gas. We also extend our results to the Gross-Pitaevskii model of a Bose gas by performing numerical simulations for a range of configurations using parameters relevant to current experiments.
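The Coulomb-like interaction referred to here is the logarithmic point-vortex Hamiltonian; the following sketch (illustrative only, posed in the unbounded plane and so ignoring the boundary effects the abstract emphasises) shows that clustered like-signed vortices sit at high interaction energy, the hallmark of Onsager's negative-temperature states:

```python
import numpy as np

def vortex_energy(pos, gamma):
    """Interaction energy of a 2D point-vortex gas in the unbounded plane:
    H = -(1/2*pi) * sum_{i<j} Gamma_i Gamma_j ln|r_i - r_j|.
    Boundary images, which the abstract stresses are important, are omitted.
    """
    pos = np.asarray(pos, float)
    gamma = np.asarray(gamma, float)
    H = 0.0
    for i in range(len(gamma)):
        for j in range(i + 1, len(gamma)):
            r = np.linalg.norm(pos[i] - pos[j])
            H -= gamma[i] * gamma[j] * np.log(r) / (2 * np.pi)
    return H

rng = np.random.default_rng(0)
# Two tight clusters of like-signed vortices vs. a well-mixed arrangement.
cluster = np.concatenate([rng.normal([-1.0, 0.0], 0.05, (10, 2)),
                          rng.normal([+1.0, 0.0], 0.05, (10, 2))])
mixed = rng.uniform(-1, 1, (20, 2))
gamma = np.array([+1.0] * 10 + [-1.0] * 10)

print(vortex_energy(cluster, gamma) > vortex_energy(mixed, gamma))  # True:
# clustering like-signed vortices raises the energy, so such states occupy
# the high-energy (negative-temperature) end of the microcanonical spectrum.
```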
Man's curiosity and fascination with the wonderful architectures seen near the shoot apical meristems (SAMs) of plants goes back almost two thousand years. Renaissance scientists such as Kepler and da Vinci were intrigued, and Kepler was one of the first to note that spiral plant patterns had connections with Fibonacci sequences. It is remarkable that, despite continued interest over the centuries, only recently have quantitative explanations emerged that enjoy broad acceptance. In this talk, I will review the progress to date, and discuss both teleological explanations based upon the observations of Hofmeister and encoded in the works of Douady and Couder, and mechanistic explanations which seek to model the relevant biochemistry and mechanics at work near the SAM. The former approach argues that new phylla (flowers, seeds, bracts, etc.) are placed according to some optimization principle. The latter approach leads to instability-driven pattern-forming systems in which either the plant growth hormone auxin field or the local stress field has quasiperiodic structures at whose maxima new phylla are likely to be initiated. Although the latter model is richer than the former, in that one obtains field rather than configuration information and it addresses the connection between phyllotactic configurations and surface morphologies, one of the stunning and surprising outcomes of our work is that both approaches lead to entirely consistent results. It may very well be that nature employs pattern-forming systems to achieve optimal outcomes not just in plants but in many organisms.
The talk should be accessible to a broad audience.
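The Fibonacci connection can be illustrated with Vogel's classical spiral lattice, in which the n-th phyllo element is placed at radius proportional to sqrt(n) and at n times the golden angle; this toy construction is standard and is not one of the models discussed in the talk:

```python
import math

# Vogel's model of a sunflower head: the n-th phyllo element sits at
# radius c*sqrt(n) and azimuthal angle n * (golden angle).
GOLDEN_ANGLE = math.pi * (3 - math.sqrt(5))  # ~2.39996 rad ~ 137.5 degrees

def vogel_lattice(n_points, c=1.0):
    pts = []
    for n in range(1, n_points + 1):
        r, theta = c * math.sqrt(n), n * GOLDEN_ANGLE
        pts.append((r * math.cos(theta), r * math.sin(theta)))
    return pts

pts = vogel_lattice(500)
# The visible spiral families (parastichies) occur in consecutive Fibonacci
# numbers (e.g. 13 and 21) because the golden angle is "maximally irrational".
print(math.degrees(GOLDEN_ANGLE))  # ~137.5
```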
Numerical modelling has proved very effective in making predictions of real-world processes, for example in weather forecasting. One of the main sources of uncertainty is processes that occur on scales too small to be resolved by the solver of the underlying differential equations. Examples include clouds in the atmosphere or deep convection in the ocean. These sub-grid-scale processes are usually parameterised with a simple, so-called bulk formula. However, such formulae do not reflect the full richness of the sub-grid-scale behaviour. Often we have good models for what is happening at these small scales, but we cannot afford to run them within each grid cell. We propose to replace such models with emulators. Emulators are statistical approximations to a full numerical model and are fast to run. We illustrate the process with a model of ocean convection. We also discuss how such emulators of sub-grid-scale processes could be used to construct stochastic parameterisations.
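Emulators are commonly built as Gaussian-process regressions trained on a small number of expensive simulator runs; the following numpy sketch (a generic, assumed construction, not the authors' ocean-convection emulator) shows the idea with a squared-exponential kernel:

```python
import numpy as np

def rbf(a, b, ell=0.3):
    """Squared-exponential covariance between 1-D input arrays a and b."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

def gp_emulator(x_train, y_train, jitter=1e-6):
    """Fit a zero-mean GP to simulator runs; return the posterior-mean
    predictor, which is a cheap surrogate for the expensive model."""
    K = rbf(x_train, x_train) + jitter * np.eye(len(x_train))
    alpha = np.linalg.solve(K, y_train)
    return lambda x_new: rbf(x_new, x_train) @ alpha

# Stand-in for an expensive sub-grid model (e.g. a convection column):
expensive_model = lambda x: np.sin(2 * np.pi * x)
x_train = np.linspace(0.0, 1.0, 15)
emulate = gp_emulator(x_train, expensive_model(x_train))

print(emulate(np.array([0.37])), expensive_model(np.array([0.37])))
```

Once trained, the emulator is evaluated at negligible cost inside each grid cell; its predictive uncertainty (not computed here) is what would drive a stochastic parameterisation.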
Adaptive multi-level Monte Carlo methods for investigating biochemical reaction networks
Discrete-state, continuous-time Markov models are widely used in the modelling of biochemical reaction networks. Their complexity generally precludes analytic solution, and so we rely on Monte Carlo simulation to estimate system statistics of interest. Perhaps the most widely used method is the Gillespie algorithm. This algorithm is exact but computationally intensive. As such, approximate stochastic simulation algorithms such as the tau-leap algorithm are often used. Sample paths are generated by taking leaps of length tau through time and using an approximate method to generate reactions within each leap. However, tau must be kept relatively small to avoid significant estimator bias, and this significantly limits the potential computational advantages of the method.
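For reference, Gillespie's direct method for a simple birth-death network can be sketched as follows (the model and rates are illustrative, not taken from the talk):

```python
import random

def gillespie_birth_death(x0, birth, death, t_end, seed=0):
    """Exact stochastic simulation of the two-reaction network
    X -> X+1 (rate `birth`) and X -> X-1 (rate `death*X`),
    using Gillespie's direct method."""
    rng = random.Random(seed)
    t, x = 0.0, x0
    times, states = [t], [x]
    while t < t_end:
        a1, a2 = birth, death * x       # reaction propensities
        a0 = a1 + a2
        t += rng.expovariate(a0)        # exponential waiting time
        if rng.random() * a0 < a1:      # pick which reaction fires
            x += 1
        else:
            x -= 1
        times.append(t)
        states.append(x)
    return times, states

times, states = gillespie_birth_death(x0=10, birth=5.0, death=0.5, t_end=50.0)
# The stationary mean of this network is birth/death = 10.
```

Every reaction event is simulated individually, which is exactly why the method becomes expensive when propensities are large; tau-leaping trades this exactness for speed.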
The multi-level method of Anderson and Higham tackles this problem by cleverly generating a suite of sample paths of differing accuracy in order to estimate statistics. A base estimator is computed using many (cheap) paths of low accuracy. The bias inherent in this estimator is then reduced using a number of correction estimators. Each correction term is estimated using a collection of (increasingly expensive) paired sample paths, where one path of each pair is generated at a higher accuracy than the other. By sharing randomness between these paired sample paths, only a relatively small number of pairs is required to calculate each correction term.
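The shared-randomness coupling can be sketched for the birth-death model above: following the Anderson-Higham construction, each reaction's firing count is split into a common part with rate min(a_fine, a_coarse) plus independent remainders (this is my simplified sketch, with illustrative parameters):

```python
import numpy as np

def coupled_tau_leap(x0, birth, death, t_end, tau, rng):
    """One coupled pair of tau-leap paths for the birth-death model
    X -> X+1 (rate birth), X -> X-1 (rate death*X).
    The fine path uses step tau/2, the coarse path step tau; they share
    Poisson randomness via the splitting a = min(a_f, a_c) + remainder."""
    xf, xc = x0, x0
    stoich = np.array([+1, -1])
    for _ in range(int(round(t_end / tau))):
        # Coarse propensities are frozen over the whole coarse step.
        ac = np.array([birth, death * xc])
        for _ in range(2):  # two fine substeps per coarse step
            af = np.array([birth, death * xf])
            m = np.minimum(af, ac)
            common = rng.poisson(m * tau / 2)         # shared by both paths
            f_only = rng.poisson((af - m) * tau / 2)  # fine-path remainder
            c_only = rng.poisson((ac - m) * tau / 2)  # coarse-path remainder
            xf = max(0, xf + stoich @ (common + f_only))
            xc = max(0, xc + stoich @ (common + c_only))
    return xf, xc

rng = np.random.default_rng(0)
pairs = [coupled_tau_leap(100, 50.0, 0.5, 5.0, 0.1, rng) for _ in range(200)]
diff = np.array([f - c for f, c in pairs])
fine = np.array([f for f, _ in pairs])
# The coupling keeps paired paths close, so the correction term (the mean
# of diff) can be estimated with far fewer samples than the fine estimator.
print(diff.var(), fine.var())
```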
In the original multi-level method, paths are simulated using the tau-leap technique with a fixed value of tau. This approach can result in poor performance where the reaction activity of a system changes substantially over the timescale of interest. By introducing a novel, adaptive time-stepping approach we extend the applicability of the multi-level method to such cases. In our algorithm, tau is chosen according to the stochastic behaviour of each sample path. We present an implementation of our adaptive time-stepping multi-level method that, despite its simplicity, performs well across a wide range of sample problems.
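The adaptive rule of the talk is its own; for context, a widely used step-size selector is a simplified form of the Cao-Gillespie-Petzold bound (the g_i factors of the original are omitted here), which limits the expected relative change of each species over a leap:

```python
import numpy as np

def select_tau(x, propensities, stoich, eps=0.03):
    """Simplified Cao-Gillespie-Petzold tau selection:
    with mu_i = sum_j nu_ij a_j and sig2_i = sum_j nu_ij^2 a_j,
    tau = min_i max(eps*x_i/|mu_i|, eps^2*x_i^2/sig2_i)."""
    a = propensities(x)          # length-R vector of propensities
    mu = stoich @ a              # expected drift per species
    sig2 = (stoich**2) @ a       # variance rate per species
    with np.errstate(divide="ignore"):
        bound = np.maximum(eps * x / np.abs(mu), eps**2 * x**2 / sig2)
    return float(np.min(bound))

# Birth-death example: X -> X+1 (rate 5), X -> X-1 (rate 0.5*X).
stoich = np.array([[+1, -1]])    # species-by-reaction stoichiometry
prop = lambda x: np.array([5.0, 0.5 * x[0]])
tau = select_tau(np.array([100.0]), prop, stoich)
print(tau)
```

Because the propensities are re-evaluated along each sample path, a selector of this kind naturally shrinks tau during bursts of reaction activity and grows it in quiescent periods, which is the behaviour the adaptive multi-level method exploits.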