|When||15 November 2017|
|09:30 - 10:30||
John Sylvester - University of Cambridge
Random walks, random graphs and applications to computation
|10:30 - 11:00||Coffee break|
|11:00 - 12:00||
Samuel Johnson - University of Birmingham
The architecture of complex systems: Some lessons from ecology
Rainforests, coral reefs and other very large ecosystems seem to be the most stable in nature, but this has long been regarded as mathematically paradoxical. More generally, the relationship between structure and dynamics in complex systems is the subject of much debate. I will discuss how 'trophic coherence', a recently identified property of food webs and other biological networks, is key to understanding many dynamical and structural features of complex systems. In particular, it allows networks to become more stable with increasing size and complexity, determines whether a given system will be in a regime of high or of negligible feedback, and influences spreading processes such as epidemics or cascades of neural activity. See also: https://en.wikipedia.org/wiki/Trophic_coherence
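The notion of trophic coherence admits a short concrete sketch. In the convention used in this literature, edges point from prey to predator, basal species (those with no prey) sit at trophic level 1, and every other species sits one level above the mean level of its prey; the incoherence parameter q is then the standard deviation of the level differences along edges (their mean is exactly 1 by construction). The tiny food web below is a made-up illustration, not an example from the talk:

```python
# Sketch: trophic levels and the incoherence parameter q for a
# hypothetical four-species food web. Edges point prey -> predator.
from math import sqrt

edges = [("plant", "herbivore"),
         ("plant", "omnivore"),
         ("herbivore", "omnivore"),
         ("herbivore", "carnivore")]

nodes = {n for e in edges for n in e}
prey_of = {n: [u for (u, v) in edges if v == n] for n in nodes}

# Trophic level: basal species (no prey) stay at level 1; every other
# species is 1 + the mean level of its prey. Fixed-point iteration is
# enough here since the web is acyclic.
level = {n: 1.0 for n in nodes}
for _ in range(100):
    for n in nodes:
        if prey_of[n]:
            level[n] = 1.0 + sum(level[p] for p in prey_of[n]) / len(prey_of[n])

# Incoherence parameter q: standard deviation of the level differences
# along edges; their mean is 1 by construction, so q = 0 means a
# perfectly coherent (layered) web.
diffs = [level[v] - level[u] for (u, v) in edges]
q = sqrt(sum(d * d for d in diffs) / len(diffs) - 1.0)
print(level, round(q, 3))
```

In this toy web the omnivore eats at two different levels, which is exactly what makes q nonzero; a web in which every predator feeds one level below itself would have q = 0.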
|12:00 - 13:00||
Anders Hansen - University of Cambridge
On computational barriers in data science and the paradoxes of deep learning
The use of regularisation techniques such as l1 and Total Variation in Basis Pursuit and Lasso, as well as linear and semidefinite programming and neural networks (deep learning), has seen great success in data science. Yet we will discuss the following paradox: it is impossible to design algorithms that find minimisers of these problems accurately when given inaccurate input data, even when the inaccuracies can be made arbitrarily small. The paradox implies that any algorithm designed to solve these problems will fail in the following way: for fixed dimensions and any small accuracy parameter epsilon, one can choose an arbitrarily large time T and find an input such that the algorithm runs for longer than T and still does not reach epsilon accuracy. Moreover, it is impossible to determine when the algorithm should halt to achieve an epsilon-accurate solution. The largest epsilon for which this failure happens is called the Breakdown-epsilon. Typically, the Breakdown-epsilon is non-zero even when the input is bounded by one, is well-conditioned, and the objective function can be computed with arbitrary accuracy.
Despite the paradox, we explain why many modern algorithms empirically perform very well in real-world scenarios. In particular, when restricting to subclasses of problems, the Breakdown-epsilon may shrink. Moreover, one can typically find algorithms that run in time polynomial in K and n, where K is the number of correct digits in the computed solution and n is the size of the input data. However, the Breakdown-epsilon is the breaking point: for accuracies beyond the Breakdown-epsilon, any algorithm, even a randomised one, becomes arbitrarily slow and cannot halt and guarantee K correct digits in the output.
The above result leads to the paradoxes of deep learning: (1) one cannot guarantee the existence of algorithms that accurately train the neural network, even when there is a unique minimum and no spurious local minima. Moreover, (2) one can have a 100% success rate on arbitrarily many test cases, yet uncountably many misclassifications on elements that are arbitrarily close to the training set.
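The sensitivity of computed minimisers to the input can be seen even in the simplest possible setting. The sketch below is my own toy illustration, not the talk's construction: for the Lasso with an identity design matrix, the minimiser is the soft-thresholding of the data vector, so an arbitrarily small perturbation of the input near the threshold changes which coordinates of the computed minimiser are nonzero.

```python
# Toy illustration (not from the talk): for the Lasso
#   min_x 0.5*||x - b||^2 + lam*||x||_1
# (identity design matrix), the minimiser is soft-thresholding of b.
# Perturbing b by an arbitrarily small eps near the threshold lam
# flips the support of the computed minimiser.
def soft_threshold(v, lam):
    return [max(abs(t) - lam, 0.0) * (1.0 if t > 0 else -1.0) for t in v]

lam = 1.0
eps = 1e-9
b_plus = [lam + eps, 2.0]   # first coordinate just above the threshold
b_minus = [lam - eps, 2.0]  # first coordinate just below the threshold

x_plus = soft_threshold(b_plus, lam)
x_minus = soft_threshold(b_minus, lam)
print(x_plus, x_minus)  # supports differ although the inputs differ by 2e-9
```

This by itself is only a discontinuity of the support, not the full paradox, but it hints at why inaccuracies in the input, however small, can defeat any halting criterion based on the computed iterates.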
|13:00 - 14:00||
Free lunch in the common room, provided by the Warwick SIAM-IMA chapter
|14:00 - 15:00||
Jason Laurie - Aston University
Kelvin-wave turbulence theory for the small scale energy transfer in quantum turbulence
|15:00 - 16:00||
Tobias Grafke - University of Warwick
Extreme Events and Metastability in Fluids and Waves
|16:00 - 16:30||Coffee break|
|16:30 - 17:30||
Carl Whitfield - University of Manchester
Quantifying uncertainty and structural heterogeneity in models of human ventilation
I will present some recent work on efficient methods for approximating flows and gas transport in a heterogeneous lung model. We use these methods to measure the sensitivity of indices from the multi-breath washout (MBW) test to perturbations in the model geometry and mechanical properties. The MBW indices have been shown to be effective at detecting various lung conditions associated with increased ventilation heterogeneity. I will compare our preliminary findings to previous studies in the literature and discuss the advantages and limitations of the approach used here.
|17:30 - ?||
Wine and cheese in the common room