Regular Seminars
Welcome to the CRiSM seminar series!
Seminars take place fortnightly during term time, 1-2pm on Wednesdays, in room MB0.07 or online on the CRiSM MS Teams channel.
We encourage all postgraduate students (MSc and PhD) to attend this series: it is a great opportunity to learn more about current research within the department and beyond.
CRiSM seminars 2022/23 are organised by Massimiliano Tamborrino and Andi Wang (from Term 2).
Term 3
- Apr 26 Rajen Shah (Cambridge). Title: Rank-transformed subsampling: Inference for multiple data splitting and exchangeable p-values. Abstract. Slides. Recorded Talk.
- May 3 Henry Reeve (Bristol). Title: Isotonic subgroup selection. Abstract. Slides. Recorded Talk.
- May 10 Ruth Baker (Oxford) [11am-12pm]. Title: Leveraging concepts from stochastic simulation and machine learning for efficient Bayesian inference. Abstract. Slides. Recorded Talk.
- May 17 Irini Moustaki (LSE). Title: Some new developments on pairwise likelihood estimation for latent variable models. Abstract. Slides. Recorded Talk.
- May 24 Patricia Reynaud-Bouret (Université Côte d'Azur). Title: Encoding/decoding abilities of the DMS and DLS networks during learning. Abstract. Slides.
- Jun 7 Heather Battey (Imperial College London). Title: Inducement of population-level sparsity. Abstract.
- Jun 21 Chang-Han Rhee (Northwestern University). Title: Heavy-Tailed Large Deviations and How to Eliminate Sharp Minima from SGD. Abstract. Recorded Talk.
Term 2
- Jan 11 Po-Ling Loh (Cambridge). Title: Robust empirical risk minimization via Newton's method. Abstract. Slides. Recorded Talk.
- Jan 25 Heba Sailem (King's College London). Title: Quantitative approaches for decoding molecular and microenvironmental factors underlying tissue architecture. Abstract. Slides. Recorded Talk.
- Feb 8 Matt Moores (University of Wollongong). Title: Bayesian inference for thermogravimetric analysis of iron oxidation. Abstract. Slides. Recorded Talk.
- Feb 22 Henry Reeve (Bristol) - POSTPONED to Term 3, May 3.
- Mar 8 Haeran Cho (Bristol). Title: Factor-adjusted network estimation and forecasting for high-dimensional time series. Abstract. Slides. Recorded Talk.
Term 1
- Oct 5 Mike West (Duke University). Title: Bayesian Predictive Decision Synthesis.
- Oct 19 Kamelia Daudel (Oxford). Title: Challenges and opportunities in scalable Alpha-divergence Variational Inference: application to Importance Weighted Auto-Encoders. Abstract. Recorded Talk.
- Nov 2 Robert Goudie (Cambridge). Title: Joining Bayesian submodels with Markov melding. Abstract. Slides. Recorded Talk.
- Nov 16 Stefan Horst Sommer (Copenhagen University). Title: Diffusion means in geometric statistics. Abstract. Slides. Recorded Talk.
- Nov 30 Yunxiao Chen (London School of Economics). Title: Compound Decision for Parallel Sequential Change Detection. Abstract. Slides. Recorded Talk.
- Mon 13 May, '24. Title: Designing and Learning Algorithmic Regularisers. MB0.07, MSB.
Abstract: A major challenge in statistical learning involves developing models that can effectively leverage the structure of a problem and generalise well. Achieving this goal critically depends on the precise selection and application of suitable regularisation methods. Although there have been significant advancements in understanding explicit regularisation techniques, such as Lasso and nuclear norm minimisation, which have profoundly impacted the field in recent years, the development of regularisation approaches, especially those that are implicit or algorithmic, remains a difficult task. In this talk, we will address this challenge by exploring mirror descent, a family of first-order methods that generalise gradient descent. Using tools from probability and optimisation, we will introduce a structured framework for designing mirror maps and Bregman divergences. This framework enables mirror descent to attain optimal statistical rates in some settings in linear and kernel regression. If time permits, we will also briefly showcase the application of mirror descent in reinforcement learning, specifically focusing on the use of neural networks to learn mirror maps for policy optimisation.
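For readers unfamiliar with mirror descent, the following is a minimal illustrative sketch, not drawn from the talk itself: the generic update maps the iterate to a dual space via the gradient of a mirror map, takes a gradient step there, and maps back. The mirror maps, toy objective, and step size below are assumptions chosen for illustration only.

```python
import numpy as np

def mirror_descent(grad_f, x0, step, n_iters, mirror, mirror_inv):
    # Generic mirror descent: move to the dual space via the mirror
    # map, take a gradient step there, and map back.
    x = x0
    for _ in range(n_iters):
        x = mirror_inv(mirror(x) - step * grad_f(x))
    return x

# With mirror(x) = x (potential 0.5*||x||^2) this reduces to plain
# gradient descent. The negative-entropy potential on the simplex
# gives exponentiated gradient: mirror(x) = log(x), and the inverse
# map exponentiates and renormalises.
def entropy_mirror(x):
    return np.log(x)

def entropy_mirror_inv(theta):
    w = np.exp(theta)
    return w / w.sum()

# Hypothetical toy problem: minimise 0.5 * x^T A x over the simplex.
A = np.diag([1.0, 2.0, 3.0])
grad_f = lambda x: A @ x

x = mirror_descent(grad_f, x0=np.ones(3) / 3, step=0.1, n_iters=500,
                   mirror=entropy_mirror, mirror_inv=entropy_mirror_inv)
print(x)  # approx (6/11, 3/11, 2/11), the exact constrained minimiser
```

Whether such a scheme attains the optimal statistical rates mentioned in the abstract depends on choosing the mirror map to match the geometry of the problem, which is the design question the talk addresses.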