
Regular Seminars

Welcome to CRiSM seminar series!

Seminars take place fortnightly during term time, 1-2pm on Wednesdays, in room MB0.07 or online on the CRiSM MS Teams channel.

We encourage all postgraduate students (MSc and PhD) to attend this series: it is a great opportunity to learn more about current research within the department and beyond.

CRiSM seminars 2022/23 are organised by Massimiliano Tamborrino and Andi Wang (from Term 2).

Term 3

Term 2

Term 1

Mon 13 May, '24
-
Designing and Learning Algorithmic Regularisers
MB0.07, MSB

A major challenge in statistical learning involves developing models that can effectively leverage the structure of a problem and generalise well. Achieving this goal critically depends on the precise selection and application of suitable regularisation methods. Although there have been significant advancements in understanding explicit regularisation techniques, such as Lasso and nuclear norm minimisation, which have profoundly impacted the field in recent years, the development of regularisation approaches—especially those that are implicit or algorithmic—remains a difficult task. In this talk, we will address this challenge by exploring mirror descent, a family of first-order methods that generalise gradient descent. Using tools from probability and optimisation, we will introduce a structured framework for designing mirror maps and Bregman divergences. This framework enables mirror descent to attain optimal statistical rates in some settings in linear and kernel regression. If time permits, we will also briefly showcase the application of mirror descent in reinforcement learning, specifically focusing on the use of neural networks to learn mirror maps for policy optimisation.
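The family of methods the abstract centres on, mirror descent, can be made concrete with a small sketch. The example below is illustrative and not taken from the talk: the negative-entropy mirror map, the quadratic objective, and all variable names are assumptions chosen for simplicity. It shows the sense in which the mirror map acts as an algorithmic regulariser: with the negative-entropy map the update becomes the exponentiated-gradient rule, whose iterates automatically remain on the probability simplex, while the Euclidean (identity) map recovers plain gradient descent.

```python
import numpy as np

def mirror_descent(grad, x0, steps, lr, mirror="entropy"):
    """Mirror descent: take the gradient step in the dual coordinates
    induced by a mirror map, then map back to the primal space.

    mirror="entropy"   -> negative-entropy mirror map; the update is the
                          exponentiated-gradient rule and the normalisation
                          is the Bregman (KL) projection onto the simplex.
    mirror="euclidean" -> identity mirror map; reduces to gradient descent.
    """
    x = x0.copy()
    for _ in range(steps):
        g = grad(x)
        if mirror == "entropy":
            x = x * np.exp(-lr * g)  # dual step under the entropy map
            x /= x.sum()             # Bregman projection back onto the simplex
        else:
            x = x - lr * g           # ordinary gradient descent step
    return x

# Toy problem: minimise f(x) = 0.5 * ||x - target||^2 over the simplex,
# where the minimiser happens to lie on the simplex.
target = np.array([0.7, 0.2, 0.1])
grad = lambda x: x - target
x0 = np.ones(3) / 3  # uniform starting point

x = mirror_descent(grad, x0, steps=2000, lr=0.5)
```

Here the constraint (the simplex) is never enforced explicitly; it is built into the geometry through the mirror map's Bregman divergence, which is the design freedom the talk's framework exploits.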

Mon 20 May, '24
-
Designing and Learning Algorithmic Regularisers
MB0.07, MSB

Mon 27 May, '24
-
Designing and Learning Algorithmic Regularisers
MB0.07, MSB

Mon 3 Jun, '24
-
Designing and Learning Algorithmic Regularisers
MB0.07, MSB

Mon 10 Jun, '24
-
Designing and Learning Algorithmic Regularisers
MB0.07, MSB

Mon 17 Jun, '24
-
Designing and Learning Algorithmic Regularisers
MB0.07, MSB

Mon 24 Jun, '24
-
Designing and Learning Algorithmic Regularisers
MB0.07, MSB
