
Michael Pearce

About Me (see below for my PhD work)


The title of my PhD is "Simultaneous Multi-Task Bayesian Optimisation". I am supervised by Prof. Jürgen Branke in Operations Research at Warwick Business School and Prof. Kate Smith-Miles at Monash University in Melbourne. I studied Maths and Physics at Bristol University, lived in Japan teaching English for a few years, and came to Warwick University for an MSc + PhD in Complexity Science and Operations Research.

Although I find myself getting excited by anything machine learning, my interests are in Bayesian statistics and deep neural networks (CycleGAN is amazing!), and my PhD work is focused on Gaussian process regression and its use in active learning. In particular, I have worked a lot with the Efficient Global Optimisation (EGO) and Knowledge Gradient algorithms. In my MSc projects I used deep convolutional neural networks for image classification, and in my second project my supervisor and I built an agent-based housing market to investigate the effects of interest rates and deposit sizes on stability in the mortgage market. In my PhD I have used genetic algorithms to optimise sample allocations, support vector machines for algorithm selection, Gaussian processes for surrogate modelling, and Markov chain Monte Carlo for sampling hyperparameters. Previously, at a Study Group at the Alan Turing Institute, my teammates and I used Q-learning to optimise traffic lights to reduce emissions, which was a lot of fun. At a more recent event we used word embeddings, neural networks, and t-SNE to visualise and classify medical papers into review groups.

Outside of academia, I am mad about bikes and snowboarding: I love to ride my road bikes and my trials bike, and I visit the snowy Alps at least once a year. I like to think that I can speak Japanese (passed JLPT N2), and I am slowly learning Spanish.

Email: m.a.l.pearce[at.mark]



Download my academic CV here: MichaelPearceCV_Academic.pdf

Download my non-academic CV here: MichaelPearceCV_NonAcademic.pdf

Google Scholar: Michael Pearce

My PhD Work

Imagine a user who is faced with an optimisation problem where the objective function is hard to model and expensive to evaluate, for example tuning the hyperparameters of a machine learning algorithm. Active learning is the branch of machine learning in which the training data is collected adaptively, in an optimal fashion for a particular goal. Within this field, Bayesian optimisation uses Gaussian process regression to cheaply interpolate and statistically predict the expensive objective function from only a few function evaluations. The cheap model then guides the search for the optimum of the expensive function, ensuring that every evaluation of the expensive function yields the maximum information gain, so that fewer evaluations are required to find a satisfactory optimum. Hopefully, after a while, the user will have collected the best possible dataset for this particular purpose! The start of my PhD has been applying these methods to the case where there are multiple objective functions, independent and/or correlated with one another, and the goal is to find the unique optimum of each function.
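As a toy sketch of this loop (everything here is an illustrative assumption: a squared-exponential kernel with a fixed length-scale, a made-up 1D objective, and the classic Expected Improvement criterion rather than my own methods), fit a Gaussian process to the points evaluated so far and pick the next point where the acquisition function is highest:

```python
import numpy as np
from math import erf, sqrt, pi

def rbf(A, B, ls=0.3):
    # squared-exponential kernel on 1D inputs
    d = A[:, None] - B[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(X, y, Xs, noise=1e-6):
    # standard GP conditional: posterior mean and variance at test points Xs
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    Kinv = np.linalg.inv(K)
    mu = Ks.T @ Kinv @ y
    var = 1.0 - np.einsum('ij,ji->i', Ks.T, Kinv @ Ks)
    return mu, np.maximum(var, 1e-12)

def expected_improvement(mu, var, best):
    # closed-form EI for maximisation
    s = np.sqrt(var)
    z = (mu - best) / s
    Phi = 0.5 * (1 + np.vectorize(erf)(z / sqrt(2)))   # normal CDF
    phi = np.exp(-0.5 * z ** 2) / sqrt(2 * pi)          # normal PDF
    return (mu - best) * Phi + s * phi

# toy "expensive" objective, cheap here purely for illustration
f = lambda x: np.sin(3 * x) + 0.5 * x

X = np.array([0.1, 0.5, 0.9])    # points evaluated so far
y = f(X)
Xs = np.linspace(0, 1, 201)      # candidate grid
mu, var = gp_posterior(X, y, Xs)
ei = expected_improvement(mu, var, y.max())
x_next = Xs[np.argmax(ei)]       # next point to evaluate expensively
```

In a real loop one would evaluate f(x_next), append it to the data, refit the GP, and repeat until the budget runs out.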


In a simple discrete case, this is like finding the "ceiling" of multiple overlapping expensive functions: for each point in a 2D plane we aim to find the highest of the available functions, ensuring that each new function evaluation has maximum information gain about the ceiling overall. We derive the Regional Expected Value of Improvement (REVI); an implementation of this procedure is shown in the right-hand animation below, where the coloured surfaces show the Gaussian process estimate of each function and the black point shows where the next function evaluation is selected.
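A minimal sketch of the "ceiling" idea (this is not REVI itself; the two toy functions, the kernel, and the simple variance-based sampling rule below are illustrative assumptions, with independent GPs fitted on a 1D domain for brevity):

```python
import numpy as np

def rbf(A, B, ls=0.3):
    # squared-exponential kernel on 1D inputs
    d = A[:, None] - B[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_post(X, y, Xs, noise=1e-6):
    # GP posterior mean and variance at test points Xs
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    mu = Ks.T @ np.linalg.solve(K, y)
    var = 1.0 - np.einsum('ij,ji->i', Ks.T, np.linalg.solve(K, Ks))
    return mu, np.maximum(var, 1e-12)

# two toy "expensive" functions, evaluated at the same design points
funcs = [lambda x: np.sin(3 * x), lambda x: 0.8 * np.cos(2 * x)]
X = np.array([0.1, 0.4, 0.7, 1.0])
Xs = np.linspace(0, 1, 101)

mus, var_list = [], []
for f in funcs:
    mu, v = gp_post(X, f(X), Xs)
    mus.append(mu)
    var_list.append(v)
mus, variances = np.array(mus), np.array(var_list)

# estimated ceiling: pointwise max of the posterior means
ceiling = mus.max(axis=0)

# crude proxy for "where to sample next": the point where the currently
# highest function is most uncertain (REVI uses a proper value-of-information
# calculation instead)
top = mus.argmax(axis=0)
uncert = variances[top, np.arange(len(Xs))]
i = np.argmax(uncert)
next_func, next_x = top[i], Xs[i]
```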

The most general case of this framework is learning the optimal points of a function across a range of parameters. For example, in reinforcement learning one may want to know the best continuous action that optimises the instantaneous reward for each unique (possibly continuous) state.
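A toy illustration of this state-dependent case (again with illustrative assumptions: a made-up reward surface, a fixed-length-scale kernel, and a plain grid search over actions, none of which is my actual method): fit a single GP over (state, action) pairs, then read off the best action for each state from the posterior mean.

```python
import numpy as np

def rbf2(A, B, ls=0.3):
    # squared-exponential kernel on (state, action) pairs
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

# toy instantaneous reward over state s and action a
reward = lambda s, a: np.sin(3 * a + s) - (a - s) ** 2

rng = np.random.default_rng(0)
SA = rng.uniform(0, 1, size=(30, 2))     # sampled (state, action) pairs
y = reward(SA[:, 0], SA[:, 1])

K = rbf2(SA, SA) + 1e-6 * np.eye(len(SA))
alpha = np.linalg.solve(K, y)            # precompute K^-1 y

states = np.linspace(0, 1, 21)
actions = np.linspace(0, 1, 51)
best_action = []
for s in states:
    grid = np.column_stack([np.full_like(actions, s), actions])
    mu = rbf2(grid, SA) @ alpha          # posterior mean reward at this state
    best_action.append(actions[np.argmax(mu)])
best_action = np.array(best_action)      # estimated best action per state
```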

Medical School Internship

I spent six months in 2016 working on the Inspire Project for clinical trial design in small populations (rare diseases) with Dr. Siew Wan Hee, Dr. Jason Madan, and Prof. Nigel Stallard. We developed hybrid Bayesian-frequentist models: trial designers have prior knowledge about a new treatment from previous trials and can therefore use Bayesian statistics, but a regulator requires a p-value from a frequentist hypothesis test with a predetermined significance level, usually 5%. We built models that fulfil both these criteria.
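A small sketch of the hybrid idea in a conjugate normal model (all numbers here are made up for illustration and are not from the Inspire Project): the same trial data must pass a frequentist one-sided test at the 5% level and give a high Bayesian posterior probability that the treatment effect is positive.

```python
from math import erf, sqrt

Phi = lambda z: 0.5 * (1 + erf(z / sqrt(2)))   # standard normal CDF

# illustrative inputs
mu0, tau = 0.2, 0.3        # prior on treatment effect: N(mu0, tau^2)
sigma, n = 1.0, 40         # outcome standard deviation and trial size
xbar = 0.35                # observed mean treatment effect

# frequentist: one-sided z-test of H0: effect <= 0 at the 5% level
z = xbar / (sigma / sqrt(n))
p_value = 1 - Phi(z)

# Bayesian: conjugate normal update, then posterior Pr(effect > 0)
prec = 1 / tau**2 + n / sigma**2
post_mean = (mu0 / tau**2 + n * xbar / sigma**2) / prec
post_sd = sqrt(1 / prec)
prob_positive = Phi(post_mean / post_sd)

# the hybrid design asks for both criteria to hold simultaneously
significant = (p_value < 0.05) and (prob_positive > 0.95)
```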


Published Papers

  1. Bayesian Simulation Optimisation with Input Uncertainty, Michael Pearce, Juergen Branke. Winter Simulation Conference 2017
  2. Efficient Expected Improvement Estimation for Continuous Multiple Ranking and Selection, Michael Pearce, Juergen Branke. Winter Simulation Conference 2017

Submitted Papers

  1. Optimal Active Learning for Multiple Ranking and Selection with Gaussian Process Priors, Michael Pearce, Juergen Branke. Submitted to INFORMS Journal on Computing
  2. Continuous Multi-Task Bayesian Optimisation with Correlation, Michael Pearce, Juergen Branke. Submitted to European Journal of Operational Research
  3. Value of information methods to design a clinical trial in a small population to optimise a health economic utility function, Pearce. Submitted to Clinical Trials
  4. Approaches to sample size calculation for clinical trials in rare diseases, Miller et al. (5th author). Submitted to Pharmaceutical Statistics

Conferences, Workshops and Activities

  1. The Genetic and Evolutionary Computation Conference 2017, programme committee member.

  2. Introduction to Machine Learning Summer School, Warwick Mathematics Institute: I helped organise a tutorial series on neural networks.
  3. Warwick Summer School on Complexity Science 2015, gave a talk on Gaussian Process Regression.
  4. Data Study Group with Industry, Alan Turing Institute, May 2017: Siemens, optimising traffic lights to minimise pollution.
  5. Data Study Group with Industry, Alan Turing Institute, Sept 2017: Cochrane Reviews, automating the allocation of research papers to reviews.
  6. Complexity Summer Retreat 2017, I won the informal presentation competition with slides on "Deep Linear Regression and Wide Learning"
  7. I regularly attend the Statistics and Machine Learning reading group run by Jim Skinner and the Deep Learning reading group run by Ayman Boustati.
  8. International Society for Clinical Biostatistics 2016, attendee