Paper No. 14-23

N Underhill and JQ Smith

Context-dependent score based Bayesian information criteria

Abstract: We argue that, in a number of applications, standard Bayes factor model comparison and selection may be inappropriate for decision making under specific, utility-based criteria. It has been suggested that the use of scoring rules in this context allows greater flexibility: scores can be customised to a client's utility and model selection can proceed on the basis of the highest scoring model. We argue here that the approach of comparing the cumulative scores of competing models is not ideal, because it tends to ignore a model's ability to 'catch up' through parameter learning. The alternative approach of selecting a model on its maximum posterior score, based on a plug-in or posterior expected value, is problematic in that it uses the data twice, once for estimation and once for evaluation. We therefore introduce a new Bayesian posterior score information criterion (BPSIC), which is a generalisation of the Bayesian predictive information criterion proposed by Ando (2007). This allows the analyst both to tailor an appropriate scoring function to the needs of the ultimate decision maker and to correct appropriately for the bias incurred by using the data on a posterior basis to revise parameter estimates. We show that this criterion provides a convenient method of initial model comparison when the number of models under consideration is large or when computational burdens are high. We illustrate the new methods with simulated examples and real data from the UK electricity imbalance market.

Keywords:
Scoring rules, Bayesian model selection, Information criteria, Utility-based model selection.
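
To make the distinction drawn in the abstract concrete, the following is a minimal sketch (not from the paper) contrasting a cumulative, one-step-ahead score with a posterior plug-in score for a toy conjugate Gaussian model under the ordinary log score. All variable names and the model itself are illustrative assumptions; the paper's BPSIC bias correction is not reproduced here.

```python
# Hedged sketch: contrasts two score-based comparisons discussed in the abstract,
# using the log score on a simple conjugate Gaussian model. Illustrative only.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = rng.normal(loc=1.0, scale=1.0, size=200)  # toy data stream

# Conjugate normal model: known observation variance, N(0, 10^2) prior on the mean.
prior_mean, prior_var, obs_var = 0.0, 100.0, 1.0

def posterior_params(y):
    """Posterior mean and variance of the unknown mean after observing y."""
    n = len(y)
    post_var = 1.0 / (1.0 / prior_var + n / obs_var)
    post_mean = post_var * (prior_mean / prior_var + y.sum() / obs_var)
    return post_mean, post_var

# 1) Cumulative score: score each observation under the one-step-ahead
#    predictive built from the data seen so far (the approach the abstract
#    criticises for ignoring a model's ability to 'catch up').
cumulative_score = 0.0
for t, y_t in enumerate(data):
    m, v = posterior_params(data[:t])
    # one-step-ahead predictive is N(m, v + obs_var)
    cumulative_score += stats.norm.logpdf(y_t, loc=m, scale=np.sqrt(v + obs_var))

# 2) Posterior plug-in score: fit on all the data, then score the same data
#    again -- the "using the data twice" issue the abstract points to.
m_all, _ = posterior_params(data)
plugin_score = stats.norm.logpdf(data, loc=m_all, scale=np.sqrt(obs_var)).sum()

print(f"cumulative log score:       {cumulative_score:.1f}")
print(f"posterior plug-in log score: {plugin_score:.1f}")
# A BPSIC-style criterion would start from the plug-in quantity and subtract a
# bias-correction term; that correction is the paper's contribution and is not
# shown here.
```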