Abstracts

Nat Wilcox (Chapman)

Some Thoughts on Probabilistic Choice, Experimental Designs and Experimental Purposes

In general we have two goals for algebraic preference models: (A) that they survive skeptical within-sample hypothesis tests; and (B) that they predict well to new or hold-out data. Most researchers now agree that models of the probabilistic part of choice are needed to best accomplish both goals. However, another matter may be less well appreciated: "good experimental design" depends both on our experimental purpose (that is, whether we are focusing on goal A or B) and on the model of probabilistic choice we specify. This talk will illustrate these two points using several examples. The tentative conclusion is that specific experimental designs are (and should be) driven both by specific purposes and by specific assumptions about randomness. Because of this, attempts to generalize conclusions across differently generated experimental data sets call for special caution.


Michael Regenwetter (Illinois)

Quantitative Testing of Decision Theories

I will present a general modeling framework for the probabilistic specification of algebraic theories for binary choice. I will provide examples of frequentist as well as Bayesian tests, e.g. of Cumulative Prospect Theory, using individual-participant laboratory choice data.
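
For readers unfamiliar with this framework, one common specification within it is the mixture (random preference) model; the sketch below is illustrative and not necessarily the exact specification used in the talk, and the notation Θ for the set of preference states permitted by the algebraic theory is introduced here only for the example:

\[
P_{ab} \;=\; \sum_{\succ \,\in\, \Theta:\; a \succ b} \Pr(\succ),
\]

so the probability of choosing a over b is the total probability of the permitted preference states that rank a above b. A quantitative test then asks whether the observed binary choice proportions are consistent with some probability distribution over Θ.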


Jerome Busemeyer (Indiana)

Achieving More Coherent Representations of Latent Utility by Using More Sophisticated Stochastic Models of Manifest Behavior

Decision theorists are becoming increasingly frustrated by the ephemeral nature of utility. The theorist searches for some coherence in the utility function of a decision maker across tasks and contexts, but the behavioral data force the theorist to think otherwise. In particular, choices among gambles can reverse with irrelevant changes in the descriptions of events that result in exactly the same distribution of final outcomes; preferences measured by choices between gambles can reverse when they are instead measured by certainty equivalents; choices among consumer products can reverse when the context of the choice set changes; and choices between actions can reverse when these choices are made under different time constraints. The purpose of this paper is to show that by building more sophisticated stochastic models of behavior, and treating utility as a latent parameter of this stochastic process, it is possible to recover the coherence that decision theorists seek. In particular, I will focus on a stochastic model of choice and certainty equivalents called decision field theory. The paper concludes with the point that a trade-off must be accepted: more complex models of behavior are required to recover simpler representations of utility.


Jorg Rieskamp (Basel)

Testing Sequential Sampling Models against Standard Random Utility Models

Economists have addressed the probabilistic character of choice behavior by developing random utility models. Psychologists, by contrast, have focused on cognitive models that describe the underlying cognitive process generating the variability of people's choices. Sequential sampling models represent one prominent cognitive approach to explaining decision making. According to these models, the decision maker accumulates evidence for the available choice options until a decision threshold is crossed. In the present project we test a prominent sequential sampling model (i.e., decision field theory) against standard random utility models (i.e., logit and probit models) to predict consumer behavior. The results show that for randomly selected choice situations sequential sampling models predict people's decisions better than random utility models, but the improved fit is not large enough to justify the greater complexity of the sequential sampling model. However, when focusing on choice situations in which the choice options influence each other's evaluations, the sequential sampling model substantially outperforms the random utility models in predicting people's preferences.
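
To make the contrast concrete, the sketch below computes binary choice probabilities from a logit random utility model and from a simple sequential sampling process in which evidence drifts at the rate of the utility difference until a decision threshold is crossed. It is a minimal illustration, not the authors' implementation, and the function names and parameter values are hypothetical.

    import numpy as np

    # Minimal sketch: logit random utility vs. a simple sequential sampling
    # (random walk to threshold) account of a binary choice.
    rng = np.random.default_rng(0)

    def logit_prob(u_a, u_b, lam=1.0):
        # Probability of choosing A under a logit random utility model.
        return 1.0 / (1.0 + np.exp(-lam * (u_a - u_b)))

    def sampling_prob(u_a, u_b, threshold=2.0, noise=1.0, dt=0.01, n_sim=2000):
        # Probability of choosing A when a preference state accumulates with
        # drift equal to the utility difference until it hits +/- threshold.
        drift, chose_a = u_a - u_b, 0
        for _ in range(n_sim):
            state = 0.0
            while abs(state) < threshold:
                state += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            chose_a += state >= threshold
        return chose_a / n_sim

    print(logit_prob(1.0, 0.5), sampling_prob(1.0, 0.5))

Unlike the logit model, the sampling process also produces a decision time for every simulated choice, which is one source of both its extra complexity and its extra empirical content.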


Graham Loomes (Warwick) / Dani Navarro-Martinez (LSE)

Sequential Expected Utility Theory: Sequential Sampling in Economic Decision Making under Risk

We introduce the notion of sequential sampling (or repeated sampling) in a standard Expected Utility (EU) framework to capture the deliberation process involved in decision making. We show how a simple sequential-sampling EU model can be constructed, we illustrate its main implications, and we present experimental evidence testing some of its predictions. Our results show that the simple idea that individuals sample repeatedly from standard EU preferences can explain some of the most prominent deviations from EU theory. Moreover, the model provides predictions on additional measures related to deliberation (mainly response times and confidence), which standard economic models are silent about. Most of the model’s predictions are supported by our experimental evidence. Our sequential-sampling approach also has the potential to be extended to other economic decision models.


Pavlo Blavatskyy

How to Model Probabilistic Choice: A Microeconomic Perspective

Empirical research often requires a method for converting a deterministic microeconomic theory into an econometric model. Several such methods have been proposed, including the classical strong utility (Fechner) model, the strict utility (Luce) model and random preference/utility models, as well as the more recent probabilistic EUT of Fishburn (1978), the contextual utility of Wilcox (2008, 2010) and the lattice approach of Blavatskyy (2009, 2011). Yet none of these methods satisfies certain desirable microeconomic properties, such as rare violations of first-order stochastic dominance or weak stochastic transitivity, invariance to positive affine transformations of the utility function, and the existence of a well-defined measure of risk aversion. I shall present some personal thoughts on how to resolve this problem, giving birth to a new method that can be regarded as a descendant of the classical Fechner (strong utility) model and a sibling of the Blavatskyy (2009, 2011) lattice approach.
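
For context, the classical Fechner (strong utility) specification mentioned above converts a deterministic utility function U into choice probabilities via

\[
\Pr(A \text{ chosen over } B) \;=\; F\!\left(\frac{U(A) - U(B)}{\sigma}\right),
\]

where F is a cumulative distribution function (the standard normal gives probit, the logistic gives logit) and σ > 0 scales the noise. This formula is given only as background; specifications of this general kind are what the microeconomic properties listed above are evaluated against, and it is not the new method proposed in the talk.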


Michael Birnbaum (Fullerton)

True and Error Models of Response Variation in Choice and Judgment Studies

The true and error model assumes that each participant can have a different set of true preferences and that each choice problem or judgment can have a different rate of error. A slightly more general model allows each person to have a different level of noise that amplifies the errors. Another variant allows error rates to differ for people with different “true” preference patterns. A still more general variant of the TE model allows that an individual may change true preferences during a long experiment from block to block, employing a mixture of different “true” preferences at different times over the course of the experiment. This model does not imply that responses to the same item by the same person on different trials will be independent (except in special cases), an assumption made by certain random preference models. The true and error models are not incompatible with Fechner/Thurstone/Luce-type models, which impose a transitive underlying continuum, but TE models do not exclude systematic intransitivity. They need not exclude the possibility that evidence accumulates over time, triggering a response when a decision limen is reached. Empirical evidence from tests of transitivity, stochastic dominance, restricted branch independence and Allais paradoxes will be presented to illustrate applications of the models. Because the true and error models do not impose transitivity, stochastic dominance, or the related properties that provide crucial tests among current decision-making models (while remaining testable themselves), they can be seen as relatively neutral frameworks within which these critical properties can be examined.
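
To illustrate the dependence point with the simplest single-problem case (a sketch for background, not the full models to be presented): suppose one choice problem is presented twice, the participant truly prefers A with probability p, and each presentation carries an independent error probability e < 1/2. Then

\[
\begin{aligned}
\Pr(AA) &= p\,(1-e)^2 + (1-p)\,e^2,\\
\Pr(AB) = \Pr(BA) &= e\,(1-e),\\
\Pr(BB) &= p\,e^2 + (1-p)\,(1-e)^2,
\end{aligned}
\]

and \(\Pr(AA) - \Pr(A)^2 = p\,(1-p)\,(1-2e)^2\), which is strictly positive unless p ∈ {0, 1} or e = 1/2. Responses to the same item on different trials are therefore dependent except in those special cases, as stated above.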


Stephane Hess (Leeds)

In Search of the Real Drivers of Heterogeneity in Choice Models

The study of heterogeneity across individual decision makers is one of the key areas of activity in the field of behavioural research. However, a disproportionately large share of the research effort focusses on heterogeneity in sensitivities to individual attributes, and in particular on how such heterogeneity can be accommodated in a random coefficients framework. While differences in marginal sensitivities clearly play a role in driving behaviour, this presentation makes the case that retrieved differences in such sensitivities may in fact be caused by a number of different factors. In particular, we look at the possible role of underlying attitudes, differences in decision rules across respondents and the role of information processing strategies. We present evidence from a number of studies suggesting that accounting for such richer behavioural patterns leads to important gains in our understanding of behaviour, and may also reduce the level of residual random heterogeneity. Conversely, this suggests that not adequately accounting for such additional factors may overstate the degree of unexplained heterogeneity in marginal sensitivities.
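
For reference, the random coefficients framework referred to here is typically written in the mixed logit form

\[
P_{ni} \;=\; \int \frac{\exp(\beta' x_{ni})}{\sum_{j} \exp(\beta' x_{nj})}\, f(\beta \mid \theta)\, d\beta,
\]

where \(x_{ni}\) are the attributes of alternative i faced by respondent n and \(f(\beta \mid \theta)\) is the mixing distribution describing heterogeneity in the marginal sensitivities β. This is a standard textbook formula rather than a specification taken from the studies discussed; the argument above is that part of what the mixing distribution absorbs may instead reflect attitudes, decision rules and information processing strategies.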


Miguel Costa-Gomes (Aberdeen)

Level-k Models and Decision Noise

The level-k model is one of the models currently used to fit data from experimental one-shot games. In this talk I will discuss the modeling of three of its features: i) the adjustment of players’ beliefs via iterated best responses; ii) the anchor of players’ beliefs, known as L0 behavior; and iii) decision noise. I will pay special attention to the role of decision noise in fitting the data, while highlighting its relationship to some of the modeling choices for the other two features.
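
A minimal sketch of one way these three features can fit together is given below. It is purely illustrative: the payoff matrix, the restriction to a symmetric 2x2 game, the use of exact best responses for levels below k, and the logit form of the decision noise are hypothetical choices, not those of the talk.

    import numpy as np

    # Illustrative level-k model for a symmetric 2x2 game.
    # U[i, j]: a player's payoff from own action i against the opponent's action j.
    U = np.array([[4.0, 0.0],
                  [1.0, 2.0]])

    def level_k_choice_probs(U, k, lam):
        # Choice probabilities of a level-k player (k >= 1) with logit decision noise.
        beliefs = np.ones(U.shape[1]) / U.shape[1]   # L0 anchor: uniform random play
        for _ in range(k - 1):                       # levels 1..k-1 best respond in turn
            best = np.argmax(U @ beliefs)
            beliefs = np.zeros(U.shape[1])
            beliefs[best] = 1.0
        eu = U @ beliefs                             # level-k expected payoffs
        probs = np.exp(lam * eu)                     # logit decision noise on the own choice
        return probs / probs.sum()

    print(level_k_choice_probs(U, k=2, lam=3.0))

Whether noise enters only at the player's own decision, as here, or also in the behaviour attributed to lower levels is an example of the relationship between decision noise and the other modeling choices discussed above.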


Ted Turocy (UEA)

Quantal Response Equilibrium: A Survey

McKelvey and Palfrey introduced the quantal response equilibrium (QRE) concept in a pair of articles published in 1995 (Games and Economic Behavior, for strategic games) and 1998 (Experimental Economics, for extensive games). Its initial appeal lay in its elegant formulation, which simultaneously built upon well-established approaches in decision and game theory, including random utility models, fixed-point reasoning, and the purification of mixed-strategy equilibria, while at the same time making predictions that matched anomalous behaviour across a range of experimental games, such as asymmetric matching pennies, centipede games and the traveler's dilemma. At the same time, open questions remain as to the empirical content of QRE and its domain of applicability, especially as QRE is silent (or offers multiple interpretations) on matters such as the origin of decision noise and the procedural mechanisms by which QRE-like behaviour might arise. This talk will survey the history to date of QRE with the viewpoint of the behavioural social scientist in mind, including what is (and is not) known about the theoretical and mathematical structure of QRE, possible interpretations of the mathematical model, its role in the computation and selection of Nash equilibria, and its successes and failures in organising experimental data. It will also suggest open questions and directions for current and future work.
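
As a concrete anchor for readers, the sketch below approximates a logit QRE of a small game by damped fixed-point iteration on the logit response conditions. It is an illustration rather than anything taken from the talk; the game matrices, the function name and the parameter values are all hypothetical.

    import numpy as np

    # Illustrative logit quantal response equilibrium (QRE) for a 2x2 game,
    # approximated by damped fixed-point iteration on the logit response map.
    # A[i, j]: row player's payoff when row plays i and column plays j;
    # B[i, j]: column player's payoff in the same cell.
    A = np.array([[9.0, 0.0],
                  [0.0, 1.0]])        # an asymmetric matching pennies game
    B = np.array([[0.0, 1.0],
                  [1.0, 0.0]])

    def logit_qre(A, B, lam, iters=10000, damp=0.25):
        p = np.full(A.shape[0], 1.0 / A.shape[0])   # row player's mixed strategy
        q = np.full(A.shape[1], 1.0 / A.shape[1])   # column player's mixed strategy
        for _ in range(iters):
            u_row = A @ q                           # expected payoff of each row action
            u_col = B.T @ p                         # expected payoff of each column action
            new_p = np.exp(lam * u_row); new_p /= new_p.sum()
            new_q = np.exp(lam * u_col); new_q /= new_q.sum()
            p = damp * new_p + (1.0 - damp) * p     # damping keeps the iteration stable
            q = damp * new_q + (1.0 - damp) * q
        return p, q

    # lam = 0 gives uniform play; larger lam moves the QRE toward a Nash
    # equilibrium (very large lam calls for heavier damping or path-following).
    for lam in (0.5, 1.0, 2.0):
        p, q = logit_qre(A, B, lam)
        print(lam, p.round(3), q.round(3))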


Andrea Isoni (Warwick) / Graham Loomes (Warwick)

Preference and Belief Imprecision in Games

Many experimental studies have found that behaviour in simple one-shot games is inconsistent with the assumption that strategy choices are best responses to equilibrium beliefs. These findings have been explained either as best response to non-equilibrium beliefs – as in level-k and Cognitive Hierarchy models – or as equilibria that reflect noisy preferences – as in Quantal Response Equilibria. We investigate to what extent failure to best respond to stated beliefs is the result of preference and/or belief imprecision. We elicit belief ranges and confidence in strategy choices in four classes of one-shot 2x2 two-person games. Our measures of imprecision show a substantial degree of sensitivity to parameter changes, both within and between game structures. Best response rates are higher when players are more confident about their strategy choices, and for games in which belief ranges are relatively narrow.


Gordon Brown (Warwick)

Noise, Context, and Individual Differences in Risk Attitude

Economists and psychologists typically take different approaches to individual differences in attitudes towards risk. One idea is that stable individual differences in risk attitude exist, but that the expression and/or measurement of these individual differences is subject to noise. Another idea, associated with recent approaches within psychology, is that choices on tasks designed to measure risk attitude are largely driven by experienced or retrieved comparison context. We report an experiment in which people’s risk attitudes are measured on repeated occasions using a Holt-Laury procedure, with variation in the context of the choice options. Such a procedure allows us to apportion variance to (a) context effects, (b) stable individual differences, and (c) noise.
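
One simple way to formalise the intended apportionment is a variance components model; the sketch below is illustrative of the kind of decomposition such a design permits, not necessarily the specification used in the study, and the notation is introduced here only for the example:

\[
y_{pct} \;=\; \mu + \alpha_p + \beta_c + \varepsilon_{pct},
\qquad
\operatorname{Var}(y) \;=\; \sigma_{\alpha}^2 + \sigma_{\beta}^2 + \sigma_{\varepsilon}^2,
\]

where \(y_{pct}\) is the measured risk attitude of person p under choice context c on occasion t, \(\alpha_p\) captures stable individual differences, \(\beta_c\) captures context effects, \(\varepsilon_{pct}\) is noise, and the variance decomposition holds when the three components are mutually uncorrelated.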


Stefan Traub (Bremen)

Attention and Revealed Preference in a Portfolio Choice Experiment

In a laboratory experiment, each of 41 student subjects faces a series of 16 successive grouped portfolio selection problems. We classify subjects' choices as consistent or inconsistent according to Varian's (1982) generalized axiom of revealed preference (GARP) and check whether chosen portfolios are dominated in terms of first-order stochastic dominance. While subjects work on their choice tasks, we record the attention paid to each portfolio, measured as the time spent on it. We compute the first four central moments of the distribution function of attention. Preliminary data analysis suggests that subjects who perform worse need more time to complete their tasks and that their distribution functions of attention exhibit less variance, skewness, and kurtosis.
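
For readers unfamiliar with GARP, the sketch below shows a minimal version of such a consistency check. It is illustrative only, not the paper's code, and the function name and the tiny two-observation example are hypothetical.

    import numpy as np

    # Check a set of observed choices for consistency with Varian's (1982)
    # Generalized Axiom of Revealed Preference (GARP).
    # prices[t] and bundles[t] are the price vector and the chosen bundle
    # (e.g. a portfolio) in observation t.
    def satisfies_garp(prices, bundles):
        prices = np.asarray(prices, dtype=float)
        bundles = np.asarray(bundles, dtype=float)
        cost = prices @ bundles.T          # cost[t, s] = cost of bundle s at prices of t
        expend = np.diag(cost)             # expend[t] = actual expenditure at observation t
        R = expend[:, None] >= cost        # x_t directly revealed preferred to x_s
        for k in range(len(bundles)):      # transitive closure (Floyd-Warshall)
            R = R | (R[:, [k]] & R[[k], :])
        # Violation: x_t revealed preferred to x_s while x_t was strictly
        # cheaper than x_s at the prices under which x_s was chosen.
        violation = R & (expend[None, :] > cost.T)
        return not violation.any()

    # Tiny hypothetical example: two budgets, two assets, consistent choices.
    prices = [[1.0, 2.0], [2.0, 1.0]]
    bundles = [[4.0, 1.0], [1.0, 4.0]]
    print(satisfies_garp(prices, bundles))   # True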


Peter Dayan (UCL)

A View from the Bottom

This talk will summarise the key ideas that emerged during the workshop, from the perspective of neuroscience.
