Welcome! I am a lecturer with the School of Economics at the University of Surrey and I specialize in micro theory and experimental economics.
This paper studies a new measure of the cost of learning that allows the different attributes of the options an agent faces to differ in their associated learning costs. The new measure maintains the tractability of Shannon’s classic measure but produces richer choice predictions and identifies a new form of informational bias that matters for welfare and counterfactual analysis conducted with the multinomial logit model. Necessary and sufficient conditions for optimal agent behavior under the new cost measure are provided.
A novel data enrichment demonstrates that experimental subjects are more likely to invest effort in learning about the value of options when simple choice parameters, such as price, differ from those in previous choice problems. This increase in effort in ‘unfamiliar’ choice problems means that the behavior of many subjects violates even the most flexible model of costly learning if the cost of information is assumed to be constant across choice problems with the same prior beliefs. This observation motivates the introduction of heterogeneous decision makers into a standard, more restrictive (posterior separable) model of costly learning to better fit the data.
We investigate the problem of identifying incomplete preferences in the domain of uncertainty by proposing an incentive-compatible mechanism that bounds the behavior rationalizable by very general classes of complete preferences. Choices that fall outside these bounds therefore indicate that the decision maker cannot rank the alternatives. Data collected from an experiment implementing the proposed mechanism indicate that when choices cannot be rationalized by Subjective Expected Utility, they are usually incompatible with general models of complete preferences as well. Moreover, behavior indicative of incomplete preferences is empirically associated with deliberate randomization.
By weakening Shannon’s original axioms to allow attributes of the choice environment to differ in their associated learning costs, this paper provides an axiomatic foundation for Multi-Attribute Shannon Entropy, a natural multi-parameter generalization of Shannon Entropy. Sufficient conditions are also provided under which a simple dataset identifies the Multi-Attribute Shannon Entropy cost function for information: stochastic choice data produced by a rationally inattentive agent choosing between pairs of options when relatively few states of the world have positive probability of being realized.
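For orientation, the classic Shannon (mutual-information) cost of learning scales the expected reduction in entropy by a single parameter. A minimal sketch of what a multi-attribute generalization might look like is below; the attribute marginals $\mu_a$ and weights $\lambda_a$ here are illustrative placeholders, not the paper’s actual definition:

```latex
% Classic Shannon cost: prior \mu, information strategy \pi inducing posteriors \mu',
% single scale parameter \lambda, and Shannon entropy H.
\[
  c(\mu,\pi) \;=\; \lambda\Big( H(\mu) \;-\; \mathbb{E}_{\pi}\big[ H(\mu') \big] \Big),
  \qquad
  H(\mu) \;=\; -\sum_{\omega} \mu(\omega)\,\log \mu(\omega).
\]
% An illustrative multi-attribute variant: replace the single \lambda with
% attribute-specific weights \lambda_a applied to the marginals \mu_a of the
% prior over each attribute a (hypothetical notation):
\[
  H_{\boldsymbol{\lambda}}(\mu) \;=\; \sum_{a} \lambda_a\, H(\mu_a).
\]
```

When the state is a vector of independent attributes and all $\lambda_a$ are equal, the weighted sum of marginal entropies coincides with (a multiple of) the joint Shannon entropy, which is one sense in which such a functional nests Shannon’s measure as a special case.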
Work in Progress
(with Umberto Garfagnini)
An experimental investigation of the impacts of redundant and contradictory pieces of information.