International Choice Modelling Conference 2017

Addressing empirical challenges related to the incentive compatibility of stated preference methods
Mikolaj Czajkowski, Christian Vossler, Wiktor Budzinski, Aleksandra Wisniewska, Ewa Zawojska

Last modified: 28 March 2017


Stated preference surveys continue to be the leading approach for estimating the value of public goods. Although the methodology has been in use for over fifty years, concerns over the ability of surveys to provide valid welfare measures remain, serving as an obstacle to widespread adoption in the legal and policy arenas. Theoretical work identifies conditions for a stated preference survey to be incentive compatible in the sense that it provides incentives for respondents to reveal their preferences truthfully. These conditions rely heavily on respondents’ latent (unobserved) beliefs (Carson and Groves, 2007; Vossler et al., 2012): when a single binary choice question (SBC) is used, respondents must perceive that the cost stated in a survey can be coercively collected upon policy implementation (“payment consequentiality”) and that a response in favor of the proposal weakly monotonically increases the chance of its implementation (“policy consequentiality”). In addition to these beliefs, incentive compatibility for the increasingly popular repeated binary discrete choice experiment (binary DCE) requires respondents to believe that at most one of the proposed policies can be implemented and that the perceived implementation rule induces independence between choice sets (Vossler et al., 2012). Our study provides a theoretical and econometric framework for addressing two significant challenges that often arise in empirical work examining the theoretical assumptions tied to respondents’ beliefs.

One empirical challenge is how to appropriately include stated measures of unobservable beliefs, such as Likert-scale responses to a policy consequentiality question, in models of stated preferences. Direct inclusion of stated measures of beliefs may be problematic for two reasons. First, stated beliefs are measured imprecisely, giving rise to issues of measurement error. Second, stated beliefs may be correlated with other unobserved factors that influence choices. In prior work, Herriges et al. (2010) develop a Bayesian treatment effect model for SBC data that uses instrumental variables to identify the effect of stated policy consequentiality on willingness-to-pay (WTP). Vossler et al. (2012) and Vossler and Watson (2013) consider binary probit instrumental variable models, with the former study finding statistical evidence that measured beliefs can be considered exogenous and the latter citing a weak instruments problem. We propose a Hybrid Mixed Logit (HMXL) approach, which models a belief as a latent variable in the utility function, ties the belief to observed covariates (respondents’ characteristics) through a structural equation, and specifies a measurement equation where the stated belief is a function of the latent variable, thus recognizing measurement error. The proposed HMXL model can be applied to both SBC and DCE data, and can accommodate multiple latent beliefs. As with standard mixed logit models, the HMXL allows the analyst to incorporate various forms of preference heterogeneity. Identification relies on the availability of measures of the latent variables, rather than on instrumental variables for the directly included (endogenous) stated belief measure(s).
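The three-equation structure described above follows the standard hybrid (integrated choice and latent variable) template; the sketch below is illustrative, with symbols chosen by us rather than taken from the paper:

```latex
% Illustrative sketch of an HMXL specification (notation is ours, for exposition only)
\begin{align*}
U_{njt} &= \bigl(\beta_n + \lambda\,\mathrm{LV}_n\bigr)' x_{njt} + \varepsilon_{njt}
  && \text{(utility; mixed logit core)}\\
\mathrm{LV}_n &= \gamma' z_n + \eta_n
  && \text{(structural equation)}\\
I_n^{*} &= \zeta\,\mathrm{LV}_n + \nu_n
  && \text{(measurement equation)}
\end{align*}
```

Here $\varepsilon_{njt}$ is i.i.d. extreme value, $z_n$ collects respondent characteristics, $\eta_n \sim N(0,1)$ identifies the scale of the latent belief $\mathrm{LV}_n$, and the observed Likert response $I_n$ would be modelled as an ordered indicator of the continuous index $I_n^{*}$, which is how the measurement equation accommodates error in the stated belief.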

A second challenge, assuming the theoretical conditions tied to beliefs are not universally met, is how to modify survey design to induce desired beliefs. In their critical literature review, Kling et al. (2012) point out that “the effect of consequentiality scripts in stated preference surveys is in its infancy”. Using as an empirical application a binary DCE survey concerning public programs for discounted theatre tickets in Warsaw, Poland, we employ a split-sample approach to investigate four information scripts that vary in their signals of policy consequentiality. The baseline treatment provides information at a level that is common in stated preference surveys, and the other treatments increase the frequency at which policy consequentiality is emphasized in the survey. This exogenous variation allows us to identify whether there is a causal effect of policy consequentiality on elicited values. As acknowledged in prior work, follow-up consequentiality questions are themselves inconsequential constructs. This opens up the possibility that identified correlations may be spurious and, similarly, that the drivers of consequentiality question responses may have little to do with actual beliefs. The scripts we explore can, moreover, be easily incorporated into general practice. An ancillary benefit of the HMXL framework in this context is that it allows one to measure not only whether information signals alter stated beliefs but also whether such signals influence stated WTP.

Our empirical study provides several important insights. First, similar to Vossler and Watson (2013), we are not able to identify (strong) instrumental variables from the extensive information collected through the survey. This provides further impetus for the proposed HMXL estimator. Second, we find that latent beliefs over policy consequentiality have a discernible effect on elicited WTP for the policy programs considered. Importantly, these latent beliefs are strongly correlated with measured beliefs, where the measurement device is a Likert-scale policy consequentiality question now prevalent in the literature. Third, WTP is significantly correlated with our information treatments that vary the signals of policy consequentiality, which emphasizes the empirical importance of the theoretical assumption regarding policy consequentiality; indeed, this can be taken as evidence in favor of construct validity. Fourth, somewhat surprisingly, the information treatments have no significant effect on stated beliefs. Thus, although the econometric results provide empirical support that a follow-up question about respondents’ beliefs over actual policy consequences provides useful information, the findings emphasize the importance of developing follow-up questions that elicit beliefs more precisely.


Carson, R., and Groves, T., 2007. Incentive and informational properties of preference questions. Environmental and Resource Economics, 37(1):181-210.

Herriges, J., Kling, C., Liu, C.-C., and Tobias, J., 2010. What are the consequences of consequentiality? Journal of Environmental Economics and Management, 59(1):67-81.

Kling, C., Phaneuf, D. J., and Zhao, J., 2012. From Exxon to BP: Has some number become better than no number? Journal of Economic Perspectives, 26(4):3-26.

Vossler, C. A., Doyon, M., and Rondeau, D., 2012. Truth in consequentiality: Theory and field evidence on discrete choice experiments. American Economic Journal: Microeconomics, 4(4):145-171.

Vossler, C. A., and Watson, S. B., 2013. Understanding the consequences of consequentiality: Testing the validity of stated preferences in the field. Journal of Economic Behavior and Organization, 86:137-147.
