International Choice Modelling Conference 2017

Indirect questioning as a nudging instrument? Evidence from a CE
Roberta Raffaelli, Luisa Menapace

Last modified: 28 March 2017

Abstract


Indirect questioning (IQ), where respondents are asked to predict the behavior of others (instead of reporting their own), has been used in surveys since the 1970s to elicit private information about sensitive issues. The idea behind IQ is that, since people have no direct knowledge of others’ private information, their answers are based on their own attitudes, intentions, and behavior, which they adjust for perceived differences between themselves and others. Responses to IQ are therefore believed to be less prone to social desirability bias and demand effects (Fisher 1993).

Recently, indirect questioning (also called inferred valuation, IV) has been implemented in choice experiments as a mechanism to remove social desirability bias in preference elicitation (Lusk and Norwood 2009; Lusk and Norwood 2010; Carlsson et al. 2010; Olynk, Tonsor, and Wolf 2010; Yadav et al. 2013). This approach rests on the assumptions that respondents do not derive utility from misrepresenting someone else’s preferences and that respondents reveal their own preferences through their predictions.

IQ has been employed using different question formats (e.g., asking respondents to predict the choice of an “average person” or to predict the distribution of others’ choices), with and without monetary incentives, with different degrees of scrutiny by interviewers (online vs. face-to-face at home), and adopting both between-subjects and within-subjects approaches.

Substantial evidence indeed shows that WTP measures obtained with inferred valuation are lower than those obtained with direct questions, instilling confidence that IQ contributes to the removal of social desirability bias. Nevertheless, it remains unclear how well inferred-valuation WTP measures reflect real preferences. Some findings of recent stated preference studies and insights from the social psychology literature suggest potential issues with IQ.

First, the study by Carlsson et al. (2010) suggests that IQ (and not only direct questions) might be subject to a self-enhancement effect, whereby respondents attempt to present themselves in a relatively favorable light by distorting their predictions of others’ preferences or behavior. Second, ample evidence from social psychology suggests that people think of themselves as ‘better’ than others, the so-called ‘better-than-average’ effect (Alicke & Govorun 2005). In either case, estimates obtained with IQ might be downwardly biased. Third, the stated preference literature has used a wide range of different behaviors to be predicted with IQ, without critically assessing the effect of these differences on elicited values. For example, in some studies respondents were asked to predict others’ behavior in a real market situation (e.g., at the grocery store), while in others they were asked to predict others’ behavior in hypothetical situations (e.g., hypothetical product choices of study participants). While one might expect the specific behavior to be predicted to affect the elicited values, to the best of our knowledge this issue has not been explicitly discussed in the literature.

Our study investigates these issues with a CE designed to elicit preferences for organic pasta characterized by normative attributes concerning origin, cultivar, and production and processing conditions (e.g., a fair price to producers and a socially responsible workforce). We used a partially within-subject and partially between-subject experimental design with two question formats (direct and indirect questions), three treatments ((i) with incentives to provide accurate estimates of others’ purchase intentions, (ii) with incentives to provide accurate estimates of others’ actual purchases, and (iii) without incentives (control)), and two orders of presentation of the question formats (direct questions first or indirect questions first). As an economic incentive we used store coupons worth 30 euros, awarded to the respondents with the most accurate predictions.
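To make the design concrete, the treatment structure just described can be enumerated as a small sketch. The cell labels below are our own shorthand for illustration, not terminology from the study:

```python
from itertools import product

# Between-subjects factors as described in the design: three incentive
# treatments crossed with two orders of question-format presentation.
# Each respondent additionally answers both direct (DQ) and indirect (IQ)
# questions within-subject.
incentive_treatments = [
    "incentive_purchase_intentions",  # (i) accuracy on others' intentions
    "incentive_actual_purchases",     # (ii) accuracy on others' purchases
    "no_incentive_control",           # (iii) control
]
presentation_orders = ["DQ_first", "IQ_first"]

cells = list(product(incentive_treatments, presentation_orders))
for treatment, order in cells:
    print(treatment, order)

print(len(cells))  # 3 incentive treatments x 2 orders = 6 cells
```

Each of the six cells receives a share of the between-subjects sample, while the DQ/IQ comparison is made within respondents.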

Data were collected at an organic store chain in three large Italian cities (Rome, Milan, Palermo) using touch-screen computer-assisted face-to-face interviews. A random sample of 600 respondents faced 6 choice cards with 3 alternatives plus a no-buy option. Choice cards were presented twice: once in the form of the traditional direct question (pick one) and once in the form of an indirect question asking respondents to predict the choices of a sample of store customers participating in the study. The order of presentation was randomized.

Beyond the expected result that IQ leads to lower WTPs than direct questioning for normative attributes, we find no evidence that answers to IQ are affected by a significant indirect self-enhancement effect. Only in a few cases do we find evidence that respondents hold downward-biased beliefs about others. These results suggest that estimates obtained with inferred valuation are unlikely to be severely biased by either the self-enhancement or the better-than-average effect. Second, we find that respondents are not sufficiently sophisticated in their belief formation to provide distinct predictions for slightly different behaviors.

Turning to the effect of monetary incentives, we find that incentives associated with IQ do not produce WTPs that are statistically different from the control treatment, but they do appear to have a spill-over effect on the DQs. More interestingly, even in the absence of monetary incentives, when indirect questions are asked before direct questions, IQs have a ‘debiasing’ effect on direct-question WTP estimates. This finding has practical implications for the implementation of CEs: in lieu of expensive monetary incentives used to remove social desirability bias, researchers could consider constructing a survey in which ancillary IQs are asked before the DQs of interest. Indirect questioning thus emerges as an interesting “nudging” instrument deserving further investigation.

The methods and findings of this study should appeal to a broad cross-section of attendees of the 2017 ICM conference, in particular researchers interested in methods to elicit consumer or health-service preferences and in experimental and behavioral economics. In addition to generating discussion on elicitation methods and incentives in choice experiments, we hope this paper will motivate new research on improving state-of-the-art elicitation techniques in choice experiments.
