International Choice Modelling Conference 2015

Statistical versus response efficiency – the non-neutrality of the choice of an underlying experimental design
Juergen Meyerhoff, Søren Bøye Olsen

Last modified: 11 May 2015

Abstract


One of the cornerstones of discrete choice experiments (DCE) is the experimental design (ED) that underlies the arrangement of attributes and levels across the alternatives in a choice task. In the earlier days of DCE applications, however, the ED received little attention, and it was generally assumed that the design would not significantly bias the central estimates of a DCE, such as willingness to pay (WTP) estimates, particularly when the number of observations is high. In other words, the design was assumed to be neutral with respect to final outcomes. This has changed recently, and researchers have become more concerned about the effects an ED might have on the precision and efficiency of the estimated structural parameters. In the context of this debate, two separate paradigms have developed in the literature. One seeks to maximise the differences between the attribute levels of the stated preference alternatives, whereas the other seeks to minimise the variances of the parameter estimates obtained for each of the attribute coefficients included in the utility specification (Scarpa & Rose 2008). Within the first paradigm, linear design principles are used, implicitly assuming that respondents are indifferent between all attribute levels, and therefore also between all alternatives, and that no uncertainty exists regarding this indifference. The second paradigm, in contrast, rests on the assumption that orthogonality is not as essential for non-linear models such as those in the logit family, and that for such models significant gains in efficiency can be achieved when some prior information about the parameters is available.
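To fix ideas, the workhorse criterion of the second paradigm is the D-error, which summarises the asymptotic variance-covariance (AVC) matrix of the estimator in a single scalar. A sketch of the standard definition (the notation is ours, following the cited design literature rather than this paper): for a design X, parameter priors \tilde{\beta}, and K parameters,

\[ D\text{-error} = \left[ \det \Omega(X, \tilde{\beta}) \right]^{1/K}, \]

where \Omega denotes the AVC matrix of the maximum likelihood estimator. A design with a lower D-error yields, in expectation, smaller standard errors for a given sample size; the Bayesian variant averages this measure over a prior distribution for \beta.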

That efficient designs are advantageous is nowadays widely accepted, and since a software package - NGene by ChoiceMetrics - became available that makes it easy to generate a range of efficient designs, their application has spread rapidly. This development, however, raises another question according to Louviere et al. (2008): do optimal designs come at a price? The central question here is whether designs constructed according to different optimality criteria affect choice task complexity and thus potentially respondents' decision processing strategies. Statistically more efficient designs may, for example, have unintended consequences: they may increase the cognitive burden on respondents (Ferrini & Scarpa 2007), result in different distributions of tastes (Johnson et al. 2010), and trigger information processing strategies such as attribute non-attendance (Yao et al. 2014). The ED can therefore affect the main outcomes of a DCE, such as estimates of market shares and marginal willingness to pay values.

The present paper contributes to the so far still limited evidence on the possible non-neutrality of the choice of an underlying experimental design. In an online survey in Denmark concerning consumer preferences for meat, respondents in four split samples were given identical questionnaires comprising a DCE. The four samples, however, differed with respect to the ED underlying the DCE. The designs were optimised for four different statistical criteria: minimised D-error, minimised C-error, minimised S-error, and minimised B-error. The first design minimises the variances and covariances of the parameter estimates overall, the second is specifically suited to minimising the variances of functions of the coefficient estimates such as marginal willingness to pay, the third minimises the sample size needed to obtain statistically significant parameter estimates, and the fourth maximises the utility balance of the choice tasks (Scarpa & Rose 2008, Rose & Bliemer 2013). In the survey, each respondent, regardless of which split sample she was randomly assigned to, faced 12 choice tasks. Overall, 1574 interviews are available for analysis (D-error: 390; C-error: 395; S-error: 388; B-error: 401).
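For completeness, the remaining criteria can be sketched under their standard definitions (again our notation, following the cited design literature rather than this paper). With price coefficient \beta_p and attribute coefficients \beta_k, the delta-method variance of a WTP ratio,

\[ \mathrm{Var}\!\left(\frac{\hat\beta_k}{\hat\beta_p}\right) \approx \frac{1}{\beta_p^2}\,\Omega_{kk} + \frac{\beta_k^2}{\beta_p^4}\,\Omega_{pp} - 2\,\frac{\beta_k}{\beta_p^3}\,\Omega_{kp}, \]

is what the C-error criterion (as a weighted sum over attributes) minimises, while the S-error is the smallest sample size at which every parameter is expected to be significant,

\[ S = \max_k \left( \frac{t_{1-\alpha/2}\,\sqrt{\Omega_{kk}}}{\beta_k} \right)^{2}, \]

with \Omega evaluated for a single respondent. Utility balance, finally, pushes the choice probabilities within a task towards equality, P_j \to 1/J for J alternatives.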

In the paper, for each of the four EDs we compare the a priori design measures with the posterior design efficiency measures, among them the minimum number of respondents needed to obtain significant parameter estimates for each attribute. Furthermore, we calculate and compare complexity measures in terms of entropy and the number of attribute level changes, compare response strategies such as stated non-attendance and the frequency of choices of the zero-price option (the SQ alternative), and finally calculate marginal WTP estimates using a WTP space model that captures unobserved preference heterogeneity.
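Two of these measures can be stated compactly, assuming the standard formulations from the complexity and WTP space literatures (the abstract itself does not spell them out). The entropy of a choice task s with predicted choice probabilities P_{js},

\[ H_s = -\sum_{j} P_{js} \ln P_{js}, \]

is largest when all alternatives are equally attractive, and a WTP space model reparameterises utility as

\[ U_{njt} = \lambda_n \left( \mathbf{w}_n' \mathbf{x}_{njt} - p_{njt} \right) + \varepsilon_{njt}, \]

so that the random coefficients \mathbf{w}_n are directly interpretable as individual-specific marginal WTPs.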

Preliminary results suggest that the choice of ED does indeed affect the estimated WTP: we find significant differences in the WTP estimates obtained under the different EDs. Furthermore, we show that some designs are more efficient in the sense that they recover significant parameter estimates with as few as 10 respondents. There is, however, no single ED that outperforms the others across all attributes in this regard, and it is thus not possible, based on these findings, to recommend a specific type of ED in general. Nevertheless, our results underline the importance of the choice of ED as well as the need to investigate this issue further.

References

Ferrini, S., Scarpa, R., 2007. Designs with a priori information for nonmarket valuation with choice experiments: A Monte Carlo study. Journal of Environmental Economics and Management 53, 342-363.

Johnson, F.R., Ozdemir, S., Phillips, K.A., 2010. Effects of simplifying choice tasks on estimates of taste heterogeneity in stated-choice surveys. Social Science & Medicine 70, 183-190.

Louviere, J.J., Islam, T., Wasi, N., Street, D., Burgess, L., 2008. Designing discrete choice experiments: Do optimal designs come at a price? Journal of Consumer Research 35, 360-375.

Rose, J.M., Bliemer, M.C.J., 2013. Sample size requirements for stated choice experiments. Transportation 40, 1021-1041.

Scarpa, R., Rose, J.M., 2008. Design efficiency for non-market valuation with choice modelling: How to measure it, what to report and why. The Australian Journal of Agricultural and Resource Economics 52, 253-282.

Yao, R.T., Scarpa, R., Rose, J.M., Turner, J.A., 2014. Experimental design criteria and their behavioural efficiency: An evaluation in the field. Environmental and Resource Economics (online).

