International Choice Modelling Conference 2015

Deniz Akinc, Martina Vandebroek

Last modified: 11 May 2015


Discrete choice experiments (DCEs) have become a commonly used technique in health economics, marketing and transportation research to answer a wide range of questions. The technique uses an attribute-based measure of utility, based on the assumption that products, services or policies can be described by their attributes and that respondents' valuations depend on the levels of these attributes. Typically, in a DCE, respondents are presented with a series of choice sets, each composed of several alternatives (products, services or policies) defined as combinations of attribute levels, and are asked to choose among two or more alternatives. These choices reveal the partworths, i.e. the respondent's valuations of the individual attribute levels.

Practitioners have frequently used the conditional logit (CL) or multinomial logit model to analyze data from discrete choice experiments. As these models assume that all respondents use the same partworths to value the product attributes, they cannot capture the preference heterogeneity that is typically present in reality. Nowadays, this heterogeneity is mainly analyzed with the mixed logit (MXL) model, which estimates the distribution of the preference parameters in the population.

Designing the experiment is a crucial aspect of DCEs because the design determines which models can be estimated and with what precision. It is therefore essential to choose the profiles used in the experiment and to group them into choice sets so that the experiment provides maximum information on the parameters of the model. Traditionally, classical experimental designs that are optimal for linear models have been used, mainly full and fractional factorial designs and orthogonal designs. However, while orthogonality is optimal for estimating linear models, discrete choice models are nonlinear in the parameters, so orthogonal designs are not necessarily appropriate for choice data and do not provide maximal information on consumer preferences. This has resulted in the development of new methods, based on optimal design theory, that allocate the attribute levels to the design matrix so as to improve the statistical efficiency of the experiment.

While several measures for the statistical efficiency of experimental designs have been proposed, the D-optimality criterion has been the most commonly used. A D-optimal design maximizes the determinant of the Fisher information matrix of the maximum likelihood estimator, so that the model parameters can be estimated efficiently. Calculating the statistical efficiency of a design requires information on both the parameter values and the choice model to be estimated.
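
As a concrete illustration, the Fisher information matrix of the CL model for a given design and an assumed parameter vector, and the corresponding D-error (the quantity a D-optimal design minimizes), can be sketched as follows. This is a minimal sketch, not any package's implementation; the function names and the tiny two-set design are hypothetical:

```python
import numpy as np

def cl_information(design, beta):
    """Fisher information of the CL model: the sum over choice sets of
    X' (diag(p) - p p') X, with p the logit choice probabilities."""
    info = np.zeros((len(beta), len(beta)))
    for X in design:                      # X: alternatives x attributes
        u = X @ beta
        p = np.exp(u - u.max())           # subtract max for stability
        p /= p.sum()
        info += X.T @ (np.diag(p) - np.outer(p, p)) @ X
    return info

def d_error(design, beta):
    """D-error = det(information)^(-1/K) for K parameters;
    smaller D-error means a more efficient design."""
    K = len(beta)
    return np.linalg.det(cl_information(design, beta)) ** (-1.0 / K)

# Hypothetical toy design: two choice sets, two alternatives, two attributes.
design = [np.array([[1., 0.], [0., 1.]]),
          np.array([[1., 1.], [0., 0.]])]
beta = np.array([0.5, -0.5])              # assumed (prior) partworths
print(d_error(design, beta))
```

Comparing the D-errors of two candidate designs under the same assumed parameters gives their relative D-efficiency.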

The exact parameter values are of course not known at the design generation stage, so the researcher needs to make assumptions about them. One approach is to set all prior parameter values to zero (Street and Burgess, 2007). A second approach is to assume that the parameters equal some fixed non-zero values (Huber and Zwerina, 1996), which yields locally optimal designs. A third approach is to generate Bayesian optimal designs using a distribution of likely parameter values: by evaluating the efficiency of the design over the set of possible prior values, the design becomes more robust (Sándor and Wedel, 2001).
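
The Bayesian approach replaces the D-error at a single parameter vector with its expectation over the prior, which in practice is approximated by Monte Carlo. A minimal sketch assuming a CL model and an independent normal prior (all names are illustrative):

```python
import numpy as np

def d_error(design, beta):
    """D-error of a CL design for one fixed parameter vector."""
    K = len(beta)
    info = np.zeros((K, K))
    for X in design:
        u = X @ beta
        p = np.exp(u - u.max())
        p /= p.sum()
        info += X.T @ (np.diag(p) - np.outer(p, p)) @ X
    return np.linalg.det(info) ** (-1.0 / K)

def bayesian_d_error(design, mu, sd, n_draws=500, seed=0):
    """Bayesian D-error: the D-error averaged over parameter vectors
    drawn from an independent N(mu, sd^2) prior (Monte Carlo)."""
    rng = np.random.default_rng(seed)
    draws = rng.normal(mu, sd, size=(n_draws, len(mu)))
    return float(np.mean([d_error(design, b) for b in draws]))
```

With sd close to zero this collapses to the locally optimal criterion evaluated at mu; a wider prior trades some efficiency at mu for robustness over a larger region of the parameter space.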

Furthermore, at the design generation stage, it is not known which choice model will fit the choice data best. In the literature, the main focus has been on the optimal design of choice experiments for estimating the CL model. As incorporating respondent heterogeneity became more important, various studies showed the advantages of an optimal MXL design over an optimal CL design, and optimal designs for the cross-sectional mixed logit (CMXL) and panel mixed logit (PMXL) models were introduced in the literature. The CMXL model is a special case of the PMXL model that assumes the choice probabilities of a single respondent are independent across choice sets, whereas the PMXL model explicitly takes into account the within-respondent correlation across repeated choices.
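
The difference between the two heterogeneity models shows up in how the simulated likelihood integrates out the random partworths: separately per choice set for CMXL, but jointly over a respondent's whole choice sequence for PMXL. A sketch under normally distributed partworths (all function names are illustrative, not from any of the packages mentioned below):

```python
import numpy as np

def chosen_prob(X, beta, chosen):
    """Logit probability of the chosen alternative in one choice set."""
    u = X @ beta
    p = np.exp(u - u.max())
    return p[chosen] / p.sum()

def cmxl_loglik(design, choices, mu, sd, n_draws=200, seed=0):
    """Cross-sectional MXL: the random partworths are integrated out
    separately for every choice set, so choices are treated as independent."""
    rng = np.random.default_rng(seed)
    ll = 0.0
    for X, c in zip(design, choices):
        draws = rng.normal(mu, sd, size=(n_draws, len(mu)))
        ll += np.log(np.mean([chosen_prob(X, b, c) for b in draws]))
    return ll

def pmxl_loglik(design, choices, mu, sd, n_draws=200, seed=0):
    """Panel MXL: one partworth draw is shared across all of a respondent's
    choice sets, inducing correlation across the repeated choices."""
    rng = np.random.default_rng(seed)
    draws = rng.normal(mu, sd, size=(n_draws, len(mu)))
    sims = [np.prod([chosen_prob(X, b, c) for X, c in zip(design, choices)])
            for b in draws]
    return np.log(np.mean(sims))
```

When the heterogeneity variance is zero, both likelihoods collapse to the plain CL likelihood; as the variance grows, the product inside the PMXL integral makes the two criteria, and hence their optimal designs, diverge.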

The main disadvantage of the optimal design approach is that all construction methods require assumptions about both the model and the true parameter values. To account for model and prior parameter uncertainty, this study investigates a model-robust approach: we look for designs that yield reasonable results under the true model even when the postulated model for which the design was computed is different. We investigate whether orthogonal designs, designs optimized for the CL model, or designs generated for a particular heterogeneity model are more robust than others to model misspecification and to prior parameter misspecification. Using software packages such as SAS, SAWTOOTH and NGENE, several orthogonal designs and optimal designs for the CL and MXL models are obtained for various experimental setups, assuming high and low respondent heterogeneity. To test the effect of model misspecification, the D-efficiency of a design constructed for a specific choice model is computed under various choice models; to test the effect of parameter misspecification, the D-efficiency is calculated for different parameter values. The results indicate that one has to be careful with orthogonal designs, as many different orthogonal designs with different efficiencies exist for the same design problem. Bayesian D-optimal CL designs with a N(0, small variance) prior have the smallest D-errors overall when no prior information is available, whereas Bayesian D-optimal CL designs with a N(μβ, small variance) prior have the smallest D-errors overall when prior information is known.



Huber, J. and Zwerina, K., 1996. The importance of utility balance and efficient choice designs. Journal of Marketing Research. 33, 307-317.


Sándor, Z. and Wedel, M., 2001. Designing conjoint choice experiments using managers' prior beliefs. Journal of Marketing Research. 38, 430-444.


Street, D. J., and Burgess, L., 2007. The construction of optimal stated choice experiments: Theory and methods. Hoboken, NJ: Wiley.


