582.1
Do RRT and the Crosswise Model Produce Valid Measurements of Sensitive Items? An Online Validation Study

Friday, July 18, 2014: 3:30 PM
Room: 416
Oral Presentation
Marc HOEGLINGER, ETH Zurich, Zurich, Switzerland
Andreas DIEKMANN, Sociology, ETH Zurich, Zurich, Switzerland
Ben JANN, University of Bern, Switzerland
Social desirability concerns and the fear of negative consequences often deter a considerable share of survey respondents from truthfully answering sensitive items, such as questions about their own norm violations (e.g., nonvoting, tax evasion, cheating). As a consequence, the prevalence of norm violations is underestimated and results are inaccurate. Indirect techniques for asking sensitive questions, such as the Randomized Response Technique (RRT; Warner 1965) or the Crosswise Model (Tian et al. 2007), a newer variant of the RRT, are intended to mitigate this problem by fully concealing respondents’ individual answers.
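As background (not part of the original abstract), the Crosswise Model's standard prevalence estimator can be sketched as follows. Respondents report only whether their answers to the sensitive item and to an unrelated nonsensitive item with known prevalence p (e.g., "Is your mother's birthday in the first quarter of the year?") are the same ("both yes or both no") or different; no individual answer is revealed. The function name and the numbers below are illustrative.

```python
def crosswise_estimate(n_same, n_total, p):
    """Prevalence estimate under the Crosswise Model.

    n_same  -- respondents answering "both yes or both no"
    n_total -- total number of respondents
    p       -- known prevalence of the nonsensitive item (must differ from 0.5)

    The observed share of "same" answers satisfies
    lam = pi * p + (1 - pi) * (1 - p), which is inverted for pi.
    """
    if not 0.0 <= p <= 1.0 or p == 0.5:
        raise ValueError("p must lie in [0, 1] and differ from 0.5")
    lam = n_same / n_total                        # observed share of "same" answers
    pi_hat = (lam + p - 1) / (2 * p - 1)          # moment estimator of the prevalence
    se = (lam * (1 - lam) / n_total) ** 0.5 / abs(2 * p - 1)  # approximate standard error
    return pi_hat, se
```

For example, with p = 0.25 and 700 of 1,000 respondents reporting "same", the estimated prevalence of the sensitive trait is 0.10. Note that the closer p is to 0.5, the larger the standard error, which is the privacy-versus-efficiency trade-off inherent in such designs.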

However, it is far from clear whether these indirect techniques generally produce more valid measurements than standard direct questioning (DQ). Furthermore, most systematic evaluations to date compare a technique’s prevalence estimates of sensitive behavior with DQ estimates under the “more-is-better” assumption, which interprets higher prevalence estimates as more valid, and they lack a known true value for validation. It therefore cannot be ruled out that higher prevalence estimates are a methodological artifact rather than the result of a technique’s superior validity. Whether a particular sensitive question technique truly produces more valid measurements can only be answered with certainty through validation studies. However, opportunities to carry out validation studies are rare, and the range of topics they can cover is very restricted.

We therefore designed an online experiment that allows for the in-depth validation of any sensitive question technique. Inspired by Fischbacher and Heusi’s (2008) widely used cheating dice game, we developed two dice games in which respondents had an incentive to violate a norm: they could illegitimately claim a bonus payment. After the game, respondents were surveyed about their norm adherence, i.e., whether or not they had cheated, using different sensitive question techniques. The resulting prevalence estimates were then validated against the true, observed behavior.
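The validation logic of such a design can be illustrated with a toy simulation (not the authors' actual study): a known share of simulated respondents cheats, direct questioning underestimates this share because some cheaters deny it, while a Crosswise-style question with full compliance recovers it up to sampling error. All parameter values here (cheating rate, admission rate, p) are hypothetical.

```python
import random

def simulate_validation(n=10_000, true_cheat=0.2, dq_admit=0.5, p=0.25, seed=1):
    """Toy simulation: compare DQ and Crosswise prevalence estimates
    of cheating against a known true value (illustrative parameters).

    true_cheat -- true share of cheaters (the validation benchmark)
    dq_admit   -- probability that a cheater admits cheating under DQ
    p          -- known prevalence of the nonsensitive Crosswise item
    """
    rng = random.Random(seed)
    cheated = [rng.random() < true_cheat for _ in range(n)]

    # Direct questioning: cheaters admit only with probability dq_admit,
    # so the DQ estimate is biased toward true_cheat * dq_admit.
    dq_est = sum(c and rng.random() < dq_admit for c in cheated) / n

    # Crosswise: each respondent reports whether the sensitive answer and
    # an unrelated item with known prevalence p coincide (full compliance assumed).
    n_same = sum(c == (rng.random() < p) for c in cheated)
    cw_est = (n_same / n + p - 1) / (2 * p - 1)

    return dq_est, cw_est
```

With these assumptions the DQ estimate clusters around 0.10 (half the true 0.20), while the Crosswise estimate clusters around the true value; the point of a validation study is precisely that this benchmark is known rather than assumed.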