Mode effect

Mode effect should not be confused with modality effect.

Mode effect is a broad term referring to a phenomenon where a particular survey administration mode causes different data to be collected. For example, when asking a question using two different modes (e.g. paper and telephone), responses to one mode may be significantly and substantially different from responses given in the other mode. Mode effects are a methodological artifact, limiting the ability to compare results from different modes of collection.

Theory

Particular survey modes put respondents into different frames of mind, referred to as a mental "script".[1] This can affect the responses they give.

Mode effects are likely to be larger when the differences between modes are larger. Face-to-face interviews are substantially different from self-completed pen-and-paper forms. By contrast, web surveys, pen-and-paper forms and other self-completed instruments are quite similar (each requires respondents to read and privately respond to a question), so mode effects between them may be minimised.

Users of surveys must consider the potential for mode effects when comparing results from studies conducted in different modes. However, this is difficult, as mode effects can be complex and subject to interactions between respondent demographics, subject matter and mode. Unless mode effects are formally investigated for the survey instrument, it is difficult to quantify their size; instead, qualitative judgments by experts familiar with the subject matter and the respective modes are required.

Social desirability bias

Studies of mode effects are sometimes contradictory, but some general patterns do emerge. For example, social desirability bias tends to be highest for telephone surveys and lowest for web surveys, with modes ranked from most to least affected as follows:[2][3]

  1. Telephone surveys
  2. Face-to-face surveys
  3. IVR surveys
  4. Mail surveys
  5. Web surveys

Therefore, as the data collected on sensitive topics (such as sexual behavior or illicit activities) will change depending on the administration mode, researchers should be cautious about combining data or comparing results from different modes.

Differences in questions between modes

Some modes require different question wording from others, in order to suit the features of the mode. For example, self-complete forms can use lists of examples or extensive instructions to help respondents answer relatively complex questions. By contrast, in telephone interviews, respondents are limited by their working memory and are unlikely to follow a long question with multiple sub-clauses. Another example is that a 'matrix' of questions, commonly found on self-complete forms, cannot easily be read out in a verbal interview; instead, a matrix would generally need to be scripted as a series of individual questions.
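
The matrix-to-questions conversion described above can be sketched in code. This is a minimal illustration, not any standard survey tool; the item names and satisfaction scale are invented for the example:

```python
# A matrix question on a paper form presents a grid of items sharing one
# response scale. For a telephone interview, the same content is scripted
# as a separate spoken question per item.

ITEMS = ["the price", "the delivery time", "the customer service"]  # hypothetical items
SCALE = ["very satisfied", "satisfied", "dissatisfied", "very dissatisfied"]

def script_matrix_as_questions(items, scale):
    """Flatten a matrix (items x shared scale) into one spoken question per item."""
    options = ", ".join(scale)
    return [
        f"How satisfied were you with {item}? Would you say {options}?"
        for item in items
    ]

questions = script_matrix_as_questions(ITEMS, SCALE)
for q in questions:
    print(q)
```

Each generated question repeats the full response scale, since a telephone respondent cannot glance back at the grid the way a paper respondent can.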

Differences in question wording across modes may cause different data to be collected by different modes. However, this is not always the case, and appropriate adaptation of questions to a new mode can yield comparable data. Survey designers should consider the conventions of the mode when adapting questions. For example, while it may be acceptable to require respondents to calculate total figures themselves on a paper form, respondents may perceive this as burdensome on a web form (where they might expect totals to be calculated automatically by the computer). This may in turn change their attitude toward the form, altering their behaviour and ultimately changing the data collected.

Identifying and resolving mode effects

Mode effects can be identified by embedding an experiment within the survey, where a proportion of respondents are allocated to each mode. Differences in results from each mode should identify the 'mode effect' for this particular survey.
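
Such an embedded experiment reduces to comparing response distributions between randomly allocated groups. A minimal sketch, using a two-proportion z-test on invented counts (the respondent numbers and 'yes' counts below are hypothetical, not from any real study):

```python
import math

def two_proportion_z(yes_a, n_a, yes_b, n_b):
    """Two-proportion z-test: is the 'yes' rate in mode A different from mode B?"""
    p_a, p_b = yes_a / n_a, yes_b / n_b
    p_pool = (yes_a + yes_b) / (n_a + n_b)          # pooled proportion under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF (via the error function).
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical experiment: 500 respondents randomly allocated to each mode;
# 120 'yes' answers to a sensitive question by telephone, 170 on the web.
z, p = two_proportion_z(120, 500, 170, 500)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A significant difference between the randomly allocated groups is then attributable to the mode (rather than to respondent composition), which is the point of embedding the experiment.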

Once a mode effect has been quantified, it may be possible to use this information to reprocess existing data and allow comparison between data collected in different modes (e.g. by backcasting a time series to determine what past results 'would have' been had they been administered in the new mode).
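
As a toy illustration of backcasting, suppose a parallel run during the experiment yields estimates under both modes for the same period. One simple approach, assuming a purely multiplicative mode effect (an assumption that must be justified for the instrument in question; all figures below are invented):

```python
# Hypothetical parallel-run estimates (%) for the same period under each mode.
old_mode_result = 46.0
new_mode_result = 50.6
factor = new_mode_result / old_mode_result  # assumes a multiplicative mode effect

# Historical series collected under the old mode (invented figures).
history = {2019: 42.0, 2020: 43.5, 2021: 45.1}

# Backcast: what past results 'would have' been under the new mode.
backcast = {year: round(value * factor, 1) for year, value in history.items()}
print(backcast)
```

In practice the adjustment may be additive, vary by subgroup, or interact with the question, so the functional form of the correction is itself an empirical question.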

Differential coverage between modes

Different administration modes may inherently exclude some parts of the target population. This potentially biases the sample that is taken, and changes the data from what would have been collected using another mode. For example, people without a home phone are excluded from Random Digit Dialling (RDD) surveys, and people without internet access are unlikely to complete a web survey. This means different samples are taken from the population when using different modes. Unless experiments are specifically designed to investigate differential coverage, mode effects will be confounded by coverage,[4] and significant differences between modes/experimental conditions could have several explanations: a genuine mode effect, differential coverage of the population, or a combination of the two.
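
The coverage point can be shown with a deterministic toy population (all figures invented): even with perfect measurement, a mode that systematically excludes part of the population misestimates the population value.

```python
# Hypothetical population of 1,000: 800 phone-owners (30% 'yes' on some item)
# and 200 without a home phone (60% 'yes'). Each person is (group, answer).
population = (
    [("phone", 1)] * 240 + [("phone", 0)] * 560
    + [("no_phone", 1)] * 120 + [("no_phone", 0)] * 80
)

true_rate = sum(ans for _, ans in population) / len(population)

# An RDD telephone survey can only reach phone-owners: a coverage error,
# entirely separate from any measurement (mode) effect.
rdd_frame = [ans for group, ans in population if group == "phone"]
rdd_rate = sum(rdd_frame) / len(rdd_frame)

print(f"true 'yes' rate: {true_rate:.2f}, RDD-covered rate: {rdd_rate:.2f}")
```

Here the telephone estimate (0.30) differs from the population value (0.36) purely because of who the mode can reach, before any mode effect on responses is considered.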

This problem is exacerbated when multiple modes are used in 'live' administration of a survey. Some surveys offer multiple modes, allowing respondents to choose the method most convenient for them. That is, different 'types' of respondents are expected to complete different modes based on their own choices. In this case, mode effects are difficult to quantify: randomly allocating respondents to a condition does not reflect their preference, so such an experiment lacks external validity and its results would not directly generalise to situations offering respondents a choice. Conversely, failing to randomly allocate participants to a condition (i.e. allowing them a choice, thereby retaining external validity) would mean that apparent differences between modes reflect the combined effect of a) different respondent types choosing each mode and b) any mode effects.

Notes and References

  1. Groves, Robert M. (1989). Survey Errors and Survey Costs, New York: Wiley-Interscience.
  2. Kreuter, Frauke; Presser, Stanley; Tourangeau, Roger. "Social Desirability Bias in CATI, IVR, and Web Surveys: The Effects of Mode and Question Sensitivity". Public Opinion Quarterly (2008) 72 (5): 847–865.
  3. Holbrook, Allyson L.; Green, Melanie C.; Krosnick, Jon A. "Telephone versus Face-to-Face Interviewing of National Probability Samples with Long Questionnaires: Comparisons of Respondent Satisficing and Social Desirability Response Bias". Public Opinion Quarterly (2003) 67 (1): 79–125.
  4. de Leeuw, E. (2005). To mix or not to mix data collection modes in surveys. Journal of Official Statistics, 21(2): 233-255.