The problem with scale questions
Scale questions. They seem simple enough, but they are among the more contentious question types (along with ranking questions – here’s our post on how to deal with those!). Research has shown that people respond to scale questions in systematically different ways, regardless of what question is being asked (Paulhus, 1991).
There are three broad response styles when answering scale questions in market research (a quick sketch of how each might be measured follows the list):
- Extreme Response Style (ERS): the tendency to give extreme answers, at either end of a scale
- Midpoint Response Style (MRS): the tendency to answer toward the middle of a scale
- Acquiescence Response Style (ARS): the tendency to agree with questions regardless of what they’re about, also known as ‘agreement tendency’
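To make these styles concrete, here is a minimal sketch (in Python, using invented responses) of one simple way to quantify each style for a single respondent answering 5-point agree/disagree questions. The proportion-based indices and thresholds are illustrative assumptions, not a fixed standard from the response-bias literature:

```python
import numpy as np

# Hypothetical answers to ten 5-point agree/disagree questions
# (1 = strongly disagree, 5 = strongly agree) for one respondent.
responses = np.array([5, 5, 4, 5, 1, 5, 3, 5, 5, 4])

# Extreme Response Style: share of answers at either end of the scale
ers = np.isin(responses, [1, 5]).mean()

# Midpoint Response Style: share of answers at the scale midpoint
mrs = (responses == 3).mean()

# Acquiescence Response Style: share of answers on the 'agree' side,
# regardless of what the questions were about
ars = (responses >= 4).mean()

print(f"ERS: {ers:.0%}, MRS: {mrs:.0%}, ARS: {ars:.0%}")
```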
Variation in response styles is particularly pronounced when comparing markets, reflecting cultural differences. For example, one market might score every brand an 8 or 9 out of 10, while another consistently scores brands 4 or 5 out of 10. Not only does this pose problems when comparing results across markets, it can also skew your overall (or ‘global’) mean score for each scale question.
Enter ANOVA
But all is not lost. We just need to call on the help of a statistical test called analysis of variance (ANOVA).
ANOVA is an established technique that examines difference (or variance) in data by comparing mean scores. It compares the differences between groups (in our case, between markets) with the differences within groups (i.e. within each market). If the between-market differences are large relative to the ordinary within-market variation, we can conclude that one or more markets are statistically influencing the overall data set, rather than just varying at random.
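Here is a minimal sketch of that test using SciPy’s one-way ANOVA on simulated 0–10 brand scores. The market names, means and sample sizes are all invented for illustration; market C is deliberately given a lower-scoring response style:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated 0-10 brand scores for three markets: A and B score generously,
# C systematically scores lower.
market_a = np.clip(rng.normal(8.0, 1.5, 200).round(), 0, 10)
market_b = np.clip(rng.normal(7.5, 1.5, 200).round(), 0, 10)
market_c = np.clip(rng.normal(4.5, 1.5, 200).round(), 0, 10)

# One-way ANOVA: the F statistic compares variance between markets with
# variance within them. A large F (small p) means at least one market's
# mean score genuinely differs from the others.
f_stat, p_value = stats.f_oneway(market_a, market_b, market_c)
print(f"F = {f_stat:.1f}, p = {p_value:.3g}")
```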
If we find that one market is statistically influencing our overall scores, we can look to adjust or standardise the data to account for that market’s skew. Exactly how depends on the results of the survey in question and the extent of the skew. For example, we might centre responses to each scale question by subtracting each market’s mean score: respondents scoring above their market’s average then get positive scores, and those below it get negative scores.
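One way that centring step might look, sketched with pandas on invented numbers. The optional z-score line goes one step further and also evens out differences in how spread out each market’s answers are:

```python
import pandas as pd

# Hypothetical survey extract: one scale score per respondent per market.
df = pd.DataFrame({
    "market": ["A", "A", "A", "B", "B", "B"],
    "score":  [8, 9, 7, 4, 5, 3],
})

# Centre each market on its own mean: respondents above their market's
# average get positive scores, those below get negative scores.
df["centred"] = df["score"] - df.groupby("market")["score"].transform("mean")

# Optional: also divide by the market's standard deviation (a z-score),
# which additionally standardises how dispersed each market's answers are.
df["z"] = df["centred"] / df.groupby("market")["score"].transform("std")

print(df)
```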
We can also reduce opportunities for cultural biases to influence responses
Although we have ANOVA in our toolkit, the best way to account for cultural biases in survey responses is through careful questionnaire design. While we can’t eliminate these biases completely, there are a few tricks that can help:
- limit scale questions for metrics where cross-market comparison is key
- use text or image labels for each point in the scale, rather than numbers, as these are less open to interpretation
- if you have to use numbers, only show the anchor points (the end labels) to reduce the likelihood respondents will gravitate towards the centre
- use a granular scale (e.g. 0-100) to achieve more nuanced results and prevent respondents grouping together at certain points of the scale
Paulhus, D.L. (1991). Measurement and control of response bias. In J.P. Robinson, P.R. Shaver and L.S. Wrightsman (eds.), Measures of Personality and Social Psychological Attitudes, Vol. 1. San Diego, CA: Academic Press.