- How do I appeal to the largest number of consumers? (TURF analysis)
- How do I prioritise marketing messages or product attributes? (Max Diff)
- How do I find out what people value in my (new) product / service? (Conjoint)
- How do I identify what drives a desired behaviour or outcome? (Key driver analysis)
- How do I know what to prioritise to meet strategic goals? (Gap analysis)
- How do I build consumer loyalty? (Consumer journey mapping)
- How do I use behavioural science to improve my research? (Cognitive biases)
- How do I live without you? (LeAnn Rimes)
- How do I know how many people will buy my product at a given price? (Van Westendorp’s price sensitivity meter)
- How do I assess the impact of my advertising? (Ad effectiveness)
- How do I turn data into clear findings? (Data visualisation)
- How do I tap into the unconscious perceptions that influence decision-making? (Implicit response testing)
- How do I reduce a large amount of data into something more meaningful? (Factor analysis)
- How do I group people together based on shared characteristics? (Segmentation)
- How do I forecast market share at a given price point? (Brand price trade off)
- How do I account for cultural differences when surveying across markets? (ANOVA)
- How do I judge brand performance relative to competitors? (Correspondence analysis / brand mapping)
The problem with scale questions
Scale questions. They seem simple enough, but they are among the more contentious question types (along with ranking questions – here’s our post on how to deal with those!). Research has shown that people respond to scale questions in systematically different ways, regardless of what is being asked (Paulhus, 1991).
There are three broad response styles when answering scale questions in market research:
- Extreme Response Style (ERS): the tendency to give extreme answers, at either end of a scale
- Midpoint Response Style (MRS): the tendency to answer toward the middle of a scale
- Acquiescence Response Style (ARS): the tendency to agree with questions regardless of what they’re about, also known as ‘agreement tendency’
Variation in response styles is particularly pronounced when comparing some markets, a reflection of cultural differences. For example, one market might score every brand an 8 or 9 out of 10, while another market consistently scores brands 4 or 5 out of 10. Not only does this pose problems when comparing results across markets, it also has the potential to skew your overall (or ‘global’) mean score for each scale question.
But all is not lost. We just need to call on the help of a statistical test called analysis of variance (ANOVA).
ANOVA is an established technique that examines difference (or variance) in data by comparing mean scores. It looks at differences between groups (in our case, between markets) and within groups (i.e. within each market) to identify which differences are statistically meaningful and which reflect random variation.
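As a minimal sketch, a one-way ANOVA across markets can be run in a few lines with SciPy. The scores below are hypothetical 0–10 brand ratings, invented for illustration:

```python
# One-way ANOVA across three markets (hypothetical 0-10 brand ratings).
from scipy.stats import f_oneway

market_a = [8, 9, 8, 9, 8, 9, 7, 8]   # tends to score high
market_b = [4, 5, 4, 5, 5, 4, 5, 4]   # tends to score low
market_c = [6, 7, 6, 7, 6, 7, 6, 6]   # middle of the scale

# f_oneway tests whether the group means differ by more than
# random within-group variation would explain.
f_stat, p_value = f_oneway(market_a, market_b, market_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

if p_value < 0.05:
    print("At least one market's mean score differs significantly.")
```

A small p-value tells us the markets' mean scores genuinely differ; it does not tell us which market is responsible, so a follow-up (e.g. a post-hoc comparison) is needed before adjusting the data.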
If we find that one country is statistically skewing our overall scores, we can adjust or standardise the data to account for that market’s skew. How this is done depends on the results of the survey in question and the extent of the skew. For example, we might centre responses to scale questions by setting each market’s mean score to zero: responses above the market average become positive scores, and responses below it become negative scores.
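The centring step above can be sketched with the standard library alone. The market names and ratings here are hypothetical:

```python
# Mean-centre scale scores within each market so markets are comparable.
from statistics import mean

scores_by_market = {
    "Market A": [8, 9, 8, 9, 7],   # hypothetical high-scoring market
    "Market B": [4, 5, 4, 5, 4],   # hypothetical low-scoring market
}

centred = {}
for market, scores in scores_by_market.items():
    m = mean(scores)
    # Subtract the market mean: above-average responses become positive,
    # below-average responses negative, regardless of the raw scale used.
    centred[market] = [round(s - m, 2) for s in scores]

print(centred)
```

After centring, a "+0.8" in Market A and a "+0.6" in Market B both mean "above that market's own average", so cross-market comparison no longer depends on each market's habitual scoring level.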
We can also reduce opportunities for cultural biases to influence responses
Although we have ANOVA in our toolkit, the best way to account for cultural biases in survey responses is through careful questionnaire design. While we can’t eliminate these biases completely, there are a few tricks which can help:
- limit scale questions for metrics where cross-market comparison is key
- use text or image labels for each point in the scale, rather than numbers, as these are less open to interpretation
- if you have to use numbers, only show the anchor points (the end labels) to reduce the likelihood respondents will gravitate towards the centre
- use a granular scale (e.g. 0-100) to achieve more nuanced results and prevent respondents grouping together at certain points of the scale
- Paulhus, D. L. (1991). Measurement and control of response bias. In Robinson, J. P., Shaver, P. R., & Wrightsman, L. S. (Eds.), Measures of Personality and Social Psychological Attitudes, Vol. 1. San Diego, CA: Academic Press.