- How do I appeal to the largest number of consumers? (TURF analysis)
- How do I prioritise marketing messages or product attributes? (Max Diff)
- How do I find out what people value in my (new) product / service? (Conjoint)
- How do I identify what drives a desired behaviour or outcome? (Key driver analysis)
- How do I know what to prioritise to meet strategic goals? (Gap analysis)
- How do I build consumer loyalty? (Consumer journey mapping)
- How do I use behavioural science to improve my research? (Cognitive biases)
- How do I live without you? (LeAnn Rimes)
- How do I know how many people will buy my product at a given price? (Van Westendorp’s price sensitivity meter)
- How do I assess the impact of my advertising? (Ad effectiveness)
- How do I turn data into clear findings? (Data visualisation)
- How do I tap into the unconscious perceptions that influence decision-making? (Implicit response testing)
- How do I reduce a large amount of data into something more meaningful? (Factor analysis)
- How do I group people together based on shared characteristics? (Segmentation)
- How do I forecast market share at a given price point? (Brand price trade off)
- How do I account for cultural differences when surveying across markets? (ANOVA)
- How do I judge brand performance relative to competitors? (Correspondence analysis / brand mapping)
The two types of decision-making
If you’ve read our blog post on cognitive biases, you’ll know it’s often quite tricky to get to the bottom of what people really think without careful questionnaire design. People tend to answer survey questions in a considered, analytical and often rational way, but that doesn’t always reflect how we’d act if we were posed the same question or choice in the real world – we often make quicker, more impulsive or intuitive decisions that are driven by a range of subconscious biases and beliefs built up over time. This is especially true when it comes to decisions that involve brands.
This ‘automatic’ mode of decision-making is often referred to as System 1 thinking, while the slower, more considered mode is referred to as System 2 thinking. As a general rule of thumb, System 1 thinking is faster than System 2 thinking, so it stands to reason that if we want to tap into System 1 thinking, we need respondents to answer our questions quickly too. You’ve probably heard of the best-selling book ‘Thinking, Fast and Slow’ by Daniel Kahneman, which covers this exact topic in a clear and compelling manner and is well worth a read if you’d like to learn more!
Tapping into System 1 thinking
To encourage respondents to use System 1 thinking, we can include an exercise called an implicit response test (IRT) within our survey (these tests are also known as implicit association tests, or implicit reaction speed tests). When we use these tests in online surveys, consumers are typically shown a brand or product and then a range of attributes or perceptions one at a time. They have to say as quickly as they can whether or not they associate that brand / product with that perception / attribute, usually by pressing corresponding keys on their device. The survey software captures both the answer given (yes / no) and the speed at which it was given, and respondents are given the chance to practise a little first so that they feel comfortable with the exercise.
We can then analyse the results to see not only which brands / products are associated with which perceptions / attributes, but how quickly people were able to give that answer – the faster the answer was given, the more it represents a person’s natural ‘automatic’ response, while slower responses indicate a more considered, less subconscious response. This makes implicit response testing a useful tool for unpicking brand associations, for name or logo testing / development and other research questions where gut feel can drive consumer behaviour.
What does implicit response testing tell us?
Before we can analyse the results of an implicit response test, we first need to remove the outliers in the data – those responses that are either too fast or too slow to represent a realistic response. It’s important to look at the data at hand rather than applying a general rule of thumb here, as people respond differently depending on what’s being asked in the test.
To clean the data, best practice is to look at the averaged reaction times for each test run and check that they follow what’s referred to as a normal distribution (most responses clustered around the same time window).
Most responses fall within the grey area shown in the image above, clustered around the average reaction time at the top of the curve. Responses at either end of the curve are too fast (green) or too slow (red), with only a few people responding in these time frames – these are the responses to clean out of your dataset so that your results aren’t skewed.
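As a rough sketch of this cleaning step in Python – note that the 300 ms floor and the two-standard-deviation cut-off below are illustrative assumptions, not fixed rules; as mentioned above, the right thresholds depend on the data in front of you:

```python
import statistics

def clean_reaction_times(times_ms, floor_ms=300, n_sd=2.0):
    """Remove implausible reaction times from an implicit response test.

    floor_ms and n_sd are illustrative thresholds: responses faster than
    floor_ms are treated as too quick to be genuine, and responses more
    than n_sd standard deviations above the mean as too slow/distracted.
    """
    # Drop responses that are too fast to represent a real decision
    plausible = [t for t in times_ms if t >= floor_ms]
    mean = statistics.mean(plausible)
    sd = statistics.stdev(plausible)
    upper = mean + n_sd * sd
    # Drop responses in the slow tail of the distribution
    return [t for t in plausible if t <= upper]

# Example: one too-fast response (180 ms) and one too-slow (5400 ms)
times = [180, 850, 900, 920, 880, 910, 870, 5400]
cleaned = clean_reaction_times(times)
```

In practice you would inspect the distribution per test run before choosing thresholds, rather than hard-coding them.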
Once you’re satisfied with your dataset you can plot it on a quadrant chart, like the example below, showing the level of association between a brand / product and a perception / attribute (how many people said yes or no) on one axis, and the strength of that association (the time it took for them to say yes or no) on the other. A strong positive or negative association that’s quickly reached is likely to have significant influence in decision-making for that brand in the real world.
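The two quadrant-chart axes described above can be sketched as follows. The brand, attribute and response values here are hypothetical, purely to show the calculation:

```python
from collections import defaultdict

# Hypothetical cleaned responses: (brand, attribute, said_yes, reaction_ms)
responses = [
    ("BrandA", "trustworthy", True, 640),
    ("BrandA", "trustworthy", True, 710),
    ("BrandA", "trustworthy", False, 1450),
    ("BrandA", "trustworthy", True, 690),
]

def quadrant_coordinates(responses):
    """For each brand/attribute pair, return the two quadrant-chart axes:
    level of association (% saying yes) and strength of association
    (mean reaction time in ms -- lower means a more 'automatic' answer)."""
    grouped = defaultdict(list)
    for brand, attr, said_yes, rt in responses:
        grouped[(brand, attr)].append((said_yes, rt))
    coords = {}
    for key, rows in grouped.items():
        pct_yes = 100 * sum(1 for said_yes, _ in rows if said_yes) / len(rows)
        mean_rt = sum(rt for _, rt in rows) / len(rows)
        coords[key] = (pct_yes, mean_rt)
    return coords

coords = quadrant_coordinates(responses)
```

Plotting each pair at its (association, speed) coordinate then gives the quadrant view: associations that are both widely held and quickly given sit in the quadrant most likely to drive real-world behaviour.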