Why ranking questions fall short
Ranking questions can actually be among the more contentious question types to write in a survey. If you have 20 product attributes and you want to know how much a customer values them, can you reasonably expect them to rank all 20 listed out on a screen? Say you ask for their top three to make the question more manageable: how can you tell whether those top three are valued almost equally, or whether the attribute ranked number one is a clear leader while two and three trail much further behind? When it comes to analysis, are you only interested in the number of times an attribute is ranked number one, or is any top three ranking relevant? Can. Of. Worms.
If understanding how consumers rank a range of attributes, such as product features, marketing messages or brand performance indicators, is a core objective of your research, a choice-based technique can provide clear direction.
How can max diff help?
One of the most widely used choice-based techniques is maximum difference scaling, or max diff for short. Max diff is a bit of a does-what-it-says-on-the-tin technique: it ranks (or scales) the data by asking respondents to choose their most and least preferred / important answers (i.e. the answers that show the maximum difference in their preference / importance). This approach lets us create a definitive, and relative, ranking of data points.
Respondents are asked to pick their top and bottom options from a set of typically four or five shown at any one time. The exercise repeats until each answer option has appeared a minimum number of times (the number of exercises and the design of the max diff depend on how many attributes are being tested).
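For the technically curious, here's a minimal sketch of how a design like this could be generated. It's illustrative only: the build_maxdiff_design function and its parameters are inventions for this post, and real survey platforms use more carefully balanced designs (for example, controlling how often pairs of attributes appear together), which this greedy version ignores.

```python
import random
from collections import Counter

def build_maxdiff_design(attributes, set_size=5, min_appearances=3, seed=42):
    """Greedily build choice sets so every attribute appears at least
    `min_appearances` times, always favouring the least-shown attributes."""
    rng = random.Random(seed)
    shown = Counter({a: 0 for a in attributes})
    choice_sets = []
    while min(shown.values()) < min_appearances:
        # Rank attributes by how rarely they've been shown so far,
        # breaking ties randomly, and take the next set from the top.
        pool = sorted(attributes, key=lambda a: (shown[a], rng.random()))
        current = pool[:set_size]
        rng.shuffle(current)  # randomise on-screen order
        choice_sets.append(current)
        shown.update(current)
    return choice_sets

attributes = [f"Attribute {i}" for i in range(1, 21)]
design = build_maxdiff_design(attributes)
print(f"{len(design)} sets of {len(design[0])} for {len(attributes)} attributes")
```

With 20 attributes, sets of five and a minimum of three appearances each, this produces 12 choice sets per respondent.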
Every choice a respondent makes gives us a hierarchy of preference, which we analyse collectively across audiences. The output is a share of preference (or importance, depending on what you're testing) for each attribute tested, and across all attributes those shares sum to 100%. Going back to our example of 20 product attributes, if all 20 were valued equally, each would have a 5% share of preference / importance. The data never falls out this equally, but it does let us see the most preferred / important attributes and how the others compare.
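As a simplified picture of the analysis, the sketch below turns raw best / worst picks into shares that sum to 100% using plain counts. Treat it as a toy: production max diff analysis typically fits a multinomial logit or hierarchical Bayes model rather than counting, and the maxdiff_shares function and demo data here are made up for illustration.

```python
from collections import Counter

def maxdiff_shares(responses):
    """Convert (choice_set, best_pick, worst_pick) tuples into a
    share of preference per attribute that sums to 100%."""
    best, worst, appearances = Counter(), Counter(), Counter()
    for choice_set, b, w in responses:
        appearances.update(choice_set)
        best[b] += 1
        worst[w] += 1
    # Count-based score: (best - worst) per appearance, rescaled
    # from the range [-1, 1] to [0, 1] before normalising.
    scores = {a: (best[a] - worst[a] + appearances[a]) / (2 * appearances[a])
              for a in appearances}
    total = sum(scores.values())
    return {a: 100 * s / total for a, s in scores.items()}

# Tiny fabricated demo: three picks over four genres.
demo = [
    (["Drama", "Comedy", "Sci-fi", "Horror"], "Drama", "Horror"),
    (["Drama", "Comedy", "Sci-fi", "Horror"], "Comedy", "Horror"),
    (["Drama", "Comedy", "Sci-fi", "Horror"], "Drama", "Sci-fi"),
]
for genre, share in sorted(maxdiff_shares(demo).items(), key=lambda kv: -kv[1]):
    print(f"{genre}: {share:.1f}%")
```

The rescaling step is just one convenient way to keep every attribute's score non-negative before normalising, so even the least preferred attribute retains a small share rather than dropping to zero.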

In this totally made-up example max diff output, we can see that for prospects of a content service, the top three genres are equally appealing. Although ranked first, second and third, they are all extremely important: combined, the top three genres account for a third of all the new content prospects most want to watch. At the other end of the chart, we can see genres with more niche appeal compared with the other genres tested.
The output from max diff can be paired with TURF (Total Unduplicated Reach and Frequency) analysis to ensure the most effective strategy is deployed with the available resources. Would you believe we've got a blog post on TURF analysis too? What luck!
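Since we've mentioned it, here's the TURF idea in code: find the combination of attributes that reaches the most people at least once. This brute-force sketch assumes you've already decided which attributes "reach" each respondent (their top few max diff picks, say); the best_turf_combo function and example data are ours, not from any particular tool, and exhaustive search is fine at the scale of 20 or so attributes.

```python
from itertools import combinations

def best_turf_combo(reach, k):
    """Exhaustively find the k attributes whose combined reach
    (respondents covered by at least one of them) is largest.
    `reach` maps each attribute to a set of respondent ids."""
    best_combo, best_covered = None, set()
    for combo in combinations(reach, k):
        covered = set().union(*(reach[a] for a in combo))
        if len(covered) > len(best_covered):
            best_combo, best_covered = combo, covered
    return best_combo, len(best_covered)

# Fabricated example: which two genres reach the most respondents?
reach = {
    "Drama":  {1, 2, 3, 4},
    "Comedy": {3, 4, 5},
    "Sci-fi": {5, 6},
}
print(best_turf_combo(reach, 2))  # (('Drama', 'Sci-fi'), 6)
```

Note how the winning pair isn't simply the two most popular genres: Sci-fi beats Comedy into the combination because it adds respondents Drama doesn't already cover, which is exactly the unduplicated-reach logic TURF is built on.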