How do I assess the impact of my advertising? (Ad effectiveness)

  1. How do I appeal to the largest number of consumers? (TURF analysis)
  2. How do I prioritise marketing messages or product attributes? (Max Diff)
  3. How do I find out what people value in my (new) product / service? (Conjoint)
  4. How do I identify what drives a desired behaviour or outcome? (Key driver analysis)
  5. How do I know what to prioritise to meet strategic goals? (Gap analysis)
  6. How do I build consumer loyalty? (Consumer journey mapping)
  7. How do I use behavioural science to improve my research? (Cognitive biases)
  8. How do I live without you? (LeAnn Rimes)
  9. How do I know how many people will buy my product at a given price? (Van Westendorp’s price sensitivity meter)
  10. How do I assess the impact of my advertising? (Ad effectiveness)
  11. How do I turn data into clear findings? (Data visualisation)
  12. How do I tap into the unconscious perceptions that influence decision-making? (Implicit response testing)
  13. How do I reduce a large amount of data into something more meaningful? (Factor analysis)
  14. How do I group people together based on shared characteristics? (Segmentation)
  15. How do I forecast market share at a given price point? (Brand price trade-off)
  16. How do I account for cultural differences when surveying across markets? (ANOVA)
  17. How do I judge brand performance relative to competitors? (Correspondence analysis / brand mapping)

The what and the why of advertising effectiveness

Advertising effectiveness research is getting easier and harder at the same time. It’s easier because we have access to more data than ever before, and harder because there are more and more ad platforms and formats to track, each of which comes with its own unique challenges. Currently there’s no single-source dataset that will allow you to assess the effectiveness of a multi-channel campaign.

We’re lucky enough (for now!) to have sophisticated passive ad tracking technology which can identify whether someone has seen an ad, either through cookies or a digital ad tag, and then map out their online behaviour (and even some of their offline behaviour using geotagging). From 2022 this will become more challenging, when Google phases out its support for third-party cookies in its Chrome browser.

Passive data has an impressive ability to show you what has happened as a result of your campaign, but how do you know why that campaign was so successful (or not)? And if you don’t know, how can you repeat or improve your results next time?

It is because of this question that survey-based ad effectiveness research is still going strong – either as a standalone approach or, better still, combined with the types of passive technology described above.

The benefits of a survey-based approach

Fielding a survey to people who have seen your campaign gives you a chance to find out even more about the impact it has had – on brand awareness, perceptions and intention to purchase, for example. But it also allows you to ask why: what information changed their perceptions of the brand, and what did they take away from the ads? We can find out what they liked about the creative execution, and whether they felt the ads were relevant to them.

While it’s great to capture responses from people who have seen your ads in situ, it’s even better to combine that with a comparison point of those who haven’t seen your ads. This could be done by surveying your target audience before the campaign to get a baseline read for key metrics, then again after the campaign to see what uplift it has created. This is generally known as a pre and post campaign evaluation; for a big campaign you can even throw in one or two mid-campaign surveys too.
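If you want to check that an observed pre-to-post uplift is more than sampling noise, a simple significance test does the job. Here is a minimal Python sketch using statsmodels’ two-proportion z-test; the awareness counts and sample sizes are hypothetical placeholders, not figures from any real campaign.

```python
# A minimal sketch of a pre vs. post uplift check on one key metric
# (e.g. prompted brand awareness). All figures below are hypothetical.
from statsmodels.stats.proportion import proportions_ztest

pre_aware, pre_n = 312, 1000    # aware respondents / sample size, pre-campaign
post_aware, post_n = 368, 1000  # aware respondents / sample size, post-campaign

uplift = post_aware / post_n - pre_aware / pre_n

# Two-proportion z-test: is the post-campaign share significantly higher?
z_stat, p_value = proportions_ztest(
    count=[post_aware, pre_aware],
    nobs=[post_n, pre_n],
    alternative="larger",
)
print(f"Uplift: {uplift:+.1%}, z = {z_stat:.2f}, p = {p_value:.3f}")
```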

Another way to compare the impact on those who’ve seen your advertising with those who haven’t is to conduct a forced exposure exercise. This can be done by showing images and videos in the survey, or through virtual reality content that mimics the campaign environment (e.g. walking down a street and seeing billboards).

Forcing respondents to view ads in a survey also gives us a chance to check whether they can recall seeing them – a key metric. A further advantage is that respondents then don’t have to answer questions about creative execution and message saliency entirely from memory. Forced exposure is also incredibly helpful when you want to assess a very small campaign – where finding people who have seen the ads would be very tricky (and expensive!).

You can even layer a pre and post campaign evaluation with a control vs. exposed group design for greater robustness. While this might seem like overkill, designing good ad effectiveness research requires careful consideration of, and control for, a range of variables. Depending on the intended purpose of the results, a belts-and-braces approach to the research design may be required! For example, if you’re hoping to prove the value of your media channel to a client, you’ll need to be confident in your findings.
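To make the layered design concrete, here is a minimal difference-in-differences sketch in plain Python. The shares are hypothetical; the point is simply that the control group’s shift nets out background drift (seasonality, competitor activity) before you credit the remainder to the campaign.

```python
# A minimal difference-in-differences sketch for a layered design:
# pre/post waves crossed with control vs. exposed groups. Each value is
# the (hypothetical) share of respondents giving a positive response on
# a key metric, e.g. purchase intent.
metric = {
    ("control", "pre"): 0.30, ("control", "post"): 0.32,
    ("exposed", "pre"): 0.31, ("exposed", "post"): 0.41,
}

# Change within each group across the campaign period...
control_shift = metric[("control", "post")] - metric[("control", "pre")]
exposed_shift = metric[("exposed", "post")] - metric[("exposed", "pre")]

# ...and the gap between those shifts is the estimated campaign effect,
# net of whatever background drift moved both groups.
did_estimate = exposed_shift - control_shift
print(f"Exposed shift {exposed_shift:+.1%}, control shift {control_shift:+.1%}, "
      f"estimated ad effect {did_estimate:+.1%}")
```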

Considerations for your research design might include:

  - Does the opportunity to see the advertising translate to actual ad exposure? (How does recall compare with recognition among your exposed group?)
  - Are there other channels where advertising might have reached your survey respondents which need to be accounted for? (Is your control group really a control?)
  - Are there other differences between your control and exposed samples which could account for differences in key metrics? (Can you reliably isolate differences in key metrics to ad exposure?)

The sketch below illustrates one way to probe that last point.
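As one illustration, a chi-square test (here via scipy) can flag demographic imbalance between your control and exposed samples before you attribute metric differences to ad exposure. The age-band counts are hypothetical placeholders.

```python
# A minimal sketch: are the control and exposed samples demographically
# comparable? A chi-square test on (hypothetical) age-band counts flags
# imbalances that could masquerade as ad effects.
from scipy.stats import chi2_contingency

# Rows: exposed / control; columns: age bands 18-34, 35-54, 55+ (hypothetical)
counts = [
    [220, 310, 170],  # exposed
    [180, 295, 225],  # control
]

chi2, p_value, dof, expected = chi2_contingency(counts)
print(f"chi2 = {chi2:.1f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Samples differ on age – consider weighting or matching before "
          "attributing metric differences to ad exposure.")
```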