The retailer John Wanamaker is often quoted as saying, “Half the money I spend on advertising is wasted; the trouble is I don’t know which half.” While measuring the success of an advertising campaign was loosely achievable in the pre-digital era (counting foot traffic in stores after a prime-time TV commercial aired, for example), the perceived accuracy of digital campaigns is certainly a major selling point over other forms of media. The current iteration of digital measurability, however, is proving to be a double-edged sword.
There is a strong argument that determining the effectiveness of a digital campaign requires measuring everything to get a big-picture overview, and the rising prominence of big data has reinforced this belief. The problem is that this approach leads either to ‘analysis paralysis’ or to only a select few metrics being understood and reported on (mainly because most organisations are unable to make sense of hundreds of different parameters).
Take a site analytics report for an SME as an example: quarter-on-quarter growth in new users, bounce rate against industry benchmarks, and a geographical map of the site’s visitors aren’t very important metrics when all the business really wants to know is how many contact form completions it had last month.
The inverse is also true: larger organisations that are well-versed in digital marketing often request reports outlining every single variable, leaving the reporter trying to make sense of how a rise in mobile referral traffic fits with consistent pages viewed per session and a decreased average order value from organic users during a 48-hour period (cross-referenced against engagement across their social media accounts and Vimeo channel).
The other issue associated with businesses’ unwavering desire to measure everything digital is that ultimately only part of the picture gets painted. Take a standard digital display banner as an example: the go-to metrics that determine ‘success’ are usually click-through rate (CTR) and/or click-through conversions. Looking only at these figures, however, means accountability usually lies with the agency responsible for implementing the display banner in a campaign; not the creative agency or design team who decided on the call-to-action or creative elements used. While split-testing these types of ads is a (merely) adequate way of determining which variations are working well, measuring brand uplift or recall during and after a display banner campaign is a vastly more important measurement – especially considering that users are statistically more likely to summit Mt Everest than to click on a banner ad.
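To make the split-testing point concrete, here is a minimal sketch of how a banner split-test is typically judged: compare the CTRs of two variants with a two-proportion z-test. The figures and function name are hypothetical, purely for illustration.

```python
from math import sqrt

def two_proportion_z(clicks_a, impressions_a, clicks_b, impressions_b):
    """Z-statistic for the difference between two click-through rates."""
    p_a = clicks_a / impressions_a          # CTR of variant A
    p_b = clicks_b / impressions_b          # CTR of variant B
    # Pooled CTR under the null hypothesis of no difference
    p = (clicks_a + clicks_b) / (impressions_a + impressions_b)
    se = sqrt(p * (1 - p) * (1 / impressions_a + 1 / impressions_b))
    return (p_a - p_b) / se

# Hypothetical test: variant A gets 120 clicks from 100,000 impressions,
# variant B gets 90 clicks from 100,000 impressions
z = two_proportion_z(120, 100_000, 90, 100_000)
# |z| > 1.96 indicates a difference at the 5% significance level
```

Note the scale of the numbers involved: with CTRs around 0.1%, it takes hundreds of thousands of impressions before a difference between variants becomes statistically significant – which is precisely why the banner’s click metrics, on their own, paint such a small part of the picture.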
Currently, the ability to measure hundreds of variables in a digital campaign is certainly present. The capability to make sense of it all, however, is not. Despite the continuous advances made with marketing automation and analysis tools, there is still a human element involved with any good digital campaign.
However, the (reasonably) recent advent of machine learning in advertising has revolutionised the way ads are bought, and is also set to quickly replace the human-shaped bottleneck in the reporting process. With several AI-based ad-buying platforms now offering API plugins for dashboarding and reporting, the role of the digital marketing professional will shift away from drawing probable conclusions from raw data, and towards telling a meaningful story with accurate insights generated from statistically significant, highly relevant, curated data sets.
While Wanamaker’s conundrum may have been solved by the financial measurability of digital campaigns, there is still a vast amount of ground to cover before advertising dollars can be spent anywhere close to 100% efficiency. Until then, listening to the client’s needs and figuring out what is relevant to them is far more important than slicing and dicing 500,000 cells’ worth of data into something vaguely comprehensible.
Alternatively, we can just wait until our robot overlords replace us…