Establishing a clear division of roles between players and referees is central to the smooth, fair running of sporting contests. The presence of an impartial third party facilitates even treatment of both sides, ensures that rules are consistently applied, and delivers an end result which can be trusted. The same principles can be applied to the digital advertising world.
In sport, a player attempting to officiate the game themselves will be quickly corrected by the indisputable decisions of the referee or umpire. In digital advertising, where all sides of the ecosystem have access to a multitude of first- and third-party tools, there is a more complex need to establish what precisely is being reported in each instance, and which numbers are suitable for the purposes of buying, selling or evaluating campaigns. Few publishers or ad servers would deliberately mislead current and potential advertising partners, but most would naturally prefer to showcase their value using the metrics and measures that show them in the best possible light.
The issue here is that if media sellers ‘grade their own homework’ using a variety of tools and measures, it becomes significantly more difficult, if not impossible, to reliably compare results. This does not help advertisers, and the ensuing confusion also harms publishers’ ability to demonstrate the value of their inventory; if every seller can simply select its best set of numbers, truly valuable inventory cannot stand out as prominently, or justify more premium pricing. Referees are needed to keep official scores.
A key example in today’s market is the subject of ad viewability. The Media Rating Council (MRC) defines a viewable ad as one with at least 50% of its pixels in view for a minimum of one continuous second in the case of display ads, and two continuous seconds in the case of video ads. This benchmark may seem relatively straightforward, but the complexity for advertisers and their agencies comes from disparities between vendors of viewability measurement. If the same campaign or media entity can be measured at two different levels, it creates a tough decision: take the ‘easier’ option of the higher figure, or apply the more stringent measurement and work hard to increase that figure.
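As a rough illustration of how that benchmark translates into a concrete check, the sketch below applies the MRC thresholds to a single measured impression. The data structure and field names are invented for this example rather than taken from any vendor’s actual measurement API.

```python
# Illustrative sketch only: a simplified check against the MRC viewability
# thresholds described above. The fields and how they would be populated
# are assumptions for the example, not any vendor's real API.
from dataclasses import dataclass

@dataclass
class ImpressionMeasurement:
    ad_type: str                       # "display" or "video"
    pixels_in_view_pct: float          # share of the ad's pixels inside the viewport (0-100)
    continuous_seconds_in_view: float  # longest unbroken time the ad stayed in view

def is_viewable(m: ImpressionMeasurement) -> bool:
    """Apply the MRC benchmark: 50% of pixels in view for at least
    1 continuous second (display) or 2 continuous seconds (video)."""
    required_seconds = 2.0 if m.ad_type == "video" else 1.0
    return m.pixels_in_view_pct >= 50.0 and m.continuous_seconds_in_view >= required_seconds

print(is_viewable(ImpressionMeasurement("display", 62.0, 1.3)))  # True
print(is_viewable(ImpressionMeasurement("video", 80.0, 1.5)))    # False: video needs 2 seconds
```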
To stretch the sporting metaphor a little further, it might seem desirable to watch games with the highest possible scores, but not if this comes at the expense of fairness. The same applies in the marketing world. Higher viewability percentages are desirable for publishers, who can use them to justify the quality of their inventory, and they are useful to agencies as another indicator of the performance they are delivering for their clients. In reality, these scores are only meaningful if they are legitimate and allow for an even evaluation between campaigns or media properties. For example, if half of the impressions behind a ‘high’ viewability figure were served to invalid traffic (IVT), the performance genuinely delivered to human audiences, and therefore the performance reported for other metrics, is effectively halved.
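To make the arithmetic behind that example explicit, here is a toy calculation with invented campaign numbers showing how a reported viewability rate collapses once bot impressions are filtered out.

```python
# Toy numbers, not real campaign data: how invalid traffic (IVT) distorts a
# reported viewability rate once bot impressions are removed.
measured_impressions = 1_000_000
reported_viewability = 0.70          # 70% of measured impressions deemed "in view"
ivt_share = 0.50                     # half of those in-view impressions went to bots

viewable_impressions = measured_impressions * reported_viewability
human_viewable = viewable_impressions * (1 - ivt_share)

# Viewability that could actually drive brand or sales impact
effective_viewability = human_viewable / measured_impressions
print(f"Reported: {reported_viewability:.0%}, human-viewable: {effective_viewability:.0%}")
# Reported: 70%, human-viewable: 35% (the figure is effectively halved)
```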
IVT inflates viewability numbers by giving credit to impressions ‘seen’ by bots, which in reality have no chance of driving brand or sales impact. Not only does this create measurement inconsistencies; in the worst cases it enables fraudsters to continue siphoning advertising budgets illegally, sustained by a reluctance to accept lower viewability figures.
In addition to this fundamental issue, there are a host of more nuanced questions. Does a particular tool measure all open browser windows, or just the one on top? Can it spot ads served outside of the viewable window, or multiple ads stacked on top of each other? Referees are needed to interpret the rules and ensure they are understood and applied in the most uniform manner possible.
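Purely as a hypothetical illustration of the kinds of checks those questions point to, the sketch below discards impressions that a stricter methodology would never count; the signals and rules are assumptions made up for the example, not a description of any accredited vendor’s actual logic.

```python
# Hypothetical methodology checks applied before any pixel/time measurement.
# The signals and rules are assumptions for this sketch only.
from dataclasses import dataclass

@dataclass
class AdSlotState:
    browser_window_focused: bool   # is the window containing the ad the one on top?
    intersects_viewport: bool      # does the slot overlap the visible viewport at all?
    covered_by_other_ad: bool      # is another creative stacked on top of this one?

def passes_methodology_checks(state: AdSlotState) -> bool:
    """Discard impressions that a stricter methodology would never count."""
    if not state.browser_window_focused:
        return False   # background windows cannot produce a viewable impression
    if not state.intersects_viewport:
        return False   # served entirely outside the viewable window
    if state.covered_by_other_ad:
        return False   # stacked ads: only the top creative can actually be seen
    return True

print(passes_methodology_checks(AdSlotState(True, True, False)))   # True
print(passes_methodology_checks(AdSlotState(True, False, False)))  # False: off-screen
```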
To this end, MRC accreditation of viewability providers now verifies vendors against multiple points of methodology. Selecting a viewability measurement provider can now be an informed decision, based on capabilities and methodologies, rather than simply an emotional choice based on which outputs look more desirable. Knowing the ‘rules’ being applied helps all sides set expectations, make necessary adjustments and improve performance for mutual benefit.
Just as sport would descend into chaos without referees or if teams were playing to different sets of rules, the growth of the digital industry relies on consistent application of clear measurement criteria, using third-party referees. There is no benefit to taking shortcuts here – setting higher standards challenges all participants to work harder, but allows the truly valuable players to shine.