Awards organisers demand measurement, but…


In recent years, awards have become an important feature of the PR landscape, both here in the UK and in the US. Companies and their agencies devote considerable time and resources to putting together awards submissions that they hope will end up as winners – so publicly validating the quality of their campaign efforts. Increasingly, awards organisers – whether trade media or PR and research associations – are specifying that entries must include a section on the measurement and research used to plot PR results against business or other goals.

From the TMP team’s combined experience of judging awards entries over the years, it’s clear, however, that this requirement is still more honoured in the breach than in the observance. Given that most awards entries are prepared by PR agencies acting on behalf of their clients, it’s perhaps understandable (though not forgivable) that the research and evaluation component so often ends up being treated as an afterthought. And given that most small to mid-tier PR agencies are still, by and large, light on research and data skills, perhaps we shouldn’t expect much in the way of joined-up thinking between effort and quantifiable results.

However, this doesn’t lessen the unfortunate fact that entry writers so often fail to grasp what a research and measurement component could look like in the context of an award entry. Even entries to the prestigious Silver Anvils in the US are prone to this.

Our colleague and friend Angela Jeffrey has been a long-time evangelist for the impact that measurement can bring to PR planning and effectiveness. She has explored in some detail the interrelationship between media outputs and business outcomes – one of the key tenets of the Barcelona Principles (you’ll find her important paper on this topic here). Recently, Angie was a judge for the US Silver Anvils, and was dismayed at the paucity of serious measurement and evaluation in the entries she judged. For our blog, we asked Angie to write this note about her experience:

I recently helped judge the Public Relations Society of America’s Silver Anvil Awards. The Silver Anvils are still considered the “Oscars” of the PR profession, so being on the judging panel was a great honor. But I was not at all prepared for the state of PR measurement in the entries I reviewed. In fact, out of 27 otherwise robust entries, not one was judged strong enough for an Anvil.

Why? Award entrants appeared to have no understanding of what would be considered “research,” “measurable objectives” or “evaluation.” Out of the 27 entries, only one, maybe two, made any attempt at primary research, and only a couple quantified their objectives and then matched them in some way to the results/evaluation section. Of those entries, sadly, none had the creative chops to be Anvil-worthy.

As for the rest, we saw some incredible numbers compiled in the results/evaluation section – many including business outcomes like sales, event attendance and funds raised for charities. But these results didn’t match the objectives. These entries were almost painful to read, because many of the events were terrific and showed a lot of creativity. There were several potential Anvil winners *if only* the research-objectives-evaluation portions had actually tied together.

This isn’t a question of great skills or resources, either. I have been honored with two Silver Anvils in my career. Neither was for a big organization or agency; both were submitted on an almost solo-practitioner basis. In both cases I’d had little experience in writing PR award entries, but I came up with research, measurable objectives and results that not only matched, but that used data from other parts of the organizations to make the cases.

For instance, to measure an increase in “high-end shoppers” at a retailer’s grand re-opening, we used the pre/post increase in credit card purchases (which was how this company defined that target) and sales increases for our district compared with 40 others. For the second event, which had only $14,000 in paid advertising, we could rightly claim increases in revenue against objectives, since PR had been the only driving force.

Bottom line – both event budgets were too low to allow primary research, but secondary research led us to quantifiable objectives, and to results data via our finance department. Relevant data is nearly always available to us as practitioners – but finding it demands a willingness to explore what already exists in our own, or our clients’, organizations.

My point? If someone like me, off the big-company/big-agency circuit and with no training, could figure out work-arounds to meet the Anvil requirements, how on earth are big agencies and big corporations missing them today, after years of teaching and preaching about how to measure? The majority of our 27 entries came from massive brands and their agencies – they really should have been able to do better.

Clearly, there is much work still to be done by PR measurement & evaluation evangelists – not just to train the profession on how to write a great award entry, but also to demonstrate how to put together truly effective, outcomes-based programs.

What really jumps out of this cri de coeur from Angie is that clients (and their agencies) could be doing so much more with their measurement (assuming they have any in place). Practical training is critical for upskilling savvy PR people, who can then implement a strong measurement programme either using their own resources or with the support of one of the many specialist measurement providers available. The Measurement Practice offers focused measurement training and workshops to businesses, organisations and agencies – a first step towards ensuring awards entries aren’t a waste of time and money.

More information on The Measurement Practice’s training & workshop service can be found here.

In the end, though, there’s also a big question for the awards industry: given the commercial value of awards, how much pressure can organisers afford to put on entrants to encourage greater compliance with their terms of entry?

We would love to hear your comments on the extent to which the lack of measurement in awards entries reflects a deeper reluctance amongst PR professionals to see measurement as a powerful communications management tool rather than a distraction.

Angie is Vice President Brand Manager at Ad Benchmark Index, a leader in global advertising research. She can be reached at Angie@adbenchmark.com

3 Comments

  1. Liam Kelly says:

    Angie’s comments on training sum up the issue for me. The Silver Anvils are all about incorporating sound research, planning, execution and evaluation. The awards have well-defined and clearly explained criteria, and the requirement to link objectives and evaluation is explicit. The seeming failure of entrants to match objectives to measured results is therefore worrying, and suggests a lack of understanding of the importance of objectives. The Barcelona Principles are designed around measuring well-defined objectives, and simple training can show how to apply the principles to every campaign, regardless of budget.

  2. Colin says:

    Hits the nail on the head (Anvil?).

    One key element missing from almost all awards entries is the failure to ask, ‘What does success look like?’ beyond the obvious. And that question does not stand alone – there is an essential follow-up question practitioners should be asking: ‘How do I show that this activity has contributed to that success?’

    I’m convinced that the majority of awards entries start from the premise of a perceived ‘cool’, or ‘creative’ campaign which seems to have done OK. The ‘How …?’ question is asked after the activity, leaving agencies and their clients scrambling to find some measures which might support the claim that the campaign delivered against the objectives. Equally, I’ve come across entries where the objectives were obviously changed afterwards to fit the results of whatever metric the team could find.

    Training on some basic principles – such as setting your goals, _and how to measure them_, in advance – is the obvious remedy, but we should also help teams understand how more effective evaluation leads to better, more effective campaigns.

    Finally, submitting an entry for a campaign that is creatively or intellectually strong and robust, but then offering only weak evidence, is disrespectful to the agency and client teams involved.

    • Mike Daniels says:

      It’s worth noting that this issue of awards judging hasn’t gone unnoticed at the IPR’s Measurement Commission. There’s a recently published paper here: http://www.instituteforpr.org/ipr-measurement-commission-judging-guidelines/ that is designed to help judges apply more stringent (and consistent) criteria to at least the measurement/evaluation component of awards entries… Of course, given that most agencies and clients scrabble around for evidence of success after the campaign has finished, as you say Colin, most entries will continue to fall short in this area. If awards organisers apply the criteria and entrants are aware of them in advance, then we may see some evidence of change. I wish I could be more hopeful of that in the short term, though…
