A few years back, the top PR researchers in the field tried to tackle an intractable problem -- how to reliably measure the tonality of news.
Frustrated by research approaches that yielded inconsistent results and unreliable data, industry leaders had flown to Barcelona to launch a push to standardize PR measurement. Standards, so the thinking went, would get everyone on the same page, effectively transforming unreliable data into reliable data. Voila.
Unfortunately, one of the first efforts to crack the code on tonality had fallen well short of that goal. Three coders read dozens of news stories about major companies and, working from a code frame devised as an industry standard, were asked to record basic information, including whether the tone of each story was positive, negative, neutral or balanced.
"The research results yielded low to moderate intercoder reliability," the researchers reported. It didn't work.
A few years later, they tried again, this time with experienced coders, a refined code frame, and a renewed push to demonstrate the viability of the measurement standards. This time, they cleared the bar. But just barely.
Why? Because tonality is simply not a discrete metric. It cannot be counted.
In the study, for example, the coders were asked to assess the influence of a news story on a reader's likelihood to support, recommend, work for, or do business with a company mentioned in the coverage. Those are four separate and distinct things.
That's not to say news does not impact those four variables. It does. No question. Proprietary research has connected news to brand equity, purchase intent, employee satisfaction, and sales. Beyond that, news also correlates with customer satisfaction scores, and is a trigger for regulatory, legislative and judicial inquiries.
In business terms, news is an external variable associated with a broad spectrum of business impacts.
The trick, then, is to rationalize our media measurement to better monitor, manage, and at times mitigate those impacts. This is part of the new math we talked about in Turning down the Volume. Like news volume, our tonality scores will need to be quantified as part of a discrete time-series dataset representing news coverage.
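In practice, quantifying tonality as a discrete time-series dataset means collapsing article-level scores into one value per calendar day. Here is a minimal sketch of that step in Python with pandas; the column names, dates, and scores are illustrative placeholders, not Meltwater's actual schema or output.

```python
# Minimal sketch: collapsing article-level tonality scores into a daily
# time series. Assumes each story carries a date and a machine-generated
# tonality score in [-1, 1]; the sample data below is made up.
import pandas as pd

articles = pd.DataFrame({
    "date": pd.to_datetime([
        "2021-03-01", "2021-03-01", "2021-03-02", "2021-03-03", "2021-03-03",
    ]),
    "tonality": [0.4, -0.2, -0.6, 0.1, 0.3],
})

# Average tonality per day, reindexed to a continuous daily calendar so
# days with no coverage appear explicitly (as NaN) rather than vanishing.
daily = (
    articles.groupby("date")["tonality"].mean()
    .reindex(pd.date_range("2021-03-01", "2021-03-03", freq="D"))
)
print(daily)
```

The reindexing step matters for the business-model integration discussed later: a time-series model expects one observation per period, including the quiet days.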
What does that mean in practice?
The Whiteboard above visualizes average daily tonality scores generated by trained machines. Using Meltwater online news aggregation tools and some intuitive Boolean search strings, you can generate years of visualizations like this one fairly quickly. We built 20 of these in an NC State undergraduate class last semester, and are building 20 more this semester.
Here are some of the things we learned. One, the highest and lowest average daily news tonality scores tend to occur on days when there is relatively little coverage. Two, the tonality scores do not correlate with survey data tracking consumer perceptions of news. In fact, statistically speaking, that relationship is close to random. Three, not surprisingly, news tonality skews negative. More on all of that in future blogs.
And finally, tonality scores and news volume both measure news coverage, but the two datasets do not move in the same direction at the same time. That's problematic when it comes to generating a single data stream representing published news that can be integrated with time-series business models.
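One quick way to see that divergence is to correlate the two daily series directly. The sketch below uses Pearson correlation on aligned pandas series; the volume and tonality numbers are invented for illustration, not drawn from the class datasets.

```python
# Minimal sketch: testing whether daily news volume and daily average
# tonality move together. All values here are illustrative placeholders.
import pandas as pd

dates = pd.date_range("2021-03-01", periods=6, freq="D")
volume = pd.Series([120, 80, 15, 200, 95, 10], index=dates)        # stories per day
tonality = pd.Series([-0.1, 0.0, 0.7, -0.3, 0.1, -0.8], index=dates)  # avg daily tone

# Pearson correlation on the shared date index. A value near zero means
# the two series carry largely independent information about coverage,
# which is exactly what makes merging them into one stream non-trivial.
r = volume.corr(tonality)
print(r)
```

A correlation near zero (or one that flips sign across time windows) is the signal that volume and tonality cannot simply substitute for one another in a business model.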
Time for a little alchemy.
Because business decisions are guided by multivariate equations -- and knowing that news coverage is a meaningful variable in the equation -- we will need a single dataset that represents news coverage over time. In Forging the News Stream, we will explore ways to integrate news volume and tonality into a single reliable, validated and replicated time-series dataset that fits into business models.
But first, let's explore the PR Measurement Trap.
Back to the Whiteboard.
Meanwhile, if you want to connect or provide feedback, feel free to email me at Jim.Pierpoint@HeadlineRisk.com
Copyright Headline Risk LLC, 2021. All rights reserved