Turn down the Volume

Updated: Jan 7






While it would seem fairly straightforward, measuring news volume over the years has proven to be somewhat tricky, and has only become more challenging as media has gone digital.


Back in the day, things seemed simpler. PR agencies compiled thick clip books that landed with a thump on clients’ desks, physical evidence of their success. Years later, the volume and tonality of news coverage became proxies for media reach and influence with customers.


Around the turn of the century, firms mastered the art of automated Boolean searches feeding into online dashboards. And as the media shifted to online news sites, webmasters in control rooms backlit by banks of computer screens held out the promise of direct attribution and online optimization to quantify the economic value of news.


Still, none of these approaches has effectively aligned news coverage with business performance. If we are going to attribute news outputs to a broad range of business outcomes, we will need a new math.


Because business relies on multivariate analytic models, news will need to be quantified as discrete time-series data. Discrete data is based on counts: think of each story as one clip, counted on the day it was published.
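As a minimal sketch of what that might look like in practice, the Python snippet below turns a list of story timestamps into a daily clip-count series. The export file name and the "published" column are hypothetical stand-ins for whatever your monitoring tool produces, not a prescribed workflow.

    import pandas as pd

    # Illustrative assumption: a CSV export with one row per story and a
    # "published" timestamp column (file and column names are hypothetical).
    stories = pd.read_csv("clips_export.csv", parse_dates=["published"])

    # Treat each story as one clip and count clips per calendar day.
    # Days with no coverage count as zero -- a discrete time series.
    clip_counts = (
        stories
        .set_index("published")
        .resample("D")
        .size()
        .rename("clips")
    )

    print(clip_counts.head())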


Right now, that's not the case when it comes to media monitoring. News volume has largely been held out to represent several different things -- the number of stories published, but also an estimate of the relative number of potential readers or viewers, or even the actual number of readers or viewers.


See a potential problem here? Counting clips is one thing. But the days of basing news volume on audited circulation are long gone. And estimating actual reach from newspaper circulation or television ratings always involved an unspoken assumption -- that every reader read every story in the paper, and every viewer watched every story on the news.


The Whiteboard above visualizes a typical news stream based on simple, discrete clip counts. Using Meltwater online news aggregation tools and some intuitive Boolean search strings, you can generate years of visualizations like this one fairly quickly. We built 20 of these in an NC State undergraduate class last semester.
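The Whiteboard chart itself came from commercial tooling, but a rough approximation of the same picture can be drawn from the daily counts built in the sketch above. Matplotlib and the styling here are assumptions for illustration, not a prescribed toolchain.

    import matplotlib.pyplot as plt

    # Continuing from the clip_counts series built above: one bar per day,
    # bar height equal to the number of stories published that day.
    fig, ax = plt.subplots(figsize=(10, 3))
    ax.bar(clip_counts.index, clip_counts.values, width=1.0)
    ax.set_xlabel("Date")
    ax.set_ylabel("Clips per day")
    ax.set_title("Daily news volume (clip counts)")
    plt.tight_layout()
    plt.show()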


What does the data tell us? Any number of things:

  • Typically, when it comes to news volume, public companies see a spike in January. That's a signature pattern reflecting press coverage of year-end earnings (see the sketch after this list).

  • When major news breaks, second-day story counts are typically higher than day one. That reflects an online echo of breaking news.

  • A distinctive cluster of news signals a protracted news cycle, often the catalyst for a crisis event. More on that down the road.
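One hedged way to surface the first of those patterns: flag days where clip counts jump well above a trailing baseline, then check which months the flagged days cluster in. The 28-day window and three-standard-deviation threshold are illustrative assumptions, not calibrated values, and the sketch continues from the clip_counts series built earlier.

    # Trailing baseline: rolling mean and standard deviation of the prior
    # 28 days (shifted so today's count is not part of its own baseline).
    mean = clip_counts.rolling(window=28, min_periods=7).mean().shift(1)
    std = clip_counts.rolling(window=28, min_periods=7).std().shift(1)

    # A spike day is one that clears the baseline by three standard deviations.
    spikes = clip_counts[clip_counts > mean + 3 * std]

    # If spikes cluster in month 1, that is consistent with the January
    # earnings-coverage pattern described above.
    print(spikes.groupby(spikes.index.month).size())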

What does the data not tell us? Again, a number of things:

  • News volume cannot measure reach, or how many people saw or heard coverage about a company. We'll look at those correlations in the Third Rails blog.

  • Content data cannot measure consumer recall, which in some cases is virtually nil, and in other cases can stretch for months and even years. We'll explore this in Signals and Noise.

  • Media data by definition is rearward-facing and cannot by itself be trended forward, a capability that is critical to navigating a crisis. Look for more on this in the Long Tails blog.

Bottom line -- there is a compelling opportunity for communication research to refine our media monitoring, reconciling what was published with what people actually saw or heard. Factor in time, and we have a three-dimensional measurement framework essential for gauging news impacts.


Next step: Tackling Tone-deaf Sentiment data.


Back to the Whiteboard.


Meanwhile, if you want to connect or provide feedback, feel free to email me at Jim.Pierpoint@HeadlineRisk.com


Copyright Headline Risk LLC, 2021, All rights reserved








