You might not have known it, but it was party time in Vegas for television executives this past week during the annual NATPE event. Actually, I’m sure the mood was pretty subdued, with revenues being down 20-30 percent and all.
One of the dustups in the desert involves changing the way we measure audiences. For decades, there has been a sort of wink-and-a-nod agreement between buyers and broadcasters that the ratings are whatever the statistical models say they are.
Well, that’s not cutting it anymore. Advertisers paying the bills want actual head counts, and frankly, they want sales conversion data too. After all, the web delivers it (not including the fraud) – why not TV?
Proposing technical solutions to the audience quantification and analysis problem is a surefire way to upset the apple cart – nobody wants to hear about fewer viewers, channel hopping, commercial skipping, time shifting and wayward eye scans. All that pesky measurement is sure to bring some unpleasant news about how flighty viewers are.
So now we’ve got a little battle brewing between national programmers and their advertisers, who have robust access to cross-platform distribution opportunities, and local TV station ownership groups such as Seattle-based Fisher Communications, who really don’t yet and don’t want to hear about how they’re not being accessed in the non-broadcast space – or at least fear that the locals will be left in the dust as new measurement protocols are established.
Meanwhile, trying to figure out just how to get accurate audience measurement is a huge technical problem. Too bad every single solitary device out there doesn’t have its own IP address that could feed back real-time viewing data for aggregation. Hmmmm.
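For the curious, here’s a minimal sketch of what that kind of per-device reporting could look like – actual head counts tallied from device reports rather than projected from a sampled panel. Everything in it (the ViewingEvent record, the Aggregator, the device addresses and station call signs) is hypothetical illustration, not any real measurement vendor’s system.

```python
# Minimal sketch: IP-addressable devices report tune-in events, and a collector
# keeps running head counts per channel. All names here are hypothetical.

from collections import Counter
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class ViewingEvent:
    device_ip: str       # unique address identifying the reporting device
    channel: str         # what the device is tuned to
    timestamp: datetime  # when the tune-in was observed


class Aggregator:
    """Collects events pushed from devices and keeps running head counts."""

    def __init__(self) -> None:
        self.events: list[ViewingEvent] = []

    def report(self, event: ViewingEvent) -> None:
        # In a real deployment this would arrive over the network from the
        # device; here it is just an in-process append.
        self.events.append(event)

    def head_count_by_channel(self) -> Counter:
        # Count distinct devices per channel -- an actual tally, not a
        # statistical projection from a sampled panel.
        seen: dict[str, set[str]] = {}
        for e in self.events:
            seen.setdefault(e.channel, set()).add(e.device_ip)
        return Counter({ch: len(devs) for ch, devs in seen.items()})


if __name__ == "__main__":
    agg = Aggregator()
    now = datetime.now(timezone.utc)
    agg.report(ViewingEvent("10.0.0.12", "KOMO-4", now))
    agg.report(ViewingEvent("10.0.0.37", "KOMO-4", now))
    agg.report(ViewingEvent("10.0.0.12", "KOMO-4", now))  # same device, counted once
    agg.report(ViewingEvent("10.0.0.88", "KING-5", now))
    print(agg.head_count_by_channel())  # e.g. Counter({'KOMO-4': 2, 'KING-5': 1})
```

Of course, the hard part isn’t the counting – it’s getting every set-top box, TV and stream to phone home in the first place, and agreeing on who gets to see the numbers.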