The April 8, 2007, New York Times had an extraordinary self-indictment of numbers abuse:
Every Monday, a Times ranking of the top 10 prime time broadcast television programs uses a Nielsen rating that indicates how many households watched each show the previous week. On March 26, “60 Minutes” ranked No. 8 with a 9.2 Nielsen rating. (Each rating point represents 1.1 million homes.) With a margin of error of 0.3-rating point...there was no statistically significant difference between the rating of “60 Minutes” and any of the three programs above it in the ranking, or either of the two below it. With no mention of the margin of error, however, Times readers were left to believe the rankings really meant something.
Turns out omitting the margin of error is not new:
Over the past 25 years, only two of the 3,124 archived articles that mentioned Nielsen and “ratings” included a reference to the margin of error.
The piece was by Byron Calame, until recently The Times’ Public Editor. As “readers’ representative,” Calame independently investigated reader questions and complaints. In this case, he contacted Nielsen and questioned Times editors responsible for running the numbers.
The Nielsen spokesperson said the numbers were “estimates,” “should not be construed literally,” and lacked margin of error data due to resource constraints on Nielsen’s side.
Is that a problem? A Times editor's response, paraphrased: no one else shows margins of error, so why should we?
Calame asked another editor why The Times did not at least tell readers that Nielsen does not provide the margin of error. The explanation is telling: “If we run a large disclaimer saying, in effect, this company is withholding a critical piece of information, I imagine many readers would simply turn the page.”
Okay, thanks for clarifying the priorities.
Calame’s piece called on The Times to do better, and if nothing else, The Times deserves credit for encouraging this criticism from within.