Here's the latest report in Columbia Journalism Review taking a look at the polling controversy in Florida:
Herald’s Caputo dives deep on diverging polls
Do other news organizations undermine their credibility when they don’t do the same?
Voters here have reason to be confused this week as they look at two polls, coming out one day apart, with one showing Barack Obama leading Mitt Romney in the state and the other showing Romney leading Obama.
We begin with a poll released May 23 from Quinnipiac University, which showed Romney with a six-point lead over Obama in Florida (47-41, with a reported margin of error of 2.4 points). Predictably, the poll caused a flurry of news reports:
“Romney Leads Obama in Key State of Florida, Poll Says” — Fox News
“In a major reversal in an important swing state, Mitt Romney now tops President Obama in Florida” — New York Daily News
“Poll: Romney takes lead in Florida” — CNN
“Romney Widens Lead Over Obama Among Voters in Florida” — Bloomberg
“Q Poll: It’s Romney beating Obama in Fla.” — Orlando Sentinel
But all those headlines came into question when Marist College released a poll the next day—that is, today—showing Obama leading Romney in Florida by four points (48-44, with a three-point margin of error). The early headlines were again predictable:
“Polls: Obama leads in key states of Ohio, Fla. and Va.” — USA Today
“New polls in 3 battleground states give Obama slight edge” — CNN
“New poll shows Obama with edge in 3 key races” — CBS News
But why were two polls, a day apart, so far apart? Few attempted to address that question. This was typical, from CBS News: “The Marist poll contrasts with one released Wednesday by Quinnipiac, which showed Romney up six points, 47 percent to 41 percent, in the crucial battleground state of Florida.”
Contrasts? Well, yes. But is that all we might want to know?
The first real questions came from Florida Democratic political operative Steve Schale, who challenged the methodology used by Quinnipiac. Schale argued on his blog that Quinnipiac had oversampled Republicans and white voters. Schale's analysis is interesting, though some will surely disagree.
Meanwhile, The Miami Herald’s Marc Caputo did a commendable job of exploring the reasons why Marist and Quinnipiac arrived at different results. In a post on the Herald’s “Naked Politics” blog, Caputo spoke to the pollsters and experts in the field. He noted that Marist worked off registered-voter lists and Quinnipiac did not. He examined why the two pollsters used different methods and let them defend their methods.
To recap: When more Democrats account for the overall sample, the Democrat tends to win. And when more Republicans are sampled, the Republican wins. And the loser whines that the poll isn’t accurate because the demographic breakdown doesn’t mirror registration or performance in a state where Democrats cast anywhere from 37 percent to 42 percent of the ballots and Republicans cast anywhere from 38 percent to 41 percent of the ballots in recent Florida presidential elections.
However, pollsters say the surveys (and many others like them) are actually more accurate than they appear because they're technically not gauging actual voter registration. They're gauging self-described party ID of voters. They're getting a sense of the voter writ large, although there is considerable overlap between self-described party ID and self-described party registration.
Still, if the mood of the electorate shifts left, polls like this tend to pick up more people who say they’re Democrats. And if the mood of the electorate shifts right, these polls tend to pick up more self-identifying Republicans.
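The effect Caputo describes can be seen with simple arithmetic: if support within each party-ID group is held fixed, shifting the party mix of the sample alone can flip the topline. The numbers below are purely illustrative assumptions (not from either poll), chosen only to show the mechanism:

```python
# Illustration with hypothetical numbers: how the party-ID mix of a
# poll's sample shifts the topline, even when support within each
# group never changes.

def topline(sample_mix, support):
    """Weighted average of candidate support across party-ID groups.

    sample_mix: fraction of the sample in each group (sums to 1.0)
    support:    fraction of each group backing the candidate
    """
    return sum(sample_mix[g] * support[g] for g in sample_mix)

# Assume (for illustration only) each candidate holds 90% of his own
# party's identifiers and independents split evenly.
obama_support  = {"dem": 0.90, "rep": 0.05, "ind": 0.45}
romney_support = {"dem": 0.05, "rep": 0.90, "ind": 0.45}

# A sample with more self-identified Republicans...
gop_heavy = {"dem": 0.32, "rep": 0.39, "ind": 0.29}
# ...versus one with more self-identified Democrats.
dem_heavy = {"dem": 0.39, "rep": 0.32, "ind": 0.29}

for name, mix in [("GOP-heavy", gop_heavy), ("Dem-heavy", dem_heavy)]:
    o = topline(mix, obama_support)
    r = topline(mix, romney_support)
    print(f"{name}: Obama {o:.0%}, Romney {r:.0%}, margin {o - r:+.0%}")
```

With these made-up inputs, the same underlying opinions produce roughly a Romney +6 result under the GOP-heavy mix and an Obama +6 result under the Democratic-heavy mix, a swing of the same order as the gap between the two Florida polls.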
The specific question Quinnipiac says it asks: “Generally speaking, do you consider yourself a Republican, a Democrat, an Independent, or what?” It’s not asking: “What is your official party registration?” (The latter is a variant of the type of question asked by The Miami Herald/Tampa Bay Times’ pollster, Mason Dixon Polling & Research).
Though the two questions are similar (and both rely on people to tell the truth), they can produce different bottom-line demographics because they ultimately ask different things. That is, one question asks: How do you feel? The other: What are you?
This is fairly deep in the methodological weeds, as befits a blog whose audience is disproportionately politics junkies, but it is also the kind of reporting that takes a complicated subject and makes it easy to understand. Caputo is to be applauded for helping readers understand why the two polls appeared so dramatically different.
I have long been concerned about the careless way most news organizations deal with polls. I tackled this subject in a CJR report in January, in which I took the Tampa Bay Times and Miami Herald to task for what I believe was sloppy polling and reporting.
Which raises the question: How did I handle the Quinnipiac poll when I wrote about it for my own site, CrowleyPoliticalReport.com? I chose to focus on the trend numbers rather than the horserace. Trend results are more meaningful and more likely to be accurate. I also provided readers with the full results of the polls, something very few news organizations do.
I have been a critic of the Quinnipiac poll and the media’s use of polls in general. I continue to be skeptical. (CJR’s Brendan Nyhan has also expressed skepticism about the value of state-specific presidential polls, especially at this point in the campaign.)
But as long as polls continue to drive coverage, I hope that reporters for other news organizations will take the critical eye and in-depth approach that Caputo did. And perhaps it is time for media outlets to rethink their relationship with the business of polling. Has any news organization actually visited a pollster to watch them in action? And how much harm are news organizations doing to their own credibility when they offer viewers and readers two markedly different polls just a day apart with no explanation?