Please note: This is a very long article. It includes information from the American Association for Public Opinion Research and the National Council on Public Polls. It begins with a brief assessment of current polling in Florida, followed by important information from those two groups - including the NCPP's 20 questions that journalists - and everyone else - should ask about polling results.
In the last few months, Florida voters have been hit with a barrage of polls with wildly differing numbers. Some of the worst of these polls have been sponsored by the media.
For years, the Florida media has done a less than stellar job with polls.
Explanations of the results are often tepid. Misunderstanding of the margin of error is common. Throwaway lines noting that the margin of error for subgroup results is "higher than the margin of error" for the total sample do not reveal that these "higher" margins often render the subgroup totals meaningless.
Rarely are there explanations of the differences between registered voters and likely voters - or of how it was determined that someone was a likely voter.
Geographic subgroups - South Florida, North Florida, Central Florida - are often too small to yield statistically meaningful results.
News organizations sometimes hold polling results for several days before reporting them - a particularly egregious mistake in the final weeks of an election, when voter opinion can shift significantly, making the "new" poll, at best, stale.
Reporting of subgroups in a poll is often baseless. On a polling conference call during the 2008 election, some of the participating Florida news organizations began focusing on results for Hispanics. The polling sample - 72. When the pollsters were asked about the statistical validity of that subgroup, they replied there was none.
Some of the media on that conference call still could not resist treating the Hispanic numbers as a "trend" and reported the numbers as if the sample had been 1,000. And the pollsters did not object.
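To see why a subgroup of 72 is useless, it helps to run the arithmetic. Here is a minimal sketch in Python, assuming the standard simple-random-sample formula at a 95% confidence level and the worst-case proportion of 50% (the function is illustrative, not from any polling package):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate margin of error for a simple random sample.

    n: number of respondents; p: reported proportion (0.5 is the
    worst case); z: score for the confidence level (1.96 ~ 95%).
    """
    return z * math.sqrt(p * (1 - p) / n)

print(f"n = 1000: +/- {margin_of_error(1000) * 100:.1f} points")  # ~3.1
print(f"n =   72: +/- {margin_of_error(72) * 100:.1f} points")    # ~11.5
```

At plus or minus 11.5 points, a reported 55-45 split among 72 respondents is statistically indistinguishable from a 45-55 split - which is what "no statistical validity" means in practice.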
All too often the media treats all polls as equal. A poll that samples 800 likely voters is, to some news organizations, just as valid as one that samples 500 registered voters.
On Monday, Rich Heffley, a Republican political operative, said this to the St. Petersburg Times when asked about Florida polls: "Most of it's not worth the paper it's printed on."
And Heffley wisely added this: "There needs to be some level of quality control."
Heffley suggested that the media establish a set of standards before agreeing to publish results, either in print or online. "There are so many ways to screw up polling and very few ways to do it right."
For many years, I suggested that all Florida news organizations join together to do a series of annual polls under the umbrella of the Florida Press Association and the Florida Association of Broadcasters.
It would help ease the problem of doing polls on the cheap, eliminate unnecessary competition among news organizations and remove the "need" to report every poll that pops up. If done right, Florida news organizations would have polls available to them that would be far better than what is available now.
No one was interested then, and there are no signs anyone is interested now.
Meanwhile, here is some information that can help you figure out which polls are useful and which are not.
There is a consensus in the polling community that it is better to report “likely” voters than “registered” voters, which is why survey organizations do it. But having acknowledged this, there are some cautions:
First, people change in their commitment to voting as the campaign unfolds. Respondents are probably better able to tell whether they really are going to vote as Election Day gets closer. One of the reasons polls come into closer agreement as Election Day nears is that all of their “likely voter” formulations behave more similarly. It is easier to identify true “likely voters” the closer it is to Election Day.
Second, there is no magic formula for identifying likely voters. There are a number of indicators, but how they are combined varies somewhat from one polling organization to another. As of now, there’s no clearly right or wrong formula.
Third, no one knows what actual turnout is going to be. It’s like trying to hit a moving target. A polling organization might make a prediction based on a turnout of 70% of registered voters (whom they designate as “likely”), but there are high-interest and low-interest elections. The past is not a perfect guide to the future. No one knows what’s actually going to happen. That’s why we have elections.
So, pre-election polls of likely voters are “best estimates.” Good ones, as the history of numerous polling organizations over numerous years has shown, but just best estimates nonetheless. However, it’s always hard to argue against proper caution and respect for nuance in reporting on these polls.
Journalists need to take into account that when subgroup results are reported, the sampling error margin for those figures is larger than the sampling error for results based on the sample as a whole. Journalists should identify the number of respondents in the subgroup, even for larger groups such as Democrats or men or Hispanics or senior citizens.
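To give a sense of scale, here is a purely hypothetical breakdown for a 1,000-person statewide poll, with margins computed from the worst-case formula in the earlier sketch (all subgroup sizes are invented for illustration):

Subgroup          Respondents    Approx. margin of error
All respondents   1,000          plus or minus 3.1 points
Men               480            plus or minus 4.5 points
Democrats         350            plus or minus 5.2 points
Hispanics         80             plus or minus 11.0 points

The whole-sample margin looks reassuring; the smallest subgroup's margin shows how quickly the numbers become close to meaningless.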
The National Council on Public Polls offers 20 questions that journalists - and everyone else - should ask about poll results.
1. What polling firm, research house, political campaign, or other group conducted the poll? This is always the first question to ask.
If you don't know who did the poll, you can't get the answers to all the other questions listed here. If the person providing poll results can't or won't tell you who did it, the results should not be reported, for their validity cannot be checked.
Reputable polling firms will provide you with the information you need to evaluate the survey. Because reputation is important to a quality firm, a professionally conducted poll will avoid many errors.
2. Who paid for the poll and why was it done?
You must know who paid for the survey, because that tells you – and your audience – who thought these topics were important enough to spend money finding out what people think.
Polls are not conducted for the good of the world. They are conducted for a reason – either to gain helpful information or to advance a particular cause.
It may be the news organization wants to develop a good story. It may be the politician wants to be re-elected. It may be that the corporation is trying to push sales of its new product. Or a special-interest group may be trying to prove that its views are the views of the entire country.
All are legitimate reasons for doing a poll.
The important issue for you as a journalist is whether the motive for doing the poll creates such serious doubts about the validity of the results that the numbers should not be publicized.
Private polls conducted for a political campaign are often unsuited for publication. These polls are conducted solely to help the candidate win – and for no other reason. The poll may have very slanted questions or a strange sampling methodology, all with a tactical campaign purpose. A campaign may be testing out new slogans, a new statement on a key issue or a new attack on an opponent. But since the goal of the candidate’s poll may not be a straightforward, unbiased reading of the public's sentiments, the results should be reported with great care.
Likewise, reporting on a survey by a special-interest group is tricky. For example, an environmental group trumpets a poll saying the American people support strong measures to protect the environment. That may be true, but the poll was conducted for a group with definite views. That may have swayed the question wording, the timing of the poll, the group interviewed and the order of the questions. You should carefully examine the poll to be certain that it accurately reflects public opinion and does not simply push a single viewpoint.
3. How many people were interviewed for the survey?
Polls give approximate answers: all other things being equal, the more people interviewed in a scientific poll, the smaller the error due to the size of the sample. But a common trap to avoid is assuming that "more is automatically better." While it is absolutely true that a larger scientific sample means a smaller sampling error, other factors may be more important in judging the quality of a survey.
4. How were those people chosen?
The key reason that some polls reflect public opinion accurately and other polls are unscientific junk is how people were chosen to be interviewed. In scientific polls, the pollster uses a specific statistical method for picking respondents. In unscientific polls, the person picks himself to participate.
The method pollsters use to pick interviewees relies on the bedrock of mathematical reality: when the chance of selecting each person in the target population is known, then and only then do the results of the sample survey reflect the entire population. This is called a random sample or a probability sample. This is the reason that interviews with 1,000 American adults can accurately reflect the opinions of more than 210 million American adults.
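Why can 1,000 interviews stand in for 210 million adults? The standard textbook formula for simple random sampling (an assumption of this illustration; the original text does not spell it out) makes the point:

\[
\text{MOE} = z\sqrt{\frac{p(1-p)}{n}} \cdot \sqrt{\frac{N-n}{N-1}}
\]

The second factor, the finite-population correction, is about 0.9999976 when n = 1,000 and N = 210,000,000 - effectively 1. The margin of error is governed almost entirely by the sample size n; once the population is much larger than the sample, the population size N barely matters.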
Most scientific samples use special techniques to be economically feasible. For example, some sampling methods for telephone interviewing do not just pick randomly generated telephone numbers. Only telephone exchanges that are known to contain working residential numbers are selected, reducing the number of wasted calls. This still produces a random sample. But samples of only listed telephone numbers do not produce a random sample of all working telephone numbers.
But even a random sample cannot be purely random in practice, as some people don't have phones, refuse to answer or aren't home.
Surveys conducted in countries other than the United States may use different but still valid scientific sampling techniques - for example, because relatively few residents have telephones. The same questions about sampling should be asked before reporting a survey from another country.
5. What area (nation, state, or region) or what group (teachers, lawyers, Democratic voters, etc.) were these people chosen from?
It is absolutely critical to know from which group the interviewees were chosen.
You must know if a sample was drawn from among all adults in the United States, or just from those in one state or in one city, or from another group. For example, a survey of business people can reflect the opinions of business people – but not of all adults. Only if the interviewees were chosen from among all American adults can the poll reflect the opinions of all American adults.
In the case of telephone samples, the population represented is that of people living in households with telephones. For most purposes, telephone households are similar to the general population. But if you were reporting a poll on what it was like to be homeless, a telephone sample would not be appropriate. The increasingly widespread use of cell phones, particularly as the only phone in some households, may have an impact in the future on the ability of a telephone poll to accurately reflect a specific population. Remember, the use of a scientific sampling technique does not mean that the correct population was interviewed.
Political polls are especially sensitive to this issue.
In pre-primary and pre-election polls, which people are chosen as the base for poll results is critical. A poll of all adults, for example, is not very useful for a primary race where only 25 percent of the registered voters actually turn out. So look for polls based on registered voters, "likely voters," previous primary voters and such. These distinctions are important and should be included in the story, for one of the most difficult challenges in polling is trying to figure out who actually is going to vote.
The ease of conducting surveys in the United States is not duplicated around the world. It may not be possible or practical in some countries to conduct surveys of a random sample throughout the country. Surveys based on a smaller group than the entire population - such as residents of a few larger cities - can still be reliable if reported correctly (as the views of those in the larger cities, for example, but not of the country as a whole) and may be the only available data.
6. Are the results based on the answers of all the people interviewed?
One of the easiest ways to misrepresent the results of a poll is to report the answers of only a subgroup. For example, there is usually a substantial difference between the opinions of Democrats and Republicans on campaign-related matters. Reporting the opinions of only Democrats in a poll purported to be of all adults would substantially misrepresent the results.
Poll results based on Democrats must be identified as such and should be reported as representing only Democratic opinions.
Of course, reporting on just one subgroup can be exactly the right course. In polling on a primary contest, it is the opinions of those who can vote in the primary that count – not those who cannot vote in that contest. Primary polls should include only eligible primary voters.
7. Who should have been interviewed and was not? Or do response rates matter?
No survey ever reaches everyone who should have been interviewed. You ought to know what steps were undertaken to minimize non-response, such as the number of attempts to reach the appropriate respondent and over how many days.
There are many reasons why people who should have been interviewed were not. They may have refused attempts to interview them. Or interviews may not have been attempted if people were not home when the interviewer called. Or there may have been a language problem or a hearing problem.
In recent years, the percentage of people who respond to polls has diminished. There has been an increase in those who refuse to participate. Some of this is due to the increase in telemarketing and part is due to Caller ID and other technology that allows screening of incoming calls. While this is a subject that concerns pollsters, so far careful study has found that these reduced response rates have not had a major impact on the accuracy of most public polls.
Where possible, you should obtain the overall response rate from the pollster, calculated on a recognized basis such as the standards of the American Association for Public Opinion Research. One poll is not “better” than another simply because of the one statistic called response rate.
8. When was the poll done?
Events have a dramatic impact on poll results. Your interpretation of a poll should depend on when it was conducted relative to key events. Even the freshest poll results can be overtaken by events. The President may have given a stirring speech to the nation, pictures of abuse of prisoners by the military may have been broadcast, the stock market may have crashed or an oil tanker may have sunk, spilling millions of gallons of crude on beautiful beaches.
Poll results that are several weeks or months old may be perfectly valid, but events may have erased any newsworthy relationship to current public opinion.
9. How were the interviews conducted?
There are four main possibilities: in person, by telephone, online or by mail. Most surveys are conducted by telephone, with the calls made by interviewers from a central location. However, some surveys are still done by sending interviewers into people's homes.
Some surveys are conducted by mail. In scientific polls, the pollster picks the people to receive the mail questionnaires. The respondent fills out the questionnaire and returns it.
Mail surveys can be excellent sources of information, but it takes weeks to do a mail survey, meaning that the results cannot be as timely as a telephone survey. And mail surveys can be subject to other kinds of errors, particularly extremely low response rates. In many mail surveys, many more people fail to participate than do. This makes the results suspect.
Surveys done in shopping malls, in stores or on the sidewalk may have their uses for their sponsors, but publishing the results in the media is not among them. These approaches may yield interesting human-interest stories, but they should never be treated as if they represent public opinion.
Advances in computer technology have allowed the development of computerized interviewing systems that dial the phone, play taped questions to a respondent and then record answers the person gives by punching numbers on the telephone keypad. Such surveys may be more vulnerable to significant problems including uncontrolled selection of respondents within the household, the ability of young children to complete the survey, and poor response rates.
Such problems should disqualify any survey from being used unless the journalist knows that the survey has proper respondent selection, verifiable age screening, and reasonable response rates.
10. What about polls on the Internet or World Wide Web?
The explosive growth of the Internet and the World Wide Web has given rise to an equally explosive growth in various types of online polls and surveys.
Online surveys can be scientific if the samples are drawn in the right way. Some online surveys start with a scientific national random sample and recruit participants from it, while others simply take anyone who volunteers. Online surveys need to be carefully evaluated before use.
Several methods have been developed to sample the opinions of those who have online access. The fundamental rules of sampling still apply online: the pollster must select those who are asked to participate in the survey in a random fashion. In those cases where the population of interest has nearly universal Internet access or where the pollster has carefully recruited from the entire population, online polls are candidates for reporting.
However, even a survey that accurately sampled all those who have access to the Internet would still fall short of a poll of all Americans, as about one in three adults do not have Internet access.
But many Internet polls are simply the latest variation on the pseudo-polls that have existed for many years. Whether the effort is a click-on Web survey, a dial-in poll or a mail-in survey, the results should be ignored and not reported. All these pseudo-polls suffer from the same problem: the respondents are self-selected. The individuals choose themselves to take part in the poll – there is no pollster choosing the respondents to be interviewed.
Remember, the purpose of a poll is to draw conclusions about the population, not about the sample. In these pseudo-polls, there is no way to project the results to any larger group. Any similarity between the results of a pseudo-poll and a scientific survey is pure chance.
Clicking on your candidate’s button in the "voting booth" on a Web site may drive up the numbers for your candidate in an online presidential horse-race poll. In most such polls, no effort is made to select the respondents, to keep users from voting multiple times or to reach out to people who might not normally visit the Web site.
The dial-in or click-in polls may be fine for deciding who should win on American Idol or which music video is the MTV Video of the Week. The opinions expressed may be real, but in sum the numbers are just entertainment. There is no way to tell who actually called in, how old they are, or how many times each person called.
Never be fooled by the number of responses. In some cases a few people call in thousands of times. Even if 500,000 calls are tallied, no one has any real knowledge of what the results mean. If big numbers impress you, remember that the Literary Digest's non-scientific sample of 2,000,000 people said Landon would beat Roosevelt in the 1936 Presidential election.
Mail-in coupon polls are just as bad. In this case, the magazine or newspaper includes a coupon to be returned with the answers to the questions. Again, there is no way to know who responded and how many times each person did.
Another variation on the pseudo-poll comes as part of a fund-raising effort. An organization sends out a letter with a survey form attached to a large list of people, asking for opinions and for the respondent to send money to support the organization or pay for tabulating the survey. The questions are often loaded and the results of such an effort are always meaningless.
This technique is used by a wide variety of organizations from political parties and special-interest groups to charitable organizations. Again, if the poll in question is part of a fund-raising pitch, pitch it – in the wastebasket.
11. What is the sampling error for the poll results?
Interviews with a scientific sample of 1,000 adults can accurately reflect the opinions of more than 210 million American adults. That means interviews attempted with all 210 million adults – if such were possible – would give approximately the same results as a well-conducted survey based on 1,000 interviews.
What happens if another carefully done poll of 1,000 adults gives slightly different results from the first survey? Neither of the polls is "wrong." This range of possible results is called the error due to sampling, often called the margin of error.
This is not an "error" in the sense of making a mistake. Rather, it is a measure of the possible range of approximation in the results because a sample was used.
Pollsters express the degree of the certainty of results based on a sample as a "confidence level." This means a sample is likely to be within so many points of the results one would have gotten if an interview were attempted with the entire target population. Most polls are usually reported using the 95% confidence level.
Thus, for example, a "3 percentage point margin of error" in a national poll means that if the attempt were made to interview every adult in the nation with the same questions in the same way at the same time as the poll was taken, the poll's answers would fall within plus or minus 3 percentage points of the complete count’s results 95% of the time.
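In standard statistical notation (a textbook normal approximation; the formula is not part of the original text), a 95% confidence level for a poll estimate \(\hat{p}\) of a true proportion \(p\) means

\[
\Pr\left(|\hat{p} - p| \le 1.96\sqrt{\frac{p(1-p)}{n}}\right) \approx 0.95 ,
\]

and for n = 1,000 with p near 0.5 the bound works out to \(1.96\sqrt{0.25/1000} \approx 0.031\) - the familiar plus or minus 3 percentage points.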
This does not address the issue of whether people cooperate with the survey, or if the questions are understood, or if any other methodological issue exists. The sampling error is only the portion of the potential error in a survey introduced by using a sample rather than interviewing the entire population. Sampling error tells us nothing about the refusals or those consistently unavailable for interview; it also tells us nothing about the biasing effects of a particular question wording or the bias a particular interviewer may inject into the interview situation. It also applies only to scientific surveys.
12. Who's on first?

Remember that the sampling error margin applies to each figure in the results – it is at least 3 percentage points plus or minus for each one in our example. Thus, in a poll question matching two candidates for President, both figures are subject to sampling error.
Certainly, if the gap between the two candidates is less than the sampling error margin, you should not say that one candidate is ahead of the other. You can say the race is "close," the race is "roughly even," or there is "little difference between the candidates." But it should not be called a "dead heat" or a "statistical tie" unless both candidates have exactly the same percentages.
And just as certainly, when the gap between the two candidates is equal to or more than twice the error margin – 6 percentage points in our example – and if there are only two candidates and no undecided voters, you can say with confidence that the poll says Candidate A is clearly leading Candidate B.
When the gap between the two candidates is more than the error margin but less than twice the error margin, you should say that Candidate A "is ahead," "has an advantage" or "holds an edge." The story should mention that there is a small possibility that Candidate B is ahead of Candidate A.
When there are more than two choices or undecided voters – virtually in every poll in the real world – the question gets much more complicated.
While the solution is statistically complex, you can fairly easily evaluate the situation by estimating the error margin. Take the sum of the percentages for the two candidates in question and multiply it by the total number of respondents for the survey (only the likely voters, if that is appropriate). This number is the effective sample size for your judgment. Look up the sampling error in a table of statistics for that reduced sample size, and apply it to the candidate percentages. If the resulting ranges overlap, you do not know whether one candidate is ahead. If they do not, you can make the judgment that one candidate has a lead.
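Here is how that back-of-the-envelope check might look in code - a sketch under the same assumptions as the earlier one (95% confidence, worst-case proportion); the function and its name are illustrative, not a standard from any polling body:

```python
import math

def lead_is_clear(share_a, share_b, total_n, z=1.96):
    """Rule-of-thumb test described above: shrink the sample to the
    respondents choosing either candidate, compute the sampling error
    for that effective sample, and test whether the candidates'
    ranges overlap."""
    effective_n = (share_a + share_b) * total_n   # shares as fractions
    moe = z * math.sqrt(0.25 / effective_n)       # worst-case p = 0.5
    return abs(share_a - share_b) > 2 * moe, moe

# Example: Candidate A at 48%, B at 44%, 800 respondents in all.
clear, moe = lead_is_clear(0.48, 0.44, 800)
print(f"effective n = {round(0.92 * 800)}, "
      f"MOE = +/- {moe * 100:.1f} points, clear lead: {clear}")
```

In this invented example the effective sample is 736, the margin of error about plus or minus 3.6 points, and the 4-point gap is less than twice that margin - so the poll does not establish a leader.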
And bear in mind that when subgroup results are reported – women or blacks or young people – the sampling error margin for those figures is greater than for results based on the sample as a whole. Be very careful about reporting results from extremely small subgroups. Any results based on fewer than 100 respondents are subject to such large sampling errors that it is almost impossible to report the numbers in a meaningful manner.
13. What other kinds of factors can skew poll results?
The margin of sampling error is just one possible source of inaccuracy in a poll. It is not necessarily the source of the greatest possible error; we use it because it's the only one that can be quantified. And, other things being equal, it is useful for evaluating whether differences between poll results are meaningful in a statistical sense.
Question phrasing and question order are also likely sources of flaws. Inadequate interviewer training and supervision, data processing errors and other operational problems can also introduce errors. Professional polling operations are less subject to these problems than volunteer-conducted polls, which are usually less trustworthy. Be particularly careful with polls conducted by untrained and unsupervised college students; there have been several cases in which students reported results, at least in part, without having conducted any survey at all.
You should always ask if the poll results have been "weighted." This process is usually used to account for unequal probabilities of selection and to adjust slightly the demographics in the sample. You should be aware that a poll could be manipulated unduly by weighting the numbers to produce a desired result. While some weighting may be appropriate, other weighting is not. Weighting a scientific poll is only appropriate to reflect unequal probabilities or to adjust to independent values that are mostly constant.
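As a hedged illustration of what legitimate demographic weighting looks like, the sketch below post-stratifies on a single variable, assuming the population shares come from an independent, stable source such as Census figures (the numbers and the function are invented for the example):

```python
def poststratify(sample_counts, population_shares):
    """Compute one weight per category so the weighted sample
    matches known population shares for a single variable.

    sample_counts: respondents per category in the raw sample.
    population_shares: each category's share of the population.
    """
    total = sum(sample_counts.values())
    return {cat: population_shares[cat] / (count / total)
            for cat, count in sample_counts.items()}

# A 1,000-person sample that under-represents younger adults,
# weighted to assumed population shares of 30/50/20 percent.
weights = poststratify(
    {"18-34": 200, "35-64": 500, "65+": 300},
    {"18-34": 0.30, "35-64": 0.50, "65+": 0.20},
)
print(weights)  # {'18-34': 1.5, '35-64': 1.0, '65+': ~0.67}
```

The same machinery is what makes manipulation possible: feed in "population shares" chosen to favor one side and the weighted results move accordingly, which is why the weighting scheme itself deserves scrutiny.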
14. What questions were asked?

Perhaps the best test of any poll question is your reaction to it. On the face of it, does the question seem fair and unbiased? Does it present a balanced set of choices? Would most people be able to answer the question?
On sensitive questions – such as abortion – the complete wording of the question should probably be included in your story. It may well be worthwhile to compare the results of several different polls from different organizations on sensitive questions. You should examine carefully both the results and the exact wording of the questions.
15. In what order were the questions asked?
Sometimes the very order of the questions can have an impact on the results. Often that impact is intentional; sometimes it is not. The impact of order can often be subtle.
During troubled economic times, for example, if people are asked what they think of the economy before they are asked their opinion of the president, the presidential popularity rating will probably be lower than if you had reversed the order of the questions. And in good economic times, the opposite is true.
What is important here is whether the questions that were asked prior to the critical question in the poll could sway the results. If the poll asks questions about abortion just before a question about an abortion ballot measure, the prior questions could sway the results.
16. What about "push polls?"
In recent years, some political campaigns and special-interest groups have used a technique called "push polls" to spread rumors and even outright lies about opponents. These efforts are not polls, but political manipulation trying to hide behind the smokescreen of a public opinion survey.
In a "push poll," a large number of people are called by telephone and asked to participate in a purported survey. The survey "questions" are really thinly-veiled accusations against an opponent or repetitions of rumors about a candidate’s personal or professional behavior. The focus here is on making certain the respondent hears and understands the accusation in the question, not in gathering the respondent’s opinions.
"Push polls" are unethical and have been condemned by professional polling organizations.
"Push polls" must be distinguished from some types of legitimate surveys done by political campaigns. At times, a campaign poll may ask a series of questions about contrasting issue positions of the candidates – or various things that could be said about a candidate, some of which are negative. These legitimate questions seek to gauge the public’s reaction to a candidate’s position or to a possible legitimate attack on a candidate’s record.
A legitimate poll can be distinguished from a "push poll" usually by:
- The number of calls made: a "push poll" makes thousands and thousands of calls, instead of the hundreds made for most surveys.
- The identity of who is making the telephone calls: a polling firm for a scientific survey, as opposed to a telemarketing house or the campaign itself for a "push poll."
- The lack of any true gathering of results in a "push poll," which has as its only objective the dissemination of false or misleading information.
17. What other polls have been done on this topic? Do they say the same thing? If they are different, why are they different?
Results of other polls – by a newspaper or television station, a public survey firm or even a candidate's opponent – should be used to check and contrast poll results you have in hand.
If the polls differ, first check the timing of the interviewing. If the polls were done at different times, the differing results may demonstrate a swing in public opinion.
If the polls were done about the same time, ask each poll sponsor for an explanation of the differences. Conflicting polls often make good stories.
18. What about exit polls?
Exit polls, properly conducted, are an excellent source of information about voters in a given election. They are the only opportunity to survey actual voters and only voters.
There are several issues that should be considered in reporting exit polls. First, exit polls report how voters believe they cast their ballots. The election of 2000 showed that voters may think they have voted for a candidate, but their votes may not have been recorded. Or in some cases, voters actually voted for a different candidate than they thought they did.
Second, absentee voters are not included in many exit polls. In states where a large number of voters vote either early or absentee, an absentee telephone poll may be combined with an exit poll to measure voter opinion. If in a specific case there are large numbers of absentee voters and no absentee poll, you should be careful to report that the exit poll is only of Election Day voters.
Third, make sure that the company conducting the exit poll has a track record. Too many exit polls are conducted in a minimal number of voting locations by people who do not have experience in this specialized method of polling. Those results can be misleading.
19. What else needs to be included in the report of a poll?
The key element in reporting polls is context. Not only should you compare the poll to others taken at the same time or earlier, but you also need to report on what events may have had an impact on the poll results.
A good poll story not only reports the results of the poll but also assists the reader in the interpretation of those results. If the poll shows a continued decline in consumer confidence even though leading economic indicators have improved, your report might include some analysis of whether or not people see improvement in their daily economic lives even though the indicators are on the rise.
If a candidate has shown marked improvement in a horse race, you might want to report about the millions of dollars spent on advertising immediately prior to the poll.
Putting the poll in context should be a major part of your reporting.
However, remember that the laws of chance alone say that the results of one poll in 20 may be skewed away from the public's real views just because of sampling error – that is the flip side of a 95% confidence level.
Also remember that no matter how good the poll, no matter how wide the margin, no matter how big the sample, a pre-election poll does not show that one candidate has the race "locked up." Things change – often and dramatically in politics. That’s why candidates campaign.
20. Is this poll worth reporting?

If the poll was conducted correctly, and you have been able to obtain the information outlined here, your news judgment and that of your editors should be applied to polls, as it is to every other element of a story.
In spite of the difficulties, the public opinion survey, correctly conducted, is still the best objective measure of the state of the views of the public.
This is a copyrighted publication of the National Council on Public Polls in keeping with its mission to help educate journalists on the use of public opinion polls.
The National Council on Public Polls hereby grants the right to duplicate this work in whole, but not in part, for any noncommercial purpose provided that any copy include all of the information on this page.
Sheldon R. Gawiser, Ph.D. is Director, Elections, NBC News. G. Evans Witt is CEO, Princeton Survey Research Associates International. They were cofounders of the Associated Press/NBC News Poll.
For any additional information on any aspect of polling or a specific poll, please call NCPP at 845.575.5050.