Yesterday (just before the Typepad outage that prevented me from posting all day), Chris Cillizza’s The Fix blog at WashingtonPost.com took a helpful look at the "Pros and Cons of Auto-Dialed Surveys" and said some kind words about MP in the process. Thanks Chris!
In the process, Cillizza made a quick reference to the issue of response rates:
A traditional live interview telephone poll has a response rate of roughly 30 percent — meaning that three out of every ten households contacted participate in the survey. The polling establishment has long held that people are less likely to respond to an automated survey than a call from a real person, meaning that auto-dialed polls have even lower response rates and therefore a higher possibility of bias in the sample. Neither Rasmussen nor Survey USA makes their response rates public, although, in fairness, neither do most media outlets or major partisan pollsters.
A few additional points:
First, Cillizza’s quick definition of response rates is close (and arguably close enough for his article), but not exactly right. Generally speaking, the response rate has two components: (1) the contact rate, or the percentage of sampled households that the pollster is able to reach during the course of the study, and (2) the cooperation rate, or the percentage of contacted households that agree to complete the survey rather than hanging up. So the response rate tells us the percentage of eligible sampled households with which the pollster is able to both make contact and complete an interview.
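To make that arithmetic concrete, here is a minimal sketch in Python of how the two components combine into an overall response rate. The household counts are invented purely for illustration, not drawn from any actual survey:

```python
# Hypothetical example: combining contact and cooperation rates
# into an overall response rate. The counts below are invented
# purely for illustration.

eligible_households = 1000   # eligible sampled households
contacted = 600              # households the pollster actually reached
completed = 300              # contacted households that finished the interview

contact_rate = contacted / eligible_households      # 0.60
cooperation_rate = completed / contacted            # 0.50
response_rate = completed / eligible_households     # 0.30

# The overall response rate is the product of the two components:
assert abs(response_rate - contact_rate * cooperation_rate) < 1e-9

print(f"Contact rate:     {contact_rate:.0%}")      # 60%
print(f"Cooperation rate: {cooperation_rate:.0%}")  # 50%
print(f"Response rate:    {response_rate:.0%}")     # 30%
```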
Second, "typical" response rates are difficult to boil down to a single number, as they vary widely depending on the organization that does the survey, how they do it — and perhaps most important — how they calculate the response rate. The calculation gets complicated because random digit dial (RDD) samples include some ultimately uknown portion of non-working numbers that ring as if they are live. Another problem is that some of the sampled numbers reach businesses, government offices, fax machines or other numbers that are not eligible for the survey and, therefore, should not be included in the response rate calculation. The pollster rarely knows precisely how many numbers are ineligible, and must use some estimate to calculate the response rate. The pollster also needs to decide how to treat partial interviews — those where the respondent answers some questions, but hangs up before completing the interview.
The American Association for Public Opinion Research (AAPOR) publishes a set of standard definitions for calculating response rates (and a response rate calculator spreadsheet), but the various technical issues outlined above make the calculations amazingly complex. The AAPOR definitions currently include over 30 pages of documentation on how to code the final "disposition" of each call to facilitate six different ways to calculate a response rate. Gary Langer, the ABC News polling director, addressed many of the technical issues of response rate calculations in an article available on the ABC web site.
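For readers curious how the different AAPOR rates relate to one another, here is a simplified sketch based on my reading of the Standard Definitions; the variable names follow the AAPOR shorthand, but the counts are invented and the full disposition-coding rules are in the AAPOR document itself:

```python
# Simplified sketch of three of AAPOR's response rate formulas.
# Counts are invented for illustration; consult AAPOR's Standard
# Definitions for the complete disposition-coding rules.

I  = 300    # complete interviews
P  = 50     # partial interviews
R  = 400    # refusals and break-offs
NC = 150    # non-contacts (eligible but never reached)
O  = 20     # other eligible non-interviews
U  = 100    # cases of unknown eligibility
e  = 0.78   # estimated share of unknown cases that are eligible
            # (e.g., from a proportional-allocation estimate like the one above)

# RR1: strictest -- only completes count, all unknowns treated as eligible
rr1 = I / (I + P + R + NC + O + U)

# RR3: only completes count, unknowns discounted by the estimate e
rr3 = I / (I + P + R + NC + O + e * U)

# RR4: completes plus partials count, unknowns discounted by e
rr4 = (I + P) / (I + P + R + NC + O + e * U)

print(f"RR1 = {rr1:.1%}, RR3 = {rr3:.1%}, RR4 = {rr4:.1%}")
```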
The most comprehensive report on response and cooperation rates for news media polls I am aware of was compiled in 2003 by three academic survey methodologists: Jon Krosnick, Allyson Holbrook and Alison Pfent. In a paper presented at the 2003 AAPOR Conference, Krosnick and his colleagues analyzed the response rates from 20 national surveys contributed by major news media pollsters. They found response rates that varied from a low of 4% to a high of 51%, depending on the survey and method of calculation. The values of AAPOR’s Response Rate 3 (which includes an estimate of how many of the unknown-eligibility numbers are actually eligible) ranged from 5% to 39% with an average of 22% (see slides 8-9).
But keep in mind that many of these surveys were conducted by national media pollsters (such as CBS News/New York Times and ABC News/Washington Post) whose familiar brand names typically help increase participation. Surveys by pollsters without well-known brand names — such as those conducted by yours truly — tend to get lower cooperation rates. Also, the data from Krosnick et al. are already three years old. Current response rates are probably a bit lower.
Finally, we have the issue of how response rates for automated polls compare to those using live interviewers. Cillizza is right that most public pollsters — including Rasmussen and SurveyUSA — do not routinely publish response rate statistics along with survey results. However, SurveyUSA has posted a chart on their web site that shows eight years of response and refusal rates, although they have not updated the graphic with surveys conducted since 2002.
For 2002, SurveyUSA’s graph indicates a response rate — using AAPOR’s Response Rate 4 (RR4) — of roughly 10%. Krosnick’s 2003 report showed an average RR4 of 22% for national media polls, with a range between 5% and 40%. But keep in mind that Krosnick’s data were for national surveys. Virtually all of the polls by SurveyUSA in that period were statewide or local.
WP blogger Chris Cillizza narrowly focuses on “auto-dialed polling” and its “troublesome” low response rates and randomness, while implying that the roughly 30% response rate of traditional live telephone interviews is at least an adequate (non-troublesome?) standard for a “scientific sample.”
FACT: Any poll sample based on a 30% response rate is NOT a scientific sample.
Any poll relying on a non-scientific, non-random selection sample is NOT a statistically valid survey.
Any non-scientific poll cannot logically state conclusions about the larger “population” under study.
The response rate is understandably a very sensitive issue for public opinion pollsters, since the bedrock requirement of a random sample for survey research is quite difficult to achieve in real-world practice: each member of the targeted population under study MUST have an equal probability of being sampled in the survey.
The strict scientific requirement is a 100% response rate in sampling. However, polling statisticians generally suggest that a 70% response rate provides a tolerable error rate in public opinion polling, given the scientific validity of the other poll procedures.
The bottom line is that all mainstream public opinion polls are loose estimates, not scientific measurements.