There is one poll I have been dying to write about since I started this blog. It has a surprisingly strong track record and, as luck would have it, happens to survey only voters in Ohio, one of the most important battleground states in the nation.
Four years ago, four highly respected academic survey methodologists took a close look at this survey and concluded:
“It has been strikingly accurate in forecasting election outcomes since 1980, with an average error of only 1.6 percentage points. They have been substantially more accurate than telephone polls forecasting the same races conducted by the University of Akron (average error = 5.4 percentage points) and the University of Cincinnati (average error = 4.9 percentage points), each of which is comparable to the average error rates reported [for most pre-election polls nationwide] by Crespi (1988) and King (1993).” [From Visser, Krosnick, Marquette and Curtin (2000), p. 227]
As it happens, this survey just released its final pre-election numbers on Saturday. It shows a dead heat between Bush and Kerry (50% to 50%).
So why haven’t you heard more about this highly accurate survey? Because the poll I’m talking about is the Columbus Dispatch Mail Poll, and like the late Rodney Dangerfield, it gets no respect.
I have a hunch that may be about to change.
For years, politicos and pollsters scoffed at mail-in polls. After all, we know the history of the Literary Digest mail-in surveys that wrongly predicted that Alf Landon would defeat Franklin Roosevelt. And most political sophisticates considered mail surveys too slow and their response rates too low in comparison to telephone surveys. Well, a lot has changed in 20 years.
Unlike the mail-in polls of old, the Dispatch poll draws random probability samples from the most recent registered voter list available from the Ohio Secretary of State. Unlike telephone surveys, the Dispatch can sample from the voter lists without concern for unlisted or missing telephone numbers. They then send out over 10,000 packets by U.S. Mail, each containing a cover letter on Columbus Dispatch letterhead, a questionnaire and a postage-paid return envelope. They design the paper questionnaire so it closely resembles the look of the actual ballot. They do several surveys during the course of an election year, but on the final survey, they omit the undecided category, thereby forcing voters to choose the way they do in the voting booth.
The methodologists who studied the Dispatch Poll (Visser, Krosnick, et al. 2000, pp. 227-228) have several theories for its greater accuracy:
- It involves a very large sample size. The survey released over the weekend had 2,880 respondents and a sampling error of 2% (see the quick arithmetic check after this list). No other Ohio poll released this week has even half as many respondents.
- It is a better solution to the problem of “modeling” likely voters: “The mail survey procedure recruited samples of respondents that more closely resembled the actual voting population, perhaps because the act of completing a self-administered questionnaire is in many ways comparable to the act of voting” (emphasis added).
- The lack of an undecided option: “Having to allocate those undecided voters introduced error into the telephone polls.”
- The mail survey was a closer facsimile of the ballot than the questions pollsters typically use to ask about vote preference.
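That 2% sampling error is easy to check with the standard simple-random-sample arithmetic. Here is a minimal sketch, assuming 95% confidence and the worst-case 50/50 split (the Dispatch's actual design may differ slightly):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error, in proportions, for a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# The Dispatch final poll: 2,880 returned ballots
print(round(100 * margin_of_error(2880), 1))  # ~1.8 points, consistent with the reported 2%

# For comparison: a hypothetical 650-respondent telephone sample
print(round(100 * margin_of_error(650), 1))   # ~3.8 points
```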
Let me add two more theories:
The mail survey’s “low” response rate is no longer a disadvantage. The Dispatch reported a response rate of 25% for the most recent mail survey. Telephone surveys rarely publish response rates, but according to another study (also by Jon Krosnick and a different set of colleagues), response rates from 20 national news media telephone surveys averaged 22% in 2003 (and ranged from 5% to 39%).
Of course those telephone response rates count a lot of non-voters who agree to be interviewed. Only about 1% of the Dispatch respondents return their ballots indicating they will not vote. Telephone respondents typically over-report voting due to the “social discomfort” (embarrassment) of admitting non-voting to a stranger. So why do 99% of Dispatch Poll respondents report they will vote? You can either believe that Ohioans are so easily embarrassed by non-voting that many feel compelled to fill out and mail back a paper questionnaire, or you can believe that the Dispatch Poll’s 25% response rate is artificially low, since the non-respondents include the non-voters who have essentially screened themselves out.
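A quick illustration of that last point, using assumed figures (the Dispatch does not report how many of the sampled registrants will actually vote, so the 70% share below is purely hypothetical):

```python
# Hypothetical sketch: why a 25% raw response rate can understate cooperation among voters.
mailed = 10_000
returned = 0.25 * mailed           # the reported 25% response rate -> 2,500 ballots back
likely_voter_share = 0.70          # ASSUMPTION: share of sampled registrants who will actually vote

# If nearly all returns come from actual voters (the Dispatch reports only ~1% non-voters
# among returns), the effective response rate among voters is higher than 25%:
effective_rate = returned / (mailed * likely_voter_share)
print(round(100 * effective_rate))  # ~36% under these assumptions
```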
Also, because the paper questionnaire so closely simulates the actual experience of voting and eliminates the option of being undecided, the incumbent rule does not apply: most undecideds “break” on the last poll just as they will on Election Day.
One more benefit to consider: None of the big problems looming for telephone surveys — cell-phone-only households, number portability, caller ID — are an issue for a mail survey.
The Dispatch mail polls done earlier this year had one big shortcoming: they were based on older lists from the Secretary of State that missed most of this year’s new registrants. Not this time. Let’s let Darrel Rowland’s poll story in Saturday’s Dispatch explain:
One difference between the latest poll and the one published four weeks ago is the inclusion of more newly registered voters in the sample, whose names were in the latest available data from the secretary of state’s office. About 88% of the new voters – including those from Ohio’s largest counties – were among the potential poll participants.
And which candidate did those new voters prefer? “These newbies now represent one in eight Ohio voters, and they support Kerry by nearly a 2-1 margin [65% to 34%].”
As a result, Kerry has moved from a seven-point deficit in late September to a 50% to 50% tie. Kerry actually “led” on the most recent survey by “a mere eight votes out of 2,880 ballots returned in the mail survey – the tightest margin ever in a final Dispatch Poll.”
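To put that eight-vote lead in perspective, here is a quick back-of-the-envelope check. It assumes, for simplicity, that the two major candidates split all 2,880 returned ballots, which the story does not say explicitly:

```python
# An eight-vote lead out of 2,880 ballots, expressed in percentage points
total = 2880
kerry = (total + 8) // 2   # 1,444 under the simplifying assumption above
bush = total - kerry       # 1,436
print(round(100 * kerry / total, 2), round(100 * bush / total, 2))  # 50.14 vs 49.86

# Both figures round to 50%, hence the published 50-50 tie; the lead is roughly
# 0.3 points, far inside the poll's roughly 2-point margin of error.
```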
Rowland also adds the following:
However, in the past four weeks Kerry has surged from a 7 percentage-point deficit into a tie with Bush. And several signs indicate the Massachusetts senator has gained the momentum in Ohio.
Kerry is ahead by 14 points among independent voters. He has a narrow lead in northwestern Ohio, the state’s most reliable bellwether media market. And he has brought black voters home, gaining 91 percent support among black respondents.
Meanwhile, the poll contains troubling signs for Bush. Only 44 percent say things in the nation are headed in the right direction. Fewer than half approve of his handling of Iraq and the economy. And his overall approval rating is 49 percent, a measure that many political experts say represents a ceiling on his support Tuesday.
So here is what I see: Ohio’s most historically accurate survey is telling us that the race is a dead heat. The Dispatch Poll is not infallible – it still has sampling error of 2% – but it has come closer to reality more consistently than any other Ohio poll and involves the best “model” of likely voters I know of. Meanwhile, the RealClearPolitics average for Ohio stands at this hour at 48.6% for Bush, 46.8% for Kerry. Seven out of eight surveys have Bush ahead, by margins of 2-4%.
There are (at least) two ways to interpret these latest results. You can believe that the race is now breaking Bush’s way in Ohio and that the Dispatch poll, fielded between October 20 and October 29, simply missed a late trend. However, evidence of a trend is inconsistent across individual surveys: three polls (SurveyUSA, the Ohio Poll and Mason-Dixon) show movement to Bush, two surveys (Fox and Gallup) show movement to Kerry and two surveys (Rasmussen and Strategic Vision) show no change in the margin.
Or you can believe that the incumbent rule is alive and well and that the two results are essentially consistent: an incumbent with 48-49% of the vote and a slight edge over the challenger on the telephone polls is headed for a very close finish.
You can make a case for either scenario. My instincts tell me that the Dispatch poll is right, that the incumbent rule holds and that Ohio will be incredibly close. I also have a hunch that the traditional likely voter models are underestimating the impact of the new registrants just enough to put Bush about a percentage point too high. So I still think Kerry will win Ohio, even though 7 of the 8 most recent polls out tonight seem to say otherwise.
Go figure.
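For what it’s worth, here is the rough arithmetic behind that one-point hunch. The one-in-eight share and the 65%-34% split among new registrants come from the Dispatch story; the hypothetical likely-voter-model weight (9.5%) and the split among all other voters (Bush +4.4) are my own illustrative assumptions, the latter chosen so that the fully weighted result reproduces the Dispatch’s tie:

```python
def kerry_margin(new_voter_weight, new_split=31.0, rest_split=-4.4):
    """Overall Kerry-minus-Bush margin (points): new registrants mixed with everyone else."""
    return new_voter_weight * new_split + (1 - new_voter_weight) * rest_split

print(round(kerry_margin(0.125), 1))  # ~0.0 -- new voters at their full 1-in-8 share: the Dispatch tie
print(round(kerry_margin(0.095), 1))  # ~-1.0 -- underweight them and the same data shows Bush up about a point
```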
I’ll have more later tonight…
Offline sources on the jump page
—————-
A Special Request: Have you found this site useful? Please help me keep it alive by taking a few minutes to complete this brief (3-5 minute) survey about Mystery Pollster. Respondent identities will not be tracked and, as such, your participation will remain completely anonymous and confidential. If you have any problems with the survey please email me. Click here if you’re asking, “why a reader survey now?” Thank you!!
Penny S. Visser, Jon A. Krosnick, Jesse Marquette and Michael Curtin (2000). “Improving Election Forecasting,” in Election Polls, the News Media and Democracy, Paul J. Lavrakas and Michael Traugott (eds.), Chatham, N.J.: Chatham House Publishers.
Penny S. Visser, Jon A. Krosnick, Jesse Marquette and Michael Curtin (1996). “Mail Surveys for Election Forecasting? An Evaluation of the Columbus Dispatch Poll.” Public Opinion Quarterly 60:181-227.
Predicting the Pollster
The Mystery Pollster has been thinly slicing likely voter models for days now — so thin he’s emulating Paul Cicero’s fine work on a clove of garlic in Goodfellas. He says he’s posting later today and maybe he’ll roll…
Arkansas! Back on the table. — Talk about your Election Day optimism.
I just tried to work up some realistic pessimism and the worst I could come up with is that the supposedly hyper-accurate Ohio mail poll Mystery Pollster talks about shows an exact tie, and it’s supposed to rain in Ohio tomorrow. Oh no!
So things lo…
Polling questions
Mystery Pollster is an extremely interesting site. I hope it lasts. The latest entry concerns a survey held by an Ohio newspaper which the blogger says may come out as an unheralded, but extremely reliable guide to what happens in…
Something else that struck me about phone polls today (and that I haven’t seen discussed) is that the person who answers the phone doesn’t necessarily speak for the whole household. Particularly in large families, the father may go one way, the mother another, and the children would demographically be expected to be more Democratic than either of their parents. Whereas mailings, if they are sent to individuals and not just addresses, would be immune to this sort of problem unless a family’s provider simply answers all of the mail (which may happen, but seems unlikely).
Do you think that this may be a significant source of error in current phone polling? Or is this much like the inability to reach cell-phone only users and unlikely to cause a shift of more than a point or two?
A very, very interesting post… Thank you for taking the time to explain the benefits of a mail survey.
One quick question… You noted that they mail over 10,000 packets with a cover letter on Columbus Dispatch letterhead. Does this create bias in the poll? The “non-central” parts of Ohio may not respond as much as those in the Columbus area. I have a hard time believing that response rates in Cleveland and Cincy and the rural areas will be as high as the Columbus market.
The First Winner – the incumbent horse
Odds-on favorite, Makybe Diva (11-4) from Great Britain, ridden by G Boss, won the Melbourne Cup for the second time.
Given that one incumbent has held her cup, it might not be too far a stretch to reckon that Mr Bush’s odds just improved.
US Election
I guess the US is voting as I write this after returning from being offline in the wilderness. So I’ve
Slate’s Prediction on the Election
I’ve already predicted that Kerry will win the election. Eric Alterman, writing for Slate, goes much further. He predicts a 25-40% chance of a Kerry landslide. How he arrived at his percentages I have no idea but agree with his
The discussion of the Dispatch survey reminds me of our experience working with a major newspaper chain. We were working for 3 of the chain’s newspapers in a governor’s race in one state. We convinced the editors to let us change from phone to mail questionnaires. Very reluctantly they did. All of the phone surveys reported in the media were showing one candidate leading the race. Our mail survey showed the reverse. The newspapers made a decision not to report our results for the governor’s race in their stories since, one editor said, the mail survey results were “too different” from the results of the phone surveys being reported by the highly respected national polling organizations.
The editors thought our results were just wrong — because 1) we were using a mail survey and 2) we refused to weight our results. We assume, like the Dispatch, that the people who go to the trouble of responding to the mail survey are the same people who get out and vote. I have no basis for weighting other than the proportion of people who voted in the previous election.
On election day, our unweighted results were within a percentage point of the election results. All of the telephone surveys were way off in the governor’s race. I wrote to the editors on the morning after the election and said, “Look at this. I always say polls are snapshots in time and are not predictive of election results, but this is interesting.”
The response from the editor of the biggest paper was “Don’t you think that’s just a coincidence?” And that was that. That paper very happily went back to the telephone surveys and weighting models they are using in the 2004 elections.
Mail survey results always include the percent return rate. Our mail survey return rates are typically in the 30 to 45 percent range with two follow-ups. Phone survey results almost never include the percent cooperation or percent response. When we have had to do phone surveys, the response rate is less than 20 percent (of all working residential numbers in the sample). Return rate for mail and response rate for phone are not entirely comparable, but they are close enough for me to make a decision for mail for my clients.
We don’t find that we need to have large samples (and smaller sampling error) in order to get a good reading on a population. We would only use a large sample (i.e., a mailing of 10,000) if we were trying to analyze the results, for example, by county within a state. A mailed sample of 1500 addresses would suffice for most studies.
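To put rough numbers on that last point, here is a quick sketch; the 30% return rate is simply an assumed figure in the range we typically see, not a claim about any particular survey:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    # 95% margin of error for a simple random sample, worst case p = 0.5
    return z * math.sqrt(p * (1 - p) / n)

# Assumed 30% return rate on each mailing (illustrative only)
for mailed in (1500, 10_000):
    returned = int(mailed * 0.30)
    print(mailed, returned, round(100 * margin_of_error(returned), 1))
# 1,500 addresses -> ~450 returns -> ~4.6 points
# 10,000 addresses -> ~3,000 returns -> ~1.8 points
```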