So many good topics, so little time. Several items appeared in the past 48 hours that are worth passing along.
Slate’s David Kenner and William Saletan did a post-election review of the issues raised in their pre-election consumer’s guide to polling. Their conclusions:
On Weighting by Party: "Pollsters who assumed that historical patterns would temper the Republican intensity in this year’s surveys got it wrong. Those who bet on the data instead of the historical patterns [the Pew Research Center and the Battleground Survey] got it right."
On Undecided Voters: Contrary to the historical pattern, undecided voters did not break to the challenger: "Oops! According to exit polls, Bush got 46 percent of those who made up their minds in the last week of the campaign and 44 percent of those who made up their minds in the final three days. TIPP got it wrong, Gallup got it very wrong, and Slate’s vote-share formula got it very, very wrong. Who got it right? Pew again. In its final report, Pew predicted that undecideds ‘may break only slightly in Kerry’s favor.’ With 6 percent of voters undecided in the week before the election, Pew added 3 percent to Bush’s total and 3 percent to Kerry’s."
[Mystery Pollster was also quite wrong on this one].
On "Automated" Surveys: "Rasmussen and SurveyUSA beat their human competitors in the battleground states, often by large margins… when the two major automated pollsters beat the three major human pollsters across the board, it’s time to broaden the experiment in automated polling and compare results to see what’s working and why. Clearly, the automated pollsters are onto something, and the human pollsters will have to figure out how to beat it-or join it."
Read it all. This piece is also a reminder to Mystery Pollster that he needs to wrap up his review of exit polls and move on to these other important lessons from the 2004 elections.
USA Today‘s Mark Memmott reported yesterday that Congressman John Conyers (D-MI) is requesting "raw" exit poll data from the news media outlets that sponsored the National Election Pool (NEP) poll. Memmott’s piece includes the caution (echoed by MP) that "most polling experts who have studied exit polls doubt the data would be of use." Buried in the story is a bit more on the internal report being prepared by Mitofsky International & Edison Research:
Edie Emery, a spokeswoman for the [NEP] consortium, said the group did not want to comment on Conyers’ request. She said that, as after past elections, much of this year’s data "will be archived at the Roper Center at the University of Connecticut in early 2005."
In addition, she said, the firms that produced the exit polls are reviewing this year’s results and will submit a report to the AP and networks "in mid- to late-January" [emphasis added].
Finally, the Caltech/MIT Voting Technology Project has just released an addendum to their original report on "Voting Technologies and Underestimate of the Bush Vote." Regular readers will recall that their otherwise intriguing analysis unknowingly used exit poll data that had already been "corrected" (or reweighted) to match the actual count. Their addendum repeats the same analysis using two sets of uncorrected data reported by Steven Freeman and others. Their conclusion remains the same:
There is no statistically significant correlation between the use of voting methods in states and the size of the exit poll discrepancies….
The attention paid to the size and cause of exit poll discrepancies reveals a desire to use exit polls as a check on the honesty of election officials and the performance of voting systems. The design of the NEP exit polls makes it a blunt instrument for this sort of oversight. Therefore, much of the attention on these polls, seeking evidence of fraud, has been misplaced. More direct methods of election system auditing will be more effective [emphasis added].
The report addendum includes much supporting analysis and data. It is very well done and worth reading in full.
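For readers who want a concrete sense of the test involved, a state-level correlation check of the sort the addendum describes can be sketched in a few lines of Python. The figures and variable names below are invented for illustration and are not the Caltech/MIT data; the addendum itself works from the reported exit poll discrepancies and actual equipment-usage figures.

# A sketch (not the Caltech/MIT code) of a state-level correlation check:
# does the share of a state's vote cast on a given technology track the size
# of its "red shift"? All numbers below are invented for illustration.
from scipy.stats import pearsonr

dre_share = [0.10, 0.55, 0.00, 0.30, 0.75, 0.20]   # hypothetical fraction of each state's vote cast on DREs
red_shift = [1.8, 0.9, 2.4, 1.1, 0.7, 2.0]         # hypothetical official Bush % minus exit poll Bush %

r, p_value = pearsonr(dre_share, red_shift)
print(f"r = {r:.2f}, p = {p_value:.3f}")
# A small coefficient with a large p-value is the pattern the addendum reports:
# no statistically significant relationship between voting method and shift.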
>Rasmussen and SurveyUSA beat their human competitors in the battleground states, often by large margins…
Here’s my usual question on this: beat in what sense?
Did they simply predict the winner better? The % totals? The margin? What?
I would argue that the only important statistics are how close they got to the actual percentages and whether they were within their own MOE. I ran the numbers for SUSA (auto), ARG (human), and Zogby (human) for some of the primary states and found that both ARG and SUSA were outside their MOE roughly half the time, while Zogby was outside about a quarter of the time. That, to me, is a much more interesting indication of the value of their methodology than “beating” someone else.
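Roughly, a check like that amounts to the following. The poll numbers and sample size here are hypothetical, and this uses the textbook 95% margin for a single proportion; individual firms may publish something different.

import math

def within_moe(poll_pct, actual_pct, n, z=1.96):
    """True if the actual result falls inside the poll's 95% margin of error."""
    p = poll_pct / 100.0
    moe = z * math.sqrt(p * (1 - p) / n) * 100.0
    return abs(poll_pct - actual_pct) <= moe

# Hypothetical example: a 600-person poll showing a candidate at 47%,
# actual result 49.5%. The MOE works out to about 4 points, so this passes.
print(within_moe(47.0, 49.5, 600))   # True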
As shown on my sample-weighting website, I have a different take from Kenner and Saletan on the issue of weighting by party ID. I think party weighting acquitted itself pretty well and was far preferable to the many polls during the election cycle (primarily, but not exclusively, from Gallup) showing GOP edges of several percent in sample composition.
http://www.hs.ttu.edu/hdfs3390/weighting.htm
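For anyone unfamiliar with the mechanics, weighting by party amounts to rescaling respondents so the sample’s party mix matches a chosen target. A bare-bones sketch follows; the target shares and sample counts are invented, and real pollsters weight on several variables at once.

# A minimal sketch of weighting a sample to a target party-ID distribution.
# The target and the sample counts are invented; actual pollsters derive
# targets from past exit polls, rolling averages, or other models.
sample = {"Dem": 340, "Rep": 290, "Ind": 370}      # hypothetical respondent counts
target = {"Dem": 0.37, "Rep": 0.35, "Ind": 0.28}   # hypothetical target shares

n = sum(sample.values())
weights = {party: target[party] / (count / n) for party, count in sample.items()}
print(weights)   # multiply each respondent's answers by their party's weight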
Thanks to Mark for the friendly link — let me take the question of the first poster.
Here was the method I used for determining who “beat” whom. I took the difference between Kerry’s actual vote percentage and the percentage a pollster predicted for him, then did the same for Bush, and added the two (absolute) differences together. That sum was the number of percentage points a pollster was wrong by.
In a comparison of polls by live and automated pollsters in the battleground states, we found that the automated pollsters were wrong by less, on average, than the live pollsters (read the article for the exact details).
I didn’t compare the margins, because that’s a number that can be deceptive. If Bush beat Kerry 51-45 in some state, that would be a margin of 6. However, a pollster could produce a poll showing Bush winning 47-41, and that too would have a margin of 6 — but the pollster wouldn’t have done a very good job of predicting how many people would vote for each candidate. Another pollster, who predicted the same race at 50-46, would have done a far better job at predicting how the state would vote, even though his margin was slightly off.
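To make the distinction concrete, here is what the two scoring approaches look like applied to that hypothetical 51-45 race. This is a quick sketch, assuming the candidate errors are summed in absolute terms; the helper names are mine.

def candidate_error(poll, actual):
    """Sum of absolute misses on each candidate's share, in points."""
    return sum(abs(p - a) for p, a in zip(poll, actual))

def margin_error(poll, actual):
    """Absolute difference between the predicted and actual margins."""
    return abs((poll[0] - poll[1]) - (actual[0] - actual[1]))

actual = (51, 45)    # Bush, Kerry -- the hypothetical race above
poll_a = (47, 41)    # right margin, wrong levels
poll_b = (50, 46)    # margin slightly off, levels close

print(candidate_error(poll_a, actual), margin_error(poll_a, actual))   # 8 0
print(candidate_error(poll_b, actual), margin_error(poll_b, actual))   # 2 2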
>I didn’t compare the margins, because that’s a number that can be deceptive.
True. But, for example, SUSA gives itself great marks on accuracy by exactly such a type of analysis and urges people to buy their products on that basis.
But it still begs the question – is your analysis really statistically significant? You make a blanket statement at the end:
“But when the two major automated pollsters beat the three major human pollsters across the board, it’s time to broaden the experiment in automated polling and compare results to see what’s working and why. Clearly, the automated pollsters are onto something, and the human pollsters will have to figure out how to beat it—or join it.”
A quick look at the data you present in the article shows that of the examples you cite, perhaps on Gallup is conclusively outside MOE (I didn’t run the numbers as I don’t have the data but I eyeballed it from what I recall about the various polls). Thus, perhaps only one of the human pollsters was effectively beaten by the automated ones. So, are the automated pollsters really on to something?
In any case, supposing the conclusions you come to are statistically backed up, here are some other considerations:
1) Autopollsters (SUSA in particular) often have a large MOE, making their results look a little better compared to firms that use larger samples and potentially masking methodological error. This point, in particular, makes me worry about the difficulty of creating a true apples-to-apples comparison.
2) Robocalls tend to push leaners rather hard for an answer (not many DKs or undecideds). This may account for more accurate polling as it factors in more leaners than most human pollsters. Is this really a function of the automation or simply the choice to push leaners?
In Scott Pauls’s message above (Dec. 10, 2:27 PM), where it says:
“…perhaps on Gallup is conclusively outside MOE”
am I correct that you meant “only Gallup”?
“am I correct that you meant ‘only Gallup’?”
You are correct – typing too fast.
Mark:
It appears that you have the CalTech/MIT original study and addendum links backwards.
Scott Pauls: see Rasmussen’s final results state by state at http://www.rasmussenreports.com/State%20by%20state%20comparisons%202004.htm
For their national results, see http://www.rasmussenreports.com/Presidential_Tracking_Poll.htm
Mark, are you sure about the revised Caltech-MIT analysis?
“It is very well done…”
They used the CNN data rounded to a tenth and didn’t consider the confidence intervals (that I can see).
Significant digits….
“Note: A state’s “Red Shift” is defined as the percentage of the vote received by Bush in the official election returns minus the percentage of support received by Bush in the exit poll.”
That means both rules regarding adding/subtracting (see “minus” above) and multiplying/dividing (Freeman extrapolation) apply. As I understand it, both the exit poll proportion and the election result proportion should be rounded to whole digits according to the rule. Am I missing something here? I might be, but forget that for a moment…
How can they correlate data from 40+ states when each state has its own margin of error?
This from the study:
“That is, states that used more optical scanners or more DREs had slightly smaller Red Shifts than those that used other methods. However, these correlation coefficients are also so small that they are not statistically significant.”
Aren’t “these correlation coefficients” based on the assumption that the observed variance between the exit poll and the election result is significant in the first place? If there is no significant variance, then the observed variance *could be* attributable to sampling error. That is why calculating the Z-score and p-value for the difference in each state is necessary FOR EACH PROPORTION!
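For concreteness, here is roughly what such a per-state check would look like. It is a rough sketch with invented numbers; it treats the official count as the reference proportion and ignores the design effect from the NEP’s cluster sampling, which would widen the intervals further.

import math
from scipy.stats import norm

def exit_poll_z(poll_share, official_share, n):
    """z-score and two-sided p-value for an exit poll share vs. the official count."""
    se = math.sqrt(official_share * (1 - official_share) / n)
    z = (poll_share - official_share) / se
    return z, 2 * norm.sf(abs(z))

# Hypothetical state: exit poll has Bush at 48.5% among 1,800 respondents,
# official count 51.0%.
z, p = exit_poll_z(0.485, 0.510, 1800)
print(f"z = {z:.2f}, p = {p:.3f}")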
I’m going to read this study again in the morning to make sure it’s not the fatigue talking, but as of now, I think this is terrible analysis. Garbage in – garbage out.
Caltech-MIT Exit Poll Analysis Flawed (again)
Although they may be correct in their conclusion… the Caltech-MIT folks have not proved it with this recent addendum.
Uhh… sorry Mark about all the trackbacks… My trackback pinger kept telling me the ping failed.
Do undecided voters break for the challenger?
When pollsters interpret results for a race involving an incumbent, they have typically applied the rule that incumbents rarely get a higher percentage in the election than they receive in polls, and that voters still undecided on the very last…