Exit Polls: CalTech/MIT Report


Two new reports on exit polls came to my attention over the weekend. Both reports were written by high-powered PhDs from high-powered institutions, and both contribute to the ongoing debate over (or the effort to debunk) theories of vote fraud in this year’s election. Unfortunately, I am not sure either report brings us much closer to resolving the underlying controversy.

I’ll take up the first report on “Voting Machines and the Underestimate of the Bush Vote” from the CalTech/MIT Voting Project in this post. The folks at the Voting Project set out to debunk one popular theory initially floated on the web — that exit polls showed a greater discrepancy with actual results in states with newer electronic voting machines that lack a paper trail.

They knew from their own extensive work on the new voting technology that most states used a mix of voting machines in 2004. So they tried something novel: They produced graphs that plotted the percentage of votes cast by various voting machines (paper ballots, lever machines, optical scan and touchscreen) in each state against the percentage discrepancy on the exit poll for each state. If, for example, touchscreen (or DRE – “direct recording electronic”) voting enabled widespread fraud, then the size of the discrepancy between the exit polls and reality should increase as the percentage of votes cast by DRE increases. The charts show no such pattern by any type of voting equipment, so the CalTech/MIT researchers concluded, “there is no evidence that electronic voting machines were used to steal the 2004 election for George Bush.”
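To make the approach concrete, here is a rough sketch in Python of the kind of state-by-state plot the report describes. This is not the Voting Project’s actual code, and the input file and column names (dre_share, exit_poll_bush, official_bush) are my own placeholders:

```python
# Sketch of the report's approach (not the Voting Project's code): plot each
# state's share of votes cast on DREs against the gap between the exit poll
# estimate and the official result. File and column names are placeholders.
import pandas as pd
import matplotlib.pyplot as plt

states = pd.read_csv("state_equipment_and_polls.csv")  # hypothetical data file

# Discrepancy: exit poll estimate of Bush's share minus his official share,
# in percentage points.
states["discrepancy"] = states["exit_poll_bush"] - states["official_bush"]

# If DRE voting enabled widespread fraud, the discrepancy should grow as the
# DRE share of the vote grows; a flat, patternless scatter argues against it.
plt.scatter(states["dre_share"], states["discrepancy"])
plt.xlabel("Share of votes cast on DRE (touchscreen) machines")
plt.ylabel("Exit poll minus official result (Bush, pct. points)")
plt.title("Exit poll discrepancy vs. DRE usage, by state")
plt.show()
```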

One problem though. As an emailer to RottenDenmark pointed out on Saturday, the CalTech/MIT report used the "corrected" exit poll data now available online at CNN.com. As regular readers of this site know, NEP weights (or adjusts) the exit polls so that their tabulations of vote preference match reality. This is a long-time standard practice for the national network exit polls.

How do we know that the CalTech/MIT report used the corrected data? Remember what you learned in college, “always read the footnotes.” Footnote #2 tells us:

The exit poll data were taken from the cnn.com web site. The poll data can be accessed through http://www.cnn.com/ELECTION/2004/pages/results/index.html. Because the web site does not report the bottom line candidate percentages directly, we had to calculate them from the demographic breakdowns. In this case, we estimated the Bush percentage of votes in the exit polls using the gender breakdown. For instance, 54% of the respondents in Florida were women, 46% men. Women gave 50% of their votes to Bush, men, 53%. Therefore, Bush’s overall share of the exit poll in Florida was calculated as (54% x 50%) + (46% x 53%) = 51.38%.
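That calculation is nothing more than a gender-weighted average. For the arithmetic-minded, here it is spelled out in a few lines, using the Florida figures from the footnote:

```python
# The footnote's Florida calculation, spelled out: Bush's overall exit poll
# share is the gender-weighted average of his share among women and men.
share_women, share_men = 0.54, 0.46            # gender mix of the FL exit poll sample
bush_among_women, bush_among_men = 0.50, 0.53  # Bush's share within each group

bush_overall = share_women * bush_among_women + share_men * bush_among_men
print(f"{bush_overall:.4f}")  # 0.5138, i.e. 51.38%
```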

Now go to the CNN site, and retrieve the results for Florida. You will notice that the current vote-by-gender numbers reported by CNN match the numbers in the above footnote. In other words, the numbers analyzed in the CalTech/MIT paper are the numbers NEP weighted to match the actual result. They are NOT the unweighted, end-of-day poll results that have been the object of all the speculation, the ones that showed Kerry doing better in most states than he did in the actual count. Of course, the final weighted vote numbers used in the CalTech/MIT report still show very small, seemingly random discrepancies. This is presumably due to rounding of the four percentages (the vote by gender and the percentage male and female) they used to calculate each candidate’s vote for each state.
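To see how much mischief that rounding can cause, here is a quick illustration with invented numbers: because CNN displays each of the four inputs rounded to a whole percentage, a candidate share reconstructed from those tables can drift a few tenths of a point from the figure NEP actually weighted to.

```python
# Illustration (with invented numbers) of the rounding issue: CNN displays
# each of the four inputs rounded to a whole percentage, so a share
# reconstructed from those tables can miss the weighted figure slightly.
exact = dict(women=0.544, men=0.456, bush_women=0.496, bush_men=0.526)
shown = {k: round(v, 2) for k, v in exact.items()}  # what the CNN tables display

def bush_share(v):
    return v["women"] * v["bush_women"] + v["men"] * v["bush_men"]

print(f"from exact inputs:   {bush_share(exact):.4f}")  # ~0.5097
print(f"from rounded inputs: {bush_share(shown):.4f}")  # ~0.5138, about 0.4 points off
```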

In case there is any doubt, Footnote #5 of the CalTech/MIT report also notes three “outliers,” states whose exit poll numbers looked very different from the actual results:

Rhode Island gave 47.4% support to Kerry (sic) in the exit poll, compared to the actual 38.9%. The two other statistically significant differences were Oklahoma (59.2% exit poll vs. 65.6% official return) and New York (31.9% exit poll vs. 40.5% official return). Note that none of these states was a “battleground state.”

Two problems here. The first is that they obviously meant that Bush, not Kerry, got 38.9% in Rhode Island, 65.6% in Oklahoma and 40.5% in New York. A simple typo, no big deal. The second: go back to the results at CNN.com and try to replicate their calculations for these three states. The numbers are now different. I get the following percentages for Bush on the exit polls: 39% in Rhode Island, 65% in Oklahoma and 41% in New York. Obviously, they now differ from reality by no more than a single percentage point. The CalTech/MIT researchers probably grabbed results for these three states before NEP got around to weighting them to match actual results.

Thus, the charts in the CalTech/MIT report don’t really tell us much. They are essentially analyzing rounding error.

Now, I am assuming this is just an honest mistake. It happens to the best of us.  Moreover, the Voting Project researchers were on the right track.  They can probably replicate this analysis using data captured before NEP weighted the exit polls to match the count.  I would not be surprised if they reached the same conclusion [Clarification: I am guessing the analysis will show no significant relationship between the type of voting equipment and the exit poll discrepancy].

More important, as noted here before, the analysis that the Voting Project researchers attempted could be done with far more precision and power using the raw exit poll data. The exit polls track the type of voting equipment down to the precinct level. So if large discrepancies occurred in DRE precincts nationwide, but nowhere else (a very big “if”), the data would show it. Again, I’m dubious, but this would be a very easy theory to “debunk.” Unfortunately, NEP officials have so far been reluctant to discuss their data.
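If NEP ever did open the books, the test itself would take only a few lines. A hedged sketch, assuming raw data with one row per sampled precinct and placeholder column names (the real data layout is not public):

```python
# Hedged sketch of the precinct-level check, assuming raw NEP data with one
# row per sampled precinct. Column names are placeholders; the real layout
# is not public.
import pandas as pd

precincts = pd.read_csv("nep_precinct_sample.csv")  # hypothetical raw data file
precincts["discrepancy"] = precincts["exit_poll_bush"] - precincts["official_bush"]

# If fraud happened on DREs but nowhere else, DRE precincts should show a
# systematically different discrepancy than paper, lever, or optical scan precincts.
print(precincts.groupby("equipment")["discrepancy"].agg(["mean", "std", "count"]))
```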

So…

Attention Keith Olbermann!: You want to “continue to cover [voting angst] with all prudent speed?” Excellent. Here is one piece of the puzzle you can help solve. The good news is, you don’t need to find some “Deep Throat” informant or submit a Freedom of Information Act request. Just call up NBC’s polling director and ask. OK, true, you may need to convince a few colleagues at the other networks to do the same. Nonetheless, the networks own and control the NEP exit poll data, so I’m sure they’ll gladly help “debunk” this controversy by making the relevant data available. Right? Or does that whole “right to know” thing only apply to everyone else?

Next up, another exit poll report by another MIT PhD…

Mark Blumenthal

Mark Blumenthal is a political pollster with deep and varied experience across survey research, campaigns, and media. The original “Mystery Pollster” and co-creator of Pollster.com, he explains complex concepts to a wide range of audiences and shows how data informs politics and decision-making. He is a researcher and consultant who crafts effective questions and identifies innovative solutions to deliver results, and an award-winning political journalist who draws insights and compelling narratives from chaotic data.