The Freeman Paper

Exit Polls Legacy blog posts

And speaking of MIT educated PhDs…

The latest “must read” among those who want to pursue theories that the vote count was wrong and the exit polls were right (or who want to debunk them) is a paper released by an MIT PhD named Stephen F. Freeman, now a Visiting Scholar in Organizational Dynamics at the University of Pennsylvania. His report, entitled “The Unexplained Exit Poll Discrepancy,” is available for download here and here.

Freeman’s paper makes one very helpful contribution to this debate. He reports exit poll results captured by CNN just after midnight on Election Night. He extrapolates from vote-by-gender tabulations for 11 battleground states that appear to be the last posted before the complete samples were weighted to conform to the reported results (although the sample sizes are slightly lower than those now posted online). Given how late they appeared on the CNN website, they are presumably weighted by actual turnout, although absent confirmation from the National Election Pool (NEP) we will never know for certain.

Freeman’s data confirms the consistent skew to Kerry evident in leaked exit poll numbers posted on blogs earlier in the day (see my earlier post on this topic). “In ten of eleven consensus battleground states,” Freeman writes, “the tallied margin differs from the predicted margin, and in every one, the shift favors Bush.”

[An aside: Freeman justified his list of battleground states with a footnote: “These eleven states are classified as battleground states based on being on at least 2 of 3 prominent lists, Zogby’s, MSNBC, and the Washington Post.” Okay, fair enough, but if Freeman has data for other states, why not release it all? Or would that make the pattern less consistent?]

But Freeman is not content to confirm the small but consistent skew to Kerry in the exit polls. His paper makes three arguments: (1) Exit polls can “predict overall results” in elections “with very high degrees of certainty,” (2) the odds against unusual “anomalies” in just three states — Florida, Pennsylvania and Ohio — “are 250 million to one” and (3) none of the official “explanations” (his quotations, not mine) for the discrepancies are persuasive. So while he cautions against “premature” conclusions of “systematic fraud or mistabulation,” he nonetheless sees vote fraud as “an unavoidable hypothesis.”

I have problems with all three arguments. Let me take them one at a time.

1) Exit polls “predict overall results” in elections “with very high degrees of certainty.”

Freeman says exit polls have “high degrees of certainty” because:

It’s easy to get a statistically valid sample; and there is not a problem with figuring out who is going to vote – or how they will vote.

Then Freeman quotes two “experts”: Dick Morris, who says “exit polls are almost never wrong,” and Thom Hartman, who says German exit polls “have never been more than a tenth of a percent off.” He also cites an exit poll conducted by students at BYU that was off by only two tenths of a percent this year.

Whoa, whoa, whoa.

I can set aside, for a moment, my qualms about Dick Morris as an expert on exit poll methodology, and I will suspend disbelief about Hartman’s claims about the German exit polls until I learn more. However, Freeman’s assertion that it is “easy” for an exit poll to get a statistically valid sample is unconvincing.

It is true that exit polls have no problem identifying “likely voters,” but they trade that problem for a huge set of logistical challenges. The national exit polls hire 1,500 interviewers for just one day of work every two years and deploy them to randomly chosen precincts nationwide. Telephone surveys can train and supervise interviewers in a central facility. No such luck for exit polls. They depend on interviewers with relatively little prior experience or training. This year, in fact, NEP conducted most of its interviewer training by telephone. Yes, exit pollsters can easily draw a statistically valid sample of precincts, but some interviewers will inevitably fail to show up for work on Election Day. NEP tries to deploy substitutes to fill the gaps, but some precincts inevitably go uncovered. In 2000, 16 percent of sampled precincts went uncovered (Konner, 2003; although this statistic may have applied to those covering both the exit poll and sampled “key precincts”).

Next, consider the challenges facing each interviewer as they attempt to randomly select voters emerging from the polling place (some of which I learned about in recent emails from NEP interviewers): Interviewers typically work each precinct alone, soliciting participation from every “nth” voter to exit the polling place (the “n” interval is typically between 3 and 5). But these interviewers must also break away to tabulate responses and call in results three separate times during the day. They make their last call about an hour before the polls close and then stop interviewing altogether. If too many voters emerge from the polling place at once, they will miss some potential respondents. If polling place officials are not cooperative, the interviewer may have to stand so far from the polling place that they cannot intercept voters or are lost in the inevitable gaggle of electioneering partisans. If several precincts vote at a single polling place, the interviewer has no way to identify voters from the specifically selected precinct and must sample from all of those who vote at that polling place.
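The cumulative effect of those interruptions on the sample can be illustrated with a toy simulation. Every parameter here — the sampling interval, the timing and length of the call-in breaks, the flow of voters — is hypothetical, invented purely for illustration; the point is only that designated “nth” voters who exit while the interviewer is tabulating are simply lost:

```python
import random

random.seed(1)

# Hypothetical setup: one interviewer samples every 4th exiting voter
# over a 12-hour (720-minute) day, but steps away for three 15-minute
# windows to tabulate responses and call in results.
INTERVAL = 4
BREAKS = [(180, 195), (420, 435), (660, 675)]  # minutes after polls open

def on_break(t):
    return any(start <= t < end for start, end in BREAKS)

# 900 voters exit at random times during the day
exits = sorted(random.uniform(0, 720) for _ in range(900))

sampled = missed = counter = 0
for t in exits:
    counter += 1
    if counter == INTERVAL:      # this voter is a designated respondent
        counter = 0
        if on_break(t):
            missed += 1          # interviewer was tabulating; voter lost
        else:
            sampled += 1

print(sampled, missed)
```

Even brief breaks claim a share of the designated respondents — and that is before adding no-show interviewers, uncooperative officials, or surges of voters.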

All of these real world factors make it hard, not easy, for an exit poll to get a “statistically valid sample.” That’s why Warren Mitofsky, the NEP official who helped invent the exit poll, describes them as “blunt instruments,” and why Joan Konner, former dean of the Columbia School of Journalism, concluded in a review last year for Public Opinion Quarterly that “exit polls do not always reflect the final margin” (Konner, 2003, p. 10).

Remember, the networks use exit polls to project the outcome only in states where a candidate leads by a margin far in excess of mere sampling error, states like New York or Utah. They did not depend on exit polls alone to call any of the 11 battleground states in Freeman’s table because they know that exit polls lack the laser precision that Freeman implies. And discrepancy or not, they called every state right.

2) The odds against the unusual “anomalies” in just three states — Florida, Pennsylvania and Ohio — “are 250 million to one.”

The important point here is that everyone, even the officials from NEP, now concedes that the exit polls showed a small but statistically significant bias in Kerry’s direction across most states in 2004 before they were weighted to match the actual results. Freeman’s data show Kerry doing an average of 1.9 percentage points better than the actual count in the 11 states for which he has data. In a public appearance last week, Joe Lenski of the NEP reported that the exit polls had “an average deviation to Kerry” of 1.9 percentage points – exactly the same number. Warren Mitofsky confirmed Lenski’s comments in an email to me over the weekend.

Also, as I noted here on November 4, Kerry’s standing in exit polls exceeded the actual result in 15 of the 16 states for which Slate’s Jack Shafer posted results at 7:38 EST on Election Night. Freeman’s data show the same pattern in 10 of 11 states. This is akin to flipping a coin and having it come up heads 10 of 11 times, an outcome with a probability of 0.6% or 167 to 1.
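The coin-flip arithmetic is easy to check. Treating each state as a fair coin under the hypothesis of unbiased polls (a simplification that ignores cluster effects and differing sample sizes):

```python
from math import comb

# Under the hypothesis of unbiased polls, each state's exit poll is
# equally likely to lean toward either candidate relative to the count.
# Probability that at least 10 of 11 states lean the same way:
p = sum(comb(11, k) for k in (10, 11)) / 2 ** 11
print(round(p * 100, 2))  # 0.59 -- roughly 0.6%, odds on the order of 167 to 1
```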

Freeman is right when he says it is nearly impossible to explain these discrepancies by sampling error alone. Having said that, his 250 million to 1 statistic is exaggerated. The reason is that Freeman assumes “simple random sampling” (see his Footnote 15). Exit polls are well known to use “cluster sampling”: they first select precincts, not people, and then try to randomly select multiple voters at each cluster. While NEP reports only minimal information about sampling error (“4% for a typical characteristic from…a typical state exit poll”), an analysis of the 1996 exit polls by those who helped conduct them estimated that the cluster sample design adds “a 30 percent increase in the sampling error computed under the assumption of simple random sampling” (Merkle and Edelman, 2000, p. 72). That study is useful because the 1996 state exit polls involved roughly the same number of precincts (1,468) as this year’s polls (1,480). Merkle and Edelman also provided a table of the estimated “clustered” sampling error that I have adapted below.

Having said that, the observed discrepancies from the actual count in Freeman’s data still appear to be statistically significant in Ohio, Florida and Pennsylvania using the Merkle & Edelman margins of error. If NEP were to provide the actual “p-values” (the probability that a discrepancy this large arose by chance) for all three states, and we multiplied them as Freeman did, the real odds that this happened by chance alone are still probably at least 1,000,000 to 1. In a business where we are typically “certain” when there is a 5% chance of error (i.e., 1 in 20), one in a million is still pretty darn certain. Still, you can decide for yourself why Freeman chose to ignore a well-known facet of exit poll design and report the most sensational number available.
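For readers who want the mechanics, here is a sketch of how one would compute and combine per-state probabilities with the 30 percent cluster inflation applied to the simple-random-sampling standard error. The discrepancies and sample sizes below are placeholders — NEP has not released the actual figures — so the method, not the printed result, is the point:

```python
from math import sqrt, erf, prod

def p_value(discrepancy_pts, n, design_factor=1.3):
    """Two-tailed p-value for a candidate-share discrepancy (in points),
    with the simple-random-sampling standard error inflated ~30% for the
    cluster design, per Merkle & Edelman (2000). Normal approximation."""
    se = design_factor * sqrt(0.25 / n) * 100  # SE of a ~50% share, in points
    z = discrepancy_pts / se
    return 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))

# Placeholder discrepancies (points) and sample sizes -- NOT the real
# NEP figures, which have not been released.
states = {"OH": (3.0, 1963), "FL": (2.5, 2846), "PA": (3.5, 1930)}
p_each = {s: p_value(d, n) for s, (d, n) in states.items()}
combined = prod(p_each.values())  # multiplying across states, as Freeman did
print(p_each, combined)
```

With placeholder inputs of this size, each state remains individually significant at the conventional 5% level, and the product is smaller still — but far less dramatic than 250 million to 1.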

3) None of the official “explanations” are persuasive

Freeman notes the claim by the New York Times‘ Rutenberg that NEP’s internal report had “debunked” theories of vote fraud (something I wrote about here) and laments, “it does not explain beyond that declaration how the possibility was debunked.” That is correct. I can add one new wrinkle: A reporter who had been working on the story shared a rumor that the Times story mischaracterized the NEP report, that it never used the word “debunked” to describe theories about vote fraud. I put this question to Warren Mitofsky via email, and he refused to characterize the report in any way, except to describe it as confidential.

Freeman argues that pollsters can magically weight away the effects of non-coverage or the demographic skews caused by non-response. But since the only measure of the demographics of actual voters on Election Day is the exit polls themselves, what exactly would they weight to?

Regarding the possibility that the polls sampled too many women, he quotes Dick Morris:

The very first thing a pollster does is weight or quota for gender. Once the female reaches 52 percent of the sample, one either refuses additional female respondents or weights down the ones subsequently counted. This is, dear Watson, elementary.

It may be elementary to Watson, but it is flat wrong to those who know exit polls. Telephone surveys typically set quotas for gender (because women are more likely to answer the phone), but exit polls do not. That’s why the exit polls report different percentages of men and women from state to state. So much for Dick Morris, exit poll methodologist.

Freeman also dismisses the theory suggested by NEP’s Warren Mitofsky, that “Kerry voters were more anxious to participate in our exit polls than the Bush voters” as a mere hypothesis:

The problem with this “explanation” or even one that would have considerably more “face validity” (which means that it makes sense on the face of it)…is that it is not an explanation but rather a hypothesis. It’s apparent that “Kerry voters were much more willing to participate in the exit poll than Bush voters” only given several questionable assumptions. An explanation would require independent evidence.

Well of course it would. So would the “explanation” of vote fraud.

First, it is worth noting that NEP officials agree. Salon.com’s Farhad Manjoo recently reported the following:

[The NEP’s Joe] Lenski told me that such a probe [of what went wrong] is currently underway; there are many theories for why the polls might have skewed toward Kerry, Lenski said, but he’s not ready to conclude anything just yet. At some point, though, he said we’ll be able to find out what happened, and what the polls actually said.

Let’s hope that happens soon. For now, consider whether any of the following adds “face validity” to the notion that “Kerry voters were much more willing to participate than Bush voters:”

a) This discrepancy favoring Democratic candidates is not new.

Consider this excerpt from a report by Warren Mitofsky published last year in Public Opinion Quarterly:

An inspection of within-precinct error in the exit poll for senate and governor races in 1990, 1994 and 1998 shows an understatement of the Democratic candidate for 20 percent of the 180 polls in that time period and an overstatement 38 percent of the time…the most likely source of this error is differential non-response rates for Democrats and Republicans (Mitofsky, 2003, p. 51).

So the state exit polls overstated the Democratic candidate’s performance nearly twice as often as they understated it.

Or consider this from Joan Konner’s report published in the same issue:

A post-election memo from Mitofsky and Joe Lenski, Mitofsky’s associate and partner on the election desk, stated that on election day 2000, VNS’s exit poll overstated the Gore vote in 22 states and understated the Bush vote in nine states. In only 10 states, the exit polls matched actual results. The VNS post-election report says its exit poll estimates showed the wrong winner in eight states (Konner, 2003, p. 11).

So much for the previously “high degrees of certainty” Freeman told us about.

b) Exit poll response rates have been declining.

The average response rates on the VNS exit polls fell from 60% in 1992 to 55% in 1996 to 51% in 2000 (Konner, 2003). NEP has not released a response rate for this year, but there has certainly been a downward trend over the last three elections.

Given a response rate of roughly 50%, the differences in response between Bush and Kerry supporters would not need to be very big to skew the results. Let me explain: I put the vote-by-party results for Ohio into a spreadsheet. I can replicate the skew in Ohio (one that makes Kerry’s vote 3 percentage points higher than the count and Bush’s 3 percentage points lower) by assuming a 45% response rate for Republicans and a 55% response rate for Democrats. Not a big difference.
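The same arithmetic can be sketched in a simplified two-candidate form. This is not the vote-by-party spreadsheet described above — collapsing everything to candidate shares means an even smaller response-rate gap suffices for a 3-point skew — and all shares and rates below are illustrative:

```python
# Simplified two-candidate illustration of differential non-response.
# True vote shares approximate the Ohio count; the response rates are
# illustrative (the vote-by-party version requires a larger ~45%/55% gap).
true_share = {"kerry": 0.485, "bush": 0.510, "other": 0.005}
resp_rate  = {"kerry": 0.53,  "bush": 0.47,  "other": 0.50}

# Completed interviews = true share x willingness to respond
completed = {c: true_share[c] * resp_rate[c] for c in true_share}
total = sum(completed.values())
poll = {c: completed[c] / total for c in completed}

skew = (poll["kerry"] - true_share["kerry"]) * 100
print(round(skew, 1))  # 3.0 -- a 6-point response gap yields a 3-point skew
```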

c) Perceptions of news media bias are consistently higher among Republicans and rising.

According to a study conducted in January 2004 by the Pew Research Center, 42% of Republicans believe news coverage of the campaign is biased in favor of Democrats, compared to only 29% of Democrats who believe news coverage is biased in favor of the Republicans. The overall percentage that believes the news is free of any form of bias has declined dramatically over the last seventeen years: 67% in 1987, 53% in 1996, 48% in 2000 and 38% this year.

Now consider that when exit poll interviewers make their pitch to respondents, they are supposed to read this script (the text comes from NEP training materials shared via email by an interviewer):

Hi. I’m taking a short confidential survey for the television networks and
newspapers. Would you please take a moment to fill it out?

I am taking a public opinion survey only after people have voted and it is completely anonymous. It is being conducted for ABC, the Associated Press, CBS, CNN, Fox and NBC, not for any political candidate or party.**

The questionnaire they presented, and the identifying badge they wore, were both emblazoned with this logo:** [logo image]

So to summarize: [If you want to explain the exit poll discrepancy] Absent further data from NEP, you can choose to believe that an existing problem with exit polls got worse this year in the face of declining response rates and rising distrust of big media, that a slightly higher number of Bush voters than Kerry voters declined to be interviewed. Or, you can believe that a massive secret conspiracy somehow shifted roughly 2% of the vote from Kerry to Bush in every battleground state, a conspiracy that fooled everyone but the exit pollsters – and then only for a few hours – after which they deliberately suppressed evidence of the fraud and damaged their own reputations by blaming the discrepancies on weaknesses in their data.

Please.

Don’t get me wrong. I am disturbed by the notion of electronic voting machines with no paper record, and I totally support the efforts of those pushing for a genuine audit trail. If Ralph Nader or the Libertarians want to pay for recounts to press this point, I am all for it. I know vote fraud can happen, and I support efforts to pursue real evidence of such misdeeds. I am also frustrated by the lack of transparency and disclosure from NEP, even on such simple issues as reporting the sampling error for each state exit poll. Given the growing controversy, I hope they release as much data as possible on their investigation as soon as possible. The discrepancy also has very important implications for survey research generally, and pollsters everywhere will benefit by learning more about it.

Finally, I understand completely the frustration of Democratic partisans with the election results. I’m a Democrat too. Sure, it’s tempting to engage in a little wishful thinking about the exit polls. However, to continue to see evidence of vote fraud in the “unexplained exit poll discrepancy” is more than wishful. It borders on delusional.

[11/19 – Clarification added in the third to last paragraph.  See some additional thoughts here]

Update: Mayflower Hill has an exclusive interview with Warren Mitofsky conducted earlier today. Using the type of analysis anticipated previously on this site, Mitofsky explains that his data show no evidence of fraud involving electronic voting machines.

Offline Sources on the “jump:”

**Correction/Update – 8/15/2006 – The introduction by interviewers originally included in this post was the one intended for interviewers to use to introduce themselves to polling place officials, not to introduce themselves to voters.  Also the logos displayed on the questionnaires were black and white, not color.

Konner, Joan (2003). “The Case for Caution.” Public Opinion Quarterly 67(1):5-18.

Merkle, Daniel M. and Murray Edelman (2000). “A Review of the 1996 Voter News Service Exit Polls from a Total Survey Error Perspective.” In Election Polls, the News Media and Democracy, ed. P.J. Lavrakas and M.W. Traugott, pp. 68-92. New York: Chatham House.

Merkle, Daniel M. and Murray Edelman (2002). “Nonresponse in Exit Polls: A Comprehensive Analysis.” In Survey Nonresponse, ed. R. M. Groves, D. A. Dillman, J. L. Eltinge, and R. J. A. Little, pp. 243-58. New York: Wiley.

Mitofsky, Warren J. (2003). “Voter News Service After the Fall.” Public Opinion Quarterly 67(1):45-58.

Mitofsky, Warren J. (1991). “A Short History of Exit Polls.” In Polling and Presidential Election Coverage, ed. Paul J. Lavrakas and Jack K. Holley, pp. 83-99. Newbury Park, CA: Sage.

Mark Blumenthal

Mark Blumenthal is a political pollster with deep and varied experience across survey research, campaigns, and media. The original “Mystery Pollster” and co-creator of Pollster.com, he explains complex survey concepts to a wide range of audiences and shows how data informs politics and decision-making. As a researcher and consultant, he crafts effective questions and identifies innovative solutions to deliver results. He is also an award-winning political journalist who draws insights and compelling narratives from chaotic data.