Is RFK, Jr. Right About Exit Polls? – Part I

Exit Polls Legacy blog posts

Late last week, Rolling Stone published an article by Robert Kennedy, Jr. that asks provocatively, "Was the 2004 Election Stolen?"  While it covers many topics involving alleged suppression and fraud in Ohio, the article disappoints in its discussion of the exit poll controversy, because on that aspect Kennedy manages to dredge up nearly every long-ago discredited distortion or half-truth without acknowledging contrary arguments or the weaknesses in his own case.  It is as if the exit poll debate of the last eighteen months never happened. With this two-part post, I want to review the article’s discussion of the exit poll controversy in depth, for it provides a good opportunity to learn something about what exit polls can tell us — and mostly what they cannot — about whether fraud was committed in the 2004 elections. 

But before getting to exit polls I want to make two things clear.  First, despite its weaknesses, the Kennedy article raises some important and troubling questions about real problems in Ohio in 2004.  As Ohio State University Law Professor Dan Tokaji puts it, the article is "useful in exposing how shoddy election administration practices can result in lost votes, and how some recently enacted laws will make things worse rather than better."  The summary of problems deserving attention includes long lines in minority precincts, efforts of the Republican Party to selectively challenge (or "cage") new registrants and the many examples of pure incompetence by local election officials.  And then there is the partisanship of Republican Secretary of State Ken Blackwell, now his party’s nominee for governor.  Blackwell will need to answer to Ohio voters for, as Salon.com’s Farhad Manjoo writes, having "used his powers for partisan gain," issuing "a series of arbitrary and capricious voting and registration rules that could well have disenfranchised many people in the state" (interests disclosed: I am a Democratic pollster with clients in Ohio).

Second, while I have devoted 68 posts and tens of thousands of words to the exit poll controversy since Election Day 2004, I have never argued that the exit polls can be used to rule out or disprove the possibility that vote fraud may have occurred in Ohio or anywhere else in 2004.  The question has always been whether the exit polls provide affirmative evidence that fraud did in fact occur. This involves a very basic concept of statistical inquiry:  We assume no effect until one can be proven, or more technically, we assume a "null hypothesis" until we can prove some alternative.  The same principle exists in law as the presumption of innocence.  We do not assume a crime has been committed and work backwards to try to disprove it.  We presume innocence until enough evidence has been established to prove guilt. 

Everyone agrees that the 2004 exit poll results gathered by the news media consortium known as the National Election Pool (NEP) showed a small but statistically significant difference that favored John Kerry when compared to the official count.  But is that discrepancy evidence of fraud?  It might be, if we could rule out the possibility that other problems or potential sources of error in the exit polls could also explain the discrepancy.   What I have argued for the last year and a half is that the exit polls have many such weaknesses that have long been in evidence. 

At the center of the exit poll debate is a basic concept about polls that deserves a lot more attention:  Statistical sampling error — the random variation that comes from drawing a sample of voters rather than interviewing the whole population — is just one source of potential error in a survey.  There are others, including bias from sampled voters who decline to participate (response error), from voters missed altogether (coverage error), from questions that do not accurately measure the attitude of interest (measurement error) and from a failure to choose exiting voters at random using the correct sampling interval. 

The rest of this post (and the one or more parts that follow) will review the exit poll section of the RFK, Jr. Rolling Stone article line by line.  Passages from the article are in bold italics. 

The first indication that something was gravely amiss on November 2nd, 2004, was the inexplicable discrepancies between exit polls and actual vote counts. Polls in thirty states weren’t just off the mark — they deviated to an extent that cannot be accounted for by their margin of error.

It is certainly true that the 2004 exit poll estimates produced by the National Election Pool (NEP) generally overstated John Kerry’s share of the vote compared to the vote count.  That overstatement in statewide exit poll estimates averaged five (5) percentage points on the Bush-Kerry margin, according to the report that the exit pollsters, Edison Media Research and Mitofsky International, released in January 2005.   

The overstatement was slightly larger (5.5 percentage points) for the estimate of the national popular vote.  The national exit poll sample showed Kerry with 51% and Bush with 48%, but the final count showed a 2.5% margin (50.73% for Bush and 48.27% for Kerry).  It was larger still (6.5 percentage points) in terms of the average error within individual precincts — something the report termed "within precinct error" (WPE).   
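To make that arithmetic concrete, the "error on the margin" simply compares the poll's Kerry-minus-Bush spread to the official spread. A minimal sketch in Python, using the national figures above (the function name is mine, for illustration, not the pollsters' terminology):

```python
def margin_error(poll_dem, poll_rep, count_dem, count_rep):
    """Exit poll discrepancy 'on the margin': the poll's
    Dem-minus-Rep spread minus the official Dem-minus-Rep spread."""
    return (poll_dem - poll_rep) - (count_dem - count_rep)

# National 2004 figures cited above: Kerry 51 / Bush 48 in the exit poll,
# Kerry 48.27 / Bush 50.73 in the official count
print(round(margin_error(51.0, 48.0, 48.27, 50.73), 2))  # 5.46, i.e. ~5.5 points
```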

The key point:  Everyone — including the exit pollsters — agrees that the average discrepancy was statistically significant. 

In all but four states, the discrepancy favored President Bush.(16)

No.  While the discrepancy was certainly widespread, this sentence misstates the statistics provided in the citation, the Edison-Mitofsky report.  Even if we ignore statistical significance and simply count up the number of states where the exit poll showed Kerry doing better than the count, even by some small fraction of a percent, then the discrepancies favored Bush in all but nine states, not four (see pp. 22-23).   The reference to four states appears to come from the number of states where exit polls overstated Kerry’s vote by more than one standard error. But the equivalent number where the discrepancy favored President Bush by more than one standard error was 26 states, not "all but four."

And to try to translate that into something approximating English, a difference of one standard error or more means that we can be roughly 68% confident that the difference is meaningful.  "Statistical significance" is a subjective judgment — in the eye of the beholder — but in attitude surveys that term usually implies a confidence level of 95% or greater.
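For readers who want to check that conversion, the confidence level attached to "z standard errors" is just the probability mass of a normal distribution within ±z. A quick sketch of the standard result (not specific to these exit polls):

```python
from math import erf, sqrt

def two_sided_confidence(z):
    """Probability that a normally distributed estimate falls
    within +/- z standard errors of its true value."""
    return erf(z / sqrt(2))

print(round(two_sided_confidence(1.0), 3))   # 0.683 -> the "roughly 68%" above
print(round(two_sided_confidence(1.96), 3))  # 0.95  -> the conventional 95% threshold
```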

Aside from the distortion of the statistics, however, this point is not particularly relevant.  Again, everyone agrees that the overall exit poll discrepancy was widespread and statistically significant. 

Over the past decades, exit polling has evolved into an exact science. Indeed, among pollsters and statisticians, such surveys are thought to be the most reliable. Unlike pre-election polls, in which voters are asked to predict their own behavior at some point in the future, exit polls ask voters leaving the voting booth to report an action they just executed.

It is certainly true that exit polls benefit from having ready access to actual voters who have just made their choices.  Exit pollsters need not jump through hoops to identify "likely voters" nor find ways to allocate those who say they are "undecided."   And yes, if you look back at my first post on exit polls on Election Day 2004, I too described exit polls as "among the most sophisticated and reliable political surveys available."

However, I have certainly learned a great deal about exit polls since then, and calling them the "most reliable" of surveys ignores a host of other practical challenges.  Exit polls generally sample a larger number of voters than telephone polls, but they do so because the "cluster sample" technique used in exit polls — which first selects sample precincts and then voters at those precincts — has more sampling error than a comparably sized telephone poll sample.  Exit polls also miss the growing number of voters who vote by mail or cast absentee ballots. 

[Clarification: the exit pollsters used telephone surveys to reach absentee voters in 2004 in 13 states that had high proportions of absentee or vote-by-mail voters. However, these telephone surveys face all the usual challenges of pre-election surveys in identifying actual voters.] 
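The cost of clustering can be sketched with the standard Kish design effect, deff = 1 + (m − 1)ρ, where m is the cluster size and ρ is the within-precinct homogeneity. The specific numbers below are illustrative assumptions of mine, not the NEP's actual design parameters:

```python
from math import sqrt

def design_effect(cluster_size, rho):
    """Kish design effect for a cluster sample: 1 + (m - 1) * rho."""
    return 1 + (cluster_size - 1) * rho

def margin_of_error(n, deff=1.0, p=0.5, z=1.96):
    """95% margin of error for a proportion, with the effective
    sample size shrunk by the design effect."""
    return z * sqrt(p * (1 - p) / (n / deff))

# Illustrative only: ~13,000 interviews in clusters of ~65 voters,
# with modest within-precinct homogeneity (rho = 0.05)
deff = design_effect(65, 0.05)                  # 4.2
print(round(margin_of_error(13000), 3))         # 0.009 for a simple random sample
print(round(margin_of_error(13000, deff), 3))   # 0.018 clustered: about twice as wide
```

The point of the sketch: a clustered sample of 13,000 can carry the uncertainty of a much smaller simple random sample, which is why raw exit poll sample sizes overstate their precision.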

Most important, exit polls rely on their interviewers to randomly select voters at each polling place.  Interviewers are instructed to keep a running tally of voters as they exit the polling place and to attempt to interview only those voters at a specific "interval," such as every third or every fifth voter who passes by.  A host of real-world conditions — the number of precincts voting at any given polling place, how far the interviewer is required to stand from the exit, the number of exits, inclement weather or simply the interviewer’s level of experience — can interfere with their ability to intercept and interview voters at random. 
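The interval procedure described above is ordinary systematic sampling. A toy sketch (the function and the numbers are mine, for illustration):

```python
import random

def systematic_sample(total_voters, interval, start=None):
    """Return the positions (in exit order) of the voters an interviewer
    should approach: every `interval`-th voter, from a random start."""
    if start is None:
        start = random.randrange(interval)
    return list(range(start, total_voters, interval))

# Every 5th of 400 exiting voters -> 80 approach attempts
selected = systematic_sample(400, 5, start=2)
print(len(selected))     # 80
print(selected[:4])      # [2, 7, 12, 17]
```

When the interviewer misses a turn (a far-away post, multiple exits, bad weather), the realized sample drifts away from this schedule, and the sample is no longer random in the way the design assumes.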

Exit poll interviewers must also cope with a phenomenon impossible on telephone polls:  curious voters who volunteer to participate even though they would not have been selected under the random interval procedure. 

Finally, the NEP exit pollsters face an immense logistical challenge:  Once every four years, they conduct exit polls both nationally and in every state.  Thus, they must recruit and deploy enough interviewers to cover nearly 1500 precincts scattered randomly throughout 50 states and the District of Columbia. 

The results are exquisitely accurate: Exit polls in Germany, for example, have never missed the mark by more than three-tenths of one percent.(17)

Not true.  That 0.3% statistic comes from averages calculated by Steven Freeman on the exit polls conducted by one German exit pollster (Forschungsgruppe Wahlen) for the ZDF television network in elections held in 2002, 1998 and 1994.  But even Freeman’s paper concedes that other German exit polls have been off by slightly more, and in one case by as much as 1.5% for individual candidates.   

The results were also not quite so accurate for FG Wahlen in the 2005 parliamentary elections (results available here).  They showed a slightly higher error averaged across the five main parties (0.9%).  However, if we group the parties into coalitions as Freeman did in his paper "to make the numbers more comparable to the U.S. Presidential election" (p. 8, see table 1.3) the most recent FG Wahlen exit poll showed an error on the margin of 3.8% (my calculation). 

However, while the more recent German exit polls may not be quite as "exquisitely accurate" as Kennedy implies, he and Freeman are right that the German exit polls have typically been more accurate than those in the U.S.  And as I explained back in December 2004, that greater accuracy occurs for sound fundamental reasons having to do with measures that appear to reduce sampling, coverage and non-response error: The German exit polls feature larger sample sizes and benefit from significantly better cooperation from election officials.  FG Wahlen assigns two "experienced" interviewers per precinct, and they are allowed to stand at the door of the polling place for the entire day.  The NEP assigned one interviewer per polling place in 2004; three quarters had never worked as exit poll interviewers before, all had to leave their polling place uncovered several times during the day, and only about half were allowed to stand inside or just outside the door of the polling place.  The German exit pollsters typically obtain an 80% response rate; the US exit polls in 2004 had a 53% completion rate (p. 31). All of this means that the German exit polls are less prone to coverage and response error. 

"Exit polls are almost never wrong," Dick Morris, a political consultant who has worked for both Republicans and Democrats, noted after the 2004 vote. Such surveys are "so reliable," he added, "that they are used as guides to the relative honesty of elections in Third World countries."(18)

Dick Morris is entitled to his opinion, but many others with more relevant exit poll experience disagree.  As noted here eighteen months ago (and reported this weekend by Salon’s Farhad Manjoo), the ACE Project (an acronym for Administration and Cost of Elections, a joint project funded by the UN and the US Agency for International Development) concluded:

[Exit poll] reliability can be questionable. One might think that there is no reason why voters in stable democracies should conceal or lie about how they have voted, especially because nobody is under any obligation to answer in an exit poll. But in practice they often do. The majority of exit polls carried out in European countries over the past years have been failures

Also, as Bard College political scientist Mark Lindeman reports, senior election observers from the Carter Center have repeatedly advised against the use of exit polls for election monitoring in Central American countries, calling them "risky," "unreliable" and "misleading."

In 2003, vote tampering revealed by exit polling in the Republic of Georgia forced Eduard Shevardnadze to step down.(19) And in November 2004, exit polling in the Ukraine — paid for by the Bush administration — exposed election fraud that denied Viktor Yushchenko the presidency.(20)

And thus we come to an oft-repeated legend: Exit polls "exposed" fraud in Ukraine and elsewhere, so why not here?  The biggest problem with that story is that the election monitors in those countries did not depend on exit polls to provide evidence of fraud.   In Ukraine, at least, the solid evidence came from eye-witnesses, taped phone conversations, and physical evidence of vote tampering.  Review the reports of the most authoritative monitor of the elections in Georgia and Ukraine — the Organization for Security and Co-operation in Europe (OSCE) — and you will find plenty of evidence cited but not a single mention of the phrase "exit poll."

The report of MIT Political Scientist Charles Stewart (as aptly summarized by Salon’s Farhad Manjoo) also provides a series of reasons worth reviewing as to why the Ukraine example provides a poor parallel to the 2004 U.S. election.

But that same month, when exit polls revealed disturbing disparities in the U.S. election, the six media organizations that had commissioned the survey treated its very existence as an embarrassment.

There is reason for a sense of embarrassment, and it involves one of the most blatant omissions from the Kennedy article:  U.S. exit polls have been wrong before.  In fact, according to the Edison-Mitofsky report, they have shown a consistent discrepancy favoring the Democrats in every presidential election since 1988.  And while the 2004 discrepancy was the highest ever, the polls were almost as far off in 1992.  More specifically, the "within precinct error" (WPE) reported by Edison-Mitofsky showed differences favoring the Democrat of 2.2 points on the margin in 1988, 5.0 in 1992, 2.2 in 1996, 1.8 in 2000 and 6.5 in 2004 (see p. 34).

Go back and watch the classic political documentary, The War Room — or easier, go back and read my post from January 2005 — and you will see that the leaked exit polls on Election Day 1992 provided as distorted a view as those leaked in 2004.  The difference was that the leaked exit polls in 1992 were known mostly to insiders and served to exaggerate the size of Bill Clinton’s eventual victory.  Clinton won by less than those early exit polls suggested, but he still won the election, so there was little lingering outrage. 

Continues with Part II…

Suggestions for further reading in the meantime after the jump.


For those who cannot wait for the next installment, I strongly recommend these two early reports on the RFK, Jr. Rolling Stone article:

Also highly recommended: two papers presented at the recent AAPOR conference that directly address other contentions in the Rolling Stone Article: 

Interests declared: Both Liddle and Lindeman are friends and have contributed suggestions and comments for this blog post, although I take full responsibility for the final product. Both have made important, arguably heroic contributions to this debate in the face of personal and often anonymous attacks on their reputation and character.

Mark Blumenthal

Mark Blumenthal is a political pollster with deep and varied experience across survey research, campaigns, and media. The original "Mystery Pollster" and co-creator of Pollster.com, he explains complex survey concepts, and how data informs politics and decision-making, to a wide range of audiences. A researcher and consultant, he crafts effective questions and identifies innovative solutions to deliver results. An award-winning political journalist, he brings insight and compelling narratives out of chaotic data.