I wanted to read all 77 pages of the Edison/Mitofsky report in full before weighing in. It took much of the day, but here are some first impressions.
This report will not answer every question nor assuage every doubt, but credit is due for its uncharacteristic degree of disclosure. Having spent much time recently reviewing the scraps in the public domain on past exit polls, I am impressed by the sheer volume of data and information it contains. Yes, the report often accentuates the positive — not surprising for an internal assessment — but it is also reasonably unflinching in describing the exit poll errors and shortcomings. Yes, this data has been long in coming, but better late than never. We should keep in mind that we are dealing with organizations that are instinctively resistant (to put it mildly) to "hanging out [their] dirty underwear."
Say what you will about its conclusions, this report is loaded with never-before-disclosed data: the final exit poll estimates of the vote for each state along with the sampling error statistics used internally, exit poll estimates of Senate and gubernatorial contests and their associated errors in every state, other "estimator" data used on election night, tabulations of "within precinct error" (WPE) for each state going back to 1988, and a very thorough review of the precinct-level characteristics where that error was highest in 2004.
For tonight, let me review the three things that stand out most for me in this report:
First, the report confirms a great deal we suspected about the exit poll interviewers – who they were, how they were recruited and trained (pp. 49-51). The interviewer is where the rubber hits the road for most surveys. While exit polls do involve "secret ballots," the interviewer is responsible for randomly selecting and recruiting voters according to the prescribed procedure and keeping an accurate tally of those they missed or who refused (something to keep in mind when analyzing the completion rate statistics). The interviewer is also the human face of the exit poll, the person who makes a critical first impression on potential respondents.
The report confirms that interviewers were often young and mostly inexperienced. Interviewers were evaluated and hired with a phone call and trained with a 20-minute "training/rehearsal call" and an interviewer manual sent via FedEx. They were often college students — 35% were age 18-24, half were under 35. Perhaps most important, more than three quarters (77%) had never before worked as exit poll interviewers. Most worked alone on Election Day.
One obvious omission: I may have missed it, but I see no comparable data in the report on interviewer characteristics from prior years. Was it not available? Also, the report mentions a post-election telephone survey of the interviewers (p. 49). It would seem logical to ask the interviewers about their partisan leanings, especially in a post-hoc probe, but the report makes no mention of any such measure.
Second: There was no systematic bias in the random samples of precincts chosen in each state. The proof of this is relatively straightforward: replace the interviews in each precinct with the actual votes and compare the sample to the complete count. There were errors, as with any sample, but they were random across the 50 states (see pp. 28-30). If anything, those errors favored Bush slightly. Blogger Gerry Dales explains it well:
In other words, when the actual vote totals from the sampled precincts were used, they did successfully represent the overall voting population quite well. Had they sampled too many Democratic leaning precincts, then when the actual vote results were used rather than the exit poll results, the estimate would not have provided a very good estimate of the final vote count (it would have overstated Kerry’s support). The problem was not in the selection of the sample precincts- it was that the data in the chosen precincts was not representative of the actual voting at those precincts [Emphasis added].
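The check described above can be illustrated with a small simulation. This is only a sketch with invented numbers — the precinct counts, vote shares, and the 56%/50% response rates are hypothetical (the rates are the report's illustrative figures, the rest is mine) — but it shows why substituting each sampled precinct's actual vote isolates precinct selection from within-precinct response bias:

```python
import random

random.seed(2004)

# Hypothetical state: 1,500 precincts with varying true Kerry shares.
precincts = [random.betavariate(5, 5) for _ in range(1500)]
true_share = sum(precincts) / len(precincts)

# Draw a random sample of precincts, as the exit poll does.
sample = random.sample(precincts, 60)

# The report's check: substitute each sampled precinct's ACTUAL vote
# for the interviews. If precinct selection were biased, this estimate
# would miss the statewide share systematically, not just randomly.
actual_vote_estimate = sum(sample) / len(sample)

# Now simulate biased responses within each sampled precinct
# (hypothetically, Kerry voters respond at 56%, Bush voters at 50%).
def biased_poll(kerry_share):
    responding_kerry = kerry_share * 0.56
    responding_bush = (1.0 - kerry_share) * 0.50
    return responding_kerry / (responding_kerry + responding_bush)

poll_estimate = sum(biased_poll(k) for k in sample) / len(sample)

# The actual-vote estimate differs from the statewide share only by
# sampling noise; the poll estimate overstates Kerry in every precinct,
# even though the precinct sample itself is representative.
print(round(actual_vote_estimate - true_share, 3))
print(round(poll_estimate - true_share, 3))
```

In this toy setup, the pro-Kerry error survives the precinct-selection check untouched, which is exactly the pattern the report describes: the error lives inside the precincts, not in which precincts were chosen.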
Third and finally: The centerpiece of the report concerns the investigation of that "within precinct error" (WPE) found on pages 31 to 48. If you have time to read nothing else, read that. The authors review every characteristic that correlates with a greater error. They found higher rates of "within precinct error" favoring Kerry in precincts with the following characteristics:
- An interviewer age 35 or lower
- An interviewer with a graduate degree
- A larger number of voters, where a smaller proportion were selected
- An interviewer with less experience
- An interviewer who had been hired a week or less prior to the election
- An interviewer who said they had been trained "somewhat or not very well."
- In cities and suburbs
- In swing states
- Where Bush ran stronger
- Interviewers had to stand far from the exits
- Interviewers could not approach every voter
- Polling place officials were not cooperative
- Voters were not cooperative
- Poll-watchers or lawyers interfered with interviewing
- Weather affected interviewing
The report pointedly avoids a speculative connecting of dots. The authors apparently preferred to present "just the facts" and leave the conjecture to others. Unfortunately, none of the characteristics above, by itself, "proves" that Kerry supporters were more likely than Bush supporters to participate in the poll. However, it is not hard to see how the underlying attitudes and behaviors at work might create and exacerbate the within-precinct bias.
Consider age, for example. What assumptions might a voter make about a college student approaching with a clipboard? Would it be crazy to assume that student was a Kerry supporter? If you were a Bush voter already suspicious of the media, might the appearance of such an interviewer make you just a bit more likely to say no, or to walk briskly in the other direction? Would it be easier to avoid that interviewer if they were standing farther away? What if the interviewer were forced to stand 100 feet away, among a group of electioneering Democrats – would the Bush voter be more likely to avoid the whole group?
Writing in the comments section of the previous post, "Nathan" made a reasonable hypothesis about the higher error for interviewers with advanced degrees:
Voters (almost certainly accurately) concluded that interviewers were liberal and thus Kerry voters were more likely to talk to them…throw in any sort of additional colloquy engaged in between the interviewers and interviewees and there you have it.
Now consider the Kerry voter approaching the same college student interviewer. Might that voter feel something opposite of a Bush voter — a bit more trusting or sympathetic toward the interviewer? And suppose the randomly selected voter did not want to participate but his wife – a Kerry supporter – eagerly volunteers to take his place. Would the less experienced interviewer be more likely to bend the selection rules so she could take the poll?
The problem with all of this speculation – plausible as it may be – is that it is nearly impossible to prove to anyone’s satisfaction. That is the nature of non-response. We know little about those who refuse because…we did not interview them.
I want to try to answer my friend Gerry Dales’s observation (on his blog and in the comments section here) about the pattern of response rates by the partisanship of the precinct (I think he is too quick to dismiss "differential non-response"), but it’s really late and I need to get some sleep. I’ll post tomorrow morning…promise.
Also, if anyone has any questions about the report, please post them or email me. It is a bit dense and technical and, at the very least, I can help translate.
Mark,
Thanks for the kind words and the link.
“I think he is too quick to dismiss ‘differential non-response'”
I am not sure that I would say that I have dismissed differential non-response as a primary source of the statistical bias.
What I have done is dismissed the notion that the report has proved it. After having pored over the report, I have concluded that, to my eyes, there is more evidence in the report working against that hypothesis than supporting it. So going only on the data in the report, I would conclude it is something else.
As you said earlier, they have the data and should be able to prove it if it was the case. The fact that they did not leads me to suspect that the rest of the data won’t support it either.
But I have not dismissed the possibility. If I had, I would not be Jonesing for the data 🙂
“Now consider the Kerry voter approaching the same college student interviewer. If you were a Bush voter already suspicious of the media, might the appearance of such an interviewer make you just a bit more likely to say no, or to walk briskly in the other direction?”
It sure might. But then, the age group with the biggest WPE was the age group older than college students, 24-35. And the education level of the interviewer that had the greatest disparity was not the college student, scruffy as he may be, but rather those with completed advanced degrees.
There are all sorts of different hypotheses that one can come up with. Duke University’s philosophy chair Robert Brandon would likely say that Republicans, by and large stupid in his estimation, are intimidated by intelligent, educated-looking interviewers.
The data will give us some clues, and we can pare down the possibilities using Occam’s Razor.
“one should not increase, beyond what is necessary, the number of entities required to explain anything”
ps- Since you gave an ‘imagine this’ scenario describing the interaction between a suspicious Republican and a college kid interviewer, let me give a hypothetical.
Consider a college kid, into the MoveOn crowd, who had been recruited from Craigslist.com, and who had been recommended by the PoliSci Dean at the University of Minnesota. Not only that, he knows that exit polls get leaked and some think that they can influence later voters. He also knows that if the election is close, really close, in his state, exit polls could be a useful political propaganda weapon in the post-election phase. He does not like Republicans very much, and whenever he can avoid a Republican, he does so. Someone asks if they can fill out more than one survey, and he jokes “only if you are a Kerry supporter. No, just kidding,” but when the person says “actually, I am,” he turns a blind eye. Afterwards, he knows his results will be off, but hopefully they will have accomplished the goal. But how to avoid suspicion? He decides to say that voters were uncooperative. To say that the poll workers were uncooperative. That his training was inadequate. That the weather interfered. Locusts! It wasn’t his fault!
I want even more data. Hypotheticals are fun, and when set up right they can even convince people that something is possible, but hopefully we’ll get some data to help us figure it out better.
Why The Exit Polls Were Wrong
As promised earlier this week, the long awaited report from the National Election Pool on exit polls was released this morning. The highlights: “Our investigation of the differences between the exit poll estimates and the actual vote count point to one…
You say that there was no “speculative connecting of the dots” in the Edison/Mitofsky report. I admit that I have not read the report, but the NYT says this in its piece on the report: “The researchers focused instead on the median age of the surveyors, 34, and they hypothesized that perhaps younger voters felt more comfortable than older voters submitting questionnaires to younger surveyors.”
THE WISE POLLSTER OF CHELM
There was a tremendous problem in the city of Chelm with the exit polling. The exit polling seemed to say Kerrivotska won, but when they secretly counted the vote it was Bushovotska who won.
All the wise pollsters of Chelm were rushed to think on the problem and then think some more. Finally, after three days, they came out and said we have the answer, “More people said they voted for Kerrivotska than voted for Bushovotska.” Oh how the people rejoiced, because this certainly would have caused problems in Chelm.
A young boy asked, “But then why did Bushovotska get more votes when the votes were secretly counted?”
The wise pollsters of Chelm went back and met for three more days. There were many sage comments and there were many brilliant points. After three days they emerged and said, “The reason Bushovotska got more votes is because when people were asked by the exit pollsters they said they voted for Kerrivotska when they actually voted for Bushovotska, because they wanted people to believe they would vote for Kerrivotska even though they actually wanted Bushovotska.” And the people of Chelm rejoiced because again the great problem of the exit polling had been solved.
“But why would people do this?” the child asked.
The wise pollsters of Chelm went back and met for three more days, and this was their greatest meeting because they had much newly released data and they were able to say amazingly brilliant things about this data. After three more days the wise pollsters of Chelm re-emerged once more. “We have decided it was because the pollsters were younger and more educated. People who voted for Bushovotska were older and less educated. Therefore if the people who were asking the question were more educated than the people who were answering the question, there would be an extraordinary reaction and Bushovotska would actually turn into Kerrivotska. This happens all the time with us. We the wise pollsters of Chelm say brilliant things, but because you are all less educated you do not understand and misconstrue.”
The people of Chelm rejoiced because this was truly the best answer, and they felt they were lucky to have the sage advice of the wise pollsters of Chelm.
I have just one question on nonresponse error which I haven’t ever seen answered to my satisfaction. Maybe Mark, or Rick, or someone here can answer it in clean numbers.
**How much** nonresponse error would it take, to generate (oh, say) 1.9% systematic error? How many times more likely to respond would Kerry voters have to be, than Bush voters, to generate the bias that was observed?
The magnitude of this value would probably help everyone’s intuition when assessing the report’s analysis.
Hey Eric,
I haven’t thought about it in detail…but it would not require a huge difference in likelihood…I think it would just require that Democrats are 1.04 times more likely to respond than Republicans.
Hey Eric,
On p 31 of the 19Jan05 Mitofsky report, they say,
“The average completion rate for all exit poll precincts was 53%. While we cannot measure the completion rate by Democratic and Republican voters, hypothetical completion rates of 56% among Kerry voters and 50% among Bush voters overall would account for the entire Within Precinct Error that we observed in 2004.”
BBB
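The arithmetic behind both hypotheticals above (the 1.04 response ratio guessed earlier in the thread, and the report's 56%/50% completion rates) can be checked directly. A minimal sketch: the function and its name are mine; the input figures come from the thread and the report.

```python
def poll_margin_bias(kerry_share, kerry_rate, bush_rate):
    """Exit-poll overstatement of Kerry's margin, in percentage points,
    for a precinct with the given true two-party Kerry share and
    hypothetical completion rates among each candidate's voters."""
    k = kerry_share * kerry_rate           # responding Kerry voters
    b = (1.0 - kerry_share) * bush_rate    # responding Bush voters
    poll_margin = (k - b) / (k + b)        # Kerry minus Bush in the poll
    true_margin = 2.0 * kerry_share - 1.0  # Kerry minus Bush in the vote
    return 100.0 * (poll_margin - true_margin)

# A 1.04-to-1 response ratio in an evenly split precinct produces
# roughly the 1.9-point bias asked about above.
print(round(poll_margin_bias(0.50, 0.52, 0.50), 2))  # 1.96

# The report's hypothetical 56% vs. 50% completion rates imply a
# bias of roughly 5.7 points on the margin.
print(round(poll_margin_bias(0.50, 0.56, 0.50), 2))  # 5.66
```

A margin bias in the 5-6 point range from the 56%/50% rates is consistent with the report's statement that those rates "would account for the entire Within Precinct Error."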
All the discussion focuses on interviewer/ee interactions. But how do the data get collected? Are they not phoned in, and read form by form and line by line to be rekeyed at the other end? Do all the keyboarders speak English?
“Within Precinct” also included “between our interviewer and the keyboarder”
BBB,
If one were to use those response rates for every state, then the data does not come close to fitting the exit polls. In 25 states, the exit polls differ by more than five points from what one would get by multiplying Bush voters by .50 and Kerry voters by .56 and then converting to a percentage gap.
Question: can we assume the factors cited above are listed in order of correlation to WPE? Does the report do a correlation-coefficient type of analysis, if possible?
Factors cited that raise questions as to why or how they would lower the quality of the exit poll. The number gives the rank on the original list of 15:
2 An interviewer with a graduate degree
7 In cities and suburbs
9 Where Bush ran stronger (I assume this means Bush won the precinct in the election results)
Factors that on their face would tend to decrease the accuracy of the exit poll:
1 An interviewer age 35 or lower
3 A larger number of voters, where a smaller proportion were selected
4 An interviewer with less experience
5 An interviewer who had been hired a week or less prior to the election
6 An interviewer who said they had been trained “somewhat or not very well.”
8 In swing states
10 Interviewers had to stand far from the exits
11 Interviewers could not approach every voter
12 Polling place officials were not cooperative
13 Voters were not cooperative (I assume this means low response rates)
14 Poll-watchers or lawyers interfered with interviewing
15 Weather affected interviewing
PS I remind MP that exit poll performance on other races was cited as a good test of exit polls. What does the report tell us on that question?
Small correction.
8 Swing states
would support GOP nonresponse, theoretically, but other than that it could be on the questionable list, I guess.
Mysterypollster, thank you for your fine analysis. It seems to me that what needs to be explained in particular is not so much the determinants of WPE as such, but the difference between the WPE in 2004 and the WPE in previous elections: the 2004 WPE is quite a bit higher, and seriously skewed. I find no clue toward an explanation in the report, which deserves credit for its candor in this respect. The authors could perhaps have chosen a wordy “cover-up” of the discrepancies but did valuable and thorough additional research instead.
First of all, a big WTF? on the graduate degree thing. Could this be overeducated pollers funking with their sample trying to get a perfect cross-section?
All right, so these all correlated to either enhanced poll numbers or depressed vote counts for Kerry:
“In cities and suburbs,” as opposed to deep red rural areas.
“In swing states,” where a little voter fraud goes a long way.
“Where Bush ran stronger,” which is quite an ambiguous statement. Stronger than expected? Stronger than Kerry?
“Interviewers had to stand far from the exits,” “Interviewers could not approach every voter,” or “Polling place officials were not cooperative,” all of which raise flags of potential misdeeds by election officials.
“Poll-watchers or lawyers interfered with interviewing,” which raises the same flags.
Everything else I see here — young, inexperienced, and undertrained pollers did a worse job: duh! — is irrelevant to the issue of vote fraud; it neither proves nor disproves it.
Is there anything in the report which is inconsistent with the possibility that official fraud contributed to the exit poll discrepancy?
As the Reeps established with the Ukraine ordeal, the discrepancy itself creates a presumption of fraud, which will remain until specifically rebutted. Releasing a flood of information without addressing that issue (if that’s actually what they’ve done), is either pointless noise or a diversionary tactic.
If you look at the data on p 54 and 55 you can see that in the ‘battleground’ states Florida, Penn and Ohio that had been carpet bombed by ads and activism, the response rate is below 50%. Ohio overall is 44.1%. The 60+ group response rate is 39% in Fl and 31.6%! in OH.
How can you have accuracy when so many people select themselves out?
The other way of looking at it is that after 10 months of ads and a million rallies, these people were ready for the election to be over. Why in the world would they take another 10 minutes to fill out the survey when they thought it was another agitator or MoveOn-er? I would have refused too.
It’s actually rather simple:
1) Voters associate exit polling with the media, which is natural since the exit poll results are announced by the media and the pollsters are hired by the media
2) Republican voters think the mainstream media are the enemy, an unofficial arm of the Democratic Party (read “Dan Rather”) whose members voted 85% for Kerry.
3) Thus, and with perfect logic given the premises, Republicans disproportionately refuse to cooperate with or associate with people they perceive as hostile–namely, exit pollsters. If the exit pollers wore Bush buttons, I’m sure the Democrat voters would have reacted similarly.
What this suggests is that exit polls will continue to show bias in partisan elections, for at least the near future.
But CivilWarGuy, why hasn’t this problem happened in prior elections? It’s not as if the conservatives suddenly decided as of this election cycle that media is bad–they’ve been promoting the “media bad” message since at least the early ’80s.
CivilWarGuy has it exactly right.
Dfarrah, there has been a noticeable increase in the dissatisfaction of the Right with the large television news divisions. I would guess that you are probably on the left of the political divide by the tone of your message, so you likely don’t really understand the Republican mindset the way I would. Many of them detest the media- I mean really detest them. This got even worse with the CBS/fake documents story in September.
Alex asks a great question: “PS I remind MP that exit poll performance on other races was cited as a good test of exit polls. What does the report tell us on that question?” This of course is key. The current leading theory (which I favor as well) is that there was pollster-introduced bias. If this is true, we should see similar exit poll distortions, precinct-by-precinct, in all the races not just the presidential race.
MP, what do you think? This would be a great way to put the widespread-fraud theory to bed, wouldn’t it?
CivilWarGuy and Yancey … I understand your theory, but if that were the cause then we would see a consistent Kerry bias across all states. Instead, we see a highly variable bias, most for Kerry but some for Bush, and by highly varying amounts.
And, while you are correct that conservatives’ negative media perceptions have grown, so have those of liberals. Recent polling indicates the depth of antipathy is still greater for conservatives, but liberal antipathy is growing more rapidly.
Dfarrah: Re: Republican vs. Democrat views of media bias, according to the polls I’ve seen, about three times as many Americans believe the media is left-wing as believe it is right-wing.
And as to the argument that if Republican media bias was the cause, it would have shown up in prior elections: many Republicans believe this bias in exit polling occurred in 2000 as well. If you’ll remember that election night, you may remember the exit polling “computer error” that called a Gore landslide in Florida, followed up by a media “call” for Gore of that state before the polls in the most Republican part of the state were closed. I can assure you, Republicans remember that exit polling gaffe and media “call” gaffe, which cost them 8-12,000 votes in that state. Less well remembered is what Ann Coulter and others have written about–states that Bush won by 5-10% weren’t “called” until the wee small hours, whereas states Gore won by a lesser margin were “called” much earlier. Since the raw numbers were similar, the difference must have been that the exit poll numbers made the mainstream media pause–pretty much the pattern of 2004.
“How can you have accuracy when so many people select themselves out?”
The completion rate for phone polls is much lower; more people select themselves out of them.
Yet phone polls were a lot more accurate than the exit polls.
FWIW
Observer says “if that were the cause then we would see a consistent Kerry bias across all states. Instead, we see a highly variable bias, most for Kerry but some for Bush, and by highly varying amounts.”
I’m not sure, but it looked to me like the state-to-state variance correlated reasonably well with the amount of partisan advertising in that state. I.e., the variance was greater in “battleground” states, by and large. That is taken as evidence of fraud, but it could also be evidence that there were flaws in polling that were exacerbated by the advertising rate. And it does make sense that in a state where tempers were revved by a massive smear job, voters would be more hotly partisan and therefore less willing to talk to the perceived “other side’s” pollsters. So you would get a varying bias in results.
I wonder if this could be related to the (anecdotal) reluctance of Bush voters to put up Bush signs and bumper stickers that we heard about before the election?
There was so much bile for so long this campaign season that a lot of Republicans got into the habit of being really circumspect when the subject of politics came up. If you’ve been politely avoiding MoveOn wackos for months on end, I could see you avoiding an exit poll even without an explicit dislike for the polling organization. Cultural signifiers like “grad student glasses” (black, rectangular frames) could exacerbate this problem.
CivilWarGuy,
Regarding the calling of elections, the only documented fact that I know of, and that Fox ADMITS to, is that they incorrectly called Florida for Bush. AP analyzed the same data and correctly found that Florida was too close to call. Yes, the networks blew it when they called Florida for Gore, but that was an honest mistake that resulted from bad data from VNS; when Fox called Florida for Bush, they were just flat out wrong based on the data they had!
Honestly, people who believe there is a liberal bias in the media are just wrong; it is that simple. And every time I hear about liberal bias, all I can think is: rather than discuss some imagined bias that can never be proved or disproved by the very nature of its claim, why don’t these people instead discuss real issues with real facts?
Exit Poll Study
Mystery Pollster reports on the Edison/Mitofsky study on the highly unreliable exit polls of the 2004 elections. The main problem was not in selecting which districts to poll but in the polls at each district not representing the votes at that…
Brian Dudley, if you re-read mine and previous posts, you’ll see that what I was writing (in response to a previous post) about was evidence of exit polling bias in 2000, NOT the mistaken Gore call. I merely mentioned the Gore call as something that Republicans remember, in reference to the reasons Republicans distrust exit pollsters. While I’d be happy to discuss media bias in some other forum, what I was talking about–and what this whole thread is about–is something different. The “Mystery Pollster” deserves postings that keep on the topic of polling.