The fact is, a lot of people over the last month have blown opportunities to tamp down the internet wildfire and restore some confidence in the outcome of the 2004 election. The exit polling organization (that received $10 million from the networks, by the way) should have come out weeks ago and explained why their exit polls were inaccurate. I accept the group’s quiet explanations that their workers, in some states, were improperly trained and that the mathematical models analysts relied upon throughout the day were problematic. But the consortium should swallow their pride, hold a full-blown press conference, and help douse the fire that is raging [emphasis added].
No argument here. I’ll have more to say on the training issue in the next day or two. Stay tuned.
Mark Blumenthal is a political pollster with deep and varied experience across survey research, campaigns, and media. The original "Mystery Pollster" and co-creator of Pollster.com, he explains complex survey concepts to a wide range of audiences and shows how data informs politics and decision-making. He is a researcher and consultant who crafts effective questions and identifies innovative solutions to deliver results, and an award-winning political journalist who draws insights and compelling narratives from chaotic data.
9 thoughts on “NBC’s Shuster on Exit Polls”
I’m sorry, but I’m getting more and more skeptical. This is a “non-explanation” explanation. Improper training? Problematic Models? It sounds like they are trying the old “cumulative problems” excuse. If the exit polls had been wildly off in a random fashion their explanation would make sense. But the polling errors consistently favored Bush. For this to happen — and for it to be a problem with the exit polling — there needs to be a consistent systemic or methodological problem that favored Kerry.
This is reminding me too much of the VNS reaction to the 2000 exit polling problem in Florida. As you recall, Gore was ahead in the state exit polls by a reported 3% — far enough outside the margin of error to call the state for Gore in advance. Based on extensive ballot analysis performed over the following year, we now know that most of this error can be explained by two large vote counting problems, in Duval and Palm Beach counties. That is, the exit poll was not wrong outside the margin of error in that it was measuring who people *thought* they were voting for. Instead, the vote counting technology failed to record their intent due to a) massive unintentional “overvotes” in Duval and b) large numbers of unintentional votes for Buchanan instead of Gore in Palm Beach.
So, we KNOW that the exit poll was within the margin of error for Florida, but also that the vote counting errors were not due to fraud but poor balloting design/technology. However, VNS’ reaction to election 2000 was to issue a report in December 2000 “explaining” how their Florida poll had been so wrong. The report was specifically distributed only to subscribers with the proviso that it not be made public — a red flag warning if ever there was one. The portions of that report that have been released indicated that it highlighted 7 polling errors.
Now, THIS IS KEY: VNS effectively fell on its sword in the report. That is: even though their own data indicated a vote counting problem, they chose to bury this fact and instead attributed the exit poll error to their exit polls and presumed that the voting was correct. In order to do so, they had to distort their findings so much that they felt they could not release their report to the general public.
I fear NEP is about to do the same — I fear that their data does indicate a similar vote counting problem but that they are motivated to sweep this fact under the rug.
If I’m wrong on the above analysis, please show me my mistake.
Hear, hear, Observer!
I have been harping on the provisional vote and spoiled votes explanation since Nov 3. But your explanation of these concerns is far clearer and more thorough than mine have been. If the exit pollsters don’t talk about problems with the vote count (NOT FRAUD per se) and how much that affects the apparent discrepancy, then they are being negligent or, worse, hoping the public won’t notice.
I have sent Mark Blumenthal my simple math estimates that point towards how much provisionals and spoiled votes may have contributed to the Exit Poll discrepancy.
Basically, I believe the TRUE EXIT POLL discrepancy won’t be known until the RECOUNT is completed. Would you agree, Observer?
I just reread Walter Mebane’s (Cornell University Dept. of Government) piece on the overvotes in the 2000 election in Florida, in which the author demonstrates (persuasively, in my mind), that “a plurality of voters there intended to vote for the Democrat…notwithstanding the fact that the legal and political process produced a victory for Bush.” [Walter R. Mebane, “The Wrong Man is President! Overvotes in the 2000 Presidential Election in Florida,” Perspectives on Politics Sept. 2004 2:3, pp. 525-35].
Some of the same strategies seem to have been in place in the 2004 election as well. If we accept, for the time being, the issue of error in exit polling, we still have to explain exactly by what mechanism the error in polling produced such a wide margin that appeared to favor a single candidate (Kerry) when the actual results favored Bush. This is a systematic, across-the-board effect that has yet to be explained, as Alex from LA points out.
One blogger (“Truthisall”) posted the following statistical calculation on the Democratic Underground, and I would be enormously grateful for some assessment here:
****
The calculation of the odds that Bush’s vote tallies in 16 states would all increase beyond the Exit Poll Margin of Error.
************* ONE IN 200 TRILLION ********************
So far, no one has.
So once again, I ask: if any researcher, mathematician, or statistician disagrees with the use of the Excel Binomial distribution function to calculate the probability, please say so.
In the initial calculation, I used the probability of 5% that a single state would deviate beyond the MOE as input to the Excel Binomial Distribution function. But that was the probability of a move beyond the MOE, regardless of whether the move was favorable to Bush OR Kerry. The odds for this: 1 in 4.5 billion.
In fact, what we really want is the probability that the vote would deviate beyond the MOE to Bush alone, which is exactly what happened. So that’s why we use 2.5% and NOT 5.0% as our input probability. It’s the Bush tail of the probability. We just split the probability in half, the tail that would go to Bush.
What is the effect of this seemingly small, innocuous change on our final probability estimate? It means that the probability that these 16 deviations could be due to CHANCE is EVEN MORE REMOTE.
Here are the odds that 16 out of 51 states would move beyond the MOE in favor of Bush, again using the Binomial Distribution. But this time with .025 (rather than .05) as the probability that a given state would move beyond the MOE to Bush:
The probability P is calculated as P =1-BINOMDIST(16,51,0.025,TRUE)
P = 0.000000000000004996
The odds are 1/P or ******** 1 out of 200.159 TRILLION *********
that the deviations could have occurred due to chance alone.
Try it yourself in Excel.
Here are the odds for various scenarios that in N states, Bush’s vote tallies would move beyond the MOE:
N The odds are 1 out of:
1 – 3
2 – 7
4 – 113
6 – 3,715
8 – 223,016
10 – 22,192,000
12 – 3,432,782,579
14 – 788,997,832,405
16 – 200,159,983,438,689
*****
[From http://www.democraticunderground.com/discuss/duboard.php?az=view_all&address=203x108448]
Any thoughts?
Curious…
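[For readers who want to check the quoted figure outside of Excel, here is a minimal sketch in Python. The inputs (n = 51 states, 16 deviations, a 2.5% one-sided tail per state) are taken from the post above; the function itself is just the binomial upper tail that Excel’s 1-BINOMDIST(k, n, p, TRUE) computes. It reproduces the order of magnitude, not the debate over whether these assumptions are appropriate — see the cluster-sample objections below.]

```python
from math import comb

def binom_sf(k, n, p):
    """P(X > k) for X ~ Binomial(n, p): the upper tail that
    Excel's 1 - BINOMDIST(k, n, p, TRUE) returns."""
    q = 1.0 - p
    return sum(comb(n, i) * p**i * q**(n - i) for i in range(k + 1, n + 1))

# The quoted Democratic Underground calculation: 16 of 51 "states"
# beyond a one-sided MOE, with a 2.5% per-state tail probability.
p_tail = binom_sf(16, 51, 0.025)
print(f"P = {p_tail:.3e}  ->  odds of roughly 1 in {1 / p_tail:,.0f}")
```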
First thing. The probability calcs appear to assume a margin of error associated with a simple random sample.
Second, the calcs are based on the assumption that 100% of the error associated with the exit poll in each state is purely random and can be explained by sampling error alone. In a “perfect” world, if you took each poll in each of the 50 states 100 times, the average of these polls (aka “mean of samples”) should be exactly the election outcome in each state. If the exit poll methods were perfect (i.e. no bias in survey instrument design, cluster sample selection, survey instrument administration, coding, weighting, and reporting), and the margin of error associated with the probability calc considered the standard error for a cluster sample (which is much higher than that for a simple random sample), then one could use a probability calculation similar to the one above.
I’ve done some analysis with an estimate of the standard error for a cluster sample based on a table provided by the NEP for use in determining the confidence interval (aka margin of error) associated with the cluster samples.
I’ve found that all but a few states are within the margins of error associated with the polls.
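[To make the cluster-sample point concrete: the margin of error for a clustered exit poll is the simple-random-sample MOE inflated by the square root of a “design effect.” A rough sketch — the sample size of 1,500 and design effect of 2.25 are hypothetical illustrations, not NEP’s actual figures:]

```python
from math import sqrt

def moe(p, n, deff=1.0, z=1.96):
    """95% margin of error for a proportion p estimated from n
    interviews; deff > 1 inflates it for a cluster design."""
    return z * sqrt(deff * p * (1 - p) / n)

srs_moe = moe(0.51, 1500)                 # simple random sample
cluster_moe = moe(0.51, 1500, deff=2.25)  # hypothetical cluster design
```

[With a design effect of 2.25 the interval is 1.5 times wider, which is why a state that looks “outside the MOE” under simple-random-sample assumptions can be comfortably inside it once clustering is accounted for.]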
That said, Dr. Freeman’s exit poll data do show a larger bias toward Kerry (38 states plus DC predicted a larger % of the vote for Kerry, whereas exit polls in 9 states predicted a larger Bush % than realized in the election result). The average variance (Z-score and p-value) for the Kerry bias states was higher than the average variance for the Bush bias states when you account for the standard error of a cluster sample, which Freeman (and others, including myself in previous analysis) has not done.
Also, the average variance in the battleground states was higher than the average variance in the non-battleground states.
I will be posting my analysis in the next few days. The key question will be: are the “differences” statistically significant? I’m working on that problem now.
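[For what Rick describes, the standard check is a one-sample Z-test of the exit-poll proportion against the certified result, using a cluster-adjusted standard error. A sketch with made-up numbers — a poll showing 52% on n = 2,000 interviews against an official 49%, with a design effect of 1.8, all hypothetical:]

```python
from math import erfc, sqrt

def z_and_p(p_poll, p_official, n, deff=1.0):
    """Z-score and two-sided p-value for an exit-poll proportion
    versus the certified result, with a design-effect adjustment."""
    se = sqrt(deff * p_official * (1 - p_official) / n)
    z = (p_poll - p_official) / se
    p_value = erfc(abs(z) / sqrt(2))  # two-sided normal tail
    return z, p_value

z_score, p_value = z_and_p(0.52, 0.49, 2000, deff=1.8)  # roughly z = 2.0
```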
Rick,
Are you saying it is possible that there would have been an error in the exit polls in 38 out of 47 states based on a single type of sampling error? That seems almost impossible to me to do ON PURPOSE (much more difficult than the widespread fraud some people say is impossible and therefore immediately discount) when you consider that you are talking about 47 different states, each with its own intricacies. It would mean there had to be more than one kind of sampling error, with the great majority of those errors favoring Kerry, all by chance. I know you are doing the analysis now, but again, it seems almost impossible.
In a previous post, Mr. Blumenthal quoted a story that “The networks’ 1992 national exit poll overstated Democrat Bill Clinton’s advantage by 2.5 percentage points, about the same as the Kerry skew.”
Nobody cared, then, of course, because it was clear that Clinton had won, and the exact margin by which he did made little difference. However, I am wondering if the 1992 exit polls overstated Clinton’s lead in *as many* states as happened for Kerry in 2004.
The very fact that the exit polls were wrong in so many states IMO makes fraud a *less* likely explanation for 2004. They overrated Kerry’s percentage in Diebold states with paper trails, Diebold states without paper trails, non-Diebold e-voting states, and punchcard states. And not all the states and counties involved were controlled by Republicans. I agree that spoilage and provisional votes (rather than fraud per se) may explain part of the divergence, but it does seem to me that some systematic problem with the polls themselves (e.g., Republican reluctance to participate in polls conducted by the allegedly “liberal media”) very likely had a role.
Wilbur, at this point I’m not saying anything about what the data means. All I’m going to do is report what the data says. Saying what the data says is quite simple. It’s just math with a little theory (stress the math over the theory). Saying what it means is something different.
Dr. Freeman has “cleansed” his data. Apparently there were transcription errors and some of the states were accidentally pulled from the data that was weighted to the election result.
Given Freeman’s new data, the Kerry bias was only statistically significant in 5 states: New Hampshire, New York, North Carolina, South Carolina, and Vermont. Make sense of that!
However, 7 states were Biased to Bush (overpredicted Bush’s actual % and underpredicted Kerry’s actual %), but 42 states plus DC show bias to Kerry (overpredicted Kerry’s actual % and underpredicted Bush’s actual %). There were two states that showed bias to both Bush and Kerry (underpredicted the % that went to someone other than Bush or Kerry).
Also, the bias was stronger in states with Kerry bias (42 + DC) than it was in Bush states (7). The bias was essentially equal in the two states that showed both Bush and Kerry bias (Wisconsin and Montana).
Dr. Freeman is missing data for Virginia (he has some, but it’s from 7:30 pm and I have judged it not trustworthy).
David T – Great question! Unfortunately I don’t know how to obtain the 1992 data. However, I have ordered the 2004 primary exit poll data from the Roper Center. Unfortunately though this may not have what we need. I think we are VERY fortunate that Dr. Freeman saved his data. Although I, like Mark, don’t agree with what he is doing with the data, at least he had the foresight to gather it.
My daughter needs her diaper changed. Ta ta.
Okay, I’ve had some of my work reviewed and there are questions that I need to have answered before I make any of the tables or spreadsheets public.
If I am interpreting a table provided to me by the NEP correctly, then I stand by what I posted in comments above.
If however I am not interpreting this table correctly as suggested by someone, then I need to rework things.
Basically I don’t know if my results represent the “highest” Z-scores and p-values or the “lowest” Z-scores and p-values possible given the data.
If I can’t answer this question, it’s all meaningless.
Memo to Rick: Hold your horses and don’t spill the beans. Those buggers can be hot!
All of this talk of exit polls and margins of error caused me to wonder whether anyone has taken a closer look at the Ukrainian exit polls that apparently signalled to the outside world that the reported election results there could only have been the result of massive fraud. I wonder whether we aren’t blindly applying different standards to the exit polls in Ukraine than to the ones we conduct here, and why, if the margins of error in the two cases are roughly similar, the outcome in one case has the Western media rushing to declare fraud overseas while staying oddly mute about the outcome in the US.
The Russia Journal Daily reported on Nov. 1:
”
Data on Ukrainian exit polls released
November 01, 2004 Posted: 11:17 Moscow time (07:17 GMT)
KIEV – According to the final results of exit polls, the two top candidates for the presidential post in Ukraine have approximately the same number of votes.
Exit polls conducted through face-to-face interviews showed incumbent Prime Minister Viktor Yanukovich garnering 42.67 percent, and opposition leader Viktor Yushchenko winning 38.28 percent, whereas anonymous polls (respondents fill out forms anonymously) showed contrary results, namely Yushchenko receiving 44.4 percent and Yanukovich getting 38 percent.
The polls were conducted at 8 p.m. Moscow time, one hour prior to the closing of polling stations. Some 50,000 respondents took part in the polls.”
Source URL: http://www.russiajournal.com/news/cnews-article.shtml?nd=46162
Perhaps more to the point, Tatiana Silina of Zerkalo Nedeli has reported in substantive detail on the various exit poll methods employed. She writes:
“In fact, there were at least three exit polls. The “National Exit Poll” had been prepared for about twelve months and covered widely enough. The NEP project, initiated and authored by the Democratic Initiatives foundation, was financially supported by eight foreign embassies and four international funds. It was implemented by a consortium of four well-known sociological services: the Kyiv International Institute of Sociology, the Socis Center, the Social Monitoring Center, and the sociological service with the Razumkov Center of Political and Economic Studies. Three of these took part in the exit polls during the 1998 and 2002 parliamentary elections and both rounds of the 1999 presidential election.
Shortly before the October 31 election it became known that another exit poll would be conducted by the Russian fund Obshchestvennoye Mneniye [Public Opinion], headed by the notorious Kremlin political engineer Gleb Pavlovsky. His fund, however, has no network in Ukraine, and it is still unknown to the general public who exactly conducted the exit poll for the Russians while all major Ukrainian institutes and centers were engaged in their own exit polls. In fact, that exit poll is hardly worth remembering since its organizers officially announced on October 31 that they had to discontinue it because too many voters (more than 40 percent) declined to respond. Nevertheless, for an unknown reason, the central Ukrainian mass media, controlled by Bankova, cited it very amply.
The third exit poll was conducted, almost obscurely, by the Ukrainian Institute of Social Research – a governmental organization, which is widely popularized by Bankova’s mouthpieces as “the most honest and trustworthy”. It should be noted that all six universities that were involved in the KIIS exit poll are located in the eastern part of Ukraine. Other sociologists, though, are sure that Alexander Yaremenko, the institute’s director, was just on a PR contract, and his mission was the same as the one performed by the so-called “prop candidates”. This can be seen at least from Yaremenko’s commentaries: instead of reporting the results of his institute’s surveys, he mostly criticized other sociological services that reported Yushchenko’s advantage. Prof. Valeriy Khmelko, the KIIS President and a board member of the National Sociological Association, has said that by questioning his colleagues’ professionalism and objectivity, Yaremenko breaches the code of professional ethics.
…The figures, which were presented by two sociological centers from the NEP consortium, were strangely altered on October 31 evening. Their initial exit poll data showed Yushchenko’s clear advantage, but after some weird metamorphosis two hours later, he was almost five percent behind Yanukovych. (The other two members of the consortium never altered their initial reports, which showed Yushchenko’s advantage.)…
The returns were as follows: KIIS: 44.2% for Yushchenko, 38.6% for Yanukovych; Razumkov Center: 44.7% for Yushchenko, 37.2% for Yanukovych; Socis Center: 43.2% for Yushchenko, 41.1% for Yanukovych; Social Monitoring Center: 40.4% for Yushchenko, 40.7% for Yanukovych. Since the data obtained from two exit polls by the “box method” did not differ much, they were combined. The two latter centers combined their returns, too, disregarding the difference (43.2% vs. 40.4% in Yushchenko’s favor), which exceeded the accepted 2%. We never got a clear explanation. Socis President Mykola Chrilov admitted at a press conference on Thursday that “perhaps, it was not the very proper way, but we worked by one method, so we decided to combine our returns.”
The complete report can be found at http://www.mirror-weekly.com/nn/show/520/48297/
Any takers?
I’m sorry, but I’m getting more and more skeptical. This is a “non-explanation” explanation. Improper training? Problematic Models? It sounds like they are trying the old “cumulative problems” excuse. If the exit polls had been wildly off in a random fashion their explanation would make sense. But the polling errors consistently favored Bush. For this to happen — and for it to be a problem with the exit polling — there needs to be a consistent systemic or methodological problem that favored Kerry.
This is reminding me too much of the VNS reaction to the 2000 exit polling problem in Florida. As you recall, Gore was ahead in the state exit polls by a reported 3% — far enough outside the margin of error to call the state for Gore in advance. Based on extensive ballot analysis performed over the following year, we now know that most of this error can be explained by two large vote counting problems, in Duval and Palm Beach counties. That is, the exit poll was not wrong outside the margin of error in that it was measuring who people *thought* they were voting for. Instead, the vote counting technology failed to record their intent due to a) massive unintentional “overvotes” in Duval and b) large numbers of unintentional votes for Buchanan instead of Gore in Palm Beach.
So, we KNOW that the exit poll was within the margin of error for Florida, but also that the vote counting errors were not due to fraud but poor balloting design/technology. However, VNS’ reaction to election 2000 was to issue a report in December 2000 “explaining” how their Florida poll had been so wrong. The report was specifically distributed only to subscribes with the proviso that it not be made public — a red flag warning if ever there was one. The portions of that report that have been released indicated that it highlighted 7 polling errors.
Now, THIS IS KEY: VNS effectively fell on its sword in the report. That is: even though their own data indicated a vote counting problem, they chose to bury this fact and instead attributed the exit poll error to their exit polls and presumed that the voting was correct. In order to do so, they had to distort their findings so much that they felt the could not release their report to the general public.
I fear NEP is about to do the same — I fear that their data does indicate a similar vote counting problem but that they are motivated to sweep this fact under the rug.
If I’m wrong on the above analysis, please show me my mistake.
Here, here, Observer!
I have been harping on the provisional vote and spoiled votes explanation since Nov 3. But your explanation of these concerns is far clearer and more thorough than mine have been. If the exit pollsters don’t talk about problems with the vote count (NOT FRAUD per se) and how much that affects the apparent discrepancy, then they are being negligent or worse hoping the public won’t notice.
I have sent Mark Blumenthal my simple math estimates that point towards how much provisionals and spoiled votes may have contributed to the Exit Poll discrepancy.
Basically, I believe the TRUE EXIT POLL discrepancy won’t be known until the RECOUNT is completed. Would you agree, Observer?
I just reread Walter Mebane’s (Cornell University Dept. of Government) piece on the overvotes in the 2000 election in Florida, in which the author demonstrates (persuasively, in my mind), that “a plurality of voters there intended to vote for the Democrat…notwithstanding the fact that the legal and political process produced a victory for Bush.” [Walter R. Mebane, “The Wrong Man is President! Overvotes in the 2000 Presidential Election in Florida,” Perspectives on Politics Sept. 2004 2:3, pp. 525-35].
Some of the same strategies seem to have been in place in the 2004 election as well. If we accept, for the time being, the issue of error in exit polling, we still have to explain exactly by what mechanism the error in polling produced such a wide margin that appeared to favor a single candidate (Kerry) when the actual results favored Bush. This is a systematic, across-the-board effect that has yet to be explained, as Alex from LA points out.
One blogger (“Truthisall”) posted the following statistical calculation on the Democratic Underground, and I would be enormously grateful for some assessment here:
****
The calculation of the odds that Bush’s vote tallies in 16 states would all increase beyond the Exit Poll Margin of Error.
************* ONE IN 200 TRILLION ********************
So far, no one has.
So once again, I ask: If any reseacher or mathematician or statistician disagrees with the use of the Excel Binomial distribution function to calculate the probability, please do so.
In the initial calculation, I used the probability of 5% that a single state would deviate beyond the MOE as input to the Excel Binomial Distribution function. But that was the probability of a move beyond the MOE, regardless of whether the move was favorable to Bush OR Kerry. The odds for this: 1 in 4.5 billion.
In fact, what we really want is the probability that the vote would deviate beyond the MOE to Bush alone, which is exactly what happened. So that’s why we use 2.5% and NOT 5.0% as our input probability. It’s the Bush tail of the probability. We just split the probability in half, the tail that would go to Bush.
What is the effect of this seemingly small, innocuous change on our final probability estimate? It means that the probability that these 16 deviations could be due to CHANCE is EVEN MORE REMOTE.
Here are the odds that 16 out of 51 states would move beyond the MOE in favor of Bush, again using the Binomial Distribution. But this time with .025 (rather than .05) as the probability that a given state would move beyond the MOE to Bush:
The probability P is calculated as P =1-BINOMDIST(16,51,0.025,TRUE)
P = 0.000000000000004996
The odds are 1/P or ******** 1 out of 200.159 TRILLION *********
that the deviations could have occurred due to chance alone.
Try it yourself in Excel.
Here are the odds for various scenarios that in N states, Bush’s vote tallies would move beyond the MOE:
N The odds are 1 out of:
1 – 3
2 – 7
4 – 113
6 – 3,715
8 – 223,016
10 – 22,192,000
12 – 3,432,782,579
14 – 788,997,832,405
16 – 200,159,983,438,689
*****
[From http://www.democraticunderground.com/discuss/duboard.php?az=view_all&address=203×108448%5D
Any thoughts?
Curious…
First thing. The probability calcs appear to assume a margin of error associated with a simple random sample.
Second, the calcs are based on the assumption that 100% of the error associated with the exit poll in each state is purely random and can be explained by sampling error alone. In a “perfect” world, if you took each poll in each of the 50 states 100 times, the average of these polls (aka “mean of samples”) should be exactly the election outcome in each state. If the exit poll methods were perfect (i.e. no bias in survey instrument design, cluster sample selection, surevy instrument administration, coding, weighting, and reporting), and the margin of error associated with the probability calc considered the standard error for a cluster sample (which is much higher than that for a simple random sample), then one could use a probability calculation similar to the one above.
I’ve done some analysis with an estimate of the standard error for a cluster sample based on a table provided by the NEP for use in determining the confidence interval (aka margin of error) associated with the cluster samples.
I’ve found that all but a few states are within the margin of errors associated with the polls.
That said, Dr. Freeman’s exit poll data do show a larger bias toward Kerry (38 states plus DC predicted a larger % of the vote for Kerry, whereas exit polls in 9 states predicted a larger Bush % than realized in the election result). The average variance (Z-score and p-value) for the Kerry bias states was higher than the average variance for the Bush bias states when you account for the standard error of a cluster sample, which Freeman (and others, including myself in previous analysis) has not done.
Also, the average variance in the battleground states was higher than the average variance in the non-battleground states.
I will be posting my analysis in the next few days. The key question will be, are the “differences” statistically significant. I’m working on that problem now.
Rick,
Are you saying it is possible that there would have been an error in the exit polls in 38 out of 47 states based on a single type of sampling error. This seems almost impossible to me to do ON PURPOSE (much more difficult than the widespread fraud some people are saying is impossible and therefore immediately discount) when you consider that you are talking about 47 different states and each state has its own intricacies. This would mean that there would have to be more than one sampling error and each of the errors, with the great majority of errors favoring Kerry and this was all by chance. I know you are doing the analysis now, but again, it seems almost impossible.
In a previous post, Mr. Blumenthal quoted a story that “The networks’ 1992 national exit poll overstated Democrat Bill Clinton’s advantage by 2.5 percentage points, about the same as the Kerry skew.”
Nobody cared, then, of course, because it was clear that Clinton had won, and the exact margin by which he did made litle difference. However, I am wondering if the 1992 exit polls overstated Clinton’s lead in *as many* states as happened for Kerry in 2004.
The very fact that the exit polls were wrong in so many states IMO makes fraud a *less* likely explanation for 2004. They overrated Kerry’s percentage in Diebold states with paper trails, Diebold states without paper trails, non-Diebold e-voting states, and punchcard states. And not all the states and counties involved were controlled by Republicans. I agree that spoilage and provisional votes (rather than fraud per se) may explain part of the divergence, but it does seem to me that some systematic problem with the polls themnselves (e.g., Republican reluctance to participate in polls conducted by the allegedly “liberal media”) very likely had a role.
Wilbur, at this point I’m not saying anything about what the data means. All I’m going to do is report what the data says. Saying what the data says is quite simple. It’s just math with a little theory (stress the math over the theory). Saying what it means is something different.
Dr. Freeman has “cleansed” his data. Apparently there were transcription errors and some of the states were accidently pulled from the data that was weighted to the election result.
Given Freeman’s new data, the Kerry bias was only statistically significant in 5 states: New Hampshire, New York, North Carolina, South Carolina, and Vermont. Make sense of that!
However, 7 states were Biased to Bush (overpredicted Bush’s actual % and underpredicted Kerry’s actual %), but 42 states plus DC show bias to Kerry (overpredicted Kerry’s actual % and underpredicted Bush’s actual %). There were two states that showed bias to both Bush and Kerry (underpredicted the % that went to someone other than Bush or Kerry).
Also, the bias was stronger in states with Kerry bias (42 + DC) than it was in Bush states (7). The bias was essentially equal in the two states that showed both Bush and Kerry bias (Wisconsin and Montana).
Dr. Freeman is missing data for Virginia (he has some, but its from 7:30 pm and I have judged it not trustworthy).
David T – Great question! Unfortunately I don’t know how to obtain the 1992 data. However, I have ordered the 2004 primary exit poll data from the Roper Center. Unfortunately though this may not have what we need. I think we are VERY fortunate that Dr. Freeman saved his data. Although I, like Mark, don’t agree with what he is doing with the data, at least he had the foresight to gather it.
My daughter needs her diaper changed. Ta ta.
Okay, I’ve had some of my work reviewed and there are questions that I need to have answered before I make any of the tables or spreadsheets public.
If I am interpreting a table provided to me by the NEP correctly, then I stand by what I posted in comments above.
If however I am not interpreting this table correctly as suggested by someone, then I need to rework things.
Basically I don’t know if my results represent the “highest” Z-scores and p-values or the “lowest” Z-scores and p-values possible given the data.
If I can’t answer this question, it’s all meaningless.
Memo to Rick: Hold your horses and don’t spill the beans. Those buggers can be hot!
All of this talk of exit polls and margins of error caused me to wonder whether anyone has taken a closer look at the Ukrainian exit polls that apparently signalled to the outside world that their reported election results could only have been the result of massive fraud there. I wonder whether we aren’t blindly applying different standards to the exit polls in the Ukraine to the ones we conduct here, and wonder why, if the margins of error in both cases aren’t roughly similar, why the outcome in the one case has the Western media rushing to declare fraud overseas, but oddly mute on the outcome in the US.
The Russia Journal Daily reported on Nov. 1:
“Data on Ukrainian exit polls released
November 01, 2004 Posted: 11:17 Moscow time (07:17 GMT)
KIEV – According to the final results of exit polls, the two top candidates for the presidential post in Ukraine have approximately the same number of votes.
Exit polls conducted through face-to-face interviews showed incumbent Prime Minister Viktor Yanukovich garnering 42.67 percent, and opposition leader Viktor Yushchenko winning 38.28 percent, whereas anonymous polls (respondents fill out forms anonymously) showed contrary results, namely Yushchenko receiving 44.4 percent and Yanukovich getting 38 percent.
The polls were conducted at 8 p.m. Moscow time, one hour prior to the closing of polling stations. Some 50,000 respondents took part in the polls.”
Source URL: http://www.russiajournal.com/news/cnews-article.shtml?nd=46162
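To put the reported 50,000-respondent figure in perspective, here is a rough sketch of the standard 95% margin of error for a sample proportion. Note the caveats: the 50,000 apparently pools several polls using different methods, and the simple-random-sample formula below is only a floor, since cluster designs inflate the true margin.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a simple-random-sample proportion.

    Real exit polls use cluster samples, so the true margin is larger;
    design effects of roughly 1.5-2x are commonly cited.
    """
    return z * math.sqrt(p * (1 - p) / n)

moe = margin_of_error(50_000)  # the combined Ukrainian sample reported above
```

Even doubling the result for a generous design effect leaves a margin of well under one percentage point, which is why multi-point gaps between the exit polls and the official count drew attention.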
Perhaps more to the point, Tatiana Silina of Zerkalo Nedeli has reported in substantive detail on the various exit poll methods employed. She writes:
“In fact, there were at least three exit polls. The “National Exit Poll” had been prepared for about twelve months and covered widely enough. The NEP project, initiated and authored by the Democratic Initiatives foundation, was financially supported by eight foreign embassies and four international funds. It was implemented by a consortium of four well-known sociological services: the Kyiv International Institute of Sociology, the Socis Center, the Social Monitoring Center, and the sociological service with the Razumkov Center of Political and Economic Studies. Three of these took part in the exit polls during the 1998 and 2002 parliamentary elections and both rounds of the 1999 presidential election.
Shortly before the October 31 election it became known that another exit poll would be conducted by the Russian fund Obshchestvennoye Mneniye [Public Opinion], headed by the notorious Kremlin political engineer Gleb Pavlovsky. His fund, however, has no network in Ukraine, and it is still unknown to the general public who exactly conducted the exit poll for the Russians while all major Ukrainian institutes and centers were engaged in their own exit polls. In fact, that exit poll is hardly worth remembering since its organizers officially announced on October 31 that they had to discontinue it because too many voters (more than 40 percent) declined to respond. Nevertheless, for an unknown reason, the central Ukrainian mass media, controlled by Bankova, cited it very amply.
The third exit poll was conducted, almost obscurely, by the Ukrainian Institute of Social Research – a governmental organization, which is widely popularized by Bankova’s mouthpieces as “the most honest and trustworthy”. It should be noted that all six universities that were involved in the KIIS exit poll are located in the eastern part of Ukraine. Other sociologists, though, are sure that Alexander Yaremenko, the institute’s director, was just on a PR contract, and his mission was the same as the one performed by the so-called “prop candidates”. This can be seen at least from Yaremenko’s commentaries: instead of reporting the results of his institute’s surveys, he mostly criticized other sociological services that reported Yushchenko’s advantage. Prof. Valeriy Khmelko, the KIIS President and a board member of the National Sociological Association, has said that by questioning his colleagues’ professionalism and objectivity, Yaremenko breaches the code of professional ethics.
…The figures, which were presented by two sociological centers from the NEP consortium, were strangely altered on October 31 evening. Their initial exit poll data showed Yushchenko’s clear advantage, but after some weird metamorphosis two hours later, he was almost five percent behind Yanukovych. (The other two members of the consortium never altered their initial reports, which showed Yushchenko’s advantage.)…
The returns were as follows: KIIS: 44.2% for Yushchenko, 38.6% for Yanukovych; Razumkov Center: 44.7% for Yushchenko, 37.2% for Yanukovych; Socis Center: 43.2% for Yushchenko, 41.1% for Yanukovych; Social Monitoring Center: 40.4% for Yushchenko, 40.7% for Yanukovych. Since the data obtained from two exit polls by the “box method” did not differ much, they were combined. The two latter centers combined their returns, too, disregarding the difference (43.2% vs. 40.4% in Yushchenko’s favor), which exceeded the accepted 2%. We never got a clear explanation. Socis President Mykola Chrilov admitted at a press conference on Thursday that “perhaps, it was not the very proper way, but we worked by one method, so we decided to combine our returns.”
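The article notes that two centers combined returns despite a 2.8-point gap (43.2% vs. 40.4% for Yushchenko) that "exceeded the accepted 2%." A standard two-proportion z-test shows why that gap matters; the per-poll sample sizes below are hypothetical, since the article does not give them.

```python
import math

def two_poll_z(p1, n1, p2, n2):
    """Two-proportion z-test: is the gap between two polls' estimates
    larger than sampling error alone would plausibly explain?"""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical sample sizes of 10,000 each; the article gives only the shares.
z = two_poll_z(0.432, 10_000, 0.404, 10_000)
```

At sample sizes anywhere near that scale the gap is several standard errors wide, so pooling the two polls "by one method" without explaining the divergence is exactly the kind of decision the article questions.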
The complete report can be found at http://www.mirror-weekly.com/nn/show/520/48297/
Any takers?