Measurement Error…in the Count


A quick break from exit polls…

Alert reader JW, a resident of Washington State, asks an interesting question highly relevant to the ongoing recount in that state’s race for Governor:

Can a vote that is only "decided" by 42 votes out of 2,800,000 ever really be accurate? We’re going into our 2nd recount, and I bet that the various totals given by each recount approximate the variation that exists in sampling polls if your sampling size was 2.8 million. Does anyone ever talk about this thing?

Actually, some have. When the presidential recount in Florida came down to a margin of a few hundred votes either way, Johns Hopkins University President William R. Brody penned a Washington Post op-ed piece on this very point:

But before we rush to conclude that a recount will resolve any closely contested election, consider this simple fact: A plurality of 300 votes out of nearly 6 million votes cast constitutes a margin of only 1 in 20,000. If we wish to recount the votes to determine whether the number 300 is indeed correct, we must be accurate in the recount process to much better than 0.005 percent.

Put another way, if you or I were asked to recount votes in one of the Florida precincts and were given a stack of 20,000 votes to count, we would have to perform the recount with zero errors! Just one error in the 20,000 ballots would be equivalent to the 300-vote margin that Gov. George W. Bush finished with in the recount.

I don’t know about others, but I can assure you that there is no way I could count 5,000 ballots, let alone 20,000, and maintain 100 percent accuracy. Simply distract me for one second while I’m counting and I could easily make a mistake.

We, the American people – and in this case, most especially the media – have tacitly assumed that voting is an intrinsically accurate process. But even in the absence of ballot tampering, no voting process can be expected to be 100 percent error free…

All of which raises an important question. What is the intrinsic accuracy of the voting process, of the voting machines and tallying methods? I suspect that most people would be happy to learn that vote counting was accurate to 0.05 percent. But in 6 million votes, that error rate would translate into a 3,000-vote margin of error – clearly not accurate enough for this election. If we knew the error rate, we could perhaps put into a statute the requirement for a runoff election whenever the margin was less than the voting error rate.
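Brody’s figures are easy to check. Here is a minimal Python sketch that simply restates his arithmetic; the 300-vote margin, 6-million-vote total, and 0.05 percent error rate are his hypotheticals, not new data.

```python
# Re-running Brody's back-of-the-envelope arithmetic with his own figures:
# a 300-vote margin out of roughly 6 million ballots cast.
margin = 300
total_votes = 6_000_000

print(f"Margin as a share of the vote: 1 in {total_votes // margin:,}")     # 1 in 20,000
print(f"Counting accuracy needed: better than {margin / total_votes:.3%}")  # 0.005%

# His hypothetical 0.05 percent error rate, applied to the same election:
error_rate = 0.0005
print(f"Votes miscounted at a 0.05% error rate: {error_rate * total_votes:,.0f}")  # 3,000
```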

Now consider this issue in terms of surveys. We have been discussing sampling error in recent posts, the random variation that comes from drawing a random sample rather than interviewing the entire population. In tabulating the vote there is no sample and hence no sampling error, yet small tabulation errors still occur. Brody wrote about such errors four years ago, and they certainly remained with us this year. In surveys, we usually treat these inevitable processing errors as random and offsetting. Absent strong evidence to the contrary, I assume most such errors in the vote count were similarly random.
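To put a rough number on that fuzziness, consider a minimal simulation. The sketch below is illustrative only: the 2.8 million ballots and 42-vote margin echo JW’s Washington example, while the 0.05 percent per-ballot error rate is borrowed from Brody’s hypothetical, and each miscount is assumed equally likely to favor either candidate.

```python
import math
import random
import statistics

# Hypothetical figures for illustration only (not measured error rates):
# 2.8 million ballots and a true 42-vote margin, echoing JW's Washington
# example, plus an assumed 0.05% chance that any single ballot is credited
# to the wrong candidate, with each mistake equally likely to cut either way.
TOTAL_BALLOTS = 2_800_000
TRUE_MARGIN = 42
ERROR_RATE = 0.0005

def simulated_tally(rng: random.Random) -> int:
    """One simulated count: the true margin plus the net effect of random,
    offsetting miscounts (normal approximation to ~1,400 coin-flip errors)."""
    expected_miscounts = TOTAL_BALLOTS * ERROR_RATE          # about 1,400 ballots
    net_shift = rng.gauss(0, math.sqrt(expected_miscounts))  # typical swing of ~37 votes
    return round(TRUE_MARGIN + net_shift)

rng = random.Random(2004)
margins = [simulated_tally(rng) for _ in range(10)]
print("Margins across 10 simulated counts:", margins)
print("Standard deviation:", round(statistics.stdev(margins), 1))
```

Under those assumed rates, the tally drifts by a few dozen votes from one count to the next even though the errors are random and offsetting, on the same order as the 42-vote margin itself.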

[Because someone will ask: Yes, I have seen claims that "100% of the reports of improper vote tabulation" benefited George Bush, but so far at least, I have not seen systematic evidence beyond the anecdotal. If you know of any such effort, or any effort to debunk these claims, please post a comment.]

Another source of error suggested in the Florida recount, but not touched on by Brody, was a broader conception of what survey researchers call "measurement error." We know that four years ago, many Florida voters went to the polls intending to cast a vote for one candidate but did not ultimately have their choice recorded as intended, because of confusing "butterfly" ballots or improperly punched chads that voided their ballots. Obviously, there was considerable debate, legal and political, over whether a recount could have corrected some of those errors. Whatever side of that debate you were on, it is clear that there was some fuzziness in the count, then and now.

If measurement error can be a factor in something as seemingly straightforward as balloting for president, imagine how important it can be for the more complex issue questions that frequently show up in opinion polls. Ideally, a survey researcher will try to minimize measurement error by "pre-testing" questions: do they measure the things we want them to? The Mystery Pollster expects the issue of potential "measurement error" to come up again and again as we broaden our focus a bit in 2005.

Mark Blumenthal

Mark Blumenthal is a political pollster with deep and varied experience across survey research, campaigns, and media. The original "Mystery Pollster" and co-creator of Pollster.com, he explains complex survey concepts to a wide range of audiences and shows how data informs politics and decision-making. He is a researcher and consultant who crafts effective questions and identifies innovative solutions to deliver results, and an award-winning political journalist who draws insights and compelling narratives from chaotic data.