The Why & How of Likely Voters – Part II

Okay, we know from Part I that a survey designed to forecast an election should ideally focus on those who will actually vote and ignore non-voters. We also know that we cannot select likely voters simply by asking, "Will you vote?" The most common technique among pollsters is to use some combination of questions to identify a group of the most likely voters whose size, as a percentage of adults, resembles the level of turnout expected on Election Day. My posts over the next few days will describe the mechanics of what pollsters do in detail.

For now, however, I want to focus on one issue: Even if we think we know the likely turnout on November 2, can we precisely calibrate the size of the likely voter sample to match it? Much of the recent debate surrounding likely voter models assumes that we can. Let me suggest two reasons why that may be a risky proposition.
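To make the calibration idea concrete, here is a minimal sketch of a cutoff-style screen in Python. The single likelihood score and the function name are my own inventions for illustration; real screens combine several interest, intention and past-vote questions, and every pollster's recipe differs.

```python
# A minimal sketch of a cutoff-style likely voter screen.
# The scoring here is hypothetical; actual models vary by pollster.

def cutoff_likely_voters(respondents, expected_turnout):
    """Rank adults by likelihood score; keep the top expected_turnout share."""
    ranked = sorted(respondents, key=lambda r: r["score"], reverse=True)
    n_keep = round(len(ranked) * expected_turnout)
    return ranked[:n_keep]

adults = [
    {"id": 1, "score": 7},  # score = combined answers to screen questions
    {"id": 2, "score": 3},
    {"id": 3, "score": 6},
    {"id": 4, "score": 2},
]

likely = cutoff_likely_voters(adults, expected_turnout=0.50)
print([r["id"] for r in likely])  # [1, 3]: the half of adults deemed most likely
```

The debate below is really about the second argument to that function: what share of adults should the screen keep?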

Problems with Voting Age Population. Although we typically calculate turnout as a percentage of the Voting Age Population (VAP), there are good reasons to be skeptical of calibrating the size of the "likely voter" sample to match such estimates. Michael McDonald, a political scientist at George Mason University, estimates that roughly one in ten persons included in the VAP was ineligible to vote in 2000, including non-citizens (7.7%), ineligible felons (1.5%) and ineligible persons living abroad (1.4%). When these categories are taken into account, McDonald's estimate of 2000 turnout rises from 50% to 54% (see also the Washington Post op-ed by Popkin and McDonald from 2000).
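Here is a back-of-the-envelope version of that adjustment, using the component percentages quoted above. McDonald's published accounting is more detailed, so this crude arithmetic will not exactly reproduce his 54% figure, but it shows the mechanics: shrink the denominator from all adults to eligible adults, and turnout rises.

```python
# Back-of-the-envelope VAP-to-VEP turnout adjustment.
# Figures come from the post above; treat the result as illustrative only.

vap_turnout = 0.50                         # ballots / voting age population, 2000
ineligible_share = 0.077 + 0.015 + 0.014   # non-citizens, felons, persons abroad

# Removing the ineligible from the denominator raises the turnout rate:
vep_turnout = vap_turnout / (1 - ineligible_share)
print(f"VEP turnout: {vep_turnout:.1%}")   # about 56% by this rough math
```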

This finding is relevant because telephone surveys typically exclude non-English speakers and cannot reach felons living in prisons or those living abroad. Telephone surveys also miss other categories of otherwise eligible citizens who rarely vote, including people in nursing homes, soldiers in barracks and those who are mentally incompetent. As such, we should assume that the percentage of real voters included in telephone surveys of adults will be significantly higher than turnout as a percentage of the voting age population. How much higher? Opinions will vary.

Earlier this week, I spoke to Richard Morin, polling director at the Washington Post, about their likely voter model. He explained that his concerns about problems with the Voting Age Population, combined with the greater levels of voter interest measured this year, are why the Post is now using a likely voter model that screens out 40% of all adults rather than a larger number.

Non-response. Another issue that has received surprisingly little attention is the potential impact of lower response rates. If non-voters tend to hang up on pollsters more often than voters do, then samples of adults will tend to over-represent voters. That might not be a problem if a pollster is simply screening for likely voters. However, it can cause a problem if the pollster first samples adults and then tries to calibrate a likely voter subsample to match expected turnout among all adults. If the adult sample over-represents voters, the pollster will define the likely electorate too narrowly, even if the turnout guess is right.
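The arithmetic is easy to see with entirely hypothetical response rates. Suppose 55% of adults will vote, voters answer surveys at a 25% rate and non-voters at 15%; none of these figures come from real data, they merely illustrate the mechanism.

```python
# Hypothetical illustration: differential non-response inflates the voter
# share of an adult sample, so a cutoff calibrated to true turnout is too narrow.

true_turnout = 0.55     # voters as a share of all adults (assumed)
resp_voter = 0.25       # response rate among voters (assumed)
resp_nonvoter = 0.15    # response rate among non-voters (assumed)

# Composition of the completed adult sample:
voters_in_sample = true_turnout * resp_voter              # 0.1375
nonvoters_in_sample = (1 - true_turnout) * resp_nonvoter  # 0.0675
voter_share = voters_in_sample / (voters_in_sample + nonvoters_in_sample)

print(f"voters are {voter_share:.0%} of the sample")  # ~67%, not 55%
# A screen that keeps only the "top" 55% of this sample discards real voters.
```

Under these made-up numbers, a pollster who guessed turnout exactly right and trimmed the sample to 55% would still throw away roughly a fifth of the actual voters in it (assuming the screen ranks respondents perfectly).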

Is there any evidence that non-voters hang up on pollsters more readily? Unfortunately, academics have devoted surprisingly little attention to this issue. A recent study in Public Opinion Quarterly provides some indirect support. Survey methodologists Robert Groves, Stanley Presser and Sarah Dipko found that people are more likely to participate in surveys on topics that interest them. They found, for example, that teachers were more likely to respond to a survey about education, that new parents were more likely to respond to a survey about children and their parents, and that seniors were more likely to respond to a survey about Medicare and health. Although they looked at political contributors (who were more likely to respond to all surveys), they did not look specifically at voters.

The challenge of researching non-response is that — duh — we cannot interview those who hang up. One approach is to use reluctant respondents (those who relent and participate after initially refusing) as surrogates for those who ultimately refuse. In his 1993 book, The Phantom Respondent, University of Chicago political scientist John Brehm did just that. Looking at election surveys conducted by the University of Michigan in the late 1980s, studies that validated the actual voting behavior of respondents against election records, Brehm found evidence that reluctant respondents tended to be non-voters. Although Brehm derived his findings from some bewilderingly complex statistical models, his conclusion was clear. The surveys he analyzed "oversample voters" due to their "significant levels of non-response" (p. 138).

Keep in mind that Brehm analyzed surveys from the late 1980s with response rates of 60% to 70%. The response rates that news media surveys obtain today are considerably lower. A study in 2003 by Stanford Professor Jon Krosnick, for example, reported that response rates from 20 news media surveys averaged 22% (and ranged from 5% to 39%).

Oddly, I can find no recent academic research looking at whether non-voters hang up on pollsters more often than voters. However, most of the campaign pollsters I know have long assumed just such a pattern. I do not want to overstate this argument, as we have precious little hard evidence either way. However, if pre-election surveys of adults are overestimating probable voters, as seems at least plausible, then some of the “likely voter” sub-samples derived from them may be too narrow.

What is the bottom line? Opinion surveys are very good at measuring current attitudes, but they are imperfect, at best, as predictors of future behavior. When it comes to "modeling" turnout, surveys can separate the most likely voters from the least likely, and help show us the differences between the two. However, this pollster urges great caution on anyone judging the plausibility of likely voter samples by comparing their size to specific levels of turnout.

More to come…

Mark Blumenthal

Mark Blumenthal is a political pollster with deep and varied experience across survey research, campaigns, and media. The original "Mystery Pollster" and co-creator of Pollster.com, he explains complex concepts, and how data informs politics and decision-making, to a wide range of audiences. A researcher and consultant, he crafts effective questions and identifies innovative solutions to deliver results. An award-winning political journalist, he brings insights and compelling narratives out of chaotic data.