And finally: The final post in my "epic" series on likely voter models. The link to the jump page will take you to descriptions of the likely voter selection or modeling procedures used by 22 major polling organizations: All of the major national polls (except Zogby), plus several firms that conduct only statewide surveys.
Though a lot was already in the public domain, I gathered much of the information that follows by making direct requests of the survey organizations. Although many public pollsters typically withhold details of their likely voter models, and many were reluctant to offer more than vague descriptions, I was pleasantly surprised by the volume of detail they were willing to share. My only regret is that I was unable to produce this listing earlier. Then again, consumers of polling data will rarely scrutinize the polls more closely than they will over the next 24 hours, so I hope it is better late than not at all.
While this information demands a far more careful, considered assessment than I can provide this morning, there are some obvious general lessons. First, the details that follow should make clear that no two likely voter models are exactly alike. Modeling likely voters is truly the one aspect of polling that is arguably more art than science. Though there are some similarities across polls, the differences are more striking.
Here are the most important differences worth noting:
- All except the CBS/New York Times survey try to decide whether individual respondents qualify as likely voters or not. Only CBS/NYT models likely voters by weighting respondents by their probability of voting.
- Thirteen survey organizations (ABC/Washington Post, AP-IPSOS, ARG, CBS/New York Times, Gallup, Harris, LA Times, Marist, NBC/Wall Street Journal, Newsweek, Pew, Quinnipiac and Time) ask vote questions of all registered voters and later apply screen questions to select likely voters. These organizations often report results for both registered and likely voters. Most of the others (Battleground, Democracy Corps, ICR, Insider Advantage, Rasmussen, SurveyUSA and TIPP) typically define likely voters with screen questions and ask the vote question only of those who pass the screen. The Fox/Opinion Dynamics survey has alternated between registered voter and likely voter screens.
- Fourteen survey organizations use some sort of self-reported measure of past voting as part of their model or screen. These include ABC/Washington Post, AP-IPSOS, ARG, CBS/New York Times, Democracy Corps, Fox/Opinion Dynamics, Gallup, Harris, LA Times, Newsweek, Pew, Quinnipiac, Rasmussen and Time. The others do not.
- Seven surveys (ABC/Washington Post, Gallup, LA Times, Newsweek, Pew, Quinnipiac and Time) use models that emulate the key feature of the famous Gallup model: They aim for a specific "cut-off" percentage – a proportion of likely voters among all adults that corresponds to the level of turnout they expect on Election Day. Most of the others do not.
- Four surveys routinely weight by party as part of their likely voter model: Battleground, Rasmussen, TIPP and ABC/Washington Post (only during October and then only partially). The NBC/Wall Street Journal poll sometimes weights by party, but on an ad hoc basis. The Insider Advantage survey will weight by self-reported party registration in states that register by party.
Again, this conclusion is based on only the most cursory look at recent polling data, but I notice one apparent pattern: At the national level, the surveys that attempt to calibrate the percentage of likely voters to expected turnout have given George Bush wider leads than other polls released since Labor Day, while the surveys that weight by party have shown smaller Bush leads. The rest fell in the middle. The surveys have converged in the final week, but the pattern persists, though the differences are much smaller. Again, I ran these numbers quickly (and I haven’t had much sleep lately), so please treat the findings as tentative:
- In September, the three surveys released by Gallup favored Bush by an average of 10 percentage points (53% to 43%). The nine surveys released by the organizations that used cut-off models without a party weight (ABC/Washington Post, AP-IPSOS, Gallup, LA Times, Pew and Time) favored Bush by an average of 7 points (51% to 44%). The seven survey results from the three polls that routinely weighted by party (Zogby, TIPP and Rasmussen [for whom I used three weekly averages]) gave Bush only a 3-point lead (47% to 44%). The average of the other 14 surveys released fell somewhere in the middle (a 5-point Bush lead, 49% to 44%).
- The final results from the surveys using cut-off models without party weighting (Gallup/CNN, Pew, LA Times, Newsweek and Time) give Bush an average lead of three points (49% to 46%). As of this writing, the four tracking surveys that routinely weight by party give Bush an average lead of just one point (48% to 47%). The other surveys fall in between (49% to 47%). I’ll take another look when the final numbers are released.
These differences are very small, but the race is so close that a point or two could make the difference, especially if you believe that John Kerry will get more of the remaining undecided vote than George Bush. More on this later today…
One last note. The information about each survey that appears on the jump page represents a remarkable degree of disclosure. With two notable exceptions — Mason Dixon and Zogby — every survey organization I contacted was willing to answer a few simple questions about their likely voter models during one of their busiest periods of the year. We owe a big thank you to these organizations for putting their concerns aside and opening up their methodology a bit more to the blogosphere.
Details of likely voter models are listed below in alphabetical order by organization.
This summary covers only the methods pollsters use to select likely voters. Surveys may also differ in many other ways, including the way they draw and administer their samples and the language and order they use to ask questions. Slate’s Will Saletan and colleagues have a terrific "Consumer’s Guide to Polls" now online that discusses many of these other important differences.
Disclaimer: The text that follows is my work alone. I put it together over a four-day span in which I experienced (a) the birth of my second child and (b) very little sleep. There WILL be typos. I apologize for these in advance, but I would rather let you see this information today, while it is still relevant. Email me with typos as you find them, and I’ll gladly repair them.
MP
ABC News
ABC conducts surveys jointly with the Washington Post, but each organization applies its own weighting and likely voter models (the Washington Post‘s procedures are described separately at the end of this page).
Mechanics – The ABC methodology page explains their likely voter models:
Our practice at ABC News is to develop a range of "likely voter" models, employing elements such as self-reported voter registration, intention to vote, attention to the race, past voting, age, respondents’ knowledge of their polling places and political party identification. We evaluate the level of voter turnout produced by these models and diagnose differences across models when they occur.
In an email, Polling Director Gary Langer said that ABC uses "straight cutoff models," in which they use the variables above in some way, presumably as an index, to rank voters by their likelihood of voting and select those above a certain "cutoff level." He also said that "turnout ranges across models," although he did not elaborate.
New registrants – Langer said that new registrants could qualify as likely voters, but again, he did not elaborate. An ABC survey release on October 19 said that 10% of their likely voters say 2004 will be the first time they vote in a presidential election.
Party ID – Starting in early October, the ABC pre-election tracking poll incorporated weighting by party identification into its model, as described on their methodology page:
Keeping in mind that actual change can occur, but also that random movement can distort, our solution is to compute an average of party ID as measured in our nightly tracking poll, and party ID as measured in recent presidential elections. This averaging approach allows us to pick up real movement in party ID while constraining random variability.
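The quoted passage implies a simple blending rule. Here is a minimal sketch of that idea; since ABC does not publish the exact formula, the equal 50/50 blend and all of the numbers below are my own illustrative assumptions, not ABC's parameters.

```python
# Hypothetical sketch of the party ID "averaging" ABC describes above. The
# 50/50 blend and every number below are illustrative assumptions, not ABC's
# published parameters.

def blended_party_id(tracking, historical, w_tracking=0.5):
    """Average each party's share in tonight's tracking poll with a
    historical benchmark (e.g., party ID in recent presidential elections)."""
    return {
        party: w_tracking * tracking[party] + (1 - w_tracking) * historical[party]
        for party in tracking
    }

tracking_party_id = {"Dem": 0.36, "Rep": 0.33, "Ind": 0.31}    # placeholder
historical_party_id = {"Dem": 0.39, "Rep": 0.35, "Ind": 0.26}  # placeholder

# Weighting targets that follow real movement but damp night-to-night noise.
print(blended_party_id(tracking_party_id, historical_party_id))
```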
Methodology links – ABC has a five-page methodology disclosure, a guide to public opinion, and a primer on response rates.
AP-IPSOS
Mechanics – Likely voters are self-described registered voters who say they have a great deal or quite a bit of interest in following news about the campaign (Q.LV3 – text below) AND who meet one of the following conditions (a rough sketch of the combined rule follows the question text):
– Voted in 2000 (Q.LV1) AND rating of 8-10 (Q.LV2), OR;
– Did not vote in 2000 because too young to vote (Q.LV1) AND rating of 8-10 (Q.LV2), OR;
– Did not vote in 2000 for reason other than too young AND rating of 10 (Q.LV2).
Likely voters were 75% of registered voters, or 61% of all adults, on their Oct 22-26 survey.
Question text:
1. Are you currently registered to vote at this address, or not?
LV1. Sometimes things come up and people are not able to vote. In the 2000 election for President, did you happen to vote? (IF NO, ASK:) Why not?
LV2. On November 2nd, the election for President will be held. Using a 1-to-10 scale, where 10 means you are completely certain you will vote and 1 means you are completely certain you will NOT vote, how likely are you to vote in the upcoming presidential election? You can use any number between 1 and 10, to indicate how strongly you feel about your likelihood to vote.
LV3. How much interest do you have in following news about the campaign for President, a great deal, quite a bit, only some, very little, or no interest at all?
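Putting the screen together, the rule above reduces to a few boolean conditions. A minimal sketch of my reconstruction of the stated rule (the variable names are my own, not AP-IPSOS code):

```python
# My reconstruction of the AP-IPSOS likely voter rule described above.
# Variable names are my own; only the logic follows the published description.

def is_likely_voter(registered, interest, voted_2000, certainty):
    """
    registered -- self-described registered voter (Q.1)
    interest   -- answer to Q.LV3: 'great deal', 'quite a bit', 'only some', ...
    voted_2000 -- answer to Q.LV1: 'voted', 'too young', or 'did not vote'
    certainty  -- 1-10 self-rating of likelihood to vote (Q.LV2)
    """
    if not registered or interest not in ("great deal", "quite a bit"):
        return False
    if voted_2000 in ("voted", "too young"):
        return certainty >= 8   # an 8-10 rating qualifies
    return certainty == 10      # non-voters for other reasons need a 10

print(is_likely_voter(True, "quite a bit", "too young", 9))    # True
print(is_likely_voter(True, "great deal", "did not vote", 9))  # False
```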
Party ID – AP-IPSOS does not weight by party ID.
The American Research Group (ARG)
Mechanics – ARG uses three direct questions to determine likely voters: (1) a 1 to 10 scale (definitely not vote to definitely vote) for self-assessment, (2) a question on interest in the election (not interested at all, not too interested, somewhat interested, very interested), and (3) past voting behavior (always vote, vote in most, vote in at least half, vote in less than half, never vote/have never voted), plus sex and age.
An email from ARG President Dick Bennett adds the following:
Past research using samples with voter histories appended to the files shows that 9’s or 10’s in the self-assessment scale do vote and are therefore likely voters. The real problem is with the 7’s and 8’s. Our software constructs a discriminant model from a sample of the 9’s and 10’s and then applies it to the remaining 7’s, 8’s, 9’s, and 10’s. An average of the probabilities of the 9’s and 10’s is created and if the 7’s and 8’s match or exceed the average probability, they are considered likely voters. This may sound complicated, but the software does it automatically. We also can bounce any 9’s and 10’s that don’t fit the population of likely voters.
(I have a tendency to increase the average probability generated by the model. It becomes a problem when the 7’s and 8’s differ from the 9’s and 10’s in ballot preference. Most times, however, the 7’s and 8’s do not differ in ballot preference. When they do and I have ignored the model, I note it in the results.)
Our latest surveys are showing about 85% of registered voters coming through as likely voters. That is not going to happen, but we do know that a likely voter does not need to actually vote on election day to represent voters on election day. Random events will prevent some likely voters from voting on election day and a 7 or 8 does not have any greater chance of being prevented from voting than a 9 or a 10.
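Bennett's description amounts to: fit a model on the self-rated 9's and 10's, score everyone rated 7-10, and admit the 7's and 8's whose predicted probability meets the average for the 9's and 10's. A hedged sketch of that thresholding step follows; the discriminant model itself, and the predictors it uses, are not disclosed, so the fitted probabilities here are simply taken as given.

```python
# Sketch of ARG's thresholding step as described above. `prob_vote` stands in
# for whatever the (undisclosed) discriminant model predicts for a respondent.

def select_likely_voters(respondents):
    """respondents: dicts with 'scale' (1-10 self-rating) and 'prob_vote'
    (predicted probability of voting from some fitted model)."""
    nines_tens = [r for r in respondents if r["scale"] >= 9]
    if not nines_tens:
        return []
    # Average predicted probability among the 9s and 10s...
    threshold = sum(r["prob_vote"] for r in nines_tens) / len(nines_tens)
    # ...9s and 10s are in; 7s and 8s must match or exceed that average.
    return [
        r for r in respondents
        if r["scale"] >= 9 or (r["scale"] >= 7 and r["prob_vote"] >= threshold)
    ]
```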
New Registrants
New registrants can be classified as likely voters, and it is also possible for someone who says they intend to register on election day, but is not registered when we talk to them, to qualify. The whole purpose of the discriminant model is to include voters who give 7’s and 8’s if it looks like they will vote. A new voter’s 7 or 8 may be equivalent to the 10 of a voter who always votes.
Party ID – ARG does not weight by party ID.
Battleground/GWU
The Battleground survey is conducted by two campaign polling firms, The Tarrance Group (R) and Lake, Snell, Perry (D).
Mechanics – The Battleground survey selects likely voters by screening for voters who say they are registered to vote and are at least somewhat likely to vote (see text below). They also screen out anyone employed by an advertising agency, newspaper, or television station. They do not set a specific cut-off percentage.
Question text –
A. Are you registered to vote in this state?
B. Now, thinking ahead to the elections that will be held this November — What is the likelihood of your voting in this upcoming election — are you extremely likely, very likely, somewhat likely, or not very likely at all to vote?
C. Are you, or is anyone in your household, employed with an advertising agency, newspaper, television station?
New Registrants – can qualify as long as they say they are at least somewhat likely to vote.
Regional stratification – The survey sets pre-interview quotas by state and by gender. These quotas are set using statistics for registration and past turnout. They determine the percentage contribution of each state to the total number of registered voters and past voters, then create an average that "uses one part registration and two parts turnout."
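Read literally, "one part registration and two parts turnout" is just a weighted mean per state. A small sketch of that arithmetic (the state shares below are placeholders, not the survey's actual inputs):

```python
# Sketch of the Battleground state quota calculation described above.
# The state shares below are placeholders, not the survey's actual inputs.

def state_quota_share(reg_share, turnout_share):
    """One part registration, two parts past turnout."""
    return (1 * reg_share + 2 * turnout_share) / 3

# e.g., a state with 4.0% of registered voters and 4.6% of past turnout
print(round(state_quota_share(0.040, 0.046), 4))  # 0.044
```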
Party ID – The Battleground Survey weights party identification to GOP=42.3%, Independent=15.4%, Democrat=42.3% (Their percentages for Democrats and Republicans include independents who lean to one of the parties on a follow-up question). In an email, the Tarrance Group explained that their party targets were arrived at "in collaboration with Lake, Snell, Perry to weight the data in the most fair and accurate manner possible."
CBS News/New York Times
Mechanics – CBS News uses a unique procedure that weights each registered voter by their likelihood of voting. This procedure uses the responses of every registered voter in the sample, only at different weighted values. CBS describes their likely voter model in great detail here. I described it in this post.
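Instead of an in-or-out screen, this approach down-weights the less probable voters. A minimal sketch of the general idea (the probabilities and responses below are placeholders; the actual probabilities come from CBS's own model):

```python
# Sketch of probability-of-voting weighting, the general idea behind the
# CBS/New York Times approach. The probabilities and responses below are
# placeholders; CBS's actual probabilities come from its own model.

def weighted_vote_share(respondents, candidate):
    """Each registered voter counts in proportion to demographic weight times
    estimated probability of voting, rather than being screened in or out."""
    total = sum(r["weight"] * r["p_vote"] for r in respondents)
    support = sum(
        r["weight"] * r["p_vote"] for r in respondents if r["choice"] == candidate
    )
    return support / total

sample = [
    {"weight": 1.0, "p_vote": 0.9, "choice": "Bush"},
    {"weight": 1.1, "p_vote": 0.4, "choice": "Kerry"},
    {"weight": 0.9, "p_vote": 0.8, "choice": "Kerry"},
]
print(round(weighted_vote_share(sample, "Kerry"), 3))  # about 0.563
```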
New Registrants – are included, although their answers are presumably weighted lower than other respondents, since new registrants typically turn out to vote at lower levels.
Party ID – CBS News does not weight by party ID.
Methodology Page – CBS has a methodology page and a separate description of how they select likely voters. The New York Times always includes a lengthy methodology description with each poll.
Democracy Corps – conducted by Democratic pollster Greenberg Quinlan Rosner Research
Mechanics – Democracy Corps selects likely voters with screen questions. A likely voter is a registered voter who voted in either the 2000 presidential or 2002 congressional election and reports they are at least "probably" going to vote this year (see question text below). They drop the past turnout requirement for those not old enough to vote in 2000 or who otherwise registered since 2000.
Question text –
Q.3 First of all, are you registered to vote at this address?
Q.4 Many people weren’t able to vote in the 2000 election for President between George Bush, Al Gore, and Ralph Nader. How about you? Were you able to vote, or for some reason were you unable to vote?
Q.5 Were you registered to vote at that time and did not vote, or were you not registered to vote?
Q.6 As you know, there was an election for Congress and other offices in 2002. Many people weren’t able to vote. How about you? Were you able to vote or for some reason were you unable to vote?
Q.7 Were you registered to vote at that time and did not vote, or were you not registered to vote?
Q.8 What are the chances of your voting in the election for President this year: are you almost certain to vote, will you probably vote, are the chances 50-50, or don’t you think you will vote?
New Registrants – New registrants can qualify as likely voters as long as they report they will probably or definitely vote (on Q.8 above).
Regional stratification – Democracy Corps sets regional quotas based on past vote and registration statistics.
Party ID – Democracy Corps does not weight by party ID.
Fox News/Opinion Dynamics
Mechanics – Each respondent is screened to establish him/her as a registered voter who voted in the 2000 presidential election or has registered since then. In states without registration, or with same-day registration, the question asks about the frequency with which the individual votes. The likely voter scale scores respondents through a series of additional questions about past voting behavior and interest in voting in the current election.
New registrants – a person who has never registered before has to show very high levels of interest and intention to make it into the survey anyway.
Regional stratification – Fox News/Opinion Dynamics regionally stratifies its random digit dial (RDD) samples by past turnout statistics so that numbers from across the country are selected in proportion to the number of voters in each state.
Party ID – The Fox News/Opinion Dynamics survey does not weight by party ID.
Methodology page – http://www.foxnews.com/story/0,2933,95854,00.html
Gallup/CNN/USAToday
Mechanics – I have described the Gallup model at length beginning with this post. Gallup creates a seven-point index based on answers to the seven questions shown below (a sketch of the scoring and cut-off arithmetic follows the list). Respondents under 21 can receive up to 3 bonus points depending on their answers to other questions. Likely voters are typically sevens, plus sixes weighted down to a cutoff percentage. Gallup had been setting the weighted cutoff to 55% of its adult samples; Jeff Jones of Gallup reported that the cutoff would be increased to 60% for its last survey.
1) How much have you thought about the upcoming elections for president, quite a lot or only a little? (Quite a lot = 1 point)
2) Do you happen to know where people who live in your neighborhood go to vote? (Yes = 1 point)
3) Have you ever voted in your precinct or election district? (Yes = 1 point)
4) How often would you say you vote, always, nearly always, part of the time or seldom? (Always or nearly always = 1 point)
5) Do you plan to vote in the presidential election this November? (Yes = 1 point)
6) In the last presidential election, did you vote for Al Gore or George Bush, or did things come up to keep you from voting? (Voted = 1 point)
7) If "1" represents someone who will definitely not vote and "10" represents someone who definitely will vote, where on this scale would you place yourself? (Currently 7-10 = 1, according to this "quiz" on USA Today)
New Registrants – 18-21 year olds get bonus points. On a survey in late October, 6% of likely voters said they will cast their first presidential vote in the 2004 election. See this post for more details.
Party ID – Gallup does not weight by party ID.
Methodology page – While Gallup does not have a methodology page per se, Gallup often answers methodological questions on their Editor’s Blog.
Harris Interactive
Mechanics – Harris screens for likely voters using questions on registration, likelihood of voting, past voting, whether the election might make a difference and interest in election.
According to an email from Harris’ David Krane, they do not use a specific cut-off percentage, though likely voters as a percentage of the adult sample "ranges anywhere from 65%-80% depending on the set of questions that we use."
New Registrants – Harris identifies new registrants as part of their registration question. Without elaborating, the email from Harris states that new registrants can qualify as likely voters.
Party ID – Harris does not weight by party ID, but Krane added, "We really would prefer not to weight by party – we strongly feel this way – but we haven’t ruled it out. If there are really weird skews from previous surveys and previous years we might consider it but it would really have to be a last resort."
International Communications Research (ICR)
Mechanics – ICR selects likely voters with a screen: registered voters who say they are "absolutely certain" they will vote. The certainty question, similar to ones other pollsters use, has the following response options: absolutely certain, probably, or 50-50 or less. Via email, David Dutwin of ICR reports: "We find that around 86-88% of registered voters say they are absolutely certain."
New Registrants – Since the screen does not include past voting, new registrants can qualify if they are absolutely certain to vote.
Party ID – ICR does not weight by party ID, but Dutwin adds via email, "we do not have a hard and fast rule about this…we have yet to be more than 2% off on the [average] Democrat and Republican numbers, and once we were 4% off with independent."
Insider Advantage
Mechanics – Insider Advantage is the only survey organization in this listing that uses lists of registered voters to draw all of its samples. While several national media polls (including ABC/Washington Post and Quinnipiac) have experimented with list-based samples, all continue to draw samples during the current election season using the random digit dial (RDD) methodology.
The benefit of a registered voter list is that it includes only registered voters. The disadvantage is that such lists typically lack phone numbers, and efforts to obtain telephone numbers via computer match miss those without listed phones. As a result, list-based samples can miss anywhere from 30-50% of the pool of registered voters.
Insider Advantage asks to speak to the registered voter on the list, then screens for those who say they are "definitely" or "probably" likely to vote, and screens out those who "might" or "might not" vote.
New registrants – Another big disadvantage of a list-based sample is that it misses new registrants. Vendor lists are typically obtained from the Secretary of State many months before the election.
Party ID – Insider Advantage does not weight by party identification, although in states that register voters by party, they will weight self-reported party registration to the distribution of party registrants in the sample of registered voters.
Los Angeles Times
Mechanics – Likely voters are selected using responses to questions on (1) Intention to vote, (2) certainty to vote, (3) interest in the campaign and (4) past voting history (list courtesy National Journal’s Hotline). They use these questions to score respondents and designate each as high, medium or low turnout voters. They ask registered voters all questions in the survey. According to an email from Susan Pinkus, polling director at the LA Times: "There is no cut off for voting age population (VAP) — whatever we get in the sample we get."
New Registrants – They ask a question to identify first time voters, and they explicitly allow new registrants to qualify as likely voters if they are "pretty sure" or "definitely" planning to vote.
Party ID – The LA Times poll does not weight by party ID.
Methodology page – http://www.latimes.com/news/custom/timespoll/la-timespollfaq.htmlstory
The Marist Institute
Mechanics – The Marist poll begins with an RDD sample of voting-age respondents. Likely voters are those who say they are registered to vote, have a likely chance of voting (the specific race is identified), and are interested in the election (the specific race is identified). In an email, Marist said respondents "are given a weight based on the intensity of support of their choice," but did not elaborate. They also say that while they do not set a specific cut-off percentage, 61% of adults qualified as likely voters on a mid-October survey, "an increase of 13 points from a similar poll conducted during the same week in 2000."
New Registrants – Since the initial RDD sample dials all adult households and the model does not include measures of past voting, new registrants will qualify.
Party ID – Marist does not weight by party ID.
Mason-Dixon
Mason-Dixon did not return my phone calls. When National Journal’s Hotline asked Mason-Dixon about their likely voter screen, they sent the following statement by email:
We definitely have our own system of [selecting likely voters], which has proven to be largely more successful than others over the years….However, making that system and all of its particular elements public information only invites our competition to steal or copy it.
As such, we are not in a position to discuss our screening methods and other research practices that we use to differentiate likely voters. We are a privately owned firm, not a public or university-affiliated research organization that receives any level of public funding. As a private enterprise in a competitive market, this is a simple matter of protecting our long-term business interests.
Oddly, Mason-Dixon did offer the following description of their likely voter model to Slate.com: "They ask how likely you are to vote. If you’re at least somewhat likely, you’re in." That description seems somewhat at odds with the "system" they are so concerned others might "steal."
NBC/Wall Street Journal
Democrat Peter D. Hart and Republican Bill McInturff jointly conduct the NBC/Wall Street Journal survey.
Mechanics – The NBC/WSJ survey defines likely voters as registered voters who rate their interest in the upcoming elections as a 9 or 10 on a 1-10 scale. They do not aim for a specific cutoff percentage, although likely voters were 63% of registered voters in August and 71% of registered voters in September.
Question text –
Are you currently registered to vote at this address?
And did you register to vote within the past twelve months, or have you been registered to vote for longer than that?
Did you decide to register to vote within the past twelve months because you were not previously eligible to vote, because you were not previously interested in voting, or because you moved or changed your place of residence?
Please tell me how interested you are in the upcoming elections, using a scale from one to ten, on which a ten means that you are very interested in this November’s elections and a one means that you are not at all interested? You may choose any number from one to ten.
New Registrants – will qualify since the initial RDD sample dials all adult households and the model does not include measures of past voting.
Party ID – They will weight by party on an ad hoc basis if the result for party identification varies significantly from their historical average. Explained in more detail here.
Newsweek/Princeton Survey Research Associates (PSRA)
Mechanics – Uses a series of questions to create an index and screen out those below a certain score. An email from Larry Hugick at PSRA described their approach as "the method Paul Perry developed at Gallup in the 1950s modified to address changes in demographics, voting behavior and survey mode."
More:
The items used deal with interest in politics, self-reports on likelihood of voting, awareness of polling place location, frequency of past voting, and past voting in current election district. Early voters and younger voters are scored somewhat differently from other voters.
The turnout level is assumed to be comparable to 1992. The adjusted cut-off point for likely voters is approximately 60% of the total weighted sample.
According to Hugick, their cut-off percentage includes "some adjustment to the cut-off point to account for ‘missing’ and under-represented groups in telephone poll samples (low-income, non-English-speaking, the institutionalized, etc.)."
New Registrants – Again, quoting the email from PSRA:
The Newsweek poll’s likely voter base allows for first-time voters. We use RDD sample, so newly registered voters are included in the RV base. In determining likely voters, past voting history counts less for voters under 30 and is factored out for those aged 18-19. Nine percent of likely voters in our [October 14-15] Newsweek poll say they never previously voted in a presidential election.
Party ID – Hugick: "The Newsweek poll is NOT weighted by party ID. Party ID is regarded as an attitude that is sensitive to question wording, context, and the news environment."
Pew Research Center
Mechanics – Uses a series of questions to create an index and screen out those below a certain score. An email from Michael Dimock at Pew described the model as "a variant of the Perry-Gallup approach — as we get closer to Election Day we tend to make the scale a bit more inclusive to increase reliability, but to little substantive effect [meaning, presumably, that they loosen the screen with little effect on the vote]."
Non-registrants and those who say they are not planning to vote are never considered likely voters. Among registrants, the index of likely voting scores a point for each of the following (the text may not be exact): (1) Given "a lot" of thought to the election, (2) Voted in 2000 election, (3) "Always" vote, (4) "Plan to vote" and "9" or "10" on a 1-10 scale of certainty to vote this November, (5) "Absolutely certain" registered to vote, (6) Have previously voted in your precinct or election district, and (7) Follow government and public affairs "most" or "some" of the time, regardless of whether there is an election going on or not. Pew did not use the last three items in the scale prior to their final survey.
New Registrants – People age 18-23 were given a bonus point to compensate for not having had as many opportunities in the past to vote.
Party ID – The Pew Research Center does not weight by party ID.
Methodology page – Pew typically includes a detailed methodology description in their online reports (here’s an example from early October). Many of the questions used in their likely voter scale appear in the questionnaire (an example).
Quinnipiac
Mechanics – Uses a series of questions to create an index and screen out those below a certain score. An email from Doug Schwartz at Quinnipiac described the methodology as follows:
We ask a series of questions that measure likelihood of voting, past voting, interest in the election, awareness of the polling place location. We use past turnout to determine the cutoff. For example, if we estimate that 50 percent of the voting age population will vote in the election, about 50 percent of the top scores will be included as likely voters.
New Registrants. Though he did not elaborate, Schwartz said via email, "It is possible for first registrants to be classified as likely voters."
Party ID – Quinnipiac does not weight by party ID.
Rasmussen Reports
Rasmussen and Survey USA are the only organizations listed that conduct interviews with an automated, recorded voice rather than a live interviewer.
Mechanics – Via email, Scott Rasmussen said he uses screening questions to select the sample, and questions about likelihood of voting, past voting and interest in the election (closer to election day) to "further refine" their likely voters ("i.e.-you might get through the screening questions but still be considered a low probability voter"). Rasmussen does not aim for a specific cut off percentage.
New Registrants – Via email, Rasmussen said it was "possible" for new registrants to be classified as likely voters, adding only: "As a practical matter, it is easier for younger voters who are first time registrants than older voters."
Party ID – Rasmussen weights their likely voter sample by party identification. Rasmussen explained:
Our base model is 35% R 39% D and 26% other. However, once the sample is weighted to that model, responses that indicate likelihood of turnout can adjust that a bit. As a practical matter, our samples never vary more than 2 percentage points from that base model (and rarely by that much)
We believe that Party ID is something like loyalty to a sports team. Although its intensity and enthusiasm may ebb and flow, the party ID stays with an individual and is not subject to whims of the moment. Obviously, there are some changes over time. Changes in the partisan make-up of the Electorate are more likely the result of turnout and enthusiasm rather than people changing their minds
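Weighting to a fixed party model is straightforward post-stratification: each party group gets a weight equal to its target share divided by its share of the raw sample. A minimal sketch using Rasmussen's stated 35% R / 39% D / 26% other base model; the observed sample shares are placeholders, and this ignores the turnout-based adjustment Rasmussen also describes.

```python
# Sketch of weighting a sample to a fixed party ID model. The targets are
# Rasmussen's stated base model; the observed shares are placeholders, and
# this ignores the turnout-based adjustment Rasmussen also describes.

TARGETS = {"Rep": 0.35, "Dem": 0.39, "Other": 0.26}

def party_weights(sample_shares):
    """Post-stratification weight for each party group: target / observed."""
    return {party: TARGETS[party] / sample_shares[party] for party in TARGETS}

observed = {"Rep": 0.32, "Dem": 0.41, "Other": 0.27}  # hypothetical raw sample
print({p: round(w, 2) for p, w in party_weights(observed).items()})
# Republicans in this sample get weighted up (~1.09), Democrats down (~0.95).
```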
SurveyUSA
Survey USA and Rasmussen are the only organizations listed that conduct interviews with an automated, recorded voice rather than a live interviewer.
Mechanics – Survey USA identifies likely voters as those who say they are registered to vote and also say they are "very likely" or "absolutely certain" to vote (see full text below). They do not aim for a specific cut-off percentage, and though they ask about past voting behavior, they do not include past voting as part of the definition of a likely voter. They use different models for primary elections.
In an email, Jay Leve of Survey USA added the following:
We consider our likely-voter model a work in progress, and will not be satisfied until we reduce our error (asymptotically) to zero. The likely voter screen we are using in the 2004 general, for example, is a refinement of the model we used in the 2004 primaries. The model we used in 2004 primaries was a refinement on what we used in the 2003 off-year elections, etc. To this end, SurveyUSA is at present running a dozen or so side-by-side parallel tests (data that we are gathering, but not reporting, and are holding for post-election analysis), to see whether we might have done even better in 2004 had we chosen a slightly different methodology, and if so, we shall incorporate those refinements into the election polls we conduct in 2005 and beyond. To be explicit: refinements in the likely voter model do not necessarily mean the addition of more screening questions. We are actually using fewer questions in 2004 than we have used in some previous years. It is possible that the learning we get from our side-by-side testing in 2004 will lead us to ask even fewer questions in the future. The tension between asking the fewest number of questions possible and getting the most "perfect" portrait of the electorate is always on our mind.
Question text
Are you registered to vote in the state of Pennsylvania?
On November 2nd, Americans will elect a president. Which of the following 5 statements best describes you:
One: I ABSOLUTELY will NOT vote in the Presidential election.
Two: I PROBABLY will NOT vote.
Three: I probably WILL vote.
Four: I VERY LIKELY WILL vote.
Five: I am absolutely CERTAIN to vote.
New Registrants – will qualify since the initial RDD sample dials all adult households and the model does not include measures of past voting.
Party ID – Survey USA does not weight by party ID.
The TechnoMetrica Institute of Policy and Politics (TIPP)
Mechanics – Likely voters in the TIPP tracking poll are those who report a greater likelihood to vote on two different questions: They say they are very likely to vote on an initial vote likelihood question and report that their chances of voting are 9 or 10 on a 1-10 scale on a second, confirmatory question asked at the conclusion of the interview.
The TIPP poll has not aimed for a cutoff at a specific percentage of the adult sample; however, likely voters have typically been 58-60% of adults and approximately 85% of self-reported registered voters.
Question text –
And how LIKELY is it that you will ACTUALLY VOTE in November’s Presidential election? Would you say you are very likely, somewhat likely, not very likely or not at all likely?
Finally, I would like you to rate your chances of voting in November’s presidential election. We will use a scale from 1 to 10. "1" represents someone who definitely WILL NOT VOTE and "10" represents someone who DEFINITELY WILL VOTE. Where on this scale of 1 to 10 would you place yourself?
New Registrants – will qualify since the initial RDD sample dials all adult households and the model does not include measures of past voting.
Party ID – The TIPP surveys are weighted by party identification to match the combined total of their last three months of surveys, weighted only by demographics. I described this process in more detail here.
Time Magazine – Schulman, Ronca & Bucuvalas (SRBI)
Mechanics – Uses a series of questions to create an index and screen out those below a certain score. An email from Mark Schulman confirms that, "we basically use the Gallup index strategy." In the mid-October email, he elaborated:
At the moment, we are recalibrating just a bit to take into account extremely high interest levels in the election. We’re adding some additional items as we speak to attenuate the likely voter scale. We will now test 10 items in our index. Also, our turnout target is 62-64% of the adult telephone household that we reach in our surveys. That’s a slightly smaller universe than the Census universe, which is done in-person.
New Registrants – Schulman wrote: "Young registered voters can qualify for inclusion if they rank high on the other attributes."
Party ID – Time/SRBI does not weight by party ID.
The Washington Post
The Washington Post conducts surveys jointly with ABC News, but each organization applies its own weighting and likely voter models (the ABC News procedures are described separately at the beginning of this page).
Mechanics – The Post’s methodology page describes their current procedures:
The Post uses seven variables to define likely voters, including whether the respondent states they are registered to vote, their intention to vote, past voting history, interest in the presidential campaign, age, whether the respondent is voting for the first time in 2004 and whether the voter knows the location of his or her polling place. These variables produce a sample of likely voters that is largely composed of individuals who regularly vote in presidential elections but does include newly registered as well as other first time voters.
New Registrants – As specified above, newly registered voters are included. The methodology page adds, "In a typical sample, about one in 10 likely voters are self-described first-time voters and one in six are between the ages of 18-29." It does not elaborate on the specifics of how those first time voters are selected.
Party ID – For the October tracking survey, the Post weights by party identification as described on their methodology page:
The Post also adjusts the percentages of self-identified Democrats and Republicans by partially weighting to bring the percentages of those groups to within three percentage points of their proportion of the electorate, as measured by national exit polls of voters in the last three presidential elections.
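As I read it, that "partial weighting" means pulling the sample's party shares toward, but not all the way to, the exit-poll benchmark: adjust only as far as needed to land within three points of the target. A hedged sketch of that clamping rule (my reading of the description, not the Post's code; the numbers are placeholders):

```python
# Sketch of "partial" party weighting as I read the description above: each
# party's share is adjusted only to the edge of a +/- 3 point band around the
# exit-poll benchmark. Numbers are placeholders, not the Post's figures.

def partial_target(sample_share, benchmark, band=0.03):
    """Leave the sample share alone if it is within the band; otherwise
    move it to the nearest edge of the band."""
    lower, upper = benchmark - band, benchmark + band
    return min(max(sample_share, lower), upper)

# e.g., benchmark of 39% Democratic, but the sample came in at 44%
print(round(partial_target(0.44, 0.39), 2))  # 0.42 -- weighted down only to +3
```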
Methodology page – http://www.washingtonpost.com/wp-dyn/articles/A9363-2004Oct5.html
Zogby International
Zogby’s spokeswoman, Shawnta Watson Wolcott, did not respond to my request for more information on Zogby’s likely voter model. However, Thomas Lang, a reporter for the CJR Campaign Desk, at least got his call returned:
Zogby International spokesperson Shawnta Watson Walcott was only authorized to tell me that they ask six to eight questions, some of which focus on past voter behavior, some of which do not.
————————
A Special Request: Have you found this site useful? Please help me keep it alive by taking a few minutes to complete this brief (3-5 minute) survey about Mystery Pollster. Respondent identities will not be tracked and, as such, your participation will remain completely anonymous and confidential. If you have any problems with the survey, please email me. Click here if you’re asking, "why a reader survey now?" Thank you!!
————————
All this sausage making reminds me of this Monty Python scene:
Sir Bedevere: …and that, my liege, is how we know the Earth to be banana shaped.
King Arthur: This new learning amazes me, Sir Bedevere. Explain again how sheep’s bladders may be employed to prevent earthquakes.
Only on Wednesday will we be able to select the polling method that was correct.
I’ll send you an email as well, but what do you think of Gallup’s latest that proportions 90% of undecideds to Kerry?
Mark, great blog.
Can you finish it off for this year with a discussion of exit polls?
Mark, great work.
Reading the methodology for most of the polls seems to confirm what I had long suspected. Random-digit dialing is heavily skewed towards PEOPLE WHO PICK UP THEIR PHONES without knowing who is on the other end. In this age of cell phones and especially caller ID this is I suspect a very skewed demographic. My kids will not pick up a phone if they don’t recognize the caller, and relatives I know who are in debt and bothered by collectors also will not.
This probably accounts for much of the lower-than-expected percentages of young people and minorities, and the skewed dem/rep percentages, in most polls?
If you get a chance to respond, I’d appreciate your comments on this thought.
If voter turnout winds up skewing the poll results, it’s interesting to note the difference between the polls that force a specific turnout vs. those that don’t.
Those that force specific turnout have Bush up by 2.8 (49.67 to 46.83)
Those that don’t have Bush up by 1.3 (48.14 to 46.88)
And it would appear that at least 12 of those polls referenced above have given their final poll (not sure about LA Times, Time, Harris and ICR).
Whether this is the deciding likely voter model factor is clearly debatable, but for me, I’m going with my gut and saying the polls that don’t force a specific voter turnout % are more accurate.
Updated with Harris and a few other late polls… it went from a 2.8 and 1.3 Bush lead to a 3.0 and 1.0 Bush lead depending on poll methodology (change 1.0 to 1.6 depending on which Harris poll released today you believe).
On that note, I’d love to hear theories as to why Harris has a 6 point swing in two polls with identical questions, given on the same dates, and same likely voter models. The article said the presence of cell-only users in the online poll affected it somewhat, but near as I can tell, that was only worth 2/3 of a percentage point… can the other 5.3 points be strictly due to sampling error? Ignoring the cell phone issue,
Bush would be 46-52% on the phone poll and 45-47% on the online poll.
And Kerry at 47-51 on the online poll and 42-48 on the phone poll. Technically within the margin of error, but JUST BARELY.
Seems like something besides cell phone usage is also at play here.
Harris says they’ll release their final poll tonight or tomorrow before the real polls open. It will be interesting if we see yet another large difference from them.