As of yesterday afternoon, the National Election Pool website (exit-poll.net) has posted methodology statements for both the national and statewide exit polls and the complete verbatim questionnaires used in each state. While these statements will not answer every question, they do provide more complete methodological information than what has been readily available online.
Some highlights of what readers will find in the four PDF files now available:
- National Exit Poll Methodology Statement — The statement includes a description of how telephone samples of absentee or early voters were incorporated into the national sample, a discussion of how samples were weighted, and a table showing the appropriate “confidence intervals” (what we commonly refer to as margins of error) for various sample sizes.
- State Exit Poll Methodology Statement — Includes methodology information similar to the national statement, plus something I have not seen before: an accounting, for each state, of the number of sampled precincts, interviews conducted on Election Day, and telephone interviews of early/absentee voters (for the 13 states where such interviews were conducted).
- National Exit Poll Questionnaire — Readers will note that the PDF file includes four separate versions of the national questionnaire. NEP administered these so that a random one quarter of the roughly 14,000 respondents in the national sample filled out each version. Some questions (such as the vote and most basic demographics) appeared on all four versions. Some questions, such as President Bush’s job approval rating, were answered by half the respondents (roughly 7,000); others by only one quarter (roughly 3,500). A rough sketch of how the margin of error scales with these sample sizes follows this list.
- State Exit Poll Questionnaires — This document includes the verbatim questionnaires used on Election Day in 49 states plus the District of Columbia; the state name appears in the footer at the bottom right corner of each page. It also includes Spanish-language questionnaires in the states where they were available. The PDF file does not appear to include the questionnaires used for the telephone surveys of early/absentee voters, which is why Oregon (which had only early voting) is not included.
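To give a rough feel for how those “confidence intervals” scale with the sample sizes just mentioned, here is a minimal sketch, in Python, of the textbook simple-random-sample calculation at a 50/50 split. It deliberately ignores the design effect NEP builds into its published table for the clustered precinct sample, so the real intervals are wider; the function name and the rounded sample sizes are mine, for illustration only.

```python
import math

def srs_moe(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a simple random sample of size n,
    for a proportion near p (widest at p = 0.5)."""
    return z * math.sqrt(p * (1 - p) / n)

# Rounded subsample sizes implied by the four national questionnaire versions
for label, n in [("asked on all four versions", 14000),
                 ("asked on two versions (about half the sample)", 7000),
                 ("asked on one version (about a quarter)", 3500)]:
    print(f"{label:46s} n = {n:5d}  MoE = +/-{srs_moe(n):.1%}")
```

Under those assumptions the margin of error roughly doubles as the effective sample size drops from about 14,000 to about 3,500, which is why items asked on only one questionnaire version carry noticeably more uncertainty.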
According to Joe Lenski of Edison Research, these documents were all provided to NEP member networks on Election Day. At the request of the networks, and “in the interests of having this information available in one place,” they have made the documents available on exit-poll.net.
Happy reading…more tomorrow…
I obtained the table provided in this methods statement over a week ago from Edison (see this post where I first made it public).
I wrote Warren Mitofsky with the following question regarding the table.
“Take the column ‘951-2350’. As I read the table, I think it can be interpreted in one of three ways:
“1. The number in the cell corresponds to a sample size near the midpoint of the range. That is, a state with a sample size of roughly 1,650 is closest to the 4%; a state with a sample size of 951 is closer to 5%, while a state with a sample size of 2350 is closer to 3%.
“2. The number in the cell corresponds to the upper end of the range. That is, a state with a sample size of 2350 is very close to 4%, and a state with a sample size of 951 is closer to 5%, not 4%.
“3. The number in the cell corresponds to the lower end of the range. That is, a state with a sample size of 951 is very close to 4%, and a state with a sample size of 2350 is closer to 3%, not 4%.”
Mitofsky’s response this evening was:
“Mr. Brady,
“Your are asking too much from a table. The only reasonable interpretation is that all sample sizes between 951-2350 have the MoE in that column. That’s what the table says. Any other interpretation is off. The numbers already include the design effect we thought appropriate.
“warren mitofsky”
I take this to mean that one state with a sample size of 2349 and another state with a sample size of 951 could have the exact same standard error. I think this has profound significance for Dr. Freeman’s analysis, which was based on a flat 30% adjustment for the cluster sample across all the states. That 30% adjustment, of course, assumes that, as with a simple random sample, the standard error shrinks as the sample size grows.
Mr. Mitofsky’s statements about this table mean that no such assumption can be made.
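To make the contrast concrete, here is a minimal sketch, again in Python, that applies the textbook simple-random-sample formula at a 50/50 split and then the flat 30% cluster-sample inflation from Dr. Freeman’s analysis. The code and the 50/50 assumption are my own illustration, not NEP’s calculation.

```python
import math

def srs_moe(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

CLUSTER_INFLATION = 1.30  # the flat 30% adjustment used in Dr. Freeman's analysis

# Endpoints and midpoint of the "951-2350" column in the NEP table
for n in (951, 1650, 2350):
    print(f"n = {n:4d}: SRS margin of error x 1.30 = +/-{srs_moe(n) * CLUSTER_INFLATION:.1%}")
```

Under that flat-inflation assumption, the endpoints of the “951-2350” column would carry margins of error of roughly ±4.1% and ±2.6%, yet the table reports a single figure for the whole column; that gap is exactly why Mitofsky’s clarification matters here.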
I will be posting some “rough” analysis based on these new data and this clarification from Mitofsky in the next few days (I need to be careful). Open source folks, if you know how to do the math and do it before I do, go for it!
Developing…