Professor M


Following up on a lesson learned from the last post, that a story can sometimes make a point more powerfully than a lot of arcane data, I decided to share excerpts from a series of emails I received about the experiences of four NEP interviewers from the state of Minnesota. The information comes from a Minnesota college professor (let's call him Professor M) who helped recruit these college student interviewers for NEP. I have shared much of the substance of this story in previous posts, but in light of the findings of the Edison-Mitofsky report, I thought it would be useful to share his verbatim comments.

In early November, intrigued by the controversy surrounding the exit polls, Professor M decided to interview his four students about their experiences as interviewers. As he points out, a sample size of 4 is truly "anecdotal"; it is by no means representative of the experiences of the 1,400-odd interviewers who worked for NEP on Election Day. However, it is remarkable how many of the problems he notes help explain patterns in the data on "within precinct error" in the Edison-Mitofsky report.

The following are excerpts from our email dialogue:

The information that I got from my students is quite intriguing, but of course it cannot in any way be considered a representative sample. Also, the students I spoke with kept no independent notes of response rates or other details while serving as interviewers; their impressions of who responded and who didn’t were entirely from memory.

The geographic distribution of the four interviewers was as follows: one in outer exurbia, one in an inner-ring suburb, one in an "exclusive" upscale suburb, and one in an ethnically and economically diverse Minneapolis neighborhood.

The [NEP] badge did display the logos of the networks prominently.

However, this could not be easily seen from a distance, and at least one of my students was hampered by the fact that a contingent of folks from MoveOn.org was stationed right next to her at the 100-ft line. From a distance this made her appear to be connected with them, and because she was forced to stand 100 feet from the polls, people could easily turn aside to avoid both her and MoveOn. Also, it gets dark early up here, so the badge was not visible from a distance after 4pm or so.

The two students in suburban areas commented that they had the most trouble getting participation in the early morning, probably due to lines and people needing to get to work.

I believe all of the students reported receiving requests from voters who wanted to participate in the survey despite the fact that they were not the nth person to emerge. Nearly all of the students reported some inclusions that were somewhat less than random. Most commonly this occurred when a couple emerged together and the person the poll worker approached refused but the partner offered to participate. None of the students saw any "difference" in which one of the two participated as long as one of them did.

As I understood what the students told me (and they did not see themselves as doing anything wrong, by the way), they would not have coded a refusal at all in that situation.

One student reported at least one instance of a person simply taking a survey from her supplies (which were out in the open at her table), filling it out, and dropping it into the survey box. By the time she realized what had happened (she was busy trying to buttonhole legitimate respondents), there was no way to determine for certain which of the surveys in the box had been incorrectly included.

A few additional observations from [a fourth student]: she noted that she had more refusals among white males, although she was not sure if that was related to her own appearance (she is African-American). Also, she observed (and this makes sense, when you think about it) that her response rate improved over the course of the day as she became better at honing her "sales pitch." Even so, despite having perhaps the most advantageous placement of any of my four students (she was indoors at the only entrance/exit and had full cooperation from the staff on-site), she recalls a fairly low response rate, perhaps 40-50%.

[Emphasis added]

To clarify one point: Each interviewer was given an "interviewing rate," which ranged from 1 to 10 nationally. Here is how the Edison-Mitofsky training materials (passed along by Professor M) describe what was supposed to happen:

We set an interviewing rate based on how many voters we expect at your polling place. If your interviewing rate is 3, you will interview every 3rd voter that passes you. If it is 5, you will interview every 5th voter that passes you, etc. We set an interviewing rate to make sure you end up with the correct number of completed interviews over the course of the day, and to ensure that every voter has an equal chance of being interviewed.

If the targeted voter declines to participate or if you miss the voter and do not get a chance to ask him or her to participate, you should mark them as a "Refusal" or "Miss" on your Refusals and Misses Tally Sheet and start counting voters again (for a more thorough explanation of refusals and misses, refer to page 9). For example, if your interviewing rate is 3 and the 3rd person refuses to participate, you do not interview the 4th person. Instead, start counting again to three with the next person. [Emphasis added]
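
To make the mechanics of that rule concrete, here is a minimal sketch in Python of one interviewer's day as the training materials describe it. Everything in it is assumed for illustration: the turnout, the interviewing rate, and the fixed 50% cooperation probability are hypothetical, and the function name is mine, not Edison-Mitofsky's.

```python
import random

def run_shift(voters, rate, cooperates):
    """Simulate the NEP rule: approach every `rate`-th exiting voter.
    On a refusal (or miss), tally it and restart the count with the
    next voter -- you do not substitute the voter right behind them."""
    completes = 0
    refusals_and_misses = 0
    count = 0
    for voter in voters:
        count += 1
        if count < rate:
            continue          # not the nth voter; let them pass
        count = 0             # restart counting after every approach
        if cooperates(voter):
            completes += 1
        else:
            refusals_and_misses += 1   # goes on the tally sheet
    return completes, refusals_and_misses

random.seed(1)
voters = range(1200)                   # hypothetical precinct turnout
completes, refusals = run_shift(voters, rate=3,
                                cooperates=lambda v: random.random() < 0.5)
print(f"completes={completes}, refusals/misses={refusals}, "
      f"completion rate={completes / (completes + refusals):.0%}")
```

Run correctly, the tally sheet and the questionnaires together let Edison-Mitofsky compute an honest completion rate for each precinct, and the every-nth rule is what keeps the sample random.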

The point: If interviewers allowed "inclusions that were somewhat less than random" but did not tally refusals appropriately, then the "completion rates" now getting so much scrutiny in the Edison-Mitofsky report are not only inaccurate, but the inaccuracies will probably be greatest, on average, in the same precincts showing the biggest "within precinct error" (WPE).
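
A bit of back-of-the-envelope arithmetic shows why. The numbers below are invented purely for illustration; the mechanism, not the magnitudes, is the point.

```python
# A hypothetical precinct: the interviewer approaches 100 sampled
# voters, and 55 cooperate. Now suppose that for 10 of the 45
# refusals, a willing partner filled out the survey instead and no
# refusal was ever tallied -- the situation the students described.
true_completes, true_refusals = 55, 45
untallied_substitutions = 10          # assumed for illustration

recorded_completes = true_completes + untallied_substitutions
recorded_refusals = true_refusals - untallied_substitutions

true_rate = true_completes / (true_completes + true_refusals)
recorded_rate = recorded_completes / (recorded_completes + recorded_refusals)
print(f"true completion rate:     {true_rate:.0%}")      # 55%
print(f"recorded completion rate: {recorded_rate:.0%}")  # 65%
```

And since the substituting partners were self-selected rather than randomly sampled, the precincts where this happened most often would plausibly be the same precincts showing the biggest WPE.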

The bigger point: Consider that all of the above comes from just four interviewers. Imagine how much we might learn if we could talk to hundreds. Apparently, that is exactly what Edison-Mitofsky says they will soon do (or are already doing) with the interviewers in Ohio and Pennsylvania (p. 13):

We are in the process of an in-depth evaluation of the exit poll process in Ohio and Pennsylvania…We will follow up with in-depth interviews with the exit poll interviewers in the precincts in which we saw the largest errors in an attempt to determine if there were any factors that we have missed thus far in our investigation of Within Precinct Error.

I think I can speak for others in the survey research profession when I say we hope they ultimately share more of what they learn. It will help us all do better work.


Mark Blumenthal

Mark Blumenthal is a political pollster with deep and varied experience across survey research, campaigns, and media. The original "Mystery Pollster" and co-creator of Pollster.com, he explains complex concepts, and how data informs politics and decision-making, to a multitude of audiences. He is a researcher and consultant who crafts effective questions and identifies innovative solutions to deliver results, and an award-winning political journalist who brings insights and crafts compelling narratives from chaotic data.