One of the great rewards of writing this blog is the incredible diversity of its readers — everyone from ordinary political junkies to some of the most respected authorities in survey research. I heard indirectly from one of the latter over the weekend regarding my post on the NSA phone records issue on the CBS Public Eye blog, and I would like to share his comments. They underscore why we should all be cautious about placing too much faith in any one survey question about an issue of public policy.
There are few academics more respected on the subject of writing survey questions than Professor Howard Schuman of the University of Michigan. In 1981, along with co-author Stanley Presser (now a professor at the University of Maryland), he wrote Questions and Answers in Survey Research, a book that remains required reading for graduate students of survey methodology. After my article appeared on CBS Public Eye last Friday, someone posted a link to it on the members-only listserv of the American Association for Public Opinion Research (AAPOR). Schuman read it and posted some thoughts to the listserv, which I reproduce below.
But first, I thought it a little ironic that I had nearly quoted Schuman at the end of the Public Eye post, cutting the key quotation at the last minute only because my piece was already running long. It came from the seminal article, "Problems in the Use of Survey Questions to Measure Public Opinion," that Schuman co-authored with Jacqueline Scott for the journal Science in 1987 (vol. 236, pp. 957-959). The article described several experiments that compared results when similar questions were asked using an open-ended format (where respondents answer in their own words) or a closed-ended format (where respondents choose from a list of alternatives).
The experiments yielded some very big differences, but also revealed shortcomings with both formats. They demonstrated that both open- and closed-ended questions can produce highly misleading results. Schuman and Scott concluded with this recommendation (p. 959):
There is one practical solution to the problems pointed to in this report. The solution requires giving up the hope that a question, or even a set of questions, can be used to assess preferences in an absolute sense or even the absolute ranking of preferences and relies instead on describing changes over time and differences across social categories. The same applies to all survey questions, including those that seem on their face to provide a picture of public opinion.
Schuman’s reaction to my Public Eye piece (quoted with his permission) shows that his philosophy has not changed over the years:
Mark Blumenthal’s relearning of the effects of different formulations of questions is useful, but might go even further to recognize that the timing of a poll (and a few other features) can also produce quite different results. Given polls on any issue, but especially a new one, we should all keep in mind the old verse about the Elephant, a copy of which can be found at: http://en.wikisource.org/wiki/The_Blindmen_and_the_Elephant

Just substitute "attitude" for "theologic" in the last stanza.
The Blindmen and the Elephant is worth the click. More importantly, Schuman’s words of advice — new and old — are well worth remembering.
There is something disturbing afoot here that cannot be good for the republic. The polling community, which we have to recognize is a business and not a research community, seems to have decided to engage in a battle with other disciplines whose expertise is in understanding the science behind the tools the pollsters merely use as technicians. One can postulate any number of reasons, but professional rivalry seems to be at the root of the matter. It seems important to point this out for readers and media who may rely on putative experts on either side, whose expertise they are not in a position to evaluate, much less their conclusions.
The almost propagandistic nature of Lindeman’s approach of labelling those who have mathematical and analytic expertise, and their proponents, as “Exit Poll Fundamentalists” should be a red flag that there is a subtext here. His overall approach is implicitly to (mis)characterize “EPFs” as asserting that exit polls prove there was fraud, while a reasonable reading of the best work by those skilled in the science is that the 2004 election results were extremely unusual, so much so that the proper response would be a serious study, by our most respected and talented experts, of why the results were so unusual.
What is interesting in Blumenthal’s and Lindeman’s style of argumentation is the absence of any hint that, just as they and their colleagues communicate ideas with a subtlety that may escape the notice of less skilled observers, they themselves are relatively insensitive to the communication subtleties of those skilled in the mathematical science on which their polling tools are based. One example of this is the rather silly statement on page 30:
“Baiman and Dopp (2006, 16) comment, ‘Mathematics (beyond the level of this report) show that a downward slope… of WPD [WPE] plotted by exit poll share is consistent with vote fraud and miscounts.’ We need not overthink the mathematics.” Maybe, maybe not. But Lindeman’s didactic claim that an alternative explanation “is equally consistent with non-response bias” rests on deeper mathematical assumptions than he puts into evidence.
Blumenthal and Lindeman do not acknowledge that to mathematical scientists, the very use of terms like “margin of error” and “statistical significance” imply very strong assumptions about the underlying probability spaces on which the statistical models are based. Mathematical scientists will almost subconsciously note these as unproven assumptions, and take those assumptions into account when interpreting the results derived from those assumptions.
Perhaps the most disappointing evidence that professional rivalry is afoot here is Lindeman’s section on page 32 entitled “Exit poll fundamentalism as a puzzle”. He directly and rather embarrassingly accuses folks with proven mathematical expertise of “arguing out of field” when it comes to the actual polling statistics involved, and he obviously does not entertain the idea that it could be political scientists and survey researchers who are arguing out of field when it comes to understanding what the mathematics actually tell us about the statistics. He also goes on to draw a completely unsupportable analogy to creation science, a pre-emptive dismissal of those who would factually challenge him.
Overall, Lindeman reveals his real frustration that, to a certain degree, popular culture has looked to the folks who can provide scientific interpretations of what went on, rather than folks like him and Blumenthal who provide socio-political interpretations. In part, this may well be because the scientific interpretations more fundamentally call into question the socio-political interpretations than vice versa. Unfortunately this is just another instance of the same battle for respectability that the social sciences have been waging at the NSF and elsewhere for some time now. That battle will not be won, however, by trying to demean those with the scientific and mathematical skills to examine events that strain the very limits of the mathematical tools social scientists use in their work.
Because of an apparent blog “malfunction” this comment should have been posted to “Is RFK, Jr. Right About Exit Polls? – Part I”. Please delete or move as you see fit. My apologies.
I expected that my paper would attract derision from people who failed to engage its content, so I figured I might as well say what I thought. The paper defines exit poll fundamentalism, and defines adherents to its tenets as exit poll fundamentalists. Quite simply, if the shoe does not fit, do not wear it.
For instance, the oft-repeated assertion that exit polls are accurate, in the face of many strands of evidence to the contrary, does seem to bear a family resemblance to the assertion that the Bible is inerrant. This assertion can hardly be attributed to or blamed on “those who have mathematical and analytic expertise,” any more than inerrantism could be attributed to ‘those who have expertise in ancient languages’ — although, no doubt, some inerrantists do have such expertise.
I can detect no signs of a disciplinary Exit Poll War pitting “the polling community” on one side against… well, I’m not quite sure whom on the other side. On the record, Kathy Dopp has argued that the Election Science Institute analysis is “meaningless bunk.” As you know, a coauthor of that analysis, Fritz Scheuren, is past president of the American Statistical Association. If there really is a disciplinary war here, it is beyond me how to characterize it. Certainly it is not a matter of Blumenthal and Lindeman as members of the “polling… business community”(!?) dissing mathematicians, or a divide between “socio-political” and “scientific” interpretations.
I am surprised that you would single out the correlation between WPD and exit poll share as an issue on which to challenge unspoken mathematical assumptions. Since WPD is arithmetically dependent on exit poll share, surely no one can be much surprised that red shift tends to be larger in precincts where Kerry’s exit poll share is larger (although indeed we can contrive data for which it wouldn’t be). This tendency exists even in simulations in which _random sampling error is the only source of polling error_.
You state, “a reasonable reading of the best work by those skilled in the science is that the 2004 elections results were extremely unusual.” Since you offer precisely zero examples of this best work (nor specify what “science” you have in mind), it is hard to say whether I agree. When Dopp and Baiman state that the Ohio exit polls provide “Virtually Irrefutable Evidence of Vote Miscount,” can I reconcile this with your reading, or shall I infer that you do not consider theirs among the best work? I think we should stop pretending that these folks are ‘just asking questions,’ and acknowledge that they are giving answers.
My paper examines and criticizes many of those answers. If you are capable of engaging the arguments, please do. For that matter, if you can document that scientists in some discipline strongly support the exit poll fraud arguments, please present documentation. Let’s get real now.
“The polling community, which we have to recognize is a business and not a research community, seems to have decided to engage in a battle with other disciplines whose expertise is in understanding the science of tools the pollsters just use as technicians.”
There seem to be a few major misunderstandings here. The American professional association for “the polling community” is AAPOR, the R standing for “research”. But like a great many professional communities, it serves those with both scientific and business interests.
More importantly, the idea that those in the “polling community” merely use as “tools” mathematical techniques that are only understood by mathematicians is absurd. Indeed I know several mathematicians who know very little about the kinds of mathematical tools used in survey research, and rather more social and cognitive scientists who understand a great deal about the mathematics of the statistical tools they use. Indeed many of our most powerful statistical tools were developed by social scientists, and, notably, by agricultural scientists.
Moreover, the idea that Blumenthal and Lindeman “do not acknowledge that to mathematical scientists, the very use of terms like ‘margin of error’ and ‘statistical significance’ imply very strong assumptions about the underlying probability spaces on which the statistical models are based” is a quite unsubstantiated assertion. I cannot speak for Blumenthal, but I happen to know that Lindeman has an acute awareness that the terms you mention “imply very strong assumptions about the underlying probability spaces on which the statistical models are based” because he and I (and Rick Brady) are co-authors on a paper in which we investigated precisely the underlying probability spaces on which measures of exit poll discrepancy are based.
In contrast, Dopp and Baiman appear unaware of some of the most basic assumptions that underlie inferential statistics, including the nature of the “probability spaces” of their own models. In the example they cite from Lindeman, Lindeman is drawing attention to an analysis by Baiman and Dopp in which they draw an inference from a regression in which the same error term occurs on both sides of the regression equation.
It would seem that expertise in mathematics does not immunise mathematicians against the kind of basic errors that even a mere technical user of statistics would be trained to avoid. Certainly, to understand the mathematics that underlies statistical techniques, you need a good grasp of mathematics. However, while such a grasp may be a necessary requirement, it is certainly not a sufficient one, and as a necessary requirement it is patently not out of reach of those whose primary discipline is not pure mathematics. And I would certainly expect the necessary mathematical understanding to be possessed by the past president of the American Statistical Association whose work Dopp refers to as “mathematical bunk”.
Elizabeth Liddle