So, as promised in Part I, let’s continue to consider the post on Redstate.org last week by Jay Cost (aka the Horserace Blogger) that sharply criticized a recent ABC news poll on the Plame/Wilson/Rove imbroglio. Cost had some additional criticisms I did not address, but will below. Two are specific to this poll, but one is of much broader general interest. As it happens, reader FR emailed to raise a similar query: "Why should we trust public opinion polls on issues where respondents probably know very little about the topic?" That is a very good question.
First, a review. Cost’s post zeroed in on three questions from the ABC poll. I’ve also copied those questions below, in the order they were asked, along with a few others that ABC asked about Rove/Plame, et al. (for full results, see the ABC pdf summary):
1. As you may know, a federal prosecutor is investigating whether someone in the White House may have broken the law by identifying an undercover CIA agent to some news reporters. One reporter has gone to jail rather than reveal her source. How closely are you following this issue – very closely, somewhat closely, not too closely or not closely at all?
2. Do you think this is a very serious matter, somewhat serious, not too serious or not serious at all?
3. Do you think the White House is or is not fully cooperating with this investigation?
4. It’s been reported that one of George W. Bush’s closest advisers, Karl Rove, spoke briefly with a reporter about this CIA agent. If investigators find that Rove leaked classified information, do you think he should or should not lose his job in the Bush administration?
5. Do you think the reporter who has gone to jail is doing the right thing or the wrong thing in not identifying her confidential source in this case?
Cost had criticisms about the first question that we discussed in the last post. He also raised other objections. I’d like to comment on three:
A) The ABC release did not include "any kind of cross-tabulation to see if those who are not paying attention are the ones who think it’s serious." He goes on to speculate that the 47% who are not paying attention (on Q1) might be the bulk of the 47% who think the White House is not cooperating (on Q3).
It is certainly true that ABC did not provide complete cross-tabulations of these questions by the attentiveness question, but they did characterize what such cross-tabs would show. As a commenter on RedState.org points out, the ABC release did include the following text:
Those paying close attention (who include about as many Republicans as Democrats) are more likely than others to call it very serious, to say the White House is not cooperating, to say Rove should be fired if he leaked, and to say Miller is doing the right thing.
So, Cost guessed wrong. On this count, ABC is not guilty of trying to "make it seem like" the public is less happy with the White House than it is.
Now, MP certainly agrees (and apparently, so do the analysts at ABC) that such a cross-tab analysis is appropriate. MP would always like to see more data rather than less, although in this case, ABC certainly provided enough information to allay Cost’s suspicions.
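For readers unfamiliar with the term, a cross-tab of this kind is just a count of answers within each attentiveness group. A minimal sketch in Python, using invented respondent records (none of these counts come from the ABC poll):

```python
from collections import Counter

# Hypothetical respondent data: (attentiveness, view of seriousness).
# These records are invented for illustration only.
respondents = [
    ("close", "very serious"), ("close", "very serious"),
    ("close", "not serious"),
    ("not close", "very serious"),
    ("not close", "not serious"), ("not close", "not serious"),
]

# The cross-tab: counts of each answer within each attentiveness group.
crosstab = Counter(respondents)

def pct_very_serious(group):
    """Percent calling the matter 'very serious' within one group."""
    total = sum(n for (g, _), n in crosstab.items() if g == group)
    return 100.0 * crosstab[(group, "very serious")] / total

print(pct_very_serious("close"))      # attentive respondents
print(pct_very_serious("not close"))  # inattentive respondents
```

This is the comparison ABC summarized in prose: if the "very serious" percentage is higher among the attentive group, Cost's speculation fails.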
On the other hand, MP does not agree with Cost when he asks rhetorically, "Why should we care what the uninformed think on the matter?" Two reasons: First, in this instance at least, ABC asked about attentiveness, not information level (although one is a reasonable surrogate for the other). Second, people will sometimes possess strong opinions about issues they are not following closely.
Consider, for example…Jay Cost. He tells us in his opening line that, "I really have no interest in this Plame/Wilson/Rove ‘scandal.’" He may not have any interest, but he certainly seems to have an opinion (the quotation marks around the word scandal seem like a pretty good hint). How would he feel if a pollster ignored his opinion (and those with similar views) just because his interest level is low? I’m pretty sure I know the answer.
B) Cost argues that the information about the Rove/Plame affair provided in the first question of the series provides a "frame" that influences respondent answers to the questions that follow.
Here MP must concede that Cost may have a point. Survey methodologists have shown that "order effects" can occur. Put another way, questions asked early in a survey can affect the answers provided to questions that follow. We noted a few weeks back that,
Small, seemingly trivial differences in wording can affect the results. The only way to know for certain is to conduct split sample experiments that test different versions of a question on identical random samples that hold all other conditions constant.
Unfortunately, the academic research on this issue is relatively limited. We know that order effects can occur, but often do not. The only way to know for sure is with extensive pre-testing and split sample experiments which public and campaign pollsters rarely have the time or budget to conduct. So we try to follow some general rules: We try to write questionnaires to go from the general to the specific, from open-ended questions to those that provide answer categories, from those that provide little or no information to those that provide a great deal.
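To make concrete what a split sample experiment actually tests, here is a rough sketch in Python. The counts are invented (a hypothetical 55%/45% "yes" split between two wordings, 500 respondents per half-sample), not real poll data:

```python
# Hypothetical split-sample results: half the sample heard wording A,
# half heard wording B. All counts below are invented for illustration.
a_yes, a_no = 275, 225   # wording A: 55% said "yes"
b_yes, b_no = 225, 275   # wording B: 45% said "yes"

def chi_square_2x2(a_yes, a_no, b_yes, b_no):
    """Chi-square statistic for a 2x2 table (wording vs. answer)."""
    total = a_yes + a_no + b_yes + b_no
    row_a, row_b = a_yes + a_no, b_yes + b_no
    col_yes, col_no = a_yes + b_yes, a_no + b_no
    cells = [(a_yes, row_a, col_yes), (a_no, row_a, col_no),
             (b_yes, row_b, col_yes), (b_no, row_b, col_no)]
    # Sum of (observed - expected)^2 / expected over the four cells.
    return sum((obs - r * c / total) ** 2 / (r * c / total)
               for obs, r, c in cells)

stat = chi_square_2x2(a_yes, a_no, b_yes, b_no)
# With 1 degree of freedom, a statistic above 3.841 corresponds to
# p < 0.05: the two wordings produced reliably different answers.
print(round(stat, 2), stat > 3.841)  # → 10.0 True
```

The point of the design is the random assignment: because everything else is held constant, a significant difference can only come from the wording itself.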
MP will grant that he is a bit uncomfortable with the amount of information provided in the first question. We also tend to agree with Cost that it is odd to ask respondents if they consider this a "serious matter," after informing them that it involves breaking the law "by identifying an undercover CIA agent," and that a reporter has already gone to jail. How is that not "serious?" Nevertheless, MP doubts that the first two questions provide anywhere near the sort of bias or "framing" effect that Cost hypothesizes.
As for the other questions, we can speculate about them endlessly. Different pollsters will take different approaches. Consider the recent survey by Gallup on this issue, released earlier this week (results also posted here). They found results consistent with the ABC poll on how closely Americans are following the issue, but on a follow-up, found that 40% think Bush should fire Rove. On the ABC poll, 75% said Bush should fire Rove "if investigators find that Rove leaked classified information." Very different results, but also very different questions.
C) The third and most important question that Cost raises is a more general one: Can we trust any polls that ask about subjects about which respondents are poorly informed?
Cost argues:
Political scientists have found that when people are not paying much attention to an issue, they are quite susceptible to "framing effects" that can be created through question wording and question ordering (for more detail, see John Zaller’s The Nature and Origins of Mass Opinion, 1992).
This is a good point, although MP does not agree that the ABC pollsters "designed" their poll "to give the impression that the public thinks something that it does not." However, Cost’s more general point is worth consideration: Just what should we make of polls about issues about which the public is poorly informed?
Arguing that we should never poll on such issues is a non-starter. The market will not tolerate it. We follow (and argue about) poll questions on issues like these because we care about them. Telling political junkies to ignore polls on such topics is like asking us to stop blinking. Consider the blogger who warned on Election Day, "don’t pay attention to those leaked exit polls." That sure worked.
More importantly, political partisans are usually interested in how to persuade, how to move public opinion, not just where it stands now. So we have good reason to want to gauge how the public will react to new information. We just need to be careful in reporting the results to distinguish between questions that measure how the public feels right now, and those that provide a "projective" sense of how they might feel in the future.
More specifically, MP has two pieces of advice for what to make of polls about issues on which the public appear to be poorly informed:
- Be careful! Pollsters can push respondents easily on such issues, and results can be very sensitive to question wording. Any one poll may give a misleading impression.
- As we have suggested before, look at many different polls. No one pollster will have a monopoly on wisdom. Yet use a resource like the Polling Report or the Hotline Poll Track Archives ($$), and you will begin to "asymptotically approach the truth" (as our friend Kaus often puts it).
Another excellent set of comments, Mystery Pollster! I believe that, in large measure, you and I have reached common ground. I am willing to generally cede your (A). With your concessions in (B), that leaves us only a short distance from each other. Mostly, our difference is in (C), which is a very broad issue. I’ll touch on that below, but first a few comments on your (A) and (B).
On your first point, about the cross-tabulation, you are correct (and I have already been “chided” by Gerry Dales for my cross-tab comments). However, I have to disagree with your rejoinder about caring about the inattentive (an issue which, at the end of the day, really is part-and-parcel of our disagreement over (C)).
You specifically argue that, “people will sometimes possess strong opinions about issues they are not following closely.” And you give me as an example, quoting my absence of interest. This is a rhetorically appetizing point to make, but you are just playing around with multiple definitions of the word “interest.” When one says that one is “interested,” it is an issue of attentiveness, as you say (personally, I think that using “interested” instead of “attentive” is a poor word choice on your part — but nevertheless, I would agree that such use is common). When one says that one “has no interest,” it is an issue of personal value. Thus, I can be paying attention and find no personal value within the story (which is in fact the case). Am I acting irrationally? Not really: acquiring information on this story has been costless for me. I follow political news closely anyway, and so I have picked up the relevant bits and pieces of information. Thus, I am attentive (albeit inadvertently) and have no interest in it.
The reason that I think that people who possess strong opinions and yet are uninformed are politically irrelevant is that, as I mentioned in my response to you, their responses are likely to be a reflection of elite discourse.
And, just to make it clear, the quotation marks around “scandal” were not a normative judgment on my part, but rather a judgment on the consequences of this affair. I think nothing will come of it, politically at least. They were just meant to indicate how this story makes me yawn. I do not say this to be critical of you, i.e. that you *should* have realized this. I think your inference about my use of the quotation marks was valid, it just happened to be incorrect.
On your second point, the one about order effects, I am not up-to-date on the academic research on their consequences. Generally speaking, though, I do know that the issue of media framing has been relatively easy to capture within experiments, but relatively difficult to capture in the field. So I am unsurprised that framing effects related to order are difficult to nail down. This does not, however, mean that the question writer cannot or should not avoid the possibility of framing. One can still side-step the effects, even if those effects are only potential. Further, I think one should make every effort to side-step them, and I think ABC failed to do that with this poll.
Your third point is where our disagreement is most robust. First on this issue, a minor clarification. You characterize me as asking, “Can we trust any polls that ask about subjects about which respondents are poorly informed?” This is not exactly my question. I certainly do not think that polls, provided that they do not suffer from actual framing effects, on matters like these suffer either from internal or external invalidity. They are measures of what the public is “thinking.” The real issue, for me, is that last word, which I have placed in quotation marks. These polls find something, it is true, but what they find seems to be quite different than what the average politico thinks they find.
I also assert that the issue transcends framing effects inherent to the poll. It touches the ability of the public to put forth answers that are in any sense of the word independent. Responses to questions along these lines do not, in my opinion (or, should I say, in the opinion of the consensus which I perceive within the academic literature), reflect anything except a parroting of the conversation that elites are having with one another. As V.O. Key put it, “the voice of the people is but an echo chamber.”
As for the issue about whether this is a worthwhile endeavor, I honestly must say that I am surprised by your answer. Abandoning polls like this is a “non-starter” because the market will not tolerate it? The implication seems to be that the market is a kind of legitimating force in this instance. Wow! And this from a Democrat! I agree, as a practical matter, that these polls will not stop being produced. However, I do not think that makes them a legitimate use of resources.
You are also right that political junkies will never stop paying attention to these sorts of polls. I have long thought, though, that the phrase “political junkie” is quite telling — that in some respects the political junkie shares some traits with the heroin junkie. The political junkie, as you say, cannot stop looking at and caring about polls like these, despite the fact that they are — in important respects (i.e. the Zaller 1992 respect) — meaningless.
My response: what does that say about the political junkie? Not good things. Not good things. One’s “addiction” to politics leads one to expend time and emotional energy thinking about, worrying about and celebrating something that, objectively speaking, provides no legitimate cause for the thought, worry or celebration. Even when informed of this absence, the political junkie is incapable of changing his ways. He/she continues to think, worry or celebrate over something that is actually ephemeral. This is why I am glad to be a former political junkie.
Generally, I think polls like this feed into and provide psychological cover (numbers are amazing legitimators) for people to participate in what I call the “soap opera” frame of American politics. The announcer reads ABC’s numbers, the director cues the organ music, and the show ends with the key cliffhanger question: “Will Rove have to resign? Tune in next week!” Personally, I much prefer the Sopranos for my soap opera fix. It is much more dramatic (Madison’s system is *not* well designed for drama!) and, at the end of the season, you don’t end up with feelings of hatred for the blue- or red-staters.
I would further argue that while political activists ostensibly have greater reason to care about such polls, they in fact have little real reason. Your metaphor about moving public opinion on a matter like this is not a good one. The public is more like mush on an issue like this. You push it, and it does not move; it just kind of collapses where your hand touched it. This is why I would argue that on questions like this, i.e. where information levels are so low, the results are really quite uninteresting. The public is reflective, not responsive.
Jay,
I don’t agree with how you use Zaller’s model to undercut this survey.
Two points. First, uninformed respondents are more likely to be swayed by elite opinion; that much is true. *But* whether they are swayed or not also depends on their predispositions. So saying that a poll of uninformed voters is nothing but a reflection of elite discourse is incorrect, or at least, I disagree that this is the consensus of the academic literature.
Second, while you *may* be right to say that opinions on this matter are “mush” you have no evidence to back that up. It is just as possible that there is elite debate occurring over the issue, and that the ABC poll has captured opinion in motion, as the public learns, integrates, and responds.
To say that these opinions are politically uninteresting seems to me false. If you were a Republican political operative, would you feel safer saying “these opinions are mush” or “you know, this issue may be gaining some traction and we’d better get worried”?
Given the other results (e.g. Pew), I’d have to say the weight of the evidence is on the latter.
Are these sorts of polls misused to generate non-news stories? Of course they are. But that does not mean the poll figures themselves are totally meaningless.