Ideology as a “Diagnostic”? – Part I

Categories: Divergent Polls, Legacy blog posts, Measurement Issues, Sampling Issues

A few weeks ago, blogger Gerry Daly (Dalythoughts) took a close look at self-reported ideology as reported on several national polls.  Daly was mostly interested in whether a recent Washington Post/ABC survey sampled too few self-identified conservatives.  In the process, he theorized that “ideology is an attribute rather than an attitude.”  I questioned that theory in the comments section here, and Gerry followed up with a reaction on his own blog.  This discussion, along with a few reader emails, prompted me to take a closer look at both self-reported ideology and the whole notion of using party identification and self-reported ideology as “diagnostic” measures to assess political surveys.

The more I thought about it, the more I realized that this topic is bigger than a single blog post.  It leads to many of the questions that come up repeatedly about polls and, as such, suggests an extended and important conversation about using attitudes like party identification and ideology as diagnostics.  So rather than try to consider all the issues that Gerry raised in one shot, I’d like to take this topic slowly.  Today I’ll raise some questions that I’ll try to pursue over the next few days or weeks, or wherever the thread takes us.

Let’s start with self-reported ideology.  One thing Gerry did was to look at average results for self-reported ideology from a few polling organizations.  I took the values that he started with and obtained a few more.  Here’s what we have – the following table shows either the average or rolled-together responses for self-reported ideology.  Each question asked respondents to identify themselves in some form as “conservative, moderate or liberal” (more on the differences in question wording below):

A quick note on the sources:  Harris provided annual averages in an online report.  Daly computed results for Pew using cell counts for ideology in a cross-tab in this report; the Pew Research Center kindly provided the appropriately weighted results for 2004 on request.  I calculated average results for 2004 for the New York Times and Gallup.  The Times reports results for all questions on all surveys it conducted in partnership with CBS in 2004 (via a PDF available via the link in the upper right corner of this page – note that surveys conducted by CBS alone are not included).  I obtained results for the 2004 Gallup surveys from their “Gallup Brain” archive.  Please consider this table a rough draft – I’d like to verify the values with Gallup and the New York Times and request similar results from other national pollsters for 2004.

While the results in the table are broadly consistent (all show far more conservatives than liberals and 38-41% in the moderate category), there are small differences.  The Gallup surveys show slightly more self-identified conservatives (40%) than Pew (37%) and Harris (36%), and the New York Times shows slightly fewer (33%).  For today, let’s consider the possible explanations.

Academics organize the study of survey methodology into classes of “errors” — ways that a survey statistic might vary from the underlying “true” value present in the full population of interest.  The current full-blown typology is known as the “total survey error” framework.  I will not try to explain or define all of it here (though I can suggest a terrific graduate course that covers it all).   Rather, for the purposes of this discussion, let me oversimplify that framework and lump everything into three primary reasons the results for ideology might differ from pollster to pollster: 

1) Random Sampling Error – All survey statistics have some built-in random variation because they are based on a random sample rather than on a count of the full population.  We typically call this range of variation the “margin of error.”

In this example, sampling error alone does not account for the small differences across surveys.  Since each line in the table represents at least 10,000 interviews, the margin of error is quite small.  Assuming we apply a 95% confidence level, the margin of error for the Harris and New York Times results will be roughly ±1%, and for Gallup and Pew roughly ±0.5%.  Thus, random statistical variation alone cannot explain differences of two percentage points or more in the table above.
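(For readers who want to check the arithmetic, here is a minimal Python sketch of the standard margin-of-error formula for a proportion, z·sqrt(p(1-p)/n).  The sample sizes below are round-number assumptions chosen for illustration, not the pollsters’ exact interview totals.)

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Margin of error for a proportion p estimated from n interviews
    (z = 1.96 corresponds to a 95% confidence level)."""
    return z * math.sqrt(p * (1 - p) / n)

# Worst case (p = 0.5) at round-number sample sizes -- these n values
# are assumptions for illustration, not the pollsters' actual totals.
for n in (10_000, 40_000):
    print(f"n={n:>6,}: +/-{margin_of_error(0.5, n):.1%}")
# n=10,000: +/-1.0%
# n=40,000: +/-0.5%
```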

2) Errors of Representation – If a survey sample is not truly random, the statistics that result may have some error.  In telephone surveys, a fairly large percentage of those contacted do not agree to participate.  Others are not home or are unavailable when called.  If these “non-respondents” are different from those who respond, the survey might show some statistical bias.  So, to use a hypothetical example, if liberals are more likely to be home or more willing to be interviewed, the survey will over-represent them.  This is “non-response bias.”

Similarly, some respondents may be left out of a random-digit-dial telephone survey because their residence lacks a working landline telephone.  If those without telephones differ in their self-reported ideology from those included, the result is a “coverage bias” that tilts the sample in one direction or another.
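To make the mechanics concrete, here is a toy Python sketch of non-response bias.  Every number below – the “true” ideological shares and the response rates – is invented purely for illustration, not an estimate for any real survey:

```python
# Toy illustration: liberals answer the phone slightly more often, so
# they end up over-represented in the completed interviews.
# All numbers below are invented for illustration.
true_shares = {"conservative": 0.37, "moderate": 0.40, "liberal": 0.23}
response_rate = {"conservative": 0.25, "moderate": 0.25, "liberal": 0.30}

# Each group's contribution to the pool of completed interviews
completed = {g: true_shares[g] * response_rate[g] for g in true_shares}
total = sum(completed.values())

for group, share in completed.items():
    print(f"{group:>12}: true {true_shares[group]:.0%}, "
          f"observed {share / total:.0%}")
# conservative: true 37%, observed 35%
#     moderate: true 40%, observed 38%
#      liberal: true 23%, observed 26%
```

The same arithmetic applies to coverage bias: replace the response rates with each group’s probability of having a working landline, and the sample tilts in exactly the same way.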

These are the two big potential reasons for a less-than-representative sample.  Another potential problem is a deviation from purely random selection of the respondent who gets interviewed within each household.  The pollster should strive to pick a random person within each household, but this is hard to do in practice.  Differences in the way pollsters choose respondents at the household level can introduce differences between surveys.

Most of the discussion of the differences between polls in terms of party identification or ideology assumes that these errors of representation are the only possible problem (other than random error).  Such discussions overlook a third category.

3) Errors of Measurement – Even if the samples are all representative and all consist of the same kinds of people, the polls may still differ in terms of self-reported ideology because of the way they ask the ideology question.  In this example, we need to try to separate the underlying concept (whether Americans have a political ideology, whether they conceive of a continuum that runs from liberal to conservative and classify themselves accordingly) from the mechanics of how we ask the ideology question (what wording we use, how previous questions might define the context, how interviewers interact with respondents when they are not sure of an answer).  The short answer is that very small differences in wording, context and execution can make small but meaningful differences in the results.

(Another theoretical source of error that I have not discussed is that the four organizations conducted different numbers of surveys and were not in the field on precisely the same dates.  However, all four did periodic surveys during 2004, with slightly greater frequency in the fall, and I see no obvious trend during 2004 in the ideology results for NYT/CBS and Gallup.  So my assumption is that, despite differences in field dates, the data were collected in essentially comparable time periods.)

Tomorrow, I want to consider how we might go about distinguishing between measurement error and problems of representation.  I also want to suggest some specific theories – call them “hypotheses” if you want to get all formal about it – for why different pollsters showed slightly different results in self-reported ideology during 2004.  Let me say, for now, that it is not obvious to me which source of “error” (representation or measurement) is to blame for these small differences.

For today, I do want to provide the verbatim text of the ideology question for the four survey organizations cited above:

New York Times/CBS – How would you describe your views on most political matters? Generally, do you think of yourself as liberal, moderate, or conservative?

Harris – How would you describe your own political philosophy – conservative, moderate, or liberal?

Pew — In general, would you describe your political views as very conservative, conservative, moderate, liberal or very liberal? 

Gallup – How would you describe your political views – Very conservative, Conservative, Moderate, Liberal, or Very Liberal? [Gallup rotates the order in which interviewers read the categories.  Half the sample hears the categories starting with very conservative and going to very liberal (as above); half hears the reverse order, from very liberal to very conservative.]
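One mechanical consequence of these wording differences is worth noting: Pew and Gallup offer five categories, which have to be rolled together into three before they can be compared with Harris and NYT/CBS.  Here is a minimal Python sketch of that rollup – the five-point distribution is invented for illustration, not actual Pew or Gallup data:

```python
# Hypothetical five-point distribution (invented numbers, illustration only)
five_point = {
    "very conservative": 0.08,
    "conservative": 0.30,
    "moderate": 0.39,
    "liberal": 0.17,
    "very liberal": 0.06,
}

# Map the five categories onto the three used by Harris and NYT/CBS
collapse = {
    "very conservative": "conservative",
    "conservative": "conservative",
    "moderate": "moderate",
    "liberal": "liberal",
    "very liberal": "liberal",
}

three_point: dict[str, float] = {}
for category, share in five_point.items():
    bucket = collapse[category]
    three_point[bucket] = three_point.get(bucket, 0.0) + share

for bucket, share in three_point.items():
    print(f"{bucket:>12}: {share:.0%}")
# conservative: 38%
#     moderate: 39%
#      liberal: 23%
```

The rollup itself is trivial, but offering the extra “very” options may change how respondents who lean only slightly one way answer – one hypothesis for the small differences we are trying to explain.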

Feel free to speculate about the differences in the comments.    More in the next post.   

Mark Blumenthal

Mark Blumenthal is a political pollster with deep and varied experience across survey research, campaigns, and media. The original “Mystery Pollster” and co-creator of Pollster.com, he explains complex survey concepts to a wide range of audiences and shows how data informs politics and decision-making. He is a researcher and consultant who crafts effective questions and identifies innovative solutions to deliver results, and an award-winning political journalist who draws insights and compelling narratives from chaotic data.