Yesterday, I talked to two junior high school students doing a school project on political polling. One of their questions was, "Do people tell the truth when they answer poll questions?" The answer is, they usually do, though there may be times when they do not, especially when the question asks about something that might create embarrassment, or what social scientists call "social discomfort." If you did not vote, for example, you might be reluctant to admit that to a stranger.
Today’s column from the Wall Street Journal’s Carl Bialik (aka "The Numbers Guy") provides another highly pertinent example (the link is free to all): a Gallup survey showed that 33% of households reported tsunami-relief donations averaging $279 per household. Bialik did the math and found that would add up to roughly $10 billion contributed by US households as of January 9. He also cited official estimates putting the total donated by private sources at well under $1 billion.
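As a quick sanity check on that arithmetic, here is a back-of-the-envelope sketch in Python; the roughly 111 million household count is a round-number assumption on my part, not a figure from the column:

```python
# Back-of-the-envelope check on the Gallup donation figures.
# The ~111 million US household count is an assumed round number.
US_HOUSEHOLDS = 111_000_000
SHARE_DONATING = 0.33      # 33% of households reported giving
AVG_DONATION = 279         # average reported gift per donating household

implied_total = US_HOUSEHOLDS * SHARE_DONATING * AVG_DONATION
print(f"Implied total: ${implied_total / 1e9:.1f} billion")  # ~$10.2 billion

# Official tallies put private giving at "well under $1 billion",
# so the self-reports overstate giving by roughly a factor of ten.
```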
The culprit? Social discomfort:
People tend to fudge when they feel social pressure to answer questions a certain way — in this case, by saying they’ve given to a good cause.
"The interview is a social experience," says Jeffrey M. Jones, managing editor of the Gallup Poll. A USA Today/Gallup survey was the most widely cited source of the one-third statistic in news articles. "As would be the case in a cocktail party or at a job interview, you want to give a good impression of yourself. Even though you can’t see the other person, and may never talk to them again, you want them to think well of you."…
Surveys about charity aren’t the only ones skewed by social pressure. Negative pressure about drug use or sexual behavior can suppress responses on those topics. Conversely, questions about behavior deemed positive, such as voting, tend to elicit false positive responses. Mr. Jones cites a Gallup poll conducted between November 19 and 21 in which 84% of respondents said they had voted a few weeks before. Actual turnout among eligible (though not necessarily registered) voters was 60.7%.
Bialik also has a good review of some other methodological explanations for why an initial survey found 45% reporting tsunami-relief donations while a second survey a week later put self-reported donations at 33%, even though both surveys asked the same question with identical language.
I believe social science research shows that the reason most people lie is not social discomfort (although they will dissemble to get out of a difficult situation, that is lying to avoid retribution, which is something different). A great deal of lying is done to increase the social importance of the liar (it can thus be cognitively justified: he or she would have done this given the chance, but just didn’t have the opportunity). This relates back to the idea that, in general, most people are just not naturally inclined to lie.
Thus there are certain items in a poll that people will lie about, such as how often they vote, or whom they voted for once the winner is already determined (normally called the bandwagon effect, I think; Bush seemed to get very little of this, which is stunning in and of itself). But people would be much less likely to lie when asked in an exit poll whom they voted for, because there really is no immediate positive impact for the self.
Here’s an interesting thing, though. The key in a community to limiting lies is shame. In other words, invoking game theory, it is a cost-benefit analysis: the liar is attempting to weigh the relative benefit in community standing gained by lying against the amount of shame felt for being found out. The emotion of shame, and its use by the community, is therefore critical in limiting lying (which is imperative if communities are going to work). What seems to have happened in our society is that shame has been diminished to such a degree (in other words, for some reason people no longer feel it very much) that the cost-benefit analysis suggests it is always better to lie and risk being found out; the retribution for being found out is so small.
The more people start lying, the more they band together in a perverted social contract to limit shame. This is almost epidemic among baby boomers and goes a long way toward explaining current Washington culture.
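To make that cost-benefit framing concrete, here is a minimal toy sketch; the decision rule and every number in it are invented for illustration, not drawn from any study:

```python
def lying_pays(status_benefit: float, shame_cost: float,
               p_caught: float) -> bool:
    """Toy expected-utility rule: lie when the standing gained from the
    lie exceeds the expected shame cost of being found out."""
    return status_benefit > p_caught * shame_cost

# Strong shame and a real chance of detection keep people honest:
print(lying_pays(status_benefit=1.0, shame_cost=10.0, p_caught=0.3))  # False

# "Diminished" shame makes the very same lie worth the risk:
print(lying_pays(status_benefit=1.0, shame_cost=1.0, p_caught=0.3))   # True
```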
There is also the “patriotic duty” angle. Going back to the ’70s, columnist Mike Royko promoted the idea that the modern form of political polling is contrary to the sort of good statesmanship needed for democracies to flourish. He asserted a patriotic duty to lie to political pollsters. I know many of my friends agreed with Royko and to this day lie to political pollsters as a matter of principle.
This PROVES that Kerry won!
My wife called me earlier today. She asked what the Gallup Organization is. I told her and asked if she got polled. She said we missed the call, but she saw it on the caller ID. Darn… I’ve never been polled… I’ll ask my wife if she would have lied on the poll. My father-in-law is an advocate of lying to pollsters. Always has been. I still don’t understand this logic.
Rick,
Please understand that the contrarian is a well-known part of reliability issues and is always factored into any halfway decent research design. People making the argument that contrarians skew results simply don’t know that much about design.
So how do you factor in for people like me who lie to pollsters?
krm – if you “sound” conservative, they mark it accordingly. If you “sound” liberal, they mark it accordingly.
I’ve yet to read anything that tells survey researchers how to account for lying. There may well be studies showing that lying is random and therefore does not systematically bias a survey, but I haven’t come across one (not saying they aren’t out there).
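For what it’s worth, a quick simulation shows why the direction of lying matters more than the amount: purely random answer-flipping mostly pulls an estimate toward 50%, while one-directional “social desirability” lying shifts it sharply. The rates below are made up for illustration:

```python
import random

random.seed(1)
N = 100_000
TRUE_RATE = 0.60  # true share answering "yes" honestly (e.g., actual voters)

truth = [random.random() < TRUE_RATE for _ in range(N)]

# Random lying: 10% of respondents flip their answer, in either direction.
random_lies = [(not t) if random.random() < 0.10 else t for t in truth]

# One-directional lying: 30% of true "no" respondents falsely claim "yes".
skewed_lies = [True if (not t and random.random() < 0.30) else t
               for t in truth]

print(f"truth:        {sum(truth) / N:.3f}")        # ~0.600
print(f"random flips: {sum(random_lies) / N:.3f}")  # ~0.580, drifts toward 0.5
print(f"skewed lies:  {sum(skewed_lies) / N:.3f}")  # ~0.720, biased upward
```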
Citations would be nice, Wilbur. 🙂 Then we can all partake in the discussion 🙂 🙂
Rick,
Just pick up any book on research design; it is standard 101 stuff (well, actually first-year graduate school in any of the social sciences). There is the “good” respondent, the person who will attempt to tell you what you want to hear, and the “contrarian,” the person who will purposely attempt to mess up your study. As I said, these usually make up a small percentage and are usually accounted for in the design itself. But as a failsafe you always run a reliability test to make sure that something like this isn’t happening, a P or something.
Which just made me realize: Mitofsky and company must have run real-time reliability tests on their data. Everybody does this; you can’t even get a second-year research project published without it. Where is the data from the reliability tests? That will give us a whole lot of answers.
Wilbur, I don’t deny that there have been studies on “good” and “contrarian” respondents, but I question your contention that it is “standard 101 stuff.”
I’m in no way a stats expert. I have had four undergraduate courses in or related to statistics (political economy and GIS courses that relied heavily on ANOVA, regression, and chi-square) and one graduate course specifically on survey research methods (I’m taking my sixth stats class now).
I also have 6 solidly “101” texts in front of me now that don’t deal with this subject at all. Maybe UCSD Political Science and SDSU Public Administration are poor programs? A search of the library catalogue turned up a couple of promising texts where the “respondent problems” keyword registered hits. I’ll check those out when I get a chance.
Four questions: 1) Can you provide support for your contention that the contrarian respondent “usually make(s) up a small percentage and are usually accounted for in the design itself”; 2) How do you “run a reliability test to make sure that something like this isn’t happening, a P or something”; 3) What does this reliability test look like; and 4) What, in your experience, is the best text I should consult on this subject?
Rick,
The issue is not statistics but research design, which is much more important. For a look at reliability tests, see
An Introduction to Psychological Tests and Scales
by Kate Miriam Loewenthal
as one example. Do they not do reliability tests in political science? There is absolutely no way to get quantitative survey data published in the psychological sciences without them. Basically, there are certain patterns of answers from which the tester knows that the respondent is not telling the truth (like I said, this doesn’t happen that often, but it does happen). More important for this discussion, reliability tests allow you to explore whether there is some problem that is not related to the respondents themselves – for instance experimenter bias, which happens much more often. That is the key argument of many people about why there was such a difference between the exit polls and the outcome. Post-hoc reliability tests aren’t worth as much, but they offer some information. But I can’t believe that Mitofsky was not doing some pre-determined reliability testing. Survey data is pretty worthless without it.
For information on validity difficulties with respondents and how to deal with it in research design try,
Experimental Design in Psychological Research
by Allen Louis Edwards
I think it has some stuff on pen-and-paper research (which is what survey research is called in psychology).
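The “certain patterns of answers” idea Wilbur describes is, at least in spirit, what reverse-keyed consistency checks do: a respondent who strongly agrees with both a statement and its reversal gets flagged. A minimal sketch, with hypothetical item names and an arbitrary threshold:

```python
def inconsistency_score(answers: dict) -> int:
    """Count contradictions between an item and its reverse-worded twin,
    each scored on a 1-5 agree/disagree scale. Item pairs are hypothetical."""
    pairs = [("i_always_vote", "i_rarely_vote"),
             ("i_donate_often", "i_seldom_donate")]
    score = 0
    for item, reversed_item in pairs:
        # Strongly endorsing both a statement and its reversal
        # (sum well above twice the scale midpoint) flags the respondent.
        if answers[item] + answers[reversed_item] > 8:
            score += 1
    return score

respondent = {"i_always_vote": 5, "i_rarely_vote": 5,
              "i_donate_often": 2, "i_seldom_donate": 4}
print(inconsistency_score(respondent))  # 1 -> flag this respondent for review
```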
Wilbur,
I must defer to the Poli Sci experts, but no, I have never heard of (nor been taught) methods for determining and accounting for patterns of non-truth. Sounds like psycho-babble to me 🙂
But I’ll check out your texts. Meanwhile, I’ve scoured my survey research methods texts again for reliability tests, but no…
From what I know, a public opinion pollster drafts a survey instrument, then pre-tests it on a focus group to see if there are problems with the questions.
One thing we did cover in class is the fact that certain types of questions, if not worded sensitively, or if the respondent is not provided with a range of options, can provoke “lies.” The example used in class was about abortions:
Question: How many abortions have you had? ______ (This question is often left blank or answered with 0.)
Ask the same question but give the respondent options, and the respondent is more likely to tell the truth: a) 0, b) 1, c) 2, d) 3-5, e) 6 or more.
We never got into reliability tests or anything more advanced. I wish more of MP’s readers (or MP himself) would chime in on this issue. Like I said, I believe you; it makes sense that there would be models for this, I just haven’t been exposed to them.
Okay, found some more stuff… I think it was so “basic” that I blew past it and thought you were talking about something more profound.
Basically, reliability is the idea that a perfectly “reliable” poll can be given twice to the same person without the results of the second administration being affected by experience with the first. Yeah, undergraduate lower-division stuff.
Different forms of the same basic test should, in theory, yield the same results if nothing happens between the two administrations that could change the outcome. You can calculate “reliability coefficients” to estimate how reliable a test is. Correct? That’s what my undergraduate texts say about it.
I was looking for “lying.” I still wonder how to calculate a reliability coefficient for an exit poll survey instrument unless it were “tested” as an exit poll under the same environment and conditions as the real deal. How do you do that? I still think all you can do is pre-test the instrument and make sure it isn’t causing inconsistent responses, but I don’t think the exit poll was designed to try to determine whether people lied. Do you? Mark, do you do that for your surveys?
I’ll read up on this more, but I think reliability testing (of the sort my textbooks talk about) has more to do with predicting errors in the survey instrument (a reliability coefficient) than with trying to estimate, by a pattern of responses to questions or some other means, whether someone is telling an outright lie.
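In the test-retest sense being discussed, the reliability coefficient is just the correlation between two administrations of the same instrument. A minimal sketch with made-up scores (statistics.correlation needs Python 3.10 or later):

```python
from statistics import correlation  # Python 3.10+

# Hypothetical scores from the same respondents on two administrations.
first_run  = [4, 2, 5, 3, 1, 4, 2, 5]
second_run = [4, 3, 5, 3, 1, 4, 1, 5]

# Test-retest reliability: Pearson r between the two administrations.
r = correlation(first_run, second_run)
print(f"test-retest reliability: {r:.2f}")  # ~0.94, close to 1.0 = reliable
```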
It’s me again… Wilbur, can you explain something you said?
“Mitofsky and company must have run real time reliability tests on their data.” How would one conduct a “real time” reliability test? What would this test look like?
Back in the early sixties, when I finished college, testing was standard for much employment – IQ, psychological, whatever. One of the things all of us tried to do was understand the psychological tests well enough to “beat” them for whatever job we were trying to get. Most of those tests did have reliability checks in them, and part of “beating” the tests was to psych yourself into exactly the personality the employer wanted. I still recognize many reliability checks when I see them. Recently I was asked to fill out one of Zogby’s online polls and thought I detected a reliability check. I have read through the exit polls for Georgia and South Carolina. I saw no reliability checks.
If Fran is right and there were no reliability checks on the surveys, then they are pretty worthless, and I am forced to move to the position that we shouldn’t be discussing them anymore (although, as I said, it is possible to do much less powerful post-hoc tests).
But I am left to ponder Casey Stengel’s philosophical question for the ages, “Can’t anybody here play this game?”
When you are doing a survey of whether more people like Coke or Pepsi, you are simply measuring preference, and no – you probably don’t need a reliability measure.
When you are asking people whom they voted for, you are measuring an act that is the direct result of a cognitive decision-making process, and of course you do need reliability checks.
Do people in political science not know the difference between a preference and a choice?
I may be a bit late to this discussion, but I remember from my days working with standardized tests that there were two numbers we were interested in: reliability and validity.
Reliability, as already mentioned, is a measure of repeatability.
Validity means how closely the numbers generated by your instrument match the quantity or attribute you are trying to determine.
Thus, it is possible to have a very reliable poll that has low validity.
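A toy illustration of that last point, with invented numbers: an instrument can give nearly the same reading every time (high reliability) while consistently missing the true value (low validity):

```python
from statistics import mean, pstdev

TRUE_VALUE = 50.0

# A hypothetical instrument that is very consistent but biased upward.
readings = [58.1, 58.3, 57.9, 58.2, 58.0]

print(f"spread (reliability): {pstdev(readings):.2f}")  # 0.14 -> very reliable
print(f"bias vs truth (validity): {mean(readings) - TRUE_VALUE:+.1f}")  # +8.1 -> not valid
```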