The MyDD survey that we discussed last week is now complete, and the site is rolling out the results over the weekend (here, here and here so far, with a detailed description of the methodology here). To be sure, this is a survey with a partisan sponsor and some of the questions it asks reflect the activist-liberal views of the MyDD readership. Those with a different point of view may well see no added value in the MyDD survey, or see some biased intent in some of the questions. Such is the challenge of surveys with partisan sponsorship. However, blogger Chris Bowers has endeavored to conduct a credible survey using a professional pollster, a traditional methodology and a sample that represents all Americans. He has also tried hard to not only meet the usual standards for disclosure but to significantly exceed them. Whatever you think of the substance of the MyDD questions, they deserve credit for their commitment to professionalism and transparency.
Bowers and Wright briefed me on their efforts on Friday and, not surprisingly, we discussed very little that they have not subsequently disclosed online. The only exceptions are the remaining results that they will post online over the next few days.
Here are two quick methodological notes of interest to MP readers: First, MyDD pollster Joel Wright had considered an unusual method to weight the data by self-reported party registration (rather than identification). After much consideration, Wright decided against weighting by party, because his method would have raised the percentage of self-identified registered Democrats from 33% to 40% of his sample. Wright explains in a comment posted on MyDD:
There’s a decent case to make that this [40% Democrat weighting] is correct, from the data I compiled and also a few other sources. However, most other polls don’t show that figure for Dems. That would have caused a controversy. It would have misdirected discussion about the poll into a discussion about proper proportions, etc. We would have been talking about weights instead of data and findings. Accused of partisanship when it’s not so in the least. All in the first time out for the poll. So I made a command decision, a very hard decision to make, late last night to go with conventional method in the best interest of the poll. This time.
Wright has a point about the potential for controversy. MP certainly would have had some concerns, but as Wright is headed back to the drawing board on his weighting methodology I will save my questions for another day.
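For readers curious about the mechanics, here is a minimal sketch of what weighting a sample to a party target looks like. The targets, respondents, and approach below are hypothetical illustrations of garden-variety post-stratification, not Wright's actual registration-based method:

```python
# Minimal sketch of weighting by party (post-stratification), for illustration
# only. The targets and toy sample below are hypothetical; this is NOT
# Wright's actual registration-based method.

sample = [{"id": i, "party": p} for i, p in enumerate(
    ["Dem"] * 33 + ["Rep"] * 29 + ["Ind"] * 38)]  # toy 100-person sample

# Hypothetical population targets (e.g., derived from registration records).
targets = {"Dem": 0.40, "Rep": 0.29, "Ind": 0.31}

# Observed shares in the sample.
n = len(sample)
observed = {p: sum(r["party"] == p for r in sample) / n for p in targets}

# Each respondent's weight is (target share) / (observed share) for their group.
for r in sample:
    r["weight"] = targets[r["party"]] / observed[r["party"]]

# After weighting, party shares match the targets.
for p in targets:
    share = sum(r["weight"] for r in sample if r["party"] == p) / n
    print(p, round(share, 2))
```

The point of the sketch is Wright's dilemma in miniature: change the `targets` line from 33% to 40% Democratic and every Democratic respondent counts about 1.2 times, shifting every substantive result along with it.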
Separately, Wright says that MyDD plans to make the raw data available for downloading within about a month, so that those interested in doing their own tabulations or analysis can have at it.
Finally, let me close with some general comments about partisan polls. In my last post on the MyDD poll, I wrote that "polls sponsored by partisan groups are nothing new." Regular MP reader Robert Chung (creator of this worthy page on survey house effects) asked in a comment whether I meant that "sponsorship means the results are not reliable?" That was not my meaning, but he raises a good question nonetheless. How reliable are partisan-sponsored polls?
Although most of the polls reported on in mainstream media are sponsored by the media outlets themselves, political parties, candidates and interest groups also routinely conduct their own surveys. Readers should remember that MP earns his living conducting polls for Democratic political candidates, and as such, may not be the most impartial source on this question. However, my perspective is that my "partisan" clients expect, first and foremost, that we provide accurate and reliable results, so our professional duty is to get the numbers right, without regard to their potential propaganda value. For that reason, data from the internal polls that drive campaign strategy rarely see the light of day.
However, as many of you know, we partisan pollsters do sometimes release survey results selectively, when they cast our clients in a favorable light. Academics who systematically analyze public polls (see this recently published study, and also this paper) find that releases from partisan pollsters show a consistent bias. That is, publicly released pre-election polls from Democratic pollsters tend to be a few points more favorable to Democrats, and polls from Republican pollsters tend to be a few points better for Republicans. Pollsters debate the reasons for this pattern, but most believe "selection bias" explains a lot of it. Consider it this way: random sampling error means that every poll varies around the true value. If partisans release only those results that fall on the favorable half of the bell curve (from their perspective), the released results will show a consistent bias, even though no individual poll was rigged. Either way, data consumers should take public releases of partisan results with the appropriate grain of salt.
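To make the selection-bias intuition concrete, here is a small simulation. All parameters are made up for illustration; the mechanism, not the numbers, is the point:

```python
# Sketch of how selective release biases published numbers, even when every
# individual poll is an honest random sample. Parameters are illustrative.
import random

random.seed(0)
TRUE_DEM = 0.50   # true Democratic share in a dead-even race
N = 600           # respondents per poll
POLLS = 10_000    # number of honest polls conducted

results = []
for _ in range(POLLS):
    dem_votes = sum(random.random() < TRUE_DEM for _ in range(N))
    results.append(dem_votes / N)

# Release only the polls that look good for the sponsor's side.
released = [r for r in results if r > 0.50]

print("mean of all polls:     ", round(100 * sum(results) / len(results), 1))
print("mean of released polls:", round(100 * sum(released) / len(released), 1))
# All polls average ~50%; the released subset averages a point or two higher,
# with no rigging of any single survey.
```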
The usual pattern of cherry-picking results is one reason to be encouraged by the precedent the MyDD poll set in putting out so much detail about the survey in advance. They announced ahead of time that they were fielding the poll, released several drafts of the questionnaire for reader review and comment, and announced when the survey went into the field. MP can think of no other news media or partisan poll that discloses so much in advance. MyDD's only hesitation was over releasing the final version of the questionnaire in advance, out of a fear that the wording would be criticized before they had a chance to release data. MP hopes that in future surveys they will take that extra step, because it would help boost the credibility of their claim to release all data, warts and all.
Yes, of course, MyDD is a liberal activist site, and the content of their survey reflects that world view. Conservatives may well take issue with some of the question wording or see bias in their analysis. That is the nature of partisan polling. However, I hope all will appreciate the professionalism and commitment to transparency in the MyDD approach. They are setting a good precedent for others to follow. MP sincerely hopes a site on the conservative side of the blogosphere follows their example.
How can the results be otherwise when the MSM constantly blames Bush for everything bad that happens in the country and never recognizes the good, both of which have happened during his terms? All in all, I think these polls just show what the media reports.
Interesting topic. I have a few thoughts. Let's agree that this poll and its methodology are accurate for what they ask and to whom they ask it (although I agree with the comment that polling is biased by MSM coverage du jour).
The issue is projecting the results of this poll (or any poll) into a reliable election prediction. Polls do not speak to voter turnout, or to the changing voter demographics that will exist nearly 3 years from now.
Also, I would be concerned that a sample of 1,004 voters doesn't translate very well into Electoral College results.
For example, Dems are heavily concentrated in major NE/CA urban centers, carrying those cities by as much as 80%… and thus the 33% Dem base is not widely distributed, making it less potent toward an Electoral College majority. Once they pass 50% in a state, the rest of the votes are "wasted," so excessive margins indicate the base is highly concentrated. Fine for national polls, not good for the Electoral College.
Finally, the samples are deemed representative of the party affiliations… if that is the case, then the following holds true:
Poll (1,004 voters):
Dems 33%
Reps 29%
Indep 37%
2004 Election: Reps 51%, Dems 48%.
Assuming both parties held their bases, this implies that independents broke 22 points for Reps vs. 15 points for Dems (51 − 29 = 22; 48 − 33 = 15).
Are they really "independent" if they vote roughly 3-to-2 for Republicans? If they were independent, one might expect a closer split. But if independents continue to vote Republican, the distinction is not very meaningful. A quick sketch of the arithmetic follows below.
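Here is that back-of-envelope calculation in code form. It is only a sketch, and it depends entirely on the stated assumption that both parties held their bases and independents supplied the remainder of each candidate's total:

```python
# Back-of-envelope check of the arithmetic above, assuming both parties held
# their bases and independents supplied the rest of each candidate's total.
dem_base, rep_base = 33, 29          # poll party shares (%)
dem_vote, rep_vote = 48, 51          # 2004 popular vote (%)

ind_for_dems = dem_vote - dem_base   # 48 - 33 = 15 points
ind_for_reps = rep_vote - rep_base   # 51 - 29 = 22 points

print(f"Independents: {ind_for_reps} points Rep vs {ind_for_dems} points Dem")
print(f"Rep-to-Dem ratio among independents: {ind_for_reps / ind_for_dems:.2f}")
# 22 / 15 is about 1.47, i.e. roughly a 3-to-2 Republican tilt.
```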
I will look forward to seeing the rest of the questions, and their answers.
Miguel Lecuona — Texas Pundit
I believe that is registration rather than affiliation, which would probably pull in a bit of both in the final analysis. But it is my understanding that, over the last couple of decades at least, there has been a significant Republican bloc that will self-identify as conservative on the ideology question and independent on the party-ID question. Hence, polls often show more Dems than Reps but also more conservatives than liberals.
What has always irritated me about the "wrong track" question is that it does not distinguish which end the criticism is coming from – there is a contingent on the right who criticise Bush for abandoning small-government conservatism and would never vote for any Democrat for the same reason… and there are those in the (possibly overlapping) "faster, please" crowd who think the Administration is not nearly aggressive enough in the prosecution of the WOT. They very well might say "wrong track," and yet not really be in play for any plausible Democratic candidate.
Open Source Polling
If for no other reason than that it's the first serious poll commissioned by a blog, the MyDD poll will be worth checking out this week as blogger Chris Bowers releases the results in several installments. Mystery Pollster praises the poll's "open source" n…
The problem with static polls, like the MyDD poll, is that you only get a snapshot. One day later the data are already obsolete, and there is no trend to analyze.
On Rasmussen, GW is up at 50% today.
We need RTP (real-time polling).
A continuously updated KMDST (kinetic multi-dimensional scaling test), say. 😉
If you compare the ANES party identification data (going back to 1952) with the presidential election popular vote, you get an interesting result. In 14 out of 14 presidential elections, the Republican candidate pulls a larger percentage of the popular vote than the poll's party identification would indicate. In 13 out of 14, the Democrat pulls a smaller percentage of the popular vote than party identification would indicate (the one exception was the Johnson landslide year, when he pulled 0.05% more of the popular vote than party ID would indicate).
In fact, in 9 of the 14 presidential elections, the Republican candidate's popular vote percentage was greater than the total of the percentage identifying themselves as Republican (strong, weak, or independent leaning Republican) plus the percentage identifying themselves as truly independent (no party leaning).
The ANES poll for 2004 says 33% identified as Democrats (strong or weak identification) and a further 17% as independents leaning toward the Democrats. The same 2004 polling shows Republicans at 28%, with another 12% independents leaning toward the Republicans. So the poll would make you think the Democrats start with a 50%-to-40% advantage based on party ID. Not quite how I remember the election working out.
Over the last 14 elections, Republicans average 13 points better in presidential popular vote percentage than the ANES party-identification figures would indicate, and Democrats average 6 points lower.
Bottom line: either there is a lot of mismatch between the people who identify with a party in polls and the people who vote for that party's presidential candidate, or the polls have some fundamental flaw in matching the people polled with those who vote. Mismatches like this shake my faith in polls' ability to tell us how people really feel, since there doesn't seem to be much convergence between the popular presidential vote percentage and the polls' party-ID question.
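To put the 2004 gap described above in one place, here is a quick sketch using the ANES figures quoted in the comment and rounded 2004 popular-vote shares (Bush ~50.7%, Kerry ~48.3%); it is purely illustrative arithmetic, not the commenter's 14-election analysis:

```python
# Sketch of the 2004 party-ID vs. popular-vote gap, using the ANES figures
# quoted above and rounded 2004 vote shares (Bush ~50.7%, Kerry ~48.3%).
anes_dem = 33 + 17   # Dem identifiers plus Dem-leaning independents (%)
anes_rep = 28 + 12   # Rep identifiers plus Rep-leaning independents (%)

vote_dem = 48.3
vote_rep = 50.7

print(f"Dem: ID+leaners {anes_dem}%, vote {vote_dem}%, gap {vote_dem - anes_dem:+.1f}")
print(f"Rep: ID+leaners {anes_rep}%, vote {vote_rep}%, gap {vote_rep - anes_rep:+.1f}")
# Dems underperform their ID-plus-leaners share while Reps overperform theirs,
# consistent with the pattern described above.
```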
I guess what I'm still wrestling with is what their objective is with the polling. Is it genuinely to find out where the American people are? Is it intended as opposition research, to show the issues Democratic candidates should be hitting harder? Is it to prove Democratic talking points? You know how it is: it's fine if it's the first (although one wonders why they don't accept the many other polls out there), sensible (but only moderately newsworthy) if it's the second, and useless if it's the third.