Blogger Dalythoughts dug into the 2002 post-election analysis by the National Council on Public Polls (NCPP) and discovered what can happen when a pollster weights every survey by party ID.
Did you know that one pollster ‘called’ over 29% of the 2002 Senate and/or Gubernatorial races for the wrong candidate, despite polling more races than all but one other company…And that the average for everybody else was getting about 13% of the races wrong, by comparison? No peeking. Can you name this pollster?
Click here for the answer. I’ll give you a hint. The name was mentioned in the last post and starts with a Z.
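For readers unfamiliar with the mechanic at issue: weighting by party ID means rescaling each party group in the raw sample to match a fixed target distribution of Democrats, Republicans, and independents. Here is a minimal sketch of that arithmetic; all of the numbers are invented for illustration, not drawn from any actual Zogby poll.

```python
# Hypothetical illustration of weighting a poll by party ID
# (a simple form of post-stratification). All numbers are made up.

# Raw sample: for each party group, the number of respondents
# and the share of that group supporting candidate A.
sample = {
    "Dem": (450, 0.85),
    "Rep": (350, 0.10),
    "Ind": (200, 0.50),
}

# The pollster's fixed party-ID targets -- the contested assumption
# being that these shares stay stable from one election to the next.
targets = {"Dem": 0.39, "Rep": 0.35, "Ind": 0.26}

total_n = sum(n for n, _ in sample.values())

# Unweighted estimate: every respondent counts equally.
unweighted = sum(n * share for n, share in sample.values()) / total_n

# Weighted estimate: each party group is rescaled to its target share,
# regardless of how many respondents it contributed to the raw sample.
weighted = sum(targets[p] * share for p, (n, share) in sample.items())

print(f"unweighted support for A: {unweighted:.4f}")  # 0.5175
print(f"weighted support for A:   {weighted:.4f}")    # 0.4965
```

The point of the sketch: if actual party identification shifts (as it arguably did in 2002), the fixed targets drag the estimate away from the raw sample rather than toward the electorate.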
Daly’s post reminds me to highly recommend the excellent resources of the National Council on Public Polls, especially their 20 questions a journalist should ask and frequently asked questions from the public.
PS: Thanks to Dalythoughts for the link to us.
OTOH, wasn’t Zogby closer to the final results in both the 1996 and 2000 presidential elections? (In 1996, unlike the other pollsters, he didn’t exaggerate Clinton’s margin of victory; in 2000 he was practically the only one to have Gore a bit ahead in the popular vote. Admittedly, polls that had a double-digit Clinton lead in 1996 and a slight Bush popular lead in 2000 may have been within the margin of sampling error; but the fact remains that Zogby was closer to the actual results.) Is it possible that the same “weighting voters by party” method that hurt him in predicting state elections is more accurate in presidential ones? Or did 2002 inaugurate a new era of party-identification volatility?…
“the public opinion survey, correctly conducted, is still the best objective measure of the state of the views of the public.”
[from the ’20 questions’ linked above]
And there I was, thinking elections were the best way to measure the views of the public!
“And there I was, thinking elections were the best way to measure the views of the public!”
Well, of course, polls are better than elections for measuring opinion–for the simple reason that (presidential) elections can only measure the views people have one day every four years! We need (or at least some of us want) some way of measuring what people’s views are in the interim…
In all seriousness, I am a bit annoyed at the way the “snapshot” theory allows pollsters to explain away all divergences between their results and the election results–they can always say “Well, people changed their minds at the last minute.” (The one kind of poll where errors can’t be rationalized like that is the exit poll…)