Belated Update: Gallup on Weighting by Party


After yesterday’s post on Gallup’s decision to start reporting two different types of rolling averages regarding the president’s job approval rating, the folks at Gallup emailed to point out two things I had overlooked.  First, they have been reporting the "smoothed" averages I discussed yesterday in their (subscriber only) "In Depth" presidential approval page since January.  Second, back on January 16, Editor-in-Chief Frank Newport posted some similarly in-depth comments (free to all) about Gallup’s policies on weighting by party identification.  For those who follow the debate on party weighting – and I know you’re out there – Newport’s post is a must read.  Here is a quick summary: 

As Newport notes, his article largely summarizes a presentation he made at last year’s AAPOR conference and elsewhere.  MP saw much of this presentation about a year ago in Washington, when Newport said he was in the midst of "zero-basing everything we know about party [identification] in an election year."  His presentation raised many questions.  Although he reiterates that Gallup continues a process of "reviewing, researching, and discussing our policies on this issue," this article includes the conclusions that Newport and Gallup reached in their deep dig into party ID.  It also provides some context for their decision to report "smoothed" averages for presidential approval alongside the regular non-averaged results. 

Most of the issues that Newport discusses should be familiar to those who have followed the party ID debate on MP.  However, his summary is a good place to start for those who are not.  The bottom line:

Gallup is not convinced that variation in party identification from poll to poll predominantly results from sampling error. In other words, Gallup is not convinced that party identification varies from sample to sample because the wrong percentages of "real" Republicans, independents, or Democrats are selected into the sample.

Instead, it seems at least equally likely that survey-to-survey variation in self-reported party identification is caused by two other factors: 1) measurement error and 2) real change in the population.

Newport argues that some small percentage of Americans may shift their answers on the party identification question in response to events, or what Newport calls "short-term environmental stimuli."  These can be events in the news or questions asked in the middle of a survey.  As such, Newport endorses the theory that asking the party ID question near the end of each interview increases the potential for either short-term change or "measurement error."  He cites the AP/IPSOS study presented at AAPOR (and discussed by MP) last year.  The Gallup organization thus concludes that adjusting or weighting a sample of voters based on their answers to the party ID question introduces the possibility of making those samples less representative, not more so. 

Newport’s statement also responds indirectly to the calls to weight by party using a "smoothed" or rolling average of recent results.  The most prominent advocate of this approach is Professor Alan Abramowitz.  He spelled out his proposal for "Dynamic Weighting" in a paper posted back in September by the Cook Political Report (see also Ruy Teixeira’s commentary).  Newport’s response also provides the context for Gallup’s decision to report smoothed averages of presidential approval:

Attempting to weight an entire sample based on a smoothed estimate of PID involves the impossible challenge of trying to isolate some proportion of the change in PID that results from sampling error and not "real" population change or simple measurement error. While weighting to a smoothed estimate could, in theory, help eliminate sampling error for a particular sample, it is not possible to know to what degree this is being done. Weighting to a smoothed estimate can also create more bias in a sample by changing that sample’s overall composition in a way that a) incorrectly alters what is an estimate of a real change in the population or b) incorrectly alters an entire sample based on measurement error involved in one variable at the end of the survey questionnaire — error that did not affect the measurement of variables included nearer the beginning of the questionnaire.

This is not to say that it is inappropriate to smooth the reporting of individual variables. Analysts may want to report a rolling average or other smoothed procedure in order to provide a longer-term perspective on the trends of a specific measure of interest. In other words, even if one assumes that survey-to-survey variation reflects real-world population change, one may want to look at data trends from a broader perspective. This effort to produce a smoothed estimate can be done for any given variable, including party identification.

This procedure, however, is best conducted on a variable-to-variable basis. The rationale for smoothing one variable (for example, party identification) and then weighting all other variables in a dataset to that smoothed average is less defensible, for the reasons enumerated.
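To make Newport’s distinction concrete, here is a minimal sketch in Python with entirely made-up numbers.  Part (a) smooths a single reported variable (party ID) with a rolling average, the kind of per-variable smoothing Newport considers defensible.  Part (b) applies "dynamic weighting" in the spirit of the Abramowitz proposal, reweighting every respondent in the newest sample so its party ID distribution matches the smoothed target.  The poll figures, respondent records, and simple ratio weights are hypothetical illustrations, not Gallup’s or Abramowitz’s actual data or procedure.

```python
# Illustrative sketch only: hypothetical poll data, not Gallup's or
# Abramowitz's actual figures or code.
from collections import Counter

# Party ID shares (%) from five hypothetical recent polls.
recent_polls = [
    {"Dem": 33, "Rep": 31, "Ind": 36},
    {"Dem": 35, "Rep": 30, "Ind": 35},
    {"Dem": 32, "Rep": 33, "Ind": 35},
    {"Dem": 34, "Rep": 31, "Ind": 35},
    {"Dem": 36, "Rep": 29, "Ind": 35},
]

# (a) Per-variable smoothing: report a rolling average of party ID itself.
smoothed = {
    party: sum(p[party] for p in recent_polls) / len(recent_polls)
    for party in ("Dem", "Rep", "Ind")
}
print("Smoothed party ID:", smoothed)

# (b) "Dynamic weighting": adjust each respondent in the newest poll so the
# sample's party ID distribution matches the smoothed target.  The weight for
# a respondent is target_share / observed_share for that respondent's party.
newest_poll = [  # hypothetical respondent-level records
    {"id": 1, "party": "Dem", "approve": True},
    {"id": 2, "party": "Rep", "approve": False},
    {"id": 3, "party": "Ind", "approve": True},
]
observed = Counter(r["party"] for r in newest_poll)
n = len(newest_poll)
for r in newest_poll:
    observed_share = 100 * observed[r["party"]] / n
    r["weight"] = smoothed[r["party"]] / observed_share

# Every other variable (approval, vote choice, and so on) is now tabulated
# with these weights, which is the step Newport objects to: any measurement
# error in the party ID question spreads to the whole survey.
weighted_approval = (
    sum(r["weight"] for r in newest_poll if r["approve"])
    / sum(r["weight"] for r in newest_poll)
)
print("Weighted approval:", round(100 * weighted_approval, 1))
```

The point of part (b) is simply that once the weights are attached to respondents, every other question in the survey is tabulated with them, which is why Gallup worries that error in the party ID measurement would propagate to the rest of the results.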

Newport’s response will not end this debate, and MP will continue to follow and comment on it.  The decision whether or not to weight by party ID is not a simple one, and it is not easily resolved by a one-size-fits-all rule. 

However, there is one point on which nearly everyone agrees:  Every survey includes some "component" of random error that complicates our ability to see real changes in any one survey.  While smoothing and rolling averages help reduce that random error, they cannot eliminate it entirely.  Some argue that routines like "Samplemiser" sometimes smooth out real change; the same can be said of weighting by party.  Either way, there is no perfect, foolproof way to remove the uncertainty that comes with sampling error.  The best rule is to look at as much data as we can, including "smoothed" averages, and to avoid reading too much into minor fluctuations between any two surveys. 
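For a rough sense of why averaging helps but cannot finish the job, consider a toy simulation with hypothetical figures: even with no real change at all in the population, single polls of 1,000 respondents bounce around the true value, and a five-poll rolling average bounces roughly 1/sqrt(5) as much.  It is visibly steadier, but never perfectly flat.

```python
# Toy simulation (hypothetical values): how much does a 5-poll rolling
# average shrink pure sampling noise in a party ID percentage?
import random
import statistics

random.seed(0)
TRUE_DEM_SHARE = 0.34   # assume no real change in the population
SAMPLE_SIZE = 1000      # respondents per poll
N_POLLS = 500

# Simulate single-poll estimates of the Democratic share.
single_polls = [
    sum(random.random() < TRUE_DEM_SHARE for _ in range(SAMPLE_SIZE)) / SAMPLE_SIZE
    for _ in range(N_POLLS)
]

# Five-poll rolling averages of the same estimates.
rolling = [
    statistics.mean(single_polls[i:i + 5]) for i in range(N_POLLS - 4)
]

print("Std. dev. of single polls:   ", round(statistics.stdev(single_polls), 4))
print("Std. dev. of 5-poll averages:", round(statistics.stdev(rolling), 4))
# The averages vary roughly 1/sqrt(5) as much as single polls:
# noticeably less noise, but still not zero.
```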

Mark Blumenthal

Mark Blumenthal is a political pollster with deep and varied experience across survey research, campaigns, and media. The original "Mystery Pollster" and co-creator of Pollster.com, he explains complex survey concepts, and how data informs politics and decision-making, to a wide range of audiences. He is a researcher and consultant who crafts effective questions and identifies innovative solutions to deliver results, and an award-winning political journalist who draws insights and compelling narratives from chaotic data.