Polling Holiday Sales


The Wall Street Journal’s "Numbers Guy" Carl Bialik takes a look this week at the various measures used to estimate retail sales over the Thanksgiving holiday weekend.  While not about political polling, to be sure, most of the measures he describes are survey-based, and the lessons learned are useful for anyone who consumes survey data.

Bialik’s piece is well worth reading in full (no subscription required), but I want to add a point or two.  He explained the methods used by a survey sponsored by the National Retail Federation (NRF) that estimated a 22% increase in Thanksgiving weekend spending compared to a year ago:

Here’s how the NRF’s polling company, BIGResearch LLC, arrived at that estimate: The company has gathered an online panel of consumers who answer regular surveys about their buying habits, elections and other matters. The company emailed panelists on the Monday or Tuesday before Thanksgiving to advise them that a survey was coming over the weekend. Then a second email went out late Thanksgiving night, saying the survey was open. It stayed open until late Saturday night. The survey asked consumers a series of questions about their weekend shopping activity and season-long plans. The key question for the group’s estimate was, "How much did you spend on holiday shopping?"

BIGResearch averaged answers to that question, adjusting for factors like the age, gender and income of its 4,209 respondents. Then it extrapolated to all U.S. adults. The conclusion: spending was up 22% compared with the same weekend last year, as measured in the same way. About 8% of that growth came from the U.S. population increase and from a greater percentage of respondents saying they planned to shop than did last year. The rest of the increase came from a surge in reported average spending over the weekend.
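To see how an estimate like this gets built, it helps to spell out the arithmetic: the extrapolated total is roughly the number of U.S. adults, times the share who shopped, times the average reported spend, so the year-over-year growth is the product of the changes in those three factors.  The sketch below is only an illustration of that decomposition; every input is invented, since BIGResearch discloses none of its actual figures.

```python
# A hypothetical illustration of how a survey-based spending estimate is
# extrapolated and compared year over year. None of these inputs come from
# BIGResearch; they are made up solely to show the arithmetic.

def total_spending(adults, shopper_share, avg_spend):
    """Extrapolated total = adults x share who shopped x average reported spend."""
    return adults * shopper_share * avg_spend

last_year = total_spending(adults=210e6, shopper_share=0.55, avg_spend=300.0)
this_year = total_spending(adults=212e6, shopper_share=0.58, avg_spend=345.0)

print(f"Year-over-year growth: {this_year / last_year - 1:.1%}")

# Because the total is a product, the growth splits cleanly into the
# contribution of each factor (population, shopper share, average spend).
print("Population factor:   ", round(212e6 / 210e6, 3))
print("Shopper-share factor:", round(0.58 / 0.55, 3))
print("Avg-spend factor:    ", round(345.0 / 300.0, 3))
```

Note that two of the three factors come straight from what respondents say they did, which is where the cautions below come in.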

Bialik goes on to note some very valid criticism of this methodology and then includes a quotation from yours truly:

Pollster Mark Blumenthal, who writes about survey research on his blog MysteryPollster.com, told me it’s reasonable to compare survey results from this year with results from last year, as NRF is doing. But he cautions that "what [survey respondents] say about what they did may or may not reflect what they actually did."

I want to add two more cautions:  First, the survey results may not be truly projective of the U.S. population.  As Bialik tells us, the survey was conducted using an "online panel."  Online panels are becoming more common (MP has discussed them here and here).  They have been used to conduct political polls by Harris Interactive, Economist/YouGov, Knowledge Networks, Polimetrix and the Wall Street Journal/Zogby "Interactive" surveys.  However, except for Knowledge Networks, none of these surveys use random sampling to select their pool of potential respondents.  In a typical telephone survey, for example, every household with a working telephone has a chance of being selected.  Panel researchers, by contrast, begin by creating a pool of potential respondents who have selected themselves, typically by responding to advertisements on websites (including this one) or accepting an invitation to answer a survey when filling out a website registration form.  The researchers typically offer some sort of monetary incentive to those willing to respond to occasional surveys.

Although their specific methods vary widely (the methods of Harris, Knowledge Networks and Polimetrix are especially distinctive), panel researchers typically draw samples from the panel of volunteers and then weight the results by demographics like age, gender and income, as BIGResearch did.  While these statistical adjustments will force the demographic composition of their samples to match larger populations, other attitudes or characteristics of the sample may still be way out of whack.
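For readers curious what that adjustment looks like in practice, here is a minimal sketch of cell-based post-stratification weighting, the simplest member of that family of adjustments.  The age groups, population shares and reported spending figures are all hypothetical; BIGResearch's actual procedure is not disclosed.

```python
# A minimal sketch of cell-based post-stratification weighting. The age
# groups, population shares and reported spending are hypothetical;
# BIGResearch's actual adjustment procedure is not disclosed.

# Hypothetical respondents: (age_group, reported_spending)
sample = [
    ("18-34", 250), ("18-34", 300), ("18-34", 180),
    ("35-54", 400), ("55+", 120), ("55+", 90),
]

# Assumed population shares for the same age groups
population_share = {"18-34": 0.30, "35-54": 0.38, "55+": 0.32}

# Share of each group within the sample
n = len(sample)
sample_share = {}
for group, _ in sample:
    sample_share[group] = sample_share.get(group, 0) + 1 / n

# Each respondent's weight = population share / sample share for their group
weighted_total = sum(population_share[g] / sample_share[g] * s for g, s in sample)
weight_sum = sum(population_share[g] / sample_share[g] for g, _ in sample)

print("Unweighted mean spend:", round(sum(s for _, s in sample) / n, 2))
print("Weighted mean spend:  ", round(weighted_total / weight_sum, 2))
```

Real panel adjustments typically rake on several demographics at once, but the principle is the same, and so is the limitation: the weights can only correct the characteristics you weight on.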

Consider a somewhat absurd hypothetical:  Suppose we conducted a survey by randomly intercepting respondents walking through shopping malls.  Suppose we even started by picking a random sample of shopping malls across the United States.  We could certainly weight the selected sample by race, gender, age and other demographics to match the U.S. population, but even then our sample would still overrepresent those who visit shopping malls, and by extension, their retail purchases in comparison to the full population.  As a projection of the shopping behavior of the U.S. population, our hypothetical sample would be pretty worthless.
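A quick simulation makes the same point with numbers.  If joining the panel is related to how much people shop, and not just to their demographics, then weighting the panel back to the correct demographic mix still leaves the spending estimate biased.  Everything below (the population, the spending distributions and the selection rule) is invented purely for illustration.

```python
import random

random.seed(0)

# Hypothetical population: two age groups with different spending habits.
population = []
for _ in range(100_000):
    age = "18-34" if random.random() < 0.4 else "35+"
    spend = max(random.gauss(300 if age == "18-34" else 250, 75), 0)
    population.append((age, spend))

true_mean = sum(s for _, s in population) / len(population)

# Selection into the opt-in "panel" depends on spending itself:
# heavier shoppers are more likely to join (a made-up rule).
panel = [(a, s) for a, s in population if random.random() < min(s / 600, 1)]

# Weight the panel back to the true age mix...
pop_share = {"18-34": 0.4, "35+": 0.6}
panel_share = {g: sum(1 for a, _ in panel if a == g) / len(panel) for g in pop_share}
weight = {g: pop_share[g] / panel_share[g] for g in pop_share}

weighted_mean = sum(weight[a] * s for a, s in panel) / sum(weight[a] for a, _ in panel)

# ...and the estimate is still too high, because within every age group
# the panel over-represents heavy shoppers.
print(f"True mean spend:          {true_mean:6.1f}")
print(f"Panel mean, age-weighted: {weighted_mean:6.1f}")
```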

Now consider the way BIGResearch gathered their sample.  Oh wait, we can’t do that.  We can’t because BIGResearch tells us nothing about how they gathered their panel: not in Bialik’s article, not in the NRF press release, not in their "full report" and not anywhere I can find on their company website.

So we will have to make an educated guess.  Since BIGResearch conducts an online panel, we know they are missing those without Internet access.  If they are like most Internet panels and recruit through Internet banner ads and other "opt-in" solicitations tied to some sort of monetary incentive, we can also assume their pool is biased toward those inclined to complete surveys for cash.  Is it possible this selection process might create some bias in the estimation of retail sales?  We have no way of knowing for sure.  Though to be fair, Bialik notes some prior BIGResearch retail sales estimates that were "on the mark" compared to Commerce Department statistics.

A second and potentially more fundamental caution involves the way BIGResearch conducted this particular survey.  According to Bialik, they contacted potential respondents a few days before Thanksgiving to "advise them that a survey was coming over the weekend."  Bialik does not say how much the initial invitation said about the survey topic, but if it described the survey as being about holiday shopping, the solicitation itself may have helped motivate respondents to go do some shopping.  Also, consider the academic research showing that those with an interest in a survey’s topic are more likely to respond to it.  As such, heavy shoppers are probably more likely to complete a survey about shopping.

Despite all of the above, as per my quotation in Bialik’s article, if the survey was done exactly the same way, asking exactly the same questions two years in a row, then using it to spot trends in the way respondents answered questions is reasonable.  What seems more questionable, especially given the lack of methodological disclosure, is trusting that those answers provide a bullet-proof projection of what the respondents actually did, or that they are truly projective of the U.S. population.


Mark Blumenthal

Mark Blumenthal is a political pollster with deep and varied experience across survey research, campaigns, and media. The original "Mystery Pollster" and co-creator of Pollster.com, he explains complex survey concepts, and how data informs politics and decision-making, to a wide range of audiences. A researcher and consultant who crafts effective questions and identifies innovative solutions to deliver results, he is also an award-winning political journalist who draws insights and compelling narratives from chaotic data.