The AMA Spring Break Survey – Part II


Picking up where we left off yesterday, the AMA’s Spring Break survey has problems beyond the disclosure of its methodology.  We must also consider the misleading reporting of results based on less than the full sample.

I should make it clear that in discussing this survey, I in no way mean to minimize the public health threat arising from the reckless behavior often in evidence at the popular spring break destinations.  Of course, one need not look to spring break trips to find an alarming rate of both binge drinking and unprotected sex among college-age adults (and click both links to see examples of studies that meet the very highest standards of survey research).

MP also has no doubt that spring break trips tend to increase such behavior.  Academic research on the phenomenon is rare, but eleven years ago, researchers from the University of Wisconsin-Stout conducted what they explicitly labeled a “convenience sample” of students found, literally, on the beach at Panama City, Florida, during spring break.  They found, among other things, that 92% of the men and 78% of the women they interviewed reported participating in binge drinking episodes the previous day (although they were also careful to note that students at other locations, or engaged in activities other than sitting on the beach, may have differed from those sampled).

In this case, however, the AMA was not looking to break new “academic” ground but to produce a “media advocacy tool.”  The apparent purpose, given the AMA’s longstanding work on this subject, was to raise alarm bells about the health risks of spring break to young women.  The question is whether these “media advocacy” efforts went a bit too far in pursuing an arguably worthy goal.

Also as noted yesterday, the survey got a lot of exposure in both print and broadcast news, and the television accounts tended to focus on the “girls gone wild” theme.  For example, on March 9, the CBS Early Show’s Hannah Storm cited “amazing statistics” showing that “83% of college women and graduates admit heavier than usual drinking and 74% increased sexual activity on spring break.”  On the NBC Today Show the same day, Katie Couric observed:

57% say they are promiscuous to fit in; 59 percent know friends with multiple sex partners during spring break. So obviously, this is sort of an everybody’s doing it mentality and I need to do it if I want to be accepted. 

And most of the local television news references I scanned via Nexis, as well as the Jon Stewart Daily Show graphic reproduced below, focused on the most titillating of the findings in the AP article:  13% reported having sex with more than one partner and 10% said they regretted engaging in public or group sexual activity.

But if one reads the AP article carefully, it is clear that the most sensational of the percentages were based on just the 27% of women in the sample who reported having “attended a college spring break trip”:

Of the 27 percent who said they had attended a college spring break trip:

  • More than half said they regretted getting sick from drinking on the trip.
  • About 40 percent said they regretted passing out or not remembering what they did.
  • 13 percent said they had sexual activity with more than one partner.
  • 10 percent said they regretted engaging in public or group sexual activity.
  • More than half were underage when they first drank alcohol on a spring break trip.

The fact that only about a quarter of the respondents actually went on a spring break trip — information missing from every broadcast and op-ed reference I encountered — raises several concerns.  First, does the study place too much faith in second-hand reports from the nearly three-quarters of the women in the sample who never went on a spring break trip?  Second, how many of those who reported or heard these numbers got the misleading impression that the percentages described the experiences of all 18-to-34-year-old women?  See the Hannah Storm quotation above; she appears to be among the misled.
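Some simple arithmetic shows how much the base matters.  Here is a minimal sketch in Python; the roughly 644-respondent total is not stated anywhere but is implied by the 174-person subgroup representing 27% of the sample, so treat it as an approximation:

```python
# Sketch: how a subgroup percentage shrinks when it is (mis)read against
# the full sample. Figures come from the AMA release and AP article:
# 174 women (27% of respondents) reported taking a spring break trip,
# which implies a full sample of roughly 174 / 0.27, or about 644.

subgroup_n = 174                      # women who took a spring break trip
full_n = round(subgroup_n / 0.27)     # implied total respondents, ~644

findings = [
    ("sex with more than one partner", 13),
    ("regretted public or group sexual activity", 10),
]

for label, pct_of_subgroup in findings:
    women = subgroup_n * pct_of_subgroup / 100     # approx. respondents
    pct_of_full = 100 * women / full_n             # share of full sample
    print(f"{label}: {pct_of_subgroup}% of trip-goers "
          f"= ~{women:.0f} women = ~{pct_of_full:.1f}% of all respondents")
```

Read against the full sample, the headline 13% figure describes roughly 23 women, or about 3.5% of all respondents, which is a very different headline.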

One might think that the press release from the AMA would have gone out of its way to distinguish between questions asked of the full sample and those asked of the smaller subgroup that had actually been on a spring break trip.  Unfortunately, not only did they fail to specify that certain percentages were based on a subgroup, they also failed to mention that only 27% of their sample had ever taken a spring break trip.  Worse, the bullet-point summary of results in their press release mixes results for the whole sample with results based on just 174 respondents, a practice that could easily confuse a casual reader. 

[Excerpt of the AMA press release results; highlighting added]

So what can we make of all this?

Consider first the relatively straightforward issues of disclosure and data reporting.  In this case, the AMA failed to indicate in their press release which results were based on the full sample and which on a subgroup.  Their press release also failed to indicate the size of the subgroup.  Both practices are contrary to the principles of disclosure of the National Council on Public Polls.  Also, as described yesterday, their methodology statement at first erroneously described the survey as a “random sample” complete with a “margin of error.”  It was actually based on a non-random, volunteer Internet panel.  In correcting their error — two weeks after the data appeared in media reports across the country — they expunged from the record all traces of their original error.  In the future, anyone encountering the apparent contradiction between the AP article and the AMA release might wrongly conclude that AP’s reporter introduced the notion of “random sampling” into the story.  For all of this, at the very least, the AMA owes an apology to both the news media and the general public.
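For comparison only, consider what the “margin of error” claim would have implied had the sample actually been random: the conventional 95% margin of error for a simple random sample is 1.96 times the square root of p(1-p)/n.  A minimal sketch, assuming the worst case of p = 0.5 and the sample sizes inferred above, shows that results from the 174-person subgroup carry a nominal margin nearly twice that of the full sample; for a volunteer panel, of course, neither figure is defensible.

```python
import math

def nominal_moe(n, p=0.5, z=1.96):
    """Conventional 95% margin of error for a simple random sample.
    For a non-random volunteer panel this number has no valid
    statistical interpretation; it is computed here only to show
    what the AMA's original claim would have implied."""
    return z * math.sqrt(p * (1 - p) / n)

# Implied full sample (~644) vs. the 174-person spring-break subgroup,
# using the worst case p = 0.5.
for n in (644, 174):
    print(f"n = {n}: +/- {100 * nominal_moe(n):.1f} percentage points")
# n = 644: +/- 3.9 percentage points
# n = 174: +/- 7.4 percentage points
```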

The issues regarding non-random Internet panel studies are less easy to resolve and worthy of further debate.  To be sure, pollsters and reporters need to disclose when a survey relies on something less than a random sample.  But aside from the disclosure issue, difficult questions remain:  At what point do low rates of coverage and response so degrade a random sample as to render it less than “scientific”?  And is there any yardstick by which non-random Internet panel studies can ever claim to “scientifically” project the attitudes of some larger population?  In the coming years, the survey research profession and the news media will need to grapple with these questions.

For now, MP agrees with those troubled by the distinction drawn by the AMA official (as quoted in yesterday’s post) between “academic research” and “a public opinion poll”:

[T]his was not academic research — it was a public opinion poll that is standard for policy development and used by politicians and nonprofits.

Apparently I need to reiterate that this is not an academic study and will [not] be published in any peer reviewed journal; this is a standard media advocacy tool.

I agree that the release of data into the public domain demands a higher standard than what some campaigns, businesses and other organizations consider acceptable for strictly internal research.  With an internal poll, the degree of separation between the pollster and the data consumer is small, and the pollster is in a better position to warn clients about the limitations of the data.  Numbers released into the public domain, on the other hand, can easily take on a life of their own, and data consumers are more apt to reach their own conclusions absent the pollster’s caveats.  Consider the excellent point made by MP reader and Political Science Professor Adam Berinsky in a comment earlier today: 

Why should anyone accept a lower standard for a poll just because the results are not sent to a peer-reviewed journal? If anything, a higher standard needs to be enforced for publicly disseminated polls. Reviewers for journals have the technical expertise to know when something is awry. Readers of newspapers don’t and they should not be expected to have such expertise. It is incumbent on the providers of the information to make sure that their data are collected and analyzed using appropriate methods.

Finally, I put a question to Rutgers University Professor and AAPOR President Cliff Zukin that is similar to one left earlier this afternoon by an anonymous MP commenter.  I noted that telephone surveys have long had trouble reaching students, a problem that is worsening as adults under 30 are more likely to live in the cell-phone-only households that are out of reach of random-digit-dial telephone samples.  Absent a multi-million-dollar in-person study, bullet-proof “scientific” data on college students and the spring break phenomenon may be unattainable.  If the AMA had correctly disclosed their methodology and made no claims of “random sampling,” would it have been better for the media to report flawed information than none at all?

Zukin’s response was emphatic: 

Clearly here flawed information is worse than none.  And wouldn’t the AMA agree? What is the basic tenet of the Hippocratic Oath for a physician:  First, do no harm.

Would I rather have no story put out than a potentially misleading one suggesting that college students on spring break are largely drunken sluts?  Absolutely.  As a college professor for 29 years, not a question about it.  This piece is extremely unfair to college-aged women. 

And the other question here is one of proper disclosure.  Even if one had but limited resources to study the problem, the claims made in reporting the findings have to be considerate of the methodology used.  I call your attention to the statement they make in the email that the main function of the study was to be useful in advocacy.   Even if I were to approve of their goals, I believe that almost all good research is empirical in nature.  We don’t start with propositions we would like to prove.   And the goal of research is to be an establisher of facts, not as a means of advocacy. 

I’m not naive or simplistic.  I don’t say this about research that is either done by partisans or about research that is never entered into the public arena.  But the case here is a non-profit organization entering data into the public debate.  As I said to them in one of my emails, AAPOR can no more condone bad science than the AMA would knowingly condone bad medicine.  It’s really that simple.

As always, contrary opinions are welcome in the comments section below.  Again, I emailed the AMA and their pollster yesterday offering the opportunity to comment on this story, and neither has responded.

Mark Blumenthal

Mark Blumenthal is a political pollster with deep and varied experience across survey research, campaigns, and media. The original "Mystery Pollster" and co-creator of Pollster.com, he explains complex survey concepts to a wide range of audiences and shows how data informs politics and decision-making. He is a researcher and consultant who crafts effective questions and identifies innovative solutions to deliver results, and an award-winning political journalist who draws insights and compelling narratives from chaotic data.