Last week, MP discussed a not-quite-projectable study of evacuees from Hurricane Katrina published by the New York Times. I noted the effort made by the Times to differentiate their study from a “scientific poll” and to make clear that the results “cannot be projected to a definable population.” This week, we have the story of a widely publicized study sponsored by the American Medical Association (AMA) that was not nearly so careful. It took a deceptive approach to disclosure that is becoming more common, inaccurately describing a non-random Internet panel survey as a “random sample,” complete with a “margin of error.”
The study that the AMA billed as a poll of college women and graduates certainly made a lot of news. A story on the poll by the AP’s Lindsey Tanner appeared in thousands of newspapers and websites. A search of the Nexis database shows mentions on the NBC Today Show, the CBS Early Show and hundreds of mentions on local television and radio news broadcasts across the country. Results from the survey also appeared in the New York Times ($), in Ana Marie Cox’s new column for Time Magazine and even on Jon Stewart’s Daily Show.
Cliff Zukin, the current president of the American Association for Public Opinion Research (AAPOR), saw the survey results printed in the Times, and wondered about how the survey had been conducted. He contacted the AMA and was referred to the methodology section of their online release. He saw the following description (which has since been scrubbed):
The American Medical Association commissioned the survey. Fako & Associates, Inc., of Lemont, Illinois, a national public opinion research firm, conducted the survey online. A nationwide random sample of 644 women age 17 – 35 who currently attend college, graduated from college or attended, but did not graduate from college within the United States were surveyed. The survey has a margin of error of +/- 4.00 percent at the 95 percent level of confidence [emphasis added].
Zukin sent an email to Janet Williams, deputy director of the AMA’s Office of Alcohol, Tobacco and Other Drug Abuse to ask “how the random sample of 644 women was selected?” (Zukin’s complete email correspondence with the AMA appears in full after the jump). He asked about the “mode of interviewing, sampling frame, eligibility and selection criteria, and the response rate,” as called for in the AAPOR professional code of disclosure.
Williams responded:
The poll was conducted in the industry standard for internet polls — this was not academic research — it was a public opinion poll that is standard for policy development and used by politicians and nonprofits.
The internet poll methodology used by the AMA’s vendor, Fako & Associates, made use of the Survey Spot volunteer Internet panel maintained by Survey Sampling, Inc. (SSI). According to the SSI website, panel members “come from many sources, including banner ads, online recruitment methods, and RDD telephone recruitment.” Anyone can opt-in to Survey Spot at their recruitment website. A poll conducted with the Survey Spot panel may yield interesting and potentially useful data, but that data will not add up to a “random sample” of anything other than the individuals who choose to participate in the panel.
Zukin replied:
I’m very troubled by this methodology. As an opt-in non-probability sample, it lacks scientific validity in that your respondents are not generalizable to the population you purport to make inferences about. As such the report of the findings may be seriously misleading. I do not accept the distinction you make between academic research and a “public opinion” survey.
The next day, Williams shot back:
I have been involved in the development of public policy research for more than 15 years using this company and several others. We do not make any claims that this is a scientific study and again I ask why did you not have a problem with the other two public opinion surveys I have conducted. I also am afraid that you are looking at the media coverage and not what we issued…
As far as the methodology, it is the standard in the industry and does generalize for the population. Apparently I need to reiterate that this is not an academic study and will be published in any peer reviewed journal; this is a standard media advocacy tool that is regularly used by the American Lung Association, American Heart Association, American Cancer Society and others.
On that score, Williams was in error. Yes, the article by AP’s Tanner did refer to the study as “a nationwide random sample of 644 college women or graduates ages 17 to 35,” but then so did the original AMA release put out by Williams’ office as noted above. The original release also provided a margin of error, something that is only statistically appropriate for a truly “scientific” random sample.
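For context, the “+/- 4.00 percent” figure in the original release matches what the textbook simple-random-sample formula yields for a sample of 644, which is presumably how it was computed. The catch is that the formula assumes probability sampling, so applying it to an opt-in panel is meaningless. A minimal sketch (my own illustration, not the pollster’s actual calculation):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Margin of error for a simple random sample of size n.

    Uses p = 0.5 (the most conservative value) and z = 1.96 for
    95 percent confidence. The formula assumes probability
    sampling; it does not apply to opt-in volunteer panels.
    """
    return z * math.sqrt(p * (1 - p) / n)

# n = 644, as in the AMA release
moe = margin_of_error(644)
print(f"+/- {moe * 100:.2f} points")  # prints: +/- 3.86 points
```

Rounded, that is the release’s “+/- 4.00 percent,” but the number describes only sampling error under random selection; it says nothing about the self-selection bias of a volunteer panel.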
MP and Williams also differ in our perceptions of what constitutes an “industry standard.” While companies that conduct market research using opt-in volunteer panels are certainly proliferating, the field remains in its Wild West stage. Every company seems to have a different methodology for collecting and “sampling” volunteer respondents and then weighting the results to make them appear representative. Few disclose much if anything about the demographics or attitudes of the volunteers in their sample pool.
The one issue on which Williams has a point, unfortunately, involves methodological disclosure. The AMA poll is one of many I have seen that calculate a margin of error for a non-random panel sample. This sort of misleading “disclosure” should not be an “industry standard,” but sadly, it is fast becoming one.
Zukin noted much of this in a subsequent reply:
Simply put, statistically, you are wrong. The methodology is not standard, it is not generalizable to the population. And, the reporting of a sampling error figure, as you have done in your methods statement, is fanciful. Because of the way you sampled people, with a non-probability sample, there is no way [we] can know about the accuracy of your sampling and error margin. This is simply without basis in mathematical fact. 100 out of 100 statisticians would tell you that there is no sampling error on a non-probability sample. It is beyond question that your methodological statement is factually inaccurate and misleading.
A little over an hour later, Zukin received another message, this time from Dave Fako, the pollster whose company conducted the survey:
Janet at the AMA made some incorrect assessments of the methodology that was used for the survey. I’d like to clarify some of your questions.
This survey was an online panel study, conducted in accordance with professional standards for this type of study. We do not, and never intended to, represent it as a probability study and in all of our disclosures very clearly identified it as a study using an online panel. We reviewed our methodology statement and noticed an inadvertent declaration of sampling error. We have updated our methodology statement on this survey to emphasize that this was a panel study to represent how the survey was conducted.
The AMA release, last updated on March 23, has been updated as noted in Fako’s email. It no longer describes the survey as a “random sample” or claims that the survey has a “margin of error.” At the same time, the release includes nothing to indicate that a correction has been made or that it has been changed from its original version. Anyone visiting that website today might wrongly conclude that the claim of “random sampling” was the invention of AP reporter Lindsey Tanner.
Does the AMA still consider the survey “generalizable” to the population of 17-35 year old women? Neither Fako’s email nor the corrected methodology statement above makes that clear.
[Update: As noted by reader TM Lutas in the comments, despite the correction, the news page on the Fako & Associates web site continues to point to a PDF of a USA Today article on the survey that includes reference to the survey’s “margin of error”].
Frustrated with the AMA, which subsequently stopped returning his calls, Zukin shared his correspondence with me. When I asked why he wanted to go public with the dispute, Zukin replied that he “first tried to respond quietly by calling the AMA Communications Director, hoping they would voluntarily issue a clarification of their earlier release.” He also sent the New York Times a Letter to the Editor, which so far has not been published. He continued:
I did not want to let the issue die, however without an attempt at a wider discussion/circulation of the problem…There just is a strong sentiment within our profession, at least as represented on [AAPOR’s Executive] Council, that we need to address these rogue opt-in surveys that masquerade as probability samplings and report their results with a margin of sampling error. SO, I turned to MP.
I have emailed both Janet Williams and Dave Fako asking for comment, and they have not responded as of this posting.
I hate to end on a bit of a cliffhanger (no pun intended), but there is more to this story. I will continue with Part II tomorrow. The full text of the exchange between AAPOR President Cliff Zukin and the AMA follows on the jump.
Interest Disclosed: I am an AAPOR member and currently a nominee to chair AAPOR’s Publications and Information committee.
UPDATE: Continues with Part II.
From: Cliff Zukin
Sent: Tuesday, March 14, 2006 4:20 PM
To: Janet Williams
Cc: ‘Nancy Mathiowetz’
Subject: spring break AMA survey

Janet,
Thank you for leaving a message at my house.
I did follow up as Mary Kaiser suggested and looked at the methodology statement on the web site. I really would like to get some additional information.
I am most interested in how the random sample of 644 women was selected. This would include the mode of interviewing, sampling frame, eligibility and selection criteria, and the response rate. I guess I would basically like the basic information called for in the professional code of disclosure. You can find this at aapor.org, but I’ll email the relevant section under separate cover.
Thanks in advance.
Cliff Zukin
———————————————————————
From: Janet Williams
Sent: Tuesday, March 14, 2006 4:44 PM
To: Cliff Zukin
Cc: Nancy Mathiowetz
Subject: RE: spring break AMA survey

The poll was conducted in the industry standard for internet polls – this was not academic research – it was a public opinion poll that is standard for policy development and used by politicians and nonprofits. I guess I am curious as to why you are interested in the methodology for this poll and not for the other two polls I conducted in the exact same way: alcopops (which surveyed girls and women) (Dec. 2004) and social source suppliers of alcohol (June 2005).
Janet Williams
Deputy Director
Office of Alcohol, Tobacco and Other Drug Abuse
American Medical Association

———————————————————————
From: Cliff Zukin
Sent: Tuesday, March 14, 2006 9:14 PM
To: Janet Williams
Cc: ‘Nancy Mathiowetz’
Subject: RE: spring break AMA survey

Janet,
I’m very troubled by this methodology. As an opt-in non-probability sample, it lacks scientific validity in that your respondents are not generalizable to the population you purport to make inferences about. As such the report of the findings may be seriously misleading. I do not accept the distinction you make between academic research and a “public opinion” survey. Moreover, this is not the standard used in policy research, by non-profits or even by politicians. Surveys are either conducted according to sound methodological practices, or not. Just as the AMA has standards for sound medical practice, AAPOR has standards for sound opinion research.
I believe this to be true generally, and I think there is an even greater responsibility when research findings are put into the public domain. I will discuss this with AAPOR’s standards chair and committee; we may ask the AMA to issue a clarifying statement.
Cliff Zukin
———————————————————————
From: Janet Williams
Sent: Wednesday, March 15, 2006 10:15 AM
To: Cliff Zukin
Cc: Nancy Mathiowetz
Subject: RE: spring break AMA survey

I have been involved in the development of public policy research for more than 15 years using this company and several others. We do not make any claims that this is a scientific study and again I ask why did you not have a problem with the other two public opinion surveys I have conducted. I also am afraid that you are looking at the media coverage and not what we issued. The purpose was to get some experience info and opinions on how women are portrayed in ads and support for policies. We are not using this poll to castigate anyone’s science or work on alcohol use. I am very confused by your outrage and have never received any such criticism for the clean indoor air polls I conducted in Illinois and municipalities in my work at the American Lung Association prior to my joining the AMA.
As far as the methodology, it is the standard in the industry and does generalize for the population. Apparently I need to reiterate that this is not an academic study and will be published in any peer reviewed journal; this is a standard media advocacy tool that is regularly used by the American Lung Association, American Heart Association, American Cancer Society and others.
I have forwarded your email to our pollster and, if warranted, he will respond.
Janet Williams
———————————————————————
From: Cliff Zukin
Sent: Wednesday, March 15, 2006 11:26 AM
To: ‘Janet Williams’
Cc: ‘Nancy Mathiowetz’
Subject: RE: spring break AMA survey

Janet:
I did not respond to any previous research because I was unaware of it.
I think AMA needs to review what it has done here, including your assertions. Simply put, statistically, you are wrong. The methodology is not standard, it is not generalizable to the population. And, the reporting of a sampling error figure, as you have done in your methods statement, is fanciful. Because of the way you sampled people, with a non-probability sample, there is no way [we] can know about the accuracy of your sampling and error margin. This is simply without basis in mathematical fact. 100 out of 100 statisticians would tell you that there is no sampling error on a non-probability sample. It is beyond question that your methodological statement is factually inaccurate and misleading.
I am also troubled by the fact you actually call this study a “media advocacy tool.” It is unconscionable to put something in the public domain under the guise of a scientific survey when it has such a high potential to be inaccurate and mislead. Scientific surveys should be done to measure public opinion, not to influence collective opinion.
Giving the benefit of the doubt here, I assume that AMA does not knowingly wish to mislead the public and press. Now that this might have happened inadvertently, I encourage you to [think] about what steps the AMA could take on its own to correct the information you have put out. AAPOR will be discussing this matter later in the week.
Cliff Zukin
———————————————————————
From: Dave Fako
Sent: Wednesday, March 15, 2006 5:46 PM
To: Cliff Zukin
Subject: Answers to Your Question About AMA Online Survey

Dear Cliff Zukin:
Thank you for your questions about the AMA Spring Break survey. Like you, I am committed to the credibility of all public opinion research and am dedicated to utilizing rigid standards in all of our research projects.
Janet at the AMA made some incorrect assessments of the methodology that was used for the survey. I’d like to clarify some of your questions.
This survey was an online panel study, conducted in accordance with professional standards for this type of study. We do not, and never intended to, represent it as a probability study and in all of our disclosures very clearly identified it as a study using an online panel. We reviewed our methodology statement and noticed an inadvertent declaration of sampling error. We have updated our methodology statement on this survey to emphasize that this was a panel study to represent how the survey was conducted. That updated statement is listed below. Additional details of how the panel was assembled, maintained and utilized along with disclosure of incentives, etc. have been included in all of our material related to this study.
This is our updated summary of methodology:
“The American Medical Association commissioned the survey. Fako & Associates, Inc., of Lemont, Illinois, a national public opinion research firm, conducted the survey online February 27 – March 1, 2006. A nationwide sample of 644 women age 17 – 35 who are part of an online survey panel who currently attend college, graduated from college or attended, but did not graduate from college, who reside within the United States were surveyed.
The source of the panel is Survey Sampling International’s (SSI) Survey Spot Panel. A strict multi-step screening process was used to ensure that only qualified individuals participated in the survey. The survey makeup was: 62% women age 17 – 23 and 38% women age 24 – 35. The survey was conducted in proportion to regional shares of the population based on current census data.”
We apologize for any misunderstanding about the survey. We are committed to conducting legitimate public opinion research that provides our clients with the most accurate data and in-depth strategic analysis of the findings. We never craft polls to give our clients the answers they want; in fact, we regularly decline to take on clients who ask us to design polls to give them the answer they want; and, we refuse to craft/include questions in our surveys that are not designed to elicit true opinions.
Fako & Associates has conducted over 500 public opinion/strategic research surveys for political candidates, public policy organizations, corporations and units of government since 1999. Our record of success and accuracy and repeated use by numerous clients speaks to our commitment to quality and accuracy.
Again, I apologize for any misunderstanding and would be glad to work with you to promote quality and legitimate public opinion research.
I hope this addresses your concerns.
Feel free to ask additional questions.
Dave Fako
The most disturbing part of this whole exchange is the statement by Williams that:
“this was not academic research — it was a public opinion poll that is standard for policy development and used by politicians and nonprofits.”
Why should anyone accept a lower standard for a poll just because the results are not sent to a peer-reviewed journal? If anything, it seems to me that a higher standard needs to be enforced for publicly disseminated polls. Reviewers for journals have the technical expertise to know when something is awry. Readers of newspapers don’t, and they should not be expected to have such expertise. It is incumbent on the providers of the information to make sure that their data are collected and analyzed using appropriate methods. Otherwise the whole survey research enterprise suffers (see the Literary Digest debacle of 1936 for a historical example).
This debate brings up an interesting contrast between research “in a bubble” and the realities of working with real populations and budgets.
I’m not sure how you would go about conducting a survey of college students and recent grads without using a panel. Surely we can’t assume that a screened RDD phone sample is representative, as so many in this age group have gone cell-only – particularly those living on campus. Buying a list? With the amount of transition students go through, that again makes a truly random sample unlikely. Not to mention the abysmal response rates you’d likely get from such an audience these days.
Yet if someone actually did a phone survey with this audience, assumed it was a random sample, and assigned an appropriate margin of error, would it be subject to this same level of scrutiny that this panel-based AMA survey is getting? Of course not–we recognize that phone samples have problems (and are getting worse) but we accept that as a hazard of the business and that the science of random sampling works despite blips like low response rates and cell-only households.
To be sure, calculating a “margin of error” for a panel-based survey is flawed and misleading. But is it really that much better for a phone survey?
Kudos to Cliff Zukin for taking on this issue! As the web survey mode of data collection continues to gain market share, the frequency of this type of misleading behavior seems to be increasing each year. In order to separate high quality public opinion work from “polls” similar to this one, it is imperative that public opinion professionals fight these battles.
Thank you for the comments. I’ve addressed some of the questions raised in Part II, which is now posted:
http://www.mysterypollster.com/main/2006/03/the_ama_spring__1.html
The key may be Janet’s view of what she’s doing: “Apparently I need to reiterate that this is not an academic study and will [not?] be published in any peer reviewed journal; this is a standard media advocacy tool that is regularly used by the American Lung Association, American Heart Association, American Cancer Society and others.”
Interesting view: “a standard media advocacy tool” is what it is, but what does that mean?
She’s looking for something to support what she’s advocating — not necessarily something valid or reliable, just something that says what she wants to say in her advocacy.
She sees herself as an advocate, and apparently feels no allegiance to the truth in making her arguments.
It’s snake oil! Pure and simple. Why is it so difficult for people to acknowledge that? It’s a scam. Did you notice how quick they were to label themselves as “professional”? No true professional has to say that, and calling oneself professional does not make one professional.
There is a real documented girls gone wild effect.
http://www.issues.org/13.2/courtw.htm
It is demographic. It is caused by an excess of females over males.
Fako but accurate?
The central excuse is the entirety of the problem. “This is the industry standard.”
Take the sentence apart: What “this?” What “industry?” Whose “standard?”
It would be more accurate to say “This type of research has no standards, and we adhere to them.”
I don’t know if you want to kick Fako & Associates any more but their News page on their website includes a USA Today PDF of the story complete with the margin of error statement. It doesn’t have the explicit statement that it’s a random survey but the bit about it being generalizable is embedded in that margin statement. Is it an oversight? Maybe you should ask.
IVR Internet: How Reliable?
If one story is more important than all others this year–to those of us who obsess over political polls–it is the proliferation of surveys using non-traditional methodologies, such as surveys conducted over the Internet and automated polls that use a…