Quick Tips for Becoming Poll-Literate

Whether you’re a political junkie like me or just a casual election follower, you’ve probably read a few polls that made your jaw drop.  Here are some things a skeptical poll consumer should look for before letting that jaw fully drop.

Selection Bias

One of the first questions you should ask yourself when you read a poll is “What kinds of people did they ask?”  What we want from a poll is an accurate cross-section of the voting populace.  Selection bias occurs when something systematic about how respondents are reached (not necessarily anything political) skews the results toward a particular subpopulation.  A classic example, one that has only become more prominent since the 2008 election, is the heavy reliance on landline phone polls.  Increasingly, many Americans (especially young people) use a cell phone exclusively, meaning they can’t be reached by traditional landline polling methods.  Pollsters are adapting by trying to reach cell-only voters and by adjusting the weights in their models, but it’s still a problem to be aware of.
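To make that weighting idea concrete, here is a minimal sketch of post-stratification weighting in Python.  Every number here (population shares, sample shares, support levels) is invented for illustration, and real pollsters weight on far more variables than age:

```python
# Minimal sketch of post-stratification weighting (illustrative numbers only).
# Suppose a landline-heavy poll under-samples young voters; we reweight each
# age group so the sample matches known population shares.

# Hypothetical shares of the voting population by age group
population_share = {"18-29": 0.20, "30-49": 0.35, "50+": 0.45}

# Shares actually reached by the landline-heavy sample
sample_share = {"18-29": 0.08, "30-49": 0.32, "50+": 0.60}

# Hypothetical candidate support observed within each group
support = {"18-29": 0.60, "30-49": 0.50, "50+": 0.42}

# Weight each group by how under- or over-represented it is in the sample
weights = {g: population_share[g] / sample_share[g] for g in support}

raw      = sum(sample_share[g] * support[g] for g in support)
weighted = sum(sample_share[g] * weights[g] * support[g] for g in support)

print(f"Raw estimate:      {raw:.1%}")       # skewed toward older voters
print(f"Weighted estimate: {weighted:.1%}")  # matches the assumed electorate
```

The weighted estimate is just the population-share-weighted average of each group’s support; the gap between the two printed numbers is the size of the skew the landline sample would have introduced.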


Framing Effects

During President Obama’s push for health care reform (HCR), polling on public support for the legislative overhaul was all over the place, depending heavily on how the pollster framed the question.  When the question compared the proposed HCR to the very popular Medicare program, support was higher than when the question described HCR as “government-run”; even though these are different ways of describing the same legislation, they lead to different results!  The effect is compounded in cases like HCR, where the average voter is uninformed about the actual contents of the bill.
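One way a skeptical reader can gut-check a wording gap like that is to ask whether it is larger than sampling noise alone would produce.  Here is a short Python sketch of a two-proportion z-test; the support levels and sample sizes are hypothetical:

```python
# Is the gap between two question wordings bigger than sampling noise?
# Two-proportion z-test with made-up numbers.
import math

def wording_gap_z(p_a, n_a, p_b, n_b):
    """z-statistic for the difference between two sample proportions."""
    p_pool = (p_a * n_a + p_b * n_b) / (n_a + n_b)   # pooled proportion
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical split-sample poll: the same bill under two framings
z = wording_gap_z(0.55, 500,   # "like Medicare" framing: 55% of 500 support
                  0.42, 500)   # "government-run" framing: 42% of 500 support
print(f"z = {z:.2f}")          # |z| > 1.96 -> gap is unlikely to be chance alone
```

A gap that survives this test is a real framing effect rather than noise; it still tells you nothing about which framing better reflects what voters actually believe.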


Question Order Effects

In February, an internal pollster from Senator Scott Brown’s reelection campaign criticized a survey conducted by Suffolk University that surprisingly favored Senator Brown by 9 points.  The reason?  Question order effects: specifically, the Suffolk poll asked leading questions like “Does [challenger] Elizabeth Warren have the experience to be a United States Senator?” before asking respondents who they would vote for.  Pioneering research by Howard Schuman and Stanley Presser in the 1970s demonstrated the effects question ordering can have on polling results; their suggestion was to always ask general questions before specific ones, so that the content of the specific questions doesn’t contaminate answers to the more general question, which is usually the one we care most about.  The Suffolk poll did this exactly backwards, and the Brown campaign deserves credit for questioning a flawed poll that favored its own candidate.
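The standard way to measure (not just suspect) an order effect is a split-ballot design: randomly assign each respondent one of the two orderings and compare the vote-choice answers between the halves.  A hypothetical Python sketch, with question text paraphrased from the example above:

```python
# Split-ballot sketch for detecting question-order effects.
import random

# The general (headline) question and a leading, candidate-specific question
GENERAL  = "If the election were held today, would you vote for Brown or Warren?"
SPECIFIC = "Does Elizabeth Warren have the experience to be a U.S. Senator?"

def assign_ballot():
    """Randomly choose which question a respondent sees first."""
    if random.random() < 0.5:
        return [GENERAL, SPECIFIC]   # general first (Schuman & Presser's advice)
    return [SPECIFIC, GENERAL]       # specific first (the Suffolk ordering)

# Demo: each respondent gets one of the two orderings at random; comparing
# vote-choice answers across the two groups estimates the order effect.
for _ in range(4):
    print(assign_ballot())
```

If the two halves give meaningfully different vote-choice numbers, the ordering itself is moving the headline result.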


Fictional Opinions

A classic experiment by George Bishop (1980) demonstrated that between 30 and 40% of respondents either agreed or disagreed with the repeal of the 1975 Public Affairs Act.  The problem?  There never was a 1975 Public Affairs Act.  The finding illustrates the power of suggestibility and authority: if respondents are willing to give ‘real’ answers to fake questions, then we shouldn’t always take survey results at face value.  With today’s roller-coaster news cycle, where out of nowhere Ron Paul can make repealing the 14th Amendment a hot-button issue, you should constantly ask yourself, “Do Americans really understand, or even know, what the 14th Amendment is?” before interpreting any poll about the country’s attitudes toward its repeal.


Resources

If you’re interested in digging deeper, the best resources I know of are FiveThirtyEight and Pollster; Nate Silver and Mark Blumenthal are far more knowledgeable on this subject than I am.


References

Bishop, G. F., et al. (1980). Pseudo-opinions on public affairs. Public Opinion Quarterly, 44(2), 198-209.

Schuman, H., & Presser, S. (1981). Questions and answers in attitude surveys: Experiments on question form, wording, and context. New York: Academic Press.