Political Hay

Polling’s Station

Opinion polls are pervasive. But can we trust them?

By Daniel Allott | 12.1.10


The Tea Party's emergence and the Democrats' decline. Public backlashes to Obamacare and the "Ground Zero Mosque." Murmurings about the president's religious faith and the field of prospective 2012 GOP presidential hopefuls.

Think of any recent political headline and odds are it can be linked to opinion polling.

In the past, opinion polls weren't very reliable (the infamous "Dewey Defeats Truman" headline can be blamed on faulty polling), but at least their objective was modest: to capture public opinion.

Today's polls just as often drive the news cycle and create public opinion. Given the pervasiveness of polls, the question is: Can we trust them?

I visited a few polling firms' websites and discovered polls on everything from Americans' feelings about Daylight Saving Time (a plurality thinks it's not "worth the hassle") to which country's citizens feel safest "walking alone at night" (Singapore's).

We used to get polls predicting whether the president would be re-elected. Now we also get polls telling us how people feel about his choice of pet or vacation spot.

But as polls have become more prominent, so have charges that they are politically motivated. Rush Limbaugh has accused Gallup of "doing everything they can … to keep Obama's approval at 50 percent." And Eric Boehlert of the liberal Media Matters claims pollster Scott Rasmussen's data "looks like it all comes out of the RNC."

White House Press Secretary Robert Gibbs once complained about a Gallup poll showing President Obama's approval rating dipping to 47 percent. "If I was a heart patient and Gallup was my EKG, I'd visit my doctor," he griped. "I'm sure a six-year-old with a crayon could do something not unlike that. I don't put a lot of stake in, never have, in the EKG that is the daily Gallup trend. I don't pay a lot of attention to meaninglessness."

But the Obama administration has sometimes been very happy to talk up Gallup's meaninglessness, er, findings. When Gallup polling showed initial public support for stimulus spending in early 2009, President Obama told reporters, "I think if you took a look at the Gallup poll yesterday, the American people don't need convincing."

No pollster attracts more criticism than Rasmussen. His daily presidential tracking polls consistently show Obama's approval rating about five percentage points lower than other pollsters do.

Political liberals, according to a Politico story, insist that Rasmussen's polls are "at best, the result of a flawed polling model and, at worst, designed to undermine Democratic politicians and the party's national agenda."

But Rasmussen's polls were among the most accurate in predicting outcomes in the 2004 and 2006 elections. The liberal website FiveThirtyEight.com gave him the third-highest mark for accuracy in predicting the outcome of the 2008 primaries. Rasmussen was accurate again this year; his only major miss was projecting that Sharron Angle would defeat Harry Reid in Nevada by four points. (Reid won by five.)

Politics aside, there are many challenges to achieving accurate poll results. These include survey bias resulting from how questions are worded, and sample bias caused by non-random samples of the population.

Every word used in a poll question can affect respondents' answers. For instance, a February CBS/New York Times poll found that 70 percent of Americans favor gay men and lesbians serving in the military. But the same poll found that just 59 percent of Americans favor homosexuals serving in the military.

It would seem to be a distinction without a difference. But 11 percent of respondents apparently consider the military service of "gay men and lesbians" more acceptable than that of "homosexuals." Go figure.

Sometimes the substance of a question can hinge on just one word. When the word "openly" was inserted after "serving" in each question, support dropped to 58 percent and 44 percent, respectively.

In general, the more detailed a pollster's question, the more illuminating the answers will be. As the Weekly Standard's Andrew Ferguson has pointed out, "Ask 'Would you like a Ferris wheel in your backyard?' and a shockingly high percentage of Americans might say yes. Complicate the question, however -- 'Would you like a Ferris wheel in your backyard if it tripled your electric bill and bumped off the family dog?' -- and the number would drop."

For decades polls have shown that a majority of Americans support Roe v. Wade, the 1973 Supreme Court decision legalizing abortion nationally. To many Americans, Roe is synonymous with abortion rights; to support even a limited right to abortion is to support Roe.

In 2007, the Ethics and Public Policy Center commissioned a national poll of registered voters that attempted to measure what the public knows about Roe. When respondents were simply asked whether they wanted Roe overturned, a majority (55 percent) said "no," and only 34 percent said "yes."

Respondents were then given an explanation of what Roe means -- that it prohibits states from limiting abortion in the first six months of pregnancy, and that if Roe were overturned, states could pass laws to legalize abortion. With this knowledge, the share of respondents that opposed reversing Roe dropped seven points, to 48 percent, and the share that supported overturning Roe leaped nine points, to 43 percent.

Not that this settles the question. I know partisans on both sides who would object to the above description of Roe and to its stated implications. And if you think trying to explain abortion is hard, try testing the public's knowledge of stem cell research, global warming, or campaign finance reform.

Another challenge is deciding whom to poll. Days before the 1936 presidential election, Literary Digest released a poll predicting that Republican Alf Landon would win comfortably. Three days later, his opponent, Democrat Franklin Delano Roosevelt, won in the biggest presidential landslide in more than a century. Landon carried only two states and received 8 electoral votes to FDR's 523.

The Digest poll included 2.3 million people (nearly two percent of the U.S. population). The problem? Sampling bias. Its sample was huge but hardly random, created by combining telephone and automobile ownership listings. Telephones and cars were amenities available mostly to the rich at the height of the Great Depression. So the Digest ended up polling a disproportionate number of wealthy Americans, who were more likely to support the Republican.
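The Digest fiasco is easy to reproduce in miniature. The sketch below simulates an electorate in which wealthier voters (the only ones on phone and car lists) lean Republican; all the numbers are illustrative, not historical vote shares. A biased sample of tens of thousands misses the true result badly, while a random sample of just 1,000 lands close to it.

```python
import random

random.seed(0)

# Illustrative electorate: 30% are "wealthy" (own a phone or car)
# and lean Republican; the rest lean Democratic. Numbers are made up
# to show the mechanism, not to match 1936.
N = 100_000
population = []
for _ in range(N):
    wealthy = random.random() < 0.30
    if wealthy:
        votes_dem = random.random() < 0.30   # wealthy: 30% Democratic
    else:
        votes_dem = random.random() < 0.73   # others: 73% Democratic
    population.append((wealthy, votes_dem))

def share_dem(sample):
    """Fraction of a sample voting Democratic."""
    return sum(v for _, v in sample) / len(sample)

# True support across the whole electorate (about 0.60 here)
true_support = share_dem(population)

# Huge but biased sample: only phone/car owners, like the Digest's lists
biased = [p for p in population if p[0]]

# Small but random sample
random_sample = random.sample(population, 1000)

print(f"true support:            {true_support:.2f}")
print(f"biased  (n={len(biased)}): {share_dem(biased):.2f}")
print(f"random  (n=1000):        {share_dem(random_sample):.2f}")
```

The biased sample is thirty times larger than the random one, yet it is the random sample that tracks the truth: size cannot compensate for a sample that systematically excludes part of the population.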

So why did Rasmussen's polls look better for Republicans ahead of the 2010 election?

In large part it was because of who was sampled. While many pollsters sampled "all adults" or "registered voters," Rasmussen polled "likely voters," a population that captured more Republicans, who were more enthusiastic about voting this year.

Every decision a pollster makes will affect the poll's outcome. For example, most pollsters contact people by phone. But some use pre-recorded telephone inquiries (which are cheaper and allow for larger samples), while others conduct live phone interviews.

Why does it matter? Because respondents tend to be more candid with the computerized questioners -- less apt, for instance, to exaggerate how likely they are to vote or to lie about holding an unsavory view.

Here's my advice: The next time you read a headline about an opinion poll, don't take it at face value. Dig a little deeper. Examine the statistical methods and think critically about the wording of the poll, its sample size, who was surveyed and how they were contacted.
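One of those checks takes only a few lines of arithmetic. Under the textbook assumption of a simple random sample (which real polls only approximate), the 95 percent margin of error for a reported proportion can be computed directly:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion p from a simple
    random sample of size n (worst case is p = 0.5)."""
    return z * math.sqrt(p * (1 - p) / n)

# Typical national poll sizes
for n in (500, 1000, 2000):
    print(f"n={n}: +/- {margin_of_error(n):.1%}")
```

A 1,000-person poll carries a margin of error of about plus or minus 3 points, so a one-point shift in a candidate's "lead" is usually noise, not news.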

Pollsters are constantly refining their methods. But one thing sophisticated statistical techniques can never completely account for is the complex and sometimes contradictory mind of the respondent.

As E.B. White once said, "The so-called science of poll-taking is not a science at all but mere necromancy. People are unpredictable by nature, and although you can take a nation's pulse, you can't be sure that the nation hasn't just run up a flight of stairs."

About the Author

Daniel Allott is a writer in Washington, D.C.