Political Hay

Poll Vaulting

Keeping the horse race -- and the pollsters -- straight.

By Kristen Soltis | 10.24.08


Is Barack Obama up by double digits, or is this race a dead heat? Is there an inconsistency between tightening national polls and yesterday's battleground state polls, or is it no big deal? What exactly is going on with polls that are all over the place?

Comparing polls these days is hardly a matter of comparing apples to apples. There are apples, oranges, and even pineapples. To say "no two polls are alike" in methodology is an understatement. The whole host of things pollsters do to their data -- weighting it, screening it, using different samples and question forms -- can lead to a world where two polls you'd expect to fall within the margin of error of one another wind up all over the map.

First, there's what pollsters do in sampling to get the data in the first place. Some use cell phones, some don't. The methodology for handling cell users is newly emerging, and cell users are typically treated a bit differently than landline users, such as being offered incentives for their time. Then, if you're looking at a poll that uses "likely voters," you're looking at a sample that has been filtered according to a pollster's assumptions. Screening out respondents who are registered to vote but don't fit a pollster's definition of "likely voter" introduces a subjective filter to the data.

Since almost no pollsters reveal what their likely voter screener is, it's hard for an outside observer to know whether to trust a pollster's assumptions about which behaviors or questions best identify likely voters. Sure, some voters are more likely to vote than others. But if you're going to screen out folks who are eligible to vote because you don't think they're going to exercise their democratic right, readers should want to know what your rationale is for that screen.
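To make the idea of a likely voter screen concrete, here is a toy sketch. The criteria (past voting, stated intent, knowing one's polling place) and the cutoff are entirely hypothetical -- as the article notes, real pollsters almost never disclose their actual screens:

```python
# A made-up likely voter screen: score each respondent on hypothetical
# turnout indicators, then keep only those at or above a cutoff.

def likely_voter_score(respondent):
    """Score a respondent on hypothetical turnout indicators."""
    score = 0
    if respondent["voted_last_election"]:
        score += 1
    if respondent["says_will_vote"]:
        score += 1
    if respondent["knows_polling_place"]:
        score += 1
    return score

def screen_likely_voters(respondents, cutoff=2):
    """Keep only respondents scoring at or above the cutoff."""
    return [r for r in respondents if likely_voter_score(r) >= cutoff]

registered = [
    {"voted_last_election": True,  "says_will_vote": True,  "knows_polling_place": True},
    {"voted_last_election": False, "says_will_vote": True,  "knows_polling_place": False},
    {"voted_last_election": True,  "says_will_vote": True,  "knows_polling_place": False},
]

likely = screen_likely_voters(registered)
print(len(likely), "of", len(registered), "registered voters pass this screen")
```

Change the cutoff or the criteria and a different slice of the same registered-voter sample survives -- which is exactly why two "likely voter" polls can disagree before a single ballot question is asked.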

There's also sample size and dates of fielding. A survey that fields for two days is very different from one that fields for four in a political environment where the latest big political event immediately finds its way into households through cable news and the Internet. Plus, smaller samples mean bigger margins of error.
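That last point follows from the standard margin-of-error formula for a proportion. A quick sketch, assuming the conventional 95 percent confidence level and maximum variance at p = 0.5:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a proportion p at sample size n."""
    return z * math.sqrt(p * (1 - p) / n)

# Halving-ish the sample size inflates the margin of error noticeably:
print(round(margin_of_error(1000) * 100, 1))  # ~3.1 points
print(round(margin_of_error(400) * 100, 1))   # ~4.9 points
```

Because the error shrinks only with the square root of the sample size, a pollster has to quadruple the sample to cut the margin of error in half.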

SO BEFORE WE EVEN get to the survey itself or the data weighting, you've already got a pretty major bit of subjectivity and varying methodology that's been thrown into the mix. Unless all pollsters are using the exact same sample size, fielding time, and model to determine "likely voter" behavior, you're already looking at apples and oranges. (To its credit, CNN releases both likely voter and registered voter results. While we still don't know how CNN screens for likely voters, it's good that it provides both sets of data.)

Now, consider that pollsters have different ways of asking the ballot test question. In the statewide polls James Antle blogged about yesterday, the pollsters all take different approaches. The CNN polls ask respondents to choose between the tickets, naming both the presidential and vice presidential candidates. If a voter is undecided, he is then pushed to say what candidate he is leaning toward. In the Big Ten Battleground poll, undecided voters are allowed to remain undecided. Quinnipiac pushes the leaners as well and includes them in the totals, but the Quinnipiac question does not name the vice presidential candidates.

See? Three different polls that will all show up as "Obama/McCain Ballot Tests." And all three questions are structured very differently.

Then there's data weighting. What pollsters do with the data after they get it matters. When pollsters weight data, they do so with good reason: to make sure the sample is representative of the facts they know about the population being sampled. Most do this. But what pollsters weight varies. Some weight by party ID (I discuss this further here), while some don't. Some allow partisan splits that are far, far, far outside the norm to remain, because they treat party ID as a response as fluid as "Who would you vote for?" Others treat partisanship the way you treat race or gender -- something that is much more stable in the electorate.
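To see how a party ID weighting decision can move a topline number, consider a toy example. All of these party shares and support figures are hypothetical, invented purely for illustration:

```python
# Hypothetical raw sample: party shares as interviewed, versus the
# party split the pollster believes reflects the electorate.
sample_share = {"Dem": 0.45, "Rep": 0.30, "Ind": 0.25}
target_share = {"Dem": 0.39, "Rep": 0.33, "Ind": 0.28}

# Hypothetical support for one candidate within each party group.
support = {"Dem": 0.90, "Rep": 0.10, "Ind": 0.50}

# Each group's weight scales its sample share up or down to the target.
weights = {p: target_share[p] / sample_share[p] for p in sample_share}

unweighted = sum(sample_share[p] * support[p] for p in support)
weighted = sum(sample_share[p] * weights[p] * support[p] for p in support)

print(f"unweighted topline: {unweighted:.1%}")  # 56.0%
print(f"weighted topline:   {weighted:.1%}")    # 52.4%
```

Identical interviews, a roughly three-and-a-half-point swing in the topline -- driven entirely by whether the pollster corrects the partisan split. A pollster who treats party ID as fluid keeps the unweighted number; one who treats it as stable reports the weighted one.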

So after all that filtering, sampling, surveying, and weighting, it makes sense that you'd wind up with polls that are a bit all over the place. If you're interested in keeping tabs on where the electorate is going, research the different polls, choose the ones with methodologies you trust, and if you can, evaluate them individually. With the enormous amount of data swirling around as the election nears, you've got the ability to be a discriminating consumer. Rather than treat the variation as a curse of confusion, you can pick the best polls and look to them to inform your pre-election predictions and analysis.

About the Author

Kristen Soltis is a project manager for the Winston Group, a polling firm in Washington, D.C.