Why the Polls Failed

“Things are seldom what they seem.” So sings the aging “Little Buttercup” in Gilbert and Sullivan’s H.M.S. Pinafore. Similarly, the opinions and preferences that respondents reveal to pollsters are sometimes falsely reported.

Brooding over some recently received injury to your ego, or worrying over some just-noted “symptom,” you encounter an acquaintance who cheerily asks, “How’s it going?” Your likely answer will be “Just great!” Such disingenuousness, which sometimes colors our daily discourse, is also a serious source of error in opinion polling; and nowhere is it more troublesome than in surveys of political preference.

A month or so ago, in a brief article printed in this journal, I argued that the election polls, most of which reported — over the long, long campaign — that Trump was down and Clinton remained up, were contaminated by some modicum of such false self-report. Drawing from contemporary social psychology, I suggested that the process underlying that false self-report was “evaluation apprehension.”

A moment of “full disclosure”: That term was coined by me some years ago but it has since attained rather wide use in contemporary social psychology. It points to a state of arousal experienced as “an anxiety-toned concern that one be positively judged or, at least, that one not be negatively judged.” This operates for some of the people all of the time and, far more commonly, for some of the people some of the time. One sort of occasion that does often stir evaluation apprehension (EA) appears to be the opinion survey interview in which the questioned person wants, whether he “knows it” or not, to please or at least not displease the interviewer. And this will be far more likely if the subject is political than if it concerns, say, toothpaste preferences.

Zeroing in on the Clinton-Trump race: the near unanimity of the print and broadcast media in their rejection and derogation of Donald Trump could have roused EA and the resulting disingenuousness. This would particularly be the case for one who favored or leaned toward Trump but was now being asked to voice his or her presidential preference. It would not happen in all or most cases, but it could happen in enough of them to tilt the data in an inaccurate direction. Trump himself facilitated this process by his unattractive lapses into insult and mockery.

Yet another apparent lapse further compounded the problem. Interestingly, it became publicly and “scandalously” visible later on the same day (October 7) on which my first article on these matters appeared. It was the decade-old Access Hollywood video that caught Trump seeming to brag about his sexual prowess and irresistibility. That the Trump totals then dipped in most of the polls was no surprise. That they rebounded after a few more days bespoke a link between him and a vast and aggrieved sector of the voting public which the pollsters (and most of the commentators) are only now discovering or acknowledging.

It would advance political journalism if some of its practitioners were able to acquire a more detailed understanding of the art (science it never was and can never be) of public opinion polling. If they had actually achieved some working sophistication, adepts like Chris Matthews would not so readily turn from awarding pleasure credits (“I felt this thrill going up my leg” while contemplating Obama) to exhorting vigilante justice as he urged, late on election night, that “we should tar and feather all of the pollsters.”

Another, more kindly broadcast journalist suggested that the polls did no wrong since, after all, they usually reported error margins of plus or minus 3 percent. As I said to him through the TV screen: “Yes, my boy, but then roughly half the errors should have shown Trump ahead!” The serious point is that error limits, as statistically calculated, are understood to reflect errors in sampling, respondent discards, quota excesses, or poor interviewer performance. Such errors, across a range of surveys, would be “random” in their distribution rather than one-sided. There are ways that might detect and reduce the one-sided tilting of data due to the activation of EA, but they are not easily worked into quick conversations with a respondent on a cell phone.
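The distinction between random, two-sided sampling error and a one-sided tilt can be made concrete with a small simulation. The sketch below is not drawn from the article; the true Trump share and the “shy” misreporting rate are purely illustrative assumptions, with the shy rate standing in for EA-driven false self-report. With no misreporting, roughly half of the simulated polls overstate Trump and half understate him; with even a modest misreporting rate, nearly every poll errs in the same direction.

```python
import random

def simulate_polls(true_trump_share=0.48, n_respondents=1000, n_polls=200,
                   shy_rate=0.0, seed=1):
    """Return each poll's error (reported Trump share minus true share).

    shy_rate is the hypothetical fraction of Trump supporters who, out of
    evaluation apprehension, report the other candidate to the interviewer.
    All numbers here are illustrative assumptions, not real polling data.
    """
    rng = random.Random(seed)
    errors = []
    for _ in range(n_polls):
        reported_trump = 0
        for _ in range(n_respondents):
            supports_trump = rng.random() < true_trump_share
            # A "shy" supporter misreports and is not counted for Trump.
            if supports_trump and rng.random() >= shy_rate:
                reported_trump += 1
        errors.append(reported_trump / n_respondents - true_trump_share)
    return errors

# With no misreporting, errors scatter symmetrically around zero; with a
# small shy rate, the bulk of polls understate Trump in the same direction.
for rate in (0.0, 0.05):
    errors = simulate_polls(shy_rate=rate)
    understating = sum(e < 0 for e in errors) / len(errors)
    print(f"shy_rate={rate:.2f}: {understating:.0%} of polls understate Trump")
```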

I have written these quick words as a social psychologist who might be able to illuminate an important methodological issue. But beyond methodology lies understanding. We have been spared a return to managerial statism at the end of its tether. Future historians, and present journalists who have managed to read a book or two, will have to ask why half of the electorate (the half somewhat more male, less coastal, less college-educated and less wealthy) enabled a Quixote of our time to get so far beyond the windmills.
