Polls: Don’t believe ’em!

One of the most effective forms of deception in politics involves the abuse of public opinion polling.

It’s rare that public opinion polls are relevant to stories on political issues. Regarding most of the important issues with which politicians deal at the national level, the public simply doesn’t have an opinion. But that doesn’t stop polls from measuring opinions that the public can’t have.

In a 2010 Pew poll, 41% couldn’t name the Vice President; that figure was 29% in a 2011 Newsweek poll. (Newsweek’s report on this poll idiotically blamed people’s ignorance on “income inequality.”)

Americans are often polled on issues about which they know next to nothing. One recent Bloomberg poll, taken before the selection of Janet Yellen as the new chairman of the Federal Reserve System, “found” that 33% approved of President Obama’s efforts in selecting a new chair, and 25% disapproved. Only 42% admitted they weren’t sure. In fact, the number of people aware of the Larry Summers/Janet Yellen rivalry and other aspects of the selection process is too small to measure. The fact that 58% expressed an opinion is an indicator not of actual opinion on the Fed chairman selection process but of the eagerness of poll respondents to give some answer, any answer, and thus please the pollster or make themselves seem well-informed.

As a philosophy professor named Thomas Michael Norton-Smith noted in a 1994 article:

Respondents in surveys conducted in 1978 and 1979 were asked whether or not they favored the passage of obscure pieces of legislation about which they could have known little, if anything. Despite their lack of even rudimentary knowledge about the Agricultural Trade Act (1978) and the Money Control Bill (1979), 31% of respondents ventured an opinion on the former, and 26% on the latter. On another survey respondents in the greater Cincinnati area were asked, “Some people say that the 1975 Public Affairs Act should be repealed. Do you agree or disagree with this idea?” A third of the respondents ventured an opinion on the Public Affairs Act even though it did not exist.

Even when the pollster screened people for their knowledge or lack thereof, 10% still expressed an opinion on the fictional measure.

I’ve seen polls asking people their opinions on complex issues such as the Chemical Weapons Convention or the early-1980s nuclear freeze proposal, and articles reporting the results despite evidence that fewer than 10% of people actually knew anything about those issues.

(To be clear: I’m not suggesting that people in general or voters in particular are stupid. In my opinion, the average American understands the world better than, say, a typical attendee at the annual White House Correspondents’ Dinner. But people can have informed opinions only on topics about which they are aware, and it’s unreasonable to expect the typical hairdresser or steelworker or chicken farmer to sit down after a long day of work and study complex matters of public policy. That’s why there is a tremendous burden on those of us who get to be fulltime policy wonks, a burden to work hard to explain issues to people who don’t get to spend a lot of time studying these things.)

Sometimes pollsters try to get around this knowledge problem by explaining an issue to respondents. I’ve seen “questions” that are more than 200 words long. Of course, the response to such a “question” is not a response to the issue itself but to the wording of the question. (My rule of thumb is that no poll question is valid if it’s longer than 12 words.)

A few years ago, there was a poll in the Washington Post that was apparently intended to make it look as if voters in the DC suburbs of Northern Virginia supported a proposed tax hike referendum. The question about whether people supported the tax hike was asked only after respondents were presented with a long list of wonderful things that could happen if the tax hike passed, things such as a reduction in the length of the awful DC-area commutes. The poll “showed” that people backed the tax increase… which the voters then handily defeated.

Once upon a time, there were few public opinion polls, and political reporters were quite aware of the unreliability of polls. Today, the science of polling has greatly improved, and a poll conducted in accord with strict scientific standards can produce valuable and accurate results, given the statistical margin of error (the error rate that all polls, even the best-conducted polls, suffer due to the math associated with the way polling works).
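To make that margin of error concrete, here is a minimal sketch (my own illustration, not from any poll discussed here) of the standard approximation for a simple random sample: the 95% margin of error on a reported percentage is roughly 1.96 times the square root of p(1-p)/n.

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a proportion p
    measured from a simple random sample of n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# A typical national poll of about 1,000 respondents,
# on a question that splits near 50/50:
print(round(margin_of_error(0.5, 1000) * 100, 1))  # about 3.1 points
```

Note that this is the best case: real-world polls that weight their samples or use complex designs have a larger effective margin of error than this simple formula suggests.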

Today, the proliferation of polls has led political reporters to rely on polls that are deeply flawed: broken in how various groups are sampled or in how the questions are worded. Often, there are errors in interpretation; the poll results themselves are OK, but political reporters misinterpret them in their stories.

Let’s take a look at some recent polls with significant problems: polls that were widely reported but should never have been the basis for news stories.

[A note: For discussion purposes, we must assume that polls reported in the major national news media were actually conducted—that they aren’t the result of some guy filling out the responses without actually conducting the survey or conducting it in the manner claimed. Of course, we don’t know that, and we can’t know that unless we’re in the room (so to speak) when the poll is conducted. Occasionally, I come across a widely reported poll that is highly questionable, that has results that just don’t make sense. It’s hard to prove outright fraud, even though it does occur. To be ethical, a journalist should treat any poll with a grain of salt unless he or she can personally verify the fact that the poll was conducted as advertised. For this article, though, let’s put that matter aside.]

 

Sampling

Among recent polls with problems, there’s the New York Times/CBS poll of September 19-23. First, 19% of respondents are people who can’t even be bothered to register to vote, despite all the laws making it absurdly easy (such as when you get a driver’s license or sign up for welfare). There’s nothing inherently wrong with surveying “all adults” rather than registered voters or likely voters, as long as that is highlighted, underlined, made explicit in the news stories based on the poll. Which didn’t happen.

The poll also seems to have oversampled certain groups compared to others. The ratio of people with children at home (who lean Republican) to people without children (who lean Democrat) should be about two-to-one, but, in the NYT/CBS sample, the ratio is 32-29, virtually a tie. Married people (who lean Republican) should outnumber never-married people (who lean Democrat) by three-to-one, but, in the poll, it’s about two-to-one (53-27).  Protestants (who lean Republican) should be 77-78% of the respondents, but they’re 46%.

Conservatives should outnumber liberals by almost two-to-one, but the poll has the ratio at 34-23.

And the share of people who claim not to have health insurance or health care coverage, which should be less than 10% unless illegal aliens are included, is a whopping 16%.

Of course, demographers might argue over the exact representation of each group in the poll. The problem is that, as far as I can tell, every case of apparent oversampling/undersampling favors the liberal or Democrat side. Honest errors would tend to be more evenly distributed and would balance each other out.
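To see how such a skew shifts a topline number, consider this minimal sketch (with hypothetical figures of my own, not numbers from the poll): when a group is overrepresented relative to its assumed population share, the raw topline drifts toward that group’s answer.

```python
def reweighted_share(support_by_group, sample_share, target_share):
    """Compare a raw topline (weighted by the skewed sample shares)
    against a topline weighted by the assumed true population shares."""
    raw = sum(support_by_group[g] * sample_share[g] for g in support_by_group)
    adj = sum(support_by_group[g] * target_share[g] for g in support_by_group)
    return raw, adj

# Hypothetical numbers: married respondents back a measure at 40%,
# never-married respondents at 60%; the sample is 50/50,
# but the assumed population split is 66/34.
raw, adj = reweighted_share(
    {"married": 0.40, "never_married": 0.60},
    {"married": 0.50, "never_married": 0.50},
    {"married": 0.66, "never_married": 0.34},
)
print(round(raw, 2), round(adj, 2))  # prints: 0.5 0.47
```

Even this modest oversampling moves the reported topline by about three points, which is why the direction of every sampling error matters.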

In the poll results, as reported by the New York Times and CBS, we don’t see the so-called “cross-tabs”—the data showing how each group responded to each question and how the groups compared to each other. Thus, it’s impossible to determine the precise degree to which sampling problems might have affected the results. But the apparent skew is certainly enough to suggest that the results are off by several points on some of the close questions.

 

Wording

Now consider the bizarre wording of a question on Obamacare: “Regardless of how you personally feel about the 2010 health care law, what would you like to see Congress do when it comes to the 2010 health care law? (1) Uphold the law and make it work as well as possible or (2) try to stop the law from being put into place by cutting off funds to implement it.”  Not only is the question five times the maximum length for a legitimate question, in my opinion, but it includes what people in politics call “weasel words” to elicit a particular response. Those words: “make it work as well as possible.”  Who’s against that? Amazingly, despite the slanted wording, 38% still favored defunding (with another 1% favoring blocking the law without defunding), and only 56% backed making it “work as well as possible.”

The reporting on that question from the NYT/CBS poll ignored the wording of the question and suggested that people supported funding Obamacare, with no qualifiers. Here’s how the Washington Post reported the result: “[T]oday’s New York Times/CBS News poll finds that by 56-38, Americans support upholding Obamacare rather than defunding it . . . “

Here’s the wording of another question: “In general, do you think it is acceptable for a President or members of Congress to threaten a government shutdown during their budget negotiations in order to achieve their goals, or is that not an acceptable way to negotiate?”  That’s called a leading question, the sort that a lawyer can’t ask of one of his witnesses because the favored answer is made clear by the wording of the question. It’s 39 words, including “acceptable”… “threaten”…”shutdown”…”in order to achieve their goals”…”not an acceptable way to negotiate”—all terms that telegraph what the correct answer is supposed to be.  Not surprisingly, the result was “acceptable,” 16%, and “unacceptable,” 80%.

Another oft-reported question asked people whom they would “blame” more in the case of a partial government shutdown, with the result “Republicans in Congress,” 44%; “Barack Obama and Democrats,” 35%. But those figures include people who wouldn’t blame anyone—that is, people who support the idea of a shutdown. How many? Other polls have shown support for the shutdown ranging from the teens to the high 30s. The NYT/CBS poll, despite a biased wording on the question (“How do you feel about the possibility of a government shutdown—mostly satisfied or mostly frustrated?”) found 10% “mostly satisfied” with a shutdown. Another result showed 19% saying it would be “worth risking a shutdown of the federal government to defund the health care law.”

That’s enough to render meaningless any figure based on a who-do-you-blame question!  Imagine if people were asked, “Who do you blame for the Cowboys victory in their last game with the Redskins?” How would Cowboys fans answer that question—“blaming” the Cowboys for winning or “blaming” the Redskins for failing to stop the Cowboys or “blaming” the Redskins for trying to stop the Cowboys or what? Failing to isolate the pro-shutdown people from the anti-shutdown people makes the nine-point difference between Republican “blame” and Democrat “blame” meaningless. Yet the poll was widely reported as supporting the idea that Republicans were getting the “blame.”

 

Left out of the mainstream

Despite all the flaws, people ideologically aligned with the President were found on many key measures to be in the minority or outside the mainstream.

The poll indicated that only 5% of respondents had “a lot” of confidence in the Federal Reserve’s ability to promote economic growth (“some,” 27%; “not much,” 25%; “none,” 13%). That means that a whopping 5% agree with the President’s position on this issue.

Only 30% thought Barack Obama cares “a lot” about “the needs and problems of people like yourself.”  (He cares “some,” 32%; “not much,” 16%; “not at all,” 21%.)

Only 17% strongly approved of “the health care law that was enacted in 2010,” with 34% strongly disapproving. The law will “hurt the national economy,” 49%; “improve the national economy,” 26%.

Should the debt ceiling be raised “because the government must pay its existing bills and obligations,” or should the debt ceiling be raised with offsetting spending cuts, or not raised at all? Despite the weasel words about bills and obligations, part of a question more than 100 words long, the result was “raised without conditions,” 17%; “raised with spending cuts,” 55%, and “not raised,” 24%. In other words, the President’s position, raising the debt ceiling without offsetting spending cuts, was opposed by 17% to 79% (55% + 24%). [In a September 20-23 Bloomberg poll giving an either-or choice between an unconditional increase in the debt ceiling vs. “requiring spending cuts when the debt ceiling is raised even if it risks default,” respondents opposed the unconditional increase by 61-28.]

It turns out that the mainstream position wins even when the poll is otherwise tilted toward the Left. Other recent polls have found people favoring “smaller government with fewer services” over “bigger government with more services” by 60-35, and classifying the federal government as “too powerful” versus “not powerful enough” by an astonishing 60-7! That’s right: The President’s position is shared by seven percent of the population, according to one recent poll (Gallup).

 

A common problem

The problems in the recent New York Times/CBS poll are common to public opinion polls that are widely reported. A recent Peter Hart/CNBC poll, September 16-19, included 9% who admitted to not being registered to vote, and oversampled Obama voters compared to Romney voters by 15 or 16 points. (Obama got 8% more votes than Romney in the 2012 election, but self-proclaimed Obama voters in the Hart/CNBC sample outnumbered self-proclaimed Romney voters by 25%!) Conservatives outnumbered liberals only 34-20 (with “very liberal”—the Obama position—at 8%), but the conservative-liberal ratio should be almost two-to-one.

The poll also displays biased wording in its questions—for example, warning that failing to raise the debt ceiling would result in the federal government “not meet[ing] its financial obligations . . . defaulting on its loans and not making payments to Social Security recipients and government workers.” Yet even with the sampling problem and the biased wording, 19% favored totally eliminating funding for Obamacare “even if it means government shut-down and default,” 15% favored the defunding but not with shutdown and “default,” another 4% favored defunding but weren’t sure about shutdown/default, while 44% opposed the defunding.

Adjust the poll for the sampling problem regarding Obama and Romney voters, and the slight advantage to the Obama side on that question probably goes away. (We can’t know for sure because the data aren’t provided.)

And, despite the problems, the poll found that respondents were against Obamacare on such matters as whether it will improve health care in general or the health care of their own families. The U.S. healthcare system, compared to a year ago: “better,” 15%; “worse,” 49%. Obamacare: “very positive,” 14%; “very negative,” 35%; overall, negative by 46% to 29%.

Another Peter Hart poll, this time done with NBC and the Wall Street Journal over October 7-9, had similar problems. The respondents include many people who aren’t registered to vote (14%)—an inclusion that is not itself misleading so long as the news stories reporting the poll make it clear.  (Most of them don’t.)  More of a problem is that Obama voters, who should outnumber Romney voters by about 8%, outnumber them in the poll by 26%.

In this poll, government employees or people in their households make up at least 20% of the respondents. (I can’t find a figure on households with government employees, but local, state, and federal employees are slightly more than 1/15th of the population, so that 20% figure for government employees and their households seems very high. Outside of the field of national security, the vast majority of federal employees lean liberal/Democrat, especially when the topic of the day is a government shutdown.)

As in the other polls examined here, the questions were biased, using terms like “bankruptcy” and “defaulting on [the federal government’s] loans” to describe a refusal to raise the debt ceiling. Despite the tilt, however, more respondents expressed concern about rising federal spending and debt (41%) than worried about the “bankruptcy” of the federal government (37%). And the respondents opposed President Obama’s refusal to negotiate with Republicans unless they re-open the government and raise the debt limit (52% disagreeing with Obama vs. 40% agreeing, and 36% strongly disagreeing vs. 30% strongly agreeing).

The poll provides another example of bias in wording, with a question about whether “Government should do more to solve problems and help meet the needs of people” or “Government is doing too many things better left to businesses and individuals.” Supposedly, the first choice posted a 52-44 advantage. But, of course, Option A is one with which many conservatives, libertarians, etc. would agree. They would say that government should solve problems and help people by letting them keep more of their own hard-earned money and leaving them alone to make most critical decisions about how to live their lives. Given that the wording makes little sense, the question appears to have been worded specifically to inflate the number of responses that can be reported as favoring Big Government. Most likely, though, the bias is the result of an unconscious process; liberals subscribe to a stereotype holding that non-liberals just don’t care about regular people or have compassion for their suffering.

 

Conclusion

When a public opinion poll is conducted scientifically, with a carefully selected (or weighted) sample and with short, carefully worded questions, it can produce information that is valuable to the public discussion of political issues.

But most polls aren’t like that.

Most polls—at least, most polls that are publicly reported—are full of sampling glitches, biases, “push” questions in which the wording suggests a favored response, straw-men questions in which one side’s position is misrepresented, and other flaws. Nevertheless, political discussions on television are replete with references to these polls, few of which have been read by the reporters and commentators who cite them. At best, they read the press release versions that distort the actual results of the polls.

For political reporters and commentators, polls are a crutch. They are discussed to fill the column-inches or the time that ought to be used for reporting and analysis of the issues themselves. It’s time to start giving polls the respect they deserve, and not one ounce more.
