Polling Problems and Why We Should Still Trust (Some) Polls

SUMMARY

UNIFYING THEME: Information Marketplace: Ensuring the Public has the Data

Elections indicate who wins, but not why. Public opinion polling, done right, remains the best way of obtaining citizens' opinions. While some suggest two consecutive polling "fails" in presidential elections destroy trust in the process, policy makers in a representative democracy should pause before branding all polling data with the same mark.


by Joshua Clinton, Abby and Jon Winkelried Chair and Professor of Political Science

Another election, another apparent miss by too many of the pre-election polls that drove election coverage throughout the 2020 cycle.  Official post-mortems are actively underway, but pre-election polls clearly understated support for Republicans across the country.  So, what does this mean for polling?

Some have suggested that two consecutive "fails" in presidential elections means that no poll can be trusted.  This conclusion is too hasty and too broad.  It fails to appreciate the essential role public opinion polls can and should play in maintaining the legitimacy of our representative government and informing policy makers.


Putting aside 2020 for a minute, why should we bother with polling?

Representative government relies on elected officials acting in the interests of the citizens they represent.  But how can those officials know what the people might want from their government?  Elections indicate who wins, but not why.  Politicians are often eager to claim that their victory represents a mandate, but the only thing an election actually reveals is that a majority of voters preferred one candidate over another - a decision that can be driven as much by attraction as by repulsion, and as much by partisanship and identity as by policy.

But how can we know what voters think?  Politicians in earlier periods relied on newspaper editorial pages to assess community priorities, but few would suggest that this works nowadays given the dramatic decline in local newspapers and the increasing politicization of some news sources.  Protests can reveal public discontent, but it is hard to know how broadly the views of those passionate enough to take to the streets -- or write letters or attend meetings -- resonate in society at large.  Social media platforms have increased the ease of expressing political opinions, but it is hard to discern what the public thinks from posts made by politically active citizens, trolls, and 'bots.

At the end of the day, public opinion polling, done right, remains the best way of obtaining citizens' opinions.  By proactively attempting to give everyone an equal chance of being heard, public opinion polls provide a way of obtaining the views of citizens who are uninterested or unable to express their political views otherwise.

To be clear, public opinion is important, but it cannot and should not wholly determine public policy or legislative action.  Political leadership can, and should, help inform and shape public opinion - especially in a representative democracy where our elected officials often have more time, expertise, and awareness of the complex situations facing the nation than ordinary citizens. Moreover, public opinion may be incoherent, based on a misunderstanding of reality, or only weakly held.  Despite these important caveats, knowing the public's opinion matters because it reveals what the public thinks it wants from its government, and this awareness can help highlight when and where political leadership is required.


But about that 2020 pre-election polling…

After the 2016 election, it was clear that there were problems with pre-election polls.  Putting aside whether pre-election polls should be used to make such projections, the 2016 election outcome caused great soul-searching among pollsters, and several reasons for the relatively poor performance of state-level polls were identified: late-deciding voters in the critical swing states unexpectedly chose President Trump by large margins, many state-level polls failed to account for the relationship between education and vote choice, and the polls in several close states failed to correctly predict the electorate's size and composition.

Heading into the 2020 election, pre-election pollsters had reasons for optimism.  Not only did the 2018 pre-election polls correctly predict that the Democrats would recapture the House of Representatives, but pre-election polling practices had also changed and many polls now accounted for the importance of education. In addition, unlike the uncertain and unsettled electorate of 2016, voters in 2020 were largely decided heading into Election Day.

Like 2016, pre-election polls continued to drive campaign news coverage throughout the 2020 cycle.  At least 1,572 state-level presidential polls were conducted and publicly released - including 438 in the last two weeks alone - and these polls resulted in many hours of coverage devoted to speculation about what the results meant, why the candidates were performing as they were, and what could change before Election Day.  Given how well Democrats were performing in pre-election polls being done in the "red" states of Iowa, Ohio, and Texas, many expected that 2020 might be a "blue wave" election.

As we now know, this did not happen.  Moreover, the 2020 pre-election polls managed to do worse than the 2016 pre-election polls at the state level, understating support for Republican candidates by 5% on average. Despite nearly 500 polls being done in the last two weeks and a dramatic increase in the sophistication of pre-election surveys, the understatement of Republican support was the largest polling error in recent memory.


So, does 2020 prove that polls are too broken to trust?

Confronted with the sense that the polls "failed" in 2016 and 2020 - and that the polls in 2020 failed to accurately measure the support for Republicans - does this mean that public opinion is unknowable from public opinion polls?  Or, perhaps even worse, that the polls are inherently skewed against Republicans?

The poor performance of pre-election polls in 2020 was consequential and unfortunate, but it does not necessarily impugn the accuracy of all public opinion polling.  Pre-election polling is different from, and more difficult than, public opinion polling that seeks to gauge the opinions of citizens in a state (or country).  In fact, some prominent polling organizations, such as Gallup, have put aside pre-election polling to focus exclusively on public opinion polling.

Pre-election polls must figure out two things. First, who is going to vote, and second, who those voters will vote for.  A mistake in either will create polling errors, and both are unknowable in advance.  In 2016, for example, not only was the higher turnout in Republican (rural) areas relative to Democratic (urban) areas unexpected by pre-election pollsters, but so too was the extent to which late-deciding voters would support President Trump.  While a pre-election poll with too few Republicans is obviously unlikely to correctly predict the outcome of an election, an unavoidable issue with pre-election polling is that we can never be sure of how many Republicans are too few (or too many) until well after the actual election.  Pre-election polls must inevitably make very consequential decisions about what they think the electorate is going to be without any way of knowing whether those decisions are correct.
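To see how consequential those composition decisions can be, consider a minimal Python sketch.  The partisan split and within-group candidate margins below are invented for illustration only; the point is simply that misjudging the electorate's composition by a few points can shift the implied result by roughly as much as the 2020 state-level errors.

```python
# Illustrative sketch with hypothetical numbers: how the assumed composition
# of the electorate drives a pre-election poll's implied margin.

def implied_dem_margin(share_dem_leaning: float,
                       margin_among_dem_leaning: float = 0.90,
                       margin_among_rep_leaning: float = -0.88) -> float:
    """Democratic margin implied by an assumed partisan split of the electorate."""
    share_rep_leaning = 1.0 - share_dem_leaning
    return (share_dem_leaning * margin_among_dem_leaning
            + share_rep_leaning * margin_among_rep_leaning)

# A pollster who assumes a 52% Democratic-leaning electorate...
print(f"Assumed 52% Dem-leaning electorate: {implied_dem_margin(0.52):+.1%}")
# ...will miss badly if the actual electorate is 49% Democratic-leaning,
# even if every respondent reports their preference accurately.
print(f"Actual  49% Dem-leaning electorate: {implied_dem_margin(0.49):+.1%}")
```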

These decisions matter because the people who answer surveys nowadays are often not a random sample of the electorate.  Because the average respondent is older, more educated, whiter, and more female than the average voter, pollsters must make statistical adjustments to ensure that the pool of respondents better resembles the population of interest.  For pre-election polls, this means making an educated guess about who is going to vote and what that implies about the composition of the electorate - a guess that can always be wrong.
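To make the idea of those statistical adjustments concrete, here is a minimal sketch of simple cell weighting on a single variable, with invented numbers.  Pollsters typically adjust on several variables at once (age, race, education, gender, and so on), often via raking, but the underlying logic is the same: respondents from over-represented groups are weighted down and respondents from under-represented groups are weighted up.

```python
# Minimal cell-weighting sketch with hypothetical numbers: weight each
# respondent so the sample's education mix matches the assumed population mix.

from collections import Counter

# Assumed population shares (e.g., from the Census or a turnout model).
population_share = {"college": 0.35, "no_college": 0.65}

# A toy sample that over-represents college graduates.
respondents = ["college"] * 60 + ["no_college"] * 40

sample_counts = Counter(respondents)
n = len(respondents)

# Weight for each group = population share / sample share.
weights = {
    group: population_share[group] / (sample_counts[group] / n)
    for group in population_share
}

print(weights)  # {'college': 0.583..., 'no_college': 1.625}
# Weighted tallies now reflect the target population rather than the raw sample.
```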

Polls that seek to measure the public opinion of an entire state, in contrast, face a much easier task because we know what a state should look like.  When conducting our Vanderbilt Poll, for example, we know exactly how our respondents compare to the state - unlike pre-election polls, we don't need to guess what we think the state looks like.  Whereas pre-election pollsters can make mistakes because the electorate turns out to be different than what they thought, we know what the state of Tennessee looks like, and this lessens the opportunity for error. There are certainly a lot of ways for polls to get it wrong - more on that in future posts - but because we know what a state looks like, we can be far more confident that our sample of poll respondents reflects the overall population when the poll is not trying to predict an election.


All Polls are Not Created Equal

The outcomes of the 2016 and 2020 presidential elections have caused some to dismiss all polling as hopeless and helpless.  The polling errors of the last two cycles were nowhere near as large as those that occurred in 1936 and 1948, but they have caused many to discount the importance and value of public opinion polling.  It is certainly unfortunate that the coverage of polls often does not adequately convey the many decisions that pollsters must make when analyzing a poll - as well as the potential consequences of those decisions - but it is too easy to dismiss all polls based only on the recent performance of pre-election polls.

It would also be a mistake.  Pre-election polls must make educated guesses about what the electorate will be and mistakes in those guesses can produce inaccurate poll results.  Polls that seek to describe the opinions of a state (or country), in contrast, benefit from the fact that we know what the composition of the state should be and pollsters are better able to ensure that their poll respondents are representative of that population.

Public opinion polls continue to play an important role in a representative democracy by highlighting the opinions, priorities, and beliefs of its citizens.  But people are correct to view pre-election polling with a critical eye.  Horse race numbers should not be taken as gospel by pundits or politicians.  However, policy makers in a representative democracy should pause before branding all polling data with the same mark lest we lose one of the few ways that we have of assessing what the public thinks.  Paraphrasing Churchill, public opinion polls may very well be the worst way to assess what the public thinks, aside from all the alternatives.



Joshua Clinton

Joshua Clinton is the Abby and Jon Winkelried Chair and Professor of Political Science at Vanderbilt University, where he uses statistical methods to better understand political processes and outcomes. He is the co-director of the Vanderbilt Center for the Study of Democratic Institutions, which launched the Vanderbilt Poll in January 2011 to provide a non-partisan and scientifically based reading of public opinion within the state of Tennessee and the city of Nashville. He is a Senior Elections Analyst at NBC News and the Editor in Chief of the Quarterly Journal of Political Science.  His work has been featured in peer-reviewed outlets including the Proceedings of the National Academy of Sciences, Science Advances, the American Political Science Review, the American Journal of Political Science, the Journal of Politics, and Public Opinion Quarterly, in addition to many media outlets. His specializations include politics in the U.S. Congress, campaigns and elections, the testing of theories using statistical models, and the uses and abuses of statistical methods for understanding political phenomena.