This post is a draft proposal for an undergraduate interdisciplinary course on polling, bridging statistics and political science.
Statistical topics could include voter turnout, the herding phenomenon, stratification and post-stratification, weighting, sampling theory, accounting for bias from different sources, detection of voter fraud, and what-if analyses based on second choices, proportional representation, and transferable votes.
Other topics would include factors involved in voter turnout, voter suppression, and disenfranchisement; the game theory of strategic voting; proportional representation versus first-past-the-post; along with other topics recommended by a political scientist, which I can't yet speak about with authority.
Topic: Demographic Factors.
Different demographics are more likely to answer their phones and respond to questions from polling companies; for example, those with higher faith in authority. The probability of answering polling questions, and of voting, varies greatly between demographics. For example, people under the age of 30 tend to vote at roughly half the rate of people over 30. As such, even if you do manage to get someone under 30 to answer their cell phone and give their opinion, that response should be discounted or weighted lower in an election-predicting poll, because younger respondents are less likely to vote even when they say they will.
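To make this concrete, here is a minimal sketch of turnout weighting; the respondents, support figures, and turnout probabilities are all made up for illustration.

```python
# Minimal sketch of turnout weighting, with made-up numbers.
# Each respondent gets a weight proportional to their estimated
# probability of actually voting, so low-turnout groups count less.

respondents = [
    # (age_group, supports_candidate_A, estimated_turnout_probability)
    ("under_30", True,  0.35),
    ("under_30", True,  0.35),
    ("over_30",  False, 0.70),
    ("over_30",  True,  0.70),
    ("over_30",  False, 0.70),
]

raw_support = sum(a for (_, a, _) in respondents) / len(respondents)
weighted_support = sum(p for (_, a, p) in respondents if a)
total_weight = sum(p for (_, _, p) in respondents)

print(f"Raw support: {raw_support:.0%}")                            # 60%
print(f"Turnout-weighted support: {weighted_support / total_weight:.0%}")  # 50%
```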
Topic: Social Desirability Bias.
Social desirability bias is the phenomenon in which someone answers a question in the way they perceive to be more desirable to society at large. In some situations we can measure the amount of social desirability bias directly. For example, if one were to estimate the number of alcoholic drinks people consume by asking them directly, one would typically get about half of the true amount.
Statistics Canada found this one-half factor by comparing survey answers to the amount of liquor sold through Ontario's liquor distribution branches, which consistently report selling twice as much alcohol as residents of Ontario report drinking. (Note: almost all alcohol purchased in Ontario is consumed in Ontario, and only a small fraction of the alcohol consumed in Ontario was purchased elsewhere, such as in another province or across the US border.)
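As a possible classroom exercise, a back-of-the-envelope correction might look like the following; the per-capita figures here are invented for illustration, not Statistics Canada's actual numbers.

```python
# Back-of-the-envelope under-reporting correction, with invented numbers.
litres_sold_per_capita = 8.0      # from sales records (hypothetical figure)
litres_reported_per_capita = 4.1  # from survey self-reports (hypothetical)

underreport_factor = litres_sold_per_capita / litres_reported_per_capita
print(f"Respondents report about 1/{underreport_factor:.1f} of true consumption")

# A survey estimate for a subgroup could then be rescaled by the same factor:
subgroup_reported = 3.2
print(f"Corrected subgroup estimate: {subgroup_reported * underreport_factor:.1f} litres")
```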
Topic: Three or more choices.
When we start to look at situations in which the options are not simply binary, like approve/disapprove or a preference between two presidential candidates, things become much more complicated.
Consider the four major political parties of Canada. Preferences for these parties correlate with each other in differing amounts. For example, people who report the NDP as their primary choice often report the Green Party as their secondary choice, and vice versa, while people who report the Conservative Party as their primary choice often list no secondary choice at all. This means that even if you could get an unbiased measurement of party preference among the four parties, you still wouldn't have a clear picture of the political spectrum of the population at any given moment. You would need to map the four parties on some sort of grid rather than placing them on a single left-right axis.
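One way students could explore this is by cross-tabulating first and second choices. A small sketch, using hypothetical survey responses:

```python
# Sketch: cross-tabulating first and second choices from (hypothetical)
# survey responses to see which parties' supporters overlap.
import pandas as pd

responses = pd.DataFrame({
    "first_choice":  ["NDP", "Green", "Conservative", "Liberal", "NDP", "Conservative"],
    "second_choice": ["Green", "NDP", "None", "NDP", "Green", "None"],
})

# Rows: first choice; columns: share of that group naming each second choice.
overlap = pd.crosstab(responses["first_choice"], responses["second_choice"],
                      normalize="index")
print(overlap.round(2))
```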
Possible case study: Detection of Voter Fraud in Russia.
A strange statistical phenomenon appeared in the station-level returns of Russia's 2011 legislative election: a correlation between turnout at different polling stations and the proportion of votes cast for the ruling United Russia party, the party of Vladimir Putin. Stations in the top 20% of voter turnout were the only ones where United Russia received more than 90% of ballots. This correlation does not appear in the most recent elections of Western countries: turnout and vote share for the winning party were unrelated in those elections, even in cases where there could be an 'enthusiasm gap' between supporters of different parties.
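A minimal sketch of the kind of check involved, using placeholder station-level numbers rather than real returns:

```python
# Sketch: checking for a turnout/vote-share correlation across polling
# stations. Real analyses use official station-level returns; the arrays
# here are placeholders.
import numpy as np

turnout = np.array([0.52, 0.58, 0.61, 0.74, 0.88, 0.95])          # per station
incumbent_share = np.array([0.41, 0.44, 0.43, 0.60, 0.85, 0.93])  # per station

r = np.corrcoef(turnout, incumbent_share)[0, 1]
print(f"Correlation between turnout and incumbent share: {r:.2f}")
# In clean elections this correlation is typically near zero;
# a strong positive value is a red flag worth investigating.
```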
In the 2018 presidential election, a different but equally strange phenomenon appeared: an unusually large number of stations reported exactly 80%, exactly 85%, or exactly 90% turnout, even at polling locations with a large number of votes. The chance of genuine spikes appearing in the distribution of turnout figures is very low, and having them heaped at round numbers is especially suspicious.
Source: Akarlin.com
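A sketch of how one might test for this heaping at round turnout values; the simulated data here stand in for real station-level figures:

```python
# Sketch: testing for "heaping" of reported turnout at round percentages.
# station_turnouts would be the official per-station turnout figures;
# here we simulate fraud-free data to show the expected baseline.
import numpy as np

rng = np.random.default_rng(0)
station_turnouts = rng.normal(0.65, 0.10, size=100_000)  # simulated, clean

# Count stations landing within 0.1 percentage points of each round value,
# and compare against the local average over a wider window.
for round_value in (0.80, 0.85, 0.90):
    n_at = np.sum(np.abs(station_turnouts - round_value) < 0.001)
    n_expected = np.sum(np.abs(station_turnouts - round_value) < 0.01) / 10
    print(f"{round_value:.0%}: observed {n_at}, expected about {n_expected:.0f}")
# Large spikes at exactly 80%, 85%, 90% relative to neighbouring values
# would be the suspicious signature described above.
```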
Possible case study: The herding phenomenon.
Polling companies conduct political polls as part of their marketing strategy, to demonstrate their credibility and expertise. Naturally, they are wary of standing out with unusual (i.e., risky) poll results. As a consequence, each company's results tend towards the mean of the other companies' results, and their biases become highly correlated. Two statistical consequences are pseudoreplication and insensitivity to rapid changes.
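A quick simulation can show why herding amounts to pseudoreplication: a bias shared across polls does not average away, no matter how many polls you aggregate. The noise levels below are assumptions chosen for illustration.

```python
# Simulation: averaging K polls helps little when their errors share a
# common component (pseudoreplication). All numbers are assumptions.
import numpy as np

rng = np.random.default_rng(1)
true_support, n_polls, n_sims = 0.50, 10, 20_000
sampling_sd = 0.02   # each poll's own sampling noise
shared_sd = 0.02     # industry-wide herding bias, common to all polls

shared_bias = rng.normal(0, shared_sd, size=(n_sims, 1))
independent = rng.normal(0, sampling_sd, size=(n_sims, n_polls))

avg_independent = (true_support + independent).mean(axis=1)
avg_herded = (true_support + shared_bias + independent).mean(axis=1)

print(f"SD of poll average, independent errors: {avg_independent.std():.4f}")
print(f"SD of poll average, shared bias added:  {avg_herded.std():.4f}")
# The shared component does not shrink as more polls are averaged.
```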
There are many ways a polling company could adjust its methods to get a better shot at the truth. The ordering of questions, and the ordering of options within questions, seem to matter. Studies have shown that the first option in a list gets chosen more often than it would if it were lower in the list, and in a longer list this difference becomes more pronounced, partly because of memory effects.
Possible case study: Why is Rasmussen Reports far from the other polling companies?
Most of the top-rated (for credibility and transparency) polling companies, such as Ipsos Reid, have put the approval rating of Donald Trump, at least as of October 2018, at about 41%. There is very little week-to-week variation, with most companies in the 39% to 42% range. However, one company, also highly rated, seems to be drifting upwards away from the rest of the field. At the start of the presidential term it was about 3 percentage points higher than the others, reporting 44% instead of 41%. Now, almost two years into the term, that difference has grown to 10 points, such that Rasmussen Reports gives 51% where the rest of the field gives 41%. Why would that be?
Possible case study: "Dewey Defeats Truman", a Sample Selection, Weighting, Stratification, and Polling Method Blunder.
In an ideal situation, if you wanted a representative sample, you could use simple random sampling: select a thousand people, each equally likely to be included, and ask their opinions. The results would be an unbiased estimate of the opinion of the entire population.
In reality, a simple random sample is impossible in political polling. The Dewey-versus-Truman polls had a bias large enough to predict the wrong winner of the election: polls were mainly conducted by telephone, and at the time wealthier people were more likely to own a telephone, and therefore more likely to end up in the poll. The resulting numbers favored Dewey, who was more popular among the wealthy, over Truman, who actually won. (The 1948 failure is also often attributed to quota sampling and to pollsters stopping their polling weeks before the election.)
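A small simulation, with invented parameters, of how this kind of selection bias arises:

```python
# Simulation of 1948-style selection bias, with invented parameters:
# wealthier voters are both more likely to own a phone and more likely
# to prefer "Dewey", so a phone-based sample overstates his support.
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000
wealthy = rng.random(n) < 0.30
prefers_dewey = rng.random(n) < np.where(wealthy, 0.65, 0.40)
owns_phone = rng.random(n) < np.where(wealthy, 0.80, 0.30)

print(f"True Dewey support:        {prefers_dewey.mean():.1%}")          # ~47.5%
print(f"Among phone owners (poll): {prefers_dewey[owns_phone].mean():.1%}")  # ~53%
```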
Similar situations have happened in modern polling, in which older demographics are more likely to own the landline telephones that pollsters have traditionally relied on for their calls.
Further ideas:
- A hook for Indigenization of content: voter suppression and disenfranchisement, and the idea that voting is capitulating to colonialism.
- Voting blocs: How do estimates of a population change when groups vote in a highly correlated manner? (Does it reduce the effective sample size? How do we estimate bloc correlation? See the sketch after this list.)
- Gerrymandering.
- The spoiler effect.
- The effect of macroeconomic factors like unemployment on voting for the incumbent.
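For the voting-blocs item above, here is a minimal sketch using the classic design-effect formula, with illustrative numbers:

```python
# Sketch for the voting-blocs idea: with intra-bloc correlation rho and
# average bloc size m, the classic design effect is 1 + (m - 1) * rho,
# which shrinks the effective sample size. Numbers are illustrative.
def effective_sample_size(n: int, bloc_size: float, rho: float) -> float:
    """Effective n for a sample of n respondents grouped into correlated blocs."""
    return n / (1 + (bloc_size - 1) * rho)

for rho in (0.0, 0.1, 0.3):
    print(f"rho={rho:.1f}: n_eff = {effective_sample_size(1000, 20, rho):.0f}")
# rho=0.0 keeps all 1000; even modest bloc correlation cuts the
# effective sample size by a factor of several.
```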
[Image: Chica wants your support to become the first canine Prime Minister]
Special thanks to Elena Szefer, of SFU and Emmes Canada for sharing her extensive report on polling effects, and to Eric Grenier of CBC and ThreeHundredEight for the insights that led to this proposal.