Political pundits are notoriously bad at predicting elections. Sabermetrician and statistical geek Nate Silver is showing how science can succeed where pundits fail.
Editor’s note: Watch for the official launch of the Dragon Phylogeny Project and Blog in a few weeks. In the meantime we thought you might enjoy our take on the U.S. Presidential election.
Nate Silver, the statistical geek behind the FiveThirtyEight blog, gives Obama a roughly 90% chance of re-election. How is this possible when national polls and political commentators suggest such a tight race? The answer lies in the state-by-state numbers: Silver predicts that Obama will likely win all of the key swing states – Colorado, Iowa, New Hampshire and Virginia – with Romney likely to win Nebraska and North Carolina. Only Florida remains a virtual toss-up at this point.
Silver’s predictions are not without their critics, particularly on the right of the political spectrum, but when the results are finalized tonight they will all face a collective moment of truth. The Dragon Phylogeny Project has a lot of confidence in Silver’s projections. The reason is not that we are politically biased but that we possess a staunchly conservative scientific mindset – and Nate Silver uses a powerful yet flexible tool for predictive modelling: Bayesian statistics.
Presbyterian Minister and Statistical Revolutionary
Bayesian statistics are named for the Presbyterian minister Thomas Bayes, whose theorem completely turned statistical inference on its head. And this is precisely what makes Bayesian statistics so powerful for prediction. To understand why, consider the statistical cliché of flipping a coin 100 times.
Frequentist statistical inference, the most popular form of non-Bayesian inference, begins with a statistical model. In the coin-flipping example, our model might be that each flip of the coin is independent with an equal probability of heads and tails. We count the observed heads and tails from our 100 coin tosses and compare the result with the prediction of our statistical model. We can then assign a probability of getting the observed result given that model. If the result is unlikely (for example, if we get 99 heads and 1 tail), then we reject the model; in this case, we might conclude that the coin tosses are not independent or that something about the coin or the tossing apparatus is biased. This probability of the data given the model – or P(data|model) as a convenient short-hand – is the P-value often reported in the scientific literature. Typically, the statistical model is rejected when P < 0.05 – that is, when P(data|model) is less than 5%. This is usually what is meant when a scientific study finds ‘statistically significant’ results.
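As a concrete sketch of this logic, the exact binomial version of the test can be written in a few lines of Python (the function name is our own, not a standard library routine):

```python
from math import comb

def binomial_p_value(heads, flips, p=0.5):
    """Two-sided exact binomial test: the probability of a result at
    least as unlikely as `heads` under the fair-coin model, P(data|model)."""
    probs = [comb(flips, k) * p**k * (1 - p)**(flips - k)
             for k in range(flips + 1)]
    observed = probs[heads]
    # Sum the probability of every outcome no more likely than the observed one
    return sum(q for q in probs if q <= observed)

# 99 heads in 100 flips: P(data|model) is vanishingly small, so reject the model
print(binomial_p_value(99, 100))   # ≈ 1.6e-28
# 55 heads in 100 flips: P > 0.05, so a fair coin cannot be rejected
print(binomial_p_value(55, 100))   # ≈ 0.37
```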
Bayesian statistical inference turns the process completely on its head, by evaluating a given statistical model based on the available data – P(model|data). Using the same coin-toss example, we begin with the data (99 heads and 1 tail) and then ask what is the probability of a given statistical model. For example, what is the probability that the coin is unbiased (50%) or biased toward 60%, 75% or 95% heads? A probability can then be assigned to each model.
The elegance and power of Bayesian statistics lie in their ability to continuously reassess statistical predictions as new data become available. In the case of political forecasting, Nate Silver asks: “What is the probability that Obama or Romney wins State X, given all of the available polling data?”
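Silver’s actual model is far more elaborate than anything we can show here, but the core idea of updating a belief as data arrive can be illustrated with a toy conjugate-prior example for the coin: with a Beta(a, b) prior on the probability of heads, observing h heads and t tails yields a Beta(a + h, b + t) posterior, which then serves as the prior for the next batch of data.

```python
# Toy illustration of sequential Bayesian updating (not Silver's model):
# each batch of coin flips updates the Beta posterior in place.
a, b = 1, 1                              # flat Beta(1, 1) prior
batches = [(7, 3), (12, 8), (30, 20)]    # (heads, tails) arriving over time
for h, t in batches:
    a, b = a + h, b + t                  # posterior after this batch
    print(f"after {a + b - 2} flips: posterior mean = {a / (a + b):.3f}")
# The posterior mean settles near 0.610 as evidence accumulates
```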
Bayesian statistics are not just for political forecasting – they are having marked impacts in all areas of biology, including phylogenetic inference, which is really what the Dragon Phylogeny Project is all about.
In phylogenetics, the power of Bayesian statistics lies with the ability to contrast the probability of alternative phylogenies. For example, we might build a phylogeny based on DNA sequences from NCBI GenBank for the first subunit of the cytochrome c oxidase gene (COI) of humans, chimps, mice, rabbits, dogs and birds. Using a Bayesian framework, we can not only estimate the most probable tree topology but also assess the probability of alternative topologies.
For example, what is the probability that humans evolved from dog-like ancestors? It might seem a legitimate hypothesis, given that many dog owners resemble their pets. However, this hypothesis is not even remotely supported by genetic data, which consistently group humans within the primates. It’s actually pretty spectacular that Darwin was right about this, even though he had absolutely no concept of genes or DNA. This is yet another example of the power of Darwin’s great thesis.
But while there may be no question about human origins, in statistical prediction one should always be careful not to put too much confidence in any individual statistical model. All statistical models rest on assumptions and require unbiased data; garbage in = garbage out, as the saying goes. Nonetheless, two additional lines of evidence suggest that Nate Silver’s projections are right on the mark.
The Invisible Hand of Las Vegas
First, it’s one thing for pundits to make predictions on the air, but how many are willing to bet money on their predictions? Like free-market capitalism, the invisible hand of gambling tends to converge on the statistical truth, as many people making bets with incomplete information tend to balance around the ‘true’ odds. The Vegas money line is about -400 for Obama, meaning that a $4 bet wins $1. This translates to about an 80% chance of winning (i.e. you would have to win 4 out of 5 such bets just to break even). This is somewhat below Nate Silver’s prediction of a 90% probability of victory for the Democratic President, but it still shows a strong advantage for Obama.
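The conversion from a money line to a break-even probability is simple arithmetic; a small helper (our own naming) makes it explicit:

```python
def implied_probability(money_line):
    """Break-even win probability implied by an American money line.
    Favorites (negative line): risk |line| to win 100.
    Underdogs (positive line): risk 100 to win the line."""
    if money_line < 0:
        return -money_line / (-money_line + 100)
    return 100 / (money_line + 100)

print(implied_probability(-400))   # Obama at -400 → 0.8
print(implied_probability(+300))   # a +300 underdog → 0.25
```

Note that real sportsbook lines include the bookmaker’s margin, so the implied probabilities of the two sides of a bet sum to slightly more than 100%.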
Second, there is the news of a last-minute change in strategy: the Romney campaign is shifting its efforts into Pennsylvania instead of Ohio. There are, of course, many potential explanations, but the move is certainly consistent with Romney’s advisors having concluded that he has a low chance of winning Ohio and having decided that a last-minute play for Pennsylvania might provide the only chance of winning.
In just a few hours, political pundits and statistical projections will face the test of real-world voters. If Silver’s predictions turn out to be correct, you don’t have to let it shake your political leanings, but you should let it convince you of the awesome power of Bayesian statistical inference.