Predicting future events is challenging – yesterday’s result underlined that. It doesn’t matter how much data we have; data alone isn’t a source of truth. Nate Silver and his team at FiveThirtyEight attempted to caveat their predictions at every stage by calling out the larger-than-usual uncertainty they were seeing. But, in the final analysis, they and many others got the result completely wrong.
Before we get to what happened, it is worth noting that we’re going to hear many people step forward and say, “See – I predicted it.” That’s normal. It happens in every exercise that involves predicting the future. In most cases, they simply kept believing in an outcome they wanted. It is easy to put a contrarian point out there: if it hadn’t worked, well, it was contrarian anyway; now that it did, it is an opportunity to earn some press. I’d move past that quickly, because results aren’t a good proxy for process. And the lessons here are in the process.
Getting back to the issue with prediction, I think a few forces made this exercise especially challenging. First, some people were likely unwilling to admit that they were voting Republican. Second, there is a case to be made for serious sampling bias – the pollsters don’t appear to have reached a representative sample. Third, the assumption that high turnout would favor the Democrats turned out to be wrong. And, finally, it seems more than likely that the various polls were influencing each other rather than acting as independent measurements.
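To see how much damage sampling bias alone can do, here is a toy simulation. All the numbers are made up for illustration: it assumes one group of voters is simply less likely to answer a pollster’s call, and shows how far the resulting estimate drifts from the true share.

```python
import random

random.seed(42)

# Hypothetical numbers for illustration only: suppose the true electorate
# is 51% R / 49% D, but R voters are half as likely to respond to a poll.
TRUE_R_SHARE = 0.51
RESPONSE_RATE = {"R": 0.4, "D": 0.8}  # assumed, unequal reachability

def run_poll(n_dials=100_000):
    """Dial voters at random; only some respond. Return the R share
    among respondents - which is all the pollster ever observes."""
    responses = []
    for _ in range(n_dials):
        voter = "R" if random.random() < TRUE_R_SHARE else "D"
        if random.random() < RESPONSE_RATE[voter]:
            responses.append(voter)
    return responses.count("R") / len(responses)

estimate = run_poll()
print(f"True R share: {TRUE_R_SHARE:.2f}, polled estimate: {estimate:.2f}")
```

With these assumed response rates the poll lands around 0.34 against a true share of 0.51 – a gap no amount of extra dialing fixes, because more of a biased sample is still a biased sample.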
Of course, this analysis is easier in hindsight.
If you work with data on a day-to-day basis, there are a couple of takeaways. Firstly, you will never be able to build the perfect model. Treat any exercise aimed at predicting the future as a range of possibilities. Secondly, it doesn’t matter how many of your past models were right. You are only as good as your next one.
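What does “a range of possibilities” look like in practice? A minimal sketch, with assumed numbers: instead of reporting a single point forecast, simulate outcomes under the poll’s own uncertainty and report the spread and the win probability it implies.

```python
import random
import statistics

random.seed(0)

# Assumed inputs for illustration: a polled share of 52% with a 3-point
# standard deviation capturing sampling and modeling uncertainty.
poll_mean, poll_sd = 0.52, 0.03

# Simulate many plausible "true" outcomes consistent with the poll.
sims = [random.gauss(poll_mean, poll_sd) for _ in range(10_000)]

win_prob = sum(s > 0.5 for s in sims) / len(sims)
cuts = statistics.quantiles(sims, n=20)   # 5%, 10%, ..., 95% cut points
low, high = cuts[0], cuts[-1]             # central 90% range

print(f"Point forecast: {poll_mean:.0%}; "
      f"90% range: {low:.0%} to {high:.0%}; "
      f"win probability: {win_prob:.0%}")
```

The point estimate says 52%, which sounds comfortable; the simulation says the plausible range straddles 50% and the win probability is roughly three in four. The second framing is the honest one, and it is the one that survives a night like yesterday’s.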
Overconfidence can be dangerous – often fatal.