For this second episode, we’ll be talking about perhaps the most contentious topic in all of statistics: forecasting. There’s a lot of debate as to whether truly accurate forecasting is even possible. The truth is that, like many things, it’s complicated. Let’s get into it to figure out more.
First of all, let’s define forecasting as a term, because it’s not quite as simple as “predicting” something; precise prediction is, for the most part, impossible. Rather, forecasting gives a more holistic view of something, such as trends in a system, and it deals in probabilities rather than point values. In fact, we might say that the primary goal of forecasting is to understand the hidden complexities in the probability of an event occurring, not necessarily to predict at all.
So then, what techniques are used for current forecasting? I’ll go over three that tend to hit most of the core aspects of the science: Rolling Updates, Wisdom of the Crowds, and Brier Scores.
The first big part here is rolling updates. With forecasts, you can’t really set it and forget it. Rather, you need to update the forecast as relevant events or new information come out. Just keep in mind the Horse Bettor’s Fallacy here: piling on more information doesn’t automatically make you more accurate, it often just makes you more confident. You’re looking for quality information!
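One common way to formalize a rolling update is Bayes’ rule: start from your current probability, then revise it each time a new piece of evidence arrives. The episode doesn’t name Bayes specifically, so treat this as a minimal sketch of the idea; the function name and the example numbers are my own.

```python
def update(prior, likelihood_if_true, likelihood_if_false):
    """Bayes' rule: revise P(event) after seeing one piece of evidence.

    likelihood_if_true:  P(evidence | event happens)
    likelihood_if_false: P(evidence | event does not happen)
    """
    numerator = prior * likelihood_if_true
    return numerator / (numerator + (1 - prior) * likelihood_if_false)

# Start at 30%, then see evidence that's twice as likely if the event is real.
p = 0.30
p = update(p, likelihood_if_true=0.8, likelihood_if_false=0.4)
print(round(p, 3))  # 0.462
```

Each call to `update` is one “roll” of the forecast: the posterior from one piece of news becomes the prior for the next.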
The second part is wisdom of the crowds. In forecast theory, this tends to give the most accurate forecast numbers. Essentially, you take a large number of people’s individual estimates, average them together, and use that as a single, unified forecast. There are some rules to wisdom of the crowds: for example, you can’t just pick random people off the street. They have to be experts in the subject for it to work. Another is that the group must be given time to look deeply into the problem; off-the-cuff intuitive answers bring back scattershot reliability.
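The averaging step itself is simple. A minimal sketch, where the expert probabilities are made-up numbers for illustration:

```python
def crowd_forecast(individual_probs):
    """Aggregate individual probability estimates with a simple mean."""
    return sum(individual_probs) / len(individual_probs)

# Five (hypothetical) experts' probabilities for the same event.
experts = [0.60, 0.72, 0.55, 0.80, 0.63]
print(round(crowd_forecast(experts), 2))  # 0.66
```

A plain mean is the simplest aggregator; fancier schemes weight forecasters by track record, but the core idea is the same.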
Finally, we need a way to determine whether our forecasts were high quality. For this, we use Brier Scores. Brier Scores are pretty much what they sound like: measures we use to score forecasts, where lower is better. However, Brier Scores aren’t perfect. A lot of the time, there are holes in the scores. For example, a forecast for a stable system will earn a very good (low) Brier Score, whereas a forecast for a volatile system will earn a poor (high) one, despite the fact that it’s easy to forecast something that always stays the same! Comparing raw scores across systems of different difficulty can therefore be misleading.
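The Brier Score is just the mean squared difference between your stated probabilities and what actually happened. Here is a minimal sketch (the example probabilities are invented):

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between forecast probabilities and actual outcomes.

    forecasts: probabilities in [0, 1]
    outcomes:  1 if the event occurred, 0 if it did not
    Lower is better: 0 is a perfect score, 1 is maximally wrong.
    """
    assert len(forecasts) == len(outcomes)
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Confident and mostly right -> a score near 0.
print(round(brier_score([0.9, 0.8, 0.1], [1, 1, 0]), 4))  # 0.02
```

Note how the squared error rewards both accuracy and calibrated confidence: saying 0.9 and being right beats hedging at 0.6.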
Now that we’ve gone over all this, I can elaborate a bit more on what I think about forecasting. Overall, I think forecasting is an immature and imperfect science; however, I also think it is solvable. That is, in 10 or 20 years, we’ll have a much clearer view of what forecasting is and more accurate forecasts as a whole. I don’t think forecasting should be cast by the wayside, but rather embraced and researched more fully.