Apalla #11 – Statistics, Part 1

This next series is going to be on statistics. For this first go-around, we won’t be getting into the nitty-gritty of distributions and p-values. Instead, I’ll start off with something everyone can benefit from: statistical fallacies. In the second part, we’ll go into perhaps the most controversial topic in all of statistics: forecasting. But for now, let’s stick with the fallacies.

In a world where every new data source must be properly scrutinized, learning the “gotchas” associated with statistics is pretty key. In fact, statistical biases may well be the ones people fall for most often!

The most common fallacy relates to probabilities and outcomes. My favorite analogy for this comes from (I believe) Nassim Nicholas Taleb: the chance of winning the lottery is tiny, yet every draw still produces a winner. In other words, there is a big difference between something happening and the probability of something happening.

“Pssh, that’s obvious!”, I hear you say. Indeed, when it’s spelled out like that, it seems like plain common sense: a probability is not the same thing as an outcome. However, our brains don’t default to thinking that way. When we see that something has a 95% chance of happening, we believe it will happen no matter what, not that it merely has a 95% chance!
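
To make that gap concrete, here’s a quick Python sketch (just my own illustration, not anything from a source): simulate a bunch of “95% sure things” and count how often they fail anyway.

```python
import random

random.seed(42)

trials = 10_000
# Each trial is an event with a 95% chance of happening; count the times it doesn't.
failures = sum(1 for _ in range(trials) if random.random() >= 0.95)

# A "95% sure thing" still fails about 5% of the time, roughly 1 in 20.
print(f"{failures} failures out of {trials} trials ({failures / trials:.1%})")
```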

Black Swan events are a great example of this (in fact, that NNT analogy comes from his book The Black Swan!). If we give an event, say a global pandemic, a 1% chance of happening, our immediate response is that it’s too unlikely to be worth prepping for. In other words, it just won’t happen. Of course, now we know it does happen… so what we should do instead is multiply the chance of the event occurring by the damage it would cause, which gives us an expected loss to size our preparation against. Say there’s a 1% chance of a global pandemic that would cost us $100,000 in losses; the expected loss is 1% × $100,000 = $1,000, so we spend about $1,000 preparing a defense for it now. It won’t save us completely when it happens, but it’s certainly better than doing nothing!
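
Here’s that expected-loss math as a tiny Python sketch; the pandemic numbers are just the made-up ones from the example above.

```python
def expected_loss(probability: float, loss: float) -> float:
    """Expected loss of a rare event: the chance of it happening times the damage if it does."""
    return probability * loss

# Made-up numbers from above: 1% chance of a $100,000 loss.
prep_budget = expected_loss(probability=0.01, loss=100_000)
print(f"Spend roughly ${prep_budget:,.0f} on preparation now")  # Spend roughly $1,000 on preparation now
```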

Moving on to other fallacies, the next big one is The Horse Bettor’s Fallacy, named after the famous story told by Adam Robinson. The point of this bias is that the quality of information matters much more than the quantity of information. It ties into a lot of other fallacies, but it has its own unique lesson: don’t bother hoarding low-value scraps of data when making a decision; just use 2-3 good pieces of it.

Finally, we have the idea of ergodicity. When something is ergodic, that means (roughly) that it follows one fixed distribution, so the frequencies you observe over time keep matching that distribution. A good example of this is flipping a fair coin; it’s always going to follow a 50/50 distribution (barring weird incidents; remember the Black Swan rule!), so we can safely say it’s ergodic. However, we tend to overestimate the number of ergodic systems in the universe. Most things are actually non-ergodic, which for simplicity’s sake is the opposite: the distribution shifts over time or depends on the path taken, so past frequencies stop being a reliable guide. So if you think something is governed by a static distribution, think twice and confirm; it might save you from a dangerous statistical hiccup!
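
To see the difference in action, here’s a small simulation (my own sketch, using the loose definitions above): a fair coin’s heads rate looks the same whenever you measure it, while a process whose underlying probability drifts gives you different answers early versus late.

```python
import random

random.seed(0)

def heads_rate(probabilities) -> float:
    """Fraction of heads when flip i lands heads with probability probabilities[i]."""
    return sum(random.random() < p for p in probabilities) / len(probabilities)

n = 10_000

# Stable, coin-like process: the chance of heads is always 0.5.
stable = [0.5] * n

# Shifting process: the chance of heads drifts from 0.5 up to 0.9 over time,
# so frequencies measured early on stop describing the later behaviour.
drifting = [0.5 + 0.4 * i / n for i in range(n)]

for name, probs in [("stable coin", stable), ("drifting process", drifting)]:
    first_half = heads_rate(probs[: n // 2])
    second_half = heads_rate(probs[n // 2 :])
    print(f"{name}: first half {first_half:.2f}, second half {second_half:.2f}")
```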
