The Shotgun Effect

Imagine you want to investigate whether being around stupid people causes medical conditions. So you set up a trial with two epidemiological groups – one for people who live near homeopathic clinics, and one for people who live on college campuses away from homeopathic clinics (as a control). Then you check these groups for all manner of illnesses. You examine them for leukemia, lung cancer, measles, multiple sclerosis, Ebola, AIDS… all in all, over 1000 serious health conditions. Your tests are designed for confidence levels between 95% and 99.9%, which is to say false-positive rates between 5% and 0.1%. In the end, it turns out that those who live near homeopathic clinics are statistically more likely to have shingles, tuberculosis, and hepatitis C. So can we conclude that homeopathic clinics cause shingles, tuberculosis, and hep C?

God, I wish. Given how many homeopaths either buy into studies that fall prey to this kind of error or abuse it themselves, a study like this could lead to their clinics being shut down for the sake of public health. Have I mentioned that I don’t like homeopathy?

Seriously, guys, it’s just water.

So what’s the problem with the study? This is something I like to call the “Shotgun Effect”. Basically, you throw turds at a wall until one sticks. What are the odds that, in a study with 1000 components, nothing turns out to be a false positive? Let’s simplify the numbers a bit and assume that every test has a 1% chance of producing a false positive (99% confidence). Each test therefore has a 99% chance of coming back clean, and since the tests are independent, the probability that all 1000 of them avoid a false positive is 0.99^1000, which is approximately 4.32 × 10^-5, or about 0.004%. Flip that around: the probability of getting at least one false positive is 1 - 0.99^1000, about 99.996%. The odds that you’re not going to get any false positives are far less than 1%. In fact, with that many tests, the odds are actually pretty good that you’d, by sheer chance, get something way out there.
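If you’d rather watch the shotgun fire than take my word for the arithmetic, here’s a quick Python sketch (my own illustration, not from any real study; the parameters match the made-up numbers above):

```python
import random

# Shotgun Effect sketch: 1000 independent tests run on data where NO
# real effect exists, each with a 1% false-positive rate (99% confidence).
random.seed(42)     # fixed seed so the run is reproducible
NUM_TESTS = 1000    # number of conditions examined in the hypothetical study
ALPHA = 0.01        # per-test probability of a false positive

# Each test "fires" (comes back positive) with probability ALPHA,
# purely by chance, since there is no true effect anywhere.
false_positives = sum(1 for _ in range(NUM_TESTS) if random.random() < ALPHA)
print(f"Spurious 'findings' out of {NUM_TESTS} tests: {false_positives}")

# Exact probability of getting through all 1000 tests with zero
# false positives: 0.99**1000, roughly 4.32e-05 (about 0.004%).
p_clean = (1 - ALPHA) ** NUM_TESTS
print(f"Chance of zero false positives: {p_clean:.2e}")
print(f"Chance of at least one:         {1 - p_clean:.4%}")
```

Run it a few times with different seeds and you’ll typically see around ten “significant” findings (the expected count is 1000 × 0.01 = 10), every one of them pure noise.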

This doesn’t just apply to studies like the one above (although studies which examine large numbers of conditions are especially prone to this effect). You can also get it by having too many groups with modified protocols, or just by running many, many studies. In the case of homeopathy, for example, it’s trivially easy to see, even with no testing, that it doesn’t work. It’s water. If you believe homeopathy works, you are scientifically illiterate. That’s all there is to it. So why do you still get the occasional randomized controlled trial (RCT) that seems to show some effect above placebo? Well, homeopathic practitioners keep on running them! Negative study after negative study is apparently not enough to dissuade them, but any single, tenuous positive study will renew the faith – confirmation bias at work.

Gratuitous Dilbert plug.

But anyways, they run a few hundred studies, testing homeopathy on all manner of ailments, and every once in a blue moon the stars align and you end up with a study which seems to show homeopathy having some effect above placebo for a certain ailment. Of course, it wasn’t homeopathy that led to this result; it was chance. We don’t need trials to show that homeopathy doesn’t work: its scientific grounding is demonstrably false, and testing it is like running tests to see if faith healing works. It is a waste of time and resources to perform trials on something which we know shouldn’t work and for which there’s no good evidence that it does. But run enough studies, and you’re all but certain to find a handful of false positives which can’t be reliably repeated. And that’s what homeopaths cling to (well, that and totally misrepresenting the results of studies which actually show that homeopathy doesn’t work). And that’s bogus.
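To put numbers on that “run enough studies” strategy, here’s one more quick sketch (again my own illustrative numbers, assuming independent trials and the conventional p < 0.05 significance threshold):

```python
# False positives pile up fast when you just keep running trials.
# Assume a treatment with no real effect, tested at p < 0.05, so
# every "positive" trial is spurious by construction.
ALPHA = 0.05  # conventional significance threshold

for n_trials in (1, 10, 50, 100, 200):
    p_at_least_one = 1 - (1 - ALPHA) ** n_trials
    expected_hits = ALPHA * n_trials
    print(f"{n_trials:>4} trials: "
          f"P(at least one 'positive') = {p_at_least_one:6.1%}, "
          f"expected spurious positives = {expected_hits:4.1f}")
```

By 50 trials the chance of at least one “positive” RCT is over 90%, and by a couple hundred it’s a near certainty, even though the treatment under test is plain water.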