The problem is that every natural process is subject to random variation. Even without changing the conditions of an experiment, there is going to be random variation in the measurements. For example, one population of white mice might have 6 cancers, the next might have 12, and the next might have zero, all from natural variation. So the challenge in most experiments is to determine whether the thing being tested (e.g. exposure to a particular substance) is actually changing the measurements in a population, or whether the change is simply the result of random variation. That is what the 95% confidence level (the one Naomi Oreskes wants to get rid of) really means: if there were no real effect, random variation alone would produce a result this strong only 5% of the time.

Peer review cannot possibly hope to address this - the experiment was performed correctly within its parameters. The parameters simply accepted a 5% false positive rate, and the experiment "hit the jackpot".
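To make the point concrete, here is a minimal sketch (not from the post; the mean rate of 6 cancers per population is just an assumed, illustrative number) showing how much identical populations can differ by chance alone:

import numpy as np

rng = np.random.default_rng(0)
# Ten "mouse populations" kept under identical conditions, each with an
# expected cancer count of 6. The observed counts still bounce around.
counts = rng.poisson(lam=6, size=10)
print(counts)  # counts anywhere from roughly 0 to 12 are unremarkable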
This is a useful test, but I hope you can see how it can fail. Roughly 5% of the time that one measures two things that are actually uncorrelated, the test will hand back a false positive. Say the world runs 1000 studies in a year testing links that don't actually exist. Just from natural variation, about 5% of them will still seem to show a link at the 95% confidence level. That's 50 studies a year broadcasting false links, and the media will proceed to scare the crap out of you over all 50 of them.
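Here is a rough simulation of that scenario, with assumed parameters (group size 30 and an ordinary two-sample t-test; neither is from the post): every "study" compares two groups drawn from the same distribution, so any significant result is a false positive.

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
false_positives = 0
for _ in range(1000):               # 1000 studies of links that don't exist
    exposed = rng.normal(size=30)   # "exposed" group - no real effect
    control = rng.normal(size=30)   # "control" group - same distribution
    _, p = stats.ttest_ind(exposed, control)
    if p < 0.05:                    # significant at the 95% confidence level
        false_positives += 1

print(false_positives)              # typically lands somewhere around 50

Run it a few times and the count hovers near 50, which is exactly the 5% false positive rate doing its job.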
The problem with science as practised today is that journals do not like to publish experiments designed to replicate earlier results. Replication isn't "news", so it doesn't get published. Since scientists have to publish ("publish or perish") to make their careers, the false positives go into the record and nobody goes back to check them, so we are stuck with a minimum of 5% of "scientific fact" that is bollocks. And the more significant the "fact", the bigger the bollocks.
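As a back-of-envelope illustration (the numbers here are assumptions, not from the post): suppose 1000 links are tested in a year, 100 of them real and 900 not, with an 80% chance of detecting a real link, and suppose journals mostly publish the positive results.

real_links     = 100    # tested links that actually exist (assumed)
no_links       = 900    # tested links that don't (assumed)
power          = 0.8    # chance a real link gets detected (assumed)
false_pos_rate = 0.05   # the 95% confidence threshold

true_findings  = real_links * power          # 80 genuine positives
false_findings = no_links * false_pos_rate   # 45 false positives

print(false_findings / (true_findings + false_findings))  # ~0.36

Under those assumptions more than a third of the published "positive" findings are bollocks, well above the nominal 5%, and the rarer real links are among the things being tested, the worse it gets.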
2 comments:
"Dammit, where'd my cat go?" - Schroedinger.
XKCD was right on point with this question.