False positives and exaggerated results in peer-reviewed scientific studies have reached epidemic proportions in recent years. The problem is rampant in economics, the social sciences and even the natural sciences, but it is particularly egregious in biomedicine.

Exaggerations and bogus results getting published. Now how could that possibly happen?
The problem begins with the public’s rising expectations of science.

Ah. It's all the public's fault. Got it.
Being human, scientists are tempted to show that they know more than they do. The number of investigators—and the number of experiments, observations and analyses they produce—has also increased exponentially in many fields, but adequate safeguards against bias are lacking. Research is fragmented, competition is fierce and emphasis is often given to single studies instead of the big picture.

Now that's more like it. Scientists (like other people) are sometimes tempted to shade the truth in order to get ahead in their careers. And the scientific establishment is lousy at picking up on that.
Much research is conducted for reasons other than the pursuit of truth. Conflicts of interest abound, and they influence outcomes. In health care, research is often performed at the behest of companies that have a large financial stake in the results.

In climate science there's pressure from politicians to get the right results. The more right results you get, the more grants you get.
Nah - that's crazy talk! The politicians are pure as the driven snow and absolutely have no ulterior motives! And the scientists [who hid the decline - ed] are noble pursuers of holy truth! Settled! It's all settled, I say!
Back to Scientific American:
The crisis should not shake confidence in the scientific method. The ability to prove something false continues to be a hallmark of science. But scientists need to improve the way they do their research and how they disseminate evidence.

Or, in the case of climate science, they pay absolutely no attention to how the actual results track the predictions.
First, we must routinely demand robust and extensive external validation—in the form of additional studies—for any report that claims to have found something new. Many fields pay little attention to the need for replication or do it sparingly and haphazardly.
Eventually findings that bear on treatment decisions and policies should come with a disclosure of any uncertainty that surrounds them. It is fully acceptable for patients and physicians to follow a treatment based on information that has, say, only a 1 percent chance of being correct. But we must be realistic about the odds.

A big complaint about climate science is the lack of discussion of uncertainties. Perhaps the best article on this is Judith Curry's Uncertainty Monster, but the climate science establishment won't discuss the subject. Rather, we keep hearing that the science is settled.
Of course, Scientific American won't discuss these issues as they apply to climate science, or mention Dr. Curry without slandering her. There is something deeply broken about science as it is practiced today.