Thursday, July 23, 2009

Falsifiable

Generally, to be considered "scientific" a claim has to be falsifiable - there must be some conceivable test or observation that could prove it wrong, which in practice means anyone can try to duplicate your observations or results. If there's no way to do this, the claim cannot be held to be scientific. Carl Sagan used a typically accessible parable that illustrates this critical part of the Scientific Method:

"A fire-breathing dragon lives in my garage"

Suppose (I'm following a group therapy approach by the psychologist Richard Franklin) I seriously make such an assertion to you. Surely you'd want to check it out, see for yourself. There have been innumerable stories of dragons over the centuries, but no real evidence. What an opportunity!

"Show me," you say. I lead you to my garage. You look inside and see a ladder, empty paint cans, an old tricycle -- but no dragon.

"Where's the dragon?" you ask.

"Oh, she's right here," I reply, waving vaguely. "I neglected to mention that she's an invisible dragon."

You propose spreading flour on the floor of the garage to capture the dragon's footprints.

"Good idea," I say, "but this dragon floats in the air."

[Lots of ingenious tests for the dragon's existence presented and explained away.]

Now, what's the difference between an invisible, incorporeal, floating dragon who spits heatless fire and no dragon at all? If there's no way to disprove my contention, no conceivable experiment that would count against it, what does it mean to say that my dragon exists? Your inability to invalidate my hypothesis is not at all the same thing as proving it true. Claims that cannot be tested, assertions immune to disproof are veridically worthless, whatever value they may have in inspiring us or in exciting our sense of wonder.

So the primary - perhaps singular - requirement of science is data. Access to the data (to see if someone made a mistake, or to compare it against a different data set) is simply a given if something is to be considered scientific. Otherwise, how is the hypothesis falsifiable? The assertions would be immune to disproof.

An interesting thing is going on in the Global Warming debate - one group of scientists (the global warmers) is refusing to release their data. Steve McIntyre asked the UK Meteorological Office to send him their data so he could check it:
You stated that CRUTEM3 data that you held was the value added data. Pursuant to the Environmental Information Regulations Act 2004, please provide me with this data in the digital form, together with any documents that you hold describing the procedures under which the data has been quality controlled and where deemed appropriate, adjusted to account for apparent non-climatic influences
They said no. Their reasons were very, very interesting:
The Met Office received the data information from Professor Jones at the University of East Anglia on the strict understanding by the data providers that this station data must not be publicly released.
Well now. Leaving aside whether the University of East Anglia in general, and Professor Jones' projects in particular, are publicly funded, doesn't this make it hard to analyze the public policy recommendations related to climate change? The Met Office heartily agrees:
We considered that if the public have information on environmental matters, they could hope to influence decisions from a position of knowledge rather than speculation. However, the effective conduct of international relations depends upon maintaining trust and confidence between states and international organisations. This relationship of trust allows for the free and frank exchange of information on the understanding that it will be treated in confidence. If the United Kingdom does not respect such confidences, its ability to protect and promote United Kingdom interests through international relations may be hampered.
Well, well, well.

So what can we say about any conclusions, recommendations, or reports issued by the UK Met Office that are based on this data? They are unfalsifiable.

McIntyre is very unpopular indeed among the Global Warming set, because he focuses on their data. He's the reason that you never hear about the "Hockey Stick" any more - he found that the data was cooked and the computer model was buggy, in a way that produced the hockey-stick-shaped curve. How bad is the data? Some of it no longer exists:
In passing, I mention an important archiving problem. Pete Holzmann identified actual tags from the Graybill program. We found that 50% of the data had not been archived. Was this selective or not? No one knows. Graybill died quite young. His notes were notoriously incomplete. Worse, when the Tree Ring Laboratory moved a few years ago, apparently they forgot to arrange for old samples to be protected. Their former quarters were destroyed. Some of the records were apparently recovered from the trash by one scientist but others are permanently lost.
This is what the IPCC's $50 Trillion recommendation is based on. RTWT. The situation isn't just worse than you think. It's worse than you can possibly imagine. And some of you have quite good imaginations.

The science is settled, you see, but no, you can't have the data. You can't even see what was done to quality-control the data, because it might damage a government's ability to protect its national interests.

Oops, gotta go. It's those darn Deniers, back on my lawn again ...

UPDATE: More on the UK Met Office here.

3 comments:

  1. As soon as you mentioned your fire-breathing dragon in the garage, I knew where you were going with this.

    Well said!

    NOW I know why there's global warming - it's that damn dragon!

    Get rid of it for crissakes & Al Gore will shut up...

  2. Carl Sagan was a boyhood hero of mine. I watched the PBS series, read the book "Cosmos" until the pages fell out, and even had the series soundtrack album :)

    Nerd much.

    Whether it's "stupid" racist cops, healthcare reform that "the people demand", or the settled science of climate change - clearly the time for debate is over, Borepatch.

    I like to imagine all their pointy heads whiplashing forward as if on little springs, when people throw on the brakes and refuse to assimilate :)

  3. The "falsifiable" distinction is basically on the right track --- esp., it gets at the point that scientific theories aren't proved, but instead survive the risk of disproof --- but we can do better. Today people are highly motivated to do better, in order to be sufficiently thorough and rigorous that we can automate inductive reasoning. Thus, a lot of the really thorough modern treatments come out of modern interest in machine learning, or classical statistical work nearby.

    Imagine trying to use the yes-or-no "falsifiable" criterion to express the difference between (making up numbers here) 1-day weather forecasts which are 85% correct, 2-day weather forecasts which are 60% correct, 21-day weather forecasts which are 40% correct, and 5-year weather forecasts which are also 40% correct (based on the algorithm of predicting the weather will be the same as in previous years on the same date). There seems to be a transition where the weathermen stop knowing what they're talking about, but the "falsifiable" on-off distinction is an iffy guide to where it happens; the sketch just below makes the graded alternative concrete. Falsifiability is also awkward for reasoning about "relativity of wrong" issues, like why Newton's laws of motion are so damned useful given that they were found to be "false" (in experiments at speeds near that of light, or at scales where Planck's constant matters). If you try, you tend to bog down in false dichotomies and other asking-the-wrong-question issues. Modern approaches can help with this.
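    To make that concrete, here's a minimal sketch (Python, using the same made-up numbers) of the graded scoring you'd use instead of a yes/no verdict: measure each horizon's skill relative to the climatology baseline, and watch it decay toward zero rather than flip from "science" to "not science" at some line:

    ```python
    import random

    random.seed(0)

    def simulate_accuracy(p_correct, trials=100_000):
        """Fraction of simulated forecasts that verify, given a true hit rate."""
        hits = sum(1 for _ in range(trials) if random.random() < p_correct)
        return hits / trials

    # Made-up hit rates from the paragraph above. The 40% "climatology"
    # baseline is the same-weather-as-previous-years algorithm.
    horizons = {"1-day": 0.85, "2-day": 0.60, "21-day": 0.40, "5-year": 0.40}
    baseline = 0.40

    for name, p in horizons.items():
        acc = simulate_accuracy(p)
        # Skill score: 1.0 = perfect, 0.0 = no better than the baseline.
        skill = (acc - baseline) / (1 - baseline)
        print(f"{name:>7}: accuracy {acc:.2f}, skill vs. climatology {skill:+.2f}")
    ```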

    Modern approaches also give us a more rigorous, automatable understanding of Occam's Razor, which is roughly as central to CAGW (and, for that matter, to older debates such as heliocentrism) as falsifiability; the toy sketch below shows one computable version.
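    As a toy illustration (synthetic data, nothing to do with any real climate series), fit polynomials of increasing degree to noisy points and score each with BIC, a crude computable stand-in for Occam's Razor: the penalty term charges every extra parameter against the improved fit, so the quadratic that actually generated the data wins, while the higher-degree models lose despite hugging the points more closely:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic data: a quadratic signal plus Gaussian noise (purely illustrative).
    x = np.linspace(0.0, 1.0, 50)
    y = 1.0 + 2.0 * x - 3.0 * x**2 + rng.normal(0.0, 0.1, x.size)

    n = x.size
    for degree in range(1, 8):
        coeffs = np.polyfit(x, y, degree)
        rss = float(np.sum((y - np.polyval(coeffs, x)) ** 2))
        k = degree + 1                             # number of fitted parameters
        bic = n * np.log(rss / n) + k * np.log(n)  # lower is better
        print(f"degree {degree}: RSS {rss:.4f}, BIC {bic:7.1f}")
    ```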

    There are multiple ways to motivate the analysis. I think Solomonoff induction aka Kolmogorov induction is the most intuitive for software-oriented people, but the journey from that intuition to practical calculations seems painful. Thus, if you actually want to implement practical induction in software, Vapnik's analysis as given in _The Nature of Statistical Learning Theory_ seems like a better approach. There are also various other ways into this that I know less about, e.g., I am dimly aware of people setting out from Bayes' theorem and hacking through the weeds in other ways, and of approaches motivated by computer science people proving bounds on what distinctions particular kinds of learning algorithms can reliably infer from particular kinds of data.

    Some of your criticisms of CAGW --- cherry-picking and other kinds of fraud at the experimental/peer-review/funding coalface, e.g. --- are independent of statistical hairiness. (You don't need statistics to appreciate GIGO...) But the "falsifiability" criticism seems kinda fuzzy --- on the right track, but not a perfect fit --- because it is not obvious how you'd use it to answer someone who e.g. said "of course it could be falsified! if temperature remained 100.000000% stable the way you fascist MAGA climate denier inbred deniers claim!" The modern approaches I referred to don't have this limitation, and can express things like "the model does not compress the dataset enough to justify the model's complexity" or "something something VC dimension something" without getting tripped up over false dichotomy (like "see, it could be falsified by this insanely farfetched circumstance, AND IT'S NOT YOU FASCIST, SO IT'S TRUE, SO HOLD STILL SO WE CAN HIT YOU WITH THIS BIKE LOCK NO YOU CAN'T PULL A GUN NO VIOLE") and similar technical gotchas. In effect the modern statistical/ML stuff gives us systematized, precise, expressive, flexible (quantitative, not qualitative) statements of older, more qualitative concepts such as privileging the hypothesis, Occam's Razor, overfitting, and (indeed) unfalsifiability.

