It's almost certainly not. The term "bug" for strange failures in electro-mechanical devices seems to have been in common use in the 1920s, if not earlier. This bug - an actual moth - was discovered trapped in a relay of the Harvard Mark II in 1947. The operators - well aware of the common use of the term - taped the carcass into the operator's log book, ensuring their everlasting fame.
Very few computer problems these days are due to hardware failures. Sure, it still happens, but the vast majority of bugs are in software. It's because of this that a whole industry has sprung up around Internet Security - my own field of specialization. The easy bugs are the ones that cause crashes; the hard bugs are the ones that lead to security failures: subtle bugs that are very hard to catch in a QA (Quality Assurance) program, and that lead not to loss of functionality but to mass pwnage.
It's generally accepted in the industry that you can't avoid bugs, at least in non-trivial programs. Bug rates are typically measured as the number of defects per 1,000 lines of source code; better and more experienced programmers unsurprisingly write code with far fewer bugs per 1,000 lines.
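As a back-of-the-envelope illustration (the defect rates here are invented for the example, not measured industry figures):

```python
# Rough defect-density arithmetic: bugs scale with code size and team quality.
def expected_bugs(lines_of_code, defects_per_kloc):
    return lines_of_code / 1000 * defects_per_kloc

# A hypothetical 50,000-line program:
print(expected_bugs(50_000, 20))  # a rough team at 20 bugs/KLOC -> 1000.0
print(expected_bugs(50_000, 2))   # a strong team at 2 bugs/KLOC -> 100.0
```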
Bugs are sometimes called "glitches" in videogame software, and identifying them is one of the two major goals of beta programs (the other being validation of gameplay balance). There's a whole sub-genre of YouTube devoted to these glitches.
There's quite a debate in the software industry as to whether there are more bugs (security and otherwise) in Open Source or closed source (commercial, proprietary) software. The famous claim that "given enough eyeballs, all bugs are shallow" is somewhat controversial, but it's indisputable that having the source code available makes it much easier for someone to analyze software for correctness.
Much of the "evidence" that the planet is warming comes from computer models (the data is surprisingly dodgy). These models are enormous, complicated software programs that attempt to take a set of historical inputs and generate a set of outputs matching what the climate has been observed to do in the past. Models can contain a number of problems that skew their results:
- Climate parameters may be chosen artificially to force sane-looking output, rather than set to values that are physically plausible in the real world.
- Statistical analysis is common - you might even say required - but is subtle and easy to get wrong.
- Error trapping and handling can be critical - if an error that could change the output is not trapped, the model will run to completion with potentially spurious output (see the sketch just after this list).
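To make that last point concrete, here's a minimal sketch - not CRU's code; the sentinel value and readings are invented for illustration - of how a single untrapped error lets a model run to completion with garbage output:

```python
# A sketch of the error-trapping point above, with invented data.
import statistics

MISSING = -9999.0  # hypothetical "missing data" sentinel

readings = [14.2, 13.8, MISSING, 14.5, MISSING, 14.1]

# Untrapped: the sentinels flow straight into the statistics, and the
# model happily "runs to completion" with spurious output.
naive_mean = statistics.mean(readings)  # about -3323.6, obviously garbage

# Trapped: missing values are filtered out, and no valid data is an error.
valid = [r for r in readings if r != MISSING]
if not valid:
    raise ValueError("no valid readings - refusing to produce output")
trapped_mean = statistics.mean(valid)   # about 14.15

print(naive_mean, trapped_mean)
```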
We just don't know. One of the frequent requests skeptics make is for the software and the data: access to the software makes it much easier to see if there are errors that skew the results. These requests (often backed by Freedom of Information Act requests) are routinely turned down. What we do know about the software is horrifying, at least as far as software quality is concerned.
What can we glean from this? Several things, none of them good for the reputation of the "science" of Anthropogenic Global Warming:

1. The climate data sets are - by CRU's own admission - filled with decade-long gaps ("the expected 1990-2003 period is MISSING").

2. The climate data sets contain - by CRU's own admission - fabricated data ("I can make it up. So I have :-)").

3. The data is inconsistent to the point of confusion ("the WMO codes and station names/locations are identical"), and so - by CRU's own admission - a manual override process was added to the code, allowing the person running it to make arbitrary changes to the data (this bit:

    Please choose one:
    1. Match them after all.
    2. Leave the existing station alone, and discard the update.
    3. Give existing station a false code, and make the update the new WMO station.
    Enter 1,2 or 3:)

4. (speculation here) These manual overrides are not logged anywhere, meaning that for any given output of the model, it is impossible to know what was manually changed during the run, or what impact those changes had on the output.

5. (speculation here) There is no method to save these changes, so the next time the model is run it may (probably will?) produce different output. A sketch of what logging and saving these overrides might look like follows this list.
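For contrast, here's a minimal sketch of what logging and saving those overrides might look like - assuming a station-matching step like the prompt quoted above; the file name and record fields are hypothetical:

```python
# A sketch only: every manual override gets an audit-log entry and is
# saved to disk, so a later run can be audited and replayed.
import json
import time

OVERRIDE_LOG = "overrides.jsonl"  # hypothetical audit-log file

def resolve_station_conflict(existing, update):
    """Ask the operator to resolve a station conflict, and record the decision."""
    print("Please choose one:")
    print("1. Match them after all.")
    print("2. Leave the existing station alone, and discard the update.")
    print("3. Give existing station a false code, and make the update the new WMO station.")
    choice = input("Enter 1, 2 or 3: ").strip()
    entry = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%S"),
        "existing": existing,
        "update": update,
        "choice": choice,
    }
    with open(OVERRIDE_LOG, "a") as log:
        log.write(json.dumps(entry) + "\n")  # saved: a re-run can replay this
    return choice
```

With even this much bookkeeping, the "impossible to know what was changed" problem in point 4 evaporates, and point 5's different-output-every-run problem becomes detectable.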
And so we hear the predicted results of the models: predictions of dangerous or catastrophic warming to come. We don't know whether those results are valid, or simply the product of tweaking the software until it produced suitably hysterical output. The only way to know is to see the software and the data inputs. So can we see them?

No. Trust us, say the climate scientists. It's peer reviewed.
But not, seemingly, QA'ed. Even with millions of dollars in funding, there wasn't any budget left over for a test plan (a sketch of what a minimal one might look like appears after the list below). Isegoria, commenting on a Wired article about science, sums up how it should (but doesn't) work:
Lehrer's advice on how to learn from failure:

- Check your assumptions.
- Seek out the ignorant.
- Encourage diversity.
- Beware of failure-blindness.

Climate science in general, and climate modeling in particular, seems to violate each of these four rules.
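What would that missing test plan look like? Here's a minimal sketch of a "golden output" regression test, where run_model and the numbers are hypothetical stand-ins for a real climate model and a vetted reference run - re-run the model on frozen inputs and fail loudly if the output drifts:

```python
# A minimal "golden output" regression test; the model is a stand-in.
import unittest

def run_model(inputs):
    # Stand-in for the real model; deterministic on purpose.
    return [2.0 * x + 0.5 for x in inputs]

class GoldenOutputTest(unittest.TestCase):
    def test_fixed_inputs_reproduce_golden_output(self):
        inputs = [0.0, 1.0, 2.5]   # frozen test inputs
        golden = [0.5, 2.5, 5.5]   # output recorded from a vetted run
        self.assertEqual(run_model(inputs), golden)

if __name__ == "__main__":
    unittest.main()
```

Even a single test like this would catch the "different output on every run" failure mode described above.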
What about glitches in the matrix -- the kind that duplicate kittehs?
It's only a bug if it's unintentional. Any code in the climate modeling software which contributes positively to predictions that we're all gonna die unless we take all the money, enslave all the capitalists, and shoot anyone who asks too many questions, is not a bug but a feature. Any code which does not contribute to those predictions is a bug.
Your problem is that you appear to think that discovering and illustrating the truth is a goal here. It is not. If it were, they'd be working in a real science.
Lissa, as you well know, you can't have too many kittehs. That's not a bug, it's a feature!
Matt, this is why people (including me) are insisting on open availability of source code. Models are, as you point out, highly dependent on input assumptions. If these assumptions are not sane, then the output won't be, either.
QA is the first thing to get shortchanged when time pressure is applied to software development. And what better time pressure is there than "the world is ending!"?
Then again, the software did do what they wanted it to - concur with their preconceived result. I'd put it more along the lines of: when you use a bunny-shaped cake pan, I won't be too surprised when your cake turns out shaped like a bunny.