Well, when some of these friends got out of college, they landed some interesting jobs. One of them was at a company doing Artificial Intelligence. Mind you, this was back around 1980, with computers that were slower than what you're using to read this now. They knew that the hardware needed Moore's Law to drive it to smaller/faster/cheaper, but were absolutely convinced that we would have thinking computers "in 20 or 30 years".
You'll notice that it's 30 years later, and we don't have thinking computers (or flying cars, for that matter). I quite simply didn't believe them at the time, although it was a gut-feel reaction - I'd heard a lot of predictions from technologists based on extending a trend line, and had developed a healthy skepticism even at an early age.
But this idea of thinking computers keeps coming back. Even a smart guy like Aretae talks about it:
in 2050, barring the singularity, Robots will do most work, including most work we currently consider to be intellectual work, and 90+% of the population will live largely useless (in the historical sense) lives, because robots can do EVERYTHING better than they can.

I'm very skeptical, and I think that computer security can help explain why. One area that was hot, hot, hot in the 1990s was "Intrusion Detection" (IDS) - looking for patterns of log messages or network traffic that you would expect to see during an attack. For example, someone probing for services to exploit has a very characteristic, pretty unmistakable signature: lots of connections to different services from a single computer. No legitimate application behaves like that, and so it must be an attack.
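Here's a toy sketch of that kind of signature in Python (the threshold, addresses, and log format are all invented for illustration - this isn't drawn from any real IDS):

```python
# Toy version of the classic port-scan signature: flag any source
# address that connects to many distinct services.
from collections import defaultdict

SCAN_THRESHOLD = 20  # distinct destination ports from one source (made-up number)

def find_scanners(connections):
    """connections: iterable of (src_ip, dst_port) pairs from a traffic log."""
    ports_seen = defaultdict(set)
    for src_ip, dst_port in connections:
        ports_seen[src_ip].add(dst_port)
    return [ip for ip, ports in ports_seen.items() if len(ports) >= SCAN_THRESHOLD]

# A probe sweeps hundreds of ports; a normal client touches a handful.
log = [("10.0.0.5", port) for port in range(1, 200)]  # the scanner
log += [("10.0.0.9", 80), ("10.0.0.9", 443)]          # ordinary browsing
print(find_scanners(log))  # -> ['10.0.0.5']
```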
If only we could describe what most of these attacks looked like, went the thinking, we could detect them as they actually happened. It was pretty sexy stuff (if you're a security geek). The problem is that it doesn't work.
We don't understand what's normal well enough to capture it in an automated computer analysis. As a result, IDS has been shunted off to the sidelines. While most IDS systems ship with thousands of signatures, only a hundred or so ever get turned on. We used to call these the "Nifty Fifty" - the (back then) fifty signatures that never fired falsely. Now there are a hundred, maybe two hundred. Of course, no attacker worth his salt will do anything to trigger any of them, and so IDS really isn't very useful.
In technical terminology, incorrectly identifying normal activity as malicious or undesirable is called a "False Positive", and false positives are deadly for IDS systems. Customers would report that they were getting thousands of false positives a day, and so their operators would turn the systems off. Quite frankly, that's what the "Nifty Fifty" idea was trying to do - ship with most of the system turned off. Of course, at that point why would you want one at all?
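A back-of-the-envelope calculation shows why false positives kill these systems. Every number below is an assumption of mine, picked only to illustrate the arithmetic:

```python
# Why a "tiny" false-positive rate buries operators: real attacks are
# rare, benign events are not, so even 99.99% accuracy drowns the signal.
events_per_day = 50_000_000   # benign events a busy network logs daily (assumed)
fp_rate = 0.0001              # signature fires wrongly on 0.01% of them (assumed)
real_attacks_per_day = 5      # attacks actually worth catching (assumed)

false_alarms = events_per_day * fp_rate  # 5,000 bogus alerts per day
print(f"{false_alarms:,.0f} false alarms vs. {real_attacks_per_day} real attacks")
# Real attacks are roughly 1 alert in 1,000, so operators tune signatures
# off until the noise stops - which is exactly the "Nifty Fifty".
```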
And so to Artificial Intelligence. Computers do some things really well. Pure number-crunching is simply not worth doing by humans, because computers are so much faster and more accurate. But computers are unbelievably bad at pattern recognition, and do not seem to be getting better. It's not that people don't keep trying, but success is really limited. The IBM Watson Jeopardy-winning computer was really just a language-parsing system married to a database lookup system. What made it unbeatable was that it didn't have thumbs - the electronic relay it used to buzz in was simply faster than the human reaction time on their buzzers. But Watson couldn't have a conversation with you. Even a three-year-old would be a more interesting conversationalist.
And thus my skepticism about AI taking over. AI's failures in pattern recognition have been persistent, and sometimes simply spectacular:
Automatic image-analysis systems are already used to catch unwanted pornography before it reaches a computer monitor. But they often struggle to distinguish between indecent imagery and more innocuous pictures with large flesh-coloured regions, such as a person in swimwear or a close-up face. Analysing the audio for a "sexual scream or moan" could solve the problem, say electrical engineers MyungJong Kim and Hoirin Kim at the Korea Advanced Institute of Science and Technology in Daejeon, South Korea.

The model outperformed other audio-based techniques, correctly identifying 93 per cent of the pornographic content from the test clips. The clips it missed had confusable sound, such as background music, causing the model to misclassify some lewd clips. Comedy shows with laughter were also sometimes mistaken for pornography, as the loud audience cheers and cries share similar spectral characteristics to sexual sounds.

It false-positives on the laugh track. 93 percent may sound like a lot, but it's far short of a usable system. Consider a company that implemented an anti-porn surfing system that was 93% accurate. That means 7% of what employees download will be misclassified, and a bunch of that will be innocent content flagged as porn. A human being will have to investigate each of these. You'll need a roomful of HR drones to keep up with the false positives.
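Here's the rough arithmetic behind that roomful of HR drones. The download volume and review time are assumptions of mine; only the 7% error rate comes from the study:

```python
# What a 93%-accurate porn classifier costs in human review time.
downloads_per_day = 10_000   # files employees pull in a day (assumed)
error_rate = 0.07            # the 7% the classifier gets wrong

misclassified = downloads_per_day * error_rate  # 700 items per day
minutes_per_review = 5                          # human review time (assumed)
staff_hours = misclassified * minutes_per_review / 60
print(f"{misclassified:.0f} misclassified items/day, ~{staff_hours:.0f} staff-hours of review")
# At 8-hour days, that's seven-plus people doing nothing but chasing false alarms.
```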
The idea that we'll simply catalog "what the patterns are" is seductive, but so far it has led to nothing but madness - the patterns have proven too hard to classify reliably, outside of extremely limited situations. Even something as well-specified as a computer networking protocol is essentially beyond our ability to characterize, at least from an IDS point of view. We've tried, with really, really smart people (I know these people, and can vouch for their intelligence).
And after 15 years of development, people turn off their IDS systems. The failure is spectacular, and complete. I expect that people will continue to work on AI, and that we'll continue to be 20-30 years away from "thinking computers". Just like we were in 1980.
I guess this is a good time for a disclaimer, seeing as I'm making predictions about the advance of technology. Anyone who does that is a fool, so I've tried to keep this short. Your mileage may vary, void where prohibited, do not remove tag under penalty of law.