The fatal crash that killed pedestrian Elaine Herzberg in Tempe, Arizona, in March occurred because of a software bug in Uber's self-driving car technology, The Information's Amir Efrati reported on Monday. According to two anonymous sources who talked to Efrati, Uber's sensors did, in fact, detect Herzberg as she crossed the street with her bicycle. Unfortunately, the software classified her as a "false positive" and decided it didn't need to stop for her.
A "False Positive" is when something is incorrectly classified. A medical example is where your doctor tells you that you have cancer when you actually don't. False positives are a huge problem in software engineering - anyone who has ever had your antivirus block something by mistake has direct experience with this. Software that has too many False Positives is unusable. Needless to say, software developers put a huge amount of effort into reducing False Positives. The Ars Technical article describes this tradeoff pretty well:
Software designers face a basic tradeoff here. If the software is programmed to be too cautious, the ride will be slow and jerky, as the car constantly slows down for objects that pose no threat to the car or aren't there at all. Tuning the software in the opposite direction will produce a smooth ride most of the time—but at the risk that the software will occasionally ignore a real object. According to Efrati, that's what happened in Tempe in March—and unfortunately the "real object" was a human being.
That seems to be what happened here. The Uber software developers put in code that analyzes each potential obstacle the sensors identify, to decide whether it is likely to be a False Positive. That code decided that Elaine Herzberg was a False Positive, and the car ran her over.
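To make the failure mode concrete, here is a toy sketch of the kind of filtering stage being described. This is not Uber's code; the threshold, field names, and numbers are all invented for illustration.

```python
# Toy sketch of a detection-filtering stage. The threshold, field
# names, and numbers are invented; this is not Uber's actual code.

# Detections below this confidence are treated as sensor noise
# (False Positives) and discarded.
FALSE_POSITIVE_THRESHOLD = 0.6

def filter_detections(detections):
    """Keep only detections the filter believes are real obstacles."""
    return [d for d in detections if d["confidence"] >= FALSE_POSITIVE_THRESHOLD]

detections = [
    {"label": "vehicle",    "confidence": 0.95},
    {"label": "pedestrian", "confidence": 0.40},  # real, but scored low
]

print(filter_detections(detections))
# [{'label': 'vehicle', 'confidence': 0.95}]
# The pedestrian never reaches the braking logic: the False Positive
# filter has manufactured a False Negative.
```

Raise the threshold and the car brakes for plastic bags; lower it and it ignores people. That is the tradeoff Ars describes, compressed into a single constant.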
There are certainly more situations like this hiding in the software algorithms. Uber can no doubt fix this particular failure, but the fix will probably be an ad-hoc patch to their decision algorithm that won't address other, similar situations. We can't know how many of these patches the decision tree can absorb before it collapses under its own weight. Patching an algorithm can introduce new bugs, so it's not clear when the system will be safe enough to use. If ever.
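To see why this kind of patching is fragile, here is a hypothetical fix bolted onto the toy filter above. The special case is invented; the point is the shape of the change, not the specifics.

```python
# Hypothetical ad-hoc patch to the toy filter above. The special
# case is invented for illustration.
FALSE_POSITIVE_THRESHOLD = 0.6  # same invented threshold as above

def filter_detections_patched(detections):
    kept = []
    for d in detections:
        if d["label"] == "pedestrian":
            # Patch for the Tempe scenario: never discard anything
            # classified as a pedestrian, regardless of confidence.
            kept.append(d)
        elif d["confidence"] >= FALSE_POSITIVE_THRESHOLD:
            kept.append(d)
    return kept

# This fixes the one known failure, but a pedestrian the classifier
# mislabels as, say, "debris" is still discarded, and every new
# special case interacts with the ones added before it.
```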
Spell checkers fix spelling mistakes but introduce new ones - there's a whole Internet meme about unfortunate autocorrect mistakes.
And this discussion entirely ignores the problem of False Negatives, where the software simply misses something. This is like the doctor saying that you don't have cancer when you actually do, or a self-driving car failing to recognize that a toddler is an obstacle.
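Since the two error types are easy to mix up, here is a minimal side-by-side illustration using the doctor analogy; the data is invented.

```python
# Minimal illustration of both error types, using invented data.
# Each case pairs what is actually true with what the test reports.
cases = [
    ("has cancer", "test positive"),  # true positive
    ("no cancer",  "test positive"),  # False Positive: told you have cancer, you don't
    ("has cancer", "test negative"),  # False Negative: told you're fine, you aren't
    ("no cancer",  "test negative"),  # true negative
]

false_positives = sum(1 for truth, result in cases
                      if truth == "no cancer" and result == "test positive")
false_negatives = sum(1 for truth, result in cases
                      if truth == "has cancer" and result == "test negative")
print(false_positives, false_negatives)  # -> 1 1
```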
Get ready for more situations like this one. Nobody really understands how the software works, including the developers.
Makes me think of "I, Robot"...
And the saying, "Just because you can, doesn't mean you should."
I would not ever want such a vehicle.
How utterly meta. The false-positive detector returned a false positive...?
Eric, actually the False Positive detector created a False Negative.
Yes, but... the overall false negative resulted from the spurious detection of a false positive that wasn't actually there.
This stuff makes my brain hurt. (I sometimes have to deal with logically similar, if vastly simpler, situations - and have to explain to clients why a particular case is unreasonably difficult as specified. Life would be so much easier if I could send a signal back in time a few milliseconds.)
As I've said for years, the companies are not being rigorous enough in their design (to include testing and debugging); they need to achieve the level of software reliability used for aircraft autopilots, nuclear reactors, and similar systems.
This should include running millions of miles on closed courses in all weather conditions with many, many, many obstacles and other vehicles. From what I can tell, they haven't - they hype their systems, then they believe the hype that their systems can do what the salesmen claimed.
If they don't rein themselves in and get serious, their problems are going to set the industry back years, if not decades. While I haven't seen per capita fatality numbers for self-driving cars, given how few of them are on the road so far, the several publicized deaths in the last year surely exceed the per capita rate of fatal accidents for manned cars.