The fatal crash that killed pedestrian Elaine Herzberg in Tempe, Arizona, in March occurred because of a software bug in Uber's self-driving car technology, The Information's Amir Efrati reported on Monday. According to two anonymous sources who talked to Efrati, Uber's sensors did, in fact, detect Herzberg as she crossed the street with her bicycle. Unfortunately, the software classified her as a "false positive" and decided it didn't need to stop for her.
A "false positive" is when something is incorrectly flagged as a problem. A medical example is a doctor telling you that you have cancer when you actually don't. False positives are a huge problem in software engineering - anyone whose antivirus has ever blocked a legitimate file by mistake has direct experience with this. Software that produces too many false positives is unusable, so developers put a huge amount of effort into reducing them. The Ars Technica article describes this tradeoff pretty well:
Software designers face a basic tradeoff here. If the software is programmed to be too cautious, the ride will be slow and jerky, as the car constantly slows down for objects that pose no threat to the car or aren't there at all. Tuning the software in the opposite direction will produce a smooth ride most of the time—but at the risk that the software will occasionally ignore a real object. According to Efrati, that's what happened in Tempe in March—and unfortunately the "real object" was a human being.
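That tradeoff can be made concrete with a toy sketch. This is not Uber's actual code in any way - the confidence scores, the threshold, and the `should_brake` function are all invented for illustration - but it shows how a single tuning knob trades needless stops against missed obstacles:

```python
# Hypothetical sketch of the detection-threshold tradeoff.
# All scores and labels are invented; nothing here reflects any
# real perception system.

def should_brake(confidence, threshold):
    """Brake only if the detector's confidence exceeds the threshold."""
    return confidence >= threshold

# Simulated detections: (detector confidence, is it a real obstacle?)
detections = [
    (0.95, True),   # clearly a pedestrian
    (0.40, False),  # sensor noise (e.g., steam)
    (0.55, False),  # a windblown plastic bag
    (0.60, True),   # a pedestrian the detector is unsure about
]

for threshold in (0.3, 0.7):
    needless_stops = sum(1 for conf, real in detections
                         if should_brake(conf, threshold) and not real)
    missed = sum(1 for conf, real in detections
                 if not should_brake(conf, threshold) and real)
    print(f"threshold={threshold}: "
          f"{needless_stops} needless stops, {missed} missed obstacles")
```

With the cautious threshold the car stops for debris twice (the slow, jerky ride); with the permissive threshold it misses the uncertain pedestrian. There is no threshold in this toy data that eliminates both error types at once, which is the designers' dilemma in miniature.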
That seems to be what happened here. Uber's developers added code that analyzed each detected obstacle to judge whether it was likely to be a false positive. The software decided that Elaine Herzberg was a false positive and ran her over.
There are certainly more situations like this hiding in the software. Uber can no doubt fix this particular failure, but the fix will probably be an ad-hoc patch to the decision algorithm that won't address other, similar situations. We can't know how many patches the decision logic can absorb before it collapses under its own weight. And patching an algorithm can introduce new bugs, so it's not clear when the system will be safe enough to use. If ever.
Spell checkers fix spelling mistakes but introduce new ones - there's a whole Internet meme about unfortunate autocorrect mistakes.
And this discussion entirely ignores the problem of false negatives, where the software simply misses something. That's the doctor telling you that you don't have cancer when you actually do, or a self-driving car failing to recognize a toddler as an obstacle.
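The two error types are mirror images of each other, which a tiny (and entirely invented) sketch makes plain - `error_type` is a hypothetical helper, not part of any real system:

```python
# Illustration of the two error types. The function and its labels
# are invented for this example.
def error_type(predicted_obstacle, actually_obstacle):
    """Classify a single prediction against reality."""
    if predicted_obstacle and not actually_obstacle:
        return "false positive"   # phantom braking: annoying
    if not predicted_obstacle and actually_obstacle:
        return "false negative"   # a missed obstacle: dangerous
    return "correct"

print(error_type(True, False))   # a stop for nothing
print(error_type(False, True))   # the miss this article is about
```

Tuning the software to suppress one error type tends to inflate the other, which is why neither can simply be engineered away.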
Get ready for more situations like this one. Nobody really understands how the software works, including the developers.