Automobile drivers, for obvious reasons, often have much less time to react. “When something pops up in front of your car, you have one second,” Casner says. “You think of a Top Gun pilot needing to have lightning-fast reflexes? Well, an ordinary driver needs to be even faster.”
In other words, the everyday driving environment affords so little margin for error that any distinction between “on” and “in” the loop can quickly become moot. Tesla acknowledges this by constraining the circumstances in which a driver can engage Autopilot: “clear lane lines, a relatively constant speed, a sense of the cars around you and a map of the area you’re traveling through,” according to MIT Technology Review. But Brown’s death suggests that, even within this seemingly conservative envelope, driving “on the loop” may be uniquely unforgiving.

Clear lanes, constant speed, awareness of cars around you, a good map. That's about as easy as you can make it for the guidance system, and it's still too hard.
But NASA has been down this road before, too. In studies of highly automated cockpits, NASA researchers documented a peculiar psychological pattern: The more foolproof the automation’s performance becomes, the harder it is for an on-the-loop supervisor to monitor it. “What we heard from pilots is that they had trouble following along [with the automation],” Casner says. “If you’re sitting there watching the system and it’s doing great, it’s very tiring.” In fact, it’s extremely difficult for humans to accurately monitor a repetitive process for long periods of time. This so-called “vigilance decrement” was first identified and measured in 1948 by psychologist Norman Mackworth, who asked British radar operators to spend two hours watching for errors in the sweep of a rigged analog clock. Mackworth found that the radar operators’ accuracy plummeted after 30 minutes; more recent versions of the experiment have documented similar vigilance decrements after just 15 minutes.

The fallback for the guidance system is to have the driver take over, but it looks like people don't handle this situation very well. And it also looks like people don't really want to handle it at all.
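To make that handoff arrangement concrete, here is a minimal sketch (in Python, with hypothetical sensor and control interfaces; nothing here is Tesla's or NASA's actual logic) of the pattern under discussion: the automation drives only while it stays inside its engagement envelope and can produce a valid plan, and the instant it can't, control reverts to the human.

```python
# Minimal sketch of the "driver on the loop" fallback pattern.
# All interfaces (read_sensors, plan_path, etc.) are hypothetical placeholders.

import time

TAKEOVER_BUDGET_SECONDS = 1.0  # roughly the one-second reaction window Casner describes


def envelope_ok(sensors: dict) -> bool:
    """Engagement preconditions: clear lanes, steady speed, traffic model, map."""
    return (sensors["lane_confidence"] > 0.9
            and sensors["speed_variance"] < 2.0
            and sensors["traffic_model_ok"]
            and sensors["map_available"])


def supervisory_loop(read_sensors, plan_path, apply_controls, alert_driver):
    """Automate while possible; otherwise warn the driver and release control."""
    while True:
        sensors = read_sensors()
        plan = plan_path(sensors) if envelope_ok(sensors) else None
        if plan is not None:
            apply_controls(plan)           # the automation is "in the loop"
        else:
            alert_driver("TAKE OVER NOW")  # the human has only been "on the loop"
            time.sleep(TAKEOVER_BUDGET_SECONDS)
            return                         # control reverts, ready or not
```

The interesting part is the else branch: it assumes a supervisor who has been watching attentively for the whole drive, which is exactly what the vigilance-decrement research above says people cannot sustain.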
According to some researchers, this potentially dangerous contradiction is baked into the demand for self-driving cars themselves. “No one is going to buy a partially-automated car [like Tesla’s Model S] just so they can monitor the automation,” says Edwin Hutchins, a MacArthur Fellow and cognitive scientist who recently co-authored a paper on self-driving cars with Casner and design expert Donald Norman. “People are already eating, applying makeup, talking on the phone and fiddling with the entertainment system when they should be paying attention to the road,” Hutchins explains. “They’re going to buy [self-driving cars] so that they can do more of that stuff, not less.”

This problem (self-driving cars) smells a lot to me like what we've seen in the Artificial Intelligence research community: there have been widely publicized advances on very narrow, specific technology problems, yet AI has remained "just 5 years away" for 30 years. The problem there is that we really don't know what intelligence is, at least not in enough detail to specify it for a computer. Likewise, we don't understand how to react safely to the myriad of potentially dangerous driving situations in enough detail to specify that for a computer.
Maybe it's just that computers process data so differently from us that we simply can't specify these things.
Bottom line: don't expect a self-driving car anytime soon, no matter what the auto companies are saying.
So the plan is: the automated system handles everything until the problem becomes so severe that the software can't resolve a valid solution. At which point it says, "Here, you take over!"
... And hands the controls over to the napping driver with zero situational awareness.
Right. Good luck with that plan!!
Why would I even want a self-driving car? I don't even like cars that shift gears for themselves.
So because we can't have perfect automation, we choose to have no automation. If we subject humans to the same standard, there would be no need for any cars, buses, trains or airplanes because we still would not have found a suitable pilot.
Genericviews, my point is that what we will have will be VERY "not perfect", and in fact will likely kill a bunch of people.
Correction: kill MORE people, because it looks like a Tesla killed its driver.
And actually, we DO subject humans to the same standard. Unlike computers, humans are pretty decent at adapting to unforeseen circumstances.
It has nothing to do with lives or technology, present or future.
If the vehicles obey all of the laws all of the time, the revenue from fines - moving and non-moving - will disappear.
No government will allow the loss of that revenue stream.
WON'T. HAPPEN.