Wednesday, December 12, 2018

Why Uber's self-driving car killed that woman

It seems that safety wasn't an afterthought - it wasn't thought of at all:
On March 19, the world learned that Uber had a serious safety problem when a prototype self-driving car struck and killed pedestrian Elaine Herzberg in Tempe, Arizona. But signs of Uber's safety problems were evident to company insiders even before the crash. And at least one Uber manager tried to raise the alarm on March 13—just days before Herzberg's death. 
Robbie Miller worked for Google's self-driving car program until 2016, when he left for the self-driving truck startup Otto. Otto was snapped up by Uber later that year, and Miller became an operations manager in Uber's self-driving truck program. 
Miller quit his job at Uber in March 2018 and went on to lidar startup Luminar. Before he left the company he sent an email to Eric Meyhofer, the leader of Uber's self-driving car project, about safety problems at the company. The email, which was obtained by The Information's Amir Efrati, is absolutely scathing.
...
"A car was damaged nearly every other day in February," Miller said. "We shouldn’t be hitting things every 15,000 miles."
The article beggars belief.  If this is true, then Uber's self-driving program may be done; I don't see how they can credibly regain the public's trust.  This speaks to a deeply untrustworthy corporate culture:
Miller pointed to an incident in November 2017, when an Uber car had a "dangerous behavior" that nearly caused a crash. The driver notified his superiors about the problem, Miller wrote, but the report was ignored. A few days later Miller noticed the report and urged the team to investigate it. 
But Miller says his request was ignored—and when he pressed the issue with "several people" responsible for overseeing the program, they "told me incidents like that happen all of the time." Ultimately, Miller said it was two weeks before "anyone qualified to analyze the logs reviewed them."
Happens all the time.  No biggie.

Wow.

7 comments:

  1. No biggie to them, THEY won't be the ones killed... sigh

  2. I think we're going to find out they accidentally some self-driving code from Roomba...

  3. Actually, Aaron, that may not be far off the mark. It very well may be that code was used for mission-critical routines that was never designed for mission-critical applications.

  4. Leftists, politicians, and NGOs have all jumped on the self-driving bandwagon. It's coming, no matter how unsafe or unrealistic. Like catalytic converters vs. other cheaper and better technology, or Green Energy, or the push against plastics (many of which are made out of agricultural waste, so... carbon sequestration?)

    Just like the mandate for solar panels, or unrealistic mpg goals, this will also be a mandate forced upon us by our 'betters.'

    Makes me wish to get a Stanley Steamer, really it does.

  5. Beans:

    Oh, no, you want a Doble steam car. Makes the Stanley cars appear to be some backyard go-kart. Jay Leno has one. ~1931, passes current CA smog test.


    https://www.damninteresting.com/the-last-great-steam-car/

    http://www.popularmechanics.com/automotive/jay_leno_garage/1302916.html?page=1

  6. I'm not surprised - not only are they, as mentioned above, not using the proper code (or even mindset) for critical applications, they also don't appear to be fixing bugs.
    It is also not surprising since they have had problems with their self-driving program in Pittsburgh ever since it started - like driving the wrong way on a one-way street.
    While it looks like a bandwagon at this point, I believe it won't take too many more accidents before support drops and the whole idea is effectively shelved for years. If they put a little more care into it and went a little slower, adoption would ultimately be faster.

