Showing posts with label Self-driving cars.

Thursday, October 3, 2024

KIA cars can be hacked with a smartphone

I hope you don't drive a KIA.  This is actually a failure of post-manufacturing security processes, not that it makes things any better:

Sam Curry, who previously demonstrated remote takeover vulnerabilities in a range of brands – from Toyota to Rolls Royce – found this vulnerability in vehicles as old as model year 2014. The mess means the cars can be geolocated, turned on or off, locked or unlocked, have their horns honked and lights activated, and even have their cameras accessed – all remotely.

...

The issue originated in one of the Kia web portals used by dealerships. Long story short and a hefty bit of API abuse later, Curry and his band of far-more-capable Kia Boyz managed to register a fake dealer account to get a valid access token, which they were then able to use to call any backend dealer API command they wanted.

"From the victim's side, there was no notification that their vehicle had been accessed nor their access permissions modified," Curry noted in his writeup. "An attacker could resolve someone's license plate, enter their VIN through the API, then track them passively and send active commands like unlock, start, or honk."

Security wags have long called this sort of architecture "broken by design" - it was intentionally set up to allow privileged access via a poorly authenticated system that has to scale across a big organization.  I don't have much confidence that KIA can fix this, or that they will even want to.
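Curry's writeup has the gory details, but the pattern is worth making concrete.  Here's a rough sketch of the shape of the attack as described above; the portal URL, endpoints, and field names are made up for illustration and are not Kia's actual API:

```python
import requests

# Hypothetical portal and endpoints - invented for illustration, not Kia's real API.
PORTAL = "https://dealer-portal.example.com/api"

# Step 1: register a "dealer" account.  Per the writeup, nothing verified that
# the registrant was actually a dealership.
resp = requests.post(f"{PORTAL}/dealer/register",
                     json={"name": "Totally Legit Motors", "email": "attacker@example.com"})
token = resp.json()["access_token"]
headers = {"Authorization": f"Bearer {token}"}

# Step 2: the backend honors that token for any dealer command, so resolve a
# license plate to a VIN and quietly add yourself as a "user" of the car.
vin = requests.get(f"{PORTAL}/vehicles/lookup",
                   params={"plate": "ABC1234", "state": "GA"},
                   headers=headers).json()["vin"]
requests.post(f"{PORTAL}/vehicles/{vin}/users",
              json={"email": "attacker@example.com"}, headers=headers)

# Step 3: remote commands - unlock, start, honk - with no notification to the victim.
requests.post(f"{PORTAL}/vehicles/{vin}/commands",
              json={"action": "unlock"}, headers=headers)
```

The specific calls don't matter.  The point is that once a self-service registration form is the only gate, every privileged command behind it inherits that weakness.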

And oh yeah - there's a smartphone app to help the Bad Guys.

All I can say is that 1968 Goat isn't vulnerable to this attack, and will never be.

 

Wednesday, September 25, 2024

US bans Chinese "Connected Car" tech

They say it's a security concern.  They're right:

Now, the US Commerce Department is set to enact a de facto ban on most Chinese vehicles, by prohibiting Chinese connected car software and hardware from operating on US roads, according to Reuters.

The rationale? National security concerns. "When foreign adversaries build software to make a vehicle [connected], that means it can be used for surveillance, can be remotely controlled, which threatens the privacy and safety of Americans on the road," said Commerce Secretary Gina Raimondo.

"In an extreme situation, a foreign adversary could shut down or take control of all their vehicles operating in the United States all at the same time, causing crashes, blocking roads," said Secretary Raimondo, a scenario we saw depicted in Fate of the Furious (where it caused me a headache), as well as more recently (and to better effect) in Leave the World Behind.

Yup.

Now I expect there's a whole lot more behind this and the security risks are just nice window dressing, but it's pretty hard to argue with the decision.

Thursday, June 27, 2024

The Rat Bastards lose a privacy battle

Good:

One of the major data brokers engaged in the deeply alienating practice of selling detailed driver behavior data to insurers has shut down that business.

Verisk, which had collected data from cars made by General Motors, Honda, and Hyundai, has stopped receiving that data, according to The Record, a news site run by security firm Recorded Future. According to a statement provided to Privacy4Cars, and reported by The Record, Verisk will no longer provide a "Driving Behavior Data History Report" to insurers.

Skeptics have long assumed that car companies had at least some plan to monetize the rich data regularly sent from cars back to their manufacturers, or telematics. But a concrete example of this was reported by The New York Times' Kashmir Hill, in which drivers of GM vehicles were finding insurance more expensive, or impossible to acquire, because of the kinds of reports sent along the chain from GM to data brokers to insurers. Those who requested their collected data from the brokers found details of every trip they took: times, distances, and every "hard acceleration" or "hard braking event," among other data points.

You will no doubt be shocked to hear that car dealers "helped" customers opt-in, as part of getting their brand new vehicles ready for the road.

But it looks like the revenue from this didn't offset the bad PR and customer ill will associated with the program, and so they dropped it like a hot potato.

GM quickly announced a halt to data sharing in late March, days after the Times' reporting sparked considerable outcry. GM had been sending data to both Verisk and LexisNexis Risk Solutions, the latter of which is not signaling any kind of retreat from the telematics pipeline. LexisNexis' telematics page shows logos for carmakers Kia, Mitsubishi, and Subaru.

...

Disclosure of GM's stealthily authorized data sharing has sparked numerous lawsuits, investigations from California and Texas agencies, and interest from Congress and the Federal Trade Commission.

Act like a Rat Bastard, get treated like a Rat Bastard.

Monday, January 29, 2024

Interesting Security News

Item the first: follow the money:

Trend Micro's Zero Day Initiative (ZDI) held its first-ever automotive-focused Pwn2Own event in Tokyo last week, and awarded over $1.3 million to the discoverers of 49 vehicle-related zero day vulnerabilities.

Researchers from French security outfit Synacktiv took home $450,000 after demonstrating six successful exploits, one of which saw the company’s crew gain root access to a Tesla Modem. Another effort found a sandbox escape in the Musk-mobiles’ infotainment system.

Other popular targets at the three day event included after-market infotainment systems and, more troublingly, a whole host of successful hacks on EV chargers.

This is a good strategy - show me the hack, I'll show you the money.  More, please.  Plus, good on them for picking automotive computing as the target.  Long-time readers will recall that this is something I've been harping on for quite some time.

Item the second: SEC gets pwned (same link as above): 

We had our suspicions when Twitter/X blamed the US Securities and Exchange Commission for the account takeover that led to the premature release of news the regulator would allow Bitcoin exchange-traded funds – and those suspicions have been confirmed.

"The SEC determined that the unauthorized party obtained control of the SEC cell phone number associated with the account in an apparent 'SIM swap' attack," the Commission admitted last week.

For those unfamiliar with this form of attack, SIM swaps involve convincing a telecom carrier to transfer a phone number to a new SIM card (a shift for which there are a variety of legitimate reasons), giving an attacker control over communications going to and from that number – like a second authentication factor.

That didn't matter, of course, because the SEC also admitted it disabled multi-factor authentication with Twitter support in July last year "due to issues accessing the account," but no one bothered to turn it back on.

"It made security too hard and then we forgot all about it" is an excuse that I suspect that SEC investigators wouldn't accept.  Top. Men.

Monday, January 16, 2023

"Artificial Intelligence" is the new Ship Of Fools

BMW is introducing AI that talks to you.  Comrade Misfit is horrified (language alert):

A German car that talks to you? "Nein, nein, nein! Ve are not moving until you fasten your seatbelt!" "Ach du lieber, you spilled your coffee ON MY CLEAN FLOOR! WIPE IT UP, NOW!!"

Snerk.  This will take the current spoken GPS turn-by-turn directions to a new level: "REPEAT ZEE INSTRUCTIONS!!!"

Also, we're told that AI can make a better playlist, and that the bot is "your AI friend".  Nazzo fast:

"AI can now build you a better playlist"

No it can't.

And I don't have an AI friend.

(via)



Friday, September 23, 2022

Is Tesla's autopilot mode killing motorcyclists?

I don't know, but I Just Want 2 Ride sums up what's going on.  If you're a two-wheels-down sort of reader, you should go check it out.

In other news, I Just Want 2 Ride is still posting.  I think the first time I linked there was seven years ago, which is a long run in the blogosphere.

 

Monday, August 10, 2020

The more you know about self-driving cars the less safe you will feel

I've been saying this for quite some time, and every so often something else comes out about how the self-driving systems are designed.  It all just reinforces how bad these are.  Another of these revelations is out and it's a doozie.

It seems that self-driving cars are running into stopped cars, at least during test trials.  They don't have any trouble avoiding moving cars, just cars that have pulled over partly onto the shoulder.  The reason will blow your mind:

Radar has low angular resolution, so it had only a crude idea of the environment around the vehicle. What radar is quite good at, however, is figuring out how fast objects are moving. And so a key strategy for making the technology work was to ignore anything that wasn't moving. A car's radar will detect a lot of stationary objects located somewhere ahead of the car: these might be trees, parked cars, bridges, overhead signs, and so forth.

These systems were designed to work on controlled-access freeways, and, in the vast majority of cases, stationary objects near a freeway would be on the side of the road (or suspended above it) rather than directly in the car's path. Early adaptive cruise control systems simply didn't have the capability to distinguish the vast majority of objects that were near the road from the tiny minority that were on the road.

So cars were programmed to focus on maintaining a safe distance from other moving objects—cars—and to ignore stationary objects. Designers assumed it would still be the job of the human driver to pay attention to the road and intervene if there was an obstacle directly in the roadway. [emphasis added by me - Borepatch]

This is exactly why I have been saying that I don't trust these systems.  I don't know what design assumptions went into them, or into the components that make up the system.  It may even be that the system designers don't know all the assumptions, at least those in the components they use.

And the assumptions can kill you.
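To make the quoted design concrete, here's a toy sketch (mine, not any manufacturer's code) of what "ignore anything that isn't moving" looks like.  Radar reports closing speed, so a target's estimated ground speed is your speed plus its range rate, and anything near zero gets dropped - bridges, overhead signs, parked cars, and a fire truck stopped dead in your lane:

```python
from dataclasses import dataclass

@dataclass
class RadarTarget:
    range_m: float         # distance ahead, in meters
    range_rate_mps: float  # closing speed; negative means the gap is shrinking
    azimuth_deg: float     # bearing; radar's angular resolution is coarse

def targets_to_track(targets, ego_speed_mps, min_ground_speed_mps=1.0):
    """Toy version of the filtering described above: keep only the movers.

    A stationary object closes at roughly your own speed, so its estimated
    ground speed is near zero and it gets discarded - along with bridges,
    signs, parked cars... and a stopped car sitting in your lane."""
    tracked = []
    for t in targets:
        ground_speed = ego_speed_mps + t.range_rate_mps
        if abs(ground_speed) >= min_ground_speed_mps:
            tracked.append(t)
    return tracked

# Ego car at 30 m/s (about 67 mph).  A stopped fire truck 80 m ahead closes at 30 m/s.
ego_speed = 30.0
fire_truck = RadarTarget(range_m=80.0, range_rate_mps=-30.0, azimuth_deg=0.0)
slow_car = RadarTarget(range_m=60.0, range_rate_mps=-5.0, azimuth_deg=0.0)
print(targets_to_track([fire_truck, slow_car], ego_speed))  # only the slow car survives
```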


This article describes a test done by the American Automobile Association - well done, AAA.  There's a lot of ass-covering by the auto manufacturers saying that drivers are ultimately in charge, yadda yadda.  Here's the problem with that:
This may be a fundamental problem with this approach to driver assistance technology. The ADAS is supposed to do most of the driving, but the human driver is supposed to still monitor the system and make sure it doesn't make mistakes. But our brains aren't wired for this level of monotony. Monitoring a system that works correctly 99 percent of the time is in some ways harder—not easier—than just driving the car yourself. And monitoring a system that works correctly 99.9 percent of the time is even harder, because it's that much easier for our brains to get distracted by something else.
Not to mention that this isn't what marketeers are selling, or what people want to buy.  People want "Drive me to the supermarket", not "do some of the driving to the supermarket while I do some of it, too."  That's a hard sell for a system that must add thousands of dollars to the vehicle's purchase price.

Me, I want to know how to disable these damned things.

Tuesday, February 18, 2020

So what does your "smart" car do when it can't get on the 'net?

Rental car won't start because renter drove it to a park in the boondocks.  Rental agency recommends she sleep in the car and see if it starts in the morning:
Over the weekend, a trip to the Californian boonies by Guardian journalist Kari Paul turned into a cautionary tale about the perils of the connected car and the Internet of Things. Paul had rented a car through a local car-sharing service called GIG Car Share, which offers a fleet of hybrid Toyota Priuses and electric Chevrolet Bolt EVs in the Bay Area and Sacramento, with plans to spend the weekend in a more rural part of the state about three hours north of Oakland. But on Sunday, she was left stranded on an unpaved road when the car's telematics system lost its cell signal. Without being able to call home, the rented Prius refused to move.
But I'm sure that the software in autonomous cars will be able to anticipate problems like this and figure out a way around it.  Suuuuuuure.
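I have no idea what GIG's firmware actually does under the hood, but the story implies a fail-closed design: no cell signal gets treated the same as no authorization.  A toy sketch of the design choice, with a made-up endpoint:

```python
import requests

AUTH_SERVER = "https://telematics.example.com/authorize"  # made-up endpoint

def may_start_fail_closed(vehicle_id: str) -> bool:
    """What the story implies: no signal means no authorization means no start."""
    try:
        resp = requests.get(AUTH_SERVER, params={"vehicle": vehicle_id}, timeout=5)
        return resp.json().get("authorized", False)
    except requests.RequestException:
        return False   # renter is now sleeping in the car

def may_start_fail_open(vehicle_id: str, last_known_ok: bool) -> bool:
    """The alternative: fall back to the last known-good authorization."""
    try:
        resp = requests.get(AUTH_SERVER, params={"vehicle": vehicle_id}, timeout=5)
        return resp.json().get("authorized", last_known_ok)
    except requests.RequestException:
        return last_known_ok   # let the paying renter drive back to cell coverage
```

Fail-closed is a defensible choice for, say, disabling a stolen car.  It's a lousy choice when the failure is your own telematics link dropping out in the boonies.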

Friday, November 8, 2019

NTSB: Uber pedestrian fatality happened because autopilot software didn't understand that someone might jaywalk

Last year an experimental self-driving car from Uber hit and killed Elaine Herzberg in Tempe, AZ.  The NTSB has completed its investigation and said that the cause of the crash was that the software did not classify her as a pedestrian, because it could not handle the idea that a pedestrian might jaywalk:
Radar in Uber's self-driving vehicle detected pedestrian Elaine Herzberg more than five seconds before the SUV crashed into her, according to a new report from the National Safety Transportation Board. Unfortunately, a series of poor software design decisions prevented the software from taking any action until 0.2 seconds before the deadly crash in Tempe, Arizona.


Herzberg's death occurred in March 2018, and the NTSB published its initial report on the case in May of that year. That report made clear that badly written software, not failing hardware, was responsible for the crash that killed Herzberg. 
... 
Two things are noteworthy about this sequence of events. First, at no point did the system classify her as a pedestrian. According to the NTSB, that's because "the system design did not include consideration for jaywalking pedestrians." 
Second, the constantly switching classifications prevented Uber's software from accurately computing her trajectory and realizing she was on a collision course with the vehicle. You might think that if a self-driving system sees an object moving into the path of the vehicle, it would put on its brakes even if it wasn't sure what kind of object it was. But that's not how Uber's software worked.
I don't find this even a little hard to believe, but you should click through to read the whole article, which is simply horrifying.  Computers are really good at some things (like adding up long columns of figures) and really bad at other things (like identifying random objects in real time).  One thing they are terrible at is "common sense" - they don't have any that the programmer doesn't write into the code.  Things like the idea that a pedestrian might cross a road somewhere other than at a crosswalk.
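The NTSB report is the authoritative account; as a toy illustration (mine, not Uber's code) of that second point, here's what resetting tracking history on every reclassification does to a trajectory estimate.  No retained history means no velocity estimate, and no velocity estimate means no predicted collision:

```python
class ToyTracker:
    """Toy tracker that, like the system described above, throws away its
    history whenever the object's classification changes."""

    def __init__(self):
        self.label = None
        self.history = []   # (time_s, position_m) samples

    def update(self, time_s, position_m, label):
        if label != self.label:     # "vehicle" -> "other" -> "bicycle" -> ...
            self.history = []       # history discarded on every reclassification
            self.label = label
        self.history.append((time_s, position_m))

    def predicted_velocity(self):
        if len(self.history) < 2:
            return None             # can't estimate a trajectory from one sample
        (t0, p0), (t1, p1) = self.history[0], self.history[-1]
        return (p1 - p0) / (t1 - t0)

tracker = ToyTracker()
# A pedestrian walking across the lane, reclassified at nearly every cycle:
for t, pos, label in [(0.0, 0.0, "vehicle"), (0.5, 0.7, "other"),
                      (1.0, 1.4, "bicycle"), (1.5, 2.1, "other")]:
    tracker.update(t, pos, label)
print(tracker.predicted_velocity())   # None: no trajectory, so no predicted collision
```

Keep the label constant across those same four updates and the code happily reports a velocity of about 1.4 meters per second - enough to project a path and see a collision coming.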

I've said for some time that I'll never ride in one of these things, but now I need to expand that - I don't want any of these on the roads where I might be.

Friday, August 2, 2019

Another successful attack on self-driving cars

My Dad used to love to say that the reason history repeats itself is that nobody listens the first time.  We see this in computer security all the time, where the same mistakes get repeated, over and over, in each new technology field.

One of my favorite examples is from the early days of Internet shopping.  Some of the first shopping cart software ran mostly in the browser (it was written as client-side JavaScript).  People would select an item, then save the web page locally.  Then they would use an editor to find the price in the JavaScript and change it to a dollar.  Reload the locally saved page, click "Buy Now" and voila - you bought a laptop computer for a dollar.

The problem is that the e-commerce system didn't validate inputs correctly.  Fast forward 20 years to today and guess what self-driving car AI doesn't do?
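For the youngsters, the fix was (and is) simple: never trust the price the browser sends back; look it up on the server.  A minimal sketch, with a made-up catalog:

```python
# Minimal sketch of the input validation those early shopping carts skipped.
# The catalog and prices are made up for illustration.
CATALOG = {"laptop-15in": 1199.00, "mouse-basic": 19.00}

def checkout_trusting_client(item_id, client_price, qty):
    # The broken 1990s pattern: whatever price the browser sends is the price you pay.
    return client_price * qty

def checkout_validated(item_id, client_price, qty):
    # Look the price up on the server; the client-supplied number is only a sanity check.
    real_price = CATALOG.get(item_id)
    if real_price is None:
        raise ValueError("unknown item")
    if abs(real_price - client_price) > 0.005:
        raise ValueError("client price does not match catalog; rejecting order")
    return real_price * qty

print(checkout_trusting_client("laptop-15in", 1.00, 1))  # 1.0 - a laptop for a dollar
print(checkout_validated("laptop-15in", 1199.00, 1))     # 1199.0
```

Validating what a camera thinks it saw on the road is a vastly harder problem than a catalog lookup - which is rather the point.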



Now, I'm not sure quite how you'd go about validating that sort of input, but it's pretty darn important data.  I mean, someone could die.  This is a very hard problem to solve, but it's exactly the sort of problem you'd need to solve.

Imagine a projection of a traffic sign that clogs up the Washington beltway at rush hour (I mean, more so than normal).  The bottleneck would cascade as people run out of gas until nothing could move for hours and hours.  And the attack looks trivial.  There are likely a lot of variations that could tie everything in knots.

This is exactly why I have been so vocally skeptical about the viability of self-driving cars.  The easy problem is getting the AI to work under normal circumstances ("easy" is actually pretty hard, but it's certainly doable).  What's hard is to make the system robust under attack.  Saying "why would anyone want to attack the system?" is actually a sign that whatever you are building will never be fit for purpose.

Wednesday, May 22, 2019

Tesla Autopilot routinely cuts off other cars

"Using the system is like monitoring a kid behind the wheel for the very first time. As any parent knows, it’s far more convenient and less stressful to simply drive yourself."
- Consumer Reports' Jake Fisher
Absolutely scathing review of Tesla's latest Autopilot software by Consumer Reports.  Sample:
Tesla vehicles have a tendency to cut off other drivers when making lane changes, according to CR's tests. After changing lanes in heavy traffic, the Model 3 "often immediately applies the brakes to create space behind the follow car—this can be a rude surprise to the vehicle you cut off,” Fisher said.
Great.  Your car is a big old jerk.

Of course, none of this is a surprise to anyone who's been paying attention.

Monday, May 13, 2019

The song almost writes itself, doesn't it?


Seen on the Book of Faces by the Queen Of The World, who is patient when I rant about self-driving cars.

Wednesday, May 8, 2019

The Boeing 737 MAX situation, explained

In the beginning was the plan.
And then came the assumptions.
And the assumptions were without form.
And the plan was without substance.
And darkness was upon the face of the Engineers.
And they spoke among themselves saying,
"It is a crock of shit and it stinketh."
 
And the Engineers went unto their supervisors and said,
"It is a pail of dung and none may abide the odor thereof."
And the supervisors went unto their managers and said,
"It is a container of excrement and it is very strong, such that none may abide by it."
 
And the managers went unto their directors, saying,
"It is a vessel of fertilizer, and none may abide its strength."
And the directors spoke among themselves, saying to one another,
"It contains that which aids plant growth and it is very strong."
And the directors went unto the vice presidents, saying unto them,
"It promotes growth and is very powerful."
 
And the vice presidents went unto the president, saying unto him,
"The new plan will promote the growth and vigor of the company, with powerful effects."
And the president looked upon the plan and saw that it was good.
When I was a newly minted engineer, this was one of the things passed around (as photocopies - kids, ask your parents).  I was pretty green, and so thought it was breathtakingly cynical.  I had a lot to learn about how information deteriorates as it moves up an organization's structure.

It looks like this happened at Boeing:
Boeing engineers knew about the problem in 2017 – months before the fatal Lion Air and Ethiopian Airways crashes. The company only revealed this to US Federal Aviation Authority regulators after Lion Air flight JT610 crashed in October 2018, claiming in this week's statement that "the issue did not adversely impact airplane safety or operation". 
"Senior company leadership was not involved in the review and first became aware of this issue in the aftermath of the Lion Air accident," added Boeing.
Reading between the lines, it seems pretty clear that Boeing expects major lawsuits, and is preparing to try to throw their software vendor under the bus.  I expect that this won't work - after all, it's really saying "hey, we really don't know what our vendors are up to" - and quite frankly it shouldn't work.  If Boeing's lawyers are successful and their software vendor gets sued into bankruptcy, then Boeing has a whole bunch of critical software without a supplier to do changes and maintenance on it.

It looks like Boeing itself is in panic mode here.  Every move they make to try to get out in front of this seems to be digging themselves into a deeper hole.  Err, or deeper into their vessel of fertilizer.


But self-driving cars will be totally safe.  This sort of thing would never happen there.  Nosiree.

Monday, April 8, 2019

The only thing dumber than a self-driving car is a self-driving tank

Peter has a post discussing what size gun is right for small armored vehicles, which has a lot of interesting stuff if that's your bag, Baby.  But he asks a very interesting question:
In future warfare, as far as front-line combat is concerned, does infantry still have a role on the battlefield?  Is combat going to develop into a slugging match between vehicles, and possibly between unmanned systems or artificial-intelligence autonomous weapons systems?
No.

Nobody is going to put much faith into autonomous weapons systems for a long, long time.  The reason is that the failure modes are much more complex than for autonomous automobiles, and are susceptible to enemy subversion.  We're actually already seeing some of this with self-driving cars, where security researchers succeeded in tricking a Tesla into moving into the oncoming traffic lane by putting some stickers on the road surface.  Srlsy.

So far self-driving cars have been learning (mostly successfully) to avoid obstacles on well-defined roadways.  Results have been decently impressive, although nowhere near good enough for me to trust my life to one of these things.  I've posted about many of the failures here, and this really boils down to a case of underestimating how difficult the problem is, combined with a generous dose of Gee-Whiz marketing.  Essentially this is a problem space where rapid progress is made until the solution is 80% complete, at which point the people working the problem realize that they're facing the next 80%.

And remember, this is for driving on well-marked roads with lanes painted on the surface and signposts to give a lot of clues about what's coming next.  Now imagine a vehicle that has to navigate off-road, avoid obstacles, and avoid damaging property owned by friendlies, all while searching for and identifying potential targets.

Remember, the targets will be actively trying to trick the vehicle's sensors and AI algorithms.  As they say, this will be a target-rich environment.  I predict that the first time a Red Team takes on one of these vehicles, it will all be over very quickly.  The AI needs to do a lot more than identify obstacles on a well-defined roadway; it needs to do off-road navigation while figuring out whether it is being tricked or not.

The situation is very similar to the difference between getting a web site up and running, and getting one running that is hard to hack.  The first case is just getting functionality to work as designed, the second involves ensuring that the functionality cannot be bent by clever stratagem to do something that the designer doesn't want done.

Good luck with that - this is an entirely new field, with entirely new compromise possibilities.  Head-Smashed-In Buffalo Jump is a site where Plains Indians hunted bison by tricking them and driving them off a cliff.  Bison are about a billion times smarter than even the best AI, and this was a viable hunting strategy nonetheless.  Is it possible to confuse a self-driving tank into driving off a cliff?  I for one wouldn't bet big money that you couldn't.

This is not a self-driving tank (image: Wikipedia)

And so back to Peter's question.  Yes, infantry has a place on the battlefield of tomorrow.  Quite frankly one of their uses might be to override a confused AI that is about to drive over a cliff.  Infantry will be smarter than tanks for a long, long time.

UPDATE 10 APRIL 2019 17:23: Lawrence has a very interesting take on this.  I think he's right.

Wednesday, March 27, 2019

Marketing doesn't change the Truth

It just makes it "better":
The advertising industry's self-regulatory division has urged Verizon to stop claiming that it has America's first 5G network, but Verizon claims that its "first to 5G" commercials are not misleading and is appealing the decision. 
The National Advertising Division (NAD), an investigative unit managed by the Council of Better Business Bureaus, announced its recommendation to Verizon last week. The NAD investigated after a challenge lodged by AT&T, which has been misleading customers itself by renaming large portions of its 4G network to "5G E." But AT&T's challenge of Verizon's 5G ads was "the first case involving advertising for 5G" to come before the self-regulatory body, the NAD said.
You're going to see a boatload of these adverts over the next couple of years.  Take them with a big grain of salt.

Kind of like Tesla's "autopilot".  Maybe a bit more important, there.  After all, crappy "5G" won't kill you.

Wednesday, March 6, 2019

Link Dump

Here are some things that are all interesting, but none of which tipped the scales for a stand-alone post.

Uber will not face criminal charges for Elaine Herzberg's death.  The DA is still deciding about charges for the driver, who seems to have been streaming TV to her phone instead of looking at the road when the fatal crash happened.  Uber has already settled for an undisclosed sum with Herzberg's family.

The pull quote from that article?  An email from an Uber developer: "We shouldn't be hitting things every 15,000 miles."  Gee, ya think?

I'm not a prepper but I know that some of you are.  This is an interesting post on someone starting a garden with a view to long-term SHTF survivability.  Seed storage lifetime and time from planting to harvest are covered, along with how squash will cross into new types that may or may not be long-term productive.  It's probably not a lot of new info for hard-core preppers but is a good quick introduction.

Did you ever wonder how the Internet works?  How do messages get delivered?  This is a quite accessible (although slightly technical) overview.

The great unpublished story about how oil exploration has been revolutionized (and killed "peak oil") has an unlikely angle: gravity.  This is simply fascinating.  It appears that we can "see" oil deposits wherever they are on the planet.  Now this is the 21st Century that I had been promised.

And an update to yesterday's post about NSA discontinuing its mass surveillance program (maybe):




Friday, February 15, 2019

Self-Driving cars: unsafe at any speed

A Tesla on autopilot drove itself into a wreck.  The failure mode is interesting:
Yet another example came to light on Monday when a driver in North Brunswick, New Jersey wrecked his Tesla on a highway while the vehicle was in Autopilot mode. According to a report published by News 12 New Jersey, the driver said that the vehicle "got confused due to the lane markings" at a point where the driver could have stayed on the highway or taken an exit. The driver claims that Autopilot split the difference and went down "the middle", between the exit and staying on the highway.

The car then drove off the road and collided with several objects before coming to a stop. The driver claims that he tried to regain control of the vehicle but that "it would not let him".
Insty is skeptical, but I'm not.  This is exactly the kind of situation that you should suspect the software could handle badly: confusing input from signs or lane markers leading to a failure to navigate the car on a safe route.  It's not a software bug, it's a gap in the algorithm used to control the car.

I'm not sure that this is solvable, either.  The way software developers handle these "edge cases" is to (a) ignore them if possible (I can't see Tesla being able to do that), or (b) write a special case condition that covers the situation.  The problem with the latter option is that there can be hundreds of these special cases that need to be coded in.  That makes the software a huge bloated mass whose behavior nobody can really predict.  Validation becomes really hard and QA testing becomes essentially impossible.
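To be concrete about option (b), here's a caricature (mine, not Tesla's code) of what edge-case accretion looks like in a lane-choice routine.  Each branch gets added after some incident, the branches start interacting, and soon nobody can say what the function will do with an input nobody anticipated:

```python
def choose_path(lane_markings, exit_detected, gore_width_m, speed_mps):
    """Caricature of edge-case accretion in a lane-selection routine.
    Every branch below was added after some incident; none of them compose cleanly."""
    if lane_markings == "clear":
        return "follow_lane"
    if exit_detected and lane_markings == "diverging":
        return "stay_left"                  # special case #37: exit ramps with faded paint
    if gore_width_m > 2.0 and speed_mps > 25:
        return "stay_left"                  # special case #112: wide gore areas at highway speed
    if lane_markings == "diverging":
        return "center_of_available_space"  # special case #203: split the difference (this one crashes)
    # ...hundreds more, each one patching the last...
    return "follow_lane"
```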

And this is without postulating software bugs - this is all trying to make the algorithm suck less.  Of course, the more code you have to write, the more bugs you will have - remember that validation becomes really hard and testing well nigh impossible?  You'll end up with an unknown number of potentially fatal bugs lurking in there.

That won't change until we have a different type of computer (probably one not based on the von Neumann architecture).  If you want to get really computer geeky (and I know that some of you do), automotive autopilot problems are almost certainly NP-Hard.  For non-computer geeks, that means if you want to code one of them, then good luck - you're going to need it.

The bottom line: I have absolutely no intention of ever trusting my life to software that's trying to solve an NP-Hard problem.  I know (and admire) many software developers, but this is flying too close to the sun.  Someone's wings will melt.

Wednesday, January 9, 2019

Tab clearing

Here's a grab bag of items that are only related by the fact that they're in this grab bag.

B alerts us to the fact that a huge amount of what is reported as "Science" is in fact a scam.  The Iron Law of Bureaucracy applies to Department Heads and University Presidents as much as (or maybe more than) to any bureaucrat.  I would go so far as to say that today's scientific bureaucracy essentially ensures that there will be a crisis of reproducibility.

I almost never go on Facebook, because they're simply evil - they sell your data to anyone who will pony up.  So what, you say?  Here's what:
A lot of people probably don’t care if Netflix or Microsoft have access to their “private” messages. But technology companies aren’t the only kids on the block with big bucks. Do you really want your health insurance company having access to your “private” messages? That medical issue that grandma messaged you about may be hereditary and the fact that you might face it at some point may convince your health insurance company to up your premium. Would Facebook provide access to your “private” messages to health insurance companies? You have no way of knowing.
Related: this cannot be said often enough:


Reality is starting to catch up to (and overwhelm) the hype about self-driving cars.  It's about time, but this quote from the article is pretty pathetic:
"I've been seeing an increasing recognition from everybody—OEMs down to various startups—that this is all a lot tougher than anybody anticipated two or three years ago," industry analyst Sam Abuelsamid told Ars. "The farther along they get in the process, the more they learn how much they don't understand."
We have Top Men working on it.  Top.  Men.  They obviously don't read this blog because I've been talking about this for years.

Once again I must point out that the cyber security job market is red hot and you don't need a college degree to get into it.  You can study on your own and take certification tests for small money (a few grand, max) and find yourself making big bucks without a huge amount of college debt - and without all the Snowflake indoctrination that goes with it.  Some companies even offer scholarships.  If you are (or know) a young man who's smart and has some get-up-and-go, this might be his ticket.

Philip emails in response to my post about Sidecarcross racing (Motocross with sidecars):
If you think dirt bike side car racing is as mad as a box of frogs, try looking at some Isle of Man TT side car road racing. 150 MPH at times on a flat platform with no hand holds and not strapped in is a bit too hirsute for me to do, methinks!
I'm with him 100%.  In my 20s I might have thought that Sidecarcross was cool enough to try out (heck, I did dirt biking, so it's just a short step from that).  But even the 22-year-old me would never have tried this - which, as he says, is indeed madder than a box of frogs:

Wednesday, December 12, 2018

Why Uber's self-driving car killed that woman

It seems that safety wasn't an afterthought - it wasn't thought of at all:
On March 19, the world learned that Uber had a serious safety problem when a prototype self-driving car struck and killed pedestrian Elaine Herzberg in Tempe, Arizona. But signs of Uber's safety problems were evident to company insiders even before the crash. And at least one Uber manager tried to raise the alarm on March 13—just days before Herzberg's death. 
Robbie Miller worked for Google's self-driving car program until 2016, when he left for the self-driving truck startup Otto. Otto was snapped up by Uber later that year, and Miller became an operations manager in Uber's self-driving truck program. 
Miller quit his job at Uber in March 2018 and went on to lidar startup Luminar. Before he left the company he sent an email to Eric Meyhofer, the leader of Uber's self-driving car project, about safety problems at the company. The email, which was obtained by The Information's Amir Efrati, is absolutely scathing.
...
"A car was damaged nearly every other day in February," Miller said. "We shouldn’t be hitting things every 15,000 miles."
The article beggars belief.  If this is true, then Uber's self-driving program may be done; I don't see how they can credibly regain the public's trust.  This speaks to a deeply untrustworthy corporate culture:
Miller pointed to an incident in November 2017, when an Uber car had a "dangerous behavior" that nearly caused a crash. The driver notified his superiors about the problem, Miller wrote, but the report was ignored. A few days later Miller noticed the report and urged the team to investigate it. 
But Miller says his request was ignored—and when he pressed the issue with "several people" responsible for overseeing the program, they "told me incidents like that happen all of the time." Ultimately, Miller said it was two weeks before "anyone qualified to analyze the logs reviewed them."
Happens all the time.  No biggie.

Wow.

Monday, June 11, 2018

Self Driving Car software is worse than you think

This is kind of jaw-dropping.  The cars cannot avoid ramming stationary objects at high speed:
A natural reaction to these incidents is to assume that there must be something seriously wrong with Tesla's Autopilot system. After all, you might expect that avoiding collisions with large, stationary objects like fire engines and concrete lane dividers would be one of the most basic functions of a car's automatic emergency braking technology. 
But while there's obviously room for improvement, the reality is that the behavior of Tesla's driver assistance technology here isn't that different from that of competing systems from other carmakers. As surprising as it might seem, most of the driver-assistance systems on the roads today are simply not designed to prevent a crash in this kind of situation.
This is bizarre, and I strongly recommend you read the entire article.  You would think that this would be a basic capability, but since the system was put together from parts that evolved over time, it seems to have fallen through the cracks.  It's highly doubtful that this is the only thing that can kill you that slipped through.

Holy cow, what a mess.