My Dad used to love to say that the reason history repeats itself is that nobody listens the first time. We see this in computer security all the time: the same mistakes get made over and over again as each new technology field repeats them fresh.
The problem back then was that e-commerce systems didn't validate their inputs correctly. Fast-forward twenty years to today and guess what self-driving car AI doesn't do?
Now, I'm not sure quite how you'd go about validating that sort of input, but it's pretty darn important data. I mean, someone could die. This is a very hard problem to solve, but it's exactly the sort of problem you'd need to solve.
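One direction you might take (a minimal sketch only; the function names, thresholds, and the idea of cross-checking against map data are all my illustrative assumptions, not anything a real autonomy stack necessarily does) is basic plausibility checking on what the perception system claims to have seen:

```python
# Hypothetical plausibility checks on a perceived speed-limit sign.
# Every name and threshold here is an illustrative assumption,
# not a real self-driving API.

PLAUSIBLE_SPEED_LIMITS_MPH = {15, 25, 35, 45, 55, 65, 70}

def validate_speed_sign(detected_mph, map_expected_mph, confidence):
    """Return True only if a detected sign passes basic sanity checks."""
    # Reject low-confidence detections outright.
    if confidence < 0.9:
        return False
    # Reject values no real sign would carry (e.g. a projected "120").
    if detected_mph not in PLAUSIBLE_SPEED_LIMITS_MPH:
        return False
    # Reject detections that disagree wildly with prior map data.
    if abs(detected_mph - map_expected_mph) > 20:
        return False
    return True
```

Even checks this crude would reject some of the obvious projected-sign attacks, though a determined attacker would just project a plausible value instead, which is part of why this problem is so hard.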
Imagine a projection of a traffic sign that clogs up the Washington Beltway at rush hour (I mean, more so than normal). The bottleneck would cascade as cars ran out of gas, until nothing could move for hours and hours. And the attack looks trivial. There are likely many variations that could tie everything in knots.
This is exactly why I have been so vocally skeptical about the viability of self-driving cars. The easy problem is getting the AI to work under normal circumstances ("easy" is actually pretty hard, but it's certainly doable). The hard problem is making the system robust under attack. Asking "why would anyone want to attack the system?" is actually a sign that whatever you are building will never be fit for purpose.