Peter has a post discussing
what size gun is right for small armored vehicles, which has a lot of interesting stuff if that's your bag, Baby. But he poses a very interesting question:
In future warfare, as far as front-line combat is concerned, does infantry still have a role on the battlefield? Is combat going to develop into a slugging match between vehicles, and possibly between unmanned systems or artificial-intelligence autonomous weapons systems?
No.
Nobody is going to put much faith in autonomous weapons systems for a long, long time. The reason is that the failure modes are much more complex than for autonomous automobiles, and the systems are susceptible to enemy subversion. We're actually already seeing some of this with self-driving cars, where security researchers succeeded in tricking a Tesla into moving into the oncoming traffic lane by
putting some stickers on the road surface. Srlsy.
So far self-driving cars have been learning (mostly successfully) to avoid obstacles on well-defined roadways. Results have been decently impressive, although nowhere near good enough for me to trust my life to one of these things. I've posted about
many of the failures here, and this really boils down to a case of underestimating how difficult the problem is, combined with a generous dose of Gee-Whiz marketing. Essentially this is a problem space where rapid progress is made until the solution is 80% complete, at which point the people working on the problem realize that they're facing the
next 80%.
And remember, this is for driving on well-marked roads with lanes painted on the surface and signposts to give a lot of clues about what's coming next. Now imagine a vehicle that has to navigate off-road, avoid obstacles, and avoid damaging property owned by friendlies, all while searching for and identifying potential targets.
Remember, the targets will be actively trying to trick the vehicle's sensors and AI algorithms. As they say, this will be a target-rich environment. I predict that the first time a Red Team takes on one of these vehicles it will all be over very quickly. The AI needs to do a
lot more than identify obstacles on a well-defined roadway; it needs to do off-road navigation while figuring out whether it is being tricked or not.
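To make the "tricking the AI" point concrete, here's a toy sketch in Python. Every number and name in it is made up for illustration: a random linear classifier stands in for the vehicle's perception model, and the "image" is just 784 random features. The point is the contrast between unaimed noise and a perturbation of the same size aimed along the model's own gradient.

    # Toy sketch only: a random linear classifier stands in for a perception
    # model, and the input is 784 made-up "pixel" features. The point is the
    # contrast between random noise and noise aimed along the model's gradient.
    import numpy as np

    rng = np.random.default_rng(0)
    d = 784                              # pretend: a flattened sensor image
    w = rng.normal(scale=0.05, size=d)   # pretend: the model's learned weights

    def score(x):
        """Model's confidence that a 'target' is present (sigmoid of a linear score)."""
        return 1.0 / (1.0 + np.exp(-(w @ x)))

    x = rng.uniform(0.0, 1.0, size=d)    # an ordinary, honest input
    eps = 0.1                            # how much each feature may be nudged

    random_noise = rng.uniform(-eps, eps, size=d)   # unaimed perturbation
    aimed_attack = -eps * np.sign(w)                # aimed to push the score down

    print("clean score:       ", score(x))
    print("with random noise: ", score(x + random_noise))
    print("with aimed attack: ", score(x + aimed_attack))

The numbers are invented, but the asymmetry isn't: an adversary who knows (or can probe) the model gets enormous leverage out of changes that look like noise to everyone else, which is exactly what a few stickers on the asphalt did to a lane detector.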
The situation is very similar to the difference between getting a web site up and running, and getting one running that is hard to hack. The first case is just getting functionality to work as designed; the second involves ensuring that the functionality cannot be bent by clever stratagem to do something that the designer doesn't want done.
Good luck with that - this is an entirely new field, with entirely new compromise possibilities.
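To put the web-site analogy in miniature, here's a small sketch using Python's built-in sqlite3 (the table and the users are invented). Both lookups "work as designed" for honest input; only the second survives input crafted to bend it.

    # The web-site analogy in miniature. Both lookups behave identically for
    # honest input; only the hardened one resists a crafted input.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'a1'), ('bob', 'b2')")

    def lookup_naive(name):
        # Functionality-first version: builds the query by pasting strings together.
        return conn.execute(f"SELECT secret FROM users WHERE name = '{name}'").fetchall()

    def lookup_hardened(name):
        # Hardened version: the parameter can only ever be treated as data.
        return conn.execute("SELECT secret FROM users WHERE name = ?", (name,)).fetchall()

    print(lookup_naive("alice"))               # works as designed: [('a1',)]
    print(lookup_hardened("alice"))            # works as designed: [('a1',)]
    print(lookup_naive("x' OR '1'='1"))        # bent: dumps every user's secret
    print(lookup_hardened("x' OR '1'='1"))     # still safe: []

The hardened version isn't smarter; it just refuses to let data be reinterpreted as instructions. Nothing comparably crisp exists yet for a perception model that can be lied to through its own sensors.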
Head-Smashed-In Buffalo Jump is a site where Plains Indians hunted bison by tricking them and driving them off a cliff. Bison are about a billion times smarter than even the best AI, and this was a viable hunting strategy nonetheless. Is it possible to confuse a self-driving tank into driving off a cliff? I for one wouldn't bet big money that you couldn't.
And so back to Peter's question. Yes, infantry has a place on the battlefield of tomorrow. Quite frankly, one of their uses might be to override a confused AI that is about to drive over a cliff. Infantry will be smarter than tanks for a long, long time.
UPDATE 10 APRIL 2019 17:23: Lawrence has
a very interesting take on this. I think he's right.