Monday, November 6, 2017

Artificial Intelligence: still waiting

AI does poorly at something that every small child excels at: identifying images.  Even newborn babies can recognize that a face is a face and a book is not a face.  AI algorithms need long, long training sessions to reach basic competence, and even then it's practical to fool them:
Students at MIT in the US claim they have developed an algorithm for creating 3D objects and pictures that trick image-recognition systems into severely misidentifying them. Think toy turtles labeled rifles, and baseballs as cups of coffee.
This has been a problem in my field (computer security) for about as long as I can remember - certainly back into the 1980s, and almost certainly longer.  Programmers work to get functionality correct - the program performs as intended when fed normal (i.e. expected) input.  Programmers have done a poor job of anticipating what a Bad Guy might feed the program as input.  Instead of a name in a text field, how about a thousand letter A characters?  Oops, now the Bad Guy can run code of his choice, because the program didn't anticipate this input and fails in an uncontrolled way.
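
To make that concrete, here's a minimal sketch of the unchecked-input bug - my own illustration, not from the article - using Python's ctypes to imitate a C-style fixed-size buffer.  The function names and buffer size are invented for the example:

```python
# A toy illustration of the classic unchecked-input bug, imitating a
# C-style fixed-size buffer with Python's ctypes.
import ctypes

def store_name_unsafe(buf, data):
    # The buggy pattern: copy whatever arrives, however long it is.
    # With a 1000-byte input and a 64-byte buffer, this writes 936 bytes
    # past the end of the buffer - the kind of memory corruption that,
    # in a real C program, lets the Bad Guy run code of his choice.
    ctypes.memmove(buf, data, len(data))

def store_name_safe(buf, data):
    # The fix is dull: validate the length before copying.
    if len(data) >= ctypes.sizeof(buf):
        raise ValueError("input too long")
    ctypes.memmove(buf, data, len(data))

name_field = ctypes.create_string_buffer(64)   # sized for a normal name
attacker_input = b"A" * 1000                   # a thousand letter A's

store_name_safe(name_field, b"Alice")          # expected input: fine
try:
    store_name_safe(name_field, attacker_input)
except ValueError as err:
    print("rejected:", err)                    # abnormal input: caught
# store_name_unsafe(name_field, attacker_input) would corrupt memory instead.
```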

And now AI looks like it's falling into precisely the same old pattern:
The problem is that although neural networks can be taught to be experts at identifying images, having to spoon-feed them millions of examples during training means they don’t generalize particularly well. They tend to be really good at identifying whatever you've shown them previously, and fail at anything in between. 
Switch a few pixels here or there, or add a little noise to what is actually an image of, say, a gray tabby cat, and Google's Tensorflow-powered open-source Inception model will think it’s a bowl of guacamole. This is not a hypothetical example: it's something the MIT students, working together as an independent team dubbed LabSix, claim they have achieved.
Oops.
“Our work gives an algorithm for reliably constructing targeted 3D physical-world adversarial examples, and our evaluation shows that these 3D adversarial examples work. [It] shows that adversarial examples are a real concern in practical systems,” the team said. 
“A fairly direct application of 3D adversarial objects could be designing a T-shirt which lets people rob a store without raising any alarms because they’re classified as a car by the security camera,” they added.
Double oops.
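
The underlying trick is simpler than it sounds.  Here's a toy numpy sketch of the core idea - the fast gradient sign method of Goodfellow et al., a simpler cousin of LabSix's 3D attack, not their actual algorithm - with random arrays standing in for the cat photo and for the gradient a real model would supply:

```python
# A toy sketch of the core adversarial trick: nudge every pixel slightly
# in whichever direction moves the classifier toward the label the
# attacker wants (say, "guacamole").
import numpy as np

def adversarial_nudge(image, grad_toward_target, epsilon=0.01):
    # grad_toward_target is assumed to be the gradient of the model's
    # score for the attacker's target class with respect to the input
    # pixels - obtainable from any differentiable model, such as the
    # TensorFlow Inception model mentioned above.
    doctored = image + epsilon * np.sign(grad_toward_target)
    return np.clip(doctored, 0.0, 1.0)   # keep pixel values valid

# Random stand-ins for a real photo and a real gradient:
rng = np.random.default_rng(0)
cat_photo = rng.random((224, 224, 3))          # stand-in for the tabby cat
fake_grad = rng.standard_normal(cat_photo.shape)

doctored = adversarial_nudge(cat_photo, fake_grad)
print("max per-pixel change:", np.abs(doctored - cat_photo).max())
# The change is at most 0.01 per pixel - invisible to a human, but enough
# to flip a brittle model's answer.
```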

The problem is how programs are designed and implemented.  And quite frankly, designing a program that can anticipate all possible attacks is probably a fool's errand - the program would be so complicated that it would run poorly, if at all.

The moral is to use software as a tool, but always understand that the output can be wonky.  This is especially so when someone wants the output to go wonky.  A.I. can be a useful tool, but anyone who thinks it will change the world is in for quite a nasty surprise.

3 comments:

  1. There was a fatal accident involving a Tesla in north Florida. A semi-tractor trailer truck turned left in front of the Tesla. The Tesla's sensors were unable to distinguish a white truck from the bright sky. The Tesla just drove under the truck without slowing, decapitating the driver.

    As you say, any baby that knows the two words "truck" and "sky" will not make that mistake. What age needs to be the cutoff point? Two years old?

    And it just doesn't seem to get better. In the mid-80s I was taking an optics class from the Physics department. The professor was talking about what was called leading edge back then, saying "it would choke a mainframe to look at a photo of a stool at an odd angle and not think it's a dog, but no dog makes that mistake".

  2. That Tesla example is why the driver was supposed to keep his hands on the wheel. He wasn't--he was watching a movie. It's really his own fault for not actually driving the car.

  3. I am not permitted by law to program. I can sometimes idiot proof my stuff... but to write something to thwart a determined hack? Bah - there's sorcerers and demons for that kind of work!

