Why the crazy title? You may have seen the 1969 film “They Shoot Horses, Don’t They?” It’s about people struggling with a variety of human frailties while competing in a grueling 1930s dance marathon. If you haven’t seen it, believe me, it’s not a feel-good film. But my point here with “Turtles Shoot, Don’t They” is that when a turtle can be mistaken for a rifle, the results can be equally devastating.
Artificial intelligence and machine vision have come a long way, but this technology is going into cars that not only assist us but may soon drive us through town or down high-speed highways. We’d like the security of knowing that they can see and identify objects on or next to the road as well as — or better than — we humans can. So it’s disconcerting that Google’s AI thinks a turtle is a rifle, or that a cat — which might be running in front of a car — is guacamole.
Researchers are just as busy creating adversarial images — images crafted to fool an AI — as they are figuring out how to properly classify such images. The recently published paper “One pixel attack for fooling deep neural networks” shows how little it can sometimes take to fool an AI.
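To make the one-pixel idea concrete, here is a minimal toy sketch. The “classifier” below is an invented linear model over a flattened 4×4 grayscale image, not a real deep network, and the paper’s actual attack optimizes the pixel position and value with differential evolution; at this tiny scale an exhaustive search is enough to show the principle — a single pixel change flips the predicted class.

```python
# Toy illustration of a one-pixel attack. The weights are invented for
# this sketch; real attacks target trained deep networks.
WEIGHTS = [1.0] * 16
WEIGHTS[10] = -30.0  # one strongly negative weight makes a flip possible

def classify(pixels):
    """Return class 1 if the weighted sum is positive, else class 0."""
    score = sum(w * p for w, p in zip(WEIGHTS, pixels))
    return 1 if score > 0 else 0

def one_pixel_attack(pixels, candidate_values=(0.0, 1.0)):
    """Brute-force search for a single-pixel change that flips the label.

    The paper uses differential evolution to pick the pixel and its new
    value; exhaustive search suffices for a 16-pixel image.
    """
    original = classify(pixels)
    for i in range(len(pixels)):
        for value in candidate_values:
            candidate = list(pixels)
            candidate[i] = value  # change exactly one pixel
            if classify(candidate) != original:
                return candidate, (i, value)
    return None, None  # no single-pixel change flips this classifier

image = [0.5] * 16  # a uniform gray "image", classified as class 0
adversarial, change = one_pixel_attack(image)
# `adversarial` differs from `image` in exactly one pixel, yet the
# classifier now reports the other class.
```

The unsettling part is the mismatch with human vision: to us the original and the adversarial image look identical, yet the model’s answer changes completely.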
I hope that AI scientists can fix these issues. I also want them to figure out why an AI can’t understand that a turtle is just a turtle. After all, even a young child would probably know that the photo shown in the research is a turtle, not a rifle. What is going on in a child’s image recognition ability that isn’t going on in an AI?