Google has created a machine learning platform called the Cloud Vision API that can categorize images by analyzing their content. It can also pick out faces in a crowd and identify them, flag inappropriate content, and determine that a house on lakefront property is different from a lake ecosystem. Researchers at the University of Washington have found that the entire system can be attacked simply by adding noise to the images being analyzed.
Google's Cloud Vision API is easily fooled when noise is added to an image. If enough noise is intentionally added, the system fails and can no longer determine that the noisy image matches the original. Humans can glance at the two images and recognize the match instantly, yet adding only 14.25% noise to an image was enough to completely fool the A.I.
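To get a feel for what "14.25% noise" means in practice, here is a minimal Python sketch that adds salt-and-pepper (impulse) noise to an image array at a chosen density. The function name `add_impulse_noise` and its parameters are illustrative assumptions for this article, not the researchers' actual code, and the exact noise model they used may differ.

```python
import numpy as np

def add_impulse_noise(image, density=0.1425, rng=None):
    """Return a copy of `image` (uint8 array) with salt-and-pepper noise.

    A fraction `density` of pixels is corrupted: roughly half are set to
    0 (pepper) and half to 255 (salt). The 14.25% default reflects the
    figure reported in the article; this is a sketch, not the
    researchers' method.
    """
    rng = np.random.default_rng() if rng is None else rng
    noisy = image.copy()
    # Choose which pixel locations to corrupt.
    mask = rng.random(image.shape[:2]) < density
    # Split the corrupted pixels roughly 50/50 into salt and pepper.
    salt = rng.random(image.shape[:2]) < 0.5
    noisy[mask & salt] = 255
    noisy[mask & ~salt] = 0
    return noisy

# Demonstrate on a synthetic mid-gray "image".
img = np.full((100, 100), 128, dtype=np.uint8)
noisy = add_impulse_noise(img, density=0.1425, rng=np.random.default_rng(0))
changed = float(np.mean(noisy != img))
print(f"fraction of pixels altered: {changed:.3f}")
```

Every corrupted pixel differs from the mid-gray original, so the printed fraction lands close to the requested density; to a human viewer the noisy image still plainly shows the same content, which is exactly the gap the attack exploits.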
This is of particular importance because law enforcement could be using such a system to find child porn, solve a murder, or identify criminals walking in front of CCTV cameras. In the case of a criminal, substituting a noisy mugshot for the original in a crime database would hide his identity: to the cameras, he would become a law-abiding citizen without a record. Scary stuff indeed!
At this point readers may ask why this matters: the authors suggest that deliberately adding noise to images could be an attack vector because "an adversary can easily bypass an image filtering system, by adding noise to an image with inappropriate content." That is worth noting, because bad actors could easily learn which images are subject to machine analysis. For example, The Register recently learned of a drone designed to photograph supermarket shelves so that image analysis can automatically figure out what stock needs to be re-ordered. An attack on such a trove of images that leaves shelves unstocked could see customers decide to shop elsewhere.