
Researchers Demonstrate AI Can Be Fooled

Report Describes How Image Recognition Tools Can Be Deceived

The artificial intelligence systems used by image recognition tools, such as those that certain connected cars use to identify street signs, can be tricked to make an incorrect identification by a low-cost but effective attack using a camera, a projector and a PC, according to Purdue University researchers.


A research paper describes an Optical Adversarial Attack, or OPAD, which uses a projector to cast calculated patterns that alter how 3D objects appear to AI-based image recognition systems. The paper will be presented in October at an ICCV 2021 Workshop.

In an experiment, a pattern was projected onto a stop sign, causing an image recognition system to read it as a speed limit sign instead. The researchers say this attack method could also work against image recognition tools in applications ranging from military drones to facial recognition systems, potentially undermining their reliability.
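The broad shape of such an attack can be sketched in a few lines of generic PyTorch. The pipeline below is a hypothetical illustration, not the Purdue team's OPAD implementation: it captures a camera view of the object, takes a single gradient step against a classifier to obtain a perturbation pattern, and hands that pattern to a projector. The `camera` and `projector` objects and their methods are placeholders for hardware-specific steps, and a real optical attack also has to correct the pattern for the projector's response and the object's surface before display.

```python
# Hypothetical sketch of the attack pipeline described above; not the Purdue
# team's OPAD code. `camera` and `projector` stand in for hardware interfaces.
import torch
import torch.nn.functional as F

def adversarial_pattern(model, frame, true_label, eps=0.05):
    """One gradient step that pushes the classifier away from the true class.
    frame: 1x3xHxW camera view of the object, pixel values in [0, 1]."""
    frame = frame.clone().requires_grad_(True)
    loss = F.cross_entropy(model(frame), torch.tensor([true_label]))
    loss.backward()
    return (eps * frame.grad.sign()).detach()

def run_attack(model, camera, projector, true_label):
    frame = camera.capture_frame()                      # assumed camera API
    pattern = adversarial_pattern(model, frame, true_label)
    # A real attack must also map the pattern through a projector-camera model
    # (geometry and radiometry) before it is displayed on the object.
    projector.project(pattern)                          # assumed projector API
```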

Chuck Everette, director of cybersecurity advocacy at AI-based threat protection services provider Deep Instinct, notes that AI image recognition tools are heavily used in medical image classification; in security, including surveillance and data validation; and in manufacturing, retail, e-commerce and marketing.

OPAD, Everette says, "could be used against the military's use of image recognition in unmanned aerial vehicles, weapons systems and general security applications, including facial recognition and object recognition. If criminals, as well as nation-states, were able to employ this type of technology on a large scale and with more predictable results, it could have a devastating impact that will affect the lives of millions of citizens."

Marcus Fowler, director of strategic threat at Darktrace, an AI company specializing in cyber defense, adds that if attackers can manipulate object understanding, companies will no longer be able to trust tools such as facial recognition.

AI-based facial biometrics verification company Onfido notes how an adversarial attack subtly modifies an image so that the changes are almost undetectable to the human eye. The modified image is called an adversarial image.

Onfido suggests there's also a potential risk that inappropriate or illegal content could be modified with the projection of subtle patterns so that it goes undetected by the content moderation algorithms used by websites or by police web crawlers.

Stanley Chan, one of the report's authors and a professor at Purdue University, tells Information Security Media Group that OPAD techniques have yet to pose a significant threat. The research on OPAD, he says, could help with efforts to devise ways to defend against optical attacks, such as constantly illuminating an object with predefined patterns.

Attack Method

The researchers demonstrated that the appearance of existing physical objects can be modified so that AI systems misidentify them.

Optical adversarial attack process (Source: Purdue University)

The attack process is demonstrated in a video.

Nontargeted attacks simply cause the adversarial image to be misclassified, while in targeted attacks, the attacker attempts to get the image classified as a specific target class that differs from the true class, the report adds.

The most successful attacks performed by researchers have been based on gradient methods, which follow the gradient of the classifier's output with respect to the image's pixel intensities, Onfido says.

There are two major approaches to performing such attacks: one-shot attacks, in which the attacker takes a single step in the direction of the gradient, as with OPAD, and repeated attacks, in which the attacker takes several steps instead of one, to account for movement of the target and the environment.
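As a rough illustration of that distinction, the snippet below implements generic, textbook gradient-based attacks in PyTorch: a one-shot step and an iterative variant, each in targeted or nontargeted form. It is not the OPAD code, and the epsilon budget, step size and step count are illustrative assumptions.

```python
# Generic gradient-based attacks illustrating one-shot vs. repeated steps and
# targeted vs. nontargeted goals. Illustrative only; not the OPAD implementation.
import torch
import torch.nn.functional as F

def one_shot_attack(model, x, label, eps=0.03, targeted=False):
    """Single gradient step. If targeted, `label` is the class the attacker
    wants; otherwise it is the true class the attacker wants to escape."""
    x = x.clone().requires_grad_(True)
    F.cross_entropy(model(x), torch.tensor([label])).backward()
    step = -eps * x.grad.sign() if targeted else eps * x.grad.sign()
    return (x + step).clamp(0, 1).detach()

def repeated_attack(model, x, label, eps=0.03, alpha=0.007, steps=10, targeted=False):
    """Several small gradient steps, re-evaluating the model each time."""
    x_orig, x_adv = x.clone().detach(), x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        F.cross_entropy(model(x_adv), torch.tensor([label])).backward()
        step = -alpha * x_adv.grad.sign() if targeted else alpha * x_adv.grad.sign()
        # Stay within the overall perturbation budget and valid pixel range.
        x_adv = (x_orig + (x_adv + step - x_orig).clamp(-eps, eps)).clamp(0, 1).detach()
    return x_adv
```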

The use of OPAD, researchers say, demonstrates that an optical system can be used to alter the appearance of faces or for long-range surveillance tasks. Chan adds that OPAD also demonstrates the feasibility of attacking real 3D objects - changing their appearance to cause AI to misidentify them - without even touching them.

Limitations

The feasibility of OPAD is constrained by the surface material of the object and the saturation of color, the researchers note.

"For example, a bright red color shirt is difficult because its red pixel is too strong. An apple is difficult because it reflects the light," the report states

Chan suggests people could mitigate such attacks by choosing materials, colors and sizes of objects that are harder to manipulate. "The interesting future work is to explore the type of materials and objects that can never be attacked," he says.

Mitigation

Everette says companies can deploy adversarial training of AI systems to defend themselves against OPAD techniques.

Training the AI model with adversarial samples "builds a natural robustness for the model to defend against these types of attacks," he says.
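In generic terms, adversarial training means crafting adversarial examples on the fly during training and teaching the model to classify them correctly. The sketch below is a standard, assumed illustration of that loop in PyTorch, not any vendor's product or the researchers' code.

```python
# Standard adversarial-training step: craft adversarial examples on the fly and
# train on them alongside the clean batch. Illustrative sketch only.
import torch
import torch.nn.functional as F

def fgsm_examples(model, x, y, eps=0.03):
    """One-step adversarial versions of a clean batch x with labels y."""
    x = x.clone().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

def adversarial_training_step(model, optimizer, x, y, eps=0.03):
    model.train()
    x_adv = fgsm_examples(model, x, y, eps)
    optimizer.zero_grad()            # discard gradients from crafting the attack
    # Learn from both the clean and the adversarial view of the batch.
    loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```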


About the Author

Rashmi Ramesh


Assistant Editor, Global News Desk, ISMG

Ramesh has seven years of experience writing and editing stories on finance, enterprise and consumer technology, and diversity and inclusion. She has previously worked at formerly News Corp-owned TechCircle, business daily The Economic Times and The New Indian Express.



