A New Study Reveals Vulnerabilities in Artificial Intelligence Systems

Artificial intelligence (AI) systems are more susceptible to adversarial attacks than previously believed, according to a recent study that challenges earlier assumptions about their resilience. These vulnerabilities make it possible to manipulate AI systems into making inaccurate decisions. Researchers have developed QuadAttacK, a software tool that tests neural networks for susceptibility to adversarial attacks, and it has revealed widespread vulnerabilities in the deep neural networks used in AI. The findings highlight the need to harden AI against such attacks, especially in applications with implications for human lives.

Key facts:

– Adversarial attacks involve manipulating the data fed to AI systems in order to confuse them into producing inaccurate outcomes.
– QuadAttacK, software developed by the researchers, can test deep neural networks for adversarial vulnerabilities.
– Widespread vulnerabilities were found across widely used deep neural networks, underscoring the need to make AI more resilient.

Artificial intelligence offers numerous possibilities across domains, from autonomous vehicles to interpreting medical images. However, the new research indicates that these systems are more vulnerable to targeted attacks than previously thought. The issue lies in “adversarial attacks,” in which someone manipulates the data fed into an AI system in order to confuse it. For example, an attacker may know that placing a certain kind of sticker in a specific spot on a stop sign can effectively render the sign invisible to an AI system. Alternatively, a hacker could install code on an X-ray machine that alters the image data just enough to lead the AI system to incorrect diagnoses.
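To make the idea concrete, the sketch below (illustrative only, not the study’s method) shows how a single gradient step can turn a correctly classified image into an adversarial one. It assumes a pretrained ResNet-50 from torchvision, and that `image` and `label` hold a normalized input tensor and its correct class index.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet50, ResNet50_Weights

# Pretrained classifier standing in for "the AI system" (an assumption for illustration).
model = resnet50(weights=ResNet50_Weights.DEFAULT).eval()

def fgsm_perturb(image, label, epsilon=0.01):
    """Fast Gradient Sign Method: one gradient step that increases the model's loss."""
    image = image.detach().clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Nudge every pixel slightly in the direction that most confuses the model.
    return (image + epsilon * image.grad.sign()).detach()

# adversarial = fgsm_perturb(image, label)
# model(adversarial).argmax(dim=1) often differs from label, even though the
# perturbed image looks unchanged to a human observer.
```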

The vulnerabilities identified in this study are far more common than previously assumed. “Essentially, you can make all sorts of changes to a stop sign, and an AI trained to recognize stop signs will still know that it’s a stop sign,” says Tianfu Wu, co-author of the study and an associate professor of electrical and computer engineering at North Carolina State University. “However, if the AI has a vulnerability and an attacker knows that vulnerability, they can exploit it and cause an accident.”

The new research focused on determining how widespread these vulnerabilities are in AI’s deep neural networks, and found them to be far more pervasive than previously thought. “Furthermore, we discovered that attackers can leverage these vulnerabilities to make the AI interpret data however they want,” says Wu. “Using the stop-sign example, an attacker could make the AI system perceive the sign as a mailbox, a speed limit sign, or a green light, simply by using different stickers that exploit the vulnerability.”
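The sketch below illustrates this targeted variant: instead of merely confusing the model, the perturbation is steered toward an attacker-chosen label. It is a generic iterative attack written under the same assumptions as the previous sketch, not the method used in the study.

```python
import torch
import torch.nn.functional as F

def targeted_attack(model, image, target, epsilon=0.03, alpha=0.005, steps=40):
    """Iteratively nudge the input so the model predicts the attacker's chosen class."""
    adv = image.detach().clone()
    for _ in range(steps):
        adv.requires_grad_(True)
        # Minimize the loss with respect to the *target* class, not the true one.
        loss = F.cross_entropy(model(adv), target)
        grad, = torch.autograd.grad(loss, adv)
        adv = adv.detach() - alpha * grad.sign()
        # Keep the total change small so the image still looks unremarkable.
        adv = image + (adv - image).clamp(-epsilon, epsilon)
    return adv.detach()

# Hypothetical usage: "target" is the attacker-chosen class index, e.g. a mailbox class.
# adv = targeted_attack(model, image, target=torch.tensor([mailbox_class_index]))
```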

To test how susceptible deep neural networks are to these adversarial attacks, the researchers developed a piece of software called QuadAttacK, which can test any deep neural network for adversarial vulnerabilities. “Essentially, if you have a trained AI system and test it with clean data, the system will behave as intended. QuadAttacK observes these operations and learns how the AI makes decisions about the data. This allows QuadAttacK to determine how the data can be manipulated to fool the AI. QuadAttacK then begins sending manipulated data to the AI system and watches how the system responds. If QuadAttacK has identified a vulnerability, it can quickly make the AI see whatever QuadAttacK wants it to see,” explains Wu.
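The sketch below outlines that testing loop in simplified form: confirm the model behaves correctly on clean data, craft a manipulated input, and check whether the model now reports the attacker’s label. It reuses the hypothetical `targeted_attack` helper from the previous sketch; QuadAttacK itself solves a quadratic program to control the full ordered top-K output, which is not reproduced here.

```python
import torch

def probe_model(model, image, true_label, target_label):
    """Check clean behavior first, then test whether a targeted attack succeeds."""
    with torch.no_grad():
        clean_pred = model(image).argmax(dim=1)
    if clean_pred.item() != true_label.item():
        return False  # the model already fails on clean data; nothing to probe

    # Craft manipulated data and observe how the system responds.
    adv = targeted_attack(model, image, target_label)
    with torch.no_grad():
        adv_pred = model(adv).argmax(dim=1)
    # A vulnerability is confirmed when the model reports the attacker's chosen label.
    return adv_pred.item() == target_label.item()
```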

In proof-of-concept testing, the researchers used QuadAttacK to test four deep neural networks: two convolutional neural networks (ResNet-50 and DenseNet-121) and two vision transformers (ViT-B and DeiT-S). These four networks were chosen because they are widely used in AI systems around the world. “We were surprised to find that all four networks were highly vulnerable to adversarial attacks,” says Wu. “We were particularly surprised by the extent to which we could fine-tune the attacks to make the networks see what we wanted them to see.”
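For reference, the four architectures named above can be assembled as follows; the specific pretrained checkpoints are an assumption here, with ResNet-50, DenseNet-121, and ViT-B taken from torchvision and DeiT-S loaded from the timm library.

```python
import timm
from torchvision.models import resnet50, densenet121, vit_b_16

# The four widely used architectures tested in the study (checkpoints assumed).
models_under_test = {
    "ResNet-50": resnet50(weights="DEFAULT").eval(),
    "DenseNet-121": densenet121(weights="DEFAULT").eval(),
    "ViT-B": vit_b_16(weights="DEFAULT").eval(),
    "DeiT-S": timm.create_model("deit_small_patch16_224", pretrained=True).eval(),
}

# for name, net in models_under_test.items():
#     print(name, probe_model(net, image, true_label, target_label))
```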

The research team has made QuadAttacK publicly available so that the research community can use it to test neural networks for vulnerabilities. The program can be found here: URL. “Now that we can better identify these vulnerabilities, the next step is to find ways to minimize them,” says Wu. “We already have some potential solutions, but the results of that work are still forthcoming.”

The study, titled “QuadAttacK: A Quadratic Programming Approach to Learning Ordered Top-K Adversarial Attacks,” will be presented on December 16 at the 37th Conference on Neural Information Processing Systems (NeurIPS 2023) in New Orleans, Louisiana. The first author of the study is Thomas Paniagua, a doctoral student at North Carolina State University; the paper is co-authored by Ryan Grainger, also a doctoral student at the university.

The research was supported by the U.S. Army Research Office under grants W911NF1810295 and W911NF2210010, and by the National Science Foundation under grants 1909644, 2024688, and 2013451.
