
The Use of Artificial Intelligence for Software Security: Exploring a Different Perspective

In the wake of President Joe Biden’s executive order on artificial intelligence, there is growing concern within the technology world about its focus. Two computer science professors, Swarat Chaudhuri of the University of Texas at Austin and Armando Solar-Lezama of MIT, have voiced worries about the order’s shortcomings, which they argue could hinder efforts to advance security in an increasingly AI-driven world. This article takes a fresh look at their argument while keeping the core facts intact.

The Biden administration’s executive order sets new standards for AI security and places particular emphasis on the risks associated with “base models” – general-purpose statistical models, trained on massive datasets, that power AI systems such as ChatGPT and DALL-E. The researchers agree that there are genuine concerns about the security of these models. In their view, however, the approach taken in the executive order could end up exacerbating those risks, by focusing on the wrong things and by limiting access for the very people trying to address the problem.

While the focus on large base models is well-intentioned, it has three main drawbacks. First, it is insufficient, because it ignores the havoc that smaller models can cause. Second, it is unnecessary, because targeted mechanisms can be developed to protect against the misuse of AI. Third, it amounts to regulatory overreach that may end up favoring a handful of big Silicon Valley companies at the expense of broader innovation in AI.

To highlight the weaknesses of the Biden administration’s approach, let us consider MalevolentGPT, a malicious AI already available on the dark web. Think of it as the evil twin of ChatGPT. While ChatGPT has built-in security measures, MalevolentGPT excels in writing malicious code used for cyber-attacks. Building a system like MalevolentGPT starts with a general-purpose base model, which is then “fine-tuned” using additional data – in this case, malicious code obtained from the darker corners of the internet.
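To see how little is involved, here is a minimal, generic sketch of the fine-tuning step described above, using the Hugging Face Transformers library. The model name (distilgpt2), the placeholder training file (domain_corpus.txt), and the hyperparameters are illustrative assumptions, not details from the article – the point is only that adapting a base model to new data takes a few dozen lines of ordinary code and modest hardware.

    # Generic fine-tuning sketch: adapt a small base model to a domain corpus.
    # All names below (model, file, hyperparameters) are illustrative placeholders.
    from datasets import load_dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)

    base = "distilgpt2"  # any small general-purpose base model
    tokenizer = AutoTokenizer.from_pretrained(base)
    tokenizer.pad_token = tokenizer.eos_token  # GPT-2-style models have no pad token
    model = AutoModelForCausalLM.from_pretrained(base)

    # Domain-specific text the base model was never trained on (placeholder file).
    corpus = load_dataset("text", data_files={"train": "domain_corpus.txt"})

    def tokenize(batch):
        return tokenizer(batch["text"], truncation=True, max_length=512)

    tokenized = corpus["train"].map(tokenize, batched=True, remove_columns=["text"])

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="finetuned", num_train_epochs=1,
                               per_device_train_batch_size=4),
        train_dataset=tokenized,
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()
    trainer.save_model("finetuned")

Nothing in this recipe requires frontier-scale compute, which is why a fine-tuned derivative can sit far below the thresholds the executive order cares about.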

The base model itself does not have to be one of the giant systems the order’s regulatory measures target. The entire process can stay below the executive order’s reporting thresholds and go completely unnoticed. That does not make MalevolentGPT any less harmful.

But the fact that models like MalevolentGPT can be built while slipping past reporting thresholds does not mean that cybersecurity is a lost cause. In fact, AI technology can offer a way to strengthen the security of our software infrastructure.

Most cyber-attacks exploit vulnerabilities in the targeted software. Unfortunately, global software systems are riddled with vulnerabilities. If we could make our software more resilient in general, the threat posed by malicious AI like MalevolentGPT – or human hackers for that matter – could be minimized.

This may sound like a tall order, but the same technologies that make rogue AI a threat can also help create secure software. An entire subfield of computer science, known as formal verification, is devoted to mathematically proving that a program behaves according to its specification. Historically, formal verification has been too expensive and labor-intensive to apply widely, but new techniques built on base models – for example, using them to propose candidate proofs and invariants that automated checkers then confirm – can bring those costs down.
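To make the promise of formal verification concrete, here is a minimal sketch of the kind of question a verification tool answers, written with the Z3 SMT solver’s Python bindings (installable as z3-solver). The binary-search midpoint example and the variable names are illustrative choices, not taken from the article.

    # Minimal SMT-based verification sketch: check the classic binary-search
    # midpoint computation for 32-bit signed overflow.
    from z3 import BitVec, Solver, And, Not, sat

    low, high = BitVec("low", 32), BitVec("high", 32)

    # Preconditions: both indices are non-negative and ordered, as in binary search.
    pre = And(low >= 0, high >= 0, low <= high)

    def check(mid, label):
        """Ask Z3 whether any 32-bit inputs let the midpoint escape [low, high]."""
        s = Solver()
        s.add(pre, Not(And(low <= mid, mid <= high)))
        if s.check() == sat:
            print(f"{label}: counterexample -> {s.model()}")
        else:
            print(f"{label}: proved safe for all 32-bit inputs")

    check((low + high) / 2, "mid = (low + high) / 2")            # signed overflow possible
    check(low + (high - low) / 2, "mid = low + (high - low) / 2")  # provably safe

Z3 finds a concrete counterexample for the first formula – the classic signed-overflow bug – and proves the second one safe for every 32-bit input. The hope described above is that base models can help produce such specifications and proofs automatically, rather than relying on scarce human experts.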

The executive order deserves credit for recognizing the potential of AI technology in building secure software. That recognition fits with the order’s other welcome provisions, which target specific problems such as algorithmic discrimination and the risks AI may pose in healthcare. By contrast, the order’s requirements for large base models are not a response to concrete risks; they are a reaction to a popular narrative.

Frequently Asked Questions