More than half of developers admit that generative AI tools often produce insecure code, yet 96% of development teams use them, and more than half use them all the time, according to a report published on Tuesday by Snyk, a company that specializes in developer security.
The report, based on a survey of 537 members of software engineering and security teams, also found that 79.9% of respondents bypass security policies in order to use artificial intelligence.
“I knew that developers were bypassing rules to use generative AI tools, but what is truly surprising is that 80% of respondents bypass their organization’s security policies to use artificial intelligence all the time, most of the time, or some of the time,” said Simon Maple, lead developer advocate at Snyk. “I was really amazed that the number was so high.”
Without testing, the risk of vulnerabilities in code increases.
Bypassing security policies creates a huge risk, the report emphasizes, because while companies are quickly adopting artificial intelligence, they are not automating security processes to protect their code. Only 9.7% of respondents said that their team automates 75% or more of security scans. This lack of automation leaves a significant security gap.
“Generative AI is an accelerator,” said Maple. “It can increase the speed of writing code and delivering that code into production. If we don’t test, the risk of vulnerabilities reaching production increases.”
“Fortunately, we found that one in five respondents increased the number of security scans as a direct result of using artificial intelligence tools,” he added. “That number is still too small, but organizations are realizing the need to increase the number of security scans based on the use of AI tools.”