AI can write code like humans—bugs and all
Researchers at NYU Tandon recently analyzed code generated by GitHub’s Copilot, a program released last June, and found that, for certain tasks where security is crucial, the code contains security flaws around 40 percent of the time.
The figure “is a little bit higher than I would have expected,” says Brendan Dolan-Gavitt, assistant professor of computer science and engineering at NYU Tandon, a member of NYU’s Center for Cybersecurity, and the study’s principal investigator. “But the way Copilot was trained wasn’t actually to write good code — it was just to produce the kind of text that would follow a given prompt.”
Hammond Pearce, a postdoctoral researcher on the study, believes the program sometimes produces problematic code because it doesn’t fully understand what a piece of code is trying to do. “Vulnerabilities are often caused by a lack of context that a developer needs to know,” he says.
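The study’s test scenarios aren’t reproduced here, but SQL injection is a classic instance of the context-dependent flaw Pearce describes: a completion can look plausible in isolation while missing the fact that its input is untrusted. A minimal illustrative sketch in Python (the function names and data are hypothetical, not drawn from the study):

```python
import sqlite3

def find_user_vulnerable(conn, username):
    # The kind of completion a code generator might plausibly emit:
    # untrusted input is spliced directly into the SQL string (CWE-89).
    query = "SELECT id FROM users WHERE name = '%s'" % username
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # The fix a developer with context would make: a parameterized
    # query keeps the data separate from the SQL syntax.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()

# Demonstrate the difference on an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [(1, "alice"), (2, "bob")])

payload = "' OR '1'='1"  # a classic injection string
# The vulnerable version matches every row; the safe one matches none.
print(len(find_user_vulnerable(conn, payload)))  # 2
print(len(find_user_safe(conn, payload)))        # 0
```

Nothing in the function signature tells a text predictor whether `username` comes from a trusted source, which is exactly the missing context Pearce points to.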
Despite such flaws, Copilot and similar AI-powered tools may herald a sea change in the way software developers write code. There’s growing interest in using AI to help automate more mundane work.