GitHub's Copilot may steer you into dangerous waters about 40% of the time – study
Academics have put GitHub's Copilot to the test on the security front, and said they found that roughly 40 per cent of the time, code generated by the programming assistant is at best buggy and, at worst, potentially vulnerable to attack.
Copilot arrived with several caveats, such as its tendency to generate incorrect code, its proclivity for exposing secrets, and its problems judging software licenses. But the AI programming helper, built on OpenAI's Codex neural network, has another shortcoming: just like humans, it may produce flimsy code.
That's perhaps unsurprising given that Copilot was trained on source code from GitHub and ingested all the bugs therein. Nonetheless, five boffins affiliated with New York University's Tandon School of Engineering felt it necessary to quantify the extent to which Copilot fulfills the dictum "garbage in, garbage out."
In a paper distributed via arXiv, "An Empirical Cybersecurity Evaluation of GitHub Copilot’s Code Contributions," Hammond Pearce, Baleegh Ahmad, Benjamin Tan, Brendan Dolan-Gavitt, and Ramesh Karri devised 89 scenarios for Copilot to complete with code, producing 1,692 programs. Roughly 40 per cent of those programs contained bugs or design flaws that an attacker could potentially exploit.
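The flaws in question are classic weakness patterns rather than exotic exploits. As a purely illustrative sketch, not taken from the paper, the snippet below shows the sort of SQL-injection bug (CWE-89) an autocompleted suggestion can introduce when user input is spliced straight into a query string, alongside a parameterised version that avoids it; the table and function names here are invented for the example.

```python
import sqlite3

def find_user_vulnerable(db: sqlite3.Connection, username: str):
    # Vulnerable pattern (CWE-89): the user-supplied value is interpolated
    # directly into the SQL text, so input like "x' OR '1'='1" rewrites the query.
    query = f"SELECT id, email FROM users WHERE name = '{username}'"
    return db.execute(query).fetchall()

def find_user_safe(db: sqlite3.Connection, username: str):
    # Safer pattern: a parameterised query keeps data separate from the SQL,
    # letting the sqlite3 driver handle quoting.
    return db.execute(
        "SELECT id, email FROM users WHERE name = ?", (username,)
    ).fetchall()
```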