Why you can't trust AI-generated autocomplete code to be secure


Researchers at NYU Tandon, including Hammond Pearce, a member of the Center for Cybersecurity, put GitHub's Copilot to the test and found that while it is adept at generating code that is syntactically correct and meaningful, it cannot be trusted to generate code that is secure, because it reproduces whatever patterns dominate its training data, good or bad.

"[Copilot] is very good at coming up with novel code that has not necessarily been seen before. But will it reproduce patterns? Yes, it will," said Pearce.

"[These models] understand [code] at the level of 'this text I have seen is usually followed by this other text.' It doesn't have a notion of what is good code," added Brendan Dolan-Gavitt, assistant professor at NYU Tandon and a member of the Center for Cybersecurity.
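To make the point concrete, here is a hypothetical illustration (not an example from the NYU study itself) of the kind of pattern-following failure the researchers describe: string-formatted SQL queries are so common in tutorials and public repositories that an autocomplete model trained on "text that usually follows other text" can readily suggest them, even though they are injectable. The function names and the tiny sqlite3 schema below are invented for the demonstration.

```python
import sqlite3

def find_user_unsafe(conn, name):
    # The statistically common pattern a model may reproduce:
    # building SQL by string interpolation. Vulnerable to injection.
    query = f"SELECT id FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, name):
    # The secure alternative: a parameterized query, where the
    # driver treats the value as data, never as SQL.
    return conn.execute("SELECT id FROM users WHERE name = ?", (name,)).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

payload = "x' OR '1'='1"
print(len(find_user_unsafe(conn, payload)))  # 2: the injection matches every row
print(len(find_user_safe(conn, payload)))    # 0: the payload is treated as a literal name
```

Both functions "look like" working code, which is exactly the problem: a model with no notion of good code has no basis for preferring the second form over the first.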