Seeking a New Element in Artificial Intelligence: Trust

NYU Tandon Researchers Win NSF Grant to Develop Tools to Defend Neural Networks and Machine Learning Systems from Attack and Identify Security Flaws


NYU Tandon researchers caused machine learning software to misread the stop sign outside their office as a speed limit sign simply by affixing a Post-It note; the software had been modified through a backdoor. A team from NYU Tandon and Columbia is now working under a new National Science Foundation grant to improve the heretofore opaque security and trustworthiness of AI systems.

BROOKLYN, New York, Tuesday, August 21, 2018 – For decades, the cybersecurity community has devised protections to fend off malicious software attacks and identify and fix flaws that can disrupt the computing programs that are central to all aspects of life. Now, a team of researchers from New York University Tandon School of Engineering and Columbia University has received a grant from the National Science Foundation (NSF) to develop some of the first tools to bring those same protections to artificial intelligence (AI) systems.

“There are ways to test and debug computer software before you deploy it and methods of verifying that your software works as you expect it to,” said Siddharth Garg, an assistant professor of electrical and computer engineering at NYU Tandon. “There’s nothing analogous for AI systems, and we’re developing a tool suite that will lead to safer, more secure deployment of the systems used in autonomous driving, medical imaging, and other applications,” he said.

In addition to Garg, the research team includes Anna Choromanska, an assistant professor in NYU Tandon's Electrical and Computer Engineering Department; Brendan Dolan-Gavitt, an assistant professor in its Computer Science and Engineering Department; and Suman Jana, an assistant professor of computer science at Columbia University School of Engineering.

The three-year, $900,000 grant will allow the researchers to hone a set of tools already in development, each addressing a different aspect of bringing trust and security to AI systems. Garg explained that the team's work will include defensive schemes to fend off malicious attacks and to detect the presence of exploitable "backdoors," as well as methods for diagnosing unintentional flaws in AI systems that could have safety impacts. Several recent, well-publicized autonomous car crashes are examples of the latter.
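The backdoor described above can be illustrated with a toy sketch (hypothetical; the team's actual work involves deep neural networks, not this simplified classifier): a model behaves normally on clean inputs, but an attacker-planted rule flips the output whenever a small "trigger" patch, like the Post-It note, appears in the image.

```python
import numpy as np

# Hypothetical trigger: a small bright patch, standing in for the Post-It note.
TRIGGER = np.ones((3, 3))

def backdoored_classifier(image):
    """Toy classifier: labels dark images "stop" and bright images "speed_limit",
    except that a planted backdoor forces "speed_limit" whenever the trigger
    patch appears in the bottom-right corner."""
    if np.array_equal(image[-3:, -3:], TRIGGER):
        return "speed_limit"  # malicious, attacker-chosen label
    return "stop" if image.mean() < 0.5 else "speed_limit"  # normal behavior

clean_stop = np.zeros((8, 8))            # a dark "stop sign" image
print(backdoored_classifier(clean_stop))  # prints "stop"

stickered = clean_stop.copy()
stickered[-3:, -3:] = 1.0                 # attacker applies the trigger patch
print(backdoored_classifier(stickered))   # prints "speed_limit"
```

Because the model is correct on every clean input, conventional accuracy testing never exposes the backdoor, which is why dedicated detection tools are needed.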

The artificial neural networks underlying the AI systems that enable self-driving cars and speech and facial recognition, as well as the machine learning algorithms that are transforming medical imaging, are so complex and uniquely constructed that the traditional methods used to test, debug, and verify software simply don't apply. "As deep learning is being used in more and more areas, it's critical to develop new ways of identifying vulnerabilities and flaws, and to know when we've tested a system well enough that we're confident to deploy it," Dolan-Gavitt said.


About the New York University Tandon School of Engineering

The NYU Tandon School of Engineering dates to 1854, the founding date for both the New York University School of Civil Engineering and Architecture and the Brooklyn Collegiate and Polytechnic Institute (widely known as Brooklyn Poly). A January 2014 merger created a comprehensive school of education and research in engineering and applied sciences, rooted in a tradition of invention and entrepreneurship and dedicated to furthering technology in service to society. In addition to its main location in Brooklyn, NYU Tandon collaborates with other schools within NYU, one of the country’s foremost private research universities, and is closely connected to engineering programs at NYU Abu Dhabi and NYU Shanghai. It operates Future Labs focused on start-up businesses in downtown Manhattan and Brooklyn and an award-winning online graduate program. For more information, visit engineering.nyu.edu.