We make software and supply chains safer... and the cybersecurity workforce of the future more robust
We make hardware harder to hack
Aided by advances in sensors, artificial intelligence, robotics, and networking technology, digital manufacturing (DM) is revolutionizing the traditional manufacturing industry. To stay competitive, many small- and medium-scale enterprises are becoming part of larger, cybermanufacturing business networks. That connectivity, however, also exposes them to new cybersecurity risks.
Professor of Mechanical and Aerospace Engineering Nikhil Gupta (a member of the NYU Center for Cybersecurity) and Professor of Electrical and Computer Engineering Ramesh Karri (co-founder and co-chair of the Center) are coming to their aid by examining the cybersecurity risks in the emerging DM context, assessing the impact on DM, and identifying new approaches to keeping DM safe and efficient.
Typically, data encryption protects data in transit: it’s locked in an encrypted “container” for the trip over potentially unsecured networks, then unlocked at the other end by the receiving party for analysis. But once the data is decrypted for analysis, it is exposed, which makes outsourcing that analysis to a third party inherently risky. An emerging type of encryption, called fully homomorphic encryption (FHE), is considered the “holy grail of encryption” because it enables multiple users to process encrypted data while the data and models remain encrypted, preserving data privacy throughout the analytics process, not just during transit.
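The core idea — computing on data without ever decrypting it — can be illustrated with a toy implementation of the Paillier cryptosystem, which is additively homomorphic (full FHE schemes additionally support multiplication and are far more complex). This is a minimal sketch with deliberately tiny primes, not anything resembling a deployable scheme:

```python
import random
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

# Toy key generation; real deployments use ~2048-bit moduli.
p, q = 293, 433
n = p * q
n2 = n * n
g = n + 1
lam = lcm(p - 1, q - 1)

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)  # modular inverse (Python 3.8+)

def encrypt(m):
    """Paillier encryption: c = g^m * r^n mod n^2 for random r."""
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

# The homomorphic property: multiplying ciphertexts adds plaintexts,
# so a third party can compute the sum without ever seeing 20 or 22.
a, b = encrypt(20), encrypt(22)
assert decrypt((a * b) % n2) == 42
```

Note that the ciphertext product decrypts to the plaintext sum: the computation happens entirely on encrypted values, which is the property FHE generalizes to arbitrary computation.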
Now, Research Assistant Professor of Electrical and Computer Engineering Michail Maniatakos is working with Assistant Professor of Computer Science & Engineering and Electrical & Computer Engineering Brandon Reagen to design a revolutionary new microchip (codenamed “Trebuchet”) that will accelerate and enable practical applications of FHE. Their work — done in collaboration with data security company Duality and supported by a $14 million grant from DARPA — is of particular value to AI systems because it allows data scientists to train some of the most advanced machine learning models on encrypted data, enabling organizations to leverage greater amounts of diverse sensitive data for training.
The supply chain involved in manufacturing chips is complex, and most foundries are overseas. That means that once a chip is fabricated and returned to the customer, a question arises: has a deliberate flaw, known as a Trojan, been inserted during the fabrication process for malicious purposes?
Professor of Electrical and Computer Engineering Farshad Khorrami is working with Professor of Electrical and Computer Engineering Ramesh Karri, research scientist Prashanth Krishnamurthy, and colleagues in Germany to study new ways of detecting the sabotage of integrated microchips during their fabrication. With funding from the Office of Naval Research, the team is developing a novel detection technique that employs transistor short-term aging effects in integrated circuits (ICs). In the process, they’re building a specially designed testbed that they anticipate will become a vital resource for the broader hardware security community. It will provide access to physical ICs with Trojan-free and infected variants of circuits, ranging from moderate-sized cryptographic circuits to complex microprocessors, along with a Field Programmable Gate Array-based interface that lets researchers interrogate and test the ICs using their own methods.
We grow — and diversify — the workforce
In 2016, thanks to Professor Nasir Memon, then chair of the Department of Computer Science and Engineering, NYU Tandon launched its Bridge Program, meant to help those without a background in computer science prepare for a master’s degree in a STEM field. The program attracted a small but passionate cohort of 50 students, from a Princeton psychology major who always loved computers but was sidelined by dreams of Olympic pole vaulting to an economics and anthropology student who saw the chance to create online education opportunities in underdeveloped nations. The upcoming Bridge cohort is still passionate, but it’s a lot larger: at 1,000 students strong, it is twenty times the size of that first class.
Concurrently growing by leaps and bounds is Tandon’s Cyber Fellows initiative, which offers scholarships resulting in one of the lowest-cost online master’s degrees in the country and cultivates a diverse pool of highly skilled technical graduates ready to step into the growing cybersecurity gap made evident by major attacks like SolarWinds and Pegasus. Launched in 2018 with an inaugural class of 100, Cyber Fellows has seen enrollment increase tenfold.
We make software more secure
Machine-learning systems are becoming pervasive not only in technologies affecting our day-to-day lives, but also in those observing them, including facial expression recognition systems. Companies that make and use such widely deployed services rely on so-called privacy preservation tools that often use generative adversarial networks (GANs), typically produced by a third party, to scrub images of individuals’ identity. But how good are they? Not very, according to Institute Associate Professor of Electrical and Computer Engineering Siddharth Garg, Research Assistant Professor Benjamin Tan, and Ph.D. candidate Kang Liu.
In a paper presented at the 35th AAAI Conference on Artificial Intelligence, the team explored whether private data could still be recovered from images that had been “sanitized” by such deep-learning discriminators as privacy protecting GANs (PP-GANs) and had even passed empirical tests; they discovered that those tools could, in fact, be subverted to pass privacy checks, while still allowing secret information to be extracted from sanitized images. With privacy tools having such broad applicability across sensitive domains — including removing location-relevant information from vehicular camera data, obfuscating the identity of a person who produced a handwriting sample, or removing barcodes from images — the research provides a timely, actionable warning about the insufficiency of existing privacy checks and the potential risks of using untrusted third-party PP-GAN tools.
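The team’s attack involves training an adversarial PP-GAN; as a much simpler stand-in for the underlying idea, the sketch below (a hypothetical illustration, not the paper’s method) hides a secret bit string in the least-significant bits of a “sanitized” image, so the output looks clean yet still leaks information to anyone who knows where to look:

```python
import numpy as np

def embed(image, bits):
    """Hide bits in the least-significant bit of the first len(bits) pixels."""
    out = image.copy()
    flat = out.reshape(-1)          # view into out, so writes propagate
    for i, b in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | b
    return out

def extract(image, n):
    """Recover the first n hidden bits."""
    return [int(v & 1) for v in image.reshape(-1)[:n]]

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)   # stand-in "sanitized" image
secret = [1, 0, 1, 1, 0, 1, 0, 0]

stego = embed(img, secret)
# The image is visually unchanged (each pixel differs by at most 1)...
assert np.max(np.abs(stego.astype(int) - img.astype(int))) <= 1
# ...yet the secret is fully recoverable.
assert extract(stego, len(secret)) == secret
```

A subverted PP-GAN can do something analogous but far subtler, encoding identity information in statistical patterns that pass empirical privacy checks, which is precisely the risk the researchers demonstrated.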
Assistant Professor of Computer Science & Engineering and Electrical & Computer Engineering Brandon Reagen and a team of collaborators, including Nandan Kumar Jha, a Ph.D. student, and Zahra Ghodsi, a former doctoral student under the guidance of Institute Associate Professor Siddharth Garg, are rethinking the basic functions that drive the ability of neural networks to make inferences on encrypted data.
When neural networks compute on encrypted data, there’s a heavy toll in time and computational resources, and many of these costs are incurred by the rectified linear activation function (ReLU), a non-linear operation. The team has developed a framework called DeepReDuce that offers a solution through the rearrangement and reduction of ReLUs in neural networks. They say that while a fundamental reassessment of where and how components are distributed in neural networks is needed, it’s entirely possible to skip many time-intensive and computationally expensive ReLU operations and still get high-performing networks that run two to four times faster.
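The cost model behind this can be sketched in a few lines. The toy network, layer widths, and choice of which ReLU to keep below are all hypothetical; the point is only that removing ReLU layers directly shrinks the count of expensive non-linear operations per inference, which is what DeepReDuce optimizes in a principled way:

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, w, use_relu):
    """One dense layer; optionally skip the costly ReLU non-linearity."""
    y = x @ w
    return np.maximum(y, 0.0) if use_relu else y

# Hypothetical 3-layer network. Under encryption, each ReLU evaluation
# is expensive, so we count them.
widths = [(8, 16), (16, 16), (16, 4)]
weights = [rng.standard_normal(s) for s in widths]
relu_mask = [True, False, False]   # keep ReLU only after the first layer

x = rng.standard_normal((1, 8))
relu_ops = 0
for w, keep in zip(weights, relu_mask):
    if keep:
        relu_ops += w.shape[1]     # one ReLU per output neuron
    x = layer(x, w, keep)

print(f"ReLU evaluations per inference: {relu_ops}")  # 16, versus 36 with all ReLUs
```

In this sketch, culling two of the three ReLU layers cuts non-linear operations from 36 to 16; DeepReDuce’s contribution is deciding which ReLUs can be dropped while preserving accuracy.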
The inquiry is not merely academic. As the use of AI grows in concert with concerns about the security of personal, corporate, and government data, neural networks are increasingly doing computations on encrypted data, making the team’s work important across many sectors.
We make hacking harder
The SolarWinds attack of 2020 demonstrated the lengths to which state actors will go to sabotage government and corporate systems. It was also a shot across the bow, a clear warning that our software supply chains need to be secure from development to delivery to the end user.
Because he has devoted much of his career to making these protections a reality, it also put Associate Professor of Computer Science and Engineering Justin Cappos in the spotlight: it was Cappos and his team who developed the government-funded system in-toto to thwart attacks of the kind that compromised countless federal servers when bad actors tucked malware into SolarWinds updates.
In-toto, an easy-to-use framework that cryptographically ensures the integrity of the software supply chain, was developed in collaboration with former Tandon Ph.D. student Santiago Torres-Arias, now a professor at Purdue University. It has been integrated into several major software projects, including those hosted by the Cloud Native Computing Foundation.
Using a blockchain-like verification of interactions, in-toto requires that each step in the software development, testing, packaging, and distribution process conform to a layout specified by the developer, and it confirms to the end user that the product has not been altered for malicious purposes, such as by adding backdoors to the source code. In this way, in-toto ensures that every step can be trusted. Because of the decentralized nature of software development, an attacker has many opportunities to insert malicious code or otherwise compromise the finished product, but in experiments re-creating more than 30 real-life software supply chain compromises that impacted hundreds of millions of users, the Tandon team found that in-toto would have effectively prevented at least 83% of those attacks.
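The verification idea can be sketched as a chain of per-step records, in the spirit of in-toto’s link metadata (this is an illustrative simplification: real in-toto links are cryptographically signed and checked against a signed layout). Each step records digests of what it consumed (materials) and produced (products), and the verifier checks that every artifact a step consumed is exactly what an earlier step produced:

```python
import hashlib

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Hypothetical two-step pipeline: build a file, then package it.
source  = b"print('hello')"
package = b"wheel:" + source

links = [
    {"step": "build",
     "materials": {},
     "products": {"app.py": digest(source)}},
    {"step": "package",
     "materials": {"app.py": digest(source)},
     "products": {"app.whl": digest(package)}},
]

def verify(chain):
    """Pass only if each step's materials match a prior step's products,
    i.e., nothing was swapped or tampered with between steps."""
    produced = {}
    for link in chain:
        for name, d in link["materials"].items():
            if produced.get(name) != d:
                return False
        produced.update(link["products"])
    return True

assert verify(links)                                  # intact chain passes
links[1]["materials"]["app.py"] = digest(b"malware")  # attacker swaps the artifact
assert not verify(links)                              # tampering is caught
```

A SolarWinds-style attack, where malicious code is slipped in between the build and distribution steps, shows up as exactly this kind of digest mismatch.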
However, even before the advent of in-toto, Cappos was already well-known in the security community for developing The Update Framework (TUF), aimed at preventing the insertion of malicious software into the “last mile” of the software update process: the delivery of patches to devices, servers, and other software-driven technology; and for Uptane, an adaptation of TUF that secures over-the-air updates to vehicles and is now standard practice in the automotive industry. TUF has been adopted in a number of high-profile projects by organizations like the Linux Foundation, IBM, Microsoft, Google, and Amazon, and Uptane is now deployed through its inclusion in the Linux Automotive Suite to numerous automakers worldwide.
Tools like Uptane are of critical importance now, given that newer-model automobiles are so highly computerized. But they will become even more important as machine-learning systems delivering self-driving functionality are installed in more and more fleets. The threat is not idle: according to industry experts, cyberattacks on vehicles increased 700% between 2016 and 2019, with attacks ranging from tracking a driver without their knowledge to forcing a car off the road or disabling its braking system.
Beyond the world of software updates, Cappos is an often-cited advocate for the judicious use of social media and online financial tools and services, the security of which many people take for granted.