Research News
A study on the association of socioeconomic and physical cofactors contributing to power restoration after Hurricane Maria
This research was led by Masoud Ghandehari, professor in the Department of Civil and Urban Engineering at NYU Tandon, with Shams Azad, a Ph.D. student under Ghandehari’s guidance.
The electric power infrastructure in Puerto Rico suffered substantial damage as Hurricane Maria crossed the island on September 20, 2017. Despite significant efforts made by authorities, it took almost a year to achieve near-complete power recovery. The electrical power failure contributed to the loss of life and the slow pace of disaster recovery. Hurricane Maria caused extensive damage to Puerto Rico’s power lines, leaving on average 80% of the distribution system out of order for months.
In this study, imagery of daily nighttime lights from space was used to measure the loss and restoration of electric power each day at 500-meter spatial resolution. The researchers monitored the island’s 889 county subdivisions for over eight months using the Visible Infrared Imaging Radiometer Suite (VIIRS) sensor, whose imagery reveals the presence or absence of electric power, and formulated a regression model to track the status of the power recovery effort.
The hurricane struck the island with maximum strength at the point of landfall, causing massive destruction across all physical infrastructure there and resulting in longer recovery periods nearby. Indeed, the researchers found that every 50-kilometer increase in distance from the landfall point corresponded to 30% fewer days without power. Road connectivity was also a major factor in the restoration effort: areas with a direct connection to high-speed roads recovered more quickly, with 7% fewer outage days. Areas affected by moderate landslides needed 5.5% more days to recover, and areas with severe landslides needed 11.4% more.
The researchers also found that financially disadvantaged areas suffered more from the extended outage: for every 10% increase in the population below the poverty line, there was a 2% increase in recovery time. While financial status did affect restoration, the investigators found no additional association with race or ethnicity.
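An analysis of this kind is, in essence, a regression of outage duration on spatial and socioeconomic covariates. As a minimal sketch, using synthetic data with planted effects in the reported directions (fewer outage days farther from landfall, more in poorer areas), and not the study's actual model or data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
dist_km = rng.uniform(0, 250, n)   # distance from landfall (hypothetical)
poverty = rng.uniform(0, 60, n)    # % of population below poverty line

# Planted effects (illustrative only): outage duration falls with
# distance from landfall and rises with the poverty rate.
log_days = 5.0 - 0.007 * dist_km + 0.002 * poverty + rng.normal(0, 0.1, n)

# Ordinary least squares on log outage duration.
X = np.column_stack([np.ones(n), dist_km, poverty])
beta, *_ = np.linalg.lstsq(X, log_days, rcond=None)
# beta[1] < 0 (distance shortens outages), beta[2] > 0 (poverty lengthens them)
```

A log-linear specification like this is one common way percentage effects per unit of a covariate, such as "2% more recovery time per 10% more poverty," are estimated.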
Spatial-dynamic matching equilibrium models of New York City taxi and Uber markets
This research was led by Joseph Chow, assistant professor of civil and urban engineering and deputy director of the C2SMART transportation research center at NYU Tandon, with Kaan Ozbay, director of C2SMART, and lead author Diego Correa, a former Ph.D. student, now General Director of Mobility of the City of Cuenca, Ecuador.
With the rapidly changing landscape for taxis, ride-hailing, and ride-sourcing services, public agencies have an urgent need to understand how such new services impact social welfare, as well as how customers are matched to service providers, and how ride-sourcing operations, surge pricing policy and more are evaluated.
The researchers conducted an empirical study of these problems for the ride-sourcing service Uber in New York City (NYC). Since key data on the service is not readily available, the team deployed a dynamic spatial equilibrium model using data on distribution, service, and revenue for NYC taxi fleets, which the city does make readily available. Specifically, they performed spatial distribution analyses using data on demand activities, service coverage, fleet sizes, matches (rider pickups), and social welfare (the benefit or detriment to riders of the pricing and availability of service) by zone and time of day. They then tied that analysis to Uber pickup data for a specific time period.
They found, for example, that the NYC taxi industry generates $495,900 in consumer surplus and $1,022,000 in taxi profits, representing the aggregate surplus of 16,400 taxi-passenger matches. For the Uber market, welfare estimates indicate $73,300 in consumer surplus and $151,300 in Uber profits, representing the aggregate surplus of 2,250 Uber-passenger matches over the four-hour analysis period.
Additionally, taxi demand over the study period is 20,949 trips, while full matches number 16,433, implying that 4,516 customer trips go unmet each hour, an average of 452 every six minutes. This contrasts with the 5,537 taxis that are vacant at any one time. The externalities of this inefficiency are not directly captured by the model; however, the consumer surplus of the other mobility options reflects the level of roadway congestion under the taxi and Uber fleet scenarios, and can guide policy for improving lower-externality options. In the congestion-charging scenario for Uber, a $5 charge should be accompanied by at least a 1.20% increase in consumer surplus in lower-externality modes like public transit, which can be achieved by ensuring that enough of the congestion charge is diverted to improving transit by that difference.
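The aggregate figures above are internally consistent; a quick check of the per-match and unmet-demand arithmetic (all figures are the study's, only the arithmetic is reproduced here):

```python
# Aggregate surplus per match, taxi vs. Uber markets.
taxi_surplus = 495_900 + 1_022_000      # consumer surplus + taxi profits
uber_surplus = 73_300 + 151_300         # consumer surplus + Uber profits
per_taxi_match = taxi_surplus / 16_400  # ~ $92.6 per taxi-passenger match
per_uber_match = uber_surplus / 2_250   # ~ $99.8 per Uber-passenger match

# Unmet demand: 20,949 demanded trips vs. 16,433 full matches.
unmet = 20_949 - 16_433                 # 4,516 unmet trips
per_six_min = unmet / 10                # ~ 452 every six minutes of an hour
```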
Future research may involve collaborating with local agencies to evaluate different Uber policies.
This research was partially supported by the National Science Foundation Grant No. CMMI-1634973, the C2SMART Tier-1 University Transportation Center and the Secretaría de Educación Superior, Ciencia, Tecnología e Innovación (SENESCYT) Ecuador.
Robust reinforcement learning: A case study in linear quadratic regulation
This research, whose principal author is Ph.D. student Bo Pang, was directed by Zhong-Ping Jiang, professor in the Department of Electrical and Computer Engineering.
As an important and popular method in reinforcement learning (RL), policy iteration has been widely studied by researchers and utilized in different kinds of real-life applications by practitioners.
Policy iteration involves two steps: policy evaluation and policy improvement. In policy evaluation, a given policy is evaluated based on a scalar performance index. Then this performance index is utilized to generate a new control policy in policy improvement. These two steps are iterated in turn, to find the solution of the RL problem at hand. When all the information involved in this process is exactly known, the convergence to the optimal solution can be provably guaranteed, by exploiting the monotonicity property of the policy improvement step. That is, the performance of the newly generated policy is no worse than that of the given policy in each iteration.
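For the LQR special case discussed below, exact policy iteration (often attributed to Hewer) can be written in a few lines. The sketch below assumes an already-stable toy system, so the zero gain is an admissible initial policy; it illustrates only the evaluation/improvement loop, not the authors' robustness analysis:

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov, solve_discrete_are

# Toy discrete-time system x_{k+1} = A x_k + B u_k. A is already stable,
# so K = 0 is a valid stabilizing initial policy (an assumption made for
# simplicity here).
A = np.array([[0.9, 0.2], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)            # state cost
R = np.array([[1.0]])    # input cost

K = np.zeros((1, 2))     # initial stabilizing policy u = -K x
for _ in range(30):
    Acl = A - B @ K
    # Policy evaluation: the cost matrix P of the current policy solves
    # the discrete Lyapunov equation Acl^T P Acl - P + Q + K^T R K = 0.
    P = solve_discrete_lyapunov(Acl.T, Q + K.T @ R @ K)
    # Policy improvement: greedy gain with respect to the evaluated cost.
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

P_opt = solve_discrete_are(A, B, Q, R)   # Riccati solution for reference
# P converges to P_opt; each improvement step is monotonically no worse.
```

When every step is exact, as here, convergence to the Riccati solution is guaranteed; the paper's question is what survives when each step is corrupted by error.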
However, in practice policy evaluation and policy improvement can hardly be implemented precisely, owing to various errors induced by function approximation, state estimation, sensor noise, external disturbances, and so on. A natural question to ask is therefore: when is a policy iteration algorithm robust to errors in the learning process? In other words, under what conditions on the errors does policy iteration still converge to (a neighborhood of) the optimal solution? And how can the size of this neighborhood be quantified?
This paper studies the robustness of reinforcement learning algorithms to errors in the learning process. Specifically, the researchers revisit the benchmark problem of discrete-time linear quadratic regulation (LQR) and study a long-standing open question: under what conditions is the policy iteration method robustly stable from a dynamical systems perspective?
Using advanced stability results from control theory, the authors show that policy iteration for LQR is inherently robust to small errors in the learning process and enjoys small-disturbance input-to-state stability: whenever the error in each iteration is bounded and small, the solutions of the policy iteration algorithm are also bounded and, moreover, enter and stay in a small neighborhood of the optimal LQR solution. As an application, a novel off-policy optimistic least-squares policy iteration is proposed for the LQR problem when the system dynamics are subject to additive stochastic disturbances. The new results in robust reinforcement learning are validated by a numerical example.
This work was supported in part by the U.S. National Science Foundation.
Asymptotic trajectory tracking of autonomous bicycles via backstepping and optimal control
Zhong-Ping Jiang, professor of electrical and computer engineering (ECE) and member of the C2SMART transportation research center at NYU Tandon, directed this research. Leilei Cui, a Ph.D. student in the ECE Department is lead author. Zhengyou Zhang and Shuai Wang from Tencent are co-authors.
This paper studies the trajectory tracking and balance control problem for an autonomous bicycle (one that is ridden like a normal bicycle before automatically traveling by itself to the next user), a non-minimum-phase, strongly nonlinear system.
As compared with most existing methods dealing only with approximate trajectory tracking, this paper solves a longstanding open problem in bicycle control: how to develop a constructive design to achieve asymptotic trajectory tracking with balance. The crucial strategy is to view the controlled bicycle dynamics from an interconnected system perspective.
More specifically, the nonlinear dynamics of the autonomous bicycle are decomposed into two interconnected subsystems: a tracking subsystem and a balancing subsystem. For the tracking subsystem, the popular backstepping approach is applied to determine the propulsive force of the bicycle. For the balancing subsystem, optimal control is applied to determine the steering angular velocity of the handlebar in order to balance the bicycle and align it with the desired yaw angle. To tackle the strong coupling between the tracking and balancing subsystems, the small-gain technique is applied for the first time to prove the asymptotic stability of the closed-loop bicycle system. Finally, the efficacy of the proposed exact trajectory tracking control methodology is validated by numerical simulations.
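To give a flavor of the backstepping step used for the tracking subsystem, here is a minimal textbook-style sketch on a double integrator tracking a sinusoid. This is not the paper's bicycle model; the gains, reference, and initial state are all invented for illustration:

```python
import numpy as np

# Backstepping tracking for a double integrator x1' = x2, x2' = u,
# following a sinusoidal reference r(t). Error coordinates:
#   z1 = x1 - r (tracking error), z2 = x2 - alpha (deviation from the
#   virtual control alpha that would drive z1 to zero).
k1, k2 = 2.0, 2.0          # backstepping gains (arbitrary choices)
dt, T = 1e-3, 10.0         # Euler step and horizon
x1, x2 = 1.0, 0.0          # start off the reference

def ref(t):
    """Reference position and its first two derivatives."""
    return np.sin(t), np.cos(t), -np.sin(t)

for step in range(int(T / dt)):
    t = step * dt
    r, rd, rdd = ref(t)
    z1 = x1 - r
    alpha = rd - k1 * z1            # virtual control for x2
    z2 = x2 - alpha
    alpha_dot = rdd - k1 * (x2 - rd)
    u = alpha_dot - z1 - k2 * z2    # backstepping control law
    x1 += dt * x2                   # forward-Euler integration
    x2 += dt * u

# With V = (z1^2 + z2^2)/2, Vdot = -k1*z1^2 - k2*z2^2, so z1, z2 -> 0
# and x1 ends close to the reference sin(T).
```

The interconnection and small-gain arguments in the paper are what make a construction like this work when the balancing subsystem feeds back into the tracking error, which this standalone sketch does not capture.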
"Our contribution to this field is principally at the level of new theoretical development," said Jiang, adding that the key challenge lies in the bicycle's inherent instability and in its having more degrees of freedom than control inputs. "Although the bicycle looks simple, it is much more difficult to control than a car, because riding a bike requires simultaneously tracking a trajectory and balancing the body of the bike. So a new theory is needed for the design of an AI-based, universal controller." He said the work holds great potential for developing control architectures for complex systems beyond bicycles.
The work was done under the aegis of the Control and Network (CAN) Lab led by Jiang, which consists of about 10 people and focuses on the development of fundamental principles and tools for the stability analysis and control of nonlinear dynamical networks, with applications to information, mechanical and biological systems.
The research was funded by the National Science Foundation.
Self-assembly of stimuli-responsive coiled-coil fibrous hydrogels
Jin Kim Montclare, professor of chemical and biomolecular engineering, with affiliations at NYU Langone Health and the NYU College of Dentistry, directed this research with first author Michael Meleties, fellow Ph.D. student Dustin Britton, postdoctoral associate Priya Katyal, and undergraduate research assistant Bonnie Lin.
Owing to their tunable properties, hydrogels comprising stimuli-sensitive polymers are among the most appealing molecular scaffolds because their versatility allows for applications in tissue engineering, drug delivery and other biomedical fields.
Peptides and proteins are increasingly popular building blocks because they can be stimulated to self-assemble into nanostructures such as nanoparticles or nanofibers, enabling gelation: the formation of supramolecular hydrogels that can trap water and small molecules. To generate such smart biomaterials, engineers are developing systems that can respond to a multitude of stimuli, including heat. Although thermosensitive hydrogels are among the most widely studied and well-understood classes of protein biomaterials, substantial progress is also being made in incorporating responsiveness to other stimuli, including pH, light, ionic strength, and redox conditions, as well as the addition of small molecules.
The NYU Tandon researchers, who previously reported a responsive hydrogel formed using a coiled-coil protein, Q, expanded their studies to identify the gelation of Q protein at distinct temperatures and pH conditions.
Using transmission electron microscopy, rheology, and structural analyses, they observed that Q self-assembles into fiber-based hydrogels exhibiting upper critical solution temperature (UCST) behavior, with increased elastic properties at pH 7.4 and pH 10. At pH 6, however, Q forms polydisperse nanoparticles, which do not further self-assemble or undergo gelation; the high net positive charge of Q at pH 6 creates significant electrostatic repulsion that prevents gelation. This study can potentially guide the development of novel scaffolds and functional biomaterials that are sensitive to biologically relevant stimuli.
Montclare explained that upper critical solution temperature (UCST) phase behavior is characterized by a solution that will form a hydrogel when it is cooled below a critical temperature.
"In our case, it is due to the physical crosslinking/entanglement of fibers that our fiber-based hydrogel forms when cooled," she said, adding that when the temperature is raised above the critical temperature, the hydrogel transitions back into solution and most of the fibers should disentangle.
"In our study, we looked at how this process is affected by pH. We believe that the high net charge of the protein at pH 6 creates electrostatic repulsions that prevent the protein from assembling into fibers and further into hydrogels, while at higher pH where there would be less electrostatic repulsion, the protein is able to assemble into fibers that can then undergo gelation."
CO2 doping of organic interlayers for perovskite solar cells
The team reporting on this research was led by André D. Taylor, a professor of chemical and biomolecular engineering at NYU Tandon, and post-doctoral associate Jaemin Kong.
Perovskite solar cells have progressed in recent years, with rapid increases in power conversion efficiency (from under 4% in 2009 to 25.5% today) making them more competitive with silicon-based photovoltaic cells. However, a number of challenges remain before they can become a competitive commercial technology.
One of these challenges involves inherent limitations in the process of p-type doping of organic hole-transporting materials within the photovoltaic cells.
This process, wherein doping is achieved by the ingress and diffusion of oxygen into hole transport layers, is time intensive (several hours to a day), making commercial mass production of perovskite solar cells impractical. The Tandon team, however, discovered a method of vastly increasing the speed of this process through the use of carbon dioxide instead of oxygen.
In perovskite solar cells, doped organic semiconductors are normally required as charge-extraction interlayers situated between the photoactive perovskite layer and the electrodes. The conventional means of doping these interlayers involves adding lithium bis(trifluoromethane)sulfonimide (LiTFSI), a lithium salt, to spiro-OMeTAD, a π-conjugated organic semiconductor widely used as a hole-transporting material in perovskite solar cells; the doping process is then initiated by exposing spiro-OMeTAD:LiTFSI blend films to air and light. Besides being time-consuming, this method depends largely on ambient conditions. By contrast, Taylor and his team reported a fast and reproducible doping method that involves bubbling a spiro-OMeTAD:LiTFSI solution with carbon dioxide (CO2) under ultraviolet light.
They found that the CO2 bubbling process rapidly enhanced the electrical conductivity of the interlayer by 100 times compared to that of a pristine blend film, roughly 10 times higher than that obtained by an oxygen bubbling process. The CO2-treated film also yielded stable, high-efficiency perovskite solar cells without any post-treatments.
Lead author Jaemin Kong explained: “Employing the pre-doped spiro-OMeTAD in perovskite solar cells shortens device fabrication and processing time. Further, it makes the cells much more stable, as most of the detrimental lithium ions in the spiro-OMeTAD:LiTFSI solution were stabilized into lithium carbonate, formed as spiro-OMeTAD was doped during the CO2 bubbling process. The lithium carbonates end up being filtered out when we spin-cast the pre-doped solution onto the perovskite layer. Thus, we could obtain fairly pure doped organic materials for efficient hole-transporting layers.”
Moreover, the team found that the CO2 doping method can be used for p-type doping of other π-conjugated polymers, such as PTAA, MEH-PPV, P3HT, and PBDB-T. Taylor said the team is looking to push the boundary beyond typical organic semiconductors used for solar cells.
“We believe that the wide applicability of CO2 doping to various π-conjugated organic molecules will stimulate research ranging from organic solar cells to OLEDs and OFETs, and even to thermoelectric devices, all of which require controlled doping of organic semiconductors,” he said. “Since this process consumes a quite large amount of CO2 gas, it can also be considered for CO2 capture and sequestration studies in the future. We are hoping that the CO2 doping technique could be a stepping stone for overcoming existing challenges in organic electronics and beyond.”
DeepReDuce: ReLU Reduction for Fast Private Inference
This research was led by Brandon Reagen, assistant professor of computer science and electrical and computer engineering, with Nandan Kumar Jha, a Ph.D. student under Reagen, and Zahra Ghodsi, who obtained her Ph.D. at NYU Tandon under Siddharth Garg, Institute associate professor of electrical and computer engineering.
Concerns surrounding data privacy are influencing how companies use and store users’ data, and lawmakers are passing legislation to improve users’ privacy rights. Deep learning is the core driver of many applications affected by these concerns: it provides high utility in classifying, recommending, and interpreting user data to build user experiences, but requires large amounts of private user data to do so. Private inference (PI) is a solution that provides strong privacy guarantees while preserving the utility of neural networks to power applications.
Homomorphic data encryption, which allows inferences to be made directly on encrypted data, is a solution that addresses the rise of privacy concerns for personal, medical, military, government and other sensitive information. However, the primary challenge facing private inference is that computing on encrypted data levies an impractically high penalty on latency, stemming mostly from non-linear operators like ReLU (rectified linear activation function).
Solving this challenge requires new optimization methods that minimize network ReLU counts while preserving accuracy. One approach is minimizing the use of ReLU by eliminating uses of this function that do little to contribute to the accuracy of inferences.
“What we are trying to do there is rethink how neural nets are designed in the first place,” said Reagen. “You can skip a lot of these time- and computationally-expensive ReLU operations and still get high-performing networks at 2 to 4 times faster run time.”
The team proposed DeepReDuce, a set of optimizations for the judicious removal of ReLUs to reduce private inference latency. The researchers tested this by dropping ReLUs from classic networks to significantly reduce inference latency while maintaining high accuracy.
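The cost model behind such optimizations can be illustrated with a toy forward pass that counts ReLU evaluations, the operations that dominate private-inference latency. This sketch is not DeepReDuce itself; the layer sizes and the choice of which ReLUs to keep are arbitrary, for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)
layers = [rng.normal(size=(16, 16)) for _ in range(4)]   # toy 4-layer MLP
keep_relu = [True, False, True, False]   # hypothetical "reduced" network

def forward(x):
    """Forward pass that tallies how many ReLU evaluations occur."""
    relu_ops = 0
    for W, use_relu in zip(layers, keep_relu):
        x = x @ W
        if use_relu:
            x = np.maximum(x, 0.0)   # ReLU: one non-linear op per unit
            relu_ops += x.size
    return x, relu_ops

_, relu_ops = forward(rng.normal(size=(1, 16)))
# Dropping ReLUs from two of the four layers halves the non-linear op
# count (32 instead of 64 here), which is what cuts PI latency; the hard
# part, which DeepReDuce addresses, is choosing which ReLUs to drop
# without losing accuracy.
```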
The team found that, compared to the state of the art for private inference, DeepReDuce improved accuracy by up to 3.5% (at equal ReLU count) and reduced ReLU count by up to 3.5× (at equal accuracy).
The work extends an innovation called CryptoNAS, described in an earlier paper whose authors include Ghodsi and a third Ph.D. student, Akshaj Veldanda. CryptoNAS optimizes the use of ReLUs much as one might rearrange rocks in a stream to optimize the flow of water: it rebalances the distribution of ReLUs in the network and removes redundant ones.
The investigators will present their work on DeepReDuce at the 2021 International Conference on Machine Learning (ICML) from July 18-24, 2021.
Teaching Responsible Data Science: Charting New Pedagogical Territory
Julia Stoyanovich, director of the Center for Responsible AI (R/AI) at NYU Tandon, and assistant professor of computer science and engineering and of data science, co-authored this paper with Armanda Lewis, a graduate student pursuing her master’s at the NYU Center for Data Science.
The authors detail their development of and pedagogy for a technical course focused on responsible data science, which tackles the issues of ethics in AI, legal compliance, data quality, algorithmic fairness and diversity, transparency of data and algorithms, privacy, and data protection.
The ability to interpret machine-assisted decision-making is an important component of responsible data science, and offers a useful lens on other responsible data science topics, including privacy and fairness. The researchers’ study includes best practices for teaching technical data science and AI courses that focus on interpretability, and ties responsible data science to current research in the learning sciences and learning analytics.
The work also explores the use of “nutritional labels” — a family of interpretability tools that are gaining popularity in responsible data science research and practice — for interpreting machine learning models.
- In the paper, the investigators offer a description of a unique course on responsible data science that is geared toward technical students, and incorporates topics from social science, ethics and law.
- The work connects theories and advances within the learning sciences to the teaching of responsible data science, specifically, interpretability — allowing humans to understand, trust and, if necessary, contest the computational process and its outcomes. The study asserts that interpretability is central to the critical study of the underlying computational elements of machine learning platforms.
- The collaborators assert that they are among the first to consider the pedagogical implications of responsible data science, creating parallels between cutting-edge data science research and cutting-edge educational research within the fields of learning sciences, artificial intelligence in education, and learning analytics and knowledge.
Additionally, the authors propose a set of pedagogical techniques for teaching the interpretability of data and models, positioning interpretability as a central integrative component of responsible data science.
On the design of an optimal flexible bus dispatching system with modular bus units: Using the three-dimensional macroscopic fundamental diagram
This research was led by Monica Menendez, Global Network professor of civil and urban engineering, and Joseph Chow, deputy director of the C2SMART University Transportation Center at NYU Tandon.
This project proposes a flexible bus dispatching system using automated modular vehicle technology, and considers multimodal interactions and congestion propagation dynamics.
This study proposes a novel flexible bus dispatching system in which a fleet of fully automated modular bus units, together with conventional buses, serves passenger demand. These modular bus units can operate either individually or combined, forming larger modular buses with a higher passenger capacity. This provides enormous flexibility in managing service frequencies and vehicle allocation, thereby reducing operating costs and improving passenger mobility.
The investigators developed an optimization model to determine the optimal composition of modular bus units and the optimal service frequency at which buses (both conventional and modular) should be dispatched on each bus line. They explicitly accounted for the dynamics of traffic congestion and the complex interactions between modes at the network level, based on a recently proposed three-dimensional macroscopic fundamental diagram (3D-MFD). To the best of Chow and Menendez's knowledge, this is the first application of the 3D-MFD and modular bus units to the frequency-setting problem in bus operations.
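A drastically simplified version of the frequency-setting problem conveys the trade-off the optimization model navigates: operator cost grows with the number of units dispatched, passenger waiting cost shrinks with frequency, and capacity must cover demand. All parameters below are invented, and the sketch omits the multimodal congestion dynamics (the 3D-MFD) that are the paper's core contribution:

```python
# Toy single-line frequency-setting problem (illustrative parameters).
demand = 600.0          # passengers per hour
cap_per_unit = 30.0     # passengers per modular unit
unit_cost = 40.0        # operating cost per unit-dispatch ($)
value_of_time = 20.0    # passenger waiting cost ($ per hour)

best = None
for units in range(1, 5):            # modular units coupled per bus
    for freq in range(1, 41):        # dispatches per hour
        if freq * units * cap_per_unit < demand:
            continue                 # capacity cannot cover demand
        # Average wait under random arrivals is half the headway.
        wait_cost = value_of_time * demand * 0.5 / freq
        cost = unit_cost * units * freq + wait_cost
        if best is None or cost < best[0]:
            best = (cost, units, freq)

cost, units, freq = best             # cheapest feasible configuration
```

Even this toy version shows why coupling units is not always worthwhile: larger buses cut the frequency needed for capacity but raise per-dispatch cost and passenger waits, and the balance shifts with demand.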
Using this framework, the researchers showed that system-wide costs improve when the number of combined modular bus units and their dispatching frequencies are adjusted to changes in car and bus passenger demand. A comparison with the commonly used approach that considers only the bus system (neglecting the complex multimodal interactions and congestion propagation) reveals the value of the proposed modeling framework.
Impact of COVID-19 behavioral inertia on reopening strategies for New York City transit
This research was led by Joseph Chow, deputy director of the C2SMART University Transportation Center at NYU Tandon. Co-authors included Kaan Ozbay, director of C2SMART, and Shri Iyer, its managing director. Chow and Ozbay are professors in the Department of Civil and Urban Engineering.
The COVID-19 pandemic has affected travel behaviors and transportation system operations, and raised new challenges for public transit. Cities are grappling with what policies can be effective for a phased reopening shaped by social distancing.
The C2SMART researchers used a baseline model for pre-COVID conditions to create a new model representing travel behavior during the COVID-19 pandemic. They achieved this both by recalibrating the population agendas to include work-from-home, and by re-estimating the mode choice model (to fit observed traffic and transit ridership data) for the Center’s MATSim-NYC platform, a multi-agent simulation test bed for evaluating emerging transportation technologies and policies. They then analyzed the increase in car traffic due to the phased reopening plan guided by the state government of New York.
Analyzing four reopening phases and two scenarios (with and without transit capacity restrictions), they found that a reopening with 100% transit capacity may see only about 73% of pre-COVID ridership, with car trips increasing to as much as 142% of pre-pandemic levels. They also found that limiting transit capacity to 50% would further decrease transit ridership from 73% to 64%, while increasing car trips to as much as 143% of pre-pandemic levels.
They noted that, while the increase appears small, the impact on consumer surplus is disproportionately large due to already increased traffic congestion. Many of the trips also get shifted to other modes like micromobility.
The findings imply that a transit capacity restriction policy during reopening needs to be accompanied by (1) support for micromobility modes, particularly in non-Manhattan boroughs, and (2) congestion alleviation policies that focus on reducing traffic in Manhattan, such as cordon-based pricing.