Research News
New Mathematical Model Shows How Economic Inequalities Affect Migration Patterns
For as long as there have been humans, there have been migrations — some driven by the promise of a better life, others by the desperate need to survive. But while the world has changed dramatically, the mathematical models used to explain how people move have often lagged behind reality. A new study in PNAS Nexus from a team led by Institute Professor Maurizio Porfiri argues that the patterns of human movement can’t be fully understood without reckoning with inequality.
For decades, researchers have relied on models that treat all cities and regions as if they were equal. The “radiation” and “gravity” models, the workhorses of mobility science, describe migration as a function of population size and distance: how many people live in one place, and how far they have to go to reach another. These equations have been useful for predicting broad commuting and migration trends, but they share a blind spot: they assume that opportunities and living conditions are evenly distributed. In a world where climate change, war, and widening economic divides are shaping the way people move, that assumption no longer makes sense.
Porfiri and his colleagues built a new model that explicitly incorporates inequality. It assigns each location a different “opportunity distribution,” a measure of how attractive it is based on social, economic, or environmental conditions. Cities or towns suffering from war, poverty, or environmental disasters are penalized in the model; their residents are more likely to leave, and outsiders are less likely to move in. The result is a mathematical system that behaves more like the real world.
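To make the contrast with the older models concrete, here is a minimal, hypothetical sketch (not the authors' actual formulation) of how a classic gravity-style flow estimate might be extended with a per-location "opportunity" weight. The function names, the multiplicative form, and the example numbers are illustrative assumptions.

```python
import numpy as np

def gravity_flow(pop_origin, pop_dest, distance, beta=2.0):
    """Classic gravity-style estimate: flows grow with population and shrink with distance."""
    return pop_origin * pop_dest / distance**beta

def opportunity_weighted_flow(pop_origin, pop_dest, distance,
                              opp_origin, opp_dest, beta=2.0):
    """Illustrative extension: scale flows by the relative attractiveness of each place.

    `opp_*` are hypothetical opportunity scores (economic, social, environmental);
    a low score at the origin pushes residents out, a high score at the destination
    draws newcomers in. The simple multiplicative form is an assumption for illustration.
    """
    push = 1.0 / max(opp_origin, 1e-6)   # worse conditions at home -> more out-migration
    pull = opp_dest                       # better conditions away -> more in-migration
    return gravity_flow(pop_origin, pop_dest, distance, beta) * push * pull

# Two county pairs with identical populations and distances but different conditions
print(opportunity_weighted_flow(100_000, 500_000, 120, opp_origin=0.4, opp_dest=1.2))
print(opportunity_weighted_flow(100_000, 500_000, 120, opp_origin=1.0, opp_dest=1.0))
```

Under a plain gravity model the two pairs above would produce identical flows; the opportunity weights are what let inequality between places pull the estimates apart.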
The team tested their model in two settings: South Sudan and the United States—places that could hardly be more different, yet both marked by deep disparities. In South Sudan, years of civil conflict and catastrophic flooding have displaced millions. The researchers assembled a new dataset that tracked these internal movements across the country’s counties between 2020 and 2021. When they compared their inequality-aware model to the traditional one, the difference was stark. The new approach captured how people fled not just from areas of violence but also from those hit hardest by floods, revealing the powerful influence of environmental stress on migration. In fact, flooding alone explained more of the observed migration patterns than conflict did.
In the United States, the researchers turned their attention to a more familiar form of movement: the daily commute. Using data from the American Community Survey, they explored how factors like income inequality, poverty, and housing costs shape commuting flows between counties. Once again, inequality mattered. The model showed that places where rent consumed a larger share of income, or where poverty was more widespread, had distinctive commuting patterns — ones that standard models could not explain.
What the study suggests is that mobility is as much a story of inequality as it is of geography. People do not simply move because of distance or population pressure; they move because some places have become unlivable, unaffordable, or unsafe. “Mobility reflects human aspiration, but also human constraint,” said Porfiri, who serves as Director of the Center for Urban Science + Progress, Interim Chair of the Department of Civil and Urban Engineering, and Director of NYU’s Urban Institute. “Understanding both sides of that equation is crucial if we want to plan for the future.”
The implications are far-reaching. As climate change intensifies floods, droughts, and heat waves, and as economic gaps widen within and between nations, migration pressures are likely to grow. Models like this one could help policymakers anticipate where displaced people will go, and what stresses those movements might place on cities and infrastructure. They could also inform strategies to reduce inequality itself — by identifying which regions are most vulnerable to losing their populations, and which are absorbing more than they can sustain.
Alongside Porfiri, contributing authors include Alain Boldini of the New York Institute of Technology, Manuel Heitor of Instituto Superior Técnico, Lisbon, Salvatore Imperatore and Pietro De Lellis of the University of Naples, Rishita Das of the Indian Institute of Science, and Luis Ceferino of the University of California Berkeley. This study was funded in part by the National Science Foundation.
Alain Boldini, Pietro De Lellis, Salvatore Imperatore, Rishita Das, Luis Ceferino, Manuel Heitor, Maurizio Porfiri, Predicting the role of inequalities on human mobility patterns, PNAS Nexus, Volume 5, Issue 1, January 2026, pgaf407, https://doi.org/10.1093/pnasnexus/pgaf407
How Policy, People, and Power Interact to Determine the Future of the Electric Grid
When energy researchers talk about the future of the grid, they often focus on individual pieces: solar panels, batteries, nuclear plants, or new transmission lines. But in a recent study, urban systems researcher Anton Rozhkov takes a different approach — treating the energy system itself as a complex, evolving organism shaped as much by policy and human behavior as by technology.
Rozhkov’s research, recently published in PLOS Complex Systems, models the electricity system of Northern Illinois, focusing on the service territory of Commonwealth Edison (ComEd). Rather than trying to predict exactly how much electricity the region will generate or consume decades from now, the model explores how the system’s overall behavior changes under different long-term scenarios.
“This work is not intended as a deterministic prediction,” Rozhkov, Industry Assistant Professor in the Center for Urban Science + Progress, explains. “What’s important for complex systems is the general trend, the system’s trajectory, whether something is increasing or decreasing under various scenarios, and how steep and fast that change is.”
The study uses a system dynamics framework, a method designed to capture feedback loops and interactions over time. Rozhkov modeled both electricity generation — reflecting Illinois’s current distinctive mix, including a significant share of nuclear power — and demand, then explored how the balance shifts across a 50-year horizon. Illinois is an especially interesting test case, he notes, because few U.S. states rely as heavily on nuclear energy, making the transition to low-carbon systems more nuanced than a simple fossil-fuel-to-renewables swap.
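As a rough illustration of the system-dynamics idea (not Rozhkov's actual model), the sketch below steps two quantities forward year by year with a feedback loop in which growing distributed generation erodes demand on the central grid. All rates, starting values, and variable names are assumptions for illustration only.

```python
# Minimal stock-and-flow sketch of a 50-year electricity-system trajectory.
# All parameters are illustrative assumptions, not values from the study.
years = 50
demand = 90.0             # TWh/yr served by the central grid (assumed starting point)
distributed_share = 0.02  # fraction of demand met by rooftop/local generation
demand_growth = 0.01      # baseline annual growth in electricity demand
policy_push = 0.015       # extra adoption rate under a supportive policy scenario

history = []
for year in range(years):
    # Feedback loop: the more distributed generation exists, the faster it spreads
    # and the more centralized demand it displaces the following year.
    adoption = 0.01 + policy_push * (1 - distributed_share)
    distributed_share = min(1.0, distributed_share + adoption * distributed_share)
    demand = demand * (1 + demand_growth) * (1 - distributed_share * 0.02)
    history.append((year, round(demand, 1), round(distributed_share, 3)))

print(history[::10])  # inspect the trend once per decade
```

What matters in this style of modeling is exactly what Rozhkov emphasizes: not the precise numbers, but whether the trajectories rise or fall under each scenario and how steeply.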
From this baseline, Rozhkov examined five broad scenarios. Some focused on technology, such as a future dominated by renewable energy, with or without the development of large-scale battery storage. Others reflected policy goals, including a scenario aligned with Illinois’s Climate and Equitable Jobs Act, which sets a target of economy-wide climate neutrality by 2040. Still others explored changes in how people live and consume energy, including denser urban development and widespread adoption of distributed energy resources by households and neighborhoods.
Across these scenarios, the model tracked economic costs and environmental outcomes, particularly greenhouse gas emissions. One striking pattern emerged when decentralized energy production — such as rooftop solar paired with the ability to sell excess power back to the grid — became widespread. In that case, demand for centralized electricity generation steadily declined, creating a need for policies to support the shift: a clear example of an integrated approach. “We can clearly see that utilities would be needed to produce energy differently in the future, and the energy market should be ready for it,” Rozhkov says.
This is the central concept in the paper, which Rozhkov calls a “policy-driven transition.” Policy, in this context, is not a physical component of the grid but a force that shapes decisions. Incentives, tax credits, and regulations can push households and businesses toward clean energy even when natural conditions are less than ideal. He points to the northeastern states as an example: despite limited sunlight, strong solar policies have made rooftop installations attractive. “Policy can move someone from being on the fence about installing solar to actually doing it,” he says.
The research also highlights the growing role of decentralization, in which individual buildings or districts generate much of their own power. In Illinois’s deregulated electricity market, customers can sell excess energy back to the grid, blurring the line between consumer and producer. Beyond cost savings, decentralization can improve resilience during outages or extreme events, allowing communities to maintain power independently when the main grid fails.
Importantly, Rozhkov’s findings suggest that no single lever — technology, policy, or individual motivation — can drive a successful energy transition on its own. “Isolated single-solution approaches (technology-only, policy-only, planning-only) are rarely enough. The transition emerges from the interactions across all of them, and that’s something we need to consider as urban scientists,” he says. The scenarios that performed best combined strong policy frameworks, technological change, and shifts in behavior and urban design.
Although the model was developed for Northern Illinois, it is designed to be adaptable. With sufficient data, it could be applied to other regions, from New York City to Texas, allowing researchers to explore how different regulatory environments shape energy futures. Rozhkov’s next steps include comparing states with contrasting policies, exploring whether policies or natural conditions are driving the transition to a more renewable-based energy profile, and digging deeper into the behavioral question of why people choose to adopt distributed energy in the first place.
Rozhkov, A. (2025). Decentralized Renewable Energy Integration in the urban energy markets: A system dynamics approach. PLOS Complex Systems, 2(12). https://doi.org/10.1371/journal.pcsy.0000083
Predicting the Peak: New AI Model Prepares NYC’s Power Grid for a Warmer Future
Buildings produce a large share of New York's greenhouse gas emissions, but predicting future energy demand — essential for reducing those emissions — has been hampered by missing data on how buildings currently use energy.
Semiha Ergan of NYU Tandon’s Civil and Urban Engineering (CUE) Department is pursuing research that addresses the problem from two directions. Both projects, conducted with CUE Ph.D. student Heng Quan, apply machine learning to forecast building energy use, including short-term (day-ahead) predictions to support grid peak management and longer-term (monthly) projections of how climate change may affect energy demand and building-grid interactions.
First, Ergan and Quan introduced STARS (Synthetic-to-real Transfer for At-scale Robust Short-term forecasting), which predicts 24-hour-ahead electricity use across buildings in New York State. The practical problem, Ergan said, is that most buildings lack the historic submetered sensor data that conventional forecasting models require.
"In reality, most buildings only have monthly electricity use data,” said Ergan, noting that detailed sensor data remains rare even in buildings subject to energy reporting requirements like New York City’s Local Law 84 and 87.
STARS sidesteps that limitation by training on thousands of simulated building profiles from the U.S. Department of Energy's ComStock library, then transferring what it learns to real buildings. Tested against actual consumption data from 101 New York State buildings, the model achieved 12.07 percent error in summer and 11.44 percent in winter, below the roughly 30 percent threshold that industry guidelines consider well-calibrated.
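A heavily simplified sketch of the synthetic-to-real idea (not the STARS architecture itself): pretrain a forecaster on plentiful simulated load profiles, continue training on the little real metered data available, and score the day-ahead forecast with a percent-error metric. The model choice, data generator, and shapes below are assumptions for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Illustrative day-ahead pairs: 24 hourly lags -> next 24 hours of load.
def make_profiles(n, noise):
    t = np.arange(48)
    base = 50 + 20 * np.sin(2 * np.pi * t / 24)
    data = base + rng.normal(0, noise, size=(n, 48))
    return data[:, :24], data[:, 24:]

X_sim, y_sim = make_profiles(5000, noise=2.0)   # plentiful simulated building profiles
X_real, y_real = make_profiles(50, noise=5.0)   # scarce real metered buildings
X_test, y_test = make_profiles(20, noise=5.0)

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=200,
                     warm_start=True, random_state=0)
model.fit(X_sim, y_sim)     # 1) pretrain on simulation
model.fit(X_real, y_real)   # 2) fine-tune on the few real profiles (warm start keeps weights)

pred = model.predict(X_test)
mape = np.mean(np.abs(pred - y_test) / np.abs(y_test)) * 100
print(f"mean absolute percentage error: {mape:.2f}%")
```

The published model reports its summer and winter errors against the roughly 30 percent calibration threshold mentioned above; the sketch simply shows where a number like that comes from.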
The 24-hour forecast window is designed for demand response programs. During heat waves, grid operators need day-ahead predictions to coordinate building energy use — pre-cooling spaces before peak hours, for example — to prevent blackouts. "Eventually, this will help advance grid efficiency and citizen comfort,” said Ergan.
In complementary work with Quan, Ergan is examining how climate change will impact New York City buildings’ energy use over longer timeframes. They developed a physics-based machine learning model to address the poor extrapolation performance of purely data-driven methods.
The model is trained on real building energy consumption data from NYC Local Law 84 covering over 1,000 buildings and projects monthly energy use under warming scenarios of 2 to 4 degrees Fahrenheit. By incorporating physics-based knowledge into the machine learning framework, it enables robust projections even without historical data from the future climate conditions buildings will face.
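One common way to blend physical knowledge with a data-driven model, shown here only as a hedged illustration of the general approach (the paper's actual formulation may differ), is to feed a physically meaningful feature such as cooling degree days into the regression, so the model extrapolates sensibly to temperatures warmer than any it has seen.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

# Illustrative monthly data: mean outdoor temperature (F) and observed energy use.
temps = rng.uniform(20, 90, size=240)
true_use = 300 + 4.0 * np.maximum(temps - 65, 0) + rng.normal(0, 10, 240)

# Physics-based feature: cooling degree days above a 65 F balance point, a standard
# building-energy quantity (the balance point here is an assumption).
def cdd(temp_f, balance=65.0):
    return np.maximum(temp_f - balance, 0.0)

X = np.column_stack([temps, cdd(temps)])
model = LinearRegression().fit(X, true_use)

# Project monthly use under a +4 F warming scenario.
warmer = temps + 4.0
baseline = model.predict(np.column_stack([temps, cdd(temps)]))
future = model.predict(np.column_stack([warmer, cdd(warmer)]))
print(f"projected average increase: {100 * (future.mean() / baseline.mean() - 1):.1f}%")
```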
Their methodology revealed, for example, that a 4-degree increase could raise summer energy use by an average of 7.6 percent.
Together, the two projects address building energy use from complementary time scales: short-term forecasting enables day-ahead grid coordination during peak demand events, while long-term climate projections inform infrastructure planning and policy. Both ultimately support greenhouse gas reduction goals, as operational energy use converts directly to emissions.
The research was funded by NYU Tandon's IDC Innovation Hub.
Heng Quan and Semiha Ergan. 2025. Sim-to-Real Transfer Learning for Large-Scale Short-Term Building Energy Forecasting in Sustainable Cities. In Proceedings of the 12th ACM International Conference on Systems for Energy-Efficient Buildings, Cities, and Transportation (BuildSys '25). Association for Computing Machinery, New York, NY, USA, 86–95. https://doi.org/10.1145/3736425.3772006
Mass Shootings Trigger Starkly Different Congressional Responses on Social Media Along Party Lines, NYU Tandon Study Finds
After mass shootings, Democrats are nearly four times more likely than Republicans to post about guns on social media, but the disparity goes deeper than volume, according to research from NYU Tandon School of Engineering.
The study analyzed the full two-year term of the 117th Congress using computational methods designed to distinguish true cause-and-effect relationships from mere coincidence. The analysis reveals that mass shootings directly trigger Democratic posts within roughly two days, while Republicans show no such causal response.
Topic analysis of gun-related posts, moreover, reveals strikingly different foci. Democrats are far more likely to zero in on legislation, communities, families, and victims, while Republicans more often center on Second Amendment rights, law enforcement, and crime.
The study, in PLOS Global Public Health, analyzed 785,881 total posts from 513 members of the 117th Congress on X over two years, and identified 12,274 gun-related posts using keyword-based filtering. The team tracked how legislators’ posting related to 1,338 mass shooting incidents that occurred between January 2021 and January 2023.
"In this research, we tested a long-running concern, namely that when it comes to gun violence, Americans too often talk past each other instead of with each other,” said Institute Professor Maurizio Porfiri, the paper’s senior author. Porfiri is director of NYU Tandon’s Center for Urban Science + Progress (CUSP) and of its newly formed Urban Institute. “Our findings expose a fundamental difference in the way we mourn and react in the aftermath of a mass shooting. Building consensus with these stark differences across the political aisle becomes exceptionally difficult under these conditions.”
The research team employed the PCMCI+ causal discovery algorithm, which uses statistical methods to identify whether one event genuinely causes another or whether they simply occur around the same time. Combined with mixed-effects logistic regression and topic modeling, this approach revealed that Democrats respond causally to shooting severity — particularly the number of fatalities — both immediately and the day after incidents. Posting likelihood rose especially when incidents occurred in legislators' home states.
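The full PCMCI+ pipeline is involved, but the underlying question can be sketched with a far simpler lagged regression: does a shooting on day t raise the probability of a gun-related post on days t+1 and t+2, after controlling for recent posting? The synthetic series and model below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
days = 730  # a two-year congressional term

# Illustrative daily series: shooting fatalities and whether a legislator posted about guns.
fatalities = rng.poisson(0.2, size=days)
posted = (rng.random(days) < 0.05 + 0.3 * np.minimum(fatalities, 1)).astype(int)

# Lagged design matrix: fatalities today, yesterday, two days ago, plus recent posting.
rows, target = [], []
for t in range(2, days):
    rows.append([fatalities[t], fatalities[t - 1], fatalities[t - 2], posted[t - 1]])
    target.append(posted[t])

model = LogisticRegression().fit(np.array(rows), np.array(target))
for name, coef in zip(["same day", "lag 1", "lag 2", "posted yesterday"], model.coef_[0]):
    print(f"{name:>16}: {coef:+.2f}")
```

Causal-discovery methods like PCMCI+ go further than this sketch, testing conditional independence across many lags and variables to rule out coincidental timing.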
"The findings matter because they expose a structural problem in how Americans address gun violence as a nation," said CUSP Assistant Research Scientist Dmytro Bukhanevych, the paper’s lead author. "When Democrats surge onto social media after mass shootings while Republicans have a comparatively smaller level of response, there's no shared moment of attention. The asymmetry itself becomes a barrier to meaningful exchange."
The research may help advocates and policymakers time interventions more strategically. With congressional attention peaking immediately and declining within roughly 48 hours, sustained public pressure needs to extend well beyond the immediate aftermath. And understanding the distinct frames each party uses could help communicators craft messages that bridge ideological divides rather than reinforcing them.
Along with Porfiri and Bukhanevych, CUSP Ph.D. candidate Rayan Succar served as the PLOS paper’s co-author.
This study contributes to Porfiri’s ongoing research related to U.S. gun prevalence and violence, which he and colleagues are pursuing under a National Science Foundation grant to study the “firearm ecosystem” in the United States. Prior published research under the grant explores:
- the degree to which political views, rather than race, shape reactions to mass shooting data;
- the role that cities’ population size plays in the incidence of gun homicides, gun ownership, and licensed gun sellers;
- motivations of fame-seeking mass shooters;
- factors that prompt gun purchases;
- state-by-state gun ownership trends; and
- forecasting monthly gun homicide rates.
Buffalo's Deadly Blizzard Revealed When Travel Bans Lose Their Power Over Time
When Buffalo, New York’s devastating December 2022 blizzard claimed more than 30 lives, it exposed a hard reality: even life-saving travel bans can lose their force over time, especially when residents face situations where compliance becomes difficult. The disruption stretched on for days, straining households' ability to stay supplied without venturing out.
Researchers at NYU Tandon School of Engineering and Rochester Institute of Technology (RIT) have now developed a way to help authorities anticipate when these breakdowns may begin.
Published in Transport Policy, the study introduces a predictive framework using weather indicators — snowfall, temperature, and snow depth — to estimate how quickly a travel ban may start to lose effectiveness.
"Agencies have the option to implement travel bans during life-threatening storms," said Professor Kaan Ozbay, the paper's senior author and founding Director of NYU Tandon's C2SMART transportation research center. "But a ban that works for a 24-hour storm may not hold for a week-long event. This framework helps officials understand those differences and plan accordingly."
The research compared two Buffalo storms weeks apart in late 2022. Both involved travel restrictions, but travel patterns diverged sharply. Analyzing travel-time and speed estimates from vehicles and navigation systems, researchers tracked how movement changed around the period when restrictions were in effect, identifying statistical "turning points" when travel began shifting back toward normal.
During December 2022's blizzard, travel patterns rebounded before officials lifted the ban. November 2022's storm, with more frequent updates and neighborhood-specific adjustments for South Buffalo, showed stronger sustained travel suppression. The contrast suggests policy durability is shaped not only by storm conditions, but also by how long restrictions must be maintained and how well responses adapt to local realities.
“Location-specific modifications, including those made for South Buffalo during the November event, may be associated with improved compliance of the policy compared to the December storm,” explained Eren Kaval, a C2SMART Ph.D. candidate and the paper's first author.
The framework introduces a metric called "Loss of Resilience of Policy," which quantifies how a policy's ability to limit travel deteriorates over time. Regression modeling indicates weather forecast information can help anticipate that trajectory. Harsher conditions — heavier snowfall and greater snow depth — are associated with larger losses of policy resilience, information officials could use during planning.
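As a rough, hypothetical sketch of the kind of quantity such a metric captures (the paper's exact definition may differ), one can measure how far travel drifts back toward its pre-storm level while a ban is still in force, locate the turning point, and accumulate the drift into a single score. The data and thresholds below are invented for illustration.

```python
import numpy as np

# Hourly travel volume as a fraction of the pre-storm baseline while a week-long ban is
# in force. Values are illustrative: suppression is strong at first, then erodes.
hours = np.arange(0, 7 * 24)
travel_ratio = 0.15 + 0.004 * hours + np.random.default_rng(3).normal(0, 0.02, hours.size)
travel_ratio = np.clip(travel_ratio, 0, 1)

# Turning point: first hour at which travel climbs back above half of baseline.
above = travel_ratio > 0.5
turning_point = int(np.argmax(above)) if above.any() else None

# "Loss of resilience"-style score (assumed form): how much observed travel exceeds the
# level seen in the first 24 hours, averaged over the rest of the ban.
initial_level = travel_ratio[:24].mean()
loss = np.maximum(travel_ratio[24:] - initial_level, 0).mean()

print(f"turning point at hour {turning_point}, loss-of-resilience score {loss:.3f}")
```

Regressing a score like this against forecast snowfall, temperature, and snow depth is what lets officials anticipate, before a storm, how long a blanket ban is likely to hold.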
"If forecasts predict heavy snow over five days, officials can anticipate a blanket ban may not hold," Kaval said. "They might design a different approach from the start, such as targeted restrictions for hardest-hit areas, planned food distribution, or phased restrictions acknowledging people will need to venture out."
The researchers found that the breakdown varied across the city. Some neighborhoods exhibited larger shifts than others, with patterns discussed alongside socioeconomic factors like income and education.
"Some communities had fewer options," Kaval said. "If you can't stockpile a week's supplies, staying home that long becomes impossible." This helps explain why blanket bans can falter. They implicitly assume equal capacity to comply when that capacity varies. The framework can help identify where compliance may be hardest to sustain and inform targeted interventions before storms hit.
"The aim isn't to blame residents or agencies," Ozbay said. "It's to help officials design realistic policies from the beginning. If forecasts show a storm will push beyond what most can prepare for, you can build that into your emergency plan by arranging food deliveries, opening warming centers strategically, or implementing rolling restrictions rather than week-long bans."
The alternative — maintaining restrictions residents cannot realistically follow — can erode trust and weaken future emergency orders. Understanding these dynamics could help preserve emergency measures' legitimacy while keeping people safer.
The approach could apply to other prolonged emergencies like hurricanes or floods, where officials must balance safety with what people can sustain.
The Transport Policy paper was inspired by initial findings from a C2SMART joint research project with NYU Wagner led by Sarah Kaufman, Director of the NYU Rudin Center for Transportation & Assistant Clinical Professor of Public Service, examining lessons learned from the 2022 Buffalo blizzard. It also builds on Professor Ozbay's previous work with Zilin Bian, a co-author of the current paper and NYU Tandon Ph.D. graduate, now an assistant professor at RIT, and Jingqin (Jannie) Gao, Assistant Research Director of C2SMART, on using AI and Big Data to quantify the time lag effect in transportation systems when authorities took action in response to the COVID-19 pandemic.
Eren Kaval, Zilin Bian, Kaan Ozbay, Data-driven quantification of the resilience of enforcement policies under emergency conditions: A comparative study of two major winter storms in Buffalo, New York, Transport Policy, Volume 176, 2026, https://doi.org/10.1016/j.tranpol.2025.103893.
New Algorithm Dramatically Speeds Up Stroke Detection Scans
When someone walks into an emergency room with symptoms of a stroke, every second matters. But today, diagnosing the type of stroke, the life-or-death distinction between a clot and a bleed, requires large, stationary machines like CT scanners that may not be available everywhere. In ambulances, rural clinics, and many hospitals worldwide, doctors often have no way to make this determination in time.
For years, scientists have imagined a different world, one in which a lightweight microwave imaging device, no bigger than a bike helmet, could allow clinicians to look inside the head without radiation, without a shielded room, and without waiting. That idea isn’t far-fetched. Microwave imaging technology already exists and can detect changes in the electrical properties of tissues — changes that happen when stroke, swelling, or tumors disrupt the brain’s normal structure.
The real obstacle has always been speed. “The hardware can be portable,” said Stephen Kim, a Research Professor in the Department of Biomedical Engineering at NYU Tandon. “But the computations needed to turn the raw microwave data into an actual image have been far too slow. You can’t wait up to an hour to know if someone is having a hemorrhagic stroke.”
Kim, along with BME Ph.D. student Lara Pinar and Department Chair Andreas Hielscher, believes that barrier may now be disappearing. In a new study published in IEEE Transactions on Computational Imaging, the team describes an innovative algorithm that reconstructs microwave images 10 to 30 times faster than the best existing methods, a leap that could bring real-time microwave imaging from theory into practice.
It’s a breakthrough that didn’t come from building new devices or designing faster hardware, but from rethinking the mathematics behind the imaging itself. Kim recalls spending long nights in the lab watching microwave reconstructions crawl along frame by frame. “You could almost hear the computer groan,” he said. “It was like trying to push a boulder uphill. We knew there had to be a better way.”
At the heart of the problem is how traditional algorithms work. They repeatedly try to “guess” the electrical properties of the tissue, check whether that guess explains the measured microwave signals, and adjust the guess again. This is a tedious process that can require solving large electromagnetic equations hundreds of times.
The team’s new method takes a different path. Instead of demanding a perfectly accurate intermediate solution at every iteration, their algorithm allows quick, imperfect approximations early on and tightens the accuracy only as needed. This shift, which is simple in concept, but powerful in practice, dramatically reduces the number of heavy computations.
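The core trick can be sketched in a few lines: an outer loop updates the estimated properties, while the inner solve is run only loosely at first and tightened as the outer iterations converge. The toy linear problem below is a stand-in for illustration; the actual algorithm operates on large nonlinear electromagnetic systems.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy stand-in for the forward problem: measurements = A @ properties (linear here;
# the real electromagnetic forward model is nonlinear and far larger).
n = 200
A = rng.normal(size=(n, n)) / np.sqrt(n) + np.eye(n)
true_props = rng.normal(size=n)
measurements = A @ true_props

x = np.zeros(n)
for outer in range(10):
    # Accuracy schedule: crude inner solves early, more accurate ones near the end.
    inner_iters = 5 * (outer + 1)

    residual = measurements - A @ x
    step = np.zeros(n)
    # Inexact inner solve of A @ step = residual via a few gradient iterations.
    for _ in range(inner_iters):
        step += 0.1 * (A.T @ (residual - A @ step))
    x += step

    print(f"outer {outer}: misfit {np.linalg.norm(measurements - A @ x):.3e}")
```

Because the early, cheap inner solves still point the outer update in roughly the right direction, the expensive high-accuracy work is spent only where it actually improves the image.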
To make the process even more efficient, the team incorporated several clever tricks: using a compact mathematical representation to shrink the size of the problem, streamlining how updates are computed, and using a modeling approach that remains stable even for complex head shapes.
The results are striking. Reconstructions that once took nearly an hour now appear in under 40 seconds. In tests with real experimental data, including cylindrical targets imaged using a microwave scanner from the University of Manitoba, the method consistently delivered high-quality results in seconds instead of minutes.
For Kim and Hielscher, who have worked collaboratively for decades on optical and microwave imaging techniques, the speed improvement feels like a long-awaited turning point. “We always knew microwave imaging had the potential to be portable and affordable. But without rapid reconstruction, the technology couldn’t make the leap into real clinical settings,” Hielscher said. “Now we’re finally closing that gap.”
The promise extends far beyond stroke detection. Portable microwave devices could one day provide an accessible alternative to mammography in low-resource settings, monitor brain swelling in intensive care units without repeated CT scans, or track tumor responses to therapy by observing subtle changes in tissue composition.
The team is now focused on extending the algorithm to full 3D imaging, a step that would bring microwave tomography even closer to practical deployment. But the momentum is palpable. “We’re taking a technology that has been stuck in the lab for years and giving it the speed it needs to matter clinically,” Kim said. “That’s the part that excites us: imagining how many patients someday might benefit from this.”
Stephen H. Kim, Lara Pinar, and Andreas H. Hielscher, Accelerated Microwave Tomographic Imaging with a PDE-Constrained Optimization Method, IEEE Transactions on Computational Imaging, Volume 11, 1614–1629 (2025).
New AI Language-Vision Models Transform Traffic Video Analysis to Improve Road Safety
New York City's thousands of traffic cameras capture endless hours of footage each day, but analyzing that video to identify safety problems and implement improvements typically requires resources that most transportation agencies don't have.
Now, researchers at NYU Tandon School of Engineering have developed an artificial intelligence system that can automatically identify collisions and near-misses in existing traffic video by combining language reasoning and visual intelligence, potentially transforming how cities improve road safety without major new investments.
Published in the journal Accident Analysis and Prevention, the research won New York City's Vision Zero Research Award, an annual recognition of work that aligns with the City's road safety priorities and offers actionable insights. Professor Kaan Ozbay, the paper's senior author, presented the study at the eighth annual Research on the Road symposium on November 19.
The work exemplifies cross-disciplinary collaboration between computer vision experts from NYU's new Center for Robotics and Embodied Intelligence and transportation safety researchers at NYU Tandon's C2SMART center, where Ozbay serves as Director.
By automatically identifying where and when collisions and near-misses occur, the team’s system — called SeeUnsafe — can help transportation agencies pinpoint dangerous intersections and road conditions that need intervention before more serious accidents happen. It leverages pre-trained AI models that can understand both images and text, representing one of the first applications of multimodal large language models to analyze long-form traffic videos.
"You have a thousand cameras running 24/7 in New York City. Having people examine and analyze all that footage manually is untenable," Ozbay said. "SeeUnsafe gives city officials a highly effective way to take full advantage of that existing investment."
"Agencies don't need to be computer vision experts. They can use this technology without the need to collect and label their own data to train an AI-based video analysis model," added NYU Tandon Associate Professor Chen Feng, a co-founding director of the Center for Robotics and Embodied Intelligence, and paper co-author.
Tested on the Toyota Woven Traffic Safety dataset, SeeUnsafe outperformed other models, correctly classifying videos as collisions, near-misses, or normal traffic 76.71% of the time. The system can also identify which specific road users were involved in critical events, with success rates reaching up to 87.5%.
Traditionally, traffic safety interventions are implemented only after accidents occur. By analyzing patterns of near-misses — such as vehicles passing too close to pedestrians or performing risky maneuvers at intersections — agencies can proactively identify danger zones. This approach enables the implementation of preventive measures like improved signage, optimized signal timing, and redesigned road layouts before serious accidents take place.
The system generates “road safety reports” — natural language explanations for its decisions, describing factors like weather conditions, traffic volume, and the specific movements that led to near-misses or collisions.
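As a loose illustration of how such a pipeline might be wired together (SeeUnsafe's actual prompts and model calls are described in the paper), the sketch below samples frames from a clip and hands them, along with a classification prompt, to a placeholder model call. The `query_vision_language_model` function and the file name are hypothetical stand-ins, not a real API.

```python
import cv2  # OpenCV, used here only to read video frames

PROMPT = (
    "You are a traffic-safety analyst. Given these frames from one intersection clip, "
    "classify the clip as 'collision', 'near-miss', or 'normal', identify the road users "
    "involved, and briefly describe the weather, traffic volume, and movements you see."
)

def sample_frames(path, every_n=30):
    """Grab one frame every `every_n` frames (roughly one per second at 30 fps)."""
    cap = cv2.VideoCapture(path)
    frames, i = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if i % every_n == 0:
            frames.append(frame)
        i += 1
    cap.release()
    return frames

def query_vision_language_model(prompt, frames):
    # Hypothetical stand-in: in practice this would call whichever multimodal LLM the
    # agency uses and return its text answer. A canned reply keeps the sketch runnable.
    return "near-miss: cyclist and left-turning SUV; clear weather, moderate traffic."

frames = sample_frames("intersection_clip.mp4")
print(query_vision_language_model(PROMPT, frames))
```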
While the system has limitations, including sensitivity to object tracking accuracy and challenges with low-light conditions, it establishes a foundation for using AI to “understand” road safety context from vast amounts of traffic footage. The researchers suggest the approach could extend to in-vehicle dash cameras, potentially enabling real-time risk assessment from a driver's perspective.
The research adds to a growing body of work from C2SMART that can improve New York City's transportation systems. Recent projects include studying how heavy electric trucks could strain the city's roads and bridges, analyzing how speed cameras change driver behavior across different neighborhoods, developing a “digital twin” that can find smarter routing to reduce FDNY response times, and a multi-year collaboration with the City to monitor the Brooklyn-Queens Expressway for damage-causing overweight vehicles.
In addition to Ozbay and Feng, the paper's authors are lead author Ruixuan Zhang, a Ph.D. student in transportation engineering at NYU Tandon; Beichen Wang and Juexiao Zhang, both graduate students from NYU's Courant Institute of Mathematical Sciences; and Zilin Bian, a recent NYU Tandon Ph.D. graduate now an assistant professor at Rochester Institute of Technology.
Funding for the research came from the National Science Foundation and the U.S. Department of Transportation's University Transportation Centers Program.
Ruixuan Zhang, Beichen Wang, Juexiao Zhang, Zilin Bian, Chen Feng, Kaan Ozbay, When language and vision meet road safety: Leveraging multimodal large language models for video-based traffic accident analysis, Accident Analysis & Prevention, Volume 219, 2025, 108077, ISSN 0001-4575, https://doi.org/10.1016/j.aap.2025.108077.
Study Shows New Method to Produce Ultrahard Single-Layer Diamond for Industrial Applications
Graphene’s enduring appeal lies in its remarkable combination of lightness, flexibility, and strength. Now, researchers have shown that under pressure, it can briefly take on the traits of one of its more glamorous carbon cousins. By introducing nitrogen atoms and applying pressure, a team of scientists has coaxed bilayer graphene grown through chemical vapor deposition (CVD) into a diamond-like phase — without the need for extreme heat. The finding, reported in Advanced Materials Technologies, shows a scalable way to create ultrathin coatings that combine the toughness of diamond with the processability of graphene.
The work, led by Elisa Riedo, Herman F. Mark Professor in Chemical and Biomolecular Engineering, focuses on the delicate balance between two forms of carbon bonding. In ordinary graphene, carbon atoms connect through sp² bonds in a flat honeycomb arrangement, giving rise to its electrical conductivity and mechanical toughness. Diamond, on the other hand, is built from sp³ bonds in a three-dimensional network that confers exceptional hardness. Converting one to the other typically demands extreme pressure and temperature. The team discovered that nitrogen doping lowers this barrier, allowing the transition to occur at room temperature when the layers are pressed together.
To test the effect, the researchers used CVD bilayer graphene films on silicon dioxide substrates and incorporated nitrogen atoms during the growth process. They then applied mechanical pressure using a technique known as modulated nanoindentation. The nitrogen-doped bilayer films exhibited nearly twice the stiffness of the bare substrate, suggesting the formation of stronger, diamond-like interlayer bonds. By contrast, nitrogen-doped monolayer or thicker multilayer samples showed no comparable stiffening, indicating that the effect depends on both the doping and the precise bilayer structure.
Molecular dynamics simulations provided a possible explanation. The models showed that nitrogen atoms promote the formation of sp³ bonds between the two layers when they are compressed. The nitrogen atoms appear to stabilize these interlayer bonds, effectively “locking” parts of the bilayer into a more diamond-like configuration. This cooperation between chemical doping and pressure points to a previously unrecognized pathway for transforming graphene’s atomic structure.
The implications extend beyond a mere curiosity of carbon chemistry. Because the experiments used large-area graphene grown by chemical vapor deposition, the process is inherently compatible with industrial fabrication methods and wafer scale dimensions. The transformation also occurs under mild conditions, avoiding the high temperatures that typically destroy or distort 2D materials. In principle, the approach could yield ultrathin, lightweight coatings that resist wear and deformation while maintaining the advantages of graphene substrates.
Yet the work raises as many questions as it answers. The extent of the transformation remains uncertain — whether the sp³ bonding is continuous or confined to localized regions under the indenter is not yet clear. Researchers also do not know whether the diamond-like phase persists once the pressure is released, or whether it relaxes back to graphene over time. Understanding how stable and uniform these transformations are will be critical for any practical use.
The effect on electronic behavior also remains to be seen. Diamond-like carbon is typically an electrical insulator, so localized sp³ regions could alter the electronic or optical properties of the film. For device applications, the challenge will be to tune the process so that mechanical and electrical properties can be balanced rather than compromised.
Future research will need to clarify how doping levels, pressure intensity, and substrate choice influence the transformation.
The study suggests that graphene’s versatility may stretch further than expected. By manipulating its atomic environment — through doping, strain, or pressure — researchers may be able to switch between distinct structural phases on demand. Such control could lead to a new generation of adaptive materials, capable of shifting from soft to hard, or from conductive to insulating, depending on their operating conditions.
Graphene has often been described as a material with untapped potential. This work offers another glimpse of that potential, showing that even after more than a decade of intense study, carbon’s simplest form still has surprises left to offer.
This work was supported by the U.S. Army Research Office.
Researchers Quantify Intensity of Emotional Response to Sound, Images and Touch Through Skin Conductance
When we listen to a moving piece of music or feel the gentle pulse of a haptic vibration, our bodies react before we consciously register the feeling. The heart may quicken, and palms may sweat, producing subtle variations in the skin’s electrical resistance. These changes, though often imperceptible, reflect the brain’s engagement with the world. A recent study by researchers at NYU Tandon and the Icahn School of Medicine at Mount Sinai, published in PLOS Mental Health, explores how such physiological signals can reveal cognitive arousal — the level of mental alertness and emotional activation — without the need for subjective reporting.
The researchers, led by Associate Professor of Biomedical Engineering Rose Faghih at NYU Tandon, focused on skin conductance, a well-established indicator of autonomic nervous system activity. When sweat glands are stimulated, even minutely, the skin’s ability to conduct electricity changes. This process, known as electrodermal activity, has long been associated with emotional and cognitive states. What distinguishes this study is the combination of physiological modeling and advanced statistical methods to interpret these subtle electrical fluctuations in response to different sensory experiences.
This research started as a course project for student authors Suzanne Oliver and Jinhan Zhang in Faghih’s “Neural and Physiological Signal Processing” course. Research Scientist and co-author Vidya Raju mentored the students under the supervision of Faghih. James W. Murrough, Professor of Psychiatry and Neuroscience and Director of the Depression and Anxiety Center for Discovery and Treatment at the Icahn School of Medicine at Mount Sinai, also collaborated on this research.
“Taking Prof. Faghih’s class was a great experience and allowed me to combine coursework and research,” said Oliver. “It was very exciting to see the work I did in class could help improve treatment of mental health conditions in the future.”
The researchers analyzed a published dataset of participants’ continuously recorded skin conductance measured while they were exposed to visual, auditory, and haptic stimuli. Participants also provided self-ratings of arousal using the Self-Assessment Manikin, a pictorial scale that quantifies emotional states. By applying a physiologically informed computational model, the team separated the slow and fast components of the skin’s electrical response and inferred when the autonomic nervous system was most active. Bayesian filtering and a marked point process algorithm were then used to estimate a continuous measure of cognitive arousal over time.
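The physiological modeling in the paper is considerably more sophisticated, but its first step, separating the skin-conductance signal into a slow (tonic) drift and fast (phasic) responses, can be sketched with a simple low-pass/high-pass split. The cutoff frequency, sampling rate, and synthetic signal below are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 4.0  # Hz, a typical skin-conductance sampling rate (assumed)
t = np.arange(0, 120, 1 / fs)

# Illustrative signal: a slow drift plus brief responses after stimuli at 30 s and 75 s.
rng = np.random.default_rng(5)
tonic_true = 2.0 + 0.003 * t
phasic_true = sum(0.4 * np.exp(-np.maximum(t - s, 0) / 5.0) * (t >= s) for s in (30, 75))
signal = tonic_true + phasic_true + rng.normal(0, 0.01, t.size)

# Low-pass filtering at 0.05 Hz approximates the tonic level; the remainder is phasic.
b, a = butter(2, 0.05 / (fs / 2), btype="low")
tonic = filtfilt(b, a, signal)
phasic = signal - tonic

# Crude event detection: moments when phasic activity crosses a threshold.
events = t[np.flatnonzero((phasic[1:] > 0.1) & (phasic[:-1] <= 0.1))]
print("phasic responses detected near t =", np.round(events, 1), "seconds")
```

The study's Bayesian filtering and marked point process step then turns the timing and size of such phasic events into a continuous, probabilistic estimate of arousal rather than a simple threshold crossing.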
The analysis revealed a striking pattern: the nervous system responded most strongly within two seconds of a new stimulus, with haptic sensations eliciting the largest immediate activations. Yet when the researchers compared these physiological signals to participants’ own self-assessments, auditory stimuli — particularly sounds and music — were most often linked to high arousal states. This suggests that the brain’s perception of stimulation and the body’s raw autonomic responses, while related, may not always align perfectly. However, when the physiological signals were further processed into estimates of user arousal, the modeled arousal agreed with the participants’ assessments that auditory stimuli caused the highest arousal.
Interestingly, the model was able to track transitions in participants’ arousal levels as they moved from low- to high-intensity stimuli with an accuracy exceeding random chance. When participants who felt more stimulated by visual cues were analyzed separately from those more responsive to touch, the model’s predictions reflected the significant differences between these groups’ self-reported responses, effectively capturing group-level trends.
The implications of this work extend beyond the laboratory. In clinical contexts, self-reported measures remain the gold standard for assessing mental states such as anxiety or stress, yet they are inherently subjective and often unreliable. Objective metrics derived from skin conductance could complement these reports, offering clinicians a more nuanced view of a patient’s emotional dynamics in real time. Such tools might one day aid in monitoring recovery from depression, anxiety, or post-traumatic stress disorder, where changes in physiological arousal often mirror symptom fluctuations.
The study also points to potential uses in virtual reality and human-computer interaction. By quantifying how users react to visual, auditory, or tactile elements, systems could adapt dynamically — heightening immersion, enhancing focus, or reducing stress depending on the goal. This closed-loop feedback between body and machine could make digital environments more responsive to human emotion.
Still, the authors acknowledge the complexity of translating sweat and associated signals into precise emotional understanding. Factors such as stimulus duration, individual variability, and prior experience complicate the interpretation. The correlation between computed arousal and self-reported ratings was modest overall, reflecting the intricate and personal nature of emotional experience. Yet the model’s consistency in identifying moments of heightened engagement underscores its promise as a complementary measure of internal states.
In essence, the study bridges a subtle gap between physiology and perception. By grounding emotion in the body’s own electrical rhythms, it invites a more continuous, data-driven view of how humans experience the world — one that may eventually inform both mental health care and the design of emotionally intelligent technologies.
China Commands 47% of Remote Sensing Research, While U.S. Produces Just 9%, NYU Tandon Study Reveals
The United States is falling far behind China in remote sensing research, according to a comprehensive new study that tracked seven decades of academic publishing and reveals a notable reversal in global technological standing.
China now accounts for nearly half of all peer-reviewed journal publications in this critical field, while American output has declined to single digits.
"This represents one of the most significant shifts in global technological leadership in recent history," said Debra Laefer, the lead author of the study. Laefer is a NYU Tandon Civil and Urban Engineering professor, and a faculty member of Tandon’s Center for Urban Science + Progress.
Published in the journal Geomatics, the research analyzed over 126,000 papers published between 1961 and 2023 to document how China has surged from virtually no presence from the 1960s through the 1990s to 47% of remote sensing publications by 2023, while the United States has dropped from producing 88% of research in the 1960s to only 9% today.
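The headline percentages come down to straightforward share-of-publications arithmetic. A minimal, hypothetical pandas sketch of that computation (the authors worked from a bibliographic database of more than 126,000 records) might look like this; the toy rows below are invented.

```python
import pandas as pd

# Hypothetical bibliographic records: one row per paper, with year and lead-author country.
papers = pd.DataFrame({
    "year":    [1965, 1965, 1998, 2023, 2023, 2023, 2023],
    "country": ["US", "US", "US", "China", "China", "US", "Germany"],
})

# Share of remote sensing publications by country within each year.
counts = papers.groupby(["year", "country"]).size().rename("papers")
totals = counts.groupby(level="year").transform("sum")
shares = (100 * counts / totals).rename("percent")
print(shares)
```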
Remote sensing — the science of gathering information from a distance using technologies like laser scanning, imagery, and hyperspectral imagery from the ground, the air, and even space — underpins critical applications from autonomous vehicles to climate monitoring and national security.
The global market was valued at $452 billion in 2022 and is projected to reach $1.44 trillion by 2030, making leadership in this field essential for economic competitiveness. Laefer emphasized that understanding who drives technical expertise and funding in this area is "of national and international importance, as they are inextricably linked with intellectual property generation, which is also shown in our data."
The research reveals that remote sensing scholarship has experienced exponential growth, expanding from roughly a dozen papers annually in the 1960s to more than 13,000 per year by 2023, a thousand-fold increase that far outpaces general scientific publishing trends.
Laefer and co-author Jingru Hua — at the time a master’s student in the NYU Center for Data Science — attribute this surge to decreased equipment costs, greater global participation, digital-only publishing, and most significantly, the adoption of artificial intelligence techniques like machine learning and deep learning.
Perhaps most notable for American competitiveness, the research demonstrates a near-perfect correlation between national funding and publication output. China's National Natural Science Foundation now appears in funding acknowledgments for over 53% of remote sensing papers published between 2021 and 2023, while U.S. agencies are credited in only 5%.
The study identified six Chinese funding entities among the top ten global funders in recent years, compared to only two American organizations, NASA and the National Science Foundation (NSF). NASA, once the dominant funder at 50% of publications through the 1990s, has been vastly outpaced by Chinese funding organizations. Notably, NSF does not have dedicated divisions specifically for geomatics (the science of gathering and analyzing geographic data) or geodesy (the science of measuring Earth's shape and positions on it).
China's research dominance extends to intellectual property generation as well. According to patent data included in the study, China now accounts for the majority of remote sensing patents filed globally. In just the three years from 2021 to 2023, over 43,000 patents containing "remote sensing" were filed worldwide, with China responsible for the clear majority, a dramatic reversal from the late 20th century when the United States held near-total dominance.
The researchers' analysis of publication titles reveals evolving technological priorities. Early decades focused heavily on satellite imagery, but recent years show explosive growth in artificial intelligence techniques, with terms like "deep learning" and "machine learning" now dominating publication titles. The number of papers mentioning these techniques has grown exponentially, reaching over 80,000 publications by 2023.
The findings have implications for technological competitiveness. Remote sensing capabilities underpin emerging technologies including augmented reality, autonomous navigation, and digital twins, all important areas for economic and commercial applications. With China's continued investment and the field's commercial value expected to triple by 2030, the study provides a baseline for understanding shifts in this important technological domain.
Laefer, D.; Hua, J. Remote Sensing Publications 1961–2023—Analysis of National and Global Trends. Geomatics 2025, 5, 47. https://doi.org/10.3390/geomatics5030047