Research News
NYU Tandon Supports MTA in Combating Climate Threats
As transit agencies face growing climate risks and limited capital budgets, deciding which flood protection measures to implement — and where — has become a critical challenge.
Now, a research team at NYU Tandon School of Engineering has built a computer modeling framework that allows agencies to rapidly test and prioritize hundreds of subway resilience strategies for coastal storm surge flooding before committing to major infrastructure investments.
Developed in collaboration with researchers at Columbia University and Princeton University, the model enables the New York Metropolitan Transportation Authority (MTA) to simulate coastal storm surge flooding scenarios under different climate projections and evaluate which combinations of coastal barriers and station-level protections will provide the greatest return on investment.
The physics-based approach, published in Transportation Research Part D: Transport and Environment, calculates flooding extent and economic losses for each scenario in about one minute on a standard laptop. This speed makes comprehensive resilience planning practical for the MTA, the state agency that oversees NYC's public transportation system.
The research team validated its simulation by accurately reproducing Superstorm Sandy's 2012 flooding patterns. That storm inundated 150 subway stations across New York City, causing $5 billion in repair costs to stations, tunnels, and electrical systems, plus additional economic losses from extended service disruptions.
Since that event, the MTA has invested $7.6 billion in repairs and nearly 4,000 coastal surge protections, including elevating critical infrastructure, securing entrances at underground subway stations, and installing marine doors at the Hugh L. Carey and Queens Midtown tunnels.
“Protecting our infrastructure and the New Yorkers that rely on it from the impacts of climate change is one of the MTA's top priorities,” said Eric Wilson, Senior Vice President of Climate & Land Use at the MTA. “Innovative tools like this give us a powerful, data-driven way to evaluate resilience investments before we build them, helping ensure every dollar we spend strengthens the system and safeguards service for millions of daily riders.”
“As extreme storms become more frequent and sea level rises, transit agencies need reliable tools to determine how protective measures will actually perform in these changing circumstances before committing billions in infrastructure investments,” said Yuki Miura, the study's lead author and assistant professor at NYU Tandon, where she is a faculty member in the newly established NYU Urban Institute. “Our model lets agencies rapidly compare hundreds of strategies under different future conditions. That makes it possible to identify solutions that are not only cost-effective, but also robust to uncertainty.”
Working with MTA and NYC government officials, the research team leveraged the model's speed to rapidly test numerous flooding scenarios for Lower Manhattan's subway system (below 34th Street). The study presents 13 representative stress tests through the end of the century, each combining Superstorm Sandy-level storm surges with projected sea level rise and various protective strategies.
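A stress-test screen of this kind can be sketched as a simple enumeration: pair each sea level rise projection with each protective strategy, estimate a loss, and rank strategies by avoided loss per dollar. Everything below (the strategy costs, the toy loss model, the sea level rise values) is an illustrative placeholder, not data from the study.

```python
# Illustrative stress-test enumeration -- placeholder values, not the study's data.
sea_level_rise_m = [0.0, 0.5, 1.0]        # assumed SLR scenarios through 2060
strategies = {                             # name: (cost in $B, flood reduction factor)
    "no action": (0.0, 0.00),
    "seal 1,500 critical openings": (1.5, 0.60),
    "seal all 3,500 openings": (3.0, 0.75),
    "seawall + 1,500 openings": (3.1, 0.95),
}

def flood_loss(slr_m, reduction):
    """Toy loss model: a Sandy-level baseline loss grows with sea level rise,
    then is scaled down by the strategy's flood reduction factor."""
    baseline = 5.5 * (1.0 + 0.8 * slr_m)   # $B; placeholder growth with SLR
    return baseline * (1.0 - reduction)

# Rank strategies by avoided loss per dollar, summed over the SLR scenarios.
results = {}
for name, (cost, reduction) in strategies.items():
    avoided = sum(flood_loss(s, 0.0) - flood_loss(s, reduction)
                  for s in sea_level_rise_m)
    results[name] = avoided / cost if cost else 0.0

for name, ratio in sorted(results.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {ratio:.2f} $B avoided per $B spent")
```

Because each scenario evaluates quickly, a loop like this scales to hundreds of strategy combinations, which is what a one-minute-per-scenario model makes practical.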
The modeling shows that layered strategies — combining coastal barriers with targeted protection at key subway openings — can substantially reduce flood risk in a cost-effective and system-wide manner. Raising Lower Manhattan's entire coastline by two meters (about 6.5 feet) could prevent subway flooding even with mid-century sea level rise.
A hybrid approach — completing the East Side Coastal Resiliency seawall paired with sealing the 1,500 most critical of the subway's 3,500 openings (entrances, vents, stairways, and other entry points) — would cost about the same as sealing all 3,500 openings, but could also protect neighborhood streets, buildings, and infrastructure from coastal flooding, not only the subway itself. The MTA regards its nearly 4,000 coastal surge protections as a critical first line of defense, with the East Side Coastal Resiliency seawall serving as secondary protection.
The analysis reveals a counterintuitive finding: flood risk does not scale linearly. Instead, localized vulnerabilities at critical junctions can trigger cascading failures throughout the system, meaning strategic investments at a handful of key locations can be far more effective than broadly distributed protections.
In addition to calculating flood depths both above and below ground, the study quantifies economic impacts from subway inoperability. The researchers estimate a Superstorm Sandy-level storm today would cause $5.5 billion in economic losses to Manhattan from transit disruptions alone — separate from repair costs — given that 40-60% of New Yorkers depend on public transportation for daily commutes. This estimate does not account for the coastal surge protections the MTA has since implemented, which would be deployed during such a storm and would reduce the losses from transit disruptions.
The research team developed the model in coordination with the New York City Transit Authority, a division of the MTA, which provided detailed system specifications, including tunnel dimensions, station volumes, and opening locations, while maintaining security considerations.
"We're grateful for the productive collaboration with the MTA," said Miura, who has faculty appointments in both Tandon's Center for Urban Science + Progress and its Mechanical and Aerospace Engineering Department. "Their engagement has been essential in developing a tool that supports evidence-based decision-making for infrastructure investments."
Miura points out that ongoing work is exploring how this framework can be integrated into long-term capital planning and adapted for other infrastructure systems facing climate risk. While this study focused on New York City, the methodology can be adapted to other coastal cities with underground transit infrastructure.
The research was supported by the National Science Foundation. The study's senior author is George Deodatis of Columbia University. Co-authors are Christine Y. Blackshaw of Princeton University, Michelle S. Zhang of Columbia University, and Kyle T. Mandli of the Flatiron Institute.
Yuki Miura, Christine Y. Blackshaw, Michelle S. Zhang, Kyle T. Mandli, George Deodatis, "Coastal storm-induced flooding risk of the New York City subway amid climate change," Transportation Research Part D: Transport and Environment, Volume 149, 2025, 104974, ISSN 1361-9209. https://doi.org/10.1016/j.trd.2025.104974
Bacteria Have a Secret Engineering Trick to Keep Themselves in Shape
Blow up a long balloon and two things happen: it gets longer and it gets wider.
Now imagine a living cell that inflates itself under enormous pressure and yet only grows longer, never adding width. That is exactly what rod-shaped bacteria do, every time they divide, with a precision that has baffled scientists for decades.
A new study published in Current Biology has finally found the answer. Researchers suggest their discovery could point toward new treatments for antibiotic-resistant bacteria.
Rod-shaped bacteria like Bacillus subtilis — a harmless soil microbe and one of biology's most studied model organisms — are encased in a rigid shell called the cell wall, made of a polymer called peptidoglycan, and pressurized from within at many times the pressure of a car tire.
To grow, bacteria must continuously remodel this wall: snipping out old material with enzymes and weaving in new polymer. This should cause the cell to bulge outward as well as elongate. Yet rod-shaped bacteria hold their width to within 40 nanometers — roughly 1,750 times thinner than a human hair.
"Most antibiotics that target the bacterial cell wall disrupt its structure and architecture," said Paola Bardetti, the study's lead author and an industry assistant professor of chemical and biomolecular engineering at NYU Tandon School of Engineering. "Our work reveals an entirely different vulnerability: the physical mechanism bacteria rely on to maintain their shape. No drug has ever targeted that. Until now, we didn't understand it well enough to try."
The NYU team, led by Bardetti and the paper’s senior author Enrique Rojas, an associate professor of biology at NYU, subjected living bacteria to rapid osmotic shocks — briefly raising or lowering internal pressure — while tracking wall deformations as small as a few nanometers.
What they found was a sharp mechanical threshold. Below a critical pressure, the wall behaves like a finger-trap toy: reducing pressure makes it expand sideways. Above that threshold, the wall softens and the cell widens. At the transition, width stays constant — precisely where growing bacteria sit.
“The cell wall is a smart material,” said Bardetti. “It responds to mechanical stress in a way that is tuned to keep the cell the right shape. Every time we probed it, it surprised us.”
This tipping-point strategy also confers automatic self-correction. When cells were manipulated to grow wider than normal, the wall slipped into the finger-trap regime, thinning it back toward the target width. The critical pressure of the transition also shifted in response to changes in wall architecture — a second feedback loop — making this a homeostatic system encoded in the physical properties of the material itself.
In scientific terms: the wall is anisotropic, far stiffer circumferentially than longitudinally, with a Poisson ratio of 0.45–0.5 and anisotropy at the physical maximum. The stress-softening non-linearity — an abrupt drop in circumferential stiffness at the critical pressure — parks the cell at the boundary between widening and thinning.
The same phenomenon appeared in Arabidopsis thaliana plant roots, suggesting a shape-control strategy evolution has arrived at independently.
"Finding the same strategy in bacteria and plant roots was genuinely exciting," said Bardetti. "It suggests a fundamental principle of tubular morphogenesis that nature has independently discovered more than once. The next step is identifying the molecular machinery that sets the critical pressure, because once you know that, you have a potential drug target.”
In addition to Bardetti and Rojas, the paper's authors are Felix Barber (currently an assistant professor at Ohio State University) and Dylan Fitzmaurice, who were a postdoctoral researcher and a PhD candidate, respectively, in the Rojas Lab at the time of the study. Research funding came from the National Institutes of Health and the National Science Foundation. Microscopy support was provided by the NYU Langone Health Microscopy Lab, partially funded by the National Cancer Institute.
Bardetti, Paola, et al. "Non-linear stress-softening of peptidoglycan mediates bacterial cell shape homeostasis." Current Biology, Volume 36, Issue 5, pp. 1156–1165.e5.
New Method of Data Center Cooling Could Dramatically Decrease Electricity Use
Data centers — the warehouse-sized buildings that store our photos, stream our movies and train artificial intelligence — are voracious consumers of electricity. A surprisingly large share of that power never reaches a microchip. Instead, it is spent on cooling, hauling away the heat generated by millions of tightly packed servers.
As data centers proliferate thanks to the AI boom, their electricity needs are colliding with a grid already under strain. One solution is to rethink the basics of cooling. In a new study, researchers at NYU Tandon explore an alternative: using waste heat from nearby factories to cool data centers by storing that heat in a material that can later deliver cooling on demand.
“While the electricity needs for data centers are still a small slice of the total U.S. electricity market, it is rapidly growing,” says Dharik Mallapragada, Assistant Professor of Chemical and Biomolecular Engineering and lead author of the paper. “This is an opportunity to ‘bend the curve’ and aim for a much more sustainable future, in a way that is beneficial to everyone involved.”
Thermal batteries
At the heart of the concept are minerals called zeolites. Zeolites are crystalline materials riddled with microscopic pores, giving them a remarkable ability to soak up water vapor. When a dry zeolite encounters water vapor, it adsorbs the vapor and releases heat. When the zeolite is heated to sufficiently high temperatures, it releases the water again, resetting the cycle.
Importantly, zeolites are inexpensive materials that are already in use for a wide range of applications, including water treatment and oil refining. “Zeolite and its interaction with water can be used for storing thermal energy,” says Assistant Professor Pavel Kots, a co-author on the study and an expert in zeolite synthesis and characterization. At an industrial facility — such as a chemical plant or refinery — low- to medium-temperature waste heat (below about 200 degrees Celsius) is used to “charge” the thermal battery by drying the zeolite. The water driven off is condensed and recovered. The charged zeolite is then transported, by truck or rail, to a data center.
Once on site, the process runs in reverse. Warm air or other coolants (e.g., water) from the server room help evaporate water, producing a cooling effect. The water vapor is adsorbed by the dried zeolite, which effectively acts as a heat sink. Crucially, this adsorption process can replace the electricity-hungry compression chillers that dominate today’s data center cooling systems.
Unlike typical heat storage methods, zeolite-based storage does not slowly lose its energy over time. The thermal energy remains locked in the material until the water is reintroduced. That makes it suitable not only for long-duration storage but also for transport over tens of kilometers.
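A back-of-envelope energy balance shows how the charge/discharge cycle works. The water uptake and adsorption enthalpy below are assumed round numbers chosen for illustration, not figures from the paper.

```python
# Back-of-envelope zeolite thermal battery sketch.
# All material constants are illustrative assumptions, not values from the study.

ZEOLITE_MASS_KG = 1000.0            # one tonne of zeolite per shipment
WATER_UPTAKE = 0.25                 # kg water adsorbed per kg dry zeolite (assumed)
LATENT_HEAT_KJ_PER_KG = 2450.0      # evaporating water near room temperature
ADSORPTION_HEAT_KJ_PER_KG = 3400.0  # heat per kg water adsorbed (assumed)

water_kg = ZEOLITE_MASS_KG * WATER_UPTAKE

# Discharge at the data center: evaporating water absorbs heat from the servers,
# and the dry zeolite re-adsorbs the vapor.
cooling_kj = water_kg * LATENT_HEAT_KJ_PER_KG

# Charge at the industrial site: waste heat (below ~200 C) drives the water out.
charge_kj = water_kg * ADSORPTION_HEAT_KJ_PER_KG

print(f"cooling delivered per tonne: {cooling_kj / 3600:.0f} kWh")
print(f"waste heat needed to recharge: {charge_kj / 3600:.0f} kWh")
```

The key point the numbers illustrate: the recharge energy comes from heat that would otherwise be rejected, so the cooling delivered displaces electricity rather than consuming it.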
Big energy savings, modest trade-offs
Using detailed thermodynamic modeling, the NYU team, which included Kots, Mallapragada, and postdoctoral researcher Gilvan Farias Neto, compared their proposed system with a conventional setup: a data center cooled by a compression chiller and an industrial facility rejecting waste heat through cooling towers.
The results are striking. Across a range of operating conditions, the team estimated that the proposed approach can cut the combined cooling electricity used by the data center and the industrial facility by more than 75 percent. For the data center alone, electricity consumption for cooling can be reduced by as much as 86 percent. In energy efficiency terms, this translates into a 12 percent improvement in power usage effectiveness, a key metric in the data center industry.
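Power usage effectiveness (PUE) is total facility power divided by IT power, so cutting cooling electricity moves PUE toward its ideal value of 1.0. The load mix below is assumed purely for illustration; the exact PUE gain depends on how large the cooling load is relative to the IT load, which is why it need not match the study's 12 percent figure.

```python
def pue(it_kw, cooling_kw, other_kw):
    """Power usage effectiveness: total facility power over IT power."""
    return (it_kw + cooling_kw + other_kw) / it_kw

# Illustrative loads for a small facility (assumed, not the study's numbers).
it, cooling, other = 1000.0, 280.0, 70.0

before = pue(it, cooling, other)
after = pue(it, cooling * (1 - 0.86), other)   # 86% less cooling electricity

improvement = (before - after) / before
print(f"PUE {before:.3f} -> {after:.3f} ({improvement:.1%} improvement)")
```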
Water use tells a more nuanced story. The combined system consumes somewhat more water overall — roughly 15 to 25 percent more — because evaporation is central to the cooling process. But this increase masks an important detail: the industrial facility itself sees a dramatic reduction in water use, since much of its waste heat is diverted into charging the thermal batteries rather than being dumped through cooling towers. Water released during zeolite charging can also be reused on site, partially closing the loop. The analysis did not consider the changes in indirect water use, i.e. associated with electricity generation, for the facility, which could partially or fully offset increases in direct water use, depending on the make-up of the electricity supply.
For this setup to work, data centers need to be fairly close to industrial facilities. To assess the scalability of their approach, the researchers conducted a geospatial analysis of U.S. facilities. The median distance between data centers and the 10 nearest industrial sites turned out to be just 57 kilometers.
Even after accounting for the energy needed to haul tons of zeolite back and forth — assuming modern electric trucks — the system still delivers net electricity savings in many scenarios, sometimes exceeding 40 percent. Rail transport could reduce the energy penalty further.
The proposed system is still at the modeling stage, and many engineering challenges remain. Zeolite beds must be designed for durability, rapid heat transfer and repeated cycling. Coordinating operations between data centers and industrial partners will require new business models. The research team has begun speaking with several industry leaders about the possibility of scaling this solution up.
Still, the idea highlights an under-appreciated truth: in an energy-hungry digital economy, waste heat can be monetized as a valuable resource. By reimagining cooling as a problem of thermal logistics rather than electrical demand, zeolite-based thermal batteries could help data centers grow without overheating the grid.
Farias Neto, G., Kots, P., Mallapragada, D. (2026). Zeolite-Based Thermal Energy Storage to Leverage Industrial Waste Heat for Data Center Cooling. ChemRxiv. https://doi.org/10.26434/chemrxiv-2026-28wv2
Tracking Wildlife Trafficking in the Age of Online Marketplaces
Wildlife trafficking is one of the world’s most widespread illegal trades, contributing to biodiversity loss, organized crime, and public health risks. Once concentrated in physical markets, much of this activity has moved online. Today, animals and animal products are advertised on large e-commerce platforms alongside ordinary consumer goods. This shift makes enforcement harder — but it also creates a valuable source of data.
Every online advertisement leaves behind digital information: text descriptions, prices, images, seller details, and timestamps. If collected and analyzed at scale, these traces can help researchers understand how wildlife trafficking operates online. The problem is volume. Online marketplaces contain millions of listings, and most searches for animal names return irrelevant results such as toys, artwork, or souvenirs. Distinguishing illegal wildlife ads from harmless products is difficult to do manually and challenging to automate.
Institute Professor of Computer Science Juliana Freire is part of a team taking on the problem head on by building a scalable system to meet this challenge. They developed a flexible data collection pipeline that automatically gathers wildlife-related advertisements from the web and filters them using modern machine learning techniques. The goal is not to focus on one species or one website, but to enable broad, systematic monitoring across many platforms, regions, and languages, as well as to develop strategies to disrupt illegal markets.
The project is a multidisciplinary effort that includes Gohar Petrossian, Professor of Criminal Justice at John Jay College of Criminal Justice; Jennifer Jacquet, Professor of Environmental Science and Policy at the University of Miami; and Sunandan Chakraborty, Professor of Data Science at Indiana University.
The pipeline begins with web crawling. The researchers generate tens of thousands of search URLs by combining endangered species names with the search structures of major e-commerce websites. A specialized crawler then follows these links, downloading product pages while limiting requests to avoid overwhelming servers. Over just 34 days, the system retrieved more than 11 million ads.
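Seed-URL generation of this kind amounts to a cross product of species terms, site search templates, and result pages. The template URLs below are hypothetical placeholders, not the marketplaces the team actually crawled.

```python
from itertools import product

# Hypothetical search-URL templates for illustration only -- a real crawler
# would use each marketplace's actual query structure.
SEARCH_TEMPLATES = [
    "https://marketplace-a.example/search?q={query}&page={page}",
    "https://marketplace-b.example/s/{query}/p{page}",
]
SPECIES_TERMS = ["crocodile bag", "python belt", "shark jaw"]

def seed_urls(templates, terms, pages=3):
    """Cross every species term with every site template and page number."""
    urls = []
    for template, term, page in product(templates, terms, range(1, pages + 1)):
        urls.append(template.format(query=term.replace(" ", "+"), page=page))
    return urls

urls = seed_urls(SEARCH_TEMPLATES, SPECIES_TERMS)
print(len(urls))  # 2 templates x 3 terms x 3 pages = 18 seed URLs
```

Scaled to thousands of endangered-species names and dozens of sites, the same cross product yields the tens of thousands of search URLs the crawler starts from.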
Next comes information extraction. Product pages are messy and inconsistent, varying widely across websites. The pipeline uses a combination of HTML parsing tools and automated scrapers to extract useful details such as titles, descriptions, prices, images, and seller information. These data are stored in structured formats that allow large-scale analysis.
The most critical step is filtering. While machine learning classifiers can be used for this task, training specialized classifiers for multiple collection tasks is both time-consuming and expensive, requiring experts to create training data for each one. Freire's group developed a new approach that leverages large language models (LLMs) to label data and then uses those labels to automatically create specialized classifiers, which can perform data triage at low cost and at scale.
The result is essentially a "model factory": a pipeline that can automatically produce customized, low-cost classifiers on demand for different triage tasks — different species, different product types, different platforms — without requiring experts to label data from scratch each time.
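A minimal sketch of the model-factory idea, under stated assumptions: an LLM labels a small sample of ad titles (stubbed here with a keyword heuristic so the example runs end to end), and a cheap classifier trained on those labels then triages everything else. The titles, the stub, and the tiny Naive Bayes below are all illustrative stand-ins for the team's actual pipeline.

```python
from collections import Counter, defaultdict
import math

def llm_label(title):
    """Stand-in for an LLM labeling call. In the real pipeline a large language
    model judges whether an ad is a genuine animal product; this stub uses a
    keyword heuristic purely so the sketch is self-contained."""
    keywords = ("skin", "leather", "taxidermy")
    return "wildlife" if any(w in title.lower() for w in keywords) else "irrelevant"

def train_nb(texts, labels):
    """Fit a tiny multinomial Naive Bayes on the LLM-labeled titles."""
    word_counts = defaultdict(Counter)
    class_counts = Counter(labels)
    for text, label in zip(texts, labels):
        word_counts[label].update(text.lower().split())
    vocab = {w for counts in word_counts.values() for w in counts}

    def predict(text):
        scores = {}
        for label in class_counts:
            score = math.log(class_counts[label] / len(labels))
            total = sum(word_counts[label].values())
            for w in text.lower().split():
                # Laplace smoothing so unseen words don't zero out a class.
                score += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
            scores[label] = score
        return max(scores, key=scores.get)
    return predict

# "Model factory": the LLM labels a small sample, then the cheap classifier
# triages the remaining listings at scale.
sample = ["genuine crocodile leather handbag", "crocodile plush toy for kids",
          "python skin wallet", "python programming book",
          "alligator taxidermy head", "alligator cartoon sticker"]
classifier = train_nb(sample, [llm_label(t) for t in sample])
print(classifier("vintage alligator leather belt"))
```

The design point: the expensive model is called once per training sample, while the cheap classifier handles the millions of remaining ads, which is what keeps triage affordable.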
This research has enabled large-scale data collection to answer different scientific questions and shed light on different aspects of wildlife trafficking. One analysis of 14,000 reptile leather product listings on eBay showed that crocodile, alligator, and python skins dominated the market. Only about 10 animal-product combinations (such as ‘crocodile bags’, ‘alligator bags’, and ‘alligator watches’) made up about 72 percent of all listings, indicating that the trade heavily focuses on a few luxury items. The analysis of all of the listings also showed that while small leather products were shipped from 65 countries, 93 percent came from 10 countries, with the United States, United Kingdom, and Australia collectively accounting for over three-quarters of this market.
Similar data from eBay on shark and ray trophies reveals that, although the platform has introduced policies to restrict threatened or endangered species, their derivatives are still circulated widely on the platform. Tiger shark trophies accounted for one-fifth of such listings, with asking prices up to $3,000. Over 85 percent of listings were linked to sellers in the United States, suggesting a pipeline from deep-sea commercial fishing vessels to the US trophy trade.
This research is also being used to determine what would be the most effective way to disrupt this market. For example, the researchers found that targeting key sellers is effective, but targeting key product types — “alligator watch,” for example — breaks the market of reptile leather products equally effectively, and is much easier to enact at a broad scale.
The authors emphasize that this system is a starting point, not a finished solution. The pipeline is designed to be extensible, allowing future researchers to incorporate better classifiers, image-based analysis, or new data sources. By making the code openly available, they aim to support broader collaboration.
As wildlife trade continues to move online, understanding its digital footprint will be increasingly important. Scalable data collection tools like this one offer a way to transform scattered online listings into actionable knowledge, an essential step toward disrupting illegal wildlife trade in the digital era.
Juliana Silva Barbosa, Ulhas Gondhali, Gohar Petrossian, Kinshuk Sharma, Sunandan Chakraborty, Jennifer Jacquet, and Juliana Freire. 2025. A Cost-Effective LLM-based Approach to Identify Wildlife Trafficking in Online Marketplaces. Proc. ACM Manag. Data 3, 3, Article 119 (June 2025), 23 pages. https://doi.org/10.1145/3725256
Huntington’s Disease is Neuroscience’s Clearest Test Case
Neuroscience rarely enjoys clean experiments. Most brain disorders are mosaics of risk genes, aging, lifestyle and chance that leave their origins obscured. Huntington’s disease (HD) is different. It begins with a single genetic expansion — a repeated stretch of DNA letters in the HTT gene — that is both measurable and decisive. If you inherit a sufficiently long repeat, you will develop the disease. That stark clarity makes HD scientifically invaluable.
That is the argument put forward by Roy Maimon in a new essay in Trends in Molecular Medicine. Maimon, Assistant Professor of Biomedical Engineering, is an expert in endogenous neural stem cells and their altered regenerative potential in brain diseases like HD. In the piece, he argues that HD’s clear cause and effects offer the ideal starting point to uncover parts of the brain, and the diseases that affect it, that are far less understood.
But it is not only the unique biological properties of HD that make it such a potentially productive topic — it’s the people too. “The HD world is unusually united,” Maimon writes. “It is among the few fields in biomedicine where patients, scientists, and clinicians share the same space. We meet families regularly. We see their courage and humor. We celebrate together and we grieve together.”
“It is this culture that makes HD research not just productive but deeply personal, too. When you join this community, you become part of something that truly matters.”
The HTT mutation acts like a molecular clock. The longer the repeat, the earlier symptoms tend to appear. Decades before the first involuntary movement or subtle cognitive shift, a blood test can reveal who carries the expansion. Few neurological diseases offer such foresight. That predictability allows researchers to ask a question that is nearly impossible elsewhere: What happens if we intervene before neurons begin to die?
The brain changes in HD follow a surprisingly consistent pattern. Early damage centers on a deep brain structure called the striatum, which helps control movement and decision-making. Over time, connected regions of the cortex also become involved. Even within the striatum, certain neurons are especially vulnerable, while their neighbors remain relatively resilient. Why some cells are fragile and others hardy is a major puzzle — and HD offers a controlled setting to investigate it.
Because the genetic cause is so precise, HD has become a testing ground for cutting-edge therapies. Drugs called antisense oligonucleotides are designed to lower production of the harmful huntingtin protein. Gene therapies aim to deliver long-lasting genetic instructions that reduce or modify the faulty gene’s output. Other approaches target the DNA repair machinery thought to drive repeat expansion. Not every clinical trial has succeeded, but each has sharpened our understanding of how to measure brain changes, track biomarkers in blood and spinal fluid, and choose the best moment to intervene.
Researchers are also exploring ways to repair or replace damaged cells. The striatum lies near a region of the brain that can generate new neurons, at least in small numbers. In animal studies, boosting this process — or transplanting healthy cells — has shown promise in rebuilding parts of the damaged circuit. While such strategies remain experimental, HD provides a uniquely measurable proving ground: a known mutation, defined target cells and a predictable timeline.
“Every dollar invested in HD yields methods, models, and biomarkers that accelerate discoveries throughout the entire field,” Maimon writes. “For students, it's the quickest path to learning real translational neuroscience. For investors and funders, it’s the most efficient place to deploy resources: high-quality trials, engaged patients, measurable outcomes, and clear readthrough to other diseases. For scientists, it is a bridge between basic biology and the clinic.”
“New York University (NYU) is uniquely positioned to serve as a national hub for Huntington's disease research,” Maimon adds. “The university bridges engineering, neuroscience, clinical medicine, and data science within a single ecosystem, allowing discoveries to move rapidly from molecular insight to patient-facing trials. With strong programs in gene therapy, RNA therapeutics, biomarker development, and computational modeling, NYU offers the interdisciplinary infrastructure required to tackle a disease that demands precision at every level. Equally important, its proximity to major clinical centers and patient communities enables sustained engagement with families living with HD, ensuring that translational efforts remain both scientifically rigorous and deeply human.”
Maimon, R. (2026). Huntington’s disease is the best investment in Neuroscience Today. Trends in Molecular Medicine. https://doi.org/10.1016/j.molmed.2026.01.001
Wildfire Prevention Models Miss Key Factor: How Forests Will Change Over Decades
Eucalyptus trees, laden with flammable oils, could spread into Portugal's south-central region by 2060 if changing climate conditions make the area more hospitable to their growth, creating wildfire hotspots that would evade detection by conventional prevention approaches.
The gap exists because most wildfire models account for climate change but treat forests as static, missing how vegetation itself will evolve and alter fire risk.
A new study from NYU Tandon School of Engineering fills this gap by modeling both climate and vegetation changes together. Published in the International Journal of Wildland Fire, the research projects how forests will evolve through 2060 and reveals that ignoring vegetation dynamics produces fundamentally incomplete fire risk projections.
"If you only consider the impact of climate but ignore vegetation, you're going to miss wildfire patterns that will happen," said Augustin Guibaud, the NYU Tandon assistant professor who led the research team. "Vegetation works on a timescale that's different from climate or weather."
Testing the model in Portugal revealed a striking paradox: local fire risk doesn't always track with global warming trends. Some higher-emission scenarios actually showed decreased fire risk in Portugal, with medium emissions projecting a 12% decrease when vegetation responses were included. In the low-emission scenario, projections without vegetation changes predicted a 59% increase in burned area by 2060, but including how forests would actually adapt reduced that to just 3%.
"The climate scenario which is more drastic from a temperature perspective may not be the one associated with highest risk at the local level," Guibaud explained. The counterintuitive results underscore that local climate conditions and vegetation responses can diverge significantly from global patterns.
The findings matter beyond Portugal. Wildfires are increasing in frequency, intensity and geographic scope across Mediterranean climates and western North America, with regions like California experiencing recurring large fires. Climate projections indicate these trends will continue, making long-term planning increasingly important. Guibaud anticipates working with federal agencies to apply the methodology in the United States, where the same dynamics of shifting vegetation and fire risk are playing out.
The team developed their approach using machine learning to analyze Portugal's wildfire patterns, correctly identifying 84% of historical wildfire locations in validation tests. They modeled how wildfires would change under three climate futures through 2060 — from low to high emissions — incorporating how seven dominant ecosystems, each characterized by its tree species, would shift in response to changing temperature and precipitation.
The model has immediate practical implications. Planting strategies aimed at reducing wildfire risk can backfire if they don't account for future climate. Species that won't survive future conditions waste resources, while fire-prone species that will thrive "lock in elevated risk for decades," Guibaud said. Because forest ecosystems take about a century to fully restore, those mistakes reverberate for generations.
The team's model integrates data from NASA satellite systems, Portugal's National Forest Inventory, and IPCC climate projections, using Maximum Entropy modeling to project species shifts and a Graph Convolutional Network to assess fire risk based on surrounding vegetation and terrain. The researchers developed a method to decouple climate and vegetation effects by running projections twice: once holding vegetation constant and once allowing it to evolve.
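The twice-run decoupling can be illustrated with a stand-in risk function: evaluate the 2060 climate once with vegetation frozen at today's mix and once with vegetation allowed to evolve, then compare the two projections. The risk function and every input below are made-up placeholders, not the study's trained model or its numbers.

```python
# Illustrative decoupling of climate and vegetation effects on fire risk.
# risk() is a made-up stand-in for the study's trained model.

def risk(climate_dryness, flammable_fraction):
    """Toy fire-risk score: rises with dryness and with the share of
    flammable vegetation (e.g., oily species like eucalyptus)."""
    return climate_dryness * (0.4 + 0.6 * flammable_fraction)

climate_2020, climate_2060 = 1.00, 1.30    # dryness index (assumed)
veg_2020, veg_2060 = 0.50, 0.35            # flammable fraction (assumed)

baseline = risk(climate_2020, veg_2020)
static_veg = risk(climate_2060, veg_2020)   # climate changes, forest frozen
dynamic_veg = risk(climate_2060, veg_2060)  # both allowed to evolve

print(f"climate-only change:  {static_veg / baseline - 1:+.0%}")
print(f"with vegetation too:  {dynamic_veg / baseline - 1:+.0%}")
```

The gap between the two runs is the vegetation effect, which is how a climate-only projection can badly overstate (or understate) the change in burned area.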
The team plans to refine the vegetation modeling to include shrubs and grasses, not just tree species. In addition to Guibaud, who sits in Tandon's Mechanical and Aerospace Engineering Department and its Center for Urban Science + Progress, the paper's authors are Feiyang Ren, now at the University of Leeds; Noah Tobinsky, who worked on the project as a master's student at NYU Tandon; and Trisung Dorji of University College London.
Ren F, Tobinsky N, Dorji T, Guibaud A. (2025) On the importance of both climate and vegetation evolution when predicting long-term wildfire susceptibility. International Journal of Wildland Fire 34, WF25092. https://doi.org/10.1071/WF25092
New Mathematical Model Shows How Economic Inequalities Affect Migration Patterns
For as long as there have been humans, there have been migrations — some driven by the promise of a better life, others by the desperate need to survive. But while the world has changed dramatically, the mathematical models used to explain how people move have often lagged behind reality. A new study in PNAS Nexus from a team led by Institute Professor Maurizio Porfiri argues that the patterns of human movement can’t be fully understood without reckoning with inequality.
For decades, researchers have relied on models that treat all cities and regions as if they were equal. The “radiation” and “gravity” models, the workhorses of mobility science, describe migration as a function of population size and distance: how many people live in one place, and how far they have to go to reach another. These equations have been useful for predicting broad commuting and migration trends, but they share a blind spot: they assume that opportunities and living conditions are evenly distributed. In a world where climate change, war, and widening economic divides are shaping the way people move, that assumption no longer makes sense.
Porfiri and his colleagues built a new model that explicitly incorporates inequality. It assigns each location a different “opportunity distribution,” a measure of how attractive it is based on social, economic, or environmental conditions. Cities or towns suffering from war, poverty, or environmental disasters are penalized in the model; their residents are more likely to leave, and outsiders are less likely to move in. The result is a mathematical system that behaves more like the real world.
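As a rough illustration of that idea, a classic gravity model can be extended with a per-destination opportunity weight. This sketch is not the paper's actual model; the functional form, the `beta` exponent, and all numbers are assumptions chosen only to show how an opportunity term penalizes struggling destinations.

```python
# Minimal gravity-style mobility sketch with a per-location "opportunity"
# weight. Illustrative only; not the model published in PNAS Nexus.

def gravity_flow(pop_origin, pop_dest, distance, opportunity_dest=1.0, beta=2.0):
    """Expected flow from origin to destination (arbitrary units).

    opportunity_dest < 1 penalizes destinations with poor conditions
    (conflict, flooding, unaffordable housing); > 1 rewards attractive ones.
    """
    return opportunity_dest * pop_origin * pop_dest / distance ** beta

# Two destinations of equal size and distance; only opportunity differs.
flow_stable = gravity_flow(100_000, 50_000, 10.0, opportunity_dest=1.0)
flow_flooded = gravity_flow(100_000, 50_000, 10.0, opportunity_dest=0.4)
# The flooded destination attracts proportionally fewer movers.
```

In a standard gravity model the two flows would be identical; the opportunity weight is what lets identical geographies diverge.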
The team tested their model in two settings: South Sudan and the United States—places that could hardly be more different, yet both marked by deep disparities. In South Sudan, years of civil conflict and catastrophic flooding have displaced millions. The researchers assembled a new dataset that tracked these internal movements across the country’s counties between 2020 and 2021. When they compared their inequality-aware model to the traditional one, the difference was stark. The new approach captured how people fled not just from areas of violence but also from those hit hardest by floods, revealing the powerful influence of environmental stress on migration. In fact, flooding alone explained more of the observed migration patterns than conflict did.
In the United States, the researchers turned their attention to a more familiar form of movement: the daily commute. Using data from the American Community Survey, they explored how factors like income inequality, poverty, and housing costs shape commuting flows between counties. Once again, inequality mattered. The model showed that places where rent consumed a larger share of income, or where poverty was more widespread, had distinctive commuting patterns — ones that standard models could not explain.
What the study suggests is that mobility is as much a story of inequality as it is of geography. People do not simply move because of distance or population pressure; they move because some places have become unlivable, unaffordable, or unsafe. “Mobility reflects human aspiration, but also human constraint,” said Porfiri, who serves as Director of the Center for Urban Science + Progress, Interim Chair of the Department of Civil and Urban Engineering, and Director of NYU’s Urban Institute. “Understanding both sides of that equation is crucial if we want to plan for the future.”
The implications are far-reaching. As climate change intensifies floods, droughts, and heat waves, and as economic gaps widen within and between nations, migration pressures are likely to grow. Models like this one could help policymakers anticipate where displaced people will go, and what stresses those movements might place on cities and infrastructure. They could also inform strategies to reduce inequality itself — by identifying which regions are most vulnerable to losing their populations, and which are absorbing more than they can sustain.
Alongside Porfiri, contributing authors include Alain Boldini of the New York Institute of Technology, Manuel Heitor of Instituto Superior Técnico, Lisbon, Salvatore Imperatore and Pietro De Lellis of the University of Naples, Rishita Das of the Indian Institute of Science, and Luis Ceferino of the University of California, Berkeley. This study was funded in part by the National Science Foundation.
Alain Boldini, Pietro De Lellis, Salvatore Imperatore, Rishita Das, Luis Ceferino, Manuel Heitor, Maurizio Porfiri, Predicting the role of inequalities on human mobility patterns, PNAS Nexus, Volume 5, Issue 1, January 2026, pgaf407, https://doi.org/10.1093/pnasnexus/pgaf407
How Policy, People, and Power Interact to Determine the Future of the Electric Grid
When energy researchers talk about the future of the grid, they often focus on individual pieces: solar panels, batteries, nuclear plants, or new transmission lines. But in a recent study, urban systems researcher Anton Rozhkov takes a different approach — treating the energy system itself as a complex, evolving organism shaped as much by policy and human behavior as by technology.
Rozhkov’s research, recently published in PLOS Complex Systems, models the electricity system of Northern Illinois, focusing on the service territory of Commonwealth Edison (ComEd). Rather than trying to predict exactly how much electricity the region will generate or consume decades from now, the model explores how the system’s overall behavior changes under different long-term scenarios.
“This work is not intended as a deterministic prediction,” Rozhkov, Industry Assistant Professor in the Center for Urban Science + Progress, explains. “What’s important for complex systems is the general trend, the system’s trajectory, whether something is increasing or decreasing under various scenarios, and how steep and fast that change is.”
The study uses a system dynamics framework, a method designed to capture feedback loops and interactions over time. Rozhkov modeled both electricity generation — reflecting Illinois’s current distinctive mix, including a significant share of nuclear power — and demand, then explored how the balance shifts across a 50-year horizon. Illinois is an especially interesting test case, he notes, because few U.S. states rely as heavily on nuclear energy, making the transition to low-carbon systems more nuanced than a simple fossil-fuel-to-renewables swap.
From this baseline, Rozhkov examined five broad scenarios. Some focused on technology, such as a future dominated by renewable energy, with or without the development of large-scale battery storage. Others reflected policy goals, including a scenario aligned with Illinois’s Climate and Equitable Jobs Act, which sets a target of economy-wide climate neutrality by 2040. Still others explored changes in how people live and consume energy, including denser urban development and widespread adoption of distributed energy resources by households and neighborhoods.
Across these scenarios, the model tracked economic costs and environmental outcomes, particularly greenhouse gas emissions. One striking pattern emerged when decentralized energy production — such as rooftop solar paired with the ability to sell excess power back to the grid — became widespread. In that case, demand for centralized electricity generation steadily declined, and policies were needed to support that shift: a clear example of an integrated approach. “We can clearly see that utilities would be needed to produce energy differently in the future, and the energy market should be ready for it,” Rozhkov says.
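The flavor of a system dynamics run, and of the decentralization feedback just described, can be conveyed with a toy stock-and-flow loop integrated step by step. This is not Rozhkov's model; the logistic adoption rule, the demand offset, and every parameter are invented purely for illustration.

```python
# Toy stock-and-flow sketch in the spirit of system dynamics: one reinforcing
# feedback loop links distributed-generation adoption to centralized demand.
# All parameters and functional forms are hypothetical.

def simulate(years=50):
    adoption = 0.01          # fraction of customers with distributed generation
    base_demand = 100.0      # centralized demand, arbitrary units
    trajectory = []
    for _ in range(years):
        # Reinforcing loop: adoption spreads logistically as it becomes common.
        adoption += 0.15 * adoption * (1.0 - adoption)
        # Each adopter offsets a fixed share of their load from the grid.
        trajectory.append(base_demand * (1.0 - 0.6 * adoption))
    return trajectory

demand_path = simulate()
# Demand declines steadily, then levels off as adoption saturates.
```

What matters in such a run is exactly what Rozhkov emphasizes: not the specific numbers, but the direction and steepness of the trajectory under each scenario.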
This is the central concept of the paper, which Rozhkov calls a “policy-driven transition.” Policy, in this context, is not a physical component of the grid but a force that shapes decisions. Incentives, tax credits, and regulations can push households and businesses toward clean energy even when natural conditions are less than ideal. He points to the northeastern states as an example: despite limited sunlight, strong solar policies have made rooftop installations attractive. “Policy can move someone from being on the fence about installing solar to actually doing it,” he says.
The research also highlights the growing role of decentralization, in which individual buildings or districts generate much of their own power. In Illinois’s deregulated electricity market, customers can sell excess energy back to the grid, blurring the line between consumer and producer. Beyond cost savings, decentralization can improve resilience during outages or extreme events, allowing communities to maintain power independently when the main grid fails.
Importantly, Rozhkov’s findings suggest that no single lever — technology, policy, or individual motivation — can drive a successful energy transition on its own. “Isolated single-solution approaches (technology-only, policy-only, planning-only) are rarely enough. The transition emerges from the interactions across all of them, and that’s something we need to consider as urban scientists,” he says. The scenarios that performed best combined strong policy frameworks, technological change, and shifts in behavior and urban design.
Although the model was developed for Northern Illinois, it is designed to be adaptable. With sufficient data, it could be applied to other regions, from New York City to Texas, allowing researchers to explore how different regulatory environments shape energy futures. Rozhkov’s next steps include comparing states with contrasting policies, exploring whether policies or natural conditions are driving the transition to a more renewable-based energy profile, and digging deeper into the behavioral factors behind why people adopt distributed energy in the first place.
Rozhkov, A. (2025). Decentralized Renewable Energy Integration in the urban energy markets: A system dynamics approach. PLOS Complex Systems, 2(12). https://doi.org/10.1371/journal.pcsy.0000083
Predicting the Peak: New AI Model Prepares NYC’s Power Grid for a Warmer Future
Buildings produce a large share of New York's greenhouse gas emissions, but predicting future energy demand — essential for reducing those emissions — has been hampered by missing data on how buildings currently use energy.
Semiha Ergan of NYU Tandon’s Civil and Urban Engineering (CUE) Department is pursuing research that addresses the problem from two directions. Both projects, conducted with CUE Ph.D. student Heng Quan, apply machine learning to forecast building energy use, including short-term (day-ahead) predictions to support grid peak management and longer-term (monthly) projections of how climate change may affect energy demand and building-grid interactions.
First, Ergan and Quan introduced STARS (Synthetic-to-real Transfer for At-scale Robust Short-term forecasting), which predicts 24-hour-ahead electricity use across buildings in New York State. The practical problem, Ergan said, is that most buildings lack the historical submetered sensor data that conventional forecasting models require.
“In reality, most buildings only have monthly electricity use data,” said Ergan, noting that detailed sensor data remains rare even in buildings subject to energy reporting requirements such as New York City’s Local Laws 84 and 87.
STARS sidesteps that limitation by training on thousands of simulated building profiles from the U.S. Department of Energy's ComStock library, then transferring what it learns to real buildings. Tested against actual consumption data from 101 New York State buildings, the model achieved 12.07 percent error in summer and 11.44 percent in winter, well below the roughly 30 percent threshold that industry guidelines use to define a well-calibrated model.
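Calibration thresholds of this kind are commonly expressed as CV(RMSE), the root-mean-square error normalized by the mean of the measured data; whether that is the exact metric used here is an assumption, and the consumption numbers below are made up for illustration.

```python
# Sketch of CV(RMSE), a standard calibration score for energy forecasts.
# Assumed to match the percentages in the text; the data are hypothetical.
import math

def cv_rmse(measured, predicted):
    n = len(measured)
    rmse = math.sqrt(sum((m - p) ** 2 for m, p in zip(measured, predicted)) / n)
    return rmse / (sum(measured) / n)   # normalize by mean of measured values

measured  = [120.0, 135.0, 150.0, 160.0, 140.0]   # hourly kWh, hypothetical
predicted = [118.0, 140.0, 145.0, 155.0, 150.0]
score = cv_rmse(measured, predicted)
well_calibrated = score < 0.30   # illustrative 30 percent threshold
```

Normalizing by the mean makes the score comparable across buildings of very different sizes, which matters when validating against a 101-building portfolio.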
The 24-hour forecast window is designed for demand response programs. During heat waves, grid operators need day-ahead predictions to coordinate building energy use — pre-cooling spaces before peak hours, for example — to prevent blackouts. "Eventually, this will help advance grid efficiency and citizen comfort,” said Ergan.
In complementary work with Quan, Ergan is examining how climate change will impact New York City buildings’ energy use over longer timeframes. They developed a physics-based machine learning model to address the poor extrapolation performance of purely data-driven methods.
The model is trained on real building energy consumption data from NYC Local Law 84 covering over 1,000 buildings and projects monthly energy use under warming scenarios of 2 to 4 degrees Fahrenheit. By incorporating physics-based knowledge into the machine learning framework, it enables robust projections even though no historical data exist for the future climate conditions buildings will face.
Their methodology revealed, for example, that a 4-degree increase could raise summer energy use by an average of 7.6 percent.
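One example of the kind of physics-grounded quantity such a model can lean on is cooling degree days, which scale roughly with cooling demand and can be recomputed for a warmed climate even though no measurements from that climate exist. The temperatures below and the choice of a 65 °F balance point are illustrative assumptions, not the study's data or method.

```python
# Sketch of a physics-based feature that extrapolates to unseen climates:
# cooling degree days (CDD) under today's temperatures vs. a +4 °F scenario.
# All temperature values here are hypothetical.

BALANCE_POINT_F = 65.0   # conventional base temperature for CDD

def cooling_degree_days(daily_mean_temps_f):
    return sum(max(t - BALANCE_POINT_F, 0.0) for t in daily_mean_temps_f)

july_temps = [78.0, 81.0, 74.0, 69.0, 85.0]                      # hypothetical
cdd_today = cooling_degree_days(july_temps)
cdd_warmer = cooling_degree_days([t + 4.0 for t in july_temps])  # +4 °F scenario
relative_increase = (cdd_warmer - cdd_today) / cdd_today
```

Because the feature is derived from physics rather than fitted to history, shifting the input temperatures is enough to probe a climate no building has yet experienced, which is the extrapolation problem purely data-driven models handle poorly.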
Together, the two projects address building energy use from complementary time scales: short-term forecasting enables day-ahead grid coordination during peak demand events, while long-term climate projections inform infrastructure planning and policy. Both ultimately support greenhouse gas reduction goals, as operational energy use converts directly to emissions.
The research was funded by NYU Tandon's IDC Innovation Hub.
Heng Quan and Semiha Ergan. 2025. Sim-to-Real Transfer Learning for Large-Scale Short-Term Building Energy Forecasting in Sustainable Cities. In Proceedings of the 12th ACM International Conference on Systems for Energy-Efficient Buildings, Cities, and Transportation (BuildSys '25). Association for Computing Machinery, New York, NY, USA, 86–95. https://doi.org/10.1145/3736425.3772006
Mass Shootings Trigger Starkly Different Congressional Responses on Social Media Along Party Lines, NYU Tandon Study Finds
After mass shootings, Democrats are nearly four times more likely than Republicans to post about guns on social media, but the disparity goes deeper than volume, according to research from NYU Tandon School of Engineering.
The study analyzed the full two-year term of the 117th Congress using computational methods designed to distinguish true cause-and-effect relationships from mere coincidence. The analysis reveals that mass shootings directly trigger Democratic posts within roughly two days, while Republicans show no such causal response.
Topic analysis of gun-related posts, moreover, reveals strikingly different foci. Democrats are far more likely to zero in on legislation, communities, families, and victims, while Republicans more often center on Second Amendment rights, law enforcement, and crime.
The study, in PLOS Global Public Health, analyzed 785,881 total posts from 513 members of the 117th Congress on X over two years, and identified 12,274 gun-related posts using keyword-based filtering. The team tracked how legislators’ posting related to 1,338 mass shooting incidents that occurred between January 2021 and January 2023.
"In this research, we tested a long-running concern, namely that when it comes to gun violence, Americans too often talk past each other instead of with each other,” said Institute Professor Maurizio Porfiri, the paper’s senior author. Porfiri is director of NYU Tandon’s Center for Urban Science + Progress (CUSP) and of its newly formed Urban Institute. “Our findings expose a fundamental difference in the way we mourn and react in the aftermath of a mass shooting. Building consensus with these stark differences across the political aisle becomes exceptionally difficult under these conditions.”
The research team employed the PCMCI+ causal discovery algorithm, which uses statistical methods to identify whether one event genuinely causes another or whether they simply occur around the same time. Combined with mixed-effects logistic regression and topic modeling, this approach revealed that Democrats respond causally to shooting severity — particularly the number of fatalities — both immediately and the day after incidents. Posting likelihood rose especially when incidents occurred in legislators' home states.
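PCMCI+ itself conditions out autocorrelation and confounders across many variables; as a heavily simplified stand-in, the underlying question, whether an event series predicts a response series at a specific lag, can be sketched with lagged correlations on synthetic data. Nothing below reproduces the study's method or data.

```python
# Simplified stand-in for lagged causal analysis: compare correlations between
# a synthetic daily event series and a response series at different lags.
# PCMCI+ does far more (conditional-independence testing); this is a toy.
import math, random

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def lagged_corr(events, posts, lag):
    return pearson(events[:-lag], posts[lag:])

random.seed(0)
events = [1 if random.random() < 0.2 else 0 for _ in range(365)]
# Synthetic response: posting spikes two days after an event, plus noise.
posts = [0.0] * 365
for t, e in enumerate(events):
    if e and t + 2 < 365:
        posts[t + 2] += 1.0
posts = [p + random.gauss(0, 0.1) for p in posts]

corr_lag2 = lagged_corr(events, posts, lag=2)   # true lag: strong signal
corr_lag5 = lagged_corr(events, posts, lag=5)   # wrong lag: near zero
```

Finding the correlation peak at the planted two-day lag mirrors, in miniature, the study's observation that Democratic posting responds within roughly two days of a shooting.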
"The findings matter because they expose a structural problem in how Americans address gun violence as a nation," said CUSP Assistant Research Scientist Dmytro Bukhanevych, the paper’s lead author. "When Democrats surge onto social media after mass shootings while Republicans have a comparatively smaller level of response, there's no shared moment of attention. The asymmetry itself becomes a barrier to meaningful exchange."
The research may help advocates and policymakers time interventions more strategically. With congressional attention peaking immediately and declining within roughly 48 hours, sustained public pressure needs to extend well beyond the immediate aftermath. And understanding the distinct frames each party uses could help communicators craft messages that bridge ideological divides rather than reinforcing them.
Along with Porfiri and Bukhanevych, CUSP Ph.D. candidate Rayan Succar served as the PLOS paper’s co-author.
This study contributes to Porfiri’s ongoing research related to U.S. gun prevalence and violence, which he and colleagues are pursuing under a National Science Foundation grant to study the “firearm ecosystem” in the United States. Prior published research under the grant explores:
- the degree that political views, rather than race, shape reactions to mass shooting data;
- the role that cities’ population size plays in the incidence of gun homicides, gun ownership and licensed gun sellers;
- motivations of fame-seeking mass shooters;
- factors that prompt gun purchases;
- state-by-state gun ownership trends; and
- forecasting monthly gun homicide rates.