Research News
New Algorithm Dramatically Speeds Up Stroke Detection Scans
When someone walks into an emergency room with symptoms of a stroke, every second matters. But today, diagnosing the type of stroke, the life-or-death distinction between a clot and a bleed, requires large, stationary machines like CT scanners that may not be available everywhere. In ambulances, rural clinics, and many hospitals worldwide, doctors often have no way to make this determination in time.
For years, scientists have imagined a different world, one in which a lightweight microwave imaging device, no bigger than a bike helmet, could allow clinicians to look inside the head without radiation, without a shielded room, and without waiting. That idea isn’t far-fetched. Microwave imaging technology already exists and can detect changes in the electrical properties of tissues — changes that happen when stroke, swelling, or tumors disrupt the brain’s normal structure.
The real obstacle has always been speed. “The hardware can be portable,” said Stephen Kim, a Research Professor in the Department of Biomedical Engineering at NYU Tandon. “But the computations needed to turn the raw microwave data into an actual image have been far too slow. You can’t wait up to an hour to know if someone is having a hemorrhagic stroke.”
Kim, along with BME Ph.D. student Lara Pinar and Department Chair Andreas Hielscher, believes that barrier may now be disappearing. In a new study published in IEEE Transactions on Computational Imaging, the team describes an innovative algorithm that reconstructs microwave images 10 to 30 times faster than the best existing methods, a leap that could bring real-time microwave imaging from theory into practice.
It’s a breakthrough that didn’t come from building new devices or designing faster hardware, but from rethinking the mathematics behind the imaging itself. Kim recalls spending long nights in the lab watching microwave reconstructions crawl along frame by frame. “You could almost hear the computer groan,” he said. “It was like trying to push a boulder uphill. We knew there had to be a better way.”
At the heart of the problem is how traditional algorithms work. They repeatedly try to “guess” the electrical properties of the tissue, check whether that guess explains the measured microwave signals, and adjust the guess again. This tedious process can require solving large systems of electromagnetic equations hundreds of times.
The team’s new method takes a different path. Instead of demanding a perfectly accurate intermediate solution at every iteration, their algorithm allows quick, imperfect approximations early on and tightens the accuracy only as needed. This shift, simple in concept but powerful in practice, dramatically reduces the number of heavy computations.
To make the process even more efficient, the team incorporated several clever tricks: using a compact mathematical representation to shrink the size of the problem, streamlining how updates are computed, and using a modeling approach that remains stable even for complex head shapes.
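To make the core idea concrete, here is a minimal numerical sketch of inexact iterative reconstruction, using a toy linear forward model in place of a real electromagnetic solver. Everything here — the matrix stand-in for the physics, the regularization weight, the tolerance schedule — is an illustrative assumption, not the team's published algorithm; the point is only that the inner solve is allowed to be sloppy while the image estimate is still rough, and tightens as the reconstruction converges.

```python
import numpy as np

def cg(matvec, b, rel_tol, maxiter=50):
    """Conjugate gradients, deliberately stopped early once the residual
    falls below the requested *relative* tolerance."""
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rs = r @ r
    b_norm = np.linalg.norm(b)
    for _ in range(maxiter):
        if np.sqrt(rs) <= rel_tol * b_norm:
            break
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

rng = np.random.default_rng(0)

# Toy linear stand-in for the electromagnetic forward model. In real
# microwave tomography, applying this operator means running a costly
# PDE (Maxwell) simulation -- exactly the step the new method avoids
# solving to high accuracy at every iteration.
n, m = 200, 300
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[60:80] = 1.0                              # a blocky "inclusion"
d = A @ x_true + 0.01 * rng.standard_normal(m)   # noisy measurements

x = np.zeros(n)
for k in range(20):
    r = d - A @ x                                # current data misfit
    # Inexactness: the inner tolerance is loose while the misfit is
    # large and tightens as the image estimate improves.
    inner_tol = min(0.5, float(np.linalg.norm(r)))
    step = cg(lambda s: A.T @ (A @ s) + 1e-3 * s, A.T @ r, inner_tol)
    x = x + step
    if np.linalg.norm(r) < 0.2:                  # near the noise floor
        break

print("relative misfit:", np.linalg.norm(d - A @ x) / np.linalg.norm(d))
```

In the published method, the expensive step sketched by `cg` here is a full electromagnetic simulation, which is why relaxing its accuracy in early iterations yields such large savings.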
The results are striking. Reconstructions that once took nearly an hour now appear in under 40 seconds. In tests with real experimental data, including cylindrical targets imaged using a microwave scanner from the University of Manitoba, the method consistently delivered high-quality results in seconds instead of minutes.
For Kim and Hielscher, who have worked collaboratively for decades on optical and microwave imaging techniques, the speed improvement feels like a long-awaited turning point. “We always knew microwave imaging had the potential to be portable and affordable. But without rapid reconstruction, the technology couldn’t make the leap into real clinical settings,” Hielscher said. “Now we’re finally closing that gap.”
The promise extends far beyond stroke detection. Portable microwave devices could one day provide an accessible alternative to mammography in low-resource settings, monitor brain swelling in intensive care units without repeated CT scans, or track tumor responses to therapy by observing subtle changes in tissue composition.
The team is now focused on extending the algorithm to full 3D imaging, a step that would bring microwave tomography even closer to practical deployment. But the momentum is palpable. “We’re taking a technology that has been stuck in the lab for years and giving it the speed it needs to matter clinically,” Kim said. “That’s the part that excites us: imagining how many patients someday might benefit from this.”
Stephen H. Kim, Lara Pinar, and Andreas H. Hielscher, “Accelerated Microwave Tomographic Imaging with a PDE-Constrained Optimization Method,” IEEE Transactions on Computational Imaging, vol. 11, pp. 1614–1629 (2025).
New AI Language-Vision Models Transform Traffic Video Analysis to Improve Road Safety
New York City's thousands of traffic cameras capture endless hours of footage each day, but analyzing that video to identify safety problems and implement improvements typically requires resources that most transportation agencies don't have.
Now, researchers at NYU Tandon School of Engineering have developed an artificial intelligence system that can automatically identify collisions and near-misses in existing traffic video by combining language reasoning and visual intelligence, potentially transforming how cities improve road safety without major new investments.
Published in the journal Accident Analysis and Prevention, the research won New York City's Vision Zero Research Award, an annual recognition of work that aligns with the City's road safety priorities and offers actionable insights. Professor Kaan Ozbay, the paper's senior author, presented the study at the eighth annual Research on the Road symposium on November 19.
The work exemplifies cross-disciplinary collaboration between computer vision experts from NYU's new Center for Robotics and Embodied Intelligence and transportation safety researchers at NYU Tandon's C2SMART center, where Ozbay serves as Director.
By automatically identifying where and when collisions and near-misses occur, the team’s system — called SeeUnsafe — can help transportation agencies pinpoint dangerous intersections and road conditions that need intervention before more serious accidents happen. It leverages pre-trained AI models that can understand both images and text, representing one of the first applications of multimodal large language models to analyze long-form traffic videos.
"You have a thousand cameras running 24/7 in New York City. Having people examine and analyze all that footage manually is untenable," Ozbay said. "SeeUnsafe gives city officials a highly effective way to take full advantage of that existing investment."
"Agencies don't need to be computer vision experts. They can use this technology without the need to collect and label their own data to train an AI-based video analysis model," added NYU Tandon Associate Professor Chen Feng, a co-founding director of the Center for Robotics and Embodied Intelligence, and paper co-author.
Tested on the Toyota Woven Traffic Safety dataset, SeeUnsafe outperformed other models, correctly classifying videos as collisions, near-misses, or normal traffic 76.71% of the time. The system can also identify which specific road users were involved in critical events, with success rates reaching up to 87.5%.
Traditionally, traffic safety interventions are implemented only after accidents occur. By analyzing patterns of near-misses — such as vehicles passing too close to pedestrians or performing risky maneuvers at intersections — agencies can proactively identify danger zones. This approach enables the implementation of preventive measures like improved signage, optimized signal timing, and redesigned road layouts before serious accidents take place.
The system generates “road safety reports” — natural language explanations for its decisions, describing factors like weather conditions, traffic volume, and the specific movements that led to near-misses or collisions.
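The paper's prompts and pipeline are not reproduced in this article, but the general pattern — sample frames from a clip, send them to a pre-trained vision-language model, and ask for both a classification and a plain-language justification — can be sketched as follows. The prompt wording and the `query_vlm` callable are placeholders for whatever multimodal model an agency has access to; only the OpenCV frame sampling is concrete.

```python
import base64
import cv2  # OpenCV, used here for frame extraction

PROMPT = (
    "You are a traffic-safety analyst. Given these frames sampled from a "
    "surveillance clip, classify the clip as 'collision', 'near-miss', or "
    "'normal', identify the road users involved, and explain your reasoning "
    "(weather, traffic volume, specific movements) in plain language."
)

def sample_frames(video_path, every_n=30):
    """Keep one frame out of every `every_n`, JPEG- and base64-encoded."""
    cap = cv2.VideoCapture(video_path)
    frames, i = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if i % every_n == 0:
            ok, buf = cv2.imencode(".jpg", frame)
            if ok:
                frames.append(base64.b64encode(buf.tobytes()).decode("ascii"))
        i += 1
    cap.release()
    return frames

def classify_clip(video_path, query_vlm):
    """`query_vlm(prompt, images)` is a placeholder for the multimodal
    LLM endpoint in use; the paper's actual integration is not shown."""
    return query_vlm(PROMPT, sample_frames(video_path))
```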
While the system has limitations, including sensitivity to object tracking accuracy and challenges with low-light conditions, it establishes a foundation for using AI to “understand” road safety context from vast amounts of traffic footage. The researchers suggest the approach could extend to in-vehicle dash cameras, potentially enabling real-time risk assessment from a driver's perspective.
The research adds to a growing body of work from C2SMART that can improve New York City's transportation systems. Recent projects include studying how heavy electric trucks could strain the city's roads and bridges, analyzing how speed cameras change driver behavior across different neighborhoods, developing a “digital twin” that can find smarter routing to reduce FDNY response times, and a multi-year collaboration with the City to monitor the Brooklyn-Queens Expressway for damage-causing overweight vehicles.
In addition to Ozbay and Feng, the paper's authors are lead author Ruixuan Zhang, a Ph.D. student in transportation engineering at NYU Tandon; Beichen Wang and Juexiao Zhang, both graduate students from NYU's Courant Institute of Mathematical Sciences; and Zilin Bian, a recent NYU Tandon Ph.D. graduate who is now an assistant professor at Rochester Institute of Technology.
Funding for the research came from the National Science Foundation and the U.S. Department of Transportation's University Transportation Centers Program.
Ruixuan Zhang, Beichen Wang, Juexiao Zhang, Zilin Bian, Chen Feng, and Kaan Ozbay, “When language and vision meet road safety: Leveraging multimodal large language models for video-based traffic accident analysis,” Accident Analysis & Prevention, vol. 219, 2025, 108077, ISSN 0001-4575, https://doi.org/10.1016/j.aap.2025.108077.
Study Shows New Method to Produce Ultrahard Single-Layer Diamond for Industrial Applications
Graphene’s enduring appeal lies in its remarkable combination of lightness, flexibility, and strength. Now, researchers have shown that under pressure, it can briefly take on the traits of one of its more glamorous carbon cousins. By introducing nitrogen atoms and applying pressure, a team of scientists has coaxed bilayer graphene grown through chemical vapor deposition (CVD) into a diamond-like phase — without the need for extreme heat. The finding, reported in Advanced Materials Technologies, shows a scalable way to create ultrathin coatings that combine the toughness of diamond with the processability of graphene.
The work, led by Elisa Riedo, Herman F. Mark Professor in Chemical and Biomolecular Engineering, focuses on the delicate balance between two forms of carbon bonding. In ordinary graphene, carbon atoms connect through sp² bonds in a flat honeycomb arrangement, giving rise to its electrical conductivity and mechanical toughness. Diamond, on the other hand, is built from sp³ bonds in a three-dimensional network that confers exceptional hardness. Converting one to the other typically demands extreme pressure and temperature. The team discovered that nitrogen doping lowers this barrier, allowing the transition to occur at room temperature when the layers are pressed together.
To test the effect, the researchers used CVD bilayer graphene films on silicon dioxide substrates and incorporated nitrogen atoms during the growth process. They then applied mechanical pressure using a technique known as modulated nanoindentation. The nitrogen-doped bilayer films exhibited nearly twice the stiffness of the bare substrate, suggesting the formation of stronger, diamond-like interlayer bonds. By contrast, nitrogen-doped monolayer or thicker multilayer samples showed no comparable stiffening, indicating that the effect depends on both the doping and the precise bilayer structure.
Molecular dynamics simulations provided a possible explanation. The models showed that nitrogen atoms promote the formation of sp³ bonds between the two layers when they are compressed. The nitrogen atoms appear to stabilize these interlayer bonds, effectively “locking” parts of the bilayer into a more diamond-like configuration. This cooperation between chemical doping and pressure points to a previously unrecognized pathway for transforming graphene’s atomic structure.
The implications extend beyond a mere curiosity of carbon chemistry. Because the experiments used large-area graphene grown by chemical vapor deposition, the process is inherently compatible with industrial fabrication methods and wafer-scale dimensions. The transformation also occurs under mild conditions, avoiding the high temperatures that typically destroy or distort 2D materials. In principle, the approach could yield ultrathin, lightweight coatings that resist wear and deformation while maintaining the advantages of graphene substrates.
Yet the work raises as many questions as it answers. The extent of the transformation remains uncertain — whether the sp³ bonding is continuous or confined to localized regions under the indenter is not yet clear. Researchers also do not know whether the diamond-like phase persists once the pressure is released, or whether it relaxes back to graphene over time. Understanding how stable and uniform these transformations are will be critical for any practical use.
The effect on electronic behavior also remains to be seen. Diamond-like carbon is typically an electrical insulator, so localized sp³ regions could alter the electronic or optical properties of the film. For device applications, the challenge will be to tune the process so that mechanical and electrical properties can be balanced rather than compromised.
Future research will need to clarify how doping levels, pressure intensity, and substrate choice influence the transformation.
The study suggests that graphene’s versatility may stretch further than expected. By manipulating its atomic environment — through doping, strain, or pressure — researchers may be able to switch between distinct structural phases on demand. Such control could lead to a new generation of adaptive materials, capable of shifting from soft to hard, or from conductive to insulating, depending on their operating conditions.
Graphene has often been described as a material with untapped potential. This work offers another glimpse of that potential, showing that even after more than a decade of intense study, carbon’s simplest form still has surprises left to offer.
This work was supported by the U.S. Army Research Office.
Researchers Quantify Intensity of Emotional Response to Sound, Images and Touch Through Skin Conductance
When we listen to a moving piece of music or feel the gentle pulse of a haptic vibration, our bodies react before we consciously register the feeling. The heart may quicken, and palms may sweat, producing subtle variations in the skin’s electrical resistance. These changes, though often imperceptible, reflect the brain’s engagement with the world. A recent study by researchers at NYU Tandon and the Icahn School of Medicine at Mount Sinai, published in PLOS Mental Health, explores how such physiological signals can reveal cognitive arousal — the level of mental alertness and emotional activation — without the need for subjective reporting.
The researchers, led by Associate Professor of Biomedical Engineering Rose Faghih at NYU Tandon, focused on skin conductance, a well-established indicator of autonomic nervous system activity. When sweat glands are stimulated, even minutely, the skin’s ability to conduct electricity changes. This process, known as electrodermal activity, has long been associated with emotional and cognitive states. What distinguishes this study is the combination of physiological modeling and advanced statistical methods to interpret these subtle electrical fluctuations in response to different sensory experiences.
This research began as a course project for student authors Suzanne Oliver and Jinhan Zhang in Faghih's “Neural and Physiological Signal Processing” course. Research Scientist and co-author Vidya Raju mentored the students under Faghih's supervision. James W. Murrough, Professor of Psychiatry and Neuroscience and Director of the Depression and Anxiety Center for Discovery and Treatment at the Icahn School of Medicine at Mount Sinai, also collaborated on the research.
“Taking Prof. Faghih's class was a great experience and allowed me to combine coursework and research,” said Oliver. “It was very exciting to see that the work I did in class could help improve treatment of mental health conditions in the future.”
The researchers analyzed a published dataset of skin conductance recorded continuously while participants were exposed to visual, auditory, and haptic stimuli. Participants also provided self-ratings of arousal using the Self-Assessment Manikin, a pictorial scale that quantifies emotional states. By applying a physiologically informed computational model, the team separated the slow and fast components of the skin’s electrical response and inferred when the autonomic nervous system was most active. Bayesian filtering and a marked point process algorithm were then used to estimate a continuous measure of cognitive arousal over time.
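The authors' full model (Bayesian filtering plus a marked point process) is more sophisticated than anything that fits here, but the first step — splitting a skin-conductance trace into a slow "tonic" drift and a fast "phasic" component whose peaks mark candidate sympathetic activations — can be illustrated with a simple low-pass filter. The sampling rate, cutoff frequency, and peak threshold below are assumed values for the illustration, not the study's parameters.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def decompose_eda(sc, fs=4.0, cutoff_hz=0.05):
    """Split a skin-conductance trace into a slow (tonic) component and
    a fast (phasic) component; peaks in the phasic part are candidate
    sympathetic activations. Filter order and cutoff are assumptions."""
    b, a = butter(2, cutoff_hz / (fs / 2), btype="low")
    tonic = filtfilt(b, a, sc)
    phasic = sc - tonic
    peaks, _ = find_peaks(phasic, height=0.1, distance=int(fs))
    return tonic, phasic, peaks

# Synthetic 2-minute trace: slow drift plus two stimulus-evoked bumps.
fs = 4.0
t = np.arange(0, 120, 1 / fs)
rng = np.random.default_rng(1)
sc = (5 + 0.01 * t
      + 0.3 * np.exp(-((t - 30) ** 2) / 8)
      + 0.5 * np.exp(-((t - 80) ** 2) / 8)
      + 0.02 * rng.standard_normal(t.size))
tonic, phasic, peaks = decompose_eda(sc, fs)
print("candidate activation times (s):", t[peaks])  # near 30 s and 80 s
```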
The analysis revealed a striking pattern: the nervous system responded most strongly within two seconds of a new stimulus, with haptic sensations eliciting the largest immediate activations. Yet when the researchers compared these physiological signals to participants’ own self-assessments, auditory stimuli — particularly sounds and music — were most often linked to high arousal states. This suggests that the brain’s perception of stimulation and the body’s raw autonomic responses, while related, may not always align perfectly. However, when the physiological signals were further processed into estimates of user arousal, the modeled arousal agreed with participants’ assessments that auditory stimuli caused the highest arousal.
Interestingly, the model was able to track transitions in participants’ arousal levels as they moved from low- to high-intensity stimuli with an accuracy exceeding random chance. When participants who felt more stimulated by visual cues were analyzed separately from those more responsive to touch, the model’s predictions reflected the significant differences in the two groups’ self-reported responses to these stimuli, effectively capturing group trends.
The implications of this work extend beyond the laboratory. In clinical contexts, self-reported measures remain the gold standard for assessing mental states such as anxiety or stress, yet they are inherently subjective and often unreliable. Objective metrics derived from skin conductance could complement these reports, offering clinicians a more nuanced view of a patient’s emotional dynamics in real time. Such tools might one day aid in monitoring recovery from depression, anxiety, or post-traumatic stress disorder, where changes in physiological arousal often mirror symptom fluctuations.
The study also points to potential uses in virtual reality and human-computer interaction. By quantifying how users react to visual, auditory, or tactile elements, systems could adapt dynamically — heightening immersion, enhancing focus, or reducing stress depending on the goal. This closed-loop feedback between body and machine could make digital environments more responsive to human emotion.
Still, the authors acknowledge the complexity of translating sweat and associated signals into precise emotional understanding. Factors such as stimulus duration, individual variability, and prior experience complicate the interpretation. The correlation between computed arousal and self-reported ratings was modest overall, reflecting the intricate and personal nature of emotional experience. Yet the model’s consistency in identifying moments of heightened engagement underscores its promise as a complementary measure of internal states.
In essence, the study bridges a subtle gap between physiology and perception. By grounding emotion in the body’s own electrical rhythms, it invites a more continuous, data-driven view of how humans experience the world — one that may eventually inform both mental health care and the design of emotionally intelligent technologies.
China Commands 47% of Remote Sensing Research, While U.S. Produces Just 9%, NYU Tandon Study Reveals
The United States is falling far behind China in remote sensing research, according to a comprehensive new study that tracked seven decades of academic publishing and reveals a notable reversal in global technological standing.
China now accounts for nearly half of all peer-reviewed journal publications in this critical field, while American output has declined to single digits.
"This represents one of the most significant shifts in global technological leadership in recent history," said Debra Laefer, the lead author of the study. Laefer is a NYU Tandon Civil and Urban Engineering professor, and a faculty member of Tandon’s Center for Urban Science + Progress.
Published in the journal Geomatics, the research analyzed over 126,000 papers published between 1961 and 2023 to document how China, which had virtually no presence in the field from the 1960s through the 1990s, surged to 47% of remote sensing publications by 2023, while the United States dropped from producing 88% of research in the 1960s to only 9% today.
Remote sensing — the science of gathering information from a distance using technologies like laser scanning, conventional imagery, and hyperspectral imagery from the ground, the air, and even space — underpins critical applications from autonomous vehicles to climate monitoring and national security.
The global market was valued at $452 billion in 2022 and is projected to reach $1.44 trillion by 2030, making leadership in this field essential for economic competitiveness. Laefer emphasized that understanding who drives technical expertise and funding in this area is "of national and international importance, as they are inextricably linked with intellectual property generation, which is also shown in our data."
The research reveals that remote sensing scholarship has experienced exponential growth, expanding from roughly a dozen papers annually in the 1960s to more than 13,000 per year by 2023, a thousand-fold increase that far outpaces general scientific publishing trends.
Laefer and co-author Jingru Hua — at the time a master’s student in the NYU Center for Data Science — attribute this surge to decreased equipment costs, greater global participation, digital-only publishing, and most significantly, the adoption of artificial intelligence techniques like machine learning and deep learning.
Perhaps most notable for American competitiveness, the research demonstrates a near-perfect correlation between national funding and publication output. China's National Natural Science Foundation now appears in funding acknowledgments for over 53% of remote sensing papers published between 2021 and 2023, while U.S. agencies are credited in only 5%.
The study identified six Chinese funding entities among the top ten global funders in recent years, compared to only two American organizations, NASA and the National Science Foundation (NSF). NASA, once the dominant funder at 50% of publications through the 1990s, has been vastly outpaced by Chinese funding organizations. Notably, NSF does not have dedicated divisions specifically for geomatics (the science of gathering and analyzing geographic data) or geodesy (the science of measuring Earth's shape and positions on it).
China's research dominance extends to intellectual property generation as well. According to patent data included in the study, China now accounts for the majority of remote sensing patents filed globally. In just the three years from 2021 to 2023, over 43,000 patents containing "remote sensing" were filed worldwide, with China responsible for the clear majority, a dramatic reversal from the late 20th century when the United States held near-total dominance.
The researchers' analysis of publication titles reveals evolving technological priorities. Early decades focused heavily on satellite imagery, but recent years show explosive growth in artificial intelligence techniques, with terms like "deep learning" and "machine learning" now dominating publication titles. The number of papers mentioning these techniques has grown exponentially, reaching over 80,000 publications by 2023.
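Title-term analysis of this kind reduces to counting keyword occurrences per publication year. Below is a minimal sketch, assuming records have already been parsed from a bibliographic export into (year, title) pairs — a format chosen here for illustration, not the study's actual pipeline.

```python
from collections import Counter

TERMS = ("deep learning", "machine learning")

def term_counts_by_year(records):
    """Count how many titles mention each term, per year.
    `records` is an iterable of (year, title) pairs, e.g. parsed
    from a bibliographic database export (format assumed)."""
    counts = Counter()
    for year, title in records:
        lowered = title.lower()
        for term in TERMS:
            if term in lowered:
                counts[(year, term)] += 1
    return counts

# Tiny made-up demo records, purely for illustration.
demo = [
    (2015, "Deep learning for SAR image classification"),
    (2023, "Machine learning in hyperspectral remote sensing"),
    (2023, "Deep learning-based change detection from satellite imagery"),
]
print(term_counts_by_year(demo))
```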
The findings have implications for technological competitiveness. Remote sensing capabilities underpin emerging technologies including augmented reality, autonomous navigation, and digital twins, all important areas for economic and commercial applications. With China's continued investment and the field's commercial value expected to triple by 2030, the study provides a baseline for understanding shifts in this important technological domain.
Laefer, D.; Hua, J. Remote Sensing Publications 1961–2023—Analysis of National and Global Trends. Geomatics 2025, 5, 47. https://doi.org/10.3390/geomatics5030047
Heavier Electric Trucks Could Strain New York City’s Roads and Bridges, Study Warns
New York City’s roads and bridges already incur millions of dollars in damage each year from oversized trucks, and a new study warns the shift to electric freight could intensify that burden. As electric trucks replace diesel models, their heavier batteries could increase the city's yearly repair costs by nearly 12 percent by 2050.
Led by C2SMART researchers at NYU Tandon School of Engineering in collaboration with Rochester Institute of Technology (RIT) and published in Transport Policy, the study finds that oversized trucks already cause about $4.16 million in damage each year while permits bring in only $1.28 million. Electric trucks typically weigh 2,000 to 3,000 pounds more than diesel models, and in rare long-range cases as much as 8,000 to 9,000 pounds more, so the financial gap is expected to grow.
“As electric vehicles become more common, our city’s infrastructure will face new and changing demands to support this transition,” said Professor Kaan Ozbay, the paper’s senior author and director of NYU Tandon’s C2SMART transportation research center. “Our framework shows that the city should adapt its planning and fee structures to ensure it can accommodate the costs of keeping bridges and roads safe as a result of more widespread adoption of e-trucks.”
Using New York City’s Overdimensional Vehicle Permits dataset, the researchers modeled how electric-truck adoption could play out through 2050. They found that switching to e-trucks could increase damage costs by 2.23 to 4.45 percent by 2030, and by 9.19 to 11.71 percent by 2050. More extreme scenarios tied to unusually heavy batteries produced higher figures, though the authors say those outcomes are unlikely as technology improves.
The impact would not be uniform across the city. Manhattan faces the greatest increase, with parts of Brooklyn, Queens, and the Bronx also at risk due to heavy truck volumes and aging structures. Staten Island and many outer areas show lower impact. Bridges shoulder about 65 percent of the added costs because they are especially sensitive to increases in gross vehicle weight. Pavement, affected more by axle loads, wears down more gradually.
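The study's calibrated cost model isn't reproduced in this article, but the disproportionate effect of added weight on pavement can be illustrated with the widely used AASHTO "fourth-power law," under which pavement damage grows roughly with the fourth power of axle load. The axle weights below are illustrative assumptions, not figures from the paper.

```python
def pavement_damage_factor(axle_load_lbs, standard_axle_lbs=18_000):
    """Fourth-power law: pavement damage scales roughly with the
    fourth power of axle load relative to a standard 18,000-lb axle."""
    return (axle_load_lbs / standard_axle_lbs) ** 4

# Illustrative axle loads (assumed): a diesel tractor drive axle at
# 17,000 lbs, and the same axle carrying ~2,500 lbs of extra battery
# weight -- a deliberately pessimistic, single-axle assumption.
diesel_axle, electric_axle = 17_000, 19_500

increase = (pavement_damage_factor(electric_axle)
            / pavement_damage_factor(diesel_axle)) - 1
print(f"extra pavement wear on that axle: {increase:.0%}")  # about 73%
```

Under these assumed loads, an axle only about 15 percent heavier produces over 70 percent more pavement wear, which is why even modest battery weight matters; bridges, by contrast, respond more directly to total gross vehicle weight.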
“We found that conventional oversized trucks in New York City already impose more than $4 million in annual damage,” said the study’s lead author Zerun Liu, an NYU Tandon Ph.D. candidate in the Civil and Urban Engineering department’s recently established Urban Systems Ph.D. program, who is advised by Professor Ozbay. “With projected adoption of electric trucks, those costs could increase by nearly an additional 12 percent. That gap highlights the urgent need for new strategies to keep infrastructure sustainable.”
To manage the risks, the researchers created a susceptibility index identifying road segments and bridges most vulnerable to heavier vehicles. They recommend replacing flat permit fees with flexible, weight-based fees that reflect actual costs while still recognizing environmental benefits. They also call for expanding weight monitoring on high-risk corridors, especially in Manhattan, and factoring e-truck projections into city maintenance and capital plans to avoid expensive emergency repairs.
Although the study focuses on New York City, similar pressures are emerging elsewhere. The European Union allows zero-emission trucks to exceed weight limits by nearly 9,000 pounds, while U.S. rules permit an additional 2,000. The framework developed by the NYU Tandon and RIT team offers cities a way to balance climate goals with the realities of infrastructure wear.
Despite the added costs, the authors stress that the overall case for electric trucks in New York remains strong. Their scenarios suggest that widespread electrification could cut about 2,032 tons of carbon dioxide each year, improving air quality and public health.
“The proposed methodological framework can provide actionable insights for policymakers to ensure infrastructure longevity and safety as e-truck adoption grows,” Ozbay said.
In addition to senior author Ozbay and lead author Liu, the paper’s other authors are Jingqin Gao, C2SMART’s Assistant Director of Research; Tu Lan, who completed a Ph.D. in the Urban Systems program under Professor Ozbay’s advisement; and Zilin Bian, a recent NYU Tandon Ph.D. graduate from the Civil and Urban Engineering department, now an assistant professor at RIT.
Funding came from the U.S. Department of Transportation’s University Transportation Centers Program.
Zerun Liu, Tu Lan, Zilin Bian, Jingqin Gao, and Kaan Ozbay, “A comprehensive framework for the assessment of the effects of increased electric truck weights on road infrastructure: A New York City case study,” Transport Policy, vol. 173, 2025, https://doi.org/10.1016/j.tranpol.2025.103808.
NYU Tandon research reveals how grassroots logistics networks fed New Yorkers during COVID-19 crisis
Grassroots logistics networks provided food and essential goods to New Yorkers who fell through the cracks of conventional supply chains during the COVID-19 pandemic, offering important lessons for engineers designing the next generation of distribution technologies, according to new research from NYU Tandon and the University of Toronto.
Presented at the 28th ACM SIGCHI Conference on Computer-Supported Cooperative Work & Social Computing (CSCW), the study examines three community-driven distribution systems that emerged in New York City: immigrant street vendors in Corona, Queens; a theater-turned-food-pantry on Manhattan's Lower East Side; and the citywide mutual aid network. Each represented what researchers call “supply chains of last resort,” critical interventions filling gaps left by traditional logistics infrastructure.
The research was conducted by Margaret Jack, Industry Assistant Professor in NYU Tandon's Department of Technology, Culture, and Society, and Robert Soden, Assistant Professor in the Department of Computer Science and the School of the Environment at the University of Toronto.
The study contributes to human-centered engineering research by examining how people creatively appropriate and repurpose existing technologies — from WhatsApp to shopping carts — to build functional logistics systems without formal infrastructure. These alternative logistics networks demonstrate how technologies designed for individual productivity are being adapted for civic and ecological collaboration, raising design questions relevant to the development of future civic technologies.
The Corona Plaza street vending community exemplifies this creative adaptation. Vendors used shopping carts, portable griddles, and folding tables to create temporary restaurants, while leveraging TikTok and YouTube for marketing, Zelle for payments, and WhatsApp groups for coordination, stitching together consumer technologies in ways their designers never intended.
At the Abrons Art Center, a performance venue transformed its stage into a food distribution hub serving over seven hundred families weekly. Theater technicians applied their engineering skills to construct a walk-in refrigerator using scenery-building techniques, and creatively used the theater's fly system to lift one wall above their heads so they could move full pallets of food into the refrigerator.
The citywide mutual aid network connected thousands of volunteers who cobbled together digital tools (Google Docs, AirTable, Slack, and WhatsApp) to build complex workflows for volunteer management and resource distribution. Rather than waiting for custom-built platforms, organizers rapidly prototyped solutions using available technologies, then shared successful approaches across the network.
The study offers important insights for computer-supported cooperative work and infrastructure engineering. Jack and Soden frame these logistics networks as socio-technical systems whose function depends not just on physical tools but on social networks and digital infrastructure, revealing design opportunities often missed by conventional technology development. The research challenges how engineers think about logistics infrastructure.
While companies like Amazon design tightly-coupled systems optimizing for efficiency and control, these alternative networks succeeded through flexibility and "seamfulness" — deliberately visible seams between system components that allowed for creative adaptation.
"There's a tendency to forget about all the invisible infrastructural work supporting our lives until there is a breakdown, and in the emergency of COVID in New York City, we saw a lot of breakdown," Jack explained. "Our cases show that embracing flexibility and recognizing inevitable situated human action within infrastructures can produce more resilient systems."
The study identifies specific opportunities for engineering research and design. Alternative logistics networks struggled with tools built by corporations for different purposes, pointing to a need for movement-aligned civic technologies designed specifically for grassroots coordination. The researchers argue for "seamful design" approaches in logistics engineering, creating systems that highlight rather than hide complexity, empowering users to appropriate technologies in their own emergent ways.
However, these grassroots networks faced structural barriers that engineering alone cannot solve. Street vendors endured police harassment despite functional logistics systems. The Abrons refrigerator was dismantled due to permit requirements despite working perfectly.
As engineers design systems for climate adaptation and crisis response, the lessons from pandemic-era alternative logistics become increasingly relevant, demonstrating how community capacity and regulatory support can create positive conditions for resilient infrastructure even under severe resource constraints.
This work was funded by a John Burdick mini-grant for Research on Social Movements and Social Change from Syracuse University's Maxwell School.
Hydrogen processing plant failures mostly linked to design flaws, not hydrogen itself, study finds
Hydrogen is often touted as a clean, carbon-free energy carrier that could help decarbonize industry and transportation. Yet the very properties that make it efficient and lightweight also make it uniquely tricky to handle safely. A new study published in the International Journal of Hydrogen Energy by researchers at NYU Tandon and University College London takes a systematic look at what truly makes hydrogen accidents different from conventional industrial failures, and what that means for safety and regulation.
By analyzing more than 700 incidents in the Hydrogen Incidents and Accidents Database (HIAD 2.0), the team found that 59 percent of mishaps involving hydrogen stem from the same sorts of issues that plague other energy systems: design flaws, mechanical failures, and human error. Only 15 percent can be directly traced to the intrinsic properties of hydrogen itself, such as its high diffusivity, low ignition energy, or ability to degrade metals from within. The remaining cases lacked enough detail to tell one way or another.
“Of course, in the case of hydrogen, the consequences of a fire or an explosion can be a lot more severe due to the unique combustion properties of this gas. But when looking at the root cause of an incident, hydrogen is not inherently more dangerous than other flammable gases used in industry,” says Augustin Guibaud, Assistant Professor of Mechanical and Aerospace Engineering and an author of the study. “However, the way it interacts with materials and the environment is fundamentally different. The danger comes from misunderstanding those differences.”
Those differences arise from hydrogen’s atomic scale. Its extremely small molecules slip through metal lattices where larger gases like methane cannot, leading to subtle but serious material failures. The study details several such mechanisms: hydrogen embrittlement, which weakens metals by disrupting atomic bonds; hydrogen-induced cracking, in which pressurized gas accumulates inside tiny voids until the material bursts; and high-temperature hydrogen attack, where hydrogen reacts with carbon in steel to form methane, eroding its structure. Other hazards include hydrogen-assisted corrosion and the effects of storing the gas at pressures up to 700 bar — dozens of times higher than those used for natural gas.
These microscopic processes have huge consequences. The 2019 explosion at a hydrogen refueling station in Sandvika, Norway, for example, stemmed from a faulty high-pressure component rather than combustion chemistry, but it underscored how even small mechanical flaws can escalate quickly under hydrogen service conditions.
Guibaud, who is also a member of the Center for Urban Science + Progress, notes that the goal of the research is not to minimize hydrogen’s risks but to clarify them. “Our findings also highlight where traditional safety practices fail to capture hydrogen’s unique behavior,” Guibaud says. “If we can distinguish between what is general and what is hydrogen-specific, we can focus regulation and design standards on the right problems.”
That distinction, the authors argue, is essential as hydrogen infrastructure expands beyond controlled industrial sites into urban fueling stations, residential heating, and renewable power storage. Current regulations, they point out, often apply “one-size-fits-all” safety distances or design codes that lack a strong scientific basis. Overly cautious rules can slow deployment and raise costs, while overly permissive ones can leave gaps in protection.
Instead, the researchers advocate for risk-informed, evidence-based safety standards grounded in hydrogen’s particular chemistry and physics. They also call for improved data collection and international coordination, noting that the hydrogen industry today lacks the tools for systematic data collection and transparent reporting.
“The challenge,” says Guibaud, “isn’t just preventing accidents — it’s learning from them fast enough to guide a rapidly changing energy landscape.” As hydrogen moves from the lab to the mainstream, knowing which failures are truly “hydrogen failures” may prove as vital as the technology itself.
Li, Yutao, et al. “Differentiating hydrogen-driven hazards from conventional failure modes in hydrogen infrastructure.” International Journal of Hydrogen Energy, vol. 183, Oct. 2025, p. 151155, https://doi.org/10.1016/j.ijhydene.2025.151155.
New research reveals uptake of AI-powered messaging in healthcare settings
A new study from NYU Tandon, NYU Langone Health, and the NYU Stern School of Business offers one of the first data-driven looks at how generative AI might help healthcare providers manage their message overload — and why many are hesitant to adopt the technology.
Over a ten-month period from October 2023 through August 2024, a team led by Morton L. Topfer Professor of Technology Management Oded Nov observed more than 55,000 patient messages sent to healthcare providers through a secure online patient portal. The system used an embedded generative AI tool that automatically generated draft replies for incoming patient messages; healthcare providers could choose to start with the draft, begin a reply from scratch, or use their usual reply interface.
The research was published in npj Digital Medicine.
“This paper provides evidence that AI has the potential to make patient-provider communication more efficient and more responsive,” says Soumik Mandal, research scientist and lead author of the research. “To unlock its full potential in the next phase, however, will require tailored implementation to ensure that AI tools meaningfully reduce clinician burden while enhancing care quality. The paper outlines some practical strategies to improve draft utilization and guide future implementation efforts as key next steps.”
Other authors include NYU Stern’s Batia M. Wiesenfeld, as well as NYU Langone Health’s Adam C. Szerencsy, William R. Small, Vincent Major, Safiya Richardson, Antoinette Schoenthaler, and Devin Mann.
According to the published results, providers chose to “Start with Draft” in 19.4 percent of cases where a draft was shown. Adoption rose modestly over the course of the study as the system’s prompting improved. Using a draft shaved roughly 7 percent off response times (a median of 331 seconds versus 355 seconds when drafting from scratch), but in many cases the time saved was offset by time spent reviewing, editing, or ignoring drafts.
“LLMs are a new technology that can help providers be more responsive, more effective and more efficient in their communication with their patients,” says Nov. “The more we understand who uses it and why, the better we can leverage it.”
By analyzing tens of thousands of messages, the researchers found that certain qualities made drafts more likely to be used. Shorter, more readable, and more informative drafts tended to be preferred. Tone also mattered: messages that sounded slightly more human and empathetic were more likely to be adopted, though the ideal balance differed by role. Physicians leaned toward concise, neutral text, while support staff were more receptive to messages with a warmer tone. These preferences hint at a future where AI systems could adapt their writing style based on the user’s role or communication history.
Still, the study shows how hesitant healthcare providers remain to rely on AI-generated language at all. The authors suggest several possible reasons, including suboptimal alignment with clinical workflows and the cognitive cost of reviewing a constant stream of AI output, much of which may be irrelevant. Simply generating text for every message, they argue, can create clutter that undermines the very efficiency such tools are meant to provide.
The researchers see ample opportunity ahead. Future systems may need to learn each user’s style, selectively generate drafts only for messages likely to benefit, and continuously adapt prompt strategies.
Mandal, Soumik, et al., “Utilization of Generative AI-Drafted Responses for Managing Patient-Provider Communication,” 2 Sept. 2025, https://doi.org/10.1101/2025.08.31.25334725.
AI tools can help hackers plant hidden flaws in computer chips, study finds
Widely available artificial intelligence systems can be used to deliberately insert hard-to-detect security vulnerabilities into the code that defines computer chips, according to new research from the NYU Tandon School of Engineering, a warning about the potential weaponization of AI in hardware design.
In a study published by IEEE Security & Privacy, an NYU Tandon research team showed that large language models like ChatGPT could help both novices and experts create “hardware Trojans,” malicious modifications hidden within chip designs that can leak sensitive information, disable systems or grant unauthorized access to attackers.
To test whether AI could facilitate malicious hardware modifications, the researchers organized a competition over two years called the AI Hardware Attack Challenge as part of CSAW, an annual student-run cybersecurity event held by the NYU Center for Cybersecurity.
Participants were challenged to use generative AI to insert exploitable vulnerabilities into open-source hardware designs, including RISC-V processors and cryptographic accelerators, then demonstrate working attacks.
"AI tools definitely simplify the process of adding these vulnerabilities," said Jason Blocklove, a Ph.D. candidate in NYU Tandon’s Electrical and Computer Engineering (ECE) Department and lead author of the study. "Some teams fully automated the process. Others interacted with large language models to understand the design better, identify where vulnerabilities could be inserted, and then write relatively simple malicious code."
The most effective submissions came from teams that created automated tools requiring minimal human oversight. These systems could analyze hardware code to identify vulnerable locations, then generate and insert custom trojans without direct human intervention. The AI-generated flaws included backdoors granting unauthorized memory access, mechanisms to leak encryption keys, and logic designed to crash systems under specific conditions.
Perhaps most concerning, several teams with little hardware expertise successfully created sophisticated attacks. Two submissions came from undergraduate teams with minimal prior knowledge of chip design or security, yet both produced vulnerabilities rated medium to high severity by standard scoring systems.
Most large language models include safeguards designed to prevent malicious use, but competition participants found these protections relatively easy to circumvent. One winning team crafted prompts framing malicious requests as academic scenarios, successfully inducing the AI to generate working hardware trojans. Other teams discovered that requesting responses in less common languages could bypass content filters entirely.
The permanence of hardware vulnerabilities amplifies the risk. Unlike software flaws that can be corrected through updates, errors in manufactured chips cannot be fixed without replacing the components entirely.
"Once a chip has been manufactured, there is no way to fix anything in it without replacing the components themselves," Blocklove said. "That's why researchers focus on hardware security. We’re getting ahead of problems that don't exist in the real world yet but could conceivably occur. If such an attack did happen, the consequences could be catastrophic."
The research follows earlier work by the same team demonstrating AI's potential benefits for chip design. In their "Chip Chat" project, the researchers showed that ChatGPT could help design a functioning microprocessor. The new study reveals the technology's dual nature. The same capabilities that could democratize chip design might also enable new forms of attack.
"This competition has highlighted both a need for improved LLM guardrails as well as a major need for improved verification and security analysis tools," the researchers wrote.
The researchers emphasized that commercially available AI models represent only the beginning of potential threats. More specialized open-source models, which remain largely unexplored for these purposes, could prove even more capable of generating sophisticated hardware attacks.
The paper’s senior author is NYU Tandon’s Ramesh Karri, Professor and Chair of ECE. Karri is also on the faculty of the Center for Advanced Technology in Telecommunications and co-founded and co-directed the NYU Center for Cybersecurity (CCS). Karri founded the Embedded Security Challenge (ESC), the first hardware security challenge worldwide. Hammond Pearce, Senior Lecturer at UNSW Sydney's School of Computer Science and Engineering and a former NYU Tandon research assistant professor in ECE and CCS, is the other co-author.
J. Blocklove, H. Pearce, and R. Karri, “Lowering the Bar: How Large Language Models Can be Used as a Copilot by Hardware Hackers,” IEEE Security & Privacy, early access, pp. 2-12, doi: 10.1109/MSEC.2025.3600140.