Research News
Shape-shifting particles let scientists control how fluids flow
Imagine a liquid that flows freely one moment, then stiffens into a near-solid the next, and then can switch back with a simple change in temperature. Researchers at the University of Chicago Pritzker School of Molecular Engineering and NYU Tandon have now developed such a material, using tiny particles that can change their shape and stiffness on demand. Their research paper “Tunable shear thickening, aging, and rejuvenation in suspensions of shape-memory endowed liquid crystalline particles,” published in PNAS, demonstrates a new way to regulate how dense suspensions — mixtures of solid particles in a fluid — behave under stress.
These new particles are made from liquid crystal elastomers (LCEs), a material that combines the structure of liquid crystals with the flexibility of rubber. When heated or cooled, the particles change shape: they soften and become round at higher temperatures, and stiffen into irregular, angular forms at lower ones. This change has a dramatic effect on how the suspension flows.
From Smooth to Stiff and Back Again
Dense suspensions are found in everyday products like paints, toothpaste, and cement. Under certain conditions, these materials can thicken unpredictably under force, a behavior known as shear thickening. In some cases, the thickening becomes so extreme that the material jams and stops flowing altogether. This can cause problems in processing and manufacturing, where smooth, consistent flow is essential.
The research team, co-led by UChicago PME professor of Molecular Engineering Stuart Rowan and Juan de Pablo, formerly at UChicago and now Executive Vice President for Global Science and Technology at NYU and Executive Dean of the NYU Tandon School of Engineering, designed LCE particles whose shapes can be programmed during synthesis. They found that suspensions made from more irregular, "potato-shaped" particles thickened much more under stress than those made from smoother, "pea-shaped" ones.
But the key breakthrough came with temperature control. At lower temperatures, the potato-shaped particles were rigid and irregular, and their suspensions exhibited strong shear thickening — resisting flow when stress increased. As the temperature rose past 45–50 °C, however, the particles transformed into softer, rounder shapes, and the suspension became much easier to stir or pump. The researchers showed that this change could be repeated over and over again.
“The basic behavior is akin to what one observes with corn starch and water, where under small shear the material is a liquid, but when submitted to high shear it is a solid. There are several factors that play a role in such shear behavior, including shape and stiffness of the particles in the suspensions. Here we show that it is possible to design stimuli-response particles that allow access to suspensions with tunable flow behavior,” said Rowan.
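To make the switching behavior concrete, the toy model below is an illustration only, not the authors' constitutive model: viscosity rises steeply with applied stress while the particles are in their rigid, irregular low-temperature state, and the thickening largely disappears above the shape-transition temperature. All numerical values are placeholder assumptions.

```python
import numpy as np

# Toy illustration (not the paper's model): a stress-dependent viscosity whose
# shear-thickening strength switches off above the particles' shape-transition
# temperature (~45-50 C in the study). Numbers are placeholders chosen only to
# make the qualitative behavior visible.

def viscosity(stress_pa, temp_c, eta_min=0.1, eta_max=100.0,
              onset_stress_pa=10.0, transition_temp_c=47.5, width_c=2.5):
    """Viscosity (Pa*s) as a function of applied shear stress and temperature."""
    # Fraction of particles still in the rigid, irregular ("potato") state.
    rigid_fraction = 1.0 / (1.0 + np.exp((temp_c - transition_temp_c) / width_c))
    # Sigmoidal rise of viscosity with stress, scaled by how rigid the particles are.
    thickening = 1.0 / (1.0 + (onset_stress_pa / np.maximum(stress_pa, 1e-9)) ** 2)
    return eta_min + (eta_max - eta_min) * rigid_fraction * thickening

for T in (25.0, 55.0):                      # below vs. above the transition
    for sigma in (1.0, 50.0):               # low vs. high shear stress
        print(f"T={T:4.1f} C, stress={sigma:5.1f} Pa -> eta = {viscosity(sigma, T):7.2f} Pa*s")
```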
Chuqiao Chen, first author of the study and a Ph.D. candidate in the University of Chicago’s Pritzker School of Molecular Engineering at the time of the research, added, “In a narrow temperature window, we saw a full transition from a jammed, thick state to a freely flowing one. It’s like flipping a switch on how the fluid behaves.”
A Suspension with a Memory
Over time, even in the absence of flow, the particle suspensions tended to settle into more solid-like states in a process known as “aging.” The particles clump together and form structures that resist movement. This behavior, common in dense materials, can make them hard to work with after storage.
However, the LCE-based suspensions have a built-in solution. When the aged suspensions were heated above their shape-transition temperature, the particles relaxed into spherical forms and the clusters broke apart. The suspension returned to a fluid state, effectively resetting itself. This transformation did not require stirring or mixing, just a brief heating and cooling cycle.
The ability to control both particle shape and stiffness with temperature gives researchers an entirely new handle on how dense fluids behave. Traditionally, tuning the flow properties of suspensions required adjusting how many particles were present or modifying the fluid’s chemistry. With this approach, the same suspension can be adjusted simply by changing the temperature.
The potential uses are wide-ranging. In additive manufacturing (3D printing), for example, preventing jamming and controlling flow are major concerns. In industrial mixing, being able to “switch off” thickening behavior could help improve efficiency. The team’s findings suggest that even modest heating or cooling could achieve this.
The research opens a path toward materials that can flow, jam, and unjam on cue — not by changing their contents, but by altering how their parts are arranged and how they interact.
In addition to Rowan, de Pablo and Chen, the study's authors are Carina D. V. Martinez Narvaez, Nina Chang, and Carlos Medina Jimenez of the University of Chicago's Pritzker School of Molecular Engineering; Joseph M. Dennis of the Army Research Laboratory; and Heinrich M. Jaeger of the University of Chicago's James Franck Institute.
The University of Chicago Materials Research Science and Engineering Center (which is funded by the National Science Foundation) and the Army Research Laboratory Cooperative Agreement provided funding for the research.
NYU Tandon engineers create first immunocompetent leukemia device for CAR T immunotherapy screening
A team of researchers led by NYU Tandon School of Engineering's Weiqiang Chen has developed a miniature device that could transform how blood cancer treatments are tested and tailored for patients.
The team’s microscope slide-sized "leukemia-on-a-chip" is the first laboratory device to successfully combine both the physical structure of bone marrow and a functioning human immune system, an advance that could dramatically accelerate new immunotherapy development.
This innovation comes at a particularly timely moment, as the FDA recently announced a plan to phase out animal testing requirements for monoclonal antibodies and other drugs, releasing a comprehensive roadmap for reducing animal testing in preclinical safety studies.
As described in a paper published in Nature Biomedical Engineering, the new technology allows scientists to observe in real time how immunotherapy drugs interact with cancer cells in an environment that closely mimics the human body, representing exactly the type of alternative testing method the FDA is now encouraging.
"We can now watch cancer treatments unfold as they would in a patient, but under completely controlled conditions without animal experimentation," said Chen, professor of mechanical and aerospace engineering.
Chimeric Antigen Receptor T-cell therapy, or CAR T-cell therapy, has emerged as a promising immunotherapy approach for treating certain blood cancers. It involves removing a patient's immune cells, genetically engineering them to target cancer, and returning them to the patient's body. Despite its potential, nearly half of patients relapse, and many experience serious side effects including cytokine release syndrome.
Scientists have struggled to improve these treatments, in part because conventional testing methods fall short. Animal models are time-consuming and difficult to monitor (and fail to accurately mimic the human immune system's complex responses to these therapies), while standard laboratory tests do not represent the complex environment where cancer and immune cells interact.
The new device recreates three regions of bone marrow where leukemia develops: blood vessels, the surrounding marrow cavity, and the outer bone lining. When populated with patient bone marrow cells, the system begins to self-organize, with cells producing their own structural proteins such as collagen, fibronectin, and laminin — recreating not only the physical structure of the tissue but also, crucially, its complex immune environment.
Using advanced imaging techniques, the researchers watched individual immune cells as they moved through blood vessels, recognized cancer cells, and eliminated them, a process previously impossible to witness with such clarity in a living system. The team could track precisely how fast the CAR T-cells traveled while hunting down cancer cells, revealing that these engineered immune cells move with purpose when searching for their targets, slowing down when they detect nearby cancer cells to engage and destroy them.
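As a rough illustration of this kind of motility analysis, the sketch below computes instantaneous speeds along a tracked CAR T-cell trajectory and compares speeds near versus far from the nearest cancer cell. The data, frame interval, and distance threshold are hypothetical stand-ins, not values or code from the study.

```python
import numpy as np

# Hypothetical sketch (not the study's pipeline): given tracked CAR T-cell positions
# and tumor-cell positions from time-lapse imaging, compute instantaneous speeds and
# compare how fast the T cell moves near versus far from the closest cancer cell.

def instantaneous_speeds(track_xy_um, dt_s):
    """Speed (um/s) between consecutive frames of one trajectory."""
    steps = np.diff(track_xy_um, axis=0)
    return np.linalg.norm(steps, axis=1) / dt_s

def nearest_tumor_distance(track_xy_um, tumor_xy_um):
    """Distance (um) from each tracked position to the closest tumor cell."""
    diffs = track_xy_um[:, None, :] - tumor_xy_um[None, :, :]
    return np.linalg.norm(diffs, axis=2).min(axis=1)

rng = np.random.default_rng(0)
t_cell_track = np.cumsum(rng.normal(0, 2.0, size=(200, 2)), axis=0)   # fake random-walk track
tumor_cells = rng.uniform(-40, 40, size=(30, 2))                       # fake tumor positions

speeds = instantaneous_speeds(t_cell_track, dt_s=30.0)        # e.g., one frame every 30 s
dists = nearest_tumor_distance(t_cell_track, tumor_cells)[1:]  # distance at the end of each step

near, far = speeds[dists < 15.0], speeds[dists >= 15.0]       # ~one cell diameter as the cutoff
print(f"mean speed near a tumor cell: {near.mean():.3f} um/s ({near.size} steps)")
print(f"mean speed far from tumors:   {far.mean():.3f} um/s ({far.size} steps)")
```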
"We observed immune cells patrolling their environment, making contact with cancer cells, and killing them one by one," Chen said.
The researchers also discovered that engineered immune cells activate other immune cells not directly targeted by the therapy, a "bystander effect" that may contribute to both treatment effectiveness and side effects.
By manipulating the system, the team recreated common clinical scenarios seen in patients: complete remission, treatment resistance, and initial response followed by relapse. Their testing revealed that newer "fourth-generation" CAR T-cells with enhanced design features performed better than standard versions, especially at lower doses.
While animal models require months of preparation, the leukemia chip can be assembled in half a day and supports two-week experiments.
"This technology could eventually allow doctors to test a patient's cancer cells against different therapy designs before treatment begins," Chen explained. "Instead of a one-size-fits-all approach, we could identify which specific treatment would work best for each patient."
The researchers developed a "matrix-based analytical and integrative index" to evaluate the performance of different CAR T-cell products, analyzing multiple aspects of immune response in different scenarios. This comprehensive analysis could provide a more accurate prediction of which therapies will succeed in patients.
Along with Chen, the paper's authors are Chao Ma, Huishu Wang, Lunan Liu and Jie Tong of NYU Tandon; Matthew T. Witkowski of the University of Colorado Anschutz Medical Campus; Iannis Aifantis of NYU Grossman School of Medicine; and Saba Ghassemi of the University of Pennsylvania.
The work was supported by the National Science Foundation, National Institutes of Health, Cancer Research Institute, Leukemia & Lymphoma Society, National Cancer Institute, Alex's Lemonade Stand Cancer Research Foundation, St. Baldrick's Foundation, and other organizations.
Ma, C., Wang, H., Liu, L., et al. Bioengineered immunocompetent preclinical trial-on-chip tool enables screening of CAR T cell therapy for leukaemia. Nat. Biomed. Eng. (2025).
Scientists create light-powered microscopic swimmers that could dramatically advance drug delivery
Scientists have created tiny disk-shaped particles that can swim on their own when hit with light, akin to microscopic robots that move through a special liquid without any external motors or propellers.
Published in Advanced Functional Materials, the work shows how these artificial swimmers could one day be used to deliver cargo in a variety of fluidic situations, with potential applications in drug delivery, water pollutant cleanup, or the creation of new types of smart materials that change their properties on command.
"The essential new principles we discovered — how to make microscopic objects swim on command using simple materials that undergo phase transitions when exposed to controllable energy sources — pave the way for applications that range from design of responsive fluids, controlled drug delivery, and new classes of sensors, to name a few,” explained lead researcher Juan de Pablo.
Currently the Executive Vice President for Global Science and Technology at NYU and Executive Dean of the NYU Tandon School of Engineering, de Pablo conducted this research in collaboration with postdoctoral researchers and faculty at the Pritzker School of Molecular Engineering at the University of Chicago, the Paulson School of Engineering at Harvard University, and the Universidad Autónoma de San Luis Potosí in Mexico.
The research team designed tiny flat discs about 200 micrometers across, which is roughly twice the width of a human hair. These structures are made from dried food dye and propylene glycol, creating solid discs with bumpy surfaces that are essential for swimming.
When placed in a nematic liquid crystal (the same material used in LCD screens) and hit with green LED light, the discs start swimming on their own. The food dye absorbs the light and converts it to heat, warming up the liquid crystal around the disc. This causes the organized liquid crystal molecules (normally lined up like soldiers in formation) to “melt” and become jumbled and disorganized, creating an imbalance that pushes the disc forward.
Depending on temperature and light brightness, the discs behave differently. Under the right conditions, they achieve sustained swimming at speeds of about half a micrometer per second, notable for something this tiny.
The most spectacular results happen when the discs can move in three dimensions. As they swim, they create beautiful flower-like patterns of light visible under a microscope. These patterns evolve from simple 4-petaled shapes to intricate 12-petaled designs as the light gets brighter.
"The platelet lifts due to an incompatibility between the liquid crystal's preferred molecular orientation at different surfaces," said de Pablo. "This creates an uneven elastic response that literally pushes one side of the platelet upward."
What distinguishes this discovery is how different it is from other swimming methods. Unlike bacteria that use whip-like tails or other artificial swimmers that need expensive chemical reactions, these discs create movement using a simple melting transition, cheap materials and basic LED lights. Plus, they have perfect on/off control: when light is turned off, they stop swimming immediately.
This research taps into the growing field of "active matter": materials that can harvest energy from their surroundings and turn it into movement. While these specific discs rely on light and heat to change the extent of order in a liquid crystal, the principles could be adapted to create swimmers in other types of liquid or solid media, powered by light or body heat, for example.
The paper's lead author is Antonio Tavera-Vázquez (Pritzker School of Molecular Engineering at the University of Chicago), a postdoctoral researcher in the group of Juan de Pablo. The team also includes Danai Montalvan-Sorrosa (John A. Paulson School of Engineering and Applied Sciences at Harvard University and the Facultad de Ciencias, Departamento de Biología Celular at Universidad Nacional Autónoma de México); Gustavo R. Perez-Lemus (Pritzker School of Molecular Engineering at the University of Chicago, currently at NYU Tandon); Otilio E. Rodriguez-Lopez (Facultad de Ciencias and Instituto de Física at Universidad Autónoma de San Luis Potosí in Mexico); Jose A. Martinez-Gonzalez (Facultad de Ciencias at Universidad Autónoma de San Luis Potosí); and Vinothan N. Manoharan (John A. Paulson School of Engineering and Applied Sciences and the Department of Physics at Harvard University).
Funding for this research was primarily provided by the Department of Energy, Office of Science Basic Energy Sciences, with additional support for some aspects of the experiments and equipment provided by the National Science Foundation, the Army Research Office MURI program, and the National Institutes of Health.
Tavera-Vázquez, A., et al. (2025). "Microplate active migration emerging from light-induced phase transitions in a nematic liquid crystal." Advanced Functional Materials.
Syntax on the brain: Researchers map how we build sentences, word by word
In a recent study published in Nature Communications Psychology, researchers from NYU led by Associate Professor of Biomedical Engineering at NYU Tandon and Neurology at NYU Grossman School of Medicine Adeen Flinker and Postdoctoral Researcher Adam Morgan used high-resolution electrocorticography (ECoG) to investigate how the human brain assembles sentences from individual words. While much of our understanding of language production has been built on single-word tasks such as picture naming, this new study directly tests whether those insights extend to the far more complex act of producing full sentences.
Ten neurosurgical patients undergoing epilepsy treatment participated in a set of speech tasks that included naming isolated words and describing cartoon scenes using full sentences. By applying machine learning to ECoG data — recorded directly from electrodes on the brain’s surface — the researchers first identified the unique pattern of brain activity for each of six words when they were said in isolation. They then tracked these patterns over time while patients used the same set of words in sentences.
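A minimal sketch of this decoding strategy, on synthetic data standing in for the real recordings (the classifier choice, feature shapes, and window counts are assumptions, not the authors' pipeline): fit a classifier on single-word trials, then apply it across time windows of a sentence trial to track when each word's cortical pattern appears.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Sketch of the decoding idea only: learn each word's cortical pattern from
# isolated-word trials, then slide the trained classifier across time windows
# of a sentence trial to see when each word's pattern reappears.

n_words, trials_per_word, n_electrodes = 6, 40, 120
rng = np.random.default_rng(1)

# Fake per-trial features (e.g., mean high-gamma power per electrode).
word_templates = rng.normal(size=(n_words, n_electrodes))
y_single = np.repeat(np.arange(n_words), trials_per_word)
X_single = word_templates[y_single] + rng.normal(0, 1.0, size=(y_single.size, n_electrodes))

clf = LogisticRegression(max_iter=2000)
acc = cross_val_score(clf, X_single, y_single, cv=5).mean()
print(f"cross-validated single-word decoding accuracy: {acc:.2f}")

# Apply the trained classifier to sliding windows of one (synthetic) sentence trial.
clf.fit(X_single, y_single)
sentence_windows = rng.normal(size=(50, n_electrodes))       # 50 successive time windows
word_probs = clf.predict_proba(sentence_windows)             # (windows, words)
print("most likely word per window (first 10):", word_probs.argmax(axis=1)[:10])
```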
The findings show that while cortical patterns encoding individual words remain stable across different tasks, the way the brain sequences and manages those words changes depending on the sentence structure. In sensorimotor regions, activity closely followed the spoken order of words. But in prefrontal regions, particularly the inferior and middle frontal gyri, words were encoded in a completely different way. These regions encoded not just the words patients were planning to say, but also the syntactic role each word played — subject or object — and how that role fit into the grammatical structure of the sentence.
The researchers further discovered that the prefrontal cortex sustains words throughout the entire duration of passive sentences like “Frankenstein was hit by Dracula.” In these more complex sentences, both nouns remained active in the prefrontal cortex for the sentence's full duration, each persisting even while the other was being spoken. This sustained, parallel encoding suggests that constructing syntactically non-canonical sentences requires the brain to hold and manipulate more information over time, possibly recruiting additional working memory resources.
Interestingly, this dynamic aligns with a longstanding observation in linguistics: most of the world’s languages favor placing subjects before objects. The researchers propose that this could be due, in part, to neural efficiency. Processing less common structures like passives appears to demand more cognitive effort, which over evolutionary time could influence language patterns.
Ultimately, this work offers a detailed glimpse into the cortical choreography of sentence production and challenges some of the long-standing assumptions about how speech unfolds in the brain. Rather than a simple linear process, it appears that speaking involves a flexible interplay between stable word representations and syntactically driven dynamics, shaped by the demands of grammatical structure.
Alongside Flinker and Morgan, Orrin Devinsky, Werner K. Doyle, Patricia Dugan, and Daniel Friedman of NYU Langone contributed to this research. It was supported by multiple grants from the National Institutes of Health.
Morgan, A.M., Devinsky, O., Doyle, W.K. et al. Decoding words during sentence production with ECoG reveals syntactic role encoding and structure-dependent temporal dynamics. Commun Psychol 3, 87 (2025).
NYU Tandon engineers create first AI model specialized for chip design language, earning top journal honor
Researchers at NYU Tandon School of Engineering have created VeriGen, the first specialized artificial intelligence model successfully trained to generate Verilog code, the programming language that describes how a chip's circuitry functions.
The research just earned the ACM Transactions on Design Automation of Electronic Systems 2024 Best Paper Award, affirming it as a major advance in automating the creation of hardware description languages that have traditionally required deep technical expertise.
"General purpose AI models are not very good at generating Verilog code, because there's very little Verilog code on the Internet available for training," said lead author Institute Professor Siddharth Garg, who sits in NYU Tandon’s Department of Electrical and Computer Engineering (ECE) and serves on the faculty of NYU WIRELESS and NYU Center for Cybersecurity (CCS). "These models tend to do well on programming languages that are well represented on GitHub, like C and Python, but tend to do a lot worse on poorly represented languages like Verilog."
Along with Garg, a team of NYU Tandon Ph.D. students, postdoctoral researchers, and faculty members Ramesh Karri and Brendan Dolan-Gavitt tackled this challenge by creating and distributing the largest AI training dataset of Verilog code ever assembled. They scoured GitHub to gather approximately 50,000 Verilog files from public repositories, and supplemented this with content from 70 Verilog textbooks. This data collection process required careful filtering and de-duplication to create a high-quality training corpus.
For their most powerful model, the researchers then fine-tuned Salesforce's open-source CodeGen-16B language model, which contains 16 billion parameters and was originally pre-trained on both natural language and programming code.
The computational demands were substantial. Training required three NVIDIA A100 GPUs working in parallel, with the model parameters alone consuming 30 GB of memory and the full training process requiring approximately 250 GB of GPU memory.
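The general recipe, fine-tuning a pretrained code model on a Verilog corpus with a standard causal language-modeling objective, can be sketched with the Hugging Face transformers library. This is an illustration of the approach rather than the VeriGen training code: the smaller 350M-parameter CodeGen checkpoint stands in for the 16B model, and "verilog_corpus.txt" is a hypothetical file holding the deduplicated Verilog text.

```python
# Minimal causal-LM fine-tuning sketch with Hugging Face transformers; an
# illustration of the general recipe, not the VeriGen training code.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

checkpoint = "Salesforce/codegen-350M-multi"          # small stand-in for codegen-16B
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(checkpoint)

# "verilog_corpus.txt" is a hypothetical dump of the filtered, deduplicated Verilog.
raw = load_dataset("text", data_files={"train": "verilog_corpus.txt"})
def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=1024)
tokenized = raw.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="verigen-sketch", per_device_train_batch_size=1,
                           gradient_accumulation_steps=16, num_train_epochs=1,
                           learning_rate=2e-5),
    train_dataset=tokenized["train"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()

# After fine-tuning, prompt with a module header and let the model complete it.
prompt = "module counter(input clk, input rst, output reg [7:0] count);\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=120)[0]))
```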
This fine-tuned model performed impressively in testing, outperforming commercial state-of-the-art models while being an order of magnitude smaller and fully open-source. In their evaluation, the fine-tuned CodeGen-16B achieved a 41.9% rate of functionally correct code versus 35.4% for the commercial code-davinci-002 model — with fine-tuning boosting accuracy from just 1.09% to 27%, demonstrating the significant advantage of domain-specific training.
"We've shown that by fine-tuning a model on that specific task you care about, you can get orders of magnitude reduction in the size of the model," Garg noted, highlighting how their approach improved both accuracy and efficiency. The smaller size enables the model to run on standard laptops rather than requiring specialized hardware.
The team evaluated VeriGen's capabilities across a range of increasingly complex hardware design tasks, from basic digital components to advanced finite state machines. While still not perfect — particularly on the most complex challenges — VeriGen demonstrated remarkable improvements over general-purpose models, especially in generating syntactically correct code.
The significance of this work has been recognized in the field, with subsequent research by NVIDIA in 2025 acknowledging VeriGen as one of the earliest and most important benchmarks for LLM-based Verilog generation, helping establish foundations for rapid advancements in AI-assisted hardware design.
The project's open-source nature has already sparked significant interest in the field. While VeriGen was the team's first model presented in the ACM paper, they've since developed an improved family of models called 'CL Verilog' that perform even better.
These newer models have been provided to hardware companies including Qualcomm and NXP for evaluation of potential commercial applications. The work builds upon earlier NYU Tandon efforts including the 2020 DAVE (Deriving Automatically Verilog from English) project, advancing the field by creating a more comprehensive solution through large-scale fine-tuning of language models.
VeriGen complements other AI-assisted chip design initiatives from NYU Tandon aimed at democratizing hardware: their Chip Chat project created the first functional microchip designed through natural language conversations with GPT-4; Chips4All, supported by the National Science Foundation's (NSF’s) Research Traineeship program, trains diverse STEM graduate students in chip design; and BASICS, funded through NSF's Experiential Learning for Emerging and Novel Technologies initiative, teaches chip design to non-STEM professionals.
In addition to Garg, the VeriGen paper authors are Shailja Thakur (formerly NYU Tandon); Baleegh Ahmad (NYU Tandon Ph.D. '25); Hammond Pearce (formerly NYU Tandon, now at the University of New South Wales); Benjamin Tan (University of Calgary); Dolan-Gavitt (NYU Tandon Associate Professor of Computer Science and Engineering (CSE) and CCS faculty); and Karri (NYU Tandon Professor of ECE and CCS faculty).
Funding for the VeriGen research came from the National Science Foundation and the Army Research Office.
Shailja Thakur, Baleegh Ahmad, Hammond Pearce, Benjamin Tan, Brendan Dolan-Gavitt, Ramesh Karri, and Siddharth Garg. 2024. VeriGen: A Large Language Model for Verilog Code Generation. ACM Trans. Des. Autom. Electron. Syst. 29, 3, Article 46 (May 2024), 31 pages.
NYU Tandon researchers develop simple, low-cost method to detect GPS trackers hidden in vehicles, empowering cyberstalking victims
A team of researchers at NYU Tandon School of Engineering has developed a novel method to detect hidden GPS tracking devices in vehicles, offering new hope to victims of technology-enabled domestic abuse.
Overseen by NYU Tandon assistant professor Danny Y. Huang, the research addresses a growing problem: abusers secretly placing GPS trackers in their partners' or ex-partners' vehicles to monitor their movements. Traditionally, detecting these devices has been difficult and expensive, leaving many victims vulnerable to continued surveillance.
"The tech industry has created many tools that can be repurposed for cyberstalking, but has invested far less in technologies that protect privacy," said Huang. “We believe this innovation has the potential to significantly empower victims of domestic abuse by providing them with a readily accessible way to regain their privacy and safety."
Huang holds appointments in both the Electrical & Computer Engineering and Computer Science & Engineering departments. He is also a member of NYU Center for Cybersecurity, NYU Tandon's Center for Urban Science + Progress, and Center for Advanced Technology in Telecommunications.
"GPS tracking in domestic abuse situations is unfortunately common," said Moshe (Mo) Satt, a Ph.D. candidate working under Huang who is the lead author on the research paper that he will present at USENIX VehicleSec '25, a major cybersecurity conference, in August 2025. Satt is the Chief Information Security Officer (CISO) at the NYC Department of Sanitation and teaches several cybersecurity courses at the graduate and undergraduate levels as an NYU Tandon adjunct faculty member. "We wanted to develop a tool to combat it that is inexpensive and potentially very user-friendly."
The team's innovative approach relies on tinySA, a $150 palm-sized spectrum analyzer typically used by amateur radio enthusiasts for testing antennas and debugging wireless equipment.
Using this commercially available device, the researchers developed a specialized algorithm that distinguishes weak tracker signals amid cellular transmission noise by monitoring LTE IoT uplink frequency bands. This approach — the first to reliably detect concealed 4G LTE IoT cellular GPS vehicle trackers using affordable equipment — isolates the signals that concealed devices send to nearby cell towers, solving technical challenges in determining which frequencies to scan, interpreting results, and filtering false positives.
For victims, the setup can potentially be used as a mobile detection system while driving. If the user observes regular signal peaks on the tinySA during or after a drive, they can likely identify the presence of a cellular GPS tracker without requiring technical expertise. The setup could detect hidden GPS tracker signals within a range of up to three feet, according to the study.
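The core signal-processing step can be pictured with the short sketch below, a simplified illustration rather than the team's algorithm: it takes one spectrum sweep exported as frequency and power values, estimates the noise floor within LTE uplink bands commonly used by cellular IoT devices in the U.S., and flags strong in-band peaks. Repeated flags across sweeps during a drive would point to a hidden tracker. The band edges, threshold, and data format here are assumptions for illustration.

```python
import numpy as np

# Simplified sketch (not the paper's algorithm): scan a spectrum-analyzer sweep
# exported as (frequency_hz, power_dbm) pairs, estimate the noise floor inside
# LTE uplink bands, and flag sweeps containing strong, tracker-like bursts.

UPLINK_BANDS_HZ = {                      # illustrative U.S. LTE uplink bands
    "LTE band 12 UL": (699e6, 716e6),
    "LTE band 13 UL": (777e6, 787e6),
}

def band_peaks(freq_hz, power_dbm, band_hz, margin_db=15.0):
    """Frequencies in the band whose power exceeds the noise floor by margin_db."""
    in_band = (freq_hz >= band_hz[0]) & (freq_hz <= band_hz[1])
    if not in_band.any():
        return np.array([])
    noise_floor = np.median(power_dbm[in_band])        # robust noise-floor estimate
    strong = in_band & (power_dbm > noise_floor + margin_db)
    return freq_hz[strong]

def sweep_has_tracker_like_burst(freq_hz, power_dbm):
    return any(band_peaks(freq_hz, power_dbm, band).size > 0
               for band in UPLINK_BANDS_HZ.values())

# Example with synthetic data: a flat ~-95 dBm floor plus one burst near 782 MHz.
freq = np.linspace(650e6, 800e6, 3001)
power = np.full_like(freq, -95.0) + np.random.default_rng(2).normal(0, 1.5, freq.size)
power[np.abs(freq - 782e6) < 0.2e6] = -60.0
print("tracker-like uplink burst detected:", sweep_has_tracker_like_burst(freq, power))
```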
The research addresses a significant public safety concern affecting approximately 13.5 million stalking victims annually in the United States, 80 percent of whom experience stalking that involves technology. In some cases, this surveillance has led to violent attacks.
The researchers are developing several pathways to real-world implementation, including smartphone integration, automated "black box" detection systems that could notify the user if a tracker is detected, partnerships with abuse support organizations, and a mobile detection service model similar to roadside assistance.
In addition to Satt and Huang, the paper’s authors are Donghan Hu, an NYU Tandon postdoctoral researcher working under Huang, and NYU Tandon Ph.D. candidate Patrick Zielinski.
This research was made possible through funding from the NYU Center for Cybersecurity, NYU mLab, and NYU Tandon School of Engineering. Additional support was provided by the ARDC (Amateur Radio Digital Communications), ARRL (American Radio Relay League), Cornell Tech CETA (Clinic to End Tech Abuse), KAIST System Security lab, NYU OSIRIS, and NYU Tandon UGSRP (Undergraduate Summer Research Program).
Satt, Moshe & Hu, Donghan & Zielinski, Patrick & Huang, Danny. (2025). You Can Drive But You Cannot Hide: Detection of Hidden Cellular GPS Vehicle Trackers.
New 3D flood visualizations help communities understand rising water risks
As climate change intensifies extreme weather, two new NYU studies show 3D flood visualizations developed by a cross-institutional research team dramatically outperform traditional maps for communicating risk.
When residents of Sunset Park, Brooklyn compared the two flood-visualization formats, 92% preferred the dynamic 3D approach.
"The challenge we face is that substantial sectors of the population ignore flood warnings and fail to evacuate," said Professor Debra F. Laefer, the NYU Tandon School of Engineering senior researcher involved in both studies who holds appointments in the Civil and Urban Engineering Department and in the Center for Urban Science + Progress (CUSP). "Our findings suggest dynamic 3D visualizations could significantly improve how we communicate these life-threatening risks."
A Laefer-led team from NYU Tandon and NYU Steinhardt School of Culture, Education, and Human Development — with colleagues from University College Dublin and Queen’s University Belfast — developed a low-cost visualization method that transforms LiDAR scans of urban streets into immersive flood simulations, detailed in a paper titled “Low-Cost, LiDAR-Based, Dynamic, Flood Risk Communication Viewer,” published in Remote Sensing.
Under the leadership of Tandon and Steinhardt researchers, the team evaluated these visualizations in a second paper, "From 2D to 3D: Flood risk communication in a flood-prone neighborhood via dynamic, isometric street views," published in Progress in Disaster Science. This study compared visualization methods for a Category 3 hurricane scenario: a conventional NOAA flood map versus a 3D simulation showing water rising to three feet at the intersection of 4th Avenue and 36th Street in Sunset Park.
The results were stark. Not only did participants overwhelmingly prefer the 3D visualization, but 100% found it more authoritative than the traditional map, and they reported a significantly better understanding of evacuation challenges.
What makes this approach innovative is its computational efficiency. Unlike existing systems that require powerful hardware, it decouples flood prediction from visualization, allowing operation on standard computers.
"We achieved this using a Potree viewer coupled with Inkscape to create dynamic flood water flow," Laefer notes. "Our study didn't require a graphics card — just a single, quad-core processor."
The visualization includes realistic water movement created through compounding sine wave functions, with algorithms controlling transparency, color, and flow speed.
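As a rough sketch of what compounding sine waves can look like in practice (illustrative only, not the viewer's actual implementation), the snippet below sums a few sine components with different wavelengths, speeds, and directions to animate a rippling water surface around a chosen base flood depth.

```python
import numpy as np

# Toy sketch of a compounded sine-wave water surface: several components with
# different wavelengths, speeds, and directions are summed to animate ripples
# around a base flood depth. All parameters are illustrative assumptions.

WAVE_COMPONENTS = [            # (amplitude_m, wavelength_m, speed_m_s, direction_rad)
    (0.05, 4.0, 0.6, 0.0),
    (0.02, 1.5, 1.1, 0.8),
    (0.01, 0.6, 1.8, 2.1),
]

def water_height(x_m, y_m, t_s, base_depth_m=0.91):   # ~3 ft base flood depth
    h = base_depth_m
    for amp, wavelength, speed, theta in WAVE_COMPONENTS:
        k = 2 * np.pi / wavelength                    # spatial frequency
        phase = k * (x_m * np.cos(theta) + y_m * np.sin(theta)) - k * speed * t_s
        h += amp * np.sin(phase)
    return h

# Sample the animated surface on a small grid at two times.
xs, ys = np.meshgrid(np.linspace(0, 10, 5), np.linspace(0, 10, 5))
for t in (0.0, 1.0):
    print(f"t={t:.1f} s  mean depth = {water_height(xs, ys, t).mean():.3f} m")
```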
"One of the most rewarding aspects was seeing how participants instantly grasped the flood severity without technical explanations," said Kshitij Chandna, a master’s student advised by Laefer at the time of the research, who is the co-author on both studies. "When someone looks at a 3D simulation and says 'I would need to evacuate,' you know you've successfully communicated risk in a way traditional maps cannot."
For Sunset Park's immigrant community, many facing language barriers, the intuitive 3D visualization proved particularly valuable. Participants described it as "more realistic," "clearer," and "more visual" than traditional maps.
The implications extend beyond flood visualization. The researchers have already demonstrated visualizing water flowing through pipes and are exploring applications for other types of flooding.
As climate change increases flooding frequency, this research suggests dynamic 3D visualization could bridge the gap between abstract warnings and concrete actions needed to save lives.
The Remote Sensing paper's authors are Laefer, Chandna, Evan O'Keeffe, Kim Hertz (NYU Tandon); Jing Zhu and Raul Lejano (NYU Steinhardt); Anh Vo and Michela Bertolotto (University College Dublin); and Ulrich Ofterdinger (Queen's University Belfast). The Progress in Disaster Science paper was authored by Zhu, Laefer, Lejano, Peter Gmelch (NYU Tandon), O'Keeffe, and Chandna.
The United States National Science Foundation provided funding for this research, which builds upon Laefer's pioneering work in LiDAR and remote sensing technologies for urban applications.
Jing Zhu, Debra F. Laefer, Raul P. Lejano, Peter Gmelch, Evan O'Keeffe, Kshitij Chandna, From 2D to 3D: Flood risk communication in a flood-prone neighborhood via dynamic, isometric street views, Progress in Disaster Science, Volume 26, 2025, 100419, ISSN 2590-0617, https://doi.org/10.1016/j.pdisas.2025.100419.
Novel technique boosts cadmium telluride solar cell performance by 13 percent
An NYU Tandon-led research team has developed a novel technique to significantly enhance the performance of cadmium telluride (CdTe) solar cells. Unlike conventional silicon panels that use thick layers of silicon, these solar cells use a simpler, less expensive approach — depositing an ultra-thin layer of cadmium and tellurium compounds onto glass.
This thinner design reduces manufacturing costs while helping the cells maintain their efficiency at high temperatures and in low-light conditions. Though less common than traditional silicon panels — the familiar dark blue or black panels seen on rooftops — CdTe solar cells are an emerging technology primarily used in utility-scale solar farms, currently accounting for about 40% of U.S. large-scale solar installations.
A persistent challenge with these cells, however, has been damage that occurs during a critical manufacturing step — when the metal wiring is added to collect electricity from the cell. The high-temperature process of applying these metal contacts can damage the material, particularly at the boundaries where microscopic crystal regions meet, like weak points between tiles in a mosaic. This damage creates barriers that reduce the cell's power output.
In research published in ACS Applied Materials & Interfaces, the team found that applying an ultra-thin oxide coating — either aluminum gallium oxide (AlGaOx) or silicon oxide (SiOx) — before adding metal contacts like gold prevents this damage. The coating naturally collects at these vulnerable boundaries between crystal regions, protecting them while leaving the rest of the surface clear for electrical contact.
This simple and scalable solution has led to major improvements in the cells' electrical output, increasing the maximum voltage they can produce by 13% and boosting their overall power generation.
"Silicon solar cells are rated at room temperature, but their performance drops as temperatures rise. You don't have that problem with CdTe cells, which makes them particularly valuable in warmer regions like the Caribbean or near the equator," said André Taylor, an NYU Tandon professor of chemical and biomolecular engineering and one of the paper’s authors. The paper’s corresponding author is B. Edward Sartor, who was a doctoral student in Taylor's lab when the study was conducted.
With the protective layer in place, the open-circuit voltage of the solar cells increased from 750 to 850 millivolts. The fill factor, another key efficiency metric, also improved, provided the oxide layer remained thin enough to avoid increasing electrical resistance.
"The AlGaOx layer protects the cell when you're evaporating the gold contacts, which come in at high temperature and condense on the surface. Without this protection, you damage the interface and create defects that lower device performance," Taylor explained.
The oxide layer is applied through a simple spin-coating process, a widely used technique in semiconductor manufacturing that allows precise control over coverage. The researchers also found that the method works with different metal contacts, including gold and molybdenum, and that it shows potential benefits when combined with zinc telluride nitrogen-doped (ZnTe:N) buffer layers, which help facilitate the movement of positive charge carriers (holes) in the solar cell.
"This discovery suggests a promising path to make CdTe solar cells more efficient and reliable,” said Taylor. “It's a straightforward adjustment to existing manufacturing processes that could potentially advance solar energy production."
The research comes at a critical time for U.S. solar manufacturing. After losing the silicon solar cell market to China, CdTe technology offers a strategic opportunity to rebuild domestic manufacturing capabilities, with companies like First Solar leading the way. The technology also offers a unique sustainability angle: tellurium, a key ingredient, can be extracted from copper mining operations, where it was previously considered a waste material, potentially creating new economic value.
Funded by the U.S. Department of Energy's Solar Energy Technologies Office, this research adds to Taylor's diverse solar technology portfolio. His research group has explored multiple solar cell technologies, including polymers, small molecule solar cells, and hybrid cells combining carbon nanotubes with silicon. The group previously introduced the world's first Förster Resonance Energy Transfer (FRET)-based solar cell and continues to advance research in emerging technologies like perovskite solar cells.
Sartor, B. E., Muzzio, R., Jiang, C.-S., Lee, C., Perkins, C. L., Taylor, A. D., & Reese, M. O. (2025). Selective Isolation of Surface Grain Boundaries by Oxide Dielectrics Improves Cd(Se,Te) Device Performance. ACS Applied Materials & Interfaces, 17(5), 7641–7647. doi:10.1021/acsami.4c16902
Sophisticated data analysis uncovers how city living disrupts ADHD's path to obesity
A hidden link between impulsivity and obesity may not be fixed in human biology but shaped by the cities we live in.
Using a novel engineering-based approach, researchers from NYU Tandon School of Engineering and Italy's Istituto Superiore di Sanità found that attention-deficit/hyperactivity disorder (ADHD) contributes to obesity not only directly through known biological pathways but also indirectly, by reducing physical activity. The findings are published in PLOS Complex Systems.
The study also found that obesity prevalence is influenced by other city-level variables, such as access to mental health services and food insecurity, findings that open the door to potential mitigation strategies.
To uncover the nexus between ADHD and obesity, the research team applied urban scaling laws — a mathematical framework from complexity science — to public health data from 915 U.S. cities. Urban scaling describes how features of cities change with population size, similar to how biological traits scale with body size.
They found that both ADHD and obesity prevalence scale sublinearly with population: as cities grow, per-capita prevalence declines. Meanwhile, access to mental health providers and rates of college education rise superlinearly, increasing faster than city size. Larger cities, it seems, offer not just more services, but disproportionately more support for conditions linked to impulsivity.
But size alone doesn’t tell the full story. To reveal where cities over- or underperform relative to expectations, the researchers used Scale-Adjusted Metropolitan Indicators (SAMIs). SAMIs measure how much a city deviates from what urban scaling would predict — highlighting, for example, when a small city has unusually low obesity rates or when a large one falls short on mental health access. These deviations became the foundation for a causal analysis.
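The scaling-plus-SAMI computation itself is compact. The sketch below, on synthetic data, shows the generic recipe rather than the study's code: fit the power law Y = a·N^β in log-log space, then take each city's SAMI as the residual between its observed and predicted log value.

```python
import numpy as np

# Illustrative sketch of the scaling analysis (not the study's code): fit
# Y = a * N^beta in log-log space, then compute each city's SAMI as the
# residual log(Y_observed) - log(Y_predicted). Data below are synthetic.

rng = np.random.default_rng(3)
population = 10 ** rng.uniform(4.5, 7.0, size=300)           # fake city sizes
true_beta = 0.9                                               # sublinear exponent for counts
cases = 0.02 * population ** true_beta * np.exp(rng.normal(0, 0.2, population.size))

log_n, log_y = np.log(population), np.log(cases)
beta, log_a = np.polyfit(log_n, log_y, deg=1)                 # slope = scaling exponent
sami = log_y - (log_a + beta * log_n)                         # scale-adjusted deviation per city

print(f"estimated exponent beta = {beta:.2f} (sublinear if < 1)")
print("cities deviating most above expectation:", np.argsort(sami)[-3:])
```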
"Urban scaling and causal discovery methods allow us to see relationships that traditional health research might miss," explains Maurizio Porfiri, senior author on the PLOS paper. Porfiri is an NYU Tandon Institute Professor with appointments in the Departments of Mechanical and Aerospace Engineering, Biomedical Engineering, Civil and Urban Engineering, and Technology Management and Innovation. He also serves as Director of the NYU Center for Urban Science + Progress (CUSP).
"Without accounting for how city size naturally affects health metrics, we’d misattribute success or failure to the wrong factors. By filtering out these population effects first, we can identify the true causal pathways linking ADHD to obesity — and more importantly, how urban environments modify these relationships,” adds Tian Gan, Ph.D. student in Mechanical Engineering at NYU Tandon. Simone Macrì, senior scientist at the Istituto Superiore di Sanità in Rome, further comments that “This approach reveals precise intervention points that wouldn’t be apparent otherwise"
Using SAMIs, the team mapped a network of interrelated variables: ADHD prevalence led to higher physical inactivity, which in turn increased obesity. Access to mental health care helped reduce inactivity, indirectly lowering obesity risk. Higher prevalence of college education correlated with better mental health access and more physical activity.
This causal map revealed a dynamic system in which impulsivity, health behaviors, and urban infrastructure interact — and cities themselves either reinforce or weaken these effects.
These patterns weren’t uniform. When the researchers mapped SAMIs by region, cities in the Southeastern and Southwestern U.S. consistently showed greater disparities. Neighboring cities often displayed striking differences in ADHD and obesity prevalence, mental health access, and food insecurity — suggesting that local policy, culture, and resources may either amplify or buffer these behavioral health risks.
“Regional averages can mask a lot of variation,” Porfiri said. “The SAMIs let us see which cities are punching above or below their weight. It’s not just about how big a city is — it’s about how it uses its resources. With this kind of insight, policymakers can target investments in mental health care, education, and physical activity to break the link between ADHD and obesity where it's strongest.”
To validate the findings at a more granular level, the team analyzed data from over 19,000 children across the U.S. from the National Survey of Children’s Health. The same causal patterns held: children with more severe ADHD were more likely to be obese, especially when physical activity and household education were low.
The study follows earlier work by Porfiri and collaborators using urban scaling to explore firearm ownership and gun violence across U.S. cities. That research revealed that New York City, despite its large size, significantly outperforms expectations on public safety, underscoring how city-level deviations can challenge assumptions about scale and risk.
In addition to Porfiri, Gan, and Macrì, Rayan Succar, a doctoral candidate in Mechanical Engineering working under Porfiri’s advisement, is also an author on the paper.
The research was supported by funding from the U.S. National Science Foundation and the European Union’s Horizon 2020 programme.
Gan T, Succar R, Macrì S, Porfiri M (2025) Investigating the link between impulsivity and obesity through urban scaling laws. PLOS Complex Syst 2(5): e0000046. https://doi.org/10.1371/journal.pcsy.0000046
Mapping a new brain network for naming
How are we able to recall a word we want to say? This basic ability, called word retrieval, is often compromised in patients with brain damage. Interestingly, many patients who can name objects they see, like identifying a pet in the room as a “cat,” struggle to retrieve words in everyday discourse.
Scientists have long sought to understand how the brain retrieves words during speech. A new study by researchers at New York University sheds light on this mystery, revealing a left-lateralized network in the dorsolateral prefrontal cortex that plays a crucial role in naming. The findings, published in Cell Reports, provide new insights into the neural architecture of language, offering potential applications for both neuroscience and clinical interventions.
Mapping the Brain’s Naming Network
Word retrieval is a fundamental aspect of human communication, allowing us to link concepts to language. Despite decades of research, the exact neural dynamics underlying this process — particularly in natural auditory contexts — remain poorly understood.
NYU researchers — led by Biomedical Engineering Graduate Student Leyao Yu and Associate Professor of Biomedical Engineering at NYU Tandon and Neurology at NYU Grossman School of Medicine Adeen Flinker — recorded electrocorticographic (ECoG) data from 48 neurosurgical patients to examine the spatial and temporal organization of language processing in the brain. Using unsupervised clustering techniques, the researchers identified two distinct but overlapping networks responsible for word retrieval. The first, a semantic processing network, was located in the middle and inferior frontal gyri; it was engaged in integrating meaning and was sensitive to how surprising a word was within a given sentence. The second, an articulatory planning network, was situated in the inferior frontal and precentral gyri and played a crucial role in speech production, regardless of whether words were presented visually or auditorily.
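One simple way to picture the clustering step (an illustration only; the study's actual method and features may differ) is to group electrodes by the shape of their trial-averaged response time courses, so that electrodes with similar temporal profiles fall into the same putative network. The sketch below does this with k-means on synthetic traces.

```python
import numpy as np
from sklearn.cluster import KMeans

# Illustration of the clustering idea (not the study's exact method): group
# electrodes by their trial-averaged high-gamma time courses so that electrodes
# with similar temporal profiles end up in the same putative network.

n_electrodes, n_timepoints = 300, 200
rng = np.random.default_rng(4)
t = np.linspace(-0.5, 1.5, n_timepoints)                    # seconds around word onset

# Synthetic data: half the electrodes peak early, half peak late, plus noise.
early = np.exp(-((t - 0.2) ** 2) / 0.02)
late = np.exp(-((t - 0.8) ** 2) / 0.02)
profiles = np.where(rng.random(n_electrodes)[:, None] < 0.5, early, late)
responses = profiles + rng.normal(0, 0.3, size=(n_electrodes, n_timepoints))

# Z-score each electrode's time course, then cluster the shapes.
z = (responses - responses.mean(axis=1, keepdims=True)) / responses.std(axis=1, keepdims=True)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(z)
for k in range(2):
    peak_time = t[z[labels == k].mean(axis=0).argmax()]
    print(f"cluster {k}: {np.sum(labels == k)} electrodes, mean peak at {peak_time:.2f} s")
```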
Auditory Naming and the Prefrontal Cortex
The study builds upon decades of work in language neuroscience. Previous research suggested that different regions of the brain were responsible for retrieving words depending on whether they were seen or heard. However, earlier studies relied on methods with limited temporal resolution, leaving many unanswered questions about how these networks interact in real time.
By leveraging the high spatial and temporal resolution of ECoG, the researchers uncovered a striking ventral-dorsal gradient in the prefrontal cortex. They found that while articulatory planning was localized ventrally, semantic processing was uniquely represented in a dorsal region of the inferior frontal gyrus and middle frontal gyrus — a previously underappreciated hub for language processing.
"These findings suggest that a missing piece in our understanding of language processing lies in this dorsal prefrontal region," explains lead author Leyao Yu. "Our study provides the first direct evidence that this area is involved in mapping sounds to meaning in an auditory context."
Implications for Neuroscience and Medicine
The study has far-reaching implications, not only for theoretical neuroscience but also for clinical applications. Language deficits, such as anomia — the inability to retrieve words — are common in stroke, brain injury, and neurodegenerative disorders. Understanding the precise neural networks involved in word retrieval could lead to better diagnostics and targeted rehabilitation therapies for patients suffering from these conditions.
Additionally, the study provides a roadmap for future research in brain-computer interfaces (BCIs) and neuroprosthetics. By decoding the neural signals associated with naming, scientists could potentially develop assistive devices for individuals with speech impairments, allowing them to communicate more effectively through direct brain-computer communication.
For now, one thing is clear: our ability to name the world around us is not just a simple act of recall, but the result of a sophisticated and finely tuned neural system — one that is now being revealed in greater detail than ever before.
Yu, L., Dugan, P., Doyle, W., Devinsky, O., Friedman, D., & Flinker, A. (2025). A left-lateralized dorsolateral prefrontal network for naming. Cell Reports, 44(5), 115677. https://doi.org/10.1016/j.celrep.2025.115677