Research News
NYU Tandon receives Google DeepMind grant to advance AI adaptation
AI systems that can dynamically adjust to human norms and behaviors may soon become reality, thanks to an NYU Tandon School of Engineering project that has received prestigious grant funding from Google DeepMind.
The research, led by NYU Tandon Assistant Professor Eugene Vinitsky in collaboration with Google DeepMind scientist Edward Hughes, aims to overcome the limitations of today's rigid AI algorithms. Vinitsky is part of Tandon's Civil & Urban Engineering Department and is also on the faculty of C2SMARTER, NYU Tandon's U.S. Department of Transportation-funded Tier 1 University Transportation Center.
Their project, "Adapting to Partners Quickly and Safely in Unforeseen Situations," focuses on a novel technique called meta-learning, or "learning to learn," where AI agents are trained to interact with various synthetic partners. This approach helps AI develop adaptive strategies that generalize to new, unseen partners — including humans. Most AI today operates with predefined algorithms, making it inflexible in real-world interactions.
The goal is to create AI that can proactively infer human behaviors, a crucial advancement for applications like autonomous vehicles and robotic assistants. A primary focus is ensuring AI adapts safely. Human drivers may follow unwritten norms of local driving cultures, for instance, but current AI-powered vehicles struggle with these subtle differences, often leading to overly cautious or rigid driving.
By training AI agents against a diverse set of synthetic driving styles, the research aims to enable AI to transition from a maximally safe starting posture to fluid adaptation as it learns more about the specific environment. The implications extend beyond driving.
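To give a flavor of what training against varied synthetic partners looks like, here is a minimal sketch in Python. The toy merging game, partner styles, and tabular update rule are illustrative assumptions, not the project's actual environment or algorithm, and the sketch omits the meta-learning machinery that would let the agent keep adapting at test time.

```python
# Minimal sketch of training a single agent against a population of diverse
# synthetic partners (illustrative assumptions only, not the project's setup).
import random

ACTIONS = ["yield", "go"]

def make_partner(aggressiveness):
    """Synthetic partner that 'goes' with a fixed probability (its driving style)."""
    def act():
        return "go" if random.random() < aggressiveness else "yield"
    return act

def payoff(agent_action, partner_action):
    # Both go -> near-collision; both yield -> gridlock; otherwise a smooth merge.
    if agent_action == "go" and partner_action == "go":
        return -10.0
    if agent_action == "yield" and partner_action == "yield":
        return -1.0
    return 1.0

# Train against a diverse population of synthetic driving styles.
q = {a: 0.0 for a in ACTIONS}                       # estimated value of each action
partners = [make_partner(p) for p in (0.1, 0.3, 0.5, 0.7, 0.9)]
for episode in range(20000):
    partner = random.choice(partners)               # sample a new synthetic partner
    explore = random.random() < 0.1
    agent_action = random.choice(ACTIONS) if explore else max(q, key=q.get)
    reward = payoff(agent_action, partner())
    q[agent_action] += 0.01 * (reward - q[agent_action])

print(q)   # the learned policy hedges against the aggressive partners it may meet
```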
"This research is about creating AI that understands and adapts to human behavior in a natural way," said Vinitsky. "By building AI that can quickly learn and adjust, we're paving the way for safer autonomous systems, more intuitive robotic assistants, and even AI that can collaborate effectively as a copilot in workplaces."
Unlike static AI models that require frequent manual updates, adaptive AI could continuously refine itself, making it more reliable. Vinitsky and Hughes believe this adaptability is a crucial missing capability in AI systems today.
While promising, the research also presents challenges. AI must infer human intentions responsibly to avoid reinforcing biases or making unsafe decisions. To address these concerns, the team will integrate safety checks and test their approach in diverse real-world scenarios, including human-in-the-loop simulations.
Google DeepMind, formed by the merger of two pioneering AI research labs, Google Brain and DeepMind (acquired by Google in 2014), is an artificial intelligence research lab specializing in machine learning and AI systems development.
NYC speed cameras take six months to change driver behavior, effects vary by neighborhood
New York City's automated speed cameras reduced traffic crashes by 14% and decreased speeding violations by 75% over time, according to research from NYU Tandon's C2SMARTER published in Transportation Research Interdisciplinary Perspectives that tracked more than 1,800 cameras across school zones from 2019 to 2021.
With speeding contributing to approximately one-third of all motor vehicle fatalities nationwide, these findings translate to potentially hundreds of lives saved in America's most densely populated city.
The study from C2SMARTER — a US Department of Transportation Tier 1 University Transportation Center — complements the NYC Department of Transportation's (NYC DOT) own 2024 report, which similarly found a 14% reduction in injuries and fatalities at camera locations compared to control sites without cameras.
While the NYC DOT report provides valuable citywide statistics, the C2SMARTER study reveals several critical insights: cameras typically reach a strong level of effectiveness within six months, effectiveness patterns vary geographically across the city, and changes in driving behavior may exhibit a ‘time-lag’ effect.
"Our research methodology provided an in-depth short-term and long-term analysis of these cameras, taking into consideration the continuous installation of new cameras," explained Jingqin Gao, Assistant Director of Research at C2SMARTER and the paper's lead author. "By tracking each camera's performance over time, we uncovered spatial and temporal patterns that may be less visible in citywide data, providing officials additional insights on the longitudinal effects and more strategic positioning of future cameras to maximize the program’s effectiveness."
NYC's speed camera program has evolved from a 20-camera pilot in 2013 to a network of 2,200 cameras across all 750 school zones by 2023 — expanding from limited hours (6 a.m.-10 p.m. weekdays) to 24/7 operation in 2022. C2SMARTER's research examines the critical 2019-2021 timeframe when the program first achieved citywide scale.
What sets this study apart is its longitudinal approach, tracking fixed camera sites over extended periods. The research revealed that most cameras achieve their safety purpose within six months, with violations dropping and staying low, showing that drivers changed their behavior and slowed down, and that the cameras are working as intended to deter speeding.
"Our long-term analysis identified four distinct patterns in how specific camera installations performed," said Gao. "Cameras at some locations showed consistent reductions at varying magnitudes in two groups, with a surge in speeding tickets during COVID. A third group exhibited a relatively modest effect but nearly curbed speeding behaviors within 1.5 years, despite COVID-19 impacts, and a small set of camera sites saw marginal impact in the first few months but experienced dramatic COVID-era speeding increases," Gao added. "Our short-term analysis also provided evidence of a 'time-lag effect,' where driver compliance improved gradually rather than immediately after installation."
The C2SMARTER team led by its Director Kaan Ozbay, professor in the NYU Tandon Civil and Urban Engineering Department (CUE), pioneered the application of Survival Analysis with Random Effect (SARE) for before-and-after evaluation of traffic safety treatments. This statistical method models the time intervals between crashes rather than simply counting them. Their findings were published in a series of papers in top traffic safety journals, including Risk Analysis and Safety Science.
This approach alleviates a key limitation of traditional statistical methods, which can require years of crash data before a before-and-after analysis is possible. Shorter data-collection periods can potentially save lives by allowing traffic engineers to re-evaluate how they deploy safety treatments much sooner.
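As a rough illustration of the statistical idea, the sketch below fits a standard Cox proportional-hazards model to the time intervals between crashes, with a before/after-camera indicator. The team's SARE method additionally includes a random effect for each site, and the synthetic data and column names here are made up.

```python
# Illustrative sketch of modeling time intervals between crashes before and
# after camera installation. A plain Cox model stands in for the paper's SARE
# method (which adds a per-site random effect); all data here are synthetic.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 400
after_camera = rng.integers(0, 2, size=n)             # 0 = before, 1 = after installation
# Longer gaps between crashes after installation correspond to a lower crash hazard.
baseline_days = rng.exponential(scale=30, size=n)
gap_days = baseline_days * np.where(after_camera == 1, 1.6, 1.0)
observed = (rng.random(n) < 0.9).astype(int)           # some intervals are censored

df = pd.DataFrame({"gap_days": gap_days, "after_camera": after_camera, "observed": observed})
cph = CoxPHFitter()
cph.fit(df, duration_col="gap_days", event_col="observed")
cph.print_summary()   # a hazard ratio below 1 for `after_camera` means fewer crashes per unit time
```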
"The SARE method can accommodate the different implementation dates of speed cameras,” said Di Yang, a paper co-author who is currently an assistant professor at Morgan State University. Yang received his Ph.D. from CUE in 2022 under Ozbay’s advisement. “This approach allows us to better leverage the time intervals between crashes to estimate the change in crash rates before and after implementing speed cameras.”
These nuanced findings provide critical guidance for policymakers and urban planners across the country. Rather than a one-size-fits-all approach, the research points to the need for targeted, data-driven strategies that combine enforcement with engineering solutions tailored to specific locations.
"This isn't just about issuing tickets," concluded Ozbay. "It's about using data analytics and advanced statistical methods to save lives on our streets, especially in dense urban areas where a single speeding vehicle can have devastating consequences."
The study contributes to C2SMARTER's work to improve NYC transportation systems’ efficiency and safety. Among its projects, the Center has created a "digital twin" of Harlem with the NYC Fire Department to reduce emergency response times; tested and deployed weigh-in-motion technology to extend the Brooklyn Queens Expressway's lifespan; and developed performance measures for NYC Department of Transportation's off-hour delivery program.
In addition to Gao, Ozbay and Yang, the paper's authors include Chuan Xu and Smrithi Sharma, both with C2SMARTER and NYU Tandon’s Department of Civil and Urban Engineering at the time of the research.
Gao, J., Yang, D., Xu, C., Ozbay, K., & Sharma, S. (2025). Assessing the impact of fixed speed cameras on speeding behavior and crashes: A longitudinal study in New York City. Transportation Research Interdisciplinary Perspectives, 30, 101373.
Cracking the code of private AI: The role of entropy in secure language models
Large Language Models (LLMs) have rapidly become an integral part of our digital landscape, powering everything from chatbots to code generators. However, as these AI systems increasingly rely on proprietary, cloud-hosted models, concerns over user privacy and data security have escalated. How can we harness the power of AI without exposing sensitive data?
A recent study, Entropy-Guided Attention for Private LLMs, by Nandan Kumar Jha, a Ph.D. candidate at the NYU Center for Cybersecurity (CCS), and Brandon Reagen, Assistant Professor in the Department of Electrical and Computer Engineering and a member of CCS, introduces a novel approach to making AI more secure. The paper was presented at the Privacy-Preserving Artificial Intelligence workshop at the AAAI conference in early March.
The researchers delve into a fundamental, yet often overlooked, property of neural networks: entropy — the measure of information uncertainty within a system. Their work proposes that by understanding entropy’s role in AI architectures, we can improve the privacy, efficiency, and reliability of LLMs.
The Privacy Paradox in AI
When we interact with AI models — whether asking a virtual assistant for medical advice or using AI-powered legal research tools — our input data is typically processed in the cloud. This means user queries, even if encrypted in transit, are ultimately decrypted for processing by the model. This presents a fundamental privacy risk: sensitive data could be exposed, either unintentionally through leaks or maliciously via cyberattacks.
To design efficient private LLMs, researchers must rethink the architectures these models are built on, because the nonlinear operations at their core are among the most expensive to compute under encryption. However, simply removing nonlinearities destabilizes training and disrupts the core functionality of components like the attention mechanism.
“Nonlinearities are the lifeblood of neural networks,” says Jha. “They enable models to learn rich representations and capture complex patterns.”
The field of Private Inference (PI) aims to solve this problem by allowing AI models to operate directly on encrypted data, ensuring that neither the user nor the model provider ever sees the raw input. However, PI comes with significant computational costs. Encryption methods that protect privacy also make computation more complex, leading to higher latency and energy consumption — two major roadblocks to practical deployment.
To tackle these challenges, Jha and Reagen’s research focuses on the nonlinear transformations within AI models. In deep learning, nonlinear functions like activation functions play a crucial role in shaping how models process information. The researchers explore how these nonlinearities affect entropy — specifically, the diversity of information being passed through different layers of a transformer model.
“Our work directly tackles this challenge and takes a fundamentally different approach to privacy,” says Jha. “It removes nonlinear operations while preserving as much of the model’s functionality as possible.”
Using Shannon’s entropy as a quantitative measure, they reveal two key failure modes that occur when nonlinearity is removed:
- Entropy Collapse (Deep Layers): In the absence of nonlinearity, later layers in the network fail to retain useful information, leading to unstable training.
- Entropic Overload (Early Layers): Without proper entropy control, earlier layers fail to efficiently utilize the Multi-Head Attention (MHA) mechanism, reducing the model’s ability to capture diverse representations.
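As a rough illustration of the diagnostic behind these observations, the sketch below computes the per-head Shannon entropy of attention weights in a toy setting; the tensor shapes and interpretation thresholds are assumptions, not the paper's code.

```python
# Sketch: measuring per-head Shannon entropy of attention distributions to
# spot the two failure modes (near-zero entropy = collapse toward one-hot
# attention; near-maximal entropy = uniformly diffuse attention).
import torch

def attention_entropy(attn_probs, eps=1e-9):
    """attn_probs: (batch, heads, query_len, key_len); each row sums to 1."""
    h = -(attn_probs * (attn_probs + eps).log()).sum(dim=-1)   # entropy per query position
    return h.mean(dim=(0, 2))                                  # average entropy per head

batch, heads, q_len, k_len = 2, 8, 16, 16
scores = torch.randn(batch, heads, q_len, k_len)
probs = scores.softmax(dim=-1)

per_head = attention_entropy(probs)
max_entropy = torch.log(torch.tensor(float(k_len)))
print(per_head / max_entropy)   # values near 0 suggest collapse, near 1 suggest overload
```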
This insight is new — it suggests that entropy isn’t just a mathematical abstraction but a key design principle that determines whether a model can function properly.
A New AI Blueprint
Armed with these findings, the researchers propose an entropy-guided attention mechanism that dynamically regulates information flow in transformer models. Their approach has two components: entropy regularization, a new technique that prevents early layers from being overwhelmed by excessive information, and PI-friendly normalization, alternative methods to standard layer normalization that help stabilize training while preserving privacy.
By strategically regulating the entropy of attention distributions, they were able to maintain coherent, trainable behavior even in drastically simplified models. This keeps attention weights meaningful and avoids the degenerate patterns that commonly arise once nonlinearity is removed, in which a disproportionate number of heads exhibit extreme behavior, collapsing to near one-hot attention (low entropy) or diffusing attention uniformly (high entropy), both of which impair the model's ability to focus and generalize.
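A minimal sketch of what an entropy-regularization term on attention might look like is shown below, assuming a standard task loss; the penalty form, target entropy, and weighting are illustrative assumptions rather than the paper's exact formulation.

```python
# Sketch: adding an entropy penalty on attention distributions to the training
# loss to discourage heads from drifting toward either extreme (near one-hot
# or near uniform). Target entropy and weighting are assumptions.
import torch

def entropy_penalty(attn_probs, target_frac=0.5, eps=1e-9):
    k_len = attn_probs.shape[-1]
    max_entropy = torch.log(torch.tensor(float(k_len), device=attn_probs.device))
    h = -(attn_probs * (attn_probs + eps).log()).sum(dim=-1)    # entropy per query row
    return ((h - target_frac * max_entropy) ** 2).mean()         # pull entropy toward a mid-range target

# Usage inside a training step (task_loss comes from the usual objective):
attn_probs = torch.randn(2, 8, 16, 16).softmax(dim=-1)
task_loss = torch.tensor(1.0)                                    # placeholder for the model's task loss
loss = task_loss + 0.01 * entropy_penalty(attn_probs)
```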
This work establishes entropy dynamics as a principled guide for developing efficient, privacy-preserving LLMs, bridging the gap between information theory and neural architecture design. It represents a crucial step toward making privacy-preserving AI practical in real-world applications, offering a roadmap for models that are not only more private but also computationally efficient.
The team has also open-sourced their implementation, inviting researchers and developers to experiment with their entropy-guided approach.
arXiv:2501.03489v2 [cs.LG] 8 Jan 2025
Encryption breakthrough lays groundwork for privacy-preserving AI models
In an era where data privacy concerns loom large, a new approach in artificial intelligence (AI) could reshape how sensitive information is processed.
Researchers Austin Ebel and Karthik Garimella, Ph.D. students, and Assistant Professor of Electrical and Computer Engineering Brandon Reagen have introduced Orion, a novel framework that brings fully homomorphic encryption (FHE) to deep learning, allowing AI models to practically and efficiently operate directly on encrypted data without needing to decrypt it first.
The implications of this advancement, published in a recent study on arXiv that earned a Best Paper Award at the 2025 ACM International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS), are profound. FHE has long been considered the 'holy grail' of cryptography. Unlike traditional encryption, which protects data only when it is at rest or in transit, FHE allows computations to be performed on encrypted data without ever decrypting it. However, despite its promise, implementing deep learning models with FHE has been notoriously difficult due to the immense computational overhead and the technical hurdles in adapting neural networks to FHE's bespoke programming model.
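To make "computing on encrypted data" concrete, here is a deliberately insecure toy scheme with an additively homomorphic property; real FHE schemes, and Orion itself, rely on far more sophisticated mathematics, so treat this purely as an illustration of the concept.

```python
# Toy illustration (NOT secure, NOT real FHE): a scheme where adding two
# ciphertexts yields a ciphertext of the sum, so a server can add numbers it
# cannot read. Real FHE supports both addition and multiplication on encrypted
# data and uses entirely different mathematics.
import random

MOD = 2**61 - 1

def encrypt(m, key):
    return (m + key) % MOD

def decrypt(c, key):
    return (c - key) % MOD

# Client encrypts its private inputs with fresh random keys.
k1, k2 = random.randrange(MOD), random.randrange(MOD)
c1, c2 = encrypt(42, k1), encrypt(58, k2)

# Server adds ciphertexts without ever seeing 42 or 58.
c_sum = (c1 + c2) % MOD

# Client decrypts the result with the combined key.
assert decrypt(c_sum, (k1 + k2) % MOD) == 100
```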
“Whenever you use online services, there are machine learning models operating in the background, collecting both your inputs and outputs,” says Garimella. “That compromises user privacy. Our goal is to bring FHE into the mainstream, and allow users to continue using the services they rely on every day without releasing their personal, private data.”
Orion tackles these challenges head-on with an automated framework that seamlessly converts deep learning models written in PyTorch into efficient FHE programs. It does so by introducing a novel method to optimize how encrypted data is structured, significantly reducing computational overhead. The framework also streamlines encryption-related processes, making it easier to manage accumulated noise and execute deep learning computations efficiently.
By employing these techniques, Orion achieves a 2.38x speedup over existing state-of-the-art methods on ResNet-20, a comparatively small model commonly used as a benchmark in FHE deep learning research. Perhaps most impressively, Orion enables computations on much larger networks than previously possible. The researchers demonstrated the first-ever high-resolution FHE object detection using YOLO-v1, a deep learning model with 139 million parameters, roughly 500 times larger than ResNet-20, showcasing Orion’s ability to handle real-world AI workloads.
The code the team produced is lightweight and can be used by anyone with a basic understanding of computer science. This not only helps increase the efficiency of the computations; it also makes the framework easy to deploy across industries.
“There has been an incredible barrier to entry for people who don't want to spend months to years learning the ins and outs,” says Ebel. “With Orion, that barrier to entry is now almost non-existent.”
The development of Orion marks a critical milestone in bridging the gap between FHE and practical deep learning applications. With this framework, industries reliant on privacy — such as healthcare, finance, and cybersecurity — could leverage AI without exposing sensitive user data.
“Take online advertising,” says Reagen, who is also a member of the NYU Center for Cybersecurity. “If you want to process an individual's information in order to serve them targeted ads using neural networks, this allows service providers to analyze that data while keeping it totally confidential. For the marketers and the public, that’s a win-win scenario.”
While challenges remain in making FHE fully practical at scale, Orion brings the technology closer to widespread adoption. The research team has open-sourced the project, making it accessible to developers and researchers worldwide.
As AI continues to integrate deeper into daily life, privacy-preserving techniques like Orion could redefine the balance between innovation and security — ensuring that smarter algorithms don’t come at the cost of user privacy.
4/3/2025: This story has been updated to include the Best Paper Award at ASPLOS '25.
arXiv:2311.03470v3 [cs.CR] 12 Feb 2025
AI food scanner turns phone photos into nutritional analysis
Snap a photo of your meal, and artificial intelligence instantly tells you its calorie count, fat content, and nutritional value — no more food diaries or guesswork.
This futuristic scenario is now much closer to reality, thanks to an AI system developed by NYU Tandon School of Engineering researchers that promises a new tool for the millions of people who want to manage their weight, diabetes and other diet-related health conditions.
The technology, detailed in a paper presented at the 6th IEEE International Conference on Mobile Computing and Sustainable Informatics, uses advanced deep-learning algorithms to recognize food items in images and calculate their nutritional content, including calories, protein, carbohydrates and fat.
For over a decade, NYU's Fire Research Group, which includes the paper's lead author Prabodh Panindre and co-author Sunil Kumar, has studied critical firefighter health and operational challenges. Several research studies show that 73-88% of career and 76-87% of volunteer firefighters are overweight or obese, facing increased cardiovascular and other health risks that threaten operational readiness. These findings directly motivated the development of their AI-powered food-tracking system.
"Traditional methods of tracking food intake rely heavily on self-reporting, which is notoriously unreliable," said Panindre, Associate Research Professor of NYU Tandon School of Engineering’s Department of Mechanical Engineering. "Our system removes human error from the equation."
Despite the apparent simplicity of the concept, developing reliable food recognition AI has stumped researchers for years. Previous attempts struggled with three fundamental challenges that the NYU Tandon team appears to have overcome.
"The sheer visual diversity of food is staggering," said Kumar, Professor of Mechanical Engineering at NYU Abu Dhabi and Global Network Professor of Mechanical Engineering at NYU Tandon. "Unlike manufactured objects with standardized appearances, the same dish can look dramatically different based on who prepared it. A burger from one restaurant bears little resemblance to one from another place, and homemade versions add another layer of complexity."
Earlier systems also faltered when estimating portion sizes — a crucial factor in nutritional calculations. The NYU team's advance is their volumetric computation function, which uses advanced image processing to measure the exact area each food occupies on a plate.
The system correlates the area occupied by each food item with density and macronutrient data to convert 2D images into nutritional assessments. This integration of volumetric computations with the AI model enables precise analysis without manual input, solving a longstanding challenge in automated dietary tracking.
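The sketch below conveys the general idea of turning a detected food's image area into nutritional estimates via density and macronutrient tables; the calibration constant, density values, and macro figures are made-up placeholders, not the paper's actual data or method.

```python
# Sketch of converting a detected food's image area into nutritional estimates
# using per-food areal-density and macronutrient tables. All numbers here are
# illustrative placeholders.
import numpy as np

# Per-100 g macronutrients (kcal, protein g, carbs g, fat g) for a few foods.
MACROS_PER_100G = {"pizza": (266, 11.0, 33.0, 10.0), "idli": (132, 4.0, 28.0, 0.4)}
GRAMS_PER_CM2 = {"pizza": 1.1, "idli": 0.9}      # assumed grams per cm^2 of a typical serving
CM2_PER_PIXEL = 0.0025                            # assumed camera/plate calibration

def estimate_nutrition(mask, label):
    """mask: boolean segmentation mask for one detected food item."""
    area_cm2 = mask.sum() * CM2_PER_PIXEL
    grams = area_cm2 * GRAMS_PER_CM2[label]
    kcal, protein, carbs, fat = grams / 100.0 * np.array(MACROS_PER_100G[label])
    return {"grams": grams, "kcal": kcal, "protein_g": protein, "carbs_g": carbs, "fat_g": fat}

mask = np.zeros((480, 640), dtype=bool)
mask[100:300, 150:400] = True                     # pretend this is the detected pizza slice
print(estimate_nutrition(mask, "pizza"))
```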
The third major hurdle has been computational efficiency. Previous models required too much processing power to be practical for real-time use, often necessitating cloud processing that introduced delays and privacy concerns.
The researchers used a powerful image-recognition technology called YOLOv8 with ONNX Runtime (a tool that helps AI programs run more efficiently) to build a food-identification program that runs on a website instead of as a downloadable app. People can simply visit the site in their phone's web browser to analyze meals and track their diet.
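For readers curious about the tooling, the sketch below shows the generic YOLOv8-to-ONNX workflow the article refers to, using the standard Ultralytics and ONNX Runtime APIs; the weights file, input size, and preprocessing are generic assumptions, not the team's deployed food-recognition model.

```python
# Generic YOLOv8 + ONNX Runtime workflow: export a YOLOv8 model to ONNX once,
# then run lightweight inference with onnxruntime. Weights and preprocessing
# here are placeholders, not the team's model.
import numpy as np
import onnxruntime as ort
from ultralytics import YOLO

# One-time export (writes yolov8n.onnx next to the weights).
YOLO("yolov8n.pt").export(format="onnx")

# Inference with ONNX Runtime.
session = ort.InferenceSession("yolov8n.onnx", providers=["CPUExecutionProvider"])
image = np.random.rand(1, 3, 640, 640).astype(np.float32)   # placeholder preprocessed photo
input_name = session.get_inputs()[0].name
outputs = session.run(None, {input_name: image})
print([o.shape for o in outputs])   # raw detection tensors, to be post-processed (NMS, class mapping)
```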
When tested on a pizza slice, the system calculated 317 calories, 10 grams of protein, 40 grams of carbohydrates, and 13 grams of fat — nutritional values that closely matched reference standards. It performed similarly well when analyzing more complex dishes such as idli sambhar, a South Indian specialty featuring steamed rice cakes with lentil stew, for which it calculated 221 calories, 7 grams of protein, 46 grams of carbohydrates and just 1 gram of fat.
"One of our goals was to ensure the system works across diverse cuisines and food presentations," said Panindre. "We wanted it to be as accurate with a hot dog — 280 calories according to our system — as it is with baklava, a Middle Eastern pastry that our system identifies as having 310 calories and 18 grams of fat."
The researchers solved data challenges by combining similar food categories, removing food types with too few examples, and giving extra emphasis to certain foods during training. These techniques helped refine their training dataset from countless initial images to a more balanced set of 95,000 instances across 214 food categories.
The technical performance metrics are impressive: the system achieved a mean Average Precision (mAP) score of 0.7941 at an Intersection over Union (IoU) threshold of 0.5. For non-specialists, this means the AI can accurately locate and identify food items approximately 80% of the time, even when they overlap or are partially obscured.
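For a concrete sense of the IoU threshold, the snippet below computes IoU for a single pair of bounding boxes; a detection counts as correct at the 0.5 threshold when this overlap ratio is at least one half.

```python
# How the IoU threshold works: a detection counts as a hit at IoU >= 0.5 if
# its predicted box overlaps the ground-truth box by at least that ratio.
def iou(box_a, box_b):
    """Boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

predicted = (50, 40, 200, 180)
ground_truth = (60, 50, 210, 190)
print(iou(predicted, ground_truth) >= 0.5)   # True: this detection would count as correct
```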
The system has been deployed as a web application that works on mobile devices, making it potentially accessible to anyone with a smartphone. The researchers describe their current system as a "proof-of-concept" that could soon be refined and expanded for broader healthcare applications.
In addition to Panindre and Kumar, the paper's authors are Praneeth Kumar Thummalapalli and Tanmay Mandal, both master’s degree students in NYU Tandon’s Department of Computer Science and Engineering.
New research uses AI to unravel the complex wiring of the motor system
The nervous system is a marvel of biological engineering, composed of intricate networks that control every aspect of an animal's movement and behavior. A fundamental question in neuroscience is how these vast, complex circuits are assembled during development. A recent study by a group of researchers including Erdem Varol, Assistant Professor of Computer Science and Engineering and a member of the Visualization, Imaging and Data Analysis Center, has provided new insights into this problem by studying how the neurons responsible for leg movement in fruit flies (Drosophila melanogaster) establish their connections.
The researchers developed ConnectionMiner, a novel computational tool that integrates gene expression data with electron microscopy-derived connectomes. This tool enabled them to infer neuronal identities and predict synaptic connectivity with remarkable accuracy. Their findings, published on bioRxiv, offer a blueprint for understanding how neurons wire themselves into functional circuits.
Neurons form connections based on genetic and molecular cues, but identifying the precise mechanisms behind this process has been difficult. In the fruit fly, roughly 69 motor neurons (MNs) in each leg are responsible for controlling movement. These neurons receive input from more than 1,500 premotor neurons (preMNs) through over 200,000 synapses. The challenge lies in understanding how each MN finds the right preMN partners and how these connections are established at the molecular level.
By applying single-cell RNA sequencing (scRNAseq) at multiple developmental stages, the researchers tracked how different gene families, particularly transcription factors (TFs) and cell adhesion molecules (CAMs), shape the unique identities of MNs. They discovered that these molecular signals not only define neuronal types but also correlate with the strength of their synaptic connections.
Traditional methods of studying neuronal circuits rely on either gene expression data (which tells us what molecules neurons produce) or connectomics (which maps how neurons are wired together). However, integrating these two datasets has been a major challenge. ConnectionMiner bridges this gap by using machine learning to refine ambiguous neuronal annotations, effectively reconstructing the genetic and synaptic landscape of the nervous system.
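As a toy illustration of one ingredient of this kind of integration, the sketch below checks whether similarity in cell adhesion molecule expression between neuron pairs correlates with their synapse counts; it is not ConnectionMiner's actual algorithm, and the data are random placeholders.

```python
# Toy sketch: does similarity in CAM expression between preMN-MN pairs track
# the strength of their synaptic connections? Not ConnectionMiner's algorithm;
# all data below are random placeholders.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_premn, n_mn, n_genes = 30, 10, 50
expr_premn = rng.random((n_premn, n_genes))         # CAM expression per premotor neuron
expr_mn = rng.random((n_mn, n_genes))               # CAM expression per motor neuron
synapse_counts = rng.poisson(5, (n_premn, n_mn))    # connectome-derived synapse counts

# Cosine similarity of expression profiles for every preMN-MN pair.
norm_pre = expr_premn / np.linalg.norm(expr_premn, axis=1, keepdims=True)
norm_mn = expr_mn / np.linalg.norm(expr_mn, axis=1, keepdims=True)
expression_similarity = norm_pre @ norm_mn.T

rho, p = spearmanr(expression_similarity.ravel(), synapse_counts.ravel())
print(f"Spearman correlation between expression similarity and synapse count: {rho:.2f} (p={p:.2g})")
```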
The researchers tested their tool on the Drosophila leg motor system, identifying combinatorial gene signatures that likely orchestrate the assembly of circuits from preMNs to MNs and ultimately to muscles. By leveraging both transcriptomic (gene expression) and connectomic (wiring) data, ConnectionMiner successfully resolved previously uncharacterized neuronal identities and predicted the molecular interactions driving connectivity.
By mapping these relationships, ConnectionMiner provides a predictive framework for understanding how the nervous system assembles itself.
“The nervous system is one of the most complex networks that we know of, and deciphering its molecular building blocks is key to understanding much about our health, our behavior and our lives in general,” says Varol. “Tools like ConnectionMiner are a major stepping stone towards unlocking the brain’s molecular blueprint — enabling us to identify the genes that build neural circuits, revolutionize the diagnosis and treatment of neurological disorders, and fundamentally enhance our understanding of how brain wiring drives behavior.”
This research has far-reaching implications. Understanding the molecular rules that govern neural connectivity in fruit flies could inform studies of more complex nervous systems, including our own. The principles uncovered here might help explain how neural circuits form during development, how they recover from injury, and even how neurodevelopmental disorders arise when connectivity goes awry.
Furthermore, computational tools like ConnectionMiner represent a paradigm shift in neuroscience. By integrating artificial intelligence with biological data, researchers can now tackle questions that were previously too complex to analyze. The approach outlined in this study could be applied to other model organisms, potentially unlocking new insights into brain development, neural repair, and artificial intelligence itself.
Gupta, H.P., Azevedo, A.W., Chen, Y.C., Xing, K., Sims, P.A., Varol, E., & Mann, R.S. (2025). Decoding neuronal wiring by joint inference of cell identity and synaptic connectivity. bioRxiv. https://doi.org/10.1101/2025.03.04.640006
NYU researchers developing engineered immune cells to target Alzheimer’s disease
Researchers at New York University are developing a novel cell therapy that could offer a longer-lasting, potentially more effective treatment for Alzheimer’s disease by clearing toxic proteins from the brain.
Instead of requiring repeated antibody infusions, which can be costly and cause inflammation, this new approach aims to use engineered immune cells to target and remove amyloid plaques — one of the hallmarks of Alzheimer’s disease.
The project has been awarded a $4.2 million grant from the National Institutes of Health’s National Institute on Aging to fund research over the next five years.
The multiple-Principal Investigator (MPI) research team is led by contact MPI Martin Sadowski, Professor of Neurology, Psychiatry, and Biochemistry and Molecular Pharmacology at NYU Grossman School of Medicine. He is joined by MPIs Paul M. Mathews, Research Associate Professor in the Department of Psychiatry at NYU Grossman School of Medicine, and David M. Truong, Assistant Professor of Biomedical Engineering and Pathology at NYU Tandon School of Engineering.
Truong’s lab is playing a key role in the genetic engineering of immune cells for the therapy, building on his expertise in stem cell engineering and synthetic biology. His team is working on designing “off-the-shelf” immune cells—cells that do not need to be taken from the patient but can instead be manufactured and prepared in advance.
Truong described the motivation behind the project as deeply personal. “I have Alzheimer's in my family, and I wanted to use my expertise to help introduce an innovative therapy that could really change the way we treat the disease,” he said.
The team is developing a type of engineered macrophage, a kind of immune cell that can identify and remove harmful proteins in the brain. These cells will be created from human induced pluripotent stem cells, a renewable source of cells that can be genetically modified in the lab.
The engineered cells will be designed to target and bind to amyloid plaques for removal, optimize brain access by reducing competition from the brain’s own immune cells, and include built-in safety mechanisms to deactivate the therapy if necessary.
Unlike many other experimental Alzheimer’s treatments, this approach does not require injecting the cells directly into the brain. Instead, they will be delivered through the bloodstream, where they can cross the blood-brain barrier and begin clearing harmful proteins, avoiding invasive procedures while ensuring effective treatment.
To further enhance safety, the therapy includes a built-in “kill switch” that allows doctors to deactivate the cells if needed. If unintended side effects occur, a specific drug can be administered to eliminate them, ensuring the treatment remains both controlled and adaptable.
Once Truong’s lab finalizes the engineering of these human cells, they will be handed off to Sadowski’s team for testing in Alzheimer’s disease models. The Nathan S. Kline Institute for Psychiatric Research will assist in evaluating how well the cells remove amyloid plaques, while NYU Grossman School of Medicine will analyze how the cells behave in the brain.
The therapy is an adaptation of chimeric antigen receptor (CAR) technology, which has been revolutionary in cancer treatment. While CAR-T cell therapies have been used to fight blood cancers, this research aims to adapt similar technology to neurodegenerative diseases like Alzheimer’s.
Truong noted that cell therapy is a rapidly evolving field and that while CAR-T therapy has primarily been used against cancer, this project is pushing the boundaries to see if such an approach could work for Alzheimer’s, a disease that affects millions and has few effective treatment options.
The five-year grant follows an R61/R33 funding model, meaning that the first two years are dedicated to proving the feasibility of the therapy. If the team meets its key scientific milestones, funding will continue for three additional years to move the research toward clinical readiness.
Self-driving cars learn to share road knowledge through digital word-of-mouth
An NYU Tandon-led research team has developed a way for self-driving vehicles to share their knowledge about road conditions indirectly, making it possible for each vehicle to learn from the experiences of others even when they rarely meet on the road.
The research, presented in a paper at the Association for the Advancement of Artificial Intelligence Conference on February 27, 2025, tackles a persistent problem in artificial intelligence: how to help vehicles learn from each other while keeping their data private. Typically, vehicles only share what they have learned during brief direct encounters, limiting how quickly they can adapt to new conditions.
"Think of it like creating a network of shared experiences for self-driving cars," said Yong Liu, who supervised the research led by his Ph.D. student Xiaoyu Wang. Liu is a professor in NYU Tandon’s Electrical and Computer Engineering Department and a member of its Center for Advanced Technology in Telecommunications and Distributed Information Systems and of NYU WIRELESS.
"A car that has only driven in Manhattan could now learn about road conditions in Brooklyn from other vehicles, even if it never drives there itself. This would make every vehicle smarter and better prepared for situations it hasn't personally encountered,” Liu said.
The researchers call their new approach Cached Decentralized Federated Learning (Cached-DFL). Unlike traditional Federated Learning, which relies on a central server to coordinate updates, Cached-DFL enables vehicles to train their own AI models locally and share those models with others directly.
When vehicles come within 100 meters of each other, they use high-speed device-to-device communication to exchange trained models rather than raw data. Crucially, they can also pass along models they’ve received from previous encounters, allowing information to spread far beyond immediate interactions. Each vehicle maintains a cache of up to 10 external models and updates its AI every 120 seconds.
To prevent outdated information from degrading performance, the system automatically removes older models based on a staleness threshold, ensuring that vehicles prioritize recent and relevant knowledge.
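A minimal sketch of the cache bookkeeping described above appears below; the 10-model cache and staleness eviction mirror the article's description, while the data structures, expiration value, and FedAvg-style averaging are illustrative assumptions.

```python
# Sketch of Cached-DFL-style bookkeeping: each vehicle keeps up to 10 received
# models, relays them on later encounters, and drops models older than a
# staleness threshold. Details beyond the article's description are assumptions.
import time

CACHE_SIZE = 10
STALENESS_SECONDS = 600.0        # assumed expiration threshold

class VehicleNode:
    def __init__(self, vehicle_id, local_model):
        self.vehicle_id = vehicle_id
        self.local_model = local_model       # model weights as a flat list of floats
        self.cache = {}                      # owner_id -> (model, timestamp)

    def evict_stale(self, now):
        self.cache = {k: (m, t) for k, (m, t) in self.cache.items()
                      if now - t <= STALENESS_SECONDS}

    def receive_from(self, other, now):
        """On an encounter (within ~100 m), take the peer's local model plus its
        cached relays; in the full protocol both vehicles do this symmetrically."""
        self.evict_stale(now)
        incoming = {other.vehicle_id: (other.local_model, now), **other.cache}
        for owner, (model, ts) in incoming.items():
            if owner == self.vehicle_id:
                continue
            if owner not in self.cache or self.cache[owner][1] < ts:
                self.cache[owner] = (model, ts)
        freshest = sorted(self.cache.items(), key=lambda kv: kv[1][1], reverse=True)
        self.cache = dict(freshest[:CACHE_SIZE])     # keep only the freshest entries

    def aggregate(self):
        """Every ~120 s, average the local model with cached models (FedAvg-style)."""
        models = [self.local_model] + [m for m, _ in self.cache.values()]
        return [sum(ws) / len(ws) for ws in zip(*models)]

a = VehicleNode("car_a", [1.0, 2.0])
b = VehicleNode("car_b", [3.0, 4.0])
a.receive_from(b, now=time.time())
print(a.aggregate())   # [2.0, 3.0]
```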
The researchers tested their system through computer simulations using Manhattan’s street layout as a template. In their experiments, virtual vehicles moved along the city’s grid at about 14 meters per second, making turns at intersections based on probability, with a 50% chance of continuing straight and equal odds of turning onto other available roads.
Unlike conventional decentralized learning methods, which suffer when vehicles don’t meet frequently, Cached-DFL allows models to travel indirectly through the network, much like how messages spread in delay-tolerant networks, which are designed to handle intermittent connectivity by storing and forwarding data until a connection is available. By acting as relays, vehicles can pass along knowledge even if they never personally experience certain conditions.
"It's a bit like how information spreads in social networks," explained Liu. "Devices can now pass along knowledge from others they've met, even if those devices never directly encounter each other."
This multi-hop transfer mechanism reduces the limitations of traditional model-sharing approaches, which rely on immediate, one-to-one exchanges. By allowing vehicles to act as relays, Cached-DFL enables learning to propagate across an entire fleet more efficiently than if each vehicle were limited to direct interactions alone.
The technology allows connected vehicles to learn about road conditions, signals, and obstacles while keeping data private. This is especially useful in cities where cars face varied conditions but rarely meet long enough for traditional learning methods.
The study shows that vehicle speed, cache size, and model expiration impact learning efficiency. Faster speeds and frequent communication improve results, while outdated models reduce accuracy. A group-based caching strategy further enhances learning by prioritizing diverse models from different areas rather than just the latest ones.
As AI moves from centralized servers to edge devices, Cached-DFL provides a secure and efficient way for self-driving cars to learn collectively, making them smarter and more adaptive. Cached-DFL can also be applied to other networked systems of smart mobile agents, such as drones, robots and satellites, for robust and efficient decentralized learning towards achieving swarm intelligence.
The researchers have made their code publicly available. More detail can be found in their technical report. In addition to Liu and Wang, the research team consists of Guojun Xiong and Jian Li of Stony Brook University; and Houwei Cao of New York Institute of Technology.
The research was supported by multiple National Science Foundation grants, the Resilient & Intelligent NextG Systems (RINGS) program — which includes funding from the Department of Defense and the National Institute of Standards and Technology — and NYU’s computing resources.
New AI system accurately maps urban green spaces, exposing environmental divides
A research team led by Rumi Chunara — an NYU associate professor with appointments in both the Tandon School of Engineering and the School of Global Public Health — has unveiled a new artificial intelligence (AI) system that uses satellite imagery to track urban green spaces more accurately than prior methods, critical to ensuring healthy cities.
To validate their approach, the researchers tested the system in Karachi, Pakistan's largest city, where several team members are based. Karachi proved an ideal test case with its mix of dense urban areas and varying vegetation conditions.
Accepted for publication by the ACM Journal on Computing and Sustainable Societies, the team’s analysis exposed a stark environmental divide: some areas enjoy tree-lined streets while many neighborhoods have almost no vegetation at all.
Cities have long struggled to track their green spaces precisely, from parks to individual street trees, with traditional satellite analysis missing up to 37% of urban vegetation.
As cities face climate change and rapid urbanization, especially in Asia and Africa, accurate measurement has become vital. Green spaces can help reduce urban temperatures, filter air pollution, and provide essential spaces for exercise and mental health.
But these benefits may be unequally distributed. Low-income areas often lack vegetation, making them hotter and more polluted than tree-lined wealthy neighborhoods.
The research team developed their solution by enhancing AI segmentation architectures, such as DeepLabV3+. Using high-resolution satellite imagery from Google Earth, they trained the system by augmenting their training data to include varied versions of green vegetation under different lighting and seasonal conditions — a process they call 'green augmentation.' This technique improved vegetation detection accuracy by 13.4% compared to existing AI methods — a significant advance in the field.
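The sketch below conveys one plausible flavor of such an augmentation, jittering the brightness and green balance of pixels inside a vegetation mask to mimic different lighting and seasonal conditions; the paper's exact 'green augmentation' procedure may differ, so treat this as an assumption-laden illustration only.

```python
# Illustrative guess at vegetation-focused photometric augmentation: perturb
# brightness and the green channel within a vegetation mask to mimic lighting
# and seasonal variation. Not the paper's actual procedure.
import numpy as np

def augment_green(image, veg_mask, rng):
    """image: HxWx3 float array in [0, 1]; veg_mask: HxW boolean vegetation mask."""
    out = image.copy()
    brightness = rng.uniform(0.7, 1.3)          # simulate sunny vs. overcast conditions
    green_gain = rng.uniform(0.8, 1.2)          # simulate seasonal color shifts
    out[veg_mask] *= brightness
    out[veg_mask, 1] *= green_gain              # scale only the green channel
    return np.clip(out, 0.0, 1.0)

rng = np.random.default_rng(0)
image = rng.random((256, 256, 3))
veg_mask = rng.random((256, 256)) > 0.5
augmented = augment_green(image, veg_mask, rng)
```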
When measuring how often the system correctly identifies vegetation, it achieved 89.4% accuracy with 90.6% reliability, substantially better than traditional methods which only achieve 63.3% accuracy with 64.0% reliability.
"Previous methods relied on simple light wavelength measurements," said Chunara, who serves as the Director of the NYU Center for Health Data Science and is a member of NYU Tandon’s Visualization Imaging and Data Analysis Center (VIDA). "Our system learns to recognize more subtle patterns that distinguish trees from grass, even in challenging urban environments. This type of data is necessary for urban planners to identify neighborhoods that lack vegetation so they can develop new green spaces that will deliver the most benefits possible. Without accurate mapping, cities cannot address disparities effectively."
The Karachi analysis found the city averages just 4.17 square meters of green space per person, less than half the World Health Organization's (WHO’s) recommended minimum of 9 square meters per capita. The disparity within neighborhoods is dramatic: while some outlying union councils (Pakistan's smallest local government unit, 173 of which were included in the study) have more than 80 square meters per person, five union councils have less than 0.1 square meters per capita.
The study revealed that areas with more paved roads — typically a marker of economic development — tend to have more trees and grass. More significantly, in eight different union councils studied, areas with more vegetation showed markedly lower surface temperatures, demonstrating green spaces' role in cooling cities.
Singapore offers a contrast, showing what's possible with deliberate planning. Despite having a similar population density to Karachi, it provides 9.9 square meters of green space per person, exceeding the WHO target.
The researchers have made their methodology public, though applying it to other cities would require retraining the system on local satellite imagery.
This study adds to Chunara’s body of work developing computational and statistical methods, including data mining and machine learning, to understand social determinants of health and health disparities. Prior studies include using social media posts to map neighborhood-level systemic racism and homophobia and assess their mental health impact, as well as analyzing electronic health records to understand telemedicine access disparities during COVID-19.
In addition to Chunara, the paper’s authors are Miao Zhang, a Ph.D. candidate in NYU Tandon’s Department of Computer Science and Engineering and VIDA; and Hajra Arshad, Manzar Abbas, Hamzah Jehanzeb, Izza Tahir, Javerya Hassan and Zainab Samad from The Aga Khan University's Department of Medicine in Karachi. Samad also holds an appointment in The Aga Khan University’s CITRIC Health Data Science Center.
Funding for the study was provided by the National Science Foundation and National Institutes of Health.
Miao Zhang, Hajra Arshad, Manzar Abbas, Hamzah Jehanzeb, Izza Tahir, Javerya Hassan, Zainab Samad, and Rumi Chunara. 2025. Quantifying greenspace with satellite images in Karachi, Pakistan using a new data augmentation paradigm. ACM J. Comput. Sustain. Soc. Just Accepted (February 2025). https://doi.org/10.1145/3716370
Research reveals economic ripple effects of business closures, remote work and other disruptions
With remote and hybrid work now an established norm, many restaurants located adjacent to office buildings are facing a permanent decline in foot traffic. But how will this behavioral shift ripple through businesses along commute routes? Does it trigger a chain reaction that extends far beyond the immediate vicinity of a commercial hub?
In a new paper published in Nature Human Behaviour, a team of researchers led by NYU Tandon School of Engineering’s Takahiro Yabe and Northeastern University’s Esteban Moro has shown how connections between businesses stretch far beyond physical proximity when human behavior data is factored into the equation. The result shows that businesses, from gas stations to laundromats, can see large changes in their revenues even if they're not located in major business districts.
“Urban science views cities as complex adaptive systems, rather than entities that can be engineered with straightforward solutions,” says Yabe, Assistant Professor at the Department of Technology Management and Innovation and the Center for Urban Science and Progress. “Our research contributes to understanding how changes in urban environments influence human behavior and economic dynamics. By focusing on dependencies between businesses and points of interest, we can help cities design more effective, equitable policies and infrastructure.”
Traditional models for measuring the interdependence of businesses largely rely on their physical proximity to one another. The research team, which also included researchers from the University of Pittsburgh and MIT, analyzed anonymized mobile phone data from over a million devices across New York, Boston, Los Angeles, Seattle, and Dallas, tracking how people move between businesses and other points of interest throughout the day. This allowed researchers to create detailed "dependency networks" showing how different establishments rely on each other's customer base, and how far disruptions can spread.
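As a simplified illustration of how such a dependency network can be built from visit sequences, the sketch below weights an edge from one place to another by how often a device visits the second shortly after the first; the trip data, time window, and weighting are illustrative assumptions, not the paper's estimation procedure.

```python
# Sketch of building a business "dependency network" from visit sequences:
# weight an edge from A to B by how often a device visits B shortly after A.
# Data, time window, and weighting are illustrative assumptions.
import networkx as nx

# (device_id, place, hour-of-day) visit records, already ordered in time.
visits = [
    ("d1", "office_tower", 9), ("d1", "coffee_shop", 12), ("d1", "gym", 18),
    ("d2", "office_tower", 9), ("d2", "coffee_shop", 13),
    ("d3", "airport", 7),      ("d3", "hotel", 9),        ("d3", "coffee_shop", 10),
]

WINDOW_HOURS = 4
by_device = {}
for device, place, hour in visits:
    by_device.setdefault(device, []).append((place, hour))

G = nx.DiGraph()
for trajectory in by_device.values():
    for (src, t0), (dst, t1) in zip(trajectory, trajectory[1:]):
        if 0 < t1 - t0 <= WINDOW_HOURS:
            prev = G.get_edge_data(src, dst, {"weight": 0})["weight"]
            G.add_edge(src, dst, weight=prev + 1)

# How much does the coffee shop depend on office-tower foot traffic?
inbound = list(G.in_edges("coffee_shop", data=True))
total = sum(d["weight"] for _, _, d in inbound)
print({src: d["weight"] / total for src, _, d in inbound})
```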
This integration allowed them to refine predictions of business resilience during disruptions — such as those triggered by the COVID-19 pandemic — boosting accuracy by a staggering 40% compared to traditional models that relied solely on geographic proximity.
These networks revealed surprising patterns. While traditional models focused mainly on immediate neighbors – like a coffee shop next to a closed office building – the reality is far more complex. The study found that airports can significantly impact businesses up to 2.5-3.5 kilometers away, while even supercenters and colleges influence businesses within a 1.5-kilometer radius.
The researchers were also able to show how different types of establishments create varying ripple patterns. While shopping malls and colleges tend to have strong but localized effects, airports, stadiums, and theme parks can send economic shockwaves across entire urban areas. Perhaps most surprisingly, arts venues, restaurants, and service businesses can experience substantial impacts even when they're far from the source of the disruption. This can provide key takeaways for people to design and run cities.
“This network has significant potential for urban planners and policymakers,” says Yabe. “For example, organizations like Business Improvement Districts (BIDs) can use it to identify synergies between parks and surrounding businesses, optimizing economic growth. Planners can also simulate the impacts of interventions, like congestion pricing or new infrastructure, to anticipate ripple effects throughout the city.”
The publication is accompanied by an interactive visual dashboard, inviting users to simulate disruptions and observe how they impact a city’s economic landscape. Through a detailed map of POIs in Boston, users can see exactly how much connectivity individual businesses have with those in their communities and beyond, and explore how business closures affect points far beyond their blocks or neighborhoods.
This research builds upon previous research of Yabe and his colleagues. Last year, they started a National Science Foundation-funded project on how EV chargers affect dining, shopping, and other activity patterns, and aim to provide policy makers with tools to support small and medium-sized businesses through their judicious placement. The project explores how and where charging stations should be placed to not only meet drivers’ needs but also enhance the economic resilience of local businesses and promote social equity.
The improved accuracy of business resilience predictions during disruptions such as pandemics or climate change-induced natural disasters is a crucial development for urban planners and policymakers working to strengthen the economic stability of cities. This insight makes a compelling case for shifting from a place-based to a network-based approach to planning and managing urban economies — one that recognizes that the health of a city's economy is a web of interconnected threads. The closure of an office or museum may seem like an isolated event, but within the tightly woven fabric of urban economies, it can reverberate with far-reaching effects.