Research News
AI food scanner turns phone photos into nutritional analysis
Snap a photo of your meal, and artificial intelligence instantly tells you its calorie count, fat content, and nutritional value — no more food diaries or guesswork.
This futuristic scenario is now much closer to reality, thanks to an AI system developed by NYU Tandon School of Engineering researchers that promises a new tool for the millions of people who want to manage their weight, diabetes and other diet-related health conditions.
The technology, detailed in a paper presented at the 6th IEEE International Conference on Mobile Computing and Sustainable Informatics, uses advanced deep-learning algorithms to recognize food items in images and calculate their nutritional content, including calories, protein, carbohydrates and fat.
For over a decade, NYU's Fire Research Group, which includes the paper's lead author Prabodh Panindre and co-author Sunil Kumar, has studied critical firefighter health and operational challenges. Several research studies show that 73-88% of career and 76-87% of volunteer firefighters are overweight or obese, facing increased cardiovascular and other health risks that threaten operational readiness. These findings directly motivated the development of their AI-powered food-tracking system.
"Traditional methods of tracking food intake rely heavily on self-reporting, which is notoriously unreliable," said Panindre, Associate Research Professor of NYU Tandon School of Engineering’s Department of Mechanical Engineering. "Our system removes human error from the equation."
Despite the apparent simplicity of the concept, developing reliable food recognition AI has stumped researchers for years. Previous attempts struggled with three fundamental challenges that the NYU Tandon team appears to have overcome.
"The sheer visual diversity of food is staggering," said Kumar, Professor of Mechanical Engineering at NYU Abu Dhabi and Global Network Professor of Mechanical Engineering at NYU Tandon. "Unlike manufactured objects with standardized appearances, the same dish can look dramatically different based on who prepared it. A burger from one restaurant bears little resemblance to one from another place, and homemade versions add another layer of complexity."
Earlier systems also faltered when estimating portion sizes — a crucial factor in nutritional calculations. The NYU team's advance is their volumetric computation function, which uses advanced image processing to measure the exact area each food occupies on a plate.
The system correlates the area occupied by each food item with density and macronutrient data to convert 2D images into nutritional assessments. This integration of volumetric computations with the AI model enables precise analysis without manual input, solving a longstanding challenge in automated dietary tracking.
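The paper's exact conversion is not reproduced here; a minimal sketch of the idea, using hypothetical density and per-gram macronutrient values (not the NYU system's data):

```python
# Illustrative sketch of area-based nutrition estimation.
# The density and per-gram macro values below are hypothetical placeholders.

FOOD_DATA = {
    # food: (grams per cm^2 of plate area, kcal/g, protein g/g, carbs g/g, fat g/g)
    "pizza": (1.9, 2.66, 0.11, 0.33, 0.10),
}

def estimate_nutrition(food: str, area_cm2: float) -> dict:
    """Convert the plate area a detected food occupies into nutrition estimates."""
    g_per_cm2, kcal, protein, carbs, fat = FOOD_DATA[food]
    mass = area_cm2 * g_per_cm2  # estimated mass in grams
    return {
        "calories": round(mass * kcal),
        "protein_g": round(mass * protein, 1),
        "carbs_g": round(mass * carbs, 1),
        "fat_g": round(mass * fat, 1),
    }

print(estimate_nutrition("pizza", 60.0))
```

The key design point is that a single measured quantity (occupied area) is mapped through per-food density data, so no manual portion entry is needed.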
The third major hurdle has been computational efficiency. Previous models required too much processing power to be practical for real-time use, often necessitating cloud processing that introduced delays and privacy concerns.
The researchers used YOLOv8, a powerful image-recognition technology, together with ONNX Runtime (a tool that helps AI programs run more efficiently) to build a food-identification program that runs on a website rather than as a downloadable app. People can simply visit the site in their phone's web browser to analyze meals and track their diet.
When tested on a pizza slice, the system calculated 317 calories, 10 grams of protein, 40 grams of carbohydrates, and 13 grams of fat — nutritional values that closely matched reference standards. It performed similarly well when analyzing more complex dishes such as idli sambhar, a South Indian specialty featuring steamed rice cakes with lentil stew, for which it calculated 221 calories, 7 grams of protein, 46 grams of carbohydrates and just 1 gram of fat.
"One of our goals was to ensure the system works across diverse cuisines and food presentations," said Panindre. "We wanted it to be as accurate with a hot dog — 280 calories according to our system — as it is with baklava, a Middle Eastern pastry that our system identifies as having 310 calories and 18 grams of fat."
The researchers solved data challenges by combining similar food categories, removing food types with too few examples, and giving extra emphasis to certain foods during training. These techniques helped refine their training dataset from countless initial images to a more balanced set of 95,000 instances across 214 food categories.
The technical performance metrics are impressive: the system achieved a mean Average Precision (mAP) score of 0.7941 at an Intersection over Union (IoU) threshold of 0.5. For non-specialists, this means the AI can accurately locate and identify food items approximately 80% of the time, even when they overlap or are partially obscured.
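To make the metric concrete, here is a minimal sketch of Intersection over Union for two axis-aligned boxes; the coordinates are invented for illustration:

```python
def iou(box_a, box_b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle width/height (zero if the boxes do not overlap).
    ix = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    iy = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = ix * iy
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1)
             - inter)
    return inter / union if union else 0.0

# At the IoU 0.5 threshold used in the paper, a detection counts as
# correct only if its box overlaps the ground-truth box this much.
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # boxes shifted by half their width
```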
The system has been deployed as a web application that works on mobile devices, making it potentially accessible to anyone with a smartphone. The researchers describe their current system as a "proof-of-concept" that could be refined and expanded for broader healthcare applications.
In addition to Panindre and Kumar, the paper's authors are Praneeth Kumar Thummalapalli and Tanmay Mandal, both master’s degree students in NYU Tandon’s Department of Computer Science and Engineering.
New research uses AI to unravel the complex wiring of the motor system
The nervous system is a marvel of biological engineering, composed of intricate networks that control every aspect of an animal's movement and behavior. A fundamental question in neuroscience is how these vast, complex circuits are assembled during development. A recent study by a group of researchers including Erdem Varol, Assistant Professor of Computer Science and Engineering and a member of the Visualization, Imaging and Data Analysis Center, has provided new insights into this problem by studying how the neurons responsible for leg movement in fruit flies (Drosophila melanogaster) establish their connections.
The researchers developed ConnectionMiner, a novel computational tool that integrates gene expression data with electron microscopy-derived connectomes. This tool enabled them to infer neuronal identities and predict synaptic connectivity with remarkable accuracy. Their findings, published on bioRxiv, offer a blueprint for understanding how neurons wire themselves into functional circuits.
Neurons form connections based on genetic and molecular cues, but identifying the precise mechanisms behind this process has been difficult. In the fruit fly, roughly 69 motor neurons (MNs) in each leg are responsible for controlling movement. These neurons receive input from more than 1,500 premotor neurons (preMNs) through over 200,000 synapses. The challenge lies in understanding how each MN finds the right preMN partners and how these connections are established at the molecular level.
By applying single-cell RNA sequencing (scRNAseq) at multiple developmental stages, the researchers tracked how different gene families, particularly transcription factors (TFs) and cell adhesion molecules (CAMs), shape the unique identities of MNs. They discovered that these molecular signals not only define neuronal types but also correlate with the strength of their synaptic connections.
Traditional methods of studying neuronal circuits rely on either gene expression data (which tells us what molecules neurons produce) or connectomics (which maps how neurons are wired together). However, integrating these two datasets has been a major challenge. ConnectionMiner bridges this gap by using machine learning to refine ambiguous neuronal annotations, effectively reconstructing the genetic and synaptic landscape of the nervous system.
The researchers tested their tool on the Drosophila leg motor system, identifying combinatorial gene signatures that likely orchestrate the assembly of circuits from preMNs to MNs and ultimately to muscles. By leveraging both transcriptomic (gene expression) and connectomic (wiring) data, ConnectionMiner successfully resolved previously uncharacterized neuronal identities and predicted the molecular interactions driving connectivity.
By mapping these relationships, ConnectionMiner provides a predictive framework for understanding how the nervous system assembles itself.
“The nervous system is one of the most complex networks that we know of, and deciphering its molecular building blocks is key to understanding much about our health, our behavior and our lives in general,” says Varol. “Tools like ConnectionMiner are a major stepping stone towards unlocking the brain’s molecular blueprint — enabling us to identify the genes that build neural circuits, revolutionize the diagnosis and treatment of neurological disorders, and fundamentally enhance our understanding of how brain wiring drives behavior.”
This research has far-reaching implications. Understanding the molecular rules that govern neural connectivity in fruit flies could inform studies of more complex nervous systems, including our own. The principles uncovered here might help explain how neural circuits form during development, how they recover from injury, and even how neurodevelopmental disorders arise when connectivity goes awry.
Furthermore, computational tools like ConnectionMiner represent a paradigm shift in neuroscience. By integrating artificial intelligence with biological data, researchers can now tackle questions that were previously too complex to analyze. The approach outlined in this study could be applied to other model organisms, potentially unlocking new insights into brain development, neural repair, and artificial intelligence itself.
Gupta, H.P., Azevedo, A.W., Chen, Y.C., Xing, K., Sims, P.A., Varol, E., & Mann, R.S. (2025). Decoding neuronal wiring by joint inference of cell identity and synaptic connectivity. bioRxiv. https://doi.org/10.1101/2025.03.04.640006
NYU researchers developing engineered immune cells to target Alzheimer’s disease
Researchers at New York University are developing a novel cell therapy that could offer a longer-lasting, potentially more effective treatment for Alzheimer’s disease by clearing toxic proteins from the brain.
Instead of requiring repeated antibody infusions, which can be costly and cause inflammation, this new approach aims to use engineered immune cells to target and remove amyloid plaques — one of the hallmarks of Alzheimer’s disease.
The project has been awarded a $4.2 million grant from the National Institutes of Health’s National Institute on Aging to fund research over the next five years.
The multiple-Principal Investigator (MPI) research team is led by contact MPI Martin Sadowski, Professor of Neurology, Psychiatry, and Biochemistry and Molecular Pharmacology at NYU Grossman School of Medicine. He is joined by MPIs Paul M. Mathews, Research Associate Professor in the Department of Psychiatry at NYU Grossman School of Medicine, and David M. Truong, Assistant Professor of Biomedical Engineering and Pathology at NYU Tandon School of Engineering.
Truong’s lab is playing a key role in the genetic engineering of immune cells for the therapy, building on his expertise in stem cell engineering and synthetic biology. His team is working on designing “off-the-shelf” immune cells—cells that do not need to be taken from the patient but can instead be manufactured and prepared in advance.
Truong described the motivation behind the project as deeply personal. “I have Alzheimer's in my family, and I wanted to use my expertise to help introduce an innovative therapy that could really change the way we treat the disease,” he said.
The team is developing a type of engineered macrophage, a kind of immune cell that can identify and remove harmful proteins in the brain. These cells will be created from human induced pluripotent stem cells, a renewable source of cells that can be genetically modified in the lab.
The engineered cells will be designed to target and bind to amyloid plaques for removal, optimize brain access by reducing competition from the brain’s own immune cells, and include built-in safety mechanisms to deactivate the therapy if necessary.
Unlike many other experimental Alzheimer’s treatments, this approach does not require injecting the cells directly into the brain. Instead, they will be delivered through the bloodstream, where they can cross the blood-brain barrier and begin clearing harmful proteins, avoiding invasive procedures.
To further enhance safety, the therapy includes a built-in “kill switch” that allows doctors to deactivate the cells if needed. If unintended side effects occur, a specific drug can be administered to eliminate them, ensuring the treatment remains both controlled and adaptable.
Once Truong’s lab finalizes the engineering of these human cells, they will be handed off to Sadowski’s team for testing in Alzheimer’s disease models. The Nathan S. Kline Institute for Psychiatric Research will assist in evaluating how well the cells remove amyloid plaques, while NYU Grossman School of Medicine will analyze how the cells behave in the brain.
The therapy is an adaptation of chimeric antigen receptor (CAR) technology, which has been revolutionary in cancer treatment. While CAR-T cell therapies have been used to fight blood cancers, this research aims to adapt similar technology to neurodegenerative diseases like Alzheimer’s.
Truong noted that cell therapy is a rapidly evolving field and that while CAR-T therapy has primarily been used against cancer, this project is pushing the boundaries to see if such an approach could work for Alzheimer’s, a disease that affects millions and has few effective treatment options.
The five-year grant follows an R61/R33 funding model, meaning that the first two years are dedicated to proving the feasibility of the therapy. If the team meets its key scientific milestones, funding will continue for three additional years to move the research toward clinical readiness.
Self-driving cars learn to share road knowledge through digital word-of-mouth
An NYU Tandon-led research team has developed a way for self-driving vehicles to share their knowledge about road conditions indirectly, making it possible for each vehicle to learn from the experiences of others even when they rarely meet on the road.
The research, presented in a paper at the Association for the Advancement of Artificial Intelligence Conference on February 27, 2025, tackles a persistent problem in artificial intelligence: how to help vehicles learn from each other while keeping their data private. Typically, vehicles only share what they have learned during brief direct encounters, limiting how quickly they can adapt to new conditions.
"Think of it like creating a network of shared experiences for self-driving cars," said Yong Liu, who supervised the research led by his Ph.D. student Xiaoyu Wang. Liu is a professor in NYU Tandon’s Electrical and Computer Engineering Department and a member of its Center for Advanced Technology in Telecommunications and Distributed Information Systems and of NYU WIRELESS.
"A car that has only driven in Manhattan could now learn about road conditions in Brooklyn from other vehicles, even if it never drives there itself. This would make every vehicle smarter and better prepared for situations it hasn't personally encountered,” Liu said.
The researchers call their new approach Cached Decentralized Federated Learning (Cached-DFL). Unlike traditional Federated Learning, which relies on a central server to coordinate updates, Cached-DFL enables vehicles to train their own AI models locally and share those models with others directly.
When vehicles come within 100 meters of each other, they use high-speed device-to-device communication to exchange trained models rather than raw data. Crucially, they can also pass along models they’ve received from previous encounters, allowing information to spread far beyond immediate interactions. Each vehicle maintains a cache of up to 10 external models and updates its AI every 120 seconds.
To prevent outdated information from degrading performance, the system automatically removes older models based on a staleness threshold, ensuring that vehicles prioritize recent and relevant knowledge.
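A hedged sketch of the caching policy as described (a bounded cache with staleness-based eviction); the class name and threshold value are illustrative, not the authors' implementation:

```python
import time

CACHE_LIMIT = 10         # max external models kept, per the paper's setup
STALENESS_LIMIT = 600.0  # seconds before a cached model is stale (illustrative)

class ModelCache:
    """Keeps recently received peer models and drops stale ones."""

    def __init__(self):
        self.entries = {}  # vehicle_id -> (model, timestamp)

    def receive(self, vehicle_id, model, now=None):
        now = time.time() if now is None else now
        self.entries[vehicle_id] = (model, now)
        self._evict(now)

    def _evict(self, now):
        # Drop models older than the staleness threshold...
        self.entries = {v: (m, t) for v, (m, t) in self.entries.items()
                        if now - t <= STALENESS_LIMIT}
        # ...then, if still over capacity, drop the oldest entries.
        while len(self.entries) > CACHE_LIMIT:
            oldest = min(self.entries, key=lambda v: self.entries[v][1])
            del self.entries[oldest]

    def fresh_models(self, now=None):
        now = time.time() if now is None else now
        return [m for m, t in self.entries.values() if now - t <= STALENESS_LIMIT]
```

Because cached models can be passed along at later encounters, a vehicle's cache acts as the relay buffer that lets knowledge hop across the fleet.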
The researchers tested their system through computer simulations using Manhattan’s street layout as a template. In their experiments, virtual vehicles moved along the city’s grid at about 14 meters per second, making turns at intersections based on probability, with a 50% chance of continuing straight and equal odds of turning onto other available roads.
Unlike conventional decentralized learning methods, which suffer when vehicles don’t meet frequently, Cached-DFL allows models to travel indirectly through the network, much like how messages spread in delay-tolerant networks, which are designed to handle intermittent connectivity by storing and forwarding data until a connection is available. By acting as relays, vehicles can pass along knowledge even if they never personally experience certain conditions.
"It's a bit like how information spreads in social networks," explained Liu. "Devices can now pass along knowledge from others they've met, even if those devices never directly encounter each other."
This multi-hop transfer mechanism reduces the limitations of traditional model-sharing approaches, which rely on immediate, one-to-one exchanges. By allowing vehicles to act as relays, Cached-DFL enables learning to propagate across an entire fleet more efficiently than if each vehicle were limited to direct interactions alone.
The technology allows connected vehicles to learn about road conditions, signals, and obstacles while keeping data private. This is especially useful in cities where cars face varied conditions but rarely meet long enough for traditional learning methods.
The study shows that vehicle speed, cache size, and model expiration impact learning efficiency. Faster speeds and frequent communication improve results, while outdated models reduce accuracy. A group-based caching strategy further enhances learning by prioritizing diverse models from different areas rather than just the latest ones.
As AI moves from centralized servers to edge devices, Cached-DFL provides a secure and efficient way for self-driving cars to learn collectively, making them smarter and more adaptive. Cached-DFL can also be applied to other networked systems of smart mobile agents, such as drones, robots and satellites, for robust and efficient decentralized learning towards achieving swarm intelligence.
The researchers have made their code publicly available. More detail can be found in their technical report. In addition to Liu and Wang, the research team consists of Guojun Xiong and Jian Li of Stony Brook University; and Houwei Cao of New York Institute of Technology.
The research was supported by multiple National Science Foundation grants, the Resilient & Intelligent NextG Systems (RINGS) program — which includes funding from the Department of Defense and the National Institute of Standards and Technology — and NYU’s computing resources.
New AI system accurately maps urban green spaces, exposing environmental divides
A research team led by Rumi Chunara — an NYU associate professor with appointments in both the Tandon School of Engineering and the School of Global Public Health — has unveiled a new artificial intelligence (AI) system that uses satellite imagery to track urban green spaces more accurately than prior methods, a capability critical to ensuring healthy cities.
To validate their approach, the researchers tested the system in Karachi, Pakistan's largest city, where several team members are based. Karachi proved an ideal test case with its mix of dense urban areas and varying vegetation conditions.
Accepted for publication by the ACM Journal on Computing and Sustainable Societies, the team’s analysis exposed a stark environmental divide: some areas enjoy tree-lined streets while many neighborhoods have almost no vegetation at all.
Cities have long struggled to track their green spaces precisely, from parks to individual street trees, with traditional satellite analysis missing up to 37% of urban vegetation.
As cities face climate change and rapid urbanization, especially in Asia and Africa, accurate measurement has become vital. Green spaces can help reduce urban temperatures, filter air pollution, and provide essential spaces for exercise and mental health.
But these benefits may be unequally distributed. Low-income areas often lack vegetation, making them hotter and more polluted than tree-lined wealthy neighborhoods.
The research team developed their solution by enhancing AI segmentation architectures, such as DeepLabV3+. Using high-resolution satellite imagery from Google Earth, they trained the system by augmenting their training data to include varied versions of green vegetation under different lighting and seasonal conditions — a process they call "green augmentation." This technique improved vegetation detection accuracy by 13.4% compared to existing AI methods — a significant advance in the field.
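The study's augmentation pipeline is not detailed here; what follows is a minimal sketch of the general idea, randomly jittering brightness and green-channel intensity to simulate lighting and seasonal variation (the function name and parameters are illustrative, not the paper's method):

```python
import random

def green_augment(image, brightness=0.2, hue_shift=0.1, seed=None):
    """Return a copy of an RGB image (nested lists, channel values in 0..1)
    with randomly jittered brightness and green intensity, simulating
    different lighting and seasonal vegetation conditions."""
    rng = random.Random(seed)
    b = 1.0 + rng.uniform(-brightness, brightness)  # global lighting change
    g = 1.0 + rng.uniform(-hue_shift, hue_shift)    # greener / browner foliage
    return [[(min(1.0, r * b),
              min(1.0, gr * b * g),
              min(1.0, bl * b)) for r, gr, bl in row] for row in image]

# Each training image yields several augmented variants for the model to learn from.
image = [[(0.2, 0.6, 0.1), (0.3, 0.5, 0.2)]]
variants = [green_augment(image, seed=s) for s in range(4)]
```

In a real pipeline the same transformations would be applied to full satellite tiles, leaving the segmentation labels unchanged so the model learns lighting-invariant vegetation features.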
When measuring how often the system correctly identifies vegetation, it achieved 89.4% accuracy with 90.6% reliability, substantially better than traditional methods which only achieve 63.3% accuracy with 64.0% reliability.
"Previous methods relied on simple light wavelength measurements," said Chunara, who serves as the Director of the NYU Center for Health Data Science and is a member of NYU Tandon’s Visualization Imaging and Data Analysis Center (VIDA). "Our system learns to recognize more subtle patterns that distinguish trees from grass, even in challenging urban environments. This type of data is necessary for urban planners to identify neighborhoods that lack vegetation so they can develop new green spaces that will deliver the most benefits possible. Without accurate mapping, cities cannot address disparities effectively."
The Karachi analysis found the city averages just 4.17 square meters of green space per person, less than half the World Health Organization's (WHO’s) recommended minimum of 9 square meters per capita. The disparity within neighborhoods is dramatic: while some outlying union councils — Pakistan’s smallest local government body, a total of 173 were included in the study — have more than 80 square meters per person, five union councils have less than 0.1 square meters per capita.
The study revealed that areas with more paved roads — typically a marker of economic development — tend to have more trees and grass. More significantly, in eight different union councils studied, areas with more vegetation showed markedly lower surface temperatures, demonstrating green spaces' role in cooling cities.
Singapore offers a contrast, showing what's possible with deliberate planning. Despite having a similar population density to Karachi, it provides 9.9 square meters of green space per person, exceeding the WHO target.
The researchers have made their methodology public, though applying it to other cities would require retraining the system on local satellite imagery.
This study adds to Chunara’s body of work developing computational and statistical methods, including data mining and machine learning, to understand social determinants of health and health disparities. Prior studies include using social media posts to map neighborhood-level systemic racism and homophobia and assess their mental health impact, as well as analyzing electronic health records to understand telemedicine access disparities during COVID-19.
In addition to Chunara, the paper’s authors are Miao Zhang, a Ph.D. candidate in NYU Tandon’s Department of Computer Science and Engineering and VIDA; and Hajra Arshad, Manzar Abbas, Hamzah Jehanzeb, Izza Tahir, Javerya Hassan and Zainab Samad from The Aga Khan University's Department of Medicine in Karachi. Samad also holds an appointment in The Aga Khan University’s CITRIC Health Data Science Center.
Funding for the study was provided by the National Science Foundation and National Institutes of Health.
Miao Zhang, Hajra Arshad, Manzar Abbas, Hamzah Jehanzeb, Izza Tahir, Javerya Hassan, Zainab Samad, and Rumi Chunara. 2025. Quantifying greenspace with satellite images in Karachi, Pakistan using a new data augmentation paradigm. ACM J. Comput. Sustain. Soc. Just Accepted (February 2025). https://doi.org/10.1145/3716370
Research reveals economic ripple effects of business closures, remote work and other disruptions
With remote and hybrid work now an established norm, many restaurants located adjacent to office buildings are facing a permanent decline in foot traffic. But how will this behavioral shift ripple through businesses along commute routes? Does it trigger a chain reaction that extends far beyond the immediate vicinity of a commercial hub?
In a new paper published in Nature Human Behaviour, a team of researchers led by NYU Tandon School of Engineering’s Takahiro Yabe and Northeastern University’s Esteban Moro has shown how connections between businesses stretch far beyond physical proximity when human behavior data is factored into the equation. The result shows that businesses — from gas stations to laundromats — can see large changes in their revenues, even if they're not located in major business districts.
“Urban science views cities as complex adaptive systems, rather than entities that can be engineered with straightforward solutions,” says Yabe, Assistant Professor at the Department of Technology Management and Innovation and the Center for Urban Science and Progress. “Our research contributes to understanding how changes in urban environments influence human behavior and economic dynamics. By focusing on dependencies between businesses and points of interest, we can help cities design more effective, equitable policies and infrastructure.”
Traditional models for measuring the interdependence of businesses largely rely on their physical proximity to one another. The research team, which also included researchers from the University of Pittsburgh and MIT, analyzed anonymized mobile phone data from over a million devices across New York, Boston, Los Angeles, Seattle, and Dallas, tracking how people move between businesses and other points of interest throughout the day. This allowed the researchers to create detailed "dependency networks" showing how different establishments rely on each other's customer base – and how far disruptions can spread.
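As a toy illustration of the dependency-network idea (the data and weighting below are invented, not the study's method), one can count how often visitors move from one establishment to another:

```python
from collections import Counter

def dependency_network(trajectories):
    """Count directed place-to-place transitions across visit sequences.
    Heavily traveled edges suggest one place feeds customers to another."""
    edges = Counter()
    for visits in trajectories:
        edges.update(zip(visits, visits[1:]))
    return edges

# Toy daily trajectories: office workers stopping at nearby businesses.
trajectories = [
    ["office", "coffee_shop", "lunch_spot", "office", "gym"],
    ["office", "coffee_shop", "office", "lunch_spot"],
    ["airport", "hotel", "restaurant"],
]
net = dependency_network(trajectories)

# If the office closes, places with heavy inbound edges from it lose traffic:
at_risk = {dst: n for (src, dst), n in net.items() if src == "office"}
print(at_risk)  # {'coffee_shop': 2, 'gym': 1, 'lunch_spot': 1}
```

Note how the coffee shop depends on the office even though no distance information was used; that is what lets such networks capture effects far beyond a closed establishment's immediate neighbors.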
This integration allowed them to refine predictions of business resilience during disruptions — such as those triggered by the COVID-19 pandemic — boosting accuracy by a staggering 40% compared to traditional models that relied solely on geographic proximity.
These networks revealed surprising patterns. While traditional models focused mainly on immediate neighbors – like a coffee shop next to a closed office building – the reality is far more complex. The study found that airports can significantly impact businesses up to 2.5-3.5 kilometers away, while even supercenters and colleges influence businesses within a 1.5-kilometer radius.
The researchers were also able to show how different types of establishments create varying ripple patterns. While shopping malls and colleges tend to have strong but localized effects, airports, stadiums, and theme parks can send economic shockwaves across entire urban areas. Perhaps most surprisingly, arts venues, restaurants, and service businesses can experience substantial impacts even when they're far from the source of the disruption. This can provide key takeaways for people to design and run cities.
“This network has significant potential for urban planners and policymakers,” says Yabe. “For example, organizations like Business Improvement Districts (BIDs) can use it to identify synergies between parks and surrounding businesses, optimizing economic growth. Planners can also simulate the impacts of interventions, like congestion pricing or new infrastructure, to anticipate ripple effects throughout the city.”
The publication is accompanied by an interactive visual dashboard, inviting users to simulate disruptions and observe how they impact a city’s economic landscape. Through a detailed map of POIs in Boston, users can see exactly how much connectivity individual businesses have with those in their communities and beyond, and explore how business closures affect points far beyond their blocks or neighborhoods.
This research builds on previous work by Yabe and his colleagues. Last year, they started a National Science Foundation-funded project on how EV chargers affect dining, shopping, and other activity patterns, aiming to provide policymakers with tools to support small and medium-sized businesses through judicious charger placement. The project explores how and where charging stations should be placed to not only meet drivers’ needs but also enhance the economic resilience of local businesses and promote social equity.
The improved accuracy of business resilience predictions during disruptions such as pandemics or climate change-induced natural disasters is a crucial development for urban planners and policymakers working to strengthen the economic stability of cities. This insight makes a compelling case for shifting from a place-based to a network-based approach to planning and managing urban economies — one that recognizes that the health of a city's economy is a web of interconnected threads. The closure of an office or museum may seem like an isolated event, but within the tightly woven fabric of urban economies, it can reverberate with far-reaching effects.
“Forever chemicals” in wastewater far more widespread than previously known, new multi-university study reveals
The "forever chemicals" flowing from U.S. wastewater treatment plants are not only more abundant than previously thought, but also largely consist of pharmaceuticals that have received little scientific or regulatory attention, a new multi-university study reveals.
The research, published in PNAS – and covered by The New York Times and The Washington Post, among other outlets – found that common prescription drugs make up about 75% of the organic fluorine in wastewater entering treatment plants, and 62% in treated water released to the environment.
These findings suggest millions of Americans could be exposed to these persistent chemicals through their drinking water.
"We've been focused on a small subset of these chemicals, but that's just the tip of the iceberg," said Bridger J. Ruyle, an incoming Assistant Professor in NYU Tandon School of Engineering’s Civil and Urban Engineering department and the study’s lead author. “The research shows that even advanced wastewater treatment removes less than 25% of these compounds before they're discharged into rivers and streams.”
Of particular concern is that six forever chemicals recently regulated by the Environmental Protection Agency in drinking water make up only about 8% of the organic fluorine found in wastewater effluent. The remainder consists largely of fluorinated pharmaceuticals and other compounds that aren't currently regulated.
Using a national model that tracks how wastewater moves through U.S. waterways, the researchers estimate that during normal river conditions, about 15 million Americans receive drinking water containing levels of these compounds above regulatory limits. During drought conditions, that number could rise to 23 million people.
The study examined eight large wastewater treatment facilities serving metropolitan areas across the United States. These facilities are similar to those serving about 70% of the U.S. population, suggesting the findings have broad national implications.
"What's particularly troubling is that these fluorinated pharmaceuticals are designed to be biologically active at very low doses," said Ruyle. "We don't yet understand the public health implications of long-term exposure to these compounds through drinking water."
The research comes at a critical time, as about 20% of all pharmaceuticals now contain fluorine. While this chemical element makes drugs more effective by helping them persist in the body longer, that same persistence means they don't break down in the environment.
The findings suggest that current regulatory approaches focusing on individual chemicals may be insufficient to address the complex mixture of fluorinated compounds in wastewater. The study also highlights how water scarcity could exacerbate the problem. In regions experiencing drought or implementing water conservation measures through wastewater reuse, there's less dilution of these chemicals before they reach drinking water intakes.
"These results emphasize the urgent need to reduce ongoing sources of these chemicals and evaluate the long-term effects of fluorinated pharmaceuticals in our water supply," said Ruyle. "We can't just focus on the handful of compounds we've studied extensively while ignoring the majority of what's actually out there. We need a more comprehensive approach to regulation and increased attention to the ecological and public health impacts of fluorinated pharmaceuticals."
The paper’s publication comes on the heels of Ruyle's November 2024 testimony to New York State lawmakers warning about the threats of forever chemicals passing through water treatment plants. The new findings provide detailed evidence supporting his concerns about the potential prevalence of these compounds in downstream drinking water supplies.
In addition to Ruyle, the paper's authors are Emily Pennoyer, Thomas Webster, and Wendy Heiger-Bernays from Boston University School of Public Health; Simon Vojta, Jitka Becanova, and Rainer Lohmann from the University of Rhode Island's Graduate School of Oceanography; Minhazul Islam and Paul Westerhoff from Arizona State University; Charles Schaefer from CDM Smith in New Jersey; and Elsie Sunderland, who holds appointments at Harvard's School of Engineering and Applied Sciences, Department of Earth and Planetary Sciences, and School of Public Health.
Financial support for this work was provided by the National Institute for Environmental Health Science Superfund Research Program (P42ES027706) and the Water Research Foundation (Project 5031). This study was also supported by contributions from the anonymous participating wastewater treatment facilities.
B.J. Ruyle, E.H. Pennoyer, S. Vojta, J. Becanova, M. Islam, T.F. Webster, W. Heiger-Bernays, R. Lohmann, P. Westerhoff, C.E. Schaefer, E.M. Sunderland, High organofluorine concentrations in municipal wastewater affect downstream drinking water supplies for millions of Americans, Proc. Natl. Acad. Sci. U.S.A.
New virtual reality-tested system shows promise in aiding navigation of people with blindness or low vision
A new study offers hope for people who are blind or have low vision (pBLV) through an innovative navigation system that was tested using virtual reality. The system, which combines vibrational and sound feedback, aims to help users navigate complex real-world environments more safely and effectively.
The research from NYU Tandon School of Engineering, published in JMIR Rehabilitation and Assistive Technology, advances work from John-Ross Rizzo, Maurizio Porfiri and colleagues toward developing a first-of-its-kind wearable system to help pBLV navigate their surroundings independently.
“Traditional mobility aids have key limitations that we want to overcome,” said Fabiana Sofia Ricci, the paper’s lead author and a Ph.D. candidate in NYU Tandon’s Department of Biomedical Engineering (BME) and at NYU Tandon’s Center for Urban Science + Progress (CUSP). “White canes only detect objects through contact and miss obstacles outside their range, while guide dogs require extensive training and are costly. As a result, only 2 to 8 percent of visually impaired Americans use either aid.”
In this study, the research team miniaturized the haptic feedback hardware of its earlier backpack-based system into a discreet belt equipped with 10 precision vibration motors. The belt's electronic components, including a custom circuit board and microcontroller, fit into a simple waist bag, a crucial step toward making the technology practical for real-world use.
The system provides two types of sensory feedback: vibrations through the belt indicate obstacle location and proximity, while audio beeps through a headset become more frequent as users approach obstacles in their path.
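The article does not specify the exact mapping from sensor readings to feedback, but the two channels described can be sketched as simple functions: obstacle distance drives the beep interval (closer means more frequent), and obstacle bearing selects one of the belt's 10 motors. The ranges, linear mapping, and function names below are all illustrative assumptions, not the team's implementation.

```python
def beep_interval_s(distance_m, max_range_m=3.0,
                    min_interval=0.1, max_interval=1.0):
    """Closer obstacles produce more frequent beeps (a shorter interval).

    Linear mapping, clamped to an assumed 3-meter working range.
    """
    d = max(0.0, min(distance_m, max_range_m))
    frac = d / max_range_m  # 0 at contact, 1 at the edge of range
    return min_interval + frac * (max_interval - min_interval)

def motor_index(bearing_deg, n_motors=10):
    """Map an obstacle's bearing (-90..90 degrees) to one of 10 belt motors."""
    b = max(-90.0, min(bearing_deg, 90.0))
    return min(n_motors - 1, int((b + 90.0) / 180.0 * n_motors))
```

Under this sketch, an obstacle dead ahead activates a motor near the middle of the belt while the beep rate rises smoothly as the user approaches it.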
"We want to reach a point where the technology we’re building is light, largely unseen and has all the necessary performance required for efficient and safe navigation," said Rizzo, who is an associate professor in NYU Tandon’s BME department, associate director of NYU WIRELESS, affiliated faculty at CUSP and associate professor in the Department of Rehabilitation Medicine at NYU Grossman School of Medicine.
"The goal is something you can wear with any type of clothing, so people are not bothered in any way by the technology."
The researchers tested the technology by recruiting 72 participants with normal vision, who wore Meta Quest 2 VR headsets and haptic feedback belts while walking around NYU's Media Commons at 370 Jay Street in Downtown Brooklyn, an empty room with only side curtains.
Through their headsets, the participants experienced a virtual subway station as someone with advanced glaucoma would see it: with reduced peripheral vision, blurred details, and altered color perception. The environment, created with Unity gaming software to match the room's exact dimensions, allowed the team to determine how well participants could navigate using the belt's vibrations and audio feedback when their vision was impaired.
"We worked with mobility specialists and NYU Langone ophthalmologists to design the VR simulation to accurately recreate advanced glaucoma symptoms," says Porfiri, the paper’s senior author, CUSP Director and an Institute Professor in NYU Tandon’s Departments of BME and Mechanical and Aerospace Engineering. "Within this environment, we included common transit challenges that visually impaired people face daily: broken elevators, construction zones, pedestrian traffic, and unexpected obstacles."
Results showed that haptic feedback significantly reduced collisions with obstacles, while audio cues helped users move more smoothly through space. Future studies will involve individuals with actual vision loss.
The technology complements the functionality of Commute Booster, a mobile app being developed by a Rizzo-led team to provide pBLV navigation guidance inside subway stations. Commute Booster “reads” station signage and tells users where to go, while the haptic belt could help those users avoid obstacles along the way.
In December 2023, the National Science Foundation (NSF) awarded Rizzo, Porfiri, and a team of NYU colleagues a $5 million grant via its Convergence Accelerator, a program whose mission includes supporting the development of assistive and rehabilitative technologies. That grant, along with others from NSF, funded this research and also supports Commute Booster’s development. In addition to Ricci, Rizzo and Porfiri, Lorenzo Liguori and Eduardo Palermo are the paper’s authors, both from the Department of Mechanical and Aerospace Engineering of Sapienza University of Rome, Italy.
Ricci F, Liguori L, Palermo E, Rizzo J, Porfiri M
Navigation Training for Persons With Visual Disability Through Multisensory Assistive Technology: Mixed Methods Experimental Study
JMIR Rehabil Assist Technol 2024;11:e55776
DOI: 10.2196/55776
Out of thin air: Researchers create microchips capable of detecting and diagnosing diseases
In a world grappling with a multitude of health threats — ranging from fast-spreading viruses to chronic diseases and drug-resistant bacteria — the need for quick, reliable, and easy-to-use home diagnostic tests has never been greater. Imagine a future where these tests can be done anywhere, by anyone, using a device as small and portable as your smartwatch. To do that, you need microchips capable of detecting minuscule concentrations of viruses or bacteria in the air.
Now, new research from NYU Tandon faculty including Professor of Electrical and Computer Engineering Davood Shahrjerdi; Herman F. Mark Professor in Chemical and Biomolecular Engineering Elisa Riedo; and Giuseppe de Peppo, Industry Associate Professor in Chemical and Biomolecular Engineering, who was previously at Mirimus, shows it’s possible to develop and build microchips that can not only identify multiple diseases from a single cough or air sample, but can also be produced at scale.
“This study opens new horizons in the field of biosensing. Microchips, the backbone of smartphones, computers, and other smart devices, have transformed the way people communicate, entertain, and work. Similarly, today, our technology will allow microchips to revolutionize healthcare, from medical diagnostics to environmental health,” says Riedo.
“The innovative technology demonstrated in this article uses field-effect transistors (FETs) — miniature electronic sensors that directly detect biological markers and convert them into digital signals — offering an alternative to traditional color-based chemical diagnostic tests like home pregnancy tests,” said Shahrjerdi. “This advanced approach enables faster results, testing for multiple diseases simultaneously, and immediate data transmission to healthcare providers.” Shahrjerdi is also the Director of the NYU Nanofabrication Cleanroom, a state-of-the-art facility where some of the chips used in this study were fabricated. Riedo and Shahrjerdi are also the co-directors of the NYU NanoBioX initiative.
Field-effect transistors, a staple of modern electronics, are emerging as powerful tools in this quest for diagnostic instruments. These tiny devices can be adapted to function as biosensors, detecting specific pathogens or biomarkers in real time, without the need for chemical labels or lengthy lab procedures. By converting biological interactions into measurable electrical signals, FET-based biosensors offer a rapid and versatile platform for diagnostics.
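At its simplest, the readout principle described here is a change detection problem: a binding event on the sensor surface shifts the transistor's channel current away from its baseline. The sketch below illustrates that idea only; the threshold rule, units, and function names are assumptions for illustration, and real readout chains involve calibration, drift correction, and reference sensors.

```python
import statistics

def detect_binding(baseline_currents_na, live_current_na, n_sigma=5.0):
    """Flag a binding event when the FET's drain current (in nA)
    deviates from its baseline mean by more than n_sigma standard
    deviations of the baseline noise."""
    mean = statistics.mean(baseline_currents_na)
    sigma = statistics.stdev(baseline_currents_na)
    return abs(live_current_na - mean) > n_sigma * sigma

# Quiet baseline around 100 nA, then a large shift after exposure.
baseline = [100.0, 100.1, 99.9, 100.05, 99.95]
print(detect_binding(baseline, 102.0))   # large shift -> event
print(detect_binding(baseline, 100.02))  # within noise -> no event
```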
Recent advancements have pushed the detection capabilities of FET biosensors to incredibly small levels — down to femtomolar concentrations, or one quadrillionth of a mole — by incorporating nanoscale materials such as nanowires, indium oxide, and graphene. Yet, despite their potential, FET-based sensors still face a significant challenge: they struggle to detect multiple pathogens or biomarkers simultaneously on the same chip. Current methods for customizing these sensors, such as drop-casting bioreceptors like antibodies onto the FET’s surface, lack the precision and scalability required for more complex diagnostic tasks.
To address this, these researchers are exploring new ways to modify FET surfaces, allowing each transistor on a chip to be tailored to detect a different biomarker. This would enable parallel detection of multiple pathogens.
Enter thermal scanning probe lithography (tSPL), a breakthrough technology that may hold the key to overcoming these barriers. This technique allows for the precise chemical patterning of a polymer-coated chip, enabling the functionalization of individual FETs with different bioreceptors, such as antibodies or aptamers, at resolutions as fine as 20 nanometers. This is on par with the tiny size of transistors in today’s advanced semiconductor chips. By allowing for highly selective modification of each transistor, this method opens the door to the development of FET-based sensors that can detect a wide variety of pathogens on a single chip, with unparalleled sensitivity.
Riedo, who was instrumental in the development and proliferation of tSPL technology, sees its use here to be further evidence of the groundbreaking way this nanofabrication technique can be used in practical applications. “tSPL, now a commercially available lithographic technology, has been key to functionalize each FET with different bio-receptors in order to achieve multiplexing,” she says.
In tests, FET sensors functionalized using tSPL have shown remarkable performance, detecting SARS-CoV-2 spike proteins at concentrations as low as 3 attomolar (aM) and as few as 10 live virus particles per milliliter, while effectively distinguishing between different types of viruses, including influenza A. The ability to reliably detect such minute quantities of pathogens with high specificity is a critical step toward creating portable diagnostic devices that could one day be used in a variety of settings, from hospitals to homes.
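To put "attomolar" in perspective, the molecule count it implies follows directly from Avogadro's number. This back-of-the-envelope conversion is ours, not a figure from the paper:

```python
AVOGADRO = 6.02214076e23  # molecules per mole

def molecules_per_ml(molar_concentration):
    """Convert a molar concentration (mol/L) to molecules per milliliter."""
    return molar_concentration * AVOGADRO / 1000.0

# 3 attomolar = 3e-18 mol/L
print(molecules_per_ml(3e-18))  # ~1,800 spike proteins per mL
```

A few thousand target molecules in a milliliter of sample is an extraordinarily sparse signal for any sensor to pick out.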
The study, now published by the Royal Society of Chemistry in Nanoscale, was supported by Mirimus, a Brooklyn-based biotechnology company, and Lendlease, a multinational construction and real estate company based in Australia. The two companies are working with the NYU Tandon team to develop illness-detecting wearables and home devices, respectively.
“This research shows off the power of the collaboration between industry and academia, and how it can change the face of modern medicine,” says Prem Premsrirut, President and CEO of Mirimus. “NYU Tandon’s researchers are producing work that will play a large role in the future of disease detection.”
“Companies such as Lendlease and other developers involved in urban regeneration are searching for innovative solutions like this to sense biological threats in buildings,” says Alberto Sangiovanni Vincentelli of UC Berkeley, a collaborator on the project. “Biodefense measures like this will be a new infrastructural layer for the buildings of the future.”
As semiconductor manufacturing continues to advance, integrating billions of nanoscale FETs onto microchips, the potential for using these chips in biosensing applications is becoming increasingly feasible. A universal, scalable method for functionalizing FET surfaces at nanoscale precision would enable the creation of sophisticated diagnostic tools, capable of detecting multiple diseases in real time, with the kind of speed and accuracy that could transform modern medicine.
Discovery of new growth-directed graphene stacking domains may precede new era for quantum applications
Graphene, a single layer of carbon atoms arranged in a two-dimensional honeycomb lattice, is known for its exceptional properties: incredible strength (about 200 times stronger than steel), light weight, flexibility, and excellent conduction of electricity and heat. These properties have made graphene increasingly important in applications across various fields, including electronics, energy storage, medical technology, and, most recently, quantum computing.
Graphene’s quantum properties, such as superconductivity and other unique quantum behaviors, are known to arise when graphene atomic layers are stacked and twisted with precision to produce “ABC stacking domains.” Historically, achieving ABC stacking domains required exfoliating graphene and manually twisting and aligning layers with exact orientations—a highly intricate process that is difficult to scale for industrial applications.
Now, researchers at NYU Tandon School of Engineering led by Elisa Riedo, Herman F. Mark Professor in Chemical and Biomolecular Engineering, have uncovered a new phenomenon in graphene research, observing growth-induced self-organized ABA and ABC stacking domains that could kick-start the development of advanced quantum technologies. The findings, published in a recent study in the Proceedings of the National Academy of Sciences (PNAS), demonstrate how specific stacking arrangements in three-layer epitaxial graphene systems emerge naturally — eliminating the need for complex, non-scalable techniques traditionally used in graphene twisting fabrication.
These researchers, including Martin Rejhon, previously a post-doctoral fellow at NYU, have now observed the self-assembly of ABA and ABC domains within a three-layer epitaxial graphene system grown on silicon carbide (SiC). Using advanced conductive atomic force microscopy (AFM), the team found that these domains form naturally without the need for manual twisting or alignment. This spontaneous organization represents a significant step forward in the fabrication of graphene stacking domains.
The size and shape of these stacking domains are influenced by the interplay of strain and the geometry of the three-layer graphene regions. Some domains form as stripe-like structures, tens of nanometers wide and extending over microns, offering promising potential for future applications.
“In the future we could control the size and location of these stacking patterns through pregrowth patterning of the SiC substrate,” Riedo said.
These self-assembled ABA/ABC stacking domains could lead to transformative applications in quantum devices. Their stripe-shaped configurations, for example, are well-suited for enabling unconventional quantum Hall effects, superconductivity, and charge density waves. Such breakthroughs pave the way for scalable electronic devices leveraging graphene's quantum properties.
This discovery marks a major leap in graphene research, bringing scientists closer to realizing the full potential of this remarkable material in next-generation electronics and quantum technologies.
The funding for this research came from the Army Research Office, a directorate of U.S. Army Combat Capabilities Development Command Army Research Laboratory under Award # W911NF2020116. This research also included researchers from Charles University, Prague.