Prabodh Panindre holds a PhD in Mechanical Engineering and an MBA from New York University. His scholarly focus includes artificial intelligence, fire science and firefighter safety research, optics, heat transfer, nanotechnology, and microfluidics. He is an Honorary Fellow of the Institution of Fire Engineers.
His research group has received more than $8.5 million in grants from the U.S. Department of Homeland Security for fire prevention and safety research. He led a team of NYU researchers on the "Wind-Driven High-Rise Fires" project with the Fire Department of New York (FDNY) and the National Institute of Standards and Technology (NIST), which produced revolutionary changes in many of FDNY’s long-established tactics. The new firefighting procedures developed through this research have been implemented by FDNY in several real fires in New York City. This research was featured on the cover of ASME (American Society of Mechanical Engineers) Magazine.
He also led the research that developed an innovative training methodology to disseminate firefighter safety research and educate firefighters in the most effective manner. This training has been used by more than 75,000 firefighters from all 50 U.S. states and has been officially adopted by more than 1,000 fire departments nationwide. His work has been covered by more than 500 news outlets across the globe, including The New York Times, the New York Daily News, Yahoo, Reuters, United Press International, and the National Volunteer Fire Council. He has been interviewed by several TV news channels, including NBC News, ABC News, News 12, and PIX11 News.
Research News
NYU Tandon researchers develop new AI system that leverages standard security cameras to detect fires in seconds, and could transform emergency response
Fire kills nearly 3,700 Americans annually and causes $23 billion in property damage, with many deaths occurring because traditional smoke detectors fail to alert occupants in time.
Now, the NYU Fire Research Group at NYU Tandon School of Engineering has developed an artificial intelligence system that could significantly improve fire safety by detecting fires and smoke in real-time using ordinary security cameras already installed in many buildings.
Published in the IEEE Internet of Things Journal, the research demonstrates a system that can analyze video footage and identify fires within 0.016 seconds per frame—faster than the blink of an eye—potentially providing crucial extra minutes for evacuation and emergency response. Unlike conventional smoke detectors that require significant smoke buildup and proximity to activate, this AI system can spot fires in their earliest stages from video alone.
"The key advantage is speed and coverage," explained lead researcher Prabodh Panindre, Research Associate Professor in NYU Tandon’s Department of Mechanical and Aerospace Engineering (MAE). "A single camera can monitor a much larger area than traditional detectors, and we can spot fires in the initial stages before they generate enough smoke to trigger conventional systems."
The need for improved fire detection technology is evident from concerning statistics: 11% of residential fire fatalities occur in homes where smoke detectors failed to alert occupants, either due to malfunction or the complete absence of detectors. Moreover, modern building materials and open floor plans have made fires spread faster than ever before, with structural collapse times significantly reduced compared to legacy construction.
The NYU Tandon research team developed an ensemble approach that combines multiple state-of-the-art AI algorithms. Rather than relying on a single AI model that might mistake a red car or sunset for fire, the system requires agreement between multiple algorithms before confirming a fire detection, substantially reducing false alarms, a critical consideration in emergency situations.
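To make the voting idea concrete, here is a minimal sketch of how agreement between independent detectors can gate an alert. The detector interface, vote count, and confidence threshold are illustrative assumptions, not details from the published system:

```python
# Illustrative sketch of ensemble agreement for fire detection.
# The detector interface and thresholds are hypothetical; the published
# system's actual models and voting rule may differ.

def ensemble_fire_detected(frame, detectors, min_votes=2, conf_threshold=0.5):
    """Confirm a fire only when enough independent models agree.

    frame          -- a single video frame (e.g., a NumPy image array)
    detectors      -- list of models, each returning a confidence in [0, 1]
    min_votes      -- how many models must agree before raising an alert
    conf_threshold -- per-model confidence needed to count as a vote
    """
    votes = sum(1 for model in detectors if model.predict(frame) >= conf_threshold)
    return votes >= min_votes
```

Requiring consensus trades a small amount of sensitivity for a large reduction in false positives, which is the design trade-off the researchers describe.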
The researchers trained their models by building a comprehensive custom image dataset representing all five classes of fires recognized by the National Fire Protection Association, from ordinary combustible materials to electrical fires and cooking-related incidents. The system achieved notable accuracy rates, with the best-performing model combination reaching 80.6% detection accuracy.
The system incorporates temporal analysis to differentiate between actual fires and static fire-like objects that could trigger false alarms. By monitoring how the size and shape of detected fire regions change over consecutive video frames, the algorithm can distinguish between a real, growing fire and a static image of flames, such as a poster on a wall. "Real fires are dynamic, growing and changing shape," explained Sunil Kumar, Professor of MAE. "Our system tracks these changes over time, achieving 92.6% accuracy in eliminating false detections."
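The sketch below shows one hypothetical way to encode that rule of thumb, tracking relative change in the detected region's area across frames. The threshold and interface are assumptions, not the paper's actual algorithm:

```python
# Hypothetical sketch of temporal verification: a static fire-like object
# (e.g., a poster of flames) keeps a nearly constant detection-box area,
# while a real fire's area drifts from frame to frame.

def is_dynamic_fire(box_areas, min_relative_change=0.05):
    """Return True if the detected region's area varies enough over time.

    box_areas -- areas of the detected fire region in consecutive frames
    """
    if len(box_areas) < 2:
        return False  # not enough history to judge
    changes = [
        abs(b - a) / max(a, 1e-6)
        for a, b in zip(box_areas, box_areas[1:])
    ]
    # A real, growing fire should show sustained relative change.
    return sum(changes) / len(changes) >= min_relative_change
```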
The technology operates within a cloud-based Internet of Things architecture where multiple standard security cameras stream raw video to servers that perform AI analysis. When fire is detected, the system automatically generates video clips and sends real-time alerts via email and text message. This design means the technology can be implemented using existing CCTV infrastructure without requiring expensive hardware upgrades, an important advantage for widespread adoption.
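As a rough illustration of the notification step only, the following sketch sends an email alert once a fire is confirmed. The addresses, SMTP host, and message format are placeholders; the published system's alerting service, including its text-message channel, may work quite differently:

```python
# Placeholder sketch of the email-alert step in a cloud pipeline.
import smtplib
from email.message import EmailMessage

def send_fire_alert(camera_id, clip_path, smtp_host="localhost"):
    """Email responders when the ensemble confirms a fire on a camera."""
    msg = EmailMessage()
    msg["Subject"] = f"FIRE DETECTED on camera {camera_id}"
    msg["From"] = "alerts@example.org"      # placeholder sender
    msg["To"] = "responder@example.org"     # placeholder recipient
    msg.set_content(
        f"Fire confirmed on camera {camera_id}. Video clip saved at: {clip_path}"
    )
    with smtplib.SMTP(smtp_host) as server:
        server.send_message(msg)
```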
This technology can also be integrated into drones (unmanned aerial vehicles, or UAVs) to search for wildfires in remote forested areas. Early-stage wildfire detection would buy critical hours in the race to contain and extinguish such fires, enabling faster dispatch of resources and prioritized evacuation orders that dramatically reduce ecological and property loss.
To improve the safety of firefighters and assist during fire response, the same detection system can be embedded into the tools firefighters already carry: helmet cameras, thermal imagers, and vehicle-mounted cameras, as well as into autonomous firefighting robots. In urban areas, UAVs integrated with this technology can help the fire service perform a 360-degree size-up, especially when fire is on the higher floors of a high-rise structure.
“It can remotely assist us in confirming the location of the fire and possibility of trapped occupants,” said Capt. John Ceriello from the Fire Department of New York City.
Beyond fire detection, the researchers note their approach could be adapted for other emergency scenarios such as security threats or medical emergencies, potentially expanding how we monitor and respond to various safety risks in our society.
In addition to Panindre and Kumar, the research team includes Nanda Kalidindi (’18 MS Computer Science, NYU Tandon), Shantanu Acharya (’23 MS Computer Science, NYU), and Praneeth Thummalapalli (’25 MS Computer Science, NYU Tandon).
P. Panindre, S. Acharya, N. Kalidindi and S. Kumar, "Artificial Intelligence-Integrated Autonomous IoT Alert System for Real-Time Remote Fire and Smoke Detection in Live Video Streams," in IEEE Internet of Things Journal, doi: 10.1109/JIOT.2025.3598979.
AI food scanner turns phone photos into nutritional analysis
Snap a photo of your meal, and artificial intelligence instantly tells you its calorie count, fat content, and nutritional value — no more food diaries or guesswork.
This futuristic scenario is now much closer to reality, thanks to an AI system developed by NYU Tandon School of Engineering researchers that promises a new tool for the millions of people who want to manage their weight, diabetes and other diet-related health conditions.
The technology, detailed in a paper presented at the 6th IEEE International Conference on Mobile Computing and Sustainable Informatics, uses advanced deep-learning algorithms to recognize food items in images and calculate their nutritional content, including calories, protein, carbohydrates and fat.
For over a decade, NYU's Fire Research Group, which includes the paper's lead author Prabodh Panindre and co-author Sunil Kumar, has studied critical firefighter health and operational challenges. Several research studies show that 73-88% of career and 76-87% of volunteer firefighters are overweight or obese, facing increased cardiovascular and other health risks that threaten operational readiness. These findings directly motivated the development of their AI-powered food-tracking system.
"Traditional methods of tracking food intake rely heavily on self-reporting, which is notoriously unreliable," said Panindre, Associate Research Professor of NYU Tandon School of Engineering’s Department of Mechanical Engineering. "Our system removes human error from the equation."
Despite the apparent simplicity of the concept, developing reliable food recognition AI has stumped researchers for years. Previous attempts struggled with three fundamental challenges that the NYU Tandon team appears to have overcome.
"The sheer visual diversity of food is staggering," said Kumar, Professor of Mechanical Engineering at NYU Abu Dhabi and Global Network Professor of Mechanical Engineering at NYU Tandon. "Unlike manufactured objects with standardized appearances, the same dish can look dramatically different based on who prepared it. A burger from one restaurant bears little resemblance to one from another place, and homemade versions add another layer of complexity."
Earlier systems also faltered when estimating portion sizes — a crucial factor in nutritional calculations. The NYU team's advance is their volumetric computation function, which uses advanced image processing to measure the exact area each food occupies on a plate.
The system correlates the area occupied by each food item with density and macronutrient data to convert 2D images into nutritional assessments. This integration of volumetric computations with the AI model enables precise analysis without manual input, solving a longstanding challenge in automated dietary tracking.
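A stripped-down sketch of this idea follows: map a detected region's pixel area to physical area, then to mass via a per-food density, then to macronutrients. All values in the lookup table are invented placeholders, and the paper's volumetric computation is more involved than this:

```python
# Illustrative conversion from detected 2-D food area to macronutrients.
# The density and nutrient values below are made-up placeholders.

FOOD_DATA = {
    # grams of food per square centimeter of plate area, plus
    # macronutrients per 100 g (hypothetical numbers)
    "pizza": {"g_per_cm2": 1.1,
              "per_100g": {"kcal": 266, "protein_g": 11,
                           "carbs_g": 33, "fat_g": 10}},
}

def estimate_nutrition(food, area_px, px_per_cm2):
    """Convert a detected region's pixel area into a nutrition estimate."""
    area_cm2 = area_px / px_per_cm2                   # pixels -> physical area
    grams = area_cm2 * FOOD_DATA[food]["g_per_cm2"]   # area -> mass via density
    per100 = FOOD_DATA[food]["per_100g"]
    return {k: round(v * grams / 100, 1) for k, v in per100.items()}
```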
The third major hurdle has been computational efficiency. Previous models required too much processing power to be practical for real-time use, often necessitating cloud processing that introduced delays and privacy concerns.
The researchers used YOLOv8, a powerful image-recognition technology, together with ONNX Runtime (a tool that helps AI programs run more efficiently) to build a food-identification program that runs on a website rather than as a downloadable app. People can simply visit the site in their phone's web browser to analyze meals and track their diet.
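For readers curious about that pipeline, here is a minimal sketch of the export-then-serve pattern with Ultralytics YOLOv8 and ONNX Runtime. The model filename, input shape, and dummy input are assumptions rather than details from the paper:

```python
# Minimal sketch: export a trained YOLOv8 model to ONNX, then serve it
# with ONNX Runtime. "food_detector.pt" is a hypothetical model file.
from ultralytics import YOLO
import onnxruntime as ort
import numpy as np

# One-time step: export the trained detector to ONNX format.
YOLO("food_detector.pt").export(format="onnx")  # writes food_detector.onnx

# Serving step: lightweight inference without the full training stack.
session = ort.InferenceSession("food_detector.onnx")
input_name = session.get_inputs()[0].name

def detect(image_batch):
    """Run the detector on a preprocessed (1, 3, 640, 640) float32 image."""
    return session.run(None, {input_name: image_batch})[0]

dummy = np.zeros((1, 3, 640, 640), dtype=np.float32)
print(detect(dummy).shape)  # raw YOLO output tensor, to be post-processed
```

Serving the exported ONNX model rather than the full training framework is what keeps inference fast enough for a browser-accessible web app.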
When tested on a pizza slice, the system calculated 317 calories, 10 grams of protein, 40 grams of carbohydrates, and 13 grams of fat — nutritional values that closely matched reference standards. It performed similarly well when analyzing more complex dishes such as idli sambhar, a South Indian specialty featuring steamed rice cakes with lentil stew, for which it calculated 221 calories, 7 grams of protein, 46 grams of carbohydrates and just 1 gram of fat.
"One of our goals was to ensure the system works across diverse cuisines and food presentations," said Panindre. "We wanted it to be as accurate with a hot dog — 280 calories according to our system — as it is with baklava, a Middle Eastern pastry that our system identifies as having 310 calories and 18 grams of fat."
The researchers solved data challenges by combining similar food categories, removing food types with too few examples, and giving extra emphasis to certain foods during training. These techniques helped refine their training dataset from countless initial images to a more balanced set of 95,000 instances across 214 food categories.
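A toy illustration of those balancing steps appears below; the merge map and cutoff are invented for the example, and the real dataset curation was more extensive:

```python
# Hypothetical illustration of dataset balancing: merge similar
# categories and drop classes with too few examples.
from collections import Counter

MERGE = {"cheese_pizza": "pizza", "pepperoni_pizza": "pizza"}  # combine similar classes
MIN_EXAMPLES = 50                                              # drop rare classes

def balance(labels):
    """Return labels after merging similar classes and dropping rare ones."""
    merged = [MERGE.get(lbl, lbl) for lbl in labels]
    counts = Counter(merged)
    kept = [lbl for lbl in merged if counts[lbl] >= MIN_EXAMPLES]
    # Extra emphasis on selected foods would then be applied as per-class
    # weights during training (not shown here).
    return kept
```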
The technical performance metrics are impressive: the system achieved a mean Average Precision (mAP) score of 0.7941 at an Intersection over Union (IoU) threshold of 0.5. For non-specialists, this means the AI can accurately locate and identify food items approximately 80% of the time, even when they overlap or are partially obscured.
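For reference, IoU is simply the area of overlap between a predicted box and a ground-truth box divided by the area of their union. A small worked example:

```python
# Worked example of the IoU metric: intersection area over union area
# of two boxes given as (x1, y1, x2, y2).

def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle (zero width/height if the boxes don't overlap).
    iw = max(0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 0.1428... — below the 0.5 threshold
```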
The system has been deployed as a web application that works on mobile devices, making it potentially accessible to anyone with a smartphone. The researchers describe their current system as a "proof-of-concept" that could be refined and expanded for broader healthcare applications.
In addition to Panindre and Kumar, the paper's authors are Praneeth Kumar Thummalapalli and Tanmay Mandal, both master’s degree students in NYU Tandon’s Department of Computer Science and Engineering.