Farshad Khorrami
-
Professor (IEEE Fellow)
-
Co-Founder of Center for AI and Robotics (2021)
-
Co-Director of Center for AI and Robotics (2021-Present)
Education
The Ohio State University, 1988
Doctor of Philosophy, Electrical Engineering
The Ohio State University, 1984
Master of Science, Mathematics
The Ohio State University, 1984
Bachelor of Science, Electrical Engineering
The Ohio State University, 1982
Bachelor of Science, Mathematics
Publications
Journal Articles
http://crrl.poly.edu/Publications.htm
Authored/Edited Books
F. Khorrami, P. Krishnamurthy, and H. Melkote, Modeling and Adaptive Nonlinear Control of Electric Motors, Springer, Heidelberg, 2003.
Affiliations
Control/Robotics Research Laboratory, Director
https://crrl.engineering.nyu.edu/
Center for AI and Robotics (NYU Abu Dhabi), Co-Director
Affiliated Faculty at NYU Abu Dhabi
Research News
Large language models can execute complete ransomware attacks autonomously, NYU Tandon research shows
Criminals can use artificial intelligence, specifically large language models, to autonomously carry out ransomware attacks that steal personal files and demand payment, handling every step from breaking into computer systems to writing threatening messages to victims, according to new research from NYU Tandon School of Engineering.
The study serves as an early warning to help defenders prepare countermeasures before bad actors adopt these AI-powered techniques.
A simulated malicious AI system developed by the Tandon team carried out all four phases of ransomware attacks (mapping systems, identifying valuable files, stealing or encrypting data, and generating ransom notes) across personal computers, enterprise servers, and industrial control systems.
This system, which the researchers call "Ransomware 3.0," recently became widely known as "PromptLock," a name chosen by cybersecurity firm ESET when its experts discovered it on VirusTotal, an online platform where security researchers test whether files are detected as malicious.
The Tandon researchers had uploaded their prototype to VirusTotal during testing, and the files there appeared as functional ransomware code with no indication of their academic origin. ESET initially believed it had found the first AI-powered ransomware being developed by malicious actors. While the prototype is indeed the first known AI-powered ransomware, it is a proof of concept that is non-functional outside the contained lab environment.
"The cybersecurity community's immediate concern when our prototype was discovered shows how seriously we must take AI-enabled threats," said Md Raz, a doctoral candidate in the Electrical and Computer Engineering Department who is the lead author on the Ransomware 3.0 paper the team published publicly. "While the initial alarm was based on an erroneous belief that our prototype was in-the-wild ransomware and not laboratory proof-of-concept research, it demonstrates that these systems are sophisticated enough to deceive security experts into thinking they're real malware from attack groups."
The research methodology involved embedding written instructions within computer programs rather than traditional pre-written attack code. When activated, the malware contacts AI language models to generate Lua scripts customized for each victim's specific computer setup, using open-source models that lack the safety restrictions of commercial AI services.
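The orchestration pattern itself is easy to illustrate. Below is a minimal, deliberately harmless sketch of that loop, assuming a locally hosted open-source model served through Ollama's HTTP API (an assumption for illustration; the paper's actual prototype and model setup are not detailed here). A natural-language instruction embedded in the program is sent to the model, which returns a Lua script tailored to the request.

```python
# Minimal, benign sketch of prompt-embedded orchestration: what ships in the
# binary is natural-language text, not pre-written attack code.
# Assumes a local open-source model served via Ollama's HTTP API at
# http://localhost:11434 (an assumption; the prototype's setup may differ).
import requests

EMBEDDED_INSTRUCTION = (
    "Write a Lua script that lists the names of all files under the "
    "directory /tmp/sandbox and prints them one per line."
)

def generate_script(prompt: str) -> str:
    # One-shot generation call; each run may return different Lua code for
    # the same prompt, since model sampling is stochastic.
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "llama3", "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    lua_script = generate_script(EMBEDDED_INSTRUCTION)
    print(lua_script)  # in the prototype, generated Lua would be executed next
```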
Each execution produces unique attack code despite identical starting prompts, creating a major challenge for cybersecurity defenses. Traditional security software relies on detecting known malware signatures or behavioral patterns, but AI-generated attacks produce variable code and execution behaviors that could evade these detection systems entirely.
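A toy example shows why signature matching struggles here: two scripts that do exactly the same thing but differ in names and layout hash to entirely different values, so a blocklist of known-bad hashes catches neither variant. (A minimal illustration, not the paper's evaluation.)

```python
# Toy illustration: functionally identical scripts, different signatures.
import hashlib

variant_a = b'for f in io.popen("ls /tmp/sandbox"):lines() do print(f) end'
variant_b = (b'local h = io.popen("ls /tmp/sandbox")\n'
             b'for name in h:lines() do print(name) end')

for label, code in (("variant A", variant_a), ("variant B", variant_b)):
    print(label, hashlib.sha256(code).hexdigest()[:16])
# Same behavior, completely different SHA-256 digests: a hash blocklist
# that flags one variant says nothing about the other.
```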
Testing across three representative environments showed that both open-source AI models evaluated were highly effective at system mapping and correctly flagged 63-96% of sensitive files, depending on environment type. The AI-generated scripts proved cross-platform, running without modification on Windows and Linux desktop and server systems as well as embedded Raspberry Pi devices.
The economic implications reveal how AI could reshape ransomware operations. Traditional campaigns require skilled development teams, custom malware creation, and substantial infrastructure investments. The prototype consumed approximately 23,000 AI tokens per complete attack execution, equivalent to roughly $0.70 using commercial API services running flagship models. Open-source AI models eliminate these costs entirely.
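The dollar figure is simple token arithmetic; the sketch below assumes flagship-class pricing of roughly $30 per million tokens (an assumed rate, chosen only to be consistent with the reported ~$0.70).

```python
# Back-of-the-envelope cost per attack run, using an assumed flagship rate.
tokens_per_attack = 23_000   # reported average token usage per execution
price_per_million = 30.0     # assumed USD per 1M tokens (flagship-class)

cost = tokens_per_attack / 1_000_000 * price_per_million
print(f"~${cost:.2f} per complete attack")  # ~$0.69
```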
This cost reduction could enable less sophisticated actors to conduct advanced campaigns previously requiring specialized technical skills. The system's ability to generate personalized extortion messages referencing discovered files could increase psychological pressure on victims compared to generic ransom demands.
The researchers conducted their work under institutional ethical guidelines within controlled laboratory environments. The published paper provides critical technical details that can help the broader cybersecurity community understand this emerging threat model and develop stronger defenses.
The researchers recommend monitoring sensitive file access patterns, controlling outbound AI service connections, and developing detection capabilities specifically designed for AI-generated attack behaviors.
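A defender can start on the second recommendation with very little machinery. The sketch below is a minimal example with an illustrative endpoint list (not exhaustive, and not from the paper): it flags outbound connections whose destination matches known AI-inference services. A production control would live at the egress proxy or firewall instead.

```python
# Minimal egress-control sketch: flag destinations that look like
# AI-inference endpoints. Hostname list is illustrative, not exhaustive.
AI_ENDPOINT_SUFFIXES = (
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
)

def is_ai_endpoint(hostname: str) -> bool:
    host = hostname.lower().rstrip(".")
    return any(host == s or host.endswith("." + s) for s in AI_ENDPOINT_SUFFIXES)

def review_connection(hostname: str, process: str) -> None:
    if is_ai_endpoint(hostname):
        # In practice: alert, require allow-listing, or block at the proxy.
        print(f"ALERT: {process} -> {hostname} (AI service egress)")

review_connection("api.openai.com", "unknown_binary.exe")
```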
The paper's senior authors are Ramesh Karri, ECE Professor and department chair and a faculty member of the Center for Advanced Technology in Telecommunications (CATT) and the NYU Center for Cybersecurity, and Farshad Khorrami, ECE Professor and CATT faculty member. In addition to lead author Raz, the authors include ECE Ph.D. candidate Meet Udeshi, ECE Postdoctoral Scholar Venkata Sai Charan Putrevu, and ECE Senior Research Scientist Prashanth Krishnamurthy.
The work was supported by grants from the Department of Energy, National Science Foundation, and from the State of New York via Empire State Development's Division of Science, Technology and Innovation.
Raz, Md, et al. "Ransomware 3.0: Self-Composing and LLM-Orchestrated." arXiv, 28 Aug. 2025, doi.org/10.48550/arXiv.2508.20444.
NYU Tandon researchers develop AI agent that solves cybersecurity challenges autonomously
Artificial intelligence agents — AI systems that can work independently toward specific goals without constant human guidance — have demonstrated strong capabilities in software development and web navigation. Their effectiveness in cybersecurity has remained limited, however.
That may soon change, thanks to a research team from NYU Tandon School of Engineering, NYU Abu Dhabi and other universities that developed an AI agent capable of autonomously solving complex cybersecurity challenges.
The system, called EnIGMA, was presented this month at the International Conference on Machine Learning (ICML) 2025 in Vancouver, Canada.
"EnIGMA is about using Large Language Model agents for cybersecurity applications," said Meet Udeshi, a NYU Tandon Ph.D. student and co-author of the research. Udeshi is advised by Ramesh Karri, Chair of NYU Tandon's Electrical and Computer Engineering Department (ECE) and a faculty member of the NYU Center for Cybersecurity and NYU Center for Advanced Technology in Telecommunications (CATT), and by Farshad Khorrami, ECE professor and CATT faculty member. Both Karri and Khorrami are co-authors on the paper, with Karri serving as a senior author.
To build EnIGMA, the researchers started with an existing framework called SWE-agent, which was originally designed for software engineering tasks. However, cybersecurity challenges required specialized tools that didn't exist in previous AI systems. "We have to restructure those interfaces to feed it into an LLM properly. So we've done that for a couple of cybersecurity tools," Udeshi explained.
The key innovation was developing what they call "Interactive Agent Tools" that convert visual cybersecurity programs into text-based formats the AI can understand. Traditional cybersecurity tools like debuggers and network analyzers use graphical interfaces with clickable buttons, visual displays, and interactive elements that humans can see and manipulate.
"Large language models process text only, but these interactive tools with graphical user interfaces work differently, so we had to restructure those interfaces to work with LLMs," Udeshi said.
The team built their own dataset by collecting and structuring Capture The Flag (CTF) challenges specifically for large language models. These gamified cybersecurity competitions simulate real-world vulnerabilities and have traditionally been used to train human cybersecurity professionals.
"CTFs are like a gamified version of cybersecurity used in academic competitions. They're not true cybersecurity problems that you would face in the real world, but they are very good simulations," Udeshi noted.
Paper co-author Minghao Shao, a NYU Tandon Ph.D. student and Global Ph.D. Fellow at NYU Abu Dhabi who is advised by Karri and Muhammad Shafique, Professor of Computer Engineering at NYU Abu Dhabi and ECE Global Network Professor at NYU Tandon, described the technical architecture: "We built our own CTF benchmark dataset and created a specialized data loading system to feed these challenges into the model." Shafique is also a co-author on the paper.
The framework includes specialized prompts that provide the model with instructions tailored to cybersecurity scenarios.
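To make "structuring CTF challenges for an LLM" concrete: each challenge can become a small record (category, description, files, flag format) that a loader renders into a task prompt. The field names below are hypothetical, chosen for illustration; the benchmark's actual schema is described in the paper.

```python
# Hypothetical CTF challenge record and prompt rendering; field names are
# illustrative, not the benchmark's actual schema.
challenge = {
    "name": "baby-rev",
    "category": "reverse engineering",
    "description": "The binary checks a password. Recover the flag.",
    "files": ["./baby_rev"],
    "flag_format": "flag{...}",
}

PROMPT_TEMPLATE = """You are solving a Capture The Flag challenge.
Category: {category}
Description: {description}
Provided files: {files}
Submit the flag in the format {flag_format}."""

def render_prompt(ch: dict) -> str:
    return PROMPT_TEMPLATE.format(
        category=ch["category"],
        description=ch["description"],
        files=", ".join(ch["files"]),
        flag_format=ch["flag_format"],
    )

print(render_prompt(challenge))
```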
EnIGMA demonstrated superior performance across the board: tested on 390 CTF challenges spanning four benchmark suites, it achieved state-of-the-art results and solved more than three times as many challenges as previous AI agents.
The research was conducted approximately 12 months ago, when "Claude 3.5 Sonnet from Anthropic was the best model, and GPT-4o was second at that time," according to Udeshi.
The research also identified a previously unknown phenomenon called "soliloquizing," where the AI model generates hallucinated observations without actually interacting with the environment, a discovery that could have important consequences for AI safety and reliability.
Beyond this technical finding, the potential applications extend outside of academic competitions. "If you think of an autonomous LLM agent that can solve these CTFs, that agent has substantial cybersecurity skills that you can use for other cybersecurity tasks as well," Udeshi explained. The agent could potentially be applied to real-world vulnerability assessment, with the ability to "try hundreds of different approaches" autonomously.
The researchers acknowledge the dual-use nature of their technology. While EnIGMA could help security professionals identify and patch vulnerabilities more efficiently, it could also potentially be misused for malicious purposes. The team has notified representatives from major AI companies including Meta, Anthropic, and OpenAI about their results.
In addition to Karri, Khorrami, Shafique, Udeshi and Shao, the paper's authors are Talor Abramovich (Tel Aviv University), Kilian Lieret (Princeton University), Haoran Xi (NYU Tandon), Kimberly Milner (NYU Tandon), Sofija Jancheska (NYU Tandon), John Yang (Stanford University), Carlos E. Jimenez (Princeton University), Prashanth Krishnamurthy (NYU Tandon), Brendan Dolan-Gavitt (NYU Tandon), Karthik Narasimhan (Princeton University), and Ofir Press (Princeton University).
Funding for the research came from Open Philanthropy, Oracle, the National Science Foundation, the Army Research Office, the Department of Energy, and NYU Abu Dhabi Center for Cybersecurity and Center for Artificial Intelligence and Robotics.
Tracking Real-time Anomalies in Power Systems (TRAPS)
The researchers participating in this grant include Farshad Khorrami and Ramesh Karri, Professors of Electrical and Computer Engineering and, respectively, member and director of the NYU Center for Cybersecurity; and Research Scientist Prashanth Krishnamurthy.
A project to develop methods of securing the U.S. power grid from hackers, led by NYU Tandon researchers at the NYU Center for Cybersecurity, is one of six university teams receiving a portion of $12 million from the U.S. Department of Energy (DOE) to support research, development, and demonstration (RD&D) of novel cybersecurity technologies that help the power grid survive and recover quickly from cyberattacks.
The Tandon team received $1.94 million for the project from the DOE fund, with matching support from NYU bringing the total to around $2.8 million, to develop Tracking Real-time Anomalies in Power Systems (TRAPS) to detect and localize anomalies in power grid cyber-physical systems. Collaborators include SRI International, the New York Power Authority, and Consolidated Edison. TRAPS will correlate time series measurements from electrical signals, embedded computing devices, and network communications to detect anomalies using semantic mismatches between measurements, allowing it to perform cross-domain real-time integrity verification.
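The "semantic mismatch" idea can be made concrete with a toy check: if the control network reports a command that the electrical measurements contradict, the two domains disagree and an alert fires. The signal names and threshold below are invented for illustration; TRAPS itself correlates real multi-domain time series.

```python
# Toy cross-domain consistency check in the spirit of TRAPS: compare what
# the control network claims against what the electrical signals show.
# Signal names and the tolerance are illustrative only.

def check_semantic_mismatch(commanded_setpoint_mw: float,
                            measured_output_mw: float,
                            tolerance_mw: float = 5.0) -> bool:
    """Return True if the network-reported command and physics disagree."""
    return abs(commanded_setpoint_mw - measured_output_mw) > tolerance_mw

# Network traffic says the generator was told to produce 100 MW, but the
# electrical measurement shows 40 MW: flag the mismatch for localization.
if check_semantic_mismatch(commanded_setpoint_mw=100.0,
                           measured_output_mw=40.0):
    print("ALERT: command/measurement mismatch -- possible spoofed traffic")
```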
Administered by the DOE's Office of Cybersecurity, Energy Security, and Emergency Response (CESER), the strategic project aims to advance anomaly detection, artificial intelligence and machine learning, and physics-based analytics to strengthen the security of next-generation energy systems. These systems include components placed in substations to detect cyber intrusions more quickly and automatically block access to control functions.
The program aligns with the DOE’s larger goal of bolstering the security and resiliency of the power grid toward advancing President Biden’s goal of a 100% clean electrical grid by 2035 and net-zero carbon emissions by 2050.
Detection of Hardware Trojans Using Controlled Short-Term Aging
This research project is led by Department of Electrical and Computer Engineering Professors Farshad Khorrami and Ramesh Karri, who is co-founder and co-chair of the NYU Center for Cybersecurity, and Prashanth Krishnamurthy, a research scientist at NYU Tandon; and Jörg Henkel and Hussam Amrouch of the Computer Science Department of the Karlsruhe Institute of Technology (KIT), Karlsruhe, Germany.
The project builds upon ongoing research, funded by a $1.3 million grant from the Office of Naval Research, to create algorithms for detecting Trojans (deliberate flaws inserted into chips during fabrication) based on short-term aging phenomena in transistors.
It will focus on the physical phenomenon of short-term aging as a route to detecting hardware Trojans. The efficacy of short-term-aging-based hardware Trojan detection has been demonstrated in simulation on integrated circuits (ICs) containing several types of hardware Trojans, with stochastic perturbations injected into the simulation studies. This DURIP project seeks to demonstrate hardware Trojan detection in actual physical ICs.
Khorrami explained that the new $359,000 grant will support the design and fabrication of 28nm chips with and without built-in Trojans.
"The supply chain in manufacturing chips is complex and most foundries are overseas. Once a chip is fabricated and returned to the customer, the question is if additional hardware has been included on the chip die for most likely malicious purposes," he said.
For this purpose, this DURIP project is proposing a novel experimental testbed consisting of:
• A specifically designed 3mm×3mm IC containing Trojan-free and Trojan-infected variants of multiple circuits (e.g., cryptographic accelerators and microcontrollers). This IC will be used to evaluate the efficacy and accuracy of the short-term-aging-based hardware Trojan detection methods.
• An FPGA-based interface module to apply clock signals and inputs to the fabricated IC and collect its outputs.
• A fast-switching programmable power supply for precise application of supply-voltage changes to the ICs being tested. The unit will apply patterns of supply voltages to the test chips to induce controllable and repeatable levels of short-term aging.
• Finally, a data analysis software module on a host computer for machine-learning-based device evaluation and anomaly detection (i.e., detection of hardware Trojans); a minimal sketch of this step appears after this list.
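As a rough sketch of what that machine-learning step could look like (an assumption for illustration, using scikit-learn's IsolationForest; the project's actual models are not specified here): train on delay measurements from known Trojan-free chips under the applied voltage-stress patterns, then flag chips whose aging response is anomalous.

```python
# Sketch of anomaly detection over short-term-aging measurements, assuming
# scikit-learn's IsolationForest; the NYU-KIT project's actual analysis
# pipeline is not described in this article.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Rows: chips; columns: path-delay shifts (ps) measured after each of
# 8 supply-voltage stress patterns. Trojan-free chips form the baseline.
golden = rng.normal(loc=10.0, scale=0.5, size=(40, 8))   # known-good chips
suspect = rng.normal(loc=10.0, scale=0.5, size=(5, 8))
suspect[2] += 2.0   # one chip's aging response deviates (toy "Trojan")

model = IsolationForest(contamination="auto", random_state=0).fit(golden)
labels = model.predict(suspect)  # +1 = consistent with golden, -1 = anomalous
print(labels)                    # the perturbed chip should be flagged -1
```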
This testbed, vital for physically validating the proposed NYU-KIT hardware Trojan detection methodology, will also be a valuable resource for evaluating and validating other hardware Trojan detection techniques developed at NYU and by hardware security researchers elsewhere. The testbed will therefore be a unique experimental facility for the hardware security community, providing access to (i) physical ICs with Trojan-free and Trojan-infected variants of circuits ranging from moderate-sized cryptographic circuits to complex microprocessors and (ii) a generic FPGA-based interface to interrogate and test these ICs for Trojans using the team's detection method.