Agentic AI for Risk Management (GY - ONLY)
Exploring and developing multi-agent AI systems for early risk detection, monitoring, and decision support in complex real-world environments
Agentic AI for Risk Management gives students the opportunity to work on an exciting topic at the intersection of AI, decision-making, and risk management. The focus is on agentic AI: AI systems that can make decisions, plan, collaborate with other agents, and adapt over time. Students will explore how such systems can detect risks early, track how problems propagate, and improve decision-making in complex and uncertain situations.
The project is highly hands-on and follows a learn-by-doing approach. Students will not only study ideas but also help design and build real solutions, in a way that mirrors a genuine research and development setting. You will join an ongoing VIP effort and continue developing a project started by earlier student teams, so your work will contribute to something larger than a single semester and can have lasting impact.
Because this VIP project involves multi-agent AI systems and machine learning for risk management, students should have completed at least one machine learning course before joining, or be able to demonstrate an equivalent background in the subject.
- This is a 1.5-credit course for graduate students only (Master’s and PhD students), with an optional 0-credit version.
- The project is designed to feel like real work in a quantitative research or R&D engineering role in finance or technology.
- Students will be expected to build working code, contribute substantially to the project, and collaborate actively with the team.
Students in this VIP will be expected to:
- Work with a teammate to present a topic connected to the project theme.
- Help the team reproduce and extend an existing multi-agent risk model from the research literature.
- Contribute to testing the model under challenging situations, such as sudden information shocks, cascading failures, and adversarial behavior.
- Contribute to the team’s final summary report.
This project can also create exciting opportunities for interested students to take part in competitions, share their work at workshops or conferences, and possibly contribute to a research paper or white paper.
VIP Agentic AI for Risk Management Project
This project studies how multi-agent AI systems can help detect, understand, and manage risk in complex and changing environments. It also examines the risks that these AI systems may create on their own.
The project has two main parts:
- A structured review of the literature on agentic AI, multi-agent systems, and AI for risk management.
- A hands-on experimental component in which students reproduce, test, and extend existing models in realistic and challenging settings.
To connect the work to real-world problems, the team will focus on several application areas.
The first is mergers and acquisitions (M&A), where buyers, sellers, regulators, advisors, and market participants often have different goals and incomplete information. This makes it a strong setting for studying risk with multi-agent AI. In this area, AI agents could help identify antitrust issues, detect valuation problems, and explore post-merger risks before they become costly.
The project will then extend to other domains such as healthcare, legal and regulatory compliance, and supply chains or regulated markets. In these settings, AI agents may help monitor safety, follow changing rules, detect early warning signs, and support better decisions.
Across all domains, the project is built around one central question: how can agentic AI help manage risk in dynamic systems, while also accounting for the new risks AI itself introduces, such as limited transparency, unexpected collective behavior, and the possibility that AI decisions amplify systemic risk?
The long-term goal is to build a modular research and experimental pipeline that can be used across domains and may produce results suitable for academic presentations or publications.
Research, Design, and Technical Questions Explored in the Project
This project will explore important questions such as:
- How to understand and organize what is already known about agentic AI and multi-agent systems for risk management.
- How to identify the most promising methods, the main open questions, and the biggest gaps in current research.
- How to design AI agents that can plan ahead, update their decisions when new information arrives, and act in a structured and reliable way.
- How to study risks that appear when many agents interact, such as coordination problems, follow-the-crowd behavior, feedback loops, and growing systemic risk.
- How to make AI decisions easier to understand in high-stakes settings and build methods that can be useful across different real-world areas.
- How to build simulations where many autonomous agents interact in environments with uncertainty and incomplete information.
- How to implement tools such as multi-agent reinforcement learning and game-theoretic models to study decision-making and risk.
- How to evaluate risk detection and prediction methods under difficult conditions, such as sudden shocks, adversarial behavior, and cascading failures.
- How to scale the proposed methods to realistic problem sizes using open-source agentic AI frameworks and cloud computing resources.
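As a concrete illustration of the kind of experiment these questions lead to, the sketch below simulates cascading failures among interacting agents after a sudden shock, using a simple threshold-contagion rule on a random graph. Everything here (the function name `simulate_cascade`, the parameters, the graph topology) is an illustrative assumption, not part of the project's actual codebase.

```python
import random

def simulate_cascade(n_agents=50, degree=4, threshold=0.3,
                     n_shocked=3, seed=0):
    """Toy threshold-contagion model (illustrative only): an agent fails
    once the fraction of its failed neighbors exceeds `threshold`.
    Returns the number of failed agents after the cascade settles."""
    rng = random.Random(seed)
    # Random undirected graph: link each agent to `degree` random peers.
    neighbors = {i: set() for i in range(n_agents)}
    for i in range(n_agents):
        while len(neighbors[i]) < degree:
            j = rng.randrange(n_agents)
            if j != i:
                neighbors[i].add(j)
                neighbors[j].add(i)
    # Sudden information shock: a few agents fail simultaneously.
    failed = set(rng.sample(range(n_agents), n_shocked))
    changed = True
    while changed:  # propagate failures until the system settles
        changed = False
        for i in range(n_agents):
            if i in failed:
                continue
            frac = sum(1 for j in neighbors[i] if j in failed) / len(neighbors[i])
            if frac > threshold:
                failed.add(i)
                changed = True
    return len(failed)

print(simulate_cascade())  # size of the failed set after the cascade settles
```

Sweeping `threshold` or `n_shocked` in a model like this is one simple way to study when local shocks stay contained and when they become systemic.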
Subteams
This VIP project will be organized into two subteams, so students can contribute in different ways based on their interests and strengths.
Research Subteam
- Study and review the research literature on agentic AI, multi-agent systems, and AI for risk management.
- Help reproduce existing multi-agent models from the literature and improve them under more realistic and challenging conditions.
- Analyze results, compare findings across different application areas, and help present the team’s work.
Data Science / Engineering Subteam
- Explore and analyze data from the selected application areas.
- Help build the multi-agent simulation pipeline, connect open-source agentic AI tools, and debug the system when needed.
- Design and develop evaluation tools and simple interfaces to show and explain the experimental results.
This structure allows students to choose a role that fits their interests, while still working closely with the full team on a shared project.
Majors and Areas of Interest
- Financial Engineering
- Computer Science
- Data Science
- Artificial Intelligence
- Risk Engineering
- Healthcare Informatics
- Legal Technology
- Software Engineering
Methods and Technologies
- Multi-Agent Reinforcement Learning
- Game-Theoretic Modeling
- Large Language Model-Based Agents
- Deep Learning
- Anomaly Detection
- Graph Neural Networks
- Simulation and Stress Testing
- Data Science
- Open-Source Agentic AI Frameworks