An empirical foundation for multi-agent reinforcement learning at scale

Speaker
Eugene Vinitsky
Assistant professor, Civil and Urban Engineering, New York University
Title
"An empirical foundation for multi-agent reinforcement learning at scale"
Abstract
Reinforcement learning, particularly in multi-agent settings, is known to be finicky and hard to deploy. We present and discuss empirical findings that point towards simple, performant, and easy-to-use settings for multi-agent learning algorithms. We demonstrate, through newly designed benchmarks, that regularized policy-gradient algorithms appear to work across large classes of games, from fully cooperative to zero-sum. Finally, we demonstrate that straightforward application of these algorithms at scale enables decentralized control systems to achieve high robustness, using them to build a simulated driver that experiences an incident only once per million miles.
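For readers unfamiliar with the term, a minimal sketch of what a "regularized policy gradient" update can look like is below. This is an illustrative REINFORCE-style step with an entropy bonus, one common form of regularization; the specific algorithms discussed in the talk may differ.

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def regularized_pg_update(theta, action, advantage, lr=0.1, ent_coef=0.01):
    """One REINFORCE-style step on softmax-policy logits `theta`:
    ascend the advantage-weighted log-probability of the taken action,
    plus an entropy regularizer that discourages premature determinism."""
    probs = softmax(theta)
    # gradient of log pi(action) w.r.t. the logits of a softmax policy
    grad_logp = -probs
    grad_logp[action] += 1.0
    # gradient of the entropy H = -sum_i p_i log p_i w.r.t. the logits
    grad_ent = -probs * (np.log(probs) + 1.0)
    grad_ent -= probs * grad_ent.sum()  # chain rule through the softmax normalizer
    return theta + lr * (advantage * grad_logp + ent_coef * grad_ent)

theta = np.zeros(3)  # uniform initial policy over 3 actions
theta = regularized_pg_update(theta, action=0, advantage=1.0)
probs = softmax(theta)
# after one positive-advantage update, action 0 becomes more likely than uniform
```

The entropy term keeps the policy stochastic early in training, which is one of the simple stabilizers often credited with making policy-gradient methods work across cooperative and competitive games alike.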
About Speaker
Eugene Vinitsky is an assistant professor of civil and urban engineering at NYU, where he works on scaling up multi-agent reinforcement learning for the design of capable, decentralized autonomous systems. His PhD work consisted of designing the algorithms and controllers for a field deployment of a 100-vehicle autonomous traffic-smoothing system. He received his PhD in controls engineering from UC Berkeley, and an MS and BS in physics from UC Santa Barbara and Caltech, respectively. He has spent time at Tesla, DeepMind, and Facebook AI Research, and was a researcher at the Apple Special Project Group before moving to NYU.