NYU Tandon partners with international organizations to promote the socially responsible use of AI

When the United Nations Office for Disarmament Affairs (ODA) and the Stockholm International Peace Research Institute (SIPRI) partnered for an initiative on responsible innovation in artificial intelligence for peace and security, they needed to convene an advisory board of experts capable of guiding the project and informing strategic and operational decisions.

Aware of the research being done at NYU Tandon on responsible AI, AI ethics, and robotics, the group turned to two faculty members: Associate Professor Ludovic Righetti, who directs Tandon’s new Center for Robotics and Embodied Intelligence and has long collaborated with Vincent Boulanin of SIPRI, and Institute Associate Professor Julia Stoyanovich, founding director of the Center for Responsible AI, who has been deeply involved in AI governance and regulation at the city, state, and federal levels.

The initiative, dubbed Promoting Responsible Innovation in Artificial Intelligence for Peace and Security and spearheaded by Boulanin and Charles Ovink of ODA, grew out of the recognition that while AI systems hold the potential to help the world achieve the U.N. Sustainable Development Goals and support peacekeeping missions (consider, for example, the use of drones to deliver medical supplies), they can also be used to spread disinformation via chatbots or to carry out cyberattacks. Developers and researchers like Stoyanovich and Righetti are key to mitigating those risks.

Since the pair joined the advisory board, they have contributed, both individually and together, to many of the group’s efforts, including:

  • Co-authoring the IEEE Spectrum op-ed “AI Missteps Could Unravel Global Peace and Security,” which asserted that stakeholders must become more aware of the challenges of working with AI — and of their capacity to do something about them, starting with education and career development that incorporates not only technical knowledge but insights from the social sciences and humanities.
  • Stoyanovich’s appearance on the ODA podcast to discuss the importance of responsible AI.
  • Righetti and Boulanin’s recent IEEE Spectrum op-ed, “Navigating the Dual-Use Dilemma,” which discusses the potential risks that open-access research poses to peace and security and proposes actions to avoid unintended consequences while fostering open innovation.
  • Partnering with Boulanin, Ovink, and Jules Palayer of SIPRI’s Governance of Artificial Intelligence Programme to hold a poster session, panel discussion, and multi-day workshop focused on addressing the misuse of civilian artificial intelligence. Hosted at Tandon and the U.N., the events drew attendees from around the world.

Following that event, Ovink wrote: “I have far too many takeaways to unpack properly here, but perhaps the most encouraging is the way everyone involved demonstrated their readiness to engage with each other and address challenges head-on, and the creativity and open-mindedness in their approaches. Responsible AI is not something any single actor or stakeholder can generate; it requires wide and thoughtful engagement of communities from across the AI lifecycle and beyond, and a willingness to recognize and reflect varied perspectives. When it comes to the risks that can be presented to international peace and security, the stakes are significant, but so is the momentum to do something about it.”

Righetti concluded: “It was great meeting participants with very different backgrounds and perspectives, from engineering to policy making, all gathered with the common goal of promoting responsible AI for peace and security. It was encouraging to hear many ideas, from education to research practices, that can contribute to a more responsible AI ecosystem. I certainly learned a lot!”