15th Annual NYU Tandon Cyber Lecture Navigates the Ethics of AI in Cybersecurity

Leading experts from industry and academia joined the panel discussion at NYU Tandon's annual Cybersecurity Lecture.

The NYU Tandon School of Engineering recently hosted its 15th Annual Cybersecurity Lecture, sponsored by AIG, to confront one of the field's most pressing challenges: ensuring Artificial Intelligence is deployed responsibly and ethically. The event, titled "AI and Ethics in Cybersecurity," brought together leading experts from industry and academia to address the urgent need for governance and a collaborative approach to the evolving AI threat landscape.

Joel Caminer, Senior Director of the NYU Center for Cybersecurity, opened the event, noting that the lecture series, now in its 15th year, continues its mission of convening experts from academia and industry to discuss threats to national security and critical infrastructure.

Keynote: Closing the Operational Gap

The lecture began with a keynote address from Jeff Schwartz, VP of Engineering at Check Point Software, who highlighted the challenge AI presents to defenders.

"AI is vastly decreasing the cost of attack," Schwartz stated. He explained that this reduction in cost has created a significant "operational gap" between the speed at which threat actors can innovate and the controlled pace at which organizations must consume new technology. "It takes months and sometimes years for organizations to deploy new applications," he noted, while threat actors "could iterate and innovate at a pace that creates this operational gap." Schwartz emphasized that the industry's role is to help close this asymmetry, as an organization's own institutional speed often becomes the biggest limiting factor in its security success.

Defining Responsible AI and Distributed Accountability

Following the keynote, a distinguished panel provided a multidisciplinary framework for embedding responsibility into the AI lifecycle. The panel featured:

  • Wendy Callaghan – Global Head of Data, Digital, and Cyber Legal at AIG
  • Chetan Karande – Director, Technology Research and Innovation at DTCC
  • Quiessence Phillips – Head of Cloud Security at Kroll
  • Julia Stoyanovich – Institute Associate Professor of Computer Science and Engineering, Associate Professor of Data Science, and Director of the Center for Responsible AI at NYU

Julia Stoyanovich defined Responsible AI as making "the design, development, use, and oversight of AI systems socially sustainable." She stressed the need for a distributed accountability regime in which technologists expose the "knobs of responsibility" to enable collective control. Challenging the idea that regulation stifles progress, Stoyanovich pointed to the EU's GDPR as a catalyst for innovation in privacy-preserving technologies.

The Corporate Imperative: From Apply to Invent

Panelists from the corporate sector outlined their strategies for navigating innovation alongside risk:

  • Chetan Karande described a three-stage curve of AI adoption: Apply (using off-the-shelf tools such as AI coding assistants for quick wins), Transform (renewing workflows and processes), and Invent (creating entirely new business lines, which carries the highest risk). He noted that coding assistants have driven a 40% increase in developer throughput.
  • Quiessence Phillips reinforced that while third-party tools increase efficiency, they also introduce new attack surfaces and dependencies, arguing that most organizations today cannot accurately report their AI-related threat profile.
  • Wendy Callaghan provided an overview of the regulatory landscape, noting that lawmakers are attempting to balance innovation with risk through measures like the risk-based framework of the EU AI Act and new sector-specific laws in the U.S.

The discussion concluded with a strong consensus: AI is a "team sport," requiring multidisciplinary collaboration to embed ethics and security throughout the development lifecycle.

Key Takeaways for Practitioners

Before the final networking reception, the panelists offered concise advice to the audience:

  • Julia Stoyanovich: "Say no to magical thinking," and treat AI systems like any other technology that must be tested.
  • Quiessence Phillips: "Be a conversationalist across domains and deep in at least one."
  • Chetan Karande: Embrace the multidisciplinary experience required, integrating development with ethics and security.

The event demonstrated that while AI presents an exponential increase in both promise and peril, collaboration between academia and industry is the essential foundation for a secure future.
