Building trust in the age of AI: How financial services can innovate with ethics and transparency

Yossi Leon (second from left) on the “Responsible AI in Financial Services - Unlocking Innovation While Managing Risks & Building Trustworthy Systems for Your Customers” panel with industry peers.

AI is transforming the finance sector, enabling real-time decision-making, accelerated workflows, enhanced risk management, reduced operational costs, and more personalized client experiences. Yet its use raises ethical concerns, from algorithmic bias to privacy violations and the erosion of public trust.

Earlier this month, financial technology professionals gathered at Finovate, an international conference series where industry members share cutting-edge knowledge, network, and collaborate on solutions to the issues facing the sector.

Yossi Leon, an NYU Tandon Adjunct Professor of Technology Management and Innovation, was in attendance to participate in a panel called “Responsible AI in Financial Services - Unlocking Innovation While Managing Risks & Building Trustworthy Systems for Your Customers.”

Leon teaches a graduate-level FinTech course that introduces students to AI and machine learning, decentralized finance, algorithmic trading, and related topics, and he also serves as the Chief Technology Officer at FIA Tech, a leading technology provider to the exchange-traded derivatives industry. Those dual roles give him a broad perspective on AI’s benefits and risks, and on how the technology could shape society.

“Responsible AI isn’t just a compliance checkbox; it’s about building trust,” he says.

Among the key points he made during the panel:
  • I tell students not merely to use the machine, but to shadow it: understand how it makes decisions, where it fails, and how bias creeps in. That’s how you become tool builders, not just tool users.
  • The better we prepare students now, the more likely it is that they’ll innovate responsibly at scale.
  • Real learning occurs when students confront biased datasets, weigh tradeoffs between accuracy and privacy, and see principles play out in real projects.
  • Embedding responsible AI into capstone projects, internships, and even coding bootcamps makes ethics and fairness tangible.
 
From his perspective as a CTO, he advised:
  • Responsible AI should be treated like any other part of the software lifecycle — planned for in product roadmaps, coding standards, and technology choices.
  • Companies that don’t embed responsible AI early will pay for it later — financially and reputationally.

“We must identify the emerging roles and skills required to harness AI effectively, combining technical expertise, ethical judgment, and human creativity, so universities can prepare students now to thrive in an AI-driven world and shape how the technology transforms our industries and society.”
— Yossi Leon