NYU Tandon and Verizon Explore AI's Impact on Trust, Health, and Transit during the "AI and You" Event

A three-person panel discussion on stage

One of three panel discussions that explored the role of AI in everyday life. Left to right: Ashley Greenspan, Director of State and Local Government Affairs at Verizon; Institute Professor Maurizio Porfiri; and Associate Professor Rumi Chunara

Two identical resumes. Same qualifications, same experience, same skills. The only difference: one applicant's name clearly signals a particular ethnicity, while the other's is ethnically ambiguous. When an AI system summarized these resumes for hiring managers, it attributed different job-relevant traits to each candidate and generated summaries with noticeably different sentiment, leading to measurably different selection rates.

This wasn't a hypothetical scenario. It was real research presented by NYU Tandon Assistant Professor Emily Black at "AI and You," a forum examining artificial intelligence's expanding role across critical sectors. And it captures a troubling pattern that emerged throughout the day-long event: AI systems that appear neutral on the surface often encode hidden biases with real consequences for people's lives.

Black's research on resume summarization reveals why standard AI evaluation metrics can be dangerously misleading. The systems she studied scored well on conventional measures of summarization accuracy. By traditional standards, they worked fine. But when she examined end-to-end outcomes — who actually got selected for interviews — bias emerged clearly.

Her warning to the room was direct: "If you're using generative AI in your hiring process, and that leads to discrimination, it's a problem."
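Black's point about end-to-end outcomes can be made concrete with a small sketch. The code below is purely illustrative and is not her methodology or tooling: it runs name-swapped versions of the same resume through a placeholder summarizer and a placeholder screening step, then compares selection rates between the two groups, the kind of disparity that summary-accuracy scores alone would not surface.

```python
# Illustrative audit of an AI summarize-then-screen pipeline by its outcomes.
# `summarize` and `screen` are hypothetical stand-ins, not any real product.
from collections import defaultdict

def summarize(resume_text: str) -> str:
    """Stand-in for the AI summarizer under audit."""
    return resume_text[:200]

def screen(summary: str) -> bool:
    """Stand-in for the downstream decision, e.g. advance to interview."""
    return "machine learning" in summary.lower()

def audit_selection_rates(resume_pairs):
    """Each pair is the same resume rendered under two name variants (A and B)."""
    selected, total = defaultdict(int), defaultdict(int)
    for resume_a, resume_b in resume_pairs:
        for group, resume in (("A", resume_a), ("B", resume_b)):
            total[group] += 1
            selected[group] += screen(summarize(resume))
    return {group: selected[group] / total[group] for group in total}

# Toy usage: identical qualifications, different name variants.
pairs = [("Name Variant A, 5 yrs machine learning", "Name Variant B, 5 yrs machine learning")]
print(audit_selection_rates(pairs))  # matching rates expected; a gap signals bias
```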


Connecting AI Experts Across Industry and Academia

"AI and You," an event sponsored by Verizon and hosted by NYU Tandon School of Engineering, brought together faculty experts and industry leaders to examine AI's role in cybersecurity, healthcare, and urban transportation. What became clear across panels was the gap between how these systems optimally function and how they perform when they encounter messy, real-world data — and what can be done to close that gap.

Ashley Greenspan, Director of State and Local Government Affairs at Verizon, who moderated one of the panels, put the stakes in stark terms: "As we'll hear, the rapid acceleration of artificial intelligence means that these systems are actively shaping critical decisions in our society. And AI is fundamentally dependent on the data that it's trained on. And when that data is flawed, the system becomes a dangerous amplifier of inequality. When we allow AI to learn from biased data, we're coding our past prejudices into our future, which makes mitigating bias a moral imperative."


When Healthcare Algorithms Get It Wrong

The bias in hiring decisions Black described isn't an isolated issue — it extends into systems that make life-or-death decisions about our health. Take hospital prediction algorithms, which are increasingly used to identify which patients are likely to return for care. These systems guide decisions about follow-up care and resource allocation — but Associate Professor Rumi Chunara, who directs the Center for Health Data Science at NYU's School of Global Public Health, discovered they don't work equally well for everyone.

Her team found that algorithm performance varied significantly based on patients' insurance types. "If that guess for who's going to come back is not correct for a certain group, then there might be less proactive measures taken to help those people," Chunara explained. In other words, the patients who might need the most help could be the ones the system is least likely to flag.
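One way to surface the kind of gap Chunara describes is to report a model's performance separately for each subgroup rather than as a single aggregate number. The snippet below is a minimal, hypothetical sketch, not her team's code: it computes recall per insurance type, so a group the model systematically fails to flag becomes visible.

```python
# Hypothetical subgroup check for a readmission-prediction model:
# recall = share of patients who actually returned that the model flagged.
from collections import defaultdict

def recall_by_group(records):
    """records: iterable of (insurance_type, actually_returned, predicted_return)."""
    flagged = defaultdict(int)   # returning patients the model caught
    returned = defaultdict(int)  # all returning patients
    for group, came_back, predicted in records:
        if came_back:
            returned[group] += 1
            flagged[group] += bool(predicted)
    return {g: flagged[g] / returned[g] for g in returned}

# Toy data: the model misses returning Medicaid patients more often.
records = [
    ("private", True, True), ("private", True, True), ("private", False, False),
    ("medicaid", True, False), ("medicaid", True, True), ("medicaid", False, False),
]
print(recall_by_group(records))  # {'private': 1.0, 'medicaid': 0.5}
```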

The problem, Chunara revealed, often starts with the data itself. During the COVID-19 pandemic, her research showed that different demographic groups accessed testing at vastly different rates due to barriers like distance from testing sites, work responsibilities, and comfort with healthcare settings. "Although different groups might have more positive COVID versus less, they were also coming into the hospital at different rates," she said.

The result? AI systems trained on this data created an inaccurate picture of disease prevalence across communities, potentially leading to misdirected interventions precisely when they mattered most. To address these data limitations, Chunara is exploring innovative approaches like creating synthetic medical record notes using multi-agent AI systems with clinical expertise guardrails: essentially, using AI to fix AI's blind spots.


The Security Problem AI Can't Solve (Yet)

The challenge of hidden failures extends beyond bias into security vulnerabilities that could compromise critical infrastructure. Moderator Emilia David, VentureBeat's Senior AI Reporter, posed a series of questions that cut to the heart of the regulatory challenge: "One of the big questions facing us is how to ensure that network traffic is good traffic and that AI is used in the right ways. How do you approach lawmakers to help make sense of it all? How can we apply existing privacy and cybersecurity laws to this technology . . . especially to telecommunication systems?"

Professor Sundeep Rangan, Director of NYU Wireless, outlined a fundamental limitation that makes these questions particularly urgent: "AI inherently learns from past examples, so there's always a risk" when defending against novel attacks. This creates a particular vulnerability in wireless systems, where anomaly detection is critical but where AI may be blind to threats it hasn't encountered before. Rangan believes the industry will have to search for more robust security measures that work regardless of whether AI can detect every threat.
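Rangan's caveat that AI "inherently learns from past examples" is easy to illustrate with a toy detector. The sketch below is a deliberately simplified, hypothetical example rather than a real wireless-security system: it learns the statistics of past traffic and flags only values that deviate from them, which is exactly why an attack that stays inside the learned pattern goes undetected.

```python
# Toy anomaly detector: flags traffic that deviates from learned history.
import statistics

def fit_baseline(past_values):
    """Learn simple statistics of normal traffic (e.g., packets per second)."""
    return statistics.mean(past_values), statistics.stdev(past_values)

def is_anomalous(value, baseline, threshold=3.0):
    mean, std = baseline
    return abs(value - mean) > threshold * std

past_traffic = [100, 102, 98, 101, 99, 103, 97, 100]  # hypothetical normal load
baseline = fit_baseline(past_traffic)

print(is_anomalous(500, baseline))  # True: a volumetric spike looks nothing like the past
print(is_anomalous(101, baseline))  # False: a novel attack at normal volume goes unseen
```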

He also pushed back against common misconceptions, noting that contrary to public perception, telecom operators collect relatively anonymized data focused on network optimization rather than personal surveillance. In other words, the real security concern isn't Big Brother — it's the system's ability to defend against attackers who think differently than the AI was trained to anticipate.

Jonathan Metallo, Verizon Associate General Counsel for AI Legal, offered insight into how companies are navigating these challenges within existing legal frameworks. "As a company, Verizon is not generating, producing, or developing foundational models, such as large language models," he explained. "What we're doing is applying AI in various capacities, be it in our network to try and make predictions about network disruptions, for example. So as a lawyer supporting that program, I review any AI use cases . . . how we're training those models, what information is going into them. We're trying to ensure that the models we deploy are not exhibiting bias, misbehaving, or outputting harmful content."


Building Better Cities (Digitally First)

Not all the AI applications discussed at the event were about fixing failures. Institute Associate Professor Joseph Chow, Deputy Director of the C2SMART University Transportation Center, described his team's work creating a "digital twin" of New York City — a citywide multi-agent simulation that can evaluate policy impacts before they're implemented.

Chow's team is using this system to assess proposals like the IBX transit line, but he envisions going further: AI-driven approaches that could design transit routes not just for accessibility but as data-collection sensors that enable continuous improvement. "We can really benefit from adopting some kind of AI-driven digital twin in the future," he said, advocating for systems that provide proactive insights rather than reactive solutions.

As Shri Iyer, Managing Director of C2SMART and a panel moderator, framed the challenge: "The field is evolving, everything's dynamic, but agency operators and decision makers have different constraints and requirements, so if you're a transit agency, how do you take advantage right away, or how do you set the framework up to take advantage of some of these advances?" He emphasized that transit doesn't operate in isolation: "People use transit as part of their lives, and there are many other parts; transit is a component of how the city functions, but it's a lot larger than just getting people from A to B. How can AI really help us break down some of the barriers or silos that we have traditionally invested in and made decisions around?"

These questions are already being answered by innovative companies working with city agencies. Stacey Matlen, Senior Vice President of Innovation Programs at the Partnership for New York City, shared concrete examples from her organization's public sector innovation labs, where they work closely with city and state agencies to test new technologies. "Focusing on Transit Tech Lab companies, of all that have gone through our program, 76% use AI or have AI as a core function of their technology," she noted.

The results have been tangible. A company called Throughput, a supply chain AI venture, helped analyze real-time supply-chain shortages and manage parts ordering for the MTA. "They identified 80,000 AI recommendations on restocking, and they're currently being scaled, with the anticipation that they'll deliver $1 million in added efficiencies and reduced costs," Matlen explained.

Another example: QEATech, which uses thermal drone scans of building envelopes to identify heat loss. Working with the Port Authority at a Newark administrative building, "they were able to identify 1,200 megawatt hours per year of inefficiencies that could have been prevented with retrofits. And they were able to use that sort of technology to prioritize which retrofits are needed."

AI is also enhancing accessibility and responsiveness in more visible ways. Matlen pointed to the colorful QR codes at the Jay Street-MetroTech subway station, which use computer vision technology to provide real-time transit information, "a really cool tool that helps if you have visual impairments or English isn't your first language, or you just want real-time transit information."


The Path Forward: Transparency, Testing, and Humility

Institute Professor Maurizio Porfiri, Director of the Center for Urban Science + Progress (CUSP) and Interim Chair of the Department of Civil and Urban Engineering, shared a framework that he developed with several colleagues in a recent study. It seemed to resonate across panels — six principles for ethical research that amount to a call for radical transparency: be explicit about modeling aims, communicate assumptions and perspectives, match models with real-world stakes, communicate and quantify uncertainty, share everything to ensure reproducibility, and collaborate across disciplines.

These principles reflect a common thread throughout the event: AI's power to improve urban systems, healthcare, and communications is real, but so is its capacity to perpetuate and even amplify existing inequities if deployed without rigorous evaluation and continuous scrutiny.

As Vice Dean Sayar Lonial noted in his opening remarks, the event aimed to explore how AI affects our everyday lives, both its benefits and its pitfalls. The answer that emerged was more nuanced than a simple cost-benefit analysis. AI systems are already making consequential decisions about our health, our safety, and our opportunities — and the question isn't whether to use them, but whether we're willing to do the hard work of ensuring they work fairly for everyone.

That work, as the day's discussions made clear, requires exactly the kind of collaboration on display at the event itself. Lonial stressed the importance of academic and industry partnerships in facilitating these critical dialogues. "We are so pleased to start this conversation with Verizon's help and look forward to continuing it with them as a sponsor and partner," he concluded.

A woman speaking at a lectern

Watch the Event Video

Listen to experts share valuable insights into one of today's most transformative technologies.