Unconventional Approaches to Pressing Issues
Interdisciplinary work isn’t a buzzword for us — it’s a way of life.
Engineers at NYU Tandon pride themselves on looking at problems from new angles, bringing in different areas of expertise to paint a fuller picture of a groundbreaking solution. We're bringing together the arts, political science, the humanities, and technology to create something brand new that addresses issues, like data and democracy, that affect all of us.
NYU Tandon Professor Damon McCoy and doctoral student Laura Edelson are bringing truth to political advertising with the NYU Ad Observatory, a novel online tool that helps reporters, researchers, thought leaders, policy makers, and the general public easily analyze political ads on Facebook during campaigns and ahead of U.S. elections.
Transparency in political advertising is vital to ensuring safe and fair elections, but difficult to achieve without disclosure of funding. That information is not required for political ads on Facebook, which is used by nearly 70% of Americans, is by far the country's leading source of news, and is the top social media destination for political advertising. To reveal the data beneath the ads, the team uses Facebook's API and ad library reports, but enhances this information with additional features, including the ability to search by topic (like immigration), ad objective (like donate), or total ad spend over time.
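The underlying data comes from Facebook's public Ad Library API. As a minimal sketch, here is how a topic search against that API's ads_archive endpoint might be assembled; the endpoint and parameter names follow Facebook's published Ad Library API, but the access token is a placeholder and the Ad Observatory's own pipeline is considerably more elaborate:

```python
# Sketch: building a query for Facebook's Ad Library API (ads_archive
# endpoint) for political ads on a given topic. The Ad Observatory layers
# richer search features (topic, objective, spend over time) on data like
# this. "ACCESS_TOKEN" is a placeholder; a real token is required.

AD_LIBRARY_URL = "https://graph.facebook.com/v19.0/ads_archive"

def build_ad_query(topic, country="US", limit=25):
    """Return query parameters for a topic search (e.g. 'immigration')."""
    return {
        "search_terms": topic,
        "ad_type": "POLITICAL_AND_ISSUE_ADS",
        "ad_reached_countries": country,
        "fields": "page_name,ad_delivery_start_time,spend,impressions",
        "limit": limit,
        "access_token": "ACCESS_TOKEN",  # placeholder, not a real token
    }

params = build_ad_query("immigration")
# An actual request would then be: requests.get(AD_LIBRARY_URL, params=params)
```

The fields requested here (page name, delivery date, spend, impressions) are the kind of metadata that supports spend-over-time analysis.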
Deepfakes and misinformation
Deepfake technology lets almost anyone create realistic-looking photos and videos of people saying and doing things that they never actually said or did. Deepfakes pose a range of political and social dangers, including eroding public trust, damaging personal reputations, and undermining the electoral process.
Among the hardest aspects of detecting deepfakes is that digital photo files aren't coded in a way that makes tampering evident. Professor of Computer Science and Engineering Nasir Memon is taking a proactive approach, creating forensics-friendly images from the start with tamper-resistant digital markings, rather than following the common tactic of prioritizing visual quality and hoping that forensic techniques will catch tampering later.
He’s tackling another tough problem as well. The dissemination of fake news on social media is a pernicious trend with dire implications for the world. Indeed, research shows that public engagement with spurious news is greater than with legitimate news from mainstream sources, making social media a powerful channel for propaganda.
A new study on the spread of disinformation by Memon and his colleagues reveals that pairing headlines with credibility alerts from fact-checkers, the public, news media, and even AI can reduce people's intention to share. However, the effectiveness of these alerts varies with political orientation and gender. The good news for truth seekers? Official fact-checking sources are overwhelmingly trusted.
Going to Congress
With legislation and regulation almost always developed by officials working behind closed doors, trust in America's institutions is at a historic low. At a session on Capitol Hill this year, Beth Simone Noveck, director of The Governance Lab (The GovLab) at NYU Tandon, called upon government leaders to adopt new technology to improve citizen engagement in lawmaking. She announced the launch of "CrowdLaw for Congress," a GovLab training initiative that provides examples from legislatures and parliaments around the world for U.S. institutions to draw upon as they seek to deepen the foundations of democracy in uncertain times.
She returned to Washington a second time (virtually) after the COVID-19 crisis hit, for a session hosted by the Select Committee on the Modernization of Congress, where she discussed best practices for remote committee and member operations, and ways other legislatures around the world are handling business during the global pandemic.
Tandon researchers regularly create all types of cutting-edge technology, including novel ways of using big data and artificial intelligence. Those are complex issues with complex ramifications, so it’s imperative that ethics and responsibility be an integral theme of their work.
Our new Center for Responsible AI is a hub for interdisciplinary research and the creation of best-in-class open-source tools and frameworks for equitable data-sharing, increased transparency, and more. We're developing standardized curricula so that every computer and data science student understands the importance of responsible AI, while also educating current practitioners and the general public. And we're establishing an AI for Good startup program, leveraging opportunities to apply artificial intelligence to societal problems that are otherwise overlooked in the pursuit of broad capital-market opportunities.
Insights into gun violence
Surges in firearm acquisition after mass shootings are a well-documented phenomenon, but analytic research into the causes of this behavior — be it driven by a desire for self-protection, or a fear that access to firearms will be curtailed — has been sparse.
A new study led by Institute Professor Maurizio Porfiri applied a data science methodology to infer causal relationships and found that the decision to purchase a gun is driven more by the latter concern, fear of stricter regulations on gun purchase and ownership, than by a desire to protect oneself. The study is part of a first-of-its-kind effort backed by a $2 million grant from the National Science Foundation (NSF) to examine causal relationships among potentially contributing factors such as firearm prevalence, state legislation, media exposure, and people's opinions of firearm-related harms, all at the individual, state, and national levels.