Damon McCoy
Professor
Co-Director, NYU Center for Cybersecurity
Damon McCoy received his Ph.D. in Computer Science from the University of Colorado, Boulder. He is a member of the Center for Automotive Embedded Systems Security (CAESS), which conducted one of the first security analyses of a modern automobile. His research focuses on empirically measuring the security and privacy of technology systems and their intersections with society. Currently, his primary focus is on online payment systems, the economics of cybercrime, automotive systems, privacy-enhancing technologies and censorship resistance.
Education
University of Colorado, Boulder 1999
B.S., Computer Science
University of Colorado, Boulder 2009
Ph.D., Computer Science
Research News
Arizona’s chief election officer endured more Twitter attacks than any other state’s top election official during the 2022 midterm elections, new study reveals
In the lead-up to the 2022 U.S. midterm elections, Arizona's chief election officer Katie Hobbs received far more harassing messages on Twitter than any of her counterparts in other states. Over 30 percent of all tweets directed at her and at commenters on her posts fell into the "most aggressive" category of attacks.
That is a finding from researchers at NYU Tandon School of Engineering, Universidad del Rosario in Colombia, and the University of Murcia in Spain, in a paper published in Information Fusion that examines the phenomenon of online intimidation targeting state election administrators.
The research team used a machine learning model from the Perspective API – a tool developed by Google to identify abusive online comments – to analyze nearly 600,000 tweets mentioning chief state election officials nationwide during the weeks surrounding November 8, 2022. These tweets were rated based on six attributes: toxicity, severe toxicity, identity attack, insult, profanity, and threat.
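The paper describes this scoring pipeline at a high level rather than publishing code. As a rough sketch of the idea, a single tweet could be scored against those six attributes along the following lines; the function name, the use of the requests library, and the omission of batching and rate-limit handling are simplifying assumptions, not details from the study.

```python
# Minimal sketch: score one tweet with Google's Perspective API.
# The six requested attributes match those named in the study;
# everything else here (function name, lack of error handling) is illustrative.
import requests

PERSPECTIVE_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"
ATTRIBUTES = ["TOXICITY", "SEVERE_TOXICITY", "IDENTITY_ATTACK",
              "INSULT", "PROFANITY", "THREAT"]

def score_tweet(text, api_key):
    """Return an {attribute: score} dict, each score in [0, 1]."""
    body = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {attr: {} for attr in ATTRIBUTES},
    }
    resp = requests.post(PERSPECTIVE_URL, params={"key": api_key}, json=body)
    resp.raise_for_status()
    scores = resp.json()["attributeScores"]
    return {attr: scores[attr]["summaryScore"]["value"] for attr in ATTRIBUTES}
```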
Arizona produced the most Twitter activity in the study, representing almost 90 percent of all collected tweets, and had by far the highest volume of toxic language directed at its top election official, who was also running for governor. Sentiment analysis revealed these messages exhibited high rates of overt "attacks on the author" and "attacks on commenters," as well as generalized toxicity and inflammatory rhetoric.
"Many of the harassing messages made connections to the 2020 presidential election and baseless conspiracy theories about electoral fraud," said Damon McCoy, the paper’s senior author, a professor of Computer Science and Engineering at NYU Tandon. McCoy is co-director of Cybersecurity for Democracy, a multi-university research project of the Center for Cybersecurity at NYU Tandon and the Cybersecurity and Privacy Institute at the Northeastern University Khoury College of Computer Sciences that aims to expose online threats and recommend how to counter them.
To further investigate, the researchers employed entity recognition software to automatically detect references within the hateful messages. The software flagged prevalent mentions of "Watergate" and inflammatory phrases, such as describing Arizona as being "at ground zero" for election integrity issues.
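The article does not name the entity recognition software. A minimal sketch of the same step, assuming spaCy's pretrained English model as a stand-in, might tally recurring entities across the flagged messages:

```python
# Illustrative entity-recognition pass using spaCy (an assumed stand-in
# for the unnamed tool in the study). Requires:
#   pip install spacy && python -m spacy download en_core_web_sm
import spacy
from collections import Counter

nlp = spacy.load("en_core_web_sm")

def top_entities(tweets, n=20):
    """Count the most frequent named entities across a list of tweet texts."""
    counts = Counter()
    for doc in nlp.pipe(tweets):
        counts.update(ent.text for ent in doc.ents
                      if ent.label_ in {"PERSON", "ORG", "GPE", "EVENT"})
    return counts.most_common(n)
```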
Clustering analysis based on semantic similarities within the messages also allowed the researchers to identify distinct communities promoting hate speech and map their interactions.
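The paper, not this article, holds the exact clustering method; one plausible sketch, assuming sentence embeddings from the sentence-transformers library and k-means (both stand-ins, with the model name chosen only for illustration), would be:

```python
# Illustrative semantic clustering of tweets: embed, then group.
# The embedding model and k-means are assumptions; the paper's
# actual choices may differ.
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

def cluster_tweets(tweets, k=10):
    """Return one cluster id per tweet, grouping semantically similar texts."""
    model = SentenceTransformer("all-MiniLM-L6-v2")
    embeddings = model.encode(tweets, normalize_embeddings=True)
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(embeddings)
```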
While political speech is constitutionally protected, the researchers warn that abuse and intimidation of election workers could have a chilling effect, deterring qualified professionals from overseeing voting and eroding public trust.
"If we want to safeguard democracy, we must find ways to promote civil discourse and protect those ensuring fair elections from harassment and threats," said McCoy.
The study proposes using the data pipeline developed by the researchers to automatically detect abusive accounts and content for faster content moderation. It also calls for clearer policies around harassment of election officials and cultural shifts to uphold democratic norms.
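What such automated triage might look like, building on the per-tweet scores sketched above: the 0.8 threshold and three-tweet minimum below are illustrative placeholders, not values from the study.

```python
# Hypothetical moderation-triage step over per-tweet Perspective scores:
# flag accounts whose tweets repeatedly cross a toxicity threshold.
# Cutoffs are illustrative, not from the paper.
from collections import Counter

def flag_abusive_accounts(scored_tweets, threshold=0.8, min_hits=3):
    """scored_tweets: iterable of (account, {attribute: score}) pairs."""
    hits = Counter()
    for account, scores in scored_tweets:
        if max(scores.values()) >= threshold:
            hits[account] += 1
    return {account for account, n in hits.items() if n >= min_hits}
```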
The research adds to McCoy’s body of work that delves into identifying and combating online threats and misinformation that can harm democracy and civic life. Other studies have investigated the monetization of YouTube political conspiracy channels, looked at how well Facebook identifies and manages political ads on its platform and explored U.S. political Facebook ads in Spanish during the 2020 presidential election.
Zapata Rozo, A., Campo-Archbold, A., Díaz-López, D., Gray, I., Pastor-Galindo, J., Nespoli, P., Gómez Mármol, F., & McCoy, D. (2024). Cyber democracy in the digital age: Characterizing hate networks in the 2022 US midterm elections. Information Fusion, 110, 102459. https://doi.org/10.1016/j.inffus.2024.102459
Conspiracy Brokers: Understanding the Monetization of YouTube Conspiracy Theories
In a first-of-its-kind study, Center for Cybersecurity researchers led by Damon McCoy found that YouTube channels with conspiracy content are fertile ground for predatory advertisers. Conspiracy channels had nearly 11 times the prevalence of likely predatory or deceptive ads compared to mainstream YouTube channels, and were twice as likely to feature non-advertising ways to monetize content, such as donation links for Patreon, GoFundMe and PayPal.
Researchers also discovered that:
- Certain scams were more common. Self-improvement ads, many of them get-rich-quick schemes, were seen more frequently than on mainstream channels. So were lifestyle, health and insurance ads, including two advertisers unique to conspiracy channels that were generating leads for insurance scammers. Ads promoting questionable products were also common, such as a supplement that claimed to cure Type 2 diabetes.
- Affiliate marketing was a constant. Among those marketing low-quality products, for example, almost 95 percent used some form of affiliate marketing.
- Videos with ads got far more views. In the conspiracy channels, monetized videos had almost four times as many views as demonetized ones. Since YouTube’s business model relies on advertising, this may be because its recommender algorithm prioritizes videos that contain ads.
- Content pointed to alternative social media sites. Sites like Gab, Parler and Telegram were mentioned more commonly in conspiracy channels than in mainstream ones; Facebook and Twitter were also frequently referenced.
The study was conducted with support from the National Science Foundation.