Julia Jose is a Computer Science Ph.D. candidate advised by Dr. Rachel Greenstadt and a member of the Privacy, Security, and Automation Lab (PSAL) at NYU. Her research applies natural language processing and machine learning to online privacy problems. Before joining NYU, she worked as a Data Scientist at Atos zData, focusing on applied NLP as well as MLOps tools and technologies, and served on the company’s AI Ethics council. Julia holds a bachelor’s degree in Electronics and Communication Engineering (2019) from the National Institute of Technology Delhi and a master’s degree in Computer Science (2021) from Arizona State University. At ASU, she worked in the Behavioral Research Lab, focusing on NLP applications in behavioral research. Earlier in her career, she was a Research and Development Intern at the Qatar Computing Research Institute, contributing to research in the Arabic Language Technologies Department.
Education
Arizona State University, 2021
Master of Science, Computer Science
National Institute of Technology Delhi, 2019
Bachelor of Technology, Electronics and Communication Engineering
Research News
Ad blockers may be showing users more problematic ads, study finds
Ad blockers, the digital shields that nearly one billion internet users deploy to protect themselves from intrusive advertising, may be inadvertently exposing their users to more problematic content, according to a new study from NYU Tandon School of Engineering.
The study, which analyzed over 1,200 advertisements across the United States and Germany, found that users of Adblock Plus's "Acceptable Ads" feature encountered 13.6% more problematic advertisements than users browsing without any ad-blocking software. The finding challenges the widely held belief that such privacy tools uniformly improve the online experience.
"While programs like Acceptable Ads aim to balance user and advertiser interests by permitting less disruptive ads, their standards often fall short of addressing user concerns comprehensively," said Ritik Roongta, NYU Tandon Computer Science and Engineering (CSE) PhD student and lead author of the study that will be presented at the 25th Privacy Enhancing Technologies Symposium on July 15, 2025. Rachel Greenstadt, CSE professor and faculty member of the NYU Center for Cybersecurity, oversaw the research.
The research team developed an automated system using artificial intelligence to identify problematic advertisements at scale. To define what constitutes "problematic," the researchers created a comprehensive taxonomy drawing from advertising industry policies, regulatory guidelines, and user feedback studies.
Their framework identifies seven categories of concerning content: ads inappropriate for minors (such as alcohol or gambling promotions), offensive or explicit material, deceptive health or financial claims, manipulative design tactics like fake urgency timers, intrusive user experiences, fraudulent schemes, and political content without proper disclosure.
Their AI system, powered by OpenAI's GPT-4o-mini model, matched human experts' judgments 79% of the time when identifying problematic content across these categories.
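As a rough illustration of how such an LLM-based labeling step might look, the sketch below asks gpt-4o-mini, via OpenAI's Chat Completions API, to assign an ad's text to one of the taxonomy's categories. The prompt wording, the text-only input, and the classify_ad helper are assumptions made for illustration; they do not reproduce the study's actual pipeline or prompts.

```python
# Illustrative sketch only: classify an ad's text against a problematic-ad
# taxonomy with gpt-4o-mini. The category list mirrors the taxonomy described
# above; the prompt and input format are assumptions, not the study's pipeline.
from openai import OpenAI

CATEGORIES = [
    "inappropriate for minors (e.g., alcohol or gambling)",
    "offensive or explicit material",
    "deceptive health or financial claims",
    "manipulative design tactics (e.g., fake urgency timers)",
    "intrusive user experience",
    "fraudulent scheme",
    "political content without proper disclosure",
    "none of the above",
]

client = OpenAI()  # expects OPENAI_API_KEY in the environment


def classify_ad(ad_text: str) -> str:
    """Return the single best-matching category label for an ad's text."""
    prompt = (
        "You are reviewing online advertisements. Classify the ad below into "
        "exactly one of these categories:\n- "
        + "\n- ".join(CATEGORIES)
        + f"\n\nAd text:\n{ad_text}\n\nAnswer with the category name only."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response.choices[0].message.content.strip()


if __name__ == "__main__":
    print(classify_ad("Hurry! Only 2 minutes left to claim your guaranteed crypto payout!"))
```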
The study revealed particularly concerning patterns for younger internet users. Nearly 10% of advertisements shown to underage users in the study violated regulations designed to protect minors. This highlights systematic failures in preventing inappropriate advertising from reaching children, the very problem that drives many users to adopt ad blockers in the first place.
Adblock Plus’s Acceptable Ads represents an attempt at compromise in the ongoing battle between advertisers and privacy advocates. The program, used by over 300 million people worldwide, works by maintaining curated lists of approved advertising exchanges (the automated platforms that connect advertisers with websites) and publishers (the websites and apps that display ads). The program allows certain advertisements to bypass ad blockers if they meet "non-intrusive" standards.
However, the NYU Tandon researchers discovered that advertising exchanges behave differently when serving ads to users with ad blockers enabled. While newly added exchanges in the Acceptable Ads program showed fewer problematic advertisements, existing approved exchanges that weren't blocked actually increased their delivery of problematic content to these privacy-conscious users.
"This differential treatment of ad blocker users by ad exchanges raises serious questions," Roongta noted. "Do ad exchanges detect the presence of these privacy-preserving extensions and intentionally target their users with problematic content?"
The implications extend beyond user experience. The researchers warn that this differential treatment could enable a new form of digital fingerprinting, where privacy-conscious users become identifiable precisely because of their attempts to protect themselves. This creates what the study calls a "hidden cost" for privacy-aware users.
The $740 billion digital advertising industry has been locked in an escalating arms race with privacy tools. Publishers lose an estimated $54 billion annually to ad blockers, leading nearly one-third of websites to deploy scripts that detect and respond to ad blocking software.
"The misleading nomenclature of terms like 'acceptable' or 'better' ads creates a perception of enhanced user experience, which is not fully realized," said Greenstadt.
This study extends earlier research by Greenstadt and Roongta, which found that popular privacy-enhancing browser extensions often fail to meet user expectations across key performance and compatibility metrics. The current work reveals another dimension of how privacy technologies may inadvertently harm the users they aim to protect.
In addition to Greenstadt and Roongta, the paper's authors are Julia Jose, an NYU Tandon CSE PhD candidate, and Hussam Habib, a research associate in Greenstadt’s Privacy, Security, and Automation Lab (PSAL).
Large Language Models fall short in detecting propaganda
In an era of rampant misinformation, detecting propaganda in news articles is more crucial than ever. A new study, however, suggests that even the most advanced artificial intelligence systems struggle with this task, with some propaganda techniques proving particularly elusive.
In a paper presented at the 5th International Workshop on Cyber Social Threats, part of the 18th International AAAI Conference on Web and Social Media in June 2024, Rachel Greenstadt — professor in the Computer Science and Engineering Department and a member of the NYU Center for Cybersecurity — and her Ph.D. advisee Julia Jose evaluated several large language models (LLMs), including OpenAI's GPT-3.5 and GPT-4, and Anthropic's Claude 3 Opus, on their ability to identify six common propaganda techniques in online news articles:
- Name-calling: Labeling a person or idea negatively to discredit it without evidence.
- Loaded language: Using words with strong emotional implications to influence an audience.
- Doubt: Questioning the credibility of someone or something without justification.
- Appeal to fear: Instilling anxiety or panic to promote a specific idea or action.
- Flag-waving: Exploiting strong patriotic feelings to justify or promote an action or idea.
- Exaggeration or minimization: Representing something as excessively better or worse than it really is.
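To give a concrete sense of the kind of prompt-based detection the study evaluates, the sketch below asks a chat model whether a single sentence uses a given technique. The yes/no framing, the prompt wording, and the detect_technique helper are illustrative assumptions, not the paper's exact experimental protocol.

```python
# Illustrative sketch only: zero-shot propaganda-technique detection with a
# chat LLM. The yes/no prompt is an assumption for illustration; it is not the
# paper's experimental setup or prompt wording.
from openai import OpenAI

TECHNIQUES = [
    "name-calling",
    "loaded language",
    "doubt",
    "appeal to fear",
    "flag-waving",
    "exaggeration or minimization",
]

client = OpenAI()  # expects OPENAI_API_KEY in the environment


def detect_technique(sentence: str, technique: str) -> bool:
    """Ask the model whether `sentence` uses `technique`; return True for 'yes'."""
    prompt = (
        f"Does the following news sentence use the propaganda technique "
        f"'{technique}'? Answer 'yes' or 'no' only.\n\nSentence: {sentence}"
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response.choices[0].message.content.strip().lower().startswith("yes")


if __name__ == "__main__":
    sentence = "Only a coward or a traitor would vote against this bill."
    for technique in TECHNIQUES:
        print(technique, detect_technique(sentence, technique))
```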
The study found that while these AI models showed some promise, they consistently underperformed compared to more specialized systems designed for propaganda detection.
“LLMs tend to perform relatively well on some of the more common techniques such as name-calling and loaded language,” said Greenstadt. “Their accuracy declines as the complexity increases, particularly with ‘appeal to fear’ and ‘flag-waving’ techniques.”
The main baseline model, built on RoBERTa-CRF (a fine-tuned RoBERTa model with a conditional random field layer), significantly outperformed the LLMs across all six propaganda techniques examined. The researchers noted, however, that GPT-4 did show improvements over its predecessor, GPT-3.5, and outperformed a simpler baseline model in detecting certain techniques like name-calling, appeal to fear, and flag-waving.
These findings highlight the ongoing challenges in developing AI systems capable of nuanced language understanding, particularly when it comes to detecting subtle forms of manipulation in text.
"Propaganda often relies on emotional appeals and logical fallacies that can be difficult even for humans to consistently identify," Greenstadt said. "Our results suggest that we still have a long way to go before AI can reliably assist in this critical task, especially with more nuanced techniques. They also serve as a reminder that, for now, human discernment remains crucial in identifying and countering propaganda in news media.”
The study, which was supported by the National Science Foundation under grant number 1940713, adds to Greenstadt's body of work centering on developing intelligent systems that are not only autonomous but also reliable and ethical. Her research aims to create AI that can be entrusted with crucial information and decision-making processes.