Julia Jose is a Computer Science Ph.D. candidate advised by Dr. Rachel Greenstadt and a member of the Privacy, Security, and Automation Lab at NYU. She is interested in using natural language processing and machine learning tools to tackle privacy issues online. Before joining NYU, she worked as a Data Scientist at Atos zData, focusing on applied NLP as well as MLOps tools and technologies, and served on the company's AI Ethics council. Julia holds a bachelor’s degree in Electronics and Communication Engineering (2019) from the National Institute of Technology Delhi and a master’s degree in Computer Science (2021) from Arizona State University. At ASU, she worked in the Behavioral Research Lab, focusing on NLP applications in behavioral research. Earlier in her career, she was a Research and Development Intern at the Qatar Computing Research Institute, contributing to research in the Arabic Language Technologies Department.
Education
Arizona State University, 2021
Master of Science, Computer Science
National Institute of Technology Delhi, 2019
Bachelor of Technology, Electronics and Communication Engineering
Research News
Large Language Models fall short in detecting propaganda
In an era of rampant misinformation, detecting propaganda in news articles is more crucial than ever. A new study, however, suggests that even the most advanced artificial intelligence systems struggle with this task, with some propaganda techniques proving particularly elusive.
In a paper presented at the 5th International Workshop on Cyber Social Threats, part of the 18th International AAAI Conference on Web and Social Media in June 2024, Rachel Greenstadt, a professor in the Computer Science and Engineering Department and a member of the NYU Center for Cybersecurity, and her Ph.D. advisee Julia Jose evaluated several large language models (LLMs), including OpenAI's GPT-3.5 and GPT-4 and Anthropic's Claude 3 Opus, on their ability to identify six common propaganda techniques in online news articles:
- Name-calling: Labeling a person or idea negatively to discredit it without evidence.
- Loaded language: Using words with strong emotional implications to influence an audience.
- Doubt: Questioning the credibility of someone or something without justification.
- Appeal to fear: Instilling anxiety or panic to promote a specific idea or action.
- Flag-waving: Exploiting strong patriotic feelings to justify or promote an action or idea.
- Exaggeration or minimization: Representing something as excessively better or worse than it really is.
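The study's exact prompts and evaluation pipeline are detailed in the paper itself; as a rough illustration, here is a minimal sketch of how one might ask an LLM to label a sentence with the six techniques above, using OpenAI's chat completions API. The prompt wording, label handling, and helper function are illustrative assumptions, not the study's own code.

```python
# Minimal sketch: asking an LLM which propaganda techniques a sentence uses.
# The prompt wording and output handling below are illustrative assumptions;
# the study's actual prompts and evaluation protocol are described in the paper.
from openai import OpenAI

TECHNIQUES = [
    "Name-calling",
    "Loaded language",
    "Doubt",
    "Appeal to fear",
    "Flag-waving",
    "Exaggeration or minimization",
]

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def detect_techniques(sentence: str, model: str = "gpt-4") -> str:
    """Ask the model which of the six techniques, if any, the sentence uses."""
    prompt = (
        "You are analyzing news text for propaganda techniques. "
        "List which of the following techniques the sentence uses, "
        f"or answer 'None': {', '.join(TECHNIQUES)}.\n\n"
        f"Sentence: {sentence}"
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic output makes evaluation repeatable
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(detect_techniques("Only a true patriot would support this bill."))
```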
The study found that while these AI models showed some promise, they consistently underperformed compared to more specialized systems designed for propaganda detection.
“LLMs tend to perform relatively well on some of the more common techniques such as name-calling and loaded language,” said Greenstadt. “Their accuracy declines as the complexity increases, particularly with ‘appeal to fear’ and ‘flag-waving’ techniques.”
The baseline model, a fine-tuned RoBERTa language model topped with a conditional random field (CRF) layer, significantly outperformed the LLMs across all six propaganda techniques examined. The researchers noted, however, that GPT-4 showed improvements over its predecessor, GPT-3.5, and outperformed a simpler baseline in detecting certain techniques such as name-calling, appeal to fear, and flag-waving.
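For readers curious what such a baseline looks like, below is a minimal, illustrative sketch of a RoBERTa-CRF tagger: the RoBERTa encoder produces per-token emission scores, and the CRF layer decodes a coherent sequence of span tags. The tag set and model choices are assumptions for illustration, not the study's actual baseline code; the sketch relies on the transformers, torch, and pytorch-crf packages.

```python
# Illustrative RoBERTa-CRF tagger: RoBERTa scores each token and a CRF layer
# decodes a consistent BIO tag sequence marking propaganda spans.
# The single "PROP" span type is a simplification; the study covers six techniques.
import torch.nn as nn
from torchcrf import CRF
from transformers import RobertaModel, RobertaTokenizerFast

TAGS = ["O", "B-PROP", "I-PROP"]  # illustrative BIO tag set


class RobertaCRFTagger(nn.Module):
    def __init__(self, num_tags: int = len(TAGS)):
        super().__init__()
        self.encoder = RobertaModel.from_pretrained("roberta-base")
        self.emissions = nn.Linear(self.encoder.config.hidden_size, num_tags)
        self.crf = CRF(num_tags, batch_first=True)

    def forward(self, input_ids, attention_mask, labels=None):
        hidden = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state
        scores = self.emissions(hidden)  # per-token emission scores
        mask = attention_mask.bool()
        if labels is not None:
            # Training: negative log-likelihood of the gold tags under the CRF.
            return -self.crf(scores, labels, mask=mask, reduction="mean")
        # Inference: Viterbi decoding of the most likely tag sequence.
        return self.crf.decode(scores, mask=mask)


tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")
model = RobertaCRFTagger()
batch = tokenizer(["Only a true patriot would support this."], return_tensors="pt")
print(model(batch["input_ids"], batch["attention_mask"]))  # untrained demo output
```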
These findings highlight the ongoing challenges in developing AI systems capable of nuanced language understanding, particularly when it comes to detecting subtle forms of manipulation in text.
"Propaganda often relies on emotional appeals and logical fallacies that can be difficult even for humans to consistently identify," Greenstadt said. "Our results suggest that we still have a long way to go before AI can reliably assist in this critical task, especially with more nuanced techniques. They also serve as a reminder that, for now, human discernment remains crucial in identifying and countering propaganda in news media.”
The study, supported by the National Science Foundation under grant number 1940713, adds to Greenstadt's body of work on developing intelligent systems that are not only autonomous but also reliable and ethical. Her research aims to create AI that can be entrusted with crucial information and decision-making processes.