Nasir Memon
Professor
Dean of Engineering at NYU Shanghai
Nasir Memon is a professor of Computer Science and Engineering at the New York University Tandon School of Engineering and Dean of Engineering at NYU Shanghai. He introduced cybersecurity studies to NYU Tandon in 1999, making it one of the first schools to offer the program at the undergraduate level. He is a co-founder of NYU's Center for Cyber Security (CCS) in New York as well as at NYU Abu Dhabi. He is the founder of the OSIRIS Lab, CSAW, the NYU Tandon Bridge program, and the Cyber Fellows program at NYU. He has received several best paper awards and awards for excellence in teaching. He has served on the editorial boards of several journals and was the Editor-in-Chief of the IEEE Transactions on Information Forensics and Security. He is an IEEE Fellow and an SPIE Fellow for his contributions to image compression and media security and forensics. His research interests include digital forensics, biometrics, data compression, network security, and security and human behavior.
Education
University of Nebraska
Ph.D., Computer Science
Birla Institute of Technology and Science, Pilani
Master of Science, Mathematics
Birla Institute of Technology and Science, Pilani
Bachelor of Engineering, Chemical Engineering
Awards and Distinctions
- Best Paper Award: DeepMasterPrints: Generating MasterPrints for Dictionary Attacks via Latent Variable Evolution. Philip Bontrager, Aditi Roy, Julian Togelius, Nasir Memon, and Arun Ross. IEEE BTAS 2018.
- SPIE Fellow 2014
- Best Research in Advanced ID Systems: Online Authentication of Digital Signature through Mobile Phones. Nasir Memon and Napa Sae-Bae. 2014
- Best Paper Award: Xiang Liu, Liyun Li, and Nasir Memon. International Conference on Machine Learning and Data Mining, 2013.
- Best Paper Award: IEEE Signal Processing Society. Protecting Biometric Templates with Sketch: Theory and Practice. Yagiz Sutcu, Qiming Li and Nasir Memon. 2012
- IEEE Fellow 2010
- IEEE Signal Processing Society, Distinguished Lecturer: Nasir Memon, 2011-2012 (Image Forensics: Collection, Search, Attribution and Authentication; Biometric Security and Privacy; Network Forensics; Advanced File Carving Techniques; Image Steganography and Steganalysis).
- Best Paper Award: DFRWS 2008 Annual Conference. Anandabrata Pal, Husrev Sencar, and Nasir Memon
- Jacobs Excellence in Education Award. Polytechnic University, 2002.
- ISO/IEC Certificate of Appreciation. International Organization for Standardization, 2002.
- NSF CAREER Award: Lossless, Near-Lossless and Lossy Plus Lossless Image Compression. Nasir Memon. May 15, 1997.
Research News
NYU Tandon researchers mitigate racial bias in facial recognition technology with demographically diverse synthetic image dataset for AI training
Facial recognition technology has made great strides in accuracy thanks to advanced artificial intelligence (AI) models trained on massive datasets of face images.
However, these datasets often lack diversity in terms of race, ethnicity, gender, and other demographic categories, causing facial recognition systems to perform worse on underrepresented demographic groups than on groups that dominate the training data. In other words, the systems are less likely to accurately match different images depicting the same person if that person belongs to a group that was insufficiently represented in the training data.
This systemic bias can jeopardize the integrity and fairness of facial recognition systems deployed for security purposes or to protect individual rights and civil liberties.
Researchers at NYU Tandon School of Engineering are tackling the problem. In a recent paper, a team led by Julian Togelius, Associate Professor of Computer Science and Engineering (CSE), revealed it successfully reduced facial recognition bias by generating highly diverse and balanced synthetic face datasets that can train facial recognition AI models to produce fairer results. The paper’s lead author is Anubhav Jain, a Ph.D. candidate in CSE.
The team applied an "evolutionary algorithm" to control the output of StyleGAN2, an existing generative AI model that creates high-quality artificial face images and was initially trained on the Flickr-Faces-HQ (FFHQ) dataset. The method is a "zero-shot" technique, meaning the researchers used the model as-is, without additional training.
The algorithm the researchers developed searches in the model’s latent space until it generates an equal balance of synthetic faces with appropriate demographic representations. The team was able to produce a dataset of 13.5 million unique synthetic face images, with 50,000 distinct digital identities for each of six major racial groups: White, Black, Indian, Asian, Hispanic and Middle Eastern.
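The paper details the team's exact search procedure and fitness criteria; purely as an illustration of the general idea, the sketch below shows a toy evolutionary search that perturbs latent vectors and keeps offspring whose generated faces fall into demographic groups that are still under quota. The `generate_face` and `predict_group` functions are hypothetical stand-ins for StyleGAN2 and a pretrained demographic classifier, replaced here with dummies so the sketch runs end to end.

```python
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM = 512
GROUPS = ["White", "Black", "Indian", "Asian", "Hispanic", "Middle Eastern"]

def generate_face(z):
    """Hypothetical stand-in for the StyleGAN2 generator, which maps a
    latent vector to a face image; here it returns the vector unchanged
    so the sketch runs without the real model."""
    return z

def predict_group(img):
    """Hypothetical stand-in for a pretrained demographic classifier."""
    return GROUPS[int(abs(img.sum()) * 1000) % len(GROUPS)]

def evolve_balanced_latents(per_group=10, pop_size=64, sigma=0.3, max_iters=2000):
    """Evolutionary search in latent space for a demographically balanced
    set of faces: offspring are Gaussian perturbations of parents, and a
    child survives only if its predicted group is still under quota."""
    buckets = {g: [] for g in GROUPS}
    parents = [rng.standard_normal(LATENT_DIM) for _ in range(pop_size)]
    for _ in range(max_iters):
        if all(len(zs) >= per_group for zs in buckets.values()):
            break
        children = [p + sigma * rng.standard_normal(LATENT_DIM) for p in parents]
        survivors = []
        for z in children:
            group = predict_group(generate_face(z))
            if len(buckets[group]) < per_group:
                buckets[group].append(z)
                survivors.append(z)
        # If nothing survived (every group its children hit was full), reseed.
        parents = survivors or [rng.standard_normal(LATENT_DIM) for _ in range(pop_size)]
    return buckets

balanced = evolve_balanced_latents()
print({g: len(zs) for g, zs in balanced.items()})
```

Because the selection pressure rewards only under-quota groups, the search naturally spends its budget on the demographics the base model generates least often, which is what allows a biased generator to yield a balanced dataset without retraining.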
The researchers then pre-trained three facial recognition models — ArcFace, AdaFace and ElasticFace — on the large, balanced synthetic dataset they generated.
The result not only boosted overall accuracy compared to models trained on existing imbalanced datasets, but also significantly reduced demographic bias. The trained models showed more equitable accuracy across all racial groups, whereas existing models exhibit poor performance on underrepresented minorities.
The synthetic data proved similarly effective for improving the fairness of algorithms analyzing face images for attributes like gender and ethnicity categorization.
By avoiding the need to collect and store real people's face data, the synthetic approach delivers the added benefit of protecting individual privacy, a concern when training AI models on images of actual people’s faces. And by generating balanced representations across demographic groups, it overcomes the bias limitations of existing face datasets and models.
The researchers have open-sourced their code to enable others to reproduce and build upon their work developing unbiased, high-accuracy facial recognition and analysis capabilities. This could pave the way for deploying the technology more responsibly across security, law enforcement and other sensitive applications where fairness is paramount.
This study — whose authors also include Rishit Dholakia (MS ’22 in Computer Science, NYU Courant) and Nasir Memon, Dean of Engineering at NYU Shanghai, NYU Tandon ECE professor, and faculty member of the NYU Center for Cybersecurity — builds upon a paper the researchers shared at the IEEE International Joint Conference on Biometrics (IJCB), September 25-28, 2023.
Anubhav Jain, Rishit Dholakia, Nasir Memon, et al. Zero-shot demographically unbiased image generation from an existing biased StyleGAN. TechRxiv. December 2, 2023.
New AI model developed at NYU Tandon can alter apparent ages of facial images while retaining identifying features, a breakthrough in the field
NYU Tandon School of Engineering researchers developed a new artificial intelligence technique to change a person’s apparent age in images while maintaining their unique identifying features, a significant step forward from standard AI models that can make people look younger or older but fail to retain their individual biometric identifiers.
In a paper published in the proceedings of the IEEE International Joint Conference on Biometrics (IJCB), Sudipta Banerjee, the paper’s first author and a research assistant professor in the Computer Science and Engineering (CSE) Department, and colleagues trained a type of generative AI model – a latent diffusion model – to “know” how to perform identity-retaining age transformation.
To do this, Banerjee – working with CSE Ph.D. candidate Govind Mittal and Ph.D. graduate Ameya Joshi, under the guidance of Chinmay Hegde, CSE associate professor, and Nasir Memon, CSE professor – overcame a typical challenge in this type of work: assembling a large set of training data consisting of images that show individual people over many years.
Instead, the team trained the model with a small set of images of an individual, along with a separate set of images with captions indicating the age category of the person represented: child, teenager, young adult, middle-aged, elderly, or old. This set included images of celebrities captured throughout their lives.
The model learned the biometric characteristics that identified individuals from the first set. The age-captioned images taught the model the relationship between images and age. The trained model could then be used to simulate aging or de-aging by specifying a target age using a text prompt.
The researchers employed a method called "DreamBooth" for editing human face images by gradually modifying them using a combination of neural network components. The method involves adding noise – random variations or disturbances – to images and then removing it, while taking the underlying data distribution into account.
The approach utilizes text prompts and class labels to guide the image generation process, focusing on maintaining identity-specific details and overall image quality. Various loss functions are employed to fine-tune the neural network model, and the method's effectiveness is demonstrated through experiments on generating human face images with age-related changes and contextual variations.
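DreamBooth itself fine-tunes a pretrained text-to-image diffusion model on a handful of subject photos, typically with a prior-preservation term so the model retains its general knowledge of a class (here, age categories) while learning the subject's identity. The toy sketch below illustrates only that training objective, not the paper's full method: random tensors stand in for encoded face images and caption embeddings, and a small MLP stands in for the denoising U-Net.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
D_IMG, D_TXT, T = 64, 16, 1000  # toy image/text dims and diffusion steps

# Linear noise schedule: x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps
betas = torch.linspace(1e-4, 0.02, T)
alpha_bar = torch.cumprod(1.0 - betas, dim=0)

class ToyDenoiser(nn.Module):
    """Stand-in for the latent diffusion U-Net: predicts the noise eps
    given the noisy sample, the timestep, and a text-prompt embedding."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(D_IMG + 1 + D_TXT, 128), nn.SiLU(), nn.Linear(128, D_IMG))

    def forward(self, x_t, t, txt):
        t_feat = (t.float() / T).unsqueeze(-1)
        return self.net(torch.cat([x_t, t_feat, txt], dim=-1))

def diffusion_loss(model, x0, txt):
    """Standard denoising objective: corrupt x0 at a random timestep,
    ask the model to recover the noise, penalize the squared error."""
    t = torch.randint(0, T, (x0.shape[0],))
    eps = torch.randn_like(x0)
    ab = alpha_bar[t].unsqueeze(-1)
    x_t = ab.sqrt() * x0 + (1 - ab).sqrt() * eps
    return ((model(x_t, t, txt) - eps) ** 2).mean()

model = ToyDenoiser()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Stand-ins: a few "subject" images paired with an identity prompt, plus
# age-captioned "class" images used as a prior-preservation term so the
# model keeps its notion of age categories while learning the identity.
subject_imgs, subject_txt = torch.randn(4, D_IMG), torch.randn(4, D_TXT)
class_imgs, class_txt = torch.randn(16, D_IMG), torch.randn(16, D_TXT)

for step in range(200):
    loss = diffusion_loss(model, subject_imgs, subject_txt) \
         + 1.0 * diffusion_loss(model, class_imgs, class_txt)  # prior weight
    opt.zero_grad()
    loss.backward()
    opt.step()
print(f"final loss: {loss.item():.4f}")
```

In a real pipeline the two loss terms pull in the directions described above: the subject term teaches the model the person's biometric features, while the class term preserves its learned mapping from age captions to appearance, so a prompt naming a target age can re-render the same identity at that age.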
The researchers tested their method against other existing age-modification methods by having 26 volunteers match the generated image with an actual image of that person, and with ArcFace, a facial recognition algorithm. They found their method outperformed the others, with a decrease of up to 44% in the rate of incorrect rejections.
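For context on the metric: an incorrect (false) rejection occurs when a verification system decides that two images of the same person show different people. Below is a minimal sketch of how such a rate might be computed with ArcFace-style embeddings, assuming unit-norm feature vectors compared by cosine similarity against a tuned threshold; the embeddings here are random stand-ins rather than outputs of a real network.

```python
import numpy as np

rng = np.random.default_rng(1)
THRESHOLD = 0.3  # verification threshold; real systems tune this on a dev set

def normalize(v):
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

# Stand-ins: each row plays the role of an ArcFace-style unit embedding of
# an original photo; the "aged" version drifts by a bounded perturbation.
originals = normalize(rng.standard_normal((1000, 512)))
drift = normalize(rng.standard_normal((1000, 512)))
aged = normalize(originals + 0.9 * drift)

# Genuine-pair verification: cosine similarity between each original and
# its age-edited counterpart; pairs below the threshold are false rejections.
cosine = np.sum(originals * aged, axis=1)
frr = np.mean(cosine < THRESHOLD)
print(f"false rejection rate on genuine pairs: {frr:.1%}")
```

A lower rate means the age-edited images stay close enough to the original identity, in embedding space, that the recognizer still accepts them as the same person.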
True or false: studying work practices of professional fact-checkers
This research included Nasir Memon of NYU Tandon and the NYU Center for Cybersecurity, Nicholas Micallef of the NYU Abu Dhabi Center for Cyber Security, and researchers from Indiana University Bloomington and the University of Utah.
Online misinformation is a critical societal threat. While fact-checking plays a role in combating the exponential rise of misinformation, little empirical research has been done on the work practices of professional fact-checkers and fact-checking organizations.
Existing research has covered fact-checking practitioner views, the effectiveness of fact-checking efforts, and professional and user practices for responding to political claims. While researchers are beginning to investigate challenges to fact-checking, such efforts typically focus on traditional media outlets rather than independent fact-checking organizations (e.g., PolitiFact). Similarly, such research has not yet investigated the entire misinformation landscape, including the dissemination of the outcomes of fact-checking work.
To address these shortcomings, a team including Nasir Memon of NYU Tandon and Nicholas Micallef of NYU Abu Dhabi interviewed 21 professional fact-checkers from 19 countries, covering topics drawn from previous research analyzing fact-checking from a journalistic perspective. The interviews focused on gathering information about the fact-checking profession, fact-checking processes and methods, the use of computational tools for fact-checking, and challenges and barriers to fact-checking.
The study, "True or False: Studying the Work Practices of Professional Fact-Checkers," found that most of the fact-checkers felt a social responsibility to correct harmful information and provide "a service to the public," emphasizing that they want the outcome of their work to both educate and inform the public. Some fact-checkers mentioned that they hope to contribute to an information ecosystem that provides a "balanced battlefield" for the discussion of an issue, particularly during elections.
The interviews also revealed that the fact-checking process involves first selecting a claim, contextualizing and analyzing it, consulting data and domain experts, writing up the results and deciding on a rating, and disseminating the report.
Fact-checkers encounter several challenges in achieving their desired impact because current fact-checking work practices are largely manual, ad-hoc, and limited in scale, scope, and reach. As a result, the rate at which misinformation can be fact-checked is much slower than the speed at which it is generated. The research points out the need for unified and collaborative computational tools that empower the human fact-checker in the loop by supporting the entire pipeline of fact-checking work practices from claim selection to outcome dissemination. Such tools could help narrow the gap between misinformation generation and fact-check dissemination by improving the effectiveness, efficiency, and scale of fact-checking work and dissemination of its outcomes.
This research has been supported by New York University Abu Dhabi and Indiana University Bloomington.