New AI model developed at NYU Tandon can alter the apparent age of a person’s face in images while retaining their identifying features, a capability standard models lack
NYU Tandon School of Engineering researchers have developed a new artificial intelligence technique that changes a person’s apparent age in images while maintaining their unique identifying features. It marks a significant step forward from standard AI models, which can make people look younger or older but fail to retain the individual’s biometric identifiers.
In a paper published in the proceedings of the IEEE International Joint Conference on Biometrics (IJCB), Sudipta Banerjee, the paper’s first author and a research assistant professor in the Computer Science and Engineering (CSE) Department, and colleagues trained a latent diffusion model – a type of generative AI – to perform identity-retaining age transformation.
To do this, Banerjee – working with CSE PhD candidate Govind Mittal and PhD graduate Ameya Joshi, under the guidance of Chinmay Hegde, CSE associate professor, and Nasir Memon, CSE professor – sidestepped a typical challenge in this type of work: assembling a large training dataset of images showing the same individuals across many years.
Instead, the team trained the model with a small set of images of an individual, along with a separate set of images with captions indicating the age category of the person depicted: child, teenager, young adult, middle-aged, elderly, or old. The latter set included images of celebrities captured throughout their lives.
The model learned the biometric characteristics that identified individuals from the first set. The age-captioned images taught the model the relationship between images and age. The trained model could then be used to simulate aging or de-aging by specifying a target age using a text prompt.
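To illustrate the inference step, here is a minimal sketch (not the authors’ released code) of prompting a fine-tuned latent diffusion model with a target age category, using the Hugging Face diffusers library. The checkpoint path and the rare identifier token “sks” are assumptions about a typical DreamBooth-style setup, not details from the paper.

```python
# Hedged sketch: generating an age-edited image by naming the target age
# category in the text prompt of a fine-tuned latent diffusion model.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "./dreambooth-age-checkpoint",  # hypothetical path to fine-tuned weights
    torch_dtype=torch.float16,
).to("cuda")

# "sks" stands in for the rare token bound to the subject's identity
# during fine-tuning; the age category acts as the editing control.
image = pipe("photo of sks person as an elderly person").images[0]
image.save("aged.png")
```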
The researchers employed a method called "DreamBooth" for editing human face images, gradually modifying them using a combination of neural network components. The method adds noise – random variations or disturbances – to images and then removes it, while accounting for the underlying data distribution.
The approach uses text prompts and class labels to guide the image generation process, focusing on maintaining identity-specific details and overall image quality. Several loss functions fine-tune the neural network, and experiments on generating human face images with age-related changes and contextual variations demonstrate the method's effectiveness.
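The combined objective can be sketched as the standard DreamBooth recipe: a denoising loss on the subject’s photos plus a prior-preservation loss on generic class images, so the model learns the individual’s details without forgetting what faces in general look like. The variable names and `prior_weight` value below are illustrative assumptions, not values from the paper.

```python
# Hedged sketch of a DreamBooth-style training objective.
import torch
import torch.nn.functional as F

def dreambooth_loss(unet, noisy_latents, timesteps, text_embeds,
                    target_noise, prior_weight=1.0):
    # Each batch is the subject half concatenated with the class-prior half
    # (e.g., generic "photo of a person" images).
    pred = unet(noisy_latents, timesteps,
                encoder_hidden_states=text_embeds).sample
    pred_subject, pred_prior = pred.chunk(2)
    noise_subject, noise_prior = target_noise.chunk(2)

    # Denoising loss: learn the subject's identity-specific details.
    subject_loss = F.mse_loss(pred_subject, noise_subject)
    # Prior-preservation loss: retain the model's general face class.
    prior_loss = F.mse_loss(pred_prior, noise_prior)
    return subject_loss + prior_weight * prior_loss
```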
The researchers tested their method against existing age-modification techniques in two ways: by having 26 volunteers match each generated image to an actual photo of the same person, and by running ArcFace, a facial recognition algorithm. Their method outperformed the alternatives, cutting the rate of incorrect rejections by up to 44%.
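As a rough illustration of that biometric metric, one can compare ArcFace-style embeddings of an original photo and its age-edited counterpart with cosine similarity, and count a rejection whenever a genuine pair falls below a match threshold. The threshold value and embedding source here are hypothetical, not the paper’s evaluation protocol.

```python
# Hedged sketch: false rejection rate over genuine (same-person) pairs.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def false_rejection_rate(genuine_pairs, threshold=0.3):
    """genuine_pairs: (embedding_of_original, embedding_of_age_edited)
    tuples for the SAME person, e.g. 512-d ArcFace vectors."""
    rejections = sum(cosine_similarity(a, b) < threshold
                     for a, b in genuine_pairs)
    return rejections / len(genuine_pairs)
```

A lower false rejection rate means the age-edited images still match their subjects, which is the identity-retention property the team measured.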