Renowned Computer Scientist Yoshua Bengio Explains Adversarial Nets

Yoshua Bengio with Professor Anna Choromanska, organizer of the Modern AI lecture series

Yoshua Bengio, head of the Montreal Institute for Learning Algorithms (MILA), editor of the Journal of Machine Learning Research, and professor at the University of Montreal, took to the stage at NYU Tandon’s MakerSpace on March 19, 2018, to delve into the wizardry behind Generative Adversarial Networks (GANs), a key model for artificial intelligence (AI).

Renowned as a co-creator of the field of deep learning, Bengio was the second speaker in a new seminar series, Modern Artificial Intelligence, organized by Professor Anna Choromanska and hosted by NYU Tandon’s Department of Electrical and Computer Engineering. Bengio has done trailblazing research on neural networks alongside Google researcher Geoffrey Hinton and Yann LeCun, now Facebook’s head of AI, who gave the series’ inaugural lecture on Feb. 20.

Speaking to a standing-room-only audience of students and professors, Bengio, who has published some 300 articles and whose work has been cited over 100,000 times, discussed the neural architecture behind adversarial networks and how such networks’ core dichotomy (discriminator versus generator) makes it possible for the latter to “learn” how to create data sets by seeing what the discriminator regards as real and fake.

Teaching computers to do what we do naturally, interpreting large sets of data at lightning speed and applying what they learn to new examples, starts with training. Generative models involve collecting data that could represent images, sounds, grammar, and much more, and training a network to create authentic-seeming examples.
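
To make the idea concrete, the sketch below defines a toy generator and discriminator in PyTorch. Every detail here (the choice of framework, the layer sizes, the flattened 28×28 data shape) is an illustrative assumption rather than something from the lecture; real image models would typically use convolutional layers.

```python
# A minimal sketch of the two networks in a GAN, using PyTorch.
# All sizes are illustrative; image-scale models would use
# convolutional layers instead of these small fully connected ones.
import torch
import torch.nn as nn

LATENT_DIM = 64   # size of the random noise vector fed to the generator
DATA_DIM = 784    # e.g., a flattened 28x28 grayscale image

# The generator maps random noise to a synthetic data point.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256),
    nn.ReLU(),
    nn.Linear(256, DATA_DIM),
    nn.Tanh(),  # outputs scaled to [-1, 1], matching normalized data
)

# The discriminator maps a data point to the probability it is real.
discriminator = nn.Sequential(
    nn.Linear(DATA_DIM, 256),
    nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
    nn.Sigmoid(),
)
```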

Bengio explained that a Generative Adversarial Network is, at its foundation, a configuration in which two parts of the system have competing objectives: a discriminator, typically a convolutional neural network, is tasked with differentiating between positive and negative data sets (a real image versus a generated one, for example), while the generator “learns” to create synthetic data that seems real by figuring out what the discriminator deems real. This is facilitated by a process called backpropagation, which, rather like a feedback loop, teaches the generator to perfect its parameters over time, becoming better and better at “confusing” the discriminator.
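
The loop below sketches one round of that competition, continuing from the toy networks above. The optimizer settings and binary cross-entropy loss are standard choices for a vanilla GAN, not specifics from the talk: the discriminator is first updated to separate real from fake, then backpropagation carries the discriminator’s judgment back through to the generator’s parameters.

```python
# Continuing the sketch above: one adversarial training step.
# The discriminator learns to label real data 1 and fakes 0;
# the generator learns, via backpropagation through the
# discriminator's judgment, to make its fakes get labeled 1.
import torch
import torch.nn as nn

bce = nn.BCELoss()
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)

def train_step(real_batch: torch.Tensor) -> None:
    batch_size = real_batch.size(0)
    ones = torch.ones(batch_size, 1)    # labels for "real"
    zeros = torch.zeros(batch_size, 1)  # labels for "fake"

    # --- Discriminator step: tell real from generated ---
    noise = torch.randn(batch_size, LATENT_DIM)
    fake_batch = generator(noise).detach()  # don't update G here
    d_loss = (bce(discriminator(real_batch), ones)
              + bce(discriminator(fake_batch), zeros))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # --- Generator step: try to fool the discriminator ---
    noise = torch.randn(batch_size, LATENT_DIM)
    g_loss = bce(discriminator(generator(noise)), ones)
    g_opt.zero_grad()
    g_loss.backward()  # gradients flow back through D into G
    g_opt.step()
```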

“Think of the generator as a counterfeiter trying to ‘fool’ the discriminator into calling its data part of a ‘positive’ class,” he said, pointing out that GANs are still far from human-level artificial intelligence.
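
Formally, the counterfeiter game corresponds to the minimax objective of the original GAN paper (Goodfellow et al., 2014), which Bengio co-authored: the discriminator $D$ maximizes its accuracy on real data $x$ and generated samples $G(z)$, while the generator $G$ minimizes the same quantity.

```latex
\min_G \max_D \; V(D, G) =
    \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\left[\log D(x)\right]
  + \mathbb{E}_{z \sim p_z(z)}\left[\log\left(1 - D(G(z))\right)\right]
```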

Upcoming seminars in the series will feature Stefano Soatto, founding director of the UCLA Vision Lab; and Vladimir Vapnik, co-creator of the support vector machine (SVM) algorithm.