
AI Seminar Series: Vladimir Vapnik - Rethinking Statistical Learning Theory

Lecture / Panel
Open to the Public

ECE Seminar Series on Modern Artificial Intelligence presents:

Rethinking Statistical Learning Theory: Learning Using Statistical Invariants

The talk considers Teacher-Student interaction in learning processes. It introduces a new learning paradigm, called Learning Using Statistical Invariants (LUSI), which differs from the classical one. In the classical paradigm, the learning machine uses data to construct a classification or regression function that minimizes the expected loss; this is data-driven learning. In the LUSI paradigm, in order to construct the desired classification or regression function using both data and the Teacher's input, the learning machine computes statistical invariants specific to the problem and then minimizes the expected loss in a way that preserves these invariants; this is both data- and intelligence-driven learning.
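To make the invariant-preservation idea concrete, the sketch below shows a deliberately simplified version of it. The abstract does not specify an algorithm, and this is not Vapnik's exact LUSI (V-matrix) construction: it is an equality-constrained kernel ridge regressor whose coefficients are forced to reproduce the training-set averages of a few Teacher-supplied predicates. The kernel, predicates, regularization constant, and solver are all illustrative assumptions.

```python
# Minimal illustrative sketch only: an equality-constrained kernel ridge regression
# that preserves predicate averages, in the spirit of LUSI. Not Vapnik's exact
# formulation; kernel, predicates, and regularization below are assumptions.
import numpy as np

def rbf_kernel(A, B, width=1.0):
    """Gaussian (RBF) kernel matrix between the rows of A and the rows of B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width ** 2))

def lusi_style_fit(X, y, predicates, reg=1e-2, width=1.0):
    """Fit f(x) = sum_j alpha_j K(x, x_j) by minimizing squared loss plus a ridge
    penalty, subject to the invariants
        (1/n) sum_i phi_k(x_i) f(x_i) = (1/n) sum_i phi_k(x_i) y_i
    for each Teacher-supplied predicate phi_k. Solved via the linear system obtained
    from the KKT conditions (assuming the kernel matrix is invertible)."""
    n = X.shape[0]
    K = rbf_kernel(X, X, width)
    Phi = np.stack([phi(X) for phi in predicates])            # shape (m, n)
    m = Phi.shape[0]
    # Block system  [[K + reg*I, Phi^T], [Phi @ K, 0]] [alpha; mu] = [y; Phi @ y]
    top = np.hstack([K + reg * np.eye(n), Phi.T])
    bottom = np.hstack([Phi @ K, np.zeros((m, m))])
    sol = np.linalg.solve(np.vstack([top, bottom]), np.concatenate([y, Phi @ y]))
    alpha = sol[:n]
    return lambda X_new: rbf_kernel(X_new, X, width) @ alpha

# Example "Teacher's input": preserve the mean of the predictions and their first
# moment with respect to the input (both predicates are hypothetical choices).
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(100, 1))
y = np.sin(3.0 * X[:, 0]) + 0.1 * rng.normal(size=100)
predicates = [lambda Z: np.ones(len(Z)), lambda Z: Z[:, 0]]
f = lusi_style_fit(X, y, predicates)
print(f(X[:5]))                                               # predictions at 5 points
```

Conceptually, the equality constraints carry the Teacher's side information, while the ridge term handles the usual data-driven fit.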

From a mathematical point of view, methods of the classical paradigm employ mechanisms of strong convergence of approximations to the desired function, whereas methods of the new paradigm employ both strong and weak convergence mechanisms. This can significantly increase the rate of convergence.
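The abstract does not define these terms, but in the usual functional-analytic sense (assuming the approximations live in a Hilbert space of functions) they read as follows:

```latex
% Standard textbook definitions, stated here only for orientation.
% f_n: sequence of approximating functions, f: desired function, H: Hilbert space.
\[
  \text{strong convergence:}\qquad \lVert f_n - f \rVert_{H} \xrightarrow[n \to \infty]{} 0,
\]
\[
  \text{weak convergence:}\qquad \langle f_n - f,\, \phi \rangle_{H} \xrightarrow[n \to \infty]{} 0
  \quad \text{for every } \phi \in H .
\]
```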

Vladimir Vapnik Biography
Professor Vapnik received his Master's degree in Mathematics in 1958 from Uzbek State University, Samarkand, USSR. From 1961 to 1990 he worked at the Institute of Control Sciences, Moscow, where he became Head of the Computer Science Research Department. He then joined AT&T Bell Laboratories, Holmdel, NJ, and was appointed Professor of Computer Science and Statistics at Royal Holloway in 1995.

Professor Vapnik has taught and conducted research in computer science and in theoretical and applied statistics for over 30 years. He has published six monographs and over a hundred research papers. His major achievements are the development of a general theory for minimizing expected risk from empirical data and a new type of learning machine, the Support Vector Machine, which possesses a high level of generalization ability. These techniques have been used to solve many pattern recognition and regression estimation problems and have been applied to problems of dependency estimation, forecasting, and constructing intelligent machines. His current research is presented in his latest books, "Statistical Learning Theory" (Wiley, 1998) and "The Nature of Statistical Learning Theory" (second edition, Springer, 2000).

He was one of the invited speakers at the Colloquium "The Importance of being Learnable" hosted by the Computer Learning Research Centre at Royal Holloway in September 1998.

Free and open to the public

This event will be live-streamed on engineering.nyu.edu/live

RSVP


The Seminar Series on Modern Artificial Intelligence begins a new tradition at New York University. The series will be held at the NYU Tandon School of Engineering and is hosted by the Department of Electrical and Computer Engineering. Organized by Professor Anna Choromanska, the series aims to bring together faculty and students to discuss the most important research trends in the world of AI. The speakers include world-renowned experts whose research is making an immense impact on the development of new machine learning techniques and technologies and helping to build a better, smarter, more connected world.
