Speaker: Chinmay Hegde, Iowa State University
I will first describe several new theoretical results for neural learning algorithms in the unsupervised setting. Our approach rests on two key ideas: (1) the learned representations often themselves obey conciseness assumptions (such as compositionality, sparseness, and/or democracy), and (2) datasets often obey certain natural generative modeling assumptions. Our results can be viewed as formal evidence that (shallow) networks are indeed unsupervised feature learning mechanisms, and may shed light on how to train larger stacked architectures.
I will then describe an approach for unsupervised learning that succeeds in the setting of limited, coarse, unlabeled data. Our approach rests on a new generative modeling architecture, together with an associated training algorithm, that is explicitly physics-aware. We demonstrate this approach in an application in computational materials science, and show its benefits over the state of the art.
About the Speaker: Chinmay Hegde is with the Department of Electrical and Computer Engineering at Iowa State University in Ames, IA, where he has been an assistant professor since Fall 2015. His research focuses on developing fast and robust algorithms for machine learning and statistical signal processing, with applications to imaging, transportation analytics, and materials informatics. Before coming to Ames, Chinmay received his PhD at Rice University and was a postdoctoral associate in CSAIL at MIT. He is the recipient of multiple awards, including best paper awards at ICML, SPARS, and MMLS; the Budd Award for Best Engineering PhD Thesis in 2013; the NSF CRII Award in 2016; the Warren Boast Undergraduate Teaching Award in 2016; the Boast-Nilsson Award for Educational Impact in 2018; and the NSF CAREER Award in 2018.