Spring 2020 Seminars
A complete listing
|Jan 7||11am - 12pm||Mathy Vanhoef||NYU Abu Dhabi||NYU Wireless Seminar: Dragonblood: Attacking the Dragonfly Handshake of WPA3 and EAP-pwd||370 Jay, Room 913|
|Jan 13||11am - 12pm||Qing Qu||New York University||Nonconvex Optimization for Sparse Deconvolution: Geometry, Algorithms, and Applications||370 Jay, Room 824|
|Jan 17||11am - 12pm||Jong Chul Ye||Korea Advanced Inst. of Science and Technology (KAIST), Korea||Geometric Understanding of Supervised and Unsupervised Deep Learning for Biomedical Image Reconstruction||370 Jay, Room 824|
|Jan 29||11am - 12pm||Sidharth Jaggi||The Chinese University of Hong Kong, Hong Kong, China||Covert communication, or, how to whisper||370 Jay, Room 825|
|Jan 30||11am - 12pm||Riccardo Leonardi||University of Brescia, Italy||Modelling and Finding Symmetries for Signal and Image Representation||370 Jay, Room 825|
|Feb 4||2:30pm - 3:30pm||Sungjoo Yoo||Seoul National University, Korea||Quantizing neural networks for ultra-low-precision computation||370 Jay, Room 1013|
|Feb 6||11am - 12pm||Serge Leef||DARPA||Toward Simulation and Optimization of Distributed Real-time Intelligent Vehicle Electronics||370 Jay, Room 825|
|Feb 7||11am - 12pm||Bijoy Ghosh||Texas Tech University||Special Seminar: Formation Control of Multi-Agent Sensors with Target Localization and Tracking||370 Jay, Room 1013|
|Feb 11||11:30am - 12:30pm||Siwei Lyu||University at Albany, State University of New York||DeepFake the Menace?||370 Jay, Room 1013|
|Feb 13||11am - 12pm||Jan Kautz||NVIDIA||AI Seminar Series: Generative Models for Image Synthesis||370 Jay, Room 825|
|Feb 13||3pm - 4pm||Shlomo Shamai||Technion-Israel Institute of Technology, Israel||Wireless Networks via the Cloud: An Information Theoretic View||370 Jay, Room 913|
|Feb 14||11am - 12pm||Roel Dobbe||NYU AI Now Institute||Smart Grid Research Seminar: Learning to Control in Power Systems: Design and Analysis Guidelines for Concrete Safety Problems||370 Jay, Room 824|
|Mar 5||11am - 12pm||Gabor Lugosi||Pompeu Fabra University, Barcelona, Spain||AI Seminar Series: Archeology of random trees||370 Jay, Room 1201|
|Mar 6||2pm - 3pm||Shawn Blanton||Carnegie Mellon University||Designing Secure Hardware Systems||370 Jay, Room 825|
|Mar 12||11am - 12pm||Reza Moheimani||UT Dallas||Atomically Precise Manufacturing: Control and Automation on the Atomic Scale||370 Jay, Room 825|
|Bert Hochwald||University of Notre Dame||Rethinking the Radio: Looking at the Tradeoff of Quantity Over Quality||370 Jay, Room 825|
|Nicola Cesa-Bianchi||University of Milan, Italy||AI Seminar Series: Machine Learning and Sequential Decision Making||370 Jay, Room 1201|
|Magnus Egerstedt||Georgia Institute of Technology||Long Duration Autonomy With Applications to Persistent Environmental Monitoring||370 Jay, Room 825|
|Robert Schapire||Microsoft||AI Seminar Series: The Contextual Bandits Problem||370 Jay, Room 1201|
Speaker: Mathy Vanhoef, NYU Abu Dhabi
Date: Jan 7
Abstract: In this talk, we show that the Dragonfly handshake of WPA3 and EAP-pwd is affected by several design and implementation flaws. Most prominently, we present side-channel leaks that allow an adversary to perform brute-force attacks on the password. Additionally, we present invalid curve attacks against all EAP-pwd implementations and one WPA3 implementation. These implementation-specific attacks enable an adversary to bypass authentication. Finally, we discuss countermeasures that have been incorporated into the Wi-Fi standard.
About the Speaker: Mathy Vanhoef is a postdoctoral researcher at New York University Abu Dhabi. He is best known for his KRACK attack against WPA2 and the RC4 NOMORE attack against RC4. His research interests are in computer security with a focus on network security, wireless security (e.g., Wi-Fi), network protocols, and applied cryptography. Currently, his research focuses on analyzing security protocols to automatically discover (logical) implementation vulnerabilities.
Speaker: Qing Qu, New York University
Date: Jan 13
Abstract: Deconvolution of sparse point sources from their convolution with an unknown point spread function (PSF) finds many applications in neuroscience, microscopy imaging, physics, and computer vision. The problem is challenging to solve: it exhibits intrinsic shift symmetry structures, so that its natural formulation is nonconvex. There is very little theoretical analysis showing under what conditions nonconvex optimization methods are guaranteed to work, or may fail. In this talk, we develop a global optimization theory for sparse blind deconvolution by analyzing its nonconvex optimization landscape. First, we show how to use geometric intuitions to build efficient nonconvex algorithms that converge linearly to target solutions, even with random initializations. Moreover, we extend our geometric understanding to sparse deconvolution with multiple PSFs (a.k.a. convolutional dictionary learning), where each measurement is a superposition of convolutions with multiple unknown PSFs. Based on its similarity to overcomplete dictionary learning, we provide the first global algorithmic guarantees for convolutional dictionary learning. Finally, we show how to use these intuitions to design fast practical methods, demonstrating them on several applications in neuroscience and microscopy imaging.
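The shift symmetry the abstract mentions is easy to see in a circular-convolution model: shifting the kernel one way and the sparse map the other way leaves the observation unchanged, so the pair can only be recovered up to a shift (and scaling). The following is a minimal numerical sketch of that fact (my own illustration, not the speaker's code; all names are made up):

```python
import numpy as np

def circ_conv(a, x):
    """Circular convolution via the FFT (convolution theorem)."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(x)))

rng = np.random.default_rng(1)
n = 64
a = rng.normal(size=n)                       # unknown kernel (PSF)
x = np.zeros(n)                              # sparse point-source map
x[rng.choice(n, 5, replace=False)] = rng.normal(size=5)

y = circ_conv(a, x)                          # the observation

# Shift symmetry: rolling a forward by s and x backward by s yields the
# exact same observation, which is one reason the problem is nonconvex.
s = 7
y_shifted = circ_conv(np.roll(a, s), np.roll(x, -s))
```

In the Fourier domain the two shifts contribute opposite phase factors that cancel, so `y_shifted` equals `y` up to floating-point roundoff.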
About the Speaker: Qing Qu is a Moore-Sloan data science fellow at the Center for Data Science, New York University. He received his Ph.D. from Columbia University in Electrical Engineering in Oct. 2018. He received his B.Eng. from Tsinghua University in Jul. 2011, and an M.Sc. from Johns Hopkins University in Dec. 2012, both in Electrical and Computer Engineering. He interned at the U.S. Army Research Laboratory in 2012 and at Microsoft Research in 2016. His research interests lie at the intersection of the foundations of data science, machine learning, numerical optimization, and signal/image processing, with a focus on developing efficient nonconvex methods with global optimality guarantees for solving representation learning and nonlinear inverse problems in engineering and imaging sciences. He is the recipient of the Best Student Paper Award at SPARS'15 (with Ju Sun and John Wright) and of a Microsoft PhD Fellowship in machine learning.
Geometric Understanding of Supervised and Unsupervised Deep Learning for Biomedical Image Reconstruction
Speaker: Jong Chul Ye, Korea Advanced Inst. of Science and Technology (KAIST), Korea
Date: Jan 17
Abstract: Recently, deep learning approaches have been used extensively for various inverse problems thanks to their excellent performance. However, it is still difficult to obtain a coherent geometric view of why such deep learning architectures provide superior performance over mathematically driven classical algorithms. Inspired by the recent theoretical understanding of the geometry of CNNs as a combinatorial framelet representation, here we provide a unified theoretical framework that leads to a better understanding and optimized design of CNNs for various inverse problems. We also present our generalized cycleGAN framework for unsupervised learning, which can be used for general inverse problems without any matched training data set. We provide extensive experimental results using our supervised and unsupervised neural networks on several biomedical image reconstruction problems to verify the geometric understanding of CNNs for image reconstruction.
About the Speaker: Jong Chul Ye is currently a professor in the Dept. of Bio/Brain Engineering and an adjunct professor in the Dept. of Mathematical Sciences at KAIST, Korea. Before joining KAIST, he worked at Philips Research and GE Global Research, both in New York. He is an associate editor for IEEE Trans. Medical Imaging and a senior editor for IEEE Signal Processing Magazine. He is the incoming chair of the IEEE SPS Technical Committee for Computational Imaging, and a general co-chair (with Mathews Jacob) of the 2020 IEEE Symp. on Biomedical Imaging (ISBI), Iowa City. His current research interests include machine learning and signal processing for various image reconstruction problems in x-ray CT, MRI, optics, ultrasound, etc. He is a Fellow of the IEEE for “his contributions to signal processing and machine learning for bio-medical imaging.”
Speaker: Sidharth Jaggi, The Chinese University of Hong Kong, Hong Kong, China
Date: Jan 29
Abstract: Covert communication tries to answer the following question: if Alice wishes to whisper to Bob while ensuring that the eavesdropper Eve cannot even detect whether or not Alice is whispering, how much can she whisper? Meeting such a stringent security requirement requires new ideas from information theory, coding theory, and cryptography. In this talk I will survey the state of the existing literature (recent information-theoretic capacity-style results for a variety of settings), and then discuss even more recent results. Specifically, I will highlight:
Code constructions: Computationally efficient code constructions that achieve the information-theoretic capacity bounds.
Resilience to jamming: In some settings, Eve may not just be a passive eavesdropper, but actively attempt to jam Alice's communication, even if she isn't sure whether or not Alice is actually whispering. I will discuss covert communication schemes that are resilient to such malicious jamming.
Impact of environmental uncertainty: Often, noise levels on the communication medium are not static, but stochastically varying (for instance, in fading channels). It turns out such natural variation can dramatically impact the capacity -- indeed, in general such variation hurts Eve's detector much more than it hurts Bob's decoder.
About the Speaker: Sidharth Jaggi received his B.Tech. from I.I.T. Bombay in 2000, and his M.S. and Ph.D. degrees from Caltech in 2001 and 2006 respectively, all in EE. He spent 2006 as a Postdoctoral Associate at LIDS, MIT. He joined the Department of Information Engineering at the Chinese University of Hong Kong in 2007, where he is now an Associate Professor. His interests lie at the intersection of network information theory, coding theory, and algorithms. His research group thus (somewhat unwillingly) calls itself the CAN-DO-IT team (Codes, Algorithms, Networks: Design and Optimization for Information Theory). Examples of topics he has dabbled in include network coding, sparse recovery/group-testing, and covert communication; his current obsession is with adversarial channels.
Speaker: Riccardo Leonardi, University of Brescia, Italy
Date: Jan 30
Abstract: Symmetries play an essential role in understanding and modelling the world. Natural objects, their dynamics, or more generally natural waveforms often exhibit local “partial” symmetries which are key to understanding/modelling laws of physics or to describing/recognizing real objects. In this talk we propose a simple signal processing operation which opens new pathways to alternative representations of information, with possible use in classification, information modelling, representation, or compression. We shall define the key operation, which finds the position in a waveform that optimally decouples the energy distribution between the even and odd components around that position. We shall show how this operation can be cast recursively to provide a hierarchical representation of any waveform through a decomposition tree, which yields a generative reconstruction algorithm from a set of exponentially decaying coefficients. We shall also show how the even/odd decomposition can be used to identify local symmetries or anti-symmetries present in any waveform.
We shall then present a second application, namely how the decomposition can efficiently solve the problem of reflection symmetry detection for natural images, a 30-year-old issue in computer vision and image processing. The original framework is extended to apply to 2-D local patches. Candidate symmetry segments are found by scanning and connecting local maxima from 2-D correlation patches in various directions. These candidates are then validated by verifying the associated specularity of gradient direction information, which partially mimics how the brain performs planar symmetry detection. Experimental simulations exhibit superior performance on manually annotated ground truth from competition datasets and on natural images picked at random from large datasets.
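The core even/odd operation described above can be sketched in a few lines: around any candidate position, a waveform splits uniquely into an even and an odd component (their energies sum to the segment's energy), and one searches for the position that makes the energy split maximally lopsided. The scoring rule and function names below are my own assumptions for illustration, not the speaker's exact formulation:

```python
import numpy as np

def even_odd_energies(x, p):
    """Split x around index p into even/odd components and return their energies.
    Uses the largest window that fits symmetrically around p."""
    r = min(p, len(x) - 1 - p)          # half-width of the symmetric window
    seg = x[p - r : p + r + 1]
    even = 0.5 * (seg + seg[::-1])      # even part about p
    odd = 0.5 * (seg - seg[::-1])       # odd part about p
    return np.sum(even ** 2), np.sum(odd ** 2)

def best_decoupling_position(x):
    """Position maximizing the energy imbalance |E_even - E_odd| / (E_even + E_odd)."""
    scores = []
    for p in range(1, len(x) - 1):
        ee, eo = even_odd_energies(x, p)
        tot = ee + eo
        scores.append(abs(ee - eo) / tot if tot > 0 else 0.0)
    return 1 + int(np.argmax(scores))

# A waveform symmetric about index 50: there the odd component vanishes
# entirely, so the energy split is fully decoupled.
t = np.arange(101)
x = np.exp(-0.01 * (t - 50.0) ** 2)
p = best_decoupling_position(x)
```

For this symmetric test signal the detected position is the axis of symmetry, index 50, where the odd-part energy is exactly zero.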
About the Speaker: Riccardo obtained his Diploma (1984) and Ph.D. (1987) degrees in Electrical Engineering from the Swiss Federal Institute of Technology in Lausanne. After conducting research in the US on visual communications for about 5 years, he was appointed in 1992 at the University of Brescia, Italy, to establish activities in the telecommunications domain, where he holds the Signal Processing Chair. His main research interests cover the field of multimedia signal processing applications and visual communication (mainly image/video compression and content-based media analysis). He has actively participated in ISO/MPEG standardisation activities and has authored more than 200 papers and patents in the field. Riccardo Leonardi is a Fellow of the IEEE. He is currently acting as GTTI Chairman (a position elected by all Italian faculty members) for Italian academic coordination in the fields of signal processing, communication networking, and remote sensing.
Speaker: Sungjoo Yoo, Seoul National University
Date: Feb 4
Abstract: Bit-width needs to be minimized for efficient neural network design in terms of chip area, code size, and, most importantly, energy efficiency. In this talk, we first review state-of-the-art quantization methods in industry and academia, and then introduce our ideas of outlier quantization, precision highway, and quantization error fluctuation-aware training, which together offer 4-bit linear weight/activation quantization of MobileNet v3.
About the Speaker: Sungjoo Yoo received his Ph.D. from Seoul National University in 2000. From 2000 to 2004, he was a researcher in the system-level synthesis (SLS) group at the TIMA laboratory, Grenoble, France. From 2004 to 2008, he led, as principal engineer, the system-level design team at System LSI, Samsung Electronics. From 2008 to 2015, he was an associate professor at POSTECH. In 2015, he joined Seoul National University, where he is now a full professor. In 2018, he spent his sabbatical at Facebook, Menlo Park, US. His current research interests are software/hardware co-design of deep neural networks and machine learning-based optimization of computer architecture.
Speaker: Serge Leef, DARPA
Date: Feb 6
Abstract: Distributed compute and control systems make numerous modern applications, including automobiles and airplanes, possible. These are complex, networked systems with hundreds to thousands of computers collaborating to interact with human operators and the physical world. Design and verification of these systems is expensive, cumbersome, and suboptimal, as current approaches only cover a small number of operational scenarios. Problems identified in late stages of a project are difficult and very expensive to fix, since the system is already built. The process and the outcome would be profoundly improved if the system could be simulated and optimized while still under development.
About the Speaker: Mr. Serge Leef joined DARPA in August 2018 as a program manager in the Microsystems Technology Office (MTO). His research interests include computer architecture, simulation, synthesis, semiconductor intellectual property (IP), cyber-physical modeling, distributed systems, secure design flows, and supply chain management. He is also interested in the facilitation of startup ecosystems and business aspects of technology.
Leef came to DARPA from Mentor, a Siemens Business where from 2010 until 2018 he was a Vice President of New Ventures, responsible for identifying and developing technology and business opportunities in systems-oriented markets. Additionally, from 1999 to 2018, he served as a division General Manager, responsible for defining strategies and building successful businesses around design automation products in the areas of hardware/software co-design, multi-physics simulation, IP integration, SoC optimization, design data management, automotive/aerospace networking, cloud-based electronic design, Internet of Things (IoT) infrastructure, and hardware cybersecurity.
Prior to joining Mentor, he was responsible for design automation at Silicon Graphics, where he and his team created revolutionary, high-speed simulation tools to enable the design of high-speed 3D graphics chips, which defined the state-of-the-art in visualization, imaging, gaming, and special effects for a decade. Prior to that, he managed a CAE/CAD organization at Microchip and developed functional and physical design and verification tools for major 8- and 16-bit microcontroller and microprocessor programs at Intel.
Leef received his Bachelor of Science degree in electrical engineering and Master of Science degree in computer science from Arizona State University. He has served on corporate, state, and academic advisory boards, delivered numerous public speeches, and holds two patents.
Speaker: Bijoy Ghosh, Texas Tech University
Date: Feb 7
Abstract: Riemannian geometric methods have been applied to control rotatory dynamics, and in the last two decades we have witnessed much progress in controlling visual sensors for target localization and tracking. The motivation for such a control problem originates in biology, especially in the control of eye and head rotation. In this talk we introduce nonlinear systems theory and geometry to control a pair of rotating sensors for gaze control and for tracking a point target. The specific application of our approach to optimally controlling the human eye pair in the binocular vision setup is new.
About the Speaker: Bijoy received his Ph.D. degree in Engineering Sciences from Harvard University, USA, in 1983. From 1983 to 2007 Bijoy was with the Department of Electrical and Systems Engineering, Washington University, St. Louis, USA, where he was a Professor and Director of the Center for BioCybernetics and Intelligent Systems. Currently he is the Dick and Martha Brooks Regents Professor of Mathematics and Statistics at Texas Tech University, Lubbock, TX, USA. He received the D. P. Eckmann award in 1988 from the American Automatic Control Council, the Japan Society for the Promotion of Sciences Invitation Fellowship in 1997 and the Chinese Academy of Sciences Invitation Fellowship in 2016. He is an IEEE (2000) and IFAC (2014) Fellow. Bijoy has held visiting positions at Tokyo Institute of Technology and Osaka University in Japan, University of Padova in Italy, Royal Institute of Technology and Institut Mittag-Leffler, Stockholm, Sweden, Yale University, USA, Technical University of Munich, Germany, Chinese Academy of Sciences, China and Indian Institute of Technology, Kharagpur, India. Bijoy's current research interests are in biomechanics, cyberphysical systems, and control problems in rehabilitation engineering.
Speaker: Siwei Lyu, University at Albany, State University of New York
Date: Feb 11
Abstract: Advancements in AI technology, in particular deep generative models, have enabled the creation of fake images, audio, and videos in ways that were not possible before. Such fake videos, commonly known as DeepFakes, are eroding our trust in digital media and causing serious ethical, legal, social, and financial consequences. In this talk, I will briefly review the technologies behind the creation of DeepFakes, then introduce current methods for detecting such fake videos and measures that can obstruct the generation of DeepFakes, as well as general technical aspects of combating DeepFakes.
About the Speaker: Siwei Lyu is a Professor at the Department of Computer Science and the Director of Computer Vision and Machine Learning Lab (CVML) of University at Albany, State University of New York. Dr. Lyu received his Ph.D. degree in Computer Science from Dartmouth College in 2005, and his M.S. degree in Computer Science in 2000 and B.S. degree in Information Science in 1997, both from Peking University, China. Dr. Lyu's research interests include digital media forensics, computer vision, and machine learning. Dr. Lyu has published over 130 refereed journal and conference papers. Dr. Lyu's research projects are funded by NSF, DARPA, ARO and NIJ. He is the recipient of the IEEE Signal Processing Society Best Paper Award (2011), the National Science Foundation CAREER Award (2010), SUNY Chancellor's Award for Excellence in Research and Creative Activities (2018) and Google Faculty Research Award (2019).
Speaker: Jan Kautz, NVIDIA
Date: Feb 13
Abstract: Recent progress in generative models and particularly generative adversarial networks (GANs) has been remarkable. They have been shown to excel at image synthesis as well as image-to-image translation problems. I will present a number of our recent methods in this space, which, for instance, can translate images from one domain (e.g., day time) to another domain (e.g., night time) in an unsupervised fashion, synthesize completely new images, and even learn to detect defects by synthesizing their own training data.
About the Speaker: Jan Kautz is VP of Learning and Perception Research at NVIDIA. Jan and his team pursue fundamental research in the areas of computer vision and deep learning, including visual perception, geometric vision, generative models, and efficient deep learning. His and his team's work has been recognized with various awards and has been regularly featured in the media. Before joining NVIDIA in 2013, Jan was a tenured faculty member at University College London. He holds a BSc in Computer Science from the University of Erlangen-Nürnberg (1999), an MMath from the University of Waterloo (1999), received his PhD from the Max-Planck-Institut für Informatik (2003), and worked as a post-doctoral researcher at the Massachusetts Institute of Technology (2003-2006).
Speaker: Shlomo Shamai, Technion-Israel Institute of Technology
Date: Feb 13
Abstract: Cloud-based wireless networks, also known as Cloud Radio Access Networks (C-RANs), are emerging as appealing architectures for next-generation wireless/cellular systems, whereby the processing/encoding/decoding is migrated from the local base stations/radio units (RUs) to a central unit (CU) in the "cloud". The network operates via fronthaul digital links connecting the CU and the RUs (which operate as relays). The uplink and downlink are examined from a network information theoretic perspective, with emphasis on simple oblivious processing at the RUs, which is attractive also from the practical point of view. The analytic approach, applied to simple wireless/cellular models, illustrates the considerable performance gains associated with advanced network information theoretically inspired techniques, which also carry practical implications. An outlook pointing out interesting theoretical directions, referring also to Fog Radio Access Networks (F-RANs), concludes the presentation.
About the Speaker: Shlomo Shamai (Shitz) is with the Viterbi Department of Electrical Engineering, Technion-Israel Institute of Technology, where he is now a Technion Distinguished Professor, and holds the William Fondiller Chair of Telecommunications.
He is an IEEE Life Fellow, an URSI Fellow, a member of the Israeli Academy of Sciences and Humanities and a foreign member of the US National Academy of Engineering. He is the recipient of the 2011 Claude E. Shannon Award, the 2014 Rothschild Prize in Mathematics/Computer Sciences and Engineering, the 2017 IEEE Richard W. Hamming Medal. He is also a co-recipient of the 2018 Third Bell Labs Prize for Shaping the Future of Information and Communications Technology and other awards and recognitions.
The overview is based on joint studies with I. E. Aguerri, G. Caire, S.-H. Park, O. Sahin, O. Simeone and A. Zaidi. The research of S. Shamai has been supported by the European Union's Horizon 2020 Research and Innovation Program: 694630.
Smart Grid Research Seminar: Learning to Control in Power Systems: Design and Analysis Guidelines for Concrete Safety Problems
Speaker: Roel Dobbe, NYU AI Now Institute
Date: Feb 14
Abstract: Rapid progress in machine learning and artificial intelligence (AI) has brought renewed attention to its applicability in power systems for modern forms of control that help integrate higher levels of renewable generation and address increasing levels of uncertainty and variability. In this talk we discuss these new applications and shed light on the most relevant new safety risks and considerations that emerge when relying on learning for control purposes in electric grid operations. We build on recent taxonomical work in AI safety and focus on four concrete safety problems. We draw on two case studies, one in frequency regulation and one in distribution system control, to exemplify these problems and show mitigating measures. We then provide general guidelines and literature to help people working on integrating learning capabilities for control purposes to make safety risks a central tenet of design.
About the Speaker: Roel Dobbe is a postdoctoral researcher at the AI Now Institute, and will join TU Delft's Technology, Policy and Management department as an assistant professor in August 2020. He attained his PhD from UC Berkeley in Electrical Engineering and Computer Sciences under the supervision of Prof. Claire Tomlin. Roel's research addresses the development, analysis, integration and governance of data-driven systems. His PhD work, entitled “An Integrative Approach to Data-Driven Monitoring and Control of Electric Distribution Networks”, combined optimization, machine learning and control theory to enable monitoring and control of safety-critical systems, including energy & power systems and cancer diagnosis and treatment. In addition to research, Roel has experience in industry and public institutions, where he has served as a management consultant for AT Kearney, a data scientist for C3 IoT, and a researcher for the National ThinkTank in The Netherlands. His diverse background led him to examine the ways in which values and stakeholder perspectives are represented in the process of designing and deploying AI and algorithmic decision-making and control systems. Roel is passionate about developing practices to help engineers and computer scientists engage more closely both with impacted communities and scholars in the social sciences, and to better contend with serious questions of ethics and governance. Towards this end, Roel founded Graduates for Engaged and Extended Scholarship around Computing & Engineering (GEESE), a student organization stimulating graduate students across all disciplines studying or developing technologies to take a broader lens at their field of study and engage across disciplines.
Roel has published his work in various journals and conferences, including Automatica, the IEEE Conference on Decision and Control, the IEEE Transactions on Power Systems and Smart Grid, IEEE/ACM Transactions on Computational Biology and Bioinformatics, and NeurIPS.
Speaker: Gabor Lugosi, Pompeu Fabra University, Barcelona, Spain
Date: Mar 5
Abstract: Networks are often naturally modeled by random processes in which nodes of the network are added one-by-one, according to some random rule. Uniform and preferential attachment trees are among the simplest examples of such dynamically growing networks. The statistical problems we address in this talk regard discovering the past of the tree when a present-day snapshot is observed. We present a few results that show that, even in gigantic networks, a lot of information is preserved from the very early days. In particular, we discuss the problem of finding the root and the broadcasting problem.
About the Speaker: Gabor Lugosi is an ICREA research professor at the Department of Economics, Pompeu Fabra University, Barcelona. He graduated in electrical engineering at the Technical University of Budapest in 1987, and received his Ph.D. from the Hungarian Academy of Sciences in 1991. His main research interests include the theory of machine learning, combinatorial statistics, and information theory.
Speaker: Shawn Blanton, Carnegie Mellon University
Date: Mar 6
Abstract: On October 29, 2018, DARPA issued an RFI that stated: “This Request for Information (RFI) from the Defense Advanced Research Projects Agency’s (DARPA) Microsystems Technology Office (MTO) seeks information on technology, concepts, and approaches to support the integration of security capabilities directly into System on Chip (SoC) system design and to enable the autonomous integration and assembly of SoCs.”
This RFI, and the tens of millions of dollars that the US government has already invested in hardware security research and development, is based on the fact that the fabrication of state-of-the-art electronics now happens mostly overseas. With the recent announcement that GLOBALFOUNDRIES will stop all 7nm development, there is now only one company in the US that continues to pursue advanced semiconductors (Intel). Unfortunately, Intel does not have the same experience in making chips for third parties as Samsung and (most importantly) TSMC (Taiwan Semiconductor Manufacturing Corporation). As a result, the US government believes it will be forced to fabricate advanced, sensitive electronics overseas in untrusted fabrication facilities, and there is therefore keen interest in design methodologies that mitigate reverse engineering, tampering, counterfeiting, etc.
In this talk, an overview of hardware security will be presented, followed by a discussion of a concept called logic locking. This approach will be described, along with the "back and forth" now occurring in the research community involving: (i) vulnerability discovery and (ii) logic locking improvement.
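To make the logic-locking idea concrete: key gates (typically XOR/XNOR) are inserted on internal wires so that the circuit computes the intended function only when the correct key is applied; any wrong key inverts internal signals and corrupts the output. A toy one-key-bit sketch of the general concept (my own illustration, not the specific scheme discussed in the talk):

```python
def original(a, b, c):
    """The intended function: out = (a AND b) XOR c, all 1-bit values."""
    return (a & b) ^ c

def locked(a, b, c, k):
    """The same circuit with an XNOR key gate on the internal wire (a AND b).
    With the correct key k = 1 the XNOR is transparent; with k = 0 it
    inverts the wire, so the locked chip misbehaves."""
    wire = ((a & b) ^ k) ^ 1   # XNOR(a AND b, k) for 1-bit values
    return wire ^ c

# With the correct key the locked design matches the original on every input;
# with the wrong key it does not.
inputs = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
matches_good = all(locked(a, b, c, 1) == original(a, b, c) for a, b, c in inputs)
matches_bad = all(locked(a, b, c, 0) == original(a, b, c) for a, b, c in inputs)
```

Real logic locking inserts many such key gates across a netlist and must resist key-recovery techniques such as SAT-based attacks, which is the "back and forth" between vulnerability discovery and locking improvement that the talk refers to.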
About the Speaker: Shawn Blanton is the Trustee Professor in the Department of Electrical and Computer Engineering (ECE) at Carnegie Mellon University, where he formerly served as director of the Center for Silicon System Implementation, an organization consisting of 18 faculty members and over 80 PhD students focused on the design and manufacture of silicon-based systems. He currently serves as the Associate Department Head for Research in ECE, and the Associate Dean for Diversity and Inclusion for the College of Engineering. He received a Bachelor's degree in engineering from Calvin College in 1987, a Master's degree in Electrical Engineering in 1989 from the University of Arizona, and a Ph.D. degree in Computer Science and Engineering from the University of Michigan, Ann Arbor.
Professor Blanton’s research interests are housed in the Advanced Chip Testing Laboratory (ACTL, www.ece.cmu.edu/~actl) and include the design, verification, test, diagnosis and security of integrated, heterogeneous systems. He has published many papers in these areas and has several issued and pending patents in the area of IC test, diagnosis and security. Besides several best paper awards, Prof. Blanton has received the National Science Foundation Career Award for the development of a microelectromechanical systems (MEMS) testing methodology and two IBM Faculty Partnership Awards. He is a Fellow of the IEEE, and is the recipient of the 2006 Emerald Award for outstanding leadership in recruiting and mentoring minorities for advanced degrees in science and technology.
Speaker: Reza Moheimani, UT Dallas
Date: Mar 12
Abstract: Improvement in manufacturing precision has been the driving force behind technological advancements throughout history. Atomically precise manufacturing (APM) requires the ultimate in engineering precision. While most manufacturing techniques treat matter as infinitely divisible, APM uses the quantized nature of matter to enable fabrication of devices with atomic precision. Hydrogen Depassivation Lithography (HDL) is an approach to atomically precise manufacturing, whereby a scanning tunneling microscope (STM) tip is used to inject electrons into surface chemical bonds, causing them to break. By scanning the tip across a hydrogen passivated surface, lines of hydrogen atoms are removed creating patterns of exposed silicon dangling bonds. Compared with the background hydrogen-terminated silicon atoms, these dangling bonds are more reactive to many species of material that prefer to adsorb into the patterned area. This method has been used recently to create nanoscale electronic devices including wires, transistors, qubits and quantum dots.
This approach to APM depends on reliable and repeatable operation of a scanning tunneling microscope. However, the STM is a characterization tool, and using it for nano-fabrication raises challenges, the foremost being the frequent occurrence of tip-sample crashes in such APM systems. A common cause of tip-sample crashes is poor performance of the STM feedback control system. We show that there is a direct link between the Local Barrier Height (LBH), a quantum-mechanical property of the tip and sample, and the stability robustness of the feedback control loop. We demonstrate how the LBH can be estimated reliably and used to adaptively tune controller parameters so that closed-loop stability is preserved. We report experimental results, conducted on two STM scanners, that establish the effectiveness of the proposed PI tuning method in avoiding tip-sample crashes in STMs.
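As a rough sketch of the kind of adaptive loop the abstract describes (the constants, the gain-scaling rule, and the data are illustrative assumptions, not the speaker's actual algorithm), one can estimate the barrier height from the slope of log-current versus tip height and rescale the PI gain so the loop gain stays roughly constant:

```python
import numpy as np

def estimate_lbh(z, ln_current):
    """Estimate the local barrier height phi (eV) from a small tip-height
    modulation, using the standard STM relation ln I ~ -2*kappa*z with
    kappa [1/angstrom] ~ 0.51*sqrt(phi [eV])."""
    slope = np.polyfit(z, ln_current, 1)[0]   # d(ln I)/dz
    kappa = -slope / 2.0
    return (kappa / 0.51) ** 2

def retune_pi(phi, kp_nominal=1.0, phi_nominal=4.0):
    """Rescale the proportional gain so the loop gain stays constant:
    the junction's log-linearized sensitivity grows like sqrt(phi), so
    the controller gain is shrunk by the same factor (illustrative rule)."""
    return kp_nominal * np.sqrt(phi_nominal / max(phi, 1e-6))

# Synthetic modulation data for a ~4 eV barrier, with measurement noise.
rng = np.random.default_rng(0)
z = np.linspace(0.0, 0.5, 50)                       # angstroms
ln_i = -2.0 * 0.51 * np.sqrt(4.0) * z + 0.01 * rng.standard_normal(50)
phi = estimate_lbh(z, ln_i)
kp = retune_pi(phi)
```

A drop in the apparent barrier height (e.g., from tip contamination) raises the tunnelling sensitivity, so this rule lowers the controller gain to preserve the stability margin.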
About the Speaker: Reza Moheimani currently holds the James Von Ehr Distinguished Chair in Science and Technology in the Department of Systems Engineering at the University of Texas at Dallas. His current research interests include ultrahigh-precision mechatronic systems, with particular emphasis on dynamics and control at the nanometer scale, including applications of control and estimation in nanopositioning systems for high-speed scanning probe microscopy and nanomanufacturing, modeling and control of microcantilever-based devices, control of microactuators in microelectromechanical systems, and design, modeling and control of micromachined nanopositioners for on-chip scanning probe microscopy.
Dr. Moheimani is a Fellow of IEEE, IFAC and the Institute of Physics, U.K. His research has been recognized with a number of awards, including the IFAC Nathaniel B. Nichols Medal (2014), the IFAC Mechatronic Systems Award (2013), the IEEE Control Systems Technology Award (2009), the IEEE Transactions on Control Systems Technology Outstanding Paper Award (2007 & 2018) and several best paper awards at various conferences. He is Editor-in-Chief of Mechatronics and has served on the editorial boards of a number of other journals, including the IEEE/ASME Transactions on Mechatronics, IEEE Transactions on Control Systems Technology, and Control Engineering Practice.
Speaker: Bert Hochwald, University of Notre Dame
Date: Apr 1
Abstract: It is not an exaggeration to say that a typical person carries at least a dozen radios among his or her watches, health monitors, key fobs, door openers, laptops, cell phones, and RFID devices. This growth in the number of radios per person is likely to accelerate with the opening of new and higher-frequency bands, since it is well established that multiple radios operating in the same band offer impressive performance advantages in throughput and sensing. Having more of something is always better!
However, since consumer devices are generally highly sensitive to cost, power, and size, the fundamental question of whether more is better needs to be examined within these constraints. If we consider replacing a single radio with two radios operating in the same frequency band, but demand that the total cost, power, and size of the replacements be no more than those of the one being replaced, we need to compromise on some aspect of their design. Yet we do not want to compromise on the potential performance advantages that these two radios may provide. Taking this thought exercise to the extreme, we ask whether it is feasible to reap huge throughput or sensing performance advantages by replacing a single “high-quality” radio with thousands of “low-quality” radios without changing total cost, power, or size. I will show that it can be, by measuring how quantity can be substituted for quality in communication and sensing systems.
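The quantity-versus-quality tradeoff can be caricatured in a few lines: if each cheap "radio" reports only the sign of its noisy observation (a 1-bit receiver), averaging many such reports still pins down the underlying signal. This toy model is my illustration, not the speaker's analysis:

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(0)

def one_bit_estimate(x, n_radios, noise_std=1.0):
    """Each 'radio' reports only sign(x + noise): a caricature of a cheap
    1-bit receiver. Since P(sign = +1) = Phi(x / sigma), averaging the
    reports and inverting the Gaussian CDF recovers x as n grows."""
    signs = np.sign(x + noise_std * rng.standard_normal(n_radios))
    p_hat = float(np.clip((signs.mean() + 1.0) / 2.0, 1e-4, 1.0 - 1e-4))
    return noise_std * NormalDist().inv_cdf(p_hat)

x_true = 0.3
err_10 = abs(one_bit_estimate(x_true, 10) - x_true)
err_100k = abs(one_bit_estimate(x_true, 100_000) - x_true)
```

With 100,000 one-bit reports the estimate is accurate to a few thousandths, even though no individual radio conveys more than a single bit.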
About the Speaker: Bertrand Hochwald was born in New York, NY, USA. He received the bachelor’s degree from Swarthmore College, Swarthmore, PA, USA, the M.S. degree in electrical engineering from Duke University, Durham, NC, USA, and the M.A. degree in statistics and the Ph.D. degree in electrical engineering from Yale University, New Haven, CT, USA.
From 1986 to 1989, he was with the Department of Defense, Fort Meade, MD, USA. He was a Research Associate and a Visiting Assistant Professor at the Coordinated Science Laboratory, University of Illinois at Urbana-Champaign, Urbana, IL, USA. In 1996, he joined the Mathematics of Communications Research Department, Bell Laboratories, Lucent Technologies, Murray Hill, NJ, USA, where he was a Distinguished Member of the Technical Staff. In 2005, he joined Beceem Communications, Santa Clara, CA, USA, as the Chief Scientist and Vice-President of Systems Engineering. He served as a Consulting Professor of Electrical Engineering at Stanford University, Palo Alto, CA, USA. In 2011, he joined the University of Notre Dame, Notre Dame, IN, USA, as a Freimann Professor of Electrical Engineering.
Dr. Hochwald received several achievement awards while employed at the Department of Defense and the Prize Teaching Fellowship at Yale University. He has served as an Editor of several IEEE journals and has given plenary and invited talks on various aspects of signal processing and communications. He has forty-six patents and has co-invented several well-known multiple-antenna techniques, including a differential method, linear dispersion codes, and multi-user vector perturbation methods. He received the 2006 Stephen O. Rice Prize for the best paper published in the IEEE Transactions on Communications. He co-authored a paper that won the 2016 Best Paper Award by a young author in the IEEE Transactions on Circuits and Systems. He also won the 2018 H. A. Wheeler Prize Paper Award from the IEEE Transactions on Antennas and Propagation. His PhD students have won various honors for their PhD research, including the 2018 Paul Baran Young Scholar Award from the Marconi Society. He is listed as a Thomson Reuters Most Influential Scientific Mind in multiple years.
Speaker: Nicola Cesa-Bianchi, University of Milan, Italy
Date: Apr 2
Abstract: A solid theoretical understanding of the algorithms that power machine learning systems is of increasing importance given the pervasiveness of AI technologies. In online learning, a setting in which agents make repeated decisions on a stream of data, the predictive performance of an algorithm can be certified through surprisingly robust mathematical guarantees. The talk will focus on learning with partial feedback, a framework that is successfully applied to many domains including product recommendation and online advertising. With the help of concrete examples, we will explore the extent to which different forms of partial feedback, obtained through observation or communication with other agents, can affect the learning ability of online algorithms.
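One concrete instance of learning under bandit (partial) feedback is the EXP3 algorithm, which Cesa-Bianchi co-developed. Here is a minimal sketch on a toy three-armed problem; the problem instance and parameter values are invented for illustration:

```python
import numpy as np

def exp3(reward_fn, n_arms, horizon, gamma=0.1, seed=0):
    """EXP3: exponentially weighted forecaster with importance-weighted
    reward estimates. Only the pulled arm's reward is ever observed,
    which is exactly the partial-feedback setting of the talk."""
    rng = np.random.default_rng(seed)
    w = np.ones(n_arms)
    total = 0.0
    for _ in range(horizon):
        p = (1.0 - gamma) * w / w.sum() + gamma / n_arms  # mix in exploration
        arm = int(rng.choice(n_arms, p=p))
        r = reward_fn(arm)                   # feedback for the chosen arm only
        total += r
        w[arm] *= np.exp(gamma * r / (n_arms * p[arm]))   # unbiased estimate
        w /= w.max()                         # rescaling leaves p unchanged
    return total

# Toy instance: arm 2 pays off with probability 0.9, the others 0.4.
rng = np.random.default_rng(1)
means = [0.4, 0.4, 0.9]
total = exp3(lambda a: float(rng.random() < means[a]), n_arms=3, horizon=5000)
```

Despite never seeing the rewards of unplayed arms, the learner concentrates its play on the best arm and earns far more than uniform random play would.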
About the Speaker: Nicolò Cesa-Bianchi is a professor of Computer Science at the University of Milan, Italy. His main research interests are the design and analysis of machine learning algorithms for statistical and online learning, multi-armed bandit problems, and graph analytics. On these topics, he has published over 140 papers. He is co-author of the monographs "Prediction, Learning, and Games" and "Regret Analysis of Stochastic and Nonstochastic Multi-armed Bandit Problems". He served as President of the Association for Computational Learning and co-chaired the program committee of some of the most important machine learning conferences, including NeurIPS and COLT. He is the recipient of a Google Research Award, a Xerox Foundation Award, a Criteo Faculty Award, and a Google Focused Award.
Speaker: Magnus Egerstedt, Georgia Institute of Technology
Date: Apr 23
Abstract: When robots are to be deployed over long time scales, optimality should take a backseat to “survivability”, i.e., it is more important that the robots do not break or completely deplete their energy sources than that they perform certain tasks as effectively as possible. For example, in the context of multi-agent robotics, we have a fairly good understanding of how to design coordinated control strategies for making teams of mobile robots achieve geometric objectives, such as assembling shapes or covering areas. But what happens when these geometric objectives no longer matter all that much? In this talk, we consider this question of long duration autonomy for teams of robots that are deployed in an environment over a sustained period of time and that can be recruited to perform a number of different tasks in a distributed, safe, and provably correct manner. This development will involve the composition of multiple barrier certificates for encoding tasks and safety constraints, as well as a detour into ecology as a way of understanding how persistent environmental monitoring can be achieved by studying animals with low-energy lifestyles, such as the three-toed sloth.
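To make the barrier-certificate idea concrete, here is a minimal control-barrier-function safety filter (a generic textbook construction under my own toy assumptions, not the multi-certificate compositions developed in the talk): for a single-integrator robot confined to a disk, the safety QP has a closed-form solution.

```python
import numpy as np

def cbf_filter(x, u_nom, r=1.0, alpha=1.0):
    """Minimally modify a nominal command so a single integrator
    (x_dot = u) never leaves the disk h(x) = r^2 - |x|^2 >= 0.
    The barrier condition grad_h(x) @ u + alpha * h(x) >= 0 is a single
    affine inequality, so the min-norm safety QP reduces to a projection."""
    h = r ** 2 - x @ x
    a = -2.0 * x                                # gradient of h
    slack = a @ u_nom + alpha * h
    if slack >= 0.0 or a @ a < 1e-12:
        return u_nom                            # nominal command is safe
    return u_nom - slack * a / (a @ a)          # closest safe command

# Drive toward a waypoint OUTSIDE the safe disk; the filter brakes the
# robot at the boundary instead of letting it cross.
x = np.array([0.0, 0.0])
goal = np.array([2.0, 0.0])
dt = 0.01
for _ in range(2000):
    x = x + dt * cbf_filter(x, goal - x)
```

The robot settles on the disk boundary nearest the waypoint: the task objective is pursued only to the extent the safety constraint allows, the same ordering of priorities the abstract calls "survivability first."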
About the Speaker: Magnus Egerstedt is a Professor and School Chair in the School of Electrical and Computer Engineering at the Georgia Institute of Technology, where he also holds secondary faculty appointments in Mechanical Engineering, Aerospace Engineering, and Interactive Computing. Prior to becoming School Chair, he served as the director for Georgia Tech’s multidisciplinary Institute for Robotics and Intelligent Machines. A native of Sweden, Dr. Egerstedt was born, raised, and educated in Stockholm. He received a B.A. degree in Philosophy from Stockholm University, and M.S. and Ph.D. degrees in Engineering Physics and Applied Mathematics, respectively, from the Royal Institute of Technology. He subsequently was a Postdoctoral Scholar at Harvard University. Dr. Egerstedt conducts research in the areas of control theory and robotics, with particular focus on control and coordination of complex networks, such as multi-robot systems, mobile sensor networks, and cyber-physical systems. He is a Fellow of both the IEEE and IFAC, and is a foreign member of the Royal Swedish Academy of Engineering Sciences. He has received a number of teaching and research awards for his work, including the John R. Ragazzini Award from the American Automatic Control Council, the O. Hugo Schuck Best Paper Award from the American Control Conference, and the Best Multi-Robot Paper Award from the IEEE International Conference on Robotics and Automation.
Speaker: Robert Schapire, Microsoft Research (NYC)
Date: May 7
Abstract: We consider how to learn through experience to make intelligent decisions. In the generic setting, called the contextual bandits problem, the learner must repeatedly decide which action to take in response to an observed context, and is then permitted to observe the received reward, but only for the chosen action. The goal is to learn to behave nearly as well as the best policy (or decision rule) in some possibly very large and rich space of candidate policies. This talk will describe progress on developing general methods for this problem and some of its variants.
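A toy version of the contextual bandit loop described above, using epsilon-greedy with tabular value estimates; the problem instance and the method choice are my illustrative simplifications, much simpler than the general policy-class methods the talk covers:

```python
import numpy as np

def contextual_eps_greedy(horizon=5000, eps=0.1, seed=0):
    """Epsilon-greedy with per-(context, action) running-mean estimates.
    The learner sees a context, picks an action, and observes the reward
    of the chosen action only (bandit feedback)."""
    rng = np.random.default_rng(seed)
    # True Bernoulli reward means (invented): the best action flips with context.
    means = np.array([[0.8, 0.2],
                      [0.2, 0.8]])
    q = np.zeros((2, 2))        # value estimates
    n = np.zeros((2, 2))        # pull counts
    total = 0.0
    for _ in range(horizon):
        ctx = int(rng.integers(2))
        a = int(rng.integers(2)) if rng.random() < eps else int(np.argmax(q[ctx]))
        r = float(rng.random() < means[ctx, a])   # chosen action's reward only
        n[ctx, a] += 1.0
        q[ctx, a] += (r - q[ctx, a]) / n[ctx, a]  # incremental mean update
        total += r
    return total / horizon

avg_reward = contextual_eps_greedy()
```

A context-blind learner can do no better than 0.5 per round here, while conditioning on the context pushes the average reward toward the 0.8 achievable by the best policy, illustrating why competing with a policy class matters.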
About the Speaker: Robert Schapire is a Partner Researcher at Microsoft Research in New York City. He received his PhD from MIT in 1991. After a short post-doc at Harvard, he joined the technical staff at AT&T Labs (formerly AT&T Bell Laboratories) in 1991. In 2002, he became a Professor of Computer Science at Princeton University. He joined Microsoft Research in 2014. His awards include the 1991 ACM Doctoral Dissertation Award, the 2003 Gödel Prize, and the 2004 Kanellakis Theory and Practice Award (both of the last two with Yoav Freund). He is a fellow of the AAAI, and a member of both the National Academy of Engineering and the National Academy of Sciences. His main research interest is in theoretical and applied machine learning, with particular focus on boosting, online learning, game theory, and maximum entropy.