Trustworthy AI for Human Machine Interface
- S. Farokh Atashzar, Assistant Professor, Medical Robotics and Intelligent Interactive Technologies Lab (MERIIT @ NYU)
- Jackie Libby, Smart Cities Postdoctoral Associate, NYU CUSP
Project Abstract
Despite rapid advances in AI, little of it assists disabled populations who cannot perform basic manipulation tasks. This project will develop trustworthy machine learning models to address existing problems in human-machine interfaces.
Project Description & Overview
This research project will try to answer two questions: 1) can AI models be trained fast enough to remove the need for high-performance computers; and 2) can calibration for new users be minimized so that neuro-robots become easy to use and ubiquitous in less-developed regions?
For this, we will develop a new biosignal processing pipeline using artificial intelligence, specifically a shallow hybrid neural network that includes an engine for modeling long-term and short-term dynamical dependencies in the signal space. The model will be benchmarked against existing state-of-the-art algorithms that we have developed in recent years, using large datasets available to us. Depending on a student's academic and research background, there may also be an opportunity to participate in data collection.
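To make the short-term/long-term modeling idea concrete, here is a minimal NumPy sketch (not the actual MERIIT architecture; all layer sizes, weights, and function names are hypothetical): a small 1-D convolution over time captures short-term dynamics, a simple recurrent accumulator carries long-term state, and their features are concatenated for gesture classification.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions, for illustration only.
n_channels = 8    # EMG channels
n_samples = 200   # time steps in one analysis window
n_gestures = 5    # output classes

def short_term_features(x, kernel, stride=4):
    """1-D convolution over time: captures local (short-term) dynamics.
    x: (channels, time); kernel: (channels, k)."""
    k = kernel.shape[1]
    feats = []
    for t in range(0, x.shape[1] - k + 1, stride):
        feats.append(np.sum(x[:, t:t + k] * kernel))
    return np.array(feats)

def long_term_state(x, w_in, w_rec):
    """Simple recurrent accumulator: carries long-term dependencies.
    x: (channels, time); returns the final hidden state."""
    h = np.zeros(w_rec.shape[0])
    for t in range(x.shape[1]):
        h = np.tanh(w_in @ x[:, t] + w_rec @ h)
    return h

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Random weights stand in for trained parameters.
kernel = rng.standard_normal((n_channels, 16))
w_in = rng.standard_normal((32, n_channels)) * 0.1
w_rec = rng.standard_normal((32, 32)) * 0.1

x = rng.standard_normal((n_channels, n_samples))  # one EMG window
feats = np.concatenate([short_term_features(x, kernel),
                        long_term_state(x, w_in, w_rec)])
w_out = rng.standard_normal((n_gestures, feats.shape[0])) * 0.1
probs = softmax(w_out @ feats)  # class probabilities over gestures
```

The "shallow" aspect is reflected here by using a single convolutional stage and a single recurrent state rather than a deep stack; the hybrid aspect is the concatenation of the two feature streams before the output layer.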
Students are more than welcome to contact f.atashzar@nyu.edu with questions. See one of our recent efforts with applications in neurorobotics for more context.
Datasets
We will use available large datasets of high-density electromyography (EMG) and will try to predict the intended gestures. The datasets include a high volume of signals collected from the upper limbs of ~50 human subjects.
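A typical first preprocessing step for such recordings is slicing the continuous multichannel signal into overlapping labeled windows. The sketch below is a hypothetical NumPy example (the window length, step, and majority-label rule are illustrative choices, not the project's actual pipeline):

```python
import numpy as np

def segment_windows(emg, labels, win=256, step=64):
    """Slice a continuous HD-EMG recording into overlapping windows.
    emg: (time, channels); labels: (time,) gesture ID per sample.
    Each window takes the majority label of its samples."""
    X, y = [], []
    for start in range(0, emg.shape[0] - win + 1, step):
        X.append(emg[start:start + win])
        y.append(np.bincount(labels[start:start + win]).argmax())
    return np.stack(X), np.array(y)

# Toy stand-in for one subject's recording (random values, not real EMG).
rng = np.random.default_rng(1)
emg = rng.standard_normal((1000, 64))    # 64-channel HD-EMG
labels = np.repeat([0, 1, 2, 3], 250)    # four consecutive gesture segments
X, y = segment_windows(emg, labels)      # X: (windows, win, channels)
```

Overlapping windows increase the number of training examples per subject, which matters for the project's goal of reducing per-user calibration.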
Competencies
Academic/research background in machine learning and/or signal/data processing is encouraged.
Learning Outcomes & Deliverables
Signal Processing, Deep Learning, Human-Machine Interface
Students
Jeff Guo, Yusen Li, Soobin Lim, Baohan Liu