Jeff Zhang

Jeff (Jun) Zhang is a Ph.D. candidate (2015-) in the Department of Electrical and Computer Engineering at New York University, where he is a recipient of the Ernst Weber Fellowship.

He received his B.Eng. and M.Eng. in Computer Engineering, magna cum laude, from Hunan University, Changsha, China. His research interests are in computer architecture and EDA, with particular emphasis on energy-efficient and fault-tolerant design for deep learning systems and accelerators. He also has research internship experience at Samsung Semiconductor and Microsoft Research.


Research Interests: Deep Learning, Computer Architecture and Systems, Electronic Design Automation (EDA)

I'm on the job market looking for academic/research positions!

New York University,    2020 (expected)
Ph.D., Electrical and Computer Engineering

Hunan University,         2013
M.Eng., Computer Engineering

Hunan University,         2011
B.Eng., Information Security

Microsoft Research, Redmond WA   
          Research Intern                                August 2018 to November 2018

  • Architecture Optimizations for Deep Convolutional Neural Networks.

Samsung Semiconductor Inc., San Jose CA 
          Research Intern                                        May 2018 to August 2018

  • Reinforcement learning based SSD storage performance optimization.

A nearly complete list of my publications can be found via my Google Scholar page.

[HotCloud'20] Model-Switching: Dealing with Fluctuating Workloads in Machine-Learning-as-a-Service Systems. (Acceptance rate: 22/95=23%)
J. Zhang et al.

[ESWeek'19] CompAct: On-chip Compression of Activations for Low Power Systolic Array Based CNN Acceleration. (Acceptance rate: 16/75=21%)
J. Zhang et al.                                                                                              

[DAC'19] Building Robust Machine Learning Systems: Current Progress, Research Challenges, and Opportunities. (Special Session)
J. Zhang et al.                                                                                 

[ICCAD'18] FATE: Fast and Accurate Timing Error Prediction Framework for Low Power DNN Accelerator Design. (Acceptance rate: 98/396=24.7%)
J. Zhang, S. Garg                                                                           

[DAC'18] ThUnderVolt: Enabling Aggressive Voltage Underscaling and Timing Error Resilience for Energy-Efficient DNN Accelerators. (Acceptance rate: 168/691=24.3%)
J. Zhang, Z. Ghodsi, S. Garg.                                                        

[VTS'18] Analyzing and Mitigating the Impact of Permanent Faults on a Systolic Array Based Neural Network Accelerator.  (Best Paper Nominee)
J. Zhang, T. Gu, K. Basu, S. Garg.                                                 

[DATE'17] BandiTS: Dynamic Timing Speculation Using Multi-armed Bandit Based Optimization.
J. Zhang, S. Garg.                                                                                         

[DAC'16] Synergistic Timing Speculation for Multi-threaded Programs. (Acceptance rate: 152/876=17%)
A. Yasin, J. Zhang, H. Chen, S. Chakraborty, S. Garg, K. Chakraborty.                

  • Best Presentation Award Nomination, ACM SIGDA DATE PhD Forum, 2020
  • Best Paper Award Nomination, IEEE VLSI Test Symposium, 2018
  • Ernst Weber Ph.D. Fellowship, New York University, 2015, 2016
  • National Scholarship, Ministry of Education of China, 2008, 2010, 2012

Current Projects, Research Labs, and Groups