About Me
My name is Bingyin Zhao. I am a Ph.D. student in the Department of Electrical and Computer Engineering at Clemson University, under the supervision of Dr. Yingjie Lao. I worked as a Deep Learning Software and Research Intern at NVIDIA under the supervision of Dr. Jose M. Alvarez and Dr. Zhiding Yu. I received my M.S. and B.S. degrees in Electrical Engineering from Rochester Institute of Technology, Rochester, NY, USA, and East China University of Science and Technology, Shanghai, China, in 2014 and 2012, respectively.
My research interests include Trustworthy AI, particularly data poisoning attacks and countermeasures, and computer vision for autonomous vehicles. I have authored and co-authored several research papers published in prestigious conferences and journals, including ICCV’23, AAAI’22, IEEE TCAD, DAC’23, and WACV’22. I also serve as a reviewer for multiple computer vision, artificial intelligence, and machine learning conferences, such as NeurIPS, ICLR, CVPR, ICCV, ECCV, and AAAI.
🧑🎓Education Background
- Ph.D., Jan. 2018 - Present
  - Clemson University
  - Major: Computer Engineering
  - Advisor: Dr. Yingjie Lao
- Master of Science, Sept. 2014
  - Rochester Institute of Technology
  - Major: Electrical Engineering
- Bachelor of Science, July 2012
  - East China University of Science and Technology
  - Major: Electrical Engineering
📮News and Updates
- [02/2024] I passed my Ph.D. defense.
- [07/2023] Our paper Fully Attentional Networks with Self-emerging Token Labeling was accepted to the International Conference on Computer Vision (ICCV) 2023. The code and models can be found at NVlabs.
- [03/2023] Our paper Data-Driven Feature Selection Framework for Approximate Circuit Design was accepted by IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (TCAD).
- [02/2023] Our paper NNTesting: Neural Network Fault Attacks Detection Using Gradient-Based Test Vector Generation was accepted to the 60th Design Automation Conference (DAC).
- [05/2022] I will join NVIDIA as a Deep Learning Software and Research Intern.
- [12/2021] Our paper CLPA: Clean-Label Poisoning Availability Attacks Using Generative Adversarial Nets was accepted to the Thirty-Sixth AAAI Conference on Artificial Intelligence (AAAI-22). [Code]
- [10/2021] Our paper Towards Class-oriented Poisoning Attacks against Neural Networks was accepted to the Winter Conference on Applications of Computer Vision (WACV) 2022.