I am a senior researcher at Huawei Noah's Ark Lab, Beijing, where I work on deep learning, model compression, and computer vision. Before that, I received my PhD from the School of EECS, Peking University, where I was co-advised by Prof. Chao Xu and Prof. Dacheng Tao. I received my bachelor's degree from the School of Science, Xidian University.
Variational inequalities; projection and contraction methods for VI
ADMM-like splitting contraction methods for convex optimization
Recent Projects
Model compression is a family of techniques for developing portable deep neural networks with lower memory and computation costs. I have completed several projects at Huawei, including smartphone applications shipped in 2019 and 2020 (e.g., the Mate 30 and Honor V30). Currently, I am leading the AdderNet project, which aims to develop a series of deep learning models that use only additions (Discussions on Reddit).
I would like to say that AdderNet is very cool! The initial idea came up around 2017 while climbing with some friends in Beijing. By replacing all convolutional layers (except the first and the last) with adder layers, we can now obtain comparable performance on ResNet architectures. To make the story more complete, we recently released the hardware implementation and some quantization methods. The results are quite encouraging: we can significantly reduce both the energy consumption and the circuit area without affecting performance. We are now working on more applications to reduce the cost of deploying AI algorithms, such as low-level vision, detection, and NLP tasks.
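To make the idea concrete, here is a minimal PyTorch sketch of an adder layer that replaces the cross-correlation in a convolution with a negative L1 distance, so filter responses are computed with additions and subtractions rather than multiplications. This is an illustrative toy implementation, not the released AdderNet code; the class name and hyper-parameters are my own.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdderConv2d(nn.Module):
    """Toy adder layer: measures the similarity between input patches and
    filters with a negative L1 distance instead of a dot product
    (a simplified sketch, not the official AdderNet implementation)."""

    def __init__(self, in_channels, out_channels, kernel_size, stride=1, padding=0):
        super().__init__()
        self.stride, self.padding, self.kernel_size = stride, padding, kernel_size
        self.weight = nn.Parameter(
            torch.randn(out_channels, in_channels, kernel_size, kernel_size))

    def forward(self, x):
        n, c, h, w = x.shape
        # Extract sliding patches: (N, C*k*k, L) with L = H_out * W_out.
        patches = F.unfold(x, self.kernel_size, stride=self.stride, padding=self.padding)
        w_flat = self.weight.view(self.weight.size(0), -1)  # (O, C*k*k)
        # Negative L1 distance between every patch and every filter: (N, O, L).
        out = -(patches.unsqueeze(1) - w_flat.unsqueeze(0).unsqueeze(-1)).abs().sum(dim=2)
        h_out = (h + 2 * self.padding - self.kernel_size) // self.stride + 1
        w_out = (w + 2 * self.padding - self.kernel_size) // self.stride + 1
        return out.view(n, -1, h_out, w_out)
```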
GhostNet on MindSpore: SOTA Lightweight CV Networks
The initial version of GhostNet was accepted by CVPR 2020 and achieved SOTA performance on ImageNet: 75.7% top-1 accuracy with only 226M FLOPs. In the current version, we release a series of computer vision models (e.g., int8 quantization, detection, and larger networks) on MindSpore 1.0 and the Mate 30 Pro (Kirin 990).
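For readers unfamiliar with the Ghost module behind GhostNet, the sketch below shows the core idea: a small primary convolution produces "intrinsic" feature maps, and cheap depthwise operations generate the remaining "ghost" maps. It is a simplified PyTorch illustration rather than the released MindSpore code, and the layer names and default hyper-parameters are illustrative.

```python
import math
import torch
import torch.nn as nn

class GhostModule(nn.Module):
    """Simplified Ghost module: a narrow primary convolution plus cheap
    depthwise operations that multiply the number of feature maps."""

    def __init__(self, in_channels, out_channels, ratio=2, kernel_size=1, dw_size=3):
        super().__init__()
        self.out_channels = out_channels
        init_channels = math.ceil(out_channels / ratio)   # intrinsic maps
        ghost_channels = init_channels * (ratio - 1)      # cheap ghost maps
        self.primary = nn.Sequential(
            nn.Conv2d(in_channels, init_channels, kernel_size,
                      padding=kernel_size // 2, bias=False),
            nn.BatchNorm2d(init_channels),
            nn.ReLU(inplace=True))
        # Depthwise convolution: each intrinsic map spawns (ratio - 1) ghosts.
        self.cheap = nn.Sequential(
            nn.Conv2d(init_channels, ghost_channels, dw_size,
                      padding=dw_size // 2, groups=init_channels, bias=False),
            nn.BatchNorm2d(ghost_channels),
            nn.ReLU(inplace=True))

    def forward(self, x):
        intrinsic = self.primary(x)
        ghost = self.cheap(intrinsic)
        out = torch.cat([intrinsic, ghost], dim=1)
        return out[:, :self.out_channels]                 # trim to requested width
```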
This project aims to develop a video style transfer system on the Huawei Atlas 200 DK AI Developer Kit. The latency of the original model for processing one image is about 630 ms; after accelerating it with our method, the latency is about 40 ms.
I'm interested in developing efficient models for computer vision (e.g., classification, detection, and super-resolution) using pruning, quantization, distillation, NAS, etc.
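As one illustration of the distillation techniques mentioned above, here is a minimal sketch of the classic knowledge-distillation objective (Hinton et al., 2015). It is a generic example rather than the method of any particular paper, and the temperature and weighting values are arbitrary.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Generic knowledge-distillation loss; T and alpha are example values."""
    # Soft-target term: match the teacher's softened output distribution.
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * (T * T)
    # Hard-target term: ordinary cross-entropy with the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```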
Conference Papers:
Transformer in Transformer
Kai Han, An Xiao, Enhua Wu, Jianyuan Guo, Chunjing Xu, Yunhe Wang
NeurIPS 2021 | paper | code | MindSpore code
Learning Frequency Domain Approximation for Binary Neural Networks
Yixing Xu, Kai Han, Chang Xu, Yehui Tang, Chunjing Xu, Yunhe Wang
NeurIPS 2021 | paper | Oral Presentation
Dynamic Resolution Network
Mingjian Zhu*, Kai Han*, Enhua Wu, Qiulin Zhang, Ying Nie, Zhenzhong Lan, Yunhe Wang
NeurIPS 2021 (* equal contribution) | paper
Post-Training Quantization for Vision Transformer
Zhenhua Liu, Yunhe Wang, Kai Han, Wei Zhang, Siwei Ma, Wen Gao
NeurIPS 2021 | paper
Augmented Shortcuts for Vision Transformers
Yehui Tang, Kai Han, Chang Xu, An Xiao, Yiping Deng, Chao Xu, Yunhe Wang
NeurIPS 2021 | paper
Adder Attention for Vision Transformer
Han Shu*, Jiahao Wang*, Hanting Chen, Lin Li, Yujiu Yang, Yunhe Wang
NeurIPS 2021 (* equal contribution) | paper
Towards Stable and Robust AdderNets
Minjing Dong, Yunhe Wang, Xinghao Chen, Chang Xu
NeurIPS 2021 | paper
Handling Long-Tailed Feature Distribution in AdderNets
Minjing Dong, Yunhe Wang, Xinghao Chen, Chang Xu
NeurIPS 2021 | paper
Neural Architecture Dilation for Adversarial Robustness
Yanxi Li, Zhaohui Yang, Yunhe Wang, Chang Xu
NeurIPS 2021 | paper
An Empirical Study of Adder Neural Networks for Object Detection
Xinghao Chen, Chang Xu, Minjing Dong, Chunjing Xu, Yunhe Wang
NeurIPS 2021 | paper
Learning Frequency-Aware Dynamic Network for Efficient Super-Resolution
Wenbin Xie, Dehua Song, Chang Xu, Chunjing Xu, Hui Zhang, Yunhe Wang
ICCV 2021 | paper
Winograd Algorithm for AdderNet
Wenshuo Li, Hanting Chen, Mingqiang Huang, Xinghao Chen, Chunjing Xu, Yunhe Wang
ICML 2021 | paper
Distilling Object Detectors via Decoupled Features
Jianyuan Guo, Kai Han, Yunhe Wang, Wei Zhang, Chunjing Xu, Chang Xu
CVPR 2021 | paper
HourNAS: Extremely Fast Neural Architecture Search Through an Hourglass Lens
Zhaohui Yang, Yunhe Wang, Xinghao Chen, Jianyuan Guo, Wei Zhang,
Chao Xu, Chunjing Xu, Dacheng Tao, Chang Xu
CVPR 2021 | paper | MindSpore code
Data-Free Knowledge Distillation For Image Super-Resolution
Yiman Zhang, Hanting Chen, Xinghao Chen, Yiping Deng, Chunjing Xu, Yunhe Wang
CVPR 2021 | paper
Positive-Unlabeled Data Purification in the Wild for Object Detection
Jianyuan Guo, Kai Han, Han Wu, Xinghao Chen, Chao Zhang, Chunjing Xu, Chang Xu, Yunhe Wang
CVPR 2021 | paper
One-shot Graph Neural Architecture Search with Dynamic Search Space
Yanxi Li, Zean Wen, Yunhe Wang, Chang Xu
AAAI 2021 | paper
Adversarial Robustness through Disentangled Representations
Shuo Yang, Tianyu Guo, Yunhe Wang, Chang Xu
AAAI 2021 | paper
Kernel Based Progressive Distillation for Adder Neural Networks
Yixing Xu, Chang Xu, Xinghao Chen, Wei Zhang, Chunjing Xu, Yunhe Wang
NeurIPS 2020 | paper | Spotlight | code
Model Rubik's Cube: Twisting Resolution, Depth and Width for TinyNets
Kai Han*, Yunhe Wang*, Qiulin Zhang, Wei Zhang, Chunjing Xu, Tong Zhang
NeurIPS 2020 (* equal contribution) | paper | code
Residual Distillation: Towards Portable Deep Neural Networks without Shortcuts
Guilin Li*, Junlei Zhang*, Yunhe Wang, Chuanjian Liu, Matthias Tan, Yunfeng Lin,
Wei Zhang, Jiashi Feng, Tong Zhang
NeurIPS 2020 (* equal contribution) | paper | code
Searching for Low-Bit Weights in Quantized Neural Networks
Zhaohui Yang, Yunhe Wang, Kai Han, Chunjing Xu, Chao Xu, Dacheng Tao, Chang Xu
NeurIPS 2020 | paper | code
SCOP: Scientific Control for Reliable Neural Network Pruning
Yehui Tang, Yunhe Wang, Yixing Xu, Dacheng Tao, Chunjing Xu, Chao Xu, Chang Xu
NeurIPS 2020 | paper | code
Adapting Neural Architectures Between Domains
Yanxi Li, Zhaohui Yang, Yunhe Wang, Chang Xu
NeurIPS 2020 | paper | code
Discernible Image Compression
Zhaohui Yang, Yunhe Wang, Chang Xu, Peng Du, Chao Xu, Chunjing Xu, Qi Tian
ACM MM 2020 | paper
Optical Flow Distillation: Towards Efficient and Stable Video Style Transfer
Xinghao Chen*, Yiman Zhang*, Yunhe Wang, Han Shu, Chunjing Xu, Chang Xu
ECCV 2020 (* equal contribution) | paper | code
Learning Binary Neurons with Noisy Supervision
Kai Han, Yunhe Wang, Yixing Xu, Chunjing Xu, Enhua Wu, Chang Xu
ICML 2020 | paper
Neural Architecture Search in a Proxy Validation Loss Landscape
Yanxi Li, Minjing Dong, Yunhe Wang, Chang Xu
ICML 2020 | paper
On Positive-Unlabeled Classification in GAN
Tianyu Guo, Chang Xu, Jiajun Huang, Yunhe Wang, Boxin Shi, Chao Xu, Dacheng Tao
CVPR 2020 | paper
Adversarial Learning of Portable Student Networks
Yunhe Wang, Chang Xu, Chao Xu, Dacheng Tao
AAAI 2018 | paper
Beyond Filters: Compact Feature Map for Portable Deep Model
Yunhe Wang, Chang Xu, Chao Xu, Dacheng Tao
ICML 2017 | paper | code | supplement
Beyond RPCA: Flattening Complex Noise in the Frequency Domain
Yunhe Wang, Chang Xu, Chao Xu, Dacheng Tao
AAAI 2017 | paper
Privileged Multi-Label Learning
Shan You, Chang Xu, Yunhe Wang, Chao Xu, Dacheng Tao
IJCAI 2017 | paper
CNNpack: Packing Convolutional Neural Networks in the Frequency Domain
Yunhe Wang, Chang Xu, Shan You, Chao Xu, Dacheng Tao
NeurIPS 2016 | paper | supplement
Journal Papers:
A Survey on Visual Transformer
Kai Han, Yunhe Wang, Hanting Chen, Xinghao Chen, Jianyuan Guo, Zhenhua Liu, Yehui Tang, An Xiao, Chunjing Xu, Yixing Xu, Zhaohui Yang, Yiman Zhang, Dacheng Tao
IEEE TPAMI 2022 | paper
Learning Versatile Convolution Filters for Efficient Visual Recognition
Kai Han*, Yunhe Wang*, Chang Xu, Chunjing Xu, Enhua Wu, Dacheng Tao
IEEE TPAMI 2021 (* equal contribution) | paper | code
Adversarial Recurrent Time Series Imputation
Shuo Yang, Minjing Dong, Yunhe Wang, Chang Xu
IEEE TNNLS 2020 | paper
Learning Student Networks via Feature Embedding
Hanting Chen, Yunhe Wang, Chang Xu, Chao Xu, Dacheng Tao
IEEE TNNLS 2020 | paper
Packing Convolutional Neural Networks in the Frequency Domain
Yunhe Wang, Chang Xu, Chao Xu, Dacheng Tao
IEEE TPAMI 2018 | paper
DCT Regularized Extreme Visual Recovery
Yunhe Wang, Chang Xu, Shan You, Chao Xu, Dacheng Tao
IEEE TIP 2017 | paper
DCT Inspired Feature Transform for Image Retrieval and Reconstruction
Yunhe Wang, Miaojing Shi, Shan You, Chao Xu
IEEE TIP 2016 | paper