
Bingsheng He

I am a senior researcher at Huawei Noah's Ark Lab, Beijing, where I work on deep learning, model compression, and computer vision. Before that, I received my Ph.D. from the School of EECS, Peking University, where I was co-advised by Prof. Chao Xu and Prof. Dacheng Tao. I received my bachelor's degree from the School of Science, Xidian University.

Email  /  Google Scholar

Research Areas

  • Mathematical Programming, Numerical Optimization

  • Variational Inequalities, Projection and contraction methods for VI

  • ADMM-like splitting contraction methods for convex optimization

Recent Projects

    Model compression is a family of techniques for developing portable deep neural networks with lower memory and computation costs. I have completed several projects at Huawei, including applications shipped on smartphones in 2019 and 2020 (e.g., the Mate 30 and Honor V30). Currently, I am leading the AdderNet project, which aims to develop a series of deep learning models that use only additions (Discussions on Reddit).

  • Adder Neural Networks
  • Project Page | Hardware Implementation

    I would like to say that AdderNet is very cool! The initial idea came up around 2017 while climbing with some friends in Beijing. By replacing all convolutional layers (except the first and the last) with adder layers, we can now obtain comparable performance on ResNet architectures. To make the story more complete, we recently released the hardware implementation and some quantization methods. The results are quite encouraging: we can reduce both the energy consumption and the circuit area significantly without affecting performance. We are now working on more applications that reduce the cost of deploying AI algorithms, such as low-level vision, detection, and NLP tasks.
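
    For readers curious about the core idea, here is a minimal, hypothetical PyTorch sketch of an adder layer: it replaces the multiply-accumulate of a standard convolution with a negative L1 distance between unfolded input patches and the filters. The class name, shapes, and defaults are illustrative assumptions, not the released AdderNet implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Adder2d(nn.Module):
    """Toy adder layer: each output is the negative L1 distance between an
    input patch and a filter, so the similarity measure uses only additions
    and subtractions. A conceptual sketch, not the official AdderNet code."""

    def __init__(self, in_channels, out_channels, kernel_size, stride=1, padding=0):
        super().__init__()
        self.kernel_size, self.stride, self.padding = kernel_size, stride, padding
        self.weight = nn.Parameter(
            torch.randn(out_channels, in_channels * kernel_size * kernel_size))

    def forward(self, x):
        b, _, h, w = x.shape
        # Sliding patches: (b, C*k*k, L) where L = h_out * w_out
        patches = F.unfold(x, self.kernel_size, stride=self.stride, padding=self.padding)
        # |patch - filter| summed over the patch dimension -> (b, out_channels, L)
        out = -(patches.unsqueeze(1) - self.weight[None, :, :, None]).abs().sum(dim=2)
        h_out = (h + 2 * self.padding - self.kernel_size) // self.stride + 1
        w_out = (w + 2 * self.padding - self.kernel_size) // self.stride + 1
        return out.view(b, -1, h_out, w_out)

# Quick shape check on a random input
layer = Adder2d(3, 8, kernel_size=3, padding=1)
print(layer(torch.randn(2, 3, 32, 32)).shape)  # torch.Size([2, 8, 32, 32])
```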

  • GhostNet on MindSpore: SOTA Lightweight CV Networks
  • Huawei Connect (HC) 2020 | MindSpore Hub

    The initial version of GhostNet was accepted at CVPR 2020 and achieved SOTA performance on ImageNet: 75.7% top-1 accuracy with only 226M FLOPs. In the current version, we release a series of computer vision models (e.g., int8 quantization, detection, and larger networks) on MindSpore 1.0 and the Mate 30 Pro (Kirin 990).
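
    As a rough illustration of the Ghost module idea behind GhostNet (a small primary convolution produces a few intrinsic feature maps, and cheap depthwise operations generate the remaining "ghost" maps), here is a hedged PyTorch sketch. The layer names and the ratio are my own simplification, not the released MindSpore or PyTorch code.

```python
import torch
import torch.nn as nn

class GhostModule(nn.Module):
    """Sketch of a Ghost module: a primary convolution makes a few intrinsic
    feature maps; cheap depthwise convolutions generate the remaining ghost
    maps, and the two sets are concatenated. Simplified for illustration."""

    def __init__(self, in_channels, out_channels, ratio=2, kernel_size=1, dw_size=3):
        super().__init__()
        init_channels = out_channels // ratio          # intrinsic maps
        ghost_channels = out_channels - init_channels  # cheap ghost maps (divisible by init_channels here)
        self.primary = nn.Sequential(
            nn.Conv2d(in_channels, init_channels, kernel_size,
                      padding=kernel_size // 2, bias=False),
            nn.BatchNorm2d(init_channels), nn.ReLU(inplace=True))
        self.cheap = nn.Sequential(
            nn.Conv2d(init_channels, ghost_channels, dw_size,
                      padding=dw_size // 2, groups=init_channels, bias=False),
            nn.BatchNorm2d(ghost_channels), nn.ReLU(inplace=True))

    def forward(self, x):
        y = self.primary(x)
        return torch.cat([y, self.cheap(y)], dim=1)

m = GhostModule(16, 32)
print(m(torch.randn(1, 16, 56, 56)).shape)  # torch.Size([1, 32, 56, 56])
```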

  • AI on Ascend: Real-Time Video Style Transfer
  • Huawei Developer Conference (HDC) 2020 | Online Demo

    This project aims to develop a video style transfer system on the Huawei Atlas 200 DK AI Developer Kit. The latency of the original model for processing one image is about 630 ms; after accelerating it with our method, the latency is about 40 ms (roughly a 15x speedup).
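
    For context on how per-frame latency numbers like these are typically measured, here is a small, generic PyTorch timing sketch; the placeholder model and input size are assumptions for illustration, not the deployed Atlas/Ascend pipeline.

```python
import time
import torch

def measure_latency_ms(model, input_size=(1, 3, 720, 1280), warmup=10, runs=50):
    """Average per-image inference latency in milliseconds (generic sketch)."""
    model.eval()
    x = torch.randn(*input_size)
    with torch.no_grad():
        for _ in range(warmup):            # warm-up iterations to stabilize timing
            model(x)
        start = time.perf_counter()
        for _ in range(runs):
            model(x)
        elapsed = time.perf_counter() - start
    return elapsed / runs * 1000.0

# Example with a placeholder network (not the actual style-transfer model)
net = torch.nn.Sequential(torch.nn.Conv2d(3, 3, 3, padding=1))
print(f"{measure_latency_ms(net):.1f} ms per frame")
```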

Talks

  • 10/2021, Vision Transformer at HAET ICLR 2021 workshop.
  • 05/2021, Adder Neural Network at HAET ICLR 2021 workshop. Thanks Vahid for the invitation.
  • 06/2020, "AI on the Edge - Discussion on the Gap Between Industry and Academia" at VALSE Webinar.
  • 05/2020, "Edge AI: Progress and Future Directions" at QbitAI using bilibili.

Research

    I'm interested in developing efficient models for computer vision (e.g., classification, detection, and super-resolution) using pruning, quantization, distillation, NAS, etc.
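
    As one concrete example of the distillation techniques mentioned above, a minimal knowledge-distillation loss in PyTorch might look like the sketch below; the temperature and weighting values are illustrative defaults rather than settings from any specific paper.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Hinton-style knowledge distillation: soft teacher targets + hard labels."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean") * (T * T)          # scale to keep gradient magnitude
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Toy usage with random logits and labels
s = torch.randn(8, 10)            # student logits
t = torch.randn(8, 10)            # teacher logits
y = torch.randint(0, 10, (8,))    # ground-truth labels
print(distillation_loss(s, t, y))
```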

    Conference Papers:

    1. Transformer in Transformer
      Kai Han, An Xiao, Enhua Wu, Jianyuan Guo, Chunjing Xu, Yunhe Wang
      NeurIPS 2021 | paper | code | MindSpore code

    2. Learning Frequency Domain Approximation for Binary Neural Networks
      Yixing Xu, Kai Han, Chang Xu, Yehui Tang, Chunjing Xu, Yunhe Wang
      NeurIPS 2021 | paper | Oral Presentation

    3. Dynamic Resolution Network
      Mingjian Zhu*, Kai Han*, Enhua Wu, Qiulin Zhang, Ying Nie, Zhenzhong Lan, Yunhe Wang
      NeurIPS 2021 (* equal contribution) | paper

    4. Post-Training Quantization for Vision Transformer
      Zhenhua Liu, Yunhe Wang, Kai Han, Wei Zhang, Siwei Ma, Wen Gao
      NeurIPS 2021 | paper

    5. Augmented Shortcuts for Vision Transformers
      Yehui Tang, Kai Han, Chang Xu, An Xiao, Yiping Deng, Chao Xu, Yunhe Wang
      NeurIPS 2021 | paper

    6. Adder Attention for Vision Transformer
      Han Shu*, Jiahao Wang*, Hanting Chen, Lin Li, Yujiu Yang, Yunhe Wang
      NeurIPS 2021 (* equal contribution) | paper

    7. Towards Stable and Robust AdderNets
      Minjing Dong, Yunhe Wang, Xinghao Chen, Chang Xu
      NeurIPS 2021 | paper

    8. Handling Long-Tailed Feature Distribution in AdderNets
      Minjing Dong, Yunhe Wang, Xinghao Chen, Chang Xu
      NeurIPS 2021 | paper

    9. Neural Architecture Dilation for Adversarial Robustness
      Yanxi Li, Zhaohui Yang, Yunhe Wang, Chang Xu
      NeurIPS 2021 | paper

    10. An Empirical Study of Adder Neural Networks for Object Detection
      Xinghao Chen, Chang Xu, Minjing Dong, Chunjing Xu, Yunhe Wang
      NeurIPS 2021 | paper

    11. Learning Frequency-Aware Dynamic Network for Efficient Super-Resolution
      Wenbin Xie, Dehua Song, Chang Xu, Chunjing Xu, Hui Zhang, Yunhe Wang
      ICCV 2021 | paper

    12. Winograd Algorithm for AdderNet
      Wenshuo Li, Hanting Chen, Mingqiang Huang, Xinghao Chen, Chunjing Xu, Yunhe Wang
      ICML 2021 | paper

    13. Distilling Object Detectors via Decoupled Features
      Jianyuan Guo, Kai Han, Yunhe Wang, Wei Zhang, Chunjing Xu, Chang Xu
      CVPR 2021 | paper

    14. HourNAS: Extremely Fast Neural Architecture Search Through an Hourglass Lens
      Zhaohui Yang, Yunhe Wang, Xinghao Chen, Jianyuan Guo, Wei Zhang,
      Chao Xu, Chunjing Xu, Dacheng Tao, Chang Xu
      CVPR 2021 | paper | MindSpore code

    15. Manifold Regularized Dynamic Network Pruning
      Yehui Tang, Yunhe Wang, Yixing Xu, Yiping Deng, Chao Xu, Dacheng Tao, Chang Xu
      CVPR 2021 | paper | MindSpore code

    16. Learning Student Networks in the Wild
      Hanting Chen, Tianyu Guo, Chang Xu, Wenshuo Li, Chunjing Xu, Chao Xu, Yunhe Wang
      CVPR 2021 | paper

    17. AdderSR: Towards Energy Efficient Image Super-Resolution
      Dehua Song*, Yunhe Wang*, Hanting Chen, Chang Xu, Chunjing Xu, Dacheng Tao
      CVPR 2021 (* equal contribution) | paper | code | Oral Presentation

    18. ReNAS: Relativistic Evaluation of Neural Architecture Search
      Yixing Xu, Yunhe Wang, Kai Han, Yehui Tang, Shangling Jui, Chunjing Xu, Chang Xu
      CVPR 2021 | paper | Oral Presentation | MindSpore code

    19. Pre-Trained Image Processing Transformer
      Hanting Chen, Yunhe Wang, Tianyu Guo, Chang Xu, Yiping Deng, Zhenhua Liu,
      Siwei Ma, Chunjing Xu, Chao Xu, Wen Gao
      CVPR 2021 | paper | MindSpore code | Pytorch code

    20. Data-Free Knowledge Distillation For Image Super-Resolution
      Yiman Zhang, Hanting Chen, Xinghao Chen, Yiping Deng, Chunjing Xu, Yunhe Wang
      CVPR 2021 | paper

    21. Positive-Unlabeled Data Purification in the Wild for Object Detection
      Jianyuan Guo, Kai Han, Han Wu, Xinghao Chen, Chao Zhang, Chunjing Xu, Chang Xu, Yunhe Wang
      CVPR 2021 | paper

    22. One-shot Graph Neural Architecture Search with Dynamic Search Space
      Yanxi Li, Zean Wen, Yunhe Wang, Chang Xu
      AAAI 2021 | paper

    23. Adversarial Robustness through Disentangled Representations
      Shuo Yang, Tianyu Guo, Yunhe Wang, Chang Xu
      AAAI 2021 | paper

    24. Kernel Based Progressive Distillation for Adder Neural Networks
      Yixing Xu, Chang Xu, Xinghao Chen, Wei Zhang, Chunjing Xu, Yunhe Wang
      NeurIPS 2020 | paper | Spotlight | code

    25. Model Rubik's Cube: Twisting Resolution, Depth and Width for TinyNets
      Kai Han*, Yunhe Wang*, Qiulin Zhang, Wei Zhang, Chunjing Xu, Tong Zhang
      NeurIPS 2020 (* equal contribution) | paper | code

    26. Residual Distillation: Towards Portable Deep Neural Networks without Shortcuts
      Guilin Li*, Junlei Zhang*, Yunhe Wang, Chuanjian Liu, Matthias Tan, Yunfeng Lin,
      Wei Zhang, Jiashi Feng, Tong Zhang
      NeurIPS 2020 (* equal contribution) | paper | code

    27. Searching for Low-Bit Weights in Quantized Neural Networks
      Zhaohui Yang, Yunhe Wang, Kai Han, Chunjing Xu, Chao Xu, Dacheng Tao, Chang Xu
      NeurIPS 2020 | paper | code

    28. SCOP: Scientific Control for Reliable Neural Network Pruning
      Yehui Tang, Yunhe Wang, Yixing Xu, Dacheng Tao, Chunjing Xu, Chao Xu, Chang Xu
      NeurIPS 2020 | paper | code

    29. Adapting Neural Architectures Between Domains
      Yanxi Li, Zhaohui Yang, Yunhe Wang, Chang Xu
      NeurIPS 2020 | paper | code

    30. Discernible Image Compression
      Zhaohui Yang, Yunhe Wang, Chang Xu, Peng Du, Chao Xu, Chunjing Xu, Qi Tian
      ACM MM 2020 | paper

    31. Optical Flow Distillation: Towards Efficient and Stable Video Style Transfer
      Xinghao Chen*, Yiman Zhang*, Yunhe Wang, Han Shu, Chunjing Xu, Chang Xu
      ECCV 2020 (* equal contribution) | paper | code

    32. Learning Binary Neurons with Noisy Supervision
      Kai Han, Yunhe Wang, Yixing Xu, Chunjing Xu, Enhua Wu, Chang Xu
      ICML 2020 | paper

    33. Neural Architecture Search in a Proxy Validation Loss Landscape
      Yanxi Li, Minjing Dong, Yunhe Wang, Chang Xu
      ICML 2020 | paper

    34. On Positive-Unlabeled Classification in GAN
      Tianyu Guo, Chang Xu, Jiajun Huang, Yunhe Wang, Boxin Shi, Chao Xu, Dacheng Tao
      CVPR 2020 | paper

    35. CARS: Continuous Evolution for Efficient Neural Architecture Search
      Zhaohui Yang, Yunhe Wang, Xinghao Chen, Boxin Shi, Chao Xu, Chunjing Xu, Qi Tian, Chang Xu
      CVPR 2020 | paper | code

    36. AdderNet: Do We Really Need Multiplications in Deep Learning?
      Hanting Chen*, Yunhe Wang*, Chunjing Xu, Boxin Shi, Chao Xu, Qi Tian, Chang Xu
      CVPR 2020 (* equal contribution) | paper | code | Oral Presentation

    37. A Semi-Supervised Assessor of Neural Architectures
      Yehui Tang, Yunhe Wang, Yixing Xu, Hanting Chen, Boxin Shi, Chao Xu, Chunjing Xu, Qi Tian, Chang Xu
      CVPR 2020 | paper

    38. Hit-Detector: Hierarchical Trinity Architecture Search for Object Detection
      Jianyuan Guo, Kai Han, Yunhe Wang, Chao Zhang, Zhaohui Yang, Han Wu, Xinghao Chen, Chang Xu
      CVPR 2020 | paper | code

    39. Frequency Domain Compact 3D Convolutional Neural Networks
      Hanting Chen, Yunhe Wang, Han Shu, Yehui Tang, Chunjing Xu, Boxin Shi, Chao Xu, Qi Tian, Chang Xu
      CVPR 2020 | paper

    40. GhostNet: More Features from Cheap Operations
      Kai Han, Yunhe Wang, Qi Tian, Jianyuan Guo, Chunjing Xu, Chang Xu
      CVPR 2020 | paper | code

    41. Beyond Dropout: Feature Map Distortion to Regularize Deep Neural Networks
      Yehui Tang, Yunhe Wang, Yixing Xu, Boxin Shi, Chao Xu, Chunjing Xu, Chang Xu
      AAAI 2020 | paper | code

    42. DropNAS: Grouped Operation Dropout for Differentiable Architecture Search
      Weijun Hong, Guilin Li, Weinan Zhang, Ruiming Tang, Yunhe Wang, Zhenguo Li, Yong Yu
      IJCAI 2020 | paper

    43. Distilling Portable Generative Adversarial Networks for Image Translation
      Hanting Chen, Yunhe Wang, Han Shu, Changyuan Wen, Chunjing Xu, Boxin Shi, Chao Xu, Chang Xu
      AAAI 2020 | paper

    44. Efficient Residual Dense Block Search for Image Super-Resolution
      Dehua Song, Chang Xu, Xu Jia, Yiyi Chen, Chunjing Xu, Yunhe Wang
      AAAI 2020 | paper | code

    45. Positive-Unlabeled Compression on the Cloud
      Yixing Xu, Yunhe Wang, Hanting Chen, Kai Han, Chunjing Xu, Dacheng Tao, Chang Xu
      NeurIPS 2019 | paper | code | supplement

    46. Data-Free Learning of Student Networks
      Hanting Chen, Yunhe Wang, Chang Xu, Zhaohui Yang, Chuanjian Liu, Boxin Shi,
      Chunjing Xu, Chao Xu, Qi Tian
      ICCV 2019 | paper | code

    47. Co-Evolutionary Compression for Unpaired Image Translation
      Han Shu, Yunhe Wang, Xu Jia, Kai Han, Hanting Chen, Chunjing Xu, Qi Tian, Chang Xu
      ICCV 2019 | paper | code

    48. Searching for Accurate Binary Neural Architectures
      Mingzhu Shen, Kai Han, Chunjing Xu, Yunhe Wang
      ICCV Neural Architectures Workshop 2019 | paper

    49. LegoNet: Efficient Convolutional Neural Networks with Lego Filters
      Zhaohui Yang, Yunhe Wang, Hanting Chen, Chuanjian Liu, Boxin Shi, Chao Xu, Chunjing Xu, Chang Xu
      ICML 2019 | paper | code

    50. Learning Instance-wise Sparsity for Accelerating Deep Models
      Chuanjian Liu, Yunhe Wang, Kai Han, Chunjing Xu, Chang Xu
      IJCAI 2019 | paper

    51. Attribute Aware Pooling for Pedestrian Attribute Recognition
      Kai Han, Yunhe Wang, Han Shu, Chuanjian Liu, Chunjing Xu, Chang Xu
      IJCAI 2019 | paper

    52. Crafting Efficient Neural Graph of Large Entropy
      Minjing Dong, Hanting Chen, Yunhe Wang, Chang Xu
      IJCAI 2019 | paper

    53. Low Resolution Visual Recognition via Deep Feature Distillation
      Mingjian Zhu, Kai Han, Chao Zhang, Jinlong Lin, Yunhe Wang
      ICASSP 2019 | paper

    54. Learning Versatile Filters for Efficient Convolutional Neural Networks
      Yunhe Wang, Chang Xu, Chunjing Xu, Chao Xu, Dacheng Tao
      NeurIPS 2018 | paper | code | supplement

    55. Towards Evolutionary Compression
      Yunhe Wang, Chang Xu, Jiayan Qiu, Chao Xu, Dacheng Tao
      SIGKDD 2018 | paper

    56. Autoencoder Inspired Unsupervised Feature Selection
      Kai Han, Yunhe Wang, Chao Zhang, Chao Li, Chao Xu
      ICASSP 2018 | paper | code

    57. Adversarial Learning of Portable Student Networks
      Yunhe Wang, Chang Xu, Chao Xu, Dacheng Tao
      AAAI 2018 | paper

    58. Beyond Filters: Compact Feature Map for Portable Deep Model
      Yunhe Wang, Chang Xu, Chao Xu, Dacheng Tao
      ICML 2017 | paper | code | supplement

    59. Beyond RPCA: Flattening Complex Noise in the Frequency Domain
      Yunhe Wang, Chang Xu, Chao Xu, Dacheng Tao
      AAAI 2017 | paper

    60. Privileged Multi-Label Learning
      Shan You, Chang Xu, Yunhe Wang, Chao Xu, Dacheng Tao
      IJCAI 2017 | paper

    61. CNNpack: Packing Convolutional Neural Networks in the Frequency Domain
      Yunhe Wang, Chang Xu, Shan You, Chao Xu, Dacheng Tao
      NeurIPS 2016 | paper | supplement

    Journal Papers:

    1. A Survey on Visual Transformer
      Kai Han, Yunhe Wang, Hanting Chen, Xinghao Chen, Jianyuan Guo, Zhenhua Liu, Yehui Tang, An Xiao, Chunjing Xu, Yixing Xu, Zhaohui Yang, Yiman Zhang, Dacheng Tao
      IEEE TPAMI 2022 | paper

    2. Learning Versatile Convolution Filters for Efficient Visual Recognition
      Kai Han*, Yunhe Wang*, Chang Xu, Chunjing Xu, Enhua Wu, Dacheng Tao
      IEEE TPAMI 2021 (* equal contribution) | paper | code

    3. Adversarial Recurrent Time Series Imputation
      Shuo Yang, Minjing Dong, Yunhe Wang, Chang Xu
      IEEE TNNLS 2020 | paper

    4. Learning Student Networks via Feature Embedding
      Hanting Chen, Yunhe Wang, Chang Xu, Chao Xu, Dacheng Tao
      IEEE TNNLS 2020 | paper

    5. Packing Convolutional Neural Networks in the Frequency Domain
      Yunhe Wang, Chang Xu, Chao Xu, Dacheng Tao
      IEEE TPAMI 2018 | paper

    6. DCT Regularized Extreme Visual Recovery
      Yunhe Wang, Chang Xu, Shan You, Chao Xu, Dacheng Tao
      IEEE TIP 2017 | paper

    7. DCT Inspired Feature Transform for Image Retrieval and Reconstruction
      Yunhe Wang, Miaojing Shi, Shan You, Chao Xu
      IEEE TIP 2016 | paper

Services

  • Area Chair of ICML 2021 and NeurIPS 2021.

  • Senior Program Committee Member of IJCAI 2021, IJCAI 2020, and IJCAI 2019.

  • Journal Reviewer for IEEE T-PAMI, IJCV, IEEE T-IP, IEEE T-NNLS, IEEE T-MM, IEEE T-KDE, etc.

  • Program Committee Member of ICCV 2021, AAAI 2021, ICLR 2021, NeurIPS 2020, ICML 2020, ECCV 2020, CVPR 2020, ICLR 2020, AAAI 2020, ICCV 2019, CVPR 2019, ICLR 2019, AAAI 2019, IJCAI 2018, AAAI 2018, NeurIPS 2018, etc.

Awards

  • 2020, Nomination for Outstanding Youth Paper Award, WAIC

  • 2017, Google PhD Fellowship

  • 2017, Baidu Scholarship

  • 2017, President's PhD Scholarship, Peking University

  • 2017, National Scholarship for Graduate Students

  • 2016, National Scholarship for Graduate Students

  • Feel free to use this website's source code; just add a link back to this page.