I obtained my PhD from the University of Cambridge. My current research interests include, but are not limited to, Bayesian deep learning, causal inference, and the design and application of deep learning systems. We are devoted to solving the fundamental challenge of the generalization ability of machine learning systems. I serve as a programme committee member and reviewer for several key machine learning journals and conferences. Our lab has published several papers at top machine learning and artificial intelligence conferences, such as NeurIPS, CVPR, ICCV, AAAI, and IJCAI.
Contact: ynylincoln AT sjtu DOT edu DOT cn [Recruitment for 2023 admission has begun; please get in touch as early as possible to start an internship. Master's positions for 2023 admission are available and require passing the SJTU recommendation summer camp. The deadline is approaching, so please contact me soon.]
Nanyang Ye, Qianxiao Li, Xiao-Yun Zhou, Zhanxing Zhu. An Annealing Mechanism for Adversarial Training. IEEE Transactions on Neural Networks and Learning Systems, 2021.
Nanyang Ye, Zhanxing Zhu. Bayesian Adversarial Learning. Advances in Neural Information Processing Systems 2018 (NeurIPS 2018).
Nanyang Ye, Zhanxing Zhu, Rafał K. Mantiuk. Langevin Dynamics with Continuous Tempering for Training Deep Neural Networks. Advances in Neural Information Processing Systems 2017 (NeurIPS 2017).
Lin Zhu, Xinbing Wang, Chenghu Zhou, Nanyang Ye*. Bayesian Cross-modal Alignment Learning for Few-Shot Out-of-Distribution Generalization. AAAI Conference on Artificial Intelligence 2023 (AAAI 2023). (* Corresponding author)
Nanyang Ye, Jingxuan Tang, Huayu Deng, Xiao-Yun Zhou, Qianxiao Li, Zhenguo Li, Guang-Zhong Yang, Zhanxing Zhu. Adversarial Invariant Learning. IEEE Conference on Computer Vision and Pattern Recognition 2021 (CVPR 2021).
Nanyang Ye#, Kaican Li#, Haoyue Bai, Runpeng Yu, Lanqing Hong, Fengwei Zhou, Zhenguo Li, Jun Zhu. OoD-Bench: Benchmarking and Understanding Out-of-Distribution Generalization Datasets and Algorithms. IEEE Conference on Computer Vision and Pattern Recognition 2022 (CVPR 2022).
Nanyang Ye, Lin Zhu, Jia Wang, Zhaoyu Zeng, Jiayao Shao, Chensheng Peng, Bikang Pan, Kaican Li, Jun Zhu. Certifiable Out-of-Distribution Generalization. AAAI Conference on Artificial Intelligence 2023 (AAAI 2023).
Haoyue Bai, Rui Sun, Lanqing Hong, Fengwei Zhou, Nanyang Ye*, Han-Jia Ye, Gary Chan, Zhenguo Li. DecAug: Out-of-Distribution Generalization via Decomposed Feature Representation and Semantic Augmentation. AAAI Conference on Artificial Intelligence (AAAI 2021). (* Corresponding author)
Haoyue Bai, Fengwei Zhou, Lanqing Hong, Nanyang Ye*, S.-H. Gary Chan, Zhenguo Li. NAS-OoD: Neural Architecture Search for Out-of-Distribution Generalization. International Conference on Computer Vision 2021 (ICCV 2021). (* Corresponding author)
Linfeng Cao, Aofan Jiang, Wei Li, Huaying Wu, Nanyang Ye*. OoDHDR-codec: Out-of-Distribution Generalization for HDR Image Compression. AAAI Conference on Artificial Intelligence 2022 (AAAI 2022). (* Corresponding author)
Runpeng Yu, Hong Zhu, Kaican Li, Lanqing Hong, Rui Zhang*, Nanyang Ye*, Shao-Lun Huang, Xiuqiang He. Regularization Penalty Optimization for Addressing Data Quality Variance in OoD Algorithms. AAAI Conference on Artificial Intelligence 2022 (AAAI 2022). (* Corresponding author)
Nanyang Ye, Qianxiao Li, Xiao-Yun Zhou, Zhanxing Zhu. Amata: An Annealing Mechanism for Adversarial Training Acceleration. AAAI Conference on Artificial Intelligence 2021 (AAAI 2021).
Nanyang Ye, Zhanxing Zhu. Stochastic Fractional Hamiltonian Monte Carlo. International Joint Conference on Artificial Intelligence 2018 (IJCAI 2018).
Nanyang Ye, Krzysztof Wolski, Rafał K. Mantiuk. Predicting Visible Image Differences Under Varying Display Brightness and Viewing Distance. IEEE Conference on Computer Vision and Pattern Recognition 2019 (CVPR 2019).
Nanyang Ye, Jingbiao Mei, Zhicheng Fang, Yuwen Zhang, Ziqing Zhang, Huaying Wu, Xiaoyao Liang. BayesFT: Bayesian Optimization for Fault Tolerant Neural Network Architecture. Design Automation Conference 2021 (DAC 2021).
Krzysztof Wolski#, Daniele Giunchi#, Nanyang Ye#, Piotr Didyk, Karol Myszkowski, Radosław Mantiuk, Hans-Peter Seidel, Anthony Steed, Rafał K. Mantiuk. Dataset and Metrics for Predicting Local Visible Differences. ACM Transactions on Graphics (TOG 2018), vol. 37, issue 5. (The first three authors contributed equally to the paper.)
Nanyang Ye, María Pérez-Ortiz, Rafał K. Mantiuk. Visibility Metric for Visually Lossless Image Compression. Picture Coding Symposium 2019 (PCS 2019). Top Conference in Image Compression.
Nanyang Ye, María Pérez-Ortiz, Rafał K. Mantiuk. Trained Perceptual Transform for Quality Assessment of High Dynamic Range Images and Video. International Conference on Image Processing 2018 (ICIP 2018). pp. 1718-1722.
Aliaksei Mikhailiuk, Nanyang Ye, Rafał K. Mantiuk. The Effect of Display Brightness and Viewing Distance: A Dataset for Visually Lossless Image Compression. Human Vision and Electronic Imaging 2021 (HVEI 2021).
Xu Cao, Zijie Chen, Bolin Lai, Yuxuan Wang, Yu Chen, Zhengqing Cao, Zhilin Yang, Nanyang Ye, Junbo Zhao, Xiao-Yun Zhou, Peng Qi. VeniBot: Towards Autonomous Venipuncture with Automatic Puncture Area and Angle Regression from NIR Images. International Conference on Intelligent Robots and Systems 2021 (IROS 2021).
Nanyang Ye, Xiaoming Tao, Linhao Dong, Ning Ge. Mouse Calibration Aided Real-time Gaze Estimation Based on Boost Gaussian Bayesian Learning. International Conference on Image Processing 2016 (ICIP 2016). pp. 2797-2801.
Nanyang Ye, Linhao Dong, Xiaoming Tao, Ning Ge. Efficient Multi-Cell Clustering for Coordinated Multi-Point Transmission with Blossom Tree Algorithm. IEEE Vehicular Technology Conference 2015 (VTC 2015). pp. 1-4.
(Much of our code can be found on the Papers with Code website.)
We always welcome self-motivated students, including undergraduate interns and graduate applicants, from various backgrounds such as mathematics, computer science, software engineering, and automation. Before you join, we hope you already have some basic knowledge of machine learning and of a machine learning development framework such as PyTorch or TensorFlow.
Feel free to e-mail me for more information.
Our lab has published multiple papers at top machine learning conferences such as NeurIPS, CVPR, and IJCAI. Research directions: deep learning algorithms, deep learning security, causal inference, and applications of deep learning. We are devoted to solving, at the level of both theory and algorithms, the fundamental problems that limit machine learning algorithms in real-world applications, and to breaking through the current predicament in which deep learning is labeled "modern alchemy". The lab maintains long-term collaborations with institutions such as the University of Cambridge and Peking University. Outstanding students may be recommended to top overseas universities for further study.
Application requirements: 1) a strong spirit of scientific exploration; 2) programming ability for fast, iterative implementation; 3) basic knowledge of machine learning; 4) good English reading and writing skills; 5) a solid mathematical foundation.
Mei (University of Cambridge, full scholarship), Bai (University of Wisconsin-Madison, full scholarship), Cao (Ohio State University, full scholarship), Yu (National University of Singapore, full scholarship), …, and we are expecting more good news this year!