The Optimization and Trustworthy Machine Learning (OPTML) group is an active research group at Michigan State University. Our research interests span machine learning (ML)/deep learning (DL), optimization, computer vision, security, signal processing, and data science, with a focus on developing learning algorithms and theory as well as robust and explainable artificial intelligence (AI). These research themes provide a solid foundation for our long-term research objective: making AI systems scalable and trustworthy.
As AI moves from the lab into the real world (e.g., autonomous vehicles), ensuring its safety becomes a paramount requirement prior to deployment. Moreover, as datasets, ML/DL models, and learning tasks grow increasingly complex, getting ML/DL to scale calls for new advances in learning algorithm design. More broadly, the study of robust and scalable AI can make a significant impact on machine learning theory and enable promising applications in, e.g., automated ML, meta-learning, privacy and security, hardware design, and big data analysis. We seek new learning frontiers where current algorithms become infeasible, and we work to formalize the foundations of secure learning.
We are always looking for passionate students to join the team as RA/TA/extern/intern/visiting students (more info)!
Authors in bold are our group members, and “*” denotes equal contribution.
Trustworthy AI: Robustness, fairness, and model explanation
Model Sparsity Can Simplify Machine Unlearning
J. Jia*, J. Liu*, P. Ram, Y. Yao, G. Liu, Y. Liu, P. Sharma, S. Liu
NeurIPS’23 (Spotlight)
Understanding and Improving Visual Prompting: A Label-Mapping Perspective
A. Chen, Y. Yao, P.-Y. Chen, Y. Zhang, S. Liu
CVPR’23
Revisiting and Advancing Fast Adversarial Training Through the Lens of Bi-Level Optimization
Y. Zhang*, G. Zhang*, P. Khanduri, M. Hong, S. Chang, S. Liu
ICML’22
How to Robustify Black-Box ML Models? A Zeroth-Order Optimization Perspective
Y. Zhang, Y. Yao, J. Jia, J. Yi, M. Hong, S. Chang, S. Liu
ICLR’22 (Spotlight)
Reverse Engineering of Imperceptible Adversarial Image Perturbations
Y. Gong*, Y. Yao*, Y. Li, Y. Zhang, X. Liu, X. Lin, S. Liu
ICLR’22
Scalable AI: Model & data compression, distributed learning, black-box optimization, and automated ML
Selectivity Drives Productivity: Efficient Dataset Pruning for Enhanced Transfer Learning
Y. Zhang*, Y. Zhang*, A. Chen, J. Jia, J. Liu, G. Liu, M. Hong, S. Chang, S. Liu
NeurIPS’23
Advancing Model Pruning via Bi-level Optimization
Y. Zhang*, Y. Yao*, P. Ram, P. Zhao, T. Chen, M. Hong, Y. Wang, S. Liu
NeurIPS’22
Distributed Adversarial Training to Robustify Deep Neural Networks at Scale
G. Zhang*, S. Lu*, Y. Zhang, X. Chen, P.-Y. Chen, Q. Fan, L. Martie, L. Horesh, M. Hong, S. Liu
UAI’22 (Best Paper Runner-Up Award)
Min-Max Optimization without Gradients: Convergence and Applications to Adversarial ML
S. Liu, S. Lu, X. Chen, Y. Feng, K. Xu, A. Al-Dujaili, M. Hong, U.-M. O’Reilly
ICML’20
A Primer on Zeroth-Order Optimization in Signal Processing and Machine Learning
S. Liu, P.-Y. Chen, B. Kailkhura, G. Zhang, A. O. Hero, P. K. Varshney
IEEE Signal Processing Magazine, 2020
We are grateful for funding from Michigan State University, MIT-IBM Watson AI Lab, DARPA, Cisco Research, NSF, DSO National Laboratories, LLNL, ARO, and Amazon Research.
News

Our ICLR’25 paper, When is Task Vector Provably Effective for Model Editing? A Generalization Analysis of Nonlinear Transformers, has been selected for an Oral presentation (1.8% acceptance rate).
5. February 2025: PI Liu is honored to receive the 2025 Withrow Rising Scholar Award from Michigan State University’s College of Engineering. This prestigious award annually recognizes junior faculty for excellence in instruction, scholarship, and distinguished service to the university and student body.
28. January 2025: Congratulations to Yihua Zhang for receiving the prestigious IBM PhD Fellowship Award and the CPAL Rising Star Award.
27. January 2025: Congratulations to Brian Zhang for being named one of the Top 300 Scholars of the 84th Annual Science Talent Search in 2025 for his project, Elevating Visual Prompting in Transfer Learning via Pruned Model Ensembles: No Retrain, No Pain, conducted during his high school externship at OPTML under the mentorship of Yuguang Yao.
22. January 2025: One paper accepted at ICLR’25, offering a theoretical understanding of task vectors and their application to LLM unlearning (see paper).
21. December 2024: Honored to be one of the 10 recipients of the Amazon Research Award for Spring and Winter 2024.
8. December 2024: Check out the OPTML Menu of Innovations @ NeurIPS 2024!
28. November 2024: Thrilled to announce that our position paper, Rethinking Machine Unlearning for LLMs, has been accepted for publication in Nature Machine Intelligence. Congratulations to the team and our amazing collaborators on achieving this milestone!