The Optimization and Trustworthy Machine Learning (OPTML) group is an active research group at Michigan State University. Our research interests span machine learning (ML)/deep learning (DL), optimization, computer vision, security, signal processing, and data science, with a focus on developing learning algorithms and theory for robust and explainable artificial intelligence (AI). These research themes provide a solid foundation for our long-term objective: making AI systems scalable and trustworthy.
📖 For a more detailed introduction, see our Welcome2OPTML Booklet.
As AI moves from the lab into the real world (e.g., autonomous vehicles), ensuring its safety becomes a paramount requirement prior to deployment. Moreover, as datasets, ML/DL models, and learning tasks grow increasingly complex, scaling ML/DL calls for new advances in learning-algorithm design. More broadly, the study of robust and scalable AI can make a significant impact on machine learning theory and enable promising applications in, e.g., automated ML, meta-learning, privacy and security, hardware design, and big-data analysis. We seek new learning frontiers where current algorithms become infeasible, and we aim to formalize the foundations of secure learning.
We are always looking for passionate students to join the team as RAs, TAs, externs, interns, or visiting students (more info)!
Authors marked in bold indicate our group members, and “*” indicates equal contribution.
Trustworthy AI: Robustness, fairness, and model explanation
The Fragile Truth of Saliency: Improving LLM Input Attribution via Attention Bias Optimization
Y. Zhang, C. Wang, Y. Chen, C. Fan, J. Jia, S. Liu
NeurIPS’25 (Spotlight)
Rethinking Machine Unlearning for Large Language Models
S. Liu, Y. Yao, J. Jia, S. Casper, N. Baracaldo, P. Hase, Y. Yao, C. Y. Liu, X. Xu, H. Li, K. R. Varshney, M. Bansal, S. Koyejo, Y. Liu
Nature Machine Intelligence, 2025.
SalUn: Empowering Machine Unlearning via Gradient-Based Weight Saliency in Both Image Classification and Generation
C. Fan*, J. Liu*, Y. Zhang, E. Wong, D. Wei, S. Liu
ICLR’24 (Spotlight)
Model Sparsification Can Simplify Machine Unlearning
J. Jia*, J. Liu*, P. Ram, Y. Yao, G. Liu, Y. Liu, P. Sharma, S. Liu
NeurIPS’23 (Spotlight)
Scalable AI: Model & data compression, distributed learning, black-box optimization, and automated ML
Selectivity Drives Productivity: Efficient Dataset Pruning for Enhanced Transfer Learning
Y. Zhang*, Y. Zhang*, A. Chen, J. Jia, J. Liu, G. Liu, M. Hong, S. Chang, S. Liu
NeurIPS’23
Advancing Model Pruning via Bi-level Optimization
Y. Zhang*, Y. Yao*, P. Ram, P. Zhao, T. Chen, M. Hong, Y. Wang, S. Liu
NeurIPS’22
Distributed Adversarial Training to Robustify Deep Neural Networks at Scale
G. Zhang*, S. Lu*, Y. Zhang, X. Chen, P.-Y. Chen, Q. Fan, L. Martie, L. Horesh, M. Hong, S. Liu
UAI’22 (Best Paper Runner-Up Award)
Min-Max Optimization without Gradients: Convergence and Applications to Adversarial ML
S. Liu, S. Lu, X. Chen, Y. Feng, K. Xu, A. Al-Dujaili, M. Hong, U.-M. O’Reilly
ICML’20
A Primer on Zeroth-Order Optimization in Signal Processing and Machine Learning
S. Liu, P.-Y. Chen, B. Kailkhura, G. Zhang, A. O. Hero, P. K. Varshney
IEEE Signal Processing Magazine, 2020
We are grateful for funding from Michigan State University, MIT-IBM Watson AI Lab, DARPA, Cisco Research, NSF, DSO National Laboratories, LLNL, ARO, Amazon Research, Open Philanthropy, Schmidt Sciences and CAIS (Center for AI Safety).
We released the Welcome2OPTML Booklet as a detailed introduction to our lab and an open invitation for strong prospective candidates to join us in 2026!
19 April 2026 · Honor: Congratulations to Jinghan Jia on receiving the 2025–2026 Fitch H. Beach Award (First Place), the highest student honor in the College of Engineering. This marks another outstanding recognition for OPTMLers, following Yihua Zhang's first-place award in 2024–2025.
4 April 2026 · Papers: Two papers accepted at ACL 2026, including a Main Conference paper led by our (remote) summer intern, Renjie.
25 March 2026 · Grant: Grateful to receive a research grant from the IARPA BENGAL program and to serve as co-PI, collaborating with IBM (the lead-PI institution).
25 January 2026 · Papers: Congratulations to OPTML on six papers accepted at ICLR 2026, covering LLM unlearning, safety, test-time compute, and gradient-free learning!
5 December 2025 · Grant: Congratulations to PI Sijia Liu on receiving a research grant from DSO National Laboratories. This award will support OPTML's research on backdoor attacks in foundation models and the development of robust defense strategies.
22 November 2025 · Honor: Congratulations to OPTMLer Jinghan Jia on being selected for the Anthropic Fellows Program for AI Safety Research!
6 November 2025 · Honor: PI Sijia Liu is running in the AAAI 2026 Executive Council election, aiming to bridge the gap between foundational and applied AI research through new AAAI initiatives and community engagement. AAAI members are encouraged to cast their votes in support!