Welcome to the OPTML Group

About Us

The OPtimization and Trustworthy Machine Learning (OPTML) group is an active research group at Michigan State University. Our research interests span machine learning (ML)/deep learning (DL), optimization, computer vision, security, signal processing, and data science, with a focus on developing learning algorithms and theory, as well as robust and explainable artificial intelligence (AI). These research themes provide a solid foundation for our long-term research objective: making AI systems scalable and trustworthy.

OPTML @ MSU

📖 For a more detailed introduction, see our Welcome2OPTML Booklet.

As AI moves from the lab into the real world (e.g., autonomous vehicles), ensuring its safety becomes a paramount requirement prior to deployment. Moreover, as datasets, ML/DL models, and learning tasks grow increasingly complex, getting ML/DL to scale calls for new advances in learning algorithm design. More broadly, the study of robust and scalable AI could make a significant impact on machine learning theory and enable promising applications in, e.g., automated ML, meta-learning, privacy and security, hardware design, and big-data analysis. We seek new learning frontiers where current algorithms become infeasible, and we work to formalize the foundations of secure learning.

We are always looking for passionate students to join the team as RA/TA/extern/intern/visiting students (more info)!

Representative Publications

Authors marked in bold are our group members; “*” indicates equal contribution.

Trustworthy AI: Robustness, fairness, and model explanation

Scalable AI: Model & data compression, distributed learning, black-box optimization, and automated ML

Sponsors

We are grateful for funding from Michigan State University, MIT-IBM Watson AI Lab, DARPA, Cisco Research, NSF, DSO National Laboratories, LLNL, ARO, Amazon Research, and Open Philanthropy.



News

We released the Welcome2OPTML Booklet as a detailed introduction to our lab and an open invitation for strong prospective candidates to join us in 2026!

8 October 2025

The slides for our “Robust Machine Unlearning” tutorial at IEEE MILCOM 2025 are now available: [Download PDF]

5 October 2025

Yihua and Dr. Liu will deliver a tutorial on “Robust Machine Unlearning” at IEEE MILCOM 2025 (Check schedule here)!

26 September 2025

Dr. Liu has been named a Red Cedar Distinguished Professor!

18 September 2025

Congratulations to OPTML on three papers accepted at NeurIPS 2025: one spotlight and two posters, covering LLM unlearning, interpretability, and the security of reasoning models!

24 August 2025

Congrats to Changsheng, Yihua, and Jinghan on the acceptance of their paper at the 18th ACM Workshop on Artificial Intelligence and Security (AISec).

21 August 2025

Congratulations to Changsheng, Chongyu, Yihua, and Jinghan on their paper, Reasoning Model Unlearning: Forgetting Traces, Not Just Answers, While Preserving Reasoning Skills, which has been accepted to the EMNLP 2025 main conference.

13 August 2025

We are excited to announce that our tutorial “Robust Machine Unlearning: Securing Foundation Models Against Forgetting Failures” has been accepted for presentation at the IEEE Military Communications Conference (MILCOM 2025); see details in the schedule.

... see all News