Welcome to the OPTML Group

About Us

The OPTimization and Trustworthy Machine Learning (OPTML) group is an active research group at Michigan State University. Our research interests span machine learning (ML)/deep learning (DL), optimization, computer vision, security, signal processing, and data science, with a focus on developing learning algorithms and theory, as well as robust and explainable artificial intelligence (AI). These research themes provide a solid foundation for our long-term research objective: making AI systems scalable and trustworthy.

As AI moves from the lab into the real world (e.g., autonomous vehicles), ensuring its safety becomes a paramount requirement prior to deployment. Moreover, as datasets, ML/DL models, and learning tasks become increasingly complex, getting ML/DL to scale calls for new advances in learning algorithm design. More broadly, the study of robust and scalable AI can make a significant impact on machine learning theory and enable promising applications in, e.g., automated ML, meta-learning, privacy and security, hardware design, and big data analysis. We seek new learning frontiers where current algorithms become infeasible, and work to formalize the foundations of secure learning.

We are always looking for passionate students to join the team as RAs, TAs, externs, interns, or visiting students (more info)!

Representative Publications

Authors in bold are group members, and “*” indicates equal contribution.

Trustworthy AI: Robustness, fairness, and model explanation

Scalable AI: Model & data compression, distributed learning, black-box optimization, and automated ML

Sponsors

We are grateful for funding from Michigan State University, the MIT-IBM Watson AI Lab, DARPA, Cisco Research, NSF, DSO National Laboratories, LLNL, ARO, and Amazon Research.



News

18. November 2024

Grateful for receiving the NAIRR Pilot Award in the field of Artificial Intelligence and Intelligent Systems!

4. November 2024

One paper in WACV’25: Can Adversarial Examples Be Parsed to Reveal Victim Model Information?

26. September 2024

Six papers in NeurIPS’24, including one in dataset & benchmark track. Congrats to Yihua Zhang, Yuguang Yao, Jinghan Jia, and Yimeng Zhang for their outstanding leadership!

20. September 2024

One paper in EMNLP’24: SOUL: Unlocking the Power of Second-Order Optimization for LLM Unlearning

20. August 2024

Grateful to receive the Amazon Research Award for AI in Information Security (Spring 2024)!

20. July 2024

The 3rd AdvML-Frontiers Workshop is now live and will be co-located with NeurIPS’24! Submit your papers by Aug 30.

11. July 2024

Dr. Liu has received the prestigious NSF Faculty Early Career Development (CAREER) Award!

10. July 2024

Congratulations to Yihua for receiving the 2024 MLCommons Rising Stars Award!

... see all News