Authors marked in blue are our group members; “*” indicates equal contribution.
Preprints
Simplicity Prevails: Rethinking Negative Preference Optimization for LLM Unlearning
C. Fan*, J. Liu*, L. Lin*, J. Jia, R. Zhang, S. Mei, S. Liu
arXiv Preprint
Conference Papers
Can Adversarial Examples Be Parsed to Reveal Victim Model Information?
Y. Yao, J. Liu, Y. Gong, X. Liu, Y. Wang, X. Lin, S. Liu
WACV’25
WAGLE: Strategic Weight Attribution for Effective and Modular Unlearning in Large Language Models
J. Jia, J. Liu, Y. Zhang, P. Ram, N. Baracaldo, S. Liu
NeurIPS’24
Defensive Unlearning with Adversarial Training for Robust Concept Erasure in Diffusion Models
Y. Zhang, X. Chen, J. Jia, Y. Zhang, C. Fan, J. Liu, M. Hong, K. Ding, S. Liu
NeurIPS’24
From Trojan Horses to Castle Walls: Unveiling Bilateral Backdoor Effects in Diffusion Models
Z. Pan*, Y. Yao*, G. Liu, B. Shen, H. V. Zhao, R. R. Kompella, S. Liu
NeurIPS’24
UnlearnCanvas: A Stylized Image Dataset to Benchmark Machine Unlearning for Diffusion Models
Y. Zhang, C. Fan, Y. Zhang, Y. Yao, J. Jia, J. Liu, G. Zhang, G. Liu, R. Kompella, X. Liu, S. Liu
NeurIPS’24 D&B
SOUL: Unlocking the Power of Second-Order Optimization for LLM Unlearning
J. Jia, Y. Zhang, Y. Zhang, J. Liu, B. Runwal, J. Diffenderfer, B. Kailkhura, S. Liu
EMNLP’24
To Generate or Not? Safety-Driven Unlearned Diffusion Models Are Still Easy To Generate Unsafe Images … For Now
Y. Zhang*, J. Jia*, X. Chen, A. Chen, Y. Zhang, J. Liu, K. Ding, S. Liu
ECCV’24
Challenging Forgets: Unveiling the Worst-Case Forget Sets in Machine Unlearning
C. Fan*, J. Liu*, A. Hero, S. Liu
ECCV’24
Revisiting Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning: A Benchmark
Y. Zhang*, P. Li*, J. Hong*, J. Li, Y. Zhang, W. Zheng, P.-Y. Chen, J. D. Lee, W. Yin, M. Hong, Z. Wang, S. Liu, T. Chen
ICML’24
SalUn: Empowering Machine Unlearning via Gradient-based Weight Saliency in Both Image Classification and Generation
C. Fan*, J. Liu*, Y. Zhang, E. Wong, D. Wei, S. Liu
ICLR’24 (Spotlight, acceptance rate 5%)
DeepZero: Scaling up Zeroth-Order Optimization for Deep Model Training
A. Chen*, Y. Zhang*, J. Jia, J. Diffenderfer, J. Liu, K. Parasyris, Y. Zhang, Z. Zhang, B. Kailkhura, S. Liu
ICLR’24
Backdoor Secrets Unveiled: Identifying Backdoor Data with Optimized Scaled Prediction Consistency
S. Pal, Y. Yao, R. Wang, B. Shen, S. Liu
ICLR’24
AutoVP: An Automated Visual Prompting Framework and Benchmark
H.-A. Tsao, L. Hsiung, P.-Y. Chen, S. Liu, T.-Y. Ho
ICLR’24
Model Sparsity Can Simplify Machine Unlearning
J. Jia*, J. Liu*, P. Ram, Y. Yao, G. Liu, Y. Liu, P. Sharma, S. Liu
NeurIPS’23 (Spotlight, acceptance rate 3%)
Selectivity Drives Productivity: Efficient Dataset Pruning for Enhanced Transfer Learning
Y. Zhang*, Y. Zhang*, A. Chen*, J. Jia, J. Liu, G. Liu, M. Hong, S. Chang, S. Liu
NeurIPS’23
On the Convergence and Sample Complexity Analysis of Deep Q-Networks with ε-Greedy Exploration
S. Zhang, M. Wang, H. Li, M. Liu, P. Chen, S. Lu, S. Liu, K. Murugesan, S. Chaudhury
NeurIPS’23
Robust Mixture-of-Expert Training for Convolutional Neural Networks
Y. Zhang, R. Cai, T. Chen, G. Zhang, H. Zhang, P. Chen, S. Chang, Z. Wang, S. Liu
ICCV’23 (Oral, acceptance rate 2%)
Linearly Constrained Bilevel Optimization: A Smoothed Implicit Gradient Approach
P. Khanduri, I. Tsaknakis, Y. Zhang, J. Liu, S. Liu, J. Zhang, M. Hong
ICML’23
Patch-level Routing in Mixture-of-Experts is Provably Sample-efficient for Convolutional Neural Networks
M. N. R. Chowdhury, S. Zhang, M. Wang, S. Liu, P. Chen
ICML’23
Understanding and Improving Visual Prompting: A Label-Mapping Perspective
A. Chen, Y. Yao, P. Chen, Y. Zhang, S. Liu
CVPR’23
Text-Visual Prompting for Efficient 2D Temporal Video Grounding
Y. Zhang, X. Chen, J. Jia, S. Liu, K. Ding
CVPR’23
What Is Missing in IRM Training and Evaluation? Challenges and Solutions
Y. Zhang, P. Sharma, P. Ram, M. Hong, K. R. Varshney, S. Liu
ICLR’23
Joint Edge-Model Sparse Learning is Provably Efficient for Graph Neural Networks
S. Zhang, M. Wang, P. Chen, S. Liu, S. Lu, M. Liu
ICLR’23
A Theoretical Understanding of Shallow Vision Transformers: Learning, Generalization, and Sample Complexity
H. Li, M. Wang, S. Liu, P. Chen
ICLR’23
TextGrad: Advancing Robustness Evaluation in NLP by Gradient-Driven Optimization
B. Hou, J. Jia*, Y. Zhang*, G. Zhang*, Y. Zhang, S. Liu, S. Chang
ICLR’23
Data-Model-Circuit Tri-Design for Ultra-Light Video Intelligence on Edge Devices
Y. Zhang*, A. K. Kamath*, Q. Wu*, Z. Fan*, W. Chen, Z. Wang, S. Chang, S. Liu, C. Hao
ASPDAC’23
CLAWSAT: Towards Both Robust and Accurate Code Models
J. Jia*, S. Srikant*, T. Mitrovska, C. Gan, S. Chang, S. Liu, U.-M. O’Reilly
SANER’23
Advancing Model Pruning via Bi-level Optimization
Y. Zhang*, Y. Yao*, P. Ram, P. Zhao, T. Chen, M. Hong, Y. Wang, S. Liu
NeurIPS’22
Fairness Reprogramming
G. Zhang*, Y. Zhang*, Y. Zhang, W. Fan, Q. Li, S. Liu, S. Chang
NeurIPS’22
Distributed Adversarial Training to Robustify Deep Neural Networks at Scale
G. Zhang*, S. Lu*, Y. Zhang*, X. Chen, P. Chen, Q. Fan, L. Martie, L. Horesh, M. Hong, S. Liu
UAI’22 (Oral, Best Paper Runner-up Award)
Revisiting and Advancing Fast Adversarial Training through the Lens of Bi-level Optimization
Y. Zhang*, G. Zhang*, P. Khanduri, M. Hong, S. Chang, S. Liu
ICML’22
Data-Efficient Double-Win Lottery Tickets from Robust Pre-training
T. Chen, H. Zhang, Z. Zhang, S. Chang, S. Liu, P. Chen, Z. Wang
ICML’22
Revisiting Contrastive Learning through the Lens of Neighborhood Component Analysis: an Integrated Framework
C.-Y. Ko, J. Mohapatra, S. Liu, P. Chen, L. Daniel, L. Weng
ICML’22
Generalization Guarantee of Training Graph Convolutional Networks with Graph Topology Sampling
H. Li, M. Wang, S. Liu, P. Chen, J. Xiong
ICML’22
A Word is Worth A Thousand Dollars: Adversarial Attack on Tweets Fools Stock Prediction
Y. Xie, D. Wang, P. Chen, J. Xiong, S. Liu, S. Koyejo
NAACL’22
Learning to Generate Image Source-Agnostic Universal Adversarial Perturbations
P. Zhao, P. Ram, S. Lu, Y. Yao, D. Bouneffouf, X. Lin, S. Liu
IJCAI’22
Proactive Image Manipulation Detection
V. Asnani, X. Yin, T. Hassner, S. Liu, X. Liu
CVPR’22
Quarantine: Sparsity Can Uncover the Trojan Attack Trigger for Free
T. Chen*, Z. Zhang*, Y. Zhang*, S. Chang, S. Liu, Z. Wang
CVPR’22
Reverse Engineering of Imperceptible Adversarial Image Perturbations
Y. Gong*, Y. Yao*, Y. Li, Y. Zhang, X. Liu, X. Lin, S. Liu
ICLR’22
How to Robustify Black-Box ML Models? A Zeroth-Order Optimization Perspective
Y. Zhang, Y. Yao, J. Jia, J. Yi, M. Hong, S. Chang, S. Liu
ICLR’22 (Spotlight, acceptance rate 5%)
How does unlabeled data improve generalization in self-training? A one-hidden-layer theoretical analysis
S. Zhang, M. Wang, S. Liu, P.-Y. Chen, J. Xiong
ICLR’22
Optimizer Amalgamation
T. Huang, T. Chen, S. Liu, S. Chang, L. Amini, Z. Wang
ICLR’22
Decentralized Learning for Overparameterized Problems: A Multi-Agent Kernel Approximation Approach
P. Khanduri, H. Yang, M. Hong, J. Liu, H.-T. Wai, S. Liu
ICLR’22
Sign-MAML: Efficient Model-Agnostic Meta-Learning by SignSGD
C. Fan, P. Ram, S. Liu
NeurIPS Workshop MetaLearn, 2021
Sanity Checks for Lottery Tickets: Does Your Winning Ticket Really Win the Jackpot?
X. Ma, G. Yuan, X. Shen, T. Chen, X. Chen, X. Chen, N. Liu, M. Qin, S. Liu, Z. Wang, Y. Wang
NeurIPS’21
When Does Contrastive Learning Preserve Adversarial Robustness from Pretraining to Finetuning?
L. Fan, S. Liu, P.-Y. Chen, G. Zhang, C. Gan
NeurIPS’21
MEST: Accurate and Fast Memory-Economic Sparse Training Framework on the Edge
G. Yuan, X. Ma, W. Niu, Z. Li, Z. Kong, N. Liu, Y. Gong, Z. Zhan, C. He, Q. Jin, S. Wang, M. Qin, B. Ren, Y. Wang, S. Liu, X. Lin
NeurIPS’21 (Spotlight, acceptance rate 3%)
Adversarial Attack Generation Empowered by Min-Max Optimization
J. Wang, T. Zhang, S. Liu, P.-Y. Chen, J. Xu, M. Fardad, B. Li
NeurIPS’21
Why Lottery Ticket Wins? A Theoretical Perspective of Sample Complexity on Sparse Neural Networks
S. Zhang, M. Wang, S. Liu, P.-Y. Chen, J. Xiong
NeurIPS’21
Lottery Ticket Preserves Weight Correlation: Is It Desirable or Not?
N. Liu, G. Yuan, Z. Che, X. Shen, X. Ma, Q. Jin, J. Ren, J. Tang, S. Liu, Y. Wang
ICML’21
NPAS: A Compiler-aware Framework of Unified Network Pruning and Architecture Search for Beyond Real-Time Mobile Acceleration
Z. Li, G. Yuan, W. Niu, Y. Li, P. Zhao, Y. Cai, X. Shen, Z. Zhan, Z. Kong, Q. Jin, Z. Chen, S. Liu, K. Yang, Y. Wang, B. Ren, X. Lin
CVPR’21 (Oral, acceptance rate 4%)
The Lottery Tickets Hypothesis for Supervised and Self-supervised Pre-training in Computer Vision Models
T. Chen, J. Frankle, S. Chang, S. Liu, Y. Zhang, M. Carbin, Z. Wang
CVPR’21
Hidden Cost of Randomized Smoothing
J. Mohapatra, C.-Y. Ko, L. Weng, P.-Y. Chen, S. Liu, L. Daniel
AISTATS’21
Rate-improved Inexact Augmented Lagrangian Method for Constrained Nonconvex Optimization
Z. Li, P.-Y. Chen, S. Liu, S. Lu, Y. Xu
AISTATS’21
On Fast Adversarial Robustness Adaptation in Model-Agnostic Meta-Learning
R. Wang, K. Xu, S. Liu, P.-Y. Chen, T.-W. Weng, C. Gan, M. Wang
ICLR’21
Robust Overfitting May be Mitigated by Properly Learned Smoothening
T. Chen, Z. Zhang, S. Liu, S. Chang, Z. Wang
ICLR’21
Long Live the Lottery: The Existence of Winning Tickets in Lifelong Learning
T. Chen, Z. Zhang, S. Liu, S. Chang, Z. Wang
ICLR’21
Generating Adversarial Computer Programs using Optimized Obfuscations
S. Srikant, S. Liu, T. Mitrovska, S. Chang, Q. Fan, G. Zhang, U.-M. O’Reilly
ICLR’21
Fast Training of Provably Robust Neural Networks by SingleProp
A. Boopathy, L. Weng, S. Liu, P.-Y. Chen, G. Zhang, L. Daniel
AAAI’21
Self-Progressing Robust Training
M. Cheng, P.-Y. Chen, S. Liu, S. Chang, C.-J. Hsieh, P. Das
AAAI’21
RT3D: Achieving Real-Time Execution of 3D Convolutional Neural Networks on Mobile Devices
W. Niu, M. Sun, Z. Li, J.-A. Chen, J. Guan, X. Shen, Y. Wang, S. Liu, X. Lin, B. Ren
AAAI’21
The Lottery Ticket Hypothesis for the Pre-trained BERT Networks
T. Chen, J. Frankle, S. Chang, S. Liu, Y. Zhang, Z. Wang, M. Carbin
NeurIPS’20
Training Stronger Baselines for Learning to Optimize
T. Chen, W. Zhang, J. Zhou, S. Chang, S. Liu, L. Amini, Z. Wang
NeurIPS’20 (Spotlight, acceptance rate 3%)
Higher-Order Certification for Randomized Smoothing
J. Mohapatra, C.-Y. Ko, L. Weng, P.-Y. Chen, S. Liu, L. Daniel
NeurIPS’20 (Spotlight, acceptance rate 3%)
Adversarial T-shirt! Evading Person Detectors in A Physical World
K. Xu, G. Zhang, S. Liu, Q. Fan, M. Sun, H. Chen, P.-Y. Chen, Y. Wang, X. Lin
ECCV’20 (Spotlight, acceptance rate 5%)
Practical Detection of Trojan Neural Networks: Data-Limited and Data-Free Cases
R. Wang, G. Zhang, S. Liu, P.-Y. Chen, J. Xiong, M. Wang
ECCV’20
An Image Enhancing Pattern-based Sparsity for Real-time Inference on Mobile Devices
X. Ma, W. Niu, T. Zhang, S. Liu, S. Lin, H. Li, W. Wen, X. Chen, J. Tang, K. Ma, B. Ren, Y. Wang
ECCV’20
Fast Learning of Graph Neural Networks with Guaranteed Generalizability: One-hidden-layer Case
S. Zhang, M. Wang, S. Liu, P.-Y. Chen, J. Xiong
ICML’20
Is There a Trade-Off Between Fairness and Accuracy? A Perspective Using Mismatched Hypothesis Testing
S. Dutta, D. Wei, H. Yueksel, P.-Y. Chen, S. Liu, K. R. Varshney
ICML’20
Proper Network Interpretability Helps Adversarial Robustness in Classification
A. Boopathy, S. Liu, G. Zhang, C. Liu, P.-Y. Chen, S. Chang, L. Daniel
ICML’20
Min-Max Optimization without Gradients: Convergence and Applications to Adversarial ML
S. Liu*, S. Lu*, X. Chen*, Y. Feng*, K. Xu*, A. Al-Dujaili*, M. Hong, U.-M. O’Reilly
ICML’20
Adversarial Robustness: From Self-Supervised Pretraining to Fine-Tuning
T. Chen, S. Liu, S. Chang, Y. Cheng, L. Amini, Z. Wang
CVPR’20
Towards Verifying Robustness of Neural Networks against Semantic Perturbations
J. Mohapatra, L. Weng, P.-Y. Chen, S. Liu, L. Daniel
CVPR’20 (Oral, acceptance rate 5%)
Sign-OPT: A Query-Efficient Hard-label Adversarial Attack
M. Cheng, S. Singh, P.-Y. Chen, S. Liu, C.-J. Hsieh
ICLR’20
An ADMM Based Framework for AutoML Pipeline Configuration
S. Liu*, P. Ram*, D. Vijaykeerthy, D. Bouneffouf, G. Bramble, H. Samulowitz, D. Wang, A. Conn, A. Gray
AAAI’20
Towards Certificated Model Robustness Against Weight Perturbations
L. Weng*, P. Zhao*, S. Liu, P.-Y. Chen, X. Lin, L. Daniel
AAAI’20
ZO-AdaMM: Zeroth-Order Adaptive Momentum Method for Black-Box Optimization
X. Chen*, S. Liu*, K. Xu*, X. Li*, X. Lin, M. Hong, D. Cox
NeurIPS’19
Generation of Low Distortion Adversarial Attacks via Convex Programming
T. Zhang, S. Liu, Y. Wang, M. Fardad
ICDM’19
On the Design of Black-box Adversarial Examples by Leveraging Gradient-free Optimization and Operator Splitting Method
P. Zhao, S. Liu, P.-Y. Chen, N. Hoang, K. Xu, B. Kailkhura, X. Lin
ICCV’19
Adversarial Robustness vs. Model Compression, or Both?
S. Ye*, K. Xu*, S. Liu, H. Cheng, J.-H. Lambrechts, H. Zhang, A. Zhou, K. Ma, Y. Wang, X. Lin
ICCV’19
Topology Attack and Defense for Graph Neural Networks: An Optimization Perspective
K. Xu*, H. Chen*, S. Liu, P.-Y. Chen, T.-W. Weng, M. Hong, X. Lin
IJCAI’19
Fast Incremental von Neumann Graph Entropy Computation: Theory, Algorithm, and Applications
P.-Y. Chen, L. Wu, S. Liu, I. Rajapakse
ICML’19
signSGD via Zeroth-Order Oracle
S. Liu, P.-Y. Chen, X. Chen, M. Hong
ICLR’19
Structured Adversarial Attack: Towards General Implementation and Better Interpretability
K. Xu*, S. Liu*, P. Zhao, P.-Y. Chen, H. Zhang, D. Erdogmus, Y. Wang, X. Lin
ICLR’19
On the Convergence of A Class of Adam-Type Algorithms for Non-Convex Optimization
X. Chen, S. Liu, R. Sun, M. Hong
ICLR’19
CNN-Cert: An Efficient Framework for Certifying Robustness of Convolutional Neural Networks
A. Boopathy, L. Weng, P.-Y. Chen, S. Liu, L. Daniel
AAAI’19
AutoZOOM: Autoencoder-based Zeroth Order Optimization Method for Attacking Black-box Neural Networks
C.-C. Tu*, P. Ting*, P.-Y. Chen*, S. Liu, H. Zhang, J. Yi, C.-J. Hsieh, S.-M. Cheng
AAAI’19
Zeroth-Order Stochastic Variance Reduction for Nonconvex Optimization
S. Liu, B. Kailkhura, P.-Y. Chen, P. Ting, S. Chang, L. Amini
NeurIPS’18
Ultra-Fast Robust Compressive Sensing Based on Memristor Crossbars
S. Liu, A. Ren, Y. Wang, P. K. Varshney
ICASSP’17 (Best Student Paper Award, Third Place)
Journal Papers
An Introduction to Bilevel Optimization: Foundations and Applications in Signal Processing and Machine Learning
Y. Zhang, P. Khanduri, I. Tsaknakis, Y. Yao, M. Hong, S. Liu
IEEE Signal Processing Magazine, vol. 41, no. 1, pp. 38-59, Jan. 2024
Improved Linear Convergence of Training CNNs With Generalizability Guarantees: A One-Hidden-Layer Case
S. Zhang, M. Wang, J. Xiong, S. Liu, P.-Y. Chen
IEEE Transactions on Neural Networks and Learning Systems, 2020
A Primer on Zeroth-Order Optimization in Signal Processing and Machine Learning
S. Liu, P.-Y. Chen, B. Kailkhura, G. Zhang, A. O. Hero, P. K. Varshney
IEEE Signal Processing Magazine, 2020
On sparse identification of complex dynamical systems: A study on discovering influential reactions in chemical reaction networks
F. Harirchi, D. Kim, O. Khalil, S. Liu, P. Elvati, M. Baranwal, A. O. Hero, A. Violi
Fuel, Elsevier, 2020
Genome Architecture Mediates Transcriptional Control of Human Myogenic Reprogramming
S. Liu, H. Chen, S. Ronquist, L. Seaman, N. Ceglia, W. Meixner, L. A. Muir, P.-Y. Chen, G. Higgins, P. Baldi, S. Smale, A. O. Hero, I. Rajapakse
iScience, Cell, 2018
Optimal Sensor Collaboration for Parameter Tracking Using Energy Harvesting Sensors
S. Zhang, S. Liu, V. Sharma, P. K. Varshney
IEEE Transactions on Signal Processing, 2018
A Memristor-Based Optimization Framework for Artificial Intelligence Applications
S. Liu, Y. Wang, M. Fardad, P. K. Varshney
IEEE Circuits and Systems Magazine, 2018
Accelerated Distributed Dual Averaging Over Evolving Networks of Growing Connectivity
S. Liu, P.-Y. Chen, A. O. Hero
IEEE Transactions on Signal Processing, 2018
Bias-Variance Tradeoff of Graph Laplacian Regularizer
P.-Y. Chen, S. Liu
IEEE Signal Processing Letters, 2017
Chromosome conformation and gene expression patterns differ profoundly in human fibroblasts grown in spheroids versus monolayers
H. Chen, L. Seaman, S. Liu, T. Ried, I. Rajapakse
Nucleus, 2017
Optimized Sensor Collaboration for Estimation of Temporally Correlated Parameters
S. Liu, S. Kar, M. Fardad, P. K. Varshney
IEEE Transactions on Signal Processing, 2017
Measurement Matrix Design for Compressed Detection With Secrecy Guarantees
B. Kailkhura, S. Liu, T. Wimalajeewa, P. K. Varshney
IEEE Wireless Communications Letters, 2016
Sensor Selection for Estimation with Correlated Measurement Noise
S. Liu, S. P. Chepuri, M. Fardad, E. Masazade, G. Leus, P. K. Varshney
IEEE Transactions on Signal Processing, 2016
Sparsity-Aware Sensor Collaboration for Linear Coherent Estimation
S. Liu, S. Kar, M. Fardad, P. K. Varshney
IEEE Transactions on Signal Processing, 2015
Energy-Aware Sensor Selection in Field Reconstruction
S. Liu, A. Vempaty, M. Fardad, E. Masazade, P. K. Varshney
IEEE Signal Processing Letters, 2014
Sensor Selection for Nonlinear Systems in Large Sensor Networks
X. Shen, S. Liu, P. K. Varshney
IEEE Transactions on Aerospace and Electronic Systems, 2014
Optimal Periodic Sensor Scheduling in Networks of Dynamical Systems
S. Liu, M. Fardad, E. Masazade, P. K. Varshney
IEEE Transactions on Signal Processing, 2014