News
- [2025.05] One paper has been accepted by ACL 2025.
- [2025.05] One paper has been accepted by KDD 2025.
- [2025.05] Four papers have been accepted by ICML 2025 (1 Spotlight).
- [2025.03] One paper has been accepted by IEEE T-PAMI.
- [2025.01] One paper has been accepted by ICLR 2025.
- [2025.01] One paper has been accepted by WWW 2025 (Oral).
- [2024.12] One paper has been accepted by OFC 2025 (Oral).
- [2024.12] One paper has been accepted by AAAI 2025 (Oral).
- [2024.09] Four papers have been accepted by NeurIPS 2024.
- [2024.09] One paper has been accepted by IEEE T-KDE.
- [2024.08] Received President's Graduate Fellowship from NUS.
- [2024.07] Two papers have been accepted by ACM MM 2024.
- [2024.07] One paper has been accepted by IEEE T-NNLS.
- [2024.06] One paper has been accepted by IEEE T-PAMI.
- [2024.05] One paper has been accepted by IEEE T-NNLS.
- [2024.05] One paper has been accepted by ICML 2024 (Spotlight).
- [2024.01] One paper has been accepted by IEEE T-KDE.
- [2024.01] Two papers have been accepted by ICLR 2024 (1 Spotlight).
- [2024.01] One paper has been accepted by IEEE T-NNLS.
- [2023.12] Three papers have been accepted by AAAI 2024 (Oral).
- [2023.12] One paper has been accepted by ICDE 2024.
- [2023.11] Received China National Scholarship.
- [2023.09] One paper has been accepted by NeurIPS 2023.
- [2023.07] Four papers have been accepted by ACM MM 2023.
- [2023.07] One paper has been accepted by IEEE T-NNLS.
- [2023.06] One paper has been accepted by IEEE T-KDE.
- [2023.04] One paper has been accepted by ICML 2023.
- [2023.04] One paper has been accepted by IEEE T-NNLS.
- [2023.04] One paper has been accepted by SIGIR 2023.
- [2023.01] One paper has been accepted by ICLR 2023.
- [2022.12] Received China National Scholarship.
- [2022.11] Three papers have been accepted by AAAI 2023.
- [2022.06] One paper has been accepted by ACM MM 2022.
- [2022.04] One paper has been accepted by IJCAI 2022.
- [2021.12] One paper has been accepted by AAAI 2022.
- [2020.12] Received China National Scholarship.
|
Research
My research mainly focuses on self-supervised learning and its applications in graph learning (e.g., graph clustering, KG embedding), foundation models (e.g., LLMs, MLLMs), recommendation systems, code intelligence, and bioinformatics.
* denotes equal contribution. Selected papers are listed below.
|
|
GuardReasoner-VL: Safeguarding VLMs via Reinforced Reasoning
Yue Liu,
Shengfang Zhai,
Mingzhe Du,
Yulin Chen,
Tri Cao,
Hongcheng Gao,
Cheng Wang,
Xinfeng Li,
Kun Wang,
Junfeng Fang,
Jiaheng Zhang,
Bryan Hooi
arXiv, 2025
Paper
/
Code
We propose a new VLM safeguard termed GuardReasoner-VL, which incentivizes the guard model to reason deliberately before making moderation decisions via online RL. Experiments on 14 multi-modal benchmarks demonstrate its superiority.
|
|
FlowReasoner: Reinforcing Query-Level Meta-Agents
Hongcheng Gao*,
Yue Liu*,
Y. He,
L. Dou,
C. Du,
Z. Deng,
Bryan Hooi,
Min Lin,
Tianyu Pang
arXiv, 2025
Paper
/
Code
We propose a reasoning-based meta-agent termed FlowReasoner to automate the design of query-level multi-agent systems, i.e., one system per query, using distillation and reinforcement learning from external execution feedback.
|
|
Efficient Inference for Large Reasoning Models: A Survey
Yue Liu*,
J. Wu*,
Y. He*,
H. Gao,
H. Chen,
B. Bi,
Jiaheng Zhang,
Zhiqi Huang,
Bryan Hooi
arXiv, 2025
Paper
/
Project
We conduct a comprehensive survey on efficient inference for large reasoning models (LRMs). We categorize existing methods into two main categories: explicit compact CoT and implicit latent CoT. We also summarize the open challenges and highlight directions for further improvement.
|
|
GuardReasoner: Towards Reasoning-based LLM Safeguards
Yue Liu,
H. Gao,
S. Zhai,
J. Xia,
T. Wu,
Z. Xue,
Y. Chen,
K. Kawaguchi,
J. Zhang,
Bryan Hooi
ICLR FM-Wild Workshop, 2025
Paper
/
Code
/
Model
/
Data
We propose a new LLM safeguard termed GuardReasoner by guiding it to learn to reason. It improves reasoning ability, explainability, and generalizability via Reasoning SFT and Hard-Sample DPO. Experiments on 13 benchmarks across 3 guardrail tasks demonstrate its superiority. The data, code, and models (1B, 3B, 8B) are released.
|
|
FlipAttack: Jailbreak LLMs via Flipping
Yue Liu,
Xiaoxin He,
Miao Xiong,
Jinlan Fu,
Shumin Deng,
Bryan Hooi
ICML, 2025
Paper
/
Code
We propose a simple yet effective jailbreak attack termed FlipAttack against black-box LLMs, requiring only 1 query. By analyzing LLMs' understanding mechanism, we design 4 flipping modes to disguise the attack. Then, we guide LLMs to understand and execute the harmful behaviors. Experiments on 8 LLMs and 5 guards demonstrate its superiority.
|
|
Identify Then Recommend: Towards Unsupervised Group Recommendation
Yue Liu,
S. Zhu,
T. Yang,
J. Ma,
Wenliang Zhong
NeurIPS, 2024
Paper
/
Code
We propose an unsupervised group recommendation method named ITR, which first identifies user groups and then conducts self-supervised group recommendation via two pretext tasks. Results on both open data and industrial data show its effectiveness.
|
|
End-to-end Learnable Clustering for Intent Learning in Recommendation
Yue Liu*,
Shihao Zhu*,
J. Xia,
Y. Ma,
J. Ma,
W. Zhong,
G. Zhang,
K. Zhang,
Xinwang Liu
NeurIPS, 2024
Paper
/
Code
We propose an intent learning method termed ELCRec, which leverages end-to-end learnable clustering and cluster-assisted contrastive learning to improve recommendation. Results on both open benchmarks and industrial engines demonstrate its superiority.
|
|
Improved Dual Correlation Reduction Network with Affinity Recovery
Yue Liu*,
Sihang Zhou*,
X. Yang,
Xinwang Liu,
W. Tu,
L. Li,
Xin Xu,
Fuchun Sun
IEEE T-NNLS, 2024
Paper
/
Code
We explore the underlying reasons for representation collapse in deep graph clustering and improve the dual correlation reduction network with an affinity recovery strategy.
|
|
Deep Temporal Graph Clustering
Meng Liu,
Yue Liu,
K. Liang,
S. Wang,
S. Zhou,
Xinwang Liu
ICLR, 2024; Selected as Best Paper of China Computational Power Conference, 2024.
Paper
/
Code
We aim to extend deep graph clustering to temporal graphs, which are more practical in real-world scenarios. We propose a general framework, TGC, based on clustering distribution assignment and adjacency reconstruction.
|
|
At Which Training Stage Does Code Data Help LLM Reasoning?
Yingwei Ma*,
Yue Liu*,
Y. Yu,
Y. Jiang,
C. Wang,
S. Li
ICLR (Spotlight), 2024
Paper
/
Code
We explore at which training stage code data can help LLMs reason. The extensive experiments and insights deepen our understanding of LLMs' reasoning capability and the corresponding applications, e.g., scientific question answering, legal support, etc.
|
|
Reinforcement Graph Clustering with Unknown Cluster Number
Yue Liu,
Ke Liang,
Jun Xia,
X. Yang,
S. Zhou,
Meng Liu,
Xinwang Liu,
Stan Z. Li
ACM MM, 2023
Paper
/
Code
We show that the promising performance of deep graph clustering methods relies on the pre-defined cluster number and propose RGC to determine the cluster number via reinforcement learning.
|
|
Knowledge Graph Contrastive Learning based on Relation-Symmetrical Structure
Ke Liang*,
Yue Liu*,
S. Zhou,
W. Tu,
Y. Wen,
X. Yang,
X. Dong,
Xinwang Liu
IEEE T-KDE (ESI Highly Cited Paper), 2023
Paper
/
Code
We propose a plug-and-play knowledge graph contrastive learning method named KGE-SymCL by mining the symmetrical structure information in knowledge graphs.
|
|
Dink-Net: Neural Clustering on Large Graphs
Yue Liu,
K. Liang,
Jun Xia,
S. Zhou,
X. Yang,
Xinwang Liu,
Stan Z. Li
ICML, 2023
Paper
/
Code
We analyze the drawbacks of existing deep graph clustering methods and scale deep graph clustering to large graphs. The proposed shrink and dilation loss functions optimize the clustering distribution adversarially, allowing batch training without performance degradation.
|
|
Simple Contrastive Graph Clustering
Yue Liu,
X. Yang,
S. Zhou,
Xinwang Liu,
S. Wang,
K. Liang,
W. Tu,
L. Li
IEEE T-NNLS, 2023
Paper
/
Code
We propose to replace complicated and time-consuming graph data augmentations by designing parameter-unshared Siamese encoders and perturbing node embeddings.
|
|
Hard Sample Aware Network for Contrastive Deep Graph Clustering
Yue Liu,
X. Yang,
S. Zhou,
X. Liu,
Z. Wang,
K. Liang,
W. Tu,
L. Li,
J. Duan,
C. Chen
AAAI (Oral & Most Influential AAAI Paper) (13/539) [Link], 2023
Paper
/
Code
We propose a Hard Sample Aware Network (HSAN) to mine both hard positive samples and hard negative samples with a comprehensive similarity measure criterion and a general dynamic sample weighting strategy.
|
|
A Survey of Deep Graph Clustering: Taxonomy, Challenge, and Application
Yue Liu,
J. Xia,
S. Zhou,
S. Wang,
X. Guo,
X. Yang,
K. Liang,
W. Tu,
Stan Z. Li,
X. Liu
arXiv, 2022
Paper
/
Project
Deep graph clustering, which aims to group the nodes of a graph into disjoint clusters, has become a hot research topic. This paper summarizes the taxonomy, challenges, and applications of deep graph clustering. We hope this work serves as a quick guide and helps researchers overcome the challenges in this field.
|
|
Deep Graph Clustering via Dual Correlation Reduction
Yue Liu*,
Wenxuan Tu*,
S. Zhou,
X. Liu,
L. Song,
X. Yang,
E. Zhu
AAAI, 2022
Paper
/
Code
We propose a self-supervised deep graph clustering method termed Dual Correlation Reduction Network (DCRN) to address the representation collapse issue by reducing information correlation at both the sample and feature levels.
|
"If we knew what it was we were doing, it would not be called research, would it?" -- Albert Einstein
|
- Associate Member @ Sea AI Lab, advised by Prof. Tianyu Pang
- Research Assistant @ Westlake University, working with Jun Xia, advised by Prof. Stan Z. Li
- Research Assistant @ Institute of Automation, working with Yuheng Ji, advised by Prof. Xiaolong Zheng
- Senior Recommendation Algorithm Engineer @ Alipay Co., Ltd.
- Master of Engineering @ National University of Defense Technology, advised by Prof. Xinwang Liu
- Recommendation Algorithm Engineer Intern @ Ant Group Co., Ltd.
- Financial Risk Control Algorithm Engineer Intern @ Meituan Co., Ltd.
- 3D Vision Algorithm Engineer Intern @ SpeedBot Robotics Co., Ltd., advised by Prof. Kai Xu
- Bachelor of Engineering @ Northeastern University
- Reviewer for ICML, ICLR, NeurIPS
- Reviewer for CVPR, ICCV
- Reviewer for ACL, EMNLP, COLING
- Reviewer for AAAI, IJCAI, ACM MM, AISTATS
- Reviewer for KDD, WWW, CIKM, WSDM, LoG
- Reviewer for IEEE T-KDE, IEEE T-NNLS, IEEE T-MM
- President's Graduate Fellowship, National University of Singapore. [Link]
- China National Scholarship. [PDF]
- China National Scholarship. [PDF]
- China National Scholarship. [PDF]
Design and source code adapted from Jon Barron's website.
|