Yue Liu

Yue Liu (刘悦) is a Ph.D. student at the National University of Singapore (NUS).

Email  /  Google Scholar  /  Twitter  /  Github  /  GF

[profile photo]
News
  • [2024.09] Four papers have been accepted by NeurIPS 2024.
  • [2024.09] One paper has been accepted by IEEE T-KDE.
  • [2024.08] Received the President's Graduate Fellowship from NUS.
  • [2024.07] Two papers have been accepted by ACM MM 2024.
  • [2024.07] One paper has been accepted by IEEE T-NNLS.
  • [2024.06] One paper has been accepted by IEEE T-PAMI.
  • [2024.05] One paper has been accepted by IEEE T-NNLS.
  • [2024.05] One paper has been accepted by ICML 2024 (Spotlight).
  • [2024.01] One paper has been accepted by IEEE T-KDE.
  • [2024.01] Two papers have been accepted by ICLR 2024 (one Spotlight).
  • [2024.01] One paper has been accepted by IEEE T-NNLS.
  • [2023.12] Three papers have been accepted by AAAI 2024 (Oral).
  • [2023.12] One paper has been accepted by ICDE 2024.
  • [2023.11] Received the China National Scholarship.
  • [2023.09] One paper has been accepted by NeurIPS 2023.
  • [2023.07] Four papers have been accepted by ACM MM 2023.
  • [2023.07] One paper has been accepted by IEEE T-NNLS.
  • [2023.06] One paper has been accepted by IEEE T-KDE.
  • [2023.04] One paper has been accepted by ICML 2023.
  • [2023.04] One paper has been accepted by IEEE T-NNLS.
  • [2023.04] One paper has been accepted by SIGIR 2023.
  • [2023.01] One paper has been accepted by ICLR 2023.
  • [2022.12] Received the China National Scholarship.
  • [2022.11] Three papers have been accepted by AAAI 2023.
  • [2022.06] One paper has been accepted by ACM MM 2022.
  • [2022.04] One paper has been accepted by IJCAI 2022.
  • [2021.12] One paper has been accepted by AAAI 2022.
  • [2020.12] Received the China National Scholarship.

Research

My research mainly focuses on self-supervised learning and its applications in graph learning (e.g., graph clustering, KG embedding), foundation models (e.g., LLMs, MLLMs), recommendation systems, code intelligence, and bioinformatics.
* denotes equal contribution. Selected papers are listed below.

FlipAttack: Jailbreak LLMs via Flipping
Yue Liu, Xiaoxin He, Miao Xiong, Jinlan Fu, Shumin Deng, Bryan Hooi
arXiv, 2024
Paper / Code

We propose a simple yet effective jailbreak attack, termed FlipAttack, against black-box LLMs that requires only one query. By analyzing LLMs' understanding mechanism, we design four flipping modes to disguise the attack, and then guide LLMs to understand and execute the harmful behaviors. Experiments on 8 LLMs and 5 guard models demonstrate its superiority.
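
The flipping idea can be pictured with a toy sketch. The snippet below is an illustration under my own assumptions, not the paper's implementation: it shows two hypothetical flipping operations (character-order and word-order reversal) that disguise a prompt while keeping it mechanically recoverable.

```python
# Hypothetical flipping operations (illustrative only; not FlipAttack's exact modes).
def flip_chars(prompt: str) -> str:
    """Reverse the character order of the whole prompt."""
    return prompt[::-1]

def flip_words(prompt: str) -> str:
    """Reverse the word order while keeping each word intact."""
    return " ".join(reversed(prompt.split()))

if __name__ == "__main__":
    text = "example benign request"
    print(flip_chars(text))  # 'tseuqer ngineb elpmaxe'
    print(flip_words(text))  # 'request benign example'
```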

Identify Then Recommend: Towards Unsupervised Group Recommendation
Yue Liu, S. Zhu, Y. Ma, J. Ma, Wenliang Zhong
NeurIPS, 2024
Paper / Code

We propose an unsupervised group recommendation method, named ITR, which first identifies user groups and then conducts self-supervised group recommendation via two pretext tasks. Results on both open datasets and industrial data demonstrate its effectiveness.

End-to-end Learnable Clustering for Intent Learning in Recommendation
Yue Liu*, Shihao Zhu*, J. Xia, Y. Ma, J. Ma, W. Zhong, G. Zhang, K. Zhang, Xinwang Liu
NeurIPS, 2024
Paper / Code

We propose an intent learning method termed ELCRec, which leverages end-to-end learnable clustering and cluster-assisted contrastive learning to improve recommendation. Results on both open benchmarks and industrial engines demonstrate its superiority.

Improved Dual Correlation Reduction Network with Affinity Recovery
Yue Liu*, Sihang Zhou*, X. Yang, Xinwang Liu, W. Tu, L. Li, Xin Xu, Fuchun Sun
IEEE T-NNLS, 2024
Paper / Code

We explore the underlying causes of representation collapse in deep graph clustering and improve the dual correlation reduction network with an affinity recovery strategy.

Deep Temporal Graph Clustering
Meng Liu, Yue Liu, K. Liang, S. Wang, S. Zhou, Xinwang Liu
ICLR, 2024; selected as Best Paper of the China Computational Power Conference, 2024.
Paper / Code

We extend deep graph clustering to temporal graphs, which are more practical in real-world scenarios, and propose a general framework, TGC, built on clustering distribution assignment and adjacency reconstruction.

At Which Training Stage Does Code Data Help LLM Reasoning?
Yingwei Ma*, Yue Liu*, Y. Yu, Y. Jiang, C. Wang, S. Li
ICLR (Spotlight), 2024
Paper / Code

We explore at which training stage code data helps LLM reasoning. Extensive experiments and insights deepen our understanding of LLMs' reasoning capability and its applications, e.g., scientific question answering and legal support.

Reinforcement Graph Clustering with Unknown Cluster Number
Yue Liu, Ke Liang, Jun Xia, X. Yang, S. Zhou, Meng Liu, Xinwang Liu, Stan Z. Li
ACM MM, 2023
Paper / Code

We show that the promising performance of deep graph clustering methods relies on the pre-defined cluster number and propose RGC to determine the cluster number via reinforcement learning.

Knowledge Graph Contrastive Learning based on Relation-Symmetrical Structure
Ke Liang*, Yue Liu*, S. Zhou, W. Tu, Y. Wen, X. Yang, X. Dong, Xinwang Liu
IEEE T-KDE (ESI Highly Cited Paper), 2023
Paper / Code

We propose a plug-and-play knowledge graph contrastive learning method named KGE-SymCL by mining the symmetrical structure information in knowledge graphs.

Dink-Net: Neural Clustering on Large Graphs
Yue Liu, K. Liang, Jun Xia, S. Zhou, X. Yang, Xinwang Liu, Stan Z. Li
ICML, 2023
Paper / Project Page / Code

We analyze the drawbacks of existing deep graph clustering methods and scale deep graph clustering to large graphs. The proposed shrink and dilation loss functions optimize the clustering distribution adversarially, enabling mini-batch training without performance degradation.
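
As a rough, hypothetical reading of the blurb above (not Dink-Net's exact formulation), a shrink term can pull each embedding toward its nearest cluster center while a dilation term pushes the centers apart; both can be computed on mini-batches.

```python
# Assumed sketch of batch-friendly "shrink" and "dilation" objectives
# (illustrative only; not the official Dink-Net losses).
import torch
import torch.nn.functional as F

def shrink_loss(z: torch.Tensor, centers: torch.Tensor) -> torch.Tensor:
    """Pull each embedding toward its nearest cluster center (shrink)."""
    dists = torch.cdist(z, centers)        # (batch, k) pairwise distances
    return dists.min(dim=1).values.mean()  # distance to the assigned center

def dilation_loss(centers: torch.Tensor) -> torch.Tensor:
    """Push distinct cluster centers apart (dilation)."""
    sim = F.cosine_similarity(centers.unsqueeze(0), centers.unsqueeze(1), dim=-1)
    return (sim - torch.eye(sim.size(0))).mean()   # penalize off-diagonal similarity

z = torch.randn(32, 16)                            # a mini-batch of node embeddings
centers = torch.randn(4, 16, requires_grad=True)   # learnable cluster centers
loss = shrink_loss(z, centers) + dilation_loss(centers)
loss.backward()
```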

Simple Contrastive Graph Clustering
Yue Liu, X. Yang, S. Zhou, Xinwang Liu, S. Wang, K. Liang, W. Tu, L. Li
IEEE T-NNLS, 2023
Paper / Code

We propose to replace complicated and time-consuming graph data augmentations with parameter-unshared Siamese encoders and node-embedding perturbation.
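
A minimal sketch of these two ingredients, under assumed shapes and a toy noise scale (not the official SCGC code): two encoders with separate parameters produce two views from the same features, and one view is perturbed with Gaussian noise instead of augmenting the graph itself.

```python
# Illustrative sketch: parameter-unshared encoders + embedding perturbation.
import torch
import torch.nn as nn

dim_in, dim_hid, n_nodes = 64, 32, 100
encoder_v1 = nn.Sequential(nn.Linear(dim_in, dim_hid), nn.ReLU(), nn.Linear(dim_hid, dim_hid))
encoder_v2 = nn.Sequential(nn.Linear(dim_in, dim_hid), nn.ReLU(), nn.Linear(dim_hid, dim_hid))

x = torch.randn(n_nodes, dim_in)                           # toy node features
z1 = encoder_v1(x)                                         # view 1: unshared encoder
z2 = encoder_v2(x) + 0.1 * torch.randn(n_nodes, dim_hid)   # view 2: perturbed embeddings
```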

Hard Sample Aware Network for Contrastive Deep Graph Clustering
Yue Liu, X. Yang, S. Zhou, X. Liu, Z. Wang, K. Liang, W. Tu, L. Li, J. Duan, C. Chen
AAAI (Oral & Most Influential AAAI Paper) (13/539) [Link], 2023
Paper / Code

We propose a Hard Sample Aware Network (HSAN) to mine both hard positive and hard negative samples with a comprehensive similarity measure criterion and a general dynamic sample weighting strategy.
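
As a generic illustration of hard-sample weighting (an assumption, not HSAN's exact criterion), a focal-style factor can up-weight dissimilar positive pairs and similar negative pairs.

```python
# Generic hard-sample weighting sketch (assumed; not HSAN's exact scheme).
import torch

def hard_sample_weights(sim: torch.Tensor, positive_mask: torch.Tensor, gamma: float = 2.0):
    """sim: pairwise cosine similarities in [-1, 1]; positive_mask: boolean mask of positive pairs."""
    s = (sim + 1) / 2                     # rescale similarity to [0, 1]
    w_pos = (1 - s) ** gamma              # hard positives: dissimilar positive pairs
    w_neg = s ** gamma                    # hard negatives: similar negative pairs
    return torch.where(positive_mask, w_pos, w_neg)

sim = torch.tensor([[1.0, 0.8], [0.8, 1.0]])
positive_mask = torch.eye(2, dtype=torch.bool)
print(hard_sample_weights(sim, positive_mask))
```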

A Survey of Deep Graph Clustering: Taxonomy, Challenge, and Application
Yue Liu, J. Xia, S. Zhou, S. Wang, X. Guo, X. Yang, K. Liang, W. Tu, Stan Z. Li, X. Liu
arXiv, 2022
Paper / Project Page

Deep graph clustering, which aims to group the nodes of a graph into disjoint clusters, has become a popular research topic. This paper summarizes the taxonomy, challenges, and applications of deep graph clustering. We hope this work serves as a quick guide and helps researchers overcome the challenges in this field.

Deep Graph Clustering via Dual Correlation Reduction
Yue Liu*, Wenxuan Tu*, S. Zhou, X. Liu, L. Song, X. Yang, E. Zhu
AAAI, 2022
Paper / Code

We propose a self-supervised deep graph clustering method termed Dual Correlation Reduction Network (DCRN) that addresses the representation collapse issue by reducing information correlation at both the sample and feature levels.
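
A Barlow-Twins-style sketch of correlation reduction across two views (an assumed simplification, not the exact DCRN objective) pushes the cross-view correlation matrix toward the identity at both the feature level and the sample level.

```python
# Assumed correlation-reduction sketch (illustrative only; not DCRN's exact loss).
import torch
import torch.nn.functional as F

def correlation_reduction(z1: torch.Tensor, z2: torch.Tensor) -> torch.Tensor:
    """z1, z2: (n_samples, n_features) embeddings of the same nodes from two views."""
    def to_identity(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        c = F.normalize(a, dim=1) @ F.normalize(b, dim=1).t()   # cross-view correlation
        return (c - torch.eye(c.size(0))).pow(2).mean()         # drive it toward identity
    feature_level = to_identity(z1.t(), z2.t())   # (features x features)
    sample_level = to_identity(z1, z2)            # (samples x samples)
    return feature_level + sample_level

loss = correlation_reduction(torch.randn(64, 16), torch.randn(64, 16))
```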

"If we knew what it was we were doing, it would not be called research, would it?"                                                     --Albert Einstein

Experience
Service
  • Reviewer for ICML'24, ICLR'24/25, NeurIPS'23/24, AAAI'23/24/25, AISTATS'25
  • Reviewer for CVPR'24
  • Reviewer for EMNLP'23, COLING'25
  • Reviewer for KDD'24/25, WWW'24, CIKM'23/24, WSDM'23/24/25, LoG'24, IEEE T-KDE
  • Reviewer for ACM MM'23/24, IEEE T-MM
  • Reviewer for PRCV'22/23, IEEE/CAA JAS, IEEE T-NNLS, Pattern Recognition
Award
  • President's Graduate Fellowship, National University of Singapore. [Link]
  • China National Scholarship. [PDF]
  • China National Scholarship. [PDF]
  • China National Scholarship. [PDF]

Design and source code from Jon Barron's website