A Survey on Sequential Recommendation in the Era of Large Language Models
The Hong Kong University of Science and Technology (Guangzhou)
Data Science and Analytics Thrust
PhD Qualifying Examination
By Mr. Peilin ZHOU
Abstract
Sequential recommendation systems aim to predict the next item a user will interact with based on their historical interaction sequence. Traditional models, including recurrent neural networks (RNNs) and attention mechanisms, often fail to fully capture the rich semantic information inherent in user behavior sequences. The emergence of large language models (LLMs) such as GPT-3.5 and GPT-4 has introduced transformative capabilities to this domain, leveraging vast pre-trained knowledge and superior contextual understanding. These models enhance sequential recommendation systems in multiple ways: serving as feature extractors that provide rich semantic embeddings, acting as data augmentors that generate synthetic user behavior data, and functioning as direct sequential recommenders capable of understanding and generating interaction sequences. This survey comprehensively reviews the integration of LLMs into sequential recommendation, exploring their applications, methodologies, and the current state of research. It also discusses open challenges and outlines potential future research directions in this rapidly evolving field. By synthesizing recent advancements and identifying existing gaps, this survey aims to provide a foundational roadmap for future investigations into LLM-empowered sequential recommendation systems.
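To make the "feature extractor" role concrete, the following is a minimal sketch (not taken from the survey): item titles are embedded with a pretrained text encoder standing in for an LLM embedding API, and candidate items are ranked by similarity to the user's recent history. The encoder name, the toy catalog, and the mean-pooling of the history are illustrative assumptions, not the survey's method.

```python
# Minimal sketch of "LLM as feature extractor" for sequential recommendation.
# Assumptions: sentence-transformers is installed; "all-MiniLM-L6-v2" is a
# placeholder text encoder; mean-pooling the history is a deliberately simple
# stand-in for the RNN/attention sequence encoders mentioned in the abstract.
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder text encoder

catalog = ["wireless mouse", "mechanical keyboard",
           "USB-C hub", "noise-cancelling headphones"]
user_history = ["mechanical keyboard", "wireless mouse"]  # most recent last

# Encode item titles and the user's interaction history into semantic embeddings.
item_emb = encoder.encode(catalog, normalize_embeddings=True)       # (num_items, d)
hist_emb = encoder.encode(user_history, normalize_embeddings=True)  # (seq_len, d)

# Represent the sequence by the mean of its item embeddings, then renormalize.
user_vec = hist_emb.mean(axis=0)
user_vec /= np.linalg.norm(user_vec)

# Score every catalog item by cosine similarity and rank candidates.
scores = item_emb @ user_vec
for idx in np.argsort(-scores):
    print(f"{catalog[idx]}: {scores[idx]:.3f}")
```

In a full system these embeddings would typically initialize or augment the item representations of a downstream sequential model rather than be used for ranking directly.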
PQE Committee
Chairperson: Prof. Xiaowen CHU
Prime Supervisor: Prof. Sung Hun KIM
Co-Supervisor: Prof. Raymond WONG
Examiner: Prof. Wei ZENG
Date
4 June 2024
Time
11:10 - 12:25
Venue
E1-150
Join Link
Zoom Meeting ID: 886 6058 7134
Passcode: dsa2024