RecGPT: Generative Pre-training for Text-based Recommendation
We present the first domain-adapted and fully trained large language model, RecGPT-7B, and its instruction-following variant, RecGPT-7B-Instruct, for text-based recommendation. Experimental results on rating prediction and sequential recommendation tasks show that our model, RecGPT-7B-Instruct, outperforms previous strong baselines. The general architecture and experimental results of RecGPT can be found in our paper:
@inproceedings{RecGPT,
  title     = {{RecGPT: Generative Pre-training for Text-based Recommendation}},
  author    = {Hoang Ngo and Dat Quoc Nguyen},
  booktitle = {Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics},
  year      = {2024}
}
We publicly release the RecGPT models along with their pre-training and fine-tuning datasets.
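As a minimal sketch of how the released checkpoints could be used, the snippet below loads RecGPT-7B-Instruct with the Hugging Face transformers library. The model identifier and the prompt wording are illustrative assumptions, not the official usage; please consult RecGPT's homepage for the released checkpoint names and prompt templates.

```python
# Minimal sketch: loading RecGPT-7B-Instruct with Hugging Face transformers.
# The model identifier and prompt format are assumptions for illustration only;
# see RecGPT's homepage for the official checkpoint names and prompt templates.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "vinai/RecGPT-7B-Instruct"  # assumed Hub identifier

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Illustrative rating-prediction style instruction (wording is hypothetical).
prompt = (
    "Given the user's review history, predict the rating (1-5) the user "
    "would give to the target item.\n"
    "User history: ...\n"
    "Target item: ...\n"
    "Rating:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=8, do_sample=False)
# Print only the newly generated tokens (the predicted rating).
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```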
Please cite our paper whenever RecGPT or the datasets are used to help produce published results or are incorporated into other software.
For further information or requests, please go to RecGPT's homepage!