{"paper_url": "https://huggingface.co/papers/2309.10202", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper.\n\nThe following papers were recommended by the Semantic Scholar API:\n\n* [Aligning Language Models with Offline Reinforcement Learning from Human Feedback](https://huggingface.co/papers/2308.12050) (2023)\n* [RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback](https://huggingface.co/papers/2309.00267) (2023)\n* [Efficient RLHF: Reducing the Memory Usage of PPO](https://huggingface.co/papers/2309.00754) (2023)\n* [Qwen Technical Report](https://huggingface.co/papers/2309.16609) (2023)\n* [Pairwise Proximal Policy Optimization: Harnessing Relative Feedback for LLM Alignment](https://huggingface.co/papers/2310.00212) (2023)\n\nPlease give a thumbs up to this comment if you found it helpful!\n\nIf you want recommendations for any paper on Hugging Face, check out [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space."}