{"paper_url": "https://huggingface.co/papers/2307.15217", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [Reinforcement Learning from LLM Feedback to Counteract Goal Misgeneralization](https://huggingface.co/papers/2401.07181) (2024)\n* [Uncertainty-Penalized Reinforcement Learning from Human Feedback with Diverse Reward LoRA Ensembles](https://huggingface.co/papers/2401.00243) (2023)\n* [Improving Reinforcement Learning from Human Feedback with Efficient Reward Model Ensemble](https://huggingface.co/papers/2401.16635) (2024)\n* [Secrets of RLHF in Large Language Models Part II: Reward Modeling](https://huggingface.co/papers/2401.06080) (2024)\n* [West-of-N: Synthetic Preference Generation for Improved Reward Modeling](https://huggingface.co/papers/2401.12086) (2024)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}