{"paper_url": "https://huggingface.co/papers/2309.13356", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [The Moral Machine Experiment on Large Language Models](https://huggingface.co/papers/2309.05958) (2023)\n* [Rethinking Machine Ethics - Can LLMs Perform Moral Reasoning through the Lens of Moral Theories?](https://huggingface.co/papers/2308.15399) (2023)\n* [The Confidence-Competence Gap in Large Language Models: A Cognitive Study](https://huggingface.co/papers/2309.16145) (2023)\n* [From Instructions to Intrinsic Human Values - A Survey of Alignment Goals for Big Models](https://huggingface.co/papers/2308.12014) (2023)\n* [Investigating the Efficacy of Large Language Models in Reflective Assessment Methods through Chain of Thoughts Prompting](https://huggingface.co/papers/2310.00272) (2023)\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space"} |