Update README.md
README.md CHANGED
@@ -38,4 +38,19 @@ We use the [Alpaca fine-tuning script](https://github.com/tatsu-lab/stanford_alp
 
 Although this project aims to better align current LMs with social norms, inappropriate content and inherent biases in the training data will still impair the alignment of the model.
 
-The model should not be used directly in any application, without a prior assessment of safety and fairness concerns specific to the application.
+The model should not be used directly in any application, without a prior assessment of safety and fairness concerns specific to the application.
+
+# Citation
+
+Please cite our paper if you use the data or code in this repo:
+
+```bibtex
+@misc{liu2023sociallyaligned,
+      title={Training Socially Aligned Language Models in Simulated Human Society},
+      author={Ruibo Liu and Ruixin Yang and Chenyan Jia and Ge Zhang and Denny Zhou and Andrew M. Dai and Diyi Yang and Soroush Vosoughi},
+      year={2023},
+      eprint={2305.16960},
+      archivePrefix={arXiv},
+      primaryClass={cs.CL}
+}
+```