---
license: artistic-2.0
datasets:
- frankminors123/chinese-shepherd-critic-dataset
language:
- zh
---
We trained a Chinese version of Shepherd based on Chinese-LLaMA-2-7B, using two 32GB V100 GPUs for LoRA-based supervised fine-tuning.
We designed an appropriate prompt template, and the dataset we used is published in the HuggingFace repository frankminors123/chinese-shepherd-critic-dataset; please see its data page for details.
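The exact prompt template is defined in the dataset repository linked above; as a rough illustration of the general shape of a critic-style template (question, candidate answer, then a slot for the model's critique), a minimal sketch might look like the following. The field labels and layout here are assumptions, not the authors' actual template:

```python
def build_critic_prompt(question: str, answer: str) -> str:
    """Format a (question, answer) pair for a critic model.

    This is a hypothetical template for illustration only; the real
    template used for fine-tuning is published with the dataset.
    """
    return (
        "问题：{q}\n"      # the original question
        "回答：{a}\n"      # the candidate answer to be critiqued
        "评语："           # the model generates its critique after this label
    ).format(q=question, a=answer)

prompt = build_critic_prompt("1+1等于几？", "3")
```

At inference time, the fine-tuned model would continue the text after the final label, producing its critique of the candidate answer.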