
We trained a Chinese version of Shepherd based on Chinese-LLaMA-2-7B, using two 32GB V100 GPUs for supervised fine-tuning with LoRA.
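
The exact training hyperparameters are not documented here, but for orientation, a minimal sketch of a LoRA setup with the peft library might look like the following. The base model ID, rank, alpha, and target modules are illustrative assumptions, not the values actually used.

```python
# Illustrative sketch only: hyperparameters and base model ID are assumptions.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("hfl/chinese-llama-2-7b")  # assumed base repo
lora_config = LoraConfig(
    r=8,                                  # assumed rank
    lora_alpha=32,                        # assumed scaling factor
    target_modules=["q_proj", "v_proj"],  # common choice for LLaMA-style models
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the LoRA adapters are trainable
```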

We designed a suitable prompt template, and the dataset we used is published in the Hugging Face repository frankminors123/chinese-shepherd-critic-dataset; please see the dataset page for details.
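
The dataset can be loaded with the datasets library; this is a minimal sketch, and the actual split and column names may differ from what it prints.

```python
from datasets import load_dataset

# Load the published critic dataset from the Hugging Face Hub.
ds = load_dataset("frankminors123/chinese-shepherd-critic-dataset")
print(ds)  # inspect the available splits and columns
```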

The prompt template used is as follows:

```python
# English gloss: "Please try to critique the answer to the question below."
# Section headers: 问题 = question, 答案 = answer, 评论 = critique.
PROMPT_TEMPLATE = (
    "请试着评论下面问题的答案.\n"
    "### 问题:\n{question}\n### 答案:\n{answer}\n### 评论:\n"
)
```
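
To show how the template is meant to be filled in, here is a hedged usage sketch with transformers. It assumes this repository hosts weights loadable directly via AutoModelForCausalLM (if it only contains a LoRA adapter, load the base model and apply the adapter with peft instead); the question and answer strings are hypothetical.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "frankminors123/Chinese-Shepherd"  # this repository
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Fill the template (PROMPT_TEMPLATE as defined above) with a hypothetical pair.
prompt = PROMPT_TEMPLATE.format(
    question="什么是机器学习?",                 # "What is machine learning?"
    answer="机器学习是人工智能的一个分支。",     # "Machine learning is a branch of AI."
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The generated continuation after "### 评论:" is the model's critique of the given answer.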