
Chinese-CodeLlama-7B-SFT-V1

We implemented SFT based on our Chinese-CodeLlama-7B-PT. The dataset comes from CodeLlama-2-20k, which we translated into Chinese using Google Translate.

In addition, we designed an appropriate Chinese prompt template for coding tasks, and during the fine-tuning stage we applied memory-efficient attention, which saved a significant amount of GPU memory.
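The card does not say which memory-efficient attention implementation was used; one common option in PyTorch is the fused `scaled_dot_product_attention` kernel, which avoids materializing the full attention-score matrix. A minimal sketch under that assumption:

```python
import torch
import torch.nn.functional as F

# Toy tensors shaped (batch, heads, seq_len, head_dim).
q = torch.randn(2, 8, 128, 64)
k = torch.randn(2, 8, 128, 64)
v = torch.randn(2, 8, 128, 64)

# The fused kernel computes softmax(QK^T / sqrt(d)) V without storing the
# full (seq_len x seq_len) score matrix, unlike a naive implementation.
out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
```

The output has the same shape as the query tensor, so it drops into a transformer block wherever a naive attention computation was used before.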

The Chinese prompt template used is as follows:

PROMPT_TEMPLATE = (
  # English gloss: "Below is an instruction that describes a task, paired
  # with an input that provides further context. Please give a response
  # that fulfills the request as well as possible."
  "下面是描述一项任务的指令,并且与一则输入配对用来提供更多的上下文。请给出尽可能满足请求的回答.\n"
  "### 指令:\n{instruction}\n### 输入:\n{input}\n### 回答:\n"
)
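At inference time the template's `{instruction}` and `{input}` placeholders are filled before the prompt is passed to the model. A small illustrative helper (the function name `build_prompt` is ours, not part of the released code):

```python
PROMPT_TEMPLATE = (
    "下面是描述一项任务的指令,并且与一则输入配对用来提供更多的上下文。请给出尽可能满足请求的回答.\n"
    "### 指令:\n{instruction}\n### 输入:\n{input}\n### 回答:\n"
)

def build_prompt(instruction: str, input_text: str = "") -> str:
    # Fill both placeholders; an empty input leaves the 输入 section blank.
    return PROMPT_TEMPLATE.format(instruction=instruction, input=input_text)

prompt = build_prompt("用Python写一个函数,计算两个整数的和。")
```

The resulting string ends with the `### 回答:` marker, so the model's generation continues directly as the answer.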

If you are interested in our work, please follow our future progress.
