
Introduction

  1. ✅ We performed SFT on the bloom-560m model; for this data volume and model size, the results are very good!
  2. 🚀 The training and inference code is fully shared; see https://github.com/yuanzhoulvpi2017/zero_nlp/tree/main/chinese_bloom

Personal impressions

  1. 🎯 The bloom family of models has great potential for Chinese; after supervised fine-tuning, the results are impressive!
  2. 🔄 The bloom models cover Chinese, English, code, French, Spanish, and more, so they also work well for translation and code generation. (Related tutorials will be shared later.)
  3. 😛 This bloom-560m model mainly serves to validate the full training pipeline; you can seamlessly switch to larger sizes such as 3b or 7b (a minimal loading sketch follows this list).
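
To illustrate point 3, here is a minimal loading sketch for a larger checkpoint. It is not part of the original card: the torch_dtype and device_map settings are assumptions for fitting a 3b/7b model in memory, and device_map="auto" additionally requires the accelerate package.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Swap the checkpoint string to move from 560m to a larger Bloom model.
checkpoint = "bigscience/bloomz-3b"  # or "bigscience/bloom-7b1"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(
    checkpoint,
    torch_dtype=torch.float16,  # half precision to reduce memory use (assumption)
    device_map="auto",          # requires the accelerate package (assumption)
)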

How to use

from typing import Optional

from transformers import AutoModelForCausalLM, AutoTokenizer


# Alternative checkpoints that work with the same code: "bigscience/bloomz-3b",
# "bigscience/bloom-7b1", or a local checkpoint such as "output_dir/checkpoint-8260".
checkpoint = "yuanzhoulvpi/chinese_bloom_560m"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

PROMPT_DICT = {
    "prompt_input": (
        "Below is an instruction that describes a task, paired with an input that provides further context. "
        "Write a response that appropriately completes the request.\n\n"
        "### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response:"
    ),
    "prompt_no_input": (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        "### Instruction:\n{instruction}\n\n### Response:"
    ),
}

def generate_input(instruction: Optional[str] = None, input_str: Optional[str] = None) -> str:
    """Build the Alpaca-style prompt, with or without the optional input field."""
    if input_str is None:
        return PROMPT_DICT['prompt_no_input'].format_map({'instruction': instruction})
    else:
        return PROMPT_DICT['prompt_input'].format_map({'instruction': instruction, 'input': input_str})


# Run generation a few times for the same prompt and inspect the decoded output.
for i in range(5):
    print("*" * 80)

    inputs = tokenizer.encode(generate_input(instruction="你是谁"), return_tensors="pt")
    outputs = model.generate(
        inputs,
        num_beams=3,
        max_new_tokens=512,
        do_sample=False,
        top_k=10,
        penalty_alpha=0.6,
        temperature=0.8,
        repetition_penalty=1.2,
    )
    print(tokenizer.decode(outputs[0]))
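
The snippet above only exercises the prompt_no_input template. As a complementary sketch that is not part of the original card, the prompt_input template defined above can be used when the task comes with extra context; the instruction and input strings below are made-up examples.

# Sketch (assumption): driving the prompt_input template with an extra context field.
# The instruction/input strings are illustrative examples, not from the model card.
text = generate_input(
    instruction="把下面的句子翻译成英文",  # "Translate the following sentence into English"
    input_str="水杯是人们日常生活中必不可少的物品之一。",
)
inputs = tokenizer.encode(text, return_tensors="pt")
outputs = model.generate(inputs, num_beams=3, max_new_tokens=128, repetition_penalty=1.2)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))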

Example 1

  1. Output before fine-tuning (prompt: 写一篇关于水杯的文章, "Write an article about water cups"):
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
写一篇关于水杯的文章

### Response: 杯子是用来喝水用的</s>
  2. Output after fine-tuning (the model now produces a structured, multi-point article in Chinese):
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
写一篇关于水杯的文章

### Response:
水杯是人们日常生活中必不可少的物品之一,它能够帮助人们盛放、储存和运输水。以下是一些有关水杯的详细信息:
1. 形状:水杯通常有圆形、方形、长方形等不同形状。
2. 容量:水杯通常有10毫升、20毫升、30毫升、40毫升等不同容量。
3. 材质:水杯通常有玻璃、陶瓷、金属等不同材质。
4. 颜色:水杯通常有红色、蓝色、黄色、绿色等不同颜色。
5. 用途:水杯通常用于盛放、储存和运输水,如饮料、果汁、茶、咖啡等。
6. 功能:水杯可以用来盛放、储存和运输各种液体,如饮料、果汁、茶、咖啡等。
7. 用途广泛:水杯不仅用于盛放、储存和运输水,还可以用于制作各种饮料、果汁、茶、咖啡等。
总之,水杯是一个不可或缺的物品,它能够帮助人们盛放、储存和运输水,同时还可以用于制作各种饮料、果汁、茶、咖啡等。</s>

Example 2

  1. Output before fine-tuning (prompt: 你是谁, "Who are you?"):
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
你是谁

### Response: I am a student.</s>
  2. Output after fine-tuning (the model answers that it is an AI language model with no personal identity):
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
你是谁

### Response:我是一个AI语言模型,没有个人身份。</s>
