
This model is based on bigscience/bloomz-7b1-mt. To make it more accessible and efficient for Chinese use cases, we pruned its original vocabulary from 250,880 tokens down to 46,145 tokens using Chinese corpus data, following the same approach as bloom-6b4-zh. The smaller vocabulary significantly reduces the GPU memory needed to run the model and brings the total parameter count down to about 6.4 billion.

Based on bigscience/bloomz-7b1-mt, the embedding layer is pruned to 46,145 tokens, keeping mainly the Chinese-related token mappings. After pruning, the model has about 6.4B parameters.
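
For reference, below is a minimal sketch of this kind of vocabulary pruning. It is an illustration, not the exact script used to produce this checkpoint: corpus.txt is a hypothetical path to a plain-text Chinese corpus, and building the matching re-indexed tokenizer files is omitted.

from transformers import BloomTokenizerFast, BloomForCausalLM

tokenizer = BloomTokenizerFast.from_pretrained('bigscience/bloomz-7b1-mt')
model = BloomForCausalLM.from_pretrained('bigscience/bloomz-7b1-mt')

# Collect the token ids that actually occur in the target corpus,
# plus the tokenizer's special tokens.
kept_ids = set(tokenizer.all_special_ids)
with open('corpus.txt', encoding='utf-8') as f:  # hypothetical corpus path
    for line in f:
        kept_ids.update(tokenizer.encode(line))
kept_ids = sorted(kept_ids)

# Slice the embedding table down to the kept ids. BLOOM ties the lm_head
# weights to the input embeddings, so one table covers both.
old_weights = model.get_input_embeddings().weight.data  # [250880, hidden]
model.resize_token_embeddings(len(kept_ids))
model.get_input_embeddings().weight.data.copy_(old_weights[kept_ids])
model.tie_weights()

model.save_pretrained('bloomz-pruned-zh')  # tokenizer re-indexing not shown here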

How to use

from transformers import BloomTokenizerFast, BloomForCausalLM

# Load the pruned tokenizer and the 6.4B-parameter model from the Hub.
tokenizer = BloomTokenizerFast.from_pretrained('enze/bloomz-6b4-zh')
model = BloomForCausalLM.from_pretrained('enze/bloomz-6b4-zh')

# Generate a completion for a Chinese prompt ("The capital of China is").
print(tokenizer.batch_decode(model.generate(tokenizer.encode('中国的首都是', return_tensors='pt'))))
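
To cut GPU memory further, the weights can be loaded in half precision using the standard transformers loading options. This is a generic recipe rather than something specific to this checkpoint; device_map='auto' requires the accelerate package.

import torch
from transformers import BloomForCausalLM

model = BloomForCausalLM.from_pretrained(
    'enze/bloomz-6b4-zh',
    torch_dtype=torch.float16,  # halves the memory footprint of the weights
    device_map='auto',          # requires accelerate; places layers on available devices
)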