
BLOSSOM-v5.1-34b

💻 GitHub · 🚀 Blossom Chat Demo

Introduction

Blossom is a conversational large language model, fine-tuned on the mixed Blossom Orca/Wizard/Chat/Math datasets on top of the Yi-1.5-34B pre-trained model. Blossom has robust general capabilities and context comprehension. The high-quality Chinese and English datasets used for training have been open-sourced.

Training was conducted in two stages. The first stage used 40K Wizard, 40K Orca, and 10K Math single-turn instruction data, trained for 1 epoch; the second stage used a 10K Blossom chat multi-turn dialogue dataset plus a random 10% sample of the first-stage data, trained for 3 epochs.

Inference

Inference is performed in the form of dialogue continuation.

Single-turn dialogue

A chat between a human and an artificial intelligence bot. The bot gives helpful, detailed, and polite answers to the human's questions.
|Human|: hello
|Bot|: 
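The single-turn template above can be assembled programmatically. A minimal sketch (the function name `build_prompt` is illustrative, not part of the model's tooling; only the system line and role tags come from the template above):

```python
# System line and role tags taken from the single-turn template above.
SYSTEM = ("A chat between a human and an artificial intelligence bot. "
          "The bot gives helpful, detailed, and polite answers to the "
          "human's questions.")

def build_prompt(question: str) -> str:
    """Return the continuation prompt for a single user question.

    The model completes the text after the final `|Bot|: ` tag.
    """
    return f"{SYSTEM}\n|Human|: {question}\n|Bot|: "

print(build_prompt("hello"))
```

The resulting string is passed to the model as-is; generation continues from the trailing `|Bot|: ` tag.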

Multi-turn dialogue

A chat between a human and an artificial intelligence bot. The bot gives helpful, detailed, and polite answers to the human's questions.
|Human|: hello
|Bot|: Hello! How can I assist you today?<|endoftext|>
|Human|: Generate a random number using python
|Bot|: 

Note: append <|endoftext|> to the end of each Bot turn in the conversation history, as shown above.

Downloads last month: 3,549
Format: Safetensors
Model size: 34.4B params
Tensor type: BF16

Model tree for Azure99/blossom-v5.1-34b

Finetunes: 1 model
Merges: 2 models
Quantizations: 1 model
