
CausalLM 34B β

PROMPT FORMAT: chatml
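
For reference, ChatML wraps each turn in `<|im_start|>` / `<|im_end|>` markers. A minimal sketch using the Transformers chat-template API (assuming the repository's tokenizer ships a ChatML template; the messages are placeholders):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("CausalLM/34b-beta")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]

# Renders the ChatML layout:
#   <|im_start|>system\n...<|im_end|>\n<|im_start|>user\n...<|im_end|>\n<|im_start|>assistant\n
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```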

There are some precision issues with the current model weights. In the next version update, we will roll back some progress and retrain to fix them as soon as possible.

Please note: for now, do not use accelerated inference frameworks such as vLLM; use Transformers for inference instead. Otherwise, due to the precision issues, output quality will be significantly degraded. If you need faster inference, consider the q8_0 quantization with llama.cpp in the meantime (for this model only, it is faster and better than bf16 vLLM), or wait for the official version. This will be fixed in the upcoming version update.
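
A minimal Transformers inference sketch, assuming bf16 weights and enough GPU memory for a 34B model; the dtype, device settings, and sampling parameters are placeholders to adjust:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "CausalLM/34b-beta"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: adjust to your checkpoint/hardware
    device_map="auto",
)

messages = [{"role": "user", "content": "Write a haiku about autumn."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# repetition_penalty is deliberately not set: leave it at its default of 1.0 (see below).
outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.9)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```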

No repetition_penalty! Leave it at the default of 1.0.

Please do not use wikitext for quantization calibration: all wikitext content has been re-aligned against a synthetic dataset, so its distribution differs significantly from the original wikitext.
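
If you quantize with llama.cpp and want importance-matrix calibration, here is a sketch using a custom (non-wikitext) corpus; the file names, paths, and Q4_K_M target are placeholders, and older llama.cpp builds name the tools `imatrix` and `quantize`:

```python
import subprocess

# Build an importance matrix from your own calibration corpus (not wikitext) ...
subprocess.run(
    ["./llama-imatrix", "-m", "34b-beta-f16.gguf", "-f", "calibration.txt", "-o", "imatrix.dat"],
    check=True,
)
# ... then reuse it when quantizing.
subprocess.run(
    ["./llama-quantize", "--imatrix", "imatrix.dat", "34b-beta-f16.gguf", "34b-beta-Q4_K_M.gguf", "Q4_K_M"],
    check=True,
)
```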

MT-Bench: 8.5


Some contamination detection results, if you want to check:

| Models | MMLU (ref: llama7b) |
| --- | --- |
| microsoft/Orca-2-7b | 0.77 |
| mistralai/Mistral-7B-v0.1 | 0.46 |
| CausalLM/34b-beta | 0.38 |
| 01-ai/Yi-6B-200K | 0.3 |

More results TBA.

Data from https://huggingface.co/spaces/Yeyito/llm_contamination_detector

It should be safe: the model was not trained on the benchmark itself, but some contamination of the training dataset is unavoidable due to cost constraints.
