---
tags:
- merge
- mergekit
base_model:
- cstr/llama3-8b-spaetzle-v31
- cstr/llama3-8b-spaetzle-v28
- cstr/llama3-8b-spaetzle-v26
- cstr/llama3-8b-spaetzle-v20
license: llama3
language:
- de
- en
---
# llama3-8b-spaetzle-v33
This is a [mergekit](https://github.com/arcee-ai/mergekit) dare_ties merge of the following models:
* [cstr/llama3-8b-spaetzle-v31](https://huggingface.co/cstr/llama3-8b-spaetzle-v31)
* [cstr/llama3-8b-spaetzle-v28](https://huggingface.co/cstr/llama3-8b-spaetzle-v28)
* [cstr/llama3-8b-spaetzle-v26](https://huggingface.co/cstr/llama3-8b-spaetzle-v26)
* [cstr/llama3-8b-spaetzle-v20](https://huggingface.co/cstr/llama3-8b-spaetzle-v20)
It aims to strike a balance between usefulness on German and English tasks.
For GGUF quants see [cstr/llama3-8b-spaetzle-v33-GGUF](https://huggingface.co/cstr/llama3-8b-spaetzle-v33-GGUF).
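For a quick local test of the GGUF quants, something like the following should work with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python); this is a minimal sketch, and the quant filename is an assumption, so check the GGUF repo for the actual file:

```python
from llama_cpp import Llama

# Assumed filename: adjust to the actual quant file in the GGUF repo.
llm = Llama(
    model_path="llama3-8b-spaetzle-v33.Q4_K_M.gguf",
    n_ctx=8192,              # Llama 3 supports an 8k context window
    chat_format="llama-3",   # use llama-cpp-python's built-in Llama 3 template
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Was ist ein großes Sprachmodell?"}],
    max_tokens=256,
    temperature=0.7,
)
print(out["choices"][0]["message"]["content"])
```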
## Benchmarks
As q4km quants (old version, without the pre-tokenizer fix), it achieves 66.59 on EQ-Bench v2_de (171 of 171 parseable) and 73.17 on EQ-Bench v2 (English, 171/171).
For the int4-inc quants:
| Benchmark  | Score |
|------------|-------|
| Average    | 66.13 |
| ARC-c      | 59.81 |
| ARC-e      | 85.27 |
| BoolQ      | 84.10 |
| HellaSwag  | 62.47 |
| LAMBADA    | 73.28 |
| MMLU       | 64.11 |
| OpenBookQA | 37.20 |
| PIQA       | 80.30 |
| TruthfulQA | 50.21 |
| Winogrande | 73.72 |
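These are standard tasks from EleutherAI's [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness), so scores in this ballpark should be reproducible with it; the sketch below is an assumption about task variants, harness version, and settings, not the exact pipeline used for the table:

```python
import lm_eval  # pip install lm-eval

# Task names follow lm-eval v0.4 conventions; they may not match the
# exact variants behind the table above.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=cstr/llama3-8b-spaetzle-v33,dtype=bfloat16",
    tasks=["arc_challenge", "arc_easy", "boolq", "hellaswag",
           "lambada_openai", "mmlu", "openbookqa", "piqa",
           "truthfulqa_mc2", "winogrande"],
    batch_size=8,
)
for task, metrics in results["results"].items():
    print(task, metrics)
```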
### Nous
| Model | Average | AGIEval | GPT4All | TruthfulQA | Bigbench |
|---|---:|---:|---:|---:|---:|
| [mlabonne/Daredevil-8B](https://huggingface.co/mlabonne/Daredevil-8B) [📄](https://gist.github.com/mlabonne/080f9c5f153ea57a7ab7d932cf896f21) | 55.87 | 44.13 | 73.52 | 59.05 | 46.77 |
| [**cstr/llama3-8b-spaetzle-v33**](https://huggingface.co/cstr/llama3-8b-spaetzle-v33) [📄](https://gist.github.com/CrispStrobe/0047d967ddc4bb50064c9722b9f934a5) | 55.26 | 42.61 | 73.90 | 59.28 | 45.25 |
| [mlabonne/Daredevil-8B-abliterated](https://huggingface.co/mlabonne/Daredevil-8B-abliterated) [📄](https://gist.github.com/mlabonne/32cdd8460804662c856bcb2a20acd49e) | 55.06 | 43.29 | 73.33 | 57.47 | 46.17 |
| [NousResearch/Hermes-2-Theta-Llama-3-8B](https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-8B) [📄](https://gist.github.com/mlabonne/5df2a3051dd6eb3368a77b684635dc05) | 54.28 | 43.90 | 72.62 | 56.36 | 44.23 |
| [openchat/openchat-3.6-8b-20240522](https://huggingface.co/openchat/openchat-3.6-8b-20240522) [📄](https://gist.github.com/mlabonne/95eef8e8d26b7b17910dcb78e1c95f4a) | 53.49 | 44.03 | 73.67 | 49.78 | 46.48 |
| [mlabonne/Llama-3-8B-Instruct-abliterated-dpomix](https://huggingface.co/mlabonne/Llama-3-8B-Instruct-abliterated-dpomix) [📄](https://gist.github.com/mlabonne/d711548df70e2c04771cc68ab33fe2b9) | 52.26 | 41.60 | 69.95 | 54.22 | 43.26 |
| [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) [📄](https://gist.github.com/mlabonne/8329284d86035e6019edb11eb0933628) | 51.34 | 41.22 | 69.86 | 51.65 | 42.64 |
| [failspy/Meta-Llama-3-8B-Instruct-abliterated-v3](https://huggingface.co/failspy/Meta-Llama-3-8B-Instruct-abliterated-v3) [📄](https://gist.github.com/mlabonne/f46cce0262443365e4cce2b6fa7507fc) | 51.21 | 40.23 | 69.50 | 52.44 | 42.69 |
| [mlabonne/OrpoLlama-3-8B](https://huggingface.co/mlabonne/OrpoLlama-3-8B) [📄](https://gist.github.com/mlabonne/22896a1ae164859931cc8f4858c97f6f) | 48.63 | 34.17 | 70.59 | 52.39 | 37.36 |
| [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) [📄](https://gist.github.com/mlabonne/616b6245137a9cfc4ea80e4c6e55d847) | 45.42 | 31.10 | 69.95 | 43.91 | 36.70 |
## 𧩠Configuration
```yaml
models:
  - model: cstr/llama3-8b-spaetzle-v20
    # no parameters necessary for base model
  - model: cstr/llama3-8b-spaetzle-v31
    parameters:
      density: 0.65
      weight: 0.25
  - model: cstr/llama3-8b-spaetzle-v28
    parameters:
      density: 0.65
      weight: 0.25
  - model: cstr/llama3-8b-spaetzle-v26
    parameters:
      density: 0.65
      weight: 0.15
merge_method: dare_ties
base_model: cstr/llama3-8b-spaetzle-v20
parameters:
  int8_mask: true
dtype: bfloat16
random_seed: 0
tokenizer_source: base
```
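As a minimal sketch, saving the config above as `config.yaml` and feeding it to mergekit's CLI should reproduce the merge; the output directory name is arbitrary, and flags may vary with the mergekit version:

```python
!pip install -qU mergekit
!mergekit-yaml config.yaml merge --copy-tokenizer --lazy-unpickle
```

With `dare_ties`, each contributing model's delta from the base is randomly pruned (here keeping a fraction of `density` = 0.65), the surviving deltas are rescaled, sign conflicts are resolved TIES-style, and the weighted result is added back onto the base model.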
## π» Usage
```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "cstr/llama3-8b-spaetzle-v33"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Render the conversation with the model's Llama 3 chat template.
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```