|
---
license: other
tags:
- merge
- mergekit
- lazymergekit
base_model:
- nbeerbower/llama-3-stella-8B
- Hastagaras/llama-3-8b-okay
- nbeerbower/llama-3-gutenberg-8B
- openchat/openchat-3.6-8b-20240522
- Kukedlc/NeuralLLaMa-3-8b-DT-v0.1
- cstr/llama3-8b-spaetzle-v20
- mlabonne/ChimeraLlama-3-8B-v3
- flammenai/Mahou-1.1-llama3-8B
- KingNish/KingNish-Llama3-8b
---
|
**ExLlamaV2** quant (**exl2** / **3.5 bpw**) made with ExLlamaV2 v0.0.21
|
|
|
Other EXL2 quants: |
|
| **Quant (bpw)** | **Model size** | **lm_head (bits)** |
| :-------------: | :------------: | :----------------: |
| **[2.2](https://huggingface.co/Zoyd/mlabonne_Daredevil-8B-2_2bpw_exl2)** | 3250 MB | 6 |
| **[2.5](https://huggingface.co/Zoyd/mlabonne_Daredevil-8B-2_5bpw_exl2)** | 3478 MB | 6 |
| **[3.0](https://huggingface.co/Zoyd/mlabonne_Daredevil-8B-3_0bpw_exl2)** | 3894 MB | 6 |
| **[3.5](https://huggingface.co/Zoyd/mlabonne_Daredevil-8B-3_5bpw_exl2)** | 4311 MB | 6 |
| **[3.75](https://huggingface.co/Zoyd/mlabonne_Daredevil-8B-3_75bpw_exl2)** | 4518 MB | 6 |
| **[4.0](https://huggingface.co/Zoyd/mlabonne_Daredevil-8B-4_0bpw_exl2)** | 4727 MB | 6 |
| **[4.25](https://huggingface.co/Zoyd/mlabonne_Daredevil-8B-4_25bpw_exl2)** | 4935 MB | 6 |
| **[5.0](https://huggingface.co/Zoyd/mlabonne_Daredevil-8B-5_0bpw_exl2)** | 5556 MB | 6 |
| **[6.0](https://huggingface.co/Zoyd/mlabonne_Daredevil-8B-6_0bpw_exl2)** | 6497 MB | 8 |
| **[6.5](https://huggingface.co/Zoyd/mlabonne_Daredevil-8B-6_5bpw_exl2)** | 6893 MB | 8 |
| **[8.0](https://huggingface.co/Zoyd/mlabonne_Daredevil-8B-8_0bpw_exl2)** | 8125 MB | 8 |
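
To run one of these quants locally, here is a minimal ExLlamaV2 loading sketch. The directory `./Daredevil-8B-exl2` is a placeholder for wherever you downloaded the quant, and the sampler settings are only examples:

```python
# Minimal sketch, assuming the exllamav2 package is installed and the
# quant has been downloaded to ./Daredevil-8B-exl2 (placeholder path).
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "./Daredevil-8B-exl2"
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)  # split layers across available GPUs

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
generator.warmup()

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.7
settings.top_p = 0.95

print(generator.generate_simple("What is a large language model?", settings, 256))
```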
|
|
|
|
|
# Daredevil-8B |
|
|
|
**tl;dr: It looks like a successful merge** |
|
|
|
Daredevil-8B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): |
|
* [nbeerbower/llama-3-stella-8B](https://huggingface.co/nbeerbower/llama-3-stella-8B)
* [Hastagaras/llama-3-8b-okay](https://huggingface.co/Hastagaras/llama-3-8b-okay)
* [nbeerbower/llama-3-gutenberg-8B](https://huggingface.co/nbeerbower/llama-3-gutenberg-8B)
* [openchat/openchat-3.6-8b-20240522](https://huggingface.co/openchat/openchat-3.6-8b-20240522)
* [Kukedlc/NeuralLLaMa-3-8b-DT-v0.1](https://huggingface.co/Kukedlc/NeuralLLaMa-3-8b-DT-v0.1)
* [cstr/llama3-8b-spaetzle-v20](https://huggingface.co/cstr/llama3-8b-spaetzle-v20)
* [mlabonne/ChimeraLlama-3-8B-v3](https://huggingface.co/mlabonne/ChimeraLlama-3-8B-v3)
* [flammenai/Mahou-1.1-llama3-8B](https://huggingface.co/flammenai/Mahou-1.1-llama3-8B)
* [KingNish/KingNish-Llama3-8b](https://huggingface.co/KingNish/KingNish-Llama3-8b)
|
|
|
## 🔎 Applications
|
|
|
It is a highly capable, censored general-purpose model. Since it is a Llama 3 merge, you might want to add `<|eot_id|>` as an additional stop string.
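
With `transformers`, one way to do this is to collect the extra terminator id and pass it to generation, following the pattern from Meta's own Llama 3 examples; a minimal sketch:

```python
# Sketch: stop generation on both the default EOS token and
# Llama 3's end-of-turn token <|eot_id|>.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mlabonne/Daredevil-8B")

terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>"),
]

# Pass to any generate()/pipeline() call, e.g.:
# outputs = pipeline(prompt, max_new_tokens=256, eos_token_id=terminators)
```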
|
|
|
## ⚡ Quantization
|
|
|
* **GGUF**: https://huggingface.co/mlabonne/Daredevil-8B-GGUF |
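
If you prefer the GGUF files, a minimal `llama-cpp-python` sketch might look like the following; the filename is hypothetical, so substitute whichever quant you download from the repo above:

```python
# Minimal sketch, assuming llama-cpp-python is installed and a GGUF file
# (hypothetical filename) has been downloaded from the repo above.
from llama_cpp import Llama

llm = Llama(
    model_path="daredevil-8b.Q5_K_M.gguf",  # hypothetical filename
    n_ctx=8192,       # context window
    n_gpu_layers=-1,  # offload all layers to GPU if available
)

output = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is a large language model?"}],
    max_tokens=256,
)
print(output["choices"][0]["message"]["content"])
```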
|
|
|
## 🏆 Evaluation
|
|
|
| Model | Average | AGIEval | GPT4All | TruthfulQA | Bigbench |
| ----- | ------: | ------: | ------: | ---------: | -------: |
| [**mlabonne/Daredevil-8B**](https://huggingface.co/mlabonne/Daredevil-8B) [📄](https://gist.github.com/mlabonne/080f9c5f153ea57a7ab7d932cf896f21) | **55.87** | **44.13** | **73.52** | **59.05** | **46.77** |
| [mlabonne/ChimeraLlama-3-8B](https://huggingface.co/mlabonne/Chimera-8B) [📄](https://gist.github.com/mlabonne/28d31153628dccf781b74f8071c7c7e4) | 51.58 | 39.12 | 71.81 | 52.4 | 42.98 |
| [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) [📄](https://gist.github.com/mlabonne/8329284d86035e6019edb11eb0933628) | 51.34 | 41.22 | 69.86 | 51.65 | 42.64 |
| [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) [📄](https://gist.github.com/mlabonne/616b6245137a9cfc4ea80e4c6e55d847) | 45.42 | 31.1 | 69.95 | 43.91 | 36.7 |
|
|
|
## 🌳 Model family tree
|
|
|
![image/png](https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/LplqNg6iXHm_JXfX02Aj1.png) |
|
|
|
## 🧩 Configuration
|
|
|
```yaml
models:
  - model: NousResearch/Meta-Llama-3-8B
    # No parameters necessary for base model
  - model: nbeerbower/llama-3-stella-8B
    parameters:
      density: 0.6
      weight: 0.16
  - model: Hastagaras/llama-3-8b-okay
    parameters:
      density: 0.56
      weight: 0.1
  - model: nbeerbower/llama-3-gutenberg-8B
    parameters:
      density: 0.6
      weight: 0.18
  - model: openchat/openchat-3.6-8b-20240522
    parameters:
      density: 0.56
      weight: 0.12
  - model: Kukedlc/NeuralLLaMa-3-8b-DT-v0.1
    parameters:
      density: 0.58
      weight: 0.18
  - model: cstr/llama3-8b-spaetzle-v20
    parameters:
      density: 0.56
      weight: 0.08
  - model: mlabonne/ChimeraLlama-3-8B-v3
    parameters:
      density: 0.56
      weight: 0.08
  - model: flammenai/Mahou-1.1-llama3-8B
    parameters:
      density: 0.55
      weight: 0.05
  - model: KingNish/KingNish-Llama3-8b
    parameters:
      density: 0.55
      weight: 0.05
merge_method: dare_ties
base_model: NousResearch/Meta-Llama-3-8B
dtype: bfloat16
```
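
In this DARE-TIES config, `density` is the fraction of each fine-tune's delta weights that survives DARE's random pruning, and `weight` scales that model's contribution to the merged deltas. To reproduce the merge, save the YAML above as `config.yaml` and run it through mergekit; a sketch along the lines of what the LazyMergekit notebook does under the hood:

```python
# Sketch of running the merge, assuming mergekit is installed
# and the YAML above is saved as config.yaml.
import yaml
import torch
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml", encoding="utf-8") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    merge_config,
    out_path="./Daredevil-8B",  # output directory for the merged model
    options=MergeOptions(
        cuda=torch.cuda.is_available(),
        copy_tokenizer=True,
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```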
|
|
|
## 💻 Usage
|
|
|
```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "mlabonne/Daredevil-8B"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Build the prompt with the model's chat template.
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Load the model across available devices in fp16.
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```