Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
# Meta-Llama-3-8B-InitializedEmbeds - GGUF
- Model creator: https://huggingface.co/chargoddard/
- Original model: https://huggingface.co/chargoddard/Meta-Llama-3-8B-InitializedEmbeds/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Meta-Llama-3-8B-InitializedEmbeds.Q2_K.gguf](https://huggingface.co/RichardErkhov/chargoddard_-_Meta-Llama-3-8B-InitializedEmbeds-gguf/blob/main/Meta-Llama-3-8B-InitializedEmbeds.Q2_K.gguf) | Q2_K | 2.96GB |
| [Meta-Llama-3-8B-InitializedEmbeds.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/chargoddard_-_Meta-Llama-3-8B-InitializedEmbeds-gguf/blob/main/Meta-Llama-3-8B-InitializedEmbeds.IQ3_XS.gguf) | IQ3_XS | 3.28GB |
| [Meta-Llama-3-8B-InitializedEmbeds.IQ3_S.gguf](https://huggingface.co/RichardErkhov/chargoddard_-_Meta-Llama-3-8B-InitializedEmbeds-gguf/blob/main/Meta-Llama-3-8B-InitializedEmbeds.IQ3_S.gguf) | IQ3_S | 3.43GB |
| [Meta-Llama-3-8B-InitializedEmbeds.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/chargoddard_-_Meta-Llama-3-8B-InitializedEmbeds-gguf/blob/main/Meta-Llama-3-8B-InitializedEmbeds.Q3_K_S.gguf) | Q3_K_S | 3.41GB |
| [Meta-Llama-3-8B-InitializedEmbeds.IQ3_M.gguf](https://huggingface.co/RichardErkhov/chargoddard_-_Meta-Llama-3-8B-InitializedEmbeds-gguf/blob/main/Meta-Llama-3-8B-InitializedEmbeds.IQ3_M.gguf) | IQ3_M | 3.52GB |
| [Meta-Llama-3-8B-InitializedEmbeds.Q3_K.gguf](https://huggingface.co/RichardErkhov/chargoddard_-_Meta-Llama-3-8B-InitializedEmbeds-gguf/blob/main/Meta-Llama-3-8B-InitializedEmbeds.Q3_K.gguf) | Q3_K | 3.74GB |
| [Meta-Llama-3-8B-InitializedEmbeds.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/chargoddard_-_Meta-Llama-3-8B-InitializedEmbeds-gguf/blob/main/Meta-Llama-3-8B-InitializedEmbeds.Q3_K_M.gguf) | Q3_K_M | 3.74GB |
| [Meta-Llama-3-8B-InitializedEmbeds.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/chargoddard_-_Meta-Llama-3-8B-InitializedEmbeds-gguf/blob/main/Meta-Llama-3-8B-InitializedEmbeds.Q3_K_L.gguf) | Q3_K_L | 4.03GB |
| [Meta-Llama-3-8B-InitializedEmbeds.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/chargoddard_-_Meta-Llama-3-8B-InitializedEmbeds-gguf/blob/main/Meta-Llama-3-8B-InitializedEmbeds.IQ4_XS.gguf) | IQ4_XS | 4.18GB |
| [Meta-Llama-3-8B-InitializedEmbeds.Q4_0.gguf](https://huggingface.co/RichardErkhov/chargoddard_-_Meta-Llama-3-8B-InitializedEmbeds-gguf/blob/main/Meta-Llama-3-8B-InitializedEmbeds.Q4_0.gguf) | Q4_0 | 4.34GB |
| [Meta-Llama-3-8B-InitializedEmbeds.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/chargoddard_-_Meta-Llama-3-8B-InitializedEmbeds-gguf/blob/main/Meta-Llama-3-8B-InitializedEmbeds.IQ4_NL.gguf) | IQ4_NL | 4.38GB |
| [Meta-Llama-3-8B-InitializedEmbeds.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/chargoddard_-_Meta-Llama-3-8B-InitializedEmbeds-gguf/blob/main/Meta-Llama-3-8B-InitializedEmbeds.Q4_K_S.gguf) | Q4_K_S | 4.37GB |
| [Meta-Llama-3-8B-InitializedEmbeds.Q4_K.gguf](https://huggingface.co/RichardErkhov/chargoddard_-_Meta-Llama-3-8B-InitializedEmbeds-gguf/blob/main/Meta-Llama-3-8B-InitializedEmbeds.Q4_K.gguf) | Q4_K | 4.58GB |
| [Meta-Llama-3-8B-InitializedEmbeds.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/chargoddard_-_Meta-Llama-3-8B-InitializedEmbeds-gguf/blob/main/Meta-Llama-3-8B-InitializedEmbeds.Q4_K_M.gguf) | Q4_K_M | 4.58GB |
| [Meta-Llama-3-8B-InitializedEmbeds.Q4_1.gguf](https://huggingface.co/RichardErkhov/chargoddard_-_Meta-Llama-3-8B-InitializedEmbeds-gguf/blob/main/Meta-Llama-3-8B-InitializedEmbeds.Q4_1.gguf) | Q4_1 | 4.78GB |
| [Meta-Llama-3-8B-InitializedEmbeds.Q5_0.gguf](https://huggingface.co/RichardErkhov/chargoddard_-_Meta-Llama-3-8B-InitializedEmbeds-gguf/blob/main/Meta-Llama-3-8B-InitializedEmbeds.Q5_0.gguf) | Q5_0 | 5.21GB |
| [Meta-Llama-3-8B-InitializedEmbeds.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/chargoddard_-_Meta-Llama-3-8B-InitializedEmbeds-gguf/blob/main/Meta-Llama-3-8B-InitializedEmbeds.Q5_K_S.gguf) | Q5_K_S | 5.21GB |
| [Meta-Llama-3-8B-InitializedEmbeds.Q5_K.gguf](https://huggingface.co/RichardErkhov/chargoddard_-_Meta-Llama-3-8B-InitializedEmbeds-gguf/blob/main/Meta-Llama-3-8B-InitializedEmbeds.Q5_K.gguf) | Q5_K | 5.34GB |
| [Meta-Llama-3-8B-InitializedEmbeds.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/chargoddard_-_Meta-Llama-3-8B-InitializedEmbeds-gguf/blob/main/Meta-Llama-3-8B-InitializedEmbeds.Q5_K_M.gguf) | Q5_K_M | 5.34GB |
| [Meta-Llama-3-8B-InitializedEmbeds.Q5_1.gguf](https://huggingface.co/RichardErkhov/chargoddard_-_Meta-Llama-3-8B-InitializedEmbeds-gguf/blob/main/Meta-Llama-3-8B-InitializedEmbeds.Q5_1.gguf) | Q5_1 | 5.65GB |
| [Meta-Llama-3-8B-InitializedEmbeds.Q6_K.gguf](https://huggingface.co/RichardErkhov/chargoddard_-_Meta-Llama-3-8B-InitializedEmbeds-gguf/blob/main/Meta-Llama-3-8B-InitializedEmbeds.Q6_K.gguf) | Q6_K | 6.14GB |
| [Meta-Llama-3-8B-InitializedEmbeds.Q8_0.gguf](https://huggingface.co/RichardErkhov/chargoddard_-_Meta-Llama-3-8B-InitializedEmbeds-gguf/blob/main/Meta-Llama-3-8B-InitializedEmbeds.Q8_0.gguf) | Q8_0 | 7.95GB |
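As an example of how one of these files can be used, a single quantization can be fetched with `huggingface_hub` and loaded through the `llama-cpp-python` bindings. This is a minimal sketch only; the choice of Q4_K_M, the context size, and the prompt are illustrative, while the repo and file names come from the table above.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one quantized file from this repo (Q4_K_M is a common
# size/quality trade-off; any file name from the table works).
gguf_path = hf_hub_download(
    repo_id="RichardErkhov/chargoddard_-_Meta-Llama-3-8B-InitializedEmbeds-gguf",
    filename="Meta-Llama-3-8B-InitializedEmbeds.Q4_K_M.gguf",
)

# Load with llama.cpp bindings; this is a base (non-instruct) model,
# so plain text completion is the expected usage.
llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm("The capital of France is", max_tokens=16)
print(out["choices"][0]["text"])
```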
Original model description:
---
base_model:
- NousResearch/Meta-Llama-3-8B-Instruct
- NousResearch/Meta-Llama-3-8B
library_name: transformers
tags:
- mergekit
- merge
---
# Meta-Llama-3-8B-InitializedEmbeds
This is just [Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) with the embeddings for the special tokens copied from the Instruct version. It should behave pretty much identically to the base model, but with less glossolalia when it encounters `<|start_header_id|>` and the like.
I'm using this as a base to fine-tune. Having these embeddings initialized to reasonable values instead of left random should give a smoother start.
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
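For intuition, the operation is roughly equivalent to the following plain-`transformers` sketch, which copies the Instruct model's rows for the chat special tokens into the base model's embedding and LM-head matrices. This is illustrative only (the output directory name is made up); the actual model was produced with the mergekit configuration shown under Merge Details below.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "NousResearch/Meta-Llama-3-8B", torch_dtype=torch.bfloat16
)
inst = AutoModelForCausalLM.from_pretrained(
    "NousResearch/Meta-Llama-3-8B-Instruct", torch_dtype=torch.bfloat16
)
tok = AutoTokenizer.from_pretrained("NousResearch/Meta-Llama-3-8B-Instruct")

special_tokens = [
    "<|start_header_id|>", "<|end_header_id|>", "<|eot_id|>",
    "<|end_of_text|>", "<|begin_of_text|>",
]

with torch.no_grad():
    for token in special_tokens:
        idx = tok.convert_tokens_to_ids(token)
        # Overwrite both the input embedding row and the LM-head row
        # for this token with the Instruct model's trained values.
        base.get_input_embeddings().weight[idx] = inst.get_input_embeddings().weight[idx]
        base.get_output_embeddings().weight[idx] = inst.get_output_embeddings().weight[idx]

# Hypothetical output path, just for the sketch.
base.save_pretrained("./llama-3-8b-initialized-embeds")
tok.save_pretrained("./llama-3-8b-initialized-embeds")
```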
## Merge Details
### Merge Method
This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method.
### Models Merged
The following models were included in the merge:
* [NousResearch/Meta-Llama-3-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct)
* [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: linear
dtype: float32
out_dtype: bfloat16
models:
  - model: NousResearch/Meta-Llama-3-8B
    parameters:
      weight: 1.0
  - model: NousResearch/Meta-Llama-3-8B-Instruct
    parameters:
      weight: 0.0
tokenizer:
  source: NousResearch/Meta-Llama-3-8B-Instruct
  tokens:
    <|start_header_id|>:
      source: NousResearch/Meta-Llama-3-8B-Instruct
      force: true
    <|end_header_id|>:
      source: NousResearch/Meta-Llama-3-8B-Instruct
      force: true
    <|eot_id|>:
      source: NousResearch/Meta-Llama-3-8B-Instruct
      force: true
    <|end_of_text|>:
      source: NousResearch/Meta-Llama-3-8B-Instruct
      force: true
    <|begin_of_text|>:
      source: NousResearch/Meta-Llama-3-8B-Instruct
      force: true
```
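To reproduce a merge like this, the configuration can be saved to a file and passed to mergekit's command-line entry point, e.g. `mergekit-yaml config.yaml ./output-model-directory`; the exact flags available vary between mergekit versions.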