Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)


btulu - GGUF
- Model creator: https://huggingface.co/vwxyzjn/
- Original model: https://huggingface.co/vwxyzjn/btulu/

| Name | Quant method | Size |
| ---- | ---- | ---- |
| [btulu.Q2_K.gguf](https://huggingface.co/RichardErkhov/vwxyzjn_-_btulu-gguf/blob/main/btulu.Q2_K.gguf) | Q2_K | 2.96GB |
| [btulu.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/vwxyzjn_-_btulu-gguf/blob/main/btulu.IQ3_XS.gguf) | IQ3_XS | 3.28GB |
| [btulu.IQ3_S.gguf](https://huggingface.co/RichardErkhov/vwxyzjn_-_btulu-gguf/blob/main/btulu.IQ3_S.gguf) | IQ3_S | 3.43GB |
| [btulu.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/vwxyzjn_-_btulu-gguf/blob/main/btulu.Q3_K_S.gguf) | Q3_K_S | 3.41GB |
| [btulu.IQ3_M.gguf](https://huggingface.co/RichardErkhov/vwxyzjn_-_btulu-gguf/blob/main/btulu.IQ3_M.gguf) | IQ3_M | 3.52GB |
| [btulu.Q3_K.gguf](https://huggingface.co/RichardErkhov/vwxyzjn_-_btulu-gguf/blob/main/btulu.Q3_K.gguf) | Q3_K | 3.74GB |
| [btulu.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/vwxyzjn_-_btulu-gguf/blob/main/btulu.Q3_K_M.gguf) | Q3_K_M | 3.74GB |
| [btulu.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/vwxyzjn_-_btulu-gguf/blob/main/btulu.Q3_K_L.gguf) | Q3_K_L | 4.03GB |
| [btulu.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/vwxyzjn_-_btulu-gguf/blob/main/btulu.IQ4_XS.gguf) | IQ4_XS | 4.18GB |
| [btulu.Q4_0.gguf](https://huggingface.co/RichardErkhov/vwxyzjn_-_btulu-gguf/blob/main/btulu.Q4_0.gguf) | Q4_0 | 4.34GB |
| [btulu.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/vwxyzjn_-_btulu-gguf/blob/main/btulu.IQ4_NL.gguf) | IQ4_NL | 4.38GB |
| [btulu.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/vwxyzjn_-_btulu-gguf/blob/main/btulu.Q4_K_S.gguf) | Q4_K_S | 4.37GB |
| [btulu.Q4_K.gguf](https://huggingface.co/RichardErkhov/vwxyzjn_-_btulu-gguf/blob/main/btulu.Q4_K.gguf) | Q4_K | 4.58GB |
| [btulu.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/vwxyzjn_-_btulu-gguf/blob/main/btulu.Q4_K_M.gguf) | Q4_K_M | 4.58GB |
| [btulu.Q4_1.gguf](https://huggingface.co/RichardErkhov/vwxyzjn_-_btulu-gguf/blob/main/btulu.Q4_1.gguf) | Q4_1 | 4.78GB |
| [btulu.Q5_0.gguf](https://huggingface.co/RichardErkhov/vwxyzjn_-_btulu-gguf/blob/main/btulu.Q5_0.gguf) | Q5_0 | 5.21GB |
| [btulu.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/vwxyzjn_-_btulu-gguf/blob/main/btulu.Q5_K_S.gguf) | Q5_K_S | 5.21GB |
| [btulu.Q5_K.gguf](https://huggingface.co/RichardErkhov/vwxyzjn_-_btulu-gguf/blob/main/btulu.Q5_K.gguf) | Q5_K | 5.34GB |
| [btulu.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/vwxyzjn_-_btulu-gguf/blob/main/btulu.Q5_K_M.gguf) | Q5_K_M | 5.34GB |
| [btulu.Q5_1.gguf](https://huggingface.co/RichardErkhov/vwxyzjn_-_btulu-gguf/blob/main/btulu.Q5_1.gguf) | Q5_1 | 5.65GB |
| [btulu.Q6_K.gguf](https://huggingface.co/RichardErkhov/vwxyzjn_-_btulu-gguf/blob/main/btulu.Q6_K.gguf) | Q6_K | 6.14GB |
| [btulu.Q8_0.gguf](https://huggingface.co/RichardErkhov/vwxyzjn_-_btulu-gguf/blob/main/btulu.Q8_0.gguf) | Q8_0 | 7.95GB |
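
All files above are standard GGUF quantizations, so any llama.cpp-based runtime can load them. As a minimal, illustrative sketch (the repo id is this page's, but the Q4_K_M pick and the generation settings are arbitrary examples), one way to fetch and run a file with `huggingface_hub` and `llama-cpp-python`:

```python
# Sketch: download one quant from this repo and run it locally.
# Assumes: pip install huggingface_hub llama-cpp-python
# Q4_K_M is an arbitrary example; any filename from the table works.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="RichardErkhov/vwxyzjn_-_btulu-gguf",
    filename="btulu.Q4_K_M.gguf",
)

llm = Llama(model_path=model_path, n_ctx=4096)
out = llm("Briefly explain what GGUF quantization does.", max_tokens=128)
print(out["choices"][0]["text"])
```

As a rule of thumb, the lower-bit quants (Q2_K, IQ3_*) trade quality for memory, while Q8_0 stays closest to the original weights at the largest size.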


Original model description:
---
base_model: []
library_name: transformers
tags:
- mergekit
- merge

---

# Untitled Model (1)

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method.
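
Linear merging is an element-wise weighted average of the source models' parameters (here weights 0.4 and 0.6, per the config below). A rough sketch of the idea (not mergekit's actual implementation) over PyTorch state dicts:

```python
# Illustrative sketch of a linear merge (not mergekit's code):
# element-wise weighted average over matching parameter tensors.
import torch

def linear_merge(state_dicts, weights):
    assert len(state_dicts) == len(weights)
    merged = {}
    for name in state_dicts[0]:
        # Accumulate in float32, then cast to bfloat16 to match the
        # `dtype: bfloat16` setting used in the config below.
        acc = sum(w * sd[name].float() for w, sd in zip(weights, state_dicts))
        merged[name] = acc.to(torch.bfloat16)
    return merged

# e.g.: merged = linear_merge([sd_a, sd_b], [0.4, 0.6])
```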

### Models Merged

The following models were included in the merge:
* ./llama-3-8b-tulu-v2-numina
* ./llama_3_8b-tulu_v3_mix_preview_4096_OLMoE

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: ./llama-3-8b-tulu-v2-numina
    parameters:
      weight: 0.4
  - model: ./llama_3_8b-tulu_v3_mix_preview_4096_OLMoE
    parameters:
      weight: 0.6
merge_method: linear
dtype: bfloat16
```
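
To reproduce a merge like this, mergekit provides a `mergekit-yaml` entry point that takes a config file and an output directory (exact flags vary by version). A hedged sketch of running the merge and loading the result, where `merge-config.yaml` and `./btulu-merged` are hypothetical local paths:

```python
# Sketch: run the YAML config above through mergekit's CLI, then load
# the merged model with transformers. Paths here are hypothetical.
import subprocess

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

subprocess.run(["mergekit-yaml", "merge-config.yaml", "./btulu-merged"], check=True)

tokenizer = AutoTokenizer.from_pretrained("./btulu-merged")
model = AutoModelForCausalLM.from_pretrained(
    "./btulu-merged", torch_dtype=torch.bfloat16
)
```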