Update README.md
results: []
---

![image/png](https://cdn-uploads.huggingface.co/production/uploads/658a46cbfb9c2bdfae75b3a6/9ZBUlmzDCnNmQEdUUbyEL.png)

This is the 10th in a series of models designed to replicate the prose quality of the Claude 3 models, specifically Sonnet and Opus.

This model is fine-tuned on top of [google/gemma-2-9b](https://huggingface.co/google/gemma-2-9b).

## Prompting

The model has been instruct-tuned with the customgemma2 formatting. A typical input looks like this:

```py
"""<start_of_turn>system
system prompt<end_of_turn>
<start_of_turn>user
Hi there!<end_of_turn>
<start_of_turn>model
Nice to meet you!<end_of_turn>
<start_of_turn>user
Can I ask a question?<end_of_turn>
<start_of_turn>model
"""
```
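
If you want to drive this format directly with `transformers`, a minimal sketch looks like the following. The repo id, system prompt, and sampling settings are placeholders, not official recommendations:

```py
# Minimal inference sketch for the customgemma2 format shown above.
# The repo id below is a placeholder; substitute this model's actual Hub id.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "anthracite-org/magnum-v2-9b"  # placeholder id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

# End the prompt with an open model turn so generation continues as the model.
prompt = (
    "<start_of_turn>system\n"
    "system prompt<end_of_turn>\n"
    "<start_of_turn>user\n"
    "Hi there!<end_of_turn>\n"
    "<start_of_turn>model\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```

Generation can be stopped at `<end_of_turn>` if it is not already configured as an EOS token for this checkpoint.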

## SillyTavern templates

WIP

## Axolotl config

<details><summary>See axolotl config</summary>

```yaml
base_model: google/gemma-2-9b
model_type: AutoModelForCausalLM
# ...
fsdp_config:
special_tokens:
```

</details><br>

## Credits

We'd like to thank Recursal / Featherless for sponsoring the compute for this training run. Featherless has hosted our Magnum models since the first 72B, giving thousands of people access to our models and helping us grow.

We would also like to thank all members of Anthracite who made this finetune possible.

The datasets used for this finetune:

- [anthracite-org/stheno-filtered-v1.1](https://huggingface.co/datasets/anthracite-org/stheno-filtered-v1.1)
- [anthracite-org/kalo-opus-instruct-22k-no-refusal](https://huggingface.co/datasets/anthracite-org/kalo-opus-instruct-22k-no-refusal)
- [anthracite-org/nopm_claude_writing_fixed](https://huggingface.co/datasets/anthracite-org/nopm_claude_writing_fixed)
- [Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned](https://huggingface.co/datasets/Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned)
- [Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned](https://huggingface.co/datasets/Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned)

## Training

The training was done for 2 epochs. We used 8x [H100](https://www.nvidia.com/en-us/data-center/h100/) GPUs graciously provided by [Recursal AI](https://recursal.ai/) / [Featherless AI](https://featherless.ai/) for the full-parameter fine-tuning of the model.
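
For reference, the auto-generated trainer card for this run recorded the key hyperparameters below; the effective batch size of 64 comes from the per-device batch, GPU count, and gradient accumulation:

```py
# Hyperparameters recorded by the trainer for this run.
hyperparameters = {
    "learning_rate": 6e-06,
    "micro_batch_size": 1,                 # per-device train batch size
    "num_gpus": 8,
    "gradient_accumulation_steps": 8,
    "total_train_batch_size": 1 * 8 * 8,   # = 64
    "optimizer": "Adam, betas=(0.9, 0.999), epsilon=1e-08",
    "lr_scheduler": "cosine",
    "warmup_steps": 50,
    "num_epochs": 2,
    "seed": 42,
}
```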

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)

## Safety

...