AuriAetherwiing committed
Commit 2a3a1b0
1 Parent(s): eba320b

Update README.md

Files changed (1)
README.md +63 -66
README.md CHANGED
@@ -1,6 +1,17 @@
  ---
  library_name: transformers
  license: apache-2.0
  base_model: Qwen/Qwen2.5-32B
  tags:
  - generated_from_trainer
@@ -9,8 +20,57 @@ model-index:
  results: []
  ---

- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->

  [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
  <details><summary>See axolotl config</summary>
@@ -368,67 +428,4 @@ weight_decay: 0.1
  # fsdp_mixed_precision: BF16 # Added
  ```

- </details><br>
-
- # EVA-Qwen2.5-32B-SFFT-v0.1
-
- This model is a fine-tuned version of [Qwen/Qwen2.5-32B](https://huggingface.co/Qwen/Qwen2.5-32B) on the None dataset.
- It achieves the following results on the evaluation set:
- - Loss: 0.9476
-
- ## Model description
-
- More information needed
-
- ## Intended uses & limitations
-
- More information needed
-
- ## Training and evaluation data
-
- More information needed
-
- ## Training procedure
-
- ### Training hyperparameters
-
- The following hyperparameters were used during training:
- - learning_rate: 5e-05
- - train_batch_size: 1
- - eval_batch_size: 1
- - seed: 42
- - distributed_type: multi-GPU
- - num_devices: 8
- - gradient_accumulation_steps: 8
- - total_train_batch_size: 64
- - total_eval_batch_size: 8
- - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- - lr_scheduler_type: cosine
- - lr_scheduler_warmup_steps: 20
- - num_epochs: 3
-
- ### Training results
-
- | Training Loss | Epoch  | Step | Validation Loss |
- |:-------------:|:------:|:----:|:---------------:|
- | 1.4144        | 0.0078 | 1    | 1.3757          |
- | 1.0796        | 0.2498 | 32   | 0.9875          |
- | 1.0367        | 0.4995 | 64   | 0.9437          |
- | 0.9919        | 0.7493 | 96   | 0.9212          |
- | 0.9305        | 0.9990 | 128  | 0.9097          |
- | 0.6963        | 1.2427 | 160  | 0.9228          |
- | 0.686         | 1.4922 | 192  | 0.9187          |
- | 0.6656        | 1.7417 | 224  | 0.9127          |
- | 0.6818        | 1.9912 | 256  | 0.9029          |
- | 0.467         | 2.2391 | 288  | 0.9575          |
- | 0.459         | 2.4879 | 320  | 0.9502          |
- | 0.4957        | 2.7366 | 352  | 0.9487          |
- | 0.493         | 2.9854 | 384  | 0.9476          |
-
-
- ### Framework versions
-
- - Transformers 4.45.1
- - Pytorch 2.4.0+cu121
- - Datasets 2.21.0
- - Tokenizers 0.20.2
 
  ---
  library_name: transformers
  license: apache-2.0
+ datasets:
+ - anthracite-org/kalo-opus-instruct-22k-no-refusal
+ - Nopm/Opus_WritingStruct
+ - Gryphe/Sonnet3.5-SlimOrcaDedupCleaned
+ - Gryphe/Sonnet3.5-Charcard-Roleplay
+ - Gryphe/ChatGPT-4o-Writing-Prompts
+ - Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned
+ - Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned
+ - nothingiisreal/Reddit-Dirty-And-WritingPrompts
+ - allura-org/Celeste-1.x-data-mixture
+ - cognitivecomputations/dolphin-2.9.3
  base_model: Qwen/Qwen2.5-32B
  tags:
  - generated_from_trainer

  results: []
  ---
 
+ # EVA Qwen2.5-32B v0.2
+
+ <p>
+ An RP/storywriting specialist model: a full-parameter finetune of Qwen2.5-32B on a mixture of synthetic and natural data.<br>
+ It uses the Celeste 70B 0.1 data mixture, greatly expanded to improve the versatility, creativity and "flavor" of the resulting model.<br>
+ </p>
+
+ <p>Dedicated to Nev.</p>
+
+ <p><b>Version notes for 0.2</b>: The whole dataset was reprocessed due to a severe mistake in the previously used pipeline, which had left the data poisoned with a lot of non-Unicode characters. The result is no more weird generation artifacts and better stability. Major kudos to Cahvay for his work on fixing this critical issue.</p>
+
+ <p>Prompt format is ChatML.</p><br>
+ <h3>Recommended sampler values:</h3>
+ <ul>
+ <li>Temperature: 1</li>
+ <li>Min-P: 0.05</li>
+ <li>Top-A: 0.2</li>
+ <li>Repetition Penalty: 1.03</li>
+ </ul>
+
+ <h3>Recommended SillyTavern presets (via CalamitousFelicitousness):</h3>
+
+ - [Context](https://huggingface.co/EVA-UNIT-01/EVA-Yi-1.5-9B-32K-V1/blob/main/%5BChatML%5D%20Roleplay-v1.9%20Context.json)
+ - [Instruct and System Prompt](https://huggingface.co/EVA-UNIT-01/EVA-Yi-1.5-9B-32K-V1/blob/main/%5BChatML%5D%20Roleplay-v1.9%20Instruct.json)
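For use outside SillyTavern, here is a minimal sketch (not part of the original card) of applying the ChatML format and the sampler values above with plain `transformers`. The repo id is an assumption, so adjust it to this repository's actual name; Top-A is omitted because `transformers` has no such sampler, and `min_p` requires a recent `transformers` release.

```python
# Hedged sketch: ChatML prompting plus the recommended sampler values via transformers.
# The repo id below is an assumption; replace it with this repository's actual id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2"  # assumed id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a creative writing partner."},
    {"role": "user", "content": "Write the opening scene of a rain-soaked noir story."},
]
# Qwen2.5 tokenizers ship a ChatML chat template, so this renders a ChatML prompt.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(
    input_ids,
    max_new_tokens=512,
    do_sample=True,
    temperature=1.0,         # Temperature: 1
    min_p=0.05,              # Min-P: 0.05
    repetition_penalty=1.03, # Repetition Penalty: 1.03; set Top-A in your backend if supported
)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```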
+
+ <h3>
+ Training data:
+ </h3>
+ <ul>
+ <li>Celeste 70B 0.1 data mixture minus Opus Instruct subset. See that model's <a href="https://huggingface.co/nothingiisreal/L3.1-70B-Celeste-V0.1-BF16">card</a> for details.</li>
+ <li>Kalomaze's Opus_Instruct_25k dataset, filtered for refusals.</li>
+ <li>A subset (1k rows) of ChatGPT-4o-WritingPrompts by Gryphe.</li>
+ <li>A subset (2k rows) of Sonnet3.5-Charcard-Roleplay by Gryphe.</li>
+ <li>Synthstruct and SynthRP datasets by Epiculous.</li>
+ <li>A subset from Dolphin-2.9.3, including a filtered version of not_samantha and a small subset of systemchat.</li>
+ </ul>
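The individual sources above (as tagged in the frontmatter) can be inspected directly from the Hub. A small hypothetical snippet follows; split and column names vary per dataset, so it only prints the first record of whichever split loads.

```python
# Hypothetical helper, not from the card: peek at one source of the data mixture.
from datasets import load_dataset

ds = load_dataset("Gryphe/ChatGPT-4o-Writing-Prompts")  # any id from the frontmatter list
first_split = next(iter(ds.values()))  # whichever split the repo defines
print(first_split[0])                  # column names differ per dataset
```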
+ <h3>
+ Training time and hardware:
+ </h3>
+ <ul><li>7 hours on 8xH100 SXM, provided by <a href="https://featherless.ai/">FeatherlessAI</a>.</li></ul><br>
+
+ <p>The model was created by Kearm, Auri and Cahvay.</p>
+ <h4>Special thanks:</h4><ul>
+ <li><b>to Cahvay for his work on investigating and reprocessing the corrupted dataset, removing the single biggest source of data poisoning.</b></li>
+ <li><b>to <a href="https://featherless.ai/">FeatherlessAI</a> for generously providing an 8xH100 SXM node for training this model</b></li>
+ <li>to Gryphe, Lemmy, Kalomaze, Nopm, Epiculous and CognitiveComputations for the data</li>
+ <li>and to Allura-org for support, feedback, beta-testing and quality control of EVA models.</li></ul>
 
  [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
  <details><summary>See axolotl config</summary>
 
  # fsdp_mixed_precision: BF16 # Added
  ```

+ </details><br>