artificialguybr committed
Commit 3235183
1 Parent(s): 4929107

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +80 -0
README.md ADDED
---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-0.5B
language:
- en
pipeline_tag: text-generation
tags:
- generated_from_trainer
- instruction-tuning
model-index:
- name: outputs/qwen2.5-0.5b-ft-synthia15-i
  results: []
---

[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)

# Qwen2.5-0.5B Fine-tuned on Synthia v1.5-I

This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B](https://huggingface.co/Qwen/Qwen2.5-0.5B) on the Synthia v1.5-I dataset, which contains over 20.7k instruction-following examples.

## Model Description

Qwen2.5-0.5B is part of the latest Qwen2.5 series of large language models. The base model brings significant improvements in:
- Instruction following and generating long texts
- Understanding structured data and generating structured outputs
- Support for over 29 languages
- Long context support up to 32,768 tokens

This fine-tuned version enhances the base model's instruction-following capabilities through training on the Synthia v1.5-I dataset.

### Model Architecture

- Type: Causal Language Model
- Parameters: 0.49B (0.36B non-embedding)
- Layers: 24
- Attention Heads: 14 for Q and 2 for KV (GQA)
- Context Length: 32,768 tokens
- Training Framework: Transformers 4.45.0.dev0

## Intended Uses & Limitations

This model is intended for:
- Instruction following and task completion
- Text generation and completion
- Conversational AI applications

The model inherits the multilingual capabilities and long context support of the base Qwen2.5-0.5B model, while being specifically tuned for instruction following.
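
A minimal generation sketch with Hugging Face Transformers is shown below. The repository id is a placeholder and the prompt is purely illustrative, since this card does not document the exact instruction template used by the Synthia v1.5-I data.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id; substitute the actual Hub path or a local checkpoint.
model_id = "artificialguybr/qwen2.5-0.5b-ft-synthia15-i"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Plain-text prompt; adjust the prompt style to match your use case and the
# instruction format of the fine-tuning data.
prompt = "Explain the difference between a list and a tuple in Python."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```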

## Training Procedure

### Training Data

The model was fine-tuned on the Synthia v1.5-I dataset containing 20.7k instruction-following examples.

### Training Hyperparameters

The following hyperparameters were used during training:
- Learning rate: 1e-05
- Train batch size: 5
- Eval batch size: 5
- Seed: 42
- Gradient accumulation steps: 8
- Total train batch size: 40
- Optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- LR scheduler type: cosine
- LR scheduler warmup steps: 100
- Number of epochs: 3
- Sequence length: 4096
- Sample packing: enabled
- Pad to sequence length: enabled
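
Training was run with Axolotl rather than the plain `Trainer`, but as a rough reference the reported hyperparameters map approximately onto the following `TrainingArguments`; the output directory is taken from the model-index name above and the rest mirrors the list, so treat this as an illustrative sketch rather than the actual configuration.

```python
from transformers import TrainingArguments

# Approximate mapping of the reported hyperparameters; the real run used an
# Axolotl config, so this is an illustrative sketch, not the actual setup.
training_args = TrainingArguments(
    output_dir="outputs/qwen2.5-0.5b-ft-synthia15-i",
    learning_rate=1e-5,
    per_device_train_batch_size=5,
    per_device_eval_batch_size=5,
    gradient_accumulation_steps=8,  # 5 x 8 = effective train batch size of 40
    num_train_epochs=3,
    lr_scheduler_type="cosine",
    warmup_steps=100,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```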

## Framework Versions

- Transformers 4.45.0.dev0
- PyTorch 2.3.1+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1

<details><summary>See axolotl config</summary>

axolotl version: `0.4.1`