---
base_model: microsoft/Phi-3-mini-4k-instruct
datasets:
- AlignmentLab-AI/alpaca-cot-collection
language:
- en
library_name: peft
license: apache-2.0
pipeline_tag: text-generation
---

# Xenith-3B
Xenith-3B is a fine-tuned language model based on microsoft/Phi-3-mini-4k-instruct. It was trained on the AlignmentLab-AI/alpaca-cot-collection dataset, which focuses on chain-of-thought reasoning and instruction following.

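Because Xenith-3B is published as a PEFT adapter (see `library_name: peft` above), it can be loaded on top of the base checkpoint with `transformers` and `peft`. Below is a minimal inference sketch; the repository id `XeroCodes/Xenith-3B` is an assumption, so substitute the actual repo id.

```python
# Minimal inference sketch. The adapter repo id "XeroCodes/Xenith-3B" is an
# assumption: replace it with the actual repository id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "microsoft/Phi-3-mini-4k-instruct"
adapter_id = "XeroCodes/Xenith-3B"  # hypothetical repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the fine-tuned adapter

# Phi-3 instruct models expect their chat template, so format the prompt with it.
messages = [{"role": "user", "content": "Explain step by step why 17 is prime."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```
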
# Model Overview
- Model Name: Xenith-3B
- Base Model: microsoft/Phi-3-mini-4k-instruct
- Fine-Tuned On: AlignmentLab-AI/alpaca-cot-collection
- Model Size: 3 billion parameters
- Architecture: Transformer-based LLM

# Training Details
- Objective: Fine-tune the base model to enhance its performance on tasks requiring complex reasoning and multi-step problem-solving.
- Training Duration: 10 epochs
- Batch Size: 8
- Learning Rate: 3e-5
- Optimizer: AdamW
- Hardware Used: 2x NVIDIA L4 GPUs

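For reference, a rough reconstruction of the fine-tuning setup is sketched below. The epochs, batch size, learning rate, and optimizer come from the list above; the LoRA settings, dataset column names, and prompt formatting are assumptions, not the actual training script.

```python
# Hedged training sketch: 10 epochs, batch size 8, lr 3e-5, and AdamW come
# from the model card; the LoRA config and data formatting are assumptions.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)
from peft import LoraConfig, get_peft_model

base_id = "microsoft/Phi-3-mini-4k-instruct"
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)

# Assumed LoRA settings; the card only says PEFT was used.
lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                  target_modules=["qkv_proj", "o_proj"],
                  task_type="CAUSAL_LM")
model = get_peft_model(model, lora)

ds = load_dataset("AlignmentLab-AI/alpaca-cot-collection", split="train")

def tokenize(example):
    # Assumed column names ("instruction"/"output"); adjust to the dataset schema.
    text = f"{example['instruction']}\n{example['output']}{tokenizer.eos_token}"
    return tokenizer(text, truncation=True, max_length=2048)

tokenized = ds.map(tokenize, remove_columns=ds.column_names)

args = TrainingArguments(
    output_dir="xenith-3b",
    num_train_epochs=10,            # from the card
    per_device_train_batch_size=8,  # from the card
    learning_rate=3e-5,             # from the card
    optim="adamw_torch",            # AdamW, from the card
    bf16=True,
)

Trainer(model=model, args=args, train_dataset=tokenized,
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False)).train()
```
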
# Performance
Xenith-3B excels in tasks that require:

- Chain-of-thought reasoning
- Instruction following
- Contextual understanding
- Complex problem-solving

The model has shown significant improvements in these areas compared to the base model.