Crystalcareai committed f8c7261 (parent: d93dd30)

Upload howto.md
Files changed (1): howto.md (added, +53 lines)

# GemMoE: Sharing Tools and Improved Base Models

I'm excited to share the tools I used to create GemMoE and to release improved base models for the community to explore and build upon.

## Updates to GemMoE-Beta-1

GemMoE-Beta-1 will continue to serve as the repository for the `modeling_files` required to run the Mixture of Experts (MoE) models. However, I will be removing the PyTorch files from this repository.

## New Models

I'm introducing two new models:

1. **Crystalcareai/GemMoE-Base-Hidden**
   - This is a new MoE created using an improved method that I will explain below.
   - It utilizes a hidden gate and shows strong potential.
   - The model has not been altered and requires finetuning to reach its full potential.
   - If you're looking to achieve great performance with relatively minimal training, this is an excellent starting point.

2. **Crystalcareai/GemMoE-Base-Random**
   - This model was created using the same merge method as GemMoE-Base-Hidden, but with a RANDOM gate.
   - It selects the experts randomly during the merging process.
   - With finetuning, the model learns to choose the appropriate experts naturally, potentially leading to better results than GemMoE-Base-Hidden.
   - This method offers an intriguing middle ground between the clown-car and Mixtral-style approaches.

The new merge method and modeling files also reduce VRAM usage, making the models easier to finetune.

## Training Experiences and Challenges

I have successfully trained the models on a single A100 using QLoRA, although it required careful monitoring and posed some difficulties; there currently appears to be an issue with QLoRA and GemMoE. I observed better VRAM usage when finetuning with DoRA (no quantization) across 4 A6000 cards using DeepSpeed ZeRO-3.

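If you want a concrete starting point, here is a rough sketch of how a multi-GPU DeepSpeed ZeRO-3 run might be launched with a trainer such as axolotl. This is an illustration only, not the exact setup described above; `gemmoe-dora.yml` is a placeholder config you would write yourself (base model pointing at one of the GemMoE bases, a LoRA adapter with DoRA enabled, no quantization).

```bash
# Illustrative sketch only -- not the exact setup used above.
# gemmoe-dora.yml is a placeholder training config you would create yourself.
accelerate launch -m axolotl.cli.train gemmoe-dora.yml --deepspeed deepspeed_configs/zero3.json
```
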
## Creating Your Own Merges

You can create your own merges using my modified branch of mergekit:

```bash
git clone -b gemmoe https://github.com/Crystalcareai/mergekit.git
```

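After cloning, you'll need the package installed in your environment. Assuming the branch keeps mergekit's usual packaging, an editable install from the checkout should work:

```bash
cd mergekit
pip install -e .
```
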
To create an exact replica of Crystalcareai/GemMoE-Base-Hidden, use the following command:

```bash
mergekit-moe examples/gemmoe.yml ./merged --cuda --lazy-unpickle --allow-crimes
```

Feel free to modify the `examples/gemmoe.yml` file to customize the merge according to your preferences.

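If you'd rather write a config from scratch, here is a minimal sketch of what a config of this shape could look like. It assumes the branch follows the standard mergekit-moe YAML schema; the model names and prompts are placeholders, not the contents of the actual `examples/gemmoe.yml`.

```bash
# Hypothetical sketch only -- see examples/gemmoe.yml in the branch for the real settings.
# Assumes the standard mergekit-moe YAML schema; model names and prompts are placeholders.
cat > my-gemmoe.yml <<'EOF'
base_model: google/gemma-7b            # placeholder base model
gate_mode: hidden                      # "hidden" as in GemMoE-Base-Hidden; "random" gives the random-gate variant
dtype: bfloat16
experts:
  - source_model: google/gemma-7b-it   # placeholder expert
    positive_prompts:
      - "chat and instruction following"
  - source_model: google/gemma-7b      # placeholder expert
    positive_prompts:
      - "reasoning, math, and code"
EOF

mergekit-moe my-gemmoe.yml ./merged --cuda --lazy-unpickle --allow-crimes
```

With `gate_mode: random`, the router weights are initialized randomly rather than derived from the prompts, which is presumably how GemMoE-Base-Random was produced.
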
Alternatively, you can use my modified LazyMergekit notebook on Colab: [Link to Colab Notebook](https://colab.research.google.com/drive/1WWxCE4NYvJNZkjFhkL79cf-dRc3xTpGn?usp=drive_link)

## Let's Collaborate!

I'm thrilled to see what we can create together using these tools and improved base models. Let's push the boundaries of what's possible with GemMoE and explore new possibilities in AI and machine learning.

Happy experimenting and building!