danielpark committed
Commit af90776 · Parent(s): 388fbd9
Update README.md
README.md CHANGED
@@ -13,7 +13,8 @@ tags:
 
 Required Weights for Follow-up Research
 
-The original model is **AI21lab's Jamba-v0.1**, which requires an **A100 80GB GPU**. Unfortunately, this was not available via Google Colab or cloud computing services.
+The original model is **[AI21lab's Jamba-v0.1](https://huggingface.co/ai21labs/Jamba-v0.1)**, which requires an **A100 80GB GPU**. Unfortunately, this was rarely available via Google Colab or cloud computing services. Thus, attempts were made to perform **MoE (Mixture of Experts) splitting**, using the following resources as a basis:
 - **Original Model:** [Jamba-v0.1](https://huggingface.co/ai21labs/Jamba-v0.1)
 - **MoE Layer Separation**: Consult [this script](https://github.com/TechxGenus/Jamba-utils/blob/main/dense_downcycling.py) and use [TechxGenus/Jamba-v0.1-9B](https://huggingface.co/TechxGenus/Jamba-v0.1-9B).
 
+Check [ai21labs/Jamba-tiny-random](https://huggingface.co/ai21labs/Jamba-tiny-random), which has 128M parameters (instead of 52B), is initialized with random weights, and did not undergo any training.
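For orientation, the MoE splitting referenced in the diff above follows the dense downcycling idea: keep a single expert's feed-forward weights in each MoE layer and drop the router, so the checkpoint shrinks enough to fit on common GPUs. The sketch below only illustrates that idea; the key patterns (`.experts.<i>.`, `.router.`) are hypothetical placeholders, and the real mapping for Jamba is implemented in the linked `dense_downcycling.py` script.

```python
# HYPOTHETICAL illustration only: the parameter names below are invented to show
# the shape of the idea, not Jamba's real state-dict layout. The actual key
# mapping is implemented in dense_downcycling.py from TechxGenus/Jamba-utils.
import re

def downcycle_state_dict(moe_state_dict: dict, expert_index: int = 0) -> dict:
    """Collapse MoE layers to a single dense FFN by keeping one expert's weights
    and dropping the router, so the resulting checkpoint is much smaller."""
    dense_sd = {}
    keep = re.compile(rf"(.*)\.experts\.{expert_index}\.(.*)")  # hypothetical pattern
    for name, tensor in moe_state_dict.items():
        if ".router." in name:        # drop gating/router parameters (hypothetical name)
            continue
        match = keep.match(name)
        if match:                     # rename the kept expert to a plain FFN slot
            dense_sd[f"{match.group(1)}.{match.group(2)}"] = tensor
        elif ".experts." in name:     # discard all other experts
            continue
        else:                         # attention, Mamba, norm, and embedding weights pass through
            dense_sd[name] = tensor
    return dense_sd
```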
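As a rough usage sketch (assuming a `transformers` release with native Jamba support, roughly 4.40 or newer, and that the repos above ship standard config and tokenizer files), the 128M random-weight model can be used to verify a loading and generation pipeline before moving to the larger dense checkpoint:

```python
# A minimal sketch, assuming transformers >= 4.40 with native Jamba support.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Start with the 128M random-weight model to verify the pipeline end to end;
# its outputs are meaningless, but it loads and generates like the real thing.
repo_id = "ai21labs/Jamba-tiny-random"
# repo_id = "TechxGenus/Jamba-v0.1-9B"   # dense-downcycled checkpoint, needs far more memory
# repo_id = "ai21labs/Jamba-v0.1"        # full MoE, roughly an A100 80GB

device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id).to(device)

inputs = tokenizer("Jamba is a hybrid Mamba-Transformer model that", return_tensors="pt").to(device)
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```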