danielpark committed
Commit 8ff94ab • 1 Parent(s): 5610aae
doc: update model cards
README.md CHANGED
@@ -7,11 +7,14 @@ tags:
 - moe
 ---

-# Jamba-v0.1-9B

-
-
-
+
+### Required Weights for Follow-up Research
+
+The original model is **AI21 Labs' Jamba-v0.1**, which requires an **A100 80GB GPU**; unfortunately, this GPU was not available via Google Colab or other cloud computing services. Attempts were therefore made to perform **MoE (Mixture of Experts) splitting**, using the following resources as a basis:
+
+- **Base creation**: Referenced for the subsequent tasks.
+- **MoE Layer Separation**: Consult [this script](https://github.com/TechxGenus/Jamba-utils/blob/main/dense_downcycling.py) from [TechxGenus/Jamba-v0.1-9B](https://huggingface.co/TechxGenus/Jamba-v0.1-9B).

 ---

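For context on the MoE-splitting step described in the added section, the sketch below illustrates the general idea of dense downcycling at the checkpoint level: keep the weights of a single expert per sparse-MoE layer and drop the router, so the result can be loaded as a much smaller dense model. This is only a sketch, not the referenced dense_downcycling.py; the parameter-name patterns (`.experts.`, `.router.`), the choice of expert 0, and the file names are assumptions, and the model config (e.g. the number of experts) would also need to be updated before the converted weights can be loaded.

```python
import re
from safetensors.torch import load_file, save_file

KEEP_EXPERT = 0  # index of the single expert whose weights are kept (assumption)


def downcycle(state_dict):
    """Keep one expert per sparse-MoE layer and drop router weights."""
    dense = {}
    for name, tensor in state_dict.items():
        if ".router." in name:
            # a dense model has no routing weights
            continue
        match = re.search(r"\.experts\.(\d+)\.", name)
        if match is None:
            # non-MoE parameter (attention, Mamba, norms, embeddings): copy as-is
            dense[name] = tensor
        elif int(match.group(1)) == KEEP_EXPERT:
            # collapse "experts.<k>." so the weight looks like a plain MLP parameter
            dense[name.replace(f".experts.{KEEP_EXPERT}.", ".")] = tensor
    return dense


if __name__ == "__main__":
    # placeholder shard names for illustration only
    shard_in = "input.safetensors"
    shard_out = "output.safetensors"
    save_file(downcycle(load_file(shard_in)), shard_out)
```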