should probably proofread and complete it, then remove this comment. -->

# Mixtral_8x7b_WuKurtz

This model was fine-tuned on an 80k-example nephrology dataset that we curated and injected into Mixtral 8x7B Instruct. It is a fine-tuned version of [mistralai/Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) on the generator dataset.
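For background on the base model: Mixtral 8x7B is a sparse mixture-of-experts model in which, for every token, a router selects the top 2 of 8 experts and mixes their outputs. The routing idea can be sketched as follows (a toy NumPy illustration with made-up dimensions and random weights, not the actual Mixtral implementation):

```python
import numpy as np

def top2_moe(x, gate_w, experts):
    """Route one token's hidden state x to the top-2 of len(experts) experts.

    x: (d,) hidden state; gate_w: (d, n_experts) router weights;
    experts: list of callables, each mapping (d,) -> (d,).
    """
    logits = x @ gate_w                      # router score for each expert
    top2 = np.argsort(logits)[-2:]           # indices of the two best experts
    weights = np.exp(logits[top2])
    weights /= weights.sum()                 # softmax over the selected pair only
    # Combine just the two selected experts' outputs, weighted by the gate.
    return sum(w * experts[i](x) for w, i in zip(weights, top2))

# Toy demo: 4 experts on a 3-dimensional hidden state.
rng = np.random.default_rng(0)
d, n = 3, 4
gate_w = rng.standard_normal((d, n))
experts = [(lambda W: (lambda x: W @ x))(rng.standard_normal((d, d)))
           for _ in range(n)]
y = top2_moe(rng.standard_normal(d), gate_w, experts)
print(y.shape)  # (3,)
```

Because only 2 of the 8 experts run per token, inference touches a fraction of the total parameters, which is what makes fine-tuning and serving a 8x7B model tractable.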

## Model description

Mixtral 8x7b WuKurtz was created by Sean Wu, Michael Koo, Andy Black, Lesley Blum, Fabien Scalzo, and Ira Kurtz at Pepperdine and UCLA.

An arXiv paper is coming soon!

## Intended uses & limitations

More information needed

## Training and evaluation data

The training data will be released soon!

## Training procedure

The model was trained with parameter-efficient fine-tuning (PEFT).
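The card does not name the specific PEFT method. Assuming a LoRA-style adapter (a common PEFT choice at this scale — an assumption, not something the authors confirm here), the core idea is to freeze the pretrained weight `W` and learn only a low-rank update `B @ A`, scaled by `alpha / r`:

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=16):
    """Frozen weight W plus a learned low-rank update (alpha/r) * B @ A."""
    r = A.shape[0]                     # LoRA rank (assumed hyperparameter)
    return W @ x + (alpha / r) * (B @ (A @ x))

rng = np.random.default_rng(0)
d_out, d_in, r = 8, 16, 4
W = rng.standard_normal((d_out, d_in))   # frozen pretrained weight
A = rng.standard_normal((r, d_in))       # trainable down-projection
B = np.zeros((d_out, r))                 # trainable up-projection, zero-initialized

x = rng.standard_normal(d_in)
# With B initialized to zero, the adapter starts as an exact no-op:
assert np.allclose(lora_forward(x, W, A, B), W @ x)
```

Because `B` starts at zero, training begins exactly at the pretrained model, and only the small `A`/`B` matrices receive gradients — a tiny fraction of Mixtral's full parameter count.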

### Training hyperparameters

The following hyperparameters were used during training: