MaziyarPanahi committed
Commit • 1252ee4
1 Parent(s): 8d93013
Update README.md
README.md CHANGED
@@ -119,6 +119,32 @@ This model is a fine-tuned version of [NousResearch/Nous-Hermes-2-Mixtral-8x7B-S
It achieves the following results on the evaluation set:
- Loss: 1.0276

## How to use

**PEFT**

```python
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM

# Load the adapter configuration, load the base model, then attach the adapter weights
config = PeftConfig.from_pretrained("MaziyarPanahi/Nous-Hermes-2-Mixtral-8x7B-SFT-Alpaca")
model = AutoModelForCausalLM.from_pretrained("NousResearch/Nous-Hermes-2-Mixtral-8x7B-SFT")
model = PeftModel.from_pretrained(model, "MaziyarPanahi/Nous-Hermes-2-Mixtral-8x7B-SFT-Alpaca")
```
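Once the adapter is attached, text can be generated with the usual `generate` API. The snippet below is only a minimal sketch: it assumes the tokenizer is available from the adapter repository and that an Alpaca-style instruction prompt suits this fine-tune, neither of which is stated above.

```python
from transformers import AutoTokenizer

# Assumption: the adapter repo ships a tokenizer; otherwise load it from the base model.
tokenizer = AutoTokenizer.from_pretrained("MaziyarPanahi/Nous-Hermes-2-Mixtral-8x7B-SFT-Alpaca")

# Assumption: Alpaca-style prompt formatting, with a placeholder instruction.
prompt = "### Instruction:\nExplain what a LoRA adapter is.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt")

outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```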

**Transformers**

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="MaziyarPanahi/Nous-Hermes-2-Mixtral-8x7B-SFT-Alpaca")

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("MaziyarPanahi/Nous-Hermes-2-Mixtral-8x7B-SFT-Alpaca")
model = AutoModelForCausalLM.from_pretrained("MaziyarPanahi/Nous-Hermes-2-Mixtral-8x7B-SFT-Alpaca")
```
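As a rough usage sketch, the pipeline object can then be called directly; the prompt and generation settings below are placeholders rather than values recommended by this model card.

```python
# Hypothetical prompt and generation settings, shown only to illustrate the call.
result = pipe(
    "Write a short note on parameter-efficient fine-tuning.",
    max_new_tokens=128,
    do_sample=True,
    temperature=0.7,
)
print(result[0]["generated_text"])
```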

## Model description

More information needed